Hi, well I'm also not the auto spellcheck guy. I just need something
that saves the state of the session. No auto bla bla. It should save
when I click 'save session', presumably just causing all the apps to do
their internal save operation.
> +1 for Calf Monosynth and WhySynth. They, in addition to AMS and PHASEX, are the synths I've used most.
>
> Zyn is kind of old and doesn't do RT; the new thing is Yoshimi, and I dunno if it supports LASH or ladish, but I'd guess both.
>
> For the record, I *HATE* session management and I don't run LASH at all when I can avoid it (IIRC, there's some synth that I use or used which requires LASH, so I occasionally have to start it up).
>
> I generally can't stand technologies that try to be "smart" and do things I don't explicitly instruct them to do. Frustrates the hell out of me.
>
> FWIW, I am also the kind of guy who turns off autocomplete and spelling checkers whenever I can.
>
> -ken
All:
Based on some of the recent discussions on LAD and LAU, I
thought it prudent to forward this message from the
Jack-Devel list.
If you wish to continue the discussion on the Jack
session management API, please do it on the Jack-Devel list.
Scroll down to the very bottom for a URL to the list.
-gabriel
---------- Forwarded message ----------
Date: Thu, 4 Mar 2010 18:16:39 -0500
From: Paul Davis <paul(a)linuxaudiosystems.com>
To: jack-devel(a)lists.jackaudio.org
Subject: Re: [Jack-Devel] session management API proposal, take 3
[ ... chatter chatter chatter ... ]
i'm on the verge of approving the API proposed by Torben for inclusion
as part of the JACK API. I believe that it accomplishes the goal of
leveraging an existing IPC channel (via libjack), combined with a
minimalistic set of changes to the server and easy-to-implement
support in clients, to provide the basis for session management at
different levels of sophistication, depending on a user's needs.
Torben's example showed the generation of a shell script that will
restart the session, and this is probably the "bottom line" in terms
of simple session management.
There are still some details (below) but more importantly, I would
like to hear from people who have concrete objections, particularly of
the following form:
* the proposal does not address some functionality that it should address
* the proposal will make later, more extensive session management
harder, more complex or impossible
* the proposal is nonsensical because ...
As for details...
I agree with the suggestion that the initial set of session events
should include "quit".
Nedko has also raised some objections to the use of a UUID, and has
proposed a scheme based on PIDs. I personally do not understand how
this can possibly work if JACK is to retain the ability to support
multiple clients per process. Any scheme designed to use PIDs but
still allow this seems to me to have to turn the PID into a UUID by
combining it with some other information (eg. a client name). This
doesn't seem to be semantically different from starting with a
declaration that we use UUIDs to identify clients - how the UUIDs are
constructed is an implementation detail.
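Paul's point can be sketched in a few lines (the function name here is mine, purely illustrative; nothing below is JACK API): once several clients may share one PID, keeping identifiers unique means folding in something like the client name, and the composite is a UUID in all but name.

```c
#include <stdio.h>

/* Illustrative only (not JACK API): a PID alone cannot distinguish two
 * clients living in the same process, so a PID-based scheme has to
 * fold in extra information such as the client name.  The resulting
 * string is then effectively a UUID, however it is constructed. */
static void make_client_id(char *buf, size_t len,
                           long pid, const char *client_name)
{
    snprintf(buf, len, "%ld/%s", pid, client_name);
}
```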
Finally, the semantics of the return codes from a session event need
to be pinned down better, since there are circumstances where a client
could *not* save its state without that being an error (e.g. ardour
running with a session on read-only media). For this, I currently
propose:
success == client believes that it will be able to restore its
state to the current state upon a reload
fail == client believes that it will not (or may not) be able to
restore its state to the current state upon a reload
similar semantics will be needed for each other defined session event.
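A minimal sketch of those semantics (the enum and helper names are mine, not part of any accepted API): the reply encodes the client's belief about restorability, so a save that could not happen, e.g. on read-only media, maps to "fail" without being an I/O error as such.

```c
/* Sketch of the proposed reply semantics; names are illustrative,
 * not JACK API.  The reply expresses whether the client believes it
 * can restore this state on reload, not merely whether a write call
 * succeeded. */
typedef enum {
    SESSION_SAVE_SUCCESS, /* client believes it can restore this state */
    SESSION_SAVE_FAIL     /* client believes it cannot (or may not)    */
} session_save_reply;

/* e.g. ardour with a session on read-only media reports FAIL even
 * though nothing "went wrong" in the usual error sense */
static session_save_reply reply_for(int media_writable, int state_written)
{
    return (media_writable && state_written)
        ? SESSION_SAVE_SUCCESS : SESSION_SAVE_FAIL;
}
```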
_______________________________________________
Jack-Devel mailing list
Jack-Devel(a)lists.jackaudio.org
http://lists.jackaudio.org/listinfo.cgi/jack-devel-jackaudio.org
Hi, this is part of a previous mail, but with a catchier Subject.
I think LASH should be integrated into Jack, to make it mandatory for
Linux audio apps. The missing LASH support is one of the main issues
bothering me when working with Linux audio. Now I've said it, ha.
I'm thinking of having Jack require a Load/Save callback, prior to
activating the client. How feasible is that?
What do you think?
Gerald
Hello!
sorry for crossposting, but I didn't know where best to reach the
LS wizards.
I wondered about convolution in LinuxSampler. The GS3 format certainly
supports in-sampler convolving (having its own files for that). I seem to
remember that working out the format was not the real issue, but the
implementation itself. But there, I think, Fons offered his convolution
classes - as used in jconv - to make it easy to implement.
As I now have more and more nice sounds, including ever more handsome IRs
(either room or body IRs), I was wondering if the convolution issue is
still being pursued. I would certainly very much appreciate it!
So could someone please give me an update of the way LS is going?
Warm regards
Julien
--------
Music was my first love and it will be my last (John Miles)
======== FIND MY WEB-PROJECT AT: ========
http://ltsb.sourceforge.net
the Linux TextBased Studio guide
======= AND MY PERSONAL PAGES AT: =======
http://www.juliencoder.de
I wanted a very simple SDR with jack inputs and outputs for a
demonstration I was doing. I had a look at the DSP guts of dttsp and
quisk, and sat down to code.
Now, since I wanted to demonstrate how you could use LADSPA filters to
clean up received audio, it occurred to me that I should implement my
SDR core as a LADSPA plugin. So, I did.
It "works for me". If you try it out, let me know how you get on. At
256 frames/period it sits at about 3% usage on my P4-2.8 without any
other LADSPAs running - not bad, but it probably could be better.
If you want to build it, get the code with:
git clone git://lovesthepython.org/ladspa-sdr.git
then build it with scons. You'll need to manually copy the resulting
sdr.so to wherever your LADSPA plugins live. Load it up in jack-rack
and add in an amplifier plugin (there's no AGC) and some sort of filter
(I recommend the Glame Bandpass Filter).
Performance and quality aren't exactly amazing, but for less than 300
lines of code - much of that used to set up the plugin - it's not too
bad.
Gordon MM0YEQ
We seem to be fairly interested in the same things, James!
I don't know if you have access to university lecturers... if you do, go
have a chat with the software engineering lecturer. I've only had
positive experiences when approaching them about
"totally-unrelated-to-course" projects.
On the other hand, I bought a book (I forget the exact name... I can
find out) which covered some of the basic object-oriented stuff, but at
the same time I found it relatively useless when trying to apply it to
"music software" (i.e. Ardour, Seq24, Dino, etc. kinds of programs).
Spending time drawing out program diagrams (you know, the "standard"
boxes approach to explaining how classes interact) has been my approach.
I didn't really find any great resources online. If you do find any,
please post back here! :-)
Good luck, -Harry
Jorn, Fons, i'm looking for a ladspa UHJ encoder, and can't seem to
find one. Any idea if such a beast exists? Or if there's a standalone
instance or ambdec preset i can use, and route in and out of?
Jorn, i've had several browses over your web examples of using AMB
plugins with Ardour, and have reflected the setup where possible in
Non-Mixer.
I'm using samples (a la LSampler) for noise, but i'll ask here, what's
the function of using the tetraproc mike plugin over something else?
I'm lost in your explanation.
I'm still getting my feet wet in ambisonics, and making plenty of
errors along the way, but progress seems imminent. (as it always does
i guess, for the optimistic among us.)
Some general questions.
When i use Jconvolver standalone (my preference) and test with a
*amb.conf, i get 1 input and 4 outputs WXYZ. Is this correct for 4
signals coming into 1, into the *amb.conf, or do i need to change this
to reflect individual WXYZ routing, from something like a MASTER
strip, or from an ambdec plugin in a channel strip? (i'm trying to get
the signal chain sorted out correctly.) i.e. 4 in, 4 out.
I'm using all mono ins for sound sources, and want to reflect
positioning in the busses, as i have multitrack 1st violins,
2nd violins, etc...
So my 1st violins (4 monotracks) are going into a 1stviolin buss (4
ins) and in the buss signal chain, i'm adding a ladspa amb mono
panner, which naturally gives me 4 outs, then the chain continues to
the MASTER and jconvolver, back into a jconv buss in the mixer with
the intent of finally routing that to the UHJ buss...
Should i then stay "faithful" to that signal chain, and up to a UHJ
encode to stereo (which i hope is in ladspa existence) maintain the 4
port stream to stay compliant with WXYZ?
The intent with this is provide ambisonic positioning, and convolver
tail, right up until downsizing to stereo as the last part of the
signal chain.
I'm finding the challenge of this interesting, and may have more
questions as more of this slowly seeps into my head.
Feel free to point out obvious errors, or alternative (meaning
smarter) suggestions.
Alex.
--
www.openoctave.org
midi-subscribe(a)openoctave.org
development-subscribe(a)openoctave.org
Good day...
Just coming to grips with, and learning, the alias system...
Under what conditions might a Jack port not have any alias names?
When might I expect to encounter that situation?
Because our app supports both ALSA midi and Jack midi, the app's very own
ALSA ports are showing up in its list of Jack midi ports. We don't want our
own ALSA ports listed in there.
So to filter them out of our Jack midi ports list, I look at the port's (first)
alias name and see if the app's name is in there, and filter out the port if
it's a match.
So far so good, but I'm worried what happens if there's no alias to work with.
I can't figure out a way to determine if a non-aliased name like
"system:midi_playback_4" actually belongs to our app's own ALSA ports.
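The filtering described above boils down to something like this (the helper name is mine; the alias string would come from jack_port_get_aliases()), which also makes the hole visible: a port with no alias can never match.

```c
#include <string.h>

/* Helper name is illustrative.  'alias' would be the port's first
 * alias as returned by jack_port_get_aliases(); if the port has no
 * alias at all there is nothing to match on, which is exactly the
 * worrying case for names like "system:midi_playback_4". */
static int is_own_alsa_port(const char *alias, const char *app_name)
{
    return alias != NULL && strstr(alias, app_name) != NULL;
}
```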
I don't know how or if 'alias renaming' will affect my plans.
Still learning + investigating much about this system.
Thanks. Tim.
Hey, has anyone been seeing strange behavior from this combination?
kernel 2.6.31.x rt20 + alsa 1.0.22 userland
RME card (pcmcia card + multiface)
hdspmixer is not doing the right thing (does not initialize the card in
a way in which playback works), it does not see the hwdep interface (or
something like that) and disables metering, alsamixer even segfaults
when I reach the end of the controls listed. Plain weird. Smells like
something changed deep in the kernel that makes alsa-lib very unhappy.
Alsa-tools rebuilt from source does not make a difference.
Weirdness goes away when I boot into 2.6.29.6 rt23...
Is there anything in alsa-* that depends on which _kernel_ is available
at compile time?
-- Fernando