Hi there.
The distro I'm using installs a script along with jack that lets it run as
a daemon.
I'm wondering whether this makes any sense at all, or whether there is a
use case here that isn't already covered by something else.
I'd like to hear your opinion.
http://repos.archlinux.org/wsvn/packages/jack/repos/testing-x86_64/?opt=dir
jack.conf is the config file in /etc/
jack.install is run when jack is installed/upgraded/removed
rc.jack is the daemon script itself
So what do you think?
Regards,
Philipp
The CLAM project is pleased to announce the first stable release of Chordata.
Chordata is a simple but powerful application that analyses the chords of any
music file on your computer. You can use it to travel back and forth through
the song while watching insightful visualizations of its tonal features.
Key bindings and mouse interactions for song navigation are designed with a
musician, instrument in hand, in mind.
See it in action in this video:
http://www.youtube.com/watch?v=xVmkIznjUPE
Download it at http://clam-project.org
--
David García Garzón
(Work) david dot garcia at upf anotherdot edu
http://www.iua.upf.edu/~dgarcia
As a power user who's modestly (just kidding) keen on saving time,
keeping a good workflow, and avoiding as much as possible of the
drudgery of editing the same work over and over to get a result, I've
had the privilege and pleasure of testing and working with a data
protocol called CV, or control voltage, over the last two weeks.
Non-DAW, and its new buddy Non-Mixer, let me write function data into
ND control sequences, or "lanes", at will. From my POV it's like turbo
automation, and I'm still surprised and delighted at how easy and FAST
it is to work with. Without the layered complexity of MIDI, in a
simple 1:1 format, this is a very clever way to handle automation data
between apps, imho.
I'd ask devs who are building or modifying their Linux audio and video
apps to cast a brief eye over this protocol, and at least spare a
thought for the opportunities it offers to stream data directly from
one app to another. It seems an ideal solution for a modular
framework, without a lot of complexity involved. Best of all, it uses
JACK ports to do the routing work, so there's no additional routing
work for devs to do when streaming data across apps. I know some of
you will already be familiar with this protocol, so this quick note
could be considered a reminder. :)
Non-DAW and Non-Mixer are CV-capable, and I can enthusiastically
testify that the system works very well indeed.
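To give a feel for how simple the format is, here's a sketch (hypothetical helper names, nothing from Non-DAW's actual code) of how an automation "lane" might be rendered into per-sample CV values, ready to be written straight into a JACK port buffer:

```python
# Hypothetical sketch: render automation breakpoints into a per-sample
# CV buffer, the way a control "lane" maps 1:1 onto a JACK audio port.
# Names and structure are illustrative, not Non-DAW's actual code.

def render_cv(breakpoints, nframes):
    """breakpoints: list of (frame, value) pairs, sorted by frame.
    Returns nframes control values, linearly interpolated."""
    out = []
    for frame in range(nframes):
        # find the breakpoints surrounding this frame
        prev = breakpoints[0]
        nxt = breakpoints[-1]
        for bp in breakpoints:
            if bp[0] <= frame:
                prev = bp
            else:
                nxt = bp
                break
        if nxt[0] == prev[0]:
            out.append(prev[1])
        else:
            t = (frame - prev[0]) / (nxt[0] - prev[0])
            out.append(prev[1] + t * (nxt[1] - prev[1]))
    return out

# A 0.0 -> 1.0 fade over 4 frames, held afterwards:
lane = render_cv([(0, 0.0), (4, 1.0)], 8)
```

Because the lane is just audio-rate samples, any JACK connection can carry it; that's the whole trick.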
I guess you could call this a quick heads-up about an inter-app
opportunity for the community, and given the recent resurgence of the
Session discussion (woohoo), I'm thinking the CV protocol might be
complementary as a component in such a framework, from a user's
perspective.
Alex.
--
www.openoctave.org
midi-subscribe(a)openoctave.org
development-subscribe(a)openoctave.org
Hi, does anyone know of a synth as powerful as zynadd, phasex or
bristol, but in DSSI format? I need something I can load into
Rosegarden, since I don't want ten standalone synths running until
Ardour, RG and the synths support LASH, if that ever happens.
I think LASH should be integrated into JACK, to make it mandatory for
Linux audio apps. Missing LASH support is one of the main issues
bothering me when working with Linux audio. Now I've said it, ha.
I'm thinking of having JACK require a load/save callback prior to
activating the client. How feasible is that?
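To make that concrete, here is a minimal mock in Python; the names are entirely hypothetical and nothing like this exists in libjack today:

```python
# Hypothetical mock of a JACK client that refuses activation until
# load/save callbacks are registered. Illustrative only; this is not
# the real libjack API.

class MockJackClient:
    def __init__(self, name):
        self.name = name
        self.on_save = None
        self.on_load = None
        self.active = False

    def set_session_callbacks(self, on_save, on_load):
        self.on_save = on_save
        self.on_load = on_load

    def activate(self):
        # The proposal: make session support mandatory by refusing to
        # activate clients that lack load/save callbacks.
        if self.on_save is None or self.on_load is None:
            raise RuntimeError("client must register load/save callbacks")
        self.active = True

client = MockJackClient("mysynth")
try:
    client.activate()          # fails: no callbacks registered yet
except RuntimeError:
    pass
client.set_session_callbacks(on_save=lambda path: True,
                             on_load=lambda path: True)
client.activate()              # succeeds now
```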
What do you think?
Gerald
Hi, well, I'm also not the auto-spellcheck guy. I just need something
that saves the state of the session. No auto blah blah. It should save
when I click 'save session', presumably just causing all the apps to do
their internal save operation.
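In other words, something roughly like this sketch (hypothetical names, not LASH's actual API): the manager keeps a list of clients and, on an explicit 'save session', simply asks each one to do its own internal save:

```python
# Hypothetical sketch of an explicit "save session" action: the
# manager just asks every registered client to perform its own
# internal save. Names are illustrative; this is not LASH's API.

class Session:
    def __init__(self):
        self.clients = {}   # client name -> its save callback

    def register(self, name, save_callback):
        self.clients[name] = save_callback

    def save_session(self):
        # no automation, no guessing: one explicit save per app
        return {name: save() for name, save in self.clients.items()}

session = Session()
session.register("ardour", lambda: "ardour state written")
session.register("rosegarden", lambda: "rosegarden state written")
results = session.save_session()
```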
> +1 for Calf Monosynth and WhySynth. They, in addition to AMS and PHASEX, are the synths I've used most.
>
> Zyn is kind of old and doesn't do RT; the new thing is Yoshimi, and I dunno if it supports LASH or ladish, but I'd guess both.
>
> For the record, I *HATE* session management and I don't run LASH at all when I can avoid it (IIRC, there's some synth that I use or used which requires LASH, so I occasionally have to start it up).
>
> I generally can't stand technologies that try to be "smart" and do things I don't explicitly instruct them to do. Frustrates the hell out of me.
>
> FWIW, I am also the kind of guy who turns off autocomplete and spelling checkers whenever I can.
>
> -ken
All:
Based on some of the recent discussions on LAD and LAU, I
thought it prudent to forward this message from the
Jack-Devel list.
If you wish to continue the discussion on the Jack session management
API, please do it on the Jack-Devel list.
Scroll down to the very bottom for a URL to the list.
-gabriel
---------- Forwarded message ----------
Date: Thu, 4 Mar 2010 18:16:39 -0500
From: Paul Davis <paul(a)linuxaudiosystems.com>
To: jack-devel(a)lists.jackaudio.org
Subject: Re: [Jack-Devel] session management API proposal, take 3
[ ... chatter chatter chatter ... ]
i'm on the verge of approving the API proposed by Torben for inclusion
as part of the JACK API. I believe that it accomplishes the goal of
leveraging an existing IPC channel (via libjack), combined with a
minimalistic set of changes to the server and easy-to-implement
support in clients, to provide the basis for session management at
different levels of sophistication, depending on a user's needs.
Torben's example showed the generation of a shell script that will
restart the session, and this is probably the "bottom line" in terms
of simple session management.
There are still some details (below) but more importantly, I would
like to hear from people who have concrete objections, particularly of
the following form:
* the proposal does not address some functionality that it should address
* the proposal will make later, more extensive session management
harder, more complex or impossible
* the proposal is nonsensical because ...
As for details...
I agree with the suggestion that the initial set of session events
should include "quit".
Nedko has also raised some objections to the use of a UUID, and has
proposed a scheme based on PIDs. I personally do not understand how
this can possibly work if JACK is to retain the ability to support
multiple clients per process. Any scheme designed to use PIDs but
still allow this seems to me to have to turn the PID into a UUID by
combining it with some other information (eg. a client name). This
doesn't seem to be semantically different from starting with a
declaration that we use UUIDs to identify clients - how the UUIDs are
constructed is an implementation detail.
Finally, the semantics of the return codes from a session event need
to be pinned down better, since there are circumstances where a client
could *not* save its state without that being an error (e.g. ardour
running with a session on read-only media). For this, I currently
propose:
success == client believes that it will be able to restore its
state to the current state upon a reload
fail == client believes that it will not (or may not) be able to
restore its state to the current state upon a reload
Similar semantics will be needed for each of the other defined session events.
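As a sketch of those semantics (hypothetical names, not the actual proposed API), a client's save handler might look like:

```python
# Hypothetical sketch of the proposed return-code semantics for a
# "save" session event. Names are illustrative, not the JACK API.

SUCCESS = "success"   # client believes this state can be restored on reload
FAIL = "fail"         # client believes it will (or may) not be restorable

def handle_save_event(session_dir, writable):
    """A client's session-event handler. Saving to read-only media is
    not an error as such, but must be reported as FAIL so the manager
    knows the state may not be restorable."""
    if not writable:
        return FAIL
    # ... write state files into session_dir here ...
    return SUCCESS

ok = handle_save_event("/tmp/session", writable=True)
ro = handle_save_event("/mnt/cdrom/session", writable=False)
```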
_______________________________________________
Jack-Devel mailing list
Jack-Devel(a)lists.jackaudio.org
http://lists.jackaudio.org/listinfo.cgi/jack-devel-jackaudio.org
Hi, this is part of a previous mail, but with a catchier subject.
I think LASH should be integrated into JACK, to make it mandatory for
Linux audio apps. Missing LASH support is one of the main issues
bothering me when working with Linux audio. Now I've said it, ha.
I'm thinking of having JACK require a load/save callback prior to
activating the client. How feasible is that?
What do you think?
Gerald
Hello!
sorry for cross-posting, but I didn't know where best to reach the
LS wizards.
I wondered about convolution in LinuxSampler. The GS3 format certainly
supports in-sampler convolution (having its own files for that). I seem to
remember that working out the format was not the real issue, but rather the
implementation itself. But there, I think, Fons offered his convolution
classes - as used in jconv - to make it easy to implement.
As I now have more and more nice sounds, including ever more handsome IRs
(either room or body IRs), I was wondering if the convolution issue is still
being pursued. I certainly would very much appreciate it!
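For reference, the operation itself is just FIR convolution of the sample stream with the impulse response; a naive Python sketch follows (jconv and Fons's classes use partitioned FFTs instead, which is the hard part):

```python
# Naive direct convolution of a signal with an impulse response (IR).
# jconv / Fons's convolution classes do this with partitioned FFTs for
# real-time use; this sketch only shows the underlying operation.

def convolve(signal, ir):
    out = [0.0] * (len(signal) + len(ir) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(ir):
            out[i + j] += s * h
    return out

# A unit impulse through the IR returns the IR itself:
wet = convolve([1.0, 0.0, 0.0], [0.5, 0.25])
```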
So could someone please give me an update of the way LS is going?
Warm regards
Julien
--------
Music was my first love and it will be my last (John Miles)
======== FIND MY WEB-PROJECT AT: ========
http://ltsb.sourceforge.net
the Linux TextBased Studio guide
======= AND MY PERSONAL PAGES AT: =======
http://www.juliencoder.de
I wanted a very simple SDR with jack inputs and outputs for a
demonstration I was doing. I had a look at the DSP guts of dttsp and
quisk, and sat down to code.
Now, since I wanted to demonstrate how you could use LADSPA filters to
clean up received audio, it occurred to me that I should implement my
SDR core as a LADSPA plugin. So, I did.
It "works for me". If you try it out, let me know how you get on. At
256 frames/period it sits at about 3% usage on my P4-2.8 without any
other LADSPAs running - not bad, but it probably could be better.
If you want to build it, get the code with:
git clone git://lovesthepython.org/ladspa-sdr.git
then build it with scons. You'll need to manually copy the resulting
sdr.so to wherever your LADSPA plugins live. Load it up in jack-rack
and add in an amplifier plugin (there's no AGC) and some sort of filter
(I recommend the Glame Bandpass Filter).
Performance and quality aren't exactly amazing, but for less than 300
lines of code - much of that used to set up the plugin - it's not too
bad.
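For the curious, the core of such a receiver is tiny. Here is a Python sketch of a product detector (illustrative only, not the actual ladspa-sdr code): mix the input against a local oscillator, then low-pass the result:

```python
import math

# Sketch of an SDR "product detector": multiply the incoming samples
# by a local oscillator, then low-pass (here: a crude moving average
# standing in for a real filter). Not the actual ladspa-sdr code.

def mix(samples, lo_freq, rate):
    return [s * math.cos(2 * math.pi * lo_freq * n / rate)
            for n, s in enumerate(samples)]

def lowpass(samples, width):
    out = []
    for n in range(len(samples)):
        window = samples[max(0, n - width + 1):n + 1]
        out.append(sum(window) / len(window))
    return out

rate = 8000
tone = [math.cos(2 * math.pi * 1000 * n / rate) for n in range(rate)]
# Mixing a 1 kHz tone with a 1 kHz LO leaves a DC term of 0.5 plus a
# 2 kHz image, which the low-pass then attenuates.
audio = lowpass(mix(tone, 1000, rate), 32)
```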
Gordon MM0YEQ