I got curious, so I bashed out a quick program to benchmark pipes vs
POSIX message queues. It just pumps a bunch of messages through the
pipe/queue in a tight loop. The results were interesting:
$ ./ipc 4096 1000000
Sending a 4096 byte message 1000000 times.
Pipe recv time: 6.881104
Pipe send time: 6.880998
Queue send time: 1.938512
Queue recv time: 1.938581
Whoah. That made me wonder what happens with realtime priority
(SCHED_FIFO, priority 90):
$ ./ipc 4096 1000000
Sending a 4096 byte message 1000000 times.
Pipe send time: 5.195232
Pipe recv time: 5.195475
Queue send time: 5.224862
Queue recv time: 5.224987
Pipes get a bit faster, and POSIX message queues get dramatically
slower. Interesting.
I am opening the queues as blocking here. Both sender and receiver run
at the same priority and pump the queue as aggressively as they can, so
there is a lot of contention; this is not an especially good model of
any reality we care about, but it's interesting nonetheless.
The first result really has me wondering how much Jack would benefit
from using message queues instead of pipes and sockets. It looks like
there's definitely potential here... I might try to write a more
scientific benchmark that better emulates the case Jack cares about and
measures wakeup latency, unless somebody beats me to it. That test
could implement the shm + wakeup pattern Jack actually uses and
benchmark it against firing the buffer payload over message queues...
But I should be doing more pragmatic things, so here's this for now :)
Program is here: http://drobilla.net/files/ipc.c
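For the curious, the send loops boil down to something like this. This
is a from-memory sketch, not the actual ipc.c linked above; receive
timing, waitpid, and error checking are all trimmed (link with -lrt):

#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

static double elapsed(struct timespec a, struct timespec b)
{
    return (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) * 1e-9;
}

int main(int argc, char** argv)
{
    const size_t size  = (argc > 1) ? atol(argv[1]) : 4096;
    const long   count = (argc > 2) ? atol(argv[2]) : 1000000;
    char* const  buf   = calloc(1, size);
    struct timespec t0, t1;

    /* Pipe: the child drains, the parent pumps and times the sends. */
    int fds[2];
    pipe(fds);
    if (fork() == 0) {
        for (long i = 0; i < count; ++i)
            read(fds[0], buf, size);  /* short reads ignored here */
        _exit(0);
    }
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (long i = 0; i < count; ++i)
        write(fds[1], buf, size);
    clock_gettime(CLOCK_MONOTONIC, &t1);
    printf("Pipe send time: %f\n", elapsed(t0, t1));

    /* POSIX message queue, opened blocking, msgsize == payload size. */
    struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = (long)size };
    mqd_t mq = mq_open("/ipc_bench", O_CREAT | O_RDWR, 0600, &attr);
    if (fork() == 0) {
        for (long i = 0; i < count; ++i)
            mq_receive(mq, buf, size, NULL);
        _exit(0);
    }
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (long i = 0; i < count; ++i)
        mq_send(mq, buf, size, 0);
    clock_gettime(CLOCK_MONOTONIC, &t1);
    printf("Queue send time: %f\n", elapsed(t0, t1));

    mq_unlink("/ipc_bench");
    return 0;
}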
Cheers,
-dr
> In other words, you'd expect such a system to behave as if you
> had two faders in series.
>
> Now if the DSP code only sees the sum of the two values (as it
> should, having a VCA group is just a user interface issue),
Ah, you just contradicted yourself. If you expect the system to behave
like two sliders, then the model must represent two sliders. The user
widgets should not embody decision-making; that implies intelligence in
the GUI, which is not strictly model-view-controller.
Imagine your device supplemented with a 'dumb' pair of MIDI controllers
complementing the two GUI sliders: they could not correctly implement
the complex interaction between the two gain settings, therefore you
need to go back and move that 'intelligence' into the model.
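To make that concrete, here is a minimal sketch of such a model (all
names hypothetical; link with -lm): both fader values live in the
model, and the 'off' exception is decided there rather than in each
widget:

#include <math.h>

#define FADER_MIN_DB -100.0  /* lowest finite widget position */

typedef struct {
    double channel_db;  /* per-channel fader, in dB */
    double group_db;    /* 'VCA' group fader, in dB */
} ChannelModel;

/* Behaves like two faders in series: either fader at its minimum
 * mutes the channel; otherwise the dB values simply add, so
 * -50 dB + -60 dB gives -110 dB, not 'off'. */
static double effective_gain(const ChannelModel* m)
{
    if (m->channel_db <= FADER_MIN_DB || m->group_db <= FADER_MIN_DB)
        return 0.0;  /* off: -inf dB */
    return pow(10.0, (m->channel_db + m->group_db) / 20.0);
}

A GUI slider or a 'dumb' MIDI controller then only ever writes
channel_db or group_db; neither needs any intelligence.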
Best Regards,
Jeff
> On Thu, 24 Nov 2011 20:45:09 +0000, Fons Adriaensen
> <fons(a)linuxaudio.org> wrote, re: [LAD] sliders/fans:
>
> On Thu, Nov 24, 2011 at 02:21:25PM -0500, David Robillard wrote:
>
> > Agreed. Everything here is about the *view*. How that maps to
> > actual parameter values is an underlying model issue.
>
> Not always. Consider the case of 'VCA' groups for faders. That
> is: you have a slider that controls the gain of a group of
> channels (without those being mixed). The effective channel
> gain (in dB) is the sum of the per channel fader value and
> the one from the group fader. The model sees only this sum.
>
> Now a fader has to go down to zero gain (-inf dB). So you would
> map the lowest possible (finite) value of the widget to something
> that the model (the DSP code) would translate to 'off'.
>
> The question is then: is this exception handled by the widget and
> the DSP code, or by the DSP code only ?
>
> Suppose the minimum value of the widget would correspond to say
> -100 dB if not handled specially. If you just have a single fader
> per channel, you could arrange for the model or the DSP code to
> translate that to 'off'. That is no longer the case if you have
> 'VCA' faders.
>
> There are two things you'd expect from such a system:
>
> * If either the channel or the group fader is at minimum, then
> the channel must be off (zero gain).
>
> * If the channel fader is at -50 dB, and the group at -60 dB,
> you don't want zero gain, but -110 dB. Because either fader is
> still in a position where you'd expect that moving it makes a
> difference.
>
> In other words, you'd expect such a system to behave as if you
> had two faders in series.
>
> Now if the DSP code only sees the sum of the two values (as it
> should, having a VCA group is just a user interface issue), then
> that implies that the mapping of the minimum fader position (e.g.
> -100 dB) to something that would be interpreted as 'off' by the
> DSP code (e.g. -9999999 dB) _must be done by each individual
> fader_.
>
>
> Ciao,
>
>
> --
> FA
>
> Before us lies a wide valley, the sun shines - a glittering ray.
Here is the current version of the LV2 state extension, which defines
the model for plugin state, and a mechanism for saving and restoring it:
http://lv2plug.in/ns/ext/state
It's time to tag this one as stable, unless anyone can see any issues
(i.e. speak now, or forever hold your peace). If anyone has the time to
give it a quick read-through, feedback would be appreciated. I have
done a lot of work on the documentation lately, so hopefully everything
is clear.
This is currently implemented in Ardour 3 SVN, QTractor SVN, and a patch
for LinuxSampler SVN is available here:
http://drobilla.net/files/linuxsampler_lv2_state_0_4.diff
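For anyone skimming, the plugin side boils down to two callbacks plus
an entry in extension_data(). A rough sketch (struct and flag names
are from the spec header; the URIDs, field names, and include paths
here are illustrative and may differ between versions):

#include <stdint.h>
#include <string.h>

#include "lv2/lv2plug.in/ns/ext/state/state.h"

typedef struct {
    uint32_t greeting_key;  /* mapped URID of a key, e.g. <http://example.org/greeting> */
    uint32_t atom_string;   /* mapped URID of the value type */
    char     greeting[64];
} MyPlugin;

static LV2_State_Status
my_save(LV2_Handle instance, LV2_State_Store_Function store,
        LV2_State_Handle handle, uint32_t flags,
        const LV2_Feature* const* features)
{
    MyPlugin* self = (MyPlugin*)instance;
    store(handle, self->greeting_key,
          self->greeting, strlen(self->greeting) + 1,
          self->atom_string,
          LV2_STATE_IS_POD | LV2_STATE_IS_PORTABLE);
    return LV2_STATE_SUCCESS;
}

static LV2_State_Status
my_restore(LV2_Handle instance, LV2_State_Retrieve_Function retrieve,
           LV2_State_Handle handle, uint32_t flags,
           const LV2_Feature* const* features)
{
    MyPlugin*   self = (MyPlugin*)instance;
    size_t      size;
    uint32_t    type;
    uint32_t    vflags;
    const void* value =
        retrieve(handle, self->greeting_key, &size, &type, &vflags);
    if (value && type == self->atom_string && size <= sizeof(self->greeting))
        memcpy(self->greeting, value, size);
    return LV2_STATE_SUCCESS;
}

static const LV2_State_Interface state_iface = { my_save, my_restore };

/* In the plugin's extension_data(), return &state_iface for the state
 * interface URI given in the spec. */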
Thanks,
-dr
Just wondering if I understand this correctly. I'm making a loop-based
app for step sequencing. When I previously did this in Csound, I
clocked it off a phasor, so the timing was sample accurate (but that
brought all its own issues, to be sure). I'm wondering whether I should
do the same thing in a jack app, or use the jack transport clock, or
some hybrid.
My question: am I correct in understanding that if I use the jack
transport position to rewind in time, I'll get:
A) any other clients with running audio looping back too (which may or
may not be desirable)
B) jitter based on the amount of time left between when the loop should
end and the end of the process buffer in which the loop end falls?
Has anyone solved B? Could it be done by some complex tempo cheating trick?
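Regarding B, one approach is to check in the process callback whether
the loop end falls inside the current buffer, and split processing at
that exact frame offset instead of wrapping on the next buffer
boundary. A hedged sketch (loop_start/loop_end are hypothetical app
fields; rendering is left as comments):

#include <jack/jack.h>
#include <jack/transport.h>

typedef struct {
    jack_client_t* client;
    jack_nframes_t loop_start;
    jack_nframes_t loop_end;
} App;

static int process(jack_nframes_t nframes, void* arg)
{
    App*            app = (App*)arg;
    jack_position_t pos;

    if (jack_transport_query(app->client, &pos) != JackTransportRolling)
        return 0;

    if (pos.frame < app->loop_end && pos.frame + nframes >= app->loop_end) {
        /* The loop end lands inside this buffer: render up to it, then
         * continue from the loop start, sample-accurately. */
        jack_nframes_t split = app->loop_end - pos.frame;
        (void)split;
        /* render frames [0, split) ending the loop ...            */
        /* ... then render frames [split, nframes) from loop_start */
        /* jack_transport_locate() may be called here, but the
         * relocation only takes effect a couple of cycles later, so
         * the gapless wrap still has to be done locally as above. */
        jack_transport_locate(app->client, app->loop_start);
    } else {
        /* render nframes normally */
    }
    return 0;
}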
Does anyone have any methods they've used for tight timing of looping in a
jack app?
Pointers at code appreciated of course. =)
thanks!
Iain
Iain Duncan <iainduncanlists(a)gmail.com> wrote:
> Thanks! Did you just write it?
Yup. As in literally just there. And I was reading your post in the new
RAUL thread as you were typing that :D
All the best, -Harry
On Tue, Nov 22, 2011 at 09:13:37PM +0100, Nick Copeland wrote:
> If you are using a toolkit that has a data flow of the following:
>
> pointer motion->graphical display->values->application->output
>
> Well, basically that is broken as you have a flow that is
>
> input->output->input->application->output
>
> invariably that is going to lead to issues. The tail (the toolkit) is wagging the dog
> (the application) as it imposes restrictions on the values the application is allowed
> to see.
>
> In my opinion (ok, it is only opinion) the correct flow is
>
> input->application->output
Yes, I see your point, and it makes a lot of sense. So what would be
required is
* compute the new parameter value from
- a stored state in 'parameter space' rather than 'widget space'
- and pointer (mouse) gestures,
* update the widget according to that value.
This is more or less what I do in the rotary controls used
in e.g. zita-at1 and zita-rev1. It's possible because the
mouse movement and the visual representation of the value
(the angle of the line on the rotary knob) are not directly
related anyway.
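In code, that scheme could look roughly like this (a hedged sketch
with hypothetical names): the drag state lives in parameter space, and
the widget is redrawn from the resulting value:

typedef struct {
    double value;       /* authoritative parameter value     */
    double drag_value;  /* parameter value when drag started */
    int    drag_y;      /* pointer y when drag started       */
} Knob;

static void knob_press(Knob* k, int y)
{
    k->drag_value = k->value;
    k->drag_y     = y;
}

static void knob_motion(Knob* k, int y, double units_per_pixel)
{
    /* Accumulate in parameter space, not widget space, so resolution
     * is not limited to one step per pixel of widget travel. */
    k->value = k->drag_value + (k->drag_y - y) * units_per_pixel;
    /* clamp to range, notify the application, then redraw the
     * widget from k->value */
}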
But this is not how most (all) toolkits work.
You could probably use them in the way you suggest with some
extra effort. But in many cases (e.g. linear sliders) the
pointer and widget would have to remain in sync visually,
which then means that your resolution in parameter space
can't be better than the visual one. Unless you allow the
pointer to move faster than the visual object it is controlling
(which is what I do in the 2-D panner, but it's possible only
because the widget is so small).
Ciao,
--
FA
Before us lies a wide valley, the sun shines - a glittering ray.
Hi,
I am trying to compile Aliki 0.1 on Ubuntu 10.10.
I installed the required libraries
libclthreads (>= 2.4.0), libclxclient (>= 3.6.1),
and libclalsadrv (>= 2.0.0),
but when I try to compile I get this error:
In file included from aliki.cc:26:
styles.h:26: fatal error: clxclient.h: No such file or directory
compilation terminated.
Note: I found and fixed a problem installing libclxclient, but maybe I
broke something... When I tried to install clxclient it showed this
error:
In file included from xdisplay.cc:22:
clxclient.h:31: fatal error: X11/Xft/Xft.h: No such file or directory
compilation terminated.
I found a solution on a website; it was solved with:
sudo apt-get install libxft-dev
thanks,
federico lopez
http://kinlan-presentations.appspot.com/bleeding/index.html#42
............
Don't we already have HTML5 <audio>?
Yes :)...but <audio> can only take us so far
Simple low-latency, glitch-free, audio playback and scheduling
Real-time processing and analysis
Low-level audio manipulation
Effects: spatial panning, low/high pass filters, convolution, gain, ...
...........
Judging by all the google-chrome symbols, this appears to be
google-chrome-specific. Is any of this FOSS and available in Chromium?
-- Niels
http://nielsmayer.com
Since everyone's tips here were so helpful in the ringbuffer
conversation: does anyone have any pointers for where to start
understanding jack transport and clocking, other than the transport
client example?
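Not a pointer to docs, but as a tiny starting point: besides
jack_transport_query() in the process callback, clients usually react
to relocations with a sync callback. A hedged sketch (error handling
omitted):

#include <jack/jack.h>
#include <jack/transport.h>

/* Called by JACK whenever the transport (re)locates; return non-zero
 * once the client is ready to roll from pos->frame. */
static int sync_cb(jack_transport_state_t state, jack_position_t* pos,
                   void* arg)
{
    /* e.g. recompute the sequencer's step/phase from pos->frame */
    return 1;  /* ready immediately */
}

/* After jack_client_open():
 *   jack_set_sync_callback(client, sync_cb, NULL);
 */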
thanks!
Iain