[LAD] timing the processing of queues between engine and ui threads?

Paul Davis paul at linuxaudiosystems.com
Fri Nov 4 02:12:01 UTC 2011


On Thu, Nov 3, 2011 at 9:44 PM, Iain Duncan <iainduncanlists at gmail.com> wrote:
>> For my particular case, no drop outs is critical, and I really really want
>> to be able to run multiple UIs on lots of cheap machines talking to the
>> same
>> engine over something (osc I expect). So I'm ok with the fact that user
>> input and requests for user interface updates may lag, as the queue is
>> likely to be really busy sometimes. I'm imagining:
>
>> you're going to want at least 3 threads:
>>
>>  1) inside the engine, something to handle requests from a UI that
>> cannot be done under RT constraints
>>         and route others that can into ...
>>  2) the engine thread
>>  3) some UI (not necessarily in the same process)
>
> Thanks, can you elaborate on the first two? ( I appreciate the time,
> understand if not ). Is thread 1 spawned by thread 2? Is the idea that the
> engine thread can then start stuff that it allows to be interrupted but
> still owns all the data for? And how would that be handled if that thread is
> being handled by the audio subsystem and I'm just writing a callback
> function that runs once a sample?

in reverse order ... in such systems, the audio subsystem is providing
(2) for you, so you don't have to start it. however, consider that
many, many systems are now multiprocessor, so that even though ardour
gets (2) from JACK, it also has its own pool of worker threads waiting
to do work on behalf of "the engine thread". this is a complication
you probably don't want to deal with.

(1) ... it doesn't really matter who starts it. the idea is that the
engine thread (wherever it comes from) can run under strict RT
constraints: it does not examine inter-process communication objects,
it does not perform significant data structure manipulation (what it
does is mostly limited to setting members of existing structures
and/or doing pointer swaps), and it does not have to do bursty,
CPU-consuming extra work from time to time. the IPC (or just
communication from the GUI) is handled by (1), which takes care of
this and all the other stuff that (2) should not be doing. but it is
on the *server* side, or "backend" if you prefer, not part of any UI.
in fact, it can be handling requests from multiple UIs (as happens,
for example, in ardour, where the GUI, OSC and MIDI could all be used
to change the state of things).
