For my particular case, avoiding dropouts is critical, and I really want
to be able to run multiple UIs on lots of cheap machines talking to the
same engine over something (OSC, I expect). So I'm OK with user input and
requests for user-interface updates lagging, as the queue is likely to be
very busy at times. I'm imagining:
You're going to want at least 3 threads:
1) inside the engine, something to handle requests from a UI that cannot
be done under RT constraints, and route the ones that can into ...
(rough sketch after this list)
2) the engine thread
3) some UI (not necessarily in the same process)
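Very roughly, the handoff between 1) and 2) could be a lock-free
single-producer/single-consumer queue: thread 1 pushes small plain structs,
thread 2 pops them, and the RT side never locks or allocates. Something
like the sketch below; all the names (Command, CommandQueue) are made up
for illustration, not from any particular library.

// Minimal SPSC ring buffer sketch: the non-RT request thread pushes,
// the RT engine thread pops. Fixed size, no locks, no allocation.
#include <atomic>
#include <array>
#include <cstddef>

struct Command { int type; float value; };   // whatever the engine needs

template <std::size_t N>
class CommandQueue {
public:
    bool push(const Command& c) {            // called by thread 1 (non-RT)
        auto w = write_.load(std::memory_order_relaxed);
        auto next = (w + 1) % N;
        if (next == read_.load(std::memory_order_acquire))
            return false;                    // full: drop or retry later
        buf_[w] = c;
        write_.store(next, std::memory_order_release);
        return true;
    }
    bool pop(Command& out) {                 // called by thread 2 (RT)
        auto r = read_.load(std::memory_order_relaxed);
        if (r == write_.load(std::memory_order_acquire))
            return false;                    // empty
        out = buf_[r];
        read_.store((r + 1) % N, std::memory_order_release);
        return true;
    }
private:
    std::array<Command, N> buf_{};
    std::atomic<std::size_t> write_{0}, read_{0};
};

Thread 1 then does the slow stuff on its own time (parsing OSC, allocation,
replying to UIs) and only pushes commands the engine can apply cheaply;
thread 2 drains the queue at the top of each processing block.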
Thanks, can you elaborate on the first two? (I appreciate the time;
understand if not.) Is thread 1 spawned by thread 2? Is the idea that the
engine thread can then start stuff that it allows to be interrupted but
still owns all the data for? And how would that be handled if that thread
is being run by the audio subsystem and I'm just writing a callback
function that runs once per sample?
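Concretely, is the shape something like this? (Rough sketch only, reusing
the Command/CommandQueue types from the example above; ui_listen_loop() is
just a stand-in for whatever actually receives the OSC, and the real audio
API's callback registration is left out.)

struct EngineState { float gain = 1.0f; };    // stand-in for the engine data

CommandQueue<256> queue;                      // shared by both threads
EngineState state;                            // touched only by the callback

// Runs in the audio subsystem's RT thread; never blocks or allocates.
void audio_callback(float* buf, std::size_t nframes) {
    Command c;
    while (queue.pop(c))                      // apply any pending UI commands
        state.gain = c.value;
    for (std::size_t i = 0; i < nframes; ++i) // then process the block
        buf[i] *= state.gain;                 // e.g. scale the buffer in place
}

// Thread 1: sits on the socket/OSC side, answers the slow requests itself,
// and pushes the RT-safe ones into the queue (stub body here).
void ui_listen_loop() { /* queue.push(...) as requests arrive */ }

#include <thread>

int main() {
    std::thread requests(ui_listen_loop);     // spawned at startup,
                                              // not by the engine thread
    // ... register audio_callback with the audio API here ...
    requests.join();
}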
thanks again
iain