[linux-audio-dev] A "best" event delegation strategy?
philkerr at elec.gla.ac.uk
Fri May 30 10:15:01 UTC 2003
For the network transport you may want to have a look at IEEE
P1639, formerly known as DMIDI, and at RTP-MIDI (aka MWPP).
Both provide a network transport framework for MIDI event transmission.
IEEE P1639 works at layer 2 (raw Ethernet), while RTP-MIDI is layer-3 based.
On Fri, 2003-05-30 at 14:44, Lukas Degener wrote:
> >(The "engine thread" here would usually be the real time audio
> >You'll have to be more specific to get a more specific answer. :-)
> Ok, sorry, the scope of the whole proposal was somewhat ambiguous, I'm
> afraid. :-)
> I don't intend to work with audio streams, at least not right now. (There
> are dozens of apps that do this much more elegantly than I would ever
> be able to.)
> The main focus is on MIDI events. And I would also prefer a push
> model for this. That is, the ALSA sequencer client module listens on the
> input ports and creates events which are pushed through the network. The
> things happening within the individual modules will probably happen in
> constant time, or in O(n) if you send n events through them.
> So I don't expect performance problems from this side. The only thing I
> could imagine having a fatal impact on latency would be a relevant
> thread being blocked / not woken in time by the system. So I thought
> that if I have a "master" thread running with RT priority, which takes
> care of all the event delegation (i.e. via a global queue), I should not
> run into severe problems. For example: any event source/subject, no
> matter which thread it runs on, delivers events to the global queue
> (which should, as you pointed out, be some kind of lock-free FIFO).
> Every event is associated with the observer/listener it is to be
> delivered to. Another thread, running with RT priority, reads the events
> from the queue and delivers them.
> As a result, all event _processing_ would run on a single thread, and
> should happen in O(1) per event.
> I'd like to disregard UI interaction for now. Anyway, events that
> originate in the GUI could be managed like any other event. Events that
> should be sent to the GUI can be decoupled via another FIFO.
> As for the feedback loops: using a FIFO in the way described above
> should by itself introduce a minimal (but required) delay, and therefore
> make feedback controllable. (I.e. the loop is "flattened", no recursion.)
> Ah, in the meantime, two other replies popped into my mailbox.
> Yes, I have tried Pd some time ago, and was rather impressed by what it
> could do. I guess by now it is possible to cook coffee and get the girl
> next door laid with it. :-)
> But I haven't ever looked at the code. And I imagine it to be rather
> complex. :-)
> Interestingly, what you describe as the approach Ardour is slowly moving
> towards, i.e. one RT thread to process more or less all events, is
> exactly what is becoming more and more my favourite alternative. Mostly
> because it seems relatively easy to do, or rather I think I can imagine
> how it can be done. Two issues remain, nevertheless:
> A) What happens if some future plugin for some reason does something more
> complex, let's say O(n^k) per event? Would it still be possible to do
> this on the RT thread? Probably not. But then, how could one possibly
> guarantee RT processing of such a problem? Probably not at all?
> B) How to implement a lock-free FIFO? Or rather: is there some
> ready-to-use implementation of it?
> Thanks for the replies, everyone.