[linux-audio-dev] Audio/Midi system - RT prios..

fons adriaensen fons.adriaensen at skynet.be
Fri Dec 30 23:04:26 UTC 2005


On Fri, Dec 30, 2005 at 05:10:44PM -0500, Paul Davis wrote:
> On Fri, 2005-12-30 at 22:27 +0100, Pedro Lopez-Cabanillas wrote:
> > On Friday 30 December 2005 17:37, Werner Schweer wrote:
> > 
> > > The ALSA seq API dates from ancient times, when no realtime threads
> > > were available in Linux. Only a kernel driver could provide usable
> > > MIDI timing. But with the introduction of RT threads the
> > > ALSA seq API is obsolete IMHO.
> > 
> > I don't agree with this statement. IMHO, a design based on raw MIDI ports used
> > like simple Unix file descriptors, with every user application implementing
> > its own event scheduling mechanism, is the ancient and traditional way, and it
> > should be considered obsolete in Linux now that we have the advanced
> > queueing capabilities provided by the ALSA sequencer.
> 
> low latency apps don't want queuing, they just want routing. this is why
> the ALSA sequencer is obsolete for such apps. frank (v.d.p) had the
> right idea back when he started this, but i agree with werner's
> perspective that the queuing facilities are no longer relevant, at least
> not for "music" or "pro-audio" applications.

I'd agree with Pedro on this.

1. If things have to be timed accurately, it seems logical to concentrate
this activity at one point. At least then the timing will be consistent,
and you can impose priority rules in case of conflict, etc.

2. Translating data that carries an implicit or explicit timestamp into
a physical signal occurring at a real physical time is something that
belongs at the system or even hardware level, just as it does for audio.
When you are dealing with midi in software, it should just be timestamped
data, just as audio samples are. The only place where the timing matters
is when midi is output on a real electrical midi port.
Trying to deliver e.g. note-on events from a software sequencer to a soft
synth exactly 'on time' is a waste of effort - what the synth needs to know
is not 'when' on some physical time scale the note starts, but at which
sample it should start. In other words, the note-on event needs a timestamp
that can be converted easily to a frame time.

-- 
FA

More information about the Linux-audio-dev mailing list