[linux-audio-dev] XAP: a polemic

David Olofson david at olofson.net
Sun Dec 15 20:30:00 UTC 2002


On Monday 16 December 2002 01.43, Tim Goetze wrote:
[...]
> to allow for everything you mention here, you need to
> start counting a different time -- you've stopped the
> transport, so transport time isn't flowing any more.
>
> at least that's what i do, calling it 'virtual time'.

Well, I'm well aware of that solution, but I can't say I like it...


> the thing is that you need to keep time well-defined
> and controllable at one point, for the whole network.
> if you don't, things like synchronization and transport
> control are tough to get right.

Yes indeed!

So, what's wrong with something very closely related to the blocks 
and process() calls: sample frames?
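
To make that concrete, here's a rough sketch (hypothetical names, 
not actual XAP declarations) of what sample-frame timestamps buy 
you: each event carries an offset into the current block, and the 
plugin splits its processing at event boundaries, actuating each 
event with sample accuracy.

#include <stdint.h>

typedef struct event {
    uint32_t frame;         /* offset into the current block */
    int      type;          /* control change, note on, ...  */
    float    value;
    struct event *next;     /* queue sorted by 'frame'       */
} event;

static void run(event *ev, float *out, uint32_t nframes)
{
    uint32_t pos = 0;
    while (pos < nframes) {
        /* Render up to the next event, or to block end. */
        uint32_t end = (ev && ev->frame < nframes)
                ? ev->frame : nframes;
        while (pos < end)
            out[pos++] = 0.0f;  /* ...synthesis here...  */
        /* Actuate every event due at this exact frame.  */
        while (ev && ev->frame <= pos) {
            /* apply_event(ev) would go here.            */
            ev = ev->next;
        }
    }
}

The point being that "when" is expressed in the unit the whole 
network already agrees on - sample frames - with no second clock 
to keep in sync.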


> >Standing proposal:
> > Host processes blocks of 'n' samples.  Events are delivered with
> > a timestamp that says 'actuate this event at this time within
> > this buffer'. This is exactly what user-supplied automation is,
> > totally randomly timed events.  Some plugins need to sync to
> > tempo or music-milestones.  They indicate this need and receive
> > tempo, meter, and tick events. The plugin is responsible for
> > tracking changes.
>
> drop the tick events and it starts to sound reasonable.

This I agree with.
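
For illustration, "tempo and meter, no ticks" could look something 
like this (all names invented here): the host sends one event when 
the tempo or meter actually changes, and a tempo-synced plugin 
derives beat positions from sample frames itself, rather than 
being driven by a stream of tick events.

typedef struct time_info {
    double tempo;          /* beats per minute              */
    int    beats_per_bar;  /* meter numerator               */
    int    beat_type;      /* meter denominator             */
} time_info;

/* Delivered with a sample-frame timestamp like any other
 * event; the plugin just stores the latest values. */
static void on_time_event(time_info *state, const time_info *ev)
{
    *state = *ev;
}

/* Beats covered by 'frames' samples at the current tempo -
 * the plugin counts its own "ticks" from this. */
static double frames_to_beats(const time_info *t, double frames,
                              double sample_rate)
{
    return frames * t->tempo / (60.0 * sample_rate);
}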


> buffer-relative timestamps are a definite no.

This too, actually - but for different reasons.


> reason: you need to be able to schedule events far
> ahead (streaming, prequeueing).

I still think this should be an optional service, not something 
built into the API. To me, event prequeueing is no different from 
audio prequeueing - and I don't think either belongs in the low 
levels of a callback-based, RT-oriented API.
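
Something like this sketch (invented names, assuming events are 
never scheduled in the past) is what I have in mind: a sender 
keeps its far-ahead events in its own queue, stamped with absolute 
frame times, and a thin layer delivers just the events that fall 
inside the current block, converted to block-relative offsets.

#include <stdint.h>

typedef struct pq_event {
    uint64_t when;          /* absolute time in sample frames */
    int      type;
    float    value;
    struct pq_event *next;  /* queue sorted by 'when'         */
} pq_event;

typedef struct prequeue {
    pq_event *head;         /* events scheduled far ahead     */
    uint64_t  now;          /* frames elapsed since start     */
} prequeue;

/* Run once per cycle, before process(): unlink the events
 * that fall inside this block and make their timestamps
 * block-relative. Everything else stays queued. */
static pq_event *prequeue_due(prequeue *q, uint32_t nframes)
{
    pq_event *due = NULL, **tail = &due;
    while (q->head && q->head->when < q->now + nframes) {
        pq_event *e = q->head;
        q->head = e->next;
        e->when -= q->now;  /* now a block-relative offset    */
        e->next = NULL;
        *tail = e;
        tail = &e->next;
    }
    q->now += nframes;
    return due;
}

Hosts and plugins that never prequeue pay nothing for this.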


> calculating a buffer-
> relative time in this case requires either knowing
> all future buffer sizes or updating these events at
> every cycle. the latter is too awkward, and the former
> enforces a guarantee that severely limits the system's
> synchronization capabilities.

I agree that the first approach is simply incorrect. As I've said 
before, you don't know anything about the future, especially not 
about the relation between the timeline and audio time.

In what way do timestamps related to the musical timeline help here? 

It seems to me that this only moves the problem somewhere else - 
and I'm still very far from convinced that it makes anything 
easier, or that it's at all doable without causing trouble.
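
Consider what a musical-time stamp actually defers (hypothetical 
names again): turning "at beat N" into an audio frame still 
requires the tempo and transport state, both of which can change 
before beat N arrives. Whoever does the conversion - host or 
plugin - faces the same unknown future.

typedef struct transport {
    double beat;     /* timeline position at block start    */
    double tempo;    /* beats per minute, valid *now* only  */
    double rate;     /* sample rate in Hz                   */
    int    rolling;  /* is transport time flowing at all?   */
} transport;

/* Frames from block start until 'target' beats - valid only
 * if the tempo holds and the transport keeps rolling, which
 * nothing can guarantee. */
static long beats_to_frames(const transport *t, double target)
{
    if (!t->rolling)
        return -1;   /* undefined; transport time stopped   */
    return (long)((target - t->beat) * 60.0 / t->tempo
                  * t->rate);
}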


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---------------------------> http://olofson.net/audiality -'
   --- http://olofson.net --- http://www.reologica.se ---


