[linux-audio-dev] XAP and these <MEEP> timestamps...

David Olofson david at olofson.net
Thu Dec 12 14:57:01 UTC 2002


On Thursday 12 December 2002 15.32, Steve Harris wrote:
> On Thu, Dec 12, 2002 at 02:01:45PM +0100, David Olofson wrote:
> > > It does need to be part of the API, but not mixed in with the
> > > (very low level) event system.
> >
> > Right. So, what we need to do now is agree on a sensible time
> > struct. That is, basically copying that of VST or JACK.
>
> OK, sorry, I may have misunderstood the current state of the
> discussion, I've not been following this thread as closely as the
> pitch one.
>
> > I'm trying to find out whether or not it makes sense to say
> > anything more than "in the past" or "in the future" about
> > timestamps that are outside the buffer... Do you ever have a
> > *valid* reason to query the musical time of an audio time that is
> > outside the time frame you're supposed to work within?
>
> I think it depends on how expressive your time struct is. You do
> need to know what the delay time would be from now, assuming no
> changes.

Yes... Maybe one should actually have specific functions that work 
with *durations* instead? Or maybe not - they would still be 
completely equivalent to asking for "now" and "now + delay" and then 
calculating the delta. It would just be a minor performance hack that 
the host may or may not be able to use for some optimizations.
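Just to make that equivalence concrete, here's a rough C sketch - the 
xap_audio_time_of() call and the XAP_host type are made up for 
illustration only, not anything that has actually been proposed:

typedef struct XAP_host XAP_host;

/* Hypothetical: audio time (sample frame) at which musical time
 * 'beat' occurs, according to the current timeline. */
extern long xap_audio_time_of(XAP_host *host, double beat);

/* "Duration" version: how many frames until 'delay_beats' from now?
 * Exactly the delta between two point queries. */
static long frames_until(XAP_host *host, double now_beat,
                         double delay_beats)
{
	long now    = xap_audio_time_of(host, now_beat);
	long future = xap_audio_time_of(host, now_beat + delay_beats);
	return future - now;
}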

Either way, do keep in mind that whether you ask for a duration or 
two points on the timeline, the result is still invalidated if the 
timeline is changed or there is a transport skip during the period of 
time you're dealing with. There's no way to avoid that without using 
musical time (or other timeline relative) timestamps - and then 
you're in that nice position where you can lose events forever due to 
timeline loops and transport skips. *heh*

Thinking about situations when this delay time thing would matter, 
tempo-scaled envelopes spring to mind. Say you want something to 
decay linearly in exactly 1 bar, tracking tempo and tempo changes. 
You *could* just ask for the audio time of "now + 1 bar" and then go 
ahead. Simple - calculate the slope and then go back to processing.
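In code, that naive version would look something like this (same 
hypothetical xap_audio_time_of() as above, and assuming 4 beats per 
bar):

/* Ask once where "now + 1 bar" lands in audio time, derive a fixed
 * per-frame slope from that, and never look again. */
static double naive_slope(XAP_host *host, double now_beat)
{
	long start = xap_audio_time_of(host, now_beat);
	long end   = xap_audio_time_of(host, now_beat + 4.0);
	return 1.0 / (double)(end - start);  /* level drop per frame */
}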

But then you would not even track tempo changes properly, let alone 
transport skips! So, it doesn't work anyway. The *correct* way would 
be to look at the musical time for *every single sample frame*, 
calculating the level directly as a relation between the starting 
time and the current time. (And then you would obviously not need to 
see into the future at all; just find out the musical time of the 
current sample frame.)
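A minimal sketch of that "correct" version, assuming a hypothetical 
xap_musical_time() host call that returns the musical time (in beats) 
of a frame offset within the current block, and again 4 beats per 
bar:

/* Hypothetical: musical time (beats) of frame 'offset' in this block. */
extern double xap_musical_time(XAP_host *host, unsigned offset);

/* Linear 1-bar decay, recomputed from musical time at every frame,
 * so tempo changes and transport skips are tracked exactly. */
static void decay_block(XAP_host *host, double start_beat,
                        float *out, unsigned frames)
{
	for (unsigned i = 0; i < frames; ++i) {
		double elapsed = xap_musical_time(host, i) - start_beat;
		double level = 1.0 - elapsed / 4.0;
		out[i] = (float)(level > 0.0 ? level : 0.0);
	}
}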

Any other approach is an approximation. (Note that you may still 
implement it without one host call per sample frame! Just track tempo 
changes, and assume that the slope only changes when they occur. For 
beat synchronized effects, you'll also have to look for transport 
skips, since tempo changes do not reflect those.)
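That optimization might look roughly like this - the TempoEvent 
struct is invented for the example, and transport skips are ignored 
here, as noted:

/* Hypothetical tempo change event, sorted by frame offset. */
typedef struct {
	unsigned frame;            /* offset within the current block */
	double   beats_per_frame;  /* new tempo, as beats per sample frame */
} TempoEvent;

/* Same 1-bar decay, but the slope is only recomputed when a tempo
 * change arrives; between events it is assumed constant. The caller
 * keeps 'elapsed_beats' and the current tempo across blocks. */
static void decay_block_approx(float *out, unsigned frames,
                               double *elapsed_beats,
                               double *beats_per_frame,
                               const TempoEvent *ev, unsigned nev)
{
	unsigned e = 0;
	for (unsigned i = 0; i < frames; ++i) {
		while (e < nev && ev[e].frame == i)
			*beats_per_frame = ev[e++].beats_per_frame;
		double level = 1.0 - *elapsed_beats / 4.0;
		out[i] = (float)(level > 0.0 ? level : 0.0);
		*elapsed_beats += *beats_per_frame;
	}
}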

Now, a relatively nice approximation (for reasonably small blocks and 
not too drastic tempo changes) would be to check the start and end 
times, but restrict the latter to within the current buffer. You 
still won't handle tempo changes or transport skips in the middle of 
the buffer properly, but at least nothing unexpected can happen in 
between your start and end points.
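Something along these lines, that is (same made-up xap_musical_time() 
and XAP_host as before, sampling only the block boundaries):

/* Approximate 1-bar decay: look at musical time at the block start
 * and block end only, and interpolate linearly in between. Tempo
 * changes inside the block get smeared, but nothing beyond the block
 * boundary is ever involved. */
static void decay_block_boundary(XAP_host *host, double start_beat,
                                 float *out, unsigned frames)
{
	double b0 = xap_musical_time(host, 0);
	double b1 = xap_musical_time(host, frames);
	for (unsigned i = 0; i < frames; ++i) {
		double beat = b0 + (b1 - b0) * (double)i / (double)frames;
		double level = 1.0 - (beat - start_beat) / 4.0;
		out[i] = (float)(level > 0.0 ? level : 0.0);
	}
}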

So, where's the big difference? Well, no one can edit the timeline 
while you're in the middle of processing a block. As soon as the host 
starts running all plugins for one block, all events must be set in 
stone, or you will have parts of the net running out of sync! This 
applies to transport as well. A skip would already be known and 
"implemented" before your plugin is called. No surprises, that is. 

OTOH, *no one* knows what will happen in the next block! So, whatever 
the host tells you about the time after the current block may well be 
total nonsense by the time you actually get there.


I'm becoming more and more convinced that there is no valid reason 
for a plugin to ask detailed questions about the future. The answers 
are just (qualified, but still) guesses anyway.


> > Anyway, the *real* reason why I'm worrying about this is that
> > such a rule could make life a lot easier for hosts. You can cache
> > time structs for the current buffer, but never have to even
> > generate them for timestamps that fall outside. (The call would
> > just return one of two error codes or something.)
>
> As long as you have musical time information you can extrapolate
> from there when you need to. Or is that what you're trying to stop?

No, I'm just trying to stop plugins from asking about the future, and 
then assuming that the answer is still true when they get there.

If by extrapolation you mean figuring out things like "when is 1 bar 
ahead?", you're on slippery ground... You can still do that, but in 
effect, you'll just be *guessing* - just like the host would be if it 
answered questions about the time beyond the end of the current 
block.

I'm trying to avoid an API where you have to think twice to know 
whether something the host tells you is a *fact* or an *estimate*.


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---------------------------> http://olofson.net/audiality -'
.- M A I A -------------------------------------------------.
|    The Multimedia Application Integration Architecture    |
`----------------------------> http://www.linuxdj.com/maia -'
   --- http://olofson.net --- http://www.reologica.se ---


