On Monday 16 December 2002 04.51, Paul Davis wrote:
> > On Monday 16 December 2002 03.09, Paul Davis wrote:
> > > if you change/add tempo map entries while only half your
> > > network has completed a cycle, you're in deep sh*t. i
> > > found the easiest solution to be preventing this from
> > > happening in the first place.
> two words i learnt from ardour-dev: accelerando, decelerando.
> think about it ;)
> > You still want it applied to all plugins in *sync*, don't you?
> > You don't run a random set of plugins using an older version of
> > the timeline.
> my point was that the tempo *is* changing within processing
> cycles. it's moving from one value to another. unless the API
> allows this to be indicated, it's goodbye to a certain category
> of organized noise. you can't feasibly indicate it with a value
> for each sample - you need to indicate it with a ramp. nothing
> we've spoken about so far has covered this.
Well, we have ramp events for controls. These could be applied to
tempo as well, of course.
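
For example, something like this (a sketch only - the names and
fields are made up, not from any existing API):

/* Hypothetical ramp event. Tempo ramps linearly from its
 * current value to 'target' over 'duration' samples, starting
 * 'when' samples into the current block.
 */
typedef struct
{
	unsigned when;     /* sample offset within the block */
	unsigned duration; /* ramp length in samples */
	float    target;   /* tempo at the end of the ramp (BPM) */
} TempoRampEvent;

A plain, instant tempo change would just be a ramp with a
duration of 0.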
However, that doesn't really solve the problem, if there is one. For
all practical purposes in audio rate processing, tempo ramp events
differ from one tempo change per sample only in the amount of
overhead.
So, why do you need better than sample accurate resolution for the
timeline - or what's missing? I would think that in most cases, being
able to tell the exact position and tempo for every sample is
sufficient. If not, ramp events tell you the full story, for highly
tempo sensitive oversampled effects, or whatever. It sure would help
when translating from one sample rate to another.
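
Just to be concrete, here's roughly what a plugin could do with one
of those ramp events to get tempo and position for every sample
(simple per-sample accumulation, assuming the hypothetical
TempoRampEvent above):

/* Advance musical position through one linear tempo ramp.
 * 'tempo' is in BPM, 'position' in beats; both carry over
 * between blocks.
 */
static void run_ramp(double *position, double *tempo,
		const TempoRampEvent *ev, double sample_rate)
{
	unsigned i;
	double inc;
	if(!ev->duration)
	{
		*tempo = ev->target;	/* instant tempo change */
		return;
	}
	inc = (ev->target - *tempo) / ev->duration;
	for(i = 0; i < ev->duration; ++i)
	{
		*position += *tempo / (60.0 * sample_rate);
		*tempo += inc;
	}
}

(Strictly speaking, position is quadratic in time while tempo ramps
linearly, so you could integrate analytically instead - but the
accumulation above shows the idea.)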
> > > moreover, "transport" time can run backwards while other
> > > kinds of time run forwards.
> > That's an interesting point, BTW. How do you handle queued
> > events with musical timestamps in reverse order...? :-)
> how do you handle any queued events in reverse order?
Well, that's exactly my point. You *can't*, unless you consider the
whole queue - and we don't want every plugin to sort all input. (BTW,
in theory, you must do that with VST - but AFAIK, most people ignore
this, since hosts generally deliver events in timestamp order.)
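
That ordering is also what makes event handling cheap. If timestamps
are nondecreasing within the block, a plugin can get away with a
single linear scan, splitting the block at each event. A sketch -
Event, handle_event() and run_chunk() are made up for the example:

/* Process one block. Relies on 'events' being sorted by
 * timestamp; out-of-order events would break the split logic.
 */
const Event *ev = events;
unsigned i = 0, end;
while(i < nframes)
{
	/* Handle everything scheduled at or before this frame */
	while(ev && ev->when <= i)
	{
		handle_event(plugin, ev);
		ev = ev->next;
	}
	/* Run audio up to the next event, or the end of the block */
	end = (ev && ev->when < nframes) ? ev->when : nframes;
	run_chunk(plugin, i, end - i);
	i = end;
}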
> > > there is absolutely no mapping between "transport" time
> > > and a free running clock.
> > Well, perhaps beside the point, but what would you call the
> > implicit "mapping" that is defined as events finally reach the
> > outputs of your interface, in the form of audio signals? (I mean,
> > theoretically, there has to be one, since every timestamp,
> > whatever domain it belongs in, eventually ends up in the real
> > world, as a real "event" that occurs at some point in wall clock
> > time.)
> what do i call it? "user operations". the mapping is defined in
> real-time at run-time by the choices the user makes. "start",
> "stop", "locate to time T", "rewind", "loop over this section",
> "slow that down" etc. by the time we have this information, there
> is a mapping,
Yes.
> but it's too late to be useful.
Why? It seems to me that this is exactly the same as with RT audio
processing: At some point you have a deadline, and to meet it, you
must take a "snapshot" of what you know and work with that. Beyond
that point, it's too late to change anything.
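
In code, that snapshot could be as simple as a little struct the
host fills in once per block, before running the graph (hypothetical
fields, just to illustrate the idea):

/* One frozen transport state per cycle. Every plugin in the
 * graph sees the same values for the whole block, so the
 * frame<->musical time mapping cannot change in mid-cycle.
 */
typedef struct
{
	int    rolling;      /* 0 = stopped, 1 = rolling */
	double speed;        /* 1.0 = forward, -1.0 = reverse, ... */
	double frame;        /* timeline position at block start */
	double beat;         /* musical position at block start */
	double tempo;        /* tempo (BPM) at block start */
	double tempo_slope;  /* BPM change per frame; 0 if constant */
} TransportSnapshot;

Whatever the user does - start, stop, locate, loop, tempo edits -
takes effect at the next block boundary; basically the "prevent it
from happening in the first place" approach, formalized.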
//David Olofson - Programmer, Composer, Open Source Advocate
.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---------------------------> http://olofson.net/audiality -'
   --- http://olofson.net --- http://www.reologica.se ---