On Sunday 15 December 2002 23.13, Tim Goetze wrote:
> David Olofson wrote:
> > Musical time *stops* when you stop the sequencer, which means
> > that
> > How would you go about implementing an event delay effect?
>
> by definition, time isn't flowing when the transport is
> stopped.

My hardware synths and FX disagree...

> a delay in stationary time can only be a zero
> delay because there's no future. this doesn't preclude
> other ways of processing, ie. plugins might only refrain
> from scheduling future events in stopped state.

Sure, but I still don't see any real advantage with *forcing* this
upon the system.
I would find it really rather logical if plugins did the same thing
as my synths when I stop the sequencer - that is, maintain tempo and
let "musical time" freewheel until the sequencer tells them
otherwise.
Nothing will convince me that this does not make sense, or that it is
useless. I know for a fact that it *is* both sensible and useful, and
I can see no technical or political reasons to prevent plugins from
acting in the same way.
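
To show what I mean - just a sketch in plain C, with every name made
up for the example - a plugin could keep its own musical time
counter running at whatever tempo it last heard, whether or not the
transport is rolling:

	typedef struct
	{
		double beat_pos;	/* musical position in beats */
		double tempo_bpm;	/* last tempo heard from the host */
		double sample_rate;	/* audio sample rate in Hz */
	} PluginTime;

	/* Called once per block of 'nframes' samples. */
	static void advance_musical_time(PluginTime *t, unsigned nframes)
	{
		/*
		 * Audio time always advances, and here musical time
		 * does too, even with the transport stopped - just
		 * like the internal clock of a hardware synth.
		 */
		t->beat_pos += (double)nframes / t->sample_rate
				* t->tempo_bpm / 60.0;
	}

That is all my hardware does, and all I'm asking is that plugins be
*allowed* to do the same.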

> > This won't help if you're locking to an external device. (I'm
> > sure Paul can explain this a lot better - I only seem to confuse
> > people most of the time...)
>
> yes, because you are confusing things yourself.

I'm quite sure I'm not, but I seem to be having a hard time
explaining what I mean. This is getting really rather frustrating...
:-/

> there is
> only one time within one system.

So now we don't have wall clock time, audio time, musical time, SMPTE
time or anything else; just "time"...?
Sure, wall clock time is the only *real* time there is. The rest are
just timelines that you map to wall clock time directly or
indirectly, one way or another.
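
To make that concrete with some arithmetic (illustrative numbers
only; nothing here comes from any real API):

	#include <stdio.h>

	int main(void)
	{
		const double tempo_bpm = 120.0;		/* assumed tempo */
		const double sample_rate = 48000.0;	/* assumed rate */
		const double beats = 16.0;		/* musical position */

		/* musical -> wall clock -> audio */
		double seconds = beats * 60.0 / tempo_bpm;
		double frames = seconds * sample_rate;

		/* prints: beat 16.0 = 8.00 s = frame 384000 */
		printf("beat %.1f = %.2f s = frame %.0f\n",
				beats, seconds, frames);
		return 0;
	}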

> if you sync this system
> to external, the flow of this time sways somewhat, but
> it's still one integral time.

I would like to see how you manage to sync your audio interface to an
external sequencer or VCR... You may lock the sequencer, but the
audio card (which may well drive the sequencer's thread physically)
will just run at the fixed 48 kHz rate you told it to use.

> you may also want to synchronize changes to the tempo map and
> the loop points to be executed at cycle boundaries, which is
> how i am making these less invasive, but that's another story.

I certainly wouldn't want the API to depend on such limitations.
Applications may do what they like, but thinking of block/cycle
boundaries as anything more than the limits of the non-zero-length
time span that is "now" is not helpful in any way if you want fully
sample accurate timing.
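
To illustrate: events are timestamped within that "now" span, and a
plugin simply splits its processing at each timestamp. Something
like this (a sketch only; the event layout and the DSP calls are
made up):

	typedef struct Event
	{
		unsigned frame;		/* offset into the current block */
		int type;
		struct Event *next;	/* list sorted by 'frame' */
	} Event;

	/* Dummy DSP and event handling, to keep the sketch complete. */
	static void render(float *out, unsigned n)
	{
		while(n--)
			*out++ = 0.0f;
	}

	static void handle_event(const Event *e)
	{
		(void)e;	/* a real plugin would change state here */
	}

	static void process(float *out, unsigned nframes, const Event *ev)
	{
		unsigned pos = 0;
		while(pos < nframes)
		{
			/* Render up to the next event, or block end. */
			unsigned end = (ev && ev->frame < nframes) ?
					ev->frame : nframes;
			render(out + pos, end - pos);
			pos = end;
			while(ev && ev->frame == pos)
			{
				handle_event(ev); /* sample accurate */
				ev = ev->next;
			}
		}
	}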

> block/cycle boundaries exist because hosts do block/cycle
> based processing -- and will do so for a couple of years
> on commodity hardware. conceptually, blockless processing
> is no different from one-sample sized blocks.

Well, why do you seem to have the idea that I wouldn't agree?

> if you change/add tempo map entries while only half your
> network has completed a cycle, you're in deep sh*t.

Yes indeed. That's *exactly* what I've been trying to say a number of
times now.

> i found the easiest solution to be preventing this from
> happening in the first place.

It's the *only* solution. Plugins that run in the same cycle will
(usually) generate data to be played at the same time, so anything
else would simply be incorrect. You can't be at two places in the
same timeline at once.
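
One way to guarantee that (a sketch only; C11-style atomics and
made-up names) is to build the new tempo map off to the side and
have the engine swap it in between cycles, so every plugin in the
graph sees the same map for the whole block:

	#include <stdatomic.h>

	typedef struct TempoMap TempoMap; /* contents don't matter here */

	static TempoMap *_Atomic current_map;	/* read by audio thread */
	static TempoMap *_Atomic pending_map;	/* set by editor thread */

	/* Editor thread: publish a complete new map; never edit the
	 * one the engine is using in place. */
	void submit_tempo_map(TempoMap *m)
	{
		atomic_store(&pending_map, m);
	}

	/* Audio thread: top of every cycle, before any plugin runs. */
	TempoMap *begin_cycle(void)
	{
		TempoMap *m = atomic_exchange(&pending_map, NULL);
		if(m)
			atomic_store(&current_map, m); /* old map is
						reclaimed elsewhere */
		return atomic_load(&current_map);
	}

Since the swap happens before any plugin runs, half the network can
never see a different map than the other half.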

> i don't see how this plays into sample-accuracy
> matters.

Well, it doesn't really. It applies to block "accurate" timing as
well.
The point I was trying to make is just that once you start processing
one block, nothing can change. The timeline during that block is
strictly defined and synchronized with whatever you're sync'ing your
sequencer to. The mapping from the timeline to audio time is known
and fixed.
Now, why does it make a difference if you get events timestamped in
audio time or whatever other format someone might find useful? You
can translate back and forth. What makes most sense to *me* at least,
is not to be forced to convert anything if you're only interested in
the actual place in the buffer where each event should be handled.
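
For example, with the map fixed for the duration of the block,
going from a musical timestamp to a buffer offset - or back - is
simple arithmetic. A sketch, assuming constant tempo within the
block, with made-up names:

	typedef struct
	{
		double block_start_beat; /* musical time at frame 0 */
		double tempo_bpm;
		double sample_rate;
	} BlockTime;

	/* Musical timestamp -> offset into the current buffer.
	 * Assumes 'beat' actually falls within this block. */
	static unsigned beat_to_offset(const BlockTime *bt, double beat)
	{
		double secs = (beat - bt->block_start_beat)
				* 60.0 / bt->tempo_bpm;
		return (unsigned)(secs * bt->sample_rate + 0.5);
	}

	/* ...and the other way around. */
	static double offset_to_beat(const BlockTime *bt, unsigned offset)
	{
		return bt->block_start_beat + (double)offset
				/ bt->sample_rate * bt->tempo_bpm / 60.0;
	}
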
Besides, VST and AudioUnits all use audio timestamps to control
synths and effects. AFAIK, DXi does too. Mess (MusE synths) isn't
exactly widely adopted, but that's another example.
I have yet to see a widely adopted plugin API that does not use audio
timestamps.
What is it that we're all failing to understand!?

> > I'm arguing for audio timestamps, because I do not want plugins
> > to have two different time domains forced upon them, especially
> > not when stopping one of them prevents plugins from communicating
> > properly.
>
> it does not, because any point in time can be expressed in
> any domain. and to repeat, in stopped state all clocks are
> frozen, no matter what they count. and to repeat again,
> device-dependent units for information interchange across
> implementation/abstraction layers are stoneage methodology.

It's just that when the sequencer is stopped, any point in free
running audio time maps to the same point in musical time.
I don't see a good reason to force plugins to accept that, unless
they actually care.
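
In code terms, the mapping just degenerates when the transport
stops; something like this (a sketch, with made-up names):

	typedef struct
	{
		int running;
		double stop_pos_beats;	/* position where we stopped */
		double tempo_bpm;
		double sample_rate;
		long song_start_frame;	/* frame at musical position 0 */
	} Transport;

	static double frame_to_beat(const Transport *t, long frame)
	{
		if(!t->running)
			return t->stop_pos_beats; /* one point for all */
		return (double)(frame - t->song_start_frame)
				/ t->sample_rate * t->tempo_bpm / 60.0;
	}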

> > You're not required to sequence every event in the network.
>
> you wouldn't mind either, would you?

Well, I guess it *could* be useful to control every single plugin
from one place, but I think that most useful setups are more complex
than that. You need to chain plugins to construct anything
interesting, and plugins that don't care about musical time should
not need a direct connection to the sequencer, just to be able to do
much at all. Nor should they be deprived of all means of
communicating anything but audio data with sample accuracy while the
sequencer is stopped.

> > > transport control is no event because it invariably involves
> > > a discontinuity in time, thus it transcends the very idea of
> > > an event in time.
> >
> > Yes - if you think of time in terms of musical time only.
>
> if you think in any form of transport time, be it ticks,
> seconds or frames. this is the time context that plugins
> operate in. any other concept of time is orthogonal.

This is where I strongly disagree. *Audio time* is what plugins
operate in - or you wouldn't call their process() callbacks, would
you?

> and plugins don't have an internal 'song
> position counter'.

It could be kept anywhere, but then you'd have to make a call
somewhere every time you want a sample accurate version of it.

> that's what a system-wide uniform time base requires.

No. It's just one way of implementing an interface to it.
You obviously don't understand what timestamped events are all about,
and what you can do with them.
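
To show what timestamped events buy you (a sketch only; the event
layout is made up): the host sends tempo and position as ordinary
timestamped events, and any plugin that actually cares can
reconstruct a sample accurate song position from them - no extra
query calls, and no second time domain forced on plugins that
don't care:

	typedef enum { EV_TEMPO, EV_POSITION, EV_NOTE_ON } EventType;

	typedef struct
	{
		unsigned frame;	/* audio timestamp: offset into block */
		EventType type;
		double value;	/* bpm, beats, note number... */
	} TimedEvent;

	typedef struct
	{
		double beat;	/* musical position at 'frame' */
		unsigned frame;
		double tempo_bpm;
		double sample_rate;
	} SongClock;

	/* Feed events in timestamp order; the clock stays sample
	 * accurate between them by plain interpolation. */
	static void clock_event(SongClock *c, const TimedEvent *ev)
	{
		/* Advance musical time to the event's timestamp. */
		c->beat += (double)(ev->frame - c->frame)
				/ c->sample_rate * c->tempo_bpm / 60.0;
		c->frame = ev->frame;
		if(ev->type == EV_TEMPO)
			c->tempo_bpm = ev->value;
		else if(ev->type == EV_POSITION)
			c->beat = ev->value;
	}
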
//David Olofson - Programmer, Composer, Open Source Advocate
.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---------------------------> http://olofson.net/audiality -'
   --- http://olofson.net --- http://www.reologica.se ---