On Sunday 15 December 2002 06.23, Tim Hockin wrote:
> > No matter how you turn this stuff about, some things get a bit
> > hairy. The most important thing to keep in mind though, is that
> > some designs make some things virtually *impossible*.
> I think this is the important point - whether the simple timestamp
> is sample-time or musical time, SOME plugins need to convert. Now
> the question is - which plugin classes require which, and which is
> the majority.
Well, my "guess" (not really) is that most plugins will need the
audio timestamps, *even* when they deal with musical time. Whenever
you actually need to *do* something, you'll need the audio timestamp,
because that's effectively real time to you. You can't even tell if
an event is within the current block, unless you convert one way or
the other.
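(For illustration, here's roughly what that conversion amounts to - a minimal sketch, with all names invented here, assuming a piecewise linear tempo map:)

```c
#include <assert.h>
#include <math.h>

/* One segment of a piecewise linear tempo map: from 'start_sample'
 * on, musical time advances at 'ticks_per_sample', starting at
 * 'start_tick'. (All names hypothetical.) */
typedef struct tempo_segment
{
	double start_tick;
	double start_sample;
	double ticks_per_sample;
} tempo_segment;

/* Musical time --> audio time, within one segment. */
static double tick_to_sample(const tempo_segment *seg, double tick)
{
	return seg->start_sample +
			(tick - seg->start_tick) / seg->ticks_per_sample;
}

/* Audio time --> musical time, within one segment. */
static double sample_to_tick(const tempo_segment *seg, double sample)
{
	return seg->start_tick +
			(sample - seg->start_sample) * seg->ticks_per_sample;
}
```

At 120 BPM, 44100 Hz and 480 ticks per beat, ticks_per_sample is 960/44100, so tick 960 lands two beats - one second - into the segment. A plugin has to do one of these divisions/multiplications per event (or per segment) no matter which timestamp format is the "native" one.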
> Or perhaps, if it is lightweight enough, we SHOULD pass both
> sample-time and tick-time for events?
I've considered that, but remember that musical time is absolute and
non-wrapping. A float has only 24 bits of accuracy, so it will drop
below sample accuracy rather quickly.
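(Easy to verify - a float's 24 bit significand can't even represent adjacent whole ticks once the count passes 2^24, which at, say, 1920 ticks per second happens in under two and a half hours of absolute musical time:)

```c
#include <assert.h>

/* Returns 1 if adding 'delta' to 'base' is lost to float rounding,
 * i.e. the sum rounds right back to 'base'. */
static int float_add_is_lost(float base, float delta)
{
	float sum = base + delta;
	return sum == base;
}
```

And sub-sample *fractions* of a tick obviously degrade long before whole ticks do.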
My events look something like this, so far:

	typedef struct XAP_event
	{
		struct XAP_event *next;
		XAP_timestamp    when;    /* When to process */
		XAP_ui32         action;  /* What to do */
		XAP_ui32         target;  /* Target Cookie */
		XAP_f32          value;   /* Value/Ramp Target */
		XAP_f32          slope;   /* Ramp Target Slope */
		XAP_ui32         count;   /* Ramp Duration */
		XAP_ui32         id;      /* VVID */
	} XAP_event;
You'll need all these fields at once when sending control ramp events
to voices, so unions won't help. The struct is exactly 32 bytes, so
we're going to end up with 64 byte events if we add anything. No big
deal, maybe. (64 bit platforms will need an extra 4 bytes for 'next'
anyway.)
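(To make that concrete, here's a sketch of a sender filling in one of these for a linear control ramp - the XAP_RAMP action code and the helper are hypothetical, and the typedefs are repeated here so the sketch stands on its own, assuming 32 bit sample-count timestamps:)

```c
#include <assert.h>
#include <math.h>
#include <stddef.h>

typedef unsigned int XAP_ui32;
typedef float        XAP_f32;
typedef XAP_ui32     XAP_timestamp;	/* assumed: 32 bit sample count */

typedef struct XAP_event
{
	struct XAP_event *next;
	XAP_timestamp    when;    /* When to process */
	XAP_ui32         action;  /* What to do */
	XAP_ui32         target;  /* Target Cookie */
	XAP_f32          value;   /* Value/Ramp Target */
	XAP_f32          slope;   /* Ramp Target Slope */
	XAP_ui32         count;   /* Ramp Duration */
	XAP_ui32         id;      /* VVID */
} XAP_event;

#define XAP_RAMP 1	/* hypothetical action code */

/* Ramp a voice control from 'current' to 'target_value' over
 * 'duration' sample frames, starting at 'when'. */
static void xap_make_ramp(XAP_event *ev, XAP_timestamp when,
                          XAP_ui32 target, XAP_ui32 vvid,
                          XAP_f32 current, XAP_f32 target_value,
                          XAP_ui32 duration)
{
	ev->next   = NULL;
	ev->when   = when;
	ev->action = XAP_RAMP;
	ev->target = target;
	ev->value  = target_value;
	ev->slope  = (target_value - current) / (XAP_f32)duration;
	ev->count  = duration;
	ev->id     = vvid;
}
```

Note that every field is in use at once - which is exactly why unions won't buy anything here.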
More importantly, it would mean that everyone that sends events has
to calculate *both* timestamps for every event.
And what do you do if there is no timeline at all? 0.0? (That would
make the timestamp useless, as described before.)
> > I disagree. It's also a technical decision. Many synths and
> > effects will sync with the tempo, and/or lock to the timeline. If
> > you can have only one timeline, you'll have trouble controlling
> > these plugins properly, since they treat the timeline pretty much
> > like a "rhythm" that's hardcoded into the timeline.
> I don't see what the trouble is...
Well, you do expect a beat synced effect to track the *beat*, right?
Now, if you have two timelines running at different tempi, which one
do you track?
I'm not sure it's a good idea to ignore this, just because *most*
people never do this sort of stuff. I've mentioned that it can be
handy when creating soundscapes, and I'd guess there are more
applications for it. So, if it doesn't complicate the API
significantly, why not just do it right?
> > [...]
> > It doesn't seem too complicated if you think of it as separate
> > sequencers, each with a timeline of its own... They're just
> > sending events to various units anyway, so what's the difference
> > if they send events describing different tempo maps as well?
> We've been talking about 'TEMPO' and 'TRANSPORT' and 'TICKS' and
> 'METER' controls, which (honestly) kind of turns my stomach.
But telling a beat synced LFO what note value the beat is, is OK?
(Well, it can't be avoided.)
> This is not what controls are meant to be doing. The answer strikes
> me in shadowy details:
>
> Each host struct has a timeline member.
You're forgetting that it's not the *host* that handles this. The
host is not necessarily the sequencer, and thus, it may not even know
what a timeline is.
If you have the timeline interface in the host struct, who tells the
host about the timeline...?
> Plugins register with the host for notification of certain things:
>
>     host->register_time_event(plugin, event, function);
>
> events:
>     TIME_TICKS      // call me on every tick edge
>     TIME_TRANSPORT  // call me when a transport happens
>     TIME_METER      // call me when the meter changes
Why special case it like this? Why not just have Control inputs named
TEMPO, TRANSPORT and METER? Then you could also have plugins with
corresponding outputs - or just build those outputs into the host, if
you don't want to make the sequencer a real plugin.
You don't need to request anything from anywhere; just say that you
have TEMPO, TRANSPORT and/or METER control - input and/or output.
Just as if they were ordinary controls.
They need different event types, but that applies to string and raw
data controls as well. And this just makes them easier to handle.
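(A sketch of what that looks like from the plugin's side - a beat synced LFO treating TEMPO as just another control input, with everything here invented for illustration, including the default:)

```c
#include <assert.h>
#include <math.h>

#define DEFAULT_TEMPO 120.0	/* BPM assumed when no TEMPO input is connected */

typedef struct beat_lfo
{
	double sample_rate;
	double tempo;      /* current tempo, BPM */
	double phase_inc;  /* phase advance per sample frame; one cycle per beat */
} beat_lfo;

static void lfo_init(beat_lfo *lfo, double sample_rate)
{
	lfo->sample_rate = sample_rate;
	lfo->tempo = DEFAULT_TEMPO;
	lfo->phase_inc = lfo->tempo / (60.0 * sample_rate);
}

/* Handler for the TEMPO control - no different from any other
 * f64 control change event. */
static void lfo_set_tempo(beat_lfo *lfo, double bpm)
{
	lfo->tempo = bpm;
	lfo->phase_inc = bpm / (60.0 * lfo->sample_rate);
}
```

No registration calls, no special casing in the host - whoever owns the timeline (sequencer, or host built-in) just sends control events to the TEMPO input like to any other control.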
> What about multiple timelines, you ask? Use different host
> structs. Or something.
Yes, but each plugin would be able to know about only one timeline.
One may argue that this restriction doesn't matter much, but then
again, we could just assume that no one uses multiple timelines at
all. Where to draw the line?
Anyway, if you throw these things in as normal controls, they become
per-Channel for free. They're just ordinary connections, from the
internals all the way up to the UI.
> If we standardize a timeline interface, we don't have to overload
> the control-event mechanism (which forces hosts to understand the
> hints or the plugin won't work AT ALL).
If your plugin does not work without timeline input, it is broken.
Your internal tempo will be 0.0, and your sound position will stand
still at 0.0. You won't hear anything about TRANSPORT_START or
TRANSPORT_STOP. No meter changes either, obviously.
So what? Same thing as if the sequencer is stopped. Have a default
value for tempo, or something. (Just an f64 control...)
Do you expect a synth to play anything but some default frequency,
without pitch input? :-)
//David Olofson - Programmer, Composer, Open Source Advocate
.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---------------------------> http://olofson.net/audiality -'
   --- http://olofson.net --- http://www.reologica.se ---