On Thursday 19 December 2002 23.51, Tim Hockin wrote:
[...]
I think that ticks/measure is a nice idea. Combined with ticks/sec
for tempo, you can compute everything you need. However, I don't see
how you can derive the actual western A/B form from just this
information, and this might be necessary for some purposes.
you can't - we'd need to make METER a two-control tuple. Is it
really needed? If so, I suggest we make it int values of the
beat-note (quarter is 4.0, thirty-second is 32.0). Isn't this
really the domain of the user, though? What is it used for?
Do you actually need the beat value for anything on this level?
Remember that tempo and position are based on the *beat*, rather than
on a fixed note value, so you don't need the beat value to know what
one beat is in terms of musical time. One beat is N ticks.
If the measure isn't an integer number of beats, that just means the
last beat will be "cut off". If you're using 3.5/4 and 1920
ticks/beat, the first measure will start at 0, and the next measure
will start at 3.5 * 1920 = 6720. That is, the last beat is only 960
ticks.
The only case I can think of when you need to know the beat value is
when dealing with changes from one beat value to another. The beat
value is effectively the relation between "absolute musical speed"
and our TEMPO, which is expressed as actual beats/<time unit>.
[...]
Not clear - I don't have anything that refers to current time,
except get_transport. I don't know that we should allow mid-block
transport stops (which is why I suggested making transport
start/stop be a plugin-global event). I split time_next() into a
separate function because it has different semantics than
get_time(). Fundamentally there are two functions I believe are
needed:
time_next() - get the timestamp of the next beat/bar/...
get_time() - get the timeinfo struct (bitmasked as you suggest)
for a specific timestamp, assuming current tempo/meter/etc.
Just remember that the returned data is essentially invalid if you
ask for a time beyond the end of the current block. It is little more
than a hint about the future.
get_transport() is really useless if transport state is
plugin-global, and useless if it is not. If it is, just get the
event. If it is not, put it into get_time()
Right. Either it's a value in the time info, or it's a timestamped
event. I believe the latter is the right way to do it, as start/stop
isn't strictly bound to measures, musical time or block cycles. When
you're sync'ing with an external device, there is nothing that
prevents transport start/stop events (or jumps, for that matter) from
occurring at any point - including in the middle of a block.
I'm worried about sync/lock issues if start/stop is treated as
something totally different from position and tempo.
[...]
What if we had plugin-global notifier events that told the plugin
some things:
TRANSPORT_STATE - start/stop events
TIMEINFO - something time-ish has changed
Now I know that making some of this plugin-global means that
different channels can't be on different timelines, but
honestly, I don't think it matters :)
I would agree with this. I think it's a very sensible idea.
Do we allow mid-block TRANSPORT_STOP? What does that MEAN? It
doesn't make sense to me...
Mid-block TRANSPORT_STOP simply means that the timeline stopped
moving in relation to audio time exactly at the point where you got
the event. That's all.
I am leaning towards one timeline per plugin, not per channel. It
is simpler and I think the loss of flexibility is so marginal as to
not matter.
Yes, probably. The only case where it really matters is when you have
beat sync effects - and then you can probably live with using two
instances of a synth instead of one. (Though, you'd have to get an
extra drive for something like Linuxsampler, unless you can make all
instances share the same disk butler thread.)
Also, since transport start/stop doesn't make sense mid-block,
does it really make sense as a control? Maybe
plug->transport(XAP_TRANSPORT_STOP); makes more sense?
I think it *does* make sense mid-block. For example, if you want to
start playback at a particular point in musical time, while the
sequencer is on external sync (that is, transport control is external
as well), you'll need better than block accurate timing for
TRANSPORT_START, or you simply can't track external sync properly.
[...]
TIMEBASE: (FIXME: instance-constant by host/user or a dynamic
event?)
Constant for the sequencer at least, I think. (As to GUI level
Is it a constant as in host->timebase, or is it a control that
never changes? Or is it passed to create() like sample rate?
The only real reason to change this value is to avoid inexact values
in some places. Given that there won't be many exact values in real
calculations anyway (as soon as the audio rate gets involved), this
matters only to the sequencer and some musical time aware plugins.
So, it seems logical that it's the sequencer that decides what
TIMEBASE to use - and it would have to do that based on the project
you load.
Yep. However, is it assumed that plugins should always count
meter-relative time from where the last METER event was received?
I guess that would be the simplest and "safest" way, although it
*does* reduce musical time accuracy to the sample accuracy level.
We could suggest that hosts send a METER event at each
measure-start...
Yeah. Beat sync effects will need to get that information one way or
another anyway.
too - special event TRANSPORT_CTL - parm = start/stop/freeze
do we really need a special event? see below...
I think we need a special event, since otherwise you can't move
the transport position when the transport is stopped. (See my
post on cue points.)
Maybe moving the transport position (play-head, if you will) should
not be sent to all plugins? If you have Cuepoints (will talk in
another thread), then you have cuepoints. If you don't, you get
the TRANSPORT value when play is resumed.
Yeah. It's just that HDRs tracking the play cursor and that kind of
stuff is so commonly used that I think it makes sense to have the
timeline work the same way.
Besides, what use is the old song position when it will never be used
again? I don't see a point in plugins keeping an irrelevant value
just to merge two different kinds of information in one event. (To
me, start/stop seems to be more closely related to TEMPO than to
POSITION - but I don't think abusing TEMPO == 0 for stop is a good
idea either.)
[...]
TEMPO is not special in any way, nor is transport start/stop.
Both *can* be normal controls. If they're not, they'll require
special-case handling in hosts for defaults and preset handling.
TEMPO and all time controls *are* special. They are not part of a
preset, they are part of the current *studio state*. Saving the
TEMPO control with the preset is wrong.
What do you do if you want to run a beat sync effect in real time,
without a sequencer?
Well, the best way would be to set up a "timeline generator". That
would allow multiple beat sync effects to stay in sync with each
other, and you'd get a nice, central interface to set the tempo and
meter.
So, yes, they might as well be considered special enough not to need
default values, presets or anything. Then it's probably a better idea
to use special events for them as well, not to have them handled by
the same code that handles normal controls.
Yes. (METER would be two controls.)
Only if we need to specify the beat-note, which I don't know.
I think it might be useful, but as I said; only when you want to know
the relation between beat values in different meters. This could
matter to some plugins in songs with meter changes.
Either way, I think it's important to keep in mind that control
!= event. Events are just a transport layer for control data, and
other [...]
I think controls should receive <type>_event events only. Sending
'special' events to controls raises a flag that the design needs
more thought.
Well, if TEMPO uses a "special" event, it *cannot* be a normal float
(or whatever) control, and thus it *must* be hinted as a different
type, or the host wouldn't be able to make correct connections. I've
never suggested that these events should be hinted as normal controls
unless they really are 100% compatible in all respects.
One event set <==> one event data type.
[David and Frank wrote...]
<lots of stuff about SPEED control>
Hrrm, so is SPEED just a hint to ordinary plugins that they may
want to speed up/slow down their playback because the host is in
fast-fwd/rewind mode?
Yes.
We are adding a lot of levels of indirection for figuring out
anything time/tempo related.
Where? You'll always get the *actual* time and tempo info, whether
it's caused by the sequencer being on SPEED ;-) or something else.
If a host is being fast-forwarded (say 2x) does the TEMPO double,
or just SPEED?
Both. That's how you can ignore SPEED if you don't feel like
implementing timestretching and stuff.
[...]
And now for a couple more off-the-wall ideas.
1) SEEK control
If we loop/jump into the middle of a long note, it would be nice to
start playing from the middle. Obviously not all plugins will
support it, but some can (sample players, for instance). How does
this sound:
IFF the plugin has a SEEK control:
- Send a VOICE_ON event with timestamp 'now' (same as buffer start)
- Send a voice SEEK event with timestamp 'now' and value
'ticks-to-skip' = plugin can seek that many ticks into the voice
If the plugin has no SEEK control, it just won't play.
On the contrary, it would play the note without the offset, as it
would look like just any note. That sounds sensible to me, though.
(I'd *really* like MIDI sequencers to do that with strings and
similar sounds, but there just isn't such a feature in most of them.
"Controller searchback" is as close as you get.)
Sequencers could have an option that lets the user decide when/if
notes should be sent in these situations even if there's no SEEK
control. That way, you have support for all three variants; skip
"missed" notes, play them from the start, or play them from the right
position.
I definitely like this idea.
Alternatively, if we use CUEPOINTs for looping, we can use that.
Doesn't fix random jumps, though.
No, it's a different problem entirely. To use cue
points/PCPs/whatever for anything that relies on events from the
sequencer, we would need an entirely different kind of interface
between plugins and sequencers.
The SEEK control, OTOH, would provide the information needed to get
it right most of the time, though. (It doesn't work if you pitch bend
samples. You'd need the full story behind the note to get that right.)
2) another idea wrt TEMPO/METER/POSITION/TRANSPORT/SPEED
All these 'controls' are really broadcasts.
They *could* be, but not if you have multiple timelines. Assuming
that plugins can belong to different sub nets in this regard seems
acceptable to me, though.
They are data transmitted from the host/timeline to all the
plugins in their chain which have the controls. What if we made it
more of a broadcast? Have a way to have host- or timeline-global
variables (which is what these are) which can be read by plugins.
Plugins have to factor in all the variables whenever they do any
calculation wrt time. We don't want to make it a host callback
(function calls are bad, mmkay). We can make it a global position
that plugins read.
It would have to be a list of events, rather than global variables.
Otherwise, tempo changes, position jumps and the like would be
restricted to block boundaries, and you couldn't have more than one
change per block. The net result would be forcing hosts to perform
global buffer splitting, as there would be no way to implement a
correct host without it.
When they receive a TIMEINFO event, they need to re-examine some
of the variables (if they don't look at them for every sample
already). When they receive a TRANSPORT event, they need to
re-examine it (if they care).
Plugins are free to drop these global events. There is no function
call overhead. The only overhead is the indirection of using a
pointer to a global.
Well, that's one way of doing it. The only problem is figuring out
what actually needs to be calculated. The time info struct will
contain musical time, SMPTE, MIDI clock and whatnot. Are you supposed
to send TIMEINFO events for every update (new SMPTE frame, new MIDI
tick), or once per beat, or once per measure, all of those, or when?
Also, if you can send these events for more than one reason, plugins
would have to be able to tell the difference, or they'd have to check
all data in the time info struct to know what's happened. Multiple
events would be the most efficient way (no extra level of
conditionals for decoding), but then we're almost back at square one,
only with a less efficient interface.
//David Olofson - Programmer, Composer, Open Source Advocate
.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---------------------------> http://olofson.net/audiality -'
--- http://olofson.net --- http://www.reologica.se ---