On Thursday 09 January 2003 10.17, Tim Hockin wrote:
> > that it has N voices on at all times.
>
> Hmm. Still, VOICE_ALLOC is akin to note_on.
Well, if a voice can be "started" and still actually both silent and
physically deactivated (that is, acting as or being a dumb control
tracker) - then yes.
> I personally find this notion bizarre and counter-intuitive. The
> idea that the note is turned on by some random control is just
> awkward. I'm willing to concede it, but I just want to be on the
> record that I find it bizarre.
Well, maybe it's just that continuous control synths are bizarre by
definition? They just work this way, and there's nothing an API can
do about it.

The way I see it, this "random control" that triggers a note is
equivalent to MIDI NoteOn. It can even be a standardized NOTE control
that all synths must respond to, one way or another.

The event that binds a voice to a VVID, however, doesn't really have
a counterpart in MIDI, and I believe that's where the confusion is.
Initializing a VVID is actually rather similar to picking a MIDI
channel for future notes, in that it doesn't trigger a real action in
the synth.

Well, MIDI *does* actually have virtual voice allocation. It's
implicit: voice ID == MIDI pitch. The only difference with our VVIDs
is that they have no fixed relation to note pitch, so you have to
allocate them in some other way. (Unless you just simulate the MIDI
approach, that is.)
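(For illustration only - "simulating the MIDI approach" could be as
simple as this; the function name is made up, and nothing here is
from any actual API:

    /* Fixed, pitch-keyed VVIDs, MIDI style: the sender derives
     * the VVID from channel and pitch instead of allocating one. */
    static int vvid_from_midi(int channel, int pitch)
    {
        return channel * 128 + pitch;
    }
)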
> I still believe that the VOICE_ON approach is simpler to comprehend,
> and more consistent. I want to toss MIDI, but not where the
> convention makes things easy to understand. I think that explaining
> the idea that a voice is created but not 'on' until the instrument
> decides is going to confuse people. Over-engineered.
I think the alternative would render continuous control synths even
more confusing. "Why do I have to send a VOICE_ON to make the synth
work at all?"

I think VOICE_ON is like telling people that sequencers know better
than synths when to start and stop physical voices, and that's very
far from the truth, especially with continuous control synths.

Anyway, it's really an implementation issue. Just don't mention it in
the API docs; just say that the NOTE control corresponds to MIDI
NoteOn/Off. Problem solved!
[...]
> > having hosts provide VVID entries (which they never access) is
> > stretching it, but it's the cleanest way of avoiding "voice
> > searching" proposed so far, and it's a synth<->local host thing
> > only.
> If the host never accesses the VVID table, why is it in the host
> domain and not the plugin's? Simpler linearity? I don't buy that.
No, the real reason is that having synths allocate the entries would
force senders to indirectly communicate with synths when making
connections. And it would move the management work from the host into
plugins, obviously.

I just don't see a good reason not to have the host do it if it can:

* Less code in plugins.
* Less risk of plugins leaking memory.
* Hosts don't have to ask synths for extra VVIDs when making
  connections.
* VVID entry allocation can be made RT safe by the host, instead of
  requiring fully RT safe generic memory management for RT safe
  connections.
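(A minimal sketch of what such a host-allocated table might look
like; the names and layout are hypothetical, not from any actual API:

    #include <stdlib.h>

    #define VVID_NO_VOICE  (-1)

    /* One entry per VVID. The host allocates and owns the block;
     * the synth stores whatever per-voice state it likes in it,
     * and the host never reads the contents. */
    typedef struct vvid_entry
    {
        int      voice;  /* physical voice, or VVID_NO_VOICE */
        unsigned flags;  /* synth-private flags */
    } vvid_entry;

    /* Host side; done when connections are made. With a
     * preallocated pool instead of calloc(), this is RT safe. */
    static vvid_entry *host_alloc_vvids(unsigned count)
    {
        vvid_entry *t = calloc(count, sizeof *t);
        unsigned i;
        if(!t)
            return NULL;
        for(i = 0; i < count; ++i)
            t[i].voice = VVID_NO_VOICE;
        return t;
    }
)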
> The plugin CAN use the VVID table to store flags about the voice, as
> you suggested. I just want to point out that this is essentially the
> same as the plugin communicating to the host about voices, just more
> passively.
Only the host can't really make any sense of the data.
> It seems useful.
Not really, because of the latency, the polling requirement and the
coarse timing.
> If the plugin can flag VVID table entries as released, the host can
> have a better idea of which VVIDs it can reuse.
Why would this matter? Again, the host does *not* do physical voice
management.
You can reuse a VVID at any time, because *you* know whether or not
you'll need it again. The synth just doesn't care, as all it will
ever notice is that you stopped talking about whatever voice was
previously attached to that VVID.
[...]
> Am I losing my mind or are we back at a prior scenario?
>
> Init controls:
>     time X: ALLOC_VOICE
>     time X: CONTROL A SET
>     time X: CONTROL B SET
>
> This tastes just like VOICE_ON, SET, SET.
Except that

    1) VOICE_ON implies that something is actually "started"
       instantly, whereas ALLOC_VOICE just says "I'm going to talk
       about a new voice, referring to it using this VVID."

    2) There is no requirement that the controls are set at time X.
> If controls A and B are both required to start a voice, the synth
> has to expect both.
Actually, *values* would trigger the synth; not the events
themselves. (The difference is important when dealing with ramp
events!)
The synth may not bother to start playing until A > 0.5 and B > 0,
for example. If B is > 0 by default, you only need to fiddle with A
to get the synth to play.
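(To illustrate - a voice's control handler under that scheme might
look something like this; all names and thresholds are made up:

    enum { CTRL_A, CTRL_B };

    typedef struct ccvoice
    {
        float a, b;     /* latest control values        */
        int   playing;  /* 0 ==> just tracking controls */
    } ccvoice;

    /* No explicit VOICE_ON: the *values* decide when the voice
     * actually starts making sound. */
    static void ccvoice_control(ccvoice *v, int ctrl, float value)
    {
        switch(ctrl)
        {
          case CTRL_A: v->a = value; break;
          case CTRL_B: v->b = value; break;
        }
        if(!v->playing && (v->a > 0.5f) && (v->b > 0.0f))
            v->playing = 1;  /* start playing */
    }
)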
> Agreed - they are semantically the same. The question is whether or
> not it has a counterpart to say that init-time controls are done.
> > This is where the confusion/disagreement is, I think: I don't
> > think of this event as "INIT_START", but rather as
> > "CONTEXT_START". I don't see the need for a specific "init" part
> > of the lifetime of a context. Initialization ends whenever the
> > synth decides to start playing instead of just tracking controls.
> Right - this is the bit I find insane. From the user perspective: I
> want to start a note. Not whenever you feel like it. Now. Here are
> the non-default control values for this voice. Anything I did not
> send you, assume the default. Go.
So, bowed string instruments, wind instruments and the like are
insane designs? :-)

What you describe only applies to certain instruments, such as
keyboard instruments and to some extent, percussion.

A bowed string instrument is "triggered" by the bow pressure and
speed exceeding certain levels; not directly by the player thinking
"note!". Why would the instrument care what the player is *thinking*
while playing? I don't see the logic in forcing that into the API.
> I can map this onto your same behavior, but I don't like the way you
> characterize it. The init period is instantaneous. If I did not
> provide enough information (e.g. no velocity, and default is 0.0)
> then the voice is silent. Same behavior, different characterization.
>
> The difference comes when the host sends the 'magic' start-voice
> control too soon.
>
> Assume a synth with a bunch of init-latched controls.
> Assume velocity is the 'magic trigger'.
>
>     time0: Host sends VOICE_START/ALLOC/whatever
>     time0: Host sends controls A, B, C (latched, but no effect
>            from the synth)
>     time0: Host sends control VELOCITY (host activates voice)
>     time0: Host sends controls D, E, F (ignored - they are
>            init-latched, and init is over!)
>
> Do you see the problem?
No, I see a host sending continuous control data to an init-latched
synth. This is nothing that an API can fix automatically.
> It is easily solved by declaring a context, then setting init
> controls, then activating the voice.
Yes, but that's a matter of sequencing; not API design.
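(Spelled out, with invented event and control names - the only point
here is the ordering, nothing else:

    #include <stdio.h>

    /* Invented opcodes/controls; nothing from a real API. */
    enum { EV_VOICE_ALLOC, EV_CONTROL, EV_VELOCITY };
    enum { CTRL_A, CTRL_B, CTRL_D };

    /* Stand-in for a real event queue. */
    static void send_event(unsigned t, int op, int vvid,
                           int ctrl, float value)
    {
        printf("t=%u op=%d vvid=%d ctrl=%d v=%g\n",
               t, op, vvid, ctrl, value);
    }

    int main(void)
    {
        int vvid = 0;
        /* Declare the context first... */
        send_event(0, EV_VOICE_ALLOC, vvid, 0, 0.0f);
        /* ...then ALL init-latched controls... */
        send_event(0, EV_CONTROL, vvid, CTRL_A, 0.3f);
        send_event(0, EV_CONTROL, vvid, CTRL_B, 0.7f);
        send_event(0, EV_CONTROL, vvid, CTRL_D, 0.5f);
        /* ...and the 'magic trigger' last, so nothing arrives
         * after init is over. */
        send_event(0, EV_VELOCITY, vvid, 0, 1.0f);
        return 0;
    }
)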
> But the activation of the voice has to be consistent for all
> instruments, or the host can't get it right.
Yes, it has to be triggered by a standardized control, so hosts
and/or users will know how to hook synths up with sequencers,
controllers and other senders.
> If that means that the white noise generator with no per-voice
> controls has to receive VOICE_ALLOC(vvid) and VOICE_ON(vvid), then
> that is OK.
If it has no voice controls, there will be no VVIDs. You can still
allocate and use one if you don't want to special case this, though.
Sending voice control events to channel control inputs is safe, since
the receiver will just ignore the 'vvid' field of events.
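(A sketch of why that's safe - the struct is hypothetical, but the
point is just that a channel control input never reads the vvid
field:

    typedef struct event
    {
        unsigned when;   /* timestamp              */
        int      vvid;   /* voice ID; ignored here */
        float    value;
    } event;

    /* Channel (per-instrument) control input: ev->vvid is simply
     * never read, so voice events are harmless. */
    static void channel_gate_input(float *gate, const event *ev)
    {
        *gate = ev->value;
    }
)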
Anyway, assuming this plugin has a gate control, that could be
implemented as a NOTE control. That would make it more obvious how to
make a sequencer gate the plugin using "note data". If you just
insert a "cable" between the sender and the plugin, you'd get this
setup automatically, since the host sees NOTE on both ends, and
connects them.
> > physically when the sender has reassigned the VVID and the synth
> > has killed the voice. Thus, no need for a "VOICE_END" or similar
> > event either.
> The host still has to be able to end a voice, without starting a new
> one.
Why? What does "end a voice" actually mean?
From the sender POV:

    "I'm done with this context, and won't send any more events
    referring to its VVID."

From the synth POV:

    "The voice assigned to this VVID is now silent and passive, just
    waiting for further instructions. If no events arrive, nothing
    will happen. If the voice is stolen, the VVID is detached, and
    voice allocation will be reevaluated, should any further events
    for this VVID arrive."
Maybe it would be "interesting" for synths to find out when a sender
really intends never to use a context again, but I don't see why.
It's like telling a MIDI synth that you've stopped using a certain
MIDI pitch for now. Who cares? You can just conclude that nothing is
playing on that pitch at the moment, and think no further.
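(For illustration, the synth side of that "detach on steal" rule
could be as simple as this - the names and the round-robin policy are
invented:

    #define VVID_DETACHED  (-1)

    /* Stand-in for the synth's real voice allocation policy. */
    static int allocate_voice(void)
    {
        static int next = 0;
        return next++ % 16;  /* round-robin over 16 voices */
    }

    /* Map an incoming event's VVID to a physical voice. If the
     * previous voice was stolen, the entry was marked detached,
     * so allocation is simply reevaluated here. */
    static int vvid_to_voice(int *vvid_table, int vvid)
    {
        if(vvid_table[vvid] == VVID_DETACHED)
            vvid_table[vvid] = allocate_voice();
        return vvid_table[vvid];
    }
)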
> > For a continuous velocity instrument, this is obvious; default
> > velocity has to be 0, or you'll have a "random" note started as
> > soon as a voice is allocated. Thus you *have* to change the
> > velocity control to start a note.
> >
> > For a "latched velocity" instrument (MIDI style), it's exactly the
> > same thing. There will be a NOTE control, and it has to be "off"
> > by default. Just set it to "on" to start a note with default
> > velocity.
> That's fine, but can't it be consistent? The continuous velocity
> instrument does not suffer because it has to accept a VOICE_ON (what
> you called NOTE here). Does it map perfectly? No. Does it map well
> enough? Yes - and it has the benefit of being consistent, and easy
> to explain.
Fine, I have no problems with that. I just don't see why it should be
assumed to be more special than it really is. NOTE/VOICE_ON/VOICE_OFF
is a gate control. What more do you need to say about it?
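(A sketch of NOTE as nothing more than a gate control; all the names
are made up, and the stubs stand in for the synth's real note logic:

    enum { CTRL_NOTE, CTRL_VELOCITY };

    typedef struct nvoice
    {
        float velocity;  /* latched when the gate goes high */
        int   gate;      /* "off" (0) by default            */
    } nvoice;

    static void note_start(float velocity)
    { (void)velocity; /* begin envelope etc. */ }

    static void note_release(void)
    { /* enter release phase */ }

    static void nvoice_control(nvoice *v, int ctrl, float value)
    {
        switch(ctrl)
        {
          case CTRL_VELOCITY:
            v->velocity = value;
            break;
          case CTRL_NOTE:
            if(!v->gate && value > 0.0f)
                note_start(v->velocity);
            else if(v->gate && value <= 0.0f)
                note_release();
            v->gate = (value > 0.0f);
            break;
        }
    }
)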
> Realistically MOST synths will have at least one init-time control
> (velocity). As we evolve, this can go away and just be a normal
> control. But until then we need a consistent way to handle
> init-latched controls. Because they ARE special. They all need to be
> sent before the voice is activated.
Yes - just like you have to send MIDI Program Change *before* playing
notes. It's not really a continuous control, but MIDI sequencers
don't special case it, since it's obvious enough that the user should
say what sound he/she wants *before* starting to play.
> > BTW, it might be a good idea to always provide NOTE output from
> > senders, even though continuous velocity and similar synths won't
> > care about it. Then again, continuous velocity data doesn't work
> > very well as velocity data for note based synths, so it's probably
> > not much use...
> We should - I agree. I just wrote that diatribe without reading
> ahead :)
>
> Yes. So every voice creation involves a VOICE_ALLOC and a VOICE_ON.
> Once VOICE_ON is received, the synth may make sound for the voice.
> It may be silent, also, but the voice IS active.
Yes. (If there's no "active" voice of some sort, there's nothing to
track the control changes.)
[...]
> > Fiddle with the NOTE control, maybe? (Which would default to
> > "off", of course.)
> YES! The same as a VOICE_ON, or send a 1 to the VOICE control.
> Describe it how you will, it tastes like VOICE_ON. Remember before
> when I said I would concede this point? I lied! I might concede
> VVIDs and give up on plugin-allocated VIDs, but this seems more and
> more right, and you yourself talked me back into it.
Well, then we can probably conclude that most of this is a matter of
terminology confusion. :-)
[...]
> > VELOCITY vs DAMPING, or something. (Makes sense for continuous
> > velocity instruments as well!)
> I'll think about this scheme. As a simple alternative - mirror the
> VOICE_ON? VOICE_RELEASE, CONTROL SET, VOICE_OFF.
Well, the VOICE_RELEASE would be implicit and *after* the VOICE_OFF,
I think.

Either way, what I'm trying to figure out is a way to have both
note-on latched and continuous control synths do sensible things even
if the sender doesn't know which type it's dealing with.

If this can't be done, we have *serious* trouble; the whole idea of
playing back sequenced data goes down the drain, basically.
> It actually is starting to taste more like VOICE_PAUSE, now. Tell
> the synth not to react to changes until unpause.
Only the synth actually *does* react - it tracks the control changes.
(Or it would have no values to latch when a note is to start.)
> I don't like that. Perhaps release velocity is a different control.
Yes, I think so.
> But let's not make velocity special again. If VELOCITY means
> something special to the end of a note, maybe there are others.
Sure. (There are many ways to stop a cymbal...)

The only control that has to be special is NOTE/GATE/whatever (I kind
of like GATE, BTW) - and I'm not even sure it's all that special.
It's not equivalent to VELOCITY, but "abusing" it as that for
continuous velocity synths makes them playable with data recorded for
note-on latched synths. Not very nice, though, and I'm thinking it
might be better to solve this with hints. (GATE can still be a
continuous control, of course, but that's not the same thing as
"equivalent to VELOCITY".)
//David Olofson - Programmer, Composer, Open Source Advocate
.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---------------------------> http://olofson.net/audiality -'
   --- http://olofson.net --- http://www.reologica.se ---