[Lost touch with the list, so I'm trying to catch up here... I did notice
that gardena.net is gone - but I forgot that I was using
david(a)gardena.net for this list! *heh*]
Whoops! Welcome back!
If flags are standardized, it can. Int32: 0 = unused, +ve = plugin owned,
-ve = special meaning.
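That sign convention could be sketched like this (a toy illustration; the enum and function names here are made up for the example, not part of any actual API):

```c
#include <assert.h>

/* Hypothetical classification of the proposed Int32 flag convention:
 * 0 = unused, positive = plugin owned, negative = special meaning. */
typedef enum {
    FLAG_UNUSED,
    FLAG_PLUGIN_OWNED,
    FLAG_SPECIAL
} FlagClass;

static FlagClass classify_flag(int flag)
{
    if (flag == 0)
        return FLAG_UNUSED;
    return (flag > 0) ? FLAG_PLUGIN_OWNED : FLAG_SPECIAL;
}
```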
Sure. I just don't see why it would be useful, or why the VVID
subsystem should be turned into some kind of synth status API.
You originally suggested it. More on this later. Let's drop it for now :)
VVIDs can't end spontaneously. Only synth voices can, and VVIDs are only
temporary references to voices. A voice may detach itself from "its" VVID,
but the VVID is still owned by the sender, and it's still effectively bound
to the same context.
OK, we're having more term conflicts and some ideological conflicts - read
on.
OK, let me make it clearer. Again, same example: the host wants to send 7
parameters with the Note-on. It sends 3, then VELOCITY. But as soon as
VELOCITY is received, 'init-time' is over. This is bad.
Yes, it's event ordering messed up. This will never happen unless the
events are *created* out of order, or mixed up by some event
Imagine a simple host that has a dialog to edit 'init' params for a new
note. The host can't know what order to send init-latched events unless it
knows there is a safe 'go' that it can send. That is the VOICE_ON.
Why? So it can "automatically" reorder
events at some point?
It may not have any clue as to what events come first/last.
The easiest way is to just make one event the "trigger", but I'm not sure
it's the right thing to do. What if you have more than one control of this
sort, and the "trigger" is actually a product of both? Maybe just assume
that synths will use the standardized
The trigger is a virtual control which really just says whether the voice is
on or not. You set up all your init-latched controls in the init window,
THEN you set the voice on.
It is conceptually simple, similar to what people know and it fits well
enough. And I can't find any problems with it technically.
And the NOTE/VOICE starter is a voice-control, so any Instrument MUST have
that.
This is very "anti modular synth". NOTE/VOICE/GATE is a control type
hint. I see no reason to imply that it can only be used for a certain
kind of controls, since it's really just a "name" used by users
and/or hosts to match ins and outs.
This is not at all what I see as intuitive. VOICE is a separate control
used ONLY for voice control. Instruments have it. Effects do not.
About VVID management:
Since mono synths won't need VVIDs, host shouldn't have to
allocate any for them. (That would be a waste of resources.)
The last case also indicates a handy shortcut you can take
if you *know* that VVIDs won't be considered. Thus, I'd
suggest that plugins can indicate that they won't use VVIDs.
This is a possible optimization. I'll add it to my notes. It may really
not be worth it at all.
Why? What does "end a voice" actually mean?
It means that the host wants this voice to stop. If there is a release
phase, go to it. If not, end this voice (in a plugin-specific way).
Without it, how do you enter the release phase?
Right, then we agree on that as well. What I mean is just that "end a
voice" doesn't *explicitly* kill the voice instantly.
Ok, we agree on this.
What might be confusing things is that I don't consider "voice" and
"context" equivalent - and VVIDs refer to *contexts* rather than voices.
There will generally be either zero or one voice connected to a context,
but the same context may be used to play several notes.
I disagree - a VVID refers to a voice at some point in time. A context can
not be re-used. Once a voice is stopped and the release has ended, that
VVID has expired.
No. It means I want the sound on this voice to stop. It implies the above,
too. After a VOICE_OFF, no more events will be sent for this VVID.
That just won't work. You don't want continuous pitch and stuff to work
except when the note is on?
More or less, yes! If you want sound, you should tell the synth that by
allocating a VVID for it, and turning it on.
Another example that demonstrates why this distinction matters would be a
polyphonic synth with automatic glissando. (Something you can
Starting a new note on a VVID when a previous note is still in the
release phase would cause a glissando, while if the VVID has no
playing voice, one would be activated and started as needed to play a
new note. The sender can't reliably know which action will be taken
for each new note, so it really *has* to be left to the synth to
decide. And for this, the lifetime of VVIDs/contexts need to span
zero or more notes, with no upper limit.
I don't follow you at all - a new note is a new note. If your instrument
has a glissando control, use it. It does the right thing. Each new note
gets a new VVID.
Reusing a VVID seems insane to me. It just doesn't jive with anything I can
comprehend as approaching reality.
The reason that VVID_ALLOC is needed at voice_start is because the host
might never have sent a VOICE_OFF. Or maybe we can make it simpler:
If the host/sender doesn't sent VOICE_OFF when needed, it's broken,
just like a MIDI sequencer that forgets to stop playing notes when
you hit the stop button.
A stop button is different from not sending a note-off. Stop should
automatically send a note-off to any VVIDs. Or perhaps more accurately, it
should send a stop-all-sound event.
Host turns the NOTE/VOICE on.
It can either turn the NOTE/VOICE off or DETACH it. Here your detach name
makes more sense.
VOICE_OFF and DETACH *have* to be separate concepts. (See above.)
Absolutely. First let me say that we have two main issues in discussion.
1) VVID management and voice control
2) init and release latched events
We're discussing #1, #2 comes later. I just want to say that out loud, once
more :)
I'm proposing a very simple model for VVID and voice management. One that I
think is easy to understand, explain, document, and implement. It jives
with reality and with what users of soft-studios expect.
Every active voice is represented by one VVID and vice-versa.
There are two lifecycles for a voice.
1) The piano-rolled note:
a) host sends a VOICE(vvid, VOICE_ON) event
- synth allocates a voice (real or virtual) or fails
- synth begins processing the voice
b) time elapses as per the sequencer
- host may send multiple voice events for 'vvid'
c) host sends a VOICE(vvid, VOICE_OFF)
- synth puts voice in release phase and detaches from 'vvid'
- host will not send any more events for 'vvid'
- host may now re-use 'vvid'
2) The step-sequenced note:
a) host sends a VOICE(vvid, VOICE_ON) event
- synth allocates a voice (real or virtual) or fails
- synth begins processing the voice
b) host sends a VOICE(vvid, VOICE_DETACH) event
- synth handles the voice as normal, but detaches from 'vvid'
- host will not send any more events for 'vvid'
- host may now re-use 'vvid'
These are very straightforward and handle all cases I can think up. The
actual voice allocation is left to the synth. A mono-synth will always use
the same physical voice. A poly-synth will normally allocate a voice from
its pool. A poly-synth under voice pressure can either steal a real voice
for 'vvid' (and swap the old VVID out to a virtual voice), allocate a
virtual voice for 'vvid', or fail altogether. A sampler which is playing
short notes (my favorite hihat example) can EOL a voice when the sample is
done playing (and ignore further events for the VVID).
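The two lifecycles above could be sketched roughly like this (a toy state machine just to make the transitions concrete; all type, constant, and function names here are illustrative, not from any real API):

```c
#include <assert.h>

/* Hypothetical sketch of the two voice lifecycles:
 * VOICE_ON starts a voice (real or virtual) and attaches it to a VVID;
 * VOICE_OFF enters the release phase and detaches from the VVID;
 * VOICE_DETACH leaves the voice playing but detaches from the VVID.
 * After detaching, the host may reuse the VVID. */
enum { MAX_VVIDS = 32 };

typedef enum { VOICE_ON, VOICE_OFF, VOICE_DETACH } VoiceEvent;
typedef enum { VS_FREE, VS_PLAYING, VS_RELEASING } VoiceState;

typedef struct {
    VoiceState state;
    int attached;   /* 1 while the host may still send events for this vvid */
} Voice;

static Voice voices[MAX_VVIDS];

static void handle_voice_event(int vvid, VoiceEvent ev)
{
    Voice *v = &voices[vvid];
    switch (ev) {
    case VOICE_ON:      /* allocate a voice (real or virtual) and start it */
        v->state = VS_PLAYING;
        v->attached = 1;
        break;
    case VOICE_OFF:     /* enter release phase; host may now reuse 'vvid' */
        v->state = VS_RELEASING;
        v->attached = 0;
        break;
    case VOICE_DETACH:  /* keep playing as normal, but detach from 'vvid' */
        v->attached = 0;
        break;
    }
}
```

Lifecycle 1 (piano-rolled note) is ON ... OFF; lifecycle 2 (step-sequenced note) is ON then an immediate DETACH.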
It's cute. I like it a lot.
Now on to subject #2 - init and release-latched events.
-- INIT:
send SET(new_vvid, ctrl) /* implicitly creates a voice */
send VOICE_ON(new_vvid) /* start the vvid */
-- RELEASE:
send SET(new_vvid, ctrl) /* send with time X */
send VOICE_OFF(vvid) /* also time X - plug 'knows' it was for release */
I see why you don't like this. You're forgetting that it's the
*value* that is the "initializer" for the VOICE_OFF action; not the
SET event that brings it. Of course the plugin "knows" - the last set
put a new value into the control that the VOICE_OFF action code looks
at! :-)
A synth is a state machine, and the events are just what provides it
with data and - directly or indirectly - triggers state changes.
And I am advocating that voice on/off state changes be EXPLICITLY handled
via a VOICE control, as well as init and release-latched controls be
EXPLICITLY handled.
Yeah, it makes for some extra events. I think that the benefit of clarity
in the model is worth it. We can also optimize the extra events away in the
case they are not needed.
As to 1, that's what we're really talking about here. When do you start and
stop tracking voice controls?
And how do you identify control events that are intended to be init-latched
from continuous events?
Simple: When you get the first control for a "new" VVID, start tracking.
When you know there will be no more data for that VVID, or that you just
don't care anymore (voice and/or context stolen), stop tracking.
Exactly what I want, but I want it to be more explicit:
* Context allocation:
// Prepare the synth to receive events for 'my_vvid'
send(ALLOC_VVID, my_vvid)
// (Control tracking starts here.)
yes - only I am calling it voice allocation - the host is allocating a voice
in the synth (real or not) and will eventually turn it on. I'd bet 99.999%
of the time the ALLOC_VVID and VOICE_ON are on the same timestamp.
* Starting a note:
// Set up any latched controls here
send(CONTROL, <whatever>, my_vvid, <value>)
...
// (Synth updates control values.)
// Start the note!
send(CONTROL, VOICE, my_vvid, 1)
// (Synth latches "on" controls and (re)starts
// voice. If control tracking is not done by
// real voices, this is when a real voice would
// be allocated.)
This jives EXACTLY with what I have been saying, though I characterized it
as:
VOICE_INIT(vvid) -> synth gets a virtual voice, start init-latch window
VOICE_SETs -> init-latched events
VOICE_ON(vvid) -> synth (optionally) makes it a real voice (end init-window)
* Stopping a note:
send(CONTROL, <whatever>, my_vvid, <value>)
...
// (Synth updates control values.)
// Stop the note!
send(CONTROL, VOICE, my_vvid, 0)
// (Synth latches "off" controls and enters the
// release phase.)
Except how does the synth know that the controls you send are meant to be
release-latched?
My answer is to parallel the init-window.
VOICE_DEINIT(vvid) -> synth gets a virtual voice, start release-latch window
VOICE_SETs -> release-latched events
VOICE_OFF(vvid) -> synth sends voice to release phase (end release-window)
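The init and release latch windows I'm proposing could be sketched like this (a toy model of the window logic only; every name here is illustrative, not part of any actual API):

```c
#include <assert.h>

/* Hypothetical latch-window model:
 * VOICE_INIT opens a window in which SETs are init-latched; VOICE_ON
 * closes it and starts the voice. VOICE_DEINIT/VOICE_OFF do the same
 * for release-latched SETs. Outside a window, a SET is just a normal
 * continuous control change. */
typedef enum { WIN_NONE, WIN_INIT, WIN_RELEASE } LatchWindow;

typedef struct {
    LatchWindow window;
    int init_latched;      /* count of SETs latched at voice start */
    int release_latched;   /* count of SETs latched at voice release */
} VoiceCtx;

static void voice_init(VoiceCtx *c)   { c->window = WIN_INIT; }
static void voice_on(VoiceCtx *c)     { c->window = WIN_NONE; }
static void voice_deinit(VoiceCtx *c) { c->window = WIN_RELEASE; }
static void voice_off(VoiceCtx *c)    { c->window = WIN_NONE; }

static void voice_set(VoiceCtx *c)
{
    if (c->window == WIN_INIT)
        c->init_latched++;
    else if (c->window == WIN_RELEASE)
        c->release_latched++;
    /* otherwise: a plain continuous control change */
}
```

The point is that the synth never has to guess: whether a SET is init-latched, release-latched, or continuous is determined entirely by which window (if any) is open when it arrives.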
* Context deallocation:
// Tell the synth we won't talk any more about 'my_vvid'
send(DETACH_VVID, my_vvid)
// (Control tracking stops here.)
THIS is what I disagree with. I think VOICE_OFF implicitly does this. What
does it mean to send controls after a voice is stopped? The ONLY things
I can see this for are mono-synths (who can purely IGNORE vvid or flag
themselves as non-VVID) and MIDI where you want one VVID for each note (so
send a VOICE_OFF before you alloc the VVID again).
This still contains a logic flaw, though. Continuous control synths won't
necessarily trigger on the VOICE control changes. Does it make sense to
assume that they'll latch latched controls at VOICE control changes anyway?
It seems illogical to me, but I can see why it might seem to make sense in
some cases...
It makes *enough* sense that the consistency pays off, IM(ns)HO.
Welcome back! As I indicated, I am moving this week, so my response times
may be laggy. I am also trying to shape up some (admittedly SIMPLE) docs on
the few subjects we've reached agreement on so far.
Tim