The problem is that this requires special-case event
processing in
synths. You'll need an inner event-decoding loop just to get the
parameters before you can go on with voice instantiation.
You always process all events for timestamp X before you generate X's audio,
right?
1. Recv VOICE_ON, allocate a voice struct
2. Recv VELOCITY, set velocity in voice struct
3. No more events for timestamp X, next is timestamp Y
4. Process audio until stamp Y
4.a. Start playing the voice with given velocity
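The steps above can be sketched as a single helper; the types and names here (xap_event, voice, and so on) are hypothetical, not any real API:

```c
/* Sketch of timestamp-ordered processing: apply every event stamped
 * at 'now' to the voice before any of 'now's audio is rendered. */

typedef enum { EV_VOICE_ON, EV_VELOCITY } ev_type;

typedef struct {
    unsigned timestamp;
    ev_type  type;
    float    value;
} xap_event;

typedef struct {
    int   active;
    float velocity;
} voice;

/* Consume events from index i while their timestamp is <= now.
 * Returns the index of the first event with a later timestamp,
 * which is where the caller should render audio up to. */
unsigned apply_events_until(voice *v, const xap_event *ev, unsigned nev,
                            unsigned i, unsigned now)
{
    while (i < nev && ev[i].timestamp <= now) {
        switch (ev[i].type) {
        case EV_VOICE_ON: v->active   = 1;            break; /* step 1 */
        case EV_VELOCITY: v->velocity = ev[i].value;  break; /* step 2 */
        }
        ++i;
    }
    return i; /* steps 3-4: no more events for 'now'; go render audio */
}
```

With this ordering, by the time audio for timestamp X is generated, both the VOICE_ON and the VELOCITY event have already landed in the voice struct.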
The alternative indeed means that you have to find
somewhere to store
the parameters until the "VOICE_ON" arrives - but that doesn't screw
up the API, and in fact, it's not an issue at all until you get into
voice stealing. (If you just grab a voice when you get a "new" VVID,
parameters will work like any controls - no extra work or special
cases.)
All this is much easier if synths pre-allocate their internal voice structs.
Host: I know SYNTH has voices 1, 2, 3 active. So I send params for voice 4.
How does VST handle this?
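The pre-allocation idea might look like the sketch below, where a VVID is treated as a direct index into a fixed pool, so the host can address "voice 4" before it is even active. The pool size and names are invented for illustration:

```c
/* Sketch of pre-allocated internal voice structs: the synth owns a
 * fixed pool created up front, so addressing a voice never allocates. */
#define MAX_VOICES 8

typedef struct {
    int   active;
    float velocity;
} voice;

static voice pool[MAX_VOICES]; /* allocated once, before processing */

/* Map a VVID straight onto a slot; out-of-range VVIDs get NULL. */
voice *voice_for_vvid(int vvid)
{
    if (vvid < 0 || vvid >= MAX_VOICES)
        return 0;
    return &pool[vvid];
}
```

Under this scheme, "send params for voice 4" is just a lookup and a field write, whether or not voice 4 is currently sounding.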
Send 0-VVID to
the VOICE control with timestamp Y for note-off
I'm not quite following the 0-VVID thing here...
0-VVID is just so you can have one control for voice on and off. Positive
means ON, negative means OFF. abs(event->vvid) is the VVID.
So you can tell the synth that you have nothing further to
say to this Voice,
by implying that NOTE_OFF means "I will no longer use this VVID to
address whatever voice it's connected to now." Is that the idea?
The synth doesn't care if you have nothing further to say. Either it will
hold the note forever (synth, violin, woodwind, etc.) or it will end
eventually on its own (sample, drum, gong). You do, however, want to be
able to shut off continuous notes and to terminate self-ending voices
(a hi-hat being hit again, or a crash cymbal being grabbed).
If so, I would suggest that a special
"DETACH_VVID" control/event is
used for this. There's no distinct relation between a note being "on"
and whether or not Voice Controls can be used. At least with MIDI
synths, it's really rather common that you want Channel controls to
affect *all* voices, even after NoteOff, and I see no reason why our
Channel Controls should be different.
I don't get what this has to do with things - of course channel controls
affect everything. That is their nature. You're right that a note can end,
and the host might still send events for it. In that case, the plugin
should just drop them. Imagine I piano-roll a 4-bar note with 50 different
velocity changes for a 1/2-second hi-hat sample. The sample ends
spontaneously (the host did not call NOTE_OFF). The sequencer still has
events, and so keeps sending them. No problem, plugin ignores them.
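The "plugin just ignores them" behaviour is trivial to express; the names below are hypothetical:

```c
/* Sketch: a velocity event addressed to a voice that already ended on
 * its own (like the hi-hat sample above) is silently dropped. */
typedef struct {
    int   playing;  /* cleared by the synth when the sample runs out */
    float velocity;
} voice;

/* Returns 1 if the event was applied, 0 if it was dropped. */
int set_velocity(voice *v, float vel)
{
    if (!v->playing)
        return 0;       /* voice ended spontaneously; ignore the event */
    v->velocity = vel;
    return 1;
}
```

The sequencer keeps sending its 50 velocity changes, and after the sample ends they all fall into the early-return path, with no error and no special host-side bookkeeping.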
That doesn't really change anything. Whether
you're using
VOICE_ON/VOICE_OFF or VOICE(1)/VOICE(0), the resulting actions and
timing are identical.
Right - Aesthetics. I prefer VOICE_ON/VOICE_OFF but a VOICE control fits
our model more generically.