1. Recv VOICE_ON, allocate a voice struct
2. Recv VELOCITY, set velocity in voice struct
3. No more events for timestamp X, next is timestamp Y
4. Process audio until stamp Y
4.a. Start playing the voice with given velocity
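
In rough C terms, that ordering looks something like this (all the names
here are made up for illustration, not taken from any real header):

enum { VOICE_ON, VELOCITY };

typedef struct { unsigned when; int type; int vvid; float value; } Event;
typedef struct { int active; int vvid; float velocity; } Voice;

static Voice voices[32];

static void handle_event(const Event *ev)
{
    switch (ev->type) {
    case VOICE_ON:
        /* Step 1: all we have is the VVID; velocity is NOT known yet,
           so a velocity-sensitive voice allocator has nothing to go on. */
        for (int i = 0; i < 32; i++)
            if (!voices[i].active) {
                voices[i].active = 1;
                voices[i].vvid = ev->vvid;
                break;
            }
        break;
    case VELOCITY:
        /* Step 2: velocity arrives as a separate event for the same VVID. */
        for (int i = 0; i < 32; i++)
            if (voices[i].active && voices[i].vvid == ev->vvid)
                voices[i].velocity = ev->value;
        break;
    }
    /* Steps 3/4: once there are no more events for timestamp X, audio is
       processed up to timestamp Y, and only then does the voice start
       playing, using the velocity set in step 2. */
}
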
Problem is step 1. If the voice allocator looks at velocity, it won't
work, since that information is not available when you do the
allocation. Likewise for setting up waveforms with velocity maps and
the like.
When are you supposed to do that sort of stuff? VOICE_ON is what
triggers it in a normal synth, but with this scheme, you have to wait
for some vaguely defined "all parameters available" point.
So maybe VOICE creation needs to be a three-step process.
* Allocate voice
* Set initial voice-controls
* Voice on
This way the instrument is alerted to the fact that a new voice is being
created without deciding which entry in the velocity map to use. This is
essentially saying that initial parameters are 'special', and they are in
many ways (I'm sure velocity maps are just one case).
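
From the sender's side the three steps would look roughly like this
(send_event() and the event names are placeholders I'm inventing for the
sketch, not the actual API):

#include <stdio.h>

enum { VOICE_ALLOC, VELOCITY, VOICE_ON };

/* stand-in for queuing a timestamped event to the synth */
static void send_event(unsigned when, int type, int vvid, float value)
{
    printf("t=%u type=%d vvid=%d value=%g\n", when, type, vvid, value);
}

static void start_note(unsigned when, int vvid, float vel)
{
    send_event(when, VOICE_ALLOC, vvid, 0.0f); /* 1. allocate the voice    */
    send_event(when, VELOCITY,    vvid, vel);  /* 2. set initial controls  */
    send_event(when, VOICE_ON,    vvid, 1.0f); /* 3. voice on; all initial
                                                  params are in place, so
                                                  the synth can pick its
                                                  velocity-map entry here. */
}
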
Or we can make the rule that you do not choose an entry in a velocity map
until you start PROCESSING a voice, not when you create it. VOICE_ON is a
placeholder. The plugin should see that a voice is on that has no
velocity-map entry and deal with it when processing starts. Maybe not.
Needs thinking
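
If we did go that way, the synth side would look roughly like this (the
Voice fields and pick_map_entry() are invented for the sketch):

#include <stddef.h>

typedef struct {
    int          on;        /* VOICE_ON has been seen                 */
    float        velocity;  /* most recent VELOCITY value             */
    const float *wave;      /* chosen velocity-map entry, NULL = none */
} Voice;

static const float soft_wave[64], loud_wave[64];  /* placeholder waveforms */

static const float *pick_map_entry(float velocity)
{
    return velocity < 0.5f ? soft_wave : loud_wave;
}

static void process_voice(Voice *v, float *out, unsigned frames)
{
    if (v->on && v->wave == NULL) {
        /* First block processed for this voice: only now commit to a
           velocity-map entry.  Until here, VOICE_ON was just a placeholder. */
        v->wave = pick_map_entry(v->velocity);
    }
    /* ... render 'frames' samples from v->wave into 'out' ... */
    (void)out; (void)frames;
}
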
Host: I know SYNTH has voices 1, 2, 3 active. So I send params for
voice 4.
Actually, it doesn't know anything about that. The physical
VVID->voice mapping is a synth implementation thing, and is entirely
dependent on how the synth manages voices. s/voice/VVID/, and you get
closer to what VVIDs are about.
But it COULD. This could become more exported. The plugin tells the host
what its max polyphony is (a POLYPHONY control?). The host manages voices
0 to (MAX_POLY-1) for each synth.
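
If that were exported, host-side voice selection could be as dumb as this
(assuming a 'busy' table the host keeps from the on/off events it has sent,
and max_poly coming from the hypothetical POLYPHONY control):

/* Pick a free voice index in 0..max_poly-1; -1 if the synth is full. */
static int pick_free_voice(const int *busy, unsigned max_poly)
{
    for (unsigned i = 0; i < max_poly; i++)
        if (!busy[i])
            return (int)i;
    return -1;
}
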
0-VVID is just so you can have one control for voice on and off.
Positive means ON, negative means OFF. abs(event->vvid) is the VVID.
Ok. Why not just use the "value" field instead, like normal Voice
Controls? :-)
Because VOICE is actually a channel control? I dunno, being thick,
probably. :) VOICE(vid, 1) and VOICE(vid, 0) are the notation I will use
to indicate that a voice 'vid' has been turned on or off. :)
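
Either encoding is trivial to decode. A quick sketch of both, with made-up
names (not claiming this is what the header actually looks like):

#include <stdlib.h>   /* abs() */

typedef struct { int vvid; float value; } VoiceEvent;

/* (a) the sign trick: one VOICE control, sign = on/off, abs() = VVID */
static void handle_voice_signed(int signed_vvid)
{
    int vvid = abs(signed_vvid);
    int on   = signed_vvid > 0;
    /* ... turn voice 'vvid' on or off here ... */
    (void)vvid; (void)on;
}

/* (b) value field, like a normal Voice Control: VOICE(vid, 1) / VOICE(vid, 0) */
static void handle_voice_value(const VoiceEvent *ev)
{
    int on = ev->value != 0.0f;
    /* ... turn voice ev->vvid on or off here ... */
    (void)on;
}
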
Not really - but whoever *sends* to the synth will care when running
out of VVIDs. (Unless it's a MIDI-based sequencer, VVID management
isn't as easy as "one VVID per MIDI pitch value".)
Ahh, this does get interesting.
[...] way you can know when it's safe to reuse a VVID. (Release
envelopes...) Polling the synth for voice status, or having synths
return voice status events doesn't seem very nice to me. The very [...]
It seems to me that voice allocation and de-allocation HAS to be a two-way
dialogue, or you have use-once VVIDs. Maybe this is OK - 2^32 VVIDs per
synth. The host only really needs to store a small number - the active
list. Obviously it can't be a linear index EVER, but it makes a fine hash
key or index modulo N.
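
In other words, nobody needs anything like a 2^32-entry table; the active
VVIDs can just hash into a small pool, something like this sketch:

#include <stddef.h>

#define POOL_SIZE 32   /* a few times the expected number of active voices */

typedef struct { unsigned vvid; int in_use; /* ...voice state... */ } Slot;

static Slot pool[POOL_SIZE];

static Slot *find_slot(unsigned vvid)
{
    unsigned start = vvid % POOL_SIZE;            /* index modulo N        */
    for (unsigned i = 0; i < POOL_SIZE; i++) {    /* linear probe on clash */
        Slot *s = &pool[(start + i) % POOL_SIZE];
        if (s->in_use && s->vvid == vvid)
            return s;
    }
    return NULL;   /* this VVID is not (or no longer) mapped to a voice */
}
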
1) Voice Control. (Keep the VVID for as long as you need it!)
2) Channel Control. ("Kill All Notes" type of controls.)
My header already has a 'stop all sound' event. :)
Although the VOICE control might actually be the VELOCITY control,
where anything non-0 means "on"... A specific, non-optional VOICE
control doesn't make sense for all types of instruments, but there
may be implementational reasons to have it anyway; not sure yet.
VELOCITY can be continuous - as you pointed out with strings and such. The
creation of a voice must be separate in the API, I think.
This all needs more thinking. I haven't had too much time to think on these
hard subjects the past two weeks, and I might not for a few more. I'll try
to usurp work-time when I can :)
This all leads me back to my original thoughts, that the voice-management
MUST be a two-way dialog. I don't like the idea of use-once VVIDs because
eventually SOMEONE will hit the limit. I hate limits :)
more thought needed
Tim