Two problems solved! (Well, almost... We still need temporary space
for the "parameters", unless voice allocation is non-conditional.)
I think if you get an allocate voice event, you MUST get a voice on event.
That doesn't really work for normal polyphonic
instruments, unless
the host *fully* understands the synth's voice allocation rules,
release envelopes and whatnot. The polyphonic synth is effectively
reduced to a tracker style synth with N monophonic channels.
Either you need a two-way dialogue, or you have use-once VVIDs. Maybe
this is OK - 2^32 VVIDs per synth. The host only really needs to store
a small number - the active list.
Why would the host (or rather, sender) care about the VVIDs that are
*not* active? (Remember; "Kill All Notes" is a Channel Control, and
if you want per-voice note killing, you simply keep your VVIDs until
you're done with them - as always.)
It wouldn't - if the host has a limit of 128-voice polyphony, it keeps a
hash or array of 128 VVIDs. There is a per-instrument (or per-channel)
next_vvid variable. Whenever the host wants a new voice on an instrument,
it finds an empty slot in the VVID table (or the oldest VVID if full) and
sets it to next_vvid++. That is then the VVID for the new voice. If we had
to steal one, it's because the user went too far. In that case,
VOICE(oldest_vvid, 0) is probably acceptable.
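The host-side allocator described above could be sketched like this (all
names - vvid_table, vvid_alloc, VVID_FREE - are illustrative, not from any
real API):

```c
#include <stdint.h>
#include <string.h>

#define MAX_POLY  128
#define VVID_FREE 0xFFFFFFFFu   /* sentinel: nothing was stolen */

typedef struct {
    uint32_t vvid[MAX_POLY];    /* active VVIDs, oldest first */
    int      count;
    uint32_t next_vvid;         /* monotonically increasing counter */
} vvid_table;

void vvid_init(vvid_table *t)
{
    t->count = 0;
    t->next_vvid = 0;
}

/* Allocate a VVID for a new voice. If the table is full, steal the
   oldest entry (the host would also send VOICE(oldest, 0) here) and
   report it via *stolen. */
uint32_t vvid_alloc(vvid_table *t, uint32_t *stolen)
{
    *stolen = VVID_FREE;
    if (t->count == MAX_POLY) {
        *stolen = t->vvid[0];                 /* oldest VVID */
        memmove(t->vvid, t->vvid + 1,
                (MAX_POLY - 1) * sizeof t->vvid[0]);
        t->count--;
    }
    uint32_t v = t->next_vvid++;              /* wraps every 2^32 notes */
    t->vvid[t->count++] = v;
    return v;
}
```

Note that stealing only happens when the user exceeds the host's polyphony
limit, which is the "user went too far" case above.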
The plugin sees a stream of new VVIDs (maybe wrapping every 2^32 notes -
probably OK). It has its own internal rules about voice allocation, and
probably has less polyphony than 128 (or whatever the host sets). It can do
smart voice stealing (though the LRU algorithm the host uses is probably
good enough). It hashes VVIDs from the 0-2^32 namespace onto its real
voices internally. You only re-use VVIDs every 2^32 notes.
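The plugin side - mapping 32-bit VVIDs onto a small set of real voices -
might look something like this simple open-addressing hash (again, all
names are hypothetical, and a real plugin would use its own stealing
policy):

```c
#include <stdint.h>

#define REAL_VOICES 32

typedef struct {
    uint32_t vvid;      /* host-side VVID bound to this voice */
    int      active;
} voice;

static voice voices[REAL_VOICES];

/* Find the real voice bound to 'vvid', or -1 if it was stolen or never
   allocated. Linear probe from a hash of the VVID. */
int voice_lookup(uint32_t vvid)
{
    unsigned start = vvid % REAL_VOICES;
    for (unsigned i = 0; i < REAL_VOICES; i++) {
        unsigned slot = (start + i) % REAL_VOICES;
        if (voices[slot].active && voices[slot].vvid == vvid)
            return (int)slot;
    }
    return -1;  /* unknown VVID: the event is simply dropped */
}

/* Bind a new VVID to a real voice; if everything is busy, steal one
   (here just the hash slot - a smarter plugin might pick the quietest
   or oldest voice instead). */
int voice_bind(uint32_t vvid)
{
    unsigned start = vvid % REAL_VOICES;
    for (unsigned i = 0; i < REAL_VOICES; i++) {
        unsigned slot = (start + i) % REAL_VOICES;
        if (!voices[slot].active) {
            voices[slot].active = 1;
            voices[slot].vvid = vvid;
            return (int)slot;
        }
    }
    voices[start].vvid = vvid;
    voices[start].active = 1;
    return (int)start;
}
```

Events referring to a stolen (and thus unknown) VVID come back as -1 and
can be ignored, which is exactly the "error handling" behavior discussed
further down.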
VELOCITY can
be continuous - as you pointed out with strings and
such. The creation of a voice must be separate in the API, I
think.
Why? It's up to the *instrument* to decide when the string (or
whatever) actually starts to vibrate, isn't it? (Could be VELOCITY >=
0.5, or whatever!) Then, what use is it for hosts/senders to try to
figure out where notes start and end?
And for a continuous velocity instrument, how do you make a new voice? And
why is velocity becoming special again?
I think voice-on/off is well understood and applies pretty well to
everything. I am all for inventing new concepts, but this will be more
confusing than useful, I think. Open to convincing, but dubious.
Of course - but I don't think it has to be a
two-way dialog for this
reason. And I don't think a two-way dialog can work very well in this
context anyway. Either you have to bypass the event system, or you
have to allow for quite substantial latency in the feedback direction.
I once had the idea that you send all your events in a given block with
negative voice-ids. The plugin responds with the proper per-plugin
voice-id.
Block start:
time X: voice(-1, ALLOC) /* a new voice is coming */
time X: velocity(-1, 100) /* set init controls */
time X: voice(-1, ON) /* start the voice */
time X: (plugin sends host 'voice -1 = 16')
time Y: voice(-2, ALLOC)
time Y: velocity(-2, 66)
time Y: voice(-2, ON)
time Y: (plugin sends host 'voice -2 = 17')
From then on the host uses the plugin-allocated voice-ids. We get a large
(all negative numbers) namespace for new notes per block. We get
plugin-specific voice-ids (no hashing/translating). The plugin handles
voice stealing in a plugin-specific way (ask for a voice when it's all
full, and it returns a voice-id you already have and internally sends
voice_off).
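The host's bookkeeping for this handshake is tiny: a per-block translation
table from negative ids to the plugin's real voice-ids. A sketch, with all
names hypothetical:

```c
#include <stdint.h>

#define MAX_NEW_VOICES 64   /* new notes per block */

typedef struct {
    int32_t real_id[MAX_NEW_VOICES];  /* indexed by -(neg_id) - 1 */
    int     used;
} block_vmap;

void vmap_reset(block_vmap *m)            /* call at each block start */
{
    m->used = 0;
}

int32_t vmap_next_neg_id(block_vmap *m)   /* host: id for a new note */
{
    return -(++m->used);                  /* -1, -2, -3, ... */
}

/* Host: record the plugin's reply, e.g. 'voice -1 = 16'. */
void vmap_bind(block_vmap *m, int32_t neg_id, int32_t real_id)
{
    m->real_id[-neg_id - 1] = real_id;
}

/* Host: translate an id before sending later events. Negative ids are
   only valid within the block that introduced them. */
int32_t vmap_translate(const block_vmap *m, int32_t id)
{
    return (id < 0) ? m->real_id[-id - 1] : id;
}
```

The catch is the feedback path: the plugin's replies have to reach the
host within the same block, which is exactly the latency/bypass concern
raised above.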
I found it ugly at first, but I like that it's all plugin-specific. Poke
holes in it?
Note that when a synth starts stealing voices, that's actually *error
handling* going on. If a soft synth with, say, 32 physical and 32
Not necessarily. I've played with soft-synths with no envelope control,
which I wanted to be mono. I set maxpoly to 1, and let it cut off tails
automatically. It isn't _necessarily_ an error.
Tim