On Tuesday 07 January 2003 04.24, Tim Hockin wrote:
Two problems solved! (Well, almost... Still need temporary space for the "parameters", unless voice allocation is non-conditional.)
I think if you get an allocate voice event, you MUST get a voice on
event.
Why? Isn't a "voice on event" actually just "something" that makes
the synth activate a voice?
[...]
Why would the host (or rather, sender) care about the VVIDs that are *not* active? (Remember; "Kill All Notes" is a Channel Control, and if you want per-voice note killing, you simply keep your VVIDs until you're done with them - as always.)
It wouldn't - if the host has a limit of 128 voice polyphony, it
keeps a hash or array of 128 VVIDs. There is a per-instrument (or
per-channel) next_vvid variable. Whenever the host wants a new voice
on an instrument, it finds an empty slot in the VVID table (or the
oldest VVID if full) and sets it to next_vvid++. That is then the
VVID for the new voice. If we had to steal one, it's because the
user went too far. In that case, VOICE(oldest_vvid, 0) is probably
acceptable.
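
(To make sure we're talking about the same scheme, a rough sketch of
that host-side table - every name below is invented; only the
free-slot/oldest-VVID logic is from the description above:)

/* Sketch of the host-side VVID table. send_voice_event() stands in
 * for the real event send; all names here are made up. */
#define MAX_POLY 128

extern void send_voice_event(unsigned vvid, int value);

typedef struct {
    unsigned vvid;     /* 32-bit VVID currently in this slot */
    int      in_use;   /* 0 = free slot */
    unsigned age;      /* for "oldest VVID" stealing */
} HostVoiceSlot;

static HostVoiceSlot vvid_table[MAX_POLY];
static unsigned next_vvid = 0;
static unsigned age_counter = 0;

/* Grab a VVID for a new note: use a free slot, or steal the oldest. */
unsigned host_new_vvid(void)
{
    int i, slot = 0;

    for (i = 0; i < MAX_POLY; ++i) {
        if (!vvid_table[i].in_use) {
            slot = i;
            break;
        }
        if (vvid_table[i].age < vvid_table[slot].age)
            slot = i;
    }
    if (vvid_table[slot].in_use)
        send_voice_event(vvid_table[slot].vvid, 0); /* VOICE(oldest_vvid, 0) */

    vvid_table[slot].vvid   = next_vvid++;   /* wraps every 2^32 notes */
    vvid_table[slot].in_use = 1;
    vvid_table[slot].age    = age_counter++;
    return vvid_table[slot].vvid;
}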
The plugin sees a stream of new VVIDs (maybe wrapping every 2^32
notes - probably OK). It has its own internal rules about voice
allocation, and probably has less polyphony than 128 (or whatever
the host sets). It can do smart voice stealing (though the LRU
algorithm the host uses is probably good enough). It hashes VVIDs
in the 0-2^32 namespace onto its real voices internally. You only
re-use VVIDs every 2^32 notes.
Ok, but I don't see the advantage of this, vs explicitly assigning
preallocated VVIDs to new voices. All I see is a rather significant
performance hit when looking up voices.
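
(To spell out the hit I mean: with wrapping 32 bit VVIDs, the synth
has to map each incoming VVID onto one of its far fewer physical
voices. A rough sketch - names and sizes below are made up, and only
the lookup side is shown:)

/* Mapping an arbitrary 32-bit VVID onto a physical voice via a small
 * hash table. All names and sizes are illustrative. */
#define SYNTH_VOICES 32
#define HASH_SIZE    64                  /* power of two, > SYNTH_VOICES */

typedef struct {
    unsigned vvid;
    int      active;
    /* ...oscillators, envelopes, etc... */
} SynthVoice;

static SynthVoice voices[SYNTH_VOICES];
static int hash_map[HASH_SIZE];          /* index into voices[], or -1 */

void synth_hash_init(void)
{
    int i;
    for (i = 0; i < HASH_SIZE; ++i)
        hash_map[i] = -1;
}

/* Find the physical voice playing 'vvid', or -1 if none. Linear
 * probing; the collision handling is the extra work compared to
 * using small, preallocated VVIDs as direct indices. */
int synth_find_voice(unsigned vvid)
{
    unsigned h = (vvid * 2654435761u) & (HASH_SIZE - 1);
    unsigned i;

    for (i = 0; i < HASH_SIZE; ++i) {
        int v = hash_map[(h + i) & (HASH_SIZE - 1)];
        if (v < 0)
            return -1;
        if (voices[v].active && voices[v].vvid == vvid)
            return v;
    }
    return -1;
}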
VELOCITY can be continuous - as you pointed out with strings and such. The creation of a voice must be separate in the API, I think.
Why? It's up to the *instrument* to decide when the string (or
whatever) actually starts to vibrate, isn't it? (Could be
VELOCITY >= 0.5, or whatever!) Then, what use is it for
hosts/senders to try to figure out where notes start and end?
And for a continuous velocity instrument, how do you make a new
voice?
Just grab a new VVID and start playing. The synth will decide when a
physical voice should be used, just as it decides what exactly to do
with that physical voice.
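
(Purely as illustration - the 0.5 threshold is just the example
figure from above, and all names below are invented:)

/* The synth, not the sender, decides when a continuously controlled
 * voice actually starts making sound. */
typedef struct {
    float velocity;   /* latest VELOCITY control value, 0..1 */
    int   sounding;   /* non-zero once a physical voice is engaged */
} VoiceContext;

void voice_velocity_changed(VoiceContext *vc, float v)
{
    vc->velocity = v;

    if (!vc->sounding && v >= 0.5f) {
        /* "the string starts to vibrate": grab a physical voice here */
        vc->sounding = 1;
    } else if (vc->sounding && v <= 0.0f) {
        /* control back at zero: let the physical voice go */
        vc->sounding = 0;
    }
}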
And why is velocity becoming special again?
It isn't. VELOCITY was just an example; any control - or a
combination of controls - would do.
I think voice-on/off is well understood and applies pretty well to everything. I am all for inventing new concepts, but this will be more confusing than useful, I think. Open to convincing, but dubious.
I think the voice on/off concept is little more than a MIDIism. Since
basic MIDI does not support continuous velocity at all, it makes sense
to merge "note on" and "velocity" into one, and assume that a note
starts when the resulting message is received.

With continuous velocity, it is no longer obvious when the synth
should actually start playing. Consequently, it seems like wasted
code to have the host/sender "guess" when the synth might want to
allocate or free voices, since the synth may ignore that information
anyway. This is why the explicit note on/off logic seems broken to me.
Note that using an event for VVID allocation has very little to do
with this. VVID allocation is just a way for the host/sender to
explicitly tell the synth when it's talking about a new "voice
context" without somehow grabbing a new VVID. It doesn't imply
anything about physical voice allocation.
[...]
Block start:
time X: voice(-1, ALLOC) /* a new voice is coming */
time X: velocity(-1, 100) /* set init controls */
time X: voice(-1, ON) /* start the voice */
time X: (plugin sends host 'voice -1 = 16')
time Y: voice(-2, ALLOC)
time Y: velocity(-2, 66)
time Y: voice(-2, ON)
time Y: (plugin sends host 'voice -2 = 17')
From then on, the host uses the plugin-allocated voice-ids. We get
a large (all negative numbers) namespace for new notes per block.
Short term VVIDs, basically. (Which means there will be voice
marking, LUTs or similar internally in synths.)
We get plugin-specific voice-ids (no
hashing/translating).
Actually, you *always* need to do some sort of translation if you
have anything but actual voice indices. Also note that there must be
a way to assign voice IDs to non-voices (ie NULL voices) or similar,
when running out of physical voices.
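
(Roughly what I mean by NULL voices - illustration only; all names,
sizes and the mapping scheme below are made up:)

/* An incoming voice ID always has to map to *something*, so when no
 * physical voice is free it maps to a dummy, and later events for it
 * are silently dropped. Unbound entries in id_to_phys[] should be
 * initialized to NULL_VOICE as well. */
#define NULL_VOICE (-1)
#define N_PHYS     16
#define N_IDS      256

static int phys_in_use[N_PHYS];
static int id_to_phys[N_IDS];      /* physical voice index, or NULL_VOICE */

static int find_free_physical_voice(void)
{
    int i;
    for (i = 0; i < N_PHYS; ++i)
        if (!phys_in_use[i])
            return i;
    return NULL_VOICE;
}

/* Bind an incoming voice ID to a physical voice, or to NULL_VOICE
 * if we're out of voices. */
int synth_bind_id(unsigned id)
{
    int v = find_free_physical_voice();
    id_to_phys[id % N_IDS] = v;
    if (v != NULL_VOICE)
        phys_in_use[v] = 1;
    return v;
}

/* Later control events just check the mapping and bail out quietly. */
void synth_control(unsigned id, float value)
{
    int v = id_to_phys[id % N_IDS];
    if (v == NULL_VOICE)
        return;                    /* this ID points at no physical voice */
    /* ...apply 'value' to physical voice 'v' here... */
    (void)value;
}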
Plugin handles voice stealing in a plugin specific way (ask for a voice, it's all full, it returns a voice-id you already have and internally sends voice_off).
You can never return an in-use voice ID, unless the sender is
supposed to check every returned voice ID. Better return an invalid
voice ID or something...
I found it ugly at first. I like that it all is
plugin-specific.
Poke holes in it?
Well, it's an interesting idea, but it has exactly the same problem
as VVIDs, and doesn't solve any of the problems with VVIDs. The fact
that it's all plugin specific looks more like a problem than an
advantage to me. We're basically back at square 1 (with "tag +
search" and/or hashing), and it doesn't really buy us much, compared
to the wrapping 32 bit VVID idea.
Note that when a synth starts stealing voices, that's actually *error handling* going on. If a soft synth with say, 32 physical and 32
Not necessarily. I've played with soft-synths with no envelope
control, which I wanted to be mono. I set maxpoly to 1, and let it
cut off tails automatically. It isn't _necessarily_ an error.
Well, that's a special case, I'd say. I'm not even sure if your
average synth handles voice stealing well enough to be used that way.
(*Should* work, but it does require click-free voice stealing without
too much delay. That basically requires a set of spare voices, or a
"mix twice" feature, so new notes can start playing while stolen
voices do a quick fade-out or similar.)
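
(The fade-out variant, very roughly - the 1 ms figure and all names
below are made up:)

/* The stolen voice ramps to silence over a short window while the
 * new note already plays on another voice. */
#define FADE_SAMPLES 48            /* ~1 ms at 48 kHz */

typedef struct {
    int fade_left;                 /* > 0 while fading out */
} StealFade;

void steal_begin(StealFade *f)
{
    f->fade_left = FADE_SAMPLES;
}

/* Per-sample gain for the stolen voice; returns 0 once it is silent
 * and can actually be reused for the new note. */
float steal_gain(StealFade *f)
{
    float g;
    if (f->fade_left <= 0)
        return 0.0f;
    g = (float)f->fade_left / FADE_SAMPLES;
    f->fade_left--;
    return g;
}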
Anyway, with the VVID system, you do this by using a single, fixed
VVID. (No "new voice for this VVID" events.) No need to mess with the
polyphony.
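
(That is, something like this on the sender side - send_velocity() is
just a stand-in for the real control event send, and the names are
made up:)

/* "Mono via a single, fixed VVID": the sender never asks for fresh
 * VVIDs, it just keeps driving the controls of one VVID, so every
 * new note retriggers the same voice context. */
#define MONO_VVID 0

extern void send_velocity(unsigned vvid, float value);

void mono_note_on(float velocity)
{
    send_velocity(MONO_VVID, velocity);   /* new note, same VVID */
}

void mono_note_off(void)
{
    send_velocity(MONO_VVID, 0.0f);
}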
//David Olofson - Programmer, Composer, Open Source Advocate
.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---------------------------> http://olofson.net/audiality -'
--- http://olofson.net --- http://www.reologica.se ---