On Wednesday 08 January 2003 09.48, Tim Hockin wrote:
> I agree entirely. If each VVID = a voice, then we should just call
> them Voice IDs, and let the event sender make decisions about voice
> reappropriation.
Actually, they're still virtual, unless we have zero latency
feedback from the synths. (Which is not possible, unless
everything is function call based, and processing is blockless.)
The sender never knows when a VVID loses its voice, and can't
even be sure a VVID *gets* a voice in the first place. Thus, it
can't rely on anything that has a fixed relation to physical
synth voices.
> <Arguing my model>
> I think it is fair to say that for a block, the sender can assume a
> voice allocation succeeds. The only time a VID is ever virtual is
> during the creation block.
No. It'll become "virtual" if the voice gets stolen. Then the synth
has to remember to listen only to the new virtual ID for that voice,
and ignore the direct references, since those are for the old context.
> The sender can assume that the negative
> VID exists for that block, and at the end of the block's run() it
> will know whether it can send any further events to that VID.
Provided we don't allow connections with more than one block of
latency, yes.
> I think this protocol is not so insane. At least no more insane
> than the VVID allocation scheme.
Well, I'm still seeing a lot more issues and more complexity with this
scheme than with VVIDs. Could be missing something, though.
Anyway, I'm just about to put VVIDs, the way I think of them, to work
in Audiality. Let's see how that works out...
//David Olofson - Programmer, Composer, Open Source Advocate
.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---------------------------> http://olofson.net/audiality -'
   --- http://olofson.net --- http://www.reologica.se ---