[linux-audio-dev] more on XAP Virtual Voice ID system

David Olofson david at olofson.net
Mon Jan 6 19:21:00 UTC 2003


On Tuesday 07 January 2003 00.31, Tim Hockin wrote:
> > Problem is that that requires special case event processing in
> > synths. You'll need an inner event decoding loop just to get the
> > parameters, before you can go on with voice instantiation.
>
> You always process all events for timestamp X before you generate
> X's audio right?

Yes, that's true. (You loop until you see a different timestamp.)
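
Roughly this kind of inner loop, that is (C sketch; the event struct 
and helper functions are made up for illustration - not actual XAP 
API):

typedef struct EV
{
	unsigned	frame;		/* timestamp within this block */
	int		type;		/* VOICE_ON, VELOCITY, ... */
	int		vvid;
	float		value;
	struct EV	*next;		/* events are sorted by frame */
} EV;

static void handle_event(EV *ev);			/* hypothetical */
static void render(unsigned from, unsigned to);		/* hypothetical */

static void run_block(EV *ev, unsigned frames)
{
	unsigned now = 0;
	while(now < frames)
	{
		unsigned next;
		/* Apply all events with the current timestamp... */
		while(ev && ev->frame == now)
		{
			handle_event(ev);
			ev = ev->next;
		}
		/* ...then render audio up to the next event,
		 * or to the end of the block.
		 */
		next = ev ? ev->frame : frames;
		render(now, next);
		now = next;
	}
}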


> 1. Recv VOICE_ON, allocate a voice struct
> 2. Recv VELOCITY, set velocity in voice struct
> 3. No more events for timestamp X, next is timestamp Y
> 4. Process audio until stamp Y
> 4.a. Start playing the voice with given velocity

Problem is step 1. If the voice allocator looks at velocity, it won't 
work, since that information is not available when you do the 
allocation. Likewise for setting up waveforms with velocity maps and 
the like.

When are you supposed to do that sort of stuff? VOICE_ON is what 
triggers it in a normal synth, but with this scheme, you have to wait 
for some vaguely defined "all parameters available" point.
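
To make that concrete: this is the kind of velocity dependent setup a 
sampler would want to do right at trigger time (hypothetical code; the 
structs and the velocity map are just examples):

typedef struct Layer
{
	float	min_vel, max_vel;
	/* ...sample data, loop points etc... */
} Layer;

typedef struct Voice
{
	Layer	*layer;
	int	active;
	/* ... */
} Voice;

/* Pick the sample layer that covers 'velocity'. */
static Layer *pick_layer(Layer *layers, int nlayers, float velocity)
{
	int i;
	for(i = 0; i < nlayers; ++i)
		if(velocity >= layers[i].min_vel &&
				velocity <= layers[i].max_vel)
			return &layers[i];
	return &layers[nlayers - 1];	/* fallback */
}

/* If VOICE_ON arrives before VELOCITY, this cannot run at VOICE_ON
 * time; the synth has to defer it to some "all parameters available"
 * point - which is exactly the special casing I want to avoid.
 */
static void voice_on(Voice *v, Layer *layers, int nlayers, float velocity)
{
	v->layer = pick_layer(layers, nlayers, velocity);
	v->active = 1;
}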


> > The alternative indeed means that you have to find somewhere to
> > store the parameters until the "VOICE_ON" arrives - but that
> > doesn't screw up the API, and in fact, it's not an issue at all
> > until you get into voice stealing. (If you just grab a voice when
> > you get a "new" VVID, parameters will work like any controls - no
> > extra work or special cases.)
>
> All this is much easier if synths pre-allocate their internal voice
> structs.

I'm assuming that they do this at all times. There is no other 
reliable way of doing it.

The problem is when the sender uses more VVIDs than there are voices, 
or when voices linger in the release phase. That is, the problem is 
that voice stealing occurs - or rather, that there may not always be a 
physical voice "object" for each in-use VVID.


> Host: I know SYNTH has voices 1, 2, 3 active.  So I send params for
> voice 4.

Actually, it doesn't know anything about that. The physical 
VVID->voice mapping is a synth implementation thing, and is entirely 
dependent on how the synth manages voices. s/voice/VVID/, and you get 
closer to what VVIDs are about.


> How does VST handle this?

Same way as MIDI synths; MIDI pitch == note ID.


> > > Send 0-VVID to the VOICE contol with timestamp Y for note-off
> >
> > I'm not quite following with the 0-VVID thing here... (The VVID
> > *is*
>
> 0-VVID is just so you can have one control for voice on and off. 
> Positive means ON, negative means OFF.  abs(event->vvid) is the
> VVID.

Ok. Why not just use the "value" field instead, like normal Voice 
Controls? :-)
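
That is, something like this (sketch; made-up names):

/* Sign coded:  VOICE event; vvid = +n ==> on, vvid = -n ==> off.
 * Value coded: VOICE event; value != 0 ==> on, value == 0 ==> off,
 * which is just a normal Voice Control:
 */
static void handle_voice_control(int vvid, float value)
{
	if(value != 0.0f)
		start_voice(vvid, value);	/* hypothetical */
	else
		release_voice(vvid);		/* hypothetical */
}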


> > can tell the synth that you have nothing further to say to this
> > Voice by implying that NOTE_OFF means "I will no longer use this
> > VVID to address whatever voice it's connected to now." Is that
> > the idea?
>
> The synth doesn't care if you have nothing further to say.

Not really - but whoever *sends* to the synth will care when it runs 
out of VVIDs. (Unless it's a MIDI based sequencer, VVID management 
isn't as easy as "one VVID per MIDI pitch value".)

My point is that if you don't have a way of doing this, there's no 
way you can know when it's safe to reuse a VVID. (Release 
envelopes...) Polling the synth for voice status, or having synths 
return voice status events doesn't seem very nice to me. The very 
idea with VVIDs was to keep communication one way, so why not keep it 
that way as far as possible?


> Either
> it will hold the note forever (synth, violin, woodwind, etc) or it
> will end eventually on its own (sample, drum, gong).

Yeah - and if you can't detach VVIDs, you have to find out when this 
happens, which requires synth->sender feedback. (Or you basically 
cannot safely reuse a VVID, ever, once you've used it to play a note.)
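
That's why I'd rather have an explicit detach (the DETACH_VVID I 
suggested before): the sender frees the VVID the moment it says it's 
done with it, and never needs any feedback. Sender side sketch 
(hypothetical names; not decided API):

#define NUM_VVIDS	256

static int free_vvids[NUM_VVIDS];
static int nfree = 0;

static void init_vvids(void)
{
	int i;
	for(i = 0; i < NUM_VVIDS; ++i)
		free_vvids[nfree++] = i;
}

static int alloc_vvid(void)
{
	if(!nfree)
		return -1;		/* out of VVIDs! */
	return free_vvids[--nfree];
}

/* Tell the synth we're done talking to this voice, then reuse the
 * VVID right away - no synth->sender feedback needed.
 */
static void done_with_vvid(int vvid)
{
	send_event(DETACH_VVID, vvid);	/* hypothetical event sender */
	free_vvids[nfree++] = vvid;
}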


> You do,
> however want to be able to shut off continuous notes and to
> terminate self-ending voices (hi-hat being hit again, or a crash
> cymbal being grabbed).

Yes - but that falls into one of two categories:

	1) Voice Control. (Keep the VVID for as long as you need it!)

	2) Channel Control. ("Kill All Notes" type of controls.)
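
For the second category, something like this inside the synth (trivial 
sketch; assumes a voice struct with an 'active' flag):

static void kill_all_notes(Voice *voices, int nvoices)
{
	int i;
	for(i = 0; i < nvoices; ++i)
		voices[i].active = 0;	/* or start a fast release -
					   depends on what the control
					   is defined to mean */
}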


> > If so, I would suggest that a special "DETACH_VVID" control/event
> > is used for this. There's no distinct relation between a note
> > being "on" and whether or not Voice Controls can be used. At
> > least with MIDI synths, it's really rather common that you want
> > Channel controls to affect *all* voices, even after NoteOff, and
> > I see no reason why our Channel Controls should be different.
>
> I don't get what this has to do with things - of course channel
> controls affect everything.  That is their nature.

Sorry, I meant to say "*Voice* Controls", of course...


> You're right
> that a note can end, and the host might still send events for it. 

Yep - and the note might *not* end, while the host (or rather, 
"whatever sends the events"; it doesn't have to be the host, IMO) 
doesn't care, and just needs a "new" VVID for other things.


> In that case, the plugin should just drop them.

Yes, that's the "NULL Voice"...


> Imagine I
> piano-roll a 4 bar note with 50 different velocity changes for a
> 1/2 second hihat sample.  The sample ends spontaneously (the host
> did not call NOTE_OFF). The sequencer still has events, and so
> keeps sending them.  No problem, plugin ignores them.

Yeah - but I'm talking about the other way around; when the sender 
needs a VVID without a "random" relation to any lingering voice.


> > That doesn't really change anything. Whether you're using
> > VOICE_ON/VOICE_OFF or VOICE(1)/VOICE(0), the resulting actions
> > and timing are identical.
>
> Right - Aesthetics.  I prefer VOICE_ON/VOICE_OFF but a VOICE
> control fits our model more generically.

Yes.

Although the VOICE control might actually be the VELOCITY control, 
where anything non-0 means "on"... A specific, non-optional VOICE 
control doesn't make sense for all types of instruments, but there 
may be implementational reasons to have it anyway; not sure yet.
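
For example (sketch; whether a velocity change on an already playing 
voice retriggers or just modulates would be up to the synth):

/* Hypothetical: VELOCITY doubling as the "VOICE" control; assumes
 * a voice struct with 'active' and 'velocity' fields.
 */
static void handle_velocity(Voice *v, float velocity)
{
	if(velocity > 0.0f)
	{
		if(!v->active)
			start_voice(v, velocity);	/* hypothetical */
		else
			v->velocity = velocity;		/* just modulate */
	}
	else
		release_voice(v);			/* 0 ==> "off" */
}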


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---------------------------> http://olofson.net/audiality -'
   --- http://olofson.net --- http://www.reologica.se ---


