[linux-audio-dev] XAP - init/release latched controls and protocol

David Olofson david at olofson.net
Thu Jan 16 14:00:01 UTC 2003


On Thursday 16 January 2003 06.50, Tim Hockin wrote:
> *** From: David Olofson <david at olofson.net>
>
> > > This jives EXACTLY with what I have been saying, though I
> > > characterized it as:
> > >
> > > VOICE_INIT(vvid)   -> synth gets a virtual voice, start
> > >                       init-latch window
> > > VOICE_SETs         -> init-latched events
> > > VOICE_ON(vvid)     -> synth (optionally) makes it a real voice
> > >                       (end init-window)
> >
> > Well, then that conflict is resolved - provided synths are not
> > *required* to take VOICE_ON if they care only about "real"
> > controls.
> >
> > :-)
>
> Uggh.  Why is this so hard?

It's not hard, but it seems totally pointless to even receive events 
you don't care about. I'd like to be able to just not have VOICE 
inputs, or ignore the VOICE events.


> If you don't get a VOICE_ON, you don't
> go.

So you have to disable control tracking until you get that VOICE_ON? 

Assuming that the sender is doing things right, a control event 
should never arrive before it's actually meant to take effect. That 
means you'll never get a continuous VELOCITY event before the sender 
really intends that you start a note. Whether that control event is 
followed by a VOICE_ON or not is irrelevant to a fully continuous 
control synth, since it will have to react to every control change in 
about the same way anyway.


> It is a flag from the sequencer that says "I am done sending
> init events".  If you don't have any init events, then who cares? 

Exactly. So for a continuous control synth, I just ignore the VOICE 
events altogether, and look for the controls that run the synth.

As far as I can see, this will Just Work(TM), unless the sender is 
doing something *really* stupid, like screwing up the timing of the 
continuous control events before the first VOICE_ON.


> It means the synth can now start making sound for this voice. Think
> of it as releasing the latch.

It's just that there's no latch to release if you have only continuous 
controls... (So VOICE_ON becomes a NOP.)
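To illustrate that point, here's a minimal C sketch (the event and 
voice types and all names are invented for illustration; this is not 
actual XAP API): a purely continuous control synth tracks the 
controls it cares about and lets VOICE_ON fall through as a no-op.

```c
/* Hypothetical event types, for illustration only. */
typedef enum { EV_VOICE_ON, EV_VOICE_OFF, EV_VELOCITY } EventType;

typedef struct { EventType type; float value; } Event;
typedef struct { float amp; } Voice;

/* A purely continuous-control synth reacts to every control change
 * immediately; there is nothing to latch, so VOICE_ON is a NOP. */
static void handle_event(Voice *v, const Event *ev)
{
    switch (ev->type) {
    case EV_VELOCITY:
        v->amp = ev->value;   /* tracked and applied right away */
        break;
    case EV_VOICE_ON:         /* no latch to release: NOP */
    case EV_VOICE_OFF:        /* release driven by VELOCITY -> 0 */
        break;
    }
}
```

Sending VELOCITY before or after VOICE_ON makes no difference to this 
synth; the voice state is driven entirely by the control stream.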


> > For example, your average synth will have a VELOCITY and a
> > DAMPING (or something) control pair, corresponding to MIDI NoteOn
> > and NoteOff velocity, respectively. You could basically set both
> > right after allocating a voice/VVID, as the only requirement is
> > that the right values are in place when they should be latched.
>
> OK, I'll buy that.  So the way to end a voice is:
>  SEND(release-latched controls)
>  VOICE_OFF(vvid) /* latches control values */
>
> It means that a control can not be both release-latched and
> continuous.

It can be, but *only* if you're allowed to change it at any time 
before the VOICE_OFF. I can't see why you would have to restrict that 
in any way.


> Perhaps that is good.

No, I don't think it is - and I don't think that restriction is 
required.


> Are there init-latched controls
> that can also be continuous that are not purely continuous?

Yes. Consider a sampler with pitch mapping. The initial PITCH selects 
a waveform to use, and if we're dealing with waveforms that cannot 
take crossfading (which goes for just about anything sampled that 
hasn't been run through an Autotune or similar), it will have to 
stick to that waveform even if the pitch is changed. That is, PITCH 
would be init-latched with respect to waveform selection, but 
continuous when it comes to pitch control. Similar logic can be used 
for continuous velocity and various other stuff as well.
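A rough C sketch of that sampler (the zone table, struct layout and 
all names are made up for illustration): the waveform zone is latched 
once when the voice starts, while later PITCH events only retune.

```c
#define NUM_ZONES 3
/* Upper pitch bound (in semitones) of each hypothetical key zone. */
static const float zone_limit[NUM_ZONES] = { 48.0f, 60.0f, 127.0f };

typedef struct {
    int   waveform;  /* init-latched at VOICE_START; never changes */
    float pitch;     /* continuous; updated by every PITCH event */
} SamplerVoice;

static int map_pitch_to_zone(float pitch)
{
    for (int i = 0; i < NUM_ZONES; ++i)
        if (pitch <= zone_limit[i])
            return i;
    return NUM_ZONES - 1;
}

/* VOICE_START latches the waveform from the current PITCH. */
static void voice_start(SamplerVoice *v, float latched_pitch)
{
    v->pitch = latched_pitch;
    v->waveform = map_pitch_to_zone(latched_pitch);
}

/* Later PITCH events retune, but stick to the latched waveform. */
static void voice_set_pitch(SamplerVoice *v, float pitch)
{
    v->pitch = pitch;
}
```

So one and the same control is init-latched for one purpose and 
continuous for another, with no conflict.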


> I'd been assuming that init-latched, continuous, and
> release-latched were not mutually exclusive..but maybe they should
> be.

I don't think so. For a moment, I was thinking that init-latched and 
release-latched are mutually exclusive, but there's no real reason 
for that either. It's not obviously wrong to have the same control 
affect both starts and ends of notes.

For example, if you have a sound with a pitch envelope that starts 
one octave low, slides up to the note, and then ends by sliding back 
down one octave, you might well use a single "SLIDE_SPEED" control 
for both slides.

And BTW, the synth may also implement this as a continuous control. 
If the sliding is re-evaluated once per sample, and based directly on 
the SLIDE_SPEED control, you can control the sliding dynamically 
while it's in progress. This makes the control state dependent, but 
continuous.
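As a sketch (a hypothetical helper, not from any real API): if the 
per-sample slide step reads the SLIDE_SPEED control directly, 
changing the control mid-slide immediately changes the slide rate.

```c
/* One sample of pitch sliding toward 'target'. 'speed' is read from
 * the (hypothetical) SLIDE_SPEED control every sample, so a control
 * change takes effect immediately, even mid-slide. */
static float slide_step(float current, float target, float speed)
{
    float d = target - current;
    if (d > speed)
        return current + speed;   /* still sliding up */
    if (d < -speed)
        return current - speed;   /* still sliding down */
    return target;                /* close enough: snap to target */
}
```

Calling this once per sample in the voice's run loop gives the 
"state dependent, but continuous" behavior described above.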

You could do this with VELOCITY as well; modulating envelope timing 
on the fly and whatnot. You may even want to do this while latching 
the initial VELOCITY value for the filter cutoff. Then VELOCITY is 
both init-latched and continuous. (But probably not release-latched, 
since we'll probably decide that using a different control for that 
feels more familiar to MIDI users, or something. I'm no longer sure 
that's a good idea, though...)


> So again, I advocate over-verbosity as less evil than
> over-terseness.

Sure. Senders can talk all they want, as long as it doesn't mean 
synths should do thinking and error correction for them.


> /* if a plugin is voiced, it has a VOICE control */
> VOICE(vvid, VOICE_INIT) // start init-latch window
> // send init events
> VOICE(vvid, VOICE_ON)
> // run, send events, whatever
> VOICE(vvid, VOICE_RELEASE) // start release phase
> // send release events
> VOICE(vvid, VOICE_OFF)

Maybe ON should be ATTACK or START or something? It looks like ON and 
OFF are matching pairs otherwise.

More seriously though, I think you still have to send the 
release-latched events *before* VOICE_RELEASE, or we get the "parse 
ahead" problem back.


> I like the fact that you are forced to declare the fact that you
> want to use a VVID.

Yeah. Should eliminate VVID checking more or less completely, I think.


> If you have no init and no release controls:
>
> VOICE(vvid, VOICE_INIT)
> VOICE(vvid, VOICE_ON)
> // run, send events, whatever
> VOICE(vvid, VOICE_OFF)

Hmm... I'm not sure I'm following completely. I'll give it a try with 
different names and some clarifications (and my VVID reuse loop, of 
course ;-):

send(vvid, VOICE_BEGIN)
// Synth assigns a voice or a "control tracker" to 'vvid'.

for(as many times as you like)
{
	// Any start-latched controls you want to be used by
	// the next VOICE_START must be sent now.

	send(vvid, VOICE_START)
	// Synth latches any start-latched controls.
	// Synth starts attack phase.

	// Any stop-latched controls you want to be used by
	// the next VOICE_STOP must be sent now.

	send(vvid, VOICE_STOP)
	// Synth latches any stop-latched controls.
	// Synth starts release phase.
}

send(vvid, VOICE_END)
// Synth disconnects 'vvid'.
// 'vvid' is now illegal, until reassigned.

// You may send any voice control events at any time between
// VOICE_BEGIN and VOICE_END, but note that continuous control
// synths may not care about the VOICE_START and VOICE_STOP
// events, since they're really only concerned with continuous
// controls.


When I'm looking at this, I realize that there really isn't anything 
special about VOICE_START and VOICE_STOP. They just trigger state 
changes or similar actions in the synth; actions that some synths may 
not even bother to implement.

I'm willing to accept VOICE_START and VOICE_STOP as "special" for 
practical reasons, but there are two issues:

	1. Some synths have no use for them at all.

	2. Someone might want *more* than two "state change" events...


I think 2 is stretching it, though. It could be used for speech 
synths and similar things, but you might as well use an integer 
control for that. (Changing the control triggers a state change.) It 
would be nice if you could somehow specify which controls are latched 
by which state change, though. That's why I'm even thinking about 
this in this context. (We've already concluded that this is useful 
for starting and stopping notes, so why not do it for other synth 
defined state changes as well?)

Design suggestion:

	* Instead of VOICE_START/STOP and the like, use an
	  integer control VOICE_STATE.

	* States 1 and 0 would be reserved for STARTING and
	  STOPPING, respectively; i.e. the states that
	  VOICE_START and VOICE_STOP would switch to.

	* Plugins can export a list of states with their
	  VOICE_STATE control. If no list is exported, the
	  default list "STOPPING;STARTING" is used.

	* A control can be hinted LATCHED, to indicate that
	  it is latched when entering certain states. The
	  LATCHED hint comes with a list of the states that
	  will latch the control when entered. An empty list
	  is short for "all state changes latch this control".
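A rough sketch of what such a LATCHED hint could look like on the 
plugin side (the descriptor layout and every name here are invented 
for illustration; nothing of this is settled API):

```c
/* Reserved VOICE_STATE values, per the suggestion above. */
#define VSTATE_STOPPING 0
#define VSTATE_STARTING 1

/* Per-control metadata a plugin could export. */
typedef struct {
    const char *name;
    int         latched;         /* carries the LATCHED hint? */
    const int  *latch_states;    /* states that latch this control */
    int         n_latch_states;  /* 0 == "all state changes latch" */
} ControlInfo;

/* Does entering 'state' latch this control? */
static int latches_on(const ControlInfo *c, int state)
{
    if (!c->latched)
        return 0;
    if (c->n_latch_states == 0)
        return 1;                /* empty list: every state change */
    for (int i = 0; i < c->n_latch_states; ++i)
        if (c->latch_states[i] == state)
            return 1;
    return 0;
}
```

The host can then tell, per control, exactly which VOICE_STATE 
transitions it must have the final value in place for.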


Maybe this is stretching it, but this looks a lot more like a generic 
and neutral interface than anything that's based on the idea that all 
instruments have distinct "start" and "stop" state changes.

I think this would be sufficient to drive a full blown speech synth 
without non-standard control interpretation, which seems pretty cool 
for an instrument control API. More interestingly (to most of us), I 
think this could be used for *very* interesting rhythmic synthesis 
stuff, fully controlled by the user from the sequencer.


> I could even be convinced that VOICE_RELEASE is superfluous, but I
> (kind of) like how release mirrors init.

No, it's not superfluous. The other week, I realized that knowing 
when a sender is done with a VVID can be *really* useful at times, as 
a result of the need to track voice controls even if no sound is 
played ATM. It's impossible to steal the right one of all the silent 
voices, unless you know which VVIDs are "dead".

Note that you *can* get away without VOICE_RELEASE if you disallow 
VVID reuse, since then, you know that when *you're* done, it's safe 
to stop tracking voice controls, because that VVID will never make 
sound again.

Unfortunately, it's never that easy with continuous control synths, 
as you can't really prevent those from playing multiple "notes" 
during the lifetime of a VVID. It doesn't really matter as long as 
you have more physical voices than VVIDs in use, but if you get a 
sender that "circulates" VVIDs rather than reusing "fresh" VVIDs 
ASAP, you're screwed pretty soon...


> We're close on this one, I can smell it!

Yeah, that's what you think - but now I'm bringing in more stuff! ;-)


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---------------------------> http://olofson.net/audiality -'
   --- http://olofson.net --- http://www.reologica.se ---



More information about the Linux-audio-dev mailing list