[linux-audio-dev] Plugin APIs (again)

Tim Hockin thockin at hockin.org
Sat Dec 7 14:18:01 UTC 2002


Combining three replies:


Steve Harris <S.W.Harris at ecs.soton.ac.uk> wrote:
> 
> On Sat, Dec 07, 2002 at 12:25:09 -0800, Tim Hockin wrote:
> > This is what I am thinking:  One of the flags on each control is whether it
> > is a CTRL_MASTER, or a CTRL_VOICE (or both).  This allows the plugin to
> > define a ctrl labelled (for example) VELOCITY and the host can connect it to
> > any MIDI input, if it wants.
> 
> I'm not sure that making one control both is meaningful. Per voice and per
> instrument controls should probably be kept separate.

Whether they are defined together or separately, there are certainly
controls that apply both to each voice and to the whole (instrument, channel
(MIDI), plugin).  Volume, for example.

What advantage do you see to defining them separately?  Certainly the
structures are analogous, if not identical.

So no one has offered me any suggestions on how we handle the clash between
Master and per-voice controls.  To re-ask:

<RE-ASK rephrased="true">
Let's assume a plugin provides a control named VOLUME which is both VOICE and
MASTER.  Let's assume the current state of VOLUME is the value 5.0.  Let's
assume the sequencer triggers a voice and sets VOLUME for that voice to 8.0.
The user then turns the master VOLUME down to 0.  What happens to the value of
the voice's VOLUME?
	a) goes to 0
	b) ends up at 3.0 (8.0 less the 5.0 the master dropped)
	c) stays at 8.0

Maybe controls that are both MASTER and VOICE should take an absolute value
for the MASTER and a per-voice scalar against the MASTER value?
</RE-ASK>
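
A minimal sketch of that last idea in C, so we're arguing about the same
thing.  None of these names are from any proposed header; they're all mine:

/* The scheme from the paragraph above: MASTER VOLUME is an absolute
 * value, per-voice VOLUME is a scalar against it. */
typedef struct {
    float volume_scale;     /* per-voice VOLUME (1.0 = unity) */
} voice_t;

typedef struct {
    float master_volume;    /* absolute MASTER VOLUME */
    voice_t voices[32];
} instrument_t;

/* Effective gain for one voice: master at 0 silences every voice, but
 * each voice's own setting survives and scales back up with the master. */
static float effective_volume(const instrument_t *ins, const voice_t *v)
{
    return ins->master_volume * v->volume_scale;
}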

> > 	1) * Host calls plug->note_on(timestamp, note, velocity), and 
> > 	     gets a voice-id.
> 
> I don't think I like the first-class-ness of note and velocity, as we've

I agree, mostly.  I threw this in as a bit of a red herring, to see what
people were thinking.

> > 	   * Host sends n VOICE_PARAM events to set up any params it wants
> 
> You could just send pitch and velocity this way?

Absolutely.  HOWEVER, I have one design issue with it:  Velocity is not a
continuous control.  You can't adjust the velocity halfway through a long
note.  You can adjust the pitch.  You can adjust portamento time.  Velocity
relates SPECIFICALLY to the attack and release force of the musician.
Unless we all agree that velocity == loudness, which will be tough, since I
think _I_ disagree.

Could I get you to accept voice_on(velocity); and pass all the rest as
events?
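
Concretely, something like this - a sketch with made-up type and event
names, not a real proposal:

/* Velocity is the one first-class argument because it only makes sense
 * at attack/release time; everything continuous (pitch, portamento
 * time, ...) arrives as timestamped events. */
typedef int voice_id_t;

typedef struct {
    unsigned   timestamp;   /* in samples */
    int        type;        /* e.g. EV_VOICE_PARAM */
    voice_id_t voice;
    int        ctrl;        /* which control */
    float      value;
} event_t;

typedef struct plugin plugin_t;
struct plugin {
    voice_id_t (*voice_on)(plugin_t *self, unsigned timestamp,
                           float velocity);
    void (*enqueue_event)(plugin_t *self, const event_t *ev);
};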

> > Should each plugin provide serialize() and deserialize() methods (thus
> 
> The host should do it. You don't want to have to write serialising code for
> every single instrument. This also ensures that all the state can be
> automated.

Two votes for this - I'll consider it decided.
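
For the record, host-side serialization could be as dumb as this
(control-descriptor layout invented for illustration):

#include <stdio.h>

/* Since the host can enumerate every control, it can save state
 * generically - and anything it can serialize it can also automate. */
typedef struct {
    const char *name;
    float       value;
} control_t;

typedef struct {
    int        n_controls;
    control_t *controls;
} plugin_state_t;

static void host_serialize(const plugin_state_t *p, FILE *out)
{
    int i;
    for (i = 0; i < p->n_controls; i++)
        fprintf(out, "%s = %f\n", p->controls[i].name,
                p->controls[i].value);
}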

David Olofson <david at olofson.net> wrote:
> 
> I'm still not sure I understand the terminology here... Considering 
> some recent posts, I assume that the following applies:
> 
> 	Instrument:
> 		Monophonic or polyphonic plugin, capable of
> 		maintaining one set of "patch data" (or "one

Correct.

> 	Voice:
> 		An abstract object inside an Instrument,
> 		representing "one note". A polyphonic Instrument

Correct.

> a Channel, and Plugins may or may not be allowed to have multiple 
> Channels. As to the control protocol, if you want to do away with the 
> "channel" field, you could just allow synths to have more than one 
> event input port. One might think that that means more event decoding 

I haven't come to this note in my TODO list yet.  So we want to support the
idea of multi-timbral plugins (to use the MIDI word)?  Given that it is
software, we can say 'just load a new instance for the new channel'.  That
prevents anyone from doing an exact software mockup of a piece of hardware,
but I'm inclined not to care that much.

> ports. As of now, there is one event port for each Channel, and one 
> event port for each physical Voice. "Virtual Voices" (ie Voices as 

Are you using multiple event ports, then, or just having one port per
instrument-plugin and sending EV_VOICE_FOO and EV_MASTER_FOO (modulo naming)
events?
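
To illustrate what I mean by the latter (reusing the event_t sketch from
above; the EV_* and set_* names are placeholders):

enum { EV_MASTER_PARAM, EV_VOICE_PARAM, EV_VOICE_ON, EV_VOICE_OFF };

struct instrument;  /* opaque here */
void set_master_ctrl(struct instrument *ins, int ctrl, float value);
void set_voice_ctrl(struct instrument *ins, voice_id_t voice,
                    int ctrl, float value);

/* One event port per instrument: the plugin decodes voice-level and
 * master-level events from the same queue. */
static void decode_events(struct instrument *ins, const event_t *evs, int n)
{
    int i;
    for (i = 0; i < n; i++) {
        switch (evs[i].type) {
        case EV_MASTER_PARAM:   /* applies to the whole instrument */
            set_master_ctrl(ins, evs[i].ctrl, evs[i].value);
            break;
        case EV_VOICE_PARAM:    /* applies to evs[i].voice only */
            set_voice_ctrl(ins, evs[i].voice, evs[i].ctrl, evs[i].value);
            break;
        /* EV_VOICE_ON / EV_VOICE_OFF handled similarly */
        }
    }
}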

> bandwidth, and it works well for most keyboard controlled 
> instruments, but that's about it.

Other than organs, which you've mentioned, what kinds of instruments don't
have some concept of velocity (whether they ignore it or not)?  As I've said
above, I have a hard time reconciling velocity with any timed event other
than on/off.

> As to the "gets a voice ID" thing, no, I think this is a bad idea. 
> The host should *pick* a voice ID from a previously allocated pool of 
> Virtual Voice IDs. That way, you eliminate the host-plugin or 
> plugin-plugin (this is where it starts getting interesting) 
> roundtrip, which means you can, among other things:

I'm still not clear on this.  What plugin would trigger another plugin?  Do
you envision that both the host and a plugin would be controlling this
plugin?  If so, how do you reconcile that they will each have a pool of
VVIDs?  I suppose they can each get VVIDs from the host, but we're adding a
fair bit of complexity now.
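
For reference, here's my reading of the VVID scheme (pool size and names
are my invention):

/* The host hands each event sender a block of virtual voice IDs up
 * front, so a sender can attach events to a new voice without a
 * round-trip to the plugin, and two senders never pick the same ID. */
#define VVID_POOL_SIZE 64

typedef struct {
    int ids[VVID_POOL_SIZE];    /* IDs granted by the host */
    int next;                   /* next unused slot */
} vvid_pool_t;

static int vvid_alloc(vvid_pool_t *pool)
{
    if (pool->next >= VVID_POOL_SIZE)
        return -1;              /* exhausted; ask the host for more */
    return pool->ids[pool->next++];
}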

> 	* send controls *before* you start a voice, if you like

You can do this already: any voice-control event with a timestamp before
(or equal to) the note_on timestamp can be assumed to be a startup param.
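
In code terms (hypothetical helpers wrapping the event queue sketched
above; the CTRL_* names are invented):

enum { CTRL_PITCH, CTRL_PORTAMENTO };   /* invented control IDs */

void send_voice_param(plugin_t *p, unsigned t, voice_id_t v,
                      int ctrl, float value);
void send_voice_on(plugin_t *p, unsigned t, voice_id_t v);

static void start_note(plugin_t *plug, unsigned t, voice_id_t vid)
{
    send_voice_param(plug, t, vid, CTRL_PITCH, 60.0f);
    send_voice_param(plug, t, vid, CTRL_PORTAMENTO, 0.1f);
    /* same timestamp, sent last: the plugin treats both params
     * above as startup values for this voice */
    send_voice_on(plug, t, vid);
}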

> 	* implement smarter and tighter voice allocation

I don't see what you mean by this, or how it matters.  I see voices as being
the sole property of the instrument.  All the host knows about them is that
they have some (int) id that is unique per-instrument.

> 	* use a timestamped event system

Umm, I think this is not solely dependent on VVIDs - I think timestamps will
work just fine as proposed.  It's a matter of taste, unity, and presentation
we're discussing.

> 	* pipe events over cables, somewhat like MIDI

Ahh, now HERE is something interesting.  I'd always assumed the plugin would
return something, self-allocating a voice.  This is what you've called the
round-trip problem, I guess.  But it seems to me that even if you were piping
something over a wire, you have some sort of local plugin handling it.  THAT
plugin allocates the voice-id (arguing for my model).  The question I was
asking was: is voice-id allocation synchronous (call plug->voice_on(timestamp)
and get a valid voice-id right now), or is it async (send a note-on event and
wait for the id to come back)?
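
The two alternatives side by side (signatures invented, types from the
sketches above):

/* Synchronous: the caller gets a usable voice-id immediately, which
 * requires a function call into the plugin. */
voice_id_t voice_on_sync(plugin_t *self, unsigned timestamp, float velocity);

/* Asynchronous: the caller sends a note-on event, and the plugin later
 * emits a reply event carrying the voice-id it chose - awkward to wait
 * on if the events travel over a wire. */
void voice_on_async(plugin_t *self, const event_t *note_on);
/* ...later the plugin emits something like:
 *     { .type = EV_VOICE_STARTED, .voice = <plugin-chosen id> } */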

This raises another question for me - the host sends events to the plugins.
Do the plugins send events back?  It seems useful.  The host has to handle
synchronization issues (one recv event-queue per thread, or it has to be a
plugin -> host callback, or something), but that's OK.  Do timestamps matter
to the host, except as bookkeeping?
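
One possible shape for the recv-queue option - a single-writer/
single-reader ring per host thread (sizes and names are mine, and
memory-ordering details are glossed over):

#define RETQ_SIZE 256

typedef struct {
    event_t ev[RETQ_SIZE];
    volatile unsigned head;   /* written only by the plugin (producer) */
    volatile unsigned tail;   /* written only by the host (consumer) */
} return_queue_t;

/* Plugin pushes a reply event; the host drains the ring once per block,
 * so no lock is needed beyond the single-writer/single-reader rule. */
static int retq_push(return_queue_t *q, const event_t *e)
{
    unsigned next = (q->head + 1) % RETQ_SIZE;
    if (next == q->tail)
        return -1;            /* full: host isn't draining fast enough */
    q->ev[q->head] = *e;
    q->head = next;
    return 0;
}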

> and all hosts... Also note that event size (which has to be fixed) 
> increases if you must squeeze in multiple parameters in some events.

Why does it have to be fixed-size?  It doesn't strictly HAVE to be.  On one
hand, I HATE when an API says 'you have three available params': foo->param1,
foo->param2, foo->param3.  If you need more, too bad.  On the other hand,
there are performance advantages to having events pre-allocated, and thereby
fixed-size, or at least bounded.
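
One common compromise is a fixed header plus a small bounded payload
union, e.g. (layout invented):

/* Every event is the same small size, so queues can be preallocated
 * arrays, but the union gives each event type its own fields rather
 * than an anonymous param1/param2/param3. */
typedef struct {
    unsigned timestamp;
    short    type;
    short    voice;          /* or -1 for master-level events */
    union {
        struct { int ctrl; float value; } param;  /* control change */
        struct { float velocity; }        note;   /* note-on */
        /* other payloads, all bounded by the union's size */
    } u;
} fixed_event_t;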

> The intention is that these things would (on the whole) be sound
> generators, right? To me plugin implies inline processing.

This API is not purely for instruments.  It can certainly be used for
effects and sinks, too.  That said, we're spending a lot of time on the
instrument part because it's the new ground wrt LADSPA.


Tim


