[linux-audio-dev] Plugin APIs (again)

Steve Harris S.W.Harris at ecs.soton.ac.uk
Sat Dec 7 15:23:00 UTC 2002


On Sat, Dec 07, 2002 at 11:12:54 -0800, Tim Hockin wrote:
> What advantage do you see to defining them separately?  Certainly the
> structures are analogous, if not identical.

Not defining them separately, but making a distinction: I don't think one
control should be applied both per voice and per instrument.
 
> So no one has offered me any suggestions on how we handle the clash between
> Master and per-voice controls.  To re-ask:

Ban it :)
 
> > > 	   * Host sends n VOICE_PARAM events to set up any params it wants
> > 
> > You could just send pitch and velocity this way?
> 
> Absolutely.  HOWEVER, I have one design issue with it:  Velocity is not a
> continuous control.  You can't adjust the velocity halfway through a long
> note.  You can adjust the pitch.  You can adjust portamento time.  Velocity
> relates SPECIFICALLY to the attack and release force of the musician.
> Unless we all agree that velocity == loudness, which will be tough, since I
> think _I_ disagree.

No, velocity != loudness.

However, even in MIDI there can be more than one velocity value per note;
there is a separate release velocity, IIRC. I expect the only reason attack
velocity is packaged up with the note start in MIDI is that otherwise
they couldn't guarantee it would arrive at the same time, and it saved a
few bytes.
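
For reference, the framing is just this (the third byte of the note-off
is the release velocity; this is standard MIDI, only the variable names
are mine):

/* Standard MIDI note framing: attack velocity rides in the note-on,
 * release velocity in the note-off. */
#include <stdio.h>

int main(void)
{
    unsigned char channel = 0;

    /* Note-on: status 0x90 + channel, key number, attack velocity. */
    unsigned char note_on[3]  = { 0x90 | channel, 60, 100 };

    /* Note-off: status 0x80 + channel, key number, release velocity
     * (many keyboards just send a fixed value here). */
    unsigned char note_off[3] = { 0x80 | channel, 60, 40 };

    printf("on:  %02x %02x %02x\n", note_on[0], note_on[1], note_on[2]);
    printf("off: %02x %02x %02x\n", note_off[0], note_off[1], note_off[2]);
    return 0;
}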

Retriggering controllers (like that giant ribbon controller) could provide
multiple velocity changes within one "note" too.

> we can say 'just load a new instance for the new channel'.  It prevents
> anyone from doing an exact software mockup of a bit of hardware, but I'm
> inclined not to care that much..

There are probably other reasons why you /might/ want this, but I can't
think of any offhand. I suspect one is OK, and it keeps things nice and
simple.
 
> Other than Organs, which you've mentioned, what kinds of instruments don't
> have some concept of velocity (whether they ignore it or not..).  As I've
> said above I have a hard time reconciling velocity with any timed event
> other than on/off.

Analogue synths generally don't; some have channel pressure, but that is
slightly different and varies through the duration of the note.

> I'm still not clear on this.  What plugin would trigger another plugin?  Do
> you envision that both the host and a plugin would be controlling this
> plugin?  If so, how do you reconcile that they will each have a pool of
> VVIDs - I suppose they can get VVIDs from the host, but we're adding a fair
> bit of complexity now.

Not really; the host can just allocate them from a pool of 32-bit ints (for
example) and pass them in via the new_voice call.
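
Something like this, say (new_voice() and the handle are made-up names,
this is only a sketch of the allocation side):

#include <stdint.h>

/* Host-side pool: VVIDs are just 32-bit ints handed out in order. */
typedef struct {
    uint32_t next;
} vvid_pool;

static uint32_t vvid_alloc(vvid_pool *p)
{
    return p->next++;   /* wraps after 2^32 voices, which is harmless */
}

/* Anything that wants to start a voice (the host, or a controlling
 * plugin that was given IDs from the same pool) then does:
 *
 *     plugin->new_voice(handle, vvid_alloc(&pool));
 */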
 
> This raises another question for me - the host sends events to the plugins.
> Do the plugins send events back?  It seems useful.  The host has to handle
> synchronization issues (one recv event-queue per thread, or it has to be
> plugin -> host callback, or something), but that's OK.  Do timestamps matter
> to the host, except as bookkeeping?

I'm not sure what the instrument will send back to the host that needs to
be timestamped.
 
> Why does it have to be fixed-size?  It doesn't strictly HAVE to be.  On one
> hand I HATE when an API says 'you have three available params' foo->param1,
> foo->param2, foo->param3.  If you need more, too bad.  On the other hand,
> there are performance advantages to having events pre-allocated, and
> thereby, fixed sized, or at least bounded.

Well, we were talking about floats, strings or opaque data blocks before,
so presumably the event would be either a float, a pointer to a string or
a pointer to a data block (+ a size).

If you need to send it over a wire you will have to serialise the strings
and blocks in variably sized chunks, but that's unavoidable.
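
So in memory every event stays one fixed size; something along these
lines (the field names are mine, purely illustrative):

#include <stddef.h>
#include <stdint.h>

enum ev_type { EV_FLOAT, EV_STRING, EV_BLOCK };

typedef struct {
    uint32_t timestamp;    /* frame offset within the block */
    uint32_t type;         /* one of ev_type */
    union {
        float value;       /* EV_FLOAT */
        char *string;      /* EV_STRING, NUL-terminated */
        struct {
            void  *data;   /* EV_BLOCK */
            size_t size;
        } block;
    } body;
} event;

Only the pointed-to strings and blocks vary in size, which is exactly
why the wire format has to flatten them.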
 
> > The intention is that these things would (on the whole) be sound
> > generators, right? To me plugin implies inline processing.
> 
> This API is not purely instrumental.  It can certainly be for effects and
> sinks, too.  That said, we're spending a lot of time on the instrumental
> part because it's the new ground wrt LADSPA.

It's not purely aimed at instruments, but it's heavily focused that way; it
probably wouldn't make sense to use this API for something LADSPA can do
well (e.g. audio+control -> audio+control). If you need timestamps or
polyphony then this would be the way to go, but realistically that means
instruments and a few other odd cases.
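
For contrast, the LADSPA end of the spectrum is just this sort of thing,
with control ports read as plain floats in run() (a minimal gain plugin,
trimmed down to the two relevant callbacks):

#include <ladspa.h>

#define PORT_GAIN   0   /* control in */
#define PORT_INPUT  1   /* audio in */
#define PORT_OUTPUT 2   /* audio out */

typedef struct {
    LADSPA_Data *port[3];
} gain_plugin;

static void connect_port(LADSPA_Handle h, unsigned long port,
                         LADSPA_Data *data)
{
    ((gain_plugin *)h)->port[port] = data;
}

static void run(LADSPA_Handle h, unsigned long sample_count)
{
    gain_plugin *g = (gain_plugin *)h;
    LADSPA_Data gain = *g->port[PORT_GAIN];
    unsigned long i;

    for (i = 0; i < sample_count; i++)
        g->port[PORT_OUTPUT][i] = g->port[PORT_INPUT][i] * gain;
}

No timestamps, no voices -- the moment you need those you are into the
new API's territory.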

- Steve


