[LAD] Plugin buffer size restrictions

David Robillard d at drobilla.net
Tue May 29 03:55:04 UTC 2012

On Mon, 2012-05-28 at 22:40 +0000, Fons Adriaensen wrote:
> On Mon, May 28, 2012 at 05:08:02PM -0400, David Robillard wrote:
> > > Assuming that by 'signals' you mean audio rate controls, yes.
> > > Again, the exception is 'synthesis' modules. You'd expect a
> > > VCA in a modular synth to do what the ADSR tells it to do,
> > > even if that results in artefacts. But you wouldn't want a
> > > fader in a DAW (essentially the same thing if implemented as
> > > a plugin) to do the same.
> > 
> > Sorry, that term isn't clear.  By 'signals' I meant basically a single
> > numeric value (float), whatever the rate.
> > 
> > As opposed to an 'event' (also an ambiguous term) which actually says
> > "hey, this control changed to 4.0 now" when it changes - and only when
> > it changes.  An 'event' could also have ramp information for
> > interpolation and whatever else.
> Dave, thanks for this reply. Reading your complete post, there's
> a lot to discuss - more than I'm prepared to write at this time
> (past midnight here) - but I'll do so ASAP.

No rush.  I am particularly interested in your thoughts on my question
about how L relates to the run buffer size, and what restrictions apply
there (e.g. L must of course have some minimum value, dependent on the
sample rate, to make bandlimited interpolation possible at all, but
surely there is more to it in various cases).

> It's not a simple issue. For example you mention that control
> values should somehow look ahead (how is another matter, and
> it can get hairy). That is true, but otoh in many cases the
> delay you get when not doing this doesn't matter, so there's
> good reason to avoid the complexity in those cases, etc. etc.

Actually I think it's globally significantly *less* complicated to
provide future values.

> So it ends up as an exercise in weighting pros and cons, you
> can't reduce it to some simple 'binary' statement. We wouldn't
> have this debate if things were so simple. 

Absolutely true.  All of these issues, the various scenarios and the
different ways they must be handled, are inherently there no matter
what.
However, everything gets quite ugly and complicated when the values the
plugin has during run() have different time bases, because all those
different scenarios then cross-cut the plugin API, and it's *everyone's*
problem to carefully deal with *every* case that exists.  This is
probably literally impossible.

Even if it isn't, as you mentioned, lots of code sucks.  If plugins
have to have fancy configurable control latency settings that take a
mailing list of intelligent people weeks to even figure out, they're
simply going to get it wrong.

However, it seems possible to move all of that stuff out of the plugin
API so those different scenarios become only the host's problem.
Everything then becomes clean and feasible.

In my opinion, having several different interpretations of controls in
the plugin, depending on the scenario, is an impossible solution.  It
would never happen.  Avoiding this, then, is an overriding requirement.
We need *one* way of describing controls in run() that works in all
cases.

I think providing synchronous control events, with 'future' values (at
least some distance L in the future) is the way to get that.  Let's
pretend that the Ultimate Plugin Interface (UPI) 1.0 exists, works this
way, is stable and unmalleable, and all you have to work with to deliver
your product (a plugin).

From the plugin author's perspective: is there anything that is
*impossible* to do correctly?

