2012/5/26 Fons Adriaensen <fons(a)linuxaudio.org>:
1. telling a plugin that at N frames from the current position the
   parameter P should have value V, and
2. doing the same, while also requiring that the plugin outputs
   N frames at that time.
My argument is that doing (2) is a bad idea, and even more so if
you consider that (1) is a silly thing to do in the first place, at
least for the usual audio processing tasks such as EQ, dynamics, and
most effects. The exception is synthesis modules of course, but those
should have a dedicated mechanism for it anyway, or accept audio-rate
controls.
The full report included a detailed analysis of why (1) is a bad
idea in most cases (with the exception of synthesis modules). It
is because it makes it almost impossible for the plugin code to
compute good parameter trajectories. A well-designed EQ, effect,
compressor, etc. should actually *ignore* attempts to control its
internals in such a way. So there was never any need to allow
arbitrary nframes values. The correct thing to do would be to
remove that from the core spec, and provide an extension for
finer-grained 'sample accurate' control.
If I understand correctly, an implication would be that you get
uniform sampling of parameter signals with control rate = sample rate
/ nframes. I assume that computing parameter trajectories basically
means interpolating, and that inevitably introduces some latency if
you want the audio and control streams to be in sync (e.g., nframes
for linear interpolation, 2 * nframes for splines, etc.).
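To illustrate the linear case, here is a minimal sketch (all names
are mine, this is not LV2 API): the plugin gets one value per block
and ramps towards it across nframes, so the control signal
effectively lags the audio by one block.

typedef struct {
    float prev;  /* parameter value received for the previous block */
} ParamInterp;

/* Ramp linearly from the previous block's value to `target` over
 * nframes samples (nframes > 0); the parameter acts as a gain here. */
static void run_block(ParamInterp *pi, float target,
                      const float *in, float *out, unsigned nframes)
{
    float step = (target - pi->prev) / (float)nframes;
    float g = pi->prev;
    for (unsigned i = 0; i < nframes; i++) {
        g += step;
        out[i] = g * in[i];
    }
    pi->prev = target;
}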
In practice, this would mean that we might want to have two flavours
of each and every algorithm: one for offline use that keeps audio and
controls in sync (and adds latency) and another for live use that
doesn't keep the sync (since in practice a couple of blocks of
misalignment between audio and control is unnoticeable when
controlling through GUIs, MIDI, OSC or similar).
The way I usually coped with parameter trajectories (not really) was
to add things like leaky integrators and use filtered parameters at
the audio rate (mostly for smoothing, to avoid clicks and noise), but
more orthodox resampling equivalents (whether uniform or not) may
have performance advantages, and are at least alternatives worth
considering.
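For reference, a minimal sketch of such a leaky integrator (names are
mine): a one-pole lowpass ticked at audio rate pulls the effective
parameter towards its target, avoiding clicks at the cost of a short
glide.

#include <math.h>

typedef struct {
    float state;  /* current smoothed parameter value */
    float coeff;  /* pole coefficient, 0 < coeff <= 1  */
} Smoother;

static void smoother_init(Smoother *s, float init,
                          float time_const_s, float sample_rate)
{
    s->state = init;
    /* coefficient for an exponential glide with the given time constant */
    s->coeff = 1.0f - expf(-1.0f / (time_const_s * sample_rate));
}

static inline float smoother_tick(Smoother *s, float target)
{
    s->state += s->coeff * (target - s->state);
    return s->state;
}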
In practical terms, especially w.r.t. LV2, there may be a third way:
let the host pass a limited number of future parameter samples at
each run() (the number could be negotiated at instantiation time), so
that the plugin doesn't have to add latency to the audio streams in
any case. This would only be supported by "offline hosts". If the
block sizes are variable, future block sizes would have to be passed
as well (argh?). But I don't know if this really makes sense or has
downsides... ideas, folks?
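To make the idea concrete, a purely hypothetical sketch (nothing like
this exists in LV2 today, all names invented):

typedef struct {
    unsigned        n_future;     /* number of future blocks provided */
    const float    *values;       /* one parameter value per block    */
    const unsigned *block_sizes;  /* per-block sizes, only needed
                                     when block sizes are variable    */
} ParamLookahead;

The host would fill one such struct per controllable parameter before
each run(), and the negotiated n_future would bound how far ahead the
plugin can look.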
Stefano