> On Sun, May 27, 2012 at 09:59:29PM +0300, Stefano D'Angelo wrote:
>> If I understand correctly an implication would be that you get
>> uniform sampling of parameter signals with control rate =
>> sample rate / nframes. I assume that computing parameter
>> trajectories basically means interpolating, and that inevitably
>> introduces some latency if you want the audio and control
>> streams to be in sync (e.g., nframes for linear interpolation,
>> 2 * nframes for splines, etc.).
> Indeed, interpolation introduces delay. If control values are
> delivered at a constant rate it's quite easy for a host to
> compensate for this. What is needed in that case is that the
> delay is *defined*. But there is another issue you touch on.
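
To make the delay concrete, here is a minimal sketch of the usual
per-block linear ramp (all names are hypothetical, not from any
real plugin API). The value the host delivers for a block is only
reached on that block's last sample, so the control signal lags
the curve by nframes samples:

    #include <stddef.h>

    static float current;  /* value at end of the previous block */

    /* Ramp the parameter linearly from its previous value to the
     * value the host delivered for this block; 'target' is reached
     * only on the last sample, i.e. nframes samples late. */
    void process_block(float *out, const float *in,
                       size_t nframes, float target)
    {
        float step = (target - current) / (float) nframes;
        size_t i;

        for (i = 0; i < nframes; i++) {
            current += step;
            out[i] = in[i] * current;  /* e.g. a plain gain */
        }
    }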
> If automation data is the result of 'real time' input, then
> that delay doesn't matter - the user, while creating the data
> using some GUI widget or a HW controller, will compensate for
> it because he/she simply has no other choice. If the host then
> delivers the data at the same time (w.r.t. the audio) as when
> it was recorded, the plugin will do the same thing and the
> user will hear the result he/she wanted.
> If OTOH the automation data is created by e.g. drawing some
> curves in a graphical interface (probably with the audio
> waveform as a reference) then you don't want that delay, and
> the host should compensate for it.
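
If the delay is defined as above (nframes for a linear ramp), that
compensation can be as simple as shifting the automation timeline:
the host evaluates the curve one block ahead. A sketch, again with
hypothetical names ('struct curve' standing for whatever the host
stores drawn automation in):

    #include <stddef.h>
    #include <stdint.h>

    struct curve;  /* host's representation of an automation curve */
    float curve_value_at(const struct curve *c, uint64_t frame);

    /* Deliver the value the curve specifies for the *end* of the
     * coming block, so the plugin's linear ramp reaches it exactly
     * on time. */
    float value_for_block(const struct curve *c,
                          uint64_t block_start, size_t nframes)
    {
        return curve_value_at(c, block_start + (uint64_t) nframes);
    }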
> In most cases (general audio effects in a DAW) it just doesn't
> matter much. The only thing (apart from synthesis/instruments)
> that needs accurate timing is the 'mute' function when used to
> remove unwanted noises etc. But that is usually handled by the
> host itself, and anyway automation is not the best way to deal
> with such things - editing is.
So far so good, but in practical terms how could a plugin API
support all these use cases without requiring the plugin writer
to do twice the work? The only solution I can think of is
"getting future values" when a host can provide them, but maybe
there is a better way?
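
For concreteness, this is the kind of thing I have in mind (all
names hypothetical): the host timestamps each control value with
the block-relative frame at which it should be reached, so only
the host needs to know whether the data is pre-drawn or live, and
the plugin's interpolation code stays the same either way.

    #include <stdint.h>

    /* A host that knows the future (pre-drawn curves) schedules
     * events ahead of time; a host forwarding live input sets
     * frame = 0 and accepts the interpolation delay. */
    typedef struct {
        uint32_t frame;  /* when, within this block, to reach it */
        float    value;
    } ctrl_event;

    void run(void *plugin_handle, uint32_t nframes,
             const ctrl_event *events, uint32_t nevents);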
Stefano