I wrote:
Maybe I made a
mistake there, confusing a plugin chain with analog
latencies. Ah, but then, isn't a plugin chain like a 'bucket brigade'?
Each stage can't know what the previous stage does to the signal until
that previous stage has processed a whole buffer.
("He said, waiting to be clobbered by the responses.")
Oops, yeah that
happens, but all within one process cycle.
Each plugin in the chain is run in succession on the data, right?
My mistake. Must be tired, the ol' brain wanders...
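Just to check my understanding, here's the picture I have now, as a
minimal sketch (the function and argument names are mine, and it
assumes a host that has already connected every plugin's ports to a
shared in-place buffer):

  #include <ladspa.h>

  /* One process cycle for a chain: each plugin runs over the whole
   * buffer in turn, so stage N sees stage N-1's output before the
   * cycle ends -- no extra buffer of latency per stage. */
  void process_chain(const LADSPA_Descriptor **descs,
                     LADSPA_Handle *handles,
                     unsigned long nplugins,
                     unsigned long nframes)
  {
      for (unsigned long i = 0; i < nplugins; ++i)
          descs[i]->run(handles[i], nframes);
  }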
I have a question, sort of related to that latency discussion:
A plugin's 'run' length need not be related to the audio buffer size,
right?
I'm thinking of breaking up my 'single-run-per-process' into multiple
'run-lengths-between-control-or-program-changes-or-MIDI-events',
per process.
I know the DSSI docs mention doing this for program changes 'between
notes', but they don't specifically mention handling control changes
or MIDI events that way. It seems reasonable, though.
So not only will I get more or less 'sample accurate' control changes,
program changes, and MIDI events, but applying a control's successive
changes won't take so long, since I won't be waiting for the next
process cycle to set each one. (I found that when processing a string
of dssi-vst OSC control change notifications for one control, the VST
expects *all* of the changes to be sent back to it via the LADSPA
control port, so the quicker the port value can be changed, the
better. That's why I want to break up the runs.)
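Here's roughly what I mean, as a sketch (the HostEvent struct, the
port arguments, and run_split are all made up for illustration; only
connect_port() and run() are real LADSPA calls, and I'm relying on
the spec allowing connect_port() between run() calls):

  #include <ladspa.h>

  /* Hypothetical host-side event: a control change scheduled at a
   * frame offset within the current process cycle. */
  typedef struct {
      unsigned long frame;  /* offset into the cycle, sorted ascending */
      unsigned long port;   /* index of a LADSPA control input port */
      LADSPA_Data   value;  /* new value for that port */
  } HostEvent;

  /* Run one plugin over nframes, splitting the run at each event so
   * each new port value takes effect (close to) sample-accurately.
   * ctrl_vals is the host's control-port memory, the same locations
   * handed to connect_port() at instantiation time. */
  void run_split(const LADSPA_Descriptor *d, LADSPA_Handle h,
                 LADSPA_Data *in, LADSPA_Data *out,
                 unsigned long in_port, unsigned long out_port,
                 const HostEvent *ev, unsigned long nev,
                 LADSPA_Data *ctrl_vals, unsigned long nframes)
  {
      unsigned long pos = 0;
      for (unsigned long i = 0; i <= nev; ++i) {
          unsigned long end = (i < nev) ? ev[i].frame : nframes;
          if (end > pos) {
              /* Re-point the audio ports at the current offset and
               * run the chunk up to the next event (or cycle end). */
              d->connect_port(h, in_port, in + pos);
              d->connect_port(h, out_port, out + pos);
              d->run(h, end - pos);
              pos = end;
          }
          if (i < nev)
              ctrl_vals[ev[i].port] = ev[i].value;
      }
  }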
Is this a good method?
I'm also asking because...
If I carry this logic to an extreme, what might be wrong with using,
say, 128 'single sample' runs for an audio buffer size of 128?
Can plugins handle 'single sample' runs (or even dynamically varying
run lengths)?
My proposed method implies this would actually happen if control
changes, program changes, or MIDI events needed to occur very close
together.
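Taken to the extreme it would degenerate into something like this
(same made-up names as the sketch above; as far as I can tell the
LADSPA header puts no lower bound on SampleCount, but the per-call
overhead could be significant):

  /* 128 one-sample runs over a 128-frame buffer. */
  for (unsigned long f = 0; f < 128; ++f) {
      d->connect_port(h, in_port, in + f);
      d->connect_port(h, out_port, out + f);
      d->run(h, 1);
  }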
Thanks. Tim.