On Sun, 2011-09-25 at 22:02 +0000, Fons Adriaensen wrote:
> On Sun, Sep 25, 2011 at 05:11:13PM -0400, David Robillard wrote:
> > This problem was always the most blatant LADSPA inadequacy for
> > modulars to me...
>
> At least LADSPA allows doing this (having control inputs that are at
> audio rate). And if a LADSPA plugin designed to work in AMS (such as
> the MCP ones) doesn't make much sense in an environment that assumes
> that audio rate == audio signal, so be it. There is no rule that says
> a plugin must be usable in all hosts.
Sure. LADSPA and (core) LV2 are identical in this respect. They have
exactly the same port types. It is really ugly and inconvenient to use
audio ports as control ports though.
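For example, the kind of thing I mean looks roughly like this: a
LADSPA-style gain stage whose "gain" input is an audio-rate port, so it
can be patched like any other signal, but has to be filled and read as a
whole buffer even when it is really just a knob. (Minimal sketch only;
the Gain struct, run_gain, and the port layout are invented for
illustration.)

#include <ladspa.h>

typedef struct {
    const LADSPA_Data* input;  /* audio input */
    const LADSPA_Data* gain;   /* audio-rate "control": one value per sample */
    LADSPA_Data*       output; /* audio output */
} Gain;

static void run_gain(LADSPA_Handle instance, unsigned long sample_count)
{
    Gain* g = (Gain*)instance;
    for (unsigned long i = 0; i < sample_count; ++i) {
        /* The control is just another signal and must be read per sample;
           a host that only has a knob for it must fill an entire buffer
           with one repeated value every cycle to drive this port. */
        g->output[i] = g->input[i] * g->gain[i];
    }
}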
> In particular, synth plugins (VCOs, VCFs, etc.) are fundamentally
> different from general audio processing ones (EQ, dynamics, etc.).
> This doesn't exclude the possibility that some may be useful in
> both contexts.
Yes, but many, if not most, are usable in both contexts. Filters of any
variety are the best examples.
> One of the reasons for this is that 'multichannel' is not the same
> as 'polyphonic'. For a polyphonic plugin (typically found in a
> synthesis environment), the only data shared between voices are
> the values of GUI widgets. Everything else (internal state, and
> control inputs that are not mapped to a GUI widget but appear as
> a patchable port) is 'per voice'. It even makes sense to mark
> each voice as 'active' or not so you don't waste CPU cycles
> on inactive ones, and to distribute the per-voice processing
> over multiple CPUs.
A tricky one no one has yet tackled (anywhere) AFAIK.
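To make the split concrete, per-voice state along those lines might look
something like this (a rough sketch, no particular API; every name here,
the 48 kHz rate, and the sine voice are invented for illustration):

#include <math.h>
#include <stdbool.h>
#include <stddef.h>

#define MAX_VOICES 16
#define TWO_PI     6.28318530718f

typedef struct {          /* shared between voices: GUI widget values only */
    float master_gain;
} SynthShared;

typedef struct {          /* per voice: patchable inputs and internal state */
    bool  active;         /* skip this voice entirely when inactive */
    float freq;           /* patchable control input, one per voice */
    float phase;          /* internal state, one per voice */
} SynthVoice;

typedef struct {
    SynthShared shared;
    SynthVoice  voice[MAX_VOICES];
} Synth;

static void synth_run(Synth* s, float* out, size_t n_frames)
{
    for (size_t i = 0; i < n_frames; ++i)
        out[i] = 0.0f;

    for (int v = 0; v < MAX_VOICES; ++v) {
        SynthVoice* vc = &s->voice[v];
        if (!vc->active)
            continue;     /* no CPU cycles wasted on inactive voices */

        /* Each voice touches only its own state plus read-only shared
           data, so (given per-voice output buffers) the per-voice work
           could be spread over multiple CPUs. */
        for (size_t i = 0; i < n_frames; ++i) {
            out[i] += s->shared.master_gain * sinf(vc->phase);
            vc->phase += TWO_PI * vc->freq / 48000.0f;
        }
    }
}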
> For a multichannel plugin (typically found in a general audio
> processing environment, e.g. a mixer) this is not the case: for
> example, the gain factor used by a multichannel compressor does
> not depend only on the input level of the channel it is applied
> to, but either on all channels, or on a fixed single one (as for
> Ambisonics, or when using a sidechain input).
Port "groups" and "roles" (within those groups) describe all this kind
of thing nicely. This is simple, purely metadata stuff.
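As a concrete example of the structure above (a rough sketch, not tied
to LADSPA or LV2; all names and constants are invented): a two-channel
compressor whose gain comes from a single detector that watches every
channel. Group/role metadata only has to say which ports form the pair
and which one, if any, is a sidechain.

#include <math.h>
#include <stddef.h>

#define N_CHANNELS 2

typedef struct {
    float threshold;   /* linear threshold, e.g. 0.25f */
    float env;         /* shared detector state, common to all channels */
} Comp;

static void comp_run(Comp* c, const float* in[N_CHANNELS],
                     float* out[N_CHANNELS], size_t n_frames)
{
    for (size_t i = 0; i < n_frames; ++i) {
        /* Common part: one envelope derived from all channels. */
        float peak = 0.0f;
        for (int ch = 0; ch < N_CHANNELS; ++ch) {
            const float a = fabsf(in[ch][i]);
            if (a > peak)
                peak = a;
        }
        c->env = 0.999f * c->env + 0.001f * peak;  /* crude smoothing */

        const float gain =
            (c->env > c->threshold) ? c->threshold / c->env : 1.0f;

        /* Per-channel part: the same gain applied to each channel. */
        for (int ch = 0; ch < N_CHANNELS; ++ch)
            out[ch][i] = gain * in[ch][i];
    }
}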
> Another typical feature of multichannel processors is that the
> CPU cycles required for the 'per channel' part are a tiny
> fraction of what is required for the 'common' part (for example,
> calculating interpolated internal parameter trajectories). Which
> also means you wouldn't distribute channels over multiple CPUs.
> This means that you can't just replicate a single-channel plugin:
> either you have specific versions for each channel count (not
> a practical solution), or the plugin system has to explicitly
> support the separation of the 'common' and 'per channel' code.
I think there is more overlap between these cases than this implies, or
at least there can be. A polyphonic synth *could* have a very large
portion of its processing time spent on common shared data.
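If a plugin API supported that separation directly, the shape of it
might be roughly this (entirely hypothetical; none of these types or
callbacks exist in LADSPA or LV2): the host calls the common part once
per cycle, and the cheap per-channel part once for however many channels
it chose to replicate.

#include <stdint.h>

typedef void* PluginHandle;

typedef struct {
    /* Called once per cycle: the expensive shared work, e.g. computing
       an interpolated gain trajectory from all inputs. */
    void (*run_common)(PluginHandle handle, uint32_t n_frames);

    /* Called once per channel: cheap code that applies the shared
       result.  The host decides how many channels to replicate. */
    void (*run_channel)(PluginHandle handle, uint32_t channel,
                        const float* in, float* out, uint32_t n_frames);
} ReplicableDescriptor;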
There is probably an elegant solution that can cover 'replication' in
all cases, but any solution would require pretty significant
compatibility breaks. Maybe some day. For now we are stuck with
single-channel audio inputs and outputs. When I think about this sort
of thing, I wish we just had more "message"-like ports, so we could add
things like multiple channels without having to completely change what
you actually connect a port pointer to. It would also make the question
of whether or not to replicate this or that control no longer a
compatibility issue. It would make a lot of things simpler, as
consistent computational models tend to do...
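The sort of "message"-like port payload I have in mind might look
roughly like this (purely a sketch; nothing like this exists in LV2):
the port pointer always points at one header, and the channel count
travels with the data, so going from one channel to many is not a
connection or ABI change.

#include <stddef.h>
#include <stdint.h>

typedef struct {
    uint32_t n_channels;   /* how many channels follow */
    uint32_t n_frames;     /* frames per channel */
    float    data[];       /* n_channels * n_frames floats, channel-major */
} MultiChannelBuf;

/* Convenience accessor: the start of one channel's samples. */
static inline float* mcb_channel(MultiChannelBuf* buf, uint32_t ch)
{
    return buf->data + (size_t)ch * buf->n_frames;
}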
(All that said, replication doesn't seem all that pressing to me at the
moment, at least relatively speaking. Nobody's work is being held back
for lack of it that I know of.)
-dr