On 26/09/11 05:08 AM, Fons Adriaensen wrote:
> On Sun, Sep 25, 2011 at 06:20:22PM -0400, David Robillard wrote:
>>> In particular, synth plugins (VCOs, VCFs, etc.) are fundamentally
>>> different from general audio processing ones (EQ, dynamics, etc.).
>>> This doesn't exclude the possibility that some may be useful in
>>> both contexts.
>> Yes, but many, if not most, are usable in both contexts. Filters
>> of any variety are the best examples.
> I'd contest the 'most'. Can you give any more examples?
>
> For filters, only 'static' ones (only GUI controls, same processing
> on all channels or voices) could be used in both contexts. Once any
> parameter becomes per voice and 'voltage controlled' the similarity
> ends.
I am not considering per voice at all since we don't have that
technology anyway.

"Once any parameter becomes 'voltage controlled'..."

It's still a parameter a user might want to set manually if it's not
being "voltage controlled", and it's still a parameter that should be
presented sensibly in a non-modular host like a DAW or effects rack.
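
To make that concrete, a rough one-pole lowpass in plain C (no
particular plugin API; every name here is invented): the cutoff reads
per-sample from a CV buffer when one is patched in, and falls back to
the manually set control value otherwise.

#include <math.h>
#include <stddef.h>

typedef struct {
    float        sample_rate;
    float        cutoff_manual;  /* set by hand (GUI or automation)  */
    const float *cutoff_cv;      /* NULL when no CV source connected */
    const float *in;             /* audio input buffer               */
    float       *out;            /* audio output buffer              */
    float        z1;             /* filter state                     */
} OnePole;

static void onepole_run(OnePole *f, size_t nframes)
{
    for (size_t i = 0; i < nframes; ++i) {
        /* The CV input, when connected, overrides the manual value. */
        const float cutoff = f->cutoff_cv ? f->cutoff_cv[i]
                                          : f->cutoff_manual;
        /* One-pole smoothing coefficient for a cutoff given in Hz. */
        const float a = 1.0f - expf(-6.2831853f * cutoff
                                    / f->sample_rate);
        f->z1 += a * (f->in[i] - f->z1);
        f->out[i] = f->z1;
    }
}

The DSP is identical in both contexts; only where `cutoff` comes from
changes.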
> Because if you start to analyse these things (and there are many
> more aspects to it) it becomes clear that current plugin standards
> completely ignore all of this, they get in the way rather than
> provide the necessary hooks, and you better start from zero.
>
> Just consider the following list:
>
> - 'Voltage control'
> - MIDI control
> - OSC control
> - Save/restore settings
> - Automation
>
> Traditionally a host will try to do any of them using only the set
> of 'control ports' exposed by a plugin, or by hooking into the
> GUI <-> DSP communication if the plugin has its own GUI. But the
> requirements for each of these are quite different and usually in
> conflict.
MIDI and OSC are more or less logistically equivalent. There are
really only two fundamentally different things here with respect to
how the plugin itself works: control via messages (MIDI/OSC) and
control via signals (voltage control).
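
By "logistically equivalent" I mean both arrive as timestamped
messages and can be normalized to one internal form before the DSP
ever sees them. A sketch, with the ParamMsg type and the
CC-to-parameter mapping invented purely for illustration:

#include <stdint.h>

typedef struct {
    uint32_t frame;  /* offset into the current audio block */
    uint32_t param;  /* target parameter index              */
    float    value;  /* normalized to 0..1                  */
} ParamMsg;

/* MIDI CC: 7-bit value scaled to 0..1. */
static ParamMsg from_midi_cc(uint32_t frame, uint8_t cc, uint8_t val)
{
    ParamMsg m = { frame, cc, (float)val / 127.0f };
    return m;
}

/* OSC: already a float, just carries a parameter index. */
static ParamMsg from_osc(uint32_t frame, uint32_t param, float val)
{
    ParamMsg m = { frame, param, val };
    return m;
}

Signal ("voltage") control has no events at all: one value per frame,
as in the filter sketch above.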
>> I think there is more overlap between these cases than this
>> implies, or at least there can be. A polyphonic synth *could* have
>> a very large portion of its processing operate on common shared
>> data.
> As long as voices are independent there isn't much to share except
> e.g. big lookup tables as used in some oscillators. Even if the host
> replicates the plugin for each voice (which it could do) you'd want
> the instances to share such data. Which again requires some support
> from the plugin standard.
Right.
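
The hook could be as small as a host-provided feature that all the
replicated instances see. A rough sketch in C, where the SharedData
interface is entirely hypothetical (no existing standard has it):

#include <math.h>
#include <stdlib.h>

typedef struct {
    void *handle;
    /* Return the object registered under `key`, calling `create`
     * once for the first instance and caching it for the rest. */
    void *(*acquire)(void *handle, const char *key,
                     void *(*create)(void));
    void  (*release)(void *handle, const char *key);
} SharedData;

enum { TABLE_LEN = 4096 };

static void *make_sine_table(void)      /* error handling elided */
{
    float *t = malloc(TABLE_LEN * sizeof(float));
    for (int i = 0; i < TABLE_LEN; ++i)
        t[i] = sinf(6.2831853f * (float)i / TABLE_LEN);
    return t;
}

/* Each voice instance calls this once at instantiation; only the
 * first call actually builds the table. */
static const float *voice_get_table(const SharedData *sd)
{
    return sd->acquire(sd->handle, "example.org/sine-table",
                       make_sine_table);
}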
> There is another thing missing in current synthesis hosts (AMS, and
> AFAIK also Ingen): an explicit definition of the point where a
> polyphonic patch mixes its voices into a single audio signal. Some
> processing should be done after this point, e.g. reverb and some
> other effects. So if you do this in a plugin, it becomes
> 'polyphonic' at one side, and 'multichannel' at the other. Some more
> metadata required...
Ingen allows the user to set each module as either polyphonic or not.
Mixing down is done wherever necessary (i.e. wherever a poly output is
connected to a mono input). I think maybe this is a bit more
fine-grained than is useful, and have considered having a single
well-defined (internal) module that mixes down as you've described, or
maybe just not allowing poly and mono in a single patch at all...
Making sure things like reverb etc. are put after the polyphonic stuff
is a user decision in either case, though.
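
Such an internal mixdown module would be nothing more than a per-frame
sum over the voice buffers (whether to scale by 1/nvoices or
equal-power is a separate design choice); roughly:

#include <stddef.h>

static void mixdown(const float *const *voices, size_t nvoices,
                    float *mono_out, size_t nframes)
{
    for (size_t i = 0; i < nframes; ++i) {
        float acc = 0.0f;
        for (size_t v = 0; v < nvoices; ++v)
            acc += voices[v][i];    /* sum voice v at frame i */
        mono_out[i] = acc;
    }
}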
With respect to plugins, supporting poly/multichannel would inherently
be a per-port thing, i.e. specific ports would be labeled as
polyphonic or multichannel, so I don't think such a plugin would cause
any trouble.
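
The metadata could be as small as a per-port flag. A sketch with an
invented descriptor type (not from any existing standard), describing
the poly-in/multichannel-out reverb you mentioned:

typedef enum { PORT_MONO, PORT_POLY } PortKind;

typedef struct {
    const char *symbol;    /* e.g. "in", "out_left"       */
    PortKind    kind;      /* per-voice or single channel */
    int         is_input;  /* nonzero for inputs          */
} PortDesc;

/* Polyphonic on the input side, a plain stereo pair on the output. */
static const PortDesc reverb_ports[] = {
    { "in",        PORT_POLY, 1 },
    { "out_left",  PORT_MONO, 0 },
    { "out_right", PORT_MONO, 0 },
};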
-dr