On Mon, Mar 03, 2003 at 12:15:13PM +0100, David Olofson wrote:
On Monday 03 March 2003 07.49, torbenh(a)gmx.de wrote:
[...]
Yes,
although it might make sense to have a standard
"SILENCE_THRESHOLD" control, so hosts can present it as a central
control for the whole net, or for a group of plugins (mixer
inserts, for example), or whatever. Where it's not present, the
host will assume that the plugin considers only absolute
silence, provided the plugin supports silence at all.
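Host-side, such a threshold boils down to a trivial peak check. A minimal sketch, assuming the SILENCE_THRESHOLD semantics described above (the helper name is my own, nothing here is settled XAP API):

```c
#include <math.h>

/* A buffer counts as silent if no sample exceeds the threshold.
 * With a threshold of 0.0, only absolute silence qualifies --
 * the default behavior described above. */
static int is_silent(const float *buf, int frames, float threshold)
{
    int i;
    for (i = 0; i < frames; ++i)
        if (fabsf(buf[i]) > threshold)
            return 0;
    return 1;
}
```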
ok ... so we are going one level up now...
standard names for controls...
Yeah, that's how we intend to hint which controls go where when
connecting plugins. It's handy for users, and hosts can use it for
automatic "cable" connections. "PITCH goes to PITCH, VELOCITY goes to
VELOCITY, ..."
yes... this is very nice...
i am dreaming of something like this for the galan too...
[...]
how will the silence be indicated ?
Each buffer will probably be a struct, containing a buffer pointer
(which will always be valid for outputs) and a flag field. At least,
that's the way I do it in the Audiality mixer, and it's quite handy.
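In C, that kind of buffer descriptor might look roughly like this (struct and flag names are my own guesses, not the actual Audiality or XAP definitions):

```c
#define BUF_SILENT 0x0001  /* buffer contents are known to be all zero */

typedef struct {
    float        *data;   /* always a valid pointer for outputs */
    unsigned int  flags;  /* e.g. BUF_SILENT */
} audio_buffer;

/* A gain stage that exploits the flag: silent in means silent out,
 * with no per-sample work at all. */
static void process_gain(const audio_buffer *in, audio_buffer *out,
                         float gain, int frames)
{
    int i;
    if (in->flags & BUF_SILENT) {
        out->flags |= BUF_SILENT;  /* propagate silence, skip the DSP */
        return;
    }
    out->flags &= ~BUF_SILENT;
    for (i = 0; i < frames; ++i)
        out->data[i] = in->data[i] * gain;
}
```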
yes... i like it like that.
NULL is not so good for inplace processing :)
Right. :-) I use that in Audiality for the FX plugins, but those can
only be stereo in, stereo out. Obviously, it doesn't work with the
single pointer inplace version of the process() call, but that's a
silly idea anyway. (Inplace is better done with separate in and out
pointers and a replacing process() call.)
ok. agreed.
is there inplace processing in XAP ?
I'm not sure if we have decided on this. At first, though, it seems
like it would just be a matter of a flag saying whether or not a
plugin can safely be given a shared buffer for an input/output pair.
However, it's far from that simple!
Just consider multichannel plugins. These are often inplace safe
within channels, but *not* across channels. This is a result of
channels being processed one at a time, rather than all in parallel.
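A small illustration of that failure mode, assuming a hypothetical per-channel gain plugin (each channel reads only its own input, one channel processed at a time):

```c
/* Because each channel reads only its own input and the channels are
 * processed sequentially, out0 may alias in0 (inplace-safe *within* a
 * channel). But if out0 aliases in1, channel 1's input is clobbered
 * before the second loop reads it -- inplace-unsafe *across* channels. */
static void gain2(const float *in0, const float *in1,
                  float *out0, float *out1, float g, int frames)
{
    int i;
    for (i = 0; i < frames; ++i)   /* channel 0 first... */
        out0[i] = in0[i] * g;
    for (i = 0; i < frames; ++i)   /* ...then channel 1 */
        out1[i] = in1[i] * g;
}
```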
a multichannel plugin only makes sense if the channels depend on each other.
There are loads of non-obvious issues related to this. The related
"output modes" (replacing, adding, possibly others) add another bunch
of issues, such as the need for standardized gain controls to make
adding mode useful. (VST doesn't have it, and as a result, process()
[the adding version] is essentially useless in most situations.)
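The difference between the two modes, in sketch form (function names hypothetical): adding mode accumulates into whatever is already in the output buffer, so without a per-output gain every source lands at full level, which is why a standardized gain control is needed to make it useful.

```c
/* Replacing mode: the output buffer is simply overwritten. */
static void run_replacing(const float *in, float *out, int frames)
{
    int i;
    for (i = 0; i < frames; ++i)
        out[i] = in[i];
}

/* Adding mode: the plugin mixes into the buffer. The gain parameter is
 * what makes this usable as a mixing primitive at all. */
static void run_adding(const float *in, float *out, float gain, int frames)
{
    int i;
    for (i = 0; i < frames; ++i)
        out[i] += in[i] * gain;
}
```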
hmmm... right...
The general impression is that to be truly useful, it gets pretty
complicated. A popular opinion is that we should support only one
output mode, and just have a plugin global flag to mark plugins as
inplace safe.
I'm on the fence myself, though I have a feeling that adding/mixing
mode could make sense for plugins with low CPU/output ratios... (That
is, fast plugins, or plugins with loads of outputs.)
hmm... you are right...
for run_adding the graph sorter would need to recognize the
situation of a mixer bus:
plugin -> gain
               \
plugin -> gain ----- out
               /
plugin -> gain
if run_adding was optional it would be ok for me...
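What the graph sorter would have to arrange for that bus, sketched host-side (all names are hypothetical): clear the bus buffer exactly once, then run every branch in adding mode into it.

```c
#include <string.h>

/* One mixer-bus branch accumulating into the shared bus buffer. */
static void branch_run_adding(const float *in, float *bus,
                              float gain, int frames)
{
    int i;
    for (i = 0; i < frames; ++i)
        bus[i] += in[i] * gain;
}

/* The schedule the sorter must produce: zero the bus once, then let
 * each source add its scaled contribution. */
static void mix_bus(const float *srcs[], const float *gains,
                    int nsrcs, float *bus, int frames)
{
    int s;
    memset(bus, 0, (size_t)frames * sizeof *bus);
    for (s = 0; s < nsrcs; ++s)
        branch_run_adding(srcs[s], bus, gains[s], frames);
}
```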
[...]
a fixed DC of 1000 means something completely different to a
frequency modulator than silence...
but silence is only dcsilence(0)
Sure - but then it's not silence, IMHO.
Anyway, buffers with "silent" flags *are* already structured data of
sorts, but we have to draw the line somewhere. Or should we express
audio as linear ramps, splines, wavelets, or something else...?
:-) well that would be nice...
but you are right we have to draw the line somewhere...
that's where i started with the dcsilence stuff :-)
it was just an idea...
(At least in XAP, we don't use audio inputs for control data. We use
sample-accurate timestamped events with linear ramping.)
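Such a ramped control event might be sketched like this (field names are guesses at the XAP style, not the real event struct):

```c
/* At frame `when`, start moving the control toward `target`,
 * reaching it after `frames` samples (0 = jump immediately). */
typedef struct {
    unsigned when;     /* timestamp, in frames from block start */
    float    target;   /* value to reach */
    unsigned frames;   /* ramp duration */
} ramp_event;

/* Per-sample evaluation of one ramp, as a receiving plugin might do it;
 * `start` is the control's value before the event. */
static float ramp_value(float start, const ramp_event *ev, unsigned frame)
{
    if (ev->frames == 0 || frame >= ev->when + ev->frames)
        return ev->target;
    if (frame < ev->when)
        return start;
    return start + (ev->target - start)
                 * (float)(frame - ev->when) / (float)ev->frames;
}
```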
a frequency modulator takes audio input as i see it.
Right.
even if linear ramping were extended to polynomials
and other nonlinear things, it would never have the power
of an audio rate control...
Of course - but an audio rate control input is an *audio* input. Audio
inputs *could* inherit some of the control semantics (specifically
"fixed value"), but I think this opens up a big can of worms. I
strongly doubt that your average plugin will have much use for DC
input optimization, and I don't think your average plugin author
would want to be *forced* to deal with it.
It can be made optional, and hosts can translate as needed, but if
it's going into the API, there has to be a good motivation.
yes... lets leave it like this...
[...]
Yeah, I understand that, but I don't see how plugins that use any
significant amount of cycles can make use of this. "Register instead
of stream" kind of optimizations are pretty much irrelevant for most
plugins, I think, and all the special cases it would require (one case
for every likely combination of active, silent, or DC inputs) are
just not worth it.
i guess you are right... but if you think this through a bit further
it is a nice idea... making up a different, very complex protocol
It's a nice idea in theory, but your own words "very complex protocol"
are what turn me away from this approach. Any useful API is going to
be quite complex enough anyway.
yes lets leave it a nice idea...
it should be handled like extended ramping
events...
If you think about how to actually implement sending and receiving
code for that, I doubt you'll ever want to implement plugins that
support this... :-)
Keep in mind that most plugins will have multiple inputs and outputs
per channel, and as a result, you'll be confronted with *mixed* input
modes. That is, one inner loop with ramped control and one with audio
rate controls (for the extended ramping events) just won't work.
You'll have to process each control separately, and at that point, it
will probably run slower than hardcoded audio rate controls - even
with host side control->audio conversions.
I think most people would rather spend their time tuning the DSP code
than messing with this.
this idea is not feasible for XAP...
i would only give it a try for a small plugin set consisting of
osc, dc, filter, delay...
--
torben Hohn
http://galan.sourceforge.net -- The graphical Audio language