On Wednesday 05 February 2003 09.31, Sami P Perttu wrote:
[...]
> The more I'm thinking about this, the more biased I am toward just
> one process() that replaces values, plus an in-place-is-okay hint.
> No gain or DRY/WET controls. The host can probably reserve some
> host-global buffers for mixing, no? The cache impact wouldn't be
> big in that case. Somebody should do some actual measurements to
> find out.
Yep, although my experience with optimizing Audiality on PC133 DRAM
machines is that it's *very* easy to end up memory bound, even if
you're free to reuse buffers and whatnot any way you like. Memory is
just dog slow in relation to CPU bandwidth, and 333 and 400 MHz DDR
isn't *that* much faster, really...
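To make the model concrete, a replace-only process() with an in-place hint could look roughly like this (a hypothetical sketch; the struct and function names are made up, not from any actual API):

```c
#include <stddef.h>

/* Hypothetical gain plugin; layout and names are illustrative only. */
typedef struct {
    float gain;
} Plugin;

/* Replace semantics: the output buffer is always overwritten, never
 * accumulated into. If the plugin has set the in-place-is-okay hint,
 * the host may pass in == out; with replace semantics that is safe
 * here, since each sample is read before its slot is written. */
static void process(Plugin *p, const float *in, float *out, size_t n)
{
    for (size_t i = 0; i < n; ++i)
        out[i] = in[i] * p->gain;
}
```

With this model, any accumulation (mixing into a shared bus) is the host's job, which is where the reserved host-global mixing buffers would come in.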
> A small but seemingly annoying can of worms is the case where you
> have WET and DRY, more than one audio output, and some of the
> outputs use the same buffer. Now you have to require that the
> plugin writes its outputs in the right order, to enable the host
> to calculate suitable DRY controls.
Yes, that's a good point. Didn't think of that. With just the WET
control, there's no problem with pointing multiple outputs to the
same buffer, but the DRY control would scale the buffer once per
output... *heh*
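The hazard is easy to see in a sketch. Assuming a hypothetical mixing model where each output is merged into its destination as buf = dry*buf + wet*out (the function name is made up):

```c
#include <stddef.h>

/* One output's worth of WET/DRY mixing (hypothetical model): the
 * destination keeps 'dry' times its old contents and gets 'wet'
 * times the plugin output added on top. */
static void mix_output(float *buf, const float *out,
                       float wet, float dry, size_t n)
{
    for (size_t i = 0; i < n; ++i)
        buf[i] = dry * buf[i] + wet * out[i];
}
```

If two outputs point at the same buffer, mix_output() runs twice on it, so the original contents end up scaled by dry*dry instead of dry - exactly the once-per-output scaling described above.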
> [...pitch...]
> I'm still having problems understanding why logarithmic frequency
> is better than linear. Doesn't it violate the principle of keeping
> plugins as simple as possible? Most plugins need linear frequency.
> How is the conversion done? Well, maybe there could be a control
> iterator that provides for it. Please tell me about your plan.
First of all, I think we have some terminology confusion here. We like
to think of "linear pitch" as something that's linear in musical
terms, ie 1.0/octave. Hz, rad/s and samples/period are exponential
units, from this perspective.
Anyway, the reason for not using Hz or similar in the API is that such
units are harder to deal with, pretty much no matter what you want to
do. The only exception is when you actually want to have some
periodic action driven by it, ie sound. Thus, rather than using Hz in
the API and having event processors and whatnot convert back and
forth, we use linear pitch (1.0/octave) throughout, and leave it to
the final stage, the synth, to convert it into whatever drives its
oscillators. Which, mind you, is not necessarily Hz or
samples/period! It may well be coefficients for a resonant filter you
need.
As to actually doing the conversion, I suppose we could throw some
inline implementations into the plugin SDK. (One with an interpolated
LUT and one plain FPU based - although I'm not sure the former is of
much use on modern hardware. FPUs have faster LUTs! ;-)) It's rather
trivial stuff, but it might as well be there, given that it's pretty
universal and makes life a bit easier, especially for DSP beginners.
//David Olofson - Programmer, Composer, Open Source Advocate
.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---------------------------> http://olofson.net/audiality -'
--- http://olofson.net --- http://www.reologica.se ---