[linux-audio-dev] exploring LADSPA

Paul Davis paul at linuxaudiosystems.com
Wed Aug 13 20:24:00 UTC 2003


>that someone may find a few of them useful, and to perhaps contribute to 
>LADSPA's evolution:

LADSPA's evolution is most likely to take place within the context of
GMPI (Generalized Music Plugin Interface), a cross-industry attempt to
define a new platform- and vendor-independent audio/MIDI plugin API. it
will be slow, but there isn't much point in doing more than tweaking
LADSPA for now.

note that LADSPA was originally designed to be a "least common
denominator" among numerous fractured app-specific audio plugin
APIs. it has served this purpose extremely well, and i think most of
its contributors, users and developers would be inclined to leave it
that way :)

>- I've done away with the distinction between control signals and audio 
>signals. I understand the performance gains to be had by computing one class 
>of signals less often than another, but I feel this is a hold-over from days 
>when computers were much slower than they are now. In my ideal system, 

Sure, in your ideal system. In my ideal system, I'd love to be able to
do real-time physical modelling of a full string orchestra, and add in
real-time convolution-based reverb to model the hall. But in any real
system there are always limits, and even on the 2.4GHz dual athlon
system i tested recently, it's completely *trivial* to overload the
CPU with audio synthesis and processing.
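to make the cost concrete, here's a rough sketch in plain C of why
the control/audio split pays off. the names are invented for
illustration (this is not the LADSPA API): a control-rate parameter
is read once per block, while an audio-rate signal costs a buffer
read per sample, and whatever generates it has to run at the full
sample rate too.

  /* a hypothetical gain stage, written both ways; all names
     are invented for illustration. */

  /* control-rate gain: 'gain' is a single value the host
     updates at most once per block. */
  void run_control_gain(const float *in, float *out,
                        unsigned long n, float gain)
  {
      unsigned long i;
      for (i = 0; i < n; i++)
          out[i] = in[i] * gain;
  }

  /* audio-rate gain: 'gain' is itself a per-sample signal,
     so whatever produces it must also run n times per block. */
  void run_audio_gain(const float *in, const float *gain,
                      float *out, unsigned long n)
  {
      unsigned long i;
      for (i = 0; i < n; i++)
          out[i] = in[i] * gain[i];
  }

the inner loops look almost identical; the real saving is upstream,
where an LFO feeding the control-rate port runs block-size times
less often than one feeding the audio-rate port.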

>- Somewhat related to the item above, a plugin's run() method computes exactly
>one sample at each call, not a block of samples. This is again a matter of 

perry cook's STK does this too. everybody knows it's cool, just as
everybody knows it's incredibly inefficient: you have 100% of the
overhead of a chain of function calls for every sample. for anything
except trivial processing, it's too expensive to be useful in a
general-purpose API (for now).

>conceptual simplification. I don't want the individual plugin to have to know 
>anything of process iteration; that job is for the containing infrastructure. 
>Also, some years ago I started working on some computer synthesis software 
>and found that when units ("plugins") computed samples in blocks (instead of 
>one at a time), there was a strange behavior when these units were patched 
>together in looped delay line configurations. As I recall, gaps would appear 

if you'd read "the computer music tutorial" by curtis roads, you could
have avoided discovering this yourself, and instead read about what
people have done to tackle the problem since it was noted 25-30 years
ago :)

per-sample processing isn't a feasible option as a general API model
for, oh, i'd guess at least another 3-4 years. besides, many operations
that want to work in the frequency domain require blocks anyway, and
so are not helped by this design.
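the usual fix is to require that every feedback loop in the graph
contain a delay of at least one block, and to break the cycle at
that delay: each block reads feedback that was written one or more
blocks earlier, so nothing is missing when it runs. a rough sketch
in plain C (invented names, fixed block size assumed):

  #include <string.h>

  #define BLOCK        64
  #define DELAY_BLOCKS 4    /* loop delay must be >= 1 block */

  static float delay_buf[DELAY_BLOCKS][BLOCK];
  static int   head = 0;

  /* one cycle of a block-scheduled feedback loop. 'in' and
     'out' must not alias the delay buffer. */
  void loop_tick(const float *in, float *out)
  {
      int i;
      const float *delayed = delay_buf[head];

      /* mix this block's input with feedback written
         DELAY_BLOCKS blocks ago */
      for (i = 0; i < BLOCK; i++)
          out[i] = in[i] + 0.5f * delayed[i];

      /* store this block for future iterations */
      memcpy(delay_buf[head], out, sizeof(float) * BLOCK);
      head = (head + 1) % DELAY_BLOCKS;
  }

the price is that no loop can have a feedback delay shorter than
the block size, which is one reason hosts care about keeping
blocks short.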

>- Every input port is a mixer/modulator. Since the operations of mixing and 
>modulating (multiplying) signals together are so often needed, I decided to 
>build them into plugin input ports. A given input port can accept any number 
>of connections from "feeders" and either mixes or modulates their outputs 
>transparently, according to the port's configuration. I believe this 
>simplifies use of the system and eliminates the need for a special 
>runAdding() plugin method.

JACK (http://jackit.sf.net/) does this too. it's a very nice design,
although it has its downsides.
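roughly what the host side looks like (a sketch in plain C with
invented names, not JACK's actual code): before a client runs, each
of its input ports is resolved to a single buffer, either by handing
over the sole feeder's buffer directly or by mixing all feeders into
a private one.

  #include <string.h>

  #define NFRAMES 64

  typedef struct {
      const float **sources;  /* outputs feeding this port */
      int           nsources;
      float         mixbuf[NFRAMES];
  } input_port;

  /* resolve a port to one readable buffer; the plugin only
     ever sees the result, so no runAdding()-style entry
     point is needed. */
  const float *port_get_buffer(input_port *p)
  {
      int s, i;

      if (p->nsources == 1)
          return p->sources[0];  /* fast path: no copy */

      memset(p->mixbuf, 0, sizeof(p->mixbuf));
      for (s = 0; s < p->nsources; s++)
          for (i = 0; i < NFRAMES; i++)
              p->mixbuf[i] += p->sources[s][i];
      return p->mixbuf;
  }

one of the downsides: with multiple connections you pay an extra
summing pass and a private buffer per port, where a runAdding()
style interface lets a plugin accumulate into the destination
for free.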

--p



