[LAD] Project proposition: llvm based dsp engine
zanga.mail at gmail.com
Mon Dec 6 12:48:25 UTC 2010
2010/12/6 Maurizio De Cecco <jmax at dececco.name>:
> I have been looking for a while at LLVM as a possible technology for building a
> DSP execution engine, providing the runtime flexibility needed by real-time
> interactive DSP applications (like patcher languages) or by plug-in based
> processors, and at the same time
> the powerful link-time optimizations that such a system can provide.
> Such a task is daunting for a single-developer project like mine, but it may
> become feasible if such an engine could be useful to multiple projects, so
> that it becomes a community initiative (I am sending this
> mail to the Linux Audio Developers mailing list and to the LLVM mailing
> list; feel free to forward it elsewhere if you find it useful).
For some time I have had a somewhat related idea: when you
use physics-based modeling techniques, a crucial point is proper
scheduling of operations, which must NOT be done on a "processing
object" (i.e., plugin or whatever) basis, but by considering
dependencies among individual input/output expressions even inside
"processing objects" (this may sound unclear, I know).
In practice this means that if you deal with compiled code (a.k.a.
plugins), you simply can't do it reasonably. Specialized tools can,
and in particular special audio programming languages that can
either be interpreted or compiled to some sort of "understandable"
bytecode that preserves input/output time relationships (i.e., delays)
and is then run in a virtual machine that solves the scheduling issue.
An example of such specialized tools is the audio DSP language I wrote,
called Permafrost. There, the scheduling issue is solved when compiling
the Permafrost source code to LV2 plugin source (C code and Turtle/RDF
metadata), but this is a one-way operation: you can't use the generated
plugin code to build a new plugin that takes these issues into account,
whether at runtime or not.
My idea was to define an intermediate DSP bytecode, hopefully also
capable of including metadata, and a virtual machine that schedules,
optimizes and runs the whole chain/graph.
A stupid example of what can be achieved with such a thing is the
following: suppose you have a physics-based simulator of a tube
amplifier that allows you to "plug" a physics-based loudspeaker model
into it (if you are into this kind of stuff, suppose it is WDF-based)
- you would be able to change that loudspeaker model while the system
is running and have it all correctly scheduled, optimized and run.
On the other hand, I believe you are aiming for something different,
hence I suggest (as Paul did) that you contact the FAUST developers,
since they're pretty much into this kind of stuff, especially when it
comes to LLVM.