Tim Goetze wrote:
> Steve Harris wrote:
> > On Tue, Dec 10, 2002 at 03:08:01 +0100, Tim Goetze wrote:
> > > it'd still be interesting to know how the sync problems this
> > > method poses are solved: you cannot rely on executable code
> > By sync problems do you mean loop latency? They're not solved exactly, its
> nope, i meant dynamic updates on a realtime (lock-free)
> code path; it's an interesting problem with, afaict, no
> obviously elegant solutions.
I imagine a code-compiling plugin system working something like this:
(note that when I describe the host as doing something below, I mostly
mean that the host is calling a provided library to do it)
1. Each plugin is a code fragment that processes its inputs and generates
its outputs for one sample.
2. The host pastes/links a bunch of these fragments together in such a way
as to represent the current graph.
3. The host pastes preamble/postamble code around this, turning it into
the complete source code for a callable function.
Note that by using different pre/postamble code it could be turned into
a function that processes one sample at a time or, by including an outer
loop, into a function that processes a block of samples (a rough sketch
of a generated block-processing function follows this list). In either
case the entire graph gets processed by one call. If it's turned into a
block-processing function, feedback loops can still run with lower
latency - right down to one sample - if the relevant fragments are
wrapped in an inner loop.
4. The resulting source file is compiled and loaded into memory.
5. The host calls the generated function at an appropriate rate.
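To make this concrete, here's a minimal sketch in C (the fragment
contents, names and calling convention are all hypothetical, not any
real plugin API) of what the host might emit for a tiny graph
input -> gain -> lowpass, using a block-processing pre/postamble:

/* fragment "gain":     out = in * gain;                  */
/* fragment "lowpass":  out = (z += coeff * (in - z));    */

/* what the host could generate for input -> gain -> lowpass: */
void generated_process(const float *in, float *out, unsigned long nframes,
                       float gain, float coeff, float *z)
{
    for (unsigned long i = 0; i < nframes; i++) { /* preamble: outer loop */
        float s = in[i];

        s = s * gain;               /* pasted "gain" fragment    */
        *z += coeff * (s - *z);     /* pasted "lowpass" fragment */
        s = *z;

        out[i] = s;                 /* postamble: write output   */
    }
}

A per-sample variant would be the same body without the outer loop, and a
low-latency feedback loop would wrap just the fragments involved in an
inner loop of their own.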
Whenever the graph changes, the host goes through steps 1 to 4 for
the new graph whilst continuing to call the previously generated
function regularly. When the new function is ready it can be switched in
seamlessly between samples (or blocks). Processing power permitting, the
host could even call both functions for a few samples and cross-fade to
the new graph.
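Since steps 1 to 4 all happen off the audio path, the realtime side only
ever has to read one pointer to the freshly compiled function. A rough
sketch of the switch-over (using C11 atomics purely for illustration;
graph_fn, install_new_graph and audio_callback are made-up names, not an
existing API):

#include <stdatomic.h>

typedef void (*graph_fn)(const float *in, float *out, unsigned long nframes);

static _Atomic(graph_fn) current_graph;  /* published by the build thread */

/* build thread: compile + dlopen() the new graph, then publish it */
void install_new_graph(graph_fn fresh)
{
    atomic_store_explicit(&current_graph, fresh, memory_order_release);
    /* the old function/.so must stay loaded until the audio thread is
       known to have stopped using it, e.g. freed after a grace period */
}

/* audio callback: load the pointer once per block, so the switch always
   lands on a block (or sample) boundary, never mid-call */
void audio_callback(const float *in, float *out, unsigned long nframes)
{
    graph_fn fn = atomic_load_explicit(&current_graph, memory_order_acquire);
    if (fn)  /* nothing compiled yet: do nothing (or pass input through) */
        fn(in, out, nframes);
    /* a cross-fade variant would call both old and new functions for a
       few blocks here and mix their outputs */
}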
Ok, this is an ultra-simplified account; I've skipped some thorny
problems and it would involve a major development effort. But:
+ there's no problem with dynamically modifying code, because
the same code is never being executed and generated at the same
time.
+ If the graph is compiled as a block-processing function it should
be able to out-perform traditional block-processing methods:
a) there's less function call overhead
b) code optimiser can see everything
c) code can be generated for the exact processor model in use
d) all code is in one place and there's less of it. (For this to be
entirely true, something would have to be done about the situation
where there are multiple instances of a single plugin in the graph.
OTOH, depending on graph size, larger "straight through" code
might go faster in this situation anyway).
+ low-latency feedback loops are available (at a price) even when
block processing.
+ the audio data type can be resolved at compile time.
+ if (probably more accurately: *when*) hosts want and can
afford fully blockless processing, they can have it.
+ plugins can be compiled for and run on a dedicated DSP
board given an appropriate compiler.
+ plugins are unavoidably open source.
Simon Jenkins
(Bristol, UK)