On Thursday 12 December 2002 14.12, Sami P Perttu wrote:
[...]
> In XAP, then, things like filter cutoff and Q would be controls
> and respond to "set" and "ramp" events. Conversions, or possibly
> plugins that convert between audio rate signals and events, will
> still be needed, I think, but they are no problem to implement.
Right, you can have Control -> Audio and vice versa - but the way
instruments are usually implemented in this kind of API, such
conversions are rarely needed.
> What should be avoided is the proliferation of plugins that
> perform the same task and only differ in audio versus control rate
> details - CSound is a notorious example of this.
Yes, but CSound opcodes are *very* low level in relation to XAP.
Still, that does not prevent a great deal of overlap, although XAP
gets more and more expensive the more you "abuse" it - just like full
audio rate or blockless systems tend to be less efficient when used
more like "VSTi style" synths.
> Hmm, events and subblocks are so close but not quite the same. I
> was almost dreaming of abstracting everything to a single signal
> type. But already the interleaving of different controls into a
> single event queue is a major point in favor of events.
Especially since XAP plugins may choose to get events for any Control
in any Queue they want. Have one Queue per internal loop, and there's
only one Queue to check at a time, regardless of how many Controls
there are. (This is what motivated me to go for one Queue per Plugin
in MAIA - but the XAP way is much better.)
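Something like this, hypothetically (made-up names - this is not
actual XAP code, just an illustration of why a single timestamped
Queue is cheap to poll):

    typedef struct XAP_event
    {
        unsigned when;          /* frame offset within this block */
        int control;            /* target Control index */
        float value;
        struct XAP_event *next;
    } XAP_event;

    void run(XAP_event *ev, float *out, unsigned frames)
    {
        unsigned frame = 0;
        while(frame < frames)
        {
            /* Render up to the next event, or to end of block */
            unsigned until = (ev && ev->when < frames) ?
                    ev->when : frames;
            for( ; frame < until; ++frame)
                out[frame] = 0.0f;      /* ...actual DSP here... */

            /* Apply all events due at this frame */
            while(ev && ev->when <= frame)
            {
                /* switch(ev->control) { case ...: ... } */
                ev = ev->next;
            }
        }
    }

However many Controls feed the Queue, the inner loop only ever looks
at one "next event" pointer.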
> So how about integrating audio data into event queues as well?
> These events would contain a pointer to a buffer. There - a single
> structured signal type that caters for everything. Goodbye
> audio-only ports.
However, Audio Ports would still exist, just like before. They have
just become abstract objects, rather than physical objects, so the
Host must access them through events to the Plugin.
I mentioned in another mail that I did consider this for MAIA. I also
mentioned that it brings implementational issues for Plugins...
(Asynchronous buffer splitting is nasty to deal with.)
My conclusion has to be that for both complexity and performance
reasons, you're better off with physical Audio Ports, that
effectively are nothing but buffer pointers, maintained by the Host.
(LADSPA style.) This is simple and effective, and avoids some
complexity in Plugins.
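In sketch form (hypothetical names, not actual XAP API - just the
LADSPA-style idea):

    typedef struct XAP_port
    {
        float *buffer;  /* set by the Host before each process call */
    } XAP_port;

    /* Host side: a connection is nothing but two Ports pointing at
     * the same buffer - no events, no asynchronous splitting. */
    void host_connect(XAP_port *out, XAP_port *in, float *shared)
    {
        out->buffer = shared;
        in->buffer = shared;
    }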
> Now all we need are some functions in the API converting between
> set, ramp and data events, so that plugins that can deal only in
> one type get what they want. And each plugin instance would need
> just one input port. Is this totally crazy?
No, but (as explained above) there are reasons why passing buffers
via the event system, and enforcing that there be only one event
queue per plugin, are not as great ideas as they may seem at first.
> > Yes. The same language (although interpreted) is already used
> > for rendering waveforms off-line. (Optimized for quality and
> > flexibility, rather than speed.)
> What kind of semantics does your language have? Imperative,
> functional, something in between? So far, mine is just a
> "mathematical" expression language with simple function
> definitions.
Since it started out as little more than a list of "commands"
and arguments (as an alternative to a bunch of hardcoded C calls), it
is currently *very* imperative.
However, due to the nature of the underlying audio rendering API, it
would not be far off to interpret the scripts as net descriptions
instead of step by step instructions. Just add plugins instead of
running commands, and then run the result in real time.
I have seriously considered this approach, but I'm not sure I'll do
it that way - at least not as an implicit, only way of using the
language. (The language itself is indeed imperative; this is more a
matter of what the extension commands do.)
> > > The benefit is that since basically all values are given as
> > > expressions, the system is very flexible.
> > Yeah, that's a great idea. I'm not quite sure I see how that can
> > result in expressions being reparsed, though. When does this
> > happen?
> Well, real-time control is exerted like this: you tie (for
> instance) some MIDI control to a function (with no parameters - it
> is a constant function) - the value of the function at any time is
> the value of the MIDI control. An arbitrary number of expressions
> for all kinds of things can refer to the function, and thus their
> value can depend on it. Now if you turn a knob on your MIDI
> keyboard and the control changes, the function is effectively
> redefined, and any expression that depends on it has to be at least
> re-evaluated. I have all kinds of glue that take care of the
> details - it is not simple but not impossible either.
Ok, I see.
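Something like this, I would guess (a rough sketch with made-up
names - certainly not your actual implementation): each expression
keeps a list of its dependents, and a control change just marks the
chain dirty for re-evaluation; nothing is reparsed.

    typedef struct expr expr;
    struct expr
    {
        int dirty;              /* needs re-evaluation? */
        float value;            /* cached result */
        expr **dependents;      /* expressions referring to this one */
        int n_dependents;
    };

    static void mark_dirty(expr *e)
    {
        int i;
        if(e->dirty)
            return;             /* already marked; don't recurse */
        e->dirty = 1;
        for(i = 0; i < e->n_dependents; ++i)
            mark_dirty(e->dependents[i]);
    }

    /* MIDI handler: "redefine" the constant function... */
    void on_midi_cc(expr *control, float new_value)
    {
        int i;
        control->value = new_value;
        for(i = 0; i < control->n_dependents; ++i)
            mark_dirty(control->dependents[i]);
    }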
> > This is a good point. So far, only "set" and "linear ramp" have
> > been discussed, really, and that's what some of the proprietary
> > plugin APIs use. It seems to work well enough for most things,
> > and in the cases where linear is insufficient for quality
> > reasons, plugins are *much* better off with linear ramp input
> > than just points with no implied relation to the actual signal.
> True, true. Piece-wise linear is just fine by me. Some folks might
> want to see log-linear implemented as well, I don't know.
I've considered something spline-like, with one control point or
something, but you really need 5 arguments for something useful;
<duration, start, end, start_k, end_k>. (With just one control point,
you would still have transients in between curve sections.)
That said, if curves are *chained*, you need only three arguments
<duration, end, end_k>. <start, start_k> of the next segment will be
taken directly from the end parameters of the previous segment. (When
you use "set" events, those will be set to the target value and 0,
respectively, so "set" followed by a chain of curve segments will
work right.)
Since the actual *value* is still in there, plugins may choose to pass
curve segment events to the same case as the "ramp" events, which
would effectively just mean that the end_k argument is ignored. Or
you can approximate the curve with N linear sections internally, or
whatever.
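As a sketch (made-up names; cubic Hermite is just one way to honor
all four constraints), a chained segment could look like this:

    typedef struct
    {
        unsigned duration;      /* frames */
        float end;              /* target value */
        float end_k;            /* target slope, per frame */
    } XAP_curve_event;

    typedef struct
    {
        float start, start_k;   /* carried over between segments */
    } curve_state;

    /* Evaluate one segment at normalized position t in [0, 1]. */
    float curve_eval(curve_state *s, XAP_curve_event *e, float t)
    {
        float h = (float)e->duration;   /* slope scale for t space */
        float d = s->start - e->end;
        float a = 2.0f*d + (s->start_k + e->end_k)*h;
        float b = -3.0f*d - (2.0f*s->start_k + e->end_k)*h;
        return ((a*t + b)*t + s->start_k*h)*t + s->start;
    }

    /* At segment end: s->start = e->end; s->start_k = e->end_k; */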
> It is difficult to optimize your DSP calculations with respect to
> more complex functions than linear
Depends. Polynomial is just another addition per sample...

    value += dvalue;

becomes

    value += dvalue;
    dvalue += ddvalue;
Calculating dvalue and ddvalue becomes more expensive, but I
have a feeling that might be compensated for by the lower number of
segments needed for a good approximation.
Not sure, though, at least not for the buffer sizes we're dealing
with in real time engines. I've explained in another post why you
must not assume things about the future (you don't know what will
happen in the next block), and this applies to ramp/curve events as
well!
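For the record, the setup for a quadratic segment
v(i) = c0 + c1*i + c2*i*i (i in frames) is just the first and second
forward differences - a sketch:

    void render_quad(float *out, unsigned frames,
            float c0, float c1, float c2)
    {
        unsigned i;
        float value = c0;
        float dvalue = c1 + c2;         /* v(1) - v(0) */
        float ddvalue = 2.0f * c2;      /* constant 2nd difference */
        for(i = 0; i < frames; ++i)
        {
            out[i] = value;
            value += dvalue;
            dvalue += ddvalue;
        }
    }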
> but I suppose the real advantage lies in not having to read stuff
> in from memory all the time.
Well, at least for reasonably simple calculations on fast CPUs, where
memory bandwidth is a problem.
The *major* advantage though, is when you have filters and other
algorithms for which the input -> coefficient transform is expensive.
With audio rate control input, you have to do this for every sample.
With events, you can do it once per event or so, and then internally
ramp the coefficients instead. If linear ramping is not sufficient,
extend the transform (or do it for a center point as well) so you can
use polynomial "ramping" instead.
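Sketch (made-up names; assume coefficients_for() is whatever
expensive cutoff -> coefficient transform the filter needs):

    typedef struct
    {
        float b0, a1, a2;       /* current coefficients */
        float db0, da1, da2;    /* per-sample increments */
    } ramped_filter;

    void coefficients_for(float cutoff,
            float *b0, float *a1, float *a2);   /* the expensive part */

    /* On a cutoff "ramp" event: one transform for the target, then
     * linear per-sample increments toward those coefficients. */
    void on_cutoff_ramp(ramped_filter *f, float cutoff, unsigned frames)
    {
        float b0, a1, a2;
        coefficients_for(cutoff, &b0, &a1, &a2);
        f->db0 = (b0 - f->b0) / frames;
        f->da1 = (a1 - f->a1) / frames;
        f->da2 = (a2 - f->a2) / frames;
    }

    /* Inner loop, per sample:
     *     f->b0 += f->db0; f->a1 += f->da1; f->a2 += f->da2;
     */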
//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---------------------------> http://olofson.net/audiality -'
.- M A I A -------------------------------------------------.
|   The Multimedia Application Integration Architecture     |
`----------------------------> http://www.linuxdj.com/maia -'
--- http://olofson.net --- http://www.reologica.se ---