[linux-audio-dev] Synth APIs, MONKEY

Sami P Perttu perttu at cc.helsinki.fi
Thu Dec 12 08:18:01 UTC 2002


On Wed, 11 Dec 2002, David Olofson wrote:

> An event system with "set" and "ramp" events can do the same thing -
> although it does get pretty inefficient when you want to transfer
> actual audio rate data! ;-)

Yes. It seems that audio architecture design is dominated by the audio
versus control rate issue. The complexity can be shifted to various parts
of the system, but it will be there as long as we want to avoid making
everything audio rate.

In XAP, then, things like filter cutoff and Q would be controls and
respond to "set" and "ramp" events. Converters - possibly implemented as
plugins - between audio-rate signals and events will still be needed, I
think, but they are no problem to implement. What should be avoided is a
proliferation of plugins that perform the same task and differ only in
audio- versus control-rate details - CSound is a notorious example of this.
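
For concreteness, here is a minimal sketch in C of what such control
events might look like. Every name below is made up for illustration -
XAP has not pinned any of this down:

    /* Hypothetical control event - none of this is actual XAP API. */
    typedef enum { EV_SET, EV_RAMP } ev_type;

    typedef struct event {
        ev_type       type;
        unsigned      frame;    /* timestamp within the block        */
        int           control;  /* which control, e.g. cutoff or Q   */
        float         value;    /* target value                      */
        unsigned      length;   /* ramp duration in frames (EV_RAMP) */
        struct event *next;     /* queue is sorted by frame          */
    } event;

    /* Assumed plugin-side helpers, also hypothetical. */
    void set_control(int control, float value);
    void ramp_control(int control, float target, unsigned length);

    /* In the plugin's process callback: */
    void handle_events(event *queue)
    {
        for (event *e = queue; e; e = e->next) {
            if (e->type == EV_SET)
                set_control(e->control, e->value);
            else
                ramp_control(e->control, e->value, e->length);
        }
    }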

Hmm, events and subblocks are so close, yet not quite the same. I was
almost dreaming of abstracting everything to a single signal type.
The interleaving of different controls into a single event queue is
already a major point in favor of events, though. So how about
integrating audio data into event queues as well? These events would
contain a pointer to a buffer. There - a single structured signal type
that caters for everything. Goodbye, audio-only ports. Now all we need
are some functions in the API that convert between set, ramp and data
events, so that plugins that can deal in only one type get what they
want. And each plugin instance would need just one input port. Is this
totally crazy?
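
Concretely, the unified type might be a tagged union - again a sketch
with invented names, not a proposal for the actual structures:

    /* One structured signal type: a timestamped event queue.
       EV_DATA carries a whole audio-rate buffer by pointer;
       EV_SET and EV_RAMP carry control changes. Illustrative only. */
    typedef enum { EV_SET, EV_RAMP, EV_DATA } ev_type;

    typedef struct event {
        ev_type  type;
        unsigned frame;    /* timestamp within the block */
        union {
            struct { float value; } set;
            struct { float target; unsigned length; } ramp;
            struct { float *buf; unsigned frames; } data;
        } u;
        struct event *next;
    } event;

The conversion helpers would then, for example, render a run of set and
ramp events into a buffer for a plugin that only understands data
events, or fit ramp segments to a buffer for the opposite case.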

> Yes. The same language (although interpreted) is already used for
> rendering waveforms off-line. (Optimized for quality and flexibility,
> rather than speed.)

What kind of semantics does your language have? Imperative, functional,
something in between? So far, mine is just a "mathematical" expression
language with simple function definitions.

> > The
> > benefit is that since basically all values are given as
> > expressions, the system is very flexible.
>
> Yeah, that's a great idea. I'm not quite sure I see how that can
> result in expressions being reparsed, though. When does this happen?

Well, real-time control is exerted like this: you tie (for instance) some
MIDI control to a function with no parameters - a constant function - so
that the value of the function at any time is the value of the MIDI
control. An arbitrary number of expressions for all kinds of things can
refer to the function, and thus their values can depend on it. Now if you
turn a knob on your MIDI keyboard and the control changes, the function is
effectively redefined, and any expression that depends on it has to be at
least re-evaluated. I have all kinds of glue that takes care of the
details - it is not simple, but not impossible either.
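
In outline, the glue amounts to dependency tracking. Here is a
much-simplified sketch in C - all names are invented, and MONKEY's
actual machinery is more involved:

    /* A parameterless (constant) function bound to a MIDI control,
       plus the expressions whose values depend on it. Sketch only. */
    typedef struct expr expr;    /* opaque expression node  */
    void mark_dirty(expr *e);    /* schedules re-evaluation */

    typedef struct {
        float  value;            /* current MIDI control value  */
        expr **dependents;       /* expressions referring to it */
        int    n_dependents;
    } control_func;

    /* Called when the knob moves: redefining the function means
       every dependent expression must be re-evaluated. */
    void midi_cc_changed(control_func *f, float new_value)
    {
        f->value = new_value;
        for (int i = 0; i < f->n_dependents; i++)
            mark_dirty(f->dependents[i]);
    }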

> This is a good point. So far, only "set" and "linear ramp" have been
> discussed, really, and that's what some of the proprietary plugin
> APIs use. It seems to work well enough for most things, and in the
> cases where linear is insufficient for quality reasons, plugins are
> *much* better off with linear ramp input than just points with no
> implied relation to the actual signal.

True, true. Piecewise linear is just fine by me. Some folks might want to
see log-linear implemented as well, I don't know. It is difficult to
optimize your DSP calculations with respect to functions more complex
than linear, but I suppose the real advantage lies in not having to read
control data in from memory all the time.
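
The classic trick with a linear segment is that the inner loop needs
only one add per sample and no per-sample control reads - something
like:

    /* Apply a linear gain ramp across a block: one add per sample,
       no audio-rate control buffer to fetch from memory. */
    void apply_gain_ramp(float *buf, unsigned n,
                         float start, float target)
    {
        float gain = start;
        float step = (target - start) / (float)n;
        for (unsigned i = 0; i < n; i++) {
            buf[i] *= gain;
            gain += step;
        }
    }

An exponential (log-linear) segment would cost a multiply per sample
instead - still cheap, but beyond that the per-sample arithmetic starts
to eat into the saving.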

--
Sami Perttu                       "Flower chase the sunshine"
Sami.Perttu at hiit.fi               http://www.cs.helsinki.fi/u/perttu
