On Wed, 11 Dec 2002, David Olofson wrote:
> > Well, in MONKEY I have done away with separate audio and control
> > signals - there is only one type of signal. However, each block of a
> > signal may consist of an arbitrary number of consecutive subblocks.
> > There are three types of subblocks: constant, linear and data. A
> > (say) LADSPA control signal block is equivalent to a MONKEY signal
> > block that has one subblock which is constant and covers the whole
> > block. Then there's the linear subblock type, which specifies a value
> > at the beginning and a per-sample delta value. The data subblock type
> > is just audio-rate data.
>
> That sounds a lot like a specialized event system, actually. You have
> structured data - and that is essentially what events are about.

Hmm, that's one way of looking at it. I had thought of the subblock aspect
as something that is "peeled away" to get at the continuous signal
underneath.
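
To make the subblock idea concrete, the data layout is roughly along
these lines (a simplified, untested sketch with made-up names, not
MONKEY's actual headers):

    // Rough sketch of a MONKEY-style signal block: one block is a run of
    // consecutive subblocks, each constant, linear or plain audio data.
    #include <vector>

    enum class SubblockType { Constant, Linear, Data };

    struct Subblock {
        SubblockType type;
        unsigned length;     // samples covered by this subblock
        float value;         // Constant: the value; Linear: value at the start
        float delta;         // Linear only: per-sample increment
        const float *data;   // Data only: audio-rate samples
    };

    struct SignalBlock {
        std::vector<Subblock> subblocks;  // lengths sum to the block size
    };

    // A LADSPA-style control block is just one Constant subblock
    // covering the whole block:
    inline SignalBlock control_block(float v, unsigned block_size)
    {
        Subblock sb{SubblockType::Constant, block_size, v, 0.0f, nullptr};
        return SignalBlock{{sb}};
    }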

> > About the cost: an expression for pitch would be evaluated, say, 100
> > times a second, and values in between would be linearly interpolated,
> > so that overhead is negligible.
>
> I see. This is what I intend to do in Audiality later on, although it
> will be more event centered and not "just" expressions. As an
> alternative to the current mono, poly and sequencer "patch plugins",
> there will be one that lets you code patch plugins in a byte compiled
> scripting language. Timing is sample accurate, but since we're dealing
> with "structured control", there's no need to evaluate once per
> sample, or even once per buffer. You just do what you want when you
> want.

Sounds cool. So these would be scripts that read and write events...? I
also have something similar in mind, but writing the compiler is an
effort in itself, especially because it has to be as fast as possible:
in MONKEY, real-time control is applied by redefining functions, so when
you turn a knob, an arbitrary number of expressions may have to be
re-evaluated or even reparsed. The benefit is that since basically all
values are given as expressions, the system is very flexible.
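
For reference, the interpolation scheme I mentioned above (evaluate an
expression ~100 times a second, ramp linearly in between) boils down to
something like this; a simplified, untested sketch, with eval_pitch()
standing in for whatever the expression evaluator produces. In MONKEY
the ramp would of course be emitted as a linear subblock rather than
expanded into a buffer:

    // Sketch: evaluate a control expression at a low rate and fill the
    // samples in between by linear interpolation.
    #include <cmath>

    static float eval_pitch(double t)   // stand-in for the expression evaluator
    {
        return 440.0f * static_cast<float>(std::exp2(std::sin(t)));
    }

    void render_control(float *out, unsigned n, double t0, double dt,
                        unsigned interval /* e.g. 441 samples at 44.1 kHz */)
    {
        for (unsigned i = 0; i < n; i += interval) {
            unsigned len = (n - i < interval) ? (n - i) : interval;
            float va = eval_pitch(t0 + i * dt);
            float vb = eval_pitch(t0 + (i + len) * dt);
            float step = (vb - va) / static_cast<float>(len);  // per-sample delta
            for (unsigned k = 0; k < len; ++k)
                out[i + k] = va + step * static_cast<float>(k);
        }
    }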

> Yes, but there is a problem with a fixed control rate, even if you can
> pick one for each expression: if you set it low, you can't handle fast
> transients (percussion attacks and the like), and if you set it high,
> you get constantly high CPU utilization.
>
> That's one of the main reasons why I prefer timestamped events: one
> less decision to make. You always have sample-accurate timing when you
> need it, but no cost when you don't.

Isn't that one more decision to make? :) What do you do in between events?
Do you have a set of prescribed envelope shapes that you can choose from,
or something else?
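
For comparison, this is roughly what I imagine a timestamped event
stream looks like on the receiving end; a hypothetical sketch, not
Audiality's actual interface:

    // Sketch: control events carry a sample-accurate offset into the block;
    // the plugin renders up to each event, then applies it.
    #include <vector>

    struct ControlEvent {
        unsigned frame;   // sample offset within the current block
        int      target;  // which control the event addresses
        float    value;   // value to apply at 'frame'
    };

    void process_block(float *out, unsigned nframes,
                       const std::vector<ControlEvent> &events)
    {
        unsigned pos = 0;
        for (const ControlEvent &ev : events) {
            for (unsigned i = pos; i < ev.frame && i < nframes; ++i)
                out[i] = 0.0f;               // placeholder DSP with current settings
            // apply ev.target / ev.value here, sample-accurately
            pos = ev.frame < nframes ? ev.frame : nframes;
        }
        for (unsigned i = pos; i < nframes; ++i)
            out[i] = 0.0f;                   // tail of the block, no more events
    }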

> However, even relatively simple FIR filters and the like may have
> rather expensive initialization that you cannot do much about, without
> instantiating "something" resident when you load the plugin.

True; I don't have that problem yet because I only have a class interface,
and classes can have static data.
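
What I mean is simply that per-class tables can be built once and shared
by all instances, along these lines (sketch with invented names):

    // Sketch: an expensive table (say, FIR coefficients) built once on first
    // use and shared by every instance of the class.
    #include <cmath>
    #include <cstddef>
    #include <vector>

    class FirFilter {
    public:
        FirFilter() : coeffs_(shared_coeffs()) {}

    private:
        static const std::vector<float> &shared_coeffs()
        {
            static const std::vector<float> coeffs = [] {
                const double pi = 3.14159265358979323846;
                std::vector<float> c(255);
                for (std::size_t i = 0; i < c.size(); ++i)  // placeholder: just a Hann window
                    c[i] = static_cast<float>(0.5 - 0.5 * std::cos(2.0 * pi * i / (c.size() - 1)));
                return c;                                   // built exactly once
            }();
            return coeffs;
        }

        const std::vector<float> &coeffs_;  // every instance refers to the same table
    };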

> > [...] standard block-based processing, though. Yes, sample-accurate
> > timing is implemented: when a plugin is run it is given start and end
> > sample offsets.
>
As in "start processing HERE in your first buffer", and similarly for
the last buffer? Couldn't that be handled by the host, though "buffer
splitting", to avoid explicitly supporting that in every plugin?
No, as in "process this block from offset x to offset y". The complexity
is hidden inside an iterator - plugins can mostly ignore it. The clever
plugin writer can also parameterize her processing for different subblock
types via C++ templates, etc.
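
Very roughly, the plugin-side loop is parameterized like this (heavily
simplified; the names are invented, not MONKEY's real interface):

    // Sketch: the host asks a plugin to process a block from offset x to y,
    // and a reader/iterator hides whether the input run is constant, linear
    // or plain data. The same template then compiles to specialized code
    // for each subblock type.
    template <class Reader>
    void run_gain(Reader in, float *out, unsigned x, unsigned y, float gain)
    {
        for (unsigned i = x; i < y; ++i)
            out[i] = gain * in.sample(i);   // Reader resolves the subblock type
    }

    struct ConstantReader {
        float v;
        float sample(unsigned) const { return v; }
    };

    struct LinearReader {
        float v0, delta;                    // value at offset 0 and per-sample delta
        float sample(unsigned i) const { return v0 + delta * static_cast<float>(i); }
    };

    struct DataReader {
        const float *p;
        float sample(unsigned i) const { return p[i]; }
    };

The dispatch then happens once per subblock instead of once per sample.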

> It's probably time to start working on a prototype, as a sanity check
> of the design. Some things are hard to see until you actually try to
> implement something.

Especially when it comes to the user interface. Ever since I started to
design the GUI I have found myself evaluating features based more on their
value to the user and less on their technical merits.
--
Sami Perttu "Flower chase the sunshine"
Sami.Perttu(a)hiit.fi
http://www.cs.helsinki.fi/u/perttu