On Wednesday 11 December 2002 20.25, Sami P Perttu wrote:
[...]
That sounds a lot like a specialized event system, actually. You have
structured data - and that is essentially what events are about.
Hmm, that's one way of looking at it. I had thought of the subblock
aspect as something that is "peeled away" to get at the continuous
signal underneath.
A sort of combined "rendering language" and compressed format.
An event system with "set" and "ramp" events can do the same thing -
although it does get pretty inefficient when you want to transfer
actual audio rate data! ;-)
About the cost: an expression for pitch would be evaluated, say, 100
times a second, and values in between would be linearly interpolated,
so that overhead is negligible.
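Roughly like this, I take it - evaluate at the control rate and
interpolate linearly in between (just a sketch; the names are made
up, not actual MONKEY code):

/* Sketch only: evaluate a control expression at ~100 Hz and
 * interpolate linearly between the control points. */
#include <cstddef>

typedef double (*expr_func)(double t);    /* a compiled expression */

void render_control(expr_func expr, double srate, double crate,
                    double *out, std::size_t frames)
{
    std::size_t step = (std::size_t)(srate / crate); /* samples per eval */
    double prev = expr(0.0);
    for (std::size_t i = 0; i < frames; i += step) {
        double next = expr((double)(i + step) / srate);
        std::size_t n = (i + step < frames) ? step : frames - i;
        for (std::size_t j = 0; j < n; ++j)
            out[i + j] = prev + (next - prev) * (double)j / (double)step;
        prev = next;
    }
}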
I see. This is what I intend to do in Audiality later on,
although it will be more event centered and not "just"
expressions. As an alternative to the current mono, poly and
sequencer "patch plugins", there will be one that lets you code
patch plugins in a byte compiled scripting language. Timing is
sample accurate, but since we're dealing with "structured
control", there's no need to evaluate once per sample, or even
once per buffer. You just do what you want when you want.
Sounds cool. So these would be scripts that read and write
events..?
Yes. The same language (although interpreted) is already used for
rendering waveforms off-line. (Optimized for quality and flexibility,
rather than speed.) It will eventually be able to construct (or
rather, describe) simple networks that the real time part of the
scripts can control. Currently, the real time synth is little more
than a sample player with an envelope generator, so there isn't much
use for "net definition code" yet. :-)
I also have something similar in mind but writing the
compiler is an effort in itself.
No kidding...!
Especially because it has to be as
fast as possible: in MONKEY real-time control is applied by
redefining functions. So when you turn a knob an arbitrary number
of expressions may have to be re-evaluated or even reparsed.
I prefer to think in source->compiler->code terms, to avoid getting
into these kinds of situations. (I guess the 8 and 16 bit ages still
have some effect on me. ;-)
The benefit is that since basically all values are given as
expressions, the system is very flexible.
Yeah, that's a great idea. I'm not quite sure I see how that can
result in expressions being reparsed, though. When does this happen?
I would have thought you could just compile all expressions used in
your net, and then plug in the compiled "code". You can create, load
or modify a net, and then you compile and run.
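Something like this is what I had in mind (only a sketch, with
invented names - not actual MONKEY or Audiality code):

/* Sketch: keep the compiled form of every expression in the net, and
 * only reparse/recompile when the user actually edits the source. */
#include <string>
#include <functional>

typedef std::function<double (double)> Compiled;
typedef Compiled (*Compiler)(const std::string &source);

struct ControlInput {
    std::string source;   /* e.g. "440 * pow(2, knob1 / 12)" */
    Compiled    code;     /* what actually runs */

    void set_source(const std::string &s, Compiler compile)
    {
        source = s;
        code = compile(s);          /* the only place we recompile */
    }
    double eval(double t) const { return code(t); }
};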
Yes, but there is a problem with fixed control rate, even if you
can pick one for each expression: If you set it low, you can't
handle fast transients (percussion attacks and the like), and if
you set it high, you get constantly high CPU utilization.
That's one of the main reasons why I prefer timestamped events:
One less decision to make. You always have sample accurate
timing when you need it, but no cost when you don't.
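That is, something along these lines (just a sketch; these are not
the actual Audiality structs):

/* Sketch: timestamped control events. Work is done only where there
 * are events; in between, the plugin just streams audio. */
#include <cstdint>

enum EventType { EV_SET, EV_RAMP };

struct Event {
    std::uint32_t frame;      /* sample accurate position in the block */
    EventType     type;
    int           control;    /* which control input it addresses */
    float         value;      /* target value */
    std::uint32_t duration;   /* frames to reach 'value' (EV_RAMP only) */
    Event        *next;       /* queue, sorted by 'frame' */
};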
Isn't that one more decision to make? :) What do you do in between
events? Do you have a set of prescribed envelope shapes that you
can choose from, or something else?
This is a good point. So far, only "set" and "linear ramp" have been
discussed, really, and that's what some of the proprietary plugin
APIs use. It seems to work well enough for most things, and in the
cases where linear is insufficient for quality reasons, plugins are
*much* better off with linear ramp input than just points with no
implied relation to the actual signal.
Why? Well, if you consider what a plugin would have to do to
interpolate at all, it becomes obvious that it either needs two
points, or a time constant. Both result in a delay - and a delay that
is not known to the host or other plugins, at that! Nor can it be
specified in the API in any useful way (some algos are more sensitive
than others), and agreeing on a way for plugins to tell the host
about their control input latency may not be easy either.
Linear ramping doesn't eliminate this problem entirely, but at least,
it lets you tell the plugin *explicitly* what kind of steepness you
have in mind.
A ramp event effectively spans the whole time of the change, whereas
"set" events always arrive exactly when you want the target value
reached - and that's rather late if you want to avoid clicks! :-)
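In a plugin, the difference would look something like this (just a
sketch, using the Event struct from the sketch above; all names are
made up):

/* Sketch: applying SET and RAMP events to a gain control. With RAMP
 * the sender tells us the slope explicitly; with SET we would have
 * to guess and smooth, adding latency nobody knows about.
 * (A real implementation would also stop the ramp after 'duration'
 * frames; omitted here for brevity.) */
void run_gain(const float *in, float *out, unsigned frames,
              const Event *ev, float &gain, float &dgain)
{
    unsigned i = 0;
    while (i < frames) {
        unsigned until = ev ? ev->frame : frames;  /* next event or end */
        for (; i < until; ++i) {
            out[i] = in[i] * gain;
            gain += dgain;                 /* 0.0f when not ramping */
        }
        if (ev) {
            if (ev->type == EV_SET) {
                gain  = ev->value;         /* arrives when the target
                                              should already be reached */
                dgain = 0.0f;
            } else {                       /* EV_RAMP */
                dgain = (ev->value - gain) / (float)ev->duration;
            }
            ev = ev->next;
        }
    }
}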
However, even relatively simple FIR filters and the like may have
rather expensive initialization that you cannot do much about,
without instantiating "something" resident when you load the
plugin.
True; I don't have that problem yet because I only have a class
interface, and classes can have static data.
I see. Then you actually *have* a form of load time initialization
for plugins. The major difference for us would be that we have that
per-instance, rather than per-class.
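That is, something like this (sketch only; hypothetical API, no
particular plugin standard):

/* Sketch: per-instance load-time initialization - e.g. building an
 * expensive FIR coefficient table when the instance is created, not
 * when the net starts running. */
#include <vector>
#include <cmath>

struct FIRPlugin {
    std::vector<float> coeffs;   /* per-instance data */

    /* called once, when the instance is created/loaded */
    void instantiate(double srate, double cutoff, unsigned taps)
    {
        const double pi = 3.14159265358979;
        coeffs.resize(taps);
        for (unsigned i = 0; i < taps; ++i) {
            double x = (double)i - (taps - 1) / 2.0;
            coeffs[i] = (x == 0.0)
                ? (float)(2.0 * cutoff / srate)
                : (float)(std::sin(2.0 * pi * cutoff * x / srate)
                          / (pi * x));   /* windowing omitted */
        }
    }
};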
[...] standard block-based processing, though. Yes, sample accurate
timing is implemented: when a plugin is run it is given start and end
sample offsets.
As in "start processing HERE in your first buffer", and similarly
for the last buffer? Couldn't that be handled by the host, through
"buffer splitting", to avoid explicitly supporting that in every
plugin?
No, as in "process this block from offset x to offset y".
Ah! Ok.
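So, something like this, then (sketch; made-up names):

/* Sketch: "process this block from offset x to offset y". The host
 * decides the offsets (around event timestamps, say); the plugin
 * never needs to know why. */
struct Block {
    const float *in;
    float       *out;
};

void process(Block &blk, unsigned from, unsigned to, float gain)
{
    for (unsigned i = from; i < to; ++i)
        blk.out[i] = blk.in[i] * gain;
}

/* Per audio block, the host might then call:
 *   process(blk, 0,   ev1, g0);   // up to the first event
 *   process(blk, ev1, ev2, g1);   // between the events
 *   process(blk, ev2, n,   g2);   // the rest of the block    */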
The complexity is hidden inside an iterator - plugins can mostly
ignore it.
Slightly harder to do that in C - but no language can avoid the
overhead. (Ooooh, and that's *some* overhead! ;-D)
The clever plugin writer can also parameterize her processing for
different subblock types via C++ templates, etc.
That's when it *really* starts to work like an event based system. :-)
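Something like this, I'd imagine (sketch only; hypothetical types,
not actual MONKEY code):

/* Sketch: parameterizing the inner loop over the "shape" of a control
 * subblock, so the compiler emits one specialized loop per type and a
 * constant control costs nothing extra. */
struct ConstControl {
    float v;
    float value(unsigned) const { return v; }               /* flat */
};

struct RampControl {
    float start, step;
    float value(unsigned i) const { return start + step * i; }
};

template<class Control>
void apply_gain(const float *in, float *out, unsigned frames,
                const Control &gain)
{
    for (unsigned i = 0; i < frames; ++i)
        out[i] = in[i] * gain.value(i);    /* inlined per Control type */
}

/* ConstControl c = { 0.5f };            apply_gain(in, out, n, c);
 * RampControl  r = { 0.0f, 1.0f / n };  apply_gain(in, out, n, r);  */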
It's probably time to start working on a prototype, as a sanity
check of the design. Some things are hard to see until you
actually try to implement something.
Especially when it comes to the user interface. Ever since I
started to design the GUI I have found myself evaluating features
based more on their value to the user and less on their technical
merits.
Well, of course - that's what actually matters in the end. If it does
the job; great! If not, clean designs and sophisticated solutions are
entirely worthless.
//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---------------------------> http://olofson.net/audiality -'
.- M A I A -------------------------------------------------.
|    The Multimedia Application Integration Architecture    |
`----------------------------> http://www.linuxdj.com/maia -'
   --- http://olofson.net --- http://www.reologica.se ---