On Wednesday 11 December 2002 13.14, Sami P Perttu wrote:
[...]
> > This sounds interesting and very flexible - but what's the
> > cost? How many voices of "real" sounds can you play at once on
> > your average PC? (Say, a 2 GHz P4 or something.) Is it possible
> > to start a sound with sample accurate timing? How many voices
> > would this average PC cope with starting at the exact same time?
> Well, in MONKEY I have done away with separate audio and control
> signals - there is only one type of signal. However, each block of
> a signal may consist of an arbitrary number of consecutive
> subblocks. There are three types of subblocks: constant, linear
> and data. A (say) LADSPA control signal block is equivalent to a
> MONKEY signal block that has one subblock which is constant and
> covers the whole block. Then there's the linear subblock type,
> which specifies a value at the beginning and a per-sample delta
> value. The data subblock type is just audio rate data.
That sounds a lot like a specialized event system, actually. You have
structured data - and that is essentially what events are about.
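
Just to check that I'm reading you right, here's how I picture such
a block in C (names and layout are pure guesswork on my part, of
course):

typedef enum { SB_CONSTANT, SB_LINEAR, SB_DATA } sb_type;

typedef struct subblock {
    sb_type type;
    unsigned length;      /* samples covered by this subblock */
    union {
        float constant;   /* SB_CONSTANT: one value for the span */
        struct {
            float start;  /* SB_LINEAR: value at the first sample */
            float delta;  /* ...plus this much per sample */
        } ramp;
        float *data;      /* SB_DATA: plain audio rate samples */
    } v;
} subblock;

typedef struct signal_block {
    unsigned nsubblocks;
    subblock *sb;         /* consecutive; lengths sum to block size */
} signal_block;

A LADSPA control block would then be nsubblocks == 1, with a single
SB_CONSTANT subblock covering everything.
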
> The native API then provides for conversion between different
> types of blocks for units that want, say, flat audio data. This is
> actually less expensive and complex than it sounds.
Well, it doesn't sound tremendously expensive to me - and the point
is that you can still accept the structured data if you can do a
better job with it.
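
For the record, flattening would be little more than a fill or a
copy per subblock - something like this, still using my guessed
structs from above:

#include <string.h>

/* Render a structured signal block into a flat audio buffer,
 * for units that only understand plain audio data. (Sketch only.) */
static void flatten(const signal_block *b, float *out)
{
    unsigned i, s;
    for (i = 0; i < b->nsubblocks; ++i) {
        const subblock *sb = &b->sb[i];
        switch (sb->type) {
          case SB_CONSTANT:
            for (s = 0; s < sb->length; ++s)
                *out++ = sb->v.constant;
            break;
          case SB_LINEAR: {
            float v = sb->v.ramp.start;
            for (s = 0; s < sb->length; ++s) {
                *out++ = v;
                v += sb->v.ramp.delta;
            }
            break;
          }
          case SB_DATA:
            memcpy(out, sb->v.data, sb->length * sizeof(float));
            out += sb->length;
            break;
        }
    }
}
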
> About the cost: an expression for pitch would be evaluated, say,
> 100 times a second, and values in between would be linearly
> interpolated, so that overhead is negligible.
I see. This is what I intend to do in Audiality later on, although
it will be more event-centered and not "just" expressions. As an
alternative to the current mono, poly and sequencer "patch
plugins", there will be one that lets you code patch plugins in a
byte-compiled scripting language. Timing is sample accurate, but
since we're dealing with "structured control", there's no need to
evaluate once per sample, or even once per buffer. You just do what
you want when you want.
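
To be explicit about the ramping, the kind of loop I have in mind
looks roughly like this - purely illustrative, not Audiality's
actual API:

/* Evaluate a control expression every 'interval' samples and
 * emit linear ramps in between. eval() is a stand-in for
 * whatever computes the expression. */
void control_to_ramps(float (*eval)(double t), double t0,
                      double srate, unsigned interval,
                      unsigned nsamples, float *out)
{
    unsigned pos = 0, s, n;
    float v = eval(t0), next, delta;
    while (pos < nsamples) {
        n = nsamples - pos;
        if (n > interval)
            n = interval;
        next = eval(t0 + (pos + n) / srate);
        delta = (next - v) / n;
        for (s = 0; s < n; ++s) {
            out[pos + s] = v;
            v += delta;
        }
        v = next;  /* resync to avoid accumulated rounding drift */
        pos += n;
    }
}

At 44.1 kHz, interval = 441 gives you your 100 evaluations per
second.
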
> It probably does not matter that e.g. pitch glides are not
> exactly logarithmic, a piece-wise approximation should suffice in
> most cases.
Yes, but there is a problem with fixed control rate, even if you can
pick one for each expression: If you set it low, you can't handle
fast transients (percussion attacks and the like), and if you set it
high, you get constantly high CPU utilization.
That's one of the main reasons why I prefer timestamped events: one
less decision to make. You always have sample accurate timing when
you need it, but no cost when you don't.
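
In code, that boils down to something like this (hypothetical
struct and calls; Audiality's real ones have more to them):

typedef struct event {
    unsigned when;        /* sample offset within this block */
    int type;             /* note on, control change, ... */
    float value;
    struct event *next;   /* queue is kept sorted by 'when' */
} event;

void render_audio(unsigned start, unsigned count);  /* hypothetical */
void handle_event(const event *e);                  /* hypothetical */

/* Run flat-out between events; act exactly at each timestamp.
 * Assumes all events fall inside this block. */
void process_block(event *ev, unsigned nsamples)
{
    unsigned pos = 0, until;
    while (pos < nsamples) {
        until = ev ? ev->when : nsamples;
        render_audio(pos, until - pos);
        pos = until;
        while (ev && ev->when == pos) {
            handle_event(ev);
            ev = ev->next;
        }
    }
}

Between events the DSP loop just runs, so an idle control costs
exactly nothing per sample.
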
> I'm not sure about the overhead of the whole system but I believe
> the instantiation overhead to be small, even if you play 100
> notes a second.
Yes, the "note frequency" shouldn't be a major issue in itself; no
need to go to extremes optimizing the handling of those events.
However, even relatively simple FIR filters and the like may have
rather expensive initialization that you cannot do much about,
short of instantiating "something" resident when you load the
plugin.
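
The standard trick is to hook the expensive setup to plugin load,
so that starting a note only touches cheap per-voice state.
Roughly, with hypothetical names:

#include <string.h>

#define FIR_TAPS 64

void design_fir(float *kernel, int taps);  /* hypothetical; the
                                              expensive part */

static float fir_kernel[FIR_TAPS];  /* shared by all voices */

void plugin_load(void)  /* called once, off the fast path */
{
    design_fir(fir_kernel, FIR_TAPS);
}

typedef struct voice {
    float history[FIR_TAPS];  /* per-voice state only */
} voice;

void voice_on(voice *v)  /* cheap enough for 100 notes/s */
{
    memset(v->history, 0, sizeof v->history);
}
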
> However, I haven't measured instantiation times, and there
> certainly is some overhead. We are still talking about standard
> block-based processing, though. Yes, sample accurate timing is
> implemented: when a plugin is run it is given start and end
> sample offsets.
As in "start processing HERE in your first buffer", and similarly for
the last buffer? Couldn't that be handled by the host, though "buffer
splitting", to avoid explicitly supporting that in every plugin?
> Hmm, that might have sounded confusing, but I intend to write a
> full account of MONKEY's architecture in the near future.
Ok, that sounds like an interesting read. :-)
> > You could think of our API as...
> It seems to be a solid design so far. I will definitely comment
> on it when you have a first draft for a proposal.
Well, the naming scheme isn't that solid... ;-)
But I think we have solved most of the hard technical problems.
(Event routing, timestamp wrapping, addressing of synth voices, pitch
control vs scales,...)
It's probably time to start working on a prototype, as a sanity check
of the design. Some things are hard to see until you actually try to
implement something.
//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---------------------------> http://olofson.net/audiality -'
.- M A I A -------------------------------------------------.
|    The Multimedia Application Integration Architecture    |
`----------------------------> http://www.linuxdj.com/maia -'
   --- http://olofson.net --- http://www.reologica.se ---