On Thursday 29 November 2007, Dave Robillard wrote:
[...]
> Well, sure, but big data is big data. In the typical case plugin
> buffers are much smaller than the cache
[...]
Of course, but that's exactly what I'm talking about - large buffers,
and why it doesn't make sense to support them. :-)
If you're using 65536 samples per buffer, it just takes a plugin with
four audio inputs and you're up to 1 MB of intermediate buffers. Even
if that does fit in the cache, in a real life situation, with other
threads working, most of it will be cold again every time the audio
thread starts. So, your processing speed is potentially capped at the
memory bandwidth throughout the buffer cycle, or at least until you
start reusing buffers in the graph. And what is supposed to be gained
by this...?
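(To spell the arithmetic out, assuming 32-bit float samples:

    65536 frames * 4 input ports * 4 bytes/sample = 1048576 bytes = 1 MB

...and that's just the inputs, before any output or scratch buffers are
counted.)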
I don't see why a plugin API of this type should support nonsense like
that at all, and thus, it shouldn't affect event timestamps either -
but well, now it's there, and there isn't really any Right Thing(TM)
to do here, I guess.
> crunching away on plain old audio here is definitely CPU bound (with
> properly written RT safe host+plugins anyway).
Last time I looked into this, a reasonably optimized resampler with
cubic interpolation and some ramped parameters was memory bound even
on a lowly P-III CPU, at least with integer processing. (Haven't
actually tested this on my AMD64...)
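For reference, the kind of inner loop I'm talking about looks roughly
like this - a sketch in C, using floating point 4-point Hermite
interpolation. (The version I actually measured was integer/fixed
point, so take this as an illustration of the amount of work per
output sample, not the real code.)

/* 4-point cubic (Hermite) interpolation resampler - illustrative only.
 * 'ratio' is input frames per output frame; the caller must guarantee
 * enough valid input samples around each read position.
 */
#include <stddef.h>

static void resample_cubic(const float *in, float *out,
                           size_t out_frames, double ratio)
{
    double pos = 1.0;   /* one sample of history before the start */
    for (size_t i = 0; i < out_frames; ++i) {
        size_t n = (size_t)pos;
        float f = (float)(pos - (double)n);
        float xm1 = in[n - 1], x0 = in[n], x1 = in[n + 1], x2 = in[n + 2];
        float c1 = 0.5f * (x1 - xm1);
        float c2 = xm1 - 2.5f * x0 + 2.0f * x1 - 0.5f * x2;
        float c3 = 1.5f * (x0 - x1) + 0.5f * (x2 - xm1);
        out[i] = ((c3 * f + c2) * f + c1) * f + x0;
        pos += ratio;   /* ramping 'ratio' only adds a few more ops */
    }
}

The point is that there's only a handful of multiplies and adds per
output sample, so once the input buffer falls out of the cache, the CPU
mostly ends up waiting for memory.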
I think floating point should be as fast or faster in most cases, at
least on P-III CPUs and better - and with SIMD, you may get another
2x-4x higher throughput at that.
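To illustrate where the 2x-4x figure comes from: an SSE unit handles
four single precision samples per instruction, so even a trivial gain
loop like this sketch has a theoretical 4x advantage over scalar code
(provided the data is actually in cache, which is the whole point
above):

/* Apply a gain to a buffer, four floats at a time (SSE). */
#include <xmmintrin.h>

static void gain_sse(float *buf, unsigned frames, float gain)
{
    __m128 g = _mm_set1_ps(gain);
    unsigned i;
    for (i = 0; i + 4 <= frames; i += 4) {
        __m128 v = _mm_loadu_ps(buf + i);
        _mm_storeu_ps(buf + i, _mm_mul_ps(v, g));
    }
    for (; i < frames; ++i)     /* scalar tail */
        buf[i] *= gain;
}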
Could be way off here, though. Do you have benchmark figures?
//David Olofson - Programmer, Composer, Open Source Advocate
.------- http://olofson.net - Games, SDL examples -------.
|        http://zeespace.net - 2.5D rendering engine     |
|        http://audiality.org - Music/audio engine       |
|        http://eel.olofson.net - Real time scripting    |
'-- http://www.reologica.se - Rheology instrumentation --'