On Tuesday 05 October 2010, at 14.39.25, Arnout Engelen <lad@bzzt.net> wrote:
[...]
>> Hence it's impossible to accurately honor the frame/time stamp of a
>> MIDI event. That's what drove the experimentation with splitting the
>> audio generation down to tighter blocks.
>
> Yes, that could be an interesting way to reduce (though not eliminate
> entirely) jitter even at large JACK period sizes.
Not only that. As long as the "fragment" initialization overhead can be
kept low, smaller fragments (within reasonable limits) can also improve
throughput, as a result of the smaller memory footprint.
Depending on the design, a synthesizer with a large number of voices playing
can have a rather large memory footprint (intermediate buffers etc), which can
be significantly reduced by doing the processing in smaller fragments.
Obviously, this depends a lot on the design and what hardware you're running
on, but you can be pretty certain that no modern CPU likes the occasional
short burst of accesses scattered over a large memory area - especially not
when other application code keeps pushing your synth code and data out of the
cache between the audio callbacks.
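
For the archives, here's roughly what I mean, as a sketch. synth_t,
synth_render(), synth_handle_event(), voice_run() and FRAG_FRAMES are
all made-up names for whatever the actual synth core looks like, not
any real API; the real thing would obviously look different. First,
the process callback cuts the JACK period at each MIDI event's frame
offset, so events take effect (about) sample accurately rather than at
period boundaries:

/* Rough sketch, not tested. */
#include <jack/jack.h>
#include <jack/midiport.h>

typedef struct voice voice_t;   /* whatever a voice actually is */

typedef struct {
    jack_port_t *audio_out;
    jack_port_t *midi_in;
    int n_voices;
    voice_t **voices;
} synth_t;

/* Placeholder synth core entry points. */
void synth_render(synth_t *s, float *out, jack_nframes_t frames);
void synth_handle_event(synth_t *s, const jack_midi_data_t *data,
                        size_t size);

static int process(jack_nframes_t nframes, void *arg)
{
    synth_t *s = arg;
    float *out = jack_port_get_buffer(s->audio_out, nframes);
    void *midi = jack_port_get_buffer(s->midi_in, nframes);
    uint32_t i, n_events = jack_midi_get_event_count(midi);
    jack_nframes_t pos = 0;

    for (i = 0; i < n_events; ++i) {
        jack_midi_event_t ev;
        jack_midi_event_get(&ev, midi, i);
        if (ev.time > pos) {
            /* Render up to the event's timestamp... */
            synth_render(s, out + pos, ev.time - pos);
            pos = ev.time;
        }
        /* ...and apply the event right at its frame. */
        synth_handle_event(s, ev.buffer, ev.size);
    }
    /* Render whatever remains of the period. */
    if (pos < nframes)
        synth_render(s, out + pos, nframes - pos);
    return 0;
}

And synth_render() itself can then do its work in small fragments,
running all voices over one scratch buffer that stays hot in the
cache, instead of letting each voice stream through a full
period-sized buffer of its own:

#include <string.h>

/* Hypothetical per-voice DSP; mixes 'frames' frames into 'out'. */
void voice_run(voice_t *v, float *out, jack_nframes_t frames);

#define FRAG_FRAMES 64  /* 256 bytes of float scratch; tune per CPU */

void synth_render(synth_t *s, float *out, jack_nframes_t frames)
{
    float scratch[FRAG_FRAMES];
    jack_nframes_t pos;

    for (pos = 0; pos < frames; pos += FRAG_FRAMES) {
        jack_nframes_t n = frames - pos;
        int v;
        if (n > FRAG_FRAMES)
            n = FRAG_FRAMES;
        memset(scratch, 0, n * sizeof(float));
        /* Run *all* voices over this one small fragment before
           moving on, so they all hit the same hot buffer. */
        for (v = 0; v < s->n_voices; ++v)
            voice_run(s->voices[v], scratch, n);
        memcpy(out + pos, scratch, n * sizeof(float));
    }
}

The fragment size is a tuning parameter; 64 frames of mono float is
only 256 bytes of scratch, so the per-fragment overhead is basically
the loop, the memset() and the memcpy().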
--
//David Olofson - Developer, Artist, Open Source Advocate
.--- Games, examples, libraries, scripting, sound, music, graphics ---.
| http://olofson.net    http://kobodeluxe.com    http://audiality.org |
| http://eel.olofson.net    http://zeespace.net   http://reologica.se |
'---------------------------------------------------------------------'