[LAD] on the soft synth midi jitters ...

fons at kokkinizita.net fons at kokkinizita.net
Tue Oct 5 20:00:11 UTC 2010

On Tue, Oct 05, 2010 at 02:50:10PM +0200, David Olofson wrote:

> Not only that. As long as the "fragment" initialization overhead can be kept 
> low, smaller fragments (within reasonable limits) can also improve throughput 
> as a result of smaller memory footprint.

'Fragment initialisation' should be little more than
ensuring you have the right pointers into the in/out
buffers, so its overhead can indeed be kept very low.

> Depending on the design, a synthesizer with a large number of voices playing 
> can have a rather large memory footprint (intermediate buffers etc), which can 
> be significantly reduced by doing the processing in smaller fragments.

> Obviously, this depends a lot on the design and what hardware you're running 
> on, but you can be pretty certain that no modern CPU likes the occasional 
> short bursts of accesses scattered over a large memory area - especially not 
> when other application code keeps pushing your synth code and data out of the 
> cache between the audio callbacks.

Very true. The 'bigger' the app (voices for a synth, channels for
a mixer or DAW), the more this will impact performance. Designing
the audio code for a fairly small basic period size will pay off,
as will some simple optimisations of buffer use.
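A minimal sketch of what I mean by that (fragment size, voice count
and the render function are all hypothetical, not any real synth's
code): all voices mix into one small scratch buffer per slice, so
the working set stays in the cache instead of one full-period
intermediate buffer per voice.

```c
#include <stddef.h>

#define FRAG    64   /* internal fragment size (an assumption)  */
#define NVOICES 16   /* number of active voices (an assumption) */

/* Hypothetical per-voice render: mixes n samples into 'mix'. */
static void render_voice (int v, float *mix, size_t n)
{
    (void) v;
    for (size_t i = 0; i < n; i++) mix [i] += 1.0f / NVOICES;
}

/* Process one Jack period in FRAG-sized slices. All voices share
 * one small scratch buffer that stays hot in the cache, instead of
 * each voice writing an intermediate buffer spanning the period.
 */
void process_period (float *out, size_t nframes)
{
    float scratch [FRAG];
    size_t done = 0;

    while (done < nframes)
    {
        size_t n = nframes - done;
        if (n > FRAG) n = FRAG;
        for (size_t i = 0; i < n; i++) scratch [i] = 0.0f;
        for (int v = 0; v < NVOICES; v++) render_voice (v, scratch, n);
        for (size_t i = 0; i < n; i++) out [done + i] = scratch [i];
        done += n;
    }
}
```

The same structure works for any period size that Jack hands you,
including ones that are not a multiple of the fragment size.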
There are other possible issues, such as the use of FFT operations.
Calling a large FFT every N frames may have little impact on the
average load, but a big one on the worst-case load within a single
period, and in the end that's what counts.
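To put some (entirely hypothetical) numbers on that, the arithmetic
is simple:

```c
/* Extra CPU load from a periodic FFT, averaged over all periods. */
double avg_load (double fft_ms, int every, double period_ms)
{
    return fft_ms / (every * period_ms);
}

/* Extra CPU load in the one period that actually runs the FFT. */
double peak_load (double fft_ms, double period_ms)
{
    return fft_ms / period_ms;
}

/* With 64-frame periods at 44.1 kHz (~1.45 ms) and an FFT costing
 * 0.8 ms of CPU time once every 16 periods:
 *   avg_load  = 0.8 / (16 * 1.45)  ~  3.4 % extra on average,
 *   peak_load = 0.8 / 1.45         ~ 55 %  extra in that period.
 * The average looks harmless, the worst case does not.
 */
```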

Zyn/Yoshimi uses FFTs for some of its algorithms IIRC. Getting
the note-on timing more accurate could help to distribute those
FFT calls more evenly over Jack periods, if the input is 'human'.
Big chords generated by a sequencer or algorithmically will still
start in the same period; maybe they should be 'dispersed'...
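One way such 'dispersion' could work (purely a sketch, not anything
Zyn or Yoshimi actually does): give the i-th note of a chord a start
point a fixed number of frames after the previous one, spilling into
following periods when needed.

```c
#include <stddef.h>

typedef struct
{
    size_t periods_later;  /* how many periods after the original */
    size_t frame;          /* sample offset within that period    */
} start_t;

/* Hypothetical helper: when many note-ons arrive in the same
 * period (a big sequenced chord), space the i-th note 'spacing'
 * frames after the previous one, so the expensive note start-up
 * work is spread over several periods instead of hitting one.
 */
start_t disperse (size_t note_index, size_t spacing, size_t period)
{
    size_t off = note_index * spacing;
    start_t s = { off / period, off % period };
    return s;
}
```

For example, with 256-frame periods and 100-frame spacing, note 0
starts immediately and note 3 lands 44 frames into the next period.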



There are three of them, and Alleline.

More information about the Linux-audio-dev mailing list