On Sat, 20 Sep 2014, Will Godfrey wrote:
While getting to grips with Yoshimi, I set up a microsecond timer in the function
that actually generates the data. To get a worst-case scenario I did this on my
office machine, which isn't specially set up for audio.
I configured JACK for 512 frames/period, 2 periods/buffer and 48 kHz, giving an
overall latency of 21.3 ms.
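(For reference, the arithmetic behind that figure: 512 frames/period x 2
periods = 1024 frames of buffering, and 1024 / 48000 frames/s = 21.3 ms.)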
Running a moderately complex 12-part tune, the data 'build' time varied
between about 1.8 ms and 6.1 ms per buffer. It dropped to less than 1 ms when
no sound was being produced.
Is it possible to deal with this in two threads? When generating audio there is no waiting for incoming audio to process, so synthesis for the next cycle could begin right after the callback returns rather than waiting for the next callback (thinking of multicore processors here). The generated audio goes into intermediate storage, and the callback only copies it into the audio stream. Effectively, the generation thread would run in a sort of free-run mode, filling a buffer whenever there is room.
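Something like the minimal sketch below, perhaps, assuming the plain JACK C
API. Everything in it (synth_render(), the buffer sizes, the client name) is
a hypothetical placeholder rather than Yoshimi's real code: a free-running
synthesis thread keeps a lock-free ring buffer topped up, and the realtime
callback only copies finished audio out of it.

/* Two-thread free-run sketch: producer renders ahead into a ring
 * buffer, the JACK process callback is a plain consumer. */
#include <jack/jack.h>
#include <jack/ringbuffer.h>
#include <pthread.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

#define BLOCK_FRAMES 256                 /* synthesis chunk size  */
#define RB_BYTES (4096 * sizeof(float))  /* several periods ahead */

static jack_ringbuffer_t *rb;
static jack_port_t *out_port;
static volatile int running = 1;         /* real code would use atomics */

/* Stand-in for the expensive "data build" step; renders silence here. */
static void synth_render(float *buf, size_t nframes)
{
    memset(buf, 0, nframes * sizeof(float));
}

/* Free-running producer: render whenever the ring buffer has room. */
static void *synth_thread(void *arg)
{
    float block[BLOCK_FRAMES];
    while (running) {
        if (jack_ringbuffer_write_space(rb) >= sizeof(block)) {
            synth_render(block, BLOCK_FRAMES);
            jack_ringbuffer_write(rb, (const char *)block, sizeof(block));
        } else {
            /* Buffer full; back off briefly.  A semaphore posted
             * from process() would avoid this polling. */
            struct timespec ts = { 0, 500000 };  /* 0.5 ms */
            nanosleep(&ts, NULL);
        }
    }
    return NULL;
}

/* Realtime consumer: only moves pre-rendered audio into the stream. */
static int process(jack_nframes_t nframes, void *arg)
{
    float *out = jack_port_get_buffer(out_port, nframes);
    size_t want = nframes * sizeof(float);
    if (jack_ringbuffer_read_space(rb) >= want)
        jack_ringbuffer_read(rb, (char *)out, want);
    else
        memset(out, 0, want);   /* underrun: producer fell behind */
    return 0;
}

int main(void)
{
    jack_client_t *client = jack_client_open("freerun-sketch",
                                             JackNullOption, NULL);
    if (!client)
        return 1;
    rb = jack_ringbuffer_create(RB_BYTES);
    out_port = jack_port_register(client, "out", JACK_DEFAULT_AUDIO_TYPE,
                                  JackPortIsOutput, 0);
    jack_set_process_callback(client, process, NULL);

    pthread_t tid;
    pthread_create(&tid, NULL, synth_thread, NULL);
    jack_activate(client);
    sleep(30);                  /* run for a while, then shut down */
    running = 0;
    pthread_join(tid, NULL);
    jack_client_close(client);
    jack_ringbuffer_free(rb);
    return 0;
}

The JACK ringbuffer is single-reader/single-writer and lock-free, so the
callback never blocks. The catch is that any pre-rendered audio adds latency
on top of the JACK period, and live events (MIDI, controllers) only take
effect when the producer renders, so the read-ahead depth has to stay small.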