On Sat, Sep 20, 2014 at 1:07 PM, Len Ovens <len(a)ovenwerks.net> wrote:
On Sat, 20 Sep 2014, Will Godfrey wrote:
While getting to grips with Yoshimi I set up a microsecond timer in the
function that actually generates the data. To get a worst-case scenario I
did this on my office machine, which is not specially set up for audio.

I configured JACK for 512 frames/period, 2 periods/buffer and 48k, giving
an overall latency of 21.3 ms.

Running a moderately complex 12-part tune, the data 'build' time varied
between about 1.8 ms and 6.1 ms per buffer. It dropped to less than 1 ms
when there was no sound being produced.
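
For context, the kind of per-cycle measurement described above might look
roughly like this inside a JACK process callback. This is only a sketch:
synth_render() and out_port are placeholders rather than Yoshimi's actual
code, and the fprintf() is for debugging only (printing from the realtime
thread is not generally safe):

#include <jack/jack.h>
#include <time.h>
#include <stdio.h>

extern void synth_render(float *out, jack_nframes_t nframes); /* placeholder */
extern jack_port_t *out_port;                                 /* placeholder */

static int process(jack_nframes_t nframes, void *arg)
{
    float *out = (float *) jack_port_get_buffer(out_port, nframes);

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);

    synth_render(out, nframes);                /* the "build" being timed */

    clock_gettime(CLOCK_MONOTONIC, &t1);
    double us = (t1.tv_sec - t0.tv_sec) * 1e6 +
                (t1.tv_nsec - t0.tv_nsec) / 1e3;
    fprintf(stderr, "build time: %.1f us\n", us);
    return 0;
}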
Is it possible to deal with this in two threads? In the case of generating
audio, there is no "waiting" for audio to come in to process, and the
processing for the next cycle could start right after the callback rather
than waiting for the next callback (thinking of multicore processors). The
outgoing audio is put into storage and the callback only puts it into the
audio stream. Effectively, the generation thread would be running in a sort
of free-run mode, filling up a buffer as there is room to do so.
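
A rough sketch of what I mean, using JACK's own ringbuffer; render_block()
and the globals are just placeholders, not real code from any synth:

#include <jack/jack.h>
#include <jack/ringbuffer.h>
#include <string.h>
#include <unistd.h>

#define BLOCK 512

extern void render_block(float *out, int nframes);  /* placeholder generator */
extern jack_port_t *out_port;                       /* placeholder */
static jack_ringbuffer_t *rb;   /* created elsewhere, e.g. 8 * BLOCK floats */

static void *generator_thread(void *arg)            /* free-running, non-RT */
{
    float block[BLOCK];
    for (;;) {
        if (jack_ringbuffer_write_space(rb) >= sizeof(block)) {
            render_block(block, BLOCK);
            jack_ringbuffer_write(rb, (const char *) block, sizeof(block));
        } else {
            usleep(1000);                            /* buffer full, back off */
        }
    }
    return NULL;
}

static int process(jack_nframes_t nframes, void *arg)
{
    float *out = (float *) jack_port_get_buffer(out_port, nframes);
    size_t want = nframes * sizeof(float);

    if (jack_ringbuffer_read_space(rb) >= want)
        jack_ringbuffer_read(rb, (char *) out, want);    /* just copy */
    else
        memset(out, 0, want);                            /* underrun: silence */
    return 0;
}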
this doesn't work for instruments.
the audio being generated has to change as rapidly as possible following
the receipt of an event that changes things (e.g. "start playing a new
note" or "lower the cutoff frequency of the filter"). fast response/low
latency for instruments means that all generation happens as-needed, not
pre-rendered, not "in storage".
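
roughly, the process callback itself consumes the events and renders within
the same cycle, splitting the buffer at each event's frame offset. a sketch
only; handle_event(), render() and the ports are made-up names, not any
particular synth's API:

#include <stdint.h>
#include <jack/jack.h>
#include <jack/midiport.h>

extern void handle_event(const jack_midi_event_t *ev);            /* made up */
extern void render(float *out, jack_nframes_t start, jack_nframes_t end);
extern jack_port_t *midi_in, *audio_out;                          /* made up */

static int process(jack_nframes_t nframes, void *arg)
{
    void  *midi = jack_port_get_buffer(midi_in, nframes);
    float *out  = (float *) jack_port_get_buffer(audio_out, nframes);

    jack_nframes_t pos = 0;
    uint32_t n = jack_midi_get_event_count(midi);

    for (uint32_t i = 0; i < n; ++i) {
        jack_midi_event_t ev;
        jack_midi_event_get(&ev, midi, i);
        render(out, pos, ev.time);     /* old state up to the event ... */
        handle_event(&ev);             /* ... then apply the change     */
        pos = ev.time;
    }
    render(out, pos, nframes);         /* remainder of the cycle, new state */
    return 0;
}

this way a note-on received in this cycle is audible in this cycle's output,
which is what "as-needed" means in practice.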
for streaming audio playback (from disk or the net or whatever) this
approach is fine and is the equivalent of many buffering schemes.