On Sat, Sep 20, 2014 at 04:41:31PM +0100, Will Godfrey wrote:
> While getting to grips with Yoshimi I set up a microsecond timer in the
> function that actually generates the data. To get a worst-case scenario I
> did this on my office machine, which is not specially set up for audio.
> I configured jack for 512 frames/period, 2 periods/buffer and 48k, giving
> an overall latency of 21.3 ms.
> Running a moderately complex 12-part tune, the data 'build' time varied
> between about 1.8 ms and 6.1 ms per buffer. It dropped to less than 1 ms
> when no sound was being produced.
> That was a lot more variation than I was expecting, but considering the
> variety of calls being made, depending on which voices were sounding and
> with what effects, I don't know how this could be avoided.
If the load changes only as a function of the number of active voices,
that is perfectly OK.
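
For reference, a minimal sketch of that kind of measurement, with
render_period() standing in for whatever actually builds the period's
data (none of the names below are from the Yoshimi sources):

/* Sketch only: time a stand-in for the per-period render function and
   report the worst case in microseconds, the same idea as above.      */
#include <math.h>
#include <stdio.h>
#include <time.h>

#define NFRAMES  512            /* frames per period, as in the test   */
#define NPERIODS 1000           /* periods to simulate                 */

static float buffer [NFRAMES];

static void render_period (void) /* hypothetical data generation       */
{
    for (int i = 0; i < NFRAMES; i++)
        buffer [i] = sinf (0.01f * i);
}

static double elapsed_us (const struct timespec *t0,
                          const struct timespec *t1)
{
    return 1e6 * (t1->tv_sec - t0->tv_sec)
         + 1e-3 * (t1->tv_nsec - t0->tv_nsec);
}

int main (void)
{
    double worst = 0;
    struct timespec t0, t1;

    for (int p = 0; p < NPERIODS; p++)
    {
        clock_gettime (CLOCK_MONOTONIC, &t0);
        render_period ();
        clock_gettime (CLOCK_MONOTONIC, &t1);
        double us = elapsed_us (&t0, &t1);
        if (us > worst) worst = us;
    }
    printf ("worst case: %.1f us per %d-frame period\n", worst, NFRAMES);
    return 0;
}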
> I did another check for continuous sounds, and under those circumstances
> the time didn't vary significantly.
That's a good sign. You should also test this with smaller period
sizes. If all is OK the required calculation time should just
decrease in proportion. If that is the case then the CPU load as
seen by Jack should remain the same, apart from a small increase
due to overhead (task switching etc.).
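
To put rough numbers on that (assuming the 6.1 ms worst case quoted
above refers to one 512-frame period, which is 512 / 48000 = 10.67 ms):

/* Sketch: DSP load is build time divided by period time. If the work
   really scales with the period size, the load stays the same as the
   period shrinks, apart from the fixed per-period overhead.           */
#include <stdio.h>

int main (void)
{
    const double fs = 48000.0;
    const double build_512 = 6.1e-3;   /* measured worst case, seconds */

    for (int nframes = 512; nframes >= 64; nframes /= 2)
    {
        double period = nframes / fs;
        double build  = build_512 * nframes / 512.0; /* assumes linear */
        printf ("%4d frames: period %5.2f ms, expected load %.0f %%\n",
                nframes, 1e3 * period, 100.0 * build / period);
    }
    return 0;
}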
The thing to be avoided is code that e.g. generates a heavy load
every fourth period and does almost nothing in the three periods in
between. Or a synth that generates a peak load whenever a voice is
started and much less while the note lasts. IIRC yoshi/zyn use FFTs
in some of the algorithms, so this sort of thing could happen.
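
To illustrate the kind of load spreading I mean (this is not the actual
zyn/yoshi code, just a sketch): rather than doing a big one-off job, such
as filling a wavetable, entirely in the period where a note starts, do a
bounded slice of it in each period until it is finished.

#include <math.h>
#include <stdio.h>

#define TABLE_SIZE 16384
#define SLICE       1024        /* samples of table filled per period  */

static float table [TABLE_SIZE];
static int   done = TABLE_SIZE; /* nothing pending initially           */

static void start_note (void)
{
    done = 0;                   /* new note: table must be (re)built   */
}

static void per_period_work (void)
{
    int n = TABLE_SIZE - done;
    if (n > SLICE) n = SLICE;   /* bound the work done this period     */
    for (int i = 0; i < n; i++)
        table [done + i] = sinf (6.2832f * (done + i) / TABLE_SIZE);
    done += n;
}

int main (void)
{
    start_note ();
    int periods = 0;
    while (done < TABLE_SIZE)
    {
        per_period_work ();
        periods++;
    }
    printf ("table built over %d periods\n", periods);
    return 0;
}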
Ciao,
--
FA
A world of exhaustive, reliable metadata would be an utopia.
It's also a pipe-dream, founded on self-delusion, nerd hubris
and hysterically inflated market opportunities. (Cory Doctorow)