On Tue, February 23, 2016 12:27 pm, Jonathan Brickman wrote:
> No, I know the difference very well. My architecture currently has
> seven synchronous parallel chains right now.
OK, now we're getting some details. That is very different from the
impression I got from earlier emails, that the system was using 85% DSP
with only two Yoshimi instances running.
So how many applications are actually in the JACK graph? At least
seven sub-graphs, but how many total connections?
There are at least seven instances of Calf plugins, right?
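If you have the JACK example tools installed, running jack_lsp -c will
print every port along with its connections, which should make the
counting easy.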
> Each chain is controlled and modulated by Calf plugin sets and a
> single big...Non-Mixer.
I think this may be the source of the high DSP usage:
"All JACK connections, MIDI and audio, which will ever be needed in any
patch, are always connected."
Even when an application such as Yoshimi is not creating any audio data,
jackd has no way of detecting that, so it will call the jack process
function for each application during each period. I haven't looked far
enough down into the details, but I assume that some process, either the
application or jackd, has to explicitly write all-zero samples into the
buffer each period to make sure there is no unwanted audio output. Even
staying silent is not a zero-cost operation.
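To make that concrete, here is a minimal sketch of a JACK client (the
client and port names are made up for illustration): jackd calls the
process callback every period for every active client, and a client
with nothing to say still has to fetch its buffer and zero it.

/* Minimal sketch: jackd invokes process() every period for every
 * registered client, so even silence costs a callback, a buffer
 * fetch, and a memset.  Build with: gcc demo.c -o demo -ljack */
#include <string.h>
#include <unistd.h>
#include <jack/jack.h>

static jack_port_t *out_port;
static int synth_active = 0;  /* hypothetical: would toggle with note activity */

static int process(jack_nframes_t nframes, void *arg)
{
    jack_default_audio_sample_t *out = jack_port_get_buffer(out_port, nframes);

    if (synth_active) {
        /* render real audio into out[0..nframes-1] here */
    } else {
        /* silent client must still write zeros to avoid stale data */
        memset(out, 0, nframes * sizeof(*out));
    }
    return 0;
}

int main(void)
{
    jack_client_t *client = jack_client_open("silence-demo", JackNullOption, NULL);
    if (!client)
        return 1;
    out_port = jack_port_register(client, "out", JACK_DEFAULT_AUDIO_TYPE,
                                  JackPortIsOutput, 0);
    jack_set_process_callback(client, process, NULL);
    jack_activate(client);
    for (;;)
        sleep(1);  /* run until killed */
}

Multiply that per-period overhead by every idle client in the graph,
every period.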
> Each Yoshimi instance hits a separate input (thread) in Non-Mixer,
> this is one of the things which has made things work as well as they are.
Paul, Stephane, or J Liles would have to comment on the details of that;
I'm not sure how the final output mixing affects the processing graph with
jack2. Maybe that is the point that ties all the independent graph lines
together and forces everything to happen within one period.
Would it be possible to call jack_disconnect and jack_connect in addition
to, or in place of, changing the MIDI channel of the control keyboard?
At least as an experiment. Possibly there is a high percentage of the
period where JACK is just processing empty buffers from all the synth
chains which are connected but not actually generating audio.
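As a rough sketch of that experiment (every client and port name below
is hypothetical; substitute the real ones from your graph, which
jack_lsp will show), the switching could look something like this:

/* Sketch: instead of switching MIDI channels, detach the idle synth
 * chain from the mixer and attach the active one.  Port names are
 * made up; check the actual names with jack_lsp. */
#include <stdio.h>
#include <jack/jack.h>

static void switch_chain(jack_client_t *client, int active, int idle)
{
    char src[64], dst[64];

    /* break the idle chain's connection into Non-Mixer */
    snprintf(src, sizeof src, "yoshimi-%02d:left", idle);
    snprintf(dst, sizeof dst, "Non-Mixer:in-%d", idle);
    jack_disconnect(client, src, dst);

    /* make the active chain's connection */
    snprintf(src, sizeof src, "yoshimi-%02d:left", active);
    snprintf(dst, sizeof dst, "Non-Mixer:in-%d", active);
    jack_connect(client, src, dst);   /* returns 0 on success */
}

int main(void)
{
    jack_client_t *client =
        jack_client_open("chain-switcher", JackNullOption, NULL);
    if (!client)
        return 1;
    switch_chain(client, 2, 1);   /* e.g. bring up chain 2, drop chain 1 */
    jack_client_close(client);
    return 0;
}

The point of the experiment is simply to see whether the DSP load falls
when the idle chains are actually detached from the graph.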
> I am not sure why I cannot cram more in the JACK cycle given my low
> CPU usage for the JACK process.
Each function call, each context switch to give a different process run
time, and every time the OS has to synchronize data structures between
different CPUs: all of those things take time.
> but I am thinking that this is probably bound to the audio signal
> sample rate, and I am already at 96kHz for that very reason.
Have you tried running at 48kHz? For a given amount of audio time, a
higher sample rate means more samples to generate or process. For a
given number of samples the latency is lower; for a given latency the
work load is higher. Since in live performance you really care about
the latency, it seems like for a given latency you would want to reduce
the CPU load. Maybe that won't matter since you don't seem to be CPU
limited, but it could be a useful experiment.
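To put rough numbers on that (a frames-per-period of 256 is assumed
purely for illustration):

    period latency = frames per period / sample rate
    96 kHz, 256 frames:  256 / 96000 ~= 2.7 ms, 256 samples per callback
    48 kHz, 128 frames:  128 / 48000 ~= 2.7 ms, 128 samples per callback

Same latency either way, but at 48kHz each synth only has to produce
half as many samples per period.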
--
Chris Caudle