On Wed, Nov 13, 2019 at 12:17:47PM +0100, Giso Grimm wrote:
> we are running a setup with two RME MADI cards to drive a system
> with approx. 100 playback channels. We are using jack2 and
> zita-j2a to achieve a sufficient number of playback channels. The
> sample clocks are synchronized via word clock, therefore
> resampling is deactivated in zita-j2a.
>
> On a similar setup (with two different card types) I noticed that
> the latency between the cards is fixed as long as jack/zita-j2a
> is running; however, the latency differs between starts of jack.
It will be different each time you (re)start zita-j2a, even if Jack
remains running.
This is a known problem. The quick solution ATM is to *not* disable
resampling.
When j2a or a2j is used *with* resampling, there is a control loop
that tries to keep the average number of samples buffered between
the two clock domains at a preset value. It does this by

1. Making an initial guess based on the timing of the first
   available ALSA period relative to Jack's start of period.

2. Continuously adjusting the resampling ratio based on the most
   recent timing info as it becomes available. This is a very
   slow control loop that after a few seconds will remove any
   error made in the first step. The result is a constant and
   well-defined latency. (See the sketch below.)
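
Schematically the loop does something like the sketch below. This
is only an illustration of the idea, *not* the actual zita-ajbridge
code: the class name, the gains and the sign convention (ratio > 1
drains the buffer) are all invented.

  // delay_loop_sketch.cpp -- illustration only, not zita-ajbridge.
  // Build: g++ -O2 delay_loop_sketch.cpp -o delay_loop_sketch
  #include <cstdio>

  // Hypothetical controller: low-pass filters the measured buffer
  // fill and nudges the resampling ratio so the average converges
  // on a target value.
  class DelayLoop
  {
  public:
      explicit DelayLoop (double target) :
          _target (target), _avg (target), _ratio (1.0) {}

      // Called once per period with the current buffer fill in
      // frames. Returns the resampling ratio for the next period.
      double update (double fill)
      {
          _avg += 0.05 * (fill - _avg);   // slow averaging filter
          double err = _avg - _target;    // frames of excess delay
          _ratio = 1.0 + 1e-5 * err;      // tiny proportional gain
          return _ratio;
      }

      double avg () const { return _avg; }

  private:
      double _target;   // desired average buffer fill (frames)
      double _avg;      // filtered fill estimate
      double _ratio;    // current resampling ratio
  };

  int main ()
  {
      DelayLoop loop (256.0);
      double fill = 300.0;    // start with an initial-guess error
      for (int i = 0; i < 5000; i++)
      {
          double r = loop.update (fill);
          // Crude model: a ratio above 1.0 removes frames from
          // the buffer.
          fill -= (r - 1.0) * 1024.0;
          if (i % 1000 == 0)
              std::printf ("period %4d  avg fill %6.1f\n",
                           i, loop.avg ());
      }
      return 0;
  }

The essential point is that the per-period correction is tiny, so
it is inaudible, but the average delay still converges on the
preset value within a few seconds.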
When resampling is disabled, only (1) happens, which means that
the initial error is never corrected. This is why you get a
different latency each time.
The solution (without resampling) would be to run the control
loop anyway for 10 seconds or so, and then correct the latency
by inserting or skipping a number of samples. This would
require significant changes to the code, and the one-time
correction would produce a glitch some time after starting.
The cleanest way to handle that would be to mute the ALSA
device until the loop has settled, so you'd only get signal
10 seconds or so after starting a2j or j2a.
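
The proposed scheme would look something like this. Again just a
sketch with invented names and numbers, not existing code: it keeps
the output muted while the average settles, then makes a single
corrective step (negative = skip frames, positive = insert).

  // latency_fixup_sketch.cpp -- the *proposed* one-shot correction
  // described above, not current zita-ajbridge behaviour.
  #include <cmath>
  #include <cstdio>

  struct Bridge
  {
      enum { SETTLING, RUNNING } state = SETTLING;
      int    periods  = 0;
      double avg_fill = 0.0;

      // Called once per period; 'fill' is the measured buffer fill
      // in frames, 'target' the desired latency. Returns frames to
      // insert (> 0) or skip (< 0), normally 0.
      int process (double fill, double target)
      {
          if (state == SETTLING)
          {
              // Output stays muted while the slow average settles.
              avg_fill += 0.01 * (fill - avg_fill);
              if (++periods == 1000)   // ~10 s at typical periods
              {
                  state = RUNNING;
                  // One corrective step, then unmute.
                  return (int) std::lround (target - avg_fill);
              }
          }
          return 0;
      }
  };

  int main ()
  {
      Bridge b;
      for (int i = 0; i < 1200; i++)
      {
          int adj = b.process (300.0, 256.0);
          if (adj)
              std::printf ("period %d: adjust by %d frames, unmute\n",
                           i, adj);
      }
      return 0;
  }

The glitch from the corrective step would then never reach the
output, since it happens while the device is still muted.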
I've been planning to implement this, but there's always
something more urgent...
Ciao,
--
FA