Has the JACK latency always been this way, or was it comparable to the
ALSA-only latency in the past?
Also, would you presume that newer machines with a faster FSB
would improve latency even further?
the JACK data i provided covers an out-of-process client. the all-in-process
client case has the same latency as ALSA only.
the difference between the two is that although 99.9% of the time the
kernel schedules us as desired and JACK can provide ALSA-only/in-process
performance in the out-of-process case, every once in a few tens of
thousands of process() cycles the kernel messes up our scheduling and
that causes an xrun. tracking this down is an important but very, very
difficult task. it's possible that it may never be solved.
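for anyone who wants to watch for these themselves, a minimal sketch of an
out-of-process client that just counts xruns could look roughly like the
following. this is not the client used for the measurements above; the client
name "xrun-watch" is arbitrary and error handling is kept to a minimum. it
only relies on the standard JACK C API (jack_client_open, process and xrun
callbacks, jack_activate).

    #include <stdio.h>
    #include <unistd.h>
    #include <jack/jack.h>

    static int xrun_count = 0;

    /* called once per buffer by the JACK server; does no audio work,
       we only care about the scheduling behaviour */
    static int process(jack_nframes_t nframes, void *arg)
    {
        return 0;
    }

    /* called whenever the server detects an xrun */
    static int xrun(void *arg)
    {
        ++xrun_count;
        fprintf(stderr, "xrun #%d\n", xrun_count);
        return 0;
    }

    int main(void)
    {
        jack_client_t *client = jack_client_open("xrun-watch",
                                                 JackNullOption, NULL);
        if (!client) {
            fprintf(stderr, "could not connect to the JACK server\n");
            return 1;
        }

        jack_set_process_callback(client, process, NULL);
        jack_set_xrun_callback(client, xrun, NULL);

        if (jack_activate(client)) {
            fprintf(stderr, "could not activate client\n");
            return 1;
        }

        /* run until interrupted; xruns are reported as they happen */
        for (;;)
            sleep(1);

        return 0;
    }

left running for a few hours at a small buffer size, a client like this will
show the occasional xrun described above even on an otherwise idle machine.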
--p