So, OK: I was able to confirm by having someone try it out for me (I'm not at my Linux machine right now) that Tim, and of course Paul, you are both correct: a JACK client in a send/return loop does add additional latency. So now I'm left with the obvious question of "why?".
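For a sense of scale: if, as I suspect, the return trip costs one extra period, then at 48 kHz with a 256-frame period (numbers picked purely for illustration) that comes to 256 / 48000 ≈ 5.3 ms of added delay per pass through the loop.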

What is the difference, in terms of latency, between the following signal chains in JACK?


1)    A/D -> A -> B -> A -> D/A

and

2)    A/D -> A [B as plugin] -> D/A

and

3)    A/D -> A -> B -> C -> D/A

Why does the signal passing through client B pick up additional latency in 1) but not in 3), especially given that C could be another instance of A?
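For concreteness, here's the kind of client I have in mind for B: a bare passthrough that does nothing but copy input to output. This is just a sketch (client and port names are illustrative, error handling omitted):

    #include <string.h>
    #include <unistd.h>
    #include <jack/jack.h>

    static jack_port_t *in_port, *out_port;

    /* Copy input straight to output; B does no processing at all. */
    static int process(jack_nframes_t nframes, void *arg)
    {
        jack_default_audio_sample_t *in  = jack_port_get_buffer(in_port, nframes);
        jack_default_audio_sample_t *out = jack_port_get_buffer(out_port, nframes);
        (void) arg;
        memcpy(out, in, sizeof(*out) * nframes);
        return 0;
    }

    int main(void)
    {
        jack_client_t *client = jack_client_open("B", JackNullOption, NULL);
        if (!client)
            return 1;

        in_port  = jack_port_register(client, "in",  JACK_DEFAULT_AUDIO_TYPE,
                                      JackPortIsInput, 0);
        out_port = jack_port_register(client, "out", JACK_DEFAULT_AUDIO_TYPE,
                                      JackPortIsOutput, 0);
        jack_set_process_callback(client, process, NULL);
        jack_activate(client);

        for (;;)
            sleep(1);   /* process() runs in JACK's realtime thread */
        return 0;
    }

Connect A's send to B:in and B:out back to A's return and you have chain 1); connect A -> B -> C in a straight line and you have chain 3). It's the same client either way, which is what puzzles me.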

And also, could the following section of the JACK FAQ perhaps be altered to provide some clarification here:

Doesn't using JACK add latency?

There is NO extra latency caused by using JACK for audio input and output. When we say none, we mean absolutely zero. The only impact of using JACK is a slight increase in the amount of work done by the CPU to process a given chunk of audio, which means that in theory you could not get 100% of the processing power that you might get if your application(s) used ALSA or CoreAudio directly. However, given that the difference is less than 1%, and that your system will become unstable before you get close to 80% of the theoretical processing power, the effect is negligible.


I understand that this explicitly says "using JACK for audio input and output", but the question itself is a little broader than that, and to me the answer implies that the entire JACK graph adds no additional latency. If someone can point me to a clarified explanation of the matter, I'd happily write up some documentation, if no one else has the time or inclination.
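For what it's worth, JACK's own latency accounting can be queried per port, so the loop can be measured from inside the graph. A sketch of such a check using jack_port_get_total_latency ("system:playback_1" is the usual ALSA backend port name; yours may differ):

    #include <stdio.h>
    #include <jack/jack.h>

    int main(void)
    {
        jack_client_t *client = jack_client_open("latency_probe", JackNullOption, NULL);
        if (!client)
            return 1;

        /* jack_port_get_total_latency() reports the worst-case latency
           accumulated along any connection path feeding this port. */
        jack_port_t *port = jack_port_by_name(client, "system:playback_1");
        if (port)
            printf("system:playback_1 total latency: %u frames\n",
                   jack_port_get_total_latency(client, port));

        jack_client_close(client);
        return 0;
    }

Comparing that number with B patched into the loop versus bypassed should at least show whether JACK itself accounts for the extra delay.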

regards

michael