On Wed, Jul 07, 2021 at 01:00:21PM +0200, Wim Taymans wrote:
> Challenge accepted!... I made a little jack client with 32 input and
> 32 output ports that memcpy the samples. Then I started 16 of those
> and linked them all in a long chain. Then I linked the input of the
> chain to a USB mic and the output to another USB card (it needs to do
> adaptive resampling to keep this going). That takes about 6 seconds
> to set up on my machine. I run this with a buffer size of 128 samples
> at 48 kHz.
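For reference, such a pass-through client is only a few dozen lines
against the JACK C API. A minimal sketch of what I assume that test
client looks like (the client name, port names and the idle loop are
my guesses, error handling omitted):

    /* 32-in/32-out pass-through client that memcpys each channel.
       Build with: gcc -O2 -o thru thru.c -ljack
       Run 16 instances with unique names: ./thru thru_00 etc.  */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <jack/jack.h>

    #define NCHAN 32

    static jack_port_t *inp [NCHAN];
    static jack_port_t *outp [NCHAN];

    static int process (jack_nframes_t nframes, void *arg)
    {
        for (int i = 0; i < NCHAN; i++)
        {
            void *s = jack_port_get_buffer (inp [i], nframes);
            void *d = jack_port_get_buffer (outp [i], nframes);
            memcpy (d, s, nframes * sizeof (jack_default_audio_sample_t));
        }
        return 0;
    }

    int main (int argc, char *argv [])
    {
        const char *name = (argc > 1) ? argv [1] : "thru";
        jack_client_t *client = jack_client_open (name, JackNullOption, NULL);
        if (!client) return 1;

        char pn [16];
        for (int i = 0; i < NCHAN; i++)
        {
            snprintf (pn, sizeof pn, "in_%d", i);
            inp [i] = jack_port_register (client, pn, JACK_DEFAULT_AUDIO_TYPE,
                                          JackPortIsInput, 0);
            snprintf (pn, sizeof pn, "out_%d", i);
            outp [i] = jack_port_register (client, pn, JACK_DEFAULT_AUDIO_TYPE,
                                           JackPortIsOutput, 0);
        }
        jack_set_process_callback (client, process, NULL);
        jack_activate (client);
        for (;;) sleep (1);   /* run until killed */
    }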
Tried something similar: 16 instances of JackGainctl (from zita-jacktools)
with 32 channels (i.e. 64 ports) each, run from a Python script.
With jack2 this takes 0.5s to create the clients, and on average 0.1s to
connect all of them in a chain (15 * 32 connect calls).
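The connect phase itself is trivial with the C API; a sketch of the
chain wiring (the "gNN" client names and the "in_N"/"out_N" port name
pattern are made up, the real names depend on the tool used):

    /* Wire 16 clients g00..g15 into a chain: 15 * 32 = 480 connects. */
    #include <stdio.h>
    #include <jack/jack.h>

    int main (void)
    {
        jack_client_t *client = jack_client_open ("wire", JackNullOption, NULL);
        if (!client) return 1;
        jack_activate (client);

        char src [64], dst [64];
        for (int i = 0; i < 15; i++)
        {
            for (int ch = 0; ch < 32; ch++)
            {
                snprintf (src, sizeof src, "g%02d:out_%d", i, ch);
                snprintf (dst, sizeof dst, "g%02d:in_%d", i + 1, ch);
                jack_connect (client, src, dst);  /* returns 0 on success */
            }
        }
        jack_client_close (client);
        return 0;
    }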
With jack1 this fails miserably. The reason is probably that jack1
recomputes the graph for each and every connection change, even if the
actual client dependencies don't change [1].
Plain unpatched kernel, with -p256, no xruns after one hour.
> Works okish, some xruns here and there, and this is a stock Fedora
> setup with extra rtprio for the user. No low-latency kernel or any
> tuning. I had to increase the max fds to 8192.
Why on earth do you need that many kernel objects (fds) to synchronise
just 16 processes? Again, something doesn't scale here...
> This utterly fails with jackd on this system; it doesn't even want
> to start all the clients. I'm sure it's something with the config
> somewhere...
See above: if you were using jackd1, that would explain it.
[1] This was one of the many things that my rejected patch (years ago)
actually fixed. IIRC the complexity of jack_connect() in jackd1 is at
least O(n^2), if not O(n^3), where n is the number of existing
connections. This doesn't scale.
Ciao,
--
FA