Because I originally wrote audiorack using Apple's CoreAudio, many of my design decisions were shaped by how that API works. Many of those choices needed to be reconsidered as I adapted the design to the jack API.

A very big structural difference between the APIs is how "rendering" sample buffers is accomplished. CoreAudio provides a separate callback to get or deliver a block of samples for EVERY audio interface your program attaches to. For my program using that API, the design is based on one audio interface being chosen as the "master": the low-latency interface that drives my code's core mixer. Other interfaces, both inputs and outputs, get an extra buffer between them and the core mixer. In the callback code for these "slave" interfaces, my code compares the time stamps of the core-mixer buffer and the slave buffer to make a "phase" adjustment, using Apple's varispeed resampling AudioUnit plugin with a PID error loop controlling the resampling ratio. This keeps the buffers in sync on average, at the cost of extra delay to absorb kernel callback scheduling jitter; there is no guarantee what order the OS will schedule the callbacks in, even if they are on the same sample clock. With this scheme I could use any number of interfaces, each with its own slightly different clock rate and drift, with one interface selected as the low-latency master. After years of tweaking the PID filter, I had it working very well, with no cost to the master interface other than the processor overhead of the resampling.

Jack, on the other hand, has a single callback in which samples are received from jack source ports and new sample data is delivered to jack destination ports. A very nice and clean approach, driven by a single clock source, and well suited to interconnecting audio streams between programs. I like it a lot.
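For readers who haven't used jack's API, the single-callback model looks roughly like this minimal pass-through client. The client and port names are placeholders; this needs a running jack server and linking with -ljack to actually do anything.

```c
/* Minimal sketch of a jack client: ONE process callback serves every
 * port, all driven by the server's single clock. Names are illustrative. */
#include <jack/jack.h>
#include <string.h>
#include <unistd.h>

static jack_port_t *in_port, *out_port;

/* The single render callback: read each source port, write each
 * destination port. Here we just copy input to output. */
static int process(jack_nframes_t nframes, void *arg)
{
    jack_default_audio_sample_t *in  = jack_port_get_buffer(in_port, nframes);
    jack_default_audio_sample_t *out = jack_port_get_buffer(out_port, nframes);
    memcpy(out, in, nframes * sizeof(jack_default_audio_sample_t));
    return 0;
}

int main(void)
{
    jack_client_t *client = jack_client_open("sketch", JackNullOption, NULL);
    if (!client)
        return 1;
    in_port  = jack_port_register(client, "in",  JACK_DEFAULT_AUDIO_TYPE,
                                  JackPortIsInput, 0);
    out_port = jack_port_register(client, "out", JACK_DEFAULT_AUDIO_TYPE,
                                  JackPortIsOutput, 0);
    jack_set_process_callback(client, process, NULL);
    jack_activate(client);
    sleep(30);              /* stand-in for the program's real main loop */
    jack_client_close(client);
    return 0;
}
```

Contrast this with CoreAudio, where each attached interface would have delivered its own callback on its own clock; here there is exactly one clock domain, which is precisely the trade-off discussed above.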

I have lost the ability for my software to handle resampling across multiple clock domains. I was thinking that zita-a2j, etc., was my path to getting that functionality back. If I didn't have it working so well on OSX, I wouldn't lament the loss with jack. But it's hard to give up on it!

Thanks for the Ubuntu Studio Control links.

Ethan...

On Sun, 2019-11-17 at 00:21 +0100, Ralf Mardorf wrote:
> On Sat, 16 Nov 2019 15:49:34 -0700, Ethan Funk wrote:
> > > Why do you need zita-a2j/j2a anyway? Using a single multichannel
> > > card is usually the better solution.
> >
> > I have one multichannel audio interface for everything important:
> > program out, studio monitors, headphones, guest mic, host mic,
> > etc.  But it sure is nice to be able to use the built-in audio for a
> > cue channel and talkback mic, where latency is not important.  Also
> > handy for USB turntables, and other random devices that are
> > occasionally used in a radio show without latency being important.
>
> You are mistaken. Try to avoid anything that affects "everything
> important". To sync different devices by software affects "everything
> important".
>
> On Sat, 16 Nov 2019 14:47:26 -0700, Ethan Funk wrote:
> > Does anyone know where I can find the source code for Ubuntu
> > Studio Control?
>
> https://packages.ubuntu.com/source/eoan/ubuntustudio-controls
>
> https://launchpad.net/ubuntustudio-controls