[LAD] Jack audio questions to aid in my development of audiorack

Ethan Funk ethan at redmountainradio.com
Sun Nov 17 01:50:16 CET 2019


Since I originally wrote audiorack using Apple's CoreAudio, I made
design decisions based on how that API works.  Many of those choices
needed to be reconsidered as I adapted the design to the jack API.

A very big structural difference between the APIs is how "rendering"
sample buffers is accomplished.  CoreAudio provides a separate callback
to get or deliver a block of samples for EVERY audio interface your
program attaches to.  For my program using that API, the design is
based on one audio interface being chosen as the "master."  This is the
low-latency interface that drives my code's core mixer.  Other
interfaces, both inputs and outputs, get an extra buffer between them
and the core mixer.  In the callback code for these "slave" interfaces,
my code compares the time stamps of the core mixer buffer and the slave
buffer to make a "phase" adjustment, using Apple's varispeed resampling
AudioUnit plugin with a PID error loop controlling the resampling
ratio.  This keeps the buffers, on average, in sync, but with extra
delay to absorb kernel callback scheduling jitter; there is no
guarantee what order the OS will schedule the callbacks in, even if
they are on the same sample clock.  With this scheme I could use any
number of interfaces, each with its own slightly different clock rate
and drift, with one interface selected as the low-latency master.
After years of tweaking the PID filter, I had it working very well,
with no cost (other than the processor overhead of the resampling) to
the master interface.
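
To give a rough idea of what I mean, the heart of that scheme boils
down to something like the sketch below.  This is NOT my actual code;
the struct, gains, and function names are made up for illustration,
and the real thing has clamping and smoothing around it.

/* Illustrative sketch only.  A PID loop turns the phase error between
   the core mixer buffer and a slave interface buffer into a small
   correction of the resampling ratio.  Names and gains are made up. */

typedef struct {
    double kp, ki, kd;      /* PID gains, found by experiment */
    double integral;        /* accumulated error */
    double prev_error;      /* error from the previous callback */
} pid_state;

/* error: slave time stamp minus core mixer time stamp, in samples;
   dt: time since the previous slave callback, in seconds. */
static double pid_update(pid_state *s, double error, double dt)
{
    s->integral += error * dt;
    double derivative = (error - s->prev_error) / dt;
    s->prev_error = error;
    return s->kp * error + s->ki * s->integral + s->kd * derivative;
}

/* Called from the slave interface callback.  Returns the ratio to hand
   to the resampler (the varispeed AudioUnit's rate, in my case). */
static double next_resample_ratio(pid_state *s, double phase_error, double dt)
{
    double correction = pid_update(s, phase_error, dt);
    return 1.0 + correction;    /* corrections stay tiny, near 1.0 */
}

The point is that the ratio only needs to wander slightly around 1.0
to track the clock drift between interfaces.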

Jack, on the other hand, has a single callback in which samples are
received from jack source ports and new sample data is delivered to
jack destination ports.  A very nice and clean approach, driven by a
single clock source, and appropriate for interconnecting audio streams
between programs.  I like it a lot.
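
To make the contrast concrete, a minimal jack client reduces to
something like this (a rough, untested sketch; client and port names
are arbitrary, error handling is omitted):

/* Minimal jack client sketch: one process() callback, one clock,
   samples in from a source port and out to a destination port. */
#include <jack/jack.h>
#include <string.h>
#include <unistd.h>

static jack_port_t *in_port, *out_port;

static int process(jack_nframes_t nframes, void *arg)
{
    jack_default_audio_sample_t *in  = jack_port_get_buffer(in_port,  nframes);
    jack_default_audio_sample_t *out = jack_port_get_buffer(out_port, nframes);
    memcpy(out, in, nframes * sizeof(jack_default_audio_sample_t));
    return 0;
}

int main(void)
{
    jack_client_t *client = jack_client_open("example", JackNullOption, NULL);
    if (client == NULL)
        return 1;

    jack_set_process_callback(client, process, NULL);
    in_port  = jack_port_register(client, "in",  JACK_DEFAULT_AUDIO_TYPE,
                                  JackPortIsInput, 0);
    out_port = jack_port_register(client, "out", JACK_DEFAULT_AUDIO_TYPE,
                                  JackPortIsOutput, 0);

    if (jack_activate(client))
        return 1;

    sleep(60);              /* a real program runs until told to quit */
    jack_client_close(client);
    return 0;
}

Everything happens in that one process() callback, on one clock, which
is exactly what makes it so clean.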

What I have lost is the ability for my software to handle resampling
across multiple clock domains.  I was thinking that zita-a2j, etc.,
was my path to getting that functionality back.  If I didn't have it
working so well on OSX, I wouldn't lament the loss with jack.  But
it's hard to give up on it!

Thanks for the Ubuntu Studio Control links.
Ethan...
On Sun, 2019-11-17 at 00:21 +0100, Ralf Mardorf wrote:
> On Sat, 16 Nov 2019 15:49:34 -0700, Ethan Funk wrote:
> > > Why do you need zita-a2j/j2a anyway?  Using a single
> > > multichannel card is usually the better solution.
> > 
> > I have one multichannel audio interface for everything important:
> > program out, studio monitors, headphones, guest mic, host mic, etc.
> > But it sure is nice to be able to use the built-in audio for a cue
> > channel and talkback mic, where latency is not important.  Also
> > handy for USB turntables, and other random devices that are
> > occasionally used in a radio show without latency being important.
> 
> You are mistaken.  Try to avoid anything that affects "everything
> important".  To sync different devices by software affects
> "everything important".
> On Sat, 16 Nov 2019 14:47:26 -0700, Ethan Funk wrote:
> > Does anyone know where I can find the source code for UbuntuStudio
> > Control?
> 
> https://packages.ubuntu.com/source/eoan/ubuntustudio-controls
> https://launchpad.net/ubuntustudio-controls
> 