On Wed, Jan 10, 2018 at 3:48 PM, Andrew Voelkel <jandyman.voelkel@gmail.com> wrote:

Glad to hear the discussion. From an outsider’s point of view, there are a few things I don’t get.

First, if each JACK app is a separate process, then theoretically you have to do a bunch of expensive process context switches for each audio buffer.


In the scheme of things, a context switch is typically not very expensive compared to the duration of a typical audio buffer (say 16 samples and up). The only way it gets unusually expensive is if a process touches a lot of memory during its execution. This isn't impossible in an audio/DSP context, but it isn't very likely either.
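To put rough numbers on it: at 48 kHz, even a 16-sample buffer represents 16/48000 ≈ 333 microseconds of audio per process cycle, while a context switch on a modern Linux machine is typically on the order of a few microseconds (ballpark figures, not measurements).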
 

And then there is interprocess communication. Does it use shared memory buffers?


It's all in shared memory; no audio data is copied between processes, so there's no communication overhead.
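For concreteness, here is a minimal sketch of a client's process callback (the client and port names are made up, and this illustrates the idea rather than being code from any real client). The pointer returned by jack_port_get_buffer() refers to memory shared with the JACK server, so the client works on the audio in place instead of copying it between processes:

#include <jack/jack.h>
#include <unistd.h>

static jack_port_t *in_port, *out_port;

/* Runs once per audio buffer. The buffers live in shared memory,
 * so reading and writing them involves no inter-process copying. */
static int process(jack_nframes_t nframes, void *arg)
{
    jack_default_audio_sample_t *in  = jack_port_get_buffer(in_port, nframes);
    jack_default_audio_sample_t *out = jack_port_get_buffer(out_port, nframes);

    for (jack_nframes_t i = 0; i < nframes; i++)
        out[i] = 0.5f * in[i];          /* trivial DSP: halve the gain */

    return 0;
}

int main(void)
{
    jack_client_t *client = jack_client_open("shm_sketch", JackNullOption, NULL);
    if (client == NULL)
        return 1;

    in_port  = jack_port_register(client, "in",  JACK_DEFAULT_AUDIO_TYPE, JackPortIsInput,  0);
    out_port = jack_port_register(client, "out", JACK_DEFAULT_AUDIO_TYPE, JackPortIsOutput, 0);

    jack_set_process_callback(client, process, NULL);
    jack_activate(client);

    for (;;)
        pause();                        /* the audio work happens in JACK's thread */
}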

This is as opposed to a typical plug-in architecture, where everything runs under the host process. It is amazing to me that an interprocess scheme wouldn’t run into major problems under compute load when running with the small buffers needed for low latency.


Be amazed :)

What am I missing here?

The other thing I’m used to seeing in a plug-in API is setup for buffer sizes, sample rates, and audio I/O. The host centralizes this process.


JACK does the same thing. Individual clients have no control over those things (at least, not in the normal way). A client can ask to change the buffer size, but the request may be ignored, or the setting may later be reset by something else. Just like in a plugin architecture.
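As a rough sketch of what that looks like from the client side (the JACK calls are real; the client name and the printed messages are just illustration): a client reads the current values, registers callbacks so it is told when they change, and at most *requests* a different buffer size:

#include <jack/jack.h>
#include <stdio.h>

/* Called by JACK whenever the server's buffer size changes. */
static int on_buffer_size(jack_nframes_t nframes, void *arg)
{
    printf("buffer size is now %u frames\n", nframes);
    /* (re)allocate any size-dependent DSP state here */
    return 0;
}

/* Called by JACK if the sample rate changes. */
static int on_sample_rate(jack_nframes_t nframes, void *arg)
{
    printf("sample rate is now %u Hz\n", nframes);
    return 0;
}

int main(void)
{
    jack_client_t *client = jack_client_open("settings_sketch", JackNullOption, NULL);
    if (client == NULL)
        return 1;

    /* The client reads the current values... */
    printf("current: %u frames @ %u Hz\n",
           jack_get_buffer_size(client), jack_get_sample_rate(client));

    /* ...and asks to be told when they change. */
    jack_set_buffer_size_callback(client, on_buffer_size, NULL);
    jack_set_sample_rate_callback(client, on_sample_rate, NULL);

    /* A client may *request* a new buffer size, but it is only a
     * request: it can fail, and the value can be changed again later
     * by the server or by another client. */
    jack_set_buffer_size(client, 256);

    jack_activate(client);
    /* ... run ... */
    jack_client_close(client);
    return 0;
}

The server owns those settings; clients observe and adapt.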

Then there is advertising your pins and capabilities. I was mystified, when looking at the simple JACK examples, that I didn’t see code for dealing with these issues (e.g. being sample-rate aware).


JACK clients have only one thing of interest to other clients: ports. These are all clearly visible and tagged appropriately (though for most clients, there's almost no tagging to do).
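For example (the client name is made up, and the flags are just one plausible choice), a client declares its ports with a type and a few flags, and it can enumerate every port that other clients expose:

#include <jack/jack.h>
#include <stdio.h>

int main(void)
{
    jack_client_t *client = jack_client_open("port_sketch", JackNullOption, NULL);
    if (client == NULL)
        return 1;

    /* Advertise two output ports: the type and flags are the "tagging". */
    jack_port_register(client, "out_left",  JACK_DEFAULT_AUDIO_TYPE, JackPortIsOutput, 0);
    jack_port_register(client, "out_right", JACK_DEFAULT_AUDIO_TYPE, JackPortIsOutput, 0);

    /* Every other client's ports are visible too; here we list the
     * physical playback ports (the ones that feed the sound card). */
    const char **ports = jack_get_ports(client, NULL, JACK_DEFAULT_AUDIO_TYPE,
                                        JackPortIsPhysical | JackPortIsInput);
    if (ports != NULL) {
        for (int i = 0; ports[i] != NULL; i++)
            printf("found port: %s\n", ports[i]);
        jack_free(ports);
    }

    jack_client_close(client);
    return 0;
}

Sample-rate awareness is handled the same way as buffer size: jack_get_sample_rate() plus an optional callback, as in the previous sketch.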