[LAU] Example setup for zita-njbridge in multicast mode?

Will Godfrey willgodfrey at musically.me.uk
Tue Jan 9 18:56:40 UTC 2018


On Tue, 9 Jan 2018 10:40:47 -0800 (PST)
Len Ovens <len at ovenwerks.net> wrote:

>On Tue, 9 Jan 2018, Will Godfrey wrote:
>
>> On Tue, 9 Jan 2018 11:26:19 -0600
>> "Chris Caudle" <chris at chriscaudle.org> wrote:
>>  
>>> Going through the network to jack adapter layer adds additional latency,
>>> so I'm not sure exactly what the purpose of running separate jack servers
>>> at low latency would be compared to just running a single server with
>>> higher latency settings.
>>>  
>> I seem to remember hearing somewhere that the jack server can't make use of
>> multiple cores, but surely multiple *servers* could each be on their own core.  
>
>Introducing a network layer adds more latency than creating a jack client 
>with a buffer that talks to two jackd servers.
>
>I believe jackd2 can use more than one core for non-dependent chains of 
>clients. That is, a jack-aware application could split processing into two 
>chains by registering itself as two clients and buffering audio between 
>them. However, the purpose of jackd is to provide known latency, and adding 
>workarounds to make use of more cores destroys that.
>
>
>--
>Len Ovens
>www.ovenwerks.net

Interesting.

Maybe I misread Jonathan's original post, but I was under the impression that
he was using three instances of Yoshimi, each with its own MIDI stream, rather
than a general audioIn->audioOut. If these were ALSA MIDI then the buffering
would happen at that level rather than in the audio.

Does that make a difference? Would a single jack2 server be able to put the
audio on different cores?
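For what it's worth, here's a rough sketch (names and structure entirely my
own, not how Yoshimi actually does it) of the trick Len describes: one process
opening two independent JACK clients, so that jackd2 is free to run their
process callbacks in parallel when no ports connect one to the other:

/* Hypothetical sketch: one process registering itself as two independent
 * JACK clients. Build with: gcc two_clients.c -o two_clients -ljack      */
#include <jack/jack.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static int process_silence(jack_nframes_t nframes, void *arg)
{
    /* A real application would render one engine per client here;
     * this just writes silence to the client's output port.        */
    jack_port_t *out = (jack_port_t *) arg;
    float *buf = (float *) jack_port_get_buffer(out, nframes);
    for (jack_nframes_t i = 0; i < nframes; i++)
        buf[i] = 0.0f;
    return 0;
}

static jack_client_t *make_client(const char *name)
{
    jack_client_t *c = jack_client_open(name, JackNullOption, NULL);
    if (!c) {
        fprintf(stderr, "could not open client %s\n", name);
        exit(1);
    }
    jack_port_t *out = jack_port_register(c, "out",
                                          JACK_DEFAULT_AUDIO_TYPE,
                                          JackPortIsOutput, 0);
    jack_set_process_callback(c, process_silence, out);
    jack_activate(c);
    return c;
}

int main(void)
{
    /* Two clients in one process; as long as nothing routes one into
     * the other, jackd2's graph executor may run them in parallel.   */
    jack_client_t *a = make_client("engine_a");
    jack_client_t *b = make_client("engine_b");
    sleep(60);
    jack_client_close(a);
    jack_client_close(b);
    return 0;
}

Whether that actually buys anything presumably depends on the graph: the
moment you route one client into the other they become a single dependency
chain again.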


-- 
Will J Godfrey
http://www.musically.me.uk
Say you have a poem and I have a tune.
Exchange them and we can both have a poem, a tune, and a song.

