[linux-audio-dev] Audio synchronization, MIDI API

John Check j4strngs at bitless.net
Thu Aug 19 01:27:14 UTC 2004


On Wednesday 18 August 2004 02:52 pm, Ralf Beck wrote:
> On Wednesday, 18 August 2004 at 19:36, Nelson Posse Lago wrote:
> > Quoting Paul Davis <paul at linuxaudiosystems.com>:
> > > wrong model. a given jackd has a single driver.  a new jack client,
> > > sure.
> >
> > I believe the way to do this is to have one remote jackd with a driver
> > that sends/receives data through UDP and one local jack client that
> > interacts with this remote server.
> >
> > There is something like this already, I believe (haven't checked):
> >
> > http://www.alphalink.com.au/~rd/m/jack.udp.html
> >
> > As a side note, the system I developed intends to do this over ladspa;
> > more on this on another message.
> >
> > > oh, and a small correction. VST System Link has basically nothing to
> > > do with networked audio. [...] it does *not* distribute audio
> > > across the network at all.
>
> Hm, so there are two ways to do remote processing:
>
> 1. jack over ethernet
>
> Would use a server application (jack client) on the host that provides
> jack ports and sends the data over ethernet.
> Pro: You can bundle several audio streams and send them in a single
> packet.
> Con: If you want to control remote plugins from your host application,
> you need an additional application doing the same for the ladspa
> parameters.
>
> 2. ladspa over ethernet
> Here a pseudo ladspa plugin would send the parameters and audio data over
> ethernet (something like ladspavst does for VST plugins on the local
> host).
>
> Pro: On the remote machines all you need is a plugin manager, which could
> even be a pseudo host for AudioUnit plugins on a Mac, or a VST host for
> VST plugins on a WinXP machine, that does the ladspa->vst/au wrapping, or
> a true ladspa manager on a Linux machine.
>
> Con: Sending data for each single plugin produces more overhead and
> thus takes up more CPU power on the host.
>

Well there is Moore's law *ducks*

> So which approach is best to follow?

Who's to say we couldn't have both? The former sounds like virtual MADI
and would be better for streaming capture to a disk array. (I didn't think
about that at all, so maybe not.)


