[linux-audio-dev] Quick and efficient sound daemon idea -- why not do it this way?

Paul Davis pbd at op.net
Wed Oct 16 13:09:01 UTC 2002


>There should be just a simple sound daemon running 24/7, constantly
>reading from the /dev/dsp inputs and writing into the outputs with a
>small circular buffer that keeps on recycling itself (i.e. 64 bytes to
>allow for low-latency work if needed). Then, when an app that does not

sigh. i thought we/i explained this before. there are two performance
characteristics of a sound daemon that are of interest here.

1) latency
2) sample synchronicity

if you don't care about (1), then the basic design of esd will do
*exactly* what you want. it's a bit broken if you try to head for
particularly low latency, but that just needs hacking to fix
(though quite a lot of hacking). of course, you need the LD_PRELOAD
hack to make esd work, and this reflects the most basic and
fundamental breakage in the OSS API: applications open a device driver
and write to it using system calls, thus offering no way to interpose
between the app and the device without linker hacks like LD_PRELOAD. i
don't like it, you don't like it, but it's what OSS has forced on the world.

in addition, alsa-lib contains the "mix" plugin which, as has been
pointed out very often, is intended to do exactly what you describe as
well, but it has not been tested by anyone except its author, abramo
bagnara, who seems not to have the time right now to work on it.

if you care about particularly low latencies, then you probably also
care about sample synchronicity, even if you don't realize it yet. in
that case, the design you are imagining doesn't work. you're thinking
of what we generally call a "push model", where apps "push" data
toward the playback device. i think most people on this list concur
that ensuring sample synchronicity requires a "pull" model on the
part of the server, hence ... JACK.

>Now, someone please tell me why is this not doable? Sound daemon must
>be, at least in my mind, compatible with all software, and the only way

"compatible with all software" ... including those that choose to use
mmap mode, thereby writing directly into a memory buffer they believe
represents the audio interface's hardware buffer (even if under
alsa-lib, it might not be)?

>I am sure that with the above description I have covered in a nutshell
>both JACK and ARTSD to a certain extent, but the fact remains that both
>solutions require application to be aware of them if any serious work is

not if you use portaudio (well, soon, anyway). i'm a little unhappy
about recommending a portaudio solution because i'd prefer to see more
native JACK apps. but the truth is that the portaudio crew have done a
fabulous job wrapping up both callback-based and push-model APIs in
a single unifying callback-based API. if you use portaudio, your
software works (or will work soon) with ASIO, MME, CoreAudio, OSS,
ALSA and JACK.

>to be done. And as such, there is only a VERY limited pool of
>applications that can be used in combination with either of these.

i don't understand why you keep asking about this when esd exists and
when abramo has already written the mix plugin for alsa-lib. if you
don't like the qualities of esd, why not take that up on the
development list for esd? if you want to try out the mix plugin, why
not write to abramo and gently nag him to do something with it?

--p
