On Thu, 2005-07-07 at 10:45 -0700, Jan Holst Jensen wrote:
> > > There's absolutely no need for the sound server, it's just bloat.
> > What about in a networked/thin client environment? The application
> > will be on another machine from the audio interface.
> Virtual sound device with ALSA on top of that is one model. The
> audio abstraction is then at the level of streaming a single PCM
> stream to the thin client. No different than handling an ordinary
> sound device.
But you still need an audio server on the thin client, to accept the
audio data over the network and pass it to the audio interface.
(Just like the X server handles the video data over the network.)
Maybe ALSA could be extended to support some of this, though ALSA is
closely tied to Linux at present so I'm not sure that is a good idea.
> > Or how about supporting OSes that don't have any support for mixing
> > themselves, or only allow one process to use the audio interface?
> > Do we need an audio server to handle these cases?
> As Lee wrote a bit further up:
> > So in your GNOME app you call gnome_play_sound(&filehandle) and the
> > Gnome middle layer snd_pcm_write()s it on an ALSA system, write()s
> > to /dev/dsp on an OSS system, and delivers it to the DirectSound or
> > whatever on Windows, depending on the system it was compiled for.
I don't think that answers the question. If an OS/platform only allows
one process to use the audio interface at a time, and two applications
need to output audio at once, then you must have something like an
audio server to mix their streams.
Damon