> There's absolutely no need for the sound server;
> it's just bloat.
What about a networked/thin-client environment,
where the application runs on a different machine
than the audio interface?
A virtual sound device with ALSA on top of it is one
model: the audio abstraction then sits at the level
of streaming a single PCM stream to the thin client,
no different from handling an ordinary sound device.
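The forwarding step in that model reduces to copying PCM bytes from one file descriptor (the virtual device) to another (the network connection to the thin client). A minimal sketch, assuming nothing beyond POSIX read()/write(); the function name stream_pcm and the fd-to-fd framing are illustrative, not part of ALSA:

```c
#include <unistd.h>

/* Copy PCM bytes from src_fd (e.g. a virtual sound device) to
 * dst_fd (e.g. a socket to the thin client) until end of stream.
 * Returns total bytes forwarded, or -1 on error. */
long stream_pcm(int src_fd, int dst_fd)
{
    char buf[4096];
    long total = 0;
    ssize_t n;

    while ((n = read(src_fd, buf, sizeof buf)) > 0) {
        ssize_t off = 0;
        /* write() may accept fewer bytes than asked; loop until
         * this chunk is fully delivered. */
        while (off < n) {
            ssize_t w = write(dst_fd, buf + off, (size_t)(n - off));
            if (w < 0)
                return -1;
            off += w;
        }
        total += n;
    }
    return n < 0 ? -1 : total;
}
```

In this model the thin client needs nothing more than an ordinary PCM playback path on its end; all mixing policy stays on the application host.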
Or how about supporting OSes that have no mixing
support of their own, or that allow only one process
at a time to use the audio interface?
Do we need an audio server to handle these cases?
As Lee wrote a bit further up:
> So in your GNOME app you call
> gnome_play_sound(&filehandle) and the
> Gnome middle layer snd_pcm_write()s it on an ALSA
> system, write()s
> to /dev/dsp on an OSS system, and delivers it to
> the DirectSound or
> whatever on Windows, depending on the system it
> was compiled for.
Cheers
-- Jan