I remember how happy I once was when I found libao to avoid the parallel ALSA/OSS existence problem while coding a very small application that basically just wanted to deliver output. I wonder how hard it would be to write one library that does the multiplexing and autodetection/config parsing once and for all, and maybe also wraps the callback-vs-push model situation in both directions, so that an application could choose whichever model it wanted. New options on the planet could then simply be added to this layer in the future. Some global library config would allow the user to define default output devices and maybe even do more sophisticated things like passing backend-specific parameters for a specific program...
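To make the idea a bit more concrete, here is a rough C sketch of what such a layer's surface could look like. Everything in it is hypothetical (out_backend, out_write, out_set_callback and so on are made-up names, and the "null" backend only stands in for real ALSA/OSS/JACK code), so treat it as an illustration of the push-vs-callback split rather than a real API:

/* Hypothetical sketch only: none of these names exist in any real library,
 * and a real implementation would need error handling and threading. */
#include <stddef.h>
#include <stdio.h>

/* One vtable per backend (ALSA, OSS, JACK, ...); the layer picks one at runtime. */
typedef struct {
    const char *name;
    int  (*open)(unsigned rate, unsigned channels);
    int  (*write)(const short *frames, size_t nframes);  /* push model */
    void (*close)(void);
} out_backend;

/* Dummy backend standing in for a real driver so the sketch stays self-contained. */
static int  null_open(unsigned rate, unsigned ch) { (void)rate; (void)ch; return 0; }
static int  null_write(const short *f, size_t n)  { (void)f; return (int)n; }
static void null_close(void) {}

static out_backend backends[] = {
    { "null", null_open, null_write, null_close },
    /* { "alsa", ... }, { "oss", ... }, { "jack", ... } would go here. */
};

static out_backend *current;

/* The once-and-for-all part: read a global config file, honour per-program
 * overrides, then fall back to probing each backend in order. */
static out_backend *out_autodetect(void)
{
    return &backends[0];
}

/* Push model: the application hands buffers to the layer whenever it likes. */
static int out_write(const short *frames, size_t nframes)
{
    return current->write(frames, nframes);
}

/* Callback (pull) model: the layer would run a worker thread or timer and ask
 * the application for data; bridging that onto a push-only backend (and the
 * reverse) with a ring buffer is exactly the ugliness the layer would hide. */
typedef size_t (*out_cb)(short *frames, size_t nframes, void *userdata);
static out_cb user_cb;
static void  *user_data;
static void out_set_callback(out_cb cb, void *userdata)
{
    user_cb   = cb;
    user_data = userdata;
}

/* Do-nothing callback, just to show the signature an application would use. */
static size_t fill_silence(short *frames, size_t nframes, void *userdata)
{
    size_t i;
    (void)userdata;
    for (i = 0; i < nframes * 2; i++)   /* 2 = stereo */
        frames[i] = 0;
    return nframes;
}

int main(void)
{
    short silence[1024] = { 0 };

    current = out_autodetect();
    current->open(44100, 2);
    out_write(silence, 512);              /* push: 512 stereo frames */
    out_set_callback(fill_silence, NULL); /* pull: the layer would call this */
    current->close();
    printf("used backend: %s\n", current->name);
    return 0;
}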
I also had this idea some time ago. Some disadvantages:
* Someone is needed who actually does this job
* It adds yet another layer to the already existing mess
* Many, many applications would need to be rewritten
* It is a dirty hack, regardless of how well it is written
BTW: to me, ALSA direct access (and therefore blocking the device) seems a bit ugly for a multitasking, multi-user operating system like Linux. Is ALSA direct access really an option (even though dmix can help with this)?
I use it all the time, since I own an SB Live!. On one other box I occasionally work on, I have one of those evil non-hardware-mixing cards, and it irritates the hell out of me that I cannot start JACK while the stream from my local radio station is running.
Yeah, but users want hardware as cheap as possible, even musicians. AC '97 doesn't offer hardware mixing, my Terratec USB card doesn't, and I'm not sure whether the AC '97 successor HDA does.
A desktop-independent soundserver available on each Linux machine could help a lot. JACK could be a possible solution.
Why can't ALSA solve this once and for all somehow? It seems a bit of overkill to have a full-fledged soundserver running when something like dmix could also do the job.
Why not? JACK even enables users to easily record a web radio stream while listening to it, so even normal users can benefit from it. CPU usage? Forget about it. Contemporary machines are bored while the user listens to Ogg files while reading a web page (or even lengthy discussions on LAD ;-)).
(BTW, I have to try dmix now, really...)
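For anyone else who wants to try it: as far as I understand it, dmix is switched on with an ~/.asoundrc entry roughly like the one below. The device name, ipc_key and buffer numbers are only example values you would have to adapt to your card; I haven't verified them on every setup:

# ~/.asoundrc -- untested sketch: route the default PCM through dmix so
# several programs can play at once on a card without hardware mixing.
pcm.!default {
    type plug
    slave.pcm "dmixed"
}

pcm.dmixed {
    type dmix
    ipc_key 1024            # any integer unique among dmix users
    slave {
        pcm "hw:0,0"        # adjust to your card/device
        period_size 1024
        buffer_size 8192
    }
}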
Me too. I'd like to try kasound, which was released on SourceForge just a few days ago (thanks, dp, for the hint).
What I mean is: if you have your soundserver, you are still left with rewriting every single app you ever want to use so that it talks to the soundserver; otherwise the problem remains. There are a hell of a lot of tools around that use either plain OSS or direct ALSA.
I really dislike the OSS-based applications.
Thanks for your thoughts & best regards