I had an idea the other day, and I wanted to bounce it around for
feedback before I invest any real time into it. While I have done
some minor kernel hacking over the years, I am not experienced in the
audio subject area.
Background:
Despite the various and sundry advances in Linux audio, I find there
are still legacy apps that are built against the OSS API. This is
problematic because the legacy OSS model has blocking, exclusive-access
semantics: to get multiple simultaneous audio streams, one must run an
audio server such as esd, aRts, etc. Wouldn't it be nice if all the
legacy apps "just worked", without blocking each other?
Idea:
Suppose one were to write a kernel module that implemented the OSS
API, but with non-blocking semantics, and that, instead of driving a
sound card, encapsulated the OSS API calls somehow and passed them
back to a user-space audio server.
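
To make the shape of this concrete, here is a rough sketch of the
module's skeleton, written against the current misc-device interface
(the original OSS drivers used register_chrdev, but the idea is the
same). All of the passback_* names are invented, and the stubs only
demonstrate the "never block" behavior rather than the real queueing:

#include <linux/module.h>
#include <linux/fs.h>
#include <linux/miscdevice.h>

/* Instead of programming hardware, this would queue (cmd, arg) for
 * the user-space server to pick up via /dev/dsp-passback. */
static long passback_ioctl(struct file *f, unsigned int cmd,
                           unsigned long arg)
{
        return 0;       /* stub: pretend every OSS ioctl succeeded */
}

/* Audio data would be copied into a per-client ring buffer shared
 * with the server; claiming it was all consumed means the legacy
 * app never blocks on the device. */
static ssize_t passback_write(struct file *f, const char __user *buf,
                              size_t count, loff_t *ppos)
{
        return count;   /* stub: swallow the PCM data */
}

static const struct file_operations passback_fops = {
        .owner          = THIS_MODULE,
        .unlocked_ioctl = passback_ioctl,
        .write          = passback_write,
};

static struct miscdevice passback_dev = {
        .minor = MISC_DYNAMIC_MINOR,
        .name  = "dsp",                 /* shows up as /dev/dsp */
        .fops  = &passback_fops,
};

static int __init passback_init(void)
{
        return misc_register(&passback_dev);
}

static void __exit passback_exit(void)
{
        misc_deregister(&passback_dev);
}

module_init(passback_init);
module_exit(passback_exit);
MODULE_LICENSE("GPL");

The key point is that open() and write() always return immediately,
so any number of OSS clients can have the device open at once.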
So, for example: suppose we had a system with ALSA and the proposed
passback driver. xmms is playing via esd-passback-alsa, a modified
version of esd that accepts input from the passback driver and
outputs via ALSA. The user fires up Ogle, which only has OSS output
code (I'm not sure if this is true; it's just an example). /dev/dsp
is the passback driver, so when Ogle opens it, the open succeeds,
even though someone else (esd) is using the sound card. The passback
driver "passes back" all of Ogle's ioctls to the modified esd, which,
in turn, multiplexes the audio with whatever else is playing (xmms,
in this example) and plays it via ALSA.
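
The encapsulation itself could be as dumb as flattening each OSS call
into a fixed-size record that the server consumes. Here is one
hypothetical wire format; every name in it is invented, and the real
thing would need more fields (mmap support, trigger state, etc.):

/* passback.h -- hypothetical record passed from module to server */
#include <stdint.h>

enum passback_op {
        PB_OPEN,        /* a client opened /dev/dsp */
        PB_IOCTL,       /* a client issued an OSS ioctl */
        PB_WRITE,       /* a client wrote PCM; data_len bytes follow */
        PB_CLOSE,       /* a client closed the device */
};

struct passback_msg {
        uint32_t stream_id;     /* distinguishes concurrent openers */
        uint32_t op;            /* one of enum passback_op */
        uint32_t cmd;           /* ioctl request, e.g. SNDCTL_DSP_SPEED */
        uint32_t arg;           /* ioctl argument, copied from the client */
        uint32_t data_len;      /* PCM payload length, for PB_WRITE */
};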
I'm not precisely sure how I would handle the actual "passing back".
Probably via a separate device file, /dev/dsp-passback, which the
audio server reads.
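
Assuming the record format above (saved as passback.h), the server
side of the modified esd might boil down to a loop like this, with the
actual mixing and ALSA output omitted:

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#include "passback.h"   /* the hypothetical wire format above */

int main(void)
{
        int fd = open("/dev/dsp-passback", O_RDONLY);
        if (fd < 0) {
                perror("open /dev/dsp-passback");
                return 1;
        }

        struct passback_msg msg;
        while (read(fd, &msg, sizeof(msg)) == (ssize_t)sizeof(msg)) {
                switch (msg.op) {
                case PB_IOCTL:
                        /* e.g. remember the rate/format requested for
                         * msg.stream_id so it can be resampled later */
                        break;
                case PB_WRITE: {
                        char *pcm = malloc(msg.data_len);
                        if (pcm && read(fd, pcm, msg.data_len) ==
                                   (ssize_t)msg.data_len) {
                                /* mix with the other active streams and
                                 * hand the result to ALSA (omitted) */
                        }
                        free(pcm);
                        break;
                }
                default:
                        break;  /* PB_OPEN/PB_CLOSE: stream setup/teardown */
                }
        }
        close(fd);
        return 0;
}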
Can anyone say why this idea cannot, or should not, be implemented?
Now that I've articulated the idea, would anyone care to take over the
implementation?