>> no, it would provide names like
>>
>>    MOTU 828 mkII channel 1+2
>>    RME HDSP (#1)
>>    Builtin Audio
>>
>> to the user.
>>
>> it would also fix a myriad of other problems in ALSA, such as its
>> reliance on interrupts that occur at regular sample-based intervals,
> Can you suggest alternatives?
i don't need to - they are fully documented by Apple in its description
of the HAL for audio devices. Rather than rely on the interrupts as
absolute indicators of time, you use them to feed a DLL (delay-locked
loop). Then you use the DLL in conjunction with a monotonic clock
source (e.g. a cycle timer or a reliable equivalent on certain AMD
systems), and you can estimate the position in the h/w buffers to way
better than single-sample accuracy at all times. more importantly, you
can do this no matter what the basis for the interrupt frequency is, so
you get a single HAL model that works equally well for PCI, USB and
ieee1394 devices.
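
to make that concrete, here's a minimal sketch in C of the loop i mean
(it follows the second-order DLL described in Fons Adriaensen's "Using
a DLL to filter time" paper; the struct and function names are mine,
not from any real API):

#include <math.h>

typedef struct {
    double b, c;    /* loop filter coefficients */
    double t0, t1;  /* filtered times of the previous/next interrupt */
    double per;     /* filtered estimate of the interrupt period (s) */
    long   frames;  /* frames consumed by the h/w as of t0 */
    int    nper;    /* frames per interrupt period */
} dll_t;

void dll_init(dll_t *d, double now, int frames_per_period,
              double rate, double bandwidth_hz)
{
    double per = frames_per_period / rate;
    double omega = 2.0 * M_PI * bandwidth_hz * per;
    d->b = sqrt(2.0) * omega;   /* standard 2nd-order loop gains */
    d->c = omega * omega;
    d->per = per;
    d->t0 = now;
    d->t1 = now + per;
    d->frames = 0;
    d->nper = frames_per_period;
}

/* feed in the raw (jittery) timestamp of each h/w interrupt */
void dll_update(dll_t *d, double now)
{
    double e = now - d->t1;      /* error vs. predicted arrival time */
    d->t0 = d->t1;
    d->t1 += d->b * e + d->per;  /* correct phase */
    d->per += d->c * e;          /* correct period */
    d->frames += d->nper;
}

/* ask, at any monotonic-clock time, where the h/w is in its buffer;
   the answer has sub-sample resolution and doesn't care whether the
   interrupts come from PCI, USB or ieee1394 */
double dll_frame_at(const dll_t *d, double now)
{
    return d->frames + d->nper * (now - d->t0) / (d->t1 - d->t0);
}

the point being that the interrupt timestamps are only inputs to the
filter; the actual timeline comes from the monotonic clock.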
>> its presentation of a multiplicity of programming models, and its
> There is no one-size-fits-all with sound programming models.
Apple don't agree with you, and neither do Steinberg or Microsoft (the
modern, reformed post-MME Microsoft, anyway). They each offer a single
programming model at the HAL level, and remarkably, it's the same
programming model in every case. I don't see forums for those platforms
complaining that it's a problem. Only unix programmers who go about
insisting that "everything should be a file, all i/o should be
open/read/write/close/ioctl" seem to have a problem with it, yet
curiously they have no problem with the fact that you don't do video
that way at all.
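
and that single model is just a pull-style callback: the HAL calls you
once per period and you fill the buffer before the deadline. a
hypothetical sketch of what it looks like to an application (these
names are invented for illustration; they are not CoreAudio, ASIO or
WASAPI):

#include <stddef.h>

/* the application's entire contract with the device: when called,
   read nframes from 'in' (if capturing) and write nframes to 'out'
   before the deadline */
typedef int (*process_cb)(const float *in, float *out,
                          int nframes, void *arg);

int hal_open(const char *device, int rate, int nframes,
             process_cb cb, void *arg);  /* register the callback */
int hal_start(int handle);   /* HAL starts invoking cb every period */

/* usage: hal_start(hal_open("Builtin Audio", 48000, 64, process, 0)); */
static int process(const float *in, float *out, int nframes, void *arg)
{
    (void)in; (void)arg;
    for (int i = 0; i < nframes; i++)
        out[i] = 0.0f;       /* render silence */
    return 0;
}

the device pulls data from the application instead of the application
pushing data at the device; everything else is configuration, not a
different i/o model.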
>> lack of a reasonable way to present itself to ordinary users.
>> --p
> What is wrong with the current presentation?
> You currently get the name of the card.
i meant more generally. ALSA is so full of configuration options that
will be used by almost no-one that it's incredibly confusing for almost
everyone. i wrote years ago on alsa-devel about the paper from the guy
at SGI who was involved in their video API design and who ended up a
little disappointed by it. his reason? they went to so much effort to
handle all the corner cases that the core use case ("dump pixels into
this part of the framebuffer") was remarkably complex to do. ALSA
strikes me the same way, at every level: from the kernel API, to
libasound, to the user space utilities.
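
to illustrate: this is roughly what the core use case ("play these
samples") looks like through raw libasound, using the standard
hw_params sequence from the ALSA docs, with all error handling elided
and S16/stereo/48k assumed:

#include <alsa/asoundlib.h>

int play(const short *buf, snd_pcm_uframes_t nframes)
{
    snd_pcm_t *pcm;
    snd_pcm_hw_params_t *hw;
    unsigned int rate = 48000;

    snd_pcm_open(&pcm, "default", SND_PCM_STREAM_PLAYBACK, 0);
    snd_pcm_hw_params_alloca(&hw);
    snd_pcm_hw_params_any(pcm, hw);   /* start from every possible config */
    snd_pcm_hw_params_set_access(pcm, hw, SND_PCM_ACCESS_RW_INTERLEAVED);
    snd_pcm_hw_params_set_format(pcm, hw, SND_PCM_FORMAT_S16_LE);
    snd_pcm_hw_params_set_channels(pcm, hw, 2);
    snd_pcm_hw_params_set_rate_near(pcm, hw, &rate, 0);
    snd_pcm_hw_params(pcm, hw);       /* commit the configuration */

    snd_pcm_writei(pcm, buf, nframes);
    snd_pcm_drain(pcm);
    snd_pcm_close(pcm);
    return 0;
}

and this still ignores sw_params, period/buffer sizing, xrun recovery,
and the question of which of the dozens of PCM device names to open.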
one could argue, as Lee has done, that people (programmers, users)
should use higher-level APIs and leave the complexity behind, but
somebody or something still has to deal with it at some point in order
to get sound in or out of the machine.
--p