[LAD] ALSA documentation

Hannu Savolainen hannu at opensound.com
Sun Nov 16 00:22:16 UTC 2008

Fons Adriaensen wrote:
> On Sun, Nov 16, 2008 at 12:24:38AM +0200, Hannu Savolainen wrote:
>> By making this kind of change pretty much impossible we can prevent 
>> programmers from making stupid mistakes. Before adding this kind of 
>> stupid workaround they will probably ask somebody for help. In this 
>> way there is a chance that somebody with better knowledge can stop them.
> It seems like you missed the point: there can be good
> reasons for making this kind of change from a program
> and not only interactively by using some control panel. 
> It all depends on what the program is, and who is running
> it. Any normal app should probably not touch these settings,
> but a session manager that sets up a multi-PC studio should
> be able to do it. Even more so if most of the PCs involved
> are in a machine room that is normally not accessible by
> the studio's users.
> In my case, the sample clock source should always be 'external'
> (pretty obvious in a multi-machine setup), but the driver will
> change it permanently to 'internal' when the external signal is
> absent, which can happen temporarily. So I must not only set
> this when initialising a session, but also watch it permanently and
> reset it when necessary. Since the users of this setup can't
> be bothered to understand such things, it has to be automatic
> and not by using some control panel.
Right. The driver or the hardware itself can do this transition 
automatically when it's required. The control panel will reflect this 
change in one way or another.
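
For reference, the watch-and-reset logic Fons describes is only a few
dozen lines against the ALSA control interface. A minimal sketch with
alsa-lib follows; the card "hw:0", the control name "Sample Clock
Source" and the assumption that enumerated item 0 means "External" are
made-up placeholders, since the real names vary per driver:

/* Watch a clock-source control and force it back to "External"
 * whenever the driver flips it. Card and control names are
 * hypothetical; adjust for the actual hardware. */
#include <alsa/asoundlib.h>
#include <string.h>
#include <stdio.h>

static void force_external(snd_ctl_t *ctl)
{
    snd_ctl_elem_value_t *val;
    snd_ctl_elem_value_alloca(&val);
    snd_ctl_elem_value_set_interface(val, SND_CTL_ELEM_IFACE_MIXER);
    snd_ctl_elem_value_set_name(val, "Sample Clock Source");
    snd_ctl_elem_value_set_enumerated(val, 0, 0); /* item 0 = "External" */
    if (snd_ctl_elem_write(ctl, val) < 0)
        fprintf(stderr, "could not reset clock source\n");
}

int main(void)
{
    snd_ctl_t *ctl;
    snd_ctl_event_t *ev;

    if (snd_ctl_open(&ctl, "hw:0", 0) < 0)
        return 1;
    snd_ctl_subscribe_events(ctl, 1);   /* ask for change notifications */
    snd_ctl_event_alloca(&ev);
    force_external(ctl);                /* set it once at session start */

    for (;;) {
        if (snd_ctl_read(ctl, ev) < 0)  /* blocks until a control changes */
            break;
        if (snd_ctl_event_get_type(ev) == SND_CTL_EVENT_ELEM &&
            !strcmp(snd_ctl_event_elem_get_name(ev), "Sample Clock Source"))
            force_external(ctl);        /* driver flipped it; put it back */
    }
    snd_ctl_close(ctl);
    return 0;
}

(Rewriting a value that hasn't actually changed normally produces no
further notification, so the loop does not feed on its own writes.)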

However, there is absolutely no sense in implementing this kind of 
functionality in more than one application. The control panel program 
shipped with the sound subsystem can make the change out of the box. If 
an MP3 player (or whatever) implements support for this kind of control, 
does that make the world any better?

ALSA's design philosophy has been to expose all aspects of the sound 
hardware to the applications so that they can play with any of them. 
This is all bullshit. All audio applications have some kind of audio 
stream to play/record (with a given sample rate/format). This is all 
they should care about. It's the responsibility of the audio subsystem 
to ensure that the stream is properly converted to match the 
capabilities of the actual device. Applications can (optionally) check 
whether the device supports 192 kHz/32 bits/64 channels before giving 
the user a choice to use such a format. However, in the OSS model they 
are not required (or supposed) to do this. OSS will automatically do 
the conversions, or return an error if the requested rate/format/#channels 
combination doesn't make any sense with the current device.
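
In code, the OSS model amounts to nothing more than this. A minimal
sketch (error checking mostly trimmed); the ioctls are value-result,
so the driver either converts behind the scenes or hands back what it
actually granted:

/* Open the default OSS device and ask for 16-bit stereo at 48 kHz;
 * the subsystem converts as needed or adjusts the returned values. */
#include <sys/soundcard.h>
#include <sys/ioctl.h>
#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>

int main(void)
{
    int fd = open("/dev/dsp", O_WRONLY);
    if (fd < 0) {
        perror("/dev/dsp");
        return 1;
    }

    int fmt = AFMT_S16_LE;              /* ask for 16-bit little-endian */
    ioctl(fd, SNDCTL_DSP_SETFMT, &fmt); /* fmt now holds the granted format */

    int channels = 2;
    ioctl(fd, SNDCTL_DSP_CHANNELS, &channels);

    int rate = 48000;
    ioctl(fd, SNDCTL_DSP_SPEED, &rate); /* rate now holds the actual rate */

    printf("granted: fmt=%#x, %d ch, %d Hz\n", fmt, channels, rate);

    short silence[9600] = {0};          /* ~0.1 s of stereo silence */
    write(fd, silence, sizeof silence); /* any conversion is OSS's problem */

    close(fd);
    return 0;
}

That is the whole hardware-facing surface an ordinary player needs.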

Look at any kernel-level subsystem, such as networking or access to 
ordinary disk files: no application using them wants to know anything 
about the actual hardware details. I have not seen any PHP or CGI 
scripts/programs that try to switch the NIC to 1 gigabit signalling 
instead of the 10 Mbit rate supported by the network hub. Why should 
any audio application be different?

Best regards,

Hannu