[LAD] [sound/usb/line6] Question on PODHD Edit features like implementation

Takashi Sakamoto o-takashi at sakamocchi.jp
Wed Jul 15 18:45:09 CEST 2020


Hi,

On Wed, Jul 15, 2020 at 02:27:01PM +0200, Aurryon SCHWARTZ wrote:
> First of all, thank you for the feedback. To be honest, your suggestion
> to use the sequencer layer that is below the MIDI layer is very
> appealing to me. I would be able to use the sound subsystem as a
> transport layer without worrying about being compliant with the MIDI
> protocol: the Line6 Edit protocol is proprietary.
> 
> 
> Please find below my new questions. Be prepared, I am a newbie with the
> ALSA subsystem ^^.
> 
> - My understanding is that in [2], you are directly using the Linux
> firewire subsystem with your service in userspace to parse or write
> events from/to the device before sending them to the ALSA kernel
> subsystem. If I try to implement the same kind of process with libusb
> in userspace, I will be confronted with the fact that the device is
> already claimed (locked) by the kernel module snd-usb-podhd. If I read
> [3], it seems that such a mechanism does not exist for firewire, and
> therefore it is not possible to do the same with USB. Am I correct?

You got things right. In the Linux USB subsystem, any USB interface can
be reserved for exclusive access by a software procedure called 'claim'.
It is a feature of the Linux USB subsystem and is not part of the USB
standard itself, as far as I know.
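
For illustration, a minimal sketch of the 'claim' procedure with
libusb-1.0 (the vendor/product IDs and the interface number below are
placeholders, not the real POD HD ones):

#include <stdio.h>
#include <libusb-1.0/libusb.h>

int main(void)
{
    libusb_device_handle *handle;
    int err;

    if (libusb_init(NULL) < 0)
        return 1;

    /* Placeholder IDs; look up the real ones with lsusb(8). */
    handle = libusb_open_device_with_vid_pid(NULL, 0x0e41, 0xffff);
    if (handle == NULL) {
        libusb_exit(NULL);
        return 1;
    }

    /* Fails with LIBUSB_ERROR_BUSY while an in-kernel driver such
     * as snd-usb-podhd has already claimed interface 0. */
    err = libusb_claim_interface(handle, 0);
    if (err == LIBUSB_ERROR_BUSY)
        fprintf(stderr, "interface is claimed by another driver\n");
    else if (err == 0)
        libusb_release_interface(handle, 0);

    libusb_close(handle);
    libusb_exit(NULL);
    return 0;
}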

So the service program is written with the expectation that no driver
has reserved the target USB interface. If an in-kernel driver is
expected to reserve the target USB interface, we need to prepare
another 'path' to transfer/receive data to/from the interface in
userspace. The ALSA hwdep interface is one option for such a path.

(Note that the Linux FireWire subsystem has no feature corresponding to
'claim' in the Linux USB subsystem. The IEEE 1394 specification has a
similar procedure called isochronous resource reservation, but the
reservation is done just for stream data such as a sequence of audio
data frames; therefore any node on the bus can always request simple
read/write operations to the other nodes.)

> - If I check /proc/asound/hwdep, I have "00-00: config". This seems to
> be generic to the ALSA subsystem rather than specific to the audio
> device module. In your architecture in [2], is the ALSA hwdep
> dedicated to your sound card or is it global to the ALSA subsystem?
 
In the design of the ALSA hwdep core, the ALSA hwdep interface is a thin
wrapper around `struct file_operations` for a Linux character device:
https://git.kernel.org/pub/scm/linux/kernel/git/tiwai/sound.git/tree/sound/core/hwdep.c#n319
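
For instance, an in-kernel driver adds such a device with
snd_hwdep_new() and hooks the file operations it wants to support. A
minimal sketch (the function names and the use of
SNDRV_HWDEP_IFACE_LINE6 are just for illustration):

#include <sound/core.h>
#include <sound/hwdep.h>

static int example_hwdep_open(struct snd_hwdep *hw, struct file *file)
{
    return 0;
}

static int example_hwdep_release(struct snd_hwdep *hw, struct file *file)
{
    return 0;
}

static int example_add_hwdep(struct snd_card *card)
{
    struct snd_hwdep *hwdep;
    int err;

    err = snd_hwdep_new(card, "example", 0, &hwdep);
    if (err < 0)
        return err;

    hwdep->iface = SNDRV_HWDEP_IFACE_LINE6;
    hwdep->ops.open = example_hwdep_open;
    hwdep->ops.release = example_hwdep_release;
    /* .read/.write/.ioctl according to the protocol to expose. */

    return 0;
}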

The actual implementation of an ALSA hwdep device differs from driver to
driver; not all ALSA drivers add a hwdep device to the Linux system.

In the case of the drivers in the ALSA firewire stack, all of the
supported ioctl commands are exported to user space in the shape of a
UAPI header. In the header, you can see some common ioctl(2) commands
as well as model-specific commands and notifications:
https://git.kernel.org/pub/scm/linux/kernel/git/tiwai/sound.git/tree/include/uapi/sound/firewire.h
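
For example, the common SNDRV_FIREWIRE_IOCTL_GET_INFO command can be
issued against the hwdep character device directly (hwC0D0 below
assumes card 0, hwdep device 0):

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sound/firewire.h>

int main(void)
{
    struct snd_firewire_get_info info;
    int fd;

    fd = open("/dev/snd/hwC0D0", O_RDONLY);
    if (fd < 0)
        return 1;

    if (ioctl(fd, SNDRV_FIREWIRE_IOCTL_GET_INFO, &info) < 0) {
        close(fd);
        return 1;
    }

    /* device_name is not necessarily NUL-terminated. */
    printf("type: %u, card: %u, name: %.16s\n",
           info.type, info.card, info.device_name);

    close(fd);
    return 0;
}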

The drivers implement the above. For example, in the ALSA bebob driver,
`sound/firewire/bebob/bebob_hwdep.c` implements it:
https://git.kernel.org/pub/scm/linux/kernel/git/tiwai/sound.git/tree/sound/firewire/bebob/bebob_hwdep.c

You can see a slight difference in
`sound/firewire/fireworks/fireworks_hwdep.c`; it additionally implements
a vendor-specific transaction:
https://git.kernel.org/pub/scm/linux/kernel/git/tiwai/sound.git/tree/sound/firewire/fireworks/fireworks_hwdep.c

The actual implementation of the vendor-specific command/response is in
the service program.
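
If the hwdep device of a driver implements read/write for such
messages, the userspace side boils down to plain I/O. A rough sketch
with alsa-lib (the payload bytes are entirely hypothetical):

#include <stdio.h>
#include <alsa/asoundlib.h>

int main(void)
{
    snd_hwdep_t *hwdep;
    unsigned char cmd[] = { 0x00, 0x01, 0x02 };  /* hypothetical */
    unsigned char resp[64];
    ssize_t len;

    /* "hw:0,0" assumes card 0, hwdep device 0. */
    if (snd_hwdep_open(&hwdep, "hw:0,0", SND_HWDEP_OPEN_DUPLEX) < 0)
        return 1;

    if (snd_hwdep_write(hwdep, cmd, sizeof(cmd)) < 0)
        goto end;

    len = snd_hwdep_read(hwdep, resp, sizeof(resp));
    if (len > 0)
        printf("%zd bytes of response\n", len);
end:
    snd_hwdep_close(hwdep);
    return 0;
}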

> - Why using specifically a model "device<->firewire subsystem
> (kernel)<->service(userspace)<->alsa
> subsystem(kernel)<->application(userspace)" instead of something like
> "device<->firewire subsystem (kernel)<->service(userspace)<-pipe/unix
> socks->application(userspace)"? What are the pros and cons?

It depends on the case, but the protocol is always important when using
inter-process communication (IPC).

For the latter model, we need to design the protocol between the
service and the application ourselves.
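
For example, even a minimal bespoke protocol forces us to define
framing, versioning and message types by ourselves, and to keep them
compatible afterwards. A hypothetical sketch:

#include <stdint.h>

/* Every detail of this header is our own invention and must be
 * documented and kept stable by ourselves. */
struct edit_msg_header {
    uint32_t magic;    /* protocol identifier */
    uint16_t version;  /* protocol revision */
    uint16_t type;     /* request/response/notification */
    uint32_t length;   /* payload bytes following this header */
};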

For the former model, the protocol is already given by the ALSA control
interface or the ALSA sequencer interface. We already have many ALSA
applications in user space, e.g. amixer(1), alsactl(1) and alsamixer(1)
as ALSA control applications, and aseqdump(1) and aplaymidi(1) as ALSA
sequencer applications. I'd like to use them as a 'frontend' for the
functionality. Usually an in-kernel driver works as the 'backend' for
the functionality, but the ALSA control core and the ALSA sequencer
core allow a userspace application to work transparently as the
'backend' as well. This is convenient in the case of the ALSA firewire
stack.
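
As an illustration of the last point, a userspace program can register
itself as a sequencer client with alsa-lib; aseqdump(1), aplaymidi(1)
and friends can then connect to its port just like to a port of an
in-kernel driver. A minimal sketch:

#include <stdio.h>
#include <alsa/asoundlib.h>

int main(void)
{
    snd_seq_t *seq;
    snd_seq_event_t *ev;

    if (snd_seq_open(&seq, "default", SND_SEQ_OPEN_DUPLEX, 0) < 0)
        return 1;

    snd_seq_set_client_name(seq, "example-backend");
    if (snd_seq_create_simple_port(seq, "control",
            SND_SEQ_PORT_CAP_WRITE | SND_SEQ_PORT_CAP_SUBS_WRITE,
            SND_SEQ_PORT_TYPE_MIDI_GENERIC |
            SND_SEQ_PORT_TYPE_APPLICATION) < 0)
        return 1;

    /* Block for events delivered by connected applications; a real
     * 'backend' would translate them to device I/O here. */
    while (snd_seq_event_input(seq, &ev) >= 0) {
        if (ev->type == SND_SEQ_EVENT_CONTROLLER)
            printf("controller %u = %d\n",
                   ev->data.control.param, ev->data.control.value);
    }

    snd_seq_close(seq);
    return 0;
}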


Regards

Takashi Sakamoto

