Hi,
I started working with Linux audio very recently. My main goal is to
understand the Linux audio stack and the audio processing flow. As a simple
exercise,
- I wrote a simple capture/playback application for the PCM interface that
stores the captured audio data in a .wav or .raw file
- The device parameters I have played with are: hw:2,0/plughw:2,0 (for the
USB headset), 1 or 2 channels, a 44100 Hz sample rate,
SND_PCM_ACCESS_RW_INTERLEAVED access, a period size of 32 frames, and the
S16_LE format (a stripped-down sketch of this setup follows the list)
- I use Ubuntu with kernel 3.9.4 and a Logitech USB headset for
development/testing
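
For concreteness, the core of my capture application looks roughly like the
sketch below (error handling stripped; I'm assuming here that the driver
accepts the requested 32-frame period unchanged, and "capture.raw" is just
an example output name):

    #include <stdio.h>
    #include <alsa/asoundlib.h>

    int main(void)
    {
        snd_pcm_t *pcm;
        snd_pcm_hw_params_t *hw;
        unsigned int rate = 44100;
        snd_pcm_uframes_t period = 32;
        short buf[32 * 2];            /* one 32-frame period, 2ch S16_LE */
        FILE *out = fopen("capture.raw", "wb");
        int i;

        snd_pcm_open(&pcm, "plughw:2,0", SND_PCM_STREAM_CAPTURE, 0);

        snd_pcm_hw_params_alloca(&hw);
        snd_pcm_hw_params_any(pcm, hw);
        snd_pcm_hw_params_set_access(pcm, hw, SND_PCM_ACCESS_RW_INTERLEAVED);
        snd_pcm_hw_params_set_format(pcm, hw, SND_PCM_FORMAT_S16_LE);
        snd_pcm_hw_params_set_channels(pcm, hw, 2);
        snd_pcm_hw_params_set_rate_near(pcm, hw, &rate, NULL);
        snd_pcm_hw_params_set_period_size_near(pcm, hw, &period, NULL);
        snd_pcm_hw_params(pcm, hw);

        /* read about one second of audio, one period per readi() call */
        for (i = 0; i < 44100 / 32; i++) {
            snd_pcm_readi(pcm, buf, period);
            fwrite(buf, 2 * sizeof(short), period, out);
        }

        fclose(out);
        snd_pcm_close(pcm);
        return 0;
    }

(Compiled with gcc capture.c -lasound; I then inspect capture.raw in a hex
editor.)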
To understand the control flow, I inspected the ALSA driver source code and
now have a fair picture of the flow in kernel space. However, I am rather
lost in the user-space ALSA library.
- What happens after ALSA hands over the data, i.e. after the audio data
from the mic is copied from the USB transfer buffer to the user-space
buffer? What processing steps does the data go through?
- Where does PulseAudio come into play here?
- Is there any software mixing happening?
- In the USB sound driver, I overwrote the data in urb->transfer_buffer
with a pattern like 'abcdefghijklmnop' (roughly as in the sketch after this
list), but when I store the audio data in a .raw file in the capture
application, I don't get the pattern back completely.
If I set 1 channel in the hw params, I mostly get the 'abcdefghijklmnop'
pattern, but sometimes only 'abcdefghijklm', i.e. the pattern is cut short
and the next one starts over. For 2 channels, the data interleaves in a
pattern like 'ababcdcd...', but again some patterns are incomplete and I
also see some unexpected characters.
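
The driver change was roughly along the lines of the sketch below (heavily
simplified; the helper name fill_test_pattern and the exact hook point in
the snd-usb-audio capture path are mine, just for illustration):

    /* Illustration only: overwrite each isochronous packet's payload
     * with a repeating ASCII pattern before it is copied into the
     * PCM buffer. Called from the capture URB completion path. */
    static void fill_test_pattern(struct urb *urb)
    {
            static const char pattern[] = "abcdefghijklmnop";
            unsigned char *buf = urb->transfer_buffer;
            int i;

            for (i = 0; i < urb->number_of_packets; i++) {
                    unsigned char *cp = buf + urb->iso_frame_desc[i].offset;
                    unsigned int len = urb->iso_frame_desc[i].actual_length;
                    unsigned int j;

                    for (j = 0; j < len; j++)
                            cp[j] = pattern[j % (sizeof(pattern) - 1)];
            }
    }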
I know it's a long post, but even if you can help with only part of it, I
would greatly appreciate it. Thanks!