Please excuse the double post,
my first attempt got held by the moderation queue and I realized I was not subscribed :-)
=========
Hi all,
I am writing a multi-stream audio player for an embedded Linux system
and am a little confused about which technology will accomplish which task
for me (I've been reading up, but thought some of you might easily
point me in the right direction).
- Is JACK a suitable place to implement entire audio pipelines?
(i.e. if I have one "jack client" for each link in the pipeline: one
reading an mp3 file, another decoding the mp3 and outputting
PCM data, another computing FFT data for other purposes, etc.)
- Is ALSA capable of really "mixing", or does it only route streams
using whatever capabilities the hardware supports?
Some (or most) sound cards come with a DSP with a bunch of
funky fresh features on it (like mixing two or more input channels
into one or more output channels, plus volume control on inputs
and outputs, an equalizer, etc.).
My initial assumption is that a mixer with fine-grained control
should be implemented as close to the hardware as possible
(assuming that the driver will use any hardware acceleration available
and fall back to software routines where needed).
Does ALSA offer an API that will let me "mix" streams?
The design I'm probably going to go with is:
- GStreamer to handle audio decoding & any mixing / filtering
- JACK to obtain the RCA & microphone input channels and write PCM
data to the output through ALSA (possibly making ALSA transparent
to the player application?).
As you can clearly see,
/me is a little lost
Any help & pointers are greatly appreciated.
Cheers,
-Tristan