On Friday 10 October 2003 17.41, hexe_2003(a)directbox.com wrote:
[...]
But well, if I synthesize two signals and want to send them at the same
time to the device, my first idea would be:
1.) Open /dev/dsp, maybe with O_NONBLOCK | O_WRONLY
2.) Create two threads
3.) The two threads calculate the two signals
4.) Sync them
5.) The two threads write their signals to /dev/dsp
6.) The threads close
Certainly something like this, anyway. I mean mp3blaster, for example,
has several PIDs while playing audio streams, and it is basically OSS
programming, not ALSA.
Am I so wrong? ;)
Yep! ;-)
Synths, trackers, games and other "polyphonic things" at some point
mix everything into a single stream that is sent to the audio device.
In general, all audio processing is done in a single thread, but in
some cases, it can be pretty hard to get the processing deterministic
enough that it can run in the audio thread without causing drop-outs.
This is especially troublesome if you need low latency, as in games
and other interactive applications.
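For illustration, a single-thread take on your two-signal example could
look roughly like this; the device path, format, rate and the two sine
"voices" are just assumptions made up for the sketch:

/* Synthesize two sine waves, mix them into one buffer, and write the
 * mix to /dev/dsp from a single thread. (Illustrative values only;
 * build with -lm.) */
#include <fcntl.h>
#include <math.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/soundcard.h>
#include <unistd.h>

#define RATE   44100
#define FRAMES 4096

int main(void)
{
    int fd = open("/dev/dsp", O_WRONLY);
    if (fd < 0) { perror("open /dev/dsp"); return 1; }

    int fmt = AFMT_S16_NE, chans = 1, rate = RATE;
    ioctl(fd, SNDCTL_DSP_SETFMT, &fmt);
    ioctl(fd, SNDCTL_DSP_CHANNELS, &chans);
    ioctl(fd, SNDCTL_DSP_SPEED, &rate);

    int16_t buf[FRAMES];
    double phase1 = 0.0, phase2 = 0.0;
    double step1 = 2.0 * M_PI * 440.0 / rate;   /* voice 1: 440 Hz */
    double step2 = 2.0 * M_PI * 660.0 / rate;   /* voice 2: 660 Hz */

    for (int block = 0; block < 100; ++block) {
        for (int i = 0; i < FRAMES; ++i) {
            /* Mix: sum the two voices, scaled so the sum never clips. */
            double s = 0.4 * sin(phase1) + 0.4 * sin(phase2);
            buf[i] = (int16_t)(s * 32767.0);
            phase1 += step1;
            phase2 += step2;
        }
        /* One stream, one writer: the device only ever sees the mix. */
        write(fd, buf, sizeof buf);
    }
    close(fd);
    return 0;
}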
That difficulty is probably why mp3blaster uses multiple threads: one
for each mp3 stream (decoding into raw audio data) and one for mixing
and output. There is most probably substantial buffering between the
decoding threads and the mixer thread, as that allows solid playback
with less buffering (i.e. less laggy mixing controls) between the mixer
and the audio device.
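Something along these lines could capture that structure; the ring and
chunk sizes, the silent stand-in "decoder" and all the names are
invented for the sketch, not taken from mp3blaster's source:

/* One big ring per decoder thread, one mixer/output thread writing the
 * mix to /dev/dsp. Sizes and names are illustrative only; build with
 * -lpthread. */
#include <fcntl.h>
#include <pthread.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <sys/soundcard.h>
#include <unistd.h>

#define RING_FRAMES 65536   /* big buffer between decoder and mixer  */
#define MIX_FRAMES    256   /* small buffer between mixer and device */

typedef struct {
    int16_t data[RING_FRAMES];
    size_t rp, wp, fill;
    pthread_mutex_t lock;
    pthread_cond_t not_full, not_empty;
} ring_t;

#define RING_INIT { .lock = PTHREAD_MUTEX_INITIALIZER, \
                    .not_full = PTHREAD_COND_INITIALIZER, \
                    .not_empty = PTHREAD_COND_INITIALIZER }
static ring_t ring_a = RING_INIT, ring_b = RING_INIT;
static int dsp_fd;

/* Decoder side: blocks while the ring is full, so decoding just pauses. */
static void ring_write(ring_t *r, const int16_t *src, size_t n)
{
    pthread_mutex_lock(&r->lock);
    for (size_t i = 0; i < n; ++i) {
        while (r->fill == RING_FRAMES)
            pthread_cond_wait(&r->not_full, &r->lock);
        r->data[r->wp] = src[i];
        r->wp = (r->wp + 1) % RING_FRAMES;
        r->fill++;
        pthread_cond_signal(&r->not_empty);
    }
    pthread_mutex_unlock(&r->lock);
}

/* Mixer side: blocks while the ring is empty. */
static void ring_read(ring_t *r, int16_t *dst, size_t n)
{
    pthread_mutex_lock(&r->lock);
    for (size_t i = 0; i < n; ++i) {
        while (r->fill == 0)
            pthread_cond_wait(&r->not_empty, &r->lock);
        dst[i] = r->data[r->rp];
        r->rp = (r->rp + 1) % RING_FRAMES;
        r->fill--;
        pthread_cond_signal(&r->not_full);
    }
    pthread_mutex_unlock(&r->lock);
}

/* Stand-in decoder: a real one would decode mp3 into PCM; this pushes silence. */
static void *decoder_thread(void *arg)
{
    int16_t block[1152] = { 0 };
    for (;;)
        ring_write(arg, block, 1152);
    return NULL;
}

/* Mixer/output thread: small chunks keep mixer-to-device latency low. */
static void *mixer_thread(void *arg)
{
    (void)arg;
    int16_t a[MIX_FRAMES], b[MIX_FRAMES], mix[MIX_FRAMES];
    for (;;) {
        ring_read(&ring_a, a, MIX_FRAMES);
        ring_read(&ring_b, b, MIX_FRAMES);
        for (int i = 0; i < MIX_FRAMES; ++i) {
            int s = a[i] + b[i];            /* sum and clamp */
            mix[i] = (int16_t)(s > 32767 ? 32767 : s < -32768 ? -32768 : s);
        }
        write(dsp_fd, mix, sizeof mix);
    }
    return NULL;
}

int main(void)
{
    int fmt = AFMT_S16_NE, chans = 1, rate = 44100;
    dsp_fd = open("/dev/dsp", O_WRONLY);
    ioctl(dsp_fd, SNDCTL_DSP_SETFMT, &fmt);
    ioctl(dsp_fd, SNDCTL_DSP_CHANNELS, &chans);
    ioctl(dsp_fd, SNDCTL_DSP_SPEED, &rate);
    pthread_t da, db, mx;
    pthread_create(&da, NULL, decoder_thread, &ring_a);
    pthread_create(&db, NULL, decoder_thread, &ring_b);
    pthread_create(&mx, NULL, mixer_thread, NULL);
    pthread_join(mx, NULL);    /* runs until killed */
    return 0;
}

The point is the asymmetry: big buffers where latency does not matter
(decoder to mixer), small ones where it does (mixer to device).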
//David Olofson - Programmer, Composer, Open Source Advocate
.- Audiality -----------------------------------------------.
| Free/Open Source audio engine for games and multimedia. |
| MIDI, modular synthesis, real time effects, scripting,... |
`-----------------------------------> http://audiality.org -'
--- http://olofson.net --- http://www.reologica.se ---