I have an idea in mind for an application that would involve a core
audio callback responsible for playing several sounds at the same
time, each being streamed in by some as-yet-undetermined means.
Before I get too far into it, I have a few questions about the best
method for ensuring that the audio callback is never blocked by
lengthy disk access, etc. Obviously I am not planning on doing the
main disk I/O in the callback, but I am thinking about the best means
for the callback to communicate with the rest of the application.
I might also like to support having some of these streams come
from external processes, opened through popen() for example.
So, the idea for an RT audio callback is that it should not wait on
data (whether it comes from a file or a process), but should continue
processing the other streams if audio data is not immediately
available. There are a few ways to do this in Linux:
1) Have a secondary thread responsible for passing data to the audio
callback through a wait-free ring buffer (rough sketch below).
2) Read from a pipe, FIFO, or socket from another process (e.g.
popen), using select() or poll() to check when there is actually data
to read (sketch below).
3) Read from a file, using select()?
4) The async I/O API (sketch below).
5) Interprocess shared memory, presumably using a semaphore of some
kind (sketch below). I guess this is similar to (1) but for
inter-process communication.
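To make (1) a bit more concrete, this is roughly what I have in mind
for the callback side -- just a sketch with made-up names and sizes;
in practice I would probably use an existing implementation such as
JACK's ringbuffer rather than rolling my own:

#include <stdatomic.h>
#include <stddef.h>

#define RB_SIZE 16384                  /* power of two, in samples */

typedef struct {
    float         buf[RB_SIZE];
    atomic_size_t write_pos;           /* advanced only by the disk thread    */
    atomic_size_t read_pos;            /* advanced only by the audio callback */
} ringbuf_t;

/* Audio-callback side: copy up to nframes samples, return how many
 * were actually available.  No locks, no system calls, never blocks. */
static size_t rb_read(ringbuf_t *rb, float *dst, size_t nframes)
{
    size_t w = atomic_load_explicit(&rb->write_pos, memory_order_acquire);
    size_t r = atomic_load_explicit(&rb->read_pos,  memory_order_relaxed);
    size_t avail = w - r;              /* free-running counters, unsigned wrap is fine */
    size_t n = avail < nframes ? avail : nframes;

    for (size_t i = 0; i < n; i++)
        dst[i] = rb->buf[(r + i) % RB_SIZE];

    atomic_store_explicit(&rb->read_pos, r + n, memory_order_release);
    return n;                          /* caller fills the remainder with silence */
}

The writer side would be the mirror image, checking that it never gets
more than RB_SIZE samples ahead of read_pos.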
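For (2), the pattern I am imagining is to poll() with a zero timeout,
so the check itself never waits, and to put the descriptor in
non-blocking mode as well. The decoder command here is obviously just
a placeholder:

#include <fcntl.h>
#include <poll.h>
#include <stdio.h>
#include <unistd.h>

static FILE *src;
static int   src_fd;

static void stream_open(void)
{
    /* hypothetical decoder writing raw audio to its stdout */
    src    = popen("mydecoder sound.ogg", "r");
    src_fd = fileno(src);
    fcntl(src_fd, F_SETFL, fcntl(src_fd, F_GETFL) | O_NONBLOCK);
}

/* Returns bytes read, 0 if nothing is ready yet, -1 on error. */
static ssize_t stream_pull(void *dst, size_t len)
{
    struct pollfd pfd = { .fd = src_fd, .events = POLLIN };

    if (poll(&pfd, 1, 0) <= 0)         /* zero timeout: ask, don't wait */
        return 0;                      /* nothing yet -- keep playing the other streams */
    if (!(pfd.revents & (POLLIN | POLLHUP)))
        return 0;

    return read(src_fd, dst, len);     /* 0 here means the process closed its end */
}

Even so, I would probably call this from a feeder thread rather than
from the callback itself, since poll() and read() are still system
calls.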
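For (4) I am assuming the API in question is POSIX AIO (<aio.h>,
traditionally linked with -lrt). Something along these lines, where
the request is queued once and afterwards only polled, never waited
on:

#include <aio.h>
#include <errno.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>

static struct aiocb req;
static char         block[65536];     /* arbitrary block size */

static void start_read(int fd, off_t offset)
{
    memset(&req, 0, sizeof req);
    req.aio_fildes = fd;
    req.aio_buf    = block;
    req.aio_nbytes = sizeof block;
    req.aio_offset = offset;
    aio_read(&req);                    /* returns immediately; the I/O happens elsewhere */
}

/* Poll from the callback (or a watcher thread): >0 bytes completed,
 * 0 still in progress, -1 on error. */
static ssize_t check_read(void)
{
    int err = aio_error(&req);
    if (err == EINPROGRESS)
        return 0;                      /* not finished -- play silence or other streams */
    if (err != 0)
        return -1;
    return aio_return(&req);           /* number of bytes actually read; 0 means EOF */
}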
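And for (5), what I have in mind is POSIX shared memory plus a
process-shared semaphore, with the important detail that the RT side
only ever calls sem_trywait(), which cannot put it to sleep. The
segment name and layout are made up:

#include <fcntl.h>
#include <semaphore.h>
#include <sys/mman.h>
#include <unistd.h>

#define SHM_NAME "/myapp_stream0"      /* hypothetical */
#define CHUNK    4096

typedef struct {
    sem_t ready;                       /* posted by the producer for each filled chunk */
    float data[CHUNK];
} shm_chunk_t;

static shm_chunk_t *chunk;

static void shm_attach(void)
{
    int fd = shm_open(SHM_NAME, O_CREAT | O_RDWR, 0600);
    ftruncate(fd, sizeof *chunk);
    chunk = mmap(NULL, sizeof *chunk, PROT_READ | PROT_WRITE,
                 MAP_SHARED, fd, 0);
    close(fd);
    sem_init(&chunk->ready, 1 /* shared across processes */, 0);
}

/* RT side: 1 if a chunk is ready to be consumed, 0 otherwise. */
static int chunk_available(void)
{
    return sem_trywait(&chunk->ready) == 0;   /* never blocks; EAGAIN just means "not yet" */
}

(Error checking omitted, and sem_init would of course only be done by
whichever process creates the segment.)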
The question is, which one of these methods is the most "real-time
friendly"? Under what conditions, if any, can I be sure a read() will
not block? Is there any advantage to threads vs. processes? Using
async I/O I suppose I could avoid either one. Are there any general
guidelines somewhere for dealing with I/O in audio applications?
thanks in advance,
Steve