Greg Reddin wrote:
Is there a web resource or book or something where one can learn about these
programming caveats for audio developers? I knew nothing of this problem,
but it makes sense when you mention it.
I don't think there is an all-in-one howto around; most of the information
is scattered around the net (and most issues can be solved
with a bit of knowledge of the real-time paradigm and a bit of
common sense).
Or do I just have to do this for a few years and learn the hard way?
:-)
Most developers seem to do so; they are just too lazy to think
about the constraints that real-time programming imposes.
They believe they can make full use of the operating system,
call any system calls they like, use C++ constructors, malloc()s etc.,
access mutexes locked by low-priority GUI threads,
and then wonder afterwards why their application does not work with
low latencies or why there are sporadic dropouts they cannot get rid of.
Most problems go away if you increase the buffer sizes (thus latency)
to, let's say, 20-50 msec, but as we know any soft-instrument with
latencies bigger than 15-20 msec becomes unplayable
(measured by professional standards).
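(For reference: at 44.1 kHz a period of 1024 frames already corresponds to
1024/44100 ≈ 23 msec, so staying playable usually means periods of a few
hundred frames at most.)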
Anyway there are not many rules to follow:
Run the audio thread SCHED_FIFO
(and do not use SCHED_FIFO on other threads, except
when you know exactly what you are doing).
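Here is a minimal sketch (plain POSIX/Linux, untested) of how an audio thread
can promote itself to SCHED_FIFO; the priority value 70 is just an
illustrative choice, and you need the appropriate rtprio privileges:

#include <pthread.h>
#include <sched.h>
#include <stdio.h>

/* Call this from the audio thread itself, right after it starts. */
static int make_audio_thread_realtime(void)
{
    struct sched_param param;
    param.sched_priority = 70;   /* illustrative priority, tune for your system */

    int err = pthread_setschedparam(pthread_self(), SCHED_FIFO, &param);
    if (err != 0) {
        fprintf(stderr, "cannot set SCHED_FIFO: error %d\n", err);
        return -1;
    }
    return 0;
}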
In the audio thread just do not call any system calls except
read/write to the audio device.
Especially avoid file system ops, memory allocation, calling C++
constructors/destructors (which cause memory allocation/deallocation),
etc. Basically, avoid any operation that could possibly block
because it has to wait for some event, data, etc.
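To give an idea, a rough sketch of the "allocate everything up front"
approach (the buffer size and the callback are made up for illustration):

#include <stdlib.h>

#define MAX_FRAMES 4096              /* assumed worst-case period size */

static float *work_buffer;           /* scratch space for the DSP code */

/* Run once from the setup thread, before the audio thread starts. */
static int audio_engine_prepare(void)
{
    work_buffer = malloc(MAX_FRAMES * sizeof(float));
    return work_buffer ? 0 : -1;
}

/* Runs in the audio thread: only touches pre-allocated memory,
   no malloc()/free(), no file I/O, no blocking calls. */
static void audio_callback(float *out, int nframes)
{
    for (int i = 0; i < nframes; i++) {
        work_buffer[i] = 0.0f;       /* placeholder DSP into the scratch buffer */
        out[i] = work_buffer[i];
    }
}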
Do all communication between the audio thread and other threads
via lock-free FIFOs or shared memory.
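A single-writer/single-reader ring buffer is enough for most GUI-to-audio
messaging; here is a rough sketch (the message struct is made up, and a
production version would add proper memory barriers/atomics):

#include <stddef.h>

#define FIFO_SIZE 256                  /* must be a power of two */

typedef struct {
    int   type;                        /* hypothetical message payload */
    float value;
} message_t;

typedef struct {
    message_t       buf[FIFO_SIZE];
    volatile size_t write_idx;         /* only advanced by the writer (GUI) */
    volatile size_t read_idx;          /* only advanced by the reader (audio) */
} fifo_t;

/* GUI thread: returns 0 on success, -1 if the FIFO is full. */
static int fifo_push(fifo_t *f, const message_t *m)
{
    size_t next = (f->write_idx + 1) & (FIFO_SIZE - 1);
    if (next == f->read_idx)
        return -1;                     /* full: drop or retry later */
    f->buf[f->write_idx] = *m;
    f->write_idx = next;               /* publish after the payload is written */
    return 0;
}

/* Audio thread: returns 0 on success, -1 if empty.
   No blocking, no allocation, no system calls. */
static int fifo_pop(fifo_t *f, message_t *out)
{
    if (f->read_idx == f->write_idx)
        return -1;                     /* empty */
    *out = f->buf[f->read_idx];
    f->read_idx = (f->read_idx + 1) & (FIFO_SIZE - 1);
    return 0;
}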
Avoid mutexes as if they were radioactive material.
(There are a few exceptions where they are allowed,
like using mutexes between two real-time threads where you know
that they will not tie up the CPU for more than a certain amount of time,
or using pthread_mutex_trylock() in the audio thread, which
tries to acquire the mutex but, if it fails, returns immediately
without stalling.)
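The trylock case could look roughly like this (the shared parameter struct
is invented for the example):

#include <pthread.h>

typedef struct {
    pthread_mutex_t lock;
    float           volume;            /* hypothetical parameter written by the GUI */
} shared_params_t;

static float current_volume = 1.0f;    /* last value the audio thread managed to read */

/* Called once per audio cycle, from the audio thread. */
static void audio_cycle_update(shared_params_t *p)
{
    if (pthread_mutex_trylock(&p->lock) == 0) {
        current_volume = p->volume;    /* copy while holding the lock */
        pthread_mutex_unlock(&p->lock);
    }
    /* else: the GUI thread holds the lock; keep using the previous
       value instead of blocking inside the real-time path. */
}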
For example Ardour is well designed in this regard, and that is because
Paul is an experienced programmer (and probably learned some RT issues
the hard way too).
Unfortunately many audio apps on Linux do not meet "low latency standards",
or rather are not designed with real-time safeness in mind, and I think
this is one of the obstacles that keeps them out of the professional
domain, making them look like toys.
It is sad because there are many nice apps with an excellent
sound generation engine, but often the framework around that engine
is weak.
It does not actually take that much energy to fix it, but people
seem either too lazy to do it or simply do not know what the
RT-safeness guidelines are. I never understood it...
it is much more complex to design a synthesis algorithm, etc.
than to just avoid a few mutexes and use a lock-free FIFO instead.
Anyway, I agree it would be cool to have a page on the LAD site that
explains all these issues and provides source for some basic routines
and classes. This would prevent the same mistakes from being made
over and over again.
cheers,
Benno
http://linuxsampler.sourceforge.net