On Tuesday 26 January 2010, at 21.15.43, David McClanahan
<david.mcclanahan(a)gmail.com> wrote:
[...]
> 3. I'm a little worried about what some are
> calling realtime systems. The
> realtime system that is part of Ubuntu Studio and others may be more
> preemptible than the normal kernel(as in kernel calls themselves can be
> preempted), but that's not a hard realtime system. A hard realtime
> system(simplistic I know) might entail a task whose sole job is to pump out
> a sinusoidal sound sample to the D-to-A on the sound card. A hard realtime
> scheduler would run that task at 44Khz no matter what. This would entail
> developing code that when the machine instructions were analyzed, would run
> in the time constraints(aka the 44Khz). RTLinux appears to be suitable and
> RTAI might be. Perhaps others.
The relevant definition of "hard realtime system" here is "a system that
always responds in bounded time." That bounded time may be one microsecond or
one hour, but as long as the system can meet its deadline every time, it's a
hard realtime system. The definition doesn't really imply any specific time
frames.
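To make the "bounded time" idea concrete, here's a minimal sketch (plain POSIX, nothing RT-specific about the API itself) of a periodic task that checks whether each wakeup happened within its deadline. The 1 ms period and the 200 us margin are just illustrative numbers, not anything measured on a real system:

#include <stdio.h>
#include <time.h>

#define PERIOD_NS   1000000L   /* 1 ms period - illustrative only */
#define DEADLINE_NS  200000L   /* allow 200 us of wakeup latency  */

static void timespec_add_ns(struct timespec *t, long ns)
{
    t->tv_nsec += ns;
    while (t->tv_nsec >= 1000000000L) {
        t->tv_nsec -= 1000000000L;
        t->tv_sec++;
    }
}

static long timespec_diff_ns(const struct timespec *a, const struct timespec *b)
{
    return (a->tv_sec - b->tv_sec) * 1000000000L + (a->tv_nsec - b->tv_nsec);
}

int main(void)
{
    struct timespec next, now;
    clock_gettime(CLOCK_MONOTONIC, &next);
    for (int i = 0; i < 1000; ++i) {
        timespec_add_ns(&next, PERIOD_NS);
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
        clock_gettime(CLOCK_MONOTONIC, &now);
        long late = timespec_diff_ns(&now, &next);
        if (late > DEADLINE_NS)
            printf("missed deadline by %ld ns on cycle %d\n", late, i);
        /* ...the actual periodic work would go here... */
    }
    return 0;
}

A hard realtime system is simply one where that "missed deadline" branch never triggers, whatever margin you've specified.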
Now, in real life, the "every time" part will never be quite accurate. After
all, you may see some "once in a billion" combination of hardware events that
delays your IRQ a few microseconds too many, or you lose power, or the
hardware breaks down, or a software bug strikes... There are countless things
that can go wrong in any non-trivial system.
Of course, there's a big difference between a DAW that drops out a few times a
day, and one that runs rock solid for weeks - but a truly glitch-free system
would probably be ridiculously expensive, if it's even possible to build.
Triple redundancy hardware, code verified by NASA, various other things I've
never even thought of; that sort of stuff...
As to the 44 kHz "cycle rate" on the software level: while it is possible, it is a
big waste of CPU power on any general purpose CPU, as the IRQ and context
switching overhead will be quite substantial. Further, even the (normally
irrelevant) worst case scheduling jitter starts making a significant impact on
the maximum safe "DSP" CPU load. (Double the cycle rate, and the constant
jitter makes twice the impact.)
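To put some numbers on that (the 10 us per-wakeup cost below is pure guesswork for illustration, not a measurement), a quick calculation shows how a fixed per-wakeup cost scales with the cycle rate:

#include <stdio.h>

int main(void)
{
    /* Assumed fixed cost per wakeup (IRQ + context switch + worst case
     * scheduling jitter), in microseconds. Made up for illustration. */
    const double overhead_us = 10.0;
    const double rate_hz = 44100.0;

    /* One wakeup per sample vs. one wakeup per 64-frame buffer. */
    const double period_per_sample_us = 1e6 / rate_hz;          /* ~22.7 us */
    const double period_per_block_us  = 64.0 * 1e6 / rate_hz;   /* ~1451 us */

    printf("per sample: %.1f%% of each period lost to overhead\n",
           100.0 * overhead_us / period_per_sample_us);
    printf("per 64-frame block: %.1f%% lost to overhead\n",
           100.0 * overhead_us / period_per_block_us);
    return 0;
}

With those assumed numbers, per-sample scheduling burns over 40% of each period on overhead, while a ~1.5 ms block leaves it well under 1%.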
Therefore, most low latency audio applications (whether on PCs/workstations or
dedicated hardware) process a bunch of samples at a time, usually somewhere
around one millisecond's worth of audio. This allows you to use nearly all
available CPU power for actual DSP work, and you don't even need to use an
"extreme" RTOS like RTAI/LXRT or RT-Linux to make it "reasonably
reliable".
With a properly configured "lowlatency" Linux system on decent hardware (as
in, no BIOS super-NMIs blocking IRQs and stuff; raw performance is less of an
issue), you can probably have a few days without a glitch, with a latency of a
few milliseconds.
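The "properly configured" part typically means running the audio thread under realtime scheduling with its memory locked, which the stock kernel already supports. A rough sketch (the priority value is just an example, and you need root or suitable rtprio/memlock limits):

#include <stdio.h>
#include <string.h>
#include <sched.h>
#include <sys/mman.h>

/* Put the calling process under SCHED_FIFO and lock all memory, so the
 * audio code isn't preempted by ordinary processes or stalled by paging. */
static int go_realtime(int priority)
{
    struct sched_param sp;
    memset(&sp, 0, sizeof sp);
    sp.sched_priority = priority;   /* e.g. 70 - example value only */

    if (sched_setscheduler(0, SCHED_FIFO, &sp) < 0) {
        perror("sched_setscheduler");
        return -1;
    }
    if (mlockall(MCL_CURRENT | MCL_FUTURE) < 0) {
        perror("mlockall");
        return -1;
    }
    return 0;
}

int main(void)
{
    if (go_realtime(70) == 0)
        printf("running with SCHED_FIFO priority 70\n");
    /* ...audio processing loop would go here... */
    return 0;
}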
I haven't kept up with the latest developments, but I remember stress-testing
the first generation lowlatency kernels by Ingo Molnar, at 3 ms latency with
80% "DSP" CPU load. Hours of X11 stress, disk I/O stress, CPU stress and
combined stress, without a single drop-out. This was back in the Pentium II
days, and IIRC, the fastest CPU I tested on was a 333 MHz Celeron. Not saying
this will work with any lowlatency kernel on any hardware, but it's definitely
possible without a "real" RT kernel.
--
//David Olofson - Developer, Artist, Open Source Advocate
.--- Games, examples, libraries, scripting, sound, music, graphics ---.
|   http://olofson.net   http://kobodeluxe.com   http://audiality.org |
| http://eel.olofson.net  http://zeespace.net   http://reologica.se   |
'---------------------------------------------------------------------'