On Thursday 28 January 2010, at 21.01.38, David McClanahan
<david.mcclanahan@gmail.com> wrote:
[...]
> > The relevant definition of "hard realtime system" here is "a system that
> > always responds in bounded time." That bounded time may be one microsecond
> > or one hour, but as long as the system meets its deadline every time, it's
> > a hard realtime system. The definition doesn't really imply any specific
> > time frames.
> I agree with the definition but feel it's a bit incomplete. Somebody can
> write a piece of software and performance test it on a "soft realtime"
> system, and have it meet all its deadlines DURING THE TEST. But a hard
> realtime system should have mechanisms (the scheduler and timing analysis
> of the code) to ensure the deadlines are met. The current "RT patches"
> system is probabilistic ("cross your fingers"). It may be a good one and
> sufficient on most machines.
This has nothing to do with the definition of hard realtime. These lowlatency
kernels don't even claim to be *hard* realtime. I think you'd get "lowlatency"
or possibly "firm realtime", depending on who you ask.
True hard realtime systems don't really exist, but if we accept hardware
failure and application bugs as legitimate reasons for failure, we can get
pretty close. For Linux, you'd go with RTAI or RT-Linux.
However, although running all realtime audio code under RTAI or RT-Linux might
offer a slightly lower risk of drop-outs, those are pretty harsh environments,
even compared to the "strict" requirements of JACK clients, realtime safe
LADSPA or LV2 plugins and the like. You basically cannot use ANY syscalls, but
have to work with a completely different API. (RTAI/LXRT does allow hard
realtime code to run in userspace, but when a thread is actually running in
hard realtime context, it will be thrown back to "standard" mode as soon as it
makes an ordinary syscall.)
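To give an idea of what that means in practice, here's a rough sketch of an
LXRT-style hard realtime task, from memory of the RTAI 3.x API - check
rtai_lxrt.h for the exact signatures, and note that the task name, period
and iteration count are arbitrary, and error handling is omitted:

/* Sketch of a userspace hard realtime loop under RTAI/LXRT. */
#include <sys/mman.h>
#include <sched.h>
#include <rtai_lxrt.h>

int main(void)
{
	RT_TASK *task;
	RTIME period;
	int i;

	mlockall(MCL_CURRENT | MCL_FUTURE);	/* No page faults, ever! */
	task = rt_task_init_schmod(nam2num("AUDIO"), 0, 0, 0,
			SCHED_FIFO, 0xF);
	period = nano2count(1000000);		/* 1 ms buffer cycle */
	rt_task_make_periodic(task, rt_get_time() + period, period);
	rt_make_hard_real_time();	/* From here on, the RT kernel rules */

	for(i = 0; i < 10000; ++i)
	{
		rt_task_wait_period();
		/* Process one buffer here - but make ONE ordinary
		 * syscall (printf(), read(), ...), and the task
		 * silently drops back to soft realtime mode. */
	}

	rt_make_soft_real_time();
	rt_task_delete(task);
	return 0;
}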
More seriously, you cannot make use of ANY normal Linux drivers. Drivers have
to be ported, so that all code involved in the hard realtime "loop" actually
runs under the realtime kernel, and not Linux.
Even more seriously, there is just no way that any realtime kernel can ensure
anything on bad hardware. If BIOS SMIs (a sort of "super-NMI") are blocking
normal IRQs every now and then, or you have some device + driver combo that
abuses PCI bus blocking (a common issue with 3D accelerators), you may get
worst case latencies of many milliseconds - and there is nothing at all that
any OS can do about this. You *may* be able to avoid these issues by
replacing cards, patching or reconfiguring drivers, or tweaking BIOS
settings, but as general purpose PC/workstation hardware isn't really
designed for this sort of work, there are no guarantees. It basically comes
down to trying hardware until you find something that does the job.
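One way to do that without an oscilloscope is to just measure worst case
scheduling latency over a long run, cyclictest style. A quick sketch
(compile with -lrt on older glibc, run it at SCHED_FIFO priority via
chrt -f 99 or similar, and stress the machine while it runs):

/* Crude worst case wakeup latency test: sleep on an absolute
 * CLOCK_MONOTONIC timer and record the worst oversleep. Spikes in
 * the ms range usually mean SMI, chipset or driver trouble. */
#include <stdio.h>
#include <time.h>

static long long ns(struct timespec *t)
{
	return (long long)t->tv_sec * 1000000000LL + t->tv_nsec;
}

int main(void)
{
	struct timespec next, now;
	long long worst = 0, late;
	int i;

	clock_gettime(CLOCK_MONOTONIC, &next);
	for(i = 0; i < 100000; ++i)
	{
		next.tv_nsec += 1000000;	/* 1 ms period */
		if(next.tv_nsec >= 1000000000)
		{
			next.tv_nsec -= 1000000000;
			++next.tv_sec;
		}
		clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME,
				&next, NULL);
		clock_gettime(CLOCK_MONOTONIC, &now);
		late = ns(&now) - ns(&next);
		if(late > worst)
			worst = late;
	}
	printf("Worst case wakeup latency: %lld us\n", worst / 1000);
	return 0;
}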
BTW, this has very little to do with raw performance. A faster CPU does let
you finish the job quicker, but if you wake up too late, you still won't be
able to make the deadline...
The bottom line is that, in the context of mainstream hardware and systems
that aren't specifically put together for lowlatency audio work, this is a
matter of diminishing returns. Indeed, it's possible to do better than a
lowlatency kernel, but it's a lot of work, and it's completely wasted without
perfectly configured, well-behaved hardware. Sure, RTAI or RT-Linux would
support 0.1 ms audio latency on a purpose-built system with ported drivers, a
properly configured BIOS, SMI hacks etc., but it just won't happen on your
average PC.
> > Now, in real life, the "every time" part will never be quite accurate.
> > After all, you may see some "once in a billion" combination of hardware
> > events that delays your IRQ a few microseconds too many, or you lose
> > power, or the hardware breaks down, or a software bug strikes... There
> > are countless things that can go wrong in any non-trivial system.
> Even in HRT systems, things go wrong. But in an HRT system, you lash the
> squirrel's nuts down. In a soft realtime system, you bet that there won't
> be a storm.
Well, yes - but going beyond lowlatency kernels, it's usually the hardware
you need to deal with, not the OS...
> > Of course, there's a big difference between a DAW that drops out a few
> > times a day, and one that runs rock solid for weeks - but a truly
> > glitch-free system would probably be ridiculously expensive, if it's
> > even possible to build. Triple redundancy hardware, code verified by
> > NASA, various other things I've never even thought of; that sort of
> > stuff...
> Who wants a DAW? I'd be happy for a while with a stable Minimoog emulator.
Same difference. If you have sufficiently solid scheduling for the realtime
processing part, you can build pretty much anything around that.
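The usual way to do that is to keep the realtime thread free of locks,
allocations and syscalls, and talk to it through lock-free FIFOs. A sketch
using JACK's ringbuffer - any single-reader/single-writer FIFO works the
same way, and the ControlEvent struct and both functions are made up for
the example:

/* Sketch: GUI/MIDI thread feeds the RT audio thread through a
 * lock-free ring buffer, so the RT thread never has to block. */
#include <jack/ringbuffer.h>

typedef struct
{
	int	param;
	float	value;
} ControlEvent;

static jack_ringbuffer_t *rb;	/* rb = jack_ringbuffer_create(4096); */

/* Non-RT thread: post a control change. */
void send_control(int param, float value)
{
	ControlEvent ev = { param, value };
	if(jack_ringbuffer_write_space(rb) >= sizeof ev)
		jack_ringbuffer_write(rb, (const char *)&ev, sizeof ev);
}

/* RT thread, once per buffer cycle: no locks, no syscalls. */
void poll_controls(void)
{
	ControlEvent ev;
	while(jack_ringbuffer_read_space(rb) >= sizeof ev)
	{
		jack_ringbuffer_read(rb, (char *)&ev, sizeof ev);
		/* ...apply ev.param/ev.value to the synth here... */
	}
}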
[...]
> Well, there are affordable synths (mostly wavetable ones) that don't
> appear any more sophisticated hardware-wise than a PC.
It's not about sophistication. A low-cost single-board computer with an AMD
Geode, VIA C7, some Intel Celeron or whatever you need in terms of raw power
will do just fine - as long as the chipset, BIOS and connected devices are
well behaved and properly configured.
If you, as a manufacturer of synths or similar devices, don't want to try a
bunch of different motherboards for every new revision you make, you might
decide to design your own board instead. Then again, if your product is low
volume and requires enormous CPU power, carefully selected mainstream hardware
may still be a better option.
> The PC may be such a "generalized" piece of hardware as to make it
> impractical as a dedicated synth (unless it's of a "super" computer
> variety). I haven't heard anything yet that quite "puts the nail in the
> coffin", though. The SMI issue mentioned earlier might be such an issue.
SMI is one of them. In my experience, nearly every motherboard has at least
some BIOS features you must stay away from, so even "known good" hardware
sometimes needs special tuning for this sort of work. General purpose
computers just aren't built for low latency realtime work - but most of them
can still do it pretty well, with some tweaking.
[...]
> > ... process a bunch of samples at a time, usually somewhere around one
> > millisecond's worth of audio.
[...]
> Well, I understand it from that perspective, but for a performance
> instrument, I would think no buffering would be the ideal.
That's just pointless, as the ADC and DAC latencies are already several sample
periods, and the way DMA works on any PCI, USB or 1394 soundcard will add
somewhere around 64 bytes' worth of latency or more to that.
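To put numbers on that: assuming 48 kHz, 16 bit stereo (4 bytes per frame),
64 bytes of DMA granularity is 16 frames - that is, 64 / 4 / 48000 = ~0.33 ms
before your code even sees the first sample, converter delays not included.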
Also note that your average MIDI synth has anywhere from a few to several
tens of milliseconds of latency! You can only send around 1000 messages per
second over a standard MIDI wire anyway, so where would you get the timing
information to make use of less than 1 ms latency? Actually, going below a
few ms only guarantees that the notes in a chord can never be triggered
simultaneously.
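For reference, the arithmetic behind that figure: a MIDI wire runs at
31250 bits/s, with 10 bits per byte on the wire, i.e. 3125 bytes/s. A
note-on message is 3 bytes (2 with running status), so you get at most some
1000-1500 messages per second - roughly one per millisecond, which is all
the timing resolution the wire can ever deliver.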
[...]
> Well, my question is if you took something like a Bristol synth, and
> operated multiple control streams (pitch bend, filter sweeps, etc.),
> whether you would experience latency (i.e. you turn the knob and the
> pitch bends 1/2 hour later).
For knobs and similar "analog" controls, I'd say it takes at least tens of
ms before you start to notice any latency. For keys, I personally think it
starts to feel weird if the latency approaches 10 ms.
More importantly though, latency must be *constant*! A synth that just grabs
all pending events once per buffer cycle won't be playable with more than a
few ms of latency, as the "random" response times quickly become very
noticeable and annoying as the "average" latency increases. If incoming events
are properly timestamped and scheduled, this is much less of an issue, and
latency has the same effect as varying the distance to the monitor speakers.
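In code, that kind of scheduling boils down to splitting each buffer at the
event timestamps, instead of applying everything once per cycle. A sketch -
the Event struct is made up, and render_voices() and apply_event() are
hypothetical names for whatever your synth engine provides:

/* Sketch of sample accurate event handling: render the buffer in
 * segments, applying each event at its exact frame offset.
 * Events are assumed to be sorted by timestamp. */
typedef struct
{
	unsigned	frame;	/* timestamp, relative to buffer start */
	int		param;
	float		value;
} Event;

void render_voices(float *out, unsigned frames);	/* hypothetical */
void apply_event(int param, float value);		/* hypothetical */

void process(float *out, unsigned nframes, const Event *ev, unsigned nev)
{
	unsigned i, pos = 0;
	for(i = 0; i < nev; ++i)
	{
		/* Render up to the next event's timestamp... */
		render_voices(out + pos, ev[i].frame - pos);
		pos = ev[i].frame;
		/* ...then apply the event exactly where it belongs. */
		apply_event(ev[i].param, ev[i].value);
	}
	render_voices(out + pos, nframes - pos);	/* Buffer tail */
}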
--
//David Olofson - Developer, Artist, Open Source Advocate

.--- Games, examples, libraries, scripting, sound, music, graphics ---.
| http://olofson.net   http://kobodeluxe.com   http://audiality.org   |
| http://eel.olofson.net   http://zeespace.net   http://reologica.se  |
'---------------------------------------------------------------------'