To all concerned: I've gotten quite a few responses, and rather than respond
individually to each and every one, I'm responding to David Olofson's, because
his is overall the most sensible and informative. I'll respond to some of his
individual points and then finish with some general remarks at the end.
On Thu, Jan 28, 2010 at 8:32 PM, David Olofson <david(a)olofson.net> wrote:

> On Thursday 28 January 2010, at 21.01.38, David McClanahan
> <david.mcclanahan(a)gmail.com> wrote:
> > [...]
> > > The relevant definition of "hard realtime system" here is "a system
> > > that always responds in bounded time." That bounded time may be one
> > > microsecond or one hour, but as long as the system can meet its
> > > deadline every time, it's a hard realtime system. The definition
> > > doesn't really imply any specific time frames.
> > I agree with the definition but feel it's a bit incomplete. Somebody can
> > write a piece of software and performance test it on a "soft realtime"
> > system and it meets all its deadlines DURING THE TEST. But a hard
> > realtime system should have mechanisms (the scheduler and timing
> > analysis of the code) to ensure the deadlines are met. The current "RT
> > patches" system is probabilistic ("cross your fingers"). It may be a
> > good one and sufficient on most machines.
> This has nothing to do with the definition of hard realtime. These
> lowlatency kernels don't even claim to be *hard* realtime. I think you'd
> get "lowlatency" or possibly "firm realtime", depending on who you ask.
I understand this. That's kind of why we were discussing hard realtime.
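As an aside, the "cross your fingers" bit can at least be measured. What the
cyclictest tool does, roughly, is track the worst observed wakeup latency of a
periodic SCHED_FIFO thread. A stripped-down sketch of the same idea (standard
POSIX calls; the period and priority are my own picks) -- and note that it
only shows the worst case you've SEEN so far, not a bound, which is exactly
the hard-vs-soft distinction above:

#define _GNU_SOURCE
#include <stdio.h>
#include <time.h>
#include <sched.h>
#include <sys/mman.h>

static long long ns_of(struct timespec t)
{
    return t.tv_sec * 1000000000LL + t.tv_nsec;
}

int main(void)
{
    struct sched_param sp = { .sched_priority = 80 };
    struct timespec next, now;
    const long period_ns = 1000000;         /* 1 ms wakeup period */
    long long worst = 0;

    sched_setscheduler(0, SCHED_FIFO, &sp); /* needs root or CAP_SYS_NICE */
    mlockall(MCL_CURRENT | MCL_FUTURE);     /* avoid page-fault surprises */

    clock_gettime(CLOCK_MONOTONIC, &next);
    for (int i = 0; i < 100000; i++) {
        next.tv_nsec += period_ns;          /* absolute deadline for wakeup */
        while (next.tv_nsec >= 1000000000L) {
            next.tv_nsec -= 1000000000L;
            next.tv_sec++;
        }
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
        clock_gettime(CLOCK_MONOTONIC, &now);
        if (ns_of(now) - ns_of(next) > worst)   /* how late did we wake? */
            worst = ns_of(now) - ns_of(next);
    }
    printf("worst observed wakeup latency: %lld us\n", worst / 1000);
    return 0;
}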
> True hard realtime systems don't really exist, but if we accept hardware
> failure and application bugs as acceptable reasons for failure, we can
> get pretty close. For Linux, you'd go with RTAI or RT-Linux.
>
> However, although running all realtime audio code under RTAI or RT-Linux
> might offer a slightly lower risk of drop-outs, those are pretty harsh
> environments, even compared to the "strict" requirements of JACK clients,
> realtime safe LADSPA or LV2 plugins and the like. You basically cannot
> use ANY syscalls, but have to work with a completely different API.
> (RTAI/LXRT does allow hard realtime code to run in userspace, but when a
> thread is actually running in hard realtime context, it will be thrown
> back to "standard" mode as soon as it makes an ordinary syscall.)
>
> More seriously, you cannot make use of ANY normal Linux drivers. Drivers
> have to be ported, so that all code involved in the hard realtime "loop"
> actually runs under the realtime kernel, and not Linux.
I didn't know this, but I'm not surprised by it either.
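Out of curiosity I had a look at the RTAI docs. The userspace (LXRT) side
apparently goes roughly like this -- treat the names and signatures as
approximate rather than gospel, this is just my sketch of the idea:

#include <rtai_lxrt.h>   /* RTAI userspace API */
#include <sched.h>
#include <sys/mman.h>

int main(void)
{
    RT_TASK *task;

    mlockall(MCL_CURRENT | MCL_FUTURE);  /* no page faults in RT context */

    /* Register this process with the RTAI scheduler. */
    task = rt_task_init_schmod(nam2num("SYNTH"), 0, 0, 0, SCHED_FIFO, 0xF);
    if (!task)
        return 1;

    rt_make_hard_real_time();   /* from here on: hard RT context */

    /* DSP loop would go here. Only RTAI services (rt_sleep(), RT FIFOs,
     * shared memory) are safe; one ordinary write() or printf() and the
     * task is silently demoted back to "standard" Linux scheduling --
     * exactly the behavior described above. */

    rt_make_soft_real_time();   /* leave hard RT before normal cleanup */
    rt_task_delete(task);
    return 0;
}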
> Even more seriously, there is just no way that any realtime kernel can
> ensure anything on bad hardware. If BIOS super-NMIs are blocking normal
> IRQs every now and then, or you have some device + driver combo that
> abuses PCI bus blocking (common issue with 3D accelerators), you may get
> worst case latencies of any number of milliseconds - and there is nothing
> at all that any OS can do about this. You *may* be able to avoid these
> issues by replacing cards, patching or reconfiguring drivers or tweaking
> the BIOS settings, but as general purpose PC/workstation hardware isn't
> really designed for this sort of work, there are no guarantees. It
> basically comes down to trying hardware until you find something that
> does the job.
Ok, that's the answer I've been looking for.
> BTW, this has very little to do with raw performance. A faster CPU does
> let you finish the job quicker, but if you wake up too late, you still
> won't be able to make the deadline...
>
> The bottom line is that, in the context of mainstream hardware and
> systems that aren't specifically put together for lowlatency audio work,
> this is a matter of diminishing returns. Indeed, it's possible to do
> better than a lowlatency kernel, but it's a lot of work, and it's
> completely wasted without perfectly configured, well behaved hardware.
> Sure, RTAI or RT-Linux would support 0.1 ms audio latency on a purpose
> built system with ported drivers, properly configured BIOS, SMI hacks
> etc, but it just won't happen on your average PC.
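To put some rough numbers (mine, purely illustrative) on the "wake up too
late" point -- at 48 kHz the buffer period shrinks fast, and past a point no
amount of CPU can save you:

#include <stdio.h>

int main(void)
{
    const double rate = 48000.0;               /* sample rate, Hz */
    const int frames[] = { 32, 64, 128, 256 }; /* typical buffer sizes */
    const double worst_wakeup_ms = 2.0;        /* assumed worst-case jitter */

    for (int i = 0; i < (int)(sizeof frames / sizeof frames[0]); i++) {
        double period_ms = frames[i] / rate * 1000.0;
        printf("%4d frames = %5.2f ms period -> %s\n", frames[i], period_ms,
               worst_wakeup_ms >= period_ms
                   ? "underrun possible no matter how fast the CPU is"
                   : "OK if the DSP fits in what's left of the period");
    }
    return 0;
}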
> > > Now, in real life, the "every time" part will never be quite
> > > accurate. After all, you may see some "once in a billion" combination
> > > of hardware events that delays your IRQ a few microseconds too many,
> > > or you lose power, or the hardware breaks down, or a software bug
> > > strikes... There are countless things that can go wrong in any
> > > non-trivial system.
> > Even in HRT systems, things go wrong. But in an HRT system, you lash
> > the squirrel's nuts down. In a soft realtime system, you bet that there
> > won't be a storm.
> Well, yes - but going beyond lowlatency kernels, it's usually the
> hardware you need to deal with; not the OS...
> > > Of course, there's a big difference between a DAW that drops out a
> > > few times a day, and one that runs rock solid for weeks - but a truly
> > > glitch-free system would probably be ridiculously expensive, if it's
> > > even possible to build. Triple redundancy hardware, code verified by
> > > NASA, various other things I've never even thought of; that sort of
> > > stuff...
> > Who wants a DAW? I'd be happy for a while with a stable minimoog
> > emulator.
> Same difference. If you have sufficiently solid scheduling for the
> realtime processing part, you can build pretty much anything around that.
> > [...]
> > Well there are affordable synths (mostly wavetable ones) that don't
> > appear any more sophisticated hardware-wise than a PC.
> It's not about sophistication. A low cost singleboard computer with an
> AMD Geode, VIA C7, some Intel Celeron or whatever you need in terms of
> raw power, will do just fine - as long as the chipset, BIOS and connected
> devices are well behaved and properly configured.
>
> If you, as a manufacturer of synths or similar devices, don't want to try
> a bunch of different motherboards for every new revision you make, you
> might decide to design your own board instead. Then again, if your
> product is low volume and requires enormous CPU power, carefully selected
> mainstream hardware may still be a better option.
> > The PC may be such a "generalized" piece of hardware as to make it
> > impractical as a dedicated synth (unless it's of a "super" computer
> > variety). I haven't heard anything yet that quite "puts the nail in the
> > coffin". The SMI issue mentioned earlier might be such an issue.
> SMI is one of them. In my experience, nearly every motherboard at least
> has some BIOS features you must stay away from, so even "known good"
> hardware sometimes needs special tuning for this sort of work. General
> purpose computers just aren't built for low latency realtime work - but
> most of them can still do it pretty well, with some tweaking.
> > > [...]
> > > ... process a bunch of samples at a time, usually somewhere around
> > > one millisecond's worth of audio.
> > > [...]
> > Well I understand it from that perspective, but for a performance
> > instrument I would think no buffering would be the ideal.
> That's just pointless, as the ADC and DAC latencies are already several
> sample periods, and the way DMA works on any PCI, USB or 1394 soundcard
> will add somewhere around 64 bytes' worth of latency or more to that.
>
> Also note that your average MIDI synth has anywhere from a few through
> several tens of milliseconds of latency! You can only send around 1000
> messages per second over a standard MIDI wire anyway, so where would you
> get the timing information to make use of less than 1 ms latency?
> Actually, going below a few ms only guarantees that the notes in a chord
> can never be triggered simultaneously.
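For the record, the ~1000 messages per second figure follows directly from
the standard MIDI wire numbers (31250 baud, 10 bits per byte on the wire with
start/stop framing, 3 bytes for a note-on):

#include <stdio.h>

int main(void)
{
    const double baud = 31250.0;                     /* MIDI wire rate */
    const double bytes_per_sec = baud / 10.0;        /* 8N1: 10 bits/byte */
    const double msgs_per_sec = bytes_per_sec / 3.0; /* 3-byte note-ons */

    printf("%.0f bytes/s -> %.0f note-ons/s\n", bytes_per_sec, msgs_per_sec);
    printf("a 4-note chord is smeared over %.2f ms on the wire\n",
           4 * 3 / bytes_per_sec * 1000.0);          /* ~3.8 ms */
    return 0;
}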
Yes, I understand that from a practical standpoint some buffering is
necessary and acceptable. But for a performance instrument, buffering
introduces latency. I have gotten the impression from past
reading/discussions that some treat buffering as a preference rather than a
practical necessity.
> > [...]
> > Well my question is, if you took something like a Bristol synth and
> > operated multiple control streams (pitch bend, filter sweeps, etc.),
> > whether you would experience latency (i.e. you turn the knob and the
> > pitch bends 1/2 hour later).
> For knobs and similar "analog" controls, I'd say it takes at least tens
> of ms before you start to notice any latency. For keys, I personally
> think it starts to feel weird if the latency approaches 10 ms.
>
> More importantly though, latency must be *constant*! A synth that just
> grabs all pending events once per buffer cycle won't be playable with
> more than a few ms of latency, as the "random" response times quickly
> become very noticeable and annoying as the "average" latency increases.
> If incoming events are properly timestamped and scheduled, this is much
> less of an issue, and latency has the same effect as varying the distance
> to the monitor speakers.
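For what it's worth, my understanding of the "timestamped and scheduled"
approach is something like the sketch below: split the buffer at each event's
frame offset instead of applying everything at the top of the cycle. All the
names here are invented for illustration (real APIs such as LV2's event
handling work along the same lines); events are assumed sorted by frame:

#include <stddef.h>

typedef struct { unsigned frame; float value; } Event;

static float gain = 1.0f;                  /* the one "parameter" we control */

static void render(float *out, unsigned n) /* stand-in DSP kernel */
{
    for (unsigned i = 0; i < n; i++)
        out[i] = gain;                     /* imagine a real oscillator here */
}

static void apply(const Event *ev)         /* sample-accurate param change */
{
    gain = ev->value;
}

void process(float *out, unsigned nframes, const Event *ev, size_t nev)
{
    unsigned pos = 0;
    for (size_t i = 0; i < nev; i++) {
        if (ev[i].frame > pos) {           /* audio up to the event... */
            render(out + pos, ev[i].frame - pos);
            pos = ev[i].frame;
        }
        apply(&ev[i]);                     /* ...then the change itself */
    }
    render(out + pos, nframes - pos);      /* remainder of the buffer */
}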
I'll take your word on this in general. Part of the issue with "latency" is
that it's thrown around in the Linux audio community like the holy grail,
without much consideration of its context. Some speak of it in terms of
delays in response (i.e. outputting a sample or frame) in the context of disk
accesses, interrupts, etc., and that's fair but not complete.

Latency also includes the delays in response resulting from calculating the
sample output. As a synthesis system gets more sophisticated, I could see
this becoming as much or more of an issue than the other type of latency. I
vaguely recall installing the Molnar patches on a kernel years ago and
running some "latency" test (Ben Sommer??). Anyway, I think it did various
naughty things while playing a single tone. I think it would be a fairer test
to exercise a MIDI synth with a full-out MIDI stream (pitch bends, notes,
chords, patch changes, etc.) and see the results in that case.
-------------------------------------------------------------------------------------------------------------------------------------
As a coda: I'm gonna let this go, because we (we in this mailing list) are
bordering on repeating ourselves louder. I'll state my
conclusions/interpretations to this point.
1. Part of the reason I originally posted was to get a sense of the issues
in embarking on implementing a synth on my Dell on one of the hard realtime
systems currently available. Some have contributed useful info that I was not
aware of and am glad to know. I have heard some of the issues involved. I
don't think anyone has said it can't be done, just that it may be harder and
more complicated than expected. It's software... it's always harder than you
expect.
2. The other issue--hard realtime vs the RT patches.
First of all, since I've mentioned Bristol and ZynAddSub, I want to say I did
not mean to make them the center of my wrath. I have used Zyn on a desktop
and have heard it make quite neat sounds. Bristol I had hopes for, but I've
never had much success with it. Maybe in the future.
The other side of that coin is that I don't blame their failure to work on
the RT patch system. They came with the Ubuntu Studio system. I installed all
this stuff with the Ubuntu/Debian packaging system. I ran them under the RT
patch system and the whole system locked up. Whether that's the RT system or
a failure of these programs to be "RT safe" (please issue a bullet list as to
WTF that means), I can't say.
When I started out in this discussion, I didn't assume hard realtime would be
an easy answer, and I realized that hardware quirks could be an issue and so
on. I still don't like the answer from the other side, which amounts to "Get
a faster system and fiddle/tune it and it'll be ok". At least add the
proviso, "After you get it working, never upgrade, 'cause someone's liable to
slip a new stack of bricks into your Ferrari".
Finally, from gabriel's message:

Why hasn't it happened, yet? Because most folks don't want this from Linux.
And those that /do/ realize that it is hardware specific, and you /will/ have
to roll your own OS (e.g. Korg, Harrison Consoles). You can't just download
"KewlSynthOS" and run it, because there are several prerequisite hardware
components to make the system run properly (whether off-the-shelf or
home-grown).
-------------------------------------------------------------------------------------------------------------
"KewlSynthOS" ?? No shit. What do you call all these audio distributions
floating around that basically claim "Plug us in and you'll have an instant
studio", "Look at our low latency" Blah Blah Blah. How many years has
Linux
been out? And how many years has ALSA and OSS been coupled with it? Since
we're into "latency", I dare say I'd get less latency if I plugged in a
1Mhz Commodore 64(with 64Kb) and played the SID chip than the latency I've
gotten from trying to get Linux to give me soft synth on a machine with
200Mhz processor and 200+MB of memory. When this started this a dedicated
bootup synth what I suggested because quite frankly think its a bit much to
insure a machine will run reliably as a synth and do spreadsheets at the
same time. AND I think a lot people would gladly make the tradeoff to have
an inexpensive reliable instrument especially if they could resurrect an
older machine for such purposes.
From where I stand, it looks like a LOT of effort has been expended on Linux
audio systems. It seems to me (forgetting my mission to achieve synth nirvana
on the Dell for the moment) that it would have been worthwhile to build the
audio on a hard realtime system, since
1. Correct behavior is dependent upon time deadlines
2. That's what hard realtime systems are specifically geared to do.
Anyway, enough.
> --
> //David Olofson - Developer, Artist, Open Source Advocate
> .--- Games, examples, libraries, scripting, sound, music, graphics ---.
> |   http://olofson.net    http://kobodeluxe.com    http://audiality.org |
> |   http://eel.olofson.net    http://zeespace.net    http://reologica.se |
> '---------------------------------------------------------------------'
_______________________________________________
Linux-audio-dev mailing list
Linux-audio-dev(a)lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev