Hi Ailo,
Well, this will get a bit long... I hope you don't mind.
On 12/12/2010 03:51 PM, ailo wrote:
On 12/12/2010 01:41 PM, Robin Gareus wrote:
On 12/12/2010 01:03 PM, ailo wrote:
I've been looking around for any tests comparing the different
kernels: -rt, generic, or any other type of realtime-enhanced kernel.
I haven't found any test results yet, at least none audio-related. I did
find some testing tools at rt.wiki.kernel.org, but I don't know if and how
they could be made relevant to audio low-latency testing.
I suppose the most interesting results would come from testing different
kernels with jack/alsa and jack/ffado.
Has anyone done such tests?
It is not trivial to perform such tests and AFAIK there's no benchmark
suite to automate the process.
There are a few tools to test JACK's realtime performance:
- the ardour source includes `tools/jacktest.c`, which checks for the maximum
DSP load at which an x-run occurs (a rough sketch of the idea follows this list).
- http://wiki.linuxaudio.org/wiki/jack_latency_tests
git://rg42.org/latentor is a tool to automate measuring round-trip
audio latency, iterating over all of jackd's -n/-p/-S parameters.
However, latentor is a pretty recent development and does not yet
report x-runs; we watch qjackctl's icon for now.
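Just to illustrate the jacktest.c idea (this is not ardour's actual code, only a
rough sketch; the client name "xrun-load" and the load parameter are arbitrary):
a tiny JACK client can register an x-run callback and burn an adjustable amount
of CPU in its process callback, then report jack_cpu_load() and the x-run count:

  /* xrun-load.c -- rough sketch of a jacktest-like DSP-load / x-run probe.
   * Not ardour's tools/jacktest.c, just the idea.
   * build: gcc -O2 -o xrun-load xrun-load.c $(pkg-config --cflags --libs jack) -lm
   */
  #include <stdio.h>
  #include <stdlib.h>
  #include <unistd.h>
  #include <math.h>
  #include <jack/jack.h>

  static volatile int xruns = 0;   /* incremented by the x-run callback */
  static long work = 100000;       /* synthetic per-cycle DSP load */

  /* realtime process callback: burn CPU to simulate plugin DSP load */
  static int process (jack_nframes_t nframes, void *arg) {
    volatile double acc = 0.0;
    long i;
    (void) nframes; (void) arg;
    for (i = 0; i < work; ++i) acc += sin ((double) i);
    return 0;
  }

  /* called by jackd whenever an x-run occurs */
  static int xrun (void *arg) { (void) arg; ++xruns; return 0; }

  int main (int argc, char **argv) {
    jack_client_t *c;
    if (argc > 1) work = atol (argv[1]);     /* e.g. ./xrun-load 500000 */

    c = jack_client_open ("xrun-load", JackNullOption, NULL);
    if (!c) { fprintf (stderr, "cannot connect to jackd\n"); return 1; }

    jack_set_process_callback (c, process, NULL);
    jack_set_xrun_callback (c, xrun, NULL);
    jack_activate (c);

    printf ("%u frames/period @ %u Hz, load = %ld\n",
            jack_get_buffer_size (c), jack_get_sample_rate (c), work);

    for (;;) {                               /* report once per second, ^C to quit */
      sleep (1);
      printf ("DSP load %5.1f%%  x-runs %d\n", jack_cpu_load (c), xruns);
    }
    /* not reached */
    jack_client_close (c);
    return 0;
  }

Increase the load argument until x-runs appear; that gives a rough idea of the
usable DSP headroom at the current -p/-n/-r settings.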
AFAICT there's no recipe. It's a matter of knowing some internals about
RT-linux, coming up with a proper kernel .config, and doing real-life
tests. I think it is impossible to assign a single "suitability for
pro-audio" number to a kernel.
For testing the performance of the 64studio RT kernel I run a couple of
heavy sessions (e.g. 16 jconvolvers in a 16-track ardour session + jamin,
which produces quite some DSP, system and I/O load). If there's no x-run
at 32fpp*2p/48kHz after 24 h, while I surf the web, read email and
compile another kernel in the meantime, I bless the build OK :) There are a
few additional things on the checklist, too: wifi, suspend/resume,
frequency scaling, etc.
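(For reference, those settings correspond to starting JACK with something
like `jackd -R -P 70 -d alsa -d hw:0 -r 48000 -p 32 -n 2`, where hw:0 is
just a placeholder for the actual interface and -P 70 an arbitrary
realtime priority.)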
2c,
robin
Thanks for your excellent info.
I will need to have a closer look at the specific details to get a better
understanding of the problems with doing this kind of testing.
Kernel .config is not my language yet, for instance.
From a practical point of view, for someone as ignorant about the
technical details as I am, I suppose I'm just trying to get a general
idea of what you get from different kernels.
So, I was making outlines of how to do these two things:
1. a test/script for comparing the performance of different kernels on a
single machine.
2. results summed up in a table that gives you a general idea of what
you get with different kernels.
There could perhaps be a number of different tables for different
processors, assuming that the processor is the main factor that
determines the actual latency you get with a kernel.
These tests could then be published in a wiki, if deemed worthy.
For 'latentor', Luis and I are pondering moving away from the wiki and
adding some phone-home support to collect data from various systems.
But it's not high priority, and we question the usefulness of such
information anyway.
- Why should such a table be made, and for whom?
Some practical uses that may require an rt kernel:
* live audio processing (requires low latency)
* monitoring (requires low latency)
* using firewire devices (as far as I understand, ffado works best with
-rt kernels)
Possible/impossible?
Short answer: "impossible", meaning: theoretically possible, but not
feasible in practice.
Long answer and brainstorm:
The question is whether it is worth the time to build it, and what can be
learned from the information.
End-users won't learn much from those statistics. Unless you have
exactly the same hardware, it is close to impossible to draw
conclusions. One would need to
a) isolate the correlating factors, and there are certainly many more
than just the kernel revision, flavor, CPU speed and audio interface.
b) find values that can be measured reliably.
E.g. what would you learn from information such as:
"With kernel A on system B using sound-card C and JACK settings D
one can run for >= X seconds without an x-run",
where D is chosen to maximize X while, for instance, minimizing latency?
Besides, there is no common goal: some end-users require huge I/O (reading
128 audio tracks from disk), others only need one channel but low
latency, and yet others only need CPU power for effect processing.
As for making something that is useful for developers: compare different
versions, do regression tests on the same system. It will take a huge
effort to pull it off. Maybe the Phoronix Test Suite could be used as a
basis: they have already laid a solid foundation for statistical analysis
and are working on a system that allows cherry-picking revisions and
change-sets and comparing those. However, AFAIK it runs in a virtual
machine, which makes it useless for testing rt performance, but there may
be options to use it on bare metal.
IMHO low latency is quite overrated. There are few use-cases that
actually require it. If one really needs reliable low latency (let's say
< 20ms), s/he needs a realtime kernel (and, for live performances, also
some other tweaks, e.g. disabling updatedb).
An RT-linux system either works or it does not. The performance
differences between different working rt-kernel revisions are usually
quite subtle, and it is always possible to overload the system because
of hardware limitations.
For benchmarking different kernel revisions/systems, I suggest sticking
to the rt-tests tools such as hwlatdetect, cyclictest, pi_stress, etc.
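For example (exact options depend on the rt-tests version installed),
something along the lines of
  cyclictest -m -S -p 99 -i 200 -l 100000 -q
  hwlatdetect --duration=120
measures the scheduling latency that actually decides whether a kernel
can do realtime at all, independent of audio.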
Something that would be useful to have is a jack-audio stress-test suite!
After installing a new system or getting new hardware, automatically run
some tests to check the limits of the system.
ardour's jacktest.c is a first step in that direction (testing CPU/DSP
load). It could be supplemented by an I/O test tool (reading audio files
from disk). An extended version of latentor that uses jack_delay to
find the lowest possible latency with no x-runs could be a third, etc.;
a rough manual version of that last idea is sketched below.
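A manual version of that last idea, assuming a physical loopback cable
between one capture and one playback channel: for each period size
(e.g. 256, 128, 64, 32) restart jackd with that -p value, start
jack_delay (or jack_iodelay), connect its ports to the loopback with
jack_connect, note the reported round-trip time and watch the x-run
count; the smallest -p that stays x-run free for a while is the
practical lower bound on that machine.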
But that information would only be useful for you, not for others.
2c,
robin