Jack O'Quin wrote:
Here are my results running vanilla 2.6.10. They support your
conclusion, but also the idea that the vanilla kernel is really quite
usable.
Not sure what system statistics we should collect for this. My system
is Debian woody with 2.6.10 and realtime-lsm on an Athlon 1800+ XP
with 256MB main memory and M-Audio Delta-66 sound card.
One thing worth mentioning, indeed: my laptop runs Mandrake 10.1, and all my
tests were taken during a perfectly ordinary X desktop session (KDE 3.2),
which gives the RT patch performance even more merit. Another note concerns my
custom jackit-0.99.41.10cvs installation: it includes an additional
max_delayed_usecs-histogram patch (also derived from Lee's original).
Without this modification, the result lines reading "Delay Count (>1000
usecs)" are effectively a no-op, so just ignore those.
Oh, and another important thing: being a laptop, it has ACPI support
enabled by default. With ACPI switched off I would surely see fewer XRUNs
under vanilla 2.6.10, probably as few as yours, but then I would lose some
system monitor goodies (e.g. battery status, temperature, etc.).
Incidentally, Ingo has also solved this ACPI latency issue, which makes
yet another chalkmark for the RT patch ;)
I imagine that long 10 msec xrun delay probably occurred during the
graph sort after one of the clients disconnected. If so, that's more
of a JACK implementation artifact than a kernel or system problem.
************* SUMMARY RESULT ****************
Total seconds ran . . . . . . : 300
Number of clients . . . . . . : 20
Ports per client . . . . . . : 4
Frames per buffer . . . . . . : 64
*********************************************
Timeout Count . . . . . . . . : ( 1)
XRUN Count . . . . . . . . . : 2
Delay Count (>spare time) . . : 0
Delay Count (>1000 usecs) . . : 0
Delay Maximum . . . . . . . . : 10258 usecs
Cycle Maximum . . . . . . . . : 825 usecs
Average DSP Load. . . . . . . : 32.4 %
Average CPU System Load . . . : 7.3 %
Average CPU User Load . . . . : 24.1 %
Average CPU Nice Load . . . . : 0.0 %
Average CPU I/O Wait Load . . : 1.4 %
Average CPU IRQ Load . . . . : 0.7 %
Average CPU Soft-IRQ Load . . : 0.0 %
Average Interrupt Rate . . . : 1689.4 /sec
Average Context-Switch Rate . : 11771.0 /sec
*********************************************
Hmmm... Now I notice that there was at least one Timeout occurrence. Check
the output log/chart. In my own experience, when this happens the results
are pretty skewed right after the timeout moment. Something about clients
being kicked out of the chain, maybe? Anyway, I believe the only trustworthy
results are the ones with 0 (zero) timeouts.
Very nice test, BTW.
I had to hack it a bit to work on my system (mainly due to an old GCC
2.95.4 compiler). I would love to include some version of this as a
`make test' option with the JACK distribution.
Glad you find it useful. Feel free to use it. But note that the included
jack_test_nmeter.c is a minor modification of nmeter.c, handed to me
by Denis Vlasenko on the LKML.
Bye now, and thanks.
--
rncbc aka Rui Nuno Capela
rncbc(a)rncbc.org