Just looking through some of the results on:
https://www.osadl.org/Hardware-overview.qa-farm-hardware.0.html
Some of the higher-speed, many-core CPUs have the worst latency:
https://www.osadl.org/Latency-plot-of-system-in-rack-1-slot.qa-latencyplot-…
https://www.osadl.org/Latency-plot-of-system-in-rack-0-slot.qa-latencyplot-…
(These will cause xruns at 16/2 at 48000 samples/sec; 16 frames x 2
periods is only about 0.67 ms of buffer, less than the worst-case
latencies on those plots.)
Some slower CPUs do much better:
https://www.osadl.org/Latency-plot-of-system-in-rack-3-slot.qa-latencyplot-…
https://www.osadl.org/Latency-plot-of-system-in-rack-1-slot.qa-latencyplot-…
Two very similar CPUs can vary:
https://www.osadl.org/Latency-plot-of-system-in-rack-c-slot.qa-latencyplot-…
https://www.osadl.org/Latency-plot-of-system-in-rack-0-slot.qa-latencyplot-…
(the second is a faster but older one)
Hyperthreading does not affect latency?:
https://www.osadl.org/Latency-plot-of-system-in-rack-0-slot.qa-latencyplot-…
(at least this plot seems to say that) HT itself, and what determines
how much it hurts latency, has changed over the years. For HT to be
worthwhile, any one thread needs to keep the core for a certain number
of instructions, and as CPU speeds go up that same number of
instructions gets done faster. I also think the overhead for context
switching has gone down. With my old P4 (single core), HT on restricted
my audio interface to 64/2 with the occasional xrun, while HT off
allowed solid 16/2 performance. My new i5 does not have HT, so I am not
able to compare.
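For anyone who wants to try this without a trip to the BIOS, here is a
minimal sketch (untested, assuming the usual Linux sysfs layout and
root privileges) that takes the second thread of each physical core
offline at runtime:

    #!/usr/bin/env python3
    # Sketch: offline the HT/SMT sibling of each physical core via sysfs.
    # Assumes the usual Linux sysfs layout; run as root. Newer kernels
    # can do the same via /sys/devices/system/cpu/smt/control or the
    # nosmt boot parameter.
    import os

    base = "/sys/devices/system/cpu"
    cpus = sorted(int(d[3:]) for d in os.listdir(base)
                  if d.startswith("cpu") and d[3:].isdigit())
    seen = set()
    for n in cpus:
        topo = "%s/cpu%d/topology" % (base, n)
        try:
            core = open(topo + "/core_id").read().strip()
            pkg = open(topo + "/physical_package_id").read().strip()
        except FileNotFoundError:
            continue  # CPU already offline, no topology to read
        if (pkg, core) in seen:
            # Second thread on a core we have already seen: a sibling.
            with open("%s/cpu%d/online" % (base, n), "w") as f:
                f.write("0")
            print("cpu%d offlined (HT sibling)" % n)
        else:
            seen.add((pkg, core))

Rebooting (or writing "1" back to the same files) brings the threads
back, so it is easy to A/B test against an xrun counter.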
AMD seems to have something similar enough to hyperthreading to be wary
of. A number of the AMD CPUs show twice as many threads as cores. I
know that they do double caching; I don't know whether that is what the
Linux kernel is seeing as shadow CPUs or not.
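One way to find out what the kernel actually thinks, rather than
guessing: each CPU's sysfs topology directory lists which logical CPUs
share its core. A quick sketch, same assumptions as above:

    # Sketch: show which logical CPUs the kernel considers siblings of
    # the same physical core. If AMD's extra threads show up paired
    # here, the kernel is treating them like HT siblings.
    import glob

    for path in sorted(glob.glob(
            "/sys/devices/system/cpu/cpu[0-9]*/topology")):
        cpu = path.split("/")[-2]
        sib = open(path + "/thread_siblings_list").read().strip()
        print(cpu, "shares a core with:", sib)

If the output pairs CPUs up (0-1, 2-3, ...), the kernel sees them as
siblings and they can be offlined the same way as the HT threads above.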
Looking through some of the AMD documentation (which is very sparse,
BTW), it appears that some of the boost and processor speed changes are
not controllable from the BIOS.
On any of the Intel CPUs, both boost and HT can be turned off in any
reasonable BIOS. With AMD I am sure that the kernel, at least, can be
told to ignore the shadow CPUs (by taking them offline through sysfs,
as above, for example). I would like to know more about speed scaling
and how much influence the OS has on it. Another thing that seems to be
becoming common is powering off parts of the CPU that are idle. The
power-on time is very short, but the context-change time would still be
there. There have been some experiments showing that just keeping the
CPU busy with a do-nothing script can give lower latency times.
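On the powering-off point: rather than a do-nothing script, the kernel
has a PM QoS interface that asks it not to enter deep idle states at
all. A minimal sketch, assuming /dev/cpu_dma_latency is present (as far
as I know this is the same trick cyclictest uses):

    #!/usr/bin/env python3
    # Sketch: request a 0 us idle wakeup latency so cores are not put
    # into deep sleep states. The request holds only while the file
    # descriptor stays open; closing it restores normal idle behaviour.
    # Needs root.
    import os, struct, time

    fd = os.open("/dev/cpu_dma_latency", os.O_WRONLY)
    os.write(fd, struct.pack("i", 0))  # 0 microseconds
    try:
        while True:
            time.sleep(3600)  # just keep the fd open; Ctrl-C to quit
    finally:
        os.close(fd)

That gets the low-latency effect without burning a core on a busy loop.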
Anyway, the latest, greatest motherboard and CPU is not the best bet
for low latency audio. Advertised "performance" is based on throughput.
All of the benchmarks I have seen on the manufacturers' sites are about
throughput; latency is not mentioned or measured. The few times it has
been mentioned have been in regard to video, where something called
"low latency" would probably be a noticeable delay in the audio world.
(In most computer engineers' minds, low latency is 30 ms.)
The idea of what latency is good enough for audio work has changed with
the shift from mostly PCI cards to USB. Where 64/2 was the normal
maximum, 128/2 or even 256/2 have become common... and seem to be
accepted as "OK". (There are also USB cards that will run at 32/2 or
better.) Maybe there are other reasons for the change in thought on
this too.
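For reference, the arithmetic behind those frames/periods numbers at
48000 samples/sec (total buffer = frames x periods):

    # Sketch: buffer sizes in milliseconds at 48 kHz.
    rate = 48000
    for frames, periods in ((16, 2), (32, 2), (64, 2), (128, 2), (256, 2)):
        print("%3d/%d -> %5.2f ms" % (frames, periods,
                                      1000.0 * frames * periods / rate))

That puts 64/2 at about 2.7 ms of buffer and 256/2 at about 10.7 ms.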
Note: this was a very light look, not in depth at all. The profiles do
not always match the tests; I see that in the profiles the governor is
set to OnDemand, but the test plots all say "lowest P state:
performance".
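For what it's worth, the governor is easy to check (and, as root, to
pin) from sysfs; a quick sketch assuming the usual cpufreq layout:

    # Sketch: print the cpufreq governor for each CPU.
    import glob

    for path in sorted(glob.glob(
            "/sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_governor")):
        print(path.split("/")[-3], open(path).read().strip())
        # To pin it for a latency test (as root):
        # open(path, "w").write("performance")

That would at least show what a machine was actually running while a
plot was being made.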
--
Len Ovens
www.ovenwerks.net