Hi all,
I've had an interesting discussion with a professor and a distinguished
member of the electroacoustic music community regarding audio latencies,
which made me realize that I did not understand the issue in its
entirety. Hence, I looked around the net in order to educate myself.
I soon stumbled across the following site:
http://old.lwn.net/1999/0916/a/latency.html
Admittedly, it's quite old, but that, if anything, speaks only in
Linux's favor in terms of its pro-audio readiness. At any rate, I was
checking out the benchmark data and was wondering how this
person/software arrived at the 0.73 ms buffer fragment that is equal to
128 bytes. In other words, what sampling rate was used?
Assuming one byte per sample (see the sketch below), that works out to:
128 bytes at a 44100 Hz sampling rate = 2.9 ms
128 bytes at an 88200 Hz sampling rate = 1.45 ms
128 bytes at a 176400 Hz sampling rate = 0.725 ms (this one being
obviously the closest, but then what kind of hardware supports this
sampling rate, especially in 1999 when this test was done?)
128 bytes at a 192000 Hz sampling rate = 0.67 ms
So what gives? It seems to imply some kind of 176 kHz-ish sampling rate
that, AFAIK, does not exist.
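For reference, here is the arithmetic I am using, as a small C sketch.
The bytes-per-frame figure is an explicit parameter, since an assumption
about the sample format (e.g. 16-bit and/or stereo frames, rather than
my one-byte-per-sample reading) is exactly where my numbers may be going
wrong:

/* fragment_latency.c -- the arithmetic behind the figures above.
 * latency_ms = 1000 * (fragment_bytes / bytes_per_frame) / rate
 */
#include <stdio.h>

static double fragment_ms(int fragment_bytes, int rate, int bytes_per_frame)
{
    int frames = fragment_bytes / bytes_per_frame;
    return 1000.0 * frames / rate;
}

int main(void)
{
    const int rates[] = { 44100, 88200, 176400, 192000 };
    size_t i;

    /* one byte per frame, i.e. treating 128 bytes as 128 samples */
    for (i = 0; i < sizeof rates / sizeof rates[0]; i++)
        printf("128 bytes @ %6d Hz: %.3f ms\n",
               rates[i], fragment_ms(128, rates[i], 1));
    return 0;
}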
Furthermore, my question is what app was used to produce those
graphs/results, and whether these latency tests take into account
hardware latencies (i.e., the converters, PCI->CPU->PCI->output, etc.).
In other words, is this the latency achievable with the following
setup?
Input -> soundcard -> CPU (with some kind of DSP) -> soundcard -> Output
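To make concrete what I mean by the full round trip, here is a sketch
that simply sums the stages of that path. Every number in it is a
made-up placeholder, since the converter and bus delays are exactly
what I am asking about:

/* roundtrip.c -- what a full round-trip figure would have to include.
 * All values are placeholders, NOT measurements.
 */
#include <stdio.h>

int main(void)
{
    double adc_ms     = 1.0;  /* A/D converter delay -- placeholder   */
    double in_buf_ms  = 0.73; /* one input fragment, per the article  */
    double dsp_ms     = 0.5;  /* scheduling + DSP time -- placeholder */
    double out_buf_ms = 0.73; /* one output fragment                  */
    double dac_ms     = 1.0;  /* D/A converter delay -- placeholder   */

    printf("round trip: %.2f ms\n",
           adc_ms + in_buf_ms + dsp_ms + out_buf_ms + dac_ms);
    return 0;
}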
Your help on this matter is greatly appreciated!
Ivica Ico Bukvic, composer & multimedia sculptor
http://meowing.ccm.uc.edu/~ico