But the sample rate *was* set to 44.1 kHz in this case, wasn't it...?
Well, if you want to get *technical* about it, the hdsp tools shown in
that screenshot report the same latency values on Windows regardless of
the sampling rate in use (the ms figures are never adjusted when the
sampling rate changes -- see http://meowing.ccm.uc.edu/~ico/hdsp.jpg).
And the original question, even though it pointed at that particular
screenshot, did not necessarily refer only to the 44.1 kHz sampling
rate, but rather to the best achievable latency. In his case, the
original poster was right in both assumptions: 128 samples x 2 could
mean either 1.5 ms or 3 ms depending upon the sampling rate...
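
For reference, the arithmetic behind those figures is trivial: latency
in ms is buffer size in samples divided by the sampling rate. A minimal
sketch, assuming the quoted 3 ms / 1.5 ms figures describe a single
128-sample period (the function name is just for illustration):

    def period_latency_ms(period_size: int, sample_rate: int) -> float:
        """One period of audio, expressed in milliseconds."""
        return period_size / sample_rate * 1000.0

    for rate in (44100, 88200):
        print(f"128 samples @ {rate} Hz = "
              f"{period_latency_ms(128, rate):.2f} ms")
    # -> ~2.90 ms at 44.1 kHz, ~1.45 ms at 88.2 kHz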
> There is obviously still a question whether any
> kernel on the face of the earth would be able to provide the
> soundcard with data in time to avoid dropouts...
<snip>
> Also note that the x86 arch
> sucks at this pretty much by design. Alpha, PPC and others can get
> even lower latencies.
Although this used to be the case, I tend to disagree: where the x86
architecture is lacking in design, it compensates with higher clock
speeds and various add-ons (e.g. hyper-threading). Even with a branch
mis-prediction in a lengthy pipeline, these chips pump out enough
cycles per second to absorb such penalties with little, if any,
performance loss compared to their competition. This furthermore
becomes a moot point with Intel's and AMD's 64-bit offerings, as well
as the upcoming Prescott (if the rumors are to be trusted).
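
To put rough numbers on that claim (the stage counts and clock rates
below are assumptions for the sake of the example, not datasheet
figures): the wall-clock cost of a mispredict is roughly pipeline depth
divided by clock rate, so a deep pipeline at a high clock can flush in
about the same absolute time as a short pipeline at a low clock:

    def mispredict_cost_ns(pipeline_stages: int, clock_ghz: float) -> float:
        """Rough wall-clock cost of one pipeline flush, in nanoseconds."""
        return pipeline_stages / clock_ghz  # GHz == cycles per nanosecond

    # Illustrative, assumed figures:
    print(mispredict_cost_ns(20, 3.0))  # ~6.7 ns: deep pipeline, fast clock
    print(mispredict_cost_ns(7, 1.0))   # ~7.0 ns: short pipeline, slow clock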
Ico