On Tue, Jul 31, 2012 at 05:06:17PM -0400, Paul Davis wrote:
> it is NOT a reasonable attempt. its an effort by someone who doesn't really
> understand much about audio, and from some of the available video evidence,
> doesn't understand a lot about linux either, to reinvent what doesn't need
> reinventing and to avoid doing the actual hard work that would genuinely
> improve things.
Yep, it looks like a big waste of time and effort.
There *is* a grain of truth in the pro-fixed-point argument,
but it matters only in extreme cases, and I really wonder if
the Klang author understands any of this.
If you compute a very long FIR filter in the 'obvious'
way using 32-bit floating point:
s = 0;
for (i = 0; i < N; i++) s += c [i] * x [i];
out = s;
with N = 100000 or so, you will be adding a nontrivial
amount of noise. The solution is to split up the sum:
first make sums of, say, 1000 samples, then add those:
s1 = s2 = 0;
for (i = j = 0; i < N; i++, j++)
{
    if (j == 1000)
    {
        s2 += s1;
        s1 = 0;
        j = 0;
    }
    s1 += c [i] * x [i];
}
out = s1 + s2;
The extreme case of this happens within the FFT: each of the
N outputs is a sum of N terms, but they are added in binary
tree fashion, first sums of two, then sums of four, etc.
It's the reason why computing a long FIR using an FFT-based
convolution engine is not only much more efficient, but also
more accurate than the direct form.
BTW, how long has it been since TI introduced their first 40-bit
floating point DSP chip (the C30)? 20 years or so ...
Ciao,
--
FA
A world of exhaustive, reliable metadata would be an utopia.
It's also a pipe-dream, founded on self-delusion, nerd hubris
and hysterically inflated market opportunities. (Cory Doctorow)