On Sun, Jul 22, 2007 at 01:29:19PM -0400, Paul Davis wrote:
> 2) there are some good theoretical arguments for
> needing more than 32bit floating point resolution
> for a mixer
I did a quick experiment: N floating point signals, all of them
Gaussian distributed, summed in both single and double precision.
If the difference between the two sums is taken to be 'noise',
the signal/noise ratios R in dB are:
N R
----------------
16 -142.9
32 -140.0
64 -136.7
128 -133.8
256 -130.7
512 -128.0
1024 -124.7
2048 -121.9
4096 -118.8
So even when summing 1024 signals the result is better than
120 dB, and an already big mixing session with 64 sources
would give 136 dB. This should be compared to the S/N
ratio of the individual sources, which is unlikely to be
better than 90 or 100 dB.
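The experiment above can be sketched as follows (my reconstruction
in NumPy, not the original code; the signal length and seed are
arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def snr_db(n, frames=4096):
    """Sum n Gaussian signals sequentially in single precision and
    report the difference from a double precision reference sum as
    noise, in dB relative to the sum itself."""
    x = rng.standard_normal((n, frames))   # n signals, double precision
    ref = x.sum(axis=0)                    # double precision reference
    acc = np.zeros(frames, dtype=np.float32)
    for row in x:
        acc += row.astype(np.float32)      # strictly sequential float32 sum
    noise = acc.astype(np.float64) - ref
    return 10 * np.log10((noise ** 2).mean() / (ref ** 2).mean())

for n in (16, 64, 1024):
    print(n, round(snr_db(n), 1))
```

The explicit Python loop is there on purpose: it forces a plain
sequential accumulation, since a library sum routine may already
reorder the additions internally.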
The only case where things would turn out worse is when you
are mixing signals that cancel systematically and the resulting
low-level sum is amplified. If that happens you have other, much
more serious problems anyway.
And it's quite easy to improve these figures even when using
just single precision. For example, if the summing is done in
two steps, first making sums of 4 inputs and then adding these,
the results for large N improve by 6 dB:
N R
----------------
16 -146.2
32 -144.2
64 -141.8
128 -139.3
256 -136.6
512 -133.6
1024 -130.7
2048 -127.8
4096 -124.9
Or 12 dB using a first grouping of 16:
N R
----------------
16 -142.3
32 -141.8
64 -141.3
128 -140.4
256 -139.3
512 -137.5
1024 -135.7
2048 -133.2
4096 -130.6
Or two intermediate levels of 4:
N R
----------------
16 -144.8
32 -143.9
64 -143.1
128 -141.8
256 -140.4
512 -138.3
1024 -135.9
2048 -133.6
4096 -130.5
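The two-step summing described above can be sketched like this
(helper names are mine; the group size is a parameter):

```python
import numpy as np

def seq_sum32(rows):
    """Plain sequential single-precision sum of a list of float32 rows."""
    acc = np.zeros_like(rows[0], dtype=np.float32)
    for r in rows:
        acc += r
    return acc

def grouped_sum32(rows, group=4):
    """Sum in groups of `group` first, then sum the partial results.
    Each input is then involved in fewer additions against a
    large-magnitude accumulator, so less rounding error piles up."""
    partials = [seq_sum32(rows[i:i + group])
                for i in range(0, len(rows), group)]
    return seq_sum32(partials)
```

Measuring the single/double difference as before shows the grouped
version several dB better than the flat sum for large N.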
Taking this to the limit, a binary tree addition would produce the
best results. This also explains why even single-precision FFTs
produce good results when used for convolution - an FFT computes
sums over all its input samples, but the addition is organised as
a tree by the very nature of the algorithm.
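The tree summation can be written as a recursive pairwise sum (a
sketch, with a function name of my choosing; NumPy's own sum uses
a pairwise scheme internally for the same reason):

```python
import numpy as np

def tree_sum32(rows):
    """Recursive pairwise (binary tree) sum in single precision.
    Each input passes through only ~log2(N) additions, so the
    rounding error grows far more slowly than with a sequential sum."""
    if len(rows) == 1:
        return rows[0].astype(np.float32)
    mid = len(rows) // 2
    return tree_sum32(rows[:mid]) + tree_sum32(rows[mid:])
```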
--
FA
Follie! Follie! Delirio vano è questo!