On Saturday 01 May 2010, at 20.57.36, "Tim E. Real" <termtech(a)rogers.com> wrote:
[...]
> I used to be fanatical about floating point (remember the co-processor
> days?) But I've grown to dislike it.
> Bankers won't use it for calculations.
> (Have you ever been stung by extra or missing pennies using a 'NUMBER'
> database field instead of a 'BCD' field? I have.)
> So why do we use floating point for scientific and audio work?
Dynamic range, performance and ease of use. (However, as most FPUs - apart
from SIMD implementations that generally don't have denormals at all - lack a
simple switch to disable denormals, the last point is pretty much eliminated,
I think...)
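
(FWIW, on x86 you *can* flip those bits yourself through the SSE control
register - a rough, untested sketch; assumes GCC/ICC style intrinsics and
SSE3 for the DAZ bit:)

#include <xmmintrin.h>  /* _MM_SET_FLUSH_ZERO_MODE (SSE) */
#include <pmmintrin.h>  /* _MM_SET_DENORMALS_ZERO_MODE (SSE3) */

/* Call once per audio thread, before processing starts.
 * NOTE: This only affects SSE math, not the x87 FPU -
 * which is exactly the "no simple switch" problem above.
 */
static void disable_denormals(void)
{
    /* Flush denormal results to zero... */
    _MM_SET_FLUSH_ZERO_MODE(_MM_FLUSH_ZERO_ON);
    /* ...and treat denormal inputs as zero. */
    _MM_SET_DENORMALS_ZERO_MODE(_MM_DENORMALS_ZERO_ON);
}
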
> Considering audio can have really small values, does it not lead to
> errors upon summation of signals?
Yes, and no. If you add values in the same general order of magnitude, it's
pretty much like adding integers. If you add a very small value to a very
large one, and the difference is so large that the mantissas don't overlap,
nothing happens! >:-)
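
(Quick demo in C, if anyone wants to watch a value vanish - with single
precision the limit is around 7 significant digits:)

#include <stdio.h>

int main(void)
{
    float big = 1.0e8f;     /* needs ~27 bits of magnitude */
    float sum = big + 1.0f; /* a 24 bit mantissa can't fit the extra 1.0 */
    printf("%.1f\n", sum);  /* prints 100000000.0 - the 1.0 is simply gone */
    return 0;
}
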
> Why do we not use some sort of fixed-point computations?
I do, sometimes. ;-) However, it's a PITA, and I do it only when the code is
supposed to scale to hardware with slow FPUs or no FPUs at all. I suspect
floating point implementations would run faster on current PC/workstation CPUs
- but then again, *correct* (i.e. denormal-handling) code may not...! I'm not
sure.
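
(For the curious, this is the general idea - a minimal Q16.16 sketch,
not lifted from any of my actual code:)

#include <stdint.h>

typedef int32_t fixed;          /* Q16.16: 16 integer bits, 16 fraction bits */
#define FIXED_ONE   (1 << 16)   /* 1.0 in Q16.16 */

/* Multiply two Q16.16 values; widen to 64 bits so the intermediate
 * product doesn't overflow, then shift back down to Q16.16.
 */
static inline fixed fixed_mul(fixed a, fixed b)
{
    return (fixed)(((int64_t)a * b) >> 16);
}

/* Example: apply a -6 dB (0.5) gain to a sample:
 *     fixed out = fixed_mul(in, FIXED_ONE / 2);
 */
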
--
//David Olofson - Developer, Artist, Open Source Advocate
.--- Games, examples, libraries, scripting, sound, music, graphics ---.
| http://olofson.net      http://kobodeluxe.com    http://audiality.org |
| http://eel.olofson.net  http://zeespace.net      http://reologica.se  |
'---------------------------------------------------------------------'