On Thu, 24 Jun 2004 02:11:34 -0300
Juan Linietsky <coding(a)reduz.com.ar> wrote:
> Sorry for not checking, but I guess it could be good for the record
> (and people googling for it) to ask here.
> The macro I use for dealing with denormals is:
>
> #define undenormalise(sample) \
>     if(((*(unsigned int*)&sample)&0x7f800000)==0) sample=0.0f
>
> However, gcc 3.3 and 3.4 seem to produce an undesired effect when the
> optimizer is turned on, rendering this macro unusable. This breaks
> freeverb and a few other things I have that use it.
> How was/is this fixed properly?
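[Editor's note: a sketch of the usual diagnosis and fix, assuming the standard explanation rather than anything stated in this thread. The cast `*(unsigned int*)&sample` violates C's strict-aliasing rules, which gcc 3.x began exploiting at higher optimization levels, so the load of the bit pattern may be reordered or dropped. `memcpy` is the well-defined way to inspect a float's bits; the name `undenormalise_safe` is hypothetical.]

```c
#include <string.h>

/* Strict-aliasing-safe rewrite of the quoted macro.
   Assumes IEEE-754 single-precision float and 32-bit unsigned int. */
static inline float undenormalise_safe(float sample)
{
    unsigned int bits;
    memcpy(&bits, &sample, sizeof bits);  /* well-defined type pun */
    if ((bits & 0x7f800000u) == 0)        /* exponent field 0: zero or denormal */
        return 0.0f;
    return sample;
}
```

Compilers typically optimize the small fixed-size `memcpy` down to a single register move, so this costs no more than the original cast.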
Personally, I think that macro is rather silly. For one, you end up
forcing data that would otherwise be in the floating-point pipeline
into the integer pipeline, and then later it probably goes back into
the FP pipeline. Secondly, it contains a branch, which is usually far
from optimal.
I (re)discovered the following branch-free method a while ago:
/* branch-free denormal killer (slightly blunt) */
inline float FlushToZero( volatile float f )
{
f += 9.8607615E-32f;
return f - 9.8607615E-32f;
}
/* end */
The people who first discovered it call this method
"Elimination by Quantification".
It's slightly blunt because it damages the precision of
extremely small but not yet denormal numbers: anything
of magnitude < 2 ** -103 loses one bit of precision for
each binary order of magnitude it lies below that threshold.
(This means that denormal numbers lose /all/ of their
precision and become zero.)
Simon Jenkins
(Bristol, UK)