On Mon, Jan 2, 2012 at 9:51 PM, Fons Adriaensen <fons(a)linuxaudio.org> wrote:
  'Taking care of (2)' may avoid the slowdown,
but it may also
 hide the real problem, which is that you are trying to do a
 computation that is beyond the limits of what the FPU can do.
 And that problem is *not* solved by ensuring there will be
 no slowdown. In some cases replacing small values (of which
 denormals are just one form) by zero or adding a small bias
 may help. In other cases it doesn't and it just produces new
 problems. Such a method is no substitute for analysis, not any
 more than blindly changing floats into doubles and hoping for
 the best. 
no doubt. i didn't say that solving (2) was a replacement for solving
(1). i noted that they were different tasks on the way to the optimal
solution.
the difficulty is that it's relatively easy (if the code in question
uses SSE math) to avoid (2) even when insufficient attention has been
paid to (1). the side effects of (1) are real, but subtle. the side
effects of (2) are substantive and can be crippling to workflow.
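for what it's worth, the SSE route i'm referring to is just setting the
FTZ/DAZ bits in the thread's MXCSR register. a minimal sketch using the
standard intel intrinsics (the helper name is mine, and this assumes an
x86 build that actually uses SSE for float math):

```c
#include <xmmintrin.h>  /* _MM_SET_FLUSH_ZERO_MODE */
#include <pmmintrin.h>  /* _MM_SET_DENORMALS_ZERO_MODE */

/* call once at the start of each thread that runs DSP code.
   FTZ flushes denormal results to zero, DAZ treats denormal
   inputs as zero. this changes only the calling thread's MXCSR. */
static void enable_ftz_daz (void)
{
        _MM_SET_FLUSH_ZERO_MODE (_MM_FLUSH_ZERO_ON);
        _MM_SET_DENORMALS_ZERO_MODE (_MM_DENORMALS_ZERO_ON);
}
```

note that this only affects SSE arithmetic; any x87 code path is
untouched, which is exactly why it only helps code built to use SSE.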
if you could ensure that all DSP code tackles (1) correctly, then
sure, (2) would never arise. but you can't ensure that, and neither
can anyone else. that leaves users wondering why their DSP load
shoots up to 80% a few seconds after the transport stops ... this is a
problem that can be tackled separately even if better code is the best
solution.
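for completeness, the 'small bias' trick fons mentions looks something
like this in a feedback filter: add a tiny alternating offset so the
state never decays into the subnormal range. a sketch only, not anyone's
production code (the constant and coefficients are made up):

```c
/* one-pole lowpass with an alternating anti-denormal bias.
   the bias (~1e-18) is far below audibility for float audio
   but keeps the filter state out of the float subnormal
   range (below ~1.2e-38). */
#define ANTI_DENORMAL_BIAS 1e-18f

float lowpass_tick (float *state, float in)
{
        static int flip = 0;
        float bias = (flip ^= 1) ? ANTI_DENORMAL_BIAS
                                 : -ANTI_DENORMAL_BIAS;
        *state = 0.99f * *state + 0.01f * in + bias;
        return *state;
}
```

and as fons says, whether this is appropriate depends on the algorithm:
it papers over the decay rather than analysing it.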