Malcolm Baldridge wrote:
>i imagine you'd have a blast at somewhere like this:
>
>http://forums.gentoo.org/viewtopic.php?t=5717
>
>(there's bad and good advice in there)
You also tend to be roadkill for subtle/bizarre bugs in the code optimiser
when you crank it up to maximum levels like that. I instinctively distrust
that zone, myself.
Grzegorz Prokopski, working on SableVM, has built an inlinability-testing
framework that (if I understand correctly) basically precompiles each
bytecode instruction, executes it, looks for violations of what gcc
promises, and then uses that profile info to generate the final build,
for something like 10 different architectures. Sort of a macro-scale
"we don't trust gcc" approach to optimization :/
The resulting code often looks "noisy" or "simple and inelegant", i.e.,
hoisting invariant stuff into locally declared temporary vars so the stack
frame usage is kept to a minimum or, even better, so the number of vars
fits nicely into registers. But it ends up compiling to faster-running
code than "elegantly written" C does.
Just imagine if all the man-hours people put into hinting to compilers and
hand-optimizing code were put into developing an optimizing compiler instead...
Still, I think doing things like reading and writing heap data in bursts
will help you for a long time yet ... not to mention making a single lib
file that simply #include's all of your real source files. (I've never
understood these builds that compile each C file to an object file
separately and then link them together -- people tell you to do this
because it _compiles_ faster if you change something ... so what?!?)
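I mean something like this (file names made up):

    /* everything.c -- the whole program as one translation unit.
       Build with just: gcc -O2 everything.c -o prog */
    #include "lexer.c"
    #include "parser.c"
    #include "eval.c"
    #include "main.c"

You also hand the compiler the whole program to optimize at once, which
is a nice bonus.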
Compilers *ARE* improving... there was a time, though, when I couldn't rely
on them to reduce an unsigned integer multiplication or division by a
constant power of 2 to a bitshift.
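That is, trusting it to turn these into shifts and masks (with
optimization on, any modern gcc does):

    unsigned mul8(unsigned x) { return x * 8u; }  /* -> x << 3 */
    unsigned div8(unsigned x) { return x / 8u; }  /* -> x >> 3 */
    unsigned mod8(unsigned x) { return x & 7u; }  /* same as x % 8 */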
Somebody ought to write a library of workarounds for slow math, for the
cases where you know you can do it faster using the fast operators but the
compiler doesn't. Actually, this might already exist; I just haven't looked.
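If it doesn't exist, an entry might look like this -- name invented,
using the classic multiply-by-reciprocal trick for when the compiler
won't strength-reduce a constant division itself:

    #include <stdint.h>

    /* 0xCCCCCCCD == ceil(2^35 / 10), so the top bits of the 64-bit
       product hold x / 10 exactly for every 32-bit x. */
    static inline uint32_t fastdiv10(uint32_t x)
    {
        return (uint32_t)(((uint64_t)x * 0xCCCCCCCDu) >> 35);
    }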
Cheers,
Chris