Hello all,


I have written a channel class that collects data from (file) sources and copies it into a buffer. JACK then picks up this buffer and puts it into its streams.
So far, I think, this is a normal design, with the following code:


for (unsigned int n = 0; n < nframes; ++n)
{
    pBuffer[n] += pFrames[n]; // mix the source frames into the buffer
    pBuffer[n] *= volume;     // apply the channel volume
}


I know it's not really optimized, but it works as an example. As you can guess, pBuffer and pFrames are float*, and volume is also a float.


Now, when the volume is 0, the CPU usage climbs to 100% after 3-5 seconds.


A workaround is to add the following before the loop:


if (volume < 0.0000001)
    volume = 0.0000001;


But I am trying to understand what happens here. Is the compiler over-optimizing zeros?
The compiler is ($ g++ -v):
Using built-in specs.
Target: i486-linux-gnu
Configured with: ../src/configure -v --enable-languages=c,c++,fortran,objc,obj-c++,treelang --prefix=/usr --enable-shared --with-system-zlib --libexecdir=/usr/lib --without-included-gettext --enable-threads=posix --enable-nls --program-suffix=-4.1 --enable-__cxa_atexit --enable-clocale=gnu --enable-libstdcxx-debug --enable-mpfr --with-tune=i686 --enable-checking=release i486-linux-gnu
Thread model: posix
gcc version 4.1.2 20061115 (prerelease) (Debian 4.1.1-21)



What puzzles me is that this doesn't happen on my dual-core system.


Any ideas? Thanks, c~