Hi all,
People who play around with floating point code (especially on x86)
quickly learn about the evils of comparing one floating
point value with another for equality.
There are other related floating point evils, one of which I was
bitten by just recently, and I thought I'd share it with y'all. If it
saves just one person from spending 20-odd hours chasing an elusive
bug like I did, I will have achieved something.
The evil I speak of is the difference between 32 and 64 bit floating
point representations (types float and double) and the x86 CPU's
internal 80 bit representation.
The most common trap is something like:
    if (x == y)
        do_something ();
where x and y are, say, double floating point numbers. Also, let's say
that the value of x is already in the CPU's FPU register as the result of
a previous operation and that the other, y, is not. What happens is that the
result of the previous operation can have a full 80 bits (part mantissa,
part exponent and a sign bit) of precision while y, loaded from memory
does not have this extra precision. The comparison therefore fails, even
though when printed out (or when compiler optimisation is switched off)
the two values are equal. This is the reason why the above if statement
is better written as:
    if (fabs (x - y) < 1e-20)
        do_something ();
The reason I am writing this email is that I was recently bitten by a
similar problem. I was keeping a running index into a table, and keeping
the integer part separate from the fractional part which was kept in a
double floating point:
    double fractional = 0.0, increment = 0.1;
    int integer = 0;

    for (;;)
    {
        /* Bunch of other code. */
        fractional += increment;
        integer += lrint (floor (fractional));
        fractional -= floor (fractional);
    }
The above code can produce very odd results for certain values of
increment. The problem in this case manifested itself in the
integer/fractional losing counts when compiled with gcc-3.4 while
the same code had worked perfectly with previous versions of the
compiler. The problem seems to be caused by the fact that the other
code in the loop was pushing at least some of the relevant values
out of the FPU stack into double floating point variables and that
when they were reloaded they had lost precision.
The fix in this case was this:
    double rem;

    for (;;)
    {
        /* Bunch of other code. */
        fractional += increment;
        rem = fmod (fractional, 1.0); /* floating point modulus */
        integer += lrint (round (fractional - rem));
        fractional = rem;
    }
which is far more robust.
HTH,
Erik
--
+-----------------------------------------------------------+
Erik de Castro Lopo nospam(a)mega-nerd.com (Yes it's valid)
+-----------------------------------------------------------+
"The X-files is too optimistic. The truth is not out there."
-- Anthony Ord
>
> From: Takashi Iwai <tiwai(a)suse.de>
> Date: 2004/07/02 Fri PM 01:15:46 GMT
> To: <ico(a)fuse.net>
> CC: <alsa-devel(a)lists.sourceforge.net>,
> <linux-audio-dev(a)music.columbia.edu>
> Subject: Re: [Alsa-devel] Re: [linux-audio-dev] snd-hdsp oddities
>
> At Fri, 2 Jul 2004 12:56:06 +0000,
> <ico(a)fuse.net> wrote:
> >
> > > Shouting "DON'T USE 2.6" isn't a good solution. Though, we need to
> > > inform to "set LD_ASSUME_KERNEL as a workaround"...
> >
> > Pardon my ignorance but how does one do this? As a part of the
> > config before compiling kernel or?
>
> No, just set the environment variable like
> export LD_ASSUME_KERNEL=2.4.19
> (better globally) and start jack. That's all.
>
> In this way, glibc chooses LinuxThreads instead of NPTL.
>
> > Also, any ideas on the odd behavior of the hdspmixer? (see my other post)
>
> Not checked yet... Did hdspmixer work on any versions correctly on
> your system?
>
Well, not really, since this notebook is only a couple of months old, so I never had the chance to test anything pre-1.0.2. But anything 1.0.2 and greater behaves the same.
Ico
> Shouting "DON'T USE 2.6" isn't a good solution. Though, we need to
> inform to "set LD_ASSUME_KERNEL as a workaround"...
Pardon my ignorance but how does one do this? As a part of the config before compiling kernel or?
Also, any ideas on the odd behavior of the hdspmixer? (see my other post)
Ico
Hi,
A friend of mine owns a Roland VS-1680 hard disk recorder. It has a
feature that allows recording and playing back a MIDI clock signal
alongside the audio tracks. This makes it possible to create a tempo
map with a (hardware) sequencer, record it on the VS-1680, and use it
to keep the sequencer in sync with subsequently recorded audio tracks.
We work together on a project, and I would like to be able to record
and play back the audio mix from the VS-1680 on my computer together
with the MIDI clock data (he takes his machine home to work on the
audio part). That way I can work on the sequences on my MPC1000 and
play back the audio with the sequencer synced as a MIDI clock slave.
I believe there is currently no Linux app that can do that, so I am
considering trying to create a utility for it. But I have little
programming experience and none with ALSA or JACK, so I would
like feedback on my ideas.
My plan is to write a little program that opens two MIDI ports (in and
out) and two JACK ports (in and out). The program will monitor the MIDI
input port for MIDI clock messages. It will write zeros to the JACK
output port, but when it receives a MIDI clock message it will write a
value of 1 to the JACK port. This way I can store the MIDI clock
signal in an audio file, for example with ecasound. I know, this is
ugly! For playback I would have the program monitor the JACK input port and
output a MIDI clock message when a sample value of 1 passes by.
I hope this explains my plan clearly enough. I have looked at the
example programs for ALSA and JACK, and to me it looks like I could
copy and paste something together, though that may be my lack of
experience. What do you think? Is this possibly going to work, or am I
overlooking major difficulties?
Any advice would be very welcome!
--
Greetings, Gerrit
Hi all!
I've been messing with the HDSP (again :-) and found that my hdspmixer has very oddly behaving sound meters. While the input meters (2nd row) appear to be more or less OK (the yellow peak indicators fly all over the place and often drop below the green lines or even disappear completely), the analog outputs as well as the combined monitoring output (the front 1/4" jack on the Multiface) only occasionally spike with a line input (usually only one channel). I am wondering why this is, since the audio is definitely working OK, but the monitoring of the outputs simply isn't (at least not visually).
I am using Mdk10.0 community with lots of updates.
Alsa 1.0.4rc2 as well as 1.0.5 (libs are from 1.0.4rc2 at this point).
Could this be the version of fltk I am using or something more serious?
I am using 2.6.5 and 2.6.7 kernels.
Any help is greatly appreciated!
Best wishes,
Ico
Hi everyone,
How should I perform resampling at runtime? For example: I load samples
with different sample rates, then JACK calls my process() callback at
its own sample rate... If the JACK rate and the sample's rate match, there's
nothing to do, but if they differ, resampling is needed.
Do you have an example algorithm, or some pointers, like some app that
does this in a nice manner?
Thanks.
--
Olivier
"Tim Goetze" <tim(a)quitte.de> writes:
> fwiw, i'm achieving quite satisfying results driving MIDI out from a
> 1024 Hz RTC thread, with external hardware locking steadily onto the
> output MIDI clock stream, even at tempi up to 240 bpm.
>
> MIDI out jitter is about the audio block size at max. DSP load (~1.3
> ms) during audio processing cycles, a fraction of a millisecond for
> the difference between 1024 Hz and the wanted MIDI clock frequency
> otherwise; however this seems to be no problem for the hardware
> attached (a fairly recent synthesizer and infrequently an aging cheap
> drum machine).
>
> at lower RTC frequencies, the jitter effect on the MIDI h/w becomes
> noticeable (erratic rubato) but it still locks on.
>
> thus, i must disagree with your 'no way'.
Could you point me to some code examples on how you do this? I am an
absolute beginner in this.
--
gerrit
> getting a general purpose computer to output MIDI Clock (and/or MIDI
> Time Code, just so nobody confuses the two of them) is a very hard
> problem. There are 24 MIDI Clock messages per quarter note. This means
> that for a piece in 4/4 at 120bpm, you need to output 1 MIDI byte
> every 20ms. Not so bad - its a nice even multiplier of the system
> interrupt frequency. However, just change the tempo or the time
> signature, and all of a sudden you have situations where the MIDI
> Clock byte needs to be output every 18ms or every 32ms or ever 9.7ms
> or every 56.5ms.
Yes, our music involves many (subtle) tempo changes, accelerandos,
etc. Accuracy would be very important.
> until the high resolution clock timers patch is solid enough to be
> used by any system, there is no way to schedule MIDI output with this
> kind of resolution, and if you can't schedule it, then the receiver of
> your MIDI clock signal will see a lot of jitter and may refuse to lock
> to it. Even if it locks, its not clear what it will do with the jitter.
So it will not work then. That is a pity!
>>signal in a audio file, for example with ecasound. I know, this is
>>ugly! For playback i would have the program monitor the jack input port and
>>output a midi clock message when a sample value of 1 passes by.
>
> JACK does block structured audio. This won't work. If the "1" is in
> the model of a JACK process cycle, when does it actually get delivered
> to the MIDI output port?
In my naive view this would simply be done in the process cycle, but
you suggest it is not possible?
> We'd love to provide this kind of functionality in Ardour, BTW, so if
> you're serious about doing the research and work needed to check on
> the HRT patch etc etc etc, we'd love to see it done in a way that we
> could use too.
I would like to contribute to the Ardour project. But I am neither
a skilled nor an experienced programmer, so I doubt I am capable of
doing this. But if there is anything I can do, I will do it. Is there any
particular documentation I should look at?
--
Gerrit