On Fri, 2007-05-11 at 18:24 +0100, Steve Harris wrote:
> On 11 May 2007, at 15:07, Fons Adriaensen wrote:
> > On Fri, May 11, 2007 at 03:33:04PM +0200, Lars Luthman wrote:
> >
> >> That sounds like a good argument for two ints to me. Although
> >> you'd have to do a lot better than double if you wanted to
> >> represent irrational numbers in binary form. =)
> >
> > Two 32-bit ints can represent (the non-integer part of) most (not all)
> > irrational values to better precision than a double. The algo to find
> > them is a bit mysterious but very simple. Simple example: 355/113 is
> > equal to pi with a relative error of less than 1e-7, not bad for two
> > 3-digit numbers. It's not difficult to find two 32-bit ints that would
> > be better than a double.
> [snip]
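
For illustration, here is a minimal sketch of one way to find such
fractions (continued-fraction convergents; whether this is the exact
algorithm Fons means is an assumption on my part). It reproduces
355/113 for pi after a few steps:

/* Sketch: rational approximations via continued-fraction convergents.
 * Assumption: this is one way to find the fractions discussed above,
 * not necessarily the algorithm Fons has in mind. */
#include <stdio.h>
#include <stdint.h>
#include <math.h>

int main(void)
{
    const double x  = 3.14159265358979323846;  /* value to approximate (pi) */
    uint64_t p0 = 0, q0 = 1;                   /* convergent two steps back */
    uint64_t p1 = 1, q1 = 0;                   /* previous convergent       */
    double   f  = x;

    for (int i = 0; i < 10; ++i) {
        uint64_t a  = (uint64_t)floor(f);      /* next continued-fraction term */
        uint64_t p2 = a * p1 + p0;             /* new numerator                */
        uint64_t q2 = a * q1 + q0;             /* new denominator              */
        p0 = p1; q0 = q1; p1 = p2; q1 = q2;

        printf("%llu/%llu ~ %.12f  (error %.2e)\n",
               (unsigned long long)p1, (unsigned long long)q1,
               (double)p1 / (double)q1,
               fabs((double)p1 / (double)q1 - x));

        double rem = f - (double)a;
        if (rem <= 0.0)                        /* exact: stop before dividing by 0 */
            break;
        f = 1.0 / rem;
    }
    return 0;
}

The first few lines printed are 3/1, 22/7, 333/106 and then 355/113.
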
> My preference is still to go for a double, as the difference is too
> minimal to be audible, under any circumstances, and it makes plugin
> developers' lives harder.

The annoyance to the plugin developer is small enough that I don't think
it sways the decision very much.
I'm all for LV2 being minimal, since extensions can take care of
additional functionality and everything will just work out all nice and
open sourcey; but deliberately putting things in the spec that are known
to be less capable than a (not that much more complicated) alternative
really doesn't sit right with me.
A single division doesn't justify crippling the spec if there's any even
remotely feasible application of the numerator/denominator method, IMO.
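
To make the cost concrete, the plugin-side work being argued about
amounts to something like the sketch below (the num/denom fields are
hypothetical, not from any published LV2 header):

#include <stdint.h>

/* Hypothetical host-supplied rate, NOT an actual LV2 structure. */
typedef struct {
    uint32_t rate_num;    /* sample rate numerator   */
    uint32_t rate_denom;  /* sample rate denominator */
} HostRate;

/* The "single division": done once at instantiation and cached. */
static double plugin_rate(const HostRate* h)
{
    return (double)h->rate_num / (double)h->rate_denom;
}
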
> LV2 is intended to be an audio plugin format, not for scientific uses.
> Of course if the host gets its sample rate as a double, it's a PITA,
> and generally lossy to convert that into num/denom.

This, OTOH, would sway the decision more than the division, if it were
actually a problem (though being a host nuisance rather than a plugin
one weakens the relevance quite a bit). It's not a problem on any of the
(typical 'round these parts) audio APIs I know of... is it a problem
anywhere LV2 is ever likely to see the light of day?
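
For what it's worth, a sketch of the host-side conversion being
discussed, assuming the audio API reports an integral sample rate (the
common case on the APIs mentioned above); the function name is
illustrative only:

#include <math.h>
#include <stdint.h>

/* Illustrative only: convert a double rate to num/denom in the common
 * case where the API reports an integral rate such as 44100 or 48000. */
static int rate_to_fraction(double rate, uint32_t* num, uint32_t* denom)
{
    if (rate > 0.0 && rate == floor(rate) && rate <= (double)UINT32_MAX) {
        *num   = (uint32_t)rate;   /* e.g. 44100/1 */
        *denom = 1;
        return 0;
    }
    return -1;  /* a non-integral rate would need a real approximation step */
}
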
-DR-