On Sat, Jul 08, 2006 at 01:34:44PM +0100, James Courtier-Dutton wrote:
> Is there a standard way of converting a 24-bit sample to 16-bit?
> I ask because I think that in different scenarios, one would want a
> different result.
> 1) scale the 24-bit value to 16 bits by simple multiplication by a fraction.
> 2) bit shift the 24-bit value, so that the "most useful 16 bits" are
> returned to the user. The problem here is: what are the "most useful
> 16 bits"?
Bit shifting is just multiplication by a power of 2, so it's not
essentially different from general multiplication.
Normal practice would be to dither the 24-bit signal, then take
the upper sixteen bits.
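
In C that would look something like the sketch below. It is only an
illustration: it assumes the 24-bit sample has been sign-extended into
an int32_t, uses plain TPDF dither from rand() (a real implementation
would use a better RNG and possibly noise shaping), and the function
name is made up.

#include <stdint.h>
#include <stdlib.h>

/* Convert a signed 24-bit sample (sign-extended in an int32_t)
   to 16 bits: add TPDF dither at the level of the 8 bits that
   will be discarded, clamp, then keep the upper 16 bits. */
static int16_t s24_to_s16_dithered(int32_t s)
{
    /* Sum of two independent uniform values in [0, 255], recentred:
       a triangular distribution of roughly +/- 1 LSB of the 16-bit
       result (one 16-bit LSB = 256 in 24-bit units). */
    int32_t dither = (rand() & 0xFF) + (rand() & 0xFF) - 255;
    int32_t v = s + dither;

    /* Clamp in case the dither pushed the value out of range. */
    if (v >  0x7FFFFF) v =  0x7FFFFF;
    if (v < -0x800000) v = -0x800000;

    /* Drop the low 8 bits (scale by 2^-8); assumes the compiler's
       right shift of negative values is arithmetic, as on the
       usual platforms. */
    return (int16_t)(v >> 8);
}

A plain shift without the dither would just truncate, which turns the
discarded bits into correlated distortion instead of benign noise.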
> I have one application where just using the lower 16 bits of the
> 24-bit value is ideal, due to extremely low input signals.
That's really not a good reason to deviate from the normal practice.
It probably means that your analog input signal is way too low for
the input you are using, e.g. a mic connected to a line input. The
solution here is to preamplify it to a normal level before conversion,
otherwise you're just amplifying the noise + hum + interference of
your sound card.
--
FA
Follie! Follie! Delirio vano e' questo!