On Sat, 2006-07-08 at 13:34 +0100, James Courtier-Dutton wrote:
Hi,
Is there a standard way of converting a 24-bit sample to 16 bits?
I ask because I think that in different scenarios, one would want a
different result.
1) scale the 24-bit value to 16 bits by simple multiplication by a fraction.
2) bit shift the 24-bit value, so that the "most useful 16 bits" are
returned to the user. The problem here is deciding which 16 bits are the
"most useful". I have one application where just using the lower 16 bits
of the 24-bit value is ideal, due to extremely low input signals.
Option (1) loses resolution, because the whole signal is compressed into
fewer bits. Option (2) is more likely to lose information through clipping,
since anything outside the chosen 16-bit window overflows. A rough sketch
of both is below.
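
To make the two options concrete, a minimal sketch (my own made-up function
names, assuming each 24-bit sample arrives sign-extended in an int32_t):

#include <stdint.h>

/* Option 1: scale the full 24-bit range into 16 bits.
 * Multiplying by 65536/16777216 is the same as an arithmetic shift right
 * by 8; the low 8 bits of resolution are simply discarded. */
static int16_t s24_to_s16_scale(int32_t s24)
{
	return (int16_t)(s24 >> 8);
}

/* Option 2: keep a 16-bit window, here the lowest 16 bits (useful for
 * very quiet signals), and clip anything that falls outside it. */
static int16_t s24_to_s16_low_window(int32_t s24)
{
	if (s24 > INT16_MAX)
		return INT16_MAX;
	if (s24 < INT16_MIN)
		return INT16_MIN;
	return (int16_t)s24;
}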
google for "dither"
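
i.e. add a small amount of noise (ideally TPDF, spanning about one LSB of
the 16-bit result) before truncating, so the quantisation error is
decorrelated from the signal instead of turning into distortion. A rough
sketch of option (1) with dither, purely illustrative (rand() is not a
good noise source for real use):

#include <stdint.h>
#include <stdlib.h>

static int16_t s24_to_s16_dithered(int32_t s24)
{
	/* Two uniform random values give a roughly triangular (TPDF)
	 * distribution over -255..+255, about +/- 1 LSB of the 16-bit
	 * output expressed in 24-bit units. */
	int32_t dither = (rand() % 256) + (rand() % 256) - 255;
	int32_t v = s24 + dither;

	if (v > 8388607)		/* clamp back into the 24-bit range */
		v = 8388607;
	if (v < -8388608)
		v = -8388608;
	return (int16_t)(v >> 8);
}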