On 04/24/2010 06:40 AM, Niels Mayer wrote:
> Monty, quoting me out of context:
> <snip>
> A few people seem to think I'm in denial of Nyquist.
you are mistaken. what at least i think is that you are taking the
sampling theorem as a model of human hearing. it is not. which makes
most of your arguments moot.
> Please note what I said at the top of the thread. What is hopeless is your
> need to argue about something you clearly know nothing about -- psychology,
> neurology, the biology of human perception, and cognition.
first of all, there seems little point in getting into a pissing contest
with monty about auditory perception. i'm all for healthy scepticism,
but i also happen to know that this guy has probably forgotten more
about dsp and psychoacoustic coding than i will ever know. does "ogg
vorbis" ring a bell?
> Your assumption that only power or real-plane information matters to
> biological entities is nonsense.
nobody ever assumed this.
> The notion that "sampling" applies to biological entities is nonsense.
indeed. and you are the first to even suggest this. everybody else is
talking about how digital devices represent, store and reproduce information.
> Even just saying "our ear drums" are vibrating is a gross oversimplification
> as ear pinnae-shape actually acts as a directional-filter, that in
> conjunction with cognitive processes, allow us to locate the position of
> sound. This positioning is extremely phase and timing dependent.
thanks for this lecture on the basics of binaural perception ;)
> Humans can potentially locate sound sources to within 10 degrees of arc,
> even with complex reflections and reverberations taking place.
and they can also do that in artificially (re-)created sound fields made
of digital signals sampled at 16 khz or less. also in the presence of
significant phase distortion.
you are welcome to drop by whenever you're in europe and hear for
yourself. which should demonstrate that you are mixing lines of argument
which should be kept separate.
> Looking at the world from a POV that power and spectrums is the only thing
> that matters is total nonsense.
nobody is doing that.
> How much positional resolution is lost by quantizing the onset/reflection to
> within 1/44,000th of a second? Asking wolfram alpha
> "(1/44000 seconds) * (speed of sound) = 7.73386364 millimeters"
> .... which seems like a short distance, until you take a formerly-aligned
> woofer and tweeter on a high end studio monitor and then move them forward
> or back an additional centimeter.
this example looks very tempting on the surface. it is also very wrong.
the auditory information is not quantized in time. it is just _sampled_
at fixed intervals. you can demonstrably gain timing information from
digital systems with sub-sample accuracy. the analogue reconstruction
filter will interpolate and yield sub-sample information. google for
"inter-sample peaks" to find out more.
all you lose by using finite sample rates is some high frequency content
(i'm assuming proper anti-aliasing, of course).
moreover, even if it were quantized, it would equally apply to all
frequencies, and hence would not create time alignment issues.
all this boils down to _bandwidth_. amplitude and phase information is
retained, correctly, for all frequencies below nyquist (or more
correctly, for all frequencies unaffected by the anti-alias filter).
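same point from another angle, again with values i just made up for
illustration: the amplitude and phase of a 10 khz tone, well below nyquist at
44.1 khz, are recovered exactly from its samples by a least-squares fit.

import numpy as np

# minimal sketch with illustrative values only: amplitude and phase of a
# tone below nyquist are recovered exactly from its samples.
fs, f0 = 44100, 10000.0                      # 10 kHz tone, nyquist is 22.05 kHz
amp, phase = 0.7, 1.234                      # "unknown" ground truth
t = np.arange(2000) / fs
x = amp * np.sin(2 * np.pi * f0 * t + phase)

# fit x = a*sin(wt) + b*cos(wt); amplitude and phase follow directly
basis = np.column_stack([np.sin(2 * np.pi * f0 * t), np.cos(2 * np.pi * f0 * t)])
a, b = np.linalg.lstsq(basis, x, rcond=None)[0]
print("amplitude:", np.hypot(a, b))          # ~0.7
print("phase:", np.arctan2(b, a))            # ~1.234 rad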
> Then run some test tones at the crossover frequency, and some impulse
> responses too... You can watch that difference on a scope && you can graph a
> different diffraction and lobing pattern around the room. Some people will
> be able to tell by listening that something's wrong.
i'm pretty sure monty has looked at his share of impulse responses in
his lifetime. so have i :)
the question is: have you?
> Understand something fundamental about humans: We're not linear.
has it occurred to you that people who have developed psychoacoustic
codecs might already know this? it's basic introductory textbook knowledge.
it does not support your assumptions. all these issues you mention are
orthogonal to the question at hand.
> So if you *really* want to get information theoretic,
how a sound is represented, stored and reproduced outside of the human
hearing apparatus is orthogonal to hearing physiology. knowing about
hearing just helps to make the best compromises.
all the information that our hearing apparatus needs is still there
after proper sampling, within reasonable limits. (and no, i won't
discuss sample rates with you :)
> how do you explain the fact that the mastering process easily loses a lot of
> the low-level positional cues that people can easily hear -- even in the
> face of louder masking information.
i'd like to know what these cues are supposed to be, and whether there
is any scientific data to support this argument.
Is the "mixing process" losing information
--
from the nyquist perspective -- no -- but from the human perspective -- yes.
Those Low-order bits might matter as much as the MSB, so you can't just keep
adding MSBs and truncating LSB's and expect it to not sound like a big wall
of mud eventually.
you are again mixing things that are orthogonal. this is childish.
otoh, claiming a difference between MSBs and LSBs is funny. please, try
to get some basic understanding of how digital audio works.
human hearing has a very well known and understood dynamic range.
whether you represent it by 24 more significant bits or 24 less
significant bits does not matter.
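for reference, the usual rule of thumb for n-bit linear pcm driven by a
full-scale sine is a dynamic range of roughly 6.02*n + 1.76 db. a trivial
check (my own arithmetic, not quoting anyone):

# rule-of-thumb dynamic range of n-bit linear pcm (full-scale sine vs
# quantization noise); illustrative arithmetic only.
for bits in (16, 20, 24):
    print(bits, "bits:", round(6.02 * bits + 1.76, 1), "dB")
# -> 16 bits: 98.1 dB, 20 bits: 122.2 dB, 24 bits: 146.2 dB

so 24 bits comfortably covers the dynamic range of hearing, no matter where
in the word you imagine the signal sitting.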
truncation in a signal chain is a problem, and believe me, everybody
here knows that. it's another orthogonal random thought in your line of
argument that distracts from the original topic.
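since truncation came up anyway, one last sketch (the 0.4-lsb test tone and
the tpdf dither are purely my own illustrative choices): requantizing without
dither erases a tone smaller than half an lsb, while adding tpdf dither
before rounding preserves it on average -- which is why dither, not more
bits, is the standard remedy.

import numpy as np

# minimal sketch, illustrative values only: requantizing a tone of 0.4 lsb
# amplitude without dither erases it; tpdf dither before rounding keeps it.
rng = np.random.default_rng(1)
n = 100000
x = 0.4 * np.sin(2 * np.pi * 997 / 48000 * np.arange(n))   # amplitude in lsb units

undithered = np.round(x)                                    # straight to the grid
tpdf = rng.uniform(-0.5, 0.5, n) + rng.uniform(-0.5, 0.5, n)
dithered = np.round(x + tpdf)

# correlation with the original tone: gone without dither, intact with it
print("undithered:", np.dot(undithered, x) / np.dot(x, x))  # 0.0
print("dithered:  ", np.dot(dithered, x) / np.dot(x, x))    # ~1.0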
> Why is that?? We're logarithmic, and nyquist, sampling and binary coding is
> linear. Our wide dynamic range is provided by a simple Nyquist-violating
> equation that is both true for amplitude and frequency perception:
> log(a * b) = log(a) + log(b)
get some coffee. this is basic arithmetic. it's no more
nyquist-violating than n(a+b) = na + nb.
this is getting ridiculous.
> (where multiplication is akin to what happens when "mixing"). [[NB:
> http://www.aes.org/e-lib/browse.cfm?elib=11981 Dynamic Range Requirement for
> Subjective Noise Free Reproduction of Music -- 118db]]
did you just quote an AES paper in support of logarithmic computation
rules? or to reveal the spectacular fact that human hearing has a
dynamic range of > 118dB? (which is uncontested, and, like most of your
other arguments, has nothing to do with the original topic.)
> How does nyquist, as human perceptual theorem, "model" our logarithmic
> perception of power and frequency??
until you understand that the sampling theorem is not a model of human
hearing, there is really no point in continuing this.
the rest of the mail doesn't get any better, so i'll just drop out here.
no need to cc: me on followups, i read the list.