[LAU] Rolling off high frequencies when mastering?

nescivi nescivi at gmail.com
Sun Apr 25 09:44:46 UTC 2010


Hiho,

On Sunday 25 April 2010 03:31:33 Niels Mayer wrote:
> On Sat, Apr 24, 2010 at 7:50 PM, nescivi <nescivi at gmail.com> wrote:
> > yes, and in fact for higher frequency signals it is generally understood in
> > psycho-acoustics that we distinguish location more by level differences (due
> > to masking of the head) than by phase differences, as the latter have become
> > quite irrelevant, since the wavelength of these higher frequencies is
> > generally much smaller than the distance between our ears. OTOH, the high
> > frequency waves have a harder time bending around our heads, and thus create
> > level differences based on whether the sound source is to the left or right
> > of us.
> 
> Thank you for providing some useful perspective and attitude. (Is this your
> work? http://www.aes.org/events/122/papers/session.cfm?code=P5 
>  "Reproduction of Arbitrarily Shaped Sound Sources with Wave Field
>  Synthesis—Physical and Perceptual Effects" -- Very interesting!!)

yes, it is.

> What is the frequency at which these diffraction level effects produce
>  level differences in the ears?? Are the results taken from
> http://www.aes.org/e-lib/browse.cfm?elib=14003 ? (The Effect of Head
>  Diffraction on Stereo Localization in the Mid-Frequency Range—Eric
> Benjamin, Phil Brown, Dolby Laboratories - San Francisco, CA, USA) -- IMHO,
> that paper explicitly focused on the mid-frequency range -- doesn't mean
> that is the extent of our hearing, nor is it clear they deny high-frequency
> effects, as they'd be in contradiction to a huge body of academic
>  literature that states the exact contrary. Further, by limiting to "stereo
> localization"  and mid-range they're implicitly ignoring pinna effects and
> throwing their experimental results in the direction they planned to prove
> in the first place: which is pretty obvious basic mixing -- you can use a
> pan-pot to place a midrange source in a stereo field.  Also, i very much
> doubt they did their experiments to the same standards demanded by academic
> psychology research -- are their findings Post hoc ergo propter
> hoc  scientific justification for existing product features or limitations?
> Has anybody in psychology accepted their findings as valid? Or reproduced
> the results independent of Dolby's financial interests??

Just reading the abstract of that paper: they are looking at this range 
because it is the transition range between frequencies that are largely masked 
(shadowed) by the head and frequencies whose waves easily bend around the 
head; and they were prompted to look at the theoretical model in more detail 
because they found anomalies in localisation tests.
From what I understand from the introduction, in this paper they are looking 
at a better way of modelling the signal path from the speakers to both ears, 
and have come up with a model that takes diffraction into account, rather than 
one which just assumes that one ear is in the shadow zone.
They have then compared their modelled signals at the ears with signals 
measured at listeners' ears, and found the two in better agreement.
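
For a rough sense of where that transition range actually sits, here is a 
little back-of-envelope sketch in Python (mine, not from the paper; the speed 
of sound and the head width are just assumed nominal values), comparing the 
wavelength to the width of a head:

# Where does the transition between "bends around the head" and
# "shadowed by the head" roughly sit? Assumed nominal values:
C = 343.0          # speed of sound in air, m/s
HEAD_WIDTH = 0.18  # rough ear-to-ear distance around the head, m

for f in (250, 500, 1000, 2000, 4000, 8000):
    wavelength = C / f
    regime = ("bends around the head" if wavelength > HEAD_WIDTH
              else "shadowed by the head")
    print(f"{f:5d} Hz: wavelength {wavelength:5.2f} m -> largely {regime}")

# crossover where the wavelength equals the head width:
print(f"crossover around {C / HEAD_WIDTH:.0f} Hz")

That crossover lands right around 2 kHz, i.e. in the "mid-frequency range" 
that the paper's title refers to.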

I don't think they are ignoring any of the other effects; they are just 
focusing on how things work in this frequency range.

That is a regular method of science... reducing the problem under study to a 
specific issue that you can control, and that you have modelled and want to 
verify.


> The part that doesn't make sense is that we can also localize sound
> up/down/front/back, and certainly for up/down, level differences between
> the ears couldn't explain our ability to localize -- pinnae and
> high-frequency response do. Which is what the animal and perception
> psychology literature tells us. Fortunately, there *ARE* animal studies
> very similar to the ones i suggested we do, before declaring -- a priori --
> what sound matters and what sound doesn't.

No one is denying that, not even the authors of that paper; it is just not 
something they were studying in that specific paper.

Nor do I deny that. I only stated that for horizontal localisation, in the 
higher frequency ranges, interaural level differences play a larger role than 
interaural time differences. Indeed, for vertical and back/front localisation, 
the shape of the pinnae is quite important.
You can test this yourself (well, you need one more person to help you): cup 
your hands around your ears, folding your pinnae, close your eyes and have 
someone make some noises (shaking keys or something) at various places around 
your head, while you try to pinpoint them.
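
To put some rough numbers on the time-difference side (again my own 
illustration, using the textbook Woodworth spherical-head approximation with 
an assumed head radius of 8.75 cm, nothing taken from the paper):

import math

A = 0.0875   # assumed head radius, m
C = 343.0    # speed of sound, m/s

def itd(azimuth_deg):
    """Woodworth approximation of the interaural time difference."""
    theta = math.radians(azimuth_deg)
    return (A / C) * (theta + math.sin(theta))

for az in (15, 45, 90):
    t = itd(az)
    # a pure tone's interaural phase is only unambiguous while the ITD
    # stays below half a period, i.e. below f = 1 / (2 * ITD)
    print(f"azimuth {az:2d} deg: ITD {t * 1e6:4.0f} us, "
          f"phase cue ambiguous above about {1 / (2 * t):4.0f} Hz")

Even for a source straight off to the side the time difference is only about 
two thirds of a millisecond, and above roughly 1 kHz for strongly lateral 
sources (higher for more frontal ones) the interaural phase of a pure tone 
becomes ambiguous; that is where the level differences take over.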

Saying that one ear doesn't get certain high-frequency content doesn't mean 
that we don't hear those high frequencies... the other ear will get them.

oh, and you'll find some papers in the AES literature about pinna envy...
 
> One thing you might find interesting is that there is at least one
> counterargument to your statement for mammals -- with a 33 kHz hearing
> limit, requiring 40 kHz bandwidth, and performing worse on a sound
> localization task as the bandwidth is reduced below 20 kHz: "like humans,
> chinchillas use the upper octave of their hearing range for sound
> localization using pinna cues." And yet we allocate very little resolution,
> at 44.1 ksamples/second, to the hearing from which we derive positional
> cues: "In humans, frequencies as high as 15 kHz have been shown to be
> necessary either for optimal discrimination in the lateral field or in
> elevation (Hebrank and Wright, 1974; Belendiuk and Butler, 1975)."

Maybe for stereo hi-fi reproduction we don't want optimal localisation ;)
After all, not sitting in the sweet spot for a set of stereo speakers already 
affects the perfect stereo listening experience. And... since we normally put 
these speakers at ear height, we are not interested in elevation cues from 
them either.
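
Just to illustrate how quickly the sweet-spot issue kicks in, a small 
geometric sketch (the speaker layout and the free-field 1/r spreading are my 
own assumptions, nothing measured):

import math

C = 343.0
LEFT = (-1.0, 2.0)    # speaker positions in metres; listener axis at x = 0
RIGHT = (1.0, 2.0)

def cues(listener):
    dl = math.dist(LEFT, listener)
    dr = math.dist(RIGHT, listener)
    delay_ms = (dl - dr) / C * 1000.0      # > 0: left speaker arrives later
    level_db = 20.0 * math.log10(dl / dr)  # > 0: left speaker is that much quieter
    return delay_ms, level_db

for x in (0.0, 0.25, 0.5):
    d, l = cues((x, 0.0))
    print(f"listener {x:4.2f} m off-centre: left speaker {d:+.2f} ms, "
          f"{l:+.1f} dB relative to the right one")

Already a quarter of a metre sideways skews the arrival times by over half a 
millisecond and the levels by close to a decibel, which is enough to pull the 
phantom image towards the nearer speaker.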


sincerely,
Marije

