[LAD] Attenuation of sounds in 3D space

Jörn Nettingsmeier nettings at folkwang-hochschule.de
Thu Jul 22 22:58:38 UTC 2010


On 07/22/2010 08:15 PM, Gene Heskett wrote:
> 1. How are the signals brought into phase such that electronically, all mic
> ribbons or diaphragms seem to occupy the same space, just facing in
> different directions?

well, you can't do that :)
there are two approaches:
if you only care about horizontal surround (and there is no significant z 
axis content), you can stack an omni and two fig-8s on top of one 
another, so that they are in phase for signals arriving along the 
horizontal plane. this is often called the nimbus-halliday array, after 
the guy who introduced it at nimbus records. sounds very good, but no 
height.
if you do want height, place 4 sub-cardioids or cardioids as close together 
as possible, arranged on the faces of a tetrahedron. then with some 
clever filtering, you can get very good results, and the comb filtering 
at hf can be worked around with a slight treble boost. comb filtering at 
hf looks worse than it sounds, usually.
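
to make that "clever filtering" a bit more concrete, here is a minimal 
python sketch of the static part of an A-format to B-format conversion 
for such a tetrahedral array. the capsule labels (flu = front-left-up 
etc.) and the 0.5 scaling are just one common convention, and the 
frequency-dependent correction filters a real soundfield mic needs are 
left out:

    def a_to_b(flu, frd, bld, bru):
        """convert four tetrahedral capsule signals (A-format) into
        first-order B-format (w, x, y, z). works on plain floats or
        numpy arrays of samples. scaling conventions vary."""
        w = 0.5 * (flu + frd + bld + bru)   # omni component
        x = 0.5 * (flu + frd - bld - bru)   # front-back figure-8
        y = 0.5 * (flu - frd + bld - bru)   # left-right figure-8
        z = 0.5 * (flu - frd - bld + bru)   # up-down figure-8
        return w, x, y, z

the nimbus-halliday case is the same idea minus z: the omni gives you w 
and the two fig-8s give you x and y directly, no matrixing needed.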

> And of course the same concern comes into play at the
> speakers since they are generally placed around the listener which in no
> way approximates the nearly single point reception these mics will hear.

that's not the point. the idea is this: you measure the soundfield in 
one spot, and you aim to reproduce it in that one spot only (that's the 
maths). so strictly speaking, the speakers should be really really close 
together. the good thing is: in practice it also works when the speakers 
are far apart, which means we get room for listeners :)
the interesting aspect of first-order ambisonics is not how it works (it 
only works exactly in a vanishingly small volume around that point, which 
is not terribly useful), but how gracefully it fails outside this volume.
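
to illustrate what "reproduce it in that one spot" means for first 
order, here is a toy python/numpy sketch of horizontal b-format panning 
plus a very naive decode to a regular speaker ring. the sqrt(0.5) weight 
on w and the 2/n gain are just one common convention; a real decoder 
adds psychoacoustic shelf filters, near-field compensation and so on:

    import numpy as np

    def encode(signal, azimuth):
        """pan a mono signal to horizontal first-order b-format."""
        w = signal * np.sqrt(0.5)      # conventional -3 dB on w
        x = signal * np.cos(azimuth)
        y = signal * np.sin(azimuth)
        return w, x, y

    def decode(w, x, y, speaker_azimuths):
        """naive projection decode to a regular horizontal ring."""
        n = len(speaker_azimuths)
        return [(w * np.sqrt(0.5) + x * np.cos(az) + y * np.sin(az)) * 2.0 / n
                for az in speaker_azimuths]

    # example: a 1 kHz tone panned to 45 degrees, played over a hexagon
    fs = 48000
    t = np.arange(fs) / fs
    sig = np.sin(2 * np.pi * 1000 * t)
    w, x, y = encode(sig, np.radians(45))
    feeds = decode(w, x, y, np.radians(np.arange(0, 360, 60)))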

> In my own mind, the placement of a PZ microphone in each of the places one
> would place the playback speakers would seem to be a superior method, at
> least for a listener sitting in the nominal center, who will be so
> overwhelmed by (supposedly not important sonically we are told) the phasing
> errors that he cannot single out a single largest cause for the lack of
> realism.

it might seem so, but in practice it does not really work. say you have 
a regular hexagon of microphones with a radius of 2m, and assume that a 
singer is standing at the "north" mike. sound reaches this mike 
practically instantaneously, but it takes about 12ms to reach the south 
mike. mix the two, and you get a comb filter that reaches way down, where 
it makes the sound tinny, unpleasant, and, worst of all, unrepairable.
and consider what the effect during playback would be: 
loud, correct sound from the north speaker after 6ms, a bogus echo from 
the south speaker after 18ms, combining into coloration and giving a 
wrong spatial cue of a back wall that isn't there, yet arriving too soon 
for the brain to separate it out as a bogus event and throw it away.
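
to put numbers on that (back-of-the-envelope python, speed of sound 
taken as 343 m/s):

    c = 343.0                     # speed of sound in m/s
    radius = 2.0                  # hexagon radius in m

    t_mic = 2 * radius / c        # singer at north mic -> south mic
    print(t_mic * 1000)           # ~11.7 ms: the "about 12ms" above

    print(1.0 / (2 * t_mic))      # ~43 Hz: first notch of the comb filter
                                  # you get when the two signals are mixed

    t_spk = radius / c            # each speaker is 2m from the listener
    print(t_spk * 1000)           # ~5.8 ms: the "6ms" north arrival
    print((t_mic + t_spk) * 1000) # ~17.5 ms: the "18ms" bogus echo

with notches repeating every ~86 Hz across the whole spectrum, no simple 
eq will repair it, unlike the hf combing of the tetrahedral array above.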

the fun thing is: if you had several thousand microphones (and 
speakers), the results would be excellent. but before shouldering that 
expense, you might be tempted to cut some corners. ambisonics is a very 
usable shortcut imho.



