Jorn, i'm not that far into it yet, and have no coding experience or mathematical degree to back me up. :) Just a former orchestral musician and composer trying to make his way, who has to use his ears, and a slide rule, to work things out.
As soon as i know what 2nd order ambisonics are, i'll have a better idea though. :)

Anyway, back to the books........

Alex.

On Tue, Jan 13, 2009 at 6:28 PM, Jörn Nettingsmeier <nettings@folkwang-hochschule.de> wrote:
> Jorn, thanks for the feedback. I've just tried one of Fons' amb plugs, and
> it repeatedly crashed ardour, so i think i'd better fix that before
> going further.

interesting. i've never had problems with those plugins in ardour. which
one specifically? and which ardour version?
> As for mike bleed, it's not a full multimix of pseudo mic blend, but
> more a 'hint' of signal from adjacent instruments. It's artificial, imho,
> to completely remove any resonant blend of adjacent instruments, and i've
> already had a modicum of success in terms of a 'more lifelike' response
> using this method. I'm also using orchestral samples here, not a live
> orchestra, so i'm keen to explore just how far we can get down the
> 'real' road before limitations prevail.

i see.
> As an aside to this, the VSL orchestral sample library team have already
> started a project not dissimilar to this, called MIR, so the concept is
> not just mine, or even theirs... :)
>
> I knew i was being kinda hopeful when i asked about cutting an impulse
> into chunks, so i'm not surprised at all.
>
> Now to get this Amb problem sorted out.

yeah, i'd be interested to hear how it turns out. if you find the time,
post your findings to LAU.

fwiw, i'm just working on a somewhat related project. i have a
multi(close)miked recording of an organ concert with three spatially
discrete organs and a few hamasaki signals, and i'm trying to shoehorn
those into a spatially correct and pleasant 2nd order ambisonic mix in
full 3d. i've taken a leaf from your book and i'm applying individual
delays to each microphone to correct for its distance to the (virtual)
listening position i'm mixing for, and i've measured the source
positions in azimuth and elevation to be able to pan them correctly.
results are quite enjoyable so far, but i hope to be able to bribe the
organist to play some excerpts for me again, so that i can make a
soundfield recording for reference... the results will be presented in a
paper at LAC 2009.
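to make the arithmetic concrete, here is a minimal python/numpy sketch of
the two calculations that approach involves: a per-microphone delay derived
from its distance to the virtual listening position, and second-order
panning gains from the measured azimuth/elevation. the Furse-Malham
ordering and weights below are an assumption on my part, since the mail
doesn't say which convention is in use:

import numpy as np

SPEED_OF_SOUND = 343.0  # m/s, roughly, at room temperature

def distance_delay_samples(distance_m, rate=48000):
    # time-of-flight delay of a close mic relative to the virtual
    # listening position: roughly 3 ms per metre
    return int(round(distance_m / SPEED_OF_SOUND * rate))

def fuma_2nd_order_gains(azimuth_deg, elevation_deg):
    # second-order ambisonic panning gains, Furse-Malham (W X Y Z R S T U V)
    az = np.radians(azimuth_deg)
    el = np.radians(elevation_deg)
    return np.array([
        1.0 / np.sqrt(2.0),                  # W
        np.cos(az) * np.cos(el),             # X
        np.sin(az) * np.cos(el),             # Y
        np.sin(el),                          # Z
        1.5 * np.sin(el) ** 2 - 0.5,         # R
        np.cos(az) * np.sin(2.0 * el),       # S
        np.sin(az) * np.sin(2.0 * el),       # T
        np.cos(2.0 * az) * np.cos(el) ** 2,  # U
        np.sin(2.0 * az) * np.cos(el) ** 2,  # V
    ])

# e.g. a mic 7 m from the virtual listener, 30 deg left, 10 deg up:
print(distance_delay_samples(7.0))        # ~980 samples at 48 kHz
print(fuma_2nd_order_gains(30.0, 10.0))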
have you considered publishing your results as well? the lac paper
deadline is still open iirc.

regards,

jörn
> Alex.
>
> On Sun, Jan 4, 2009 at 1:32 PM, Jörn Nettingsmeier
> <nettings@folkwang-hochschule.de> wrote:
>
> alex stone wrote:
> > Ok, this might be a bit of a curly question, as i don't know if this
> > is even possible, or valid.
> >
> > The subject is placement, and pertains to orchestral recording. (My own
> > work composed within the box with linuxsampler, from midi in RG, and
> > recorded in Ardour.)
> >
> > I'd like to place my instruments as close as possible to an orchestral
> > setup, in terms of recorded sound. That is, once i've recorded, i'd like
> > to use convolution and other tools to 'correctly' place instruments
> > within the overall soundscape.
> >
> > example:
> >
> > With the listener sitting 10 metres back from the stage, and facing the
> > conductor (central), my 1st violins are on the listener's left. Those
> > first violins occupy a portion of the overall soundscape from a point
> > approximately 2 metres to the left of the conductor, to an outside left
> > position, approximately 10 metres from the conductor, and with 8 desks
> > (2 players per desk) about 4 metres deep at the section's deepest
> > point, in the shape of a wedge, more or less. That's the pan width of
> > the section.
> >
> > Now as i understand it, a metre represents approximately 3ms, so
> > calculating the leading edge of the section across the stage as 'zero',
> > the first violin players furthest in depth from the front of the
> > stage should, in theory, (and i know this is approximate only, as i sat
> > as a player in orchestras for some years, and understand the instinctive
> > timing compensation that goes on) play about 12ms later than those at
> > the front. Using the ears, and experimenting, this actually translates
> > as about 6ms before the sound becomes unrealistic, using layered violin
> > samples, both small section and solo. (highly subjective i know, but i
> > only have my own experience as a player and composer to fall back
> > on here.)
>
> make sure that you are using different samples for each desk if you use
> individual delays, otherwise you will introduce comb filtering artefacts.
> but i doubt these delays will have any perceptible benefit.
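the comb filtering warned about here is easy to see numerically: summing a
signal with a delayed copy of itself gives |H(f)| = |1 + e^(-j*2*pi*f*d)| =
2|cos(pi*f*d)|, with a null every 1/d Hz. a small python sketch, using the
~6 ms figure arrived at by ear above:

import numpy as np

delay_s = 0.006  # the ~6 ms from the mail

# magnitude response of x(t) + x(t - d): nulls at f = (2k+1)/(2d),
# i.e. the first null near 83 Hz and one every ~167 Hz after that
freqs = np.linspace(0.0, 1000.0, 13)
mags = np.abs(1.0 + np.exp(-2j * np.pi * freqs * delay_s))
for f, m in zip(freqs, mags):
    print(f"{f:7.1f} Hz  gain {m:4.2f}")

print("first null:", 1.0 / (2.0 * delay_s), "Hz")  # 83.3 Hz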
>
> > A violin has its own unique characteristics in distribution of sound
> > emanating from the instrument. The player sits facing the conductor, and
> > the bulk of the overall sound goes up, at an angle of more or less
> > 30 degrees towards the ceiling, to a 'point' equivalent to almost
> > directly over the listener's right shoulder. Naturally the listener
> > 'hears' the direct sound most prominently (both with ears, and the
> > 'visual perception' he gains from listening with his eyes). Secondly,
> > the violin also sounds, to a lesser degree, downwards, and in varying
> > proportions, in a reasonably 'spherical' sound radiation model, with the
> > possible exception of the sound hitting the player's body, and those in
> > his immediate vicinity (and other objects, like stands, sheet music,
> > etc, all playing a part too).
> >
> > I've experimented with this quite a bit, and the best result seems to
> > come from a somewhat inadequate, but acceptable, computational model
> > based on using, you guessed it, the orchestral experience ears.
> >
> > So i take one 'hall' impulse, and apply it to varying degrees, mixed
> > with as precise a pan model as possible (and i use multiple desks to
> > layer with, more or less, so there's a reasonably accurate depiction of
> > a pan placed section, instead of the usual pan sample model of either
> > shifting the section with a stereo pan, or the inadequate right channel
> > down, left channel up method).
>
> phew! ambitious!
>
> > to make this more complicated (not by intent, i assure you), i'm
> > attempting to add a degree of pseudo mike bleed, from my 1st violins,
> > into the cellos sitting deeper on the stage, and in reduced amounts to
> > the violas and second violins sitting on the other side of the digital
> > stage.
> >
> > All of this is with the intent of getting as lifelike a sound as
> > possible from my digital orchestra.
>
> why simulate mike bleed? i thought you were after creating a "true"
> orchestra sound, not one including all the unwanted multi-miking
> artefacts... i'd rather concentrate on instruments and room.
>
> > The questions:
> >
> > In terms of convolution, can i 'split' a convolution impulse with some
> > sort of software device, so as to emulate the varying degrees of
> > spherical sound from instruments as described above?
>
> you could get a b-format response from every place in the orchestra
> (with all the other musicians sitting there, for damping), and then
> convolve it with the violin (which would also have to be shoehorned to
> b-format, simulating the desired radiation pattern).
> but if you have the room and the orchestra, you might as well let them
> play your stuff ;)
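in code, that suggestion comes down to convolving the mono source with each
channel of a 4-channel (W, X, Y, Z) impulse response measured at the
instrument's position. a sketch using scipy and the soundfile bindings;
the filenames are hypothetical, for illustration only:

import numpy as np
import soundfile as sf
from scipy.signal import fftconvolve

dry, rate = sf.read("violin_dry.wav")               # mono source
ir, ir_rate = sf.read("stage_seat_bformat_ir.wav")  # 4-ch W,X,Y,Z IR
assert rate == ir_rate and ir.ndim == 2 and ir.shape[1] == 4

# convolving the source with each b-format channel of the IR yields a
# first-order b-format image of the source played from the measured seat
wet = np.stack([fftconvolve(dry, ir[:, ch]) for ch in range(4)], axis=1)
sf.write("violin_in_hall_bformat.wav", wet / np.max(np.abs(wet)), rate)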
>
> > So, one impulse (I use Jconv by default, as it does a great job, far
> > better than most gui bloated offerings in the commercial world) that
> > can, by way of sends and returns, be 'split' or manipulated not only
> > in terms of length of impulse, but fed as 'panned', so as to put more
> > impulse 'up', less impulse 'down', and just a twitch of impulse
> > 'forward' of the player, with near enough to none on the sound going
> > back into the player.
>
> i'm not sure i understand 100%, but you might want to look into
> ambisonics for that. ardour can do it just fine, all you need to do is
> bypass the panners and use fons' AMB plugins instead. as to target
> format, you could use UHJ stereo. if you desire 5.1, you might want to
> consider working in second order ambisonics.
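for reference, going from horizontal b-format to 2-channel UHJ is a fixed
matrix plus a broadband 90-degree phase shift. a python sketch of the
standard encoding equations; the Hilbert-transform approximation of the
phase shift, and its sign convention, are my assumptions:

import numpy as np
from scipy.signal import hilbert

def shift90(u):
    # +90 degree broadband phase shift via the analytic signal;
    # sign conventions differ, so flip if the image sounds reversed
    return -np.imag(hilbert(u))

def bformat_to_uhj(w, x, y):
    # standard 2-channel UHJ encode from horizontal b-format (FuMa W, X, Y)
    s = 0.9397 * w + 0.1856 * x
    d = shift90(-0.3420 * w + 0.5099 * x) + 0.6555 * y
    return 0.5 * (s + d), 0.5 * (s - d)  # left, right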
>
> > I've written this rather clumsily, but i hope some of you experts may
> > understand what i'm trying to achieve here.
> > Can the impulse be split down its middle, separating left from right,
> > aurally speaking, and if this is possible, can i split the impulse into
> > 'wedges', emulating that sphere i wrote of, more or less?
>
> no, i don't think so. you will need a spatial impulse response. the
> simplest way to obtain one is to use a soundfield microphone (or a
> tetramic, for that matter).
>
> > if there's a way to do this, then i'm all ears, as my mike bleed
> > experiments suffer from a 'generic' impulse per section affecting
> > everything to the same degree, including the instruments bled in. I
> > should note here, this is not about gain, but a wedge of impulse, cut
> > out of the overall chunk, that represents a 'window' or pan section of
> > the whole.
>
> i still don't understand why you're after "mike bleed".
>
> > I suppose an analogy for the chunk-of-impulse idea would be to stretch
> > a ribbon across a stage, and cut a metre out of the middle. That metre
> > would be the bit i'd use, as a portion of the whole, in an aural
> > soundscape, to manipulate, or place, instruments to a finer degree, in
> > the attempt to create a more realistic '3d' effect for the listener.
> > That metre, along with other cut-out sections of the impulse soundscape,
> > could help me introduce a more....'human' element to a layered
> > instrument section.
>
> yeah, well, *if* we had a way of capturing a sound field completely over
> such a vast area, we would all be very happy indeed. it can be recreated
> using wave field synthesis or very high order ambisonics, but currently
> there is no way of capturing it, other than measuring a set of
> individual points in that sound field.
>
>
> hth,
>
> jörn