Jörn, thanks for the feedback. I've just tried one of Fons's AMB plugins, and it repeatedly crashed Ardour, so i think i'd better fix that before going further.<br><br>As for mike bleed, it's not a full multi-mic pseudo mix, but more a 'hint' of signal from adjacent instruments. It's artificial, imho, to completely remove any resonant blend of adjacent instruments, and i've already had a modicum of success in terms of a 'more lifelike' response using this method. I'm also using orchestral samples here, not a live orchestra, so i'm keen to explore just how far we can get down the 'real' road before limitations prevail.<br>
As an aside to this, the VSL orchestral sample library team have already started a project not dissimilar to this, called MIR, so the concept is not just mine, or even theirs... :)<br><br><br>I knew i was being kinda hopeful when i asked about cutting an impulse into chunks, so i'm not surprised at all.<br>
<br>Now to get this AMB problem sorted out.<br>Alex.<br><br><br><div class="gmail_quote">On Sun, Jan 4, 2009 at 1:32 PM, Jörn Nettingsmeier <span dir="ltr"><<a href="mailto:nettings@folkwang-hochschule.de">nettings@folkwang-hochschule.de</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;"><div><div></div><div class="Wj3C7c">alex stone wrote:<br>
> Ok, this might be a bit of a curly question, and i don't know if it's<br>
> even possible, or valid.<br>
><br>
> The subject is placement, and pertains to orchestral recording. (My own<br>
> work composed within the box with linuxsampler, from midi in RG, and<br>
> recorded in Ardour.)<br>
><br>
> I'd like to place my instruments as close as possible to an orchestral<br>
> setup, in terms of recorded sound. That is, once i've recorded, i'd like<br>
> to use convolution and other tools to 'correctly' place instruments<br>
> within the overall soundscape.<br>
><br>
><br>
> example:<br>
><br>
> With the listener sitting 10 metres back from the stage, and facing the<br>
> conductor (central) my 1st violins are on the listener's left. Those<br>
> first violins occupy a portion of the overall soundscape from a point<br>
> approximately 2 metres to the left of the conductor, to an outside left<br>
> position, approximately 10 metres from the conductor, and with 8 desks<br>
> (2 players per desk) about 4 metres deep at the section's deepest<br>
> point, in the shape of a wedge, more or less. That's the pan width of<br>
> the section.<br>
><br>
> Now as i understand it, a metre represents approximately 3ms, so<br>
> calculating the leading edge of the section across the stage as 'zero',<br>
> the first violin players the furthest in depth from the front of the<br>
> stage, should, in theory, (and i know this is approximate only, as i sat<br>
> as a player in orchestras for some years, and understand the instinctive<br>
> timing compensation that goes on) play about 12ms later than those at<br>
> the front. Using the ears, and experimenting, this actually translates<br>
> as about 6ms, before the sound becomes unrealistic, using layered violin<br>
> samples, both small section and solo. (highly subjective i know, but i<br>
> only have my own experience as a player and composer to fall back on here.)<br>
<br>
</div></div>make sure that you are using different samples for each desk if you use<br>
individual delays, otherwise you will introduce comb filtering artefacts.<br>
but i doubt these delays will have any perceptible benefit.<br>
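(for reference, the '3ms per metre' rule of thumb above can be checked numerically -- a minimal sketch; the desk spacing, sample rate, and function names below are assumptions for illustration, not from this thread:)

```python
# per-desk acoustic delay from stage depth, using c = 343 m/s at ~20 C,
# which gives roughly 2.9 ms per metre -- close to the 3 ms rule of thumb.

SPEED_OF_SOUND = 343.0  # m/s
SAMPLE_RATE = 48000     # Hz, a typical Ardour/JACK rate (assumed)

def delay_ms(depth_m):
    """Acoustic delay in milliseconds for a source depth_m metres further away."""
    return depth_m / SPEED_OF_SOUND * 1000.0

def delay_samples(depth_m, rate=SAMPLE_RATE):
    """The same delay, rounded to whole samples for a delay-line plugin."""
    return round(depth_m / SPEED_OF_SOUND * rate)

# eight desks of first violins spread over ~4 m of depth:
for desk in range(8):
    depth = desk * (4.0 / 7)  # desk 0 at the front, desk 7 at 4 m
    print(f"desk {desk}: {delay_ms(depth):5.2f} ms, {delay_samples(depth)} samples")
```

(note the deepest desk comes out at about 11.7 ms, i.e. close to the 12 ms figure quoted above.)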
<div class="Ih2E3d"><br>
> A violin has its own unique characteristics in the distribution of sound<br>
> emanating from the instrument. The player sits facing the conductor, and<br>
> the bulk of the overall sound goes up, at an angle, at more or less<br>
> 30 degrees towards the ceiling to a 'point' equivalent to almost directly<br>
> over the listener's right shoulder. Naturally the listener 'hears' the<br>
> direct sound most prominently, (both with ears, and the 'visual<br>
> perception' he gains from listening with his eyes.) Secondly, the violin<br>
> also sounds, to a lesser degree, downwards, and in varying proportions,<br>
> in a reasonably 'spherical' sound creation model, with the possible<br>
> exception of the sound hitting the player's body, and those in his<br>
> immediate vicinity. (and other objects, like stands, sheet music, etc,<br>
> all playing a part too.)<br>
><br>
> I've experimented with this quite a bit, and the best result seems to<br>
> come from a somewhat inadequate, but acceptable, computational model<br>
> based on using, you guessed it, orchestrally experienced ears.<br>
><br>
> So i take one 'hall' impulse, and apply it to varying degrees, mixed<br>
> with as precise a pan model as possible (and i use multiple desks to<br>
> layer with, more or less, so there's a reasonably accurate depiction of a<br>
> pan placed section, instead of the usual pan sample model of either<br>
> shifting the section with a stereo pan, or the inadequate right channel<br>
> down, left channel up method.)<br>
<br>
</div>phew! ambitious!<br>
<div class="Ih2E3d"><br>
> to make this more complicated (not by intent, i assure you), i'm<br>
> attempting to add a degree of pseudo mike bleed, from my 1st violins,<br>
> into the cellos sitting deeper on the stage, and in reduced amounts to<br>
> the violas and second violins sitting on the other side of the digital<br>
> stage.<br>
><br>
> All of this is with the intent of getting as lifelike a sound as<br>
> possible from my digital orchestra.<br>
<br>
</div>why simulate mike bleed? i thought you were after creating a "true"<br>
orchestra sound, not one including all unwanted multi-miking<br>
artefacts... i'd rather concentrate on instruments and room.<br>
<div class="Ih2E3d"><br>
> The questions:<br>
><br>
> In terms of convolution, can i 'split' a convolution impulse with some<br>
> sort of software device, so as to emulate the varying degrees of spherical<br>
> sound from instruments as described above?<br>
<br>
</div>you could get a b-format response from every place in the orchestra<br>
(with all other musicians sitting there, for damping), and then convolve<br>
it with the violin (which would also have to be shoehorned to b-format,<br>
simulating the desired radiation pattern).<br>
but if you have the room and the orchestra, you might as well let them<br>
play your stuff ;)<br>
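(the mechanics of that suggestion are simple enough to sketch -- this is an illustration of the concept with assumed shapes and names, not an existing tool: convolving the mono instrument signal with each of the four b-format IR channels yields a b-format 'wet' signal carrying the room's spatial information:)

```python
import numpy as np

def convolve_bformat(mono, ir_wxyz):
    """Convolve a mono signal (shape (n,)) with a 4-channel b-format
    impulse response (shape (4, m), channels W, X, Y, Z).
    Returns a b-format wet signal of shape (4, n + m - 1)."""
    return np.stack([np.convolve(mono, ch) for ch in ir_wxyz])

# toy check: a unit impulse through an IR just reproduces the IR, zero-padded.
src = np.zeros(8)
src[0] = 1.0
ir = np.arange(4 * 16, dtype=float).reshape(4, 16)  # fake 4 x 16-tap IR
wet = convolve_bformat(src, ir)
```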
<div class="Ih2E3d"><br>
> So, one impulse (I use Jconv by default, as it does a great job, far<br>
> better than most gui-bloated offerings in the commercial world) that can<br>
> be, by way of sends and returns, 'split' or manipulated not only in<br>
> terms of length of impulse, but fed as 'panned' so as to put more<br>
> impulse 'up', less impulse 'down' and just a twitch of impulse 'forward'<br>
> of the player, with near enough to none on the sound going back into the<br>
> player.<br>
<br>
</div>i'm not sure i understand 100%, but you might want to look into<br>
ambisonics for that. ardour can do it just fine, all you need to do is<br>
bypass the panners and use fons' AMB plugins instead. as to target<br>
format, you could use UHJ stereo. if you desire 5.1, you might want to<br>
consider working in second order ambisonics.<br>
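(for the curious, what a first-order ambisonic panner does is compact enough to write down -- these are the standard b-format encoding equations with the conventional 1/sqrt(2) W weighting; a sketch of the concept only, not the actual code of the AMB plugins:)

```python
import math

def encode_bformat(sample, azimuth_deg, elevation_deg=0.0):
    """Encode a mono sample to first-order b-format (W, X, Y, Z).
    Azimuth is anticlockwise from straight ahead, elevation upward."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    w = sample / math.sqrt(2.0)               # omnidirectional component
    x = sample * math.cos(az) * math.cos(el)  # front-back
    y = sample * math.sin(az) * math.cos(el)  # left-right
    z = sample * math.sin(el)                 # up-down
    return w, x, y, z

# a source dead ahead puts everything in W and X, nothing in Y or Z:
print(encode_bformat(1.0, 0.0))
```

(the 'up'/'down'/'forward' weighting asked about above maps naturally onto these Z and X components.)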
<div class="Ih2E3d"><br>
> I've written this rather clumsily, but i hope some of you experts may<br>
> understand what i'm trying to achieve here.<br>
> Can the impulse be split down its middle, separating left from right,<br>
> aurally speaking, and if this is possible, can i split the impulse into<br>
> 'wedges' emulating that sphere i wrote of, more or less?<br>
<br>
</div>no, i don't think so. you will need a spatial impulse response. the<br>
simplest way to obtain one is to use a soundfield microphone (or a<br>
tetramic, for that matter).<br>
<div class="Ih2E3d"><br>
> if there's a way to do this, then i'm all ears, as my mike bleed<br>
> experiments suffer from a 'generic' impulse per section affecting<br>
> everything to the same degree, including the instruments bled in. I<br>
> should note here, this is not about gain, but a wedge of impulse, cut<br>
> out of the overall chunk, that represents a 'window' or pan section of<br>
> the whole.<br>
<br>
</div>i still don't understand why you're after "mike bleed".<br>
<div class="Ih2E3d"><br>
> I suppose an analogy for the chunk of impulse idea would be to stretch a<br>
> ribbon across a stage, and cut a metre out of the middle. That metre<br>
> would be the bit i'd use, as a portion of the whole, in an aural<br>
> soundscape, to manipulate, or place, instruments, to a finer degree, in<br>
> the attempt to create a more realistic '3d' effect for the listener.<br>
> That metre along with other cut out sections of the impulse soundscape<br>
> could help me introduce a more....'human' element to a layered<br>
> instrument section.<br>
<br>
</div>yeah, well, *if* we had a way of capturing a sound field completely over<br>
such a vast area, we would all be very happy indeed. it can be recreated<br>
using wave field synthesis or very high order ambisonics, but currently<br>
there is no way of capturing it, other than measuring a set of<br>
individual points in that sound field.<br>
<br>
<br>
hth,<br>
<font color="#888888"><br>
jörn<br>
<br>
<br>
<br>
</font></blockquote></div><br>