Mark,
Or just let us find interesting ways to use the technology within the
current framework...
Well, I suppose one could try --- and I would welcome that, but I may provide
only token assistance, depending on the project.
Will he listen on a stereo with his real room acoustics? Or will he listen
on headphones without? And how do I make one mix that works for both
environments?
Once upon a time, some studios made both types of recordings: One for
speakers, one for headphones. All who did this gave up. The speaker mixes
won out. Now that many more people listen with headphones of all types,
things may change once people realize that headphones can sound a lot
better with a different type of recording (or mixing). This appears to
be a well-kept secret.
Regarding room acoustics: The larger room dominates, so you're not really
in control of the situation anyway. Your customers are already listening to
different versions of your music, depending upon where they're listening from,
even if all of them are listening to speakers.
I myself try to produce something that sounds good in both environments, but
that may not be possible in many situations. I definitely make my decisions
in favor of the headphone version.
Should it use this technology or not when I don't know the listening
environment?
As I mentioned above, it already varies a lot even amongst speakers-only
environments, so I'm not sure it really matters. If it sounds good
both ways, that's what I would use. Many people recommend listening to
final mixes with various types of equipment; headphones should be included,
but they usually aren't. We already know it's not going to sound right,
and we've adjusted our behavior accordingly --- and our brains have
become used to the odd sound. (Mine has now been re-re-trained!)
I think this makes sense actually. My input would be that you need more
ears and more recorded sources to facilitate understanding where this works
well and where it doesn't. Electronica vs. symphonic, jazz trios vs. grunge
rock, complete sound tracks vs. single instruments, drums vs. guitars vs.
vocals vs. dot.dot.dot...
Yes, I agree completely. It will probably "work" for all of these, but
with different "parameters" (i.e. rooms). So the parameter mapping needs
to be done. I would welcome more "ears."
I presume that this technology does not lend itself to automation controls
within a real-time environment? When parameters are changed with the IR
stuff I'm using, there is a very large recalculation overhead before I can
start using it again. Is that true for yours too? (If not, then a plug-in
could have lots of uses!)
I'll send something to answer this a little later --- perhaps today.
Thanks for your comments, Mark. I appreciate the discussion with you
and others after working on this solo for quite a while.
------------------------
A few questions for you, since you're looking at IRs: Would you like to
generate your own? I'm assuming that you've got some that somebody
recorded... Are they stereophonic? Can you adjust them in any way? The
programs I've written would allow you to generate your own for different
distances in an ideal room of any size you wanted. You could set the T60s
to whatever you wished, and you could set your own frequency dependence for
them so that, for example, the low-frequency T60 is 50% longer than the
high-frequency one.
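To illustrate what I mean by a frequency-dependent T60, here is a rough
sketch (not my actual program, just illustrative Python with made-up
parameter names) that builds a synthetic IR from band-split, exponentially
decaying noise, with the low band's T60 set 50% longer than the high band's:

    # Rough sketch only: synthetic IR from band-split decaying noise.
    # Names, crossover, and lengths are assumptions for illustration;
    # this is not the program described above.
    import numpy as np
    from scipy.signal import butter, sosfilt

    def synth_ir(fs=48000, t60_high=1.0, low_factor=1.5,
                 crossover_hz=500.0, length_s=2.0, seed=0):
        """Decaying-noise IR whose low band rings low_factor times longer."""
        rng = np.random.default_rng(seed)
        n = int(length_s * fs)
        t = np.arange(n) / fs
        noise = rng.standard_normal(n)

        # Split the noise into a low and a high band at the crossover.
        lo = sosfilt(butter(4, crossover_hz, "low", fs=fs, output="sos"), noise)
        hi = sosfilt(butter(4, crossover_hz, "high", fs=fs, output="sos"), noise)

        # T60 means a 60 dB decay, i.e. an amplitude envelope of 10**(-3*t/T60).
        env_hi = 10.0 ** (-3.0 * t / t60_high)
        env_lo = 10.0 ** (-3.0 * t / (low_factor * t60_high))

        ir = lo * env_lo + hi * env_hi
        return ir / np.max(np.abs(ir))  # normalize the peak to 1

The 50% figure is just the example from above; you could pick any frequency
dependence you like.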
By the way, none of this requires 3-D at all. You could just use one
impulse response function for both L and R, balancing the volumes as you
normally would. It's nice being able to generate your own!
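In code, the non-3-D case is just a plain convolution of each channel with
the same IR, something like this (again only a sketch; scipy's fftconvolve
here stands in for whatever convolution you actually use, and the gains are
the usual L/R balance):

    # Sketch: one impulse response applied to both channels of a stereo mix.
    from scipy.signal import fftconvolve

    def wet_stereo(left, right, ir, gain_l=1.0, gain_r=1.0):
        """Convolve both channels with the same mono impulse response."""
        return gain_l * fftconvolve(left, ir), gain_r * fftconvolve(right, ir)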
Dave.