[linux-audio-user] Audio 3-D Demo --- Any Interest in Software?
davidrclark at earthlink.net
Tue Jan 6 12:37:49 EST 2004
Thanks for your great mailing. You've raised some excellent points.
>Will IR tell me that the average/desirable Sabin value from 20Hz to
4KHz for a .3 reverberation time in a 17X14X7 room is 148.6?
>A fundamental problem I see with IR is that the acoustical values of
good rooms are being applied to signals that are created in very poor
acoustical environments where frequency responses in the audible
range are not flat. In this scenario, only half the problem is being
addressed. This might not be a problem with IR as much as it is a
misuse or underutilization issue.
You are absolutely right on both points. This is why I emphasized
earlier that the application of the impulse response function is the
"trivial" part of the problem. The larger problem is to design either
a real room or a virtual room for the sound. It's also why I asked
about instrument generation. Physical modelling of both the
instruments and the room is one way to provide the non-acoustical
sources Steve mentioned in an idealized environment. [*]
It's more of a misuse and underutilization issue, IMO, as I'll discuss
later in this mailing.
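To make that "trivial" last step concrete: applying an impulse response to a dry signal is just a convolution. A minimal Python/NumPy sketch (the signals here are toy values, not measurements from any real room):

```python
import numpy as np

def apply_ir(dry, ir):
    """Apply a room impulse response to a dry signal by convolution."""
    return np.convolve(dry, ir)

# Toy example: a unit impulse through a two-tap "room"
# (direct sound plus one echo at half amplitude, 2 samples later)
dry = np.array([1.0, 0.0, 0.0, 0.0])
ir  = np.array([1.0, 0.0, 0.5])
wet = apply_ir(dry, ir)
# For an impulse input, the wet signal reproduces the IR itself
```

In practice the IR would be thousands of samples long and the convolution done with FFTs, but the operation is the same.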
>My point is that great room modeling, reverberation and echo tools
are "the cart before the horse."
In some cases, that's correct; in others it's not correct. Many of us
have sampler synths or sample CD's or purely synthesized tones (and I
have my instrument generator). Many of these samples are fine for
putting into a room modelling program.
But --- I certainly do agree wholeheartedly that discussing IR's is
premature if one isn't thinking along the lines you've mentioned in
your mailing. IR's are way down the line, practically the last step
as they are actually applied. But the considerations that lead to
them are present throughout the recording and mixing process.
>Almost everyone on this list has a recording studio in an untuned
room. [Snip] So, what good does it do to put that poorly recorded
source into a great room?
>I'd suggest that the first thing to deal with is correcting these bad
rooms with Virtual Rooms. You know, crap in equals crap out.
I'm not sure what you mean by "Virtual Room" in this context.  My
program can be used to create virtual rooms. The way I handle bad
recordings (after trying not to do that) is to superimpose a frequency
spectrum on top of the frequency response of the virtual room that I
created in my 3-D program. In other words, as long as it's a linear
process (as Steve said), you can combine everything into the same
frequency response, then obtain a single impulse response function
that both corrects any tuning type of problem (as well as whatever
else you want to throw in) and models a 3-D environment complete with
a frequency-dependent decay. It's all part of the same process of
getting the total environment set up correctly. You then look at test
signals recorded in your non-optimum setting, passed through the
wonderful IR you've created, and everything sounds --- uh, sort of OK.
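The "combine everything into the same frequency response" point can be sketched directly: because the whole chain is linear, a corrective EQ stage and a virtual-room stage collapse into a single impulse response, and applying that one IR is the same as applying the stages in cascade. A sketch with hypothetical toy IRs:

```python
import numpy as np

# Two toy impulse responses: a corrective "tuning" filter and a virtual room
eq_ir   = np.array([0.8, 0.2])
room_ir = np.array([1.0, 0.0, 0.5, 0.25])

# Linearity: the cascade collapses into one combined IR
combined_ir = np.convolve(eq_ir, room_ir)

# Any test signal passed through both stages, or through the one IR,
# gives the same result
x = np.random.default_rng(0).standard_normal(16)
two_pass = np.convolve(np.convolve(x, eq_ir), room_ir)
one_pass = np.convolve(x, combined_ir)
# one_pass matches two_pass to within roundoff
```

Equivalently, the frequency responses simply multiply, so any number of corrections can be folded in before taking the single inverse transform.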
Another thing one can do is to model the crappy room backwards in
time. You would ask yourself, "What crappy signal do I need to create
at the source to produce a good signal at location (x,y,z)?" The
impulse response function for that case can be generated, and it can
be used to "correct" problems. Now not all problems can be corrected,
but certainly incorrect frequency responses can be. This is what one
is actually doing with a multi-band equalizer.
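That backwards-in-time correction amounts to inverse filtering: invert the room's frequency response, with a small regularization term so that bands the room nearly kills (the problems that genuinely cannot be corrected) don't blow up. A hedged sketch, assuming a toy minimum-phase room; the function name and constants are illustrative:

```python
import numpy as np

def inverse_filter(ir, n_fft=256, eps=1e-3):
    """Regularized frequency-domain inverse of an impulse response.

    eps keeps the inverse bounded at frequencies the room
    strongly attenuates, at the cost of not fully correcting them.
    """
    H = np.fft.rfft(ir, n_fft)
    H_inv = np.conj(H) / (np.abs(H) ** 2 + eps)
    return np.fft.irfft(H_inv, n_fft)

# Toy "crappy room" IR (minimum-phase, so a stable inverse exists)
room_ir = np.array([1.0, 0.3, 0.1])
corr_ir = inverse_filter(room_ir)

# Pre-filtering with corr_ir, then passing through the room,
# approximately restores an impulse
restored = np.convolve(corr_ir, room_ir)
```

A multi-band equalizer is doing a coarse, magnitude-only version of this same inversion.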
Another consideration that many audio engineers overlook is that most
people listen to most music in crappy listening environments --- like
MP3's with cheap headphones or in a car with the A/C going. Most of
these crappy listening environments completely swamp out everything
that the audio engineers did to make the sound as good as it can be.
But there is a possible solution to that: To some extent, a known
crappy environment such as the typical American living room can be
somewhat corrected by the backwards-in-time (retrodiction) room
modelling. Admittedly, there are a number of problems that cannot
be corrected, but:
Room modelling has a number of applications, not just determining the
frequency response of an ideal concert hall.
>The question I can't answer is, can Virtual Rooms sound as good as
physical rooms that are tuned?
That's a multimillion dollar question, isn't it? I'd like to find
out, which is the real reason why I created these programs in the
first place. So far, it sounds even better than I had hoped for.
[*] The ability to model both instruments and the environment leads to
the "rethinking" of the entire process that I referred to in an
earlier mailing. If you can model the instruments, then you don't
necessarily need to play them in the time sequence of a recording.
"Recording" of the audio doesn't exist any more. It's more of an
assembly or compilation process. However, artists need not worry
because one still needs to perform the score which does need to be
recorded somehow for a realistic performance.