On Wed, 5 Feb 2014, Matt Garman wrote:
I have a collection of FLAC files, all ripped from my
CD collection.
What I would like to do is run an analysis across all the music to
determine how the bass/lower frequencies are generally mixed. For
example, how much content below (for example) 150 Hz is on the left
channel versus the right channel?
I'm not sure if "histogram" is the right word, but in my mind what I'd
like to see, per-channel, is something like this:
150--125 Hz: x samples
125--100 Hz: y samples
100--80 Hz: z samples
...
Then I can look at the two channels of a song, and if the histograms
are approximately the same, I can assume the bass was mixed equally to
both channels.
I am a programmer, and thought it would be easy to quickly hack
something up that would do this, but I have no experience with signal
processing, and as I started reading about this, I quickly got in over
my head! So I was hoping there might already exist a tool that has
this functionality.
[...]
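
(For the histogram part itself, before I wax philosophical: below is a
minimal sketch of the kind of per-channel low-band analysis you
describe. It assumes Python with numpy and the soundfile module, which
reads FLAC directly via libsndfile; the file name and band edges are
only placeholders. One big FFT per track is the crudest possible
estimate, and something like scipy.signal.welch or a proper filter
bank would be smoother, but the left/right comparison works the same
way either way.)

import numpy as np
import soundfile as sf   # reads FLAC directly via libsndfile

# Band edges in Hz, roughly matching the bins in the question.
EDGES = [0, 80, 100, 125, 150]

def low_band_energy(path, edges=EDGES):
    data, rate = sf.read(path)        # data shape: (frames, channels)
    if data.ndim == 1:
        data = data[:, None]          # treat mono as a single channel
    freqs = np.fft.rfftfreq(data.shape[0], d=1.0 / rate)
    out = []
    for ch in range(data.shape[1]):
        power = np.abs(np.fft.rfft(data[:, ch])) ** 2
        # total power in each band between consecutive edges
        out.append([power[(freqs >= lo) & (freqs < hi)].sum()
                    for lo, hi in zip(edges[:-1], edges[1:])])
    return out                        # one list of band energies per channel

for ch, bands in enumerate(low_band_energy("some_track.flac")):
    print("channel", ch, "band energies:", bands)
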
This is a subject which, as someone getting more and more immersed in
recording, I find very interesting:
Everyone pretty much seems to agree that a perfect loudspeaker is
physically impossible. We also have mastering engineers because even if
there were perfect loudspeakers, most people wouldn't use them; thus
emerges the black art of creating a mix that will sound good on "most of
what most people are using," whatever that means. If you did try to
perfect your speakers' EQ curve, waterfall plot, etc., you might not
even like the result, since people actually like coloration in their
system when they listen for enjoyment (instead of mixing).
Basically, this means that if one were to look at all of the listeners
and recording engineers as a whole, they are following each other's
guidance like a bunch of lost travelers driving in the fog, following
each other's car tail lights:
The speaker manufacturers don't have a standard. They're making
speakers that sound good with most of the music out there.
The recording engineers don't have a standard. They're mixing and
mastering to sound good on most of the speakers.
The listeners don't have a standard. They're EQ'ing their systems to
make the music (which the engineers mixed with reference to the speakers
people are using) sound good on their own speakers (which were made to
sound good with most of the music).
Do you notice a pattern here? :)
Despite all of this, some sort of consensus has emerged from the fog;
otherwise mastering would not even be possible, not even for old guys
who "know kung fu" in this black art. The speaker companies, listeners,
and engineers must have, however accidentally, reached a sort of
meta-standard for what things should sound like if you want them to
translate across audio reproduction systems. Maybe they weren't trying
to...maybe they don't even realize it...but if they hadn't, mastering
would be impossible.
What you're talking about, as far as batch analysis of commercially
produced music, surely would turn up some interesting information, if it
was done properly by someone who knows audio analysis way better than I
do. Context is everything with those sorts of measurements, so it would
have to be done by someone who knows what to measure (not me).
But I'm thinking if it was done right, you might get a peek at the
parameters of the golden "meta-standard" that has emerged from all this
tail light chasing that's going on, as far as what mixes are really
aiming for in real, quantifiable numbers, whether the engineers doing
them know it consciously or not.
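
For what it's worth, the batch side of that could start out as simple
as the sketch below: walk the collection, and for each stereo track
print the ratio of left to right energy below 150 Hz. (Again this
assumes Python with numpy and soundfile; the directory path is just a
placeholder, and picking measurements that actually mean something is
the part I'd leave to people who know audio analysis better than I do.)

import os
import numpy as np
import soundfile as sf

CUTOFF_HZ = 150.0                        # "low frequency" threshold
MUSIC_DIR = "/path/to/flac/collection"   # placeholder path

def lr_low_band_ratio(path, cutoff=CUTOFF_HZ):
    data, rate = sf.read(path)
    if data.ndim != 2 or data.shape[1] < 2:
        return None                      # skip mono files
    freqs = np.fft.rfftfreq(data.shape[0], d=1.0 / rate)
    mask = freqs < cutoff
    left = (np.abs(np.fft.rfft(data[:, 0]))[mask] ** 2).sum()
    right = (np.abs(np.fft.rfft(data[:, 1]))[mask] ** 2).sum()
    return left / max(right, 1e-12)      # >1 means more low end on the left

for root, _, files in os.walk(MUSIC_DIR):
    for name in sorted(files):
        if name.lower().endswith(".flac"):
            ratio = lr_low_band_ratio(os.path.join(root, name))
            if ratio is not None:
                print("%-60s L/R low-band ratio: %.2f" % (name, ratio))
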
--
+ Brent A. Busby + "We've all heard that a million monkeys
+ Sr. UNIX Systems Admin + banging on a million typewriters will
+ University of Chicago + eventually reproduce the entire works of
+ James Franck Institute + Shakespeare. Now, thanks to the Internet,
+ Materials Research Ctr + we know this is not true." -Robert Wilensky