On Sat, Apr 04, 2009 at 11:07:11PM +0200, Julien Claassen wrote:
> I've got a few questions/assumptions I'd like to have your opinion
> about.
> Clarification: I don't use a GUI, because I can't. I have to go by ear.
> Equalisation: Assume I'm using Fons Adriaensen's LADSPA EQ (filters.so,
> Unique ID: 1970). I choose the band frequencies (as a starting point)
> around the dominant note of my piece. So if it's in A major/minor I
> might choose 55Hz, 220Hz, 440Hz and 3520Hz. Assume we have a simple
> piece in one key only and we don't do anything too weird. Good
> assumption? Or should I start out by listening for the main
> frequencies of instruments with harsh and loud attacks (like drums,
> strongly plucked instruments...)?
> Doing compression on filtered bands: I again go for bass, mid (rhythm
> instrument and perhaps main voice) and high (lead sounds and all the
> overtones). Again I choose based on the key of the piece. Good choice?
I don't think the key of the music should have any influence
on EQ, except maybe in very specific cases.
...
There are limits to what can be done in mastering, unless you
are willing to completely modify (usually mutilate) what has
been done in mixing.
The mastering step in commercial music production exists
for three reasons:
1. Some people believe that squeezing out the last half
dB provides an advantage in the context of airplay on
commercial radio, no matter what it otherwise does to the
quality of the sound. A mastering step separate from mixing
allows record label executives to have things their way at
that time without the artist looking over their shoulders.
2. To adapt a recording to the limits of the distribution
medium. In vinyl disk times this was essential. It still
has some significance today, but much less.
3. To ensure some uniformity in levels and sound between
tracks of an album.
For someone producing his own music that will be distributed
on digital media only, just (3) remains, maybe together with
some fast limiting (using a look-ahead limiter) to remove short
peaks and raise the average level without essentially modifying
the sound.
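
In case it helps to hear the idea in code rather than through a
GUI, here is a rough offline sketch of look-ahead limiting in
Python/NumPy. It only illustrates the principle (instant attack,
exponential release), it is not the algorithm of any actual
plugin, and the function name and parameter values are made up:

import numpy as np

def lookahead_limit(x, ceiling=0.95, lookahead=64, release=0.9995):
    # x is a mono float array in [-1, 1].
    # Per-sample gain needed to keep each sample under the ceiling.
    a = np.abs(x)
    need = np.where(a > ceiling, ceiling / np.maximum(a, 1e-12), 1.0)
    out = np.empty_like(x)
    gain = 1.0
    for i in range(len(x)):
        # Worst case within the lookahead window, so the gain is
        # already down before the peak arrives (a real limiter
        # would ramp the attack instead of stepping it).
        target = need[i:i + lookahead + 1].min()
        # Recover slowly towards unity, but never go above what
        # is needed right now.
        gain = min(1.0 - (1.0 - gain) * release, target)
        out[i] = x[i] * gain
    return out

# e.g. raise the average level ~9.5 dB with peaks held at the
# ceiling:  y = lookahead_limit(3.0 * x)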
Everything else should have been done during recording or
mixing. In particular, things like filtering out unwanted
LF (from mics) or HF (from electronics) noise are best done
during recording.
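
To give one concrete example of that kind of corrective
filtering (a sketch using SciPy; the 30 Hz cutoff and 4th order
below are arbitrary example values, not a recommendation for
every source):

from scipy.signal import butter, sosfilt

def remove_rumble(x, fs, fc=30.0, order=4):
    # High-pass away LF rubbish (stand thumps, traffic, mic
    # handling) below fc Hz. Second-order sections are
    # numerically safer than the plain (b, a) form.
    sos = butter(order, fc, btype='highpass', fs=fs, output='sos')
    return sosfilt(sos, x)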
With the exception of close-miked drums, most instruments
will need little EQ - if something extreme is required, this
almost always indicates a problem elsewhere. This does not
mean you should avoid EQ for some 'purity' reasons - that is
nonsense. It also assumes instruments are supposed to remain
'themselves' and will be mixed more or less at their natural
levels. If you are blending instruments (playing the same
parts) into new sounds, or building up a very dense mix
in which individual instruments more or less disappear into
the soup, then things are different and 'anything goes'.
The 'problem elsewhere' need not even be technical; it
could be musical, such as a part being played in a register
that doesn't fit in with the rest - changing the instrument
or transposing an octave up or down could solve it.
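
For completeness, a single parametric band of the kind used for
such gentle corrections fits in a few lines. This is the
well-known RBJ 'Audio EQ Cookbook' peaking biquad; the default
frequency, gain and Q are arbitrary examples:

import numpy as np
from scipy.signal import lfilter

def peaking_eq(x, fs, f0=250.0, gain_db=-3.0, q=1.0):
    # One peaking-EQ band; for corrections, a few dB at most.
    A = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2.0 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return lfilter(b / a[0], a / a[0], x)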
How much compression is required depends very much on
the type of music and sound - something supposed to
sound natural will require very little compression but
will probably benefit more from overall peak limiting. When
using compression, keep in mind *why* you use it: just
to get a more constant or manageable level, or to modify
the sound - a heavily compressed piano, for example, is
almost a different instrument.
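
And to make the 'why' concrete: the whole machinery of a
compressor is little more than this (a deliberately naive
sketch, not any plugin's actual code; threshold, ratio and time
constants are example values):

import numpy as np

def compress(x, fs, thresh_db=-20.0, ratio=4.0,
             attack_ms=5.0, release_ms=100.0):
    atk = np.exp(-1.0 / (fs * attack_ms * 1e-3))
    rel = np.exp(-1.0 / (fs * release_ms * 1e-3))
    env = 0.0
    out = np.empty_like(x)
    for i, s in enumerate(x):
        level = abs(s)
        # Envelope follower: fast attack, slow release.
        c = atk if level > env else rel
        env = c * env + (1.0 - c) * level
        level_db = 20.0 * np.log10(max(env, 1e-9))
        # Above the threshold, take away (1 - 1/ratio) of the
        # overshoot.
        over = max(level_db - thresh_db, 0.0)
        out[i] = s * 10.0 ** (-over * (1.0 - 1.0 / ratio) / 20.0)
    return out

Set the ratio high and the threshold low and you will hear
exactly the 'almost a different instrument' effect mentioned
above.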
Ciao,
--
FA
I always say it: Italy is too narrow and long.