
The ART of

MIXING &
MASTERING

KOSMAS LAPATAS

The Art of Mixing & Mastering

Author: © Kosmas Lapatas, 2014

Publishing House: Omnibus Press

Omnibus Press
14/15 Berners Street
London W1T 3LJ

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without the prior written permission of the author.

THE AUTHOR

Kosmas Lapatas studied Classical and Modern Piano, Harmony, Counterpoint, Fugue (Athens Conservatory), Composition (MIT), Musicology (ACG), Music Technology (GIT), Audio Engineering (RID) and Music Therapy (SHA). He has performed as a soloist and accompanist with various orchestras and at various music halls, and he has taught at prestigious colleges, conservatories, private schools and institutions.

He is a member of the International Association of Piano Teachers, the Greek Association of Primary Education Teachers, the Greek Society for Music Education, the Music Producers Guild, and the International Alliance of Composers. His fellowships include institutions such as the Massachusetts Institute of Technology, the Georgia Institute of Technology, Emory University, the State University of New York, the Institute of Education and the Recording Institute of Detroit.

ACKNOWLEDGEMENTS

I would like to thank ALL the people that have put their faith in me and my work all these years: my students, my colleagues and my everlasting friend, soul mate and supporter Gianna Tzanoukaki.

FOREWORD

This book is divided into FOUR sections: MIXING EXPLAINED, MASTERING ESSENTIALS, MASTERING EXPLAINED and MY MASTERING.

MIXING
EXPLAINED

The purpose of mixing is to take all the recorded tracks from a session and put them together so that the listener hears exactly what you want them to hear. It is an extremely complicated and time-consuming task which, as you might expect, doesn't just involve setting volumes and panoramas. There are three fundamental areas to focus on: volume & pan, spectral coverage and spatial positioning. While the principles of volume & pan are well known, the latter two may be a little less obvious.

So let's cover the basics. Looking at audio waveforms shows you the level and little more. Similarly, when looking at a spectrogram, the levels of bass may be apparent but little else. Our brain, however, is exceptional at decoding these audio signals. Not only can it analyze levels, it can distinguish between lots of them, and most of all it is able to decode the spectral domain with quite astounding results. It can separate individual frequencies, single out instruments including their harmonics, compare phase differences in order to detect an instrument's location, and even recognize echoes to further improve the sense of location and space. The only downside of this, however, is that the brain expects lots of information within the audio signal, and as a mixing engineer you have to provide it.

SPECTRAL COVERAGE

If you were asked to put hundreds of small colorful marbles on the floor and were then instructed to find the one with a little star, it would probably take some time. What if you were tasked with finding the only red marble, when each had just one color? It is similar with audio: when you listen to 10 instruments all covering a range of 100 Hz to 1 kHz, you will have a hard time detecting which is which. But when you divide the spectrum into several intervals so that each instrument covers only a part of it, the brain will do most of the hard work for you. We call this spectral coverage.

This leads us to an obvious conclusion: if we have multiple instruments occupying the same part of the spectrum, there will probably be a need to sacrifice something in order to clean up the mix and make it easier to listen to. Alternatively, we could purposely use this effect to mask, for example, a chord structure that we don't want other musicians to work out how to play. Simply playing another instrument at the same time will generate several harmonics, filling the spectrum and making analysis difficult.

SPATIAL POSITIONING

Now imagine you are on a crowded bridge with several street musicians. If the musicians are far enough apart from each other, and you are fairly central, you will be able to distinguish what each is playing and where they are, even blindfolded. If they are all in the same place, however, you will probably still be able to determine where they are, but not who is who. This is because they are generating similar echoes, and the so-called direct signal, the wave that comes directly from the instrument to your ears without any reflections, will also be similar. Let's look at it physically.

Direct wave

When any of these musicians generates a sound, audio waves travel in all directions from them. Our brains are able to detect even tiny time differences between sound waves, so distance matters a lot. You first receive the direct wave. If the musician is on your left, the signal will be intercepted by the left ear first, with the right ear receiving the wave after a little delay, or possibly not at all. If the musician is in front of you, both ears will get the signal, but one may receive it a few microseconds earlier than the other, which makes the brain say, "OK, it's a little to the right".

Echoes

After receiving the direct wave, your ears start to pick up echoes. The sound has spread in all directions and been reflected, so you may intercept echoes from many things around you, and also echoes of the echoes, etc. Each of these reflections causes the sound to lose energy, until it eventually fades out completely. The time this takes depends largely on where you are. There will be few echoes in the desert, for instance, as sand is not exactly an ideal reflective surface. In a church, however, the stone reflects sound very well and there are many walls, so each wave generates multiple echoes until you have the full ambience associated with churches.

Example

Now let's say you are standing next to a wall on your right, and a musician is playing a few meters in front of you and to the left. Your left ear will receive the direct waves and then the echoes. There is nothing on your left side, so your left ear will not gather many reflections. Your right ear will start catching many echoes from all the waves thrown against the wall, reflected from the floor and ceiling, etc. Your brain can derive a lot from this information. The direct wave arrives at the left ear first, so the sound source is on the left. The many echoes on your right indicate that there is some big obstacle on your right side, made from a material that reflects the sound a lot, possibly stone. Some reflections sound 'metallic', which may indicate ironwork; some are dull (with fewer highs and less bass), which suggests perhaps blankets or curtains. If the echoes die away quickly, then there are probably no other major objects around you. Your left ear picks up some reflections from the floor and ceiling, which tells the brain how tall the room is. From the delay between the direct wave and the first echo it can even work out how far away the wall on your right is. And so on.

The brain is an amazing organ!

The brain is able to analyze things that we can hardly simulate, so when mixing, we should try to keep things simple to acquire as clean a sound as possible. Our aim is to move all sound sources to different places. If we don't, the mix will sound crowded and the listener will feel like all of the instruments are located in the same place. If the mix is over-complicated, the listener may lose the sense of space, as the echoes will just not correspond to each other (as if the wall is on your right in one track, but on your left in another).

DYNAMICS & TRACK PREPROCESSING

Very often you need to gate and compress tracks. Gating ensures there won't be any residual noise in the silent parts of the track. Compression makes the track level uniform, which is necessary when adjusting volumes. Just imagine what would happen if you mixed the chorus and then went to the verse, only to discover that the levels are completely different. Compressors can also heavily affect the final sound of a track, so it is good practice to use them at the start of the mixing process. See the compression tutorial for more information on using compressors. Next, it may be necessary to equalize some tracks. Drums, for example, rarely sound good without a fairly high amount of equalization. There will be more equalization added later, but at this stage we just need to get an idea of how the tracks should sound. After this step you should have all the tracks prepared. They should sound good and their levels should be more or less stable (which doesn't mean that the whole song should have the same loudness!). Modern hosts provide reasonably advanced routing capabilities, so if you have several vocals or doubled background guitars, for example, you may decide to group these tracks together. Creating a group track allows you to adjust the advanced parameters of multiple tracks at once and can help with workflow (mixing will take only 4 hours as opposed to 40!).
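
As a hedged illustration of the gating step, here is a minimal sketch in Python/NumPy; the threshold and release values are made up for the example, not studio defaults:

import numpy as np

def noise_gate(x, sr, threshold_db=-50.0, release_ms=50.0):
    """Zero the signal while its smoothed envelope stays below threshold."""
    threshold = 10 ** (threshold_db / 20)
    alpha = np.exp(-1.0 / (sr * release_ms / 1000.0))
    env, out = 0.0, np.empty_like(x)
    for i, s in enumerate(x):
        env = max(abs(s), alpha * env)   # instant attack, slow release
        out[i] = s if env >= threshold else 0.0
    return out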

PRIORITIES

Remember, our goal is to tell the listener what to hear and where it is located. Now that we have all the tracks prepared, we need to decide which of them we wish to highlight and which should be placed in the background. Try listening passively to some CDs and see if you can get a 'feel' for the priority order that was used. It may surprise you to discover how quickly your brain can pick this up. Most commercial recordings look like this:

1) Lead vocals & solo instruments

2) Drums & percussion

3) Bass

4) Guitars, pianos & background instruments

5) Background percussion (shakers, conga etc.)

6) Pads and ambience

PANORAMA & SPATIAL POSITIONING

It may seem odd to start with this, especially as you will probably need to tweak it again later. But in my opinion it is beneficial to do it at this stage, because it tends to significantly change the sound character and levels of the mix. First you should decide upon a particular position and space for each of the instruments according to your priority list. Generally, the more important the track is, the closer it should seem. It may help to try to visualize the instruments as if they were on stage, albeit with the drummer and bassist standing in front of the guitarists! Use the fact that all the tracks have been recorded separately to your advantage. You have conditions that real-time mixing engineers can only dream of. How about making your listeners feel like they are in among the musicians? Or even moving the singer into the listener's head! There are many approaches to this. You may wish to solo each track and set up the panoramas, delays and reverbs, or you could play all of them, adjust the levels temporarily and then process them. Either way, you will probably return to this point later when tweaking the whole mix.
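
For the panorama itself, most hosts implement something like the constant-power pan law; the sketch below (my own illustration, assuming equal-power panning, which is common but not universal) shows the idea:

import numpy as np

def constant_power_pan(mono, pan):
    """pan in [-1, 1]: -1 = hard left, 0 = center, +1 = hard right.
    cos/sin gains keep the total power constant across the arc."""
    theta = (pan + 1.0) * np.pi / 4.0      # map [-1, 1] to [0, pi/2]
    return np.cos(theta) * mono, np.sin(theta) * mono   # (left, right)

left, right = constant_power_pan(np.ones(4), pan=0.5)   # halfway right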

LEAD VOCALS AND SOLO INSTRUMENTS

Lead vocals and solos are almost always panned to the center and don't have much ambience, as it tends to place them 'somewhere in the room' rather than 'close to you'. Maybe it is because people like the singers, I don't know :). If you want to add some reverb, try using a middle or large room setting with a depth of around 10% and no or minimal pre-delay. By keeping the pre-delay minimal we ensure the vocal stays 'close'. This will give it some space.

DRUMS & PERCUSSION

Drums and percussion are usually the most difficult to mix, because they contain such a variety of different sounds, and recording them properly is an art in itself. To ensure the drums won't sound like a huge noisy ball, most engineers apply quite drastic pans to each track. This ensures that each of the tracks will be easy to distinguish in the stereo field, albeit at the expense of being artificially located in space. The drums should have an overall reverb applied, giving them more space and bringing them right behind the singer. I'd recommend a larger room with a depth of around 20% and just a little pre-delay. Drums also usually contain some bass frequencies (bass drum, djembe etc.). In general it is not a good idea to use reverb on low frequencies, so it's usual to place a high-pass filter on the reverb or to equalize the reverberation signal.

BASS

Bass can be very challenging for real-time mixing, especially in areas of poor acoustics, because low-frequency echoes are very hard to manage. This restricts us to placing it in the center with very little or no reverb at all. As a result, the bass won't fulfill our spatial positioning requirements, because it won't be placed anywhere (except for the ambience in the recorded track itself). However, this is preferable to crowding the bass spectrum. As a general rule, it's usually a mistake to put reverb on low-frequency tracks. If you really want some ambience, use an overall reverb on the master track.

GUITARS, PIANOS & BACKGROUND INSTRUMENTS

According to our priority list these are classed as background instruments, so they must appear as such. Therefore don't be afraid to apply quite a lot of pan and reverberation. Our first task is to decide which track should go where. If you have two guitars, make your decision based on the drums you already have in the mix. For example, if the first guitar is more high-pitched, place it on the opposite side to the hi-hat, which is also high-pitched. Another example is when you have a guitar and a piano. Since the guitar is usually more rhythmical than the piano, you may want to place it on the less rhythmical side, again opposite the hi-hat. Think about placement and how it affects other instruments, but most of all, experiment...

BACKGROUND PERCUSSION

These instruments usually support the rhythm and fill the space, but they typically don't carry an important musical meaning. It is common to pan them slightly and give them some distant ambience (to ensure they seem far away).

PADS AND AMBIENCE

Many music genres contain these sounds, which you may not even notice at first. Yet without them, the music would sound very different. In most cases these are already very 'stereophonic' and ambient, so you may decide to leave them alone. But if they are too upfront, you could send them through a large room or hall. It's usual to leave them panned close to the center, because otherwise they would lose their natural ambience, which is after all the reason we are using them. Sometimes it may even be useful to collapse their ambience a little to push them further away.

VOLUMES

Although many presume that this is the hardest step, it is technically the simplest one. Your aim is to support the order of instruments defined earlier. The idea is that when you play the mix and let your brain analyze the recording, you should notice the instruments one by one in that order. So if the first thing you notice is the guitar, something is wrong, because there are other tracks you should hear first, such as the vocals or drums. It is always good practice to jump between different parts of the song, so that when your brain adjusts to the guitar being up front during the solo, it can regain its objectivity during the verses, where the guitar should be strictly in the background, not interfering with the main vocal. It is also good to take breaks during mixing and to repeatedly check other songs of the same genre. And finally, try to switch back to the spatial positioning step often to help maintain the order.

SPECTRAL COVERAGE

In many cases you may find that you are not able to create a really clean mix without this step, simply because multiple tracks are colliding in the spectral domain. In most cases you should be able to hear the problem. Common cases are bass vs. bass drum, multiple guitars, guitar vs. vocal etc.

BASS VS. BASS DRUM

A bass drum almost always resonates somewhere around 80-120 Hz. That's low enough to provide the typical bass hit. Note that the drum usually generates lots of sub-bass frequencies as well (around 50-80 Hz). The bass guitar is placed in a similar location, usually between 80-300 Hz. So what happens at 80-120 Hz? Firstly, the song arrangement may be good enough that this collision actually doesn't matter. The bass drum may be duplicating the bass guitar rhythm and also supporting it. In most cases, however, it's not that simple. The idea is that the bass drum needs the very low frequencies that make the low 'pulse' and the high frequencies that give it some punch (it is very hard to create a bass sound with a very short attack, because the brain has a resolution of about 10 ms, which corresponds to 100 Hz, so we could theoretically distinguish each single sine wave in this spectrum! Could you create a 100 Hz sine with an attack of 1 ms?). The bass guitar, on the other hand, should not sound too low, as it would only make a big dull mess on subwoofers (there are exceptions, such as drum'n'bass, though). There is also another brain phenomenon. As we know, each instrument generates harmonics (multiples of the fundamental frequency, i.e. the tone). When you remove the fundamental frequency, the brain may still be able to 'feel' it just by analyzing the harmonics. So when our bass drum needs 100 Hz, and the bass guitar sounds at 100 Hz as well, you may radically remove 100 Hz from the bass guitar, because the brain should still 'recover' it from the other harmonic parts of the bass guitar track. We can use a high-pass or low-shelf filter on the bass guitar track and slide it around until both instruments are clear enough. You may also use a peak filter to diminish frequencies above, say, 100 Hz on the bass drum track, as they are usually not needed. It's all about compromise. If the two tracks collide, you will have to take something out, whether you like it or not. Note that each single track may then sound thin or empty, but in the mix it will fit well, and that is what's important!
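
One hedged way to sketch this carve-out in code (SciPy is my choice of tool here, not the book's; the corner frequency, cut depth and Q are illustrative starting points to be tuned by ear):

import numpy as np
from scipy.signal import butter, sosfilt, tf2sos

sr = 96000  # assumed project sampling rate

# High-pass the bass guitar just above the kick's ~100 Hz resonance.
bass_hp = butter(2, 110, btype="highpass", fs=sr, output="sos")

def rbj_peaking_sos(f0, gain_db, q, fs):
    """Standard RBJ 'Audio EQ Cookbook' peaking filter (cut if gain_db < 0)."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return tf2sos(b / a[0], a / a[0])

# Trim the kick a little above its low pulse, per the advice above.
kick_dip = rbj_peaking_sos(180, -4.0, 1.2, sr)

def clear_low_end(bass_guitar, bass_drum):
    return sosfilt(bass_hp, bass_guitar), sosfilt(kick_dip, bass_drum)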

MULTIPLE GUITARS

Guitarists are well known for being exhibitionists! They often create sonic chaos just to show they can play, which unfortunately means one thing: if multiple guitars collide, the problem often lies with the guitarist.

For example, you may have two rhythm guitars. If the arrangement is good, then both guitars can either play together or fill the spaces between each other. If half of the notes are together and half are not, it often sounds cluttered. The guitars themselves may sound good and full, but it may be almost impossible to add anything else. You can try panning them a little in opposite directions, but although this may give a little more space, it will not remove the rhythmical jumble. In the case of distorted guitars, used in some harder music, these are typically similar in rhythm, so the only problem is that they occupy a similar spectrum, which then gets too crowded. Usually one of the guitars is playing higher notes and the other lower ones, so the solution is as before: remove lower frequencies from the high-pitched guitar and higher frequencies from the low-pitched guitar. Low/high-pass filters may be too harsh in this case, so you may want to stick with shelf filters. The art is to find the best cut-off frequencies and Q's, so that both still sound good while each retains a distinct tone. Finally, you should also note that there are cases where the 'mess' is actually desirable, such as in a very heavy part of a metal song, where the low end should "kill" the listener. And what's simpler than combining multiple guitars, and even the bass, to achieve this?

VOCALS VS. GUITAR

Vocals have the highest priority, which means that if you have to sacrifice something, let it be the guitar. On the other hand, it's quite common to remove everything below around 200 Hz from the vocal, and that may help on its own. If you suspect that the vocals and the guitar are in a similar spectrum, you can easily find out using an analyzer. To solve the problem, you may want to use a peak filter with negative gain on the guitar track and place it right at the highest peak in the vocal spectrum. That will diminish the most problematic frequencies. If you are somewhat experienced, you could also try using a light side-chain compressor on the guitar track, sending the vocal track into the side-chain and filtering it so that only the problematic frequencies are measured by the compressor. The idea is to lower the volume of the guitar when problematic peaks occur in the vocal. Or you may use a multiband compressor; not that it would be simple, but when it needs to be perfect, you should try every tool you have.
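
A rough sketch of that side-chain idea in Python (my own simplification of the technique; the band, threshold, ratio and release values are hypothetical starting points, not the book's):

import numpy as np
from scipy.signal import butter, sosfilt

def sidechain_duck(guitar, vocal, sr, band=(1000, 3000),
                   threshold_db=-30.0, ratio=3.0, release_ms=80.0):
    """Lower the guitar whenever the band-filtered vocal exceeds threshold."""
    sos = butter(2, band, btype="bandpass", fs=sr, output="sos")
    key = sosfilt(sos, vocal)                     # only the colliding band
    thr = 10 ** (threshold_db / 20)
    rel = np.exp(-1.0 / (sr * release_ms / 1000.0))
    env, out = 0.0, np.empty_like(guitar)
    for i, g in enumerate(guitar):
        env = max(abs(key[i]), rel * env)         # fast attack, slow release
        if env > thr:
            over_db = 20 * np.log10(env / thr)
            gain = 10 ** (-over_db * (1 - 1 / ratio) / 20)
        else:
            gain = 1.0
        out[i] = g * gain
    return out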

GET BACK TO PREVIOUS STEPS

Mixing is rarely simple enough to achieve in just a few steps, so when you reach this point, it is probable that something is still not right. If that's the case, just check the spatial positioning, volumes etc. again. If it seems OK, wait 24 hours and check again. If it still seems OK, well, you are finished. Render it and send it to the mastering engineer. You may want to create a few different versions, maybe one with the vocals a little higher, so that if there is a problem, the mastering engineer can solve it themselves. No master compression, equalization (possibly just a little) or limiting should be used! All of this is up to the mastering engineer. You may want to do some light compression & equalization during mixing, to give you some idea of how it will sound, but you should still export it without these processors. Render it at the highest possible quality (your project's sampling rate, usually 96 kHz, 32-bit float).

MASTERING
ESSENTIALS

Mastering requires an entirely different "head" than mixing. Mastering is the art of COMPROMISE; knowing what's possible and impossible, and making decisions about what's most important in the music. Before mastering, listen carefully to the performance and the message of the music. In many music genres, the vocal message is the most important. In other styles it's the rhythm, in some it's intentional distortion, and so on. Always start by learning the EMOTION and the message of the client's music. Always relate your decisions to the intended MESSAGE of the music. There is no "one-size-fits-all" setting, and each song should be approached from scratch.

BRIEF HISTORY
Originally, MASTERING was simply the process of transferring a finished mix to the intended listening medium, which at one time was 78 rpm vinyl. What is now commonly referred to as "mastering" is actually "pre-mastering", i.e. preparing the audio for its transfer to a finished "master". That transfer is now typically performed at duplication plants.

GOOD MASTERING
Well-mastered records sound BETTER: bigger, clearer, wider, more coherent and louder. They have TRACK SPACING that makes artistic sense, highlighting the contrast and flow of the music. They are free from POPS and CLICKS, as well as any NOISE that detracts from the music. A GOOD mastering engineer is like a good DOCTOR, and the first rule of his oath is: do NO harm. If something makes the mix sound worse, it should not be done.

BAD MASTERING
Badly mastered records actually sound worse than the original mixes. They contain destroyed balances, mangled high or low frequencies, and horrible distortion through the over-use of brick-wall limiters or clipping. You can probably tell a BAD master when it sounds nothing like the mix you sent in. MASTERING SHOULD ENHANCE THE MIX, NOT CHANGE THE MIX. Any processing that is done to the mix should be in the SPIRIT of what already exists.

THE PROCESS
The typical mastering process begins with simply listening to all the tracks. Each production, and each product, must be treated as an INDIVIDUAL piece of sound. Mastering can improve a terrible mix to an extent, and it can certainly take a mix from "good" to "great", but it cannot make a TERRIBLE mix GREAT.

EQUIPMENT
The most important piece of equipment in any mastering studio is the engineer's pair of EARS, not the gear. Of course, an ACCURATE acoustic environment and a decent MONITORING chain (monitors and converters) are ESSENTIAL; otherwise the mastering engineer will have no idea what he is actually hearing and will have no reference to base his decisions on.

MONITORING
With few exceptions, you won't find near-field monitors in a professional mastering room. Near-field monitoring was devised to overcome the interference of poor control-room acoustics, but it's far from perfect. It's almost impossible to position near-field monitors without breaking a fundamental acoustic rule: the length of the reflected signal path to the ears should be at least 2 to 3 times the direct signal path. Near-field monitoring also exaggerates the amount of REVERBERATION and left-right separation in a recording. A good mastering room should be at least 20 feet long, preferably 30 feet, and the monitors, if not in soffits, should be anchored to the floor and placed several feet from walls and corners.

FORMATS
The mastering engineer needs the highest-resolution version of the final mix that you have for each song. 128 kHz/32-bit would be ideal, but let's face it, it's RARE. Many mastering engineers will tell you to use NO processing on the stereo bus at all. If you are that confident in yourself, go ahead. But DON'T use brick-wall limiters in any case.

LEVELS
Part of mastering is bringing the audio up to an optimum level. Be aware, though: a SUPER LOUD master with no dynamic range is GARBAGE. If you want a super loud master, you will have to live with a certain amount of distortion and pumping.

LOUDNESS
Sequencing an album requires adjusting the levels of each tune. We've seen that the EAR judges loudness by the AVERAGE, not the peak levels of the music. Normalization is the process of finding the highest peak and raising the gain until it reaches 0 dBFS. Do NOT use normalization to adjust the relative loudness of tunes, or you will end up with nonsense.
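
In code, peak normalization as defined above is just a peak search and a gain (a minimal sketch of my own; note that it says nothing about perceived loudness, which tracks the average level, and that is exactly why it fails for sequencing):

import numpy as np

def peak_normalize(x, ceiling_dbfs=0.0):
    """Scale so the highest absolute sample hits the ceiling (0 dBFS here)."""
    peak = np.max(np.abs(x))
    if peak == 0:
        return x
    return x * (10 ** (ceiling_dbfs / 20.0) / peak)

# Two tunes peak-normalized to 0 dBFS can still differ wildly in
# average level, and the EAR judges loudness by the average.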

WHY NOT LOUD
There is a scientific reason for not monitoring too loudly: the louder you monitor, the more easily you are fooled into thinking the music has more bass energy. It is therefore extremely important to monitor at approximately the same level as the ultimate listener to your recording.

HOW LOUD?
In a world where music is often heard through headphones, iPods, iPhones and car stereos, the dynamic range has to be limited somewhat so that the quiet sections can be heard over the background noise. It is possible to get a nice, loud-sounding master without completely destroying the song, however.

METERING
The EAR is the final arbiter of quality, but METERS can help. The VU meter shows whether average levels are too hot. While mastering, watch the average meter and glance at the peak meter. A popular meter for detecting audible peaks is the quasi-peak meter, or analog PPM, defined by an EBU standard.

DYNAMIC PROCESSING
Wide-dynamic-range material, such as classical music, folk music and some jazz, is often mastered WITHOUT any dynamics processing at all. Most mastering engineers have discovered that you can often hit 0 dBFS on a digital PPM without hearing any distortion. Both compression and limiting change the peak-to-average ratio of the music, and both tools reduce dynamic range. While reducing dynamic range, they can "beef up" or add "punch" to low-level and mid-level passages, making for a stronger musical message.

MULTIBAND COMPRESSION
Multiband compression permits you to bring out
certain elements that appear to be weak in the mix,
such as the bass or bass drum, the vocal or guitars,
or the snare, literally changing the mix.

RADIO EDIT
Advertisements are created by marketing people, whose goal is to sell products, and they often use ambiguous terms. The most ambiguous of those terms is "RADIO ready". Almost no special preparation is required to make a recording radio ready. Think of your dynamics processor as a tool to help create your sound, not as a tool for achieving radio readiness. The more compressed your material, the less transient impact the drums have, and the less clarity the vocal syllables and percussion have. Subtle multiband compression and soft clipping can make you appear louder on the radio. If you feel this compromises the sound of the CD when played on a home system, why not make a special compressed single just for radio release? This gives you the best of both worlds.

EQUALIZATION
Most of us are familiar with the difference between parametric and shelving equalizers. Very few people know of a third and important curve that's extremely useful in mastering: the BAXANDALL curve. A Baxandall curve is applied to low- or high-frequency boosts/cuts. With a boost, instead of reaching a plateau (shelf), the Baxandall curve continues to rise. With good monitoring, equalization changes of less than 1/2 dB are audible, so subtlety counts. You probably won't hear these changes in an instant A/B comparison, but you will notice them over time.

DIGITAL
Many people have complained that digital recording is harsh and bright. This is partly accurate. Digital recording is extremely unforgiving; distortion in preamplifiers and A/D converters, and errors in mike placement, are mercilessly revealed. The mastering engineer recognizes these defects and struggles to make a pleasant-sounding result.

NOISE
Compression tends to amplify the NOISE in a source: tape hiss, preamp hiss, and noisy guitar and synth amplifiers can become audible problems. The key to good-sounding noise reduction is NOT to remove all the noise, but to accept a small improvement. An inaccurate or unrefined monitoring system not only causes incorrect equalization, but also results in too much equalization. The more accurate and linear your monitors, the less equalization you will apply. Try to avoid adding monitor-correction equalizers; it is better to fix the room or replace the loudspeakers.

STEREO BALANCE
Stereo balance must NOT be judged by comparing
channel meters. The only way to accurately adjust
stereo balance is by EAR. The Finalizer provides
powerful techniques for adjusting stereo imaging.

DAW
Mastering benefits from the digital audio workstation. The DAW lets you make edits, smooth out fades, emphasize or de-emphasize the loudness of sections, assemble Red Book track lists, etc.

WHO SHOULD DO THE MASTERING?
One of the main advantages of hiring a mastering engineer to master your record is the FRESH perspective he brings to proceedings. He can HEAR things in your mix that you can't, because he listens as a LISTENER, not as someone who has been mixing the song for 10 hours. If mastering is done by the mix engineer, the mastering phase makes little sense: if your mixing engineer could spot the obvious flaws, you wouldn't need mastering in the first place.

MASTERING
EXPLAINED

THE BEST MASTERING IS NO MASTERING
If you think your mix is perfect as it is, don't process it any further. Each step in the processing chain adds extra noise and distortion; digital or analog, it is always there.

GET THE BEST MIX QUALITY YOU CAN!
The better the mix, the less processing is needed at the mastering stage. However, your mixes should not contain compression or be equalized too much. Once something has been changed, it is hard to undo.

ALWAYS MASTER AT 96 KHZ.
Alternatively, you can use up-sampling; however, the quality will always be worse, since your mix quality has already been degraded by being sampled at a lower rate. Many low-end studios still record at 44.1 kHz. If this is the case, you should use your DAW (e.g. Sequoia) to increase the sampling rate of the mix to 96 kHz for mastering, and then convert it back to 44.1 kHz after completion, ideally with some dithering applied.
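
If you want to do the round trip outside a DAW, a hedged sketch using SciPy's polyphase resampler looks like this (the 147:320 ratio is exact, since gcd(44100, 96000) = 300):

from scipy.signal import resample_poly

def up_to_96k(x_44k1):
    return resample_poly(x_44k1, up=320, down=147)   # 44100 * 320/147 = 96000

def down_to_44k1(x_96k):
    return resample_poly(x_96k, up=147, down=320)    # back for delivery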

ALWAYS EXPORT YOUR MIXES IN 32-BIT FLOATING-POINT FORMAT.
Audio degradation from truncation to 24 bits is typically not audible, but further processing can exaggerate the defects; 16-bit degradation is often audible on good studio monitors. Be aware that your ears will soon think your sound is great even if it is not. You should always master in a DAW where you can switch between tracks, so you can listen to your current audio, the original, and the reference tracks you have chosen for comparison. Before you start, prepare these in your DAW project (e.g. in Sequoia), and choose at least one professionally mixed song (ideally up to 10!) for guidance.

THE 'LOUDNESS WAR' IS NOT A GOOD THING.
Modern pop songs have almost no dynamic range compared to recordings made 20 years ago. It all started when someone discovered that songs sound better when they are played louder. Modern digital processors are able to increase the volume of songs so much that, compared to their older versions, they sound many times louder. However, the dynamic range is SACRIFICED. You can hear that most modern songs sound the same from beginning to end, because removing the transients in order to push the volume to these incredible levels removes the dynamics of the music. Therefore, always increase the volume only as much as necessary, and not a single dB more! It may sound better, but only for a few seconds!

NO PEAKING!
You will create a chain of effects, each performing some operation on the sound, and very often the sound will get amplified. Most plugins have a GAIN control and a PEAK meter to ensure that the output isn't clipping. It may touch the red zone, but not go above it; otherwise the sound may get distorted, when bypassing an effect for example. Moreover, if the output of every effect has approximately the same level as its input, you can bypass any effect in the chain to check the sound without it. It doesn't always work, but it can help a little.

MANAGE THE STEREO FIELD
One very hard task for a mixing engineer is to prepare a room for the instruments. In a good mix your brain should be able to identify each single sound source and place it somewhere in the space. This is managed with various panoramas, reverbs and delays. When mastering, you should ensure that this depth of field is PRESERVED. Start by checking whether there is a good amount of stereo field and correct it if necessary. It's usually good to keep the bass more monophonic, for example. The goal is not to make an artificially stereo-sounding output, but to control the stereo content. The resulting signal in a stereo field (vectorscope) view should form a nice vertical ellipse, not too wide and not too thin. Finally, you should check for MONO compatibility. Remember that even in the 21st century your recordings should be mono compatible! When you compare the stereo and mono versions, the monophonic one loses the stereo content, but it should still have some depth, and there should be no significant frequency loss caused by phase cancellations. In extreme yet typical cases, a track may completely disappear when played in mono. Poor mono compatibility at this stage means you will have little choice but to obtain a remix, as in most cases this cannot be fixed during mastering.
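
A quick, hypothetical way to screen for this in code: fold the master to mono and compare band energies against the stereo original (the band edges below are illustrative, not a standard):

import numpy as np
from scipy.signal import butter, sosfilt

def mono_fold_loss_db(left, right, sr,
                      bands=((60, 250), (250, 2000), (2000, 8000))):
    """~0 dB per band is safe; a strongly negative value hints at
    phase cancellation when the track is played in mono."""
    mono = 0.5 * (left + right)
    report = {}
    for lo, hi in bands:
        sos = butter(2, (lo, hi), btype="bandpass", fs=sr, output="sos")
        e_stereo = 0.5 * (np.mean(sosfilt(sos, left) ** 2) +
                          np.mean(sosfilt(sos, right) ** 2))
        e_mono = np.mean(sosfilt(sos, mono) ** 2)
        report[(lo, hi)] = 10 * np.log10(max(e_mono, 1e-12) /
                                         max(e_stereo, 1e-12))
    return report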

BALANCE THE SPECTRAL CONTENT
Spectral content describes the proportions of frequencies in the audio. Mixes rarely have good overall spectral content, but to be fair, that is not the aim of mixing! You can usually fix this using an equalizer; however, very often these disproportions are not constant and change during the song. Spectral content affects the overall loudness a lot. Our brain adaptively masks quiet frequencies in order to let us listen to what is important: what is loud. Therefore, the loudest recording has equal power over most frequencies consistently. There is an extreme case: WHITE NOISE, a signal that has the same magnitude at all frequencies. I bet it is louder than any of your recordings! Keep in mind that every dB you gain in loudness is lost somewhere else. Spectral balancing and other problems are generally fixed using multiband compressors, which will also reduce the dynamic range. Generally you use equally distributed bands; about 4 or 5 is usually enough. Set the threshold of each band to a similar value and the ratio to about 2:1, and then tweak all of the bands. Remember, the less you change, the better.
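
A deliberately naive sketch of that 4-band recipe (plain Butterworth splits rather than the phase-matched crossovers real processors use; the crossover points, threshold and release are illustrative, for intuition only):

import numpy as np
from scipy.signal import butter, sosfilt

EDGES = [0, 200, 1000, 5000, None]   # 4 bands; pick crossovers by ear

def band_split(x, sr):
    bands = []
    for lo, hi in zip(EDGES[:-1], EDGES[1:]):
        if lo == 0:
            sos = butter(2, hi, btype="lowpass", fs=sr, output="sos")
        elif hi is None:
            sos = butter(2, lo, btype="highpass", fs=sr, output="sos")
        else:
            sos = butter(2, (lo, hi), btype="bandpass", fs=sr, output="sos")
        bands.append(sosfilt(sos, x))
    return bands

def compress_band(b, sr, threshold_db=-24.0, ratio=2.0, release_ms=150.0):
    thr = 10 ** (threshold_db / 20)
    rel = np.exp(-1.0 / (sr * release_ms / 1000.0))
    env, out = 0.0, np.empty_like(b)
    for i, s in enumerate(b):
        env = max(abs(s), rel * env)
        gain = (thr / env) ** (1 - 1 / ratio) if env > thr else 1.0
        out[i] = s * gain
    return out

def multiband_compress(x, sr):
    # Compress each band at ~2:1 with a shared threshold, then sum.
    return sum(compress_band(b, sr) for b in band_split(x, sr))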

COMPRESSION.
As I mentioned already, the modern trend is to over-compress recordings in order to make them LOUD from beginning to end. So please read the following advice and remember: if you don't need it, don't do it!

MACRODYNAMICS
Your first aim is to make the song sound consistent: ensure each refrain is not much louder than a verse, etc. The best way to do this is to use ENVELOPES in your DAW to manually manipulate the gain. If you use a compressor for this instead, remember that the higher the ratio, the more reduction and loudness you will get, and the more dynamics you will lose. Set it at 1.5:1 for starters, then play the chorus and the verse to check whether they are similar enough.

MICRODYNAMICS
You may want to remove peaks and make the track sound louder. I repeat: if you don't need it, don't do it! The loudness will be increased in the final limiting stage as well. Attack time: short, let's say 10 ms. Release time: short, let's say 100 ms. RMS length: short, but probably not minimal (peak). Threshold: the overall level should be fairly consistent thanks to the previous macrodynamics stage, so play the chorus and set the threshold slightly below the current level.
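
Those settings translate into a plain feed-forward compressor; the sketch below is my own, with a hypothetical 2:1 ratio and -10 dB threshold since the text leaves both open, and shows how the attack and release constants act on the envelope:

import numpy as np

def compress(x, sr, threshold_db=-10.0, ratio=2.0,
             attack_ms=10.0, release_ms=100.0):
    atk = np.exp(-1.0 / (sr * attack_ms / 1000.0))
    rel = np.exp(-1.0 / (sr * release_ms / 1000.0))
    thr = 10 ** (threshold_db / 20)
    env, out = 0.0, np.empty_like(x)
    for i, s in enumerate(x):
        target = abs(s)
        coeff = atk if target > env else rel       # attack while rising
        env = coeff * env + (1.0 - coeff) * target
        if env > thr:
            over_db = 20 * np.log10(env / thr)
            gain = 10 ** (-over_db * (1 - 1 / ratio) / 20)
        else:
            gain = 1.0
        out[i] = s * gain
    return out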

EQUALIZATION.
This will finally make the spectrum sound professional. It is the most IMPORTANT part. Even the compression and limiting stages are expendable, but this one has to be present. Some of you may wonder why I have put the equalization after the dynamics. This is because the compression may change the spectral content, so I feel it's prudent to put the equalizer here, although it is your choice.

LIMITING.
Play the loudest part of the song. Move the threshold to 0 dB if it's not there already. Now there is no limiting, so only clipping or saturation is performed. Watch the peak meter and use the input gain to lower the level if necessary; the input should not be peaking! Decrease the threshold very slowly to the point where the peak meter touches the 0 dB limit. Not a single dB more, unless you want a crunchy master. If you make the threshold too low, the output will get distorted. The meter above the peak meter is the gain reduction meter, which shows how much of the track's dynamics you have lost; it should be tapping -6 dB at most. You can get more transparent limiting using a MULTIBAND limiter, but be extremely careful! Multiband limiters can provide a higher level of loudness than single-band limiters just by increasing the input gain parameter. Because each band is limited separately, increasing the input gain also balances the spectrum, getting you closer to the white noise we talked about above. This tricks the brain into thinking that it sounds better. Working with a multiband limiter is not too difficult. Just increase the input gain and watch the meters, especially the gain reduction meter and the saturation reduction meter. Saturation reduction causes distortion, and gain reduction causes pumping. Finally, if you think some bands are affected more than others, you can use separate thresholds or band input gains for them. Listen closely to the results, and if you hear any unpleasant distortion, clicks or pops, increase the threshold (decrease the drive) or remove the saturation. If none of this helps, decrease the input gain. If there are still artifacts present, bypass the limiter, as it is very probable that the distortions are generated by one of the previous steps or are even contained within the mix! Lower the ceiling parameter to, say, -0.2 dB. This is basically just output gain, but some media can by their nature create output levels after decoding that are higher than the original, so although we have headroom up to 0 dB, it is a good idea to keep the output slightly lower. And finally, the golden rule as usual: LISTEN. Switch between tracks and COMPARE. Don't make your recording louder than the comparison; there is too much to lose. Compare it to the original too, to see if the mastering has helped. And don't forget to listen to the quiet parts of the song as well, since the processing might also have amplified any noise etc.
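
To make the threshold/ceiling relationship concrete, here is a toy model of my own (hard clipping rather than the transparent limiting a real plugin performs; purely for intuition about the knobs, not a usable limiter):

import numpy as np

def toy_limiter(x, input_gain_db=0.0, threshold_db=-3.0, ceiling_db=-0.2):
    """Gain -> clip at threshold -> scale the result up to the ceiling.
    Returns the audio plus the worst-case gain reduction in dB."""
    g = x * 10 ** (input_gain_db / 20.0)
    thr = 10 ** (threshold_db / 20.0)
    clipped = np.clip(g, -thr, thr)
    peak = max(np.max(np.abs(g)), 1e-12)
    gain_reduction_db = max(0.0, 20 * np.log10(peak / thr))  # keep <= ~6 dB
    out = clipped * (10 ** (ceiling_db / 20.0) / thr)        # ceiling = output gain
    return out, gain_reduction_db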

EXPORTING THE RESULT.
When your master sounds good, go for a walk to CLEAR your head. When you get back, which should be at least a few hours later, listen again. Still sounds good? Great! Use the export/mixdown feature of your host to generate a wave file at the same audio quality, preferably 96 kHz and 32-bit float. At this stage you can add some fade-ins/outs if necessary. Now you have the finished recording at the highest quality. Use your DAW to create a file in the format you need. In most cases you will be down-sampling to 44.1 kHz and decreasing the resolution to 16 bits. Dithering is recommended here. You probably won't hear the difference, but when your recording is played on a big concert system, someone might!
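
As a hedged sketch of that final dither-and-truncate step (TPDF dither at plus or minus one LSB is a common choice; a real DAW may also offer noise shaping on top):

import numpy as np

def dither_to_16bit(x):
    """x: float array in [-1.0, 1.0). Add triangular (TPDF) dither of
    +/- 1 LSB, then round to 16-bit integers."""
    lsb = 1.0 / 32768.0
    tpdf = (np.random.uniform(-0.5, 0.5, x.shape) +
            np.random.uniform(-0.5, 0.5, x.shape)) * lsb
    y = np.round((x + tpdf) * 32767.0)
    return np.clip(y, -32768, 32767).astype(np.int16)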

MY
MASTERING

MY PHILOSOPHY
The key to achieving a great-sounding MASTER is to start with a great-sounding MIX. If you don't get the mix right first, MASTERING cannot compensate for the mixing issues. Distortion and over-compression, for example, are difficult to deal with: distortion is broadband noise, so it cannot be removed with EQ.

HOW I WORK
I spend at least THREE HOURS per song. I pay close attention to the structure, balance, movement and tonality of the mix before I determine what the song needs. My motto is "LESS IS MORE". I only use processing that I believe is ABSOLUTELY necessary. My adjustments are SUBTLE: I try to BALANCE and ENHANCE your mix, NOT alter its character, emotion or sound. That is why I use "musically perceived" LOUDNESS.

LOUDNESS WAR
If you use heavy compression, limiting or brick-wall limiters, you always LOSE musicality, energy and dynamics. However, if you want your mix to be as loud and proud as it can be, I will do it for you. I understand that the majority of people these days listen to music through iPods, iPhones and cheap ear-buds. That is why I always provide TWO mastered versions.

ALBUMS
When mastering an ALBUM I pay close attention to
the GAIN levels of each song, in relation to each of
the other songs. TONALITY is important, too. Some
mixes may be brighter, some may be recorded in
different places, and some may be mixed by
different engineers. I try to make the whole release
sound COHESIVE. I spend at least ONE DAY
mastering your album.

HOW TO PREPARE YOUR FILES FOR MASTERING
Bounce your song to, at the very least, 44.1 kHz/16-bit interleaved WAV/AIFF. Don't dither, don't normalize, and don't use limiters/compressors on the master channel. Try to leave at least 6 dB of headroom (peaks no higher than -6 dBFS).

MOST IMPORTANT!
Don't forget to mention your IDOL band/album/song, to make sure I go after the sound you want!

©Omnibus Press 2014

