Song Mixing Secrets
SongMixingSecrets.com
Revised 2021 - Copyright 2019
I wrote this book so you can quickly learn (in a matter of days) the
techniques, tips, and secrets that took me over 19 years to learn!
What You Will Learn From This Book
The experienced mixer definitely does not need to read a 400-page book on basic
mixing techniques, or take a six-week mixing course. They're way
beyond that.
What they do need is a book that points out the most common
mixing errors I see daily, so they can check their mixes for them.
Problems they don't even realize their mixes suffer from. And then use
the information in this book to correct those problems. That's what
“Song Mixing Secrets” is!
If you told me the style of your music, I could pretty much tell you
what will be wrong with your mixes, without even listening to them!
Not exactly, of course, but I see the same mixing mistakes every day, song after
song.
This book does not include a hundred extra pages of theory and
worthless advice you'll never use. I included only information you
need to know. Solutions to real-world problems that WILL occur in
YOUR song mixes.
There are many, many examples of players who, over the winter, went
from a .260 avg, 15-HR hitter (ready to lose his starting job) to a .290
avg, 35-HR hitter who ends up signing an $80 million extension!
And sometimes all the coach told him was to simply keep his
head straighter and change his swing launch angle by 15%. That's it!
So, was that a huge secret? Well, to the player in this example it was
“the $80 million secret!”
And I'm hoping this book has the exact same effect on your mixes.
Simple solutions that have major positive effects.
Also, this book was re-released in mid-2021. It covers current mixing
techniques using up-to-date gear. Since 90% of current software
plugins weren't available in 2007, books written then or earlier
are primarily analog-gear based, using old-school mixing techniques.
About The Author
I've worked with several Grammy nominees and winners. I've also
mastered many Billboard Top 10 songs in Europe, movie soundtracks,
videos that have appeared on MTV, and dance/EDM music that has
been played in dance clubs all around the world.
That is very kind of you to give me such detailed notes on ways I can
improve future mixes and recordings.... I've been reading up on your
mixing tips articles, and they are very informative, but having
something that's personalized feedback on my mixes is beyond
fantastic! I really want to thank you, John, again for everything, all the
tips, all your time and work, you've got loyal clients here, I will
certainly continue to always bring my business your way... Aaron F.
I don't know how you do it, but you are amazing! You are so dead on
with your instincts and skills. I'm gonna drop $36 into your PayPal
account. Keep the extra $30 as a tip. Go have a beer on me. An
EXPENSIVE one! Wow, you have just blown me away with your
work. I'm coming to you every single time, man. You are the tops.
Give me a call anytime you need anything, and if I can help, I will.
And when we gear up for our next CD, I'll give you a shout. Thanks
again, man. You're just awesome. Felix
SONG MIXING FAQS
At least once a week, someone uploads one song .wav file and says
they want it mixed and mastered. Well, you can't mix one file. The
word "mixing" is a verb, and you need at least two tracks (files) to be
able to mix them together. You need at least two of anything to be able
to mix them together!
What's The Main Goal In Song Mixing?
I want to first start out with your goal in audio mastering. Your main
goal in audio mastering is to replicate the sonic qualities of a
professionally mastered commercial song, in the same genre and
style as the song mix you are working on.
Yes, everyone has their own slight preference adjustments like a little
more bass, brightness, etc., but overall you want to be at least 80%
similar to the current industry standard. The only exception is if a
client specifically requests an old school master. If that's the case, then
you're going to have to replicate the sonic qualities of songs from a
past era.
https://jrmastering.com/audio-engineer-portfolio-reviews-samples.html
Notice how these mixes are slightly dull and have a lot of headroom.
The rule when preparing a song mix for mastering: keep
everything slightly “below” industry standard, so the
mastering engineer can bring the mixes “up” to standard.
As opposed to overly bright, with way too much bass, distorted, and already
louder than a song on the radio. Too much cutting in audio
mastering makes a song flat and removes its sparkle. The mastering
engineer prefers adding.
Back To The Main Topic – So, What Is The Main Goal in Song
Mixing?
In song mixing you want to create a good mix that focuses on the
sonic qualities that can't be corrected in audio mastering.
Elements like overall clarity and track separation (space), proper track
volumes in relation to each other, correct panning, track EQ and
effects, these all need to be done properly in the mixing process.
The bigger question is, how much time and effort are you going to
put in? You know, everyone can be in good physical shape, or at least
in decent shape. The choice is yours.
But, is there going to be a diet and exercise plan that's followed daily
for months, or is everything going on eBay or Craigslist a couple of weeks
after starting? It's up to each individual to do what's necessary to
achieve their end goal.
You get out what you put in, even when it comes to song mixing.
This book gives you the knowledge and tools you need to become a
great mixing engineer. And you'll learn a lot faster than I did. But it's
still up to you to read this book in its entirety and put in the
practice time.
Mutt Lange, the producer/engineer who worked with Shania Twain, Def
Leppard, AC/DC, Michael Bolton and a slew of other stars, was a
LEGENDARY slow worker.
Truly AMAZING RESULTS (and like I always say, all anyone cares
about is the end result). But a sloooow, methodical worker.
I'm not going to go back and read up on him again for exact details, but
I remember reading that each song took him a couple of months to record and
mix. That's right, a couple of months for ONE song!
One factor is how well you know your EQ and effects, and how
many you use. If you've been doing this for many years, you already
know high- and low-pass EQ filtering, and can apply it right off the bat.
You also have favorite effects you like to use for each genre, and you know
the delay times, which reverbs to use, etc. on vocals in certain
situations.
Someone like this could finish a song MUCH FASTER than someone
just starting out who is in “trial and error mode,” testing different
effects on every track. A beginner like that could take weeks to mix
one song. And it still might suck! But that's a different story.
Number of effects used will also affect song mixing time. A pro
might use a dozen different effects on the lead vocal, changing them in
each section. While a newbie might use ONE EFFECT (plate reverb)
on all the vocals in the entire song.
That all being said, I'm going to go with 6-8 hours to mix an average
25-30 track song, over the course of a couple days. I say a couple
days because I'm a firm believer in the “next day fresh ear” mixing and
mastering method.
I like to clear my head and listen to a mix (or audio master) fresh the
next day, and then make my final adjustments and tweaks.
I've actually seen a few sound engineers online say the opposite. They
say to make your mix sound as close as possible to a finished master.
"All you want the mastering engineer to do is make the song louder."
These comments are based on common sense and theory, NOT on
real world experience working with thousands of clients (like I
have)!
If the only thing your songs really needed was for the mastering
engineer to make them louder, why not just make them louder yourself
and save the money? Duh! No, pay someone $500 to do it in 15
minutes. I want that job!
Here's a non-music example. Before you get your car detailed, do you
clean it spotless inside and out, leaving only the windshield dirty, so
that all the detailer has to do is wash your windshield and he's
done? NO! That doesn't even make any sense.
You're paying for a car detail and you want every process that
comes with it. That's their specialty and you want their expertise in
every area, not just the windshield washing process. The same goes for
audio mastering.
3. And the #1 reason is THEY CAN'T DO IT! They don't have the
knowledge, skills, replicating abilities, etc. That's why they're looking
for a mastering engineer to begin with!
In the past, before I started giving out this info, do you know what my
clients would send me? I would say half the submissions were partial
masters, where the clients were trying to make them sound radio ready.
The problem was they were TERRIBLE! Way over-level, distorted,
super bright and/or super bassy, way too much compression, etc. Many
times they had everything wrong!
In a perfect world, yeah send me a song that's already mastered
and I don't have to do anything to it. But I know from years of
experience working with actual people that this is a very unrealistic
request. Most can't do it, that's why they're looking for a mastering
engineer to begin with!
I've also heard the comment that mastering is taking all the songs
on a CD and making them all sound similar. Really? That's all
mastering is? So, I'm supposed to take the best mix on the CD and
make all the other songs sound just like it? That doesn't even make any
sense! What if the best mix on the CD sucks?
WHAT IS HEADROOM?
Headroom is the distance between an audio track's peak level (when
the meter is at its highest) and 0 level on the output meter.
How far is your peak loudness below 0 level? That's the simple
definition of headroom. So, if your meter is peaking 3dB
below 0 level, you have 3dB of headroom. If your meter is peaking
right at 0 level, you have NO headroom.
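If it helps to see the arithmetic, here's a minimal Python sketch of that definition (my own illustration, not from any DAW, assuming sample values normalized to the usual ±1.0 full-scale range):

```python
import math

def headroom_db(samples):
    """Distance between the track's peak and 0 dBFS (digital full scale)."""
    peak = max(abs(s) for s in samples)   # highest meter reading, on a 0..1 scale
    peak_dbfs = 20 * math.log10(peak)     # negative for anything below full scale
    return -peak_dbfs                     # peak at -3 dBFS -> 3 dB of headroom

# A mix peaking at ~0.708 of full scale sits about 3 dB below 0 dBFS.
mix = [0.708 * math.sin(2 * math.pi * 440 * n / 44100) for n in range(44100)]
print(round(headroom_db(mix), 1))  # ~3.0 dB of headroom
```

A mix peaking right at 1.0 (full scale) would report 0 dB of headroom, matching the "NO headroom" case above.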
Dynamic range is the area between the peak level (when the meter
is up) and the low level (when the meter is down).
Roughly, 3dB to 6dB of movement between the high and low meter
level is a typical amount of dynamic range, but this does depend on the
genre of music.
Note - One problem you want to watch out for (if you're mixing your
own music) is dynamic range between song sections. You don't want a
10dB or more difference between the verses and choruses. This
creates a mastering problem because the verses will never be loud
enough compared to the rest of the song. Roughly a 6dB difference is
as high as you want to go.
Also, if a song has very low dynamic range (the meter barely moves)
it's probably over-compressed. Which means it could lack punch,
power, clarity, or could even limit EQ options. You want your song to
have some dynamic life!
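To make the verse/chorus comparison concrete, here's a small sketch (my own, comparing average section levels, which is one reasonable way to measure the jump):

```python
import math

def rms_db(samples):
    """Average (RMS) level of a section, in dBFS."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms)

def section_jump_db(verse, chorus):
    # Positive result = the chorus is that many dB louder than the verse.
    return rms_db(chorus) - rms_db(verse)

# Hypothetical sections: the chorus is 2.5x the verse's amplitude.
verse  = [0.2 * math.sin(2 * math.pi * 220 * n / 44100) for n in range(44100)]
chorus = [0.5 * math.sin(2 * math.pi * 220 * n / 44100) for n in range(44100)]
jump = section_jump_db(verse, chorus)
print(round(jump, 1))  # ~8.0 dB -- more than the ~6 dB ceiling, so flag it
```

A jump like that would tell you to bring the verse up (or the chorus down) before sending the song to mastering.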
I want a song mix with some headroom and decent dynamic range.
I want to EQ it as necessary, I want to compress it as necessary, I want
to be able to set the overall volume as necessary, and I don't want to
work with a distorted mix.
That's usually all you need to do to achieve proper headroom, but there
are always exceptions. If most of your tracks are HOT and peaking
right at 0 level, your song mix could be at 0 level with no
headroom. It won't be distorted (if it wasn't recorded distorted), but you still
won't be leaving the audio mastering engineer much to work with. If
this is the case, after finishing your mix, link all the channel faders
together and just pull them all down a few dB. Now you have
headroom.
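Since linked faders are just one gain change applied everywhere, the math behind "pull them all down a few dB" looks like this (a simplified sketch; a real DAW fader does the same dB-to-linear-multiplier conversion under the hood):

```python
import math

def db_to_gain(db):
    return 10 ** (db / 20)        # -3 dB -> ~0.708x linear gain

def pull_down(tracks, db=-3.0):
    """One gain change applied to every track at once, like linked faders."""
    g = db_to_gain(db)
    return [[s * g for s in track] for track in tracks]

# Two hypothetical hot tracks peaking right at 0 dBFS:
tracks = [[1.0, -0.9, 0.3], [0.8, 0.5, -1.0]]
quieter = pull_down(tracks, -3.0)
peak = max(abs(s) for track in quieter for s in track)
print(round(peak, 3))  # ~0.708 -- the mix now peaks about 3 dB below full scale
```

Because every track gets the exact same multiplier, the balance between tracks is untouched; only the overall level drops.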
If you did become the greatest sound engineer who ever lived, but had
to suffer from hearing a loud HISS sound while relaxing on the beach,
that's something you'll have to live with for the rest of your life. No
amount of money can fix it. Hopefully this chapter helps prevent that
problem.
What Is Tinnitus?
Tinnitus is the constant hearing of a sound when there is no sound
present. Some describe it as a ringing sound, a hiss, or a high-pitched
tone. The sound is continual, and it varies from one tinnitus sufferer to
another.
How Is Tinnitus Caused?
Tinnitus is caused by either a single extremely loud sound or by
loud sounds over a period of time. I know a military vet who got
severe tinnitus from the sound of jets taking off in close proximity.
Another guy I know got it from a single bomb explosion that was right
next to him. Listening to loud music at a concert or club, playing in a
band, blasting music on your iPod, or monitoring music very
loud as a sound engineer: over a period of time, any of these scenarios
could cause tinnitus. If you cut grass for a living and don't wear
earplugs, I would imagine that could eventually cause it too.
The first few months I suffered from tinnitus, I would say it was very
annoying at night but not too bad during the day. It did not affect my
sound engineering. After six months, it improved about 25%. After a
year, it was roughly 50% better. Now, many years later, I would say I
have maybe 20% of the original tinnitus I got in 1999 (an 80%
improvement). It's pretty much gone, because it was a mild case of
tinnitus to begin with, and I took preventative measures so it wouldn't
get worse.
Singer Phil Collins retired because of his tinnitus. Bono also has very
severe tinnitus that greatly affects his everyday life. If your tinnitus is
very severe like those two, I'm sure your sound engineering skills
would be greatly affected.
Also, don't listen to music over 100dB for long periods of time.
OSHA recommends no more than 1 hour at 105dB. I would NEVER
go more than 20 minutes straight at 105dB, if that. Also, be careful
with ibuprofen and prescription drugs.
For the guinea pigs that listened to the same extremely loud 30 minutes of
music, but not continuously (roughly 2 minutes of music and then
a 2-minute break), the damage to their inner ears was FAR LESS
severe compared to the first group.
Now remember, both groups listened to music for the same total time
(30 minutes). But continuous listening was far more damaging
than intermittent listening.
Above is what I've been doing since 1999, and it works for me.
Maybe you can go 100dB for 8 hours a day and never have a problem,
but I'm not risking it. 105-110dB for 30 minutes continuous ruined
my ears pretty good, and I had to make sure they didn't get any
worse.
Room Size
Technically, you can properly mix or master in any room size. But,
I believe a smaller room is better than a very large one for someone
who's just starting out. And when I say smaller I mean closer to 12'x15'
than to 20'x30'. I've mixed and mastered songs for a number of years in
a 20'x30' room. It took me a few days to get used to it, but after that I
could do it.
The obvious problem with a big room is it's a very open space. If
you don't have a good acoustic setup, the room will add reverb to every
song.
You have to compensate for this on every song you mix or master,
because the extra reverb you hear isn't really in the music. It's coming
from the room.
Speaker Choices
I've used dozens of different brands of speakers in my career and I do
like a few better than others. But, this article deals more with types and
sizes of speakers, not with brand choices.
The main mixing and mastering speakers I currently use are Dynaudio
100w powered studio monitors with 6" woofers and 1.1" tweeters.
They have nice EQ adjustment options on the back and I know these
speakers very well. If you get speakers that are a little larger, you'll get
better LOW-end out of them, but I'm happy with the size I use.
Speaker Placement
Before I tell you the setup I like best, after many years of
experimentation, I'd like to first tell you the setup I personally don't
like (even though a lot of sound engineers use it): two studio
monitors, five feet apart, on a desk two feet away from the engineer's face.
And NO subwoofer! I think they call this "near field" monitoring.
But at some point during the audio mastering process, you must
crank the music up very loud to set your final compression and to
hear how it translates at high volume levels. You can't do this if your
speakers are right next to your ears! At least I can't.
Maybe this is why the songs I get in for re-mastering badly break up
when cranked up loud, and the bass is totally washed out. They were
originally mastered at very low levels without a sub-woofer, and not
optimized for loud playback.
I also find it hard to determine the overall depth and stereo width
in music when the speakers are two feet in front of me. Which
makes sense. It's like watching a 50" TV: I want it far enough
away that I can take the whole picture in. No one puts a 50" TV on a
table right in front of them, yet this is done with speakers.
First off, I use speaker stands for my studio monitors, and the speakers
stand 3.5 feet off the ground. I have the stands roughly 7 feet apart, and
the speakers are about 6-7 feet away from my face. The sub-woofer is
on the floor, centered between the two speaker stands.
Note - Do not put the back of the speakers right up against a wall.
Have at least 10 inches between the back of your speakers and the wall,
or the sound will be altered.
I've found this setup is close enough that I can hear all the details in
the music, wide enough that I get a full representation of the stereo field,
and far enough away that I can crank the music up to
105dB to make sure it sounds right for loud playback, without blasting
myself in the face.
I've mastered many songs, and time and time again I've noticed they
all suffer from the EXACT same common basic mixing mistakes.
In this section I'm going over the top 14 mixing mistakes I see daily,
and how to fix them.
The percentages you see next to each title represent how often I
see the mistake overall. I say overall because if you see (10%), that
doesn't necessarily mean I will see this problem once on every 10-song
CD.
I'm just mentioning this because I don't want you to see 10% and think
you HAVE TO have one song with a particular problem on every CD
you work on.
Even though many of these mistakes and their solutions are pretty
simple (while others might be new to you), they all have to be
mentioned so you know what to look for when finalizing your mixes.
As I'm writing this book, I'm starting to realize that knowing what you
are doing wrong is just as important as me telling you how to fix it.
I see this problem mainly in songs where mics are used to record
instrument tracks, like a rock song. As opposed to a song made
using a keyboard with noise free samples. Those songs could contain
distorted tracks, but track noise is a rarity.
* The Problem -
Tracks are recorded with a slight hiss. When all vocals and
instrument tracks are playing at the same time, you can't notice any
hiss. The audio content is loud, so the hiss is drowned out.
But even though you can't always hear the hiss, when you have 10+
tracks with a tiny amount of it on each one, it adds up to an element of
space eating mud.
* The Solution -
First, record a short "silent" track: arm your normal recording chain,
but don't play or sing anything.
Then play back the silent track you just recorded and increase its
volume. If it gets louder, obviously it's not silent! And if it isn't silent
during this test, it won't be when you start recording either.
Usually if these two areas are not over-gained, you won't have the
hiss problem.
Check this out before you start recording! Why add extra noise?
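If you want to put a number on that silence test, here's a rough sketch (my own illustration) that measures the noise floor of a "silent" recording and flags anything louder than -60dBFS (a threshold I'm picking just for the example, not a universal standard):

```python
import math
import random

def noise_floor_dbfs(samples):
    """RMS level of a supposedly silent recording, in dBFS."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms) if rms > 0 else float("-inf")

def is_clean(samples, threshold_dbfs=-60.0):
    # Treat anything noisier than the threshold as an over-gained input chain.
    return noise_floor_dbfs(samples) < threshold_dbfs

# Simulated "silent" recordings: one with audible hiss, one effectively silent.
random.seed(0)
hissy  = [random.uniform(-0.01, 0.01) for _ in range(44100)]
silent = [random.uniform(-1e-5, 1e-5) for _ in range(44100)]
print(is_clean(hissy), is_clean(silent))  # False True
```

The hissy track sits around -45dBFS, which is exactly the kind of per-track hiss that adds up to mud once a dozen tracks are stacked.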
#13 Songs Are Partially Poorly Mastered Or Over-level (10%)
But then again, if you are doing your own mastering, your mixes still
can't be over-level and distorted when you start the mastering process.
I'm going to repeat the text from an earlier chapter so you don't have to
go back.
That's usually all you need to do to achieve proper headroom, but there
are always exceptions. If most of your tracks are HOT and peaking
right at 0 level, your song mix could be at 0 level with no
headroom. It won't be distorted (if it wasn't recorded distorted), but you still
won't be leaving the audio mastering engineer much to work with. If
this is the case, after finishing your mix, link all the channel faders
together and just pull them all down a few dB. Now you have
headroom.
If you're going to master your own music, like I do, you can mix
and master at the same time, using effects on the stereo/main out as
you go along.
But if you're going to be sending your songs out for mastering, nothing
goes on the stereo/main out.
This mixing mistake is critical when working with EDM, dance, fast
pop and some hip hop music. In other genres it's still important,
but not crucial to the integrity of the song.
* The Problem -
Usually, the hi hat is the culprit. The sound engineer just forgets about
it when mixing. Or, a newbie song arranger doesn't even put a hi hat in
the song arrangement at all!
A. Simply raise the faders until you can hear these elements
coming out of the mix. Keep going up until you can hear them
driving the rhythm of the song.
One trick I use is to double the snare. In many rock songs I double the
snare right off the bat, leaving one dry and putting a little drum
reverb on the other.
If that doesn't work, you might have to create space in the upper
mids, which I discuss in mixing mistake #1, creating space in a mix.
Sit and listen to your mix. Is that snare/clap and hi hat combo
coming through nicely? Is it driving the rhythm of the song?
#11 Kick/Bass Relationship Is Poor (10%)
In most songs, the meat of the kick is within the 60-90Hz frequency
range, though it usually extends past 100Hz (with its power subsiding).
* The Problems –
A. The sound engineer uses the bass and kick “as is,” not applying
any EQ filtering. So, their frequency ranges all cross over (bleed into)
each other and either create a big punchless muffle, or the kick is
phased out entirely to the point where it's almost non-existent.
B. With sampled music, the meat of the bass and kick samples that
were chosen are right on top of each other. They are both in the
exact same frequency range. Neither has its own space in the mix.
* The Solution -
A. Space must be created in the mix for your basses and kick. They can't
run over each other. I explain this in detail in the final chapter, Mixing
Mistake #1, where I break down high- and low-pass filtering and subtractive EQ.
B. This is an arrangement issue. The meat of the kick and bass can't
be the exact same frequency range. This occasionally happens with hip
hop songs I get in.
For example, you can't have your kick and bass BOTH in the 60-
90Hz range.
One of the two samples has to be hot in the 100-140Hz range (and it's
almost always the bass) and the kick in the 60-90Hz area.
Don't select bass & kick samples that are right on top of each other
in the frequency range.
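To show what carving those ranges apart actually does, here's a bare-bones sketch using simple one-pole filters (nowhere near a real EQ plugin, but enough to see the kick keep its lows and the bass keep its upper range):

```python
import math

def low_pass(samples, cutoff_hz, sr=44100):
    """One-pole low pass: keeps the lows (e.g. the 60-90Hz meat of a kick)."""
    a = math.exp(-2 * math.pi * cutoff_hz / sr)
    out, y = [], 0.0
    for s in samples:
        y = a * y + (1 - a) * s
        out.append(y)
    return out

def high_pass(samples, cutoff_hz, sr=44100):
    """One-pole high pass: keeps the highs (e.g. a bass pushed up to 100-140Hz)."""
    return [s - l for s, l in zip(samples, low_pass(samples, cutoff_hz, sr))]

def rms(xs):
    return math.sqrt(sum(x * x for x in xs) / len(xs))

sr = 44100
kick = [math.sin(2 * math.pi * 70 * n / sr) for n in range(sr)]    # 70Hz tone
bass = [math.sin(2 * math.pi * 130 * n / sr) for n in range(sr)]   # 130Hz tone
# Carving at ~100Hz: the kick keeps its lows, the bass keeps everything above.
print(rms(low_pass(kick, 100)) > rms(high_pass(kick, 100)))   # True
print(rms(high_pass(bass, 100)) > rms(low_pass(bass, 100)))   # True
```

A one-pole filter has a very gentle slope; real EQs use steeper filters, but the principle of giving each instrument its own frequency lane is the same.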
#10 Bass Is Too Loud Or Boomy (10%)
The percentage of songs I get from clients with this problem isn't
extremely high, because paying customers submit pretty good mixes.
They don't send me total garbage when money is involved.
Now, the guy who wants the free demo.... Garbage! Ha! But that's a
story for my next book....
* The Problem -
* The Solution –
But of course, you don't want your bass 5dB louder than the
mastered commercial radio reference track you're comparing against. Not
to mention distorted on top of that.
2. They can't hear the bass!! And this happens VERY often.
I once had a client who not only didn't have a subwoofer, he was
mixing with only ONE speaker too! WTF! And then he's complaining
that the masters I gave him are in mono... He's in the next book too...
#9 Overall Mix Is Muddy Or Too Bassy (15%)
* The Problem -
* The Solution –
But I would say it's usually a combo solution. EQ filtering first, with
a little bit of volume reduction.
#8 Overall Mix Is Too Thin Or Tinny (15%)
Or thicken things up with effects like slight delays, light reverbs, etc.
Mixing mistake #1 breaks down HI & Low Pass EQ filtering and mix
space.
C. As for the overall tinny issue, this effect happens when the 3-5kHz
frequency range is the loudest part of the song. This is also how you
create that radio effect.
The only reason this percentage isn't higher is because most songs
don't have lyrics with a lot of Sss's in them. If they all did, probably
30% would have too much Ssss going on.
* The Problem -
* The Solution –
Use your de-esser! Now some of you might say, this isn't a secret!
“Use the de-esser on vocals that have a sharp sss sound.”
Well, it's a secret to 15% of the sound engineers I've worked with,
because they didn't do it!
Crank up your mixes loud. If you hear even the slightest sss sound in
your mix, it will be 2-3 times louder after mastering (when your song
is made louder). If this is the case, apply the de-esser a little heavier.
Note – Most of the time, this sharp ssss mistake can be fixed in
mastering without destroying the rest of the song. But always fix it in
the mix whenever possible.
#6 Bright Instrument Or Vocal Sticks Out (15%)
* The Problem –
The song has great volume levels overall, but then one instrument
or vocal is just super loud or bright. Many times nothing can be
done in audio mastering, it has to be fixed in the mix.
* The Solutions –
B. Use compression.
Brickwall limit the bright tracks using a high threshold. So, at its
loudest point, the compressor will kick in and cut its top off.
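That brickwall idea is simple enough to show in a few lines. A hard limiter just caps anything past the threshold (real limiters are gentler and smarter about it, but the principle is this):

```python
def brickwall_limit(samples, threshold=0.8):
    """Hard-cap anything past the threshold; quieter material passes untouched."""
    return [max(-threshold, min(threshold, s)) for s in samples]

# A hypothetical over-bright track with a couple of harsh peaks:
track = [0.2, 0.95, -1.0, 0.5]
print(brickwall_limit(track))  # [0.2, 0.8, -0.8, 0.5]
```

Only the loudest moments get their tops cut off, which is exactly the "high threshold" behavior described above.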
More Tips -
I guess there are some exceptions to every rule, but the lead vocal
track is almost never the brightest track in a song. You have hi hats,
cymbals, violins, some synths, rock guitars, and a few other elements
that all should be brighter than the lead vocal.
This usually is done because the mix is so crowded, the only way to
get the lead vocal to cut through is to make it super bright. Either
the lead vocal EQ is boosted around 3kHz, or it's high-pass EQ filtered to
death, removing all of its bass. This is not the solution. Creating space
in the mix for the vocal track is the solution, and I cover that in
mixing mistake #1.
That being said, you can make the volume of these high pitched
frequency tracks lower than you would think and still hear them
just fine. Just lower their volumes all the way down, and then raise
them up until they cut through and you can hear them.
Is your vocal the brightest track in the song? Does it become annoying
at loud volume levels? Is an ear-piercing instrument track coming in
anywhere in the mix?
#5 Lead Vocal Bassy Or Too Thin (20%)
* The Problem –
* The Solution -
You want your lead vocal to be just right. If it's too bassy, high-pass
EQ filter it around 250Hz.
Or scale back on your high-pass EQ filtering (if you applied too much).
Simple, but 20% of the time it's not happening. It's all about knowing
what problems you're looking for in your mixes.
#4 Narrow Mix, Sounds Bad In A Car, Fake Stereo
(30%)
* The Problem –
Your mix is too narrow. When you listen to it on studio monitors,
everything sounds pretty good from a stereo width standpoint.
But a car stereo is the real barometer for stereo width. When your
song is played in a car, the music should spread across the entire width
from the left door to the right, NOT only two feet wide above the car
stereo.
* The Solution –
One thing I tell everyone when they're arranging a song: if you don't
have anything panned to the outsides of the stereo field, nothing
will be playing there.
With rock music, I rarely see this problem because at the very least,
the song has left and right stereo guitars that were recorded separately
(true stereo) that create great width.
Hip hop is where I see this problem the most. Many times a hip hop
song has a lead vocal, kick, snare, bass, and hi hat all panned close to
center (between L20 and R20). They then add a stereo synth that
comes in every now and then. In this scenario, the song is basically a
mono song when the synth isn't playing.
If you have two audio tracks that sound EXACTLY the same (and
start at the exact same time) they will sound mono to the brain,
regardless of where they are panned.
Try this with a mono guitar track. Copy the track (so you have two)
and then pan one left and one right.
Now, the doubled guitar track will be louder, but you'll never be able
to get them to sound stereo. They will sound mono (right in the
middle) no matter where you pan them.
The only way to create a stereo sound is to make the two guitar
tracks sound different from each other, by adding different effects or
by offsetting their start times. Usually both.
To create a true stereo guitar effect, you DO NOT record the guitar
part only once, double the track, and then pan it left and right.
You play and record the guitar part twice or more (in time) and
then you'll have multiple tracks with natural subtle differences.
When these are panned opposite of each other they WILL sound true
stereo! This goes for any instrument or vocal. This is how true stereo
is created.
This is all basic stereo 101, but a critical mistake I see too often.
I use the Haas effect to some extent on pretty much every song I
mix, considering the fact that clients submit many mono or fake stereo
tracks.
First off, the Haas Effect is used to create a stereo sound from a
mono audio track, or to widen the stereo field of a stereo file.
Note – To use the Haas Effect on stereo interleaved files (that are
really mono), you will first have to separate the left and right side
into two mono tracks. Most DAW software allows you to do this on
import.
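As a sketch of what the Haas Effect does to the two channels (my own toy illustration, not any plugin's actual algorithm): delay one side by roughly 10-30 ms, and the identical pair stops collapsing to the center.

```python
import math

def haas_widen(mono, delay_ms=20, sr=44100):
    """Haas-effect sketch: left gets the dry track, right gets the same track
    delayed ~10-30 ms, which the ear reads as width rather than an echo."""
    d = int(sr * delay_ms / 1000)       # 20 ms -> 882 samples at 44.1kHz
    left  = mono + [0.0] * d            # dry channel (padded to equal length)
    right = [0.0] * d + mono            # delayed copy
    return left, right

guitar = [math.sin(2 * math.pi * 220 * n / 44100) for n in range(44100)]

# Identical copies panned L/R collapse to mono: left and right never differ.
l0, r0 = haas_widen(guitar, delay_ms=0)
print(any(abs(l - r) > 1e-9 for l, r in zip(l0, r0)))   # False -- pure mono

# With a 20 ms offset the channels differ, so the pair stops sounding centered.
l1, r1 = haas_widen(guitar, delay_ms=20)
print(any(abs(l - r) > 1e-9 for l, r in zip(l1, r1)))   # True -- stereo width
```

This also demonstrates the earlier point about doubled tracks: with zero offset the two copies are sample-identical, so no amount of panning will make them sound stereo.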
#3 Lead Vocal Not Loud Enough Or Inconsistent
(30%)
*Problem #1 –
The lead vocal is too low for the entire song. I see this VERY often in
heavy metal songs because many times the singer can't sing very well.
And 90% of the time the client refuses to raise the volume level,
regardless of what I tell them.
*The Solution –
Play a song for a friend who doesn't know the words, and have
them repeat them back to you. Or at least have them tell you which
words they can't understand, so you can make those vocals louder.
*Problem #2 –
The vocals are inconsistent. In the verses they are too loud, in the
chorus they're run over by the music, etc.
*The Solution –
The main reason for most of these problems is the song was not
mixed in sections.
An example: in a hard rock song, the sound engineer sets the lead vocal
volume and uses heavy delay and reverb on it because the guy's
singing is terrible.
In this particular song, the guitars and drums are light in the verses, but
the mix sounds excellent in this section. Lots of space, full sounding
stereo, the lead vocal sounds a little distant but clear, etc.
Now the chorus comes in. Four loud stereo guitars are added to the
mix. Guess what happens to the other tracks once these guitars come
in?
Question for you. If the mix sounds perfect during the verses, and
then four loud stereo guitars are added to the chorus, while
NOTHING changes with the other tracks, can the mix still sound
perfect in the chorus? It's impossible...
This is like adding your preferred cream and sugar to a 10oz coffee,
and then pouring that cup into an empty 30oz one. Now add 20 more
oz of coffee, filling the cup up to the top.
Well, you now basically have black coffee again. The coffee ran over
your cream and sugar. Ha! You have to adjust the cream and sugar
levels again to get your preferred taste.
The same goes for music. If track numbers greatly vary from section
to section, effects and volumes need to be adjusted too. You can't just
set everything at the beginning of the song and let it go all the way
to the end!
For example, if a song starts with five vocal/music tracks and ends
with the exact same five tracks (nothing ever changes), then yes, you
can just mix the beginning of the song and you're done.
In the 90's, when I was just starting out in music, I watched several
mixing videos and read a few books. Not even one mentioned this
concept!! They did exactly what I mentioned above in the hard rock
song example. Set it and forget it.
Here's a small fact to help you understand this concept even better.
A pro mixing engineer who gets paid $15,000-plus to mix one song
uses several different reverbs and delays on the lead vocal track in a
song. Usually, a different combination is used in every section. And
this is done with all tracks to some extent.
So, that ONE reverb you use on your vocal track for the entire
song? That's wrong.
* The Problem -
I see this often in Hip Hop mixes. The bass is completely shut off in a
few sections, leaving a super bright lead vocal. It's terrible. Reason
being, the lead vocals are EQ'ed bright to cut through the big bass, and
this works well while the big bass is playing. But not when it's turned
off.
*The Solution –
Using the Hip Hop example above, there's a good balance when all
instrumentation is playing. So, we need to address that bright vocal
when the bass goes off.
You might even compress any offending tracks harder since they
no longer need to cut through the mix in the bass-less section.
Also, no bass (or thin instrumentation) means the song will be
thinner overall. More delays and reverbs might be necessary to beef
up this section.
*The Problem -
I talked about this earlier. When this happens, it's usually a hard
rock song, where the lead vocal is never loud enough to begin with
and the guitars completely bury it in the chorus. But of course, this
could happen in any song.
With Hip Hop, I don't see this problem very often. The vocals are
up front to begin with, and instrumentation is usually so thin it can't
run the vocals over.
*The Solution –
The solution could also be to remix it for space, giving the vocal more
room. I talk about this in the next chapter.
#3 The Chorus Is Too Loud/Thick Compared To The Rest of
The Song
*The Problem -
The chorus is obviously just too big compared to the rest of the song.
*The Solution -
If the chorus is mixed well and the only problem is its overall volume
level, you can link all the chorus tracks together on your DAW
mixer board and move the faders down as a whole, a few dB
(provided you have your choruses separate from the rest of the song).
Even though track numbers are light, you still want this opening
part to sound full with decent thickness. You can't have a dry weak
vocal and thin instrumentation while we wait for more tracks to be
added.
*The Solution -
If the opening/verses are mixed well and the only problem is their
overall volume level, you can link all these tracks together on your
DAW mixer board and move the faders up as a whole, a few dB
(provided you have them separate from the rest of the song).
Maybe even add more instrumentation. If it's your song, you could
add a light background instrument like a synth pad to fill the
arrangement up a bit.
*The Problem -
Example, the kick drum is too loud in the verses, it's run over and weak
in the chorus, and in the finale the kick is GONE!
*The Solution -
But I like to manually use the gain function on the .wav files when
faced with this scenario. Especially on vocals. Sometimes a few words
get run over by something in the mix. It's easier to just highlight
those few words in the .wav file track and gain them up a few dB,
until they cut through.
It's amazing how you can't even tell the vocals are getting louder, since
the music is too. To the brain, the vocals sound like they are at a
consistent, unchanged level.
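If you want to see the math behind "gaining up a few dB," here's a minimal sketch in Python with NumPy. The sample values and slice positions are made up for illustration, not from any real session:

```python
import numpy as np

def gain_db(samples, db):
    """Scale audio samples by a dB change (positive = louder, negative = quieter)."""
    return samples * (10 ** (db / 20.0))

# Hypothetical vocal take as floating-point samples (-1.0 to 1.0)
vocal = np.array([0.10, 0.12, -0.11, 0.40, -0.38, 0.42, 0.09, -0.10])

# Suppose samples 3-5 are the buried words: gain just that slice up 3 dB,
# the same as highlighting those words in the .wav editor and boosting them
vocal[3:6] = gain_db(vocal[3:6], 3.0)
```

The same conversion works for pulling a linked group of chorus faders down: -3 dB is roughly a 0.71x multiply, and +6 dB roughly doubles the level.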
There are a few exceptions, but usually you want to be able to clearly
hear every track in the mix, to some extent.
C. Very simple, the track just isn't loud enough. You can't
hear it.
I don't see this very often, but when I do it's a bad problem to have.
*The Problem -
The verses are super bright, the chorus is super dull, the bridge is
perfect.
*The Solution -
Listen to the entire song for inconsistencies in EQ. You want a good
balance from section to section.
This is the BIG ONE! At least 25,000 of the over 40,000 songs I've
mastered (to some extent) suffered from lack of space. This problem
is so big, I get a little more detailed with my solutions in this
chapter.
Before I get into the problems and their solutions, let's discuss how a
good commercial radio song should sound.
I can clearly hear the lead vocal, which has its own space in the mix,
and I can clearly hear the effects on it too.
The snare has its own space and can clearly be heard, along with
any effects it might have.
I can also distinctly tell the bass and kick apart; they are not
canceling each other out or muffled.
Everything has its own space. Even the soft synth pads coming in, I
can clearly hear them too. The kick has punch and the bass has
presence.
* The Problems –
I don't know any other way to explain it other than the song mix is
just a big mash of noise. Just solid sound with no space between any
of the tracks. And any effects used are a mystery.
FYI – Of course I know there are genres where solid noise and a guy
screaming is the goal of the song. Ha! I'm not talking about that genre.
98% of the time, we don't want that.
2. Or of course, they just don't know what they are doing. Ha!
* The Solutions –
This was so odd to me. How could this happen with someone who
knows music so well?
The reason? I guess you could say his emotions took over his sonic
evaluations. In his excitement to make this “big sound” he just kept
adding instruments, vocals and effects without limitation.
This would be like someone who wants to start a real healthy diet but
overdoes it, taking 100 vitamins and 10 scoops of protein every
day. There's only so much the body can process at a time. More does
not always mean better.
Those powerful stereo guitars you hear in that song you like could
be only two tracks with a nice distortion effect and delay on them.
Not six guitar tracks using every effect plugin you own, like you're
trying to do!
That thick choir singing in the background that sounds great,
might be only four vocal tracks, not eight like you're trying to squeeze
in there.
Anything you want to sound "real big," many times it doesn't take as
much as you think to achieve it.
Whatever your plan is to achieve that big sound, cut it down a bit since
most people naturally go overboard.
It's so hard to judge your own material, let someone listen and get
some feedback.
If your mix is thin, it will be obvious to the casual listener, and they
will let you know.
Note – I get in a lot of thin songs to work on too. I don't want you to
eliminate so many tracks that your songs are too thin. I'm all for
doubling up instruments and vocal tracks when needed. Just don't go
overboard.
For example, a song has two background vocals in the bridge that
have a heavy hall reverb and slight delay on them.
When the chorus starts, four more background vocal tracks are added
to the original two.
Now that you have six background vocal tracks, you will probably be
able to considerably reduce the reverb and delay on those original
first two tracks.
All six tracks will more than likely have very light effects on them, and
will still sound nice and full.
This also gives you more clarity in the vocal tracks. And you're
leaving space for the rest of the song, which means the entire song will
be clearer and more dynamic.
So many times I've seen people ask online, “What should I set my
instrument volumes at, and exactly which effects should I use, and
how much?”
It's impossible to give someone volume and effect presets for a song
without hearing it.
Even within a single song it's impossible. Volumes and effects have
to be changed accordingly from section to section.
2. PANNING
But then again, I do get in a lot of hip hop mixes that really centralize
the entire mix.
When I first started out mixing for a living, I never realized how
important panning was until I worked with this guy who submitted a
pop rock project. The song arrangements had a lot of acoustic guitars,
but also included synth melodies and occasional vibes.
When you pan this way, there is so much space in the center of the
mix that the lead vocal and drums sound amazing!
Also, you don't have to crank up the volume or brightly EQ the lead
vocal (or snare) so it can cut through the mix. It's just there, clean and
clear with its own space.
I get in many songs to master where the lead vocal is the brightest and
loudest element in the mix, so it can cut through. This is wrong and
will create problems. And it's still not clear because the mix is too
thick.
I saved the best and biggest for last! I would say at least 40% of the
clients I work with don't use subtraction EQ at all!
Most of their songs still sound pretty good after I master them at my
studio, JRmastering.com. But if they had used subtraction
EQ, their songs would move up to commercial radio quality level.
*The Problem -
What these clients do is they record their guitars and vocals, play the
keyboard or synth bass sample, drums, etc., and then balance volumes
and add effects. Basic mixing, just like we all do. Nothing is wrong yet.
Note – This is how I mixed for many years when I first started out. Not
good.
Here's an example. Let's say you have a song that includes a kick, bass,
and a piano (that's playing a lot of low frequency notes). And then
throw in a bassy "Barry White" sounding singer.
This is where Hi & Low pass EQ filtering comes in. Since so many
people know nothing about this, I have to explain the basics.
In the first picture, notice how the sub-bass audio content spans from
20Hz to 3k. And it peaks at +6dB @ 60Hz. 60Hz is where the sub-bass is
most powerful. The meat of the sub-bass is right around 40-80Hz.
Now, look at all the audio content past 200Hz. The sub-bass doesn't
need to span this much of the frequency range. If it does, all it's going
to do is run over the kick/bass combo and bleed into the vocals and
MIDs. This only makes sense.
The sub-bass is very low in the frequency range, and that's where we
want to isolate it.
If you're not doing this with your songs, you will be IN SHOCK once
you apply EQ filtering to every track. The clarity and space you gain
is crazy!
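For readers who like to see the idea outside a DAW, here's a rough sketch of HI and Low Pass filtering in Python, assuming NumPy and SciPy are available. The test tone and cutoff choices are illustrative stand-ins, not a recipe from this book:

```python
import numpy as np
from scipy.signal import butter, sosfilt

FS = 44100  # sample rate in Hz

def hi_pass(audio, cutoff_hz, order=4):
    """Roughly what a DAW HI Pass EQ filter does: remove content below cutoff_hz."""
    sos = butter(order, cutoff_hz, btype="highpass", fs=FS, output="sos")
    return sosfilt(sos, audio)

def low_pass(audio, cutoff_hz, order=4):
    """Roughly what a DAW Low Pass EQ filter does: remove content above cutoff_hz."""
    sos = butter(order, cutoff_hz, btype="lowpass", fs=FS, output="sos")
    return sosfilt(sos, audio)

# Hypothetical sub-bass track: its 60Hz meat plus unwanted 1k content
t = np.arange(FS) / FS
sub = np.sin(2 * np.pi * 60 * t) + 0.5 * np.sin(2 * np.pi * 1000 * t)

# Isolate the meat: Low Pass at 200Hz strips the content bleeding into the MIDs
isolated = low_pass(sub, 200)
```

A real EQ plugin's filter slopes will differ, but the principle is the same: the 60Hz meat passes through untouched while the 1k bleed is cut hard.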
But in this 1k area is the click of the kick. This is one instance where
you don't want to isolate the meat. Doing so would eliminate this kick
drum click.
Some engineers slightly EQ boost this 1k click area, and then filter
out between 200Hz and 500Hz.
For the lead vocals, move around in the 300Hz range with the HI
Pass EQ filter. You want to remove the lead vocal's bass that will
collide with any instrument bass, but you don't want to remove so
much of it that the vocal sounds thin, tinny, and nasally like Pee-wee Herman.
And, the Low Pass EQ filter @ 15k was done more for the sake of
example, to show that you can HI & Low Pass EQ filter the same
track. The audio content is so weak after 10k that Low Pass EQ filtering is
optional. Note – A perfectionist will use this option.
Careful, though: what little audio content you have in your LOW band,
you can't eliminate with a HI Pass EQ filter, or you'll have a bass-less song.
MORE ON EQ FILTERING
The kick/bass EQ filtering is a very important one. If the bass or
synth-bass bleeds into the kick, it could phase (cancel) it out.
Then Low Pass EQ filter the kick around 120Hz, so it's not bleeding
into the bass or synth-bass.
Note – Many times you can't perfectly filter out all the bleed-through.
There will be some going on. Any reduction is an improvement.
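To see what "phase (cancel) it out" means in the worst case, here's a contrived Python sketch (my illustration, not from the book): two identical low tones, 180 degrees out of phase, sum to virtually silence.

```python
import numpy as np

fs = 44100
t = np.arange(fs) / fs

# Hypothetical worst case: the bass and the kick's low end share the same 60Hz tone
bass = np.sin(2 * np.pi * 60 * t)
kick_low = np.sin(2 * np.pi * 60 * t + np.pi)  # identical, but 180 degrees out of phase

combined = bass + kick_low  # the two tracks wipe each other out
```

Real kick and bass tracks only partially overlap and partially cancel, which is exactly why filtering them into their own ranges keeps both audible.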
I was going to make a basic chart for each instrument, but even similar
instruments have subtle frequency differences.
Just pull up your EQ for each track and see where the meat in the
frequency range is. And then isolate it with HI or Low Pass EQ
filtering (or both).
Hissy songs, I use the Low Pass EQ filter on them maybe 2% of the
time. This technique is rarely ever needed.
When it is needed, it's usually some real old guy's crappy rock songs he
transferred from cassette tape (or 8-track). Songs he recorded in the 70s,
when he was in his prime. Ha!
Hi Pass EQ filtering a sub at 20-30Hz? I don't think I've ever done this
in my lifetime. But some sound engineers swear by it.
*The Problem -
Sometimes you have a vocal and an instrument (or a few
instruments) with the EXACT same frequency meat. They're all
competing for space, but you need one to cut through.
*The Solution -
This is where EQ notching comes in. Basically, you're cutting the
meat out of one or several tracks (using a narrow Q) so the meat
from another track can come through.
Or you can do it with an entire instrumental song mix, like the example
below.
2. An entire hip hop song mix. -20dB @ 1.5k using a very narrow Q
of 12
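For the curious, a -20dB cut with a narrow Q is what's called a peaking-EQ band cut. Here's a sketch in Python using the standard Audio EQ Cookbook biquad formulas. The beat signal is a made-up stand-in, and this is my illustration of the technique, not the author's plugin chain:

```python
import numpy as np
from scipy.signal import lfilter

def peaking_cut(audio, fs, freq_hz, gain_db, q):
    """Audio EQ Cookbook peaking biquad; negative gain_db carves out a narrow band."""
    A = 10 ** (gain_db / 40.0)
    w0 = 2 * np.pi * freq_hz / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return lfilter(b / a[0], a / a[0], audio)

# Hypothetical beat: content at 1.5k (where the vocal needs room) and 100Hz bass
fs = 44100
t = np.arange(fs) / fs
beat = np.sin(2 * np.pi * 1500 * t) + np.sin(2 * np.pi * 100 * t)

# Notch out vocal space: -20dB @ 1.5k, narrow Q of 12
carved = peaking_cut(beat, fs, 1500, -20.0, 12.0)
```

Because the Q is so narrow, the 100Hz bass passes through essentially untouched; only the band competing with the vocal gets cut.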
In this example, we will be working with a two track mix. I've done
a mix/master on over 1,000 two track song mixes.
Many times the beat is not arranged or mixed with space to add a lead
vocal track at a later time. So, it's tough getting it to cut through.
In the first picture, the vocal is HI Pass EQ filtered at 300Hz, and then
we boost the meat +3dB @ 1-2k. I guess technically, with this boost
we're creating our own meat.
We just created a vocal track that will nicely cut through the song
mix.
In the second picture, we notch out some of the 1-2k frequency range
to make room for the vocal.
Most of the time this works very very well. You can go from not
being able to understand half the vocals to 100% clarity, once this
technique is applied.
Note – If you're a perfectionist, cut up the music track and do not apply
the EQ notch out when the vocal isn't playing.
Or you can automate this. You only need to notch out vocal space from
the beat when the vocal is playing.
For example, a few blaring guitars are wiping out the lead vocal. You
can notch them out a bit at 1k to make room for the lead vocal.
I would like to thank you for purchasing this book. Your support is
greatly appreciated. I hope it has immensely improved your song
mixing skills.
I wish you the very best of luck in all that you do! Cheers! God
Bless!
Your Friend,
John Rogers
Contact Info
Contact Me - JRmasteringStudio@gmail.com
Ordering Information: