
Table of Contents

Song Mixing Secrets

Who This Book Is For

What You Will Learn From This Book

About The Author

Testimonials

Song Mixing FAQs

Protecting Your Hearing

Setting Up Your Listening Environment

How To Fix Common Mixing Mistakes

#14 Noisy Tracks Are Making The Mix Muddy (5%)

#13 Poorly Mastered Or Over-level (10%)

#12 Rhythm Of The Song is Dragging (10%)

#11 Kick/Bass Relationship Is Poor (10%)

#10 Bass Is Too Loud Or Boomy (10%)

#9 Overall Mix Is Muddy Or Too Bassy (15%)

#8 Overall Mix Is Too Thin Or Tinny (15%)

#7 Too Much Ssss Sound In The Vocals (15%)

#6 Bright Instrument Or Vocal Sticks Out (15%)

#5 Lead Vocal Bassy Or Too Thin (20%)


#4 Narrow Mix, Sounds Bad In A Car (30%)

#3 Lead Vocal Not Loud Enough Or Inconsistent (30%)

#2 Inconsistent Tracks And Sections (60%)

#1 Not Enough Space In The Mix, Very Crowded - Especially in The Choruses (70%)

My Main Resources

Thank You!

Contact Info
Song Mixing Secrets

How To Fix The Most Common Mistakes
By John Rogers

SongMixingSecrets.com
Revised 2021 - Copyright 2019

All rights reserved. No part of this publication may be reproduced, distributed, or transmitted in any form or by any means, including
photocopying, recording, or other electronic or mechanical methods, without
the prior written permission from JR Mastering, except in the case of brief
quotations embodied in critical reviews and certain other noncommercial uses
permitted by copyright law. For permission requests please email John
Rogers at JRmasteringStudio@gmail.com.
Who This Book Is For

You're pretty good at mixing, but you know something is preventing your music from sounding like commercial radio songs. And you want
to know what it is!

This book is for someone who has a good understanding of basic mixing procedures. Sometimes as a solution to a problem, I might
suggest compressing the lead vocal a bit. But, I don't get into detailed
threshold and ratio settings. I assume you already know how to
compress a vocal track.

I wrote this entire book in simple plain English (layman's terms). I eliminated all the words you've never heard of and the hi-tech jargon, so
anyone at any level can understand and learn from this book.

You've invested hundreds, if not thousands, of hours into your music. If you're serious about it, now's the time to make a very small
financial investment in this book so your music will sound the very
best it can!

I wrote this book so you can quickly learn (in a matter of days) the
techniques, tips, and secrets that took me over 19 years to learn!
What You Will Learn From This Book

Since 1999, the year I opened my recording studio JRmastering.com, I've worked with over 8,000 clients (that's 6-8 per week). Roughly
80% of these clients mix their own music at home. And I must say,
most of their mixes are very good. They're very close to commercial
radio sounding mixes.

These mixers definitely do not need to read a 400-page book on basic mixing techniques, or take a six-week mixing course. They're way
beyond that.

What they do need is a book that points out the most common
mixing errors I see daily, so they can check their mixes for them.
Problems they don't even realize their mixes suffer from. And then, use
the information in this book to correct these problems. That's what
“Song Mixing Secrets” is!

This book is a learning guide, filled with in-depth information. I wrote it from the perspective of a recording studio owner who's worked
with many satisfied clients. Working with this many clients gave me
the rare opportunity to discover what areas most sound engineers are
having problems with.

If you told me the style of your music, I could pretty much tell you
what will be wrong with your mixes, without even listening to them!
Not exactly, but it's the same mixing mistakes every day, song after
song.

This book does not include a hundred extra pages of theory and
worthless advice you'll never use. I included only information you
need to know. Solutions to real-world problems that WILL occur in
YOUR song mixes.

I compare this book to a good baseball coach. I don't know if you're familiar with how a major league baseball player's season goes, but the
regular season runs from April to October.

In the winter, players who need it partake in off-season leagues or advanced hitting training. During these training sessions, hitting
coaches film the player's swing and determine what improvements can
be made. The players then apply these suggested changes to their
swing.

There are many many examples of players who, over the winter, went
from a .260 avg, 15 HR hitter (ready to lose his starting job) to a .290
avg, 35 HR hitter who ends up signing an extension for $80 million
dollars! And sometimes all the coach told him was to simply keep his
head straighter and change his swing launch angle by 15%. That's it!

So, was that a huge secret? Well, to the player in this example it was
“the $80 million dollar secret!”

And I'm hoping this book has the exact same effect on your mixes.
Simple solutions that have major positive effects.

Also, this book was re-released in mid 2021. It covers current mixing
techniques using up-to-date gear. Since 90% of the current software
plugins weren't available in 2007, books written at that time or earlier
are primarily analog gear based, using old school mixing techniques.
About The Author

Hello, my name is John Rogers. I'm a professional sound engineer and have been mixing and mastering at my Las Vegas studio
Jrmastering.com, since 1999. I've worked with many highly satisfied
customers and have mixed and mastered songs in every genre and style
imaginable.

I've worked with several Grammy nominees and winners. I've also
mastered many billboard top 10 songs in Europe, movie soundtracks,
videos that have appeared on MTV, and dance/EDM music that has
been played in dance clubs all around the world.

Learn in a matter of weeks what took me over 20 long years to learn! Let's get started!
Testimonials

Here are a handful of testimonials from former clients.

I do everything I can to get my clients' music sounding the very best it can! And I treat each project as if it were my own.

That is very kind of you to give me such detailed notes on ways I can
improve future mixes and recordings.... I've been reading up on your
mixing tips articles, and they are very informative, but having
something that's personalized feedback on my mixes is beyond
fantastic! I really want to thank you, John, again for everything, all the
tips, all your time and work, you've got loyal clients here, I will
certainly continue to always bring my business your way... Aaron F.

Every project I work on always goes to John! He has an awesome way of turning your track into something really great! I always suggest him
to other producers. I will continue to use him on ALL my tracks, and I
would recommend him for all of your projects!
Jake W.

They sound great!! Thanks for finishing so quickly! Jared Reddick, Bowling For Soup

I don't know how you do it, but you are amazing! You are so dead on
with your instincts and skills. I'm gonna drop $36 into your PayPal
account. Keep the extra $30 as a tip. Go have a beer on me. An
EXPENSIVE one! Wow, you have just blown me away with your
work. I'm coming to you every single time, man. You are the tops.
Give me a call anytime you need anything, and if I can help, I will.
And when we gear up for our next CD, I'll give you a shout. Thanks
again, man. You're just awesome. Felix
SONG MIXING FAQS

Here are answers to a few common song mixing questions.

What's The Difference Between Song Mixing And Audio Mastering?
In song mixing, you're "mixing together" multiple audio tracks to
create a song. Different tracks like the lead vocal, bass line, guitars,
drums, etc. are being combined together. Then proper effects like EQ,
reverb, delay are added to each track, along with panning and volume
adjustments. In the final step, the mix is exported to create a song in
the form of a single stereo interleaved .wav or .aiff file.

In audio mastering you're working with one stereo interleaved file. And, the entire song is affected by effects processes, not individual
tracks.

At least once a week, someone uploads one song .wav file and says
they want it mixed and mastered. Well, you can't mix one file. The
word "mixing" is a verb and you need at least two tracks (files) to be
able to mix them together. You need at least two of anything to be able
to mix it together!
What's The Main Goal In Song Mixing?
I want to first start out with your goal in audio mastering. Your main
goal in audio mastering is to replicate the sonic qualities of a well-mastered professional commercial song, in the same genre and
style as the song mix you are working on.

Yes, everyone has their own slight preference adjustments like a little
more bass, brightness, etc., but overall you want to be at least 80%
similar to the current industry standard. The only exception is if a
client specifically requests an old school master. If that's the case, then
you're going to have to replicate the sonic qualities of songs from a
past era.

Now when it comes to song mixing, achieving your goal isn't as easy as just comparing your mix to a song on the radio and
matching it. Why? Because the songs on the radio have already been
mastered, and you are only mixing. There's nowhere for you to listen to examples of unmastered commercial radio mixes, so you can try
to match them.

Note – I do have 50+ great unmastered mixes on my website, so you can get a feel for how an unmastered mix should sound.

https://jrmastering.com/audio-engineer-portfolio-reviews-samples.html

Notice how these mixes are slightly dull and have a lot of headroom.
The rule when preparing a song mix for mastering: you want to keep everything slightly “below” industry standard, so the mastering engineer can bring the mix “up” to standard. That's as opposed to a mix that's overly bright, has way too much bass, is distorted, or is already louder than a song on the radio. Too much cutting in audio mastering makes a song flat and removes its sparkle. A mastering engineer prefers adding, not cutting.

Back To The Main Topic – So, What Is The Main Goal in Song
Mixing?

In song mixing you want to create a good mix that focuses on the
sonic qualities that can't be corrected in audio mastering.

Elements like overall clarity and track separation (space), proper track
volumes in relation to each other, correct panning, track EQ and
effects, these all need to be done properly in the mixing process.

Why Do Songs Need To Be Mastered?


I've seen this question on the Internet many times. The answer I always
see is "Because all songs on the radio have been professionally
mastered, yours should be too." This is a true fact, but not an
answer.

Yes, your songs need to be mastered because you want them to have the same qualities as a well-mastered professional
commercial song, but this can only be achieved by using effects on
the stereo/main out bus. Which is what mastering is (using effects on
the stereo/main out bus). Many critical processes can only be done in
mastering.

Here are a couple of examples of why a song needs to be mastered:


Do you want to be able to play your song loud without it breaking
up? Then you need mastering. In mastering, you can compress the
entire song as a whole (or in separate BANDS) so it doesn't peak too
hot or distort during loud playback. Compressing a song in mastering
also kind of meshes everything together. In mixing, you can only
compress individual instruments like the bass, vocals, etc. This does
nothing for loud playback nor does it mesh the song together as a
whole.

Do you want your songs to be as loud as the songs on the radio? Then you need mastering. In mastering, you can use a loudness
maximizer and make your songs as loud as needed without distorting.
You can't do this in mixing just by cranking up your levels. You will
distort before reaching your desired loudness.

This question is kind of like asking, "Why does my cake need to go in the oven?" If you want it to be a real cake, and be similar to other
cakes, the oven is the final process to get it there. You don't have a
cake unless it goes in the oven! You only have raw mixed ingredients.
The oven meshes everything together to be like other cakes.

Can Anyone Become A Great Mixing Engineer?


I would say YES, most people can become a great mixing engineer.
I say this because most of the mixes I receive from clients are pretty
good and I know the audio engineer (the band member with a
computer) has only minimal training. He could easily be great if he put
a little more study and practice time into it. And if he had this book to
teach him what took me over 19 years to learn! Ha!

The bigger question is, how much time and effort are you going to
put in? You know, everyone can be in good physical shape, or at least
in decent shape. The choice is yours.

But, is there going to be a diet and exercise plan that's followed daily
for months, or is everything going on eBay or craigslist a couple weeks
after starting? It's up to each individual to do what's necessary to
achieve their end goal.

You get out what you put in, even when it comes to song mixing.
This book gives you the knowledge and tools you need to become a
great mixing engineer. And you'll learn a lot faster than I did. But it's
still up to you to read this book in its entirety and put in the
practice time.

How Long Does It Take to Mix A Song?


Honestly, there's no definitive answer to this.

Mutt Lange, the engineer who worked with Shania Twain, Def
Leppard, AC/DC, Michael Bolton and a slew of other stars was a
LEGENDARY slow worker.

Truly AMAZING RESULTS (and like I always say, all anyone cares
about is the end result). But, a sloooow methodical worker.

I'm not going to go back and read up on him again for exact details, but
I remember reading each song takes him a couple months to record and
mix. I said a couple months for ONE song!

I've heard many engineers quote 10 hours to mix a 25-30 track song. This sounds fair.

One factor is how well you know your EQ and effects, and how
many you use. If you've been doing this for many years, you already
know HI & Low Pass EQ filtering, and can apply it right off the bat.
You also have favorite effects you like to use for each genre, and know
the delay times, which reverbs to use, etc. on vocals in certain
situations.

Someone like this could finish a song MUCH FASTER than someone
just starting out who is in “trial and error mode,” testing different
effects on every track. Someone like this could take weeks to mix
one song. And it still might suck! But that's a different story.

The number of effects used will also affect song mixing time. A pro
might use a dozen different effects on the lead vocal, changing them in
each section. While a newbie might use ONE EFFECT (plate reverb)
on all the vocals in the entire song.

That all being said, I'm going to go with 6-8 hours to mix an average
25-30 track song, over the course of a couple days. I say a couple
days because I'm a firm believer in the “next day fresh ear” mixing and
mastering method.
I like to clear my head and listen to a mix (or audio master) fresh the
next day, and then make my final adjustments and tweaks.

Should Your Mix Sound Close To A Mastered Song?
This section is basically a tip for those of you working with clients or if
you plan on submitting your mixes to a mastering engineer.

DEFINITELY 100% NO!!

I've actually seen a few sound engineers online say the opposite. They
say to make your mix sound as close as possible to a finished master.
"All you want the mastering engineer to do is make the song louder."
These comments are based on common sense and theory, NOT on
real world experience working with thousands of clients (like I
have)!

If the only thing your songs really needed was for the mastering
engineer to make them louder, why not just make them louder yourself
and save the money? Duh! No, pay someone $500 to do it in 15
minutes. I want that job!

Here's why you don't tell clients to partially master their own songs:

1. Because they're paying an experienced mastering engineer to properly do the entire job.

Here's a non-music example. Before you get your car detailed, do you clean it spotless inside and out, but leave only the windshield dirty, so all the car detailer has to do is wash your windshield and he's done? NO! That doesn't even make any sense.

You're paying for a car detail and you want every process that
comes with it. That's their specialty and you want their expertise in
every area, not just the windshield washing process. The same goes for
audio mastering.

2. Mixing is not mastering. For a song to sound like a commercially mastered song on the radio you MUST use effects on the stereo/main
out bus. When you do use effects on the stereo/main out bus you are
mastering not mixing. And you're not supposed to partially master your
songs if they're going to a mastering engineer. Read the paragraph
above again.

3. And the #1 reason is THEY CAN'T DO IT! They don't have the
knowledge, skills, replicating abilities, etc. That's why they're looking
for a mastering engineer to begin with!

I explain to my clients to submit a mix that's clean, but slightly dull with lower overall volume levels. That way I can bring everything up
to where it needs to be. Nothing on the stereo/main out bus.

In the past, before I was giving this info, do you know what my
clients would send me? I would say half the submissions were partial
masters where the clients were trying to make them sound radio ready.
The problem was they were TERRIBLE! Way over-level, distorted,
super bright and/or super bass, way too much compression, etc. Many
times they had everything wrong!
In a perfect world, yeah send me a song that's already mastered
and I don't have to do anything to it. But I know from years of
experience working with actual people that this is a very unrealistic
request. Most can't do it, that's why they're looking for a mastering
engineer to begin with!

I've also heard the comment that mastering is taking all the songs
on a CD and making them all sound similar. Really? That's all
mastering is? So, I'm supposed to take the best mix on the CD and
make all the other songs sound just like it? That doesn't even make any
sense! What if the best mix on the CD sucks?

What if someone gives me only one song (which happens daily)? It can't be mastered because there isn't an entire CD?

I make every song on the CD sound as close as possible to a commercial industry standard song. Each song is mastered to sound
the very best it can, regardless of past or future songs on the CD. Also,
every song is mixed differently and requires different actions to
achieve this. After I do this, all the songs on the CD are comparable in
every sonic area.

Audio mastering is taking a mix and bringing it up to commercial industry standards.

What Is Headroom And Dynamic Range?


In order to produce a good master, a mix needs proper headroom and
dynamic range.

WHAT IS HEADROOM?
Headroom is the distance between an audio track's peak level (when
the meter is at its highest) and 0 level on the output meter.

As a song plays, the output meter on the stereo/main out bounces up and down with the music. You can see how high (loud) an audio track
peaks by looking at this meter. Anything peaking over 0 level usually
means distortion, so at all costs stay below 0 level.

How far is your peak loudness below 0 level? That's the simple
definition of what headroom is. So, if your meter is peaking -3dbs
below 0 level, you have +3dbs of headroom. If your meter is peaking
right at 0 level, you have NO headroom.
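
For anyone who likes to see the arithmetic spelled out, here's a small Python sketch of that definition. It assumes your mix is loaded as floating-point samples where 0 level on the meter corresponds to a value of 1.0; the test tone is just an example.

```python
# A minimal sketch of the headroom arithmetic described above.
# Assumes the mix is an array of floating-point samples where full scale
# (0 level on the meter) is +/-1.0.
import numpy as np

def peak_db(samples: np.ndarray) -> float:
    """Peak level of the mix in db relative to 0 level (full scale)."""
    peak = np.max(np.abs(samples))
    return 20 * np.log10(peak) if peak > 0 else float("-inf")

def headroom_db(samples: np.ndarray) -> float:
    """Headroom = the distance between the peak and 0 level."""
    return -peak_db(samples)

# Example: a test tone peaking at about -3db leaves about 3db of headroom.
tone = 0.707 * np.sin(2 * np.pi * 440 * np.arange(44100) / 44100)
print(round(headroom_db(tone), 1))  # prints roughly 3.0
```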

Note - +3dbs to +6dbs of headroom is the standard recommended amount, but many times a song with a little more or less headroom can
still be mastered with no problems. But, it really makes no sense to
give a mastering engineer a mix with 0dbs of headroom and risk the
chance that it's slightly distorted. There's no reason to do it because
overall song volume is done in the mastering process, not in mixing.

WHAT IS DYNAMIC RANGE?

Dynamic range is the area between the peak level (when the meter
is up) and the low level (when the meter is down).

Roughly, 3dbs to 6dbs of movement between the high and low meter
level is a typical amount of dynamic range, but this does depend on the
genre of music.

Note - One problem you want to watch out for (if you're mixing your
own music) is dynamic range between song sections. You don't want a
+10db or more difference between the verses and choruses. This
creates a mastering problem because the verses will never be loud
enough compared to the rest of the song. Roughly a 6db difference is
as high as you want to go.

Why Does A Mix Need Headroom And Dynamic Range?
In one sentence, it gives the mastering engineer more room to work
with.

If I compared a hair stylist to a mastering engineer, hair length would be headroom. If someone came in with 18" of hair, the sky's the limit
as to how she could style it. But, if they came in with only 1" of hair,
her style options are very limited. In audio mastering, no headroom
limits your options.

In the audio mastering process, a series of EQ boosts and cuts are performed. Most of the time you're going to need to boost something,
even if it's only a little +2db boost at 100hz. Well, if the song is
already at 0 volume level or higher, you might not be able to make
a necessary boost without distorting.

Also, if a song has very low dynamic range (the meter barely moves)
it's probably over-compressed. Which means it could lack punch,
power, clarity, or could even limit EQ options. You want your song to
have some dynamic life!

I want a song mix with some headroom and decent dynamic range.
I want to EQ it as necessary, I want to compress it as necessary, I want
to be able to set the overall volume as necessary, and I don't want to
work with a distorted mix.

How To Create Proper Headroom In Your Mixes


Creating a mix for mastering with proper headroom is pretty simple.
In mixing, never let your levels go over 0db on ANY of your
individual instrument or vocal tracks. If you do this, 99% of the
time you will be under 0 level on the stereo/main out meter, which
leaves you some headroom (not necessarily +3db or more but at least
+1db).

Don't compress your tracks heavily and you'll usually have adequate dynamic range too.

That's usually all you need to do to achieve proper headroom, but there
are always exceptions. If most of your tracks are HOT and peaking
right at 0 level, your song mix could be at 0 level with no
headroom. It won't be distorted (if not recorded distorted), but you still
won't be leaving the audio mastering engineer much to work with. If
this is the case, after finishing your mix, link all the channel faders
together and just pull them all down a few dbs. Now you have
headroom.
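
If you're curious what that linked-fader trim actually does to the numbers, here's a tiny Python sketch. The track arrays and the -3db figure are only made-up examples: pulling every fader down by the same number of dbs multiplies every sample by the same gain, which drops the mix bus peak by that many dbs.

```python
# A small sketch of "link the faders and pull them all down a few dbs."
# Assumes each track is a floating-point numpy array (0 level = +/-1.0).
import numpy as np

def db_to_gain(db_change: float) -> float:
    """Convert a db change (e.g. -3) into a linear volume multiplier."""
    return 10 ** (db_change / 20)

def trim_all_tracks(tracks, db_change):
    """Apply the same trim to every track, like linked faders."""
    gain = db_to_gain(db_change)
    return [track * gain for track in tracks]

# Pulling every fader down 3db multiplies each sample by about 0.708,
# so the peak on the stereo/main out also drops by 3db (instant headroom).
```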

Remember, loudness maximization is done in mastering not mixing. DO NOT keep raising your faders way over 0 level in an
attempt to match the volume level of your favorite song. This will
distort it. Yes, you matched the volume level, but now you have a
scratchy distorted mess!
PROTECTING YOUR HEARING

This chapter is also included in my best-selling book, Audio Mastering Secrets. It's so important, I had to repeat it here.

If you did become the greatest sound engineer who ever lived, but had
to suffer from hearing a loud HISS sound while relaxing on the beach,
that's something you'll have to live with for the rest of your life. No
amount of money can fix it. Hopefully this chapter helps prevent that
problem.

Millions of people suffer from tinnitus. A new study shows around 10% of the U.S. population suffers from it in some form, but many
have never even heard of it until they got it! Unfortunately, I was
one of those people. It can happen quickly and it lasts a lifetime...

As a sound engineer, your hearing is your most important asset. It's critical that you protect it for as long as possible. In this section, I
will discuss the causes and prevention of tinnitus, and general safety
practices that will help you keep your ears healthy.

What Is Tinnitus?
Tinnitus is the constant hearing of a sound when there is no sound
present. Some describe it as a ringing sound, a hiss, or a high pitched
tone. The sound is continual, and it varies from one tinnitus sufferer to
another.
How Is Tinnitus Caused?
Tinnitus is caused by either a single extremely loud sound or by
loud sounds over a period of time. I know a military vet who got
severe tinnitus from the sound of jets taking off in close proximity.
Another guy I know got it from a single bomb explosion that was right
next to him. Listening to loud music at a concert or club, playing in a band, playing music loud on your iPod, or monitoring music very loud as a sound engineer: over a period of time, any of these scenarios could cause tinnitus. If you cut grass for a living and don't wear earplugs, I would imagine that could eventually cause it too.

It can also be caused by prescription drugs in the benzodiazepine family or even by over the counter drugs like ibuprofen. I heard of
a man who got severe tinnitus from MSG in Chinese food. The cook
made a mistake and loaded it up heavy with MSG.

How Did I Get My Tinnitus And How Did It Sound?
I got tinnitus in 1999, working on one of my very first mastering
projects the day I opened my studio, JR Mastering. At that time, I
needed to listen to the songs much longer than I do now because of my
lack of experience, and I listened to them WAY too loud.

I was reviewing my final masters at a high volume level (105-110 dbs) for about 30 minutes non-stop. When I was finished I didn't
really notice anything. But, when night time came I could hear a high-
pitched tone. It sounded like a 40db test tone @ 5k. I could hear the
tone in both ears, but my right ear was twice as bad as my left. The first
few days it was hard for me to sleep because I kept thinking about this
sound. The sound was also very annoying when sitting outside in a
quiet area.

Is There A Cure For Tinnitus?


No. There are many pills and snake oil products online, but I have
never heard of a valid cure. But, experts say it gets better over time
as long as you don't make it worse.

The first few months I suffered from tinnitus, I would say it was very
annoying at night but not too bad during the day. It did not affect my sound engineering. After six months, it improved about 25%. After a
year, it's roughly 50% better. Now, many years later, I would say I
have maybe 20% of the original tinnitus I got in 1999 (an 80%
improvement). It's pretty much gone because it was a mild case of
tinnitus to begin with, and I took preventative measures so it wouldn't
get worse.

Can You Mix And Master Music With Tinnitus?


Mine was not severe, so it did not hinder me at all. The noise I heard
was a 40db test tone @ 5k. I started doing my initial mixing and
mastering at around 85dbs, so the tinnitus tone was pretty much
masked (drowned out). It's kind of like when someone records mic
hiss. You can hear it when the music stops, but when the guitars are
playing you can't hear it at all because the hiss is being masked.

Singer Phil Collins retired because of his tinnitus. Bono also has very
severe tinnitus that greatly affects his everyday life. If your tinnitus is
very severe like those two, I'm sure your sound engineering skills
would be greatly affected.

How To Prevent Tinnitus In Everyday Life


Always use hearing protection (earplugs) when at a concert or a club
playing loud music, when cutting grass, when using a blower or
electric power tools, and for sure when shooting a gun. Any situation
where a continuous 100db sound is present.

Also, don't listen to music over 100dbs for long periods of time.
OSHA recommends no more than 1 hour @ 105dbs. I would NEVER
go more than 20 minutes straight at 105dbs, if that. Also, be careful
with ibuprofen and prescription drugs.

How Loud Is Too Loud When Mixing And Mastering?
Well, 105-110dbs for 30 minutes straight was too loud for me. I
had tinnitus at the end of the 30-minute session! Everyone is
different, so I don't want to give you specific sound ranges and upper
limits. All I can tell you is what gave me tinnitus, and what's worked
for me to improve my tinnitus 80% since 1999.

How To Protect Your Hearing As A Sound Engineer
THE GUINEA PIG EXPERIMENT

I read a study online where a scientist exposed guinea pigs to extremely loud music. The results were that the guinea pigs who
listened to extremely loud music continuously, say for 30 minutes, had
severe structural damage to their internal ears.

The guinea pigs who listened to the same extremely loud 30 minutes of music, but not continuously (roughly 2 minutes of music and then a 2-minute break), had FAR LESS severe damage to their internal ears compared to the first group.

Now remember, both groups listened to music the same total time
(30 minutes). But, continuous listening was far more damaging
than intermittent listening.

The Rules I Follow During Song Mixing And Mastering
I initially mix and master at a lower level. I keep levels around 80-
90dbs. I listen 10-15 minutes at these levels, then take a 5-minute
break. I will do this for between 2-4 hours, then I take a full two-hour
break.

When it comes time for the finalization (loud listening @ 105-110dbs) I NEVER go more than 2-3 minutes continuous and I pretty much split time. If I listen to loud music for 5 minutes, I take a
5 minute break before starting up again.

Above is what I've been doing since 1999 and it works for me.
Maybe you can go 100dbs for 8 hours a day and never have a problem,
but I'm not risking it. 105-110dbs for 30 minutes continuous ruined
my ears pretty good, and I had to make sure they didn't get any
worse.

Tinnitus isn't fun. I might be going overboard a bit with my silence rests, but it's working for me.

Better safe than sorry with tinnitus because it lasts A LIFETIME!


SETTING UP YOUR LISTENING ENVIRONMENT

Mixing and mastering audio starts with your listening environment. If it isn't giving you a true sound, you'll be lost.

Room Size
Technically, you can properly mix or master in any room size. But,
I believe a smaller room is better than a very large one for someone
who's just starting out. And when I say smaller I mean closer to 12'x15'
than to 20'x30'. I've mixed and mastered songs for a number of years in
a 20'x30' room. It took me a few days to get used to it, but after that I
could do it.

The obvious problem with a big room is it's a very open space. If
you don't have a good acoustic setup, the room will add reverb to every
song.

You have to compensate for this on every song you mix or master,
because the extra reverb you hear isn't really in the music. It's coming
from the room.

In a smaller room, even with no acoustic treatment, your mixes and masters will all sound more true. They won't be discolored from
bouncing around a big room.

Speaker Choices
I've used dozens of different brands of speakers in my career and I do
like a few better than others. But, this section deals more with types and
sizes of speakers, not with brand choices.

The main mixing and mastering speakers I currently use are Dynaudio
100w powered studio monitors with 6" woofers and 1.1" tweeters.
They have nice EQ adjustment options on the back and I know these
speakers very well. If you get speakers that are a little larger, you'll get
better LOW-end out of them, but I'm happy with the size I use.

The most important part of your speaker setup (that a lot of newbies don't know about) is having a sub-woofer on the floor
between your main studio monitors. If you're using 4-6” monitor
speakers, it's impossible to correctly mix or master any music content
under 150hz without having a sub-woofer. 4-6" studio monitors will
not play the low 60hz sub-bass AT ALL, and they're weak at best in the
100hz area.

Speaker Placement
Before I tell you the setup I like best, after many years of
experimentation, I'd like to first tell you the setup I personally don't
like (even though a lot of sound engineers do this). Two studio
monitors, five feet apart, on a desk two feet away from their face. And
NO sub-woofer! I think they call this "near field" monitoring.

But at some point during the audio mastering process, you must
crank the music up very loud to set your final compression and to
hear how it translates at high volume levels. You can't do this if your
speakers are right next to your ears! At least I can't.

Maybe this is why the songs I get in for re-mastering badly break up
when cranked up loud, and the bass is totally washed out. They were
originally mastered at very low levels without a sub-woofer, and not
optimized for loud playback.
I also find it hard to determine the overall depth and stereo width in music when the speakers are two feet in front of me. Which makes sense. It's like watching a 50" TV. I want it to be far enough
away so I can take the whole picture in. No one puts a 50" TV on a
table right in front of them, yet this is done with speakers.

My Personal Speaker Placement

First off, I use speaker stands for my studio monitors, and the speakers
stand 3.5 feet off the ground. I have the stands roughly 7 feet apart, and
the speakers are about 6-7 feet away from my face. The sub-woofer is
on the floor, centered between the two speaker stands.
Note - Do not put the back of the speakers right up against a wall.
Have at least 10 inches between the back of your speakers and the wall,
or the sound will be altered.

I've found this setup is close enough where I can hear all the details in
the music, wide enough so I get a full representation of the stereo field,
and the speakers are far away enough so I can crank the music up to
105dbs to make sure it sounds right for loud playback, without blasting
myself in the face.

Learning And Calibrating Your Speakers


When I first start out with NEW speakers (though I never change them
now), I listen to my favorite hit songs in every genre and style. Songs
that I know from my years of experience have X amount of bass, X
amount of brightness, etc. I know how these songs are "supposed" to
sound.
Most good speakers have EQ adjustment switches on them, and the
sub-woofer has a volume control on it. After several listens, I'll slightly
adjust the EQ on the monitor speakers and the bass amount on the sub-
woofer so that my favorite reference songs sound "true" to me. I'm
making my speakers sound true to life, not exaggerated in any sonic
area. Once the songs playing through these speakers sound "true"
to me, then anything I mix or master will be done correctly.
How could speakers sound untrue to me? An example, if I'm playing
a few commercial hip hop songs and the bass sounds very weak on
every song, the speakers I'm using are untrue because I know in reality
the bass should be much higher/louder in hip hop. I know the sub-
woofer needs the bass volume adjusted and maybe the studio monitors
do too. And I adjust them accordingly.
If I worked with these untrue speakers, I would improperly raise the
bass on every song, thinking it was too low, when in reality the
speakers aren't properly playing the bass.
It's easier when working with true speakers because what you hear
correctly represents the audio material you're working on.

Sound Proofing Your Room


Way too much is made out of room sound proofing. I've audio
mastered songs to perfection in an untreated basement (when I started
out), in huge and small treated pro studio rooms, and I've had several songs that were on the radio in Europe, that ranked on the billboard
charts, that were ALL mastered in a quiet small UNTREATED
bedroom. No one can honestly say it can't be done, I have the radio
masters to prove it.
You will have to listen to a few reference tracks using good speakers
you know well, to learn your environment. And some client feedback is
required. But after some practice, you can do it.
Now, if your listening space echoes or just sounds too wide, by all means deaden the sound. Remember, your goal is to get a true sound in your room. But, if your studio is a small carpeted room, you don't
necessarily need foam or sound proofing on your walls. You already
have a quiet room. The important thing is to learn your listening
environment, whatever it is.
Note - If you feel you have to have some foam or acoustic treatment in
your room, BUY IT! Remember, you'll still have to learn how the room
sounds after you install it. Many don't get this part right and their work
is very poor. Again, my point is you don't necessarily need it for audio
mastering if your room doesn't reverberate or heavily color the audio. This is
common sense. But, I see too many people online that won't start mixing and
mastering until they can scrape up $1,000 for room treatment.
I've gotten in over 100 remaster jobs that were originally mastered at
studios with thousands of dollars worth of foam on their walls, and
these masters were HORRENDOUS! A trainwreck of distorted noise! The foam did a lot of good...
How To Fix The Most Common Mixing Mistakes

I've mastered many songs, and time and time again I've noticed they
all suffer from the EXACT same common basic mixing mistakes.

In this section I'm going over the top 14 mixing mistakes I see daily,
and how to fix them.

The percentages you see next to each title represent how often I
see the mistake overall. I say overall because if you see (10%), that
doesn't necessarily mean I will see this problem once on every 10 song
CD.

Using hip hop music as an example, I might get in 9 CD projects that all have proper headroom, but then I'll get in a hip hop project where
the entire 10 song CD is over-level. Even though the entire CD is over-
level, that's still 10% overall compared to everything I get in. But
100% for that hip hop CD!

I'm just mentioning this because I don't want you to see 10% and think
you HAVE TO have one song with a particular problem on every CD
you work on.

If you're working with clients, mixing engineers are fairly consistent. If the first song in a project has one or more of my listed
errors, many times the entire CD will have that same error.

Even though many of these mistakes and their solutions are pretty
simple (while others might be new to you), they all have to be
mentioned so you know what to look for when finalizing your mixes.

As I'm writing this book, I'm starting to realize that knowing what you are doing wrong is just as important as me telling you how to fix it.

Like a checklist, go over all 14 of my mixing mistakes, making sure your songs don't suffer from any of them. Doing this will take your
mixes to the next level!
#14 Noisy Tracks Are Making The Mix Muddy (5%)

I see this problem mainly in songs where mics are used to record
instrument tracks, like a rock song. As opposed to a song made
using a keyboard with noise free samples. Those songs could contain
distorted tracks, but track noise is a rarity.

* The Problem -
Tracks are recorded with a slight hiss. When all vocals and
instrument tracks are playing at the same time, you can't notice any
hiss. The audio content is loud, so the hiss is drowned out.

But even though you can't always hear the hiss, when you have 10+
tracks with a tiny amount of it on each one, it adds up to an element of
space eating mud.

When I start mastering a song, the hiss is apparent at the very beginning or end, when the music isn't playing. You can even see it
in the .wav file. Silent tracks are hairline thick in the .wav file, while a
hiss track (that's supposed to be silent) has a little thickness to it. Not
jagged, it's just several times thicker than the hairline.

* The Solution -

Before you even begin recording anything, record silence. Is the .wav file a hairline or is it a little thicker?

Then play back the silent track you just recorded and increase its
volume. If it gets louder, obviously it's not silent! And if it isn't silent
during this test, it won't be when you start recording either.
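
If you want to put a number on that test instead of eyeballing the .wav, here's a rough Python sketch that measures how loud your "silent" recording really is. It assumes the test take was exported as a .wav and read with the soundfile library; the file name and the ballpark figures in the comments are only examples.

```python
# A rough sketch of the "record silence" test, measured instead of eyeballed.
import numpy as np
import soundfile as sf  # pip install soundfile

def noise_floor_db(path: str) -> float:
    """Average (RMS) level of a supposedly silent recording, in db below 0."""
    samples, _sample_rate = sf.read(path)
    rms = np.sqrt(np.mean(np.square(samples)))
    return 20 * np.log10(rms) if rms > 0 else float("-inf")

print(noise_floor_db("silence_test.wav"))  # hypothetical file name
# A genuinely clean input usually reads way down (around -80db or lower).
# A reading up in the -50s or -60s is the hiss that fattens up the .wav line.
```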

Usually these hiss problems are because something is gained too high. It's either the gain boost on the preamp (stand alone or on your
audio interface) or the gain boost going into your DAW software,
which you control within the DAW program itself.

Usually if these two areas are not over-gained, you won't have the
hiss problem.

Check this out before you start recording! Why add extra noise?
#13 Songs Are Partially Mastered, Poorly Mastered, Or Over-level (10%)

This problem pertains more to working with client submissions, or if you're going to be sending your mixes to my mastering studio,
JRmastering.com. Ha!

But then again, if you are doing your own mastering, your mixes still
can't be over-level and distorted when you start the mastering process.

I'm going to repeat the text from an earlier chapter so you don't have to
go back.

How To Create Proper Headroom In Your Mixes

Creating a mix for mastering with proper headroom is pretty simple. In mixing, never let your levels go over 0db on ANY of your
individual instrument or vocal tracks. If you do this, 99% of the
time you will be under 0 level on the stereo/main out meter, which
leaves you some headroom (not necessarily +3db or more but at least
+1db).

Don't compress your tracks heavily and you'll usually have adequate dynamic range too.

That's usually all you need to do to achieve proper headroom, but there
are always exceptions. If most of your tracks are HOT and peaking
right at 0 level, your song mix could be at 0 level with no
headroom. It won't be distorted (if not recorded distorted), but you still
won't be leaving the audio mastering engineer much to work with. If
this is the case, after finishing your mix, link all the channel faders
together and just pull them all down a few dbs. Now you have
headroom.

Remember, loudness maximization is done in mastering not mixing. DO NOT keep raising your faders way over 0 level in an
attempt to match the volume level of your favorite song. This will
distort it. Yes, you matched the volume level, but now you have a
scratchy distorted mess!

Don't Partially Master Your Mixes

If you're going to master your own music, like I do, you can mix
and master at the same time, using effects on the stereo/main out as
you go along.

But if you're going to be sending your songs out for mastering, nothing
goes on the stereo/main out.

If you put effects there, this is a partial master. The mastering engineer is doing the mastering, and they don't need you to do some
of the processes for them. Reason being, if you do something wrong,
sometimes it can't be corrected. So, why do anything at all since you're
paying the engineer to do it?
#12 Rhythm Of The Song is Dragging (10%)

This mixing mistake is critical when working with EDM, dance, fast
pop and some hip hop music. In other genres of music, it's still important
but not crucial to the integrity of the song.

* The Problem -

You're listening to a fast paced song and it seems to be dragging. The bpm is fast, the lyrics and most of the instrumentation are hype, but
something seems to be missing.

Many times this is because the snare/clap or hi hat is buried underneath the music and/or just not loud enough.

Usually, the hi hat is the culprit. The sound engineer just forgets about
it when mixing. Or, a newbie song arranger doesn't even put a hi hat in
the song arrangement at all!

* The Simple Solution –

Raise the snare and/or hi hat volume. A simple solution to a critical problem.

Here are three ways to raise the volume levels -

A. Simply raise the faders until you can hear these elements
coming out of the mix. Keep going up until you can hear them
driving the rhythm of the song.

B. Hi hat problems can usually be adjusted in audio mastering. I talk about this in detail in my Audio Mastering Secrets book.
Note - The snare/claps volume levels cannot be corrected in
audio mastering.

C. When working with the snare/clap volume levels, if your song is dense in the Upper MIDs, you might keep raising the
volume and it never cuts through.

One trick I use is to double the snare. In many rock songs I double the snare right off the bat, leaving one dry and putting a little drum verb on the other.

If that doesn't work, you might have to create space in the Upper
MIDs, which I discuss in mixing mistake #1 creating space in a mix.

Sit and listen to your mix. Is that snare/clap and hi hat combo
coming through nicely? Is it driving the rhythm of the song?
#11 Kick/Bass Relationship Is Poor (10%)

In most songs, the meat of the kick is within the frequency range of 60-
90hz, though it usually goes past 100hz (with its power subsiding).

The bass/synth is most powerful between 100-140hz, but it can extend much lower.

A sub-bass sits around 50hz and lower.

* The Problems –

A. The sound engineer uses the bass and kick “as is,” not applying
any EQ filtering. So, their frequency ranges all cross over (bleed into)
each other and either create a big punchless muffle, or the kick is
phased out entirely to the point where it's almost non-existent.

B. With sampled music, the meat of the bass and kick samples that
were chosen are right on top of each other. They are both in the
exact same frequency range. Neither has its own space in the mix.

* The Solution -
A. Space must be created in the mix for your basses and kick. They can't
run over each other. I explain this in detail in the final chapter, Mixing
Mistake #1 where I break down HI & Low Pass filtering and subtraction EQ.

B. This is an arrangement issue. The meat of the kick and bass can't
be the exact same frequency range. This occasionally happens with hip
hop songs I get in.
For example, you can't have your kick and bass BOTH in the 60-
90hz range.

One of the two samples has to be hot in the 100-140hz range (and it's
almost always the bass) and the kick in the 60-90hz area.

Don't select bass & kick samples that are right on top of each other
in the frequency range.
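
To make the "separate lanes" idea concrete, here's a crude Python sketch using scipy Butterworth filters. The 100hz crossover and the sample rate are just ballpark numbers pulled from the ranges in this chapter, and in a real mix you'd use gentler EQ moves inside your DAW rather than hard filters, but it shows how filtering keeps the kick and bass out of each other's space.

```python
# A crude sketch of carving separate lanes for the kick and bass.
# The 100hz crossover and 44.1k sample rate are assumptions; tune by ear.
import numpy as np
from scipy.signal import butter, sosfilt

SAMPLE_RATE = 44100

def keep_kick_lows(kick: np.ndarray) -> np.ndarray:
    """Low-pass the kick around 100hz so its meat stays in the 60-90hz lane."""
    sos = butter(4, 100, btype="lowpass", fs=SAMPLE_RATE, output="sos")
    return sosfilt(sos, kick)

def clear_bass_lows(bass: np.ndarray) -> np.ndarray:
    """High-pass the bass around 100hz so its meat sits in the 100-140hz lane."""
    sos = butter(4, 100, btype="highpass", fs=SAMPLE_RATE, output="sos")
    return sosfilt(sos, bass)
```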
#10 Bass Is Too Loud Or Boomy (10%)

The percentage of songs I get from clients with this problem isn't
extremely high, because paying customers submit pretty good mixes.
They don't send me total garbage when money is involved.

Now, the guy who wants the free demo.... Garbage! Ha! But that's a
story for my next book....

* The Problem -

This is a self-explanatory problem with a simple solution, but it happens often enough for me to mention it. Especially with hip
hop. The bass is obviously too loud and boomy, overtaking the entire
song.

* The Solution –

A better job has to be done with A/B comparison. Listen to a commercial radio reference track in the same genre. Listen very closely
to the bass volume level and boominess.

This is mixing, not mastering, so you don't want to match it exactly. You want to come in a little lower and let the audio mastering
engineer match it perfectly.

But of course, you don't want your bass +5dbs louder than the mastered commercial radio track you're referencing. Not
to mention distorted too.

There are a couple reasons why this happens.


1. The mixing engineer wants this super loud bass. In their
excitement to achieve this they go way overboard, not realizing there's
a sonic bass volume limitation. If you're already way over a
commercial radio song, you've passed it....

2. They can't hear the bass!! And this happens VERY often.

You need a SUBWOOFER to be able to hear 80hz and under, and they don't have one. If you can't hear the bass, you can't properly mix
or master it.

I once had a client who not only didn't have a subwoofer, he was
mixing with only ONE speaker too! WTF! And then he's complaining
that the masters I gave him are in mono... He's in the next book too...
#9 Overall Mix Is Muddy Or Too Bassy (15%)

I experience this mistake with clients about 15% of the time. I DO want a slightly dull song mix for mastering, because then I can bring it
up to where it needs to be. That's fine. But muddy, which means not
clear, I don't want that.

* The Problem -

Nothing is EQ filtered and several tracks are colliding between 100-500hz. This creates a huge muffle that bleeds into the Upper
MIDs, and muddies up the entire song.

Or the bass tracks are just too loud.

* The Solution –

Mixing mistake #1 breaks down HI & Low Pass EQ filtering and mix space. Read this chapter to solve the mud problem.

Reducing bass track volumes could also be a solution if you have them cranked up loud.

But I would say it's usually a combo solution. EQ filtering first, with
a little bit of volume reduction.
#8 Overall Mix Is Too Thin Or Tinny (15%)

* The Problem – You get the problem.

* The Solution – There are three possible solutions.


A. If instrumentation is thin in a band or two, this is mostly an
arrangement issue. For example, if your Upper MIDs have only a
vocal and maybe one other instrument, unless it's a singer songwriter
song, you need to fill up the frequency spectrum by adding more
instrument melodies, a synth pad, or something.

Or thicken things up with effects like slight delays, light reverbs, etc.

B. You're too aggressive with your HI & Low Pass EQ filtering. You're cutting the instruments and vocals down too thin. This will
create a tinny vocal and could create a thin mix. This is rare because
most people don't subtract EQ at all, but I do see it.

Mixing mistake #1 breaks down HI & Low Pass EQ filtering and mix
space.

C. As for the overall tinny issue, this effect happens when the 3-5k
frequency range is the loudest part of the song. This is also how you
create that radio effect.

The solution is to either boost up everything surrounding the 3-5k range, so it's no longer the loudest part of the song.

Or maybe you need to add more instrumentation in the adjacent frequency ranges to fill up your frequency spectrum.
Lastly, lower the volume of the offending tracks. That one should have
been first.... Ha!
#7 Too Much Ssss Sound In The Vocals (15%)

The only reason this percentage isn't higher is because most songs
don't have lyrics with a lot of Sss's in them. If they all did, probably
30% would have too much Ssss going on.

* The Problem -

Another self-explanatory problem that happens often.

* The Solution –

Use your de-esser! Now some of you might say, this isn't a secret!
“Use the de-esser on vocals that have a sharp sss sound.”

Well, it's a secret to 15% of the sound engineers I've worked with,
because they didn't do it!

I use the de-esser on all vocal tracks, with at least a moderate setting. I always use a de-esser because it's non-destructive. If you apply a de-esser to a vocal track and the bright sss sound never occurs, the de-esser just never kicks in.

Crank up your mixes loud. If you hear even the slightest sss sound in
your mix, it will be 2-3 times louder after mastering (when your song
is made louder). If this is the case, apply the de-esser a little heavier.
Note – Most of the time, this sharp ssss mistake can be fixed in
mastering without destroying the rest of the song. But always fix it in
the mix whenever possible.
#6 Bright Instrument Or Vocal Sticks Out (15%)

* The Problem –

The song has great volume levels overall, but then one instrument
or vocal is just super loud or bright. Many times nothing can be done in audio mastering; it has to be fixed in the mix.

* The Solutions –

A. Simple. Lower the volume of the offending track. That's it.

B. Use compression.

Brickwall limit the bright tracks using a high threshold. So, at its
loudest point, the compressor will kick in and cut its top off.

More Tips -

Don't Make Your Lead Vocal The Brightest Track In Your Song

I guess there are some exceptions to every rule, but the lead vocal
track is almost never the brightest track in a song. You have hi hats,
cymbals, violins, some synths, rock guitars, and a few other elements
that all should be brighter than the lead vocal.

This usually is done because the mix is so crowded, the only way to
get the lead vocal to cut through is to make it super bright. Either
the lead vocal EQ is boosted around 3k, or it's HI Pass EQ filtered to
death, removing all of its bass. This is not the solution. Creating space
in the mix for the vocal track is the solution, and I cover that in
mixing mistake #1.

Another tip: brighter instruments easily cut through mixes because there usually isn't much audio content in their frequency range. If
you have a high pitched synth or violin at 6-7k there isn't much else in
that frequency range to compete with it, so it shines right through.

That being said, you can make the volume of these high pitched
frequency tracks lower than you would think and still hear them
just fine. Just lower their volumes all the way down, and then raise
them up until they cut through and you can hear them.

Crank your mixes up loud and listen for these mistakes.

Is your vocal the brightest track in the song? Does it become annoying
at loud volume levels? Is an ear piercing instrument track coming in
anywhere in the mix?
#5 Lead Vocal Bassy Or Too Thin (20%)

* The Problem –

You get this one.

* The Solution -

You want your lead vocal to be just right. If it's too bassy, Hi Pass
EQ filter it around 250hz.

If it was recorded too thin, add some bass around 400hz.

Or scale back on your HI Pass EQ filtering (if you applied too much).

Simple, but 20% of the time it's not happening. It's all about knowing
what problems you're looking for in your mixes.
#4 Narrow Mix, Sounds Bad In A Car, Fake Stereo (30%)

The definition of stereo is audio coming out of two or more speakers that surrounds the listener. If your mix sounds two feet
wide in the car (right above the car stereo) you failed the definition of
stereo. You're not surrounding the listener.

* The Problem –
Your mix is too narrow. When you listen to it on studio monitors,
everything sounds pretty good from a stereo width standpoint.

On headphones, poor stereo width is more noticeable.

But a car stereo is the real barometer for stereo width. When your
song is played in a car, the music should spread across the entire width
from the left door to the right, NOT only two feet wide above the car
stereo.

* The Solution –

One thing I tell everyone, when they're arranging a song, if they don't
have anything panned on the outsides of the stereo field, nothing
will be playing there.

You can't have a wide mix if nothing is panned wide! This is common sense, but a lot of newbies don't get it. “I” didn't when I first
started out. I never even thought about it.
Too many people pan all their tracks between L30 and R30, and
then add reverb thinking this will create stereo width. If you do this in
a car your mix will sound two feet wide with slight reverb on the
outsides. You won't even notice the reverb.

With rock music, I rarely see this problem because at the very least,
the song has left and right stereo guitars that were recorded separately
(true stereo) that create great width.

Hip hop is where I see this problem the most. Many times a hip hop
song has a lead vocal, kick, snare, bass, and hi hat all panned close to
center (between L20 and R20). They then add a stereo synth that
comes in every now and then. In this scenario, the song is basically a
mono song when the synth isn't playing.

When arranging a song, I do a stereo checklist from section to section. What's panned to the outsides in my verses, bridge, chorus?
And if the answer is nothing, I have to either pan some existing tracks
to the outsides or create new ones.

You need to purposely designate and/or create what is going to reside on the outsides of the stereo field. Either instruments and/or
background vocals.

One important note – Be careful with ANYTHING you pan full L or R.

You're isolating it pretty much by itself out there and IN HEADPHONES it will play back a lot louder than you think, and
could burn someone's ears!
The farthest I go on solo tracks is 80 L/R. And when I go there I still
must do a headphone mix check to make sure those tracks aren't
playing too hot.

A Few More Notes About Stereo


Back to the “surrounding the listener” definition of stereo. If you
have two audio signals that are panned left and right, they must sound
different to produce a true stereo sound.

If you have two audio tracks that sound EXACTLY the same (and
start at the exact same time) they will sound mono to the brain,
regardless of where they are panned.

Try this with a mono guitar track. Copy the track (so you have two)
and then pan one left and one right.

Now, the doubled guitar track will be louder, but you'll never be able
to get them to sound stereo. They will sound mono (right in the
middle) no matter where you pan them.

The only way to create a stereo sound is to make the two guitar
tracks sound different from each other, by adding different effects or
by offsetting their start times. Usually both.

Note – I do have to mention this. The example above is how to create stereo sound when all you have is a mono track.

To create a true stereo guitar effect, you DO NOT record the guitar
part only once, double the track, and then pan it left and right.
You play and record the guitar part twice or more (in time) and
then you'll have multiple tracks with natural subtle differences.

When these are panned opposite of each other they WILL sound true
stereo! This goes for any instrument or vocal. This is how true stereo
is created.

Multiple recorded vocal tracks panned on top of each other (not on opposite sides) will create that radio quality thickness. Some artists record and use four or more lead vocal tracks to create a thick powerful vocal sound.

This is all basic stereo 101, but a critical mistake I see too often.

The Almighty Haas Effect

Ok, back to working with mono tracks.

I use the Haas effect to some extent on pretty much every song I
mix, considering the fact that clients submit many mono or fake stereo
tracks.

What is the Haas effect? I looked up a few definitions online to copy here, but they were all so confusing, I'm going to make up an easy John Rogers two-part definition. Ha!

First off, the Haas Effect is used to - Create stereo sound from a mono audio track, or to increase the stereo field of a stereo file.

How Is The Haas Effect Done?

By copying the mono track and then offsetting the start time (several milliseconds) of one of the tracks. Then panning them left and right (not necessarily full L/R, just on opposite sides). Finally, adding different effects to each side is also required to intensify the stereo effect.
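Here's a minimal Python sketch of that recipe (numpy and soundfile, outside of any DAW). The file name "part.wav", the 20 millisecond offset, and the 80% pan amount are all placeholder choices for the example.

# Rough Haas-effect sketch: copy a mono track, delay one copy a few
# milliseconds, and pan the two copies to opposite sides.
import numpy as np
import soundfile as sf

mono, fs = sf.read("part.wav")                     # placeholder mono track
delay_ms = 20                                      # Haas-style offsets are only a few milliseconds
delay = int(fs * delay_ms / 1000)

original = np.concatenate([mono, np.zeros(delay)])  # original copy
delayed = np.concatenate([np.zeros(delay), mono])   # offset copy

pan = 0.8                                          # roughly 80 L/R, not hard-panned
stereo = np.column_stack([original * pan + delayed * (1 - pan),
                          delayed * pan + original * (1 - pan)])

sf.write("part_haas_stereo.wav", stereo, fs)

In a real mix you'd then put different effects on each side, like the definition above says, to push the stereo effect further.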

Sometimes I get in R&B songs with great vocal performances, but they're all mono adlibs and background vocals. I have to Haas Effect everything, and the results are amazing!

Note – To use the Haas Effect on stereo interleaved files (that are
really mono), you will first have to separate the left and right side
into two mono tracks. Most DAW software allows you to do this on
import.
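If your DAW doesn't split the file on import, here's a tiny sketch of the split itself (the file name is a placeholder):

# Split a stereo interleaved WAV into separate left and right mono files.
import soundfile as sf

stereo, fs = sf.read("fake_stereo.wav")    # shape: (frames, 2)
sf.write("left.wav", stereo[:, 0], fs)     # left channel as its own mono track
sf.write("right.wav", stereo[:, 1], fs)    # right channel as its own mono track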
#3 Lead Vocal Not Loud Enough Or Inconsistent
(30%)

*Problem #1 –

The lead vocal is too low for the entire song. I see this VERY often in
heavy metal songs because many times the singer can't sing very well.
And 90% of the time the client refuses to raise the volume level,
regardless of what I tell them.

I'll say, "I honestly can't understand 70% of the vocals in this song, AT ALL!" And they come back with, "Oh we want some of the words to be a mystery. You know how songs on the radio, you can't understand every word?" Yeah I do. A word or two I can't understand, or maybe half a sentence. But NOT 70% of the song!

*The Solution –

Play a song for a friend who doesn't know the words, and have
them repeat them back to you. Or at least have them tell you which
words they can't understand, so you can make those vocals louder.

*Problem #2 –

The vocals are inconsistent. In the verses they are too loud, in the
chorus they're run over by the music, etc.

*The Solution –

I cover this in detail in the very next chapter, mistake #2 inconsistent tracks and sections.
#2 Inconsistent Tracks And Sections (60%)

There are several related problems I will address here.

The main reason for most of these problems is the song was not
mixed in sections.

Specific attention was not given to each section (opening, verses, chorus, bridge, finale) with a goal of making each one sound great.

What usually happens is once a sound engineer chooses their effects (which includes EQ) and volume levels, they use them for the entire song.

Nothing changes regardless of the fact that vocal and instrument tracks are being added and subtracted from section to section.

An example: in a hard rock song, the sound engineer sets the lead vocal volume and uses heavy delay and reverb on it because the guy's singing is terrible.

In this particular song, the guitars and drums are light in the verses, but
the mix sounds excellent in this section. Lots of space, full sounding
stereo, the lead vocal sounds a little distant but clear, etc.

Now the chorus comes in. Four loud stereo guitars are added to the
mix. Guess what happens to the other tracks once these guitars come
in?

Question for you. If the mix sounds perfect during the verses, and
then four loud stereo guitars are added to the chorus, while
NOTHING changes with the other tracks, can the mix still sound
perfect in the chorus? It's impossible...

This is like adding your preferred cream and sugar to a 10oz coffee,
and then pouring that cup into an empty 30oz one. Now add 20 more
oz of coffee, filling the cup up to the top.

Well, you now basically have black coffee again. The coffee ran over
your cream and sugar. Ha! You have to adjust the cream and sugar
levels again to get your preferred taste.

The same goes for music. If track numbers greatly vary from section
to section, effects and volumes need to be adjusted too. You can't just
set everything at the beginning of the song and let it go all the way
to the end!

Note – Of course singer-songwriter, monotonous simple EDM and some hip hop songs don't apply here.

For example, if a song starts with five vocal/music tracks and ends
with the exact same five tracks (nothing ever changes), then yes, you
can just mix the beginning of the song and you're done.

A Mix Isn't Only One Mix

Back to the "mixing in sections" concept. Technically, a mix isn't ONE mix. Think of it as several mixes. You have an opening mix, verses mix, chorus mix, bridge mix, the finale mix, etc. Each section has to be mixed independently to sound great.

In the 90's, when I was just starting out in music, I watched several
mixing videos and read a few books. Not even one mentioned this
concept!! They did exactly what I mentioned above in the hard rock
song example. Set it and forget it.

Back then, my mixes always sounded like crap and I couldn't understand why. One day I was sitting there and thought, this would all sound a lot better if I mixed each section separately and adjusted the tracks so they all worked together to sound great in every section. DUH!

Here's a small fact to help you understand this concept even better.

A pro mixing engineer who gets paid $15,000-plus to mix one song uses several different reverbs and delays on the lead vocal track. Usually, a different combination is used in every section. And this is done with all tracks to some extent.

So, that ONE reverb you use on your vocal track for the entire song? That's wrong.

Specific Track/Section Problems And Their Solutions

Now, I'm going to go through seven section problems I often see in client mixes (in no specific order).

#1 Super Bright Solos (very common when applicable)

* The Problem -

The mix sounds good during sections where the instrumentation is full (let's say 12 tracks). It was mixed well.

Now, track numbers are reduced in a new section, say down to 2-3 tracks.

The problem: one vocal or instrument track that is being featured is now way too loud and bright.

This happens because the offending track was mixed to sound great with full instrumentation around it. It can't possibly still sound great if everything is shut off around it. The mix has drastically changed!

I see this often in Hip Hop mixes. The bass is completely shut off in a
few sections, leaving a super bright lead vocal. It's terrible. Reason
being, the lead vocals are EQ'ed bright to cut through the big bass, and
this works well while the big bass is playing. But not when it's turned
off.

*The Solution –

Using the Hip Hop example above, there's a good balance when all
instrumentation is playing. So, we need to address that bright vocal
when the bass goes off.

Consider this bass-less part a new section, and mix it accordingly.


No bass means you need to mix whatever is left playing less bright,
compared to when there was bass.

EQ cut any bright tracks at around 4-5k, or lower their volumes (though this probably won't work).

You might even compress any offending tracks harder since they no longer need to cut through the mix in the bass-less section.

Also, no bass (or thin instrumentation) means the song will be thinner. More delays and reverbs might be necessary to beef up this section.

Remember, we want each section to sound good.

#2 The Lead Vocal Gets Run Over By The Music

*The Problem -

I talked about this earlier. When this happens, it's usually a hard
rock song, where the lead vocal is never loud enough to begin with
and the guitars completely bury it in the chorus. But of course, this
could happen in any song.

With Hip Hop, I don't see this problem very often. The vocals are
up front to begin with, and instrumentation is usually so thin it can't
run the vocals over.

*The Solution –

Sometimes just making the vocals louder solves the problem.

Or even a combination of raising the vocal volume, and lowering the volume of the added chorus tracks.

The solution could also be to remix it for space, giving the vocal more room. I talk about this in the next chapter.
#3 The Chorus Is Too Loud/Thick Compared To The Rest of
The Song

*The Problem -

The chorus is obviously just too big compared to the rest of the song.

*The Solution -

If the chorus is mixed well and the only problem is its overall volume
level, you can link all the chorus tracks together on your DAW
mixer board and move the faders down as a whole, a few dbs
(provided you have your choruses separate from the rest of the song).

The thickness issue, I address in the next chapter which focuses on creating space in a mix.

#4 The Song Opening Or Verses Are Too Quiet Or Thin
Compared To The Rest Of The Song

*The Problem -

Sometimes songs are arranged with thinner instrumentation and vocals at the start, and then they build up from there. But they're just not loud or thick enough.

Even though track numbers are light, you still want this opening
part to sound full with decent thickness. You can't have a dry weak
vocal and thin instrumentation while we wait for more tracks to be
added.
*The Solution -

If the opening/verses are mixed well and the only problem is their
overall volume level, you can link all these tracks together on your
DAW mixer board and move the faders up as a whole, a few dbs
(provided you have them separate from the rest of the song).

If there's a thickness issue, doubling your tracks could be a solution. I do this often when the arrangement is thin.

Adding effects like reverbs and delays to create thickness is another solution.

Maybe even add more instrumentation. If it's your song, you could
add a light background instrument like a synth pad to fill the
arrangement up a bit.

#5 Vocals/Instrumentation Go In And Out

*The Problem -

Example, the kick drum is too loud in the verses, it's run over and weak
in the chorus, and in the finale the kick is GONE!

*The Solution -

Usually just manually gaining or cutting the .wav file's volume where needed will do the trick. If it's way too low in a section, gain it up several dbs.
Of course you can just raise the fader volume (if the section is
separate from the rest of the song).

But I like to manually use the gain function on the .wav files when faced with this scenario. Especially on vocals. Sometimes a few words get run over by something in the mix. It's easier to just highlight those few words in the .wav file track and gain them up a few dbs, until they cut through.
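For reference, "gaining up a few dbs" just means multiplying the samples by 10 raised to (dB change divided by 20). Here's a minimal Python sketch of gaining one region of a vocal .wav, with the file name and the time region as placeholders; in a DAW you'd simply highlight the region and use the gain tool.

# Gain a short region of a vocal .wav up by a few dB (e.g. a couple of
# words that get run over in the chorus).
import soundfile as sf

vocal, fs = sf.read("lead_vocal.wav")        # placeholder file name
start, end = int(12.0 * fs), int(13.5 * fs)  # placeholder region: 12.0s to 13.5s

gain_db = 3.0
gain = 10 ** (gain_db / 20)                  # +3 dB is roughly a 1.41x amplitude boost

vocal[start:end] *= gain                     # raise just those few words
sf.write("lead_vocal_gained.wav", vocal, fs)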

It's amazing how you can't even tell the vocals are getting louder since
the music is too. To the brain, the vocals sound like they are a
consistent unchanged level.

If there's a phase issue, where the bass guitar/synth is running over the kick and phasing (canceling) it out, you can raise the kick volume as loud as you want and you still won't be able to hear it. It won't work.

If this is the case, you have to HI Pass EQ filter your bass so it doesn't bleed into the kick. I talk about this more in the next chapter that deals with EQ space issues.
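To see how brutal cancellation can be, here's a tiny sketch with two identical low-frequency tones, one flipped in polarity. It's the extreme textbook case, not exactly what a real bass and kick do to each other, but it shows why turning the kick up doesn't help once the energy has been cancelled:

# Phase cancellation in its simplest form: a 60 Hz tone plus an
# inverted (180 degrees out of phase) copy of itself sums to silence.
import numpy as np

fs = 44100
t = np.arange(fs) / fs                 # one second of time values
kick_ish = np.sin(2 * np.pi * 60 * t)  # stand-in for low kick energy
bass_ish = -kick_ish                   # same content, opposite polarity

combined = kick_ish + bass_ish
print(np.max(np.abs(combined)))        # ~0.0 - the low end is gone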

#6 Something That's Supposed To Be Playing Is Missing

There are a few exceptions, but usually you want to be able to clearly
hear every track in the mix, to some extent.

The problem is when something is just gone!

How could this happen?


A. A track is muted by mistake.

B. Phase issues. Two similar frequencies are crossing each other and one is being canceled out. HI & Low Pass filtering could be the solution.

C. Very simple, the track just isn't loud enough. You can't
hear it.

D. The track is getting smashed, no space. Again, this is discussed in the next chapter on space.

E. Instrumentation is too thick and there's no room for the track to cut through. Do you really need all the tracks you are using in the arrangement to achieve the desired sound you're looking for? Can you achieve what you're looking for with fewer tracks?

#7 EQ Problems From Section To Section

I don't see this very often, but when I do it's a bad problem to have.

*The Problem -

The verses are super bright, the chorus is super dull, the bridge is
perfect.

In audio mastering, any EQ adjustments affect the entire song as a whole. Since this problem can't be fixed with traditional audio mastering, it has to be fixed in the mix.

Using the example above, if I boosted the EQ at 4-6k in audio mastering, to brighten up the super dull chorus, the super bright verses now become distorted bright!

*The Solution -

You have to EQ each section properly. Remember, we want each section to sound good.

Listen to the entire song for inconsistencies in EQ. You want a good
balance from section to section.

The major problem here is super brightness, and then dull sections. If you notice this, many times adjusting the volume of the offending super bright tracks will solve this problem. And then audio mastering will take care of the dull parts in the mix.

Or an EQ cut in the 4-6k range might do the trick.

Lastly, compressing the offending tracks could be an option. But I would try the first two before compressing. I hate to lose dynamic range on bright instruments if I don't have to.
#1 Not Enough Space In The Mix, Very Crowded -
Especially in The Choruses (70%)

This is the BIG ONE! At least 25,000 of the over 40,000 songs I've
mastered (to some extent) suffered from lack of space. This problem
is so big, I get a little more detailed with my solutions in this
chapter.

Before I get into the problems and their solutions, let's discuss how a
good commercial radio song should sound.

When listening to a commercial radio song that is mixed and mastered very well (and most are) everything is very clean and clear.

I can clearly hear the lead vocal that has its own space in the mix, and I can clearly hear the effects on it too.

The snare has its own space and can clearly be heard, along with
any effects it might have.

I can also distinctly tell the bass and kick apart; they are not canceling each other out or muffled.

Everything has its own space. Even the soft synth pads coming in, I can clearly hear them too. The kick has punch and the bass has presence.
* The Problems –

I don't know any other way to explain it other than the song mix is
just a big mash of noise. Just solid sound with no space between any
of the tracks. And any effects used are a mystery.

FYI – Of course I know there are genres where solid noise and a guy
screaming is the goal of the song. Ha! I'm not talking about that genre.
98% of the time, we don't want that.

* Why This Happens –

1. If it's someone who is pretty good at mixing (which is most of my clients) it's usually because they want to achieve this "big sound" and they go too far with the vocals/instrumentation or effects.

2. Or of course, they just don't know what they are doing. Ha!

Here are some specific related problems -

1. Effects are too heavy, or too much unneeded instrumentation.

2. Panning is poorly done.

3. Subtraction EQ to create space is not done very well, or many times not at all.

4. Sometimes, to some extent it's all three.

* The Solutions –

1. VOCALS/INSTRUMENTATION OR EFFECTS ARE TOO HEAVY

I was working on a song with a good friend who really knows
music. He's been into music 20 plus years as a singer, songwriter, and
mixer.

He gives me a song to master that he's really passionate about. It's his "big song" and he wants it to sound big!

Well, the mix he gives me is an absolute TRAINWRECK! Super mash of noise, crazy excessive reverb!

This was so odd to me. How could this happen with someone who
knows music so well?

The reason? I guess you could say his emotions took over his sonic
evaluations. In his excitement to make this “big sound” he just kept
adding instruments, vocals and effects without limitation.

This would be like someone who wants to start a real healthy diet but they overdo it, taking 100 vitamins and 10 scoops of protein every day. There's only so much the body can process at a time. More does not always mean better.

Well, the same goes for music arrangements.

Heavy Vocal/Instrument Tracks

Those powerful stereo guitars you hear in that song you like could
be only two tracks with a nice distortion effect and delay on them.
Not six guitar tracks using every effect plugin you own, like you're
trying to do!
That thick choir singing in the background that sounds great,
might be only four vocal tracks, not eight like you're trying to squeeze
in there.

For anything you want to sound "real big," many times it doesn't take as much as you think to achieve it.

Whatever your plan is to achieve that big sound, cut it down a bit since
most people naturally go overboard.

Remember, fewer vocal/instrumentation tracks and effects mean everything else has more space and will be clearer.

It's so hard to judge your own material. Let someone listen and get some feedback.

If your mix is thin, it will be obvious to the casual listener, and they will let you know.

Note – I get in a lot of thin songs to work on too. I don't want you to eliminate so many tracks that your songs are too thin. I'm all for doubling up instruments and vocal tracks when needed. Just don't go overboard.

Listen to similar good commercial radio reference tracks and compare their track thickness levels to your songs, and then replicate them.

Effects Are Too Heavy

As you add more vocal/instrument tracks to a section, you must reduce your effects.

On the flip side, when vocal/instrumentation tracks are thin, you need to add effects.

For example, a song has two background vocals in the bridge, that
have a high amount of hall reverb and slight delay on them.

When the chorus starts, four more background vocal tracks are added
to the original two.

Now that you have six background vocal tracks, you will probably be
able to considerably reduce the reverb and delay on those original
first two tracks.

All six tracks will more than likely have very light effects on them, and
will still sound nice and full.

This also gives you more clarity in the vocal tracks. And you're
leaving space for the rest of the song, which means the entire song will
be clearer and more dynamic.

Here's another tip -

So many times I've seen people ask online, “What should I set my
instrument volumes at, and exactly which effects should I use, and
how much?”

It's impossible to give someone volume and effect presets for a song
without hearing it.

It could be a singer-songwriter song (only guitar and vocal), or a hip hop song with only drums and one synth. Both of these songs are very thin. Or a huge 50-piece orchestra mix. Everything is different. How could anyone possibly give you exact preset settings?

Even within a single song it's impossible. Volumes and effects have
to be changed accordingly from section to section.

2. PANNING

This is basic mixing. If anyone knows anything about mixing, they usually get this right. So, I'm not getting too deep into this.

But then again, I do get in a lot of hip hop mixes that really centralize
the entire mix.

When I first started out mixing for a living, I never realized how
important panning was until I worked with this guy who submitted a
pop rock project. The song arrangements had a lot of acoustic guitars,
but also included synth melodies and occasional vibes.

He sent me stereo tracks already panned and told me to leave them "as is." The lead vocal and drums were centralized, and everything else was panned L/R 50 or more.

When you pan this way, there is so much space in the center of the mix that the lead vocal and drums sound amazing!

Also, you don't have to crank up the volume or brightly EQ the lead
vocal (or snare) so it can cut through the mix. It's just there, clean and
clear with its own space.

I get in many songs to master where the lead vocal is the brightest and
loudest element in the mix, so it can cut through. This is wrong and
will create problems. And it's still not clear because the mix is too
thick.

When a mix is properly panned (and the vocals/instrumentation and effects are not overdone) the lead vocal can be the 3rd or 4th brightest element in a song and still sound great. And you'll be surprised how low you can make the vocal volume and still hear it perfectly.

Experiment with your panning!
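In case you're curious what the pan knob itself is doing to a mono track, here's a rough sketch of one common approach, a constant-power pan law. It's an illustration only, not any particular DAW's exact curve, and the guitar file name is a placeholder.

# Constant-power panning sketch: place a mono track anywhere from
# full left (-1.0) to full right (+1.0) without a volume dip in the middle.
import numpy as np
import soundfile as sf

def pan_mono(mono, position):
    # position: -1.0 = full L, 0.0 = center, +1.0 = full R
    angle = (position + 1) * np.pi / 4           # map position to 0..pi/2
    left = mono * np.cos(angle)
    right = mono * np.sin(angle)
    return np.column_stack([left, right])

guitar, fs = sf.read("acoustic_guitar.wav")      # placeholder mono track
sf.write("guitar_panned_L50.wav", pan_mono(guitar, -0.5), fs)

At center both channels get about 0.707 of the signal, which is why a centered track doesn't jump in volume compared to a panned one.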

3. EQ Subtraction And HI & LOW Pass Filtering

I saved the best and biggest for last! I would say at least 40% of the
clients I work with don't use subtraction EQ at all!

Most of their songs still sound pretty good after I master them at my studio, JRmastering.com. But if they had used subtraction EQ, their songs would move up to commercial radio quality level.

The Problem -
What these clients do is they record their guitars and vocals, play the
keyboard or synth bass sample, drums, etc., and then balance volumes
and add effects. Basic mixing, just like we all do. Nothing is wrong yet.

The problem is they never use subtraction EQ on anything. If they use EQ at all, it's addition EQ to brighten up the mid-frequency tracks, or they add bass to the bass tracks. That's about it.

Note – This is how I mixed for many years when I first started out. Not
good.

The problem is frequency ranges collide in many of the tracks, creating a big muffle (the opposite of separation). Not to mention, this is also a space eater.

Subtraction EQ is used to create space where needed.

*The Solution - Subtraction EQ

Why Do You Need Subtraction EQ?

HI & Low Pass EQ filtering or any subtraction EQ reduces or eliminates a designated frequency range in an audio track, which gives space to other priority tracks in that range.

And the reason you want to do this is because there's a limitation to the amount of audio material you can have in any given frequency range before everything becomes muffled and smashed. You have to give each vocal/instrument its own frequency space, the best you can. And prioritize space when necessary.

Here's an example. Let's say you have a song that includes a kick, bass, and a piano (that's playing a lot of low frequency notes). And then throw in a bassy "Barry White" sounding singer.

All four elements mentioned above have audio content in the 200hz-500hz range (some more than others) and it's all colliding.

If EQ is not subtracted from any of these tracks, you WILL have a big muffle.

HI & Low Pass EQ Filtering Is A Must For Frequency Space

This is where Hi & Low pass EQ filtering comes in. Since so many
people know nothing about this, I have to explain the basics.

How it works: if you set a Hi pass EQ filter at 1k, everything below 1k is muted. Everything above 1k passes through. It's a "Hi pass" filter. Which means whatever frequency range you set, everything higher than the setting passes through.

A Low Pass EQ filter is in reverse. Below the frequency you set passes through, and above is muted.

LOW PASS EQ FILTERING A SUB-BASS

1. Sub-bass With The Meat @ 40-80hz
2. Low Pass EQ Filter @ 200hz

Every vocal/instrument's audio content spans across the frequency range (to some extent).

In the first picture, notice how the sub-bass audio content spans from
20hz to 3k. And it peaks +6dbs @ 60hz. 60hz is where the sub-bass is
most powerful. The meat of the sub-bass is right around 40-80hz.

Now, look at all the audio content past 200hz. The sub-bass doesn't
need to span this much of the frequency range. This only makes sense.
If it does, all it's going to do is run over the kick/bass combo and bleed
into the vocals and MIDs.

The sub-bass is very low in the frequency range, and that's where we
want to isolate it.

In the second picture, a Low Pass EQ filter is applied @ 200hz. Everything above 200hz is completely muted, leaving (isolating) only the meat of the sub-bass.

That's the object of HI & Low Pass EQ filtering! Isolating THE MEAT of each track's frequency range! Which leaves space for other tracks.
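As a concrete sketch of what that second picture describes, here's roughly what a Low Pass EQ filter @ 200hz does to a sub-bass track, done in Python outside of a DAW. The file name and filter order are placeholder choices, and your DAW's filter slopes will differ; swapping btype to "highpass" gives you the HI Pass version.

# Isolate the meat of a sub-bass by low-pass filtering it at 200 Hz,
# so it stops bleeding into the kick/bass and the mids.
import soundfile as sf
from scipy.signal import butter, sosfilt

sub, fs = sf.read("sub_bass.wav")                           # placeholder track
sos = butter(4, 200, btype="lowpass", fs=fs, output="sos")  # keep roughly 20-200 Hz
sub_isolated = sosfilt(sos, sub)

sf.write("sub_bass_isolated.wav", sub_isolated, fs)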

If you're not doing this with your songs, you will be IN SHOCK once
you apply EQ filtering to every track. The clarity and space you gain
is crazy!

Note – One exception that comes to mind is a rock kick drum (a real drum). Around 1k, there's very little audio content. The .wav file will be down near the bottom.

But in this 1k area is the click of the kick. This is one instance where
you don't want to isolate the meat. Doing so would eliminate this kick
drum click.

Some engineers slightly EQ boost this 1k click area, and then filter
out between 200hz to 500hz.

HI & LOW PASS FILTERING A MALE ROCK VOCAL

1. Male rock vocal. The meat's @ 350hz-2k
2. HI Pass @ 300hz & Low Pass @ 15k

With most vocal tracks and mid-frequency instruments, the meat is in the middle of the frequency range. When this is the case, you can both HI AND Low Pass EQ filter these tracks, isolating the meat, as seen in the pictures above.
A few notes here.

For the lead vocals, move around in the 300hz range with the HI
Pass EQ filter. You want to remove the lead vocal's bass that will
collide with any instrument bass, but you don't want to remove so
much of it that it sounds thin, tinny, and nasally like Pee-wee Herman.

And, the Low Pass EQ filter @ 15k was done more for the sake of
example. To show that you can HI & Low Pass EQ filter the same
track. The audio content is so weak after 10k, Low Pass EQ filtering is
optional. Note – A perfectionist will use this option.

One exception to HI & Low Pass EQ filter usage is if vocals/instrumentation is light. Let's say you have only a piano (which spans the entire frequency range) and a vocal track. Well, you probably won't have to HI or Low Pass anything here. There's nothing that can collide in this example.

And what little audio content you have in your LOW Band, you can't eliminate it with a HI Pass EQ filter, or you'd end up with a bass-less song.

EQ filtering is needed only when multiple tracks are colliding.

MORE ON EQ FILTERING
The kick/bass EQ filtering is a very important one. If the bass or
synth-bass bleeds into the kick it could phase (cancel) it out.

This problem can be solved by HI Pass EQ filtering the bass or synth-bass somewhere around 100hz. Move around that range until you're off the kick, and then boost the meat of the bass or synth-bass.

Then Low Pass EQ filter the kick around 120hz, so it's not bleeding
into the bass or synth-bass.

Note – Many times you can't perfectly filter out all bleed through.
There will be some going on. Any reduction is an improvement.

Bright instruments like a violin, cymbals, or hi hats can be HI Pass filtered around 3k. Once again, slide around in that range.

I was going to make a basic chart for each instrument, but even similar
instruments have subtle frequency differences.

Just pull up your EQ for each track and see where the meat in the
frequency range is. And then isolate it with HI or Low Pass EQ
filtering (or both).

It's all about isolating the meat!
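If you'd rather see a number than eyeball an analyzer, here's a rough Python sketch that estimates where a track's energy sits using scipy's Welch method. The file name is a placeholder, and your EQ's spectrum analyzer shows the same information graphically.

# Estimate where the "meat" of a track sits in the frequency range,
# so you know where to place your Hi/Low Pass filters.
import numpy as np
import soundfile as sf
from scipy.signal import welch

track, fs = sf.read("synth_bass.wav")        # placeholder track
if track.ndim > 1:                           # fold stereo to mono for analysis
    track = track.mean(axis=1)

freqs, power = welch(track, fs=fs, nperseg=4096)
peak_hz = freqs[np.argmax(power)]
print(f"Strongest energy around {peak_hz:.0f} Hz")  # the center of the meat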

EQ FILTERING IN AUDIO MASTERING

Hi & Low Pass EQ filtering is primarily used on a track by track basis in mixing, not in audio mastering.

The exception when it could be used in audio mastering would be a Hi end hissy song (@ 12k) or a Low rumbly/distorted sub-bass (@ 30hz).

Hissy songs, I use the Low Pass EQ filter on them maybe 2% of the
time. This technique is rarely ever needed.
When it is needed, it's usually some real old guy's crappy rock songs he
transferred from cassette tape (or 8 track). That he recorded in the 70s,
when he was in his prime. Ha!

Hi Pass EQ filtering a sub at 20-30hz, I don't think I've ever done this
in my lifetime. But some sound engineers swear by it.

Lowering the LOW Band's volume using a multi-band compressor in audio mastering is the solution for a bad sub-bass, not HI Pass EQ filtering in mixing.

Notching Out A Frequency Range To Create Space

*The Problem -
Sometimes you have a vocal and an instrument (or a few
instruments) with the EXACT same frequency meat. They're all
competing for space, but you need one to cut through.

*The Solution -
This is where EQ notching comes in. Basically, you're cutting the
meat out of one or several tracks (using a narrow Q) so the meat
from another track can come through.

Or you can do it with an entire instrumental song mix, like the example
below.

PREPPING A LEAD VOCAL AND EQ NOTCHING OUT SPACE FOR IT

1. Hip hop Vocal Track. HI Pass @ 350hz & +3dbs @ 1.5k
2. Entire hip hop song mix. -20dbs @ 1.5k using a very narrow Q of 12

In this example, we will be working with a two track mix. I've done
a mix/master on over 1,000 two track song mixes.

A rapper buys a beat (that's a hip hop instrumental) online, and then raps to it.

Many times the beat is not arranged or mixed with space to add a lead
vocal track at a later time. So, it's tough getting it to cut through.

In the first picture, the vocal is HI pass EQ filtered at 300hz, and then
we boost the meat +3dbs @ 1-2k. I guess technically, with this boost
we're creating our own meat.

We just created a vocal track that will nicely cut through the song
mix.

In the second picture, we notch out some of the 1-2k frequency range
to make room for the vocal.

Most of the time this works very very well. You can go from not
being able to understand half the vocals to 100% clarity, once this
technique is applied.
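Most DAW EQs do this notch with a peaking (bell) band. Here's a rough sketch of that -20dbs cut @ 1.5k with a Q of 12, built from the widely published Audio EQ Cookbook peaking-filter formulas. The beat's file name is a placeholder, and the +3dbs vocal boost mentioned above is the same filter with positive gain.

# Narrow peaking-EQ cut: roughly -20 dB at 1.5 kHz with Q = 12, applied
# to the instrumental to notch out space for the lead vocal.
import numpy as np
import soundfile as sf
from scipy.signal import lfilter

def peaking_eq(x, fs, f0, gain_db, q):
    # Peaking (bell) biquad based on the common Audio EQ Cookbook formulas.
    a_gain = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * a_gain, -2 * np.cos(w0), 1 - alpha * a_gain])
    a = np.array([1 + alpha / a_gain, -2 * np.cos(w0), 1 - alpha / a_gain])
    return lfilter(b / a[0], a / a[0], x)

beat, fs = sf.read("beat_instrumental.wav")        # placeholder two-track beat
if beat.ndim > 1:                                  # filter each channel separately
    notched = np.column_stack([peaking_eq(beat[:, ch], fs, 1500, -20, 12)
                               for ch in range(beat.shape[1])])
else:
    notched = peaking_eq(beat, fs, 1500, -20, 12)

sf.write("beat_notched_1500hz.wav", notched, fs)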

Note – If you're a perfectionist, cut up the music track and do not apply
the EQ notch out when the vocal isn't playing.

Or you can automate this. You only need to notch out vocal space from
the beat when the vocal is playing.

The above example used a vocal track and an entire song. Of course you could do this with individual tracks too.

For example, a few blaring guitars are wiping out the lead vocal. You
can notch them out a bit at 1k to make room for the lead vocal.

Some sound engineers swear by this EQ notch out technique, while others will never do it. Myself personally, I go by ear. If I need space I will try it. If it works it stays. If it doesn't sound right, I don't use it.
MY MAIN RESOURCES

Check out my other two books in the Music Production Secrets Series
Thank You!

I would like to thank you for purchasing this book. Your support is
greatly appreciated. I hope it has immensely improved your song
mixing skills.

Stay confident, work hard, and keep on learning. After 30 years in music, I still learn a little something here and there every week.
"Learning Is For A Lifetime!"

I wish you the very best of luck in all that you do! Cheers! God
Bless!

Your Friend,

John Rogers
Contact Info

Contact Me - JRmasteringStudio@gmail.com

Follow Me On Instagram - @JRmasteringStudio

Follow Me On Facebook - @JRmasteringLasVegas

Follow Me On Twitter - @JRmastering

Follow Me On YouTube - JRmastering

Revised 2021 - Copyright 2019

All rights reserved. No part of this publication may be reproduced, distributed, or transmitted in any form or by any means, including photocopying, recording, or other electronic or mechanical methods, without the prior written permission from JR Mastering, except in the case of brief quotations embodied in critical reviews and certain other noncommercial uses permitted by copyright law. For permission requests please email John Rogers at JRmasteringStudio@gmail.com.

Ordering Information:

Special discounts are available on quantity purchases by corporations, associations, and others. For details, contact John Rogers at JRmasteringStudio@gmail.com.
