
MASTERING

[INTRODUCTION]
Mastering is often thought of as a mysterious art form. This guide aims to tackle that
mystery head on—to not just explain what mastering is, but to outline how one might
go about achieving the primary goal of any good mastering engineer. And what’s that
primary goal? It’s simple: to prepare an audio recording for distribution while
ensuring it sounds at least as good (if not better!) when it goes out than it did when
it came in.
You’ve just finished mixing what you think is a pretty good recording. The playing is
good, the recording is clean, and the mix is decent. Mastering is a process that can,
and with practice often does, take recordings to the next level. What mastering
shouldn’t be expected to do is completely reinvent the sound of your recording.
Mastering is not a substitute for good mixing, or good arranging for that matter!
“Loud” records are a result of good writing/arranging/mixing and mastering. They
are made to sound good and loud (if loud is what you are after) from the get-go, not
just at the end. Once you have reached the final step of mixing with something that
represents your best effort, something that you are proud of, then it’s time to dig in
and see how much further mastering can get you toward the sound that you hear in
your mind’s ear. In the end there are no right answers, no wrong answers, and no
hard and fast rules. However, there are some well-known principles of audio
production and mastering that are worth thinking through as you experiment.

[WHAT IS MASTERING]
Although there are many definitions of what “mastering” is, for the purpose of this
guide we refer to “mastering” as the process of taking a mix and preparing it for
distribution. In general, this involves the following steps and goals.
The Sound of a Record
The goal of this step is to take a good mix (usually in the form of a stereo file) and
put the final touches on it. This can involve adjusting levels and general
“sweetening” of the mix. Think of it as the difference between a good-sounding mix
and a professional-sounding master. This process can, when necessary, involve adding
things such as broad equalization, compression, limiting, etc. This process is often
actually referred to as “premastering”.
Consistency Across an Album
Consideration also has to be made for how the individual tracks of an album work
together when played one after another. Is there a consistent sound? Are the levels
matched? Does the collection have a common “character,” or at least play back
evenly so that the listener doesn’t have to adjust the volume?
This process is generally included in the previous step, with the additional evaluation
of how individual tracks sound in sequence and in relation to each other. This doesn’t
mean that you simply make one preset and use it on all your tracks so that they have
a consistent sound. Instead, the goal is to reconcile the differences between tracks
while maintaining (or even enhancing) the character of each of them, which will
most likely mean different settings for different tracks.

Preparation for Distribution


The final step usually involves preparing the song or sequence of songs for download,
manufacturing, and/or duplication/replication. This step varies depending on the
intended delivery format. In the case of a CD or streaming, it can mean converting to
16 bit/44.1 kHz audio through resampling and/or dithering, and setting track
indexes, track gaps etc. For web-centered distribution, you might need to adjust the
levels to prepare for conversion to AAC, MP3, or hi-resolution files and include the
required metadata.

[MASTERING BASICS]
When mastering, you’re typically working with a limited set of specific processors.
Compressors, limiters, and expanders are used to adjust the dynamics of a mix. For
adjusting the dynamics of specific frequencies or instruments (such as controlling
bass or de-essing vocals) a multiband dynamic processor might be required. A single-
band compressor simply applies any changes to the entire range of frequencies in the
mix.
• Equalizers: shape the tonal balance of the mix.
• Stereo Imaging: can adjust the perceived width and image of the sound field.
• Harmonic Exciters: can add an edge or “sparkle” to the mix.
• Limiters/Maximizers: can increase the overall level of the sound by limiting
the peaks to prevent clipping.
• Dither: provides the ability to convert higher word-length recordings (e.g. 24 or 32 bit) to lower bit depths (e.g. 16 bit) while maintaining dynamic range and minimizing quantization distortion (see the sketch after this list).
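To make the dither idea concrete, here is a minimal Python sketch of TPDF ("triangular") dithered quantization to 16 bit, assuming float samples in the -1 to 1 range. The function name and the plain (non-noise-shaped) dither are illustrative choices, not how any particular mastering plug-in works.

```python
import numpy as np

def dither_to_16bit(x, seed=0):
    """Quantize float audio in [-1, 1] to 16 bit with TPDF dither.

    A bare-bones sketch: commercial dither processors usually add
    noise shaping on top of this.
    """
    rng = np.random.default_rng(seed)
    lsb = 1.0 / 32768.0  # one 16-bit quantization step
    # TPDF dither: sum of two uniform noises, spanning +/- 1 LSB
    noise = rng.uniform(-0.5, 0.5, x.shape) + rng.uniform(-0.5, 0.5, x.shape)
    steps = np.round((x + noise * lsb) / lsb)  # quantize to integer steps
    return np.clip(steps, -32768, 32767).astype(np.int16)
```

Without the added noise, low-level signals would snap to the nearest step and distort; with it, the quantization error turns into a low, constant noise floor.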

[EQUALIZER]
There are many different types of equalizers, and they are all meant to boost or cut
specific ranges of frequencies. EQs are typically made up of several bands. A band of
EQ is a single filter. By combining bands, you can create a nearly infinite number of
equalization shapes. Parametric equalizers provide the greatest level of control for
each band. They allow for independent control of the three variables—amplitude,
center frequency, and bandwidth—that make up a bell or peaking equalizer.
The picture below shows Ableton Live's EQ Eight, but the principles are the same for most parametric EQs. There are eight sets of arrows, which represent eight bands of equalization.
Below is the Pro-Q equalizer from FabFilter. Note that it has the same parameters as EQ Eight (native to Ableton Live):
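To make the three bell-filter variables concrete, here is a minimal Python sketch of a single parametric band using the widely published "RBJ Audio EQ Cookbook" biquad formulas. The function name and parameter defaults are illustrative, not taken from either of the plug-ins above.

```python
import numpy as np
from scipy.signal import lfilter

def peaking_eq(x, fs, f0, gain_db, q):
    """One parametric ('bell') EQ band: amplitude, center frequency, bandwidth."""
    a_lin = 10 ** (gain_db / 40)  # amplitude
    w0 = 2 * np.pi * f0 / fs      # center frequency (radians/sample)
    alpha = np.sin(w0) / (2 * q)  # bandwidth, via Q
    b = [1 + alpha * a_lin, -2 * np.cos(w0), 1 - alpha * a_lin]
    a = [1 + alpha / a_lin, -2 * np.cos(w0), 1 - alpha / a_lin]
    return lfilter(b, a, x)

# e.g. dip 3 dB of harshness around 3 kHz on a 44.1 kHz mix:
# y = peaking_eq(x, 44100, f0=3000, gain_db=-3.0, q=1.0)
```

Stacking several such bands with different settings is exactly what an eight-band parametric EQ does.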

[DYNAMICS]
Mastering the dynamics of a mix using compressors, limiters, and expanders is
probably the most challenging step of the process, but the one that can make the
most difference between a basement tape and a commercial-sounding mix. Taking
the time to understand dynamics processing can be well worth the effort.
There are a few things that make mastering dynamics challenging:
The effect is subtle, at least if done correctly. It’s not something you clearly hear,
like a flanger or reverb or so forth, but instead something that changes the character
of the mix. If you think about it, compression removes something (dynamic range)
and so what you will hear is the absence of something.
A compressor is not necessarily working all the time. Since it changes in response to
the dynamics in the music, you can’t listen for one specific effect. Level histograms
and compression meters can be invaluable for referencing when the compression is
occurring, and by how much.
Not all compressors are created equal. While the concept is simple enough—restrain
the volume when it crosses a threshold—the design and implementation (and
therefore the quality) of compressors varies considerably. Applying a quality
compressor correctly, however, can smooth the peaks and valleys in your mix and
make it sound fuller, smoother, or allow you to increase the average level (if that’s
the desired goal).
These are the four main types of compressors used in mastering:
1) Vari-Mu (also called Variable Mu, valve, or tube compressors):

The best known is the Fairchild, from the 1950s. Vari-Mu compressors are not very fast, and they characteristically color the sound; in other words, they create harmonics, a kind of subtle saturation.

2) Opto:
These are optical compressors. The most representative among them is the Teletronix LA-2A. This famous compressor has received several emulations, such as the Waves CLA-2A (pictured above).
3) VCA:
VCAs are more modern compressors. Perhaps the most famous examples are the dbx 160 and the SSL bus compressor; the latter was emulated by Cytomic and, if you are an Ableton user, is the basis of the Glue Compressor.

(Glue Compressor – Ableton Live)

(DBX 160)
VCA compressors are, for the most part, fast and transparent, and they offer precise attack and release control.

4) FET:
FET technology allows these compressors to be ultra-fast. They also impart a distinctive coloring to the audio; they are aggressive compressors with personality. Perhaps the most famous in this category is Universal Audio's 1176.

It is one of the most-used compressors in the mixing and mastering of landmark albums, including Michael Jackson's vocals.

[LOUDNESS MAXIMIZER (Limiting)]


Using tools like the Brainworx bx_limiter to perform limiting is not solely about
making a recording louder, though that is a consideration. Judicious use of a limiter
can also enhance the perceived presence and impact of a track.

Most sound editors have a Normalize function. The Normalize function analyzes your
entire mix, finds the highest peak, and adjusts the gain of the entire mix so that the
highest peak in the mix is at 0 dBFS (the verge of clipping) or a specified target level.
The rest of the music is then adjusted in level by the same amount. However, all this does is put the single highest peak on the verge of clipping.
The principle behind a limiter is that you can limit the peaks at the threshold and
then bring up the rest of the mix. The bulk of the mix can be brought up since the
peaks are cut down, so nothing overloads 0 dBFS.
A tiny bit of limiting is almost unnoticeable. In fact, if you were to limit or clip a
single sample, it is beyond our perception to notice that at all.
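The contrast between the two operations is easy to see in code. Below is a Python sketch of peak normalization next to a deliberately crude limiter; real limiters use look-ahead and smoothed gain release rather than hard clipping, so treat this only as an illustration of the principle.

```python
import numpy as np

def normalize(x, target_dbfs=-0.3):
    """Scale the whole mix so its single highest peak hits the target level."""
    peak = np.max(np.abs(x))
    return x * (10 ** (target_dbfs / 20) / peak)

def crude_limit(x, threshold_dbfs=-6.0):
    """Clamp peaks at the threshold, then bring the rest of the mix up.

    Hard clipping stands in for real limiting here purely to show why
    the average level rises: peaks are cut down, so the bulk of the
    mix can come up without exceeding 0 dBFS.
    """
    t = 10 ** (threshold_dbfs / 20)
    limited = np.clip(x, -t, t)  # peaks held at the threshold
    return limited / t           # everything raised so the ceiling is 0 dBFS
```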
[STEREO IMAGING]
Most pop/rock-based musical idioms have the following in common: the most
important elements are the drums and the vocals. To that end, the kick, snare, and
lead vocal tracks are usually panned to the center. When you use a stereo widener,
you are therefore usually emphasizing the other elements in the mix. A little of that
might help, but only a little.

Other issues come into play regarding phase relationships and sonic clarity overall, so
if you use a widening tool, listen to be sure that the heart of your recording isn’t
diminished.

[METERS]
Here are the three main types of meters and their uses in mastering:
1) Level Meters:
Level meters are probably the most familiar and ubiquitous meter. They will usually
display Peak level (the level of a signal from moment to moment) and RMS or
“average” level (the level of a signal averaged over a short window of time). Both
types of information are important but for different reasons.
Peak level tells us how close the signal is to the point of distortion. It’s a way of
helping us understand whether we have any headroom to bring a signal up without
changing anything else about it and staying below the point of distortion.
RMS level gives us information that relates to our perception of volume. Our brain
processes information during a short window of time to evaluate how loud something
is in our environment, and an RMS level is a way to attempt to give feedback about
that. However, RMS doesn’t relate directly to perception in the sense that it doesn’t
take frequency content and balance into account.
The relationship between Peak level and RMS level will vary widely depending on the
dynamics of the mix and the genre of the music. That makes it hard to generalize too
much. However, as a general idea of where the RMS level might sit with respect to 0 dBFS:
• Electronica: -8 to -12 dBFS
• Pop/R&B: -10 to -14 dBFS
• Rock: -12 to -16 dBFS
• Acoustic idioms (jazz, classical, folk-related): -14 to -20 dBFS
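For reference, here is roughly how the two readings are computed in Python. The 300 ms window is an arbitrary illustrative choice; real meters differ in their integration times and ballistics.

```python
import numpy as np

def peak_dbfs(x):
    """Highest instantaneous sample level, relative to full scale."""
    return 20 * np.log10(np.max(np.abs(x)))

def rms_dbfs(x, fs, window_ms=300):
    """Windowed RMS ('average') level, one reading per window."""
    n = int(fs * window_ms / 1000)
    frames = x[: len(x) // n * n].reshape(-1, n)
    rms = np.sqrt(np.mean(frames ** 2, axis=1))
    return 20 * np.log10(rms)
```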

2) Spectrograms:
While not exactly a meter, a spectrogram does map levels, or energy, across the
spectrum. It is a helpful tool that provides a reality check on what you hear and, to
an extent, what you can’t hear—especially in the nether regions of the bass and the
region close to 20 kHz. Spectrograms are good for helping you quickly diagnose a
problem frequency or set of frequencies. For instance, when a track has a problem with sibilance (the sound a vocal “S” makes), it usually shows up quite readily on a spectrogram. That makes it easy to focus an EQ or a de-esser on the
problem.
It’s helpful to have a spectrogram running while working in the EQ module to help
you focus your EQ settings. It’s also useful to have one running at the beginning and
the end of your mastering chain to keep tabs on what is happening and how your
original mix has been altered.

3) Vectorscope/Correlation/Stereo Image Metering


This sort of metering is probably the most under-appreciated, and in some ways this
isn’t surprising. The concept of thinking about mono-compatibility, and the width of
a stereo image, isn’t always discussed often enough when people begin to learn
about audio engineering or mastering.
When you are mastering, you want to be sure that the main instruments in a mix
don’t disappear when you listen to it in mono. A correlation meter gives you visual
feedback so you can be sure that the recording has a strong orientation toward
mono, and that it most certainly does not spend too much time with a strong orientation toward out-of-phase information. Why? Scenarios such as vinyl cutting,
MP3 encoding, and terrestrial broadcasting (radio and TV) rely heavily on the mono
signal being in good proportion. In any of these scenarios, too much out-of-phase
information will cause unpleasant artifacts for listening and, in some cases, actually
cause a mix to be sent back to the mixer as unusable.
As with any mastering process, a major problem with phase is best corrected at the
mix stage. If that’s not possible, one can try to address these problems using Mid/
Side processing or the stereo imaging tool. In these cases, the correlation meter is a
helpful gauge for making your adjustments.
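At its core, a correlation meter computes something like the following. Measuring one number over a whole file is a simplification of real meters, which report the value continuously over short windows.

```python
import numpy as np

def stereo_correlation(left, right):
    """Correlation between channels: +1 = effectively mono, around 0 = wide,
    negative = out-of-phase content that will cancel in a mono fold-down."""
    lc = left - np.mean(left)
    rc = right - np.mean(right)
    return np.sum(lc * rc) / np.sqrt(np.sum(lc ** 2) * np.sum(rc ** 2))
```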
MIX
What is Mixing?
Mixing is the operation of merging the sounds of several tracks (dialogue, music, and noise) into a single soundtrack.
Main Elements:
The audio mixing process can be understood as the balance and organization of various sound sources, taking into account the following basic elements:
• Volumes
• Panorama (Pan)
• Equalization
• Compression
• Harmonic Excitation (Drive, Amplifier Simulators, etc.)
• Effects (Delay, Chorus, Reverb, etc.)
Basic Methodology
Mixing can be understood as a spiral process. We usually don't work on one element and consider it "ready" until the end of the mix. We need to work on and listen to every element present in the music, and we will likely have to revisit each of them a few times until we find the most suitable way for it to fit and balance with the rest of the mix.
Mixing with elements in SOLO: Despite being the most intuitive way to start making adjustments to the elements in the mix, it is the most treacherous. A track can sound interesting on its own, but generally it will not fit with the other elements. Soloing leads to mistakes more often and makes you spend more time discovering and correcting sonic inconsistencies.
Mixing with all elements playing: The approach that seems less intuitive and more difficult at first will lead you to ever more consistent and interesting results as you learn. Only this way will you begin to gain full command of your audio tools.
Perhaps the most basic way to start your mix is to make a rough mix:
1) You can choose to reset all faders or start from their current positions. If you participated in the recording and production, chances are you have already shaped the sound in the earlier phases, so I recommend that you do not zero the faders. If you're mixing for someone else, bounce the session exactly as it arrives; this bounce will be your reference point. Then, if you wish, you can lower the faders and start from scratch. Use this bounce as a reference throughout the mixing process to know where the mix came from and where it wants to go. It's very common, when learning, to over-process tracks; in that case, returning to the original bounce as a reference is very important. If the "mixed" and "processed" sounds are worse than the "raw" track, you will know in time. Avoid unnecessary processing!
2) Remove all plugins (leave only those that are part of the production, such as special-effect plugins or amplifier simulators);
3) Start to raise the faders (if you zeroed them) or balance them, little by little, using volume only;
4) Ideally, do this in groups, such as drums (or electronic beats), percussion, string instruments, keyboards/synthesizers, and vocals;
5) Alongside the volume adjustments, work on panning, which allows you to arrange the elements more clearly in the stereo field (left/right).
Volume and Pan: these are the most basic elements of the mix and therefore the most important. At all times you'll be adjusting volume and panning to reach a more accurate result, but this basic adjustment will guide your entire process. So watch out: if you place an element with an extreme pan to the left, for example, it will be a lot of work, at a more advanced stage of the mix, to reposition that element without drastically affecting the whole. Each element within the mix depends on all the other elements, so stay well aware of this throughout the process!

We can understand mixing as a process in four dimensions: we can see the elements arranged as a three-dimensional image that moves through time (the fourth dimension). The audio elements can therefore be arranged in the mix as follows:
This form of visual association helps a lot in day-to-day mixing and can ease understanding, speeding up the learning of these techniques.
[FUNCTIONAL MIXING]
Although the image visualizing the instruments is very useful as a "photograph" of your mix, we can use the concept of "functional mixing" to further assist in organizing the mix. Each instrument or element in the mix has its own well-defined musical function. Generally speaking, drums and bass create the base; guitars complement the base but can also create details; and voices carry the main message and remain the focus of attention. Some elements remain active for the entire duration of the song, while some synthesizers and percussion parts appear only in short sections of the arrangement. Understanding all the elements, and judging the function and importance of each of them within the mix, will guide our mixing process and methodology.
There are no rules for choosing the order in which to work on the elements of a mix, but understanding their functions makes the whole process easier. Most mixing engineers follow this reasoning: base, then complements, then voices and details. It follows the logic of civil construction: engineers build a house by laying the foundation, slabs, and beams (base), then walls and roof (complements), and then the interior finishes, doors, and windows (voices and details). Working on the rhythmic elements first makes it easier to balance and place the harmony elements and voices in the mix. Most of the time, adopting this order leads to a much faster mix. Even so, depending on the music, some other element may draw the engineer's attention as the starting point instead of the rhythmic elements or base. In any case, the base will always receive a lot of attention right at the start of the process. Obviously, we cannot forget the details, as they can easily ruin a mix. Some elements, even if they play in only a small part of the song's arrangement, can obstruct or clash with a key element of the music if treated incorrectly.
[EQUALIZERS]
Equalizers are used to change the gain of specific portions of the audio spectrum. In practical terms, they are used to change the "color" of an element within the mix.

Filter Types:
Each equalizer can have one or more types of filters:
• Bell (peaking): boosts or cuts around the chosen center frequency, with a width defined by the Q parameter (when available);
• High-Pass Filter (HPF): removes frequencies below the selected (cut-off) frequency and passes only frequencies above it (see the sketch after this list);
• Low-Pass Filter (LPF): the opposite of the HPF; removes only frequencies above the selected frequency;
• High-Shelf: boosts or cuts all frequencies above the selected frequency;
• Low-Shelf: boosts or cuts all frequencies below the selected frequency.
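As a sketch of the HPF and LPF cases, here are both built from SciPy's standard Butterworth designs; the second-order default and the function names are illustrative choices.

```python
from scipy.signal import butter, sosfilt

def high_pass(x, fs, cutoff_hz, order=2):
    """Remove content below the cut-off (e.g. rumble under a vocal)."""
    sos = butter(order, cutoff_hz, btype="highpass", fs=fs, output="sos")
    return sosfilt(sos, x)

def low_pass(x, fs, cutoff_hz, order=2):
    """The opposite: keep only content below the cut-off."""
    sos = butter(order, cutoff_hz, btype="lowpass", fs=fs, output="sos")
    return sosfilt(sos, x)
```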
[COMPRESSORS]
Perhaps the most misunderstood and misused tool in the audio world, the compressor basically does what its name says: it compresses the dynamic variation of the audio. It is natural for a drum recording to vary in intensity between kick and snare hits, or for some chords in a guitar recording to come out stronger than others. This is all natural, but in the mix it is very important to have control over these elements so that we can organize the sound more precisely. We could simply ride a volume fader to adjust for these dynamic variations, but compressors exist precisely to do this task automatically, especially considering that many elements may have the problems described above and it would simply be impossible to address every such situation individually.

The illustration below shows the difference between an audio signal before and after
compression:

(Before)

(After)
In practical terms, the compressor held back the loudest portions of the audio, giving us a fuller sound (with fewer peaks). Through make-up gain, the lower-intensity portions are then raised, creating a smaller difference between the strongest and weakest portions of the signal. This is what we call reducing the dynamic range of the audio.
To perform this work, a basic compressor uses 5 parameters:
• Threshold;
• Attack;
• Release;
• Ratio;
• Make-Up Gain.
Depending on its architecture, a compressor may offer all of these parameters, additional ones, or only some of them. Some compressors, for example, have attack or threshold values pre-defined by the manufacturer (not selectable by the user) but let you select the release and/or ratio. Threshold is the parameter that defines the point from which the compressor starts to act: it operates only on the portion of the signal that rises above the threshold.
Let's assume that a guitar track's peaks hover around -20 dBFS. At a certain point, the musician starts to strum a little harder and the signal consequently becomes stronger. If we set the compressor threshold to -18 dBFS (above the -20 dBFS level), the compressor will not act during most of the audio. From the moment the musician plays harder and the sound passes -18 dBFS, the compressor will kick in and remain engaged until the signal falls back below -18 dBFS. Of course, the compressor depends on another important parameter to decide what to do once the threshold is crossed: the ratio.
Ratio is the compression ratio (or rate). In the following illustration we see different lines with the values 1:1, 2:1, 4:1, and 20:1. If our compressor is set to 1:1, no compression occurs: 1:1 means that for every 1 dB that passes above the threshold, 1 dB comes out, so the signal remains intact. If the compressor is set to 2:1, however, we have light compression: for every 2 dB that exceeds the threshold, only 1 dB appears at the output. Continuing our earlier example, suppose that at a certain point the musician plays a chord that peaks at -12 dBFS. Our threshold was set at -18 dBFS, so the signal exceeded it by 6 dB. At a 2:1 ratio, instead of the output peaking at -12 dBFS, it will come out at -15 dBFS, a reduction of 3 dB (half of the amount by which the signal exceeded the threshold). The higher the ratio, the greater the compression. Very strong compression (for example, 20:1) is what we call limiting: if a signal exceeds the threshold of a limiter, it is practically cut, leaving only the portion below the threshold. A very strict limiter is what we call a brickwall limiter.
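The ratio arithmetic above is easy to verify with a small static gain function in Python; this is only the static curve, ignoring attack and release entirely.

```python
def compressed_level(in_dbfs, threshold_dbfs, ratio):
    """Output level for a given input level (static compression curve)."""
    if in_dbfs <= threshold_dbfs:
        return in_dbfs                    # below threshold: untouched
    over = in_dbfs - threshold_dbfs       # dB above the threshold
    return threshold_dbfs + over / ratio  # only 1 dB out per `ratio` dB in

# The worked example from the text: -12 dBFS peak, -18 dBFS threshold
print(compressed_level(-12, -18, 2))   # -15.0 (2:1)
print(compressed_level(-12, -18, 20))  # -17.7 (20:1, nearly limiting)
```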
The attack parameter defines how fast a compressor goes into action once the signal exceeds the threshold; release defines how fast the compressor stops compressing once the signal falls back below the threshold. These parameters define the shape (or mold) of the sound. A very fast attack sounds more aggressive, while a slower attack lets the transients and low-frequency portions of the sound pass without being over-compressed, producing a more natural sound.
A quick release also makes the sound more aggressive, since the transition from compressed to uncompressed sound happens very abruptly. A medium-to-long release is more common in general practice, as it keeps the sound softer and more controlled. There are several schools of thought and different applications regarding the choice of attack and release times, but no definitive rules.
After the sound is compressed, it naturally feels quieter. To compensate, most compressors have a make-up gain parameter, which brings the compressed signal back to roughly the perceived volume of the uncompressed audio.
Expanders and Gates
Expanders and gates work much like compressors, but instead of reducing the dynamic range of a signal, they increase it. The easiest way to understand how an expander or gate works is with drums: snare, kick, or toms. A snare microphone picks up a lot of leakage from the cymbals and other parts of the kit. In a mix, we can try to "isolate" the snare as much as possible using an expander or gate. When the snare is hit, the sound passes through unprocessed, but in the intervals between hits the expander reduces the background noise. Basically, it extends the dynamic range of the audio downward, pushing the quiet portions toward even lower signal levels. The difference between an expander and a gate is essentially the same as the difference between a compressor and a limiter: the ratio of a gate is much higher than that of an expander. The idea is exactly analogous to a compressor's. The threshold defines the point at which the expander (or gate) begins to reduce (expand downward) the audio. The ratio is written in reverse, i.e. 1:2, 1:4, 1:10, 1:20. A ratio of 1:4 means that for every 1 dB the signal falls below the threshold, its output falls 4 dB below it; we get the sensation of "pushing the dirt" down by 4 dB for each dB below the threshold. The attack and release roles are inverted here compared to compressors.
How quickly the expander starts to act is set by the release time; when the sound returns above the threshold, the expander (or gate) "opens," and how fast it opens is set by the attack parameter. Expanders and gates may also offer two special parameters, depending on their architecture: range and hold. Range defines the maximum reduction, in dB, the processor can apply, creating a "floor" for the background noise. Hold "asks" the expander to wait a certain time (in milliseconds) before it starts pushing the sound down; the release time starts counting after the hold time, and if hold is 0 ms, only the release time is taken into account. There is also a special type of expander, the upward expander, which, like the compressor, treats the upper part of the signal, but performs a dynamic expansion upwards. An upward expander can be used to treat wrongly processed audio, mainly signals that were compressed or limited in an inappropriate, extreme way. It cannot fully restore the audio quality, but it allows a totally "limited," squashed signal to be treated with the same level of compression as the other elements within a mix.
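A heavily simplified downward expander might look like the sketch below. The windowed-RMS envelope follower and the parameter defaults are illustrative assumptions; real gates add attack, release, hold, and range behavior on top of this static curve.

```python
import numpy as np

def downward_expand(x, fs, threshold_db=-40.0, ratio=4.0, window_ms=10):
    """Push signal below the threshold further down (1:`ratio` expansion)."""
    n = max(1, int(fs * window_ms / 1000))
    # Crude envelope follower: windowed RMS of the signal
    env = np.sqrt(np.convolve(x ** 2, np.ones(n) / n, mode="same"))
    env_db = 20 * np.log10(np.maximum(env, 1e-9))
    under = np.minimum(env_db - threshold_db, 0.0)  # dB below threshold (<= 0)
    # 1:4 -> each dB under the threshold ends up 4 dB under it
    gain_db = under * (ratio - 1.0)
    return x * 10 ** (gain_db / 20)
```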
Reverb, Delay, and Other Time-based Effects
Reverberation, or simply reverb, is perhaps the most intuitive signal processing there is. Hearing the sound of a guitar as if it were inside a theater is perhaps one of the most natural things to imagine in terms of sound processing. Reverb, delay, phaser, chorus, and flanger are all time-based effects.
Basically, the dry sound (without processing) occurs at a certain point in time, and a few milliseconds (or, depending on the effect, a few seconds) later the processed sound is added to the original, creating the effect. In the digital world, reverb can be created in two basic ways: with digital algorithms or by convolution. The most common way is through digital algorithms, where the processor takes the original signal, simulates sound reflections off the walls of an imaginary room or environment, and lets the sound reverberate in that environment for a certain period of time. The result is combined with the original, producing the reverberated sound. A convolution reverb instead uses a stimulus recorded in the real world (called an impulse response): a very short broadband sound that, through mathematical processing of the original signal, gives the feeling of placing the recorded sound in the room where the impulse was captured.
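The convolution approach is short enough to sketch directly. Assuming the impulse response has already been loaded as an array at the same sample rate, the whole effect reduces to one convolution plus a wet/dry blend.

```python
import numpy as np
from scipy.signal import fftconvolve

def convolution_reverb(dry, impulse_response, wet=0.3):
    """Convolve the dry signal with a recorded impulse response and blend."""
    wet_sig = fftconvolve(dry, impulse_response)[: len(dry)]
    return (1 - wet) * dry + wet * wet_sig
```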

Physically, reverberation occurs as shown in the figure above. The original signal (initial pulse) is reproduced in a given environment. After a very short time (the pre-delay), the signal first reaches the walls of the room, where the first reflections occur (early reflections). The sound then bounces among the various walls over a period of time (late reflections, or reverb). The time it takes for the sound to decay by 60 dB from the start of the early reflections is what we call the reverberation time, or simply reverb time (in some processors it is called decay time).
Thus, the main parameters of a reverb processor are:
• Type: selects the environment: room, hall, plate, chamber, and so on;
• Decay: the reverb time (usually shown in seconds);
• Pre-Delay: the delay between the original impulse and the first reflections; a higher pre-delay value creates the feeling of a larger room;
• Room Size: determines the physical size of the room and generally increases the decay proportionally for larger rooms;
• Diffusion: determines the amount of diffusion in the room; a room with uneven surfaces tends to "spread" the reflections more, generating a more "colorful" sound (higher diffusion), while a room with flatter surfaces spreads the sound less, generating a more neutral and transparent sound (lower diffusion).
Unlike reverb, which generates many reflections that are perceived as one large mass of sound, delay is perceived as distinct repetitions of the audio. The basic processing is quite simple: the audio passes through the processor, which stores the content in memory; after a predefined period of time (the delay time), the stored sound is repeated and added to the original, with the number of decaying repeats governed by the feedback amount. The stored sound can also be processed before being replayed; this is done by the modulation unit found in many delay processors, which varies the delay time via depth and rate parameters. With the modulation unit active, a delay processor can create phaser, chorus, and flanger effects, which are basically forms of delay built from modulation of the repeated signal.
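The basic delay-plus-feedback loop described above can be sketched in a few lines; the sample-by-sample Python loop is written for clarity rather than speed, and the parameter defaults are arbitrary.

```python
import numpy as np

def feedback_delay(x, fs, delay_ms=350, feedback=0.4, mix=0.3):
    """Distinct decaying echoes: each repeat is the stored signal plus
    its own earlier echo, attenuated by the feedback amount."""
    d = int(fs * delay_ms / 1000)
    buf = np.zeros_like(x)
    for i in range(d, len(x)):
        buf[i] = x[i - d] + feedback * buf[i - d]
    return (1 - mix) * x + mix * buf
```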
Harmonic Drivers
It is very common nowadays to see amplifier simulators and plug-in emulations of audio hardware. These processors use what we call harmonic excitation. We can record a clean guitar sound and then shape it entirely in the mix, choosing a virtual amplifier, a speaker type, the type of microphone used on the cabinet, and so on. All this processing defines the timbre of the audio signal; that is, it alters the characteristics of, and the relationship between, the fundamental frequencies and the harmonics of the signal. Beyond fully shaping a sound that was recorded without any processing, we can also add energy, create harmonic distortion, and produce many other interesting effects with harmonic excitation plugins. There are many such tools on the market, but some widely used ones are:
• Waves NLS: analog mixing console simulator;
• Waves Vitamin: multiband harmonic exciter;
• Guitar Rig, AmpliTube, and Ampeg SVX: guitar and bass amplifier simulators;
• Kramer Tape, UAD Studer, and Ampex tape emulations: magnetic tape recorder simulators;
• SansAmp PSA and UAD Thermionic Culture Vulture: distortion units.

Image Manipulators (Mid/Side)


Mid/side is a signal processing technique that lets us treat stereo audio in a different way. In mid/side (or M/S), we can treat the fully in-phase audio (which comes out equally from both speakers and therefore sits at the center of the stereo image) separately from the out-of-phase audio (which comes out only from the sides, with the pan opened); more details about M/S are in the Mastering section. This allows us to perform actions such as expanding the sense of stereo width of a synthesizer recorded in two channels, or collapsing a stereo sound to mono (adding the two channels). We can also generate so-called fake stereo, creating a feeling of depth and openness in a sound originally recorded in mono. Various tools are available on the market for this type of manipulation in mixing, such as those below; a minimal sketch of the underlying M/S math follows the list:
• Waves S1 Stereo Imager;
• Waves PS22 Mono to Stereo Enhancer;
• UAD Precision K-Stereo Ambience Recovery;
• iZotope Ozone (allows stereo image widening, equalization, and compression in mid/side mode).
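Under the hood, the M/S trick is just a sum and a difference. Here is a minimal width-adjustment sketch in Python; the encode/decode identities are standard, while the function name and default width are illustrative.

```python
import numpy as np

def adjust_width(left, right, width=1.5):
    """Encode L/R to mid/side, scale the side signal, decode back.

    width > 1 widens, width < 1 narrows, width = 0 collapses to mono.
    """
    mid = (left + right) / 2   # in-phase content (center of the image)
    side = (left - right) / 2  # out-of-phase content (the sides)
    side *= width
    return mid + side, mid - side  # back to left, right
```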

Noise Reducers
Successful production depends on impeccable mixing, but we must not forget that the editing work before the mix is vital. When editing audio (adjusting performances, choosing the best takes, tuning vocals, and so on) we may come across situations in which the audio itself needs treatment. Bursts of air hitting dynamic microphones when recording vocals (plosive "pops"), clicks, unwanted noises, cable noise or "hum," and so on, can and should be dealt with during the music production process. There are numerous audio "restoration" tools on the market from various brands. The most important thing, however, is to understand the processing categories and what they do. Many people today use these tools completely wrongly; it is necessary to understand when, what, and how to use them:
• De-clicker: removes occasional clicks caused by electrical surges, discontinuities in the waveform, incorrectly edited audio (without fades or crossfades), crackling, and so on;
• De-clipper: "undoes" the flattening of the waveform (digital clipping) that occurs mainly when the audio was captured with too much preamplifier gain, or when one or more moments of a performance exceeded the amplitude limit (0 dBFS) of the recording system; it is a tool based on upward expansion;
• De-crackler: removes the crackles present in recordings of vinyl records;
• De-noiser: reduces constant noise in an audio file, such as noise from cassette tapes, guitar amplifiers, and so on;
• Hum removal: basically a specialized equalizer that eliminates noise caused by interference from the electrical mains, which generates the so-called "hum" (very common in guitars and other instruments susceptible to electrical interference).
We must use any of these noise reduction tools very carefully, as they leave many artifacts in the audio when used incorrectly. They are corrective measures for when we have an audio problem and cannot re-record, edit, or use another take. Always prefer natural audio with some noise over clean but extremely processed audio.
