
Table of Contents
Preface......................................................................................................................................11

Chapter 1 | Mixing Philosophies............................................................................................14

Are You a Musician, Producer, or Engineer?............................................................................................14


Right Brain Versus Left Brain....................................................................................................................15
How to Stay in Right-Brain Mode.........................................................................................................15
The Importance of “Feel”...........................................................................................................................16
The Arrangement’s Importance.................................................................................................................17
Make Sure Every Part Serves the Song..............................................................................................17
Build to the Moments of Impact............................................................................................................17
The Mute Button’s Importance.............................................................................................................18
You Have 10 Seconds to Grab Someone’s Attention...........................................................................18
Think About Your Audience..................................................................................................................18
What’s Your Intended Result?...................................................................................................................19
Key Takeaways.........................................................................................................................................19

Chapter 2 | Technical Basics.................................................................................................22

Hearing and Frequency Response...........................................................................................................22


The Problem with Ears.........................................................................................................................23
Optimum Mixing Levels........................................................................................................................24
Monitoring and Acoustics..........................................................................................................................24
The Room.............................................................................................................................................24
Near-Field Monitors..............................................................................................................................27
Anatomy of the Near-Field Monitor......................................................................................................29
Rear-Panel Controls.............................................................................................................................29
Low-Shelf Cutoff..............................................................................................................................30
Mid-Frequency Control....................................................................................................................30
High-Frequency Control..................................................................................................................30
Acoustic Space Switch....................................................................................................................31
A Caution about Rear-Panel EQ Settings.......................................................................................31
What’s the “Best” Monitor?...................................................................................................................31
Setting Levels............................................................................................................................................31
Key Takeaways.........................................................................................................................................32

Chapter 3 | Mixing with Computers.......................................................................................34

Mixer Architecture......................................................................................................................................34
Mono vs. Stereo Tracks........................................................................................................................35
Bus Basics............................................................................................................................................35
Channel Strips......................................................................................................................................36
Tech Talk: Mute and Solo Buttons...................................................................................................38
About Grouping....................................................................................................................................38
Grouping in the Virtual World..........................................................................................................39
Another Way to Group.....................................................................................................................40
Bus Grouping...................................................................................................................................40
VCA Channels...........................................................................................................................................40

Unique Aspects of Mixing with Digital Audio.............................................................................................42
The Two Kinds of Resolution................................................................................................................42
The Two Types of Audio—and How to Save CPU Power....................................................................43
Tech Talk: Bouncing.........................................................................................................................43
Tech Talk: About the CPU................................................................................................................44
Setting Levels with Digital Mixers.........................................................................................................45
Tech Talk: Headroom.......................................................................................................................45
The Best Sample Rate for Recording and Mixing................................................................................45
Tech Talk: A Workaround to Obtain the Benefits of Higher Sample Rates......................................47
Key Takeaways.........................................................................................................................................47

Chapter 4 | How to Use Plug-Ins............................................................................................50

Plug-In Technologies.................................................................................................................................50
Plug-In Formats.........................................................................................................................................51
32-Bit vs. 64-Bit Plug-Ins......................................................................................................................51
Plug-In Wrappers..................................................................................................................................52
Stereo vs. Mono Plug-Ins..........................................................................................................................52
Effects Plug-Ins Are Always “Re-Amping”.................................................................................................52
The Four Places to Insert Effects..............................................................................................................53
Track (Channel) Insert Effects..............................................................................................................53
Event (Clip) Effects...............................................................................................................................54
Bus and FX Channels..........................................................................................................................55
Adjusting Send and Bus Levels.......................................................................................................56
FX Chains..................................................................................................................................................57
Split Types............................................................................................................................................58
Parallel Effects Applications......................................................................................................................58
Master Effects...........................................................................................................................................59
Using Virtual Instrument Plug-Ins..............................................................................................................59
Instruments with Multiple Outputs........................................................................................................60
ReWire and Mixing....................................................................................................................................61
Computer Requirements......................................................................................................................62
Applying ReWire...................................................................................................................................62
ReWire Implementations......................................................................................................................62
Loading ReWire Instruments................................................................................................................63
Using ReWired Instruments.................................................................................................................63
The Simplest Option: Reason as a Collection of Virtual Instruments..............................................64
Using as Real-Time Virtual Instruments..........................................................................................65
More About ReWired Instruments...................................................................................................68
Using Hardware Effects when Mixing.......................................................................................................68
Using Pipeline......................................................................................................................................68
Limitations of External Hardware.........................................................................................................70
Key Takeaways.........................................................................................................................................71

Chapter 5 | Mixing and MIDI...................................................................................................74

Mixing’s Most Important MIDI Data...........................................................................................................74


Virtual Instruments and CPU Issues.........................................................................................................75
How to Enhance MIDI Drum Parts in a Mix...............................................................................................75
Shift Pitch.............................................................................................................................................75
Make Impact XT Drum Sounds More Expressive................................................................................76
Assign Velocity to Parameters.........................................................................................................76
Multiple Drum Samples...................................................................................................................77

How to Simulate Multiple Drum Samples........................................................................................78
Hi-Hat Amplitude Envelope Decay Modulation...............................................................................79
Choke Groups.................................................................................................................................79
Enhancing Synth Parts in the Mix.............................................................................................................79
Change the Sample Start Point............................................................................................................79
Layering Techniques............................................................................................................................80
Taming Peaks.......................................................................................................................................83
Synth/Sampler Parameter Automation Applications.............................................................................83
Humanizing Sequences............................................................................................................................85
How Timing Shifts Produce “Feel”.............................................................................................................86
Track Timing Tricks...............................................................................................................................86
Tech Talk: Timing Shifts with Audio.................................................................................................88
Quantization Options............................................................................................................................88
Tech Talk: Quantization with Audio..................................................................................................90
Proofing MIDI Sequences.........................................................................................................................90
Key Takeaways.........................................................................................................................................90

Chapter 6 | Prepare for the Mix..............................................................................................92

Before You Mix..........................................................................................................................................93


Mental Preparation, Organization, and Setup...........................................................................................93
Review the Tracks.....................................................................................................................................94
Organize Your Mixer Space..................................................................................................................94
Put on Headphones and Listen for Glitches.........................................................................................94
Render Virtual Instruments as Audio Tracks.............................................................................................95
Set Up a Relative Level Balance Among the Tracks.................................................................................96
Key Takeaways.........................................................................................................................................96

Chapter 7 | Adjust Equalization (EQ)....................................................................................98

Main EQ Parameters.................................................................................................................................98
Tech Talk: Understanding the deciBel (dB).....................................................................................98
Equalizer Responses................................................................................................................................99
Dynamic Equalization..............................................................................................................................106
Spectrum Analysis...................................................................................................................................107
Linear-Phase Equalization......................................................................................................................108
Linear-Phase Basics..........................................................................................................................108
Limitations..........................................................................................................................................109
Minimizing Latency.............................................................................................................................110
Mid-Side Processing with Equalization....................................................................................................111
The Pros—and Pitfalls—of Presets.........................................................................................................113
EQ Applications.......................................................................................................................................114
Solve Problems...................................................................................................................................114
Remove Subsonic Audio................................................................................................................114
Tame Resonances.........................................................................................................................115
Make Amp Sims Sound Warmer....................................................................................................116
Minimize Vocal Pops......................................................................................................................117
Reduce Muddiness........................................................................................................................118
Create “Virtual Mics” with EQ........................................................................................................118
Emphasize Instruments......................................................................................................................121
Create New Sonic Personalities.........................................................................................................121
Additional Equalization Tips....................................................................................................................122
Key Takeaways.......................................................................................................................................122

Chapter 8 | Dynamics Processing.......................................................................................124

Manual Gain-Riding................................................................................................................................124
Level-Riding Plug-Ins..............................................................................................................................125
Normalization..........................................................................................................................................126
Compressor.............................................................................................................................................126
How to Adjust the Parameters............................................................................................................127
The Gain Reduction Meter’s Importance...........................................................................................128
Parameter Adjustment Tips................................................................................................................128
Compressor Applications....................................................................................................................129
Vocals............................................................................................................................................129
Electric Bass..................................................................................................................................130
Guitar (Sustainer)..........................................................................................................................131
Guitar (Acoustic Rhythm)..............................................................................................................131
Kick and Snare..............................................................................................................................132
Cymbals.........................................................................................................................................133
Limiter2....................................................................................................................................................134
Limiter vs. Compressor......................................................................................................................134
Gain Reduction Metering....................................................................................................................134
Limiting Parameters...........................................................................................................................135
How to Adjust the Parameters............................................................................................................135
Input Gain......................................................................................................................................135
Ceiling............................................................................................................................................136
Release (Decay)............................................................................................................................136
Modes and Attack..........................................................................................................................136
To Clip, or Not to Clip?.......................................................................................................................137
Limiter Applications............................................................................................................................137
Tricomp....................................................................................................................................................138
How to Adjust the Parameters............................................................................................................138
Expander.................................................................................................................................................140
How to Adjust the Parameters............................................................................................................141
Expander Applications........................................................................................................................141
Reduce Low-Level Noise..............................................................................................................141
Tighten a Mixed Drum Sound........................................................................................................142
Fix Overcompressed Sounds........................................................................................................143
Make Dry Electric Guitars More Dynamic.....................................................................................144
Multiband Dynamics................................................................................................................................147
How to Adjust the Parameters............................................................................................................147
Diagnostic Controls and Shortcuts................................................................................................148
Color-Coding.................................................................................................................................148
Setting the Frequency Bands........................................................................................................148
Setting Threshold, Ratio, and Gain...............................................................................................148
Dynamics Controls........................................................................................................................149
Global Controls..............................................................................................................................149
Multiband Dynamics Applications.......................................................................................................149
Tips for Individual Instruments...........................................................................................................150
Multiband Dynamics for Vocal De-Essing.....................................................................................152
Multiband Dynamics as Graphic Equalizer........................................................................................153
Transient Shaper.....................................................................................................................................154
Studio One’s Zero-Latency/Zero-Artifact Transient Shaper...............................................................155
Gate.........................................................................................................................................................156
How to Adjust the Parameters............................................................................................................157

About Trigger Event.......................................................................................................................157
Gate Applications...............................................................................................................................157
Attack Delay Processor.................................................................................................................157
Reduce Noise with Saturated/Distorted Audio..............................................................................158
Drum Enhancement with Multitracked Drums...............................................................................158
Snare Drum or Tom Enhancement with Mixed Drums..................................................................159
Tweaking the Synth.......................................................................................................................162
Additional Gate Applications for Drums.........................................................................................163
Dynamics Coda: Should Compression or Limiting Go Before or After EQ?...........................................163
Key Takeaways.......................................................................................................................................164

Chapter 9 | Sidechaining......................................................................................................166

How to Access the Sidechain Input.........................................................................................................166


Insert a Send to Feed the Sidechain..................................................................................................167
How to Process the Signal Feeding a Sidechain Input......................................................................167
Internal vs. External Sidechaining......................................................................................................168
Sidechain Applications.......................................................................................................................169
Lock Kick Drum and Bass Together (Gate Application)................................................................169
Pump Drums with Internal Sidechaining (Compressor Application)..............................................170
Conventional Pumped Drums (Compressor Application)..............................................................171
Frequency-Selective Compression (Compressor Application)......................................................171
Muting via Ducking (Gate Application)..........................................................................................172
Lower a Track’s Level from a Different Track (Compressor Application)......................................173
Reduce Amp Sim Harshness with De-Essing (Compressor Application)......................................175
Drum Sound Enhancement (Gate)................................................................................................176
Spectrum Meter..................................................................................................................................177
Key Takeaways.......................................................................................................................................177

Chapter 10 | Add Other Effects............................................................................................180

Console Emulation..................................................................................................................................180
Saturation and Distortion.........................................................................................................................182
Applying Saturation............................................................................................................................183
Drums............................................................................................................................................183
Bass...............................................................................................................................................183
Vocals............................................................................................................................................184
Keyboards.....................................................................................................................................184
Bus/FX Channel Send Effects Distortion.......................................................................................184
Where to Insert EQ with Saturation....................................................................................................184
Delay.......................................................................................................................................................184
Delay Parameters...............................................................................................................................184
Delay Time.....................................................................................................................................185
Feedback.......................................................................................................................................187
Mix.................................................................................................................................................187
Creating Wider-than-Life Sounds with Delay.....................................................................................187
Using Delay to Create Long, Trailing Echoes....................................................................................188
Stereo Image Enhancers....................................................................................................................189
Modulation Effects (Chorus, Flanger, etc.)..............................................................................................190
How to Adjust Modulation Effects Parameters...................................................................................192
Initial Delay....................................................................................................................................193
Mix or Depth..................................................................................................................................193
Feedback.......................................................................................................................................193

Modulation or LFO Width..............................................................................................................193
Modulation Waveform or Shape....................................................................................................193
Modulation Rate............................................................................................................................193
Modulation Effects Tips......................................................................................................................193
Pitch Correction.......................................................................................................................................194
How Pitch Correction Works..............................................................................................................194
Applying Pitch Correction...................................................................................................................195
Other Pitch Correction Applications...................................................................................................195
Automatic Double-Tracking (ADT) for Vocals................................................................................195
Add a Harmony..............................................................................................................................196
Create Heavy Drum Sounds.........................................................................................................197
Full, Tight Kick Drums....................................................................................................................197
Octave Divider Bass......................................................................................................................197
Pitch Uncorrection.........................................................................................................................197
Restoration Plug-Ins................................................................................................................................197
Multiband Processing..............................................................................................................................197
Key Takeaways.......................................................................................................................................199

Chapter 11 | Create a Soundstage.......................................................................................202

Panning Basics........................................................................................................................................202
How Panning Differs for Stereo and Mono Tracks.............................................................................204
Panning and MIDI...............................................................................................................................204
Panning Tips............................................................................................................................................205
The Audio Architect: Build An Acoustical Space.....................................................................................206
About Reverb..........................................................................................................................................206
Different Reverb Types.......................................................................................................................206
Synthesized Reverb......................................................................................................................206
Convolution Reverb.......................................................................................................................207
Tech Talk: The Convolution Process.............................................................................................208
One Reverb or Many?........................................................................................................................208
Supplementing Reverb with a Real Acoustic Space..........................................................................209
Reverb Parameters and Controls.......................................................................................................209
Early Reflections............................................................................................................................209
Decay Time and Decay Time Frequencies....................................................................................210
Reflexivity......................................................................................................................................210
Reverb Algorithm...........................................................................................................................210
Room Reverb Size, Width, and Height Parameters......................................................................210
Room Reverb Damping and Population Parameters....................................................................211
Dual-Band Reverb..............................................................................................................................211
Reverb Settings.............................................................................................................................212
Create Virtual Room Mics with Delay......................................................................................................215
The Setup...........................................................................................................................................215
Plan Ahead with Reverb and Panning.....................................................................................................216
Key Takeaways.......................................................................................................................................217

Chapter 12 | Mix Automation................................................................................................220

What You Can Automate.........................................................................................................................220


Automation Basics...................................................................................................................................221
Audio Track Automation Methods...........................................................................................................222
Method 1: Record On-Screen Control Motion....................................................................................222
Automation Modes.........................................................................................................................223

Method 2: Draw and Edit Envelopes..................................................................................................224
Method 3: Record Automation Moves from a Control Surface...........................................................225
Tech Talk: Control Surfaces...........................................................................................................225
Method 4: Snapshot Automation........................................................................................................226
Automating Effect and Virtual Instrument Parameters............................................................................227
Adding or Removing Envelopes..............................................................................................................229
Finding the Parameter to Add or Remove..........................................................................................229
Global Method...............................................................................................................................229
Automation Track Method.............................................................................................................230
Edit View Method...........................................................................................................................230
The Add/Remove Dialog Box.............................................................................................................230
MIDI Learn...............................................................................................................................................231
Using Hardware Control Surfaces...........................................................................................................232
Traditional Mixing...............................................................................................................................232
How to Choose a Control Surface......................................................................................................233
Automation Applications..........................................................................................................................234
The Trim Function..............................................................................................................................234
Add Expressiveness with Controllers.................................................................................................235
Aux Send Automation and Delays......................................................................................................235
Aux Sends and Reverb Splashes......................................................................................................236
Panning..............................................................................................................................................236
Complementary Motion......................................................................................................................236
Mutes and Solos.................................................................................................................................236
Mute vs. Change Level.......................................................................................................................236
Plug-In Automation Applications..............................................................................................................237
Better Chorusing and Flanging...........................................................................................................237
Creative Distortion Crunch.................................................................................................................237
Emphasizing with EQ.........................................................................................................................237
Delay Feedback.................................................................................................................................237
Envelope-Based Tremolo...................................................................................................................238
Key Takeaways.......................................................................................................................................238

Chapter 13 | Final Timing Tweaks.......................................................................................240

James Brown “Papa’s Got a Brand New Bag”........................................................................................240


The Beatles “Love Me Do”.........................................................................................241
The Police “Walking on the Moon”..........................................................................................................241
Smokey Robinson & The Miracles “Tears of a Clown”............................................................................242
Pat Benatar “Shadows of the Night”........................................................................................................242
So the Point Is.........................................................................................................................................243
Adding Tempo Changes To a Final Mix...................................................................................................243
Preparation.........................................................................................................................................243
Making the Tempo Changes...............................................................................................................244
Inserting “Time Traps”........................................................................................................................246
Key Takeaways.......................................................................................................................................247

Chapter 14 | Review and Export..........................................................................................250

Mastering While Mixing—Pros and Cons...............................................................................................250


Can Online and Automated Mastering Services Do the Job?............................................................251
Mastering While Mixing......................................................................................................................252
Prepping Files for a Mastering Engineer............................................................................................252
Export Your Mixed File............................................................................................................................253

Main File Types..................................................................................................................................254
Sample Rates and Bit Depth..............................................................................................................255
Tech Talk: Sample Rates and Resolution......................................................................................255
Bouncing Mixes Inside the Project.....................................................................................................255
Check Your Mix Over Different Systems.................................................................................................257
Key Takeaways.......................................................................................................................................258

Appendix A | MIDI 1.0 Basics...............................................................................................260

The Most Important MIDI Messages.......................................................................................................260


MIDI Continuous Controllers..............................................................................................................260
Continuous Controller Numbers....................................................................................................261
MIDI Controller Assignment....................................................................................................................261
Fixed Parameter Assignments...........................................................................................................261
Uncommitted Parameter Assignments...............................................................................................261
Manual Linking..............................................................................................................................261
MIDI Learn.....................................................................................................................................262
Scaling and Inversion.........................................................................................................................263
Parameter Value Takeover.................................................................................................................264
Tweak Presets with MIDI....................................................................................................................264

Appendix B | Calibrating Levels..........................................................................................265

The Level Meter......................................................................................................................................265


Calibrating Your Monitors........................................................................................................................266

Appendix C | Mixing with Noise...........................................................................................267

One Reason Why Mixing Is Challenging................................................................................................267


This Technique’s Backstory.....................................................................................................................267
How to Mix with Noise.............................................................................................................................268

About the Author...................................................................................................................269

Other Studio One Books by the Author..............................................................................270

Preface

This revised edition of How to Create Compelling Mixes in Studio One, which incorporates several
changes in the most recent version of Studio One (as well as a new section on MIDI Basics), covers the
techniques and the artistic philosophies of mixing. Perhaps no other area of recording has been affected
as much by technology as mixing. What used to require expensive multitrack recorders, huge mixing
consoles, and a substantial investment in outboard gear now fits in a desktop or laptop computer—at a
fraction of the cost.

What hasn’t changed is that you still have to learn how to mix. Whether a mixing console (or
virtual musical instrument) consists of pixels on a computer screen or hardware controls, creating a
great mix is a complex and sometimes daunting process. If you’re new to mixing, it’s as if you’d walked
into a million-dollar studio a few decades ago, and the owner said “The good news: I’ll charge you only
$1 an hour for studio time. The bad news: there’s no engineer. Good luck!” and handed you the keys.

This book describes the many facets of mixing in our computer-based world. No one said learning how
to mix would be easy, but hopefully How to Create Compelling Mixes will make it less difficult—and
remind you that the purpose of making music is to enjoy yourself, give your listeners an emotional
experience, be creative, and maybe even discover a little bit more about who you are.

—Craig Anderton

How to Create Compelling Mixes in Studio One

Published by Craig D Anderton, Inc.


© 2020. All rights reserved.

Chapter 1 | Mixing Philosophies

Mixing is the process of turning the tracks you recorded into a cohesive listening experience. This
involves adjusting levels, tonal balance, and stereo or surround placement, and adding appropriate
signal processing until everything sounds great. While that may seem straightforward, a mix requires a
huge number of value judgments—which track should have the focus at any given moment, should
some unneeded parts be muted or erased, do you want a raw or highly produced sound, and perhaps
most importantly, who is your target audience, and what do they want to hear? Your mix’s success
depends on your ability to answer and resolve these questions.

Mixing is a combination of art—you have to judge what’s most “musical”—and science, where you
need to know the processes, technologies, and settings that will produce the sounds you want to hear. In
a way, mixing is like a combination lock: once all the tumblers are in place, the lock opens.

Are You a Musician, Producer, or Engineer?


In pro studios, the musician is usually part of a team that includes at least a producer and an engineer.
In a home studio environment, you may need to perform all three roles. It’s not easy to step out of the
musician mindset and learn to be objective about your playing, songwriting, and engineering. However,
if you do, your music will benefit greatly. Here’s each participant’s role.

• The producer oversees the process, approves the arrangement, gauges the overall emotional
impact, and makes artistic judgments about what does and does not work. A producer sees each
aspect of the mixing process as part of a whole, and each track as a contributor to the final
composition. If you know where you’re going, it’s a lot easier to get there.
• The musician participates in the mix by making sure the production remains true to the original
artistic vision.
• The engineer fulfills the producer’s needs with technological solutions. If the producer wants a
“bigger” drum sound, the engineer does the tweaks to provide the desired effect. The engineer
doesn’t worry about whether you could have done a better solo, but works with what’s
available.

Become familiar with these roles, and apply their differing outlooks to your music to obtain a balanced
perspective. Mixing isn’t just about blending tracks, but producing a musical experience from blending
those tracks.

It’s equally important not to overproduce. Sometimes tracks are best left unprocessed, or you may need
to delete parts to create space for other parts. Be careful about falling in love with the elements that
make up a particular piece of music; keep your focus on what makes the strongest final result. Every
part should support the music. Although a dazzling guitar lick might be impressive, it might also be a
distraction.

Right Brain Versus Left Brain
Although it’s an oversimplification, you can think of the human brain as a dual-processing system. The left hemisphere
handles more analytical tasks, while the right hemisphere is more about creative tasks and emotional
responses. This matters with mixing, because it’s difficult to switch at will between the two
hemispheres. For example, suppose you’re in a creative mode and your mix is progressing well. If a
technical glitch occurs, you have to switch over to analytical mode and begin troubleshooting. When
you return to the mix, the magic is gone—the glitch stuck you in left-brain mode.

In a conventional studio situation where the engineer handles the left-brain tasks, the artist can stay in
right-brain mode because the engineer is taking care of the details, and the producer works to integrate
the two. Performing all these functions by yourself is a major challenge, but not an insurmountable one.

How to Stay in Right-Brain Mode


One of the best ways to stay in right-brain mode is to make left-brain activities second nature, so you
don’t have to think about them. Here are some tips.

• Learn keyboard shortcuts. It’s less effort to hit a couple of keys than to locate an area on the
screen, move your mouse to it, go down a menu, select an item, etc.
• Create macros. Studio One can create Macros, which combine strings of keyboard commands
into a single command (Fig. 1.1). This can save a lot of time.

Figure 1.1 It may seem difficult to create a Macro, but Studio One includes several default Macros, and the
documentation explains the process in a non-intimidating way.

• Know how to show/hide windows. This is a special case of keyboard shortcuts. Hiding
windows you aren't using reduces screen clutter. F2 shows/hides the Edit view, F3 the Mix
Console, F4 the Inspector, and F5 the Browser. Additional show/hide options apply to functions
like the Record Panel (Shift+Alt+R).
• Take advantage of color. The brain decodes colors and images more easily than words. My
vocal tracks are green, guitar tracks blue, etc. so they’re easy to pick out from lots of tracks.
Consider making the current track that’s being edited a brighter color, so that your eye jumps to
it. Color is particularly helpful in the Console’s narrow view (Fig. 1.2).

Figure 1.2 Note how much easier it is to pick out the guitar tracks colored blue (top) than with a
monochromatic coloring scheme (bottom).

The Importance of “Feel”


Some older recordings, created under technically primitive conditions, still had a great “feel” that made
you want to listen to them over and over again. Although a musician’s performance accounts for most
of that feel, a mix can also be a performance—especially when mixes were done on large consoles with
different people “playing” different faders. The same principle holds true today.

Some producers prioritize feel, and aren’t concerned about minor technical errors or musical glitches.
Others seek perfection by recording parts over and over again, or splicing together bits from multiple
takes. Both approaches are valid, but avoid tilting too much in one direction. Some musicians are so
self-critical they never finish a mix, but don't be so forgiving that you let issues slip through that you'll
regret later. One of the great aspects of working with producers is their ability to pinpoint what can be
improved in your music. Always ask yourself what can make the mix better, but realize that getting too
obsessed with detail can take the life out of a mix. It’s a fine line that becomes clearer with experience.

Technology can help thanks to automation (Chapter 12), which remembers your mixing moves and
makes it possible to “Save As...” different mixes. The initial mixes may have an energy later mixes
don’t have, so save often. Starting with Studio One 5, you can also save different mix “Scenes” or
combine different parts from different mixes—the first verse from one mix, the chorus from another,
and so on. Import the mixes into the Arrange view, cut and paste the appropriate sections, then export
the final result.

While mixing, it’s even possible to put some of the feel back into parts, as covered in Chapter 5.

The Arrangement’s Importance


Mixing is your last chance to alter the arrangement. The ability of one person to write, play, produce,
record, master, and even duplicate music is a fairly recent development; traditionally, music has been a
collaborative process. A trusted associate who can give honest, objective feedback is invaluable. If
that’s not possible, you need to figure out how to provide some of the objectivity and detachment a
producer provides. It’s not easy, but here are some recommendations on how to look at your
arrangement.

Make Sure Every Part Serves the Song


You may be really proud of a particular riff, but does it serve the song? The main lesson I learned from
my studio musician years is that with vocal-based music, everything exists to help the singer tell the
story. Your licks are there only to make the lead vocal more effective.

Writing a part without considering the song’s context can be a problem. I once came up with a lyrical,
melodic bass part for a verse while waiting for the engineer to get a snare drum sound. I thought it was
a really good part. Unfortunately when mixed with the vocal, it was distracting. So I ended up playing
an ultra-simple part anyone could play—but the simpler version contributed more to the song.

Build to the Moments of Impact


Sometimes vocalists double their vocals to create a “bigger” sound. However, that reduces the vocal’s
potential intimacy if layering the two vocals obscures some vocal nuances. Mute the doubled part until
it’s really needed (e.g., to add extra emphasis to a hook when it reappears), or for a big chorus.

Another example is dropping out instruments, so that they have more impact when they return. DJs
excel at this—they'll take the kick drum and bass out for a while, and then when those instruments
return, the drama is palpable. If the bass is absent during the intro, it will have more impact when it
finally enters. I also like to mute drums in the middle of a song for a few measures and let the vocals
and other instruments carry the tune for a while. This adds an element of tension that’s released when
the drums return.

Tempo changes can also be a big factor in emphasizing the tension and release of music. Chapter 13
covers Studio One’s ability to add tempo changes after the fact, to a final, mixed track.

The Mute Button’s Importance


One of the console’s most important buttons is the mute button, which silences a Channel momentarily.
When mixing, I use it to find out whether a part is essential or not. Taking out a part makes the
remaining parts more important, and provides contrast (which will more likely hold the listener’s
attention) as parts weave in and out of the arrangement.

Fewer parts also simplify the mixing process. If there are only two tracks in a song, like a singer and
rhythm guitar, there’s not much to mix. But if there are a zillion tracks, trying to find the right balance
becomes far more difficult. Fewer tracks help a song “mix itself.”

The mixing process is your last chance to be uncompromisingly honest. If something doesn’t work
quite right, get rid of it—regardless of how clever it is, or how good it sounds on its own.

You Have 10 Seconds to Grab Someone’s Attention


The intro will make or break your mix because you have to hook the listener immediately. This was
even true decades ago, when radio station DJs would go through records by playing the first 10
seconds. If something didn’t grab them...next.

Here’s a test for intros. Picture an office party filled with a variety of people, from the new mailroom
guy to upper management. They’re all a bit tipsy and chatting away, while some streaming service
(whose quality isn’t great) provides background music. Now imagine that your song starts playing.

How do the people react? Is there something in the first few seconds to grab their attention and keep it?
Do they stop talking and listen? Do they listen for the first few bars, then go back to conversing? Do
they ignore it entirely? Think of your music in the context of a playlist. It has to be able to segue from
anything to anything, appeal to short attention spans, and be different. Also remember a listener’s first
exposure may be on the internet—where someone else’s music is only a click away.

Think About Your Audience


Songs were once honed on the road, and recording’s goal was to capture that magic in the studio. Now
songs are more often created in the studio, and re-created on the road. As you mix a tune, always
imagine an audience is listening. It will influence how the song develops.

One musician borrows his daughter’s dolls and stuffed animals upon starting a mix, and sets them up so
they’re staring at him. He claims it’s like the initial audience environment at a bar—expressionless and

Page 18
bored, which reminds him to think about what would get an audience to react in some way. A mix can’t
just play your song: it has to sell your song.

What’s Your Intended Result?


All this advice assumes that you want to connect with an audience. But is that important to you?
Although it’s great to communicate through the language of music, creating music can also be about
self-discovery. Even if I were told no one else would ever hear my songs, I'd still make music because
the process of making music is magical.

I believe there are two main ways to be successful with your art. One is to be totally true to yourself,
and hope that the music you make strikes a chord in others. This creates the brightest stars with the
longest careers, because what they’re doing comes naturally—they don’t have to fake it. If the music
you make doesn’t “fit” with a mass audience, at least your friends will probably enjoy it because they
know who you are already, and will enjoy seeing another aspect of your personality.

The other option is to study carefully how past hits were arranged, pick lyrical subjects with wide
appeal, and craft mixes designed to appeal to specific audiences. That's fine, and can lead to a
comfortable, well-paying career. But it’s not effortless; it still requires a strong creative spark, and
being brutally honest about whether a piece of music has potential for mass appeal (spoiler alert: most
of the time, the answer is “no”).

Perhaps combining the two approaches yields the best results. Let the artist in you create, then let the
hard-headed, objective part of you do the mix. While this section has concentrated on what it takes to
become more objective, I don’t want to trivialize the creative factor. As in so many aspects of life,
sometimes a synthesis of opposites creates the best results. Go ahead, love your music—but don’t be in
love with it if you want to remain objective.

Okay, enough opinions…let’s get technical.

Key Takeaways
• If you work by yourself, you’re not only the artist, but also the engineer and producer. These are
very different skills.
• Make repetitive tasks second nature (e.g., use keyboard shortcuts), so that you can concentrate
on being creative.
• Use color to make it easier to identify tracks in a full mix.
• Be careful not to prioritize perfection over feel.
• Mixing is the last chance to change the arrangement. Make sure every part serves the song.
• Mute tracks to determine whether they’re necessary or not. Less can indeed be more.
• The beginning is crucial—you have 10 seconds to grab someone’s attention.
• Think about your music’s intended result, and work toward making it happen.

Chapter 2 | Technical Basics

The more you know the technology involved in mixing, the more easily you can take advantage of the
process. However, always remember that the most important—and technically complex—piece of gear
used in mixing is the human ear. Let’s start there.

Hearing and Frequency Response


Mixing’s goal is to produce a balanced, even sound with a full, satisfying bass, a well-defined
midrange, and sparkly (not screechy) highs. Equalization, which alters frequency response, helps make
this possible.

Frequency response defines how a system records or reproduces the audible frequencies from 20 Hz to
20,000 Hz. (Hz, short for Hertz, measures the number of cycles per second in an audio waveform; 1
kHz or kiloHertz equals 1,000 Hz.) On a frequency response graph that shows levels at different
frequencies, the Y-axis (vertical) displays the audio’s level, while the X-axis (horizontal) indicates
frequency (Fig. 2.1).

Figure 2.1 This graph shows the frequency response for an audio interface. The bass response drops off
somewhat in the ultra-low bass frequencies.

The audible range is further divided into bands. There’s no official definition of the frequencies each
band covers, but the following is close enough:

• Bass: Lowest frequencies, typically below 200 Hz


• Lower midrange: 200 to 500 Hz
• Midrange: 500 Hz to 2.5 kHz
• Upper midrange: 2.5 kHz to 5 kHz
• Treble: 5 kHz and higher

Although these guidelines are approximate, they’re still useful. For example, bass guitar and kick drum
occupy the bass range. Vocals are in the midrange and lower midrange. Percussion instruments like
tambourine have lots of energy in the treble region.

Although electronic devices can have a flat frequency response, no mechanical device does. A
speaker’s response falls off at high and low frequencies. Guitar pickup response falls off at high
frequencies, which is why guitar amps often boost the upper midrange. Pothing’s nerfect.

The Problem with Ears


Your ears’ limitations become more pronounced if you don’t take care of your hearing (e.g., listen to
loud music for prolonged periods of time, do deep sea diving, drink a lot of alcohol, etc.). Even flying
can affect your ears’ high frequency response. I’ll wait at least 24 hours after flying before mixing or
mastering; the few times I’ve disregarded that rule, mixes that seemed fine sounded too bright the next
day. And no matter how well you take care of your hearing, age takes a toll.

But even healthy, young ears aren’t perfect. The ear has a midrange peak and does not respond as well
to low and high frequencies, particularly at lower volumes. The response comes closest to flat at
relatively high listening levels. The Fletcher-Munson curve (Fig. 2.2) illustrates this phenomenon.

Figure 2.2 The Fletcher-Munson curve shows that signals need to be at different levels to be perceived as
having the same volume. The intensity needs to be much higher at lower and higher frequencies to be perceived
as having equal volume.

It’s crucial to care for your hearing. In my touring days when I’d often play 200 days out of the year, I
wore cotton in my ears. While not as effective as present-day earplugs, I feel it saved my hearing.

I often carry the cylindrical foam ear plugs available at sporting goods stores, and wear them while
walking city streets, at clubs, when hammering or using power tools, or anywhere my ears are going to
get more abuse than someone talking at a conversational level. I make my living with my ears, so
taking care of them is a priority.

Tip: Schedule an appointment with an audiologist at least once every year or two. Some hearing issues
that lead to deafness can be prevented if caught in time.

Optimum Mixing Levels


Loud mixes may be exciting, but loud, extended mixing sessions are tough on the ears. Mixing at low
levels keeps your ears “fresher” and minimizes ear fatigue; you’ll also be able to discriminate better
among subtle level variations. However, as mentioned, your ears' response changes at different levels.
Although I start a mix at low levels, the volume increases over time to a consistent, comfortable level—one
where I can listen for hours on end with zero listening fatigue. That becomes the benchmark level.

After getting a good mix, I then check the mix again at low levels, and finally crank it up for a “let’s
turn this sucker up” reality test. If at loud levels the mix sounds just a little too bright and boomy, and if
at low levels it sounds just a bit bass- and treble-light, that’s about right. When a mix is satisfactory at
all levels, it will “translate” well over different playback systems—assuming there aren’t problems with
your listening environment, so let’s cover that next.

Monitoring and Acoustics


It’s almost impossible to do a good mix if your monitoring system isn’t honest about the sounds you
hear. If a mix sounds great on your system but falls apart when played elsewhere on good systems,
something’s wrong with your monitoring process. The problem could be the speakers, the room
acoustics, your hearing, or a combination of these factors.

The Room
The room in which you monitor has an influence on how you mix. For a real shocker, set up an audio
level meter (several smartphone apps can do the job reasonably well, such as Decibel X for iOS and
Android), sit with it in the middle of your room, run a sine wave test tone oscillator through the
speakers, sweep it through the audible frequency range, and watch the meter. Unless you have great
monitors and an acoustically tuned room, that meter will fluctuate like a leaf in a tornado. Monitor
speakers don’t have perfectly flat responses, but they look ruler-flat compared to the average
acoustically untreated room.
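
If you don't have a test oscillator handy, you can generate your own sweep file. Here's a minimal sketch in Python (assuming NumPy and SciPy are installed; the filename and levels are arbitrary) that writes a 30-second logarithmic sine sweep from 20 Hz to 20 kHz you can play through your monitors.

import numpy as np
from scipy.signal import chirp
from scipy.io import wavfile

SR = 44100        # sample rate in Hz
DURATION = 30     # sweep length in seconds

t = np.linspace(0, DURATION, SR * DURATION, endpoint=False)
sweep = chirp(t, f0=20, f1=20000, t1=DURATION, method='logarithmic')

sweep *= 0.25     # leave about 12 dB of headroom so the tone is safe to play loudly
wavfile.write("sweep_20Hz_20kHz.wav", SR, sweep.astype(np.float32))

Play it back (or loop it) and listen, or watch the meter, as the sweep moves through the low end. The peaks and dips you observe are mostly your room, not your speakers.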

You don’t even need a level meter to conduct this test: Play a steady tone around 5 kHz or so, then
move your head around. You’ll hear obvious volume fluctuations. These variations occur because as
sound bounces off walls, the reflections become part of the overall sound. This creates signal
cancellations and additions.

Another example of how acoustics affects sound is when you place a speaker against a wall, which
seems to increase bass. This is because any sounds emanating from the rear of the speaker bounce off
the wall. These reflections can reinforce the main wave coming from the speaker’s front.

Because the walls, floors, and ceilings all interact with speakers, it’s important to place speakers
symmetrically within a room. Otherwise, if (for example) one speaker is 3 feet from a wall and another
10 feet from a wall, any reflections will be wildly different and affect the response.

The subject of acoustical treatment deserves a book in itself, but here are some quick tips:

• Avoid walls and corners. Try not to place speakers right in front of a wall or in a corner.
• Don’t sit close to walls when mixing. Avoid a listening position (your ears) closer than 3 feet
(1 meter) from any wall. My mixer, controller, and computer keyboard sit on tables within the
front third of the room. Doing so reduces reflection buildup of peak
frequencies, and also frees up the wall space for a combination of shelves and acoustical
treatment. The “middle of the room” approach also makes it easier to deal with the cables
running among your gear’s rear panels.
• Center your setup’s left and right sides. Place your left and right speakers an equal distance
from their respective walls. This produces a more balanced mid- and low-frequency response,
and preserves stereo imaging.
• Keep the mixing area uncluttered. Avoid large objects (such as lamps or decorations—sorry,
the lava lamp has to go) near the studio monitor and listening position.
• Use diffusion and absorption. Diffusers and sound-absorbent material in a room help prevent
reflections from entering the listening space. Carpeting minimizes reflections from hard floor
surfaces.
• Decouple (isolate) the speaker base from where it sits. A thick piece of neoprene, or even a
thick mouse pad, can help keep the speaker's vibrations from traveling into a stand or table and causing it
to vibrate. I use Primacoustic's Recoil Stabilizers (Fig. 2.3).

Figure 2.3 The Primacoustic Recoil Stabilizer isolates speakers from the surfaces on which they sit.

• Minimize reflective surfaces between the speakers and your ears. If the speakers are on a
table along with gear like a mixer, place the speakers to the side of the mixer, and on small
stands. You don’t want waves reflecting off the table (or the mixer) and hitting your ears.
Placing sound-absorbing material (like a thick rug) on top of the table can help.
• Place speaker fronts in front of monitor screens. If your speakers are to the side of your
computer monitors, make sure each speaker’s front is in front of the monitor’s screen. Being
behind the screen or flush with it may affect the sound quality.
• Reality test with headphones. Test your mixes occasionally with high-quality, circumaural
headphones (i.e., they go over your ears, not sit on top of them) as a reality check, because
headphones make room acoustics irrelevant and the circumaural design keeps out any external
noise. However, for mixing on headphones over extended periods, most people prefer supra-
aural (on-ear) types. Sometimes you can even hear bass more accurately with headphones than
with near-field monitors. However, avoid headphones designed for consumers, because they
often “hype” the highs and lows to give deep bass and sizzling highs. (Careful: It’s easy to blast
your ears with headphones and not know it. Watch those volume levels—and be real careful
about accidentally setting up a feedback loop, because a loud enough squeal could cause
permanent hearing damage.)

This is basic advice. Hiring a professional studio acoustics consultant to “tune” your room with bass
traps and proper acoustic treatment could be the best investment you ever make in your music. Every
room is different, so solutions differ (Fig. 2.4).

Figure 2.4 Acoustical treatment can improve room acoustics dramatically. Note the bass traps in the corners,
and additional treatment on the ceiling. (Photo courtesy Primacoustic.)

Some people try to compensate for room anomalies by inserting a graphic equalizer with as many
bands as possible between their mixer or audio interface and speakers, then “tuning” the equalization to
adjust for frequency response variations. However, if your position deviates at all from the sweet spot
(the place at which the room acoustics were tuned), the frequency response will change. Also, heavily
equalizing a poor-quality acoustical space simply gives you a heavily equalized, poor-quality acoustical
space. Like noise reduction, which works best on signals without much noise, room tuning works best
on rooms that don’t have serious response problems.

Some options are better than others; the calibration technology from Sonarworks (Fig. 2.5) is one of the
best. It takes about 20 minutes to calibrate your studio, but the result is a better mixing environment. A
plug-in compensates for the room acoustics while you mix. Before exporting the final mix, you bypass
the plug-in. Note that Sonarworks also has calibration curves for almost all popular studio headphones,
so you can mix on headphones while Sonarworks compensates to provide a flat frequency response.
However, these calibration curves represent an average. Different headphones, even of the same make
and model, can have significant response variations. If you’re concerned about maximum accuracy,
Sonarworks provides a service where they’ll generate a curve for your specific headphones.

Figure 2.5 The Sonarworks system compensates for room acoustics and headphone response anomaly issues.
The plug-in then applies a correction curve, so you mix as if you were in a room with correct acoustics.

Near-Field Monitors
The traditional, big studios of the late 20th century had large monitors mounted at a considerable
distance (6 to 10 ft. or so) from the mixer, with the front flush to the wall, and an acoustically treated
control room to minimize response variations. The sweet spot for the best monitoring position was
where the mixing engineer sat at the console. In smaller, project studios, near-field monitors have
become the standard way to monitor (Fig. 2.6).

Figure 2.6 The PreSonus Eris E5 XT is a near-field monitor with a 5" low-frequency driver.

These relatively compact speakers sit around 3 to 6 feet from the mixer’s ears, with the head and
speakers forming a triangle (Fig. 2.7).

Figure 2.7 This top view of the mix position shows how the monitor speakers and the listener form an
equilateral triangle.

Two-way monitors incorporate a tweeter to reproduce high frequencies, and a
woofer to reproduce mid and low frequencies, in one enclosure. These are also called high-frequency
and low-frequency drivers, respectively. The full frequency range comes together at the acoustic axis
point, which is between the tweeter and woofer. This should be at ear level in the listening position;
angle the studio monitors if needed so the acoustic axis point is at your ears.

Near-field monitors reduce (but do not eliminate) the impact of room acoustics, because the speakers’
direct sound is louder than the reflections from the room surfaces. As a side benefit, near-field monitors
don’t need to produce a lot of power because they’re close to your ears.

However, room placement remains an issue. Being too close to the walls will boost the bass artificially.
Although you can compensate somewhat with EQ (or possibly controls on the speakers themselves—
see later), the build-up will differ at different frequencies. High frequencies are not as affected because
they’re more directional.

Placing speakers about 6 ft. away from the wall in a room where the longest wall is about 18 feet
(hopefully the speakers can also be reasonably far away from the side walls) works well, but not
everyone has that much room. One option is to mount the speakers a bit away from the wall on the
same table holding the mixer, and on stands so they’re not sitting on the table itself. After creating a
direct path from speaker to ear, pad the walls behind the speakers with as much sound-deadening
material as possible.

Anatomy of the Near-Field Monitor


Near-field monitors are available in various sizes, at numerous price points. Most are two-way designs,
with (typically) a 5", 6", or 8" woofer and smaller tweeter. A three-way design adds a separate
midrange driver.

Tip: A three-way design is not necessarily better than a two-way design. A well-designed two-way
system is better than a poorly designed three-way system.

Although larger speaker sizes may be harder to fit in a small studio, the increase in low-frequency
accuracy can be substantial. If your room is big enough to accommodate an 8" monitor, it may be worth
the extra expense when working with bass-heavy music like hip-hop or EDM (electronic dance music).

There are two main monitor types, passive and active. Passive monitors consist of only the speakers
and a crossover (the filter that splits the input to the low- and high-frequency speakers), and require an
outboard amplifier. Active monitors incorporate the crossover and the amps needed to drive the
speakers from a line-level signal. With active monitors, the power amp and speaker have hopefully
been tweaked into a smooth, efficient team. Issues such as speaker cable resistance become irrelevant,
and protection can be built into the amp to prevent blowouts. Powered monitors are usually bi-amped
(i.e., a separate amp for the woofer and tweeter), which provides a cleaner sound and allows the
manufacturer to optimize the crossover points and frequency response for the speakers being used.

If you hook up passive monitors to your own amps, make sure the amps have adequate headroom. Any
clipping generates lots of high-frequency harmonics. Sustained clipping can burn out tweeters.

Rear-Panel Controls
Because compensating for room acoustics can be a significant issue in today’s studios, most
manufacturers offer equalization options to compensate for acoustics-related problems (Fig. 2.8).
Different model speakers have different control complements; the following are typical.

Figure 2.8 These controls on the Eris XT’s rear panel offer multiple tone-shaping options.

Low-Shelf Cutoff

This compensates for wall coupling where low-frequency sounds from the speaker’s rear reinforce
sounds coming from the speaker’s front. However, you may not want to compensate for this if an artist
wants to hear more bass while recording than would be desirable for a final mix or master. Leaving the
emphasized low frequencies alone lets you mix with the correct amount of bass at the mixer, while
giving artists what they want to hear.

Mid-Frequency Control

This control can de-emphasize or emphasize the vocal range. If the room throws the highs and lows out
of balance, you may need to tweak the midrange to compensate. You can also use this for the same kind
of mind trick as alluded to above for bass: for example, if the singer wants to hear more vocals than
would be appropriate for a final mix, boost the midrange a bit at the speaker, not in the mixer.

High-Frequency Control

High-frequency controls correct for mixing environments that are not bright enough or too bright, but
can also be about adjusting to taste.

Acoustic Space Switch

This does bass compensation for three speaker placements: middle of room, close to wall, or corner
placement.

A Caution about Rear-Panel EQ Settings

It takes time for the ear to acclimate to EQ and level changes. It’s best to start off with flat settings and
get to know your speakers and your room. Listen to music with which you’re familiar, and preferably,
have heard over quality monitors in a studio with good acoustics so you have a frame of reference. Try
different positions in your room and speaker placements before making EQ adjustments. After finding
the optimum position, then adjust the EQ for the best listening and monitoring experience.

What’s the “Best” Monitor?


You’ll see endless discussions on the net as to which near-field monitors are best. Actually, the answer
may be the monitor that compensates best for your imperfect listening space, and imperfect hearing
response.

I’ve been fortunate enough to hear my music over some hugely expensive systems in mastering labs
and high-end studios, so I know what my music is supposed to sound like, and can judge my studio’s
sound accordingly. If you haven’t had similar listening experiences, book 30 minutes or so at a really
good studio (you can probably get a price break since you’re not asking to use a lot of the facilities),
and bring along a few of your favorite recordings. Listen to them and get to know what they sound like
on a really good system. Compare what your studio sounds like to that standard, and compensate if
needed to approach that ideal.

Tip: When comparing two sets of speakers, if one is even slightly louder than the other, people often
think the louder one sounds “better.” To make a valid comparison, match the speaker levels. This is
another situation where a smartphone’s sound level meter can come in handy.

Setting Levels
As your ears’ frequency response varies with level, it’s good practice to maintain a consistent level
when mixing. You should still test a mix at various levels to hear whether it translates well, but a
consistent monitoring level helps promote consistent mixes.

Appendix B includes information on calibrating your speaker levels, using K-System or LUFS
metering, to help ensure consistent monitoring levels.

Key Takeaways
• Above all, take care of your hearing. That’s your one piece of gear with no warranty and no
return policy, and can’t be replaced at any price.
• There’s a learning curve—learn the room acoustics, and if necessary, how to compensate for
deficiencies in your monitoring system.
• Near-field monitors can reduce the effect of room acoustics.
• Speaker placement is crucial. Moving a speaker just a few inches can change the perceived
response. Avoid placing speakers against walls or in corners.
• One of the best investments for your music is treating your room acoustically.
• To compensate for acoustics issues, the tone controls mounted on the rear of your monitor
speakers may help.
• Once you’ve found a good speaker location that’s unlikely to change, get to know the sound so
you can compensate mentally for any response anomalies.
• Make sure that the final mix sounds good over headphones as well as speakers.
• Before signing off on a mix, test it over multiple systems and make sure it “translates” well over
all of them.

Chapter 3 | Mixing with Computers

Mixing used to be done with hardware mixers, but Studio One can perform the functions of a
traditional mixer inside your computer—hence the term, inside the box (ITB) mixing.

Mixer Architecture
Virtual, software-based mixers have a layout that’s similar to their hardware ancestors. There are three
main, independent places to adjust levels (Fig. 3.1).

Figure 3.1 The mixer Channels (colored gray), Buses (colored blue), and Main Bus (colored red) represent the
main level-setting elements in a virtual mixer. Studio One also includes a bus variation, called an FX Bus, that’s
optimized for using effects.

• Channels correspond to the audio coming from particular tracks. For example, the audio from
each track of a 16-track project will appear over 16 channels. Each Channel has a fader (linear
volume control) that you can adjust to determine the Channel’s level. This is the essence of
mixing—creating a pleasing balance among the various Channels.
• Buses are specialized channels that receive the outputs from other Channels. Most of the time,
all Channels terminate eventually in the Main Bus, as do Bus outputs. You use the Main Bus
fader to adjust the overall mix level, like a master volume control. (Studio One’s FX Channels
are buses that are optimized for using effects. See “Bus Basics” for details.)
• The Main Bus works in conjunction with your audio interface to send the final mix to
headphones, speakers, and other monitoring devices (e.g., your computer can send audio over
Bluetooth to a Bluetooth receiver).

Mono vs. Stereo Tracks
Early hardware consoles had mono inputs and mono channels. When stereo became more common,
mixers evolved to include stereo inputs. A channel’s fader adjusted the level for the left and right
channels simultaneously.

You can switch Studio One’s Channels to mono or stereo operation. The mono vs. stereo distinction
won’t impact your workflow much, although it may impact plug-in effects (see Chapter 4).

Bus Basics
The most common bus is a mixer’s Main (also called master) Bus. All the Channel outputs feed into
this bus, which includes a master fader for raising or lowering the combined level of all the Channels.

Another common bus example is a reverb bus, which can add a reverb effect to several Channels at
once. This bus application works in conjunction with a Channel’s Send controls. These send a
selectable amount of audio from a Channel to the Bus. For example, if you wanted to add reverb on
some (but not all) Channels, you would create a Bus, insert a reverb effect in the Bus, and send audio to
the Bus from the Channels to which you want to add reverb. The reverb Bus output would join the
Channel outputs (which contain the non-reverberated signal) in feeding the Main Bus (Fig. 3.2).

Figure 3.2 There are FX Channel buses with effects (outlined in yellow) for Reverb and Saturation. Several
Channels are using Send controls (outlined in orange) to send signal to the FX Channels. The Drums are going
to both reverb and saturation, Bass to saturation only, and African Vox and Giant Thud to reverb only. The Lead
Guitar isn’t sending any signal from its Channel. The Channel outputs and FX Channel outputs terminate in the
Main Bus; the area outlined in white shows Channel and Bus destinations.

The distinction of buses being separate from channels is a holdover from the days of physical consoles.
There’s no technical reason why virtual mixers have to differentiate between the two. However, many
people find this a logical distinction, because the two provide different functions.

Send controls can be either pre-fader or post-fader. If pre-fader, then the amount of sent signal remains
constant, regardless of the Channel fader’s setting. If post-fader, then the amount of sent signal follows
the Channel fader’s level (Fig. 3.3).

Figure 3.3 The pre/post fader buttons are circled. Clicking on the button toggles between the two states: pre-
fader position (circled in orange), and post-fader position (circled in blue).
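
If it helps to see the signal flow spelled out, here's a deliberately oversimplified sketch in Python. It's purely conceptual (this is not how Studio One is implemented), and the names and numbers are made up for illustration. It models two Channels, a post-fader Send feeding a reverb Bus, and everything summing at the Main Bus.

# Conceptual model of Channels, Sends, an FX Bus, and the Main Bus.
# Each "signal" is just a single number, to keep the idea visible.

def channel_output(signal, fader):
    return signal * fader

def send_amount(signal, fader, send_gain, pre_fader=False):
    # Pre-fader sends ignore the Channel fader; post-fader sends follow it.
    source = signal if pre_fader else signal * fader
    return source * send_gain

drums, bass = 1.0, 0.8               # incoming signal levels
drums_fader, bass_fader = 0.7, 0.9   # Channel fader settings

reverb_bus_in = send_amount(drums, drums_fader, send_gain=0.5)  # post-fader send
reverb_bus_out = reverb_bus_in * 1.0   # FX Bus fader (the reverb itself is omitted)

main_bus = (channel_output(drums, drums_fader)
            + channel_output(bass, bass_fader)
            + reverb_bus_out)
print(main_bus)   # dry drums + dry bass + reverb return, summed at the Main Bus

Notice that pulling drums_fader down to zero also silences the post-fader send to the reverb, while setting pre_fader=True would keep feeding the reverb regardless of the fader. That's the practical difference between the two Send modes shown in Fig. 3.3.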

Channel Strips
A Channel incorporates more controls than just a master fader. A panpot (short for panoramic
potentiometer) places a mono Channel’s signal anywhere in the stereo field from left to right as you
move the panpot slider from full left to full right.

With stereo Channels, the panpot acts like a balance control. At center, the left and right channels have
equal levels. Sliding the panpot to the left turns down the right channel, while sliding it to the right
turns down the left channel.
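
Studio One's exact pan law isn't something we need to worry about here, but if you're curious how a panpot might derive its left and right gains, a common approach is equal-power panning, where both sides sit about 3 dB down at center so the perceived loudness stays roughly constant across the stereo field. A minimal sketch (Python, illustrative only):

import math

def equal_power_pan(pan):
    """pan ranges from -1.0 (full left) to +1.0 (full right)."""
    angle = (pan + 1.0) * math.pi / 4.0        # maps pan to 0..90 degrees
    return math.cos(angle), math.sin(angle)    # (left gain, right gain)

print(equal_power_pan(0.0))   # centered: both sides at ~0.707 (about -3 dB)
print(equal_power_pan(-1.0))  # full left: (1.0, 0.0)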

Hardware mixer channel strips often include signal processors like EQ and dynamics. Studio One’s
virtual mixer takes an à la carte approach based on plug-ins (see Chapter 4), because you can insert
any kind of EQ, dynamics, or other plug-ins you want to add into the Channel.

However, Studio One also includes two channel strip plug-ins that combine multiple processors. The
Channel Strip plug-in (Fig. 3.4) is basic, and intended for quick tweaks. It includes a low-cut filter,
dynamics, and a three-band EQ. (Later chapters describe what dynamics and EQ do, and how to use
them when mixing.)

Figure 3.4 The Channel Strip plug-in is basic, but effective. It requires very little CPU power.

PreSonus makes hardware mixers that include a channel strip called the Fat Channel; it includes several
processors. Because of its popularity, they’ve also translated the Fat Channel’s functionality into a
plug-in (Fig. 3.5) for Studio One.

Figure 3.5 The Fat Channel includes five signal processors: a high pass filter, noise gate/expander,
compressor, EQ with four parametric EQ stages (as well as high and low shelving EQ), and an output dynamic
range limiter. It also can load different compressor types and equalizers—they’re like plug-ins within a plug-in.

Tech Talk: Mute and Solo Buttons

A channel strip’s Mute and Solo buttons are important for mixing. Clicking on a Channel’s mute button
silences that Channel. Clicking on a solo button, which is mostly for diagnostic purposes, mutes all
Channels for which solo is not engaged, so you can hear a track without the distraction of hearing other
tracks. You can solo multiple tracks at once; however, there's also a special solo mode called exclusive
solo. When enabled by Alt+Clicking a solo button, soloing a Channel mutes all other Channels, even if
some of them are soloed.

About Grouping
Grouping Channels together into a subgroup (also just called a group) can simplify the mixing process,
and add flexibility.

There are two main types of groups. One sends the outputs of several Channels to a dedicated Bus
instead of the Main Bus, but not with Send controls—instead, the outputs go directly to the subgroup
Bus. Then the subgroup Bus output usually feeds the Main Bus.

A classic subgroup application is mixing together a drum set’s individual outputs—kick, snare, toms,
hi-hat, cymbals, etc. You can adjust each output individually with their respective Channel faders, but
also alter the subgroup’s level (or add processing to the subgroup) that affects all drum sounds
simultaneously (Fig. 3.6). Changing the subgroup level alters all the drum levels, which is easier than
changing each Channel level individually—this would complicate maintaining the correct balance.

Figure 3.6 The drum Channel outputs (outlined in orange) don’t go to the Main Bus, but to the Drums subgroup
Bus (toward the right, with the Bus name outlined in yellow).

Other musical elements that lend themselves to subgrouping are multi-miked piano (e.g., left, right,
room ambiance), background vocals, horn sections, choirs, and so on.

Grouping in the Virtual World

The preceding describes the traditional subgroup approach—now let’s get modern. One of the
advantages of using a subgroup with Studio One’s virtual mixer is that unlike hardware, where each
fader took up space, once the individual Channels are set up you can hide them (Fig. 3.7).

Figure 3.7 Click on the Channel List (outlined in red) to see a list of all the Channels in use. By clicking on a
Channel’s dot (outlined in orange), you can show or hide Channels. In this image, the individual drum Channels
are hidden. You see only the Drums Bus, where altering its fader changes the level of all the drums.

Unless you have a monitor with infinite screen space, replacing several individual faders with a single
subgroup fader can simplify mixing. If you need to make finer adjustments, show the hidden Channels
long enough to tweak their levels, and then hide them again.

Also, note that subgroups can feed subgroups. For example, a subgroup of male voices, and another of
female voices, could be part of a choir subgroup. The male voices would go to one Bus, the female
voices to another Bus, and then the two Buses would go to a third Bus (the Choir Bus). Changing the
level at this third Bus then changes the male and female voices simultaneously, but you can also set the
levels of the male and female voices independently.

Unlike analog mixing, where each submix causes a subtle signal deterioration, digital mixers let you
feed Buses into Buses without any loss of fidelity.

Another Way to Group

Not all grouping requires submixing into a Bus. Studio One can group faders, which links controls so
that moving one control moves all the controls in the virtual group. It’s as if you bolted all the faders
together so that moving one fader moves all of them. To do this, Shift+Click or Ctrl+Click to select
multiple Channels. Now, varying one fader will control the other faders for as long as the Channels
remain grouped. (To adjust one fader without dissolving the group, hold Alt while moving the fader.
When you release Alt, it will become part of the group again.)

Grouping applies to other Channel parameters: mute, solo, monitor, Send level, Send pre/post fader,
Channel output, automation, and color. Pan, Send pan, and polarity are not grouped. Also, if you insert
a send or effect in one Channel that’s part of a group, it will be inserted in all Channels.

Virtual grouping is useful with send effects. Suppose several Channel Send Controls feed a master
reverb Bus, but you want to change Send levels on only two sends. Grouping the two Channel Send
controls together adjusts the sends for these two Channels, independently of the other sends.

Bus Grouping

Grouping Buses is as useful as grouping Channels. For example, suppose there are Buses for violins,
cellos, and violas. By grouping them, you can fade those all of those tracks in and out, or change their
levels, smoothly and equally.

Mixing is complicated enough without unneeded fader clutter. Grouping can simplify the
mixing environment, and lets you make multiple adjustments simultaneously. Get to know these techniques—
they’ll serve you well.

VCA Channels
A VCA Channel has a fader, but it doesn’t pass audio. Instead:

• The VCA Channel fader acts like a “remote control” for other faders. Assign a Channel fader to
a VCA fader, and now the VCA can control the Channel’s gain, without moving the associated
track fader.
• Assign multiple Channel faders to the VCA, and you can bring all their levels up and down
simultaneously (Fig. 3.8).
• You can adjust track faders independently in a group controlled by a VCA fader—the other
faders won’t follow along.

Figure 3.8 Note how the label below each fader says VCA 1. This means all their levels are being controlled by
the VCA 1 fader on the right.

This is similar to sending Channels to a Bus, and using the Bus fader to control their combined levels.
But VCAs take this further, because the same Channel fader can belong to different VCA groups. This
is helpful for projects with a high track count. For example, you could have individual groups for
violins, violas, and cellos—as well as a violins + violas group, and yet another group with all the
strings.

A classic reason for using a VCA fader involves send effects. Suppose several Channels (e.g.,
individual drums) go to a submix fader, and the Channels also have post-fader Send controls going to
an effect, such as reverb. With a conventional submix Bus, as you pull down the Bus fader, the faders
for the individual tracks haven’t changed—so the post-fader send from those tracks is still sending a
signal to the reverb Bus. Even with the Bus fader down all the way, you’ll still hear the reverb.

A VCA Channel solves this because it controls the gain of the individual Channels. Less gain means
less signal going into the Channel’s Send control, regardless of the Channel fader’s position. So with
the VCA fader all the way down, there's no signal going to the reverb (Fig. 3.9).

Figure 3.9 The VCA Channel controls the amount of post-fader Send going to a reverb, because the VCA fader
affects the gain regardless of fader position. If the drum Channels went to a conventional bus, reducing the bus
volume would have no effect on the post-fader Sends.
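
Here's the same scenario reduced to numbers (a conceptual Python sketch again, not Studio One internals), showing why the Bus fader can't silence a post-fader send, but a VCA can:

drums = 1.0
channel_fader = 0.8
send_gain = 0.5   # post-fader send feeding the reverb Bus

# Case 1: the drums feed a conventional submix Bus, and the Bus fader is pulled to zero.
bus_fader = 0.0
dry_out = drums * channel_fader * bus_fader       # 0.0 -> the dry drums are gone
reverb_send = drums * channel_fader * send_gain   # 0.4 -> but the reverb is still fed!

# Case 2: the drum Channel is assigned to a VCA, and the VCA fader is pulled to zero.
vca_gain = 0.0
effective_fader = channel_fader * vca_gain             # the VCA scales the Channel's gain
dry_out_vca = drums * effective_fader                  # 0.0
reverb_send_vca = drums * effective_fader * send_gain  # 0.0 -> the reverb goes away too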

Another use for VCA Channels is to control the gain of a Channel that includes level automation. If the
automation changes are exactly as desired, but the overall level needs to increase or decrease, offset the
gain by adjusting the Channel’s overall level with the VCA fader. This can be simpler than trying to
raise or lower an entire automation curve using the automation Trim control (described in Chapter 12).

Although VCA Channels aren’t essential to many workflows, if you know what they can do, a VCA
Channel may be able to solve a problem that would require a workaround with any other option.

Unique Aspects of Mixing with Digital Audio


Analog and digital mixing use different technologies to accomplish the same ultimate goal.
However when working with computer-based recording, you’ll encounter technical terms unique to
digital mixing. Understanding these makes it easier to optimize your setup.

The Two Kinds of Resolution


Recording resolution is the resolution with which an audio interface converts analog signals into digital
data. Currently, recording resolutions higher than 24 bits are irrelevant, due to the limitations of analog-
to-digital converter technology. Although you can record signals into your recording software with
higher resolution, it won’t improve the sound quality.

Tip: Most engineers record with 24-bit resolution, even though it takes up 50% more memory than 16-
bit resolution.
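
The math behind that figure is simple: file size scales directly with bit depth. A quick sketch (Python, assuming uncompressed stereo audio at 44.1 kHz):

def wav_size_mb(seconds, sample_rate=44100, bit_depth=24, channels=2):
    return seconds * sample_rate * (bit_depth / 8) * channels / 1_000_000

print(wav_size_mb(60, bit_depth=16))  # ~10.6 MB per minute
print(wav_size_mb(60, bit_depth=24))  # ~15.9 MB per minute (1.5 times as large)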

DAWs also have an audio engine resolution, which Studio One calls Process Precision (accessed
through the Studio One > Options > Audio Setup menu). This is the resolution the software uses to
process and mix the audio, and is independent of (and usually greater than) the recording resolution.
The audio engine needs greater resolution because a 24-bit piece of audio might sound fine by itself,
but when you change the signal (level, equalization, anything that requires calculations), multiplying or
dividing that 24-bit data might produce a result that can’t be expressed with only 24 bits. The principle
is the same as if you multiply, for example, 4 times 4. Although each number is only one digit, you
need two digits to express the result—16.

Unless the audio engine has enough resolution to handle these calculations, roundoffs occur—and
they’re cumulative, which can possibly lead to inaccuracies. As a result, your audio engine’s resolution
should always be considerably higher than the recording resolution.

Studio One offers a choice of 32- or 64-bit precision. 64 bits will give more accurate computations, but
use more CPU cycles. With most audio material, you will not hear a difference between the two
options. If possible, use 64-bit precision. But if your CPU is running close to its limits, 32-bit precision
will stress your CPU a bit less, and you probably won’t hear a difference in sound quality. (Note that
many early recording programs had much lower precision, which may be one reason why mixing “in
the box” received a negative initial reputation.)
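
You can see the cumulative-roundoff idea with a toy experiment. This Python/NumPy sketch illustrates the principle, not Studio One's actual engine: apply many small gain changes and their exact inverses in 32-bit and 64-bit floating point, and compare how far each result drifts from the original audio.

import numpy as np

rng = np.random.default_rng(0)
audio = rng.uniform(-1, 1, 10_000)           # stand-in for an audio buffer
gain_up, gain_down = 1.0371, 1.0 / 1.0371    # a gain change and its exact inverse

def round_trip(x, dtype, passes=1000):
    y = x.astype(dtype)
    for _ in range(passes):                  # many successive gain calculations
        y = (y * dtype(gain_up)) * dtype(gain_down)
    return y

err32 = np.max(np.abs(round_trip(audio, np.float32) - audio))
err64 = np.max(np.abs(round_trip(audio, np.float64) - audio))
print(err32, err64)   # the 32-bit error is many orders of magnitude larger

Both errors are far below anything you'd hear from a single operation, which is why 32-bit precision is usually fine, but the errors do compound, so extra engine precision is cheap insurance.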

The Two Types of Audio—and How to Save CPU Power


An audio track plays back a file, like a WAV or AIFF format file, that contains audio data. It’s
conceptually similar to a track in a tape recorder, where the tape contained magnetic particles that
represented audio data.

However virtual instruments can now be part of the mix. These are sound generators that live within
your software, and create and play back their sounds in real time—the sounds are not inherently stored
in files. It’s as if while you were playing back audio files, a piano player played the piano along with
the files in real time, the same way every time, while you did your mix. MIDI commands (see Chapter
5), like those generated by a MIDI-compatible keyboard controller, trigger instrument notes and alter
instrument parameters.

Tech Talk: Bouncing

The term bouncing originated with tape, when track counts were limited. For example with a four-track
recorder, you might record drums, guitar, and bass into their own tracks. This left only one track for
everything else, so you’d mix (bounce) the drums, guitar, and bass tracks to the one available track.
You could then record more parts over the three original tracks. The downside was you had to make
sure the bounce was mixed properly, because there was no “undo” with analog tape. Your only option
was creating safety copies of what you had recorded by transferring the tracks to a second recorder, but
that degraded the quality.

It would make more sense to record the piano player, and capture the performance—then you’d have
the track, the pianist could go home, and you wouldn’t have to pay time-and-a-half for overtime.
There’s a similar process for virtual instruments called bouncing, freezing, rendering, or (as Studio One
calls it) transforming. Right-click in a track header (not individual Events: this is track-based) and
choose Transform to Audio Track. This process records the sounds a virtual instrument plays back as an
audio file, like any other audio track. There are two main reasons to render a virtual Instrument track:

• A virtual instrument requires more CPU power than an audio track. Once the instrument sound
has been transformed, the instrument itself no longer needs to play, which saves CPU power.
However, if you check Preserve Instrument Track State in the Transform Instrument Track
dialog box, you can reverse the effect of transforming, make any edits, and then re-transform.
• If you want to open a project in the future, the plug-in may not work with a future operating
system or version of your software. If the sound has been preserved as an audio file, then
you’ve “future-proofed” the part.

Tech Talk: About the CPU

CPU stands for Central Processing Unit, the computer’s brain. It executes millions of instructions per
second, from checking the USB port to see if you’re using your mouse to creating sawtooth waves for
your virtual synthesizer. Although today’s computers are very powerful, they still have limits. Audio is
more demanding on computers than tasks like running a word processor, so anything that reduces the
CPU’s workload frees up more power.

The main factors that determine CPU power are its speed of operation (clock speed), and its number of
cores (a CPU distributes its work over multiple processing cores). For example, all things being equal,
a CPU with a clock speed of 3.0 GHz will be able to execute instructions faster than a CPU that
processes instructions at 2.2 GHz, and a 7-core CPU will be able to do more real-time instruction
processing than a 2-core CPU. For music applications you want the most powerful CPU you can afford,
coupled with at least 8 GB of memory (preferably more).

Your computer can show details about its CPU, clock speed, and available memory. With Windows,
right-click on This PC and choose Properties. With the Mac, choose the Apple menu and About This
Mac. These also include other useful information about your computer.

Like virtual instruments, effects plug-ins generate their effects in real time. Suppose you’ve recorded a
guitar track, and inserted an amp sim plug-in. Although you hear the amp sim, that sound is being
generated electronically—not playing back from a track. The guitar track is always the dry guitar you
recorded. To incorporate the amp sim sound as part of an audio track, you need to bounce or render the
guitar track with its associated effect.

However some effects, particularly amp sims, require a fair amount of CPU. It’s possible to transform
audio tracks similarly to instruments. That way CPU-hungry effects don’t have to operate in real time,
which loads down the CPU. Right-click in a track header (not individual Events, this is track-based)
and choose Transform to Rendered Audio.

When mixing, I often render the virtual instruments and sometimes even plug-in effects to audio tracks.
Rather than delete the instruments or MIDI tracks, I hide them so they don’t appear in the mixer.
Rendering may be a personal bias from being raised on tape, but it does improve the computer’s
efficiency, and allows using more virtual instruments in a project.

Setting Levels with Digital Mixers


Because modern audio engines have so much dynamic range (headroom), you can run Channels at high
levels without necessarily hearing distortion. However, eventually your mix will feed actual hardware,
which is susceptible to overload and distortion if the levels exceed 0 dB.

Tech Talk: Headroom

Even modern computers and audio systems don’t have an infinite dynamic range, so at some point, a
signal’s level could exceed the available dynamic range. To prevent this, it’s good practice to allow for
some headroom, or a range of levels before distortion occurs. For example, if a signal’s peaks reach 0,
then it has used up its available headroom. Any level increases will result in distortion. If the peaks
register as -6 dB, then there’s 6 dB of headroom prior to the onset of distortion.
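
In numbers (a quick Python check, using the convention that a full-scale sample value of 1.0 equals 0 dBFS):

import math

def headroom_db(peak_sample):
    """Headroom in dB below full scale, for a peak sample value between 0 and 1.0."""
    peak_dbfs = 20 * math.log10(peak_sample)   # 1.0 -> 0 dBFS
    return -peak_dbfs

print(headroom_db(1.0))   # 0 dB of headroom -- any hotter and it clips
print(headroom_db(0.5))   # ~6 dB of headroom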

Many engineers recommend keeping the Main fader close to 0, and adjusting gain within individual
Channels to avoid overloads at the master out (i.e., adjust the individual Channel faders to prevent the
final audio output from exceeding 0 dB). This is a better way to manage levels than keeping the
Channel faders high, and reducing the master gain to bring the output level down to 0 dB.

It’s also a good practice to leave a few dB of headroom, and not run levels right up to 0 dB. Most
digital metering measures the level of the digital audio samples. However, for reasons that might make
people doze off if I started explaining them, converting digital audio back to analog may result in
higher values than the samples themselves. This creates intersample distortion. Please see Appendix B
for more information about headroom and potential sources of distortion.
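
If you're curious what an intersample peak looks like, here's a small demonstration (a Python/NumPy/SciPy sketch, the kind of estimate a true-peak meter makes by oversampling): a sine wave whose samples never exceed 1.0 can still reconstruct to a waveform that does.

import numpy as np
from scipy.signal import resample

SR = 44100
N = 4096
t = np.arange(N) / SR

# A sine at exactly SR/4, phase-shifted so the samples straddle the true peaks.
x = np.sin(2 * np.pi * (SR / 4) * t + np.pi / 4)
x /= np.max(np.abs(x))    # sample peak normalized to exactly 1.0 (0 dBFS)

true_peak = np.max(np.abs(resample(x, N * 4)))   # estimate by 4x oversampling
print(20 * np.log10(true_peak))    # roughly +3 dB over full scale

This is an extreme case, but it shows why leaving a few dB of margin is wise even when the sample-peak meter says you're safely under 0.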

Tip: The level of loops that stretch with tempo variations can change depending on the tempo, which
may cause distortion in some cases.

The Best Sample Rate for Recording and Mixing


This is a geeky topic, but you’ll run into it eventually...so let’s deal with it now. If it’s too geeky, just
skip this section. The sample rate at which you record and mix will have little, if any, effect on your
music’s emotional impact.

By the time you’re mixing, you’ve already decided your project’s sample rate because doing so was a
necessary step to record audio tracks. The sampling rate for CDs is 44.1 kHz, and most home recording
projects use this sample rate. However, some people believe recording at higher sample rates, like 96
kHz, provides better fidelity. The topic is controversial, but it’s indisputable that some factors involved
in recording at higher sample rates could influence your mix.

The argument about whether people can tell the difference between audio recorded at 96 kHz and
played back at 44.1 kHz or 96 kHz has never really been resolved (I’ve yet to meet anyone who can do
so reliably). Nonetheless, under some circumstances recording at a higher sample rate can give audibly
superior sound quality.

The reason is that sounds generated inside the computer can produce harmonics that interact
with the project's sample rate. This won't happen with acoustic or electric sounds recorded through an
audio interface, because the interface itself will remove ultra-high frequencies that could otherwise be a
problem.

For example, consider a virtual instrument plug-in that synthesizes a sound, or distortion created by a
guitar amp simulator. In these cases, the basis for the improvement heard with high sample rates comes
from eliminating foldover distortion, also known as aliasing.

A digital system can accurately represent audio only at frequencies lower than half the sampling rate,
e.g., in a 44.1 kHz project the audio shouldn't be higher than 22.05 kHz. If a synthesizer plug-in
generates harmonic content above that limit—for example, at 46 kHz—then this content will fold back
(alias) against the sampling rate to create sonic artifacts within the audible range. These sounds won't be
related harmonically to the original signal, and will generally sound ugly.
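
The arithmetic of where an alias lands is straightforward: anything above half the sample rate folds back into the audible band. A quick sketch (Python; aliased_frequency is a made-up helper for illustration):

def aliased_frequency(freq_hz, sample_rate=44100):
    """Return where a generated frequency actually lands after sampling."""
    f = freq_hz % sample_rate
    return sample_rate - f if f > sample_rate / 2 else f

print(aliased_frequency(46000))         # a 46 kHz harmonic folds down to 1900 Hz
print(aliased_frequency(46000, 96000))  # at 96 kHz it stays at 46 kHz, above the audible range

A 1.9 kHz tone sits right in the midrange and has no harmonic relationship to the note that produced it, which is why aliasing sounds so unmusical.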

Should you record at 96 kHz to eliminate these issues? Maybe...but maybe not, because not all plug-ins
will exhibit these problems for one of three reasons:

• The audio coming out of them doesn't have high-frequency harmonics that can cause audible
aliasing.
• The plug-in oversamples, which means that the plug-in itself runs at a higher sample rate
internally. As far as it’s concerned, the sample rate already is higher than that of the project. As
a result, any aliasing occurs outside the audio range.
• The plug-in designers have built in appropriate filtering to reduce or eliminate aliasing.

Because today’s software can handle higher sample rates, it might seem that starting projects at a
higher sample rate is ideal. Unfortunately, there are some limitations with higher project sample rates
(and oversampling).

• Recording a project at a higher sample rate stresses out your computer more. This reduces the
number of possible audio Channels, and won’t allow running as many plug-ins.
• Oversampling requires more CPU power, so even if all your instruments are oversampling
internally, you may not be able to use as many instances of them.
• Although some instruments may do oversampling, that still might not be sufficient to eliminate
aliasing on harmonically rich sources.
• With plug-ins that oversample, the sound quality depends on the algorithms that perform the
oversampling. It’s not always easy to perform high-quality sample-rate conversion in real time.

A lot of this is splitting hairs. If you have a powerful computer with plenty of storage, sure...record at
96 kHz. However, recording at 44.1 kHz has served us well for over three decades, and as you'll see in
the following ultra-geeky Tech Talk sidebar, when mixing there's a workaround that can obtain the
benefits of higher sample rates in 44.1 kHz projects anyway.

Tech Talk: A Workaround to Obtain the Benefits of Higher Sample Rates

It’s possible to obtain the benefits of a 96 kHz or 192 kHz sample rate when mixing a 44.1 kHz project.
In Song Setup, convert the project sample rate up to a higher sample rate, render the instrument file into
audio, then drop the sample rate back down to 44.1 kHz. The audio from the instrument will exhibit the
benefits of higher sample rate recording, even when converted back to 44.1 kHz.

Key Takeaways
• Channels correspond to individual audio tracks.
• A Bus is like a mini-mixer inside your mixer. Buses can simplify mixing because you adjust the
levels of grouped tracks with a single level control, instead of adjusting each track’s level
individually. Buses also make it easy to have the same effect on some, but not all, Channels.
• Each Channel includes a channel strip with a volume fader, pan slider, mute and solo buttons,
and other controls.
• Channels can terminate in a Bus. If multiple Channels (e.g., individual drum parts) go into the
same Bus, then the Channel faders set the balance of the tracks, while the Bus sets the overall
level.
• You can group faders virtually, so that moving one fader moves the other grouped faders
simultaneously.
• Record in 24-bit instead of 16-bit resolution. The files take up 50% more space, but most people
agree 24-bit resolution sounds better.
• If possible, use 64-bit process precision. If that stresses out your CPU too much, use 32-bit
precision. There will likely be no audible difference.
• Virtual instruments create their sounds in real time. To convert these sounds to audio tracks,
they need to be transformed (bounced or rendered).
• A fast CPU in your computer makes for a smoother mixing experience.
• With some virtual instruments and amp sims, running projects at 96 kHz may produce an
audible improvement compared to using 44.1 kHz.
• There’s a simple workaround to obtain the benefits of recording at higher sample rates in 44.1
kHz projects.

Chapter 4 | How to Use Plug-Ins

Effects, also called signal processors, are a key element of great mixes. They can process sounds to
blend in better or stand out more, add interest to parts that could use a little spice, or transform sounds
into something completely different. Previously, effects were available only as hardware devices.
However, effects—as well as musical instruments—have now been virtualized in software, and can
load into Studio One. (However, it’s still possible to integrate external hardware effects as you would
plug-ins with Studio One’s Pipeline plug-in, as described later.)

This chapter covers basics common to both effect and instrument plug-ins, then addresses aspects
unique to each type.

Plug-In Technologies
There are two main plug-in technologies: host-based (also called native) and hardware-based.
Hardware-based plug-ins run only with certain hardware computer cards or peripherals designed for
digital signal processing (e.g., Universal Audio’s Accelerator cards). Studio One has native plug-ins.
These use the computer’s inherent power to perform digital signal processing.

Because native plug-ins draw power from the CPU, running more plug-ins makes the CPU work
harder. This limits how many plug-ins you can use while mixing. To run more plug-ins, there are three
main solutions:

• Use a computer with a faster CPU clock speed.
• With Windows, increase the Device Block Size under Options > Audio Setup > Audio Device
so the CPU doesn't have to work as hard. However, increased latency lengthens the response
time when moving faders, playing soft synths, etc. (the sketch after this list shows the basic
arithmetic). Studio One's Native Low-Latency Monitoring technology can help.
• Transform the track (a process called track freeze in some other programs). This host-based
process transforms the track with the plug-in into an audio track, and disconnects the plug-in
from the CPU so that the plug-in doesn't draw power. Please refer to the previous chapter's
section “The Two Types of Audio—and How to Save CPU Power” for details.
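
Here's the basic buffer-size arithmetic mentioned in the second bullet, as a quick Python sketch. It covers only the buffer itself; the actual round-trip latency you experience also includes converter and driver overhead:

def buffer_latency_ms(block_size, sample_rate):
    return 1000 * block_size / sample_rate

for block in (128, 512, 2048):
    print(block, "samples =", round(buffer_latency_ms(block, 44100), 1), "ms")
# 128 = 2.9 ms, 512 = 11.6 ms, 2048 = 46.4 ms per buffer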

Tip: CPU consumption changes constantly based on what else the computer is doing, so don’t max out
the CPU. Exceeding the available power can lead to an interruption in the audio stream, crackles and
other audio artifacts, or possibly even a crash.

With hardware-based plug-ins, the maximum number of plug-ins you can run is limited by the
hardware hosting the plug-ins. Because the hardware’s available power and the power required by the
associated plug-ins don’t vary, you can add plug-ins until there’s no available power left—there won’t
be additional stresses to push the CPU power consumption over the line.

Figure 4.1 The control panel for Universal Audio’s Powered Plug-Ins shows how much DSP the currently loaded
plug-ins consume. The amount consumed doesn't vary unless you add or remove plug-ins.

Plug-In Formats
There are several plug-in formats:

• DirectX is an older, Windows-only format that is essentially obsolete. Studio One doesn’t
support the format natively. Although there are DX-to-VST adapters, these can sometimes lead
to system instability. Using VST plug-ins is preferable.
• VST stands for Virtual Studio Technology, a term coined by Steinberg when the company
“virtualized” signal processors as native parts of the computer environment rather than outboard
hardware devices. The original VST spec was enhanced to VST2, and is currently at version 3
(VST3). It’s the dominant plug-in standard for Windows. The virtual instrument version is
sometimes called VSTi.
• AU stands for Audio Unit, Apple's plug-in standard introduced in OS X. Most Apple-
compatible programs use AU plug-ins, although many also support a Mac-compatible VST
variation. A Windows VST plug-in is not compatible with Mac VST, and vice-versa (although a
company may make versions for both platforms).
• The RTAS (Real-Time AudioSuite) and TDM (Time-Division Multiplexing) formats were used
only in Avid Pro Tools, and have since been replaced by AAX (Avid Audio eXtension). AAX is
also Pro Tools-specific, and not compatible with any other programs.

32-Bit vs. 64-Bit Plug-Ins


Computer hardware and operating systems use either a 32-bit or 64-bit architecture. 64-bit systems
provide multiple benefits over 32-bit systems (which are rapidly becoming obsolete). Studio One is a
64-bit-only program, so any plug-ins should be 64-bit as well.

Plug-In Wrappers
It’s possible to run 32-bit plug-ins in 64-bit systems by using a wrapper. This software utility makes a
plug-in look to your system like it has a different format. The jBridge and BitBridge wrappers allow
many 32-bit Windows plug-ins to function in Studio One.

Although wrappers work most of the time, sometimes they lead to system instability with particular
plug-ins. Furthermore, as fewer people continue to use the older formats, demand drops for wrappers so
there is little incentive for companies to update them.

Tip: Search the internet to find the most recent versions of wrappers. Most have demo versions; verify
that they work with the plug-ins you want to use. Some are so old they’re free, but they still work.

Stereo vs. Mono Plug-Ins


Studio One’s tracks can be mono or stereo. Some effects plug-ins include both mono and stereo
versions, so use the appropriate version for the track’s audio. Many stereo plug-ins can also accept a
mono input, but provide a stereo output. For example with a mono mic as the input, a mono-stereo
reverb plug-in can create a stereo field from the mono input.

Overall, the mono/stereo distinction isn’t much of a concern. Just be aware that if you recorded a mono
track but want to use a stereo plug-in, change the Channel Mode (Fig. 4.2) to stereo. To convert a track
recorded in mono to stereo, change the Channel Mode to stereo, select the audio, then type Ctrl+B.

Figure 4.2 With stereo plug-ins, make sure the Channel Mode is set to stereo.

Effects Plug-Ins Are Always “Re-Amping”


Before plug-ins, many engineers used a guitar recording technique called “re-amping.” This split a dry
guitar signal into two paths. One went to an amp, which was miked to record the amp’s sound. The
other went directly to the recorder. To change the amp sound while mixing, you could send the non-
miked, recorded sound to a different amp, and then re-record (“re-amp”) the recorded guitar part
playing through that amp.

With plug-ins, the audio on a track is always recorded dry. The plug-in generates the sound of an amp
sim or other effect in real time from this audio, so it’s doing the software equivalent of re-amping.
(However, as mentioned previously, it’s possible to transform the track into an audio track that includes
the effect’s processing.)

The Four Places to Insert Effects
Unlike a traditional hardware mixer with fixed places to insert effects, software is more flexible. Studio
One has four places where you can insert effects:

• Individual Channels. The effect processes audio from the track associated with the Channel.
These are called insert effects because they insert into a Channel, through the virtual equivalent
of an effects rack.
• Individual Events. Event effects process a portion of the audio within a track, not the entire
track.
• Buses (including FX Channels). Send effects (also called Bus or aux effects) process all signals
feeding a Bus; for example, a reverb effect that’s applied to vocals, rhythm guitar, drums, and
piano.
• Master output. Master effects process the entire mixed output, typically at the Main Channel.

Let’s look at the theory behind each option, and then get practical.

Track (Channel) Insert Effects


Insert effects are named after the insert jacks found in hardware mixers, which are part of individual
mixer channels. In hardware mixers, these inserts follow the input preamp. This is because few
hardware effects are designed for mic-level signals, so if the audio comes from a mic, the channel’s
preamp can amplify the incoming signal to a level suitable for feeding the effect.

Studio One’s software mixer follows the same concept. Insert effects go within a specific Channel,
affect only the audio entering that Channel, and process the entire track (unless you’re using
automation to, for example, bypass the effect in certain places). Popular insert effects include
equalizers, dynamics processors, saturation, delay, amp sims, and modulation, but there are many more.
You can place insert effects in any order, but some orders are more likely to give the results you want.
You have several options to see which effects have been inserted (Fig. 4.3).

Figure 4.3 Clockwise from upper left: With the Console in Small mode, clicking on the Expand button opens the
virtual effects rack to the side. In Large mode, the effects appear above the Channel fader. A track’s Inspector
shows the virtual effects rack similarly to the Console view’s Small mode. In the Arrange view, clicking on the
track number opens the effects chain, including the interface for the selected effects tab. All of these options
allow for bypassing effects, and opening them for editing.

Event (Clip) Effects


Inserting effects in individual Events within a track processes only that particular Event. For example,
you could split a clip to isolate a snare drum hit, and add a big reverb “splash” to it during a big
transition. Or, split off a vocal’s last word or phrase to add echoes to it.

To insert an effect into an Event, click on the Event, open the Inspector (F4 keyboard shortcut), and
you’ll see the Inspector’s virtual effect rack (Fig. 4.4).

Figure 4.4 In an Event called Bass Break, a Limiter2 (toward the bottom of the Inspector) is processing only that
Event. Clicking on the Render button applies the effect to the Event, and then removes the effect (although once
rendered, this button turns into Restore so you can undo the render). This effect doesn’t process any other
Events in the track.

Bus and FX Channels


These are different from insert effects because they affect multiple tracks simultaneously. While we
mentioned this in Chapter 3, here’s a more in-depth explanation.

Audio Channels use Send controls to “pick off” some of the Channel’s audio, and then send it to a Bus
or FX Channel. The Bus Channel output shows up in the program’s mixer, just like a Channel output.

An effect inserted in a Bus or FX Channel processes any audio it receives. The classic bus effect
application is reverb, where different tracks send different amounts of audio to the Bus Channel’s
reverb effect. For example, if you want lots of reverb on voice and guitar but not bass, you’d turn up
the voice and guitar track Send controls that feed the reverb bus, while leaving the bass’s Send control
turned down.

A Channel’s Send control can switch the send signal before (pre-) or after (post-) the Channel’s fader.

• Selecting post-fader reduces the send level when you lower the track’s main fader. (Also,
Channel mute and solo buttons typically affect a send only if it’s post-fader.)
• With pre-fader, the Send level control determines the Send level independently of the track
fader’s setting.

Tip: With a pre-fader setting for an echo effect, the echoes continue after pulling down the Channel
fader. With a post-fader setting, the echo level follows the Channel level.
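
If it helps to see the routing as math, here's a tiny Python sketch using linear gain factors (the names are just for illustration; this isn't how Studio One computes anything internally):

def send_to_bus(signal, channel_fader, send_level, post_fader=True):
    # Level that one Channel contributes to a Bus or FX Channel
    if post_fader:
        return signal * channel_fader * send_level   # follows the Channel fader
    return signal * send_level                       # ignores the Channel fader

print(send_to_bus(1.0, channel_fader=0.0, send_level=0.7, post_fader=True))   # 0.0 - echoes stop with the fader
print(send_to_bus(1.0, channel_fader=0.0, send_level=0.7, post_fader=False))  # 0.7 - echoes keep feeding the effect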

FX Channels are similar to Bus Channels, but simplified and optimized for effects. Bus Channels
include Send controls; FX Channels do not, and don’t include a channel mode (mono or stereo)
because they follow the characteristics of the effect inserted in the Channel.

When using Bus or FX Channel effects with wet/dry balance controls (like reverb or delay), remember
that the output of the Channel sending the audio and the processed output from the Bus or FX Channel
both feed into the Main Bus. This is a parallel signal routing, because there are two parallel audio
paths. Because the track itself is feeding the mixer an unprocessed signal, you'll usually set the send
effect for processed sound only (100% wet), and then use the Bus or FX Channel level control to dial
in the amount of processed signal.

Adjusting Send and Bus Levels

When using send effects, there are up to four places to alter level (Fig. 4.5).

Figure 4.5 This vocal Channel has one send going to the reverb Bus, and one going to the delay FX Channel.
The reverb Bus also has a send going to the FX Channel.

• The main Channel fader outlined in blue (if the Send control is set to post-fader)
• A Channel’s effects Send control (outlined in yellow)
• The Channel input level controls (outlined in orange)
• The Bus output faders (outlined in red)

Furthermore, a signal processor inserted in the Bus may itself have input and/or output level controls.
The effect’s sound may depend on the incoming level (e.g., with distortion, more input signal increases
the amount of distortion).

If these controls aren’t set correctly, an excessively high level may cause distortion, while too low a
level can give a poor signal-to-noise ratio. Although level-setting isn’t quite as critical as it is in the
analog world, the following is a “best practices” list for proper level-setting:

1. If the send effect has input and/or output level controls, set them to unity gain (i.e., the signal is
neither amplified nor attenuated).
2. Set the Bus Input controls and faders to unity gain.
3. Adjust the individual Send control(s) for the desired amount of effect. The higher you turn the
individual Send control(s), the more that Channel will contribute to the processed sound.
4. Because the Send controls from multiple individual Channels add up, they may overload the
effect’s input. Leave the effect levels at unity gain, and use the Input controls to reduce the level
going to the effect.
5. If the signal going to the effect is too low, use the Bus input level control to bring it up. If there
still isn’t enough level to drive the effect, increase the individual Channel Send controls.

FX Chains
Studio One’s FX Chains are brilliant—they’re “containers” for multiple effects. You can insert an FX
Chain into a Channel or Bus as easily as you would insert an individual effect. Conceptually, an FX
Chain is like a virtual effects rack, but with more flexibility (and its “virtual patch cords” are always
the right length!).

It’s possible to create series chains of effects, where the output of one effect feeds the input of another
effect. This effect’s output can feed another effect’s input, and so on. However, FX Chains also include
a Splitter module that can create parallel routings. These split the signal entering the FX Chain into up to five
splits. Each split can go to an effect or even a series chain of effects, which creates a parallel/series
routing. Mixing the path outputs back together provides a single output (Fig. 4.6). An FX Chain can
have more than one Splitter, so splits can split into additional splits.

Figure 4.6 Left to right: Series, parallel, and parallel/series routings. Although only two splits are shown, there
can be up to five splits in Studio One’s Splitter.
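
To make the three routings concrete, here's a short numpy sketch. The "effects" are crude stand-ins (a tanh curve for saturation, a looped offset for a short delay), not Studio One plug-ins, but the signal flow is the point:

import numpy as np

def drive(x):       # stand-in for a saturation effect
    return np.tanh(3 * x)

def slapback(x):    # stand-in for a short delay effect
    return x + 0.5 * np.roll(x, 441)

signal = np.sin(2 * np.pi * 110 * np.arange(22050) / 44100)

series = slapback(drive(signal))                            # one path: effect into effect
parallel = 0.5 * (drive(signal) + slapback(signal))         # two splits mixed back to one output
parallel_series = 0.5 * (slapback(drive(signal)) + signal)  # a series chain inside one split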

Split Types
There are three split modes:

• Normal sends the same input to multiple, parallel splits.


• Channel Split sends the left and right outputs to different splits. If there are two splits, the
assignment is simple—left to the left split, right to the right split. With four splits, the left goes
to two splits, and the right goes to two splits. With three splits, the middle is the right channel,
and the other two are the left channel. With five splits, the left-most, right-most, and center
splits are the left channel; the remaining two are the right channel.
• Frequency Split turns the Splitter into a crossover, which splits the input into up to five
individual frequency bands. This simplifies multiband processing—for example, longer delays
on low frequencies, and shorter delays on high frequencies; or reverbs tailored to different
frequency ranges.

Parallel Effects Applications


Although FX Chains are a convenient way to create parallel effects, Bus Channels and FX Channels
also make it easy to create parallel effects. Use a Channel’s effects Send control to send audio into a
Bus Channel or FX Channel, which provides the parallel effect. These examples show some advantages
of parallel effects, whether using Buses, or the Splitter within FX Chains:

• Wah or envelope-controlled filter. Filtering won’t thin out the main Channel’s sound when
placed in parallel with the Channel it’s processing.

• Bass. Keep the full, round bass sound in the main Channel, and use processors like distortion,
chorus, wah, etc. as parallel effects. This keeps the low end intact.
• Maintain consistency with doubled parts. To add the same effect to both parts, instead of
setting up insert effects for each part, create an FX Channel and send audio from each doubled
part to the FX Channel.
• Trashy sounds for drums. Amp sim distortion and cabinets can sometimes impart a
delightfully trashy sound, as well as ambiance, to drums. However, a little goes a long way—
add a subtle amount in parallel with the drums.
• Distortion. Choose a couple instruments to be emphasized, and send some of their audio to a
Bus with a saturation plug-in. Potential candidates for this include drums, percussion, and
especially bass. This may also recall the sound of saturating analog magnetic tape, which we
associate with “pushing” the sound.

Master Effects
This variation on insert effects for individual Channels involves inserting effects into the Main Bus to
alter the entire mixed signal. For example, use EQ to brighten up the entire mix a bit, and/or limiting to
make the mix seem a little louder overall.

You’ve probably heard of the mastering process, where a mastering engineer enhances the sound of a
finished stereo mix by adding processing. Although adding master effects as you mix can help simulate
the way mastering may affect the sound, if you intend to take your mix to a mastering engineer, it’s
usually best not to insert master effects when you export your mix—a good mastering engineer will
likely want to use a familiar set of high-quality mastering tools or plug-ins. There’s more on this topic
in Chapter 14.

Using Virtual Instrument Plug-Ins


Instrument plug-ins appear in Studio One as dedicated Instrument tracks (which play back through
mixer Channels). Dragging an instrument from the Browser into the Arrange view inserts the
instrument and creates a track. Compared to audio tracks, there are three major differences in mixing
with virtual instruments:

• The virtual instrument’s audio is not recorded as a track, but generated by the computer in real
time. Its track contains note data that tells the instrument what notes to play, the dynamics,
additional articulations, etc. You can use effects with an Instrument track just as you would an
audio track; insert them after the instrument output. Send effects, mute, solo, and similar audio
track options are available, and FX Chains can layer virtual instruments within the same track.
However before mixing, I recommend rendering (transforming) the part into a standard audio
track, as explained in Chapter 3.
• Virtual instruments have parameters that can alter their sound, and therefore affect the mix. For
example, if you record a standard electric bass part and decide you should have used the neck
pickup instead of the bridge pickup, you can’t change that. But a virtual bass may include the
option to choose a different pickup’s sound.

• Because these instruments are being driven by MIDI data, changing the data driving the
instruments can change a part, which affects the mix.

Chapter 5 describes mixing with virtual instruments in detail. If you don’t use virtual instruments, you
can skip it.

Instruments with Multiple Outputs


Many virtual instruments offer multiple outputs, especially if they’re multitimbral (i.e., they can play
back different instruments that are set to different MIDI channels). For example, if you’ve loaded bass,
piano, and ukulele sounds, each set to receive data from a different MIDI channel, then each one will
have its own audio output (probably stereo) in the console.

However, multitimbral instruments almost always have internal mixers as well, where you can set the
various instruments’ levels and panning (Fig. 4.7). The mix appears as a stereo Channel in Studio One’s
mixer. The instrument will likely include effects too.

Figure 4.7 IK Multimedia’s SampleTank can host up to 16 instruments (8 are shown), mix them down to a
stereo output, and add effects.

A mixed instrument output can reduce clutter in your software mixer, because each instrument sound
doesn’t need its own mixer Channel. However, if the instrument doesn’t include the effects plug-ins
you need to create a particular sound, then you’ll need to use the instrument’s individual outputs and
insert effects in Studio One’s mixer Channels. For example, many people use separate outputs for drum
instruments because they want to add specific effects to each drum sound (kick, snare, etc.).

ReWire and Mixing


ReWire is a software protocol that allows two (or sometimes more) software applications to work
together as one integrated program. We’re covering this in the context of plug-ins because ReWire is
like “plugging in” an entire program. Sometimes this is to take advantage of virtual instruments
included in a program (like Propellerhead Software’s Reason prior to version 11), or specialized
programs like Ableton Live or Acid Pro.

ReWire has two elements:

• A client application (also called the synth application), which you can consider as a plug-in
• A ReWire-compatible host program, like Studio One (also called the mixer application)

Any ReWire-compatible application is either a host, a client, or both (but not simultaneously—you
can’t ReWire a client into a host, then ReWire that host into another host). Although there can be only
one host, sometimes multiple clients can ReWire into a host.

The order in which you open and close programs matters. Open Studio One first, and then any clients.
Close clients first, then close Studio One. You won’t break anything if you don’t, but you’ll likely
receive a warning about opening and closing programs in the right order. Also, although Studio One
will try to launch a client automatically when you select it for rewiring, if that doesn’t work you’ll need
to launch the client manually.

There are five main ReWire attributes (see Fig. 4.8):

Figure 4.8 ReWire sets up relationships between the host and client programs.

• The client’s audio outputs stream into Studio One’s mixer.
• The host and client transports are linked, so that starting or stopping either one starts or stops
the other.
• Setting loop points in either application affects both applications.
• MIDI data recorded in Studio One can flow to the client (excellent for triggering soft synths).
• Both applications share the same audio interface.

Computer Requirements
ReWire is a software-based function that’s built into ReWire-compatible programs. Although there’s a
misconception that ReWire requires a powerful computer, ReWire itself is an interconnection protocol
that doesn’t need much CPU power. However, your computer needs enough RAM and processing
power to run two programs simultaneously.

Applying ReWire
ReWire can stream 255 MIDI buses (with 16 channels per bus) from one application to another. Studio
One can also query the client for information, like instrument names for automatic track naming.

With audio, a modern ReWire client can stream up to 256 individual audio channels into Studio One’s
mixer (ReWire’s initial version was limited to 64 channels). You may have the option to stream only
the master mixed (stereo) outs, all available outs, or your choice of outs.

If you choose all available outs, then you can ReWire individual instrument outputs into Studio One’s
Channels. For example, if you ReWire a drum module with eight available outputs into Studio One,
you can process, mix, insert plug-ins, and automate mixer parameters for each of the eight outputs.

Choosing all available outs can create a lot of mixer Channels. If the Channels aren’t being used, delete
them. Or, set up the client to send only the stereo outs, and do all client-related mixing inside the client.

ReWire Implementations
PreSonus Notion is a superb candidate for rewiring, because it integrates with Studio One and offers
notation that's more comprehensive than Studio One's own. However, because this book
concentrates on mixing, we'll use Propellerhead Software's Reason (which you can treat as a suite of
virtual instrument plug-ins) as an example of how to ReWire.

Note that starting with Version 11, Reason became available as a VST plug-in, so ReWiring is no
longer essential if all you want to do is treat Reason as a plug-in instrument suite. However, as of this
writing, there are still many older versions of Reason installed...and in any event, ReWire is ReWire.
Learning how to ReWire one program is tantamount to learning how to ReWire any program.

Fortunately, you don’t need to learn everything about a ReWire client to use it with a ReWire host. In
this example, all you need to do is send note data to Reason’s instruments, and send the instrument
audio outputs to Studio One's mixer. Neither is difficult. There are actually several ways to access the
tools in ReWire, which is maybe why some people find it confusing. So, we’ll cover only what you
need to know for ReWire to do what you want.

Loading ReWire Instruments


The available ReWire clients are located in the Browser under Instruments > ReWire. Drag a ReWire
device into the Arrange view, as you would any other instrument. In the dialog box that appears (Fig.
4.9), you’ll have the option to Allow Tempo/Signature Changes, and have Multiple MIDI Outputs (I
check both). Then, click on Open Application in the dialog box.

Figure 4.9 When you insert a ReWire device in Studio One, a dialog box appears.

Using ReWired Instruments


There are two main options:

• Treat the ReWired instruments like any other virtual instruments—trigger sounds from a MIDI
controller, or note data in Studio One. As with other virtual instruments, you can transform
Reason’s instruments into audio tracks.
• Generate patterns within Reason, and record them into audio tracks. For example, maybe
you’ve come up with a cool ReDrum loop that complements Impact, and want to turn it into an
audio loop for Studio One.

With ReWire, you need to decide what workflow is best for you. This can involve a little head-
scratching, but once you’re set up, it’s straightforward. We’ll make a few assumptions:

• You’ve opened Studio One and Reason (or a similar ReWire-compatible client.)
• There’s a working MIDI controller.
• You’ve loaded the instruments you want to use in Reason. For our examples, we’ll choose
Malström, Thor, and SubTractor. (I've created a Reason document specifically for ReWiring. It
opens Reason with nothing in its virtual rack except the Hardware Interface with the Audio I/O
open, and the Master Section.)
• Reason’s Master Section output goes to Audio I/O audio outputs 1 and 2. This mixes all
instruments down to a single stereo output, but we’ll find out how to mix and process
instruments individually within Studio One.

The Simplest Option: Reason as a Collection of Virtual Instruments

This is ideal for recording Reason Instrument tracks fast—e.g., you’re in the creative throes of
songwriting. When you ReWire Reason into Studio One, the instruments show up in Reason’s
MixL+MixR output.

1. Dragging Reason into Studio One created an Instrument Track. For the Instrument Track output
choose Reason, and for the instrument, choose the desired instrument (Fig. 4.10). In this case, it’s
Malström.

Figure 4.10 Reason’s Malström instrument has been chosen as the track output.

2. Choose your MIDI controller as the Instrument input. In this example, it's Native Instruments'
Komplete Kontrol keyboard (KK MIDI in Fig. 4.10).
3. Play your controller, and the track meter should show activity.
4. In Reason, click on the associated sequencer track for the virtual instrument you want to hear. This
prevents triggering other instruments.
5. Record the Instrument track note data in Studio One. Edit as desired.
6. Right-click on the track in Studio One’s Arrange view Track column, and choose Transform to
Audio Track. Unless you’re sure you won’t need to edit the Instrument track later, check Preserve
Instrument Track State.
7. To play a different Reason instrument, insert another Instrument track in Studio One.
8. In the new Instrument track, choose Reason and the desired instrument as the Instrument output.
9. Repeat steps 2 through 6 for the new track.

When you transform an Instrument track to audio, Studio One creates a new mixer Channel for the
audio output. Note that in Step 6, you can also transform an Instrument track to audio by dragging
the note data into an audio track. However, this doesn't mute the Instrument track automatically, so it
will trigger the instrument and you’ll hear the audio, unless you mute the Instrument track. I usually
prefer Transform to Audio Track.

Assuming you checked Preserve Instrument Track State, the Reason instrument audio tracks can be
transformed back into Instrument tracks if they need more editing. To do this, right-click on the track in
the Arranger view track column, and choose Transform to Instrument Track.

Using as Real-Time Virtual Instruments

You may not want to render to audio after recording, but use Reason’s instruments as real-time, virtual
instruments. Then you could automate their parameters, or do other tweaks, prior to transforming them
to audio (or just leave them operating virtually).

To do this, patch the outputs of instruments you want to use to Reason’s Audio I/O section in the
Hardware Interface module rather than Reason’s Master Section. Don’t let the “hardware interface”
name confuse you; Reason thinks Studio One’s ReWire channels are hardware. Here’s how to patch
instruments to audio I/O.

1. Flip Reason’s “rack” around by hitting Tab. Click on the Audio I/O button (outlined in orange in fig.
4.11). This reveals 16 available audio outputs, and after inserting the instruments, you can patch the
instrument outputs to any of these. For example, in Fig. 4.11 Malström goes to 1+2, Thor goes to 3+4,
and SubTractor, which has a mono output, goes to output 5.

Figure 4.11 How to route Reason instrument outputs to the Audio I/O section.

2. In Studio One’s Instruments panel, under Reason, check all the Channels you want to use—in this
case MixL+MixR (which are Channels 1 and 2), Channels 3 and 4 for Thor, and Channel 5 for
SubTractor, which has a mono output. Now all these outputs will be available in Studio One’s mixer
(Fig. 4.12).

Figure 4.12 Check the Channels in Studio One’s Instrument panel for the Channels you want to have appear in
the mixer.

Note 1: The only stereo Channel that Reason exposes is MixL+MixR, because Reason’s outputs other
than MixL+MixR are fundamentally mono. You cannot, for example, take Channels 3 and 4 and send
them to a stereo mixer Channel in Studio One. So in the mixer, pan the two mono Channels (3 and 4)
left and right for stereo. I recommend naming the mixer Channels to avoid confusion; because Studio
One combined Reason Channels 1 and 2 into mixer Channel 1, Studio One’s Channel numbers will be
one less than Reason’s Channel numbers.

Note 2: However, when you bounce an instrument to audio, or drag the Instrument track note data into
an audio track, this can indeed create a stereo audio track. You can then hide the Instrument tracks, or
delete them if you don’t think you’ll use them again.

Note 3: If you hear double-triggering with MIDI notes, turn off the Monitor function for the track on
which you’re recording.

3. Now that you’re set up, record your tracks. Follow the same procedure as in the first example—
insert an Instrument track, send its output to Reason and the desired instrument, click on the Instrument
track in Reason’s sequencer, and then record. This time, though, don’t transform the track to audio.

To render these tracks to audio at some point, I usually right-click on the Note Events, and choose
Bounce Selection or Bounce to New Track. This adds an audio track, creates an audio mixer Channel,
and mutes the original Note Events. To edit the parts further, unmute them later and redo the parts.

More About ReWired Instruments

There’s nothing you need to learn regarding automation for the Reason instruments—it works the same
way as recording automation for any virtual instrument. Simply move the instrument knob or switch
you want to control, then drag the hand down from Studio One’s Control Link section (which will show
the parameter you selected) to either the track for track automation, or the Edit view’s Part for part
automation.

Finally, don’t forget that in addition to using Studio One’s insert effects to process the instrument
sounds, you can also patch Reason’s processors between its instrument outputs and the audio I/O.

Using Hardware Effects when Mixing


Software plug-ins have become so good, affordable, and flexible it’s easy to forget that hardware gear
provides unique functions. Fortunately, Studio One’s Pipeline plug-in (for mono, mono-in-stereo-out,
or stereo hardware) can interface easily with external hardware, which allows for the following:

• Integrate rack processors (and even guitar stompboxes) with Studio One.
• Send a track’s output into a hardware synthesizer’s external audio input, bring the synth’s
output back into a Channel in Studio One’s mixer, and treat the synth like a playable signal
processor.

You need an audio interface with enough spare inputs and outputs to accommodate the hardware you
plan to use. For example, I’ve dedicated a pair of inputs and outputs on my Studio 192 for Pipeline.

It’s not necessary to use Pipeline; you can send audio from one Channel to your interface output, patch
that to the hardware processor’s input, then bring the hardware output back into interface inputs that go
to a different Channel. However, Pipeline’s advantages are:

• It inserts into a Channel like a plug-in.


• You can save presets for specific pieces of hardware.
• It compensates automatically for the latency that occurs when sending signals out through an
interface, then bringing them back into Studio One.

Using Pipeline
Pipeline appears as an effect in the Browser. You drag it into a Channel like any other effect, or call it
up from a Channel’s Insert menu. You need to tell Pipeline where to send the audio (i.e., the interface
outputs that patch into your hardware) and where to find the processed audio (the audio interface inputs
that receive the processed audio). A “ping” function measures the time it takes for audio to pass through
an external effect, and compensates for the delay (Fig. 4.13).

Figure 4.13 DigiTech’s DSP256 is being used as a plug-in. A preset has been set up for Studio One, so instead
of seeing the names of the interface inputs and outputs, they’re labeled as DSP256. The wrench symbol initiates
the ping process that compensates for latency.

The most complex part of using Pipeline is setting up the inputs and outputs properly on your audio
interface to avoid feedback and other issues. Fortunately, it’s easy when working within the PreSonus
ecosystem. Go into the audio routing setup, add a set of inputs and outputs for your hardware, and
connect them to the desired inputs and outputs in the matrix (Fig. 4.14).

Figure 4.14 A virtual output has been set up for the DSP256 processor, which routes to physical outputs Line
Out 3 and Line Out 4. Inputs are handled similarly.

Limitations of External Hardware


Please be aware of the following limitations with external hardware:

• Your audio interface needs to dedicate at least one input and output for mono effects, and two
inputs and outputs for stereo effects. For mono-in-stereo-out effects, use Pipeline stereo.
• Sending audio out through the audio interface, processing it through an effect, then bringing
audio back into the interface adds latency. Although Studio One “pings” this signal path with a
test signal to measure the delay and compensate for it, this compensation isn’t always perfect.
You may need to trim the compensation manually.
• A hardware processor can be inserted only once in a project. We've become spoiled by plug-ins where we can
insert as many instances as we want...but hey, it’s hardware! To free up the hardware for another
track, bounce the processed audio.
• Any bouncing or rendering involving external hardware must happen in real time.

Key Takeaways
• Some native plug-ins, particularly instruments, can draw a fair amount of CPU power. To
minimize their impact, use a faster computer, increase latency, or transform tracks.
• Make sure any plug-ins you buy are software- and hardware-compatible with your system.
• VST is the most common plug-in format for Windows, and AU for the Mac.
• It’s best to use 64-bit software with a 64-bit operating system. However, it’s sometimes possible
to use 32-bit plug-ins in a 64-bit system, as well as unsupported plug-in formats, with a plug-in
wrapper.
• Virtual instruments and plug-ins generate their sounds in real time. These sounds are not
recorded in audio tracks unless you render or bounce them.
• There are multiple places to insert effects in a project.
• Send effects allow for parallel signal paths. They’re fed with audio from selected tracks;
different tracks can send different amounts of audio. Sending more audio from a track makes
the effect more pronounced for that track.
• ReWire is a useful technology that allows two (and sometimes more) compatible programs to
work together as a single program.
• You aren’t limited to using software plug-ins. Studio One’s Pipeline plug-in can integrate
hardware effects into your projects.

Chapter 5 | Mixing and MIDI

Thanks to virtual instruments, MIDI has enjoyed a resurgence. As shown in the previous chapter,
“tracks” are not always traditional digital audio tracks, but virtual instruments, triggered by MIDI data.
These instruments exist in software, and can play in real time, or be converted to standard audio tracks.
Computer algorithms model particular sounds, from ancient analog synthesizers to sounds that never
existed before. The instrument outputs appear similarly to audio tracks in Studio One’s mixer. Because
virtual instruments generate audio, they can use the same plug-ins and mixing techniques as audio
tracks.

There’s some confusion regarding how Studio One handles MIDI data. Studio One is completely
compatible with MIDI controllers and MIDI-controlled devices. However, once the data enters Studio
One, it’s converted into an internal, higher-resolution format. This has two main benefits:

• When sweeping a filter frequency with Studio One’s virtual instruments, it’s a continuous
sweep that sounds like an analog synthesizer. Conventional MIDI quantizes sweeps into steps,
and sometimes you hear the steps instead of a continuous change.
• MIDI controller values aren’t restricted to being displayed as numbers between 0-127, but can
be displayed as a percentage (Fig. 5.1). If you're used to conventional MIDI, this can be slightly
disorienting. But overall, it’s easier to think of, for example, reducing a MIDI controller’s value
by “a third” instead of trying to figure out how to subtract a third of 127.

Figure 5.1 Right-click on the scale in the Edit View to choose between Percent or standard MIDI views.

Mixing’s Most Important MIDI Data


The two most important parts of the MIDI “language” for mixing are note data and controller data.
Note data specifies a note's pitch and dynamics. Controller data creates modulation signals that vary
parameter values. These variations can be periodic, like vibrato that modulates pitch, or arbitrary
variations from something like a physical knob that you move.

Envelopes can vary modulation over time. Triggering an envelope creates predictable changes. Fig. 5.2
shows an ADSR (attack, decay, sustain, release) envelope, which can control level, filter cutoff,
detuning, or other synthesizer parameters. This common envelope design is usually triggered by
playing a keyboard key.

Figure 5.2 The ADSR level envelope in Presence creates level changes. Once you trigger the envelope, its
level increases over time (Attack), decreases (Decay) over time to a constant level (Sustain), then fades out
when the trigger ends (Release), typically when you lift your finger from a key.
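
For readers who think in code, here's a bare-bones Python rendering of the same ADSR shape. The parameter names and the straight-line segments are illustrative; Presence's actual envelope curves differ:

import numpy as np

def adsr(attack, decay, sustain, release, hold=1.0, sr=44100):
    # attack/decay/release in seconds; sustain is a level from 0 to 1
    a = np.linspace(0, 1, int(attack * sr), endpoint=False)       # rise when the key goes down
    d = np.linspace(1, sustain, int(decay * sr), endpoint=False)  # fall to the sustain level
    s = np.full(int(hold * sr), sustain)                          # held while the key stays down
    r = np.linspace(sustain, 0, int(release * sr))                # fade after the key is released
    return np.concatenate([a, d, s, r])

envelope = adsr(attack=0.01, decay=0.2, sustain=0.6, release=0.5)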

Just as you can move a Channel’s fader to change the Channel level, MIDI data can create changes—
automated or human-controlled—in signal processors and virtual instruments. These changes add
interest to a mix by introducing variations.

Virtual Instruments and CPU Issues


Virtual instruments often need a fair amount of CPU power. After completing a track, consider using
the Transform or Bounce functions to convert the track to audio, which saves CPU power. To allow for
further editing, check “preserve state” in the Transform dialog box. This allows reverting a track to its
pre-transformed state so that you can make additional edits.

How to Enhance MIDI Drum Parts in a Mix


Virtual drum modules, like Impact XT, provide the rhythms that drive much of today’s electronically
oriented music. Take advantage of their editing possibilities to enhance the mix. You can modify the
Note data feeding them, and/or tweak how their parameters respond to Note data.

Shift Pitch
A drum’s pitch control parameter can optimize the sound in many ways:

• Tune drums to the song’s key. This is particularly important with toms and resonant kick
drums. An out-of-tune kick can fight with the bass, or confuse the song’s sense of key.

• Multiply one sound into many sounds. To play a two-hand shaker part with one shaker
sample, copy the sample and detune it by a semitone or so to provide a variation. Detuning can
also create a family of cymbals or toms from one cymbal or tom sample.
• Accommodate different musical genres. Some musical genres favor lower- or higher-pitched
drum sounds. You may not need a new set of drum samples—try retuning the ones you have.
• Use radical transpositions to create novel sounds. To create a gong, copy your longest
cymbal sample. Detune the copy by -12 to -20 semitones. Detune the original sample by about
-3 semitones. When layered together, the slightly detuned cymbal gives a convincing attack,
while the highly detuned one provides the sustain.
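
The detuning amounts above translate to playback-speed ratios with one simple equal-temperament formula; here's a quick Python sketch of the math:

def pitch_ratio(semitones):
    return 2 ** (semitones / 12)

print(round(pitch_ratio(1), 3))    # 1.059 - the shaker copy detuned up a semitone
print(round(pitch_ratio(-16), 3))  # 0.397 - a cymbal dropped way down for gong-like sustain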

Make Impact XT Drum Sounds More Expressive


Some of these techniques use Impact XT’s synth-type modules, while others depend on using multiple
samples recorded at different intensities (i.e., softer and harder hits), or modifying a single sample to
make it sound like it was recorded with different intensities.

Assign Velocity to Parameters

To emphasize dynamics, tie velocity to amplitude and filtering. These parameters affect all samples
loaded on a pad:

Figure 5.3 Assigning velocity to Pitch and/or Filter Cutoff can enhance dynamics.

Turn up the Pitch module’s Velocity to Pitch parameter by around 0.26 semitones (Fig. 5.3). This raises
the pitch slightly with harder drum hits, which emulates acoustic drums (the initial strike increases the
tension on the head, thus raising pitch momentarily).

Similarly, lower the Filter Cutoff slightly, and turn up the Filter’s Vel parameter (e.g., 10%). This
makes the sound brighter with higher velocities.

Multiple Drum Samples

Drum sample libraries often include multiple versions of the same drum sound—like soft, medium, and
hard hits—that are triggered at different velocities. You can load multiple sounds recorded at different
intensities on an Impact XT pad, and trigger them at different velocities. However, if a pad already
contains a sample and you drag a new sample to a pad, it will replace, not supplement, the existing
sample. Use the following method to load multiple samples on a single pad. (This example uses only
three sounds, but you can load more than that.)

1. Drag the first (Soft) sample onto an empty pad.


2. Click the + sign to the lower left of the pad sample’s waveform display, navigate to the Medium
sample, and load it (Fig. 5.4).

Figure 5.4 Click on the + sign (circled in orange) to load another sample on to a pad.

3. Click the + sign again, navigate to the Hard sample, and load it.
4. Above the pad’s waveform view, you’ll see three numbers—one for each sample. Impact XT splits
the velocity range into equal smaller ranges, one for each drum you've loaded, and automatically
assigns the drums to those ranges (there's a sketch of the idea after these steps). 1 is the first sample (Soft) you dragged in, 2
is the second (Medium) sample, and 3 is the last (Hard) sample. Although Impact XT does automatic
velocity assignment, you can drag the splitter bar between the numbered sections to vary the velocity
ranges (Fig. 5.5).

Figure 5.5 The splitter bar between samples can alter the velocity range to which a sample responds.

Now you’ll trigger different drum samples, depending on the velocity.

How to Simulate Multiple Drum Samples

If samples recorded at multiple velocities aren’t available, but you have a single drum sample with a
hard hit, then you can use Impact XT’s sample start parameter to create additional, softer hits by
changing the sample start time. (Starting sample playback later in the sample cuts off part of the attack,
which sounds like a drum that’s hit more softly.)

1. Do all the steps above, but load the single, hard hit three times instead of using three different hits.
This loads multiple versions of the same sample on the same pad, split into different velocities.
2. Click on the number 1 in the bar above the waveform to select the first sample.
3. Drag the sample start time further into the sample to create the softest hit (Fig. 5.6).

Figure 5.6 Click on the sample start line, and drag right to start sample playback past the initial attack. The
readout toward the lower right shows the offset in samples.

4. Click on the number 2 in the bar above the waveform to select the second sample.
5. Move the sample start time halfway between the sample start and the altered sample start time in
step 3.

Play the drum at different velocities. Edit sample start times, and/or velocities, to obtain a smooth
change from lower to higher velocities.
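
Conceptually, all this trick does is start playback deeper into the same audio. Here's a toy numpy sketch (the offsets and the synthetic "drum" are arbitrary; real offsets depend on the sample):

import numpy as np

def later_start(sample, offset):
    # Starting 'offset' samples in removes part of the attack transient
    return sample[offset:]

hard = np.random.randn(44100) * np.exp(-np.arange(44100) / 3000)  # noise with a decay, standing in for a drum hit
medium = later_start(hard, 250)
soft = later_start(hard, 500)  # furthest into the sample, so the softest-sounding hit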

Hi-Hat Amplitude Envelope Decay Modulation

A drummer works the hi-hat constantly, opening and closing it with the pedal, but the electronic version
is often an unchanging snapshot. One workaround is to program a combination of open, half-closed,
and closed hi-hat notes, and then assign them to a Choke group (see below) so triggering one will cut
off other sounds that are still ringing. However, programming a rhythm with three hi-hat sounds is
tedious, and may not sound sufficiently realistic.

As an alternative, use a MIDI controller (e.g., mod wheel) to vary an open hi-hat sound’s Amp
envelope decay time. Shorten the decay for a closed hi-hat; as you extend the decay, the hi-hat “opens”
gradually. You can play this part in real time; however, post-processing can also work well—record the
part, then overdub the needed controller changes.
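
The mapping itself is simple; here's a Python sketch with assumed values (the millisecond range and the controller you use are up to you, and Impact XT's own scaling may differ):

def decay_time_ms(mod_wheel, closed_ms=60, open_ms=900):
    # mod_wheel: 0-127, mapped linearly onto the Amp envelope decay range
    return closed_ms + (mod_wheel / 127) * (open_ms - closed_ms)

print(round(decay_time_ms(0)))    # 60 - short decay reads as a closed hi-hat
print(round(decay_time_ms(127)))  # 900 - long decay reads as an open hi-hat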

Choke Groups

Hitting a drum assigned to a Choke Group will stop any other drum assigned to the same group from
playing back. This is intended mostly for hi-hats, so that playing a closed hi-hat replaces the open hi-
hat sound.

Also consider assigning toms with long decays to the same Choke Group. Too many simultaneous tom
decays can muddy a track. However, this technique may not work well if the toms are sampled with a
lot of room sound.

Tip: Assigning sounds to a Choke Group also conserves polyphony (i.e., the ability to play several notes
at once), because only one sound in the group can play at a time.

Enhancing Synth Parts in the Mix


If you’re comfortable with basic preset editing, the following programming tweaks can add more
interest to synthesizers and samplers during the mix.

Change the Sample Start Point


We showed how to “fake” changing the sample start point with Impact XT, but Presence XT takes this
one step further, because it can assign velocity control to the sample start parameter.

Set the initial sample start point several tens of thousands, or even hundreds of thousands, of samples
after the attack occurs. Higher velocities move the sample start point closer to the attack's beginning.
At low velocities, you don’t hear the sound’s initial attack; at maximum velocity you hear the entire
attack (Fig. 5.7).

Figure 5.7 In Presence XT, velocity can alter sample start time to emulate using multiple samples.

Sample start time modulation’s main application is negative modulation to increase the attack’s
strength. However, moving the start time later can add more punch with instruments that have a slow
attack time, like wind instruments.

Layering Techniques
Layering combines sounds. SampleOne XT can layer many sounds within a single preset. With
Presence XT, you’ll need to layer multiple instances, each with its own preset.

One of my favorite techniques for larger-than-life sounds is layering a synthesizer waveform with a
sampled sound. Following is a practical example of layering a sawtooth or pulse wave with strings.
Synthesized strings can sound rich, but not very realistic. Conversely, sampled strings sound realistic,
but aren’t very lush. Layering the two gives lush realism.

1. Create an Instrument track with Presence XT, and call up the Violin Full preset.
2. Drag Mai Tai into the same track. You’ll be asked to Replace, Keep, or Combine. Choose Combine
(Fig. 5.8).

Figure 5.8 Choose Combine to layer two instruments.

3. Now both instruments are layered within the Instrument Editor (see above).
4. Because Mai Tai’s role is only to provide reinforcement, program a very basic sound with sawtooth
or pulse waveforms: a little oscillator detuning, no filter modulation, basic LFO settings to add vibrato
and prevent too static a waveform, amplitude envelope and velocity that tracks the Presence sound as
closely as possible, and possibly some reverb to create a more “concert hall” sound. Fig. 5.9 shows the
parameters used for this example. The only semi-fancy programming tricks were making one of the
oscillators a pulse wave instead of a sawtooth, and panning the two oscillators very slightly off-center.

Figure 5.9 Parameters used for programming a Mai Tai synth sound to layer with strings.

5. Adjust the Mai Tai’s volume to supplement, not overwhelm, Presence.

There are other ways layering can improve how synths and samplers feel in a mix:

• Add dynamics. Layering two sounds with different velocity responses can create exciting
dynamic effects. For example, program one layer with no velocity response to provide the main
sound, and a second layer (with a harder or more percussive sound) to respond to velocity so
that it plays only with higher-velocity notes.
• Stronger leads. Use the same synth sound for both layers. As in the above example, one layer
provides the main sound and has no velocity response. Detune a second layer slightly compared
to the first, and give it maximum velocity response, so that hitting the keys harder brings in the
detuned layer to create chorusing. Normally chorusing tends to diffuse a sound, but because the
detuned layer increases the overall level when played, the sound is bigger.
• Fuller acoustic guitar or piano sound. Layer a sine wave with the guitar or piano's lower
notes. To attenuate the sine wave at higher-pitched notes, use keyboard position to modulate the
sine wave’s amplitude negatively (i.e., the higher you play on the keyboard, the lower the sine
wave level). Keep the overall level low—just enough to provide a slight psycho-acoustic boost.

• Bigger harp sounds. Layer a triangle wave with harp, and mimic the harp's envelope with the
triangle’s amplitude envelope. A bit of triangle wave provides depth, while the sample provides
detail and realism.
• Add male voices to an ethereal female choir. Layer a triangle wave tuned an octave lower
with a female choir, for a powerful bottom end that sounds like males singing along. To
maintain the ethereal female quality, modulate the triangle wave amplitude by keyboard
position to reduce the amplitude on higher notes.
• Strengthen attacks. Take advantage of the complex attacks found in bass sounds (slap bass,
synth bass, plucked acoustic bass, etc.). Transpose a bass waveform up one or two octaves, and
layer it behind the primary sound. Add a fairly rapid decay to the bass so that its sustain doesn’t
become part of the composite sound.
• Hybrid pitched sounds. Percussion instruments acquire a sense of pitch when played across a
keyboard. Layering these with conventional melodic samples can yield hybrid sounds that are
melodic, but have complex and interesting transients. Cowbell is great for this application.
Claves, a triangle dropped down an octave, struck metal, and other pitch-friendly percussion
sounds can also give good results.
 Not using waveforms as intended. Applying a sound “incorrectly” can lead to useful new presets.
These sounds may even work on their own, without layering. I’ve stumbled on some “electric
piano” sounds by transposing electric bass sounds up an octave or two.

Taming Peaks
Synths sometimes generate strong peaks that can wreak havoc on playback. For example, even though
detuned (chorused) oscillators sound fat, there’s an output boost when the chorused waveform peaks
occur simultaneously. To reduce this, drop one oscillator’s level about 30% - 50%. The sound will
remain thick, yet the peaks won’t be as drastic.
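If you’d like to see the numbers behind this, the sketch below (Python with NumPy, assuming it’s available) compares the worst-case peak of two slightly detuned oscillators at equal levels with the peak when one oscillator is dropped to 60%. The frequencies and levels are arbitrary examples, not settings from the text.

import numpy as np

sr = 48000
t = np.arange(sr) / sr                    # one second of audio
osc1 = np.sin(2 * np.pi * 220.0 * t)      # oscillator 1
osc2 = np.sin(2 * np.pi * 221.5 * t)      # oscillator 2, detuned slightly

def peak_db(x):
    return 20 * np.log10(np.max(np.abs(x)))

print(f"one oscillator: {peak_db(osc1):5.1f} dB")
print(f"equal levels:   {peak_db(osc1 + osc2):5.1f} dB")        # approaches +6 dB when the peaks align
print(f"one at 60%:     {peak_db(osc1 + 0.6 * osc2):5.1f} dB")  # worst case drops to roughly +4 dB

Two equal peaks that line up sum to twice the amplitude (+6 dB), so trimming one oscillator by 30% to 50% shaves a couple of dB off the worst-case peak while the sound stays thick.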

High-resonance filter settings create peaks if you play a note at the filter’s resonant frequency. Add a
limiter at the output to reduce the peaks (use a fast attack time). See Chapter 8 on dynamics processing
for more information on limiters.

Synth/Sampler Parameter Automation Applications


Consider using the pitch bend wheel to introduce vibrato—like a guitarist. This frees up the mod wheel,
which is typically dedicated to vibrato, to control some other parameter. Also, many synths can assign
an external footpedal to any parameter, including effects. Use the mod wheel and/or pedal to enter
automation envelopes (see Chapter 12) that control the synthesizer sound during mixdown. Here are
several suggestions on what to control:

 Tone. With bass, roll the mod wheel forward to reduce highs (e.g., by lowering a filter’s cutoff
frequency) and simultaneously increase gain to compensate for the lower level.
 Reinforce the fundamental. A variation on the layering tip above involves controlling the level
of a sine wave tuned to the preset’s fundamental. Bring in a hint of sine wave for a deeper
fundamental on lower notes.

Page 83
 Distortion. Both Sample One XT and Presence XT have modulation sources that can control
internal signal processor parameters. Set a minimum drive amount with distortion, then use the
mod wheel to increase drive. Simultaneously apply some negative modulation to the output
level so the level stays consistent when going from minimum to maximum drive.
 Guitar-like pseudo-feedback. Guitarists often sustain a note at high volume, inducing a second
tone that’s typically an octave and a fifth above the fundamental. Tune an additional sine wave
oscillator +19 semitones, and control its level with the mod wheel. To be more guitar-like, add
some vibrato as the “feedback” appears, and reduce the fundamental’s level slightly. Or, use
envelopes to introduce the feedback effect automatically (Fig. 5.10).

Figure 5.10 The top image shows the envelope for the main sound, while the lower envelope brings in the
“feedback.” The red line indicates a time two seconds into each envelope.

 Waveform morphing. At its most basic, this technique controls the level of two oscillators so
that as one goes from full off to full on, the other goes from full on to full off. But there are rich
possibilities, like morphing between a cello and sawtooth wave to transition from a more
realistic to a more synthetic sound quality, or even morphing between patches.
 Brighter or darker overall sound. If a preset includes a lowpass filter, use the mod wheel to
increase the filter envelope’s effect on the filter frequency. The attack and decay have the same
shape, but the overall filter frequency is higher, which gives a brighter sound. Or, darken the
sound by reducing the filter envelope amount. Alternatively, vary the filter envelope sustain level
—increase for brighter, decrease for darker. This will likely interact with the attack and decay
characteristics, which may add another useful variation.

Page 84
 Alter filter resonance. Reduce the filter resonance control to place a sound more in the
background. Add resonance to bring a sound more to the forefront.
 Release-based reverb. With pads, turn up the amplitude envelope release to add an evocative,
reverb-like lengthening to notes when you release your fingers from the keys. Note: You may
also need to turn up the filter envelope release control. Otherwise, the filter cutoff may go low
enough to make the note inaudible before the volume envelope fades out completely.
 Vintage oscillator drift. To re-create the subtle, random oscillator drift of older analog
synthesizers, route an LFO to one (or more) of the preset’s oscillators, set the LFO for a very
slow rate, and add modulation to change the pitch in a subliminal way. A smoothed, random
LFO waveform is ideal; otherwise, a very slow triangle wave may work. Better yet, use two
slow LFOs set for slightly different rates, and apply some signal from each one to randomize
the LFO waveform more.
 Distortion drive. Distortion is an increasingly popular synth effect, but take a cue from
guitarists and differentiate between rhythm and lead. Nothing says “pay attention to this part!”
like putting the pedal to the metal, and going from a somewhat dirty sound to overloaded
screaming. Subtle amounts are good, too. I was once asked which synth I used to get “that
amazing funky Wurlitzer sound.” It was a sampled acoustic piano, followed by EQ to take off
the highs, and then distortion. Distortion on organs and Native Instruments’ Massive synth can
also be a beautiful thing.
 Oscillator fine tune. Detuning one oscillator of a pair tuned to the same frequency gives
flanging/chorusing effects. Use automation to vary one oscillator’s fine tuning, which controls
the beating between the two oscillators. Faster beating increases intensity, while slower beating
sounds more ambient/relaxed.
 Amplitude envelope decay. This is particularly effective for percussive synth bass parts (and, as
mentioned previously, open/closed hi-hat). Which parameter to control depends on how the
envelope generator handles decay: for short, triggered notes the release parameter sets the decay
time, while for held notes the decay parameter does.
 Sub-octave level. Add an oscillator tuned an octave lower, and set to a simple waveform (sine
or filtered triangle). Automate the sub-octave level to add gravitas (bridges and choruses love
this).
 High-frequency EQ. Most synth effects sections have high-frequency EQ—shelving,
parametric, etc. (see Chapter 7 for more information on EQ). Reducing the highs can help make
a digital synth sound more “analog” and often, sit better in a track. If the synth needs to be more
prominent, increase the highs. Sample One XT, Mai Tai, and Presence XT include a graphic
equalizer that works well for this application.
 Delay feedback and/or mix. Long, languid echoes (see Chapter 10) are fine for accenting
individual notes, but might clutter staccato passages. Controlling the amount of echo feedback
can push echoes to the max for spacey sounds, or pull back for tighter effects.

Humanizing Sequences
Timing is everything...especially with music. Yet mathematically perfect timing is not everything,
otherwise drum machines would have replaced drummers. World-class drummers enhance music by
playing around with time. They speed up or slow down subtly to change a tune’s feel, as well as lead
the beat to push a tune, or lag the beat so it lays back a bit in the groove.

Page 85
Even a few milliseconds make a difference. This may be surprising, but once you experiment with
timing shifts, you’ll find that small timing differences matter.

Some people quantize MIDI data to a rhythmic grid, which can make music feel mechanical. But
ironically, Studio One’s MIDI editing features can help put the feel back into sequenced music.

Before proceeding I’d like to acknowledge the late Michael Stewart, whose seminal research on the
topic first made me aware of the importance of small timing shifts.

How Timing Shifts Produce “Feel”


Feel is not based on randomizing note start times. Accomplished drummers add variations in a mostly
non-random, subconscious way, so these changes tap directly into the source of the musician’s feel. For
example, jazz drummers often hit a ride cymbal’s bell or hi-hat a bit ahead of the beat to push a song,
and give it more urgency.

Rock drummers frequently hit the snare behind the beat (a bit late) to give a big sound. Our brain
interprets slight delays as indicating a big space—since we’ve seen someone make noise at a distance,
and know that sound takes time to travel through a big space before it reaches us.
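To put rough numbers on this (a back-of-the-envelope sketch, not from the chapter): sound travels at about 343 meters per second at room temperature, so each millisecond of delay corresponds to roughly a third of a meter.

SPEED_OF_SOUND = 343.0  # meters per second, approximately, at room temperature

def delay_to_distance(delay_ms):
    """Distance sound covers (in meters) during the given delay."""
    return SPEED_OF_SOUND * delay_ms / 1000.0

for ms in (1, 5, 10, 20, 30):
    print(f"{ms:2d} ms ≈ {delay_to_distance(ms):4.1f} m")
# 1 ms ≈ 0.3 m, 10 ms ≈ 3.4 m, 30 ms ≈ 10.3 m

So a snare that lands 10 or 20 ms late suggests a sound source several meters away, which is part of why it reads as “big.”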

Some instrumentalists create their own note shifts within an overall tempo shift. In other words, if the
overall tempo is speeding up, a guitar player might speed up a bit more than the tempo change to
emphasize the change, then pull back, speed up, pull back, etc. These changes will be felt more than
heard, but just as subtle tempo tweaks can have a big influence on the sound, subtle note placement
changes in relationship to a rhythmic grid can alter the feel.

Track Timing Tricks


We’re covering this under “Mixing and MIDI” because it’s more difficult to shift timing with audio.
However, many of these principles apply to audio. There are two main ways to shift Events or notes
within a track:

 Click on the clips or notes you want to shift, and drag them. Because the shift amount will be
small, zoom in and turn off snap. With longer tracks, use the Split tool to cut the audio into a
specific region you want to shift. Otherwise, you might shift portions you don’t want to change.
 Enter an offset or start time for Events or notes (Fig. 5.11). If several Events are selected, you
can shift them simultaneously by changing times for any of the selected Events.

Page 86
Figure 5.11 Enter precise start (and end) times for Events in the Inspector (outlined in red), or note start/end
times in the Edit view.
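To get a sense of how small these shifts are relative to the grid, it helps to convert milliseconds into fractions of a beat at your tempo. Here’s a minimal sketch; the tempo and shift amounts are just examples.

def ms_per_beat(bpm):
    """Duration of one quarter-note beat, in milliseconds."""
    return 60000.0 / bpm

bpm = 120  # example tempo
for shift_ms in (3, 5, 10, 20):
    fraction = shift_ms / ms_per_beat(bpm)
    print(f"{shift_ms:2d} ms = {fraction * 100:4.1f}% of a beat at {bpm} BPM")
# At 120 BPM a beat lasts 500 ms, so a 5 ms shift is only 1% of a beat.

In other words, these shifts are far smaller than any normal grid value, which is why zooming in (or typing exact times) matters.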

Although the following timing change applications are more premeditated than musicians playing
instinctively, the goal is the same—add more feel.

 What to shift. With drums, keep the kick drum on the beat as a reference, and shift the timing
of the snare, toms, or percussion by a few milliseconds compared to the kick.
 More urgent feel. For techno, house, reggaeton, soca, and other dance-oriented music, move
double-time percussion parts (shaker, tambourine, etc.) slightly ahead of the beat for a more
urgent feel.
 More laid-back feel. Shift percussion a few milliseconds late compared to the grid.
 Shift individual notes. This can be preferable to shifting an entire track. With tom fills, delay
each subsequent fill note a bit more (e.g., the first fill note is on the beat, the second note
approximately 2 ms after the beat, the third note 4-5 ms after the beat, the fourth note 6-8 ms
after the beat, and so on, until the last note ends up about 20 ms behind the beat). This makes a
tom fill sound gigantic.
 Avoid part interference. If two percussion sounds in a rhythm pattern hit often on the same
beat, try sliding one part ahead or behind the beat by a small amount (a few ms) to prevent the
parts from interfering with each other. Or, slip one slightly ahead, and one slightly behind.
 Staccato separation. Track shifting doesn’t apply only to drum parts. Consider two fairly
staccato harmony lines. Advance one by 5 ms and delay the other by 5 ms so the two parts
become more distinct, instead of sounding like a combined part. Separate them further by
panning the parts oppositely.
 Cymbal emphasis/de-emphasis. Hitting a crash cymbal slightly ahead of the beat makes it
stand out. Moving it slightly later meshes it more with the track.
 Melody/rhythm emphasis/de-emphasis. If the kick and bass hit at the same time, emphasize
the melody by shifting the bass slightly earlier than the kick, or emphasize the rhythm by

Page 87
shifting the bass slightly later than the kick. The instrument that hits first sounds louder, even
though its level doesn’t change.

Tech Talk: Timing Shifts with Audio

Shifting audio is more complex than shifting MIDI data (except for moving entire clips). With audio,
you mostly need to modify where a note’s attack falls, because it’s more interesting to the ear than its
decay. If a sound is isolated in its own track (e.g., snare, cowbell), drag the audio earlier or later on the
timeline. However, there will likely be times when you want to change some note attacks, but not
others. Studio One’s bend markers are ideal for shifting note attacks (Fig. 5.12).

Figure 5.12 Studio One has detected the snare’s transient (outlined in red). This allows moving it manually, so
now it has been moved a little late compared to the beat (measure 1, beat 4) for a “bigger” feel.

Quantization Options
Note quantization, which moves note start points to a rhythmic grid, is common for some musical
genres (e.g., electro, techno). However, no human plays with 100% precision, so Studio One has
options to make quantization less mechanical.

Start (Percentage). Click the big Q in the Edit view to open the Quantize panel. Start (Percentage) is
my favorite quantize option, because it moves notes a certain percentage closer to the beat instead of
exactly on the beat. Your timing “errors” may be more about a lack of precise control. In other words,
you may have subconsciously meant to hit a little ahead of or behind the beat, but went too far.
Quantizing at less than 100% strength tightens your timing, without making it metronomic.

You can set a specific quantization percentage, but there’s a dedicated action menu item for 50%
quantization strength (keyboard shortcut Alt+Q). You can keep invoking this to move the timing ever-
closer to the grid, until the timing feels right. For example, if a note is 12 ms behind the beat,

Page 88
quantizing moves it on the beat. Quantizing with 50% strength moves it 6 ms behind the beat, and
quantizing again with 50% strength moves it 3 ms behind the beat.
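The arithmetic behind repeated 50%-strength quantizing is simple: each pass halves the remaining distance to the grid. A minimal sketch of the idea (not Studio One’s actual code, obviously):

def quantize(offset_ms, strength=0.5):
    """Move a note's offset from the grid toward zero by the given strength."""
    return offset_ms * (1.0 - strength)

offset = 12.0  # the note starts 12 ms behind the beat
for pass_number in (1, 2, 3):
    offset = quantize(offset)
    print(f"after pass {pass_number}: {offset:.1f} ms behind the beat")
# after pass 1: 6.0 ms, after pass 2: 3.0 ms, after pass 3: 1.5 ms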

Swing. This function affects the timing of pairs of equal-value notes. Each note normally takes up 50%
of the total duration of both notes; adding swing lengthens the first note of the pair, and to maintain the
total duration of both notes, shortens the pair’s second note. This imparts the feel found in shuffles,
some jazz tunes, and a lot of hip-hop. Even though swing still quantizes notes to a grid, it’s a grid with
a more human feel.
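Here’s the swing arithmetic as a worked example (generic swing math, not a description of how Studio One scales its swing control): with a pair of eighth notes, the first note normally takes 50% of the pair, and increasing that share delays the second note.

def swing_pair(pair_length_ms, first_note_share):
    """Split a pair of equal-value notes according to a swing amount.

    A share of 0.50 is straight time; 0.66 approaches a triplet shuffle.
    """
    first = pair_length_ms * first_note_share
    return first, pair_length_ms - first

pair_ms = 500.0  # two eighth notes at 120 BPM last 500 ms in total
for share in (0.50, 0.58, 0.66):
    first, second = swing_pair(pair_ms, share)
    print(f"share {share:.2f}: second note starts {first:.0f} ms in, lasts {second:.0f} ms")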

Groove Quantizing. Sometimes two parts, like an ostinato eighth-note synthesizer pattern and a drum
part played by a live drummer, may “fight”—the synth’s rhythm will be perfect, while the drummer
will play with more of a groove. Quantizing the synth pattern to the drummer’s rhythm, not the grid,
solves the problem. Studio One can extract the groove from a part, and quantize to it (Fig. 5.13).

Figure 5.13 A drum part was dragged to the Groove window (outlined in orange), and a rhythm guitar part (in
the main Edit view) quantized to it.

The process is simple. Select Groove mode in the Quantize panel, then drag the audio that’s the
quantize “master” into the Groove window. Next, quantize what’s in the Edit view to the groove.
Quantize audio to audio, or bring an Instrument part into the Groove window and quantize audio to
note data.

You can also drag what’s in the Groove window to an Instrument track. As one example, drag a drum
part to a bass track. Now the bass notes follow the drums. And you can even use Harmonic Editing to
conform the bass track to the chord progression.

Of course, you don’t need to avoid the grid—some forms of music benefit from ultra-tight rhythms.
But humans tend to play with the beat, not just to it. These tips can help your music flow a little better.

Page 89
Tech Talk: Quantization with Audio

Quantizing audio usually produces less predictable results than quantizing MIDI data, because audio
must be stretched or shortened to fit a rhythmic grid. This can affect audio quality. It’s also necessary to
analyze the audio and find the attack transients that define note attacks. The analysis process is rarely
perfect unless an audio track consists of isolated, mostly percussive sounds. So, you may need to
correct any issues by moving bend markers manually to align with audio transients that indicate note
attacks. More complex audio material usually requires more corrections.

Proofing MIDI Sequences


Sometimes a MIDI part just doesn’t feel quite right. The problem could be small errors, deep within the
MIDI data stream, that may not be obvious by themselves, but accumulate while mixing. Typical
glitches are:

 Double triggers caused by two quantized notes landing on the same beat.
 Excessive or unwanted controller data that interferes with timing.
 The end of one note overlapping the beginning of the next note with supposedly monophonic
instruments (e.g., bass, wind instruments).
 “Voice stealing” that cuts off notes abruptly when an instrument runs out of polyphony.

Although a group of instruments playing together may mask these problems, they can detract from a
song’s overall quality. Fortunately, the same technology that created these problems can minimize
them, because you can edit a sequenced track while mixing—long after the actual recording took place.
Before getting too far into the mix, “proof” your MIDI data, like you would use spelling or grammar
checkers to proof a word processor document before printing it. Look for duplicate notes, accidentally
hit notes that are shorter than a certain duration (or below a certain velocity), unneeded data (e.g.,
aftertouch that a keyboard may have generated, but a synthesizer doesn’t recognize), and the like.

Proofing is particularly important with MIDI guitar tracks, which are often loaded with low-velocity
and/or low-duration “ghost” notes. A more subtle problem is that sometimes a keyboard’s pitch bend
and mod wheels mount on the same support bar, so moving one wheel energetically causes the other to
move slightly. Delete any unintended pitch bend and mod wheel data.
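If you prefer to catch suspects outside the DAW, a short script can flag likely ghost notes in an exported MIDI file. Below is a minimal sketch using the third-party mido library (assuming it’s installed); the file name and thresholds are hypothetical examples, so adjust them to taste.

import mido  # third-party MIDI library: pip install mido

MIN_VELOCITY = 20       # flag notes softer than this (example threshold)
MIN_LENGTH_TICKS = 10   # flag notes shorter than this, in ticks (example threshold)

midi = mido.MidiFile("exported_track.mid")  # hypothetical file name

for track_number, track in enumerate(midi.tracks):
    active = {}  # note number -> (start tick, velocity)
    tick = 0
    for msg in track:
        tick += msg.time  # delta times accumulate into an absolute tick position
        if msg.type == "note_on" and msg.velocity > 0:
            active[msg.note] = (tick, msg.velocity)
        elif msg.type in ("note_off", "note_on") and msg.note in active:
            start, velocity = active.pop(msg.note)  # a note_on with velocity 0 also ends a note
            length = tick - start
            if velocity < MIN_VELOCITY or length < MIN_LENGTH_TICKS:
                print(f"track {track_number}: possible ghost note {msg.note} at tick {start} "
                      f"(velocity {velocity}, length {length} ticks)")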

Key Takeaways
 If you plan to use lots of virtual instruments, buy the fastest, most powerful computer possible.
 Use controller data to alter instrument parameters in ways that enhance the mix.
 Layering instruments can give full, rich sounds.
 Most instrument parameters are automatable.
 Although it’s convenient to quantize notes to a rhythmic grid, subtle timing changes humanize
music and make it sound more natural.
 Quantization offers various options other than strict, rhythmic quantization.
 Even at the mixing stage, it’s possible to do some tempo track edits.
 It’s helpful to examine MIDI tracks and make sure there isn’t extraneous data.

Page 90
Page 91
Chapter 6 | Prepare for the Mix

This is the book’s heart. You build a mix over time by making multiple edits and adjustments. The
difficulty is that these edits interact. Changing a track’s equalization (tone quality) also changes the
level, because you’re boosting or cutting a band of frequencies. Alter a sound’s stereo location, and you
may need to edit the ambiance. Think of a mix as an audio combination lock—when all the elements
hit the right combination, you have a good mix. Listen critically, because if you don’t fix something
that bothers you, it will bother you every time you hear the mix.

Consider importing some superbly mixed, commercially recorded tracks, and comparing your mix with
them as you proceed. The commercial tracks will have been mastered, so they’ll likely sound louder.
You can deal with this during the mastering process, when you move your mixes into Studio One’s
Project page. For now, concentrate on the balance among the instruments. Ideally, you’ll be able to
hear every instrument distinctly.

A good mix brings out the best in your music; a bad mix obscures it. An effective mix:

 Spotlights a composition’s most important musical elements.
 Keeps the listener engaged, by attaining the right balance of groove and surprise.
 Balances levels, so tracks don’t get lost, or become overbearing.
 Makes full use of the audio spectrum by not over- or under-emphasizing specific frequency
ranges.
 Sounds good on any system—from a smartphone speaker to an audiophile’s dream setup.

Some people favor a current trend, “top-down mixing,” which involves inserting the processors a typical
mastering engineer would use into the Main Bus. The object is to give a rough idea of what the mix will
sound like when mastered. (Some people also feel mastering a song simply requires inserting
processors in the Main Bus.) Or an engineer may consider these processors important, because the mix
“falls apart” without them.

Of course, in recording there are no rules. But my opinion is that mixing and mastering are separate
processes. The purpose of mixing is to find the proper balance among the individual tracks, while the
purpose of mastering is to process the combination of tracks optimally. If you depend on something
like a dynamic range limiter in the Main Bus to create a balanced mix, then the mix probably wasn’t
balanced properly. I strive for an ideal balance when mixing, without any added processors. Then the
mastering process brings out the best in that ideal balance.

In any event, translating a collection of tracks into a cohesive song isn’t easy—mixing requires the
same level of creativity and experience as any part of the musical process.

Page 92
Before You Mix
Although this book isn’t about tracking, preparation for the mix should have begun when you started
recording—so keep that in mind for future projects. Part of this preparation involves recording the
cleanest possible signal.

 Eliminate as many active stages as possible between source and recorder.
 Hardware devices set to “bypass” may not be adding any effect but remain in the signal path,
which can degrade sound quality (or possibly enhance it) slightly.
 Change strings, check intonation, and oil the kick drum pedal if it squeaks.
 Send line-level signals through your audio interface’s line inputs, rather than through inputs
with mic preamps.
 For mic signals, an ultra-high quality outboard mic preamp that patches into an audio interface’s
line input may give better performance than an onboard mic preamp. However, this is mostly an
issue with older or budget gear. Modern audio interfaces typically have excellent specs.

Always record with the highest possible fidelity. Recording engineers call this “getting it right at the
source.” Although you may not hear much difference when monitoring a single instrument, with
multiple tracks, the cumulative effect of stripping the signal path to its essentials can improve a mix’s
clarity and definition.

Mental Preparation, Organization, and Setup


Mixing requires concentration and can be tedious, so set up an efficient workspace. For an uncluttered
view in your monitor, learn Studio One’s shortcuts that show and hide various user interface elements.

Remember to take periodic breaks, and rest your ears, to maintain a fresh perspective. Even a couple
of minutes of downtime can restore your objectivity and, paradoxically, help you complete a mix faster.

Many people do rough mixes while tracking, so that when it’s time to mix the finished song, levels,
panning, and processing are already close to the desired settings. Personal bias alert: aside from basic
signal processing like EQ and dynamics control settings, I usually prefer to “re-boot” and start a mix
from scratch—with the master fader set to 0, and then adjust levels with the Channel faders. That’s not
a recommendation, just what works for me.

Tip: If you need to reduce overall levels during the mixing process, rather than lower the master level,
temporarily select all your Channel faders to group them, and reduce their levels.

When starting a mix, I pan all Channels to mono and set the fader levels to a nominal setting, like –12 dB.
Starting with mono reveals which instruments conflict with each other. If every instrument sounds
distinct in a mono mix, then stereo placement will make the mix just that much better.

Page 93
Review the Tracks
Next up: do the prep work needed to sail through the mixing process.

Organize Your Mixer Space


Name the tracks (“Record 1” is not a name).

Group sounds logically, and organize your mixing console for a consistent flow from project to project.
For example, I place all the drum tracks to the mixer’s left. Moving to the right there’s percussion, bass,
guitars, keyboards, vocals, and sound effects/miscellaneous. The order doesn’t really matter;
consistency makes finding tracks second nature.

Some find colorizing tracks distracting. I find it helpful to identify tracks quickly, so I color code guitar
tracks blue, vocals green, drums red, percussion yellow, and so on (Fig. 6.1). However, I’ll use a
different shade for the lead guitar and lead vocal, or change the shade for a track that requires attention.

Figure 6.1 A typical song layout, colorized to make it easy to parse tracks visually—it’s easier for the brain to
decode colors instead of text or numbers.

Put on Headphones and Listen for Glitches


Fixing glitches is a left-brain activity, as opposed to the right-brain activity involved in mixing. As
mentioned previously, switching between these two modes can hamper creativity. Do any fixes (erase
glitches, fix bad notes, delete or hide scratch tracks, and the like) before you start mixing.

Solo each track (it’s easier to hear glitches in isolation), and listen to it from beginning to end. This is
time-consuming, but worth the effort. Any glitches can detract from a mix, even if you don’t hear them
consciously (or worse yet, hear them after you’ve made 1,000 CDs for your band’s merch table!).

Page 94
Proof the MIDI tracks, as mentioned in the previous chapter. With audio tracks, listen for any spurious
noises just before or after audio appears (mic handling sounds with hand-held mics, hum from a bass
amp, etc.). Pay particular attention to vocals. Low-level glitches, like clicks from someone moving
their tongue prior to singing, may not seem important—but they add up.

It’s usually easy to edit out undesirable artifacts. Zoom way in, adjust the track height for a comfortable
view, then drag across the audio that needs fixing (Fig. 6.2). Delete it, or choose among Studio One’s
audio Event DSP options—change gain, fade in, or fade out, by clicking and dragging on the Event’s
upper, left, or right handles, respectively. For more complex edits, apply plug-in processing to
individual Events.

Figure 6.2 Use the Range tool (outlined in orange) to drag across the range you want to delete, then hit the
Delete key. This range contains some little glitchy sounds that won’t do anything to help the mix.

Render Virtual Instruments as Audio Tracks


Consider transforming MIDI-driven virtual instruments to audio tracks. As mentioned in Chapter 5, this
frees up processing power. Also, having audio tracks can help “future proof” a song if someday the
virtual instrument is no longer compatible with, for example, an operating system change.

When transforming an Instrument track, check the box for preserving the instrument’s pre-transformed
state so you can do edits later if needed. Also, retaining the note data takes up very little memory, so
that data is worth keeping as well.

Page 95
Set Up a Relative Level Balance Among the Tracks
With preparations out of the way, start setting levels. Concentrate on the overall effect of hearing the
tracks by themselves, and then work on a good balance; don’t become distracted by detail work. With a
good mix, the tracks should sound good by themselves—but sound even better when interacting with
the other tracks.

I still recommend keeping the audio panned to mono until after you’ve adjusted the EQ (see Chapter 7).
But again, let me emphasize this is a personal bias.

Key Takeaways
 Mixing requires concentration, so it’s important to prepare for the mixing process.
 Organize your mixing console so all the tracks are arranged logically.
 Solo each track, and listen on headphones from start to finish to catch any glitches. Eliminate as
many glitches as possible—even at low levels, they can detract from a mix.
 Proof the MIDI tracks to check for extraneous data (as mentioned in Chapter 5).
 Render soft synths as audio tracks to save on CPU power, and also to help “future-proof” the
tracks.
 It can help to start a mix with the tracks panned to mono, because if you can differentiate
among all the tracks in mono, they’ll sound that much better when you start creating a stereo
placement.

Page 96
Page 97
Chapter 7 | Adjust Equalization (EQ)

The audio spectrum has only so much space, and ideally, each part occupies its own sonic turf without
fighting other parts. Equalization (EQ) alters timbre and tone, “sculpting” each track’s frequency
response so that it takes up its own part of the audio spectrum. Next to level, EQ is arguably the most
important part of the mixing process—changing EQ even slightly for just one instrument can affect the
entire mix.

Main EQ Parameters
An equalizer stage emphasizes (boosts) or de-emphasizes (cuts) the frequency range you specify.
Studio One 5’s Pro EQ2 has eight separate stages, each of which can affect the sound. There are three
main equalizer stage parameters:

 Frequency sets the specific part of the audio spectrum where boosting or cutting occurs.
 Gain (also called boost/cut or peak/dip) determines the amplitude over the selected frequency
range—whether the signal is louder or softer compared to a flat response. The amount of boost
or cut is measured in decibels (dB).
 Bandwidth (also called resonance, or Q) determines the frequency range of the boosting or
cutting action. Narrow bandwidth settings affect a small section of the audio spectrum, while
broad settings process a wider range. Note that the Pro EQ2’s Shelf, Low Cut, High Cut, and
Phase Linear Low Cut responses (described later) do not have bandwidth settings.

Tech Talk: Understanding the deciBel (dB)

The dB measures a ratio of two audio levels. A 1 dB change between levels is (in theory) the smallest
audio level difference a typical human can detect. A dB spec can also have a – or + sign. For example,
cutting response in an equalizer band by -12 dB cuts more than cutting by -6 dB; a setting of +2 dB
creates a slight boost, and +10 dB, a major boost.
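For reference, here’s the underlying math as a quick sketch (the chapter only needs the intuition, but the numbers are easy to check): for signal amplitudes, dB = 20 × log10(ratio), so +6 dB roughly doubles the amplitude and –6 dB roughly halves it.

import math

def ratio_to_db(ratio):
    """Convert an amplitude ratio to decibels."""
    return 20.0 * math.log10(ratio)

def db_to_ratio(db):
    """Convert decibels back to an amplitude ratio."""
    return 10.0 ** (db / 20.0)

print(ratio_to_db(2.0))    # about +6.0 dB: double the amplitude
print(ratio_to_db(0.5))    # about -6.0 dB: half the amplitude
print(db_to_ratio(-12.0))  # about 0.25: a -12 dB cut leaves a quarter of the amplitude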

Equalization added to one track may affect other tracks. For example, boosting a guitar part’s midrange
could conflict with vocals, piano, and other instruments that have a midrange component. Or if you add
more treble to a bass part so that it stands out better on little speakers, it may fight with a rhythm
guitar’s low end.

Sometimes boosting a frequency for one instrument implies cutting the same region in another
instrument, so they complement each other. One potential solution if the bass and kick drum conflict is
to trim the kick’s low end to make room for the bass, but then boost the kick’s high frequencies so that
the “clack” of the beater hitting the drum becomes more prominent. The ear fills in the kick sound
because the hit is well-defined. In some cases, the reverse works—trimming some low end from the
bass, and boosting its highs.

However, because the Pro EQ2 has automatable parameters (see Chapter 12 on Automation), Studio
One can remember frequency response changes you make during the mix. For instance, suppose you’re

Page 98
recording a singer/songwriter with guitar. During vocals, cut the guitar’s midrange a bit in the vocal
frequencies to give the voice more room. When the singer isn’t singing, the guitar’s midrange can come
back up to emphasize the guitar.

Because EQ can dramatize differences among instruments and create a more balanced overall sound,
adjust EQ on the most important song elements first (typically vocals, drums, and bass). Once these
lock together and claim their spaces in the frequency spectrum, deal with the more supportive parts.
Drums are particularly important because they cover such a wide range, from the kick’s low frequency
thud to the cymbals’ high frequency sheen. Because drums tend to be upfront in today’s mixes, it’s
sometimes best to work on the drums first. Think of the song as a frequency spectrum. Decide where
you want the various parts to sit, and their prominence.

Equalizer Responses
Equalizers use filter circuits that pass certain frequencies and reject others.

 High Cut response (Fig. 7.1). Also commonly called a lowpass response, this attenuates
frequencies above a specified cutoff frequency (where the filtering action starts to take place). A
node (small circle) on the frequency response graph indicates the frequency; Pro EQ2 nodes are
color-coded, and correspond to the color-coding of different frequency bands. Applications:
Warm up a sound by reducing brightness, roll back hiss, reduce harshness with digital sound
sources, place an instrument further back in a mix without reducing its fader level.

Figure 7.1 High Cut (HC) filter response. The response drops off at higher frequencies.

Page 99
 Low Cut response (Fig. 7.2). Also called a highpass response, this attenuates all frequencies
below a specified cutoff frequency. Applications: Tighten up the low end, reduce rumble and
room noise, prevent excessive low frequencies from going into reverb, reduce p-pops with
vocals, reduce kick drum in drum loops and DJing.

Figure 7.2 Low Cut (LC) filter response. The response drops off at lower frequencies.

With both HC and LC responses, the frequency response doesn’t just stop at the cutoff frequency.
Instead, it rolls off at a certain slope, specified in deciBels per octave. For example, a 24 dB/octave
slope is steeper than a 6 dB/octave slope, so the response drops off faster with a 24 dB/octave slope
(Fig. 7.3).

Page 100
Figure 7.3 The High Cut filter slope (right) rolls off response at 6 dB/octave, while the Low Cut response (left)
has a steeper, 24 dB/octave slope.

The slope you choose makes a major difference on the overall sound. Gentler slopes have less of an
obvious effect, while steeper slopes give more of an audibly “filtered” sound. When swept, a steeper
slope can sound more dramatic, which is why many synthesizers use 24 dB/octave lowpass filter
slopes. However, if the slope is extremely steep, like 48 or 96 dB per octave, then the intention is
probably to solve a particular problem rather than add an artistic effect.
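To make the slope numbers concrete, here’s a rough sketch (real filters also bend near the cutoff rather than following the slope exactly): well past the cutoff, attenuation is roughly the slope multiplied by the number of octaves away from the cutoff. The 5 kHz cutoff is just an example value.

import math

def approx_attenuation_db(freq_hz, cutoff_hz, slope_db_per_octave):
    """Rough attenuation of a High Cut (lowpass) filter above its cutoff."""
    if freq_hz <= cutoff_hz:
        return 0.0
    return slope_db_per_octave * math.log2(freq_hz / cutoff_hz)

cutoff = 5000  # Hz, an example cutoff
for slope in (6, 12, 24, 48):
    print(f"{slope:2d} dB/octave: 10 kHz down ~{approx_attenuation_db(10000, cutoff, slope):.0f} dB, "
          f"20 kHz down ~{approx_attenuation_db(20000, cutoff, slope):.0f} dB")
# With a 24 dB/octave slope, content one octave above the cutoff is down about 24 dB.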

Page 101
 High Shelf response (Fig. 7.4). This starts boosting or cutting the highs at a specified frequency,
then levels off to a constant amount of boost or cut. Applications: When boosting, increase
brightness, add articulation to percussion and stringed instruments, increase “air” on voice.
When cutting, the effect is similar to the High Cut response and can reduce harshness, but is
less drastic.

Figure 7.4 This high shelf is boosting the high frequencies.

Page 102
 Low Shelf response (Fig. 7.5). This starts boosting or cutting the lows at a specified frequency,
then levels off to a constant amount of boost or cut. Applications: When cutting, this has the
same basic application as the Low Cut filter, but with a less drastic effect. Boosting adds
fullness in the bass range, and emphasizes bass and kick. However, use care—too much bass
boost gives a muddy, indistinct sound.

Figure 7.5 This low shelf is cutting the low frequencies.

 Peak/Dip or Parametric response (Fig. 7.6). This response boosts or cuts a range of frequencies
(called the bandwidth, which can be narrow or broad) around its resonant frequency. Peak is
also called bandpass or bell, while other names for dip are band reject or notch. Applications:
The peak/dip offers more precision than a “tone control,” so it’s well-suited to fix problems—
reduce resonances, cut sibilants, reduce hum, boost specific frequencies to increase articulation,
compensate for response anomalies, and the like.

Page 103
Figure 7.6 Peak/dip filter response. The lower curve shows a peak response at 485 Hz, while the upper curve
shows the response if it had been cut instead.

 Graphic Equalizer (Fig. 7.7). This equalizer response (more commonly associated with live
sound) divides the frequency spectrum into multiple bands, each with its own boost/cut slider.
Compared to other equalizer types, it’s fast to make tonal tweaks—just move the sliders. It’s
called a graphic equalizer because the fader positions look somewhat like a frequency response
curve.

Page 104
Figure 7.7 The Ampire amp sim includes a basic graphic EQ with seven bands.

Tip: The Channel Strip plug-in’s Low/Mid/High equalizer features a different type of EQ, called
quasi-parametric. This is like a peak/dip stage, but has no Q control. It’s best for general tone shaping.

Depending on the instrument and track characteristics, all these filter types have their uses. A track can
benefit from combining responses, such as a Low Cut stage to remove muddiness, a sharp dip to reduce
a resonance, and a gentle, high-frequency boost to add some “air.” Fig. 7.8 shows two parametric
stages, acting independently of each other.

Figure 7.8 The three main parameters of a parametric equalizer. The low mid band (left) is boosting with a
narrow Q at 330 Hz, while the Hi Mid band (right) is cutting with a broad Q at 1,970 Hz.

Page 105
There are three main ways to adjust the Pro EQ2’s controls:

• Move the virtual, on-screen knobs.
• Click on a node along the frequency response curve, and drag it. Dragging left lowers the
frequency, dragging right raises the frequency. Dragging up increases gain, dragging down
decreases gain.
• Use a hardware control surface. Its hardware knobs control the virtual, on-screen knobs.
Chapter 12 describes control surfaces.

Holding down Shift while moving either knobs or nodes provides finer control. Furthermore, after
selecting a node, the mouse’s scroll wheel can change the bandwidth.

However, one of the most useful equalizer controls is the bypass button. It’s important to compare
unequalized and equalized sounds periodically as a reality check. Use the minimum amount of
equalization necessary; avoid “iterative” EQ tweaking where the lows seem thin so you boost the bass,
but now the highs don’t seem clear so you boost the highs, and so on. For example, if a vocal sounds
thin, instead of boosting its bass, try cutting back the highs slightly. Then raise the overall vocal level.

Dynamic Equalization
Dynamic EQ specifies a threshold for a particular frequency range. If the audio in that range passes
over the threshold, then the EQ either boosts or cuts, depending on which you’ve specified (Fig. 7.9).

Figure 7.9 Studio One doesn’t include a dynamic EQ effect, but Waves’ F6 is a popular third-party option.

With vocals, a stage of dynamic EQ can reduce “s” sounds if the audio exceeds a certain level. This is
like de-essing, but without traditional compression. Dynamic EQ can also tame overly prominent
cymbals and hi-hats in mixed drums, as well as resonant synthesizer presets—applying dynamic EQ

Page 106
solely to the resonant frequency reduces the signal’s level if it exceeds the threshold, without affecting
the overall synth sound.

Chapter 8 describes multiband compression, which divides the frequency spectrum into bands like a
graphic equalizer, and provides results similar to dynamic EQ.

Spectrum Analysis
Studio One includes a Spectrum Meter plug-in, but can also superimpose a spectrum analyzer display
on the Pro EQ2’s main frequency graph. This diagnostic tool shows an audio signal’s level in multiple
bands to highlight response peaks and dips. Changing equalization settings changes the analyzer’s
readout (Fig. 7.10). For example, raising the response around 2 kHz raises the 2 kHz bar in the
spectrum analyzer display.

Figure 7.10 Studio One’s Pro EQ2 includes a spectrum analyzer with several display modes.

Page 107
Of the display modes, third octave (the top of Fig. 7.10) is common, and shows the overall response.
FFT mode (middle of Fig. 7.10) shows the instantaneous peaks and dips throughout the spectrum. The
waterfall mode at the bottom of Fig. 7.10 makes spectacular eye candy for clients. Okay, that’s not
really the main purpose, but it shows the evolution of level vs. frequency over time, where the brighter
the section of the waterfall, the higher the level.

Linear-Phase Equalization
There are two broad equalization technologies, non-linear (also called minimum-phase) and linear-phase.
Each has advantages and disadvantages; linear-phase equalizer user interfaces are similar to those of
standard equalizers. Linear-phase equalizers are less common; however, Studio One’s Pro EQ2 includes a
linear-phase EQ stage, and you may encounter this option with third-party plug-ins.

Linear-Phase Basics
Traditional EQ introduces phase shifts when you boost or cut. With multiple EQ stages, these phase
differences can produce subtle cancellations or reinforcements at particular frequencies. This may or
may not create an effect called smearing, which can be subtle or obvious. However, it’s important to
note that these phase shifts give particular EQs their “character” and therefore, may be desirable.

Linear-phase EQ technology delays the signal where appropriate, so that all bands are in phase with
each other. If you compare a massive treble boost with non-linear and linear-phase modes, you’ll likely
find the linear-phase version more “transparent” and neutral. Also, for a given amount of gain, linear-
phase mode may sound like it has less of an effect. Conversely, the phase shifts in non-linear
equalization may provide a “character” that produces something like a slight, but noticeable, upper
midrange emphasis.

The Pro EQ2 includes a linear-phase low-cut EQ stage that offers three cutoff frequencies and two
different slopes (Fig. 7.11). You might wonder why there’s only one linear-phase stage, with a low-cut
response, but there’s a good reason for this. Many engineers like to remove unneeded low frequencies
for utilitarian purposes (e.g., remove p-pops or handling noise from vocals), or for artistic reasons, like
reducing lows on an amp sim cab to emulate more of an open-back cab sound. Standard EQ introduces
phase changes above the cutoff frequency; with linear-phase EQ, there are no phase issues. This can be
particularly important with doubled audio sources, where you don’t want phase differences between
them due to slightly different EQ settings.

Page 108
Figure 7.11 The linear-phase Low Cut stage can start its low-frequency cutoff at 20, 50, or 80 Hz, with a choice
of two slopes (soft and hard).

The Pro EQ2 is very efficient, but note that enabling linear-phase EQ requires far more CPU power, and
adds considerable latency—it’s not something you’ll want to add to every track. Fortunately, in many
cases, it’s a setting that you apply and don’t think about anymore. This makes it a good candidate for
“Transform to Rendered Audio” so you can reclaim that CPU power, and then use conventional EQ
going forward.

Limitations
Linear-phase equalizers aren’t perfect. Because they delay the signal, it’s necessary to delay tracks that
don’t include linear-phase processing by an equal amount of time to keep the tracks in sync. This
increases latency (delay) through the system. Unlike recording, latency when mixing isn’t much of a
problem. Hearing a drum hit 30 ms later after hitting it while tracking can be annoying, but you won’t
notice a 30 ms delay between moving a fader and hearing a change.

Also, phase-linear technology exhibits a phenomenon called pre-ringing. This adds a low-level,
“swooshing” artifact before audio transients. Normally pre-ringing isn’t an issue, because it’s audible
mostly at relatively low frequencies, with high gain and Q settings. However, with hip-hop, EDM, and
other bass-heavy musical genres, you may want high gain/high Q settings at low frequencies on some
tracks. For those applications, non-linear EQ can be a better choice.

Page 109
Conversely, non-linear EQs can have post-ringing after transients. This tends not to be noticeable
because it’s masked by the audio that follows a transient, but it exists.

Fig. 7.12 shows kick drum pre-ringing with high-gain, narrow-width settings. This screen shot is
zoomed way in to magnify the waveform, and reveal the low-level pre-ringing (zooming so far in is
why the kick appears clipped, even though it isn’t).

Figure 7.12 Pre-ringing with a kick drum processed by linear-phase EQ.

The blue waveform shows a kick with a boost at 100 Hz, 10 dB gain, and a Q of 10. The pre-ringing is
visually obvious. The yellow is the same kick, with 5 dB gain, and a Q of 5. Note a tiny bit of pre-
ringing just before the attack. The green waveform again has a gain of 10 dB and Q of 10, but uses
non-linear equalization so there’s no pre-ringing.

In most applications, pre-ringing will not be a problem. If it is, use a non-linear mode instead.

Minimizing Latency
Because phase-linear operation uses a fair amount of CPU power and can cause latency, there may be
low-, medium-, and high-quality options for CPU consumption. Higher-quality options aren’t necessary
during a mix’s initial stages. You can use the low setting for the “snappiest” response, then switch over
to the high setting when doing the final mix.

Tip: Before getting too concerned about plug-in latency, make sure the cause isn’t a misadjusted audio
interface parameter.

Page 110
Mid-Side Processing with Equalization
Mid-side processing encodes a stereo track into two separate components:

• The center becomes the mid component in the left channel.
• The difference between the right and left channels (i.e., what the two channels don’t have in
common) becomes the side component in the right channel.

You can process these components separately, with eventual decoding back into stereo. In the process,
you can almost remix a mixed stereo file. Need more kick and bass? Boost the low frequencies in the
center, and leave the sides alone. Or add a little treble to the sides, to widen the apparent stereo image.
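Under the hood, mid-side is just sum-and-difference math. Here’s a minimal sketch of one common convention; the 0.5 scaling is an assumption for illustration, not necessarily how Mixtool scales its MS Transform.

def ms_encode(left, right):
    """Encode a stereo sample pair into mid (sum) and side (difference) components."""
    return 0.5 * (left + right), 0.5 * (left - right)

def ms_decode(mid, side):
    """Decode mid/side back into left and right."""
    return mid + side, mid - side

# A centered signal is all mid with zero side; a hard-panned signal has equal mid and side.
for left, right in ((0.7, 0.7), (1.0, 0.0), (0.4, -0.4)):
    mid, side = ms_encode(left, right)
    l2, r2 = ms_decode(mid, side)
    print(f"L={left:+.1f} R={right:+.1f} -> M={mid:+.2f} S={side:+.2f} -> decoded L={l2:+.2f} R={r2:+.2f}")

Boosting the mid path therefore raises whatever the two channels share (bass, kick, lead vocal), while boosting the side path raises only what differs between the channels.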

In Studio One, the key to M-S processing is the Mixtool plug-in, and its MS Transform button. The
easiest way to get started with M-S processing is with the MS-Transform FX Chain (Fig. 7.13), found
in the Browser’s FX Chains Mixing folder.

Figure 7.13 The MS-Transform FX Chain included with Studio One.

The upper Mixtool in Fig. 7.13 encodes the signal so that the left channel contains a stereo file’s center
component, and the right channel contains the stereo file’s side components. This stereo signal goes to
the Splitter, which separates the channels into side and center paths. These then feed into the lower
Mixtool, which decodes the M-S signal back into stereo. (The Limiter2 isn’t an essential part of this
process, but is added for convenience.)

Even this simple implementation is useful. Turn up the post-Splitter gain slider in the Center path to
boost the bass, kick, vocals, and other center components. Or, turn up the gain slider in the post-Splitter
Side path to emphasize the sides.

Page 111
Fig. 7.14 shows a somewhat more developed FX Chain, where a Pro EQ2 boosts the highs on the sides.
Boosting the highs adds a sense of air, which enhances the stereo image because highs are more
directional.

Figure 7.14 This image-enhancing FX Chain takes advantage of mid-side processing.

In addition to decoding the signal back to stereo, the second Mixtool’s Output Gain control is brought
out to a Macro control, which compensates for any level differences when the FX Chain is
bypassed or enabled. Also, the button in the lower right can disable the MS Decoder to prevent converting
the signal back into stereo, which makes it easy to hear what’s happening in the center and sides.

The following common applications illustrate more ways to use mid-side processing.

Page 112
• Add a second EQ in the center channel.
• Insert a compressor in the center to process the kick/snare/vocals, while leaving the sides alone.
• Add reverb to the sides but not the center, to avoid muddying any center-centric bass.
• Drums with lots of room ambiance can benefit from some upper mids in the sides, and lower
mids in the center to accent the kick.
• If a synth bass’s wide image interferes with other instruments, lower the bass in the sides.
• Mid-side processing can shape reverb. For the mid processing, select a highpass EQ curve, and
set the frequency so high it takes out essentially everything. This removes most of the reverb
from the center, where it could muddy the bass and kick. Then shape the remaining reverb with
the side EQ.
• Try adding some short delays to the sides to give more of a room sound.

Mid-side processing can enhance a mix, and it’s particularly effective with both EQ and dynamics
(we’ll cover dynamics in the next chapter).

The Pros—and Pitfalls—of Presets


EQ presets provide a starting point that may save time when mixing. Certain instruments favor
particular EQ settings, even in different songs. When an EQ setting works well, save it. You can also
save presets with more “generic” curves (e.g., brighter, more bass, etc.), which you edit for a specific
application (Fig. 7.15). However, every project is different. An ideal acoustic guitar preset for solo
guitar may not work when the acoustic guitar is part of an ensemble.

Figure 7.15 This generic drum EQ preset can serve as a point of departure, but you’ll likely need to tweak it.

Page 113
Tip: If you ever have to re-install Studio One because of a hard drive crash, remember that custom presets are not part
of the base program. When backing up your data, locate the folder where presets are saved, and back it
up. Also back up any configuration files and preferences that aren’t saved with projects.

As you become proficient with recording, it may take less time to create the settings you want from
scratch, than to tweak existing settings. Presets are useful, but relying on them can lead to creative ruts.

EQ Applications
Above all, adjust equalization with your ears, not your eyes. Once, after I finished a mix, the client started
writing down the EQ settings I’d made. He liked the EQ, and wanted to use the same settings in future
mixes.

That’s risky—EQ is part of the mixing process. Just as levels, panning, and reverb are different for
each mix, so is choosing the correct EQ type and amount. Doing that requires finding the “magic”
frequencies for diverse musical material, and deciding which equalizer tool is most suitable.

There are four main EQ applications. Each one requires specialized techniques:

 Solve Problems
 Emphasize or de-emphasize an instrument in a mix
 Alter a sound’s personality
 Change the stereo image

Equalization is powerful, so I have a recommendation:

When you make a change that sounds good, cut the amount in half.

In other words, if boosting a signal by 2 dB at 4 kHz seems to improve a track’s sound, pull back the
boost to 1 dB. Let your ears acclimate to the change before deciding you need any more EQ.

Solve Problems
The following real-world examples show how EQ can solve common mixing-related problems.

Remove Subsonic Audio

Analog gear rarely had response in the subsonic range (below the range of human hearing). However,
digital technology can create and reproduce subsonic signals, which take up dynamic range even if you
can’t hear them. Some modern audio interfaces have frequency responses that go down to 0 Hz.

To minimize subsonic audio in a Channel, insert a steep, Low Cut filter at a low frequency, like 20 to
40 Hz (Fig. 7.16). Although the Pro EQ2’s 48 dB/octave slope is steep, you can make it steeper by
adding more stages.

Page 114
Figure 7.16 Combining four EQ stages (LC, LF, LMF, and MF) creates a super-steep cutoff below 100 Hz.

Some engineers like to attenuate all low-frequency energy below an instrument’s lowest note. For
example, a guitar note doesn’t go much below 90 Hz so theoretically, you can set a sharp, low-
frequency cutoff starting at around 60 Hz. This may tighten up the sound by removing low-frequency
sounds that have nothing to do with guitar. Other engineers feel this is unnecessary, and removes parts
of the audio people may not hear, but can feel. Listen, and decide for yourself.

Tame Resonances

Nylon-string classical guitars often project well on stage, thanks to a strong body resonance in the
lower midrange. However, recording has different requirements than playing live. Setting levels so the
peaky, low frequency notes don’t distort can cause the higher guitar notes to sound weak by
comparison. Steel-string guitars can also have resonance issues, particularly if the output comes from a
piezo pickup.

Dynamic range compression or limiting is one possible solution, but it alters the guitar’s attack. A more
natural-sounding option is using boost/cut parametric EQ to apply a frequency cut opposite to the
natural boost, thus leveling out the response. Here’s the procedure for finding problem frequencies,
which you can then alter:

1. Turn down the monitor volume—the sound might get nasty during the following steps.
2. Set the EQ for lots of boost (10-12 dB) and fairly narrow Q (around 10).
3. As the instrument plays, sweep the frequency control slowly. Any peaks will jump out due to the
boosting and narrow bandwidth. Some peaks may even distort.

Page 115
4. Find the loudest peak, and then cut the EQ until the peak falls into balance with the rest of the
instrument’s notes. Widen the bandwidth somewhat for broad peaks, or narrow it for sharp resonances.

This technique of “boost/find the peak/cut” can help eliminate midrange “honking,” reduce strident
peaks in wind instruments, tame amp sim resonances, and more. Although you want to preserve an
instrument’s characteristic resonances, applying EQ to reduce peaks often lets instruments sit more
gracefully in the track.

Besides, sometimes it’s better to cut than boost. Boosting uses up headroom; cutting opens up
headroom. With the nylon-string classical guitar, cutting the peak allowed bringing up the average
guitar level, thus producing a louder sound without any dynamics processing.

Make Amp Sims Sound Warmer

Post-amp EQ can create a sweeter amp sim sound. Physical amps have cabinets with limited frequency
responses, so adding a lowpass filter set to a frequency above 5 kHz or so, with as steep a slope as
possible (e.g., 48 dB/octave), can often warm up the sound. Sometimes restricting highs (above 2 kHz
or so) and lows (below 100 Hz) prior to an amp sim can also sweeten the sound.

Amp sims sometimes exhibit a resonant, unpleasant whistling tonality. The frequency varies, depending
on the cabinet being modeled. For a smoother, less harsh tone, follow the sim with a steep notch filter
tuned to this resonant frequency (in Fig. 7.17, the HMF stage provides this function).

Figure 7.17 Creating a steep notch around 3.2 kHz reduces the whistling resonance generated by a particular
amp sim model. The white line in the graph indicates the composite curve of the four filter stages.

Page 116
Minimize Vocal Pops

With vocals, a directional mic (e.g., cardioid response) accentuates bass as the vocalist moves closer to
the mic. This becomes a problem when plosive sounds like “b” or “p” produce a massive pop. Ideally,
using a pop filter minimized these issues while recording. If not, then create an ultra-steep cutoff
frequency to remove the low frequencies. Make sure it’s below the vocal range to avoid thinning out
the voice. Start with four sharp notches (Fig. 7.18), all with a Q of 12 and a gain of -24 dB, at 130, 140,
150, and 160 Hz.

Figure 7.18 This virtual pop filter stacks six filter stages to create a response that drops dramatically below
around 200 Hz.

Next, reduce the low frequencies further. Enable the Low Cut filter; a 48 dB/octave slope, with a
frequency around 90 Hz, seems about right (raising the frequency too high makes the overall slope less
steep). Then add a Low Frequency Shelf stage, with -24 dB gain at 120 Hz. This attenuates the
remaining low frequencies, while maintaining the insanely steep slope. If you don’t mind a little extra
hit to the CPU, add the LLC low-cut filter as well (although it won’t make that much difference).

The Low Frequency Shelf has an optional attribute. Increasing the Q adds a low-frequency “bump”
(boost) that’s just above the cutoff frequency. This gives extra depth, and a deep vocal resonance that
still keeps pops under control.

Page 117
Naturally, every voice and mic is different, so the parameters may need editing. To preserve more bass,
try dropping the frequency of all filter stages by 10 Hz. Conversely, if the pops are still too prominent,
raise the frequency of all filter stages by 10 Hz until you find the desired response.

Reduce Muddiness

Some recordings sound muddy—they lack high-frequency and low-frequency definition. Instead of
boosting the highs and lows, try adding a shallow cut in the lower midrange (200-300 Hz—see Fig.
7.19). This tightens up the high and low frequencies.

Figure 7.19 A slight, broad lower midrange cut can reduce excessively muddy qualities.

Create “Virtual Mics” with EQ

Here’s a good example of a technique that can be done while mixing to compensate for tracking issues.
I sometimes record acoustic rhythm guitars with one mic for two main reasons: no issues with phase
cancellations among multiple mics, and faster setup time. If the rhythm guitar part sits in the
background, electronic delay and reverb-based ambiance can produce a somewhat bigger sound.
However, on an album project with the late classical guitarist Linda Cohen, the solo guitar needed to be
up front. A stereo image was essential.

Rather than experiment with multiple mics, I recorded the most accurate sound possible from a high-quality condenser mic. This was successful, in the sense that the sound was virtually identical whether heard in the control room or in the studio. Upon starting the mix, though, the sound lacked realism.

To provide the aural cues from an acoustic guitar, consider that when facing a guitarist, your right ear
hears the finger squeaks and string noise from the guitarist’s fretting hand. Meanwhile, your left ear
picks up some of the body’s “bass boom.” Although not as directional as the high-frequency finger
noise, it still shifts the lower part of the frequency spectrum somewhat to the left. The main guitar sound fills the room, and provides the acoustic equivalent of a center channel.

The original guitar track had a -6 dB cut at 225 Hz, where the guitar exhibited a strong resonant peak.
Sending the guitar track into two additional Buses solved the imaging problem by giving one Bus a
drastic treble cut and panning it toward the left. The other Bus had a drastic bass cut, and was panned
toward the right (Fig. 7.20).

Figure 7.20 The main track (on the left) splits into three pre-fader Buses, each with its own EQ.

One Send control goes to Bus 1, which is panned toward the left. Set its High Cut EQ to around 400
Hz, with a 24 dB/octave slope to focus on the guitar body’s “boom.” Another Send goes to Bus 2,
panned toward the right, to emphasize finger noises and high frequencies. Set its Low Cut EQ to a
24 dB/octave slope and cutoff frequency around 1 kHz. The stereo image emulates facing the guitarist.

The Send to Bus 3 goes to the main guitar Bus. Offset its highpass and lowpass filters about an octave
from the other two Buses, e.g., 150 Hz for the highpass and 2.4 kHz for the lowpass (Fig. 7.21).

Figure 7.21 The lower curve isolates low frequencies, the middle curve isolates high frequencies, and the upper
curve trims the main guitar sound’s response. For visual clarity, irrelevant EQ controls are grayed out.

Balance the low- and high-frequency Buses. Then bring up the third Send’s level, with its pan centered.
The result should be a big guitar sound with a stereo image...but we’re not done yet.

The balance of the three Buses is crucial to obtaining the most realistic sound, as is experimenting with
the EQ frequencies. Consider reducing the frequency range of the main guitar sound Bus. If the image
is too wide, pan the low- and high-frequency Buses more to center. For a reality check, monitor the
output in mono as well as stereo.
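To make the signal flow concrete, here’s a Python/SciPy sketch of the same three-bus idea applied to a mono guitar recording. The 48 kHz sample rate, the pan positions, and the Butterworth filters standing in for the EQ’s high- and low-cut stages are all assumptions; the crossover frequencies match the values suggested above.

import numpy as np
from scipy import signal

fs = 48000  # assumed sample rate

def pan(x, position):
    """Constant-power pan; position ranges from -1 (left) to +1 (right)."""
    angle = (position + 1) * np.pi / 4
    return np.column_stack([np.cos(angle) * x, np.sin(angle) * x])

def virtual_mics(guitar, low_gain=1.0, high_gain=1.0, main_gain=1.0):
    # Bus 1: guitar-body "boom" -- high cut (lowpass) around 400 Hz, 24 dB/octave, panned left
    low_sos = signal.butter(4, 400, btype="lowpass", fs=fs, output="sos")
    low_bus = pan(signal.sosfilt(low_sos, guitar) * low_gain, -0.7)
    # Bus 2: finger noise and string detail -- low cut (highpass) around 1 kHz, panned right
    high_sos = signal.butter(4, 1000, btype="highpass", fs=fs, output="sos")
    high_bus = pan(signal.sosfilt(high_sos, guitar) * high_gain, 0.7)
    # Bus 3: main guitar sound, band-limited roughly an octave inside the other two, centered
    main_sos = signal.butter(4, [150, 2400], btype="bandpass", fs=fs, output="sos")
    main_bus = pan(signal.sosfilt(main_sos, guitar) * main_gain, 0.0)
    return low_bus + high_bus + main_bus  # stereo (N, 2) array

Because all three buses are filtered copies of the same take, there are none of the phase cancellations you’d risk with spaced mic pairs, which is why the mono reality check holds up.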

Once you nail the right settings, you may be taken aback to hear the sound of a stereo acoustic guitar
with no phase issues. The sound is stronger, more consistent, and the stereo image is solid.

Emphasize Instruments
Our ears are most sensitive around 3-4 kHz, so you can emphasize an instrument by boosting a bit in
this range (especially with vocals). However, if you do this with multiple tracks so that they all jump
out of the mix, the result will sound harsh. Reserve this technique for a limited number of tracks that
are crucial to the mix.

If an electric or acoustic bass track isn’t loud enough, increasing the level will raise the bass’s low end,
but also emphasize higher bass frequencies that may conflict with other instruments. It may be better to
boost the lowest bass frequencies with EQ instead of increasing the overall level.

The same technique mentioned previously of finding and cutting specific frequencies can also
eliminate interference among competing instruments. For example, when mixing a Spencer Brewer
track for Narada Records, two woodwind parts had resonant peaks around the same frequency. When
playing together, they would load up that part of the frequency spectrum, which made them difficult to
differentiate. Here’s a workaround:

1. Find, then reduce, the peak on one of the instruments (as described above with the nylon-string
guitar example) to create a more even sound.
2. Note the amount of cut and bandwidth that was applied to reduce the peak.
3. Using a second stage of EQ, apply a roughly equal and opposite boost at either a slightly higher or
slightly lower frequency than the natural peak.

Both instruments will now sound more distinct, because each has a peak that’s now located in a
different part of the audio spectrum.
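Here’s a small Python/SciPy sketch of steps 1 through 3, using standard “Audio EQ Cookbook” peaking biquads. The 1.5 kHz peak, 400 Hz offset, 4 dB amount, and 48 kHz sample rate are placeholder values for illustration, not settings from the Spencer Brewer mix.

import numpy as np
from scipy import signal

fs = 48000  # assumed sample rate

def peaking(f0, gain_db, q):
    """RBJ peaking biquad: positive gain_db boosts, negative gain_db cuts."""
    a = 10 ** (gain_db / 40.0)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * a, -2 * np.cos(w0), 1 - alpha * a])
    den = np.array([1 + alpha / a, -2 * np.cos(w0), 1 - alpha / a])
    return b / den[0], den / den[0]

def shift_resonance(x, natural_peak_hz=1500.0, offset_hz=400.0, amount_db=4.0, q=4.0):
    # Steps 1-2: cut the instrument's natural resonant peak
    b, a = peaking(natural_peak_hz, -amount_db, q)
    x = signal.lfilter(b, a, x)
    # Step 3: apply a roughly equal and opposite boost at a nearby frequency
    b, a = peaking(natural_peak_hz + offset_hz, amount_db, q)
    return signal.lfilter(b, a, x)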

Create New Sonic Personalities


EQ can also change a sound’s character. For example, it can give a brash rock piano a more classical character by reducing the brightness and boosting the low end. This type of application requires relatively gentle
tone-shaping EQ.

Let’s revisit the five main frequency ranges we briefly covered in Chapter 2, but this time, we’ll
correlate these terms to positive attributes when equalized correctly, and negative when not—but note
that this is a very subjective topic.

 Bass (below 200 Hz): Bottom, deep. Negative: Bottom-heavy, muffled.
 Lower midrange (200 to 500 Hz): Warm, fat. Negative: Muddy, dull.
 Midrange (500 Hz to 2.5 kHz): Defined, forward. Negative: Honking, thin.
 Upper midrange (2.5 kHz to 5 kHz): Present, articulate. Negative: Screechy, harsh.
 Treble (5 kHz and higher): Bright, airy. Negative: Annoying, fizzy.

For example, to add warmth, apply a gentle boost (3 dB or so) somewhere in the 200-500 Hz range.
However, if a sound is muddy, then try a gentle cut in the same range.

Additional Equalization Tips


 Apply problem-solving and character-altering EQ early in the mixing process. EQ changes
levels in specific frequency ranges, which alters the overall instrumental balance and influences
how the mix develops. For example, strummed acoustic guitars cover a lot of bandwidth—even
at relatively low levels, their sound can take over a mix. So, an engineer might accent the high
end, but turn the overall guitar level down. This retains the percussive propulsion, but de-
emphasizes the midrange.
 Instruments EQ’ed in isolation to sound great may not sound great when combined with
other instruments. If every track is equalized to leap out at you, there’s no room left for the
music to “breathe.” EQ some instruments to take on more supportive roles.
 Shelving filter responses are good for a general, gentle lift in the bass and/or treble
regions. They can also tame overly prominent bass and/or treble regions.

Key Takeaways
 There are many different equalizer responses. Learn them so you can use the correct response to
create the desired result.
 When mixing, EQ makes it possible for each instrument to carve out its own part of the audio
spectrum.
 Equalization changes levels, albeit in specific frequency ranges. As you mix, you may need to revisit
levels when you change EQ settings.
 Although the spectrum analyzers included with some EQs can be helpful in locating “problem”
frequencies, your ears are the final arbiters of what sounds correct.
 Linear-phase equalization can be more “surgical” and precise than standard EQ, but requires
extra CPU power that can result in more latency.
 Emulating conventional analog EQ adds a certain character to the sound. Some emulate the
gentler curves of passive EQ, while others produce sharper responses. The character can vary
considerably from one EQ to another.

Chapter 8 | Dynamics Processing

Now that the tone is under control, let’s delve into dynamics.

We’ll assume you have a basic understanding of how dynamics processors work, so we’ll concentrate
on applications. To learn more about dynamics processing, please check out More than Compressors:
The Complete Guide to Dynamics in Studio One, 2nd Edition. This 213-page eBook covers all Studio
One dynamics processors in detail.

Dynamic range is the difference between a recording’s loudest and softest sections. Live music’s wide
dynamic range was impossible to capture on tape or vinyl. So, restricting the dynamic range—soft
enough not to overload the tape, but loud enough to rise above the tape hiss—was essential.

24-bit digital audio means the medium’s dynamic range is no longer a significant issue. With high-
quality input and output electronics, we can record very high and low levels. Nonetheless, dynamics
processing remains an important part of mixing because if used correctly, it can make sounds stand out
more in a mix, and provide more overall “punch” to a production.

Dynamics processing can also help players who lack a good touch (the ability to play an instrument
with controlled nuances). For example, a singer with good mic technique moves closer or further away
from the mic to keep relatively constant levels. An inexperienced singer with less developed technique
might create level variations that, unlike level changes designed to enhance the music, could cause an
uneven sound. Dynamics processing can help compensate.

Bassists often use compression because the ear is less sensitive in the bass range, so subtle dynamics
tend to get lost. Evening out the bass’s dynamic range provides a fuller low end, which makes it easier
to hear the bass part.

Restricting dynamic range to make soft parts louder can also bring music above the background noise
of everyday life, like road noise when listening in cars. Commercials use dynamics control to increase
the perceived loudness as much as possible. Movie soundtracks alter dynamics to keep music from
competing too much with dialog, and broadcasting controls dynamics to avoid overmodulating
transmitters.

How much dynamics processing to use is controversial. Many listeners think “louder is better,” so pop
music recordings often have a super-compressed dynamic range so they can sound “loud.” DJs prefer
compressed dance music, because minimizing level variations transfers more control to the DJ mixer’s
faders. Classical and jazz recordings use little or no compression.

There are several types of dynamics control. Here’s how they work.

Manual Gain-Riding
Before automation (Chapter 12), engineers adjusted the gain manually while mixing, and sometimes
even when recording—turning down on peaks, turning up during quiet parts—to restrict dynamic
range. It’s difficult to do this quickly and consistently. However, riding gain is now viable when
mixing, because automation allows fixing any mistakes.

Level-Riding Plug-Ins
Plug-ins like Waves’ Vocal Rider (Fig. 8.1) and Bass Rider do automatic gain-riding. Vocal or bass
remains at a target level you specify.

Figure 8.1 Waves Vocal Rider provides automated gain-riding for vocals.

These automatic level-riding plug-ins also write automation that reflects the changes they make, so you
can edit the automation manually if needed. Also, many of these plug-ins offer a sidechain input (see
Chapter 9), to control the track level based on the mix’s overall level—louder in louder passages, and
softer in quieter passages.

Unlike other dynamics processors, automatic gain-riding programs don’t alter the fidelity, only the
overall level. The effect on the signal is the same as moving a fader.

For applications where a consistent voice level is crucial—like narration or audiobooks—vocal gain-
riding plug-ins are extremely helpful. For music, although these plug-ins make technical decisions
rather than artistic ones, they work well and can save time. Besides, you can edit the automation if you
sometimes disagree with the software’s decisions.
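Conceptually, a level-riding plug-in measures the short-term level and nudges a gain “fader” toward a target. The Python sketch below shows that idea in its simplest form; it is not Waves’ algorithm, and the target level, analysis window, and smoothing time are arbitrary assumptions.

import numpy as np

def ride_gain(x, fs, target_dbfs=-18.0, window_s=0.3, smoothing_s=0.5, max_gain_db=12.0):
    """Naive gain rider: measure short-term RMS, then ease the gain toward a target level."""
    window = max(1, int(window_s * fs))
    # Short-term RMS via a moving average of the squared signal
    rms = np.sqrt(np.convolve(x ** 2, np.ones(window) / window, mode="same")) + 1e-12
    target = 10 ** (target_dbfs / 20.0)
    desired_gain = np.clip(target / rms, 10 ** (-max_gain_db / 20), 10 ** (max_gain_db / 20))
    # One-pole smoothing so the gain moves like a slow fader, not a compressor
    alpha = np.exp(-1.0 / (smoothing_s * fs))
    gain = np.empty_like(desired_gain)
    g = desired_gain[0]
    for i, d in enumerate(desired_gain):
        g = alpha * g + (1 - alpha) * d
        gain[i] = g
    return x * gain, gain  # the gain array is, in effect, the written automation

The returned gain array corresponds to the automation such a plug-in would write, so you could inspect or edit it afterward.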

Normalization
Normalization (keyboard shortcut Alt+N) is a basic type of dynamics control. This process calculates
the difference between a recording’s highest peak and the maximum available headroom, then
amplifies the recording so that its highest peak reaches the maximum level possible short of distortion
(Fig. 8.2).

Figure 8.2 The signal on the left hasn’t been normalized, and its highest peak reaches around -4 dB. The copy
on the right has been normalized; its maximum level reaches 0 dB.

Some programs allow normalizing to different maximum levels. However, Studio One’s audio Event
Level Envelopes can change the level of audio Events, so you can do the equivalent of “normalization”
to a value other than maximum; it’s also possible to create a Macro that normalizes to a specific level.
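Under the hood, normalization is just a gain calculation. Here’s a minimal NumPy sketch; the second call shows normalizing to a level other than maximum, similar to what an Event Level Envelope or Macro could accomplish.

import numpy as np

def normalize(x, target_dbfs=0.0):
    """Amplify so the highest peak reaches the target level (0 dBFS by default)."""
    peak = np.max(np.abs(x))
    if peak == 0:
        return x  # silence: nothing to normalize
    gain = 10 ** (target_dbfs / 20.0) / peak
    return x * gain

# Example: a clip peaking at -4 dBFS gains about 4 dB; its dynamics are otherwise unchanged
clip = 10 ** (-4 / 20) * np.sin(2 * np.pi * 440 * np.arange(48000) / 48000)
louder = normalize(clip)           # peak now 1.0 (0 dBFS)
quieter = normalize(clip, -6.0)    # the "normalize to a value other than maximum" case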

Compressor
The Compressor evens out dynamic range variations by progressively attenuating signals that exceed a
specified level (the threshold). This causes the output level to increase at a slower rate than the input
level, thus reducing the difference between soft and loud levels. For example, increasing the input level
by 4 dB might yield only a 1 dB increase in output level.

Because compression lowers a signal’s peaks, you can raise the lower-level signals. For example,
suppose some peaks reach 0, and compression reduces them by 5 dB. The loudest peaks now reach
-5 dB. So, you can turn up the signal’s overall level by +5 dB, and the peaks will once again hit 0.
You’ve been able to add +5 dB of gain to the overall signal level simply by reducing the peaks.
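Ignoring attack and release for a moment, the math above reduces to a simple static curve. This Python sketch shows both of the examples just mentioned; the threshold and ratio values are arbitrary choices for illustration.

def compressed_level(in_db, threshold_db, ratio, makeup_db=0.0):
    """Static compression curve (hard knee): output level in dB for a given input level in dB."""
    if in_db <= threshold_db:
        return in_db + makeup_db                      # below the threshold: level passes unchanged
    return threshold_db + (in_db - threshold_db) / ratio + makeup_db

# 4:1 ratio: raising the input by 4 dB above the threshold raises the output by only 1 dB
print(compressed_level(-16, threshold_db=-20, ratio=4))   # -19.0

# 2:1 ratio with a -10 dB threshold: peaks that hit 0 dB now hit -5 dB,
# so 5 dB of makeup gain brings them back to 0 while everything below gets louder
print(compressed_level(0, threshold_db=-10, ratio=2, makeup_db=5))   # 0.0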

Compression can alter dynamics in many ways. Fig. 8.3 shows how a compressor can create radically
different guitar sustain characteristics with different parameter values.

Figure 8.3 (1) shows an uncompressed, decaying guitar chord. (2) shows moderate compression on the
sustain, but leaves the initial attack level intact. (3) uses the same settings but compresses the entire signal,
including the attack. This allows raising the overall level considerably. (4) uses extreme amounts of compression
to act as a sustainer for guitar.

How to Adjust the Parameters


This is a subjective topic, but here’s my approach. Personal bias alert: I’m not a big fan of
compression, so I prefer to use DSP-based techniques (like gain changes and normalization) to achieve
consistent levels. This allows using a minimal amount of compression, which reduces the potential for
artifacts.

The compressor’s effect depends on the input level, because that determines how often the signal
exceeds the threshold. Sometimes I normalize a track so that it feeds a consistent level to the
compressor. Otherwise, you can compensate by turning up the compressor’s Input Gain control. Enable
Look Ahead so that the compressor’s attack happens when needed, not slightly afterward.

1. To start, unless the compressor uses the Input control to set the degree of compression (e.g., the Fat
Channel’s Tube Comp), you’ll usually set the input for unity gain. Set the Ratio to 1.5:1.
2. Reduce the Threshold control slowly, while observing the gain reduction meter. The greater the
amount of gain reduction, the more you’ll hear the effects of compression.
3. The Threshold and Ratio controls interact, so adjusting one may require tweaking the other. Lower
threshold values increase the overall compression amount, while higher ratios make the compression
effect more pronounced. Over time, you’ll learn which one requires editing for the desired result.
4. With lots of gain reduction, the compression will sound more like an effect than a transparent
change in dynamics. Reducing the Ratio and lowering the Threshold, or increasing the Ratio and
raising the Threshold, can often increase transparency while giving the same perceived amount of
compression. A more gradual Knee can help, because rather than switching instantly at the threshold
from no compression to the full compression ratio, the knee parameter rounds off the compression
curve.
5. Choose Auto for the attack and release times, and evaluate the results. If they’re not satisfactory,
turn off Auto. Increase the attack time to retain more of the natural dynamics (although you may need
to lower the output level so the peaks don’t distort subsequent stages), or shorten it to clamp down
harder on the initial peaks. Adjust the Release control so it’s long enough to avoid a choppy sound, but
short enough to avoid objectionable volume variations during the release. Typical release times for
program material are in the 100-300 ms range. (After finding the desired settings, enabling the
Adaptive option may allow the Compressor to follow level variations more effectively, because it
varies times automatically around the user-set amount.)
6. Increase the output Gain to compensate for any level loss caused by introducing compression.
Enabling Auto can adjust the gain amount automatically to compensate for volume drops.

Tip: If the compression action sounds too drastic or obvious, there are three interacting solutions:
change the Knee to a gentler curve, reduce the Ratio, or raise the Threshold.

The Gain Reduction Meter’s Importance


This provides a reality check on how much attenuation is occurring to maintain the desired output
level. You’ll learn quickly how much gain reduction sounds acceptable. 3-6 dB of reduction is typical
for moderate compression, while 10 dB or more is not uncommon for heavy compression.

Parameter Adjustment Tips


Like dynamics control in general, compression is not always transparent. Until you’ve trained your ears
to recognize subtle amounts of compression, observe the gain reduction meters to avoid
overcompressing, and use the bypass switch for perspective.

 Every track is different. If a preset works “out of the box,” that’s probably luck. Instead of
relying on presets, learn how compression works, decide what effect you want the compression
to accomplish, and then adjust the compression settings to produce that effect. Most of the time,
I start with a default preset and adjust settings from scratch. However, there are some situations
(like recording my vocals, at a particular level, using a particular mic, for a particular musical
genre) where a preset I’ve saved will be close to what’s needed. In that case, a preset can save
editing time.
 Start with a conservative gain reduction setting. Unless you’re using compression as an
effect, choose less than 6 dB of gain reduction. To reduce the amount of gain reduction, either
raise the Threshold control, or lower the Ratio.
 Vocals get along well with compression. Compression brings up low-level sounds, so vocals
can sound more expressive because you hear breaths and clearer articulations. Vocals often use
relatively high compression ratios and low thresholds, consistent with not sounding artificial.
 Consider parallel compression. The Mix control blends compressed and dry sound. Mixing in
some dry sound adds percussive peaks to the compressed, sustained sound.

 For a more natural sound, use lower compression ratios (1.5:1 to 3:1). Bass typically uses a
ratio of around 3:1, voice 2:1 to 10:1—but these are approximations, not rules. Also, sometimes
you don’t want a more natural sound. For example, to increase guitar sustain, try a ratio in the
4:1 to 8:1 range (or more).
 For the most transparent effect, insert two compressors in series. With low compression
ratios and relatively high thresholds, they’ll create a more transparent effect than a single
compressor set for the same amount of perceived gain reduction.
 Attack time settings matter. A minimum attack time clamps peaks almost instantly, which can
sound unnatural. For a signal with high average levels that never hits 0, a limiter is likely more
suitable. For most sounds, an attack time of 3-30 ms lets through some peaks for a more natural
effect, although you’ll need to lower the output level so the peaks don’t distort subsequent
stages. If distortion is a problem, follow the compressor with a limiter (see later in this chapter),
set for a high threshold, so that its only role is to catch transients.
 Release time is not as critical as attack time. Start with release in the 100-250 ms range.
Shorter times sound livelier, longer times sound smoother. However, too short a release time
can give a choppy effect, while too long a release may homogenize the sound, or lead to audible
volume variations.
 Reality checks are important. Toggle the bypass button frequently to compare the compressed
and non-compressed sounds. Match their peak levels closely for the most realistic comparison.
Even a little compression may give the desired effect.
 Compression placement. If possible, place compression early in any signal processing chain so
it doesn’t bring up noise from preceding effects.
 Compressors are not miracle workers. A compressor can’t compensate for dead strings, or
guitars with poor sustain characteristics.
 For guitar sustain, add compression before distortion. This gives a smoother sound, and
doesn’t bring up noise from the distortion.

Compressor Applications
Please remember that the following are points of departure—there are no rules.

Vocals

Gain reduction of 6 dB on peaks is common, but vocals sometimes benefit from heavy compression.
Make sure you can hear every word distinctly. Too high a ratio, or too low a threshold, can “smear” the
difference between louder and softer words, as well as sections within a word. If no combination of
ratio and threshold seems to work, consider inserting the Limiter2 prior to the compressor, to pre-
condition peaks before they reach the Compressor.

Setting a smooth knee works well with vocals (Fig. 8.4), because the compression action becomes
heavier at louder levels—which is where you want the most compression. However, if the vocalist
tends to “spike” on higher peaks, a sharper knee can control these.

Figure 8.4 Compressed vocals often benefit from a smoother knee.

If you’re not using Auto or Adaptive attack/release settings, add about 40 ms of attack time so that the
consonants at the start of words come through clearly. If the consonants seem too prominent, reduce the
attack time. Also, lower the attack if the singer hits peaks hard at the beginning of words.

Use a Release setting that “tracks” the dynamics. With a percussive vocal part, a shorter release tracks
the percussive quality. For more legato vocals with sustained passages, try a longer release.

Electric Bass

Heavy compression is common with bass to even out the volume levels, so you’ll likely use a high
Ratio setting with a relatively low Threshold (Fig. 8.5).

Figure 8.5 A variety of settings work well with electric bass. This one gives a more compressed bass sound.

To have the bass “snap” somewhat and hear compression as more of an effect, try a ratio of 7:1 to 10:1,
or use a lower Threshold with a hard Knee setting.

5 ms of Attack provides a smooth bass sound. However, if there’s a buzzy type of distortion on the
attack, increase this to 20 ms or more. For the bass to “pop,” use an even longer attack—start at 25 ms,
and increase it until the compressor lets through enough of the peak.

Guitar (Sustainer)

Heavy compression (e.g., 20:1 ratio and -35 dB threshold) adds sustain to single-note guitar solos. A
sharp knee maintains sustain as long as possible, while a short Attack gives a smoother sound. The
release time isn’t too critical, although this depends on your playing style; a relatively long time (450
ms) usually works best (Fig. 8.6). Due to the extreme compression, you’ll need plenty of makeup gain.

Figure 8.6 A sustainer is a compressor with an extremely high ratio and low threshold.

With a sustainer, you likely don’t want Auto or Adaptive attack/release times. The goal is an effect, not
the most natural sound. However, enabling Look Ahead helps tame the attack.

Guitar (Acoustic Rhythm)

An acoustic guitar can play many roles, from a fingerpicked, upfront solo guitar, to a constant,
churning, background rhythm that drives a song. Each role requires different compression settings, so
there’s no universal acoustic guitar preset.

Fig. 8.7 is for background strumming. The settings give a smooth, even sound that doesn’t overtake
other tracks. The ratio is fairly high to reduce the peaks; lower this for a more percussive sound. The
low threshold provides enough compression so that the peaks don’t conflict with other instruments.

Figure 8.7 This preset applies slightly aggressive compression to a rhythm guitar strumming in the background.

A medium Knee setting smooths the dynamics rather than simply grabbing the peaks. The Attack and
Release controls may need major adjustments; the example preset has enough attack time to prevent
neutering the initial peak, and enough release to track the instrument’s strumming. To pick up more of
the strumming, increase the Attack setting. For a smoother overall sound, increase Release.

The Auto setting works well with strummed acoustic guitar parts, but the Knee parameter takes on
increased importance for tailoring the sound.

Kick and Snare

Kick sounds best with an even sound that also retains punch. To have the compression take hold
rapidly, but not dilute the punch, start with 0 ms attack time. Then, increase Attack until you hear the
initial hit clearly. Because a kick decays rapidly, Release can be fast as well (Fig. 8.8).

Figure 8.8 This preset is a good starting point for kick (and snare) compression.

Compressors can also shape drum transients. Shorter attack times soften the attack. Between the Attack
and Ratio controls, you can tailor the kick drum’s attack and sustain characteristics, as well as even out
the overall sound. Assuming the attack time is sufficiently long so the attack occurs before the
compression kicks in:

 Raising the ratio increases the sustain.


 A higher threshold can emphasize the attack, by letting the decay occur naturally.
 Lowering the threshold reduces the level difference between the attack and decay.

Snare responds similarly to kick. A lower ratio (like 2:1 or 3:1) will give a fuller snare sound. As with
the kick, use the Attack control to dial in the desired attack characteristics.

With kick and snare, you’ll often want a hard knee. However, the knee control can fine-tune the attack.

Cymbals

As with kick and snare, compression can impact a cymbal’s attack and decay characteristics. For a
more aggressive sound, set the compressor’s controls for a longer Attack, lower Ratio, and higher
Threshold. Or, choose a more “liquid” sound (Fig. 8.9).

Figure 8.9 This compression preset creates a smooth, pillowy effect.

These settings leave the cymbal’s attack intact, but like a transient shaper, affect the decay separately
due to the relatively high ratio and low threshold. You’ll need at least a 150-200 ms release time to
preserve the decay. Too short a Release setting can give an annoying “double attack.”

Limiter2
A limiter is like a motor’s governor: Signals don’t exceed a user-settable threshold; signals below the
threshold remain untouched (Fig. 8.10). By setting the threshold to a level slightly below 0 (like -1 dB),
within reason a limiter can prevent distortion caused by going above 0.

Figure 8.10 The audio on the top hasn’t been limited. The copy below has been limited with about 4 dB of gain
reduction. Reducing the peaks allows raising the level; this gives a higher average level. However, note that
aside from the peaks being limited, the waveform hasn’t changed—other than having a higher level.

Limiter vs. Compressor


Compared to a limiter, a compressor provides more control over dynamics after a signal exceeds the
threshold. A compressor’s knee adjusts whether the audio has a faster or slower transition into a
compressed state, and a ratio sets the rate at which gain reduction occurs. A limiter allows no level
changes above the threshold, and retains existing dynamics below the threshold.
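The difference is easy to see as static curves. The Python sketch below compares the two; attack, release, and look-ahead are ignored, and the -6 dB compressor threshold and 4:1 ratio are arbitrary examples.

def limited_level(in_db, threshold_db=-1.0):
    """Static limiting curve: nothing exceeds the threshold; below it, dynamics are untouched."""
    return min(in_db, threshold_db)

def compressed_level(in_db, threshold_db, ratio):
    """For comparison: a compressor lets the level rise above the threshold, just more slowly."""
    if in_db <= threshold_db:
        return in_db
    return threshold_db + (in_db - threshold_db) / ratio

for peak_db in (-12, -6, 0, 3):
    print(peak_db, limited_level(peak_db), compressed_level(peak_db, -6, 4))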

Gain Reduction Metering


Gain reduction metering shows how much the level is being reduced to keep the signal under the
threshold. -6 dB of gain reduction usually produces sonically acceptable results, where the audio
doesn’t sound “limited.” You can sometimes stretch this to -10 dB (or more) if needed.

Limiting Parameters
Some limiters (especially older ones, like the LA-2A compressor/limiter emulated in the Fat Channel as
the Tube Comp) are easy to use: One control sets the amount of limiting, and another sets the output
level. But Studio One’s Limiter2 has four main controls—Gain, Ceiling, Threshold, and Release—and
the first three controls interact.

Figure 8.11 Studio One’s Limiter2 includes sophisticated metering, along with standard limiter parameters.

Often basic limiting is all you need—like shaving some peaks down by a few dB. To do this kind of
basic limiting:

1. Load the Limiter2’s default preset.


2. Turn down the Threshold for the desired limiting effect. Makeup gain occurs automatically to
maintain a constant output level.
3. Adjust the Release for the most natural-sounding limiting.

How to Adjust the Parameters


This assumes that you’re using the Limiter2 processor (which sets the amount of limiting with a
Threshold control), and not the Tube Comp in the Fat Channel (where you set the limiting amount by
adjusting the Input level).

Input Gain

Increasing the input signal level increases the amount of gain reduction, because as the signal level
increases, the higher levels exceed the threshold more often. The Input control also works in
conjunction with the Threshold control; reducing the threshold means that more of the input signal will
be above the threshold, effectively increasing the amount of limiting.

Another use for the Input control is with relatively low-level input signals. If they don’t exceed the
Threshold setting, turning up the Input control raises the signal so that it better matches the Limiter2’s Threshold setting.

Ceiling

There are two main ways to set the maximum output level:

• With the Threshold set to 0.00, set the maximum output level with the Ceiling control (from 0
to -12 dB).
• With the Ceiling set to 0.00, set the maximum output level with the Threshold control (from 0
to -12 dB).

It’s also possible to set maximum output levels below -12.00 dB. Turn either the Ceiling or Threshold
control all the way counter-clockwise to -12.00 dB, then turn down the other control to lower the
maximum output level. With both controls fully counter-clockwise, the maximum output level can be
as low as -24 dB.

Release (Decay)

Once a signal drops below the threshold, the Release control sets how long it takes for the limiting
action to slow down and stop. While it might seem you’d want limiting to stop immediately, this can
produce a choppy effect if the signal goes rapidly in and out of a limited state.

Increasing the release time gives a smoother sound; however, too long a release time means the
Limiter2 will react too slowly to level changes. Typical release times for program material are in the
100-300 ms range. Musically speaking, though, you’ll always want to adjust this control based on the
sound—not the time.

Tip: Short release times with drums can add a percussive, “breathing” effect.

Modes and Attack

There are two limiting Modes, A and B, and three Attack time settings. (The Limiter prior to Studio
One 5 had less flexible attack options, which mostly impacted how it responded to low-frequency
audio. The waveform could have some visible distortion when first clamped, but the distortion would
disappear after the attack time completed.)

In applications where you want to apply something like 6 dB of peak reduction to make a track or mix
“pop,” the Limiter2 performance in Mode A is essentially perfect. Unless you’re into extreme amounts
of limiting or material with lots of low frequencies, Mode A should cover what you need 95% of the
time (and it also outperforms the pre-Version 5 limiter).

If you’re using Limiter2 as a brickwall limiter to keep transients from spilling over into subsequent
stages, then use Mode A/Fast attack for the highest fidelity (the tradeoff is giving up a tiny bit of
headroom), or Mode B/Fast Attack for absolute clamping.

To Clip, or Not to Clip?
Limiter2 can supplement the limiting action with clipping. Enabling the Soft Clip button shaves off
(flattens) the waveform’s peak, causing mild distortion (Fig. 8.12). This resembles what happens with
tape when you send in more signal than the tape can handle. When clipping happens infrequently, you
likely won’t hear any distortion. But if the output is high and there’s clipping, you’ll hear a slightly
“beefier” sound.

Figure 8.12 The left screen shot shows the waveform with the input 6 dB above the Threshold, and Soft Clip off.
The right screen shows the same waveform and levels, but with Soft Clip turned on. Note how the waveform
peak is somewhat flatter.

While it might seem odd to add mild distortion intentionally, this can increase the average level by
more than with limiting alone. Perhaps surprisingly, soft clipping can create a more natural
sound with some audio source material.
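Conceptually, soft clipping just rounds off the top of the waveform instead of slicing it flat. The exact curve Limiter2 uses isn’t documented here, so this NumPy sketch substitutes a tanh curve to show the idea; the drive and ceiling values are arbitrary.

import numpy as np

def soft_clip(x, ceiling=1.0, drive=1.0):
    """Rounds off peaks instead of letting them slam into the ceiling (tanh curve as a stand-in)."""
    return ceiling * np.tanh(drive * x / ceiling)

def hard_clip(x, ceiling=1.0):
    return np.clip(x, -ceiling, ceiling)

t = np.linspace(0, 1, 48000)
hot_signal = 1.5 * np.sin(2 * np.pi * 220 * t)   # peaks about 3.5 dB over the ceiling
softened = soft_clip(hot_signal)                 # peaks flattened gently, mild added harmonics
squared_off = hard_clip(hot_signal)              # harsher result, for comparison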

If you’re happy with the sound, move on. However, it’s likely you’ll re-tweak some of the parameters
as the mix develops. When mixing, very few parameters are “set and forget.”

Limiter Applications
 Some feel limiting is a less sophisticated type of dynamics control than compression.
However, I often prefer limiting over compression.
 Limiting is often used with mastering to keep peaks under control.
 For mixed drums or a drum Bus, mild limiting can bring up the room sound without an overly
negative effect on the drum attacks.
 Similarly, limiting overhead mics that are mixed in with other drum mics will bring up the room
sound. Heavy limiting attenuates percussive peaks so they don’t compete with the main drum
sound’s peaks.
 With vocals, limiting the vocal’s peaks prior to compression allows more conservative (and
therefore, more transparent) compressor settings.

 When used with slightly detuned synth patches, limiting preserves the characteristic
flanging/chorusing-like sound, while keeping occasional peaks under control.
 Limiting is useful when following synth sounds with resonant filters, or virtual wah pedals.
Either of these can create peaks that are much higher than the average level.
 Limiting electric guitar before going into distortion keeps huge peaks out of the distortion effect
or amp simulator. Feeding in a more consistent signal level can lead to a smoother sound.

Tricomp
The Tricomp (Fig. 8.13) can add punch and greater perceived loudness to signals, and is relatively easy
to adjust. Although often thought of as a mastering processor, it’s also effective with individual tracks.
This type of processor is sometimes called a “maximizer.”

Figure 8.13 The Tricomp is an easy-to-use, multiband processor (lows, mids, and highs) that increases the
apparent level without exceeding peak levels. If set properly, it can sound less compressed than single-band
compressors.

The Tricomp splits the audio into three frequency bands, each with automatic threshold and ratio
settings, and processes each band individually. With a traditional compressor, a signal that exceeds the
threshold reduces the gain, and affects all frequencies. For example, if a strong kick drum hits above
the threshold, the compression it triggers will affect other drum sounds. By splitting the incoming
signal into three bands, each band has its own compressor, so compression affects only the associated
frequencies. This gives a less obvious effect that, even with significant compression, doesn’t
necessarily sound overcompressed.
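Here’s a rough Python/SciPy sketch of the band-splitting idea (it is not the Tricomp’s actual algorithm). The crossover frequencies, threshold, ratio, and 48 kHz sample rate are placeholders, and simple Butterworth filters plus a crude envelope follower stand in for the real crossover and detector.

import numpy as np
from scipy import signal

fs = 48000  # assumed sample rate

def split_three_bands(x, low_freq=300.0, high_freq=3000.0):
    """Split into lows, mids, and highs (Butterworth crossovers as a stand-in)."""
    low_sos = signal.butter(4, low_freq, btype="lowpass", fs=fs, output="sos")
    band_sos = signal.butter(4, [low_freq, high_freq], btype="bandpass", fs=fs, output="sos")
    high_sos = signal.butter(4, high_freq, btype="highpass", fs=fs, output="sos")
    return (signal.sosfilt(low_sos, x),
            signal.sosfilt(band_sos, x),
            signal.sosfilt(high_sos, x))

def band_compress(band, threshold=0.25, ratio=3.0):
    """Crude per-band compression: scale the band by a gain based on its own envelope."""
    env_sos = signal.butter(1, 10, fs=fs, output="sos")
    envelope = np.abs(signal.sosfilt(env_sos, np.abs(band)))
    over = np.maximum(envelope / threshold, 1.0)
    gain = over ** (1.0 / ratio - 1.0)   # gain drops below 1 only when the band exceeds its threshold
    return band * gain

def tricomp_sketch(x):
    low, mid, high = split_three_bands(x)
    # A loud kick triggers gain reduction only in the low band; mids and highs are unaffected
    return band_compress(low) + band_compress(mid) + band_compress(high)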

How to Adjust the Parameters


The Input Gain, Mix, Knee, and Gain (makeup gain) controls work the same way as the Compressor,
and the metering (input, output, and gain reduction) works as expected. Here’s how the Tricomp differs
from other dynamics processors.

The Compress knob sets the overall amount of compression applied to all three bands. It’s likely the
first control you’ll edit, because it has the most effect on the sound. This may be all you need to obtain
the desired sound. When adjusting either the Low or High controls, the amount of midrange
compression (applied by Compress) remains constant.

However, remember that excessive compression can lead to listener fatigue, especially with program
material. Adjust the Gain control to match the peak levels when bypassed or active; observe the
Channel’s meter where the Tricomp is inserted (the Tricomp output meter becomes inactive when
bypassed). Following the Tricomp with the Level Meter plug-in may make level-matching easier,
because you can stretch the meter horizontally to increase resolution.

After dialing in the basic sound, increase or decrease the amount of low band compression by turning
the Low control clockwise or counter-clockwise, respectively. Low Freq sets the low band’s upper
frequency limit. For example, with 300 Hz, the Low control will compress frequencies below 300 Hz.

Similarly to the Low control, the High control either increases or decreases the amount of high-band
compression. The High Freq control sets the lower frequency limit for high-band compression. The
High and Low controls can have a powerful effect on the sound.

 To increase the sense of “air” and add sparkle, set High Freq to 12 kHz, and turn up the High
compression.
 If the high end sounds dull or lacks articulation with individual tracks, set High Freq around 3
kHz, and then boost the High control.
 With a harsh high end, sometimes you can compensate by setting High Freq around 3 to 5 kHz,
and then reducing the High control slightly.
 As noted previously, bass often sounds inconsistent due to amp, room, and bass response issues
(e.g., dead spots on the neck, or misadjusted pickups). Lowering Low Freq to under 150-200
Hz, then increasing the low amount, can give a more consistent bass sound.
 The Tricomp can do a lot with mixed drums. Turn up Compress with the default settings to
bring up room sound and overheads. Reduce compression in the Low band to let the kick punch
more, and reduce High band compression to keep cymbals from dominating—or increase
compression to make the cymbals more prominent. For more overall punch, turn down the Knee
control (0 dB is the hard knee setting).

The Speed parameter adjusts attack and release parameters simultaneously, from 0.1 to 10 ms and 3 to
300 ms, respectively. As with standard attack controls, shorter attack times clamp a signal harder than
longer attack times. This reduces dynamic range by reducing initial peaks.

Autospeed automates the process, by adjusting attack and release times dynamically, based on the
program material. Autospeed aims for the best combination of dynamic range and clamping down on
transients. For a somewhat wider dynamic range, leave Autospeed off, and make your adjustments with
the Speed control. With Autospeed bypassed, you may need to trim the output Gain to match levels.

For the tightest clamping down on peaks, set Speed to 0.1 ms, Autospeed off, and Saturation at
minimum. Enabling Autospeed increases dynamics slightly. Saturation adds a soft clipping effect that’s
characteristic of vintage compressors. The Mix control provides parallel compression, as covered
previously with the compressor.

The Tricomp can be easy to overlook—the Compressor has more customizable parameters, while the
Multiband Dynamics processor offers detailed, sophisticated dynamics control. However, the
Tricomp’s simple interface belies its sophistication.

Tip: Many of the factory presets are designed for specific instrument sounds. Experiment with these to
learn what the Tricomp can do.

Expander
An expander is the opposite of a compressor—instead of reducing the dynamic range for high-level
signals, it expands the dynamic range for low-level signals—yet it has similar parameters (Fig. 8.14).
Below the threshold, the output drops off faster than the input. For example, with an expansion ratio of
1:2, lowering the input level by 1 dB below the threshold lowers the output level by 2 dB. Expanders
are similar to gates (described later), but arguably more refined. I have a sense they’re underutilized,
because expanders can do quite a bit.

Figure 8.14 Studio One’s Expander offers sidechaining, as covered in Chapter 9, and a range over which
expansion occurs—not just a single threshold.

An expander’s most common application is reducing residual, low-level noise. This requires a low
threshold (around -45 to -60 dB) with a steep expansion ratio, like 1:4 to 1:10. The effect may sound
natural enough that you don’t need to do manual edits of the hiss between vocal phrases, amp noise
between guitar licks, and the like. Expanders can also provide special effects, like hasten an
instrument’s natural decay, or reduce leakage among drums when recording a drum set.

Despite the superficial similarity to compressor parameters, expanders work in a “reversed” way except
for the input control, whose level still has a major influence on the processor’s characteristics. Also
note that an output control isn’t as relevant, because there’s no need to make up gain—the lowest-level
signals change, not the peaks.

There are two types of expansion:

• Downward expansion expands signals below the threshold, e.g., to get rid of noise.
• Upward expansion expands signals above the threshold, e.g., to emphasize peaks.

The Expander can do both, because it can process the dynamic range above or below the threshold.

How to Adjust the Parameters


Along with the Threshold control (see next), Range specifies the range over which you want expansion.
For example, to accent a snare hit’s peaks, you’d apply expansion over only a limited, high-level range
(e.g., from 0 to -4 dB). But to reduce noise, you’d want to expand as much noise as possible downward;
edit the range to extend from a threshold set just above the noise, down to the lowest level possible.

Threshold sets the level at which expansion begins. Above the threshold, there’s no expansion, so the
output level follows the input level. Below the threshold, the output level drops off at a faster rate than
the input, as determined by the ratio control. For example, with a 1:2 expansion ratio, if an input signal
below the threshold drops by 3 dB, the output signal drops by 6 dB.
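As a static curve, downward expansion looks like this Python sketch; the -50 dB threshold is an arbitrary example, and a ratio of 2.0 here corresponds to the 1:2 setting described above.

def expanded_level(in_db, threshold_db=-50.0, ratio=2.0):
    """Static downward-expansion curve: below the threshold, the level falls off 'ratio' times faster."""
    if in_db >= threshold_db:
        return in_db                                   # above the threshold: no expansion
    return threshold_db - (threshold_db - in_db) * ratio

# 1:2 expansion with a -50 dB threshold: an input 3 dB below the threshold comes out 6 dB below it
print(expanded_level(-53))    # -56.0
# Low-level hiss at -70 dB drops to -90 dB, well out of the way
print(expanded_level(-70))    # -90.0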

Attack sets how long it takes for the signal’s dynamic range to be expanded once the input level goes
below the threshold. Release slows down, and then stops the expansion action when expansion is no
longer required. Expansion typically uses relatively short attack times (10-30 ms), and moderate release
times (100-500 ms). The shorter the attack time, the quicker the response.

Expander Applications
Although the parameter adjustment process can be touchy, don’t overlook the Expander’s many
potential applications. These settings assume an input signal that peaks at close to 0 dB.

Reduce Low-Level Noise

Expansion can reduce preamp hiss, mic line hum, the ragged end of a digital reverb tail, and the like.
The settings shown in Fig. 8.15 reduce noise between guitar phrases when going through heavy
distortion, so the noise won’t be amplified.

Figure 8.15 Attenuating low-level signals stops noise from being amplified by subsequent high-gain stages.

1. Set the Ratio control to 1:20 (the steepest expansion ratio possible), so that any expansion will be
obvious in the following steps.
2. Adjust the Threshold control above the low-level sounds you want to attenuate. Because the
expansion ratio is so steep, you should be able to hear when the low-level sounds disappear.
3. Optimize the effect for a natural-sounding decay. Reduce the ratio parameter so that the lower-level
signals fade out more evenly and smoothly.
4. If needed, increase the Attack control to delay the onset of expansion as the signal transitions from
above the threshold to below the threshold. Similarly, adjust the Release control so that the Expander
recovers at the desired rate after the signal returns above the threshold.

Tighten a Mixed Drum Sound

Expansion can tighten up the sound of drums with too much ambiance, or that have excessive ringing.
Finding the right expansion settings requires some fine-tuning, due to interaction among the controls
(Fig. 8.16).

Figure 8.16 Typical Expander settings for tightening up mixed drums.

1. Adjust the threshold to a high setting so that there’s little, or no, expansion.
2. Start with a conservative ratio setting, like 1:1.5, and relatively short attack and release times (0.10
and 50 ms respectively).
3. Slowly lower the threshold. To shorten the drum decay evenly, keep the threshold setting fairly high.
To preserve some of the initial decay, but accelerate the decay rate shortly thereafter, use a lower
threshold setting.
4. Choose the desired range. You won’t need much range to obtain the desired effect.
5. If you hear distortion, increase the attack and/or release times slightly.

Fix Overcompressed Sounds

The settings that expand overcompressed material vary greatly from one piece of audio to another. Fig.
8.17 shows settings that helped fix an overcompressed drum loop.

Figure 8.17 The settings are quite different from the previous screen shot, but produce similar results with
different audio material.

What differs from the previous application is that the ratio setting doesn’t provide as much attenuation,
so there’s gentler expansion over a wider range, rather than greater expansion over a narrower range.

Make Dry Electric Guitars More Dynamic

Playing an acoustic guitar harder makes it brighter as well as louder. A dry electric guitar doesn’t have that quality, so it sounds less dynamic. Here’s a way to give your dry electric guitar more acoustic-like
qualities.

Create an FX Channel, and add a pre-fader Send to it from your electric guitar track (Fig. 8.18). The
FX Channel has an Expander followed by the Pro EQ2. The process depends on editing the Expander
settings so that it passes only the peaks of your playing. Those peaks then pass through a Pro EQ2, set
for a bass rolloff and a high frequency boost, so only the peaks become brighter.

Figure 8.18 This Console setup uses a pre-fader Send from the guitar Channel to feed an FX Channel with
Expander and Pro EQ2.

Creating a pre-fader Send for the guitar track lets you bring the guitar fader down, and monitor only the
FX Channel as you adjust the Expander and Pro EQ2 settings. The Expander parameter values are
critical—grab only the peaks, and expand the rest of the guitar signal downward. The settings in Fig.
8.19 are a good starting point, assuming the guitar track’s peaks hit close to 0.

Figure 8.19 The Expander lets through only the guitar signal’s peaks; the Pro EQ 2 boosts those peaks’ treble.

The most important edit is to the Expander’s Threshold control. After it grabs only the peaks,
experiment with the Range and Ratio controls to obtain the most natural sound. Finally, balance the
guitar track with the brightener effect from the FX Channel. Even just a little brightening can enhance
the overall guitar sound.

Multiband Dynamics
Studio One’s sophisticated Multiband Dynamics processor combines elements of EQ, compression, and
expansion. When edited skillfully, multiband dynamics can give more effective and unobtrusive
dynamics control than traditional, single-band compressors. Multiband dynamics is most useful for
instruments with wide frequency ranges and dynamics, such as piano and drums.

Similarly to how the Tricomp splits a signal into three bands, the Multiband Dynamics processor splits
an incoming signal into a maximum of five independent bands. Each band has a dynamics processor,
with a high threshold for compression and a low threshold for expansion. As one example of a
multiband dynamics application, if you use the low band to compress the kick, its compression doesn’t
affect the other bands, which are compressed according to their own needs (Fig. 8.20).

Figure 8.20 The Multiband Dynamics processor incorporates elements of compression, expansion, and
equalization.

How to Adjust the Parameters


Editing the Multiband Dynamics processor can be challenging, because of the many parameters spread
over multiple bands. If you’re new to this type of processing, load some factory presets, observe the
control settings, and vary them to hear how any changes affect the sound. “Reverse-engineering” these
presets is a fast way to learn the functions of the various Multiband Dynamics parameters.

Diagnostic Controls and Shortcuts

Helpful diagnostic controls and shortcuts simplify parameter adjustments.

 Each band has Mute (M), Solo (S), and Bypass buttons below each band’s label (Low, Low
Mid, Mid, High Mid, High). Solo simplifies hearing how dynamics control impacts a particular
band.
 To adjust the same control (Low Threshold, High Threshold, Ratio, Gain, Attack, or Release) in
each band simultaneously, click the Edit All Relative button.
 Enabling Auto Speed adjusts the Attack and Release controls for all bands automatically. This
often produces the optimum setting, which can save time experimenting with different Attack
and Release times.
 Hovering over any feature in the two dynamics graphs gives a tooltip above the main display
that explains the feature’s function (like the standard tooltips that show a control’s function).

Color-Coding

On the global display, a colored block in the output section corresponds to the Gain control setting. A
red block indicates attenuation; green indicates gain. The taller and brighter the block, the greater the
amount of attenuation or gain. In Fig. 8.20 above, all the bands are red because they’re all attenuating.

Setting the Frequency Bands

There are three ways to set the crossover frequencies between bands, which determine the ranges.

 The four knobs below the main display set the crossover frequencies. You needn’t use all five
bands—you can have as few as two, to split highs and lows. For example, for a three-band
dynamics processor, turn the H (High frequency) control fully clockwise to eliminate the high
band. Similarly, turn the L (Low frequency) control fully counter-clockwise to eliminate
the low band.
 You can type frequencies directly into the text boxes just above the knobs.
 The vertical lines in the main, global dynamics processing graph define the crossover
frequencies. Hover the mouse over a line until the cursor changes to a horizontal double-arrow,
click, then drag left (lower frequency) or right (higher frequency).

Setting Threshold, Ratio, and Gain

For background information on how these controls work, please review the previous sections on the
Compressor and Expander.

There are three ways to adjust these parameters:

 Adjust the individual knobs (this also varies the selected band).
 Click and drag on the smaller compression display’s nodes; this display shows one band at a
time (as selected by clicking on the band buttons below the global display).

 Hover the mouse over one of the four horizontal lines within a band—Edit High Threshold, Edit
Low Threshold, Edit Upper Output Level (Gain + Ratio), or Edit Lower Output Level (Gain +
Ratio)—until the cursor changes to a vertical double-arrow. Click, then drag up (higher level) or
down (lower level). There’s more information on these controls in the next section.

Tip: The various methods of changing parameter values are interchangeable; a change made with any one method is reflected in all of them. Use the method that’s most comfortable for you.

All processing occurs on signals between the low and high thresholds, based on the setting of the Ratio
and Gain controls. Signals below or above these thresholds, respectively, have linear gain.

Dynamics Controls

The Input and Output sections to the immediate right of a band’s individual compression graph (upper
left) each provide access to two horizontal lines within the selected band. The Input section’s upper line
controls the High Threshold. The lower line controls the Low Threshold. Changing these also changes
the Low Thresh and High Thresh knobs, just as changing these knobs also changes the line positions.
However, while using the lines for editing makes it easy to see the relationship among settings, you
may prefer to adjust a band’s individual parameters with the knobs, because the interface is more
familiar.

To create a standard compression curve:

1. Turn the High Threshold control up to 0.00 dB.


2. Set Low Threshold for where you want compression to start occurring.
3. Set a Ratio higher than 1:1 (e.g., 2:1).
4. Adjust the band’s Gain control (not the master Gain at the processor’s right) for the appropriate
amount of makeup gain, to compensate for any lower output levels caused by compression.

To create a standard expansion curve:

1. Turn the Low Threshold control down all the way.


2. Set High Threshold for where you want expansion to start occurring.
3. Set a Ratio lower than 1:1 (e.g., 1:2).
4. Adjust the Gain control if needed.

Global Controls

The global controls and meters provide the same functions as they do on the compressor or expander.

Multiband Dynamics Applications


Editing a multiband dynamics processor is more complex than optimizing a single-band compressor,
because the multiple bands need to work well with each other. Before touching any knobs, listen to the
sound and analyze what’s needed. Creating a hotter, louder sound is the simplest application: split the
signal into bands that divide up the spectrum, and set the compressor parameters similarly—the
multiband compressor acts like a standard compressor, but gives a more transparent sound. (For this
application, being able to edit a parameter in all bands simultaneously is useful.)

Tips for Individual Instruments


Although this type of processing is most useful with complex program material, it can also be effective
with individual tracks.

 With piano, leave the low end alone except for very light limiting, so that a good hit in the bass
range has strong dynamics—if you squash the low end too much, the notes will lose drama. But
compressing the upper midrange a bit can help melody lines cut through a mix better, and
boosting (without compressing) the very highest frequencies (e.g., 8 kHz and above) adds “air.”
 Drums work well with multiband dynamics, because each drum tends to have its own slot of the
frequency spectrum. Multiband dynamics can almost remix a drum part by compressing or
boosting certain drums. To tighten a drum’s sound, use expansion to reduce ringing.
 Although single-band compressors are the usual choice for bass, a multiband dynamics
processor can serve simultaneously as a compressor and graphic EQ. Typically, I’ll apply a lot
of compression to the lowest band (with the low band’s crossover frequency set below about
200 Hz), light compression to the low-mid bands, and medium compression to the high-mid
band (from about 1.2 kHz to 6 kHz). I often trim the level a bit for the low-mid bands, and for
the band above 5-6 kHz because there’s not a lot of energy up there with bass. (Or, try setting a
ratio below 1.0 for the highest band so that it turns into an expander. This can reduce any hiss
that’s present in the very highest band.)

Another advantage of multiband dynamics with bass is that you can tweak the high and low band gain
parameters to alter the levels, similarly to processing with EQ.

The preset in Fig. 8.21 gives a sound I call “tuned thunder,” thanks to heavy compression in the lowest
band. The main points here are extreme low end compression, and muting the lower mid band to make
room for guitar while leaving the bass’s low end intact.

Figure 8.21 This screen shot shows the lowest band’s parameter values, which create heavy compression.

Going to the other extreme, a significant upper midrange or treble boost can help a bass hold its own
against other tracks, because the ear/brain combination will fill in the lower frequencies. Fig. 8.22
shows settings for extra articulation so that the bass “pops.” Start with the default settings mentioned
previously, but set the low band crossover frequency to 110 Hz or so.

Figure 8.22 This use of multiband dynamics for bass emphasizes articulation and pick attack. The screen shot
shows the parameters for the all-important midrange band.

Only the mid band (320 Hz to 1.2 kHz) is being compressed. A bit of uncompressed gain for the high-
mid band emphasizes pick noise and harmonics; 5 dB seems about right. To compensate for the extra
highs, try adding 2-3 dB to the low band around 110 Hz.

To assess each band’s sonic contribution, do occasional reality checks with the solo and mute buttons
for individual stages.

Multiband Dynamics for Vocal De-Essing

Chapter 9 covers sidechaining, which is the usual technique for de-essing (de-emphasizing sibilant
sounds in vocals and narration). However, the Multiband Dynamics processor can also do de-essing by
compressing the high frequencies to reduce sibilants, while leaving the main vocal range untouched.

Figure 8.23 Multiband Dynamics set up for vocal de-essing.

In Fig. 8.23, all the bands are bypassed except for the High frequency band, which has heavy
compression—the Low Threshold is around -30 dB, with a ratio of 20:1. A short Attack allows the
Multiband Dynamics to “grab” the high frequencies quickly when they appear.

Multiband Dynamics as Graphic Equalizer


We can ignore the dynamics control aspect, and use the Multiband Dynamics as a high-performance,
five-band graphic equalizer. It offers some attributes standard graphic equalizers don’t have, like being
able to solo and mute individual bands, and move the band’s upper and lower limits around freely to
focus precisely on the part of the spectrum you want to affect.

First, defeat the compression by setting the Ratio for all bands to 1.0:1. The Attack, Release, Knee, and
Threshold controls don’t matter, because there’s no compression (Fig. 8.24).

Figure 8.24 Setting the Ratio for all bands to 1.0:1 means the Multiband Dynamics acts solely like a graphic
equalizer.

Now you can adjust the frequency ranges and level for individual bands. Being able to mute and solo
bands is invaluable; for example, suppose you want to add intelligibility to a vocal. With a parametric
EQ, you would need to go back and forth between the frequency, Q, and gain parameters to find the
“sweet spot” for intelligibility. With the Multiband Dynamics processor, solo the HM band, move the
range dividers until you focus on the vocal frequencies with the most articulation, then boost that
band’s gain.

Even better, use the Mix control to make the EQ’s overall effect more or less drastic by blending the
processed and unprocessed sounds. And speaking of drastic, each band’s Gain control covers ±36 dB—
more than most parametric EQs.

Transient Shaper
Studio One’s Expander and Compressor can shape transients several ways, as noted previously.
Although there’s no dedicated transient shaper effect, you may encounter this specialized dynamics
processor in third-party software. It affects a signal’s attack by either emphasizing or softening the
initial transient. However, unlike a compressor or limiter, this doesn’t necessarily change the overall
signal level. Some transient shapers also include a sustain control that brings up the average level after
the initial decay.
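
As a rough illustration of what goes on under the hood, here is a minimal Python/NumPy sketch of one common transient-shaper approach: two envelope followers, one fast and one slow, whose difference drives the gain. It is a generic sketch for illustration, not the algorithm any particular plug-in uses.

    import numpy as np

    def envelope(x, sr, attack_ms, release_ms):
        """One-pole envelope follower with separate attack and release times."""
        x = np.asarray(x, dtype=float)
        a = np.exp(-1.0 / (sr * attack_ms / 1000.0))
        r = np.exp(-1.0 / (sr * release_ms / 1000.0))
        env = np.zeros(len(x))
        prev = 0.0
        for i, s in enumerate(np.abs(x)):
            coeff = a if s > prev else r
            prev = coeff * prev + (1 - coeff) * s
            env[i] = prev
        return env

    def transient_shaper(x, sr, attack_gain_db=6.0):
        """Boost (or, with a negative value, soften) the attack portion of a signal."""
        x = np.asarray(x, dtype=float)
        fast = envelope(x, sr, attack_ms=1.0, release_ms=50.0)
        slow = envelope(x, sr, attack_ms=30.0, release_ms=50.0)
        transient = np.clip(fast - slow, 0.0, None)       # large only during attacks
        gain_db = attack_gain_db * transient / (np.max(transient) + 1e-12)
        return x * 10 ** (gain_db / 20.0)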

There are two cautions with transient shapers:

• Unless there’s a smooth transition from the attack sound to the post-attack sound, the two can sound separated. You can usually fix this with proper control adjustments.
• Emphasizing the attack can exceed the available headroom.

Transient shaping has several uses for mixing:

• Emphasize drum and other percussive attacks so that they stand out in the mix.
• Reduce the attack of overly-“aggressive” drums. For example, reducing tom attacks can place them more behind the kick and snare.
• Soften the attack of steel-string acoustic guitars if the attack overshadows the guitar sound itself.
• Reduce the attack on electric guitars prior to an amp sim. This reduces the non-tonal “splash” of pick noise that may produce a harsh-sounding initial transient.
• Lower a signal’s sustain to reduce room sound or reverb effects.
• Increase attack, and reduce sustain, to bring a sound more to the mix’s forefront, or soften the attack to place an instrument more in the background.
• With bass, emphasize the attack to give more punch. This also helps bass stand out in a busy mix. Increase the sustain for a fatter, more even sound.

Studio One’s Zero-Latency/Zero-Artifact Transient Shaper


In theory, Studio One doesn’t have a transient shaper plug-in. But in practice, there’s a zero-latency,
artifact-free transient shaper that’s ideal for emphasizing the attack in drum parts (and other percussive
sounds, like bass or funky rhythm guitar).

1. Copy the clip to which you want to add transient shaping.


2. Right-click in the copy, and choose Detect Transients.
3. Right-click in the copy again, and choose Split at Bend Markers. The copy now has slices at each
transient.
4. With all the slices still selected, click on any slice’s fadeout handle, and drag it all the way to the left
so that each slice has a sharp decay.

Tip: De-select one slice before doing this, because once you drag all the fadeouts to minimum, it’s very
difficult to change them. By de-selecting a slice, you can use the de-selected slice’s fadeout handle to
change all the slice fadeouts at once, regardless of the other slices’ settings.

5. Click the node in the middle of the fadeout curve, and drag the node down to make all the slices
highly percussive (Fig. 8.25).

Figure 8.25 The top waveform is the original drum part, while the lower waveform adds a sharp decay to each
drum transient.

The copy now isolates the transients from the rest of the loop. Vary the mix of the copied and original
tracks to set the balance of the emphasized attack with the loop’s “body.”

This technique is particularly effective with acoustic drum loops, because the drums tend to ring longer,
so shaping the transients gives a tighter sound.

Gate
The Gate mutes the track’s audio when the input level drops below a user-settable threshold, and
unmutes the audio when the level returns above the threshold. The main use for noise gates was to
reduce tape hiss. Setting the threshold just above the hiss’s level muted the hiss when the input signal
consisted solely of hiss. When the signal level exceeded the hiss, the gate would open. Ideally, the
louder signal would mask the hiss.

With today’s high quality gear and low noise levels, gates are rarely used for removing hiss (especially
because expanders can often do a better job). However, they’re still useful for providing effects, such
as:

• Cut off the ring from sustaining drum sounds, like toms.
• Remove the room ambiance “tail” from percussive instruments.
• Shorten reverb decay times.
• Reduce leakage—for example, if a snare track has leakage from other drums, it may be possible to reduce the leakage by setting the threshold above the leakage, but below the snare hit.
• Cut off a sound’s decay so you hear primarily the attack, which makes the sound more percussive. This requires a fairly high Threshold control setting.

How to Adjust the Parameters
The Gate’s parameters resemble those of other dynamics processors, although their functions are
different. Threshold sets the level at which the gate opens. To avoid cutting off low-level vocals, set the
Threshold just above the level of any noise or hiss you want removed.

Reduction Range attenuates the audio by a lesser amount than muting, so the gate doesn’t close to full
silence. At a maximum attenuation setting of -72.0 dB, almost no sound gets through the gate. The
Gain Reduction bar meter to the left of the main meter appears when the gate is closed, and shows the
amount of attenuation.

Attack sets the time from when the noise gate detects a signal above the Threshold to when the gate
actually opens. If there’s a click when the gate opens, a short attack time (0.1 to 5 ms) can reduce or
eliminate clicks. With some signals, a slight attack time may be necessary to avoid an abrupt transition
from no sound to sound. When the input signal goes below the Threshold, Release ramps the level
down over the specified time, to provide a smooth transition as the gate closes. A typical time is 50 to
200 ms.

Hold complements the Attack and Release controls. After a signal goes above the Open threshold, the
gate stays open for the specified hold time. This is useful for signals that cross over the threshold
several times when they attack, which can produce a “chattering” effect. Increasing the hold time keeps
the gate open, even if the level drops briefly below the Close threshold.

Look Ahead delays the audio going through the Gate. By monitoring the input in real time, the Gate
can anticipate when a transient will occur in the delayed audio (which is what you hear). This way, the
Gate can open when the transient hits, instead of just after.
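
To make the interaction of these controls concrete, here is a minimal Python/NumPy sketch of a gate’s gain computer with Threshold, Attack, Hold, Release, and Range. It is a simplified model for illustration, not how Studio One’s Gate is implemented, and it omits Look Ahead.

    import numpy as np

    def gate_gain(x, sr, thresh_db=-40.0, attack_ms=1.0, hold_ms=50.0,
                  release_ms=100.0, range_db=-72.0):
        """Return a per-sample gain curve for a simple noise gate."""
        thresh = 10 ** (thresh_db / 20.0)
        floor = 10 ** (range_db / 20.0)                 # Range: attenuation when closed
        atk = 1.0 / max(1.0, sr * attack_ms / 1000.0)   # linear ramp step while opening
        rel = 1.0 / max(1.0, sr * release_ms / 1000.0)  # linear ramp step while closing
        hold_samples = int(sr * hold_ms / 1000.0)

        gain = np.zeros(len(x))
        g, hold_count = floor, 0
        for i, s in enumerate(np.abs(np.asarray(x, dtype=float))):
            if s > thresh:                              # signal above Threshold: open
                g = min(1.0, g + atk)
                hold_count = hold_samples               # restart the Hold timer
            elif hold_count > 0:                        # below Threshold but still holding
                hold_count -= 1
            else:                                       # Hold expired: close over Release
                g = max(floor, g - rel)
            gain[i] = g
        return gain

    # usage: gated = audio * gate_gain(audio, 44100)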

About Trigger Event

This doesn’t relate to traditional noise gating, but it is a great feature. Clicking Active causes the Gate
to send a MIDI note (with a fixed, selectable pitch and velocity) through an output that’s available as an
Instrument Track’s note input (e.g., instead of a keyboard or other controller). This works well for drum
replacement, as described next under Gate Applications.

Gate Applications
The Gate is suitable for more than just muting low-level signals. In addition to these applications, see
Chapter 9 on sidechaining for more gate-specific applications.

Attack Delay Processor

An attack delay processor adds a slow, bow-like attack to sounds with a fast attack (guitar, bass, organ,
etc.) that have at least some sustain. Increasing the Gate’s attack time provides this effect, as long as
there’s silence just before the note where you want the attack. The silence allows the gate to reset back
to being closed before it opens again.

Setting the optimum threshold level and release time trades off reliable triggering with providing a
space before notes. A higher threshold setting and a short release time (typically a couple hundred
milliseconds) may help, because this will cause the preceding note to cut off sooner into silence.
However, too high a threshold, or too short a release, will cause “chattering.”

If you want the new note to ramp up from a higher starting level than 0, edit the noise gate Range
control so that the sound doesn’t go fully silent after the signal passes below the threshold. However,
the most dramatic effect occurs with maximum gate attenuation.

Reduce Noise with Saturated/Distorted Audio

With guitar, inserting the Gate before distortion or an amp sim while mixing can help clean up noise
and hum between notes (Fig. 8.26).

Figure 8.26 Typical settings for reducing noise between guitar notes prior to distortion.

The optimum Threshold control setting depends on the incoming guitar signal’s level; the settings in
the screen shot assume a fairly high guitar level whose peaks approach 0 dB.

Drum Enhancement with Multitracked Drums

Sometimes, part of mixing involves a salvage job—like replacing a drum sound with one that’s better
suited to the track. Drum enhancement or replacement works best with multitrack drum parts, where
individual drums have their own tracks. For example, suppose your mix would benefit from a different
kick drum sound than the recorded one. The Gate can send a note whenever the kick drum hits, and the
note can trigger a sampled or synthesized kick drum (Fig. 8.27).

Figure 8.27 The Gate is providing the trigger input for the Mai Tai, which is programmed for a sound that
complements the kick.

To set up the Gate to trigger an instrument sound from a kick drum:

1. Insert a Gate in the kick drum track.


2. Insert an Instrument Track with the desired kick drum sound.
3. Set the Gate’s trigger to the note needed to trigger the replacement kick sound (e.g., C2). Also set
the velocity. Figure 8.27 shows a velocity of 100, so the kick drum will hit with a moderately high
level. You can choose only one value, although you can edit the individual kick drum hits afterward.
4. Set the Instrument Track’s note input to Gate.
5. Adjust the threshold for reliable triggering from the kick drum track.

Because you’re not dealing with audio, the controls are less critical. However, one cool trick is that if
you turn up Attack and set the Threshold just right, it’s possible to “skip” sending every other trigger.

Snare Drum or Tom Enhancement with Mixed Drums

This technique works with a mixed drum track or loop. It extracts a trigger from the drum to be
enhanced (we’ll use snare as an example), triggers a synthesized sound (I like noise-based Mai Tai
sounds), and layers it with the original drum. Start by isolating the snare from the mixed track.

1. Create an FX Channel, and insert a Pro EQ2 followed by a Gate.


2. Add a pre-fader Send from your mixed drums to the FX Channel. Aside from providing a more
consistent level for triggering, a pre-fader Send lets you turn down the main drum track so you hear
only the FX Channel. This makes it easier to tweak the EQ and isolate the snare.
3. With the Gate bypassed, tune the Pro EQ2 to the snare frequency. Use the LC and HC with a 48
dB/octave slope to provide the preliminary isolation, then use a super-sharp bandpass setting to isolate the snare frequency (Fig. 8.28). When tuned to the snare, the EQ’s background spectrum analyzer
should show the bar that corresponds to the snare’s frequency range at its highest possible level. In
stubborn cases, you may need to double up the filtering with a second sharp bandpass filter.

Figure 8.28 Use the Pro EQ2 and Gate to extract a snare drum trigger.

4. Enable the gate, and click on Active to enable the trigger output. Set the Note and Velocity as
desired. Adjust the Gate’s settings so that it triggers on the snare hits. Like the Pro EQ2, the Gate
controls are critical.

• A short attack is usually best.
• Keep release short (e.g., 10 ms), unless you want to mix some of this Channel’s sound in with the replacement sound.
• Hold times around 50 ms can help prevent false triggering. But you can also get creative with this control. If you don’t want to trigger on hits that are close together (e.g., fills), a long Hold time will trigger on the first snare of the series, but ignore subsequent ones that fall within the hold time.

5. Insert the instrument (e.g., Mai Tai). Set the Instrument Track’s Note input to Gate, and enable its
associated track’s Monitor button. Fig. 8.29 shows the finished track setup.

Figure 8.29 Track layout for snare drum extraction.

Tweaking the Synth

When enhancing or replacing snare in a mix, the Mai Tai synthesizer is particularly well-suited to
creating electronic snare drum sounds because of its “Character” controls (Sound and Amount), along
with the filter controls, noise Color control, and FX (particularly Reverb, EQ, and Distortion). Fig. 8.30
shows a typical starting point for a sound that’s suitable for snare enhancement.

Figure 8.30 Starting point for a cool snare drum sound with Mai Tai.

If the snare is on a separate track, then you don’t need to isolate it. Just insert a Gate in the snare track,
enable the Gate’s trigger output, and adjust the Gate Threshold controls to trigger on each snare drum
hit. The comments above regarding the Attack, Release, and Hold controls still apply.

Nor are you limited to snare. You can isolate the kick drum, and trigger a massive, low-frequency sine
wave from the Mai Tai. Toms can sometimes be easy to isolate, depending on how they’re tuned. And
don’t be afraid to venture outside of the “drum enhancement” comfort zone—sometimes the wrong
Gate threshold settings, driving the wrong sound, can produce an effect that’s deliciously “right.”

Additional Gate Applications for Drums

• To cut off the ring from sustaining drum sounds (like toms), set the threshold just above the level where you want the drums to either cut off (no release time), or fade out over the release time.
• Similarly, the Gate can attenuate the room ambiance “tail” from percussive instruments, and shorten reverb decays.
• If a snare track has leakage from other drums, set the threshold just above the leakage’s level.

Dynamics Coda: Should Compression or Limiting Go Before or After EQ?
There’s no universal answer, because dynamics control like compression can serve different purposes.
Both options have their uses.

Consider this scenario: You’ve recorded a great synth bass line with a highly resonant filter sweep. On
some notes, the level goes too high when a note’s frequency coincides with the filter frequency.
Otherwise, the signal is well behaved. But, you also want to boost the lower midrange to give it a
beefier sound.

Put the compressor or limiter first to trap those rogue transients, then apply EQ to the more
dynamically consistent sound. If the EQ change is minor, it won’t change the signal’s overall amplitude
much.

Now suppose you don’t have any issues with overly-resonant filters, but you do need a massive lower
midrange boost. This much boost could increase the amplitude at some frequencies, so putting
compression after the EQ will help even these out.

However, there’s a complication. Because boosts in a frequency range increase the level in that range,
the compressor will scale those levels back down. This inhibits what the EQ is trying to do—it wants to
boost, but the compressor won’t let it go much further. However, signals below the threshold do remain
boosted, and this might give the sound you want.

Another reason to place EQ before compression is to make the compression more frequency-sensitive.
To emphasize a guitar part’s melody, boost EQ slightly for the range to be emphasized and then
compress. The boosted frequencies will cross over the compression threshold sooner than the other
frequencies.

Or, suppose a digital synth is “buzzy.” Cut the highs a bit prior to compression, and the compressor will
bring up everything else more readily than the highs. This type of technique isn’t quite the same as
multiband compression, but gives some of the same results because there’s more punch to the boosted
frequencies.

Key Takeaways
• Dynamics processing used to be essential due to the limited dynamic range of analog recording,
but now it’s used more as an effect.
• Manual gain-riding was an early form of dynamics control; plug-ins can now provide a similar
function.
• Normalization calculates the difference between a recording’s highest peak and the maximum
available headroom, then amplifies the recording so that its highest peak reaches the maximum
level possible short of distortion.
• A compressor evens out dynamic range variations by progressively attenuating signals that
exceed a specified level (the threshold). This causes the output level to increase at a slower rate
than the input level, thus reducing the difference between soft and loud levels.
• Presets for dynamics processors are of limited use, because the processors depend on the input
level and the material being processed.
• A limiter is like a motor’s governor: Signals don’t exceed a user-settable threshold; signals
below the threshold remain untouched.
• Compared to a limiter, a compressor provides more control over dynamics after a signal
exceeds the threshold. A limiter allows no level changes above the threshold, and retains
existing dynamics below the threshold.
• The Tricomp is a three-band compressor. Although often thought of as a mastering processor,
it’s also effective with individual tracks.
• An expander is the opposite of a compressor—instead of reducing the dynamic range for high-
level signals, it expands the dynamic range for low-level signals.
• The Multiband Dynamics processor splits an incoming signal into a maximum of five
independent bands. It combines elements of EQ, compression, and expansion. When edited
skillfully, multiband dynamics can give more effective and unobtrusive dynamics control than
traditional, single-band compressors.
• A gate mutes the track’s audio when the input level drops below a user-settable threshold, and
unmutes the audio when the level returns above the threshold.
• Studio One’s Gate can also generate a MIDI note trigger when the audio input goes above the
threshold, making it suitable for drum replacement.
• There’s no universal answer as to whether dynamics should go before or after EQ.

Chapter 9 | Sidechaining

Usually, a dynamics processor’s input signal influences the amount of compression, gating, expansion,
or limiting. This is because the input signal is either above or below a threshold, which causes the
processor to react in a certain way. However, the Compressor, Gate, Expander, Channel Strip, Pro EQ2,
Autofilter, Multiband Dynamics, Spectrum Meter, and Scope include a sidechaining feature. This
separates the audio signal going into the processor from the signal that controls the processor. The
signal that controls the processor is the sidechain input (Fig. 9.1).

Figure 9.1 The block diagram on the left shows how the input signal controls the gate’s sidechain input. The
diagram on the right diverts the sidechain input to a different, independent control signal, which now controls the
gating effect.

Referring to Fig. 9.1 (left), technically speaking a sidechain is always in play with dynamics
processing, but it’s locked to the input signal. A variation on this, the internal sidechain, adds
processing between the input signal and the internal sidechain input. Often this is a filter, so that the
dynamics processing affects only certain frequencies. The Compressor, Expander, and Gate have an
internal sidechain set up this way (in addition to being able to use an external control signal). However,
the term sidechaining generally describes using an external source to control a dynamics processor.

Probably the best way to explain why this can be useful is with two common examples. Sidechaining
can also do special effects, as described later.

• Suppose you’re mixing a singer-songwriter playing live with vocal and guitar, and you want the guitar to get quieter when the singer sings. Send the guitar through a compressor, split the vocal into two paths, record one split, and send the other split to the Compressor’s sidechain input. A vocal level above the threshold compresses the guitar to create more space for the vocal. When the singer isn’t singing, the guitar returns to its uncompressed state.
• When doing narration over a video’s music bed, you often want the music’s level to dip somewhat during narration. Insert a Compressor in the music track, use the narration as the sidechain signal, and the music bed’s level will dip when there’s narration.

How to Access the Sidechain Input


In Studio One, a processor’s sidechain input appears as an available destination when assigning a track, Bus, Send, or FX Channel output; these inputs are grouped in their own sidechain category (Fig. 9.2).

Figure 9.2 When choosing an output in Studio One, available sidechain inputs appear in their own category,
which is separate from the other available outputs.

Insert a Send to Feed the Sidechain


This is the most common way to feed a sidechain. Insert a Send into the track you want to have
control the sidechain, and assign the Send output to the sidechain input. You can choose a pre- or post-
fader Send. With a post-fader setting, the sidechain signal follows the track’s fader level. With a pre-
fader setting, the level does not change when the Channel fader changes. (Note: Copying a plug-in with
sidechain routing to another Console Channel preserves its sidechain routing, along with its parameter
settings.)

Another option is to click on the Sources button to the right of the Sidechain button in the plug-in
header. Check one (or more) sources you want to connect to the Sidechain input, and Studio One
automatically creates a post-fader Send in the selected source(s).

Several Sends or Track/Bus outputs can feed into a single sidechain input. For example, you might
want to gate bass with drums. If the drums are recorded on separate tracks, you can insert a Send in the
kick drum track, snare track, or any other drum tracks you want to have control the sidechain.

How to Process the Signal Feeding a Sidechain Input


As mentioned, a track output or Send can show an available sidechain input, and send signal to it.
However, this doesn’t accommodate processing the signal going to the sidechain, independently of the
track itself. One solution is to route the sidechain signal through an FX Channel:

1. Create a Send in the track you want to have control the sidechain, but instead of assigning it to the
sidechain, choose Add FX Channel. The Send now goes to the FX Channel.
2. Insert the signal processor for the sidechain in the FX Channel.
3. Assign the FX Channel output to the target sidechain input.

Studio One doesn’t allow feeding a track into the sidechain input of a processor in the same track,
because this creates a loop where the processed output tries to control the processor that’s feeding the
output. However, there’s a workaround.

Suppose a drum track includes a Compressor, and you’d like the drum track itself to provide a
processed sidechain signal for the Compressor—perhaps to process the sidechain signal through the X-
Trem, which would create beat-synched compression effects based on the original drum track.

1. Copy the original track’s audio to a separate track.


2. Send the copied track’s output, or a pre-fader Send, to the sidechain input of the effect in the
original audio track.
3. Insert the effect in the copied track that will process the signal feeding the original track’s sidechain.

As one example of why this is useful, X-Trem has a 16-step waveform where you can customize each
level. Increase the step amplitude on specific beats, and these sounds will be compressed while the rest
of the drum part remains uncompressed.

Copying a track has another potential benefit: you can modify the audio feeding the sidechain. For
example, if you don’t want a section of the track to feed the sidechain, delete that section. Or, to ensure
that the track “slams” the sidechain in a particular section, raise the track’s level.

Internal vs. External Sidechaining


So far, we’ve discussed external sidechaining, where a processor exposes its sidechain input to an
external control source. However, the Compressor, Expander, and Gate include a filter in the internal
sidechain that can process the control signal—no external signal required (Fig. 9.3).

Figure 9.3 Although the Compressor has a sidechain input for external signals, it also has a filter to restrict the
frequency response of the internal signal controlling the Compressor’s dynamics.

After enabling the internal filter with the Filter button, enable Listen Filter to hear the filtered sound
that feeds the sidechain input. This makes it easier to adjust the sidechain’s filter frequencies.

One internal sidechain filtering application is frequency-selective compression, like filtering out the
high frequencies so that only a kick drum triggers compression, or de-essing (described later under
applications). With the Compressor’s internal sidechaining, the sidechain controls are limited to
selecting the Lowcut and Highcut filter frequencies, and listening to the sidechain input. The
conventional Compressor controls handle the threshold, compression ratio, and other compressor
parameters.

Sidechain Applications
Because sidechaining takes over the dynamics control for sidechain-capable processors, it allows for
several creative applications.

Lock Kick Drum and Bass Together (Gate Application)

For an ultra-tight rhythm section, lock the bass to the kick so that you hear bass only when the kick hits
(Fig. 9.4).

Figure 9.4 Track setup for locking bass to drums. The kick Send is pre-fader (i.e., its level is independent of the
kick’s Channel fader), and goes to the sidechain-enabled Gate that’s inserted in the Bass track.

Set the Gate’s threshold so that it opens when the kick exceeds the threshold. The Gate’s range control
could mute bass completely when the kick doesn’t hit, or let through some of the bass part. Release can
add a bass decay that lasts longer than the kick, thereby differentiating the two instruments a bit more.

Pump Drums with Internal Sidechaining (Compressor Application)

The “pumping” effect is an EDM staple. Usually, this technique requires sidechaining from an external
source, but we can use the Compressor’s internal sidechain filter instead. The pumping effect works
best when applied to sustaining sounds—like drum parts with cymbals, or pads if you want to pump a
non-drum track.

To pump some drums, insert the Compressor in the drum track, and click on the Compressor’s Filter
and Listen Filter buttons. To pump with the kick, set the Lowcut frequency to off, and lower the
Highcut filter until you hear pretty much nothing but kick. Once you’ve isolated the kick (or snare, or
whatever you want to isolate), turn off Listen Filter but leave Filter on.

The control settings are crucial. Fig. 9.5 shows some potential initial settings, but you’ll need to edit the
controls based on the source audio and the desired effect.

Figure 9.5 Setup for pumping drums, using the Compressor’s internal sidechain.

The effect’s depth depends on the Threshold and Ratio control settings. For a heavy-duty effect, set
threshold between -20 and -30 dB and ratio around 10. Tweak as needed, depending on the program
material.

Now for the pumping. Start with Attack at minimum, and set Release for the amount of pumping—start
between 100 and 300 ms, depending on the audio. To restore some of the attack at the start of the pumping, increase the Attack time. Even a little bit, like 5 ms, restores much of the attack. Finally,
because this effect does compress, you’ll probably need to add makeup gain.
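
As a rough picture of what pumping does to the level, here is a minimal Python/NumPy sketch that dips the gain at given kick times and lets it recover over the release time. It models only the gain envelope, not the Compressor itself; the hit times and values are placeholders.

    import numpy as np

    def pump_gain(kick_times_s, sr=48000, length_s=4.0, depth_db=-9.0, release_ms=250.0):
        """Gain envelope that ducks at each kick and recovers exponentially over the release."""
        n = int(sr * length_s)
        gain_db = np.zeros(n)
        rel = release_ms / 1000.0
        for kick in kick_times_s:
            start = int(kick * sr)
            t = np.arange(n - start) / sr
            dip = depth_db * np.exp(-t / rel)           # instant dip, exponential recovery
            gain_db[start:] = np.minimum(gain_db[start:], dip)
        return 10 ** (gain_db / 20.0)

    # Apply to a sustained pad or cymbal-heavy drum bus, e.g. kicks every half second:
    # pumped = audio[:48000 * 4] * pump_gain([i * 0.5 for i in range(8)])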

Conventional Pumped Drums (Compressor Application)

An external sidechain signal can provide more flexibility than using an internal filter, because any
audio can provide the pumping effect. Fig. 9.6 uses an external signal to pump drums.

Figure 9.6 Setup for conventional compression-based pumping.

Mix the multitracked drum tracks together into a Bus. This is the sound we’ll “pump.” Although we
could use any of the drums to trigger the pumping, we’ll get creative and use a claps track instead.

The mixed drums Bus has a Compressor inserted. A Send from the claps track goes to the
Compressor’s sidechain input. As with the internally filtered sidechain, the Compressor’s settings are
crucial. The ones shown in the screen shot are a good starting point, but will likely need adjusting.
Optionally, mix in some of the claps track to hear the claps sound.

Frequency-Selective Compression (Compressor Application)

Some instruments (especially guitar) lose “sparkle” when compressed, because the stronger low frequencies trigger compression that also pulls down the weaker high frequencies. Using the filter in the Compressor’s sidechain (Fig. 9.7) can
compress a guitar’s lower frequencies, while leaving the higher frequencies uncompressed. The high
frequencies can then “ring out” above the compressed sound. (Multiband compression can do this too,
but sidechaining may accomplish the same results more easily.)

Figure 9.7 These Compressor settings provide frequency-selective compression for guitar.

1. Insert the Compressor in the guitar track.


2. Enable the Filter button.
3. Enable the Listen Filter button.
4. Turn Lowcut fully counterclockwise (minimum), and set the Highcut control to around 250 – 300
Hz. You want to hear only the guitar’s low frequencies.
5. You can’t hear the results of adjusting the compression controls while Listen Filter is enabled. Disable Listen Filter (but leave Filter on), and adjust the controls to optimize the low-frequency compression.
6. Use the Mix control to compare the compressed and uncompressed sounds. The high frequencies
should be equally prominent regardless of the Mix control setting (unless you’re hitting the high strings
really hard), while the lower strings should sound compressed.

The compression controls are fairly critical in this application; experiment. For more complex
responses than the internal filter can provide:

1. Copy the guitar track. You won’t listen to this track, but use it solely to drive the Compressor
sidechain.
2. Insert a Pro EQ2 in the copied track.
3. Solo the copied track, and adjust the EQ’s range to cover the frequencies you want to compress.
4. Assign the copied track’s output to the Compressor sidechain input.

Muting via Ducking (Gate Application)

With the Gate, ducking mutes one signal when another one appears (Fig. 9.8).

Figure 9.8 The Gate will mute audio whenever a control signal, like an MC talking or a drum hit, occurs.

1. Insert a Send in the control track, and assign it to the Gate sidechain.
2. Enable the Gate’s Sidechain button.
3. The Duck button appears; enable it.
4. Adjust the Gate’s Threshold controls so that the audio mutes when the control signal is present. For
announcer-type purposes, you’d use fairly long Attack and Release control settings. For musical
applications, you may want these to be very short so that percussive transients can “slice” silence out of
a sound rhythmically.
5. Note that Range works in reverse of what you expect with a Gate. The audio to be muted will mute
regardless of the Range setting. Instead, Range determines the muted signal’s level when it’s unmuted
(-72.00 dB, which translates to virtually no muting, returns the audio to its maximum level).

Lower a Track’s Level from a Different Track (Compressor Application)

You don’t always want ducking to mute a signal, but simply to lower the level somewhat (e.g., lower an
accompanying instrument’s volume when a singer sings). You can do this with automation, but ducking
automates the procedure.

Figure 9.9 Compression with sidechaining is the most common setup for having one signal alter another
signal’s level automatically.

Set up the Compressor. The crucial controls (Fig. 9.9) are Threshold, which determines the amount of
gain reduction, and Attack and Release, which need to be fairly long to prevent an abrupt ducking
action.

1. Insert a Send in the control track, and assign it to the target track’s Compressor sidechain.
2. Enable the Compressor sidechain.
3. Adjust Threshold for how much you want to lower the target track, and Ratio for the compression
amount (too high a ratio will give an unnatural, overcompressed sound).
4. A softer knee setting can provide a smoother effect.
5. Experiment with the Attack and Release controls for the smoothest level transitions in the target
track. Avoid Auto or Adaptive modes, which assume you want traditional dynamic range processing—
that’s not the case here.
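
For readers who like to see the idea in code, here is a minimal Python/NumPy sketch of sidechain ducking: the control track’s envelope drives gain reduction on the target track. The numbers and function names are illustrative, not Studio One’s internals.

    import numpy as np

    def duck(target, control, sr, thresh_db=-30.0, ratio=3.0,
             attack_ms=30.0, release_ms=300.0):
        """Lower the target track whenever the control track exceeds the threshold."""
        a = np.exp(-1.0 / (sr * attack_ms / 1000.0))    # smoothing while the level rises
        r = np.exp(-1.0 / (sr * release_ms / 1000.0))   # smoothing while the level falls
        env = 0.0
        out = np.zeros(len(target))
        for i in range(len(target)):
            level = abs(control[i])
            coeff = a if level > env else r
            env = coeff * env + (1 - coeff) * level             # smoothed control level
            level_db = 20 * np.log10(max(env, 1e-9))
            over_db = max(0.0, level_db - thresh_db)            # how far above threshold
            gain_db = -over_db * (1.0 - 1.0 / ratio)            # feed-forward compressor law
            out[i] = target[i] * 10 ** (gain_db / 20.0)
        return out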

De-Essing (Compressor Application)

As shown previously in Frequency-Selective Compression, the Compressor’s internal sidechain can compress over a specific frequency range. This is useful for de-essing, because you can reduce sibilants in vocals and narration without affecting the rest of the audio. Fig. 9.10 shows the Compressor acting as a de-esser.

Figure 9.10 The Compressor’s internal sidechain can provide de-essing for vocals and narration.

When set for de-essing, the Sidechain Filter removes frequencies below 4 kHz or so, so that only the sibilant range triggers compression. The exact frequency varies for different vocalists and different amounts of sibilance. The Threshold is set very low, and the Ratio very high, to produce the maximum amount of high-frequency compression. The softer knee isn’t always necessary; sometimes you want the compression to clamp down on the highs decisively (i.e., a hard knee and short attack time).

Reduce Amp Sim Harshness with De-Essing (Compressor Application)

Feeding too much treble into a high-gain amp sim might create a harsh tone due to the amp sim
distorting the high frequencies. A tone control reduces highs, but a de-esser can reduce the highs
entering an amp sim “intelligently.” It does this by reducing high frequencies from your guitar when
they’re prominent, but otherwise leaving the sound unaffected. As a bonus, the compression
contributed by de-essing adds smoothness. Adjust the parameters similarly to de-essing vocals.

1. Start with no filtering.


2. Set the Threshold considerably lower than the high-frequency peaks.
3. With Listen Filter enabled, and the Highcut control fully clockwise, turn up the Lowcut control to
zero in on the high frequencies.
4. As you listen to the amp sim output while playing, adjust the compression controls for a relatively
low Threshold and high Ratio. With the right settings, the amp sim sound will likely become sweeter as
the highs start compressing.

If the de-essing effect is too obvious, raise the Threshold control and/or reduce the Ratio.

A de-esser is useful beyond making sweeter distorted sounds. Vox AC-30 amp emulations have a
naturally bright sound. If you boost the treble too much, the sound becomes screechy. Use a de-esser to
zero in on a narrow range of the brightest incoming frequencies, reduce them, and then add EQ
afterward to apply a wide treble boost. This gives a sweet brightness that doesn’t boost the harsh
elements.

Drum Sound Enhancement (Gate)

This application for sidechain-triggered gating can give snare a more “80s” drum sound, assuming the
snare is on its own track. If not, isolate it as described in “Snare Drum or Tom Enhancement with
Mixed Drums” in the previous chapter.

1. Create a track with white or pink noise (either an audio file for the length of the drum part, the Mai
Tai output set for a noise-only preset, or the Tone Generator plug-in set for Pink or White Noise; see
Fig. 9.11).

Figure 9.11 The Tone Generator can generate white noise, pink noise, sine waves, and other waveforms for
testing or musical purposes.

2. Insert a Pro EQ2 in the track to shape the noise’s timbre. This noise will mix with the snare sound.
3. Follow the Pro EQ2 with the Gate.
4. Add a Send to the snare drum track, and assign it to the Gate’s sidechain input.
5. The noise will now trigger when the snare drum hits. Adjust the Gate’s Attack and Release so that
the noise’s dynamics follow the snare drum.

You can similarly transform a kick drum into the super-boomy, floor-shaking “hum drum” kick sound.
Instead of creating a track with white noise, create a track with a steady sine wave (50-70 Hz or so).
The Tone Generator plug-in can generate the sine wave. Insert a gate in this track, and trigger its
sidechain input with the kick drum. Set a fairly high threshold, and use the gate’s Release control to set
the sine wave’s decay time.
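
If you would rather render the “hum drum” layer offline, here is a minimal Python/NumPy sketch that drops a decaying low sine burst at each kick hit. The hit times, frequency, and decay value are placeholders to adjust by ear.

    import numpy as np

    def hum_drum(hit_times_s, sr=48000, length_s=8.0, freq=60.0, decay_s=0.4):
        """Render a low sine burst with an exponential decay at each kick-hit time."""
        out = np.zeros(int(sr * length_s))
        t = np.arange(int(sr * decay_s * 5)) / sr           # one burst, ~5 decay constants long
        burst = np.sin(2 * np.pi * freq * t) * np.exp(-t / decay_s)
        for hit in hit_times_s:
            start = int(hit * sr)
            end = min(len(out), start + len(burst))
            out[start:end] += burst[: end - start]
        return out

    # e.g., kicks on every beat at 120 BPM: hum_drum([i * 0.5 for i in range(16)])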

Another option is to trigger effects on one drum from a different drum. For example, if you have snare
and kick on separate tracks:

1. Create an FX Channel with reverb, set for a long decay time (like 10 seconds).
2. In the snare track, insert a Send that goes to the reverb.
3. Insert a Gate after the reverb. Set its Attack and Hold controls to 20 ms, and Release to around 250
ms.
4. Insert a Send in the kick drum track, and assign it to the Gate’s sidechain input.

Now the kick drum will gate the snare’s reverb, producing twisted rhythmic effects.

Spectrum Meter
We’ll close with the Spectrum Meter because, even though it doesn’t process dynamics, sidechaining lets it compare the dynamics of two different signals (Fig. 9.12).

Figure 9.12 You can compare the dynamics of two signals with the Spectrum Meter’s sidechain capabilities.

With sidechain enabled, the top view shows the spectrum of the track into which you’ve inserted the
Spectrum Meter. The lower view shows the spectrum of the track feeding the sidechain. When
sidechained, all the Spectrum Meter analysis modes are available except for Waterfall and Sonogram.

The screen shot compares two drum loops from different loop libraries. The lower one is clearly more
compressed, so to match the loops better, the upper one will need to be compressed, or the lower one,
expanded.

Key Takeaways
• Sidechaining separates the audio signal going into the processor from the signal that controls
the processor. The signal that controls the processor is the sidechain input.
• Not all processors have sidechain inputs. In Studio One, a processor’s sidechain input will
appear as an available track, Bus, Send, or FX Channel output, within a sidechain category.

• You can feed a sidechain input from a Send control, a track or Bus output, or any combination
of these.
• To process the signal feeding a sidechain input independently of the track feeding the sidechain,
send to an FX Channel, insert the effect, then assign the FX Channel output to the sidechain
input.
• You can’t feed a track into the sidechain input of a processor in the same track, because this
creates a loop where the processed output tries to control the processor that’s feeding the output.
However, there are workarounds.
• The Compressor, Expander, and Gate include a filter in the internal sidechain that can process
the control signal—no external sidechain signal is required.
• Sidechaining is useful for solving problems, but can also serve as a special effect.

Chapter 10 | Add Other Effects

Although equalization and dynamics control are important, many other effects can enhance a track. You
can choose from thousands of plug-ins, including many that are public domain. But be careful not to
get carried away, particularly because it’s easy to be tempted by the free plug-ins. You don’t need 46
compressors. Really.

One of my favorite plug-in stories involves Sound on Sound magazine’s Mix Rescue series, where the
magazine’s editors go to someone’s studio and show how to improve a mix. During one of these, the
musician who owned the studio went into the kitchen to make tea. Meanwhile, the Sound on Sound
people bypassed all the plug-ins so they could hear what the raw tracks sounded like. When the
musician returned, he wanted to know what they had done to make the sound so much better.

Often a raw track doesn’t need much more than EQ or dynamics, if that. Piling more processing on a
track can create a more artificial sound. Remember too that a song’s tracks work together. When adding
effects, do so in context. If all the tracks are tarted up with plug-ins to sound fantastic, the end result
will likely be a confusing mess. Think of effects as spices for cooking: Too much of a good thing can
be a bad thing. Also, avoid using an effect to try and salvage a part. If a part has problems, re-do it,
rather than try to cover up the flaws with effects.

However, there are situations when adding the right plug-in at the right time can lift a song to a higher
level. Let’s look at some popular plug-ins used for mixing.

Console Emulation
Studio One Professional has a unique feature called Mix Engine FX. These effects are available in
Buses, but what makes them clever is that a single effect processes the tracks feeding the Bus
individually. This makes them particularly well-suited for console emulation effects. With a process
like saturation, the sound of processing each track individually is more distinct than simply applying
saturation to the sum of all signals going into a Bus. Also, the Bus fader level is pre-Mix Engine FX, so
it not only sets the output level, but also the amount of drive going to the Mix Engine FX.

Studio One includes a Console Shaper Mix Engine FX processor. Additional, optional-at-extra-cost
Mix Engine FX are available, like the CTC-1 (Fig. 10.1). These are variations on a theme of adding
saturation, hiss, crosstalk, and/or “character.”

Figure 10.1 In this setup the Console Shaper is inserted in a Bus for drums, while the CTC-1 goes in the Main
Bus.

It may seem nonsensical to emulate the sound of imperfect, old-school consoles in a precision digital
audio program. The rationale is that analog and digital mixing use different processes and technologies.
For example, analog circuitry has inherent non-linearities (a polite word for “distortion”). As a signal
goes through multiple analog stages, these non-linearities add up. When distortion generates harmonics
in the audio spectrum’s upper range, the “soundstage” may appear wider. This can also produce a mild
form of the “sparkle” associated with processors like exciters.

Mixers also had audio transformers that introduced phenomena called overshoot and ringing. This
effect is most visible on square waves (Fig. 10.2). Some people feel this adds “warmth.”

Figure 10.2 The square wave should be a perfect square, but turning up the CTC-1’s Character control, along
with the Bus’s input fader to increase drive to the CTC-1, adds some audio transformer-like qualities.

Additionally, crosstalk occurred when a little bit of signal from one channel leaked into another
Channel. This sometimes resulted in “happy accidents,” due to processor and pan settings that added
depth to a mix.

Console emulation is not a broadband effect, because it affects different frequencies differently. Some
people hear a major difference in their mixes; others feel it’s inconsequential. When I first started using
console emulation, I wasn’t impressed. After learning how to take advantage of it, I used it much more.
Sometimes the results are subtle, but all that matters is whether the end result is better than not using it.

Remember that a little goes a long way—these should add subtle amounts of spice, not pour ketchup all
over the mix. Although they fall into the “magic fairy dust” area of signal processing, there’s no
denying that mixes can sometimes benefit from console emulation.

Saturation and Distortion


Although one goal of audio engineering is to rid systems of distortion, there are creative ways to use
distortion plug-ins. Saturation is usually thought of as a lighter distortion compared to, for example, an
overdrive pedal. There are three main saturation categories:

• Tube saturation
• Tape saturation
• General saturation that may be based on hardware, but isn’t necessarily a strict emulation of a particular product or technology (or may combine multiple saturation types in a single plug-in).

Ampire includes a tube driver stomp box. The RedLightDist effect offers three types of tube saturation,
and several effects (Analog Delay, Tricomp, Autofilter, and Rotor) have a State Space input stage that
produces saturation. However, the general-purpose Softube Saturation Knob may be what you’ll use
the most (Fig. 10.3).

Figure 10.3 Softube’s Saturation Knob is a useful, general-purpose distortion effect.

There’s some confusion about the Saturation Knob’s Saturation Type switch, because an early video
was released that got it wrong. “Keep low” means distort the highs, “keep high” means distort the lows,
and “neutral” provides equal-opportunity saturation.

Applying Saturation
Distortion on guitar is the sound of rock and roll, but saturation is useful for more than guitar. Also
remember you can use external hardware with Studio One, including distortion-specific guitar
processors, with the Pipeline plug-in (Chapter 4).

The following applications for saturation and distortion don’t involve guitar. Saturation does generate
harmonics, so you may want to follow these plug-ins (or even precede them) with a filter that reduces
the high-frequency response.
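
If you are curious what saturation does to the waveform, here is a minimal Python/NumPy sketch of a generic soft clipper (a tanh curve with a drive control). It is not the Saturation Knob’s algorithm, just an illustration of why higher drive produces more harmonics.

    import numpy as np

    def saturate(x, drive_db=12.0):
        """Generic tanh soft clipper: higher drive pushes more of the signal into the curve."""
        drive = 10 ** (drive_db / 20.0)
        y = np.tanh(drive * np.asarray(x, dtype=float))
        return y / np.tanh(drive)    # normalize so a full-scale input stays near full scale

    # A pure sine picks up odd harmonics as drive increases:
    t = np.linspace(0, 1, 48000, endpoint=False)
    sine = np.sin(2 * np.pi * 100 * t)
    clipped = saturate(sine, drive_db=18.0)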

Drums

Drums have a quick initial attack, followed by an abrupt decay. Adding a little distortion clips the
attack’s first few milliseconds, while leaving the decay untouched. This affects the sound in three
important ways:

• You can raise the drum’s overall average level for a louder perceived sound, because the overdrive effect acts like a primitive limiter.
• Clipping creates a short period of time where the sound is at its maximum level, which contributes “punch.”
• Distortion increases the attack’s harmonic content, which makes it brighter.

Moderate distortion can “toughen up” drums, particularly analog electronic drums, and turn audio
sissies into turbulent filth monsters. Kick drums are candidates for significant amounts of distortion. Be
sparing with cymbals and hi-hats, which often sound harsh with distortion. Parallel processing is
useful, because it combines a distorted signal path with natural drum sounds.

Another option is to use saturation as a send effect, particularly with drums having multiple outputs
(e.g., distort the kick and snare, but not the hi-hat and cymbals). Even a little distortion can add a useful
edge to drums, including acoustic drums and percussion loops.

Bass

Saturation on bass tracks is one of my favorite mixing techniques. The bass stands out in the mix more
because the extra “grit” adds high frequencies, which makes the naturally low-frequency bass more
audible. Also, saturation acts like a limiter because it cuts off the peaks, which opens up headroom so
you can increase the overall level. However, with amp sim distortion, consider patching the distortion
in parallel with the bass track to retain the low end’s fullness. Bass seems to sound best with relatively
low-gain distortion settings, because this gives more of a deep, aggressive “growl” that cuts well
through a mix.

Vocals

Hardcore/industrial groups sometimes add distortion to vocals for a dirty, disturbing effect.

Keyboards

Overdriving a rotating speaker’s preamp was common with classic rock tonewheel organ sounds, and
the Rotor effect’s State Space input emulates this effect. Keep the gain fairly low; you don’t want a
“fizzy” sound. Like bass and drums, parallel processing provides good results if you use a separate
distortion plug-in.

Bus/FX Channel Send Effects Distortion

To bring a couple instruments out from a mix, insert the Saturation Knob plug-in, set for very little
distortion, into a Bus or FX Channel during mixdown. Turn up the Send controls for the individual
Channels, but remember that a little goes a long way.

Where to Insert EQ with Saturation


“Clean” effects that follow distortion can smooth the distortion. The classic example is reverb. Adding
some gorgeous room ambiance to a distorted signal takes off the edge. But placing distortion after
reverb will distort the reverb tails, which sounds unrealistic as well as dirty.

This also applies to discrete echoes (delay). With post-distortion echo, the echoes remain distinct.
When distorting an echoed sound with feedback, the distorted echoes will eventually degenerate into a
distorted mess (but if that’s what you want...now you know how to do it!).

One possible exception is EQ. You usually want EQ after distortion, where it alters the distorted
sound’s timbre. But just as EQ before compression gives a more “frequency-sensitive” effect, EQ
before distortion causes the selected frequency ranges to distort more readily. For example, if you boost
a synth’s midrange so it distorts sooner than the bass, the melody gets chunky but the bass doesn’t.
Boosting EQ on a guitar’s higher notes causes leads to go into distortion sooner than lower notes. I
often add EQ both before and after distortion—the first to alter what gets distorted, and the second to
alter the distorted sound itself.

Delay
Adding delay to tracks has been popular since “slapback” echo from a tape recorder graced rockabilly
rhythm guitar and vocals. Even subtle amounts of delay can help fill out a sound, particularly vocals.

Delay Parameters
There are three main parameters associated with delay effects like the Analog Delay and Beat Delay.

Delay Time

This sets the time between receiving the input signal and generating the first echo. When adding
feedback to produce multiple echoes, delay time also represents the time difference between
subsequent echoes. You can dial in a specific delay time, like 300 ms, with the Analog Delay’s Time
control (Fig. 10.4), or sync to specific rhythmic values, like 1/4 notes or dotted-eighth notes, by
clicking the Sync button. This calibrates the time control in beats. The Beat Delay is designed
specifically for rhythmic delays, as set by the Beats and Offset parameters (Fig. 10.4).

Figure 10.4 These delays are somewhat similar, but the analog delay allows discrete or rhythmic delay values,
while the Beat Delay is oriented specifically toward rhythmic delay.

The Groove Delay takes echoes beyond simple repeats, to fashion them into grooves. There are four
tapped delays. Each one can align individually to a rhythmic grid (Fig. 10.5).

Figure 10.5 The Groove Delay is ideal for EDM, hip-hop, and other beat-oriented music.

Each tap has multiple parameters, including filters with X-Y “pads” that lend themselves to expressive,
real-time control, along with global Feedback and filter cutoff modulation controls.

Older hardware delay units may not offer sync to tempo. To calculate the delay needed for specific
tempos, use the formula 60,000/tempo in BPM = quarter note delay time in milliseconds. For
example, a quarter-note delay at 95 BPM would use a delay time of 60,000/95 = 631.6 ms.
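
Here is that formula as a tiny Python helper, extended to a few other common note values (the extra note-value factors are simple arithmetic, not something from the formula above):

    def delay_ms(bpm, note="1/4"):
        """Delay time in milliseconds for a given tempo: 60,000 / BPM = quarter-note delay."""
        quarter = 60000.0 / bpm
        factors = {"1/4": 1.0, "1/8": 0.5, "1/8 dotted": 0.75, "1/16": 0.25}
        return quarter * factors[note]

    print(round(delay_ms(95), 1))                # 631.6 ms, as in the example above
    print(round(delay_ms(95, "1/8 dotted"), 1))  # 473.7 ms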

Tip: An eighth-note delay is good for thickening vocals. Mix the echo level fairly low—just enough to
reinforce the vocal.

Feedback

Turning up feedback (also called recirculation) creates multiple echoes by feeding the output back into
the input. When the echo re-enters the input, it’s delayed again, which creates another echo. Both the
Analog Delay and Beat Delay have EQ in the feedback path, so that the timbre of successive echoes
can change over time. For example, if the feedback path reduces the high frequencies, then each
successive echo will sound less bright than the previous one.
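
A minimal Python/NumPy sketch of that structure, a delay line with feedback and a simple one-pole low-pass filter in the feedback path, shows why each repeat gets darker. It is a generic model, not the Analog Delay’s design.

    import numpy as np

    def feedback_delay(x, sr, delay_ms=300.0, feedback=0.5, damping=0.3):
        """Echoes via a feedback loop; 'damping' low-passes each repeat so it gets darker."""
        d = int(sr * delay_ms / 1000.0)
        buf = np.zeros(d)              # circular delay buffer
        out = np.zeros(len(x))
        lp = 0.0                       # state of the one-pole low-pass in the feedback path
        idx = 0
        for i, s in enumerate(x):
            echo = buf[idx]
            lp = (1 - damping) * echo + damping * lp   # filter the echo before recirculating
            out[i] = s + echo
            buf[idx] = s + feedback * lp
            idx = (idx + 1) % d
        return out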

Mix

This sets the balance of echoes with the original, unprocessed signal. When inserting echo in a Bus or
FX Channel, Mix is usually 100% echo because the main Channel fader provides the unprocessed
sound.

Creating Wider-than-Life Sounds with Delay


Many mono sources (voice, vintage synths, electric guitar, etc.) benefit from stereo imaging. Some
plug-ins can make a mono track sound more like stereo through psycho-acoustic processing, but an
older option is to copy a track, move it slightly behind the original track to create a short time
difference between the two (e.g., around 23 to 40 ms), and then pan the two tracks oppositely. In some
cases, moving the original track slightly ahead of the beat so that the two timings average out makes
them sound like they’re more on the beat.

How much delay to add depends on the instrument’s frequency range. When summing two identical,
but delayed, signals to mono, there will always be phase differences, which cause frequency-dependent level changes (comb filtering). If the
delay is too short, the two signals may cancel quite a bit when summed to mono, which creates a thin
sound. Fine-tune the sound for minimum cancellation by setting the Main Bus output to mono
temporarily, and varying the time of one of the sounds. You can often find a “sweet spot” where the
sound is acceptable in mono. Another option is to lower the copied signal’s level, although this makes
the stereo imaging less dramatic.

If the delay is too long, you’ll hear an echo effect. This does create an even wider stereo image, but
there are rhythmic implications—do you want an audible delay? If the delay is long enough, the sound
will be more like two mono signals than a wide stereo signal.

The following option creates stereo from a mono signal, with minimal phase issues. The stereo sound
collapses back well to mono if needed.

1. Pan your main guitar track to center.


2. Send it to two FX Channels.
3. Insert an Analog Delay in each Channel, with Mix set for 100% delay, Factor to 1.00, Mod to 0.00,
and Width fully counterclockwise. Pan them oppositely. Set Damping as desired. Fig. 10.6 shows good
initial settings.

Figure 10.6 Initial settings for converting a mono guitar to stereo, with minimal phase or level issues when
collapsed back to mono. The two Analog Delay settings are identical, except that one has a delay time of 11 ms,
and the other, 29 ms.

4. Set different delay times, preferably somewhat far apart, for the two delays. I prefer prime number
delays (3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41 ms) so the delay timings don’t interfere with each
other. For example, set one delay to 11 ms, and the other to 29 ms.
5. Bring up the levels of the two delayed channels to create a pleasing stereo spread. It won’t be as
dramatic a spread as using one dry and one delayed sound panned oppositely (but frankly, that “super
stereo” effect sounds gimmicky compared to a full, robust stereo image). However, if you do want a
more dramatic stereo separation, drop the center channel by 6 dB compared to the FX Channels that are
panned right and left—you’ll still get most of the benefits of this approach. (You may need to group all
three Channels, and raise their levels a bit, to compensate for the drop from lowering the center channel
level.)

When you set the Main output mode to mono, you’ll hear virtually no difference between the mono and
“faux stereo” versions, other than the loss of stereo imaging. Choosing mono raises the signal in the center above
the delayed sounds, so there’s much less chance of audible cancellation. This also balances the level
better between the stereo and mono modes.
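Here's the same recipe reduced to a few lines of Python/NumPy (my own sketch of the routing, not the Analog Delay's algorithm): the dry signal stays centered, one copy is delayed 11 ms and panned left, and the other is delayed 29 ms and panned right.

import numpy as np

def faux_stereo(mono, sr, left_ms=11.0, right_ms=29.0, wet_gain=0.7):
    """Dry signal stays centered; one delayed copy goes hard left, another hard right."""
    dl = int(sr * left_ms / 1000.0)
    dr = int(sr * right_ms / 1000.0)
    n = len(mono) + max(dl, dr)
    left = np.zeros(n)
    right = np.zeros(n)
    left[:len(mono)] += mono                      # dry, equal in both channels (center)
    right[:len(mono)] += mono
    left[dl:dl + len(mono)] += wet_gain * mono    # 11 ms copy, panned left
    right[dr:dr + len(mono)] += wet_gain * mono   # 29 ms copy, panned right
    return np.stack([left, right], axis=1)

# Mono-compatibility check: collapse to mono and make sure the level holds up.
sr = 44100
t = np.arange(sr) / sr
note = np.sin(2 * np.pi * 220 * t) * np.exp(-3 * t)   # stand-in for a guitar note
wide = faux_stereo(note, sr)
mono_fold = wide.mean(axis=1)
print("stereo peak:", round(float(np.abs(wide).max()), 2),
      "| mono fold-down peak:", round(float(np.abs(mono_fold).max()), 2))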

Using Delay to Create Long, Trailing Echoes


With this delay effect, the echo trails off over time to provide a spacey, evocative delay. This is popular
in dance music, with the delay time synched to tempo.

Echo used more as an effect than for thickening will likely be mixed higher, have a longer delay time,
and include a significant amount of feedback to extend the echo “trail.” This may interfere with the
sound that’s being delayed, but there are ways to restrict the echo so it occurs only when you want it
(e.g., echoes on voice, when a vocalist isn’t singing).

Page 188
 Send effect Bus automation. Automate the Send control to the echo during sections where you
want echo. Vary the echo level, or Send Bus output fader, to mix the desired amount of echo in
and out.
 Event effect. Cut (split) an Event to create another Event to which you want to add echo, and
insert an echo effect in the new Event.
 Split the section to be echoed to another track. This approach’s advantage is that you may not
need to use automation, but just set up echo as a track plug-in. Echo will affect only the sections
that are moved to that track.
 Plug-in automation. Manipulate the desired controls (mainly feedback and mix), and record
your automation moves. Edit the automation envelopes further if needed.

Stereo Image Enhancers


Plug-ins like the Binaural Pan (Fig. 10.7) can turn stereo signals into “super-stereo” signals with an
extremely wide image, or narrow them down into mono. We’ll cover conventional panning in the next
chapter.

Figure 10.7 The Binaural Pan plug-in can widen or narrow stereo images.

Most image enhancers use mid-side processing, as covered in Chapter 7, to create a wider stereo image.
To recap, mid-side processing separates the audio into two components, the mid (what’s common to
both channels) and the side, or the difference between the two channels—basically, what’s not in the
stereo image’s center. However, you don’t need to know the theory, because you can simply use the FX
chain covered previously to do mid-side processing. Or, use the Binaural Pan, whose single control
either increases the difference signal for a wider image, or decreases it to make the stereo more mono.
Image enhancement applications include:

 In mastering, widen the overall track to make more room in the center for the lead vocal (which
is usually centered). However, making the sides more prominent may lower the center’s
apparent level.
 Produce a wider overall stereo image, which can help differentiate among instruments more
easily.
 Move the low frequencies to center in anticipation of doing a vinyl release, which requires that
the bass be centered.
 Spread pads, choirs, and background vocals across the stereo mix so you can lower their overall
level, yet retain their presence.
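To make the mid-side math described above concrete, here's a minimal sketch (plain NumPy, not the Binaural Pan's actual code). Raising the gain on the side component widens the image; lowering it pulls everything toward mono.

import numpy as np

def widen(stereo, side_gain=1.5):
    """stereo: array of shape (samples, 2). side_gain > 1 widens; < 1 pulls toward mono."""
    left, right = stereo[:, 0], stereo[:, 1]
    mid = (left + right) / 2.0       # what both channels share (the center)
    side = (left - right) / 2.0      # what differs between them (the stereo information)
    side = side * side_gain
    return np.stack([mid + side, mid - side], axis=1)

# side_gain = 1.0 returns the track unchanged, 0.0 collapses it to mono,
# and 2.0 is an aggressive widen that leaves more of a hole in the center.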

Page 189
Modulation Effects (Chorus, Flanger, etc.)
Modulation effects are used most often with individual instruments. They use a control
source, typically a low-frequency oscillator, to change an effect’s characteristics. For example, if a low-
frequency oscillator varies a short delay time back and forth between a maximum and minimum value,
this modulates (animates) the sound.

Ampire has convenient, stompbox-style modulation effects (Fig. 10.8). Note that you can bypass the
amp and cabinet to use only the effects section.

Figure 10.8 Ampire is an amp modeler, but also includes modulation effects.

Studio One also includes several modulation effects plug-ins (Fig. 10.9).

Page 190
Figure 10.9 Studio One’s modulation effects...they’re a mountain of modulation.

Page 191
 Flanger imparts a whooshing, jet-airplane sound that was popular in the 60s.
 X-Trem is a tremolo/auto panner with sync-to-tempo, four modulation waveforms, and a 16-
step sequencer that can produce varying levels or gates. This is super cool for adding rhythmic
effects.
 Phaser varies a signal’s phase while mixing it with a dry signal. This produces a sound similar
to flanging, and was popular before time-based modulation effects became common.
 Rotor, a rotating speaker simulator, includes a Drive control that emulates the ability to
overdrive the amp. This distortion effect is characteristic of rotating speaker sounds used in rock
music.
 Chorus multiplies the sound of one instrument so that it sounds like an ensemble. It can also
“double” signals; doubling adds the sound of the same instrument playing the same part, with
slightly different timing.
 Analog Delay may seem out of place as a “modulation” effect, but it has an LFO with
modulation options. This can add chorusing effects to delay, or create vibrato effects (Fig. 10.10
shows recommended initial settings for vibrato.)

Figure 10.10 Initial settings for a vibrato effect that uses the Analog Delay plug-in.

How to Adjust Modulation Effects Parameters


Modulation effects have similar parameters.

Page 192
Initial Delay

This sets the base amount of delay time, generally between 0 and 15 ms. With flanging and chorusing,
modulation occurs around this initial time delay. Flanging typically uses shorter initial delay times (0 –
7 ms). Chorusing uses longer initial delays (5 – 15 ms).

Mix or Depth

These vary the perceived amount of effect—Mix by altering the balance between the processed and dry
sounds, Depth by injecting more or less of the effect into the dry signal.

Feedback

Feeding some of the output back to the input creates feedback, which gives a sharper, more resonant
sound with flanging and phasing (somewhat like increasing a filter’s resonance control).

Modulation or LFO Width

With flanging, delay, or chorusing, modulation varies the initial delay time cyclically—the delay varies
between a higher and lower limit, which causes the pitch to vary between sharp and flat. With small
amounts of LFO width, the sound won’t be perceived as being out of tune. Synching the modulation
sweep to tempo causes the variations to happen in time with the rhythm.

A wide sweep range is important for dramatic flanging. Chorus and echo don’t need much sweep range
to be effective. Initial delay time and modulation interact; a given amount of modulation creates a
greater variation with longer delays.
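If you want a feel for how much detuning a given width and speed produce, the back-of-the-envelope math is straightforward (an approximation for illustration, not a published plug-in spec): the momentary pitch ratio of a swept delay is roughly 1 plus or minus the rate at which the delay time is changing.

import math

# Approximation: the momentary pitch ratio of a swept delay is 1 +/- d(delay)/dt.
def pitch_swing_cents(width_ms, rate_hz):
    """Triangle LFO sweeping the delay over width_ms (peak to peak) at rate_hz."""
    slope = 2 * (width_ms / 1000.0) * rate_hz    # delay change per second on each ramp
    sharp = 1200 * math.log2(1 + slope)          # ramp where the delay is shrinking
    flat = 1200 * math.log2(1 - slope)           # ramp where the delay is growing
    return round(sharp), round(flat)

print(pitch_swing_cents(2, 5))    # (34, -35): a gentle chorus-style wobble
print(pitch_swing_cents(8, 5))    # (133, -144): each swing is more than a semitone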

Modulation Waveform or Shape

The modulation source usually comes from low-frequency, periodic waveforms (triangle, sine, square,
and sawtooth waves). However, X-Trem’s step sequencer creates stepped changes that sync to the
song’s tempo.

Modulation Rate

This sets the modulation frequency if it’s not synched to tempo. When X-Trem does panning, you’ll
almost certainly want to sync to tempo, or the panning will fight the rhythm.

Modulation Effects Tips


 For vibrato (frequency modulation) effects, start with a short initial delay (2 to 5 ms), set mix
for delayed sound only, and modulate the delay with a triangle or sine wave at a 3 to 14 Hz rate.
 To create a comb filter (a filter type that creates multiple, deep notches), choose an initial delay
of 1 to 10 ms, minimum feedback, no modulation, and an equal blend of processed and dry
sound. (Note that the Autofilter includes an option to create comb filter responses.)

Page 193
 Placing these effects before distortion dilutes the effect—but you might like the way a high-
resonance flanging sound “cuts” through the distortion.
 Flangers can generate massive frequency response peaks and deep valleys. You may want to
follow the flanger with a limiter to restrict the dynamic range, but too much limiting will reduce
the flanging sound’s intensity.
 Inserting EQ before a flanger can optimize the timbre to work well with the flanging effect,
whereas inserting EQ afterward modifies the flanged sound.
 Generally, flangers, phasers, delays, reverb, and other time-based effects go toward the end of a
chain of effects.
 With guitar, tremolo was a common effect in vintage guitar amps, but was difficult to sync to
the music. For a vintage sound, consider not synching the tremolo to tempo, and inserting the
tremolo before distortion.

Pitch Correction
Pitch correction can fix out-of-tune notes with vocals and other instruments, but with extreme settings,
it provides the “hard” pitch correction effect that’s popular in hip-hop. Important: Pitch correction
works best with unprocessed sounds. Apply Melodyne to a vocal or other track before you add effects
(particularly time-based modulation effects).

Pitch correction has somewhat of a bad reputation because when used improperly, it can make vocals
sound unnatural or annoying. Yet when used properly, you don’t know it’s being used—so it doesn’t get
any credit. The key to transparent pitch correction is to correct only notes that sound wrong. If the
correction is so severe that a note sounds unnatural, and you’re not aiming for that unnatural sound, re-
record that part of the vocal.

Tip: Although this tip relates more to tracking than mixing, some singers are more daring with their
vocals if they know that they can use pitch correction to fix the occasional wrong note when mixing.
Some purists say pitch correction takes the soul out of a vocal, but concentrating too much on the pitch
instead of the performance can also take the soul out of a vocal.

Melodyne Essential (included in Studio One Professional) is one of the best pitch correction plug-ins
available. It can also tweak phrasing and timing. However, there are more evolved versions, like
Melodyne Assistant, Melodyne Editor, and Melodyne Studio. The Editor and Studio versions, while
pricey, can do pitch correction with polyphonic audio (like guitar chords), which sounds pretty
amazing.

How Pitch Correction Works


Automatic pitch correction software analyzes note pitches, compares them to the correct scale
frequencies, and then tunes the pitch sharp or flat to quantize them to the correct pitch (Fig. 10.11). You
can also correct note pitches manually, because sometimes you don’t want a note to hit exactly on
pitch, but perhaps be a bit flat to add tension.
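Stripped of all the analysis that makes Melodyne special, the core correction step looks something like this (a deliberately crude sketch, nothing like the real algorithm): convert the detected frequency to a fractional note number, pull it toward the nearest semitone, and convert back.

import math

A4 = 440.0

def snap_to_semitone(freq_hz, strength=1.0):
    """Pull freq_hz toward the nearest equal-tempered pitch.
    strength = 1.0 is hard correction; lower values retune only part of the way."""
    note = 69 + 12 * math.log2(freq_hz / A4)            # fractional MIDI note number
    corrected = note + strength * (round(note) - note)  # move toward the nearest semitone
    return A4 * 2 ** ((corrected - 69) / 12)

print(round(snap_to_semitone(452.0), 1))                # ~47 cents sharp of A -> 440.0
print(round(snap_to_semitone(452.0, strength=0.5), 1))  # half correction -> ~446.0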

Page 194
Figure 10.11 Melodyne has analyzed a vocal performance, and displays “blobs” that represent the vocal’s
notes. The notes on the left, before measure 24, have been corrected so they’re quantized to the pitch scale on
the left; to the right of measure 24, the notes have not been corrected.

Although some assume Melodyne Essential works only on vocals, it works on most monophonic
sources. I often use it with bass to flatten out the pitch variations that occur when a string decays.

The analysis data Melodyne generates can also create Note data. For example, play a bass part on
guitar, drag it into an Instrument track, and the Melodyne engine will convert the audio to Note data.
You can then transpose the part down an octave, and trigger a synth bass. Audio-to-MIDI conversion
(especially polyphonic) isn’t perfect, but editing the MIDI data can fix any issues.

Applying Pitch Correction


There’s not much to it. Either you turn off pitch quantization and click/drag notes manually, or select
notes and snap them to the scale.

Other Pitch Correction Applications


Pitch correction can do more than fix pitch.

Automatic Double-Tracking (ADT) for Vocals

Before applying pitch correction to a vocal, copy it to another track. If you plan to do pitch correction,
apply correction only to the original vocal. Then, open the copied vocal in your pitch correction
software, and add either no pitch/timing correction, or a slight amount (Fig. 10.12).

Page 195
Figure 10.12 The copied clip has a bit of pitch and time correction compared to the original clip, which creates
the slight differences needed for an automatic double-tracking effect.

The slight pitch and timing differences between the copy and the original can make the copied vocal
sound like a double-tracked vocal. The Melodyne Editor version can take this further with subtle
formant changes, and/or random pitch and timing variations.

Add a Harmony

Pitch correction can synthesize harmonies. Although the sound quality won’t equal the original vocal,
this may not matter for a background part. And if there’s no harmony in the mix because it was out of
the singer’s range, synthesizing a harmony can be the answer.

Copy the original vocal track from which you want to create the harmony. Adjust the copy’s pitches to
create the desired harmony (Fig. 10.13).

Figure 10.13 The orange blobs are the original vocal. The blue ones were copied from the harmony track,
colored blue in a paint program for contrast, and overlaid on a screen shot of the original vocal.

Page 196
Create Heavy Drum Sounds

To add depth to drums, copy the drum track, and then lower its pitch several semitones. Mix this about
6 dB lower than the main drum track for a fatter drum sound. Or, “tighten” drum sounds, particularly
kick and toms, by raising the copied pitch a couple semitones.

Full, Tight Kick Drums

Make two copies of a kick track. Tune one up three semitones, and tune the other down two semitones
for a full, tight kick. Varying the pitch more than this can loosen the timing.
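For reference, the arithmetic behind these shifts is standard equal-temperament math (not specific to any plug-in): each semitone multiplies the frequency by the twelfth root of two.

# Each semitone multiplies the frequency by the twelfth root of 2 (about 1.0595).
for n in (3, -2, -12):
    print(f"{n:+d} semitones -> frequency ratio {2 ** (n / 12):.3f}")
# +3 semitones -> frequency ratio 1.189
# -2 semitones -> frequency ratio 0.891
# -12 semitones -> frequency ratio 0.500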

Octave Divider Bass

For more low-end authority, Melodyne can create decent octave-below effects. Copy your bass track,
select all, and drag down one octave. EQ the copied track to take off most of the highs and boost the
bass. You don’t need to mix in too much octave-below signal to add some fat.

Pitch Uncorrection

The human ear doesn’t always want perfection. I recorded two backup vocals that didn’t seem quite
right, even though their pitches—and those of the track underneath them—were accurate. But that’s what
wasn’t right: selectively flattening the pitch of a few notes in the background vocals made the sound
more interesting.

Restoration Plug-Ins
Studio One doesn’t include restoration plug-ins (some commercial restoration plug-ins cost more than
Studio One itself), but hopefully you won’t need to use them. Nonetheless, sometimes tracks will have
crackles, pops, distortion, or hiss. Restoration plug-ins tend toward heavy CPU consumption, and you
may not be able to monitor them in real time. Instead, render the tracks and then undo if you don’t like
the result.

Tip: De-crackling plug-ins that reduce vinyl surface noise can make amp sims sound smoother.

Multiband Processing
This isn’t a specific effect, but a signal processing technique that applies to various effects. We already
met multiband dynamics in Chapter 8. Multiband processing, in general, splits a signal into multiple
frequency bands—e.g., lows, lower mids, upper mids, and highs—and then processes each band
individually. With hardware, this is complex, expensive, and difficult to implement. With Studio One,
multiband processing is easy, thanks to the FX Chain Splitter. It can split the audio spectrum into as
many as five bands.
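Here's the concept in miniature, using Python with NumPy/SciPy (my own illustration; these simple Butterworth filters don't recombine as cleanly as the Splitter's crossovers): split the signal into five bands, run each band through its own soft-clip "amp," then sum the results.

import numpy as np
from scipy.signal import butter, sosfilt

def split_bands(x, sr, edges=(120.0, 500.0, 1800.0, 6000.0)):
    """Split x into len(edges) + 1 frequency bands with Butterworth filters."""
    points = (0.0,) + tuple(edges) + (sr / 2.0,)
    bands = []
    for low, high in zip(points[:-1], points[1:]):
        if low == 0.0:
            sos = butter(4, high, btype="lowpass", fs=sr, output="sos")
        elif high >= sr / 2.0:
            sos = butter(4, low, btype="highpass", fs=sr, output="sos")
        else:
            sos = butter(4, [low, high], btype="bandpass", fs=sr, output="sos")
        bands.append(sosfilt(sos, x))
    return bands

def soft_clip(x, drive=4.0):
    return np.tanh(drive * x) / np.tanh(drive)   # stand-in for an amp or distortion stage

sr = 44100
t = np.arange(sr) / sr
riff = np.sign(np.sin(2 * np.pi * 110 * t)) * np.exp(-2 * t)        # crude stand-in for a part
multiband = sum(soft_clip(band) for band in split_bands(riff, sr))  # distort per band, then sum
wideband = soft_clip(riff)                                          # for comparison: all at once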

Page 197
Multiband processing is especially effective with distortion to create a more defined, articulated sound.
Of course, that’s not always what you want—sometimes a sprawling, dirty sound is ideal. But a track
processed with multiband distortion sounds more focused, and often, fits better into a mix.

Here’s an example of the Splitter in action (Fig. 10.14).

Figure 10.14 The Splitter is set up to do multiband processing. Frequency Split is selected, and below it are the
four split point frequencies.

The Splitter is using its Frequency Split superpowers to create five bands. Each band feeds an Ampire
(using the Crunch Boutique amp, no stomps, and the 1 x 12 American cabinet). The Mixtool at the
beginning gives about 10 dB of gain—because we’re filtering out so much sound going into each
Ampire, the extra gain helps hit the amps a little harder. The Pro EQ2 at the end does the following:

• Rolls off the lows, which tightens up the sound (more like going through an open-back cabinet).
• Shaves off the extreme highs to sweeten the sound.
• Adds an upper midrange lift.

The Binaural Pan widens the sound, and the Open Air reverb creates ambiance.

Page 198
Tip: The Splitter’s Mute Output buttons make it easy to hear what’s happening with individual bands.
You can optimize each band without being distracted by the sounds of the other bands.

Multiband processing can benefit effects other than distortion. For example with delay, delaying low
frequencies might add “mud” that doesn’t happen when you delay only the upper mids and treble. Also,
short, slapback-type delays on the higher frequency bands and longer delays on the low-frequency
bands may create a delay effect that fits better in a track. And splitting an instrument into four bands,
then chorusing them separately, each with slightly different LFO rates, can give gorgeous, lush
chorusing effects.

Key Takeaways
 Console emulation is a subtle effect, but can benefit your mixes.
 Tape emulation can emulate the sound of tape, but it’s simply a different type of processing—
not necessarily something magical.
 Don’t overlook distortion as a mixing tool. It can help make bass and kick drum more
prominent in a mix.
 Delay synched to tempo is a staple of many dance mixes. It’s particularly effective when there’s
an option to control the EQ in the feedback loop, so that successive repeats are less bright.
 Synchronizing delay time to tempo is often the preferred way of setting the delay time in
modern productions.
 Stereo image enhancers are best when used sparingly on individual tracks. Because they expand
outward by leaving more of a “hole” in the center, this can help avoid interference with sounds
panned to the center (voice, snare, etc.).
 Chorusing tends to diffuse the sound, which can place instruments more in the background.
 De-essers are helpful with vocals because they can remove objectionable “s” sounds. You can
then increase the overall brightness for added clarity and articulation, without over-emphasizing
the “s” sounds.
 Pitch correction gets a bad rap, because it’s often overused. Using it tastefully can result in
better, yet still natural-sounding, vocals.
 Use restoration plug-ins to remove glitches like pops, hiss, and clicks that could mar otherwise
excellent tracks.
 Multiband processing allows for more nuanced effects, and works particularly well with
distortion.

Page 201
Chapter 11 | Create a Soundstage

Our tracks are squared away: they sound great, carve out their part of the frequency spectrum, have
proper dynamics control, the levels are well balanced, and some of them even have cool “ear candy”
effects. But we’re not done yet.

We hear music in an acoustic space. Our ears receive cues that locate instruments closer to us or further
away, and to the right, left, or center. We also receive cues from the back and front. Often, one of
mixing’s goals is to create a convincing soundstage (acoustical environment) for your music. Your goal
might be traditional—to re-create the feel of a live performance—or to ignore reality completely, and
create a space that could exist only in a virtual world.

The main ways to create a soundstage involve panning (the placement of sounds in the stereo field),
along with reverberation and delay, which emulate an acoustical space. Although stereo’s spatial
options are limited to left, right, or in between, placing sounds in a soundfield is an important part of
mixing. Let’s start with panning.

Panning Basics
Each channel strip has a panpot (short for panoramic potentiometer) whose slider places instruments
within the stereo field (Fig. 11.1).

Page 202
Figure 11.1 The panpots in the center four Channels (outlined in orange) are moving these Channels’ stereo
position off center.

Stereo placement alters how we perceive a sound. Consider a doubled vocal line, where a singer sings a
part and then doubles it as closely as possible. Panning the parts to center gives a smoother sound,
which can help weaker vocalists. Spreading the parts more left and right separates the vocals, for a
more defined sound. This can help accent a good singer.

Because you can automate panning, an instrument needn’t have a static placement throughout a song.
You might pan a percussion instrument to center initially, but when a second percussion instrument
enters the mix toward the right, you could tilt the first percussion part toward the left. If these parts are
in the background, the changes won’t be distracting—they’ll be sensed more than heard.

Page 203
How Panning Differs for Stereo and Mono Tracks
The pan control works differently for stereo and mono audio tracks, as well as for MIDI.

 Pan places a mono track at a specific point within the stereo field—anywhere from full left to
full right.
 With stereo tracks, the panpot is a balance control. Moving the panpot off center to the right
turns down the left channel until, when all the way to the right, you hear the right channel only.
Moving the panpot toward the left makes the right channel progressively softer, and the left
channel progressively louder.

Pro Tools, one of the earlier DAWs, defaults to treating stereo signals as two mono signals. Each has its
own panpot, so the left and right channels can spread over any part of the stereo field. For example,
with two stereo rhythm guitar parts, one could span the range from left to center, while the other spans
the range from center to right. This still gives a stereo image, and keeps the guitars spatially separate.

Studio One defaults to treating stereo signals as true stereo tracks. However, the Dual Pan plug-in can
pan the right and left channels independently (Fig. 11.2), so both “standard” panning options are
available.

Figure 11.2 Studio One’s Dual Pan plug-in provides independent pan control over the left and right channels. Its
five different panning curves determine how the amplitude changes as you pan from left to right. -3 dB Constant
Power Sin/Cos maintains a constant perceived volume while panning.
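As a side note, the -3 dB constant-power law mentioned in the caption is easy to express (a generic sketch of the math, not PreSonus's exact curve): the left and right gains follow a cosine and sine so that their squared sum stays constant.

import numpy as np

def constant_power_pan(mono, position):
    """position: -1 = full left, 0 = center, +1 = full right. Sine/cosine gains keep
    left^2 + right^2 constant, so the perceived level stays steady across the sweep."""
    angle = (position + 1) * np.pi / 4           # map -1..+1 onto 0..90 degrees
    return np.stack([np.cos(angle) * mono, np.sin(angle) * mono], axis=1)

for pos in (-1.0, 0.0, 1.0):
    left, right = constant_power_pan(np.ones(1), pos)[0]
    print(f"position {pos:+.0f}: L = {left:.3f}  R = {right:.3f}")
# At center, both channels sit at 0.707 (3 dB down), hence "-3 dB constant power."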

Panning and MIDI


Continuous controller #10 is the default for panning. With Instrument tracks, the pan control sends
these messages to place the instrument in the stereo field. Any plug-in that recognizes this controller
will react to it. When controlling outboard gear, you can create a pan envelope, and then send this data
to a MIDI output port that patches into the hardware gear.
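For example, a pan envelope destined for hardware ultimately becomes a stream of controller #10 messages. Here's what one such message looks like using the third-party mido library for Python (the port name is invented; substitute whatever name your MIDI interface actually reports):

# A hypothetical example using the third-party mido library; the port name below is
# made up, so substitute whatever your MIDI interface reports.
import mido

with mido.open_output("Hardware Synth Out") as port:
    # Controller #10 is pan: 0 = hard left, 64 = center, 127 = hard right.
    port.send(mido.Message("control_change", channel=0, control=10, value=96))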

Page 204
Panning Tips
There’s more to panning than stereo placement. These tips can optimize a project’s panning:

 Panning bass frequencies. Bass frequencies are less directional than highs. Most engineers
place the kick drum and bass straight down the center (unless the bass is a synth type, and
recorded in stereo).
 Panning high frequencies. Highs are directional, so placing higher frequency instruments
(shaker, tambourine) further to the left or right gives the illusion of a wider stereo field.
 Audience perspective or performer perspective. As you set up stereo placement for
instruments, think about your listener’s position. For a drummer the hi-hat is on the left, and the
floor tom on the right—for the audience, it’s the reverse.
 Avoid extreme panning. I generally don’t pan to the extreme left or right, but always at least a
little bit toward center. If the sound is full right or left, it’s coming from only a single point
source, and sounds less realistic.
 Consider timbral balance when panning. If you’ve panned a hi-hat (which has lots of high
frequencies) to the left, pan other high-frequency sounds (e.g., tambourine or shaker) somewhat
to the right. The same concept applies to any instruments with overlapping frequency ranges.
 MIDI panning. Panning an instrument sound back and forth with an LFO may sound
gimmicky because you hear an audible sweep. Modulating panning with velocity can sound
more natural. When you first hit a note, its stereo position depends on the velocity, but as it
sustains, rather than move around, it will retain its location in the stereo field until replayed.
 Locate sounds in space. When you play the piano, there’s a sense of lower notes emanating
from the left, and higher notes from the right. You’ll probably want a similar spread when
mixing, at least for solo piano.
 Synthesizer splits. Synthesizers with split functionality (i.e., separate keyboard sections can
play different sounds) can enhance spatial placement because the different splits can have
different stereo positions. This isn’t the most realistic option for imitating the real world, but
hey—it’s a synth.
 Synthesizer pan modulation. If you assign panning to note number, this will create a wide
stereo spread. However, unless the keyboard is the major focus, you’ll probably want to narrow
the range somewhat. For example, if guitar is another major melodic instrument, try spreading
the guitar from center to left of center, and the keyboard from center to right of center.
 Panning and Delay. Placing a delayed sound in the same spatial location as the main sound
may cause the echoes to obscure the main notes. To avoid this, if your instrument is weighted to
one side of the stereo spread, weight the delayed sound (set to delayed only/no dry signal) to the
other side of the spread. With stereo delay on a lead instrument that’s panned to center, try
panning one channel of echo toward the left, and the other toward the right. Also, polyrhythmic
echoes can give lively “ping-pong” effects. This may sound gimmicky (not always a bad thing!)
but if the echoes are mixed relatively low and there’s some stereo reverb, the sense of
spaciousness can be huge.
 Panning with two mics on a single instrument. For a bigger sound when using two mics on a
single instrument, pan the right mic track full right, the left mic track full left, then duplicate the
right and left mic tracks. Pan the duplicated tracks to center, and then bring the duplicated

Page 205
tracks down about 5-6 dB (or to taste). This fills in the center hole that normally occurs by
panning the two main signals to the extreme left and right.

The Audio Architect: Build An Acoustical Space


You can build an acoustic space by adding reverberation and delay to give depth to the normally flat
soundstage. If your reference point is live performance, an overall reverb can create a particular type of
space (club, concert hall, auditorium, etc.). A second reverb can add special effects, such as a “splash”
on a snare drum hit, gated reverb on toms, or a diaphanous reverb on vocals.

Tip: Vocals often have a separate reverb because they’re mixed front and center, so the reverb is
optimized specifically for vocals.

About Reverb
If a room’s acoustics are baked into a track, aside from “de-reverb” restoration plug-ins (which may or
may not be effective), there’s little you can do to remove those characteristics. Although studios often
strive to provide a neutral, dry-sounding environment, a totally dry sound goes against our expectation
of hearing music in an acoustic space. In addition, some studios have rooms that are purposely not
neutral, because they have a desirable sound, or have harder surfaces for a more “live” sound.

Different Reverb Types


Originally, reverb was a physical space, like a purpose-built concrete chamber. Perhaps the most
famous reverb setup was designed by Les Paul for the Capitol Records studios in Los Angeles. He
specified eight concrete echo chambers, each with certain sonic characteristics, dug 30 feet into the
ground—no other reverb sounds like this. Another famous example is the Olympian drum sound on
Led Zeppelin’s “When the Levee Breaks,” which was the result of John Bonham’s kit having room
mics set up in a three-story stairwell with some added delay. Sonic Vista Studios in Ibiza, Spain, has an
incredible reverb sound from using a centuries-old water well as a reverb chamber.

Although you can’t fit a concert hall in your project studio, it’s possible to model acoustic spaces with
surprising realism—from the sound of classic gear, to virtual spaces that can’t exist in real life. There
are two main electronic reverb technologies.

Synthesized Reverb

Synthesized (also called algorithmic) reverb ruled the digital reverb world for several decades. This
technology recreates the reverb effect with three basic processes:

 Pre-delay emulates the time for a signal to travel from the source to the first reflective surfaces.
 Early reflections are the initial sounds that happen when sound waves first bounce off room
surfaces.
 Decay is the wash of sound caused by emulating the myriad reflections that occur in a real
room, with their various amplitude and frequency response variations (Fig. 11.3).

Page 206
Figure 11.3 Synthesized reverb deconstructs reverb into these parameters.

Tip: Most digital reverbs are not true stereo devices. They mix stereo inputs into mono, and synthesize
a stereo space. This is why you can obtain stereo reverb effects with a mono signal like voice.

Convolution Reverb

Convolution reverb is like taking an audio snapshot of an acoustic space’s characteristics, and then
imparting those characteristics to your audio. As an analogy, think of the impulse as a “mold” of a
particular space, and you “pour” the sound into the mold. If the space is a concert hall, then the sound
takes on the concert hall’s characteristics. This produces a realistic sound, much like how a keyboard
sampler produces more realistic sounds than a keyboard synthesizer.

The tradeoff for this realism is difficulty in editing the sounds. However, the Open Air reverb is quite
editable, and as easy to use (and understand) as standard reverbs. Changing parameters can feel
sluggish due to all the calculations being performed, but this is expected with convolution reverbs.
Choosing convolution or synthesized reverb is a matter of taste (Fig. 11.4).

Page 207
Figure 11.4 The reverb on the left is the convolution-based Open Air reverb, while the reverb on the right is the
algorithmic Room Reverb.

To make an analogy with the visual arts, synthesized reverb is like a painting, while convolution reverb
is like a photograph.

Tech Talk: The Convolution Process

Although well-suited to reverb, convolution impulses can be speaker cabinets, guitar bodies, equalizer
responses—just about anything. Downloadable impulses are available on the web (many are free) that
can load into convolution reverbs. It’s even possible to create your own impulses; I’ve created several
using white noise, which can give excellent reverb sounds if you shape the frequency response and
decay characteristics. Convolution-based processors can also import just about any kind of audio file
and apply convolution; experiment with loading drum loops, individual instrument notes, and sound
effects—the results can be surprising, and sometimes useful.
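Here's the white-noise trick in miniature (a toy sketch using NumPy/SciPy, nowhere near as carefully shaped as a commercial impulse): generate noise, impose an exponential decay so it dies away by roughly 60 dB, and convolve it with the dry track.

import numpy as np
from scipy.signal import fftconvolve

def noise_impulse(sr, decay_seconds=2.0):
    """A homemade impulse response: white noise with an exponential decay envelope
    that falls roughly 60 dB over decay_seconds."""
    t = np.arange(int(sr * decay_seconds)) / sr
    ir = np.random.randn(len(t)) * np.exp(-6.9 * t / decay_seconds)
    return ir / np.max(np.abs(ir))

def convolution_reverb(dry, ir, wet=0.3):
    """Convolve the dry signal with the impulse and mix the result back in."""
    tail = fftconvolve(dry, ir)
    tail = tail / (np.max(np.abs(tail)) + 1e-12)
    out = np.zeros(len(tail))
    out[:len(dry)] = dry
    return out + wet * tail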

One Reverb or Many?


Early recordings had one reverb, with channel sends bused to it. The vocals usually sent more signal
than the other instruments, but the result was a cohesive group sound, due to the common acoustical
space.

Later on, studios sometimes used a specific vocal reverb to make the voice more distinctive. Plate
reverb (an early type of mechanical reverb) was a popular choice because it had a crisper, brighter
sound than a traditional room reverb. This complemented voice well.

When inexpensive digital reverbs became available, some folks went crazy—one reverb type on the
voice, gated reverb on drums, some gauzy reverb on guitars, and maybe even one or two reverbs in an

Page 208
aux bus. The result bears no resemblance to the real world. That’s not necessarily a bad thing, but if
taken to extremes your ears—which know what acoustical spaces sound like—recognize the sound as
artificial.

A convincing plate sound can work well as a vocal insert effect, complemented by a good room or hall
reverb in an FX Channel for your other tracks. To create a smoother blend, send some of the vocal
reverb to the main reverb. This will likely require dialing back the dedicated vocal track reverb level a
bit, because the main reverb will add to the vocal reverb level.

Tip: If a part is questionable, reverb can’t salvage it. A bad part with lots of reverb is still a bad part—
to paraphrase Nike, just re-do it.

Supplementing Reverb with a Real Acoustic Space


Although digital reverb’s sound quality has increased exponentially over the years, there’s nothing
quite like a physical acoustic space. Even relatively small, highly reflective spaces can provide a useful
ambiance. For example, send a Bus output to a speaker in your bathroom, place a mic in the bathroom,
and bring its output back into an audio interface input.

Tip: You can modify the bathroom’s acoustics by adding or removing towels, and opening or closing
the shower curtain.

Send some of your vocal channel’s digital reverb output into this space, and return just enough of the
acoustical reverb to provide the equivalent of “sonic caulking” to the digital reverb sound. The room
will add more complex early reflections than all but the very best digital reverbs. You may be surprised
at how much this enhances the sound.

Also consider adding a slight amount of feedback to the room reverb by sending some of the room
reverb back into the Bus output feeding the speaker. Caution: keep the monitors at extremely low
levels as you experiment—you don’t want a major feedback blast.

Reverb Parameters and Controls


A sophisticated reverb has many parameters. It’s not always obvious how to optimize these for specific
recording situations. Here are some guidelines—but as always, there are no rules.

Early Reflections

Also called initial reflections, the associated parameters control the time between when a sound occurs
and when its sound waves hit walls, ceilings, etc. These reflections sound more like discrete echoes
than reverb. The early reflections time is usually variable from 0 to around 100 ms, and a level
parameter sets a balance with the overall reverb decay. Increase the time for the feeling of a bigger
space; for example, complement a large room size with a fairly long pre-delay time.

 Personal bias alert: With vocals, I prefer not to use a lot of early reflections or pre-delay, so that
the vocal stands out.

Page 209
 With drums, rhythm guitar, piano, and other primarily percussive instruments, the secondary
percussive attack from the early reflections can be distracting, if the reverb’s level is fairly high.
A short amount of pre-delay, mixed behind the main reverb decay, can work well. However, if
you’re trying for a more intimate, ensemble sound, consider avoiding pre-delay.
 Pads and sounds with long attacks, like a distant flute or brass, often benefit from pre-delay to
place them further back in the soundstage.

Decay Time and Decay Time Frequencies

Decay is the sound created by the reflections as they bounce around a space. This “wash” of sound,
also called the reverb tail, is what most people associate with reverb. The decay time parameter
determines how long it takes for the reflections to run out of energy and become inaudible. Long reverb
times may sound impressive on instruments when soloed, but rarely work in a mix unless the
arrangement is sparse.

Many reverbs offer a crossover frequency, which divides the reverb into high and low frequency
ranges. With Studio One, you can create multiple reverb ranges by using the Splitter in an FX Chain,
and adding one or more reverbs (see the Dual-Band Reverb section).

Reflexivity

This control is similar to the diffusion parameter found in other reverbs, but works oppositely: low
reflexivity (aka high diffusion in other reverbs) places the reflections closer together, while high
reflexivity (aka low diffusion) spreads them out. With percussive sounds, high reflexivity creates lots of
tightly-spaced, but not continuous-sounding, attacks—like marbles hitting steel. But with voice, which
is more sustained, high reflexivity can blend well with vocals. Low reflexivity settings may sound
overly thick.

However, with less complex material, you might want less reflexivity on the vocals for a richer sound.
For example, plate reverbs are popular with vocals because of their high diffusion characteristics. As
always...no rules.

Reverb Algorithm

The Room Reverb has four algorithms (Small Room, Room, Medium Hall, and Large Hall). Algorithm
is simply a fancy name for the code that’s emulating a particular space.

The equivalent concept with the Open Air convolution reverb is the impulse. Impulses capture the
sound of specific rooms (like particular concert halls or recording studio rooms). It’s also possible to
create impulses of older hardware reverbs.

Room Reverb Size, Width, and Height Parameters

These affect whether the paths the waves take while bouncing around in the virtual room are long or
short, and the directions they take. Just like real rooms, artificial rooms can have resonances, and some
frequencies where the reflections cancel or add to each other. If the reverb sound has excessive flutter

Page 210
(a periodic warbling effect), vary these parameters in conjunction with decay time for the smoothest
sound.

Room Reverb Damping and Population Parameters

Turn up Damping to attenuate the decay’s high frequencies. Population emulates the effect of people in
a room. Lower settings enhance the bass and reduce motion of the decay tail, while higher settings
attenuate bass and give the reverb tail more motion.

Dual-Band Reverb
Separating reverb into multiple bands is one of my favorite techniques, because (for example) you can
have a tight kick ambiance, but let the hats and cymbals fade out in a sweet haze...or have a huge kick
that sounds like it was recorded in a gothic castle, with tight snare and cymbals on top.

Figure 11.5 FX Chain settings for a dual reverb setup.

Referring to Fig. 11.5, Splitter 2 uses a Normal split. One split provides the dry signal, while the other
goes to the reverbs. Splitter 1 does a Frequency split with one split going to a single Room Reverb
dedicated to the low frequencies, and the other split going to two Room Reverbs in series for the high
frequencies. The Split point (crossover frequency) is set around 620 Hz, but varying this parameter
provides different reverb characters.

Page 211
Instead of using a frequency split, you could feed two reverbs, and EQ their outputs. However, the
results aren’t the same. EQing before going into the reverb gives each reverb more clarity, because the
low and high frequencies don’t interact with each other while being reverberated. The three Mixtool
modules in Fig. 11.5 provide mixing for the dry, low reverb, and high reverb sounds.

Reverb Settings

For the FX Chain in Fig. 11.5, the low-frequency reverb has a shorter decay than the high-frequency
reverbs, but still gives a big kick sound (Fig. 11.6).

Figure 11.6 Low-frequency reverb settings.

The reason for using two Room Reverbs in series for the high reverb component is to increase the
amount of diffusion, and provide a smoother sound (Fig. 11.7).

Page 212
Figure 11.7 Two room reverbs in series can give more diffusion, and a smoother high-frequency sound quality.

Page 213
Use somewhat different settings for the two reverbs so that they blend, which increases the sense of
diffusion. There’s no strategy behind the above settings; I just copied one of the reverbs, and changed a
few parameters until the sound was smooth.

Figure 11.8 Several of the reverb parameters are brought out to Macro controls.

You’ll want to bring out several reverb parameters as Macro controls to simplify tweaking (see Fig.
11.8). What you assign is a matter of preference, but you can download the Dual Band Reverb FX
Chain at https://pae-marketing.s3.amazonaws.com/Misc/Dual Band Drum Reverb.multipreset. After
you do, here are some ideas on how to tweak the sound.

Start with the Dry, Low Verb, and High Verb controls at minimum. Bring up the Low Verb, and adjust
Low Verb Mix and Low Length for desired low end. Then turn down Low Verb, bring up High Verb,
and adjust its associated controls (Hi Verb Mix, Hi Verb Length, and Hi Verb Damp). With both Low
Verb and High Verb set more or less the same, go into the Routing section and vary Splitter 1’s
crossover frequency (the slider below Frequency Split). After finding the optimum crossover point, re-
tweak the mix if necessary. Finally, choose a balance of all three levels with the Dry, Low Verb, and
High Verb controls.

All these controls impact the overall reverb character. Increasing the low frequency length creates a
bigger, more massive sound. Increasing the high frequency length gives a more ethereal effect. With
few exceptions, this is not the way high-frequency sounds work in nature, but an extended high-
frequency decay can sound excellent with vocals because in addition to adding more reverb to sibilants
and fricatives, it minimizes reverb on plosives and lower vocal ranges.

Incidentally, three Room Reverbs require a decent amount of CPU. One workaround is to use the
Perform mode, which saves CPU, for the low frequency reverb. This impacts the sound less than using
Perform for the two high frequency reverbs.

Page 214
Tip: With any reverb, reducing low frequencies (below 100 - 300 Hz) going into reverb can reduce
muddiness and increase definition. Conversely, if your reverb sounds overly metallic, try reducing the
highs starting at 4 - 8 kHz.

Create Virtual Room Mics with Delay


Along with reverb, room mics are another common technique for enhancing the stereo image, by
giving a sense of space. We lose that sense of space when doing only close-miking or direct injection
recording. Typically, room mics are a distance from the sound source, and capture the short delays that
occur when sounds bounce around in the room while recording.

If room mics weren’t present while recording, there’s a way to do a credible emulation: Split the audio
into multiple parallel delays, set for short times and delayed sound only, and mix these in with the dry
track(s). This is most suitable as a Bus or FX Channel effect, but you might want to use it as an insert
effect with drums, or other instruments that benefit from being in an acoustic space. This effect is so
associated with the sound of miking an instrument in a small room that our brains think “Aha! This was
recorded in a small room!”

The Setup
Split the audio into four parallel Analog Delay processors. Set up the Splitter for four splits, choose
Channel Split for the Split Mode, then insert the four Analog Delays (Fig. 11.9).

Figure 11.9 FX Chain setup for virtual room mics.

Set the controls on the four delays identically (Fig. 11.10) except for the four time parameters: turn off
Sync, and choose 11, 13, 17, and 23 ms. These are prime numbers, so they don’t create resonances with
each other. For a bigger room, choose another prime number series with longer delays, like 19, 23, 29,
and 31 ms.
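If you're curious what this looks like as signal flow, here's a bare-bones sketch in Python/NumPy (an illustration of the parallel-delay idea, minus the feedback and high-cut filtering the Analog Delays provide):

import numpy as np

def virtual_room_mics(dry, sr, delays_ms=(11, 13, 17, 23), room_level=0.25):
    """Mix several short, prime-numbered delays (delayed signal only) behind the dry track."""
    out = np.pad(dry.astype(float), (0, int(sr * max(delays_ms) / 1000.0)))
    for ms in delays_ms:
        d = int(sr * ms / 1000.0)
        out[d:d + len(dry)] += (room_level / len(delays_ms)) * dry
    return out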

Page 215
Figure 11.10 Suggested settings for one of the four delays. The others use different delay times.

For convenience in adjusting multiple parameters at once, it’s helpful to add macro controls to the FX
Chain for the Mix, Feedback, and High Cut parameters. Too much feedback gives a metallic sound, so
you probably won’t want the feedback to go much above 30 to 60%. With percussive instruments like
drums, you’ll want more feedback than with sustained instruments. Here are some additional tips
regarding this technique:

 Beware of phase cancellations if you mix in too much delayed signal. Although the goal is to re-
create the phase cancellation/addition effects found in rooms, high levels of processed signal
will cause excessive cancellation. Set the Main Bus output to mono temporarily to confirm that
the sound is still acceptable.
 Mixing in just a little bit of delay is sufficient to give a convincing effect.
 With mostly mono source material, this technique improves stereo imaging. With stereo source
material, using short delays may “monoize” the signal and make the stereo spread less obvious.
Sometimes this is a benefit as it provides an overall sonic ambiance for instruments like drums.
 These types of delays can sound good on vocals, but there’s still nothing like a nice, warm
chamber or plate reverb algorithm for wrapping around a voice.

Plan Ahead with Reverb and Panning


Although you can just move panpots around and add reverb until you’re happy with the sound,
consider planning ahead by drawing a diagram of the intended soundstage (like the way theater people
draw marks for where characters are supposed to stand). Place sounds further back by lowering the
level, and possibly adding more reverb and/or taking off some of the high frequencies. Closer sounds

Page 216
are louder, drier, and sometimes brighter. When mixing, this diagram can be a helpful guide to stay on
track.

Key Takeaways
 It’s rare to hear recorded music where many tracks don’t have at least some reverb.
 Algorithmic and convolution reverbs provide very different effects. Try both to determine
which option flatters your music the best.
 In most cases, you don’t want a lot of low-frequency reverb with voice. Reverb on the vocal’s
higher frequencies is more common.
 Instruments with sustained notes like voice, organ, and strings can often benefit from low
diffusion reverb settings. Too much diffusion may produce a “thick” reverb sound that competes
with the main sound.
 High diffusion settings usually benefit percussive sounds.

Page 219
Chapter 12 | Mix Automation

Now that the mix is on its way, it’s time to fine-tune levels and other parameters via automation that
remembers your mixing moves.

Recording and playing back fader moves is the classic automation application. For example, if you tell
Studio One to write automation and pull down a vocal track’s fader to reduce the level, automation
remembers this move. When you tell Studio One to read automation, it reproduces the move when the
song passes through that section. Any controls you automated will move as if by magic.

Furthermore, individual Events can have envelopes for fade in, fade out, and level (see Chapter 6). This
is independent from Track automation, which affects all Events in a track. For example, you can cut an
Event into smaller pieces and vary their levels with Event envelopes, or add fade ins to reduce breath
inhales for individual Events, but vary the overall level of a Track’s Events with track automation.

Signal processing and virtual instrument plug-ins have automatable parameters, although the specifics
depend on the plug-in design. We’ll cover this later.

What You Can Automate


For individual tracks, you can automate:

• Level
• Pan
• Mute (but not solo)
• Send level
• Send pan
• Input controls (input gain and polarity)
• Parameters for effects inserted within tracks (EQ, dynamics, and the like)

MIDI tracks implement automation similarly, but have a different repertoire of controllable parameters.
Volume, modulation, pitch bend, and panning are common. However, you can automate any parameter
that responds to MIDI controllers by creating MIDI automation envelopes that transmit matching
controller data. We’ll cover this later in this chapter.

Automation simplifies adding nuanced expressions to electronically-oriented music. With automatable
plug-ins, parameter variations can help the music come alive.

With any type of automation, you can overdub (and edit) automation data to tweak one parameter to
perfection, then another, and so on.

Page 220
Automation Basics
Recording automation is different from recording audio or MIDI data. In most cases, the transport
doesn’t need to be in record mode, so you can record automation without being concerned about
overwriting recorded audio or MIDI data.

Studio One shows automation as a line (called an envelope) that represents the value of the parameter
being automated. The envelope can be superimposed on the track itself, or displayed in a lane that’s a
subset of a track (Fig. 12.1).

Figure 12.1 Track 1 superimposes the envelope on the track; its drop-down menu chooses the automation
envelope to display. Track 1 includes automation for volume, and the high-mid-frequency gain for the track’s Pro
EQ2. Track 2 shows automation envelopes in separate lanes; the Automation Show/Hide button is outlined in red,
and the Expand Envelopes button to show the lanes is outlined in yellow.

There are two ways to show automation envelopes in the Arrange view:

 To superimpose the automation envelopes on the track Events globally, type A or click the
Automation Show/Hide button (the envelope graphic with nodes) at the top of the Track
Column. To show/hide automation for individual tracks, right-click on the track, and toggle
Show/Hide Automation. A drop-down menu below the track header chooses the automation
envelope to be displayed.
 To show the envelopes in lanes below the track Events, click on the expand envelopes button in
the track’s lower left, or right-click on the track in the Track Column, and choose Expand
Envelopes.

Page 221
With most audio tracks, I prefer superimposing the envelope on the track to see the correlation between
the automation envelope and audio. Seeing automation in lanes can be useful with virtual instruments,
where you often automate multiple parameters, and need to see their relationship to each other.

Until you write automation, a parameter’s envelope will be a straight line. Clicking on this line or
manipulating controls creates nodes, which are little dots. In addition to overwriting or editing these by
moving controls, you can edit nodes manually. Dragging a node higher or lower changes the parameter
value, while dragging left or right alters its position on the timeline. With enough nodes, you can draw
detailed automation data. Also, you can click between nodes, and drag up or down to bend a straight
line to a curve (Fig. 12.2).

Figure 12.2 The blue line is changing the vocal’s volume. Note how two of the automation lines are curved.
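Conceptually, an envelope is nothing more than a list of time/value nodes that playback interpolates between (a bare-bones illustration, not Studio One's internal format):

import numpy as np

# An envelope is a list of (time, value) nodes; playback reads the value at the
# current position by interpolating between the surrounding nodes.
times = [0.0, 4.0, 8.0, 12.0]     # node positions in seconds
values = [0.0, -6.0, -6.0, 0.0]   # node values, e.g., a volume offset in dB

playhead = np.array([2.0, 6.0, 10.0])
print(np.interp(playhead, times, values))   # -> [-3. -6. -3.]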

Audio Track Automation Methods


There are four main automation methods—some real-time, and some offline.

Method 1: Record On-Screen Control Motion


This method accommodates the human touch. You move an on-screen fader (or other control) while the
song plays. You can edit your moves later. Although a mouse can control only one parameter at a time,
a multi-fader hardware control surface allows controlling multiple parameters simultaneously (e.g.,
levels for lead and rhythm guitars, or individual tracks with multitracked drums).

To record volume automation, choose Volume from the drop-down menu below the track header, then
click on the Automation Mode button (to the right of the Solo button) and select Write from the drop-
down menu. You can also select Write from the automation mode button toward the bottom of a
Channel in the Mix view (Fig. 12.3).

Page 222
Figure 12.3 In Write mode, automation data is written continuously, based on a control’s position.
The fields that show the options to enable writing are outlined in yellow.

Start playback, then click on the fader and drag it to record the fader movements. To stop writing
Volume automation, release the mouse button.

Automation Modes

What happens when you release your finger from the mouse button depends on the automation mode
(Fig. 12.4).

Figure 12.4 Different automation modes affect what happens to the envelope when you click on, or release, the
mouse button.

Page 223
The options are:

 Touch. Automation is written as long as the mouse button is held down. When you release the
mouse button, any existing automation plays back. Touch is like punching in automation,
whether holding down the mouse button, or touching a touch-sensitive hardware fader. This is
my most-used mode for writing and touching up automation.
 Latch. Existing automation remains unchanged until you start moving the fader, which then
overwrites any existing automation. When you release the mouse button, automation continues
being written at the last recorded automation value.
 Write. Automation based on the control’s position is always being written.

Choose Read to play back automation, or Auto: Off to ignore all automation data.

Method 2: Draw and Edit Envelopes


In addition to moving an on-screen control to create an automation envelope, you can:

 Draw a freehand envelope, or a periodic one (like a low-frequency triangle wave), using the
Paint tool.
 Edit existing automation data by adding, removing, or moving envelope nodes.
 Select a range of automation and cut, copy, paste, or perform other automation-related edits
within a track.
 Copy automation data to other tracks. This can include different automation types (e.g., copy
volume automation from one track to an effect parameter in a different track).

Because moving controls creates envelopes, and drawing envelopes moves the associated on-screen
controls, these options are somewhat interchangeable. The method to use depends on the application.

• For simple level changes, recording control motions is easy.


• To sync changes to the beat, drawing an envelope can create more precise automation changes
(as well as snap periodic-waveform envelope nodes to the grid), by selecting a Paint tool
waveform (Fig. 12.5).

Page 224
Figure 12.5 The upper image shows a volume envelope that uses the Paint tool’s sine wave option. This
creates a tremolo-type effect that fluctuates between a maximum and minimum level. The lower image shows a
sawtooth wave controlling the Pro EQ2’s Mid frequency band, with the Transform tool warping the shape.

With MIDI, choosing the controller whose envelope you want to draw, and then editing it, requires
more preparation. We’ll cover that later.

Method 3: Record Automation Moves from a Control Surface


Using an external, hardware control surface for automation is like recording a control’s on-screen
motion, because the control surface mirrors the movement of the on-screen control(s). After setting up
a parameter to respond to an external control signal, you start automation recording, move your
hardware faders, and the automation data will appear as envelopes—just as if you’d moved on-screen
faders.

Tech Talk: Control Surfaces

A control surface has hardware controls that you can assign (or come pre-assigned) to your host
program’s parameters. PreSonus makes four control surfaces for mixing: the FaderPort and ioStation
24c interface (both single channel), 16-channel FaderPort 16, and 8-channel FaderPort 8 (Fig. 12.6).

Page 225
Figure 12.6 The PreSonus FaderPort 8 provides eight faders, as well as multiple switches. While designed
specifically to integrate with Studio One, it includes modes that are compatible with other programs.

Higher-end control surfaces (like the FaderPort series) have touch-sensitive, motorized faders.
Touching the control is the same as pushing on the mouse button when doing mouse-based automation.
What happens when you push or release the mouse button depends on the automation Mode, as
described previously.

The big advantage of motorized faders is that they follow the position of the on-screen faders. So when
you want to overdub automation moves, just touch the motorized fader and continue moving it.
However, motorized faders do make noise when they move. You may want to ignore automation
temporarily if you’re recording vocals near the control surface.

Method 4: Snapshot Automation


Snapshot automation is not a dynamic process, but captures settings at a particular time. Use this when
you want a sudden change, rather than one that transitions over time. For example, with level, you
could set the fader to the desired level at the beginning of the big chorus, and then take a snapshot of
that setting by recording automation. That level would remain in effect until you either take another
snapshot, stop recording automation, or start recording automation by some other automation method
(Fig. 12.7).

Page 226
Figure 12.7 Snapshots have been taken for the 1st verse and 2nd verse to change levels for those sections.

Snapshot automation is useful if you’ve rendered a track with its automation and effects, but then want
static level changes in different sections of the song. To do snapshot automation, place the cursor where
you want the change to occur. Set the controls as desired, and the Automation Mode to Write. Start the
transport. Click Stop when you want the snapshot to end, or if you want another snapshot, let playback
continue slightly past where the next snapshot should begin. Place the cursor at the next snapshot,
adjust the controls, and start playback to record the new snapshot.

Automating Effect and Virtual Instrument Parameters

The same techniques that automate levels, panning, and other console parameters also apply to effects
and virtual instruments. With effects, automation can modify EQ settings as needed, bring delay
effects in and out at particular times, change the reverb Send control amount, and so on. With virtual
instruments, automation can avoid an overly static sound by varying parameter values.

There are several ways to add a parameter to a track so that it can be automated. The Control Link
option that’s unique to Studio One combines simplicity with versatility.

Call up whatever includes the control you want to automate (effect, Bus, instrument, etc.). When you
click on that control, its name appears in the Software Parameter window. To superimpose the
automation on a track, drag the hand down to the track (Fig. 12.8).

Figure 12.8 A vocal feeds an FX Channel via a Send level control; now the Send level can be automated.

If you prefer to create a separate automation track, then drag the hand down to where you want to add
the automation track (Fig. 12.9).

Figure 12.9 The Fine tuning parameter in Mai Tai will now have a dedicated automation track.

You don’t even need to use Control Link, or have a separate automation lane, if you just want to
superimpose the automation for note Events in an Instrument track. Start recording on your Instrument
track (make sure the Record Panel’s Record Mode is not set to Replace), and start moving controls on-
screen. These changes will superimpose automation envelopes on the Instrument track itself. When you
go to the Event’s Edit view (described later), you’ll see a new tab for the controller you just recorded.

Adding or Removing Envelopes
Although all of Studio One’s plug-ins operate consistently with automation, once you leave the Studio
One ecosystem, how you choose parameter automation may vary. For almost all virtual instruments,
you’ll be able to click on a control and use the Control Link hand. However, for something like a
wrapped DX or DXi plug-in (not recommended as a best practice, but sometimes that’s the only way to
use an older plug-in), Control Link won’t work. Instead, there’s an alternate way to select parameters to
automate that also works with any Studio One plug-in.

For example, suppose you have an old virtual instrument that assigns each parameter to a specific,
unalterable MIDI controller number. If you can automate using the plug-in’s on-screen controls, then
the controller number doesn’t matter: Tweak the control while recording, and the plug-in will play back
those control gestures. However, if you want to draw an envelope for a specific parameter, you need to
find and display the parameter you want to edit, and assign the envelope appropriately.

Finding the Parameter to Add or Remove


There are three ways to do this, but first, here’s a very important note: Tracks, Automation Tracks, and
Edit Views are independent. If you add a virtual instrument parameter to one of these, you will not be
able to add it to one of the others. Before committing an automation envelope to a Track, Automation
Track, or the Edit View, think about the environment in which you want to edit automation.

Global Method

Click the Show Automation button in the toolbar above the track headers, or type A. The Track Header
will default to Display: Off, but if you had previously selected an automation envelope to display, that
will appear instead (Fig. 12.10).

Figure 12.10 The Show Automation button is outlined in yellow. Track 1 shows the default automation setting of
Display: Off. Track 2 shows that Volume was selected for viewing/editing in the track. Track 3 is selecting
automation to view/edit. If no automation existed on that track, then it would show only the Add/Remove option.

Automation Track Method

Click the downward arrow in the field below the track name. It will look like the display for track 3 in
Fig. 12.10 above, and show the assigned parameters for automation.

Edit View Method

The available automation parameters will be displayed as tabs. Click on the three dots to the left of the
tabs to open the Add/Remove dialog box (Fig. 12.11).

Figure 12.11 Click on the three dots to add or remove automation envelopes.

The Add/Remove Dialog Box


When you choose Add/Remove for any of the above options, the following dialog box appears (Fig.
12.12).

Figure 12.12 Automation Add/Remove dialog box.

The right pane shows the available automatable parameters for the selected track (virtual instrument or
audio). To choose a parameter for editing, click on the parameter, then click on Add. This adds it to the
automation drop-down menu in a track, or creates a tab in the Instrument track Edit view.

Tip: With some virtual instruments, this list of parameters can be really long. You might find it easier
to record a short segment, wiggle the control for which you want to draw an envelope, and see the
name of the tab that shows up in the Instrument track’s Edit view.

Similarly, you can remove an automation assignment. Click on it in the left pane, and then click on
Remove.

MIDI Learn
Many plug-ins support MIDI Learn, one of the best inventions ever for those who use hardware
controllers. Select a parameter you want to control (usually by right-clicking on it), tell it to “MIDI
Learn,” and then move a control to complete the assignment process (Fig. 12.13). The control now has
real-time control over the parameter. You can usually clear the assignment by following the same steps,
but choosing the MIDI Forget option that appears instead.

Figure 12.13 Right-clicking on the Wah position parameter in the Line 6 Helix Native, and selecting MIDI Learn,
links the wah frequency to whatever hardware controller (pedal, knob, etc.) you move next.

MIDI Learn is excellent for temporary assignments. For more detailed and semi-permanent hardware
control, review the Control Link chapter in Studio One’s documentation, and choose the scenario that’s
most appropriate to your setup.

Using Hardware Control Surfaces


Software-based, virtual recording and automation lack the hands-on, real-time control of traditional
recording. Drawing an envelope with a mouse, or even a touch screen, isn’t like moving a fader. Many
people prefer tactile, physical control.

Hardware controllers aren’t essential; you can use a mouse to draw envelopes, or move on-screen
faders. Drawing envelopes is often more precise than using faders. However, hardware control surfaces
can help turn a mix into more of a performance, encourage spontaneity, and speed up your workflow
when they include additional functionality (like transport controls).

Before describing how to install and use control surfaces, let’s recap their history.

Traditional Mixing
Until digital audio appeared, all mixing surfaces had one control per function. Large-format analog
consoles were expensive and huge, but having all controls at your fingertips made it easy to tweak
anything you wanted, in real time. Before automation, more than one person would often be involved
with mixing, because it was physically difficult for one person to reach all the mixer’s controls (Fig.
12.14).

Figure 12.14 The Duality δelta Pro-Station from Solid State Logic has a layout that’s representative of large-
format mixing consoles.

Real-time, physical control invited “playing” the mixer, making it more of an instrument than just a
way to balance levels. Many engineers would ride gain in time with the music, and make spontaneous
adjustments to add character. When I did session work at Columbia Records, I saw the engineer mixing
the song “Brandy (You’re a Fine Girl).” He would slide the faders close to the right place, close his
eyes, and move them subtly and rhythmically. I was impressed by how this made the mix more lively. This
lesson stayed with me.

EQs had accessible knobs as well, which likewise invited real-time tweaking. The mixer was not a set-
and-forget device, but in the hands of a talented engineer, became a dynamic, living part of the music-
making process. This type of thinking still exists in DJ and “groove” types of music, but overall, it
seems mixing has become a more static process.

In the late 90s, Mackie and Digidesign created the HUI (Human User Interface) protocol for control
surfaces. This established compatibility between hardware and computer-based recording programs.
Later, the Mackie Control Universal established a more open protocol. Even today, most controllers
(even those optimized for specific programs, like the PreSonus FaderPort) offer HUI or Mackie Control
modes. Studio One is compatible with Mackie Control devices as well.

Control surfaces return some of the human element to mixing. Becoming familiar with a control
surface’s workflow can lead to a more fluid experience in the studio.

How to Choose a Control Surface


There are two main control surface families: general-purpose controllers designed to work with a wide
range of software, and controllers like the PreSonus FaderPort that are optimized for a specific program
(although as mentioned, this particular controller family is also compatible with other programs).

General-purpose controllers start with budget models that have, for example, 8 assignable MIDI knobs
or faders. Programming these to control Studio One may be tedious. If available, accessory software for
programming the device may help simplify matters.

You may not need a controller if existing equipment can do the job. For example, some keyboard
controllers (Fig. 12.15) include templates for various DAWs, which often include Studio One because
of its popularity. For relatively simple control surface applications that don’t require motorized faders,
keyboard controllers may be enough. Controllers with motorized faders (like the FaderPort) cost more
than controllers without motorized faders, but lead to smoother workflow.

Figure 12.15 Nektar’s Impact LX88+, LX61+, and LX49+ keyboards can also function as a control surface for
various programs, including Studio One.

Automation Applications
Although adjusting levels while mixing is an important automation application, there are other ways
automation can make better mixes—consider attitude. An orchestra’s conductor doesn’t just act like a
metronome, but cajoles, leads, lags, and adds motion to the performance. A control surface can translate
your human-generated gestures into machine-friendly automation.

Of course, the tracks will probably include some degree of animation anyway. But music seems to lose
some of its impact in the recording process, and real-time changes to various aspects of a mix can
create a more satisfying listening experience.

The Trim Function


Before covering specific applications, let’s look at a solution to a common problem. Suppose the
automation moves are good, but you need to bring a section of automation—or all the automation—up
or down a bit. The Trim function takes care of this. Use the Range tool to select the automation you
want to edit, and then float the Arrow tool just below the top of the track, until the Arrow turns into a
bracket. Click, then drag up to raise the selected automation’s value, or drag down to lower the value
(Fig. 12.16). Note that VCA Channels (Chapter 3) can also bring a channel’s overall level up or down,
while preserving any automation changes.

Figure 12.16 The Trim function is lowering the entire envelope by 3.8 dB relative to its prior level.

Studio One’s automation is deep, so it’s well worth reading the documentation to understand all the possibilities.

Add Expressiveness with Controllers


Small, rhythmic level variations are often felt rather than heard. Although many musicians are satisfied
to draw level changes with a mouse, this can’t have the same spontaneity as changes that result from
on-the-fly, creatively inspired decisions about when tracks should dominate or lay back in the mix
(Fig. 12.17).

Figure 12.17 This track shows the result of using human-controlled, fader automation to add rhythmic accents
to a drum track. Trying to create this type of complex envelope by clicking and drawing would be frustrating.

Aux Send Automation and Delays


To prevent delay from overwhelming a vocal track, insert the echo in an FX Channel and vary the
vocal’s Send control to pick up just the end of phrases. When the phrase stops, the echoes continue—
but before the vocals come back in, bring the Send back down again. You’ll probably want a pre-fader
Send control, so that the send level isn’t dependent on the channel fader.

Aux Sends and Reverb Splashes
I generally don’t use huge reverberant spaces on mixes, but will insert reverb with a long decay in one
Bus. If an isolated snare hit or held vocal note needs emphasis (or a dramatic pause wants a reverb
spillover), turn up the Send control long enough to send a signal spike into the reverb. This creates a
tasty reverb splash.

Panning
When stereo was relatively new in the 60s, panning was a popular effect (“Oh wow man, the guitars are
flying across my head! Pass the bong!”). While returning to those gimmick-laden days is probably not
a good idea, subtle panning changes can be effective. For example, if you can pan a pad’s left and right
channels independently, panning both toward center sounds different compared to moving the panpots
out to widen the stereo field. Also try expanding the sound incrementally so that as (for example) a pad
continues, the stereo field widens.

Complementary Motion
Try this with bass and drums, or two instruments playing complementary rhythm parts. Vary their
levels in opposing ways, but in time with the beat and subtly—this should be felt rather than heard.
Consider mixing the drums slightly louder for one measure with bass slightly back, and on the next
measure, bump the bass up a bit and drop the drums correspondingly. The rhythmic variations build
interest, and can even give a somewhat hypnotic effect with dance-type music.

Mutes and Solos


Musicians can learn much from DJs (or at least, musicians with open minds can)—and one popular
technique is to solo a track for a break, or perhaps mute several tracks. Skilled remixers often create
musical variations by playing multiple loops simultaneously, and then mixing them in and out
(sometimes with level changes, sometimes with mute or solo) to build compositions. A tune might start
with a looped pad, then fade in the kick, then the hi-hat, then the bass, and then have everything drop
out except for the bass before bringing in some other melodic or rhythmic element.

As with the other examples mentioned so far, you don’t have to do this in real time because you can
program these changes as automation. But remember that mixing can be a performance—and
sometimes inspiration can cause you to make fader moves and button presses in clever, non-repeatable
ways. And when you do, you’ll be happy that automation remembers those moves.

Mute vs. Change Level


Enabling mute or solo is a sudden, rapid level change that surprises the listener. Attempting the same
type of change with faders will sound different, because the fader change will not be instantaneous. On
the other hand, moving a fader from full off to full on—even if you do so rapidly—may cause a feeling
of anticipation rather than surprise. Choose what’s appropriate.

Plug-In Automation Applications
Automation can control plug-ins as well as console parameters.

Better Chorusing and Flanging


Even when tempo-synched, the repetition of LFO-driven chorus effects can be more boring than AM
radio (that’s saying a lot). Try these workarounds:

• Use automation to vary the LFO rate control so that it changes constantly, instead of locking
into one tempo.
• Turn off LFO modulation (or set it to a very slow rate), and automate the initial delay
parameter, if that’s possible without causing clicks and pops. Play with the delay so the effect
rises and falls in a musically appropriate way. Sometimes it’s also worth overdubbing
automation for feedback (regeneration) to add emphasis in certain parts.

Creative Distortion Crunch


A distortion plug-in’s input level or “drive” control affects the distortion amount. Assuming the signal
is already clipping, turning up the control more will create a more crunched, intense sound—but
because it’s clipped, the output level won’t increase by much. If the increase is too great, then automate
the output to compensate.

If the distortion plug-in’s drive parameter cannot be automated, automate the Channel’s input control to
change the level going to the distortion. Or, place the distortion in an FX Channel, and automate the
Send going to it.

Emphasizing with EQ
Be careful when automating EQ, because even slight EQ boosts can have a major impact on the sound.
But consider a situation where you want some big piano chords to become more prominent, so they
push the song more. You could increase the level, but that may cause the piano to dominate. Another
option is to automate a parametric stage’s boost/cut control (use a fairly wide bandwidth in the 2 – 4
kHz range). When you want the piano to stand out, add a tiny bit of boost. Because the ear is most
sensitive in this frequency range, even a small difference will give the piano more clarity. You could
also boost in the low bass (e.g., below 150 – 200 Hz) to give the piano more power rather than more
articulation.

Delay Feedback
This application sold me on effects automation. I often use synchronized echo effects on solos, and
heighten the intensity at the solo’s peaks by increasing the delay feedback. This creates a “sea of
echoes” effect. Sometimes this also involves altering the delay mix, so there’s more delay and less
straight signal.

Using automation to bring up feedback, then reducing it before the effect becomes overbearing, can
apply to any effect with a feedback parameter.

Envelope-Based Tremolo
Nodes set at regular intervals can create the kind of periodic waveforms used with tremolo. It’s difficult
to draw a waveform like this freehand, but as mentioned previously, Studio One provides a way to
draw periodic waveforms with the Paint tool (Fig. 12.5). Tremolo circuits in old guitar amps used sine
or triangle waves, but no law says you can’t use other waveforms.
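
For readers who like to see the math, the gain curve behind a sine-based tremolo envelope is simple. Here’s a
short Python sketch (purely illustrative; it’s not a Studio One feature, and the rate and level values are
arbitrary) that prints the node values a sine-wave envelope effectively draws over one cycle:

import math

def tremolo_gain(t, rate_hz=4.0, min_level=0.6, max_level=1.0):
    # Sine LFO, mapped from its -1..+1 swing into the min..max gain range
    lfo = 0.5 + 0.5 * math.sin(2 * math.pi * rate_hz * t)
    return min_level + (max_level - min_level) * lfo

# Print 8 evenly spaced nodes across one LFO cycle (4 Hz = a 0.25-second cycle)
for i in range(8):
    t = i * (0.25 / 8)
    print(f"{t:.3f} s  gain = {tremolo_gain(t):.2f}")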

Key Takeaways
• Automation remembers your mixing moves. Recording and playing back fader moves is the
classic automation application.
• Individual Events can have envelopes for fade in, fade out, and level, which are independent of
the track automation.
• Signal processing and virtual instrument plug-ins have automatable parameters, although the
specifics depend on the plug-in design.
• MIDI tracks implement automation similarly to audio tracks, but have a different repertoire of
controllable parameters.
• Recording automation is different from recording audio or MIDI data. In most cases, the
transport doesn’t need to be in record mode, so you can record automation without being
concerned about overwriting recorded audio or MIDI data.
• Automation envelopes can appear superimposed on a track, or appear in their own lanes below
the track events.
• Until you write automation, a parameter’s envelope will be a straight line. Clicking on this line
or manipulating controls creates nodes, which you can edit manually to change the automation
amount and position.
• There are four main automation methods: record on-screen control motion, draw and edit
envelopes, record automation moves from a control surface, or write automation “snapshots.”
• Studio One’s Control Link feature makes it easy to automate effect and virtual instrument
parameters.
• Many plug-ins support MIDI Learn, where simply moving a control completes the assignment
process.
• There are two main control surface families: general-purpose controllers designed to work with
a wide range of software, and controllers like the PreSonus FaderPort that are optimized for a
specific program
• Some keyboards that have faders and buttons for programming also include templates for
controlling parameters in Studio One.

Chapter 13 | Final Timing Tweaks

By the time you’re mixing, the tempo has almost certainly been set. What’s more, almost all
contemporary popular music is cut to an unvarying click track, in order to keep a steady tempo. A click
track makes it much easier to do overdubs, as well as use tempo-synched effects. Click tracks are also
extremely helpful to solo musicians, because by definition people aren’t playing in a room together.

However, with Studio One, it’s possible to add tempo changes after the fact, to a final, mixed song.
This can help restore some of the feel associated with subtle tempo changes. But before describing how
to do this, let’s look at some pop song tempo changes from the pre-click era to understand how these
tempo changes helped songs to “breathe.” Far from being random, tempo changes often occurred in
similar ways, even in songs from different musical genres. Whether conscious or not, these patterns
were consistent enough to be repeated in similar sections of the song. Here are some examples:

James Brown “Papa’s Got a Brand New Bag”


Many people considered James Brown’s rhythm section to be the tightest rhythm section ever. If any
drummer could earn the title of “human click track,” it would be Clyde Stubblefield...right? But in
reality, he had the ability to twist the beat around in uncanny yet predictable ways. He played with
precision, but it wasn’t the precision of a flat-lined, metronome-based tempo track (Fig. 13.1).

Figure 13.1 The tempo variations in the James Brown song, “Papa's Got a Brand New Bag.”

The tempo map “breathes” with plenty of variations, but also, the song’s overall tempo increases
linearly over time. The band accelerates until the phrase “Papa’s Got a Brand New Bag,” which starts
at measures 10, 22, and 42, at which point the tempo decelerates. There’s even the same
change at measure 55 which is musically similar, but doesn’t use the same lyrics—and note how the
“rise-dip-rise-fall” shape is almost identical for all these sections. These changes are definitely not
random! After each break, the tempo slides back up again.

The Beatles “Love Me Do”
The Fab Four were quite consistent in their tempo changes. While these tempo changes (Fig. 13.2) may
appear to be random, they follow a pattern.

Figure 13.2 The tempo variations in the Beatles song “Love Me Do.”

Note the dramatic pause at “so please, love me do” around measure 16 and again at 49. They didn’t
program those tempo variations in a sequencer—they felt the changes. Then they sped up naturally
after that section, when it went into the “love, love me do” sections. They also sped up a bit over the
course of the track, which is something you’ll hear in many songs.

The Police “Walking on the Moon”


Although the overall tempo is consistent, there are two significant dips starting around measures 31 and
52 (Fig. 13.3).

Figure 13.3 “Walking on the Moon,” by the Police, is overall quite consistent but nonetheless has places where
the tempo dips and then builds up again.

These dips occur prior to leading into the bridge, but note that the tempo is higher the second time
around. At measure 59, the band pulls back for the instrumental break. At measure 65, they start
climbing out of it, and speed up as they head to the end of the song.

Smokey Robinson & The Miracles “Tears of a Clown”
This is another song that follows a definite, deliberate pattern. The band accelerates until measure 22,
which is just before the end of the verse, and then decelerates down the bridge into the line “tears of a
clown” which starts at measure 31, after which the tempo starts accelerating again (Fig. 13.4).

Figure 13.4 The tempo variations in “Tears of a Clown” are significant, but nonetheless predictable.

The second verse starting at measure 37 is pretty consistent, but again toward the end of the verse
there’s the speed-up, the bridge decelerates, and the tempo is slowest when “tears of a clown” is
repeated again. After that, again the tempo is pretty consistent. Note that the shape of the changes
starting at measure 22 is almost the same as the shape of the changes starting at measure 55.

Pat Benatar “Shadows of the Night”


There wasn’t enough tempo information at the start of the song for an analysis, so I drew a straight
line there. Anyway, either by accident or by design, the tempo tracks the vocal pitch quite closely (Fig.
13.5).

Figure 13.5 The tempo swings in “Shadows of the Night” track the vocal pitch closely.

Starting at measure 22, it’s almost like someone applied an LFO to the tempo, the changes are so
consistent. The song pretty much tracks Benatar’s pitch; the tempo accelerates up to the higher notes in
a phrase, then slides down as the pitch slides down. The most extreme variations occur during the
guitar solo, which starts at the lowest tempo, then speeds up and slows down cyclically. I’ve always felt
the vocal in this song is compelling; I wonder if part of that is because of the interplay with the tempo,
which may underscore the effect of the vocal pitch changes.

So the Point Is...


One element most of these songs have in common is accelerating tempo up to a crucial point in the
song, then decelerating during a verse or chorus. This type of change was repeated so often, in so many
songs I analyzed, that it seems to be an important musical element that’s almost inherent in music
played without a click track. It makes sense this would add an emotional component that could not be
obtained with a constant tempo. Also, the tempo often sped up, sometimes a little and sometimes a lot,
as the song progressed.

Although there’s no simple way to measure the reaction listeners have to tempo changes, the songs
where I’ve added tempo changes seem to get a more favorable response. This could be a coincidence,
but I don’t think so.

Remember, a key principle of music is tension and release. Speeding up adds a degree of tension and
anticipation, while slowing down creates a corresponding release. Tempo changes can help music
“breathe” more; it has always been an inherent aspect of musicians playing together. It’s only in the
past few decades that music has lost this vital characteristic.

Adding Tempo Changes To a Final Mix


The following technique is the easiest way to add tempo changes, because Studio One can add these to
a finished mix—you can complete your song that was cut to a click track, and then add subtle tempo
changes where appropriate. This technique also lets you compare a version without tempo changes, and
one with tempo changes. You may not hear a difference, but you’ll feel it.

You’ll need to add these tempo changes to a mixed stereo track in a new Song, so open a new Song,
and import the stereo mixed file. Although it’s possible to use this technique with a multitracked song,
the process is more complicated because you have to make sure that all the audio tracks can follow
tempo changes properly.

Preparation
Open the Inspector as shown in Fig. 13.6, and enter the song’s tempo under File Tempo (outlined in
red; for example, 120 BPM).

Figure 13.6 Prep the file in Studio One's Inspector, then create tempo changes in the tempo track.

If you don’t know the tempo, it doesn’t matter—estimate the tempo, because you’ll simply be adding
relative changes, not working with an absolute tempo. While you’re in the Inspector, choose Tempo =
Timestretch, and Timestretch = Sound (outlined in orange). For the highest stretching fidelity, also
choose Options > Advanced > Audio, and check “Use cache for timestretched audio files.” This
stretches files using offline analysis instead of a real-time algorithm, which improves the sound quality.

Making the Tempo Changes


Edit the tempo track next. Starting with version 4.5, tempo track editing works similarly to adjusting
any kind of automation. You can set high and low tempo limits within the tempo track; the minimum
difference between the high and low Tempo Track values is 20 BPM. However, you can change the
tempo track height to increase the visual resolution.

To alter tempo, click to create a node in the tempo track, then drag it to the desired tempo and position.
Or, place the playback cursor on the timeline, and click the tempo section’s + button to add a node
under the cursor, as well as enter a precise value. This same field also displays the tempo at the
playback cursor position. Like other Studio One automation, hovering the mouse over the line between
two nodes displays a “phantom” node, which you can then click and drag to add a convex or concave
curve to the tempo line. It’s possible to create detailed tempo changes, quickly and easily.

Following are two examples of typical tempo change tracks. The first (Fig. 13.7) is for a song on my
YouTube channel called My Butterfly. Because it’s a relatively slow song, the tempo changes cover a
pretty wide relative range, from a low of 90 to a high of 96 BPM. If you check out the video, you may
be able to hear the speedup in the solo (not just feel it) now that you know it’s there.

Figure 13.7 Tempo changes for the song “My Butterfly.”

The second example (Fig. 13.8) is a hard-rock cover version of Walking on the Moon (originally
recorded by The Police, and written by Sting). Yes, I know it’s weird to do a hard rock version...but the
tempo change differences are fairly significant, starting with a low of 135 BPM, going up to 141 BPM,
and dropping down as low as 134 BPM.

Figure 13.8 Tempo changes for a cover version of the song “Walking on the Moon.”

When possible, I try to keep a constant tempo at the beginning and end. It doesn’t matter so much with
songs, but with dance mixes, tempo changes within a track are acceptable as long as there’s a constant
tempo on the intro and outro. This keeps DJs from going crazy when they’re beat-matching.

For relatively small tempo changes, audio artifacts are not a significant issue. This means you can free
your song from click-track monotony, and let it “breathe” and flow like music used to do—and still
can.

Inserting “Time Traps”


One tempo track technique that’s possible when mixing, even with a multitrack project, is inserting
sudden, very short tempo drops. These add a slight pause to build anticipation/tension in strategic
places (Fig. 13.9).

Figure 13.9 Short, deep tempo reductions can add a dramatic pause without having to move any recorded MIDI
(or audio) parts.

Suppose you want to add an almost subliminal “dramatic pause,” like just before a booming snare drum
hit heralds the chorus’s start. Because the listener expects the section to start on the beat, even a tiny
pause can add tension before the release. Although you could slide your tracks or insert some space, it’s
much easier to do a radical tempo drop (e.g., from 120 to 50 bpm) for a tiny fraction of a beat where
you want the dramatic pause. This sloooooows everything down enough to add the pause. (Ideally,
you’d want a sound that sustains over the pause—silence, a pad, held note, etc.)
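
To get a feel for how much pause a short tempo drop adds, it’s just arithmetic: a beat at 120 BPM lasts 0.5
seconds, while a beat at 50 BPM lasts 1.2 seconds. The quick Python calculation below (a rough sketch with
made-up numbers, not a Studio One function) shows that applying the slower tempo for only a 16th of a beat
adds well under 50 milliseconds, which is subliminal but still felt:

normal_bpm, trap_bpm = 120, 50        # original tempo, and the momentary "time trap" tempo
trap_length_beats = 1 / 16            # how long the drop lasts, in beats

normal_beat = 60 / normal_bpm         # 0.5 s per beat at 120 BPM
trap_beat = 60 / trap_bpm             # 1.2 s per beat at 50 BPM

added_pause = (trap_beat - normal_beat) * trap_length_beats
print(f"Added pause: {added_pause * 1000:.1f} ms")   # about 43.8 ms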

Key Takeaways
• Slight tempo changes were common in the days before musicians started recording to click
tracks. These increased tension and release, and helped a song “breathe.”
• Tempo changes were often consistent with many genres of music, like speeding up slightly
before arriving at the chorus, then slowing down.
• With Studio One, it’s possible to add tempo changes after the fact, to a final, mixed song to help
restore some of the feel associated with subtle tempo changes.
• For relatively small tempo changes, audio artifacts are not a significant issue.
• One tempo track technique that’s possible even with a multitrack project is inserting sudden,
very short tempo drops. These add a slight pause to build anticipation/tension in strategic
places.

Chapter 14 | Review and Export

You’ve cleaned up the tracks, added processing, controlled the dynamics, set a great balance among all
your tracks, used automation to tweak levels and other parameters, and maybe even helped the music
“breathe” with subtle tempo changes. This is your last chance to make any final changes before the
mastering process.

As with many aspects of life, less is more. Live with the mix for a while and critique what you hear. Be
as brutal as possible.

• Does the beginning pull someone in like a tractor beam?
• Do parts compete with each other, which reduces their effectiveness?
• If the focus is the vocal, do all the other instruments sharpen that focus when the singer is
singing?
• Does every single song element, from the notes themselves to the signal processing, serve the
song and enhance the message it’s trying to convey?
• Is there some needed, but missing, element like an extra hand-percussion part toward the end to
maintain interest?
• If you mute a track, do you miss it—or does muting it allow the other tracks to shine?
• Are you mixing creatively by selectively dropping out or bringing in tracks as appropriate? This
type of mixing is the foundation for a lot of dance music.
• Is there enough cowbell? (Kidding!)

Mastering While Mixing—Pros and Cons


Studio One’s dedicated mastering page is my preference for mastering. However, some people put
mastering-type processors (like the Multiband Dynamics or Tricomp) in the Main Bus, and consider
that mastering. There are no rules, and whatever shapes music the way you want is valid. But let’s get
some perspective.

When tape ruled, mastering was a separate process from mixing for several reasons:

• Albums (collections of songs) were popular. A mastering engineer transformed differing mixes
into a cohesive listening experience, with similar dynamics and tonal qualities. With Studio
One, it’s easy to bounce tracks from the Song page into the Project page, and do this type of
album-oriented mastering in the Project page. With traditional recording, it was difficult to
compare songs—especially before automation became commonplace.
• The mastering engineer also worked with the producer and artist to determine the best song
sequence. With vinyl, this was not always an artistic decision. For example, a record’s inner
grooves were more prone to distortion, so softer songs were often the last song on a side. Also,
the two album sides needed to be of approximately equal length.
• Tradeoffs were required for different delivery media. With vinyl, too much bass could cause
needles to jump out of grooves on playback, and too much level would cause distortion.
Creating a good master involved both experience and trial and error. Mastering engineers
generated multiple test pressings before finding the right balance of level, length, and frequency
response.
• There would sometimes be last-minute tweaks, like speeding up the master tape a few percent
to give it a brighter, tighter sound.
• Mastering suites worked on only one stereo master at a time, so they could buy the finest gear
possible—the kind of gear you couldn’t afford if you needed several units to process tracks
from a multitrack tape recorder.
• Mixing and mastering were considered specialized skills. A great mix engineer was not
necessarily a great mastering engineer, and vice-versa.

With digital audio, most of mastering’s technical constraints are gone. Although this simplifies the
mastering process, it’s still valid to consider mixing and mastering as separate skills. With so many
musicians doing everything themselves—recording, playing, engineering, producing, mixing, and yes,
even mastering—it’s important to have an objective set of ears for a final project review. If that’s you,
great. If not...you need a mastering engineer, or at least a collaborator.

Can Online and Automated Mastering Services Do the Job?


Yes and no. If you want to post a live recording of your band online as a souvenir for your fans, you
might not be able to justify a pro mastering engineer’s expense. I’ve always defined mastering as
“making what comes out sound better than what went in,” and algorithm-based mastering can achieve
that in some situations (Fig. 14.1).

Figure 14.1 Although the Lurssen Mastering Console program from IK Multimedia can’t do “waveform surgery”
like cutting song sections or noise reduction, it can make what comes out better than what went in.

For a mission-critical project upon which your career depends, a good mastering engineer will make
decisions no machine could make—like shortening your mix to remove an overindulgent guitar solo,
bringing up the level somewhat on a fill, or applying restoration software to minimize residual hiss.

Mastering While Mixing


Mastering adds any needed dynamics, EQ, or other processing to your mix. While you could add these
processors to your Main Bus and (in theory) accomplish the same result, I still prefer treating mixing
and mastering as separate processes, particularly for album assembly and song sequencing. I’ll take the
mixes and load them into Studio One’s Project page, experiment with the sequence, set the timing
between songs, insert overall processing on all tracks or just some tracks, add PQ codes (if the final
product will go on CD), and set fades. Listening to all the tracks in context makes it easier to decide if
each track integrates well as part of a whole.

Even if you record only singles, to ensure consistency it’s worth mastering them as if they were on an
album. If someone assembles a playlist of your greatest hits, you don’t want them to experience level
and tonal swings while they listen.

Prepping Files for a Mastering Engineer


If you decide to use a mastering engineer, note the following before you export any files:

• Leave some headroom; -6 dB is a “safe” value. Some mastering engineers believe this is an
archaic “rule” in the digital age, but you can verify it isn’t by inserting the Level Meter plug-in
in your Main Bus. This meter’s True Peak reading indicates what the level will be after D/A
conversion. If the True Peak is above zero, then you may be hearing distortion when you mix,
even if the Main Bus doesn’t show the signal as being “above 0” when mixing (Fig. 14.2).
When mixing, you want to hear the most accurate sound possible.

Figure 14.2 The Main Bus meters (outlined in white) show that the output is not exceeding 0 going into the
audio interface’s D/A converters. However, the True Peak readings (outlined in red) show that the signal
reconstructed at the output of the converters is considerably above 0. Although this won’t cause duplicators to
reject a CD due to excessive distortion, it can influence what you hear as you mix the song.

• Avoid inserting processors in the Main Bus, especially maximizers or compressors. This limits
what mastering engineers can do, and the results can’t be “undone.”
• Be clear about what you want. I’ve often said there are at least 20 valid ways to master a piece
of music, but the only one that matters is the one that fulfills the artist’s vision. Give the
engineer examples of music whose sound you like (even if you’re not a fan of the music).
• Don’t add fade-outs or fade-ins, unless they’re unusual. If you decide to crossfade something
later on, fades done prior to that decision cannot be undone. Let the mastering engineer do
fades, based on your direction.

Whether you master inside the program or give a file to a mastering engineer, eventually you’ll need to
export your song to a final stereo mix.

Export Your Mixed File


Exporting your song is an easy, universal way to create a final stereo mix. Adjust all the settings exactly
as desired (EQ, levels, etc.), and include any automation. When the mix is perfect, choose Export
Mixdown, and render the file in any one of several formats, sample rates, and bit resolutions.

However, this brings up another reason why I prefer to export from the Project Page, even for a single
song. You can perfect global song tweaks (like EQ and dynamics) in the Project page, as well as check
a variety of analytics. This is possible—mostly—in the Song Page, but the Project Page’s workflow is
more streamlined. After the song is the way you want it, export the file.

Main File Types
When exporting, choose from three main families of file types:

• Compressed audio. Compressed audio doesn’t refer to dynamic range compression (as
described in Chapter 8), but data compression that reduces the file size. Data-compressed files
take up less memory in portable players, stream more quickly over the web, and take less time
to send via email. The maximum size reduction occurs with lossy formats, like MP3, AAC, and
Ogg Vorbis. To save space, these remove audio data judged to be inaudible. Technically, they’re
more about data omission than data compression. You can choose how much data to remove—
the tradeoff is lower-quality sound with smaller file sizes.
• Lossless compressed audio. This format is like a .zip file, because de-compressing the file
restores the original file, so there’s no lost data. The most popular format is FLAC (Free
Lossless Audio Codec); it’s the preferred lossless file format in Windows 10. Apple’s alternative
is Apple Lossless. Compared to compressed audio, the tradeoff for the greater fidelity is larger
file sizes.
• Uncompressed audio. The main formats are WAV (Windows) and AIFF or AIF (Apple). These
are preferred by mastering engineers and CD duplication houses.

Some formats, like CAF and M4A, have lossy and lossless variants. Lossy formats are becoming less
popular as internet bandwidth increases and memory becomes cheaper, but Studio One can export in
both lossy and lossless formats (Fig. 14.3).

Figure 14.3 Studio One’s Project Page can export a collection of songs or an album in a variety of digital
formats (song titles are from the album Trigger, by former Bryan Ferry guitarist Quist).

Sample Rates and Bit Depth
In addition to choosing the export format, you need to specify a sample rate and bit depth (i.e., audio
resolution). We touched on this previously, but didn’t go into much detail because by the time you
reached the mixing stage, the sample rate and bit depth were already set. When exporting, you have the
opportunity to export in a variety of formats. Here are the most popular choices for uncompressed files.

Tech Talk: Sample Rates and Resolution

Analog-to-digital conversion translates audio from microphones, pickups, and instruments—all of
which provide varying voltages—into computer data. More accurate conversion means higher audio
quality. Because a computer can’t understand a changing voltage unless it’s presented as a series of
numbers, an analog-to-digital converter (ADC or A/D converter) measures the incoming analog
voltage thousands of times a second, and converts each measurement into computer-friendly digital
data (binary numbers). The number of measurements the converter takes each second is the sample
rate, also called sampling frequency. Bit resolution refers to the accuracy with which the ADC
measures the incoming audio signal. More bits means higher resolution.

• 44.1 kHz with 16-bit resolution is the standard for CDs.
• 96 kHz with 24-bit resolution is the standard for “high-resolution” audio, although some
audiophiles prefer 192 kHz (or even 384 kHz) with 24 bits of resolution. DVD audio tracks use
48 or 96 kHz, with 16- or 24-bit resolution.

For data-compressed files, the primary spec is how many kilobits of data can stream in one second
(abbreviated kbps).

• 128 kbps is the lowest common denominator for streaming audio. It saves the most space, but
compromises sound quality.
• 256 kbps gives reasonably good fidelity.
• 320 kbps is the highest MP3 bit rate. Typical consumers don’t hear any significant difference
compared to CDs.
• Higher bit rates, like 384 kbps, are available for compressed formats other than MP3.
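
To put these numbers in perspective, here’s a quick back-of-the-envelope calculation in Python (a rough
sketch; real files also add small headers and metadata, and the song length is arbitrary) comparing an
uncompressed CD-quality file with a 320 kbps MP3:

song_minutes = 3.5
seconds = song_minutes * 60

# Uncompressed stereo audio at CD quality: 44,100 samples/s x 2 bytes (16 bits) x 2 channels
wav_bytes = 44_100 * 2 * 2 * seconds
print(f"16-bit/44.1 kHz WAV: {wav_bytes / 1_000_000:.1f} MB")   # about 37.0 MB

# Data-compressed audio streaming at 320 kilobits per second
mp3_bytes = 320_000 / 8 * seconds
print(f"320 kbps MP3: {mp3_bytes / 1_000_000:.1f} MB")          # about 8.4 MB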

Bouncing Mixes Inside the Project


There are non-traditional options for mixing down your project to a finished file. One of my favorites is
exporting multiple mixes, then comparing them. To do this, export a mix from a Song, and enable the
option to import the file back into the Song (Fig. 14.4).

Figure 14.4 Export alternate mixes, import them back into a Song (as outlined in yellow), and then compare
them to choose the best version.

After importing the mixed song, mute it. Export later mixes, import them, and mute them. When it’s
time to evaluate the mixes, solo individual versions while comparing them (exclusive solo mode is
helpful—Alt+Click on a solo button to mute all other tracks, even if they’re soloed). The mix you like
best can then go into the Project page for final polishing. However, with this approach, you no longer
have the same synergy between the Song page and Project page that allows making changes in a Song
file, and re-rendering the results into the Project page. Instead, this is the more old-school approach of
saving your finished mixes, then mastering them.

Another advantage of having multiple mixes within a project is that you can choose parts of different
mixes, and splice them together to create a final mix. For example, everything except the chorus might
have been perfect in one mix, but the chorus was perfect in a different mix. Cut the preferred chorus,
substitute it for the one you don’t like, and then export the final mix.

A final advantage of creating mixes within a project is that when you save the project, you also save
the mixes, so this kind of cut-and-paste editing remains available later. Conceptually, splicing mixes
together is like doing composite recording for individual tracks, where you assemble the best bits into
a final track.

Check Your Mix Over Different Systems
Before you sign off on a mix, check it over different playback systems. If the mix translates acceptably
on all of them, your job is done. Of course, when played back over smartphone speakers, it won’t
sound like a studio. You’re checking to hear if the most important song elements come through.

I’ve found Audified’s MixChecker plug-in, which provides representative responses for a variety of
consumer playback media (Fig. 14.5), quite helpful. If your mix sounds okay on all the options,
congratulate yourself.

Figure 14.5 Audified’s MixChecker helps simulate what your mix will sound like after undergoing the sonic
violence of less-than-ideal playback systems.

Also, a home studio allows the luxury of leaving a mix and coming back a day or two later, when you
can bring a fresh perspective. This is one reason why automation is wonderful—if the mix was perfect
except for one little issue, edit the automation to fix it.

Mix until you’re satisfied. You don’t want to hear a song six months later, and regret not taking the time
to correct a flaw—or miss a flaw entirely, because you were in too much of a hurry to finish.

However, be equally careful not to beat a mix to death. Quincy Jones once told me he felt that editing
with synthesizers and sequencing was like “painting a 747 airplane with Q-Tips.” A mix is a
performance. If you overdo it, you may lose the spontaneity that adds excitement. An imperfect mix
that conveys passion will be more fun for listeners than one that’s so perfect it’s sterile. As insurance
against overdoing, don’t always re-record over your mixes—you might find that an earlier mix was the
one that sounded most alive.

Fun story: When working with record producer Ted Cooper (Damita Jo, The Staple Singers, The
Remains, Walter Jackson), he once told me about mixing literally dozens of takes of the same song,
because he kept hearing small changes that seemed important at the time. He had to go away for a few
weeks and when he returned to review the mixes, he couldn’t tell any difference among almost all of
them. Be careful not to waste time making changes that no one—not even you!—will care about.

Tip: Once you’ve captured your ultimate mix, create some extra mixes, such as an instrumental-only
mix, one without the solo instrument, or one with the vocals mixed higher. These mixes may come in
handy if you need to re-use your music for a film or video score, or create extended dance mixes.

Key Takeaways
• You’ve reached the last part of the music creation process, so this is your last chance to make
any changes.
• There are both pros and cons to mastering while mixing.
• Automated mastering algorithms can sometimes be all you need, but there’s much to be said for
handing off your mix to a professional mastering engineer.
• When exporting your mix for a mastering engineer, make sure it’s prepared properly.
• Lossy data compression reduces file size, but degrades fidelity. Lossless data compression, like
FLAC files, doesn’t reduce file size as much, but retains fidelity.
• Mastering engineers prefer uncompressed audio (WAV and AIF files).
• Export your mix at 44.1 kHz with uncompressed audio if it will end up on CD. For lossy
formats, choose the appropriate tradeoff of file size vs. fidelity.
• Bouncing mixes within a project, to create multiple stereo mixes, offers some advantages.
• Always check a mix over different systems before signing off on it.

Appendix A | MIDI 1.0 Basics

At its core, MIDI is a computer language that quantifies musical notes and gestures as computer code.
Studio One greatly simplifies the process of working with MIDI and controllers by doing the hard work
“behind the scenes.” You don’t need to know much about MIDI, if anything, to do common operations
like assigning an effect parameter to automation. As a result, this Appendix is more for folks who are
curious about the MIDI specification, and how it might apply to what they do.

Tip: To find out more about MIDI, and register for free to download the MIDI specification, visit the
MIDI Association at www.midi.org.

The Most Important MIDI Messages


• Channels: MIDI has 16 basic channels per MIDI port, over which you can transmit data. This
matters for virtual instruments, but when controlling effects from a hardware controller, Studio
One simply learns what data the controller sends out and the channel over which it’s sent, and
makes any needed parameter assignments automatically.
• Notes: Note data specifies the pitch of a note, as well as its dynamics (i.e., whether the note was
played softly or with force). The note data itself consists of note-on and note-off messages.
• Program Changes: MIDI program change messages were originally intended to let players call
up 128 different synthesizer sounds on-the-fly, but they can also be used to select presets in
effects plug-ins and some amp sims (mostly VST2 versions). A Program Change is a single
command that you can edit in an Instrument track’s Edit window; on playback, it triggers the
program change.
• Continuous Controllers: These commands allow varying a particular parameter (like delay
feedback, EQ frequency or gain, distortion drive, reverb mix, etc.) using a hardware controller,
like a control surface or footpedal. They became part of the MIDI spec because synthesizers
have pedals, knobs, levers, and other physical controllers that alter some aspect of a synth’s
sound over a continuous range of values. Continuous controllers generate a series of messages.
Consider a volume fade-in from a fader—each change in the fader’s position sends out another
controller message, so that the volume reflects the fader position.

MIDI Continuous Controllers


Let’s put this in the context of a MIDI footpedal. As with program changes, a hardware device
transmits continuous controller messages over a MIDI output, which the computer’s MIDI interface
receives. The transmitting device usually quantizes physical controller motion into 128 discrete values
(0-127). For example, with a footpedal that generates continuous controller messages, pulling the pedal
all the way back generates a value of 0. Pushing down on the pedal increases the value in steps until, at
the midpoint, the pedal generates a value of 64. Continuing to push on the pedal until it’s all the way
down again increases the value in steps, until it generates a value of 127.

Note that continuous controller transmitters send messages only when reflecting a change. Leaving a
control like a knob in one position doesn’t transmit any messages until you change the knob’s physical
position.

Continuous Controller Numbers

MIDI “tags” each continuous controller message with an ID from 0 to 127. Therefore, a signal
processor with 127 different parameters could assign each one to a unique controller number (1 for
delay time, 2 for delay feedback, 3 for delay wet/dry mix, etc.).
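
To make this concrete, a continuous controller message is just three bytes: a status byte (0xB0 plus the
channel number), the controller number, and the value. The short Python sketch below (purely illustrative;
it builds the raw bytes but doesn’t talk to any real MIDI hardware) shows the message a fader might send for
controller 7 at half travel on channel 1:

def control_change(channel, controller, value):
    # Channels are numbered 1-16 by musicians; the status byte uses 0-15
    status = 0xB0 | (channel - 1)
    return bytes([status, controller & 0x7F, value & 0x7F])

msg = control_change(channel=1, controller=7, value=64)
print(msg.hex(" "))   # b0 07 40 -> Control Change on channel 1, controller 7, value 64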

MIDI Controller Assignment


Automation and MIDI control depend on a control source—whether it’s a piece of hardware, a plug-in
knob, or Studio One’s automation—that can communicate with the desired effect or instrument
parameter. There are two main ways plug-ins identify and organize their parameters.

Fixed Parameter Assignments


This protocol commits parameters to particular MIDI controllers and automation targets. When using
hardware controllers, you must set your hardware controller to the continuous controller number that
will affect the desired parameter. In software, when selecting a target for automation, you’ll see a list
(and probably a long one!) of available parameters.

The advantage of this parameter assignment method is consistency—if a particular parameter is
fixed to a certain controller, then once you’ve assigned your hardware knob to this controller, you
don’t have to think about it any more as you change presets, or even projects. The
disadvantage is having to find out which controller number corresponds to what parameter, and making
the appropriate assignment at your controller. This is one of the main issues solved by Studio One’s
Control Link feature.

Uncommitted Parameter Assignments


With this option, parameter assignments are like a blank slate. You decide what will control the
parameter, and make an assignment. The advantage is being able to have flexible assignments for each
preset. For example, a particular control surface knob could control reverb wet/dry mix in one preset,
and without having to change the assignment, control delay feedback in another preset.

There are two main ways of assigning parameters: manual linking, and MIDI Learn.

Manual Linking

With this method, you can send MIDI control data or automation to control or automation “slots.” After
assigning the desired parameter to the desired slot, you’ve linked your control source to the target
parameter (Fig. A.1).

Figure A.1 Parameters in AmpliTube are being assigned to specific automation slots (three have already been
assigned). Once assigned, an automation track in Studio One can specify a parameter as an automation target.

MIDI Learn

MIDI Learn simplifies the MIDI assignment process. Typically, you right-click (or shift-click, or some
other keyboard shortcut) on the parameter you want to control, and a MIDI Learn menu appears (Fig.
A.2).

Figure A.2 The MIDI Learn menu from Guitar Rig pops up with a right-click on the knob or switch you want to
control via MIDI. The dialog box on the left appears once the parameter has learned what MIDI message will
control it.

To complete the assignment process, simply move the knob, pedal, or fader you want to assign to the
parameter. The advantage is speed and user-friendliness. The downside is possibly eliminating the
certainty of knowing what automation or control signal will control which parameter.

A “split the difference” option is to standardize on controller numbers for some parameters that you use
all the time, like the send control to a reverb bus. Then you can have them learn a dedicated hardware
control.

Scaling and Inversion


You may prefer that a controller not cover a parameter’s entire range, either because you don’t want to
reach some values accidentally (like excessive echo feedback), or you want finer control over a smaller
parameter range.

The solution is scaling, which may be done in Studio One via the Transition settings. However, in the
plug-in itself, the controller assignment or MIDI Learn process may allow you to specify a minimum
and maximum value the hardware controller will cover (Fig. A.3).

Figure A.3 In Helix Native, two parameters are assigned for control via automation: 63 Spring reverb Mix, and
US Double Nrm amp Drive. Note the minimum and maximum drive values (4.0 and 6.2, respectively). If the
control signal goes from full off to full on, the Drive will change smoothly from 4.0 to 6.2, not 0 to 10.

For example, suppose a controller for reverb dry/wet mix covers the usual values of 0 to 127, and you always want a little reverb but sometimes need to increase it in particular sections. You could specify the minimum value as 10 and the maximum value as 40. Now, when you move the controller fully counterclockwise you'll have a little reverb, but if you move it fully clockwise, you'll have the maximum reverb amount you want.

Inversion reverses the sense of a control. Usually, turning a control clockwise increases a parameter's value. With inversion, turning the control clockwise instead decreases the parameter value from its maximum setting.
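
The arithmetic behind scaling and inversion is simple. Here's a minimal Python sketch (my own illustration, not how any particular plug-in implements it) that maps a 0-127 controller value into a restricted parameter range, optionally inverted.

    def scale_cc(cc_value, param_min, param_max, invert=False):
        fraction = cc_value / 127.0      # 0.0 at full counterclockwise, 1.0 at full clockwise
        if invert:
            fraction = 1.0 - fraction    # clockwise now decreases the parameter
        return param_min + fraction * (param_max - param_min)

    print(scale_cc(0, 10, 40))        # 10.0: always a little reverb
    print(scale_cc(127, 10, 40))      # 40.0: the most reverb you ever want
    print(scale_cc(64, 4.0, 6.2))     # about 5.1: the Helix Drive example above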

Parameter Value Takeover
Sometimes a preset's programmed value may not match a physical controller's current position. For example, suppose a footpedal that controls distortion drive is pulled all the way back. You then call up a preset where the distortion drive parameter is up halfway. There are a few different ways of handling what happens when you move the pedal. Many amp sims designed for live performance use Takeover (Jump) mode, because its limitations are less of a problem onstage.

Takeover or Jump mode. This is the simplest option. As soon as you move a control, the parameter
value changes to match the control’s position. The disadvantage of this approach is that if there’s a big
difference between the parameter value and the control position, the sudden jump might be jarring.

Match mode. This method won’t let parameters respond to continuous controllers until the controller
passes through the programmed value, after which the parameter follows the controller messages. This
is helpful when switching between programs where a single hardware controller controls different
parameters. The parameter will stay as originally programmed until you start turning the control and go
past the existing setting. The disadvantage of this method is that if the parameter is at an extreme
setting, the controller will need to reach that extreme setting before you can change it.
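
Here's a rough Python sketch of the difference between the two modes (an illustration of the logic, not any product's actual code). The Parameter class stores a preset's value and decides whether to follow incoming controller data.

    class Parameter:
        def __init__(self, value, mode="match"):
            self.value = value                 # the preset's programmed value (0-127)
            self.mode = mode                   # "jump" or "match"
            self.engaged = (mode == "jump")    # Jump mode follows the control immediately
            self.prev_cc = None

        def on_cc(self, cc):
            # Match mode: ignore the control until it passes through the stored value.
            if not self.engaged and self.prev_cc is not None:
                low, high = sorted((self.prev_cc, cc))
                if low <= self.value <= high:
                    self.engaged = True
            self.prev_cc = cc
            if self.engaged:
                self.value = cc                # the parameter now follows the controller

    drive = Parameter(value=64, mode="match")
    drive.on_cc(0); drive.on_cc(30); print(drive.value)    # still 64
    drive.on_cc(70); print(drive.value)                    # 70, because the control crossed 64

    pedal = Parameter(value=64, mode="jump")
    pedal.on_cc(5); print(pedal.value)                     # jumps straight to 5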

Tweak Presets with MIDI

Many keyboard controllers have eight faders and eight rotary knobs, as well as a modulation wheel and footpedal jack, that all send MIDI data. To experiment with preset settings for particular effects, you can temporarily assign all their parameters to hardware controls. This makes it easier to tweak parameters compared to using a mouse, which adjusts only one parameter at a time. That's particularly important with controls that interact, because it's easier to adjust them simultaneously than to go back and forth between them. After the settings are as desired, save them as a preset. Now you can use your hardware controller for other tasks.

Appendix B | Calibrating Levels

The following gets a bit into rocket science, so feel free to skip it if you start falling asleep. Or, read it
when you do need to fall asleep! But re-visit this section at some point, because the techniques
presented here will help you create more consistent mixes.

The Level Meter


The Level Meter plug-in can be inserted in the Main Bus. It's calibrated like standard meters, but also includes metering calibrated to what's called the K-System. This system was developed by Bob Katz (a well-known mastering engineer) in the late 90s. The goal was to apply standards used in the film industry to mastering, and put the focus back on dynamic range instead of "loudness at all costs." However, calibrating your monitors to a standard can also help create a more consistent listening environment.

A key feature of K-System metering is an emphasis on average (not just peak) levels, because average
levels correlate more closely to how we perceive loudness. The K-Meters themselves show both peak
and average levels. Seeing peak levels remains useful because it can expose clipping. A difference compared to some other meters is that K-System meters use a linear scale, where each dB occupies the same height in the meter (or width, with horizontal meters). A logarithmic scale gives each dB more space as the level increases, which, although it corresponds more closely to the ear's response, is a more ambiguous way to show dynamic range.

Fortunately, there is now an international standard (based on a recommendation by the International Telecommunication Union) that defines perceived average levels, based on reference levels expressed in LUFS (Loudness Units referenced to digital Full Scale). As an example of a practical application,
when listening to a streaming service, you don’t want massive level changes from one song to the next.
The streaming service can regulate the level of the music it receives so that all the songs conform to the
same level of perceived loudness. Because of this, there’s no point in creating a “hot” master because it
will just be turned down to bring it in line with songs that retain dynamic range.
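
As a back-of-the-envelope illustration, here's the arithmetic a service applies. The -14 LUFS playback target below is a commonly cited ballpark figure, not a guarantee; check the current specs of whatever platform you deliver to.

    def normalization_gain_db(measured_lufs, target_lufs=-14.0):
        # Gain the service applies so the song plays back at its target loudness.
        return target_lufs - measured_lufs

    print(normalization_gain_db(-8.0))     # -6.0 dB: a "hot" master simply gets turned down
    print(normalization_gain_db(-16.0))    # +2.0 dB: a more dynamic master may even be turned up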

The Level Meter includes LUFS readings (click on the R128 button), which is useful for keeping your
mixes at a consistent level, assuming your speakers are calibrated properly. The Project Page also
provides LUFS readings when you click on Loudness Information. Thanks to LUFS, we now have a
way to make sure that finished mixes have the same perceived level.

Nonetheless, the K-System remains valid, because it relates monitoring levels to meter readings. If you mix with your monitors at different levels on different days, your mixes will be inconsistent. Ideally, music that reaches the same average meter readings should sound like it's at the same level. This requires calibrating your monitor levels with a sound level meter (as described later). Many smartphones can run sound level meter apps that are sufficiently accurate for our purposes.

Tip: iPhone apps are better for this application because compared to Android phones, the hardware is
more consistent from phone to phone.

With the K-System, 0 dB does not represent the maximum possible level (i.e., 0 dBFS, or dB Full Scale). Instead, K-System meters shift the 0 dB point "down" from the top of the scale to either -12, -14, or -20 dB, depending on which scale you use (K-12, K-14, or K-20). These numbers represent the amount of headroom above 0, and therefore, the available dynamic range. You choose a scale based on the music you're mixing or mastering: -12 for music with less dynamic range (like dance music), -14 for typical pop music, or -20 for acoustic ensembles and classical music. You then aim for an average (not peak) level that hovers around this shifted 0 dB point. Peaks that go above it take advantage of the available headroom, while quieter passages will stay below 0 dB. Like conventional meters, K-System meters have green, yellow, and red color-coding to give an indication of levels. Levels above 0 dB trigger the red, but this doesn't mean you're clipping; observe the peak meter for that.
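
To make the shifted 0 dB point concrete, here's a small Python sketch of how a K-System reading relates to dBFS. It's just the arithmetic described above.

    K_SCALES = {"K-12": 12, "K-14": 14, "K-20": 20}    # headroom above 0, in dB

    def k_to_dbfs(k_reading, scale="K-14"):
        return k_reading - K_SCALES[scale]

    print(k_to_dbfs(0, "K-14"))    # -14 dBFS: average level sitting right at the 0 point
    print(k_to_dbfs(4, "K-14"))    # -10 dBFS: a peak that uses some of the headroom
    print(k_to_dbfs(0, "K-20"))    # -20 dBFS: same meter reading, more headroom above it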

Calibrating Your Monitors


To use the K-System “by the book,” you calibrate your monitors so that stereo pink noise reading 0 on
your K-System output meter produces 83 dBSPL (the sound level coming out of your monitor
speakers, as measured by a sound level meter). However, this level was specified for the large control
rooms associated with film studios, and isn’t a rule. Choose a level that’s comfortable for your ears,
which will be lower in a smaller room. Hopefully this will be a level where your ear’s response is
relatively flat, and also, loud enough that the effects of dynamic processing are obvious.

To calibrate, use Studio One’s Tone Generator plug-in, which provides a source of pink noise. Run it
through your system until the K-System meter reads 0, set the sound level meter to a slow response
with C weighting, and place it where you listen to the monitors. Adjust the monitor level for your
“standard” dBSPL reading (I use 76 dBSPL with near-field monitors).

A consistent monitoring level doesn’t preclude using other volume levels. I always start mixes at low
levels to keep my ears fresh. After the mix starts to develop, I’ll go for the standard monitoring level
for more detailed work. Eventually I’ll listen at a variety of levels to experience the range over which
listeners will hear it, and if necessary, return to the standard level for making tweaks.

You'll need to do individual calibrations for the three meter scales if you use all three of them (you may not if, for example, you mix only pop music and therefore use only the -14 scale). PreSonus's Eris speakers have a volume control on the front, so you can affix a label next to this control that shows the optimized setting for each K-Scale. Also note that what's been described is a rough calibration; a truly refined calibration may mean calibrating each channel separately, verifying that your panpot's center truly is "center," and the like. For more information, surf the web. You'll find everything from Bob Katz's web site [https://www.digido.com/], which describes use and calibration of the K-System in great detail, to opposing viewpoints from those who believe it's redundant in the age of LUFS.

All I can go by is my own experience, which is that monitoring at a consistent level, aided by the K-
System, helps produce more consistent mixes and masters.

Appendix C | Mixing with Noise

This subject is in an Appendix because if you’ve made it this far, you’ve probably found some value in
what I’ve written, and won’t dismiss this out of hand as you dissolve in gales of laughter. But most
people who’ve been brave enough to try this technique swear that it does work…see what you think.

One Reason Why Mixing Is Challenging


A mix doesn’t compare levels to an absolute standard; all the tracks are interrelated (e.g., lead
instruments typically have higher levels than rhythm instruments). Top mixing engineers are in demand
because they’ve trained their ears to discriminate among tiny level and frequency response differences.
They “juggle” the levels of multiple tracks to make sure that each one occupies its proper level with
respect to the other tracks. The more tracks, the more intricate the juggling act.

However, there are certain essential elements of any mix—some instruments that just have to be
prominent, and have similar levels because of their importance. Ensuring that these elements are clearly
audible, and perfectly balanced, is crucial in creating a mix that sounds good over a variety of systems.
Perhaps the lovely high end of some bell won’t translate on a cheap boombox, but if the average
listener can make out the vocals, leads, the beat, and the bass, the high points are covered.

Remember too that our ears are less sensitive to changes at relatively loud levels. Many veteran mixers
work on a mix initially at low levels, because it’s easier to tell if the important instruments are out of
balance. A balance that sounds good at higher levels may fall apart at lower levels, but it’s very likely
that if a mix sounds good at lower levels, it will sound at higher levels, too.

This Technique’s Backstory


When working in a studio in Florida, I noticed that mixes done with the air conditioner on often
sounded better than ones done when it was off. Then I made the connection with how many musicians
use the “play the music in the car” test as the final judge of whether a mix is going to work or not. In
both cases the background noise masks low-level signals. This makes it easier to tell which signals
make it above the noise.

Curious whether this could be defined more precisely, I started injecting pink noise into the console
while mixing. This just about forces you to listen at relatively low levels, because the noise is
obnoxious—but more importantly, the noise adds a sort of “cloud cover” over the music, and just as
mountain peaks poke out of a cloud cover, so do sonic peaks poke out of the noise. So, mixing with a
pink noise source (like Studio One’s Tone Generator) as one of the Channels, panned to center, can
help check whether a song’s crucial elements are mixed with equal emphasis.
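
If your DAW doesn't include a noise generator, here's a minimal Python sketch (assuming numpy and scipy are installed) that writes a one-minute pink noise file you can import as a track. Dividing each frequency's amplitude by the square root of the frequency is what turns white noise pink.

    import numpy as np
    from scipy.io import wavfile

    rate, seconds = 44100, 60
    white = np.random.randn(rate * seconds)

    # Shape the spectrum: amplitude / sqrt(f) gives a 1/f power slope (pink noise).
    spectrum = np.fft.rfft(white)
    freqs = np.fft.rfftfreq(white.size, d=1.0 / rate)
    spectrum[1:] /= np.sqrt(freqs[1:])             # skip the DC bin to avoid dividing by zero
    pink = np.fft.irfft(spectrum, n=white.size)

    pink = 0.25 * pink / np.max(np.abs(pink))      # normalize, then leave plenty of headroom
    wavfile.write("pink_noise.wav", rate, (pink * 32767).astype(np.int16))

Import the file, pan it to center, and mute or unmute it as needed while you check the balance.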

How to Mix with Noise
Inject noise only sporadically; you can't get an accurate idea of the complete mix while injecting noise, because it covers up high-frequency sounds like the hi-hat. But noise can help you make sure that all the important instruments are being heard properly.

Typically, I’ll take a mix to the point where I’m fairly satisfied with the sound. Then I’ll add in pink
noise, and start analyzing.

While listening, I pay special attention to vocals, snare, kick, bass, and leads (with this much noise,
you’re not going to hear much else in the song anyway). It’s easy to adjust their relative levels, because
there’s a limited range between overload at higher levels, and dropping below the noise at lower levels.
If all the crucial sounds make it into that “window,” and can be heard clearly above the noise without
distorting, you have a head start toward an equal balance.

Also note that the “noise test” can uncover problems. If you can hear a hi-hat or other minor part fairly
high above the noise, it’s probably too loud.

I’ll generally run through the song a few more times, carefully tweaking each track for the right relative
balance. Then it’s time to take out the noise. First, it’s an incredible relief not to hear that annoying
hiss! Second, and more importantly, you can now get to work balancing the supporting instruments so
that they work well with the lead sounds you’ve tweaked.

Although so far I’ve mentioned only instruments that are above the noise floor, the noise creates three
distinct zones: totally masked by the noise (inaudible), above the noise (clearly audible), and “melded,”
where an instrument isn’t loud enough to stand out or soft enough to be masked, so it blends in with the
noise. Mixing rhythm parts so that they sound melded can work well, if the noise is at a level suitable
for the rhythm parts.

Overall, I spend very little mixing time using the injected noise. But often, it’s the factor responsible for
making the mix sound good over multiple systems. Mixing with noise may sound crazy, but give it a
try. With a little practice, there are ways to make noise work for you.

About the Author

Musician/author Craig Anderton is an internationally recognized authority on music and technology.


His onstage career spans from the 60s with the group Mandrake, through the early 2000s with
electronic groups Air Liquide and Rei$$dorf Force, to the “power duo” EV2 with Public Enemy’s
Brian Hardgroove, and EDM-oriented solo performances.

He has played on, produced, or mastered over 20 major label recordings, did pop music session work in
New York in the 1970s on guitar and keyboards, played Carnegie Hall, and more recently, has mastered
well over a hundred tracks for various artists.

In the mid-80s, Craig co-founded Electronic Musician magazine. As an author, he’s written over three
dozen books on musical electronics, and over a thousand articles for magazines like Sound on Sound,
Rolling Stone, Pro Sound News, Guitar Player, Mix, and several European publications.

Craig has lectured on technology and the arts (in 10 countries, 38 U.S. states, and three languages), and
done sound design work for companies like Alesis, Gibson, Peavey, PreSonus, Roland, and Steinberg.

Please visit his educational web site at craiganderton.org (which also has free downloads), listen to
some of his music at youtube.com/thecraiganderton, and browse his products at the
craiganderton.com digital storefront.

Other Studio One Books by the Author

For more information on these books, and the Spanish-language version of “How to Create Compelling
Mixes,” please visit the PreSonus shop eBooks page.
