REC1732-O - Week 2

Sequencing Technology
MIDI and synthesizers are kindred spirits. Without the advances in synthesis and digital
sampling in the 1970s and '80s, there would have been no use for MIDI and the revolution
it spawned. A basic understanding of synthesis, digital sampling, and their techniques will
help you and your clients create and produce better sounds and tracks. Also, just about
every hardware and software instrument used today is based on the same synthesis techniques developed over fifty years ago, so you should have no problem figuring out (or
helping someone else figure out) the sonic structure of any MIDI sound module, keyboard,
or software instrument.
Beyond looking at advanced MIDI sequencing concepts and putting them into practice,
we will primarily focus on Logic's built-in Software Instruments, leading up to the Synthesis Project, which is due by the end of the week. This Project will emphasize the importance of synthesis and your ability to recreate basic sounds for use in any type of production scenario.


1
Synthesis Basics
Electronic music has been around for quite a while. Most researchers credit Elisha
Gray with being the first to create and transmit music electronically with his 'Singing Telegraph' in the 1870s.
After World War II, electronic music flourished, likely due to the abundance of electronic
components found in surplus shops. Composers and inventors in garage or basement
laboratories began to bring electronic sounds to the public ear via radio and television commercials, experimental performance pieces, and sci-fi and horror movies.
The 1960s ushered in a new era of electronic music production. Bob Moog, Don Buchla,
Alan R. Pearlman, and others began designing synthesizer components based on the three
basic fundamentals of sound, and sold them on the commercial market. No longer requiring a degree in engineering meant that more musicians could integrate electronic sounds and
textures into their music, albeit for a price.
In order to recreate sound electronically, the first thing that needs to be done is to categorize sound: break it up into its basic fundamentals, then create circuits to emulate them.
These basic fundamentals are Pitch, Amplitude, and Timbre, and the three circuits
designed to emulate them, respectively, are the:
Oscillator (VCO, DCO, etc.)
Amplifier (VCA, DCA, TVA, etc.)
Filter (VCF, DCF, TVF, etc.)




Oscillators and Filters

The sound that comes out of an Oscillator is one of several basic
single-cycle waveforms. Sawtooth, square, pulse, triangle, and sine waves, as well as pink or
white noise, are generally the most common. Each of these waveforms has a distinct
sound, ranging from very edgy to quite mellow. This initial waveform is the basis of
the sound you are creating. Most analog synthesizers have two or more Oscillators, allowing you to blend different waveforms together to create more complex sounds.
A keyboard connected to the oscillator has a discrete voltage for each key, commonly
1/12th of a volt apart to accommodate the twelve notes in an octave. As you play the keyboard, each key's voltage is sent to the Oscillator, which plays back the proper pitch.
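That 1/12th-volt-per-key scheme (the classic 1 volt per octave standard) amounts to a quick calculation. A minimal sketch follows; the function name and reference point are illustrative, not taken from any particular synthesizer:

```python
def key_to_voltage(keys_above_reference):
    """Control voltage at 1/12 volt per key, i.e. 1 volt per octave."""
    return keys_above_reference / 12.0

# Twelve keys (one octave) above the reference key is exactly 1 volt higher:
octave_up = key_to_voltage(12)   # 1.0 volts
fifth_up = key_to_voltage(7)     # seven keys up: ~0.583 volts
```

Doubling the voltage difference doubles the pitch interval, which is why analog oscillators respond exponentially to this control voltage.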
After selecting a waveform with the Oscillator, it is routed to the Filter. The filter allows us to subtract frequencies from our basic waveform, shaping the sound we
want to create. The most common is the Low Pass Filter, which removes the higher (treble)
frequencies while allowing the lows (bass) to pass through, hence the term Low Pass.
Other types of filters may also be available, including High Pass (filters the low end) and
Band Pass, which is a combination of a Low and High Pass filter, allowing only the mid frequencies to be heard.
After modifying our waveform(s) with the Filter, our sound is finally routed to the
Amplifier. Simply put, the Amplifier boosts the gain, or amplitude, of all the elements that come
before it. Not surprisingly, the only parameter typically available on an Amplifier is a gain or
volume control.
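The Oscillator → Filter → Amplifier chain can be sketched in a few lines of code. This is a deliberately naive model (a raw sawtooth, a one-pole low pass, and a simple gain stage), not how any real synthesizer is implemented:

```python
def sawtooth(freq, sample_rate, num_samples):
    """Oscillator: naive single-cycle sawtooth wave in the range -1..1."""
    return [2.0 * ((n * freq / sample_rate) % 1.0) - 1.0
            for n in range(num_samples)]

def low_pass(samples, alpha=0.1):
    """Filter: one-pole low pass; a smaller alpha removes more treble."""
    out, prev = [], 0.0
    for s in samples:
        prev = prev + alpha * (s - prev)
        out.append(prev)
    return out

def amplify(samples, gain):
    """Amplifier: scale the overall amplitude."""
    return [gain * s for s in samples]

# The classic signal path: Oscillator -> Filter -> Amplifier.
signal = amplify(low_pass(sawtooth(440.0, 44100, 1024)), gain=0.5)
```

Swapping the `sawtooth` function for a square or triangle generator changes the timbre, while `alpha` and `gain` play the roles of the filter cutoff and volume knobs.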


Modifiers

The three circuits are good for creating very basic sounds, but in order to create more interesting, moving tones, we need modifiers to add depth and character. The first modifier is a
circuit called the Low Frequency Oscillator, or LFO. Just like its name implies, the LFO
only creates low frequencies - so low, in fact, that they are below the threshold of our hearing,
from about 0.01 Hertz (or cycles per second) to around 10 Hertz (the theoretical range of
human hearing is 20 Hertz to around 20,000 Hertz). Since one signal can be used to
shape another, the LFO's purpose is not to be audible, but to serve as a modulator for
the Oscillator, Amplifier, or Filter.
When an LFO is routed to the Oscillator, the pitch generated by the Oscillator rides the
wave of the LFO, much like a boat bobs up and down on ocean waves. When the
LFO's wave rises, the Oscillator's pitch rises with it; when the LFO's wave falls, so
does the pitch of the Oscillator. This effect is commonly called Vibrato. If we route an
LFO to modulate the Amplifier, the effect is known as Tremolo, as we are modulating volume. Routing an LFO to the Filter creates what we call 'Wah Wah' - because it sounds like
a guitarist's Wah pedal.
The LFO itself has two parameters that control it. The first is Rate or Speed, which is the
frequency of the LFO; your ear perceives this as how fast the sound is modulating. The
second is Depth or Intensity, which is the amplitude of the LFO; we
hear this as how deeply the sound is being modulated.
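Vibrato, then, is just a pitch riding the LFO's wave. A minimal sketch, with arbitrary Rate and Depth values chosen for illustration:

```python
import math

def vibrato_frequency(base_freq, t, rate_hz=5.0, depth_hz=4.0):
    """Instantaneous pitch at time t: the base frequency riding a sine LFO.
    rate_hz = LFO Rate (how fast it modulates);
    depth_hz = LFO Depth (how far the pitch swings)."""
    return base_freq + depth_hz * math.sin(2 * math.pi * rate_hz * t)

# Over one second, an A440 swings between roughly 436 and 444 Hz:
freqs = [vibrato_frequency(440.0, t / 1000.0) for t in range(1000)]
```

Routing the same LFO output to a gain stage instead of the pitch would give Tremolo, and routing it to a filter cutoff would give the 'Wah Wah' effect described above.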


Envelope Generators

The second modifier circuit determines how the sound changes over time. Different instruments have particular characteristics depending on how they're constructed or played. A
violin, for example, if bowed, has a slight 'fade in' and 'fade out' because of the bow being drawn across the strings. If you pluck the same violin in 'pizzicato' style, the sound starts and ends almost instantly, sounding similar to a guitar played with a pick. The Envelope Generator was created to reproduce these same characteristics.
A keyboard can have an Envelope Generator for any or all three of the main circuits, but
the most common is for the Amplifier circuit. The envelope generator has four basic parameters: Attack, Decay, Sustain, and Release. These four parameters correspond to
when a key is pressed down, how long it is held down, and when it is let go.
The Attack parameter affects the beginning of the sound: how quickly the sound comes in
when the key is pressed. When it hits its peak volume, the Decay kicks in. The Decay
is how long it takes for the volume of the sound to drop to the level it stays at until you let
go of the key. The level it stays at is called Sustain. Sustain is almost like a 'loop' - repeating the sound until you let go of the key - and is the only parameter that is a level rather than a time. When you let go of the key, the Release stage happens, which determines how long it takes for the sound to fade out. The Envelope Generator gives us complete control of sound over time - not only amplitude, in this case, but, if available, the pitch and filter settings as well - allowing us to mimic the characteristics of familiar instruments as well as ones never heard before.

Figure 1.1 Envelope Generator (ADSR)
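The four ADSR stages can be sketched as a simple function of time. The times and levels below are illustrative defaults, not values from any particular synth, and for simplicity the sketch assumes the key is held past the end of the Decay stage:

```python
def adsr(t, gate_time, attack=0.05, decay=0.1, sustain=0.6, release=0.2):
    """Envelope level (0..1) at time t, for a key held for gate_time seconds.
    Attack, decay, and release are times; sustain is a level - the one
    parameter that is not time-based."""
    if t < 0:
        return 0.0
    if t < gate_time:                  # key is held down
        if t < attack:                 # Attack: rise to peak volume
            return t / attack
        if t < attack + decay:         # Decay: fall toward the sustain level
            frac = (t - attack) / decay
            return 1.0 - frac * (1.0 - sustain)
        return sustain                 # Sustain: hold until the key is let go
    rel = t - gate_time                # Release: fade out after key-up
    return max(0.0, sustain * (1.0 - rel / release))
```

Feeding this envelope into the Amplifier's gain recreates the 'fade in / fade out' of a bowed violin (long attack and release) or a pizzicato pluck (near-zero attack and release).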


2
Synthesis Types
Designers and manufacturers have created many different types of synthesis over the past
decades, many trying to improve on previous versions to create more powerful or more realistic sounds. These descriptions are for clarification and understanding of the different
types of synthesis available.
Subtractive Synthesis is the most common type of synthesis. It removes overtones from the overall sound by utilizing filters to strip harmonic content from
the basic waveform. You can think of Subtractive synthesis as sculpture: you
start with a block of raw material and remove what is not needed to produce the piece you desire.
Additive Synthesis is the process of combining waveforms via oscillators (generally sine
waves), each with its own respective amplitude envelope (ADSR), to create a sound that
changes over time. A Pipe Organ qualifies as a type of Additive synthesizer because each
rank (or set) of pipes generates a unique type of sound which, when combined with other
ranks, adds to the overall sound, creating a richer, moving tone.
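The core of additive synthesis is just summing sine waves. A minimal sketch (the 'organ' partials here are an arbitrary example, ignoring the per-partial envelopes a real additive synth would apply):

```python
import math

def additive_sample(t, partials):
    """Sum of sine-wave partials; each partial is (frequency_hz, amplitude)."""
    return sum(a * math.sin(2 * math.pi * f * t) for f, a in partials)

# An organ-like tone: a fundamental plus two progressively weaker harmonics.
organ = [(220.0, 1.0), (440.0, 0.5), (660.0, 0.25)]
sample = additive_sample(0.001, organ)
```

Giving each partial its own ADSR, so the harmonic balance shifts while a note sounds, is what turns this static recipe into a sound that changes over time.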
Resynthesis is the technique of recreating a natural sound by using additive synthesis
techniques. Resynthesis needs powerful (and usually expensive) hardware to adequately
reproduce 'real' sounds - multiple, if not dozens of, oscillators and envelopes - and has
been easily and cheaply replaced by the advent of Digital Sampling.
Frequency Modulation Synthesis (commonly called FM Synthesis) is where a simple
waveform is changed by modulating it with another waveform. This creates a new, more
complex waveform with a different-sounding tone and character. This type of synthesis
was utilized in Yamaha's popular DX and TX series of synthesizers in the 1980s.
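A two-operator FM pair can be sketched directly from that description - one wave bending the phase of another. The carrier and modulator frequencies and the modulation index below are arbitrary illustrative values:

```python
import math

def fm_sample(t, carrier_hz=220.0, modulator_hz=110.0, index=2.0):
    """Two-operator FM: the modulator wave bends the carrier's phase.
    A larger index pushes more energy into the new sidebands,
    giving a brighter, more complex tone."""
    return math.sin(2 * math.pi * carrier_hz * t
                    + index * math.sin(2 * math.pi * modulator_hz * t))

# A tenth of a second of the resulting waveform at 44.1 kHz:
tone = [fm_sample(n / 44100.0) for n in range(4410)]
```

With `index=0` the modulator has no effect and the output collapses back to a plain sine wave, which is a handy sanity check when experimenting.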
Phase Distortion Synthesis is similar to Frequency Modulation, except that in PD synthesis the phase angle of the waveform is changed, ‘bending’ its shape to create a unique
character and tone.
Wavetable Synthesis utilizes the sound of a single note, which is sampled and stored as
an oscillator waveform. Wavetable synthesis has two advantages over static single-cycle
waveforms: first, because it is a sampled waveform, recordings of more complex or acoustic sounds can be used, allowing for richer, more realistic synthesis; second, wavetable
synths allow you to 'crossfade' between different waveforms over time, generating more
complexity through the movement of the raw sound waveforms.
Pulse Code Modulation is essentially Digital Sampling playback. A natural waveform is
digitally recorded and stored as a series of binary code. Because the raw waveform is a
near-perfect reproduction of the original sound, it provides a more powerful and realistic
starting point for synthesis techniques.
Granular Synthesis utilizes the same techniques as Digital Sampling, but the sampled
waveform is divided into small pieces of sound lasting from 1 to around 50 milliseconds
(thousandths of a second). These ‘grains’ of sound are then layered on top of each other,
and can be adjusted by speed, phase angle, and volume. The resulting tone 'cloud' has a
very complex sound, and the parameters of the individual grains can be manipulated over
time, further adding to the complexity of the tone.
Graintable Synthesis is a synthesis method used by Propellerheads’ Malstrom synthesizer in their Reason software package. Graintable synthesis is a combination of the features of Wavetable and Granular synthesis methods, allowing selection of particular
wavetables which have been broken down into grain clouds for further manipulation.
Physical Modeling Synthesis uses powerful DSP algorithms and equations to simulate
the properties of a natural (or otherworldly) instrument. These properties can include what


material the instrument is made out of, its size, how the instrument is played (struck,
plucked, blown, etc.) and other characteristics.
Analog Modeling uses the same techniques as Physical Modeling Synthesis, but is specifically designed to recreate the nuances and characteristics of analog synthesizers. Analog modeling synths (commonly known as Virtual Analogs) offer nearly the same programming and sound capability as real analog synthesizers, but because they are DSP recreations, they avoid the tuning instability, maintenance issues, and high cost that are the Achilles' Heel of true analog synthesizers, vintage or modern. Analog modeling is at the heart of the software or 'virtual' synthesizers that have become increasingly popular over the past few years.


3
Software Instruments
Software or Virtual Instruments (commonly called 'plugins') have truly come of age. Computers have become so powerful that with the addition of a decent audio interface, a simple
MIDI controller, DAW software, and plugins, you can do more than tens or hundreds of
thousands of dollars in MIDI and audio hardware could do just a few years ago.
The concept of 'mixing in the box' is not new. Composer and inventor Raymond Scott began building his Electronium in the mid-1950s as a complete composition center at which the
user could define and create rhythms, bass lines, and melody parts from a single control console. Powerful systems like the Fairlight and the Synclavier in the 1980s combined digital
sampling and synthesis with sequencing capabilities. The mid and late '80s ushered in the
era of the 'MIDI Workstation,' which combined sample playback, synthesis, and sequencing abilities in one. But the personal computer revolution has made every step in music
technology before it look like children's toys.
The first foray into combining sound recording, playback, and editing with a computer
was the 'sound card.' These built-in or aftermarket cards contained simple General MIDI
playback engines that used basic synthesis or sample playback chips to provide alert
sounds, MIDI file playback, or gaming sound effects and music. By the late 1980s and
early 1990s, technology had progressed enough that manufacturers would market their
cards with rudimentary MIDI sequencing programs, editing and librarian software, and
even crude sampling capability.
The high-end market followed quickly with Digidesign's SoundDesigner, SoundAccelerator,
and SoundTools (now ProTools) systems, soon followed by offerings from Korg, Terratec, Ensoniq, CreativeLabs, and many others.


On the strictly-software side, the earliest known 'software synth' was the Digidesign SoftSynth program in 1985-86, while SeerSystems claims to have made the first software-only synthesizer with their Reality in 1996-97. PPG Instruments in Germany showed a prototype of
the 'Realizer' in 1986, a stand-alone computer/control surface that contained
virtual representations of familiar instruments, like the PPG Wave and the MiniMoog, as
well as their own instruments if desired. Wolfgang Palm, President of PPG, delighted in telling the quizzical crowd at the German Musikmesse trade show how he copied the circuit
diagrams of the original instruments into software. Unfortunately, the Realizer never made
it out of the prototype stage.
Regardless of who got there first, a sound module created by software programming alone is a powerful concept. Designers and musicians alike dreamt of
the day they could have an entire studio inside a computer - your author was certainly
one of them. As computing power and technology increased exponentially, that dream
inched closer to reality. Opcode Systems' Studio Vision blurred the lines between MIDI sequencer and audio recorder/editor, while Steinberg's Cubase VST (Virtual Studio Technology)
converged MIDI and audio sequencing with virtual plugins - both audio effects and virtual
instruments. Almost every manufacturer of sequencing software quickly followed suit, either by licensing Steinberg's VST protocol or by developing their own.
For the past few years, computers have become so powerful that
the idea of MIDI hardware is becoming very passé. Almost every major Non-Dedicated
MIDI sequencer or DAW comes with included software instrument and effect plugins that
can rival or exceed their physical counterparts.
Even though virtual instrument plugins send their audio out of the host computer (or an
audio interface), they are controlled by MIDI. Any MIDI keyboard or controller can trigger
the sounds of the plugin, and any additional controls such as knobs, sliders, and buttons
on said MIDI controller can handle the functions of various parameters and operations of
the plugin.
Keep in mind that you will need to ensure that the plugins you acquire can be used depending on your computer platform and sequencer manufacturer.


The major plugin formats and platforms:
AU: AudioUnits, Macintosh Only, Realtime Processing, Licensable Standard
VST: Virtual Studio Technology, Mac/PC, Realtime Processing, Licensable Standard
AS / RTAS: AudioSuite / Real Time AudioSuite, Proprietary (Digidesign), Mac/PC. AudioSuite is Non-Realtime, RTAS is Realtime. (AS/RTAS phased out with Pro Tools 11 - see
AAX below)
AAX: Avid Audio eXtension. Introduced with Pro Tools 10 and required for Pro Tools 11 or
better. Real Time, 64-bit Native and DSP card formats.
MAS: MOTU Audio System, Proprietary (MOTU Digital Performer/AudioDesk), Realtime
Processing
Premiere: Originally developed for Aldus (now Adobe) After Effects, Mac/PC, Non-Realtime Processing.
DXi: DirectX Instrument, PC Only, Realtime Processing, Licensable Standard

Image 3.1 AU


Trends and the Future

As of this writing, it looks like Software Instruments are becoming the predominant MIDI
instruments of choice. As computers become more and more powerful, their portability
and ease of use become incredibly convenient. Hardware will never go away completely, so you
still need the skills to program and use it efficiently, but software
can do just as much as, if not more than, its hardware counterparts.
Don’t forget that Logic can utilize AU plugins as well as its vast range of built-in instruments. We have a video that shows you how to find and install AudioUnit plugins, and the
website reference at the end of the Coursebook has links to a wide variety of additional instrument and effect plugins.
One of the more intriguing concepts we have seen over the past few years has been a revival of the editor/librarian concept, but with a virtual twist. The programs can
be used standalone like their predecessors, but now they can also be inserted as a plugin on a virtual instrument track of a sequencer. This means that when you
make changes on the plugin, it uses MIDI commands to update the actual MIDI instrument. This can free up computing power which might normally be used by the plugin for a
'true' virtual instrument, since the keyboard or module is handling the sound generation duties and not the CPU in your computer. This concept means you can have an instrument
that can be edited and saved like a traditional plugin, but can easily be taken out for live or
studio use.
Another trend that dovetails into this concept is the recent implementation of 'External Instrument' tracks. These tracks allow you to use any external MIDI keyboard or sound module
very much like a traditional instrument plugin. The MIDI commands are sent via your MIDI
interface to the device, and the audio out from the device is then routed back to the computer through your audio interface. Although it doesn't allow the editing and librarian functions mentioned above, it does let you add plugin audio effects to the device, and even bounce 'in the box' as you would with a traditional instrument plugin.

Loading Software Instruments

Software Instrument Tracks are the main type of track we’ll be using here in Sequencing
Technology. We’ll talk about External MIDI and Audio Tracks, but Software Instruments are
the easiest way for us to explore Logic Pro and MIDI.
To load a Software Instrument, follow these simple steps:
1. Within your Logic Project, create a Software Instrument Track (Command-Option-N is
the shortcut for the New Tracks window). You can also go to Create New Software Instrument Track... under the Track Menu in the Menu Bar.
2. Locate the leftmost Channel Strip in the Inspector (it will say something like 'Inst 1'
at the bottom of the Channel Strip) and click on the Instrument slot towards the top of
the Channel Strip. See Image 3.3 on the next page.
3. The list that pops up shows all of the Software Instruments available to Logic.
4. Select an instrument that looks interesting to you.
5. You may see an additional popup menu when selecting a plugin that allows you to
choose Mono, Stereo, or Multi Channel. We’ll discuss these options later. For now,
choose Mono or Stereo.
6. The plugin will be loaded, and the Software Instrument window will open. Play your
USB MIDI keyboard and the sound will come out of your MacBook Pro or your external audio interface.
Feel free to move knobs and sliders to alter the sound of the instrument!

Image 3.2 Loaded Instrument slot


Image 3.3 Instrument Slot

Instrument slot


Software Instrument Presets and Settings

Choosing a Software Instrument Preset
1. Click on the Settings (Factory Default) drop-down menu. Image 3.4 shows the ES M
with Settings menu.

Factory Default - Settings menu

Image 3.4 Factory Default

2. You should see a list at the bottom of the pull-down menu showing the presets available. These may be simple patch names, or categorized into subfolders depending on
the Software Instrument you have chosen. Navigate through and simply select a preset that sounds interesting...


3. If you checked the Open Library checkbox, the Library Tab will open in the Media Palette. This will also allow you to choose the same presets.

Saving an edited sound as Preset
If you change any of the parameters for a Software Instrument to create a certain type of
sound, you will probably want to save this for future reference.
1. Click on the Settings (Factory Default or other sound name) drop-down menu.
2. Select "Save Setting As…". This will open a Save menu for this Preset.
3. By default, this Save menu will open to the Plugin Setting folder for the Software Instrument you have chosen.
4. Title your new patch in the "Save As" dialogue box and press the Save button in the bottom right corner of this window.
Now if you click on the Settings Menu again, you should see your saved Preset above the
Factory Preset selections.
Keep in mind that any adjustments you make to Software Instruments will be saved with
the Logic Project, but they will not be available in any other Project unless you save them
as a Preset.


Channel Strip Settings & Performances

The Channel Strip Setting has become the highest priority level of the Library. It allows
you to select a preset which will automatically load the saved plugin on a Software Instrument track, with the correct Preset, along with any audio effect plugins you have inserted
on the Channel Strip as well. This is an incredibly powerful function that we have fallen
in love with. But that's not all the Channel Strip Setting does...
It also allows you to create category folders for whatever you want, and these are available
in the Library whenever Logic launches. It provides a super-easy ‘one-click’ selection of
both factory and custom Presets that really speeds up your workflow.
The Performance is simply a Channel Strip Setting that allows MIDI Program Change messages to select a Channel Strip Setting. We’ll get more into this function when we talk
about the MIDI Messages next week.

To create a Channel Strip Setting
1. Load a Software Instrument of your choice on a Software Instrument Track.
2. Select whichever Preset you desire from either the Settings pull-down menu or the Library.
3. (Optional) You can also load audio effect(s) in the Inserts boxes at the top of the Software Instrument's Channel Strip at the bottom of the Inspector.


4. Once you have created the ultimate sound with your Software Instrument, save the
Channel Strip Setting by clicking and holding on the 'Setting' box at the top of the
Channel Strip window.
5. Select 'Save Channel Strip Setting As...' from the top of the Settings menu.

Tip: You can always 'reset' to the top level of your Channel Strip Setting Presets in the
Library by clicking on the Setting button in the Channel Strip. It should also reset when
you create a new Software Instrument Track.

Image 3.5 Save Channel Strip Setting as ...

6. In the Dialog Box that opens, name your Channel Strip Setting (where it says 'Untitled'
by Save As:) and click 'Save' in the bottom right corner. By default, Logic will choose
the Instruments folder.
7. At the bottom left of the Dialog Box is a button labeled ‘New Folder’. This will allow
you to create category folders as you want.


The picture in Image 3.5 is your Course Director's Channel Strip Settings folder, categorized by Software Instrument for easy access. You can see how much he loves Software Instruments...


Virtual Instruments

We recommend reading Apple’s Logic Instruments Manual:
http://manuals.info.apple.com/en_US/logic_pro_x_instruments.pdf
Besides the built-in instrument plugins, Logic can use third-party Audio Unit instruments
and effects as well. The Installing Third-Party Plugins video on FSO will guide you through
this process of installing and loading additional sound sources. There are many manufacturers of these, both for sale, and for free.
The site we recommend for simplicity is Don't Crack: http://www.dontcrack.com/freeware/software.php/id/7030/audio/Virtual-Instruments/plugins/AU/
KVR Audio has a vast searchable database of every plugin known to man. It’s a great resource for keeping up to date with audio and instrument plugins and updates.
http://www.kvraudio.com/


4
Digital Sampling
In 1979, a small Australian company developed an instrument they called the Qasar,
originally designed to be a very powerful Additive/Resynthesis keyboard instrument.
One of the designers, Kim Ryrie, discovered that with a slight modification to the circuitry,
they could digitally record sounds and play them back across the keyboard. They rechristened the instrument the Fairlight and debuted it at the 1980 NAMM show in Anaheim.
Although very expensive (over $30,000 in 1980 dollars), it created quite a stir in the music markets of America and Europe, redefining the landscape of music production.
After the Fairlight’s breakthrough, manufacturers began churning out samplers with the latest technological changes and updates at prices that soon fell well within the affordable
range of the average musician. Sampling technology became entrenched in every music
style from the Dance world to the twangs of Country and Western.
Today, digital audio workstations (both hardware and software), with their processing
power, infinite plug-ins, and simplicity of use, are mainstays for music making, but samplers still have a prominent role in today's compositions, from sample-playback based
workstations to the MPCs, MV-8800s, and SP-1200s which are still the cornerstone of
Dance, Hip-Hop, and R&B styles.

What is a Sampler?
A sampler is a device that allows us to capture sound and trigger it to play back using
MIDI.
Like sequencers, there are two types of samplers: hardware and software. Hardware samplers have become all but obsolete, with the sole survivors being Akai's MPC series, DJ samplers, and sampling additions or options found in 'Workstation' type keyboards like the Yamaha Motif, Roland Fantom, or Korg Kronos.
In the music world, the sampler has been used for loops, individual drum hits or slices,
sound effects, hook lines, or acoustic instrument reproductions. Although digital audio recording in a DAW has somewhat replaced loop and effects playback and manipulation in
many genres, sampling still has a coveted place in the musical world, and sample-playback instruments will certainly be available to musicians for the foreseeable future.

Digital Sampling Components
A/D Converter: Converts the incoming analog signal to a binary signal so the sampler can
control, edit, and modify the recorded sound.
RAM: This is where samples are recorded, and also where they are loaded when playing back
from within a program. This can be proprietary modules, off-the-shelf SIMMs or DIMMs
(standard computer memory), or a fixed amount built into the device.
Flash Media / Hard Drive / Floppy Disks: Where the samples are stored. Typically a sampler erases its memory when powered off, so you need some kind of storage if you want
to get your sound(s) back again.
D/A Converter: Converts the ones and zeroes back into an analog signal that human ears
can appreciate.
Both hardware and software samplers utilize RAM the same way, and this is where one of
the first advantages and disadvantages appears. Since RAM is where the samples are recorded or loaded, and RAM is directly tied to the number of samples that can be accessed at one time, the software sampler has the advantage: the more RAM, the more samples
you can use. The same rules that apply to digital audio - about 10 Megs per stereo minute at 16
bit / 44.1K sample rate - apply here. What does this mean to you? If you have a hardware
sampler that maxes out at 128 Megs of RAM, that translates into about 12 to 13 minutes
of stereo CD-quality audio. Not too bad - especially if you are only using loops or
simple hits. Where this falls short, however, is if you need to recreate acoustic instruments,
because they have so many inflections and tonality changes with every single note, depending on how the note was struck. This is true for all acoustic instruments.
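The '10 Megs per stereo minute' rule of thumb is easy to verify with a quick calculation (using 1 MB = 1,000,000 bytes; the function name is illustrative):

```python
def sample_memory_mb(minutes, sample_rate=44100, bit_depth=16, channels=2):
    """Uncompressed audio size in megabytes for a given duration."""
    bytes_total = minutes * 60 * sample_rate * (bit_depth // 8) * channels
    return bytes_total / 1_000_000

one_stereo_minute = sample_memory_mb(1)      # ~10.6 MB per stereo minute
fits_in_128_megs = 128 / one_stereo_minute   # ~12.1 stereo minutes of samples
```

That ~12-minute figure is exactly why a 128 MB hardware sampler is fine for loops and hits but struggles with deeply multisampled acoustic instruments.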
Which leads us to a bit of sampler usage, tips, and technique…


Sampling Techniques

Drum or guitar loops, hook lines, and sound effects are easy to record and play back, as
they are individual events. But what if you wanted to capture an acoustic piano or a complex synthesizer sound? This is where some understanding of sampling structure and technique comes into play.
Let’s start by sampling an acoustic piano. A typical piano has 88 keys, which means 88 total samples if we want its complete range. But also remember that a piano produces a different
tone for each note depending on how much force we strike the key with. We could emulate this by simply scaling a single sample’s volume with velocity, but it would not be as accurate, so we decide
to capture six different strikes of each key, ranging from pianissimo to fortissimo, and use velocity switching between them. We now have
a total of 528 samples we would need to capture! We also need to think about how long
each note will decay: the lowest notes on the piano can ring out for well over a minute,
while the highest ones die out almost instantly. So now this ‘simple’ piano program
could easily end up being several hundred Megabytes to Gigabytes in size, depending on the sample rate and bit depth we record with. This is why software samplers have a huge advantage here: they can address enough RAM to get the job done right,
and they are generally easier to use because of their graphical interfaces.
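The sample count and rough program size above can be checked with some quick arithmetic. This sketch assumes an average of 10 seconds per note at 16-bit / 44.1 kHz stereo - an illustrative guess, not a measured figure:

```python
KEYS = 88
VELOCITY_LAYERS = 6
samples_needed = KEYS * VELOCITY_LAYERS
print(samples_needed)               # 528

# Rough program size, assuming ~10 seconds per note (illustrative)
bytes_per_second = 44100 * 2 * 2    # 16-bit stereo at 44.1 kHz
total_bytes = samples_needed * 10 * bytes_per_second
print(total_bytes / 2**20)          # ~890 MB, before raising bit depth
```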
Sampling a piano is obviously not an easy task, which is why many manufacturers provide
fairly high-quality samples of traditional instruments.
But there is one other ‘trick’ in the sampler’s arsenal that could help us when sampling an
instrument…
A sampler (hardware or software) speeds up or slows down the individual sample(s) across the
range of a keyboard to create pitch. This means that we can probably get away with sampling two or three notes per octave, map them to their corresponding keys,
and let the sampler speed up and/or slow down the sample between each recorded tone. The key
where the sample plays at its original pitch is called the Root Key or Key Note, and
we can then stretch the Note Range across the keyboard to extend the reach of that Root
Key. This also means less memory is required per instrument, making this approach perfect for
memory-limited hardware samplers. However, keep in mind that letting the sampler pitch a sample
up or down too far will make for some very unnatural sounds - though this can be a good
thing as well…
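The speed-up/slow-down a sampler performs follows the equal-tempered ratio of 2^(semitones/12). A minimal sketch (function and variable names are illustrative, not EXS24 terminology):

```python
def playback_ratio(note, root_key):
    """Playback-rate ratio needed to transpose a sample recorded at
    root_key so it sounds at note (MIDI note numbers, 12 semitones/octave)."""
    return 2 ** ((note - root_key) / 12)

# A sample rooted at middle C (MIDI 60), stretched a whole step up and down:
print(playback_ratio(62, 60))   # ~1.122 - plays about 12% faster
print(playback_ratio(58, 60))   # ~0.891 - plays about 11% slower

# A full octave doubles (or halves) the rate - and the sample's length
# changes with it, which is why wide transpositions sound unnatural:
print(playback_ratio(72, 60))   # 2.0
```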
Other than thinking about structure and potential memory concerns, the only other thing to
really think about with a Digital Sampler is the actual recordings themselves.
Always make sure your recordings are as loud as possible without going into the red. Digital overs are not nice to listen to, and may even damage your equipment - not to mention
your ears! Always try to set the gain properly before sampling - most sample editors can
normalize a recording afterward, but normalizing boosts everything, including noise and artifacts.
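The warning about normalizing can be seen with a toy example: peak normalization multiplies the noise floor by the same factor as the signal, so the signal-to-noise ratio never improves. A sketch with made-up level values:

```python
# A recording captured too quietly: peak at 0.25 of full scale,
# with a low-level noise floor
signal_peak = 0.25
noise_floor = 0.01

# Peak normalization scales everything so the loudest sample hits 1.0
gain = 1.0 / signal_peak
normalized_peak = signal_peak * gain    # 1.0
normalized_noise = noise_floor * gain   # 0.04 - the noise came up too

# The signal-to-noise ratio is unchanged (~25:1 either way)
print(signal_peak / noise_floor)
print(normalized_peak / normalized_noise)
```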
Keep this in mind when downloading sounds and samples from the Internet as well. Some
sound designers do a fantastic job of creating and editing their soundware, but some are
just plain amateurish, requiring you to clean up bad start times, low levels, and even file
type mismatches.

Sample Libraries
We have talked about creating your own Sampler Instruments and Programs, but there are
also programmers and companies that create sample libraries for every genre and sampler
type. A quick Web search will display dozens of them, and many offer an instrument or
two for free to demonstrate their sounds and techniques. You will probably have to pay for
the full library, but the money spent may save you hours of time and frustration if
you (or your client) prefer to record rather than play programmer.
EXS24 mkII

Now, we’ll explore the EXS24 mkII. It’s another Software Instrument, just like the ones you used
in the previous examples. Its graphic user interface makes it easy to see the inner workings of
digital sampling technology.
To use the EXS:
Create a new Software Instrument Track and, from the Instrument slot on the Channel Strip, load the
EXS24 Sampler. Choose ‘Stereo’ for its output. Like the other Software Instruments, the plugin
window will appear after the plugin is loaded in the Channel Strip.

[Image 4.1 - Load Sampler Instruments: click to open the sampler instruments menu]
You can select Presets from the Library just like with any other Software Instrument, but unlike the others, the Settings (Factory Default) pull-down menu does not access the EXS
Presets (Sampler Instruments).
To access preset Sampler Instruments from the EXS24, click on the
Sampler Instruments menu button and choose the desired factory sampler instrument from the pop-up menu.
Changing EXS24 Parameters
Changes made to parameters in the EXS24 interface are applied globally to all samples
that make up the current program. These are basic subtractive synthesis parameters, and
should look familiar from Logic’s built-in Software Synthesizers that we used above.

[EXS24 interface overview - labeled sections: Global parameters; Pitch parameters; Filter parameters; Sampler Instruments field; Output parameters; Modulation router; Modulation & control parameters; LFO parameters; Envelope parameters 1 & 2]
Take a look at our Sampling videos on FSO to get a feel for the simplicity, yet power, of the
EXS24 Software Instrument.
You can always load in a Factory patch, hit the Edit button (right next to the ‘LED display’
in the upper right-hand corner), and see how the programmers have laid out samples and
groups to give you some inspiration. Feel free to change things up as well - you can always Save As… under a different name, or just not save when you are done experimenting.
You can find plenty of samples for free on the Internet - just search for ‘free samples’ in a
search engine and you’ll be amazed at the number of hits you’ll get. Keep in mind that the
EXS24 will load SoundFont banks as well - there are plenty of those to be found on the
Web if you want to expand your Digital Sampling arsenal.
A few links to get you started:
http://www.macosxaudio.com/forums/viewtopic.php?t=39561
http://phonologic.net/exs24/
http://www.morphproductions.com/free_exs24_soundfont_samples.html

Happy Sampling!
Additive Synthesis

The creation of complex waveforms through the addition of multiple sine waves at varying frequencies and amplitudes.

Amplifier

A device capable of increasing the magnitude of a signal by increasing the voltage or current of
the signal passing through it.


Analog Modeling

A synthesizer that generates the sounds of traditional analog synthesizers using DSP
and software algorithms to simulate the behavior of the original electric and electronic components, in
order to obtain the sound in a more precise manner from the simulated inner workings of the circuitry, instead of attempting to recreate the sound directly.


Envelope Generator (EG)

The circuit in a synthesizer used to contour and shape an electronic sound over time.


Filter
A device designed to attenuate certain frequencies or bands of frequencies.


Frequency Modulation Synthesis
A simple waveform is changed by modulating it with another waveform. This creates a
complex waveform with a different sounding tone and character.


Graintable Synthesis

A combination of the features of Wavetable and Granular synthesis methods, allowing the use of
particular wavetables which have been broken down into grain clouds for further manipulation.


Granular Synthesis

Utilizes the same techniques as Digital Sampling, but the sampled waveform is divided into tiny
pieces of sound lasting from 1 to around 50 milliseconds (thousandths of a second). These
‘grains’ of sound are then layered on top of each other, and can be adjusted by speed, angle, and volume.


Low Frequency Oscillator
A modulating control signal that operates in the sub-audio range.


Oscillator

The circuitry that generates the kernel of a synthesizer sound. In the early days, oscillators generated a fairly basic set of sound types (sawtooth, square, pulse, etc.). In modern synth engines, oscillators can be driven by a myriad of waveforms and samples.


Phase Distortion Synthesis
Similar to Frequency Modulation, except that in PD synthesis the phase angle of the waveform is
changed, ‘bending’ its shape to create a unique character and tone.


Physical Modeling Synthesis
A type of synthesis that uses a set of equations and algorithms to simulate a physical sound
source.


Pulse Code Modulation
A method of digitizing audio by sampling the signal’s amplitude at a steady rate and representing
each sample with a multi-bit word. The standard audio format for CDs.


Random Access Memory (RAM)

RAM is a form of computer data storage. A random-access device allows stored data to be accessed directly in any order.


Resynthesis
The technique of recreating a natural sound by using additive synthesis techniques.


Subtractive Synthesis
Controlling the timbre of a complex sound by filtering out the unwanted frequencies.


Tremolo

Amplitude modulation of a sound, as in synthesis and certain guitar/keyboard amps (some
amps may incorrectly call this “vibrato”).

Related Glossary Terms
Vibrato, Wah Wah

Vibrato
Frequency modulation of a sound. Compare with Tremolo.

Related Glossary Terms
Tremolo, Wah Wah

Wah Wah

A musical effect achieved on brass instruments by alternately applying and removing a mute, or
on an electric guitar by controlling the output from the amplifier with a pedal.

Related Glossary Terms
Tremolo, Vibrato

Wavetable Synthesis

A process by which sounds are generated from snippets of real instruments stored on a chip or in memory (the wavetable).
