
LAB SIX

REVIEW
Understand the difference between series and parallel
processing.
Have a scheme in place for naming your processed files,
particularly series processed files.

GENERAL CONSIDERATIONS
When you are working on the third project, there are a number of
general points to consider that are not specific to any software.

ASSIGNMENT 3:
MUSIQUE CONCRÈTE EXERCISE
Due in Week Ten.

ASSIGNMENT
Create two audio files, the first of which is a short (maximum
10-second) sound clip (source material), the second of which is a
one- to two-minute composition exercise based entirely on musique
concrète-style manipulations of the original source material.
Upload both files to WebCT.

PROCEDURE
Record a sound or series of sounds using a microphone and/or
sounds from CD. Using editing, reversal, mixing, and so on,
manipulate portions of and/or entire segment(s) or sound
object(s). Create a one- to two-minute monophonic musique concrète
piece.

Sound objects that are entirely prerecorded music are not permitted.

Hint: The trick is to find a sound object that has a wide range
of timbres or will provide a rich palette from which to choose
sounds, or to choose a few short objects with diverse character.

Do not use electroacoustic pieces or sounds that have been previously
processed as source segment(s).

EVALUATION
Evaluation of your work will be based on:

technical aspects

sound quality (no clicks, pops, extraneous noises)

good recording levels

lack of distortion

creative aspects

formal unity (fulfilling expectations)

wit (surprises)

quality of sounds.

This assignment is not meant to be a major work but rather an
exercise in experimenting with these techniques. This project could
be completed in four to six hours.

THE ASSIGNMENT, EXPLAINED


This assignment uses everything you learned in the first two
assignments (digital editing, file handling, processing). It is now
time to put these techniques to use creatively.
Ten Seconds of Sound
The idea is to find one or more sounds that together last no more
than ten seconds and to use only those sounds to create a one- to
two-minute compositional exercise. You will hand in the ten
seconds of source material on the first track and a second track that
contains the one- to two-minute composition.


Don't think that you have to create a ten-second source that
you will then manipulate. Nor should you start out trying to record
your different sounds all at once. Instead, settle on one or more
sounds that you want to use, keeping in mind that their total length
will eventually have to be less than ten seconds. Experiment with
them; you may find that you will substitute other sounds and
discard some. You might even complete the composition first. Then
place these sounds, one after another, in a file to demonstrate the
ten seconds of source material.
The Manipulations
The kinds of manipulations that you can do to the sound are
limited to those done by the early musique concrète composers:
reversal, mixing, editing (including amplitude change), filtering,
and some time-based effects (such as echo, delay, chorusing, etc.).
Newer signal-processing methods, particularly those that are
entirely digital in their origin, are not allowed (we'll save those for
the final project).
The monophonic limitation means that the end product should
not have different material on the left and right channels. You are
encouraged to use more than a single track in the compositional
stage (within ProTools) and thus have several layers; however, do
not worry about panning (placing a sound in a left to right spatial
location). Again, we'll save this technique for the final project.
The Marking
Marking will be evenly divided between technical and creative
elements. Technically, I'll be listening for sound quality (no clicks,
pops, or extraneous noises that may have resulted from poor
recording technique), good recording levels, and a lack of
distortion (which may have resulted from either poor recording or
careless processing).
The creative aspects will include a balance between unity
(fulfilling expectations) and wit (unexpected surprises) as well as
the general quality of sounds (the sounds themselves should be
interesting and complex). These elements will be discussed later.
Lastly, please consider this as an exercise: a short study, not a
major composition! Experiment in this assignment and complete it
on time. You will then be able to concentrate more fully on the final
project, which draws upon this assignment as well as further
material.


THE SOURCE MATERIAL

Much of the success of this assignment depends upon the source
material. Because you will be manipulating these sounds in many
ways, the source material must contain certain properties that will
make the manipulations interesting. These properties may include
an interesting envelope, an evolving spectrum, or an internal
rhythm.
Voice is a very rich sound source. As you should have noticed
in the first assignment, voice contains both consonants (which are
noise based) and vowels (which are pitched and melodic). It is our
natural instrument, and can be manipulated in a variety of ways in
the controlled recording environment in order to produce a wide
variety of sounds and noises. Furthermore, our relationship to text
and our need to understand voice is very strong. For example, when
we hear a voice speaking a language we do not understand, we can
focus upon its sound rather than its meaning. Playing with semantic
elements of text adds another potential level to this exercise.
Because this material will be extensively manipulated and
processed, it is important that these sounds be as neutral as
possible. In other words, sounds that are already processed (such
as those found in an existing electroacoustic work) or combined
with other sounds (such as existing musical material) make poor
source material.

WHAT SOUNDS TO USE?

Spend the next few days listening to sounds around you, and
examine their constituent elements thoroughly. The motorcycle that
just passed you: what was the rhythm of its engine? Was its
spectrum mainly high, mid, or low frequency? How could you
describe its amplitude envelope: symmetrical? The sound of filling
the sink with water: does the frequency change as the water level in
the sink changes? Does the tap make a squeal as the water comes
out? Can you control this squeal by changing the water pressure?

SOUNDSCAPE
The sounds that occur around you are part of the soundscape. You
can think of it as an environment of sound, or sonic environment.
Consider (and imagine) the following soundscape:


You are walking to the bus; you can hear your footsteps on the
pavement. Perhaps it is raining, and your feet make quiet splashes
with each sound. As you approach the bus stop, you can hear a
conversation between two people already waiting there. Maybe it's in
a language you don't understand. As you wait, you can hear the cars
passing by on the street. There may be a certain rhythm to their
passing. Finally, you hear the bus approach, you hear its brakes squeal
as it stops. The doors open with a whoosh, and you step inside, drop
your fare into the box. You can hear the coins jangle as they go down.
There are several conversations going on around you, but they sound
different from the one you overheard at the bus stop, because the
sound reflects off of the walls of the bus. Someone pulls the cord
(ding), and as the bus slows down, you hear the brakes squeal.

This experience may be familiar to you and have a particular
meaning for you. Maybe you do this every day on the way to
school. But for someone who doesn't take the bus (a Howe Street
stockbroker?) or someone who has never seen a bus (an Inuk?), this
soundscape will mean something else (or nothing at all).

SOUND EVENT
From the soundscape, we can extract a sound event. Although the
soundscape may last for minutes, the sound event may last only a
few seconds. Contained within the sound event is its spatial and
temporal context, that is, where the sounds occurred and in what
order they occurred.
For example, walk down a hallway to a door (is it a
reverberant space, is the floor carpeted, are there other people?),
remove a key (one key? two? many?), insert it into the lock, open
the door (does it creak?), then close it (did it slam like a heavy
metal institutional door? A lighter wooden door?).
The temporal context (the order of events) is important. For
example, having the door close before it opens would be confusing
and disorienting.
The sound event has meaning to us. You can use a sound event
(or portions of one) as your source material and play with our
expectations about its meaning.

SOUND OBJECT
From the sound event, we can extract a sound object. This is the
smallest self-contained element of a soundscape. It can be analyzed
by the characteristics of its spectrum, loudness, and envelope.

(In terms of the sound event and soundscape, these elements would
be changing continuously, and we would have to generalize).
In our sound event example, each element is a separate sound
object: a footstep, a set of keys jingling, a door opening, and so on.
Because they are self-contained objects, they are easy to manipulate
and process (you wouldn't generally process an entire sound event
. . . but, then again, you might).
Although the sound object may be referential (a door opening
can have many meanings), you should consider it primarily as a
phenomenological sound formation, independent of its referential
qualities as a sound event. Of course, we can play with the
referential qualities, but these should be considered secondary.
For example, we may limit all of our sound objects to those
produced by a balloon: rubbing, blowing it up, squeaking, and so
on. We could treat all those objects as unique sounds and process
them as such. But we could end the piece with the sound of the
balloon popping. In this case, the referential quality of the sound is
as strong as its sonic characteristics.
Lastly, the sound object should not be confused with the
sounding body that produces the sound. For example, if I say my
sound object is "guitar," I am not describing the sound but the
device that created the sound. Although we may think of the guitar
as producing one main type of sound (a musical note), we can use
it to produce a great variety of sounds by scraping the strings,
tapping its body, loosening the strings, dropping something on the
strings, and so forth.

WHAT MAKES A
GOOD SOUND OBJECT?
When you are choosing your sound objects, consider the example
of the guitar as a sound-producing body. Although the sounds will
have a great variety of sonic characteristics, because they were
created by the same body, there will be a sense of unity when they
are presented together in your composition (which will help when
you start putting your processed material together).


PERFORMING YOUR MATERIAL

Live vs. Sound Effects CDs
When you explore a sound-making device, experiment with many
different ways of making sounds with it. For example, how many
different sustained sounds can you get from it? Percussive sounds?
Can you change its spectrum in any way? Different performances
of the same method of sound generation (for example, a variety of
different fingernail scrapes on a guitar string) will yield different
sound objects and will help to create variety in your composition.
For this reason, sound effects CDs are not always the way to
get good sound objects. Although you may find some interesting
sounds, there probably will not be a variety of performances of
these sounds.
One added benefit of generating a wide variety of material
from a single source is that it will avoid the collage effect, which
occurs when there is no relation between sound objects: for
example, a baby's cry, a car crash, a train whistle, and a blown
bottle taken from a CD. Each of these sound objects may have
potential, but when their various manipulations are combined in
the compositional stage, they will have little to do with each other.
The aural result will be a hodgepodge of sounds ("here's a weird
sound, here's another weird sound, and here's yet another weird
sound . . .").

Collage effect is perhaps the single biggest problem with student
electroacoustic compositions.

HOW IS THE SOUND PRODUCED?

When we consider what makes an interesting sound, we have to
look at what our culture has given us historically as interesting
sound producers: musical instruments. Hundreds of years have
gone into the development and refinement of musical instruments,
and we should examine them in relation to the sounds they produce.
Musical instruments are divided into families based upon how
they produce their sounds. They can be struck (idiophones, or
mallet instruments, and membranophones, or drums), bowed or
plucked (chordophones, or string instruments), or blown
(aerophones, or wind instruments).


In all cases, something must be excited in order to cause the
air pressure changes (the different methods of excitation were just
given). When you pluck a stringed instrument, it is the vibrating
string that causes the air pressure changes. Different materials will
vibrate in different ways (a nylon string versus a steel string, a
plastic drum head versus an animal skin). Although the string is
vibrating in complex ways, its relatively small physical size can
cause only small variations in air pressure.
For this reason, most instruments have a resonator to amplify
and colour these vibrations. In the case of a guitar, the resonator is
its body, which will vibrate sympathetically with the frequency of
the string. Because the physical area of the guitar body is much
greater than that of the string, the resulting soundwave is louder.
Furthermore, the resonator of the guitar will also capture
some of the vibrations inside it. These soundwaves will bounce
around inside the instrument and either dissipate or exit the
resonator. Certain frequencies (based upon the size of the resonator
in relation to the wavelengths of the initial frequencies) will be
amplified, which we can think of as colouring the sound.
Because resonators must vibrate and reflect sound, their
material composition is integral to the resulting sound. A guitar
made out of wood sounds different from one made out of plastic
or steel.

MAKING AN INSTRUMENT
While you are by no means required to build an instrument,
consider these implications when you are looking for interesting
sound objects. What can you use to cause excitation (tapping,
bowing, rubbing, dropping something into it)? Can you amplify
and colour these vibrations with different resonators? Can you
excite the same source in different ways?

SOUND CHARACTERISTICS
AND PROCESSES
When you listen to your sound object, consider its spectrum. Does
it have an identifiable pitch? If so, it must be harmonic. Does it
sound bell-like (was your sound-producing device metallic)? If so,
it must be inharmonic. Does it sound more noise-like? Does it have
a wide spectrum (are there both low and high frequencies to the
sound)? Does the spectrum change over time?

In considering the different sound characteristics of your
sound object, think about potential processing (this topic will be
covered later).

Reversal will be affected mainly by envelope. If your sound
has a symmetrical envelope (quick attack, quick decay),
reversing it may not create anything new.

Amplitude change (envelope) can be applied more easily to
longer sounds (making them shorter) than the other way
around. Shaping the sound in less dramatic ways (such as
fading in or out) is also more noticeable on longer sounds.

Filtering (removing or altering part of the spectrum) requires
a rich spectrum to begin with. If your sound consists entirely
of low frequencies, there isn't much in the way of filtering or
equalization that you can experiment with.

Time delay (echo) won't have much effect on sustained
sounds (other than chorusing them). Time delay works best
with short percussive sounds.
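As a rough illustration, two of the permitted manipulations (reversal and a simple echo) can be sketched in a few lines of Python with NumPy. The 440 Hz decaying tone and the quarter-second, half-amplitude echo are arbitrary values chosen for the example, not part of the assignment.

```python
import numpy as np

# An illustrative one-second, 44.1 kHz mono sound: a decaying sine tone.
sr = 44100
t = np.arange(sr) / sr
signal = np.sin(2 * np.pi * 440 * t) * np.exp(-3 * t)

# Reversal: play the samples back to front.
reversed_signal = signal[::-1]

# Echo: mix a delayed, attenuated copy back into the sound.
delay_samples = int(0.25 * sr)          # quarter-second delay
echoed = np.concatenate([signal, np.zeros(delay_samples)])
echoed[delay_samples:] += 0.5 * signal  # echo at half amplitude
```

Notice how little is involved: reversal is just reordering samples, and echo is addition of a shifted copy, which is why these effects were practical even with tape.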

SOUNDS TO AVOID
Although your sound can be referential, always consider the
object's potential for processing!
For example, a generally bad sound object would be one
derived from electronic sources, such as talking children's toys,
handheld video games, cellular phones, and so on. These devices,
while referential, are not made to produce interesting, high-quality
sounds; often they use the cheapest speakers possible. When you
begin to manipulate these sounds, their limited potential will
quickly surface.
Most plastic objects create boring sounds; they have little
potential for excitation (a dull thud, maybe), and they are poor
resonators.
Dialogue from movies or television will almost invariably
have background noise, sounds, or music. Hardly a neutral sound
object!
Finally, music, whether it is from a prerecorded CD or a
musical instrument, makes a poor sound object because it is
extremely referential. Musical instruments, however, have great
potential. For example, a single plucked note on the guitar is still
quite neutral, whereas several notes create a melody, which, again,
is too referential to be a neutral sound object.

GETTING THE SOUND INTO THE COMPUTER

There are three possible ways that you can get sounds into the
computer and convert them into audio files.
1. Record them using a microphone. This is perhaps the best
option since it allows you to choose a sound-making device and
experiment with it to create a variety of potential sounds. However,
there are drawbacks as well. Recording sounds is a fine art, and
you may introduce problems that will be detected later on. For
example, a poor recording level will yield a poor signal-to-noise
ratio, which will be evident once you begin processing the sound.
In addition, the quality of the microphone is of paramount
importance. You may already have a microphone or have access to
a microphone to record into your computer. However, if it connects
directly to the computer, it is a low-quality microphone intended
only for low-fidelity voice recording. High-fidelity microphones
must use XLR (three-pronged) connectors, and they cannot be
connected directly to the computer without an audio mixer (or
connecting transformer) in the signal path. A microphone and
mixer are not required for this course; however, if you plan to
continue creating electroacoustic music, they are essential
investments. A good quality microphone, such as an AKG C-1000,
costs about C$300, while a small but good quality audio mixer,
such as a Behringer Eurorack MX602A, costs about C$170.
2. Use the sounds provided on the Sound Examples CD (or a sound
effects CD). You will find a number of audio files on this CD labelled
"Assignment II files." These are high-quality sounds that provide
good source material. You can also find sound effects CDs in record
stores; however, many of them are made for entertainment
purposes and have questionable sounds. Some environment
recordings may appear to have great potential (Sounds of the
Rainforest, for example), but these often have background music
playing. The Fine Arts Room in the Bennett Library at Simon Fraser
University has an excellent collection of sound effects CDs (the
Sound Ideas library) intended for professional film sound use.
3. Find audio files on the Internet. You can find a great diversity
of audio files on the Internet, but their fidelity is a major problem.
Most audio will be compressed in some way to reduce bandwidth
and speed up transmission. Some methods of compression include
downsampling (changing the sampling rate from 44.1 kHz to
22 kHz or less), lowering bit depth (using eight bits, instead of
sixteen), or selective (lossy) compression, such as that employed by
the MP3 standard. In all cases, these sounds may seem usable when
you listen to them over computer speakers, but once you begin
processing them, you will quickly discover their poor quality.
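The two simplest forms of compression mentioned above can be sketched numerically. This is only a demonstration of the information loss, not a real codec; a proper resampler would low-pass filter before discarding samples, and the 1 kHz test tone is an arbitrary choice.

```python
import numpy as np

sr = 44100
t = np.arange(sr) / sr
signal = np.sin(2 * np.pi * 1000 * t)   # a 1 kHz test tone

# Downsampling by decimation: keep every second sample
# (44.1 kHz -> 22.05 kHz), halving the representable bandwidth.
downsampled = signal[::2]

# Bit-depth reduction: quantize to 8-bit resolution instead of 16-bit,
# raising the quantization noise floor. The rounding discards
# information that no later processing can restore.
quantized = np.round(signal * 127) / 127
```

In both cases the discarded detail is gone for good, which is exactly the point made in the margin note that follows.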

Once an audio file has been compressed, there is no way to regain the
lost information!

AMPLIFYING VS. RERECORDING

Any signal can be amplified, which simply amounts to increasing
Any signal can be amplified, which simply amounts to increasing
the signal level. In analogue terms, this means increasing the voltage
equally, and in digital terms it means increasing the numbers used to
represent amplitude (the samples) by a relative amount.
There is a point at which amplification is counterproductive in
both the analogue and the digital domain. You may have had a
poorly recorded cassette tape given to you and found you had to
raise the level on your power amplifier. What did you notice? Not
only was the signal recorded on the tape amplified but so was the
background noise. In this case, the poor signal-to-noise ratio made
its presence felt when you amplified the signal.
Similarly, amplifying a signal digitally will raise the background
noise in the signal. Because the signal-to-noise ratio is much
better in digital media, you can generally amplify the signal more
than with its analogue equivalent before the noise becomes
overbearing; however, there is a point at which the louder
background noise outweighs the benefit of a higher signal level. In
this case, your only option is to rerecord the signal.

If your signal has a maximum amplitude of less than fifty per cent, or
-6 dB, try rerecording your signal.
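This point can be demonstrated numerically. In the sketch below (assuming NumPy, with arbitrary illustrative signal and noise levels), amplification raises both components together, so the signal-to-noise ratio does not improve at all.

```python
import numpy as np

rng = np.random.default_rng(0)
sr = 44100
t = np.arange(sr) / sr
wanted = 0.05 * np.sin(2 * np.pi * 220 * t)   # a quiet recording
noise = 0.001 * rng.standard_normal(sr)       # the background noise floor
recording = wanted + noise

# Amplification multiplies every sample by the same factor, so the
# wanted signal and the noise both rise together ...
gain = 10.0
amplified = recording * gain

# ... and the signal-to-noise ratio is therefore unchanged.
snr_before = np.std(wanted) / np.std(noise)
snr_after = np.std(wanted * gain) / np.std(noise * gain)
```

Only rerecording at a better level, with more wanted signal relative to the noise floor, actually improves the ratio.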

NORMALIZATION
In our example above, the recording's highest amplitude was at
about eighty per cent of the maximum. How much can we amplify
the signal before it distorts? The numbers showed that the highest
sample value was not at the maximum of 32,767. If we increase our
current highest sample so that it is the maximum representable, we
could increase every sample by the same amount. Therefore, what
number do we multiply 26,053 (the current highest sample) by in
order to get 32,767? (Note that we do not simply add 6,714, the
difference between the two numbers, since we want to retain the
same relationship between the numbers.)
Dividing the maximum (32,767) by the current maximum
(26,053) gives us approximately 1.26. If we multiply each sample by
this amount,

then we increase each sample by approximately twenty-six per
cent, so that the entire audio file will be at maximum amplitude
without distortion.
Fortunately, we do not have to do this calculation by hand; we
are using computers, after all. This process is known as
normalization, and it is a standard technique used to raise the
signal level of a recorded file. Normalization is a three-step process:
first, it looks through the entire audio file data to find the highest
absolute sample value; second, it calculates the ratio between that
sample value and the highest value representable by the current
sample bit depth; third, it multiplies each sample by that ratio.
Remember, normalization is simply maximum amplification
without distortion.
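The three steps can be written out directly. This is an illustrative sketch assuming NumPy, using 32,767 (the largest positive 16-bit sample value) as the normalization target; real editors do the same arithmetic internally.

```python
import numpy as np

def normalize(samples: np.ndarray, bit_depth: int = 16) -> np.ndarray:
    # Step 1: find the highest absolute sample value.
    peak = np.max(np.abs(samples))
    # Step 2: ratio of the largest representable value to that peak.
    max_value = 2 ** (bit_depth - 1) - 1   # 32,767 for 16-bit audio
    ratio = max_value / peak
    # Step 3: multiply every sample by that ratio.
    return samples * ratio

# The chapter's example: a loudest sample of 26,053.
samples = np.array([26053.0, -13000.0, 8000.0])
normalized = normalize(samples)   # loudest sample is now at the maximum
```

Because every sample is multiplied by the same ratio, the relationships between samples, and therefore the waveform's shape, are preserved.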

Background noise in a signal will be amplified by the same amount as
the desired signal!

MICROPHONE AND
RECORDING TECHNIQUE
Don't try to record single events.

When recording your sounds through a microphone, record several
versions, or performances, of a sound. Disk space is relatively
cheap nowadays, so allow the recorder to run while you get as
many variations out of your sound source as possible. You can
choose the best versions while you are editing.
While recording, watch the peak level meters to monitor your signal
level.

While it is possible to amplify a low-level audio file, it is
preferable to record using good signal levels. If you watch your
level while recording and you are not getting a good signal, keep
recording, but move your source closer to the microphone. Another
option is to perform your sound louder, although doing so will
change the quality (timbre) of the sound as well as its amplitude.


Similarly, if you peak out (distort) while recording, don't stop
the recording process. Instead, perform your sound more quietly or
further from the microphone.
Microphone technique has to be learned, and it requires
practice. However, there are certain points that you will want to be
aware of in advance.
Close Miking
During the recording process, one of the primary concerns should
be achieving a good signal level. This is especially the case in the
controlled surroundings of an electroacoustic studio (which could
be your home) as opposed to an outdoor environment. Outdoors,
you might get only a single chance to record a sound; lower signal
levels might be the cost of recording a truly unique sound.
The best signal level will be achieved through a combination of
good levels at every stage of the process without undue
amplification at any point. Remember that amplification will also add noise
to a signal. Therefore, if you are using a mixer, do not increase the
level disproportionately at any one point; instead, raise all levels
equally. (Mixers will be discussed more thoroughly in Lab Seven.)
Because microphones create low-level signals (called,
appropriately, mic level signals), it is often not enough to raise all
levels equally. Sometimes you might feel that you should raise the
level on the mixer all the way, but that will introduce extraneous
(and unwanted) noise. Therefore, it is often necessary to place the
microphone in close proximity to the sound source, sometimes
leaving only a few centimetres in between. This practice is called
close miking; it not only allows for higher signal levels but also
makes it possible to record very quiet sounds.
Close miking is not a transparent process; its results will be
noticeable. The reasons for this are quite obvious
when you consider an example. Imagine recording scissors cutting
through some heavy construction paper. Most of us can imagine
that sound because we have experienced making the sound. But
when we are making the sound, the sound source (both the scissors
and the paper) is at a certain distance, perhaps half a metre, from
our ears. Many of the very small, low-amplitude soundwaves will
have lost their energy before they reach our ears, and we will
therefore not perceive them. However, when you place the
microphone within a few centimetres of the sound source, all of
those tiny soundwaves will be recorded. As a result, you will have
a somewhat unnatural soundunnatural, that is, when compared
to our normal experiences with the sound.


The extra sound detail of a close miked sound will (usually) provide
much more sonic interest.

Soft vs. Loud Sounds: Ambience

When you are performing your sound, consider the difference that
will result if you make a loud sound further away from the
microphone instead of making a quieter sound close to the
microphone.
Firstly, striking (or plucking or blowing) an object harder will
make it vibrate more and create a brighter and richer sound, with
a greater number of resulting partials. What type of sound are you
trying to record? Try recording both versions; you might have to
normalize one or both of the sounds. Perhaps the differences
between the two sounds can be used within the exercise.
Secondly, the closer the microphone is to the sound source, the
fewer the number of reflected waveforms that will be recorded:
these reflections might be masked by the original sounds decay. It is
the reflections that create reverberation, which is the ambience of the
room. Close miked sounds, lacking this ambience, are more neutral;
recording a sound further away will also record the room sound.
Consider the diagram below. The distance between the source
and the microphone is not that much different from the distance
travelled by the early reflections. The longer the path, the longer the
delay; the more reflections, the more absorption and, therefore, the
lower the amplitude. Remember that there are many, many
reflections, and only a fraction of them are displayed in the diagram.
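The extra delay and lower amplitude of a reflection follow directly from its longer path. A back-of-the-envelope calculation, using made-up path lengths and assuming sound travels at roughly 343 m/s with amplitude falling off inversely with distance:

```python
SPEED_OF_SOUND = 343.0   # metres per second, approximate

direct_path = 2.0        # metres from source to microphone (illustrative)
reflected_path = 2.6     # metres via one wall reflection (illustrative)

# The reflection arrives later by the extra distance it travels ...
extra_delay = (reflected_path - direct_path) / SPEED_OF_SOUND  # seconds

# ... and quieter, roughly in inverse proportion to its distance.
relative_amplitude = direct_path / reflected_path
```

With these figures the reflection lags the direct sound by under two milliseconds and arrives at over three-quarters of its amplitude, which is why a distant microphone picks up so much room sound.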

Microphone placement that captures room sound. Reflections
(2–4) travel only a short distance further than the direct sound
(1) and will therefore be of almost equal amplitude.
The diagram below shows an approximation of the time
delays and amplitudes represented in the diagram above. The
direct sound is recorded first, followed by the three reflections.


Relative amplitude and time delays of the direct sound and
three reflections, based on the previous diagram.
Notice that the amplitudes of the reflections are almost as high
as the direct sound's amplitude. The result will be perceivable
reverberation, or room sound.
Consider the following diagram, in which the microphone is
much closer to the source. The reflections have a much greater
distance to travel before they strike the microphone; as a result, they
will be significantly delayed and also of a much lower amplitude.

Close miking, which captures little room sound. Reflections
(2–4) travel a much greater distance than the direct sound (1) and
will therefore be of much lower amplitude.
The diagram below is an approximation of the time delays
and amplitudes represented in the second diagram. Again, the
direct sound occurs first, followed by the three reflections.


Relative amplitude and time delays of the direct sound and
three reflections, based on the previous diagram.
Notice that the amplitudes of the reflections are much lower
than that of the direct sound. The result will be little perceivable
reverberation or room sound.

EDITING LONGER
SOURCE RECORDINGS
The experimental performance procedure above should
generate longer source recordings, perhaps even a minute or two long.
Save the result for now so that you can come back to it later on.
Editing this source recording will mean a lot of listening to the
material and choosing which versions are the most interesting.
Listen for timbral variety within the sound (the spectral envelope)
as well as variety between sounds (for variation of material). Listen
for possible internal rhythms and amplitude change (the
amplitude envelope). Listen for any frequency change (the
frequency envelope).
Once you have identified the more interesting material, use the
same techniques that you used in Project One to edit the material.
If you are using an audio editor, select one of these sounds,
and copy it to the clipboard, then paste it into a new file. Save the
new file, giving it an appropriate name. For example, Cutting_1 or
CuttingConstruction_1 for the scissors cutting the construction paper
example. Different versions of the sound should be numbered.
The name of your audio file should ideally describe the sound source.

Naming a recording "sound1" or "newSound" will not help you
determine the origin of the sound, or even what it sounds like,
several weeks from now when you have collected dozens of audio
files.
If you are using ProTools, make a new region and give it a
descriptive name.

FADING IN AND OUT
TO AVOID CLICKS
You may have noticed in Project One that editing can sometimes
introduce clicks into the material. These occur when you edit
continuous material and the resulting edit does not leave the
waveform intact.
For example, consider the following waveform and the editing
selection. (This example is for demonstration only, since such a
small edit would not be noticeable in terms of edited-out material.)

A waveform with a portion selected.


The same waveform, after you delete the material, will look
like this:

The same waveform after editing.


Notice that the formerly continuous waveform now has an
instantaneous change from positive numbers to negative numbers.

Such discontinuities result in broadband noise (broadband refers to
the broad bandwidth of the resulting frequency spectrum). It will
be heard as a click; the greater the discontinuity (or difference
between the two edit points), the broader the click. In fact, these
digital clicks can result in an almost infinite bandwidth!
There are two solutions to this potential problem. The first is
to make your edit points occur at near-identical amplitudes in the
waveform, so that after the edit, the waveform will still be
continuous. This solution may seem extremely difficult, unless
you consider the points where the waveform crosses from positive
to negative or vice versa. These points are the zero crossings; any
edits that fall on these points will result in continuous waveforms.
The diagram below displays a selection that is positioned on the
zero crossings.

A zero-crossing selection. Deleting the selected data will not
cause an audible click.
Many audio editors have a way of limiting all editing to zero
crossings. This feature is often called snap-to-zero; it moves the
edit point to the nearby sample that is closest to zero.
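That snap-to-zero behavior can be sketched in a few lines of Python (a hypothetical stand-in, not any editor's actual implementation):

```python
def snap_to_zero(samples, index, window=5):
    """Return the index near `index` whose sample value is
    closest to zero, searching `window` samples either side."""
    lo = max(0, index - window)
    hi = min(len(samples), index + window + 1)
    return min(range(lo, hi), key=lambda i: abs(samples[i]))

# The proposed edit point (index 4, value -0.8) is moved to
# index 6, whose value (0.05) is nearest to zero.
wave = [0.9, 0.6, 0.1, -0.4, -0.8, -0.3, 0.05, 0.7]
print(snap_to_zero(wave, 4))   # -> 6
```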
Keep in mind that using this technique on longer edits, such
as those in the first project, may still result in a discontinuity. Although
there may be no actual click caused by instantaneous change, the
difference in overall amplitude between the two edits may result in
an audible edit. For this reason, zero crossing editing was not
necessary in Project One.
The other way to avoid clicks caused by editing is to fade in
and fade out the audio immediately preceding or following the
edit. It is most useful when you are creating new audio files from
longer sources. In this case, we are not worried about the source file
but the resulting new file. The click may occur at the very
beginning of the file, particularly if it begins on a sample other than
zero. In such a case, even if you edited the material using the
snap-to-zero feature, the discontinuity between silence (before the
sound) and a high-amplitude sound (as the sound begins) may be
unwanted. In this case, the solution is to select a small portion of
the audio at the beginning of the new file and fade in the audio.
This solution, of course, assumes that your audio editor has the
capability of a fade in/fade out process. Amadeus, for example, has
a dedicated process, while Audition offers fading as a preset in its
Envelope process.
The example below displays an extracted portion from a
longer source recording. Although the first and last samples are
close to zero (the snap-to-zero feature was used to select the data),
there will be a discontinuity, particularly at the end of the audio,
that results from a high amplitude being suddenly cut off.

No fades in or out, resulting in a discontinuity at the beginning
and end of this recording.
The same audio file after a quick fade in at the beginning and
fade out at the end:

Fading in at the beginning and fading out at the end results in
smoother amplitude.
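In sample terms, such a destructive fade simply scales the data in place. A minimal pure-Python sketch, assuming linear ramps (editors may offer other curve shapes):

```python
def fade_in_out(samples, fade_len):
    """Linearly ramp the first `fade_len` samples up from zero and
    the last `fade_len` samples down to zero. For simplicity this
    assumes the clip is at least twice fade_len long."""
    out = list(samples)
    n = len(out)
    for i in range(min(fade_len, n)):
        gain = i / fade_len
        out[i] *= gain              # fade in
        out[n - 1 - i] *= gain      # fade out
    return out

clip = [0.8] * 8
faded = fade_in_out(clip, 4)
print(faded[0], faded[-1])   # both 0.0: no jump from silence
```

Because the first and last samples are now exactly zero, the file can begin and end without a discontinuity against silence.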
Remember that fading in and out in an audio editor is a destructive process: it will permanently alter the data in the file.
ProTools does not require any such fade processing during the
edit stage because it allows for amplitude envelopes to be applied
during playback. This process will be thoroughly discussed in Lab
Six.

TO DO THIS WEEK
Begin choosing your sounds. If you have the ability to record
sound, listen to potential sounds around you. Make sounds and
listen to them. Tap things, scratch things, knock things: can you
make a variety of sounds with a single object? Can you change the
spectrum of the object? The envelope? The rhythm?
If you plan to use the sounds on the accompanying CD, listen
to those that are available. Can you describe their spectra? Could
you draw their envelopes? What types of transformations do they
suggest?
Try surfing the Net for some sounds. If you find some
compressed sounds, download them to your computer and load
them into either ProTools or your audio editor; they will most likely
be converted when you import them. Listen to them closely: can
you hear the noise (lower bit depth), lack of high frequencies
(lower sampling rate), or hard metallic quality (MP3 compression)?
Get some good, high-fidelity sounds into your audio editor or
ProTools.
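If you want to hear what a reduced bit depth does in isolation, you can simulate it. A toy sketch in Python (my own, assuming samples normalized to the range -1 to 1):

```python
def quantize(samples, bits):
    """Round samples in the range -1..1 to a `bits`-deep grid,
    simulating the rounding error of a lower bit depth."""
    levels = 2 ** (bits - 1)
    return [round(s * levels) / levels for s in samples]

sample = [0.3017]
print(quantize(sample, 16)[0])   # nearly unchanged at CD bit depth
print(quantize(sample, 4)[0])    # 0.25: a clearly audible error
```

The rounding error behaves like added noise, which is why heavily requantized material sounds grainy.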
You should also begin processing your sounds in a systematic way.
1) For example, concentrate on editing first. Can you get smaller
portions of your sound? Can you move these portions around
as you did in Project One?

2) Next, try reversing all of your sounds. Which ones are
noticeably different? Which ones show little effect?
Reversal is a simple, yet powerful, technique that is often
forgotten in today's complex digital signal processing.

3) Next, try filtering your sounds. Begin with a low pass filter to
remove the high frequencies. Choose different cut-off
frequencies. Save the different versions. Then try high pass
filtering. Then try peak filtering, with and without a high Q.

4) Next, try the delay processes. Which sounds work well with
such processes?
Remember to set a delay time that is shorter than the length of
the region!

5) Continue in this manner and experiment by using the same
process on different sounds and different parameters within
the same process. For example, filter the same sound with
different cut-off frequencies. Transpose the sound at different
intervals.

6) Try all of the following techniques on as many of your sounds
as possible:
- Editing
- Reversal
- Transposition (pitch change)
- Tempo (speed/time change)
- Filters/EQ
- Echo (delay)

The following processes are not available as default (free) plug-ins
within ProTools. They can sometimes be found on the Internet as
AudioSuite plug-ins for ProTools or as VST or DirectX plug-ins for
your audio editor:
- Reverberation
- Phasing/Flanging
- Doubling/Chorusing
7) Save all of your experiments! You should be compiling dozens
of sounds, maybe even approaching a hundred by the time
you start composing in Week Seven!

8) Remember to give your files descriptive names. If you are
really organized, write down the process and the parameter
settings for each file. Then you will be able to return to these
settings at a later time, to reproduce the result, or to try the
settings on another sound.

For this week, don't be too discriminating; keep everything. You
can sift through your results later.
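If you like to think in code, steps 2 through 5 above can be sketched in pure Python (the function names, the one-pole filter, and all parameters are my own assumptions, standing in for your editor's actual processes):

```python
import math

def reverse(samples):
    """Step 2: play the sound backwards."""
    return samples[::-1]

def one_pole_lowpass(samples, cutoff_hz, sample_rate=44100):
    """Step 3: smooth the signal with a simple one-pole
    low-pass filter at roughly `cutoff_hz`."""
    alpha = 1.0 - math.exp(-2.0 * math.pi * cutoff_hz / sample_rate)
    out, prev = [], 0.0
    for x in samples:
        prev += alpha * (x - prev)
        out.append(prev)
    return out

def echo(samples, delay_samples, mix=0.5):
    """Step 4: mix in a delayed, quieter copy (feedforward delay)."""
    out = list(samples) + [0.0] * delay_samples
    for i, x in enumerate(samples):
        out[i + delay_samples] += mix * x
    return out

# Step 5: the same source through the same process with different
# parameters, each result kept under a descriptive name.
source = [math.sin(2 * math.pi * 440 * n / 44100) for n in range(2000)]
versions = {
    f"Sine440_lowpass_{cutoff}Hz": one_pole_lowpass(source, cutoff)
    for cutoff in (200, 1000, 5000)
}
versions["Sine440_reversed"] = reverse(source)
versions["Sine440_echo_50ms"] = echo(source, delay_samples=2205)
print(sorted(versions))
```

Each entry in the dictionary plays the role of one saved, descriptively named experiment file.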
