
First bachelor thesis

Internal Mixing of Dance Music

Completed with the aim of graduating with a


Bakkalaureat (FH) in Telecommunications and Media
from the St. Pölten University of Applied Sciences
Telecommunications and Media
degree course

under the supervision of

Hannes Raffaseder

Completed by

Sascha Galley
tm061025

St. Pölten, ___________________

Signature: ___________________

Declaration

I declare that the attached research paper is my own, original work undertaken
in partial fulfilment of my degree. I have made no use of sources, materials or
assistance other than those which have been openly and fully acknowledged in
the text. If any part of another person's work has been quoted, this either
appears in inverted commas or (if beyond a few lines) is indented. Any direct
quotation or source of ideas has been identified in the text by author, date, and
page number(s) immediately after such an item, and full details are provided in
a reference list at the end of the text.
I understand that any breach of the fair practice regulations may result in a mark
of zero for this research paper and that it could also involve other repercussions.
I understand also that too great a reliance on the work of others may lead to a
low mark.

_____________________________________
Undersigned & Date


Abstract

This bachelor thesis provides a general overview of mixing dance music, primarily on a computer, with a focus on digital audio workstations and plug-ins. To begin with, it describes the variety of tools available on the market and how to use and set up the studio equipment. Furthermore, the processing of audio files is explained and the target for the final product is set.
The next chapter focuses on the theory of mixing. It introduces the three dimensions of sound and how these are defined and used for the purpose of mixing. Moreover, it describes the most commonly applied filters and effects, such as equalizers, reverbs, and delays. Dynamics and their alteration with tools like compressors and limiters are also covered in this part of the thesis. The next chapter describes the practice and workflow of mixing, from planning a soundstage to the final mix. It provides information on how to set up the digital audio workstation and how to adjust the mixer. Then it continues by explaining how to set the volume and panning position of each track correctly. After that it shows the procedure of editing single instruments and fitting them into the mix. Furthermore, an overview of automation is provided. The last chapter deals with professional mix characteristics, final steps for the mixdown, and the preparation for the mastering process.


Table of Contents

1. PREPARING THE MIX
1.1 STUDIO EQUIPMENT
1.1.1 Digital Audio Workstation (DAW)
1.1.2 Monitoring
1.1.3 Headphones
1.2 PROCESSING AUDIO FILES
1.2.1 Technical Conditions
1.2.2 DC Offset
1.2.3 Normalization
1.3 KNOW YOUR GOALS
2. MIXING THEORY
2.1 THE THREE DIMENSIONS
2.1.1 Layering - Front-to-Back Perspective
2.1.2 Panorama - Horizontal Perspective
2.1.3 Frequency - Vertical Perspective
2.2 FILTERS AND EFFECTS
2.2.1 Equalizer
2.2.2 Reverb
2.2.3 Delay
2.2.4 Other Effects
2.3 DYNAMICS
2.3.1 Compressor
2.3.2 Side-Chaining and Ducking
2.3.3 Limiter
3. MIXING PRACTICE AND WORKFLOW
3.1 CREATING A SOUNDSTAGE
3.2 INITIATE THE WORK SPACE
3.2.1 Arrangement
3.2.2 Sub Groups
3.2.3 Mixer and Signal Routing
3.3 SETTING VOLUMES
3.4 PANNING
3.5 SINGLE INSTRUMENT EDITING
3.5.1 Kick / Bass Drum
3.5.2 Snare
3.5.3 Hi-hats and Cymbals
3.5.4 Bass
3.5.5 Vocals
3.5.6 Synthesizers / Pianos / Guitars
3.5.7 Atmos, Pads & Sound FX
3.6 AUTOMATIONS
4. THE FINAL MIX
4.1 WHERE TO GO
4.2 FINAL STEPS
4.3 OUTLOOK ON MASTERING
5. CONCLUSION
LIST OF FIGURES
LIST OF LITERATURE


1. Preparing the Mix


Before starting to mix it is essential to prepare a solid and well arranged setup.
High quality audio tracks, excellent studio components, and a proper starting
point are crucial for professional mixes and will simplify matters.

Nevertheless, the art of mixing is not only an issue of technical know-how, but
also a very creative and inventive process. Mixing engineers can only reach their
goals with spatial imagination, musically trained ears, and other artistic skills.
Furthermore, ample knowledge of one's software and hardware is needed to
develop an appropriate and individual workflow.

1.1 Studio Equipment


Due to the rapidly growing number of home studios and the widespread availability of personal computers, high-end components have become rather affordable. The large number of available audio devices offers a huge range of possibilities for your personal setup, depending on style, genre, personal preferences, and financial means. However, a well-planned acquisition of software and hardware is indispensable for a reliable working environment, especially when it comes to mixing.

1.1.1 Digital Audio Workstation (DAW)

To hear real-time changes in your mix accurately you need a system with low latency. Only fast computers with the right hardware configuration offer a failure-free workflow. Hence you have to know your system like the back of your
hand, especially the characteristics of your soundcard and the functioning of your
software. There are endless solutions for good setups and it is your decision on

how much you will invest in it.1 Keep in mind that electronic dance music owes
a big part of its personality to the creative use of low-tech, home studio gear.
That dirty sound is part of its heritage.2

Besides a proper DAW you also need plug-ins or external devices to deliver professional mix-downs. Most common audio programs come with an interface called Virtual Studio Technology (VST), invented by Steinberg, which communicates with other software. There are a lot of plug-ins available on the market, either as VST instruments or as VST effects.3
Since these plug-ins are certainly sufficient in most cases, you don't have to worry about expensive and bulky external audio hardware.

1.1.2 Monitoring

To get a precise image of the current work progress it is very important to hear accurately what you are doing. All hi-fi speakers deliberately employ frequency-specific boosts and cuts throughout the sonic range to make the sound produced by them appear full and rounded. As a result, if you rely entirely on these while mixing, you'll produce mixes that will not translate well on any other hi-fi system.4 Thus it is far better to use flat and neutral loudspeakers, so-called studio monitors, with an excellent frequency response.

When you have chosen the right monitors for your requirements you have to
consider some setup guidelines, because inaccurate positioning can affect the

1 Cf. Hawkins (2004), p.135.


2 See Hawkins (2004), p. 135.
3 Cf. Wikipedia (2008), http://en.wikipedia.org/wiki/Virtual_Studio_Technology,
23rd July 2008.
4 See Snowman (2004), p. 323.

perception of frequencies and, moreover, the stereophonic image. First of all, the distance between your monitors should be neither too short nor too long for a clear spatial definition. A rule of thumb is that the speakers should be as far apart as the distance from the listening position.5 Furthermore, they have to be angled properly towards your mixing position. Ideally you get an equilateral triangle, meaning that both monitors need to be set at an angle of about 60°, as shown in figure 1. Some mixing engineers, though, prefer to set the intersection point of their speakers about three feet behind the mixing position to dispose of the mostly hyped sound of modern speakers.6

Figure 1: Equilateral triangle of monitors (Source: Owsinski (2006), p.73.)

1.1.3 Headphones

Mixing with headphones often sounds better and more transparent than with regular speakers. This is due to the fact that the signal reaches the ear directly, without any reflections or reverberation. Instruments are thus perceived as louder and clearer than they actually are, which makes it impossible to set

5 See Owsinski (2006), p.72.


6 Cf. Owsinski (2006), p.72ff.

volumes correctly. Moreover, there are still no headphones available with a linear frequency response. While mid-range notes are perceived well, deep bass is neither heard nor felt on headphones, because the body does not experience any vibrations in the air.

On the other hand, headphones can turn out to be a very useful tool, as signals are perceived more intensely and clearly; headphones can therefore be used as a magnifying glass. Unwanted noises and clicks can be located and eliminated much more easily. But since the overall picture is rather distorted, you should switch back to your monitors immediately afterwards.7

1.2 Processing Audio Files


A good mix starts with good material. Recording errors, poor microphones, generally lousy quality, or badly arranged MIDI programming will lead to a miserable mix. It is vital at this stage that you don't settle for anything less than the very best; it may be an overused analogy, but you wouldn't expect to make a great-tasting cake if some of the ingredients were out of date.8 So it is necessary to listen to each track once more to correct errors early enough. Unsuitable instruments can be rejected or replaced with others that are more suitable.9

7 Cf. Eisner (2006), p.57f.


8 See Snowman (2004), p.309.
9 Cf. Snowman (2004), p.309.

1.2.1 Technical Conditions

To archive the current state of production all MIDI files are exported to WAV files
with a minimum depth of 24-bit to obtain high accuracy. Yet it is far better to
use 32-bit floating-point audio tracks to increase dynamics and to prevent
clipping and rounding errors. While you may not recognize a big difference with a single file, it becomes impossible to obtain a transparent mix in more comprehensive projects. Furthermore, the co-operation between data, software
and hardware is facilitated since most popular digital audio workstations and
CPUs work internally with 32-bit.10

1.2.2 DC Offset

The DC offset can be imagined visually as a shift from zero within the waveform
and can occur in all recorded audio signals. It can be heard as a muddy effect in
the bass range, as the subwoofer uses a lot of unnecessary energy for those
events, so the whole mix will sound scruffy. Moreover, zero crossings are harder to find and it becomes impossible to accomplish clean cuts. Every good DAW has a built-in feature to remove the DC offset quickly, avoiding these problems from the beginning.11
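Conceptually, removing a DC offset amounts to subtracting the waveform's mean value. A minimal sketch in Python (real DAWs typically use a gentle high-pass filter instead, so this is only an illustration):

```python
import numpy as np

def remove_dc_offset(samples: np.ndarray) -> np.ndarray:
    """Shift the waveform so that it is centred on zero again."""
    return samples - np.mean(samples)

# One cycle of a sine wave shifted upwards by a constant DC component:
signal = np.sin(np.linspace(0.0, 2.0 * np.pi, 1000)) + 0.25
centered = remove_dc_offset(signal)
print(abs(float(np.mean(centered))) < 1e-12)  # True: the offset is gone
```

Subtracting the mean restores the symmetry of the waveform around zero, which is exactly what makes clean cuts at zero crossings possible again.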

1.2.3 Normalization

Normalization is the amplification of a signal so that the highest occurring amplitude in the signal is lifted to 0 dBFS (decibels relative to full scale, the maximum available level in digital workstations); thus the entire signal, and with it any undesirable noise, gets louder. Normalization does not reduce the dynamic

10 Cf. Tischmeyer (2006), p.37ff.


11 Cf. Tischmeyer (2006), p.289ff.

but leaves little or no headroom, and furthermore it can lead to inter-sample peaks. Of course, it always depends on the material you are working with, but in most cases normalization does not make mixing easier.12
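Peak normalization can be sketched in a few lines; assume floating-point samples in the range -1.0 to 1.0, where 1.0 corresponds to 0 dBFS (an illustrative sketch, not any particular DAW's implementation):

```python
import numpy as np

def normalize_peak(samples: np.ndarray, target_dbfs: float = 0.0) -> np.ndarray:
    """Scale the signal so its highest peak reaches the target dBFS level.
    Note: any noise in the signal is amplified by the same factor."""
    peak = float(np.max(np.abs(samples)))
    if peak == 0.0:
        return samples  # silence: nothing to normalize
    gain = 10.0 ** (target_dbfs / 20.0) / peak
    return samples * gain

quiet = np.array([0.1, -0.25, 0.2])
loud = normalize_peak(quiet)              # peak lifted from 0.25 to 1.0
print(float(np.max(np.abs(loud))))        # 1.0
```

Because the whole signal is multiplied by one gain factor, the dynamic range is untouched; only the headroom disappears.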

1.3 Know Your Goals


To begin with, imagine how the final mix should sound and think about how the instruments can be positioned. If it is not your own song, you have to get familiar with it. Try to figure out the direction of the song while finding the most important constituent of the track so that you can highlight it.13

Your goal should be a clear, warm and powerful mix in which every element can be heard distinctly. Therefore you need to find sufficient space for every event and instrument, except for pads and ambience elements, which only create background warmth. When there isn't enough space for a certain element, think about whether you really need it or whether it can be dropped. Keep in mind that sometimes less is more!14

Above all, dance music is ultimately about groove, vibe and feel, and without these fundamental elements it isn't dance music. A track may be beautifully programmed and arranged, but if it has no feel then the whole endeavour is pointless.15 To find out how a final mix should sound, it is further advisable to train your hearing by analyzing titles of similar genres. Look for details in

12 Cf. Hometracked (2008),


http://www.hometracked.com/2008/04/20/10-myths-about-normalization, 15th
August 2008.
13 Cf. Owsinski (2006), p.7f.
14 Cf. Tischmeyer (2006), p.26.
15 See Snowman (2004), p.310.

commercial mixes by listening to them closely and learn how professional mixers
did their jobs.

2. Mixing Theory
Now that you have the sound of the final mix in mind, you need to employ your spatial sense to imagine where and how you can arrange the instruments, because, just as with our eyes, we experience the surrounding environment with our ears in three dimensions. Furthermore, you will learn how to alter the sound and dynamics of audio signals.

2.1 The Three Dimensions


The most important aspect of mixing is the arrangement of audio signals in the given spatial sonic range. To do so you have to think in three dimensions: tall, deep, and wide. A well-made mix must include all frequencies from low to high in a balanced form (tall, or vertical), and, furthermore, it requires enough depth (deep, or front-to-back) and a widely spread stereo dimension (wide, or horizontal) to sound complete and clear.16 This will be described in more detail in the following three chapters.

But to obtain a fully balanced track you also need to take care of a proper arrangement, which can avoid problems right from the start. If instruments with a similar frequency and volume are played at the same time, the human ear can't tell them apart. Instead of trying desperately to find space for both instruments in the three dimensions, it is often better to just mute one of them.17

16 Cf. Owsinski (2006), p.8f.


17 Cf. Owsinski (2006), p.11f.

2.1.1 Layering - Front-to-Back Perspective

The distance of a sound source is generally conveyed through the intensity of air pressure our ears perceive, because the inverse square law says that sound pressure decreases proportionally to the square of the distance from the source.18 So the further the sound has to travel, the quieter it gets; thus the louder an instrument is, the more in your face it will appear to be.19 This effect is essential in dance music for the drums and bass to impart a rhythm we can dance to.

To avoid a fight for attention at the front of the mix you should not use equal volumes for all instruments. To create the appearance of three dimensions you need to put some instruments further back. Obviously the easiest method to create the impression of depth is to reduce gain, but another characteristic of distance is the amount of high frequencies we perceive. Since those frequencies have a shorter wavelength, they are absorbed by the air more readily than their lower companions and lose more energy over distance. By reducing the higher frequencies with an equalizer we can reproduce this effect to obtain a more realistic distant sound.

Moreover, every sound is reflected by walls, which causes echoes and reverberation. The further away a sound source is, the more often the sound waves will be reflected by obstacles crossing their way. Hence, while the amount of reverberation increases, the stereo width is reduced. By applying this natural effect to the desired instruments, they will be pushed to the rear of the mix.20 This procedure is described in the chapter Filters and Effects.

18 See Snowman (2004), p.313.


19 See Snowman (2004), p.313.
20 Cf. Snowman (2004), p.312ff.

2.1.2 Panorama - Horizontal Perspective

One of the most overlooked or taken-for-granted elements in mixing is panorama, or the placement of a sound element in the sound field.21 Panning is not only useful to make a track seem more spacious, but can also create more overall action through movement and spreading.22
The placement of the signal between the left and right border is done with the panorama potentiometer (pan pot), which is included in almost any audio software.

In principle, stereophony only simulates spatial sound. We can hear a voice centrally although we only have a left and a right speaker, so it depends on how the brain processes the information it receives from the ears. If the signal arrives simultaneously and at the same volume at both ears, we sense the sound as coming from the middle. But if we hear a signal louder and earlier on one side of our head, we conclude that the sound source is not central. Stereophony takes advantage of this effect and uses different volumes on each speaker to control the position of a signal.23
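This volume-difference principle is what a pan pot implements. One common implementation is a constant-power pan law; DAWs differ in the exact law they use, so the following is only a sketch:

```python
import math

def pan_gains(pan: float) -> tuple[float, float]:
    """Constant-power pan law. pan runs from -1.0 (hard left) to +1.0
    (hard right); returns the (left, right) gains for a mono signal."""
    angle = (pan + 1.0) * math.pi / 4.0   # map -1..+1 onto 0..pi/2
    return math.cos(angle), math.sin(angle)

left, right = pan_gains(0.0)   # centred: both speakers at about -3 dB
print(round(left, 4), round(right, 4))   # 0.7071 0.7071
```

The squares of the two gains always sum to one, so the perceived overall loudness stays roughly constant as the signal moves across the stereo field.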

Nowadays nearly every piece of sound-making software and hardware produces a stereo signal. But if you only use stereo files, you will not be able to make a transparent mix. While an instrument played alone may sound fantastic due to its stereo behaviour, it will occupy space needed by another one. It is impossible to solve this problem by only using the pan pot or trying to fix it with unnecessary EQ, particularly in dance music, where many different elements are played at the same time. Furthermore, comb-filtering effects, phase problems, or other mono

21 See Owsinski (2006), p.19.


22 Cf. Owsinski (2006), p.19.
23 Cf. Eisner (2006), p.33.

compatibility problems can appear when using stereo files or too many stereo effects. Therefore it is always better to use mono files throughout the mix.24

2.1.3 Frequency - Vertical Perspective

The human ear can only perceive a certain range of frequencies, and thus the mixing engineer has to arrange the instruments within these borders. With an equalizer it is possible to assign an instrument to its place in the vertical layer. The audible range can be divided into the following frequency bands:

Sub-Bass (0-50 Hz): The sub-bass, which is the lowest part of kick and bass, is mostly felt subconsciously. To avoid a reduction of volume and to prevent a club's subwoofers from being damaged, it is generally advisable to cut off these low frequencies.

Bass (50-250 Hz): As the name suggests this range is used for
instruments that primarily contain bass. EQ cuts (or boosts) around
this area can add definition and presence to the bass and kick
drum.25

Mid-Range (200-800 Hz): If too many instruments are represented here, the mix will sound muddy and unclear.

True Mid-Range (800-5000 Hz): Because this frequency range corresponds to the human voice, it is perceived as the loudest. Therefore you have to pay close attention when boosting instruments at these frequencies.

24 Cf. Snowman (2004), p.316.


25 See Snowman (2004), p.311.

High Range (5000-8000 Hz): In this range the body of hi-hats and
cymbals is located. To get a brighter sound you can try to boost dull
instruments in this area.

Hi-High Range (8000-20000 Hz): This area is occupied by the higher frequencies of hi-hats and cymbals. While some boosts at around 12 kHz may produce music with a higher fidelity, too much gain will cause unwanted noise and hiss.26

The perception of loudness does not depend on acoustic intensity alone but is also controlled by frequency. Our ears naturally perceive various frequencies in different ways. Due to the complex anatomy of the ear canal we can hear some frequencies better than others. The human voice, the most essential sound of all, resides between two and four kHz, and hence our sense of hearing is at its best in this area. In contrast, the sensitivity decreases below 500 Hz and above 10 kHz.27

This was established experimentally, and the resulting curves became known as the Fletcher-Munson equal-loudness contours. For a mixing engineer it is very important to bear these curves in mind, as the perceived amount of bass depends heavily on the volume, as you can see in figure 2. Try to make your mixes sound great at loud volumes and passable at lower ones.28 That is, of course, assuming that the clubs are your market; they don't play music at low or medium levels.29

26 Cf. Snowman (2004), p.311f.


27 Cf. Raffaseder (2002), p.90.
28 Cf. Snowman (2004), p.310f.
29 See Snowman (2004), p.310f.

Figure 2: Fletcher-Munson curve (Source: Snowman (2004), p.311.)

2.2 Filters and Effects


Filters and effects are algorithms that alter signals to make them more interesting or to fit them into their surroundings. While filters are usually used to exclude certain parts of the signal, effects add something to it. Both are available as software and hardware. Effects are further subdivided into real-time and offline effects. The latter alter the audio file destructively, meaning that they change the file directly. Real-time effects are applied as plug-ins by internal routing of the signal path. Thus the sound can be changed on the fly without the risk of losing the original file.30

2.2.1 Equalizer

The equalizer is the most important filter and is used as a frequency-specific volume control. It lets you increase or decrease the gain of specific frequency

30 Cf. Raffaseder (2002), p.194f.



ranges to make space for other instruments. While an instrument needs all its frequencies when played solo, in a mix we will still perceive it as a whole even if only the main frequencies of its timbre are left, because our hearing assumes that the missing ones are merely masked behind other instruments.31

The simplest type is a shelving EQ, which only lets you cut or boost above or below a certain frequency. While a low shelving EQ passes low frequencies and diminishes everything above its cut-off frequency, a high shelving EQ works the other way round. These two filters are commonly known as bass and treble, which quite often appeared as knobs on older radios or hi-fi systems. On a typical mixer you will also find a third knob to adjust the mid-range. This filter lets you cut or boost all frequencies within a specific range and is called a band-pass filter.

With a parametric EQ you can also alter the centre frequency and the width of the filter response (Q). A high Q value creates a narrow filter that handles small frequency ranges, while a low Q value produces a wider bandwidth and smoother crossovers.32
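The relationship between Q and bandwidth is commonly defined as Q = centre frequency / bandwidth, which makes the trade-off easy to quantify:

```python
def bandwidth_hz(center_hz: float, q: float) -> float:
    """Bandwidth of a peaking filter from its centre frequency and Q:
    Q = f_center / bandwidth  =>  bandwidth = f_center / Q."""
    return center_hz / q

# The same 1 kHz band with a surgical versus a musical setting:
print(bandwidth_hz(1000.0, 10.0))   # 100.0 Hz wide (narrow, surgical cut)
print(bandwidth_hz(1000.0, 0.7))    # ~1428.6 Hz wide (broad, gentle shaping)
```

A high Q therefore affects only a thin slice of the spectrum, which is why narrow settings are preferred for removing resonances and wide settings for musical tone shaping.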

In nature, the frequencies of sounds are attenuated by walls and other objects. Consequently, while some boosts may be required for creative reasons, you should mostly look towards cutting to prevent the mix from sounding too artificial. Keep in mind that you can effectively boost some frequencies of a

31 Cf. Snowman (2004), p.319.


32 Cf. Sound on Sound (2008),
http://www.soundonsound.com/sos/1997_articles/feb97/allabouteq.html,
28th August 2008.

sound by cutting others, as the volume relationship between them will change.
This will produce a mix that has clarity and detail.33

2.2.2 Reverb

As sound waves travel through the air they are reflected by every obstacle in their way. Depending on the surface, the energy of the sound decreases to a greater or lesser extent while its frequency content is modified. After the direct sound reaches our ears, our brain therefore registers several distinct echoes, from which it can recognize the surroundings. Because we are used to these natural reflections, we also expect them to appear in everything we hear. But most digital samples and synthesizers will not give you the feeling of a real-life environment, because their sound is mostly dry and without natural reverberation.34

Usually the reverb effect unit is connected to the mixer through an aux send and is routed post-fader to control the amount of reverberation with the channel's volume fader. For this reason the ratio of the reverb effect is usually set to 100%.35

On most reverb effect units or plug-ins you can alter the following settings:

Ratio (or Mix): This setting lets you adjust the amount of direct sound
and reverberation. When it is set to 0% there will only be direct
sound, while at 100% only reverberation is let through.

33 See Snowman (2004), p.337.


34 Cf. Snowman (2004), p.109f.
35 Cf. Tischmeyer (2006), p.169.


Pre-delay Time: After a sound occurs, the time separation between the direct sound and the first reflection to reach your ears is referred to as the pre-delay.36 When applying reverberation to important instruments that should stick out, it is important to use a longer pre-delay to avoid moving them backwards in the mix due to the overlap of reflections.

Diffusion: To control the placement of a signal you also have to define the stereo width of the early reflections. A signal further away will tend to have more reverberation and less stereo width, and vice versa. To obtain a correct stereo image it is vital to bear this in mind.

Decay Time: While the decay of reflections lasts longer in bigger buildings, the reverberation dies away very fast in smaller ones. To avoid overlapping reverbs you have to pay attention to trailing reflections from prior notes; these could cause a continuous build-up of feedback and, furthermore, desultory and washed-out frequencies.

High-Frequency and Low-Frequency Damping: Another vital thing to remember is the equalization of frequencies to get the impression of distance.37 The further reflections have to travel, the less high-frequency content they will have, since the surrounding air will absorb them.38
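The ratio (mix) parameter from the list above is, in essence, a crossfade between the dry and the reverberated signal. A minimal linear sketch (the sample values are made up for illustration):

```python
import numpy as np

def mix_wet_dry(dry: np.ndarray, wet: np.ndarray, ratio: float) -> np.ndarray:
    """ratio = 0.0 -> direct sound only; ratio = 1.0 -> reverberation only,
    matching an aux-send setup where the reverb itself runs 100% wet."""
    return (1.0 - ratio) * dry + ratio * wet

dry = np.array([1.0, 0.5])
wet = np.array([0.2, 0.4])
print(mix_wet_dry(dry, wet, 0.0))   # dry only
print(mix_wet_dry(dry, wet, 1.0))   # wet only
```

In the aux-send setup described earlier, this crossfade is effectively performed by the mixer itself: the channel fader carries the dry signal and the aux return carries the fully wet one.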

The following picture shows a typical reverb effect plug-in.

36 See Snowman (2004), p.110f.


37 Cf. Snowman (2004), p.110f.
38 See Snowman (2004), p.110f.


Figure 3: Reverb effect plug-in (Waves RVerb) (Source: author's own.)

Every instrument should have its own ambient environment created by reverbs and other layering effects. To obtain sonic layering, these environments should not conflict with each other. By returning the reverb in mono and panning it anywhere but hard left or right, you can avoid clashing between those environments. Furthermore, a long reverb should be brighter than a shorter one. To weld the ambient environments together you may apply just a little of the longest reverb to all main components.39

2.2.3 Delay

In nature we experience echoes on mountains, where an apparent recurrence of sound occurs. Reflections that arrive more than 50 ms after the direct sound are heard
39 Cf. Owsinski (2006), p.41f.


as a delay.40 A digital delay line can emulate this effect for all incoming audio signals. On these effect units you can set the delay time either in milliseconds or in note values. To produce several echoes in a row, the unit feeds some of the signal back to itself. A feedback knob allows you to determine the amount of signal that is fed back, as shown in figure 4.41

Figure 4: Delay effect plug-in of Steinberg Cubase (Source: author's own.)

If you have to set the delay time manually, because the effect unit cannot derive it from the tempo, you need a little mathematics to get the duration of one quarter note: 60,000 (ms in a minute) / bpm (beats per minute). You need to set the time accurately, because otherwise the delay will stick out.42
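The 60,000 / bpm formula extends naturally to other note values; a small helper (the note names and the dotted-eighth entry are my own additions for illustration):

```python
def delay_times_ms(bpm: float) -> dict[str, float]:
    """Common note-value delay times in milliseconds for a given tempo.
    One quarter note lasts 60,000 ms-per-minute divided by the bpm."""
    quarter = 60_000.0 / bpm
    return {
        "1/4": quarter,
        "1/8": quarter / 2.0,
        "1/16": quarter / 4.0,
        "1/8 dotted": quarter * 1.5,
    }

times = delay_times_ms(128.0)   # a typical dance music tempo
print(times["1/4"])             # 468.75 ms per quarter note at 128 bpm
```

Shorter note values are simple divisions of the quarter-note time, which is why a tempo-synced delay always lands exactly on the grid.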

To get a realistic and nice delay effect the unit needs to add ambience to the
signal and, furthermore, the sound must be duller. To get this effect done
manually you need to add a reverb after the delay and equalize the signal before

40 Cf. Raffaseder (2002), p.218f.


41 Cf. Snowman (2004), p.113.
42 Cf. Owsinski (2006), p.44f.

it goes back as feedback. If the unit or plug-in doesn't have a damping parameter, you will need to turn off the feedback completely and send the returning signal back to the delay unit again using an aux send. On the channel you are using for the aux return you can now equalize the signal and take away the high frequencies. As a result, not only the whole delay is duller, but so is every repeat.43

2.2.4 Other Effects

Besides reverb and delay, which are the two most important effects, there are many other effects available. The most common and best known are:

Chorus: By duplicating a signal, adding a little delay, and altering the pitch slightly, it seems as if two instruments were playing together. Though most chorus effects now come in stereo, you can fatten up a mono chorus by panning the original, untreated sound to one side and the modulated delay to the other. The result is a moving, wide sound source that seems to hang between the speakers.44

Phaser and Flanger: These two effects work similarly to the chorus and differ only slightly from each other. Both use an LFO, modulating either the phase shifting of the phaser or the time delay of the flanger. This creates a series of phase cancellations, since the original and delayed signals are out of phase with one another.45 This will

43 Cf. Sound on Sound (2008),


http://www.soundonsound.com/sos/feb98/articles/processors.html,
10th September 2008.
44 See Sound on Sound (2008),
http://www.soundonsound.com/sos/1997_articles/jul97/multifx1.html, 15th September 2008.
45 See Snowman (2004), p.112f.

result in a hollow, strange sound and is often applied to various dance music elements.46

Distortion: Normally distortion is not desirable, but you can use this effect to get a dirtier, uglier, and more interesting sound. In most cases it is used to make electric guitars appear fatter and more aggressive.47

Exciter: This effect is applied to get [...] a brighter, airier sound without the stridency that can sometimes occur by simply boosting the treble. It is often accomplished with subtle amounts of high-frequency distortion, and sometimes by playing around with phase shifting.48 Exciters add presence and clarity to instruments and are hence very useful for recordings lacking high frequencies.49

Enhancer: When a signal crosses a certain threshold, the enhancer dynamically emphasizes its high frequencies. While an equalizer also boosts the unwanted noise, the enhancer changes the signal towards a more brilliant and transparent sound.50

The usage of these effects and how to adjust their settings depends on the material and your purpose. Experiment freely to find weird sounds that possibly have not been discovered yet. But beware of using effects too heavily, because this may cause a very unnatural and muddy overall sound.

46 Cf. Snowman (2004), p.112f.


47 Cf. Wikipedia (2008), http://en.wikipedia.org/wiki/Distortion, 13th August 2008.
48 See Zölzer (2002), p.128ff.
49 Cf. Zölzer (2002), p.128ff.
50 Cf. Eisner (2006), p.158.

2.3 Dynamics
The human perception of loudness depends on the average level of a signal. Sound events with great dynamics possess large differences between minimum and maximum levels and are thus perceived as quieter overall. Dynamics are an important factor in the sound design and composition of classical music, in contrast to dance music, where especially kick and bass are compressed heavily to obtain as much pressure as possible. Furthermore, in pop music vocals are generally very restricted in their dynamics. In the mastering process the final songs are compressed heavily to compete with other titles in all kinds of media.51

2.3.1

Compressor

Recorded instruments, and especially voices with large dynamics, are a challenging task to fit into an appropriate mix, because their volumes are difficult to adjust. If the fader is set to suit the loudest parts, the quieter ones will vanish into thin air. In the opposite case the louder sections would be too dominant to fit well. To prevent this, compressors are used, whose task is to reduce the dynamics and prepare a signal so that it can be amplified without the risk of distortion. Thus quieter passages are lifted and the song gets generally louder and more powerful.52

To achieve this you set a threshold in dB on the compressor, which determines when the compressor begins to work. Once a signal exceeds this threshold, it is attenuated by a certain factor, named the ratio. The signal is now restricted in its dynamics and can be boosted with the gain or output controls. Two other important parameters of every compressor are the attack and release times. These determine how quickly the compressor steps in after the threshold is crossed, and how much time has to pass until it stops working once the amplitude falls below that value. In addition, some compressors offer a choice between soft and hard knee, i.e. whether a soft or a hard transition into compression is used.53

51 Cf. Raffaseder (2002), p.196f.
52 Cf. Snowman (2004), p.93.
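The threshold and ratio arithmetic described above can be sketched in a few lines of Python. This is a simplified static gain computation only; attack, release, and knee smoothing are omitted, and it is not the algorithm of any particular plug-in:

```python
def compress_db(level_db, threshold_db=-20.0, ratio=4.0, makeup_db=0.0):
    """Static compressor curve: input level above the threshold is
    reduced so that `ratio` dB of input yield only 1 dB of output."""
    if level_db > threshold_db:
        over = level_db - threshold_db          # excess above the threshold
        level_db = threshold_db + over / ratio  # attenuate only the excess
    return level_db + makeup_db                 # make-up gain restores loudness

# A -8 dB peak against a -20 dB threshold at 4:1 comes out at -17 dB:
print(compress_db(-8.0))  # -17.0
```

With a 4:1 ratio, a signal 12 dB above the threshold ends up only 3 dB above it; levels below the threshold pass unchanged.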
Figure 5 shows the layout of a typical compressor plug-in.

Figure 5: Waves RComp compressor plug-in (Source: author's own.)

A special form is the multiband compressor, which splits the spectrum of the signal into three to ten bands. This allows you to compress the individual bands harder without mutual influence on the others. It is advisable to use this tool sparingly when mixing, so that possible corrections can still be made during the mastering process.54

53 Cf. Raffaseder (2002), p.196f.

One very important thing to bear in mind when compressing is that a fast attack restricts the transient, and so high frequencies are curbed. This makes the instrument appear further back in the mix than you probably want it to. So either use a compressor with a longer attack time, or apply a multiband compressor that only affects the lower frequencies.55

2.3.2 Side-Chaining and Ducking

Another very useful and frequently built-in compressor technique can make your mix sound much better, a lot punchier, and even more sophisticated. The procedure is based on combined signal routing: the envelope of another track's sound is used to control the compressor of the current signal, which is why this method is called side-chaining.

When the current signal is turned down because the other one triggers the threshold of its compressor, we speak of ducking. This effect can be heard when a DJ speaks into his microphone and the currently playing music is hushed. For the purpose of mixing, we can use the same effect when two signals occupy the same frequency range but one should be more present. You can use ducking when the lead sound and a human voice play together, to let the vocals pull through the mix. The method can also be applied to bass lines to make room for the kick, leading to a very pumping bass drum.56
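The routing can be sketched as follows with a hypothetical helper that attenuates a bass signal whenever the kick's level exceeds a threshold. Real side-chain compressors smooth the gain change with attack and release times, which is deliberately omitted here:

```python
def duck(main, sidechain, threshold=0.5, reduction=0.25):
    """Sidechain ducking sketch: attenuate `main` while `sidechain`
    is loud. `reduction` is the linear gain applied during ducking."""
    out = []
    for m, s in zip(main, sidechain):
        gain = reduction if abs(s) > threshold else 1.0
        out.append(m * gain)
    return out

bass = [0.8, 0.8, 0.8, 0.8]
kick = [0.9, 0.9, 0.1, 0.0]   # kick hit on the first two samples
print(duck(bass, kick))        # [0.2, 0.2, 0.8, 0.8]
```

The bass is pushed down only while the kick is sounding, which produces the pumping effect described in the text.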

54 Cf. Tischmeyer (2006), p.157f.
55 Cf. Snowman (2004), p.314.
56 Cf. Snowman (2004), p.97.

2.3.3 Limiter

A limiter is best described as a compressor with an infinite ratio. Because of its fast response and heavy modification of the dynamics, it should be used quite cautiously. A limiter is applied when recording instruments or vocals to prevent transient signals from crossing the user-defined threshold. In the mastering process it also prevents clipping errors and can boost the overall volume of the final mix by 3-6 dB.57
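In the infinite-ratio view above, a limiter simply never lets a sample exceed the threshold. A brickwall sketch (real limiters add look-ahead and release smoothing, omitted here):

```python
def limit(samples, threshold=0.9):
    """Brickwall limiter sketch: clamp every sample to +/- threshold."""
    return [max(-threshold, min(threshold, s)) for s in samples]

print(limit([0.5, 1.2, -1.5, 0.8], threshold=0.9))  # [0.5, 0.9, -0.9, 0.8]
```

Samples below the threshold pass untouched; everything above is flattened to the ceiling, which is why heavy limiting audibly squashes transients.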

57 Cf. Snowman (2004), p.105f.

3. Mixing Practice and Workflow


Now that you know the theory you can start your mixing software of choice. But
before you touch a single fader or knob you should think about the final product
once again. You really have to know what you want and where to place the
instruments in the three dimensions.

3.1 Creating a Soundstage


It is always advisable to start with a draft of a soundstage on which you position the instruments from left to right and front to back, as shown in figure 6. Your goal is to find a unique space in the mix for every instrument, keeping the three dimensions in mind. In the mixing process this is realized with panorama, volume, equalization, and reverberation. It is vital that every sound is heard well and fits into the mix perfectly, so this draft is a suitable concept to start with. Moreover, it will help you keep an overview of the positioning and arrangement of the various instruments.58

58 Cf. Snowman (2004), p.312.

Figure 6: Arrangement of instruments (Source: Snowman (2004), p.313.)

3.2 Initiate the Work Space


If you want to mix your own songs, make sure to separate production from mixing. A good starting point is a completely new project with dry audio tracks. This means that no automation, send, or insert effects are implemented yet, except for those that shape the sound in a creative way. Finally, the pan and volume controls have to be set to their defaults.59

59 Cf. Tischmeyer (2006), p.97f.

3.2.1 Arrangement

Be sure to arrange similar audio files next to each other. I prefer to place drums and percussion at the top of the window, followed by basses, synths, and vocals. Those are the main components of every dance track and hence require enough space in the sonic range to unfold into powerful and transparent mixes. Pads, effects, and other harmonic instruments are positioned beneath. A typical arrangement is shown in figure 7.

Figure 7: Arrangement of tracks and sub groups (Source: author's own.)

Furthermore, most popular digital audio workstations also let you define colours for audio tracks and sub-groups. Ultimately it is your decision how you configure your project, as long as you keep track of it.

3.2.2 Sub Groups

Finally, divide the audio files into sub-groups to facilitate quick muting or common editing of related instruments. Sub-groups also save valuable resources when you put a compressor and equalizer into their inserts: a compressor can glue pieces together to build a consistent sound, while a common equalizer is great for quickly freeing up the required space in the frequency spectrum. Panning and automation, however, are not advisable on groups, because this can lead to a lack of clarity.60

3.2.3 Mixer and Signal Routing

Instead of starting to mix aimlessly, you should initialize your mixer with all necessary presets. Route all audio tracks into their dedicated sub-groups and arrange them accurately for a proper overview. If you use the same effects for a certain type of audio track, you can add them now to enhance your workflow. Moreover, some sequencers also offer the option to save mixer presets as templates and reuse them in every similar project. Although early preparation will facilitate your working process, keep your hands off the effect parameters: when you add effects for later use, switch them off again to get an uncommitted start.

3.3 Setting Volumes

Now let's begin with the first adjustments by muting all tracks except one: starting with the drum group, and focusing mainly on the bass drum, all relative volumes are set. After all tracks are balanced within their respective sub-groups, the volume of each sub-group is adjusted. Be sure to bring the defining elements of dance music, typically the drums and bass, to the front of the mix.61

60 Cf. Tischmeyer (2006), p.103ff.

There are also some rules that have to be observed conscientiously to obtain a sophisticated result. First of all, you should obviously never raise the volume of any audio track above 0 dBFS, to prevent distortion and generally unwanted noise. Another important rule is that channel faders should never be set higher than their respective sub-group fader and, furthermore, all of them should remain below the master volume.

In earlier times every bit was significant for recording engineers, but nowadays, since we deal with 24 bits and beyond, the more important issue is headroom. Every instrument, especially drums and percussion, exhibits extremely fast, impulsive signals called transients. Our ears recognize these peaks as a very natural sound, but unfortunately they are hard to detect with standard peak meters. If you allocate enough headroom by keeping your average level at about -10 dB, those important transients will not be cut off and a realistic sound will remain. In fact, you should rather aim for a slightly lower volume and higher overall dynamics in your final result, to get better results during the mastering process.62
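The headroom figure above can be checked numerically. The sketch below takes dBFS as 20·log10 of the largest sample magnitude relative to full scale (a common convention assumed here; it measures sample peaks, not inter-sample "true peaks"):

```python
import math

def peak_dbfs(samples):
    """Peak level in dBFS: 0 dBFS corresponds to a full-scale sample of 1.0."""
    peak = max(abs(s) for s in samples)
    return 20.0 * math.log10(peak) if peak > 0 else float("-inf")

def headroom_db(samples):
    """Distance between the peak and digital full scale."""
    return -peak_dbfs(samples)

track = [0.05, -0.25, 0.3, -0.1]
print(round(peak_dbfs(track), 1))  # -10.5, i.e. about 10.5 dB of headroom left
```

A mix peaking around -10 dBFS leaves the transients intact and gives the mastering engineer room to work.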

3.4 Panning

After setting volumes it is important to place the instruments from left to right. Timbres with low frequencies, especially the kick and bass, should always remain in the centre of the soundstage to achieve mono compatibility. Dance music in particular, which is most frequently played in clubs, has to have a central fundament, so that the energy is spread equally at all points.63

61 Cf. Snowman (2004), p.325.
62 Cf. Owsinski (2006), p.105f.

The snare, which mostly supports the kick, should stay relatively central to exploit the energy, but don't shy away from varying its position a little. Cymbals and percussion can be spread all over the soundstage to increase the dynamic behaviour. Another good trick is to position hi-hats further left or right, with a delayed version on the other side.

Another part of the mix that should stay central is the lead vocal, because everyone expects the vocalist in the centre of the stage. The lead synth should be a stereo file panned hard left and hard right to leave space for the vocals. In parts of the song without vocals you can also place a mono synth in the centre of the soundstage.

In general it is recommended to pan higher-pitched signals to the outside and lower ones to the centre.64 In many cases it may be important to exaggerate the positions of the instruments to make a mix appear clearer and more defined. Nevertheless, you shouldn't position a sound in a different area of the soundstage just to avoid small frequency clashes with other instruments. In these circumstances you should try to EQ the sound beforehand; if it still doesn't fit, consider panning it out of the way.65
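The left-right placement described in this section is implemented in mixers by a pan law. The sketch below uses a constant-power sine/cosine law, one common choice (an assumption, not a description of any particular DAW), which keeps the perceived loudness steady as a mono source moves across the stage:

```python
import math

def constant_power_pan(sample, pan):
    """pan: -1.0 = hard left, 0.0 = centre, +1.0 = hard right.
    Returns (left, right); the channel powers always sum to sample**2."""
    angle = (pan + 1.0) * math.pi / 4.0      # map [-1, 1] to [0, pi/2]
    return sample * math.cos(angle), sample * math.sin(angle)

l, r = constant_power_pan(1.0, 0.0)          # centre position
print(round(l, 3), round(r, 3))              # 0.707 0.707, i.e. -3 dB per side
```

At the centre each channel carries the signal at about -3 dB, and at either extreme all the energy sits in one channel.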

63 Cf. Tischmeyer (2006), p.114.
64 Cf. Snowman (2004), p.326.
65 See Snowman (2004), p.326.

3.5 Single Instrument Editing

Now that you have an adequate global setup, the next steps focus on single instruments. Of course everything depends on the actual project, but the following sections offer a good entry point and can be seen as a general guideline.

3.5.1 Kick / Bass Drum

The bass drum is the most important part of any dance track, so you should give it your best attention. The kick must assert itself with power and pressure in the low frequencies to lay the foundation together with the bass.

To emphasize the pressure of a kick drum, you can try boosting the gain of the low-frequency impact, residing around 40 to 120 Hz, a little. But that won't solve the problem if the sound of the bass drum has no punch at all; in that case you should rather replace the kick with a more substantial timbre.66

The attack of a bass drum resides between 2 and 8 kHz and controls the ability to locate the rhythm. In most cases you can increase this component a little, but this always depends on the specific kick and the arrangement as a whole. Furthermore, the bass region has to be kept clean, so a low-cut with a narrow bandwidth should be set at around 30 Hz, as shown in figure 8. In the muddy mid range between 120 and 350 Hz you should decrease the gain with a bell filter to offer space to other elements of the mix, because this region is the foundation for many other instruments.
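The 30 Hz low-cut mentioned above can be realized with a standard second-order (biquad) high-pass filter. The coefficients below follow the widely published Audio EQ Cookbook form; this is an illustrative sketch, as every DAW equalizer uses its own implementation:

```python
import math

def highpass_biquad(fc, fs, q=0.707):
    """Biquad high-pass coefficients (Audio EQ Cookbook form),
    normalized so a[0] = 1. fc: cutoff in Hz, fs: sample rate in Hz."""
    w0 = 2.0 * math.pi * fc / fs
    alpha = math.sin(w0) / (2.0 * q)
    cosw = math.cos(w0)
    a0 = 1.0 + alpha
    b = [(1 + cosw) / 2 / a0, -(1 + cosw) / a0, (1 + cosw) / 2 / a0]
    a = [1.0, -2.0 * cosw / a0, (1.0 - alpha) / a0]
    return b, a

b, a = highpass_biquad(30.0, 44100.0)
# DC gain of the filter is sum(b) / sum(a): effectively zero for a high-pass
print(abs(sum(b) / sum(a)) < 1e-9)  # True
```

The check at the end confirms the defining property of a low-cut: content at 0 Hz is removed entirely, while the top of the spectrum passes at unity gain.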

66 Cf. Snowman (2004), p.329.

Figure 8: A sample of a kick equalizer (Source: author's own.)

Another important tool for editing the kick drum is of course the compressor. It is used to limit the kick's dynamics, to intercept single peaks and, moreover, to make it punchier: long attack times let the transient pass unprocessed, which accentuates the attack artificially.

A reverb on the bass drum may boost the higher frequencies, making the transients seem fickle and imprecise. For this reason a reverb should only be inserted without pre-delay, to avoid this unwanted fluttering. On the other hand, the reverb is vital for the spatial connection between the bass drum and the other percussion. It should be used unobtrusively, so that it is missed when turned off but not heard too strongly and obviously when applied.67

67 Cf. Tischmeyer (2006), p.196ff.

3.5.2 Snare

Snares are very powerful at the bottom of the frequency range, but this energy is not needed, because it is mostly provided by the kick. Therefore you should set a high-pass filter at around 150 Hz to remove the disturbing frequencies below this limit.

The main body of a snare is located between 400 Hz and 1 kHz. Some cuts or reductions in this area may help it fit better into the whole mix. To emphasize the snap of the snare, it is generally recommended to give it a little boost at around 8 to 10 kHz to brighten it up and get a more striking snare drum at the high frequencies.68 This is shown in figure 9.

Figure 9: Snare equalizer (Source: author's own.)

68 Cf. Snowman (2004), p.329.

When compressing your snare drum to decrease its dynamics, it is very important to take care that you do not affect its transients, because otherwise it will sound as if someone had deflated the drum. To avoid that, and to support its crispness, use long attack times.69

3.5.3 Hi-hats and Cymbals

Because hi-hats and cymbals are only represented at higher frequencies, you can remove the lower ones with a high-pass filter at around 300 Hz or above. Figure 10 shows how narrowly a filter can be set to fit an instrument into a mix. To increase the brightness of these instruments, a shelving filter is set for the frequencies between 8 and 15 kHz; if you chose higher ones, the overall mix would acquire an ugly hiss. If you miss a little presence, try boosting the instrument at about 600 Hz.70

69 Cf. Tischmeyer (2006), p.200.
70 Cf. Snowman (2004), p.329.

Figure 10: Hi-hat equalizer (Source: author's own.)

3.5.4 Bass

Perhaps the most difficult task of a mixing engineer is balancing the bass and drums (especially the bass and kick). Nothing can make or break a mix faster than how these instruments work together. It's not uncommon for a mixer to spend hours on this balance (both level and frequency), because if the relationship isn't correct, the song will never sound big and punchy.71

Once more, a foresighted production is vital to prevent the kick and bass, which occupy similar frequencies, from clashing. This is done by offsetting the bass so that it does not play at the same time as the kick. If, however, your arrangement does not allow that, you will have to equalize the bass radically to make both instruments fit together well.72

71 See Owsinski (2006), p.31.
72 Cf. Snowman (2004), p.330.

For more clarity you can try boosting the bass a little at around 800 Hz. If the bass is too warm, try boosting at 100 Hz and cutting at 140 Hz to make it more distinct without losing the fundamental. To avoid muddiness it is always better to cut other instruments in the 250 Hz area than to remove that region from the bass. With a low-pass and a high-pass filter you can further fit the bass into the mix.73

A suitable ratio for compressing the bass is somewhere around 8:1, as shown in figure 11. Trigger the compressor's threshold from the kick via a side-chain to avoid simultaneous occurrence. Generally a fast attack and release are recommended, but as with all settings, this always depends on the overall project and the timbre being used.74

73 Cf. Owsinski (2006), p.35.
74 Cf. Snowman (2004), p.332.

Figure 11: Typical offbeat bass compressor settings (Source: author's own.)

Reverbs for basses are applied only discreetly and virtually inaudibly, to consolidate the interaction between kick and bass by combining their ambient environment. In other cases a stronger reverberation may be applied as a stylistic element, but this can lead to intransparent mixes. If you want to use chorus, flanger, or phaser effects, you will have to duplicate the bass and keep the low end on one track and the tonal clarity on the other. This is essential to obtain a clear mono bass without phase problems and a well-sounding stereo effect.75

75 Cf. Tischmeyer (2006), p.209f.

3.5.5 Vocals

Firstly, it should go without saying that the vocals should be compressed so that they maintain a constant level throughout the mix without disappearing behind instruments. Generally, a good starting point is to set the threshold so that most of the vocal range is compressed with a ratio of 9:1 and an attack that allows the initial transient to pull through unmolested.76

If you have recorded the vocals correctly, you will not have to make big changes during the mixing process, because any boost will make the voice sound unnatural. To remove muddiness you may reduce the level a little at around 400 Hz. Many dance artists counter a lack of energy by slightly increasing the speed and pitch of the vocal. Though this may not sound completely correct to perfectly trained ears, it will produce the intended energy.

Adding effects to vocals is a very sensitive issue and should be handled with care. In most cases a little reverb is added to the vocals to obtain a more realistic sound. The effects have to be applied correctly from the beginning to avoid later frequency clashes; otherwise you would have to equalize the vocal's effect unnecessarily, leading to an unnatural sound.77

To get more presence you can also duplicate the vocal track and apply different EQ and compression settings to make the copies sound dissimilar. Then one track is placed left and the other panned right, but it should still sound as if the vocalist were in the centre of the stage.78

76 See Snowman (2004), p.332.
77 Cf. Snowman (2004), p.332f.
78 Cf. Tischmeyer (2006), p.212.

3.5.6 Synthesizers / Pianos / Guitars

Synthesizers and all other melodic instruments have their fundamental frequencies in the mid range. Therefore it is important to edit them in order of their significance. If vocals occur simultaneously, leave the vocals untouched: either decrease the mid range of the instrument or apply a side-chain reduction of not more than 1 dB. All frequencies that are not vital to the mix should be removed with shelving filters. Try to find the lowest and highest frequencies whose removal audibly changes the sound, and use them as the cut-off frequencies.79

3.5.7 Atmos, Pads & Sound FX

Atmos, pads, and sound effects increase the depth of your production and improve its dynamic behaviour and dramaturgy. These elements should not be too present, and hence they should be edited last, when every important instrument has its own space assigned. Then try to find a place where they fit in.

3.6 Automations

Most common digital audio workstations that can handle MIDI input are capable of automating any knob or fader value. Using a MIDI controller along with your favourite sequencer software, you can automate volume, panning, EQ, and usually all parameters of VST instruments and effects, too. The software records all of your movements and usually lets you edit the recorded automation data in an editor afterwards.80
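Under the hood, recorded automation is typically stored as break-points that the host interpolates between on playback. A minimal sketch of linear break-point interpolation (an illustration of the general idea, not any specific DAW's data model):

```python
def automation_value(points, t):
    """Linear interpolation of automation break-points.
    `points` is a time-sorted list of (time, value) pairs."""
    if t <= points[0][0]:
        return points[0][1]              # before the first point: hold it
    for (t0, v0), (t1, v1) in zip(points, points[1:]):
        if t <= t1:
            return v0 + (v1 - v0) * (t - t0) / (t1 - t0)
    return points[-1][1]                 # after the last point: hold it

# A volume fade written as two break-points: 0.0 at bar 0, 1.0 at bar 4
fade = [(0.0, 0.0), (4.0, 1.0)]
print(automation_value(fade, 2.0))  # 0.5
```

Editing the curve afterwards then just means moving, adding, or deleting break-points rather than re-recording the fader ride.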

79 Cf. Snowman (2004), p.333.
80 Cf. Snowman (2004), p.131.

First you need a static mix in which all volume and panning settings are already finalized. This is the starting point for your automations: bear in mind that using automation too early leads to a very poor workflow and ends up creating extra work, because as soon as you change the parameters of any track, all of its automation for that value has to be modified, too. Thus, if the lead vocal is too quiet in just one part, try to solve this small difference with on-board, so-called offline methods, like simply raising the gain only in that section.81 Figure 12 shows the automation editor in FL Studio 8.

81 Cf. Tischmeyer (2006), p.187.

Figure 12: Automation curves in FL Studio (Source: author's own.)


4. The Final Mix

Once again you have to think about the goals you set and compare them with the status quo of your production. Once you are completely satisfied with your mix, a few final steps remain to achieve a reasonable starting point for the mastering process.

4.1 Where to go

As a matter of course you want a sophisticated mix that sounds professional, elaborate, and clear. To set your final product apart from more or less amateur mixes, you need to bear some characteristics in mind:

Contrast: create differences in the musical texture for an enthralling song.

Focal point: in every second of the song, the lead vocal or instrument needs to hold the listener's attention.

Noises: eliminate all clicks, unintended noises, hums, and unwanted human sounds like breathing or lip smacking.

Clarity: all instruments, sounds, and voices need to be heard clearly and distinctly from each other.

Punch: you need to strike a happy medium for the low end of your song. Dance music implicitly needs a pumping bass, but on the other hand you should not exaggerate it, because club sound systems already emphasize the bass heavily.

Distance: when your mix sounds too distant and aloof, decrease the use of reverbs and other effects.

Levels: make sure that all your instruments have balanced levels.82

Before you proceed with the next step, you should rest your ears for a while to regain a neutral view of your work. If you listen to the track once more a week or so later, you will hear it from a very different point of view and can therefore make changes early enough, before a mastering engineer puts hands on it.

4.2 Final Steps

To deliver an error-free and clean mix down to the mastering engineer, you have to take some technical issues into account. First, the bit depth of the mix has to be as high as possible, and at least 24-bit. If you transport it on CD, be sure to burn it at a very slow speed to avoid burning errors. Moreover, if the sample rate is not 44.1 kHz, the stereo mix will need to be converted with a high-quality sample rate converter, which can be very expensive, so you should leave this to the mastering studio. Furthermore, fades and cuts at the beginning and end of the audio file are not beneficial for the mastering process, because they can cause pumping effects and artefacts.83

4.3 Outlook on Mastering

The mastering engineer is responsible for a consistently uniform aesthetic across the whole production of a CD or other media. The volumes of all tracks are matched and cautiously compressed. Furthermore, the music is processed to sound good, and nearly the same, on all kinds of radios and hi-fi systems.84

82 Cf. Owsinski (2006), p.9.
83 Cf. Tischmeyer (2006), p.30f.

While the mastering process is normally done by audio engineers with highly trained ears, nowadays more and more people produce low-budget music and try to mix and master their projects on their own. This can lead to problems, because you will not discover errors you have already overlooked throughout the mixing process. Therefore you should either rest your ears for a while before you start mastering, or let someone else do it.85

During mixing you can change every parameter with ease, because you have every track and channel available. This is completely different in the mastering process, where only one audio file, the final mix down, is changeable. For this reason you need to remember some vital things before mastering:

Equalizer: it is better to deliver a slightly duller mix down, because the mastering engineer can brighten things up more easily than the other way round.

Compression: with hyper-compression it becomes impossible to make suitable changes during mastering. It is not the job of the mixing engineer to make a song as loud as possible.

Phase: always test your song in mono before you export it as the final mix. Due to phase cancellation, even the lead vocal can disappear completely if you have made errors with layering or stereo spreading.86
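The mono test above can be roughed out numerically: fold the stereo file down to mono and compare its energy with the average channel energy. A ratio far below 1 signals phase cancellation (the example signals and any pass/fail threshold are illustrative assumptions, not a standard metric):

```python
def mono_fold_energy_ratio(left, right):
    """Energy of the mono sum (L+R)/2 relative to the mean channel energy.
    A ratio near 0 means severe phase cancellation in mono playback."""
    mono_energy = sum(((l + r) / 2.0) ** 2 for l, r in zip(left, right))
    stereo_energy = sum((l * l + r * r) / 2.0 for l, r in zip(left, right))
    return mono_energy / stereo_energy if stereo_energy else 1.0

in_phase = mono_fold_energy_ratio([0.5, -0.5], [0.5, -0.5])
flipped = mono_fold_energy_ratio([0.5, -0.5], [-0.5, 0.5])
print(in_phase, flipped)  # 1.0 0.0
```

Identical channels survive the fold-down intact, while a polarity-flipped channel cancels completely, which is exactly the failure mode that can swallow a badly layered lead vocal.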

84 Cf. Tischmeyer (2006), p.32.
85 Cf. Electronic Music Production (2007), http://emusictips.com/what-is-mastering, 29th August 2008.
86 Cf. Owsinski (2006), p.86f.

5. Conclusio

Since the production of electronic music, and especially dance music, can be done by anyone who owns a computer, the number of hobby producers has grown rapidly. A basic setup including studio monitors and a digital audio workstation is sufficient for most amateur purposes. By understanding the spatial arrangement of instruments and the usage of filters and effects, it is possible to create professional mixes without expensive hardware.

It is very important to adhere to a solid workflow to avoid problems later in the mixing process. It is always advisable to start with a draft of a soundstage prior to making general alterations like setting volumes and panning tracks. All instruments should be divided into sub-groups and routed adequately before each track is edited separately. There are guidelines for every type of instrument, but the properties and settings of effects and filters differ in every mix according to the arrangement of the track.

To obtain a sophisticated mix it is vital to keep one's goal in mind and to compare the final product with other similar tracks on the market. Before the final mix is forwarded to the mastering studio, one should focus on a mixdown that does not lack clarity and punch, but also avoids hyper-compression and too heavy a use of equalizers.

List of Figures
Figure 1: Equilateral triangle of monitors (Source: Owsinski (2006), p.73.)
Figure 2: Fletcher-Munson curve (Source: Snowman (2004), p.311.)
Figure 3: Reverb effect plug-in (Waves Rverb) (Source: author's own.)
Figure 4: Delay effect plug-in of Steinberg Cubase (Source: author's own.)
Figure 5: Waves RComp compressor plug-in (Source: author's own.)
Figure 6: Arrangement of instruments (Source: Snowman (2004), p.313.)
Figure 7: Arrangement of tracks and sub groups (Source: author's own.)
Figure 8: A sample of a kick equalizer (Source: author's own.)
Figure 9: Snare equalizer (Source: author's own.)
Figure 10: Hi-hat equalizer (Source: author's own.)
Figure 11: Typical offbeat bass compressor settings (Source: author's own.)
Figure 12: Automation curves in FL Studio (Source: author's own.)


List of Literature
Eisner, Uli (2006): Mixing Workshop. Leitfaden für Beschallung und Homerecording, 8th edition, PPV Medien, Stuttgart.
Electronic Music Production (2007): What is Mastering? July 25th, 2007, website address: http://emusictips.com/what-is-mastering.
Hawkins, Erik (2004): The Complete Guide to Remixing. Produce Professional Dance-Floor Hits on Your Home Computer, Berklee Press, Boston.
Hometracked (2008): 10 Myths About Normalization. April 20th, 2008, website address: http://www.hometracked.com/2008/04/20/10-myths-about-normalization.
Owsinski, Bobby (2006): The Mixing Engineer's Handbook. 2nd edition, Course Technology, Boston.
Raffaseder, Hannes (2002): Audiodesign. Kommunikationskette, Schall, Klangsynthese, Effektbearbeitung, Akustische Gestaltung, Hanser Fachbuchverlag, Hamburg.
Snowman, Rick (2004): The Dance Music Manual. Tools, Toys and Techniques, Focal Press, Oxford.
Sound on Sound (1997): Equal Opportunities. All About EQ, February 1997, website address: http://www.soundonsound.com/sos/1997_articles/feb97/allabouteq.html.
Sound on Sound (1998): Learning Process. Where To Use Processors And Why, Part 5, February 1998, website address: http://www.soundonsound.com/sos/feb98/articles/processors.html.
Sound on Sound (1997): Multi Story. Multi-effects Explained: Part 1, July 1997, website address: http://www.soundonsound.com/sos/1997_articles/jul97/multifx1.html.
Tischmeyer, Friedemann (2006): Internal Mixing. Der systematische Weg zum professionellen Mixdown im Rechner, Tischmeyer Publishing, Kuekels.
Wikipedia (2008): Virtual Studio Technology. 2008, website address: http://en.wikipedia.org/wiki/Virtual_Studio_Technology.
Wikipedia (2008): Distortion. 2008, website address: http://en.wikipedia.org/wiki/Distortion.
Zölzer, Udo (2002): DAFX. Digital Audio Effects, Wiley & Sons, New Jersey.
