
PA Training Introduction 2012

The Sound Technician’s Role In The Worship Experience


That’s right! Your role as a sound technician is a crucial element of the worship ministry, Praise Team,
and the worship experience of the congregation, members and guests alike. In fact, your duties of running the
sound system actually constitute worship in light of Romans 12:1, “Therefore, I urge you, brothers, in view of God’s
mercy, to offer your bodies as living sacrifices, holy and pleasing to God – this is your spiritual act of worship.” And
what you do as a sound technician has a direct effect on how our worship and service is perceived. In
understanding this, we want you to know some simple truths about taking on the role of a sound technician.

I. Your Job Is Part Of Worship


First of all, thank you for your willingness to help Hope Serdang and the Worship Ministry in this important
component of our Sunday morning worship experience. Your assistance as a sound technician enables our service
to run smoothly and, in turn, facilitates an atmosphere of worship that is conducive to spiritual growth. Your goal
as a sound technician is to serve as a seemingly invisible factor of worship that, at the least, does not hinder the
worship process and, at the most, helps to enhance the service.

II. Your Job Is Difficult


You will carry a heavy load from week to week, responsible for making everything and everyone sound
good. When things go wrong or the music is too loud, you will get funny looks from everyone turning around to see
what the problem is. A poor sound experience can be a huge obstacle in leading the congregation to the throne of
God in worship – if the praise singers and musicians aren't heard well, it can be difficult to communicate the
message of worship and life that we present each week.

Characteristics Of A Godly And Successful Sound Technician


God has given us unique abilities to shape and control the sound, or the lights, or the video equipment, to
capture and even enhance the gifts of the worship team. But you didn't wake up one day with the ability to deliver
a great mix. You had to work on it. Artfully lighting a presentation on stage, or even lighting the stage evenly so
that the video team will have a smooth picture to record, takes an investment of our time and a decision to learn
and develop those unique abilities that God has given us. In any area where we use our skills and God-given gifts,
there are a number of characteristics that will help us excel in that particular role in the Body of Christ.
Romans 12:4 – 5 “Just as each of us has one body with many members, and these members do not all
have the same function, so in Christ we who are many form one body and each member belongs to all
the others.”
1 Corinthians 12:18 “But in fact God has arranged the parts in the body, every one of them, just as he
wanted them to be.”

i. Excellence For Christ


We are called to excellence in the technical support ministry. God gave us His best, and our service through the
tech ministry should offer no less than our best pursuit of excellence for Him. Recognize that God isn't looking for
perfection, but excellence – that is, offering our best to Him, and realize that we can minister to Him and others
through a mutual desire to seek God's best.
Colossians 3:23-24, “Whatever you do, work at it with all your heart, as working for the Lord, not for men”

ii. Cooperation & Recognition Of Authority


A sound technician must have a cooperative spirit.
Colossians 3:12-13, “Therefore, as God's chosen people, holy and dearly loved, clothe yourselves with
compassion, kindness, humility, gentleness and patience. Bear with each other and forgive whatever
grievances you may have against one another.”
1 Thessalonians 5:11, “Therefore encourage one another and build each other up, just as in fact you are
doing.”
We should build up one another, and have the responsibility of eliminating any word or action that is not
constructive to our team. Remember that as a sound technician, you are not the final say on the way the sound
system is run. We are to submit to each other and to our leadership, out of respect for each other as fellow brothers
and sisters in Christ.


Hebrews 13:17 “Obey your leaders and submit to their authority. They keep watch over you as men who
must give an account. Obey them so that their work will be a joy, not a burden, for that would be of no
advantage to you.”
In the end, the Worship Leader or the Pastor will be the authority on how worship should sound and how to run the
sound system.

iii. Experience & Teachability


A sound technician should have some experience running the sound equipment and may be responsible,
individually and as a team, for further technical development (such as classes, workshops, or self-study). Be
prepared to humbly share with and learn from others regarding new information and techniques.

iv. Knowledge Of The Sound Equipment


Think of your job as that of an artist, having a relationship with the sound equipment similar to the one a
musician has with their instrument. The thing that often separates good musicians from poor ones is practice, practice
and more practice, until the instrument nearly becomes an extension of the individual. You need to do the same.
Psalm 33:3, “Sing to him a new song; play skilfully, and shout for joy.” First, get to know the different aspects of
the sound equipment exceedingly well: the Mixer, the Amplifier, the Equalizer, the Monitor System, and the
Microphones.

v. Sense Of Artistry & Ability To Listen


A sound technician not only needs to know the mechanics of running the sound equipment, but must also have a
sense of music in order to get the proper blends of music and vocals. Your ears are the most powerful tool you
have as a sound technician. That's what the whole job is about - listening. Your ears are the reference point for
the entire congregation. Spend time listening to professional musicians, singers, and speakers. Fix that sound in
your mind and work hard to match that sound when you reinforce the Praise & Worship team, Pastor, or any other
users of the sound system. On musical selections, for instance, there should be an appropriate blend between
background music and vocals with the vocals being the most important (clear and audible), but the music loud
enough for the worship to not feel “dead.” This means that you need to keep your ears “turned on” and “tuned in”
at all times.

vi. Balancing The Two Goals – Flow and Message


The two goals of the sound presentation during worship are:
a) A smooth flow of worship.
b) Clear and effective communication.
The sound technician will consistently try to achieve a balance between the two. First, we try to achieve smooth
transitions: for instance, avoiding awkward pauses by cuing sound media beforehand, watching for changes in the
program, and so on. Second, we make sure the audio message is clear and effective: when problems do occur, we
make what changes we can to ensure the message is communicated (e.g. restarting a CD clip if the audio was off
the first time, making sure vocals are raised for speaking parts, etc.).

Focus on Running Sound and Establish the Line of Authority


We need to put ourselves in the congregation's shoes. Delivering a flawless worship service requires
focus and sensitivity on our part. First, we need to be focused on the task at hand. As much as we want to close
our eyes and lift our hands in worship during an especially moving song, we can't. It's not that we can't get anything
out of a worship service, because we do. But we have to look at our part of the service as a sacrificial offering to
God so that others can enter in. If we allow ourselves to get distracted, if we are not fully focused on the task at
the moment, then we can easily miss a mic cue, allow a bit of feedback to get out of hand, miss a lighting cue,
forget to project the right song lyrics, and so on. Those kinds of mistakes are understandable, but
inexcusable. Another reason for our staying focused on the task at the moment is so that we don't do something
really stupid during a service.
To help minimize a fair amount of frustration during rehearsals and services, keep in mind your primary
focus as a sound technician. Make sure that others know you are there to run the sound system and that it takes
constant attention. This is not a time for conversation, nor is it a time for other requests that may come your way.


Focus on the job at hand – gently but firmly explain the importance of your role to others that may distract you from
your task.
People in the congregation, Praise & Worship Team, and church leaders will often come to the Sound
Technician with their comments and suggestions for running the sound system (“turn it down,” “turn it up,” “the
band is too loud,” “the singers are too loud,” “bring up my monitor,” etc.). You simply cannot make changes based
upon everyone’s input or chaos will occur. Ultimately, you answer to the PA Coordinators (Wyn & Alex) for how the
system is run, and they in turn answer to church leadership and the congregation. Because of this, it is often good
to redirect people and their comments and suggestions to the PA Coordinators (Wyn & Alex).
Even with Praise & Worship Team Members, all requests should first be directed through the Worship
Leader. When the sound technician can take direction from only one person, it makes rehearsals and sound checks
less confusing and frustrating for everyone.

Connection and Power Switching Order


I. Connecting Cables
Always make sure that all equipment is turned off when making connections. Also make sure that all volume and
level controls are turned down to minimum before turning the power on.

II. Power ON/OFF Switching


When turning on the power to your system, follow the procedure outlined below to protect your speakers from the
power surge that occurs when sound gear is switched on or off.

i. Turn on electric/electronic musical instruments and sources such as CD or cassette players


ii. Turn on the mixer
iii. Turn on any graphic equalizers
iv. Turn on the power amp(s)

Reverse this order when turning the system off. Rule of thumb: Amps on last, off first.
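The same rule can be captured as a short checklist script. This is only an illustrative sketch in Python; the device names are placeholders for whatever equipment is actually in the rack, not a list of specific gear.

# Power sequencing rule of thumb: amps on last, off first.
# The device names below are placeholders, not specific models.
POWER_ON_ORDER = [
    "instruments and sources (keyboards, CD/cassette players)",
    "mixer",
    "graphic equalizers",
    "power amplifiers",
]

def power_on(devices):
    for device in devices:
        print("Switch ON:", device)

def power_off(devices):
    # Power-down is simply the power-up order reversed.
    for device in reversed(devices):
        print("Switch OFF:", device)

power_on(POWER_ON_ORDER)
power_off(POWER_ON_ORDER)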

What is Sound?


Sound reinforcement for house of worship applications has evolved from simple speech reinforcement
to full concert-quality multimedia systems. It has grown from basic analog sound systems to an extremely varied
selection of digital audio equipment. However, no matter how complex the overall audio system, an understanding
of the basic principles of sound, the key elements of sound systems, and the primary goal of “good sound” will
ensure the best result in choosing and using that system.

Because good sound quality is the goal of any house of worship sound system, it is helpful to be familiar
with some general aspects of sound: how it is produced, transmitted, and received. In addition, it is also useful to
describe or classify sound according to its acoustic behaviour. Finally, the characteristics of “good” sound should
be understood.

I. Vibrations
Sound is produced by vibrating objects. These include musical instruments, loudspeakers, and, of course,
human vocal cords. The mechanical vibrations of these objects move the air which is immediately adjacent to them,
alternately “pushing” and “pulling” the air from its resting state. Each back-and-forth vibration produces a
corresponding pressure increase and pressure decrease in the air. A complete pressure change, or cycle, occurs
when the air pressure goes from rest, to maximum, to minimum, and back to rest again. These cyclic pressure
changes travel outward from the vibrating object, forming a pattern called a sound wave. A sound wave is a series
of pressure changes moving through the air. This is the genesis of any sound. As a matter of fact, if an object —
whatever it may be — doesn’t vibrate, it cannot make sound. These waves have length (hence the term wavelength)
measured in feet and inches or even meters. As these sound waves reach your ear, your eardrum also vibrates,
and it is those vibrations that are recognized by your brain.

II. Hertz
A simple sound wave can be described by its frequency and by its amplitude. The frequency of a sound
wave is the rate at which the pressure changes occur. It is measured in cycles per second, more commonly known
as Hertz (Hz), where 1 Hz is equal to 1 cycle per second. The term is derived from the surname of the man who
brought forth this knowledge: the 19th-century physicist Heinrich Hertz. The range of frequencies audible to the human
ear extends from a low of about 20 Hz to a high of about 20,000 Hz. In practice, a sound source such as a voice
usually produces many frequencies simultaneously. In any such complex sound, the lowest frequency is called the
fundamental and is responsible for the pitch of the sound. The higher frequencies are called harmonics and are
responsible for the timbre or tone of the sound. Harmonics allow us to distinguish one source from another, such
as a piano from a guitar, even when they are playing the same fundamental note. Those fluctuations can be thought
of as cycles per second (cps). So, the more cycles per second, the higher the frequency, or vice versa.
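As a quick worked example of fundamentals and harmonics (using 440 Hz, “concert A”, purely as an illustrative value that is not taken from this handout), the harmonics fall at whole-number multiples of the fundamental. A minimal Python sketch:

# Harmonics of a complex tone are whole-number multiples of its fundamental.
# 440 Hz ("concert A") is used only as a convenient example value.
fundamental_hz = 440.0
for n in range(1, 6):
    label = "fundamental" if n == 1 else "harmonic %d" % n
    print(label, n * fundamental_hz, "Hz")
# fundamental 440 Hz, harmonic 2 880 Hz, harmonic 3 1320 Hz, ...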

III. Decibels
The amplitude of a sound wave refers to the magnitude (strength) of the pressure changes and determines the
“loudness” of the sound. Amplitude is measured in decibels (dB) of sound pressure level (SPL) and ranges from 0
dB SPL (the threshold of hearing), to above 120 dB SPL (the threshold of pain). The level of conversational speech
is about 70 dB SPL. A change of 1 dB is about the smallest SPL difference that the human ear can detect, while 3
dB is a generally noticeable step, and an increase of 10 dB is perceived as a “doubling” of loudness.

Another characteristic of a sound wave related to frequency is wavelength. The wavelength of a sound
wave is the physical distance from the start of one cycle to the start of the next cycle, as the wave moves through
the air. Since each cycle is the same, the distance from any point in one cycle to the same point in the next cycle
is also one wavelength: for example, the distance from one maximum pressure point to the next maximum pressure
point. Wavelength is related to frequency by the speed of sound. The speed of sound is the velocity at which a
sound wave travels. The speed of sound is constant and is equal to about 1130 feet-per-second in air. It does not


change with frequency or wavelength, but it is related to them in the following way: the frequency of a sound
multiplied by its wavelength equals the speed of sound. For example, a 1,130 Hz tone has a wavelength of about
one foot, while a 113 Hz tone has a wavelength of about ten feet.
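Since frequency multiplied by wavelength equals the speed of sound, the wavelength of any tone can be worked out directly. A minimal Python sketch using the 1,130 feet-per-second figure quoted above:

SPEED_OF_SOUND_FT_PER_S = 1130.0

def wavelength_ft(frequency_hz):
    # wavelength (feet) = speed of sound (ft/s) / frequency (Hz)
    return SPEED_OF_SOUND_FT_PER_S / frequency_hz

for f in (20, 113, 1130, 20000):
    print(f, "Hz ->", round(wavelength_ft(f), 2), "ft")
# 20 Hz -> 56.5 ft, 113 Hz -> 10.0 ft, 1130 Hz -> 1.0 ft, 20000 Hz -> 0.06 ft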

* SOUND CHECK

An anechoic chamber is a room with special walls that absorb as much sound as possible. Anechoic means "without echoes". Sometimes
the entire room even rests on shock absorbers, negating any vibration from the rest of the building or the outside.

The material covering the walls of an anechoic chamber uses wedge-shaped panels to dissipate as much audio energy as possible before
reflecting it away. Their special shape reflects energy into the apex of the wedge, dissipating it as vibrations in the material rather than the
air. Anechoic chambers are frequently used for testing microphones, measuring the precise acoustic properties of various instruments,
determining exactly how much energy is transferred in electro-acoustic devices, and performing delicate psychoacoustic experiments.

Interesting Facts

1. The BA-Aerospatiale Concorde is the only passenger plane to fly faster than sound, at 2100 kph.


2. The loudest natural sounds ever made on Earth are probably gigantic volcanic eruptions, such as the explosions of the island
of Krakatoa.
3. Some of the loudest sounds produced by our own invention are the noise of space rockets blasting from the launch pad. The
biggest were the Saturn V rockets that launched the USA's Apollo moon missions of 1968-72. They had their greatest success
when Apollo 11 landed on the Moon - an airless and therefore completely silent place - on 20 July 1969. Once a space rocket
had taken off and entered the vacuum of space, it became totally silent.
4. One musical piece has no sound at all. It is called 4 minutes 33 seconds. It was 'written' by the American composer John
Cage in 1952. A pianist sits at the piano and plays nothing for exactly 4 minutes and 33 seconds.
5. In the deep ocean, the sperm whale uses sound to stun or kill its prey. It sends out giant grunts, immensely powerful bursts of
sound that can disable nearby fish, squid and other victims.
6. Acoustics plays a large part in the design of modern concert halls, theatres and similar buildings. The travels of a sound wave
can be shown on a computer screen, for different frequencies and for different materials covering them. This is why many such
buildings have strange shapes on the walls and ceiling, such as discs, panels and saucers, to absorb or reflect the sounds.
7. Huge cathedrals, with their hard walls and floors of stone, glass and wood, are amazing places for acoustics. Almost any sound
seems loud and long, as it echoes and reverberates through the surfaces. This is why church singing sounds so special.

The Sound Source

The sound sources most often found in worship facility applications are the speaking voice, the singing
voice, and a variety of musical instruments. Voices may be male or female, loud or soft, single or multiple, close
or distant, etc. Instruments may range from a simple acoustic guitar to a pipe organ or even to a full orchestra.

Sound sources may be categorized as desired or undesired, and the sound produced by them may be
further classified as being direct or indirect. In practice, the sound field or total sound in a space will always consist
of both direct and indirect sound, except in anechoic chambers or, to some extent, outdoors, when there are no
nearby reflective surfaces.

Undesired sound sources may also be present: building noise from air conditioning or buzzing light fixtures,
noise from the congregation, sounds from street or air traffic, drums that overpower the congregation or musicians,
or microphone feedback.

The acoustics of the room are often as important as the sound source itself. Room acoustics are a
function of the size and shape of the room, the materials covering the interior surfaces, and even the presence of
the congregation. The acoustic nature of an area may have a positive or a negative effect on the sound produced
by voices, instruments, and loudspeakers before it is picked up by microphones or heard by listeners: absorbing
or diminishing some sounds while reflecting or reinforcing other sounds. Strong reflections can contribute to
undesired sound in the form of echo, standing waves, or excessive reverberation.

The Basic Purpose of a Sound System

I. To help people hear something better.


For example, one person speaking on a stage may not be heard well at the back of a large hall. A sound system
may be used to make the sound more clearly audible. In this case, the intention is to make the voice at the back
of the hall sound as loud as (not louder than) it is when heard up close.

II. To make sound louder for artistic reasons


A Praise & Worship team may be clearly audible but not necessarily very exciting. A sound system can give the
group much greater musical impact and help lift the atmosphere of the service.

III. To enable people to hear sound in remote locations.


Special events draw larger crowds than the meeting room will hold. A sound system can bring the speeches and
discussion to a second room, so that the overflow crowd can hear them.

A Conceptual Model Of A Sound System

Sound systems amplify sound by converting it into electrical energy, increasing the power of the electrical
energy by electronic means, and then converting the more powerful electrical energy back into sound. In audio
electronics, devices that convert energy from one form into another are called transducers.

Devices that change one or more aspects of an audio signal are called signal processors. Using these
terms, we can model a sound system in its simplest form.

The input transducer (i.e., mic or pickup) converts sound into a fluctuating electrical current or voltage
which is a precise representation of the sound. The fluctuating current or voltage is referred to as an audio signal.

The signal processing alters one or more characteristics of the audio signal. In the simplest case, it
increases the power of the signal (a signal processor that does this is called an amplifier). In practical sound
systems, this stage represents a multitude of devices: preamplifiers, mixers, effects units, power
amplifiers, and so on. The output transducer (i.e., the loudspeaker or headphones) converts the amplified or
otherwise processed audio signal back into sound.

The Sound System

A basic sound reinforcement system consists of an input device (microphone), a control device (mixer),
an amplification device (power amplifier), and an output device (loudspeaker). This arrangement of components is
sometimes referred to as the audio chain: each device is linked to the next in a specific order. The primary goal of
the sound system in house of worship sound applications is to deliver clear, intelligible speech, and, usually, high-
quality musical sound, to the entire congregation. The overall design, and each component of it, must be
intelligently thought out, carefully installed, and properly operated to accomplish this goal.

There are three levels of electrical signals in a sound system: microphone level (a few thousandths of a
Volt), line level (approximately one Volt), and speaker level (ten Volts or higher).
Sound is picked up and converted into an electrical signal by the microphone. This microphone level
signal is amplified to line level and possibly combined with signals from other microphones by the mixer. The power
amplifier then boosts the line level signal to speaker level to drive the loudspeakers, which convert the electrical
signal back into sound.
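Using the decibel relationship for voltage (worked through later in this handout), the rough gain each stage must provide can be estimated. A minimal Python sketch; the exact voltages below are just the approximate figures quoted above, not measurements:

import math

def voltage_gain_db(v_out, v_in):
    # Decibel relationship for voltage: dB = 20 x log(V_out / V_in)
    return 20 * math.log10(v_out / v_in)

mic_level_v = 0.003      # "a few thousandths of a Volt"
line_level_v = 1.0       # "approximately one Volt"
speaker_level_v = 10.0   # "ten Volts or higher"

print("Mixer gain (mic to line):", round(voltage_gain_db(line_level_v, mic_level_v)), "dB")
print("Power amp gain (line to speaker):", round(voltage_gain_db(speaker_level_v, line_level_v)), "dB")
# Roughly 50 dB of gain in the mixer and a further 20 dB of voltage gain in the power amplifier.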


Electronic signal processors, such as equalizers, limiters or time delays, are inserted into the audio chain,
usually between the mixer and the power amplifier, or often within the mixer itself. They operate at line level. The
general function of these processors is to enhance the sound in some way or to compensate for certain deficiencies
in the sound sources or in the room acoustics.

In addition to feeding loudspeakers, an output of the system may be sent simultaneously to recording
devices or even used for broadcast. It is also possible to deliver sound to multiple rooms, such as vestibules and
cry rooms, by using additional power amplifiers and loudspeakers.

Finally, it may be useful to consider the room acoustics as part of the sound system: acoustics act as a
“signal processor” that affects sound both before it is picked up by the microphone and after it is produced by the
loudspeakers. Good acoustics may enhance the sound, while poor acoustics may degrade it, sometimes beyond
the corrective capabilities of the equipment. In any case, the role of room acoustics in sound system performance
cannot be ignored.

What is “GOOD” sound?

The three primary measures of sound quality are fidelity, intelligibility, and loudness. In a house of
worship the quality of sound will depend on the quality of the sound sources, the sound system, and the room
acoustics.

The fidelity of sound is primarily determined by the overall frequency response of the sound arriving at
the listener’s ear. It must have sufficient frequency range and uniformity to produce realistic and accurate speech
and music.

The intelligibility of sound is determined by the overall signal-to-noise ratio and the direct-to-reverberant
sound ratio at the listener’s ear. In a house of worship, the primary “signal” is the spoken word. The “noise” is the
ambient noise in the room as well as any electrical noise added by the sound system. In order to understand
speech with maximum intelligibility and minimum effort, the speech level should be at least 20 dB louder than the
noise at every listener’s ear. The sound that comes from the system loudspeakers already has a signal-to-noise
ratio limited by the speech-to-noise ratio at the microphone. To ensure that the final speech-to-noise ratio at the
listener is at least 20 dB, the speech-to-noise ratio at the microphone must be at least 30 dB. That is, the level of
the voice picked up by the microphone must be at least 30 dB louder than the ambient noise picked up by the
microphone.
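Those two rules of thumb (at least 20 dB at the listener, at least 30 dB at the microphone) are easy to check once the levels are known or measured. A minimal Python sketch with made-up example levels:

def speech_to_noise_db(speech_db_spl, noise_db_spl):
    return speech_db_spl - noise_db_spl

# Hypothetical example levels, not measurements from any particular room.
speech_at_mic_db = 85   # talker's level at the microphone, dB SPL
noise_at_mic_db = 50    # ambient noise at the microphone, dB SPL

ratio = speech_to_noise_db(speech_at_mic_db, noise_at_mic_db)
print("Speech-to-noise ratio at the microphone:", ratio, "dB")
if ratio >= 30:
    print("OK: enough margin for an intelligible result at the listener")
else:
    print("Too noisy: move the microphone closer to the talker or reduce the ambient noise")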

The direct-to-reverberant ratio is determined by the directivity of the system loudspeakers and the
acoustic reverberation characteristic of the room. Reverberation time is the length of time that a sound persists in
a room even after the sound source has stopped. A high level of reverberant sound interferes with intelligibility by
making it difficult to distinguish the end of one word from the start of the next. A reverberation time of 1 second or
less is ideal for speech intelligibility. However, such rooms tend to sound somewhat lifeless for music, especially
traditional choral or orchestral music. Reverberation times of 3-4 seconds or longer are preferred for those sources.
Reverberation can be reduced only by absorptive acoustic treatment. If it is not possible to absorb the reverberant
sound once it is created, then it is necessary either to increase the level of the direct sound, to decrease the
creation of reverberant sound, or a combination of the two. Simply raising the level of the sound system will raise
the reverberation level as well. However, use of directional loudspeakers allows the sound to be more precisely
“aimed” toward the listener and away from walls and other reflective surfaces that contribute to reverberation.
Again, directional control is more easily achieved at high frequencies than at low frequencies.


Finally, the loudness of the speech or music at the furthest listener must be sufficient to achieve the
required effect: comfortable levels for speech, perhaps more powerful levels for certain types of music. These
levels should be attainable without distortion or feedback. The loudness is determined by the dynamic range of the
sound system, the potential acoustic gain (PAG) of the system, and the room acoustics. The dynamic range of a
sound system is the difference in level between the noise floor of the system and the loudest sound level that it
can produce without distortion. It is ultimately limited only by the available amplifier power and loudspeaker
efficiency. The loudness requirement dictates the needed acoustic gain (NAG) so that the furthest listener can hear
at a level similar to closer listeners. However, a sound reinforcement system with microphones requires
consideration of potential acoustic gain. Potential Acoustic Gain (PAG) is a measure of how much gain or
amplification a sound system will provide before feedback occurs. This turns out to be much more difficult than
designing for dynamic range because it depends very little on the type of system components but very much on
the relative locations of microphones, loudspeakers, talkers, and listeners. Room acoustics also play a role in
loudness. Specifically, reverberant sound adds to the level of the overall sound field indoors. If reverberation is
moderate, the loudness will be somewhat increased without ill effect. If reverberation is excessive, the loudness
may substantially increase but with potential loss of fidelity and intelligibility.
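PAG is usually estimated from the relative distances mentioned above. The simplified formula below is the one commonly quoted in sound system design texts; it is not given in this handout, so treat it as an assumption for illustration only:

import math

def potential_acoustic_gain_db(d0, d1, ds, d2, open_mics=1):
    # Common simplified PAG estimate (an assumption, not taken from this handout).
    #   d0: talker-to-listener distance    d1: microphone-to-loudspeaker distance
    #   ds: talker-to-microphone distance  d2: loudspeaker-to-listener distance
    # All distances in the same units; includes a 6 dB feedback stability margin.
    return (20 * math.log10((d0 * d1) / (ds * d2))
            - 10 * math.log10(open_mics)
            - 6)

# Hypothetical distances in feet.
print(round(potential_acoustic_gain_db(d0=40, d1=20, ds=2, d2=30), 1))  # about 16.5 dB
print(round(potential_acoustic_gain_db(d0=40, d1=20, ds=1, d2=30), 1))  # about 22.5 dB

Halving the talker-to-microphone distance buys roughly 6 dB of additional gain-before-feedback, which is one reason close miking matters so much.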

Although “good” sound is qualitatively determined by the ear of the beholder, there are quantitative design
methods and measurements that can be used to accurately predict and evaluate performance. It is usually possible
(though often not easy) to resolve the competing factors of acoustics, sound systems, architecture, aesthetics and
budget in order to deliver good sound in a house of worship. However, major deficiencies in any of these areas
can seriously compromise the final result. Readers who are contemplating major sound system purchases,
acoustic changes, or new construction are encouraged to speak with knowledgeable consultants and/or
experienced contractors to ensure the “best” sound.

Basic Components of a Sound System

I. Mixer
A mixing console, or audio mixer, also called a sound board, soundboard, mixing desk, or mixer is an electronic
device for combining (also called "mixing"), routing, and changing the level, timbre and/or dynamics of audio signals.
A mixer can mix analog or digital signals, depending on the type of mixer. The modified signals (voltages or digital
samples) are summed to produce the combined output signals. Mixing consoles are used in many applications,
including recording studios, public address systems, sound reinforcement systems, broadcasting, television, and
film post-production. An example of a simple application would be to enable the signals that originated from two
separate microphones (each being used by vocalists singing a duet, perhaps) to be heard through one set of
speakers simultaneously. When used for live performances, the signal produced by the mixer will usually be sent
directly to an amplifier, unless that particular mixer is "powered" or it is being connected to powered speakers.


II. Audio Amplifier


An audio amplifier is an electronic amplifier that amplifies low-power audio signals (signals composed primarily of
frequencies between 20 Hz and 20,000 Hz, the human range of hearing) to a level suitable for driving loudspeakers and
is the final stage in a typical audio playback chain.

The preceding stages in such a chain are low power audio amplifiers which perform tasks like pre-amplification,
equalization, tone control, mixing/effects, or audio sources like record players, CD players, and cassette players.
Most audio amplifiers require these low-level inputs to adhere to line levels.
While the input signal to an audio amplifier may measure only a few hundred microwatts, its output may be tens,
hundreds, or thousands of watts.

History
The audio amplifier was invented in 1909 by Lee De Forest when he invented the triode vacuum tube. The triode
was a three terminal device with a control grid that can modulate the flow of electrons from the filament to the plate.
The triode vacuum amplifier was used to make the first AM radio.[1]

Early audio amplifiers were based on vacuum tubes (also known as valves), and some of these achieved notably
high quality (e.g., the Williamson amplifier of 1947-9). Most modern audio amplifiers are based on solid state
devices (transistors such as BJTs, FETs and MOSFETs), but there are still some who prefer tube-based amplifiers,
due to a perceived 'warmer' valve sound. Audio amplifiers based on transistors became practical with the wide
availability of inexpensive transistors in the late 1960s.
Key design parameters for audio amplifiers are frequency response, gain, noise, and distortion. These are
interdependent; increasing gain often leads to undesirable increases in noise and distortion. While negative
feedback actually reduces the gain, it also reduces distortion. Most audio amplifiers are linear amplifiers operating
in class AB.

The relationship of the input to the output of an amplifier, usually expressed as a function of the input frequency,
is called the transfer function of the amplifier, and the magnitude of the transfer function is termed the gain. In


popular use, the term usually describes an electronic amplifier, in which the input "signal" is usually a voltage or a
current. In audio applications, amplifiers drive the loudspeakers used in PA systems to make the human voice
louder or play recorded music.

Amplifiers may be classified according to the input (source) they are designed to amplify (such as a guitar amplifier,
to perform with an electric guitar), the device they are intended to drive (such as a headphone amplifier), the
frequency range of the signals (Audio, IF, RF, and VHF amplifiers, for example), whether they invert the signal
(inverting amplifiers and non-inverting amplifiers), or the type of device used in the amplification (valve or tube
amplifiers, FET amplifiers, etc.).

Typical Amplifiers

Vacuum Tube Amplifiers


Famous Mixer / Amplifier suppliers: Alesis, Allen & Heath, Soundcraft, Crest Audio, Mackie, Yamaha,
Behringer, Peavey, Phonic, Pioneer, Roland

III. Microphone
Microphone is a generic term that is used to refer to any element which transforms acoustic energy (sound) into
electrical energy (the audio signal). A microphone is therefore one type from a larger class of elements called
transducers - devices which translate energy of one form into energy of another form. The fidelity with which a
microphone generates an electrical representation of a sound depends, in part, on the method by which it
performs the energy conversion. Historically, a number of different methods have been developed for varying
purposes, and today a wide variety of microphone types may be found in everyday use.

Microphones are generally classed according to two main characteristics:


a) Type of transducer: Dynamic, Condenser, Electret Condenser, Ribbon, etc
b) Pickup pattern: Cardioid, Omnidirectional, Bi-directional / Figure-8, Supercardioid


Famous Microphone suppliers: Shure, Sennheiser, Audio Technica, AKG, Audix, Mipro, Oktava, etc

IV. Loudspeakers
Loudspeaker is a generic term used to describe a wide variety of transducers that convert electrical energy into
acoustical energy, or sound. The term also is commonly used to refer to systems of two or more transducers in
a single enclosure, with or without a crossover. For the sake of clarity, we will use the term driver to refer to
individual transducers, and loudspeaker to refer to systems: a system of one or more drivers implemented as a
free-standing functional component - that is, mounted in an enclosure with or without a crossover, or fitted with a
horn, or otherwise completed for a specific function.

In a sound reinforcement system, loudspeakers play an important role as the final link between the sound source
and the audience. Surprisingly, they are also the least understood components of that equipment chain.
To some people, loudspeakers are extraordinarily fragile and temperamental devices; to others, they are
powerful, magical things capable of acoustical miracles. Many sound reinforcement professionals, for example,
accept it as inevitable that half of the drivers in their system will burn out every night there is a show. Other
professionals have been known to claim obviously outrageous efficiency ratings or acoustical power figures for
their systems.

Terminology
The term "loudspeaker" may refer to individual transducers (known as "drivers") or to complete speaker systems
consisting of an enclosure including one or more drivers. To adequately reproduce a wide range of frequencies,
most loudspeaker systems employ more than one driver, particularly for higher sound pressure level or maximum
accuracy. Individual drivers are used to reproduce different frequency ranges. The drivers are named subwoofers


(for very low frequencies); woofers (low frequencies); mid-range speakers (middle frequencies); tweeters (high
frequencies); and sometimes supertweeters, optimized for the highest audible frequencies. The terms for different
speaker drivers differ, depending on the application. In two-way systems there is no mid-range driver, so the task
of reproducing the mid-range sounds falls upon the woofer and tweeter. Home stereos use the designation
"tweeter" for the high frequency driver, while professional concert systems may designate them as "HF" or "highs".
When multiple drivers are used in a system, a "filter network", called a crossover, separates the incoming signal
into different frequency ranges and routes them to the appropriate driver. A loudspeaker system with n separate
frequency bands is described as "n-way speakers": a two-way system will have a woofer and a tweeter; a three-
way system employs a woofer, a mid-range, and a tweeter.
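As a conceptual illustration of what a crossover does, each frequency is simply routed to the driver best able to reproduce it. The 500 Hz and 4,000 Hz crossover points below are arbitrary example values, not recommendations, and a real crossover uses filters with gradual slopes rather than hard cut-offs. A minimal Python sketch:

def driver_for(frequency_hz, low_crossover_hz=500, high_crossover_hz=4000):
    # Conceptual three-way split; crossover points are arbitrary example values.
    if frequency_hz < low_crossover_hz:
        return "woofer"
    elif frequency_hz < high_crossover_hz:
        return "mid-range"
    else:
        return "tweeter"

for f in (60, 1000, 8000):
    print(f, "Hz ->", driver_for(f))
# 60 Hz -> woofer, 1000 Hz -> mid-range, 8000 Hz -> tweeter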

The most common type of driver uses a lightweight diaphragm, or cone, connected to a rigid basket, or frame, via
a flexible suspension that constrains a coil of fine wire to move axially through a cylindrical magnetic gap. When
an electrical signal is applied to the voice coil, a magnetic field is created by the electric current in the voice coil,
making it a variable electromagnet. The coil and the driver's magnetic system interact, generating a mechanical
force that causes the coil (and thus, the attached cone) to move back and forth, thereby reproducing sound under
the control of the applied electrical signal coming from the amplifier.


Famous Loudspeaker suppliers: Peavey, Mackie, JBL, Bose, Yamaha, Wharfedale, BMB


V. Cabling

A given cable probably costs less than any other component in a sound system (unless it is a multi-channel snake,
which is pretty costly). Still, there may be hundreds of cables in a single system, so the cost can add up to a sizable
figure. Hum, crackles, lost signal due to open circuits, or failed outputs due to shorted circuits can all be caused by
a cable. If you think about it, regardless of how high the quality of your mics, mixing console, amplifiers and
loudspeakers may be, the entire system can be degraded or silenced by a bad cable. You should never try to save
money by cutting corners with cable.

A system's worth of good cable is expensive, but high price alone does not guarantee a good product. There are major
differences between similar-looking cables. Not all wire is the same, nor are all look-alike connectors made the
same way. Even if the overall diameter, wire gauge, and general construction are similar, two cables may have
significantly different electrical and physical properties such as resistance, capacitance between conductors,
inductance between conductors, overall flexibility, shielding density, durability, ability to withstand crushing or sharp
bends, tensile strength, jacket friction (or lack thereof for use in conduits), and so forth. Almost all audio cables in
a sound reinforcement system should utilize stranded conductors, yet many same-gauge wires use a different
number of strands. More strands usually yield better flexibility and less chance of metal fatigue failure or failure
after an inadvertent nick in the cable. Even the wire itself makes a difference. Pure copper is an excellent conductor,
but lacks tensile strength.

Copper/bronze inner conductors are strong yet adequately flexible. Aluminum is strong and lightweight, but has
too much resistance for practical use in audio circuits.

Connectors may be well made, with low contact resistance (and low tendency to develop resistance over time), or
perhaps not. They may be well secured to the cable, with thoroughly soldered shields and inner conductors and
good strain relief, or they may be carelessly put together.

There is also the question of which type of cable to use: single- or dual-conductor shielded type? Cable with
braided, wrapped or foil shields, or cable with no shield at all? Separate, loosely bundled cables for each channel,
or a multicore snake with many channels sharing the same outer jacket?


Famous Cable suppliers: Kadas, Belden, SommerCable etc


Microphone Techniques for Live Sound Reinforcement


Microphone techniques (the selection and placement of microphones) have a major influence on the audio quality
of a sound reinforcement system. For reinforcement of musical instruments, there are several main objectives of
microphone techniques: to maximize pick-up of suitable sound from the desired instrument, to minimize pick-up of
undesired sound from instruments or other sound sources, and to provide sufficient gain-before-feedback.
“Suitable” sound from the desired instrument may mean either the natural sound of the instrument or some
particular sound quality which is appropriate for the application. “Undesired” sound may mean the direct or ambient
sound from other nearby instruments or just stage and background noise. “Sufficient” gain-before-feedback means
that the desired instrument is reinforced at the required level without ringing or feedback in the sound system.
Obtaining the proper balance of these factors may involve a bit of give-and-take with each. In this guide, Shure
application and development engineers suggest a variety of microphone techniques for musical instruments to
achieve these objectives. In order to provide some background for these techniques it is useful to understand some
of the important characteristics of microphones, musical instruments and acoustics.

Microphone Characteristics
The most important characteristics of microphones for live sound applications are their operating principle,
frequency response and directionality. Secondary characteristics are their electrical output and actual physical
design.

Operating principle - The type of transducer inside the microphone, that is, how the microphone picks up sound
and converts it into an electrical signal. A transducer is a device that changes energy from one form into another,
in this case, acoustic energy into electrical energy. The operating principle determines some of the basic
capabilities of the microphone. The two most common types are Dynamic and Condenser.

Dynamic microphones employ a diaphragm/ voice coil/magnet assembly which forms a miniature sound-driven
electrical generator. Sound waves strike a thin plastic membrane (diaphragm) which vibrates in response.
A small coil of wire (voice coil) is attached to the rear of the diaphragm and vibrates with it. The voice coil itself
is surrounded by a magnetic field created by a small permanent magnet. It is the motion of the voice coil in this
magnetic field which generates the electrical signal corresponding to the sound picked up by a dynamic
microphone.
Dynamic microphones have relatively simple construction and are therefore economical and rugged. They can
provide excellent sound quality and good specifications in all areas of microphone performance. In particular, they
can handle extremely high sound levels: it is almost impossible to overload a dynamic microphone. In addition,
dynamic microphones are relatively unaffected by extremes of temperature or humidity. Dynamics are the type
most widely used in general sound reinforcement.


Condenser microphones are based on an electrically-charged diaphragm/backplate assembly which forms a
sound-sensitive capacitor. Here, sound waves vibrate a very thin metal or metal-coated-plastic diaphragm. The
diaphragm is mounted just in front of a rigid metal or metal-coated-ceramic backplate. In electrical terms this
assembly or element is known as a capacitor (historically called a “condenser”), which has the ability to store a
charge or voltage. When the element is charged, an electric field is created between the diaphragm and the
backplate, proportional to the spacing between them. It is the variation of this spacing, due to the motion of the
diaphragm relative to the backplate, that produces the electrical signal corresponding to the sound picked up by a
condenser microphone.

The construction of a condenser microphone must include some provision for maintaining the electrical charge or
polarizing voltage. An electret condenser microphone has a permanent charge, maintained by a special material
deposited on the backplate or on the diaphragm. Non-electret types are charged (polarized) by means of an
external power source. The majority of condenser microphones for sound reinforcement are of the electret type.
All condensers contain additional active circuitry to allow the electrical output of the element to be used with typical
microphone inputs. This requires that all condenser microphones be powered: either by batteries or by phantom
power (a method of supplying power to a microphone through the microphone cable itself).

There are two potential limitations of condenser microphones due to the additional circuitry: first, the electronics
produce a small amount of noise; second, there is a limit to the maximum signal level that the electronics can
handle. For this reason, condenser microphone specifications always include a noise figure and a maximum sound
level. Good designs, however, have very low noise levels and are also capable of very wide dynamic range.

Condenser microphones are more complex than dynamics and tend to be somewhat more costly. Also,
condensers may be adversely affected by extremes of temperature and humidity which can cause them to become
noisy or fail temporarily. However, condensers can readily be made with higher sensitivity and can provide a
smoother, more natural sound, particularly at high frequencies. Flat frequency response and extended frequency
range are much easier to obtain in a condenser. In addition, condenser microphones can be made very small
without significant loss of performance.


ADDITIONAL READING:

Ribbon microphones employ a transduction method that is similar to that of dynamics. Figure 10-3 illustrates the
construction of a typical ribbon element. A very light, thin, corrugated metal ribbon, Figure 10-3 (a), is stretched
within the air gap of a powerful magnet (b). The ribbon is clamped at the ends, but is free to move throughout its
length. When sound strikes the ribbon, the ribbon vibrates in response. As is the case with the dynamic coil element,
the moving ribbon cuts the magnetic lines of force in the air gap, and a voltage is thereby induced in the ribbon.
The voltage is very small and the ribbon impedance very low, so all ribbon microphones incorporate a built-in
transformer. The transformer serves the dual functions of boosting the signal voltage and isolating the ribbon
impedance from the load presented by the input to which the microphone is connected.

Early ribbon microphones were extremely fragile. The ribbon could be damaged simply by blowing or coughing
into the microphone! Not many microphone manufacturers now make ribbon units, but those that are available are
much more rugged than older units. All but a few modern ribbon mics remain more fragile than dynamic or
condenser units, so they are used primarily in recording (a couple of notable exceptions are used for reinforcement).

Ribbon microphones usually have excellent sonic characteristics, with great warmth and gentle high-frequency
response. They also have excellent transient response and very low self-noise. For these reasons, some ribbon
mics are prized as vocal microphones, and are also very effective with acoustic instruments.

The carbon type is among the earliest microphone elements ever developed. Figure 10-4 illustrates the
construction of a typical carbon element. A small cup, Figure 10-4 (a), is packed with pulverized carbon and
enclosed at one end by a brass disk called a button (b), which is coupled to a circular metal diaphragm (c). The
button and a back plate at the rear of the cylinder form the connection terminals. A battery (d) provides an activating
voltage across the carbon. When sound strikes the diaphragm, the carbon granules in the button vibrate, becoming
alternately more and less dense as the diaphragm moves. The electrical resistance of the carbon thereby fluctuates,
and converts the battery voltage into a corresponding fluctuating current that is an electrical representation of the
sound. The current is stepped up by a transformer (e), which also serves to isolate the low impedance of the
element from that of the input to which it is connected, and to block the battery DC from the input.

Carbon microphones are not known for excellent sonic characteristics, but they are quite inexpensive, and rugged.
For this reason, they are still widely used in utility sound applications. (The standard telephone mic element has


long been a carbon type, although dynamic mics are used in many newer phones.) Carbon microphones can lose
some efficiency and become noisy if the granules in the button become compacted,
but simply tapping the element against a hard surface usually cures the problem.

Phantom Power
Phantom power is a DC voltage (usually 12-48 volts) used to power the electronics of a condenser microphone.
For some (non-electret) condensers it may also be used to provide the polarizing voltage for the element itself.
This voltage is supplied through the microphone cable by a mixer equipped with phantom power or by some type
of in-line external source. The voltage is equal on Pin 2 and Pin 3 of a typical balanced, XLR-type connector. For
a 48 volt phantom source, for example, Pin 2 is 48 VDC and Pin 3 is 48 VDC, both with respect to Pin 1, which is
ground (shield).

Because the voltage is exactly the same on Pin 2 and Pin 3, phantom power will have no effect on balanced
dynamic microphones: no current will flow since there is no voltage difference across the output. In fact, phantom
power supplies have current limiting which will prevent damage to a dynamic microphone even if it is shorted or
miswired. In general, balanced dynamic microphones can be connected to phantom powered mixer inputs with no
problem.

Transient Response


Transient response refers to the ability of a microphone to respond to a rapidly changing sound wave. A good way
to understand why dynamic and condenser mics sound different is to understand the differences in their transient
response. In order for a microphone to convert sound energy into electrical energy, the sound wave must physically
move the diaphragm of the microphone. The amount of time it takes for this movement to occur depends on the
weight (or mass) of the diaphragm. For instance, the diaphragm and voice coil assembly of a dynamic microphone
may weigh up to 1000 times more than the diaphragm of a condenser microphone. It takes longer for the heavy
dynamic diaphragm to begin moving than for the lightweight condenser diaphragm. It also takes longer for the
dynamic diaphragm to stop moving in comparison to the condenser diaphragm. Thus, the dynamic transient
response is not as good as the condenser transient response. This is similar to two vehicles in traffic: a truck and
a sports car. They may have equal power engines but the truck weighs much more than the car. As traffic flow
changes, the sports car can accelerate and brake very quickly, while the truck accelerates and brakes very slowly
due to its greater weight. Both vehicles follow the overall traffic flow but the sports car responds better to sudden
changes.

Pictured here are two studio microphones responding to the sound impulse produced by an electric spark:
condenser mic on top, dynamic mic on bottom. It is evident that it takes almost twice as long for the dynamic
microphone to respond to the sound. It also takes longer for the dynamic to stop moving after the impulse has
passed (notice the ripple on the second half of the graph). Since condenser microphones generally have better
transient response than dynamics, they are better suited for instruments that have a very sharp attack or extended
high frequency output such as cymbals. It is this transient response difference that causes condenser mics to have
a more crisp, detailed sound and dynamic mics to have a more mellow, rounded sound.

The decision to use a condenser or dynamic microphone depends not only on the sound source and the sound
reinforcement system but on the physical setting as well. From a practical standpoint, if the microphone will be
used in a severe environment such as a rock and roll club or for outdoor sound, dynamic types would be a good
choice. In a more controlled environment such as a concert hall or theatrical setting, a condenser microphone
might be preferred for many sound sources, especially when the highest sound quality is desired.

Frequency response – The output level or sensitivity of the microphone over its operating range from lowest to
highest frequency.


Virtually all microphone manufacturers list the frequency response of their microphones over a range, for example
50 – 15,000 Hz. This usually corresponds with a graph that indicates output level relative to frequency. The graph
has frequency in Hertz (Hz) on the x-axis and relative response in decibels (dB) on the y-axis.

A microphone whose output is equal at all frequencies has a flat frequency response.

Flat response microphones typically have an extended frequency range. They reproduce a variety of sound
sources without changing or coloring the original sound.

A microphone whose response has peaks or dips in certain frequency areas exhibits a shaped response. A shaped
response is usually designed to enhance a sound source in a particular application. For instance, a microphone
may have a peak in the 2 – 8 kHz range to increase intelligibility for live vocals. This shape is called a presence
peak or rise. A microphone may also be designed to be less sensitive to certain other frequencies.
One example is reduced low frequency response (low end roll-off) to minimize unwanted “boominess” or stage
rumble.

The Decibel
The decibel (dB) is an expression often used in electrical and acoustic measurements. The decibel is a number
that represents a ratio of two values of a quantity such as voltage. It is actually a logarithmic ratio whose main
purpose is to scale a large measurement range down to a much smaller and more useable range. The form of the
decibel relationship for voltage is:

dB = 20 x log(V1/V2)
Where 20 is a constant, V1 is one voltage, V2 is the other voltage, and log is logarithm base 10.
Examples:


What is the relationship in decibels between 100 volts and 1 volt?


dB = 20 x log(100/1)
dB = 20 x log(100)
dB = 20 x 2 (the log of 100 is 2)
dB = 40
That is, 100 volts is 40dB greater than 1 volt.

What is the relationship in decibels between 0.001 volt and 1 volt?


dB = 20 x log(0.001/1)
dB = 20 x log(0.001)
dB = 20 x (-3) (the log of .001 is -3)
dB = -60
That is, 0.001 volt is 60 dB less than 1 volt.

Similarly:
If one voltage is equal to the other they are 0dB different
If one voltage is twice the other they are 6dB different
If one voltage is ten times the other they are 20dB different
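The same arithmetic can be written as a small function, which reproduces the worked examples above. A minimal Python sketch:

import math

def voltage_ratio_db(v1, v2):
    # dB = 20 x log(V1/V2)
    return 20 * math.log10(v1 / v2)

print(voltage_ratio_db(100, 1))    # 40.0  (100 volts is 40 dB greater than 1 volt)
print(voltage_ratio_db(0.001, 1))  # -60.0 (0.001 volt is 60 dB less than 1 volt)
print(voltage_ratio_db(2, 1))      # about 6  (twice the voltage is about 6 dB)
print(voltage_ratio_db(10, 1))     # 20.0  (ten times the voltage is 20 dB)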

The choice of flat or shaped response microphones again depends on the sound source, the sound system and
the environment. Flat response microphones are usually desirable to reproduce instruments such as acoustic
guitars or pianos, especially with high quality sound systems. They are also common in stereo miking and distant
pickup applications where the microphone is more than a few feet from the sound source: the absence of response
peaks minimizes feedback and contributes to a more natural sound. On the other hand, shaped response
microphones are preferred for closeup vocal use and for certain instruments such as drums and guitar amplifiers
which may benefit from response enhancements for presence or punch. They are also useful for reducing pickup
of unwanted sound and noise outside the frequency range of an instrument.

Directionality – A microphone’s sensitivity to sound relative to the direction or angle from which the sound arrives.
There are a number of different directional patterns found in microphone design. These are typically plotted in a
polar pattern to graphically display the directionality of the microphone. The polar pattern shows the variation in
sensitivity 360 degrees around the microphone, assuming that the microphone is in the center and that 0 degrees
represents the front of the microphone. The three basic directional types of microphones are
omnidirectional, unidirectional, and bidirectional. The omnidirectional microphone has equal output or sensitivity
at all angles. Its coverage angle is a full 360 degrees. An omnidirectional microphone will pick up the maximum
amount of ambient sound. In live sound situations an omni should be placed very close to the sound source to pick
up a useable balance between direct sound and ambient sound. In addition, an omni cannot be aimed away from
undesired sources such as PA speakers which may cause feedback.


The unidirectional microphone is most sensitive to sound arriving from one particular direction and is less
sensitive in other directions. The most common type is the cardioid (heart-shaped) response. This has the most
sensitivity at 0 degrees (on-axis) and is least sensitive at 180 degrees (off-axis). The effective coverage or pickup
angle of a cardioid is about 130 degrees, that is, up to about 65 degrees off-axis at the front of the microphone. In
addition, the cardioid mic picks up only about one-third as much ambient sound as an omni. Unidirectional
microphones isolate the desired on-axis sound from both unwanted off-axis sound and from ambient noise.

For example, the use of a cardioid microphone for a guitar amplifier which is near the drum set is one way to reduce
bleed-through of drums into the reinforced guitar sound. Unidirectional microphones have several variations on the
cardioid pattern. Two of these are the supercardioid and hypercardioid. Both patterns offer narrower front pickup
angles than the cardioid (115 degrees for the supercardioid and 105 degrees for the hypercardioid) and also greater
rejection of ambient sound. While the cardioid is least sensitive at the rear (180 degrees off-axis) the least sensitive
direction is at 126 degrees off-axis for the supercardioid and 110 degrees for the hypercardioid. When placed
properly they can provide more focused pickup and less ambient noise than the cardioid pattern, but they have
some pickup directly at the rear, called a rear lobe. The rejection at the rear is -12 dB for the supercardioid and
only -6 dB for the hypercardioid. A good cardioid type has at least 15-20 dB of rear rejection.


The bidirectional microphone has maximum sensitivity at both 0 degrees (front) and at 180 degrees (back). It
has the least amount of output at 90 degree angles (sides). The coverage or pickup angle is only about 90 degrees
at both the front and the rear. It has the same amount of ambient pickup as the cardioid. This mic could be used
for picking up two opposing sound sources, such as a vocal duet. Though rarely found in sound reinforcement,
bidirectional microphones are used in certain stereo techniques, such as M-S (mid-side).

Microphone Polar Patterns Compared

Using Directional Patterns to Reject Unwanted Sources


In sound reinforcement, microphones must often be located in positions where they may pick up unintended
instrument or other sounds. Some examples are: individual drum mics picking up adjacent drums, vocal mics
picking up overall stage noise, and vocal mics picking up monitor speakers. In each case there is a desired sound
source and one or more undesired sound sources. Choosing the appropriate directional pattern can help to
maximize the desired sound and minimize the undesired sound. Although the direction for maximum pickup is
usually obvious (on-axis) the direction for least pickup varies with microphone type. In particular, the cardioid is
least sensitive at the rear (180 degrees off-axis) while the supercardioid and hypercardioid types actually have
some rear pickup. They are least sensitive at 125 degrees off-axis and 110 degrees off axis respectively. For
example, when using floor monitors with vocal mics, the monitor should be aimed directly at the rear axis of a
cardioid microphone for maximum gain-before-feedback. When using a supercardioid, however, the monitor should
be positioned somewhat off to the side (55 degrees off the rear axis) for best results. Likewise, when using
supercardioid or hypercardioid types on drum kits be aware of the rear pickup of these mics and angle them
accordingly to avoid pickup of other drums or cymbals.


Other direction-related microphone characteristics:

Ambient sound rejection – Since unidirectional microphones are less sensitive to off-axis sound than
omnidirectional types they pick up less overall ambient or stage sound. Unidirectional mics should be used to
control ambient noise pickup to get a cleaner mix.

Distance factor – Because directional microphones pick up less ambient sound than omnidirectional types they
may be used at somewhat greater distances from a sound source and still achieve the same balance between the
direct sound and background or ambient sound. An omni should be placed closer to the sound source than a
uni—about half the distance—to pick up the same balance between direct sound and ambient sound.

Off-axis coloration – Change in a microphone’s frequency response that usually gets progressively more
noticeable as the arrival angle of sound increases. High frequencies tend to be lost first, often resulting in “muddy”
off-axis sound.

Proximity effect –
With unidirectional microphones, bass response increases as the mic is moved closer (within 2 feet) to the sound
source. With close-up unidirectional microphones (less than 1 foot), be aware of proximity effect and roll off the
bass until you obtain a more natural sound. You can (1) roll off low frequencies on the mixer, or (2) use a
microphone designed to minimize proximity effect, or (3) use a microphone with a bass rolloff switch, or (4) use an
omnidirectional microphone (which does not exhibit proximity effect).
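
As a rough illustration of what a bass rolloff (low-cut) does to a signal, the Python sketch below applies a simple first-order high-pass filter to a block of audio samples. The 100 Hz corner frequency and 48 kHz sample rate are arbitrary assumptions for the example; a mixer's low-cut filter or a microphone's rolloff switch may be implemented quite differently, so treat this only as a conceptual sketch.

import math

def low_cut(samples, sample_rate=48000, corner_hz=100.0):
    # First-order high-pass filter: attenuates content below corner_hz,
    # similar in spirit to engaging a bass rolloff to tame proximity effect.
    rc = 1.0 / (2 * math.pi * corner_hz)
    dt = 1.0 / sample_rate
    alpha = rc / (rc + dt)
    out = [samples[0]]
    for i in range(1, len(samples)):
        out.append(alpha * (out[-1] + samples[i] - samples[i - 1]))
    return out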


Unidirectional microphones can not only help to isolate one voice or instrument from other singers or instruments,
but can also minimize feedback, allowing higher gain. For these reasons, unidirectional microphones are preferred
over omnidirectional microphones in almost all sound reinforcement applications.

The electrical output of a microphone is usually specified by level, impedance and wiring configuration. Output
level or sensitivity is the level of the electrical signal from the microphone for a given input sound level. In general,
condenser microphones have higher sensitivity than dynamic types. For weak or distant sounds a high sensitivity
microphone is desirable while loud or close-up sounds can be picked up well by lower-sensitivity models.

The output impedance of a microphone is roughly equal to the electrical resistance of its output: 150-600 ohms for
low impedance (low-Z) and 10,000 ohms or more for high impedance (high-Z). The practical concern is that low
impedance microphones can be used with cable lengths of 1000 feet or more with no loss of quality while high
impedance types exhibit noticeable high frequency loss with cable lengths greater than about 20 feet.

Finally, the wiring configuration of a microphone may be balanced or unbalanced. A balanced output carries the
signal on two conductors (plus shield). The signals on each conductor are the same level but opposite polarity (one
signal is positive when the other is negative). A balanced microphone input amplifies only the difference between
the two signals and rejects any part of the signal which is the same in each conductor. Any electrical noise or hum
picked up by a balanced (two-conductor) cable tends to be identical in the two conductors and is therefore rejected
by the balanced input while the equal but opposite polarity original signals are amplified. On the other hand, an
unbalanced microphone output carries its signal on a single conductor (plus shield) and an unbalanced microphone
input amplifies any signal on that conductor. Such a combination will be unable to reject any electrical noise which
has been picked up by the cable. Balanced, low-impedance microphones are therefore recommended for nearly
all sound reinforcement applications.
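
The hum rejection of a balanced line can be shown with a tiny numerical sketch. The signal and hum values below are invented purely for illustration: the same noise appears on both conductors, the wanted signal is carried with opposite polarity, and taking the difference at the input recovers the signal while the noise cancels.

signal = [0.0, 0.5, 1.0, 0.5, 0.0, -0.5, -1.0, -0.5]   # wanted audio samples
hum = [0.3] * len(signal)                               # noise picked up equally by both conductors

hot = [s + n for s, n in zip(signal, hum)]    # conductor carrying +signal
cold = [-s + n for s, n in zip(signal, hum)]  # conductor carrying -signal

# A balanced input amplifies only the difference between the two conductors:
recovered = [h - c for h, c in zip(hot, cold)]   # equals 2 x signal; the hum is gone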

The physical design of a microphone is its mechanical and operational design. Types used in sound
reinforcement include: handheld, headworn, lavaliere, overhead, stand-mounted, instrument-mounted and
surface-mounted designs. Most of these are available in a choice of operating principle, frequency response,
directional pattern and electrical output. Often the physical design is the first choice made for an application.
Understanding and choosing the other characteristics can assist in producing the maximum quality microphone
signal and delivering it to the sound system with the highest fidelity.


Musical Instrument Characteristics


Some background information on characteristics of musical instruments may be helpful. Instruments and other
sound sources are characterized by their frequency output, by their directional output and by their dynamic range.

Frequency output - the span of fundamental and harmonic frequencies produced by an instrument, and the
balance or relative level of those frequencies. Musical instruments have overall frequency ranges as found in the
chart below. The dark section of each line indicates the range of fundamental frequencies and the shaded section
represents the range of the highest harmonics or overtones of the instrument. The fundamental frequency
establishes the basic pitch of a note played by an instrument while the harmonics produce the timbre or
characteristic tone.


It is this timbre that distinguishes the sound of one instrument from another. In this manner, we can tell whether a
piano or a trumpet just played that C note. The following graphs show the levels of the fundamental and harmonics
associated with a trumpet and an oboe each playing the same note.

The number of harmonics along with the relative level of the harmonics is noticeably different between these two
instruments and provides each instrument with its own unique sound. A microphone which responds evenly to the
full range of an instrument will reproduce the most natural sound from an instrument. A microphone which responds
unevenly or to less than the full range will alter the sound of the instrument, though this effect may be desirable in
some cases.

Directional output – the three-dimensional pattern of sound waves radiated by an instrument. A musical
instrument radiates a different tone quality (timbre) in every direction, and each part of the instrument produces a
different timbre. Most musical instruments are designed to sound best at a distance, typically two or more feet
away. At this distance, the sounds of the various parts of the instrument combine into a pleasing composite. In
addition, many instruments produce this balanced sound only in a particular direction. A microphone placed at
such distance and direction tends to pick up a natural or well-balanced tone quality. On the other hand, a
microphone placed close to the instrument tends to emphasize the part of the instrument that the microphone is
near. The resulting sound may not be representative of the instrument as a whole. Thus, the reinforced tonal
balance of an instrument is strongly affected by the microphone position relative to the instrument.

Unfortunately, it is difficult, if not impossible, to place a microphone at the “natural sounding” distance from an
instrument in a sound reinforcement situation without picking up other (undesired) sounds and/or acoustic feedback.
Close microphone placement is usually the only practical way to achieve sufficient isolation and gain-before-
feedback. But since the sound picked up close to a source can vary significantly with small changes in microphone
position, it is very useful to experiment with microphone location and orientation. In some cases more than one
microphone may be required to get a good sound from a large instrument such as a piano.

Dynamic range - the range of volume of an instrument from its softest to its loudest level. The dynamic range of
an instrument determines the specifications for sensitivity and maximum input capability of the intended
microphone. Loud instruments such as drums, brass and amplified guitars are handled well by dynamic
microphones which can withstand high sound levels and have moderate sensitivity. Softer instruments such as
flutes and harpsichords can benefit from the higher sensitivity of condensers. Of course, the farther the microphone
is placed from the instrument the lower the level of sound reaching the microphone. In the context of a live
performance, the relative dynamic range of each instrument determines how much sound reinforcement may be
required. If all of the instruments are fairly loud, and the venue is of moderate size with good acoustics, no
reinforcement may be necessary. On the other hand, if the performance is in a very large hall or outdoors, even
amplified instruments may need to be further reinforced. Finally, if there is a substantial difference in dynamic
range among the instruments, such as an acoustic guitar in a loud rock band, the microphone techniques (and the
sound system) must accommodate those differences. Often, the maximum volume of the overall sound system is
limited by the maximum gain-before-feedback of the softest instrument. An understanding of the frequency output,
directional output, and dynamic range characteristics of musical instruments can help significantly in choosing
suitable microphones, placing them for best pickup of the desired sound and minimizing feedback or other
undesired sounds.

Instrument Loudspeakers
Another instrument with a wide range of characteristics is the loudspeaker. Anytime you are placing microphones
to pick up the sound of a guitar or bass cabinet you are confronted with the acoustic nature of loudspeakers. Each
individual loudspeaker type is directional and displays different frequency characteristics at different angles and
distances. The sound from a loudspeaker tends to be almost omnidirectional at low frequencies but becomes very
directional at high frequencies. Thus, the sound on-axis at the center of a speaker usually has the most “bite” or
high-end, while the sound produced off-axis or at the edge of the speaker is more “mellow” or bassy.
A cabinet with multiple loudspeakers has an even more complex output, especially if it has different speakers for
bass and treble. As with most acoustic instruments the desired sound only develops at some distance from the
speaker.

Sound reinforcement situations typically require a close-mic approach. A unidirectional dynamic microphone is a
good first choice here: it can handle the high level and provide good sound and isolation. Keep in mind the proximity
effect when using a uni close to the speaker: some bass boost will be likely. If the cabinet has only one speaker a
single microphone should pick up a suitable sound with a little experimentation. If the cabinet has multiple speakers
of the same type it is typically easiest to place the microphone to pick up just one speaker. Placing the microphone
between speakers can result in strong phase effects though this may be desirable to achieve a particular tone.
However, if the cabinet is stereo or has separate bass and treble speakers multiple microphones may be required.

Placement of loudspeaker cabinets can also have a significant effect on their sound. Putting cabinets on carpets
can reduce brightness, while raising them off the floor can reduce low end. Open-back cabinets can be miked from
behind as well as from the front. The distance from the cabinet to walls or other objects can also vary the sound.
Again, experiment with the microphone(s) and placement until you have the sound that you like!

Sound Propagation


There are four basic ways in which sound can be altered by its environment as it travels or propagates: reflection,
absorption, diffraction and refraction.

1. Reflection – A sound wave can be reflected by a surface or other object if the object is physically as large
or larger than the wavelength of the sound. Because low frequency sounds have long wavelengths they
can only be reflected by large objects. Higher frequencies can be reflected by smaller objects and surfaces
as well as large. The reflected sound will have a different frequency characteristic than the direct sound
if all frequencies are not reflected equally.

Reflection is also the source of echo, reverb, and standing waves:

Echo occurs when a reflected sound is delayed long enough (by a distant reflective surface) to be heard
by the listener as a distinct repetition of the direct sound.

Reverberation consists of many reflections of a sound, maintaining the sound in a reflective space for a
time even after the direct sound has stopped.

Standing waves in a room occur for certain frequencies related to the distance between parallel walls.
The original sound and the reflected sound will begin to reinforce each other when the distance between
two opposite walls is equal to a multiple of half the wavelength of the sound.

This happens primarily at low frequencies due to their longer wavelengths and relatively high energy (a short worked example follows this list).

2. Absorption – Some materials absorb sound rather than reflect it. Again, the efficiency of absorption is
dependent on the wavelength. Thin absorbers like carpet and acoustic ceiling tiles can affect high
frequencies only, while thick absorbers such as drapes, padded furniture and specially designed bass
traps are required to attenuate low frequencies. Reverberation in a room can be controlled by adding
absorption: the more absorption the less reverberation. Clothed humans absorb mid and high frequencies
well, so the presence or absence of an audience has a significant effect on the sound in an otherwise
reverberant venue.

3. Diffraction – A sound wave will typically bend around obstacles in its path which are smaller than its
wavelength. Because a low frequency sound wave is much longer than a high frequency wave, low
frequencies will bend around objects that high frequencies cannot. The effect is that high frequencies tend
to have a higher directivity and are more easily blocked while low frequencies are essentially
omnidirectional. In sound reinforcement, it is difficult to get good directional control at low frequencies for
both microphones and loudspeakers.

4. Refraction – The bending of a sound wave as it passes through some change in the density of the
environment. This effect is primarily noticeable outdoors at large distances from loudspeakers due to
atmospheric effects such as wind or temperature gradients. The sound will appear to bend in a certain
direction due to these effects.
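
Returning to the standing waves described under Reflection above, the sketch below lists the first few axial standing-wave frequencies between a pair of parallel walls. The speed of sound (about 1,130 ft/s) and the 30 ft wall spacing are assumptions chosen only to show the arithmetic.

SPEED_OF_SOUND = 1130.0   # ft/s, approximate

def axial_modes(wall_spacing_ft, count=4):
    # Frequencies whose half-wavelength divides evenly into the wall spacing
    return [n * SPEED_OF_SOUND / (2 * wall_spacing_ft) for n in range(1, count + 1)]

print(axial_modes(30))   # roughly 19, 38, 57 and 75 Hz for walls 30 ft apart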

Direct vs. Ambient Sound

A very important property of direct sound is that it becomes weaker as it travels away from the sound source. The
amount of change is controlled by the inverse-square law which states that the level change is inversely
proportional to the square of the distance change. When the distance from a sound source doubles, the sound
level decreases by 6dB. This is a noticeable decrease. For example, if the sound from a guitar amplifier is 100 dB
SPL at 1 ft. from the cabinet it will be 94 dB at 2 ft., 88 dB at 4 ft., 82 dB at 8 ft., etc. Conversely, when the distance
is cut in half the sound level increases by 6dB: It will be 106 dB at 6 inches and 112 dB at 3 inches! On the other
hand, the ambient sound in a room is at nearly the same level throughout the room. This is because the ambient
sound has been reflected many times within the room until it is essentially nondirectional. Reverberation is an
example of nondirectional sound.
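
The guitar amplifier example above can be reproduced with a few lines of Python. The function below simply applies the inverse-square law; the 100 dB SPL reference level at 1 ft is taken from the text, while the function itself is only an illustration.

import math

def spl_at(distance_ft, ref_spl=100.0, ref_distance_ft=1.0):
    # Direct-sound level at a given distance, assuming inverse-square behaviour
    return ref_spl - 20 * math.log10(distance_ft / ref_distance_ft)

for d in (0.25, 0.5, 1, 2, 4, 8):
    print(d, spl_at(d))   # about 112, 106, 100, 94, 88 and 82 dB SPL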

For this reason the ambient sound of the room will become increasingly apparent as a microphone is placed further
away from the direct sound source. In every room, there is a distance (measured from the sound source) where
the direct sound and the reflected (or reverberant) sound become equal in intensity. In acoustics, this is known as
the Critical Distance. If a microphone is placed at the Critical Distance or farther, the sound quality picked up may
be very poor. This sound is often described as “echoey”, reverberant, or “bottom of the barrel”. The reflected sound
overlaps and blurs the direct sound.

Critical distance may be estimated by listening to a sound source at a very short distance, then moving away until
the sound level no longer decreases but seems to be constant. That distance is critical distance.

A unidirectional microphone should be positioned no farther than 50% of the Critical Distance, e.g. if the Critical
Distance is 10 feet, a unidirectional mic may be placed up to 5 feet from the sound source. Highly reverberant
rooms may require very close microphone placement. The amount of direct sound relative to ambient sound is
controlled primarily by the distance of the microphone to the sound source and to a lesser degree by the directional
pattern of the mic.

Phase relationships and interference effects

The phase of a single frequency sound wave is always described relative to the starting point of the wave or 0
degrees. The pressure change is also zero at this point. The peak of the high pressure zone is at 90 degrees, the
pressure change falls to zero again at 180 degrees, the peak of the low pressure zone is at 270 degrees, and the
pressure change rises to zero at 360 degrees for the start of the next cycle. Two identical sound waves starting at
the same point in time are called “in-phase” and will sum together creating a single wave with double the amplitude
but otherwise identical to the original waves.
Two identical sound waves with one wave’s starting point occurring at the 180 degree point of the other wave are
said to be “out of phase” and the two waves will cancel each other completely. When two sound waves of the
same single frequency but different starting points are combined the resulting wave is said to have “phase shift” or
an apparent starting point somewhere between the original starting points. This new wave will have the same
frequency as the original waves but will have increased or decreased amplitude depending on the degree of phase
difference. Phase shift, in this case, indicates that the 0 degree points of two identical waves are not the same.

Most soundwaves are not a single frequency but are made up of many frequencies. When identical multiple
frequency soundwaves combine there are three possibilities for the resulting wave: a doubling of amplitude at all
frequencies if the waves are in phase, a complete cancellation at all frequencies if the waves are 180 degrees out
of phase, or partial cancellation and partial reinforcement at various frequencies if the waves have intermediate
phase relationship. The results may be heard as interference effects.
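
The three cases can be checked numerically. The sketch below sums two equal 1 kHz tones with a chosen phase shift and reports the peak of the result; the frequency and sample rate are arbitrary assumptions for the illustration.

import math

def combined_peak(phase_deg, freq_hz=1000.0, rate=48000):
    # Sum two equal-amplitude sine waves of the same frequency, the second
    # shifted by phase_deg, and return the peak amplitude of the result.
    shift = math.radians(phase_deg)
    peak = 0.0
    for n in range(int(rate / freq_hz)):   # one full cycle of samples
        t = n / rate
        s = math.sin(2 * math.pi * freq_hz * t) + math.sin(2 * math.pi * freq_hz * t + shift)
        peak = max(peak, abs(s))
    return peak

print(combined_peak(0))     # about 2.0: in phase, amplitude doubles
print(combined_peak(180))   # about 0.0: out of phase, complete cancellation
print(combined_peak(90))    # about 1.4: intermediate phase, partial reinforcement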

The first case is the basis for the increased sensitivity of boundary or surface-mount microphones. When a
microphone element is placed very close to an acoustically reflective surface both the incident and reflected sound
waves are in phase at the microphone. This results in a 6dB increase (doubling) in sensitivity, compared to the
same microphone in free space. This occurs for reflected frequencies whose wavelength is greater than the
distance from the microphone to the surface: if the distance is less than one-quarter inch this will be the case for
frequencies up to at least 18 kHz. However, this 6dB increase will not occur for frequencies that are not reflected,
that is, frequencies that are either absorbed by the surface or that diffract around the surface. High frequencies
may be absorbed by surface materials such as carpeting or other acoustic treatments. Low frequencies will diffract
around the surface if their wavelength is much greater than the dimensions of the surface: the boundary must be
at least 5 ft. square to reflect frequencies down to 100 Hz.

The second case occurs when two closely spaced microphones are wired out of phase, that is, with reverse polarity.
This usually only happens by accident, due to miswired microphones or cables but the effect is also used as the
basis for certain noise-canceling microphones. In this technique, two identical microphones are placed very close
to each other (sometimes within the same housing) and wired with opposite polarity. Sound waves from distant
sources which arrive equally at the two microphones are effectively canceled when the outputs are mixed.

However, sound from a source which is much closer to one element than to the other will be heard. Such close-talk
microphones, which must literally have the lips of the talker touching the grille, are used in high-noise environments
such as aircraft and industrial paging but rarely with musical instruments due to their limited frequency response.

It is the last case which is most likely in musical sound reinforcement, and the audible result is a degraded
frequency response called “comb filtering.” The pattern of peaks and dips resembles the teeth of a comb and the
depth and location of these notches depend on the degree of phase shift.

With microphones this effect can occur in two ways. The first is when two (or more) mics pick up the same sound
source at different distances. Because it takes longer for the sound to arrive at the more distant microphone there
is effectively a phase difference between the signals from the mics when they are combined (electrically) in the
mixer. The resulting comb filtering depends on the sound arrival time difference between the microphones: a large
time difference (long distance) causes comb filtering to begin at low frequencies, while a small time difference
(short distance) moves the comb filtering to higher frequencies.

The second way for this effect to occur is when a single microphone picks up a direct sound and also a delayed
version of the same sound. The delay may be due to an acoustic reflection of the original sound or to multiple
sources of the original sound. A guitar cabinet with more than one speaker or multiple loudspeaker cabinets for a
single instrument would be examples. The delayed sound travels a longer distance (longer time) to the mic and
thus has a phase difference relative to the direct sound. When these sounds combine (acoustically) at the
microphone, comb filtering results. This time the effect of the comb filtering depends on the distance between the
microphone and the source of the reflection or the distance between the multiple sources.
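
The notch frequencies can be estimated from the extra distance the delayed sound travels. The sketch below assumes a speed of sound of about 1,130 ft/s and a 2 ft path difference, both purely illustrative figures; notches fall where the delay equals an odd number of half-cycles.

SPEED_OF_SOUND = 1130.0   # ft/s, approximate

def comb_notches(path_difference_ft, count=4):
    # First few frequencies cancelled when a sound combines with a delayed copy of itself
    delay = path_difference_ft / SPEED_OF_SOUND   # seconds
    return [(2 * k - 1) / (2 * delay) for k in range(1, count + 1)]

print(comb_notches(2.0))   # first notch near 283 Hz, then about 848, 1413 and 1978 Hz

A larger path difference gives a lower first notch, which matches the observation above that long time differences push comb filtering down into lower frequencies.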


The 3-to-1 Rule

When it is necessary to use multiple microphones or to use microphones near reflective surfaces the resulting
interference effects may be minimized by using the 3-to-1 rule. For multiple microphones the rule states that the
distance between microphones should be at least three times the distance from each microphone to its intended
sound source. The sound picked up by the more distant microphone is then at least 12dB less than the sound
picked up by the closer one. This ensures that the audible effects of comb filtering are reduced by at least that
much. For reflective surfaces, the microphone should be at least 1.5 times as far from that surface as it is from
its intended sound source. Again, this ensures minimum audibility of interference effects.

Strictly speaking, the 3-to-1 rule is based on the behavior of omnidirectional microphones. It can be relaxed slightly
if unidirectional microphones are used and they are aimed appropriately, but should still be regarded as a basic
rule of thumb for worst case situations.
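
A quick sketch of the arithmetic behind the rule, assuming the simplest in-line geometry (source, its own microphone and the neighbouring microphone roughly in a straight line): if each microphone is 1 ft from its own source and the microphones are 3 ft apart, the neighbouring source is about 4 ft from the "wrong" microphone, so its spill arrives roughly 12dB down.

import math

def spill_level_db(own_distance_ft=1.0, mic_spacing_ft=3.0):
    # Level of a neighbouring source at the other mic, relative to its own mic,
    # assuming the source, its mic and the other mic lie roughly in a line
    far_distance = own_distance_ft + mic_spacing_ft   # about 4 ft in this example
    return -20 * math.log10(far_distance / own_distance_ft)

print(spill_level_db())   # about -12 dB, in line with the 3-to-1 rule of thumb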


Microphone Phase Effects


One effect often heard in sound reinforcement occurs when two microphones are placed in close proximity to the
same sound source, such as a drum kit or instrument amplifier. Many times this is due to the phase relationship of
the sounds arriving at the microphones. If two microphones are picking up the same sound source from different
locations, some phase cancellation or summing may be occurring. Phase cancellation happens when two
microphones are receiving the same soundwave but with opposite pressure zones (that is, 180 degrees out of
phase). This is usually not desired. A mic with a different polar pattern may reduce the pickup of unwanted sound
and lessen the effect, or physical isolation can be used. With a drum kit, physical isolation of the individual drums
is not possible. In this situation the choice of microphones may be more dependent on the off-axis rejection
characteristic of the mic.

Another possibility is phase reversal. If there is cancellation occurring, a 180 degree phase flip will create phase
summing of the same frequencies. A common approach to the snare drum is to place one mic on the top head and
one on the bottom head. Because the mics are picking up relatively similar sound sources at different points in the
sound wave, you may experience some phase cancellations. Inverting the phase of one mic will sum any
frequencies being canceled. This may sometimes achieve a “fatter” snare drum sound. This effect will change
depending on mic locations. The phase inversion can be done with an in-line phase reverse adapter or with a phase
invert switch found on many mixer inputs.

Potential Acoustic Gain vs. Needed Acoustic Gain


The basic purpose of a sound reinforcement system is to deliver sufficient sound level to the audience so that they
can hear and enjoy the performance throughout the listening area. As mentioned earlier, the amount of
reinforcement needed depends on the loudness of the instruments or performers themselves and the size and
acoustic nature of the venue. This Needed Acoustic Gain (NAG) is the amplification factor necessary so that the
furthest listeners can hear as if they were close enough to hear the performers directly.

To calculate NAG: NAG = 20 x log (Df/Dn)

Where: Df = distance from sound source to furthest listener


Dn = distance from sound source to nearest listener
log = logarithm to base 10

Note: the sound source may be a musical instrument, a vocalist or perhaps a loudspeaker.

The equation for NAG is based on the inverse-square law, which says that the sound level decreases by 6dB each
time the distance to the source doubles. For example, the sound level (without a sound system) at the first row of
the audience (10 feet from the stage) might be a comfortable 85dB. At the last row of the audience (80 feet from
the stage) the level will be 18dB less or 67dB. In this case the sound system needs to provide 18dB of gain so that
the last row can hear at the same level as the first row. The limitation in real-world sound systems is not how loud
the system can get with a recorded sound source but rather how loud it can get with a microphone as its input. The
maximum loudness is ultimately limited by acoustic feedback.
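
The NAG example above can be checked with a few lines of Python, using the distances straight from the text.

import math

def needed_acoustic_gain(farthest_ft, nearest_ft):
    # NAG = 20 x log(Df/Dn): gain needed so the furthest listener hears
    # the same level as the nearest listener
    return 20 * math.log10(farthest_ft / nearest_ft)

print(needed_acoustic_gain(80, 10))   # about 18 dB, as in the example above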

The amount of gain-before-feedback that a sound reinforcement system can provide may be estimated
mathematically. This Potential Acoustic Gain involves the distances between sound system components, the
number of open mics, and other variables. The system will be sufficient if the calculated Potential Acoustic Gain
(PAG) is equal to or greater than the Needed Acoustic Gain (NAG). Below is an illustration showing the key
distances.


The simplified PAG equation is:

PAG = 20 x (log D1 - log D2 + log D0 - log Ds) - 10 x log(NOM) - 6


Where: PAG = Potential Acoustic Gain (in dB)
Ds = distance from sound source to microphone
D0 = distance from sound source to listener
D1 = distance from microphone to loudspeaker
D2 = distance from loudspeaker to listener
NOM = the number of open microphones
-6 = a 6 dB feedback stability margin
log = logarithm to base 10

In order to make PAG as large as possible, that is, to provide the maximum gain-before-feedback, the following
rules should be observed:
1. Place the microphone as close to the sound source as practical.
2. Keep the microphone as far away from the loudspeaker as practical.
3. Place the loudspeaker as close to the audience as practical.
4. Keep the number of microphones to a minimum.

In particular, the logarithmic relationship means that to make a 6dB change in the value of PAG the corresponding
distance must be doubled or halved. For example, if a microphone is 1 ft. from an instrument, moving it to 2 ft.
away will decrease the gain-before-feedback by 6dB while moving it to 4 ft. away will decrease it by 12dB. On the
other hand, moving it to 6 in. away increases gain-before feed back by 6dB while moving it to only 3 in. away will
increase it by 12dB. This is why the single most significant factor in maximizing gain-before-feedback is to place
the microphone as close as practical to the sound source.

The NOM term in the PAG equation reflects the fact that gain-before-feedback decreases by 3dB every time the
number of open (active) microphones doubles. For example, if a system has a PAG of 20dB with a single
microphone, adding a second microphone will decrease PAG to 17dB and adding a third and fourth mic will
decrease PAG to 14dB. This is why the number of microphones should be kept to a minimum and why unused
microphones should be turned off or attenuated. Essentially, the gain-before-feedback of a sound system can be
evaluated strictly on the relative location of sources, microphones, loudspeakers, and audience, as well as the
number of microphones, but without regard to the actual type of component. Though quite simple, the results are
very useful as a best case estimate. Understanding principles of basic acoustics can help to create an awareness
of potential influences on reinforced sound and to provide some insight into controlling them. When effects of this
sort are encountered and are undesirable, it may be possible to adjust the sound source, use a microphone with a
different directional characteristic, reposition the microphone or use fewer microphones, or possibly use acoustic
treatment to improve the situation. Keep in mind that in most cases, acoustic problems can best be solved
acoustically, not strictly by electronic devices.
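
The simplified PAG equation translates directly into a short function. The distances in the example call below are invented purely to show the calculation; substitute your own measurements and compare the result against the NAG you calculated earlier. The system is adequate when PAG is greater than or equal to NAG.

import math

def potential_acoustic_gain(ds, d0, d1, d2, open_mics=1):
    # Simplified PAG equation, including the 6 dB feedback stability margin.
    # ds: source to mic, d0: source to listener,
    # d1: mic to loudspeaker, d2: loudspeaker to listener (all in the same units)
    return (20 * (math.log10(d1) - math.log10(d2) + math.log10(d0) - math.log10(ds))
            - 10 * math.log10(open_mics) - 6)

# Hypothetical figures: mic 1 ft from the source, listener 40 ft away,
# loudspeaker 20 ft from the mic and 30 ft from the listener, 4 open mics.
print(potential_acoustic_gain(ds=1, d0=40, d1=20, d2=30, open_mics=4))   # about 16.5 dB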

General Rules
Microphone technique is largely a matter of personal taste—whatever method sounds right for the particular
instrument, musician, and song is right. There is no one ideal microphone to use on any particular instrument.
There is also no one ideal way to place a microphone. Choose and place the microphone to get the sound you
want. We recommend experimenting with a variety of microphones and positions until you create your desired
sound. However, the desired sound can often be achieved more quickly and consistently by understanding basic
microphone characteristics, sound-radiation properties of musical instruments, and acoustic fundamentals as
presented above.
Here are some suggestions to follow when miking musical instruments for sound reinforcement.

• Try to get the sound source (instrument, voice, or amplifier) to sound good acoustically (“live”) before
miking it.
• Use a microphone with a frequency response that is limited to the frequency range of the instrument, if
possible, or filter out frequencies below the lowest fundamental frequency of the instrument.
• To determine a good starting microphone position, try closing one ear with your finger. Listen to the sound
source with the other ear and move around until you find a spot that sounds good. Put the microphone
there. However, this may not be practical (or healthy) for extremely close placement near loud sources.
• The closer a microphone is to a sound source, the louder the sound source is compared to reverberation
and ambient noise. Also, the Potential Acoustic Gain is increased—that is, the system can produce
more level before feedback occurs. Each time the distance between the microphone and sound source
is halved, the sound pressure level at the microphone (and hence the system) will increase by 6 dB.
(Inverse Square Law)
• Place the microphone only as close as necessary. Too close a placement can color the sound source’s
tone quality (timbre), by picking up only one part of the instrument. Be aware of Proximity Effect with
unidirectional microphones and use bass rolloff if necessary.
• Use as few microphones as are necessary to get a good sound. To do that, you can often pick up two or
more sound sources with one microphone. Remember: every time the number of microphones doubles,
the Potential Acoustic Gain of the sound system decreases by 3 dB. This means that the volume level
of the system must be turned down for every extra mic added in order to prevent feedback. In addition,
the amount of noise picked up increases as does the likelihood of interference effects such as comb-
filtering.
• When multiple microphones are used, the distance between microphones should be at least three times
the distance from each microphone to its intended sound source. This will help eliminate phase
cancellation. For example, if two microphones are each placed one foot from their sound sources, the
distance between the microphones should be at least three feet. (3 to 1 Rule)

• To reduce feedback and pickup of unwanted sounds:


1. place microphone as close as practical to desired sound source
2. place microphone as far as practical from unwanted sound sources such as loudspeakers and other
instruments
3. aim unidirectional microphone toward desired sound source (on-axis)
4. aim unidirectional microphone away from undesired sound source (180 degrees off-axis for cardioid,
126 degrees off-axis for supercardioid)
5. use minimum number of microphones

• To reduce handling noise and stand thumps:


1. use an accessory shock mount
2. use an omnidirectional microphone
3. use a unidirectional microphone with a specially designed internal shock mount

• To reduce “pop” (explosive breath sounds occurring with the letters “p,” “b,” and “t”):
1. mic either closer or farther than 3 inches from the mouth (because the 3-inch distance is worst)
2. place the microphone out of the path of pop travel (to the side, above, or below the mouth)
3. use an omnidirectional microphone
4. use a microphone with a pop filter.
This pop filter can be a ball-type grille or an external foam windscreen

• If the sound from your loudspeakers is distorted the microphone signal may be overloading your mixer’s
input. To correct this situation, use an in-line attenuator, or use the input attenuator on your mixer to
reduce the signal level from the microphone.

Seasoned sound engineers have developed favorite microphone techniques through years of experience. If you
lack this experience, the suggestions listed on the following pages should help you find a good starting point. These
suggestions are not the only possibilities; other microphones and positions may work as well or better for your
intended application. Remember—Experiment and Listen!

Why it’s a good time to learn more about wireless microphone systems


These innovative audio products have gone through a dramatic change in the past few years. The costs
for these systems have decreased considerably and their features have become more sophisticated, more user-
friendly and far more adaptable to the widest range of needs.
Therefore, it is now possible for people who are less technical and have smaller budgets to use these
audio products to provide dramatically improved sound for the congregation as well as more control and flexibility
for the praise and worship team.
It has also become far easier for less technical users to gain the benefits of these systems without the
long learning curve once associated with wireless microphone systems and personal monitors.

Houses of worship have unique audio challenges and needs that are easily addressed by wireless
microphone systems and personal monitors. These include the configuration of the space itself, as well as the
various expectations and desires of the worship team and the worshippers.
There are two more reasons to consider upgrading your sound platform to include these technologies:
hearing conservation and vocal strain. There has been a great deal of research lately on the hearing loss of people
who are constantly exposed to sound, even if the sound is not always overly loud. There has also been more
understanding of the vocal strain caused by having to continually sing over high volume. Since worship team
members are often part of multiple services weekly, if not daily, these two reasons, alone, would merit considering
personal monitors and earphones for your services.

All in all, the benefits of including wireless microphone systems and personal monitors into your house of
worship will likely more than pay for themselves in the added richness of the overall sound for your congregation
and the increased control for those who use them.

WIRELESS MICROPHONE SYSTEMS

Descriptions /Types
Before we can get into the advantages of ‘unplugging’ your worship team or any tips and techniques for
getting as much as you can from your wireless microphone systems, it’s a good idea to get a basic understanding
of their components and operating concepts.
This first section includes a brief overview of wireless microphone systems in order to add some context
to the components or technical aspects we discuss later in this chapter.

Wireless microphone systems include three components:


1. A microphone (or an input device such as a guitar pickup),
2. A transmitter, and
3. A receiver.

1. The microphone (or pick-up) can be any of the following:


• A handheld microphone (often, this will have the transmitter built into its base)
• A headworn vocal microphone
• A lavaliere (lapel) vocal microphone
• A clip-on instrument mic
• A guitar/bass pickup (which replaces the microphone, since it provides a direct output to the transmitter
via a cable)
2. The transmitter is either built into the base of the microphone, as is the case with a wireless microphone,
or is a body pack that clips onto the belt or clothing of the user. Its function is to convert the audio signal
from the microphone to a radio signal and send this signal to the receiver.
3. The receiver is placed in a location that can easily receive the transmitted radio waves. The receiver’s
output cable is plugged into the sound system in the same place you would plug the cable from a wired
microphone.

The key difference between a wired and wireless microphone system is that the user of a wireless system is not
attached to the cable – making him or her free to roam the worship space unhindered.

The benefits to using wireless microphone systems in a House of Worship

If you think wireless microphone systems have sound and clarity issues, then you will be happy to hear that those
days are gone. As the prices have come down, the quality and features have increased. With very little effort you
should be able to find a wireless microphone system that you can afford and which provides the sound quality you
desire.
However, it is far more likely you are already using wireless microphones in your house of worship, so we will
spend most of this chapter discussing ways you can increase the value of having these systems and who might
benefit from them the most.
The initial advantages of wireless microphones in a house of worship are fairly apparent:
1. Cable-free mobility for the pastor, worship leader and worship musicians
2. Fewer cables, which provides a cleaner, less cumbersome worship space

Let’s look at these two main advantages individually.

Greater mobility – As praise bands become more elaborate and the congregation’s expectations of more
interaction increase, other musicians, such as the horn player and the guitarist, are finding that the cable on the
wired microphone is limiting their ability to bring their worship closer to – and often into – the congregation.
Additionally, the pastor might want to lend a voice to the praise band. With a wireless microphone, he or she can
simply walk across the platform and join in.

A cleaner worship space – Again, as praise bands become more elaborate, as more and more guest speakers
are added to the platform, the number of people who need to be miked increases. This results in the need for more
and more microphone cables and stands. Wireless systems eliminate the cables on the platform and allow new
presenters and musicians to join the celebration without adding yet another cable to the clutter.

For example: you want to feature a member of the choir in the song. Simply hand her the pre-set wireless
microphone and she can walk forward on the platform and add her voice to the worship without adding another
cable to the stage. Then, when her part is over, she can hand the microphone to the next featured singer or step
back and rejoin the choir.


Basic set-ups for:


1. Pastor
Any of the following:
i. A handheld microphone with a built-in transmitter
ii. A headset microphone with a bodypack transmitter
iii. A lavaliere microphone with a bodypack transmitter

Why the pastor’s best option is a headset microphone:


The closer you can position the microphone to the sound source, in this case the pastor’s mouth, the
better.
A lavaliere microphone is usually attached to the robe or lapel, which positions the microphone a few
inches away from the sound source and not in the sound’s direct path. For this reason, the sound is not as clear
and becomes softer and louder when the pastor looks from side to side or up and down.
A headset microphone allows you to position the microphone right at the pastor’s mouth or jaw line. When
the pastor looks left or right – or even swivels to look behind – the microphone stays positioned directly in front of
the mouth and the sound level remains the same.
It also enables higher gain-before-feedback. This lets you increase the pastor’s volume level – as needed
of course – with less risk of feedback. Since placing microphones as close to the sound sources as possible is the
best way to avoid feedback, a headset microphone is a better choice for this reason than a lavaliere.
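
As a rough numerical illustration of that last point (the distances are assumptions, not measurements): moving the microphone from a typical lapel position of about 8 inches to a headset position of about 2 inches buys roughly 12dB of extra gain-before-feedback, following the same 20 x log(distance ratio) relationship used in the PAG discussion earlier.

import math

def gain_change_db(old_distance_in, new_distance_in):
    # Change in gain-before-feedback when only the mic-to-mouth distance changes
    return 20 * math.log10(old_distance_in / new_distance_in)

print(gain_change_db(8, 2))   # about 12 dB in favour of the headset position
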
Many pastors might object to the headset microphone for aesthetic reasons and there is very little reason
to argue this point. But if you want to convince your pastor to go this route, you might want to try this little test:
make recordings of two rehearsals, one using a lavaliere microphone and one using a headset. When you play the
recordings back, the pastor should hear the dramatic difference in sound clarity and consistency and can then
decide just how much sound quality is being traded for aesthetics.
Also: headset microphones now come in a variety of colours and profiles. You might want to try to find
one that matches the pastor’s skin colour and is less apparent to the congregation.

2. Praise leader
Same choices as the pastor.

3. Guitar or bass player


Short instrument cable and a bodypack transmitter

A few words on wireless systems for guitar players:


In the past, wireless systems provided less than optimal sound reproduction for guitar players, especially bass
players. Current wireless systems, with their ability to faithfully reproduce the lower ranges, come far closer to
matching the sound you get from a wired version. More sophisticated models can actually provide sound that is
indistinguishable from a wired connection.
This means you have the confidence to help when the bass player asks, “Can you do something about all these
wires?”

4. horn or woodwind player


Clip-on instrument mic and a bodypack transmitter

5. Guest presenters and a spare system


Often, you will find you need another wireless system for a special guest or additional singer, for example. Since
it is hard to determine beforehand what you need or what their microphone preference might be, it’s best to get a
system that includes multiple microphone choices, such as headset, lavaliere, and a handheld mic.


But remember that each additional microphone will still need its own dedicated receiver.

6. Drummers, keyboard players, and choir members


Since not everyone on the platform will benefit from the added freedom of wireless, you should consider the
“Mobility Test” before rushing to provide each musician and singer with a system of his or her own. Our
recommendation is that anyone who is assigned a fixed position on the platform (such as drummers, keyboard
players and choir members) be provided with wired microphones. While the cost for wireless systems has
decreased and the ease-of-use has increased, there is still no reason to provide a wireless system to anyone who
will not benefit from the lack of wires.

7. Congregation participation
A handheld wireless microphone gives the pastor and the praise leader the opportunity to let one or more of the
members of the congregation add a few words … sing one or two lines of a hymn … or express an “Amen!” for all
to hear.

8. Going from the lobby to the platform


More and more pastors are greeting the congregation as they arrive. Why shouldn’t they be able to be heard by
the entire congregation while doing so? With a wireless microphone system (and remote antennas), the pastor can
be in the lobby or even outside while preaching to those already seated.
Then, as the last of the congregation arrives, he can begin the sermon as he walks into the main area, down the
aisle and onto the platform.

Outreach systems
Wireless microphone systems can also be used outside of the house of worship, either for dedications on
the grounds or for taking into the community. In situations such as these, they greatly increase mobility and crowd
participation without adding any complicated wiring.


Imagine having the sound system with loudspeakers and other sound equipment against a wall 10-20
feet away from where people might walk. Then imagine using a wireless microphone system to eliminate the cable
that connects the pastor to the speakers. Now you have optimum flexibility for your event and no wires for anyone
to trip over.

Other areas, live events, and portable churches


Wireless microphone systems are also perfect for other house of worship activities and events such as
theater productions, skits, and more. All the same advantages … none of the cables. This makes for a more
aesthetically appealing presentation, especially for holiday pageants. They are also optimum for ‘portable
churches,’ which rent space or move from location to location, since they eliminate any need to run wires and
make packing up easier and faster.
Additionally, wireless microphone systems can provide cordless sound to meeting rooms and fellowship
halls, especially where people might be asking questions of the speakers. With wireless microphones, participants
can share their experiences without having to shuffle out of their seats to where a wired microphone might be
located.
Wireless microphones are perfect when it is more convenient and less disruptive for the microphone to
go to the talker instead of the talker to the microphone.

Holiday pageants and wireless lavaliere microphones.


Through advances in wireless microphone technology, and the availability of more affordable systems,
your holiday pageants can now include the freedom of movement that was formerly only available to professional
theaters.
Bodypack transmitters are small and easy to conceal. Also, you can have many wireless systems in use
at once. All this makes wireless microphones a great way to provide exceptional audio for all the main speaking
and singing roles.
While we suggested earlier that you consider a headset microphone for your pastor, we suggest that you
use lavalieres for your theater productions. They are easy to hide in costumes and wigs. They can even be taped
right to a pair of glasses! This allows the congregation to hear each player clearly without seeing the microphones.
It also allows each person to concentrate on what is important: the production, not the microphone!

Some considerations and technical details for more effective wireless operation

Frequency Ranges
Every wireless microphone system transmits and receives sound on a specific radio frequency. These
frequencies are mainly grouped into two large bands, or ranges: VHF and UHF.
VHF means very high frequency and UHF means ultra high frequency. Each of these ranges has its
advantages and limits. To understand the “whys” of frequency limitations would require a fairly technical discussion
(see “Additional Resources” for guides on where to learn more), but for the purposes of selecting the proper
wireless system, there are some simple guidelines and useful generalities:
• Each wireless system must be on a different frequency.
• Most wireless microphones share the same frequencies used by TV stations, both VHF and UHF. Since
TV stations are much more powerful than wireless microphones – and since the SKMM / MCMC
(Suruhanjaya Komunikasi dan Multimedia Malaysia / Malaysian Communications And Multimedia
Commission) requires you to do so – you need to avoid local TV channels.
• You also have to avoid frequencies that are already used within your house of worship or those in use by
other organizations nearby.
• Most manufacturers have online tools to help you select the best range based on your model and location.
They can also help select the right frequencies when multiple systems are used.


UHF vs. VHF. What is the difference and which should I select?

First of all, while there are some differences in the radio behaviour of VHF and UHF systems, there is no
inherent difference in audio quality. The quality of the wireless system itself makes the largest difference to the
quality of the sound. And, yes, you can use both VHF and UHF systems in the same location. That being said,
there are some generalities that might help you better determine which option is best for you.

I. UHF is usually recommended if…


• You need to use more than 5 or 6 wireless systems at the same time;
• You use them in “crowded” radio environments such as cities or places where there are many other
houses of worship nearby;
• You want the flexibility to take your system to other cities or towns;
• You are able to spend a little extra to enable flexibility for future needs.

II. VHF is usually recommended if…


• You use fewer than 5 systems at the same time;
• You use them in “open” (less crowded) radio environments;
• You do not plan to take your system outside of your local area;
• Your budget is more limited.

Receiver and antenna placement


Wireless microphone systems include antennas on both the
receiver and transmitter.
Antennas range in shape, size and even quantity. Some can be
obvious, such as on bodypack transmitters, while others are located
internally, such as in many handheld transmitters. Some receivers, for
example, have two antennas (called diversity) while others only have
one (called non-diversity). Here, again, the discussion can quickly
become technical, so we have outlined a few basic principles to help
you avoid interference and increase the likelihood you will get clear
audio.
• Antennas of bodypacks should always be kept as clear as
possible from obstructive surfaces or materials. Never curl up
the antenna into a pocket, or wrap it around the bodypack.
• Remote or receiver antennas should be placed above the
congregation or other obstructions so the transmitter and the receiving antenna can ‘see’ each other. This
is called ‘line of sight.’
• Never let antennas touch one another.
• When mounting receivers onto racks:
a) Keep them a few feet or rack spaces away from CD/DAT, DSP, and digital effects units, as these
may cause interference, and
b) Make sure you have not compromised ‘line of sight,’ which usually means you should mount the
antennas in the front.

• Single antenna receivers are usually more affordable, but they are also more susceptible to loss of signal
(called dropouts).
• Diversity receivers provide superior performance in
any environment and when budget allows, are
preferable.
• Remote antennas are recommended when wireless
microphone systems are being used in more than
one location (such as when the pastor walks in from
outside, through the lobby and into the auditorium).
• For locations where a great number of wireless
microphone systems are being operated at once,
you can use an antenna distribution system. An
antenna distribution system reduces the total
number of antennas needed and can help improve
overall performance.

Power
Unlike wired microphones, all wireless microphone transmitters require batteries. As the batteries run down, the
performance of the wireless system begins to suffer. For this reason, keep these tips in mind:
• Use fresh batteries. Weak batteries can cause short range and distortion.
• Check your batteries before each service. We actually recommend using new batteries for each service.
• Alkaline batteries are recommended since they provide longer, more consistent life than rechargeable or
basic (carbon-zinc) batteries for wireless applications. While lithium batteries can last longer, the
difference in cost might not be worth the additional life.
• Rechargeable batteries are not desirable as they usually last less than three hours and are not as strong
initially as alkaline batteries. In fact, rechargeable batteries don’t typically start with the full voltage needed
for a wireless system: 7.2 volts out of the box vs. 9 volts from a fresh alkaline.

Remember that a wireless system is only as good as its ability to transmit signals from the microphone to the
receiver. The weaker the batteries, the weaker the signal.

How to select the right wireless microphone systems for your House of Worship
While the best idea is always to discuss your requirements with a sound contractor or an applications specialist at
the manufacturer before making a final decision, it's generally just a matter of asking yourself four questions:

1. Which microphone/transmitter configurations best fit our needs?


Earlier we showed the components of a wireless system and some of the set-ups that best fit the individuals
who might be using them. Count the number of users and/or rooms that might require any of the following
configurations:
• Handheld microphone (with built-in transmitter)
• Headworn microphone with bodypack
• Lavaliere microphone with bodypack
• Clip-on microphone with bodypack
• Instrument cable with bodypack

2. Where do we intend to use our wireless systems? One location? Many locations?
One location – If you intend to use your wireless microphone system(s) in one location, you only need to make
sure you select a system that operates on frequencies compatible with your location's VHF or UHF broadcast TV
channels.

Multiple locations – If you intend to use your wireless system(s) in different towns or neighbourhoods, you will likely
encounter different active TV channels. Here, you should make sure your system(s) are frequency-agile (that is,
allow you to change frequencies as you move from location to location).

3. Do we need one system or many systems?


One system – if you are operating one system in a location where no other wireless systems are in use, then you
will not have any multisystem needs to manage.

Multiple systems – If you plan to use more than one wireless system, you will need to carefully select frequencies
to make sure that each system is compatible with the others. Also, there is a limit to the number of wireless systems
that can be used in one location, which brings us to the final consideration:

4. How much do we want to spend?


The adage that you ‘get what you pay for’ holds true with wireless systems. While the prices have come down and
the features have improved, you still need to weigh your budget against your needs – especially when you are
buying multiple systems for one location.

Better wireless systems allow you to operate more units at the same time without interference and are able to
operate across larger bands of frequencies.

The key to any wireless system is the confidence you have in its ability to provide sound clarity that rivals its wired
cousins. Your need for user-friendly features to locate open frequencies, avoid dropouts, and get clear consistent
sound has not gone unnoticed by the manufacturers of these systems. More and more wireless systems are now
including increasingly sophisticated technologies, such as ‘autoscan’ and ‘Audio Reference Companding,’ to help
users get the sound and signal they want without having to worry about the technical issues. Before making any
major system purchases, you might want to spend a little time researching the latest features and comparing their
costs and benefits to your needs and budget.

WIRELESS SYSTEM OPERATION


Frequency Bands For Wireless Systems

Existing wireless microphone systems transmit and receive on a specific radio frequency, called the
operating frequency. Individual radio frequencies are found in frequency "bands" which are specific ranges of
frequencies.
Use of radio frequencies in Malaysia is regulated by the SKMM / MCMC (Suruhanjaya Komunikasi dan
Multimedia Malaysia / Malaysian Communications And Multimedia Commission). The SKMM / MCMC has
designated certain bands of frequencies and certain frequencies in those bands for use by wireless microphones,
as well as by other services. The SKMM / MCMC further determines who can operate in each band and who has
priority if more than one user is operating.

UHF vs. VHF


Like the VHF region, the UHF region contains several bands that are used for wireless microphone
systems. However, certain physical, regulatory, and economic differences between VHF and UHF regions should
be noted here. The primary physical characteristic of UHF radio waves is their much shorter wavelength (one-third
to two-thirds of a meter). The visible consequence of this is the much shorter length of antennas for UHF wireless
microphone systems. Quarter-wave antennas in the UHF range can be less than 10 cm.
There are other consequences of the shorter UHF wavelength. One is reduced efficiency of radio wave
propagation both through the air and through other non-metallic materials such as walls and human bodies. This
can result in potentially less range for a UHF signal compared to a VHF signal of the same radiated power. "Line-
of-sight" operation is more important in the UHF range. Another consequence is the increased amount of radio
wave reflections by smaller metal objects, resulting in comparatively more frequent and more severe interference
due to multi-path (dropouts). However, diversity receivers are very effective in the UHF band, and the required
antenna spacing is minimal. Finally, the signal loss in coaxial antenna cables is greater in the UHF range. Amplifiers
and/or low-loss cable may be required in UHF antenna systems.
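The antenna-length and spacing figures quoted in this section all come from the same wavelength arithmetic (wavelength = speed of light / frequency). The short sketch below, using illustrative frequencies only, reproduces those numbers: roughly 37 cm for a VHF quarter-wave antenna around 200 MHz, and under 10 cm in the upper UHF range, which is also the minimum diversity-antenna spacing discussed later.

    # Sketch of the wavelength arithmetic behind antenna length and diversity spacing.
    # The example frequencies (200 MHz VHF, 800 MHz UHF) are illustrative only.
    C = 3.0e8  # speed of light in m/s

    def quarter_wave_cm(freq_mhz):
        wavelength_m = C / (freq_mhz * 1e6)
        return 100 * wavelength_m / 4

    for f in (200, 800):
        print(f, "MHz ->", round(quarter_wave_cm(f), 1), "cm quarter-wave")
    # 200 MHz -> 37.5 cm ; 800 MHz -> 9.4 cm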
While the regulations for users and for licensing are essentially the same in the VHF and UHF bands,
regulations for the equipment allow two potential differences. For FM signals in the UHF band, greater occupied
bandwidth is allowed. This effectively permits greater FM deviation, for potentially greater audio dynamic range. In
addition, greater transmitter power is allowed (up to 250 mW). Finally, the available radio spectrum for UHF wireless
microphone system use is five times greater than for high-band VHF. This allows for a much larger number of
systems to be operated simultaneously.
In practice, the effectively greater deviation limits of UHF are not generally used because of the resulting
reduction in the number of simultaneous systems that may be operated: the corresponding increased occupied
bandwidth of each system uses up more of the available frequency range. Also, use of increased transmitter power
is rare due to the resulting severely decreased battery life and to the increased potential of mutual system
interference. Even with limited deviation and power, however, the capability for an increased number of
simultaneous systems is a significant benefit in certain applications. This is especially true since UHF systems can
generally be used in conjunction with VHF systems at the same location without mutual interference.
The primary economic difference between VHF and UHF operation is the relatively higher cost of UHF
equipment. Typically, it is more difficult and hence more expensive to design and manufacture UHF devices. In
many ways this is a consequence of the behaviour of high frequency (short wavelength) radio signals. This cost
differential applies to antennas, cables, and other accessories as well as to the basic transmitter and receiver.
Currently, though, economies of scale have reduced this premium substantially so that it is now possible to produce
basic UHF systems at prices comparable to VHF. However, advanced features and performance tend to remain
in the province of high-end UHF products.

WIRELESS SYSTEM SELECTION AND SETUP

System Selection
The proper selection of a wireless microphone system consists of several steps based on the intended application
and on the capabilities and limitations of the equipment required for that application. It should be remembered that
while wireless microphone systems cannot ultimately be as consistent and reliable as wired systems, the
performance of currently available wireless can be very good, allowing excellent results to be obtained. Following
these steps will insure selection of the best system(s) for a given application.
1. Define the application.
This definition should include the intended sound source (voice, instrument, etc.) and the intended sound
destination (sound system, recording or broadcast). It must also include a description of the physical
setting (architectural and acoustic features). Any special requirements or limitations should also be noted:
cosmetics, range, maintenance, other possible sources of RF interference, etc. Finally, the desired
performance level must be defined: radio quality, audio quality, and overall reliability.
2. Choose the microphone (or other source) type.
The application will usually determine which microphone physical design is required: a lavaliere or clip-
on type attached to clothing, or a head-worn type, both for hands-free use; a handheld type for a vocalist
or when the microphone must be passed around to different users; a connecting cable when an electric
musical instrument or other non-microphone source is used. Other microphone characteristics
(transducer type, frequency response, and directionality) are dictated by acoustic concerns. As mentioned
earlier, the microphone choice for a wireless application should be made using the same criteria as for a
wired application.
3. Choose the transmitter type.
The microphone choice will usually determine the required transmitter type (handheld, bodypack or plug-
on), again based on the application. General features to consider include: antenna style (internal or
external), control functions and location (power, muting, gain, tuning), indicators (power, battery condition),
batteries (operating life, type, accessibility), and physical description (size, shape, weight, finish, material).
For handheld and plug-on types, interchangeability of microphone elements may be an option. For
bodypack transmitters, inputs may be hardwired or detachable. Multi-use inputs are often desirable and
may be characterized by connector type, wiring scheme and electrical capability (impedance, level, bias
voltage, etc.).
4. Choose the receiver type.
The basic choice is between diversity and non-diversity. For reasons mentioned in the receiver section
above, diversity receivers are recommended for all but the most budget-conscious applications. Though
non-diversity types will work well in many situations, the insurance provided by the diversity receiver
against multipath problems is usually well worth the somewhat higher cost. Other receiver features that
should be considered are: controls (power, output level, squelch, tuning), indicators (power, RF level,
audio level, frequency), antennas (type, connectors), electrical outputs (connectors, impedance,
line/microphone/headphone level, balanced/unbalanced). In some applications battery power may be
required.
5. Determine the total number of systems to be used simultaneously.
This should take into account future additions to the system: choosing a system type that can only
accommodate a few frequencies may prove to be an eventual limitation. Of course, the total number
should include any existing wireless microphone systems with which the new equipment must work.
6. Specify the geographic location in which these systems will be used.
This information is necessary in the next step to avoid possible conflict with broadcast television
frequencies. In the case of touring applications, this may include cities inside and outside of Malaysia.
7. Coordinate frequencies for system compatibility and avoidance of known non-system sources.
Consult the manufacturer or a knowledgeable professional about frequency selection and integration of

the planned number of systems. This should be done even for single systems and must certainly be done
for any multiple system installation to avoid potential interference problems. Frequency coordination
includes the choice of operating band (VHF and/or UHF) and choice of the individual operating
frequencies (for compatibility and avoidance of other transmissions). For fixed locations choose
frequencies in unused TV channels. For touring applications, it may be necessary to carry additional
systems on alternate frequencies, though this is only practical for a small number of channels. The
preferred approach for touring is to use frequency-agile (tuneable) units to insure the required number of
systems at all venues.
8. Specify accessory equipment as needed.
This may include remote antennas (1/2 wave, 5/8 wave, directional), mounting hardware (brackets,
ground-planes), antenna splitters (passive, active), and antenna cables (portable, fixed). These choices
are dependent on operating frequencies and the individual application.

System Setup: Transmitter


Transmitter setup first requires optimizing the source – to – transmitter interface. Sources include dynamic
and condenser microphones, electronic musical instruments and general audio sources such as mixer outputs,
playback devices, etc. The output signal of each of these sources is characterized by its level, impedance and
configuration (balanced or unbalanced). For sources such as condenser microphones, some type of power
(phantom or bias) may be required.
The transmitter may be a bodypack, plug-on or handheld type and its input will also have a characteristic
level, impedance and configuration (balanced or unbalanced). It may be capable of supplying power to the source.
The interface can consist of some type of connector or it may
be hard-wired, either internally or externally. (See Figure 4-1.)
The simplest interface is the handheld transmitter.
This design should insure that the microphone element is
already optimally integrated (electrically and mechanically) with
the transmitter. The only choice involves systems that offer a
selection of microphone elements. If each is equipped for
proper interface the decision should be made based on the
performance characteristics of the microphone element for the
intended application.
The plug-on transmitter offers a range of interface
possibilities. Mechanically, the 3-pin XLR type connector is
standard but the electrical characteristics of the chosen
microphone and transmitter combination must be considered.
The input impedance of the transmitter should be higher than
the microphone output impedance. All transmitters of this type will work with typical low impedance dynamic
microphones. If the transmitter input impedance is high enough (>10,000 ohms) a high impedance microphone
may also be used. Most plug-on transmitters will work with either balanced or unbalanced microphone outputs.
Some plug-on transmitters are also capable of supplying "phantom power" to a condenser microphone.
This is only possible with a balanced transmitter input and a balanced microphone output. Even then, the
transmitter must supply at least the minimum phantom voltage required by the microphone (usually between 11
and 52 volts DC). If less than the minimum is available, the condenser microphone performance may be
compromised with less headroom or more distortion. This is not a concern with dynamic microphones (which do
not require power) or with condenser microphones powered by an internal battery.
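The compatibility rule just described can be reduced to a simple check: both the transmitter input and the microphone output must be balanced, and the transmitter's phantom voltage must meet or exceed the microphone's minimum requirement. The sketch below is illustrative only; the parameter names and example voltages are assumptions, not any particular product's specification.

    # Minimal sketch of the phantom-power compatibility check described above.
    def phantom_ok(tx_input_balanced, mic_output_balanced, tx_phantom_volts, mic_min_volts):
        return tx_input_balanced and mic_output_balanced and tx_phantom_volts >= mic_min_volts

    print(phantom_ok(True, True, 12, 11))   # True: a 12 V supply meets an 11 V minimum
    print(phantom_ok(True, True, 12, 48))   # False: a mic wanting 48 V may lose headroom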
The bodypack transmitter presents the greatest range of possible interfaces. The simplest arrangement
is the hard-wired lavaliere or headset microphone. Again, it can usually be assumed that this design already

provides the optimum interface for the components provided. If various hardwired microphone choices are offered,
the selection should be based on the intended application.
Most bodypack transmitters are equipped with an input connector to allow the use of a variety of
microphones and other input sources.
(See Figure 4-2.) Microphones and
input cables supplied by a
manufacturer with a given wireless
microphone system can be assumed
to be compatible with that system.
However, they may not be directly
compatible with wireless microphone
systems from other manufacturers. At
a minimum, a connector change is
often required. In many cases, additional circuitry or modifications to components will be necessary. A few
combinations simply will not work.
In order to determine the suitability of a particular microphone for use with a particular transmitter it is first
necessary to determine the connector type(s) involved. Connectors include eighth-inch and quarter-inch phone
jacks as well as a variety of multi-pin designs.
Next, the wiring of the microphone connector and the wiring of the transmitter connector must be
compared. Unfortunately, there is no standard input connector, and further, the wiring scheme of the same
connector may differ from one manufacturer to another. A quarter-inch input jack is usually wired unbalanced with
the audio signal at the tip and shield on the sleeve. The typical multi-pin input on a body-pack transmitter has at
least one pin for the audio signal and one pin for shield or ground. There may be other pins to provide "bias" (a
DC voltage for a condenser microphone element) or to provide an alternate input impedance. Some transmitters
have additional pins to accept audio signals at different levels or to provide a combination audio + bias for certain
condenser elements.
The electrical characteristics of the microphone and transmitter should then be compared: the output level
of the microphone should be within the acceptable input level range of the transmitter and the output impedance
of the microphone should be less than the input impedance of the transmitter. In addition, the input configuration
of most bodypack units is unbalanced. Microphones intended for use with wireless are also invariably unbalanced,
though a balanced output dynamic microphone can usually be accommodated with an adapter cable.
If the microphone has a condenser element and does not have its own power source then the transmitter
must supply the required bias voltage. Most transmitters provide about 5 VDC, suitable for a typical electret
condenser element, though some elements may require as much as 9 VDC. In this case, it is sometimes possible
to modify the transmitter to provide the higher voltage.
Many condenser elements and associated transmitters use a two-conductor-plus-shield hookup in which
the audio is carried on one conductor and the bias voltage on the other. A few condenser elements and some
transmitters use a single-conductor-plus-shield arrangement in which the audio and bias voltage are carried on the
same conductor. Interfacing a microphone of one scheme with a transmitter of the other may require modification
of one or both components.
In general, for non-standard combinations, it is best to directly contact the manufacturer of the wireless
microphone system and/or the manufacturer of the microphone to determine the compatibility of the desired
components. They can provide the relevant specifications and can usually describe any limitations or necessary
modifications.
Non-microphone sources include electronic musical instruments and possibly outputs from sound
systems and playback devices. Though none of these sources require bias or phantom power their interface
presents a much wider range of level and impedance than a typical microphone source.

Musical instruments such as electric guitars and basses can have output levels from a few millivolts
(microphone level) for instruments with passive pickups to a few volts (line level) for those with active pickups. The
transmitter must be capable of handling this dynamic range to avoid overmodulation or distortion.
Ordinary (passive) magnetic instrument pickups have a high output impedance and require a transmitter input
impedance of about 1 Megohm to insure proper frequency response. Active (powered) pickups have fairly low
output impedance and will work with almost any transmitter input impedance of 20,000 ohms or greater.
Piezoelectric pickups have very high output impedance and require a 1-5 Megohm transmitter input impedance to
avoid loss of low frequencies.
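These impedance guidelines can be summarised as a simple rule-of-thumb lookup, sketched below. The figures are just the guideline values quoted above and the source labels are my own; they are no substitute for the transmitter manufacturer's published input specifications.

    # Guideline minimum transmitter input impedance (ohms) per source type, from the text above.
    MIN_INPUT_Z = {
        "passive magnetic pickup": 1_000_000,   # about 1 Megohm for proper frequency response
        "active pickup": 20_000,                # 20,000 ohms or greater
        "piezoelectric pickup": 1_000_000,      # 1-5 Megohms to avoid losing low frequencies
    }

    def input_impedance_ok(source_type, transmitter_input_z):
        return transmitter_input_z >= MIN_INPUT_Z[source_type]

    print(input_impedance_ok("piezoelectric pickup", 500_000))   # False: expect a thin low end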
Mixers and playback devices produce line level outputs. These sources typically have low-to-medium
output impedance and may be balanced or unbalanced. They can sometimes be interfaced with a simple adapter
cable. However, these high level input sources often require additional (external or internal) attenuation to prevent
overload of the transmitter input, which is usually expecting a mic-level signal.
Once the source/transmitter interface has been optimized, control adjustment should be performed. The
only control adjustment available on most transmitters is for input level or sensitivity. It consists of a small
potentiometer and/or a switch. The control is often placed inside the battery compartment or in a recessed position
to avoid accidental maladjustment. Some bodypack designs have separate level adjustments for microphone
inputs and instrument inputs.

The control(s) should be adjusted so that the loudest sound level (or highest instrument level) in actual
use produces full modulation of the radio signal. This is usually determined by speaking or singing into the
microphone (or playing the instrument) while observing audio level indicators on the receiver. Typically, an audio
peak LED will indicate full (or nearly full) modulation. A few designs have peak indicators on the transmitters
themselves. In systems that indicate peaks at less than full modulation, this LED may light fairly often. For systems
that indicate full modulation, this should light only briefly at maximum input levels. In either case, sustained peak
indication requires reducing input sensitivity or level to avoid audible distortion.
If the transmitter is equipped with a compander system
(noise reduction) defeat switch, make sure that it is set to the
same mode as the receiver. The only situation in which this
system would be defeated is with the use of a receiver that is
not equipped with compander circuitry.
For tuneable transmitters, make sure that the
transmitter is set to the same frequency as the receiver.
The last step in transmitter setup is placement.
Placement of a handheld or plug-on system is essentially the
same as for a wired microphone of the same type. The unit may
be mounted on a stand, boom or fishpole with an appropriate
stand adapter, or it may be handheld.
Bodypack transmitter placement is dependent on the
particular application. If the input source is a microphone, such
as a lavaliere or headset, the bodypack is normally clipped to a
belt or pants waistband. It may be attached in other ways as long as the antenna is allowed to extend freely. Insure
that there is adequate access to the controls if necessary and that the connecting cable, if any, has enough length
to permit the source and the transmitter to be located as desired. When the input is a musical instrument, it is often
possible to attach the transmitter directly to the instrument or to its strap as in the case of an electric guitar.
For all types of transmitters, insure that the antenna is securely attached and positioned for maximum
efficiency. Wire antennas should be fully extended. The hand should not cover external antennas on handheld
transmitters. (See Figure 4-3.)

As much as possible, proper transmitter placement should avoid large metal objects and previously
mentioned sources of RF such as digital devices, other wireless transmitters and mobile telephones. If an individual
is using more than one wireless system at the same time, such as a wireless head-set and a wireless musical
instrument, or is wearing a wireless personal monitor receiver, the devices should be kept as far apart as practical
to minimize interaction.

System Setup: Receivers


Receiver setup involves two interfaces: antenna-to-receiver and receiver-to-sound system.
(See Figure 4-4.)

Receiver Mounting and Placement


Proper placement of receivers involves both mechanical and electrical considerations. Mechanically,
wireless receivers are usually designed to be used like other standard rackmount products. The electrical concerns
are possible RF interference and possible hum or other electrical noise induced in the audio circuits. Receivers
should be kept away from RF noise sources such as digital processors, computers and video equipment. They
should also be separated from large AC sources such as power supplies for high current or high voltage equipment
as well as lighting dimmers, fluorescent light ballasts and motors.
If wireless receivers are mounted in racks with other equipment it is best to place them with low-power
analog devices nearby and potentially troublesome devices farther away or in a separate rack. In particular, if other
wireless transmitting devices such as personal monitor transmitters or wireless intercom transmitters are used, it
is strongly recommended that they be mounted in a different rack. Antennas from these transmitters should also
be at a sufficient distance from receiver antennas. Obviously, if receivers are placed in metal racks or mounted

between other metal devices it will be necessary to make sure that antenna function is not compromised.

System Setup: Receiver Antennas

Setup of receiver antennas involves first the antenna-to-receiver interface and then antenna placement.
The simplest case is a receiver with the antenna(s) permanently attached. The antenna is typically a quarter-wave
telescoping or possibly "rubber ducky" type. Receivers with non-detachable antennas should be placed on an open
surface or shelf, in line-of-sight to the transmitter, for proper operation. They are often not suitable for rack mounting
except perhaps as a single unit at the top of a rack and then only if the antennas are mounted on the front of the
receiver or can project through the top of the rack.
A receiver with detachable antennas offers more versatility in setup. In most cases the antennas attach
to the rear of the receiver. If the receiver is to be mounted in a metal rack the antennas must be brought to the
outside of the rack. Some designs allow the antennas to be moved to the front of the receiver, while others provide
an accessory panel for antenna relocation. Again, the receiver should be mounted high enough in the rack so that
the antennas are essentially in the open.
Here are some general rules concerning setup and use of receiver antennas:
1. Maintain line-of-sight between the transmitter and receiver antennas as much as possible,
particularly for UHF systems.
Avoid metal objects, walls, and large numbers of people between the receiving antenna and its
associated transmitter. Ideally, this means that receiving antennas should be in the same room as the
transmitters and elevated above the audience or other obstructions. (See Figure 4-6.)

2. Locate the receiver antenna so that it is at a reasonable distance from the transmitter. A minimum
distance of about 5 meters is recommended to avoid potential intermodulation products in the receiver.
The maximum distance is not constant but is limited by transmitter power, intervening objects, interference,
and receiver sensitivity. Ideally, it is better to have the antenna/receiver combination closer to the
transmitter (and run a long audio cable) than to run a long antenna cable or to transmit over excessively
long distances.

3. Use the proper type of receiver antenna.


A quarterwave antenna can be used if it is mounted directly to the receiver, to an antenna distribution
device or to another panel, which acts as a ground-plane. If the antenna is to be located at a distance
from the receiver, a half-wave antenna is recommended. This type has somewhat increased sensitivity

over the quarter-wave and does not require a ground-plane. For installations requiring more distant
antenna placement or in cases of strong interfering sources it may be necessary to use a directional (Yagi
or log-periodic) antenna suitably aimed. Telescoping antennas should be extended to their proper length.

4. Select the correctly tuned receiver antenna(s).


Most antennas have a finite bandwidth making them suitable for receivers operating only within a certain
frequency band. When antenna distribution systems are used, receivers should be grouped with antennas
of the appropriate frequency band as much as possible.
• For the VHF range: if the receiver frequencies span two adjacent antenna bands, the longer
(lower frequency) antennas should be used. If the range spans all three antenna bands, one
long antenna and one short antenna should be used (no middle length antenna).
• For the UHF range: receivers should only be used with antennas of a matching range.

5. Locate diversity receiver antennas a suitable distance apart.


For diversity reception the minimum separation for significant benefit is one-quarter wavelength (about 30
cm. for VHF and about 10 cm. for UHF). The effect improves somewhat up to a separation of about one
wavelength. Diversity performance does not change substantially beyond this separation distance.
However, in some large area applications, overall coverage may be improved by further separation. In
these cases one or both antennas may be located to provide a shorter average distance to the
transmitter(s) throughout the operating area.

6. Locate receiver antennas away from any suspected sources of interference. These include other
receiver and transmitter antennas as well as sources mentioned earlier such as digital equipment, AC
power equipment, etc.

7. Mount receiver antennas away from metal objects.


Ideally, antennas should be in the open or else perpendicular to metal structures such as racks, grids,
metal studs, etc. They should be at least one-quarter wavelength from any parallel metal structure. All
antennas in a multiple system setup should be at least one-quarter wavelength apart.

8. Orient receiver antennas properly. A non-diversity receiver should generally have its antenna
vertical.
Diversity receivers can benefit from having their antennas angled 45 degrees apart. Yagi and log-periodic
types should be oriented with their transverse elements vertical.

9. Use the proper antenna cable for remotely locating receiver antennas. A minimum length of the
appropriate low-loss cable equipped with suitable connectors will give the best results. Refer to the chart
presented earlier. Because of increasing losses at higher frequencies, UHF systems may require special
cables.

10. Use an antenna distribution system when possible.


This will minimize the overall number of antennas and may reduce interference problems with multiple
receivers. For two receivers a passive splitter may be used. For three or more receivers active splitters
are strongly recommended. Verify proper antenna tuning as mentioned above. Antenna amplifiers are not
usually necessary for VHF systems but may be required for UHF systems with long cable runs.
System Setup: Batteries
Always use fresh batteries of the correct type in the transmitter and/or receiver. Most manufacturers
recommend only alkaline type batteries for proper operation. Alkaline batteries have a much higher power capacity,

more favourable discharge rate and longer storage life than other types of single-use batteries such as carbon-
zinc. Alkaline types will operate up to 10 times longer than so-called "heavy duty" non-alkaline cells. They are also
far less likely to cause corrosion problems if left in the unit. Consider bulk purchase of alkaline batteries to get the
greatest economy: they have a shelf life of at least one year.
The battery condition should be determined before system use and checked periodically during use, if
possible. Most transmitters are equipped with a battery status indicator of some kind that will at least indicate a
go/no-go or some minimum operating time. Some units have a "fuel gauge" that can allow more precise indication
of remaining battery life. A few models even have the capability of transmitting battery condition information to the
receiver for remote monitoring.
Rechargeable batteries may be used in wireless microphones with some reservations. These reservations
are dependent on the battery size and on the actual chemistry of the battery. The conventional rechargeable battery
uses a Ni-Cad (nickel-cadmium) cell or Ni-Mh (nickel-metal-hydride) cell. The voltage of an individual Ni-Cad or
Ni-Mh cell is 1.2 volts rather than the 1.5 volts of an alkaline cell. This is a 20% lower starting voltage per cell. For
systems using AA or AAA size batteries, this lower starting voltage may not be an issue because most transmitters
using these battery sizes have internal voltage regulators that can compensate. High capacity Ni-Mh single cell
(AA or AAA) batteries are available with operating times that are comparable to single cell alkaline types.
However, the standard alkaline 9-volt battery is made up of six cells in series, which yields an initial voltage
of at least 9 volts. Typical continuous operating time for a 9-volt alkaline battery in a wireless microphone is about
eight hours. The least expensive "9-volt size" rechargeable also has six cells, giving it an initial voltage of only 7.2
volts. When combined with its lower power capacity the operating time may be less than 1/20 of an alkaline, only
about 15 minutes in some units. The "better" 9-volt size rechargeable has seven cells (8.4 volts initial), but still has
significantly less power capacity than an alkaline. Operating time for these types may be as little as two hours
compared to eight hours for an alkaline 9-volt battery (See Figure 4-7).
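A rough way to compare battery options is the back-of-envelope estimate below: operating hours are roughly usable capacity (mAh) divided by the transmitter's current draw (mA). The ~60 mA draw and the capacity figures are illustrative assumptions, not measured values, and the estimate ignores the early voltage cutoff that makes low-voltage packs fall even shorter than capacity alone suggests.

    # Back-of-envelope runtime estimate: hours ~ capacity (mAh) / current draw (mA).
    # The draw and capacities below are illustrative assumptions, not measured figures.
    def runtime_hours(capacity_mah, draw_ma=60.0):
        return capacity_mah / draw_ma

    print(round(runtime_hours(500), 1))   # typical 9 V alkaline (~500 mAh): about 8 hours
    print(round(runtime_hours(150), 1))   # small 9 V-size rechargeable (~150 mAh): about 2.5 hours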

It is possible to obtain high performance 9-volt size Ni-Mh batteries that approach the power capacity of
an alkaline. These may offer up to six hours of operation.
A battery chemistry that shows potential for exceeding alkaline capacity is lithium-ion (Li-ion) or lithium-
polymer (Li-polymer). However, this chemistry is presently only found in custom battery designs such as those
used in digital cameras, laptop computers, and other high discharge rate devices. Some standard size 9-volt
versions have recently become available and may finally replace single use types.
If it is decided to use rechargeable batteries, battery management is very important. For systems in daily
service a minimum of two batteries per unit is recommended due to the charging time: one charging, and one in
use. In addition, Ni-Cad batteries must periodically be completely cycled to get maximum service life and avoid
developing a short discharge "memory effect." Generally, Ni-Mh and Li-ion types do not exhibit memory effect.
However, for maximum performance from any rechargeable battery it is necessary to use a high-quality charger
that is designed for the specific battery type. Improper charging can impair or even damage rechargeable batteries
prematurely.

Ultimately, the long-term potential savings in battery cost must be weighed against the expected operating
time, initial investment and ongoing maintenance requirements for rechargeable batteries.

Pre-Show Checkout:
1. Verify good batteries in all transmitters.
2. Turn on all receivers (without transmitters) and all antenna distribution equipment. All receivers should
show little or no RF activity.
3. Turn on individual transmitters one at a time to verify activation of proper receiver. Transmitters should all
be at a comparable distance (at least 5 meters) from receiving antennas. Off-channel receivers should
show little or no RF activity.
4. Turn on all transmitters (with receivers) to verify activation of all receivers. Transmitters should all be at a
comparable distance (at least 5 meters) from receiving antennas and at least 1 meter from each other.
5. Perform a stationary listening test with each individual system one at a time to verify proper audio level
settings.
6. Perform a listening test around the performance area with each individual system one at a time to verify
no dropouts.
7. Perform a listening test around the performance area with each individual system while all systems are
on to verify no audible interference or dropouts.

It should be noted in Step 3 that certain combinations of active transmitters and receivers might indicate
pickup of an individual transmitter by more than one receiver. However, in Step 4, when all transmitters
are active, each should be picked up by just its intended receiver. Unless there is audible interference when all
transmitters are on, this should not pose a problem, since a receiver should not normally be turned up when its own
transmitter is not active. Once the wireless microphone systems have passed this checkout there are a few
recommendations to achieve successful operation during the performance:

Show Operation:
1. Again, verify good batteries in all transmitters.
2. Receivers should be muted until transmitters are on.
3. Do not activate unneeded transmitters or their associated receivers.
4. Once the system is on, use the "mute" or "microphone" switch to turn off the audio if necessary, not the power
switch. (This is not a concern for tone-key squelch systems.)
5. Do not bring up the sound system audio level for any receiver that does not have an active transmitter.
6. Maintain line-of-sight from transmitter antennas to receiver antennas.
7. Maintain transmitter-to-receiver antenna distance of at least 5 meters.
8. Maintain transmitter-to-transmitter distance of at least 1 meter if possible.
9. Operate transmitters in the same general performance area.
10. At the end of the event mute receiver outputs before turning off transmitters.

Troubleshooting Wireless Microphone Systems


Even when wireless microphone systems appear to be properly selected and set up, problems may arise
in actual use. While it is not practical here to offer comprehensive solutions for all possible situations some general
guidelines are suggested.

Though problems with wireless microphone systems eventually show up as audible effects, these effects
can be symptoms of audio and/or radio problems. The object of troubleshooting in either situation is first to identify
the source of the problem and second to reduce or eliminate the problem.
The following abbreviations are used in these charts:
AF-audio frequency, RF-radio frequency, RFI-radio frequency interference, TX-transmitter, RCV-receiver
A common symptom in multiple system operation is apparent activation of two receivers by a single
transmitter. This can be due to one of several causes: operating frequencies the same or too close, crystal
harmonics, transmitter at the image frequency of the second receiver, IM with an unknown source, etc. If activating
the second transmitter results in proper operation of both systems this effect can usually be ignored.
Recommended operating procedure is to turn up a receiver only when its transmitter is active. If it is desired to
allow open receivers without transmitters, readjusting the squelch settings may suffice. Otherwise the operating
frequencies may have to be changed.

Troubleshooting Guide
Conditions: TX on, RCV on, single system

Symptom | TX – RCV Distance | Possible Cause | Action
No AF signal and no RF signal | Any | Low TX battery voltage | Replace battery
No AF signal and no RF signal | Any | TX and RCV tuned to different frequencies | Retune one or both units
No AF signal and no RF signal | Average | Multipath dropout | Use diversity RCV, or reposition TX and/or RCV
No AF signal and no RF signal | Long | Out of range | Move TX closer to RCV
No AF signal but normal RF signal | Any | TX muted | Un-mute TX
No AF signal but normal RF signal | Any | Microphone or other input source | Check input source
Distortion with no AF peak indication | Any | Low TX battery voltage | Replace battery
Distortion with AF peak indication | Any | Excessive TX input level | Decrease source level or TX input level
Distortion with AF peak indication in subsequent equipment | Any | Excessive RCV output level | Decrease RCV output level
Noise with low AF signal and normal RF signal | Any | Insufficient TX input level | Increase source level or TX input level
Noise with low AF signal and normal RF signal | Any | Strong RFI | Identify source and eliminate, or change frequency of wireless microphone system
Noise with normal AF signal and low RF signal | Average | Moderate RFI | Increase squelch setting until RCV mutes
Noise with normal AF and RF signals | Any | Very strong RFI | Identify source and eliminate, or change frequency of wireless microphone system
Intermittent AF signal and low RF signal | Long | Out of range | Move TX closer to RCV
Intermittent AF signal and low RF signal | Long | Insufficient antenna gain | Use higher gain antenna
Intermittent AF signal and low RF signal | Long | Excessive antenna cable loss | Use low loss cable and/or less cable
Intermittent AF and RF signals | Average | Multipath interference | Use diversity RCV, or reposition TX and/or RCV
Intermittent AF and RF signals | Average | Obstructions in signal path | Remove obstructions, or reposition TX and/or RCV
Intermittent AF and RF signals | Average | Squelch set too high | Decrease squelch setting
Intermittent AF and RF signals | Average | Very strong RFI | Identify source and eliminate, or change frequency of wireless microphone system

When multiple systems are in use, some additional problems can occur due to interaction between the systems.
Turning individual systems on and off and trying systems in different combinations can help to pinpoint the cause.
However, this can become much more difficult as the number of systems increases.
Following are some multiple system troubleshooting suggestions for symptoms observed when all systems are
active.

Conditions: TX on, RCV on, multiple systems

Symptom | TX – RCV Distance | Possible Cause | Action
Distortion on two (or more) systems with no AF peak indication | Any | Units on same frequency | Change frequencies
Distortion on two (or more) systems with no AF peak indication | TX – TX short | TX + TX intermodulation | Increase TX-to-TX distance and/or change frequencies
Distortion on two (or more) systems with no AF peak indication | TX – RCV short | TX + TX + RCV intermodulation | Increase TX-to-RCV distance and/or change frequencies
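The "TX + TX intermodulation" entries in the table above refer to third-order products such as 2A − B falling on a third system's operating frequency. A first-pass compatibility check, sketched below for a hypothetical set of frequencies, simply tests every such product against the other operating frequencies with a small guard band. Real coordination also considers A + B − C products, receiver images and local TV channels, so the manufacturer's coordination tools remain the better option.

    # First-pass third-order intermod check: flag any 2A - B product that lands
    # within guard_mhz of another system's operating frequency.  Sketch only.
    from itertools import permutations

    def im3_conflicts(freqs_mhz, guard_mhz=0.3):
        hits = []
        for a, b in permutations(freqs_mhz, 2):
            product = 2 * a - b
            for f in freqs_mhz:
                if f not in (a, b) and abs(product - f) < guard_mhz:
                    hits.append((a, b, f, round(product, 3)))
        return hits

    # Hypothetical, evenly spaced set: 2 * 600.8 - 600.0 = 601.6 MHz lands exactly
    # on the third system, so this trio is flagged as incompatible.
    print(im3_conflicts([600.0, 600.8, 601.6]))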

APPLICATION NOTES

Following are some suggestions on wireless microphone system selection and use for some specific applications.
Each section gives typical choices and setup for microphones, transmitters and receivers as well as a few operating
tips.

Presenters
The most common wireless choice for presenters has been the
lavaliere/bodypack system, which allows hands-free use for a single speaking
voice. However, the traditional lavaliere microphone is often being replaced by
a small headworn microphone because of its much better acoustic
performance. In either case, the microphone is connected to a bodypack
transmitter and the combination is worn by the presenter. The receiver is
located in a fixed position.
The bodypack transmitter is generally worn at the waistband or belt.
It should be located so that the antenna can be freely extended and so that the
controls can be reached easily. Transmitter gain should be adjusted to provide
suitable level for the particular presenter.
The receiver should be located so that its antennas are line of sight
to the transmitter and at a suitable distance, preferably at least 5 meters. Once
the receiver is connected to the sound system the output level and squelch should be adjusted according to the
previous recommendations.
The most important factor in achieving good sound quality and adequate gain-before-feedback with a
lavaliere system is microphone choice and placement. A high quality microphone placed as close as practical to
the wearer’s mouth is the best starting point. An omnidirectional lavaliere microphone should be attached to the
presenter on a tie, lapel or other location within 8-10 inches of the mouth for best pickup.
The headworn microphone has a significant advantage because of its much closer placement to the
mouth. Compared to a lavaliere microphone at 8 inches from the mouth, a headworn type placed within one inch
of the mouth will have 18 dB better gain-before-feedback. In addition, because the microphone is always at the
same distance from the mouth, there is no volume change or tonal change as the presenter’s head moves in any
direction.
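The 18 dB figure is simply inverse-distance arithmetic: each halving of the microphone-to-mouth distance picks up about 6 dB more level, so the advantage of the closer microphone is 20·log10(far distance / near distance). The sketch below uses the distances quoted above.

    # Relative gain-before-feedback advantage of a closer microphone (sketch).
    from math import log10

    def gain_advantage_db(far_distance, near_distance):
        return 20 * log10(far_distance / near_distance)

    print(round(gain_advantage_db(8, 1), 1))   # lavaliere at 8 in vs headworn at 1 in: ~18.1 dB

Subtracting the 6-8 dB that a unidirectional pattern typically adds leaves roughly 10-12 dB, consistent with the advantage quoted a little further on for an omnidirectional headworn type over a unidirectional lavaliere.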
In situations of limited gain-before-feedback or high ambient noise levels a unidirectional microphone may
be used. This type should be located like the omnidirectional type but it must also be aimed at the presenter’s
mouth. The user should be aware that unidirectional types are much more sensitive to wind noise and breath blasts
(k’s, t’s, d’s, etc.) as well as noise from clothing rubbing against the microphone or cable. Unidirectional
microphones should always be used with a windscreen and mounted in a way to reduce direct mechanical contact
with clothing or jewelry. Again, the headworn type has an advantage because the microphone itself is not in contact
with clothing or other articles.
Finally, it should be noted that the unidirectional gain-before-feedback improvement is typically only 6-8 dB.
Thus, an omnidirectional headworn microphone will still have at least an 11-12 dB advantage in gain-before-feedback
over a unidirectional lavaliere type. This is sufficient to allow the use of omnidirectional headworn microphones
in all but the most severe feedback environments. A unidirectional headworn microphone can perform nearly
identically to a unidirectional handheld type and substantially better than any lavaliere type in this case.

Musical Instruments

The most appropriate choice for an instrument wireless
application is a bodypack system, which will accept the audio signal
from various instrument sources. The receiver can be a diversity
design for highest performance or non-diversity for economy
applications and is located in a fixed position.
The transmitter can often be attached to the instrument itself
or to the instrument strap. In any case it should be located to avoid
interfering with the player but with its controls accessible. Instrument
sources include electric guitars and basses as well as acoustic
instruments such as saxophones and trumpets. Electric sources can
usually connect directly to a transmitter while acoustic sources
require a microphone or other transducer.
Receivers for instrument systems are connected to an
instrument amplifier for electric guitars and basses or to a mixer
input for acoustic instruments, which are not otherwise
amplified. Be aware of the potential for interference from digital
effects processors in the vicinity of the amplifier or at the mixer
position. Connections should be well-shielded and secure.
Again the usual distance and line-of-sight considerations apply.
The most important factor in the performance of an
instrument system is the interface between the instrument and
the transmitter. The signals from electric instruments fitted with
magnetic pickups are generally comparable to microphone signals, though the levels and impedances may be
somewhat higher. Other transducers such as piezo-electric types have output signals that also are similar to
microphone signals but again may have higher levels and substantially higher impedances. With any of these
sources care should be taken to insure that there is compatibility with the transmitter input in regard to level,
impedance and connector type.
Occasionally it is found that certain wireless microphone systems do not initially work well with certain
instruments. Symptoms may include poor frequency response, distortion or noise. In most cases this can be traced
to an impedance or level mismatch between the two. Frequency response changes are most often due to
impedance problems. Make sure that the transmitter has sufficiently high input impedance. Distortion is usually
due to excessive input level to the transmitter. Instruments with active circuitry (battery powered preamps) often
have very high output levels which may need to be attenuated for some transmitters. They may also suffer from
RFI caused by the wireless microphone system. This may be reduced by the addition of RF filters in the instrument.
A common type of noise that is heard in wireless microphone systems is often called modulation noise.
This is a low-level hiss, which accompanies the actual instrument sound. Though it is usually masked by the
instrument sound certain factors may make it more pronounced. These include low audio signal levels, low RF
signal levels and high RF noise levels. Modulation noise can be most noticeable when the wireless microphone
system is connected to a high gain instrument amplifier with boosted high frequencies and distortion circuits
engaged. The apparent level of modulation noise can be reduced by setting the transmitter gain as high as possible
(without causing distortion), maintaining adequate RF signal level and avoiding sources of RF noise.
Some electric guitars and basses used with wireless microphone systems may also exhibit intermittent
noise when their control pots are moved to or from the endpoints of their rotation (full-on or full-off). This is due to
metal-to-metal contact, which occurs at these points in certain potentiometer designs. A different type of pot may
need to be substituted.
Microphones for acoustic instruments may be omni or unidirectional and are usually condenser types.
Microphone selection and placement for acoustic instruments is a subjective process that may involve a certain
amount of trial and error. See the references in the bibliography for suggestions.

It is advised to consult the manufacturer of the wireless equipment and/or the manufacturer(s) of the
instruments, microphones and transducers if problems persist. They may have details of suggested modifications
for one or both units. One wireless benefit of interest to guitar players is the elimination of the potential shock
hazard created between a wired electric guitar and a wired microphone. Once the hardwire connection between
either the guitar and amplifier or between the microphone and the PA system is removed, the polarity of the guitar
amp is of no consequence.

Vocalists
The usual choice for vocalists is a handheld wireless
microphone system for close pickup of the singing voice. It consists of a
suitable vocal microphone element attached to a handheld transmitter
used with a fixed receiver.
The microphone/transmitter may be handheld or mounted on a
microphone stand. Microphone technique is essentially the same as for
a wired microphone: close placement gives the most gain-before-
feedback, the least ambient noise pickup and the most proximity effect.
An accessory pop filter may be used if wind or breath blast is a problem.
If the transmitter is equipped with an external antenna avoid placing the
hand around it. If the transmitter has externally accessible controls it may
be useful to conceal them with a sleeve or tape to avoid accidental switching during a performance. Some
transmitters can be set to lock out the controls. Battery condition should be checked prior to this if the indicator will
be covered. Transmitter gain should be adjusted for the particular vocalist at performance levels.
A popular option for vocalists who require hands-free operation is the unidirectional headworn microphone.
It can have gain-before-feedback performance equivalent to a handheld and similar sound quality as well. The only
operational difference is that the vocalist cannot “work” the microphone by changing its distance from the mouth.
Thus, vocal dynamics need to be adjusted with the singer’s vocal technique rather than by microphone technique.
The receiver should be located at a suitable distance and in line of sight to the transmitter. Since this is
often at the mixer position, check for possible interference from nearby digital signal processors. Again antenna
and audio connections should be well-shielded and secure. The primary consideration for sound quality in a hand-
held wireless microphone system is the microphone element and its proper integration with the transmitter. The
choice of element for a wireless microphone system would be made according to the same criteria as for a wired
microphone. Ideally the wireless version of a microphone will sound identical to the wired version. Ultimately this
is up to the manufacturer of the wireless microphone system. For this reason it is highly recommended to compare
the performance of the proposed wireless microphone system to its wired counterpart to make sure that any
differences in sound quality or directionality are minimal.

Theatre

Theatrical applications also generally call for lavaliere/bodypack
wireless microphone systems. The microphone and transmitter are worn
by the performer while the receiver is in a fixed location. Theatre
combines aspects of presenter, vocalist, and aerobic/dance applications
with additional unique requirements.
In current theatre practice the lavaliere microphone is often
concealed somewhere on the head of the performer: just in front of the
ear, on the forehead, in the hair or beard, etc. In some cases it is
concealed in some part of the costume such as a hat or high collar. The
intent is always to get the microphone as close to the performer’s mouth
as possible without being visible. The close placement maximizes gain-
before feedback and minimizes noise and acoustic interference.
Miniature omnidirectional types are used almost exclusively, but they must be of high quality for both speech and
singing. Avoid obstructing the ports on microphones with makeup or adhesives.
Headworn microphones have become much more common in theatrical applications, particularly for
high-sound-level musical theatre. Again, the benefits of very high gain-before-feedback, high signal-to-noise ratio,
and consistent microphone-to-mouth distance make the headworn type an excellent choice in this setting.
Transmitters are also concealed in or under costumes and are often subject to an even more severe
environment than the aerobic/dance situation. Special packs and bindings are available to attach the transmitter
to various parts of the body. Latex covers are sometimes used to protect transmitters from sweat. Routing
microphone cables and antennas and still allowing quick costume changes presents a serious challenge. Normal
wear and tear on cables and connectors will take a rapid toll on anything but the most reliable microphones and
transmitters.
Receivers for theatrical applications are not unique but they must be of high quality to allow multiple
system use without interference. It is not unusual to use as many as 30 simultaneous wireless microphone systems
in a professional musical theatre production. This number can only be handled with systems operating in the UHF
range. 10 to 12 systems is the practical limit at VHF frequencies. In addition, separate antennas and antenna
distribution systems are necessary for any installation involving a large number of systems.
Though small-scale theatre applications can be done with a moderate investment in planning and
equipment, large-scale productions usually require professional coordination of wireless microphone systems to
achieve successful results. This becomes an absolute necessity for a touring production.

Worship
Worship services may include presenter, vocalist and instrument
applications. While wireless vocal and instrument use is essentially the
same as outlined in the preceding sections, the presenter function may be
somewhat different. Microphone, transmitter and receiver selection are as
before but placement of the components may require extra consideration.
In particular, proper location of the lavaliere microphone and/or
transmitter may pose problems because of robes or vestments. It is still
necessary to position the microphone as close as practical to the user’s
mouth for best results. Different methods of attachment may be necessary.
Access to transmitter controls can also be problematic. Use of accessory
microphone mute switches similar to those worn by sports referees can be the answer. Though an omnidirectional
type microphone is easier to use, a unidirectional model may be chosen to allow more gain-before-feedback. In
this case pop sensitivity and mechanical noise should be taken into account. Again it is very important to adjust
the transmitter level for the individual’s voice under actual conditions.


Note that headworn microphones are becoming more acceptable for worship applications. They provide
the highest gain-before-feedback in a hands-free type.
Because most worship services involve both wired lectern microphones and wireless lavaliere
microphones it often happens that the person wearing the wireless is also speaking at the lectern. If the voice is
picked up by both microphones an acoustic phenomenon known as "comb filtering" occurs which creates a hollow,
unnatural sound.
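As a rough illustration of why this happens, the short Python sketch below (not part of the original material; the 0.6 m path difference is a made-up figure) sums a signal with a slightly delayed copy of itself and prints the resulting level at a few frequencies. Deep notches appear near odd multiples of 1/(2 × delay).

import numpy as np

# Rough illustration (values are made up): the same voice reaches the mixer
# through two microphones, the farther one slightly delayed. Summing the two
# produces "comb filter" notches at regular frequency intervals.
SPEED_OF_SOUND = 343.0                      # metres per second, approximate
extra_distance = 0.6                        # hypothetical extra path to the second mic, metres
delay = extra_distance / SPEED_OF_SOUND     # about 1.75 ms

# Level of the combined pickup, |1 + e^(-j*2*pi*f*delay)|, at a few frequencies
freqs = np.array([100.0, 285.0, 570.0, 855.0, 2000.0])
response = np.abs(1 + np.exp(-2j * np.pi * freqs * delay))

for f, r in zip(freqs, response):
    print(f"{f:6.0f} Hz : {20 * np.log10(max(r, 1e-6)):+6.1f} dB relative to a single microphone")

print(f"deep notches fall near odd multiples of {1 / (2 * delay):.0f} Hz")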
The solution is to turn down one of the two microphones whenever they are within one or two feet of each
other. In most cases it will be less noticeable to turn down the lectern microphone when the wireless wearer
approaches it. Proper frequency selection is necessary for any worship application. Since a fixed location is the
norm, unused TV channel frequencies are the best recommendation, not "traveling" frequencies. The simultaneous
use of other wireless microphone systems by vocalists and musicians during the service must be considered as
well. In addition, wireless microphone systems at other churches or facilities within 1000 feet of the site should be
included in any program for frequency coordination.
Finally, receivers should be located and adjusted according to the suggestions made earlier. Even with
proper squelch settings, though, it is very strongly recommended to turn off or turn down the outputs of any
receivers that do not have an active transmitter. This will avoid noise from random RF interference being heard in
the sound system.

Making the Most Of Your Mixer


1 A Plethora Of Connectors—What Goes Where?

Questions you’re likely to encounter when setting up a system for the first time might include “Why all these different
types of connectors on the back of my mixer?” and “What’s the difference?”

Let’s start by taking a look at the most common connector types.

• The Venerable RCA Pin Jack


This is the “consumer connector,” and the one that has been most
commonly used on home audio gear for many years. Also known as
“phono” jacks (short for “phonograph”), but the term isn’t used much these
days—besides, it’s too easily confusable with “phone” jacks, below. RCA
pin jacks are always unbalanced, and generally carry a line-level signal
at –10 dB, nominal. You’re most likely to use this type of connector when
connecting a CD player or other home audio type source to your mixer,
or when connecting the output of your mixer to a cassette recorder or
similar gear.

• The Phone Jack AKA TRS


The name “phone jack” arose simply because this configuration was first
used in telephone switchboards. Phone jacks can be tricky because you
can’t always tell what type of signal they’re designed to handle just by
looking at them. It could be unbalanced mono, unbalanced stereo,
balanced mono, or an insert patch point. The connector’s label will
usually tell you what type of signal it handles, as will the owner’s manual
(you do keep your manuals in a safe place, don’t you?). A phone jack
that is set up to handle balanced signals is also often referred to as a
“TRS” phone jack. “TRS” stands for Tip-Ring-Sleeve, which describes
the configuration of the phone plug used.

• The XLR
This type of connector is generally referred to as “XLR-type,” and almost
always carries a balanced signal. If the corresponding circuitry is
designed properly, however, XLR-type connectors will also handle
unbalanced signals with no problem. Microphone cables usually have
this type of connector, as do the inputs and outputs of most professional
audio gear.

1.1 Balanced, Unbalanced — What’s the Difference?


In a word: “noise.” The whole point of balanced lines is noise rejection, and it’s something they’re very good at.
Any length of wire will act as an antenna to pick up the random electromagnetic radiation we’re constantly
surrounded by: radio and TV signals as well as spurious electromagnetic noise generated by power lines, motors,
electric appliances, computer monitors, and a variety of other sources. The longer the wire, the more noise it is
likely to pick up. That’s why balanced lines are the best choice for long cable runs. If your “studio” is basically
confined to your desktop and all connections are no more than a meter or two in length, then unbalanced lines are
fine—unless you’re surrounded by extremely high levels of electromagnetic noise. Another place balanced lines
are almost always used is in microphone cables. The reason for this is that the output signal from most
microphones is very small, so even a tiny amount of noise will be relatively large, and will be amplified to an
alarming degree in the mixer’s high-gain head amplifier.

To summarize:
Microphones: Use balanced lines.
Short line-level runs: Unbalanced lines are fine if you’re in a relatively noise-free environment.
Long line-level runs: The ambient electromagnetic noise level will be the ultimate deciding factor, but balanced is
best.
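For readers who like to see the arithmetic, here is a minimal, idealized Python sketch (an illustration only, not from any manual) of why a balanced input rejects noise that is induced equally on both conductors:

import numpy as np

# Idealized sketch: a balanced line carries the signal as +signal on the "hot"
# conductor (XLR pin 2) and -signal on the "cold" conductor (pin 3). Noise
# induced along the cable appears (nearly) equally on both. The balanced input
# subtracts the two conductors, so the noise cancels and the signal doubles.
rng = np.random.default_rng(0)
t = np.linspace(0, 0.01, 480)                 # 10 ms of a hypothetical test tone
signal = 0.01 * np.sin(2 * np.pi * 1000 * t)  # a small 1 kHz "microphone" signal
noise = 0.05 * rng.standard_normal(t.size)    # hum/RF picked up along the cable

hot = signal + noise         # what arrives on the hot conductor
cold = -signal + noise       # what arrives on the cold conductor
unbalanced = signal + noise  # single conductor referenced to ground

balanced_out = hot - cold    # differential input: noise cancels, signal doubles

print("peak noise in unbalanced feed :", np.max(np.abs(unbalanced - signal)))
print("peak noise after differencing :", np.max(np.abs(balanced_out - 2 * signal)))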

1.2 Signal Levels—Decibel Do’s and Don’ts

From the moment you start dealing with things audio, you’ll have to deal with the term “decibel” and its abbreviation,
“dB”. Things can get confusing because decibels are a very versatile unit of measure used to describe acoustic
sound pressure levels as well as electronic signal levels. To make matters worse there are a number of variations:
dBu, dBV, dBm. Fortunately, you don’t need to be an expert to make things work. Here are a few basics you should
keep in mind:

• “Consumer” gear (such as home audio equipment) usually has line inputs and outputs with a nominal
(average) level of –10 dB.

• Professional audio gear usually has line inputs and outputs with a nominal level of +4 dB.

• You should always feed –10 dB inputs with a –10 dB signal. If you feed a +4 dB signal into a –10 dB input
you are likely to overload the input.

• You should always feed +4 dB inputs with a +4 dB signal. A –10 dB signal is too small for a +4 dB input,
and will result in less-than-optimum performance.

• Many professional and semi-professional devices have level switches on the inputs and/or outputs that
let you select –10 or +4 dB. Be sure to set these switches to match the level of the connected equipment.

• Inputs that feature a “Gain” control—such as the mono-channel inputs on your Yamaha mixer—will accept
a very wide range of input levels because the control can be used to match the input’s sensitivity to the
signal. More on this later.
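If you want to see what those nominal figures mean in volts, the short Python sketch below does the arithmetic. One detail the text leaves out: the consumer –10 figure is conventionally dBV (referenced to 1 V RMS) and the professional +4 figure is conventionally dBu (referenced to 0.775 V RMS), which is why the gap between them works out to roughly 11.8 dB rather than 14 dB.

import math

# Sketch of the arithmetic behind the -10/+4 convention. The text writes "dB"
# for both, but conventionally consumer -10 is dBV (reference 1 volt RMS) and
# professional +4 is dBu (reference 0.775 volt RMS).
def dbv_to_volts(level_dbv: float) -> float:
    return 1.0 * 10 ** (level_dbv / 20)

def dbu_to_volts(level_dbu: float) -> float:
    return 0.775 * 10 ** (level_dbu / 20)

consumer = dbv_to_volts(-10)   # about 0.316 V
pro = dbu_to_volts(+4)         # about 1.228 V

print(f"-10 dBV nominal ~ {consumer:.3f} V")
print(f"+4 dBu nominal  ~ {pro:.3f} V")
print(f"difference      ~ {20 * math.log10(pro / consumer):.1f} dB")  # roughly 11.8 dB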

2 The First Steps in Achieving Great Sound.

Before you even consider EQ and effects, or even the overall mix, it is important to make sure that levels are
properly set for each individual source. This can’t be stressed enough—initial level setup is vitally important for
achieving optimum performance from your mixer! Here’s why … and how.


Each and every “stage” in the mixer’s signal path will add a certain amount of noise to the signal: the head amp,
the EQ stage, the summing amplifier, and the other buffer and gain stages that exist in the actual mixer circuit (this
applies to analog mixers in particular). The thing to keep in mind is that the amount of noise added by each stage
is usually not dependent to any significant degree on the level of the audio signal passing through the circuit. This
means that the bigger the desired signal, the smaller the added noise will be in relation to it. In tech-speak this
gives us a better “signal-to-noise ratio”—often abbreviated as “S/N ratio.” All of this leads to the following basic
rule:

To achieve the best overall system S/N ratio, amplify the input to the desired average level as early as
possible in the signal path.

In our mixer, that means the head amplifier. If you don’t get the signal up to the desired level at the head amplifier
stage, you will need to apply more gain at later stages, which will only amplify the noise contributed by the
preceding stages. Just remember that too much initial gain is bad too, because it will overload channel circuitry
and cause clipping.
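The toy calculation below (hypothetical numbers, Python) shows the rule in action: each stage adds a fixed amount of noise, so applying the full gain at the head amplifier keeps the later stages’ noise small relative to the signal, while making up the gain at the end amplifies the accumulated noise as well.

import math

# Hypothetical numbers only, to illustrate the gain-staging rule. Each stage
# adds a fixed amount of noise regardless of signal level, so boosting early
# keeps the later stages' noise small compared with the already-large signal.
STAGE_NOISE = 0.001          # noise added by each later stage (arbitrary units)
mic_signal = 0.01            # small signal from the microphone
target = 1.0                 # desired nominal level at the mixer output

def snr_db(signal, noise):
    return 20 * math.log10(signal / noise)

# Case A: full gain (100x) at the head amp, later stages at unity
noise_a = STAGE_NOISE + STAGE_NOISE            # two later stages add their noise
print("gain early :", round(snr_db(target, noise_a), 1), "dB S/N")

# Case B: unity at the head amp, make up the 100x gain at the last stage
noise_b = (STAGE_NOISE * 100) + STAGE_NOISE    # the earlier stage's noise gets amplified too
print("gain late  :", round(snr_db(target, noise_b), 1), "dB S/N")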

2.1 Level Setup Procedure For Optimum Performance

Now that we know what we have to do, how do we do it? If you take another quick look at the mixer block diagram
you’ll notice that there’s a peak indicator located right after the head amplifier and EQ stages, and therein lies our
answer! Although the exact procedure you use will depend on the type of mixer you use and the application, as
well as your personal preferences, here’s a general outline:

1 Start by setting all level controls to their minimum: master faders, group faders (if provided), channel faders,
and input gain controls. Also make sure that no EQ is applied (no boost or cut), and that all effects and
dynamic processors included in the system are defeated or bypassed.

2 Apply the source signal to each channel one at a time: have singers sing, players play, and playback devices
play back at the loudest expected level. Gradually turn up the input gain control while the signal is being
applied to the corresponding channel until the peak indicator begins to flash, then back off a little so that the
peak indicator flashes only occasionally. Repeat for each active channel.

3 Raise your master fader(s)—and group faders if available—to their nominal levels (this will be the “0”
markings on the fader scale).

4 Now, with all sources playing, you can raise the channel faders and set up an initial rough mix.

That’s basically all there is to it. But do keep your eyes on the main output level meters while setting up the mix to
be sure you don’t stay in the “peak zone” all the time. If the output level meters are peaking constantly you will
need to lower the channel faders until the overall program falls within a good range— and this will depend on the
“dynamic range” of your program material.

3 External Effects, Monitor Mixes, and Groups

3.1 AUX Buses For Monitor Sends and Overall Effects


There are a number of reasons why you might want to “tap” the signal flowing through your mixer at some point
before the main outputs: the two most common being

1) To create a monitor mix that is separate from the main mix


2) To process the signal via an external effect unit and then bring it back into the mix.

Both of these functions, and more, can be handled by the mixer’s AUX (Auxiliary) buses and level controls. If the
mixer has two AUX buses, then it can handle both functions at the same time. Larger mixing consoles can have 6,
8, or even more auxiliary buses to handle a variety of monitoring and processing needs. Using the AUX buses and
level controls is pretty straightforward. The only thing you need to consider is whether you need a “pre-fader” or
“post-fader” send. AUX sends often feature a switch that allows you to configure them for pre- or post-fader operation.

Pre/Post—What’s the difference?

PRE (pre-fader)
- A “pre-fader” signal is taken from a point before the channel fader, so the send level is affected only by the
AUX send level control and not by the channel fader.
- Pre-fader sends are most commonly used to provide monitor mixes.
- Send for a monitor mix: the send signal is fed to the monitor power amplifier and speaker system.
- The channel fader does not affect the send level, so the monitor mix remains independent of the main mix. No
return signal is used in this case.

POST (post-fader)
- A “post-fader” signal is taken from a point after the channel fader, so its level will be affected by both the
AUX send level control and the channel fader.
- Post-fader sends are most commonly used in conjunction with the mixer’s AUX or effect returns for external
effect processing.
- Send for external effects processing: the send signal is fed to the external effect unit—a reverb unit, for
example—and the output from the effect unit is returned to the AUX Return jack and mixed back into the main
program.
- The send level is affected by the channel fader, so the effect level always remains in proportion to the
channel signal.
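A tiny channel-strip model in Python (hypothetical numbers, no particular console) makes the difference concrete: pull the channel fader down and the pre-fader monitor send stays where it was, while the post-fader effect send follows the fader.

# A toy model (hypothetical, not any particular console) of one channel strip,
# showing how pre- and post-fader AUX sends respond to the channel fader.
def channel_sends(input_level, channel_fader, aux_send_level, pre_fader=True):
    post_fader_signal = input_level * channel_fader
    if pre_fader:
        aux = input_level * aux_send_level          # ignores the channel fader
    else:
        aux = post_fader_signal * aux_send_level    # tracks the channel fader
    return post_fader_signal, aux

# Pull the channel fader halfway down: the monitor (pre) send stays put,
# while the effect (post) send drops with it.
for fader in (1.0, 0.5):
    _, monitor = channel_sends(1.0, fader, 0.8, pre_fader=True)
    _, effect = channel_sends(1.0, fader, 0.8, pre_fader=False)
    print(f"fader {fader:.1f} -> monitor send {monitor:.2f}, effect send {effect:.2f}")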

3.2 Using Groups

Group buses and faders can greatly simplify the mixing process—particularly in live situations in which changes
have to be made as quickly as possible. If you have a group of channels that need to be adjusted all together while
maintaining their relative levels, grouping is the way to go. Simply assign those channels to a group bus, and make sure
that group is also assigned to the main program bus. Then you can adjust the overall level of the group using a
single group fader, rather than having to attempt to control multiple channel faders simultaneously.

Group buses usually also have their own outputs, so you can send the group signal to a different external
destination from the main mix.

A group of channels whose levels need to maintain the same relationship—a drum mix, for example—can be
assigned to a group bus.

Usually the group bus signal can be output independently via “Group” outputs, or it can be assigned to the main
program (stereo) bus to be mixed in with the main stereo program.

Once the mix between the channels assigned to the group is established via the channel faders, the overall level
of the entire group can be conveniently adjusted via a single group fader.
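In code form (an illustrative Python sketch with made-up fader values), a group is nothing more than one multiplier applied to every channel assigned to it, which is why the internal balance is preserved:

# Toy sketch of a drum group: channel faders set the balance inside the group,
# and one group fader scales everything together while that balance is kept.
drum_channels = {"kick": 0.9, "snare": 0.8, "overheads": 0.6}   # hypothetical fader settings

def group_output(channels, group_fader):
    return {name: level * group_fader for name, level in channels.items()}

print(group_output(drum_channels, 1.0))   # mix as balanced
print(group_output(drum_channels, 0.7))   # whole kit down together, balance unchanged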

3.3 Channel Inserts for Channel-specific Processing


Another way to get the mixer’s signal outside the box is to use the channel inserts. The channel inserts are almost
always located before the channel fader and, when used, actually “break” the mixer’s internal signal path. Unlike
the AUX sends and returns, the channel insert only applies to the corresponding channel.

Channel inserts are most commonly used for applying a dynamics processor such as a compressor or limiter to a
specific channel—although they can be used with just about any type of in/out processor.

Channel insert jacks must be used with a special insert cable that has a TRS phone plug on one end and two mono
phone plugs on the split “Y” end. One of the mono plugs carries the “send” signal to be fed to the input of
the external processor, and the other carries the “return” signal from the output of the processor.

4 Making Better Mixes

4.1 Approaching the Mix—Where Do You Start?

Mixing is easy, right? Just move the faders around until it sounds right? Well, you can do it that way, but a more
systematic approach that is suited to the material you’re mixing will produce much better results, and faster. There
are no rules, and you’ll probably end up developing a system that works best for you. But the key is to develop a
system rather than working haphazardly. Here are a few ideas to get you started:

i. Faders Down
It might sound overly simple, but it is usually a good idea to start with all channel faders off—all the way down. It’s
also possible to start with all faders at their nominal settings, but it’s too easy to lose perspective with this approach.
Start with all faders down, then bring them up one by one to fill out the mix. But which channel should you start
with?

Example 1:
Vocal Ballad Backed by Piano Trio
What are you mixing? Is it a song in which the vocals are the most important element? If so you might want to build
the mix around the vocals. This means bringing the vocal channel up to nominal first (if your level setup procedure
has been done properly this will be a good starting point), and then adding the other instruments. What you add
next will depend on the type of material you are working with and your approach to it. If the vocals are backed by
a piano trio and the song is a ballad, for example, you might want to bring in the piano next and get the vocal/piano
relationship just right, then bring in the bass and drums to support the overall sound.

Example 2:
Funky R&B Groove
The approach will be totally different if you’re mixing a funky R&B number that centers on the groove. In this case
most engineers will start with the drums, and then add the bass. The relationship between the drums and bass is
extremely important to achieve the “drive” or groove the music rides on. Pay particular attention to how the bass
works with the kick (bass drum). They should almost sound like a single instrument— with the kick supplying the
punch and the bass supplying the pitch. Once again, there are no rules, but these are concepts that have been
proven to work well.

ii. Music First—Then Mix


In any case, the music comes first. Think about the music and let it guide the mix, rather than trying to do things
the other way around. What is the music saying and what instrument or technique is being used to drive the
message? That’s where the focus of your mix should be. You’re using a high-tech tool to do the mixing, but the
mix itself is as much art as the music. Approach it that way and your mixes will become a vital part of the music.

4.2 Panning For Cleaner Mixes

Not only does the way you pan your individual channels determine where the instruments appear in the stereo
sound field, but it is also vital to give each instrument its own “space” so that it doesn’t conflict with other
instruments. Unlike live sound in a real acoustic space, recorded stereo sound is basically 2-dimensional (although
some types of surround sound are actually very 3-dimensional), and instruments positioned right on top of each
other will often get in each other’s way—particularly if they are in the same frequency range or have a similar
sound.

Spread them Out!


Position your instruments so they have room to “breathe,” and connect in the most musical way with other
instruments. Sometimes, however, you’ll want to deliberately pan sounds close together, or even right on top of
one another, to emphasize their relationship. There are no hard-and-fast rules. Normally (but this is not a rule),
bass and lead vocals will be panned to center, as will the kick drum if the drums are in stereo.
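How a pan pot translates its position into left and right levels varies between mixers; one common approach is an equal-power pan law, sketched below in Python purely as an illustrative assumption (the text does not specify any particular law).

import math

# One common way a pan control can be implemented: an equal-power pan law.
# pan = -1 is hard left, 0 is centre, +1 is hard right.
def equal_power_pan(pan: float):
    theta = (pan + 1) * math.pi / 4        # map -1..+1 onto 0..pi/2
    return math.cos(theta), math.sin(theta)

for pan in (-1.0, -0.5, 0.0, 0.5, 1.0):
    left, right = equal_power_pan(pan)
    print(f"pan {pan:+.1f} -> L {left:.3f}, R {right:.3f}  (L^2 + R^2 = {left**2 + right**2:.2f})")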

4.3 To EQ Or Not To EQ

In general: less is better. There are many situations in which you’ll need to cut certain frequency ranges, but use
boost sparingly, and with caution. Proper use of EQ can eliminate interference between instruments in a mix and
give the overall sound better definition. Bad EQ—and most commonly bad boost—just sounds terrible.

Cut For a Cleaner Mix


For example: cymbals have a lot of energy in the mid and low frequency ranges that you don’t really perceive as
musical sound, but which can interfere with the clarity of other instruments in these ranges. You can basically turn
the low EQ on cymbal channels all the way down without changing the way they sound in the mix. You’ll hear the
difference, however, in the way the mix sounds more “spacious,” and instruments in the lower ranges will have
better definition. Surprisingly enough, piano also has an incredibly powerful low end that can benefit from a bit of
low-frequency roll-off to let other instruments—notably drums and bass—do their jobs more effectively. Naturally
you won’t want to do this if the piano is playing solo.

The reverse applies to kick drums and bass guitars: you can often roll off the high end to create more space in the
mix without compromising the character of the instruments. You’ll have to use your ears, though, because each
instrument is different and sometimes you’ll want the “snap” of a bass guitar, for example, to come through.
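The Python sketch below uses a deliberately simple one-pole high-pass filter (real console EQs and plug-ins are steeper and better behaved) just to show what a low cut does: the 60 Hz content is heavily attenuated while the 8 kHz content passes almost untouched.

import numpy as np

# A very simple one-pole high-pass ("low cut"), only to illustrate the idea of
# removing low-frequency energy from a channel such as cymbal overheads.
def one_pole_highpass(x, cutoff_hz, sample_rate):
    rc = 1.0 / (2 * np.pi * cutoff_hz)
    dt = 1.0 / sample_rate
    alpha = rc / (rc + dt)
    y = np.zeros_like(x)
    for n in range(1, len(x)):
        y[n] = alpha * (y[n - 1] + x[n] - x[n - 1])
    return y

sr = 48_000
t = np.arange(sr) / sr
rumble = np.sin(2 * np.pi * 60 * t)            # unwanted low-frequency content
shimmer = 0.3 * np.sin(2 * np.pi * 8000 * t)   # the part of the cymbal we want to keep

low_out = one_pole_highpass(rumble, cutoff_hz=300, sample_rate=sr)
high_out = one_pole_highpass(shimmer, cutoff_hz=300, sample_rate=sr)
print("60 Hz level before/after :", round(float(np.std(rumble)), 3), "->", round(float(np.std(low_out)), 3))
print("8 kHz level before/after :", round(float(np.std(shimmer)), 3), "->", round(float(np.std(high_out)), 3))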

Boost With Caution


If you’re trying to create special or unusual effects, go ahead and boost away as much as you like. But if you’re
just trying to achieve a good sounding mix, boost only in very small increments. A tiny boost in the midrange can
give vocals more presence, or a touch of high boost can give certain instruments more “air.” Listen, and if things
don’t sound clear and clean try using cut to remove frequencies that are cluttering up the mix rather than trying to
boost the mix into clarity.

One of the biggest problems with too much boost is that it adds gain to the signal, increasing noise and potentially
overloading the subsequent circuitry.


4.4 Ambience

Judicious application of reverb and/or delay via the mixer’s AUX buses can really polish a mix, but too much can
“wash out” the mix and reduce overall clarity. The way you set up your reverb sound can make a huge difference
in the way it meshes with the mix.

Reverb/Delay Time
Different reverb/delay units offer different capabilities, but most offer some means of adjusting the reverb time. A
little extra time spent matching the reverb time to the music being mixed can mean the difference between great
and merely average sound. The reverb time you choose will depend to a great degree on the tempo and “density”
of the mix at hand. Slower tempos and lower densities (i.e. sparser mixes with less sonic activity) can sound good
with relatively long reverb times. But long reverb times can completely wash out a faster, more active piece of music.
Similar principles apply to delay.
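As a purely informal starting point (a rule of thumb some engineers use, not something stated in this material), you can let the pre-delay plus decay roughly fill one bar at the song’s tempo and then adjust by ear; the Python sketch below turns that into numbers for a few tempos.

# Informal rule of thumb only: let pre-delay plus decay roughly fill one bar,
# then trust your ears. The 80 ms pre-delay is an arbitrary starting value.
def reverb_starting_point(bpm, beats_per_bar=4, pre_delay_ms=80):
    beat_ms = 60_000 / bpm
    bar_ms = beat_ms * beats_per_bar
    decay_ms = max(bar_ms - pre_delay_ms, 0)
    return pre_delay_ms, decay_ms

for bpm in (70, 95, 128):
    pre, decay = reverb_starting_point(bpm)
    print(f"{bpm:3d} BPM -> pre-delay {pre} ms, decay ~{decay:.0f} ms")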

Reverb Tone
How “bright” or “bassy” a reverb sound is also has a huge impact on the sound of your mix. Different reverb units
offer different means of controlling this—balance between the high- and low-frequency reverb times, simple EQ,
and others. A reverb that is too bright will not only sound unnatural, but it will probably get in the way of delicate
highs you want to come through in your mix. If you find yourself hearing more high-end reverb than mix detail, try
reducing the brightness of the reverb sound. This will allow you to get full-bodied ambience without compromising
clarity.

Reverb Level
It’s amazing how quickly your ears can lose perspective and fool you into believing that a totally washed-out mix
sounds perfectly fine. To avoid falling into this trap start with reverb level all the way down, then gradually bring the
reverb into the mix until you can just hear the difference. Any more than this normally becomes a “special effect.”
You don’t want reverb to dominate the mix unless you are trying to create the effect of a band in a cave—which is
a perfectly legitimate creative goal if that’s the sort of thing you’re aiming for.

Built-in Effects & EQ


Many mixers feature a high-performance internal effect system and graphic equalizer that offer extraordinary
sound-processing power and versatility without the need for external equipment. The internal DSP (Digital Signal
Processor) lets you individually add reverb and delay to each channel in the same way that you can with an external
effect unit – but you don’t need to wire up any extra gear, and won’t suffer the signal quality loss that external
connections sometimes entail. The graphic equalizer is ideal for shaping the response of the overall mix, and for
minimizing feedback in live situations.
