
SRS Training Introduction 2010

What is Sound
Sound reinforcement for house of worship applications has evolved from simple speech reinforcement
to full concert-quality multimedia systems. It has grown from basic analog sound systems to an extremely varied
selection of digital audio equipment. However, no matter how complex the overall audio system, an
understanding of the basic principles of sound, the key elements of sound systems, and the primary goal of
“good sound” will ensure the best results in choosing and using that system.

Because good sound quality is the goal of any house of worship sound system, it is helpful to be familiar
with some general aspects of sound: how it is produced, transmitted, and received. It is also useful to
describe or classify sound according to its acoustic behaviour. Finally, the characteristics of "good" sound should
be understood.

I. Vibrations
Sound is produced by vibrating objects. These include musical instruments, loudspeakers, and, of
course, human vocal cords. The mechanical vibrations of these objects move the air which is immediately
adjacent to them, alternately “pushing” and “pulling” the air from its resting state. Each back-and-forth vibration
produces a corresponding pressure increase and pressure decrease in the air. A complete pressure change, or
cycle, occurs when the air pressure goes from rest, to maximum, to minimum, and back to rest again. These
cyclic pressure changes travel outward from the vibrating object, forming a pattern called a sound wave. A sound
wave is a series of pressure changes moving through the air. This is the genesis of any sound. As a matter of
fact, if an object — whatever it may be — doesn’t vibrate, it cannot make sound. These waves have length
(hence the term wavelength) measured in feet and inches or even meters. As these sound waves reach your ear,
your eardrum also vibrates, and it is those vibrations that are recognized by your brain.

II. Hertz
A simple sound wave can be described by its frequency and by its amplitude. The frequency of a sound
wave is the rate at which the pressure changes occur. It is measured in cycles per second, more commonly
known as Hertz (Hz), where 1 Hz is equal to 1 cycle per second. The term is derived from the surname of the
19th-century physicist Heinrich Hertz. The range of frequencies audible to the
human ear extends from a low of about 20 Hz to a high of about 20,000 Hz. In practice, a sound source such as
a voice usually produces many frequencies simultaneously. In any such complex sound, the lowest frequency is
called the fundamental and is responsible for the pitch of the sound. The higher frequencies are called harmonics
and are responsible for the timbre or tone of the sound. Harmonics allow us to distinguish one source from
another, such as a piano from a guitar, even when they are playing the same fundamental note. In short, the
more cycles (pressure fluctuations) per second, the higher the frequency.
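
As a quick illustration of how frequency, period, and harmonics relate (a minimal sketch; the 440 Hz figure is simply a convenient example, not taken from the text above):

    # Sketch: period and harmonics of a 440 Hz tone (an illustrative value).
    fundamental_hz = 440.0              # e.g., concert pitch A

    period_s = 1.0 / fundamental_hz     # time taken by one complete cycle
    print(f"Period of {fundamental_hz:.0f} Hz tone: {period_s * 1000:.2f} ms")

    # Harmonics fall at whole-number multiples of the fundamental.
    for n in range(2, 6):
        print(f"Harmonic {n}: {n * fundamental_hz:.0f} Hz")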

III. Decibels
The amplitude of a sound wave refers to the magnitude (strength) of the pressure changes and determines the
“loudness” of the sound. Amplitude is measured in decibels (dB) of sound pressure level (SPL) and ranges from
0 dB SPL (the threshold of hearing) to above 120 dB SPL (the threshold of pain). The level of conversational
speech is about 70 dB SPL. A change of 1 dB is about the smallest SPL difference that the human ear can detect,
while 3 dB is a generally noticeable step, and an increase of 10 dB is perceived as a “doubling” of loudness.
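
The decibel scale is logarithmic. As a hedged sketch (the 20 micropascal reference pressure and the 20·log10 formula are standard acoustics conventions, not stated in the text above):

    import math

    P_REF_PA = 20e-6   # reference pressure: the threshold of hearing, 0 dB SPL

    def spl_db(pressure_pa):
        """Sound pressure level in dB SPL for a given RMS pressure in pascals."""
        return 20.0 * math.log10(pressure_pa / P_REF_PA)

    # Doubling the sound pressure adds about 6 dB, yet a rise of roughly 10 dB
    # is what listeners perceive as "twice as loud".
    print(round(spl_db(0.02)))   # 60 dB SPL
    print(round(spl_db(0.04)))   # 66 dB SPL (pressure doubled)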

Another characteristic of a sound wave related to frequency is wavelength. The wavelength of a sound
wave is the physical distance from the start of one cycle to the start of the next cycle, as the wave moves through
the air. Since each cycle is the same, the distance from any point in one cycle to the same point in the next cycle


is also one wavelength: for example, the distance from one maximum pressure point to the next maximum
pressure point. Wavelength is related to frequency by the speed of sound, which is the velocity at which a
sound wave travels: about 1,130 feet per second in air. The speed of sound does not change with frequency or
wavelength, but it is related to them in the following way: the frequency of a sound multiplied by its wavelength
always equals the speed of sound.
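
In other words, wavelength equals the speed of sound divided by frequency. A minimal sketch using the 1,130 feet-per-second figure from the text (the example frequencies are arbitrary):

    SPEED_OF_SOUND_FT_S = 1130.0   # approximate speed of sound in air

    def wavelength_ft(frequency_hz):
        """Wavelength in feet: speed of sound divided by frequency."""
        return SPEED_OF_SOUND_FT_S / frequency_hz

    for f in (20, 100, 1000, 10000, 20000):
        print(f"{f:>6} Hz -> {wavelength_ft(f):8.3f} ft")
    # A 20 Hz wave is about 56 ft long, while a 20,000 Hz wave is under an inch.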

* SOUND CHECK

An anechoic chamber is a room with special walls that absorb as much sound as possible. Anechoic means "without echoes". Sometimes
the entire room even rests on shock absorbers, negating any vibration from the rest of the building or the outside.

The material covering the walls of an anechoic chamber uses wedge-shaped panels to dissipate as much audio energy as possible before
reflecting it away. Their special shape reflects energy into the apex of the wedge, dissipating it as vibrations in the material rather than the
air. Anechoic chambers are frequently used for testing microphones, measuring the precise acoustic properties of various instruments,
determining exactly how much energy is transferred in electro-acoustic devices, and performing delicate psychoacoustic experiments.


Interesting Facts

1. The BAC-Aérospatiale Concorde was the only passenger plane to fly faster than sound, cruising at about 2,100 km/h.
2. The loudest natural sounds ever made on Earth are probably gigantic volcanic eruptions, such as the explosions of the island
of Krakatoa.
3. Some of the loudest sounds produced by our own invention are the noise of space rockets blasting from the launch pad. The
biggest were the Saturn V rockets that launched the USA's Apollo moon missions of 1968-72. They had their greatest success
when Apollo 11 landed on the Moon - an airless and therefore completely silent place - on 20 July 1969. Once a space rocket
had taken off and entered the vacuum of space, it became totally silent.
4. One musical piece has no sound at all. It is called 4 minutes 33 seconds. It was 'written' by the American composer John
Cage in 1952. A pianist sits at the piano and plays nothing for exactly 4 minutes and 33 seconds.
5. In the deep ocean, the sperm whale uses sound to stun or kill its prey. It sends out giant grunts, immensely powerful bursts of
sound that can disable nearby fish, squid and other victims.
6. Acoustics plays a large part in the design of modern concert halls, theatres and similar buildings. The path of a sound wave
can be shown on a computer screen, for different frequencies and for different surface materials. This is why many such
buildings have strange shapes on the walls and ceiling, such as discs, panels and saucers, to absorb or reflect the sounds.
7. Huge cathedrals, with their hard walls and floors of stone, glass and wood, are amazing places for acoustics. Almost any sound
seems loud and long, as it echoes and reverberates through the surfaces. This is why church singing sounds so special.

The Sound Source

The sound sources most often found in worship facility applications are the speaking voice, the singing
voice, and a variety of musical instruments. Voices may be male or female, loud or soft, single or multiple, close
or distant, etc. Instruments may range from a simple acoustic guitar to a pipe organ or even to a full orchestra.

Sound sources may be categorized as desired or undesired, and the sound produced by them may be
further classified as being direct or indirect. In practice, the sound field or total sound in a space will always
consist of both direct and indirect sound, except in anechoic chambers or, to some extent, outdoors, when there
are no nearby reflective surfaces.

Undesired sound sources may also be present: building noise from air conditioning or buzzing light
fixtures, noise from the congregation, sounds from street or air traffic, drums that overpower the congregation or
other musicians, or feedback from microphone pickup.

The acoustics of the room are often as important as the sound source itself. Room acoustics are a
function of the size and shape of the room, the materials covering the interior surfaces, and even the presence of
the congregation. The acoustic nature of an area may have a positive or a negative effect on the sound produced
by voices, instruments, and loudspeakers before it is picked up by microphones or heard by listeners: absorbing
or diminishing some sounds while reflecting or reinforcing other sounds. Strong reflections can contribute to
undesired sound in the form of echo, standing waves, or excessive reverberation.
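
Standing waves build up at frequencies whose half-wavelengths fit evenly between parallel reflective surfaces. A rough sketch of the standard axial room-mode estimate (this formula and the 40 ft room width are illustrative assumptions, not from the text above):

    SPEED_OF_SOUND_FT_S = 1130.0

    def axial_modes_hz(dimension_ft, count=4):
        """First few axial standing-wave frequencies between two parallel walls."""
        return [n * SPEED_OF_SOUND_FT_S / (2.0 * dimension_ft)
                for n in range(1, count + 1)]

    # A hypothetical 40 ft wide room tends to emphasize these low frequencies.
    print([round(f, 1) for f in axial_modes_hz(40.0)])   # [14.1, 28.2, 42.4, 56.5]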


The Basic Purpose of a Sound System

I. To help people hear something better.


For example, one person speaking on a stage may not be heard well at the back of a large hall. A sound system
may be used to make the sound more clearly audible. In this case, the intention is to make the voice at the back
of the hall sound as loud as (not louder than) it is when heard up close.

II. To make sound louder for artistic reasons


A Praise & Worship team may be clearly audible but not necessarily very exciting. A sound system can give the
group much greater musical impact and lift the atmosphere of the whole service.

III. To enable people to hear sound in remote locations.


Special events draw larger crowds than the meeting room will hold. A sound system can bring the speeches and
discussion to a second room, so that the overflow crowd can hear them.

A Conceptual Model Of A Sound System

Sound systems amplify sound by converting it into electrical energy, increasing the power of the
electrical energy by electronic means, and then converting the more powerful electrical energy back into sound.
In audio electronics, devices that convert energy from one form into another are called transducers.

Devices that change one or more aspects of an audio signal are called signal processors. Using these
terms, we can model a sound system in its simplest form.

The input transducer (i.e., mic or pickup) converts sound into a fluctuating electrical current or voltage
which is a precise representation of the sound. The fluctuating current or voltage is referred to as an audio signal.

The signal processing alters one or more characteristics of the audio signal. In the simplest case, it
increases the power of the signal (a signal processor that does this is called an amplifier). In practical sound
systems, this block of the model represents a multitude of devices: preamplifiers, mixers, effects units, power
amplifiers, and so on. The output transducer (i.e., the loudspeaker or headphones) converts the amplified or
otherwise processed audio signal back into sound.
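
This three-block model can be sketched directly in code. The sensitivity and gain figures below are purely illustrative assumptions, chosen only to show the flow from sound, to signal, and back to sound:

    # Minimal sketch: input transducer -> signal processor -> output transducer.

    def input_transducer(sound_pressure_pa):
        """Microphone: converts sound pressure into a small voltage."""
        return sound_pressure_pa * 0.01       # volts per pascal (hypothetical)

    def signal_processor(signal_volts, gain=10000.0):
        """Amplifier: the simplest signal processor just increases signal power."""
        return signal_volts * gain

    def output_transducer(signal_volts):
        """Loudspeaker: converts the processed signal back into sound pressure."""
        return signal_volts * 0.5             # pascals per volt (hypothetical)

    quiet_sound = 0.1                         # pascals arriving at the microphone
    reinforced = output_transducer(signal_processor(input_transducer(quiet_sound)))
    print(reinforced)                         # 5.0: far louder than the original 0.1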

The Sound System

A basic sound reinforcement system consists of an input device (microphone), a control device (mixer),
an amplification device (power amplifier), and an output device (loudspeaker). This arrangement of components
is sometimes referred to as the audio chain: each device is linked to the next in a specific order. The primary
goal of the sound system in house of worship sound applications is to deliver clear, intelligible speech, and,
usually, high-quality musical sound, to the entire congregation. The overall design, and each component of it,
must be intelligently thought out, carefully installed, and properly operated to accomplish this goal.

There are three levels of electrical signals in a sound system: microphone level (a few thousandths of a
Volt), line level (approximately one Volt), and speaker level (ten Volts or higher).
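
Those nominal figures imply how much gain each stage must supply; a minimal sketch of the arithmetic (the 0.002 V mic level is an assumed example of "a few thousandths of a Volt"):

    import math

    MIC_LEVEL_V = 0.002      # "a few thousandths of a Volt"
    LINE_LEVEL_V = 1.0       # approximately one Volt
    SPEAKER_LEVEL_V = 10.0   # ten Volts or higher

    def voltage_gain_db(v_out, v_in):
        return 20.0 * math.log10(v_out / v_in)

    print(f"Mic to line:     {voltage_gain_db(LINE_LEVEL_V, MIC_LEVEL_V):.0f} dB")     # ~54 dB
    print(f"Line to speaker: {voltage_gain_db(SPEAKER_LEVEL_V, LINE_LEVEL_V):.0f} dB")  # 20 dB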


Sound is picked up and converted into an electrical signal by the microphone. This microphone level
signal is amplified to line level and possibly combined with signals from other microphones by the mixer. The
power amplifier then boosts the line level signal to speaker level to drive the loudspeakers, which convert the
electrical signal back into sound.

Electronic signal processors, such as equalizers, limiters or time delays, are inserted into the audio
chain, usually between the mixer and the power amplifier, or often within the mixer itself. They operate at line
level. The general function of these processors is to enhance the sound in some way or to compensate for
certain deficiencies in the sound sources or in the room acoustics.

In addition to feeding loudspeakers, an output of the system may be sent simultaneously to recording
devices or even used for broadcast. It is also possible to deliver sound to multiple rooms, such as vestibules and
cry rooms, by using additional power amplifiers and loudspeakers.

Finally, it may be useful to consider the room acoustics as part of the sound system: acoustics act as a
“signal processor” that affects sound both before it is picked up by the microphone and after it is produced by the
loudspeakers. Good acoustics may enhance the sound, while poor acoustics may degrade it, sometimes beyond
the corrective capabilities of the equipment. In any case, the role of room acoustics in sound system
performance cannot be ignored.

What is “GOOD” sound?

The three primary measures of sound quality are fidelity, intelligibility, and loudness. In a house of
worship the quality of sound will depend on the quality of the sound sources, the sound system, and the room
acoustics.

The fidelity of sound is primarily determined by the overall frequency response of the sound arriving at
the listener’s ear. It must have sufficient frequency range and uniformity to produce realistic and accurate speech
and music.

The intelligibility of sound is determined by the overall signal-to-noise ratio and the direct-to-reverberant
sound ratio at the listener’s ear. In a house of worship, the primary “signal” is the spoken word. The
“noise” is the ambient noise in the room as well as any electrical noise added by the sound system. In order to
understand speech with maximum intelligibility and minimum effort, the speech level should be at least 20 dB
louder than the noise at every listener's ear. The sound that comes from the system loudspeakers already has a
signal-to-noise ratio limited by the speech-to-noise ratio at the microphone. To ensure that the final speech-to-
noise ratio at the listener is at least 20 dB, the speech-to-noise ratio at the microphone must be at least 30 dB.
That is, the level of the voice picked up by the microphone must be at least 30 dB louder than the ambient noise
picked up by the microphone.
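
A quick arithmetic check of that requirement (the 45 dB ambient-noise figure is a hypothetical example):

    ambient_noise_db_spl = 45.0    # hypothetical room noise measured at the microphone
    required_margin_db = 30.0      # speech must be at least 30 dB above the noise

    minimum_speech_db_spl = ambient_noise_db_spl + required_margin_db
    print(f"Speech at the microphone must reach at least {minimum_speech_db_spl:.0f} dB SPL")

    # If the talker only measures 70 dB SPL at the mic, the margin is 25 dB:
    # move the microphone closer to the talker or reduce the room noise.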

The direct-to-reverberant ratio is determined by the directivity of the system loudspeakers and the
acoustic reverberation characteristic of the room. Reverberation time is the length of time that a sound persists in
a room even after the sound source has stopped. A high level of reverberant sound interferes with intelligibility by
making it difficult to distinguish the end of one word from the start of the next. A reverberation time of 1 second or
less is ideal for speech intelligibility. However, such rooms tend to sound somewhat lifeless for music, especially
traditional choral or orchestral music. Reverberation times of 3-4 seconds or longer are preferred for those
sources. Reverberation can be reduced only by absorptive acoustic treatment. If it is not possible to absorb the


reverberant sound once it is created, then it is necessary either to increase the level of the direct sound, to
decrease the creation of reverberant sound, or a combination of the two. Simply raising the level of the sound
system will raise the reverberation level as well. However, use of directional loudspeakers allows the sound to be
more precisely “aimed” toward the listener and away from walls and other reflective surfaces that contribute to
reverberation. Again, directional control is more easily achieved at high frequencies than at low frequencies.
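
Reverberation time is commonly estimated with the Sabine approximation, RT60 ≈ 0.049 × V / A (V in cubic feet, A in sabins of absorption). This formula and the room figures below are illustrative assumptions, not taken from the text:

    def rt60_sabine(volume_ft3, absorption_sabins):
        """Sabine estimate of reverberation time in seconds (imperial units)."""
        return 0.049 * volume_ft3 / absorption_sabins

    # A hypothetical 200,000 cubic-foot sanctuary:
    print(round(rt60_sabine(200_000, 3_000), 1))   # 3.3 s: live, flattering for choirs
    print(round(rt60_sabine(200_000, 9_000), 1))   # 1.1 s: far better for speech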

Finally, the loudness of the speech or music at the furthest listener must be sufficient to achieve the
required effect: comfortable levels for speech, perhaps more powerful levels for certain types of music. These
levels should be attainable without distortion or feedback. The loudness is determined by the dynamic range of
the sound system, the potential acoustic gain (PAG) of the system, and the room acoustics. The dynamic range
of a sound system is the difference in level between the noise floor of the system and the loudest sound level
that it can produce without distortion. It is ultimately limited only by the available amplifier power and loudspeaker
efficiency. The loudness requirement dictates the needed acoustic gain (NAG) so that the furthest listener can
hear at a level similar to closer listeners. However, a sound reinforcement system with microphones requires
consideration of potential acoustic gain. Potential Acoustic Gain (PAG) is a measure of how much gain or
amplification a sound system will provide before feedback occurs. This turns out to be much more difficult than
designing for dynamic range because it depends very little on the type of system components but very much on
the relative locations of microphones, loudspeakers, talkers, and listeners. Room acoustics also play a role in
loudness. Specifically, reverberant sound adds to the level of the overall sound field indoors. If reverberation is
moderate, the loudness will be somewhat increased without ill effect. If reverberation is excessive, the loudness
may substantially increase but with potential loss of fidelity and intelligibility.
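
A common textbook formulation of NAG and PAG works purely from the distances between talker, microphone, loudspeaker, and listener. The formulas and the layout below are assumptions drawn from standard sound-system design practice, not spelled out in this text:

    import math

    def needed_acoustic_gain(d0_ft, ead_ft):
        """NAG: gain needed so the farthest listener (D0 away) hears as if at the EAD."""
        return 20.0 * math.log10(d0_ft / ead_ft)

    def potential_acoustic_gain(ds_ft, d0_ft, d1_ft, d2_ft, open_mics=1, margin_db=6.0):
        """PAG: gain available before feedback, including a feedback stability margin."""
        return (20.0 * math.log10((d1_ft * d0_ft) / (d2_ft * ds_ft))
                - 10.0 * math.log10(open_mics) - margin_db)

    # Hypothetical layout: talker 2 ft from the mic, farthest listener 80 ft away,
    # loudspeaker 20 ft from the mic and 60 ft from that listener, two open mics.
    nag = needed_acoustic_gain(d0_ft=80, ead_ft=10)
    pag = potential_acoustic_gain(ds_ft=2, d0_ft=80, d1_ft=20, d2_ft=60, open_mics=2)
    print(f"NAG {nag:.1f} dB, PAG {pag:.1f} dB ->",
          "OK" if pag >= nag else "feedback risk: move mics or loudspeakers")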

Although “good” sound is qualitatively determined by the ear of the beholder, there are quantitative
design methods and measurements that can be used to accurately predict and evaluate performance. It is
usually possible (though often not easy) to resolve the competing factors of acoustics, sound systems,
architecture, aesthetics and budget in order to deliver good sound in a house of worship. However, major
deficiencies in any of these areas can seriously compromise the final result. Readers who are contemplating
major sound system purchases, acoustic changes, or new construction are encouraged to speak with
knowledgeable consultants and/or experienced contractors to ensure the “best” sound.

Basic Components of a Sound System

I. Mixer
A mixing console, or audio mixer, also called a sound board, soundboard, mixing desk, or simply a mixer, is an electronic
device for combining (also called "mixing"), routing, and changing the level, timbre and/or dynamics of audio
signals. A mixer can mix analog or digital signals, depending on the type of mixer. The modified signals (voltages
or digital samples) are summed to produce the combined output signals. Mixing consoles are used in many
applications, including recording studios, public address systems, sound reinforcement systems, broadcasting,
television, and film post-production. An example of a simple application would be to enable the signals that
originated from two separate microphones (each being used by vocalists singing a duet, perhaps) to be heard
through one set of speakers simultaneously. When used for live performances, the signal produced by the mixer
will usually be sent directly to an amplifier, unless that particular mixer is "powered" or it is being connected to
powered speakers.
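
At its core, mixing is a weighted sum of the incoming signals; a minimal sketch with made-up sample values:

    def mix(channels, gains):
        """Sum several signals sample by sample, each scaled by its channel gain."""
        return [sum(gain * channel[i] for channel, gain in zip(channels, gains))
                for i in range(len(channels[0]))]

    vocal_1 = [0.2, 0.4, -0.1, 0.0]    # illustrative digital samples
    vocal_2 = [0.1, -0.2, 0.3, 0.2]
    print(mix([vocal_1, vocal_2], gains=[1.0, 0.8]))   # one combined output signal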


II. Audio Amplifier


An audio amplifier is an electronic amplifier that amplifies low-power audio signals (signals composed primarily of
frequencies between 20 Hz and 20,000 Hz, the human range of hearing) to a level suitable for driving loudspeakers
and is the final stage in a typical audio playback chain.

The preceding stages in such a chain are low-power audio amplifiers that perform tasks like pre-amplification,
equalization, tone control, or mixing/effects, or audio sources such as record players, CD players, and cassette players.
Most audio amplifiers require these low-level inputs to adhere to line levels.
While the input signal to an audio amplifier may measure only a few hundred microwatts, its output may be tens,
hundreds, or thousands of watts.
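
That span, from microwatts in to hundreds of watts out, corresponds to an enormous power gain; a quick sketch of the arithmetic (the 200 W output is an arbitrary example):

    import math

    input_power_w = 200e-6    # "a few hundred microwatts" at the input
    output_power_w = 200.0    # an illustrative 200 W amplifier output

    power_gain_db = 10.0 * math.log10(output_power_w / input_power_w)
    print(f"Power gain: {power_gain_db:.0f} dB")   # 60 dB, a factor of one million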

History
The audio amplifier was invented in 1909 by Lee De Forest when he invented the triode vacuum tube. The triode
was a three terminal device with a control grid that can modulate the flow of electrons from the filament to the
plate. The triode vacuum amplifier was used to make the first AM radio.[1]

Early audio amplifiers were based on vacuum tubes (also known as valves), and some of these achieved notably
high quality (e.g., the Williamson amplifier of 1947-9). Most modern audio amplifiers are based on solid state
devices (transistors such as BJTs, FETs and MOSFETs), but there are still some who prefer tube-based
amplifiers, due to a perceived 'warmer' valve sound. Audio amplifiers based on transistors became practical with
the wide availability of inexpensive transistors in the late 1960s.
Key design parameters for audio amplifiers are frequency response, gain, noise, and distortion. These are
interdependent; increasing gain often leads to undesirable increases in noise and distortion. While negative
feedback actually reduces the gain, it also reduces distortion. Most audio amplifiers are linear amplifiers
operating in class AB.


The relationship of the input to the output of an amplifier, usually expressed as a function of the input
frequency, is called the transfer function of the amplifier, and the magnitude of the transfer function is termed
the gain. In popular use, the term usually describes an electronic amplifier, in which the input "signal" is usually a
voltage or a current. In audio applications, amplifiers drive the loudspeakers used in PA systems to make the
human voice louder or play recorded music.

Amplifiers may be classified according to the input (source) they are designed to amplify (such as a guitar
amplifier, to perform with an electric guitar), the device they are intended to drive (such as a headphone
amplifier), the frequency range of the signals (Audio, IF, RF, and VHF amplifiers, for example), whether they
invert the signal (inverting amplifiers and non-inverting amplifiers), or the type of device used in the amplification
(valve or tube amplifiers, FET amplifiers, etc.).

Typical Amplifiers

Vacuum Tube Amplifiers


Famous Mixer / Amplifier suppliers: Alesis, Allen & Heath, Soundcraft, Crest Audio, Mackie, Yamaha,
Behringer, Peavey, Phonic, Pioneer, Roland

III. Microphone
Microphone is a generic term that is used to refer to any element which transforms acoustic energy (sound) into
electrical energy (the audio signal). A microphone is therefore one type from a larger class of elements called
transducers - devices which translate energy of one form into energy of another form. The fidelity with which a
microphone generates an electrical representation of a sound depends, in part, on the method by which it
performs the energy conversion. Historically, a number of different methods have been developed for varying
purposes, and today a wide variety of microphone types may be found in everyday use.

Microphones are generally classified according to two main characteristics:


a) Type of transducer: Dynamic, Condenser, Electret Condenser, Ribbon, etc.
b) Pickup pattern: Cardioid, Omnidirectional, Bi-directional / Figure-8, Supercardioid (a simple polar-pattern sketch follows below)
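
The idealized first-order pickup patterns can be written as r(θ) = A + B·cos(θ) with A + B = 1. The coefficients below are standard textbook approximations, used here only as a sketch:

    import math

    # (A, B) pairs for the idealized polar responses.
    PATTERNS = {
        "omnidirectional": (1.0, 0.0),
        "cardioid":        (0.5, 0.5),
        "supercardioid":   (0.37, 0.63),
        "figure-8":        (0.0, 1.0),
    }

    def sensitivity(pattern, angle_deg):
        """Relative sensitivity of the pattern at the given angle off axis."""
        a, b = PATTERNS[pattern]
        return a + b * math.cos(math.radians(angle_deg))

    print(sensitivity("cardioid", 0))     # 1.0: full pickup on axis
    print(sensitivity("cardioid", 180))   # 0.0: ideally silent from the rear
    print(sensitivity("figure-8", 90))    # ~0.0: nulls at the sides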


Famous Microphone suppliers: Shure, Sennheiser, Audio Technica, AKG, Audix, Mipro, Oktava, etc.

IV. Loudspeakers
Loudspeaker is a generic term used to describe a wide variety of transducers that convert electrical energy into
acoustical energy, or sound. The term also is commonly used to refer to systems of two or more transducers in
a single enclosure, with or without a crossover. For the sake of clarity, we will use the term driver to refer to
individual transducers, and loudspeaker to refer to systems. A system of one or more drivers implemented as a
free standing functional component - that is, mounted in an enclosure with or without crossover, or fitted with a
horn, or otherwise completed for a specific function.

In a sound reinforcement system, loudspeakers play an important role as the final link between the sound source
and the audience. Surprisingly, they are also the least understood components of that equipment chain.
To some people, loudspeakers are seen alternately as extraordinarily fragile and temperamental devices, or
as powerful, magical things capable of acoustical miracles. Many sound reinforcement professionals, for example,
accept it as inevitable that half of the drivers in their system will burn out every night there is a show. Other
professionals have been known to claim obviously outrageous efficiency ratings or acoustical power figures for
their systems.

Terminology
The term "loudspeaker" may refer to individual transducers (known as "drivers") or to complete speaker systems
consisting of an enclosure including one or more drivers. To adequately reproduce a wide range of frequencies,
most loudspeaker systems employ more than one driver, particularly for higher sound pressure level or maximum
accuracy. Individual drivers are used to reproduce different frequency ranges. The drivers are named subwoofers


(for very low frequencies); woofers (low frequencies); mid-range speakers (middle frequencies); tweeters (high
frequencies); and sometimes supertweeters, optimized for the highest audible frequencies. The terms for
different speaker drivers differ, depending on the application. In two-way systems there is no mid-range driver, so
the task of reproducing the mid-range sounds falls upon the woofer and tweeter. Home stereos use the
designation "tweeter" for the high frequency driver, while professional concert systems may designate them as
"HF" or "highs". When multiple drivers are used in a system, a "filter network", called a crossover, separates the
incoming signal into different frequency ranges and routes them to the appropriate driver. A loudspeaker system
with n separate frequency bands is described as "n-way speakers": a two-way system will have a woofer and a
tweeter; a three-way system employs a woofer, a mid-range, and a tweeter.
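
A crossover is essentially a complementary pair of low-pass and high-pass filters; a hedged sketch using SciPy (the 2 kHz crossover point, filter order, and test tones are illustrative choices):

    import numpy as np
    from scipy.signal import butter, sosfilt

    SAMPLE_RATE = 48_000
    CROSSOVER_HZ = 2_000    # illustrative split between woofer and tweeter

    # Second-order Butterworth low-pass and high-pass sections.
    low_sos = butter(2, CROSSOVER_HZ, btype="lowpass", fs=SAMPLE_RATE, output="sos")
    high_sos = butter(2, CROSSOVER_HZ, btype="highpass", fs=SAMPLE_RATE, output="sos")

    t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
    signal = np.sin(2 * np.pi * 100 * t) + np.sin(2 * np.pi * 8_000 * t)

    to_woofer = sosfilt(low_sos, signal)     # keeps mostly the 100 Hz component
    to_tweeter = sosfilt(high_sos, signal)   # keeps mostly the 8 kHz component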

The most common type of driver uses a lightweight diaphragm, or cone, connected to a rigid basket, or frame,
via a flexible suspension that constrains a coil of fine wire to move axially through a cylindrical magnetic gap.
When an electrical signal is applied to the voice coil, a magnetic field is created by the electric current in the
voice coil, making it a variable electromagnet. The coil and the driver's magnetic system interact, generating a
mechanical force that causes the coil (and thus, the attached cone) to move back and forth, thereby reproducing
sound under the control of the applied electrical signal coming from the amplifier.
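
The force that moves the cone follows the standard motor equation F = B · l · i (basic electromagnetism; the flux density, wire length, and current below are illustrative values):

    flux_density_t = 1.0    # magnetic flux density in the gap, in tesla (illustrative)
    wire_in_gap_m = 10.0    # total length of voice-coil wire in the gap, in metres
    current_a = 2.0         # instantaneous signal current from the amplifier, in amperes

    force_n = flux_density_t * wire_in_gap_m * current_a
    print(f"Force on the voice coil: {force_n:.1f} N")   # 20.0 N, pushing the cone outward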


Famous Loudspeaker suppliers: Peavey, Mackie, JBL, Bose, Yamaha, Wharfedale, BMB


V. Cabling

A given cable probably costs less than any other component in a sound system (unless it is a multi-channel
snake, which is pretty costly). Still, there may be hundreds of cables in a single system, so the cost can add up to
a sizable figure. Hum, crackles, lost signal due to open circuits, or failed outputs due to shorted circuits can all be
caused by a cable. If you think about it, regardless of how high the quality of your mics, mixing console,
amplifiers and loudspeakers may be, the entire system can be degraded or silenced by a bad cable. You should
never try to save money by cutting corners with cable.

A system's worth of good cable is expensive, but a high price alone does not guarantee a good product. There are
major differences between similar-looking cables. Not all wire is the same, nor are all look-alike connectors made
the same way. Even if the overall diameter, wire gauge, and general construction are similar, two cables may
have significantly different electrical and physical properties such as resistance, capacitance between conductors,
inductance between conductors, overall flexibility, shielding density, durability, ability to withstand crushing or
sharp bends, tensile strength, jacket friction (or lack thereof for use in conduits), and so forth. Almost all audio
cables in a sound reinforcement system should utilize stranded conductors, yet many same-gauge wires use a
different number of strands. More strands usually yield better flexibility and less chance of metal fatigue failure or
failure after an inadvertent nick in the cable. Even the wire itself makes a difference. Pure copper is an excellent
conductor, but lacks tensile strength.

Copper/bronze inner conductors are strong yet adequately flexible. Aluminum is strong and lightweight, but has
too much resistance for practical use in audio circuits.
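
The resistance point can be made concrete with a short calculation; the resistivity figures are approximate textbook values and the cable run is hypothetical:

    # Series resistance of a speaker cable run and the share of amplifier power it wastes.
    COPPER_RESISTIVITY = 1.7e-8     # ohm-metres (approximate)
    ALUMINUM_RESISTIVITY = 2.8e-8   # ohm-metres (approximate)

    def cable_resistance_ohm(resistivity, run_length_m, cross_section_mm2):
        round_trip_m = 2.0 * run_length_m               # the signal goes out and back
        return resistivity * round_trip_m / (cross_section_mm2 * 1e-6)

    def fraction_lost(cable_ohm, speaker_ohm=8.0):
        return cable_ohm / (cable_ohm + speaker_ohm)

    # Hypothetical 15 m run of 1.3 mm^2 (roughly 16 AWG) cable into an 8 ohm loudspeaker.
    for name, rho in (("Copper", COPPER_RESISTIVITY), ("Aluminum", ALUMINUM_RESISTIVITY)):
        r = cable_resistance_ohm(rho, 15.0, 1.3)
        print(f"{name}: {r:.2f} ohm, about {fraction_lost(r):.1%} of the power lost")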

Connectors may be well made, with low contact resistance (and low tendency to develop resistance over time),
or perhaps not. They may be well secured to the cable, with thoroughly soldered shields and inner conductors
and good strain relief, or they may be carelessly put together.

There is also the question of which type of cable to use: single-conductor or dual-conductor shielded cable? Cable with
braided, wrapped or foil shields, or cable with no shield at all? Separate, loosely bundled cables for each
channel, or a multicore snake with many channels sharing the same outer jacket?


Famous Cable suppliers: Kadas, Belden, SommerCable, etc.

