HOW SOUND REACHES THE BRAIN:
NEURAL INTERACTIONS:
In vertebrates, inter-aural time differences are known to be calculated in the superior olivary nucleus of the brainstem. According to Jeffress,[1] this calculation relies on delay lines: neurons in the superior olive that receive innervation from each ear through connecting axons of different lengths. Some cells are more directly connected to one ear than to the other, and are thus specific for a particular inter-aural time difference. This theory is equivalent to the mathematical procedure of cross-correlation.
However, because Jeffress' theory cannot account for the precedence effect, in which only the first of multiple identical sounds is used to determine the sounds' location (thus avoiding confusion caused by echoes), it cannot fully explain the response. Furthermore, a number of recent physiological observations made in the midbrain and brainstem of small mammals have cast considerable doubt on the validity of Jeffress' original ideas.[2]
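The equivalence between the delay-line idea and cross-correlation can be sketched directly: estimate the inter-aural time difference of a delayed signal pair by finding the lag that maximizes their correlation. The tone, sample rate, delay and lag range below are illustrative assumptions, not values from the text.

```python
import numpy as np

fs = 44100                       # sample rate in Hz (illustrative)
t = np.arange(0, 0.01, 1 / fs)   # 10 ms of signal (441 samples, 5 full cycles)
itd_samples = 20                 # true inter-aural delay, in samples

left = np.sin(2 * np.pi * 500 * t)   # 500 Hz tone at the left ear
right = np.roll(left, itd_samples)   # same tone, delayed at the right ear

# Cross-correlate over a range of candidate lags; the peak lag is the
# estimated ITD, the analogue of the most strongly driven delay-line cell.
lags = np.arange(-40, 41)
corr = [np.sum(left * np.roll(right, -lag)) for lag in lags]
estimated_itd = int(lags[np.argmax(corr)])
print(estimated_itd)  # recovers the 20-sample delay
```

A periodic tone makes the correlation ambiguous at lags one full period apart, which is why the candidate lag range is kept smaller than the tone's period.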
Neurons sensitive to inter-aural level differences (ILDs) are excited by stimulation of one ear and inhibited by stimulation of the other ear, such that the response magnitude of the cell depends on the relative strengths of the two inputs, which in turn depend on the sound intensities at the ears.
In the auditory midbrain nucleus, the inferior colliculus (IC), many ILD-sensitive neurons have response functions that decline steeply from maximum to zero spikes as a function of ILD. However, there are also many neurons with much shallower response functions that do not decline to zero spikes.
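The excitatory/inhibitory interaction described above can be sketched with a simple sigmoid rate model; the spike rate falls from maximum toward zero as the ILD shifts in favour of the inhibitory ear. All parameter values here are illustrative, not measured.

```python
import math

def response_rate(ild_db, max_rate=100.0, slope=0.5, midpoint=0.0):
    """Toy ILD response function: excitation from one ear, inhibition from
    the other, giving a sigmoidal decline in spike rate with ILD."""
    return max_rate / (1.0 + math.exp(slope * (ild_db - midpoint)))

for ild_db in (-20, 0, 20):
    print(ild_db, round(response_rate(ild_db), 1))
```

A neuron with one of the shallower response functions mentioned above could be modelled the same way with a smaller `slope` and a nonzero baseline rate added to the output.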
THE CONE OF CONFUSION:
SOUND LOCALIZATION BY THE HUMAN AUDITORY SYSTEM:
To determine the lateral input direction (left,
front, right), the auditory system analyzes the
following ear signal information:
Duplex Theory
In 1907, Lord Rayleigh used tuning forks to generate monophonic excitation and studied lateral sound localization on a human head model without auricles. He first presented the interaural-cue-based sound localization theory, which is known as Duplex Theory.[7] Human ears are on different sides of the head and thus have different coordinates in space. As shown in fig. 2, since the distances between the acoustic source and the two ears differ, there are a time difference and an intensity difference between the sound signals at the two ears. These differences are called the Interaural Time Difference (ITD) and the Interaural Intensity Difference (IID), respectively.
fig. 2 Duplex Theory
Interaural Level Difference (ILD) between left ear (left) and right ear (right). [sound source: a sweep from right]
[figure: interaural time differences from (a) phase delays at low frequencies and (b) group delays at high frequencies]
Many experiments demonstrate that ITD depends on the signal frequency f. Suppose the angular position of the acoustic source is θ, the head radius is r, and the speed of sound is c; then the ITD is given approximately by:[8]
ITD = 3(r/c) sin θ at low frequencies (phase delay), and
ITD = 2(r/c) sin θ at high frequencies (group delay).
In this closed form, 0 degrees is taken to be straight ahead of the head, and counter-clockwise is positive.
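The two spherical-head approximations can be written as one small function. The head radius, speed of sound, and the frequency at which the model switches from the low- to the high-frequency form are illustrative assumptions, not values from the text.

```python
import math

def itd_seconds(theta_rad, f_hz, r=0.0875, c=343.0):
    """ITD for a rigid spherical-head model: 3(r/c)sin(theta) at low
    frequencies (phase delay), 2(r/c)sin(theta) at high frequencies
    (group delay). r in metres, c in m/s; defaults are illustrative."""
    k = 3.0 if f_hz <= 1500 else 2.0
    return k * (r / c) * math.sin(theta_rad)

# A 500 Hz source 90 degrees to the side: a few hundred microseconds.
print(round(itd_seconds(math.pi / 2, 500) * 1e6, 1))
```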
Interaural Intensity Difference (IID) or Interaural Level Difference (ILD): sound from the right side has a higher level at the right ear than at the left ear, because the head shadows the left ear. These level differences are highly frequency dependent and increase with increasing frequency. Theoretical studies demonstrate that IID depends on the signal frequency f and the angular position of the acoustic source θ; one commonly used empirical approximation is:[8]
IID = 1.0 + (f/1000)^0.8 · sin θ, with f in Hz.
For frequencies below 1000 Hz, mainly ITDs are evaluated (phase delays); for frequencies above 1500 Hz, mainly IIDs are evaluated. Between 1000 Hz and 1500 Hz there is a transition zone where both mechanisms play a role.
Localization accuracy is 1 degree for sources in front of the listener and 15 degrees for sources to the sides. Humans can discern interaural time differences of 10 microseconds or less.[9][10]
Evaluation for low frequencies
For frequencies below 800 Hz, the dimensions of
the head (ear distance 21.5 cm, corresponding to
an interaural time delay of 625 µs) are smaller
than the half wavelength of the sound waves. So
the auditory system can determine phase delays
between both ears without confusion. Interaural
level differences are very low in this frequency
range, especially below about 200 Hz, so a
precise evaluation of the input direction is nearly
impossible on the basis of level differences alone.
As the frequency drops below 80 Hz it becomes
difficult or impossible to use either time difference
or level difference to determine a sound's lateral
source, because the phase difference between the
ears becomes too small for a directional
evaluation.[11]
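The ~800 Hz boundary above can be checked arithmetically: the half wavelength c/(2f) falls below the 21.5 cm ear distance at roughly that frequency. The speed of sound c = 343 m/s is an assumed round-number value.

```python
c = 343.0             # assumed speed of sound in m/s
ear_distance = 0.215  # ear distance from the text, in metres

# Print whether the half wavelength still exceeds the ear distance,
# i.e. whether interaural phase remains unambiguous at that frequency.
for f_hz in (400, 800, 1600):
    half_wavelength = c / (2 * f_hz)
    print(f_hz, round(half_wavelength, 3), half_wavelength > ear_distance)
```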
fig.4 HRTF
Motivations
Duplex theory clearly points out that ITD and IID play significant roles in sound localization, but they can only deal with lateral localization problems. For example, based on duplex theory, if two acoustic sources are located symmetrically at the right front and right back of the human head, they will generate equal ITDs and IIDs; this is called the cone of confusion effect. However, human ears can actually distinguish these sources. Besides that, in natural hearing a single ear alone, with no ITD or IID available, can still distinguish sources with high accuracy. Because of these shortcomings of duplex theory, researchers proposed the pinna filtering effect theory.[14] The shape of the human pinna is very special: it is concave, with complex folds, and is asymmetrical both horizontally and vertically. The reflected waves and the direct waves generate a frequency spectrum on the eardrum that depends on the location of the acoustic source, and the auditory nerves localize the source from this frequency spectrum. The corresponding theory is called the pinna filtering effect theory.[15]
Math Model
These spectral cues generated by the pinna filtering effect can be presented as Head-Related Transfer Functions (HRTFs). The corresponding time-domain expressions are called Head-Related Impulse Responses (HRIRs). An HRTF can also be described as the transfer function from the free field to a specific point in the ear canal. HRTFs are usually treated as LTI systems:[8]
H_L = P_L / P_0,  H_R = P_R / P_0,
where L and R denote the left and right ear respectively, P_L and P_R are the amplitudes of the sound pressure at the entrances of the left and right ear canals, and P_0 is the amplitude of the sound pressure at the center of the head coordinate system when the listener is absent. In general, the HRTFs H_L and H_R are functions of the source's angular position θ, its elevation angle φ, the distance r between the source and the center of the head, the angular frequency ω, and the equivalent dimension of the head a.
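Treating the HRTFs as LTI systems means a mono source can be rendered binaurally by convolving it with a measured HRIR pair. The sketch below uses short placeholder impulse responses, not measured data from any of the databases.

```python
import numpy as np

fs = 44100
rng = np.random.default_rng(0)
source = rng.standard_normal(fs // 10)          # 100 ms of noise, standing in for P_0

hrir_left = np.array([0.0, 1.0, 0.5, 0.25])     # placeholder left-ear HRIR
hrir_right = np.array([0.6, 0.3, 0.15, 0.075])  # placeholder right-ear HRIR

# Convolution in time is the equivalent of multiplying by H_L, H_R in frequency.
ear_left = np.convolve(source, hrir_left)
ear_right = np.convolve(source, hrir_right)
print(len(ear_left), len(ear_right))
```

In practice the HRIRs would be selected (or interpolated) from a measured database according to the source's azimuth and elevation.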
HRTF Database
At present, the main institutes that maintain measured HRTF databases include the CIPIC[16] International Lab, the MIT Media Lab, the Graduate School in Psychoacoustics at the University of Oldenburg, the Neurophysiology Lab at the University of Wisconsin-Madison, and NASA's Ames Lab. They carefully measure HRIRs from both humans and animals and share the results on the Internet for research use.
fig. 5 HRIR
Together with other direction-selective reflections at the head, shoulders and torso, the pinna's filtering patterns form the outer ear transfer functions. These patterns in the ear's frequency responses are highly individual, depending on the shape and size of the outer ear. If sound is presented through headphones, and has been recorded via another head with differently shaped outer ear surfaces, the directional patterns differ from the listener's own, and problems will appear when trying to evaluate directions in the median plane with these foreign ears. As a consequence, front–back permutations or inside-the-head localization can appear when listening to dummy head recordings, also referred to as binaural recordings. It has been shown that human subjects can monaurally localize high-frequency sound but not low-frequency sound; binaural localization, however, was possible with lower frequencies. This is likely because the pinna is small enough to interact only with sound waves of high frequency.[17] It seems that people can only accurately localize the elevation of sounds that are complex and include frequencies above 7,000 Hz, and a pinna must be present.[18]
EXPERIMENT:
Objective:
Scientists have been able to focus sound waves by
transmitting an ultrasonic wave in a straight line that
can give off audible sound in its path. The only
disadvantage to this is the high cost. This project was
designed to develop a low-cost process of focusing
sound using a parabolic dish and sound-absorbent
material.
Methods:
The project began with the building of a sound box in which to test different materials. The box (20.75 in. x 15 in. x 16 in.), made of particleboard, had one open end and served as a confined space in which to test the sound characteristics. A speaker was suspended in a cradle and was capable of moving inward and outward 1 in. A constant sound frequency was transmitted at a level of 105 dB through the speaker. The project consisted of 3 small tests and 1 final test. The 1st test was to determine if the material of the parabolic dish affected its sound-focusing capabilities. Measurements were taken from many locations around each parabolic dish (inside the box) using a decibel meter. The 2nd test determined if the position of the speaker affected how sound was focused. Measurements were taken with the decibel meter at many different locations to determine if the speaker directed sound best from 2, 3, or 4 in. from the back of each dish. The 3rd test was to determine whether sheetrock, Styrofoam, or fiberglass insulation absorbed the most sound. These materials were cut to line the walls of the sound box. For each material, sound measurements were taken 1 ft. from the outside of the box. The final test combined the results of the previous 3 tests to determine if it is possible to focus sound.
Results:
The 1st test indicated that the glass dish was the most capable of focusing sound. The 2nd test yielded that sound waves were more focused when the speaker was placed 2 in. from the rear of the dish. The 3rd test showed that fiberglass insulation was the most capable of absorbing sound. Thus, the final test consisted of measuring the sound with the speaker 2 in. from the back of the glass dish, which was situated inside the fiberglass-insulation-coated walls of the box. The sound was able to be focused 3-5 ft. in front of the dish, while the spread of sound was limited in other directions.
Conclusion:
The data collected supported the hypothesis that
sound could be focused using a parabolic dish and
sound-absorbent material. Also, this process of focusing
sound is very cost-effective.
Bibliography:
Parsons, Allan (2017-06-27). "Focalisation". compendium.kosawese.net. Retrieved 2017-12-14.
www.sciencefairproject.com
www.researchgate.com
www.wikipedia.com