
INTRODUCTION:

Focalisation in literature is similar to point of view (POV) in film-making. A narrative in which all information presented reflects the subjective perception of a certain character is said to be internally focalised. An omniscient narrator corresponds to zero focalisation. External focalisation is the camera-eye perspective.
A novel in which no simple rules restrict the
transition between different focalisations could be
said to be unfocalised, but specific relationships
between basic types of focalisation constitute
more complex focalisation strategies; for example,
a novel could provide external focalisation
alternating with internal focalisations through three
different characters, where the second character is
never focalised except after the first, and three
other characters are never focalised at all.

HOW SOUND REACHES THE BRAIN:

Sound is the perceptual result of mechanical vibrations traveling through a medium such as air or water. Through the mechanisms of compression and rarefaction, sound waves travel through the air, bounce off the pinna and concha of the exterior ear, and enter the ear canal. The sound waves vibrate the tympanic membrane (ear drum), causing the three bones of the middle ear to vibrate, which then sends the energy through the oval window and into the cochlea, where it is transduced into neural signals by hair cells in the organ of Corti. These synapse onto spiral ganglion fibers that travel through the cochlear nerve into the brain.

NEURAL INTERACTIONS:
In vertebrates, inter-aural time differences are
known to be calculated in the superior olivary
nucleus of the brainstem. According
to Jeffress,[1] this calculation relies on delay
lines: neurons in the superior olive which accept
innervation from each ear with different
connecting axon lengths. Some cells are more
directly connected to one ear than the other, thus
they are specific for a particular inter-aural time
difference. This theory is equivalent to the
mathematical procedure of cross-correlation.
However, because Jeffress' theory cannot account for the precedence effect, in which only the first of multiple identical sounds is used to determine the sounds' location (thus avoiding confusion caused by echoes), it cannot fully explain the response. Furthermore, a number of recent physiological observations made in the midbrain and brainstem of small mammals have cast considerable doubt on the validity of Jeffress' original ideas.[2]
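The equivalence to cross-correlation noted above is easy to sketch in code. Below is a minimal Python illustration (a hypothetical helper, not taken from the source) that estimates the ITD by cross-correlating the two ear signals over physiologically plausible lags, much as a bank of Jeffress delay lines would:

import numpy as np

def estimate_itd(left, right, fs, max_itd_s=660e-6):
    """Estimate the interaural time difference (s) by cross-correlating
    the ear signals over lags within the physiological ITD range.
    A positive result means the sound reached the right ear first."""
    max_lag = int(max_itd_s * fs)
    lags = range(-max_lag, max_lag + 1)
    # correlate the left signal against lagged copies of the right one;
    # the slice excludes the samples wrapped around by np.roll
    corr = [np.dot(left[max_lag:-max_lag],
                   np.roll(right, lag)[max_lag:-max_lag]) for lag in lags]
    return lags[int(np.argmax(corr))] / fs

Each candidate lag plays the role of one delay-line neuron; the lag with the largest correlation corresponds to the most strongly driven coincidence detector.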
Neurons sensitive to inter-aural level differences (ILDs) are excited by stimulation of one ear and inhibited by stimulation of the other ear, such that the response magnitude of the cell depends on the relative strengths of the two inputs, which in turn depend on the sound intensities at the ears.
In the auditory midbrain nucleus, the inferior colliculus (IC), many ILD-sensitive neurons have response functions that decline steeply from maximum to zero spikes as a function of ILD. However, there are also many neurons with much shallower response functions that do not decline to zero spikes.
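This excitatory-inhibitory arrangement can be caricatured in a few lines of Python (a toy model; the gain and rate parameters are invented for illustration, not physiological measurements):

import numpy as np

def ei_neuron_rate(level_excit_db, level_inhib_db, gain=5.0, max_rate=100.0):
    """Toy excitatory-inhibitory (EI) neuron: firing rate is a sigmoid
    of the ILD, declining from max_rate toward zero as the inhibitory
    ear gets louder."""
    ild = level_excit_db - level_inhib_db
    return max_rate / (1.0 + np.exp(-ild / gain))

A small gain gives the steep decline from maximum to zero spikes; a larger gain gives the shallower response functions also observed in the IC.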

THE CONE OF CONFUSION:

Most mammals are adept at resolving the location of a sound source using interaural time differences and interaural level differences. However, no such time or level differences exist for sounds originating along the circumference of circular conical slices, where the cone's axis lies along the line between the two ears.

Consequently, sound waves originating at any point along a given circumference slant height will have ambiguous perceptual coordinates. That is to say, the listener will be incapable of determining whether the sound originated from the back, front, top, bottom, or anywhere else along the circumference at the base of a cone at any given distance from the ear. Of course, the importance of these ambiguities is vanishingly small for sound sources very close to or very far away from the subject, but it is the intermediate distances that are most important in terms of fitness.
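A minimal free-field sketch makes the ambiguity concrete (point ears on the interaural axis, no head shadowing; the function and its parameters are illustrative assumptions, not the article's model):

import numpy as np

def plane_wave_itd(azimuth_deg, elevation_deg, r=0.0875, c=343.0):
    """Far-field ITD (s) for two point ears separated by 2r along the
    interaural axis; positive means the right ear leads."""
    az, el = np.radians(azimuth_deg), np.radians(elevation_deg)
    return (2 * r / c) * np.sin(az) * np.cos(el)

print(plane_wave_itd(30, 0))    # source 30 deg right of front: ~2.55e-4 s
print(plane_wave_itd(150, 0))   # its front-back mirror: identical ITD
print(plane_wave_itd(30, 45))   # a point on a different cone: distinct ITD

The first two calls return the same value because both directions lie on the same cone of confusion.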

These ambiguities can be removed by tilting the head, which can introduce a shift in both the amplitude and phase of sound waves arriving at each ear. This translates the vertical orientation of the interaural axis horizontally, thereby leveraging the mechanism of localization on the horizontal plane. Moreover, even with no alteration in the angle of the interaural axis (i.e. without tilting one's head) the hearing system can capitalize on interference patterns generated by the pinnae, the torso, and even the temporary re-purposing of a hand as an extension of the pinna (e.g., cupping one's hand around the ear).

As with other sensory stimuli, perceptual disambiguation is also accomplished through integration of multiple sensory inputs, especially visual cues. Having localized a sound within the
circumference of a circle at some perceived
distance, visual cues serve to fix the location of the
sound. Moreover, prior knowledge of the location
of the sound generating agent will assist in
resolving its current location.

SOUND FOCALIZATION BY HUMAN
AUDITORY SYSTEM:

Duplex Theory

To determine the lateral input direction (left, front, right), the auditory system analyzes two kinds of ear signal information: interaural time differences and interaural level differences.
In 1907, Lord Rayleigh used tuning forks to generate monophonic excitation and studied lateral sound localization on a human head model without an auricle. He was the first to present a sound localization theory based on interaural cue differences, which is known as Duplex Theory.[7] Human ears are on different sides of the head, and thus have different coordinates in space. As shown in fig. 2, since the distances between the acoustic source and the ears differ, there is a time difference and an intensity difference between the sound signals at the two ears. These differences are called the Interaural Time Difference (ITD) and the Interaural Intensity Difference (IID), respectively.

fig.2 Duplex Theory

ITD and IID

Interaural Time Difference (ITD) between left ear (top) and right ear (bottom). [sound source: 100 ms white noise from right]

Interaural Level Difference (ILD) between left ear (left) and right ear (right). [sound source: a sweep from right]

As fig. 2 shows, for either source B1 or source B2 there will be a propagation delay between the two ears, which generates the ITD. At the same time, the head and ears may shadow high-frequency signals, which generates the IID.
• Interaural Time Difference (ITD): Sound from the right side reaches the right ear earlier than the left ear. The auditory system evaluates interaural time differences from (a) phase delays at low frequencies and (b) group delays at high frequencies.
• Many experiments demonstrate that the ITD depends on the signal frequency f. Let the angular position of the acoustic source be θ, the head radius r, and the speed of sound c; then a widely used closed form is[8] ITD ≈ 3(r/c)·sin θ at low frequencies and ITD ≈ 2(r/c)·sin θ at high frequencies (see the numerical sketch after this list). In this closed form, 0 degrees is taken to be straight ahead of the head, and counter-clockwise is positive.
• Interaural Intensity Difference (IID) or Interaural Level Difference (ILD): Sound from the right side has a higher level at the right ear than at the left ear, because the head shadows the left ear. These level differences are highly frequency dependent and increase with increasing frequency. Theoretical work demonstrates that the IID depends on the signal frequency f and on the angular position of the acoustic source θ; an often-quoted approximation is[8] IID ≈ 1.0 + (f/1000)^0.8 · sin θ (in dB).
• For frequencies below 1000 Hz, mainly ITDs are evaluated (phase delays); for frequencies above 1500 Hz, mainly IIDs are evaluated. Between 1000 Hz and 1500 Hz there is a transition zone, where both mechanisms play a role.
• Localization accuracy is 1 degree for sources in front of the listener and 15 degrees for sources to the sides. Humans can discern interaural time differences of 10 microseconds or less.[9][10]
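Here is a numerical sketch of the closed forms quoted above (the head radius, speed of sound, and the 1 kHz switch between the low- and high-frequency factors are assumed nominal values, not figures from ref. [8]):

import numpy as np

def itd_seconds(theta_deg, f_hz, r=0.0875, c=343.0):
    """Closed-form ITD sketch; the factor switches between the quoted
    low- and high-frequency approximations at an assumed 1 kHz cutoff."""
    k = 3.0 if f_hz < 1000.0 else 2.0
    return k * (r / c) * np.sin(np.radians(theta_deg))

def iid_db(theta_deg, f_hz):
    """IID in dB, per the approximation quoted above."""
    return 1.0 + (f_hz / 1000.0) ** 0.8 * np.sin(np.radians(theta_deg))

print(itd_seconds(90, 500))   # ~7.7e-4 s for a low-frequency source at the side
print(iid_db(90, 4000))       # IID grows with frequency: ~4.0 dB at 4 kHz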
Evaluation for low frequencies
For frequencies below 800 Hz, the dimensions of
the head (ear distance 21.5 cm, corresponding to
an interaural time delay of 625 µs) are smaller
than the half wavelength of the sound waves. So
the auditory system can determine phase delays
between both ears without confusion. Interaural
level differences are very low in this frequency
range, especially below about 200 Hz, so a
precise evaluation of the input direction is nearly
impossible on the basis of level differences alone.
As the frequency drops below 80 Hz it becomes
difficult or impossible to use either time difference
or level difference to determine a sound's lateral
source, because the phase difference between the
ears becomes too small for a directional
evaluation.[11]
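As a quick check of this head-size argument (taking c ≈ 343 m/s): the half wavelength at 800 Hz is λ/2 = c/(2f) = 343/(2 × 800) ≈ 0.214 m, essentially the quoted ear distance of 21.5 cm, and sound takes 0.215/343 ≈ 627 µs to travel that distance, matching the quoted interaural delay of 625 µs.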

Evaluation for high frequencies

For frequencies above 1600 Hz the dimensions of the head are greater than the length
of the sound waves. An unambiguous
determination of the input direction based on
interaural phase alone is not possible at these
frequencies. However, the interaural level
differences become larger, and these level
differences are evaluated by the auditory system.
Also, group delays between the ears can be evaluated, an effect that is more pronounced at higher frequencies; that is, if there is a sound onset, the
delay of this onset between the ears can be used
to determine the input direction of the
corresponding sound source. This mechanism
becomes especially important in reverberant
environments. After a sound onset there is a short
time frame where the direct sound reaches the
ears, but not yet the reflected sound. The auditory
system uses this short time frame for evaluating
the sound source direction, and keeps this
detected direction as long as reflections and
reverberation prevent an unambiguous direction
estimation.[12] The mechanisms described above
cannot be used to differentiate between a sound
source ahead of the hearer or behind the hearer;
therefore additional cues have to be evaluated.[13]
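The onset-based evaluation can be sketched as follows (a simplified illustration: real auditory onset detection is far more elaborate, and the threshold here is arbitrary):

import numpy as np

def onset_itd(left, right, fs, threshold=0.1):
    """Estimate the ITD from the first onset in each ear signal,
    ignoring everything after it (the echoes), loosely mimicking the
    precedence effect. A positive result: the left ear heard it first."""
    def first_onset(x):
        above = np.flatnonzero(np.abs(x) > threshold * np.max(np.abs(x)))
        return above[0] if above.size else 0
    return (first_onset(right) - first_onset(left)) / fs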

Pinna Filtering Effect Theory

fig.4 HRTF
Motivations

Duplex theory clearly shows that ITD and IID play significant roles in sound localization, but they can only handle lateral localization problems. For example, based on duplex theory, two acoustic sources located symmetrically at the right front and right back of the head generate equal ITDs and IIDs; this is called the cone model effect (the cone of confusion discussed above). Yet human ears can in fact distinguish such sources. Moreover, in natural hearing a single ear, with no ITD or IID available, can still distinguish the sources with high accuracy. To address these shortcomings of duplex theory, researchers proposed the pinna filtering effect theory.[14] The shape of the human pinna is very distinctive: it is concave with complex folds, and asymmetrical both horizontally and vertically. Reflected waves and direct waves combine to generate a frequency spectrum on the eardrum that depends on the location of the acoustic source, and the auditory system localizes the source from this spectrum. The corresponding theory is therefore called the pinna filtering effect theory.[15]
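To first order, the direct wave plus a single delayed pinna reflection acts as a comb filter, with spectral notches whose positions depend on the reflection delay, and hence on the source direction. A minimal sketch (the delays and gain are invented for illustration):

import numpy as np

def comb_magnitude(f_hz, delay_s, gain=0.5):
    """Magnitude response of direct sound plus one reflection:
    y(t) = x(t) + gain * x(t - delay); notches fall at
    f = (2k + 1) / (2 * delay)."""
    return np.abs(1.0 + gain * np.exp(-2j * np.pi * f_hz * delay_s))

f = np.array([4000.0, 6250.0, 8333.0])
print(comb_magnitude(f, 60e-6))   # first notch near 8.3 kHz
print(comb_magnitude(f, 80e-6))   # the notch shifts down to ~6.3 kHz

Shifting notch patterns of this kind are one of the spectral cues the pinna imprints on the eardrum signal.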

Math Model

The spectral cues generated by the pinna filtering effect can be represented as Head-Related Transfer Functions (HRTFs). The corresponding time-domain expressions are called Head-Related Impulse Responses (HRIRs). An HRTF can also be described as the transfer function from the free field to a specific point in the ear canal. HRTFs are usually treated as LTI systems:[8]

H_L = P_L / P_0,  H_R = P_R / P_0,

where L and R denote the left ear and right ear respectively, P_L and P_R are the amplitudes of sound pressure at the entrances of the left and right ear canals, and P_0 is the amplitude of sound pressure at the center of the head when the listener is absent. In general, the HRTFs H_L and H_R are functions of the source's angular position θ, its elevation angle φ, the distance r between the source and the center of the head, the angular frequency ω, and the equivalent dimension of the head α.
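In practice, an HRIR pair is applied by convolution. A minimal sketch (the array names are placeholders; the impulse responses themselves would come from a measured database such as those listed below):

import numpy as np

def binaural_render(mono, hrir_left, hrir_right):
    """Render a mono signal at the position encoded by one HRIR pair
    by convolving it with the left- and right-ear impulse responses."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right])  # 2 x N binaural signal

Played over headphones, the two output channels carry the ITD, ILD, and spectral cues of the measured direction.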

HRTF Database

At present, the main institutes that measure HRTF databases include the CIPIC International Lab,[16] the MIT Media Lab, the Graduate School in Psychoacoustics at the University of Oldenburg, the Neurophysiology Lab at the University of Wisconsin-Madison, and the Ames Lab of NASA. They carefully measure HRIRs from both humans and animals and share the results on the Internet for anyone who wants to study them.

fig. 5 HRIR

Other Cues for 3D Space Localization

Monaural cues
The human outer ear, i.e. the structures of
the pinna and the external ear canal, form
direction-selective filters. Depending on the sound
input direction in the median plane, different filter
resonances become active. These resonances
implant direction-specific patterns into
the frequency responses of the ears, which can be
evaluated by the auditory system (directional
bands) for vertical sound localization. Together

with other direction-selective reflections at the
head, shoulders and torso, they form the outer ear
transfer functions. These patterns in the
ear's frequency responses are highly individual,
depending on the shape and size of the outer ear.
If sound is presented through headphones, and
has been recorded via another head with different-
shaped outer ear surfaces, the directional patterns
differ from the listener's own, and problems will
appear when trying to evaluate directions in the
median plane with these foreign ears. As a consequence, front–back permutations or inside-the-head localization can appear when listening to dummy head recordings, also referred to as binaural recordings. It has been shown that
human subjects can monaurally localize high
frequency sound but not low frequency sound.
Binaural localization, however, was possible with
lower frequencies. This is likely due to the pinna
being small enough to only interact with sound
waves of high frequency.[17] It seems that people
can only accurately localize the elevation of
sounds that are complex and include frequencies
above 7,000 Hz, and a pinna must be present.[18]

Dynamic binaural cues

When the head is stationary, the binaural cues
for lateral sound localization (interaural time
difference and interaural level difference) do not
give information about the location of a sound in
the median plane. Identical ITDs and ILDs can be
produced by sounds at eye level or at any
elevation, as long as the lateral direction is
constant. However, if the head is rotated, the ITD
and ILD change dynamically, and those changes
are different for sounds at different elevations. For
example, if an eye-level sound source is straight
ahead and the head turns to the left, the sound
becomes louder (and arrives sooner) at the right
ear than at the left. But if the sound source is
directly overhead, there will be no change in the
ITD and ILD as the head turns. Intermediate
elevations will produce intermediate degrees of
change, and if the presentation of binaural cues to
the two ears during head movement is reversed,
the sound will be heard behind the
listener.[13][19] Hans Wallach[20] artificially altered a
sound’s binaural cues during movements of the
head. Although the sound was objectively placed
at eye level, the dynamic changes to ITD and ILD
as the head rotated were those that would be
produced if the sound source had been elevated.
In this situation, the sound was heard at the
synthesized elevation. The fact that the sound
sources objectively remained at eye level
prevented monaural cues from specifying the
elevation, showing that it was the dynamic change
in the binaural cues during head movement that
allowed the sound to be correctly localized in the
vertical dimension. The head movements need not
be actively produced; accurate vertical localization
occurred in a similar setup when the head rotation
was produced passively, by seating the blindfolded
subject in a rotating chair. As long as the dynamic
changes in binaural cues accompanied a
perceived head rotation, the synthesized elevation
was perceived.[13]
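The geometry behind this can be sketched with the same simplified free-field ITD model used earlier (point ears, no head shadowing; the conventions are illustrative assumptions):

import numpy as np

def itd_after_head_turn(az_deg, el_deg, turn_deg, r=0.0875, c=343.0):
    """Far-field ITD (s) once the head has rotated by turn_deg while
    the source stays fixed in the world."""
    rel_az = np.radians(az_deg - turn_deg)
    return (2 * r / c) * np.sin(rel_az) * np.cos(np.radians(el_deg))

for el in (0, 45, 90):  # eye level, intermediate, directly overhead
    change = itd_after_head_turn(0, el, 20) - itd_after_head_turn(0, el, 0)
    print(el, change)   # largest change at eye level, none overhead

The printed ITD changes shrink with elevation and vanish for a source directly overhead, which is exactly the elevation information the dynamic cue carries.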

EXPERIMENT:

Objective:
Scientists have been able to focus sound waves by transmitting an ultrasonic beam in a straight line that gives off audible sound along its path. The only disadvantage of this approach is its high cost. This project was designed to develop a low-cost process for focusing sound using a parabolic dish and sound-absorbent material.

Methods:
The project began with the building of a sound box in which to test different materials. The box (20.75 in. x 15 in. x 16 in.), made of particleboard, had one open end and served as a confined space for testing sound characteristics. A speaker was suspended in a cradle and was capable of moving inward and outward 1 in. A constant sound frequency was transmitted at a level of 105 dB through the speaker. The project consisted of 3 small tests and 1 final test. The 1st test was to determine if the material of the parabolic dish affected its sound-focusing capabilities. Measurements were taken from many locations around each parabolic dish (inside the box) using a decibel meter. The 2nd test determined if the position of the speaker affected how sound was focused. Measurements were taken with the decibel meter at many different locations to determine if the speaker directed sound best from 2, 3, or 4 in. from the back of each dish. The 3rd test was to determine whether sheetrock, styrofoam, or fiberglass insulation absorbed the most sound. These materials were cut to line the walls of the sound box. For each material, sound measurements were taken 1 ft. from the outside of the box. The final test combined the results of the previous 3 tests to determine if it is possible to focus sound.
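For reference, the focal length of a parabolic dish follows from its diameter and depth as f = D²/(16·d). A small sketch (the dish dimensions here are hypothetical; the report does not list them):

def parabola_focal_length(diameter_in, depth_in):
    """Focal length (in.) of a parabolic dish: f = D**2 / (16 * d)."""
    return diameter_in ** 2 / (16.0 * depth_in)

# e.g., a 12 in. dish that is 2.5 in. deep has its focus ~3.6 in. from
# the vertex, in the range of the 2-4 in. speaker positions tested above
print(parabola_focal_length(12, 2.5))

Placing the speaker at the focus lets the dish collimate sound forward, which is why the speaker's position relative to each dish matters.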

Results:
The 1st test indicated that the glass dish was the most capable of focusing sound. The 2nd test showed that sound waves were more focused when the speaker was placed 2 in. from the rear of the dish. The 3rd test showed that fiberglass insulation was the most capable of absorbing sound. Thus, the final test consisted of measuring the sound with the speaker 2 in. from the back of the glass dish, which was situated inside the fiberglass-insulation-coated walls of the box. The sound was able to be focused 3-5 ft. in front of the dish, while the spread of sound was limited in other directions.

Conclusion:
The data collected supported the hypothesis that
sound could be focused using a parabolic dish and
sound-absorbent material. Also, this process of focusing sound is very cost-effective.

Bibliography:
Parsons, Allan (2017-06-27). "Focalisation". compendium.kosawese.net. Retrieved 2017-12-14.
www.sciencefairproject.com
www.researchgate.com
www.wikipedia.com
