
3D sound localization in noise

Introduction
In the last 30 years, more than 320 000 patients suffering from severe to profound
bilateral deafness around the world, including 7 000 cases in France, have been
fitted with a cochlear implant (CI). Most of them recovered effective speech
intelligibility in daily life and, in the case of deaf children, gained an unprecedented
opportunity to access oral language. Despite these successes, several
domains of auditory perception remain challenging for CI users, including
spatial hearing.
Spatial hearing is a fundamental ability for humans and other animals. Accurate
localization of sounds allows for the construction of maps of the environment
beyond the limits of the visual field, guides head and eye orienting behaviour,
plays a crucial role in multisensory integration, supports auditory scene analysis
and can improve discrimination of auditory signals from noise. In everyday
environments, spatial hearing is three-dimensional, multisensory and active. We
estimate azimuth, elevation and distance of sounds. Most importantly, to resolve
perceptual ambiguity in sound localization, we explore the auditory environment
with head and body movements (Andéol & Simpson, 2016).
Several studies have examined spatial auditory perception in CI users, typically
documenting that their sound localization performance in the azimuthal plane
is significantly impaired in noisy environments compared with that of normal-
hearing (NH) subjects. For instance, Kerber and Seeber (2012) reported an
average root mean square (RMS) error of 30° (ranging from 14.8° to 44.8°) for
bilateral CI (BCI) users, whereas the RMS error of NH subjects was 5° (ranging
from 2.8° to 7.3°). However, when spatial hearing abilities are investigated, most
of the naturalistic aspects of spatial hearing are considerably constrained: (i)
sound sources originate from a limited set of positions, typically varying only
along the azimuthal plane; (ii) head movements are constrained to ensure
reproducibility of sound characteristics across trials; (iii) the response is often
limited to one dimension; (iv) multisensory contributions to sound localization
(e.g., visual information) are also often poorly controlled. These approaches limit
our understanding of perceived auditory space, and might overestimate sound
localization performance by reducing stimulation and response uncertainty.
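To make the reported metric concrete, here is a minimal sketch of how an RMS azimuthal localization error of the kind reported by Kerber and Seeber (2012) can be computed from per-trial data; the values in the arrays are invented placeholders, not experimental results.

import numpy as np

# Hypothetical per-trial data: target and response azimuths in degrees.
target_az = np.array([-60.0, -30.0, 0.0, 30.0, 60.0])
response_az = np.array([-42.0, -25.0, 8.0, 55.0, 38.0])

# Signed localization error per trial, wrapped into [-180, 180).
error = (response_az - target_az + 180.0) % 360.0 - 180.0

# Root mean square error: a single summary of localization accuracy.
rms_error = np.sqrt(np.mean(error ** 2))
print(f"RMS azimuthal error: {rms_error:.1f} deg")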
In Lyon, we recently developed a novel methodology based on immersive reality
and 3D motion capture to evaluate and record for the first time the naturalistic
aspects of spatial hearing (SPHERE, European patent n° WO2017203028A1).
Free-field sound position and behavioural responses are monitored and recorded
using a motion tracking system (VIVE) and three tracker devices (for the head,
the hand and the speaker). SPHERE was conducted with NH listeners and CI
patients. We confirmed the difficulties of BCI users in the azimuthal plane (mean
absolute error of 31.5° ± 30.6° vs. 16.6° ± 17° for NH subjects). Moreover, we
discovered difficulties for BCI users also in elevation (mean absolute error of
86.1° ± 31.2° vs. 23.6° ± 23.2° for NH subjects) and in depth, since they did not
manage to discriminate sound sources presented at three different distances.
Interestingly, our precise kinematic monitoring of the active participant also
allowed us to explore the head-movement strategies used by CI users and controls
during sound emission. We found that free head movement allowed BCI users to
significantly improve their spatial auditory perception in azimuth and elevation
(mean decrease in absolute error: 17° in azimuth and 26° in elevation, vs. 5° in
azimuth and 6.5° in elevation for NH subjects). These results suggest that head
movements could be one possible approach to rehabilitating spatial auditory
perception after cochlear implantation.
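To illustrate how such azimuth, elevation and depth errors can be derived from motion-capture data, the sketch below converts tracked 3D positions into head-centred coordinates; the coordinate convention, the simplifying assumption that the head faces straight ahead, and all numeric values are assumptions made for the example, not our actual processing pipeline.

import numpy as np

def head_centred_angles(head_pos, target_pos):
    """Azimuth/elevation (deg) and distance (m) of a target relative
    to the head position. Assumed room frame: x = right, y = up,
    z = forward; for simplicity the head is taken to face +z (the
    real analysis would use the tracked head orientation)."""
    v = np.asarray(target_pos) - np.asarray(head_pos)
    distance = np.linalg.norm(v)
    azimuth = np.degrees(np.arctan2(v[0], v[2]))        # left/right
    elevation = np.degrees(np.arcsin(v[1] / distance))  # up/down
    return azimuth, elevation, distance

# Illustrative positions (metres): head, true speaker, pointing response.
head = [0.0, 1.2, 0.0]
speaker = [0.5, 1.4, 1.0]
response = [0.3, 1.1, 1.1]

az_t, el_t, d_t = head_centred_angles(head, speaker)
az_r, el_r, d_r = head_centred_angles(head, response)
print(f"abs. error: {abs(az_r - az_t):.1f} deg azimuth, "
      f"{abs(el_r - el_t):.1f} deg elevation, {abs(d_r - d_t):.2f} m depth")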

Our first question is whether active listening (head movements during sound
emission) benefits CI patients in 3D sound localization in noise. Indeed,
developing new and effective approaches to promote naturalistic spatial hearing
is of utmost importance, especially since auditory rehabilitation after surgery is
nowadays exclusively oriented towards the improvement of speech comprehension.
Our second question is whether promoting spatial hearing re-learning in noise in
BCI users benefits the abilities that rely on spatial hearing (speech comprehension,
listening effort, auditory scene analysis, hearing in noise).

Objectives
First, we aim to better characterize the sound localization deficits of CI users in 3D
in a noisy environment (e.g. a cocktail-party situation), in line with our previous
work performed in silence. Then, we want to propose a specific spatial hearing
rehabilitation in noise for CI users specifically impaired by sound localization
difficulties, and to evaluate the impact of this rehabilitation at the behavioural level.

Material and Methods

The virtual reality system used for both experiments (evaluation and rehabilitation)
comprises a Head-Mounted Display (HMD-VIVE) worn by participants, and two
infrared cameras that capture the positions of infrared sensors and rigid bodies
within a wide 3D space (see figure below). The first rigid body is attached on top
of the speaker and tracks the loudspeaker’s coordinates; the second rigid body is
attached to the pointing finger and tracks hand-pointing responses; finally,
infrared sensors embedded in the HMD track head and HMD positions.
Importantly, because the VIVE incorporates an eye-tracking system, it allows
continuous monitoring of the participant’s eye position. In brief, all of the
participants’ behavioural responses to the sound (hand, head and eyes) can be
traced with millisecond and millimetre precision. Critically, our apparatus permits
full control over the visual stimulation provided to participants, hence we will be
able to deliver sounds in free field without providing any visual cues as to their
location (unlike typical setups, in which many perceptual priors are available to
participants through vision). Participants will not be aware of the speaker’s
position at any time, and they will not receive feedback about their localization
accuracy. This new system is portable and adapted for use in clinical settings.
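As a minimal sketch of the kind of record such a setup produces, the snippet below defines a timestamped sample combining head, hand, speaker and gaze data, polled at the tracker frame rate; the field names and the read_trackers() callback are hypothetical placeholders for illustration and do not describe the actual SPHERE software.

import time
from dataclasses import dataclass

@dataclass
class TrackingSample:
    # One motion-capture frame; all fields are illustrative assumptions.
    t: float            # timestamp in seconds (millisecond precision)
    head_pos: tuple     # (x, y, z) of the HMD, metres
    hand_pos: tuple     # (x, y, z) of the finger rigid body, metres
    speaker_pos: tuple  # (x, y, z) of the speaker rigid body, metres
    gaze_dir: tuple     # unit gaze vector from the eye tracker

def record_trial(read_trackers, duration_s=3.0, rate_hz=90.0):
    """Poll a hypothetical read_trackers() callback at the tracker
    frame rate and return the list of samples for one trial."""
    samples, t0 = [], time.monotonic()
    while time.monotonic() - t0 < duration_s:
        head, hand, speaker, gaze = read_trackers()
        samples.append(TrackingSample(time.monotonic() - t0,
                                      head, hand, speaker, gaze))
        time.sleep(1.0 / rate_hz)
    return samples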

For the sound localization task, participants, comfortably seated and wearing the
HMD-VIVE, will have to localize, as accurately as possible, the position of a sound
delivered by a speaker in a defined 3D space around them. Three other
loudspeakers (one frontal and two lateral) will deliver noise at the same time (this
noise will be fixed at 60 dB). The impact of head movements on sound localization
performance will be evaluated by comparing trials performed with free head
movements and trials performed with a fixed head posture.
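A minimal sketch of this head-free versus head-fixed comparison is shown below, using a paired Wilcoxon signed-rank test on per-participant mean absolute errors; the numbers are placeholders, and the choice of test is an assumption for illustration, not a pre-registered analysis plan.

import numpy as np
from scipy.stats import wilcoxon

# Placeholder per-participant mean absolute azimuth errors (deg),
# one value per participant in each listening condition.
err_head_fixed = np.array([34.0, 28.5, 41.2, 30.7, 25.9])
err_head_free = np.array([18.1, 22.4, 27.6, 19.9, 21.3])

# Paired comparison: does free head movement reduce localization error?
stat, p = wilcoxon(err_head_fixed, err_head_free)
print(f"median improvement: "
      f"{np.median(err_head_fixed - err_head_free):.1f} deg "
      f"(Wilcoxon W={stat:.0f}, p={p:.3f})")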

Spatial rehabilitation will rely on multisensory-motor training, using visual and
auditory cues as well as actions directed towards the active sound sources.
Indeed, recent studies have shown that multisensory training can promote
subsequent unisensory learning. The hypothesis is that the more reliable visual
information can help the brain to optimally calibrate the association between
auditory cues and spatial locations. Furthermore, based on our preliminary
findings on the advantage of active behaviour during sound localization in CI
users, we will build our training paradigms to also promote active interactions
with a sound of interest in a noisy environment. The goal of this rehabilitation is
to show that improved spatial hearing has consequences beyond sound
localization alone, impacting essential abilities for interaction with the
physical and social environment (speech understanding, listening effort, etc.).
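To make the training principle concrete, here is a deliberately simplified sketch of one audio-visual training trial, in which the pointing response is followed by visual feedback at the true source position; all helper functions are hypothetical placeholders, since the proposal does not specify a software implementation.

def run_training_trial(play_target_in_noise, get_pointing_response,
                       show_visual_feedback, error_deg, threshold_deg=10.0):
    """One multisensory training trial (hypothetical API):
    1. play the target sound embedded in background noise;
    2. collect the participant's hand-pointing response;
    3. show the true source position visually in the HMD, so that the
       more reliable visual signal can recalibrate auditory space."""
    target = play_target_in_noise()       # returns the true 3D position
    response = get_pointing_response()    # tracked finger endpoint
    show_visual_feedback(target)          # audio-visual error signal
    return error_deg(target, response) < threshold_deg

Displaying the true position only after the response keeps listening purely auditory during the trial, while still providing the visual error signal that, under our hypothesis, recalibrates the association between auditory cues and spatial locations.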
