

Smart Guiding Glasses for Visually Impaired People in Indoor Environment
Jinqiang Bai, Shiguo Lian, Member, IEEE, Zhaoxiang Liu, Kai Wang, Dijun Liu


Abstract—To overcome the travelling difficulty of the visually impaired group, this paper presents a novel ETA (Electronic Travel Aid): a smart guiding device in the shape of a pair of eyeglasses that guides these people efficiently and safely. Different from existing works, a novel multi-sensor fusion based obstacle avoiding algorithm is proposed, which utilizes both the depth sensor and the ultrasonic sensor to solve the problems of detecting small obstacles and transparent obstacles, e.g. the French door. For totally blind people, three kinds of auditory cues were developed to inform them of the direction in which they can go ahead. For weak sighted people, visual enhancement, which leverages the AR (Augmented Reality) technique and integrates the traversable direction, is adopted. A prototype consisting of a pair of display glasses and several low-cost sensors was developed, and its efficiency and accuracy were tested by a number of users. The experimental results show that the smart guiding glasses can effectively improve the user's travelling experience in complicated indoor environments. Thus it serves as a consumer device for helping visually impaired people to travel safely.

Index Terms—AR, depth sensor, ETA, sensor fusion, vision enhancement

Manuscript received July 1, 2017; accepted August 30, 2017. Date of publication September 5, 2017. This work was supported by CloudMinds Technologies Inc. (Corresponding author: J. Bai.)
Jinqiang Bai is with Beihang University, Beijing, 10083, China (e-mail: baijinqiang@buaa.edu.cn).
Shiguo Lian, Zhaoxiang Liu, and Kai Wang are with the AI Department, CloudMinds Technologies Inc., Beijing, 100102, China (e-mail: {scott.lian, robin.liu, kai.wang}@cloudminds.com).
Dijun Liu is with DT-LinkTech Inc., Beijing, 10083, China (e-mail: liudijun@datang.com).
Digital Object Identifier 10.1109/TCE.2017.014980

I. INTRODUCTION

ACCORDING to the official statistics of the World Health Organization, there were about 285 million visually impaired persons in the world as of 2011: about 39 million are completely blind and 246 million have weak sight [1]. This number will increase rapidly as the baby boomer generation ages [2]. These visually impaired people have great difficulty in perceiving and interacting with their surroundings, especially unfamiliar ones. Fortunately, there are some navigation systems or tools available for visually impaired individuals. Traditionally, most people rely on the white cane for local navigation, constantly swaying it in front of them for obstacle detection [3]. However, the cane cannot adequately convey all the necessary information, such as volume or distance [4]. Comparably, an ETA (Electronic Travel Aid) can provide more information about the surroundings by integrating multiple electronic sensors, and ETAs have proved effective in improving the visually impaired person's daily life [4]; the device presented in this work belongs to this category.

RGB-D (Red, Green, Blue and Depth) sensor based ETAs [5], [6] can detect obstacles more easily and precisely than schemes based on other sensors (e.g. ultrasonic sensor, mono-camera, etc.). However, a drawback of the depth sensor is that it has a limited working range for measuring the distance of an obstacle and cannot work well in the face of transparent objects, such as glass, French windows, French doors, etc. To overcome this limitation, a multi-sensor fusion based obstacle avoiding algorithm, which utilizes both the depth sensor and the ultrasonic sensor, is proposed in this work.

Totally blind people can be informed through auditory and/or tactile feedback [7]. Tactile feedback does not block the auditory sense, which is the most important perceptual input source. However, such an approach has the drawbacks of high power consumption and large size, which makes it unsuitable for a wearable ETA (like the glasses proposed in this work). Thus, sound or synthetic voice is the option for the totally blind. Some sound feedback based ETAs map the processed RGB image and/or depth image to acoustic patterns [8] or semantic speech [9] to help the blind perceive the surroundings. But the blind user still needs to interpret the feedback sound and decide where to go by himself, so such systems can hardly ensure that the blind user makes the right decision based on the feedback sound. Focusing on this problem, three kinds of auditory cues, which are converted from the traversable direction (produced by the multi-sensor fusion based obstacle avoiding algorithm), were developed in this paper for directly guiding the user where to go.

Since weak sighted people retain some degree of visual perception, and vision can provide more information than other senses, e.g. touch and hearing, visual enhancement, which uses the popular AR (Augmented Reality) technique [10], [11] to display the surroundings and the feasible direction on the eyeglasses, is proposed to help such users avoid obstacles.

The rest of the paper is organized as follows. Section II reviews the related work on guiding visually impaired people. The proposed smart guiding glasses are presented in Section III. Section IV shows some experimental results and demonstrates the effectiveness and robustness of the proposed system. Finally, some conclusions are drawn in Section V.

II. RELATED WORK

As this work focuses on obstacle avoidance and guiding information feedback, the related work in these two fields is reviewed in this section.

A. Obstacle Avoidance
There is a vast literature on obstacle detection and avoidance. According to the sensor type, obstacle avoidance methods can be categorized as ultrasonic sensor based methods [12], laser scanner based methods [13], and camera based methods [5], [6], [14]. An ultrasonic sensor based method can measure the distance of an obstacle and compare it with a given distance threshold to decide whether to go ahead, but it cannot determine the exact direction of travel, and it may suffer from interference among the sensors themselves if an ultrasonic radar (ultrasonic sensor array) is used, or from other signals in the indoor environment. Although laser scanner based methods are widely used in mobile robot navigation for their high precision and resolution, the laser scanner is expensive, heavy and power-hungry, so it is not suitable for a wearable navigation system. As for camera based methods, there are many approaches based on different cameras, such as mono-camera, stereo-camera, and RGB-D camera. Based on the mono-camera, some methods process the RGB image to detect obstacles by, e.g., floor segmentation [15], [16], deformable grid based obstacle detection [8], etc. However, these methods are too computationally expensive for real-time applications, and they can hardly measure the distance of the obstacle. To measure the distance, some stereo-camera based methods have been proposed. For example, the method in [17] uses local window based matching algorithms for estimating the distance of obstacles, and the method in [18] uses a genetic algorithm to generate dense disparity maps that can also estimate the distance of obstacles. However, these methods fail in low-texture or low-light scenarios, which cannot ensure secure navigation. Recently, RGB-D cameras have been widely used in many applications [5], [14], [19]-[21] for their low cost, good miniaturization and ability to provide rich information. RGB-D cameras provide both dense range information from active sensing and color information from a passive sensor such as a standard camera. The RGB-D camera based method in [5] combines range information with color information to extend the floor segmentation to the entire scene for detecting obstacles in detail. The one in [14] builds a 3D (3-Dimensional) voxel map of the environment and analyzes 3D traversability for obstacle avoidance. But these methods are constrained to scenarios with non-transparent objects due to the imperfection of the depth camera.

B. Guiding Information Feedback
There are three main techniques for providing guiding information to visually impaired people [22], i.e., haptic, audio and visual. Haptic feedback based systems often use vibrators on a belt [7], on a helmet [23] or in a backpack [14]. Although they interfere far less with sensing the environment, they can hardly represent complicated information and require more training and concentration. Audio feedback based systems utilize acoustic patterns [8], [9], semantic speech [24], sounds of different intensities [25] or spatially localized auditory cues [26]. The methods in [8], [9] directly map the processed RGB image to acoustic patterns to help the blind perceive the surroundings. The method in [24] maps the depth image to semantic speech to tell the blind some information about the obstacles. The method in [25] maps the depth image to sounds of different intensities to represent obstacles at different distances. The method in [26] maps the depth image to spatially localized auditory cues to express the 3D information of the surroundings. However, the user may misunderstand these auditory cues in noisy or complicated environments. Visual feedback based systems can be used for partially sighted individuals due to their ability to provide more detailed information than haptic or audio feedback based systems. The method in [27] maps the distance of the obstacle to brightness on an LED (Light Emitting Diode) display as a visual enhancement method to help the users notice the obstacle more easily. But the LED display only shows large obstacles due to its low resolution.

In this paper, a novel multi-sensor fusion based obstacle avoiding algorithm is proposed to overcome the above limitations, which utilizes both the depth sensor and the ultrasonic sensor to find the optimal traversable direction. The output traversable direction is then converted to three kinds of auditory cues, in order to select the optimal one under different scenarios, and is integrated into the AR technique based visual enhancement for guiding the visually impaired people.

III. THE PROPOSED FRAMEWORK

A. The Hardware System
The proposed system includes a depth camera for acquiring the depth information of the surroundings; an ultrasonic rangefinder consisting of an ultrasonic sensor and an MCU (Microcontroller Unit) for measuring the obstacle distance; an embedded CPU (Central Processing Unit) board acting as the main processing module, which performs operations such as depth image processing, data fusion, AR rendering and guiding sound synthesis; a pair of AR glasses to display the visual enhancement information; and an earphone to play the guiding sound. The hardware configuration of the proposed system is illustrated in Fig. 1, and the initial prototype of the system is shown in Fig. 2.

Fig. 1. The hardware configuration of the proposed system.
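For readability, the module list above can be summarized in a small configuration object. This is purely an editorial sketch; the field names and one-line descriptions are assumptions, not the authors' bill of materials.

    from dataclasses import dataclass

    @dataclass
    class SmartGlassesHardware:
        # Illustrative summary of Fig. 1; names are assumptions, not a part list.
        depth_camera: str = "depth sensor of an RGB-D camera (scene depth)"
        ultrasonic_rangefinder: str = "ultrasonic sensor + MCU (obstacle distance)"
        processing_board: str = "embedded CPU (depth processing, fusion, AR, sound)"
        ar_glasses: str = "binocular display for visual enhancement"
        earphone: str = "plays the synthesized guiding sound"

    print(SmartGlassesHardware())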

Fig. 2. The initial prototype of the proposed smart guiding glasses.

1) Depth Information Acquisition
Depth information is acquired with the depth sensor (the initial prototype only uses the depth camera of an RGB-D camera, which includes a depth sensor and an RGB camera). The depth sensor is composed of an infrared laser source that projects non-visible light with a coded pattern, combined with a monochromatic CMOS (Complementary Metal Oxide Semiconductor) image sensor that captures the reflected light. The algorithm that deciphers the reflected light coding generates the depth information representing the scene. In this work, the depth information is acquired by mounting the depth sensor onto the glasses with an approximate inclination of 30°, as shown in Fig. 3. This way, considering the height of the camera above the ground to be about 1.65 m and the depth camera working range to be limited to about 0.4 m to 4 m, the valid distance in the field of view is about 2.692 m, starting about 0.952 m in front of the user.

Fig. 3. Depth information acquisition.

2) Ultrasonic Rangefinder
In this work, the ultrasonic sensor is mounted on the glasses. The sensor operates at 40 kHz. Ultrasonic pulses are sent by the transmitter of the sensor; the object reflects the ultrasound wave and the receiver of the sensor receives the reflected wave. The distance of the object can be obtained from the time interval between sending the wave and receiving it. As shown in Fig. 4, the Trig pin of the sensor must receive a high (5 V) pulse of at least 10 µs to start a measurement, which triggers the sensor to transmit 8 cycles of ultrasonic burst at 40 kHz and wait for the reflected burst. When the sensor has sent the 8-cycle burst, the Echo pin of the sensor is set to high. Once the reflected burst is received, the Echo pin is set to low, which produces a pulse at the Echo pin. If no reflected burst is received within 30 ms, the Echo pin stays high, and the distance is set to a very large value to represent that there is no object in front of the user. The MCU is used to control the ultrasonic sensor to start the measurement and to detect the pulse at the Echo pin. The width of the pulse at the Echo pin is proportional to the time interval, from which the distance of the object is determined:

d = \frac{v \cdot ToF}{2},    (1)

where d is the distance of the object, v is the speed of sound in air, usually taken as 340 m/s, and ToF is the time interval between the Trig and Echo transitions.

Fig. 4. Connections of the ultrasonic rangefinder.
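As a concrete reading of (1), the sketch below converts a measured Echo pulse width into a distance. It is a minimal illustration, assuming a simple timeout flag from the MCU and an arbitrary sentinel value for the "no object" case, which the paper only describes as a very large distance.

    SPEED_OF_SOUND_M_S = 340.0    # speed of sound in air used in the paper
    NO_OBJECT_DISTANCE_M = 1e6    # assumed sentinel: no echo within 30 ms

    def echo_pulse_to_distance(pulse_width_s: float, timed_out: bool) -> float:
        """Obstacle distance in metres from the Echo pulse width (ToF), Eq. (1)."""
        if timed_out:                 # Echo stayed high for more than 30 ms
            return NO_OBJECT_DISTANCE_M
        return SPEED_OF_SOUND_M_S * pulse_width_s / 2.0   # d = v * ToF / 2

    # Example: a 5.88 ms round trip corresponds to roughly 1 m.
    print(round(echo_pulse_to_distance(0.00588, timed_out=False), 2))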

B. The Steps
The overall algorithm diagram is depicted in Fig. 5. The depth image acquired from the depth camera is processed by the depth-based way-finding algorithm, which outputs several candidate moving directions. The multi-sensor fusion based obstacle avoiding algorithm then uses the ultrasonic measurement data to select an optimal moving direction from the candidates. The AR rendering utilizes a single depth image to generate and render the binocular images as well as the moving direction to guide the user efficiently. The guiding sound synthesis takes the moving direction as input to produce the auditory cue for guiding the totally blind people. Three kinds of auditory cues are developed and tested to allow the selection of the most suitable one under different scenarios.

Fig. 5. Diagram of the proposed system.
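The per-frame flow of Fig. 5 can be summarized by the following hedged sketch. The sub-algorithms are passed in as callables, and every name and the toy values in the usage example are illustrative assumptions rather than the authors' implementation.

    from typing import Callable, Optional, Sequence

    def process_frame(depth_image,
                      ultrasonic_distance_m: float,
                      way_finding: Callable[[object], Sequence[float]],
                      fuse: Callable[[Sequence[float], float], Optional[float]],
                      feedback: Callable[[Optional[float]], None]) -> Optional[float]:
        candidates = way_finding(depth_image)                 # candidate angles A(alpha)
        direction = fuse(candidates, ultrasonic_distance_m)   # optimal angle or None (Null)
        feedback(direction)                                   # guiding sound and/or AR rendering
        return direction

    # Toy usage: way-finding returns 0 and -20 degrees, ultrasound reads 2.0 m.
    process_frame(None, 2.0,
                  way_finding=lambda img: [0.0, -20.0],
                  fuse=lambda cands, dist: min(cands, key=abs) if cands else None,
                  feedback=lambda a: print("move towards", a, "degrees"))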
1) Depth-based Way-finding
The depth-based way-finding algorithm finds candidate traversable directions based on the depth image. Different from the floor-segmentation based way-finding methods, it only uses a region of interest to determine the traversable directions. Since the nearest obstacle always appears at the bottom of the depth image, it only selects a line at the bottom of the image as input, as shown in Fig. 6. Considering that the user's walking is slow and gradual, it can still detect obstacles in a timely manner.

Fig. 6. The used depth image. The blue line represents the input of the depth-based way-finding algorithm.

The depth is relative to the camera, i.e. it is expressed in the camera coordinate system. As shown in Fig. 7, O_c is the origin of the camera coordinate system (X_c, Y_c, Z_c), i.e. the center of projection. O is the origin of the image coordinate system (u, v) in pixels. O_I(u_0, v_0) is the principal point, i.e. the origin of the image coordinate system (x, y) in millimeters. The distance from O_c to the image plane is the focal length f. A 3D point in camera coordinates N(x_1, y_1, z) is mapped onto the image plane I at the intersection n(u_1, v_1) of the ray connecting the 3D point N with the center of projection O_c.

Fig. 7. Coordinate system transformation.

The depth-based way-finding algorithm uses the traversable threshold w and an adaptive sliding window to determine the candidate moving directions. The sliding window size is 1 × D(z), where D(z) represents the adaptive width depending on the depth z. Every sliding step is computed as follows.

First, compute the corresponding 3D point of a given point in the depth image. As shown in Fig. 7, for a given point n in the depth image, u_1, v_1 and z are known. Using the law of similar triangles, the 3D point N(x_1, y_1, z) can be calculated by:

\begin{bmatrix} x_1 \\ y_1 \\ z \end{bmatrix} = \frac{z}{f} \begin{bmatrix} u_1 - u_0 \\ v_1 - v_0 \\ f \end{bmatrix}.    (2)

Second, compute the sliding window width D(z) in the image. According to the traversable threshold w, the 3D boundary point M(x_2, y_2, z) of the traversable region can be obtained by:

\begin{bmatrix} x_2 \\ y_2 \\ z \end{bmatrix} = \begin{bmatrix} x_1 + w \\ y_1 \\ z \end{bmatrix}.    (3)

Using the law of similar triangles as well, the 2D point m, the projection of the 3D point M on the depth image, can be computed by:

\begin{bmatrix} u_2 \\ v_2 \\ 1 \end{bmatrix} = \frac{f}{z} \begin{bmatrix} x_2 \\ y_2 \\ z/f \end{bmatrix} + \begin{bmatrix} u_0 \\ v_0 \\ 0 \end{bmatrix}.    (4)

Substituting (2) and (3) into (4), we obtain:

\begin{bmatrix} u_2 \\ v_2 \end{bmatrix} = \begin{bmatrix} u_1 \\ v_1 \end{bmatrix} + \begin{bmatrix} fw/z \\ 0 \end{bmatrix}.    (5)

Then the width D(z) of the adaptive sliding window can be expressed as:

D(z) = u_2 - u_1 = \frac{fw}{z}.    (6)

Third, judge whether the region between points n and m in the depth image is traversable. This can be calculated by:

1_x(z) = \begin{cases} 1 & \text{if } \forall x_{0:4} \in \{x \mid z_x \in [z-\varepsilon,\ z+\varepsilon],\ z_x > \delta\}, \\ 0 & \text{otherwise}, \end{cases}    (7)

where x is a point in the depth image between points n and m, x_{0:4} represents five continuous points, z_x is the depth of the point x, ε is the measurement noise, set as a fixed value, and δ is the distance threshold.

If any five continuous points between points n and m lie in the range [z-ε, z+ε], and the depths of the five points exceed the distance threshold δ for timely and safe obstacle avoidance, this region is considered traversable; otherwise, an obstacle is considered to be in this region, and this region is discarded.

Fourth, compute the steering angle α. If 1_x(z) in (7) is 1, i.e. the region is traversable, the steering angle α can be calculated by:
\alpha = \arctan \frac{u_1 + u_2 - 2u_0}{2f}.    (8)

If 1_x(z) in (7) is 0, i.e. the region is not traversable, the steering angle α is not calculated.

These four steps are conducted repeatedly until all the input points have been traversed. Then the candidate direction set A(α), i.e. the set of steering angles α, is stored for later use.
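A compact sketch of the four steps is given below for one bottom row of the depth image. It assumes the strictest reading of (7), namely that every point inside the window stays within the depth band and beyond the distance threshold; the default parameters, the one-pixel slide and the left-negative angle convention are illustrative assumptions, not the authors' settings.

    import numpy as np

    def candidate_directions(depth_row_m, f_px, u0_px,
                             w_m=0.6, eps_m=0.05, delta_m=1.0):
        """Candidate steering angles (degrees) along one bottom row of the depth image."""
        angles = []
        row = np.asarray(depth_row_m, dtype=float)
        for u1 in range(len(row)):
            z = row[u1]
            if z <= 0:                               # invalid depth reading
                continue
            width = int(round(f_px * w_m / z))       # D(z) = f*w/z, Eq. (6)
            u2 = u1 + width
            if u2 >= len(row):
                break
            window = row[u1:u2 + 1]
            # Eq. (7): points stay within [z - eps, z + eps] and farther than delta
            if np.all(np.abs(window - z) <= eps_m) and np.all(window > delta_m):
                # Eq. (8): steering angle of the window centre
                angles.append(np.degrees(np.arctan((u1 + u2 - 2 * u0_px) / (2 * f_px))))
        return angles

    # Toy example: a flat region 2 m away, 320-pixel row, f = 580 px, u0 = 160 px.
    print(candidate_directions([2.0] * 320, f_px=580.0, u0_px=160.0)[:3])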
2) Multi-sensor Fusion Based Obstacle Avoiding
The depth camera projects an infrared laser for measuring the distance, and the infrared laser can pass through transparent objects, which produces incorrect measurement data. Thus, a multi-sensor fusion based method, which utilizes both the depth camera and the ultrasonic sensor, is proposed; it can overcome the above limitation of the depth camera. The algorithm steps are as follows.

First, compute the optimal moving direction based on the depth image. The optimal moving direction can be obtained by minimizing a cost function, which is defined as:

\alpha_{opt} = \begin{cases} \arg\min_{\alpha \in A(\alpha)} f(\alpha) = \arg\min_{\alpha \in A(\alpha)} \left( \lambda |\alpha| + \frac{\mu}{W(\alpha)} \right) & \text{if } A(\alpha) \neq \varnothing, \\ \text{Null} & \text{if } A(\alpha) = \varnothing, \end{cases}    (9)

where α_opt is the optimal moving direction, α is the steering angle, which belongs to A(α) (see Section III.B.1), W(α) is the maximum traversable width centered on the direction α, and λ, μ are different weights.

The function f(α) evaluates the cost of both the steering angle and the traversable region width. The smaller the steering angle is, the faster the user can turn. The wider the traversable region is, the safer it will be. This cost function ensures that the user moves effectively and safely.

Second, fuse the ultrasonic data to determine the final moving direction. Since the ultrasonic sensor can detect obstacles in the range of 0.03 m to 4.25 m within a scanning field of 15°, the final moving direction is defined as:

\alpha'_{opt} = \begin{cases} \alpha_{opt} & \text{if } (\alpha_{opt} \notin [-7.5°, 7.5°]) \text{ or } (\alpha_{opt} \in [-7.5°, 7.5°] \text{ and } d \geq \delta), \\ \text{Null} & \text{otherwise}, \end{cases}    (10)

where α'_opt is the final moving direction, α_opt is the same as in (9), d is the distance measured by the ultrasonic sensor, and δ is the same as in (7).

This can be explained as follows. First, the algorithm judges whether the optimal moving direction in (9) is within the field of view of the ultrasonic sensor, i.e. [-7.5°, 7.5°]. If not, it directly outputs the optimal moving direction as in (9). If it is, the ultrasonic data are then used to judge whether the measured distance exceeds the distance threshold δ. If it does, the algorithm also outputs the optimal moving direction as in (9). If it does not, the algorithm outputs Null, which means no moving direction. The workflow of this algorithm is shown in Fig. 8.

Fig. 8. The workflow of the proposed algorithm.

The results of the optimal moving direction are shown in Fig. 9, which shows that the multi-sensor fusion based method can make a correct decision in a transparent scenario, whereas the method using only the depth image cannot.

Fig. 9. Results of the moving direction. (a) is the input depth image. (b) shows the moving direction (the laurel-green region in the image) calculated by (9). (c) shows the moving direction (Null) calculated by (10).
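The decision rule of (9) and (10) can be sketched as follows. The default weights and distance threshold are placeholders, and the traversable widths are assumed to be supplied alongside the candidate angles; this is a hedged illustration, not the authors' implementation.

    from typing import Optional, Sequence

    def optimal_direction(candidates: Sequence[float], widths: Sequence[float],
                          lam: float = 1.0, mu: float = 1.0) -> Optional[float]:
        """Eq. (9): minimise lam*|alpha| + mu/W(alpha) over the candidate set."""
        if not candidates:
            return None                              # A(alpha) empty -> Null
        costs = [lam * abs(a) + mu / w for a, w in zip(candidates, widths)]
        return candidates[costs.index(min(costs))]

    def fuse_with_ultrasonic(alpha_opt: Optional[float], d_ultrasonic_m: float,
                             delta_m: float = 1.0) -> Optional[float]:
        """Eq. (10): reject alpha_opt only if it lies inside the 15-degree ultrasonic
        cone and the ultrasonic range is below the distance threshold."""
        if alpha_opt is None:
            return None
        if -7.5 <= alpha_opt <= 7.5 and d_ultrasonic_m < delta_m:
            return None                              # e.g. a glass door straight ahead
        return alpha_opt

    # Toy usage: straight ahead is cheapest but blocked by glass 0.8 m away.
    best = optimal_direction([0.0, 20.0], widths=[1.2, 0.9])
    print(fuse_with_ultrasonic(best, d_ultrasonic_m=0.8))    # -> None (no safe direction)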
3) AR Rendering with Guiding Cue
The visual enhancement, which adopts the AR technique, is used for weak sighted people. In order to show the guiding cue to the user based on a single depth image, binocular parallax images need to be generated. This was realized in Unity3D [28] by adjusting the texture coordinates of the depth image. The rendered stereo images integrate the feasible direction (the circle in Fig. 10(a)(b)) for guiding the user. When the feasible direction is located in the bounding box (the rectangular box in Fig. 10(a)(b)), the user can go forward (see (c) of the third row in Fig. 10). When the direction is out of the bounding box, the user should turn left (see the second row in Fig. 10) or right (see the last row in Fig. 10) according to the feasible direction until it lies in the bounding box. When the feasible direction is absent (see the first row in Fig. 10), this indicates that there is no traversable way in the field of view; the user should stop and turn left or right slowly, or even turn back, in order to find a traversable direction.

(a) Left image (b) Right image (c) RGB image
Fig. 10. The rendering images. (a)(b) are used for display, (c) is just for intuitive representation.

4) Guiding Sound Synthesis
For totally blind users, auditory cues are adopted in this work. The guiding sound synthesis module can produce three kinds of guiding signals: stereo tone [26], recorded instructions and beeps of different frequencies.

The first kind converts the feasible direction into a stereo tone. The stereo sound (see the loudspeaker in Fig. 11) acts like a person standing in the right direction and telling the user to come toward him. The second kind uses recorded speech to tell the user to turn left or right, or go forward. As shown in Fig. 11, the field of view is 60°, the middle region is 15° and the two sides are divided equally. When an obstacle is in front of the user, the recorded speech will tell the user, e.g., "Attention, obstacle in front of you, turn left 20 degrees". Some recorded audio instructions are detailed in TABLE I. The last kind converts the feasible direction into beeps of different frequencies. The beep frequency is proportional to the steering angle. When the user should turn left, the left channel of the earphone plays and the right does not, and vice versa. When the user should go forward, the beep remains silent.

Fig. 11. Guiding sketch.

TABLE I
AUDIO INSTRUCTIONS

Condition | Audio instruction
Obstacle placed in front of the user with no feasible direction | "Attention, obstacle in front of you, turn left or right slowly"
Obstacle placed in front of the user with feasible direction on the left | "Attention, obstacle in front of you, turn left xx^a degrees"
Obstacle placed in front of the user with feasible direction on the right | "Attention, obstacle in front of you, turn right xx^a degrees"
Obstacle placed in left of the user with feasible direction on the front | "Attention, obstacle in left of you, go straight"
Obstacle placed in right of the user with feasible direction on the front | "Attention, obstacle in right of you, go straight"
Obstacle placed in left of the user with feasible direction on the right | "Attention, obstacle in left of you, turn right xx^a degrees"
Obstacle placed in right of the user with feasible direction on the left | "Attention, obstacle in right of you, turn left xx^a degrees"
No obstacle | "Go straight"
^a xx is the steering angle.

These three kinds of guiding methods are tested in the next section. The test results show that each has its virtues and its faults, and different guiding methods can be adopted in different scenarios. The next section describes this in detail.
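To illustrate how the final direction is turned into an audible cue, the sketch below follows TABLE I loosely for the recorded-speech case and the proportional-frequency rule for the beep case. The ±7.5° "go straight" band comes from the 15° middle region of Fig. 11, and the sign convention (negative angles mean left) is an assumption.

    from typing import Optional

    def audio_instruction(alpha_deg: Optional[float]) -> str:
        """Recorded-speech style cue; None means no traversable direction was found."""
        if alpha_deg is None:
            return "Attention, obstacle in front of you, turn left or right slowly"
        if abs(alpha_deg) <= 7.5:                    # middle 15-degree region
            return "Go straight"
        side = "left" if alpha_deg < 0 else "right"  # assumed sign convention
        return f"Attention, obstacle in front of you, turn {side} {abs(alpha_deg):.0f} degrees"

    def beep_cue(alpha_deg: Optional[float]):
        """Beep cue: silent when going straight; otherwise the earphone channel on the
        turning side plays a tone whose frequency grows with the steering angle."""
        if alpha_deg is None or abs(alpha_deg) <= 7.5:
            return None                              # silence
        channel = "left" if alpha_deg < 0 else "right"
        return channel, abs(alpha_deg)               # caller maps the angle to a frequency

    print(audio_instruction(-20.0))   # "Attention, obstacle in front of you, turn left 20 degrees"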

IV. EXPERIMENTAL RESULTS AND DISCUSSIONS

The performance of the proposed system has been evaluated both objectively and subjectively. For the objective tests, the adaptability, the correctness and the computational cost of the proposed algorithm were analyzed. For the subjective tests, 20 users (10 with amblyopia and 10 totally blind) whose heights range from 1.5 m to 1.8 m were invited to take part in the study in three main scenarios, i.e. home, office and supermarket. The main purpose of these subjective tests is to check the efficiency of the guiding instructions.

A. Adaptability for Different Heights
To test the adaptability for different user heights, we placed an obstacle from 1 m to 2 m in front of the depth camera. The minimum detectable height of the obstacle was then measured for different camera heights from 1.4 m to 1.8 m. The results are shown in Fig. 12 and indicate that the proposed algorithm can detect obstacles whose height is more than 5 cm. When the obstacle's distance to the camera (i.e. to the user) is fixed, the lower the height of the camera (i.e. the user's height), the smaller the obstacle that can be detected. When the height of the camera is fixed, the closer the obstacle is to the camera, the smaller the obstacle that can be detected. This is because the size of the obstacle in the depth image is affected by the camera's height and by the distance between the obstacle and the camera. Since the proposed algorithm is based on the depth image, the obstacle's size in the depth image affects the correctness of the proposed algorithm. As shown in Fig. 3 and Fig. 6, the distance of the region of interest is about 1-1.3 m from the user, so the minimum detectable height of the obstacle is 3 cm (see Fig. 12). Because very few objects are lower than 3 cm in the three main scenarios (home, office, supermarket), the proposed algorithm has very high adaptability.

Fig. 12. Minimum detectable height of the obstacle under different camera heights and distances.
B. Correctness of the Obstacle Avoiding Algorithm
In order to evaluate the correctness of the proposed algorithm, especially with transparent obstacles, several transparent scenarios (see Fig. 13) were selected as the test environment. Two groups of experiments were conducted, covering the avoiding algorithm with and without the ultrasonic sensor. The results (see Fig. 14) reveal that the avoiding algorithm without the ultrasonic sensor has an accuracy of 98.93% for frosted glass, but has very low accuracy when encountering purely transparent glass. This is due to the limitation of the depth camera as explained in Section III.B.2). The algorithm with the ultrasonic sensor improves the accuracy significantly. This verifies that the proposed algorithm can detect obstacles robustly and avoid them accurately.

Fig. 13. Examples of different transparent obstacles.

Fig. 14. Accuracy under different transparent glass.

C. Computational Cost
The average computational time for each step of the proposed system was measured, and the results are shown in TABLE II. The depth image acquisition and the depth based way-finding algorithm take about 11 ms. The ultrasonic sensor measurement cost depends on the obstacle's distance and takes at most about 26.5 ms. The ultrasonic sensor fusion algorithm takes about 1.33 ms. The AR rendering takes about 2.19 ms. Because the ultrasonic sensor measurement runs on the MCU, the multi-sensor fusion based obstacle avoiding algorithm runs in parallel with the ultrasonic sensor measurement, so the maximum cost for processing each frame is about 30.2 ms. Since the computation can be finished in real time, obstacles can be detected in a timely manner and the user's safety can be guaranteed.

TABLE II
COMPUTATIONAL TIME FOR EACH STEP OF THE PROPOSED ALGORITHM

Processing Step | Average Time
Depth image acquisition | 8.23 ms
Way-finding | 2.71 ms
Ultrasonic sensor measurement | Max 26.5 ms
Multi-sensor fusion | 1.33 ms
AR rendering | 2.19 ms

D. Interactive Experience
To test the interactive experience of the proposed system, the three kinds of guiding instructions for the totally blind users were compared under the three main scenarios. The experiments with and without the vision enhancement proposed in this work were conducted by the weak sighted users under the same three main scenarios (see Fig. 15). The total length of the path in the home (see Fig. 15(a)) is 40 m, and 10 kinds of obstacles (whose heights range from 5 cm to 1 m) are placed on the path for testing the obstacle avoiding algorithm. The length of the path in the office (see Fig. 15(b)) is in total 150 m, and the length of the path in the supermarket (see Fig. 15(c)) amounts to 1 km. 15 kinds of obstacles are placed on the path in both the office and the supermarket.

(a) Home (b) Office (c) Supermarket
Fig. 15. Test paths under different scenarios. The red dotted line is the walking path.
First, the totally blind persons wearing the smart guiding glasses were asked to walk in the three scenarios under the three kinds of guiding instructions (see Section III.B.4). Then the totally blind persons repeated the walks in the three scenarios with a cane instead of the smart guiding glasses. The walking times under these scenarios were recorded and are shown in TABLE III. It can be seen that when the user is in the home or the office, the time cost with the stereo tone and the beep sound is almost the same as that with the cane. The stereo tone based guiding instructions are more efficient than the recorded instructions based ones, and the beep sound based ones are the most efficient.

TABLE III
AVERAGE WALKING TIME IN DIFFERENT SCENARIOS FOR TOTALLY BLIND USERS

Scenarios | Smart Guiding Glasses: Stereo Tone | Smart Guiding Glasses: Recorded Instructions | Smart Guiding Glasses: Beep Sound | Cane
Home | 91.23 s | 100.46 s | 90.08 s | 90.55 s
Office | 312.79 s | 350.61 s | 308.14 s | 313.38 s
Supermarket | 2157.50 s | 2120.78 s | 2080.91 s | 2204.15 s

According to the users' experience, they feel that it is hard to turn by an accurate angle, and therefore the recorded instructions based guiding method is not efficient enough. The cane based method is a little more efficient than the stereo tone based and recorded instructions based methods, and this is because the users are familiar with their home and office. Although they do not know the obstacles on the path, they can quickly make a decision using their previous memory of the environment. However, when they are in an unfamiliar environment, such as the supermarket, the proposed method in this work is much more efficient than the cane based one. This is because the proposed method can directly inform the user where they should go, whereas the cane based method must sweep the road for detecting and avoiding the obstacles, which is time-consuming. Interestingly, the recorded instructions based method is more efficient than the stereo tone based method in the supermarket. Based on the users' experience, this is because the supermarket is noisy relative to the home and office, so the user cannot identify the direction according to the stereo tone, while the recorded instructions based method can directly tell the user to turn left or right. Overall, the beep sound based method is more efficient and has better adaptability.

The test of the visual enhancement for the weak sighted users is similar to the test for the totally blind users, except that the guiding cues are obtained from the AR rendered images instead of the audio. Besides, the weak sighted users without the smart guiding glasses or the cane were also tested as a contrast. The results are shown in TABLE IV. From the results, we can see that when the user is in the home or the office, the time costs with the smart guiding glasses and with nothing are almost equal. But the total number of collisions when using nothing is much larger than when using the smart guiding glasses. This is because they are familiar with their home and office, so the time costs can be almost the same; but the small obstacles on the ground are very hard for them to observe without the smart guiding glasses, and therefore they suffer collisions more frequently. When they are in the supermarket, the time costs and the total collisions with the smart guiding glasses are much smaller than those with nothing. This is because they are unfamiliar with the supermarket and have difficulty in noticing the small obstacles.

TABLE IV
AVERAGE WALKING TIME AND TOTAL COLLISIONS IN DIFFERENT SCENARIOS FOR WEAK SIGHTED USERS

Scenarios | Smart Guiding Glasses: Time Costs | Smart Guiding Glasses: Total Collisions | None: Time Costs | None: Total Collisions
Home | 74.66 s | 0 | 73.81 s | 12
Office | 280.02 s | 0 | 284.57 s | 28
Supermarket | 1890.50 s | 0 | 2004.03 s | 20

Both the totally blind and the weak sighted persons' experiments verified that the proposed smart guiding glasses are very efficient and safe, and very helpful for visually impaired people in complicated indoor environments.

V. CONCLUSION

This paper presents a smart guiding device for visually impaired users, which can help them move safely and efficiently in complicated indoor environments. The depth image and multi-sensor fusion based algorithms solve the problems of avoiding small and transparent obstacles. Three main auditory cues for totally blind users were developed and tested in different scenarios, and the results show that the beep sound based guiding instructions are the most efficient and well-adapted. For weak sighted users, visual enhancement based on the AR technique was adopted to integrate the traversable direction into the binocular images, and it helps the users to walk more quickly and safely. The computation is fast enough for the detection and display of obstacles. Experimental results show that the proposed smart guiding glasses can improve the travelling experience of visually impaired people. The sensors used in this system are simple and low-cost, making it suitable for wide use in the consumer market.

REFERENCES
[1] B. Söveny, G. Kovács and Z. T. Kardkovács, "Blind guide - A virtual eye for guiding indoor and outdoor movement," in 2014 5th IEEE Conf. Cognitive Infocommunications (CogInfoCom), Vietri sul Mare, 2014, pp. 343-347.
[2] L. Tian, Y. Tian and C. Yi, "Detecting good quality frames in videos captured by a wearable camera for blind navigation," in 2013 IEEE Int. Conf. Bioinformatics and Biomedicine, Shanghai, 2013, pp. 334-337.
[3] M. Moreno, S. Shahrabadi, J. José, J. M. H. du Buf and J. M. F. Rodrigues, "Realtime local navigation for the blind: Detection of lateral doors and sound interface," in Proc. 4th Int. Conf. Software Development for Enhancing Accessibility and Fighting Info-exclusion, 2012, pp. 74-82.
[4] D. Dakopoulos and N. G. Bourbakis, "Wearable obstacle avoidance electronic travel aids for blind: A survey," IEEE Trans. Systems, Man, Cybern., vol. 40, no. 1, pp. 25-35, Jan. 2010.
[5] A. Aladrén, G. López-Nicolás, L. Puig and J. J. Guerrero, "Navigation assistance for the visually impaired using RGB-D sensor with range expansion," IEEE Systems J., vol. 10, no. 3, pp. 922-932, Sept. 2016.
[6] H. Fernandes, P. Costa, V. Filipe, L. Hadjileontiadis and J. Barroso, "Stereo vision in blind navigation assistance," in 2010 World Automation Congr., Kobe, 2010, pp. 1-6.
[7] W. Heuten, N. Henze, S. Boll and M. Pielot, "Tactile wayfinder: a non-visual support system for wayfinding," in Nordic Conf. Human Computer Interaction, Lund, 2008, pp. 172-181.
[8] M. C. Kang, S. H. Chae, J. Y. Sun, J. W. Yoo and S. J. Ko, "A novel obstacle detection method based on deformable grid for the visually impaired," IEEE Trans. Consumer Electron., vol. 61, no. 3, pp. 376-383, Aug. 2015.
[9] J. Sánchez, M. Sáenz, A. Pascual-Leone and L. Merabet, "Navigation for the blind through audio-based virtual environments," in Proc. 28th Int. Conf. Human Factors in Computing Syst., Atlanta, Georgia, 2010, pp. 3409-3414.
[10] J. Kim and H. Jun, "Vision-based location positioning using augmented reality for indoor navigation," IEEE Trans. Consumer Electron., vol. 54, no. 3, pp. 954-962, Aug. 2008.
[11] N. Uchida, T. Tagawa and K. Sato, "Development of an augmented reality vehicle for driver performance evaluation," IEEE ITS Magazine, vol. 9, no. 1, pp. 35-41, Jan. 2017.
[12] M. Bousbia-Salah, M. Bettayeb and A. Larbi, "A navigation aid for blind people," J. Intelligent Robot. Syst., vol. 64, no. 3, pp. 387-400, May 2011.
[13] F. Penizzotto, E. Slawinski and V. Mut, "Laser radar based autonomous mobile robot guidance system for olive groves navigation," IEEE Latin America Trans., vol. 13, no. 5, pp. 1303-1312, May 2015.
[14] Y. H. Lee and G. Medioni, "Wearable RGBD indoor navigation system for the blind," in ECCV Workshops (3), 2014, pp. 493-508.
[15] S. Bhowmick, A. Pant, J. Mukherjee and A. K. Deb, "A novel floor segmentation algorithm for mobile robot navigation," in 2015 5th National Conf. Computer Vision, Pattern Recognition, Image Processing and Graphics (NCVPRIPG), Patna, 2015, pp. 1-4.
[16] Y. Li and S. Birchfield, "Image-based segmentation of indoor corridor floors for a mobile robot," in IEEE/RSJ Int. Conf. Intelligent Robot. Syst., Taipei, Taiwan, 2010, pp. 837-843.
[17] V. C. Sekhar, S. Bora, M. Das, P. K. Manchi, S. Josephine and R. Paily, "Design and implementation of blind assistance system using real time stereo vision algorithms," in 2016 29th Int. Conf. VLSI Design and 2016 15th Int. Conf. Embedded Syst. (VLSID), Kolkata, 2016, pp. 421-426.
[18] J. D. Anderson, Dah-Jye Lee and J. K. Archibald, "Embedded stereo vision system providing visual guidance to the visually impaired," in 2007 IEEE/NIH Life Science Systems and Applications Workshop, Bethesda, MD, 2007, pp. 229-232.
[19] D. H. Kim and J. H. Kim, "Effective background model-based RGB-D dense visual odometry in a dynamic environment," IEEE Trans. Robotics, vol. 32, no. 6, pp. 1565-1573, Dec. 2016.
[20] K. Wang, S. Lian and Z. Liu, "An intelligent screen system for context-related scenery viewing in smart home," IEEE Trans. Consumer Electron., vol. 61, no. 1, pp. 1-9, Feb. 2015.
[21] Y. Kong and Y. Fu, "Discriminative relational representation learning for RGB-D action recognition," IEEE Trans. Image Process., vol. 25, no. 6, pp. 2856-2865, Jun. 2016.
[22] N. Fallah, I. Apostolopoulos, K. Bekris and E. Folmer, "Indoor human navigation systems: A survey," Interacting with Computers, vol. 25, no. 1, pp. 21-33, Sep. 2013.
[23] S. Mann, J. Huang, R. Janzen, R. Lo, V. Rampersad, A. Chen and T. Doha, "Blind navigation with a wearable range camera and vibrotactile helmet," in Proc. 19th ACM Int. Conf. Multimedia, Scottsdale, Arizona, 2011, pp. 1325-1328.
[24] S. C. Pei and Y. Y. Wang, "Census-based vision for auditory depth images and speech navigation of visually impaired users," IEEE Trans. Consumer Electron., vol. 57, no. 4, pp. 1883-1890, Nov. 2011.
[25] C. Stoll, R. Palluel-Germain, V. Fristot, D. Pellerin, D. Alleysson and C. Graff, "Navigating from a depth image converted into sound," Applied Bionics and Biomechanics, vol. 2015, pp. 1-9, Jan. 2015.
[26] S. Blessenohl, C. Morrison, A. Criminisi and J. Shotton, "Improving indoor mobility of the visually impaired with depth-based spatial sound," in 2015 IEEE Int. Conf. Computer Vision Workshop (ICCVW), Santiago, Chile, 2015, pp. 418-426.
[27] S. L. Hicks, I. Wilson, L. Muhammed, J. Worsfold, S. M. Downes and C. Kennard, "A depth-based head-mounted visual display to aid navigation in partially sighted individuals," PLoS ONE, vol. 8, no. 7, pp. 1-8, Jul. 2013.
[28] R. Tredinnick, B. Boettcher, S. Smith, S. Solovy and K. Ponto, "Uni-CAVE: A Unity3D plugin for non-head mounted VR display systems," in 2017 IEEE Virtual Reality (VR), Los Angeles, CA, 2017, pp. 393-394.

Jinqiang Bai received his B.E. and M.S. degrees from China University of Petroleum in 2012 and 2015, respectively. He has been a Ph.D. student at Beihang University since 2015. His research interests include computer vision, deep learning, robotics, AI, etc.

Shiguo Lian received his Ph.D. from Nanjing University of Science and Technology, China. He was a research assistant at City University of Hong Kong in 2004. From 2005 to 2010, he was a Research Scientist with France Telecom R&D Beijing. He was a Senior Research Scientist and Technical Director with the Huawei Central Research Institute from 2010 to 2016. Since 2016, he has been a Senior Director with CloudMinds Technologies Inc. He is the author of more than 80 refereed international journal papers covering topics of artificial intelligence, multimedia communication, and human computer interfaces. He has authored and co-edited more than 10 books, and holds more than 50 patents. He is on the editorial boards of several refereed international journals.

Zhaoxiang Liu received his B.S. and Ph.D. degrees from the College of Information and Electrical Engineering, China Agricultural University, in 2006 and 2011, respectively. He joined VIA Technologies, Inc. in 2011. From 2012 to 2016, he was a senior researcher at the Central Research Institute of Huawei Technologies, China. He has been a senior engineer at CloudMinds Technologies Inc. since 2016. His research interests include computer vision, deep learning, robotics, human computer interaction, and so on.

Kai Wang has been a senior engineer at CloudMinds Technologies Inc. since 2016. Prior to that, he was with the Huawei Central Research Institute. He received his Ph.D. degree from Nanyang Technological University, Singapore, in 2013. His research interests include augmented reality, computer graphics, human-computer interaction and so on. He has published more than ten papers in international journals and conferences.

Dijun Liu has been a chief scientist and engineer at Datang Telecom since 2008. He was a Ph.D. supervisor at Beihang University and has received many awards for scientific and technological advancement. His research interests include IC design, image processing, AI, deep learning, UAV and so on.
