
Microsyst Technol (2011) 17:1169–1174
DOI 10.1007/s00542-010-1219-1

TECHNICAL PAPER

Development of a guide-dog robot: human–robot interface considering walking conditions for a visually handicapped person

Shozo Saegusa · Yuya Yasuda · Yoshitaka Uratani · Eiichirou Tanaka · Toshiaki Makino · Jen-Yuan Chang

Received: 7 September 2010 / Accepted: 30 December 2010 / Published online: 12 January 2011
© Springer-Verlag 2011

Abstract  The repletion rate of guide dogs for visually handicapped persons is roughly 10% nationwide. The reasons for this low rate are the long training period and the expense of obtaining a guide dog in Japan. Motivated by these two reasons, we are developing guide-dog robots. The major objective is to develop an intelligent human–robot interface. This paper describes two novel interface algorithms and a strategy to guide a visually handicapped person. We developed a new leading edge searching method, which uses a single laser range finder (LRF) to find the center of the corridor in an indoor environment. We also developed a new twin cluster trace method that can recognize the led person's walking conditions as measured by the LRF. The algorithm allows the guide-dog robot to accurately estimate and anticipate the led person's next move. We experimentally verified these algorithms. The results show that the algorithms are reliable enough to enable the guide-dog robot and the led person to maneuver in a complex corridor environment.

1 Introduction

The repletion rate of guide dogs for visually handicapped persons is roughly 10% in Japan. Considering the long training period and high cost of guide dogs, it would be hard to increase this rate significantly within a short period of time. Because of this, we have undertaken the development of a guide-dog robot. Our main goal is to develop an intelligent human–machine interface (HMI) between the visually handicapped person and the guide-dog robot; such an interface is of engineering, social, and economic importance.
S. Saegusa (✉)
Education and Venture Creation Division, Center for Collaborative Research and Community Cooperation, Hiroshima University, VBL Office 313, Kagamiyama 2-chome, Higashi-Hiroshima 739-8527, Japan
e-mail: shosaegu@hiroshima-u.ac.jp

Y. Yasuda
Yaskawa Electric Corporation, Kitakyushu, Japan

Y. Uratani
Yamaha Corporation, Shizuoka, Japan

E. Tanaka
Shibaura Institute of Technology, Saitama, Japan

T. Makino
Tokuyama National College of Technology, Tokuyama, Japan

J.-Y. Chang
Massey University, Albany, Auckland 0745, New Zealand

The tasks for a guide dog include obeying the owner, notifying the owner of turning points, leading the owner to the target, detecting and avoiding obstacles, and intelligent insubordination (Table 1).

The most important of these tasks is guiding the owner while being aware of his or her walking conditions (walking speed, etc.). Since the owner is visually impaired, his or her interaction with the dog is the key to successful guiding. Tate et al. (1978) investigated a control method to maintain the distance between a human and a mobile robot by using a hardcoded classical control algorithm. However, this method was not able to handle various walking conditions. Few studies have focused on this subject area, and hence we had to begin our development by studying a guiding system that could handle various user walking conditions. Yamada et al. (2003) proposed a steering rope method using a force sensor measuring in three Cartesian axes and three Euler rotational axes to serve as the


Table 1 Tasks of a guide dog

Role                              Description
(1) Obey owner                    The dog easily obeys the owner's commands
(2) Notify owner                  The dog nudges the owner when to turn
(3) Lead owner to target          The dog leads the owner to the target
(4) Detect and avoid obstacles    The dog detects and avoids obstacles that may hinder or endanger the owner
(5) Intelligent insubordination   The dog does not obey orders that would lead the owner into danger

controller from the person to the dog, like a dog's harness. This method could detect the guide dog's movements and measure dynamical values under different walking conditions. However, its measurement accuracy and robustness with regard to the various kinds of motion needed to be improved, and the signaling process needed to be made simpler. Yamada et al. (2005) and Kurachi et al. (2005) proposed guiding visually impaired pedestrians with mobile devices and integrated sensor networks in streets, underground walkways, and buildings. This approach requires extensive infrastructure, which is expensive and cannot meet individual needs.

As opposed to complex systems such as the GuideCane developed by Ulrich and Borenstein (2001) and the array of electronic sensors proposed by Ando (2003), the present study aims to produce a lightweight mobile guide-dog robot. Such a robot would have a simple mechanism and a small mechatronics system to allow it to operate in rough conditions, as discussed by Shkolnik et al. (2009). While the size of the robot should be small for the sake of mobility, the HMI should be intelligent and easy to operate. Our work is based on the framework developed by Saegusa et al. (2010), in which the mobile guide-dog robot could avoid obstacles in a corridor environment. However, this framework's calculation efficiency and detection ability needed to be improved, and its calculation cycle needed to be faster. Hence, our objectives for this study were to develop a new leading edge search method for detecting obstacles and corridors, and a novel method for the robot to keep pace with the owner. To this end we developed new algorithms and verified them in our prototype guide-dog robot.

2 Leading system

To create an HMI that lets the robot operate intelligently in an indoor environment, we developed two novel algorithms, namely the leading edge searching (LES) method for circumstance detection and the twin cluster trace (TCT) method for follower detection. An illustrative example of these methods is shown in Fig. 1.

Fig. 1 Illustrative example of guide-dog mobile robot operation based on the profile of the follower's legs in an indoor environment (the figure shows the 120° human-detecting area, the two 60° side- and obstacle-detecting spans, the leading edge candidates, the detected center of the corridor, and the 1.2 m × 1.2 m foot-recognizing area)

The mobile robot is indicated by shaded rectangle boxes in the center of Fig. 1, and the profile of the follower's legs is depicted by a pair of dashed circles. The robot is designed to detect obstacles, move around them, and be aware of its surroundings (walls, obstacles, and the follower's legs). The human follower, as shown in Fig. 2, simply follows the robot, while the robot detects the follower's movement by using a laser range finder mounted on the front to measure the profile of the follower's shins.

2.1 Leading edge searching method

A laser range finder (LRF) with a detecting arc of 240° is used to detect both sides of the corridor. As shown in Fig. 1, the LRF's detection arc is divided into three areas. The centre area, with a 120° span, is reserved for human motion detection, whereas the 60° spans to the robot's right and left sides are for obstacle detection. The sensor of the LRF is located 20 cm above ground level, a height at


which it can detect the follower's shins in motion. The robot's surroundings within the two 60° spans can be ascertained by searching for the leading edges of objects with the LRF. The LES method to identify leading points among the measured points is as follows.

1. The shortest normal distance between the LRF and the corridor walls is found. The leading edge is identified and updated when the current measured distance is shorter than the previous one.
2. If a large change in the LRF signal occurs, the measured point is taken to be a candidate point of a leading edge.
3. More points from the front of the robot are scanned.
4. When a shorter distance is measured, the leading edge point is re-assigned. If there is no closer point, the previous point remains the leading edge.

Obstacles that are near the walls contribute to the leading edges because they are the nearest objects to the robot. A complete scan cycle to ascertain the corridor environment involves the LRF scanning on both sides of the robot.

Fig. 2 Guide-dog mobile robot and follower (the front-mounted LRF measures the profile of the follower's shins)

2.2 Twin cluster trace method

For the person to follow the guide-dog robot easily, we define the detecting zone, shown as the 1.2 m × 1.2 m dashed rectangle box containing two circular leg shapes in Fig. 1, as a box from -400 to -1,600 mm behind the robot and 600 mm from the LRF to its right and left. The procedure to detect the follower's shin motion using scanned data from the LRF is shown in Fig. 3 and contains the following steps:

Step 1 Make a cluster consisting of closely spaced measured points. An ideal cluster should have the size of a human foot.
Step 2 Calculate the location of the geometric center of each cluster. The green cross marks shown in Fig. 4a are the centers of the clustered points.
Step 3 Determine the center location between two adjacent clusters. In order to determine the body's motion, a human body center, denoted as HBC, is determined by calculating the location of the midpoint between the two geometric centers of the leg clusters. It has been found that under realistic walking circumstances, the velocity of the HBC is much lower than that of the human leg, and the body moves more slowly and smoothly than the feet.
Step 4 In the next program cycle, a new HBC is calculated. The distance between the present HBC and the previous HBC is used to determine the follower's moving speed.

Fig. 3 Flow of the TCT method
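The clustering flow of Steps 1–4 can be sketched as follows. This is a minimal illustration of the TCT idea, not the authors' implementation; the 50 mm clustering gap, the (x, z) millimetre point format, and the helper names are assumptions introduced for this sketch.

```python
import math

def cluster_points(points, gap=50.0):
    """Step 1: group consecutive LRF points (x, z in mm) whose spacing
    is below `gap` into clusters, ideally one per detected foot."""
    clusters = []
    for p in points:
        if clusters and math.dist(clusters[-1][-1], p) < gap:
            clusters[-1].append(p)   # close to the previous point: same foot
        else:
            clusters.append([p])     # large jump: start a new cluster
    return clusters

def center(cluster):
    """Step 2: geometric center of one cluster."""
    xs = [p[0] for p in cluster]
    zs = [p[1] for p in cluster]
    return (sum(xs) / len(xs), sum(zs) / len(zs))

def hbc(clusters):
    """Step 3: human body center (HBC) as the midpoint of the two
    leg-cluster centers."""
    (x1, z1), (x2, z2) = center(clusters[0]), center(clusters[1])
    return ((x1 + x2) / 2.0, (z1 + z2) / 2.0)

def follower_speed(hbc_now, hbc_prev, cycle_s):
    """Step 4: follower speed from the HBC displacement between
    two program cycles."""
    return math.dist(hbc_now, hbc_prev) / cycle_s
```

Because the HBC moves more slowly and smoothly than either foot, tracking it rather than the individual clusters gives the stable signal used for speed control.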

123
1172 Microsyst Technol (2011) 17:1169–1174

Step 5 On the basis of the HBC history, the next nearest HBC is estimated, as illustrated in Fig. 4b, by calculating the location of the midpoint between the cluster geometric centers. The nearest HBC is then used to control the maneuvering speed of the mobile guide-dog robot.

In Step 3, the HBC is assumed to be at the midpoint between the centers of the feet. This assumption is important for providing stable tracing of the follower's HBC. This position is especially useful when, for instance, the follower is moving unusually or erratically, e.g., waving and staggering; in such cases the LRF would pick up a lot of movement from the multiple muscular-mechanical actions. For tracking the follower's motion, the HBC, on the other hand, provides a relatively stable dynamic pattern for the present application.

When the final position of the HBC is decided, we multiply by the following weighting function:

E_sq_dist = C · δx² + (1/C) · δz²   (C > 1)   (1)

where E is the possibility function of the foot, C is the weighting factor (reflecting walking characteristics), δx is the forward step, and δz is the lateral movement in a step. Since people usually walk forward along a relatively straight line, weighting stable motions in this way lets the above algorithm better detect and identify the follower. This study used an experimentally determined value of C = 1.5.

Fig. 4 a Pictorial illustration of Steps 1 and 2 of the TCT method (showing the LRF, the detected points, the centers of the clustered points, obstacles, the feet of the follower, and the foot matching area). b Schematic diagram of the search method for finding the nearest HBC to estimate the next step (showing candidate positions of the person's body, the estimated current body-center position, the nearest candidate, and the positions in the previous scan)

3 Prototype robot and experimental setup

The prototype guide-dog robot (named DUMBO) consisted of two driving wheels, one steering wheel, and an LRF equipped on top of the vehicle as a head marker, as shown in Fig. 2. A small-sized personal computer (PC) acquired the distance data from the LRF. The control signals were generated by the onboard processor and transferred to the PC in the robot, the same as in the work presented by Saegusa et al. (2010). Two control methods were used to lead the human follower: one is to steer the robot using the center of the corridor and the obstacles detected through the LES method, and the other is to determine the follower's speed by using the TCT method. The test environment was a 2.2 m wide and 20 m long corridor having doors, a dark air intake, and a dust box, as shown in Fig. 5. The speed control for the robot was set to be proportional to the HBC's speed, with a maximum allowable speed of 800 mm/s when the distance between the robot and the follower was within 400 mm. The robot waited for the follower by stopping its motors when the user-robot distance was over 1,200 mm.

4 Evaluation system

An independent evaluation system using a secondary LRF and the TCT algorithms was developed to validate the guide-dog system. Because the robot and the follower's feet were simultaneously detected by the LRF on the robot and the


secondary LRF, the LRF of the evaluation system was set off the center of the corridor, at 35 cm above the floor, to avoid interference, as illustrated in Fig. 6.

Fig. 5 Indoor corridor setup for the experiments (a 2.2 m wide corridor containing an obstacle)

Fig. 6 Location of the secondary fixed LRF (Rapid-URG) in the experimental corridor environment

5 Experimental results

The measured follower's motion, marked as magenta crosses, the guide-dog robot's motion, marked as blue circles, and the obstacles and walls of the corridor, marked as red squares, obtained by both the guide-dog LRF and the secondary LRF, are illustrated in Fig. 7. The green diamonds in the figure represent the center locations of the clustered points. As one can observe, the guide-dog mobile robot with the LES and TCT methods successfully led the follower around the corridor.

Fig. 7 Trace of the robot and follower (a 3 m by 2.2 m section of the corridor showing the walls, an obstacle, and the feet of the follower)

To show the effectiveness of the combined LES-TCT method in leading a visually impaired person, an experiment was conducted in which one person passed the guide-dog robot and the human follower, as shown in the picture in Fig. 8. In this experiment, the person walks up from behind the follower on the left side, closer to the left wall (marked as dark brown circles in the measurement plot in Fig. 8). The guide-dog robot working with the combined algorithm was not affected by the person passing the follower. On the other hand, once the person had passed the follower, the robot treated the person as a moving obstacle and tried to lead the follower away from a collision.

The traces in Figs. 7 and 8 demonstrate that our new algorithms are effective in helping the robot lead the human follower down the centre line of the corridor. They also show that the algorithms help the robot and follower avoid obstacles and other pedestrians. Figure 9 shows the speed of the robot relative to the follower: when the follower is closer to the robot, the robot moves faster, because the speed control is designed to be proportional to the distance between the human and the robot. With the new algorithms, the overhead computation time to control the guide-dog robot is 100 ms, which compares well with the overhead of 200 ms in the previous work of Saegusa et al. (2010).

6 Conclusion

We devised two new algorithms, namely the LES method for circumstance detection and the TCT method for follower detection, and implemented them in a prototype guide-dog robot. They were experimentally verified to be effective in guiding visually impaired persons in an indoor environment. We arrived at the following conclusions in our study:

1. The LES method ensures that the robot successfully maneuvers around obstacles in a corridor environment.
2. The TCT method is robust in tracking the human follower. It also reduces computation time significantly, allowing the guide-dog robot to respond faster to human motions.
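The speed response summarized in conclusion 2 follows the rules reported in Sect. 3: a speed proportional to the follower's HBC speed, an 800 mm/s cap when the follower is within 400 mm, and a full stop beyond 1,200 mm. One way to combine these rules is sketched below; this is an illustrative reading, not the authors' controller, and the linear taper between the two thresholds is an assumption.

```python
MAX_SPEED = 800.0   # mm/s, maximum allowable robot speed (from the text)
NEAR = 400.0        # mm: within this distance the full cap applies
FAR = 1200.0        # mm: beyond this distance the robot stops and waits

def robot_speed(follower_speed, distance):
    """Command speed (mm/s) from the follower's HBC speed (mm/s)
    and the robot-follower distance (mm)."""
    if distance > FAR:
        return 0.0                          # stop and wait for the follower
    v = min(follower_speed, MAX_SPEED)      # track the follower, capped
    if distance <= NEAR:
        return v                            # follower is close: full speed
    # assumed linear taper as the follower falls behind
    return v * (FAR - distance) / (FAR - NEAR)
```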


Fig. 8 Picture showing a person passing the guide-dog robot and human follower (to the left), and the corresponding LRF measurement result
(to the right)

The present system and the proposed detection algorithms would be useful for visually impaired persons. In the future, enhancement of the interface and the communications between the human follower and the guide-dog robot will be the key to increasing the robustness of the HMI in dealing with various situations.

Fig. 9 Measured and designed relative velocity of the robot with respect to the follower

References

Ando B (2003) Electronic sensory systems for the visually impaired. IEEE Instrum Meas Mag 6:62–67
Kurachi K, Kitakaze S, Fujishima Y, Watanabe N, Kamata M (2005) Integrated pedestrian guidance system using mobile device. In: Proceedings of 12th World Congress on ITS, San Francisco, 6–10 Nov 2005
Saegusa S et al (2010) Development of a guide-dog robot: leading and recognizing a visually-handicapped person using a LRF. J Adv Mech Des Syst Manuf 4(1):194–205
Shkolnik A et al (2009) Bounding on rough terrain with the LittleDog robot. CSAIL lab report, MIT
Tate S et al (1978) A control method of movable robot with certain distance from human. BioMechanisms 4:279–289 (in Japanese)
Ulrich I, Borenstein J (2001) The GuideCane: applying mobile robot technologies to assist the visually impaired. IEEE Trans Syst Man Cybern A Syst Hum 30(2):131–136
Yamada T, Ohya A, Yuta S (2003) Navigation of a robot by steering a rope. In: Proceedings of annual meeting of the Robotics Society of Japan, vol 21, p 3H23
Yamada J, Adachi T, Setoshita S (2005) Technical features of the free mobility assistance system. In: Proceedings of 12th World Congress on ITS, San Francisco, 6–10 Nov 2005
