
Proceedings of the 2016 International Conference on Advanced Mechatronic Systems, Melbourne, Australia, November 30 - December 3, 2016

Quantitative Analysis of the Relationship between Camera Image Characteristics and Operability of Rescue Robots

Noritaka Sato, Masayoshi Koiji and Yoshifumi Morita


Dept. of Electrical and Mechanical Engineering
Nagoya Institute of Technology
Nagoya, Japan
Email: sato.noritaka@nitech.ac.jp

Abstract—Many rescue robots have been developed to search for victims and gather information at disaster sites. Generally, rescue robots are teleoperated using images from a camera mounted on the robot. This study examined the condition of a camera image related to the operability of a robot for a search task in a 2D maze. Indices indicating the amount of information in the camera image related to the robot and to the environment around the robot were defined. A subject experiment using the robot simulator USARSim was performed. In the experiment, 16 images with various indices were used. The task and the scoring method of the experiment were determined by referring to the DHS-NIST-ASTM International Standard Test Methods for Response Robots. The results revealed that the score, which indicates the operability, had an extreme value when the indices (xmin, h) were (−100, 3000), where xmin indicates the amount of information related to the robot and h indicates the amount of information related to the environment around the robot.

Keywords—Rescue robot, Teleoperation, Situation awareness

I. INTRODUCTION

Many rescue robots have been developed to search for victims and gather information at disaster sites, replacing human rescue workers [1]. Generally, rescue robots are teleoperated because their operation requires advanced intelligence. Additionally, an image from a camera mounted on the robot is a very important piece of information displayed on the operator screen [2].

However, the quantitative condition of the image related to the operability of the robot is unknown. Only a few studies have focused on the relationship between camera images and the operability of the robot. Shiroma et al. compared types of camera images and found that a fish-eye camera installed at a high position was effective [3]. However, their results were qualitative. Kimura et al. compared several angles of the field of view (FOV) of a mounted camera by using a robot simulator to determine camera specifications for developing robot hardware [4]. However, their analysis was limited because they focused only on the FOV of the camera. Koiji et al. compared positions and postures of a camera mounted on a robot and specified conditions of the camera position and posture that increased operability [5]. However, they did not consider features of the images, and thus the reason why a given image was effective for teleoperation was not clear.

This study focused on the features of the image from a camera mounted on a robot. The purpose of this study was to specify the condition of the camera image related to the operability of a robot for a search task in a 2D maze. This task is a basic operation of a rescue robot. Qualitative findings indicated that the robot should be shown on the image and that the FOV should be sufficiently wide to increase the operability of the robot [3][4]. Therefore, in this study, quantitative indices for these qualitative lessons were first determined. Following this, combinations of the quantitative indices were used to perform a subject experiment with a robot simulator (USARSim) [6]. The experimental results were analyzed to discuss and specify the condition of camera images related to the operability.

II. QUANTIFICATION OF CAMERA IMAGE

As mentioned in Section I, qualitative findings indicate that the robot should be shown on the image and that the FOV should be sufficiently wide to increase the operability of the robot. Therefore, in this study, it was assumed that the amount of information on the image related to the robot and to its environment influences the operability of the robot. Figure 1 shows the relationship between a camera mounted on a robot, the FOV of the camera, and the view area. If the position of the camera (xc, zc), the posture of the camera θc, the vertical FOV angle θv, and the horizontal FOV angle θh are known, then the coordinates of the points that construct the view area are obtained as follows:

xa = zc / tan(θc + θv/2) + xc    (1)

ya = zc · tan(θh/2) · cos(θh/2) / sin(θc + θv/2)    (2)

xb = zc / tan(θc − θv/2) + xc    (3)

yb = zc · tan(θh/2) · cos(θh/2) / sin(θc − θv/2)    (4)

In this study, the amount of information related to the robot on the image is defined as the x coordinate of the nearest point of the view area, referred to as xmin; therefore, xmin = xa. Hence, if

978-1-5090-5346-9 / 16 / $31.00 ©2016 IEEE 533


xmin becomes small, then the size of the robot on the image increases. Moreover, the amount of information related to the environment on the image is defined as the height of the trapezoid of the view area, termed h; therefore, h = xb − xa. Thus, if h becomes large, the image includes considerable information related to the environment.

Fig. 1. Camera mounted on the robot and its FOV.

Fig. 2. Camera images used in the experiment (xmin = 100, −100, −300, −500 [mm]; h = 1000, 2000, 3000, ∞ [mm]). USARSim is used for the experiment.

Fig. 3. Side view in the case where h = ∞.
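To make the geometry concrete, the view-area computation of Eqs. (1)-(4) can be sketched in code. This is an illustrative sketch, not code from the paper; the function name and the sample camera parameters are assumptions (only the camera height zc = 600 mm is taken from the experiment).

```python
import math

def view_area(xc, zc, theta_c, theta_v, theta_h):
    """Corner coordinates of the trapezoidal view area, Eqs. (1)-(4).

    xc, zc  : camera position [mm] (zc = height above the floor)
    theta_c : camera posture (downward pitch) [rad]
    theta_v : vertical FOV angle [rad]
    theta_h : horizontal FOV angle [rad]
    """
    xa = zc / math.tan(theta_c + theta_v / 2) + xc            # Eq. (1): near edge
    ya = (zc * math.tan(theta_h / 2) * math.cos(theta_h / 2)
          / math.sin(theta_c + theta_v / 2))                  # Eq. (2): near half-width
    xb = zc / math.tan(theta_c - theta_v / 2) + xc            # Eq. (3): far edge
    yb = (zc * math.tan(theta_h / 2) * math.cos(theta_h / 2)
          / math.sin(theta_c - theta_v / 2))                  # Eq. (4): far half-width
    return xa, ya, xb, yb

# Illustrative parameters (assumed, not from the paper):
# camera 600 mm high, pitched 45 deg down, 40 deg vertical / 60 deg horizontal FOV
xa, ya, xb, yb = view_area(xc=0, zc=600,
                           theta_c=math.radians(45),
                           theta_v=math.radians(40),
                           theta_h=math.radians(60))
x_min = xa      # amount of information related to the robot
h = xb - xa     # amount of information related to the environment
```

With these sample parameters, the near edge of the view area lies in front of the camera origin (xa > 0), so the robot body would not appear on the image; lowering θc or enlarging θv drives xa negative, which corresponds to the robot entering the image.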
In this study, xmin [mm] = 100, −100, −300, −500 and h [mm] = 1000, 2000, 3000, ∞ were used for a subject experiment. These values were determined according to the findings of a previous study [5]. The number of combinations used in the study was therefore 16. Figure 2 shows the images used in the experiment. It was assumed that the robot moved in a low-ceiling environment, and thus the height of the camera zc was fixed at 600 [mm]. Note that when h = ∞, the posture of the camera θc was set to θv/2, as shown in Figure 3.

III. EXPERIMENT

A. Experimental method

The number of subjects in the experiment was 10. Ten students unrelated to this research project were chosen, and they teleoperated the robot by using a gamepad. The average age of the subjects was 22.4. None of them had ever operated a robot using the developed system or operated a crawler-type robot; however, all of them had played video games using a gamepad. Each camera image was used once; therefore, each subject teleoperated the robot 16 times. The order of the images was randomized to reduce order effects. A robot simulator, USARSim, was used to reduce the experimental setup burden because the total number of runs of the robot was 160. The robot model used in the experiment was Kenaf, which was developed by Yoshida et al. [7]. The width of Kenaf is 480 mm and its length is 1099 mm. It is one of the default robots of USARSim.

The task involved searching for two targets (eye charts) in a 2D maze. The environment was defined by referring to the DHS-NIST-ASTM International Standard Test Methods for Response Robots [8] and the RoboCup Rescue Robot League competition [9]. Figure 4 and Figure 5 show the environment of the experiment. It was necessary for the robot to avoid collisions with walls and to move as fast as possible from the start point to the goal point. It was also important for the operator to confirm the targets.

B. Evaluation method

The evaluation index was called the score, and it was also defined by referring to the DHS-NIST-ASTM International Standard Test Methods for Response Robots [8] and the RoboCup Rescue Robot League competition [9]. The score was calculated by the following equation:

S = (k1 C + k2 F − k3 W) · Tr / T,    (5)

where C denotes the number of targets that were read correctly, F denotes the number of targets that were read incorrectly, W denotes the number of collisions with walls, and T denotes the running time from the start point to the goal point (the achievement time of the task). In this study, k1 = 10, k2 = 5,

Fig. 4. Experiment environment in the USARSim.

Fig. 5. Schematic diagram of the experiment environment (3.6 [m] × 3.6 [m]; ×: start/goal, ●: target).

Fig. 6. Bar graph of the average scores of the experiment.

k3 = 10, and Tr = 1200 were set such that the equation was similar to the scoring equation of the RoboCup Rescue Robot League competition [9].

Fig. 7. Scores of the experiment obtained using camera images.
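As a concrete illustration, Eq. (5) translates directly into code. This is a sketch; the example run values at the bottom are invented for illustration and do not come from the experiment.

```python
def score(C, F, W, T, k1=10, k2=5, k3=10, Tr=1200):
    """Score of one run, Eq. (5): S = (k1*C + k2*F - k3*W) * Tr / T.

    C : number of targets read correctly
    F : number of targets read incorrectly
    W : number of collisions with walls
    T : running time from the start point to the goal point
    """
    return (k1 * C + k2 * F - k3 * W) * Tr / T

# Invented example run: both targets read correctly, one wall
# collision, 300 time units from start to goal.
print(score(C=2, F=0, W=1, T=300))  # (10*2 + 5*0 - 10*1) * 1200/300 = 40.0
```

Note that the factor Tr / T rewards fast completion: the same target and collision counts yield a higher score the shorter the run time T.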
C. Results

The results are shown in Figure 6 and Figure 7. Figure 6 shows the bar graph of the average scores of the experiment. Figure 7 shows the scores of the experiment obtained using the individual camera images.

IV. DISCUSSION

The most interesting finding was that the average score had an extreme value. Prior to verification, it had been expected that the score would increase with a decrease of xmin and an increase of h. In the experiment, the most effective combination of (xmin, h) was (−100, 3000). Future research will involve normalizing the indices. However, the method proposed in this study to determine the most effective combination can be extended to other robots and other tasks. Future research will also involve performing the experiment for other tasks with a similar scheme and accumulating quantitative lessons on the relationship between camera images and operability.

The results were statistically analyzed by using the Steel-Dwass test. The result is shown in Figure 8. As observed, the difference in xmin led to significant differences in the score. If xmin exceeded zero, then the robot did not appear on the camera image. Therefore, the appearance of the robot on the camera image increased the operability. This is similar to the qualitative findings of previous studies. However, the score did not increase monotonically as more of the robot appeared on the camera image: the conditions in which only the tip of the robot appeared on the image marked higher scores.

As shown in Figure 8, no significant differences in h were observed. However, the score decreased when h was too low or too high. Hence, h should be set to approximately 3000 for the robot and the task used in the experiment. Future extensions of this study may include performing the experiment with finer-grained values of h, such as h = 2400, 2700, 3000, 3300, and 3600. Additionally, the number of subjects could be increased to confirm the significant differences.

It is proposed that the reason for the extreme value of the average score was related to the recognition of the distance

between the robot and the forward wall. Specifically, the robot sometimes collided with the walls when it rotated at the corners of the experimental field. Figure 9 shows the number of collisions with the walls when the robot rotated at the corners. In the task used in this study, it was necessary for the robot to move forward until it was close to the forward wall before rotating in order to avoid collisions, because the width of the road was not sufficiently wide for the robot. Figures 10 and 11 show the 7th and 16th camera images at the corner, respectively. As observed, the operator could determine the distance between the robot and the wall more accurately by using the 7th camera than by using the 16th camera. In contrast to this result, previous studies indicated that the camera should be mounted such that the entire body of the robot appears on the image [3]. This appears to be a promising result with respect to the camera mounted on the robot. However, it should be noted that there may be tasks (for example, when the robot runs on rough terrain) that require displaying the entire body of the robot on the image.

Fig. 8. Results of the statistical analysis.

Fig. 9. Average number of collisions with walls when the robot rotates.

Fig. 10. The image obtained by the 7th camera when the robot rotates at the corner.

Fig. 11. The image obtained by the 16th camera when the robot rotates at the corner.

V. CONCLUSION

The purpose of this study was to specify the condition of camera images related to the operability of a robot for a search task in a 2D maze, which is considered a basic operation of a rescue robot. The study involved defining the indices xmin and h, which indicate the amount of information in the camera image related to the robot and to the environment, respectively.

In the study, 16 combinations of the indices were set, and a robot simulator, USARSim, was used to perform a subject experiment. The task and the scoring method of the experiment were determined by referring to the DHS-NIST-ASTM International Standard Test Methods for Response Robots [8] and the RoboCup Rescue Robot League competition [9]. The results indicated that the score indicating the operability had an extreme value. The maximum score was obtained when (xmin, h) was (−100, 3000). Additionally, the statistical analysis revealed that a difference in xmin led to significant differences in the score.

Future studies will include normalizing the indices xmin and h, performing the experiment with more subjects, and extending the experiment to other robots and other tasks.

REFERENCES

[1] J. Casper and R. Murphy, Human-Robot Interactions during the Robot-Assisted Urban Search and Rescue Response at the World Trade Center, IEEE Transactions on Systems, Man, and Cybernetics, Vol. 33, No. 3, pp. 367-385, 2003.
[2] H. A. Yanco and J. L. Drury, Rescuing Interfaces: A Multi-year Study of Human-robot Interaction at the AAAI Robot Rescue Competition, Autonomous Robots, Vol. 22, No. 4, pp. 333-352, 2007.

[3] N. Shiroma, N. Sato, Y. Chiu, and F. Matsuno, Study on Effective Camera Images for Mobile Robot Teleoperation, Proceedings of the 2004 IEEE International Workshop on Robot and Human Interactive Communication, pp. 107-112, 2004.
[4] T. Kimura, W. C. Vie, and Y. Ukai, Development of a USAR Robot Considering Camera View Angle and Grouser Shape of Crawler, Proceedings of the 2008 IEEE International Conference on Robotics and Biomimetics, pp. 1991-1994, 2009.
[5] M. Koiji, N. Sato, and Y. Morita, A Study on Operability of Rescue Robot Focusing on Information Amount of Camera Images, Proceedings of the 15th SICE System Integration Division Annual Conference, pp. 315-318, 2014. (in Japanese)
[6] S. Carpin, M. Lewis, J. Wang, S. Balakirsky, and C. Scrapper, USARSim: A Robot Simulator for Research and Education, Proceedings of the IEEE International Conference on Robotics and Automation, pp. 1400-1405, 2007.
[7] T. Yoshida, E. Koyanagi, et al., A High Mobility 6-crawler Mobile Robot 'Kenaf', Proceedings of the 4th International Workshop on Synthetic Simulation and Robotics to Mitigate Earthquake Disaster, p. 38, 2007.
[8] ASTM E2521-16, Standard Terminology for Evaluating Response Robot Capabilities, ASTM International, West Conshohocken, PA, 2016.
[9] A. Jacoff, E. Messina, and J. Evans, A Standard Test Course for Urban Search and Rescue Robots, Measuring the Performance and Intelligence of Systems: Proceedings of the 2000 PerMIS Workshop, pp. 253-259, 2000.
