as the position of the ball and goals when the robots carry out cooperative tasks. Furthermore, a robot must keep its own position to estimate the arrangement of objects and share it among its teammates. The reactiveness of soccer robots requires a vision system with a high processing cycle time.

In this paper, we focus on object recognition, environment recognition, and movement control for small mobile robots using only a monocular camera system with a limited viewing angle. For object recognition, fast color image segmentation is performed based on the YUV color information of each pixel of the color image. The position of an orange ball on the field is efficiently and accurately calculated using the monocular camera. Then, suitable environment recognition and movement control methods for real-time navigation of mobile robots are described with experimental results.

Fig. 1 Hardware configuration of the robot system (control unit (PC) and mobile base unit).

As the mobile base unit, an autonomous mobile robot platform for experimental applications in indoor flat-floored environments (Applied AI Systems, LABO-1) was used. A view of the robot is shown in Fig. 2. The robot body is rectangular, approximately 310 mm wide and 380 mm long. The chassis is 30 mm high and rides on a set of four hard rubber wheels. Two driving motor and wheel sets are located on both sides, one placed near each corner of the vehicle. The wheelbase is approximately 21 cm, and the wheel diameters are 11 cm. Each of the four motors is controlled individually and drives a single wheel. A differential drive mechanism with four wheels permits the robot to turn smoothly within a small turning radius.
depicted in Fig. 4. The result of the experiment is shown in TABLE I. The row Δx shows the average estimation errors in the x coordinate, and the row Δy shows those in the y coordinate. The estimation errors are given in [mm]. The variation of the measurements is sufficiently small for movement control, as shown in Fig. 5.

TABLE I
EXPERIMENTAL RESULT OF OBJECT LOCALIZATION

        P1      P2      P3
Δx    18.89   11.02   21.55
Δy     7.73    0.14    6.23

IV. ENVIRONMENT RECOGNITION

Image processing software has been implemented using the same control unit for environment recognition in an indoor experimental space. The flowchart is shown in Fig. 7. First, the edges surrounding the various objects are amplified to increase the quality of the image: a Laplacian edge detection operation is performed to accentuate the objects in the image. In binarization, an appropriate threshold value is set to preserve the important features using a histogram of the source image, on the assumption that the two dominant peaks in the histogram represent the wall and the floor. Erosion and dilation are then used to remove the enhanced noise pixels. Thinning and shrinking operations are performed to reduce the image objects and find their skeletons. An example sequence of images obtained by the image processing procedures is shown in Fig. 8.
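The pipeline above (Laplacian edge enhancement, two-peak binarization, and morphological cleanup) can be sketched in pure NumPy. This is only an illustrative sketch, not the authors' implementation: the 3x3 kernels, the peak-suppression window, and all function names below are assumptions.

```python
import numpy as np

# 3x3 Laplacian kernel used to accentuate object boundaries.
LAPLACIAN = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]])

def conv3(img, k):
    """3x3 convolution with zero padding (pure NumPy)."""
    p = np.pad(img, 1).astype(float)
    out = np.zeros(img.shape, dtype=float)
    for dy in range(3):
        for dx in range(3):
            out += k[dy, dx] * p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

def erode(b):
    """3x3 erosion: a pixel stays 1 only if its whole neighborhood is 1."""
    p = np.pad(b, 1, constant_values=1)
    windows = [p[dy:dy + b.shape[0], dx:dx + b.shape[1]]
               for dy in range(3) for dx in range(3)]
    return np.minimum.reduce(windows)

def dilate(b):
    """3x3 dilation: a pixel becomes 1 if any neighbor is 1."""
    p = np.pad(b, 1, constant_values=0)
    windows = [p[dy:dy + b.shape[0], dx:dx + b.shape[1]]
               for dy in range(3) for dx in range(3)]
    return np.maximum.reduce(windows)

def binarize_two_peak(img):
    """Threshold midway between the two dominant gray levels, assumed
    (as in the paper) to be the wall and the floor histogram peaks."""
    hist = np.bincount(img.ravel(), minlength=256)
    p1 = int(np.argmax(hist))
    hist2 = hist.copy()
    hist2[max(0, p1 - 20):min(256, p1 + 21)] = 0  # suppress first peak
    p2 = int(np.argmax(hist2))
    t = (p1 + p2) // 2
    return (img > t).astype(np.uint8)
```

Applied to a synthetic bright-wall/dark-floor frame, the binarization separates the two regions, and one erosion pass removes isolated noise pixels before dilation restores the region outlines.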
Fig. 5 Variation of the position measurements (X [mm] versus Y [mm]).
control part, behavior control part, and planning part. The real-time control part executes direct sensor and motor control, including infrared sensor data acquisition and encoder-based position and velocity control, on the onboard computer in the mobile base unit. Simple reflex actions triggered by the infrared sensors or bumpers are also executed under the multitask operating system. The behavior control part monitors the status of the objects in the environment and the commands from the upper level, and selects one suitable primitive action at a time. Primitive actions include wall following, obstacle avoidance, and target following. The planning part manages the environment map and the human interface, and generates a behavior plan composed of primitive actions.

Fig. 8 Sequence of images obtained by the image processing procedures: (1) original, (2) Laplacian filtering, (3) binarization, (4) erosion, (5) dilation, (6) thinning, (7) shrinking, (8) Hough transform.

For vision-based control, the software in the control unit calculates the following at a control period of 33 ms. First, it determines the desired translational and rotational velocities using the position and orientation errors with respect to the goal position. The motion control algorithm must allow a general nonholonomic mobile robot to accurately reach a target point with a desired orientation in a dynamic environment. The point-to-point (PTP) control method is used to control the robots to follow transient subgoals; Fig. 9 shows the robot kinematics for this method. The following feedback control law was adopted:

  v = k_ρ ρ
  ω = k_α α + k_β β        (3)
  (θ = α + β)

where v is the tangent velocity of the robot, ω is the angular velocity of the robot, ρ is the distance between the centre of the robot's wheel axle and the goal position, α is the angle between the forward direction of the robot and the vector connecting the centre of the wheel axle with the goal position, and θ is the pose error.

The control parameters k_ρ, k_α, and k_β were determined [6] so that the closed-loop control system is locally exponentially stable, as follows:

  k_ρ > 0,  k_α − k_ρ > 0,  k_β < 0        (4)

Furthermore, the parameters were adjusted so that the robot can be smoothly guided to face the goal [7][8].

Fig. 9 Robot kinematics for the PTP control method (distance ρ and angles α, β with respect to the goal pose (X_G, Y_G)).
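The feedback law (3) can be exercised in a short unicycle-model simulation. This is a sketch under stated assumptions: the gain values are example values satisfying condition (4) (they are not the paper's tuned gains), and the function names and the Euler integration step are illustrative.

```python
import math

def wrap(a):
    """Wrap an angle to (-pi, pi]."""
    return math.atan2(math.sin(a), math.cos(a))

# Example gains satisfying condition (4): k_rho > 0, k_alpha - k_rho > 0, k_beta < 0.
K_RHO, K_ALPHA, K_BETA = 3.0, 8.0, -1.5

def ptp_step(x, y, theta, gx, gy, gtheta, dt):
    """One control cycle of the PTP law (3) on a unicycle model.
    rho: distance to the goal; alpha: bearing of the goal in the robot
    frame; beta is defined so that the pose error is theta_e = alpha + beta."""
    dx, dy = gx - x, gy - y
    rho = math.hypot(dx, dy)
    alpha = wrap(math.atan2(dy, dx) - theta)
    beta = wrap(gtheta - theta - alpha)     # theta_e = alpha + beta
    v = K_RHO * rho                         # eq. (3)
    omega = K_ALPHA * alpha + K_BETA * beta
    # Euler integration of the unicycle kinematics over one period.
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta = wrap(theta + omega * dt)
    return x, y, theta, rho
```

Iterating `ptp_step` from the origin toward a subgoal drives ρ toward zero while the heading settles toward the goal orientation, which is the locally exponentially stable behavior that condition (4) guarantees.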
Then it calculates the desired angular velocities of the right and left wheels by a simple, invertible linear transformation. Finally, it sends 8-bit control data (4-bit data for each motor) to the onboard computer over the wireless modem. The onboard computer executes multitasking processing with preemptive task switching at a rate of 128 Hz (approximately every 8 ms). The kernel performs system-level functions such as sensor reading and motor control just before the task is switched, so the onboard computer responds quickly to commands from the host. The closed-loop motor controller is implemented in software and called by the Periodic Interrupt Timer interrupt service routine. Speed and acceleration are measured as the values over the interrupt period (1/128 s). Speed profile parameters and PID coefficients for position control are configured in software. Thus the angular velocity control loop is executed on the onboard computer and runs every 8 ms to compensate for differences between the desired and estimated velocities of the wheels. The flowchart of command generation for movement control executed in the control unit is shown in Fig. 10, where the turn angle of the robot towards the target, α, is calculated in terms of robot coordinates.

Fig. 11 Example of movement trajectory in the face of the straight line of a wall: (a) Hough transform, (b) real image.
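The invertible linear transformation from body velocities (v, ω) to wheel angular velocities can be written out explicitly for a differential drive. The dimensions come from the platform description above, but treating the 21 cm wheelbase as the track width, and the function names themselves, are assumptions of this sketch.

```python
# Dimensions from the platform description; using the 21 cm wheelbase
# as the effective track width is an assumption for illustration.
TRACK = 0.21          # distance between right and left wheels [m]
WHEEL_RADIUS = 0.055  # half of the 11 cm wheel diameter [m]

def body_to_wheels(v, omega):
    """Desired body velocities (v [m/s], omega [rad/s]) mapped to the
    desired (right, left) wheel angular velocities [rad/s]."""
    w_r = (v + omega * TRACK / 2.0) / WHEEL_RADIUS
    w_l = (v - omega * TRACK / 2.0) / WHEEL_RADIUS
    return w_r, w_l

def wheels_to_body(w_r, w_l):
    """Inverse of the transformation, e.g. for encoder-based odometry."""
    v = WHEEL_RADIUS * (w_r + w_l) / 2.0
    omega = WHEEL_RADIUS * (w_r - w_l) / TRACK
    return v, omega
```

Because the map is invertible, the same two equations serve both directions: the control unit applies the forward map to produce wheel commands, and the onboard computer can apply the inverse to encoder readings to estimate the achieved (v, ω).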
VI. CONCLUSIONS
The color image segmentation method has been implemented on general-purpose, inexpensive hardware and applied to object recognition for robotic soccer with wheeled mobile robots. The accuracy of object localization was satisfactory for movement control in robotic soccer.
Movement control using the localization method has also been implemented. The feedback control law determines the translational and rotational velocities using the position and orientation errors with respect to the goal position. Experiments show that the robot can be smoothly guided to the goal without colliding with obstacles.
As future work, we are planning to develop a method to construct an environment map by integrating the visual information of teammates. Using the map, a robot can know the situation outside its own view and perform advanced behaviors such as role sharing and effective pass work.
Fig. 13 Example of movement trajectory in front of an obstacle: (a) Hough transform, (b) real image.

REFERENCES
[1] G. Yasuda and H. Takai, "A microcontroller based architecture for locally intelligent robot agents", The Japanese Society for Artificial Intelligence, SIG-Challenge Technical Report, no. 4, pp. 11-15, 1999.
[2] J. Bruce, T. Balch, and M. Veloso, "Fast and inexpensive color image segmentation for interactive robots", Proceedings of IROS'00, pp. 2061-2066, 2000.
[3] G. Yasuda and B. Ge, "Position control of mobile robots using global vision with fast color image segmentation", Proceedings of the 3rd China-Japan Mechatronics Symposium, pp. 191-196, 2002.
[4] T. Kubota, H. Hashimoto, and F. Harashima, "Path searching for mobile robot by local planning", Journal of Robotics Society of Japan, vol. 7, no. 4, pp. 267-274, Aug. 1989 (in Japanese).
[5] G. Yasuda, "Distributed implementation of communicating process architectures for autonomous mobile robots", IFAC Distributed Computer Control Systems 1997, Pergamon, pp. 67-72, 1997.
[6] R. Siegwart and I. Nourbakhsh, Introduction to Autonomous Mobile Robots, MIT Press, Cambridge, MA, 2004.
[7] K. Han and M. Veloso, "Reactive visual control of multiple non-holonomic robotic agents", Proceedings of ICRA'98, 1998.
[8] Y. Kanayama, Y. Kimura, F. Miyazaki and T. Noguchi, "A stable tracking control method for a non-holonomic mobile robot", Proceedings of IROS'91, pp. 1236-1241, 1991.
Fig. 14 Example of movement trajectory in the face of the corner of an obstacle: (a) Hough transform, (b) real image.