Abstract—Solving the SLAM (Simultaneous Localization And Mapping) problem increases robot autonomy, allowing the robot to build a map of the environment and, simultaneously, estimate its pose (position and orientation) using this map. Usually, the robot pose is computed by means of odometry. This method uses the sensor information; nevertheless, these sensors introduce noise into the measurements, generating uncertainty in the robot localization. For this reason, probabilistic methods such as Kalman and particle filters have been implemented in robotics to estimate the robot and landmark positions while handling the system errors.

In this paper, SLAM algorithms based on Rao-Blackwellized particle filters were implemented on a mobile robotic platform to perform two-dimensional and three-dimensional reconstructions of unknown environments. The SURF (Speeded Up Robust Features) and RANSAC (Random Sample Consensus) methods were implemented for visual feature extraction and transformation estimation, respectively. Furthermore, ROS (Robot Operating System) was used for platform position estimation and map building. Finally, a Monte Carlo algorithm was employed for autonomous navigation. This algorithm computes the shortest path between two points within the reconstructed map and executes it, avoiding obstacles that may arise along the way.

Keywords—Simultaneous Localization And Mapping, autonomous robots, autonomous navigation, Bayesian filters.

I. INTRODUCTION

The SLAM (Simultaneous Localization and Mapping) problem has been intensively studied in the robotics community over the past twenty years. Solving this problem is known as the "Holy Grail" of mobile robotics, as it would provide the way to make a robot truly autonomous [1], allowing it to build a map of an environment and simultaneously navigate and estimate its pose (position and orientation) with the aid of the constructed map [2]. Accordingly, SLAM is useful in a huge range of applications where absolute position and precise map information are unobtainable, e.g. search and rescue operations, urban reconstruction, reconnaissance in hostile environments and planetary exploration, among others [3].

The approaches by Smith and Cheeseman [4] and Durrant-Whyte [5] are considered essential to the definition of the SLAM problem. These earlier works established the capability to generate complex spatial relationships between points as the system captures new information; these complex relationships are the basic elements used to generate the map. These research lines had much in common and converged in the paper of Smith, Self and Cheeseman [6]. It shows that, as a mobile robot navigates through an unknown environment taking relative observations of landmarks, the estimates of these landmarks are all correlated with each other due to the common error in the estimated vehicle location [7]. This implies that the SLAM problem requires a state consisting of the vehicle pose and every landmark position, which must be updated with each observation [1]. It follows that a full solution to this problem is intractable, given the computational complexity owing to the huge state vector used by the Kalman filter (KF) [2]. For this reason, some researchers assumed that the correlations between landmarks could be minimized to reduce the computational complexity (e.g. [7] and [8]). However, later studies, such as [9], showed that these correlations are fundamental to the SLAM problem and should not be minimized: the more these correlations grow, the better the solution becomes. Additionally, researchers focused the SLAM solution on the Extended Kalman Filter (EKF), because the KF was designed to estimate the state vector of a linear model. The advantage of the EKF is that it assumes the process and observation models are locally linear and can be linearized by a Taylor series expansion [9]. However, the EKF presents data association and model linearization problems, which lead to inconsistent solutions. For this reason FastSLAM, based on particle filtering, arose to represent the nonlinear models directly. The high-dimensional state space of the SLAM problem makes the direct application of a particle filter computationally unfeasible; however, it is possible to reduce the sample space by applying Rao-Blackwellization [1].

Another key aspect of the SLAM problem is the reliable acquisition of environmental information. Initially, humans manipulated the environment by placing artificial landmarks (beacons), letting the robot locate itself [10]. Then, the robot's pose was obtained by employing an
odometric or inertial system using the sensor information [11]. Nevertheless, these sensors introduce noise into the measurements, generating uncertainty in the robot location and producing a cumulative error that grows with time. This is a recurrent problem because, if there is an error in the robot location, there will be an error in the map and, if there is an error in the map, there will be an error in the location [12]. For this reason, probabilistic methods (Bayesian filters) such as Kalman and particle filters have been implemented in robotics to estimate the robot and landmark positions while handling the system errors.

Fig. 2. SLAM Gmapping system distribution.
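As a concrete illustration of the particle-filter idea mentioned above, the sketch below shows how noisy odometry and range measurements to a landmark combine to keep the pose error bounded. It is a toy one-dimensional example in plain Python, not the filter used in this work; the noise parameters, landmark position and particle count are invented for the illustration.

```python
import math
import random

random.seed(0)

# Toy 1-D localization: a robot moves along the x axis toward a landmark
# at a known position, measuring the range to it with Gaussian noise.
LANDMARK_X = 5.0
RANGE_STD = 0.3   # illustrative sensor noise (std. dev., meters)
ODOM_STD = 0.1    # illustrative odometry noise per step

def predict(particles, odom_dx):
    # Motion update: apply the odometry reading plus sampled noise.
    return [x + odom_dx + random.gauss(0.0, ODOM_STD) for x in particles]

def update(particles, measured_range):
    # Measurement update: weight each particle by the likelihood of the
    # observed range, then resample proportionally to the weights.
    weights = [math.exp(-((measured_range - abs(LANDMARK_X - x)) ** 2)
                        / (2 * RANGE_STD ** 2)) for x in particles]
    return random.choices(particles, weights=weights, k=len(particles))

particles = [random.uniform(-1.0, 1.0) for _ in range(1000)]
true_x = 0.0
for _ in range(3):
    true_x += 1.0                        # the robot actually moves 1 m
    particles = predict(particles, 1.0)  # odometry also reports 1 m
    measured = abs(LANDMARK_X - true_x) + random.gauss(0.0, RANGE_STD)
    particles = update(particles, measured)

estimate = sum(particles) / len(particles)
print(round(estimate, 1))  # close to the true position, x = 3.0
```

Without the measurement update, the odometry noise would accumulate step after step; the resampling step is what repeatedly pulls the particle cloud back toward poses consistent with the sensor.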
The purpose of this work is to implement SLAM algorithms on a mobile robotic platform to build two- and three-dimensional maps of different environments. First, the SURF (Speeded Up Robust Features) algorithm extracts features from images taken with the Kinect embedded in the robot; then, a RANSAC (RANdom SAmple Consensus) algorithm is used to match these features against features from previous images and to estimate the relative transformations between pairs of obtained images. From this information, and using the SLAM-GMAPPING, RGBD-SLAM and Monte Carlo (AMCL) algorithms, it is possible to estimate the robot path and the map of the navigated environment simultaneously. Furthermore, the Robot Operating System (ROS) was used to build the map and estimate the platform positions.

The remainder of this paper is organized as follows. After explaining the system architecture, we provide implementation details of the distributed system in Section III. Experiments carried out in simulation as well as on the real robot are discussed in Section IV. Finally, the conclusions of the research are presented in Section V.

A. Mobile Unit

The hardware of the mobile unit (Fig. 3) consists of a CPU, a distance sensor and a mobile robot. The robot's processing unit is a Fit-PC 3, a mini high-performance computer with the required specifications for this project. The Kinect, developed by Microsoft, is used as the distance sensor, and the mobile platform is a Roomba 555, an autonomous vacuum cleaner sold by iRobot. This robot interacts with the CPU using a bidirectional RS-232 protocol: the CPU sends speed and direction information for the wheels, and the platform sends back the odometry data.

Fig. 3. Distribution of the robot's elements.
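The speed and direction commands sent to the Roomba over the serial link can be sketched as follows. This assumes the iRobot Open Interface command format (Drive opcode 137, with velocity and turn radius as signed 16-bit big-endian integers); the opcodes, baud rate and device path below should be checked against iRobot's Open Interface documentation before use.

```python
import struct

# Opcodes assumed from the iRobot Open Interface for the Roomba 500 series.
START, SAFE, DRIVE = 128, 131, 137

def drive_packet(velocity_mm_s, radius_mm):
    """Build a Drive command: opcode 137 followed by the velocity and the
    turn radius, each packed as a signed 16-bit big-endian integer."""
    return struct.pack('>Bhh', DRIVE, velocity_mm_s, radius_mm)

# 200 mm/s forward while turning along a 500 mm radius arc.
packet = drive_packet(200, 500)
print(packet.hex())  # 8900c801f4

# The bytes would then be written to the serial port, e.g. with pyserial
# (device path and baud rate are illustrative):
#   port = serial.Serial('/dev/ttyUSB0', 115200)
#   port.write(bytes([START, SAFE]) + packet)
```

Negative values use two's complement, so reversing is just `drive_packet(-200, radius)`; the odometry flowing back in the other direction is read from the same port.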
A. Learning Stage

This phase begins by launching the nodes implemented on the mobile robot (slave computer) from the user interface (master computer), by means of a wireless network and the SSH communication protocol. This connection allows teleoperating
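A master/slave ROS setup of this kind is typically wired together with a handful of environment variables. The following configuration fragment is a sketch only: the hostnames, user name, package and launch-file names are hypothetical stand-ins, not taken from this work.

```shell
# On the master computer (user interface): start the ROS master and make
# it reachable by name on the wireless network.
export ROS_MASTER_URI=http://master:11311
export ROS_HOSTNAME=master
roscore &

# From the master, open a session on the robot's slave computer over SSH,
# point its ROS environment at the shared master, and launch the nodes.
ssh user@robot-slave
export ROS_MASTER_URI=http://master:11311
export ROS_HOSTNAME=robot-slave
roslaunch my_robot bringup.launch   # hypothetical package and launch file
```

With both machines pointing at the same `ROS_MASTER_URI`, nodes launched on the slave publish topics that the master's visualization and teleoperation tools can subscribe to directly.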
After rebuilding the environment, autonomous navigation tests were performed, testing different paths in the map (Fig. 5). Since the AMCL algorithm computes paths dynamically, a wall was placed in the map so that the navigation algorithm was forced to recalculate the route, avoiding the obstacle. The simulation results showed that the robot reaches the final location by calculating the shortest path and avoiding obstacles that might arise during navigation.

Fig. 5. Calculating the path in the simulation, from the starting point A to the end point B, with the navigation algorithm.

Fig. 7. Laser scanner display test.
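The effect of placing a wall in the map can be illustrated with a much simpler stand-in for the planner: a breadth-first search on a small occupancy grid. The grid size, coordinates and wall placement below are invented for the example; the real system plans on the reconstructed map.

```python
from collections import deque

def shortest_path(grid, start, goal):
    """Breadth-first search on a 4-connected occupancy grid (0 = free,
    1 = occupied). Returns the length in steps of the shortest
    obstacle-free path, or None if the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([(start, 0)])
    seen = {start}
    while queue:
        (r, c), dist = queue.popleft()
        if (r, c) == goal:
            return dist
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append(((nr, nc), dist + 1))
    return None

# Free 5x5 map: the straight path from A=(0,0) to B=(0,4) takes 4 steps.
grid = [[0] * 5 for _ in range(5)]
print(shortest_path(grid, (0, 0), (0, 4)))  # 4

# Place a wall across most of the corridor, as in the simulation test:
# the planner is forced to detour around it and the path gets longer.
for r in range(4):
    grid[r][2] = 1
print(shortest_path(grid, (0, 0), (0, 4)))  # 12
```

Re-running the search after the map changes is exactly the dynamic recalculation behavior described above, here in miniature.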
B. Real Navigation

For the experiment in which the robot navigates in real environments, the nodes were tested independently. First, the mobile platform was connected to the slave computer and the node that controls it was launched, checking the sensor data transmission. Then, the Kinect was connected to this computer, checking the point cloud formation of the real environment (Fig. 6). Since the environment reconstruction is performed in two dimensions, the point cloud conversion node was launched, checking the accuracy of the obtained distance data (Fig. 7).

Fig. 6. Point cloud visualization test.
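The point-cloud-to-scan conversion being tested here can be sketched in plain Python as follows. This is a stand-in for the conversion node, not its actual implementation: points near the sensor's horizontal plane are binned by bearing and the nearest range per bin is kept, producing laser-scan-like distance data. The field of view, bin count and band width are illustrative parameters.

```python
import math

def cloud_to_scan(points, angle_min=-0.5, angle_max=0.5, n_bins=20,
                  band=0.05):
    """Collapse a 3D point cloud (x forward, y left, z up, meters) into a
    planar scan: keep points within `band` of the horizontal plane, bin
    them by bearing, and keep the nearest range per bin."""
    inc = (angle_max - angle_min) / n_bins
    ranges = [float('inf')] * n_bins
    for x, y, z in points:
        if abs(z) > band or x <= 0:
            continue  # outside the scan plane, or behind the sensor
        angle = math.atan2(y, x)
        if not (angle_min <= angle < angle_max):
            continue  # outside the field of view
        i = int((angle - angle_min) / inc)
        ranges[i] = min(ranges[i], math.hypot(x, y))
    return ranges

# A flat wall 2 m ahead of the sensor: the nearest reported range is 2 m,
# with slightly longer ranges toward the edges of the field of view.
wall = [(2.0, y / 10.0, z / 100.0)
        for y in range(-10, 11) for z in range(-4, 5)]
scan = cloud_to_scan(wall)
print(min(r for r in scan if r != float('inf')))  # 2.0
```

Checking that such ranges match the true distances to known obstacles is precisely the accuracy test described for Fig. 7.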
VI. ACKNOWLEDGMENT

This work is supported by the project ING-1551, "Approximation to Localization and Mapping from Aerial Platforms, Phase I", funded by the Research Vice-Rectory of the Universidad Militar Nueva Granada in Bogotá, Colombia.
REFERENCES
[1] H. Durrant-Whyte and T. Bailey, "Simultaneous localization and mapping: part I," IEEE Robotics & Automation Magazine, vol. 13, no. 2, pp. 99-110, Jun. 2006.
[2] M. W. M. G. Dissanayake et al., "A solution to the simultaneous localization and map building (SLAM) problem," IEEE Trans. Robotics and Automation, vol. 17, no. 3, pp. 229-241, Jun. 2001.
[3] M. W. M. G. Dissanayake, P. Newman, H. Durrant-Whyte, S. Clark and M. Csorba, "An experimental and theoretical investigation into simultaneous localisation and map building," in Experimental Robotics VI, vol. 250, Springer London, 2000, pp. 265-274.
[4] R. Smith and P. Cheeseman, "On the representation and estimation of spatial uncertainty," Int. J. Robotics Research, vol. 5, no. 4, pp. 56-68, 1987.