
Localization and Mapping Approximation for Autonomous Ground Platforms, Implementing SLAM Algorithms
Sergio Medina, Paola Lancheros, Laura Sanabria, Nelson Velasco and Leonardo Solaque
Departamento de Ingeniería en Mecatrónica
Universidad Militar Nueva Granada (UMNG)
Bogotá, Colombia
u1801359@unimilitar.edu.co, u1801917@unimilitar.edu.co, u1801943@unimilitar.edu.co, nelson.velasco@unimilitar.edu.co,
leonardo.solaque@unimilitar.edu.co

Abstract—Solving the SLAM (Simultaneous Localization and Mapping) problem increases robot autonomy by allowing the robot to build a map of the environment and, simultaneously, estimate its pose (position and orientation) using this map. Usually, the robot pose is computed by means of odometry. This method uses the sensor information; nevertheless, these sensors introduce noise into the measurements, generating uncertainty in the robot localization. For this reason, probabilistic methods such as Kalman and particle filters have been implemented in robotics to estimate the robot and landmark positions while handling the system errors.

In this paper, SLAM algorithms based on Rao-Blackwellized particle filters were implemented on a mobile robotic platform to perform two-dimensional and three-dimensional reconstructions of unknown environments. The SURF (Speeded-Up Robust Features) and RANSAC (Random Sample Consensus) methods were implemented for visual feature extraction and transformation estimation, respectively. Furthermore, ROS (Robot Operating System) was used for platform position estimation and map building. Finally, a Monte Carlo algorithm was employed for autonomous navigation. This algorithm computes the shortest path between two points within the reconstructed map and executes it, avoiding obstacles that may arise along the way.

Keywords—Simultaneous Localization and Mapping, autonomous robots, autonomous navigation, Bayesian filters.

I. INTRODUCTION

The SLAM (Simultaneous Localization and Mapping) problem has been intensively studied in the robotics community over the past twenty years. Solving this problem is known as the Holy Grail of mobile robotics, as it would provide the way to make a robot truly autonomous [1], allowing it to build a map of an environment and simultaneously navigate and estimate its pose (position and orientation) with the aid of the constructed map [2]. Accordingly, SLAM is useful in a huge range of applications where absolute position and precise map information are unobtainable, e.g. search and rescue operations, urban reconstruction, reconnaissance in hostile environments and planetary exploration, among others [3].

The approaches by Smith and Cheeseman [4] and Durrant-Whyte [5] are considered essential to the definition of the SLAM problem. These early works establish the capability to generate complex spatial relationships between points as the system captures new information; these relationships are the basic element from which the map is generated. These research lines had much in common and converged in the paper of Smith, Self and Cheeseman [6], which shows that as a mobile robot navigates through an unknown environment taking relative observations of landmarks, the estimates of these landmarks are all correlated with each other because of the common error in the estimated vehicle location [7]. This implies that the SLAM problem requires a state consisting of the vehicle pose and every landmark position, which must be updated with each observation [1]. From this it was deduced that a full solution to the problem is infeasible, given the computational complexity owing to the huge state vector used by the Kalman filter (KF) [2]. For this reason, some researchers assumed that the correlations between landmarks could be minimized to reduce the computational complexity (e.g. [7] and [8]). However, later studies, such as [9], show that these correlations are fundamental to the SLAM problem and should not be minimized; on the contrary, the more the correlations grow, the better the solution becomes. Additionally, researchers focused the SLAM solution on the Extended Kalman Filter (EKF), since the KF was designed to estimate the state vector of a linear model; the advantage of the EKF is that it assumes the process and observation models are locally linear and can be linearized by a Taylor series expansion [9]. However, the EKF presents data-association and model-linearization problems, which lead to inconsistent solutions. For this reason FastSLAM, based on particle filtering, arose to represent the nonlinear models directly. The high-dimensional state space of the SLAM problem makes the direct application of a particle filter computationally unfeasible; however, it is possible to reduce the sample space by applying Rao-Blackwellization [1].

Another key aspect of the SLAM problem is the reliable acquisition of environmental information. Initially, humans manipulated the environment by placing artificial landmarks (beacons), letting the robot locate itself [10]. Then, the robot's pose was obtained by employing an odometric or inertial system using the sensor information [11].
Nevertheless, these sensors introduce noise into the measurements, generating uncertainty in the robot location and producing a cumulative error that grows with time. This is a recurrent problem: if there is an error in the robot location, there will be an error in the map, and if there is an error in the map, there will be an error in the location [12]. For this reason, probabilistic methods (Bayesian filters) such as Kalman and particle filters have been implemented in robotics to estimate the robot and landmark positions while handling the system errors.
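As an illustration of this filtering idea (not part of the paper's implementation), the following Python sketch runs the predict/update/resample cycle of a particle filter that estimates a robot's one-dimensional position from noisy odometry and noisy range readings to a landmark; every constant in it is illustrative.

import numpy as np

# Toy particle filter: estimate a robot's 1-D position from noisy
# odometry and noisy range readings to a landmark at x = 10.0.
rng = np.random.default_rng(0)
N = 500                                 # number of particles
landmark = 10.0
particles = rng.uniform(0.0, 5.0, N)    # initial belief over position

true_x = 2.0
for step in range(20):
    true_x += 0.5                                     # robot moves 0.5 m
    z = abs(landmark - true_x) + rng.normal(0, 0.1)   # noisy range reading

    # Predict: propagate every particle through the noisy motion model.
    particles += 0.5 + rng.normal(0, 0.05, N)

    # Update: weight each particle by the measurement likelihood.
    expected = np.abs(landmark - particles)
    weights = np.exp(-0.5 * ((z - expected) / 0.1) ** 2)
    weights /= weights.sum()

    # Resample: draw a new particle set proportional to the weights.
    particles = particles[rng.choice(N, N, p=weights)]

print("true:", true_x, "estimate:", particles.mean())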
The purpose of this work is to implement SLAM algorithms on a mobile robotic platform to build two- and three-dimensional maps of different environments. First, the SURF (Speeded-Up Robust Features) algorithm extracts features from images taken with the Kinect embedded in the robot; then, a RANSAC (Random Sample Consensus) algorithm is used to match these features against features from previous images and to estimate the relative transformations between pairs of images. From this information, and using the SLAM-GMAPPING, RGBD-SLAM and Monte Carlo (AMCL) algorithms, it is possible to estimate the robot path and the map of the navigated environment simultaneously. Furthermore, the Robot Operating System (ROS) was used to build the map and estimate the platform positions.

The remainder of this paper is organized as follows. After explaining the system architecture in Section II, we provide implementation details of the distributed system in Section III. Experiments carried out in simulation as well as on the real robot are discussed in Section IV. Finally, the conclusions of the research are presented in Section V.

II. SYSTEM ARCHITECTURE

Two units form the distributed system: a) a master computer on the fixed station and b) a slave computer embedded in the mobile robot. Both units are connected by a TCP/IP wireless network, used to transmit the sensor measurements and the control data to the mobile platform. Fig. 1 shows the distribution diagram of the RGBD-SLAM system, while Fig. 2 shows that of the SLAM-GMAPPING system.

Fig. 1. RGBD-SLAM system distribution.

Fig. 2. SLAM-GMAPPING system distribution.

A. Mobile Unit
The hardware of the mobile unit (Fig. 3) consists of a CPU, a distance sensor and a mobile robot. The robot's processing unit is a Fit-PC3, a high-performance mini computer with the required specifications for this project. The Kinect, developed by Microsoft, is used as the distance sensor, and the mobile platform is a Roomba 555, an autonomous vacuum cleaner sold by iRobot. The robot interacts with the CPU through a bidirectional RS-232 protocol: the CPU sends the speed and direction of the wheels, and the platform sends back the odometry data.

Fig. 3. Distribution of the robot's elements.
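As a concrete (hypothetical) illustration of this serial link, the sketch below drives a 500-series Roomba using the iRobot Open Interface opcodes (128 = start, 131 = safe mode, 137 = drive) via the pyserial library; the port name, baud rate and speed values are assumptions, since the paper does not list them.

import struct
import serial  # pyserial

# Sketch of driving a 500-series Roomba over RS-232 with the iRobot
# Open Interface. Port and baud rate are illustrative assumptions.
port = serial.Serial('/dev/ttyUSB0', baudrate=115200, timeout=1)

port.write(bytes([128]))   # START: open the Open Interface
port.write(bytes([131]))   # SAFE mode: accept actuator commands

def drive(velocity_mm_s, radius_mm):
    """DRIVE (opcode 137): signed 16-bit velocity and turn radius,
    big-endian, per the Open Interface specification."""
    port.write(bytes([137]) + struct.pack('>hh', velocity_mm_s, radius_mm))

drive(200, 32767)   # 200 mm/s; 32767 is the OI's "drive straight" radius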
ROS was installed on the CPU. This system includes several platforms, languages, compilers and packages developed by different researchers, companies and robotics enthusiasts, so that other people can use and modify them according to their needs. This makes ROS an open platform with continuous feedback [13]. The structural element of this operating system is the node, defined as an executable file within a ROS package. ROS nodes use a ROS client library (Python or C++) to communicate with other nodes, and they can do so in two ways [14]:
Publish/subscribe: a node that wants to share information publishes messages on the appropriate topic or topics; a node that wants to receive information subscribes to the topic or topics it is interested in. A topic might have many publishers and many subscribers (a minimal example is sketched after this list).
Services: one node sends information to another node and waits for a response. This communication is bidirectional: information flows in both directions. Each service call is initiated by one node, and the response goes back to that same node.
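By way of example, a minimal rospy node illustrating the publish/subscribe pattern is sketched below; the topic name and message type are illustrative rather than the ones used by the paper's nodes, which exchange odometry, images and velocity commands in the same manner.

#!/usr/bin/env python
# Minimal publish/subscribe example with rospy.
import rospy
from std_msgs.msg import String

def on_message(msg):
    rospy.loginfo("heard: %s", msg.data)

rospy.init_node('pubsub_demo')
rospy.Subscriber('chatter', String, on_message)   # receive on the topic
pub = rospy.Publisher('chatter', String, queue_size=10)

rate = rospy.Rate(1)  # publish at 1 Hz
while not rospy.is_shutdown():
    pub.publish(String(data='hello'))
    rate.sleep()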
In this work, five nodes were implemented:

1) Gmapping: two-dimensional simultaneous localization and mapping algorithm whose inputs are the odometry data of the robot and the distance measurements. The output is a two-dimensional environmental map.
2) RGBD-SLAM: three-dimensional simultaneous localization and mapping algorithm whose inputs are the odometry data of the robot and the depth information and RGB images from the Kinect sensor. The output is a three-dimensional environmental map.
3) Laser scanner: using the Kinect and its emulation as a laser scanner, the distance measurements from the platform to the landmarks are obtained and sent to the SLAM algorithm.
4) Roomba Control: this node is responsible for acquiring the odometry data and sending the speeds and directions of the wheels, information that is vital to operate the robot.
5) Navegación AMCL: using the data provided by the sensor, the mobile platform and the map of the environment built with SLAM, autonomous robot navigation is performed by means of the Monte Carlo algorithm.
B. Master Computer
This computer is responsible for visualizing the built map of the navigated environment. To accomplish this, ROS was installed and the five nodes mentioned above were implemented. In addition, the Map Saver node and the RVIZ and teleoperation tools were implemented:

Map Saver: it permits storing the built map as a PGM image that will subsequently be used in the autonomous navigation (a sketch of reading such a map follows this list).

RVIZ: two-dimensional and three-dimensional visualization tool which allows us to add point cloud data, cameras, maps, robot positions, etc. It also allows visualizing personalized views that show the difference between the real world and the world observed by the robot.

Teleoperation: user interface that sends speed and position commands to the robot, allowing it to navigate the environment.
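For illustration, the sketch below parses such a PGM map with plain Python/NumPy, using the occupancy convention and default thresholds of the ROS map_server package; treat the thresholds and file name as assumptions.

import numpy as np

# Sketch of loading a map stored by the Map Saver node. ROS map_server
# saves a binary PGM ("P5") plus a YAML file with resolution and
# origin; only the PGM is parsed here.
def read_pgm(path):
    with open(path, 'rb') as f:
        assert f.readline().strip() == b'P5'
        line = f.readline()
        while line.startswith(b'#'):          # skip comment lines
            line = f.readline()
        width, height = map(int, line.split())
        maxval = int(f.readline())
        data = np.frombuffer(f.read(), dtype=np.uint8)
        return data.reshape(height, width), maxval

grid, maxval = read_pgm('map.pgm')
occ_prob = (maxval - grid.astype(float)) / maxval   # dark = occupied
occupied = occ_prob > 0.65    # map_server default occupied_thresh
free = occ_prob < 0.196       # map_server default free_thresh
print("occupied cells:", occupied.sum(), "free cells:", free.sum())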

III. IMPLEMENTATION OF THE DISTRIBUTED SYSTEM

Two phases were implemented to fulfil the objective.

A. Learning Stage
This phase begins by launching the nodes implemented in the mobile robot (slave computer) from the user interface (master computer), by means of a wireless network and the SSH communication protocol. This connection allows teleoperating the robot by sending the speed and direction data. During the teleoperated navigation, visual features are extracted from the incoming color images. The slave computer matches these features against features from previous images, forming a set of three-dimensional point correspondences between any two frames. Based on these correspondences, it estimates the relative transformations between the frames using RANSAC. This information, along with the odometry data, is sent to the SLAM algorithm, which estimates the navigated map so that it can be visualized and reconstructed. Finally, this information is stored as a PGM image that will be used as input for the robot's autonomous navigation.
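To make the transformation-estimation step concrete, the following sketch (a reconstruction under stated assumptions, not the authors' code) estimates a rigid transform between two sets of already-matched 3-D points using RANSAC with a Kabsch (SVD) model fit; the iteration count and inlier threshold are illustrative.

import numpy as np

# Given matched 3-D points P, Q (N x 3) from two Kinect frames,
# estimate R, t with Q ~ R @ P + t, robustly via RANSAC.
def kabsch(P, Q):
    """Least-squares rotation and translation via SVD."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    U, _, Vt = np.linalg.svd((P - cp).T @ (Q - cq))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cq - R @ cp

def ransac_rigid(P, Q, iters=200, thresh=0.05, seed=0):
    rng = np.random.default_rng(seed)
    best = np.zeros(len(P), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(P), 3, replace=False)     # minimal sample
        R, t = kabsch(P[idx], Q[idx])
        err = np.linalg.norm((P @ R.T + t) - Q, axis=1)
        inliers = err < thresh
        if inliers.sum() > best.sum():
            best = inliers
    return kabsch(P[best], Q[best])   # refit on the inlier set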
B. Autonomous Navigation Stage
This phase also begins by launching the nodes implemented in the slave computer from the master computer. The inputs to this process are the map image generated previously in the learning stage, the robot odometry and the measurements of the laser sensor. Afterwards, the map is shown by means of RVIZ and the user must indicate the final position in the map that the robot must reach. The AMCL algorithm is executed in the slave computer and used as a probabilistic localization system.
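The paper does not detail how goals are issued programmatically; the sketch below assumes the standard ROS navigation stack, where AMCL provides the pose estimate and a move_base action server plans and follows the path. The goal coordinates are illustrative; in the paper the goal is selected in RVIZ.

#!/usr/bin/env python
# Sketch of commanding the autonomous-navigation stage through the
# standard ROS navigation stack (AMCL + move_base).
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

rospy.init_node('send_goal')
client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
client.wait_for_server()

goal = MoveBaseGoal()
goal.target_pose.header.frame_id = 'map'    # pose expressed in the map
goal.target_pose.header.stamp = rospy.Time.now()
goal.target_pose.pose.position.x = 2.0      # illustrative target
goal.target_pose.pose.position.y = 1.0
goal.target_pose.pose.orientation.w = 1.0   # face along the x axis

client.send_goal(goal)
client.wait_for_result()
rospy.loginfo("navigation finished with state %d", client.get_state())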

IV. EXPERIMENTS

The approach described above has been implemented and tested in two stages. The first is a simulation using a virtual environment that allows verifying the operation and performance of each implemented node. The second is the real navigation of the robot and the three-dimensional reconstruction.

A. Simulation
The aim of the simulation is to test the performance of the nodes in the virtual environment. To represent conditions similar to the real navigation, the sensor measurements were corrupted by Gaussian noise, since no sensor is completely accurate. The Player/Stage tool was used in this test; it allows loading the map of the environment, launching the nodes and displaying the reconstruction. The simulation of the constructed map is shown in Fig. 4, where the similarity between the real map and the obtained map can be observed.
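A minimal sketch of this corruption step is shown below, assuming the ideal ranges come from the emulated laser scanner; the noise level is illustrative, not the value used in the experiments.

import numpy as np

# Perturb the simulator's ideal ranges with zero-mean Gaussian noise
# before feeding them to the SLAM nodes. Sigma is illustrative; the
# 2.5 m cap matches the reduced scanner range discussed below.
rng = np.random.default_rng(42)

def corrupt(scan_ranges, sigma=0.01, max_range=2.5):
    noisy = np.asarray(scan_ranges) + rng.normal(0.0, sigma, len(scan_ranges))
    return np.clip(noisy, 0.0, max_range)   # respect the sensor limits

print(corrupt([1.0, 1.5, 2.4]))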

Fig. 4. (a) Environment navigated in the simulation and (b) environment built by the Gmapping SLAM algorithm.

After rebuilding the environment, autonomous navigation tests were performed by trying different paths in the map (Fig. 5). As the AMCL algorithm computes paths dynamically, a wall was placed into the map so that the navigation algorithm was forced to recalculate the route, avoiding the obstacle. The simulation results showed that the robot reaches the final location by calculating the shortest path while avoiding obstacles that might arise during navigation.

Fig. 5. Calculating the path in the simulation, from the starting point A to the end point B, with the navigation algorithm.

The simulation tests also allowed adjusting some parameters, such as reducing the laser scanner range from 3.5 m to 2.5 m, because at 3.5 m errors appeared in the reconstruction due to the limits of the sensor's optimal operating range.

B. Real Navigation
For the experiment in which the robot navigates in real environments, the nodes were first tested independently. The mobile platform was connected to the slave computer and the node that controls it was launched, checking the sensor data transmission. Then, the Kinect was connected to this computer, checking the point cloud formation of the real environment (Fig. 6). Since the environment reconstruction is performed in two dimensions, the point cloud conversion node was also launched, checking the accuracy of the obtained distance data (Fig. 7).
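A minimal sketch of this conversion is given below: one row of the depth image is turned into planar ranges with a pinhole camera model, in the spirit of the conversion node; the Kinect intrinsics shown are typical calibration values, not the paper's.

import numpy as np

# Emulate a laser scan from one row of a Kinect depth image using the
# pinhole model. fx and cx are illustrative calibration values.
fx, cx = 525.0, 319.5

def depth_row_to_scan(depth_row_m):
    u = np.arange(len(depth_row_m))
    angles = np.arctan((u - cx) / fx)        # bearing of each column
    ranges = depth_row_m / np.cos(angles)    # slant range in the plane
    return angles, ranges

angles, ranges = depth_row_to_scan(np.full(640, 1.2))  # flat wall at 1.2 m
print(ranges.min(), ranges.max())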
Fig. 6. Point cloud visualization test.

Fig. 7. Laser scanner display test.

After testing each node independently, all systems were attached to the robot, and communication tests between the master and the slave were performed by remotely accessing all the packages implemented in the slave before beginning the navigation tests. These tests were performed in a UMNG corridor. This environment has flat, regular and homogeneous surfaces, consisting of 2.5 m high and 2 m wide walls, doors and lockers (Fig. 8). The 3D reconstruction was performed using RGBD-SLAM implemented in the slave computer and displayed on the master one (Fig. 9). Notably, the master computer must not be more than 15 m from the robot in order to avoid communication losses.

Fig. 8. Navigated environment.

To accomplish the experiment, we started with the teleoperation of the robot, checking correct operation and low vibrations, in order then to begin the reconstruction and autonomous navigation. Fig. 10 shows the reconstruction of the previously described space.

Fig. 9. 3D reconstruction of the UMNG corridor, capturing a sequence of frames with the RGBD-SLAM algorithm.
Fig. 10. Two-dimensional reconstructed map where the tests were executed.

V. CONCLUSIONS

We presented an approach to visual SLAM using RGB-D sensors. The algorithms allow building two-dimensional and three-dimensional maps of an unknown environment that can be used for robot localization, navigation and path planning. The maps created are based on reference points extracted from the images provided by the Kinect sensor.

SLAM algorithms were implemented on a mobile robotic platform for the autonomous navigation and reconstruction of unknown environments, minimizing the measurement errors of the sensors. Good results were obtained with the two-dimensional and three-dimensional reconstructions, enabling the development of important applications such as search and rescue operations and the exploration of hostile environments, among others. The AMCL algorithm allows the robot to navigate autonomously through the environment following the shortest route between two points, avoiding obstacles that may arise along the path.

ROS is an important tool in research projects that allows implementing everything from basic applications, like process simulation, to more complex applications, like SLAM. Although it has room for improvement and is limited to the Ubuntu operating system, day by day more robots and electronic devices are supported by this platform.

VI. ACKNOWLEDGMENT

This work is supported by the project ING-1551, "Approximation to Localization and Mapping from Aerial Platforms, Phase I", funded by the Research Vice-rectory of the Universidad Militar Nueva Granada in Bogotá, Colombia.

REFERENCES

[1] H. Durrant-Whyte and T. Bailey, "Simultaneous localization and mapping: part I," IEEE Robotics & Automation Magazine, vol. 13, no. 2, pp. 99-110, June 2006.
[2] M. W. M. G. Dissanayake et al., "A solution to the simultaneous localization and map building (SLAM) problem," IEEE Trans. Robotics and Automation, vol. 17, no. 3, pp. 229-241, June 2001.
[3] M. W. M. G. Dissanayake, P. Newman, H. Durrant-Whyte, S. Clark and M. Csorba, "An experimental and theoretical investigation into simultaneous localisation and map building," in Experimental Robotics VI, vol. 250, London: Springer, 2000, pp. 265-274.
[4] R. C. Smith and P. Cheeseman, "On the representation and estimation of spatial uncertainty," Int. J. Robotics Research, vol. 5, no. 4, pp. 56-68, 1987.
[5] H. Durrant-Whyte, "Uncertain geometry in robotics," IEEE J. Robotics and Automation, vol. 4, no. 1, pp. 23-31, February 1988.
[6] R. Smith, M. Self and P. Cheeseman, "Estimating uncertain spatial relationships in robotics," in Autonomous Robot Vehicles. New York: Springer-Verlag, 1990, pp. 167-193.
[7] J. Leonard and H. Durrant-Whyte, "Simultaneous map building and localization for an autonomous mobile robot," in Proc. IEEE/RSJ Int. Workshop on Intelligent Robots and Systems (IROS '91), Osaka, 1991, pp. 1442-1447.
[8] W. Rencken, "Concurrent localization and map building for mobile robots using ultrasonic sensors," in Proc. 1993 IEEE/RSJ Int. Conf. Intelligent Robots and Systems (IROS '93), Yokohama, 1993, pp. 2192-2197.
[9] M. Csorba, "Simultaneous localisation and map building," Ph.D. dissertation, Dept. Eng. Sci., Univ. Oxford, Oxford, 1997.
[10] J. Leonard and H. Durrant-Whyte, "Mobile robot localization by tracking geometric beacons," IEEE Trans. Robot. Autom., vol. 7, no. 3, pp. 376-382, June 1991.
[11] R. Chatila and J. P. Laumond, "Position referencing and consistent world modeling for mobile robots," in Proc. 1985 IEEE Int. Conf. Robotics and Automation, 1985, pp. 138-145.
[12] A. Gil et al., "Influencia de los parámetros de un filtro de partículas en la solución al problema de SLAM," IEEE Latin America Transactions, vol. 6, no. 1, pp. 18-27, 2007.
[13] M. Núñez, F. León and P. Cárdenas, "ROS: Operating system for robotics, concepts and applications," in X Congreso Internacional de Electrónica y Tecnologías de Avanzada, Pamplona, 2014.
[14] J. M. O'Kane, A Gentle Introduction to ROS, Columbia, SC, 2013, ch. 2 and 8, pp. 23-24, 117.
