
Implementation of Informed RRT* on a Pre-Mapped

Octomap Generated by RTAB Map


W. Adiprawita1,a), M. Adhipatiunus2,b), R. R. A. Setiadji3,c), R. M. Widyaputra1,d)

1School of Electrical Engineering and Informatics, Institut Teknologi Bandung
2Faculty of Mechanical and Aerospace Engineering, Institut Teknologi Bandung
3Faculty of Industrial Technology, Institut Teknologi Bandung

a)wadiprawita@stei.itb.ac.id
b)Corresponding author: adhipatiunus@students.itb.ac.id
c)reinardras@gmail.com
d)ronggur.mwp@gmail.com

Abstract. Simultaneous Localization and Mapping (SLAM) is a robotics solution for situations in which Global
Positioning System (GPS) positioning is impossible. One particular example is indoor navigation, where the
signal is poor or the distance to surrounding objects is negligible relative to GPS precision. In this research,
visual SLAM is implemented using a stereo depth camera simulated in Gazebo, which generates a Point Cloud
that is later converted into an Octomap, a spatially efficient data structure. Based on the gathered information, a
path planning algorithm can be implemented for the desired robot, in this case a quadcopter, chosen for its
versatility and 3D navigation capability. Path planning is implemented using Informed Rapidly-exploring
Random Tree Star (Informed RRT*), which uses an ellipsoid boundary to optimize the stochastic path planning.
All of this is implemented in the Robot Operating System (ROS) framework. This research shows that the
Informed RRT* algorithm produces a valid path for a quadcopter.

INTRODUCTION

One of the challenges of Unmanned Aerial Vehicle (UAV) navigation is that the Global Positioning System
(GPS) cannot be used when the signal is poor or the distance to nearby objects is negligible relative to GPS
precision. One solution to this problem is Simultaneous Localization and Mapping (SLAM). SLAM allows a
UAV to keep track of its current position and the positions of nearby objects relative to a fixed coordinate
system. One possible method uses a visual sensor such as a stereo depth camera to perform SLAM. One module
that implements this is RTAB Map [1], an appearance-based, graph-based SLAM method that is already
integrated into the Robot Operating System (ROS) [2]. The UAV can then perform informed navigation using
the sensor data, from a given start position to a given goal position relative to a fixed coordinate system. One
such path planning method is Rapidly-exploring Random Tree Star (RRT*) [3], which generates a stochastic
path. The algorithm is proven to be probabilistically complete: as the number of samples approaches infinity, a
solution is guaranteed to be found if one exists, or the algorithm reports that no solution is possible under the
given constraints. In this research, the authors use Informed RRT*, which draws random samples from an
ellipsoid boundary and therefore converges faster than traditional RRT* [4].
In this research, the authors use the Gazebo simulator to map a virtual environment. The map is converted
into an Octomap, which models the environment as occupied and unoccupied space. Finally, the authors
implement Informed RRT* to find the optimal path between given start and goal coordinates based on the
Octomap.

ROBOT OPERATING SYSTEM

Robot Operating System is a framework primarily used to write robot software. It is designed with
modularity in mind: each piece of functionality is written and executed as a separate process, with the
framework handling all communication between processes. This design aims to ease development in robotics,
where the fields involved are extremely diverse, so that developers in a specific field can focus on research
progress instead of spending effort integrating prerequisite components [5].
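
The decoupling described above can be illustrated with a plain-Python mimic of topic-based message passing. This is not the ROS API (a real node would use, e.g., rospy); the `TopicBus` class and topic names below are purely illustrative of how independent components communicate through named topics without knowing about each other:

```python
from collections import defaultdict
from typing import Any, Callable

class TopicBus:
    """Minimal publish/subscribe dispatcher, standing in for the ROS master
    and transport layer in this sketch."""

    def __init__(self) -> None:
        self._subscribers = defaultdict(list)

    def subscribe(self, topic: str, callback: Callable[[Any], None]) -> None:
        # A component registers interest in a topic by name only.
        self._subscribers[topic].append(callback)

    def publish(self, topic: str, message: Any) -> None:
        # The publisher never learns who, if anyone, is listening.
        for callback in self._subscribers[topic]:
            callback(message)

bus = TopicBus()

# A "mapping" component consumes sensor data without knowing the producer.
received = []
bus.subscribe("/camera/points", received.append)

# A "driver" component publishes without knowing the consumers.
bus.publish("/camera/points", {"seq": 0, "points": [(1.0, 2.0, 0.5)]})
```

Either side can be replaced or run elsewhere without touching the other, which is the integration burden ROS removes from developers.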

SIMULTANEOUS LOCALIZATION AND MAPPING

Simultaneous Localization and Mapping (SLAM) is a solution for situations where GPS positioning is
impossible. The main idea is that the robot localizes its own position with respect to a fixed coordinate system
and maps surrounding objects simultaneously, using sensors such as LIDAR or cameras. In this research, a
stereo depth camera is used as the primary sensor. Several appearance-based SLAM methods exist; this research
uses the graph-based method, which recovers both the map and the path of the robot. Appearance-based SLAM
includes a process called loop closure, which determines whether the robot has visited a location before by
comparing the current observation against stored ones; the cost of this process can grow linearly as the robot
travels to new areas.

Real-Time Appearance Based Map

Real-Time Appearance Based Map (RTAB Map) is a SLAM module integrated into ROS that supports both
LIDAR and camera sensors. It is an appearance-based SLAM method that uses the graph-based approach and is
optimized to perform loop closure in real time. One of the outputs of RTAB Map is a Point Cloud [6], a data
structure containing the RGB color and depth of points in 3D; this can then be converted into an Octomap, a
tree-based data structure that is spatially efficient [7].
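
The spatial efficiency of the octree underlying Octomap comes from subdividing a cube only along the path to each occupied point, so large empty regions cost a single node. The toy dict-based insertion routine below is a sketch of this idea, not OctoMap's actual implementation or API:

```python
def insert(node, point, center, half, depth):
    """Insert an occupied point into a dict-based octree node covering a cube
    of half-width `half` around `center`, subdividing `depth` more times."""
    if depth == 0:
        node["occupied"] = True
        return
    # Pick the child octant containing the point (one bit per axis).
    idx = sum((1 << i) for i in range(3) if point[i] >= center[i])
    child = node.setdefault(idx, {})
    offset = half / 2.0
    child_center = tuple(
        c + (offset if point[i] >= c else -offset)
        for i, c in enumerate(center)
    )
    insert(child, point, child_center, offset, depth - 1)

def count_nodes(node):
    """Total tree nodes, skipping the 'occupied' flag entries."""
    return 1 + sum(count_nodes(v) for k, v in node.items() if k != "occupied")

root = {}
insert(root, (9.9, 9.9, 9.9), (0.0, 0.0, 0.0), 10.0, 8)
# One occupied leaf at depth 8 costs only a chain of 9 nodes, versus the
# 8**8 cells a dense grid at the same resolution would allocate.
```

This is why converting a Point Cloud into an Octomap is attractive for path planning over large, mostly empty volumes.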

INFORMED RAPIDLY-EXPLORING RANDOM TREE STAR

Informed Rapidly-exploring Random Tree Star (Informed RRT*) is an improvement over RRT*. It performs
as the RRT* algorithm until it first reaches the given goal position; afterwards, it applies a heuristic that restricts
sampling to a 3D ellipsoid containing all states that could improve the current solution between the start and
goal positions, minimizing the path cost as it continues. The algorithm is already implemented in the Open
Motion Planning Library (OMPL) [8], and the Flexible Collision Library (FCL) [9] is used to model obstacles
in 3D.
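
The ellipsoid heuristic above can be made concrete. Once a solution of cost c_best is known, Informed RRT* samples only the prolate spheroid with foci at the start and goal whose transverse diameter is c_best [4]. OMPL ships a production version of this sampler; the routine below is an illustrative re-implementation (assuming c_best exceeds the start-goal distance), not OMPL's API:

```python
import numpy as np

def rotation_world_frame(a1):
    """Rotation matrix whose first column is the unit vector a1, built via
    the SVD construction used in the Informed RRT* paper."""
    M = np.outer(a1, np.array([1.0, 0.0, 0.0]))
    U, _, Vt = np.linalg.svd(M)
    return U @ np.diag([1.0, 1.0, np.linalg.det(U) * np.linalg.det(Vt)]) @ Vt

def informed_sample(start, goal, c_best, rng):
    """Uniform sample from the spheroid of states that could beat c_best."""
    c_min = np.linalg.norm(goal - start)   # theoretical minimum path cost
    centre = (start + goal) / 2.0
    C = rotation_world_frame((goal - start) / c_min)
    # Semi-axes: c_best/2 along the start-goal axis, and
    # sqrt(c_best^2 - c_min^2)/2 on the two conjugate axes.
    conj = np.sqrt(c_best**2 - c_min**2) / 2.0
    L = np.diag([c_best / 2.0, conj, conj])
    # Uniform point in the unit 3-ball, then stretch, rotate, translate.
    v = rng.normal(size=3)
    v = v / np.linalg.norm(v) * rng.uniform() ** (1.0 / 3.0)
    return C @ L @ v + centre
```

Every returned point x satisfies ||x - start|| + ||x - goal|| <= c_best, so the planner never wastes samples on states that cannot improve the current solution.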

METHODOLOGY

Gazebo Simulation

This research consists of two parts. The first is performing SLAM in a virtual Gazebo [10] environment. The
simulation is performed on a Waffle-type TurtleBot with a stereo depth camera. Its output is a 3D map of the
virtual environment exported as Point Cloud data, which is then converted into Octomap data.

Informed RRT* Implementation

The Octomap data is then used as the obstacle model in FCL; the next step is to implement Informed RRT* on
the given map. The algorithm is as follows:

FIGURE 1. ​Flow Chart of the Informed RRT* Algorithm
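
The planner loop summarized in Figure 1 can be sketched in simplified form. For brevity the sketch below is plain RRT on a 2D unit square with a single box obstacle; the full algorithm additionally rewires the tree (RRT*) and restricts sampling to the ellipsoid once a solution exists (Informed RRT*). All names, the obstacle, and parameters other than the 0.2 m step are illustrative:

```python
import math
import random

STEP = 0.2          # maximum distance per vertex, as in the experiments
GOAL_RADIUS = 0.3   # illustrative goal-region radius

def collision_free(p, box=((0.4, 0.0), (0.6, 0.7))):
    """A point is free if it lies outside the axis-aligned box obstacle."""
    (x0, y0), (x1, y1) = box
    return not (x0 <= p[0] <= x1 and y0 <= p[1] <= y1)

def segment_free(a, b, n=5):
    """Check interpolated points along a-b (a is assumed already free)."""
    return all(collision_free((a[0] + t * (b[0] - a[0]),
                               a[1] + t * (b[1] - a[1])))
               for t in (i / n for i in range(1, n + 1)))

def steer(a, b):
    """Move from a toward b by at most STEP."""
    d = math.dist(a, b)
    if d <= STEP:
        return b
    t = STEP / d
    return (a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1]))

def rrt(start, goal, iters=5000, seed=1):
    rng = random.Random(seed)
    parent = {start: None}
    for _ in range(iters):
        # Sample (with a small goal bias), extend the nearest vertex.
        sample = goal if rng.random() < 0.1 else (rng.random(), rng.random())
        nearest = min(parent, key=lambda v: math.dist(v, sample))
        new = steer(nearest, sample)
        if segment_free(nearest, new):
            parent[new] = nearest
            if math.dist(new, goal) < GOAL_RADIUS:
                path = [new]                    # walk back to the root
                while parent[path[-1]] is not None:
                    path.append(parent[path[-1]])
                return path[::-1]
    return None  # report failure within the iteration budget
```

The flow matches the figure: sample, find the nearest tree vertex, steer by at most one step, keep the edge if it is collision-free, and terminate when the goal region is reached or the budget is exhausted.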


In this research, the start point coordinate is published with the rostopic pub command as a geometry_msgs
message, and the goal point coordinate is then published the same way. The virtual environment in which
Informed RRT* is implemented is shown below:

FIGURE 2​. The Octomap Representation of the Map


A result is accepted as valid if it is a collision-free path along which navigation is possible, or if it correctly
reports that no solution can be obtained under the given constraints.

System Settings

The simulation is performed on Ubuntu 18.04 with an Intel Core i5-8250U (1.6 GHz) and 16 GB of RAM. The
ROS version is Melodic. To model the quadcopter's vertical flight capability, the 3D boundary is specified as
follows:
TABLE 1. 3D Boundary of the Simulation
      X (m)   Y (m)   Z (m)
Max    10      10     10
Min   -10     -10      0.05
For each test case, the maximum distance between vertices is 0.2 m.
The start point coordinate is given below:
TABLE 2. Start Point Coordinate
 X (m)   Y (m)   Z (m)
-3.25   -1.9     0.05


The goal point coordinates are given below:
TABLE 3. ​Goal Point Coordinates
Test Case X (m) Y (m) Z (m)
1 -6.4 3.1 0.05
2 -6.55 -1.9 0.05
3 1.05 3.35 0.05
4 3.95 1.05 0.05

RESULT AND DISCUSSION

The results for each test case are given below; the red line shows the path generated by the Informed RRT*
algorithm:

Test Case 1

FIGURE 3.​ Result of Test Case 1


Test Case 2

FIGURE 4.​ Result of Test Case 2

Test Case 3

FIGURE 5.​ Result of Test Case 3


Test Case 4

FIGURE 6.​ Result of Test Case 4

Computational Cost

For each test case, the minimum distance between start and goal computed with the Pythagorean theorem, the
best distance calculated by the algorithm, and the number of iterations required to obtain the solution are:
TABLE 4. ​Computational Cost
Test Case Minimum Distance (m) Best Distance (m) Iterations
1 5.909 8.385 367
2 3.300 4.540 176
3 6.786 11.327 383
4 7.781 13.268 482
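
The minimum-distance column is the straight-line (Euclidean) distance between the start point of Table 2 and each goal point of Table 3, and can be reproduced directly:

```python
import math

start = (-3.25, -1.9, 0.05)                       # Table 2
goals = [(-6.4, 3.1, 0.05), (-6.55, -1.9, 0.05),  # Table 3, cases 1-4
         (1.05, 3.35, 0.05), (3.95, 1.05, 0.05)]

minimum_distances = [math.dist(start, g) for g in goals]
# Agrees with the 5.909, 3.300, 6.786, 7.781 m of Table 4 to within a
# millimetre (small differences come from rounding in the table).
```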
The data in Table 4 and the corresponding illustrations show that a simple trajectory such as Test Case 2
requires few iterations because there is easy access to the room, in this case a window, through which the
quadcopter can form a relatively straight line to the goal point. In the other test cases, the quadcopter must
negotiate multiple walls to reach the target and thus iterates longer to obtain an optimal solution; in addition, it
must fly higher to avoid collisions with obstacles, which increases the path length.

CONCLUSION

The implementation of Informed RRT* in OMPL has been tested and produces a solution given start and goal
point coordinates. The algorithm can bypass obstacles and avoid collisions by exploiting the vertical flight
capability of a quadcopter. It computes the path and best distance in under one second for the given start and
goal point coordinates, and it can find convenient entry points, such as the window in Test Case 2. The
computational cost of the algorithm increases with the complexity of the obstacles, as in the cases other than the
second, where the quadcopter must negotiate multiple walls to reach the desired goal point. The rate of
convergence increases if the maximum distance per vertex is increased, but doing so also increases the resulting
path length from start to goal. The best case is achieved when there is no obstacle along the straight line from
start point to goal point, as in Test Case 2, where the trajectory is relatively straight.

ACKNOWLEDGEMENT

This work is supported by Aksantara UAV Research and Development Team.

REFERENCES

1. M. Labbé and F. Michaud, "RTAB-Map as an open-source lidar and visual simultaneous localization
and mapping library for large-scale and long-term online operation", (Journal of Field Robotics 36,
2018), pp. 416-446.
2. I. Z. Ibragimov and I. M. Afanasyev, "Comparison of ROS-based visual SLAM methods in
homogeneous indoor environment", (14th Workshop on Positioning, Navigation and Communications
(WPNC), Bremen, 2017), pp. 1-6.
3. S. Karaman, M. Walter, A. Perez, E. Frazzoli and S. Teller, "Anytime Motion Planning Using the
RRT*", (IEEE International Conference on Robotics and Automation, Shanghai, 2011), pp. 1478-1483.
4. J. Gammell, S. Srinivasa and T. Barfoot, "Informed RRT*: Optimal sampling-based path planning
focused via direct sampling of an admissible ellipsoidal heuristic", (IEEE/RSJ International Conference
on Intelligent Robots and Systems, Chicago, 2014), pp. 2997-3004.
5. M. Quigley, K. Conley, B. Gerkey, J. Faust, T. Foote, J. Leibs, R. Wheeler and A. Y. Ng,
"ROS: an open-source Robot Operating System", (IEEE International Conference on Robotics and
Automation, Kobe, 2009), pp. 5-10.
6. R. Rusu and S. Cousins, "3D is here: Point Cloud Library (PCL)", (IEEE International Conference on
Robotics and Automation, Shanghai, 2011), pp. 11-14.
7. A. Hornung, K. Wurm, M. Bennewitz, C. Stachniss and W. Burgard, "OctoMap: an efficient
probabilistic 3D mapping framework based on octrees", (Autonomous Robots, 2013), pp. 189-206.
8. I. Sucan, M. Moll and L. Kavraki, "The Open Motion Planning Library", (IEEE Robotics &
Automation Magazine, 2012), pp. 72-82.
9. J. Pan, S. Chitta and D. Manocha, "FCL: A general purpose library for collision and proximity
queries", (IEEE International Conference on Robotics and Automation, 2012), pp. 3859-3866.
10. N. Koenig and A. Howard, "Design and use paradigms for Gazebo, an open-source multi-robot
simulator", (IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Sendai,
2004), pp. 2149-2154.
