
RBE 3002 Lab 4

16 December 2021

1 Introduction
The objective of this laboratory was to design and integrate a localization, navigation, and mapping
system for a differential drive robot. To best demonstrate these abilities, the laboratory was broken into
three phases. In Phase 1, the robot was placed in an unknown map, and it was required to traverse the
map, identifying obstacles and creating a complete depiction of the map to be used in future phases. In
Phase 2, the robot needed to use the map information generated in Phase 1 to return to the position
in which it started Phase 1. In Phase 3, the robot was placed in an unknown location on the map,
and it needed to localize itself on the map before navigating to a given location. In each phase, the
robot needed to move throughout the map without contacting obstacles. All three phases needed to
be completed within a 15-minute window, and the third phase was limited to a 5-minute window. The
successful development of a code base to complete these tasks would demonstrate achievement
of course and laboratory objectives relating to robot navigation, perception, localization, and mapping.

2 Methodology
The following sections detail the methods used to achieve the aforementioned goals of this laboratory.

2.1 Phase 1
In Phase 1, the robot needed to travel throughout an unknown map, finding landmarks, obstacles, and
other features. While completing these tasks, the robot needed to avoid obstacles and move swiftly
enough to finish the exercise within the given time limit.

2.1.1 Setup
The following nodes were used to successfully complete Phase 1: lab4_controller, path_planner, nav_interface, turtlebot3_core, turtlebot3_lds, and turtlebot3_slam_gmapping. lab4_controller contained the overarching logic flow required to execute the desired actions, as well as the methods used to generate frontiers and calculate the c-space. path_planner contained the methods used to run the A* algorithm and generate an optimal set of waypoints to bring the robot to the desired destination. nav_interface contained the methods used to actuate the robot’s motors and bring it to its desired goal. turtlebot3_core was a provided node that controlled low-level aspects of the robot. turtlebot3_lds handled the robot’s lidar scans. turtlebot3_slam_gmapping interpreted the lidar and locomotion information, generated a map based on the sensor readings, and localized the robot in that map.
In addition to these nodes, Rviz was used to visualize important information about the map and the
robot. In Rviz, the robot was overlaid onto the map generated by gmapping. Also shown in Rviz were
the c-space, the frontiers, and the path to the frontier of highest priority.

2.1.2 C-Space Creation


The first step in creating the c-space was retrieving the map from gmapping. This was done with a ROS service call of type GetMap, whose response contains an occupancy grid. This occupancy grid was then transformed from a one-dimensional array into a two-dimensional array. The first step in forming this two-dimensional array is creating a list of lists in which each inner list is a row. Since the width of the occupancy grid is known, the program counts how many values it has added to each row, starting a new row until all values are placed. While forming this two-dimensional array, obstacles were given a value of 0 and everything else a value of 255. This creates a contrast in which obstacles are black and the surrounding area is white. With this list of lists generated, the numpy function "array" converts it into the needed two-dimensional array. This two-dimensional array is in a format that OpenCV can treat as an image, so standard image-processing techniques can be applied. The image is then dilated with the erode function (since the black part must be expanded), which takes in an image, a kernel, and a number of iterations. By choosing a 3x3 kernel of all ones, the obstacles grow by one cell on each side per iteration.
Finally, the new image must be decomposed back into an occupancy grid so that the rest of ROS can understand and manipulate the data. The numpy function "flatten" condenses the two-dimensional array back into a single dimension. The code then iterates over each cell of the one-dimensional array, transforming any cell with a value of 0 into a cell with a value of 100 and any cell with a value of 255 into a cell with a value of either -1 or 0. This brings the map back to three possibilities: known, unknown, and obstacle. The resulting one-dimensional list contains all of the data needed for ROS to handle the occupancy grid and for the other nodes to manipulate it further.
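
A minimal sketch of this pipeline is shown below, assuming grid is a nav_msgs/OccupancyGrid already retrieved through the GetMap service; the function name make_cspace and the default padding of three iterations are illustrative rather than the lab's actual code.

    import numpy as np
    import cv2

    def make_cspace(grid, iterations=3):
        h, w = grid.info.height, grid.info.width
        # Reshape the row-major 1-D occupancy data into a 2-D array, then map
        # obstacles (100) to black (0) and everything else to white (255).
        cells = np.array(grid.data, dtype=np.int8).reshape(h, w)
        img = np.where(cells == 100, 0, 255).astype(np.uint8)
        # Eroding white expands the black obstacles; a 3x3 all-ones kernel
        # grows each obstacle by one cell on every side per iteration.
        kernel = np.ones((3, 3), np.uint8)
        img = cv2.erode(img, kernel, iterations=iterations)
        # Flatten back to one dimension: black -> 100 (obstacle); white -> -1
        # if the cell was unknown in the original grid, otherwise 0 (free).
        out = []
        for v, orig in zip(img.flatten(), grid.data):
            out.append(100 if v == 0 else (-1 if orig == -1 else 0))
        return out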

2.1.3 Finding Frontiers


Frontiers are found in a way similar to the c-space calculation: an image is created and image-processing operations are then applied to make the program efficient.
When constructing the two-dimensional array for this image, every known, non-obstacle cell that borders an unknown cell is given a value of 0, and everything else a value of 255. As a result, an image is produced in which all frontier edges are black and everything else is white.
With frontiers known, the next important step was grouping nearby frontier cells into frontiers for the robot to travel to. This was done by dilating by a factor of 2 and then eroding by the same factor, so that line thickness does not change but the lines become more grouped. These dilations and erosions used the same kernel described above for the c-space calculation. This, however, becomes a problem when evaluating a frontier with angled lines, whose neighboring pixels are connected only diagonally. To counteract this issue, the map is dilated with a custom 2x2 kernel so that one side of each line is expanded, connecting the lines.
With all image processing accomplished, the OpenCV contour function was called, returning all contours in the image, each represented as a list of connected points. This list of contours was then iterated over to produce a list of coordinates in which each point represents the center of a frontier. Each center was calculated from the moments of the frontier contour, if they exist, and from the geometric mean of the contour points when the moments return zero values.
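
The sketch below illustrates this grouping-and-centroid step, assuming frontier_img is the black-on-white uint8 image described above and OpenCV 4's two-value findContours return; the arithmetic mean of the contour points stands in here for the report's geometric-mean fallback.

    import numpy as np
    import cv2

    def frontier_centroids(frontier_img):
        kernel = np.ones((3, 3), np.uint8)
        # The frontier lines are black, so eroding the image dilates them and
        # dilating shrinks them back: thickness is kept, gaps are merged.
        img = cv2.erode(frontier_img, kernel, iterations=2)
        img = cv2.dilate(img, kernel, iterations=2)
        # A 2x2 kernel expands one side of each line, connecting pixels that
        # only touch diagonally along angled frontiers.
        img = cv2.erode(img, np.ones((2, 2), np.uint8), iterations=1)
        # findContours traces white features, so invert before calling it.
        contours, _ = cv2.findContours(255 - img, cv2.RETR_LIST,
                                       cv2.CHAIN_APPROX_SIMPLE)
        centroids = []
        for c in contours:
            m = cv2.moments(c)
            if m["m00"] != 0:
                centroids.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
            else:  # degenerate contour: fall back to the mean of its points
                centroids.append(tuple(c.reshape(-1, 2).mean(axis=0)))
        return centroids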

2.1.4 Selecting Frontiers


With a list of coordinates representing frontier centers, the program calculates the Euclidean distance from each frontier to the robot and sorts the frontiers in a priority queue, in which closer frontiers are prioritized.
Next, the program takes the top of the queue and attempts to plan a path to that coordinate. The planner is an implementation of A* whose cost reflects whether a cell is in the c-space, adjacent to the c-space, or diagonal to the previous cell. Higher costs are assigned the closer a cell is to the c-space, to keep the robot from traveling in the c-space and to plan paths that stay farther away from it. The heuristic of the A* implementation uses the Euclidean distance to the point and is influenced by whether the robot would need to turn, which encourages paths with fewer turns. If no path can be planned to the point, the algorithm returns None.
When the algorithm returns None, the system classifies the frontier as unreachable and retrieves a new one from the queue.
This process repeats until no valid frontiers remain, at which point the program classifies the mapping process as complete.
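
A condensed sketch of this selection loop follows, assuming a_star(start, goal) is the planner described above, returning None for unreachable goals; all names are illustrative.

    import heapq
    import math

    def select_frontier(frontiers, robot_xy, a_star):
        # Closer frontiers sort first: smaller distance = higher priority.
        queue = [(math.hypot(f[0] - robot_xy[0], f[1] - robot_xy[1]), f)
                 for f in frontiers]
        heapq.heapify(queue)
        while queue:
            _, frontier = heapq.heappop(queue)
            path = a_star(robot_xy, frontier)
            if path is not None:
                return frontier, path
            # Unreachable frontier: discard it and try the next-closest one.
        return None, None  # no valid frontiers remain: mapping is complete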

2.1.5 Saving the Map
At the end of Phase 1, the map created by gmapping needs to be saved. This was accomplished by using the os Python package to make the following system call: os.system("rosrun map_server map_saver"). This calls the map_saver node from the operator computer and saves the map as a map.yaml and a map.pgm file.
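
The same call issued from Python might look like the sketch below; the return-code check is an added precaution motivated by the failure mode discussed in Section 4.1.4, not part of the original node.

    import os

    exit_code = os.system("rosrun map_server map_saver")
    if exit_code != 0:
        # The call can fail if a map is already saved; the workaround was to
        # run the same command manually in a terminal.
        print("map_saver failed; run 'rosrun map_server map_saver' manually")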

2.2 Phase 2
The goal of Phase 2 was to return to the robot’s starting pose after completing the mapping in Phase 1. At the start of Phase 1, the robot’s current pose was saved to start_position and start_orientation variables. In Phase 2, this saved start pose was set as the goal pose, and the robot’s current pose became the new start pose. The path calculated by A* from the new start and goal poses was requested from the path_planner node and returned as a list of waypoints by the waypoints service created in that node.
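
The report does not name the waypoints service's type, so the sketch below uses the standard nav_msgs/GetPlan as a plausible stand-in; the service name plan_path and the tolerance value are likewise illustrative.

    import rospy
    from nav_msgs.srv import GetPlan

    def plan_return_home(current_pose, start_pose):
        # The pose saved at the start of Phase 1 now becomes the goal.
        rospy.wait_for_service('plan_path')
        plan = rospy.ServiceProxy('plan_path', GetPlan)
        response = plan(start=current_pose, goal=start_pose, tolerance=0.1)
        return response.plan.poses  # A* waypoints from the path_planner node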

2.3 Phase 3
In Phase 3, the robot needed to localize after being dropped into an unknown location in the map. After
localizing, the robot needed to calculate a path to a given goal location and travel along that path until
reaching the goal.

2.3.1 Setup
The following nodes were used to complete Phase 3: phase3, path_planner, nav_interface, amcl, turtlebot3_core, turtlebot3_lds, and map_server. phase3 handled the logic to localize the robot in the map and navigate to the given destination. path_planner contained the methods used to run the A* algorithm and generate an optimal set of waypoints to bring the robot to the desired destination. nav_interface contained the methods used to actuate the robot’s motors and bring it to its desired goal. amcl performed Monte Carlo Localization for the robot based on the lidar readings and the map. turtlebot3_core was a provided node that controlled low-level aspects of the robot. turtlebot3_lds handled the robot’s lidar scans. map_server published the map for use by amcl.
In addition to these nodes, Rviz was used to visualize important information about the robot and
the map. Just as in Phase 1, the robot was overlaid onto the map. In addition, the amcl pose, given
as a PoseWithCovariance, was shown as a range of possible robot positions and orientations. The lidar
readings were also shown, as well as the c-space and the path from the robot’s localized position to its
goal.

2.3.2 Localization
AMCL was used to localize the robot in the map. This package, which was provided, uses a particle
filter to perform Monte Carlo Localization on a given map. AMCL starts by assuming the robot to be
at the position (0, 0, 0) with a given default covariance. Then, as the robot moves, AMCL updates its
understanding of the robot’s pose using the latest lidar data.
To minimize covariance quickly while ensuring that a correct pose estimate was produced, AMCL’s
parameters were tuned with great care. In the end, the minimum number of particles was set at 10000,
the maximum number of particles was set at 20000, the minimum linear distance required for an update
was set at 2 cm, and the minimum angular distance required for an update was set at 0.1 rad.
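
For reference, these values correspond to amcl's min_particles, max_particles, update_min_d, and update_min_a parameters. Setting them from Python, as sketched below, is purely illustrative; in practice they would live in the launch file.

    import rospy

    rospy.set_param('/amcl/min_particles', 10000)
    rospy.set_param('/amcl/max_particles', 20000)
    rospy.set_param('/amcl/update_min_d', 0.02)  # 2 cm translation per update
    rospy.set_param('/amcl/update_min_a', 0.1)   # 0.1 rad rotation per update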
In addition to tuning AMCL’s parameters, an efficient algorithm was necessary to ensure rapid,
precise localization without driving into obstacles. After much experimentation, a spin-turn-drive process
was used. In this algorithm, the robot first spins in place. The repeated localization updates caused
by the robot’s spinning serve to quickly reduce covariance. After completing one full rotation, the
robot turns to and drives in the direction with the fewest obstacles. This direction was determined by
calculating the weighted average of the 180 lidar readings (1 for each degree of rotation) on the front
half of the robot. Each distance reading was multiplied by its index in the reading array (with index
0 being to the robot’s left, index 90 being directly forward, and index 179 being to the robot’s right);
these products were summed, and the final sum was divided by 180. The result was rounded to the nearest integer, and this integer was the degree to which the robot would turn. Once the robot was facing the
direction of fewest obstacles, it drove one-sixth of the distance to the nearest obstacle. After completing
this motion, the robot repeated the spin-turn-drive process until the pseudocovariance was less than
0.01. Pseudocovariance was defined as:
Pseudocovariance = \sqrt{(\sigma(x))^2 + (\sigma(y))^2}    (1)
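
Equation (1) can be computed directly from AMCL's pose estimate; the covariance field of a geometry_msgs/PoseWithCovariance is a row-major 6x6 matrix, so indices 0 and 7 hold the x and y variances. A sketch:

    import math

    def pseudocovariance(pose_with_cov):
        # Indices 0 and 7 of the 6x6 row-major covariance matrix are the x
        # and y variances, i.e. (sigma(x))^2 and (sigma(y))^2 in Equation (1).
        cov = pose_with_cov.covariance
        return math.sqrt(cov[0] + cov[7])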

Once the pseudocovariance dropped below 0.01, the robot was assumed to have a correct under-
standing of its position. Navigation to the desired destination then began.
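
A sketch of the turn-direction step in the spin-turn-drive routine is shown below, assuming scan holds the 180 front-half lidar ranges (index 0 on the left, 90 straight ahead, 179 on the right). One swapped-in change: the sketch divides by the sum of the readings rather than by 180, which makes the result a conventional weighted mean guaranteed to land in [0, 179].

    def direction_of_fewest_obstacles(scan):
        # Weight each bearing index by its measured distance; longer (more
        # open) readings pull the mean toward their direction.
        weighted_sum = sum(d * i for i, d in enumerate(scan))
        total = sum(scan)
        return round(weighted_sum / total) if total > 0 else 90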

2.3.3 Navigation
The motions in Phase 3 were all triggered by a 2D Nav Goal input in Rviz. Once the evil professor
provided the 2D Nav Goal, the robot began localization. Immediately after localization completed, A*
was used to generate an optimal path from the AMCL-generated pose to the destination. The navigation
interface was then used to travel to each waypoint in the path in succession.
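
Rviz publishes 2D Nav Goal inputs on the /move_base_simple/goal topic as a geometry_msgs/PoseStamped, so the trigger can be sketched as below; the three placeholder functions are hypothetical stand-ins for the localization and navigation code described above.

    import rospy
    from geometry_msgs.msg import PoseStamped

    def localize_with_spin_turn_drive():
        """Placeholder: spin-turn-drive until pseudocovariance < 0.01."""

    def plan_path_from_amcl_pose(goal):
        """Placeholder: A* from the AMCL pose to the goal."""
        return []

    def drive_to(waypoint):
        """Placeholder: nav_interface motion to one waypoint."""

    def on_nav_goal(goal):
        localize_with_spin_turn_drive()
        for waypoint in plan_path_from_amcl_pose(goal):
            drive_to(waypoint)

    rospy.init_node('phase3')
    rospy.Subscriber('/move_base_simple/goal', PoseStamped, on_nav_goal)
    rospy.spin()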

3 Results
3.1 Phase 1
3.1.1 C-Space Creation
During the demo, the robot had no issues forming the c-space and instantaneously returned a calculated
c-space whenever pinged. The team felt extremely confident with the performance of the c-space creation.

3.1.2 Finding Frontiers


The robot successfully found all viable frontiers while navigating the map. During the final demo, the robot consistently and quickly computed a list of possible frontiers.

3.1.3 Selecting Frontiers


During the final demo, the robot consistently chose the optimal viable path and never attempted to reach an unreachable frontier or discard a reachable one. This portion of the project worked without issue.

3.1.4 Saving the Map


During the final demo, the robot was able to successfully complete Phase 1 without colliding with any obstacles. Phase 1 was very successful; the map that was produced can be found in Figure 1.

Figure 1: Map saved in Phase 1.

3.2 Phase 2
The robot successfully made it back to the starting pose in our final demo. However, there were runs
where the robot hit an obstacle, typically while attempting to turn a corner.

3.3 Phase 3
3.3.1 Localization
The robot successfully localized after two iterations of the spin-turn-drive algorithm. The robot did not collide with any obstacles during the localization process. Visual displays in Rviz showed the robot in
the correct spot on the map, with the laser scan matching the location of the map’s obstacles.

3.3.2 Navigation
The path planner generated the correct path from the robot’s position after localization to the goal
position entered by the evil professor. The robot navigated successfully along this path. However,
it brushed two obstacles along the way, with one collision being only a glancing blow with a wire
overhanging the robot chassis. The robot finished in the desired position with only a negligible error.

4 Discussion
4.1 Phase 1
4.1.1 C-Space Creation
Originally, the c-space was calculated by iterating over each cell and manipulating the values to expand the obstacles. In an attempt to optimize the system, OpenCV was implemented so that the calculation would be more efficient. This worked better than the group could have hoped, producing a nearly instantaneous c-space calculation that was not limited by the number of cells in the map.

4.1.2 Finding Frontiers


The frontier calculation implemented in this lab is not perfect and can fail when a frontier is circular, because the system places the frontier's center in unknown space, which cannot be navigated to. Fortunately, this scenario only occurred a handful of times throughout all of the testing that was run. The issue, however, could be exacerbated if the system ran for a long period of time or in maps where many circular obstacles exist.
For the challenge that was assigned, this frontier detection worked exceedingly well and reliably chose a close frontier. Since the calculation uses OpenCV, it runs extremely quickly with very little waiting time. This speed allowed the group to recalculate frontiers after each movement, keeping the robot constantly moving along an optimal path. The other benefit of this design was the flexibility it allowed: the OpenCV library exposes a wide variety of parameters, allowing the group to tweak the edge detection to whatever worked best for the robot. This variability, available without changing many lines of code, was invaluable in testing.

4.1.3 Selecting Frontiers


Invalidating frontiers was a large challenge the group had to overcome in the synthesis of this project. The A* algorithm was relatively simple to implement for a point that is reachable, but unreachable points are considerably more complex: A* only determines that a point is unreachable when its priority queue becomes empty and no path has been found. On large maps, this made A* computationally expensive for unreachable points. To compensate, the planner was allowed to expand across all cells, and if the resulting path passed through an obstacle, the frontier was thrown out. This method makes a lot of sense on paper, but in practice the group ran into an early error where frontiers with an ordinary driving cost were thrown out and treated as unreachable. The group realized that these cost values must be tuned for a specific map to properly discard frontiers, making the code well suited to our scenario but less helpful for maps of different sizes. Fortunately, by the time of our demo, we had a fully tuned frontier selection that did not fail.

4.1.4 Saving the Map
Saving the map in the lab4_controller node reduced the time between completing Phases 1 and 2 and starting Phase 3. However, if a map was already saved, the system call in the node would fail, and the command had to be run manually in the terminal. This was most likely due to the map_saver node lacking permission to overwrite the saved map. In the final demo, the map saved successfully from lab4_controller.

4.2 Phase 2
The robot's occasional collisions with obstacles in Phase 2 were likely due to drift and to A* calculating paths too close to the c-space. The robot had noticeable drift in its heading after long stretches of driving straight. The most common collision was clipping the corner of an obstacle when turning around it. Heading drift could have caused the robot to turn toward the obstacle before it actually reached the point where it was instructed to turn, and the error may have been large enough to cause a collision. Also, the path determined by A* originally ran right along the c-space border, so, before the A* cost was adjusted to include proximity to the c-space, the robot collided more frequently. Another possible cause of collisions could have been a bad sensor reading causing obstacles to appear where they should not. The odom data was not very accurate, so it was replaced with the transform between the map and base_footprint frames to improve the estimate of the robot's location. That reduced the collisions caused by the robot believing it was in a different location than it really was, and therefore believing its path was clear when in reality it was obstructed.

4.3 Phase 3
4.3.1 Localization
The robot’s localization was quick and effective. The spin-turn-drive procedure was shown to be effective
not only on the final map, but on several practice maps in both simulation and reality. While the
procedure would benefit from tuning to account for rare edge cases, it is in general a robust, effective
method for localization using AMCL. AMCL itself was also shown to be a mostly robust solution.
While AMCL would often struggle to differentiate between similarly-shaped portions of the map, its
understanding improved when the number of particles increased and when it was given more data. The
methods used to localize the robot in this laboratory constitute a robust system that can be used to
determine a robot’s location in a given map.

4.3.2 Navigation
The robot was always able to navigate to its goal provided that it successfully localized. However, the robot usually collided with at least one obstacle on its way to the goal. This repeated error can be attributed to a combination of wheel slippage and AMCL inaccuracies. To account for this error, the path to the goal could have been recalculated at each waypoint along the route to ensure that the most recent positioning data was used. In addition, a stricter localization procedure, requiring a pseudocovariance threshold below 0.01, could have been used.

5 Conclusion
In conclusion, this laboratory demonstrates an understanding of mapping the unknown, robot navigation with imperfect sensor data, and robot localization starting from an unknown location. The mapping phase constructed a map of obstacles and free space by navigating the robot to new frontiers until the whole map had been explored. Various methods were implemented to prevent and minimize collisions during navigation, including using the transform between the map and base_footprint frames instead of the odom frame, adding c-space padding to obstacles, and weighting the cost of driving through or near the c-space more heavily. This was demonstrated by the success of Phases 1-3.


6 Appendix I: Gist Link
https://gist.github.com/chrisdemaio/8e7b8b76fe063fea0f25bfb0c3b4698a
