Authorized licensed use limited to: University of Regina IEL. Downloaded on June 07,2023 at 20:58:00 UTC from IEEE Xplore. Restrictions apply.
[Figure 2. Overall structure of the navigation system: the master node combines a map server, global costmap, and global planner to produce the global path; the TEB local planner outputs the optimal local trajectory from a local costmap whose obstacle layer is built from LiDAR point clouds and whose cognitive layer is updated by MobileNetV2 classification of front RGB camera images as "obstacle ahead" or "free space".]
restrictions such as the path length, the time required for path execution, and the distance to obstacles are considered in the algorithm to produce the optimal local trajectory.

Each configuration defining the robot's pose in the classic Elastic Band usually comes with three parameters (x, y, θ). A sequence Q contains all configurations used to represent a local path. Extending the classic method, TEB incorporates a time interval ΔT for every pair of neighboring configurations, representing the time needed to transit between them. A sequence τ contains the n − 1 intervals for the n poses in the pose sequence. A trajectory B can then be defined with Q and τ in the form:

B = (Q, τ)   (1)

Considering these constraints, we can define f(B), shown in (2), as the global objective function: the weighted sum of all individual objective functions f_k(B) with weights γ_k. The goal of the TEB planner is to find the trajectory B* that minimizes f(B), as (3) shows:

f(B) = Σ_k γ_k · f_k(B)   (2)

B* = argmin_B f(B)   (3)

TEB's graph optimization strategy is based on the hypergraph[16]. In the initialization step, the algorithm first obtains the global path and generates the sequences Q and ΔT. Points in the sequence become nodes, while the various constraints mentioned above become edges in the hypergraph. Each edge in the hypergraph is connected to the nodes associated with its constraint. The configurable weight on each of these edges determines its effect on the overall cost function during optimization.

III. THE PROPOSED APPROACH

A. Adoption of the MobileNetV2 model

In this paper, we apply the lightweight MobileNetV2 model described above in a new costmap layer called the "cognitive layer". By changing the number of output nodes of the fully connected layer from 1280 to 2, we classify the input image into two categories, labeled "blocked" and "free space". For these two categories, 60 pictures each were collected in our test, and we fine-tune the model starting from the ImageNet[17] pre-trained weights.

During the obstacle avoidance process, the network obtains a 640×480 RGB image from the camera mounted on the front of the robot and resizes it to 224×224 to fit our model. We then deploy the network to the embedded system using NCNN, a high-performance neural network inference framework. The probability of an obstacle ahead output by the network is taken into account in the update process of the cognitive layer.

B. Adoption of the TEB Local Planner

This obstacle avoidance method uses the TEB local path planning package on the ROS platform as the implementation of the TEB algorithm. Our approach uses the costmap converter to reduce the computing power required on embedded systems. With the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm[18], discrete points on the costmap created by LiDAR are converted into a few polygons, reducing the computation load and boosting the speed of path planning. The penalty weight for going backward was set to a relatively high value to avoid collisions with obstacles behind the robot, considering that the LiDAR module on our robot has a limited field of view of about 240 degrees.

C. Strategy of Combination

On the ROS platform, the local costmap used for navigation normally consists of an obstacle layer and an inflation layer. On the obstacle layer, LiDAR provides precise information about the distance to obstacles around the robot by flagging the corresponding grids as occupied. The inflation layer on top of the costmap adds cost around obstacles according to the robot's radius. Based on the local costmap, the local path planner takes the grid values around the robot into account and plans the optimal local trajectory following the direction of the global path.
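To make the layered-costmap idea concrete, the sketch below merges an obstacle layer and a cognitive layer by taking the per-cell maximum and then inflating cost around occupied cells. This is a plain NumPy illustration, not the ROS costmap_2d implementation; the function name, the lethal cost of 100, and the decay of 30 per cell are illustrative assumptions.

```python
import numpy as np

def combine_layers(obstacle_layer, cognitive_layer, inflation_radius=2):
    """Merge two costmap layers cell-by-cell, then inflate around obstacles.

    Hypothetical sketch: cost values and names are illustrative only.
    """
    # Per-cell maximum keeps the most pessimistic estimate from either layer.
    master = np.maximum(obstacle_layer, cognitive_layer)
    inflated = master.copy()
    rows, cols = master.shape
    for r, c in np.argwhere(master >= 100):  # 100 treated as lethal cost here
        for dr in range(-inflation_radius, inflation_radius + 1):
            for dc in range(-inflation_radius, inflation_radius + 1):
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols:
                    # Cost decays with Chebyshev distance from the obstacle cell.
                    cost = 100 - 30 * max(abs(dr), abs(dc))
                    inflated[rr, cc] = max(inflated[rr, cc], cost)
    return inflated
```

In the actual ROS stack the layers are C++ plugins combined by the layered costmap, but the per-cell maximum and distance-decayed inflation follow the same principle.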
The dynamic obstacle avoidance approach considering LiDAR and camera data mainly focuses on modifying costmap layers. The overall structure of the navigation system proposed in this paper is shown in Fig. 2. The detailed strategies of the combination are as follows.

The probability of an obstacle ahead output by the cognitive node is used to update the occupancy grid of the cognitive layer. The current state of the robot switches during the obstacle avoidance process according to the change of this probability, so that the robot can slow down early when moving toward an obstacle. The states are shown in Fig. 3. This is similar to a walking person: when he finds the road in front of him is no longer so smooth, he becomes more careful and slows down. Limited by the computing power of embedded systems, if the robot moves fast all the way until the probability value exceeds the threshold, it may end up very close to the obstacle and need a sudden stop to prevent a collision. This strategy slows the robot down in advance, while still allowing it to move fast in open space. By applying it, the robot achieves both reliability and efficiency in navigation.

IV. EXPERIMENTS AND RESULTS

A. Brief Description

The obstacle avoidance approach described in this paper is tested on the Szarbase platform from the Shenzhen Academy of Robotics. The platform is equipped with a six-core RockChip RK3399 CPU and 4 GB of LPDDR4 memory. A 2D LiDAR (model LS01B) is mounted on the front of the robot.

We collected 60 images for each category and fine-tuned the pre-trained model using PyTorch[19]. After that, we converted the model to an NCNN model, which was later transferred to the mobile robot. The robot was placed on one side of the experimental site and assigned a target, and we then observed the robot's behavior while navigating. Fig. 4 shows the mobile robot we used for the test and the vertical view of the experimental site.
C. Model Prediction Results

We tested the model deployed on the mobile robot by placing various objects in front of the robot at different distances. The results show that the model labels the image as "free space" with high confidence when the obstacle is far away, while the confidence of "obstacle ahead" rises as the robot moves closer. When the threshold for obstacle detection is set to about 0.7, the robot recognizes the obstacle at a distance of about 0.25 meters, which is enough for the local planner to plan a new trajectory around the obstacle. Examples of prediction results are shown in Fig. 5.

When the robot is first brought up, the local costmap contains only obstacles found by LiDAR, as Fig. 7(a) shows. During the navigation process, the robot found the yellow mat in front of it with the camera, and the grids ahead of the robot in the cognitive layer were flagged as occupied correspondingly. When the trajectory of moving straight forward became infeasible, the local planner chose to rotate to the left to avoid collision with the obstacle, as shown in Fig. 7(b); the yellow line represents the robot's moving path. In this navigation process, the yellow mat is found because it blocks the robot's path to the destination, while the obstacle at the top of the site is still not found. In another experiment, the local planner produced a trajectory that would pass through the obstacle at the top; the robot then found that obstacle and avoided it by rotating in another direction, as shown in Fig. 7(c) and (d).

[Figure 7. The status of the costmap during navigation: panels (a)-(d).]
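The state switching described earlier can be sketched as a small state machine. The 0.7 detection threshold and the speed limits come from the text and Table I; the hysteresis threshold for leaving the inhibited state is a hypothetical value, not from the paper.

```python
from enum import Enum

class State(Enum):
    # (max linear speed m/s, max angular speed rad/s) -- values from Table I
    INITIAL = (0.3, 0.5)     # open space: no extra speed restriction
    INHIBITED = (0.05, 0.3)  # obstacle probably ahead: crawl
    RECOVERY = (0.2, 0.4)    # probability dropped again: speed back up

OBSTACLE_THRESHOLD = 0.7  # detection threshold used in the experiments
CLEAR_THRESHOLD = 0.3     # hypothetical hysteresis value, an assumption

def next_state(state: State, p_obstacle: float) -> State:
    """Switch states from the cognitive node's obstacle-ahead probability."""
    if p_obstacle >= OBSTACLE_THRESHOLD:
        return State.INHIBITED
    if p_obstacle < CLEAR_THRESHOLD:
        if state is State.INHIBITED:
            return State.RECOVERY
        if state is State.RECOVERY:
            return State.INITIAL
    return state

def speed_limits(state: State):
    """Linear and angular speed limits to hand to the local planner."""
    return state.value
```

Slowing down in the inhibited state before the probability peaks is what lets the robot avoid the sudden stops described above.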
The robot is placed at a fixed position on the experimental site, and a specific goal is then specified for navigation. The distance described above is recorded for each test. Table I shows the parameters we used. We also tested the robot's performance when it is always in the initial state (no further restrictions on speed) or always in the inhibited state (the robot moves slowly). After repeating the experiment 5 times for each setting, we summarized the data in Fig. 9 and found that using the dynamic speed limit keeps the robot safe from the obstacle without slowing it down over the whole navigation process.

[Figure 8. The robot's trajectory when the collision possibility is the only data source]

[Figure 9. Distance between the robot and obstacles using different strategies]

TABLE I. PARAMETERS FOR LINEAR AND ANGULAR SPEED LIMITS

    State        Linear speed    Angular speed
    Initial      0.3 m/s         0.5 rad/s
    Inhibited    0.05 m/s        0.3 rad/s
    Recovery     0.2 m/s         0.4 rad/s

V. CONCLUSION

The obstacle avoidance approach proposed in this article is tested on an embedded system running ROS. In our tests, the method shows good performance in avoiding collisions with floor mats and cables: the RGB camera successfully detected the obstacles we placed on the experimental site, and the robot avoided them with ease. However, the method still does not perform satisfactorily in some scenarios. For example, in a narrow aisle, the fusion scheme may judge some paths as infeasible, which increases the length of the trajectory or makes the target unreachable. The method is also somewhat short-sighted, because the robot can only mark obstacles when it gets close to them; this requires a low-latency camera that can return images of obstacles in time. Apart from this, our method mainly focuses on introducing the cognitive layer into the costmap to assist the local planner. A higher level of integration with the local planner is needed in future research. We will look at how to abstract the cognitive scheme as an objective function in TEB graph optimization, so as to eliminate unreachable paths while preserving reachable paths as much as possible.

ACKNOWLEDGMENT

This research work is supported by the National Training Program of Innovation and Entrepreneurship for Undergraduates (202010561079), Guangdong Province Science and Technology Plan Projects (2020A0505100015), and the Ministry of Education Cooperative Education Project "Robot innovation platform construction based on artificial intelligence technology" (201902146063).

REFERENCES

[1] N. Karlsson, E. Di Bernardo, J. Ostrowski et al., "The vSLAM algorithm for robust localization and mapping," pp. 24-29.
[2] D. V. Lu, D. Hershberger, and W. D. Smart, "Layered costmaps for context-sensitive navigation," pp. 709-715.
[3] P. E. Hart, N. J. Nilsson, and B. Raphael, "A formal basis for the heuristic determination of minimum cost paths," IEEE Transactions on Systems Science and Cybernetics, vol. 4, no. 2, pp. 100-107, 1968.
[4] E. W. Dijkstra, "A note on two problems in connexion with graphs," Numerische Mathematik, vol. 1, no. 1, pp. 269-271, 1959.
[5] D. Fox, W. Burgard, and S. Thrun, "The dynamic window approach to collision avoidance," IEEE Robotics & Automation Magazine, vol. 4, no. 1, pp. 23-33, 1997.
[6] C. Rösmann, W. Feiten, T. Wösch et al., "Efficient trajectory optimization using a sparse model," pp. 138-143.
[7] U. Muller, J. Ben, E. Cosatto et al., "Off-road obstacle avoidance through end-to-end learning," pp. 739-746.
[8] C. Liu, B. Zheng, C. Wang et al., "CNN-based vision model for obstacle avoidance of mobile robot," p. 00007.
[9] J. Michels, A. Saxena, and A. Y. Ng, "High speed obstacle avoidance using monocular vision and reinforcement learning," pp. 593-600.
[10] S. Yang, J.-H. Kim, D. Noh et al., "Cliff-sensor-based low-level obstacle detection for a wheeled robot in an indoor environment," pp. 1-6.
[11] M. Quigley, K. Conley, B. Gerkey et al., "ROS: an open-source Robot Operating System," p. 5.
[12] M. Sandler, A. Howard, M. Zhu et al., "MobileNetV2: Inverted residuals and linear bottlenecks," pp. 4510-4520.
[13] K. He, X. Zhang, S. Ren et al., "Deep residual learning for image recognition," pp. 770-778.
[14] S. Quinlan and O. Khatib, "Elastic bands: Connecting path planning and control," pp. 802-807.
[15] R. Kümmerle, G. Grisetti, H. Strasdat et al., "g2o: A general framework for graph optimization," pp. 3607-3613.
[16] A. Bretto, Hypergraph Theory: An Introduction, Mathematical Engineering. Cham: Springer, 2013.
[17] J. Deng, W. Dong, R. Socher et al., "ImageNet: A large-scale hierarchical image database," pp. 248-255.
[18] M. Ester, H.-P. Kriegel, J. Sander et al., "A density-based algorithm for discovering clusters in large spatial databases with noise," pp. 226-231.
[19] A. Paszke, S. Gross, F. Massa et al., "PyTorch: An imperative style, high-performance deep learning library," arXiv preprint arXiv:1912.01703, 2019.
[20] D. P. Kingma and J. Ba, "Adam: A method for stochastic optimization," arXiv preprint arXiv:1412.6980, 2014.
[21] G. Grisetti, C. Stachniss, and W. Burgard, "Improved techniques for grid mapping with Rao-Blackwellized particle filters," IEEE Transactions on Robotics, vol. 23, no. 1, pp. 34-46, 2007.