
Proceedings of the 11th IEEE International Conference on CYBER Technology in Automation, Control, and Intelligent Systems (CYBER)
July 27-31, 2021, Jiaxing, China
978-1-6654-2527-8/21/$31.00 ©2021 IEEE | DOI: 10.1109/CYBER53097.2021.9588225

A Robot Obstacle Avoidance Approach with LiDAR and RGB Camera Data Combined

Zesen Liu, Chuanhong Guo, Sheng Bi*, Kezheng Yu, Guojun Peng, Yuyang Yue

Abstract— A robot obstacle avoidance approach is proposed in this paper, which combines 2D LiDAR data with images from an RGB camera attached to the mobile robot. With the accurate distance-to-obstacle information provided by the LiDAR, conventional obstacle avoidance methods perform well when navigating in simple environments, meaning the ground is level and no low-reflectivity object blocks the robot's way. However, lacking global awareness of its surroundings, the robot may not perform well in complex environments. In our approach, the robot uses the camera as a second data source to gather more information about its surroundings. Based on the end-to-end deep learning algorithm proposed in this paper, images from the camera are classified into two categories, indicating the probability of obstacles ahead. A method integrating this probability into the navigation process is also proposed, making the robot more reliable and less likely to run into an obstacle while navigating in a complex environment. Finally, the Timed Elastic Band (TEB) local path planning method is used to produce the local path and generate velocity commands for the robot. An experiment is conducted to verify the overall performance by deploying the approach to an embedded system on a mobile robot. Evaluation of the experimental results shows the method to be effective and reliable for obstacle avoidance.

I. INTRODUCTION

A. Background

Autonomous obstacle avoidance is a critical part of robot intelligence. Traditional algorithms control the robot using real-time data from LiDAR, sonar, and other sensors. The global map is built before navigation by SLAM or vSLAM[1] algorithms to guide the robot while calculating the global path, and sensor data is fed into the local planner, which computes the kinematic trajectory and the current velocity commands of the robot. Global and local path planners used in conventional navigation approaches rely on the costmap[2] to find the best trajectory. When a navigation target is specified, the global planner searches for the best global path with the A*[3] and Dijkstra[4] algorithms. Based on the global path, local planners such as the Dynamic Window Approach (DWA)[5] and Timed Elastic Band (TEB)[6] produce the local trajectory within a fixed-size region around the robot using the costmap.

Local path planning algorithms perform well when deployed on a robot navigating in simple environments. However, in complicated environments, robots may have difficulty navigating with the same algorithms, as shown in the experiment section of this paper. Some obstacles cannot be seen by LiDAR, or can only be found when the robot gets very near to them. In these scenarios, the local planner may produce a wrong trajectory, causing the robot to hit or oscillate around the obstacle.

Benefiting from the development of artificial intelligence, obstacle avoidance approaches based on end-to-end deep learning have been proposed. An end-to-end obstacle avoidance model typically classifies an image into two or three categories, and the classification result guides the robot to avoid the obstacle ahead and continue moving. [7] proposed an end-to-end obstacle avoidance approach using images from two cameras mounted at the front of the robot. The rotation command is derived from the probabilities returned by the network, and the researchers achieved good performance in wild scenes. Their experimental results prove that end-to-end deep learning is effective for avoiding obstacles in complex environments. [8] proposed a three-category CNN with three output nodes on the last fully connected layer and obtained good results in indoor scenes.

Besides image classification methods, some researchers have proposed methods that predict the depth of objects ahead of the robot with an RGB camera. Based on the predicted depth, the robot can rotate to avoid running into obstacles. [9] adapted reinforcement learning to predict depth data. The researchers deployed it on an RC car and found that it works well when moving on grass. However, without awareness of the goal and precise distance-to-obstacle information, end-to-end deep learning methods can hardly be applied to real situations. These approaches do not consider the navigation target, and the robot turns only when there are obstacles ahead. Moreover, approaches using image classification models either rotate the robot at a fixed angular velocity when it meets an obstacle, as [8] does, or generate the angular velocity from the probabilities of "left-blocked" or "right-blocked", as [7] does. With such inaccurate velocity values, the produced trajectory is less smooth than that of a conventional local path planning method. In terms of obstacle avoidance performance, however, it is better than conventional methods in complex scenes such as the wild, since a 2D LiDAR solution cannot easily distinguish free space from obstacle areas on uneven ground, as the experimental results show. Combining these two local path planning approaches can enhance the robot's flexibility without losing precision while moving towards the target, which

*Corresponding author is Sheng Bi. Email: picy@scut.edu.cn.



would eventually improve its performance of obstacle avoidance.

B. Analysis of the Demand for Combining

Since the conventional obstacle avoidance method is too mechanical but has better accuracy, while the end-to-end method adapts well to the environment but is not accurate enough, the two methods are combined in the approach proposed in this paper to achieve better obstacle avoidance performance and find the best trajectory.

The cognitive part of the obstacle avoidance approach is based on a deep learning algorithm, which takes images captured by the camera as input and outputs the probability of obstacles ahead. With this cognitive data, obstacles that are hardly visible to LiDAR can be detected in the visual image. The cognitive part can improve obstacle avoidance performance in cases where the LiDAR data is unreliable due to factors such as low reflectivity, or low obstacle height when the LiDAR module is not close to the ground.

Our research found that conventional single-line LiDAR solutions cannot effectively identify uneven ground and keep the robot from running into it. They perform poorly at recognizing obstacles that are difficult for wheeled robots to pass, such as rolled-edge carpets, tangled threads, and black table legs with low reflectivity. There have been solutions using infrared cliff sensors to prevent the robot from getting stuck on a low-level obstacle[10]. However, that method needs physical contact with the obstacle to measure its height and determine whether the robot can pass. To address these issues, we found that a vision-based end-to-end deep learning approach has a higher success rate at recognizing these kinds of obstacles than the LiDAR solution, though it may be relatively mediocre in ideal scenarios (i.e., without the various special conditions described above). Therefore, the LiDAR and camera data combination method takes advantage of both obstacle avoidance methods to produce a reliable and accurate local path.

C. Contributions

In this paper, we propose an obstacle avoidance approach that involves the camera image in producing the local trajectory. The main contributions are as follows: (1) Designed and implemented the overall architecture of the robot obstacle avoidance approach combining LiDAR and RGB camera data. (2) Designed a deep learning network model based on MobileNetV2 for obstacle detection and deployed it to a mobile robot running on an embedded system. (3) Designed a costmap-level fusion strategy for LiDAR and camera data, enabling the robot to detect obstacles that were previously undetectable and understand its surroundings better. (4) Conducted a test on a real robot running the Robot Operating System (ROS)[11] and obtained good results.

II. OBSTACLE AVOIDANCE METHODS

A. End-to-end Based Obstacle Avoidance Approach

MobileNetV2 is a neural network framework specifically tailored for mobile platforms with limited computing resources[12]. It brings computer vision models to embedded systems without sacrificing too much accuracy, thanks to a reduced number of operations and lower memory usage. The main contributions of MobileNetV2 lie in the Inverted Residual and Linear Bottleneck structures. Low-dimensional data is taken as the input of the Inverted Residual block[13] and expanded to a higher dimension before being reduced to a lower dimension again. Moreover, the Linear Bottleneck structure acts as a replacement for the ReLU function to prevent possible loss of features. The overall structure of MobileNetV2 is shown in Fig. 1.

Figure 1. The architecture of MobileNetV2

B. TEB – A Conventional Obstacle Avoidance Approach

Timed Elastic Band (TEB) is an improved approach compared to the conventional Elastic Band (EB) method[14]. It produces an optimal local trajectory based on the global path provided by the global planner and takes various constraints into account. By converting the path optimization problem into a multi-objective function optimization problem, the TEB method adopts the idea of graph optimization[15]. Several

[Figure 2 diagram: the map server and global costmap feed the global planner, whose global path goes to the TEB local planner; RGB camera images are classified by MobileNetV2 as "obstacle ahead" or "free space" and update the cognitive layer, while the LiDAR point cloud updates the obstacle layer; both layers form the local costmap, from which the TEB local planner outputs the optimal local trajectory.]

Figure 2. Structure of the proposed obstacle avoidance approach

restrictions such as the path length, the time required for path execution, and the distance to obstacles are considered in the algorithm to produce the optimal local trajectory.

Each configuration defining the robot's pose in the classic Elastic Band comes with three parameters $x_i, y_i, \theta_i$. A sequence defined as $Q$ contains all configurations used to represent a local path. Based on the classic method, TEB incorporates a time interval $\Delta T_i$ for every pair of neighboring configurations, which represents the time needed to transition between them. A sequence defined as $\tau$ contains $n-1$ intervals, for there are $n$ poses in the pose sequence. A trajectory $B$ can then be defined with $Q$ and $\tau$ in the form:

$$B = (Q, \tau) \qquad (1)$$

Considering the constraints, we can define $f(B)$, shown in (2), as the global objective function, which is the sum of all objective functions $f_k(B)$ multiplied by their weights $\gamma_k$. The goal of the TEB planner is to find the best trajectory $B^*$ with minimum $f(B)$, as (3) shows:

$$f(B) = \sum_k \gamma_k f_k(B) \qquad (2)$$

$$B^* = \operatorname*{argmin}_B f(B) \qquad (3)$$

TEB's graph optimization strategy is based on the hypergraph[16]. In the initialization step, the algorithm first obtains the global path and generates the sequences $Q$ and $\Delta T$. Points in the sequences become nodes, while the various constraints mentioned above become edges in the hypergraph. Each edge in the hypergraph is connected to the nodes associated with its constraint. A configurable weight on each of these edges has a different effect on the overall cost function in the subsequent optimization.

III. THE PROPOSED APPROACH

A. Adoption of the MobileNetV2 Model

In this paper, we adapt the lightweight MobileNetV2 model described above to a new costmap layer called the "cognitive layer". By changing the number of output nodes of the fully connected layer from 1280 to 2, we classify the input image into two categories, labeled "blocked" and "free space". For these two categories, 60 pictures each were collected in our test, and we fine-tuned the model starting from the ImageNet[17] pre-trained model. During the robot's obstacle avoidance process, the network model obtains a 640×480 RGB image from the camera placed at the front of the robot and resizes it to 224×224 to fit our model. We then deployed the network to the embedded system using a high-performance neural network inference framework called NCNN. The probability of obstacles ahead output by the network is taken into account in the update process of the cognitive layer.

B. Adoption of the TEB Local Planner

This obstacle avoidance method uses the TEB local path planning package on the ROS platform as the implementation of the TEB algorithm. Our approach uses the costmap converter to reduce the computing power needed on embedded systems. With the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm[18], discrete points on the costmap created by LiDAR are converted to multiple polygons to reduce the computation load and boost the speed of path planning. The penalty weight for going backward was set to a relatively high value, which helps avoid collisions with obstacles behind the robot, considering that the LiDAR module on our robot has a limited field of view of about 240 degrees.

C. Strategy of Combination

In the ROS platform, the local costmap used for navigation normally consists of the obstacle layer and the inflation layer. On the obstacle layer, LiDAR provides precise information about the distance to obstacles around the robot by flagging the corresponding grids as occupied. The inflation layer at the top of the costmap adds new values in the costmap around obstacles, considering the robot's radius. Based on the local costmap, the local path planner takes the grid values around the robot into account and plans the optimal local trajectory following the direction of the global path.
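As a concrete illustration of the weighted multi-objective selection in (2) and (3), the sketch below scores candidate trajectories B = (Q, τ) by a weighted sum of objective terms and picks the minimizer. The two objective terms, the weight values, and the toy trajectories are illustrative assumptions, not the actual TEB implementation.

```python
# Toy sketch of the TEB objective in (2)-(3): pick the candidate
# trajectory B* minimizing a weighted sum of objective terms.
# Terms, weights, and trajectories are illustrative stand-ins.

def path_length(traj):
    # Sum of Euclidean distances between consecutive (x, y, theta) poses.
    q = traj["Q"]
    return sum(((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
               for (x1, y1, _), (x2, y2, _) in zip(q, q[1:]))

def execution_time(traj):
    # Total transition time: the sum of the n-1 intervals in tau.
    return sum(traj["tau"])

def objective(traj, weights):
    # f(B) = sum_k gamma_k * f_k(B)
    terms = {"length": path_length, "time": execution_time}
    return sum(weights[k] * f(traj) for k, f in terms.items())

def best_trajectory(candidates, weights):
    # B* = argmin_B f(B)
    return min(candidates, key=lambda b: objective(b, weights))

straight = {"Q": [(0, 0, 0), (1, 0, 0), (2, 0, 0)], "tau": [1.0, 1.0]}
detour = {"Q": [(0, 0, 0), (1, 1, 0), (2, 0, 0)], "tau": [1.5, 1.5]}
weights = {"length": 1.0, "time": 0.5}
assert best_trajectory([straight, detour], weights) is straight
```

In the real planner the terms $f_k$ also encode velocity, acceleration, and obstacle-distance constraints, and the minimization runs as sparse graph optimization rather than an enumeration of candidates.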

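The cognitive-layer update described in Section III-C can be sketched as follows. The grid size and the choice of which cells count as "ahead" of the robot are illustrative assumptions; the 0.7 threshold is the detection value reported in the experiments.

```python
# Minimal sketch of the cognitive costmap layer: when the network's
# "obstacle ahead" probability exceeds a threshold, the cells in front
# of the robot are flagged occupied; otherwise they are proactively
# cleared. Grid size and the "ahead" region are illustrative choices.

FREE, OCCUPIED = 0, 100          # ROS costmap-style cell values
THRESHOLD = 0.7                  # detection threshold from the experiments

def update_cognitive_layer(grid, ahead_cells, p_obstacle):
    """Set or clear the cells ahead of the robot based on probability."""
    value = OCCUPIED if p_obstacle >= THRESHOLD else FREE
    for r, c in ahead_cells:
        grid[r][c] = value
    return grid

grid = [[FREE] * 5 for _ in range(5)]
ahead = [(2, 3), (2, 4)]          # hypothetical cells in front of the robot

update_cognitive_layer(grid, ahead, 0.9)
assert grid[2][3] == OCCUPIED

update_cognitive_layer(grid, ahead, 0.1)
assert grid[2][4] == FREE
```

Because the local planner only reads the merged costmap, flagging these cells is enough for it to route around a visually detected obstacle without any change to the planner itself.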
The dynamic obstacle avoidance approach considering LiDAR and camera data mainly focuses on modifying the costmap layers. The overall structure of the navigation system proposed in this paper is shown in Fig. 2. The detailed combination strategies are as follows.

The probability of an obstacle ahead output by the cognitive node is used to update the occupancy grid of the cognitive layer. The robot switches state during the obstacle avoidance process according to the change of probability, so that it can slow down early when moving towards an obstacle. The states are shown in Fig. 3.

Figure 3. The states of the robot while navigating

The robot stays in the initial state while the probability of an obstacle ahead is low. Navigating in this state, the robot proactively clears the occupancy values of the grids ahead of it in the cognitive layer. Since the robot moves towards the target following the global map and considering obstacles detected by the LiDAR sensor, a rising probability value suggests an approaching obstacle of the kind that is difficult for the LiDAR to scan. When a rise in the probability value is detected, the robot shifts to the inhibited state and slows down. If the probability value continues to rise and exceeds the threshold, the corresponding grids in front of the robot are set as occupied. Through these modified grids, the local path planner receives the information about visually detected obstacles. Thus, the robot can eventually find an alternative path around the obstacle.

In the inhibited state, when the probability value drops and stays at a low value, the recovery state is entered. This state corresponds to the acceleration procedure as the robot moves away from obstacles. The robot's speed is increased in this state but remains lower than in the initial state. This strategy aims to prevent the robot from ignoring small obstacles that are easily overlooked, such as thin strings in its way. During the two seconds of the recovery state, if the probability value stays low, the robot re-enters the initial state and continues navigating towards the target.

Bionic ideas are applied to this obstacle avoidance scheme. Humans tend to walk at a higher pace towards a specified goal in an open space. When a person finds the road ahead is no longer so smooth, he becomes more careful and slows down. Limited by the computing power of embedded systems, if the robot moves fast all the way until the probability value exceeds the threshold, it may end up very close to the obstacle and need a sudden stop to prevent a collision. This strategy slows the robot down in advance, while still allowing it to move fast in open space. By applying this strategy, the robot achieves both reliability and efficiency while navigating.

IV. EXPERIMENTS AND RESULTS

A. Brief Description

The obstacle avoidance approach described in this paper is tested on the Szarbase platform from the Shenzhen Academy of Robotics. The platform is equipped with a six-core RockChip RK3399 CPU and 4 GB of LPDDR4 memory. A 2D LiDAR, model LS01B, is mounted at the front of the robot. We collected 60 images for each category and fine-tuned the pre-trained model using PyTorch[19]. After that, we converted the model to an NCNN model, which was then transferred to the mobile robot. The robot was placed on one side of the experimental site and assigned a target, and we observed its behavior while navigating. Fig. 4 shows the mobile robot we used for the test and a vertical view of the experimental site.

Figure 4. The robot and the experimental site

B. Data Collecting and Model Training

In our experiment, two categories of data were collected on the experimental site. We drove the robot around the field and saved the images captured by the camera. Then we manually placed some obstacles, such as network cables and colored mats, on the ground, and collected the images fed by the camera while the robot moved towards them. Using the Adam optimizer[20] with a learning rate of 1e-4, we fine-tuned the pre-trained model for 30 epochs and achieved accuracy above 90 percent. The model was uploaded to the mobile robot after training. On the RK3399 CPU, we got about 13 fps running inference on two Cortex-A73 cores, which means the processing time for a single frame is around 0.077 seconds. When the robot is running at about 0.4 m/s, an additional 0.03 meters is needed for the robot to come to a complete stop, which is tolerable in our case.
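The timing figures above can be checked with a quick calculation. The ~13 fps inference rate and 0.4 m/s speed come from our experiment; treating one inference period as the reaction time is a simplification.

```python
# Back-of-envelope check of the reaction distance: at ~13 fps one
# inference takes ~0.077 s, so a robot moving at 0.4 m/s travels
# about 0.03 m before the next prediction arrives.
fps = 13
frame_time = 1 / fps            # ~0.077 s per frame
speed = 0.4                     # m/s, the robot's fast cruise speed
distance = speed * frame_time   # extra distance covered per frame

assert round(frame_time, 3) == 0.077
assert round(distance, 2) == 0.03
```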
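The three-state scheme of Fig. 3, together with the speed limits of Table I, can be sketched as a small state machine. The transition conditions are simplified from the description above: the "rise" and "low" probability levels here are illustrative thresholds, and the recovery state returns to the initial state after two seconds of continuously low probability.

```python
# Sketch of the initial/inhibited/recovery state machine from Fig. 3,
# using the speed limits of Table I. Rise/drop detection is simplified
# to plain threshold comparisons for illustration.

SPEED_LIMITS = {                 # (linear m/s, angular rad/s), Table I
    "initial":   (0.3, 0.5),
    "inhibited": (0.05, 0.3),
    "recovery":  (0.2, 0.4),
}
RISE = 0.4       # assumed probability level treated as "rising"
LOW = 0.2        # assumed probability level treated as "low"
HOLD = 2.0       # seconds the probability must stay low (from the paper)

class AvoidanceStateMachine:
    def __init__(self):
        self.state = "initial"
        self.recovery_start = None

    def step(self, p_obstacle, now):
        """Advance the state given the obstacle probability at time `now`."""
        if self.state == "initial" and p_obstacle >= RISE:
            self.state = "inhibited"
        elif self.state == "inhibited" and p_obstacle <= LOW:
            self.state, self.recovery_start = "recovery", now
        elif self.state == "recovery":
            if p_obstacle > LOW:                 # obstacle again: inhibit
                self.state, self.recovery_start = "inhibited", None
            elif now - self.recovery_start >= HOLD:
                self.state, self.recovery_start = "initial", None
        return SPEED_LIMITS[self.state]

sm = AvoidanceStateMachine()
assert sm.step(0.1, 0.0) == (0.3, 0.5)    # open space: initial state
assert sm.step(0.5, 1.0) == (0.05, 0.3)   # probability rises: inhibited
assert sm.step(0.1, 2.0) == (0.2, 0.4)    # probability drops: recovery
assert sm.step(0.1, 4.5) == (0.3, 0.5)    # low for >2 s: back to initial
```

On the robot, `p_obstacle` would be the cognitive node's published probability and the returned limits would be applied to the local planner's velocity bounds.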
C. Model Prediction Results

We tested the model deployed on the mobile robot by placing various objects in front of the robot at different distances. The results show that the model labels the image as "free space" with high confidence when the obstacle is far away, and the confidence of "obstacle ahead" rises as the robot moves nearer to the obstacle. When the threshold for obstacle detection is set to a value of about 0.7, the robot recognizes the obstacle at a distance of about 0.25 meters. This is enough for the local planner to plan a new trajectory around the obstacle. Examples of prediction results are shown in Fig. 5.

Figure 5. Prediction results in multiple cases

D. Navigation Performance

With the NCNN package for ROS, we integrated the prediction results into the navigation stack. The NCNN node publishes the probability output of the model, and the new cognitive layer listens to this probability and updates the local costmap to guide the TEB local planner. The static map was built ahead of time using Gmapping[21] on the same platform, and we placed some obstacles on the experimental site, as Fig. 6 shows.

Figure 6. Experimental site with obstacles marked

Obstacles within the red circles cannot be scanned by the 2D LiDAR attached to our test platform. The initial position of the mobile robot is marked by the blue circle on the right side. We manually selected a target on the other side, across the obstacles, to see how the robot behaves.

When the robot is first brought up, the local costmap contains only the obstacles found by the LiDAR, as Fig. 7(a) shows. During the navigation process, the robot found the yellow mat in front of it with the camera, and the grids ahead of the robot in the cognitive layer were flagged as occupied correspondingly. When the trajectory of moving straight forward became infeasible, the local planner chose to rotate to the left to avoid collision with the obstacle, as shown in Fig. 7(b). The yellow line represents the robot's moving path. In this navigation process, the yellow mat was found because it blocked the robot's path to the destination, while the obstacle at the top of the site remained undetected. In another experiment, the local planner produced a trajectory that would pass through the obstacle at the top, and the robot was then able to find that obstacle and avoid it by rotating in another direction, as shown in Fig. 7(c) and (d).

Figure 7. The status of the costmap during navigation

E. Comparison with Methods Having a Single Data Source Only

To find out how our combined method improves the performance of robot navigation, we compared the robot's paths when it uses different methods to detect obstacles during navigation. On the same experimental site, we compared our method with the methods described below.

When the robot uses the LiDAR data only, it cannot detect the mat placed on the ground. The local path planner produces a path across the mat, which would eventually cause a collision.

When the collision probability inferred from the camera image is the only data source, the robot's functionality is limited. While it performs well at avoiding obstacles, the loss of the current location leaves the robot unable to localize and move towards the target. Fig. 8 shows that the robot only turns when it visually finds an obstacle.

F. Improvement in the Distance to Obstacles

In this paper, we have proposed a strategy that dynamically adjusts the linear and angular speed limits to slow down the robot while it approaches visual obstacles. This strategy was implemented and tested to measure the distance between the robot and the obstacle before the robot eventually takes action to avoid it. We placed the robot at a

fixed position on the experimental site, then a specific goal was specified for navigation. The distance described above was recorded for each test. Table I shows the parameters we used. We also tested the robot's performance when it is always in the initial state (no further restrictions on speed) or always in the inhibited state (the robot moves slowly). After repeating the experiment 5 times for each setting, we summarized the data in Fig. 9 and found that using the dynamic speed limit keeps the robot safe from the obstacle without slowing it down throughout the whole navigation process.

Figure 8. The robot's trajectory when the collision probability is the only data source

Figure 9. Distance between the robot and obstacles using different strategies

TABLE I. PARAMETERS FOR LINEAR AND ANGULAR SPEED LIMITS

State     | Linear   | Angular
Initial   | 0.3 m/s  | 0.5 rad/s
Inhibited | 0.05 m/s | 0.3 rad/s
Recovery  | 0.2 m/s  | 0.4 rad/s

V. CONCLUSION

The obstacle avoidance approach proposed in this article was tested on an embedded system running ROS. In our tests, the method shows good performance in avoiding collisions with floor mats and cables. The RGB camera successfully detected the obstacles we placed on the experimental site, and the robot avoided them with ease. However, the method still does not perform satisfactorily in some scenarios. For example, in a narrow aisle, the fusion scheme may judge some paths as infeasible, which increases the length of the trajectory or makes the target unreachable. The method is also somewhat short-sighted, because the robot can only mark obstacles when it gets close to them; this requires a low-latency camera so that images of obstacles are returned in time. Apart from this, our method mainly focuses on introducing the cognitive layer into the costmap to assist the local planner. A higher level of integration with the local planner is needed for future research. We will look at how to abstract the cognitive scheme as an objective function in TEB graph optimization, which could eliminate unreachable paths while preserving reachable paths as much as possible.

ACKNOWLEDGMENT

This research work is supported by the National Training Program of Innovation and Entrepreneurship for Undergraduates (202010561079), Guangdong Province Science and Technology Plan Projects (2020A0505100015), and the Ministry of Education Cooperative Education Project "Robot innovation platform construction based on artificial intelligence technology" (201902146063).

REFERENCES

[1] N. Karlsson, E. Di Bernardo, J. Ostrowski et al., "The vSLAM algorithm for robust localization and mapping," pp. 24-29.
[2] D. V. Lu, D. Hershberger, and W. D. Smart, "Layered costmaps for context-sensitive navigation," pp. 709-715.
[3] P. E. Hart, N. J. Nilsson, and B. Raphael, "A formal basis for the heuristic determination of minimum cost paths," IEEE Transactions on Systems Science and Cybernetics, vol. 4, no. 2, pp. 100-107, 1968.
[4] E. W. Dijkstra, "A note on two problems in connexion with graphs," Numerische Mathematik, vol. 1, no. 1, pp. 269-271, 1959.
[5] D. Fox, W. Burgard, and S. Thrun, "The dynamic window approach to collision avoidance," IEEE Robotics & Automation Magazine, vol. 4, no. 1, pp. 23-33, 1997.
[6] C. Rösmann, W. Feiten, T. Wösch et al., "Efficient trajectory optimization using a sparse model," pp. 138-143.
[7] U. Muller, J. Ben, E. Cosatto et al., "Off-road obstacle avoidance through end-to-end learning," pp. 739-746.
[8] C. Liu, B. Zheng, C. Wang et al., "CNN-based vision model for obstacle avoidance of mobile robot," p. 00007.
[9] J. Michels, A. Saxena, and A. Y. Ng, "High speed obstacle avoidance using monocular vision and reinforcement learning," pp. 593-600.
[10] S. Yang, J.-H. Kim, D. Noh et al., "Cliff-sensor-based low-level obstacle detection for a wheeled robot in an indoor environment," pp. 1-6.
[11] M. Quigley, K. Conley, B. Gerkey et al., "ROS: an open-source Robot Operating System," p. 5.
[12] M. Sandler, A. Howard, M. Zhu et al., "MobileNetV2: Inverted residuals and linear bottlenecks," pp. 4510-4520.
[13] K. He, X. Zhang, S. Ren et al., "Deep residual learning for image recognition," pp. 770-778.
[14] S. Quinlan and O. Khatib, "Elastic bands: Connecting path planning and control," pp. 802-807.
[15] R. Kümmerle, G. Grisetti, H. Strasdat et al., "g2o: A general framework for graph optimization," pp. 3607-3613.
[16] A. Bretto, Hypergraph Theory: An Introduction. Mathematical Engineering. Cham: Springer, 2013.
[17] J. Deng, W. Dong, R. Socher et al., "ImageNet: A large-scale hierarchical image database," pp. 248-255.
[18] M. Ester, H.-P. Kriegel, J. Sander et al., "A density-based algorithm for discovering clusters in large spatial databases with noise," pp. 226-231.
[19] A. Paszke, S. Gross, F. Massa et al., "PyTorch: An imperative style, high-performance deep learning library," arXiv preprint arXiv:1912.01703, 2019.
[20] D. P. Kingma and J. Ba, "Adam: A method for stochastic optimization," arXiv preprint arXiv:1412.6980, 2014.
[21] G. Grisetti, C. Stachniss, and W. Burgard, "Improved techniques for grid mapping with Rao-Blackwellized particle filters," IEEE Transactions on Robotics, vol. 23, no. 1, pp. 34-46, 2007.

