International Journal of Mining Science and Technology 31 (2021) 779–788

journal homepage: www.elsevier.com/locate/ijmst

Location estimation of autonomous driving robot and 3D tunnel mapping in underground mines using pattern matched LiDAR sequential images

Heonmoo Kim, Yosoon Choi *

Department of Energy Resources Engineering, Pukyong National University, Busan 48513, Republic of Korea

* Corresponding author. E-mail address: energy@pknu.ac.kr (Y. Choi).

ARTICLE INFO

Article history:
Received 14 October 2020
Received in revised form 20 April 2021
Accepted 25 July 2021
Available online 3 August 2021

Keywords:
Pattern matching
Location estimation
Autonomous driving robot
3D tunnel mapping
Underground mine

ABSTRACT

In this study, a machine vision-based pattern matching technique was applied to estimate the location of an autonomous driving robot and perform 3D tunnel mapping in an underground mine environment. The autonomous driving robot continuously detects the wall of the tunnel in the horizontal direction using the light detection and ranging (LiDAR) sensor and performs pattern matching by recognizing the shape of the tunnel wall. The proposed method was designed to measure the heading of the robot by fusion with the inertial measurement unit sensor according to the pattern matching accuracy; it is combined with the encoder sensor to estimate the location of the robot. In addition, while the robot is driving, the vertical direction of the underground mine is scanned through the vertical LiDAR sensor and stacked to create a 3D map of the underground mine. The performance of the proposed method was superior to that of previous studies; the mean absolute error achieved was 0.08 m for the X-Y axes. A root mean square error of 0.05 m² was achieved by comparing the tunnel section maps created by the autonomous driving robot to those of manual surveying.

© 2021 Published by Elsevier B.V. on behalf of China University of Mining & Technology. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).

1. Introduction

In recent years, autonomous driving technology has attracted great interest globally, and various automobile companies are making significant efforts to commercialize it. For example, Tesla developed Autopilot, a Level-2 autonomous driving system, and released it as a commercial product [1]. Autopilot uses devices such as cameras, radars, and ultrasonic sensors. Research is being conducted to actualize fully autonomous driving. Google's parent company, Alphabet [2], developed a Level-4 self-driving taxi, Waymo, and is currently conducting Level-5 prototype tests. Furthermore, commercial products, such as General Motors' Cruise division [3] and Ford's Argo AI [4], are being developed. Various global IT companies are also researching autonomous driving technology [5,6].

Various studies pertaining to autonomous driving technology have been conducted in the mining industry [7]. For example, autonomous driving technology was used to develop equipment that improves safety in underground mines or automatically transports minerals [8,9]. Yinka-Banjo et al. developed an autonomous driving system that detects hazards in underground mines [10]. They analyzed the possibility of detecting rockfall or toxic gases in an underground mine using a multi-autonomous driving robot. Berglund et al. studied optimal paths for autonomous mineral transport vehicles that move minerals while avoiding obstacles [11]. Chi et al. developed an autonomous driving scraper that combined a laser-based estimation system with barcode recognition technology and conducted a driving test on a simulated test site [12]. Volvo tested an autonomous transport system in a real limestone mine [13]. Studies have also been conducted to measure the environmental factors in underground mines [14–16] or explore the inside of a mine shaft [17–20] using autonomous driving robots.

Many studies have been performed to create three-dimensional (3D) maps of underground mines using light detection and ranging (LiDAR) sensors [21–24]. Some studies have employed LiDAR sensors with autonomous driving robots. For example, Baker et al. combined multiple sensors to develop an autonomous driving robot that can not only perform autonomous driving but also perform tasks such as exploring the tunnel mapping environment [25]. The robot can also be used in environments with poor terrain. Studies on mapping tunnels using autonomous driving robots [26], 3D LiDAR sensors [27], and rotating LiDAR sensors [28,29] have also been conducted.

https://doi.org/10.1016/j.ijmst.2021.07.007

Accurate location estimation technology is essential to perform 3D mapping of an underground mine using an autonomous driving robot: to survey an underground mine, the robot's 3D coordinates (X, Y, and Z) and 3D pose (roll, pitch, and yaw) must be combined with the distance data measured by the LiDAR sensor. In other industries, many studies have been conducted to accurately estimate the location of robots and vehicles [30–32]. For instance, a vehicle's location was estimated by comparing a pre-made map, created using surveying equipment such as laser scanners, with the detections produced by the LiDAR in real time as the autonomous driving robot drove [33]. Furthermore, techniques that detect feature points, such as corners or road intersections, as the autonomous driving robots drive have been used for location estimation [34,35]. However, it is difficult to apply the location estimation methods developed in other industries to underground autonomous driving robots. First, the shape of an underground mine changes frequently due to blasting during mining; therefore, it is difficult to reuse pre-made map data. In addition, it is difficult to detect characteristic points because the walls of the underground mine tunnel are rough and irregular. Furthermore, in underground mines, it is impossible to receive signals through the global positioning system (GPS). Besides, most of the tunnels are dark, which makes it impossible to use location estimation technology based on camera sensors.

In the mining industry, a few studies have been conducted to estimate the location of autonomous driving robots. Using encoders, inertial measurement unit (IMU) sensors, and rotating LiDAR sensors, Neumann et al. developed an autonomous driving robot for underground mining [28]. Kim et al. developed an autonomous driving robot for underground mines using sensors such as IMUs, LiDARs, and encoders and conducted an accuracy comparison experiment [36]. However, in these existing studies, error accumulates because only IMU and encoder sensors were used for location estimation. Ghosh et al. performed 3D tunnel mapping in an underground mine by combining an IMU, an encoder sensor, and a rotating 2D LiDAR sensor [29]. However, the robot had to stop and re-start after each scan (stop-and-go method); therefore, the exploration time was overly long. Thus, in previous studies in which 3D tunnel mapping was performed using autonomous driving robots in underground mines, position correction was achieved using only part of the point data when estimating the robot's position by scanning the wall of the underground mine with a LiDAR sensor. The detailed shape of the uneven tunnel wall was therefore not reflected, and the performance of the location estimation was degraded.

In this study, a machine vision-based pattern matching technique was applied to an autonomous driving robot for underground mining to improve the location estimation accuracy and perform 3D mapping of the underground mine shaft. Pattern matching was performed by continuously recognizing the shape of the walls of the tunnel using horizontal LiDAR. The results from pattern matching were combined with encoder sensor data to estimate the robot's 3D pose and location. Through the vertical LiDAR sensor, the cross section of the underground mine tunnel was mapped and accumulated while the robot was driving to create a 3D tunnel model. The developed autonomous driving robot system was then applied to an actual underground mine site to evaluate the location estimation accuracy and its applicability to underground 3D models.

2. Materials and methods

2.1. System configuration for autonomous driving robot

The sensor and data processing procedure of the autonomous driving robot used in this study are shown in Fig. 1. The autonomous driving robot utilized four types of sensors (IMU, encoder, vertical LiDAR, and horizontal LiDAR) to perform real-time autonomous driving, location estimation, and 3D mapping. To measure the robot's three-axis pose, an IMU sensor that combined an acceleration sensor, a geomagnetic sensor, and a gyroscope sensor with a Kalman filter was used. The IMU sensor used in this study measured the robot's three-axis pose (roll, pitch, and yaw) in the form of Euler angles.

To measure the driving distance of the robot, an encoder sensor that measures distance using the number of rotations of the robot's wheels was used. Wheel-type mobile robots can also calculate the heading of the robot based on the difference between the numbers of rotations of the left and right wheels. However, in an underground mine environment where the terrain is very rough, the error caused by wheel slip during rotation may increase. Consequently, the wheel encoders were used only to measure the distance of the robot's linear movement, and the rotation of the robot was measured using the IMU sensor and the pattern matching method.

For the robot's heading angle estimation and 3D tunnel mapping, two LiDAR sensors (horizontal and vertical) were used. The horizontal LiDAR sensor measured the distance between the left and right walls, following which it recognized the centerline of the road and ran autonomously along that line. Furthermore, the heading angle of the robot was calculated by comparing the two point cloud datasets continuously inputted with a pattern matching algorithm. In addition, the vertical LiDAR sensor measured the real-time vertical section of the tunnel as the robot drove and accumulated it to create a 3D tunnel model. The detailed specifications of the sensors and controllers used in this study are shown in Table 1.

The view of the autonomous driving robot and sensors used in this study is shown in Fig. 2. A vertical LiDAR was installed on the front of the robot to scan the vertical section of the tunnel. On the top, a horizontal LiDAR that measured the horizontal wall shaft and a webcam that recorded the driving process of the robot were installed. In addition, an IMU sensor that measured the 3D pose was installed inside the robot, and an encoder sensor was installed on the robot's wheel. The robot was encased in an acrylic case to prevent physical shock and leakage, and a laptop, which is the main controller, was placed inside the case. The robot's main and remote controllers were wirelessly connected via Wi-Fi, and a communication environment was established with the microcontroller controlling the individual motors through the RS-232 method. The robot and the horizontal and vertical LiDARs each have their own coordinate systems.

In this study, LabVIEW (National Instruments, Austin, TX, USA) was used as the programming language to perform autonomous driving, location estimation, and 3D tunnel mapping. The user interface of the autonomous driving system developed in this study is shown in Fig. 3. The user interface visualizes the webcam screen, IMU sensor data, horizontal/vertical LiDAR data, estimated location, and robot settings in real time. In addition to the parts included in the interface screen, the programming code included the autonomous driving algorithms, pattern matching, and wireless communication through remote controllers.

2.2. Location estimation and 3D tunnel mapping

2.2.1. Location estimation

The pattern matching and data processing sequence of the sensors when estimating the location of the autonomous driving robot are shown in Fig. 4. Pattern matching was performed using the horizontal LiDAR data (S_K, S_{K+1}) measured at two consecutive times (K, K+1), and an angle value representing the heading of the robot and a score value representing the accuracy were outputted.

Fig. 1. Overall structure of data processing for the autonomous driving robot used in this study.
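As a concrete illustration of the centerline-following behavior described in Section 2.1, the sketch below balances the left- and right-wall distances from one horizontal scan and steers toward the side with more room. It is a minimal illustrative approximation, not the LabVIEW implementation used in this study; the function name, angle windows, and proportional gain are assumptions.

```python
def steering_command(scan, k_p=1.0):
    """Keep the robot on the tunnel centerline: compare the distances to the
    left and right walls measured by the horizontal LiDAR and steer toward
    the side with more room.

    scan: iterable of (distance_m, angle_deg) tuples, 0 deg = straight ahead.
    Returns a steering angle in degrees (positive = turn left).
    """
    left = [d for d, a in scan if 80 <= a <= 100]     # returns near +90 deg
    right = [d for d, a in scan if -100 <= a <= -80]  # returns near -90 deg
    if not left or not right:
        return 0.0                                    # a wall is out of range
    offset = sum(left) / len(left) - sum(right) / len(right)
    return k_p * offset   # proportional steering back toward the centerline
```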

Table 1
Specification of sensors, controller, and driving platform used in this study.

Equipment | Model | Specification
Main controller | Laptop PC, Windows 10 (Microsoft Corporation, Redmond, WA, USA) | Intel Core i7-9750H CPU 4.50 GHz (Intel, Santa Clara, CA, USA), 16 GB RAM, NVIDIA GeForce 1650 4 GB (NVIDIA, Santa Clara, CA, USA)
LiDAR sensor | LMS-111 (SICK, Waldkirch, Germany) | Field of view: 270°; Interface: TCP/IP; Operating range: 0.5–20 m; Scanning frequency: 25 Hz/50 Hz
IMU sensor | EBIMU-9DOFV4 (E2BOX, Hanam, Korea) | Error: Roll/Pitch ±0.2°, Yaw ±0.5°; Output range: −180° to +180°
Encoder sensor | IG-32PGM 01TYPE (YOUNGJIN B&B, Seoul, Korea) | Motor gear ratio: 13; Encoder gear ratio: 61
Driving robot | ERP-42 (Unmanned Solution, Seoul, Korea) | Size: 650 mm (length) × 470 mm (width) × 158 mm (height); Drive: all-wheel drive based on differential gear; Max speed: 8 km/h

Fig. 2. Conceptual view of autonomous driving robot, sensors, and the coordinate systems.

Fig. 4. Flowchart for estimating the location of the autonomous driving robot using pattern matching, IMU, and encoder sensors.

A calculated score value exceeding the threshold indicated that the pattern matching was correct, and the heading value output by the pattern matching was used. In contrast, if the score value was lower than the threshold, it was determined that the pattern matching accuracy was low, and the heading value obtained from the IMU sensor was used instead. The current heading value was calculated by accumulating the heading values measured in units of 0.5 s. The location of the autonomous driving robot was estimated using the heading value obtained through pattern matching or the IMU sensor and the driving distance measured by the encoder sensor. Eqs. (1), (2), and (3) represent the location estimation equations of the autonomous driving robot.

x(t_{k+1}) = x(t_k) + d(t_k) \cos(\beta(t_k)) \cos(\alpha(t_k))   (1)

y(t_{k+1}) = y(t_k) + d(t_k) \cos(\beta(t_k)) \sin(\alpha(t_k))   (2)

z(t_{k+1}) = z(t_k) + d(t_k) \sin(\beta(t_k))   (3)

where d(t_k) is the driving distance of the robot at time t_k; \alpha(t_k) the heading; and \beta(t_k) the pitch angle.
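To make the update rule concrete, the following minimal Python sketch implements Eqs. (1)–(3) together with the threshold-based heading selection described above. The function names and calling convention are illustrative assumptions, not the LabVIEW code used in this study; the 70% threshold is expressed as 700 on the 0–1000 score scale introduced in Section 2.3.

```python
import math

def update_location(x, y, z, d, heading_deg, pitch_deg):
    """Dead-reckoning update of Eqs. (1)-(3): advance the position by the
    encoder distance d along the current heading (alpha) and pitch (beta)."""
    a = math.radians(heading_deg)   # alpha(t_k): heading angle
    b = math.radians(pitch_deg)     # beta(t_k): pitch angle
    return (x + d * math.cos(b) * math.cos(a),
            y + d * math.cos(b) * math.sin(a),
            z + d * math.sin(b))

def next_heading(heading_deg, pm_angle_deg, pm_score, imu_delta_deg,
                 threshold=700):
    """Heading selection described above: use the pattern-matching angle when
    its score exceeds the threshold, otherwise fall back to the IMU heading
    change. Headings accumulate once per 0.5 s cycle."""
    delta = pm_angle_deg if pm_score >= threshold else imu_delta_deg
    return heading_deg + delta

# Example of one 0.5 s step with a confident pattern match:
h = next_heading(10.0, 1.5, 850, 1.2)
print(update_location(0.0, 0.0, 0.0, 0.2, h, 0.5))
```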
Fig. 3. User interface of autonomous driving robot system in LabVIEW software.

2.2.2. 3D tunnel mapping

The LiDAR sensor used in this study was SICK's LMS-111 model; its field of view was 270°, and the point data were measured in 0.25° increments. Because the LiDAR sensor measured in increments of 0.5 s, the horizontal and vertical LiDARs each collected 2162 point cloud data per second. In the LiDAR frame, the coordinates (x_point, y_point) of a point at an angle \theta and a distance D are defined as in Eqs. (4) and (5).

x_{point} = D_{point} \cos\theta_{point}   (4)

y_{point} = D_{point} \sin\theta_{point}   (5)
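In code, Eqs. (4) and (5) amount to the usual polar-to-Cartesian conversion. The sketch below is illustrative only; the −135° to +135° scan window is an assumption consistent with the 270° field of view, and the constant 5 m ranges are dummy values.

```python
import math

def polar_to_xy(distance_m, angle_deg):
    """Eqs. (4) and (5): convert one LiDAR return (distance D_point, bearing
    theta_point) into Cartesian coordinates in the sensor frame."""
    theta = math.radians(angle_deg)
    return distance_m * math.cos(theta), distance_m * math.sin(theta)

# A 270-degree scan in 0.25-degree steps yields 1081 points per sweep.
scan = [(5.0, -135.0 + 0.25 * i) for i in range(1081)]
points = [polar_to_xy(d, a) for d, a in scan]
```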

To create a 3D map by stacking the point clouds measured by the vertical LiDAR, it was necessary to record information on the autonomous driving robot's location and pose, along with the point data, as it drove. First, the 3D pose of the robot was defined by the roll, pitch, and yaw in the form of Euler angles. They represent the degree of rotation of the robot about the x-, y-, and z-axes, respectively, and can be represented by the rotation matrices of Eqs. (6), (7), and (8).

R_X(\mathrm{roll}) = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\gamma & -\sin\gamma \\ 0 & \sin\gamma & \cos\gamma \end{bmatrix}   (6)

R_Y(\mathrm{pitch}) = \begin{bmatrix} \cos\beta & 0 & \sin\beta \\ 0 & 1 & 0 \\ -\sin\beta & 0 & \cos\beta \end{bmatrix}   (7)

R_Z(\mathrm{yaw}) = \begin{bmatrix} \cos\alpha & -\sin\alpha & 0 \\ \sin\alpha & \cos\alpha & 0 \\ 0 & 0 & 1 \end{bmatrix}   (8)

The coordinates of the tunnel map are calculated as in Eq. (9) using the autonomous driving robot's location, pose, and point data. Here x_{map}, y_{map}, and z_{map} are the coordinates of the point data, and x_{robot}, y_{robot}, and z_{robot} are the coordinates of the autonomous driving robot. To reflect the pose of the autonomous driving robot in the point cloud data that maps the tunnel, the matrices of Eqs. (6), (7), and (8) were combined into one matrix that rotates in the z-y-x order. In addition, x_{point} and y_{point} are the coordinates of the point measured by the vertical LiDAR sensor, and x_{LiDAR}, y_{LiDAR}, and z_{LiDAR} represent the translation needed to match the center position of the robot with the position of the LiDAR sensor. The tunnel coordinates calculated through Eq. (9) are stacked while the robot is driving to create a 3D model of a single underground mine tunnel.

\begin{bmatrix} x_{map} \\ y_{map} \\ z_{map} \end{bmatrix} =
\begin{bmatrix}
\cos\alpha\cos\beta & \cos\alpha\sin\beta\sin\gamma - \sin\alpha\cos\gamma & \cos\alpha\sin\beta\cos\gamma + \sin\alpha\sin\gamma \\
\sin\alpha\cos\beta & \sin\alpha\sin\beta\sin\gamma + \cos\alpha\cos\gamma & \sin\alpha\sin\beta\cos\gamma - \cos\alpha\sin\gamma \\
-\sin\beta & \cos\beta\sin\gamma & \cos\beta\cos\gamma
\end{bmatrix}
\cdot
\begin{bmatrix} x_{robot} \\ y_{robot} \\ z_{robot} \end{bmatrix}
+ \begin{bmatrix} 0 \\ x_{point} \\ y_{point} \end{bmatrix}
+ \begin{bmatrix} x_{LiDAR} \\ y_{LiDAR} \\ z_{LiDAR} \end{bmatrix}   (9)

The point data, including the location, pose, and the LiDAR's distance information obtained at time K, can be expressed in the form S_K of Eq. (10).

S_K = \begin{bmatrix} X_{K(1)} & Y_{K(1)} & Z_{K(1)} \\ X_{K(2)} & Y_{K(2)} & Z_{K(2)} \\ \vdots & \vdots & \vdots \\ X_{K(1081)} & Y_{K(1081)} & Z_{K(1081)} \end{bmatrix}   (10)
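The following Python sketch transcribes Eqs. (6)–(9) directly: it composes the three rotation matrices in z-y-x order and places one vertical-LiDAR return into the map frame. Function and variable names are illustrative assumptions, and the grouping of terms follows Eq. (9) as printed.

```python
import numpy as np

def rotation_zyx(roll, pitch, yaw):
    """Combined rotation of Eqs. (6)-(8) in z-y-x order, i.e.
    Rz(yaw) @ Ry(pitch) @ Rx(roll); all angles in radians."""
    rx = np.array([[1, 0, 0],
                   [0, np.cos(roll), -np.sin(roll)],
                   [0, np.sin(roll),  np.cos(roll)]])
    ry = np.array([[ np.cos(pitch), 0, np.sin(pitch)],
                   [0, 1, 0],
                   [-np.sin(pitch), 0, np.cos(pitch)]])
    rz = np.array([[np.cos(yaw), -np.sin(yaw), 0],
                   [np.sin(yaw),  np.cos(yaw), 0],
                   [0, 0, 1]])
    return rz @ ry @ rx

def map_point(robot_xyz, roll, pitch, yaw, x_point, y_point, lidar_offset):
    """Eq. (9): one vertical-LiDAR return (x_point, y_point) mapped into the
    tunnel frame; lidar_offset is (x_LiDAR, y_LiDAR, z_LiDAR)."""
    r = rotation_zyx(roll, pitch, yaw)
    return (r @ np.asarray(robot_xyz, dtype=float)
            + np.array([0.0, x_point, y_point])
            + np.asarray(lidar_offset, dtype=float))

# Stacking map_point over every 0.5 s scan yields the 3D tunnel model.
print(map_point([1.0, 2.0, 0.0], 0.0, 0.05, 0.1, 0.4, 1.8, [0.3, 0.0, 0.2]))
```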
2.3. Pattern matching of sequential LiDAR images for estimating robot's heading

In this study, the heading of the robot was measured through pattern matching. Pattern matching is a machine vision-based image recognition, learning, and processing algorithm that can calculate angle and accuracy by comparing a learned template image with a target image.

The pattern matching algorithm used was pyramid matching, which reduces the size of a sample through the Gaussian pyramid, adjusting the resolution through a Gaussian filter and minimizing the size of the image by repeating this process [37]. By optimizing the data size, the total amount of computation is reduced significantly; therefore, the algorithm can be run in a short cycle [38]. In this study, the Vision Assistant of National Instruments' machine vision processing software was used in conjunction with LabVIEW to implement the pattern matching technology [39]. Pattern matching was performed by designating some of the previous data as the template image and the current data as the target image among the continuously acquired LiDAR sensor data. This process was continuously repeated to output the accuracy of the pattern matching and the heading angle of the robot.
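The pyramid idea can be sketched in a few lines: the image is shrunk level by level so that matching can start coarse and cheap and be refined only where needed. This is a schematic illustration, not the NI Vision Assistant internals; block averaging stands in for the Gaussian filtering.

```python
import numpy as np

def downsample(img):
    """One pyramid level, approximated by 2x2 block averaging; a faithful
    Gaussian pyramid would apply a Gaussian filter before subsampling."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    x = img[:h, :w].astype(float)
    return 0.25 * (x[0::2, 0::2] + x[1::2, 0::2] + x[0::2, 1::2] + x[1::2, 1::2])

def build_pyramid(img, levels=3):
    """Successively reduced copies of the input; the template search runs on
    the smallest level first, cutting the total amount of computation."""
    pyramid = [np.asarray(img, dtype=float)]
    for _ in range(levels - 1):
        pyramid.append(downsample(pyramid[-1]))
    return pyramid
```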
The pattern matching performed on the tunnel measured by the LiDAR sensor is shown in Fig. 5. The proposed pattern-matching-based heading measurement method used only the 2D LiDAR that measured the wall in the horizontal direction; there was no additional sensor. Among the point data continuously measured by the LiDAR sensor as the robot drove, the rotational angle of the region of interest (ROI) was measured by comparing the overlapping points of the mine wall data (S_K) at time K and the mine wall data (S_{K+1}) at time K+1. Similarly, S_{K+1} was learned as a template image for S_{K+2}, and the pattern matching between S_{K+1} and S_{K+2} was sequentially performed. When comparing the ROI measured in S_K with the tunnel data of S_{K+1}, the matching location was automatically found and the rotation angle was measured, as shown in Fig. 4.

The data processing sequence of the proposed pattern matching method is shown in Fig. 6. After removing the singular values caused by sensor error among the 1081 point cloud data (S_K) measured by the horizontal LiDAR at time K, the point cloud data on the tunnel wall were converted into an RGB image. To apply the RGB image to the machine vision-based pattern-matching algorithm, it was converted to grayscale, and the point 2 m in front of the robot was selected as the ROI and learned as a template pattern. Then, the learned template pattern was applied to S_{K+1}, pattern matching was performed, and the score value and angle were calculated. The score value indicated the pattern matching accuracy. As it had a value between 0 and 1000 points, it could be expressed as a value from 0% to 100%. The angle represented the robot's rotation, that is, its heading change.
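A schematic version of this rasterization step is shown below: wall points are dropped onto a pixel grid to form a grayscale image, and the region around the point 2 m ahead of the robot is cut out as the template. The grid size, extent, and ROI size are assumptions made for illustration.

```python
import numpy as np

def scan_to_image(points_xy, size=200, extent_m=10.0):
    """Rasterize one horizontal scan (iterable of (x, y) wall points in
    metres, robot at the origin) into a grayscale occupancy image; points
    outside the extent are discarded as out-of-range values."""
    img = np.zeros((size, size), dtype=np.uint8)
    scale = size / (2.0 * extent_m)                  # metres -> pixels
    for x, y in points_xy:
        if abs(x) < extent_m and abs(y) < extent_m:
            img[int((y + extent_m) * scale), int((x + extent_m) * scale)] = 255
    return img

def roi_template(img, size=200, extent_m=10.0, ahead_m=2.0, roi_px=40):
    """Cut the template pattern around the point 2 m ahead of the robot
    (+x direction), as learned from scan S_K."""
    scale = size / (2.0 * extent_m)
    col, row = int((ahead_m + extent_m) * scale), size // 2
    return img[row - roi_px:row + roi_px, col - roi_px:col + roi_px]
```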
When the size of the template was K × L and the size of the image was M × N between two consecutively inputted tunnel images, the correlation at point (i, j) was calculated using Eq. (11).

C(i,j) = \frac{\sum_{x=0}^{L-1} \sum_{y=0}^{K-1} (w(x,y) - \bar{w})(f(x+i, y+j) - \bar{f}(i,j))}{\sqrt{\sum_{x=0}^{L-1} \sum_{y=0}^{K-1} (w(x,y) - \bar{w})^2} \sqrt{\sum_{x=0}^{L-1} \sum_{y=0}^{K-1} (f(x+i, y+j) - \bar{f}(i,j))^2}}   (11)

where i = 0, 1, 2, …, M−1 and j = 0, 1, 2, …, N−1. The highest value of C(i, j) could be used to calculate the pattern matching score. The constants (\bar{w}, \bar{f}(i,j)) related to the intensity of light could be ignored because the performance of the LiDAR sensor was constant regardless of it.

The pattern matching success and failure in the LabVIEW software are shown in Fig. 7. The score value was calculated using Eq. (11); failure was determined based on the threshold value.
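For reference, Eq. (11) is the standard normalized cross-correlation. A brute-force version is sketched below as an illustration; a production matcher would instead search coarse-to-fine over the pyramid and over candidate rotations to recover the heading angle.

```python
import numpy as np

def ncc(template, window):
    """Eq. (11) for one offset (i, j): normalized cross-correlation in
    [-1, 1], insensitive to the mean intensities w-bar and f-bar."""
    w = template.astype(float) - template.mean()
    f = window.astype(float) - window.mean()
    denom = np.sqrt((w * w).sum() * (f * f).sum())
    return float((w * f).sum() / denom) if denom > 0 else 0.0

def best_match(template, image):
    """Evaluate C(i, j) at every offset and keep the maximum, which can be
    rescaled to the 0-1000 point score used for the threshold test."""
    kh, kw = template.shape
    best, best_ij = -1.0, (0, 0)
    for i in range(image.shape[0] - kh + 1):
        for j in range(image.shape[1] - kw + 1):
            c = ncc(template, image[i:i + kh, j:j + kw])
            if c > best:
                best, best_ij = c, (i, j)
    return best_ij, int(max(best, 0.0) * 1000)
```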
and location estimation were performed every 0.5 s while the
robot was driving and approximately 130 times during the entire
experiment. Pattern matching was successful at all points; how-
ever, the ratios of success and failure were recorded at 64.8% and
35.2%, respectively when a threshold of 70% was applied to the pat-
tern matching score. The heading variations indicate the difference
in the heading between the previous and present points; it was cal-
culated as the average of the absolute values to indicate only the
degree of change. The overall average was approximately 2.5°,
indicating an average rotation of 2.5° in the (+) or () direction
every 0.5 s. On the other hand, when pattern matching was suc-
Fig. 5. Pattern matching using continuous two LiDAR data (SK, SK+1). cessful with an accuracy of 70% or more, the heading value was

Fig. 6. Procedure of obtaining pattern-matching data from horizontal LiDAR sensor.

Fig. 7. Comparison of pattern matching success and failure in LabVIEW software.

Fig. 9. View of autonomous driving robot, webcam display, tunnel wall measured
by the LiDAR sensor at Section A, Section B, Section C, Section D, Section E.
Fig. 8. Conceptual diagram of field experiment in underground mine environment.

It was found that the pattern matching accuracy was high when the robot rotated at a small angle compared to the previous point of view, and low when the robot rotated at a relatively large angle. At some points, the heading variations showed significant deviations exceeding 10°, and it was confirmed that the pattern matching accuracy was very low at these points.

In some areas of the underground mine shaft, there was a curved section in which the robot had to rotate at a relatively large angle. When the robot encountered the bend, the heading value changed significantly, and the robot maintained the heading while driving until the next bend. It was confirmed that the success rate of pattern matching was relatively higher within the curve than at the entrance of the curved point, even when the robot maintained a large heading value when entering the curved section.

Fig. 10. Pattern matching score recorded as the autonomous robot drove, with the 70% score line.

Fig. 11 shows the estimated driving path of the autonomous driving robot in the underground mine shaft. Overall, it was confirmed that the robot drove along a path close to the central point of the road. Because the location estimation was performed in a short cycle, the movement path of the robot was recorded relatively densely.

Fig. 11. Driving path of autonomous driving robot in underground mine.

Fig. 12. 3D model of underground mine generated by pattern matching-based autonomous driving robot: top view, side view, front view.

Fig. 12 shows the 3D map created by stacking the vertical LiDAR data as the autonomous driving robot drove through the underground mine tunnel. Over a total distance of 25 m, the robot moved approximately 25 m in the X-axis direction and 6 m in the Y-axis direction. The underground mine shaft was approximately 2.41 m high in the Z-axis direction. Because the robot was driving at a low speed and mapping was performed every 0.5 s, the shaft was densely displayed.

4. Discussion

In this study, the location of an autonomous driving robot was estimated using a pattern matching method; IMU sensors were combined according to the pattern matching score. In addition, the 3D map was created based on the location of the robot and the vertical section of the underground mine shaft measured by the vertical LiDAR. To evaluate the accuracy of the location estimation and 3D mapping methods developed in this study, a quantitative comparison between the estimated robot location, the 3D tunnel shape, and the actual values must be performed. The accuracy of the proposed pattern-matching-based location estimation method was quantitatively evaluated: as the robot drove through the underground mine, its location was estimated using both the pattern-matching-based location estimation method and the previously used location estimation method, and the results were compared with the actual movement path. The previously used location estimation method combined the LiDAR, IMU, and encoder sensors [25–29,36]. The IMU and encoder sensors performed the same function, and the robot's pose and location were estimated by comparing the features scanned at two different times in the LiDAR sensors' data.

Fig. 13. Autonomous driving robot's driving path estimated by both location estimation methods (LiDAR + IMU + encoder + pattern matching and LiDAR + IMU + encoder) and actual driving path in field experiment.

Fig. 13 shows the results of a comparison experiment on the accuracy of the location estimation in the underground mine using the existing location estimation method and the pattern-matching-based one.


The existing method tended to exhibit a relatively high error in the curved section. Furthermore, the location estimation performance of the pattern-matching-based method was relatively high, with the error remaining almost constant over the entire section.

The comparison results of the proposed pattern-matching-based location estimation method and the existing one are shown in Table 2. The method combining pattern matching, IMU, and encoder sensors yielded an average error of 0.10 m on the X-axis, 0.06 m on the Y-axis, and an overall error of 0.08 m. The existing method exhibited an average error of 0.16 m on the X-axis, 0.14 m on the Y-axis, and an error of approximately 0.15 m overall. Therefore, it was proven that the proposed pattern-matching-based location estimation method outperformed the existing method.

Table 2
Experiment results of both autonomous driving robot's location estimation methods.

Mean absolute error (m) | PM + IMU + Encoder | LiDAR + IMU + Encoder
X-axis | 0.1001 | 0.1643
Y-axis | 0.0620 | 0.1462
X-Y axes | 0.0810 | 0.1553

The location estimation method based on the LiDAR data used in the previous studies corrected the location using only some feature points among the entire point data. However, it is often difficult to recognize the shape of the tunnels using only some point data because the walls of the tunnels are rough or irregularly formed. Consequently, it could be expected that the relative location estimation accuracy was inferior because of the high dependence on the IMU and encoder sensors. On the other hand, the pattern matching method used in this study converted the high-resolution LiDAR data into a high-quality image format and calculated the heading, which enabled it to efficiently predict the robot's pose and location, even in the underground mine environment.

In this study, the 3D mapping performed using the pattern-matching-based location estimation method and a vertical LiDAR sensor was also evaluated quantitatively. A standard surveying instrument, a "total station", was used to survey five points in the experimental area where tunnel mapping was performed using the autonomous robot. Furthermore, the cross sections of the shaft map constructed using the robot and those obtained using standard measurement methods were quantitatively compared. To match the scan points of the vertical LiDAR mounted on the autonomous driving robot with those of the standard survey methods, a 3D tunnel model was constructed for the experimental area, and the sections matching the standard survey points were compared in a cross-sectional manner.

Fig. 14 shows the 3D point cloud map constructed by the autonomous driving robot and by surveying. Five points between 4 and 10 m of the experimental area were surveyed, and the constructed map was compared with the robot-based mapping results. The top view (Fig. 14a) confirmed that the map created by the robot was distributed similarly to the actual survey points in the x- and y-axis directions. Furthermore, the front (Fig. 14b) and side views (Fig. 14c) confirmed that the floor and ceiling of the tunnel were mapped with high accuracy.

Fig. 14. 3D point cloud map generated and survey conducted by the autonomous driving robot: top, front, and side views.
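As an aside on the quantitative comparison above, the per-axis values in Table 2 can be computed from logged paths with a few lines. This sketch is illustrative and assumes the estimated and actual paths are sampled at matching timestamps; the pairing procedure is not specified in the text.

```python
import numpy as np

def mean_absolute_error(estimated_xy, actual_xy):
    """Mean absolute error between an estimated path and the actual path,
    per axis and combined, as reported in Table 2; both inputs are (N, 2)
    arrays of X-Y positions."""
    err = np.abs(np.asarray(estimated_xy, float) - np.asarray(actual_xy, float))
    return err[:, 0].mean(), err[:, 1].mean(), err.mean()
```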

Fig. 15. Comparison of the 2D tunnel sections generated and survey conducted at five points (a, b, c, d, and e) by the autonomous driving robot.


Table 3
Experiment results of the mapping and surveying conducted by the autonomous driving robot of the tunnel section.

No. | Area of section created by surveying (m²) | Area of section created by autonomous driving robot (m²) | Absolute error (m²) | Root mean square error (m²)
1 | 6.62 | 6.69 | 0.07 | 0.05 (all sections)
2 | 6.73 | 6.66 | 0.07 |
3 | 6.73 | 6.62 | 0.11 |
4 | 5.97 | 5.96 | 0.01 |
5 | 5.44 | 5.44 | 0.00 |
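One way to compute the section areas and the RMSE summarized in Table 3 is sketched below. The shoelace formula is an assumed method (the text does not state how the areas were obtained) and requires the section points to be ordered around the profile. Note that the rounded areas in Table 3 give an RMSE of about 0.066 m²; the reported 0.05 m² presumably comes from the unrounded section data.

```python
import numpy as np

def section_area(points_xy):
    """Area enclosed by one 2D tunnel cross-section via the shoelace formula;
    points must be ordered around the profile."""
    p = np.asarray(points_xy, dtype=float)
    x, y = p[:, 0], p[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

def area_rmse(robot_areas, survey_areas):
    """Root mean square error between robot-mapped and surveyed areas."""
    d = np.asarray(robot_areas, float) - np.asarray(survey_areas, float)
    return float(np.sqrt((d * d).mean()))

# Rounded Table 3 values give ~0.066 m^2 (0.05 m^2 with unrounded data):
print(area_rmse([6.69, 6.66, 6.62, 5.96, 5.44], [6.62, 6.73, 6.73, 5.97, 5.44]))
```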

Fig. 15 shows the results of comparing the 2D tunnel sections generated by the autonomous driving robot and those surveyed manually. It can be seen that the 2D tunnel sections created by the two methods are similar. Table 3 lists a quantitative comparison of the tunnel sections created by the autonomous driving robot and by the survey. For the five survey points, the area and accuracy of the tunnel sections generated by the autonomous driving robot were calculated by comparing them with those of the actual survey. The map created by the autonomous driving robot achieved an absolute error ranging from a minimum of 0.0075 m² up to 0.1069 m² and a root mean square error (RMSE) of 0.05 m². The LiDAR sensor mounted on the autonomous robot exhibited relatively high accuracy; therefore, if the location of the autonomous robot were accurately estimated, the accuracy of the tunnel mapping would also improve.

5. Conclusions

In this study, an autonomous driving robot's location estimation and 3D tunnel mapping were performed using a machine-vision-based pattern matching technique.

(1) The proposed location estimation method recognized and matched the shape of the tunnel wall using a horizontal LiDAR sensor to measure the heading angle of the robot. When the pattern matching accuracy was poor, the heading was measured using an IMU sensor. The robot's pose and location were estimated using the heading, the encoder sensor (which measures distance), and the IMU; the estimates were combined with the vertical LiDAR sensor data to create a 3D tunnel. The accuracy of the proposed and previous location estimation methods was compared; the accuracy achieved by the proposed method was as high as 0.07 m. In addition, the tunnel mapping conducted using the autonomous driving robot exhibited an RMSE of 0.05 m².

(2) The 3D maps of underground mines created in previous studies using autonomous driving robots tended to accumulate errors significantly because of the types of sensors used, such as the IMU and encoder. In studies in which location was measured using distance-measuring sensors, such as LiDAR sensors, it was necessary to conduct prior exploration, which posed an inconvenience when used in a form that performed comparison with survey points or existing maps. Further, it was difficult to detect the feature points due to the topographical characteristics of underground mines; another limitation was that the amount of computation increased because it was necessary to identify the shape of each wall. The proposed pattern-matching-based location estimation method could be effectively utilized in underground mining environments and was highly effective at minimizing the location estimation error of the robot because it could be performed in a short cycle. Furthermore, the proposed method reduces the amount of computation through image optimization. As mentioned above, the fact that it could be performed in a short cycle enabled highly accurate location estimation.

(3) This study measured the angle in the yaw direction using a horizontal 2D LiDAR sensor; however, the angles in the roll and pitch directions were measured using only the IMU sensor. If pattern matching in the yaw direction and in the roll and pitch directions were performed simultaneously, the accuracy of the robot's 3D pose measurement would increase, and the precision of the 3D mapping of the underground mine shaft would also be guaranteed. Therefore, research on three-axis pose correction using 3D LiDAR or a vision camera is necessary. In addition, the chosen experimental area was a straight-line shaft that was 25 m long with no obstacles along the direction of driving. However, in actual underground mines, narrow sections, intersections, obstacles, and workers exist as obstacles. Therefore, in future works, autonomous driving algorithms, such as obstacle avoidance and path planning, should be supplemented to incorporate such field conditions.

(4) In this study, field experiments were performed using a small autonomous robot; however, in actual underground mines, there may be areas with poor road conditions. Therefore, it is necessary to improve the driving performance by applying the autonomous driving algorithm developed in this study to a robot platform that can safely navigate in a field environment.

Underground mines have a variety of risk factors, such as rock collapse and toxic gases. Therefore, there are many areas that are too dangerous for humans to access. Using autonomous driving robots in these areas can increase the safety of exploration. Furthermore, transporting equipment in underground mines using the autonomous driving system can significantly boost the productivity of the mine. To use autonomous driving robots in these areas, accurate location estimation and 3D mapping are essential. It is expected that this study will serve as an important reference material in various fields related to the location estimation and 3D mapping of autonomous driving robots in underground mines in the future.

Acknowledgements

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2021R1A2C1011216).

References

[1] Tesla Autopilot. Autopilot and Full Self-Driving Capability. https://www.tesla.com/support/autopilot; 2020.
[2] Synopsys. The 6 Levels of Vehicle Autonomy Explained. https://www.synopsys.com/automotive/autonomous-driving-levels.html; 2020.
[3] General Motors. 2018 Self-driving safety report. https://www.gm.com/content/dam/company/docs/us/en/gmcom/gmsafetyreport.pdf; 2018.
[4] Ford. Ford's approach to developing self-driving vehicles. https://media.ford.com/content/dam/fordmedia/pdf/Ford_AV_LLC_FINAL_HR_2.pdf; 2018.
[5] NVIDIA. Self-driving safety report. https://www.nvidia.com/content/dam/en-zz/Solutions/self-driving-cars/safety-report/NVIDIA-Self-Driving-Safety-Report-2018.pdf; 2018.


[6] Intel. The State of AV/ADAS at Mobileye/Intel. https://s21.q4cdn.com/600692695/files/doc_presentations/2019/01/Mobileye_CES2019.pdf; 2019.
[7] Barnewold L, Lottermoser BG. Identification of digital technologies and digitalisation trends in the mining industry. Int J Min Sci Tech 2020;30(6):747–57.
[8] Larsson J, Broxvall M, Saffiotti A. A navigation system for automated loaders in underground mines. In: Proceedings of the 5th International Conference on Field and Service Robotics. Port Douglas, Australia: Springer; 2006. p. 129–40.
[9] Marshall J, Barfoot T, Larsson J. Autonomous underground tramming for center-articulated vehicles. J F Robot 2008;25(6-7):400–21.
[10] Yinka-Banjo C, Bagula A, Osunmakinde I. Autonomous multi-robot behaviours for safety inspection under the constraints of underground mine terrains. Ubiquitous Comput Commun J 2012;7:1316.
[11] Berglund T, Brodnik A, Jonsson H, Staffanson M, Soderkvist I. Planning smooth and obstacle-avoiding B-spline paths for autonomous mining vehicles. IEEE Trans Autom Sci Eng 2010;7(1):167–72.
[12] Chi H, Zhan K, Shi B. Automatic guidance of underground mining vehicles using laser sensors. Tunn Undergr Sp Tech 2012;27:142–8.
[13] Volvo. https://www.volvogroup.com/en-en/news/2016/sep/news-2297091.html; 2016.
[14] Zhao J, Gao J, Zhao F, Liu Y. A search-and-rescue robot system for remotely sensing the underground coal mine environment. Sensors 2017;17:1–23.
[15] Günther F, Mischo H, Lösch R, Grehl S, Güth F. Increased safety in deep mining with IoT and autonomous robots. In: Proceedings of the 39th International Symposium 'Application of Computers and Operations Research'. Wroclaw, Poland: CRC Press; 2019. p. 101–5.
[16] Thrun S, Thayer S, Whittaker W, Baker C, Burgard W, Ferguson D. Autonomous exploration and mapping of abandoned mines: Software architecture of an autonomous robotic system. IEEE Robot Autom Mag 2004;11(4):79–91.
[17] Miller ID, Cladera F, Cowley A, Shivakumar SS, Lee ES, Jarin-Lipschitz L. Mine tunnel exploration using multiple quadrupedal robots. IEEE Robot Autom Lett 2020;5(2):2840–7.
[18] Roberts JM, Duff ES, Corke PI. Reactive navigation and opportunistic localization for autonomous underground mining vehicles. Inf Sci 2002;145(1-2):127–46.
[19] Kim H, Choi Y. Development of a LiDAR sensor-based small autonomous driving robot for underground mines and indoor driving experiments. J Korean Soc Miner Energy Resour Eng 2019;56:407–15.
[20] Kim H, Choi Y. Field experiment of a LiDAR sensor-based small autonomous driving robot in an underground mine. Tunn Undergr Sp 2020;30:76–86.
[21] Singh SK, Raval S, Banerjee B. A robust approach to identify roof bolts in 3D point cloud data captured from a mobile laser scanner. Int J Min Sci Tech 2021;31(2):303–12.
[22] Åstrand M, Jakobsson E, Lindfors M, Svensson J. A system for underground road condition monitoring. Int J Min Sci Tech 2020;30(3):405–11.
[23] Monsalve JJ, Baggett J, Bishop R, Ripepi N. Application of laser scanning for rock mass characterization and discrete fracture network generation in an underground limestone mine. Int J Min Sci Tech 2019;29(1):131–7.
[24] Evanek N, Slaker B, Iannacchione A, Miller T. LiDAR mapping of ground damage in a heading re-orientation case study. Int J Min Sci Tech 2021;31(1):67–74.
[25] Baker C, Morris A, Ferguson D, Thayer S, Whittaker C, Omohundro Z. A campaign in autonomous mine mapping. In: IEEE International Conference on Robotics and Automation. New Orleans, New York, USA: IEEE; 2004.
[26] Bakambu JN, Polotski V. Autonomous system for navigation and surveying in underground mines. J F Robot 2007;24(10):829–47.
[27] Ren Z, Wang L, Bi L. Robust GICP-based 3D LiDAR SLAM for underground mining environment. Sensors 2019;19(13):2915.
[28] Neumann T, Ferrein A, Kallweit S, Scholl I. Towards a mobile mapping robot for underground mines. In: Proceedings of the 2014 PRASA, RobMech and AfLaT International Joint Symposium. Cape Town, South Africa; 2014. p. 1–7.
[29] Ghosh D, Samanta B, Chakravarty D. Multi sensor data fusion for 6D pose estimation and 3D underground mine mapping using autonomous mobile robot. Int J Image Data Fusion 2017;8(2):173–87.
[30] Thrun S, Burgard W, Fox D. Probabilistic robotics. Cambridge: MIT Press; 2005.
[31] Stahl T, Wischnewski A, Betz J, Lienkamp M. ROS-based localization of a race vehicle at high-speed using LIDAR. In: Proceedings of the E3S Web of Conferences. Prague, Czech Republic: E3S Web of Conferences; 2019. p. 1–10.
[32] Wischnewski A, Stahl T, Betz J, Lohmann B. Vehicle dynamics state estimation and localization for high performance race cars. IFAC-PapersOnLine 2019;52(8):154–61.
[33] de Miguel MÁ, García F, Armingol JM. Improved LiDAR probabilistic localization for autonomous vehicles using GNSS. Sensors 2020;20(11):3145.
[34] Akai N, Morales LY, Yamaguchi T, Takeuchi E, Yoshihara Y, Okuda H. Autonomous driving based on accurate localization using multilayer LiDAR and dead reckoning. In: 2017 IEEE 20th Intelligent Transportation Systems Conference (ITSC). Yokohama, Japan: IEEE; 2018. p. 1–6.
[35] Fu H, Ye L, Yu R, Wu T. An efficient scan-to-map matching approach for autonomous driving. In: 2016 IEEE International Conference on Mechatronics and Automation. Harbin, China: IEEE; 2016. p. 1649–54.
[36] Kim H, Choi Y. Comparison of three location estimation methods of an autonomous driving robot for underground mines. Appl Sci 2020;10(14):4831.
[37] Vyas A, Roopashree MB, Prasad BR. Centroid detection by Gaussian pattern matching in adaptive optics. Int J Comput Appl 2010;1(26):32–7.
[38] Banerji D, Ray R, Basu J, Basak I. Autonomous navigation by robust scan matching technique. Int J Innov Tech Creat Eng 2012;2:7–13.
[39] National Instruments. IMAQ Vision Concepts Manual. http://www.ni.com/pdf/manuals/322916a.pdf; 2000.
