
sensors

Article
A Single LiDAR-Based Feature Fusion Indoor
Localization Algorithm
Yun-Ting Wang 1, Chao-Chung Peng 1,*, Ankit A. Ravankar 2 and Abhijeet Ravankar 3
1 Department of Aeronautics and Astronautics, National Cheng Kung University, Tainan 701, Taiwan;
omiwanggg@gmail.com
2 Division of Human Mechanical Systems and Design, Faculty of Engineering, Hokkaido University,
Sapporo 060-8628, Japan; ankit@eng.hokudai.ac.jp
3 Lab of Smart Systems Engineering, Kitami Institute of Technology, Hokkaido, Kitami 090-8507, Japan;
abhijeetravankar@gmail.com
* Correspondence: ccpeng@mail.ncku.edu.tw; Tel.: +886-6-275-7575 (ext. 63633)

Received: 17 March 2018; Accepted: 18 April 2018; Published: 23 April 2018 

Abstract: In past years, there has been significant progress in the field of indoor robot localization.
To precisely recover their positions, robots usually rely on multiple on-board sensors. Nevertheless,
this raises the overall system cost and increases computation. In this research work, we considered
a light detection and ranging (LiDAR) device as the only sensor for detecting surroundings and
propose an efficient indoor localization algorithm. To attenuate the computation effort and preserve
localization robustness, a weighted parallel iterative closest point (WP-ICP) with interpolation is
presented. As compared to the traditional ICP, the point cloud is first processed to extract corner and
line features before applying point registration. Later, points labeled as corners are only matched with
the corner candidates. Similarly, points labeled as lines are only matched with the line candidates.
Moreover, their ICP confidence levels are also fused in the algorithm, which makes the pose estimation
less sensitive to environment uncertainties. The proposed WP-ICP architecture reduces the probability
of mismatch and thereby reduces the ICP iterations. Finally, based on given well-constructed indoor
layouts, experiment comparisons are carried out under both clean and perturbed environments. It is
shown that the proposed method significantly reduces the computation effort while simultaneously
preserving localization precision.

Keywords: indoor localization; pose estimation; iterative closest point; SLAM; LiDAR

1. Introduction
Simultaneous localization and mapping (SLAM) is a method of building a map of an unknown
environment under exploration while estimating the vehicle pose based on sensor information.
When exploring an unpredictable environment, an unmanned vehicle is generally employed for
exploration as well as localization. In this regard, the vehicle can be equipped with a single sensor for
detecting and identifying surroundings, or with two or more sensors to enhance its estimation capability.
Among the different kinds of sensors [1], laser range finders (LRFs), vision, and Wi-Fi
networks are popular sensing techniques for indoor localization tasks. Recently, with advancements in
computer vision and image processing, many researchers have started investigating vision-based
SLAM [2,3]. Under the condition that the captured images are matched sufficiently, features can
be extracted using the Scale-Invariant Feature Transform (SIFT) [4] or Speeded Up Robust Features
(SURF) [5]. Other indoor localization methods consider the amplitude of received signals from Wi-Fi
networks [6–10]. These localization strategies depend on wireless hardware devices pre-installed on
site and thus may not be applicable in Wi-Fi-denied environments.




For an ultra-low-cost SLAM module, previous works have considered a set of on-board ultrasonic
sensors [11], which provide sparse measurements of the environment. However, the robot
might lose its pose in complicated environments without the aid of robot kinematics information.
To achieve robust image recognition [12], robot navigation, and map construction, the depth sensor
Kinect v2 was considered [13]. Kinect v2 is based on the time-of-flight measurement principle and can
be used in outdoor environments. Since multi-depth sensors are able to provide highly dense 3D
data [14], the real-time computation effort is relatively high. Furthermore, for a well-constructed indoor
environment, such a hardware configuration is not necessary for 2D robot positioning.
Owing to its light weight and portability, light detection and ranging (LiDAR) has
attracted more and more attention [15,16]. LiDAR possesses a high sampling rate, high angular
resolution, good range detection, and high robustness against environment variability. As a result,
in this research, a single LiDAR is used for indoor localization.
By analyzing the position of features in each frame at every movement of the vehicle, one can
figure out the vehicle’s traveling distance and heading. With different scanning data, an iterative closest
point (ICP) [17] algorithm is employed to find the most appropriate robot pose matching conditions,
including rotation and translation. However, the ICP may not always lead to good pattern matching if
the point cloud registration issue is not well addressed. In other words, better point registration leads
to better robot pose estimation. To address this, point cloud outliers must be identified and recognized.
Another issue when applying the ICP is computation efficiency. Since the ICP algorithm considers
the closest-point rule to establish correspondences between points in the current scan and a given layout,
the searching effort can increase dramatically when the scan or the layout contains large amounts of data.
Previous studies [18–24] have addressed and solved some of the problems encountered when applying ICP,
including (1) wrong point matching for large initial errors, (2) expensive correspondence searching,
(3) slow convergence speed, and (4) outlier removal. For robot poses subject to large initial angular
displacements, the iterative dual correspondence (IDC) algorithm [16] was proposed. However, it demands
higher computation due to its dual-correspondence process. Metric-based ICP (MbICP) [21] considers a
geometric distance that takes translation and rotation into account simultaneously. The correspondence
between scans is established with this measure, and the minimization of the error is carried out in
terms of this distance. The MbICP shows superior robustness in the presence of large angular
displacements. Among various planar scan matching strategies, the Normal Distribution Transform
(NDT) [22] and Point-to-Line ICP (PLICP) [23] illustrate state-of-the-art performance in terms
of scan matching accuracy. NDT transforms scans onto a grid space and tends to find the best fit by
maximizing the normal distribution in each cell. The NDT does not require point-to-point registrations,
so it enhances matching speed. However, the selection of the grid size dominates estimation stability.
The PLICP considers normal information from the environment’s geometric surface and tries to minimize
the distance projected onto the normal vector of the surface. In addition, a closed-form solution is also
given such that the convergence speed can be drastically improved.
To obtain precise and high-bandwidth robot pose estimation, the vehicle’s dynamics and traveling
status can be further integrated into a Kalman filter [25]. A Kalman filter provides the best estimate to
eliminate noise and provides a better robot pose prediction. Other researchers have considered
wheel odometry fusion-based SLAM [26,27], which integrates robot kinematics and encoder data for
pose estimation. However, it might not be suitable for the realization of a portable localization system.
Based on the aforementioned issues, in this work, the LiDAR is considered as the only sensor for
mapping and localization. To avoid mismatched point registration and to enhance matching speed,
we propose a feature-based weighted parallel iterative closest point (WP-ICP) architecture inspired
by [18,23,28–31]. The main advantages of the proposed method are as follows: (a) The point sizes of the
model set and data set are significantly reduced so that the ICP speed can be enhanced. (b) A split
and merge algorithm [28,29] is considered to divide the point cloud into two feature groups, namely,
corner and line segments. The algorithm works by matching points labeled as corners to the corner
candidates; similarly, points labeled as lines can only be matched to the line candidates. As a
result, it attenuates the possibility of point cloud mismatch.

In this paper, it is supposed that a well-constructed indoor layout is given in advance. The main
design objective is to reduce the computation effort and to maintain the indoor positioning precision.
The rest of this paper is organized as follows. In Section 2, an adaptive breakpoint detector is first
introduced for scan point segmentation. A clustering algorithm and a split–merge approach are further
considered for point clustering and feature extraction, respectively. In Section 3, the WP-ICP algorithm is
proposed. Section 4 presents real experiments to evaluate the effectiveness of the proposed method.
Finally, Section 5 outlines conclusions and future work.

2. Feature Extraction

For feature extraction, a robot is employed in an indoor environment and moves in a given layout,
and a localization algorithm is introduced. In addition, real-time sensing and pose estimation are key
for practical realization. Feature extraction plays an important role in reducing the amount of point
cloud data for computational speedup.

2.1. The Main Concept of Feature Extraction
Due to the high scan resolution of LiDAR and the given layout, construction of a KD-Tree will
be highly time-consuming if all data points are fed into the ICP algorithm. Therefore, extracting
informative feature points to represent the environment is one of the important tasks.

In this research, a feature extraction scan matching procedure is proposed, as summarized in
Figure 1. Firstly, all the scanning points are separated into many clusters by invoking the adaptive
breakpoint detector (ABD) [28]. The main idea of the ABD is to find the breakpoints in each scan,
where the scanning direction of the LiDAR is counterclockwise and continuous. Therefore, by detecting
breakpoints, the algorithm can determine whether there exists a discontinuity between two consecutive
scanning points. However, the threshold for breakpoint determination should be adaptive with
respect to the scanning distance, which is presented in the following subsection.

Figure 1. Flow chart of feature extraction.

2.2. A Novel Method for Finding Clusters


Consider that, for each scan, there exist n beams. The radius for each beam is denoted as r_i,
where i = 1, · · · , n. To enhance the clustering performance, a simple and straightforward adaptive
radius clustering (ARC) algorithm is developed as follows:

∆S = R × ∆θ = R × (α × 2π/360), R = min(r_i, r_{i−1}) (1)

λ = N × ∆S (2)

where ∆S denotes an adaptive arc length between points, α is the angular resolution provided by
the LiDAR specification, N is a design scaling factor relative to the LiDAR noise level, and λ is the threshold
for clustering. Therefore, if the distance between two scanning points in the same scan is larger than λ,
those points are divided into two different clusters; that is,

‖p_i − p_{i−1}‖ > λ (3)

Based on the ARC, the scan shown in Figure 2b can be clustered into two groups. Otherwise, it may
be divided into several segments due to the radius variations, as illustrated in Figure 2a. Therefore,
the ARC algorithm is able to separate clusters according to the LiDAR’s measurement characteristics.

After utilizing the ARC algorithm, clusters with fewer scanning points are treated as outliers.
Clusters with low-density points that cannot provide reliable information are discarded before
the feature extraction step.
Figure 2. Illustration of the adaptive radius clustering (ARC) algorithm. (a) without adaptive radius;
(b) with adaptive radius.
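To make the clustering rule concrete, the following sketch shows one way Equations (1)–(3) can be implemented. It is an illustration written for this paper’s notation under stated assumptions (NumPy, beam angles sorted counterclockwise, and the Section 4 values N = 15 and a 5-point outlier cutoff), not the authors’ released code.

```python
import numpy as np

def arc_clusters(ranges, angles, N=15, min_points=5):
    """Adaptive radius clustering (ARC) sketch following Eqs. (1)-(3).

    ranges     : (n,) array of beam radii r_i from one LiDAR scan
    angles     : (n,) array of beam angles in radians, sorted CCW
    N          : scaling factor relative to the LiDAR noise level
    min_points : clusters smaller than this are treated as outliers
    """
    # Cartesian scan points p_i
    pts = np.column_stack((ranges * np.cos(angles), ranges * np.sin(angles)))

    clusters, current = [], [0]
    for i in range(1, len(ranges)):
        R = min(ranges[i], ranges[i - 1])        # R = min(r_i, r_{i-1})
        dS = R * (angles[i] - angles[i - 1])     # adaptive arc length, Eq. (1);
                                                 # angle step in rad equals alpha*2*pi/360
        lam = N * dS                             # clustering threshold, Eq. (2)
        if np.linalg.norm(pts[i] - pts[i - 1]) > lam:   # breakpoint test, Eq. (3)
            clusters.append(current)
            current = []
        current.append(i)
    clusters.append(current)

    # Discard low-density clusters before the feature extraction step
    return [pts[c] for c in clusters if len(c) >= min_points]
```

Because the threshold λ grows with the measured radius R, distant walls with naturally wider point spacing are not over-segmented, which is the behavior contrasted in Figure 2a,b.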
2.3. Split and Merge for Corner and Line Extractions
Comparing a few of the feature extraction methods in different aspects, including the speed of
the algorithm, correctness, and so on, the split and merge [15,28,29] is considered for feature
extraction in this research. The algorithm is capable of extracting corner feature points in the
environment, which can then be taken as stable and reliable feature points.

The splitting procedure also combines the idea of Iterative End Point Fitting (IEPF) [30,32],
which is a recursive algorithm for extracting features. For a cluster Ω = { (x_i, y_i) | i = 1, · · · , k }, the split and
merge algorithm connects the first point and the last point to form a straight line. The algorithm then
calculates the deviations from all the points to this line. If there is a point whose corresponding maximum
distance is larger than a predefined deviation threshold d_c, this point is labeled as a feature point and
it further splits the cluster into two subsets. The feature point P_c(x_i, y_i) ∈ Ω can be determined by the
following rule:

P_c(x_i, y_i) ← d := argmax_{P_c ∈ Ω} |a x_i + b y_i + 1| / √(a² + b²) > d_c (4)

where a, b are the coefficients of a line equation L : ax + by + 1 = 0, which is composed of P_1 and P_k.
Figure 3 demonstrates the main process of the split and merge algorithm. The algorithm recursively
splits the set of points into two subsets until the condition, Equation (4), is not satisfied.

Figure 3. Illustration of the split and merge scheme.
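As an illustration of the recursive splitting rule in Equation (4), the sketch below labels a corner whenever the maximum point-to-line deviation exceeds d_c and then recurses on the two subsets. It is a minimal interpretation of the split step only (the merge step is omitted), and the function name and the 10 cm default are assumptions taken from Section 4.

```python
import numpy as np

def split_corners(cluster, d_c=0.10):
    """Recursive split step (IEPF flavor) following Eq. (4).

    cluster : (k, 2) array of points, ordered along the scan
    d_c     : deviation threshold in meters (10 cm as in Section 4)
    Returns the indices of detected corner feature points.
    """
    p1, pk = cluster[0], cluster[-1]
    # Line L: ax + by + 1 = 0 through the endpoints P_1 and P_k
    A = np.array([p1, pk])
    try:
        a, b = np.linalg.solve(A, -np.ones(2))
    except np.linalg.LinAlgError:
        return []  # degenerate endpoints (line through the origin); skip
    dists = np.abs(cluster @ np.array([a, b]) + 1.0) / np.hypot(a, b)
    i_max = int(np.argmax(dists))
    if dists[i_max] <= d_c:        # Eq. (4) not satisfied: stop splitting
        return []
    # Corner found: split into two subsets and recurse on each
    left = split_corners(cluster[: i_max + 1], d_c)
    right = [i_max + j for j in split_corners(cluster[i_max:], d_c)]
    return left + [i_max] + right
```

The points between two consecutive corner indices then form the line segments, whose endpoints and center points serve as the line features used later by the WP-ICP.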

3. The Proposed Method: WP-ICP

3.1. Pose Estimation Algorithm

SLAM is considered to be a chicken-and-egg problem, since precise localization needs a reference
map, and a good mapping result comes from a correct estimation of the robot pose [33]. To achieve
SLAM, these two issues must be solved simultaneously [34]. However, in this work, only localization
is considered. Therefore, by assuming that a complete layout of the environment is given in advance,
a novel scan matching algorithm is presented and will be introduced in the next subsection.

Supposing that the correspondences are already known, the pose estimation can be considered as
an estimate of a rigid body transform, which can be solved efficiently via the singular value decomposition
(SVD) technique [35].

Let P = {p_1, p_2, · · · , p_N} be the data set from a current scan and Q = {q_1, q_2, · · · , q_N} be a model set
received from a given layout. The goal is to find a rigid body transformation pair (R, t) such that the
best alignment can be achieved in the least-error sense. It can be stated as

(R, t) = argmin Σ_{i=1}^{N} w_i ‖R p_i + t − q_i‖² (5)

where w_i are the weights for each point pair. The optimal translation vector can be calculated by

t = q̄ − R p̄ (6)

where

p̄ = Σ_{i=1}^{N} w_i p_i / Σ_{i=1}^{N} w_i,  q̄ = Σ_{i=1}^{N} w_i q_i / Σ_{i=1}^{N} w_i (7)

can be taken as the weighted centroids for the data set and the model set, respectively.

Let x_i = p_i − p̄ and y_i = q_i − q̄. Consider also the matrices X, Y, and W, which are defined by
X = [x_1 · · · x_N], Y = [y_1 · · · y_N], and W = diag(w_1, · · · , w_N), respectively.
Defining S = XWYᵀ and then applying the singular value decomposition (SVD) on S yields

S = UΣVᵀ (8)

where U and V are unitary matrices, and Σ is a diagonal matrix. It has been proved that the optimal
rotation matrix is available by considering

R = VUᵀ (9)

Based on Equation (9), the translational vector given in Equation (6) can also be solved.
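The weighted SVD solution of Equations (5)–(9) can be written compactly as follows. This is a sketch rather than the authors’ implementation; the determinant guard before Equation (9) is a standard safeguard against reflections that the text does not spell out.

```python
import numpy as np

def weighted_rigid_transform(P, Q, w):
    """Weighted least-squares rigid alignment, Eqs. (5)-(9).

    P : (N, 2) data-set points p_i (current scan features)
    Q : (N, 2) model-set points q_i (matched layout features)
    w : (N,) nonnegative weights; a zero weight rejects a pair
    Returns (R, t) with R a 2x2 rotation matrix and t a vector.
    """
    w = np.asarray(w, dtype=float)
    p_bar = (w[:, None] * P).sum(0) / w.sum()   # weighted centroids, Eq. (7)
    q_bar = (w[:, None] * Q).sum(0) / w.sum()
    X, Y = (P - p_bar).T, (Q - q_bar).T          # centered point matrices
    S = X @ np.diag(w) @ Y.T                     # S = X W Y^T
    U, _, Vt = np.linalg.svd(S)                  # S = U Sigma V^T, Eq. (8)
    D = np.diag([1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # reflection guard
    R = Vt.T @ D @ U.T                           # R = V U^T, Eq. (9)
    t = q_bar - R @ p_bar                        # Eq. (6)
    return R, t
```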
According to the ARC, the split–merge algorithm, and the ICP, the procedure of the
corner-feature-based ICP pose estimation is summarized in Figure 4. To reject outliers during
the feature point registration, the weightings w_i can be designed according to the Euclidean distance,
and the values for certain unreasonable feature pairs can be set to zero.

Figure 4. Corner-feature-based pose estimation.

3.2. A Weighted Parallel ICP Pose Estimation Algorithm

An environment that has a similar layout generally results in good estimates. However, in practice,
this is not always the case. To be practically feasible, the proposed algorithm
needs to be robust enough even in the presence of environment uncertainties such as moving people.

Based on the results presented in Sections 2 and 3, a WP-ICP is proposed. The WP-ICP considers
two features for point clouds that are pre-processed: one is the corner feature and the other is the line
feature. Corners are important feature points in the environment as they are distinct, whereas walls
are stable feature points and a good candidate for feature extraction in structured environments [15,31,36].
Taking advantage of LiDAR for detecting surroundings, walls can be represented as line segments,
which are composed of two corners. Furthermore, we also considered the center point of a line segment
as another matching reference point.

The motivation for such features is that indoor environments, e.g., offices and buildings, are
generally well structured. Therefore, feature-based localization is suitable for such environments.
Examining the traditional full-points ICP algorithm, point cloud registration is achieved by means of
the nearest neighbor search (NNS) concept. There is no further information attached to those points.
Therefore, it is easy to obtain incorrect correspondences, as shown in Figure 5. Under this circumstance,
several iterations are usually needed to converge the point registration. In addition, the ICP gives rise
to an obvious time cost for registration, especially when the size of the point cloud is large. To solve the
incorrect correspondence and iteration time cost issues, a feature-based point cloud reduction method
is developed.

For the proposed WP-ICP, the point cloud is first characterized by fewer corners and lines. This
has two main advantages: (a) First, the size of the point cloud is reduced significantly and thus
enhances the ICP speed. (b) Second, the polished points are labeled as corner or line features. Only
those points labeled as corners can be matched to the corner candidates; similarly, only those points
labeled as lines can be matched to the line candidates. The result is shown in Figure 6: Figure 6a
illustrates the matching condition for corners, while Figure 6b represents the matching condition for
lines. The parallel ICP results in correct point registration and thereby reduces the number
of iterations.

Figure 5. Incorrect point registration caused by the nearest neighbor search (NNS) Iterative Closest
Point (ICP) algorithm.

Figure 6. Illustration of feature-based point cloud registration. (a) Corner features are matched to
corresponding corner features; (b) Line features are matched to corresponding line features.
The main advantage of the WP-ICP is that the number of data points in the data set as well as the
model set can be significantly reduced. On the contrary, the full-points ICP algorithm includes all
data points for correspondence searching and thus leads to low computation efficiency.

Moreover, in the proposed WP-ICP, the scan points are clustered into the corner or the line
groups, respectively. Based on the parallel mechanism, the points from a corner set can never be
matched with the points from a line set. It can thus avoid mismatching during the point registration.
However, for the full-points ICP, many mismatches could happen once the distances between those two
point sets are close enough.

Since the WP-ICP runs two ICP processes at the same time, it generates two pairs of robot poses,
namely (R_C, t_C) and (R_L, t_L). Therefore, it is desired to fuse these two poses to come out with a more
confident pose estimate.

The criterion for the confidence evaluation is designed as follows:

Γ_C = (# of matched corner feature points) / (# of total corner feature points),
Γ_L = (# of matched line feature points) / (# of total line feature points) (10)

where Γ_C, Γ_L represent the confidences and can be treated as fusion weights for the corner feature ICP and
the line feature ICP, respectively.
The final step is to calculate a fused pose estimate (R_fused, t_fused) as follows:

R_fused = αR_C + (1 − α)R_L, t_fused = αt_C + (1 − α)t_L, α = Γ_C / (Γ_C + Γ_L) (11)

where the heading rotation angle is used to obtain the R_fused.
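A minimal sketch of the confidence fusion in Equations (10) and (11) is given below, assuming 2D rotation matrices. Since the text states that the heading rotation angle is used to obtain R_fused, the sketch blends the two heading angles rather than the matrices themselves; the function name and argument layout are assumptions.

```python
import numpy as np

def fuse_poses(R_c, t_c, n_matched_c, n_total_c,
               R_l, t_l, n_matched_l, n_total_l):
    """Fuse corner-ICP and line-ICP poses via Eqs. (10)-(11)."""
    gamma_c = n_matched_c / n_total_c          # corner confidence, Eq. (10)
    gamma_l = n_matched_l / n_total_l          # line confidence, Eq. (10)
    alpha = gamma_c / (gamma_c + gamma_l)      # fusion weight, Eq. (11)

    # Blend the headings extracted from each 2D rotation matrix
    # (angle wrap-around near +/- pi is ignored for brevity)
    th_c = np.arctan2(R_c[1, 0], R_c[0, 0])
    th_l = np.arctan2(R_l[1, 0], R_l[0, 0])
    th = alpha * th_c + (1.0 - alpha) * th_l
    R_fused = np.array([[np.cos(th), -np.sin(th)],
                        [np.sin(th),  np.cos(th)]])
    t_fused = alpha * np.asarray(t_c) + (1.0 - alpha) * np.asarray(t_l)
    return R_fused, t_fused
```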
Since the WP-ICP provides two sources of feature points, the real-time LiDAR scanning points
are separated into two groups, corners and lines. Each group is then matched
with its corresponding features. Based on this parallel matching mechanism, serious mismatching can
be avoided, resulting in improved localization stability and precision. The flow chart of the WP-ICP is
illustrated in Figure 7.

Figure 7. The weighted parallel iterative closest point (WP-ICP) pose estimation algorithm.

The practical benefits of the proposed WP-ICP include the following: (a) computation effort is
reduced when KD-Trees are built and nearest point registration is applied; (b) correct point cloud
registration significantly reduces ICP iterations, enabling a fast robot pose estimate; (c) the robot pose is
determined by feature-based ICP fusion of two features (corner and line), which makes the estimate
less sensitive to uncertain environments.
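Putting the pieces together, the following sketch outlines one possible WP-ICP iteration loop in the spirit of Figure 7, reusing the helper functions sketched above. The 0.5 m gate in match() mirrors the 50 cm correspondence rejection described in Section 4, and SciPy’s cKDTree stands in for the KD-Tree mentioned in the text; empty-match handling and convergence checks are omitted for brevity.

```python
import numpy as np
from scipy.spatial import cKDTree

def match(features, model_tree, model_pts, gate=0.5):
    """NNS correspondences with a 50 cm rejection gate (Section 4)."""
    d, idx = model_tree.query(features)
    keep = d < gate
    return features[keep], model_pts[idx[keep]], int(keep.sum())

def wp_icp(corner_data, line_data, corner_model, line_model, iters=10):
    """Parallel corner/line ICP with confidence fusion (Figure 7 sketch)."""
    trees = cKDTree(corner_model), cKDTree(line_model)
    Rf, tf = np.eye(2), np.zeros(2)
    for _ in range(iters):
        # Two ICP correspondence searches run on separate feature groups
        pc, qc, mc = match(corner_data, trees[0], corner_model)
        pl, ql, ml = match(line_data, trees[1], line_model)
        Rc, tc = weighted_rigid_transform(pc, qc, np.ones(len(pc)))
        Rl, tl = weighted_rigid_transform(pl, ql, np.ones(len(pl)))
        # Fuse the two pose pairs by their matching confidences
        R, t = fuse_poses(Rc, tc, mc, len(corner_data),
                          Rl, tl, ml, len(line_data))
        # Apply the incremental transform and accumulate the pose
        corner_data = corner_data @ R.T + t
        line_data = line_data @ R.T + t
        Rf, tf = R @ Rf, R @ tf + t
    return Rf, tf
```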
4. Experiments and Discussions
For the experiments, a Hokuyo UST-20LX Scanning Laser Rangefinder is used, where a 20 Hz scan
rate and a 15 m scan distance are applied. The scanning angle resolution is 0.25°. Based on these settings,
the maximum number of scanning points is 1081 points per scan; that is, 20 × 1081 = 21,620 points per second.
The experiment system is shown in Figure 8, which includes (1) a portable LiDAR module (the upper
half part) and (2) a mecanum wheeled robot (the lower half part). In this work, since the LiDAR is
considered as the single sensor, the robot vehicle is only taken as a moving platform. There is no
communication between the LiDAR module and the vehicle. The maximum moving speed of the
vehicle in the following experiments is restricted to 50 cm/s.

Figure 8. A LiDAR-based portable module and a moving platform.



Based on the resolution and testing results, N in Equation (2) is set to 15. With regard to
the ARC, clusters containing fewer than 5 points are considered as outliers and are removed before the
WP-ICP is applied. The distance d_c = 10 cm is used for the split and merge process. For each iteration,
the weights for the ICP will be set to zero if the distances between the points’ correspondences are greater
than 50 cm. This threshold is determined in accordance with the maximum moving speed of the robot.

To ensure the WP-ICP algorithm is feasible, an experiment was first carried out in a clear
environment with no obstacles, which is shown in Figure 9. The area of the testing environment was
about 5 m × 6 m. Two experiments were carried out in this environment: one guides
the vehicle along a rectangular path, and the other moves the vehicle randomly.

Figure 9. Scene-1: a robot moves in a clear environment with no obstacles. (a) Experimental environment;
(b) Robot is moved in a guided rectangular path; (c) Robot is moved randomly.
Considering the full-points-based ICP result (shown in the black line) as the ground truth, using
the corner feature only (shown in the red line) is able to produce an accurate pose estimate in a clear
environment, as illustrated in Figure 10. Figure 11 shows the deviation comparisons of the pose
estimation, and the average deviations are 3.06786 and 4.29678 cm, respectively.

Figure 10. Different algorithms for the first/second experiments. (a) Exp-1; (b) Exp-2.
Figure 11. Pose estimation deviation comparison of the first/second experiments. (a) Exp-1; (b) Exp-2.
Moreover, to compare the real-time estimation capabilities between the full-points ICP and the
corner-feature-based ICP, the total number of ICP iterations at each LiDAR scan loop is addressed.
Fewer ICP iterations are helpful for real-time realization. Figure 12 shows the number of ICP iterations
between the different algorithms for the different experiments. It should be noted that, when using corners
as the only feature, the maximum iteration loop in the ICP was no more than 4 (two iterations on
average). However, using the full-points ICP results in more than 25 iterations on average. In this
environment, the WP-ICP was also applied. The localization performance is very close to the one
conducted by the corner-based ICP. Therefore, the main advantage of the WP-ICP is not obvious
under this clear environment.

Figure 12. ICP performances of the first/second experiments. (a) Exp-1; (b) Exp-2.

To illustrate the superior pose estimation robustness against the corner-based ICP, another
experiment was carried out on the 3rd floor of the Department of Aeronautics and Astronautics,
NCKU, shown in Figure 13. The area is about 15 × 40 m² and contained different sized objects like
flowerpots and a water-cooler, as shown in Figure 14, which can be taken as unknown disturbances for
pose estimation. These objects were added later in the environment, and hence can be used to test the
feasibility and robustness of the proposed WP-ICP algorithm in dynamic environments.

Figure 13. Scene-2: the 3rd floor, Department of Aeronautics and Astronautics, NCKU.
Figure 14. DAA-3F local snapshot with new objects to test the robustness of the algorithm. (a) New objects
like water-cooler added to the environment; (b) New objects like flowerpots added to the environment.
In the experiment, the traveling path of the vehicle goes counterclockwise on the 3rd floor. For the
transient localization behavior, one can refer to Figure 15, where the upper left subplot and the lower left
subplot show that all the points in the data set (from a LiDAR scan) and the model set (from a layout) have
already been labeled as corner features and line features, respectively. Those two feature groups are fed
into the parallel ICP process and finally fused into a single robot pose. Facing an area that differs
from the layout and passing through a straight corridor, the corner-feature-based ICP algorithm leads
to apparent localization deviations. Figure 16 shows long-range localization results at DAA-3F under
the use of different algorithms.

Figure 15. Localization by the WP-ICP with 10 cm interpolation. Upper left corner subplot: corner
features. Lower left corner subplot: line features. Right subplot: layout and estimated robot trajectory.
Figure 16. Experiments under different algorithms in the DAA-3F. (a) Complete view; (b) A corridor area;
(c) Area with unknown objects.

Examining Figure 16 again, since the WP-ICP provides both corner and line features as scan
matching correspondences, theoretically the results are supposed to be better than those conducted
by the corner-based ICP. However, Figure 16b indicates that there are still estimation errors when
passing through the corridor. This is because of the fewer line features in that area. To further improve the
estimation robustness, an interpolation on the line features is further integrated into the WP-ICP.
Note that the interpolation is used to increase the number of line feature points for every 10 cm. The result
(i.e., the purple line) shows a good estimate when utilizing the WP-ICP algorithm with interpolation.
Finally, the improvement can also be found in locations subject to static unknown objects as shown
in Figure 16c, where the corresponding snapshots are shown in Figure 14a,b, respectively.
Examining Figure 16 again: since the WP-ICP provides both corner and line features as scan-matching correspondences, its results should theoretically be better than those obtained with the corner-based ICP. However, Figure 16b indicates that there are still estimation errors when passing through the corridor, because that area contains few line features. To further improve the estimation robustness, an interpolation on the line features is integrated into the WP-ICP; the interpolation increases the number of line feature points to one every 10 cm. The result (i.e., the purple line) shows a good estimate when utilizing the WP-ICP algorithm with interpolation. Finally, the improvement can also be found in locations subject to static unknown objects, as shown in Figure 16c; the corresponding snapshots are given in Figure 14a,b, respectively. The results of the WP-ICP with interpolation demonstrate its robustness against external environment uncertainty. The computation efficiency of the different algorithms is detailed in Figure 17. Compared to the full-points ICP, using corner features only improves speed by about 50 times on average, while the WP-ICP (with interpolation) improves speed by about 5 times. However, applying corner features only can sometimes lead to unstable estimates. Therefore, the WP-ICP makes a compromise between computation speed and localization accuracy.
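The corridor case deserves one extra remark. With only collinear points in view, any point-based matcher is geometrically under-constrained: sliding the scan along the wall leaves the nearest-neighbour cost unchanged, so the along-corridor translation is unobservable. A toy numpy check (purely illustrative, not from the paper) makes this visible:

```python
import numpy as np

# A corner-free corridor wall sampled every 0.1 m along the x-axis.
wall = np.column_stack([np.linspace(0.0, 10.0, 101), np.zeros(101)])

def nn_cost(scan, model):
    """Sum of squared nearest-neighbour distances (the ICP objective)."""
    d = np.linalg.norm(scan[:, None, :] - model[None, :, :], axis=2)
    return float((d.min(axis=1) ** 2).sum())

scan = wall[20:80]                     # a scan of part of the same wall
for dx in (0.0, 0.5, 1.0):             # slide the scan along the corridor
    print(dx, nn_cost(scan + np.array([dx, 0.0]), wall))
# The cost stays numerically zero for every shift: the along-corridor
# translation cannot be recovered from collinear points alone.
```

This suggests why the interpolation helps only partially in Figure 16b: the densified line points stabilize the cross-corridor fit and reduce mismatches, while the along-corridor direction still depends on whatever corner features the scan retains.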

Figure 17. ICP performance comparison of the third experiment.

Finally, it is worth discussing the localization performance under different interpolation sizes when applying the WP-ICP. In this study, the layouts of the localization environment are given in terms of a few discrete data points. Therefore, the resolution of the layouts can be further enhanced by using interpolation. For each LiDAR scan, the interpolation can be achieved by manipulating the raw data directly. The simplest way is to apply a divider to each segment, i.e., to insert equally spaced points along it.
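A minimal sketch of such a divider is given below, under the assumption that a layout (or an extracted scan line) is stored as straight segments with endpoints in millimetres; the function name, units, and default step are illustrative:

```python
import numpy as np

def interpolate_segment(p0, p1, step=100.0):
    """Densify one straight segment by inserting points roughly every
    `step` millimetres (100.0 mm = 10 cm); endpoints are preserved."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    n = max(int(np.ceil(np.linalg.norm(p1 - p0) / step)), 1)
    ts = np.linspace(0.0, 1.0, n + 1)
    return p0 + ts[:, None] * (p1 - p0)

# Example: a 1 m wall segment at 10 cm resolution yields 11 points.
points = interpolate_segment([0.0, 0.0], [1000.0, 0.0], step=100.0)
```

Applying this to every segment raises the line-feature density to the chosen resolution (e.g., one point per 10 cm) without touching the corner features.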
Based on the previous experiments, it is obvious that the WP-ICP with interpolation has the minimum error compared with the corner-feature-based ICP and the pure WP-ICP. In the following, 10 cm and 50 cm interpolation resolutions are further considered for the same DAA-3F experiment. Figure 18 verifies that increasing the interpolation step size does increase the localization error; however, the number of ICP iterations can be significantly reduced. The average numbers of iterations of the WP-ICP with 50 cm and 10 cm interpolations are 7.063 and 11.7577, respectively. As a result, the interpolation resolution can be taken as a trade-off design factor between computation efficiency and localization precision.
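The trade-off is easy to see in terms of model-set size. A back-of-the-envelope count for a hypothetical rectangular room (the dimensions and the helper below are illustrative, not taken from the experiments):

```python
# Rough model-set size for a layout of straight wall segments at a given
# interpolation step (mm); each segment of length L yields about L/step + 1 points.
def model_size(segment_lengths_mm, step_mm):
    return sum(int(l // step_mm) + 1 for l in segment_lengths_mm)

walls = [20000.0, 20000.0, 8000.0, 8000.0]   # a notional 20 m x 8 m room
print(model_size(walls, 100.0))   # 10 cm step -> 564 points
print(model_size(walls, 500.0))   # 50 cm step -> 116 points
```

Roughly five times more model points at the 10 cm step means more work per nearest-neighbour query, which is consistent with the coarser step trading localization accuracy for cheaper matching and fewer iterations.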

Figure 18. Pose estimation deviation of different interpolations.

A comparison study from the viewpoint of the ICP iteration count is summarized in Table 1. It is clear that the iteration performances obtained by the corner-based ICP and the WP-ICP are noticeably better than that of the full-points ICP. To overcome the challenge of unknown objects during point cloud matching, the WP-ICP with interpolation was further introduced, and it was shown that the localization robustness can be significantly enhanced. Although the number of ICP iterations increases, the increment is still acceptable for real-time consideration.

Table 1. ICP iteration performance.

Experimental Environment   Pose Estimation Algorithm          Average ICP Iterations
Scene-1 Exp.1              Full-Points ICP                    25.36
                           Corner-Based ICP                   2.06
                           WP-ICP                             2.06/2.04 (corner/line)
Scene-1 Exp.2              Full-Points ICP                    30.10
                           Corner-Based ICP                   2.04
                           WP-ICP                             2.05/2.04 (corner/line)
Scene-2                    Full-Points ICP                    49.92
                           Corner-Based ICP                   2.49
                           WP-ICP                             2.61/2.85 (corner/line)
                           WP-ICP with 10 cm Interpolation    2.87/9.25 (corner/line)

Finally, to further demonstrate the robustness of the proposed WP-ICP, we considered another experiment in the same environment (as illustrated in Figure 19), but with five people walking around the vehicle. The practical scenes are shown in Figure 20. Firstly, to generate a ground truth for comparison, the full-points ICP is considered, which leads to good localization results, as illustrated in Figure 21. Because many unknown moving objects do not exist in the given layout, those obstacles result in many outliers. Under this condition, using corner features only would cause divergent localization, as demonstrated in Figure 22. On the contrary, as shown in Figure 23, applying the WP-ICP together with interpolation presents satisfactory localization results without inducing divergent behavior, even in the presence of unknown moving objects. For the WP-ICP under different interpolation resolutions, the results are depicted in Figure 24a,b, respectively, and the performance details are given in Table 2. The experiments verify that the WP-ICP can withstand dynamic uncertainties while producing satisfactory localization results with fewer iterations.
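Although the outlier handling is not spelled out here, one common way to keep such pedestrians from corrupting the least-squares fit is to gate correspondences by distance before estimating the transform; the sketch below is an assumption-labeled illustration (the 300 mm threshold and the names are hypothetical):

```python
import numpy as np
from scipy.spatial import cKDTree

def gated_correspondences(scan, model, gate=300.0):
    """Pair each scan point with its nearest model point, then drop pairs
    whose distance exceeds `gate` (mm) -- likely outliers such as points
    on pedestrians that are absent from the layout."""
    scan, model = np.asarray(scan, float), np.asarray(model, float)
    d, idx = cKDTree(model).query(scan)
    keep = d < gate
    return scan[keep], model[idx[keep]]
```

Scan points on moving people find no nearby counterpart in the layout, so a gate of a few hundred millimetres discards most of them while retaining the wall and corner matches.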

Figure 19. ICP performance of different interpolations.

Figure 20. An environment with unknown moving objects. (a) Random snapshot 1. (b) Random snapshot 2.
Figure 21. Localization result by using full-points ICP. (a) Point registration. (b) Robot trajectories.
Figure 22. Localization result by using corner features only (where the line information was not fused for the localization).

Figure 23. Localization result using the WP-ICP with 20 cm interpolation.
Figure 24. Localization results and ICP iterations. (a) Localization results. (b) ICP iterations.

Table 2. Localization results for the environment subject to unknown moving objects.

Localization Algorithm                 Model Set Size (Points)   Average ICP Iterations   Total ICP Iterations   Local. Error (mm) Avg/Max
Full-Points ICP (Point-to-Point ICP)   1991                      35                       86687                  taken as ground truth
WP-ICP with 10 cm Interpolation        213                       13                       30838                  53.7/261.3
WP-ICP with 20 cm Interpolation        114                       9                        22981                  60.5/264.6
5. Conclusions

In this work, to solve the mismatched point cloud registration problem and to enhance ICP efficiency, a parallel feature-based indoor localization algorithm is proposed. In the traditional ICP algorithm, the point cloud registration is achieved by means of NNS, and no other information is attached to the points, so the matching is prone to incorrect correspondences. On the contrary, we present a novel WP-ICP algorithm that attaches more information to the polished point cloud. The WP-ICP consists of two ICP sources: one is corner features and the other is line features. Owing to the parallel mechanism, it attenuates the probability of mismatching corners to lines or lines to corners. As a result, the proposed algorithm achieves faster convergence for pose estimation. Moreover, since the full scan points are processed to extract far fewer feature points, it also enhances the ICP computation efficiency and is therefore suitable for low-cost CPUs. For environments that possess fewer feature points, the WP-ICP together with a line interpolation is further verified. Environments subject to static and dynamic unknown moving objects were also considered to verify the feasibility and robustness of the proposed method.

Acknowledgments: This research is supported by the Ministry of Science and Technology under grant numbers
MOST 106-2221-E-006-138 and MOST 107-2218-E-006-004.
Author Contributions: Y.T. Wang implemented the ideas and performed the experiments; C.C. Peng conceived
and designed the algorithm; Y.T. Wang and C.C. Peng wrote the paper. A.A. Ravankar and A. Ravankar polished
the paper and provided valuable concepts and comments.
Conflicts of Interest: The authors declare no conflict of interest.


© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access
article distributed under the terms and conditions of the Creative Commons Attribution
(CC BY) license (http://creativecommons.org/licenses/by/4.0/).
