
Environment

Z. J. Chong1 , B. Qin1 , T. Bandyopadhyay2 , M. H. Ang Jr.1 , E. Frazzoli3 , D. Rus3

Abstract: This paper presents a precise localization algorithm for vehicles in a 3D urban environment, using only one 2D

LIDAR and odometry information. A novel idea of synthetic

2D LIDAR is proposed to solve the localization problem on

a virtual 2D plane. A Monte Carlo Localization scheme is

adopted for vehicle position estimation, based on synthetic

LIDAR measurements and odometry information. The accuracy

and robustness of the proposed algorithm are demonstrated by

performing real-time localization in a 1.5 km driving test around

the NUS campus area.

I. INTRODUCTION

Mobile robot localization is the problem of determining

the pose of a robot relative to a given map of the environment

[28]. It is a fundamental requirement to realize vehicle autonomy. This problem has been well solved for indoor robots on

a planar surface. However, there are still many challenges to

get an accurate, robust, yet low-cost approach for vehicle

localization in an outdoor 3D urban environment. Together

with localization comes its concomitant problem of mapping.

A map is an abstract representation of the environment,

which usually serves as a prior for robot localization. How

to map the 3D environment and what representation to use

are additional questions that must be addressed. This paper

introduces a novel idea of synthetic LIDAR, which constructs

synthetic 2D scan from 3D features, and solves the localization and mapping problem in a 2D manner. The algorithm is

developed under the idea of minimal sensing, using only one

tilted-down single-layer LIDAR, and odometry information.

The fusion of Global Positioning System (GPS) and Inertial Navigation System (INS) to estimate vehicle position

has been the most popular localization solution in recent

years [18], [21], [4]. This solution works well in open areas;

however, it is not suitable for dense urban environment

where GPS signals severely suffer from satellite blockage

and multipath propagation caused by high buildings [14].

Road-matching algorithms are then proposed to alleviate this

problem, where a prior road map is used as either additional

motion constraint or observation to update the localization

estimation [6], [7]. While this solution achieves good global

localization, it is not designed for precise estimation relative

to the local environment, an ability that is highly desirable

in many cases.

1 Z. J. Chong, B. Qin, M. H. Ang Jr. are with the National University

of Singapore, Kent Ridge, Singapore {chongzj, baoxing.qin,

mpeangh} at nus.edu.sg

2 T. Bandyopadhyay is with the Singapore-MIT Alliance for Research and

Technology, Singapore tirtha at smart.mit.edu

3 E. Frazzoli and D. Rus are with the Massachusetts Institute of Technology, Cambridge, MA., USA frazzoli at mit.edu, rus at

csail.mit.edu

978-1-4673-5643-5/13/$31.00 © 2013 IEEE

Other approaches achieve precise localization using local features. In [9], single-side curb

features are extracted by a vertical LIDAR to build a

boundary map to improve vehicle localization. This map is

learned beforehand in the form of line segments. In [26], lane

markers serve as local features, which are extracted from

reflectivity values of LIDAR scans. A digital lane marker

map is used as prior. The performance of the algorithm

is similar to that of [9]. While these algorithms reduce

lateral localization error considerably, they help little in the

longitudinal direction.

Levinson et al. in [16], [17] utilize road surface reflectivity

for precise localization. A particle filter is used to localize

the vehicle in real time with a 3D Velodyne LIDAR. The

algorithm first analyses the laser range data and extracts those

points cast on the ground. Then reflectivity measurements of

these points are correlated to a map of ground reflectivity

to update particle weights. One assumption underlying this

algorithm is that road surfaces remain relatively constant,

which may undermine the performance in some cases. Moreover, the need for a costly 3D LIDAR sensor limits its usage.

Baldwin et al. in [2] utilize accumulated laser sweeps as local features. The algorithm first generates a swath of laser data by accumulating 2D laser scans from a tilted-down LIDAR. Then the swath is matched to a prior 3D survey by minimizing an objective function. This algorithm

demonstrates its accuracy and robustness in GPS-denied

areas. Although the algorithm proposed does not require an

accurate 3D model of the environment, we argue that an

accurate and consistent prior is always desired when the

localization is integrated with other navigation functions.

Similarly in [31], [15], a 3D point cloud of the environment

is obtained by servoing a 2D LIDAR, and a reduced 2D

feature is used to perform localization. This algorithm has

been shown to work well in an indoor environment with a

well-structured ceiling features. In [5], a microwave radar

sensor is used to perform SLAM. While the radar has the

ability to see through obstacles, association of radar targets

is a complex task and SLAM processing is done offline.

This paper proposes a novel notion of synthetic LIDAR

to solve the 3D localization problem in a 2D manner. The

synthetic LIDAR is constructed in real time with interest

points extracted from a 3D rolling window. The basic assumption is that many surfaces in the urban environment

are rectilinear in the vertical direction. The interest points

are extracted from the rectilinear surface, and then projected

on a virtual horizontal plane to form a synthetic LIDAR.

The synthetic LIDAR serves as a bridge between the real 3D world and the virtual 2D plane. With the idea of synthetic LIDAR, algorithms for 2D

localization can be easily adapted to the 3D problem. This

paper develops a Monte Carlo Localization algorithm with

the notion of synthetic LIDAR, and demonstrates its accuracy

and robustness through experiments.

The rest of the paper is organized as follows. Section

II discusses the idea of projecting 3D world to 2D plane,

and gives an overview of the localization system. Section III

introduces the construction of synthetic LIDAR. The localization algorithm is shown in Section IV, while experimental

results and analysis are presented in Section V.

II. LOCALIZATION ON A VIRTUAL PLANE

A. Projecting 3D world to 2D plane

Robot localization on a planar surface has been studied for

decades and many algorithms have been proposed. The 2D

scan-matching algorithm may be the most popular choice due

to its accuracy and robustness [27]. However, it cannot be

directly applied for vehicles moving in the 3D world. Since

outdoor roads can be very hilly at times, laser points from

a planar LIDAR may cast on the road surface, rather than

the desired vertical objects, as discussed in [2]. Our previous

research in [22] utilizes a tilted-down LIDAR to extract road

boundary features on urban road, and then uses these features

for vehicle localization. Actually, there are many other salient

features in urban environment that can benefit localization.

What features to extract, how to extract them, and how to

feed them into the localization scheme are questions to be

further addressed. 3D range data is usually desired to extract

features for robots navigating in the 3D world [20]. In this

paper, we use a tilted-down LIDAR to generate 3D point

cloud of the environment in a push-broom configuration.

Rather than directly apply 3D scan-matching with the raw

data [2], we try to extract features from the 3D point cloud,

and use the vertical features for localization. The assumption

of our method is that the urban environment is rich in vertical surfaces, such as curbs, walls of buildings, and even vertical

tree trunks.

The vertical-world assumption is widely used in the literature. Harrison et

al. in [8] propose a method to generate high quality 3D laser

range data while the robot is moving. By exploiting the assumption of vertical world, useful information (e.g., roll and

pitch angles) can be inferred. Kohlbrecher et al. [13] achieve

2D SLAM and 6DOF pose estimation with only a single 2D

LIDAR and an IMU. Although not explicitly explained, the

underlying assumption in the work is that the environment contains many vertical surfaces. Weingarten et al. in [30] use this assumption to realize fast structured-environment

reconstruction. In this paper, since outdoor environments contain more arbitrarily shaped objects than structured indoor environments, a classification step has to be taken before using the vertical assumption. In the classification procedure,

laser points cast on the vertical surfaces are extracted based

on surface normal estimation. When the tilted-down LIDAR sweeps the environment, some vertical surfaces will be swept at different heights. If we take a bird's-eye view of this scanning process and project the vertical features onto a virtual horizontal plane, it is exactly the same as a robot with a horizontal LIDAR moving on a 2D surface. From a mathematical point of view, the vertical surface constrains how laser points at different heights should match with each other. With the above intuition, the idea

of synthetic LIDAR is proposed. A synthetic LIDAR is a

planar 2D LIDAR on the projected virtual plane, where the

end points of its laser beams are the projected points from

vertical surface in the 3D environment.

The idea of synthetic LIDAR helps to solve the 3D

localization problem on a 2D plane. Although a vehicle is

moving in the 3D world with 6DOF, generally speaking,

a ground-based vehicle is mostly interested in its 2D pose

vector (x, y, yaw). By projecting the 3D vertical features

onto a virtual plane, a 2D occupancy grid map can be used by marking those vertical features. This way, an a priori

map can be obtained using SLAM with the idea of synthetic

LIDAR. It should be clarified that our algorithm only applies

to an environment with only one vertical traversable level.

For cases with more traversable levels, some other 2.5D or

full 3D algorithms may be used, for example [12].
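To make the projection concrete, the following minimal sketch (not the paper's code; the set-based grid, the 0.2 m resolution, and the origin are illustrative assumptions) marks projected vertical-feature points into a 2D occupancy grid by simply dropping z:

```python
def mark_occupancy(grid, vertical_points, resolution=0.2, origin=(0.0, 0.0)):
    """Project vertical-surface points onto the virtual horizontal plane and
    mark the corresponding occupancy-grid cells; z is simply dropped."""
    ox, oy = origin
    for (x, y, _z) in vertical_points:
        i = int((x - ox) / resolution)  # cell index along x
        j = int((y - oy) / resolution)  # cell index along y
        grid.add((i, j))
    return grid

# A vertical-feature point at (1.05, 0.45, 3.0) marks cell (5, 2).
cells = mark_occupancy(set(), [(1.05, 0.45, 3.0)])
```

Because the z value is discarded, points from the same vertical surface at different heights collapse into the same grid cells, which is exactly what makes the map 2D.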

B. System Overview

The localization system mainly consists of two parts: 3D

perception to extract key feature points, and 2D localization

to solve the localization on the horizontal plane. The synthetic LIDAR serves as a bridge to connect the 3D world

and the 2D virtual plane, as shown in Fig. 1(a).

The system uses an IMU and a wheel encoder to provide

6DOF odometry information, a 2D tilted-down LIDAR to

provide laser scans, and an occupancy grid map serving as

a prior for localization. A simple dead reckoning is used

to obtain the odometry information. Assuming the distance measured by the wheel encoder at the n-th time step is r_n, and the rotation is given by a pitch \theta_n and a yaw \psi_n, the change in position of the vehicle is given by:

\begin{pmatrix} \Delta x_n \\ \Delta y_n \\ \Delta z_n \end{pmatrix} = \begin{pmatrix} \cos\theta_n \cos\psi_n \\ \cos\theta_n \sin\psi_n \\ \sin\theta_n \end{pmatrix} (r_n - r_{n-1}) \qquad (1)
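The dead-reckoning update of Eq. (1) can be sketched as follows (an illustration; the function name and the (x, y, z) tuple layout are assumptions, with pitch and yaw standing for the angles θ_n and ψ_n):

```python
import math

def dead_reckon_step(pos, r_prev, r_curr, pitch, yaw):
    """Advance the (x, y, z) position by the encoder increment r_curr - r_prev,
    projected through the current pitch and yaw as in Eq. (1)."""
    dr = r_curr - r_prev
    x, y, z = pos
    return (x + math.cos(pitch) * math.cos(yaw) * dr,
            y + math.cos(pitch) * math.sin(yaw) * dr,
            z + math.sin(pitch) * dr)

# Flat ground, heading along +x: 2 m of encoder travel moves the vehicle 2 m in x.
print(dead_reckon_step((0.0, 0.0, 0.0), 10.0, 12.0, 0.0, 0.0))  # (2.0, 0.0, 0.0)
```

Integrating these increments gives the 6DOF odometry frame in which the laser scans are later accumulated.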

The 3D perception assumes that the odometry system is accurate

enough in a short time period, and accumulates the laser

scans for 3D range data. A classification procedure is then

applied to extract interest points from the accumulated data.

The extracted laser points are then projected onto a virtual

horizontal plane (by ignoring their z values), and a synthetic

2D LIDAR is constructed. The 2D localization fuses the odometry information and measurements from the

synthetic 2D LIDAR in a Monte Carlo Localization scheme.

With a prior map of vertical features generated beforehand,

localization on the 2D horizontal plane is achieved.

III. 3D PERCEPTION

One of the requirements of the synthetic LIDAR is excellent adaptability to different types of environments. This section describes how interest points are extracted from a reconstructed environment model, before the synthetic LIDAR is built.

Fig. 1. System overview. A tilted-down LIDAR provides 2D scans and an IMU with a wheel encoder provides odometry; the 3D perception module (rolling-window updating with scan accumulation, rolling ahead, and discarding of obsolete points, followed by point classification) constructs a 2D synthetic LIDAR, and the 2D localization module runs Monte Carlo Localization against a prior map to output (x, y, yaw).

To be able to recognize features that are perpendicular

to the ground, an accurate model of the world is necessary.

There are numerous ways to build an accurate environment model, including a nodding LIDAR or a Velodyne. As introduced in Section II-A, a fixed, tilted-down single planar LIDAR enables accurate reconstruction of the environment by sweeping across the ground surface. This is an attractive solution since it is low cost and only requires rigid mounting of the sensor. It also allows optimizations, unique to this configuration, that enable real-time feature extraction.

A. 3D rolling window

The reconstruction uses rolling-window sampling so that the accumulated data reflects the most recent samples from the ranging sensor. As such, a 3D rolling window is used to accumulate the scans recorded over a short distance. The

size of the window is flexible and the rolling window forms

a local map of the 3D environment, i.e., it rolls together with

the vehicle, where new incoming scans will be added into

the window, and the old samples get discarded.

More specifically, given the window size w, the accumulated point set \tilde{P}_n at the n-th scan is

\tilde{P}_n = \bigcup_{k=n-w}^{n} P_k, \quad n > w \qquad (2)

where P_k denotes the points in the k-th scan. The oldest scan is dropped from the accumulated scans so that the size of the window does not grow unbounded. Also, a new scan is only inserted after the vehicle has travelled a sufficient distance \delta. This has two effects: a small \delta gives denser points but a shorter overall window, and vice versa. The rolling

window works in the odometry frame of the system, where

each scan from a physical LIDAR is projected based on the

odometry information derived from IMU and wheel encoder

as discussed in Section II-B.
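The rolling-window bookkeeping described above can be sketched as follows (a minimal illustration, not the paper's implementation; the class name, the scan-count window w, and the insertion-distance threshold are assumptions):

```python
from collections import deque

class RollingWindow:
    """Keep the last w scans; insert a new scan only after the vehicle has
    travelled at least min_dist since the previous insertion."""
    def __init__(self, w, min_dist):
        self.scans = deque(maxlen=w)   # old scans fall off automatically
        self.min_dist = min_dist
        self.last_insert = None        # odometer reading at the last insert

    def maybe_insert(self, scan_points, odometer):
        if self.last_insert is not None and odometer - self.last_insert < self.min_dist:
            return False               # not enough travel since the last scan
        self.scans.append(list(scan_points))
        self.last_insert = odometer
        return True

    def points(self):
        """All accumulated points, i.e. the union in Eq. (2)."""
        return [p for scan in self.scans for p in scan]
```

A smaller insertion distance yields denser points but a window that covers less travelled distance, matching the trade-off noted above.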

B. Point Classification

To extract features that are perpendicular to the ground, surface normal estimation is used. While many methods exist [11], we use the normal estimation proposed in [3]. The surface normal at each point is approximated by performing least-squares plane fitting to the point's local neighborhood P^K [25]. The plane is represented by a point x and its normal vector \vec{n}; the distance d_i from a point p_i \in P^K to the plane is defined as

d_i = (p_i - x) \cdot \vec{n} \qquad (3)

By taking x = \bar{p} = \frac{1}{k}\sum_{i=1}^{k} p_i as the centroid of P^K, \vec{n} can be computed in a least-squares sense such that d_i = 0. The solution for \vec{n} is obtained by computing the eigenvalues and eigenvectors of the following covariance matrix C \in \mathbb{R}^{3 \times 3} of P^K [24]:

C = \frac{1}{k}\sum_{i=1}^{k} (p_i - \bar{p})(p_i - \bar{p})^T, \quad C\,\vec{v}_j = \lambda_j \vec{v}_j, \quad j \in \{0, 1, 2\} \qquad (4)

where k is the number of points in the local neighborhood, \bar{p} is the centroid of the neighbors, and \lambda_j is the j-th eigenvalue with \vec{v}_j as the j-th eigenvector. The principal components of P^K correspond to the eigenvectors \vec{v}_j. Hence, the approximation of \vec{n} is given by the eigenvector corresponding to the smallest eigenvalue \lambda_0.

Once the normal vector \vec{n} is found, the vertical points can be obtained by simply thresholding the z component of \vec{n}, e.g., at 0.5; the threshold can vary depending on how noisy the sensor data is.
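The classification of Eqs. (3)-(4) can be sketched with NumPy as below (an illustration, not the paper's code; the brute-force neighbors_of callback stands in for the KD-tree search, and the 0.5 threshold follows the text):

```python
import numpy as np

def classify_vertical(points, neighbors_of, nz_thresh=0.5):
    """Label a point as lying on a vertical surface when the normal of a
    least-squares plane fit to its neighborhood is nearly horizontal."""
    labels = np.zeros(len(points), dtype=bool)
    for i in range(len(points)):
        nb = points[neighbors_of(i)]
        centered = nb - nb.mean(axis=0)        # subtract the centroid p-bar
        C = centered.T @ centered / len(nb)    # covariance matrix, Eq. (4)
        eigvals, eigvecs = np.linalg.eigh(C)   # eigenvalues in ascending order
        n = eigvecs[:, 0]                      # normal = smallest-eigenvalue eigenvector
        labels[i] = abs(n[2]) < nz_thresh      # near-horizontal normal => vertical surface
    return labels

# A wall in the x = 0 plane is vertical; a ground patch in z = 0 is not.
wall = np.array([[0.0, float(y), float(z)] for y in range(3) for z in range(3)])
ground = np.array([[float(x), float(y), 0.0] for x in range(3) for y in range(3)])
```

Using `eigh` on the symmetric covariance matrix guarantees the ascending eigenvalue order, so the first eigenvector is the plane normal.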

To find the local neighborhood points efficiently, a KD-tree [19] is built from all the points in the rolling window, and a fixed-radius search is performed at each point. However, performing the normal calculation at every point in the rolling window can be very expensive. To further reduce the computational complexity, two successive rolling windows are

maintained, where

\hat{P}_{n+1} = \hat{P}_n \cup (P_{n+1} \setminus P_n) \qquad (5)

Here \hat{P} consists of the processed points and P contains the raw points. This way, the surface normal calculation is only required for the much smaller set P_{n+1} \setminus P_n. In other

words, this ensures that classification is only performed on the newly accumulated point cloud, and the processed points from the previous instance can be reused.
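The two-window bookkeeping of Eq. (5) can be sketched with plain Python sets (illustrative: points are hashable tuples, and classify is a stand-in predicate for the normal-based test above):

```python
def incremental_classify(processed, prev_raw, curr_raw, classify):
    """Eq. (5): classify only the newly accumulated points curr_raw \\ prev_raw
    and merge them into the already-processed set, so no point is classified twice."""
    fresh = curr_raw - prev_raw
    return processed | {p for p in fresh if classify(p)}
```

Only the set difference is examined on each update, which is why the per-update cost scales with the newly inserted scan rather than the whole window.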

C. Synthetic LIDAR construction

The classification yields a collection of interest points in 3D. To construct the synthetic LIDAR, these 3D interest points are projected onto the virtual horizontal plane (z = 0). This synthetic LIDAR has a very special feature: the ability to see through obstacles, which is possible since the interpretation of the points is done in 3D. The construction is completed by placing the virtual sensor at the base of the vehicle and transforming all the interest points from the odometry frame to the vehicle's base frame.

Fig. 2. The 3D perception pipeline: 3D rolling window, point classification, interest points.

If a conventional scan format is required (equally spaced angle increments), the synthetic LIDAR can be further reconstructed to fulfill this constraint. This would involve performing ray tracing at each fixed angle increment to obtain the minimum range value from the possible end points.

The overall 3D perception can be summarized in Figure 2.

The 3D perception is implemented with the Point Cloud Library (PCL) [23], which provides many of the operations described in this

section.
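Putting Section III together, a synthetic scan in a conventional fixed-angular-increment format can be sketched as below (illustrative: the sensor pose is reduced to a 2D position, and the ray tracing is replaced by keeping the minimum range per angular bin):

```python
import math

def synthetic_scan(interest_points, sensor_xy, n_beams=360):
    """Project 3D interest points onto z = 0, bin them by bearing around the
    virtual sensor, and keep the minimum range per beam."""
    ranges = [float('inf')] * n_beams    # inf marks beams with no return
    sx, sy = sensor_xy
    for (x, y, _z) in interest_points:   # dropping z projects onto the plane
        dx, dy = x - sx, y - sy
        ang = math.atan2(dy, dx) % (2 * math.pi)
        beam = int(ang / (2 * math.pi) * n_beams) % n_beams
        ranges[beam] = min(ranges[beam], math.hypot(dx, dy))
    return ranges
```

Note that this binning deliberately discards the see-through property: several vertical surfaces at the same bearing collapse to the nearest one, which is only appropriate when a conventional scan format is required.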

IV. ONLINE LOCALIZATION

A. MCL Localization

This paper adopts the Monte Carlo Localization (MCL)

scheme in [1] to estimate the vehicle pose. MCL is a probabilistic localization method based on Bayes' theorem and Monte Carlo sampling [28]. The core of MCL is a particle filter, where the belief of the vehicle position is maintained by a set of particles. MCL mainly consists of three steps: prediction,

correction, and resampling. For the motion model required by the prediction step, the pseudo-3D odometry motion model from our previous work [22] is used. The

choice of measurement model is discussed in the following.
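One MCL iteration (prediction, correction, resampling) can be sketched as follows. This is a generic particle-filter skeleton, not the paper's implementation: the motion and measurement_weight callbacks stand in for the pseudo-3D odometry model and the likelihood model, and low-variance resampling is one common choice.

```python
import random

def mcl_step(particles, motion, measurement_weight):
    """One Monte Carlo Localization iteration over (x, y, yaw) particles."""
    predicted = [motion(p) for p in particles]            # prediction step
    weights = [measurement_weight(p) for p in predicted]  # correction step
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # low-variance (systematic) resampling
    n = len(predicted)
    step = 1.0 / n
    u = random.uniform(0.0, step)
    resampled, c, i = [], weights[0], 0
    for _ in range(n):
        while u > c and i < n - 1:
            i += 1
            c += weights[i]
        resampled.append(predicted[i])
        u += step
    return resampled
```

Repeating this step concentrates the particle set around poses whose synthetic-LIDAR measurements agree with the prior map.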

Another LIDAR is mounted at the waist level for obstacle detection. Both rear wheels of the golf cart are mounted with encoders that provide an estimate of the distance traveled. A MicroStrain 3DM-GX3-25 Inertial Measurement Unit (IMU) is mounted at the center of the rear axle to provide the orientation of the vehicle. The localization algorithm is tested in the Engineering Campus of the National University of Singapore, where the road is undulating and many tall buildings stand along the road.

A prior map is first generated with graph SLAM techniques, using the synthetic LIDAR as the input. To perform pose optimization, [29] is used as the front end to detect loop closures; the fully optimized poses are then recovered using the optimization library from [10]. To evaluate the quality of the map built from the synthetic LIDAR, the map is projected onto a satellite map, as shown in Fig. 5. The map shows good consistency and correlation with the satellite map, over an area of about 550 m × 487 m. Although there are discrepancies towards the left side of the map due to uniform longitudinal features along the road, the overall topology is maintained. This shows that the map can be used for accurate localization.

To incorporate the measurements into localization, a measurement model is needed for the synthetic LIDAR; the likelihood model is adopted. Since the end points of the virtual beams are projections of interest points from vertical surfaces, different points from different vertical surfaces may share the same angle. In other words, there may exist two laser beams with the same angle but two different range values. For this reason, the synthetic 2D LIDAR is a peculiar LIDAR that only detects vertical surfaces, and can also see through these surfaces. In light of this, the likelihood model, which only requires the end points of the laser beams, is well suited for the synthetic LIDAR.
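A likelihood-field weight of the kind described above can be sketched as follows (illustrative: a brute-force nearest-occupied-cell search stands in for the usual precomputed distance transform, and sigma and the cell size are made-up values):

```python
import math

def likelihood_weight(endpoints_world, occupied_cells, sigma=0.3, cell=0.5):
    """Weight one particle: each projected beam endpoint contributes a Gaussian
    score of its distance to the nearest occupied map cell."""
    w = 1.0
    for (ex, ey) in endpoints_world:
        # distance from the endpoint to the nearest occupied cell center
        d = min(math.hypot(ex - ox * cell, ey - oy * cell)
                for (ox, oy) in occupied_cells)
        w *= math.exp(-d * d / (2.0 * sigma * sigma))
    return w
```

Because only endpoints are scored, two synthetic beams sharing a bearing but having different ranges both contribute, which is what makes this model a natural fit for the see-through synthetic LIDAR.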

V. EXPERIMENTS

A. Experiment Setup

Our test bed is a Yamaha G22E golf cart equipped with

various types of sensors. The hardware configuration is

shown in Fig. 3. The tilted-down LIDAR mounted at the upper front is a SICK LMS-291 used for localization.

B. Experimental Results

The synthetic LIDAR produces output at a rate of 50 Hz on a laptop with a Core i7 processor, showing that it can be used for real-time localization. The localization results are shown in Fig. 6.

Judging from the prior map, the localization result from our algorithm consistently aligns with the driving path: on the long stretch of road, the estimated trajectory clearly runs parallel to the road boundary. Since our algorithm does not rely on

GPS, our estimation still performs well near areas crowded

by tall buildings. Note that in the experiment, a rough

initial position is given and hence localization is mostly

concerned with pose tracking. However, the system is able

to cope with small kidnapping problems, e.g. brief data error

from the LIDAR since the odometry system is still able to

provide information. Should a large kidnapping occur, e.g.

the vehicle was moved in between placed without turning

on the localization module, a rough initial position may be

provided to speed up the convergence rate.

Fig. 3. Vehicle testbed.

Fig. 4 shows the localization variance vs. driving distance (Fig. 4(a)); Fig. 4(b) shows the position estimation variance vs. driving distance in the longitudinal and lateral directions relative to the vehicle. It is shown that during

the whole test, the variance in both directions remains small. The worst variance occurs longitudinally, at a value of about 0.2 m. This suggests the localization algorithm has high confidence in its pose estimates. At the same time, the lateral variance is generally smaller than the longitudinal one. This is in line with the fact that in an urban road environment, features in the lateral direction are much richer than those in the longitudinal direction, as discussed in our previous work [22].

The proposed method has since been used to perform autonomous navigation similar to [22]. To show that the localization results are consistent, two autonomous runs were performed, as shown in Fig. 7. The golf cart is given a path to follow in the direction from A to E. To validate the precision of the localization, 5 checkpoints are selected, where the per-pixel absolute difference between two grayscale images captured from each autonomous run is computed. To ensure the same lighting conditions, the two autonomous runs were performed consecutively on the same day. The 5 smaller images from Fig. 7 are the results of this visual validation. The sharp edges of the images provide strong evidence that the localization system is able to provide precise positions repeatedly. Note that the dark spots present in images B, C, and E are the natural result of moving objects. This shows that precise navigation can be achieved with only a single 2D LIDAR.
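The per-pixel validation used at the checkpoints can be sketched as follows (a generic absolute difference of grayscale arrays; widening to int16 avoids uint8 wrap-around and is an implementation choice, not from the paper):

```python
import numpy as np

def checkpoint_diff(img_a, img_b):
    """Per-pixel absolute difference of two grayscale images; near-zero output
    (apart from moving objects) indicates the two runs reached the same pose."""
    a = img_a.astype(np.int16)   # widen so the subtraction cannot wrap
    b = img_b.astype(np.int16)
    return np.abs(a - b).astype(np.uint8)
```

Sharp, isolated residuals then correspond to moving objects, while a misaligned pose would smear edges across the whole difference image.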

Fig. 4. Localization variance vs. driving distance.

Fig. 5. Map built from the synthetic LIDAR, overlaid on a satellite map of the Engineering Campus.

ACKNOWLEDGMENT

This research was supported by the National Research

Foundation (NRF) Singapore through the Singapore-MIT Alliance for Research and Technology's (FM IRG) research programme, in addition to the partnership with the Defence Science Organisation (DSO). We are grateful for their support.

VI. CONCLUSION

This paper proposed the notion of synthetic LIDAR to achieve precise localization in a 3D environment. Through experiments, we show that the proposed method can handle localization in a large environment. Vehicle position estimation is conducted in a Monte Carlo Localization scheme, based on synthetic LIDAR measurements and odometry information. We demonstrate the accuracy and robustness of our localization algorithm in a driving test. Future work will look into a more generic framework that allows multiple sensor modalities in order to achieve robust navigation for vehicle autonomy.

REFERENCES


[2] I. Baldwin and P. Newman, "Road vehicle localization with 2D push-broom lidar and 3D priors," in Proc. IEEE International Conference on Robotics and Automation (ICRA 2012), Minnesota, USA, May 2012.
[3] J. Berkmann and T. Caelli, "Computation of surface geometry and segmentation using covariance techniques," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 16, no. 11, pp. 1114–1116, Nov. 1994.
[4] H. Carvalho, P. DelMoral, A. Monin, and G. Salut, "Optimal nonlinear filtering in GPS/INS integration," IEEE Transactions on Aerospace and Electronic Systems, vol. 33, no. 3, pp. 835–850, 1997.
[5] P. Checchin, F. Gerossier, C. Blanc, R. Chapuis, and L. Trassoudaine, "Radar scan matching SLAM using the Fourier-Mellin transform," in Field and Service Robotics. Springer, 2010, pp. 151–161.

[6] "... precise vehicle localization using belief theory and Kalman filtering," Autonomous Robots, vol. 19, no. 2, pp. 173–191, 2005.
[7] J. Guivant and R. Katz, "Global urban localization based on road maps," in 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2007, pp. 1085–1090.
[8] A. Harrison and P. Newman, "High quality 3D laser ranging under general vehicle motion," in Proc. IEEE International Conference on Robotics and Automation (ICRA 2008), Pasadena, California, April 2008.
[9] M. Jabbour and P. Bonnifait, "Global localization robust to GPS outages using a vertical ladar," in 2006 9th International Conference on Control, Automation, Robotics and Vision, 2006, pp. 1013–1018.
[10] M. Kaess, A. Ranganathan, and F. Dellaert, "iSAM: Incremental smoothing and mapping," IEEE Trans. on Robotics (TRO), vol. 24, no. 6, pp. 1365–1378, Dec. 2008.

[11] K. Klasing, D. Althoff, D. Wollherr, and M. Buss, "Comparison of surface normal estimation methods for range sensing applications," in Robotics and Automation, 2009. ICRA '09. IEEE International Conference on, May 2009, pp. 3206–3211.
[12] R. Kummerle, R. Triebel, P. Pfaff, and W. Burgard, "Monte Carlo localization in outdoor terrains using multilevel surface maps," Journal of Field Robotics, vol. 25, no. 6-7, pp. 346–359, 2008. [Online]. Available: http://dx.doi.org/10.1002/rob.20245
[13] S. Kohlbrecher, J. Meyer, O. von Stryk, and U. Klingauf, "A flexible and scalable SLAM system with full 3D motion estimation," in Proc. IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR). IEEE, November 2011.
[14] T. Kos, I. Markezic, and J. Pokrajcic, "Effects of multipath reception on GPS positioning performance," in ELMAR, 2010 Proceedings. IEEE, 2010, pp. 399–402.
[15] D. Lecking, O. Wulf, and B. Wagner, "Localization in a wide range of industrial environments using relative 3D ceiling features," in Emerging Technologies and Factory Automation, 2008. ETFA 2008. IEEE International Conference on. IEEE, 2008, pp. 333–337.
[16] J. Levinson, M. Montemerlo, and S. Thrun, "Map-based precision vehicle localization in urban environments," in Proceedings of Robotics: Science and Systems, Atlanta, GA, USA, June 2007.
[17] J. Levinson and S. Thrun, "Robust vehicle localization in urban environments using probabilistic maps," in ICRA 2010, 2010, pp. 4372–4378.
[18] A. Mohamed and K. Schwarz, "Adaptive Kalman filtering for INS/GPS," Journal of Geodesy, vol. 73, no. 4, pp. 193–203, 1999.

Fig. 7. The smaller images on the right, from top to bottom, correspond to the visual validation of localization repeatability from checkpoints A to E.

[19] M. Muja and D. G. Lowe, "Fast approximate nearest neighbors with automatic algorithm configuration," in International Conference on Computer Vision Theory and Application (VISSAPP '09). INSTICC Press, 2009, pp. 331–340.
[20] A. Nuchter, K. Lingemann, J. Hertzberg, and H. Surmann, "6D SLAM: 3D mapping outdoor environments," Journal of Field Robotics, vol. 24, no. 8-9, pp. 699–722, 2007.
[21] H. H. Qi and J. B. Moore, "Direct Kalman filtering approach for GPS/INS integration," IEEE Transactions on Aerospace and Electronic Systems, vol. 38, no. 2, pp. 687–693, 2002.
[22] B. Qin, Z. J. Chong, T. Bandyopadhyay, M. H. Ang Jr., E. Frazzoli, and D. Rus, "Curb-intersection feature based Monte Carlo localization on urban roads," in IEEE International Conference on Robotics and Automation, 2012.
[23] R. B. Rusu and S. Cousins, "3D is here: Point Cloud Library (PCL)," in IEEE International Conference on Robotics and Automation (ICRA), Shanghai, China, May 9-13 2011.
[24] R. Rusu, "Semantic 3D object maps for everyday manipulation in human living environments," KI - Kunstliche Intelligenz, vol. 24, no. 4, pp. 345–348, 2010.
[25] C. Shakarji, "Least-squares fitting algorithms of the NIST algorithm testing system," Journal of Research of the National Institute of Standards and Technology, vol. 103, pp. 633–641, 1998.
[26] N. Suganuma and T. Uozumi, "Precise position estimation of autonomous vehicle based on map-matching," in Intelligent Vehicles Symposium (IV), 2011 IEEE. IEEE, pp. 296–301.
[27] S. Thrun, W. Burgard, and D. Fox, "A real-time algorithm for mobile robot mapping with applications to multi-robot and 3D mapping," in Robotics and Automation, 2000. Proceedings. ICRA '00. IEEE International Conference on, vol. 1. IEEE, 2000, pp. 321–328.
[28] S. Thrun, W. Burgard, and D. Fox, Probabilistic Robotics. MIT Press, 2005.
[29] G. Tipaldi and K. Arras, "FLIRT: interest regions for 2D range data," in Robotics and Automation (ICRA), 2010 IEEE International Conference on, May 2010, pp. 3616–3622.
[30] J. Weingarten and G. Gruener, "A fast and robust 3D feature extraction algorithm for structured environment reconstruction," in 11th International Conference on Advanced Robotics (ICAR), 2003.
[31] O. Wulf, D. Lecking, and B. Wagner, "Robust self-localization in industrial environments based on 3D ceiling structures," in Intelligent Robots and Systems, 2006 IEEE/RSJ International Conference on. IEEE, 2006, pp. 1530–1534.
