
Francisco J. Agostini Ramos

INGL3236-030

December 16, 2022

Annotated Bibliography

1- Dominguez, R., Onieva, E., Alonso, J., Villagra, J., & Gonzalez, C. (2011, November). LIDAR based perception solution for autonomous vehicles. In 2011 11th International Conference on Intelligent Systems Design and Applications (pp. 790-795). IEEE.

This paper was written in 2011, when self-driving/autonomous vehicle technology was starting to develop into what it is today. To give some context, lidar first appeared in autonomous vehicles in 2005, in experimental competitions with select, planned obstacles. This paper looks to tackle real-world problems that autonomous vehicles may encounter by using lidar as a means to scan and process obstacles, segmenting, clustering, and tracking the readings from the lidar sensors. Using an algorithm, inputs for distance and angle relative to the sensor position and area sweep are calculated to help locate the obstacle the vehicle is facing, and the cluster shape is used to identify what the object may be: an "L" shape for cars, an "I" shape for walls, and an "O" or "OO" shape for people. The data is then read by the system, which makes decisions based on the two-dimensional data of distance and angle. This information, while easy to calculate, leaves out a lot of crucial detail and does not provide the three-dimensional information that the author needs.
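The segment-cluster-classify pipeline described above can be sketched roughly as follows. This is my own illustrative simplification, not the authors' algorithm: the gap threshold, size cutoffs, and bounding-box shape test are assumptions chosen only to show the idea of sorting clusters into "L", "I", and "O" categories.

```python
import math

# Illustrative sketch (not the paper's code): take 2D lidar returns as
# (range, angle) pairs, group them into clusters, and classify each
# cluster by its rough shape, in the spirit of "L" (car), "I" (wall),
# and "O" (person). All thresholds here are made-up assumptions.

def to_cartesian(scan):
    """Convert (range_m, angle_rad) readings to (x, y) points."""
    return [(r * math.cos(a), r * math.sin(a)) for r, a in scan]

def cluster(points, gap=0.5):
    """Group consecutive points whose spacing stays below `gap` metres."""
    clusters, current = [], [points[0]]
    for p, q in zip(points, points[1:]):
        if math.dist(p, q) <= gap:
            current.append(q)
        else:
            clusters.append(current)
            current = [q]
    clusters.append(current)
    return clusters

def classify(cluster_pts):
    """Very rough shape test based on the cluster's bounding-box extents."""
    xs = [p[0] for p in cluster_pts]
    ys = [p[1] for p in cluster_pts]
    w, h = max(xs) - min(xs), max(ys) - min(ys)
    long_side, short_side = max(w, h), min(w, h)
    if long_side < 0.8:                 # small blob -> person ("O")
        return "O"
    if short_side < 0.3 * long_side:    # thin and long -> wall ("I")
        return "I"
    return "L"                          # two comparable extents -> car corner
```

A real implementation would use a corner-fitting test rather than bounding boxes to distinguish an "L" from an "I", but the two-dimensional nature of the data, the annotation's main criticism, is visible here: only x and y are ever used.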
2- Kusenbach, M., Himmelsbach, M., & Wuensche, H. J. (2016, June). A new geometric 3D LiDAR feature for model creation and classification of moving objects. In 2016 IEEE Intelligent Vehicles Symposium (IV) (pp. 272-278). IEEE.

This paper presents a new way for autonomous vehicles to retrieve useful information from lidar sensors. In autonomous vehicles, speed is key, and the quicker the data from the sensors is processed, the faster the vehicle can "make decisions". The paper describes a method where the system first analyzes the laser data as segments and, like the paper cited above, uses general shapes (L, I, O...) to determine what the object may be. What makes this new approach different is that it "decides" what object is being scanned via movement: using different views of the object, it is able to build a set of 3D points, called a "point cloud", that provides a more detailed model of the obstacle at hand. This point cloud is then segmented into 2D models that are easy to process and can use pre-existing fast interpolation methods for quicker processing than would be possible with a raw 3D scan of the obstacle. This method is relevant to the research because it has been expanded upon and is a relevant and useful solution to real-world field problems an autonomous car might face.
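The core idea of flattening a 3D point cloud into fast-to-process 2D data can be sketched as below. This is my own simplification, not the paper's geometric feature: the bird's-eye-view grid, cell size, and "keep the maximum height" rule are assumptions used only to illustrate the 3D-to-2D decomposition.

```python
# Illustrative sketch (an assumed simplification, not the paper's exact
# feature): flatten a 3D point cloud into a 2D bird's-eye-view grid that
# keeps the tallest return per cell, which downstream code can process
# much faster than the raw 3D points.

def to_grid(points, cell=0.2, size=50):
    """Map (x, y, z) points into a size-by-size grid of cell-metre cells,
    keeping the maximum height z seen in each occupied cell."""
    grid = {}
    half = size * cell / 2  # grid is centred on the sensor
    for x, y, z in points:
        if -half <= x < half and -half <= y < half:
            i = int((x + half) / cell)
            j = int((y + half) / cell)
            grid[(i, j)] = max(grid.get((i, j), float("-inf")), z)
    return grid
```

The speed advantage the annotation mentions comes from this reduction: the grid has a fixed, small number of cells no matter how dense the raw scan is.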
3- Kusenbach, M., Luettel, T., & Wuensche, H. J. (2019, October). Enhanced Temporal Data Organization for LiDAR Data in Autonomous Driving Environments. In 2019 IEEE Intelligent Transportation Systems Conference (ITSC) (pp. 2701-2706). IEEE.

This article, much like the last, follows the same method of making 2D scans out of 3D point clouds. In this paper, the method is modified to create temporal links between points received from point clouds. These temporal links make for faster readings that are more accurate to the ground truth. The data, after being processed from 3D point clouds into 2D models, are then linked together via a series of planes, each containing a 2D model. This results in a much more complete image for the system to act upon. Due to its fast refresh rate, it also isn't affected by discontinuities, like those of a moving object, as every revolution of the lidar sensor creates a new object appearance.
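One way to picture the temporal linking described above is a nearest-centroid association between detections in consecutive lidar revolutions. This sketch is an assumption of mine, not the paper's actual organization scheme: the centroid matching and the `max_jump` threshold are invented for illustration only.

```python
import math

# Illustrative sketch (an assumed nearest-centroid association, not the
# paper's method): link object detections from one lidar revolution to
# the next, so each object keeps a stable identity across revolutions
# even though every sweep produces a fresh appearance of it.

def centroid(pts):
    """Mean (x, y) of a detection's points."""
    n = len(pts)
    return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)

def link_frames(prev, curr, max_jump=1.0):
    """Return {curr_index: prev_index} links between detections whose
    centroids moved less than `max_jump` metres between revolutions."""
    links = {}
    for j, c in enumerate(curr):
        best, best_d = None, max_jump
        for i, p in enumerate(prev):
            d = math.dist(centroid(p), centroid(c))
            if d < best_d:
                best, best_d = i, d
        if best is not None:
            links[j] = best
    return links
```

Detections that find no match within `max_jump` (new objects entering the scene) simply start a new identity, which mirrors the annotation's point that a fast refresh rate keeps moving objects from causing discontinuities.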


4- Shreyas, E., & Sheth, M. H. (2021, August). 3D Object Detection and Tracking Methods using Deep Learning for Computer Vision Applications. In 2021 International Conference on Recent Trends on Electronics, Information, Communication & Technology (RTEICT) (pp. 735-738). IEEE.

In this document, 3D lidar scanning methods beyond the ones seen in the papers above are described. It does include M. Kusenbach's method of decomposing 3D information into 2D information, which is both quick and adaptive and "easily digestible" for the algorithms running the autonomous vehicle's commands. This paper is useful for the author's research because it provides an ample source of 3D lidar processing methods to further research and expand upon. It can be used to compare methods, as well as to combine parts of different methods to create more effective ones.


Reflection

For this project, I chose the topic of LiDAR (light detection and ranging) because I believe it to be the future of accurate and reliable data for autonomous driving. I have been receiving SAE's Automotive Engineering magazine since 2019, and since then the topic that most often comes up is lidar solutions for scanning an autonomous vehicle's surroundings. This field interests me very much, and I see myself working on lidar developments as a practicing engineer, so choosing this topic for this project gets me warmed up for what may come. I left out some of the more technical information that the papers go into detail about because, quite frankly, I found it very challenging to understand well enough to give a clear and concise summary. This is what I found most difficult about the project. As an undergraduate student with limited coursework and no experience, it was very hard for me to extract information from these documents; it took me several readings of each article to understand what was being discussed well enough to do the project, which is why I often had a "deer in the headlights" look in class when we were supposed to be working on the first three annotations. I do believe, however, that in the end I was able to understand the topic well enough to do a good job of writing these annotations, and of linking my articles together in them.
