

The Perception Systems Used in Fully Automated Vehicles: A Comparative
Analysis
Kritika Rana 1 [0000-0002-2247-5045], Gaurav Gupta 2 [0000-0002-5192-4428] and Pankaj Vaidya 3 [0000-0003-1304-630X]

1,2,3 Yogananda School of AI, Computers and Data Science, Shoolini University, Solan, India
kritikarana0000@gmail.com

Abstract. Over the last decade, researchers and the car industry have become very active in the deployment of driverless cars, which can greatly increase vehicle safety, reduce the rate of traffic accidents, and lessen the environmental impact of automobiles. The structural design of the autonomous vehicle system is very important for the advancement of automated driving: a strong design boosts an autonomous vehicle's safety and handling. Overall, the technology of autonomous vehicle systems is similar across manufacturers, but self-driving vehicles differ considerably in capability. The key explanation is the architecture of the automated driving technology; software architecture plays a crucial role in implementing automated driving. The ability of driverless vehicles to operate in real-world situations in a safe and stable manner is currently being studied on public roads; however, they remain restricted to specific traffic conditions and incidents. This paper therefore discusses the key technologies and embedded system essentials required to understand the system architecture of these applications. A comparison is drawn between sensing devices, which shows the need for sensor fusion for correct functioning of autonomous vehicles. A timeline of autonomous vehicles from 1925 to 2021 is shown and discussed. This comparative analysis of the perception systems used in fully automated vehicles provides a detailed interpretation of the capabilities and weaknesses of different sensor systems. In addition to providing insight into possible improvements, it indicates areas for additional research and analyses the accuracy, dependability, and cost of the various technologies. The analysis also summarises the present state of autonomous vehicle perception systems and considers future advancements. By increasing knowledge of the various sensing technologies used in these cars, this comparative analysis can help to close the gap in the literature and in current practice.

Keywords: Autonomous driving, Self-driving vehicles, Architecture, Driverless, Real-world.

1. Introduction

A vehicle is considered autonomous if it can recognise its surroundings and operate on its own without the help of a human driver [1]. A human passenger is not required to take control of the vehicle, nor is a human driver needed at all. An automated vehicle can go anywhere an accomplished human driver can go. Francis Houdina, an inventor, drove a radio-controlled car around the streets of Manhattan in 1925 without anyone in the driver's seat. Via radio, its engine could be started, its gears shifted, and its horn sounded. General Motors developed the first autonomous vehicle model in 1939.
Fig. 1. History timeline of Autonomous Vehicles [2][3][4]

It was an electric vehicle guided by radio-controlled electromagnetic fields generated by magnetised metal spikes embedded in the road [2]. In 1958, the 1939 concept became a reality: sensors could detect the current flowing through a wire embedded in the road, and the steering wheel could be moved left or right by adjusting the current. Researchers started contemplating ways to put automobiles on the moon in 1961. James Adams consequently developed the Stanford Cart, which was equipped with cameras and designed to automatically find and follow a line on the ground. It was the first time cameras had been used in an autonomous car. With a camera system that transmitted information to a computer to process photographs of the road, the Japanese improved on this concept in 1977 [3]. This resulted in the testing of the first autonomous passenger car, which was capable of speeds up to 20 miles per hour. Researchers from Carnegie Mellon drove their self-driving vehicle, known as NavLab 5, approximately 2,797 miles from Pittsburgh to San Diego in 1995. The vehicle was otherwise autonomous, but the researchers controlled the speed and braking. DARPA, the research branch of the U.S. Department of Defence, organised a competition in 2004 to test how well vehicles could travel 150 miles of arid terrain on their own; no vehicles finished the course. In 2007, the task was for the cars to drive through a 60-mile metropolitan landscape [3][4]. Tesla unveiled its Autopilot software in 2015, the same year the University of Michigan's MCity AV test facility was launched. Ford and GM invested millions of dollars in AV research and development in 2021 [4].
Autonomous vehicles use perceptual sensing, which works similarly to the visual sense of a human driver, to collect data from their sensors and turn it into an understanding of their surroundings. Analog Devices believes that vision, RADAR, and LiDAR sensing modalities will be used effectively in autonomous vehicle perception sensing. Each sensing method has advantages and disadvantages, but by utilising sensor fusion techniques the advantages can be combined and the disadvantages overcome. Perception is one of the most computationally demanding functions in an AV stack [5][6]. Perception must interpret sensor data and identify objects with an appropriate level of accuracy and speed [7][8]. Perception for self-driving cars includes accurate and real-time detection, localization, and tracking of other vehicles and pedestrians, as well as objects and lane markings on the road, to ensure safe operation.

The five main elements used in autonomous vehicle navigation (Fig. 2) are perception, localization and mapping, path planning, decision-making, and vehicle control [9].

Fig. 2. Description of the autonomous process of navigation

Perception works by making use of embedded system devices [10], [11] to scan and track the traffic world continuously, analogous to human perception and other intelligence-related tasks [12]. Localization and mapping algorithms estimate the car's current location and record the environment from sensor data and other observation outputs [13]. Path planning denotes possible safe paths for the ego-car based on knowledge of experience, position, and mapping [14].
In the decision-making process, the optimal route is determined from the available paths, the current state of the vehicle, and environmental details (e.g., path cost, weather conditions, road signs) [15]. To follow the chosen path, such as a lane change, a right turn, or another manoeuvre, the vehicle control module determines the correct control commands (turning force, acceleration and deceleration, steering wheel angle, yaw angle, etc.) [16].
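To make the interaction between these five modules concrete, the following minimal Python sketch strings them together in a single control loop. The class and method names (Perception, Localizer, plan, choose, follow, and so on) are illustrative assumptions, not part of the paper or of any particular AV stack.

# Minimal sketch of the five-module navigation loop described above.
# All class and method names are hypothetical placeholders.

class AutonomousDrivingPipeline:
    def __init__(self, perception, localizer, planner, decision_maker, controller):
        self.perception = perception          # scans and tracks the traffic world
        self.localizer = localizer            # estimates pose and maps the environment
        self.planner = planner                # proposes candidate safe paths
        self.decision_maker = decision_maker  # picks the optimal path
        self.controller = controller          # turns the chosen path into actuation

    def step(self, raw_sensor_data):
        objects = self.perception.detect_and_track(raw_sensor_data)
        pose, local_map = self.localizer.update(raw_sensor_data, objects)
        candidate_paths = self.planner.plan(pose, local_map, objects)
        best_path = self.decision_maker.choose(candidate_paths, pose, local_map)
        # Control outputs: steering angle, acceleration/deceleration, etc.
        return self.controller.follow(best_path, pose)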

How do self-driving cars see the world?

The ability to extract relevant knowledge from the immediate environment is a basic pillar of the safe operation of an automated car, a task that human drivers perform naturally. Perception enables a car to identify other vehicles with cameras and other devices, to identify potential threats, and to monitor their movements continuously. The vehicle can sense and monitor both moving and static objects as it moves, and its sensing coverage can be extended to a full 360° around it. Perception is the first step in the computational pipeline that ensures a self-driving car operates safely. When the machine can retrieve appropriate data from its surroundings, it can prepare the route ahead and function completely without human interference [18], [11]. A combination of high-tech sensors and cameras allows the sensing system of self-driving vehicles to capture and understand information about the environment all around the vehicle. Perception is vital for safe and reliable operation, since the data it provides drives the decision-making that determines how the vehicle moves next: the vehicle is kept on the right track towards its target destination while not endangering the lives of people in and around it.
In order to detect and analyse the surrounding environment in three-dimensional space, sensors such as LiDAR, radar, and cameras are used to create maps. The output of processing this data is a point cloud map, a detailed depiction of the surroundings. With this map, the autonomous car can navigate its surroundings safely and make decisions; it also gains a thorough understanding of its surroundings, including the locations of highways, barriers, and other landmarks.
Without sensors, autonomous vehicles would be impossible to build: they gather the data that the vehicle's onboard systems need to drive safely. Cameras, radar, ultrasonic, and LiDAR sensors are the most significant vehicle sensors for detecting the environment.
Fig. 3. Sensors used for perception in autonomous vehicle [19]
Radar: Radar consists of an antenna, which transmits radio signals in a specific direction, and a detector, which tracks the signal as it bounces off objects in the environment. Radar stands for radio detection and ranging. The aim is to measure the time it takes for a transmitted signal to return; since radio waves travel at an (approximately) constant speed, the distance between the antenna and the reflecting object can be calculated from this round-trip time. This process is repeated in several different directions, determining the distance to each reflecting "point". Hundreds of times per second, the vehicle can build precise point cloud maps of the environment and use these points to estimate the location and intensity of an object. Radio waves are absorbed less than light waves by the objects they strike, and they often work over long distances [20]. Radars are highly flexible, can operate well in adverse conditions such as fog, light snow, and rain, and can see ahead of the vehicle. However, radar tends to be less accurate than other sensors and gives driverless cars relatively little information: vehicles crossing a curve may be lost, and it is difficult to discern or recognise other objects.
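The distance calculation described above follows directly from the round-trip time of the radio pulse. The short Python sketch below illustrates the idea; the factor 2 in the denominator accounts for the out-and-back path, and the example echo delay is an assumed value used only for illustration.

# Range from radar round-trip time: d = c * t / 2 (out-and-back path).
SPEED_OF_LIGHT = 299_792_458.0  # m/s, approximately constant in air

def radar_range(round_trip_time_s: float) -> float:
    """Distance to the reflecting object, in metres."""
    return SPEED_OF_LIGHT * round_trip_time_s / 2.0

# Example: an echo received 0.5 microseconds after transmission
# corresponds to an object roughly 75 m ahead.
print(radar_range(0.5e-6))  # ~74.9 m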
Fig. 4. Perfect Radar Sensor vs Automotive Radar

To react to its surroundings, an autonomous car uses an automated driving system. AVs need various sensors, such as radar, LiDAR, cameras, and ultrasonic sensors, to gather information from their surroundings and pass it to their data fusion units.
Cameras: Compared to any other sensor, cameras deliver the richest information about the environment around the vehicle. Every vehicle that aims to move on today's roads needs to be equipped with cameras [21]. Cameras determine the presence and significance of four major driving environment characteristics (a minimal lane-detection sketch follows the list):

• Detection and identification of traffic lights (so that the car knows whether there is a traffic light and what colour it shows).
• Lane detection and classification (so that the car recognises where it may drive).
• Detection and recognition of road signs (so that it knows where to look for road directions and the exact directions for a specific route) [22].
• Detection, classification, and tracking of objects (seeing which things pose a potential threat, what they are, and following their positions) [23].
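As a concrete illustration of the second item, lane markings are commonly extracted from a camera frame with classical computer vision. The OpenCV sketch below is a minimal example under assumed threshold values; it is not the method used by any particular vehicle or by the authors.

# Minimal lane-marking detection sketch with OpenCV (assumed parameter values).
import cv2
import numpy as np

def detect_lane_lines(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blurred, 50, 150)          # edge map of the road scene

    # Keep only the lower half of the image, where the road usually is.
    mask = np.zeros_like(edges)
    h, w = edges.shape
    mask[h // 2:, :] = 255
    roi_edges = cv2.bitwise_and(edges, mask)

    # Probabilistic Hough transform returns candidate line segments (x1, y1, x2, y2).
    lines = cv2.HoughLinesP(roi_edges, rho=1, theta=np.pi / 180,
                            threshold=50, minLineLength=40, maxLineGap=20)
    return [] if lines is None else lines.reshape(-1, 4)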
Fig. 5. Perfect Camera Sensor vs Automotive Camera

Stereo cameras: In the case of stereo cameras, two digital cameras work together. Like the stereoscopic vision of a pair of eyes, the image pair enables perception of the area around the vehicle and, along with that, provides information on the position, distance, and speed of objects. Based on the arrangement of pixels and on triangulation, software compares the two images captured by the stereo cameras from different angles and determines the information required for a 3D image.
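For a calibrated, rectified camera pair, the triangulation mentioned above reduces to the classic relation Z = f·B/d, where f is the focal length in pixels, B the baseline between the two cameras, and d the disparity between matching pixels. The sketch below is a minimal illustration with assumed calibration values, not a production stereo pipeline.

# Depth from disparity for a rectified stereo pair: Z = f * B / d.
def depth_from_disparity(disparity_px: float,
                         focal_length_px: float = 700.0,    # assumed calibration
                         baseline_m: float = 0.54) -> float: # assumed baseline
    """Distance (m) to the point whose left/right image disparity is given."""
    if disparity_px <= 0:
        raise ValueError("Disparity must be positive for a visible point.")
    return focal_length_px * baseline_m / disparity_px

# Example: a 9.45-pixel disparity corresponds to a point about 40 m away.
print(depth_from_disparity(9.45))  # ~40.0 m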
Monocular cameras: A monocular camera is a single-lens camera used to capture images from a single vantage point. It is basically a single-lens device used to image an object or scene, as opposed to a binocular camera, which uses two lenses to create a stereo image. A monocular camera has a single lens connected to a digital sensor, which captures images and processes them into a digital format. These cameras can be used in a variety of ways, including photography, video, and virtual reality applications. Monocular cameras are typically used in applications where a narrow field of view is required, such as surveillance and security. They can also be used to create 3D models of objects or scenes, as well as in medical imaging. Monocular cameras can also be used in robotics and autonomous vehicles to identify and track objects in their environment [24].
Infrared cameras: Active and passive IR cameras are the two most common types. An active infrared camera illuminates the scene with an in-vehicle near-IR light source that cannot be seen by the human eye and uses a standard digital camera to capture the reflected light. A passive infrared camera, by contrast, captures the infrared radiation that objects themselves emit. The radiation is detected by the pixels of the IR sensor, which transform it into an electronic signal; this signal is then amplified and processed to form a digital image, which is shown on the camera's display screen. Infrared cameras convert infrared radiation into an electrical signal using a specialised detector, such as a thermopile or pyroelectric sensor. An image is then produced on a monitor by amplifying this signal. Along with a lens to direct infrared radiation onto the detector, the camera also has a series of filters to block visible light so that only infrared radiation is detected. Because the infrared radiation used by the camera is outside the range of visible light, an infrared camera scene cannot be viewed by the human eye. As far-infrared radiation is emitted even by objects that are too dim to be seen in visible light, infrared technology can be used to see in the dark [24].
Fig. 6. Centralized Image Processing

Fig. 7. Road and Actor Scenario Canvas

In Fig. 7, a road driving scenario was created using MATLAB's Driving Scenario Designer app, in which the width of the road is 6 m; a few actors, such as a truck (green), pedestrians, a bicycle, and barriers, were placed. The waypoints of the ego vehicle (blue) are as follows:

Table I. Waypoints of the Car (Ego Vehicle)


     X (m)     Y (m)     Z (m)
1    2.4000    2.7000    0
2    6.5000    3.0000    0
3    12.2000   2.7000    0
4    17.2000   3.0000    0
5    22.7000   2.8000    0
6    29.3000   3.0000    0
7    34.6000   2.5000    0
8    42.2000   2.1000    0
9    49.2000   1.7000    0

A waypoint is a reference point that can be used for navigation and location. A waypoint can be a specific location's
latitude and longitude, a well-known structure, or a natural feature.
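To show how such waypoints are typically consumed, the sketch below stores the ego vehicle's waypoints from Table I and linearly interpolates the lateral position Y for an arbitrary longitudinal position X. This is a simple illustrative resampling in Python, not the MATLAB scenario itself.

# Ego-vehicle waypoints from Table I (X, Y in metres; Z = 0 on a flat road).
import numpy as np

waypoints = np.array([
    [ 2.4, 2.7], [ 6.5, 3.0], [12.2, 2.7], [17.2, 3.0], [22.7, 2.8],
    [29.3, 3.0], [34.6, 2.5], [42.2, 2.1], [49.2, 1.7],
])

def lateral_position(x_query_m: float) -> float:
    """Linearly interpolate the ego vehicle's Y position at a given X."""
    return float(np.interp(x_query_m, waypoints[:, 0], waypoints[:, 1]))

# Example: lateral offset of the ego vehicle halfway along the scenario.
print(lateral_position(25.0))  # ~2.87 m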

Fig. 8. Sensor Canvas

The sensor canvas depicts common sensor placement locations. By default, the camera recognises only actors and not lanes. To enable lane detections, we expanded the detection parameters section and set the detection type to objects & lanes.

Fig. 9. Bird Eye Plot


Cameras can detect the waypoints for navigation and location identification by detecting the characteristics listed in Table II, allowing the car to navigate reliably and safely. The driving environment characteristics determined by the cameras that help detect the waypoints for navigation and location identification include the following:

Table II. Characteristics of the road and the ability of cameras to detect them
1) Road features: The cameras detect features of the road such as lanes, curves, junctions, and other prominent features that help identify the route.
2) Objects: The cameras detect nearby objects, such as traffic signs, poles, trees, and other objects that help identify the environment.
3) Paths: The cameras detect the paths the car will take, such as turns, curves, and other path information that helps determine the route.
4) Traffic: The cameras detect the traffic on the road, such as the speed, direction, and lane changes of nearby vehicles, which help determine the route.
5) Weather: The cameras detect weather conditions, such as rain, snow, and fog, that may affect the route.
6) Lighting: The cameras detect lighting conditions such as day, night, sunrise, and sunset that help determine the route.
7) Landmarks: The cameras detect landmarks that help identify the location, such as buildings, monuments, and other landmarks.

Cameras and radar differ in several respects. Cameras use visible or infrared light to capture images, produce a two-dimensional image of an object, can detect only objects that are visible to them, and require a light source to take images. Radar, by contrast, uses radio waves and is highly accurate at detecting objects; it can detect objects in multiple dimensions, can detect objects that are not visible to a camera, and does not require a light source to detect objects.

LiDAR: LiDAR sensors are generally considered the most critical sensors for the safe and reliable functioning of a self-driving vehicle. LiDAR is similar to radar in the sense that it sends and receives waves and measures the time of flight to assess distance. Unfortunately, LiDAR senses only depth and therefore cannot distinguish flat, depthless elements such as traffic lights or road signs. Spinning LiDARs are used to create a 360° view of the environment around the vehicle [25]. The device essentially consists of a series of laser emitters built to cover the entire field of view, sweeping hundreds of times per second to provide real-time data that keeps up with changes in the environment [25].

One of the most praised sensors in the autonomous area is LiDAR (light detection and ranging). Two types of LiDAR exist:

• Bathymetric: Bathymetric LiDAR measures seafloor and riverbed heights using water-penetrating green light.
• Topographic: Using a near-infrared laser, topographic LiDAR can map distances in real time.

LiDAR devices emit eye-safe laser beams. The beams strike obstacles in the surroundings and bounce back to the detector. The returned beams are gathered into a point cloud, producing a three-dimensional representation of the environment. Digital 3D representations of the physical world are called point cloud environmental maps; data gathered from sensors on autonomous vehicles, or from other sources, is used to build these maps. LiDAR helps the vehicle to sense anything in its environment, whether houses, cars, animals, or pedestrians, and makes it possible for self-driving cars to have a 3D view of their surroundings. LiDAR remote sensing technology uses laser light pulses to measure the distance between the target surface and the sensor. The laser pulses are directed towards the surface and reflected back to the sensor. By measuring the time it takes for the light to travel back, the distance between the sensor and the object can be calculated. This distance information is then used to create a 3D map of the surface features, providing data that previously could not be obtained with traditional light sources.
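Each LiDAR return can be converted from the sensor's native polar measurements (range, azimuth, elevation) into Cartesian coordinates to build the point cloud described above. The Python sketch below shows this conversion under the simplifying assumption of a single, noise-free return per beam.

# Convert LiDAR returns (range, azimuth, elevation) into 3D point-cloud coordinates.
import numpy as np

def lidar_returns_to_points(ranges_m, azimuths_rad, elevations_rad):
    """Each return becomes one (x, y, z) point in the sensor frame."""
    r = np.asarray(ranges_m)
    az = np.asarray(azimuths_rad)
    el = np.asarray(elevations_rad)
    x = r * np.cos(el) * np.cos(az)   # forward
    y = r * np.cos(el) * np.sin(az)   # left
    z = r * np.sin(el)                # up
    return np.stack([x, y, z], axis=-1)

# Example: three returns from one sweep of a spinning LiDAR.
points = lidar_returns_to_points([10.0, 12.5, 8.2],
                                 np.deg2rad([0.0, 45.0, 90.0]),
                                 np.deg2rad([-2.0, 0.0, 1.0]))
print(points.shape)  # (3, 3): one (x, y, z) row per return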
Fig. 10. Perfect LiDAR Sensor vs Automotive LiDAR

Fig. 11. Rotating 2D LiDAR

2D LiDAR gathers data from the surroundings by firing a single laser beam perpendicular to the rotation axis onto
a revolving mirror.
Fig. 12. Rotating 3D LiDAR

Rotating 3D LiDAR is a type of laser scanning technology that is used to capture three-dimensional objects,
environments, and landscapes. This technology uses a spinning laser scanner to measure the distance and profile of
objects and surfaces in space [26]. Rotating 3D LiDAR is used in a wide range of applications, including autonomous
vehicles, mapping, surveying, and robotics [4].

With our comparative analysis of the various systems currently in use, we have tried to contribute to the field of fully automated vehicle perception systems. Conventional computer vision algorithms as well as more recent techniques, such as radar-based, LiDAR-based, and infrared sensing systems, are all considered in this comparison, which also offers an overview of current research trends in the industry.

Fig. 13. Overall performance analysis of perception system sensors: Automotive RADAR and Camera

The perception system in autonomous vehicles is a crucial component that helps the car recognize its environment
and make decisions accordingly. This system utilizes a variety of sensors, which we have explained in this paper,
to detect objects in the car’s immediate vicinity. By combining these sensors, the car can detect obstacles, lane
markings, and other vehicles in order to make safe decisions and navigate the environment. The continuous operation of the perception system is what allows the vehicle to make decisions more quickly and accurately than a human driver. This helps to improve the safety of the car and its passengers by reducing the risk of accidents and injuries. It also has the potential to reduce traffic congestion, as autonomous vehicles can react faster than human drivers in certain situations. This could result in fewer traffic jams, less pollution, and an overall smoother traffic flow. In addition, this persistence of the perception system can result in a safer driving experience for everyone on the road. By avoiding potential collisions, autonomous vehicles can help to reduce the number of injuries and fatalities caused by motor vehicle accidents. This could have a profound impact on society, potentially saving hundreds of thousands of lives each year.

Fig. 14. Papers published on Perception System in Autonomous Vehicles in respective years.

The Society of Automotive Engineers (SAE) has established six levels of automation for autonomous vehicles.
Level 0 is “No Automation”, where the driver is in control of all vehicle operations. Level 1 is “Driver Assistance”,
where the car assists the driver with one specific task. Level 2 is “Partial Automation”, where the car can control
acceleration, steering, and braking under certain conditions. Level 3 is “Conditional Automation”, where the car
can autonomously control the vehicle in certain situations, but the driver must remain prepared to take over. Level
4 is “High Automation”, where the car can autonomously control the vehicle in most situations, but the driver may
still be required to take over in certain circumstances. Level 5 is “Full Automation”, where the car can autonomously
control the vehicle in all situations.
Many road crashes could be avoided if automotive systems were implemented to assist human drivers, for example when applying the brakes. ADAS (Advanced Driver Assistance Systems) cannot completely prevent collisions, but they can better protect the vehicle from several of the human errors behind most traffic accidents. The role of ADAS is to prevent injuries and deaths by reducing the number of vehicle collisions and their serious impacts.
Fig. 15. Application of ADAS
ADAS concepts include Blind Spot Detection, Adaptive Cruise Control, Autonomous Intelligent Cruise Control, Automatic Parking, Navigation Systems, Night Vision, Driver Monitoring Systems, 5G and V2X, etc. With improvements in sensing technologies, telecommunication services, automation technologies, and computer vision, ADAS development has achieved positive results in traffic resource integration, driving environment monitoring, and real-time vehicle status [28,29]. ADAS consists of active safety and passive safety. Passive safety depends on devices such as bumpers, seat belts, and airbags to reduce damage [30]. However, passive safety alone cannot improve driving safety, as 93% of traffic collisions are caused by the driver's lack of alertness to hazards [31]. It has been reported that 90% of dangerous accidents could have been avoided if drivers had been warned just 1.5 seconds earlier [32]. Active safety is considered an important part of vehicles, as it is developed to predict and sense hazardous situations. Active safety measures include ESC (Electronic Stability Control), ABS (Anti-lock Braking System), ICA (Intersection Collision Avoidance), and LKA (Lane Keeping Assistance).
ADAS technologies include sensors (LiDAR, radar, and cameras) that are used to detect the presence of objects around the vehicle. These sensors can detect objects in the environment and then alert the driver or initiate automated braking or steering manoeuvres to avoid a collision. The challenge with ADAS lies in the accuracy and reliability of the sensors, as well as the ability to identify objects accurately and in a timely manner. If the sensors are unable to detect an object in time, the risk of a crash increases significantly. Additionally, the accuracy of the sensors' readings may be impacted by environmental conditions such as fog, rain, or snow, making them less reliable.
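As an illustration of the kind of decision logic such a system might apply, the sketch below computes a simple time-to-collision from a sensed range and closing speed and decides whether to warn the driver or trigger automated braking. The thresholds (2.5 s for a warning, 1.5 s for braking, the latter echoing the warning time reported in [32]) are assumed values for illustration, not figures from any production ADAS.

# Simplified forward-collision logic based on time-to-collision (TTC).
# Threshold values are illustrative assumptions only.
WARNING_TTC_S = 2.5
BRAKING_TTC_S = 1.5

def collision_response(range_m: float, closing_speed_mps: float) -> str:
    """Decide what the ADAS should do for a sensed obstacle ahead."""
    if closing_speed_mps <= 0:
        return "no action"             # obstacle is not getting closer
    ttc = range_m / closing_speed_mps  # seconds until impact at current speed
    if ttc < BRAKING_TTC_S:
        return "automatic emergency braking"
    if ttc < WARNING_TTC_S:
        return "forward collision warning"
    return "no action"

# Example: an object 30 m ahead with a closing speed of 15 m/s (TTC = 2 s).
print(collision_response(30.0, 15.0))  # forward collision warning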
How do autonomous cars work?
Sensors, actuators, sophisticated algorithms, machine learning, and efficient processors are the foundations on which autonomous vehicles execute their software. Autonomous cars build and maintain a map of their surroundings based on a variety of sensors placed in various parts of the body. The location of neighbouring vehicles is tracked by radar sensors. Video cameras monitor traffic signals, interpret road signs, watch vehicles, and check for pedestrians. LiDAR sensors bounce light pulses off the vehicle's environment for distance estimation and for identifying road edges and lane markings. Ultrasonic sensors sense curbs and other vehicles when parking [33]. Autonomous vehicles need to perceive the world differently from human beings in order to drive efficiently, and creating reliable vision for vehicles has been a major engineering barrier. However, by combining multiple sensors, researchers can design a sensing system that "sees" the world around the vehicle far better than human vision. The keys to this system are diversity (various sensor types) and redundancy (overlapping sensors that guarantee the car's detections are correct). Camera, radar, and LiDAR are three largely independent sensing modalities; together they provide a perception of the environment that captures the speed, distance, and three-dimensional shape of surrounding objects. Inertial measurement units additionally help monitor the acceleration and position of the car.
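The redundancy described above is often exploited by fusing independent range estimates into a single, more confident one. The sketch below shows one of the simplest possible fusion rules, an inverse-variance weighted average of per-sensor measurements; real AV stacks use far more sophisticated filters (e.g., Kalman or particle filters), so this is only an assumed, illustrative scheme with made-up noise values.

# Inverse-variance fusion of independent distance estimates from several sensors.
# Sensor noise values below are illustrative assumptions.
def fuse_measurements(measurements):
    """measurements: list of (value, variance) pairs from independent sensors."""
    weights = [1.0 / var for _, var in measurements]
    fused = sum(w * value for (value, _), w in zip(measurements, weights)) / sum(weights)
    fused_variance = 1.0 / sum(weights)
    return fused, fused_variance

# Example: radar, LiDAR, and camera estimates of the distance to the same object.
readings = [
    (41.8, 0.50),   # radar: good range accuracy
    (42.1, 0.10),   # LiDAR: best range accuracy
    (43.0, 4.00),   # camera: coarse, vision-derived range
]
distance, variance = fuse_measurements(readings)
print(round(distance, 2), round(variance, 3))  # ~42.07 m, variance ~0.082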

2. Literature Review
The literature review performed in this paper primarily deals with the sensing and perception-based techniques deployed in an autonomous vehicle. In the first part we discussed the main criteria for how an automated vehicle works; in the next section we delve into the basic embedded system products used to build the sensory units of the car. A comparison has been drawn in the form of spider charts taken from [34] to understand the capabilities and shortcomings of each sensor. The three primary sensory units are RADAR, LiDAR, and vision sensors [35].


Yassine Zein et al. [36] have developed an autonomous car mechatronics system that can memorise a route using GPS rather than relying on pre-saved maps, which become outdated and do not cover all routes in a country. The experiments revealed that the proposed system can operate with only small errors. Passengers and goods can be transported more cheaply and efficiently using the new system, which may be used on roads and inside campuses, airports, and industrial sites.
Raghu Changalvala et al. [37] proposed a semi-fragile data-hiding-based approach for real-time intruder detection and localisation of sensor data manipulation. When applied to the real-world benchmark dataset (KITTI), the suggested method is 100% accurate in detecting and locating tampering. According to the authors, the approach can be extended to other 3D point-cloud generating sensors by using dynamic message embedding techniques.
The work by Henry Alexander Ignatious et al. [38] is useful for researchers seeking an overview of the data types collected by various sensors, in order to better understand how sensors are used in AV decision-making. The article also presents an overview of the sensor types manufactured by leading vendors in the market.
A chapter by Piergiuseppe Mallozzi et al. [39], with a focus on software, analyses the current condition of autonomous vehicles and their potential trends and challenges. The authors divided autonomous vehicles into three categories based on their level of automation and outlined in detail the ecosystem of firms and organisations working on them. They analysed the present level of vehicle functionality and how it is incorporated into the overall design of the vehicle. Autonomous cars and self-adaptive systems were also discussed, as well as how these trends fit into the automotive sector while addressing future human interactions with vehicles.
Tyler G. R. Reid et al. [40], based on existing road safety advancements, established a safety integrity level that defines the maximum acceptable risk of failure per hour of operation. A comparison can be made with the 10^-8 failure probability per hour of operation required by aviation and rail localization integrity criteria.
The purpose of the article by Gabor Kiss and Eva Csilla Berecz [41] is to highlight the security concerns that should be investigated prior to the widespread adoption of this mode of transportation. During this research, a need for vehicle communication emerged, which might improve the efficiency of accident avoidance but must be well protected to prevent bogus signals from influencing other vehicles. The geometry of the challenge was then considered, with the purpose of keeping track of the car's location within its lane and determining which road level it is on.
Vipin Kumar Kukkala et al. [42] have presented a survey of various hardware and software Advanced Driver Assistance System (ADAS) technologies and their abilities and weaknesses. The authors discussed approaches used for vision-based recognition and sensor fusion in ADAS solutions, and also highlighted the challenges for the next generation of ADASs.
The goal of the research by Ishak Ertugrul and Osman Ulkir [43] is to undertake a COMSOL (a multiphysics simulation and finite element analysis software package) analysis of the inertial measurement unit (IMU), a microelectromechanical system (MEMS) sensor used in an inertial navigation system (INS). After providing information about the use of IMU sensors in autonomous vehicles, the study used the COMSOL tool to perform a finite element analysis of the gyroscope sensor.
For accurate object detection, Jihun Kim et al. [44] combined radar and vision sensors in their paper. However, because sensor-specific data has distinct coordinates, calibrating the data coordinates is necessary. The coordinate calibration procedures between radar and vision images are introduced in the study, and sensor calibration is performed using data collected from real sensors.
From the perspectives of sensor fusion, computer vision, system identification, and fault tolerance, the study by Amr Mohamed et al. [45] offers a critical overview of recent developments in the field of autonomous vehicles. The survey also outlines the practical concerns and technical difficulties associated with the development of such systems. According to the authors, many scholars and organisations have expressed interest in the research conducted on these vehicles, which has implications for both civilian and military uses. Indeed, autonomous vehicles are currently being employed in military activities such as inspection, surveillance, and rescue.
The major sensor technologies used to develop an autonomous car are reviewed in the paper by Sean Campbell et al. [46], which discusses how each of these sensors works, their benefits and drawbacks, and how sensor fusion techniques can be used to produce a more optimal and efficient autonomous car system.
An article by Jessica Van Brummelen et al. [47] gives a thorough overview of today's state-of-the-art AV perception technologies. It contains up-to-date information on the benefits, drawbacks, limitations, and ideal uses of individual AV sensors; the most common sensors in current research and commercial AVs; autonomous features now on the market; and current AV localization and mapping methodologies. The paper also identifies areas for future research and draws conclusions regarding the most effective approaches to AV perception and its impact on mapping and localization.
Xiangmo Zhao et al. [48] introduced a novel and fast multi-object identification approach that takes full advantage of the complementarity of 3D LiDAR and camera data to reliably identify multiple objects around an autonomous vehicle. The KITTI (Karlsruhe Institute of Technology and Toyota Technological Institute) benchmark results demonstrate that this technique generates an average of 86 non-repeating object candidate areas per frame, which is fewer than the pseudo candidate regions generated by other methods.

Table III. Analysis of key technologies, database used, findings and implementation in respective papers.

Reference [36]
Key technology: Instead of using outdated maps that do not include all routes in all countries, the proposed model of a mechatronics system for autonomous vehicles can memorise a route based on GPS (Global Positioning System).
Finding: The system is capable of autonomous bump detection and obstacle avoidance.
Implementation: A mechatronics system design for self-driving cars is proposed. The system is implemented in a small-scale automobile and experimental testing is carried out; the findings demonstrate that the suggested system has only a few small flaws and faults.

Reference [38]
Key technology: Many sensor types are studied based on important characteristics such as how they operate, what kind of data they collect, and their basic parameters; the pros and cons of every sensor category are also examined.
Finding: The authors compared the sensors used in autonomous vehicles and found that the main problems with wireless sensor networks are (i) choosing the right hardware and operating system, (ii) calibrating, deploying, and programming the sensor network, and (iii) synchronisation.

References [39], [45]–[48]
Database: Transportation Statistics Annual Report (TSAR) 2017; A Policy for Highway and Street Geometric Design; Federal Size Regulations for Commercial Motor Vehicles; United States Department of Transportation and Federal Highway Administration, "Roundabouts: An Informational Guide"; Washington State Department of Transportation, "Design Manual," Olympia, WA; Manual on Uniform Traffic Control Devices for Streets and Highways.
Finding: Topics addressed include machine learning, self-adaptive systems, and continuous software engineering, and how they apply to the automotive sector while considering how people will use the vehicle. Key issues that the authors think need to be resolved to ensure safety are highlighted.
Implementation: According to the vehicle type (mid-size, full size, standard pickup), accuracy, alert limit, and probability of failure are calculated.

Reference [32]
Finding: The authors raised several security-related questions, such as: Which traffic sign will the system recognise? What choice should be taken if there are no motorcycles or pedestrians? Possible solutions to these questions, helpful in some situations, are discussed.

Reference [40]
Key technology: To detect and locate tampering with real-time sensor data, the authors presented a unique semi-fragile data-hiding-based technique. For tamper detection and integrity verification, a binary watermark is added to the data at the sensing layer using a data-hiding method based on three-dimensional quantization index modulation (QIM).
Database: KITTI vision benchmark suite.
Finding: According to the experimental findings, the suggested approach can identify and pinpoint data tampering attacks, including target object deletion (TOD) and false object insertion (FOI). Robustness against noise-addition attacks is also assessed.
Implementation: As per the authors, the study can also be applied to other sensors that produce 3D point clouds and can be strengthened further by employing dynamic message embedding methods.

Reference [33]
Finding: The authors provided a thorough analysis of various ADAS variants along with a summary of the sensors employed in these versions. They discussed vision-based ADASs for both indoor and outdoor monitoring and detailed a taxonomy of ADASs based on the many types of sensors employed. The importance of modern communication systems such as V2X, and of sensor fusion techniques, to the development of autonomous cars was also addressed.

Reference [35]
Key technology: An inertial measurement unit (IMU) sensor based on microelectromechanical systems (MEMS) in an inertial navigation system is analysed in COMSOL; the gyroscope sensor is analysed with COMSOL.
Database: GTI; KITTI.
Finding: The research offered details on the finite element analysis and the structure of the IMU sensors used in autonomous vehicles.

Reference [36]
Key technology: Radar and vision images were used to create coordinate calibration methods, and sensor calibration was performed using data from real sensors. To increase the accuracy of object recognition in a vehicle environment, the authors combined radar and vision sensors.
Finding: It was concluded that an autonomous vehicle's surroundings can be detected using the high performance and simple implementation of this sensor fusion algorithm.

Reference [37]
Finding: The choice of sensor fusion estimation technique frequently depends on the type of problem at hand, along with the assumptions made for each technique. As the primary sensor in many autonomous systems, the camera has emerged as one of the most affordable, promising, and challenging methodological approaches for driverless cars in a variety of situations.
Implementation: By combining GPS/INS Kalman filter positioning techniques that use both loosely coupled and tightly coupled integration, future plans are to enhance the location data for the multi-wheeled combat vehicle. In the future, a multi-wheeled combat vehicle's system identification model will be constructed by determining the relationship between the system's input and output signals.

Reference [38]
Finding: This study identifies three key areas for improvement: (1) reduction of perception uncertainty; (2) cost reduction of perception systems; (3) operating safety for sensors and algorithms.
Implementation: While improving safety, sustainability, and mobility, autonomous vehicles (AVs) will shortly be operating on public roads.

3. Discussion
Various advancements in autonomous vehicles have been made from 1925 to the present. Over time, autonomous vehicles have gained popularity with the public as well as with car companies. AVs operate on the basis of data collected from the various sensors they are equipped with and react accordingly to objects in the environment. The sensing protocol of self-driving vehicles can manage and grasp data about the surroundings all around the vehicle with the help of a mix of high-tech sensors and cameras. We have discussed the various sensors used and their working, and have tried to simulate their behaviour with the help of the data passed to them. AVs have different levels of automation according to SAE, and they move on the road with the help of Advanced Driver Assistance Systems; although ADAS cannot totally avoid crashes, it can more effectively protect the car from some human errors in most traffic accident situations. The purpose of ADAS is to reduce the number of serious automobile incidents and their associated injuries and fatalities. The most challenging problem for autonomous vehicles today is the ethical dilemmas they face in making decisions. Autonomous vehicles must be able to make decisions that are both safe and ethical, such as how to react if they encounter a situation in which they must choose between hitting a pedestrian and swerving into oncoming traffic. While autonomous vehicles have the potential to reduce human error and improve driving efficiency, they are not yet able to replace human drivers. Autonomous vehicles are still developing and require further research and development before they can be relied upon to drive as safely and efficiently as humans.

4. Conclusion
There is a clear requirement for the fusion of different sensor devices to enable collision-free driving in different environments. Radar performs object detection using radio waves, measuring the time a signal takes to reflect from the target object; LiDAR similarly performs object detection and ranging using laser-based techniques, while cameras provide rich visual information. Cameras are affected by poor weather conditions; radar, however, is not impacted by changing light conditions and remains robust in adverse weather, and LiDAR is likewise unaffected by changing light, which helps the vehicle to drive safely. The working of the camera, LiDAR, and RADAR sensors is shown with the help of diagrams, which give a clear idea of the internal functioning of the sensors equipped in the vehicle. After carrying out the literature survey, we have drawn up a table showing key technologies, findings, future scope, etc., which can be very helpful for future research in the field of autonomous vehicles.

5. Future Scope
Perception technologies will have a very broad future application in fully automated cars. These systems will
get more complex as technology develops, enabling more precise and reliable observation of the environment
and more dependable navigation. Higher resolution cameras and sensors, enhanced object recognition and
scene understanding algorithms, and enhanced data fusion and decision-making techniques are all things we
may anticipate in the near future. Accurate perception is even more important in these settings because it is necessary to deal with unpredictable and dynamic objects. The creation of vehicle-to-vehicle (V2V)
communication, improved navigational tools, enhanced object recognition, and enhanced situational
awareness are some potential uses. Vehicles would "speak" to one another through vehicle-to-vehicle (V2V)
communication, alerting drivers to potential hazards. In order to provide drivers with more effective routes,
improved navigation systems might take into account weather trends, traffic patterns, and other environmental
elements. A safer environment for drivers would result from automated vehicles being able to perceive,
classify, and respond to other vehicles and hazards on the road with greater accuracy due to improved object
detection. Finally, enhanced situational awareness would enable automated vehicles to recognise possible
threats earlier and respond to them more effectively.
Finally, it is expected that a substantial percentage of future research will go into creating perception systems
that are appropriate for both indoor and outdoor environments.

References
1. C. Badue et al., “Self-Driving Cars: A Survey,” arXiv Prepr. arXiv1901.04407, 2019.
2. https://www.skynettoday.com/editorials/autonomous_vehicles
3. https://techinspection.net/development-history-and-future-of-self-driving-autonomous-cars/
4. https://www.eesi.org/papers/view/issue-brief-autonomous-vehicles-state-of-the-technology-and-potential-role-as-a-
climate-solution
5. T. Brell, R. Philipsen, and M. Ziefle, “sCARy! Risk Perceptions in Autonomous Driving: The Influence of Experience on
Perceived Benefits and Barriers,” Risk Analysis, vol. 39, no. 2, pp. 342–357, 2019.
6. H. Zhao, Y. Zhang, P. Meng, H. Shi, L. E. Li, T. Lou, and J. Zhao, “Driving scenario perception-aware computing system
design in autonomous vehicles,” in 2020 IEEE 38th International Conference on Computer Design (ICCD). IEEE, 2020,
pp. 88–95.
7. S.-C. Lin, Y. Zhang, C.-H. Hsu, M. Skach, M. E. Haque, L. Tang, and J. Mars, “The Architectural Implications of
Autonomous Driving: Constraints and Acceleration,” in International Conference on Architectural Support for
Programming Languages and Operating Systems (ASPLOS), 2018, pp. 751–766.
8. H. Zhao, Y. Zhang, P. Meng, H. Shi, L. E. Li, T. Lou, and J. Zhao, “Towards Safety-Aware Computing System Design in
Autonomous Vehicles,” arXiv preprint arXiv:1905.08453, 2019.
9. E. Fraedrich and B. Lenz, Automated Driving, vol. 2416. 2014.
10. P. Kaur and A. K. Luhach, “An approach to improve test path generation: Inclination towards automated model-based
software design and testing,” in 2016 5th International Conference on Reliability, Infocom Technologies and Optimization,
ICRITO 2016: Trends and Future Directions, 2016.
11. P. Kaur and R. Sobti, “Current challenges in modelling advanced driver assistance systems: Future trends and
advancements,” in 2017 2nd IEEE International Conference on Intelligent Transportation Engineering, ICITE 2017, 2017.
12. M. Maurer, J. C. Gerdes, B. Lenz, and H. Winner, Autonomous driving: Technical, legal, and social aspects. 2016.
13. S. Pendleton et al., “Perception, Planning, Control, and Coordination for Autonomous Vehicles,” Machines, vol. 5, no. 1,
p. 6, 2017.
14. C. Katrakazas, M. Quddus, W. Chen, and L. Deka, “Real-time motion planning methods for autonomous on-road driving:
State-of-the-art and future research directions,” Transp. Res. Part C, vol. 60, pp. 416–442, 2015.
15. J. Villagra et al., Automated Driving. Elsevier Inc., 2018.
16. D. Gruyer, S. Demmel, V. Magnier, and R. Belaroussi, “Multi-Hypotheses Tracking using the Dempster-Shafer Theory,
application to ambiguous road context,” Inf. Fusion J., vol. 29, pp. 40–56, 2016.
17. F. Rosique, P. J. Navarro, C. Fernández, and A. Padilla, “A Systematic Review of Perception System and Simulators for
Autonomous Vehicles Research,” Sensors Journal, MDPI, pp. 1–29, 2019.
18. R. Vivacqua, A Low-Cost Sensors Approach for Accurate Vehicle Localization and Autonomous Driving Application.
2017.
19. Zein, Y., Darwiche, M., & Mokhiamar, O. (2018). GPS tracking system for autonomous vehicles. Alexandria engineering
journal, 57(4), 3127-3137.
20. Changalvala, R., & Malik, H. (2019). LiDAR data integrity verification for autonomous vehicle. IEEE Access, 7, 138018-
138031.
21. Ignatious, H. A., & Khan, M. (2022). An overview of sensors in Autonomous Vehicles. Procedia Computer Science, 198,
736-741.
22. Mallozzi, P., Pelliccione, P., Knauss, A., Berger, C., & Mohammadiha, N. (2019). Autonomous vehicles: state of the art,
future trends, and challenges. Automotive Systems and Software Engineering, 347-367.
23. Reid, T. G., Houts, S. E., Cammarata, R., Mills, G., Agarwal, S., Vora, A., & Pandey, G. (2019). Localization requirements
for autonomous vehicles. arXiv preprint arXiv:1906.01061.
24. Kiss, G., & Berecz, É. C. (2019, January). Questions of Security in the World of Autonomous Vehicles. In Proceedings of
the 2019 the 5th International Conference on e-Society, e-Learning, and e-Technologies (pp. 109-115).
25. Kukkala, V. K., Tunnell, J., Pasricha, S., & Bradley, T. (2018). Advanced driver-assistance systems: A path toward
autonomous vehicles. IEEE Consumer Electronics Magazine, 7(5), 18-25.
26. F. Rosique, P. J. Navarro, C. Fernández, and A. Padilla, "A systematic review of perception system and simulators for
autonomous vehicles research," Sensors, vol. 19, no. 3, p. 648, 2019.
27. I. F. Akyildiz, W. Su, Y. Sankarasubramaniam, and E. Cayirci, "A survey on sensor networks," IEEE Communications
magazine, vol. 40, no. 8, pp. 102-114, 2002.
28. S. Ma and Z. Zhang, "Computer vision," Publishing House of Scientific, BeiJing, 1998.
29. L. Wang et al., "Advanced driver-assistance system (ADAS) for intelligent transportation based on the recognition of traffic
cones," Advances in Civil Engineering, vol. 2020, 2020.
30. L. Jiaxing and W. Wei, "Driver Fatigue Monitoring and Warming System Based-on Mulit-parameter Fusion [D]," Lanzhou
University, 2013.
31. Ertugrul, I., & Ulkir, O. (2020). Analysis of MEMS-IMU Navigation System Used in Autonomous Vehicles. In
Autonomous Vehicle and Smart Traffic. IntechOpen.
32. Kim, J., Han, D. S., & Senouci, B. (2018, July). Radar and vision sensor fusion for object detection in autonomous vehicle
surroundings. In 2018 Tenth International Conference on Ubiquitous and Future Networks (ICUFN) (pp. 76-78). IEEE.
33. Mohamed, A., Ren, J., El-Gindy, M., Lang, H., & Ouda, A. N. (2018). Literature survey for autonomous vehicles: sensor
fusion, computer vision, system identification and fault tolerance. International Journal of Automation and Control, 12(4),
555-581.
34. Campbell, S., O'Mahony, N., Krpalcova, L., Riordan, D., Walsh, J., Murphy, A., & Ryan, C. (2018, June). Sensor
technology in autonomous vehicles: A review. In 2018 29th Irish Signals and Systems Conference (ISSC) (pp. 1-4). IEEE.
35. Van Brummelen, J., O’Brien, M., Gruyer, D., & Najjaran, H. (2018). Autonomous vehicle perception: The technology of
today and tomorrow. Transportation research part C: emerging technologies, 89, 384-406.
36. Zhao, X., Sun, P., Xu, Z., Min, H., & Yu, H. (2020). Fusion of 3D LIDAR and camera data for object detection in
autonomous vehicle applications. IEEE Sensors Journal, 20(9), 4901-4913.
37. P. Kaur and R. Sobti, “Optimized MBT-Test Case Generation for Embedded System Controller Using LabVIEW and
Sequence Graphs,” Springer Adv. Intell. Syst. Comput. 709, pp. 283–294, 2018.
38. N. Kaprocki, “Multiunit automotive perception framework : Synergy between AI and deterministic processing,” 2019 IEEE
9th Int. Conf. Consum. Electron., pp. 257–260, 2019.
39. J. Zolock, C. Senatore, R. Yee, R. Larson, and B. Curry, “The Use of Stationary Object Radar Sensor Data from Advanced
Driver Assistance Systems ( ADAS ) in Accident Reconstruction,” SAE Tech. Pap., 2016.
40. J. Koci, N. Jovi, and V. Drndarevi, “Sensors and Sensor Fusion in Autonomous Vehicles,” in 26th IEEE
Telecommunications forum TELFOR 2018, 2019, no. November 2018.
41. S. Behere and M. Törngren, Automated Driving. 2017.
42. P. Morignot, J. P. Rastelli, and F. Nashashibi, "Arbitration for balancing control between the driver and ADAS systems in
an automated vehicle: Survey and approach," in 2014 IEEE Intelligent Vehicles Symposium Proceedings, 2014: IEEE, pp.
575-580.
43. A. Ziebinski, R. Cupek, H. Erdogan, and S. Waechter, “A Survey of ADAS Technologies for the Future Perspective of
Sensor Fusion,” Comput. Collect. Intell. ICCCI 2016. Lect. Notes Comput. Sci. vol 9876. Springer, Cham, vol. 2, pp.
135–146, 2016.
44. P. Kaur and R. Sobti, “Pedestrian and Vehicle detection in automotive embedded systems using deep neural networks,” Int.
Conf. Recent Innov. Electr. Electron. Commun. Eng. -, no. IEEE, pp. 2105–2109, 2020.
45. R. Sagar, “Making cars safer through technology innovation,” Texas Instruments White Paper, 2017.
46. U.S. Department of Transportation, “Transportation Statistics Annual Report (TSAR) 2017,” Washington, DC, 2017.
47. American Association of State Highway and Transportation Officials, “A Policy on Geometric Design of Highways and
Streets,” 2001.
48. U.S. Department of Transportation Federal Highway Administration, “Federal Size Regulations for Commercial Motor
Vehicles,” Washington, DC, p. 17, 2017
