
Tomorrow’s Mobility

Sustainable Technologies for the automotive sector

Week 4 – Session 3 – Autonomous Vehicle Key Technologies

Guillaume Bresson

Introduction

I- Autonomous vehicles – key technologies


II- Autonomous driving – challenges
III- Status of deployment of autonomous vehicles

Conclusion

© IFPEN / IFP School 2018


Introduction
This lesson starts with the key technologies of autonomous vehicles, followed by the remaining
technological challenges and, finally, a discussion of the deployment of autonomous vehicles.

I- Autonomous vehicles – key technologies


Some key technologies for autonomous driving should be explained first, starting with sensor
technology.
The algorithms employed in autonomous driving are tightly linked to the sensors. There are two
kinds of sensors:
The first kind, called proprioceptive, gives information about the state of the vehicle and its dynamics.
Typically, inertial measurement units belong to this group of sensors. They include accelerometers,
gyroscopes and inclinometers. These sensors allow us to compute speed in three dimensions. They are
often coupled with high-precision Global Navigation Satellite Systems, or GNSS, to compensate for
short signal losses.
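To make this coupling concrete, here is a minimal sketch, assuming a single axis and a fixed blending weight: the IMU acceleration is integrated into speed and position estimates (dead reckoning), and sparse GNSS fixes pull the drifting estimate back. All names and values are illustrative; a real system would use a Kalman filter in three dimensions.

```python
# Illustrative sketch (not from the lesson): 1-D dead reckoning from an
# accelerometer, periodically corrected by GNSS fixes to limit drift.

def fuse_imu_gnss(accel_samples, gnss_fixes, dt, alpha=0.98):
    """accel_samples: acceleration [m/s^2] at each time step,
    gnss_fixes: dict {step_index: measured position [m]}, possibly sparse,
    dt: sampling period [s], alpha: weight given to the IMU prediction."""
    velocity, position = 0.0, 0.0
    trajectory = []
    for k, a in enumerate(accel_samples):
        velocity += a * dt                 # integrate acceleration -> speed
        position += velocity * dt          # integrate speed -> position
        if k in gnss_fixes:                # GNSS fix available: blend it in
            position = alpha * position + (1 - alpha) * gnss_fixes[k]
        trajectory.append(position)
    return trajectory
```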
Exteroceptive sensors give indications about the environment. The most commonly used are
cameras, radars and lasers. Cameras produce images, providing rich information, but this requires
heavy processing. This usually limits the size of the images that can be analyzed in real time, and
thus the distance at which objects can be detected. Radars and lidars emit a signal that is reflected
by the obstacles it hits. By measuring the time between emission and reception, the distance to an
obstacle can be deduced. Lasers emit infrared light and usually scan the environment. 3D versions
exist: the laser then scans not only horizontally, but also vertically.
Radars and 2D lasers can be quite accurate regarding position. However, it is difficult to identify the
kind of object with this type of sensor.
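The time-of-flight principle mentioned above reduces to a one-line formula; the small sketch below (illustrative only) shows it for a signal travelling at the speed of light, as for radars and lidars.

```python
# Time-of-flight ranging as used by radars and lidars: the round-trip time
# of the emitted signal gives the distance to the reflecting obstacle.

SPEED_OF_LIGHT = 299_792_458.0  # m/s

def time_of_flight_distance(round_trip_time_s):
    """Distance to the obstacle, given the emission-to-reception delay."""
    return SPEED_OF_LIGHT * round_trip_time_s / 2.0  # divide by 2: out and back

# Example: a 1.33 microsecond echo corresponds to roughly 200 m.
print(time_of_flight_distance(1.33e-6))  # ~199.4 m
```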
Sensors
Distinction between proprioceptive and exteroceptive sensors:
• Proprioception: inertial measurement unit (accelerometer, gyroscope, inclinometer), encoder, etc.
• Exteroception: camera, laser, radar, GNSS, ultrasonic sensors, etc.

The figure below compares the range and field of view of the different sensors. It is important to keep in
mind that these values are specific to particular sensors and may differ in commercial products.
2D lidars, or 3D ones scanning over a small vertical angle, and long-range radars can usually see up
to 200 m. These sensors are mainly used for obstacle detection and tracking.
3D lidars in the same price range as 2D lidars usually see less far (around 100 meters). 3D
lidars scan vertically, giving a 3D point cloud that can be used by localization and mapping
algorithms, or even for the detection of obstacles and infrastructure. High-end 3D lidars can see
farther but are more expensive.
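As an illustration of how such a point cloud can feed a localization algorithm, here is a hedged sketch that aligns a new lidar scan with a reference cloud using ICP. It assumes the open3d library is available; the distance threshold and the point-to-point variant are arbitrary choices for the example, not the lesson's method.

```python
# Hedged sketch: localizing a lidar scan against a reference point cloud
# with ICP (iterative closest point). Parameter values are illustrative.
import numpy as np
import open3d as o3d

def localize_scan(scan_xyz, map_xyz, initial_guess=np.eye(4)):
    """scan_xyz, map_xyz: (N, 3) arrays of lidar points. Returns the 4x4
    pose of the scan in the map frame estimated by point-to-point ICP."""
    scan = o3d.geometry.PointCloud(
        o3d.utility.Vector3dVector(np.asarray(scan_xyz, dtype=np.float64)))
    ref = o3d.geometry.PointCloud(
        o3d.utility.Vector3dVector(np.asarray(map_xyz, dtype=np.float64)))
    result = o3d.pipelines.registration.registration_icp(
        scan, ref,
        max_correspondence_distance=1.0,   # meters, tuning value
        init=initial_guess,
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation
```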

There are also short-range radars, with a large field of view and a reduced perception distance.
The main advantage of radars over lasers is that they are not affected by difficult weather
conditions.
The perception distance of cameras is variable and depends on the resolution of the image. A higher
resolution usually means more computationally expensive processing. This tradeoff usually
results in a perception distance of 50 or 60 meters. Cameras are affected by rain, snow, fog or direct
sunlight. They also do not directly provide depth information, contrary to radars and lidars.
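The link between image resolution and perception distance can be made concrete with the pinhole camera model: the farther an object, the fewer pixels it covers, so at some distance a detector simply runs out of pixels. The resolution and field of view below are purely illustrative assumptions.

```python
# Illustrative only: how many pixels a car-sized object covers at a given
# distance, for an assumed camera resolution and horizontal field of view.
import math

def pixels_on_target(target_width_m, distance_m, image_width_px=1280, hfov_deg=60.0):
    """Approximate horizontal pixel footprint of an object (pinhole model)."""
    focal_px = (image_width_px / 2) / math.tan(math.radians(hfov_deg) / 2)
    return focal_px * target_width_m / distance_m

print(round(pixels_on_target(1.8, 60)))   # a 1.8 m wide car at 60 m: ~33 px
print(round(pixels_on_target(1.8, 200)))  # at 200 m it shrinks to ~10 px
```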
[Figure: comparison of range and field of view for lidars, 3D lidars, cameras and radars]

The data provided by the above sensors need to be processed and interpreted. The figure below
shows a simple loop that is commonly used to depict how an autonomous vehicle works. On the
perception side, the vehicle needs to be localized inside its environment and the surroundings have
to be identified and understood: all potential obstacles have to be detected, tracked through time
and their future positions predicted, lanes have to be detected, etc.
Before going to the planning phase, there are usually algorithms that take into account what is
known about the environment and predict the long-term behavior of other road users or pedestrians.
This information can be very useful for the planning phase, as it allows the vehicle, for instance, to avoid
sudden braking if a pedestrian eventually decides to cross the road.
The planning phase includes itinerary and trajectory planning. It takes into account other obstacles,
their speed and road constraints. Decision and behavior are also a part of this phase.
[Figure: basic functions of an autonomous vehicle – the perception / prediction / planning loop]
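The loop can also be summarized in pseudocode; every function below is a placeholder for a whole family of algorithms, not an actual API.

```python
# Schematic sketch of the perception / prediction / planning loop described
# above; all names are placeholders, not a real framework.

def driving_loop(sensors, map_data, vehicle):
    while vehicle.is_active():
        measurements = sensors.read()                 # cameras, lidars, radars, IMU, GNSS
        pose = localize(measurements, map_data)       # where am I in the map?
        obstacles = detect_and_track(measurements)    # cars, pedestrians, lanes...
        predictions = predict_behaviors(obstacles)    # where will they go next?
        trajectory = plan(pose, obstacles, predictions, map_data)  # itinerary + local trajectory
        vehicle.execute(trajectory)                   # steering, throttle, braking
```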

Based on this overview of the sensors and the basic functions needed for an autonomous car,
some of the recent technological evolutions can now be discussed.

The rise of deep learning in recent years has considerably changed the way we approach perception.
Deep learning is not new, but the appropriate hardware and the right amount of data are now
available to develop these algorithms and fully exploit them. Also, not only have the
algorithms evolved, sensor performance has improved too. Prior knowledge, what we call
maps of the environment, is getting better and better, with more information and higher accuracy.
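As an illustration of what deep-learning-based perception looks like in practice, here is a hedged sketch using an off-the-shelf pretrained detector from torchvision (assumed available, version 0.13 or later); production vehicles use their own networks and embedded hardware, so this is only a stand-in.

```python
# Hedged illustration of camera-based perception with a generic pretrained
# deep detector; not the specific models used by the vehicles in this lesson.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import to_tensor
from PIL import Image

model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()  # torchvision >= 0.13

def detect_objects(image_path, score_threshold=0.7):
    """Return bounding boxes, class ids and scores for one camera frame."""
    image = to_tensor(Image.open(image_path).convert("RGB"))
    with torch.no_grad():
        output = model([image])[0]          # dict with boxes, labels, scores
    keep = output["scores"] > score_threshold
    return output["boxes"][keep], output["labels"][keep], output["scores"][keep]
```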
Recent technological evolution
Some "old" methods now work well: deep learning
• Appropriate hardware to process data
• Sufficient amount of data
• New actors involved in autonomous driving, like Nvidia

Progressive sensor evolution
• Better performance, lower price
• Much more competitive
• Examples for lidars: Valeo, Continental, IBEO, LeddarTech, Quanergy, Phantom, etc.

New maps
• Seen as a sensor
• HERE, TomTom, Ushr, Mobileye, etc.

II- Autonomous driving – challenges


In order to grasp why autonomous vehicles are not already everywhere, it is important to
understand the scope of what they are expected to do.
So what are the challenges?
Autonomous driving is hard, mainly because the road is shared. It seems obvious, but being able to
know exactly what other vehicles on the road will do would make autonomous driving much easier
to achieve.
The driver of a car is confronted with a variety of situations that require very specific reactions. The
response is partly linked to passive or active communication between road users. For instance,
recognizing a cop asking the driver to pull over is a very specific task. It requires a dedicated
recognition method, based on the outfit of the person. Many examples of this nature exist, and
taking them all into account exhaustively is a challenge, especially with limited resources.
A combination of different algorithms and sensors is often needed, depending on the situation.
The number of sensors will certainly be limited by cost. As sensors are also evolving, there is no clear
consensus on which ones are needed to cover all the possible use cases.
Autonomous driving is also hard because the environment is not controlled, starting with the weather
conditions: a scene explored during spring will not be the same during fall. Another example is
buildings: they are built and demolished. The paint of lane markings wears off, new lanes are
created, intersections are modified: it is very complex.
Additionally, there are cultural differences: establishing a typical vehicle behavior for a country, or
even a city, does not mean it will work elsewhere.
Finally, interpretation and decisions must be made very quickly, usually using only partial information.

Challenges

Autonomous driving is hard mainly because the road is shared
Variety of situations:
• How do I recognize a cop asking me to pull over?
• How do I cross roundabouts with almost no structure?
• How do I anticipate a pedestrian crossing the road?
• When do I decide to overtake a stationary car in a situation with limited visibility?
• How do I react if I took the wrong decision?
Different algorithms/sensors are needed for different situations

Autonomous driving is hard also because the environment is not controlled
Variety of situations:
• Weather has an impact on sensors and algorithms: snow, heavy rain, glowing sun, etc.
• Time has an impact as well: seasonality, wear, etc.
• Driving in France is different from driving in India
Understanding the scene is extremely difficult: constrained both by the time and the amount of
information available to establish a decision

III- Status of deployment of autonomous vehicles


So, what is the current status in terms of deployment? Here is a short overview of the kind of
automation now being sold by car manufacturers, with respect to the automation level classification
from SAE International.
Everyone is selling cars integrating at least adaptive cruise control, or lane keeping assist in high-
end models.
The first vehicle sold to customers that could be classified as level 3 is coming in 2019. It is the Audi
A8, and it will be able to handle traffic jam situations up to a certain speed. When the system reaches
its limits, the driver will have 10 seconds to take back control of the car.
Although everyone is working on levels 4 and 5, no commercial product exists today in 2018. What
Google has been showcasing since 2010, and what Uber has been testing during the last couple of
years, can be considered as level 4 automation.
Deployment
• Level 1: everyone
• Level 2: almost everyone
• Level 3: Audi
• Levels 4 and 5: everyone is working on it

When are these autonomous vehicles coming?


The European Road Transport Research Advisory Council has established several scenarios.
These road maps forecast when we can expect the technology to be mature enough to be
integrated as commercial products.

The road map below shows, on the left side, that we can expect automated vehicles in highway
and traffic jam situations by 2024. Fully automated cars, however, are planned beyond 2030.
On the right side of the road map, the expected development of shuttles is shown. The first
autonomous shuttles can be expected by 2022 on dedicated roads, before moving to mixed traffic by
2028.
[Figure: ERTRAC road map – automated cars (left) and autonomous shuttles (right)]

Conclusion
So, as a quick synthesis, self-driving cars work with sensors to:
• locate and position the vehicle inside existing maps,
• detect the environment: other cars, obstacles, bikes, pedestrians, and so on,
• and identify the relevant infrastructure, like traffic lights, or temporary changes such as road works.
Based on this information, the vehicle plans its trajectory using algorithms that take into
account prior knowledge about the environment and that predict the long-term behavior of other road
users or pedestrians.

Autonomous driving is extremely hard, mainly because the road is shared and the environment is
not controlled. Indeed, the algorithms need to predict what other vehicles will do, and take
decisions quickly based on partial information.
