Guillaume Bresson
The figure below compares the range and field of view of the different sensors. Keep in mind that these values are specific to each sensor model and may differ in commercialized products.
2D lidars (or 3D lidars scanning over a small vertical angle) and long-range radars can usually see up to 200 m. These sensors are mainly used for obstacle detection and tracking.
3D lidars in the same price range as 2D lidars usually have a shorter range (around 100 m). They scan vertically as well as horizontally, producing a 3D point cloud that can feed localization and mapping algorithms, or even the detection of obstacles and infrastructure. High-end 3D lidars can see farther but are more expensive.
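To make the difference between these scanning patterns concrete, here is a minimal sketch of how a point cloud might be filtered by range and vertical field of view. All the numbers (ranges, angles) are illustrative assumptions based on the orders of magnitude above, not specifications of any particular sensor.

```python
import numpy as np

# Hypothetical point cloud: an N x 3 array of (x, y, z) points in meters,
# expressed in the sensor frame (x forward, y left, z up).
points = np.random.uniform(-150.0, 150.0, size=(10_000, 3))

# Range and elevation angle of each point as seen from the sensor.
ranges = np.linalg.norm(points, axis=1)
elevation = np.degrees(np.arctan2(points[:, 2],
                                  np.linalg.norm(points[:, :2], axis=1)))

# A 2D lidar (or a 3D one with a small vertical angle) sees a thin slice,
# but far away: here an assumed +/- 2 degree fan and 200 m maximum range.
slice_2d = points[(ranges < 200.0) & (np.abs(elevation) < 2.0)]

# A mid-range 3D lidar sees a much wider vertical fan, but less far away.
cloud_3d = points[(ranges < 100.0) & (np.abs(elevation) < 15.0)]

print(f"2D-like slice: {len(slice_2d)} points, 3D cloud: {len(cloud_3d)} points")
```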
The data provided by the above sensors need to be processed and interpreted. The figure below shows a simple loop that is commonly used to depict how an autonomous vehicle works. On the perception side, the vehicle needs to be localized inside its environment and the surroundings have to be identified and understood: potential obstacles must be detected, tracked through time, and have their future positions predicted; lanes must be detected; and so on.
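As a toy illustration of the detect-and-track part of this loop, the sketch below maintains a list of tracked obstacles from successive (x, y) detections, using nearest-neighbour association and a finite-difference velocity estimate. This is a deliberately simplified assumption: a real perception stack would use a Kalman filter (or similar) and far more robust data association.

```python
import math
from dataclasses import dataclass

@dataclass
class Track:
    """One tracked obstacle: position (m) and estimated velocity (m/s)."""
    x: float
    y: float
    vx: float = 0.0
    vy: float = 0.0

def perception_step(tracks, detections, dt, gate=2.0):
    """One iteration of the loop: match each track to the nearest detection
    within `gate` meters of its predicted position, update its velocity from
    the observed displacement, and open new tracks for leftover detections."""
    unmatched = list(detections)
    for t in tracks:
        if not unmatched:
            break
        px, py = t.x + t.vx * dt, t.y + t.vy * dt  # constant-velocity prediction
        d = min(unmatched, key=lambda p: math.hypot(p[0] - px, p[1] - py))
        if math.hypot(d[0] - px, d[1] - py) < gate:
            t.vx, t.vy = (d[0] - t.x) / dt, (d[1] - t.y) / dt
            t.x, t.y = d
            unmatched.remove(d)
    tracks.extend(Track(x, y) for x, y in unmatched)

# Two synthetic frames: one obstacle moving at roughly 1 m/s along x.
tracks = []
perception_step(tracks, [(10.0, 0.0)], dt=0.1)
perception_step(tracks, [(10.1, 0.0)], dt=0.1)
print(tracks[0])  # velocity estimate should be close to (1.0, 0.0) m/s
```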
Before going to the planning phase, there are usually algorithms that take into account what is known about the environment and predict the long-term behavior of other road users and pedestrians. This information is quite useful for the planning phase: it allows the vehicle, for instance, to avoid sudden braking if a pedestrian decides to cross the road.
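As a minimal sketch of this idea, assuming a constant-velocity pedestrian model and an ego lane approximated by a lateral band, the prediction below tells the planner how soon a pedestrian may enter the lane, so the vehicle can ease off early instead of braking hard. The model, lane geometry, and all numbers are illustrative assumptions.

```python
def time_to_lane_entry(py, vy, lane_y=(-2.0, 2.0), horizon=4.0, step=0.5):
    """Extrapolate a pedestrian's lateral position py (m) at constant
    lateral velocity vy (m/s) and return the first time within `horizon`
    seconds at which it enters the ego-lane band `lane_y`, or None."""
    t = 0.0
    while t <= horizon:
        y = py + vy * t
        if lane_y[0] <= y <= lane_y[1]:
            return t
        t += step
    return None

# A pedestrian 5 m to the left of the lane center, walking toward it at 1.5 m/s:
t_cross = time_to_lane_entry(py=5.0, vy=-1.5)
if t_cross is not None:
    print(f"Predicted lane entry in {t_cross:.1f} s -> start slowing down now")
```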
The planning phase includes itinerary and trajectory planning. It takes into account other obstacles, their speed, and road constraints. Decision-making and behavior are also part of this phase.
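To give a flavor of how such constraints combine, here is a deliberately simplified speed-selection sketch: it rejects candidate speeds that break the road constraint (speed limit) or leave too little time to a lead vehicle, then keeps the fastest remaining option. Real planners optimize full trajectories rather than a single scalar speed; every name and number here is an assumption for illustration.

```python
def plan_speed(candidates, gap_m, lead_speed, speed_limit, min_headway_s=2.0):
    """Pick a target speed (m/s) among `candidates`: drop speeds above the
    limit or that would close the gap to the lead vehicle in less than
    `min_headway_s` seconds, then prefer the fastest feasible speed."""
    feasible = [
        v for v in candidates
        if v <= speed_limit
        and gap_m / max(v - lead_speed, 1e-6) >= min_headway_s
    ]
    return max(feasible, default=0.0)

# A 30 m gap to a lead car driving 10 m/s, on an 18 m/s road:
v = plan_speed([5.0, 10.0, 15.0, 20.0], gap_m=30.0,
               lead_speed=10.0, speed_limit=18.0)
print(v)  # 15.0: 20 m/s exceeds the limit; closing at 5 m/s still leaves 6 s
```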
Basic functions
Based on this overview of sensors and the basic functions needed for an autonomous car, we can now discuss some of the recent technological evolutions.
New maps
• Seen as a sensor
• Players here include TomTom, Ushr, Mobileye, etc.
Conclusion
So, as a quick synthesis, self-driving cars use sensors to:
• locate and position the vehicle inside existing maps,
• detect the environment: other cars, obstacles, bikes, pedestrians, and so on,
• and identify the relevant infrastructure, like traffic lights or temporary changes such as road works.
Based on this information, the vehicle plans its trajectory using algorithms that take into account prior knowledge about the environment and predict the long-term behavior of other road users and pedestrians.
Autonomous driving is extremely hard, mainly because the road is shared and the environment is not controlled. Indeed, the algorithms need to predict what other vehicles will do and make decisions quickly, based on partial information.