The dream of self-driving cars traces its roots back to the early days of automobiles.

In 1925, Francis Houdina showcased a remote-controlled car in Manhattan, foreshadowing the
concept of autonomous driving. However, confusion arose as people mistakenly attributed
the innovation to Harry Houdini. The allure of self-driving cars persisted through the mid-
20th century, with General Motors predicting their arrival by 1976, envisioning stress-free
drives and passengers interacting with voice-activated controls.

The inspiration for this dream stemmed from the perilous nature of human-driven vehicles,
with over 94 percent of accidents attributed to human error. The prospect of reducing or
eliminating driving-related fatalities through automation became a compelling vision.
Beyond safety, the vision of a driverless future promised additional benefits, such as
increased productivity during commutes and extending mobility to a broader population.

The journey toward autonomous vehicles saw significant milestones in the 1980s, with
pioneers in Europe developing partially autonomous vehicles. Ernst Dickmanns and his
team in Munich achieved a major breakthrough in 1986 with a robotic van driving
autonomously at speeds of up to 60 kilometers per hour. Meanwhile, Carnegie Mellon
University in the U.S. was making strides, achieving a 98.2 percent autonomy rate on a
cross-country tour in 1995.

The turning point came with the DARPA Grand Challenges in the 2000s, showcasing the
feasibility of autonomous vehicles navigating challenging terrains. Google's entry into the
field, logging over 10 million autonomous miles by 2018, marked a significant milestone.
The legal framework for on-road testing emerged, with Nevada granting Google the first
autonomous vehicle testing license in 2012.

The timeline also witnessed setbacks, notably the first fatal accident involving a Tesla
Model S in 2016, highlighting the risks of relying on human monitoring of self-driving
systems. Despite challenges, progress continued, and the concept of autonomous taxi fleets
gained traction.

The year 2018, however, saw a setback with the Uber crash in Arizona, prompting a
reevaluation of safety measures. The increase in self-driving accidents during testing was
attributed to the exponential growth in miles driven under challenging conditions.
Companies, both new and established, aimed to bring driverless cars to market by 2020,
with large-scale deployments planned for fleets offering services like robo-taxis and
driverless delivery.

As the industry progresses, the need for talented engineers to address challenges and
innovate solutions remains crucial. The transformational changes promised by driverless
cars are on the horizon, presenting an opportunity for individuals to contribute to shaping
the future of personal mobility.

ACC: Adaptive Cruise Control

A cruise control system for vehicles that controls longitudinal speed. ACC can maintain a
desired reference speed or adjust speed as needed to keep a safe following distance to other
vehicles.
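
To make the spacing behavior concrete, here is a minimal Python sketch of how an ACC-style speed command could be chosen. The function name, the constant time-gap policy, and all gains are illustrative assumptions, not a reference implementation from any particular vehicle.

```python
def acc_target_speed(v_ref, lead_distance, lead_speed,
                     time_gap=1.8, min_gap=5.0, k_gap=0.5):
    """Pick a longitudinal speed command for an ACC-style controller.

    v_ref         -- driver-set cruise speed (m/s)
    lead_distance -- distance to the vehicle ahead (m), or None if the road is free
    lead_speed    -- speed of the vehicle ahead (m/s)
    The constant-time-gap policy and gains are illustrative, not a production tuning.
    """
    if lead_distance is None:
        return v_ref  # free road: hold the reference speed

    # Desired spacing grows with speed (constant time-gap policy).
    desired_gap = min_gap + time_gap * lead_speed
    gap_error = lead_distance - desired_gap

    # Follow the lead vehicle while correcting toward the desired gap,
    # but never exceed the driver-set reference speed.
    v_follow = lead_speed + k_gap * gap_error
    return max(0.0, min(v_ref, v_follow))
```
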

Ego

A term to express the notion of self, which is used to refer to the vehicle being controlled
autonomously, as opposed to other vehicles or objects in the scene. It is most often used in the
form ego-vehicle, meaning the self-vehicle.

FMEA: Failure Mode and Effects Analysis

A bottom-up approach to failure analysis that examines individual failure causes and determines
their effects on the higher-level system.

GNSS: Global Navigation Satellite System

A generic term for satellite systems that provide position estimation. The Global Positioning
System (GPS), operated by the United States, is one type of GNSS. Another example is the
Russian GLONASS (Globalnaya Navigazionnaya Sputnikovaya Sistema).

HAZOP: Hazard and Operability Study

A variation of FMEA (Failure Mode and Effects Analysis) which uses guide words to brainstorm
over sets of possible failures that can arise.

IMU: Inertial Measurement Unit

A sensor device consisting of an accelerometer and a gyroscope. The IMU is used to measure
vehicle acceleration and angular velocity, and its data can be fused with other sensors for state
estimation.
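
As a rough illustration of how IMU measurements feed into state estimation, the sketch below dead-reckons a planar pose by integrating longitudinal acceleration and yaw rate. The function and its simple Euler integration are assumptions for illustration; a real system would fuse these measurements with GNSS or other sensors in a filter, because pure integration drifts.

```python
import math

def propagate_pose(x, y, yaw, v, a_long, yaw_rate, dt):
    """One dead-reckoning step from IMU measurements (planar, illustrative).

    a_long   -- longitudinal acceleration from the accelerometer (m/s^2)
    yaw_rate -- angular velocity about the vertical axis from the gyroscope (rad/s)
    Integration like this drifts over time, which is why IMU data is normally
    fused with other sensors rather than used alone.
    """
    v_new = v + a_long * dt
    yaw_new = yaw + yaw_rate * dt
    x_new = x + v_new * math.cos(yaw_new) * dt
    y_new = y + v_new * math.sin(yaw_new) * dt
    return x_new, y_new, yaw_new, v_new
```
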
LIDAR: Light Detection and Ranging

A type of sensor which detects range by transmitting light and measuring return time and shifts
of the reflected signal.
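
Since the range comes from a time-of-flight measurement, a quick worked example may help: the pulse travels to the target and back, so the one-way range is half the round-trip distance. The snippet below is a trivial illustration, not a model of any specific sensor.

```python
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def range_from_return_time(round_trip_time_s):
    """Range to a reflecting surface from a time-of-flight measurement.
    The pulse travels out and back, so the one-way range is half the path length."""
    return SPEED_OF_LIGHT * round_trip_time_s / 2.0

# Example: a return after 400 ns corresponds to roughly 60 m.
print(range_from_return_time(400e-9))  # ~59.96
```
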

LTI: Linear Time Invariant

A linear system whose dynamics do not change with time. For example, the kinematic unicycle
model of a car is time invariant, and its linearization about a nominal operating point is an LTI
system. If the model includes the tires degrading over time (and changing the vehicle dynamics),
then the system would no longer be time invariant.
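
For readers who prefer to see the state-space form, here is a minimal sketch of an LTI model: a longitudinal double integrator whose state is position and speed and whose input is acceleration. The matrices and the Euler discretization are illustrative choices; the point is simply that A and B never change with time.

```python
import numpy as np

# LTI state-space model x' = A x + B u with constant A, B (illustrative only).
# State: [position, speed]; input: commanded acceleration.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])

def step(x, u, dt=0.1):
    """Euler-discretized update; because A and B are constant, the system is LTI."""
    return x + dt * (A @ x + B @ u)

x = np.array([[0.0], [5.0]])      # start at 0 m, moving at 5 m/s
x = step(x, np.array([[1.0]]))    # apply 1 m/s^2 for one step
```
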

LQR: Linear Quadratic Regulation

A method of control utilizing full state feedback. The method seeks to optimize a quadratic cost
function dependent on the state and control input.
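
The sketch below shows one common way an LQR gain could be computed in Python, reusing the double-integrator model from the LTI entry. The Q and R weights are arbitrary illustrative choices; SciPy's Riccati solver is used only as an example of how the gain might be obtained.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Same double-integrator model as above; Q and R weights are illustrative choices.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.diag([1.0, 0.1])   # penalize position error more than speed error
R = np.array([[0.05]])    # penalize control effort

# Solve the algebraic Riccati equation and form the full-state-feedback gain.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.inv(R) @ B.T @ P

def lqr_control(x):
    """u = -K x minimizes the infinite-horizon quadratic cost for this model."""
    return -K @ x
```
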

MPC: Model Predictive Control

A method of control whose control input optimizes a user defined cost function over a finite time
horizon. A common form of MPC is finite horizon LQR (linear quadratic regulation).
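
To connect the two ideas, here is a hedged sketch of finite-horizon LQR solved by a backward Riccati recursion, with the first gain applied in receding-horizon fashion the way an MPC loop would. The discrete model, horizon length, and weights are illustrative assumptions.

```python
import numpy as np

def finite_horizon_lqr(A, B, Q, R, N):
    """Backward Riccati recursion for discrete-time finite-horizon LQR.

    Returns one feedback gain per step over an N-step horizon. In an MPC loop,
    only the first gain (or first input) is applied, then the problem is re-solved
    from the new state. Model and weights in the example below are illustrative.
    """
    P = Q.copy()
    gains = []
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return list(reversed(gains))

# Discrete double integrator with a 0.1 s step.
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt**2], [dt]])
K0 = finite_horizon_lqr(A, B, np.diag([1.0, 0.1]), np.array([[0.05]]), N=20)[0]
u = -K0 @ np.array([[2.0], [0.0]])   # apply only the first input, receding-horizon style
```
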

NHTSA: National Highway Traffic Safety Administration

An agency of the Executive Branch of the U.S. government that has developed a 12-part
framework to structure safety assessment for autonomous driving. The framework can be found
here: https://www.nhtsa.gov/sites/nhtsa.dot.gov/files/documents/13069a-ads2.0_090617_v9a_tag.pdf

ODD: Operational Design Domain

The set of conditions under which a given system is designed to function. For example, a
self-driving car can have a control system designed for driving in urban environments, and
another for driving on the highway.

OEDR: Object and Event Detection and Response

The ability to detect objects and events that immediately affect the driving task, and to react to
them appropriately.

PID: Proportional Integral Derivative Control

A common method of control defined by three gains:

1) A proportional gain, which scales the control output based on the current error
2) An integral gain, which scales the control output based on the accumulated error
3) A derivative gain, which scales the control output based on the error's rate of change
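
A minimal Python sketch of these three terms is given below. The gains, the anti-windup clamp, and the speed-tracking example are illustrative assumptions rather than a tuned controller.

```python
class PID:
    """Textbook PID controller; gains and anti-windup limit are illustrative."""

    def __init__(self, kp, ki, kd, integral_limit=10.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None
        self.integral_limit = integral_limit

    def update(self, error, dt):
        # Proportional term: scales with the current error.
        p = self.kp * error
        # Integral term: scales with the accumulated error (clamped to limit windup).
        self.integral += error * dt
        self.integral = max(-self.integral_limit, min(self.integral_limit, self.integral))
        i = self.ki * self.integral
        # Derivative term: scales with the error's rate of change.
        d = 0.0 if self.prev_error is None else self.kd * (error - self.prev_error) / dt
        self.prev_error = error
        return p + i + d

# Example: speed tracking, where the output would become a throttle/brake command.
controller = PID(kp=0.8, ki=0.1, kd=0.05)
command = controller.update(error=2.0, dt=0.1)
```
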

RADAR: Radio Detection And Ranging

A type of sensor which detects range and movement by transmitting radio waves and measuring
return time and shifts of the reflected signal.

SONAR: Sound Navigation And Ranging

A type of sensor which detects range and movement by transmitting sound waves and measuring
return time and shifts of the reflected signal.

Welcome to the first module of the Introduction to Self-driving Cars course. Throughout this
module, you will learn about the main components needed to create a self-driving car and the
technical requirements that drive their design. Before we begin, it's important that you
understand autonomous vehicle requirements or how we define self-driving for a car.

This first week is meant to give you a high-level survey of the terms and concepts that we'll
explore more deeply throughout the specialization. So in module one, I will introduce you to the
taxonomy for self-driving cars, or a system of classification that we use to define driving
automation. Next, I'll describe the perception needs for the driving task or those items that we
need to be able to identify properly. Finally, we will tackle the question of how to make driving
decisions and discuss a few approaches for making choices about how a vehicle moves through
the environment.

The goal of this first module is to remind you just how many assessments and decisions the
driving task truly requires. Hopefully, this will help you appreciate just how much complexity
we as humans can manage effectively when it comes to staying safe on the road. So, let's begin.

In this video, we will cover basic self-driving terminology, then discuss the requirements that
lead to a classification system for driving automation levels. We will define the task of driving
and its various components, formulate a taxonomy based on those requirements and the levels of
autonomy needed for the driving task, and finally conclude with the limitations of our proposed
classification system.

Let's get started with some technical terms and definitions. We will use these throughout the
specialization, and they're helpful to know if you're working in this industry.

The first term on our list is the driving task. Broadly speaking, the driving task is made up of
three sub-tasks. The first sub-task is perception, or perceiving the environment that we're driving
in. This includes tracking the car's own motion and identifying the various elements in the world
around us, like the road surface, road signs, vehicles, pedestrians, and so on.

We also need to track all moving objects and predict their future motions so we can drive safely
and accurately. The second sub-task is motion planning. This allows us to reach our destination
successfully. For example, you may want to drive from your home to your office. So you'll need
to consider which roads you should take, when you should change lanes or cross an intersection,
and how to execute a swerve maneuver around a pothole along the way. Finally, we need to
operate the vehicle itself with vehicle control. So we need to make the appropriate steering,
braking, and acceleration decisions to control the vehicle's position and velocity on the road. These three
sub-tasks form the main driving task and need to be performed constantly while driving a
vehicle.

The next concept I'll introduce is called the Operational Design Domain or ODD for short. The
ODD constitutes the operating conditions under which a given system is designed to function. It
includes environmental, time of day, roadway, and other characteristics under which the car will
perform reliably. Clearly defining the operating conditions for which a self-driving car is
designed is crucial to ensuring the safety of the system. So the ODD needs to be planned out
carefully in advance.

Now that we know some of the basic terms, let's get to the big question. How do we classify the
level of automation in a driving system? Here are some things to consider. First, how much
driver attention is needed? For example, can you watch a movie while driving to work? Or do
you need to keep your attention on the steering wheel at all times? Driver attention is one of the
crucial questions to consider when defining the level of autonomy. Second, how much driver
action is actually needed? For example, do you need to steer? Does the car take care of the speed,
or do you control that as well? Do you need to change lanes, or can the car stay in the current
lane without any intervention? What exactly do we need to expect when we say that the car can
automatically drive?

We defined the driving task broadly in the previous slides. But we will need to discuss this in
more depth. All of these questions lead us to the autonomous driving taxonomy. The standards
are continuously evolving, but for the purposes of our classification, we will use the
decomposition suggested by the Society of Automotive Engineers in 2014. You can find a link to
this resource in the lesson's supplementary reading.

Let's now discuss a way to describe the driving task in increasing levels of automation. First, we
have lateral control, which refers to the task of steering and navigating laterally on the road.
Turning left, right, going straight, or tracking a curve and so on. Next, we have longitudinal
control. This is the task where we control the position or velocity of the car along the roadway,
through actions like braking or acceleration. Then we have Object and Event Detection and
Response or OEDR for short. OEDR is essentially the ability to detect objects and events that
immediately affect the driving task and to react to them appropriately. OEDR really encompasses
a large portion of autonomous driving. So, it is used in conjunction with the specific Operational
Design Domain to categorize current self-driving systems. Next, we have planning. Planning is
another important aspect of driving. As immediate response is already part of OEDR, planning is
primarily concerned with the long and short term plans needed to travel to a destination or
execute maneuvers such as lane changes and intersection crossings. Finally, there are some
miscellaneous tasks that people do while driving as well. These include actions like signaling
with indicators, hand-waving, interacting with other drivers and so on.

Now we have a clear description of what tasks we expect a self-driving car to perform. Let's now
discuss the questions that can lead us to the taxonomy for classifying the level of automation in a
self-driving car. First, can this system handle steering tasks or lateral control? Second, can it
perform acceleration, braking, and velocity manipulation tasks or longitudinal control? Third,
can the system perform object and event detection and response and to what degree? Crucially,
can the system handle emergency situations by itself or does it always need a driver to be
attentive during emergencies? Finally, can the system perform in all scenarios and all conditions?
Or does it have a limited ODD or set of operating conditions that it can handle safely?

Based on these questions let's walk through the commonly-used levels of autonomy defined by
the SAE Standard J3016. Let's start with full human perception, planning, and control and call
this level 0. In this level, there is no driving automation whatsoever and everything is done by
the driver. If an autonomous system assists the driver by performing either lateral or longitudinal
control tasks, we are in level one autonomy. Adaptive cruise control is a good example of level
one. In adaptive cruise control or ACC, the system can control the speed of the car. But it needs
the driver to perform steering. So it can perform longitudinal control but needs the human to
perform lateral control. Similarly, lane-keeping assist systems are also Level one. In lane-
keeping assistance, the system can help you stay within your lane and warn you when you are
drifting towards the boundaries. Today's systems rely on visual detection of lane boundaries
coupled with lane centering lateral control. Let's move on to the next level, the level of partial
automation.

In level two, the system performs both the control tasks, lateral and longitudinal, in specific
driving scenarios. A simple example of a level two feature is GM's Super Cruise, which combines
lane centering with adaptive cruise control.

Paragraph 1:

Welcome to the first module of the Introduction to Self-driving Cars course. This module aims to
provide a comprehensive understanding of the main components required for developing self-
driving cars and the technical specifications that guide their design. The initial week is designed
to offer a broad overview of terms and concepts that will be explored in-depth throughout the
specialization. The primary focus will be on introducing the taxonomy for self-driving cars, a
classification system used to define driving automation. We will also delve into perception
needs, driving task components, and decision-making processes for navigating the environment.

Paragraph 2:

The driving task, consisting of perception, motion planning, and vehicle control, forms the core
of our exploration. Perception involves identifying elements in the driving environment, tracking
motion, and predicting future motions of objects. Motion planning ensures successful navigation
to a destination, involving decisions like lane changes and intersection crossings. Vehicle control
encompasses steering, braking, and acceleration decisions. Additionally, the Operational Design
Domain (ODD) is introduced, defining the conditions under which a self-driving car is designed
to operate, including environmental and roadway characteristics.

Paragraph 3:

The classification of automation levels in driving systems is a critical aspect that involves
assessing driver attention and required actions. The autonomous driving taxonomy, based on the
Society of Automotive Engineers' decomposition, is explored, covering lateral control,
longitudinal control, Object and Event Detection and Response (OEDR), planning, and
miscellaneous tasks. Key questions, such as handling emergencies and operating under different
conditions, guide the taxonomy for classifying automation levels.

Paragraph 4:
The commonly-used levels of autonomy defined by the SAE Standard J3016 are discussed.
Level 0 represents full human perception, planning, and control, where no driving automation is
present. The subsequent levels progress from assistance in either lateral or longitudinal control
(Level 1) to partial automation (Level 2), conditional automation (Level 3), highly automated
vehicles (Level 4), and finally, full autonomy with unlimited Operational Design Domain (Level
5).
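
One way to internalize the level definitions is to phrase them as the sequence of questions from the lecture. The sketch below is a deliberately simplified mapping, not the full J3016 classification; the boolean capability flags and boundary choices are assumptions made for illustration.

```python
def sae_level(lateral, longitudinal, full_oedr, handles_fallback, unlimited_odd):
    """Rough mapping from system capabilities to an SAE J3016-style level.

    The booleans mirror the questions posed earlier: can the system steer, control
    speed, perform complete OEDR, handle fallback without a driver, and operate in
    an unlimited ODD? The real J3016 classification has more nuance than this.
    """
    if not (lateral or longitudinal):
        return 0
    if lateral != longitudinal:          # only one of the two control tasks
        return 1
    if not full_oedr:
        return 2                         # driver must monitor and complete OEDR
    if not handles_fallback:
        return 3                         # driver remains the fallback
    if not unlimited_odd:
        return 4                         # full automation within a limited ODD
    return 5

print(sae_level(lateral=False, longitudinal=True, full_oedr=False,
                handles_fallback=False, unlimited_odd=False))  # -> 1, e.g. ACC alone
```
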

Paragraph 5:

In conclusion, this video has covered fundamental concepts related to automation, providing
basic definitions, understanding the Operational Design Domain, and introducing the concept of
the driving task. The exploration of the five levels of driving automation offers a valuable
framework to assess the autonomy of self-driving systems. The next lesson will further explore
the requirements for perception, a crucial aspect in the design of autonomous systems.
Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road
Motor Vehicles J3016_201806

This SAE Recommended Practice describes motor vehicle driving automation systems that
perform part or all of the dynamic driving task (DDT) on a sustained basis. It provides a
taxonomy with detailed definitions for six levels of driving automation, ranging from no driving
automation (level 0) to full driving automation (level 5), in the context of motor vehicles
(hereafter also referred to as “vehicle” or “vehicles”) and their operation on roadways. These
level definitions, along with additional supporting terms and definitions provided herein, can be
used to describe the full range of driving automation features equipped on motor vehicles in a
functionally consistent and coherent manner. “On-road” refers to publicly accessible roadways
(including parking areas and private campuses that permit public access) that collectively serve
users of vehicles of all classes and driving automation levels (including no driving automation),
as well as motorcyclists, pedal cyclists, and pedestrians.

The levels apply to the driving automation feature(s) that are engaged in any given instance of
on-road operation of an equipped vehicle. As such, although a given vehicle may be equipped
with a driving automation system that is capable of delivering multiple driving automation
features that perform at different levels, the level of driving automation exhibited in any given
instance is determined by the feature(s) that are engaged.

This document also refers to three primary actors in driving: the (human) user, the driving
automation system, and other vehicle systems and components. These other vehicle systems and
components (or the vehicle in general terms) do not include the driving automation system in this
model, even though as a practical matter a driving automation system may actually share
hardware and software components with other vehicle systems, such as a processing module(s)
or operating code.

The levels of driving automation are defined by reference to the specific role played by each of
the three primary actors in performance of the DDT and/or DDT fallback. “Role” in this context
refers to the expected role of a given primary actor, based on the design of the driving
automation system in question and not necessarily to the actual performance of a given primary
actor. For example, a driver who fails to monitor the roadway during engagement of a level 1
adaptive cruise control (ACC) system still has the role of driver, even while s/he is neglecting it.

Active safety systems, such as electronic stability control and automated emergency braking, and
certain types of driver assistance systems, such as lane keeping assistance, are excluded from the
scope of this driving automation taxonomy because they do not perform part or all of the DDT
on a sustained basis and, rather, merely provide momentary intervention during potentially
hazardous situations. Due to the momentary nature of the actions of active safety systems, their
intervention does not change or eliminate the role of the driver in performing part or all of the
DDT, and thus they are not considered to be driving automation.

It should, however, be noted that crash avoidance features, including intervention-type active
safety systems, may be included in vehicles equipped with driving automation systems at any
level. For Automated Driving System (ADS) features (i.e., levels 3-5) that perform the complete
DDT, crash avoidance capability is part of ADS functionality.

Paragraph 1:

Welcome to the second lesson in this module, where we delve into the analysis of the driving
task, specifically focusing on the processes of perception. Building on the previous lesson that
covered the classification of automation, we are now exploring how a driving task is performed.
This lesson emphasizes the importance of perception in the context of autonomous vehicles,
introducing the requirements and challenges associated with this crucial aspect of self-driving
technology.

Paragraph 2:

The driving task can be divided into two main components: understanding the surroundings and
making driving decisions. Object and Event Detection and Response (OEDR) play a pivotal role
in any driving task, involving the identification of objects, recognition of events, and appropriate
responses. Perception, as a subset of OEDR, becomes fundamental in making sense of the
environment and predicting the motion of elements within it. This lesson focuses on the
perceptual task and its significance in the development of self-driving cars.

Paragraph 3:

Perception, in the context of autonomous vehicles, involves making sense of the environment
and understanding the agent's movement within it. Identifying objects and predicting their
trajectories are critical elements of perception, mirroring the way humans anticipate the behavior
of other entities on the road. The ability to predict the trajectory of moving objects is key to
making informed decisions, such as anticipating the actions of surrounding vehicles, pedestrians,
and other dynamic elements.

Paragraph 4:

The lesson further explores the various elements that need to be identified for the perception
task. This includes static elements like roads, lane markings, and traffic signals, as well as
dynamic elements such as vehicles, two-wheelers, and pedestrians. Ego localization, estimating
the vehicle's position and motion, is also discussed as a crucial goal for perception. The second
and third courses in the specialization will delve deeper into these essential perception tasks,
providing a comprehensive understanding of on and off-road object detection and tracking.

Paragraph 5:
To conclude, the lesson highlights the challenges associated with robust perception. The
difficulty lies in achieving human-level capability in detection and segmentation using modern
machine learning methods. Challenges include the need for extensive and diverse datasets,
sensor uncertainties, issues with occlusion and reflection, drastic environmental changes, and the
impact of weather conditions on sensor data quality. Acknowledging these challenges is crucial
in designing autonomous systems that can effectively overcome them. In the next video, the
focus will shift to the decision-making aspects of autonomous driving.

**Paragraph 1: Introduction to Decision-Making in Self-Driving Cars**

Welcome to the third and final video of this week, where we delve into the crucial aspect of
decision-making in self-driving car systems. Following the exploration of perception in the
previous video, decision-making forms the next step in the sequence of tasks required for
autonomous driving. The video aims to categorize planning based on the time window within
which decisions need to be made, illustrating this with examples. Additionally, it introduces the
concept of various decisions involved in a simple intersection scenario, laying the groundwork
for understanding the complexity of decision-making in self-driving cars.

**Paragraph 2: Types of Driving Decisions and Scenario Analysis**

Driving decisions are categorized under planning, which encompasses long-term, short-term, and
immediate planning. Long-term planning involves high-level decisions like navigating from one
location to another, often handled by mapping applications. Short-term decisions revolve around
immediate driving actions, such as changing lanes or executing a turn at an intersection.
Immediate decisions, crucial for object and event detection and response, require real-time
reactions to scenarios that unfold during driving. A simple example of approaching an
intersection highlights the multilayered decision-making needed even for routine driving tasks.

**Paragraph 3: Complexity and Variables in Decision-Making**

The example of a left turn at an intersection underscores the inherent complexity of driving.
Variables such as lane changes, stopping locations, and responses to unexpected scenarios create
an extensive list of decisions to be evaluated on different timescales. Each scenario demands a
consistent set of choices, reflecting the dynamic nature of driving. The paragraph emphasizes the
multifaceted nature of decision-making in self-driving cars, requiring meticulous planning and
execution of vehicle control.

**Paragraph 4: Approaches to Planning - Reactive and Predictive Planning**


To address the challenges of multilevel decision-making, the video introduces two planning
approaches: reactive planning and predictive planning. Reactive planning involves rule sets that
respond to the current state of the environment, without considering future predictions. Examples
include stopping for pedestrians or adjusting speed based on changing speed limits. Predictive
planning, on the other hand, incorporates predictions about how other agents will move over
time, allowing for a more natural decision-making process akin to human driving. The
complexity of predictive planning is acknowledged, particularly in accurately predicting the
actions of other entities in the driving environment.
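
As a concrete contrast between the two approaches, the sketch below shows a purely reactive speed rule that looks only at the current state, with a closing comment on what a predictive planner would add. The state fields, thresholds, and rules are hypothetical and serve only to illustrate the idea.

```python
def reactive_speed_rule(state):
    """Reactive planning: rules that consider only the current state of the world.

    `state` is a hypothetical dict of current observations; thresholds are illustrative.
    """
    if state["pedestrian_in_path"]:
        return 0.0                                   # stop for a pedestrian ahead
    if state["lead_vehicle_gap_m"] < 10.0:
        return min(state["speed_limit_mps"], state["lead_vehicle_speed_mps"])
    return state["speed_limit_mps"]                  # otherwise track the posted limit

# A predictive planner would instead roll other agents' motion forward in time
# (e.g., with constant-velocity or learned models) and choose the action that stays
# safe over that predicted horizon, not just at the current instant.
```
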

**Paragraph 5: Conclusion and Recap of the Week's Learnings**

The video concludes by summarizing the main points discussed, emphasizing the challenging
nature of driving as a multifaceted problem. It highlights the necessity of in-depth exploration,
which will be covered in course four on planning for self-driving cars. The week's learnings
encompassed basic autonomous driving terminology, levels of automation, components of
driving (perception, planning, and execution), elements for perception, and the complexities and
approaches to decision-making. The subsequent module is teased, indicating that it will define
the main components of self-driving cars, covering both hardware and software elements.
