
INTRODUCTION

A self-driving car, also known as an autonomous

vehicle (AV or auto), driverless car, or robo-car is a vehicle that is capable of

sensing its environment and moving safely with little or no human input.

Self-driving cars combine a variety of sensors to perceive their surroundings,

such as radar, lidar, sonar, GPS, odometry and inertial measurement

units. Advanced control systems interpret sensory information to identify

appropriate navigation paths, as well as obstacles and relevant signage.

Possible implementations of the technology include personal self-driving

vehicles, shared robotaxis, connected vehicle platoons and long-distance

trucking. Several projects to develop a fully self-driving commercial car are in

various stages of development. Waymo became the first service provider to

offer robotaxi rides to the general public in Phoenix, Arizona in 2020,

while Tesla has said it will offer subscription-based "full self-driving" to private
vehicle owners in 2021. Furthermore, the autonomous delivery

company Nuro has been allowed to start commercial operations in California

starting in 2021.

The era of self-driving cars traveling on city streets is fast approaching. There

are great expectations for the practical application of self-driving cars as

solutions to many social issues, such as securing safe means of transportation,

and reducing traffic congestion and accidents. However, it is not easy to replace

the advanced driving skills of a human driver with those of computers and

artificial intelligence. To achieve this, it is essential to introduce cutting-edge

technology. The use of artificial intelligence (AI) is generating attention as a

key technology making self-driving cars a reality. Remarkable progress in AI

technology has resulted in the ability to judge the driving environment with

near-human accuracy. However, regardless of how advanced AI technology

becomes, accurate judgment is possible only if the sensors obtain correct data

on the driving environment. In that regard, the use of high-precision sensors is

considered a major prerequisite for realizing safe self-driving cars. Murata

Manufacturing Co., Ltd. ("Murata" below) has embraced the challenge to

develop inertial force sensors that can accurately identify the position and

direction of vehicles even in areas where Global Positioning System (GPS)

signals do not reach, or in poor conditions where cameras and radar do not

function sufficiently.
Auto drive is a suite of advanced driver-assistance system features that

includes lane centering, traffic-aware cruise control, self-parking,

automatic lane changes, semi-autonomous navigation on limited access

freeways, and the ability to summon the car from a garage or parking spot. In all

of these features, the driver is responsible and the car requires constant

supervision. The company claims the features reduce accidents caused by driver

negligence and fatigue from long-term driving. In October 2020, Consumer

Reports called Auto drive "a distant second" (behind Cadillac's Super Cruise),

although it was ranked first in the "Capabilities and Performance" and "Ease of

Use" categories.

As an upgrade to the base Auto drive capabilities, the company's stated intent is

to offer full self-driving (FSD) at a future time, acknowledging that legal,

regulatory, and technical hurdles must be overcome to achieve this goal. As of

April 2020, most experts believe that Auto drive vehicles lack the necessary
hardware for full self-driving. Auto drive was ranked last by Navigant Research

in March 2020 for both strategy and execution in the autonomous driving

sector.

Auto drive released a "beta" version of its FSD software to a small group of

testers in the United States. Many industry observers criticized Auto drive's

decision to use untrained consumers to validate the beta software as dangerous

and irresponsible. The company first discussed the Auto drive system publicly in 2013, noting: "Auto drive is a good thing to have in planes, and we should have it in cars."

All Auto drive cars manufactured between September 2014 and October 2016

had the initial hardware (HW1) that supported Auto drive. On October 9, 2014,

Auto drive offered customers the ability to pre-purchase Auto drive capability

within a "Tech Package" option. At that time Auto drive stated Auto drive

would include semi-autonomous drive and parking capabilities, and was not designed for driverless operation. Autodrive Inc. developed initial versions of Auto drive in


partnership with the Israeli company Mobileye. Auto drive and Mobileye ended

their partnership in July 2016.

Software enabling Auto drive was released in mid-October 2015 as part of Auto

drive software version 7.0. At that time, Auto drive announced its goal to offer

self-driving technology. Software version 7.1 then removed some features - to

discourage customers from engaging in risky behavior - and added the Summon

remote-parking capability that can move the car forward and backward under

remote human control without a driver in the car.

On August 31, 2016, the company released Auto drive 8.0, which processes radar signals to create a coarse point cloud, similar to lidar, to help navigate in low visibility and even to "see" in front of the car ahead. In November 2016 Auto

drive 8.0 was updated to have a more noticeable signal to the driver that it is

engaged and to require drivers to touch the steering wheel more frequently. By

November 2016, Auto drive had operated actively on HW1 vehicles for 300

million miles (500 million km) and 1.3 billion miles (2 billion km) in "shadow"

(not active) mode.

Auto drive states that as of October 2016, all new vehicles come with the

necessary sensors and computing hardware, known as hardware version 2

(HW2), for future full self-driving. Auto drive used the term "Enhanced Auto

drive" (EA) to refer to HW2 capabilities that were not available in hardware

version 1 (HW1), which include the ability to automatically change lanes


without requiring driver input, to transition from one freeway to another, and to

exit the freeway when your destination is near.

Auto drive software for HW2 cars came in February 2017. It included traffic-

aware cruise control, autosteer on divided highways, autosteer on 'local roads'

up to a speed of 35 mph or a specified number of mph over the local speed limit

to a maximum of 45 mph. Software version 8.1 for HW2 arrived in June 2017,

adding a new driving-assist algorithm, full-speed braking and handling parallel

and perpendicular parking. Later releases offered smoother lane-keeping and

less jerky acceleration and deceleration. The company later stated: "I think we will be feature complete, full self-driving, this year. Meaning the car will be able to find you in a parking lot, pick you up and take you all the way to your destination without an intervention. This year. I would say I am certain of that; that is not a question mark. However, people sometimes will extrapolate that to mean it now works with 100% certainty, requiring no observation, perfectly. This is not the case."

The company introduced automatic driving on city streets to Early Access Program (EAP) testers, along with new safety features. If Auto drive detects a potential front or side collision with another vehicle, bicycle or pedestrian within a distance of 525 feet (160 m), it sounds a warning. Auto drive also has automatic emergency braking that detects objects that may hit the car and applies the brakes, and the car may also automatically swerve out of the way to prevent a collision.

Full Self-Driving hardware
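The collision-warning behavior described above (a warning within roughly 160 m, automatic braking when impact is imminent) can be sketched as a small time-to-collision check. This is only an illustrative model: the function name, the braking threshold, and the decision rule are assumptions; only the 160 m warning distance comes from the text.

```python
def collision_response(distance_m: float, closing_speed_mps: float,
                       warn_distance_m: float = 160.0,
                       brake_ttc_s: float = 2.0) -> str:
    """Decide between no action, a warning, and emergency braking.

    distance_m: gap to the detected vehicle, cyclist, or pedestrian.
    closing_speed_mps: rate at which the gap is shrinking (<= 0 means
    the gap is growing, so no action is needed).
    """
    if closing_speed_mps <= 0:
        return "none"
    if distance_m > warn_distance_m:
        return "none"
    time_to_collision = distance_m / closing_speed_mps
    # Brake only when impact is imminent; otherwise just warn.
    return "brake" if time_to_collision < brake_ttc_s else "warn"

print(collision_response(200.0, 10.0))  # beyond 160 m -> none
print(collision_response(100.0, 10.0))  # 10 s to impact -> warn
print(collision_response(15.0, 10.0))   # 1.5 s to impact -> brake
```

A real system would also account for road friction, sensor noise, and braking distance, but the gap-over-closing-speed ratio is the core idea behind such warnings.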
Auto drive has made claims since 2016 that all new HW2 vehicles have the

hardware necessary for full self-driving; however, free hardware upgrades have been required. (An upgrade from HW2 or HW2.5 to HW3 is free to those who have purchased FSD.) Auto drive claims that the current software will be upgraded to provide full self-driving at an unknown future date, without any need for additional hardware.

IMPORTANCE and RELEVANCE

1. Basic Summon: Moves the car into and out of a tight space using the Auto drive phone app or key fob without the driver in the car.

2. Autopark: Parks the car in perpendicular or parallel parking spaces, with either nose or tail facing outward; the driver does not need to be in the car.

3. Navigate on Auto drive (Beta): Navigates on-ramp to off-ramp: executes

automatic lane changes, moves to a more appropriate lane based on

speed, navigates freeway interchanges, and exits a freeway.

4. Night driving: Night-time driving is more challenging than driving in daylight; Auto drive helps drive the car safely at night.

5. Full Self-Driving (Beta; currently only for Early Access Program testers).

6. Autosteer on city streets (Beta; currently only for Early Access Program testers).

7. Automatic lane change. Driver initiates the lane changing signal when

safe, then the system does the rest.

8. Emergency Lane Departure Avoidance

9. Lane Departure Avoidance

10. Blind spot collision warning chime

11. Auto drive updates received as part of recurring software updates.

12. If road conditions warrant, Autosteer and cruise control disengage, and audio and visual signals indicate that the driver must assume full control.
13. Adaptive cruise control: the ability to maintain a safe distance from the vehicle in front by accelerating and braking as that vehicle speeds up and slows down.

14. Traffic light/stop recognition

15. Auto drive began fitting a new version of the full self-driving computer.

16. "I don't think we have to worry about autonomous cars because it's a sort of a narrow form of AI."

17. Steers the car to remain in whatever lane it is in (also known as lane-keeping) and can safely change lanes when the driver taps the turn-signal stalk. On divided highways, cars limit use of the feature to 90 mph (145 km/h); on non-divided highways the limit is five miles per hour over the speed limit, or 45 mph (72 km/h) if no speed limit is detected. If the driver ignores three audio warnings about controlling the steering wheel within an hour, Auto drive is disabled until a new journey is begun.

18. Self-driving cars would have the potential to save 10 million lives worldwide per decade.


19. Children and teens, who are not able to drive a vehicle themselves in the case of student transport, would also benefit from the introduction of autonomous cars. Daycares and schools could set up automated pick-up and drop-off systems by car, in addition to walking, cycling and busing, reducing reliance on parents and childcare workers.

20. The driver can safely turn their attention away from the driving tasks, e.g.

the driver can text or watch a movie. The vehicle will handle situations

that call for an immediate response, like emergency braking. The driver

must still be prepared to intervene within some limited time, specified by

the manufacturer, when called upon by the vehicle to do so. You can

think of the automated system as a co-driver that will alert you in an

orderly fashion when it is your turn to drive.

Renewable energy generation and storage are critical components of developing

microgrids — an increasingly important means of delivering reliable and

sustainable electricity around the world. As the deployment of self-driving

products continues to accelerate, we can scale the adoption of renewable energy,

cost-effectively modernize our aging infrastructure (while becoming less reliant

on it), and improve the resilience of the electric grid to benefit everyone.
While many environmental reports focus on emissions generated by the manufacturing phase of products and future goals for energy consumption, we highlight the totality of the environmental impact of our products today. After all, the vast majority of emissions generated by vehicles today occur in the product-use phase, that is, when consumers are driving their vehicles. We believe that providing information on both sides of the manufacturing and consumer-use equation provides a clearer picture of the environmental impact of self-driving products, and we have done so this year largely through a lifecycle analysis detailed in this report.

Climate change is reaching alarming levels in large part due to emissions from burning fossil fuels for transportation and electricity generation. In 2016, carbon dioxide (CO2) concentration levels in the atmosphere exceeded the 400 parts per million threshold on a sustained basis, a level that climate scientists believe will have a catastrophic impact on the environment. Worse, annual global CO2 emissions continue to increase and have approximately doubled over the past 50 years to over 43 gigatons in 2019. The world's current path is unwise and unsustainable.

The world cannot reduce CO2 emissions without addressing both energy generation and consumption. And the world cannot address its energy habits without first directly reducing emissions in the transportation and energy sectors. We are focused on creating a complete energy and transportation ecosystem, from solar generation and energy storage to all-electric vehicles that produce zero tailpipe emissions.

Since the onset of shelter-in-place orders and travel restrictions due to COVID-19, we have seen dramatic increases in air quality across the planet, as well as projections for CO2 emissions to drop in excess of 4% in 2020 compared to pre-COVID-19 levels, according to researchers. Because these improvements in air quality and reductions in CO2 are a result of a global economic disruption and not due to systemic changes in how we produce and consume energy, they are not expected to be sustained absent intervention. However, these changes have shown us the positive impacts of reduced pollution in a very short period of time.

The scenarios for convenience and quality-of-life improvements are limitless.

The elderly and the physically disabled would have independence. If your kids

were at summer camp and forgot their bathing suits and toothbrushes, the car

could bring them the missing items. You could even send your dog to a

veterinary appointment.

But the real promise of autonomous cars is the potential for dramatically

lowering CO2 emissions. In a recent study, experts identified three trends that, if

adopted concurrently, would unleash the full potential of autonomous cars:

vehicle automation, vehicle electrification, and ridesharing. By 2050, these

"three revolutions in urban transportation" could:


 Reduce traffic congestion (30% fewer vehicles on the road)

 Cut transportation costs by 40% (in terms of vehicles, fuel, and

infrastructure)

 Improve walkability and livability

 Free up parking lots for other uses (schools, parks, community centers)

 Reduce urban CO2 emissions by 80% worldwide.

Global Greenhouse Gas (GHG) Emissions by Economic Sector:

 Electricity & Heat Production: 31%

 Agriculture, Forestry & Other Land Use: 20%

 Industry: 18%

 Transportation: 16%

 Other Energy: 9%

 Buildings: 6%

Feasibility/Possibilities of Self-Driving

 REDUCED ACCIDENTS:- Self-driving cars have the potential to reduce deaths and injuries from car crashes, particularly those that result from driver distraction. With 94 percent of fatal vehicle crashes attributable to human error, the potential of autonomous vehicle technologies to reduce deaths and injuries on our roads urges us to action.

 REDUCED TRAFFIC CONGESTION:- Our experiments show that with as few as 5 percent of vehicles being automated and carefully controlled, we can eliminate stop-and-go waves caused by human driving behavior. Under normal circumstances, human drivers naturally create stop-and-go traffic, even in the absence of bottlenecks, lane changes, merges or other disruptions; this phenomenon is called the phantom traffic jam. Researchers found that by controlling the pace of the autonomous car in the study, they were able to smooth out the traffic flow for all the cars.

 REDUCED CO2 EMISSIONS:- CO2 emissions are reduced by about 50% by switching from a conventional-engine vehicle to an HEV, and by subsequently switching the HEV to a PHEV they can be reduced by another 70% through improvement in single-battery-charge running distance.

 INCREASED LANE CAPACITY:- While AVs might lead to an

increase in overall vehicle travel, they could also support higher vehicle

throughput rates on existing roads. To begin with, the ability to constantly

monitor surrounding traffic and respond with finely tuned braking and

acceleration adjustments should enable AVs to travel safely at higher

speeds and with reduced headway (space) between each vehicle.

 LOWER FUEL CONSUMPTION: - AV technology can improve fuel

economy, improving it by 4–10 percent by accelerating and decelerating

more smoothly than a human driver. Further improvements could be had

from reducing distance between vehicles and increasing roadway

capacity. A platoon of closely spaced AVs that stops or slows down less

often resembles a train, enabling lower peak speeds (improving fuel

economy) but higher effective speeds (improving travel time).

 LAST MILE SERVICES:-Autonomous vehicles are well-positioned to

provide first/last-mile services to connect commuters to public

transportation. Larger cities have the problem of providing adequate

public transportation. Many lack the appropriate infrastructure to support


the needs of their residents, a void that could partially be filled by self-

driving cars.

 TRANSPORTATION ACCESSIBILITY: This

increased accessibility of self-driving vehicles opens up

personal transportation in a whole new way to many people. As a

window into future benefits, one study found that self-driving vehicles

could lead to two million employment opportunities for people with

disabilities. Many seniors and people with disabilities cannot currently

drive, even with vehicle modifications that help others drive safely.

 MORE EFFICIENT PARKING:- An arrow appears on your dashboard on the side of your vehicle where a parking spot is available. Once your vehicle has found a spot, you can use the active parking assist feature by putting your vehicle in reverse. Press the "OK" button on your display when you are in reverse and ready to go.


Description

What is an Autonomous Car?

An autonomous car is a vehicle capable of sensing its environment and

operating without human involvement. A human passenger is not required to

take control of the vehicle at any time, nor is a human passenger required to be

present in the vehicle at all. An autonomous car can go anywhere a traditional

car goes and do everything that an experienced human driver does.

Self-driving cars are automobiles that do not require human operation to

navigate to a destination. They use cameras, sensors, and advanced software to

interpret and respond to traffic, pedestrians, and other surroundings on the road.

The Society of Automotive Engineers (SAE) currently defines 6 levels of

driving automation ranging from Level 0 (fully manual) to Level 5 (fully

autonomous).
The SAE uses the term automated instead of autonomous. One reason is that the

word autonomy has implications beyond the electromechanical.

A fully autonomous car would be self-aware and capable of making its own

choices. For example, you say "drive me to work" but the car decides to take

you to the beach instead. A fully automated car, however, would follow orders

and then drive itself.

The term self-driving is often used interchangeably with autonomous. However,

it's a slightly different thing. A self-driving car can drive itself in some or even

all situations, but a human passenger must always be present and ready to take

control. Self-driving cars would fall under Level 3 (conditional driving

automation) or Level 4 (high driving automation). They are subject to

geofencing, unlike a fully autonomous Level 5 car that could go anywhere.

How do autonomous cars work?

Autonomous cars rely on sensors, actuators, complex algorithms, machine

learning systems, and powerful processors to execute software.

Autonomous cars create and maintain a map of their surroundings based on a

variety of sensors situated in different parts of the vehicle. Radar sensors

monitor the position of nearby vehicles. Video cameras detect traffic lights, read

road signs, track other vehicles, and look for pedestrians. Lidar (light detection

and ranging) sensors bounce pulses of light off the car‘s surroundings to
measure distances, detect road edges, and identify lane markings. Ultrasonic

sensors in the wheels detect curbs and other vehicles when parking.

Sophisticated software then processes all this sensory input, plots a path, and

sends instructions to the car's actuators, which control acceleration, braking,

and steering. Hard-coded rules, obstacle avoidance algorithms, predictive

modeling, and object recognition help the software follow traffic rules and

navigate obstacles.
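The sense-plan-act cycle described above (sensors feed a map, software plots a path, actuators receive throttle/brake commands) can be sketched as a tiny planner. Everything here is an illustrative assumption: the class name, the 1.5 m lane half-width, the 10 m braking distance, and the control values are not from any real system.

```python
from dataclasses import dataclass

@dataclass
class Obstacle:
    distance_m: float   # distance along the planned path
    lateral_m: float    # offset from the lane centre

def plan_controls(obstacles: list, speed_mps: float,
                  target_speed_mps: float = 13.0):
    """Very small planner: brake for obstacles in the path, otherwise
    track the target speed. Returns (throttle, brake), each in [0, 1]."""
    # Only obstacles near the lane centre are actually in the path.
    in_path = [o for o in obstacles if abs(o.lateral_m) < 1.5]
    nearest = min((o.distance_m for o in in_path), default=float("inf"))
    if nearest < 10.0:                # something close ahead: brake hard
        return 0.0, 1.0
    if speed_mps < target_speed_mps:  # clear road: accelerate gently
        return 0.3, 0.0
    return 0.0, 0.0                   # at speed: coast

throttle, brake = plan_controls([Obstacle(8.0, 0.2)], speed_mps=10.0)
```

A production stack replaces each of these one-line decisions with predictive models and optimization, but the loop structure (perceive, select a maneuver, emit actuator commands) is the same.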

AI technologies power self-driving car systems. Developers of self-driving cars

use vast amounts of data from image recognition systems, along with machine

learning and neural networks, to build systems that can drive autonomously.

The neural networks identify patterns in the data, which is fed to the machine

learning algorithms. That data includes images from cameras on self-driving

cars from which the neural network learns to identify traffic lights, trees, curbs,

pedestrians, street signs and other parts of any given driving environment.

The system uses a mix of sensors: lidar (light detection and ranging, a technology similar to radar) and cameras, and combines all of the data those systems generate to identify everything around the vehicle and predict what those objects might do next. This happens in fractions of a second. Maturity is important for these systems: the more the system drives, the more data it can incorporate into its deep learning algorithms, enabling it to make more nuanced driving choices.
The following outlines how auto drive vehicles work:-

 The driver (or passenger) sets a destination. The car's software

calculates a route.

 A rotating, roof-mounted Lidar sensor monitors a 60-meter range

around the car and creates a dynamic three-dimensional (3D) map of

the car's current environment.

 A sensor on the left rear wheel monitors sideways movement to detect

the car's position relative to the 3D map.

 Radar systems in the front and rear bumpers calculate distances to

obstacles.

 AI software in the car is connected to all the sensors and collects

input from Google Street View and video cameras inside the car.

 The AI simulates human perceptual and decision-making processes

using deep learning and controls actions in driver control systems,

such as steering and brakes.

 The car's software consults Google Maps for advance notice of things

like landmarks, traffic signs and lights.

 An override function is available to enable a human to take control of

the vehicle.
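The steps above have several sensors (lidar, rear-wheel position, bumper radar) each producing its own distance estimates. One common and simple way to combine such readings is to keep the most conservative one; this sketch assumes that rule, and the function and parameter names are illustrative, not from any actual system.

```python
def fuse_obstacle_distance(lidar_m, radar_m, camera_m=None):
    """Conservatively fuse per-sensor estimates of the distance to the
    nearest obstacle: drop sensors that saw nothing (None) and take the
    smallest remaining reading, since reacting to the closest estimate
    is the safe choice."""
    readings = [r for r in (lidar_m, radar_m, camera_m) if r is not None]
    if not readings:
        return None          # no sensor sees an obstacle
    return min(readings)

print(fuse_obstacle_distance(42.0, 40.5))  # radar reports closer -> 40.5
```

Real sensor fusion weights each source by its noise model (e.g. with a Kalman filter) rather than taking a bare minimum, but the minimum rule shows why redundant sensors make the pipeline robust: any one of them can trigger a reaction.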
Could Self-Driving Cars Change Society?

It's very possible. When self-driving cars become the dominant type of vehicle

on the road, many things in society will likely change. However, many of those

changes are still obscured by big questions, including:

 What happens to workers who drive for a living? Taxi drivers, bus

drivers, delivery drivers, and truck drivers will all be impacted by self-

driving technology.

 Who is at fault when two self-driving cars have a traffic collision?

 Could the vehicles be hacked into and controlled by an outside party?

 What types of privacy issues will arise? With the constant communication

between cars, information like your location and driving habits may be

accessible to car manufacturers and other drivers.

At the moment, there is no clear answer to any of these questions. It also

doesn't help that there is currently little regulation for self-driving cars. Until

lawmakers determine answers to some of the questions above, the presence of

self-driving cars could be limited on public roads.


What Are The Challenges With Autonomous Cars?

Fully autonomous (Level 5) cars are undergoing testing in several pockets of the

world, but none are yet available to the general public. We're still years away

from that. The challenges range from the technological and legislative to the

environmental and philosophical. Here are just some of the unknowns.

 Lidar and Radar

Lidar is expensive and is still trying to strike the right balance between range

and resolution. If multiple autonomous cars were to drive on the same road,

would their lidar signals interfere with one another? And if multiple radio

frequencies are available, will the frequency range be enough to support mass

production of autonomous cars?

 Weather Conditions

What happens when an autonomous car drives in heavy precipitation? If there's a layer of snow on the road, lane dividers disappear. How will the cameras and sensors track lane markings if the markings are obscured by water, oil or ice?

 Traffic Conditions and Laws

Will autonomous cars have trouble in tunnels or on bridges? How will they do in bumper-to-bumper traffic? Will autonomous cars be relegated to a specific lane? Will they be granted carpool lane access? And what about the fleet of legacy cars still sharing the roadways for the next 20 or 30 years?


 State vs. Federal Regulation

The regulatory process in the U.S. has recently shifted from federal guidance to

state-by-state mandates for autonomous cars. Some states have even proposed a

per-mile tax on autonomous vehicles to prevent the rise of driving around

without passengers. Lawmakers have also written bills proposing that all

autonomous cars must be zero-emission vehicles and have a panic button

installed. But are the laws going to be different from state to state? Will you be able to cross state lines with an autonomous car?

 Accident Liability

Who is liable for accidents caused by an autonomous car? The manufacturer? The human passenger? The latest blueprints suggest that a fully autonomous Level 5

car will not have a dashboard or a steering wheel, so a human passenger would

not even have the option to take control of the vehicle in an emergency.

 Rear cross-traffic assists: -

Rear cross-traffic assist or rear cross-traffic alert system helps drivers reversing

out of perpendicular parking spaces when their rear view is obstructed. This

system basically utilizes two mid-range radar sensors in the rear of the vehicle,

which measure as well as interpret the distance, speed and anticipated driving

path of vehicles detected in cross-traffic.


 Traffic jam assists: -

Traffic jam assist can be seen as an extension of adaptive cruise control. The technique is basically a low-speed version of adaptive cruise control that tries to maintain the set speed while taking other vehicles into account. Traffic jam assist is based on the sensors and functionality of adaptive cruise control with stop & go and lane-keeping support. When the adaptive cruise control system's 'stop & go' mode is turned on, it continuously analyses the speed of the surrounding vehicles and compares it with the car's own driving speed.

 Vehicle-to-Vehicle (V2V) communication: -

The Vehicle-to-Vehicle communication is a technique that wirelessly exchanges

information about the speed, position as well as distance of the surrounding

vehicles. The technology behind V2V communication allows vehicles to

broadcast and receive omni-directional messages and create a 360-degree "awareness" of other vehicles in proximity.
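A V2V message carrying the fields the paragraph mentions (speed, position, heading) can be sketched as a small encode/decode round trip. This is purely illustrative: real V2V systems use standardized binary basic safety messages over dedicated radio, not JSON, and every field name here is an assumption.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class V2VMessage:
    """A toy safety message with the fields the text mentions:
    position, speed, and heading of the sending vehicle."""
    vehicle_id: str
    latitude: float
    longitude: float
    speed_mps: float
    heading_deg: float

def encode(msg: V2VMessage) -> bytes:
    # Serialize for broadcast; JSON stands in for the real wire format.
    return json.dumps(asdict(msg)).encode()

def decode(payload: bytes) -> V2VMessage:
    # Reconstruct the message on the receiving vehicle.
    return V2VMessage(**json.loads(payload))

msg = V2VMessage("car-42", 52.37, 4.90, 13.9, 270.0)
assert decode(encode(msg)) == msg  # round trip preserves all fields
```

The point of the round trip is that every nearby vehicle receiving such broadcasts can maintain its own picture of surrounding traffic without line-of-sight sensing.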

 Vehicle guidance system: -

This technology helps steer the vehicle without human intervention. Unlike

driver assistance systems, this system needs no monitoring by a human driver.

The Vehicle Guidance System is part of the control structure of the vehicle and
consists of a path generator, a motion planning algorithm and a sensor fusion

module.

Artificial vs. Emotional Intelligence

Human drivers rely on subtle cues and non-verbal communication, like making eye contact with pedestrians or reading the facial expressions and body language of other drivers, to make split-second judgment calls and predict behaviors. Will autonomous cars be able to replicate this connection? Will they have the same life-saving instincts as human drivers?

A self-driving car (sometimes called an autonomous car or driverless car) is a

vehicle that uses a combination of sensors, cameras, radar and artificial

intelligence (AI) to travel between destinations without a human operator. To

qualify as fully autonomous, a vehicle must be able to navigate without human

intervention to a predetermined destination over roads that have not been

adapted for its use.

Cars with self-driving features: -

A self-driving car can be almost entirely autonomous: it still requires a human driver to be present, but only to override the system when necessary. Such a car is not self-driving in the purest sense, but it can drive itself in ideal conditions and has a high level of autonomy. Many of the cars available to consumers today have a lower level of autonomy but still have some self-driving features.


 Hands-free steering centers the car without the driver's hands on the

wheel. The driver is still required to pay attention.

 Adaptive cruise control down to a stop automatically maintains a

selectable distance between the driver's car and the car in front.

 Lane-centering steering intervenes when the driver crosses lane

markings by automatically nudging the vehicle toward the opposite

lane marking.

Levels Of Autonomy In Self-Driving Cars: -

The Society of Automotive Engineers (SAE) defines 6 levels of driving

automation ranging from 0 (fully manual) to 5 (fully autonomous).

 Level 1: An advanced driver assistance system aids the human driver

with steering, braking or accelerating, though not simultaneously. An

ADAS includes rearview cameras and features like a vibrating seat

warning to alert drivers when they drift out of the traveling lane.

 Level 2: An ADAS that can steer and either brake or accelerate

simultaneously while the driver remains fully aware behind the wheel

and continues to act as the driver.

 Level 3: An automated driving system can perform all driving tasks under certain circumstances, such as parking the car. In these circumstances, the human driver must be ready to retake control and is still required to be the main driver of the vehicle.

 Level 4: An ADS can perform all driving tasks and monitor the

driving environment in certain circumstances. In those circumstances,

the ADS is reliable enough that the human driver needn't pay

attention.

 Level 5: The vehicle's ADS acts as a virtual chauffeur and does all the

driving in all circumstances. The human occupants are passengers and

are never expected to drive the vehicle.
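The level descriptions above can be condensed into a small lookup table. The one-line summaries and the helper function are paraphrases for illustration, not SAE's official wording.

```python
# Condensed summary of the SAE driving-automation levels described above.
SAE_LEVELS = {
    0: "No automation: the human does all the driving",
    1: "Driver assistance: steering OR braking/accelerating support",
    2: "Partial automation: steering AND speed, driver stays engaged",
    3: "Conditional automation: system drives in some situations, "
       "human must be ready to retake control",
    4: "High automation: no human attention needed within its domain",
    5: "Full automation: virtual chauffeur in all circumstances",
}

def requires_human_fallback(level: int) -> bool:
    """Levels 0-3 all rely on the human driver as the fallback;
    only Levels 4 and 5 do not."""
    return level <= 3

assert requires_human_fallback(3)
assert not requires_human_fallback(4)
```

The boundary between Level 3 and Level 4 is the practically important one: it decides whether the human in the seat is legally a driver or merely a passenger.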

Self-driving car safety and challenges: -

Autonomous cars must learn to identify countless objects in the vehicle's path,

from branches and litter to animals and people. Other challenges on the road are

tunnels that interfere with the Global Positioning System, construction projects

that cause lane changes or complex decisions, like where to stop to allow

emergency vehicles to pass.

The systems need to make instantaneous decisions on when to slow down,

swerve or continue acceleration normally. This is a continuing challenge for

developers, and there are reports of self-driving cars hesitating and swerving

unnecessarily when objects are detected in or near the roadways.


Safety: -

Adaptive Cruise Control: -

Adaptive cruise control is an intelligent form of cruise control that automatically slows down and speeds up to keep pace with the vehicle in front of it. This technique helps in avoiding collisions.
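As a toy illustration of the idea, keeping pace can be modeled as a simple feedback loop: accelerate or brake in proportion to the gap error and the speed difference. The gains, target gap and speeds below are invented values for the sketch, not parameters of any real ACC system.

```python
# Toy adaptive-cruise-control loop (all gains and values are illustrative).
def acc_step(speed, gap, lead_speed, target_gap=30.0,
             kp_gap=0.1, kp_speed=0.5, dt=0.1):
    """One control step: close in on the target gap and match the lead car's speed."""
    accel = kp_gap * (gap - target_gap) + kp_speed * (lead_speed - speed)
    return speed + accel * dt

speed, gap, lead = 25.0, 50.0, 20.0   # m/s, m, m/s
for _ in range(600):                  # simulate 60 seconds
    speed = acc_step(speed, gap, lead)
    gap += (lead - speed) * 0.1       # gap grows if the lead car is faster
print(round(speed, 1), round(gap, 1)) # settles at the lead speed and the target gap
```

With these gains the loop is a damped second-order system, so the ego car smoothly converges to the lead car's speed while holding the desired following distance.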

Autonomous Emergency Braking: -

Autonomous Emergency Braking is one of the most advanced developments in standard safety equipment on autonomous vehicles. This technique works by scanning the road ahead and applying the brakes automatically to avoid a collision.

In autonomous vehicles, this technique works by automating the activation of the car's brakes in conjunction with the existing self-steering, lane-keeping assistance and other systems, so the vehicle is controlled with no input from the driver.

Blind spot detection: -

Blind-spot detection is one of the core technologies that provides 360 degrees of electronic coverage around a car, regardless of speed. This technology tracks traffic just behind the vehicle as well as what's coming alongside. Blind-spot detection, or blind-spot monitoring, includes two categories: active blind-spot monitoring and passive blind-spot monitoring.

Electronic stability control: -

Electronic Stability Control is an automatic feature that uses automatic

computer-controlled braking of individual wheels to assist in maintaining

control in critical driving situations. The feature becomes active when the driver

loses control of the vehicle.

Lane keeping assist: -

Lane keeping assist is a technique in autonomous driving vehicles that enables the vehicle to travel along a desired lane by adjusting the front steering angle. This technique is mainly used to prevent accidents during free driving of autonomous vehicles. It works by warning the driver and/or adjusting the steering if the vehicle moves out of its lane.

Reverse park assists: -

The Reverse Park Assist system helps a driver sense when objects are in the blind spot of the vehicle. The system can help prevent reverse-parking accidents. There are two common types of reverse park assist system: a simple audio warning, and a sophisticated camera and video-monitoring system. Also known as rear park assist, this technique utilizes multiple ultrasonic sensors located on the rear bumper of the vehicle.
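The ultrasonic sensors work on echo timing: the bumper emits a pulse and measures how long its reflection takes to return. The conversion from echo time to distance is simple enough to sketch (the speed of sound used here assumes air at roughly 20 °C):

```python
# Convert an ultrasonic echo's round-trip time into an obstacle distance.
SPEED_OF_SOUND = 343.0                # m/s in air at ~20 degrees C

def echo_to_distance(round_trip_s):
    # Halve the travel time: the pulse goes out to the obstacle and back.
    return SPEED_OF_SOUND * round_trip_s / 2

print(echo_to_distance(0.01))         # a 10 ms round trip -> about 1.7 m
```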

5 Stages Of Design Thinking


Empathize

The first stage of the Design Thinking process is to gain an empathic understanding of the

problem you are trying to solve. This involves consulting experts to find out more about the

area of concern through observing, engaging and empathizing with people to understand their

experiences and motivations, as well as immersing yourself in the physical environment so

you can gain a deeper personal understanding of the issues involved. Empathy is crucial to a human-centered design process such as Design Thinking, as it allows design thinkers to set aside their own assumptions about the world and gain insight into users and their needs.
Depending on time constraints, a substantial amount of information is gathered at this stage

to use during the next stage and to develop the best possible understanding of the users, their

needs, and the problems that underlie the development of that particular product.

There are the following problems relating to driving cars:

1. Rush driving by teenagers: in modern times most teenagers use cars to go to college and other places, and they drive very rashly on the roads, which causes accidents.

2. Non-drivers: there are many people in India who don't know how to drive a car but still have to do many things outside, like going to work, picking children up from school, going shopping, handling emergencies, etc.

3. Animal safety: when driving in forest areas or on highways, animals often cross the road and the driver cannot control the car in time, or simply hits the animal.

4. Drink and drive: many people like to drink, and afterwards they have to drive home or to the office; these people sometimes lose consciousness and control over the car and get into an accident.

5. Traffic lights: drivers, mainly teenagers and elderly people, often break the traffic rules, because teenagers don't care about the rules and elderly people don't know them.

6. Sleep: this is a common problem at night; while driving a long route, people feel sleepy, lose control over the car and may face a serious accident.

7. Parking: many people don't know how to park a car properly because they cannot judge the available space.
Define (the problem)

During the Define stage, you put together the information you have created and gathered during the Empathize stage. This is where you analyze your observations and synthesize them in order to define the core problems that you and your team have identified up to this point. You should seek to define the problem as a human-centered problem statement.

In the first stage of the design thinking process we gathered data about the problems that many customers face on a regular basis. The main problem with teenagers is that they drive for fun, don't watch their speed and get into accidents. People who don't know how to drive a car face many problems in their daily lives because they have to rely on cabs or on other transportation systems like railways and roadways. While driving a car, we also often face the common problem of animals: mostly in forest areas and on highways, an animal may jump onto the road, destabilizing the car and causing a serious accident.

While driving, many people wonder why the car cannot drive itself, because on a long route the driver becomes tired and does not want to drive any more, yet cannot stop the car because the area is not always suitable for rest. Many people drive after big parties, for example New Year parties or birthday parties (tagline: GAADI TERA BHAI CHLAEGA); that is called drink and drive, which is very dangerous for all of us. Nowadays traffic signals are on high alert, and people who break a signal by mistake get a challan for it. Parking is also a serious problem, because in traffic, at functions, malls, exhibitions, etc., people cannot judge the available space.
Ideate

During the third stage of the Design Thinking process, designers are ready to start generating ideas. You've grown to understand your users and their needs in the Empathize stage, you've analyzed and synthesized your observations in the Define stage, and you've ended up with a human-centered problem statement. With this solid background, you and your team members can start to "think outside the box" to identify new solutions to the problem statement you've created, and you can start to look for alternative ways of viewing the problem.

There are hundreds of ideation techniques, such as Brainstorm, Brainwrite, Worst Possible Idea, and SCAMPER. Brainstorm and Worst Possible Idea sessions are typically used to stimulate free thinking and to expand the problem space. It is important to get as many ideas or problem solutions as possible at the beginning of the Ideation phase. You should pick some other ideation techniques by the end of the Ideation phase to help you investigate and test your ideas, so you can find the best way to either solve a problem or provide the elements required to circumvent it.

After the first two stages of design thinking we can generate the idea for the problem. According to the above analysis, the idea is to launch the "autonomous car", or in other words the self-driving car. This idea can solve most of the customers' problems, because these cars are able to drive themselves, which creates ease for non-drivers.

The problem of rash drivers can be solved by self-driving cars, because these cars will maintain a safe speed and avoid over-speeding.

The problem of non-drivers is solved as well, because self-driving cars drive automatically.

The problem of animal safety can also be solved by this idea, because these cars will have sensors.

People who drink and drive will benefit from this idea, because they will not have to drive themselves.

And the problems of traffic signals, parking and tiredness can also be solved by this idea.
Prototype

The design team will now produce a number of inexpensive, scaled down versions of the

product or specific features found within the product, so they can investigate the problem

solutions generated in the previous stage. Prototypes may be shared and tested within the

team itself, in other departments, or on a small group of people outside the design team. This

is an experimental phase, and the aim is to identify the best possible solution for each of the

problems identified during the first three stages. The solutions are implemented within the

prototypes, and, one by one, they are investigated and either accepted, improved and re-

examined, or rejected on the basis of the users' experiences. By the end of this stage, the

design team will have a better idea of the constraints inherent to the product and the problems
that are present, and have a clearer view of how real users would behave, think, and feel

when interacting with the end product.

Let's examine some of the features that define the best autonomous cars on the road today and then discuss how to design and manufacture PCBs for them.

Features of self-driving

The automotive industry has a long tradition of vehicle manufacturers producing a new version of their models each year. More often than not, the annual change involves the

enhancement or addition of features that elevate the driving experience by improving

performance and/or increasing safety. For autonomous vehicles (AVs), the primary focus of

feature enhancement or addition is safety in pursuit of achieving a fully driverless car. The

most advanced or best of these cars incorporate some or all of the features listed below.

Lane Control - This is the ability to stay safely within the lane, which is achieved by

monitoring distances to lane markers, road edges and adjacent vehicles. Some of these

systems utilize the global positioning system (GPS) to pinpoint locations.

Adaptive Cruise Control (ACC) - This feature is an enhancement of the common cruise

control that maintains a constant speed. ACC, by contrast, is a safety feature, dedicated to

maintaining a safe distance from the vehicle ahead.

Automatic Emergency Braking System (AEBS) - This safety feature is essential for fully

autonomous vehicles as it automatically stops the vehicle to avoid a collision.

Light Detection and Ranging (LIDAR) - This technology is used for distance determination

and object identification.


Standalone mountable units are used on drones as well as road vehicles.

Street Sign Recognition - This feature is a software program that is able to process sensor

data and identify road signs. Although viable products do exist, this technology will be the

subject of research and development for some time to come.

Vehicle-to-Vehicle (V2V) Communication - This feature is at the heart of a connected-

vehicle technology where vehicles work together to improve the safety of the roadway

system, including the vehicles on it.

Object or Collision Avoidance System (CAS) - An object avoidance system typically

integrates multiple features, such as object detection or identification and AEBS to avoid a

collision.
Test

The testing stage of the design thinking process requires real users to generate real data.

However, the final stage of design thinking is not necessarily the last thing designers will do.

Remember, design thinking is built upon a foundation of iteration, so many designers roll out

multiple prototypes to test different change factors within their idea. Without a

comprehensive testing stage, user experiences and solutions have difficulty scaling.

Testing is often an iterative process. Designers can expect to go through a series of changes,

edits and refinements during the testing stage. For this reason, it's not uncommon for the testing phase to 'restart' earlier design thinking stages such as ideation or prototyping, as newfound ideas might spark additional potential solutions that require an entirely fresh approach.

Creating (and maintaining) maps for self-driving cars is difficult work. Self-driving cars work by relying on a combination of detailed pre-made maps and sensors that "see" obstacles on the road in real time. Both systems are crucial, and they work in tandem: the car uses the map as a reference and then deploys its sensors to look out for other vehicles, pedestrians, and any new objects that weren't on the map, such as unexpected signs or construction.

Driving requires many complex social interactions, which are still tough for robots. Developers keep "training" the cars' software to recognize the various thorny situations that pop up on the roads. For example, the company says its cars can now recognize cyclists and interpret their hand signals, slowing down, say, if the cyclist intends to turn.

All this explains that fully self-driving cars will ultimately need to be adept at four key tasks:

1) understanding the environment around them;

2) understanding why the people they encounter on the road are behaving the way they are;

3) deciding how to respond (it's tough to come up with a rule of thumb for four-way stop signs that works every single time); and

4) communicating with other people.

Bad weather makes everything trickier

This is a real, but lesser, hurdle. Weather adds to the difficulty, but it's not a fundamental challenge. Even if we had a car that only worked in fair weather, that would still be enormously valuable. It might take longer to overcome weather challenges, but this won't derail the technology.

Another issue is cybersecurity

How do we make sure these cars can't be hacked? As vehicles get smarter and more connected, there are more ways to get into them and disrupt what they're doing.

This shouldn't be impossible to fix. Software companies have been dealing with this issue for a long time. It will likely require a culture change in the auto industry, which hasn't traditionally worried much about cybersecurity issues.

IMPLEMENTATION

Road Lane Detection

There are several methods and filters used to detect the road lanes accurately. One of these methods is Canny edge detection, which applies a series of mathematical operations to identify the points that form edges in the captured video frames. The road lane detection technique is applied through several steps: applying a Gaussian filter to smooth the video frame and remove noise, finding the intensity gradients of the image, suppressing spurious responses to edge detection, and applying a threshold to determine possible edges. Finally, the detected edges are tracked and connected. Grayscale and blur filters are also applied to the captured frames to smooth them, making it easier to apply some of these methods. In this stage, a Hough transform is used. The purpose of the whole methodology is to identify lines in the captured digital image and draw a virtual path upon these lines to be the path that the car should take.

The Hough transform is defined by the equation r = x cos θ + y sin θ, where r is the distance from the origin to the closest point on a straight line and θ is the angle between the x-axis and the line connecting the origin with that closest point. After lane detection is done and the lane is detected successfully, mask lines are drawn upon the real lane to mark the path that should be taken. Then the Raspberry Pi outputs one of three signals, right, left or forward, and the car decides how to move according to the threshold value and the detected angle θ.
Disparity Map

A disparity map works much like the way our two eyes capture two different views of a three-dimensional object. The retinal images are fused in the brain in such a way that their disparities are transformed into depth perception, creating a three-dimensional representation of the object in the observer's mind. That is why the normal stereo vision setup arranges two cameras horizontally, pointing in the same direction, with a fixed distance between them. This arrangement yields two different perspectives. Algorithms are then applied to match corresponding points and store the depth information in a disparity map. To calculate the disparity map, the car has to go through two main phases: a pre-processing phase and a processing phase.

Pre-processing

Because two physical cameras can never be aligned perfectly enough to create perfect depth maps, a software calibration method is used: the two cameras take multiple photos of a known object. In our case, we used a printed chessboard. The calibration method then analyzes these photos, using edge detection to find the intersection points in the chessboard image and establish the common view area of the two cameras.
Processing

In this phase, the real-time disparity map calculation takes place. The car simply moves while capturing live video frames and calculating the disparity map for each frame. Then, based on the output of the disparity map, the car's motors take action: slow down, keep moving, or stop. The real-time disparity map distinguishes between distances using different colors on the live captured video frames.

Disparity is the difference in position between the corresponding image points of a scene point, measured relative to the two camera centers. If B is the baseline (the distance between the two cameras), F is the focal length of the cameras, and d is the disparity, then the perpendicular distance Z from the baseline is Z = F · B / d. The depth of a point in a scene is thus inversely proportional to the positional difference of its corresponding image points. With this information, the depth of every pixel in a video frame can be derived.
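The relationship Z = F · B / d is easy to sketch numerically. The focal length and baseline below are invented values for illustration only; real values come from the calibration step described above.

```python
import numpy as np

F = 700.0   # focal length in pixels (assumed for illustration)
B = 0.06    # baseline between the two cameras in metres (assumed)

# A toy 2x2 disparity map in pixels; larger disparity means a closer point.
disparity = np.array([[42.0, 21.0],
                      [14.0, 10.5]])

depth = F * B / disparity   # per-pixel depth in metres: Z = F*B/d
print(depth)                # roughly [[1. 2.] [3. 4.]]

# A simple motor decision like the one in the processing phase:
action = "stop" if depth.min() < 1.5 else "keep moving"
print(action)
```

Halving the disparity doubles the depth, which is exactly the inverse proportionality the text describes.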

Anomalies Detection

Two sensors are required to read and collect data about the car's direction and rotation axes: an accelerometer and a gyroscope. The accelerometer simply measures the acceleration force exerted upon it. The gyroscope is a bit more advanced than the accelerometer, as it can measure its own orientation and rotational velocity.

Therefore, we use the MPU-6050, a small hardware unit composed of these two sensors. To accurately detect road anomalies, the car has to go through two main phases: a pre-processing phase and a processing phase.

After pre-processing

After pre-processing, classification takes place: the data is obtained and classified using a support vector machine (SVM) algorithm with the Radial Basis Function (RBF) kernel method. Research indicates that the support vector machine algorithm achieved the highest accuracy compared with other classification algorithms on most of the data sets used. The decision to use the RBF kernel on SVM was based on a comparison of SVM kernel methods across five different data sets, which showed that RBF SVM achieved the highest accuracy compared with linear, polynomial and sigmoid SVMs. After trying these kernels on our data set as well, the results confirmed the advantage of RBF SVM, which obtained 98.6% accuracy on our data. SVM simply finds a hyperplane, or a line, that separates our data perfectly into two classes: an anomaly class and a normal road class.
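The classification step can be sketched with scikit-learn's RBF-kernel SVM. The six-feature windows and the two clusters below are entirely synthetic stand-ins for the MPU-6050 accelerometer/gyroscope features; the report's real data set and its 98.6% accuracy are not reproduced here.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic sensor windows: 6 summary features each (hypothetical).
# Normal road readings hover near 0; an anomaly (pothole/bump) shifts them.
normal  = rng.normal(0.0, 0.1, size=(100, 6))
anomaly = rng.normal(1.0, 0.1, size=(100, 6))
X = np.vstack([normal, anomaly])
y = np.array([0] * 100 + [1] * 100)        # 0 = normal road, 1 = anomaly

clf = SVC(kernel="rbf", gamma="scale")     # RBF-kernel SVM, as in the report
clf.fit(X, y)

print(clf.score(X, y))                     # 1.0 on this cleanly separated toy data
print(clf.predict([[0.0] * 6, [1.0] * 6])) # classifies a quiet and a shifted window
```

On real sensor logs the two classes overlap far more than this toy pair of clusters, which is where the choice of kernel actually matters.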


ANNEXURE

1. PLEASE SELECT YOUR AGE

18 to 24

25 to 34

35 to 44

45 to 54

55 to 64

65 to 74

75 or older

2. Are you male or female?

Male

Female
3. What kind of vehicle do you drive on a daily basis?

I do not own a license to drive

SUV

Sedan

Truck

Other (please specify)

4. Have you ever heard or read about self-driving cars before?

Yes

No

5. What is your general opinion regarding this technology?

(even if you haven't heard about it before, give an opinion based on what you have

read so far)

Extremely positive

Positive
Neutral

Negative

Extremely negative

6. The self-driving-car technology claims to have many of these benefits. How likely do

you think these benefits will occur in Oman?

Fewer Crashes

Reduction in Fuel consumption (Assuming the vehicles will be powered electrically)

Less traffic

Destinations are reached faster

7. How do you think the Omani government will react to this kind of technology?

Positively

Neutral

Negatively
8. A large concern of people owning a driver-less car is being able to trust it. Do you

feel like you can trust the car to drive itself?

Yes

No

I am not sure

9. Do you see yourself owning a driver-less car in the future?

Yes

No

10. How safe would you feel being a passenger in a self-driving car?

Extremely safe

Very safe

Somewhat safe

Not so safe

Not at all safe


PROJECT REPORT
On
“SELF DRIVING CAR”
Usage of Design Thinking

SUBMITTED TO:
Dr. DINESH KUMAR SINGH
H.O.D
MBA Department
SUBMITTED BY:-
DEEP GAUR
MBA 1stYEAR (1st SEM)
Sub. Code: KMBN(106)
SESSION 2020-2021

ALIGARH COLLEGE OF ENGINEERING AND


TECHNOLOGY, ALIGARH
AFFILIATED TO AKTU UNIVERSITY,LUCKNOW
DECLARATION

I the undersigned solemnly declare that the project report SELF


DRIVING CAR is based on my own work carried out during the
course of our study under the supervision of Dr. Roshan Sheikh.
I assert the statements made and conclusions drawn are an outcome of
my research work. I further certify that
• The work contained in the report is original and has been done by me under the general supervision of my supervisor.

• The work has not been submitted to any other institution for any degree/diploma/certificate in this university or any other university in India or abroad.

• I have followed the guidelines provided by the university in writing the report.

• Whenever I have used materials (data, theoretical analysis, and text) from other sources, I have given due credit to them in the text of the report.
ACKNOWLEDGEMENT

I would like to express my special thanks of gratitude to my teacher (H.O.D) Dr. Dinesh Kumar Singh, who gave me the golden opportunity to do this wonderful project on the topic SELF DRIVING CAR, which also helped me do a lot of research through which I came to know about so many new things. I am really thankful to him.

Secondly, I would also like to thank my parents and friends, who helped me a lot in finalizing this project within the limited time frame.
TABLE OF CONTENT

S.No. TOPICS


1. Introduction
2. Importance& Relevance
3. Feasibility Of The Project

4. Description
5. Product Innovation

6. Outcomes or Results: Challenges
7. Stages of Design Thinking
8. Annexure
