
AUTONOMOUS CAR USING DEEP LEARNING

Recommended by: Anita More

NAME

Harshitha Aditham [1613001]


Rajeshwari Nadar [1613026]

Department of Information Technology


OBJECTIVE

-> SELF-DRIVING ON THE TRACK
-> STOP SIGN AND TRAFFIC LIGHT DETECTION
-> FRONT COLLISION AVOIDANCE
-> PEDESTRIAN DETECTION

INTRODUCTION:

The project will employ concepts from Digital Signal Processing, Dynamic
Programming for path calculation, and Artificial Neural Networks for Machine
Learning as part of Artificial Intelligence. Currently, most hardware-oriented
applications take a procedural approach to implementing autonomy.

However, the procedural approach can only implement Artificial Narrow
Intelligence (ANI). We aim to initiate a transition towards Artificial General
Intelligence (AGI) by expanding the avenues to which ANI is applied and
coalescing them into a step in that direction.

A robot is a container for AI, sometimes mimicking the human form and sometimes
not, but the AI itself is the software in the computer inside the robot. The AI
determines the governing behavior; the robot is merely its body. For example,
Apple's Siri is AI: the voice we hear is a personification of that AI, and no
robot is involved at all. Since AI is a broad concept, there are many types and
forms of it; the critical categories to consider are based on an AI's caliber.
The objective of creating system logic, in the form of Artificial Intelligence,
that can be applied to any form factor is demonstrated here by collision
avoidance using weights that determine whether the system is in a positive or
negative state. This is achieved by using reinforcement learning methods to
train the robot and make it understand its environment. The system thus gives
any hardware framework a platform on which collision avoidance can be
implemented.
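As a minimal sketch of this idea (all values here are hypothetical, not taken from the actual system), a single weight can score the system's state as positive or negative, and be nudged by a learning rate after each crash:

```python
# Hypothetical sketch: one weight scores the state; crashes push it down,
# safe episodes push it up (a reinforcement-style update).

def state_score(distance_cm, weight):
    """Weighted reading; a negative score means the system is in a bad state.
    The -1.0 safety margin is an illustrative constant."""
    return weight * distance_cm - 1.0

def update_weight(weight, crashed, learning_rate=0.01):
    """Punish the weight after a crash, reward it after a safe episode."""
    return weight - learning_rate if crashed else weight + learning_rate

w = 0.05
for crashed in [True, True, False]:   # simulated outcomes of three episodes
    w = update_weight(w, crashed)
print(round(w, 3))                    # 0.04
```

The same update rule applies regardless of the hardware the sensor reading comes from, which is the form-factor independence the paragraph above describes.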
LITERATURE SURVEY:

Unsupervised machine learning is the machine learning task of inferring a
function to describe hidden structure from "unlabeled" data (a classification
or categorization is not included in the observations). Since the examples
given to the learner are unlabeled, there is no evaluation of the accuracy of
the structure output by the relevant algorithm, which is one way of
distinguishing unsupervised learning from supervised learning and reinforcement
learning. A central case of unsupervised learning is the problem of density
estimation in statistics, though unsupervised learning encompasses many other
problems (and solutions) involving summarizing and explaining key features of
the data.
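Density estimation, the central case mentioned above, can be illustrated with a tiny one-dimensional kernel density estimator in pure Python (the data points and bandwidth below are purely illustrative):

```python
import math

def gaussian_kde(samples, x, bandwidth=1.0):
    """1-D kernel density estimate: the average of Gaussian bumps
    centred on the (unlabeled) sample points."""
    norm = 1.0 / (len(samples) * bandwidth * math.sqrt(2 * math.pi))
    return norm * sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2)
                      for s in samples)

data = [1.0, 1.2, 0.8, 5.0, 5.1]   # unlabeled observations, two clusters
# The estimate is higher inside a cluster than in the empty region between.
print(gaussian_kde(data, 1.0) > gaussian_kde(data, 3.0))  # True
```

No labels are used anywhere: the structure (two clusters) is inferred from the observations alone, which is exactly the distinction drawn in the paragraph above.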

Artificial Neural Networks are a computational approach based on a large
collection of neural units, loosely modeling the way the brain solves problems
with large clusters of biological neurons connected by axons. Each neural unit
is connected with many others, and links can be excitatory or inhibitory in
their effect on the activation state of connected units. Each individual neural
unit may have a summation function that combines the values of all its inputs.
There may be a threshold or limiting function on each connection and on the
unit itself, which the signal must surpass before it can propagate to other
neurons. These systems are self-learning and trained rather than explicitly
programmed, and they excel in areas where the solution is difficult to express
in a traditional computer program.
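The summation-and-threshold behavior of a single neural unit can be sketched as follows (the weights and threshold are hypothetical):

```python
def neural_unit(inputs, weights, threshold=0.5):
    """Sum the weighted inputs (positive weights excite, negative weights
    inhibit) and fire only if the sum surpasses the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1.0 if total > threshold else 0.0

print(neural_unit([1.0, 1.0], [0.8, -0.2]))  # 0.6 > 0.5, so it fires: 1.0
print(neural_unit([1.0, 1.0], [0.4, -0.2]))  # 0.2 <= 0.5, stays silent: 0.0
```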
The perceptron algorithm's first implementation, in custom hardware, was one of
the first artificial neural networks to be produced. In machine learning, the
perceptron is an algorithm for supervised learning of binary classifiers. It is
a type of linear classifier, i.e. a classification algorithm that makes its
predictions based on a linear predictor function combining a set of weights
with the feature vector. The activation function of the perceptron is a
function like the Heaviside step function, which returns 1 if the input is
positive or zero, and 0 for any negative input. A detailed study of the
perceptron and its implementation can be found in the literature.
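A minimal sketch of the perceptron learning rule with a Heaviside step activation, trained here on the linearly separable AND function (the learning rate and epoch count are illustrative choices):

```python
def heaviside(x):
    """Step activation: 1 for input >= 0, otherwise 0."""
    return 1 if x >= 0 else 0

def train_perceptron(samples, epochs=10, lr=1):
    """samples: list of ((x1, x2), label). Returns learned (weights, bias).
    On each mistake, weights move toward the correct label."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), label in samples:
            pred = heaviside(w[0] * x1 + w[1] * x2 + b)
            err = label - pred
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Logical AND is linearly separable, so the perceptron converges on it.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
print([heaviside(w[0] * x1 + w[1] * x2 + b) for (x1, x2), _ in data])
# [0, 0, 0, 1]
```

The learned weights define a line separating the (1, 1) input from the other three, which is exactly the linear predictor function described above.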
PROBLEM STATEMENT:

Controlling traffic in metro cities has become difficult due to the growth of
the vehicle population and the corresponding rise in accidents. Many plans have
been implemented to prevent this, but they have failed. To overcome these
problems, we propose the idea of a "Controllable Traffic Autonomous Car". Most
cars nowadays are smart; we utilize this idea to build a prototype that
responds to traffic signals and prevents accidents using machine learning
prediction and image processing techniques. The main controlling unit of the
car is a Raspberry Pi, which teaches the car to follow the path, stop at a red
traffic signal, maintain its speed, and stop before colliding with nearby
vehicles. The distance between cars is monitored with the help of an ultrasonic
sensor. The car is also equipped with GSM and GPS modules to locate it and send
alerts in emergency situations. The results depend on the quality of the image
frames from the camera and on the collision avoidance driven by data from the
ultrasonic sensor. The machine learning and image processing are done using the
OpenCV module in Python. Self-driving is handled by a convolutional neural
network and object detection by Haar classifiers, with deep learning used for
more advanced object recognition.
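The stop/slow/go behavior described above can be sketched as a pure decision function of the ultrasonic gap and the detected light state (the thresholds are hypothetical and would come from calibration on the real prototype):

```python
def drive_command(distance_cm, red_light, min_gap_cm=30):
    """Decide the car's next action from the ultrasonic gap and the
    traffic-light state. min_gap_cm is an illustrative threshold."""
    if red_light or distance_cm < min_gap_cm:
        return "stop"                 # red signal or imminent collision
    if distance_cm < 2 * min_gap_cm:
        return "slow"                 # closing in on the vehicle ahead
    return "go"

print(drive_command(100, red_light=False))  # go
print(drive_command(45, red_light=False))   # slow
print(drive_command(100, red_light=True))   # stop
```

Keeping the decision logic separate from the sensor and actuator code is what lets the same rules run unchanged on any hardware framework.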
The objective of the agent is to avoid crashing without using hard-coded,
predefined values for the stopping distance from the nearest obstacle. This is
achieved by learning from initial crashes and tuning the synaptic weights to
change the stopping distance in subsequent iterations. Initially, the system
performs unsupervised learning: on crashing into an obstacle, the robot
backtracks to the previous safe state, increments the current threshold value
(initially set to zero) by the learning rate, and dumps the new state of the
system. Optimal path calculation is then done by taking a sweep of the
environment with the ultrasonic sensor mounted on a servo motor. The longest
bitonic subarray of the sweep readings indicates the most promising direction
or path for the robot's movement.
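The longest-bitonic-run idea can be sketched directly: scan the sweep readings for the longest contiguous stretch that first rises and then falls, which marks the widest open corridor (the sweep values below are illustrative):

```python
def longest_bitonic_span(readings):
    """Return (start, end) indices of the longest contiguous run that first
    rises then falls. In a servo sweep of ultrasonic distances, this span
    points at the most open direction."""
    n = len(readings)
    best = (0, 0)
    for i in range(n):
        j = i
        while j + 1 < n and readings[j + 1] > readings[j]:
            j += 1                      # climb while strictly increasing
        k = j
        while k + 1 < n and readings[k + 1] < readings[k]:
            k += 1                      # descend while strictly decreasing
        if k - i > best[1] - best[0]:
            best = (i, k)
    return best

# Hypothetical sweep: distances in cm at nine servo angles.
sweep = [20, 35, 60, 90, 70, 40, 30, 55, 25]
print(longest_bitonic_span(sweep))      # (0, 6): rise to 90, fall to 30
```

The midpoint of the returned span would then be taken as the servo angle to steer toward.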
METHODOLOGY:

The car's main loop runs indefinitely at a defined frequency. First, the car
finds out whether it can continue driving or whether there is an obstacle in
front of it. It then gets the latest speed and steering angle, provided either
by the user from the web interface or by a pretrained model (based on the most
recent camera image). The speed and angle are then saved and fed into the
actuator, which executes the appropriate steering commands. It is worth
mentioning that the camera unit, the web interface, and the distance sensor all
run on their own threads so as not to stall the execution loop. The most
computationally demanding part of this loop is the inference step of the neural
network. A more thorough performance breakdown, where we focus on the
suitability of the Raspberry Pi for this task and the measured performance of
the system, is presented separately. The process is depicted in the following
diagram.
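The loop described above can be sketched with the distance sensor on its own thread and the prediction and actuation supplied as callbacks (all names and values here are illustrative stand-ins for the real hardware interfaces):

```python
import threading
import time

class DistanceSensor:
    """Stand-in for the ultrasonic sensor thread; the real one would
    trigger a pulse on GPIO and time the echo."""
    def __init__(self):
        self.distance_cm = 200.0
        self._stop = threading.Event()

    def run(self):
        while not self._stop.is_set():
            time.sleep(0.01)          # real code: update self.distance_cm

    def stop(self):
        self._stop.set()

def control_step(sensor, predict, actuate, min_gap_cm=30):
    """One iteration of the main loop: check the gap, then steer or brake."""
    if sensor.distance_cm < min_gap_cm:
        actuate(speed=0.0, angle=0.0)   # obstacle ahead: full stop
    else:
        speed, angle = predict()        # model (or web UI) supplies commands
        actuate(speed=speed, angle=angle)

log = []
sensor = DistanceSensor()
threading.Thread(target=sensor.run, daemon=True).start()
control_step(sensor, predict=lambda: (0.5, -5.0),
             actuate=lambda speed, angle: log.append((speed, angle)))
sensor.distance_cm = 10.0               # simulate an obstacle appearing
control_step(sensor, predict=lambda: (0.5, -5.0),
             actuate=lambda speed, angle: log.append((speed, angle)))
sensor.stop()
print(log)  # [(0.5, -5.0), (0.0, 0.0)]
```

Because the sensor updates its reading on a separate thread, the control loop never blocks on I/O, which matches the threading design described above.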
REQUIREMENTS:

Servos, motors, and ESC
SainSmart wide-angle fish-eye camera lens
Raspberry Pi 3 Model B
Power bank (Anker Astro E1) and jumper wires
Ultrasonic sensor

Bibliography
[1] Tesla Motors. Tesla Autopilot. 2017. [Online]. Available from:
https://www.tesla.com/autopilot
[2] Karpathy, A. Stanford CS231n - Convolutional Neural Networks for Visual
Recognition. [Online]. University Lecture, January 2018. Available from:
http://cs231n.github.io/neural-networks-1/
[3] LeCun, Y.; Bottou, L.; et al. Gradient-Based Learning Applied to Document
Recognition. Proceedings of the IEEE, volume 86, no. 11, November 1998: pp.
2278–2324.
[4] Gudi, A. Recognizing Semantic Features in Faces using Deep Learning.
Master's thesis, University of Amsterdam, September 2014.
[5] Goodfellow, I.; Bengio, Y.; et al. Deep Learning. MIT Press, 2016.
http://www.deeplearningbook.org
[6] Selvaraju, R. R.; Das, A.; et al. Grad-CAM: Why did you say that? Visual
Explanations from Deep Networks via Gradient-based Localization. CoRR, volume
abs/1610.02391, 2016. Available from: http://arxiv.org/abs/1610.02391
[7] Krizhevsky, A. Learning Multiple Layers of Features from Tiny Images. May
2012.
[8] Krizhevsky, A.; Sutskever, I.; et al. ImageNet Classification with Deep
Convolutional Neural Networks. Commun. ACM, volume 60, no. 6, May 2017: pp.
84–90, ISSN 0001-0782, doi:10.1145/3065386. Available from:
http://doi.acm.org/10.1145/3065386
