DEEP LEARNING
NAME
INTRODUCTION:
The project employs concepts from digital signal processing, dynamic programming for path calculation, and artificial neural networks for machine learning as a part of artificial intelligence. Currently, most hardware-oriented applications take a procedural approach to implementing autonomy. However, the procedural approach can only implement Artificial Narrow Intelligence (ANI). We aim to initiate a transition towards Artificial General Intelligence (AGI) by expanding the avenues to which ANI is applied and coalescing these into a step in the direction of AGI.
A robot is a container for AI, sometimes mimicking the human form and sometimes not, but the AI itself is in the computer inside the robot. The AI determines the governing behavior, and the robot is merely its body. For example, Apple's Siri is an AI: the woman's voice we hear is a personification of that AI, and there is no robot involved at all. Since AI is a broad concept, there are many different types or forms of it; the critical categories we need to think about are based on an AI's caliber. The objective of creating system logic that can be applied, in the form of artificial intelligence, to any form factor is demonstrated by collision avoidance using weights that determine whether the system is in a positive or negative state. This is achieved by using various reinforcement learning methods to train the robot and make it understand its environment. Thus, this system provides a platform on which any hardware framework can be implemented to avoid collisions.
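The positive/negative state idea above can be sketched as a minimal reward-driven loop; the episode structure, learning rate, and distance values here are illustrative assumptions, not the report's actual implementation:

```python
# Sketch: a crash (negative state) raises the stopping-distance
# threshold by the learning rate; a safe stop (positive state)
# keeps it. All values below are illustrative.

LEARNING_RATE = 0.05  # assumed step size, in metres


def run_episode(threshold, readings):
    """Simulate one drive over successive distance readings (metres).

    Returns (crashed, updated_threshold).
    """
    for distance in readings:
        if distance <= 0.0:
            # Negative state: crashed, so brake earlier next time.
            return True, threshold + LEARNING_RATE
        if distance <= threshold:
            # Positive state: braked in time; keep the threshold.
            return False, threshold
    return False, threshold


# Training: repeat episodes until the agent stops before impact.
threshold = 0.0
for _ in range(100):
    crashed, threshold = run_episode(threshold, [0.5, 0.3, 0.1, 0.0])
    if not crashed:
        break
```

After two crashes the threshold has grown past the last safe reading (0.1 m), so the agent brakes in time on the third episode.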
LITERATURE SURVEY:
Controlling traffic in metropolitan cities is a huge challenge due to the growth in the vehicle population and the corresponding increase in accidents. Many plans have been implemented to prevent this, but they have failed. To overcome these problems, we propose the idea of a "Controllable Traffic Autonomous Car". Most cars nowadays are smart; we utilize that idea and build a prototype that responds to traffic signals and prevents accidents using machine learning prediction and image processing techniques. The main controlling unit of the car is a Raspberry Pi, which teaches the car to follow the path, to stop when the traffic signal is red, to maintain its speed, and to stop before it collides with nearby vehicles. The distance between cars is monitored with the help of an ultrasonic sensor. The car is also fitted with GSM and GPS modules to locate it and to send alerts in emergency situations.
The results obtained depend on the quality of the image frames from the camera, and collision avoidance depends on the sensor data from the ultrasonic sensor. The machine learning and image processing are done using the OpenCV module in Python. Self-driving is performed by a convolutional neural network and object prediction by Haar classifiers, with further advances to come from deep learning of the objects.
The objective of the agent is to avoid crashing without using hard-coded, predefined values for the stopping distance from the nearest obstacle. This is achieved by learning from initial crashes and tuning the synaptic weights so that the stopping distance changes in subsequent iterations. Initially, unsupervised learning is performed by the system: on crashing into an obstacle, the robot backtracks to the previous safe state, increments the current threshold value (initially set to zero) by the learning rate, and dumps the new state of the system. Optimal path calculation is then done by taking a sweep of the environment using an ultrasonic sensor mounted on a servo motor. The longest bitonic run of distance readings in this sweep indicates the most promising direction or path for the robot's movement.
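The bitonic-run step above can be sketched with the standard dynamic-programming approach (rising-run lengths from the left, falling-run lengths from the right); the sweep values are illustrative:

```python
# Sketch: choose the heading with the most open space by finding
# the longest bitonic (rising-then-falling) run of ultrasonic
# distances across the servo sweep.

def longest_bitonic_run(distances):
    """Return (start, end) indices of the longest bitonic run."""
    n = len(distances)
    inc = [1] * n  # inc[i]: longest strictly rising run ending at i
    dec = [1] * n  # dec[i]: longest strictly falling run starting at i
    for i in range(1, n):
        if distances[i] > distances[i - 1]:
            inc[i] = inc[i - 1] + 1
    for i in range(n - 2, -1, -1):
        if distances[i] > distances[i + 1]:
            dec[i] = dec[i + 1] + 1
    # The peak maximising inc + dec anchors the longest run.
    peak = max(range(n), key=lambda i: inc[i] + dec[i])
    return peak - inc[peak] + 1, peak + dec[peak] - 1


# Illustrative sweep: distances (cm) at successive servo angles.
sweep = [20, 35, 60, 90, 70, 40, 30, 25]
start, end = longest_bitonic_run(sweep)
```

The peak of the returned run (here index 3, the 90 cm reading) is the most promising steering direction.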
METHODOLOGY:
The car’s main loop runs indefinitely at a defined frequency. First, the car finds out whether it can continue driving or whether there is an obstacle in front of it. Then it gets the latest speed and steering angle, which are provided either by the user from the web interface or by a pretrained model (based on the most recent camera image). The speed and angle are then saved and fed into the actuator, which executes the appropriate steering commands. It is worth mentioning that the camera unit, the web interface, and the distance sensor all run on their own separate threads so as not to stall the execution loop. The most computationally demanding part of this loop is the inference of the neural network. A more thorough performance breakdown can be found in , where I focus on the suitability of the Raspberry Pi for this task and the measured performance of the system. The process is depicted in the following diagram.
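The loop structure described above can be sketched as follows; the `Car` class, its attribute names, and the policy interface are illustrative assumptions, not the report's actual code:

```python
import time

# Hypothetical skeleton of the main control loop. The sensor and
# camera threads (not shown) would update `obstacle` and
# `latest_frame`; the actuator call is left as a comment.

class Car:
    def __init__(self, model, loop_hz=20):
        self.period = 1.0 / loop_hz    # fixed loop frequency
        self.model = model             # pretrained policy: frame -> (speed, angle)
        self.obstacle = False          # written by the distance-sensor thread
        self.latest_frame = None       # written by the camera thread
        self.running = True

    def step(self, frame):
        """One iteration: stop for obstacles, otherwise infer
        speed and steering angle from the latest camera frame."""
        if self.obstacle or frame is None:
            return 0.0, 0.0            # brake, wheels straight
        return self.model(frame)

    def run(self):
        while self.running:
            speed, angle = self.step(self.latest_frame)
            # actuator.apply(speed, angle) would go here
            time.sleep(self.period)
```

Keeping the camera, web interface, and sensor on their own threads means `step` only ever reads the most recent values and the loop period stays constant.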
REQUIREMENT:
Bibliography
[1] Tesla Motors. Tesla Autopilot. 2017, [Online]. Available from:
https://www.tesla.com/autopilot
[2] Karpathy, A. Stanford CS231n - Convolutional Neural Networks for Visual
Recognition [Online]. University Lecture, January 2018. Available from:
http://cs231n.github.io/neural-networks-1/
[3] LeCun, Y.; Bottou, L.; et al. Gradient-Based Learning Applied to Document
Recognition. Proceedings of the IEEE, volume 86, no. 11, November 1998: pp.
2278–2324.
[4] Gudi, A. Recognizing Semantic Features in Faces using Deep Learning. Master’s
thesis, University of Amsterdam, September 2014.
[5] Goodfellow, I.; Bengio, Y.; et al. Deep Learning. MIT Press, 2016,
http://www.deeplearningbook.org.
[6] Selvaraju, R. R.; Das, A.; et al. Grad-CAM: Why did you say that? Visual
Explanations from Deep Networks via Gradient-based Localization. CoRR, volume
abs/1610.02391, 2016. Available from: http://arxiv.org/abs/1610.02391
[7] Krizhevsky, A. Learning Multiple Layers of Features from Tiny Images. May 2012.
[8] Krizhevsky, A.; Sutskever, I.; et al. ImageNet Classification with Deep
Convolutional Neural Networks. Commun. ACM, volume 60, no. 6, May 2017: pp.
84–90, ISSN 0001-0782, doi:10.1145/3065386. Available from:
http://doi.acm.org/10.1145/3065386