SUBMITTED BY
AKHIL KM 720917114014
Of
BACHELOR OF ENGINEERING
IN
MECHANICAL ENGINEERING
COIMBATORE
APRIL 2021
ANNA UNIVERSITY: CHENNAI 600 025
BONAFIDE CERTIFICATE
Certified that this project report “SELF DRIVING VEHICLE BASED ON
ARTIFICIAL INTELLIGENCE USING MACHINE LEARNING” is the
bonafide work of “ANUSH MOHAMED E, ABDUL HAADI BA, AKHIL
KM, MANU SKARIA” who carried out the project work under my supervision.
ACKNOWLEDGEMENT
We, the authors of this project, first of all thank the Almighty and our parents
for providing us the right proportion of strength and knowledge for the
successful completion of the project.
We take this opportunity to express our sincere thanks to our guide Mr.
RAMAKRISHNA, M.E., Department of Mechanical Engineering, JCT College of
Engineering and Technology, Coimbatore, for his endless support and
encouragement during this project. We extend our sincere thanks to all our
teaching and non-teaching staff members for helping us.
JCT COLLEGE OF ENGINEERING AND TECHNOLOGY
PICHANUR, COIMBATORE – 64110
DEPARTMENT OF MECHANICAL ENGINEERING
PO11 Project management and finance: Demonstrate knowledge and understanding of the engineering and management principles and apply these to one’s own work, as a member and leader in a team, to manage projects in multidisciplinary environments.
PO12 Life-long learning: Recognize the need for, and have the preparation and ability to engage in, independent and life-long learning in the broadest context of technological change.
ABSTRACT
Advancement in self-driving technology has created opportunities for smart urban mobility. Self-driving vehicles are now a popular topic with the rise of the smart city agenda. However, legislators, urban administrators, policymakers, and planners are unprepared to deal with the possible disruption of autonomous vehicles, which could potentially replace conventional transport. There is a lack of knowledge of how the new capabilities will disrupt existing systems and which policy strategies are needed to address such disruption. This report aims to determine where we are, where we are headed, what the likely impacts of a wider uptake could be, and what needs to be done to generate the desired smart urban mobility outcomes. The objective of this project was to create a vehicle able to work with artificial intelligence.
This paper presents a miniature self-driving car using IoT, with a Raspberry Pi and an Arduino UNO working as the main processor chips. The 8 MP high-resolution Pi camera provides the necessary information; the Raspberry Pi analyses the data (samples) and is trained on the Pi with a neural network and a machine learning algorithm, which results in detecting road lanes and traffic lights, and the car takes turns accordingly. In addition to these features, the car will overtake, with proper LED indications, if it comes across an obstacle.
The system has sensors to detect obstacles, as well as a camera, so it can react according to their position. Several extra sensors and C++ programs were used to measure the distance of an obstacle; the information given by them feeds the orders issued by the CPU, along with a light sensor (LDR) that helps decide whether the camera and the computer vision algorithm should be used. The system uses both the camera and the sensors for obstacle detection, so that if one of the components fails to work it is compensated by the other, ensuring safety both in good light and in near darkness.
TABLE OF CONTENTS
ABSTRACT
LIST OF FIGURES
LIST OF ABBREVIATIONS
1. INTRODUCTION
1.1 Overview
1.2 Project Overview
1.3 Objective
1.4 Working
1.5 Thesis Structure
2. LITERATURE REVIEW
3. COMPONENTS/HARDWARE USED
3.1 List of Components
3.1.1 Uses of Ultrasonic Sensor
3.1.2 Uses of Motor Module
3.1.3 Uses of Camera Module
3.1.4 Uses of Color Detection Sensor
3.1.5 Uses of Infrared Sensor
3.1.6 Uses of Arduino Board
3.1.7 What is a Robot Chassis and Types
3.1.8 Robot Wheel Configurations
3.1.9 Jumper Wires
3.1.10 18650 Battery Pack
3.1.11 LED Signal Light (Prototype)
3.1.12 Uses of Custom Printed Circuit Board
3.1.13 DIY Road to Test Lane Detection
4. SOFTWARE USED AND SETUP
4.1 What is Arduino IDE
4.2 How to Setup and Configure Arduino IDE
5. METHODOLOGY
5.1 Assembling Chassis Kit
REFERENCES
LIST OF FIGURES
4.2 Arduino IDE Programming
4.3 OpenCV
5.13 Arduino Connection to Module
7.1 Self-driving car made by Tesla
LIST OF ABBREVIATIONS
IDE Integrated Development Environment
ML Machine Learning
NN Neural Networks
AI Artificial Intelligence
DC Direct Current
CHAPTER ONE
INTRODUCTION
1.1 Overview
This chapter explains the project overview, project objectives and project methodology, along with the basic idea behind self-driving vehicles, the history of self-driving vehicles, the industries producing these autonomous vehicles, their pros and cons, and market trends. A self-driving vehicle is a vehicle that guides itself without human interference. It is designed to sense its own environment using deep learning and computer vision techniques. Autonomous vehicles have now gone from science fiction to reality. It seems like this technology emerged overnight, but in reality the path to self-driving vehicles has been a long and tedious one. The history of self-driving vehicles went through several milestones. After the invention of human-driven motor vehicles, it did not take long for inventors to think about self-driven vehicles. In the year 1925, the inventor Francis Houdina demonstrated a radio-controlled vehicle, which he drove through the streets of Manhattan without anyone at the steering wheel. The radio-controlled vehicle could start its engine, shift gears, and sound its horn. In 1969, John McCarthy, one of the founding fathers of artificial intelligence, described something similar to the modern autonomous vehicle in an essay titled “Computer-Controlled Cars.” McCarthy refers to an “automatic chauffeur,” capable of navigating a public road via a “television camera input that uses the same visual input available to the human driver”.
In the early 90s, Carnegie Mellon researcher Dean Pomerleau wrote a PhD thesis describing how neural networks could allow a self-driving vehicle to take in raw images from the road and output steering controls in real time. Pomerleau was not the only researcher working on self-driving vehicles, but his use of neural networks proved far more efficient than alternative attempts to manually divide images into “road” and “non-road” categories. Waymo, also known as the “Google Self-driving Car”, collects tons of data and feeds it to deep learning algorithms for labelling and processing. Waymo is nowadays being used as an online cab service, like Uber, in different states of the US. The basic model for a self-driving vehicle is shown in figure 1.1: Figure 1.1: Basic model for self-driving vehicles. The basic concept behind a self-driving vehicle is to sense its environment and take actions accordingly, as shown in fig 1.1. The vehicle collects data about its environment through a single camera or multiple cameras along with different sensors. This data is then processed through advanced computer vision and other algorithms to generate the actions required to manoeuvre the vehicle according to the environment. For example, the “Tesla vehicle” uses Python, ML, SQL and C++ to bring human capabilities to a computer
and for “Waymo”, machine learning plays an important role. With the collaboration of Google AI researchers, Waymo is integrated with AutoML, which enables the autonomous vehicle to optimize models for different scenarios at great velocity. AutoML has a graphical user interface to train, assess, improve, and deploy models based on the data. GM Cruise comes second in covering the greatest number of miles autonomously. It is considered to be among the world’s most advanced self-driving vehicles, intended to fully connect people with the places, things, and experiences they care about. Safety planning and functioning requirements in obedience with federal, state and local regulations are kept in focus. The Cruise self-driving vehicle has a balanced array of sensors so that the vehicle can map out complex city streets intelligently and with a 360-degree view of the world around it. Each vehicle contains 10 cameras that take pictures at 10 frames per second. That is how this vehicle is able to see more of its surrounding environment and therefore can respond more quickly and safely. At Nissan, advancement in artificial intelligence is making autonomous vehicles smarter, more responsive, and better at making their own decisions. Nissan is developing a vehicle that will be capable of self-driving on a single-lane road in the near future. The next step will be multi-lane roads, then self-driving in the city, and in the end fully autonomous driving in all situations. Nissan self-driving vehicles are designed to get smarter with time. Seamless Autonomous Mobility is a system developed from NASA technology to help autonomous vehicles deal with the unexpected: the vehicle sends live data to a mobility manager who instantly teaches it what to do, then it shares what it learned with other vehicles in the system so they get smarter too. Once this knowledge is absorbed into the system, vehicles can start using it to solve completely new challenges.
How safe should a self-driving vehicle be?
This question is becoming increasingly important as companies like Tesla, Waymo, Apple, Uber and others test their self-driving vehicles on public roads. Around the world, 1.3 million people are killed each year by vehicles. Many of these deaths are due to human error. If self-driving vehicles can help with this, it would be a great achievement. In contrast to human drivers, self-driving vehicles do not drink, do not text and do not fall asleep at the wheel, thus reducing deaths and injuries.
move. Initially, tracks are deployed on the ground in order to gather data in the form of videos using OpenCV with a webcam interface, and ultrasonic sensors are used in case the camera fails to detect. Videos and images are extracted for classification of the data into four different classes, i.e., right, left, forward or stop. Before feeding the data to the neural network model, the Hough transform is applied using OpenCV for finding the lane lines. The data is trained using a Convolutional Neural Network model, and a classifier is set up which is able to predict in real time whether to steer the vehicle left, right, forward, or to stop. This project also includes sensors, being computer-vision based with added sensing technology. The training and inference are done using a Raspberry Pi 3 B+ connected to a Ryzen 5 laptop. In our model, the classifier takes the input images from the live feed and predicts which direction to choose, or to stop. After prediction, the classifier generates a string, and through serial communication the string is sent to the Arduino. Finally, the Arduino processes the wrappers embedded in its code according to the string received from the classifier, and the vehicle moves according to the prediction. The trained model predicts in which direction to move and can also respond to traffic signs, such as a stop sign. The picture of the toy vehicle used in this project is shown in figure 1.4. This is a motorized vehicle in which additional modifications such as steering angle control, motor driver control and high-resolution camera interfacing have been done.
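The classifier-to-Arduino handoff described above can be sketched as follows. The exact command strings the project uses are not given, so the single-character commands below are assumptions for illustration, not the project's actual code.

```python
# Map each classifier label to a one-character serial command (assumed values).
COMMANDS = {"left": "L", "right": "R", "forward": "F", "stop": "S"}

def command_for(prediction):
    """Translate a classifier label into the serial command to send."""
    return COMMANDS.get(prediction, "S")  # default to stop on unknown labels
```

On the Arduino side, a matching switch over the received character would then drive the motor pins.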
1.4 Working
The block diagram of the project is shown in the figure. Initially, tracks are deployed on a surface in order to gather data through video streaming from a webcam using the Arduino and OpenCV, an open-source computer vision library. After the collection of data, the video was segmented into frames and classified into four classes, i.e., right, left, forward and stop. This classified data is converted into the required format using computer vision algorithms such that the data consists of only bright Hough lines on a black image. To get the required format, the image was first converted into grayscale. To reduce noise and smooth the data images, a Gaussian blur is applied. As noise in an image creates false edges and affects edge detection, after smoothing the images the Canny method is applied to identify edges in the image. To identify the region of the image for Hough lines, the region of interest is found, and to mask out anything else in the image the bitwise AND operator is used. Initially the Hough lines are drawn on a zero-pixel image using the bitwise AND operator inside the region of interest. The weights of the Hough line transform image and the real image of the track are added. By adding the weights of these images, the Hough lines are displayed. These displayed lines are then averaged according to their slopes and intercepts in order to display the lines in equal ratio. After pre-processing, the data is fed to a CNN model for training. The training and inference are done using a Ryzen 5 3550H laptop. The training data used in our project is about 70 percent of the complete data. Supervised learning is used for training. The data is classified and labelled as right, left and forward, and is trained using a CNN sequential model. The CNN model used for training contains 15 hidden layers. These layers include dense layers, convolutional 2D layers, max-pooling 2D layers, a flatten layer and fully connected layers. The CNN is used for extracting features from the images and learning from these features by updating the biases and weights of the perceptrons. The trained model then takes the input images from the live camera and predicts which direction to choose, or to stop. After prediction, the trained model generates a string, and through serial communication the string is sent to the Arduino. Finally, the Arduino processes the wrappers embedded in its code according to the string received from the trained model and sends control signals to the H-bridge to drive the motors of the vehicle to move or stop according to the prediction.
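The averaging of Hough segments by slope and intercept described above can be sketched in plain Python. This is an illustrative reconstruction, not the project's code: it assumes segments are given as (x1, y1, x2, y2) tuples, and that negative-slope segments belong to the left lane line and positive-slope segments to the right (image y grows downward).

```python
def fit_line(x1, y1, x2, y2):
    """Return the (slope, intercept) of the segment through two points."""
    slope = (y2 - y1) / (x2 - x1)   # assumes non-vertical segments
    return slope, y1 - slope * x1

def average_lanes(segments):
    """Average Hough segments into one left and one right lane line."""
    left, right = [], []
    for seg in segments:
        slope, intercept = fit_line(*seg)
        (left if slope < 0 else right).append((slope, intercept))

    def mean(group):
        if not group:               # no segments detected on this side
            return None
        avg_m = sum(m for m, _ in group) / len(group)
        avg_b = sum(b for _, b in group) / len(group)
        return avg_m, avg_b

    return mean(left), mean(right)
```

A single averaged (slope, intercept) pair per side can then be redrawn as one solid line over the camera frame.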
CHAPTER TWO
LITERATURE REVIEW
2. The paper “Self-driving vehicle ISEAUTO for research and education”, presented by
Raivo Sell, Mairo Leier and Anton Rassõlkin, describes the ISEAUTO project, the first
self-driving vehicle project in Estonia, implemented at Tallinn University of
Technology in cooperation with an Estonian automotive company. ISEAUTO is a
research and educational project targeted at the design and development of an
automated vehicle in cooperation with a private company and students.
3. The paper “Driverless Intelligent Vehicle for Future Public Transport Based on GPS”,
by Lakshmi, Dipeshwar Kumar Yadav and Vivek Kumar Verma, involves
equipping a four-wheeled robot with GPS and GSM systems. The GPS system steers the
robot, which is capable of travelling from one point to another without any human
intervention. With the help of the GSM system, they promise to report theft in case
there is any: an SMS alert is sent to the vehicle owner reporting the issue, and as a
result the owner of the vehicle can switch the ignition off. The project also states that
the vehicle can only be turned on if an authorized person sends a predefined location
to the vehicle.
4. The paper “Traffic Light Detection and Recognition for Self-Driving Vehicles using
Deep Learning”, by Ruturaj Kulkarni, Shruti Dhavalikar and Sonal Bangar. There are
several object detection architectures available, like the Single Shot MultiBox Detector
(SSD), Faster Region-based CNN (Faster R-CNN) and Region-based Fully Convolutional
Network (R-FCN), which incorporate feature extractors like ResNet-101, Inception-V2,
Inception-V3, MobileNet, etc. The selection of architecture and feature extractor is a
trade-off between the speed and accuracy that your application needs. For traffic light
detection, considering the application requirements and available computational
resources, the Faster R-CNN Inception-V2 model is used, which serves the accuracy and
speed trade-offs. The model is trained on the above-mentioned dataset, where the loss is
reported at each step of training. The model is trained on an NVIDIA GEFORCE
940M GPU using TensorFlow.
6. The paper presented by Dong, D., Li, X., & Sun, “A Vision-based Method for
Improving the Safety of Self-driving”, gives a detailed view of how they developed a
simulator able to detect traffic signs, lanes and road segmentation.
7. The paper presented by Straub, J., Amer, W., Ames, C., Dayananda, K. R.,
Marsala and Shipman, “An Internetworked Self-Driving Vehicle System-of-
Systems”, proposes an efficient way of establishing communications between two or
more vehicles in a particular system to keep the traffic less congested.
8. The paper “Real-time multiple vehicle detection and tracking from a moving vehicle”,
by Margrit Betke, Esin Haritaoglu and Larry S. Davis, includes modules for the
detection of other vehicles on the road. The Navlab project at Carnegie Mellon
University uses the Rapidly Adapting Lateral Position Handler (RALPH) to determine
the coordinates of the road ahead and the appropriate steering direction. RALPH
automatically steered a Navlab vehicle for 98% of a trip from Washington DC to San
Diego, California, a distance of over 2,800 miles. They added a module for vehicle
tracking, a module for detecting overtaking vehicles, and a trinocular stereo module
(three-view vision) for detecting distant obstacles to enhance and improve the
Navlab vehicle's performance.
10. The paper “The Issues and the Possible Solutions for Implementing Self-Driving
Vehicles in Bangladesh”, presented by Mohammad Faisal bin Ahmed, Md. Safe Ullah
Miah, Md. Muneer Anjum, Shakira Akhter and Md. Sarkar, highlights some of the issues
of Bangladeshi roads raised in a paper published in 2004. The Google vehicle, among
other things, can calculate the most efficient path, abide by local traffic rules, park
when necessary and change lanes if required.
CHAPTER THREE
HARDWARE/COMPONENTS USED
3.1 List of Components
1. Ultrasonic Sensor
2. L298 Motor Driver
3. Camera Module (for machine vision)
4. Color Sensor
5. Infrared Sensors
6. Arduino UNO
7. Robot Chassis
8. Robot Wheels
9. Jumper Wires
10. 18650 Battery Pack
11. LED Bulbs for Signal Light (Prototype)
12. Custom Printed Circuit Board
13. DIY Road for Testing Lane Detection
but the acceptable supply voltage is as high as 45 volts and the highest output current per
channel is at most 2 amperes.
3.1.3 Camera Module
Figure 3.1.6
Arduino Uno
fourteen digital input/output pins that may be interfaced to
different expansion boards (shields) and other circuits, of
which six can be operated as Pulse Width Modulation (PWM)
outputs; six analog input pins; a Universal Serial Bus (USB)
connection; a power barrel jack; an In-Circuit Serial
Programming (ICSP) header; and a reset switch
Figure 3.1.7
Robot Chassis
3.1.8 Robot Wheels
Wheeled robots are robots that navigate around the ground
using motorized wheels to propel themselves. This design is
simpler than using treads or legs; by using wheels, the robots
are easier to design, build, and program for movement on
flat, not-so-rugged terrain.
Figure 3.1.8
Robot Wheels
3.1.9 Jumper Wires
A jump wire (also known as jumper, jumper wire, jumper cable, DuPont wire or cable) is an
electrical wire, or group of them in a cable, with a connector or pin at each end (or sometimes
without them – simply "tinned"), which is normally used to interconnect the components of a
breadboard or other prototype or test circuit.
Figure 3.1.9
Jumper Wire
Figure 3.1.10
18650 Battery
This is a kind of battery mainly used for this type of project. Another advantage of using this
kind of battery is that it is easily rechargeable; this battery is good enough for our project to
work.
Figure 3.1.11
LED Signal Light
3.1.12 Custom PCB board
A custom board is not mandatory, but in any case of
power fluctuation this custom PCB will compensate for
the problem. It is essential in order to avoid restarts of the
Arduino UNO.
Figure 3.1.12
Custom Board
CHAPTER FOUR
SOFTWARE USED AND SETUP
• Arduino IDE is the software that is used to program all the components in the
project.
4.1 What is Arduino IDE
The Arduino Integrated Development Environment (IDE) is a cross-platform application
(for Windows, macOS, Linux) that supports programs written in C and C++. It is used to
write and upload programs to Arduino-compatible boards and also, with the help of
third-party cores, to other vendor development boards. The Arduino IDE can be used for
advanced machine learning projects, including IoT projects and projects based on artificial
intelligence; even embedded OpenCV is currently supported in the latest version of the
Arduino IDE.
4.2 How to Setup and Configure Arduino IDE
1. The Arduino IDE can be easily downloaded from the official Arduino website, from
which we can download the software suitable for our operating system. The figure
below shows the official website of Arduino. The Arduino IDE is available not only on
the Windows platform but also on macOS and Linux.
Figure 4.6
Step 1 Arduino IDE Official site
2. When the download finishes, proceed with the installation, and please allow the driver
installation process when you get a warning from the operating system.
Figure 4.7
Step 2 Installing Components for
Arduino
3. Choose the components to install
Figure 4.8
Step 3 Saving Directory
4. Choose the installation directory (we suggest keeping the default one). The process will
extract and install all the files required to properly execute the Arduino Software (IDE).
Figure 4.9
Step 4 Extracting C++ exe
Finished.
CHAPTER FIVE
METHODOLOGY
Figure 5.5
Step 2. Use the bottom chassis (the one with
only one hole). Remove the protective
cover of the six fasteners as well.
It should look like this:
Figure 5.6
Step 3. For each motor, we use two long
screws to secure the motor on the chassis.
Insert two fasteners into the spots
indicated in the left image; the motor is
sandwiched in between.
Figure 5.7
Step 4. Then use two nuts to secure
the screws. It should look like this:
Figure 5.8
Step 5. Mount the spacer where
indicated in the left image. It should
look like this:
Figure 5.9
Step 6. The last step is mounting the
wheels.
The L298N is a member of a family of ICs that all carry the designation “L298”. The
difference between the family members is in the amount of current they can handle. The
L298N can handle up to 3 amperes at 35 volts DC, which is suitable for most hobby motors.
The L298N actually contains two complete H-bridge circuits, so it is capable of driving a
pair of DC motors. This makes it ideal for robotic projects, as most robots have either two or
four powered wheels. The L298N can also be used to drive a single stepper motor; however,
we won’t cover that configuration in the upcoming sessions.
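Each of the two H-bridge channels is steered by a pair of logic inputs (IN1/IN2 on the module). The sketch below is a minimal illustration of that direction truth table, with assumed names, not the project's actual code.

```python
def motor_direction(in1, in2):
    """Map one channel's H-bridge input levels to the motor's behaviour."""
    if in1 and not in2:
        return "forward"
    if in2 and not in1:
        return "reverse"
    return "stop"  # both HIGH or both LOW: the motor brakes/stops
```

On the Arduino, the same effect is obtained by writing HIGH/LOW to the two IN pins of a channel with digitalWrite().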
• This figure shows the pins and other parts of the L298N H-bridge
Figure 5.10
Pins and Configurations
• This circuit diagram shows how to connect 4 gear motors to the L298N H-bridge
at 12 V
Figure 5.11
Module and 4 motor connection
• Make sure that the light on the H-bridge is glowing. It will glow when we connect
the battery to the module; if the red light is glowing, the circuit is correct.
Figure 5.12
Red Light Showing Working
5.3 Wire management of Gear Motor
Wire management is essential when it comes to a circuit and plays an important role here.
It is mandatory to properly arrange the motor wires and make sure that the wires will not
move or become distorted between the motors. Make sure that the end points of the wires
are inserted properly into the module, and tighten the screws of the motor module to ensure
that the wires will not disconnect. Also fasten the wires down somewhere to avoid confusion
between them.
Figure 5.13
Arduino Connection to module
Figure 5.14
Arduino Connection & module to
battery
3. Make sure that the Arduino UNO is turned on
Figure 5.15
digitalWrite(motor2pin1, HIGH);
digitalWrite(motor2pin2, LOW);
Speed control
5. We can change the speed with the EN pins using PWM. ENA controls the speed of the
left motor and ENB controls the speed of the right motor. Here I plugged them into pins
9 and 10 on the Arduino. This is optional; the motors will still run if you don't do
this.
Figure 5.16
Picture shows Speed Control
Programme
6. To change the speed in the code, use the analogWrite() function on the
ENA and ENB pins. Here is an example (you can download the full code in the "Code"
section at the bottom of the page):
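Since analogWrite() takes an 8-bit duty value (0 to 255), a requested speed percentage has to be scaled into that range. The helper below is a hypothetical illustration of that mapping, not code from the project.

```python
def speed_to_duty(percent):
    """Scale a 0-100 speed percentage to an 8-bit PWM duty value (0-255)."""
    percent = max(0, min(100, percent))  # clamp out-of-range requests
    return round(percent * 255 / 100)
```

The resulting value is what would be passed to analogWrite() on the ENA or ENB pin.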
The ultrasonic sensor is designed to send out a sound wave signal called the Trigger and receive
the bounced-back sound wave at the Echo port. The sound wave pulses the Trigger on
and off so that the sound wave returning from the contacted object can pass between
the pulses. If the Trigger were constantly on, the returning sound wave would be distorted.
The Trigger sound wave has a conical shape and can be distorted by ambient noise and by
materials that absorb sound (e.g. cardboard, a tennis ball, etc.)
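The timing-to-distance arithmetic behind the sensor is small enough to sketch. The helper below (a hypothetical name) mirrors the (duration / 2) / 29.1 formula used in the Arduino sketch later in this chapter.

```python
def echo_to_cm(duration_us):
    """Convert an ultrasonic echo pulse width (microseconds) to centimetres.

    The pulse covers the out-and-back trip, so it is halved, then divided
    by ~29.1 us/cm, the time sound takes to travel one centimetre in air.
    """
    return (duration_us / 2) / 29.1
```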
Figure 5.17
Arduino Connection to Ultra-Sonic
sensor
• Open the Arduino IDE and type the program for obstacle detection
Figure 5.18
Picture shows Obstacle detection
Programme
Figure 5.19
Picture shows Programme
• The program can be executed to check whether the ultrasonic sensor is working or not
• Further modifications can be made after testing; the values are adjusted
according to our requirements
This is an integration project between hardware and software tools. The image
processing C++ code samples are provided with the OpenCV library, and all I did was
modify the sample code for this project. I removed some of the unnecessary code
and added serial communications to it so it can send X,Y values to the Arduino.
TOOLS
Software Required
Code Required
Hardware Required
• Installation
1. Download and install the OpenCV-2.3.1-win-superpack.exe if you don't wish to deal with
generating the support files yourself. Everything you need from OpenCV to build this project
has already been generated in this download.
http://sourceforge.net/projects/opencvlibrary/files/opencv-win/2.3.1/
2. Download and install Microsoft Visual C++ 2010 Express
http://www.microsoft.com/visualstudio/en-us/products/2010-editions/visual-cpp-express
The OpenCV installation documentation explains how to make Visual C++ aware of the
OpenCV support files (include, bin, etc.). This is not a one-click job. Careful
attention must be given to how Visual C++ must be configured to recognize the OpenCV
files.
The OpenCV team tested version 2.3.1 and Visual C++ 2010 on Windows 7 SP1. If you are
using a different configuration,
Figure 5.20
Picture shows Camera module connection to Arduino
• Open the unzipped file of Camera_OV0706_TEST and program the code into the UNO.
The detailed steps are demonstrated in the pictures.
• Click Tools / Serial Port and then choose the corresponding COM number.
• Then click the programming button, shown below in the red rectangle, and program the
code into the UNO board until "Done uploading" appears.
• Finally, open the serial port monitor, shown below in the red rectangle.
• When the serial port displays data like that demonstrated below, you can press the
digital keys to take a photo.
Figure 5.21
Picture shows camera Test
Programme
Figure 5.22
Picture shows Connection to
Infrared Sensor
Code for obstacle detection using the ultrasonic sensor and servo motors
#include <Servo.h>

#define trigPin 12
#define echoPin 13

int n;                      // steering value (defaults to 0)
int duration, distance;
String readString;
Servo myservo1;             // create servo objects to control the servos
Servo myservo2;

void setup() {
  Serial.begin(9600);
  myservo1.attach(8);       // assigns each servo to a pin
  myservo2.attach(9);
  pinMode(trigPin, OUTPUT);
  pinMode(echoPin, INPUT);
  pinMode(3, OUTPUT);
}

void loop() {
  digitalWrite(trigPin, LOW);
  delayMicroseconds(2);
  digitalWrite(trigPin, HIGH);
  delayMicroseconds(10);    // the sensor expects a ~10 microsecond trigger pulse
  digitalWrite(trigPin, LOW);
  duration = pulseIn(echoPin, HIGH);
  distance = (duration / 2) / 29.1;  // echo time to centimetres
  if (distance < 40) {      // the distance at which the rover has to stop
    digitalWrite(3, HIGH);
    myservo1.write(n);      // controls the direction of the motors
    myservo2.write(180 - n);
    delay(1000);            // how long the wheels spin
    myservo1.write(n);
    myservo2.write(90 - n);
    delay(500);
  }
  else {                    // what the rover does if it doesn't sense anything
    digitalWrite(3, LOW);
    myservo1.write(180 - n);
    myservo2.write(n);
  }
}
Done
Figure 5.23
Picture shows Connection of
Colour Sensor
• Program to Connect
Colour Sensor
Figure 5.24
Picture shows Color Sensing
Programme
Note here that the three values differ due to the different sensitivity of each photodiode type, as
seen from the photodiode spectral responsivity diagram in the sensor's datasheet.
Nevertheless, let's see how the values react when we bring different colours in front
of the sensor. For example, if we bring a red colour, the initial value will drop, in my
case from around 70 to around 25
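A simple way to turn the three filter readings into a colour decision, consistent with the behaviour described above (the matching filter's reading drops), is to pick the filter with the lowest value. This is an illustrative sketch with hypothetical names, not the project's code.

```python
def classify_colour(red, green, blue):
    """Return the dominant colour: the filter with the lowest reading wins."""
    readings = {"red": red, "green": green, "blue": blue}
    return min(readings, key=readings.get)
```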
Figure 5.25
5.9 Combined Circuit Diagram of the Project
Figure 5.26
Driving Car
Figure 5.27
Block Diagram of Self Driving Car
5.12 Making a Prototype Signal Light for Light detection
For testing purposes it is necessary to make a prototype signal light, just to see how the
vehicle responds to the signal light detection system, and how well it responds to the code
that we have executed
Figures Showing how to make DIY Signal Light
Figure 5.28
Step 2. Take a piece of Mount board to fit all the LEDs
Figure 5.29
Step 3. Make 3 holes over the board to insert each LEDs
Figure 5.30
Step 4. After making holes insert LEDs to it as shown in this picture
Figure 5.31
Step 5. Solder the -ve part of each LED together like this
Figure 5.32
Step 6. Connect a black wire by soldering it to the -ve terminal, and 3 wires to the 3 +ve sides
of the LEDs as shown
Figure 5.33
Step 7. Connect a 120k resistor to one of its terminals
figure 5.34
Step 9. Mount the board to a stick or a strong piece of metal to place the light on the ground
Figure 5.36
Step 10. Connect to the battery and interchange the wires to light the red, green or yellow LED
Figure 5.37
5.13 Making a DIY road to test traffic detection
We used white paper for the road colour, and for lane-detection purposes we used black
tape to make a lane; the road looks like this. More complicated lane detection is also possible
with this vehicle
CHAPTER SIX
6.3 Lane Follower / Lane Detection
The line follower robot senses the black line using sensors and then sends the signal to the
Arduino. The Arduino then drives the motors according to the sensors' output. Here in this
project we are using two IR sensor modules, namely the left sensor and the right sensor.
When both the left and right sensors sense white, the robot moves forward. A line follower
consists of an infrared light sensor and an infrared LED. It works by illuminating a surface
with infrared light; the sensor then picks up the reflected infrared radiation and, based on its
intensity, determines the reflectivity of the surface in question
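The two-sensor decision logic described above can be sketched as a small function. This is an assumed convention for illustration (each sensor reports True when it sees white), not the project's actual code.

```python
def follow_line(left_white, right_white):
    """Choose a drive action from the two IR sensor readings."""
    if left_white and right_white:
        return "forward"        # line is between the sensors
    if not left_white and right_white:
        return "turn_left"      # line drifted under the left sensor
    if left_white and not right_white:
        return "turn_right"     # line drifted under the right sensor
    return "stop"               # both sensors on black, e.g. a stop bar
```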
CHAPTER SEVEN
3. Stress-free parking
Autonomous vehicles drop you off at your destination and head directly to a detected vacant
parking spot. This eliminates wasting time and fuel looking for one.
4. Time-saving vehicle
As the system takes over control, the driver has spare time to continue working or to catch
up with loved ones, without fear about road safety.
5. Accessibility to transportation
Senior citizens and disabled people often have difficulty driving. Autonomous vehicles can
provide them with safe and accessible transportation.
DISADVANTAGES
1. Expensive
High-technology vehicles and equipment are expensive. Manufacturers spend a large amount of
money on research and development, as well as on choosing the finest and most functional
materials needed, such as the software, modified vehicle parts, and sensors. Thus, the cost of
owning an autonomous vehicle is initially higher. However, this may come down after 10 years,
giving average earners a way to have one.
3. Prone to Hacking
Autonomous vehicles could be the next major target of hackers, as these vehicles continuously
track and monitor details of the owner, which could lead to the collection of personal data.
5. Non-functional sensors
Sensor failures often happen during drastic weather conditions; the sensors may not work during
a blizzard or heavy snowfall.
CHAPTER EIGHT
An automated guided vehicle or automatic guided vehicle (AGV) is a portable robot that
follows marked lines or wires on the floor, or uses radio waves, vision cameras,
magnets, or lasers for navigation. They are most often used in industrial applications to
transport heavy materials around a large industrial building, such as a factory or warehouse.
Applications of the automatic guided vehicle broadened during the late 20th century. AGVs
can tow objects behind them in trailers to which they can autonomously attach. The
trailers can be used to move raw materials or finished products. The AGV can also store
objects on a bed. The objects can be placed on a set of motorized rollers (conveyor) and then
pushed off by reversing them. AGVs are employed in nearly every industry, including pulp,
paper, metals, newspaper, and general manufacturing. They are also used in hospitals to
transport materials such as food, linen, and medicine.
An AGV can also be called a laser guided vehicle (LGV). In Germany the technology is also
called Fahrerloses Transportsystem (FTS) and in Sweden förarlösa truckar. Lower cost
versions of AGVs are often called Automated Guided Carts (AGCs) and are usually guided
by magnetic tape. AGCs are available in a variety of models and can be used to move
products on an assembly line, transport goods throughout a plant or warehouse, and deliver
loads.
The first AGV was brought to market in the 1950s, by Barrett Electronics of Northbrook,
Illinois, and at the time it was simply a tow truck that followed a wire in the floor instead of a
rail. Out of this technology came a new type of AGV, which follows invisible UV markers on
the floor instead of being towed by a chain. The first such system was deployed at the Willis
Tower (formerly Sears Tower) in Chicago, Illinois to deliver mail throughout its offices.
Packmobile with trailer AGV
Over the years the technology has become more sophisticated and today automated vehicles
are mainly Laser navigated e.g. LGV (Laser Guided Vehicle). In an automated process,
LGVs are programmed to communicate with other robots to ensure product is moved
smoothly through the warehouse, whether it is being stored for future use or sent directly to
shipping areas. Today, the AGV plays an important role in the design of new factories and
warehouses, safely moving goods to their rightful destination.
CHAPTER NINE
Figure 7.3 Self-driving car made by Google
Figure 7.4 Self-driving car made by Apple (concept)
Discussions
Within the contemporary smart city debate, AVs represent a way to create an ideal city form
and developments in the autonomous driving technology have the potential to bring smart
mobility to our rapidly urbanizing world; but for others AV is a branding hoax (Yigitcanlar &
Lee, 2014; Yigitcanlar & Kamruzzaman, 2018a). Despite a large body of recent literature on
AV’s, only a limited number of studies have outlined the disruptive effects that AV might bring
on city planning and society in general. This paper, through a systematic review of the
literature, aimed to determine the current state of research literature on AV technology, the
future direction that this technology is leading to, how the changes are likely to affect our
day-to-day travel behavior and long-term changes in the structure of our cities, and what would
be the likely policy tools for a
smooth transitioning of the technology. As the literature suggests, AVs’ major disruptions in
our cities will be in urban transport, land use, employment, parking, vehicle ownership,
infrastructure design, capital investment decisions, sustainability, mobility, and traffic safety.
It is clear from this study that preparing our cities for AVs through progressive planning is
critical to achieving the benefits and to address the resulting disruption. On the eve of rising
AV demand, local and state governments should be equipped with better policy and planning
tools to accommodate AV technology and its impacts. In parallel, timely interventions are
needed at the international, national/federal, and state levels to regulate, standardize, and
certify this technology and to approve appropriate legislative measures, ensuring that testing,
deployment, privacy, security, and liability issues are addressed.
Conclusion
Autonomous vehicles have now moved from science fiction to reality. The basic technologies
behind self-driving vehicles are deep learning and computer vision. Self-driving vehicles can
potentially avoid the mistakes made by human drivers, thus saving human lives. In this project,
a prototype of a self-driving vehicle was developed that has the ability to manoeuvre on
predefined paths using convolutional neural networks and computer vision techniques.
References
[1] Jeremy Straub, Wafaa Amer and Christian Ames, "An Internetworked Self-Driving Vehicle
System-of-Systems", 12th System of Systems Engineering Conference, IEEE, 2017.
[2] https://en.wikipedia.org/wiki/Arduino
[4] https://github.com/anshupandey/Self-driving-vehicle-using-arduino
[5] https://www.arduino.cc/en/Reference/Board?from=Guide.Board
[6] https://opencv.org/
[7] https://create.arduino.cc/projecthub/ishaq-yang/auto-ultrasonic-vehicle-a85c6f
[8] https://create.arduino.cc/projecthub/aaravpatel0124/self-driving-arduino-vehicle-using-
l298n-motor-driver-f1cf05
[10] A. Broggi et al., "Extensive Tests of Autonomous Driving Technologies", IEEE
Transactions on Intelligent Transportation Systems, 14(3), 2013.
[11] J. M. A. Alvarez, A. M. Lopez and R. Baldrich, "Illuminant-Invariant Model-Based Road
Segmentation", IEEE Transactions on Intelligent Transportation Systems, 12, 2008, pp. 184–193.
[12] A. Bar Hillel, R. Lerner, D. Levi and G. Raz, "Recent Progress in Road and Lane
Detection: A Survey", Machine Vision and Applications, Feb. 2012, pp. 727–745.