
SELF DRIVING VEHICLE BASED ON
ARTIFICIAL INTELLIGENCE USING MACHINE LEARNING
A PROJECT REPORT

SUBMITTED BY

ABDUL HAADI BA 720917114001

AKHIL KM 720917114014

ANUSH MOHAMED E 720917114021

MANU SKARIA 720917114058

In partial fulfillment for the award of the degree

Of

BACHELOR OF ENGINEERING

IN

MECHANICAL ENGINEERING

JCT COLLEGE OF ENGINEERING AND TECHNOLOGY

COIMBATORE

ANNA UNIVERSITY: CHENNAI 600 025

APRIL 2021
ANNA UNIVERSITY: CHENNAI 600 025

BONAFIDE CERTIFICATE
Certified that this project report “SELF DRIVING VEHICLE BASED ON
ARTIFICIAL INTELLIGENCE USING MACHINE LEARNING” is the
bonafide work of “ANUSH MOHAMED E, ABDUL HAADI BA, AKHIL
KM, MANU SKARIA” who carried out the project work under my supervision.

Signature of the Head of the Department:
Dr. G. Mahesh, M.E., Ph.D.
Department of Mechanical Engineering,
JCT College of Engineering and Technology, Coimbatore

Signature of the Supervisor:
Mr. Ramakrishnan, M.E.
Department of Mechanical Engineering,
JCT College of Engineering and Technology, Coimbatore

Submitted for the Viva-Voce Examination held on ……………………… at


JCT College of Engineering and Technology, Coimbatore.

INTERNAL EXAMINER EXTERNAL EXAMINER

ACKNOWLEDGEMENT

We, the authors of this project, first of all thank the Almighty and our parents
for providing us the right proportion of strength and knowledge for the
successful completion of the project.

We would like to record our sincere thanks, indebtedness, and gratitude to
our renowned chairman Thiru. S.A. SUBRAMANIAN for his noteworthy
efforts to enhance our professional dexterity and co-curricular excellence.

We gratefully acknowledge our eminent and encouraging Principal Dr.
V.J. ARUL KARTHICK, M.E., Ph.D., JCT College of Engineering and
Technology, Coimbatore, for providing all facilities for carrying out this project
very effectively and efficiently.

We express our sincere thanks to our Head of the Department Dr. G.
MAHESH, M.E., Ph.D., of JCT College of Engineering and Technology,
Coimbatore, for his constant support to complete the project work.

We take this opportunity to express our sincere thanks to our guide Mr.
RAMAKRISHNAN, M.E., Department of Mechanical Engineering, JCT College of
Engineering and Technology, Coimbatore, for his endless support and
encouragement during this project. We extend our sincere thanks to all our
teaching and non-teaching staff members for helping us.

JCT COLLEGE OF ENGINEERING AND TECHNOLOGY
PICHANUR, COIMBATORE – 64110
DEPARTMENT OF MECHANICAL ENGINEERING

Program Educational Objectives (PEOs)


PEO1: Graduates shall become outstanding engineers in industries and other
Institutions for higher studies.
PEO2: Graduates shall have the caliber to contribute to national goals through design
and manufacturing.
PEO3: Graduates shall serve the society by being entrepreneurs or consultants to
overcome the challenges they face.

Program Outcomes (POs)

PO1 Engineering knowledge: Apply the knowledge of mathematics, science, engineering fundamentals, and an engineering specialization to the solution of complex engineering problems.

PO2 Problem analysis: Identify, formulate, review research literature, and analyze complex engineering problems reaching substantiated conclusions using first principles of mathematics, natural sciences, and engineering sciences.

PO3 Design/development of solutions: Design solutions for complex engineering problems and design system components or processes that meet the specified needs with appropriate consideration for public health and safety, and cultural, societal, and environmental considerations.

PO4 Conduct investigations of complex problems: Use research-based knowledge and research methods including design of experiments, analysis and interpretation of data, and synthesis of the information to provide valid conclusions.

PO5 Modern tool usage: Create, select, and apply appropriate techniques, resources, and modern engineering and IT tools including prediction and modeling to complex engineering activities with an understanding of the limitations.

PO6 The engineer and society: Apply reasoning informed by the contextual knowledge to assess societal, health, safety, legal and cultural issues and the consequent responsibilities relevant to the professional engineering practice.

PO7 Environment and sustainability: Understand the impact of the professional engineering solutions in societal and environmental contexts, and demonstrate the knowledge of, and need for, sustainable development.

PO8 Ethics: Apply ethical principles and commit to professional ethics and responsibilities and norms of the engineering practice.

PO9 Individual and team work: Function effectively as an individual, and as a member or leader in diverse teams, and in multidisciplinary settings.

PO10 Communication: Communicate effectively on complex engineering activities with the engineering community and with society at large, such as being able to comprehend and write effective reports and design documentation, make effective presentations, and give and receive clear instructions.

PO11 Project management and finance: Demonstrate knowledge and understanding of the engineering and management principles and apply these to one's own work, as a member and leader in a team, to manage projects and in multidisciplinary environments.

PO12 Life-long learning: Recognize the need for, and have the preparation and ability to engage in, independent and life-long learning in the broadest context of technological change.

Program Specific Outcomes (PSOs)

PSO1 Capable of successfully performing in national level competitive examinations for higher studies and employment.

PSO2 An ability to apply their knowledge in the domain of engineering mechanics, fluid and thermal engineering, and advanced technologies in solving engineering problems for the benefit of society.

ABSTRACT
Advancement in self-driving technology has created opportunities for smart
urban mobility. Self-driving vehicles are now a popular topic with the rise of the
smart city agenda. However, legislators, urban administrators, policymakers, and
planners are unprepared to deal with the possible disruption of autonomous
vehicles, which could potentially replace conventional transport. There is a lack
of knowledge on how the new capabilities will disrupt, and which policy strategies
are needed to address such disruption. This report aims to determine where we
are, where we are headed, what the likely impacts of a wider uptake could be, and
what needs to be done to generate the desired smart urban mobility outcomes.
The objective of this project was to create a vehicle able to work with artificial
intelligence.

This report presents a miniature version of a self-driving car using IoT, with a
Raspberry Pi and an Arduino UNO working as the main processing units. The
8 MP high-resolution Pi camera provides the necessary information; the Raspberry
Pi analyses the data (samples) and is trained with a neural network and machine
learning algorithms, which results in the detection of road lanes and traffic lights,
and the car takes turns accordingly. In addition to these features, the car will
overtake with proper LED indications if it comes across an obstacle.

The system has sensors to detect obstacles, as well as a camera, so that it can
react according to their position. Several additional sensors and C++ programs
were used to measure the distance to an obstacle, and the CPU acts on the
information they provide; a light sensor (LDR) helps decide whether the camera
and the computer vision algorithm should be used. The system uses both the
camera and the sensors for obstacle detection, so that if one of the components
fails to work it is compensated by the other, ensuring safety both when there is
light and when there is barely any.

TABLE OF CONTENT

CHAPTER NO TITLE PAGE NO

ABSTRACT 6
LIST OF FIGURES 9
LIST OF ABBREVIATIONS 12

1. INTRODUCTION 1
1.1 Overview 1
1.2 Project Overview 2
1.3 Objective 3
1.4 Working 3
1.5 Thesis Structure 4
2. LITERATURE REVIEW 6
3. HARDWARE/COMPONENTS USED 9
3.1 List of Components 9
3.1.1 Uses of Ultrasonic Sensor 9
3.1.2 Uses of Motor module 9
3.1.3 Uses of Camera module 10
3.1.4 Uses of Color Detection sensor 10
3.1.5 Uses of Infrared Sensor 10
3.1.6 Uses of Arduino Board 10
3.1.7 What is a Robot Chassis and Types 11
3.1.8 Robot Wheel Configurations 11
3.1.9 Jumper wires 11
3.1.10 18650 Battery pack 11
3.1.11 LED Signal Light (Prototype) 12
3.1.12 Uses of Custom Printed Circuit Board 12
3.1.13 DIY road to test Lane Detection 12
4. SOFTWARE USED AND SETUP 13
4.1 What is Arduino IDE 13
4.2 How to setup and configure Arduino IDE 13
5. METHODOLOGY 16

5.1 Assembling chassis kit 16

i. Soldering the Gear motors 16

ii. Mounting the Gear motors over the chassis 16

5.2 Connecting Gear motors to the L298 H-bridge motor 19

5.3 Wire management of Gear Motor 20

5.4 Connecting Arduino Uno to the L298 H-Bridge 20

5.5 Connecting Ultra-Sonic Sensor 22

5.6 Connecting Camera Module and configuring OpenCV and ML 25

5.7 Connecting Infrared Sensor 26

5.8 Connecting color Sensor 28

5.9 Combined Circuit Diagram of the Project 30

5.10 Block Diagram 30

5.12 Making a Prototype Signal Light for Light detection 31

5.13 Making a DIY road to test traffic detection 35

5.14 Photos of the Self-Driving vehicle 35

6. TESTING THE SELF-DRIVING VEHICLE 37

6.1 Obstacle Detection 37

6.2 Signal Light Detection 37

6.3 Lane Detection 38

7. ADVANTAGES AND DISADVANTAGES OF A SELF-DRIVING VEHICLE 39

8. USES OF SELF DRIVING VEHICLE IN MECHANICAL FIELD 41

9. FUTURE OF THE SELF DRIVING VEHICLE IN AUTOMOBILE FIELD 43

DISCUSSIONS AND CONCLUSIONS 44

REFERENCES 45

LIST OF FIGURES

FIGURE NO FIGURE DESCRIPTION PAGE NO

3.1.1 Ultrasonic sensor 9

3.1.2 L298 motor driver 9

3.1.3 Camera module (machine vision) 10

3.1.4 Colour sensor 10

3.1.5 Infrared sensor 10

3.1.6 Arduino Uno 10

3.1.7 Robot chassis kit 11

3.1.8 Robot wheels 11

3.1.9 Jumper wires 11

3.1.10 18650 battery 11

3.1.11 LED bulbs for signal light 12

3.1.12 Custom PCB board 12

3.1.13 DIY road for testing lane detection 12

4.1 Arduino IDE 13

4.2 Arduino IDE programming 13

4.3 OpenCV 13

4.4 Obstacle detection 13

4.5 Machine vision 13

4.6 Arduino IDE official site 14

4.7 Installing components for IDE 14

4.8 Saving directory 14

4.9 Extracting C++ exe 15

5.1 Soldering gear motors 16

5.2 Final look 16

5.3 Whole parts of a chassis kit 16

5.4 Unwrapping and separating components 17

5.5 Removing protection cover 17

5.6 Fitting motors to each hole 17

5.7 Fitting motors to each side 17

5.8 Mounting 2nd chassis 18

5.9 Finally setting wheels 18

5.10 Pins and configurations of L298 motor module 19

5.11 Module & 4 motor connection 19

5.12 Red light indication 19

5.13 Arduino connection to module 20

5.14 Arduino connection to battery 20

5.15 Showing Arduino connection successful 21

5.16 Picture shows speed control programme 21

5.17 Arduino connection to ultrasonic sensor 22

5.18 Obstacle detection programme 22

5.19 Remaining obstacle detection programme 23

5.20 Picture of camera module connection to Arduino Uno 24

5.21 Picture showing camera test programme 25

5.22 Picture showing connection to infrared sensor 26

5.23 Connection of colour sensor 28

5.24 Colour sensing coding 29

5.25 Showing colour detection 29

5.26 Complete diagram of self-driving car 30

5.27 Block diagram of self-driving car 30

5.28 Figures of signal light 31

5.38 Lane follower track made for the car 35

5.39 Complicated lane follower 35

6.1 Testing obstacle detection 37

6.2 Testing signal light detection 37

6.3 Test lane following/lane detection 38

7.1, 7.2 Self-driving cars made by Tesla 43

7.3 Self-driving car made by Google 43

7.4 Self-driving car made by Apple (concept car) 43

LIST OF ABBREVIATIONS
IDE Integrated Development Environment

LED Light Emitting Diode

ML Machine Learning

NN Neural Networks

CNN Convolutional Neural Networks

DNN Deep Neural Networks

AI Artificial Intelligence

AGV Automated Guided Vehicle

PWM Pulse Width Modulation

DARPA Defense Advanced Research Projects Agency

API Application Programming Interface

ROI Region of Interest

DC Direct Current

OCR Optical Character Recognition

GPU Graphics Processing Unit

CHAPTER ONE

INTRODUCTION
1.1 Overview

This chapter explains the project overview, project objectives and project methodology, along
with the basic idea behind self-driving vehicles, the history of self-driving vehicles, industries
producing these autonomous vehicles, pros and cons, and market trends. A self-driving vehicle is a
vehicle that guides itself without human interference. It is designed to sense its own environment
using deep learning and computer vision techniques. Autonomous vehicles have now gone uphill
from science fiction to reality. It seems like this technology emerged overnight, but in reality, the
path to self-driving vehicles has been a long and tedious one. The history of self-driving vehicles
went through several milestones. After the invention of human-driven motor vehicles, it did not take
too long for inventors to think about self-driven vehicles. In the year 1925, the inventor Francis
Houdina demonstrated a radio-controlled vehicle, which he drove through the streets of Manhattan
without anyone at the steering wheel. The radio-controlled vehicle could start its engine, shift gears,
and sound its horn. In 1969, John McCarthy, one of the founding fathers of artificial intelligence,
described something similar to the modern autonomous vehicle in an essay titled “Computer-
Controlled Cars.” McCarthy refers to an “automatic chauffeur,” capable of navigating a public road
via a “television camera input that uses the same visual input available to the human driver”.

In the early 90s, Carnegie Mellon researcher Dean Pomerleau wrote a PhD thesis describing how
neural networks could allow a self-driving vehicle to take in raw images from the road and output
steering controls in real time. Pomerleau was not the only researcher working on self-driving vehicles,
but his use of neural networks proved far more efficient than alternative attempts to manually divide
images into “road” and “non-road” categories. Waymo, also known as the “Google self-driving
vehicle” project, collects tons of data and feeds the data to deep learning algorithms for labelling and
processing. Waymo is nowadays being used as an online cab service, like Uber, in different states
of the US. The basic model for a self-driving vehicle is shown in figure 1.1 (Basic model for
self-driving vehicles). The basic concept behind a self-driving vehicle is to sense its environment
and take actions accordingly, as shown in fig 1.1. The vehicle collects data about its environment
through a single camera or multiple cameras along with different sensors. This data is then processed
through advanced computer vision and other algorithms to generate the actions required to manoeuvre
the vehicle according to the environment. For example, the “Tesla vehicle” uses Python, ML, SQL
and C++ to bring human driving capabilities to a computer,

and for “Waymo”, machine learning plays an important role. With the collaboration of Google AI
researchers, Waymo is integrated with AutoML, which enables the autonomous vehicle to optimize
models for different scenarios at great velocity. AutoML has a graphical user interface to train,
assess, improve, and arrange models based on the data. GM Cruise comes second in covering the
greatest number of miles autonomously. It is considered to be among the world’s most advanced
self-driving vehicles, aiming to fully connect people with the places, things, and experiences they
care about. Safety planning and functioning requirements in obedience with federal, state and local
regulations are kept in focus. The Cruise self-driving vehicle has a balanced array of sensors so that
the vehicle can map out complex city streets intelligently and with a 360-degree view of the world
around it. Each vehicle contains 10 cameras that take pictures at 10 frames per second. That is how
this vehicle is able to see more of its surrounding environment and, therefore, can respond more
quickly and safely. At Nissan, advancement in artificial intelligence is making autonomous vehicles
smarter, more responsive, and better at making their own decisions. Nissan is developing a vehicle
that will be capable of self-driving on a single-lane road in the near future. The next step will be
multi-lane roads, then self-driving in the city, and in the end fully autonomous driving in all
situations. Nissan self-driving vehicles are designed to get smarter with time. Seamless Autonomous
Mobility is a system developed from NASA technology to help autonomous vehicles deal with the
unexpected: the vehicle sends live data to a mobility manager who instantly teaches it what to do,
and it then shares what it learned with other vehicles in the system so they get smarter too. Once this
knowledge is absorbed into the system, vehicles can start using it to solve completely new challenges.
How safe should a self-driving vehicle be?
This question is becoming increasingly important as companies like Tesla, Waymo, Apple,
Uber and others test their self-driving vehicles on public roads. Around the world, 1.3 million people
are killed each year by vehicles. Many of these deaths are due to human error. If a self-driving
vehicle can help with this, it would be a great achievement. After all, self-driving vehicles do
not drink, they do not text and they do not fall asleep at the wheel, thus reducing deaths and
injuries.

1.2 Project overview


In this project, a working prototype of a self-driving vehicle is developed using computer vision
and deep learning techniques. The self-driving vehicle made in this project is able to navigate the
track by making predictions using the trained data set with the help of a CNN model. Before feeding
the data to the model for training, it is pre-processed using computer vision techniques such as
Canny edge detection, and if there is any problem with the camera it can be compensated by the
sensors. The pre-processing is done to identify the lane line on the track on which the vehicle has to
move. Initially, tracks are deployed on the ground in order to gather the data in the form of videos
using OpenCV with a webcam interface, and ultrasonic sensors are used in case the camera fails
to detect. From the videos, images are extracted for classification of the data into four different
classes, i.e., right, left, forward or stop. Before feeding the data to the neural network model, the
Hough transform is applied using OpenCV for finding the lane line. This data is trained using a
Convolutional Neural Network model, and a classifier is set which is able to predict in real time
whether to move the steering of the vehicle left, right, forward, or to stop accordingly. This project
thus combines computer vision with sensing technology. The training and inference are done
using a Raspberry Pi 3 B+ connected to a Ryzen 5 laptop. In our model, the classifier takes the input
images from the live feed and predicts which direction to choose, or to stop. After prediction, the
classifier generates a string, and through serial communication the string is sent to the Arduino.
Finally, the Arduino processes the wrappers embedded in its code according to the string received
from the classifier, and the vehicle moves according to the prediction. The trained model predicts
in which direction to move and can also respond to traffic signs, such as a stop sign. The picture of
the toy vehicle used in this project is shown in figure 1.4. This is a motorized vehicle in which
additional modifications such as steering angle control, motor driver control and high-resolution
camera interfacing have been done.
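The string-based serial link described above can be illustrated with a short Arduino sketch. This is only a minimal sketch of the idea, not the project's actual code: the single-character commands ('F', 'L', 'R', 'S') and the pin numbers are assumptions for illustration.

// Minimal sketch: receive direction commands from the classifier over
// serial and switch the L298N direction pins accordingly. The command
// characters and pin numbers are illustrative assumptions.
const int IN1 = 2, IN2 = 3, IN3 = 4, IN4 = 5;   // L298N direction inputs

void setup() {
  Serial.begin(9600);                    // same baud rate as the sender
  for (int pin = IN1; pin <= IN4; pin++) pinMode(pin, OUTPUT);
}

void drive(int l1, int l2, int r1, int r2) {
  digitalWrite(IN1, l1); digitalWrite(IN2, l2);
  digitalWrite(IN3, r1); digitalWrite(IN4, r2);
}

void loop() {
  if (Serial.available() > 0) {
    char cmd = Serial.read();            // one character per prediction
    switch (cmd) {
      case 'F': drive(HIGH, LOW, HIGH, LOW); break;   // forward
      case 'L': drive(LOW, LOW, HIGH, LOW);  break;   // left: right side only
      case 'R': drive(HIGH, LOW, LOW, LOW);  break;   // right: left side only
      case 'S': drive(LOW, LOW, LOW, LOW);   break;   // stop
    }
  }
}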

1.3 Project Objective

• Development of a working prototype of a self-driving vehicle, using a toy vehicle that navigates
mainly using computer vision and deep learning techniques
• Usage of a Convolutional Neural Network to identify a stop sign

1.4 Working
The block diagram of the project is shown in the figure. Initially, tracks are deployed on a surface
in order to gather data through video streaming from a webcam using Arduino and OpenCV, which
is an open-source computer vision library. After the collection of data, the video was segmented into
frames and classified into four classes, i.e., right, left, forward and stop. This classified data is
converted into the required format using computer vision algorithms, such that the data consists of
only bright Hough lines on a black image. To get the required format of the image, the image was
first converted into grayscale. To reduce noise and smooth the data images, a Gaussian blur is
applied. As noise in an image creates false edges and affects edge detection, after smoothing the
images the Canny method is applied to identify edges in the image. To identify the region of the
image for Hough lines, the region of interest is found, and to mask out anything else in the
image the bitwise AND operator is used. Initially the Hough lines are drawn on a zero-pixel image
using the bitwise AND operator inside the region of interest. Weights of the Hough line transform image
and the real image of the track are added. By adding the weights of these images, the Hough lines are
displayed. These displayed lines are then averaged according to their slopes and intercepts in
order to display the lines in equal ratio. After pre-processing, the data is fed to a CNN model for
training. The training and inference are done using a Ryzen 5 3550H laptop. The training data used in
our project is about 70 percent of the complete data. Supervised learning is used for training of the
data. This data is classified and labelled as right, left and forward. This data is trained using a CNN
sequential model. The CNN model used for the training of the data contains 15 hidden layers. These
layers include dense layers, convolutional-2D layers, maxpooling-2D layers, a flatten layer and fully
connected layers. The CNN is used for extracting the features from the images and learns through these
features by updating the biases and weights of the perceptrons. The trained model then takes the
input images from the live camera and predicts which direction to choose, or to stop. After prediction,
the trained model generates a string, and through serial communication the string is sent to the Arduino.
Finally, the Arduino processes the wrappers embedded in its code according to the string received
from the trained model and sends control signals to the H-bridge to drive the motors of the vehicle to
move or stop according to the prediction.
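The pre-processing chain described above maps directly onto OpenCV's C++ API. The following is a hedged sketch of that pipeline (grayscale, Gaussian blur, Canny, region-of-interest mask with a bitwise AND, probabilistic Hough transform, weighted overlay); the threshold values and the triangular region of interest are illustrative assumptions, not the project's tuned values.

#include <opencv2/opencv.hpp>
#include <vector>

// Sketch of the lane pre-processing pipeline described above.
// All numeric parameters and the ROI polygon are assumptions.
cv::Mat findLaneLines(const cv::Mat& frame) {
    cv::Mat gray, blurred, edges;
    cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);        // 1. grayscale
    cv::GaussianBlur(gray, blurred, cv::Size(5, 5), 0);   // 2. smooth to reduce noise
    cv::Canny(blurred, edges, 50, 150);                   // 3. detect edges

    // 4. Keep only a triangular region of interest via a bitwise AND mask.
    cv::Mat mask = cv::Mat::zeros(edges.size(), edges.type());
    std::vector<cv::Point> roi = {
        cv::Point(0, frame.rows),
        cv::Point(frame.cols / 2, frame.rows / 2),
        cv::Point(frame.cols, frame.rows)};
    cv::fillConvexPoly(mask, roi, cv::Scalar(255));
    cv::Mat masked;
    cv::bitwise_and(edges, mask, masked);

    // 5. Probabilistic Hough transform, drawn on a zero-pixel image.
    std::vector<cv::Vec4i> lines;
    cv::HoughLinesP(masked, lines, 1, CV_PI / 180, 50, 40, 5);
    cv::Mat lineImage = cv::Mat::zeros(frame.size(), frame.type());
    for (const cv::Vec4i& l : lines)
        cv::line(lineImage, cv::Point(l[0], l[1]), cv::Point(l[2], l[3]),
                 cv::Scalar(255, 0, 0), 5);

    // 6. Add weights of the Hough-line image and the real frame.
    cv::Mat overlay;
    cv::addWeighted(frame, 0.8, lineImage, 1.0, 0.0, overlay);
    return overlay;
}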

1.5 Thesis Structure


• Chapter 1 provides a general introduction to self-driving vehicles. It includes a project overview,
project objectives, the importance of the project, the methodology, and a brief history of self-driving
vehicles and the industries working on this technology.
• Chapter 2 provides the literature review. The review explains the techniques related to the project,
i.e., machine learning, computer vision, image processing, neural networks and convolutional
neural networks.
• Chapter 3 describes the components and hardware used in the project.
• Chapter 4 explains the software used and its setup.
• Chapter 5 presents the methodology: the hardware assembly and the techniques for collecting data,
applying the Hough transform, and training the neural network models.
• Chapter 6 presents the testing of the self-driving vehicle.

CHAPTER TWO

LITERATURE REVIEW

1. The paper “Working model of Self-driving vehicle using Convolutional Neural
Network, Raspberry Pi and Arduino” by Aditya Kumar Jain. The proposed model
takes an image with the help of a Pi cam attached to the Raspberry Pi on the vehicle.
The Raspberry Pi and the laptop are connected to the same network; the Raspberry Pi
sends the captured image to the Convolutional Neural Network. The image is
grayscaled before passing it to the neural network. Upon prediction, the model gives one
of four outputs, i.e., left, right, forward or stop. When the result is predicted, the
corresponding Arduino signal is triggered, which in turn helps the vehicle to move in
a particular direction with the help of its controller.

2. The paper “Self-driving vehicle ISEAUTO for research and education” presented by
Raivo Sell, Anton Rassõlkin, et al., describes the ISEAUTO project, the first
self-driving vehicle project in Estonia, implemented at Tallinn University of
Technology in cooperation with an Estonian automotive company. ISEAUTO is a
research and educational project targeted at the design and development of an
automated vehicle in cooperation with a private company and students.

3. The paper “Driverless Intelligent Vehicle for Future Public Transport Based on GPS”,
by Lakshmi, Dipeshwar Kumar Yadav and Vivek Kumar Verma, involves
equipping a GPS and GSM system on a 4-wheeled robot. The GPS system steers the
robot, which is capable of reaching from one point to another without any human
intervention. With the help of the GSM system they promise to report theft, in case
there is any: an SMS alert is sent to the vehicle owner reporting the issue, and as a
result the owner of the vehicle can switch the ignition off. The project also states
that the vehicle can only be turned on if the authorized person sends a predefined
location to the vehicle.

4. The paper “Traffic Light Detection and Recognition for Self-Driving Vehicles using
Deep Learning”, by Ruturaj Kulkarni, Shruti Dhavalikar and Sonal Bangar. There are
several object detection architectures available, like the Single Shot MultiBox Detector
(SSD), the Faster Region-based Convolutional Neural Network (Faster R-CNN) and the
Region-based Fully Convolutional Network (R-FCN), which incorporate feature
extractors like ResNet-101, Inception-V2, Inception-V3, MobileNet, etc. The selection
of architecture and feature extractor is a trade-off between the speed and accuracy that
the application needs. For traffic light detection, considering the application requirement
and available computational resources, the Faster R-CNN Inception-V2 model is used,
which serves the accuracy and speed trade-offs. The model is trained on the
above-mentioned dataset, where loss is reported at each step of training. The model is
trained on an NVIDIA GEFORCE 940M GPU using TensorFlow.

5. The paper “Virtual assistant and self-driving vehicles” presented by Giuseppe Lugano
introduces the virtual assistant: a specific software functionality originally conceived
within the "desktop" computing environment to support the user in the learning and
use of a specific software package (e.g., word processor, spreadsheet). The main
purpose of virtual assistants was to increase the productivity and efficiency of the
user with a specific product.

6. The paper presented by Dong, D., Li, X., & Sun, “A Vision-based Method for
Improving the Safety of Self-driving”, gives a detailed view of how they developed a
simulator which is able to detect traffic signs and lanes and perform road segmentation.

7. The paper presented by Straub, J., Amer, W., Ames, C., Dayananda, K. R.,
Marsala and Shipman, “An Internetworked Self-Driving Vehicle System-of-
Systems”, proposed an efficient way of establishing communications between two or
more vehicles in a particular system to keep the traffic less congested.

8. The paper “Real-time multiple vehicle detection and tracking from a moving vehicle”,
by Margrit Betke, Esin Haritaoglu and Larry S. Davis, includes modules for the
detection of other vehicles on the road. The Navlab project at Carnegie Mellon
University uses the Rapidly Adapting Lateral Position Handler (RALPH) to determine
the coordinates of the road ahead and the appropriate steering direction. RALPH
automatically steered a Navlab vehicle for 98% of a trip from Washington, DC to San
Diego, California, a distance of over 2800 miles. They added a module for vehicle
tracking, a module for detecting overtaking vehicles, and a trinocular stereo module
(three-view vision) for detecting distant obstacles to enhance and improve the
performance of the Navlab vehicle.

9. The paper “Self-Driving and Driver Relaxing Vehicle”, presented by Qudsia
Memon, Muzamil Ahmed, Shahzeb Ali, Azam Rafique Memon and Wajiha Shah,
designed two applications of an autonomous vehicle which can help the driver to
relax for a limited duration of time. It also presents a concept which modifies the
concept of the Google vehicle: the Google vehicle has to reach a static destination
automatically, whereas in this prototype the destination is dynamic. Here the
self-driving vehicle will follow another vehicle which is moving on a certain route.

10. The paper “The Issues and the Possible Solutions for Implementing Self-Driving
Vehicles in Bangladesh” presented by Mohammad Faisal bin Ahmed, Md. Saef Ullah
Miah, Md. Muneer Anjum, Shakira Akhter and Md. Sarkar. Some of the issues of
Bangladeshi roads are highlighted in a paper published in 2004. The Google
vehicle, among other things, can calculate the most efficient path, abide by local traffic
rules, park when necessary and change lane if required.

CHAPTER THREE

HARDWARE/COMPONENTS USED
3.1 List of Components
1. Ultrasonic Sensor
2. L298 Motor Driver
3. Camera Module (for machine vision)
4. Color Sensor
5. Infrared Sensor
6. Arduino UNO
7. Robot Chassis
8. Robot Wheels
9. Jumper Wires
10. 18650 Battery Pack
11. LED bulbs for signal light (prototype)
12. Custom printed circuit board
13. DIY road for testing lane detection

3.1.1 Ultrasonic sensor

An ultrasonic sensor is an electronic device that measures the distance to a target
object by emitting ultrasonic sound waves and converting the reflected sound into
an electrical signal. Ultrasonic waves have frequencies above the range of audible
sound (i.e., the sound that humans can hear). We are using this sensor to
compensate for the camera system if it fails to detect an object for some reason;
it is used for safety purposes, so that the vehicle runs without any problem.

Figure 3.1.1: Ultrasonic sensor

3.1.2 L298N Motor Driver IC

The L298N motor driver Integrated Circuit is a fifteen-lead, high-voltage,
high-current motor driver IC. It is a dual full-bridge driver that can manage two
motors at the same time with separate inputs. The minimum supply voltage is 5 V,
but the acceptable supply voltage is as high as 45 V, and the peak output current
per channel is 2 A.

Figure 3.1.2: L298 motor module
3.1.3 Camera Module

The camera module is a portable, lightweight camera that supports Arduino UNO
boards. It communicates with the Arduino UNO using the camera serial interface
protocol. It is normally used in image processing, machine learning or surveillance
projects. Here, the camera detects obstacles with its machine learning capabilities;
it can also detect signal lights, road signs, etc.

Figure 3.1.3: Camera module

3.1.4 Colour Sensor

Colour sensors are generally used for two specific applications: true colour
recognition and colour mark detection. Sensors used for true colour recognition
are required to "see" different colours or to distinguish between shades of a
specific colour. They can be used in either a sorting or a matching mode. In our
project we are using the colour sensor to compensate for the camera if it fails to
detect the light-emitting object; it is used for safety purposes.

Figure 3.1.4: Colour sensor

3.1.5 Infrared sensor

An infrared sensor is an electronic device that emits infrared radiation in order to
sense some aspects of the surroundings. An IR sensor can measure the heat of an
object as well as detect motion. A sensor that only measures infrared radiation,
rather than emitting it, is called a passive IR sensor.

Figure 3.1.5: Infrared sensor

3.1.6 Arduino UNO

The Arduino UNO is an open-source microcontroller board based on the 8-bit
ATmega328P microcontroller. It includes components like a crystal oscillator,
serial communication and voltage regulators to support the microcontroller. The
Arduino Uno has fourteen digital input/output pins that may be interfaced with
different expansion boards (shields) and other circuits, of which six can be
operated as Pulse Width Modulation outputs, six analog input pins, a Universal
Serial Bus connection, a power barrel jack, an In-Circuit Serial Programming
header and a reset switch.

Figure 3.1.6: Arduino Uno

3.1.7 Robot Chassis

The robot chassis comprises the body of a robot. Roll cages, bumpers and other
body accessories can also be found in this category. In this project we are using
a ready-made plastic chassis kit to mount all the components, including the board
and the power source.

Figure 3.1.7: Robot chassis
3.1.8 Robot Wheels

Wheeled robots navigate around the ground using motorized wheels to propel
themselves. This design is simpler than using treads or legs, and wheeled robots
are easier to design, build, and program for movement on flat, not-so-rugged
terrain.

Figure 3.1.8: Robot wheels
3.1.9 Jumper Wires

A jump wire (also known as a jumper, jumper wire, jumper cable, or DuPont
wire) is an electrical wire, or a group of them in a cable, with a connector or pin
at each end (or sometimes without them, simply "tinned"), normally used to
interconnect the components of a breadboard or other prototype or test circuit.

Figure 3.1.9: Jumper wire

3.1.10 18650 Battery pack (custom made)

We are using three 18650 batteries of 3.7 V each; with three batteries in series,
the pack delivers an output of 11.1 V. The 18650 is a kind of battery commonly
used in this type of project. Another advantage of this kind of battery is that it
can easily be recharged; this battery pack is good enough for our project to work.

Figure 3.1.10: 18650 battery

3.1.11 LED Signal Light

LED bulbs are used for making the DIY signal light for the signal light detection
purpose. It is better to use powerful LED bulbs so that they can be detected easily.

Figure 3.1.11: LED signal light
3.1.12 Custom PCB board

A custom board is not mandatory, but in case of power fluctuation this custom
PCB compensates for the problem. It is essential in order to avoid restarting the
Arduino UNO.

Figure 3.1.12: Custom board

3.1.13 DIY road made out of paper

A DIY road is made to test the lane detection of the self-driving vehicle. We are
using black paper, which looks similar to a road; the white/yellow lines of the
road can be made with white tape or paper.

Figure 3.1.13: DIY road

CHAPTER FOUR

SOFTWARE USED AND SETUP

• Arduino IDE is the software that is used to program all the components in the
project.
1. What is the Arduino IDE?
The Arduino Integrated Development Environment (IDE) is a cross-platform application
(for Windows, macOS, Linux) used to write and upload programs, written in C and C++,
to Arduino-compatible boards and also, with the help of third-party cores, to other
vendor development boards. The Arduino IDE can be used for advanced machine
learning projects, including IoT projects and projects based on artificial intelligence;
embedded OpenCV is also supported in the latest version of the Arduino IDE.

Figure 4.1: Arduino IDE
Figure 4.2: Arduino IDE programming
Figure 4.3: OpenCV
Figure 4.4: Obstacle detection
Figure 4.5: Machine vision

2. How to set up and configure the Arduino IDE

1. The Arduino IDE can be downloaded from the official Arduino website, from which
we can download the version that is suitable for our operating system. The figure below
shows the official website of Arduino. The Arduino IDE is available not only on the
Windows platform but also on macOS and Linux.

Figure 4.6: Step 1, Arduino IDE official site

2. When the download finishes, proceed with the installation, and please allow the driver
installation process when you get a warning from the operating system.

Figure 4.7: Step 2, Installing components for Arduino

3. Choose the components to install.

Figure 4.8: Step 3, Saving directory

4. Choose the installation directory (we suggest keeping the default one). The process will
extract and install all the required files to execute the Arduino Software (IDE) properly.

Figure 4.9: Step 4, Extracting C++ exe

Finished.

CHAPTER FIVE

METHODOLOGY

5.1 Assembling the Chassis kit

• Soldering the Gear Motors

The DC motor that comes with the Arduino Experimenter's Kit has short and delicate
leads. We need to replace the leads with more robust wiring and soldered connections.

Procedure
1. Cut a length of wire
2. Strip and tin the ends of the wire
3. Make note of polarity and remove the old leads
4. Insert the tinned wire through the tabs and bend into position
5. Secure the leads by soldering them to the motor tabs

Figure 5.1: Soldering gear motors
Figure 5.2: Gear motors

• Mounting the Gear Motors over the chassis

Figure 5.3: Whole parts of a chassis kit

Step 1. Unwrap the package and remove the protection cover of the acrylic chassis.
It should look like this (Figure 5.4).

Step 2. Use the bottom chassis (the one with only one hole) and remove the
protection cover of the six fasteners as well (Figure 5.5).

Step 3. For each motor, we use two long screws to secure the motor on the chassis.
Insert two fasteners into the spots indicated in the image; the motor is sandwiched
in between (Figure 5.6).

Step 4. Then use two nuts to secure the screws (Figure 5.7).

Step 5. Mount the spacers where indicated in the image (Figure 5.8).

Step 6. The last step is mounting the wheels (Figure 5.9).

5.2 Connecting the Gear Motors to the L298 H-Bridge

While you can use discrete transistors to build an H-bridge, there are a number of advantages
in using an integrated circuit. A number of H-bridge motor driver ICs are available, and all
of them work in pretty much the same fashion. One of the most popular is the L298N.

The L298N is a member of a family of ICs that all have the designation "L298". The
difference between the family members is in the amount of current they can handle. The
L298N can handle up to 3 amperes at 35 volts DC, which is suitable for most hobby motors.

The L298N actually contains two complete H-bridge circuits, so it is capable of driving a
pair of DC motors. This makes it ideal for robotic projects, as most robots have either two or
four powered wheels. The L298N can also be used to drive a single stepper motor; however,
we won't cover that configuration here.

• This figure shows the pins and other parts of the L298 H-bridge.

Figure 5.10: Pins and configurations of the L298 motor module

• This circuit diagram shows how to connect 4 gear motors and a 12 V supply to the
L298N H-bridge.

Figure 5.11: Module and 4-motor connection

• Make sure that the light on the H-bridge is glowing. It glows when we connect the
battery to the module; if the red light is glowing, the circuit is correct.

Figure 5.12: Red light showing the module is working
5.3 Wire management of the Gear Motors
Wire management is essential when it comes to a circuit, and it plays an important
role here. It is mandatory to arrange the motor wires properly and make sure that
the wires will not move or get pinched between the motors. Make sure that the end
points of the wires are inserted properly into the module, and tighten the screws of
the motor module to ensure that the wires will not disconnect. Also, fasten the wires
somewhere to avoid confusion between them.

5.4 Connecting the Arduino Uno to the L298 H-Bridge

1. The row of pins on the bottom right of the L298N controls the speed and direction
of the motors. IN1 and IN2 control the direction of the motor connected to OUT1 and
OUT2. IN3 and IN4 control the direction of the motor connected to OUT3 and OUT4.
Here I plugged them into pins 2, 3, 4, and 5 on the Arduino.

Figure 5.13: Arduino connection to module

2. You can power the L298N with up to 12 V by plugging your power source into the
pin on the L298N labelled "12V". The pin labelled "5V" is a 5 V output that you can
use to power your Arduino, depending on your use case.

Figure 5.14: Arduino and module connection to battery

3. Make sure that the Arduino UNO is turned on.

Figure 5.15: Showing Arduino connection successful

4. Open the Arduino IDE software that was installed earlier and write the code. Setting
IN1 to HIGH and IN2 to LOW will cause the left motor to turn in one direction.
Setting IN1 to LOW and IN2 to HIGH will cause the left motor to spin in the other
direction. The same applies to IN3 and IN4. Here is a short example:

digitalWrite(motor1pin1, HIGH);
digitalWrite(motor1pin2, LOW);

digitalWrite(motor2pin1, HIGH);
digitalWrite(motor2pin2, LOW);

Speed control
5. We can change the speed with the EN pins using PWM. ENA controls the speed of
the left motor and ENB controls the speed of the right motor. Here I plugged them
into pins 9 and 10 on the Arduino. This is optional; the motors will still run if you
don't do this.

Figure 5.16: Picture shows the speed control programme

6. To change the speed in the code, use the analogWrite() function on the ENA and
ENB pins. Here is an example:

analogWrite(ENA_pin, 50);
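Putting steps 4 to 6 together, a minimal consolidated sketch might look as follows, assuming the wiring described above (IN1 to IN4 on pins 2 to 5, ENA and ENB on pins 9 and 10). It is a sketch of the idea rather than the full project code: it runs both motors forward at roughly half speed, then reverses them.

// Consolidated example of steps 4-6: direction via IN1-IN4, speed via
// PWM on ENA/ENB. Pin numbers follow the wiring described above.
int motor1pin1 = 2, motor1pin2 = 3;    // left motor direction (IN1, IN2)
int motor2pin1 = 4, motor2pin2 = 5;    // right motor direction (IN3, IN4)
int ENA = 9, ENB = 10;                 // PWM speed pins

void setup() {
  pinMode(motor1pin1, OUTPUT); pinMode(motor1pin2, OUTPUT);
  pinMode(motor2pin1, OUTPUT); pinMode(motor2pin2, OUTPUT);
  pinMode(ENA, OUTPUT); pinMode(ENB, OUTPUT);
}

void loop() {
  analogWrite(ENA, 128);               // roughly 50% duty cycle
  analogWrite(ENB, 128);

  digitalWrite(motor1pin1, HIGH);      // both motors forward
  digitalWrite(motor1pin2, LOW);
  digitalWrite(motor2pin1, HIGH);
  digitalWrite(motor2pin2, LOW);
  delay(2000);

  digitalWrite(motor1pin1, LOW);       // both motors reverse
  digitalWrite(motor1pin2, HIGH);
  digitalWrite(motor2pin1, LOW);
  digitalWrite(motor2pin2, HIGH);
  delay(2000);
}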

5.5 Connecting the Ultrasonic Sensor

The ultrasonic sensor is designed to send out a sound wave signal called the trigger, and to
receive the bounced-back sound wave on the echo port. The sound wave pulses the trigger
on and off so that the sound wave returning from the contacted object is able to pass between
the pulses. If the trigger were constantly on, the returning sound wave would be distorted.
The trigger sound wave has a conical shape and can be distorted by ambient noise and by
materials that absorb sound (e.g., cardboard, a tennis ball, etc.).

Figure 5.17: Arduino connection to the ultrasonic sensor

• Open the Arduino IDE and type the program for obstacle detection.

Figure 5.18: Picture shows the obstacle detection programme
Figure 5.19: Picture shows the remaining obstacle detection programme

• The program can be executed to check whether the ultrasonic sensor is working or not.
• Further modifications can be made after testing; the values are further modified
according to our requirements.
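Since the obstacle-detection programme in figures 5.18 and 5.19 appears only as screenshots, here is a minimal sketch of the same idea, assuming the trigger on pin 12 and the echo on pin 13 (as in the listing in section 5.7); the 40 cm threshold is likewise an assumption.

// Minimal HC-SR04 style distance read: pulse the trigger, time the echo,
// convert to centimetres. Pins and the stop threshold are assumptions.
const int trigPin = 12;
const int echoPin = 13;

void setup() {
  Serial.begin(9600);
  pinMode(trigPin, OUTPUT);
  pinMode(echoPin, INPUT);
}

void loop() {
  digitalWrite(trigPin, LOW);             // clean low level before the pulse
  delayMicroseconds(2);
  digitalWrite(trigPin, HIGH);            // 10 microsecond trigger pulse
  delayMicroseconds(10);
  digitalWrite(trigPin, LOW);

  long duration = pulseIn(echoPin, HIGH); // echo time in microseconds
  long distance = (duration / 2) / 29.1;  // sound travels about 29.1 us/cm

  if (distance < 40) {                    // assumed stop threshold
    Serial.println("Obstacle detected - stop");
  } else {
    Serial.print(distance);
    Serial.println(" cm clear ahead");
  }
  delay(100);
}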

5.6 Connecting the Camera Module and configuring OpenCV and ML

• Installing OpenCV and the OpenCV vision libraries

Basically, the webcam sends video frames to OpenCV running on a Windows PC. If
OpenCV detects a face, it will track it and calculate the X,Y coordinates of its center.
The coordinates are then passed on to the Arduino via a serial USB connection. The
Arduino controls the movement of the webcam with the help of two pan/tilt servos to
follow the detected face.

OpenCV (Open Source Computer Vision)

• The OpenCV library is an open-source library that includes several hundred real-time
computer vision algorithms. The OpenCV 2.x library is a C++ API.

This is an integration project between hardware and software tools. The image
processing C++ code samples are provided with the OpenCV library, and all I did was
modify the sample code for this project. I removed some of the unnecessary code
and added serial communications to it so it can send X,Y values to the Arduino.
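The detection side of that loop, leaving out the Win32 serial part, might look like the sketch below. The cascade file name is the stock OpenCV frontal-face model, and printing the coordinates in place of writing them to the serial port is an assumption for illustration.

#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

// Sketch of the detection loop: grab frames, detect a face, compute the
// X,Y of its centre. In the project those values go to the Arduino over
// serial; here they are simply printed.
int main() {
    cv::CascadeClassifier face;
    if (!face.load("haarcascade_frontalface_default.xml")) return 1;

    cv::VideoCapture cap(0);                // default webcam
    if (!cap.isOpened()) return 1;

    cv::Mat frame, gray;
    while (cap.read(frame)) {
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        std::vector<cv::Rect> faces;
        face.detectMultiScale(gray, faces, 1.1, 3);
        if (!faces.empty()) {
            int x = faces[0].x + faces[0].width / 2;   // centre X
            int y = faces[0].y + faces[0].height / 2;  // centre Y
            std::cout << x << "," << y << std::endl;   // would go to Arduino
        }
        cv::imshow("webcam", frame);
        if (cv::waitKey(30) == 27) break;   // Esc to quit
    }
    return 0;
}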

TOOLS

Software Required

Arduino IDE 1.0 for Windows
OpenCV 2.3.1 SuperPack for Windows
Microsoft Visual C++ 2010 Express SP1
Serial C++ Library for Win32 (by Thierry Schneider)

Code Required

- OpenCV C++ (attached): techbitarFaceDetection.cpp (based on OpenCV's example
facedetect.cpp)
- Arduino (attached): cam.ino (based on Ryan Owens' example
SerialServoControl.pde)

Hardware Required

PC, preferably running Windows 7 SP1 (the faster the CPU, the better)
Arduino Uno or compatible + power source
Standard servos x 2
Camera module
Breadboard / circuit
Jumper wires

• Installation

1. Download and install OpenCV-2.3.1-win-superpack.exe if you don't wish to deal with
generating the support files yourself. Everything you need from OpenCV to build this project
has already been generated in this download.

http://sourceforge.net/projects/opencvlibrary/files/opencv-win/2.3.1/

2. Download and install Microsoft Visual C++ 2010 Express:

http://www.microsoft.com/visualstudio/en-us/products/2010-editions/visual-cpp-express

The OpenCV installation documentation explains how to make Visual C++ aware of the
OpenCV support files (include, bin, etc.). This is not a one-click job: careful attention
must be given to how Visual C++ must be configured to recognize the OpenCV files.

The OpenCV team tested version 2.3.1 and Visual C++ 2010 on Windows 7 SP1. If you are
using a different configuration, you may need to adapt these steps.

• Connecting the Camera Module to the Arduino

• First download the two files Camera_OV0706_lib and Camera_OV0706_TEST from
the camera module code written by ElecFreaks, and then unzip them.
• Put the unzipped Camera_OV0706_lib into the Libraries folder of the Arduino IDE.
• Open the unzipped Camera_OV0706_TEST and program the code into the UNO. The
detailed steps are demonstrated in the pictures.
• Click Tools and then choose the Arduino UNO board.
• Click Tools / Serial Port and then choose the corresponding COM number.
• Then click the programming button (in the red rectangle) to program the code into
the UNO board until "done uploading" appears.

Figure 5.20: Picture shows camera module connection to the Arduino
• Finally, open the monitoring serial port (in the red rectangle).
• When the serial port displays the data as demonstrated, you can press the digital keys
to take a photo.
• If the photo was taken successfully, it will be indicated on the serial port.
• With that, the module testing is complete.

Figure 5.21: Picture shows the camera test programme

5.7 Connecting the Infrared Sensor

As a fundamental function of an intelligent vehicle, a lane-tracking system based on infrared
photoelectric sensors faces several problems, such as the sensors interfering with each other
and limited detection range. ... The vehicle control algorithm is based on the rotation angle
of the follow-up sensor system.

Figure 5.22: Picture shows connection to the infrared sensor
Example obstacle-avoidance code (note: this listing drives the servos from the ultrasonic
sensor readings; the infrared sensor connection itself is shown in figure 5.22)

#define trigPin 12
#define echoPin 13

int n;                          // servo offset; 0 by default
int duration, distance;

String readString;

#include <Servo.h>
Servo myservo1;                 // create servo objects to control the servos
Servo myservo2;

void setup() {
  Serial.begin(9600);
  myservo1.attach(8);           // assigns each servo to a pin
  myservo2.attach(9);
  pinMode(trigPin, OUTPUT);
  pinMode(echoPin, INPUT);
  pinMode(3, OUTPUT);           // indicator LED
}

void loop() {
  digitalWrite(trigPin, LOW);   // clean low level before the pulse
  delayMicroseconds(2);
  digitalWrite(trigPin, HIGH);  // trigger pulse
  delayMicroseconds(10);
  digitalWrite(trigPin, LOW);
  duration = pulseIn(echoPin, HIGH);
  distance = (duration / 2) / 29.1;   // convert echo time to centimetres

  if (distance < 40) {          // the distance at which the rover has to stop
    digitalWrite(3, HIGH);
    myservo1.write(n);          // controls the direction of the motors
    myservo2.write(180 - n);
    delay(1000);                // how long the wheels spin
    myservo1.write(n);
    myservo2.write(90 - n);
    delay(500);
  } else {                      // what the rover will do if it doesn't sense anything
    digitalWrite(3, LOW);
    myservo1.write(180 - n);
    myservo2.write(n);
  }
}

Done
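Reading the infrared sensor itself is simpler than the listing above. A minimal sketch, assuming a digital-output IR module on pin 7 (most such modules pull their output LOW when reflected infrared is detected):

// Minimal read of a digital-output IR module. The pin number and the
// LOW-on-detection convention are illustrative assumptions.
const int irPin = 7;

void setup() {
  Serial.begin(9600);
  pinMode(irPin, INPUT);
}

void loop() {
  if (digitalRead(irPin) == LOW) {      // reflection detected
    Serial.println("Surface/obstacle detected");
  } else {
    Serial.println("Nothing detected");
  }
  delay(200);
}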

5.8 Connecting the Colour Sensor

First, we need to define the pins to which the sensor is connected and define a variable for
reading the frequency. In the setup section we need to define the four control pins as outputs
and the sensor output as an Arduino input. Here we also need to set the frequency scaling;
for this example I will set it to 20%, and start the serial communication for displaying the
results in the Serial Monitor. In the loop section, we start with reading the red-filtered
photodiodes. For that purpose, we set the two control pins S2 and S3 to a low logic level.
Then, using the pulseIn() function, we read the output frequency and put it into the variable
"frequency". Using the Serial.print() function we print the result on the serial monitor. The
same procedure goes for the two other colours; we just need to adjust the control pins for
the appropriate colour.

Figure 5.23: Picture shows connection of the colour sensor

• Program to connect the colour sensor

Figure 5.24: Picture shows the colour sensing programme
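Since the colour-sensing programme is shown only as a screenshot, a minimal sketch of what the text describes is given below; the pin assignments are assumptions for a TCS3200-type sensor.

// Reads the red-filtered photodiodes of a TCS3200-type colour sensor as
// described above: S0/S1 set 20% frequency scaling, S2/S3 LOW selects the
// red filter, pulseIn() measures the output. Pin numbers are assumptions.
const int S0 = 4, S1 = 5, S2 = 6, S3 = 7;
const int sensorOut = 8;

void setup() {
  pinMode(S0, OUTPUT); pinMode(S1, OUTPUT);
  pinMode(S2, OUTPUT); pinMode(S3, OUTPUT);
  pinMode(sensorOut, INPUT);
  digitalWrite(S0, HIGH);   // S0 HIGH + S1 LOW = 20% frequency scaling
  digitalWrite(S1, LOW);
  Serial.begin(9600);
}

void loop() {
  digitalWrite(S2, LOW);    // select the red filter
  digitalWrite(S3, LOW);
  int frequency = pulseIn(sensorOut, LOW);   // pulse width of the output
  Serial.print("R = ");
  Serial.println(frequency);                 // lower value = stronger red
  delay(100);
  // Repeat with S2/S3 = HIGH/HIGH for green and LOW/HIGH for blue.
}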
Note here that the three values differ due to the different sensitivity of each photodiode type,
as seen from the photodiode spectral responsivity diagram in the datasheet of the sensor.

Nevertheless, let's see how the values react when we bring different colours in front of the
sensor. For example, if we bring red, the initial value drops down, in my case from around
70 to around 25.

Figure 5.25: Shows colour detection
5.9 Combined Circuit Diagram of the Project

Figure 5.26: Complete connection diagram of the self-driving car

5.10 Block Diagram

Figure 5.27: Block diagram of the self-driving car
5.12 Making a Prototype Signal Light for Light Detection

For testing purposes it is necessary to make a prototype signal light, just to see how the
vehicle works with the signal light detection system and how well it responds to the code
that we have written.

Figures showing how to make the DIY signal light:

Step 1. Take one green, one yellow and one red LED (Figure 5.28).

Step 2. Take a piece of mount board to fit all the LEDs (Figure 5.29).

Step 3. Make 3 holes in the board to insert each LED (Figure 5.30).
Step 4. After making the holes, insert the LEDs as shown in the picture (Figure 5.31).

Step 5. Solder the -ve leg of each LED together (Figure 5.32).

Step 6. Connect a black wire by soldering it to the common -ve terminal, and 3 wires to the
3 +ve legs of the LEDs as shown (Figure 5.33).

Step 7. Connect a 120k resistor to one of the terminals (Figure 5.34).

Step 8. Connect a battery holder to the terminals to connect the battery.

Step 9. Mount the board on a stick or a strong piece of metal to place the light on the
ground (Figure 5.36).

Step 10. Connect the battery and interchange the wires to glow the red, green or yellow
light (Figure 5.37).

5.13 Making a DIY road to test traffic detection

We used white paper for the road colour, and for lane-detection purposes we used black
tape to make the lanes; the road looks as shown below. Complicated lane layouts can also
be handled by this vehicle.

Figure 5.38: Lane follower track made for the car
Figure 5.39: Complicated lane follower track

5.14 Photos of the Self-Driving Vehicle

CHAPTER SIX

TESTING THE SELF-DRIVING VEHICLE

6.1 Obstacle Detection

The obstacle avoidance robotic vehicle uses ultrasonic sensors for its
movements. An Arduino is used to achieve the desired operation. The motors are
connected through the motor driver IC to the Arduino. The Arduino steers the motors
left, right, back and forward based on the ultrasonic signals. There are many
applications that use ultrasonic sensors, such as intrusion alarm systems, automatic
door openers, etc. The ultrasonic sensor is very compact and has a very high
performance. It has both a transmitter and a receiver. It consists of four pins: a Vcc
pin to supply 5 V to the sensor, a trigger pin which is given a TTL pulse (15 us), an
echo pin to get the output from the sensor, and a ground pin.

6.2 Signal Light Detection

The vehicle is programmed to drive straight ahead; when it detects the red light
it stops, and when the green light is shown it starts moving again from its current
place. The colour is detected by the colour sensor so that the vehicle can move
according to it. There are numerous uses of a colour sensor, but as of now the
colour sensor is used only for this purpose.
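A sketch of that stop/go decision is shown below, reusing the kind of filtered pulseIn() reading from section 5.8. The pins, the threshold of 30, and gating only the ENA speed pin are illustrative assumptions; real thresholds must be tuned per sensor.

// Illustrative stop/go logic for signal-light detection. readColour()
// performs the filtered pulseIn() reading shown in section 5.8; the pins
// and thresholds are assumptions.
const int S2 = 6, S3 = 7, sensorOut = 8;
const int enablePin = 9;                 // ENA on the motor driver

int readColour(int s2Level, int s3Level) {
  digitalWrite(S2, s2Level);
  digitalWrite(S3, s3Level);
  return pulseIn(sensorOut, LOW);        // lower reading = stronger colour
}

void setup() {
  pinMode(S2, OUTPUT); pinMode(S3, OUTPUT);
  pinMode(sensorOut, INPUT);
  pinMode(enablePin, OUTPUT);
}

void loop() {
  int red = readColour(LOW, LOW);        // red-filtered reading
  int green = readColour(HIGH, HIGH);    // green-filtered reading

  if (red < green && red < 30) {         // red dominates: stop
    analogWrite(enablePin, 0);
  } else if (green < red && green < 30) {// green dominates: go
    analogWrite(enablePin, 200);
  }
  delay(100);
}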

6.3 Lane Follower / Lane Detection

The line follower robot senses the black line by using sensors and then sends the signals
to the Arduino. The Arduino then drives the motors according to the sensors' output. Here
in this project we are using two IR sensor modules, namely the left sensor and the right
sensor. When both the left and right sensors sense white, the robot moves forward. A line
follower sensor consists of an infrared light sensor and an infrared LED. It works by
illuminating a surface with infrared light; the sensor then picks up the reflected infrared
radiation and, based on its intensity, determines the reflectivity of the surface in question.
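A minimal two-sensor line-follower sketch matching that description follows; the pin numbers and the LOW-on-black convention are assumptions.

// Two-IR-sensor line follower as described above: both sensors on white
// means go straight; a sensor over the black line steers back toward it.
const int leftIR = 6, rightIR = 7;            // IR module outputs
const int IN1 = 2, IN2 = 3, IN3 = 4, IN4 = 5; // L298N direction inputs

void setMotors(int leftOn, int rightOn) {     // 1 = forward, 0 = stop
  digitalWrite(IN1, leftOn);  digitalWrite(IN2, LOW);
  digitalWrite(IN3, rightOn); digitalWrite(IN4, LOW);
}

void setup() {
  pinMode(leftIR, INPUT); pinMode(rightIR, INPUT);
  for (int pin = IN1; pin <= IN4; pin++) pinMode(pin, OUTPUT);
}

void loop() {
  bool leftOnLine = (digitalRead(leftIR) == LOW);    // black line seen
  bool rightOnLine = (digitalRead(rightIR) == LOW);

  if (!leftOnLine && !rightOnLine)      setMotors(1, 1); // both white: forward
  else if (leftOnLine && !rightOnLine)  setMotors(0, 1); // steer left
  else if (!leftOnLine && rightOnLine)  setMotors(1, 0); // steer right
  else                                  setMotors(0, 0); // junction/stop line
}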

CHAPTER SEVEN

ADVANTAGES AND DISADVANTAGES OF A SELF-DRIVING VEHICLE

ADVANTAGES

1. Decreased number of accidents

Autonomous vehicles prevent human errors from happening, as the system controls the
vehicle. It leaves no opportunity for distraction, unlike humans, who are prone to
interruptions. It also uses complicated algorithms that determine the correct stopping
distance from one vehicle to another, thereby lessening the chances of accidents dramatically.

2. Lessens traffic jams

Driverless vehicles in a group participate in platooning. This allows the vehicles to brake or
accelerate simultaneously. The platoon system allows an automated highway system, which
may significantly reduce congestion and improve traffic by increasing lane capacity.
Autonomous vehicles communicate well with one another. They help in identifying traffic
problems early on. They detect road repairs and detours instantly. They also pick up hand
signals from motorists and react to them accordingly.

3. Stress-free parking

Autonomous vehicles drop you off at your destination and directly head to a detected vacant
parking spot. This eliminates the time and fuel wasted looking for a vacant one.

4. Time-saving vehicle

As the system takes over control, the driver has spare time to continue working or to catch
up with their loved ones, without having to fear for road safety.

5. Accessibility to transportation

Senior citizens and disabled people often have difficulty driving. Autonomous vehicles
assist them towards safe and accessible transportation.

DISADVANTAGES

1. Expensive

High-technology vehicles and equipment are expensive. Manufacturers spend a large amount of
money on research and development, as well as on choosing the finest and most functional
materials needed, such as the software, modified vehicle parts, and sensors. Thus, the cost of
owning an autonomous vehicle is initially high. However, this may come down after 10 years,
giving way for average earners to have one.

2. Safety and security concerns

Though it has been successfully programmed, there will still be possible unexpected glitches.
Technologies are continuously updating, and almost all of this equipment may have faulty code
when an update is not done properly and successfully.

3. Prone to hacking

Autonomous vehicles could be the next major target of hackers, as these vehicles continuously
track and monitor details of the owner. This may lead to the possible collection of personal data.

4. Fewer job opportunities for others

As artificial intelligence continues to take over the roles and responsibilities of humans, taxi
drivers, truck drivers, or even co-pilots may be laid off as their services will no longer be
needed. This may significantly impact the employment rate and economic growth of a country.

5. Non-functional sensors

Sensor failures often happen during drastic weather conditions. Sensors may not work during
a blizzard or heavy snowfall.

CHAPTER EIGHT

USES OF SELF DRIVING VEHICLES IN THE MECHANICAL FIELD

An automated guided vehicle or automatic guided vehicle (AGV) is a portable robot that
follows marked lines or wires on the floor, or uses radio waves, vision cameras, magnets,
or lasers for navigation. They are most often used in industrial applications to transport
heavy materials around a large industrial building, such as a factory or warehouse.
Application of the automatic guided vehicle broadened during the late 20th century. AGVs
can tow objects behind them in trailers to which they can autonomously attach. The trailers
can be used to move raw materials or finished products. The AGV can also store objects on
a bed. The objects can be placed on a set of motorized rollers (a conveyor) and then pushed
off by reversing them. AGVs are employed in nearly every industry, including pulp, paper,
metals, newspaper, and general manufacturing. They are also used to transport materials
such as food, linen or medicine in hospitals.

An AGV can also be called a laser guided vehicle (LGV). In Germany the technology is also
called Fahrerloses Transportsystem (FTS) and in Sweden förarlösa truckar. Lower cost
versions of AGVs are often called Automated Guided Carts (AGCs) and are usually guided
by magnetic tape. AGCs are available in a variety of models and can be used to move
products on an assembly line, transport goods throughout a plant or warehouse, and deliver
loads.

The first AGV was brought to market in the 1950s, by Barrett Electronics of Northbrook,
Illinois, and at the time it was simply a tow truck that followed a wire in the floor instead of a
rail. Out of this technology came a new type of AGV, which follows invisible UV markers on
the floor instead of being towed by a chain. The first such system was deployed at the Willis
Tower (formerly Sears Tower) in Chicago, Illinois to deliver mail throughout its offices.

Packmobile with trailer AGV

Over the years the technology has become more sophisticated, and today automated vehicles
are mainly laser navigated, e.g., the LGV (Laser Guided Vehicle). In an automated process,
LGVs are programmed to communicate with other robots to ensure product is moved
smoothly through the warehouse, whether it is being stored for future use or sent directly to
shipping areas. Today, the AGV plays an important role in the design of new factories and
warehouses, safely moving goods to their rightful destination.

CHAPTER NINE

FUTURE OF THE SELF DRIVING VEHICLE IN THE AUTOMOBILE FIELD

• There is huge scope for self-driving vehicles in the future; the various automobile
companies are improving their autonomous vehicles rapidly, making them more
accurate and secure. By using multiple cameras and sensors, the accuracy can be
improved. Designing a system where every vehicle is interconnected with nearby
vehicles will avoid traffic congestion in the future. Most people would agree
that driverless vehicles are the future. ... Research has shown that the number of U.S.
deaths resulting from road accidents could be reduced by more than 90% by the year
2050 because of self-driving vehicles. However, this is not the only effect driverless
vehicles will have on our future. Here are some tech giants who paved a new era of
driving with artificial intelligence.

Figure 7.1, 7.2: Self-driving cars made by Tesla
Figure 7.3: Self-driving car made by Google
Figure 7.4: Self-driving car made by Apple (concept)

Discussions
Within the contemporary smart city debate, AVs represent a way to create an ideal city form,
and developments in autonomous driving technology have the potential to bring smart
mobility to our rapidly urbanizing world; but for others AV is a branding hoax (Yigitcanlar &
Lee, 2014; Yigitcanlar & Kamruzzaman, 2018a). Despite a large body of recent literature on
AVs, only a limited number of studies have outlined the disruptive effects that AVs might bring
on city planning and society in general. This report, through a systematic review of the
literature, aimed to determine the current state of the research literature on AV technology, the
future direction that this technology is leading to, how the changes are likely to affect our
day-to-day travel behavior and the long-term structure of our cities, and what the likely policy
tools would be for a smooth transitioning of the technology. As the literature suggests, AVs'
major disruptions in our cities will be in urban transport, land use, employment, parking,
vehicle ownership, infrastructure design, capital investment decisions, sustainability, mobility,
and traffic safety. It is clear from this study that preparing our cities for AVs through
progressive planning is critical to achieving the benefits and to addressing the resulting
disruption. On the eve of rising AV demand, local and state governments should be equipped
with better policy and planning tools to accommodate AV technology and its impacts. In
parallel, timely interventions are needed at the international, national/federal and state levels
in terms of regulating, standardizing and certifying this technology, and approving appropriate
legislative measures to ensure that testing, deployment, privacy, security, and liability issues
are addressed.

Conclusion
Autonomous vehicles have now gone uphill from science fiction to reality. The basic
technologies behind self-driving vehicles are deep learning and computer vision techniques.
Self-driving vehicles can potentially overcome the mistakes made by human drivers, thus saving
human lives. In this project, a prototype of a self-driving vehicle was developed that has the
ability to manoeuvre on predefined paths using convolutional neural networks and computer
vision techniques.

References
[1] Jeremy Straub, Wafaa Amer and Christian Ames, "An Internetworked Self-Driving
Vehicle System-of-Systems", 12th System of Systems Engineering Conference, IEEE, 2017.

[2] https://en.wikipedia.org/wiki/Arduino

[3] https://www.arduino.org/documentation/hardware/camera/

[4] https://github.com/anshupandey/Self-driving-vehicle-using-arduino

[5] https://www.arduino.cc/en/Reference/Board?from=Guide.Board

[6] https://opencv.org/

[7] https://create.arduino.cc/projecthub/ishaq-yang/auto-ultrasonic-vehicle-a85c6f

[8] https://create.arduino.cc/projecthub/aaravpatel0124/self-driving-arduino-vehicle-using-
l298n-motor-driver-f1cf05

[9] IEEE, 2014. www.ieee.org. [Online] Available at:
http://www.ieee.org/about/news/2014/14_july_2014.html [accessed 29 April 2015].

[10] Broggi, A. et al., 2013. Extensive Tests of Autonomous Driving Technologies. IEEE
Transactions on Intelligent Transportation Systems, 14(3).

[11] J.M.A. Alvarez, A.M. Lopez & R. Baldrich, Illuminant-Invariant Model-Based Road
Segmentation. Intelligent Transportation Systems, IEEE Transactions on, 12, 2008,
pp. 184–193.

[12] A. Bar Hillel, R. Lerner, D. Levi, & G. Raz. Recent progress in road and lane detection:
a survey. Machine Vision and Applications, Feb. 2012, pp. 727–745.

[13] Shahzeb Ali, Department of Electronic Engineering, Mehran University of Engineering
& Technology, Jamshoro, 2016, IEEE.

[14] K. R. Memon, S. Memon, B. Memon, A. R. Memon, and M. Z. A. S. Syed, "Real time
Implementation of Path Planning Algorithm with Obstacle Avoidance for Autonomous
Vehicle", 3rd International Conference on Computing for Sustainable Global Development,
New Delhi, India, 2016.
