
Design and Development of Automatic Steering

Control System for Autonomous Electric Tractor


MINI PROJECT REPORT

Submitted by
NIRANJAN B (2003027)
RAMALINGAM M (2003029)
SYED ABDUL HAQ S (2003040)
TARUN B A (2003041)

in partial fulfilment for the award of the degree of

BACHELOR OF ENGINEERING

in

ELECTRICAL AND ELECTRONICS ENGINEERING

COIMBATORE INSTITUTE OF TECHNOLOGY


(Government Aided Autonomous Institution)
COIMBATORE – 641 014
ANNA UNIVERSITY: CHENNAI 600 025
MAY 2023
Design and Development of Automatic Steering
Control System for Autonomous Electric Tractor
MINI PROJECT REPORT

Submitted by
NIRANJAN B (2003027)
RAMALINGAM M (2003029)
SYED ABDUL HAQ S (2003040)
TARUN B A (2003041)

Under the guidance of

Dr. S. NANDAKUMAR, M.E., Ph.D.,
ASSISTANT PROFESSOR,
DEPARTMENT OF EEE, CIT
in partial fulfilment for the award of the degree of

BACHELOR OF ENGINEERING
in
ELECTRICAL AND ELECTRONICS ENGINEERING

COIMBATORE INSTITUTE OF TECHNOLOGY

(Government Aided Autonomous Institution)
COIMBATORE – 641 014
ANNA UNIVERSITY: CHENNAI 600 025
MAY 2023
COIMBATORE INSTITUTE OF TECHNOLOGY
(Government Aided Autonomous Institution)
COIMBATORE – 641 014
ANNA UNIVERSITY: CHENNAI 600 025

BONAFIDE CERTIFICATE

Certified that this project report “Design and Development of Automatic Steering Control System for Autonomous Electric Tractor” is the bonafide work of “NIRANJAN B (2003027), RAMALINGAM M (2003029), SYED ABDUL HAQ S (2003040), TARUN B A (2003041)” who carried out the project work under my supervision.

Dr. S. NANDAKUMAR, M.E., Ph.D.,          Dr. S. VASANTHARATHNA, M.E., Ph.D.,
ASSISTANT PROFESSOR,                     PROFESSOR & HEAD OF THE DEPARTMENT
PROJECT GUIDE

DEPARTMENT OF ELECTRICAL AND ELECTRONICS ENGINEERING


COIMBATORE INSTITUTE OF TECHNOLOGY
COIMBATORE – 641 014

Place: COIMBATORE
Date:
CERTIFICATE OF EVALUATION

College Name : Coimbatore Institute of Technology
Branch       : Electrical and Electronics Engineering
Semester     : VI Semester

Name of the students who have done the project:
NIRANJAN B (2003027)
RAMALINGAM M (2003029)
SYED ABDUL HAQ S (2003040)
TARUN B A (2003041)

Title of the Project:
“Design and Development of Automatic Steering Control System for Autonomous Electric Tractor”

Name of the Supervisor with Designation:
Dr. S. Nandakumar, M.E., Ph.D., Assistant Professor,
Department of Electrical and Electronics Engineering

The report of the project work submitted by the above students in partial fulfilment for the award of the Bachelor of Engineering degree in Electrical and Electronics Engineering of Anna University, Chennai, is evaluated and confirmed to be the report of the work done by the above students.

Certified that the candidates have been examined in the mini project work viva-voce examination held on

INTERNAL EXAMINER EXTERNAL EXAMINER


ANNA UNIVERSITY
CHENNAI-600 025
CERTIFICATE

A project entitled “DESIGN AND DEVELOPMENT OF AUTOMATIC STEERING CONTROL SYSTEM FOR AUTONOMOUS ELECTRIC TRACTOR” has been carried out in the DEPARTMENT OF ELECTRICAL AND ELECTRONICS ENGINEERING, COIMBATORE INSTITUTE OF TECHNOLOGY, COIMBATORE. The work reported here is original and does not form part of any other work.

We understand the University’s policy on plagiarism and declare that the project is our own work, except where specifically acknowledged, and has not been copied from other sources.

NIRANJAN B RAMALINGAM M
2003027 2003029

SYED ABDUL HAQ S TARUN B A


2003040 2003041

SIGNATURE OF THE GUIDE


Dr. S. NANDAKUMAR, M.E., Ph.D.,
ASSISTANT PROFESSOR,
DEPARTMENT OF EEE, CIT
ACKNOWLEDGEMENT

We wholeheartedly thank and express our heartfelt gratitude to our project guide, Dr. S. NANDAKUMAR, M.E., Ph.D., for encouraging us throughout the year. The constructive and encouraging comments and suggestions we received proved indispensable for this project.

We express our gratitude to the Professor and Head of the Department, Dr. S. Vasantharathna, M.E., Ph.D., for her inspiring motivation and encouragement, without which this project could not have been completed.

We thank the members of the Project Evaluation Committee and Steering Committee, Dr. M. Mynavathi, M.E., Ph.D., Dr. R. Rajalakshmi, M.E., Ph.D., Dr. G. Manavaalan, M.E., Ph.D., and Dr. V. Manikandan, M.E., Ph.D., who reviewed our project and guided us to its successful completion.

We would like to thank the Principal, Dr. A. Rajeswari, M.E., Ph.D., for her support and for providing us the facilities to carry out this project.

We thank all our faculty members, staff and technicians for their support.
ABSTRACT

The automobile industry is moving towards a fully automated electric vehicle future. Automation of agricultural equipment will greatly reduce the workload on farmers, benefit them economically, and improve efficiency and output. This is why our group has taken up this project of designing and developing an automatic power steering control system for an autonomous electric tractor. We accomplish this with the help of the YOLO algorithm, a software technique used for real-time object detection in applications such as autonomous driving.

A camera captures and sends a live stream to the computing unit, where YOLO runs on the stream dynamically and detects objects that are found to be obstacles. When an obstacle is detected, the system calculates the obstacle’s size and distance from the vehicle. The calculated data is sent to the steering control unit, which computes by how many degrees the vehicle should turn in order to avoid it. This information is sent to a motor connected to the front axle of the vehicle; a motor is used because this is a steer-by-wire method.

This can automate the vehicle and save a lot of time and money for farmers. It is very cost-effective compared to other autonomous driving methods. If trained with more data, it could also be used to drive in a real-life traffic environment.
TABLE OF CONTENTS

CHAPTER NO TITLE PAGE NO.

ABSTRACT I

1 INTRODUCTION 1

1.1 Overview 1

1.2 Need for this project 1

1.3 Objectives 1

1.4 Advantage of Proposed System 2

1.5 Summary 2

2 LITERATURE REVIEW 3

2.1 Introduction 3

2.2 Literature Survey 3

2.3 Summary 7

3 BLOCK DIAGRAM AND DESCRIPTION 8

3.1 Introduction 8
3.2 Block Diagram 8
3.3 Hardware Description 15

3.3.1 ESP32 CAM 15

3.3.2 Arduino UNO 16

3.4 Software Description 19

3.4.1 Arduino IDE

3.4.2 Anaconda Jupyter

3.5 Mathematical Modelling

3.6 Summary 21

4 RESULTS AND DISCUSSION 22

4.1 Introduction 22

4.2 Hardware 22

4.3 Summary 25

5 CONCLUSION AND FUTURE WORK 25

5.1 Conclusion 25

5.2 Future Scope 25

REFERENCES 26

APPENDIX 1 29

APPENDIX 2 32
CHAPTER – 1
INTRODUCTION
1.1 OVERVIEW

The automobile industry is moving towards automated electric vehicles. Automation of agricultural equipment will help farmers reduce workload and improve efficiency. Our group has developed an automatic power steering system for autonomous electric tractors using the YOLO object detection algorithm. The system detects obstacles, calculates their size and distance, and turns the vehicle to avoid them. This is a cost-effective method that can save farmers time and money. With more training data, the system could be used to drive in real-world traffic environments.

1.2 NEED FOR THIS PROJECT

YOLO (You Only Look Once) is a popular object detection model known for its speed and accuracy. Single-shot object detection uses a single pass of the input image to make predictions about the presence and location of objects in the image. Because the entire image is processed in a single pass, the approach is computationally efficient. YOLO is a single-shot detector that uses a fully convolutional neural network (CNN) to process an image. In our project, the YOLO algorithm is used in the automatic steering control system for an autonomous electric tractor to detect objects around the agricultural field, such as vehicles, people, stones and animals.
1.3 OBJECTIVES

• To design an autonomous power steering system using the YOLO algorithm.
• To program a YOLO module which effectively classifies the objects in an image.
• To implement this on the GoPiGo kit and steer the vehicle to avoid the obstacle.

1.4 ADVANTAGES OF THE PROPOSED SYSTEM

• Speed: This algorithm improves the speed of detection because it can predict objects in real-
time.
• High accuracy: YOLO is a predictive technique that provides accurate results with
minimal background errors.
• Learning capabilities: The algorithm has excellent learning capabilities that enable it to
learn the representations of objects and apply them in object detection.

1.5 SUMMARY

This chapter described the overview of the project, the need for this project, its objectives, and the advantages of the proposed system.

CHAPTER - 2
LITERATURE REVIEW
2.1 INTRODUCTION

This chapter reviews the various literature on autonomous driving, object detection and identification.

2.2 LITERATURE SURVEY

1. “Real Time Object Detection with YOLO” — Geethapriya. S, N. Duraimurugan, S.P. Chokkalingam.
Abstract: Algorithms such as the Convolutional Neural Network and the Fast Convolutional Neural Network do not look at the image completely. In YOLO, the algorithm looks at the image completely: it predicts the bounding boxes using a convolutional network together with the class probabilities for these boxes, and detects objects faster than the other algorithms.
Conclusion: Region proposal strategies limit the classifier to a particular region, whereas YOLO has access to the entire image when predicting boundaries.

2. “Literature Survey on Object Detection Using YOLO” — Rekha B.S, Athiya Marium, Dr. G.N. Srinivasan, Supreetha A. Shetty.
Abstract: Problems such as noise, blurring and rotating jitter have an important impact on object detection. Objects can be detected in real time using YOLO (You Only Look Once), an algorithm based on convolutional neural networks.
Conclusion: YOLO is a unified model for object detection that is easy to build and is trained directly on full images; it also generalizes well to new domains, making it suitable for applications that rely on fast, robust object detection. To recognize indoor obstacles, a new method combining deep learning with a light field camera was used; the method identifies the obstacles and perceives their information.

3. “YOLO-Former: Marrying YOLO and Transformer for Foreign Object Detection” — Yuan Dai, Weiming Liu, Heng Wang, Wei Xie, Kejun Long.
Abstract: The vision transformer (ViT) is introduced for dynamic attention and global modelling, thereby solving the problem that the original YOLOv5 only utilizes information in region proposals and has insufficient ability to capture global information. Second, the convolutional block attention module (CBAM) and the stem module are used to further improve feature expression ability and reduce floating-point operations (FLOPs).
Conclusion: Learnt about object detection and obstacle identification in a live stream in order to avoid an accident.

4. “Driver Assistant System using YOLO V3 and VGGNET” — S. Jansi Rani, S. Uthirapathi Eswaran, A.V. Vedha Mukund, M. Vidul.
Abstract: The performance of a YOLO-based architecture and a VGGNet-based architecture for traffic sign detection, each augmented with a CNN, is compared. Because the detection must be real-time and quick for driving to be safe, the networks used here have only been trained to recognize and classify items such as traffic signs and lights.
Conclusion: Covered the ability of autonomous vehicles and advanced driver-assistance systems to observe and comprehend static and non-static objects around the vehicle; learnt how YOLO can be used for driving assistance.

5. “Automated Image Processing Workflow for Unmanned Aerial Vehicles” — Samuel Oswald, Dries Raymaekers, Wouter Dierckx, Dominique De Munck, Stephen Kempenaers, Jens Verrydt, Dieter Meeus.
Abstract: An automated image processing workflow has been developed to allow end-users to plan and execute UAV flights with the end goal of publishing consistent and reliable data products. An end-user is capable of producing consistent, quality training images and annotations in a standardised way, allowing for the fast production and inference of new application models.
Conclusion: An automated image processing workflow was developed for planning and executing UAV flights; learnt how dynamic image processing helps in deciding a vehicle’s path.

6. “Improved Real Time Object Detection Method for Remote Sensing Image Based on YOLOv4” — Jian Huang.
Abstract: The proposed method designs an image segmentation algorithm for high-resolution remote sensing images, which reduces the large information loss incurred during image scale transformation. Features of the remote sensing image are extracted by the CSPDarknet53 network, modified by inserting the SE attention block to strengthen the effective channels, and the information is effectively utilized by the SPP and PANet modules.
Conclusion: Accuracy is increased by modifying the loss function and adding a hyper-parameter that suppresses the influence of the imbalance between the object and background classes on the training process.

7. “You Only Look Once: Unified, Real-Time Object Detection” — Joseph Redmon, Santosh Divvala, Ross Girshick, Ali Farhadi.
Abstract: Frames object detection as a regression problem to spatially separated bounding boxes and associated class probabilities, increasing processing speed to up to 155 fps while roughly doubling accuracy.
Conclusion: Unlike classifier-based approaches, YOLO is trained on a loss function that directly corresponds to detection performance, and the entire model is trained jointly. Fast YOLO is the fastest general-purpose object detector in the literature, YOLO pushes the state of the art in real-time object detection, and it generalizes well to new domains, making it ideal for applications that rely on fast, robust object detection.

8. “Modified object detection method based on YOLO” — Xia Zhao, Yingting Ni, Haihang Jia.
Abstract: By increasing the cluster box count and the number of anchor boxes, faster results are obtained, localization accuracy is improved and convergence speed is increased.
Conclusion: The object detection method M-YOLO utilizes the cluster centres from super-pixel segmentation and the anchor boxes of Faster R-CNN. The experimental results confirm that M-YOLO improves the accuracy of object bounding boxes by about 10% as well as the recall rate, while keeping the detection accuracy.

9. “YOLO Algorithm-Based Surrounding Object Identification on Autonomous Electric Vehicle” — Irvine Valiant Fanthony, Zaenal Husin, Hera Hikmarika, Suci Dwijayanti, Bhakti Yudho Suprapto.
Abstract: A camera module records the vehicle’s surroundings; the feed is given to YOLO, which calculates the size and distance of each object.
Conclusion: The most compatible YOLO model for the system was the Tiny YOLOv4 model built with the Darknet framework. The simulation experiment showed a detection accuracy of 80%, and the system was able to transmit the object’s location to the microcontroller, showing that YOLO can detect objects and provide input to the steering control system.

10. “Real-time pedestrian and vehicle detection for autonomous driving” — Zhiheng Yang, Jun Li, Huiyun Li.
Abstract: Autonomous vehicles (AVs) have the potential to solve many traffic problems, such as accidents, congestion and pollution. However, there are still challenges to overcome; for instance, AVs need to accurately perceive their environment to safely navigate busy urban scenarios. The aim of this paper is to review recent articles on computer vision techniques that can be used to build an AV perception system.
Conclusion: YOLOv4 focuses on optimising the speed and accuracy of the system in such a manner that only one conventional GPU (e.g., a 1080Ti or 2080Ti) is required. The paper describes a one-stage object detector as being made of several elements: input, backbone, neck and head.

11. “Object Detection in Autonomous Vehicles” — Razvan-Alexandru Bratulescu, Sorina-Andreea Mitroi, Robert-Ionut Vatasoiu, Mari-Anais Sachian.
Abstract: The great majority of accidents are caused by human mistakes, and autonomous cars can help lower this number significantly, thus improving road safety. Object identification plays a critical part in autonomous vehicle driving, and deep learning techniques are used to implement it.
Conclusion: Presents a brief overview of the relevance of object identification in autonomous driving, as well as different detection techniques, with a focus on the YOLO detection algorithm, trained for autonomous driving using camera sensors. The YOLO model has been trained to recognize various objects that may come into contact with moving cars.

12. “Design and Development of an Autonomous Car using Object Detection with YOLOv4” — Rishabh Chopda, Saket Pradhan, Anuj Goenka.
Abstract: Reinforcement learning methods can be introduced in addition to this method for better performance. The method can serve as a prototype for future citywide self-driving car projects, and can be used on its own or alongside conventional lane detection to further improve the accuracy of self-driving cars.
Conclusion: Describes the process of fabricating a model vehicle, from its embedded hardware platform to the end-to-end ML pipeline necessary for automated data acquisition and model training, thereby allowing a deep learning model to derive input from the hardware platform and control the car’s movements. This guides the car autonomously and adapts well to real-time tracks without manual feature extraction.

2.3 SUMMARY

As a result of the foregoing survey, the basic framework of the project was created, and it was found that each method for designing an autonomous power steering system has its own set of benefits and drawbacks.

CHAPTER – 3

BLOCK DIAGRAM AND DESCRIPTION

3.1 INTRODUCTION

This chapter deals with the necessary hardware and software for the autonomous object detection and classification mechanism to be implemented in the steering system. The operation of both the hardware and the software is explained in detail.

3.2 BLOCK DIAGRAM

The block diagram is shown in Fig 3.1. Live images from the Camera Module are streamed to the Processing Unit, where the YOLO algorithm performs object detection and classification. The result is passed to the Steering Control Unit, which drives the Steering Motor.

Fig 3.1 BLOCK DIAGRAM

3.2.1 Camera Module


The camera module captures live events happening in real time and transmits the stream
to the computational unit dynamically.
The quality of data provided by the camera module plays a vital role in the efficiency and
accuracy of the system.

3.2.2 Processing Unit

The processing unit is the core of the system. Here, the live feed from the camera module is processed using the YOLO (You Only Look Once) algorithm.

3.2.2.1 YOLO algorithm

YOLO is an algorithm that uses neural networks to provide real-time object


detection. This algorithm is popular because of its speed and accuracy. It has been used in
various applications to detect traffic signals, people, parking meters, and animals.

YOLO algorithm employs convolutional neural networks (CNN) to detect objects in real-
time. As the name suggests, the algorithm requires only a single forward propagation
through a neural network to detect objects.

This means that prediction in the entire image is done in a single algorithm run. The CNN
is used to predict various class probabilities and bounding boxes simultaneously.

3.2.2.2 Working of YOLO

YOLO algorithm works using the following three techniques:

 Residual blocks
 Bounding box regression
 Intersection Over Union (IOU)

Residual blocks
First, the image is divided into grid cells so that the whole image forms an S × S grid. Fig 3.2 shows how an input image is divided into grids.

Fig 3.2 Residual blocks

In Fig 3.2, there are many grid cells of equal dimension. Every grid cell detects the objects that appear within it. For example, if an object’s centre appears within a certain grid cell, then this cell will be responsible for detecting it.
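To make the grid assignment concrete, here is a minimal Python sketch (an illustration, not part of the report’s code), assuming a 416 × 416 input, the network size used in Appendix 2, and S = 7:

S = 7                      # the image is split into an S x S grid of cells
img_w, img_h = 416, 416    # assumed network input size (see Appendix 2)

def owning_cell(cx, cy):
    """Return (row, col) of the grid cell responsible for an object
    whose centre lies at pixel (cx, cy)."""
    col = int(cx / img_w * S)
    row = int(cy / img_h * S)
    return row, col

# An object centred at pixel (300, 120) belongs to cell (row 2, col 5):
print(owning_cell(300, 120))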

Bounding box regression


A bounding box is an outline that highlights an object in an image.

Every bounding box in the image consists of the following attributes:

 Width (bw)
 Height (bh)
 Class (for example, person, car, traffic light, etc.)- This is represented by the
letter c.
 Bounding box center (bx,by)

The following image shows an example of a bounding box. The bounding box has
been represented by a yellow outline.
Fig 3.3 bounding box regression

YOLO uses a single bounding box regression to predict the height, width, center, and class of objects. In Fig 3.3, pc represents the probability of an object appearing in the bounding box.

Intersection over union (IOU)


Intersection over union (IOU) is a measure used in object detection that describes how much two boxes overlap: the area of their intersection divided by the area of their union. YOLO uses IOU to provide an output box that surrounds the object perfectly.

Each grid cell is responsible for predicting the bounding boxes and their confidence
scores. The IOU is equal to 1 if the predicted bounding box is the same as the real box.
This mechanism eliminates bounding boxes that are not equal to the real box.

Fig 3.4 provides a simple example of how IOU works.

Fig 3.4 IoU

In Fig 3.4, there are two bounding boxes, one in green and the other one in blue. The blue box is the predicted box while the green box is the real box. YOLO trains the predicted box to match the real box as closely as possible.
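The overlap measure itself is straightforward to compute. A minimal Python sketch (illustrative, not the report’s code), assuming boxes are given as (x_min, y_min, x_max, y_max) tuples:

def iou(box_a, box_b):
    # Corners of the overlapping rectangle, if any
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

print(iou((10, 10, 50, 50), (10, 10, 50, 50)))  # 1.0: identical boxes
print(iou((10, 10, 50, 50), (30, 30, 70, 70)))  # about 0.14: small overlap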

Combination of the three techniques

Fig 3.5 shows how the three techniques are applied to produce the final detection results.

Fig 3.5 Combination of the three techniques

First, the image is divided into grid cells. Each grid cell forecasts B bounding boxes and
provides their confidence scores. The cells predict the class probabilities to establish the
class of each object.

For example, we can notice at least three classes of objects: a car, a dog, and a bicycle. All
the predictions are made simultaneously using a single convolutional neural network.

Intersection over union ensures that the predicted bounding boxes are equal to the real
boxes of the objects. This phenomenon eliminates unnecessary bounding boxes that do
not meet the characteristics of the objects (like height and width). The final detection will
consist of unique bounding boxes that fit the objects perfectly.

For example, the car is surrounded by the pink bounding box while the bicycle is
surrounded by the yellow bounding box. The dog has been highlighted using the blue
bounding box.
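The elimination step can be sketched in a few lines of Python: boxes below the confidence threshold are discarded, and among heavily overlapping boxes only the most confident is kept (non-max suppression). This sketch reuses the iou() helper above; the threshold values are illustrative:

def final_detections(boxes, obj_thresh=0.5, nms_thresh=0.5):
    """boxes: list of (x1, y1, x2, y2, confidence) tuples."""
    # 1. Keep only boxes whose confidence clears the threshold
    boxes = [b for b in boxes if b[4] >= obj_thresh]
    # 2. Greedily keep the most confident box and drop boxes that
    #    overlap it too strongly
    boxes.sort(key=lambda b: b[4], reverse=True)
    kept = []
    for b in boxes:
        if all(iou(b[:4], k[:4]) < nms_thresh for k in kept):
            kept.append(b)
    return kept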

3.2.3 Steering Control Unit

The object is classified in the processing unit and the information is sent to the
steering control unit.

Here, PWM pulses are generated accordingly and fed to the steering motor.

3.2.4 Steering Motor

The motor is connected to the steering column and turns the axle in accordance with the signals received from the steering control unit.
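The report does not specify the exact control law, but a hypothetical Python sketch of mapping a requested steering angle to a PWM pulse might look as follows. The 1.0-2.0 ms pulse range at a 20 ms period is the common RC-servo convention, and the 30-degree steering lock is an assumed value, not a measured one:

MAX_ANGLE = 30.0   # assumed steering lock, in degrees

def steering_pwm(angle_deg, period_ms=20.0):
    """Return (pulse_ms, duty_percent) for a requested steering angle.
    -MAX_ANGLE..+MAX_ANGLE maps linearly onto a 1.0-2.0 ms pulse."""
    angle = max(-MAX_ANGLE, min(MAX_ANGLE, angle_deg))  # clamp to the lock
    pulse_ms = 1.5 + 0.5 * (angle / MAX_ANGLE)          # 1.5 ms = centred wheels
    return pulse_ms, 100.0 * pulse_ms / period_ms

print(steering_pwm(15))   # (1.75, 8.75): half of full right lock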

3.3 HARDWARE DESCRIPTION

3.3.1 ESP32 CAM

In the camera module, the camera used here is ESP32 CAM. The ESP32
CAM is a highly versatile camera module based on the ESP32 chip. It is widely
used in the fields of home automation, IoT devices, robotics, and surveillance
systems, among others. This module is designed for embedded applications and
provides high-quality image and video capabilities.
The ESP32 CAM was chosen as the camera for this project because of its compact size, which decreases the total weight of the system. It is small and lightweight, which makes it easy to integrate into a wide range of devices.
This module is equipped with a 2MP camera that can capture clear and high-
quality images and a Wi-Fi module that allows it to connect to the internet and send
images and video streams to remote servers.
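On the computer side, the module’s video stream can be read with OpenCV, as in the minimal sketch below. The ':81/stream' endpoint is the one exposed by the stock CameraWebServer sketch used in Appendix 1, and the IP address is a placeholder for the one printed on the serial monitor:

import cv2

# Placeholder IP address; replace with the module's actual address
stream = cv2.VideoCapture('http://192.168.1.50:81/stream')
while stream.isOpened():
    ok, frame = stream.read()
    if not ok:
        break
    cv2.imshow('ESP32 CAM live feed', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):   # press q to quit
        break
stream.release()
cv2.destroyAllWindows()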

Fig 3.6 ESP 32 CAM MODULE

THE SPECIFICATIONS OF THE ESP32 CAM ARE GIVEN BELOW

 Power Supply: +5V DC


 WiFi module: ESP-32S
 WiFi protocol: IEEE 802.11 b/g/n/e/i
 WiFi mode: Station/ SoftAP/ SoftAP + Station
 Security: WPA/WPA2/WPA2-Enterprise/WPS
 Output image format: JPEG (OV2640 support only), BMP,
GRAYSCALE
 RAM: Internal 512KB + External 4M PSRAM
 Bluetooth: Bluetooth 4.2 BR/EDR and BLE
 Supported TF card: up to 4 GB
 IO port: 9
 UART baud rate: default 115200bps
 Dimension: 40.5mm x 27mm x 4.5mm

3.3.2 ARDUINO UNO

The Arduino Uno shown in Fig 3.7 is a microcontroller board based on the ATmega328P 8-bit microcontroller. Along with the ATmega328P, it includes supporting components such as a crystal oscillator, serial communication circuitry and a voltage regulator.

The Arduino Uno features a USB interface, 6 analog input pins and 14
digital I/O ports used to interface to external electronic circuits. Of the 14 I/O
ports, 6 pins can be used for PWM output. It allows designers to control and
detect external electronic devices in the real world.

The software used to program Arduino devices is the Arduino IDE (Integrated Development Environment), which is free to use and requires only basic skills to learn. Boards can be programmed in the languages C and C++.

Fig 3.7 ARDUINO UNO

THE SPECIFICATIONS OF THE ARDUINO UNO ARE GIVEN BELOW

Table 3.1 Specification Arduino UNO

Microcontroller               ATmega328P – 8-bit AVR family microcontroller
Operating Voltage             5V
Recommended Input Voltage     7-12V
Input Voltage Limits          6-20V
Analog Input Pins             6 (A0-A5)
Digital I/O Pins              14 (of which 6 provide PWM output)
DC Current on I/O Pins        40mA
DC Current on 3.3V Pin        50mA
Flash Memory                  32 KB (0.5 KB used by the bootloader)
SRAM                          2 KB
EEPROM                        1 KB
Frequency (Clock Speed)       16 MHz

3.4 SOFTWARE DESCRIPTION

3.4.1 ARDUINO IDE

Arduino is an open-source electronics platform based on easy-to-use


hardware and software. Arduino boards are able to read inputs - light on a sensor,
a finger on a button, or a Twitter message - and turn it into an output - activating
a motor, turning on an LED, publishing something online. You can tell your
board what to do by sending a set of instructions to the microcontroller on the
board. To do so you use the Arduino programming language (based on Wiring),
and the Arduino Software (IDE), based on Processing.

Over the years Arduino has been the brain of thousands of projects, from
everyday objects to complex scientific instruments. A worldwide community of
makers - students, hobbyists, artists, programmers, and professionals - has gathered
around this open-source platform, their contributions have added up to an incredible
amount of accessible knowledge that can be of great help to novices and experts
alike.

Arduino was born at the Ivrea Interaction Design Institute as an easy tool for
fast prototyping, aimed at students without a background in electronics and
programming. As soon as it reached a wider community, the Arduino board started
changing to adapt to new needs and challenges, differentiating its offer from simple
8-bit boards to products for IoT applications, wearables, 3D printing, and embedded environments.

3.4.2 ANACONDA – JUPYTER NOTEBOOK

Anaconda is a free and open-source distribution of the Python and R


programming languages for large-scale data processing, predictive analytics, and
scientific computing that aims to simplify package management and deployment.
The package management system conda manages package versions.
Anaconda is the easiest way to ensure you don’t spend all day installing Jupyter. Simply download the Anaconda package and run the installer. The Anaconda software package contains everything you need to create a Python development environment. Anaconda comes in two versions, one for Python 2.7 and one for Python 3.x. For this guide, install the one for Python 2.7.
Jupyter Notebook turns your browser into a Python development environment.
The only thing you have to install is Anaconda. In essence, it allows you to enter a
few lines of Python code, press CTRL+Enter, and execute the code. You enter the
code in cells and then run the currently selected cell.

3.5 MATHEMATICAL MODELLING

3.5.1 MATHEMATICAL MODELLING IN YOLO

Our system models detection as a regression problem. It divides the image into an S × S grid and, for each grid cell, predicts B bounding boxes, confidence scores for those boxes, and C class probabilities. These predictions are encoded as an S × S × (B × 5 + C) tensor.
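For example, with the configuration used in the original YOLO paper (S = 7, B = 2 and C = 20 PASCAL VOC classes; these values are from that paper, not from this project), the output tensor works out as:

# Output tensor shape S x S x (B*5 + C) for the original YOLO configuration
S, B, C = 7, 2, 20
depth = B * 5 + C        # each box contributes x, y, w, h and a confidence
print((S, S, depth))     # (7, 7, 30), i.e. 1470 predicted values per image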

3.5.2 MATHEMATICAL MODELLING TO FIND DISTANCE

Real object height (mm) = Distance to object (mm) × Object height on sensor (mm) × Focal length (mm)^(-1)
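Rearranged for the quantity the system actually needs, the distance follows as Focal length × Real object height / Object height on sensor. A small Python sketch with illustrative (not measured) numbers:

def distance_mm(focal_mm, real_height_mm, height_on_sensor_mm):
    # Rearranging the formula above to solve for the distance
    return focal_mm * real_height_mm / height_on_sensor_mm

# e.g. a 1700 mm tall person imaged 1.2 mm tall through a 3.6 mm lens:
print(distance_mm(3.6, 1700, 1.2))   # 5100.0 mm, i.e. about 5.1 m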
3.6 SUMMARY

This chapter discussed the block diagram, the connections between all the devices in the circuit, and how object classification works. The hardware and software components used in the simulation and in the hardware were also discussed.

CHAPTER-4

RESULTS AND DISCUSSION

4.1 INTRODUCTION

This chapter deals with the real-time hardware and software implementation of object detection and identification for the automatic steering system. The hardware operation is explained in detail.

4.2 HARDWARE

The ESP32 camera module is used to capture live images, which are transferred to a computer dynamically through WiFi.

Fig 4.1 Capturing real-time images using the ESP32 camera module

Fig 4.2 Live feed from Camera module

In Fig 4.2, the live feed from the camera module is displayed on the computer. Different objects are placed in front of the camera for identification and classification.

Fig 4.3 Object Classification

The objects present in front of the camera module in Fig 4.2 are classified using the YOLO algorithm, as depicted in Fig 4.3.

4.3 SUMMARY

In this chapter, the process of object detection and classification using an ESP32 camera module has been detailed. First, the camera module sets up a live stream, which is transferred to and displayed on a computer over WiFi. The objects in the stream are then detected and classified using pre-trained data sets.

CHAPTER-5

CONCLUSION AND FUTURE WORK

5.1 CONCLUSION

This project has proposed an innovative technique to detect and classify objects in the
surrounding space. If trained properly using data sets of different environments, this
technique can be used for fully automated driving in automobiles.
In the agricultural sector, automated tractors can be implemented by training the AI using data sets of different agricultural fields.

5.2 FUTURE SCOPE

Object detection and classification is the first step of the initially proposed project regarding the automatic power steering mechanism. Further work will include the integration of this software with the control systems, electrical components and mechanical chassis, all of which together constitute an automatic electric power steering system.

REFERENCES

[1] Geethapriya. S, Duraimurugan. N, Chokkalingam. S.P., “Real-Time Object Detection with YOLO”, International Journal of Engineering and Advanced Technology (IJEAT), ISSN: 2249-8958, Volume-8, Issue-3S, February 2019.

[2] Rekha B.S, Athiya Marium, Dr. G.N. Srinivasan, Supreetha A. Shetty, “Literature Survey on Object Detection Using YOLO”, International Research Journal of Engineering and Technology, June 2020.

[3] Yuan Dai, Weiming Liu, Heng Wang, Wei Xie, Kejun Long, “YOLO-Former: Marrying YOLO and Transformer for Foreign Object Detection”, IEEE Transactions on Instrumentation and Measurement, Volume 71, 2022.

[4] S. Jansi Rani, S. Uthirapathi Eswaran, A.V. Vedha Mukund, M. Vidul, “Driver Assistant System using YOLO V3 and VGGNET”, 2022 International Conference on Inventive Computation Technologies (ICICT), 2022.

[5] Joseph Redmon, Santosh Divvala, Ross Girshick, Ali Farhadi, “You Only Look Once: Unified, Real-Time Object Detection”, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016: 779-788.

[6] Jian Huang, “Improved Real Time Object Detection Method for Remote Sensing Image Based on YOLOv4”, 2021 International Conference on Computer Information Science and Artificial Intelligence (CISAI), 2021.

[7] Xia Zhao, Yingting Ni, Haihang Jia, “Modified object detection method based on YOLO”, Computer Vision: Second CCF Chinese Conference, CCCV 2017, Tianjin, China, October 11-14, 2017, Proceedings, Part III, 233-244, 2017.

[8] Samuel Oswald, Dries Raymaekers, Wouter Dierckx, Dominique De Munck, Stephen Kempenaers, Jens Verrydt, Dieter Meeus, “Automated Image Processing Workflow for Unmanned Aerial Vehicles”.

[9] Irvine Valiant Fanthony, Zaenal Husin, Hera Hikmarika, Suci Dwijayanti, Bhakti Yudho Suprapto, “YOLO Algorithm-Based Surrounding Object Identification on Autonomous Electric Vehicle”, 2021 8th International Conference on Electrical Engineering, Computer Science and Informatics (EECSI), 151-156, 2021.

[10] Zhiheng Yang, Jun Li, Huiyun Li, “Real-time pedestrian and vehicle detection for autonomous driving”, 2018 IEEE Intelligent Vehicles Symposium (IV), 179-184, 2018.

[11] Razvan-Alexandru Bratulescu, Sorina-Andreea Mitroi, Robert-Ionut Vatasoiu, Mari-Anais Sachian, “Object Detection in Autonomous Vehicles”, 2022 25th International Symposium on Wireless Personal Multimedia Communications (WPMC), November 2022.

[12] Rishabh Chopda, Saket Pradhan, Anuj Goenka, “Design and Development of an Autonomous Car using Object Detection with YOLOv4”.
APPENDIX 1

ARDUINO IDE PROGRAM TO ACCESS THE ESP32 CAM

#include "esp_camera.h"
#include <WiFi.h>
#define CAMERA_MODEL_AI_THINKER // Has PSRAM
#include "camera_pins.h"
const char* ssid = "idname";
const char* password = "password";
void startCameraServer();
void setupLedFlash(int pin);
void setup() {
Serial.begin(115200);
Serial.setDebugOutput(true);
Serial.println();
camera_config_t config;
config.ledc_channel = LEDC_CHANNEL_0;
config.ledc_timer = LEDC_TIMER_0;
config.pin_d0 = Y2_GPIO_NUM;
config.pin_d1 = Y3_GPIO_NUM;
config.pin_d2 = Y4_GPIO_NUM;
config.pin_d3 = Y5_GPIO_NUM;
config.pin_d4 = Y6_GPIO_NUM;
config.pin_d5 = Y7_GPIO_NUM;
config.pin_d6 = Y8_GPIO_NUM;
config.pin_d7 = Y9_GPIO_NUM;
config.pin_xclk = XCLK_GPIO_NUM;
config.pin_pclk = PCLK_GPIO_NUM;
config.pin_vsync = VSYNC_GPIO_NUM;

config.pin_href = HREF_GPIO_NUM;
config.pin_sccb_sda = SIOD_GPIO_NUM;
config.pin_sccb_scl = SIOC_GPIO_NUM;
config.pin_pwdn = PWDN_GPIO_NUM;
config.pin_reset = RESET_GPIO_NUM;
config.xclk_freq_hz = 20000000;
config.frame_size = FRAMESIZE_UXGA;
config.pixel_format = PIXFORMAT_JPEG; // for streaming
//config.pixel_format = PIXFORMAT_RGB565; // for face detection/recognition
config.grab_mode = CAMERA_GRAB_WHEN_EMPTY;
config.fb_location = CAMERA_FB_IN_PSRAM;
config.jpeg_quality = 12;
config.fb_count = 1;

if (config.pixel_format == PIXFORMAT_JPEG) {
  if (psramFound()) {
    config.jpeg_quality = 10;
    config.fb_count = 2;
    config.grab_mode = CAMERA_GRAB_LATEST;
  }
  else {
    config.frame_size = FRAMESIZE_SVGA;
    config.fb_location = CAMERA_FB_IN_DRAM;
  }
} else {
// Best option for face detection/recognition
config.frame_size = FRAMESIZE_240X240;
#if CONFIG_IDF_TARGET_ESP32S3
config.fb_count = 2;
#endif
}

#if defined(CAMERA_MODEL_ESP_EYE)
pinMode(13, INPUT_PULLUP);
pinMode(14, INPUT_PULLUP);
#endif
esp_err_t err = esp_camera_init(&config);
if (err != ESP_OK) {
Serial.printf("Camera init failed with error 0x%x", err);
return;
}
sensor_t * s = esp_camera_sensor_get();
if (s->id.PID == OV3660_PID) {
s->set_vflip(s, 1); // flip it back
s->set_brightness(s, 1); // up the brightness just a bit
s->set_saturation(s, -2); // lower the saturation
}
if(config.pixel_format == PIXFORMAT_JPEG){
s->set_framesize(s, FRAMESIZE_QVGA);
}

#if defined(CAMERA_MODEL_M5STACK_WIDE) || defined(CAMERA_MODEL_M5STACK_ESP32CAM)
s->set_vflip(s, 1);
s->set_hmirror(s, 1);
#endif

#if defined(CAMERA_MODEL_ESP32S3_EYE)
s->set_vflip(s, 1);
#endif
#if defined(LED_GPIO_NUM)
setupLedFlash(LED_GPIO_NUM);
#endif

WiFi.begin(ssid, password);
WiFi.setSleep(false);

while (WiFi.status() != WL_CONNECTED) {
  delay(500);
  Serial.print(".");
}
Serial.println("");
Serial.println("WiFi connected");

startCameraServer();

Serial.print("Camera Ready! Use 'http://");


Serial.print(WiFi.localIP());
Serial.println("' to connect");
}

void loop() {
  delay(10000);
}
APPENDIX 2
OBJECT DETECTION AND IDENTIFICATION
import sys
sys.path.append(r'keras-yolo3')
from yolo3_one_file_to_detect_them_all import *
import cv2                     # used below for image reading and writing
import matplotlib.pyplot as plt

# Build the YOLOv3 network and load the pre-trained weights
model = make_yolov3_model()
weight_reader = WeightReader(r'yolov3.weights')
weight_reader.load_weights(model)
model.save('yolo_model_niranjan.h5')

from keras.models import load_model
yolo_model = load_model('yolo_model_niranjan.h5')
yolo_model.summary()

# Network input size, detection thresholds and YOLOv3 anchor boxes
net_h, net_w = 416, 416
obj_thresh, nms_thresh = 0.7, 0.6
anchors = [[116,90, 156,198, 373,326], [30,61, 62,45, 59,119], [10,13, 16,30, 33,23]]

# The 80 COCO class labels the pre-trained model can detect
labels = ["person", "bicycle", "car", "motorbike", "aeroplane", "bus", "train", "truck",
          "boat", "traffic light", "fire hydrant", "stop sign", "parking meter", "bench",
          "bird", "cat", "dog", "horse", "sheep", "cow", "elephant", "bear", "zebra", "giraffe",
          "backpack", "umbrella", "handbag", "tie", "suitcase", "frisbee", "skis", "snowboard",
          "sports ball", "kite", "baseball bat", "baseball glove", "skateboard", "surfboard",
          "tennis racket", "bottle", "wine glass", "cup", "fork", "knife", "spoon", "bowl", "banana",
          "apple", "sandwich", "orange", "broccoli", "carrot", "hot dog", "pizza", "donut", "cake",
          "chair", "sofa", "pottedplant", "bed", "diningtable", "toilet", "tvmonitor", "laptop", "mouse",
          "remote", "keyboard", "cell phone", "microwave", "oven", "toaster", "sink", "refrigerator",
          "book", "clock", "vase", "scissors", "teddy bear", "hair drier", "toothbrush"]

# Read a test image, preprocess it to the network input size and run the network
imag = cv2.imread(r"N:\proj\test img\istockphoto-1319467946-170667a.jpg")
imag_h, imag_w, _ = imag.shape
new_imag = preprocess_input(imag, net_h, net_w)
plt.imshow(imag)
yolo_pred = yolo_model.predict(new_imag)

# Decode the three output scales into bounding boxes and draw them
boxes = []
for i in range(len(yolo_pred)):
    boxes += decode_netout(yolo_pred[i][0], anchors[i], obj_thresh, nms_thresh, net_h, net_w)
draw_boxes(imag, boxes, labels, obj_thresh)
cv2.imwrite('detected_niranjan.jpg', imag.astype('uint8'))
