
A SMART VEHICLE COUNTING SYSTEM

USING IMAGE PROCESSING


BSEE

By

Syed Ali Raza Naqvi


BEE-163120
Sohail Khan
BEE-143011
Saqib Nawas
BEE-153117

A Project Report submitted to the


DEPARTMENT OF ELECTRICAL ENGINEERING
in partial fulfillment of the requirements for the degree of
BACHELOR OF SCIENCE IN ELECTRICAL ENGINEERING
Faculty of Engineering
Capital University of Science & Technology
Islamabad
July 2020
A SMART VEHICLE COUNTING SYSTEM
USING IMAGE PROCESSING

By

Syed Ali Raza Naqvi


BEE 163120
Sohail Khan
BEE 143011
Saqib Nawas
BEE 153117

A Project Report submitted to the


DEPARTMENT OF ELECTRICAL ENGINEERING
in partial fulfillment of the requirements for the degree of
BACHELOR OF SCIENCE IN ELECTRICAL ENGINEERING
Faculty of Engineering
Capital University of Science & Technology
Islamabad
July 2020
Copyright © 2020 by CUST Student

All rights reserved. Reproduction in whole or in part in any form requires the prior
written permission of Syed Ali Raza Naqvi, Sohail Khan and Saqib Nawas or
designated representative.

DECLARATION

It is declared that this is an original piece of our own work, except where otherwise
acknowledged in text and references. This work has not been submitted in any form for
another degree or diploma at any university or other institution for tertiary education
and shall not be submitted by us in future for obtaining any degree from this or any
other University or Institution.

Syed Ali Raza Naqvi


BEE 163120

Sohail Khan
BEE 143011

Saqib Nawas
BEE 153117

July 2020

CERTIFICATE OF APPROVAL

It is certified that the project titled “A Smart Vehicle Counting System Using Image
Processing”, carried out by Syed Ali Raza Naqvi, Reg. No. BEE-163120, Sohail Khan,
Reg. No. BEE-143011, and Saqib Nawas, Reg. No. BEE-153117, under the supervision
of Mr. Umer Maqbool at Capital University of Science & Technology, Islamabad, is
fully adequate, in scope and in quality, as a final year project for the degree of Bachelor
of Science in Electrical Engineering.

Supervisor: -------------------------
Mr. Umer Maqbool
Assistant Professor
Department of Electrical Engineering
Faculty of Engineering
Capital University of Science & Technology, Islamabad

HOD: ----------------------------
Dr. Noor Mohammad Khan
Professor
Department of Electrical Engineering
Faculty of Engineering
Capital University of Science & Technology, Islamabad

ACKNOWLEDGMENT

On the completion of this project, we would like to thank our supervisor, Mr. Umer
Maqbool, whose guidance helped us complete it successfully.

ABSTRACT

This project is the design and development of a system for counting vehicles on roads
using image processing. The system requires a high-speed processor so that the image
processing can be performed in a very short time. The project is designed in two parts.
In the first part, the goals are achieved by using Matlab to run the YOLO algorithm with
a pre-trained ResNet-based detector. The detector was trained on a data set of around
230 vehicle images, and the bounded frames are compared against this model. After the
comparison, the vehicles are categorized and detected while a counter is incremented
alongside. In the second part, the same targets are achieved in the Python programming
language, where the YOLO algorithm is implemented using OpenCV. Frames are
extracted from the input video and image processing is applied to them. A Raspberry Pi
with an Intel Movidius Compute Stick is used to make the project functional; with the
Compute Stick, the real-time image processing speed is increased and the system is able
to process the video and generate the output in real time. This system helps reduce
manpower, organize traffic, and limit the number of vehicles in a parking lot. The
project can be interfaced with traffic lights to organize traffic and can also help reduce
the time taken by emergency vehicles to reach their destination. In the end, we conclude
that by using image processing and the YOLO algorithm we can achieve up to 96
percent accuracy.

TABLE OF CONTENTS

CERTIFICATE OF APPROVAL ......................................................iv


ACKNOWLEDGMENT .....................................................................v
ABSTRACT ......................................................................................vi
LIST OF FIGURES ...........................................................................xi
LIST OF TABLES .......................................................................... xiii
Chapter 1 ............................................................................................1
INTRODUCTION ..............................................................................1
1.1 Overview ................................................................................................. 1
1.2 Project Idea .............................................................................................. 1
1.3 Purpose of the Project .............................................................................. 1
1.4 Project Specifications ............................................................................... 2
1.4.1 Non-Functional Specifications ................................................... 2
1.4.2 Functional Specifications ........................................................... 2
1.5 Applications of the Project ....................................................................... 3
1.6 Project Plan .............................................................................................. 4
1.7 Report Organization ................................................................................. 6
Chapter 2 ............................................................................................7
LITERATURE REVIEW ...................................................................7
2.1 Background Theory.................................................................................. 7
2.2 Related Technologies ............................................................................... 9
2.2.1 Piezoelectric Sensor for Vehicles Counting ............................... 9
2.2.2 Magnetic Sensor for Vehicle Counting .....................................10
2.2.3 Acoustic Detector for Vehicle Counting ...................................11
2.2.4 Passive Infrared IR sensor for Vehicle Counting .......................12
2.3 Related Projects ......................................................................................12
2.3.1 Smart Traffic Control System Using Image Processing .............12
2.3.2 Smart Security System by using Image Processing ...................13
2.3.3 An Image Processing based Object Counting System................14

2.4 Limitations and Bottlenecks of the Existing Work .................................15
2.4.1 Accuracy ..................................................................................15
2.4.2 Cost ..........................................................................................16
2.4.3 Time and Additional Hardware .................................................16
2.5 Problem Statement ..................................................................16
2.6 Summary ................................................................................17
Chapter 3 .......................................................................................... 18
PROJECT DESIGN AND IMPLEMENTATION ............................. 18
3.1 Proposed Design Methodology................................................................18
3.1.1 Raspberry Pi .............................................................................19
3.1.2 Camera V2 ...............................................................................19
3.1.3 Pi Screen interfaced ..................................................................19
3.2 Interfacing of Components ......................................................................19
3.2.1 Creating Bootable SD Card for Raspberry Pi ............................20
3.2.2 Installing Raspberry Pi Operating System .................................20
3.2.3 Interfacing Raspberry Pi Camera ..............................................21
3.2.4 Installing Onscreen Keyboard in Raspberry Pi ..........................24
3.2.5 Interfacing Touch Screen ..........................................................25
3.2.6 Interfacing Intel Movidius Compute Stick ................................27
3.3 Analysis Procedure .................................................................................27
3.3.1 IR Sensor for Vehicle Counting ................................................28
3.3.2 Arduino Micro-controller..........................................................28
3.4 Design of Project Software and Hardware ...............................................28
3.4.1 Design and Implementation on Matlab......................................29
aaaaaaaaa3.4.1.1 Input Video. ..............................................................30
aaaaaaaaa3.4.1.2 Extraction of Frames. ................................................30
aaaaaaaaa3.4.1.3 Conversion into Grid Frames. ....................................30
aaaaaaaaa3.4.1.4 Comparison of each frame with existing data. ............31
aaaaaaaaa3.4.1.5 Identification of Vehicles ...........................................31
aaaaaaaaa3.4.1.6 Results in the form of numerical values. ....................32
3.4.2 Design and Implementation on PyCharm Community Edition ..32
aaaaaaaaa3.4.2.1 Initialization of Libraries (OpenCv) ...........................33

aaaaaaaaa3.4.2.2 Loading Weights and Models ....................................34
aaaaaaaaa3.4.2.3 Loading Environment File .........................................34
aaaaaaaaa3.4.2.4 Frame Extraction .......................................................34
aaaaaaaaa3.4.2.5 Applying YOLO Algorithm .......................................35
aaaaaaaaa3.4.2.6 Condition...................................................................35
aaaaaaaaa3.4.2.7 CSRT and KCF Tracker ............................................35
3.5 Summary ...............................................................................................37
Chapter 4 .......................................................................................... 38
TOOLS AND TECHNIQUES .......................................................... 38
4.1 Hardware Tools used ..................................................................................38
4.1.1 Raspberry Pi 2 ..............................................................................38
4.1.2 Camera V2 for Input ....................................................................40
4.1.3 Raspberry Pi Screen .....................................................41
4.1.4 Battery .........................................................................42
4.1.5 SD Card .......................................................................................43
4.1.6 Intel Movidius Neural Compute Stick...........................................43
4.2 Software, simulation tool used ....................................................................45
4.2.1 Matlab ..........................................................................................45
4.2.2 PyCharm Community Edition ...................................................46
4.3 Chapter Summary .......................................................................................47
Chapter 5 .......................................................................................... 48
PROJECT RESULTS AND EVALUATION .................................... 48
5.1 Presentation of the findings ...................................................................48
5.1.1 Software Results on Matlab ......................................................50
5.1.2 Software Results on PyCharm Community Edition ...................51
5.2 Result Analysis ......................................................................................52
5.2.1 Results Analysis with Respect to Error .....................................52
5.2.2 Result Analysis with Iterations .................................................54
5.2.3 Results Analysis with Different Test Inputs ..............................56
5.3 Discussion on the Findings ....................................................................57
5.4 Limitations of the working prototype .....................................................58
5.5 Chapter Summary ..................................................................58
Chapter 6 .......................................................................................... 59

CONCLUSION AND FUTURE WORK .......................................... 59
References ........................................................................................ 60

LIST OF FIGURES
Figure 1.1: Project Timeline Part-I ........................................................................... 5
Figure 1.2: Project Time Line Part-II ........................................................................ 6
Figure 2.1: Face Detection [1] .................................................................................. 8
Figure 2.2: Currency Identification System [2] .......................................................... 9
Figure 2.3: Piezoelectric Sensor for Vehicle Counting [3] ....................................... 10
Figure 2.4: Magnetic Sensor for Vehicle Counting [4] ............................................ 11
Figure 2.5: Acoustic Detector for Vehicle Counting [5] ........................................... 11
Figure 2.6: IR Sensor for Vehicle Counting [6] ....................................................... 12
Figure 2.7: Smart Traffic Light System Using Image Processing. [7] ...................... 13
Figure 2.8: Smart Security System Using Image Processing [8] .............................. 14
Figure 2.9: An Image Processing based Object Counting. [9] ................................. 15
Figure 3.1: Project Block Diagram.......................................................................... 18
Figure 3.2: NOOBS Operating System for Raspberry Pi .......................................... 20
Figure 3.3: NOOBS Installation .............................................................................. 21
Figure 3.4: Configuration Settings........................................................................... 22
Figure 3.5: Enabling Camera .................................................................................. 22
Figure 3.6: Image Test Result .................................................................................. 23
Figure 3.7: Video Test Results ................................................................................. 24
Figure 3.8: On Screen Keyboard ............................................................................. 25
Figure 3.9: Raspberry Pi and Screen Connections ................................................... 25
Figure 3.10: Touch Screen Display.......................................................................... 26
Figure 3.11: Intel Movidius Compute Stick with Raspberry pi ................................. 27
Figure 3.12: Implementation Flow Chart on Matlab ................................................ 29
Figure 3.13: System GUI in Matlab ......................................................................... 32
Figure 3.14: Implementation Flow Chart on PyCharm ............................................ 33
Figure 3.15: CSRT Tracker Working ....................................................................... 36
Figure 3.16: System GUI in Python ......................................................................... 37
Figure 4.1: Raspberry Pi 2 [15] .............................................................................. 39

Figure 4.2: Camera V2 for Input [16] ..................................................................... 40
Figure 4.3: Raspberry Pi Display Screen [17] ......................................................... 41
Figure 4.4: Battery for Input Supply [18] ................................................................ 42
Figure 4.5: SD Card for internal storage [19] ......................................................... 43
Figure 4.6: Intel Movidius Neural Compute Stick [20] ............................................ 44
Figure 4.7: Matlab Software for code compilation................................................... 46
Figure 5.1: Boundary around vehicles ..................................................................... 50
Figure 5.2: Counting Results ................................................................................... 51
Figure 5.3: Final Output ......................................................................................... 52

LIST OF TABLES

Table 1.1: Non-Functional Specification.................................................................... 2


Table 1.2: Functional Specification ........................................................................... 3
Table 1.3: Project Plan-01 ........................................................................................ 4
Table 1.4: Project Plan-02 ........................................................................................ 5
Table 4.1: Specification of Raspberry Pi 2 ............................................................... 39
Table 4.2: Specification of Camera V2..................................................................... 40
Table 4.3: Pi Display Screen Specifications ............................................................. 41
Table 4.4: Battery Specification ............................................................................... 42
Table 4.5: SD Card Specification............................................................................. 43
Table 4.6: Specification of Intel Movidius Stick ....................................................... 44
Table 5.1: Matlab Results ........................................................................................ 48
Table 5.2: PyCharm Results .................................................................................... 49
Table 5.3: Error Analysis for Matlab ....................................................................... 53
Table 5.4: Error Analysis for PyCharm ................................................................... 54
Table 5.5: Iteration Error Analysis for Matlab ........................................................ 55
Table 5.6: Iteration Error Analysis for PyCharm ..................................................... 55
Table 5.7: Test Results for Matlab ........................................................................... 56
Table 5.8: Test Results on PyCharm ........................................................................ 57

Chapter 1

INTRODUCTION

This chapter presents the main idea of the proposed project, its relevance and need in
the modern world, and the basic principles and methodologies on which it is based.

1.1 Overview
This project is designed to count the number of vehicles in a video using image
processing. The basic idea is to count the vehicles and give the result in numeric form.
The system is intended to be deployed at the entrance of a parking area to count the
vehicles entering it.

1.2 Project Idea


The final prototype will take input in the form of a video, perform image processing,
and calculate the number of vehicles passing. The YOLO algorithm will be used for the
image processing. The project uses a stored pre-trained model of vehicles: it takes
frames from the input video, compares them with the model, detects the vehicles, and
increments the counter accordingly.

1.3 Purpose of the Project


The existing systems for vehicle counting are less efficient. For example, counting with
IR sensors fails when two vehicles cross the sensor at the same time, which leads to less
accurate results; this is why this project was selected, so that the efficiency can be
increased. The other main purpose of this project is to develop a more efficient system
that gives accurate information about the peak traffic times on a road. As the input video
is stored in the internal storage, the system also keeps a record and can provide useful
information about crash occurrences.

1.4 Project Specifications


The main specification of this project is that it takes a live video input, performs image
processing using the YOLO algorithm, and calculates the number of vehicles passing.

1.4.1 Non-Functional Specifications

This project has the following non-functional specifications.

Table 1.1: Non-Functional Specification

Sr. No Non-Functional Specification

1 Has a night vision camera for visibility in night or in foggy weather.

2 Will have internal storage to store input video and has a reset function

3 Product is reliable for minimum of 5 years dependent upon battery life.

1.4.2 Functional Specifications

The functional specifications of this project are given below.
Table 1.2: Functional Specification

Sr. No Functional Specification

1 It will count the numbers of vehicles from a video input.

2 It will display the number of vehicles on screen.

3 It will store the video in the internal storage.

1.5 Applications of the Project


This project has a vast scope in the near future; some of its applications are given below.

1.5.1 Smart Traffic Light System

This project can be used as a smart traffic light system: it counts the number of vehicles
on each side of the road and organizes the traffic accordingly. This helps reduce
congestion and traffic jams. It can also provide a safe and smartly organized path for
emergency vehicles such as ambulances, police vehicles and fire-fighter tankers. The
system checks the density of traffic and communicates with the signal controller in such
a way that the side with heavy density and emergency vehicles gets the highest priority.

1.5.2 Parking Management System

One of the major issues for parking plazas and buildings is that they have limited
parking space; if someone enters the area when it is full, they may block the way and
disturb the environment. To overcome this issue, this product can be deployed at the
entrance and exit of the parking area, so that it shows a warning when the space is full,
counts the vehicles leaving from the exit side, and displays the remaining parking
capacity.
1.5.3 Limiting Traffic Flow

This project can also be used to limit the flow of traffic through a barrier. For example,
in the parking management case discussed above, one can automatically limit the
number of vehicles entering a building or area.

1.6 Project Plan


The work of Part-I is divided into five main parts: the first four weeks are allocated to
the literature survey, the next two weeks are assigned to the selection of the image
processing algorithm, the following week is for testing the algorithm, three weeks are
assigned to implementing the algorithm in Matlab, and the last two weeks are for
documentation and report writing.

1.6.1 Project Milestone

Given below are the distribution of tasks, their durations, and the resource person details.

Table 1.3: Project Plan-01

Tasks                                Duration   Resource Person

Literature Review                    04 Weeks   Syed Ali Raza Naqvi, Sohail Khan, Saqib Nawas
Algorithm for Image Processing       02 Weeks   Syed Ali Raza Naqvi, Sohail Khan
Algorithm Testing                    01 Week    Syed Ali Raza Naqvi, Saqib Nawas, Sohail Khan
Implementation on Matlab             03 Weeks   Syed Ali Raza Naqvi, Sohail Khan
Documentation and Report Writing     02 Weeks   Saqib Nawas
Table 1.4: Project Plan-02

Tasks                                         Duration   Resource Person

Implementation of Algorithm on Raspberry Pi   04 Weeks   Syed Ali Raza Naqvi, Sohail Khan
Camera and Screen Integration                 02 Weeks   Syed Ali Raza Naqvi, Sohail Khan
Testing and Modification                      04 Weeks   Syed Ali Raza Naqvi, Saqib Nawas, Sohail Khan
Documentation and Report Writing              04 Weeks   Saqib Nawas, Sohail Khan

1.6.2 Project Timeline

The project timeline for Part-I is as follows.

Figure 1.1: Project Timeline Part-I

The project timeline for Part-II is as follows.
Figure 1.2: Project Time Line Part-II

1.7 Report Organization


Chapter 1 introduces the project. Chapter 2 presents the literature survey and review,
discusses some related technologies and projects, and covers different algorithms and
methods used to perform similar work. Chapter 3 discusses the design and
implementation of the project along with the software tools used. Chapter 4 covers the
tools and techniques used to carry out this project, including the hardware and software
design and methodology. Chapter 5 presents the results and their evaluation, the
limitations of the existing work, and future recommendations.

Chapter 2

LITERATURE REVIEW

This chapter is based on the research done during this project. All the relevant
techniques and schemes are discussed along with their pros and cons.

2.1 Background Theory


The motivation of this project is to get rid of the commonly occurring problems of
vehicle counting. The first question that comes to mind is why such a system is needed,
and what the use is of a prototype that counts vehicles and returns the result as a numeric
value. Vehicle counting provides information about traffic flow, vehicle crash
occurrences, and peak traffic times on roadways. Another main objective we can
achieve through it is to count and limit the number of vehicles entering a parking lot or
similar place. A suitable and more efficient way to achieve the goal of vehicle counting
is to use image processing with a camera that provides the video input. The
implementation of this technique has been performed using the Python programming
language. The methodology uses different libraries and a real-time image processing
algorithm: the system takes a video and performs image processing on it to count the
number of vehicles. This method involves the YOLO algorithm. YOLO (You Only
Look Once) is an algorithm in which frames are extracted from the input video and each
frame is divided into small portions as a grid. These portions are then compared with
the predefined data stored in the database. After this mechanism, the image is
categorized and the different objects in it are detected. Counting vehicles gives us
much-needed information for a basic understanding of the traffic flow in any region,
the peak hours of traffic, and the average number of vehicles passing a given area. The
purpose of this work is to develop an automatic vehicle counting system using image
processing: a camera is installed at a spot and passes the video to the Raspberry Pi,
which implements the image processing algorithm and, at the end, displays the total
number of vehicles passing through that particular spot in that time. The following are
some of the studies that made us confident to perform this project; these technologies
work on almost the same mechanism.

• Face Detection

In the modern era everything is moving towards automation, and one of the most
common techniques used in cell phones is the face detection lock. New phones
have this face detection function for security. The technology is based on a high
level of image processing: it takes the input from the camera, compares it with the
stored image, and decides whether to unlock the phone or not. This technology is
widely used in many cell phones.

Figure 2.1: Face Detection [1]

• Currency Identification System

Another major application of this technology is the currency identification system,
which is used to identify the currencies of different countries. The background of
this technology is also image processing: it processes the input image of the
currency, compares it with the stored one, and as a result shows the name of the
currency as well as the country it belongs to.

Figure 2.2: Currency Identification System [2]

Some of the technologies that are used to achieve a similar goal are given below.

• Piezoelectric Sensor for Vehicle Counting.

• Magnetic Sensor for Vehicle Counting.

• Acoustic Detector for Vehicle Counting.

• Passive Infrared (IR) Sensor for Vehicle Counting.

2.2 Related Technologies


Below are some of the technologies related to our project. These technologies serve the
same purpose as the proposed project but are less efficient.

2.2.1 Piezoelectric Sensor for Vehicles Counting

A piezoelectric sensor collects data by converting mechanical energy into electrical
energy. The sensor is placed in a groove cut into the road surface. When a vehicle passes
over the sensor it presses it, which produces a voltage signal, and when the vehicle
moves off the voltage reverses. This change of voltage can be used to detect and count
vehicles. The hurdle in the way of efficiency is that if two vehicles pass over the tracks
at once, the voltage signal levels are disturbed, so the sensor is less efficient. The other
disadvantage of piezoelectric sensors is that their efficiency decreases with increasing
pressure and temperature.

Figure 2.3: Piezoelectric Sensor for Vehicle Counting [3]

2.2.2 Magnetic Sensor for Vehicle Counting

A magnetic sensor detects a vehicle by measuring the change in the Earth's magnetic
field as the vehicle passes over it. The sensor may be buried in the road or enclosed in
a box on the roadside. If vehicles are very close to each other it becomes difficult to
discriminate between them, which is listed as one of the major limitations on accuracy.
Its cost is also high, and it disrupts the traffic during installation.
Figure 2.4: Magnetic Sensor for Vehicle Counting [4]

2.2.3 Acoustic Detector for Vehicle Counting

This detector detects a vehicle by the sound it creates. The sensor is mounted on a pole
and pointed toward the traffic, and it can be used for one or more traffic lanes. Due to
environmental factors and disturbances, the accuracy of the results is reduced and the
speed measurements are also less accurate.

Figure 2.5: Acoustic Detector for Vehicle Counting [5]
2.2.4 Passive Infrared IR sensor for Vehicle Counting

These devices detect a vehicle by measuring the infrared energy radiating from the
detection zone. When a vehicle passes, the radiated energy changes and the count is
incremented. The device is limited to one or at most two lanes. The main limitation is
that when two vehicles pass the sensor at the same time with the same speed, the
accuracy drops.

Figure 2.6: IR Sensor for Vehicle Counting [6]

2.3 Related Projects


Some projects related to vehicle counting systems have been carried out earlier. They
differ in their technologies and in their scope. These projects, along with their
technologies, are discussed below.

2.3.1 Smart Traffic Control System Using Image Processing

This project is a fast implementation in Matlab aimed at preventing heavy traffic
congestion, and the image processing technique is used for its implementation. First,
a video of a lane is captured: a web camera is placed at the road to monitor the traffic,
and a video is shot to determine the traffic density. According to the data processed in
Matlab, the controller sends data to the traffic LEDs and shows a particular time on the
traffic signal to manage the traffic. The discussed project is intended to organize the
traffic in such a way that emergency vehicles are passed with the highest priority.

The system is shown in the figure below.

Figure 2.7: Smart Traffic Light System Using Image Processing [7]

This project reduces the signal time of a side by checking for emergency vehicles such
as police vehicles, ambulances and fire brigade trucks, and immediately turns that side
of the signal green while turning the other signals red. The main idea of this project is
to detect the emergency vehicles in the traffic and let them pass.

2.3.2 Smart Security System by using Image Processing

This project is very suitable for monitoring confidential areas. The idea behind the
system is that many existing security systems rely on CCTV, which records videos that
take a lot of memory and are only used after an incident as evidence; in this project the
system captures the video and takes action to prevent the attack. The project is designed
for the protection of such areas. It helps to monitor and check any suspicious activity in
the zone of a bank. It reduces manpower and is more reliable and robust in detecting
threats. It can also be used at the entrance of banks to match the face of each person
against a criminal database and generate warnings.

Figure 2.8: Smart Security System Using Image Processing [8]

2.3.3 An Image Processing based Object Counting System

Machine vision applications are low cost and high precision measurement systems
which are frequently used in production lines. The production facilities are able to
reach high production numbers without errors. Machine vision operations such as
product counting, error control, dimension measurement can be performed through a
camera. This approach performs automatic counting independently of product type
and color. One camera is used in the system. Through the camera, an image of the
products passing through a conveyor is taken and various image processing algorithms
are applied to these images.

Figure 2.9: An Image Processing based Object Counting System [9]

2.4 Limitations and Bottlenecks of the Existing Work

Certain related projects were carried out earlier, but they all have some limitations,
which are discussed here.

2.4.1 Accuracy

The major thing a customer would love to invest in, and which helps to solve problems,
is the accuracy of the project. All the projects discussed above are easy to implement,
but none of them has achieved accuracy above 95%. They sometimes require human
assistance to carry out some of their tasks, mainly because of environmental conditions
and technical failures. In the case of the IR-sensor-based counter, the main reason for
its lower accuracy is that two vehicles may pass the sensor at the same time, or a long
vehicle may pass through; in that case two or more vehicles may pass on the other side
of that long vehicle and the sensor will not count them. In the case of the acoustic
sensor, the main reasons for lower accuracy are environmental factors such as noise,
rain and other similar effects, which reduce the efficiency. Another reason for low
accuracy is that these systems do not account for traffic jams, and they are less robust
too.

2.4.2 Cost

All the existing products have a high cost, which includes the cost of the components,
serviceability, installation and monitoring. The components used in these products are
expensive, and they also need to be serviced after specific periods of time. During
installation they require a lot of construction work, such as burying sensors in the road,
installing poles with proper wiring, and constructing monitoring stations.

2.4.3 Time and Additional Hardware

For the existing works, the installation time extends to one or two weeks, as it requires
digging up the road and installing sensors in it. The proposed project, in contrast, can
be deployed in at most two days; it only needs a roadside pole where the camera can be
attached, and there is no need to construct a monitoring station. The existing products
also need additional hardware such as sensors, a PC and regulated power supplies in
the case of IR-sensor-based vehicle counting, and similarly, bars of sensors are used in
magnetic-sensor-based vehicle counting.

2.5 Problem Statement


A smart vehicle counting system is to be used to count the number of vehicles with an
accuracy of more than 95%, while having a lower design and construction cost and
time. It is meant for checking the traffic flow on roads and identifying the peak traffic
hours. The project can also be used to count the number of vehicles entering a parking
lot and to check the remaining parking space. The project has a vast scope: if it is
interfaced with traffic signals, it can also be used to control and organize the traffic
flow on each side of the road.

2.6 Summary
In this chapter a detailed literature review has been presented. The technologies related
to this project, and the projects closest to ours, have been discussed. The chapter also
covered the research behind this project and the method that will be used in the future
work to make the project successful.

Chapter 3

PROJECT DESIGN AND IMPLEMENTATION

This project mainly consists of two parts: the first part is the software design of a
vehicle counting system, while the second part consists of the hardware and software
used to implement the project. Both parts are discussed in this chapter along with their
implementation procedures.

3.1 Proposed Design Methodology


The detailed block diagram of the proposed project is as follows.

Figure 3.1: Project Block Diagram

In this block diagram all the stages of the vehicle counting system are shown. The
project is mainly divided into four main steps: taking the input in the form of a video,
performing image processing to detect vehicles, counting the number of vehicles, and
then showing the results.

The project uses a camera to take the video and pass it to the Raspberry Pi. The Pi is
coded in such a way that it performs the YOLO algorithm of image processing and
counts the number of vehicles by comparing each grid cell of the extracted frame with
the stored data set for detection. After recognizing and detecting the vehicles it counts
them and, at the end, displays the results.

3.1.1 Raspberry Pi

The video taken by the camera is received by the Raspberry Pi 2 Model B, which is
coded in such a way that it performs the YOLO algorithm of image processing and
counts the number of vehicles. After the calculations it displays the counted value on
the screen.

3.1.2 Camera V2

A camera is used to take the video. The camera used in this project is a night vision
camera, which keeps visibility even in foggy weather and thus improves the video
quality. The camera used is of 8 megapixels.

3.1.3 Pi Screen interfaced

A screen is interfaced with the Pi in such a way that it shows the total number of
vehicles counted.

3.2 Interfacing of Components


Different components are used to perform this project, and in order to get the desired
prototype we have interfaced these components with each other. The step-by-step
interfacing process is given below.
3.2.1 Creating Bootable SD Card for Raspberry Pi

To create a bootable SD card, NOOBS (New Out Of Box Software) is first downloaded
from the official Raspberry Pi website. After downloading the software, the first step
of the installation is to format the SD card completely using SD Card Formatter, after
which the downloaded software is extracted to the SD card and the operating system is
ready to install.

Figure 3.2: NOOBS Operating System for Raspberry Pi

3.2.2 Installing Raspberry Pi Operating System

After the first step, the SD card is inserted into the Raspberry Pi, which is connected to
a mouse, a keyboard and a display screen through HDMI. When power is supplied to
the Raspberry Pi, the following window appears.
Figure 3.3: NOOBS Installation

After selecting Raspbian and clicking the install tab in the top-left corner, the
installation begins. When the installation is completed by following the instructions,
the operating system becomes functional and the Raspberry Pi is ready to use.

3.2.3 Interfacing Raspberry Pi Camera

Once the operating system is installed on the Raspberry Pi, the next step is interfacing
the camera. In order to interface the camera we have to open the Raspberry Pi
configuration settings using the command “sudo raspi-config”. When the configuration
settings window appears, select the camera option as shown below.
Figure 3.4: Configuration Settings

When the camera option is selected, the following window appears, and from there the
camera is enabled.

Figure 3.5: Enabling Camera

After enabling the camera, the system needs to reboot; after the reboot the camera is
functional.

The commands used for capturing a picture and a video are given below, respectively.

raspistill -o nameofimage.jpg

The test result for capturing an image is shown below.

Figure 3.6: Image Test Result

raspivid -o nameofvideo.h264 -t duration_in_milliseconds

The test results for capturing a video are as follows.
Figure 3.7: Video Test Results
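
For completeness, the same capture steps can also be scripted from Python with the
picamera library available on Raspbian. The snippet below is only an illustrative sketch;
the file names, resolution and recording duration are assumptions, not values taken from
the project.

# Illustrative sketch: capturing a still image and a short video clip from Python
# with the picamera library (file names and timings are assumptions).
from time import sleep
from picamera import PiCamera

camera = PiCamera()
camera.resolution = (640, 360)               # resolution chosen for illustration only

camera.capture('test_image.jpg')             # equivalent of the raspistill command above

camera.start_recording('test_video.h264')    # equivalent of the raspivid command above
sleep(10)                                    # record for 10 seconds
camera.stop_recording()
camera.close()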

3.2.4 Installing Onscreen Keyboard in Raspberry Pi

For the installation of the on-screen keyboard on the Raspberry Pi, the following
commands are used in the terminal window.

sudo apt-get update

sudo apt-get install matchbox-keyboard

After this, reboot the Raspberry Pi and download the keyboard.sh file. Running this
file brings up the keyboard on the screen, as shown below.
Figure 3.8: On Screen Keyboard

3.2.5 Interfacing Touch Screen

After getting the keyboard and camera running, the next step is interfacing the touch
screen. Before installing the files for switching from HDMI to the touch screen, we
have to connect the display to the Raspberry Pi as shown below.

Figure 3.9: Raspberry Pi and Screen Connections

After this, the following commands are used in the terminal window to automatically
download the required files.

sudo rm -rf LCD-show


git clone https://github.com/goodtft/LCD-show.git
chmod -R 755 LCD-show
cd LCD-show/
sudo ./LCD35-show

After the download is complete, the operating system asks for confirmation to install
the files. Once the files are installed, the Raspberry Pi automatically reboots and the
display is shifted from HDMI to the touch screen, as shown below.

Figure 3.10: Touch Screen Display

3.2.6 Interfacing Intel Movidius Compute Stick

Interfacing the Intel Movidius Compute Stick is simple: it starts working once it is
connected to the Raspberry Pi through a USB port.

Figure 3.11: Intel Movidius Compute Stick with Raspberry pi

3.3 Analysis Procedure


This project can be done using different techniques and micro-controllers. It could be
done using IR sensors, but the problem with an IR sensor is accuracy: if two vehicles
pass the sensor at the same time, the counter counts both as one and the results become
less reliable. This is why we have used image processing to count the number of
vehicles. The major reason for using the Raspberry Pi is its fast processing, which
allows fast calculations, while other controllers such as the Arduino do not have enough
speed and process slowly.
3.3.1 IR Sensor for Vehicle Counting

An IR sensor is a motion-based infrared sensor that detects objects by sensing infrared
wavelengths of light. This project is also achievable using an IR sensor, and the sensor
used for vehicle detection would be a passive infrared sensor. However, the required
accuracy cannot be achieved and its life span is short, so a quality product cannot be
obtained this way; we have therefore used image processing to perform this task and
achieve about 96 percent accuracy.

3.3.2 Arduino Micro-controller

An Arduino is a small micro-controller board that is used to control devices. It has an
ATmega chip on it and serves as a small system with different pins to control devices.
Its flaw for our project is that we need high speed, as we are taking frames from videos
within seconds and performing image processing, but the Arduino is slower compared
to the Raspberry Pi, which is why we have used the Pi to complete the project.

Two major questions arise here: why image processing is used instead of IR sensors,
and why a Raspberry Pi is used when an Arduino can perform similar tasks. There are
some major reasons for choosing image processing over the other technologies. The
main reason is that a video source provides overall information about the traffic and
vehicles, and at the same time it is much cheaper and has low maintenance and
serviceability costs. As mentioned above, although the Arduino is a micro-controller,
it has speed issues for image processing, since the algorithm requires fast processors to
do the work in seconds, and the Arduino is not capable to that extent.

3.4 Design of Project Software and Hardware


The project's main design lies in the software, and it is divided into two main software
designs. The hardware design is also discussed.
3.4.1 Design and Implementation on Matlab

The block diagram of the implementation process is as follows.

Figure 3.12: Implementation Flow Chart on Matlab

First a video has been selected from which the vehicles are to be counted, and after
analyzing the resolution and other parameters of the video the coding was started. The
first and foremost thing while starting the code is the pre-trained car detection model,
which was downloaded from the official MATLAB page [13]. After that, predefined
MATLAB functions were used and the coding was completed.

The project is implemented in the stages given in the implementation block diagram.
Each of the activities is explained below.
3.4.1.1 Input Video.

First an input video is selected on which the image processing is done. After this, the
resolution parameters and frames per second of the input video are calculated and
measured. The parameters of the selected video are:

Frame width = 640

Frame height = 360

Data Rate = 366Kbps

Total Bitrate = 461Kbps

Frame Rate = 25.00 frames/second

3.4.1.2 Extraction of Frames.

After checking the parameters of the input video, the next task is the extraction of
frames from the video in order to perform image processing. This is done using a
predefined Matlab function, as mentioned below.

obj.reader = vision.VideoFileReader('y2mate.com - m6_motorway_traffic_PNCJQkvALVc_360p.mp4');

This code is used for extracting frames from the video; these frames are then passed to
the next processing block.

3.4.1.3 Conversion into Grid Frames.

After the extraction of frames, each frame is divided into small grid cells; this grid
division is performed internally by the YOLO detector. The code below sets up the
video player that is used to display the processed frames.

obj.videoPlayer = vision.VideoPlayer('Position', [20, 20, 1000, 600]);

The gridded frames are then passed to the next processing block.

3.4.1.4 Comparison of each frame with existing data.

A pre-trained YOLO v2 model for vehicle detection is downloaded using the code
below. The model was trained on around 230 images of vehicles. After the frames are
converted into small grid cells, each cell is compared with the model. The code for
downloading the model is given below.

if ~doTraining && ~exist('yolov2ResNet50VehicleExample_19b.mat','file')
    disp('Downloading pretrained detector (98 MB)...');
    pretrainedURL = 'https://www.mathworks.com/supportfiles/vision/data/yolov2ResNet50VehicleExample_19b.mat';
    websave('yolov2ResNet50VehicleExample_19b.mat', pretrainedURL);
end

The backbone model used here is ResNet. ResNet stands for Residual Network, which
is considered a backbone for many computer vision tasks, as it allows extremely deep
neural networks of 150+ layers to be trained successfully.

3.4.1.5 Identification of Vehicles

After applying a Kalman filter, the code compares the grid cells with the model and the
vehicles are detected. Alongside this comparison, a counter is incremented according
to the comparison results.
3.4.1.6 Results in the form of numerical values.

After the counter is incremented in the above process, the results are generated in the
form of numeric values, which are displayed on the screen.

After the implementation of the project, testing was done using different videos. The
output GUI is shown below.

Figure 3.13: System GUI in Matlab

3.4.2 Design and Implementation on PyCharm Community Edition

Due to the COVID-19 situation all around the country, we were unable to purchase the
most important component of this project, the Intel Movidius Compute Stick, which
actually serves as the brain for the implementation of real-time image processing. To
compensate for this deficiency we have used PyCharm Community Edition for the
completion of our project. The block diagram of the implementation of the project in
PyCharm Community Edition is as follows.
Figure 3.14: Implementation Flow Chart on PyCharm

As in the Matlab implementation, the first step is the selection of the video, after which
the parameters of the video are analyzed and the coding begins. The implementation
steps given in the figure above are elaborated as follows.

3.4.2.1 Initialization of Libraries (OpenCv)

OpenCV is an open-source computer vision library. This library has functions mainly
aimed at real-time computer vision [10] and includes more than 2500 algorithms,
among them the YOLO algorithm. The initialization of the OpenCV library is done by
downloading its package; after extracting the downloaded package, it only requires
copying the cv2.pyd file to the site-packages folder in the Python installation folder.
OpenCV uses a blob as the data structure for object detection. Blob (Binary Large
Object) functionality in OpenCV is also used for the detection of connected regions in
binary-converted images [11] and to extract image features such as color, area and
mean; it represents a group of pixels having similar values, collected as a single entity.
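
As an illustration only (the file name and input size are assumptions, not the project's
exact code), the snippet below shows how a frame is converted into a blob with
OpenCV's DNN module before being fed to the network.

import cv2

frame = cv2.imread('sample_frame.jpg')          # hypothetical test frame
# YOLO-style networks usually take a square input such as 416x416 scaled to [0, 1]
blob = cv2.dnn.blobFromImage(frame, scalefactor=1/255.0, size=(416, 416),
                             swapRB=True, crop=False)
print(blob.shape)                               # (1, 3, 416, 416): batch, channels, height, width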

3.4.2.2 Loading Weights and Models

After the libraries are loaded, the next step is loading the weights and models so that
detected objects can be compared with the stored models. Weights are the parameters
of a neural network that transform the input data within the hidden layers [12]; they
determine the importance of the input data and categorize it. These weights and models
are used for predicting whether or not an object is present in an anchor box.
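
The sketch below, with placeholder file names rather than the project's actual files,
shows how such weight and model files could be loaded through OpenCV's DNN
module.

import cv2

# Placeholder file names; the configuration/weight files used in the project may differ.
net = cv2.dnn.readNetFromDarknet('yolov3.cfg', 'yolov3.weights')
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_OPENCV)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_CPU)
# On the Intel Movidius stick, the Inference Engine backend with DNN_TARGET_MYRIAD
# would be selected instead.

# Class labels tell us which of the detected classes correspond to vehicles.
with open('coco.names') as f:
    classes = [line.strip() for line in f]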

3.4.2.3 Loading Environment File

The next step is loading the environment file. This file consists of all the data about the
input video and its output result: it defines whether the input is a stored video or a live
stream, and it also contains the video parameters. The file is used to modify the output
results, for example whether the output video will have a mask and where the threshold
(counting) line is to be drawn. The threshold line is drawn using coordinates obtained
from an online website (imagemap.net); the coordinates of the line are defined in
dictionaries containing tuples.
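
A hypothetical sketch of the kind of settings such an environment file may hold is given
below; the keys, values and layout are assumptions, and the actual file used in the
project may be formatted differently.

# Assumed structure only: a dictionary of input/output settings for the counter.
ENV = {
    'video_source': 'traffic_clip.mp4',       # path of a stored video, or 0 for a live camera
    'is_live': False,
    'frame_size': (640, 360),
    'use_mask': False,
    # counting (threshold) line as two (x, y) pixel coordinates, e.g. read off imagemap.net
    'counting_line': ((0, 250), (640, 250)),
}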

3.4.2.4 Frame Extraction

In the next step the extraction of frames is done. These frames are used for the image
processing; the frame extraction rate depends upon the speed of the processing device.
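
A minimal sketch of this step is shown below; the file name and the process_frame
helper are assumptions for illustration, not the project's exact code.

import cv2

cap = cv2.VideoCapture('traffic_clip.mp4')   # placeholder video file
while True:
    grabbed, frame = cap.read()
    if not grabbed:                          # end of the video or a camera error
        break
    process_frame(frame)                     # hypothetical hook for the YOLO step described next
cap.release()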
3.4.2.5 Applying YOLO Algorithm

After the extraction of the frames, each frame is passed to the algorithm, which
performs image processing and divides the image into small anchor boxes. These
anchor boxes are then processed and predictions are made by comparison with the
stored models. When the predictions are made, the next step is the evaluation of the
predicted values, since the detection of an object depends directly on them. If a value
is less than 0.5, the detection is ignored; if it is above this threshold, Non-Max
Suppression (NMS) is applied in order to obtain a single box around the detected
object.
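
The function below is an illustrative sketch of this step: it runs the loaded network on
one frame, discards predictions below the 0.5 confidence threshold mentioned above,
and applies NMS. Names and the NMS threshold value are assumptions, not the
project's exact code.

import cv2
import numpy as np

def detect_vehicles(net, frame, conf_threshold=0.5, nms_threshold=0.4):
    h, w = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(frame, 1/255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    outputs = net.forward(net.getUnconnectedOutLayersNames())

    boxes, confidences = [], []
    for output in outputs:
        for det in output:                   # det = [cx, cy, bw, bh, objectness, class scores...]
            scores = det[5:]
            confidence = float(scores[np.argmax(scores)])
            if confidence > conf_threshold:  # ignore detections below the threshold
                cx, cy, bw, bh = det[0] * w, det[1] * h, det[2] * w, det[3] * h
                boxes.append([int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)])
                confidences.append(confidence)

    # Non-max suppression keeps a single box per detected object.
    keep = cv2.dnn.NMSBoxes(boxes, confidences, conf_threshold, nms_threshold)
    return [boxes[i] for i in np.array(keep).flatten()]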

3.4.2.6 Condition

After the application of the YOLO algorithm, a condition is applied that checks whether
a vehicle has been detected or not. If a vehicle is detected, the program moves to the
next phase; if there is no vehicle in the given frame, it takes a new frame and repeats
the procedure of the steps given above.

3.4.2.7 CSRT and KCF Tracker

When a vehicle is detected, CSRT (Channel and Spatial Reliability Tracking) and KCF
(Kernelized Correlation Filter) trackers are applied in order to track the object in the
upcoming frames. The CSRT tracker works by using correlation filters to track the
object, searching the area around its last known position. Some of the features of the
CSRT tracker are the following [13].

1. It is robust to unpredictable motion of objects.

2. It has manually adjustable parameters.

3. It can be trained on a single image patch.

4. It can tolerate intermittent frame drops.

The flow chart of the working of the CSRT tracker is given below.
Figure 3.15: CSRT Tracker Working

The KCF tracker works by training the filter with patches containing the object as well
as nearby patches that do not contain the object. Some of the advantages of using the
KCF tracker are given below [14].

1. It is 1.5 times faster than CSRT and 10 times faster than TLD.

2. It is also trained on a single image patch.

3. It supports custom feature extraction.

4. It also has manually adjustable parameters.

After applying the tracker, the tracking of the vehicle starts. When the vehicle crosses
the threshold line, the counter is incremented accordingly. After this procedure is
completed, another condition is checked: whether the video has ended or not. If there
are more frames to process, the program goes back to the frame extraction block and
the same procedure is repeated, but if the video has ended, the program terminates.
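
The sketch below (assumed names; not the project's exact code) shows how a CSRT or
KCF tracker could be initialized on a detected box and how the counter could be
incremented when the tracked vehicle's centre crosses a horizontal counting line. In the
actual system this per-vehicle logic is repeated for every detection and the accumulated
count is shown on the GUI.

import cv2

def make_tracker(kind='CSRT'):
    # Depending on the OpenCV build, these constructors may live under cv2.legacy instead.
    if kind == 'KCF':
        return cv2.TrackerKCF_create()
    return cv2.TrackerCSRT_create()

def track_and_count(cap, first_frame, box, line_y):
    tracker = make_tracker('CSRT')
    tracker.init(first_frame, tuple(box))    # box = (x, y, w, h) from the detector
    count = 0
    prev_cy = box[1] + box[3] / 2            # vertical centre of the detected box
    while True:
        grabbed, frame = cap.read()
        if not grabbed:
            break
        ok, (x, y, w, h) = tracker.update(frame)
        if not ok:                           # tracker lost the vehicle
            break
        cy = y + h / 2
        if prev_cy < line_y <= cy:           # centre crossed the counting line going down
            count += 1
        prev_cy = cy
    return count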

After the implementation of the project in PyCharm, testing was performed using
different test samples.

Figure 3.16: System GUI in Python

3.5 Summary
This chapter covered the software and hardware design of the project and its work flow.
It also discussed the details of the software used to implement the project.
Chapter 4

TOOLS AND TECHNIQUES

In this chapter, all of the tools and techniques applied to achieve a smart vehicle
counting system are discussed, including the hardware tools used during this project
and the software tools that helped in making it.

4.1 Hardware Tools used


The following components are used in making the Smart Vehicle Counting System:

1. Raspberry Pi 2

2. Camera V2

3. Pi Screen

4. Battery

5. SD Card

6. Intel Movidius Neural Compute Stick

The specification tables, along with the required information for each of the
components used, are given below.

4.1.1 Raspberry Pi 2

A Raspberry Pi is a small board with chips mounted on it, sometimes also called a mini
computer, that is used as a micro-controller. It has a size of 85.6 mm x 56.5 mm and a
Broadcom quad-core processor, along with 4 USB ports and a 5 V operating voltage.
It is the most common as well as a fast controller that can be used for multitasking. Its
1 GB of RAM helps it perform image processing quickly and generate results in very
little time, which is why the Pi is used.
Figure 4.1: Raspberry Pi 2 [15]

The specification table of Pi is as follows

Table 4.1: Specification of Raspberry Pi 2

Max Current: 600 mA
Max Volt: 5 V
RAM: 1 GB
Processor: Broadcom quad-core Cortex-A7
Purpose: Used as a micro-controller and as a mini computer
USB Ports: 4
Network: 10/100 Mbit/s Ethernet
4.1.2 Camera V2 for Input

A camera is the primary component of this project, as it takes the input in the form of
a video. A Raspberry Pi camera, also known as Camera V2, is used. It has 8 megapixels,
records 1080p video at 30 frames per second, and is also capable of taking a still image
of 3280 x 2464 pixels. This camera is selected to achieve a good video quality and more
accurate results.

Figure 4.2: Camera V2 for Input [16]

The specification table of Camera V2 is as follows.

Table 4.2: Specification of Camera V2

Weight: 3 g
Pixels: 8 MP
Video modes: 1080p30
Optical size: 1/4"
Frames per second: 4

4.1.3 Raspberry Pi Screen

An LCD screen is interfaced with the Raspberry Pi. The screen shows the results in the form of numeric values. The Raspberry Pi supports screens from small sizes up to larger scales such as a desktop monitor. As the project has to be a compact device, a screen of 3.5 inches is selected.

Figure 4.3: Raspberry Pi Display Screen [17]

The table of the specification of the Pi Display Screen is as follows.

Table 4.3: Pi Display Screen Specifications

Display: 800 x 480
Color: 24 bits
Industrial Quality (viewing angle): 140 degrees horizontal, 130 degrees vertical
Backlight life: 20,000 hours
Average Brightness: 25 cd/m2
Contrast Ratio: 500

4.1.4 Battery

A battery is used to power the Raspberry Pi so that it can start working. It is a rechargeable battery. The battery used is shown below.

Figure 4.4: Battery for Input Supply [18]

The battery specification table is given below.

Table 4.4: Battery Specification

Input Voltage: 5 V
Input Current: 2 A
Capacity: 4.5 Ah
Maximum Discharge Current: 45 A (5 sec)
Life Time: 5 years (260 cycles)

4.1.5 SD Card

A Class 10 SD card is used as storage for the Raspberry Pi, to boot the software and to install NOOBS. The card is also used to store the input video from the camera. The card used is shown below.

Figure 4.5: SD Card for internal storage [19]

The specification table of SD Card is given below.

Table 4.5: SD Card Specification

Storage: 32 GB
Class: 10
Writing speed: 10 MB/s
Memory Location: 32 bit

4.1.6 Intel Movidius Neural Compute Stick

As the Raspberry Pi has a low processing speed, real-time image processing cannot be performed on it alone; for this purpose a device known as the Intel Movidius stick is used. This device is used to enhance the processing speed, as it is specifically designed to perform computer vision programs. It consists of a high-speed vision processing unit for deep learning in machine vision.
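
As an illustration of how processing can be offloaded to the stick, the sketch below uses OpenCV's dnn module with the Inference Engine (OpenVINO) backend and the Myriad target. It assumes an OpenCV build with OpenVINO support; the model file names and the test image are hypothetical, and in practice the network may first need conversion to OpenVINO's IR format.

    import cv2

    # Load a detection network (file names are placeholders) and send it to the stick.
    net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")
    net.setPreferableBackend(cv2.dnn.DNN_BACKEND_INFERENCE_ENGINE)
    net.setPreferableTarget(cv2.dnn.DNN_TARGET_MYRIAD)    # run on the Movidius stick

    frame = cv2.imread("road.jpg")                        # hypothetical test image
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416),
                                 swapRB=True, crop=False)
    net.setInput(blob)
    outputs = net.forward(net.getUnconnectedOutLayersNames())
    print("YOLO output layers:", len(outputs))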

Figure 4.6: Intel Movidius Neural Compute Stick [20]

The specification table of Intel Movidius Neural Compute Stick is as follows.

Table 4.6: Specification of Intel Movidius Stick

Processor Included: 2 Vision Processing Unit
Processor Frequency: 933 MHz
Operating System: Windows 10 64-bit, Ubuntu 16.04, CentOS 7.4
Speed in Image Processing: 2-3 times that of the Raspberry Pi
Dimensions: 72.5 mm x 27 mm x 14 mm

4.2 Software and Simulation Tools Used
The following software tools were used in this project:

 Matlab

 PyCharm Community Edition

4.2.1 Matlab

Matlab is a desktop application that is used for multiple purposes such as simulation, writing code, and much more. It has many built-in functions to perform tasks. The main advantage of Matlab is that code can be debugged and tested at run time. Matlab has the following key features:

 Implementation and testing of code is easy.

 Debugging of code is easy.

 Has built-in algorithms and functions.

 Has pretrained models for image processing.

 Data from external sources can easily be accessed.

To write code, you just have to click on the plus sign at the top left corner of the window, open a new script and start coding. Built-in functions can be accessed simply by calling them. After completing the code, you can run it by pressing the play button on the actions bar at the top.

Figure 4.7: Matlab Software for code compilation

4.2.2 PyCharm Community Edition

PyCharm is a software application used for coding in Python. It is easily available from the internet in two versions: the Community Edition and the Professional Edition. For the coding of this project the Community Edition was workable and hence used. It is a very user-friendly application in which code can be written very easily. This version is Apache 2 licensed, which means that it is free and open source; it can be used wherever the user wants and can also be easily modified.

Here are some of the advantages of using PyCharm Community Edition.

 Implementation and testing of code is easy.

 A large number of productive shortcuts.

 Ability to view the entire Python source.

 Availability of an array of plugins.

 Good community support.

 Fast code development.

 A more powerful commercial version is also available.

4.3 Chapter Summary
In the first section of this chapter, all the hardware tools used during this project were discussed, along with the details and specifications of these components. In the second section, all the software tools that were helpful in this project were discussed in detail.

Chapter 5

PROJECT RESULTS AND EVALUATION

In this chapter, all the results of the project are discussed and an evaluation on the basis of these results is also carried out.

5.1 Presentation of the findings


After the end of the first part of the project, a demo was prepared using a video as the input. After giving the input, the code performed the image processing and the results are shown as follows.

Table 5.1: Matlab Results

Time    Number of Vehicles in Video    Number of Vehicles Counted

5secs 7 7

10secs 10 10

20secs 20 18

50secs 40 37

1min,10secs 63 40

1min,40secs 89 84

2mins,30secs 124 119

3mins,30secs 172 164

4mins 201 191

5mins 254 247

After completing the implementation of the project, the final deliverable is a system which takes a video as input and, after applying the YOLO algorithm of image processing, gives the number of vehicles passing through the threshold point. The results are shown below.

Table 5.2: PyCharm Results

Time Number of Vehicles in video Number of Vehicles Counted

1min 6 5

3mins 8 6

4mins 10 7

5mins 13 10

6mins 14 11

7mins 16 13

8mins 17 14

9mins 19 15

10mins 20 16

11mins 24 20

12mins 24 20

13mins 27 23

14mins 30 25

15mins 32 27

16mins 34 29

17mins 35 30

18mins 37 32

19mins 39 34

20mins 40 35

After this test it is concluded that in 20 minutes of run time a total of 40 vehicles passed through the threshold line, while 35 vehicles were counted.

From this data it is concluded that the counting accuracy of the system is nearly 87.5 percent (35 out of 40).

5.1.1 Software Results on Matlab

As this part of the project is purely software based, there are no hardware results. In this part of the Vehicle Counting System, the very first step is loading the pretrained model for vehicle detection. After that, the frames are extracted from the input video and the boundaries around the vehicles are drawn using the built-in functions of Matlab. The results of the boundaries around the vehicles are shown below.

Figure 5.1: Boundary around vehicles

After detection and tracking of the vehicles, the last part is counting and displaying the total number of vehicles, which is done by declaring a variable that is incremented after every detection of a new vehicle. The counting results are given below.

Figure 5.2: Counting Results

5.1.2 Software Results on PyCharm Community Edition

After implementing the whole code, performing several tests and setting the parameters of the threshold crossing lines, we achieved our goal of making a reliable system that can perform image processing in real time and display the number of passing vehicles at the output. The processing speed of the project on a desktop system is very slow compared to an implementation on the Raspberry Pi using the Intel Movidius Compute Stick. Despite this fact, the project was working with almost 95 to 97 percent accuracy. The final results are shown below.

Figure 5.3: Final Output

5.2 Result Analysis


In this section the results are analyzed on the basis of different parameters, as below.

5.2.1 Results Analysis with Respect to Error

 Matlab Results Analysis

The table given below consists of the data obtained by performing the test with respect to time; the percentage error is estimated as below.

Table 5.3: Error Analysis for Matlab

Time % Error

5secs 0%

10secs 0%

20secs 10%

50secs 7.5%

1min,10secs 21.6%

1min,40secs 5.6%

2mins,30secs 4.03%

3mins,30secs 4.6%

4mins 4.9%

5mins 2.7%

From the analysis it is concluded that the error varies with time, with a minimum error of 2.7% obtained by the time the test input completed.

 PyCharm Result Analysis

The table given below consists of the data collected while testing the input on PyCharm Community Edition.

Table 5.4: Error Analysis for PyCharm

Time % Error

1min 16%

4mins 30%

6mins 21%

10mins 20%

12mins 16.6%

13mins 14.8%

15mins 15.6%

16mins 14.7%

18mins 13.5%

20mins 12.5%

From the above data the error is given with respect to time, and it is observed that after 20 minutes of video the error is 12.5% (computed as sketched below).
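
For reference, the percentage error in these tables appears to be the difference between the actual and counted totals relative to the actual total; a small illustrative snippet (an assumption about how the tables were computed, not part of the project code) is given below.

    def percent_error(actual, counted):
        # Vehicles missed, as a percentage of the vehicles actually present.
        return (actual - counted) / actual * 100

    # Example: the 50-second Matlab entry (40 vehicles in the video, 37 counted).
    print(percent_error(40, 37))   # -> 7.5, matching the 7.5% reported above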

5.2.2 Result Analysis with Iterations

The same input video is tested 5 times in order to analyze the obtained data and calculate the results.

 Results Analysis for Matlab

The table given below consists of the data collected after performing several tests. Note that the testing time is 5 minutes.
Table 5.5: Iteration Error Analysis for Matlab

No of Test Total Vehicles Vehicles Counted % Error

1 254 247 2.7%

2 254 243 4.3%

3 254 249 1.9%

4 254 251 1.1%

5 254 239 5.9%

From the analysis it is observed that, for different tests with the same input, the results vary and the % error fluctuates, but only within a very small margin in Matlab.

 Results Analysis for PyCharm

The table given below consists of the data obtained by performing 5 iterations on the same video input. Note that the observation time is 20 minutes.

Table 5.6: Iteration Error Analysis for PyCharm

No of Test Total Vehicles Vehicles Counted % Error

1 40 35 12.5%

2 40 37 7.5%

3 40 37 7.5%

4 40 34 15%

5 40 35 12.5%

After analyzing the results it is observed that there is very little fluctuation in the counted number when the same input is given repeatedly.

5.2.3 Results Analysis with Different Test Inputs

The last test was performed by using different videos as input, and the results are as follows.

 Results Analysis for Matlab


The following table consists of results obtained by using 3 different test videos. Note that the processing time is kept constant at 5 minutes.

Table 5.7: Test Results for Matlab

No of Test Total Vehicles Vehicles Counted % Error

1 254 247 2.7%

2 341 332 2.6%

3 262 249 4.9%

From these results it is observed that this system can efficiently count the number of vehicles regardless of the environment of the input.

 Results Analysis for PyCharm

The table given below consists of the data obtained from tests on different videos. Note that the processing time is kept constant at 20 minutes.

Table 5.8: Test Results on PyCharm

No of Test Total Vehicles Vehicles Counted % Error

1 40 35 12.5%

2 86 78 9.3%

3 43 37 13.9%

From these results it is concluded that this code also counts the number of vehicles with similar efficiency regardless of the input environment.

5.3 Discussion on the Findings


After the completion of the first part, the demo was working perfectly; the only issue was slow processing, as the input video was of high resolution, which is why it had to be replaced with a lower-resolution video so that the processing speed could be increased. A 360p video was used and gave a good processing speed. The system is designed using a pretrained model, which was working fine. After the completion of the whole project it is observed that this system requires a high processing speed in order to perform real-time object detection, recognition, tracking, and counting. The computer system used for the completion of this project has 8 GB of RAM and a 3.2 GHz processor, which is not sufficient to process the input video in real time at high speed. This causes slow processing, and the time to process a 1-minute video increases to 7 to 8 minutes. The detection accuracy is nearly 97 percent, while the detector is 100 percent effective in the sense that it always detects a vehicle when it is in the frame. Through the above tests, the reliability as well as the robustness of the working prototype were checked. The following things were observed after testing.

 The detector in the Matlab part sometimes works abnormally, as it occasionally counts the shoulder blocks of the road.

 The detector in the Python part works very accurately and always detects the vehicles that are in the frame. It sometimes assigns a wrong label, for example labelling a car as a truck.

 The trackers in both cases are not very efficient, as they sometimes stop tracking, which affects the counting.

 The counter also sometimes does not increment its value even when a vehicle passes through the threshold.

 The overall accuracy of the system is 90 to 95 percent for the Matlab code and 85 to 90 percent for the Python code.

5.4 Limitations of the working prototype


The main limitation of this work is its pretrained model: as this model only contains 230 different kinds of vehicle images, it only detects, tracks and counts those vehicles that are stored in the model. This affects the efficiency of the working system. The other main limitation of this prototype is the speed of the system, which can be resolved by using the Intel Compute Stick. This stick is specially designed for the purpose of real-time image processing.

5.5 Chapter Summary


In this chapter, the results and evaluation of this project were discussed, covering all the findings and results. In the presentation section, a general discussion of the software results and findings was given.

Chapter 6

CONCLUSION AND FUTURE WORK

The Smart Vehicle Counting System using image processing is one of the leading works in the field of automation towards a new era of traffic flow and control systems in cities. The system is built using a Raspberry Pi, which is a capable microcontroller and achieves high speed when interfaced with the Intel Compute Stick. This system can be deployed on roads and at the entrances of parking lots to control the flow of vehicles. The code is written using the pretrained Resnet model, which contains 230 different models of vehicles, and is also implemented using the OpenCV technique in the Python programming language for deployment on the Raspberry Pi. For future work, a modified model consisting of additional vehicle models could be built so that the accuracy can be increased. Pakistan has some kinds of vehicles that Resnet does not contain, which is why this model has lower accuracy in Pakistan. This project can be interfaced with traffic lights to organize the traffic on roads. With a bit of improvement, the system can also be used to monitor and organize the traffic flow of emergency vehicles throughout a whole city. By interlinking different signals, the system can let emergency vehicles pass easily and reduce traffic congestion.

References

[1] S. Maksymenko, "Towards Data Science," 2019. [Online]. Available:


https://towardsdatascience.com/how-to-build-a-face-detection-and-recognition-
system-f5c2cdfbeb8c. [Accessed 05 June 2020].

[2] V. Abburu and S. Gupta and S. R. Rimitha and M. Mulimani and S. G.


Koolagudi, “Currency recognition system using image processing" , 2017 Tenth
International Conference on Contemporary Computing (IC3): IEEE, No. 2572-
6129, 2017 [Online] Available: https://ieeexplore.ieee.org/document/8284300
[Accessed 05 June 2020]

[3] F. Liu and Z. Zeng and R. Jiang, "A video-based real-time adaptive vehicle-
counting system for urban roads" Plos One, 2017. [Online] Available:
https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0186098
[Accessed 05 June 2020]

[4] Palos.pk, "Giken Trastem," [Online]. Available:


http://www.trastem.co.jp/eng/product/palos_parking.html [Accessed 05 June
2020].

[5] S.K. Bahadir and F. Kalaoglu, "Science Direct," 2016. [Online]. Available:
https://www.sciencedirect.com/topics/engineering/ultrasonic-sensor. [Accessed
30 05 2020].

[6] S. B. Somani, and H. S. Khatri, "Infrared-based system for vehicle counting and
classification," IEEE, No. 978-1-4799-6272-3, 2015 [Online] Available:
https://ieeexplore.ieee.org/document/7086998 [Accessed 05 June 2020]

[7] M. M. Elkhatib, and A. I. Adwan, and A. S. Alsamna, and A. M. Abu-Hudrouss


"Smart Traffic Lights using Image Processing Algorithms," IEEE, No. 978-1-
5386-6291-5, 2019 [Online] Available:
https://ieeexplore.ieee.org/document/8747225 [Accessed 05 June 2020]

[8] M. P. Pathrikar, and S. J. Bhosale, and D. Patil, and G. Deshpande "SMART


SECURITY SYSTEM FOR SENSITIVE AREA BY USING IMAGE
PROCESSING", No. 110021471, 2014 [Online] Available:
https://www.semanticscholar.org/paper/SMART-SECURITY-SYSTEM-FOR-
SENSITIVE-AREA-BY-USING-Pathrikar-
Bhosale/430d64cd96f8714d911df61b079a8a18b6388e5e [Accessed 05 June
2020]

[9] M.Baygin and M. Karakose and A. Sarimaden and E. Akin, "An Image
Processing based Object Counting Approach for Machine Vision Application"
International Conference on Advances and Innovations in Engineering
(ICAIE), 2018, [Online] Available:
https://www.researchgate.net/publication/319355836_An_Image_Processing_b
ased_Object_Counting_Approach_for_Machine_Vision_Application [Accessed
05 June 2020]

[10] N. Mahamkali and A. Vadivel, "OpenCV for Computer Vision Applications"


Research Gate, 2015, [Online] Available:
https://www.researchgate.net/publication/301590571_OpenCV_for_Computer_
Vision_Applications [Accessed 18 June 2020]

[11] X. Qi and X. Li and H. Zhang, "Research of paper surface defects detection


system based on blob algorithm" International Conference, IEEE, 2013,
[Online] Available: https://ieeexplore.ieee.org/document/6720384 [Accessed 18
June 2020]

[12] P. Patel and M. Nandu and P. Raut, "Initialization of Weights in Neural Networks"
Research Gate, 2019, [Online] Available:
https://www.researchgate.net/publication/330875010_Initialization_of_Weights
_in_Neural_Networks [Accessed 18 June 2020]
[13] X. Farhodov and O. H Kwon and K. W. Kang and S. H. Lee and K. P. Kwon, "
Faster RCNN Detection Based OpenCV CSRT Tracker Using Drone Data"
International Conference, IEEE, 2019 [Online] Available:
https://ieeexplore.ieee.org/document/9012043 [Accessed 18 June 2020]

[14] M. Luo and B. Zhou and T. Wang, "Multi-part and scale adaptive visual tracker
based on kernel correlation filter" PLOS ONE, 2020, [Online] Available:
https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0231087
[Accessed 18 June 2020]

[15] J. C. Freakin, "Raspberry Pi 2: Six Things You Can (And Can't) Do"
InformationWeek, 2015. [Online]. Available:
https://www.informationweek.com/software/raspberry-pi-2-six-things-you-can-
(and-cant)-do/a/d-id/1319064 [Accessed: 11 July 2020]

[16] J. S. Cook, "Google Coral Camera vs. Raspberry Pi Camera V2" Arrow, 2019.
[Online]. Available: https://www.arrow.com/en/research-and-
events/articles/google-coral-camera-vs-raspberry-pi-camera-v2 [Accessed 11
July 2020].

[17] A. Hussain, "Raspberry Pi Screens & Displays" RIKIKNOW, [Online].


Available: https://trickiknow.com/blog/. [Accessed 11 July 2020].

[18] L. Hughes, "How to Power a Raspberry Pi with Batteries" Arrow, 2016.
[Online]. Available: https://www.arrow.com/en/research-and-
events/articles/battery-power-your-
pi#:~:text=USB%20port%20powering%20is%20definitely,will%20fry%20a%2
0Raspberry%20Pi [Accessed 11 July 2020].

[19] A. Myrick, "Best SD Cards for the Raspberry Pi 3 B+ in 2020" Androidcentral,


2020. [Online]. Available: androidcentral.com/best-sd-card-raspberry-pi-3-
b#:~:text=A%20step%20up%20SanDisk%20Extreme,the%20rest%20of%20yo
ur%20projects [Accessed 11 July 2020].

[20] N. Oh, "Intel Launches Movidius Neural Compute Stick: Deep Learning and AI
on a $79 USB Stick" ANANDTECH, 2017. [Online]. Available:
https://www.anandtech.com/show/11649/intel-launches-movidius-neural-
compute-stick. [Accessed 11 July 2020].
