All rights reserved. Reproduction in whole or in part in any form requires the prior
written permission of Syed Ali Raza Naqvi, Sohail Khan and Saqib Nawas or
designated representative.
DECLARATION
It is declared that this is an original piece of our own work, except where otherwise
acknowledged in text and references. This work has not been submitted in any form for
another degree or diploma at any university or other institution for tertiary education
and shall not be submitted by us in future for obtaining any degree from this or any
other University or Institution.
Sohail Khan
BEE 143011
Saqib Nawas
BEE 153117
July 2020
CERTIFICATE OF APPROVAL
It is certified that the project titled “A Smart Vehicle Counting System Using Image
Processing”, carried out by Syed Ali Raza Naqvi, Reg. No. BEE-163120, Sohail Khan,
Reg. No. BEE-143011, and Saqib Nawas, Reg. No. BEE-153117, under the supervision
of Mr. Umer Maqbool at Capital University of Science & Technology, Islamabad, is
fully adequate, in scope and in quality, as a final year project for the degree of Bachelor
of Science in Electrical Engineering.
Supervisor: -------------------------
Mr. Umer Maqbool
Assistant Professor
Department of Electrical Engineering
Faculty of Engineering
Capital University of Science & Technology, Islamabad
HOD: ----------------------------
Dr. Noor Mohammad Khan
Professor
Department of Electrical Engineering
Faculty of Engineering
Capital University of Science & Technology, Islamabad
ACKNOWLEDGMENT
On completion of this project, we would like to thank our supervisor, Mr. Umer
Maqbool, whose guidance helped us complete it with effective results.
ABSTRACT
This project covers the design and development of a system that counts vehicles on
roads using image processing. The system requires high-speed processors to perform
image processing in very little time. The project is designed in two parts. In the first
part, the goals are achieved using Matlab, where the YOLO image processing
algorithm is applied with a pre-trained ResNet model. The model is trained on around
230 vehicle images, against which the bounded frames are compared; after this
comparison, the vehicles are categorized and detected while a counter is incremented
alongside. In the second part, the same targets are achieved in the Python programming
language, where the YOLO algorithm is implemented using OpenCV. Images are
extracted from the input video and image processing is then performed on them. A
Raspberry Pi with an Intel Movidius Compute Stick is used to make the project
functional. With the Intel Compute Stick, the real-time image processing speed is
increased, and the system is able to process the video and generate the output in real
time. The system helps reduce manpower, organize traffic, and limit the number of
vehicles in a parking lot. It can be interfaced with traffic lights to organize the traffic,
and can also help reduce the time emergency vehicles take to reach their destination.
We conclude that, by using image processing and the YOLO algorithm, up to 96
percent accuracy can be achieved.
TABLE OF CONTENTS
2.4 Limitations and Bottlenecks of the Existing Work .................................15
2.4.1 Accuracy ..................................................................................15
2.4.2 Cost ..........................................................................................16
2.4.3 Time and Additional Hardware .................................................16
2.4 Problem Statement ..................................................................................16
2.5 Summary ................................................................................................17
Chapter 3 .......................................................................................... 18
PROJECT DESIGN AND IMPLEMENTATION ............................. 18
3.1 Proposed Design Methodology................................................................18
3.1.1 Raspberry Pi .............................................................................19
3.1.2 Camera V2 ...............................................................................19
3.1.3 Pi Screen interfaced ..................................................................19
3.2 Interfacing of Components ......................................................................19
3.2.1 Creating Bootable SD Card for Raspberry Pi ............................20
3.2.2 Installing Raspberry Pi Operating System .................................20
3.2.3 Interfacing Raspberry Pi Camera ..............................................21
3.2.4 Installing Onscreen Keyboard in Raspberry Pi ..........................24
3.2.5 Interfacing Touch Screen ..........................................................25
3.2.6 Interfacing Intel Movidius Compute Stick ................................27
3.3 Analysis Procedure .................................................................................27
3.3.1 IR Sensor for Vehicle Counting ................................................28
3.3.2 Arduino Micro-controller..........................................................28
3.4 Design of Project Software and Hardware ...............................................28
3.4.1 Design and Implementation on Matlab......................................29
          3.4.1.1 Input Video ..............................................................30
          3.4.1.2 Extraction of Frames ................................................30
          3.4.1.3 Conversion into Grid Frames ....................................30
          3.4.1.4 Comparison of Each Frame with Existing Data .........31
          3.4.1.5 Identification of Vehicles ...........................................31
          3.4.1.6 Results in the Form of Numerical Values ..................32
3.4.2 Design and Implementation on PyCharm Community Edition ..32
          3.4.2.1 Initialization of Libraries (OpenCV) ..........................33
          3.4.2.2 Loading Weights and Models ...................................34
          3.4.2.3 Loading Environment File ........................................34
          3.4.2.4 Frame Extraction ......................................................34
          3.4.2.5 Applying YOLO Algorithm ......................................35
          3.4.2.6 Condition .................................................................35
          3.4.2.7 CSRT and KCF Tracker ...........................................35
3.5 Summary ...............................................................................................37
Chapter 4 .......................................................................................... 38
TOOLS AND TECHNIQUES .......................................................... 38
4.1 Hardware Tools used ..................................................................................38
4.1.1 Raspberry Pi 2 ..............................................................................38
4.1.2 Camera V2 for Input ....................................................................40
4.1.4 Battery .........................................................................................42
4.1.5 SD Card .......................................................................................43
4.1.6 Intel Movidius Neural Compute Stick...........................................43
4.2 Software and Simulation Tools Used ............................................45
4.2.1 Matlab ..........................................................................................45
4.2.2 PyCharm Community Edition ...................................................46
4.3 Chapter Summary .......................................................................................47
Chapter 5 .......................................................................................... 48
PROJECT RESULTS AND EVALUATION .................................... 48
5.1 Presentation of the findings ...................................................................48
5.1.1 Software Results on Matlab ......................................................50
5.1.2 Software Results on PyCharm Community Edition ...................51
5.2 Result Analysis ......................................................................................52
5.2.1 Results Analysis with Respect to Error .....................................52
5.2.2 Result Analysis with Iterations .................................................54
5.2.3 Results Analysis with Different Test Inputs ..............................56
5.3 Discussion on the Findings ....................................................................57
5.4 Limitations of the working prototype .....................................................58
5.5 Chapter Summary ..................................................................58
Chapter 6 .......................................................................................... 59
CONCLUSION AND FUTURE WORK .......................................... 59
References ........................................................................................ 60
LIST OF FIGURES
Figure 1.1: Project Timeline Part-I ........................................................................... 5
Figure 1.2: Project Time Line Part-II ........................................................................ 6
Figure 2.1: Face Detection [1] .................................................................................. 8
Figure 2.2: Currency Identification System [2] .......................................................... 9
Figure 2.3: Piezoelectric Sensor for Vehicle Counting [3] ....................................... 10
Figure 2.4: Magnetic Sensor for Vehicle Counting [4] ............................................ 11
Figure 2.5: Acoustic Detector for Vehicle Counting [5] ........................................... 11
Figure 2.6: IR Sensor for Vehicle Counting [6] ....................................................... 12
Figure 2.7: Smart Traffic Light System Using Image Processing. [7] ...................... 13
Figure 2.8: Smart Security System Using Image Processing [8] .............................. 14
Figure 2.9: An Image Processing based Object Counting. [9] ................................. 15
Figure 3.1: Project Block Diagram.......................................................................... 18
Figure 3.2: NOOBS Operating System for Raspberry Pi .......................................... 20
Figure 3.3: NOOBS Installation .............................................................................. 21
Figure 3.4: Configuration Settings........................................................................... 22
Figure 3.5: Enabling Camera .................................................................................. 22
Figure 3.6: Image Test Result .................................................................................. 23
Figure 3.7: Video Test Results ................................................................................. 24
Figure 3.8: On Screen Keyboard ............................................................................. 25
Figure 3.9: Raspberry Pi and Screen Connections ................................................... 25
Figure 3.10: Touch Screen Display.......................................................................... 26
Figure 3.11: Intel Movidius Compute Stick with Raspberry pi ................................. 27
Figure 3.12: Implementation Flow Chart on Matlab ................................................ 29
Figure 3.13: System GUI in Matlab ......................................................................... 32
Figure 3.14: Implementation Flow Chart on PyCharm ............................................ 33
Figure 3.15: CSRT Tracker Working ....................................................................... 36
Figure 3.16: System GUI in Python ......................................................................... 37
Figure 4.1: Raspberry Pi 2 [15] .............................................................................. 39
Figure 4.2: Camera V2 for Input [16] ..................................................................... 40
Figure 4.3: Raspberry Pi Display Screen [17] ......................................................... 41
Figure 4.4: Battery for Input Supply [18] ................................................................ 42
Figure 4.5: SD Card for internal storage [19] ......................................................... 43
Figure 4.6: Intel Movidius Neural Compute Stick [20] ............................................ 44
Figure 4.7: Matlab Software for code compilation................................................... 46
Figure 5.1: Boundary around vehicles ..................................................................... 50
Figure 5.2: Counting Results ................................................................................... 51
Figure 5.3: Final Output ......................................................................................... 52
LIST OF TABLES
Chapter 1
INTRODUCTION
In this chapter, the main idea of the proposed project is discussed, along with its
relevance and need in the modern world. The basic principles and methodologies of
the project are also discussed.
1.1 Overview
This project is designed to count the number of vehicles in a video using image
processing. The basic idea is to count the vehicles and give the result in numeric form.
The project is to be deployed at the entrance of a parking area to count the vehicles
entering it.
As the video input is stored in the internal storage, the system will keep a record and
provide appropriate information about crash occurrence.
2. The system will have internal storage to store the input video and a reset function.
Table 1.2: Functional Specification
This project can be used as a smart traffic light system, as it will count the number of
vehicles on each side of the road and organize the traffic accordingly. This will help
organize the traffic, reduce congestion and decrease traffic jams. It can also be used as
a safe and smartly organized route for emergency vehicles such as ambulances, police
vehicles and fire-fighter tankers. The system will check the density of traffic and
communicate with the signal driver in such a way that the side with heavy density and
emergency vehicles gets the highest priority.
One of the major issues for parking plazas and buildings is that they have limited
parking space; if someone enters the area when it is full, that person may block the
way and disturb the surroundings. To overcome this issue, the product can be deployed
at the entrance and exit of the parking area so that it shows a warning when the space
is full, counts the vehicles leaving from the exit side, and shows the remaining capacity
of the parking.
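The entry/exit bookkeeping described above can be sketched as a simple occupancy counter (a hypothetical Python illustration; the class and method names are ours, not part of the project code):

```python
class ParkingCounter:
    """Tracks parking occupancy from vehicles counted at entry and exit."""

    def __init__(self, capacity):
        self.capacity = capacity   # total parking spaces
        self.occupied = 0          # vehicles currently inside

    def vehicle_entered(self):
        """Called when the entry-side counter detects a vehicle."""
        if self.occupied >= self.capacity:
            return "WARNING: parking full"
        self.occupied += 1
        return f"spaces left: {self.capacity - self.occupied}"

    def vehicle_left(self):
        """Called when the exit-side counter detects a vehicle."""
        if self.occupied > 0:
            self.occupied -= 1
        return f"spaces left: {self.capacity - self.occupied}"
```

For a two-space lot, two entries report "spaces left: 1" then "spaces left: 0", a third entry returns the warning, and an exit frees one space again.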
1.5.3 Limiting Traffic Flow
This project can also be used to limit the flow of traffic through a barrier; for example,
in the parking management case discussed above, one can automatically limit the
number of vehicles entering a building or area.
Given below are the distribution of tasks, task durations and resource person details.
Table 1.4: Project Plan-02
Figure 1.2: Project Time Line Part-II
Chapter 2
LITERATURE REVIEW
This chapter is based on the research done during this project. All techniques and
schemes are discussed along with their pros and cons.
the algorithm of image processing, and at the end it will display the total number of
vehicles passing through the particular spot in that time. The following are some of
the studies that made us confident in undertaking this project; these technologies work
on almost the same mechanism.
Face Detection
In the modern era, everything is moving towards automation, and one of the most
common techniques used in cell phones is the face detection lock. New phones
have this face detection function for security. The technology is based on a high
level of image processing: it takes input from the camera, compares it with the
stored image, and decides whether to unlock the phone or not. It is widely used in
many cell phones.
one, and as a result it shows the name of the currency as well as the name of the
country to which the currency belongs.
Some of the technologies used to achieve a similar goal are given below.
A piezoelectric sensor collects data by converting mechanical energy into electrical
energy. The sensor is placed in a groove cut into the road surface. When a vehicle
passes over the sensor, it presses it and produces a voltage signal; when the vehicle
moves off, the voltage reverses. This change of voltage can be used to count vehicles.
The hurdle in the way of efficiency is that if two vehicles pass over the tracks together,
the voltage signal levels are disturbed, so the method is less efficient. The other
disadvantage of piezoelectric sensors is that their efficiency decreases with increasing
pressure and temperature.
A magnetic sensor detects a vehicle by measuring the change in the earth's magnetic
field as the vehicle passes over it. The sensor may be buried or enclosed in a box at
the side of the road. If vehicles are very close to each other, it is difficult to discriminate
between them, which is listed as one of the major limitations on accuracy. Its cost is
also high, and it disrupts the traffic during installation.
Figure 2.4: Magnetic Sensor for Vehicle Counting [4]
This detector detects a vehicle by the sound it creates. The sensor is mounted on a pole
pointing toward the traffic and can be used for one or more traffic lanes. Due to
environmental factors and disturbances, the efficiency of the results is decreased, and
the speed measurements are also less accurate.
2.2.4 Passive Infrared (IR) Sensor for Vehicle Counting
These devices detect a vehicle by measuring the infrared energy radiating from the
detection zone. When a vehicle passes, the radiated energy changes and the count is
increased. The sensor is limited to one or at most two lanes. The main limitation is that
when two vehicles pass through the sensor at the same time with the same speed, the
efficiency is decreased.
This project is a fast implementation in Matlab aimed at preventing heavy traffic
congestion, and the image processing technique has been used to implement it. First,
a video of a lane is captured: a web camera is placed on the road to control the traffic
from it, and a video is shot to determine the traffic density. According to the data
processed in Matlab, the controller sends data to the traffic LEDs and shows the
appropriate time on the traffic signal to manage the traffic. The project also aims to
organize the traffic in such a way that emergency vehicles are passed with the highest
priority.
Figure 2.6: Smart Traffic Light System Using Image Processing. [7]
This project reduces the signal time of a side by checking for emergency vehicles such
as police vehicles, ambulances and fire brigade trucks, and immediately turns that side
of the signal green while turning the other signals red. The main idea is to detect the
emergency vehicles in the traffic and let them pass.
This project is very suitable for monitoring confidential areas. The idea behind the
system is that many existing security systems use CCTV, which records videos that
take a lot of memory and are only used after an incident as evidence; in this project,
the system captures the video and takes action to prevent the attack. The project is
designed for the protection of such areas. It helps monitor and check any suspicious
activity in the zone of a bank. It reduces manpower and is more reliable and rigid in
detecting threats. It can also be used at the entrance of banks to check and match the
face of each person against a criminal database and generate warnings.
Machine vision applications are low-cost, high-precision measurement systems
frequently used in production lines, enabling production facilities to reach high
production numbers without errors. Machine vision operations such as product
counting, error control and dimension measurement can be performed through a
camera. This approach performs automatic counting independently of product type
and color. One camera is used in the system: an image of the products passing along
a conveyor is taken through the camera, and various image processing algorithms are
applied to these images.
Figure 2.9: An Image Processing based Object Counting [9]
Certain related projects have been carried out, but they all have some limitations,
which are discussed here.
2.4.1 Accuracy
The major and most effective thing a customer would invest in, and which helps solve
problems, is the accuracy of the project. All the projects discussed above are easy to
implement, but none of them has achieved accuracy above 95%. They sometimes
require human assistance to carry out some of their tasks. The main causes include
environmental conditions and technical failures. In the case of the IR-sensor-based
counter, the main reason for its lower accuracy is that if two vehicles pass the sensor
at the same time, or if a long vehicle is passing through, then two or more vehicles will
probably pass on the other side of that long vehicle and the sensor will not count them.
In the case of the acoustic sensor, the main reasons for lower accuracy are
environmental factors like noise, rain and other similar effects, which reduce the
efficiency. Another reason is that these systems do not account for traffic jams and are
less robust.
2.4.2 Cost
All the existing products have a high cost, which includes the cost of the components,
serviceability, installation and monitoring. The components used in these products are
expensive and also need to be serviced periodically. During installation, they require
a lot of construction work, such as burying sensors in the road, installing poles with
proper wiring, and constructing monitoring stations.
it. If it is interfaced with traffic signals, it can also be used to control and organize the
traffic flow on each side of the road.
2.5 Summary
In this chapter, a detailed literature review has been presented. The technologies
related to this project, including those closest to our own, have been discussed. The
research carried out for this project and the method to be used in future work to make
the project successful have also been discussed.
Chapter 3
This project mainly consists of two parts: the first part is the software design of a
vehicle counting system, while the second part consists of the hardware and software
to implement the project. Both parts are discussed in this chapter along with their
implementation procedures.
The block diagram shows all the stages of the vehicle counting system. The project is
divided into four main steps: taking input in the form of video, performing image
processing to detect vehicles, counting the number of vehicles, and then showing the
results.
The project uses a camera to take video and pass it to the Raspberry Pi. The Pi is coded
to perform the YOLO image processing algorithm and count the vehicles by
comparing each grid of the extracted frame with the stored data set for detection. After
recognizing and detecting the vehicles, it counts them and finally displays the results.
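The four steps above can be outlined as a processing loop (a schematic Python sketch only; detect_vehicles is a stand-in for the YOLO detector, and frames are represented as plain dictionaries for illustration):

```python
def detect_vehicles(frame):
    # Stand-in for the YOLO detection stage: in the real system each
    # frame grid is compared with the stored data set; here we simply
    # read pre-labelled bounding boxes attached to the frame.
    return frame.get("boxes", [])

def count_vehicles(frames):
    """Run the pipeline: video input -> detection -> counting -> result."""
    total = 0
    for frame in frames:                  # step 1: take frames from the video
        boxes = detect_vehicles(frame)    # step 2: detect vehicles
        total += len(boxes)               # step 3: increment the counter
    return total                          # step 4: value shown on the screen

frames = [{"boxes": [(0, 0, 10, 10)]},
          {"boxes": []},
          {"boxes": [(5, 5, 20, 20), (30, 30, 40, 40)]}]
# count_vehicles(frames) returns 3
```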
3.1.1 Raspberry Pi
The video taken by the camera is received by the Raspberry Pi 2 B+, which is coded
to perform the YOLO image processing algorithm and count the number of vehicles.
After the calculations, it displays the counted value on the screen.
3.1.2 Camera V2
A camera is used to take the video. The camera used in this project is an 8-megapixel
night vision camera, which has visibility even in foggy weather, improving the video
quality.
A screen is interfaced with the Pi to show the total number of vehicles counted.
3.2.1 Creating Bootable SD Card for Raspberry Pi
To create the bootable SD card, NOOBS (New Out Of the Box Software) is first
downloaded from the official Raspberry Pi website. After downloading the software,
the first installation step is to format the SD card completely using SD Card Formatter,
after which the downloaded software is extracted to the SD card and the operating
system is ready to install.
After the first step, the SD card is inserted into the Raspberry Pi, which is connected
to a mouse, keyboard and display screen through HDMI. When power is supplied to
the Raspberry Pi, the following window appears.
Figure 3.3: NOOBS Installation
After selecting Raspbian and clicking the install tab in the top-left corner, the
installation begins. When the installation is complete, following the instructions, the
operating system becomes functional and the Raspberry Pi is ready to use.
Once the operating system is installed on the Raspberry Pi, the next step is interfacing
the camera. To interface the camera, open the Raspberry Pi configuration settings
using the command “sudo raspi-config”. When the configuration settings window
appears, select the camera option as shown below.
Figure 3.4: Configuration Settings
When the camera option is selected, the following window appears, and the camera is
enabled from there. After enabling the camera, the system needs a reboot, after which
the camera is functional.
The commands used for capturing a picture and a video, respectively, are as below.
raspistill -o nameofimage.format
raspivid -o nameofvideo.format
Figure 3.7: Video Test Results
For the installation of an on-screen keyboard on the Raspberry Pi, the following
commands are used in the command window. After this, reboot the Raspberry Pi and
download the keyboard.sh file. Running this file brings up the keyboard on the screen,
as given below.
Figure 3.8: On Screen Keyboard
With the keyboard and camera running, the next step is interfacing the touch screen.
Before installing the files for switching from HDMI to the touch screen, the display
has to be connected to the Raspberry Pi as shown below.
After this, the given commands are used in the command window to automatically
download the required files. When the download is complete, the operating system
asks for confirmation to install the files. Once the files are installed, the Raspberry Pi
automatically reboots, and the display is shifted from HDMI to the touch screen as
shown below.
3.2.6 Interfacing Intel Movidius Compute Stick
Interfacing the Intel Movidius Compute Stick is simple: it starts working once it is
connected to the Raspberry Pi through a USB port.
3.3.1 IR Sensor for Vehicle Counting
The IR sensor is a motion-based infrared sensor that detects an object by detecting the
wavelengths of light; the sensor used for vehicle detection is a passive infrared sensor.
This project is also achievable using an IR sensor, but the main problem is that high
accuracy cannot be achieved, and the sensor's life span is short, so a quality product
cannot be built around it. We have therefore used image processing to perform this
task, achieving about 96 percent accuracy.
Two major questions arise at this point: why image processing is used instead of IR
sensors, and why a Raspberry Pi is used when an Arduino can perform a similar task.
There are some major reasons for choosing image processing over other technologies.
The main reason is that a video source provides overall information about the traffic
and vehicles, and cameras are much cheaper, with low maintenance and serviceability
costs. As mentioned above, although the Arduino is a micro-controller, it has speed
issues with image processing, since the algorithm requires fast processors to do the
work in seconds, and the Arduino is not capable to that extent.
3.4.1 Design and Implementation on Matlab
First, a video from which vehicles are to be counted is selected, and coding is started
after analyzing the resolution and other parameters of the video. The first and foremost
requirement when starting the code is the pre-trained car detection model, which is
downloaded from the official MATLAB page [13]. After this, predefined MATLAB
functions are used and the coding is completed.
3.4.1.1 Input Video.
First, an input video is selected, on which image processing is to be performed. Then
the resolution parameters and frames per second of the input video are calculated and
measured. The parameters of the selected video are as follows.
After checking the parameters of the input video, the next task is the extraction of
frames from the video to perform image processing; this is done using a predefined
Matlab function, as mentioned below.
obj.reader = vision.VideoFileReader('y2mate.com - m6_motorway_traffic_PNCJQkvALVc_360p.mp4');
This code is used to extract frames from the video; these frames are then passed to the
next processing block.
After the extraction, the frames are converted into small grids using the code
mentioned below.
obj.videoPlayer = vision.VideoPlayer('Position', [20, 20, 1000, 600]);
This code sets up the video player in which the processed frames are displayed; the
grids are then passed to the next processing block.
A pre-trained YOLO model for vehicle detection is downloaded using the code below.
The model is trained on around 230 images of vehicles. After the frames are converted
into small grids, each grid is compared with the model. The code for downloading the
model is given below.
if ~doTraining && ~exist('yolov2ResNet50VehicleExample_19b.mat','file')
    pretrainedURL = 'https://www.mathworks.com/supportfiles/vision/data/yolov2ResNet50VehicleExample_19b.mat';
    websave('yolov2ResNet50VehicleExample_19b.mat', pretrainedURL);
end
The model used here is ResNet. ResNet stands for Residual Network, which is
considered a backbone for most computer vision tasks. It allows extremely deep neural
networks, of 150+ layers, to be trained successfully.
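The residual idea can be illustrated with a minimal NumPy sketch (our own simplification, not the actual ResNet code): each block learns a residual F(x) and outputs F(x) + x, and this skip connection is what lets very deep stacks of layers train successfully.

```python
import numpy as np

def residual_block(x, weight):
    """Output F(x) + x, where F is a tiny learned transform
    (one linear layer followed by ReLU)."""
    f_x = np.maximum(0.0, weight @ x)   # F(x): linear layer + ReLU
    return f_x + x                      # skip connection adds the input back

x = np.array([3.0, -1.0])
# With all-zero weights F(x) = 0, so the block reduces to the identity
# mapping and simply passes x through unchanged.
y = residual_block(x, np.zeros((2, 2)))
```

Because an untrained block can default to the identity, adding more blocks never has to make the network worse, which is the intuition behind training 150+ layers.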
Using a Kalman filter, the code compares the grids with the model and the vehicles
are detected. Alongside this comparison, a counter is incremented according to the
results of the comparison.
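The increment step can be sketched as follows (a hypothetical Python illustration; the real code works on the Kalman-filtered detections described above, and the track IDs here are assumed inputs, not part of the project code):

```python
def update_count(counted_ids, detections):
    """Count each tracked vehicle exactly once.

    counted_ids: set of track IDs that were already counted
    detections:  list of (track_id, label) pairs for the current frame
    Returns the number of newly counted vehicles.
    """
    new = 0
    for track_id, label in detections:
        if label == "vehicle" and track_id not in counted_ids:
            counted_ids.add(track_id)   # remember it to avoid double counting
            new += 1
    return new

counted = set()
n = update_count(counted, [(1, "vehicle"), (2, "vehicle")])
n += update_count(counted, [(2, "vehicle"), (3, "vehicle")])  # id 2 repeats
# n is now 3: vehicles 1, 2 and 3 were each counted once
```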
3.4.1.6 Results in the form of numerical values.
After the counter is incremented by the above process, the results are generated as
numeric values displayed on the screen. After implementing the project, testing was
done using different videos. The output GUI is shown below.
Due to the COVID-19 situation across the country, we were unable to purchase the
most important component of this project, the Intel Movidius Compute Stick, which
serves as the brain of the real-time image processing implementation. To compensate
for this deficiency, we used PyCharm Community Edition to complete the project. The
block diagram of the implementation on PyCharm Community Edition is as follows.
Figure 3.14: Implementation Flow Chart on PyCharm
As with the implementation in Matlab, the first step is selecting a video, after which its parameters are analyzed. Once the video is selected, coding begins. The implementation steps given in the figure above are elaborated as follows.
OpenCV is an open-source computer vision library whose functions are mainly aimed at real-time computer vision [10]. It includes more than 2500 algorithms, among them the YOLO algorithm. OpenCV is initialized by downloading its package and, after extracting it, simply copying the cv2.pyd file into the site-packages folder of the Python installation. OpenCV uses blobs as the data form for object detection. A blob (Binary Large Object) represents a group of pixels having similar values, and OpenCV's blob functionality is used to detect connected regions in binarized images [11] and to extract image features such as color, area, and mean. A blob holds a collection of binary data as a single entity.
Once the libraries are loaded, the next step is loading the weights and models so that detected objects can be compared against the stored models. Weights are the parameters of a neural network that transform the input data through the hidden layers [12]; they determine the importance of each input and how it is categorized. These weights and models are used to predict whether or not an object lies inside an anchor box.
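The role of weights can be made concrete with a single fully connected layer: each output is a weighted sum of the inputs, so the weights decide how strongly each input feature matters. The numbers below are illustrative, not the project's trained weights.

```python
def dense(inputs, weights, biases):
    """One fully connected layer:
    out[j] = sum_i(inputs[i] * weights[i][j]) + biases[j]
    """
    n_out = len(biases)
    return [sum(inputs[i] * weights[i][j] for i in range(len(inputs))) + biases[j]
            for j in range(n_out)]

# Two inputs transformed into two hidden-layer activations.
hidden = dense([1.0, 2.0],
               [[0.5, -1.0],    # weights leaving input 0
                [0.25, 0.5]],   # weights leaving input 1
               [0.0, 0.1])
```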
The next step is loading the environment file, which holds all the data about the input video and its output. It defines whether the input is a stored video or a live feed, and it records the video parameters. The file is also used to modify the output results, e.g. whether the output video has a mask and where the threshold line is to be drawn. The threshold line is drawn using coordinates obtained from an online tool (imagemap.net); the coordinates defining the line are stored as tuples in dictionaries.
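The environment settings described above could take roughly the following shape in Python. The key names, file name, and coordinates here are assumptions for illustration, not the project's actual file.

```python
# Illustrative environment settings: input source, output options, and the
# threshold line stored as coordinate tuples (per the text above).
env = {
    "source": "traffic.mp4",   # stored video; could be a camera index when live
    "live": False,
    "draw_mask": True,
    # threshold line endpoints, as read off an online image-map tool
    "count_line": ((120, 400), (880, 400)),
}
```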
In the next step, frames are extracted; these frames are then used for image processing. The frame extraction rate depends upon the speed of the processing device.
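A processing-speed-dependent extraction rate amounts to sampling every Nth frame. In the real pipeline the frames would come from OpenCV's video capture, but the skipping logic is the same; the stride value below is illustrative.

```python
def sample_frames(frames, stride):
    """Keep every `stride`-th frame; a slower processor implies a larger stride."""
    return [f for i, f in enumerate(frames) if i % stride == 0]

# Ten frames sampled with stride 3 keeps frames 0, 3, 6 and 9.
kept = sample_frames(list(range(10)), 3)
```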
3.4.2.5 Applying YOLO Algorithm
After extraction, the frames are passed to the algorithm, which performs image processing and divides each image into small anchor boxes. These anchor boxes are then processed, and predictions are made by comparison against the stored models. Once predictions are made, the next step is evaluating the predicted values, on which object detection directly depends. If a value is less than 0.5, the detection is ignored; if it is above this threshold, Non-max Suppression (NMS) is applied in order to obtain a single box around the detected object.
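The confidence cut-off and NMS step can be sketched directly: boxes scoring under 0.5 are dropped, then overlapping survivors are pruned so that one box remains per object. This is a plain-Python rendition of the standard algorithm; the project itself would call the library routine.

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def nms(boxes, scores, score_thr=0.5, iou_thr=0.45):
    """Drop low-confidence boxes, then suppress overlaps among the rest."""
    order = sorted((i for i, s in enumerate(scores) if s >= score_thr),
                   key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < iou_thr for j in keep):
            keep.append(i)
    return keep

# Two near-duplicate detections of one vehicle, plus one weak detection
# elsewhere: the weak one is dropped, the duplicate is suppressed.
boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)]
scores = [0.9, 0.8, 0.3]
kept = nms(boxes, scores)
```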
3.4.2.6 Condition
When a vehicle is detected, CSRT (Channel and Spatial Reliability Tracking) and KCF (Kernelized Correlation Filter) trackers are applied in order to track the object in upcoming frames. The CSRT tracker uses correlation filters to track an object by searching the area around its last known position. Some features of the CSRT tracker are as follows [13].
Figure 3.15: CSRT Tracker Working
The KCF tracker works by training a filter with patches containing the object as well as nearby patches that do not contain it. Below are some advantages of using the KCF tracker [14].
1. It is 1.5 times faster than CSRT and 10 times faster than TLD.
Once the tracker is applied, tracking of the vehicle begins. When the vehicle crosses the threshold line, the counter is incremented accordingly. After this procedure is completed, another condition is checked: whether or not the video has ended. If there are more frames to process, the program returns to the frame extraction block and repeats the same procedure; if the video has ended, the program terminates.
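The counting condition reduces to detecting when a tracked centroid moves from one side of the threshold line to the other between consecutive frames. A minimal sketch for a horizontal line (the line height and centroid positions are illustrative):

```python
def count_crossings(centroid_ys, line_y):
    """Count how often a tracked centroid crosses a horizontal line."""
    count = 0
    for prev, cur in zip(centroid_ys, centroid_ys[1:]):
        if (prev < line_y) != (cur < line_y):   # sides differ => crossed
            count += 1
    return count

# A vehicle moving down the frame crosses the line at y=300 exactly once.
track = [280, 290, 298, 305, 320]
crossings = count_crossings(track, 300)
```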
After implementing the project in PyCharm, testing was performed using different test samples.
3.5 Summary
This chapter covered the software and hardware design of the project and its work flow. It also discussed the details of the software used to implement the project.
Chapter 4
This chapter discusses all of the tools and techniques applied to achieve a smart vehicle counting system: first the hardware tools used during the project, and then the software tools that helped in making it.
1. Raspberry Pi 2
2. Camera V2
3. Pi Screen
4. Battery
5. SD Card
The specification tables, along with all the required information for the components used, are given below.
4.1.1 Raspberry Pi 2
4.1.2 Camera V2 for Input
A camera is the primary component of this project, as it captures the input in the form of a video. A Raspberry Pi camera known as Camera V2 is used. It has an 8-megapixel sensor, records 1080p video at 30 frames per second, and is capable of capturing static images of 3280 x 2464 pixels. This camera was selected to achieve good video quality and more accurate results.
4.1.3 Raspberry Pi Screen
An LCD screen is interfaced with the Raspberry Pi; the screen shows the results in the form of numeric values. The Raspberry Pi supports screens from small sizes up to desktop scale. Since the project has to be a compact device, a 3.5-inch screen was selected.
4.1.4 Battery
4.1.5 SD Card
A class 10 SD card is used as storage for the Raspberry Pi, in order to boot the software and install NOOBS. The card also stores the input video from the camera. The card used is shown below.
As the Raspberry Pi has a low processing speed, it cannot perform real-time image processing on its own; for this purpose a device known as the Intel Movidius stick is used. This device enhances processing speed, as it is specifically designed to run computer vision programs: it contains a high-speed vision processing unit built for deep learning in machine vision.
4.2 Software, simulation tool used
The following software was used in this project:
Matlab
4.2.1 Matlab
Matlab is a desktop application used for multiple purposes, such as simulation and writing code. It has many built-in functions for performing tasks, and one of its main advantages is that code can be debugged and tested at run time. Matlab has the following key features.
To write code, click the plus sign at the top-left corner of the window to open a new script, and start coding. Built-in functions can be accessed simply by calling them. After completing the code, run it by pressing the play button in the actions bar at the top.
Figure 4.7: Matlab Software for code compilation
4.3 Chapter Summary
In the first section of this chapter, the hardware tools used during this project were discussed, along with the details and specifications of the components used. In the second section, the software tools that were helpful in this project were discussed in detail.
Chapter 5
In this chapter, all the results of the project are discussed and evaluated.
Time             Actual Vehicles   Counted Vehicles
5 secs           7                 7
10 secs          10                10
20 secs          20                18
50 secs          40                37
1 min, 10 secs   63                40
1 min, 40 secs   89                84
After completing the implementation of the project, the final deliverable is a system that takes a video as input and, after applying the YOLO image processing algorithm, gives the number of vehicles passing through the threshold point. The results are shown below.
Time      Actual Vehicles   Counted Vehicles
1 min     6                 5
3 mins    8                 6
4 mins    10                7
5 mins    13                10
6 mins    14                11
7 mins    16                13
8 mins    17                14
9 mins    19                15
10 mins   20                16
11 mins   24                20
12 mins   24                20
13 mins   27                23
14 mins   30                25
15 mins   32                27
16 mins   34                29
17 mins   35                30
18 mins   37                32
19 mins   39                34
20 mins   40                35
From this test it is concluded that in 20 minutes of run time a total of 40 vehicles passed through the threshold line, while 35 vehicles were counted. From this data, the efficiency of the system is nearly 87.5 percent.
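The quoted efficiency follows directly from the counted versus actual totals:

```python
def efficiency_pct(actual, counted):
    """Percentage of actual vehicles that were counted."""
    return 100.0 * counted / actual

def error_pct(actual, counted):
    """Percentage of vehicles missed, relative to the actual total."""
    return 100.0 * (actual - counted) / actual

# The 20-minute run above: 40 actual vehicles, 35 counted.
eff = efficiency_pct(40, 35)   # 87.5
err = error_pct(40, 35)        # 12.5
```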
As this part of the project is purely software-based, there are no hardware results. In this part of the vehicle counting system, the very first step is loading the pretrained model for vehicle detection. The frames are then extracted from the input video, and boundaries are drawn around the vehicles using Matlab's built-in functions. The resulting boundaries around the vehicles are shown below.
After detection and tracking of the vehicles, the last part is counting and displaying the total number of vehicles. This is done by declaring a variable that is incremented on every detection of a new vehicle. The counting results are given below.
After implementing the whole code, performing several tests, and setting the parameters of the threshold crossing lines, we achieved our goal of making a reliable system that can perform image processing in real time and display the number of passing vehicles at the output. The processing speed of the project on this system is very slow compared to an implementation on a Raspberry Pi using the Intel Movidius Compute Stick. Despite this, the project worked with roughly 95 to 97 percent accuracy. The final results are shown below.
Figure 5.3: Final Output
The table below contains the data obtained by performing tests with respect to time; the results are estimated as follows.
Table 5.3: Error Analysis for Matlab
Time % Error
5secs 0%
10secs 0%
20secs 10%
50secs 7.5%
1min,10secs 21.6%
1min,40secs 5.6%
2mins,30secs 4.03%
3mins,30secs 4.6%
4mins 4.9%
5mins 2.7%
From this analysis it is concluded that the error varies with time; a minimum error of 2.7% was obtained by the end of the test input.
The table below contains the data collected while testing the input in PyCharm Community Edition.
Table 5.4: Error Analysis for PyCharm
Time % Error
1min 16%
4mins 30%
6mins 21%
10mins 20%
12mins 16.6%
13mins 14.8%
15mins 15.6%
16mins 14.7%
18mins 13.5%
20mins 12.5%
The above data gives the error with respect to time; it is observed that after 20 minutes of video, the error was 12.5%.
The same input video was tested 5 times in order to analyze the data and calculate the results. The table below contains the data collected from these tests. Note that the testing time is 5 minutes.
Table 5.5: Iteration Error Analysis for Matlab
The analysis shows that across different tests with the same input, the results vary and the % error fluctuates, but within a very low margin in Matlab.
The table below contains the data obtained from 5 iterations of the same video input. Note that the observing time is 20 minutes.
Iteration   Actual Vehicles   Counted Vehicles   % Error
1           40                35                 12.5%
2           40                37                 7.5%
3           40                37                 7.5%
4           40                34                 15%
5           40                35                 12.5%
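The spread across the five iterations can be summarized with a mean count and a mean error; the counts below are taken from the table, while the summary statistics are computed rather than reported in the original.

```python
counted = [35, 37, 37, 34, 35]   # counts from the five 20-minute iterations
actual = 40

mean_count = sum(counted) / len(counted)
errors = [100.0 * (actual - c) / actual for c in counted]
mean_error = sum(errors) / len(errors)
```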
Analyzing the results shows very little fluctuation in the counted number when the same input is given repeatedly.
The last test was performed using different videos as input, with the following results. These results show that the system can efficiently count vehicles regardless of the environment of the input.
Results Analysis for PyCharm
The table below contains the data obtained from tests on different videos. Note that the processing time is kept constant at 20 minutes.
Table 5.8: Test Results on PyCharm
Test   Actual Vehicles   Counted Vehicles   % Error
1      40                35                 12.5%
2      86                78                 9.3%
3      43                37                 13.9%
From these results it is concluded that this code also counts vehicles with the same efficiency regardless of the input environment.
The detector in the Matlab part sometimes behaves abnormally, occasionally counting the shoulder blocks of the road.
The Python detector works very accurately and always detects the vehicles in the frame, though it sometimes assigns the wrong label, e.g. labeling a car as a truck.
The trackers in both cases are not very efficient, as they sometimes stop tracking, which affects the counting.
The counter also sometimes fails to increment even when a vehicle passes through the threshold.
The overall accuracy of the system is 90 to 95 percent for the Matlab code and 85 to 90 percent for the Python code.
Chapter 6
The Smart Vehicle Counting System using image processing is one of the leading works in the field of automation toward a new era of traffic flow and control systems for cities. The system is built around a Raspberry Pi, a capable single-board computer that reaches high speed when interfaced with the Intel Compute Stick. The system can be deployed on roads and at parking lot entrances to control the flow of vehicles. The code is written using a pretrained ResNet model containing 230 different vehicle models, and the OpenCV-based implementation is written in the Python programming language for deployment on the Raspberry Pi.
For future work, a modified model containing more kinds of vehicles could be built to increase accuracy. Pakistan has some distinctive kinds of vehicles that ResNet does not cover, which is why the model is less accurate in Pakistan. The project could also be interfaced with traffic lights to organize traffic on the roads. With some improvement, the system could monitor and organize the flow of emergency vehicles throughout a whole city: by interlinking different signals, it could let an emergency vehicle pass easily and reduce traffic congestion.
References
[3] F. Liu, Z. Zeng and R. Jiang, "A video-based real-time adaptive vehicle-counting system for urban roads," PLOS ONE, 2017. [Online]. Available: https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0186098 [Accessed 05 June 2020]
[5] S. K. Bahadir and F. Kalaoglu, "Science Direct," 2016. [Online]. Available: https://www.sciencedirect.com/topics/engineering/ultrasonic-sensor [Accessed 30 May 2020]
[6] S. B. Somani, and H. S. Khatri, "Infrared-based system for vehicle counting and
classification," IEEE, No. 978-1-4799-6272-3, 2015 [Online] Available:
https://ieeexplore.ieee.org/document/7086998 [Accessed 05 June 2020]
[9] M.Baygin and M. Karakose and A. Sarimaden and E. Akin, "An Image
Processing based Object Counting Approach for Machine Vision Application"
International Conference on Advances and Innovations in Engineering
(ICAIE), 2018, [Online] Available:
https://www.researchgate.net/publication/319355836_An_Image_Processing_b
ased_Object_Counting_Approach_for_Machine_Vision_Application [Accessed
05 June 2020]
[12] P. Patel, M. Nandu and P. Raut, "Initialization of Weights in Neural Networks," ResearchGate, 2019. [Online]. Available: https://www.researchgate.net/publication/330875010_Initialization_of_Weights_in_Neural_Networks [Accessed 18 June 2020]
[13] X. Farhodov and O. H Kwon and K. W. Kang and S. H. Lee and K. P. Kwon, "
Faster RCNN Detection Based OpenCV CSRT Tracker Using Drone Data"
International Conference, IEEE, 2019 [Online] Available:
https://ieeexplore.ieee.org/document/9012043 [Accessed 18 June 2020]
[14] M. Luo and B. Zhou and T. Wang, "Multi-part and scale adaptive visual tracker
based on kernel correlation filter" PLOS ONE, 2020, [Online] Available:
https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0231087
[Accessed 18 June 2020]
[15] J. C. Freakin, "Raspberry Pi 2: Six Things You Can (And Can't) Do," Information Desk, 2015. [Online]. Available: https://www.informationweek.com/software/raspberry-pi-2-six-things-you-can-(and-cant)-do/a/d-id/1319064 [Accessed 11 July 2020]
[16] J. S. Cook, "Google Coral Camera vs. Raspberry Pi Camera V2" Arrow, 2019.
[Online]. Available: https://www.arrow.com/en/research-and-
events/articles/google-coral-camera-vs-raspberry-pi-camera-v2 [Accessed 11
July 2020].
[18] L. Hughes, "How to Power a Raspberry Pi with Batteries," Arrow, 2016. [Online]. Available: https://www.arrow.com/en/research-and-events/articles/battery-power-your-pi#:~:text=USB%20port%20powering%20is%20definitely,will%20fry%20a%20Raspberry%20Pi [Accessed 11 July 2020]
[20] N. Oh, "Intel Launches Movidius Neural Compute Stick: Deep Learning and AI
on a $79 USB Stick" ANANDTECH, 2017. [Online]. Available:
https://www.anandtech.com/show/11649/intel-launches-movidius-neural-
compute-stick. [Accessed 11 July 2020].