

With the rise in population and in the number of cars on the road, vehicle observation and traffic monitoring have become necessary tools for road users who want to avoid traffic problems. The aim of this document is to present an economical way to count vehicles. Here, the Raspberry Pi camera module helped monitor traffic regardless of vehicle colour, size and angle. The algorithm included code that identified vehicles of nearly the same size or colour, and distinguished between large and small vehicles (for example, buses and cars). Subsequently, the counting was repeated under different lighting conditions to verify the consistency of the experiment. The counting system also shows the traffic light state based on the number of cars.


With the increase in population and cars on the road, vehicle monitoring and traffic monitoring have become important tools for passengers to avoid traffic problems. The purpose of this document is to present an economical way to count the number of vehicles. Here, the Raspberry Pi camera module helps monitor traffic regardless of colour, size and angle. The algorithm includes a coding system that identifies vehicles of nearly the same size or colour, and the difference between large and small vehicles (for example, buses and cars). After that, the counting involved different lighting conditions to confirm the consistency of the experiment. The counting system also shows the traffic light state based on the number of cars.


1.1 Introduction

Object counting is a very important image-processing technique applicable in several industrial applications. Examples of such applications include counting the number of products passing along a conveyor, counting the number of vehicles passing a particular point in a given time, or counting the number of individuals of a selected species in a nature park.

Cameras have become standard hardware and a necessary feature in many mobile devices. These developments have shifted computer vision from a niche tool to an increasingly common tool for many applications, such as automatic face recognition software, gaming interfaces, industrial inspection, biometric automation, medical imaging and planetary exploration.

The Raspberry Pi is one of those mobile devices, and it comes with a built-in camera slot. There are a variety of applications that can be built with the Pi camera: hobbyists use it to develop game programs and robotic applications, for example running a robot that uses a given instruction-set image to turn left, turn right or stop.

The project focused on the design and development of a Raspberry Pi-based system that uses a camera and is able to detect and count objects within a place.

Python was the programming language chosen for this project. This is because it is a very powerful language and is compatible with the Pi. Furthermore, it provides rapid application development, and there are online communities that program in Python on the Raspberry Pi.

1.2 Research Objective

The main objective of this research is to produce a novel taxonomy of traffic surveillance schemes used to avoid congestion, to review existing urban traffic management plans that avoid congestion and prioritise emergency vehicles, and to establish the basis for future research.

To achieve this, a few steps need to be carried out, as stated below:

i. To detect and count vehicles efficiently.

ii. To study how to program the Raspberry Pi using Python and OpenCV.

1.3 Problem Statement

There are several applications that use cameras, particularly in surveillance systems, such as the cameras installed in many buildings and streets of a town. The demand for surveillance systems is increasing significantly. Analysing the images from these cameras requires knowledge of artificial vision, which is a new discipline, especially in Third World countries. There is a gap that can be filled with additional knowledge in the field of artificial vision; this project provides a partial solution.

1.4 Project Scope

The project will target the use of the Raspberry Pi as an instrument capable of imitating, to some extent, human vision. To achieve this, a deeper understanding of a programming language is needed. Python has been used as the programming language of choice because it is compatible with the Raspberry Pi. It is also a powerful language that lends itself to rapid application development. Finally, there are several online communities that use it to program the Pi.

1.5 Project Expected Outcome

The framework is intended for the detection, tracking and counting of various moving vehicles. It may additionally be developed into an alarm system.

1.6 Thesis Organization

The remaining chapters are organized as follows:

Chapter 2 The background, the Raspberry Pi and previous work done by other researchers are discussed.

Chapter 3 Possible research gaps associated with the project are identified, and the methodology and structure of the project are given.

Chapter 4 The implementation of the Raspberry Pi-based counting system and the experiments carried out with it are described.

Chapter 5 The results of the experiments set out in Chapter 4 are compared and conclusions are drawn from the comparison. Any issues with the results and other studies on the topic are also highlighted.


This chapter examines artificial vision and digital signal processing, with a strong emphasis on image processing and image structure. A comparative analysis of the human eye and cameras is also given. In addition, image detection algorithms are analysed thoroughly with a brief summary of detection performance. Finally, there is a general description of OpenCV and the Pi.

2.1 Computer Vision

According to A. Guan, S. H. Bayless and R. Neelakantan, artificial vision is the process of using an imaging device to capture images and then using a computer to analyse these images to extract information of interest [1]. It can be defined simply as the emulation of human visual capability using computers; that is, teaching computers how we see [4].

It involves the following steps: image acquisition, image manipulation, image comprehension and decision making. The main technologies that drive this development are signal processing, multiple-view geometry, optimisation, pattern recognition, machine learning, and hardware and algorithms.

The areas of application of computer vision include, among others, the automotive industry, photography, industrial video and industrial automation.

2.2 Digital Signal Processing

S. W. Smith defines DSP as the science of using computers to understand signals, which in science and engineering include images from remote space probes, voltages generated by the heart and brain, radar and sonar echoes, seismic vibrations and countless other applications.

It differs from other computing areas in the distinctive type of data it uses: signals. In most cases, these signals originate as sensory data from the real world: seismic vibrations, visual images, sound waves, among others. DSP is also defined as the mathematics, algorithms and techniques used to manipulate these signals once they have been converted to digital form. This includes, but is not limited to, a wide range of objectives, such as visual image enhancement, object detection and counting, speech recognition and generation, and data compression for archiving and transmission [6].

2.3 Digital Image Processing

According to S. W. Smith [7], images are a description of how a parameter varies over a surface. He states that normal visual images derive from variations in light intensity over a two-dimensional plane.

Images are signals with special characteristics. First, whereas most signals are a measure of a parameter over time, images are a measure of a parameter over space (distance). Secondly, images contain a great deal of information; for instance, ten megabytes may be needed to store one second of television video, more than a thousand times the space needed for a speech signal of the same length. Finally, the final judgement of quality is often a subjective human assessment rather than an objective criterion. These three special features make image processing a distinct subset of DSP [8].

Some of the application areas that emphasise the importance of digital image processing include medicine, particularly CT and MRI scanners; space imagery, in which images are of low quality and must therefore be processed to obtain useful information; and the commercial imaging business.

Since space images are not of good quality, DSP can improve the quality of such images in various ways, such as brightness and contrast adjustment, edge detection, noise reduction, focus adjustment and reduction of motion blur. For example, when a flat image of a spherical planet is taken, the distortion can be corrected into an accurate representation. Several individual images can also be combined into a single database, thereby improving the presentation; for example, a video sequence that simulates an aerial flight over the surface of a distant planet.

The large information content of images is a problem for systems sold in large quantities to the general public. Commercial systems should not be too expensive, a fact that does not sit well with large memories and high data transfer speeds.

One answer to this quandary is compression. Images, like voice signals, contain a vast amount of redundant information, and they can be compressed using algorithms that reduce the number of bits required to represent them. Television and other moving pictures are particularly suitable for compression, since most of the image remains the same from one frame to the next. Commercial imaging products that benefit from compression include video phones, computer programs that display motion pictures, and digital television.

2.4 Digital Image Structure

An image is represented by samples arranged in a two-dimensional array of rows and columns. As with 1-D signals, rows and columns can be numbered from 0 to (N-1) or from 1 to N; for instance, from 0 to 99 or from 1 to 100 if the total number of rows and columns is 100. If we have 100 rows and 100 columns representing a 2-D image array, the total number of samples representing the image is 10,000 (i.e. 100 * 100). Each sample is called a pixel in image jargon, the term being a contraction of the phrase "picture element".
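As a small illustrative sketch (not part of the original system), a grayscale image can be held in a NumPy array, and the pixel count described above is simply the product of the row and column counts:

```python
import numpy as np

# A 100-row by 100-column grayscale image: one sample (pixel) per cell.
image = np.zeros((100, 100), dtype=np.uint8)

rows, cols = image.shape
total_pixels = rows * cols   # 100 * 100 = 10,000 pixels
print(total_pixels)
```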

Images have their information encoded in the spatial domain, the image equivalent of the time domain. This implies that images are described by edges rather than by sinusoids, and that the sampling resolution is determined by the size of the features to be seen rather than by the formal constraints of the sampling theorem.

Aliasing is a phenomenon that also occurs in images; however, it is generally considered a nuisance rather than a major problem. S. W. Smith points out that striped suits look terrible on television because the repetitive pattern is above the sampling frequency, and the aliased frequencies appear as light and dark bands moving through the clothes as the person changes position [7].

A "typical" image has about 500 rows by 500 columns. Television, computer applications and general scientific research operate at this level of image quality. Images with fewer pixels, such as 250 by 250, are considered to have remarkably poor resolution.

2.5 Camera and Human Eye.

The structure and functionality of the human eye are similar to those of an electronic camera. Both are based on two main components: the lens assembly, which captures part of the light emanating from an object and focuses it on the image sensor; and the image sensor itself, which transforms the light pattern into a signal, electronic in one case and neural in the other.

Figures 2.1 and 2.2 illustrate the main structures in the human eye and the electronic camera respectively. Both are sealed enclosures with a lens mounted at one end and an image sensor at the other. The eye is filled with a transparent liquid, whereas the electronic camera is filled with air. Each lens system has two adjustable parameters: focus and iris diameter.

If the lens is not focused properly, each point on the object is projected onto a circular region of the image sensor, which makes the image blurred. In cameras, focusing is achieved by physically moving the lens relative to the image sensor, whereas the eye contains two lenses: the cornea and an adjustable lens inside the eye. The cornea does most of the light refraction but is fixed in shape and position. Focus adjustment is achieved by the inner lens, a flexible structure that can be deformed by the action of the ciliary muscles. As these muscles contract, the lens changes shape to bring the object into focus.

In each system, the iris is used to control the amount of light reaching the image sensor and, therefore, the brightness of the image projected onto it. The iris of the eye is made of opaque muscle tissue that can contract to change the size of the pupil (the light opening). The iris in a camera is a mechanical assembly that performs the same function.

Figure 2.1: The Human Eye [9]          Figure 2.2: An Electronic Camera

2.6 Edge Detection Algorithms

Edges can be defined as simple discontinuities in the image signal. Edges usually occur at points where there is a large variation in the brightness values of an image and, as a result, typically indicate the boundaries of objects in a scene. However, large brightness changes may also correspond to surface markings on objects. Points of tangent discontinuity in the brightness signal, rather than simple discontinuities, may also indicate an object boundary in the scene [10].

Edge detection is an important area in the field of artificial vision. It is traditionally implemented by convolving the signal with some form of linear filter, usually a filter that approximates a first- or second-derivative operator. Knowledge of the edges helps the segmentation and recognition of objects. Edges can show where shadows fall in an image, or any other change in image intensity [11].

2.6.1 Laplacian of Gaussian Detection

The Laplacian of Gaussian (LoG) was invented by Marr and Hildreth (1980), who combined Gaussian filtering with Laplacian filtering. This algorithm is not widely used in artificial vision. Gaussian edge detectors are symmetrical along the edge and reduce noise by smoothing the image. The most important operators are Canny and ISEF (Shen-Castan), which convolve the image with the derivative of a Gaussian for Canny and with the ISEF filter for Shen-Castan [11].

2.6.2 Marr – Hildreth Edge Detector

The Marr-Hildreth edge detector is a gradient-based operator that uses the Laplacian to take the second derivative of an image. It was the most popular edge operator before the Canny edge detector was developed. It works on the premise that if there is a step difference in the intensity of the image, it will be represented in the second derivative by a zero crossing [11].

The steps in this algorithm are the following:

i. Smooth the image with a Gaussian to reduce the number of errors caused by noise.

ii. Apply a two-dimensional Laplacian, often known as the "Mexican hat" operator, to the image.

iii. Loop over every pixel of the Laplacian of the smoothed image and look for sign changes. If there is a sign change and the slope across this sign change is greater than some threshold, the pixel is marked as an edge.
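The steps above can be sketched in NumPy. This is a minimal illustration, not the project's code: it samples a "Mexican hat" (LoG) kernel, convolves it with a synthetic step-edge image, and marks thresholded zero crossings; the kernel size, sigma and threshold are assumed values.

```python
import numpy as np

def log_kernel(size=9, sigma=1.4):
    """Sampled Laplacian-of-Gaussian ('Mexican hat') kernel."""
    ax = np.arange(size) - size // 2
    x, y = np.meshgrid(ax, ax)
    r2 = x ** 2 + y ** 2
    k = (r2 - 2 * sigma ** 2) / sigma ** 4 * np.exp(-r2 / (2 * sigma ** 2))
    return k - k.mean()  # zero-sum kernel: flat regions give zero response

def convolve2d_valid(img, k):
    """Naive 'valid' 2-D convolution (kept dependency-free)."""
    kh, kw = k.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

def zero_crossings(resp, thresh=1.0):
    """Mark pixels where the LoG response changes sign with slope > thresh."""
    edges = np.zeros(resp.shape, dtype=bool)
    horiz = (resp[:, :-1] * resp[:, 1:] < 0) & \
            (np.abs(resp[:, :-1] - resp[:, 1:]) > thresh)
    vert = (resp[:-1, :] * resp[1:, :] < 0) & \
           (np.abs(resp[:-1, :] - resp[1:, :]) > thresh)
    edges[:, :-1] |= horiz
    edges[:-1, :] |= vert
    return edges

# Synthetic step edge: dark left half, bright right half.
img = np.zeros((32, 32))
img[:, 16:] = 100.0
resp = convolve2d_valid(img, log_kernel())
edges = zero_crossings(resp)  # True only near the intensity step
```

Because the kernel sums to zero, flat regions respond with exactly zero, so the only sign changes (and hence the only marked edges) occur where windows straddle the step.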

2.6.3 Canny Edge Detector

The Canny detector is widely considered the standard edge detection algorithm in the industry. It was first created by John Canny for his master's thesis at the Massachusetts Institute of Technology in 1983 [12], and it still outperforms many of the newer algorithms that have been developed. It arises from the earlier work of Marr and Hildreth, who were interested in modelling the early stages of human visual perception.

Canny treated edge detection as a signal-processing optimisation problem and, accordingly, developed an objective function to be optimised [12]. The solution to this problem was a rather complex mathematical function, but Canny found several ways to approximate and optimise the edge-search problem.

2.7 Related Problem

Table 2.7 below shows a comparison of related work.

Table 2.7: Comparison of Related Work

System: Vehicle Tracking System based on an Embedded Platform
Task: The proposed procedure obtains tracking information about the vehicle, such as vehicle selection (targeted identification), location, velocity, date and time, into the data of the Raspberry Pi.
Deliverables: A smartphone application developed in the Java language using the Eclipse Integrated Development Environment (IDE).
Objective: To understand vehicle target identification, date, time and the information held on the Raspberry Pi.

System: Raspberry Pi-Based Vehicle Tracking System
Task: The proposed design provides data on vehicle identity, speed and position on a real-time basis.
Deliverables: The data are collected by the Raspberry Pi using several different modules and dispatched to the station, where they are stored in a database and shown on a user-friendly graphical user interface (GUI).
Objective: To track, monitor and survey vehicles.

System: Predicting Bus Arrival Time with Mobile Phone based Participatory Sensing
Task: The proposed system achieves outstanding prediction accuracy compared with bus-operator-initiated and GPS-supported solutions.
Deliverables: Relying primarily on cheap and widely available cellular signals, the proposed system provides a cost-efficient solution to the problem.
Objective: To encourage additional participants to bootstrap the system, as the number of sharing passengers affects the prediction.

System: Vehicle Position Tracking System using GPS and GSM
Task: In the proposed system, a vehicle tracking system is designed along with monitoring of the car's parameters through GSM technology.
Deliverables: The vehicle is tracked using GPS technology, the parameters are monitored using sensors, and the status of the vehicle is sent via GSM to the owner, thereby providing security.
Objective: To provide security systems using GPS and GSM.

System: Embedded Based Vehicle Monitoring and Tracking System
Task: The project implements a monitoring application within the vehicle using a CAN bus, which can be used for communication between four different nodes.
Deliverables: The mobile device is capable of sending data to a server using GPS (Global Positioning System) and a cellular internet connection.
Objective: To provide a low-cost means of monitoring a vehicle's performance and tracking it by communicating the obtained data to a mobile device via Bluetooth.

Referring to Table 2.7, the traffic monitoring system based on the Raspberry Pi is a project that uses the internet as the link from the Raspberry Pi to notify the user through email or an application. This project is more economical for the user compared with the other projects.

2.8 Summary

This section highlighted three different types of research areas, with some examples of each. Previous work done by other researchers inspires new researchers in traffic surveillance. Because the topics are broad, the research area is aimed at improving traffic management. The need for advanced traffic surveillance systems using new technologies is increasing day by day as security becomes a very important issue for everybody. Chapter 3 then presents the methodology used to perform this research.


This chapter discusses the methodology used to solve the problem established in Chapter 1 (see the problem statement on page 2). Section 3.1 presents the model requirements of the project, while Section 3.2 defines the model phases. Section 3.3 highlights the minimum hardware and software requirements necessary to ensure the success of the prototype. Section 3.4 defines the project's WBS, Section 3.5 describes the Gantt chart of activities to complete the project, and Section 3.6 reviews the budget and the cost of the hardware and software requirements. Section 3.7 summarizes the discussion in this chapter.

3.1 Introduction

A prototyping model is used in this system, as shown in Figure 3.1. This model works best in situations where not all the details or requirements are known beforehand. By using this model, a researcher can get an actual feel of the system. The model has six phases:

i. Gather Requirement
ii. Quick Design
iii. Build Prototype
iv. Prototype Validation
v. Refine Prototype
vi. Product

Figure 3.1: Prototype Model

3.2 Method

3.2.1 Requirements gathering and analysis

The first stage of the prototyping model begins with requirements analysis, in which the requirements of the system are defined in detail. Users are analysed in order to understand the requirements of the system; in this analysis, the user is one of the main sources of requirements.

3.2.2 Quick Design

Once the requirements are known, a preliminary design, or quick design, is created for the system. It is not a detailed design; it includes only the important aspects of the system, giving the user an initial idea of the system. A quick design helps in developing the prototype. Figure 3.2.2 provides an overview of the system implementation used in this study.

Figure 3.2.2: Block diagram shows the integration of the hardware

3.2.3 Build prototype

The information gathered from the quick design is used to build the first prototype, which represents a working model of the required system.

3.2.4 Prototype validation

Data are collected and analysed, from the design phase of the method through to production, to establish scientific evidence that the method is able to consistently deliver a quality product.

3.2.5 Refining prototype

This process continues until all the requirements specified by the user are met. Once the user is satisfied with the developed prototype, a final system is developed based on the final prototype.

3.2.6 Engineer product

Once the requirements are met, the user accepts the final prototype. The final system is carefully evaluated, followed by periodic maintenance to avoid large-scale failures and minimize downtime.

3.3 Resources

3.3.1 Hardware Specifications

No  Hardware     Descriptions
1.  Laptop
    - Model: Lenovo IdeaPad Z480
    - Processor: Intel Core i5 (3rd Gen) 3210M / 2.5 GHz
    - Graphics: NVIDIA GeForce with CUDA
    - Memory: DDR4 8GB
    - OS: Windows 7 Home Premium 64-bit
    - Storage: 500 GB hard drive
2.  USB Camera
    - Resolution: 1280 (H) x 720 (V) pixels
    - Supported OS: UVC on Windows, Android, Linux and Mac OS
    - Lens: 8-megapixel lens
    - Technology: UVC 1.1 support

Table 3.3.1: Hardware Specifications

3.3.2 Software Specifications

No  Software         Descriptions
1.  Debian Raspbian
    - Model: Raspberry Pi Model B+
    - CPU: ARM11 @ 700MHz
    - RAM: 512MB SDRAM
    - Storage: Micro-SD
    - GPU: 250MHz VideoCore IV
    - Video & Audio: HDMI / Composite / Headphone

Table 3.3.2: Software Specifications

3.3.3 Hardware Requirements

This project was created using the following hardware components: the Raspberry Pi, the Pi camera and a power supply.

3.3.4 Raspberry Pi and SD card

The design for this project uses the Pi Model B+. To run this project efficiently, the Raspbian Jessie OS was installed on a 16 GB SD card. In contrast to Raspbian Jessie Lite and the Wheezy OS, Raspbian Jessie offers a graphical user interface, so it was not necessary to use PuTTY to access the Raspberry Pi remotely. With xrdp installed on the Pi, you can connect to the Raspberry Pi remotely using the Windows Remote Desktop Connection application. The Raspberry Pi was developed by the Raspberry Pi Foundation in the United Kingdom to advance computer education. The second version of the Raspberry Pi is used in this project.

3.3.5 Raspberry Pi Camera

The Raspberry Pi camera board connects directly to the CSI connector on the Raspberry Pi. The camera module connects to the Raspberry Pi via a 15-pin ribbon cable to the dedicated 15-pin MIPI CSI connector, specially designed for connecting cameras. It can deliver a clear 5-megapixel image or 1080p HD recording at 30 frames/sec.

3.3.6 Power Supply

Powering the Raspberry Pi is quite easy: it is powered via a micro-USB connection that can supply at least 700 mA at 5 V.

3.3.7 Ethernet Cable

There are many ways to access the Pi. It is impractical to work on the Raspberry Pi alone, since it does not have a monitor or keyboard, so an AV/HDMI screen and a keyboard are needed. However, it can also be accessed remotely by connecting it to a laptop or PC via an Ethernet cable. The latter method was adopted for its convenience.

3.3.8 Software Used

The software used to successfully complete the project objectives includes Python, OpenCV, Microsoft Office Visio and Word, the Raspbian Jessie OS, and the Remote Desktop Connection application in Microsoft Windows 7, which provides remote access to the Pi.

3.3.9 Modelling

The overall design was visualized at the hardware level by the block diagram of Figure 3.2.2, while the flow chart of Figure 3.3.9 gives the steps used in implementing the program.


The flow chart comprises the following steps:

i. Import libraries, then acquire and load the image.
ii. Convert the image to grayscale.
iii. Detect edges.
iv. Apply a closing operation.
v. Detect outlines and approximate all contours.
vi. For each contour, check whether it is the contour of the required object; if it is, draw the contour and increment number_of_objects.
vii. After the last object has been processed, display the number of objects.
Figure 3.3.9: Flow chart of the program

3.4 WBS

Figure 3.4 shows the Work Breakdown Structure (WBS), whose levels provide further details and definitions of the work.

Figure 3.4: WBS

3.5 Gantt Chart

Figure 3.5 below shows the Gantt chart, a form of bar chart that illustrates the project timeline. This Gantt chart shows the schedule for Final Year Project 1 (FYP 1) only, which began on 2nd August 2018 and finished on 5th December 2018.

Figure 3.5: A Gantt Chart

3.6 Budget/Costing

The following is a review of the budget and costing of the hardware and software requirements. Table 3.6.1 shows the estimated budget and price for the hardware and Table 3.6.2 for the software.

3.6.1 Hardware Estimated Budget

No  Equipment                Quantity  Remarks (RM)
1.  Micro SD Card 8GB        1         39.00
2.  Raspberry Pi 8MP         1         99.00
3.  Raspberry Pi 3 Model     1         162.00
4.  OLED I2C Blue Display    1         25.00
5.  DC-DC Recom 5V 5W        1         32.76

Table 3.6.1: Hardware Estimated Budget

3.6.2 Software Estimated Budget

No  Equipment              Quantity  Remarks
1.  Raspbian Stretch Lite  1
2.  Python 3.7.0           1

Table 3.6.2: Software Estimated Budget

3.7 Summary

In this system, the traffic level before a vehicle enters a traffic section is calculated, together with a live broadcast and updates on the page, and road signs are controlled based on density using the Raspberry Pi. The method could be extended by adding RF modules as wireless devices to clear the traffic as a specific car approaches, until it is within the region.


In this section, we present the experimental results of the approach, applied both to the acquired data set and to live video transmitted from the Raspberry Pi camera. The Pi camera needs correct parameters and values, such as FPS and resolution, for good performance. The Raspberry Pi can be accessed from any computer, laptop, mobile phone or other device, and the vehicle information can be obtained remotely in real time. The algorithm is executed primarily on the Raspberry Pi with the Pi camera and is used to capture video with C++ and the OpenCV libraries; integrating OpenCV with a C++ IDE on the Raspberry Pi was the hardest part of the desktop setup. The Raspberry Pi used is equipped with 1 GB of RAM and an 8 GB memory card. The operation was performed on video captured in urban traffic environments, transit scenes and even on basic data sets.

4.1 Detection Robustness

The introduction of the Raspberry Pi vehicle counter greatly increases the robustness of car detection. It helps to avoid detecting moving objects that are not vehicles, as shown in Figures 4.1, 4.1.1 and 4.1.2. In our observation, the proposed algorithm is robust to, but not limited to, pedestrians, motorcycles, bicycles, snowy weather, rainy weather, etc.

Figure 4.1: A running person is not detected, while a vehicle is properly detected.

Figure 4.1.1: Pedestrian is not detected

Figure 4.1.2: Motorcycle is not detected.

4.2 Vehicle Detection

To collect the live data set, the Raspberry Pi has to be connected to the power bank and the USB camera. The live video input is shown in Figures 4.2, 4.2.1 and 4.2.2. There is no wired connection between the personal computer and the Raspberry Pi; a remote system can access the live video stream using the static IP address assigned to the Raspberry Pi. The captured live video stream is processed by the proposed system to analyse the video flow of the traffic, and the results of the algorithm are displayed (see Fig. 4.2.3).

Figure 4.2: Capture live video streams using Pi and Camera.

Figure 4.2.1: Raspberry Pi taken along with the Pi camera powered with the power
bank positioned on the bridge.

Figure 4.2.2: Sample live video sequence

4.3 Background Subtraction

Background subtraction provides the starting point for the vehicle detection method. We deployed the BGS Library to accomplish this task. BGS is an OpenCV-based library that offers more than 30 different background subtraction methods. After an analysis of accuracy and computational complexity, the frame difference algorithm was chosen. The resulting frame after background subtraction is shown in Figure 4.3.

Figure 4.3: Sample frame after background subtraction.
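Since the frame-difference method was selected, its core idea can be sketched with plain NumPy (an illustration under assumed values, not the BGS Library's implementation): pixels whose grayscale value changes between consecutive frames by more than a threshold are marked as foreground.

```python
import numpy as np

def frame_difference(prev_frame, curr_frame, thresh=25):
    """Binary foreground mask: |curr - prev| > thresh, per pixel."""
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return (diff > thresh).astype(np.uint8) * 255

# Two synthetic grayscale frames: a bright block ("vehicle") moves right.
prev_f = np.zeros((120, 160), dtype=np.uint8)
curr_f = np.zeros((120, 160), dtype=np.uint8)
prev_f[40:80, 20:60] = 200   # vehicle at its old position
curr_f[40:80, 30:70] = 200   # vehicle shifted 10 px to the right
mask = frame_difference(prev_f, curr_f)
# mask is 255 where the block just left or just arrived, 0 elsewhere.
```

The cast to a signed type before subtracting avoids the unsigned-integer wrap-around that would otherwise corrupt the difference image.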

To test the algorithm implemented on the Raspberry Pi, a trial was carried out on urban roads at Jalan Pahang. OpenCV and Code::Blocks were set up on the Raspberry Pi to implement the algorithm using Python and OpenCV. The figures shown in 4.2.1 and 4.2.2 are the live video sequence taken from the Pi. With the remote laptop we accessed this data and took screenshots of the results of the algorithm; the screenshot of the remote desktop is shown in Fig. 4.3.1. The Pi camera was also interfaced to the Pi and placed on the bridge to capture the live video data and stream it to the algorithm, as shown in Fig. 4.2.1. The Pi was powered by the power bank, and the results were accessed from the remote laptop.

Figure 4.3.1: Screenshot of a remote computer that accesses Raspberry Pi.

The advantages of having the Raspberry Pi are the following:

1. The video is processed and transmitted in real time over the network, together with the vehicle density, instantly.

2. The video data can be accessed in real time using computers, laptops, mobile phones, etc.

3. The device can be placed on any road for traffic monitoring.

4.4 Result

4.4.1 Performance

The results of vehicle detection performed on the Raspberry Pi are described in
Table 4.1. The performance of the proposed algorithm is good with respect to
detecting vehicles, tracking them, and counting the number of vehicles. The
system is compared with existing systems in the comparative study section.

Table 4.1: Tabular representation of results.

No     Resolution   Video sequence                          T_P   F_P   F_N   Accuracy   Quality   Integrity
I.     320 * 240    road with camera 1                      20    1     1     70         20        10
II.    640 * 480    sequence 1 (2 min)                      15    1     1     70         40        30
III.   640 * 480    sequence 2 (3 min)                      35    1     1     80         60        50
IV.    640 * 480    camera in motion (1 ...)                10    1     1     55         80        70
V.     320 * 240    with lower passage (3 ...)              20    0     1     40         10        90
VI.    640 * 480    within the city limits (12 mins)        60    11    1     20         40        60
VII.   720 * 640    with less movement time (3:25 s)        65    3     2     80         70        80
VIII.  720 * 640    shake time (2:25 s)                     42    2     4     85         90        10
IX.    720 * 640    on highway with fast ...                120   27    15    90         70        80
X.     —            live Pi and Pi camera (20 s)            5     0     0     100        50        60
XI.    320 * 240    rainy video sequence (20 s)             34    1     1     90         30        40

True_Positive (TP): number of true vehicles correctly detected.

False_Positive (FP): number of non-vehicle objects incorrectly detected as vehicles.

False_Negative (FN): number of vehicles that could not be detected.
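The thesis does not state how the Accuracy, Quality, and Integrity columns are computed from these counts. For reference, the sketch below uses standard detection-quality measures built from the same TP/FP/FN values; these textbook definitions are an assumption, not necessarily the exact formulas behind Table 4.1.

```python
def detection_metrics(tp, fp, fn):
    """Common detection-quality measures built from TP/FP/FN counts."""
    correctness = tp / (tp + fp)       # share of detections that are real vehicles
    completeness = tp / (tp + fn)      # share of real vehicles that were detected
    quality = tp / (tp + fp + fn)      # combined measure penalizing both error types
    return correctness, completeness, quality

# Row I of Table 4.1: TP=20, FP=1, FN=1
c, m, q = detection_metrics(20, 1, 1)
# correctness ≈ 0.952, completeness ≈ 0.952, quality ≈ 0.909
```

A row with no false detections and no misses, such as the live Pi camera sequence (TP=5, FP=0, FN=0), scores 1.0 on all three measures.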

4.5 Summary

This chapter presented the experimental results and their analysis. The chapter was
divided into two sections. The first section discussed the results and analysis of live
video transmission. The second section evaluated the performance of all the test cases
mentioned earlier. The analysis and the thesis are concluded in Chapter 5.


This chapter concludes the analysis and this thesis, with recommendations and future improvements.

5.1 Project Accomplishment

The proposed algorithm relies on a Raspberry Pi with a Pi camera to capture the
traffic scene. Moreover, the acquired traffic videos can be run and analyzed on the
Raspberry Pi. The Raspberry Pi, together with its camera, is held at a remote location
to capture traffic videos; it can be controlled from a desktop, laptop, or
Android device, and can analyze the traffic information.

The objectives set for this research were to access the remote Raspberry Pi from computers
or any other device: a static IP address was assigned to the Raspberry Pi and connected to the
private network, so that traffic data can be accessed from any remote location.
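On Raspberry Pi OS, such a static address is commonly pinned in `/etc/dhcpcd.conf`; the interface name and addresses below are placeholders for a local network, not the ones used in this project:

```
# /etc/dhcpcd.conf -- example static address for the wired interface
interface eth0
static ip_address=192.168.1.50/24
static routers=192.168.1.1
static domain_name_servers=192.168.1.1
```

After editing the file and rebooting (or restarting the dhcpcd service), the Pi remains reachable at the fixed address, so the remote desktop or SSH session always targets the same IP.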

5.2 Project Limitation

Despite its limitations, the system was able to produce good results when
analyzing the test data. The accuracy of the vehicle counting capability was 90%. One
limitation of this approach is that the Raspberry Pi is only able to handle side
views of vehicles. Consequently, this algorithm cannot be directly applied to traffic
intersections, where different vehicle views would be present.

5.3 Future Recommendation

Experimental results demonstrated on challenging data series, considered under
different viewing angles, rear views of the vehicles, and several camera
heights, show the flexibility and good accuracy of the proposed technique. One future task
that would make the system more reliable is the classification of vehicles. A
classification technique further improves detection performance, so it is useful to
add a classification step. This is one of our future tasks.

5.4 Summary

This chapter concluded the analysis and this thesis. It summarized the processes involved
before, during, and after the experiment was conducted. It also highlighted
the limitations of the research and the related studies that could start
from this work. Finally, the technique used in this research could also be applied by
other researchers measuring performance, particularly those working with a system built on a
Raspberry Pi and Pi camera for real-time vehicle detection, tracking, and counting.

