
Real Time Moving Vehicle Counter System Using OpenCV and Python

Sanika Divekar, Preeti Bailke


Information Technology
Vishwakarma Institute of Technology
Pune, India
sanika.divekar20@vit.edu
preeti.bailke@vit.edu
Abstract
In view of highway management, intelligent vehicle recognition and counting is becoming increasingly crucial. Vehicle detection and related statistics in highway monitoring video sequences are extremely important for intelligent highway traffic management and control. With the widespread use of traffic surveillance cameras, a large library of traffic video footage has become available for examination. The vehicle counting procedure offers accurate data about traffic flow, vehicle collisions, and traffic peak times on highways. Use of digital image processing on traffic camera video is a widely used technique for achieving these goals. The car counter classifier presented in this research is based on a combination of image processing algorithms such as object detection, edge detection, and frame differentiation. The system has been implemented using Python. This paper explains how to use multiple libraries for real-time image processing to achieve traffic flow counting and classification. The proposed system differs from existing systems by providing the feature to count cars in both day and night mode.

Keywords:
Traffic Monitoring, Computer Vision, Background Subtraction, Contours, Video Detection

1. INTRODUCTION
At first, different non-intrusive sensors were used, where manual methods of video capturing used magnetic bar, radar, ultrasonic, acoustic, and video imaging sensors to detect and count vehicles. Now, with advanced technology, intrusive sensors are used for the same purpose, including pneumatic road tubes, piezoelectric sensors, magnetic sensors, and inductive loops. Non-intrusive technology has advantages over intrusive technology, which requires traffic lanes to be closed, putting construction workers in danger. Non-intrusive sensors sit above the roadway surface and do not typically require a traffic stop or lane closure. Each of these sensors has advantages as well as limitations. However, manual counting by video or digital means has a high level of accuracy compared to other technologies [1].

Because of the increase in the number of automobiles, expressways, highways, and even roads are becoming overcrowded. Vehicle detection, tracking, classification, and counting are very important for military, civilian, and government applications, such as highway monitoring, traffic planning, toll collection, and traffic flow. Vehicle detection is a vital step in highway traffic management. This method presents an affordable, portable, and computer-vision-based system for detecting and counting moving vehicles. Images from a video sequence are converted into frames to detect moving vehicles after extracting the background. The system is built with OpenCV image development kits, and real-time video from a single camera is used to demonstrate the findings. This highway vehicle counting process has been developed by applying the following methods: background subtraction, image filtering, grayscale conversion, and the MOG algorithm, i.e., the Gaussian Mixture-based Background/Foreground Segmentation Algorithm. CV techniques such as thresholding, hole-filling, and adaptive morphological dilation are used to remove noise and enhance the foreground object. This system is additionally capable of counting moving vehicles from pre-recorded videos [5].

Many features, such as lighting, the shadow of the object, and the border of the vehicle, can be used while building vehicle counting systems. Vehicle detection must be implemented in an environment where light will be the main source used to count the vehicles. In the proposed system, the video of any traffic monitoring system can be set as an input, which will be converted into frames; reference backgrounds will be extracted, and detection of moving objects will be performed using OpenCV and Python.

2. BACKGROUND INFORMATION

2.1 VIDEO PROCESSING
Video processing is a subclass of Digital Signal Processing (DSP) that takes digitized real-world signals such as voice, audio, video, or position, and alters them mathematically. Using image processing, frames can be obtained from a video and converted into images, where each individual image is called a frame; the rate at which frames arrive is measured in frames per second (FPS). Motion is simply realized in this situation by comparing subsequent frames. OpenCV for video processing can be used to highlight certain regions of videos, remove improper lighting effects, eliminate camera motion, and remove edge artifacts; cv2.resize is an example of resizing an image [1]. Python's OpenCV library provides a number of features. NumPy is used for creating vectors and matrices which store information about the video after converting it into frames. OpenCV-Python, a package with MATLAB-style syntax, is used for numerical computations on the frames. All array structures in OpenCV are translated to and from NumPy arrays. This makes it easier to interface with other NumPy-based libraries like SciPy and Matplotlib [2].

2.2 RGB TO GREYSCALE CONVERSION
Image processing using OpenCV is used in video analysis to convert RGB color images to grayscale mode. The major purpose of this conversion is that processing grayscale photos produces more acceptable outcomes than processing RGB ones [5]. In video processing procedures, the series of acquired video frames should be converted from RGB color mode to a 0 to 255 grey level. When converting an RGB image to grayscale mode, the RGB values for each pixel are considered and a single value reflecting the brightness percentage of that pixel is produced as output [6].

3. PREVIOUS WORK
Various approaches have been considered to construct systems that can identify, count, and classify vehicles for use in a proposed surveillance model in an Intelligent Transportation System (ITS). This section includes a discussion of such systems as well as an understanding of the methods used to construct them. For numerous years, the use of image/video processing and object detection algorithms for vehicle detection and traffic flow prediction has drawn a lot of interest. One of the following methodologies has typically been used to detect and track vehicles [8]:

• Matching
• Threshold and segmentation
• Point detection
• Edge detection
• Frame differentiation
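Threshold-based segmentation, one of the methodologies listed above, reduces a grayscale frame to a binary foreground/background mask. The following pure-Python sketch shows the per-pixel rule (the system itself would use cv2.threshold on NumPy arrays; the threshold value of 127 and the tiny 2x3 frame are illustrative assumptions):

```python
def binary_threshold(gray_frame, thresh=127):
    """gray_frame: 2D list of grey levels in 0..255.
    Pixels brighter than `thresh` become foreground (255), else 0."""
    return [[255 if px > thresh else 0 for px in row] for row in gray_frame]

frame = [
    [12, 200, 130],
    [90, 255, 40],
]
mask = binary_threshold(frame)
# Only the bright pixels (200, 130, 255) are marked as foreground.
```

The same rule, applied to a background-subtracted frame, separates moving vehicles from the static road surface.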
One of the most important object detection research efforts, which
led to the development of Autoscope video detection systems, is
described in [8]. Forward and backward image differencing
methods [10] are used to extract moving automobiles in a roadway
view. Feature vectors from image regions [4], [9] prove to be quite
effective for vehicle detection. Accurate vehicle dimension
estimation using a set of coordinate mapping functions [7] is
depicted. A variety of boosting algorithms for object detection [8],
[5] utilizing machine learning approaches that can recognize and
classify moving objects by both kind and color are implemented.
An optimized virtual loop approach [11] for a video-based real-
time vehicle counting system has been proposed. In this approach,
real-time traffic monitoring cameras are installed on roads to
calculate the number of vehicles passing through. Counting is
accomplished in three steps: background subtraction, continuous
video frame differencing, and counting vehicles within the Region
of Interest (ROI) zone.
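The continuous-video-frame-difference step named above can be sketched in pure Python: subtract consecutive frames pixel by pixel and keep only large changes. (A real implementation would use cv2.absdiff on NumPy frames; the 2x3 frames and the change threshold of 25 below are illustrative assumptions.)

```python
def frame_difference(prev, curr, thresh=25):
    """Return a binary motion mask: 255 where the grey level changed
    by more than `thresh` between two consecutive frames, else 0."""
    return [
        [255 if abs(c - p) > thresh else 0 for p, c in zip(prow, crow)]
        for prow, crow in zip(prev, curr)
    ]

prev_frame = [[10, 10, 10], [10, 10, 10]]
curr_frame = [[10, 80, 10], [10, 90, 10]]  # a bright object entered
mask = frame_difference(prev_frame, curr_frame)
moving_pixels = sum(px == 255 for row in mask for px in row)
```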
Lei, M., et al. [12] proposed yet another video-based vehicle
counting technique. Surveillance cameras were employed in this
system, positioned at a somewhat high elevation to gather the
traffic video stream. Two main approaches are used in this
system, namely adaptive background estimation and Gaussian
shadow elimination. The system's accuracy is determined by the
visual angle and its ability to remove shadows and ghost effects.
Inability to classify vehicle type is the system's primary flaw.
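Adaptive background estimation of the kind used by Lei et al. is commonly implemented as a per-pixel running average of incoming frames; this pure-Python sketch shows that general idea rather than the authors' exact method (the learning rate of 0.1 and single-pixel frames are illustrative assumptions):

```python
def update_background(background, frame, alpha=0.1):
    """Blend the new frame into the background model, per pixel:
    bg <- (1 - alpha) * bg + alpha * frame."""
    return [
        [(1 - alpha) * b + alpha * f for b, f in zip(brow, frow)]
        for brow, frow in zip(background, frame)
    ]

bg = [[100.0]]                          # initial background brightness
for _ in range(3):                      # three frames of a brighter scene
    bg = update_background(bg, [[200.0]])
# The model drifts toward the new scene brightness (100 -> 110 -> 119 -> 127.1),
# so gradual lighting changes are absorbed while fast-moving objects are not.
```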

4. PROPOSED SYSTEM
The proposed system consists of three stages:

1. System Initialization: In this stage, the system is set up and initialized. A constant stream of video is recorded by the camera, and the video stream is transferred manually for processing.

2. Background Subtraction: At this stage, a set of frames is brought into focus, and background subtraction is performed on the frames that have been received from the video using OpenCV.

3. Vehicle Detection: At this stage, all moving vehicles/objects may be tracked and counted using the background-removed image. The proposed system works in two modes: pre-recorded video mode and real-time camera mode.

Fig.1. The Architecture Of The Proposed System

4.1 GREYSCALE IMAGE GENERATION AND IMAGE ENHANCEMENT
The vehicle detection technique should be applied to grayscale images for better results. Hence, each video frame undergoes RGB to grayscale conversion. Each frame should be brought in contrast to the background to produce an adequate threshold level before converting the input image. The power-law approach was utilized in this study as one of several grayscale conversions. The function cv2.cvtColor(input image, flag) is used for color conversion, where flag indicates the type of conversion. The parameter cv2.COLOR_BGR2GRAY is used to convert to grayscale [1].

Fig.2. Input RGB Video Frame
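The per-pixel computation behind the RGB-to-grayscale conversion and the power-law enhancement can be sketched in pure Python. The luminance weights below are the common ITU-R BT.601 values (an assumption, since the paper relies on cv2.cvtColor internally), and the gamma of 0.5 in the power-law step is illustrative:

```python
def rgb_to_grey(r, g, b):
    """Weighted sum of the RGB channels -> single brightness value."""
    return round(0.299 * r + 0.587 * g + 0.114 * b)

def power_law(grey, gamma=0.5, c=1.0):
    """Power-law (gamma) enhancement: s = c * r^gamma on a 0..1 scale,
    then rescaled back to 0..255. Gamma < 1 brightens dark pixels."""
    normalized = grey / 255.0
    return round(c * (normalized ** gamma) * 255)

grey = rgb_to_grey(200, 100, 50)   # a single orange pixel
bright = power_law(64)             # a dark grey level is brightened
```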


Fig.3. Grayscale Converted

4.2 EDGE DETECTION
Edge detection is an image processing approach that identifies points in a digital image that have discontinuities, or sudden changes in image brightness. The borders (or boundaries) of an image are the points where the image brightness fluctuates drastically. Each picture (video frame) comprises three key features, viz. edges, curves, and points. Among these attributes, edge pixels are a good choice. Edge pixels, which are the major features of passing automobiles in a roadway video frame, can be detected by processing picture pixels [2]. The Canny operator, which has been employed in this study, is one of the most used approaches to detect the edges of an image. Figure 4 shows the original image, while Figure 5 shows the image after conversion using morphological transformations. Morphological transformations are a set of straightforward procedures depending on the shape of a picture. They are usually applied to binary images and require two inputs: one is the original image, and the other is a structuring element or kernel that determines the operation's nature. Erosion and dilation are two basic morphological operators; there are also variations such as opening, closing, and gradient. The output of the edge detection procedure, showing the identified edge pixels, can be seen in Fig. 5.

Fig.4. Original Image

Fig.5. Edge Detection Result

4.3 BACKGROUND SUBTRACTION ALGORITHM
The algorithm used in the implementation of the proposed system is called BackgroundSubtractorMOG2, a Background/Foreground Segmentation Algorithm based on Gaussian Mixtures [14][15]. It employs an automatic way to select a suitable number of Gaussian mixtures for each pixel, unlike [5], which specifies the number of distributions for the generation of the background model. Furthermore, the algorithm is better at handling illumination changes in the scene. The algorithm also gives the ability to define whether the shadows of objects are to be detected or not; shadows will be detected and marked by the algorithm. For cases like this, several methods have been presented, some of which are implemented in OpenCV, such as the BackgroundSubtractorMOG algorithm [4], which uses Gaussian distributions to generate a model of the image's backdrop. MOG employs a technique that involves combining K Gaussian distributions to describe each backdrop pixel (K is 3 to 5). BackgroundSubtractorGMG is a background subtraction implementation in OpenCV that is based on images [11] and combines the background image estimating technique with Bayesian segmentation.

4.4 CONTOUR EXTRACTION
Contours are the boundaries of a shape, used for shape detection and recognition. For accurate contour finding, Canny edge detection is performed on a binary image. OpenCV provides the cv2.findContours() method to find the contours [3].

Fig.6. Green Contour of vehicle

4.5 COUNTING VEHICLE
Defining a boundary for each incoming and outgoing vehicle in the image identifies the Region of Interest (ROI) [3]. As the ROI, along with the imaginary line, plays an important role in categorization, the ROI requires close human monitoring [9]. For the ROI, the system executes the following sequence of operations: applying a background mask, deleting the mask, executing a binary threshold, morphology with erosion and dilation, and converting the frame to grayscale. Following these actions, contours are discovered [4].
Vehicle counting can be done by checking if the centroid of a vehicle has touched or crossed the imaginary line in the ROI. The imaginary line is the line that appears diagonally when two ROI points are connected. Once the centroid of a vehicle in the ROI crosses the imaginary line, the vehicle is counted. From Fig. 6, when the centroid of the vehicle crosses the imaginary line in the ROI, the counter variable "Total" and the variable for the appropriate category both increase.
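The crossing check behind this counting step can be sketched in pure Python: a vehicle is counted when its centroid's vertical position passes a count line between two consecutive frames. (The paper's line is diagonal; the horizontal line and the sample centroid tracks below are simplifying assumptions for illustration.)

```python
def count_crossings(centroid_tracks, line_y):
    """Count vehicles whose centroid moves from above the count
    line to on/below it between two consecutive frames."""
    total = 0
    for track in centroid_tracks:              # one (x, y) list per vehicle
        for (_, y0), (_, y1) in zip(track, track[1:]):
            if y0 < line_y <= y1:              # crossed downward this frame
                total += 1
                break                          # count each vehicle only once
    return total

tracks = [
    [(50, 90), (52, 110), (54, 130)],          # crosses y=100 -> counted
    [(200, 30), (201, 45), (202, 60)],         # stays above the line
]
total = count_crossings(tracks, line_y=100)
```

Comparing consecutive positions, rather than testing a single frame, prevents a slow vehicle sitting on the line from being counted repeatedly.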
Fig.7. Increment in corresponding counter variables when vehicle centroid touches imaginary line in ROI

5. CONCLUSION
The proposed system suggested an object detection and tracking method for highway surveillance video scenes and established vehicle detection from the standpoint of surveillance cameras. Python provides useful libraries like NumPy, Matplotlib, and SciPy, which help to count vehicles and classify the traffic. Compared to typical hardware-based traffic monitoring, the method described in this study is low-cost and stable, and it does not necessitate large-scale construction or installation work on existing monitoring equipment. The system is also able to detect vehicles at night time. The system requires the presence of foreground objects in order to collect contour properties and features.

6. FUTURE WORK
One of the limitations of the proposed method is that it is not efficient at detecting vehicles in crowded scenes, which affects the accuracy of the counting. This problem could be solved by introducing a second-level feature classification, such as classification on the basis of color. Another limitation is that it needs human supervision for defining the ROI. For vehicle counting, the user must draw an imaginary line where the centroids of the contours intersect; thus, the accuracy is dependent on the human supervisor's judgment. Furthermore, the camera angle also affects system performance. Hence, camera calibration techniques could be used for lane detection, a better view of the road, and calculating vehicle speed.

REFERENCES
[1] Reha Justin, Dr. Ravindra Kumar, "Vehicle Detection and Counting Method Based on Digital Image Processing in Python," International Journal of Electrical Electronics & Computer Science Engineering, 2018.
[2] Nisar, M. A., Kumari, A., Jethwa, H., & Chasker, N., "Real time traffic light control using image processing," International Journal for Scientific Research and Development, April 2016.
[3] Sania Bhatti, Mir Muhammad B. Talpur, & Mohsin A. Memon, "A Video based Vehicle Detection, Counting and Classification System," http://www.mecs-press.org, DOI: 10.5815, September 2018.
[4] P. KaewTraKulPong and R. Bowden, "An improved adaptive background mixture model for real-time tracking with shadow detection," Video-Based Surveillance Systems, Springer, pp. 135-144, 2002.
[5] N. Seenouvong, U. Watchareeruetai, C. Nuthong, K. Khongsomboon, "A Computer Vision Based Vehicle Detection and Counting System," IEEE 8th International Conference on Knowledge and Smart Technology (KST), pp. 224-227, 2016.
[6] P. Choudekar, S. Banerjee, M. K. Muju, "Real Time Traffic Light Control Using Image Processing," Indian Journal of Computer Science and Engineering, Vol. 2, No. 1, ISSN: 0976-5166.
[7] R. Gonzalez, R. E. Woods, "Digital Image Processing," 2nd Edition, Prentice-Hall, 2002.
[8] N. Chintalacheruvu, V. Muthukumar, "Video Based Vehicle Detection and Its Application in Intelligent Transportation Systems," Journal of Transportation Technologies, Vol. 2, pp. 305-314, 2012.
[9] P. Choudekar, S. Banerjee, M. K. Muju, "Real Time Traffic Light Control Using Image Processing," Indian Journal of Computer Science and Engineering, Vol. 2, No. 1, ISSN: 0976-5166.
[10] D. G. Lowe, "Distinctive Image Features from Scale-Invariant Keypoints," International Journal of Computer Vision, pp. 91-110, 2004.
[11] M. Tursun and G. Amrulla, "A video based real-time vehicle counting system using optimized virtual loop method," IEEE 8th International Workshop on Systems, Signal Processing and their Applications (WoSSPA), 2013.
[12] M. Lei, D. Lefloch, P. Gouton, K. Madani, "A video based real-time vehicle counting system using adaptive background method," IEEE International Conference on Signal Image Technology and Internet Based Systems (SITIS'08), pp. 523-528, 2008.
[13] Sania Bhatti, Liaquat A. Thebo, Mir Muhammad B. Talpur, Mohsin A. Memon, "A Video based Vehicle Detection, Counting and Classification System," MECS I.J. Image, Graphics and Signal Processing, September 2018.
[14] Z. Zivkovic, "Improved adaptive Gaussian mixture model for background subtraction," 17th International Conference on Pattern Recognition (ICPR), 2004.
[15] Z. Zivkovic and F. van der Heijden, "Efficient adaptive density estimation per image pixel for the task of background subtraction," Pattern Recognition Letters, vol. 27, no. 7, pp. 773-780, 2006.
