
Available online at www.sciencedirect.com

Procedia Computer Science 89 (2016) 726–731

Twelfth International Multi-Conference on Information Processing-2016 (IMCIP-2016)

An Efficient Approach for Detection and Speed Estimation of Moving Vehicles
Tarun Kumar and Dharmender Singh Kushwaha∗
National Institute of Technology, Allahabad 211 004, India

Abstract
Intelligent traffic management and surveillance are basic needs for smart city development in India. This includes the
detection of moving vehicles, estimation of their speed, and detection of speed limit violations together with the registration
number of the offending vehicle. This paper proposes an efficient and novel approach for detecting moving vehicles and
estimating their speed using a single camera in daylight or a properly illuminated environment. The proposed approach detects
and tracks every vehicle passing through the surveillance area and keeps a record of each vehicle's position. Vehicle tracking
is based on the relative positions of the vehicles in consecutive frames. This information may be used by an Automatic Number
Plate Recognition (ANPR) system to select the key frames in which a speed limit violation occurs. The average detection
accuracy achieved by the proposed approach is about 87.7%. The proposed approach uses a cropping operation to minimize
false positive detections on both sides of the road.
© 2016 The Authors. Published by Elsevier B.V. This is an open access article under the CC BY-NC-ND license
(http://creativecommons.org/licenses/by-nc-nd/4.0/).
Peer-review under responsibility of the organizing committee of the Twelfth International Multi-Conference on Information
Processing-2016 (IMCIP-2016).

Keywords: Background Subtraction; Image Filtering; Thresholding; Contours Processing; Camera Calibration; Moving Object
Detection.

1. Introduction

The Government of India aims to develop 100 smart cities in the near future. A smart city delivers smart services such as smart
traffic management and traffic surveillance, and various information and communication technologies are used to deliver these
services. Smart traffic monitoring is incomplete without a system that is capable of automatically detecting traffic rule
violations, so an automatic traffic surveillance system is a core need of smart traffic management. In urban areas, the issues
that usually arise are the detection of red light violations, speed limit violations and "stop, look and go" protocol
violations. In India, detection of red light violations is generally a manual process, barring a few cities where CCTV footage
from traffic cameras is used for this purpose, and speed guns are used to detect speed limit violations. For smart city
development, these issues need to be resolved, and a smart city traffic surveillance system is the right solution. Many
researchers have proposed techniques for detecting moving vehicles, estimating their speed and automatically recognizing their
number plates, but a comprehensive and cost effective solution is still missing.

∗ Corresponding author. Tel.: +91-532-227-1363; Fax: +91-532-254-5341.


E-mail address: dsk@mnnit.ac.in

doi:10.1016/j.procs.2016.06.045
Fig. 1. Proposed System, Surveillance Zone Setup.

In the present era of computer vision, the detection of moving objects is an intrinsic need of many image processing
applications such as traffic surveillance, vehicle classification and collision detection (e.g., accidents on roads). A wide
variety of methodologies exist for moving vehicle detection and tracking, but an efficient technique with higher accuracy and
lower cost still needs to be developed. Such techniques may play important roles in a smart city traffic surveillance system1.
This paper proposes an efficient and novel approach for detecting moving vehicles and estimating their speed. The proposed
approach can be integrated into existing camera-based traffic monitoring systems without major modifications.

2. Related Work

Moving object detection based on image processing techniques is composed of three major phases. The first phase
begins with image acquisition and preprocessing of the frames. The next phase is background modeling. The final stage
is detection of moving objects. Efficient background image modeling makes the moving object detection efficient.
Many researchers have proposed different background modeling techniques in the past. Mittal et al. propose background
modeling based on segmentation of dynamic scenes2. Background modeling based on a weighted average of the current background
and the new image is proposed by Gupte et al.3, and a sliding window concept is proposed for background modeling by Hussain
et al.4, although it requires extra memory for keeping frames in a buffer. To generate a good background image, an approach
based on a probability density function5, background modeling based on the long-term average of the images captured over a
time interval6, and a principal component analysis based approach by Javed et al.7 have also been proposed. In general, frame
difference and background subtraction methods are used for moving
object detection but frame differencing only detects the leading and trailing edge of a uniformly colored object. As a
result very few pixels on the object are labeled, and it is very hard to detect an object moving towards or away from
the camera. Javed et al.7 and Sullivan et al.8 propose an approach for moving vehicle detection based on background
subtraction. Kasetkasem and Varshney achieve background subtraction by using feature extraction, template matching
and contour processing techniques for identifying the presence of vehicles9 . For tracking of moving vehicles, mean
shift algorithm and template matching algorithm are proposed by Hsieh et al.10 . Although numerous approaches have
been proposed in the past, some issues related to false positive detection remain in the background subtraction method, and
vehicle feature detection and mean shift calculation introduce memory and time overhead. In this work, an approach that
addresses false positive vehicle detection, together with a memory and time efficient tracking algorithm, is proposed.

3. Proposed Work

This paper proposes a novel approach to efficiently detect and track vehicles. The proposed technique detects, tracks and
extracts the vehicle parameters needed for speed estimation using a single camera. This paper also proposes a cropping method
to minimize false positive vehicle detections. In such a system, the camera must be mounted on a traffic signal pole,
approximately 10 metres or more above road level, and directed towards the centre of the road. This installation minimizes the
effect of occlusion. The proposed installation is shown in Fig. 1, and the work flow of the proposed technique is shown in
Fig. 2.
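As an illustration of the installation geometry, the following sketch estimates the ground length of the surveillance zone
covered by such a camera using basic trigonometry. It is not taken from the paper: the helper name, mounting height and
viewing angles are assumed values for illustration only.

import math

def surveillance_zone_length(camera_height_m, near_angle_deg, far_angle_deg):
    # Distance along the road between the entry and exit lines of the
    # surveillance zone, given the camera height and the tilt angles
    # (measured from the vertical) at which those lines are seen.
    near = camera_height_m * math.tan(math.radians(near_angle_deg))
    far = camera_height_m * math.tan(math.radians(far_angle_deg))
    return far - near

# Example with assumed values: a 10 m pole, entry line seen at 30 degrees and
# exit line at 55 degrees from the vertical gives a zone of roughly 8.5 m.
print(surveillance_zone_length(10.0, 30.0, 55.0))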
728 Tarun Kumar and Dharmender Singh Kushwaha / Procedia Computer Science 89 (2016) 726 – 731

Fig. 2. Proposed System, Flow Diagram.

Fig. 3. Region of Interest for the Proposed Approach.

Initially, camera calibration is used to map the relationship between the real world and the pixel matrix of the digital
image. Figure 4(a) shows the side view of the scenario, illustrating the position of the camera and the range covered by it,
and Fig. 4(b) shows the front view of the camera range. A daylight condition is assumed for the proposed approach. In full
daylight or on an overcast day, a dark shadow forms under the road clearance area of a moving vehicle. This dark shaded area
under the vehicle's road clearance is the Region of Interest (ROI) of the proposed approach, represented by the circle in
Fig. 3. The RGB color space11 is used for the proposed system. The proposed approach extracts frames from the live video
stream and stores them in a database, using the timestamp (the time at which the frame was captured) as the identity of each
frame. In parallel, preprocessing is performed on the series of frames retrieved from the database. The main objective of the
preprocessing is to highlight the dark shaded area under the vehicle's road clearance area, as shown in Fig. 3; a red invert
method is proposed for this purpose.
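A minimal sketch of the frame extraction step is given below, assuming OpenCV's video capture API (the paper's implementation
uses OpenCV and JavaCV; the function names and the callback used here are purely illustrative).

import time
import cv2

def extract_frames(stream_url, handle_frame):
    # Read frames from the live video stream; each frame is identified by the
    # timestamp at which it was captured, as described above.
    cap = cv2.VideoCapture(stream_url)
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            timestamp = time.time()
            # handle_frame would store (timestamp, frame) in the database and
            # pass the frame on to the preprocessing stage.
            handle_frame(timestamp, frame)
    finally:
        cap.release()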
A red invert operation is applied on the retrieved frames. Because of this operation, after subtraction of the background
image the intensity of the pixels in the ROI (which contains mostly black pixels) becomes high, while the intensity of the
pixels outside the ROI decreases. Let the retrieved current frame be F. In the red invert operation, a scalar image S
(as defined in Eq. (1)) is subtracted from frame F. The red inverted frame $\bar{F}$ is computed as in Eq. (2).

$$S = \sum_{i=0}^{M-1}\sum_{j=0}^{N-1} S_{i,j} = (255, 0, 0) \qquad (1)$$

$$\bar{F} = \sum_{i=0}^{M-1}\sum_{j=0}^{N-1} (F_{i,j} - S_{i,j}) \qquad (2)$$

Here the resolution of frame F and image S is M × N pixels. After the computation of the red inverted frame $\bar{F}$, it is
converted to the grey scale image $F_{g}$.

Fig. 4. (a) Side View of the Surveillance Zone; (b) Front View of the Surveillance Zone.

Fig. 5. (a) Background Image After Cropping; (b) Cropped Original Image (after red invert operation); (c) Illustration of
Contour Rectangles of Detected Vehicles.
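As a concrete illustration, a minimal OpenCV sketch of the red invert and grey scale conversion is given below. This is not
the authors' JavaCV code; the function name is hypothetical, and the scalar image is built in BGR order because OpenCV stores
frames that way.

import cv2
import numpy as np

def red_invert_and_grey(frame_bgr):
    # Scalar image S of Eq. (1): same resolution as the frame, every pixel
    # equal to (255, 0, 0) in RGB, i.e. (0, 0, 255) in OpenCV's BGR order.
    s = np.zeros_like(frame_bgr)
    s[:, :, 2] = 255
    # Red inverted frame of Eq. (2); cv2.subtract saturates negative values at 0.
    red_inverted = cv2.subtract(frame_bgr, s)
    # Grey version used later for background subtraction and thresholding.
    grey = cv2.cvtColor(red_inverted, cv2.COLOR_BGR2GRAY)
    return red_inverted, grey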
Thus, after the background subtraction, the pixels having high intensity represent the ROI. After the red colour invert
method, the background image and the grey version of the red inverted frame are cropped. The grey image of the road under a
zero traffic condition is used as a prototype of the background image. After the adaptive background modeling, the cropping
method is used to remove the area outside the surveillance zone, since the area on both sides of the road is undesired for
the approach (Fig. 5) and any movement occurring there may cause false positive detections. In Fig. 5(a), the black area
represents the undesired range and the white area represents the surveillance zone. In the proposed background modeling
process, the undesired region is cropped from the background image as well as from the current frame. The camera calibration
angle and basic trigonometry formulas are used to estimate the number of pixels to be cropped from the image.
Thus, the cropping operation is applied on the grey background image $B_{g}$ and the grey frame $F_{g}$ to remove the
undesired area from both images, as shown in Fig. 5(a), Fig. 5(b) and Fig. 5(c). Background subtraction is then performed on
the grey frame $F_{g}$ and the grey background image $B_{g}$ for foreground object detection.
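A simple sketch of this cropping step is shown below. The column offsets are hypothetical; in the paper they are derived from
the camera calibration angle with basic trigonometry, and the same operation is applied to both the grey background image and
the grey frame.

import numpy as np

def crop_surveillance_zone(grey_img, left_px, right_px):
    # Keep only the columns that belong to the surveillance zone and zero out
    # the rest, so that movement on either side of the road cannot produce
    # false positive detections (the black area in Fig. 5(a)).
    masked = np.zeros_like(grey_img)
    masked[:, left_px:right_px] = grey_img[:, left_px:right_px]
    return masked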
A thresholding operation is applied on the subtracted image to convert it to a binary image. This binary image may contain
salt and pepper noise, which is removed with a median filter using a 3 × 3 mask. In the resulting image, the ROI pixels have
high intensity values while all other pixels have zero intensity; these white spots indicate the presence of a vehicle in the
surveillance zone.
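The subtraction, thresholding and filtering chain can be sketched as follows; the threshold value is an assumption, since the
paper does not report the exact value used.

import cv2

def vehicle_presence_mask(grey_frame, grey_background, thresh_value=40):
    # Background subtraction on the cropped grey images.
    diff = cv2.absdiff(grey_frame, grey_background)
    # Thresholding converts the subtracted image to a binary image.
    _, binary = cv2.threshold(diff, thresh_value, 255, cv2.THRESH_BINARY)
    # A 3x3 median filter removes salt-and-pepper noise.
    return cv2.medianBlur(binary, 3)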

Table 1. Average Vehicle Detection and Tracking Accuracy of the Proposed Approach in
Different Daylight Conditions.

Videos     Session       No. of Frames    Total no. of Vehicles    Avg. Detection Accuracy    Avg. Tracking Accuracy
Video 1    Morning       189              7                        94.2%                      88.1%
Video 2    Afternoon     256              11                       93.8%                      98.3%
Video 3    Evening       198              14                       92.9%                      91.5%
Video 4    Cloudy day    101              1                        70.03%                     91.08%

Fig. 6. Variation in Accuracy with Changing Environment Conditions.

Now the objective of the approach is to detect the presence of the vehicles and extract their parameters, such as the
location coordinates, length and height of each vehicle. The outcome of this step is a contour vector, which is an array of
arrays of points; each array of points stores the two diagonally opposite corner points (A, B) of a contour rectangle. A
contour finding algorithm is used to detect the contours of the ROI, scanning the image from bottom to top and from left to
right. All contour information is stored in a temporary buffer. This buffer contains the vehicle parameters from the previous
frame, such as the vehicle_id (generated automatically whenever a new vehicle is detected), the timestamp of the frame in
which the vehicle first appeared, all contour vertices and the frame count (the total number of frames in which the presence
of the vehicle has been detected). Tracking of the vehicles starts after detection is complete. Tracking is based on comparing
the relative position of each vehicle region in the contour vector with the position of every vehicle region detected in the
previous frame and stored in the temporary buffer. The proposed approach keeps the list of vehicles, their frame counts and
their locations in the database. Speed estimation of a vehicle using its frame count is computed as:
$$S = \frac{d \times f}{n} \qquad (3)$$
Here f is the frame rate of the camera in frames/second, d is the length of the surveillance zone in meters, and n is
the frame count of the vehicle.
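The contour extraction, position-based tracking and speed estimation described above can be sketched as follows. This is an
illustrative reconstruction rather than the authors' code: the matching distance, function names and buffer layout are
assumptions, and OpenCV's findContours and boundingRect stand in for whatever contour routine the JavaCV implementation uses.

import cv2

def detect_vehicle_boxes(binary_mask):
    # Bounding rectangles (the "contour rectangles") of the white spots that
    # mark vehicles in the filtered binary image (OpenCV 4.x return signature).
    contours, _ = cv2.findContours(binary_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours]  # (x, y, w, h) per vehicle

def track_and_estimate_speed(binary_masks, zone_length_m, fps, max_shift_px=80):
    # Temporary buffer of tracked vehicles: vehicle_id -> (x, y, w, h, frame count).
    tracked, next_id, speeds = {}, 0, {}
    for mask in binary_masks:
        updated = {}
        for (x, y, w, h) in detect_vehicle_boxes(mask):
            # Relative-position matching against every vehicle region detected
            # in the previous frame (stored in the temporary buffer).
            best_id, best_dist = None, max_shift_px
            for vid, (px, py, _, _, _) in tracked.items():
                dist = abs(x - px) + abs(y - py)
                if dist < best_dist:
                    best_id, best_dist = vid, dist
            if best_id is None:
                best_id, next_id = next_id, next_id + 1
                count = 0
            else:
                count = tracked[best_id][4]
            updated[best_id] = (x, y, w, h, count + 1)
        tracked = updated
        # Speed S = d * f / n (Eq. 3) for every vehicle currently tracked; the
        # value recorded when the vehicle leaves the zone is its final estimate.
        for vid, (_, _, _, _, n) in tracked.items():
            speeds[vid] = zone_length_m * fps / n
    return speeds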

4. Results and Analysis

The proposed approach is implemented using OpenCV and JavaCV, with MySQL as the database. For verification and testing, four
videos recorded under different environment conditions (morning, afternoon, evening and a partly cloudy day) are used.
Figure 5 shows a snapshot of the detection of moving vehicles, and the results obtained by the proposed approach for videos
1, 2, 3 and 4 are shown in Table 1.
Table 1 and Fig. 6 illustrate that the tracking accuracy of the proposed approach varies with the changing intensity of the
light. As the intensity of the light decreases, the accuracy
of the approach decreases to some extent. The accuracy of the approach is highest in the afternoon session, where the tracking
accuracy reaches 98.3%. The average tracking accuracy achieved by the proposed approach is about 92.2%. The average detection
accuracy obtained by the proposed approach in videos 1, 2 and 3 is 93.62%, and overall it is 87.7%. The proposed approach
achieves a detection accuracy of only about 70.03% in video 4. This is because the proposed approach tracks only vehicles
moving in a single direction (from the entry point to the exit point, as shown in Fig. 1) within the surveillance zone, while
video 4 was recorded on a single road where some vehicles were also moving in the opposite direction. The proposed approach
detects the vehicles moving in the opposite direction but cannot track them correctly. As a result, false positive detections
are higher in video 4 than in videos 1, 2 and 3, and the detection of speed violations degrades.

5. Conclusions

This paper proposes an approach to detect and track moving vehicles and estimate their speeds. The innovation of the approach
lies in the selection of the Region of Interest for vehicle detection. The approach proposed in this paper is verified and
tested on four different videos. The average detection accuracy achieved by the proposed approach is 87.7%. The proposed
approach uses a cropping operation to minimize false positive detections on both sides of the road; its average false positive
detection is lower than that of leading approaches such as STA12. The maximum tracking accuracy achieved by the proposed
technique is up to 98.3% in the afternoon session, and the average tracking accuracy is about 92.2%, which is an improvement
over other methods. In the proposed method, the detection and tracking of moving vehicles utilizes parameters such as the
position, height and width of a vehicle instead of feature extraction, which requires less computation and memory. The
proposed approach stores the vehicle parameters and the estimated speed of each detected vehicle in the database. The proposed
system can easily be adopted in an existing traffic management system.

References

[1] T. Kumar, R. Sachan and D. S. Kushwaha, Smart City Traffic Management and Surveillance System for Indian Scenario, Proceedings of
International Conference on Recent Advances in Mathematics, Statistics and Computer Science (ICRAMSCS), (2015).
[2] A. Mittal, A. Monnet and N. Paragios, Scene Modeling and Change Detection in Dynamic Scenes: A Subspace Approach, Computer Vision
and Image Understanding, vol. 113(1), pp. 63–79, (2009).
[3] S. Gupte, O. Masoud, R. F. K. Martin and N. P. Papanikolopoulos, Detection and Classification of Vehicles, IEEE Transactions on Intelligent
Transportation Systems, vol. 3(1), pp. 37–47, (2002).
[4] A. Hussain, K. Shahzad and C. Tang, Real Time Speed Estimation of Vehicles, WasetAcNz, vol. 6(1), pp. 726–730, (2012).
[5] O. Barnich and M. V. Droogenbroeck, ViBe: A Universal Background Subtraction Algorithm for Video Sequences, IEEE Transactions on
Image Processing, vol. 20, pp. 1709–1724, July (2011).
[6] N. Friedman and S. Russell, Image Segmentation in Video Sequences: A Probabilistic Approach, In UAI’97 Proceedings of the Thirteenth
Conference on Uncertainty in Artificial Intelligence, pp. 175–181, (1997).
[7] S. Javed, U. D. L. Rochelle, S. K. Jung and L. Mia, OR-PCA with Dynamic Feature Selection for Robust Background Subtraction,
Proceedings of the 30th Annual ACM Symposium on Applied Computing-SAC’15, pp. 86–91, (2015).
[8] G. Sullivan, K. Baker, A. Worrall, C. Attwood and P. Remagnino, Model-Based Vehicle Detection and Classification using Orthographic
Approximations, Image and Vision Computing, vol. 15(8), pp. 649–654, (1997).
[9] T. Kasetkasem and P. K. Varshney, An Image Change Detection Algorithm Based on Markov Random Field Models, IEEE Transactions on
Geoscience and Remote Sensing, vol. 40(8), pp. 263–1823, (2002).
[10] J. W. Hsieh, S. H. Yu, Y. S. Chen and W. F. Hu, Automatic Traffic Surveillance System for Vehicle Tracking and Classification, IEEE
Transactions on Intelligent Transportation Systems, vol. 7(2), pp. 175–187, (2006).
[11] K. Padmavathi and K. Thangadurai, Implementation of RGB and Grayscale Images in Plant Leaves Disease Detection–Comparative Study,
Indian Journal of Science and Technology, vol. 9(6), (2016).
[12] Picomixer, http://picomixer.com/documents/Introduction of Picomixer STA.pdf/.
