
2017 6th MEDITERRANEAN CONFERENCE ON EMBEDDED COMPUTING (MECO), 11-15 JUNE 2017, BAR, MONTENEGRO

Real-Time Stopped Vehicle Detection Based on Smart Camera

Boris A. Alpatov, Maksim D. Ershov
Department of Automation and Information Technologies in Control
Ryazan State Radio Engineering University (RSREU)
Ryazan, Russia
aitu@rsreu.ru

Abstract—The paper describes traffic control and road safety problems and their solutions using image processing algorithms. We have developed an algorithm to detect stopped vehicles during traffic surveillance. Our algorithm is based on background estimation in image sequences. This paper offers some features of algorithm implementation in smart cameras and experimental results.

Keywords—traffic surveillance; video analysis; background subtraction; embedded systems; smart cameras

I. INTRODUCTION

Currently, efficient road traffic control and the safety of all road users require traffic surveillance, estimation of traffic stream parameters, immediate detection of dangerous situations, and their reporting to the appropriate services [1].

Not so long ago such important and time-consuming tasks were performed by people – controllers in traffic control centers – but now traffic control systems must be automated due to the constant increase in the number of monitored road sections.

Video detectors are more and more widely used to analyze traffic streams [2]. Their advantages are:

• Vehicle detection on several traffic lanes with one sensor.
• Collecting a large number of various traffic stream parameters.
• Visual monitoring of vehicles.

If the data source is a CCTV camera, there are two implementations of data processing algorithms:

• Online processing is performed on the embedded platform of the CCTV camera and reduces the amount of transmitted data, and thus the requirements for the communication links and the computing server [3].
• Offline processing is performed on the central computing server and provides a significant reduction in the requirements for the CCTV camera specification [4].

This paper describes the solution for one traffic control task – stopped vehicle detection. If a vehicle stops on the road, it may obstruct traffic and pose a serious threat. In this case, timely detection of such a situation makes it possible to warn other road users and inform the emergency services in time.

Our stopped vehicle detection algorithm is designed for use on the embedded platform of smart CCTV cameras. Thus, we reduce the amount of data transmitted to the traffic control system. However, it must be remembered that the computing capability of such cameras is limited, and the following factors should be taken into account during image processing: changing weather and light conditions, and different camera angles and positions.

II. STOPPED VEHICLE DETECTION ALGORITHM

The stopped vehicle detection algorithm is based on the background subtraction technique [5]. The algorithm's operation can be divided into an initialization stage and a stopped object detection stage.

The initialization stage is the initial background estimation. For this purpose, an averaging filter is applied: the average brightness value over a given number of frames is determined in each pixel of the image [6]. Thus, the initial background estimation is obtained by accumulating information about pixel brightness over a sufficiently long period of time.

When a detection algorithm is based on background subtraction, one of the tasks is to update the background estimation. To do so, our algorithm uses an exponential filter [7]. With the background estimation done, object detection can be carried out by analyzing the modulus of the difference between the current image and the background.

Our approach is based on a current background estimation BGN and a queue of background estimations for various short time intervals BGi (i = 0..N–1). We then subtract the earliest background estimation from the current background estimation for stopped object detection.

Thus, the stopped object detection stage comprises the following steps:

• Getting matrix BD by binarizing the differential image D = |BGN – BG0|.
The work has been supported by the grant for the Leading Scientific Schools
of the Russian Federation (NSh-7116.2016.8).
978-1-5090-6742-8/17/$31.00 ©2017 IEEE
• Processing of the binary image BD.
• Accumulating information in matrix S about the duration of each pixel's assignment to an object.
• Getting matrix BS by binarizing matrix S.
• Object marking and parameterization.
• Updating the background estimations.

D is the modulus of the difference between the current background estimation BGN and the background estimation for the earliest period of time, BG0. Binarization of D is carried out by thresholding.

The next step is postprocessing of the binary image BD in order to connect small segments. To do so, the morphological operations of opening and closing are applied. It is also possible to use a predetermined mask for zeroing pixel values of the image BD in areas not belonging to the roadway.

Matrix S is created in order to apply a temporal detection threshold. For each pixel it stores the number of frames in which this pixel is classified as belonging to a stopped object. Matrix S is updated for each frame based on the information in the image BD.

The binary image BS is the result of thresholding matrix S. The threshold is set according to the required detection time.

Finally, marking and parameterization of the selected objects is performed based on the image BS. During this procedure, each object is assigned a unique number, and its coordinates and area are calculated. Objects that are too small are rejected.

To update the current background estimation BGN, the exponential filter input is supplied with the current image IN. As a result, BGN will accumulate information about stopped objects.

The update of the earlier background estimations BGi (i = 0..N–1) is based on a mask, which is the binary image BD. For pixels that belong to stopped objects, the brightness value is averaged over all background estimations BGi (i = 0..N–1). After a predetermined short period of time, the background estimation queue shifts and the earliest estimation is discarded (BG0 = BG1, …, BGN–1 = BGN).

III. EMBEDDED IMPLEMENTATION OF ALGORITHM

This section describes AXIS products with embedded video processing, as well as the provided tools for developing video analytics applications. It also considers the implementation of the developed stopped vehicle detection algorithm.

A. Hardware Specification

The final implementation of the algorithm is set to work on the smart camera embedded platform. We used AXIS P1365 network cameras [8] and AXIS Q7424-R video encoders [9] in our research. These products enable urban surveillance and infrastructure monitoring in different conditions.

The smart camera AXIS P1365 delivers up to 60 fps in HDTV 1080p (up to 1920×1200 pixels). It provides noise reduction and reduced motion blur in low light. The camera is based on the ARTPEC-5 platform and has 512 MB of RAM and 256 MB of flash memory.

The video encoder AXIS Q7424-R features a rugged design optimized to withstand vibrations, shocks, and extreme temperatures. The encoder delivers up to 30 fps at resolutions up to 720×576 pixels. It can connect up to 4 analog video cameras, but embedded video processing is supported for only one link. The encoder is based on the ARTPEC-4 platform and has 512 MB of RAM and 128 MB of flash memory.

B. Development Tools

The software implementation of the proposed algorithm is written in the C/C++ programming language. The video analytics applications for the devices described above were developed using a special software development kit (SDK) – the AXIS Camera Application Platform (ACAP) [10].

ACAP includes documentation, an application programming interface (API), sample projects, and related compilers. Applications on this platform are written in the C and/or C++ programming languages; they are based on an operating system with the Linux kernel and the GNU system libraries. The gdbserver package enables remote debugging of applications installed on the smart camera.

ACAP also contains the following third-party software libraries:

• RAster Processing Primitives library (RAPP) [11]. A library for image processing focused on computer vision and video analytics applications. It contains efficient implementations of algorithms such as simple thresholding, binary morphology, convolution, geometric transformations, etc.

• Fixed Point library (Fixmath) [12]. A library for working with fixed-point numbers. On the considered platform, multiplication of fixed-point numbers is performed two times faster, and division and algebraic functions are 10-50 times faster, than the corresponding operations with floating-point numbers.

C. Developed Software

The developed software package consists of the following components:

• A video stream analysis module using an embedded processor of the camera.
• A data exchange module based on the TCP/IP protocol using an embedded processor of the camera.
• A configuration and results display module, which is a stand-alone application written in C++ for the Windows operating system.

The video stream analysis module performs real-time video processing based on the stopped vehicle detection algorithm.
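To make the per-pixel processing chain of Section II concrete, it can be sketched as below. This is an illustrative fragment only, not the camera implementation: images are flattened to float arrays, and the filter coefficient and both thresholds are hypothetical placeholder values.

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>
#include <vector>

// Exponential filter from Section II: BG = alpha * I + (1 - alpha) * BG.
void updateBackground(std::vector<float>& bg, const std::vector<float>& frame,
                      float alpha) {
    for (std::size_t i = 0; i < bg.size(); ++i)
        bg[i] = alpha * frame[i] + (1.0f - alpha) * bg[i];
}

// BD: binarized modulus of the difference D = |BGN - BG0|.
std::vector<std::uint8_t> binarizeDiff(const std::vector<float>& bgN,
                                       const std::vector<float>& bg0,
                                       float threshold) {
    std::vector<std::uint8_t> bd(bgN.size());
    for (std::size_t i = 0; i < bgN.size(); ++i)
        bd[i] = std::fabs(bgN[i] - bg0[i]) > threshold ? 1 : 0;
    return bd;
}

// S: per-pixel count of consecutive frames assigned to a stopped object;
// BS is obtained by comparing S with the temporal threshold (detection time).
void accumulateAndBinarize(std::vector<int>& s, std::vector<std::uint8_t>& bs,
                           const std::vector<std::uint8_t>& bd, int minFrames) {
    for (std::size_t i = 0; i < s.size(); ++i) {
        s[i] = bd[i] ? s[i] + 1 : 0;
        bs[i] = s[i] >= minFrames ? 1 : 0;
    }
}
```

In this sketch a pixel enters BS only after it has stayed in BD for minFrames consecutive frames, which is the role of matrix S in the algorithm.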
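The speedup from fixed-point arithmetic noted in Section III comes from replacing floating-point operations with integer multiply-and-shift operations. The sketch below illustrates the idea with an assumed Q16.16 format; it does not reproduce the actual Fixmath API, and the format choice is ours, for illustration only.

```cpp
#include <cassert>
#include <cstdint>

// Illustrative Q16.16 fixed-point type: 16 integer bits, 16 fractional bits.
// This is NOT the Fixmath API, just a sketch of the underlying idea.
using fx_t = std::int32_t;
constexpr int FX_FRAC_BITS = 16;

constexpr fx_t fx_from_int(int x) { return static_cast<fx_t>(x) << FX_FRAC_BITS; }
constexpr int fx_to_int(fx_t x) { return static_cast<int>(x >> FX_FRAC_BITS); }

// Multiply in a wider intermediate type, then shift back to Q16.16.
constexpr fx_t fx_mul(fx_t a, fx_t b) {
    return static_cast<fx_t>((static_cast<std::int64_t>(a) * b) >> FX_FRAC_BITS);
}

// Exponential background update BG = alpha*I + (1 - alpha)*BG in fixed point.
constexpr fx_t fx_blend(fx_t bg, fx_t frame, fx_t alpha) {
    return fx_mul(alpha, frame) + fx_mul(fx_from_int(1) - alpha, bg);
}
```

On a processor without a fast FPU, such integer operations map directly to cheap instructions, which is the practical reason the background estimation is carried out in fixed point on the considered cameras.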
The data exchange module is a TCP server, which saves and delivers the algorithm configuration and delivers the results (the positions of stopped vehicles).

The configuration and results display module has a graphical user interface. It allows the user to configure the video stream analysis module, to receive the current image from the camera, and to mark detected stopped vehicles. To get the displayed information, the module refers to the data exchange module located on the camera.

To perform real-time video processing, the image size is reduced to 480×320 pixels (from the original 1920×1200) and to 360×288 pixels (from the original 720×576).

In addition, to perform the background estimation, the data is represented as fixed-point numbers, because operations on floating-point numbers are significantly more time-consuming when running on the embedded processor of the considered smart cameras. A wrapper class for the Fixmath library functions enables easy switching between different data types.

IV. EXPERIMENTAL RESULTS

Experimental studies of the developed algorithm and software were conducted on previously recorded real videos. The algorithm has also been tested on CCTV cameras surveilling different road sections in real time.

The video monitoring was done under different weather and light conditions. The length of the video sequences is from 10 minutes to 8 hours. The size of the processed images is from 360×288 to 1280×720 pixels.

When using the least productive platform (AXIS Q7424-R on ARTPEC-4), the processing speed was about 5 fps. Considering the specifics of the task, this speed was sufficient for the timely detection of stopped vehicles.

The results of the experimental studies for different video sequences are shown in Table I. We use the following notations:

• TP (True Positive) – true detection of a stopped vehicle.
• FN (False Negative) – a stopped vehicle that was not detected (a missed object).
• FP (False Positive) – false detection of a stopped vehicle (a false alarm).

TABLE I. RESULTS OF STOPPED VEHICLE DETECTION

Test video | TP | FN | FP
 1 |  4 | 0 | 0
 2 |  3 | 0 | 0
 3 |  4 | 0 | 0
 4 |  2 | 0 | 0
 5 |  2 | 0 | 0
 6 |  1 | 0 | 1
 7 |  1 | 0 | 0
 8 |  1 | 0 | 0
 9 |  1 | 0 | 0
10 |  3 | 0 | 0
11 | 22 | 0 | 1

Thus, the true detection rate (the ratio of TP to the total number of stopped objects) is 100%, at an error rate (the ratio of the sum of FN and FP to the total number of stopped objects) of 7.5%.

Fig. 1 presents an example of stopped vehicle detection on a sunny day (stop duration – more than 1 minute). The lighting conditions in this video sequence also change over time due to the presence of clouds.

Fig. 2 presents an example of stopped vehicle detection in the dark under street lights (stop duration – more than 10 seconds).

Figure 1. Detection of 3 stopped vehicles.
Figure 2. Detection of 2 stopped vehicles.
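The aggregate figures reported above follow directly from the per-video counts in Table I: summing over all 18 test videos gives 93 true detections, 0 misses, and 7 false alarms. A small check (the struct and function names below are ours, introduced only for this illustration):

```cpp
#include <cassert>
#include <cmath>

// TP, FN, FP per test video, transcribed from Table I (videos 1-18).
struct Counts { int tp, fn, fp; };

constexpr Counts kTable[] = {
    {4,0,0}, {3,0,0}, {4,0,0}, {2,0,0}, {2,0,0}, {1,0,1},
    {1,0,0}, {1,0,0}, {1,0,0}, {3,0,0}, {22,0,1}, {27,0,0},
    {8,0,0}, {14,0,2}, {0,0,1}, {0,0,0}, {0,0,2}, {0,0,0}};

// True detection rate: TP / (number of stopped objects), where the number
// of stopped objects is TP + FN. Error rate: (FN + FP) / (TP + FN).
void rates(double& detection, double& error) {
    int tp = 0, fn = 0, fp = 0;
    for (const Counts& c : kTable) { tp += c.tp; fn += c.fn; fp += c.fp; }
    const double objects = tp + fn;
    detection = tp / objects;
    error = (fn + fp) / objects;
}
```

With these counts the detection rate is exactly 100% and the error rate is 7/93, i.e. about 7.5%, matching the values quoted in the text.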
TABLE I. RESULTS OF STOPPED VEHICLE DETECTION (continued)

Test video | TP | FN | FP
12 | 27 | 0 | 0
13 |  8 | 0 | 0
14 | 14 | 0 | 2
15 |  0 | 0 | 1
16 |  0 | 0 | 0
17 |  0 | 0 | 2
18 |  0 | 0 | 0

V. CONCLUSION

In this paper we described one of the tasks of traffic surveillance – stopped vehicle detection. We introduced an algorithm for processing video sequences from an optical sensor performing traffic surveillance. The algorithm is based on background subtraction and is designed to detect and localize stopped vehicles.

Our algorithm is implemented on smart cameras. We have considered the implementation based on the embedded platform of AXIS cameras.

Experiments have confirmed the efficiency of the stopped vehicle detection algorithm and the developed software. After analyzing the results, we can conclude that the developed software package solves the considered problem in real time under different surveillance conditions.

REFERENCES

[1] V. M. Markuts, "Transport flows of highways and the city streets (practical applications)," Tyumen, 2008, 108 p., in Russian.
[2] B. Coifman, D. Beymer, P. McLauchlan, and J. Malik, "A real-time computer vision system for vehicle tracking and traffic surveillance," Transportation Research Part C, vol. 6, 1998, pp. 271–288.
[3] B. Alpatov, P. Babayan, M. Ershov, and V. Strotov, "The implementation of contour-based object orientation estimation algorithm in FPGA-based on-board vision system," Proc. of SPIE, vol. 10007, 100070A, pp. 1–8, October 2016.
[4] M. D. Ershov, "Application of image processing algorithms for traffic stream parameters estimation," Proc. of New Information Technologies in Scientific Researches, Ryazan: RSREU, 2016, pp. 259–261, in Russian.
[5] Y. Benezeth, B. Emile, H. Laurent, and C. Rosenberger, "Review and evaluation of commonly-implemented background subtraction algorithms," International Conference on Pattern Recognition, pp. 1–4, December 2008.
[6] M. Piccardi, "Background subtraction techniques: a review," Proc. of IEEE International Conference on Systems, Man and Cybernetics, pp. 3099–3104, 2004.
[7] J. Heikkila and O. Silven, "A real-time system for monitoring of cyclists and pedestrians," 2nd IEEE Workshop on Visual Surveillance, pp. 74–81, 1999.
[8] AXIS P1365 network camera. Support & Documentation, https://www.axis.com/global/en/products/axis-p1365-mki/support-and-documentation.
[9] AXIS Q7424-R video encoder. Support & Documentation, https://www.axis.com/global/en/products/axis-q7424-r-mki/support-and-documentation.
[10] AXIS Camera Application Platform, https://www.axis.com/global/en/support/developer-support/axis-camera-application-platform.
[11] RAPP library. Documentation, http://www.nongnu.org/rapp/doc/rapp.
[12] Fixmath library. Documentation, http://www.nongnu.org/fixmath/doc.
