Review Article
Abstract: Developing intelligent transportation systems has attracted considerable attention in recent years. With the growing number of vehicles on the road, most nations are adopting an intelligent transport system (ITS) to handle issues such as traffic flow density, queue length, average traffic speed, and the total number of vehicles passing through a point in a specific time interval. By capturing traffic images and videos through cameras, an ITS helps traffic control centres monitor and manage the traffic. Efficient and reliable vehicle detection is a crucial step in any ITS. This study reviews the techniques and applications used around the world for vehicle detection under various environmental conditions based on video processing systems. It also discusses the types of cameras used for vehicle detection and the classification of vehicles for traffic monitoring and control, and finally highlights the problems encountered during surveillance under extreme weather conditions.
1 Introduction

Nowadays, traffic detection is attracting considerable attention from researchers in computer vision and intelligent transportation systems [1, 2]. The rapid rise in the number of vehicles over the last few years has posed serious problems for the management of the transportation system. Accidents, blockages on roads, vehicle theft, and traffic congestion are common problems faced in the management of traffic. The intelligent transport system (ITS) has become a dynamic area of research over the last decade. It plays an important role in handling many real-time situations, such as avoiding traffic congestion and accidents, traffic surveillance, toll collection, terrorist monitoring, managing traffic flow under hazy environments [3], and many more. That is why ITS has become a significant field of study.

Vehicle detection is the key task of a traffic monitoring system, since once vehicles have been detected, other applications can build on the result. Various technologies [4, 5] are being used for detecting and categorising vehicles automatically. Different algorithms (based on shape, size, texture, colour, etc.) are used to categorise vehicles in videos using image processing [6]; this information is then transferred to the centralised controlling and monitoring system [7].

Fig. 1 shows the process followed in a video-based traffic monitoring system, which has a camera mounted on the roadside watching over the traffic scene. The camera works as a sensor device used to capture traffic videos. The captured video images are converted into a digital form using a processor. The images in their digital form are processed and examined using image processing techniques. The extracted information can then be used by any external user, such as a traffic control centre, for traffic monitoring and controlling.

Image processing techniques are applied in a video-based traffic monitoring system in five stages: (i) image acquisition (CCTV) and digitisation, (ii) vehicle detection through background subtraction/foreground extraction, (iii) image segmentation, (iv) vehicle identification under different environmental conditions and classification, and (v) tracking.

Fig. 2 shows that, in the process of vehicle detection, the video image is processed and the region where the vehicles are present is determined. This region of interest can be a single pixel, a line of pixels, or a group of pixels [9].

The traffic parameters are obtained by comparing the vehicle detection status in the region of interest at different time instants. In a traffic surveillance system, background and illumination changes make vehicle detection a difficult task. The key problems are changing backgrounds (camera jitter, water rippling, waving trees, etc.), illumination changes (gradual or sudden light changes), and climatic changes [10]. For these issues, the foreground mask produced by a Gaussian mixture model (GMM) is used, but there is a high chance of faulty detection. To overcome this problem, various projects are being taken up to find better vehicle surveillance methods through which the chance of false detection can be reduced.
In a broad sense, vehicle detection can be categorised based on [31]:

(i) prior knowledge,
(ii) motion,
(iii) wavelets,
(iv) machine learning (ML),
(v) deep learning (DL).

Various approaches to detecting a vehicle have been made based on prior knowledge, including symmetry, colour, vertical or horizontal edges, shadow, wheels, and 3D models [31, 32].

3.1.1 Symmetry: For both the horizontal and vertical directions, there must be symmetry when the observation is taken from a stationary camera. The method has not been considered the finest method of detection because of its sensitivity to noise.

3.1.2 Colour: Colour segmentation, along with motion, has been introduced to detect moving vehicles. Colour segmentation generally uses a split-and-merge algorithm, whereas vehicle segmentation uses frame differencing.

3.1.3 Vertical and horizontal edges: Conventional gradient-based and morphological edge detectors are the two kinds of edge detection methods. The conventional gradient-based detectors use a Sobel operator and the generalised Hough transform. Morphological edge detection uses dilation and erosion operations to spot edges, and is better than conventional edge detectors in terms of computational cost. Apart from the above-mentioned methods, a separable morphological edge detector was proposed by Siyal, which depends less on edge directionality and increases the robustness of vehicle detection [33].

3.1.4 Shadow: As the region under any vehicle is darker than the other regions of the road, this fact becomes the basis of vehicle detection [34]. A threshold method centred on the characteristics of the hue-saturation-value colour space was suggested by Cucchiara et al. [35]. Xua et al. [36] suggested another algorithm, which suppresses the moving cast shadow for vehicle detection.

3.1.5 Wheel: The methods proposed for vehicle detection in the daytime are not valid at nighttime. Yoneyama et al. [37] stated that methods designed for daytime do not work properly when applied for detection at nighttime. Considering the problem faced by Yoneyama et al., Iwasaki and Kurogi [38] defined a new method specifically for nighttime. The method calculates the distance between the front and rear wheels, which helps in finding the size of the vehicle. This method also helps in finding the lane in which the vehicle lies. However, for this method, the camera needs to be installed on the roadside rather than on top of any traffic signal.

3.1.6 3D model: The vehicle's 3D pose is mapped to a corresponding model description. Dealing with occlusion is another benefit of the 3D model [39]. Various approaches to 3D modelling have been proposed, such as graph matching [40], viewpoint consistency checking [41], alignment-based 2D methods, indexing and invariants [42], gradient-based methods [42, 43], neural networks, self-similarity [44], etc. In all the above-mentioned methods, 2D image features such as line segments, points, and conic sections are matched with the corresponding 3D features of the pose [39]. Karsten et al. suggested a method that works on the 3D reconstruction of a dynamic environment with a fully calibrated background for traffic scenes [45]. Lou et al. [39] proposed another method that decomposes the motion into two components, i.e. translation and rotation. Besides that, Ghosh and Bhanu [46] suggested a 3D model mapping arrangement in which the vehicles are classified into various types such as hatchbacks, sedans, and wagons.

3.2.1 Optical flow: This method of vehicle detection is based on optical flow calculations, whereas the above-mentioned methods use spatial features. Motion detection is one of the vital parameters in an ITS. There is relative motion between the traffic scene and the sensor, because of which the pixels in the image appear to be moving. This motion, described by a vector field, is called optical flow. By using optical flow, any vehicle can be detected autonomously from the camera. However, its computation is very complex and sensitive to noise. Meyer et al. [47] estimated a displacement vector field, which initialises a contour-based tracking process. Real-time analysis of video streams without any specialised hardware is not easy. Barron et al. [48] have given a detailed discussion of optical flow in their work.

3.2.2 Background subtraction: Background subtraction, also named foreground detection, is one of the vital jobs by which many other applications, such as vehicle tracking, recognition, and irregularity detection, can be realised. It is used for motion segmentation in a static scene [49]. For the detection of motion, the image is subtracted pixel by pixel from a background image, which is considered the reference image. A threshold level is defined; if the difference is above the threshold level, the pixel is considered foreground. The formation of the background image is known as background modelling. Some morphological post-processing, such as erosion, dilation, and closing, is done after the creation of the foreground pixel map to reduce noise and improve the detected region. The reference background image is updated from time to time, and an image showing the section of the road with no vehicle serves as the reference [49, 50]. There are various techniques by which background subtraction can be applied for foreground detection.

Heikkila and Silven [51] gave a very simple background technique in which a pixel at a location (x, y) in the present image It is marked as foreground if |It(x, y) − Bt(x, y)| > Th is fulfilled, where Th denotes a pre-defined threshold. An impulse response filter updates the background image, as shown in the equation below:

Bt+1 = αIt + (1 − α)Bt (1)

where α is a constant that ranges between 0 and 1.

The foreground pixel map construction is followed by morphological closing and the elimination of small-sized areas [51, 52]. With the help of this simple background model, pixels corresponding to foreground moving vehicles can be detected by thresholding any of the distance functions [53]

d0 = |It − Bt| (2)

d2 = ((ItR − BtR)² + (ItG − BtG)² + (ItB − BtB)²)^(1/2) (3)

d∞ = max(|ItR − BtR|, |ItG − BtG|, |ItB − BtB|) (4)
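The simple per-pixel model of [51], the recursive update in (1) plus thresholding the distance d0 in (2), can be sketched in a few lines of NumPy. This is an illustrative sketch, not the authors' implementation; the toy frames, α, and threshold below are made-up values:

```python
import numpy as np

def update_background(B_t, I_t, alpha=0.05):
    """Recursive (IIR) background update: B_{t+1} = alpha*I_t + (1 - alpha)*B_t."""
    return alpha * I_t + (1.0 - alpha) * B_t

def foreground_mask(I_t, B_t, threshold=30.0):
    """Per-pixel mask: foreground where d0 = |I_t - B_t| exceeds the threshold."""
    d0 = np.abs(I_t.astype(np.float64) - B_t)
    return d0 > threshold

# Toy 4x4 grayscale frames: a "vehicle" brightens two pixels of a flat road.
background = np.full((4, 4), 100.0)
frame = background.copy()
frame[1, 1] = 200.0
frame[1, 2] = 190.0

mask = foreground_mask(frame, background)
print(mask.sum())                  # 2 pixels flagged as foreground
background = update_background(background, frame)
print(round(background[1, 1], 1))  # 105.0 -- the background slowly absorbs the change
```

In a real system the mask would then be cleaned up by the morphological closing and small-region removal described above, and the same update would run on every incoming frame.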
Fig. 7 Deep network with two hidden layers between the input and output layers [68]

3.5.1 Convolutional neural networks: A convolutional neural network (CNN) is a deep discriminative network that usually looks like an ordinary artificial neural network, as shown in Fig. 7, where the transformations are carried out through hidden layers. The deep architecture of the CNN is explained in [66] (refer to Fig. 3c), which discussed arrangements with a multi-channelled image as the input [77]. To manage this complicated input, the CNN comprises a few layers of convolutions with non-linear activation functions to compute the output. As a result, the CNN contains local connections, whereby every region of the input is connected to a neuron in the output. Each layer applies distinct filters and combines their results. In addition, the CNN contains pooling layers for subsampling. During the training stage, a CNN automatically learns the values of its filters for the given task. In its first layer, a CNN may learn to detect edges from the raw pixels. Then, in the second layer, the CNN uses the edges to detect simple shapes. In this way, in the subsequent higher layers, the CNN may be able to use these shapes to learn higher-level features, such as the facial shape of an animal present in the image. In the last layer, a classifier exploits these high-level features. Deep CNNs have turned out to be very successful in learning task-specific features, which has given much-improved results, especially on computer vision tasks, in contrast with contemporary ML procedures. Normally, CNNs are trained using supervised learning, for which a large number of input-output pairs is needed. However, acquiring a considerably large data set has been a key challenge in applying CNNs to a new task. CNNs have also been reported to be widely used as part of unsupervised techniques [71, 78, 79].
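The convolution, non-linearity, and pooling stages described above can be illustrated with plain NumPy. This is a toy sketch, not a trainable network: the vertical-edge filter is hand-crafted here, whereas a real CNN would learn such filters in its first layer during training:

```python
import numpy as np

def conv2d_valid(image, kernel):
    """2-D 'valid' convolution (really cross-correlation, as in most CNN libraries)."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Non-linear activation applied after each convolution."""
    return np.maximum(x, 0.0)

def max_pool(x, size=2):
    """Non-overlapping max pooling for subsampling."""
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

# Toy 6x6 image: dark left half, bright right half (a vertical edge).
image = np.zeros((6, 6))
image[:, 3:] = 1.0

# Hand-crafted vertical-edge filter; deeper layers would compose such
# responses into simple shapes and then higher-level features.
edge_filter = np.array([[-1.0, 0.0, 1.0]] * 3)

features = max_pool(relu(conv2d_valid(image, edge_filter)))
print(features.shape)  # (2, 2) -- the feature map, subsampled by pooling
```

The pooled map responds strongly only where the edge lies, which is exactly the kind of localised feature a trained first CNN layer extracts from raw pixels.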
Fig. 11 Moving vehicles classification based on attributes derived from feature space optimisation

Table 3 Classification of vehicles based on their sizes
S. no.  Type of vehicle            Width, m    Length, m   Height, m
1.      two-wheeler                0.60–0.90   2.10–2.40   1.2–1.5
2.      three-wheeler              1.2–1.4     2.5–2.8     1.6–1.9
3.      light motor vehicle        1.3–1.6     3.2–3.8     1.4–1.7
4.      sports utility vehicle     1.6–1.8     3.8–4.1     1.6–1.8
5.      light commercial vehicle   1.5–1.7     4.0–4.2     1.8–2.1
6.      heavy motor vehicle        2.5–3.0     8.0–12.5    3.6–4.0

…been increased significantly. Another segment of the vehicle-type classification is the three-wheeler. Different types of auto-rickshaws come under this category, which are used to carry passengers or goods over shorter distances. 'Vikram' from 'Scooter India' was a very famous three-wheeler at one time, but now many companies are manufacturing three-wheelers that are small structure-wise and efficient for work. The light motor vehicle is a small vehicle used to carry passengers, including jeeps, motor cars, taxis, etc., whereas the sports utility vehicle is a rugged vehicle that can carry both passengers and cargo; the Hummer, Ford EcoSport, and Range Rover are examples. Another segment is the light commercial vehicle, designed for light goods carriage; the Piaggio Ape is a good example of this class. The heavy motor vehicle segment includes trucks, buses, etc., which are bigger in all dimensions.

The classification of vehicles based on their sizes, including the width, length, and height of the vehicle, is given in Table 3. The approximate dimensions of all common types of vehicle have been included in the table. The features of these vehicles can be recognised and detected using any detection method [105]. Figs. 12a–f show the classification of vehicles in terms of size or structure.

The vehicle type is notable in view of the characterised range of vehicle sizes. Beyond the various methodologies applied for the classification of vehicles, the appearance of a vehicle in the video frame sequences does not depend only on the vehicle size; it also depends on the weather environment the vehicle is in.

5 Hazy environment

Detecting a vehicle under challenging weather environments such as sunny, cloudy, snowy, rainy, hazy, etc., is a difficult job. Many researchers have tried to eliminate the effect of bad light from the video to improve the quality of the image sequences; however, these methods do not address the velocity and classification of the vehicle. Hazy weather conditions pose the most common problem in surveillance systems, reducing the clarity of traffic images and resulting in low accuracy of vehicle recognition and detection. Various computational methods are available for noisy images captured in hazy weather conditions; to overcome such challenges, some of them are discussed below.
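As a preview of the dark-channel-prior pipeline covered in the following subsections, the dark channel, transmission estimate, and radiance recovery can be sketched with NumPy. The patch size, ω = 0.95, and t0 = 0.1 follow He et al. [108], but the edge padding, the loop-based minimum filter, and the toy hazy image are illustrative assumptions:

```python
import numpy as np

def dark_channel(img, patch=15):
    """Per-pixel minimum over colour channels, then a minimum filter
    over a patch x patch local window (edge-padded)."""
    min_rgb = img.min(axis=2)
    r = patch // 2
    padded = np.pad(min_rgb, r, mode='edge')
    h, w = min_rgb.shape
    out = np.empty_like(min_rgb)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def estimate_transmission(img, L, omega=0.95, patch=15):
    """t(m, n) = 1 - omega * dark_channel(I / L), keeping some haze (omega < 1)."""
    return 1.0 - omega * dark_channel(img / L, patch)

def recover(img, L, t, t0=0.1):
    """Scene radiance: K = (I - L) / max(t, t0) + L."""
    t = np.maximum(t, t0)[:, :, np.newaxis]
    return (img - L) / t + L

# Toy 8x8 RGB scene under uniform haze with atmospheric light L = (1, 1, 1).
L = np.array([1.0, 1.0, 1.0])
clear = np.zeros((8, 8, 3))
clear[:, :, 0] = 0.4                      # reddish haze-free scene
t_true = 0.6
hazy = clear * t_true + L * (1 - t_true)  # standard haze imaging model

t_est = estimate_transmission(hazy, L)
restored = recover(hazy, L, t_est)
```

On this toy scene the estimated transmission comes out close to the true value of 0.6, and the restored colours come out close to the haze-free ones; on real images the transmission map would additionally be refined (e.g. by soft matting) before recovery.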
5.1 Dark channel prior

This is an image restoration technique that removes haze from a single image [106, 107]. The dark channel of an image can be calculated as

Kdark(m, n) = min_{(i, j) ∈ Ω(m, n)} ( min_{f ∈ {r, g, b}} Kf(i, j) ) (6)

Here f indexes the red (r), green (g), and blue (b) channels, and Kf symbolises one channel of the colour image K. A local patch denoted by Ω is centred at (m, n). The minimum operation min_{f ∈ {r, g, b}} Kf(i, j) is executed on Kf, and min_{(i, j) ∈ Ω(m, n)}, centred at (m, n), is executed as a minimum filter on the local patch.

When an outdoor image lacks haze, the dark channel has low intensity [108]. For a haze-free image, the value of the dark channel is approximately zero, whereas for a hazy region the dark channel value is always greater than zero.

5.2 Transmission map estimation

The amount of haze and the transmission map are estimated under the normalised atmospheric light Lf by the dark channel prior as

t̃(m, n) = 1 − min_{(i, j) ∈ Ω(m, n)} ( min_{f ∈ {r, g, b}} If(i, j) / Lf ) (7)

If the haze is removed completely, the image becomes unnatural [108]. Thus, a constant parameter ω ≈ 0.95 is added to retain a portion of the haze for distant objects:

t̃(m, n) = 1 − ω min_{(i, j) ∈ Ω(m, n)} ( min_{f ∈ {r, g, b}} If(i, j) / Lf ) (8)

The dark channel prior with a patch size of 15 × 15 was proposed by He et al. [108].

5.3 Soft matting

After the estimation of the transmission map, the recovered images contain certain block effects. The transmission map has been improved by using soft matting [109], which was used by Levin et al. to reduce the block effects. The matting Laplacian involved is

L(i, j) = Σ_{m | (i, j) ∈ ωm} [ δij − (1/|ωm|)(1 + (Ii − μm)^T (Σm + (ε/|ωm|) U3)^(−1) (Ij − μm)) ] (9)

where δij is the Kronecker delta, μm and Σm are the mean and covariance matrix of the colours in the window ωm, U3 is the 3 × 3 identity matrix, and ε is a regularising parameter.

The haze-free image is then recovered as

Kf(m, n) = (If(m, n) − Lf) / max(t(m, n), t0) + Lf (11)

where f ∈ {r, g, b} and the value of t0 is taken as 0.1. In the real input image, the atmospheric light (L) is taken as the highest-intensity pixel corresponding, in the dark channel, to the brightest 0.1% of the pixels.

In addition to the above-discussed methods, another traditional method for the detection of moving vehicles is the GMM, but in a hazy environment [110] caused by rain, dust, mist, snow, fog, etc., this model underperforms. To handle these kinds of challenges, DL-based frameworks outperform the GMM because of their lower computational cost and higher accuracy.

6 Conclusion

Vehicle detection in a complex environment is a key task in an ITS. The modern and futuristic practices of vehicle detection used across the world have been reviewed in this paper. The paper is categorised into four sections, covering the types of traffic surveillance cameras, vehicle detection techniques, the bases of vehicle classification, and the detection of a vehicle in a hazy environment. Among other cameras, Gatso and SPECS cameras are widely used in traffic surveillance because of their high resolution and accuracy. Moreover, the five major techniques used for vehicle detection, (i) knowledge-based, (ii) motion-based, (iii) wavelet-based, (iv) ML, and (v) DL, have been discussed briefly in this paper. A brief discussion of vehicle detection algorithms shows the importance of DNNs over existing traditional methods. Various challenges, such as weather conditions, varying illumination, dynamic background scenes, etc., may affect the performance of a vehicle detection algorithm. A brief overview is given of vehicle classification based on a CNN. Finally, the computational methods that can be used for rectifying faulty image detection under a hazy environment have been discussed. Although many algorithms are available as the state of the art for vehicle classification, detection, and tracking, challenges still arise due to various issues related to moving-vehicle datasets, leading researchers to work further on updating and enhancing the existing state of the art.
8 IET Image Process., 2020, Vol. 14 Iss. 1, pp. 1-10
© The Institution of Engineering and Technology 2019
7 References

[1] Buch, N., Velastin, S.A., Orwell, J.: 'A review of computer vision techniques for the analysis of urban traffic', IEEE Trans. Intell. Transp. Syst., 2011, 12, (3), pp. 920–939
[2] Zhang, J., Wang, F.Y., Wang, K., et al.: 'Data-driven intelligent transportation systems: a survey', IEEE Trans. Intell. Transp. Syst., 2011, 12, (4), pp. 1624–1639
[3] Sun, Z., Bebis, G., Miller, R.: 'On-road vehicle detection: a review', IEEE Trans. Pattern Anal. Mach. Intell., 2006, 28, (5), pp. 694–711
[4] Saran, K.B., Sreelekha, G.: 'Traffic video surveillance: vehicle detection and classification'. Int. Conf. on Control Communication and Computing, India, November 2015, pp. 516–521
[5] Wang, G., Xiao, D., Gu, J.: 'Review on vehicle detection based on video for traffic surveillance'. IEEE Int. Conf. Automation and Logistics, Qingdao, China, September 2008, pp. 2961–2966
[6] Chen, Z., Ellis, T., Velastin, S.A.: 'Vehicle type categorization: a comparison of classification schemes'. IEEE Conf. Intelligent Transportation Systems Proc., ITSC, Washington, DC, USA, 2011, pp. 74–79
[7] Lai, A.S.H., Yung, N.H.C.: 'Vehicle-type identification through automated virtual loop assignment and block-based direction-biased motion estimation', IEEE Trans. Intell. Transp. Syst., 2000, 1, (2), pp. 86–97
[8] https://i.ytimg.com/vi/xVwsr9p3irA/maxresdefault.jpg
[9] Kim, J.B., Kim, H.J.: 'Efficient region-based motion segmentation for a video monitoring system', Pattern Recognit. Lett., 2003, 24, (1–3), pp. 113–128
[10] Wu, B.F., Juang, J.H.: 'Adaptive vehicle detector approach for complex environments', IEEE Trans. Intell. Transp. Syst., 2012, 13, (2), pp. 817–827
[11] Bishop, R.: 'Intelligent vehicle applications worldwide', IEEE Trans. Intell. Transp. Syst., 2000, 15, (1), pp. 78–81
[12] Rai, M., Yadav, R.K.: 'A novel method for detection and extraction of human face for video surveillance applications', Int. J. Signal Imaging Syst. Eng., 2016, 9, (3), pp. 165–173
[13] Baran, R., Rusc, T., Fornalski, P.: 'A smart camera for the surveillance of vehicles in intelligent transportation systems', Multimedia Tools Appl., 2016, 75, (17), pp. 10471–10493
[14] Deb, S.K., Nathr, R.K.: 'Vehicle detection based on video for traffic surveillance on road', Int. J. Comput. Sci. Emerg. Technol., 2012, 3, (4), pp. 121–137
[15] Michalopoulos, P.G.: 'Vehicle detection video through image processing: the autoscope system', IEEE Trans. Veh. Technol., 1991, 40, (1), pp. 21–29
[16] Dickmanns, E.D.: 'The development of machine vision for road vehicles in the last decade', IEEE Intell. Veh. Symp. Proc., 2003, 1, pp. 268–281
[17] Sussman, J.M.: 'Perspectives on intelligent transportation systems', 2005
[18] Gupte, S., Masoud, O., Martin, R.F.K., et al.: 'Detection and classification of vehicles', IEEE Trans. Intell. Transp. Syst., 2002, 3, (1), pp. 37–47
[19] Janowski, L., Kozłowski, P., Baran, R., et al.: 'Quality assessment for a visual and automatic license plate recognition', Multimedia Tools Appl., 2014, 68, (1), pp. 23–40
[20] Dule, E., Gokmen, M., Beratoglu, M.S.: 'A convenient feature vector construction for vehicle color recognition'. Proc. Int. Conf. Neural Network, WSEAS, Lasi, Romania, 2010, pp. 250–255
[21] Dziech, W., Baran, R., Wiraszka, D.: 'Signal compression based on zonal selection methods'. Proc. of the Int. Conf. of Mathematical Methods in Electromagnetic Theory, Kharkov, Ukraine, 2000, pp. 224–226
[22] Cao, M., Vu, A., Barth, M.J.: 'A novel omni-directional vision sensing technique for traffic surveillance'. IEEE Intelligent Transportation Systems Conf., Seattle, USA, 2007, pp. 678–683
[23] Bertozzi, M., Broggi, A., Cellario, M.: 'Artificial vision in road vehicles', Proc. IEEE, 2002, 90, (7), pp. 1258–1271
[24] Long, J., Shelhamer, E., Darrell, T.: 'Fully convolutional networks for semantic segmentation'. Proc. IEEE Conf. Computer Vision Pattern Recognition, Boston, USA, 2015, pp. 3431–3440
[25] http://www.dailymail.co.uk/news/article-2930654/Know-enemy-Incredibly-20-differentkindscameras-spying-motorists-spot-spotyou.html
[26] Rai, M., Maity, T., Yadav, R.K.: 'Thermal imaging system and its real-time applications: a survey', J. Eng. Technol., 2017, 6, (2), pp. 290–303
[27] Rai, M., Husain, A.A., Maity, T., et al.: 'Advance intelligent video surveillance system (AIVSS): a future aspect' (Video Surveillance, IntechOpen, London, UK, 2018)
[28] Pang, C.C.C., Lam, W.W.L., Yung, N.H.C.: 'A novel method for resolving vehicle occlusion in a monocular traffic-image sequence', IEEE Trans. Intell. Transp. Syst., 2004, 5, (3), pp. 129–141
[29] Wang, Y.: 'Joint random field model for all-weather moving vehicle detection', IEEE Trans. Image Process., 2010, 19, (9), pp. 2491–2501
[30] Liu, Y., Tian, B., Chen, S., et al.: 'A survey of vision-based vehicle detection and tracking techniques in ITS'. Proc. 2013 IEEE Int. Conf. on Vehicular Electronics and Safety, Dongguan, China, 2013, pp. 72–77
[31] Zhang, J., Marszalek, M., Lazebnik, S., et al.: 'Local features and kernels for classification of texture and object categories: a comprehensive study', Int. J. Comput. Vis., 2007, 73, (2), pp. 213–238
[32] Bouwmans, T., Gonzalez, J., Shan, C., et al.: 'Special issue on background modelling for foreground detection in real-world dynamic scenes', Mach. Vis. Appl., 2014, 25, (5), pp. 1101–1103
[33] Siyal, M.Y.: 'A neural vision-based approach for intelligent transportation system'. IEEE ICIT '02, Bangkok, Thailand, 2002, pp. 456–460
[34] Liu, Z., Huang, K., Tan, T.: 'Cast shadow removal in a hierarchical manner using MRF', IEEE Trans. Circuits Syst. Video Technol., 2012, 22, (1), pp. 56–66
[35] Cucchiara, R., Grana, C., Piccardi, M., et al.: 'Improving shadow suppression in moving object detection with HSV color information'. IEEE Proc. Int. Conf. on Intelligent Transportation Systems, Oakland, USA, 2001, pp. 334–339
[36] Xua, H., Xia, X., Guo, L., et al.: 'A novel algorithm of moving cast shadow suppression'. Proc. of the 18th Int. Conf. on Signal Processing, Beijing, China, 2006, pp. 1–4
[37] Yoneyama, A., Yeh, C.H., Kuo, C.C.J.: 'Moving cast shadow elimination for robust vehicle extraction based on 2D joint vehicle/shadow models'. Proc. IEEE Conf. Advanced Video and Signal Based Surveillance, Miami, USA, 2003, pp. 229–236
[38] Iwasaki, Y., Kurogi, Y.: 'Real-time robust vehicle detection through the same algorithm both day and night'. Proc. of the Int. Conf. on Wavelet Analysis and Pattern Recognition, Beijing, China, 2007, pp. 1008–1014
[39] Lou, J., Tan, T., Hu, W., et al.: '3-D model-based vehicle tracking', IEEE Trans. Image Process., 2005, 14, (10), pp. 1561–1569
[40] Kogut, G., Trivedi, M.: 'A wide area tracking system for vision sensor networks'. The 9th World Congress Intelligent Transport Systems, Chicago, USA, 2002
[41] Lee, J.W., Kim, M.S., Kweon, I.S.: 'A Kalman filter based visual tracking algorithm for an object moving in 3-D'. Proc. Int. Conf. Intelligent Robots and Systems, Pittsburgh, USA, 1995, pp. 355–358
[42] Costa, M.S., Shapiro, L.G.: '3-D object recognition and pose with relational indexing', Comput. Vis. Image Underst., 2000, 79, (3), pp. 364–407
[43] Tan, T.N., Baker, K.D.: 'Efficient image gradient based vehicle localization', IEEE Trans. Image Process., 2000, 9, (11), pp. 1343–1356
[44] Muller, K., Smolic, A., Drose, M., et al.: '3-D construction of a dynamic environment with a fully calibrated background for traffic scenes', IEEE Trans. Circuits Syst. Video Technol., 2005, 15, (4), pp. 538–549
[45] Cutler, R., Davis, L.S.: 'Model-based object tracking in monocular image sequences of road traffic scenes', IEEE Trans. Pattern Anal. Mach. Intell., 2000, 22, (8), pp. 781–796
[46] Ghosh, N., Bhanu, B.: 'Incremental vehicle 3-D modeling from video'. Proc. of the 18th Int. Conf. on Pattern Recognition, Hong Kong, China, 2006, pp. 272–275
[47] Meyer, D., Denzler, J., Niemann, H.: 'Model based extraction of articulated objects in image sequences for gait analysis'. Proc. IEEE Int. Conf. Image Processing, Santa Barbara, USA, 1998, pp. 78–81
[48] Barron, J., Fleet, D., Beauchemin, S.: 'Performance of optical flow techniques', Int. J. Comput. Vis., 1994, 12, (1), pp. 42–77
[49] Shaikh, S.H., Saeed, K., Chaki, N.: 'Moving object detection using background subtraction' (Springer Briefs in Computer Science, New York, 2014)
[50] Chalidabhongse, T.H., Kim, K., Harwood, D.: 'A perturbation method for evaluating background subtraction algorithms'. Int. Workshop on Visual Surveillance and Performance Evaluation of Tracking and Surveillance, Nice, France, 2003
[51] Heikkila, J., Silven, O.: 'A real-time system for monitoring of cyclists and pedestrians'. Proc. of 2nd IEEE Workshop on Visual Surveillance, Fort Collins, USA, 1999, pp. 74–81
[52] Cucchiara, R., Grana, C., Prati, A., et al.: 'Probabilistic classification for human behaviour analysis', IEEE Trans. Syst. Man Cybern., 2005, 35, pp. 42–54
[53] Benezeth, Y., Jodoin, P., Emile, B., et al.: 'Comparative study of background subtraction algorithms', J. Electron. Imaging, Soc. Photo-Opt. Instrum. Eng., 2010, 19, (3), pp. 1–12
[54] Stauffer, C., Grimson, W.E.L.: 'Adaptive background mixture models for real-time tracking'. Int. Conf. on Computer Vision and Pattern Recognition, Fort Collins, USA, 1999
[55] Zivkovic, Z.: 'Improved adaptive Gaussian mixture model for background subtraction'. Int. Conf. on Pattern Recognition, Cambridge, UK, 2004
[56] Unzueta, L., Nieto, M., Cortes, A.: 'Adaptive multi cue background subtraction for robust vehicle counting and classification', IEEE Trans. Intell. Transp. Syst., 2012, 13, (2), pp. 527–540
[57] Yuan, W., Wang, J.: 'Gaussian mixture model based on the number of moving vehicle detection algorithm'. Proc. of 2012 IEEE Int. Conf. on Intelligent Control Automatic Detection and High-End Equipment (ICADE), Beijing, China, 2012, pp. 94–97
[58] Bouwmans, T., El-Baf, F., Vachon, B.: 'Background modelling using mixture of gaussians for foreground detection: a survey', Recent Patents Comput. Sci., 2008, 1, (3), pp. 219–237
[59] Jenifa, R.A.T., Akila, C., et al.: 'Rapid background subtraction from video sequence'. IEEE Int. Conf. on Computing, Electronic and Electrical Technologies (ICCEET), Kumaracoil, India, 2012, pp. 1077–1086
[60] Oliver, N.M., Rosario, B., Pentland, A.P.: 'A Bayesian computer vision system for modelling human interactions', IEEE Trans. Pattern Anal. Mach. Intell., 2000, 22, (8), pp. 831–843
[61] Bouwmans, T., Zahzah, E.: 'Robust PCA via principal component pursuit: a review for a comparative evaluation in video surveillance', Comput. Vis. Image Underst., 2014, 122, pp. 22–34
[62] Cutler, R., Davis, L.: 'Robust real-time periodic motion detection, analysis and applications', IEEE Trans. Pattern Anal. Mach. Intell., 2000, 13, pp. 129–155
[63] Wang, Y.K., Chen, S.: 'A robust vehicle detection approach'. IEEE Conf. on Advanced Video and Signal Based Surveillance, Tehran, Iran, 2005, pp. 117–122
[64] Wang, X., Zhang, J.: 'A traffic incident detection method based on wavelet algorithm'. IEEE Workshop on Soft Computing in Industrial Applications, Espoo, Finland, 2005, pp. 166–172
[65] Yin, M., Zhang, H., Meng, H., et al.: 'An HMM based algorithm for vehicle detection in congested traffic situation'. IEEE Intelligent Transportation Systems Conf., Seattle, USA, 2007, pp. 736–741