Abstract-In this paper, a side vehicle detection system with real-time on-board vision is proposed. The system employs camera vision to detect side moving vehicles, and provides the necessary warning to drivers and passengers when the vehicle is making a lane change. Based on SURF, feature detection and comparison are applied to find the same features between two consecutive images and calculate the feature vectors. Further image analysis is performed based on the feature vectors to determine whether there are side moving vehicles. If any side moving vehicle is detected, the system applies the inverse perspective projection technique to estimate the positions of surrounding vehicles in Cartesian space. Then, the system generates on-screen signs of safety, warning, or danger to inform drivers and passengers based on the estimated distance to side moving vehicles. The approach has been successfully validated in real traffic environments by performing experiments with two CCD cameras mounted on top of the side mirrors of a roadway moving vehicle.

Keywords-Computer vision, Intelligent transportation systems, Real-time vision, Side vehicle detection.

In [2], a laser scanner estimates the distance as well as the contour information of observed objects, while video-based vehicle detection utilizes a pattern recognition algorithm to classify the object's sides based on contour information. Kato et al. [3] also combined different sensors to detect vehicles: camera vision is used to detect lane markings, and a radar sensor is then utilized to estimate the distance to a vehicle. Kim and Cohn [4] used pure camera vision to detect vehicles. To perform corner detection and find the matched features between two consecutive images, one needs to analyze the feature vectors to locate moving vehicles. Chang and Cho [5] presented a real-time vision-based vehicle detection system employing an online boosting algorithm. In [6], a system employing both parts-based detection and decision-tree ideas was proposed to detect side vehicles. Miyoshi et al. [7] presented a temporal median filter to detect the latest background images; the technique was applied to video images obtained from a hand-held camera to detect moving objects.
…meters, and the pan angles, θ1 and θ2, between the cameras and the vehicle are both 15 degrees.

Fig. 4 shows a flow chart of the proposed side vehicle detection process. Three major stages constitute the process: detecting features, comparing features, and calculating feature vectors. The first stage detects SURF features from the previous and current images. The second stage uses the first stage's results to relate features in the current image to those in the previous image. The third stage calculates the feature vectors and performs further image analysis to determine whether there are side moving vehicles. Finally, the side moving vehicle region in the image is used to estimate the distance between our experimental vehicle and the side moving vehicle based on the inverse perspective projection technique.
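As an illustration of the three-stage pipeline, the following minimal Python sketch uses OpenCV; SURF lives in the opencv-contrib package, and the Hessian threshold and matcher choice here are assumptions rather than the authors' exact settings.

```python
import cv2
import numpy as np

# SURF is provided by opencv-contrib builds (non-free module).
surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
matcher = cv2.BFMatcher(cv2.NORM_L2)

def motion_vectors(prev_gray, curr_gray):
    """Return displacement vectors of SURF features matched across frames."""
    kp1, des1 = surf.detectAndCompute(prev_gray, None)   # stage 1: detect
    kp2, des2 = surf.detectAndCompute(curr_gray, None)
    if des1 is None or des2 is None:
        return np.empty((0, 2))
    matches = matcher.match(des1, des2)                  # stage 2: compare
    vecs = [np.subtract(kp2[m.trainIdx].pt, kp1[m.queryIdx].pt)
            for m in matches]                            # stage 3: vectors
    return np.asarray(vecs)
```

The resulting displacement vectors are what the third stage analyzes to decide whether a side moving vehicle is present.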
Fig. 4. Flow chart for the proposed side vehicle detection process.
(a) Bird's eye view of the vehicle, with a CCD camera on each side.
B. Estimation of distance to side vehicles

To detect side moving vehicles in the images based on our camera configuration, the image coordinates are transformed into world coordinates through inverse perspective projection:

$$y_w = \frac{f\,h}{y_i\cos\theta - y_i\sin\theta \cdot \dfrac{x_i - f\tan\theta}{x_i\tan\theta + f}}, \qquad x_w = y_w \cdot \frac{x_i - f\tan\theta}{x_i\tan\theta + f} \tag{1}$$

where f is the focal length of the CCD camera, (x_i, y_i) is the coordinate of the side vehicle in the image plane, h is the mounting height of the camera, and θ is the pan angle.

Fig. 5. The CCD camera and world coordinate systems (axes x_w, y_w).
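A minimal sketch of Eq. (1) as reconstructed above, assuming a camera of focal length f (pixels), mounting height h (meters), and pan angle θ; the numeric values in the example are placeholders.

```python
import math

def image_to_world(x_i, y_i, f, h, theta):
    """Map an image point (x_i, y_i) on the ground plane to world
    coordinates (x_w, y_w) relative to the vehicle, per Eq. (1)."""
    t = math.tan(theta)
    r = (x_i - f * t) / (x_i * t + f)      # lateral ray direction
    y_w = f * h / (y_i * math.cos(theta) - y_i * math.sin(theta) * r)
    x_w = y_w * r
    return x_w, y_w

# Example: 800 px focal length, 1.2 m camera height, 15-degree pan angle.
print(image_to_world(60.0, 40.0, 800.0, 1.2, math.radians(15.0)))
```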
IV. SIDE VEHICLE DETECTION

The lateral vehicle image is captured by the image detector, and the range between the roadway edge and the vehicle's side is the processing area. The SURF algorithm is employed to capture the feature points. One can find the matched feature point pairs by comparing two successive images, and then use the matched pairs to calculate the feature vectors. Based on the feature vectors, one can judge whether there is a vehicle in the side area. The inverse perspective projection is used to transfer the detected coordinates on the image to real-world coordinates relative to the vehicle. Based on the determined distance, different warning figures are shown on screen for drivers.

A. Integral image

The advantage of using an integral image is that it accelerates the calculation. The integral image is defined as

$$I_\Sigma(x, y) = \sum_{i=0}^{x}\sum_{j=0}^{y} I(i, j), \tag{2}$$

i.e., the sum of the pixel values from the top-left corner to the bottom-right point (x, y). One can calculate I_Σ(x, y) at every point location, and then use the results to calculate the feature points. As shown in Fig. 6, if one wants to calculate the sum of the values inside a rectangular area, only the four points (a, b, c, d) are needed in addition to the results from the previous calculation. Thus,

$$\Sigma = A - B - C + D \tag{3}$$

where A, B, C, D are the integral image values at the four points a, b, c, d, respectively, and Σ is the sum of the values inside the rectangular area.

Fig. 6. Illustration of the integral image.
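A small sketch of Eqs. (2) and (3): building the integral image with cumulative sums and recovering a rectangle sum from four corner lookups. The corner-to-term assignment follows the A − B − C + D form; the paper's exact labeling of points a, b, c, d may differ.

```python
import numpy as np

def integral_image(img):
    """I_sigma(x, y) = sum of img[0..x, 0..y] (Eq. 2)."""
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, top, left, bottom, right):
    """Sum inside rows top..bottom, cols left..right via Eq. (3)."""
    A = ii[bottom, right]
    B = ii[top - 1, right] if top > 0 else 0
    C = ii[bottom, left - 1] if left > 0 else 0
    D = ii[top - 1, left - 1] if (top > 0 and left > 0) else 0
    return A - B - C + D

img = np.arange(16, dtype=np.int64).reshape(4, 4)
ii = integral_image(img)
assert rect_sum(ii, 1, 1, 3, 2) == img[1:4, 1:3].sum()
```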
B. Road detection

In order to reduce the computation load of the system, one only needs to run the detection computation over the region the vehicle is driving through. Hence, one needs to outline the road range and exclude the unnecessary non-road background on both sides of the image. As illustrated in Fig. 7, one can obtain the desired picture by using edge detection, use the results to get the road edge line (green straight line), and then use the RANSAC [27] method to find the linear equation that best fits those points.
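A brief sketch of the edge-extraction step, assuming OpenCV's Canny detector with placeholder thresholds; the resulting candidate points can be passed to a RANSAC line fit (see the sketch after the outline below).

```python
import cv2
import numpy as np

def road_edge_points(gray_roi):
    """Return (x, y) edge points inside the road-side region of interest."""
    edges = cv2.Canny(gray_roi, 50, 150)   # thresholds are assumptions
    ys, xs = np.nonzero(edges)
    return list(zip(xs.tolist(), ys.tolist()))
```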
The outline of RANSAC for yielding a linear equation is as follows.

1) Take out, at random from all points, the points needed for calculating a linear equation.
2) Calculate the linear equation.
3) Set up a threshold value for the allowable error.
4) Put all the other points into the linear equation to calculate each point's error value.
5) If a point's error value is smaller than the threshold value, add the point to the count.

Repeat the steps above to find the linear equation with the highest count, i.e., the linear equation that best matches all the points.
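A compact sketch of this loop: fit a line to two random points, count inliers within a distance threshold, and keep the line with the most support. The iteration count and threshold are illustrative.

```python
import math
import random

def ransac_line(points, n_iter=200, threshold=2.0):
    """Return (a, b, c) of the best-supported line ax + by + c = 0."""
    best_line, best_count = None, -1
    for _ in range(n_iter):
        (x1, y1), (x2, y2) = random.sample(points, 2)    # step 1
        a, b, c = y2 - y1, x1 - x2, x2 * y1 - x1 * y2    # step 2
        norm = math.hypot(a, b)
        if norm == 0:
            continue                                     # degenerate sample
        count = sum(1 for (x, y) in points               # steps 3-5
                    if abs(a * x + b * y + c) / norm < threshold)
        if count > best_count:
            best_line, best_count = (a, b, c), count
    return best_line
```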
C. SURF

SURF uses the Hessian matrix H(x, y) to calculate feature points:

$$H(x, y) = \begin{bmatrix} L_{xx}(x, y) & L_{xy}(x, y) \\ L_{xy}(x, y) & L_{yy}(x, y) \end{bmatrix} \tag{4}$$

The mask in Fig. 8(b) is used while calculating L_xx(x, y), and the mask in Fig. 8(c) is used for calculating L_xy(x, y).
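As a sketch of the detector response, the following uses true Gaussian second derivatives in place of SURF's box-filter masks of Fig. 8; the 0.9 weight on L_xy follows the original SURF formulation, and sigma is an assumed scale.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hessian_response(img, sigma=1.2):
    """Determinant-of-Hessian map; maxima are candidate feature points."""
    img = np.asarray(img, dtype=float)
    Lxx = gaussian_filter(img, sigma, order=(0, 2))  # second x-derivative
    Lyy = gaussian_filter(img, sigma, order=(2, 0))  # second y-derivative
    Lxy = gaussian_filter(img, sigma, order=(1, 1))  # mixed derivative
    return Lxx * Lyy - (0.9 * Lxy) ** 2
```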
Fig. 8. Similar Gaussian second-order masks: (a) yy, (b) xx, (c) xy.
Fig. 9. Illustration of the Haar wavelet responses and the summed values Σdx, Σdy, Σ|dx|, Σ|dy|.
Fig. 10. Feature description table constituted by the four summed values.
One needs to compare the feature points of two images in order to determine which feature points on one image match those on the other image. The SURF comparison method is employed to locate the matched feature points. First, the feature points on the first image are found, and based on them the matching calculation is performed against the other image. The description table is built based on SURF and the information about each feature point's surroundings. Considering the calculation speed, only a 4 x 4 dimension is used to produce the descriptive values of the feature points. The values in the description table are produced by calculation through a Haar wavelet filter. Please refer to Fig. 9 for more information about the calculation method. Each feature point is surrounded by four areas: upper right, bottom right, upper left, and bottom left. The size of each area is a 5x5 square. First, the responses for one of the four areas are computed, yielding several values of dx and dy, and several |dx| and |dy|. Secondly, summing up all of these values yields Σdx, Σdy, Σ|dx|, and Σ|dy|, respectively. Lastly, combining these values constitutes a four-dimensional vector (Σdx, Σdy, Σ|dx|, Σ|dy|).

From Fig. 9 we can see that SURF quantizes these four summed values. An element with a value larger than a certain threshold is set to one, whereas an element with a value smaller than the threshold is set to zero. Thus, the resulting vector is a four-dimensional one with values of one and zero. Fig. 10 illustrates the SURF descriptor for three different image-intensity patterns.
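A minimal sketch of this per-area descriptor and its quantization; the threshold value is an assumption for illustration.

```python
import numpy as np

def quantized_descriptor(dx, dy, threshold=0.1):
    """dx, dy: Haar wavelet responses inside one 5x5 sub-area.
    Returns the binarized (sum dx, sum dy, sum |dx|, sum |dy|) vector."""
    v = np.array([dx.sum(), dy.sum(),
                  np.abs(dx).sum(), np.abs(dy).sum()])
    return (v > threshold).astype(np.uint8)   # each element becomes 0 or 1
```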
To detect sideway moving vehicles, one needs to understand the sideway static background and the movement model of sideway moving objects. From Fig. 11, we can find that when watching the rear area from the moving vehicle, the relative moving direction of a static background object is backward, and the object in the image gradually becomes smaller and finally disappears from the center of the screen. The relative moving direction of a sideway moving vehicle, in contrast, varies according to the speed of the experimental vehicle. When the sideway moving vehicle (Vs) is slower than the experimental vehicle (Vo), the moving direction of the former is also backward; when it is faster, its relative moving direction is forward.
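Under this movement model, a side vehicle overtaking the experimental vehicle shows feature displacements pointing forward while the background drifts backward. The sketch below classifies the matched-feature vectors accordingly; the sign convention and both thresholds are assumptions.

```python
import numpy as np

def has_overtaking_vehicle(vecs, min_forward=5, dx_thresh=1.0):
    """vecs: Nx2 array of per-feature displacements (dx, dy) in pixels,
    with positive dx taken as 'forward' in the side-camera image."""
    if len(vecs) == 0:
        return False
    forward = vecs[:, 0] > dx_thresh          # forward-moving features
    return int(forward.sum()) >= min_forward  # enough support => vehicle
```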
V. EXPERIMENTAL RESULTS

The proposed method has been effectively implemented using two CCD cameras mounted onto a roadway vehicle. The captured images from the side CCD cameras are transferred to the computer in 320x240 BMP format. Fig. 12 illustrates a sequence of three images that are captured while the experimental vehicle is moving on the road.

The feature points were obtained from two successive images, and we compared the feature points to find the matched feature point pairs shown in Fig. 13. Then, we calculated the feature vector based on the locations of the feature point pairs and marked the forward direction of the feature vector, which is shown in Fig. 14. After that, we calculated the distance between the sideway moving vehicle and the experimental vehicle. Meanwhile, different warning signs, as shown in Fig. 15, were given based on the length of the distance, letting drivers know whether it is safe to change lanes and in this way ensuring driving safety.

Fig. 15. The different warning figures (safety, warning, danger) shown on screen according to distance.
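A minimal sketch of the on-screen warning logic: the estimated distance is mapped to one of the three levels of Fig. 15. The 5 m and 10 m cut-offs are assumed for illustration; the paper does not state its thresholds.

```python
def warning_level(distance_m):
    """Map the estimated side-vehicle distance to a warning figure."""
    if distance_m > 10.0:
        return "safety"
    elif distance_m > 5.0:
        return "warning"
    return "danger"
```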
Table 1 and Fig. 16 show the experimental results that are generated directly from the experimental platform, which provides the distance information and warning figures on the monitor. The detection rate in the table denotes the ratio at which the system detects the existence of a moving vehicle on the sideway in the image, while the false alarm rate denotes the ratio at which the system incorrectly detects a moving vehicle on the sideway and causes a false alarm.

Table 1. Detection results (total frames, detection frames, percentage).

Fig. 16. The experimental platform for side vehicle detection.
VI. CONCLUSION

In this paper, a method of side vehicle detection is presented which has been effectively implemented in a roadway environment for detecting side moving vehicles. A vision-based approach is proposed for detecting side moving vehicles even in the blind spot area. The detection system can be operated at a real-time rate to provide drivers and passengers with surrounding traffic information. Any moving side vehicles, including motorcycles, can be effectively detected.

ACKNOWLEDGMENT

REFERENCES

[1] K.-T. Song, C.-H. Chen and C.-H. Chiu Huang, "Design and Experimental Study of an Ultrasonic Sensor System for Lateral Collision Avoidance at Low Speeds," Proceedings of the 2004 IEEE Intelligent Vehicles Symposium, Parma, Italy, pp. 647-652, Jun. 2004.
[2] S. Wender and K. Dietmayer, "3D vehicle detection using a laser scanner and a video camera," IET Intelligent Transport Systems, Vol. 2, No. 2, pp. 105-112, Aug. 2008.
[3] T. Kato, Y. Ninomiya and I. Masaki, "An Obstacle Detection Method by Fusion of Radar and Motion Stereo," IEEE Transactions on Intelligent Transportation Systems, Vol. 3, No. 3, Sep. 2002.
[4] Z.-W. Kim and T.-E. Cohn, "Pseudoreal-Time Activity Detection for Railroad Grade-Crossing Safety," IEEE Transactions on Intelligent Transportation Systems, Vol. 5, No. 4, Dec. 2004.
[5] W.-C. Chang and C.-W. Cho, "On-Line Boosting for Vehicle Detection," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, Vol. 40, No. 3, June 2010.
[6] W.-C. Chang and C.-W. Cho, "Real-Time Side Vehicle Tracking Using Parts-Based Boosting," Proceedings of the 2008 IEEE International Conference on Systems, Man and Cybernetics, Singapore, Oct. 12-15, 2008.
[7] M. Miyoshi, J.-K. Tan and S. Ishikawa, "Extracting Moving Objects from a Video by Sequential Background Detection Employing a Local Correlation Map," Proceedings of the 2008 IEEE International Conference on Systems, Man and Cybernetics, 2008.
[8] W.-C. Chang and T.-C. Tsao, "Omni-Directional Lane Detection and Vehicle Motion Estimation with Multi-Camera Vision," Proceedings of the 2009 CACS International Automatic Control Conference, Taipei, Taiwan, Nov. 27-29, 2009.