
2010 International Conference on System Science and Engineering

Vision-Based Side Vehicle Detection from a Moving Vehicle

Wen-Chung Chang
Department of Electrical Engineering
National Taipei University of Technology
Taipei, Taiwan, R.O.C.
e-mail: wchang@ee.ntut.edu.tw

Kuo-Jung Hsu
Department of Electrical Engineering
National Taipei University of Technology
Taipei, Taiwan, R.O.C.
e-mail: kuojung.hsu66@gmail.com

Abstract—In this paper, a side vehicle detection system with real-time on-board vision is proposed. The system employs camera vision to detect side moving vehicles and provides the necessary warning to drivers and passengers when the vehicle is making a lane change. Based on SURF, feature detection and comparison are applied to find the same features between two consecutive images and to calculate the feature vectors. Further image analysis is performed on the feature vectors to determine whether there are side moving vehicles. If any side moving vehicle is detected, the system applies an inverse perspective projection technique to estimate the positions of surrounding vehicles in Cartesian space. The system then generates on-screen signs of safety, warning, or danger to inform drivers and passengers based on the estimated distance to the side moving vehicles. The approach has been successfully validated in real traffic environments by performing experiments with two CCD cameras mounted on top of the side mirrors of a moving roadway vehicle.

Keywords—Computer vision, Intelligent transportation systems, Real-time vision, Side vehicle detection.

I. INTRODUCTION

In recent years, many car manufacturers have invested considerable manpower and resources in developing Intelligent Transportation Systems (ITS) such as automatic navigation, traffic flow surveillance, and driver-assistance systems to improve the safety and efficiency of transportation. Driver-assistance systems include the lane departure warning system, the front vehicle collision warning system, and the auxiliary night vision system. In reality, when drivers and passengers are opening the doors to get out of the vehicle, the only available resource for making a judgment is the side mirrors. But the side vision blind spot and human negligence are common causes of collisions and car accidents. To reduce the occurrence of such accidents, this paper focuses on side vehicle detection when the vehicle is making a lane change. A number of researchers have demonstrated side vehicle detection using laser, radar, or ultrasonic devices as the primary detection sensor. However, laser and radar sensors are expensive, and ultrasonic sensors are easily interfered with by other ultrasonic sources. Camera vision is therefore a popular sensor for vehicle detection. Many papers use different sensors to detect side vehicles, with the goal of providing information about surrounding vehicles to ensure safe driving. Song et al. [1] used multiple ultrasonic sensors installed in the vehicle's side to detect side vehicles. Wender and Dietmayer [2] performed sensor fusion of a laser scanner and a camera vision sensor; the laser scanner estimates the distance as well as the contour information of observed objects, while the video-based vehicle detection uses a pattern recognition algorithm to classify the objects' sides based on contour information. Kato et al. [3] also combined different sensors to detect vehicles, using camera vision to detect lane markings and a radar sensor to estimate the distance to a vehicle. Kim and Cohn [4] used pure camera vision to detect vehicles, performing corner detection to find the matched features between two consecutive images and then analyzing the feature vectors to locate moving vehicles. Chang and Cho [5] presented a real-time vision-based vehicle detection system employing an online boosting algorithm. In [6], a system employing both parts-based detection and decision tree ideas was proposed to detect side vehicles. Miyoshi et al. [7] presented a temporal median filter to detect the latest background images; the technique was applied to video obtained from a hand-held camera to detect the moving objects.

In this paper, a method is proposed for detecting side moving vehicles and providing distance information and warning figures to drivers and passengers in a moving vehicle. The paper is organized as follows. Section II describes the overall system architecture. Section III illustrates distance estimation by the inverse perspective projection technique. The side vehicle detection technique is presented in Section IV. Section V shows the experimental results of the side vehicle detection. Concluding remarks are given in Section VI.

II. SYSTEM ARCHITECTURE

To describe the side vehicle detection architecture, one first needs to understand the blind spot area. There is a vision blind spot on each side of the vehicle. If a side moving vehicle is in the blind spot area at the moment the driver or a passenger opens the doors to get out of the vehicle, or when the driver makes a lane change, a collision or car accident may occur, as illustrated in Fig. 1. The arrowhead shows the direction of traffic; the green area can be viewed through the side mirrors, while the yellow area is the blind spot area, which cannot be seen through either side mirror.

In this paper, a side vehicle detection approach is proposed with two CCD cameras mounted onto a roadway vehicle as shown in Fig. 2. The CCD cameras face the rear of the vehicle. In this experimental system, the height of the cameras is 1.2 meters, and the pan angles, θ1 and θ2, between the cameras and the vehicle are both 15 degrees.


(a) Bird's eye view of the vehicle. (b) Lateral view of the vehicle.
Fig. 2. CCD cameras are mounted on the side mirrors of a vehicle.

The captured images are shown in Fig. 3, where the side moving objects include a vehicle and a motorcycle.

(a) A side vehicle. (b) A side motorcycle.
Fig. 3. Images captured by the left CCD camera.

Fig. 4 shows a flow chart of the proposed side vehicle detection process. Three major stages constitute the process: detecting features, comparing features, and calculating feature vectors. The first stage detects SURF features in the previous and current images. The second stage uses the first stage's results to match features in the current image with those from the previous image. The third stage calculates the feature vectors and, based on further image analysis, determines whether there are side moving vehicles. Finally, the side moving vehicle region in the image is used to estimate the distance between our experimental vehicle and the side moving vehicle based on the inverse perspective projection technique.

Fig. 4. Flow chart for the proposed side vehicle detection process.

III. VEHICLE DISTANCE ESTIMATION

In this section, we introduce the method for distance estimation; for a detailed account, please refer to [8]. The method uses the inverse perspective projection technique to estimate the distance between our experimental vehicle and a side moving vehicle. Once the distance is calculated, we can show distance information on the monitor and display a warning figure that depends on the distance.

A. CCD model and real world model

First, one needs to establish coordinate systems for the CCD camera and the world space, respectively, as shown in Fig. 5. The origin of the CCD camera coordinate system (Xc, Yc, Zc) is located at the optical center of the camera, with the optical axis along the Zc axis. The orientation of the world coordinate frame is chosen as the orientation of the CCD camera frame after rotating about the Xc axis by 90 degrees. The origin Ow of the world coordinate system, attached to the moving vehicle, is at height h below the optical center along the Yc axis.


Fig. 5. The CCD camera and world coordinate systems.

B. Estimation of distance to side vehicles

To detect side moving vehicles in the images with our side vehicle detection system, we refer to the position of the bottom of the side moving vehicle in the image to determine the estimated distance between the side moving vehicle and our experimental vehicle. Based on the coordinate system and the perspective projection model of the camera, the distance can be determined by the following equations:

    y_w = \frac{f \, h}{y_i \cos\theta - y_i \sin\theta \, \frac{x_i - f \tan\theta}{x_i \tan\theta + f}},
    \qquad
    x_w = y_w \, \frac{x_i - f \tan\theta}{x_i \tan\theta + f},          (1)

where f is the focal length of the CCD camera and (x_i, y_i) is the coordinate of the side vehicle in the image plane.
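The mapping of Eq. (1) can be sketched as follows. This is a minimal illustration, assuming the image coordinates (x_i, y_i) are already expressed in the camera's metric image plane; the height and pan angle follow Section II, while the focal length and the sample point are made-up values.

    import math

    def image_to_world(x_i, y_i, f, h, theta):
        """Map an image point (x_i, y_i) of a ground contact point to
        world coordinates (x_w, y_w) using Eq. (1)."""
        t = math.tan(theta)
        ratio = (x_i - f * t) / (x_i * t + f)   # shared fraction in Eq. (1)
        y_w = f * h / (y_i * math.cos(theta) - y_i * math.sin(theta) * ratio)
        x_w = y_w * ratio
        return x_w, y_w

    # Paper's setup: h = 1.2 m, pan angle 15 degrees; f and the image
    # point below are illustrative values only.
    print(image_to_world(x_i=40.0, y_i=60.0, f=400.0, h=1.2,
                         theta=math.radians(15.0)))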
IV. SIDE VEHICLE DETECTION

The lateral vehicle image is captured by the image detector, and the range between the roadway edge and the vehicle's side is the processing area. The SURF algorithm is employed to capture the feature points. One can find the matched feature point pairs by comparing two successive images and then use the matched pairs to calculate the feature vectors. Based on the feature vectors, one can judge whether there is a vehicle in the side area. The inverse perspective projection is applied to the detected information, transferring the coordinates in the image to real-world coordinates relative to the vehicle. Based on the determined distance, different warning figures are shown on screen for drivers.

A. Integral image

The advantage of using an integral image is that it accelerates the calculation. The integral image is defined as

    I_\Sigma(x, y) = \sum_{i=0}^{x} \sum_{j=0}^{y} I(i, j),          (2)

where I_\Sigma(x, y) is the sum of the pixel values in the rectangular region from the top left corner to the point (x, y). One can calculate I_\Sigma(x, y) for every point's location and then use the results to calculate the feature points. As shown in Fig. 6, if one wants to calculate the sum of the values inside a rectangular area, only the four points (a, b, c, d) are needed in addition to the results from the previous calculation. Thus,

    L = A - B - C + D,          (3)

where A, B, C, D are the integral image values at the four points a, b, c, d, respectively, and L is the sum of the values inside the rectangular area.

Fig. 6. Illustration of integral image.
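As a concrete illustration of Eqs. (2) and (3), the following NumPy sketch builds an integral image and recovers a box sum from four lookups; the row-major (row = y, column = x) array layout is an assumption of the sketch, not something the paper specifies.

    import numpy as np

    def integral_image(img):
        """I_sigma(x, y): sum over the rectangle from (0, 0) to (x, y), Eq. (2)."""
        return img.cumsum(axis=0).cumsum(axis=1)

    def box_sum(ii, top, left, bottom, right):
        """Sum inside [top..bottom] x [left..right] using Eq. (3),
        L = A - B - C + D, i.e., four integral-image lookups."""
        A = ii[bottom, right]
        B = ii[top - 1, right] if top > 0 else 0
        C = ii[bottom, left - 1] if left > 0 else 0
        D = ii[top - 1, left - 1] if top > 0 and left > 0 else 0
        return A - B - C + D

    img = np.arange(16, dtype=np.int64).reshape(4, 4)
    ii = integral_image(img)
    assert box_sum(ii, 1, 1, 3, 2) == img[1:4, 1:3].sum()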

B. Road detection

In order to reduce the computation load of the system, one only needs to run the detection computation on the region the vehicle is driving through. Hence, one needs to outline the road range and exclude the unnecessary non-road background on both sides. As illustrated in Fig. 7, one can obtain the desired picture by using edge detection, make use of the results to get the road edge line (the green straight line), and then use the RANSAC method to find the linear equation that best matches those points.

Fig. 7. Edge detection on a lateral image.


The outline of RANSAC for fitting a linear equation is as follows.

1) Take points at random from all the points, as many as needed to calculate a linear equation.
2) Calculate a linear equation.
3) Set a threshold value for the allowable error.
4) Put all the other points into the linear equation to calculate their individual error values.
5) If a point's error value is smaller than the threshold value, add one to the count.

Repeat the steps above to find the linear equation that has the most counts, i.e., the linear equation that best matches all the points.
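A compact sketch of the five steps above, fitting a road edge line to 2-D edge points; the iteration budget and error threshold are illustrative choices, not values from the paper.

    import random
    import numpy as np

    def ransac_line(points, n_iter=200, err_thresh=2.0):
        """Fit a line a*x + b*y + c = 0 by RANSAC: sample two points (step 1),
        fit (step 2), and count points whose error beats a threshold
        (steps 3-5); the line with the highest count wins."""
        pts = np.asarray(points, dtype=float)
        best_line, best_count = None, -1
        for _ in range(n_iter):
            (x1, y1), (x2, y2) = pts[random.sample(range(len(pts)), 2)]
            a, b = y2 - y1, x1 - x2
            c = -(a * x1 + b * y1)
            norm = np.hypot(a, b)
            if norm == 0:
                continue          # degenerate sample: both points identical
            err = np.abs(pts[:, 0] * a + pts[:, 1] * b + c) / norm
            count = int((err < err_thresh).sum())
            if count > best_count:
                best_line, best_count = (a, b, c), count
        return best_line, best_count

The edge points could come, for example, from the edge map of Fig. 7.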
smaller than a certain threshold is set to be zero. Thus, the
C. SURF

SURF uses the Hessian matrix H(x, y) to calculate feature points:

    H(x, y) = \begin{bmatrix} L_{xx}(x, y) & L_{xy}(x, y) \\ L_{xy}(x, y) & L_{yy}(x, y) \end{bmatrix},          (4)

where L_{xx}(x, y) is the convolution of the Gaussian second-order derivative with the image at (x, y), and similarly for L_{yy} and L_{xy}. The mask in Fig. 8(a) is used while calculating L_{yy}(x, y), the mask in Fig. 8(b) while calculating L_{xx}(x, y), and the mask in Fig. 8(c) while calculating L_{xy}(x, y).

The feature point locations are selected by relying on the determinant of the Hessian: the local maxima of the determinant give the locations.

(a) yy  (b) xx  (c) xy
Fig. 8. Approximations of the Gaussian second-order derivative masks.
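One way to realize this detection step is sketched below. Gaussian derivative filters from SciPy stand in for the box masks of Fig. 8 (a full SURF detector evaluates those masks on an integral image and searches over scales), so this is a simplified stand-in rather than the paper's exact filters; the threshold is an assumed value.

    import numpy as np
    from scipy.ndimage import gaussian_filter, maximum_filter

    def hessian_keypoints(img, sigma=2.0, rel_thresh=0.05):
        """Interest points as local maxima of det(H) from Eq. (4)."""
        img = img.astype(float)
        Lxx = gaussian_filter(img, sigma, order=(0, 2))  # second derivative in x
        Lyy = gaussian_filter(img, sigma, order=(2, 0))  # second derivative in y
        Lxy = gaussian_filter(img, sigma, order=(1, 1))  # mixed derivative
        det = Lxx * Lyy - Lxy ** 2
        # Keep points that are the maximum of their 3x3 neighborhood and
        # whose response exceeds a fraction of the global maximum.
        peaks = (det == maximum_filter(det, size=3)) & (det > rel_thresh * det.max())
        return np.argwhere(peaks)   # (row, col) feature locations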

D. Compare Feature Points

One needs to compare the feature points of two images in order to determine which feature points on one image match the other image. The SURF comparison method is employed to locate the matched feature points. First, the feature points on the first image are found, and based on these the matching calculation is performed against the other image. A description table is built from SURF and the information about each feature point's surroundings. Considering the calculation speed, only a 4x4 arrangement is used to produce the descriptive values of the feature points. The values in the description table are produced by calculation with Haar wavelet filters. Please refer to Fig. 9 for more information about the calculation method. Each feature point is surrounded by four areas: upper right, bottom right, upper left, and bottom left. The size of each area is a 5x5 square. First, we compute, for each of the four areas, several values of dx and dy, and several |dx| and |dy|. Secondly, summing up these values yields Σdx, Σdy, Σ|dx|, and Σ|dy|, respectively. Lastly, combining these values constitutes a four-dimensional vector (Σdx, Σdy, Σ|dx|, Σ|dy|).

From Fig. 9 we can see that SURF quantizes these four summed values. An element with a value larger than a certain threshold is set to one, whereas an element with a value smaller than the threshold is set to zero. Thus, the resulting vector is a four-dimensional one with values one and zero. Fig. 10 illustrates the SURF descriptor for three different image-intensity patterns.

Fig. 9. Statistics of the Haar wavelet filter responses in dx and dy.
Fig. 10. Feature description table constituted from (Σdx, Σdy, Σ|dx|, Σ|dy|).
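The per-area statistics described above can be sketched as follows; the responses dx and dy are approximated here by plain horizontal and vertical differences rather than true Haar wavelet filters, and the binarization threshold is an assumed value.

    import numpy as np

    def area_descriptor(patch):
        """(sum dx, sum dy, sum |dx|, sum |dy|) for one 5x5 area
        around a feature point."""
        p = patch.astype(float)
        dx = np.diff(p, axis=1)     # horizontal response, stand-in for Haar dx
        dy = np.diff(p, axis=0)     # vertical response, stand-in for Haar dy
        return np.array([dx.sum(), dy.sum(),
                         np.abs(dx).sum(), np.abs(dy).sum()])

    def quantize(vec, thresh=1.0):
        """One if the summed value exceeds the threshold, zero otherwise
        (cf. Fig. 10)."""
        return (vec > thresh).astype(int)

    patch = np.random.rand(5, 5)    # e.g., the upper-right area of a feature
    print(quantize(area_descriptor(patch)))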
E. Detect the side vehicles

To detect the sideways moving vehicles, one needs to understand the static sideways background and the movement model of the sideways moving objects. From Fig. 11, we can find that when watching the rear area from the moving vehicle, the relative moving direction of a static background object is backward, and the object in the image gradually becomes smaller and finally disappears from the center of the screen. As for the relative moving direction of a sideways moving vehicle, it varies according to the speed of the experimental vehicle. When the sideways moving vehicle (Vs) is slower than the experimental vehicle (Vo), the moving direction of the former is also backward. When the speed of the sideways moving vehicle (Vs) is the same as that of the experimental vehicle (Vo), the relative movement is static.


When the sideways moving vehicle (Vs) is faster than the experimental vehicle (Vo), the relative moving direction is forward. Based on this characteristic, we can find the forward feature vectors and regard them as evidence of moving vehicles.

Fig. 11. Schematic figure of establishing the background.
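This forward/backward reasoning reduces to a sign test on the displacement of matched feature pairs, as in the sketch below; which image direction counts as "forward" depends on which side camera produced the image, so the sign convention and the minimum displacement are assumptions of this sketch.

    import numpy as np

    def forward_feature_indices(prev_pts, curr_pts, min_disp=1.0):
        """Indices of matched pairs whose motion is 'forward'.

        prev_pts, curr_pts -- (N, 2) arrays of matched (x, y) positions
        in two successive images.  Static background and slower vehicles
        drift backward in the image; only pairs moving forward by more
        than min_disp pixels are kept as evidence of an overtaking
        side vehicle."""
        disp = np.asarray(curr_pts, float) - np.asarray(prev_pts, float)
        return np.where(disp[:, 0] > min_disp)[0]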

V. EXPERIMENTAL RESULTS

The proposed method has been effectively implemented using two CCD cameras mounted onto a roadway vehicle. The captured images from the side CCD cameras are transferred to the computer in 320x240 BMP format. Fig. 12 illustrates a sequence of three images captured while the experimental vehicle is moving on the road.

Fig. 12. Illustration of the image sequence.

The feature points were obtained from two successive images, and we compared them to find the matched feature point pairs shown in Fig. 13. Then, we calculated the feature vectors from the locations of the matched pairs and marked the forward-direction feature vectors, as shown in Fig. 14. After that, we calculated the distance between the sideways moving vehicle and the experimental vehicle. Meanwhile, different warning signs, as shown in Fig. 15, were given depending on the distance so that drivers know whether it is safe to change lanes, thereby ensuring driving safety.

Fig. 13. Locate matched moving pixels.
Fig. 14. Detect moving pixels of side vehicles.
Fig. 15. The different warning figures (safety, warning, danger) shown on screen together with the distance.

Table I and Fig. 16 show the experimental results generated directly from the experimental platform, which provides the distance information and warning figures on the monitor. The detection rate in the table denotes the ratio of frames in which the system detects an actually present side moving vehicle, while the false alarm ratio denotes the ratio of frames in which the system incorrectly reports a side moving vehicle and raises a false alarm.

                        Percentage   Total frames   Detected frames
    Detection rate      83.93%       56             47
    False alarm ratio   9.88%        81             8

Table I. The experimental results.
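Read as frame counts, the two rows of Table I are consistent; a quick check, assuming the percentages are simple ratios of the listed columns:

    # Detection rate: detected frames / frames containing a side vehicle.
    print(f"{47 / 56:.2%}")   # -> 83.93%
    # False alarm ratio: falsely flagged frames / total frames examined.
    print(f"{8 / 81:.2%}")    # -> 9.88%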


Fig. 16. The experimental platform for side vehicle detection.

VI. CONCLUSION

In this paper, a method of side vehicle detection is presented which has been effectively implemented in a roadway environment for detecting side moving vehicles. A vision-based approach is proposed for detecting side moving vehicles even in the blind spot area. The detection system can be operated at a real-time rate to provide drivers and passengers with surrounding traffic information. Any moving side vehicles, including motorcycles, can be effectively detected. Therefore, the proposed system is capable of ensuring driving safety and reducing accidents caused by lateral collisions.

ACKNOWLEDGMENT

This research was supported by the National Science Council of Taiwan, R.O.C. under grant NSC 98-2221-E-027-065-MY3.

REFERENCES

[1] K.-T. Song, C.-H. Chen, and C.-H. Chiu Huang, "Design and Experimental Study of an Ultrasonic Sensor System for Lateral Collision Avoidance at Low Speeds," Proceedings of the 2004 IEEE Intelligent Vehicles Symposium, Parma, Italy, pp. 647-652, Jun. 2004.
[2] S. Wender and K. Dietmayer, "3D vehicle detection using a laser scanner and a video camera," IET Intelligent Transport Systems, Vol. 2, No. 2, pp. 105-112, Aug. 2008.
[3] T. Kato, Y. Ninomiya, and I. Masaki, "An Obstacle Detection Method by Fusion of Radar and Motion Stereo," IEEE Transactions on Intelligent Transportation Systems, Vol. 3, No. 3, Sep. 2002.
[4] Z.-W. Kim and T.-E. Cohn, "Pseudoreal-Time Activity Detection for Railroad Grade-Crossing Safety," IEEE Transactions on Intelligent Transportation Systems, Vol. 5, No. 4, Dec. 2004.
[5] W.-C. Chang and C.-W. Cho, "On-Line Boosting for Vehicle Detection," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, Vol. 40, No. 3, Jun. 2010.
[6] W.-C. Chang and C.-W. Cho, "Real-Time Side Vehicle Tracking Using Parts-Based Boosting," Proceedings of the 2008 IEEE International Conference on Systems, Man and Cybernetics, Singapore, Oct. 12-15, 2008.
[7] M. Miyoshi, J.-K. Tan, and S. Ishikawa, "Extracting Moving Objects from a Video by Sequential Background Detection Employing a Local Correlation Map," Proceedings of the 2008 IEEE International Conference on Systems, Man and Cybernetics, 2008.
[8] W.-C. Chang and T.-C. Tsao, "Omni-Directional Lane Detection and Vehicle Motion Estimation with Multi-Camera Vision," Proceedings of the 2009 CACS International Automatic Control Conference, Taipei, Taiwan, Nov. 27-29, 2009.

