
Real-time approaching vehicle detection in blind-spot area

C.T. Chen, R&D Division, Automotive Research & Testing Center, Changhua, Taiwan (chenct@artc.org.tw)
Y.S. Chen, R&D Division, Automotive Research & Testing Center, Changhua, Taiwan (yschen@artc.org.tw)

Abstract—Detecting objects (including all kinds of vehicles, bicycles, and pedestrians) accurately and efficiently is an essential issue in a blind-spot information system (BLIS). To meet these requirements, this paper presents an image-based method to detect approaching objects in the blind-spot area and proposes a verification method using a video database recorded in real traffic environments. By taking video frames and converting the images into one-dimensional information, the image entropy of the road scene in the adjacent lane is estimated. Thus, by analyzing the lane information, an object can be detected and located in constant time. This idea has been realized and implemented on a low-cost DSP platform developed by the Automotive Research & Testing Center (ARTC, Taiwan). The accuracy rate of this blind-spot detection system (BDS) is 91% and the frame rate is more than 20 frames per second (fps), in day and night and in all weather conditions. The BDS has been applied to general vehicles and heavy trucks.

Keywords- image entropy; blind-spot area; DSP platform

I. INTRODUCTION

With the development of science and technology, distance detection and tracking technologies have been applied to many daily living facilities to promote the safety and convenience of living. For example, drivers are more and more concerned about driving safety nowadays, and automobile manufacturers have proposed many safety schemes. Among them, collision-forewarning technologies attract great attention, including lane departure warning systems, vehicle backing assist systems, and blind-spot information systems. In particular, blind-spot information systems contribute greatly to driving safety and are gradually being valued by the market. The abovementioned blind spot refers to the area a driver cannot see in the rear-view mirrors. In blind-spot information systems using the vision solution, cameras are installed on the two sides of a vehicle to capture images of the blind spots, and the driver is alerted to approaching vehicles when the driver would like to change lanes. Therefore, technologies for estimating the distance of the approaching vehicle are needed by the blind-spot information system. Radar is another technology in the market. The radar adopts an ultrasonic distance detection technology, an infrared distance detection radar, etc., to detect distance. Some vehicular radars may even detect the shape of an object. However, vehicular radars usually have a smaller field of view and have their own blind spots. The range of the blind spots correlates with the number of installed vehicular radars. Besides, a vehicular radar has a limited detection distance and can hardly detect an object moving in a large area. Therefore, image technology for detecting a moving object can overcome the problem of blind spots, whereby the driver can watch the images of all the blind spots on the display device. It is becoming widely used due to its low cost and wide field of vision.

When it comes to image-based vehicle detection systems, vehicle/object detection and tracking technologies have been proposed in the literature, including motion information [1][2][3], knowledge-based methods [4][5] and optical flow [6][7]. Techmer et al. [1] combined a lane detection and tracking approach in the lateral and rear lanes; the object tracking is done by minimizing distances between the surrounding contours of each edge point within two adjacent images. Wang et al. [2] proposed a novel approach that performs a spatiotemporal wavelet transform on videos to obtain motion information of vehicles. Zhu et al. [3] detect overtaking vehicles by integrating dynamic scene modeling, hypothesis testing and robust information fusion.

Tsai et al. [4] used color and edge information to detect an approaching vehicle from a single image, which has the ability to detect candidate vehicles; thus, it differs from the methods of [1][2][3], which referenced motion information through a series of images. She et al. [5] take features from color and shape; the shape feature is obtained from an edge cue, which contains three directions and is robust to scaling and occlusion. Even though those methods may work well under most conditions, they still fail in some specific environments. Batavia et al. [6] applied implicit optical flow to detect overtaking vehicles, which is not affected by shadows; however, because it is affected by curvy roads, it is only suitable for straight roads. Wang et al. [7] employed homogeneous sparse optical flow in overtaking vehicle detection, which made detection more robust to camera shocks and vibrations. Although optical flow methods can detect approaching objects, they still have a major limitation on still objects. Nevertheless, the abovementioned technologies still have room to improve. Alternatively, an image comparison method can be used to search different image regions; however, the recognition capability depends on the adaptability of the templates, and the search is time-consuming. Alternatively, indices can be used to work out the eigenvalues of various images, and then an artificial intelligence model is constructed to identify objects; however, it is hard to embed this technology into an existing system.

For a vehicle-carried image system, the response should be instant, and the recognition algorithm should be exempt from the influence of environmental illumination so that a certain level of reliability can be achieved. The three technologies mentioned above all consume a lot of resources to process images. Further, two different algorithms are usually used to deal with the different illumination conditions of day and night. To solve the abovementioned problems, this research proposes an object position detection device and a method thereof.

II. ON-VEHICLE IMAGING SYSTEM

In general, an on-vehicle imaging system applies cameras to capture an inside view of the driver's status or an outside view of the vehicle's status, and integrates vehicle signals to provide the driver with friendly alert information. ARTC has established an embedded on-vehicle imaging system (see Fig. 1). The system is formed of an input unit, a processing unit and an output unit. The input unit includes the image capture unit and vehicle signals. The processing unit includes the image processing unit and a microprocessor; a digital signal processor is used as the image processing unit. The output unit includes a display unit and an alert unit.

The system uses the DSP (digital signal processor) to capture the image signal from a CCD/CMOS image sensor, and the steering-wheel and speed signals of the vehicle from the microprocessor. It integrates an RS232 interface to modulate the parameters of the blind-spot area between a computer and the microprocessor. The recognition result of the blind-spot area image is presented on a general monitor. Besides, the alert information can be presented by an LED and a buzzer as well.

Fig. 1. Architecture of the ARTC on-vehicle imaging system. The system is formed of an input unit, a processing unit and an output unit.

The ARTC on-vehicle imaging system can be applied to all kinds of detection regions (see Fig. 2), including the front side, rear side, front lateral sides and rear lateral sides.

Fig. 2. Detection regions of the vehicle image system. The front side, rear side, front lateral sides and rear lateral sides are the regions.

III. ALGORITHM OF APPROACHING VEHICLE DETECTION

The study continuously captures images in the detected direction and uses an image entropy estimation method to convert each captured image into one-dimensional (1-D) distance-axis signal information. Next, the algorithm performs a numerical differentiation on the 1-D distance-axis signal information to obtain a differential value of the 1-D distance-axis signal information at a single time point. Next, the difference of the 1-D distance-axis signal information of at least two adjacent time points is estimated. The position of the approaching object is then determined according to whether the differential value of the 1-D distance-axis signal information is greater than or equal to a predetermined threshold. The approaching status of the object is also determined according to whether the difference of the 1-D distance-axis signal information of at least two adjacent time points is greater than or equal to a predetermined threshold, wherein the abovementioned two thresholds may be identical. The flowchart of the method for detecting an approaching object according to this research is shown in Fig. 3.

In Step 1, the image capture unit captures the images within the detected regions in at least one direction defined by the user. The image capture unit may be a CCD (Charge Coupled Device) camera or a CMOS (Complementary Metal Oxide Semiconductor) camera. The captured image is transmitted to the image processing unit.

In Step 2, the image processing unit uses a complexity estimation method, such as a method of calculating the entropy of the regional image, to work out the information quantities of the images at different distances within the region defined by the user; thus, each 2-D image is converted into 1-D distance-axis signal information (I).

In the embodiment shown in Fig. 4, the 1-D distance-axis signal information, estimated by Equation (1), is the information quantity of a traffic lane image and is represented by the solid lines:

    I(x) = -Σ_{i=1}^{n} p_i × log(p_i),  where p_i = G_i / T_pixels        (1)

wherein G_i is the number of times Gray Level i appears in the regional image, T_pixels is the number of effective pixels in the regional image, p_i is the probability that Gray Level i appears in the regional image, n is the number of gray levels in the regional image, and x is the beginning position of the 1-D distance axis in the regional image.
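The entropy conversion of Equation (1) can be sketched in code. This is a minimal illustration rather than the ARTC implementation: the strip layout along the distance axis, the function name, and the strip count are all assumptions.

```python
import numpy as np

def entropy_1d_signal(gray_roi, num_strips):
    """Convert a 2-D grayscale lane ROI into a 1-D distance-axis signal
    by computing the image entropy (Equation 1) of successive strips.

    gray_roi is assumed to be a uint8 array (H x W) whose rows run from
    the near end to the far end of the detection region (an illustrative
    assumption, not taken from the paper).
    """
    strips = np.array_split(gray_roi, num_strips, axis=0)
    signal = np.empty(num_strips)
    for k, strip in enumerate(strips):
        # G_i: count of each gray level i; T_pixels: effective pixel count
        counts = np.bincount(strip.ravel(), minlength=256)
        p = counts / strip.size
        p = p[p > 0]                         # skip absent gray levels
        signal[k] = -np.sum(p * np.log(p))   # I(x) = -sum p_i log(p_i)
    return signal
```

A uniform strip (one gray level) yields zero entropy, while a strip containing a vehicle's colors, edges and shadows yields a high value, which is exactly why the 1-D signal rises where an object enters the lane.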
points, wherein the dotted-line Y axis represents the difference
When there is another object in the detected direction, the image information quantity may be increased by the colors, shadows, or lights of the additional object. When the image information quantity is used in object detection, the weather factor does not affect the operation of the image capture unit and the image processing unit. Thus, the required computational capability of the image processing unit is greatly reduced, the rate of real-time feedback is greatly increased, and the hardware requirement of the system is lowered.

After the image processing unit obtains the 1-D distance-axis signal information, the process proceeds to Step 3. In Step 3, the image processing unit performs a numerical differentiation on the 1-D distance-axis signal information to obtain a differential value of the 1-D distance-axis signal information at a single time point. In the embodiment shown in Fig. 4, the differential value of the 1-D signal is the differentiated image information of a traffic lane, which is represented by the dotted lines. Below, the differentiated image information is discussed in terms of the differential value of the 1-D signal. The left part of Fig. 4 shows the information quantity and the differential value of the 1-D signal, wherein the dotted-line Y axis represents the differential value of the 1-D signal. From the left part of Fig. 4, it is observed that the differential value of the 1-D signal is obviously greater than a threshold at the position where a vehicle appears. Thus, the position of a vehicle is detected. Then, the image processing unit calculates the difference of the 1-D distance-axis signal information of at least two adjacent time points.

The right part of Fig. 4 shows the information quantity and the difference of the 1-D signals of at least two adjacent time points, wherein the dotted-line Y axis represents this difference. From the right part of Fig. 4, it is observed that the difference of the 1-D signals of at least two adjacent time points is obviously greater than a threshold at the position where a vehicle is approaching. Thus, the differential value of the 1-D signal and the difference of the 1-D signals of at least two adjacent time points can be used to crosscheck the approaching of a vehicle.

Step 4 and Step 5 use the information quantities of the 1-D signals to determine the position where a vehicle is approaching, wherein numerical differentiation is used to obtain the gradient (I') of the image information quantity, as shown in Equation (2):

    I'(x) = (I(x) - I(x - a)) / a        (2)

Then, the differential value of the 1-D signal and the difference of the 1-D signals of at least two adjacent time points are respectively compared with predetermined thresholds to see whether both are greater than the thresholds, and Position Xt where an object may exist is determined. When the image capture unit is a camera with a resolution of 320×240, the value of the increment "a" is set to about 10 for an image covering real dimensions of 5 m in width and 15 m in length. When the image capture unit is a camera with a resolution of 640×480, the value of the increment "a" is set to about 15-20 for an image covering the same real dimensions. Therefore, the higher the resolution of the image capture unit, the greater the value the increment "a" may take. Thus, the user can determine the threshold he needs. To detect the movement of an object, in Step 6, the information quantities of two adjacent images I_t and I_{t+1} are subtracted to obtain the difference ΔI_{t+1}, according to Equation (3):

    ΔI_{t+1}(x) = I_{t+1}(x) - I_t(x)        (3)

From the calculation results of ΔI_{t+1}, it is known that when an object moves to Position X_{t+1}, ΔI_{t+1}(X_{t+1}) is the maximum value. In this research, a complexity estimation method is used to convert a 2-D image into 1-D distance-axis signal information, a numerical differentiation method is used to obtain the differential value of the 1-D distance-axis signal information at a single time point, and the difference of the 1-D distance-axis signal information of at least two adjacent time points is also worked out. It is then found that when there is an object approaching in a continuous image sequence, the differential value of the 1-D signal I'_{t+1}(X_{t+1}) and the difference of the 1-D signals of at least two adjacent time points ΔI_{t+1}(X_{t+1}) both have obvious peaks at Position X_{t+1} where the object appears.

In one embodiment of the research, whether an object is approaching can be learned merely by comparing the differential value of the 1-D signal I'_{t+1}(X_{t+1}) and the difference of the 1-D signals of at least two adjacent time points ΔI_{t+1}(X_{t+1}) with a simple threshold value θ. If the approaching of an object is detected, the distance between the detected object and the detector will decrease as time elapses, which further confirms that there is indeed an object approaching. In Step 7, when the image processing unit finds that an object has approached the alert limit set by the user, it alerts the user with an alert unit coupled to the image processing unit. The alert unit may be a display, an LED, a buzzer, or a speaker.

Fig. 3. Algorithmic process of approaching vehicle detection in the blind-spot area. The flowchart steps are: define detected regions and continuously capture images; convert each 2-D image into 1-D distance-axis signal information; calculate the differential value of the 1-D distance-axis signal information at a single time point; calculate the difference of the 1-D distance-axis signal information of at least two adjacent time points; determine the position of an object; determine the approaching status of the object; alert the user.

Fig. 4. Algorithmic description chart of approaching vehicle detection in the blind-spot area. (Left: the information quantity of a traffic lane image and the differential value of the 1-D signal, plotted from near end to far end, marking the probable position of a vehicle. Right: the information quantities of two successive traffic lane images and their difference at two adjacent time points, marking the determined vehicle position. Bottom: image information of the surroundings of a vehicle, for the rear side, rear lateral side and front lateral side images.)

IV. EXPERIMENTAL RESULTS

The blind-spot system was verified by using a video database of the blind-spot area covering three road situations and four weather conditions. In this section, we present the specifications of the blind-spot system, the establishment of the verifiable video database, the standard and evaluation method of system accuracy, and the evaluative results. Our evaluation has been carried out both qualitatively and quantitatively.

A. Specifications of the Blind-Spot System

The specifications of the blind-spot system are:
1) Enable condition: power-on operation and speed over 10 km/hr.
2) Region of detection: the width and length of the detection field are 3 m and 8 m from the rear-view mirror, and the relative velocity of the vehicle is between -10 and 70 km/hr.
3) Alert addition: when the driver does not use the turn signal, the system enables the LED to actively give the driver a warning signal. When the driver attempts to change lanes and uses the turn signal, the system enables the LED and buzzer to actively give the driver a warning signal.

B. Establishment of a Verifiable Video Database of the Blind-Spot Area

The blind-spot system was verified by using a video database of the blind-spot area, including three road situations (high-speed road, urban road and thoroughfare road) and four weather conditions (sunny day, sunny night, rainy day and rainy night) (see Table 1), in order to verify the system under scientific conditions and to avoid the system being optimized for one specific environmental condition.

Table 1. Verifiable video database for approaching vehicle detection in the blind-spot area.

Serial Number | Road situation | Weather condition
1  | High-speed road | Sunny day
2  | High-speed road | Sunny night
3  | High-speed road | Rainy day
4  | High-speed road | Rainy night
5  | Urban road (high car flow in traffic and many road junctions) | Sunny day
6  | Urban road | Sunny night
7  | Urban road | Rainy day
8  | Urban road | Rainy night
9  | Thoroughfare road (belonging to neither high-speed road nor urban road) | Sunny day
10 | Thoroughfare road | Sunny night
11 | Thoroughfare road | Rainy day
12 | Thoroughfare road | Rainy night
C. Standard and Evaluation Method of System Accuracy

The process of recognizable-capacity evaluation is shown in Fig. 5. A thoroughfare road was used to demarcate the distance lines (see Fig. 5-A) in order to test whether the algorithm can give the driver an exact alert within the warning region. We filmed videos covering the three road situations and four weather conditions, and the DSP was then used to plant the distance lines: the green horizontal line is 3 m away from the equipped vehicle, the red straight lines run from 3 m to 8 m from the rear-view mirror of the equipped vehicle, and the two blue distance lines are at the 9 m and 10 m positions (see Fig. 5-B).

Table 2. Judgement of alert results for approaching vehicle detection in the blind-spot area. (Legend: AC represents the approaching car and EC represents the equipped car.)

Determinative region | AC status | AC relative velocity | Alert result | Judgement
3 m × 0~8 m  | None           | any      | Alert    | Failure
3 m × 0~8 m  | None           | any      | No alert | Pass
3 m × 0~8 m  | Exist, running | AC ≧ EC | Alert    | Pass
3 m × 0~8 m  | Exist, running | AC ≧ EC | No alert | Failure
3 m × 0~8 m  | Exist, running | AC < EC  | Alert    | Failure
3 m × 0~8 m  | Exist, running | AC < EC  | No alert | Pass
3 m × 0~8 m  | Exist, static  | AC > EC  | Alert    | Pass
3 m × 0~8 m  | Exist, static  | AC > EC  | No alert | Failure
3 m × 0~8 m  | Exist, static  | AC ≦ EC | Alert    | Failure
3 m × 0~8 m  | Exist, static  | AC ≦ EC | No alert | Pass
3 m × 8~10 m | Exist, running | AC ≧ EC | Alert    | Pass
3 m × 8~10 m | Exist, running | AC ≧ EC | No alert | Pass
3 m × 8~10 m | Exist, running | AC < EC  | Alert    | Failure
3 m × 8~10 m | Exist, running | AC < EC  | No alert | Pass
3 m × 8~10 m | Exist, static  | AC ≧ EC | Alert    | Pass
3 m × 8~10 m | Exist, static  | AC ≧ EC | No alert | Pass
3 m × 8~10 m | Exist, static  | AC < EC  | Alert    | Failure
3 m × 8~10 m | Exist, static  | AC < EC  | No alert | Pass
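The Pass/Failure rules of Table 2 can be expressed as a small decision function. This is an illustrative sketch: the argument names, the sign convention of the relative velocity, and the treatment of an empty 8-10 m band (a case the table does not list) are assumptions.

```python
def judge(region, ac_present, ac_running, rel_velocity, alerted):
    """Sketch of Table 2's judgement of an alert result.
    region: "0-8m" or "8-10m" (both bands are 3 m wide);
    rel_velocity: approaching-car speed minus equipped-car speed."""
    if region == "0-8m":
        if not ac_present:
            must_alert = False                  # no approaching car
        elif ac_running:
            must_alert = rel_velocity >= 0      # running: alert iff AC >= EC
        else:
            must_alert = rel_velocity > 0       # static: alert iff AC > EC
        return "Pass" if alerted == must_alert else "Failure"
    # 8-10 m band: staying silent always passes; an alert passes
    # only when the approaching car is not slower (AC >= EC).
    if not alerted:
        return "Pass"
    return "Pass" if (ac_present and rel_velocity >= 0) else "Failure"
```

Note the asymmetry the table encodes: in the 0-8 m band an alert is mandatory, while in the 8-10 m band it is merely permitted for a fast approaching car.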

The verifiable videos were stored in a portable digital video recorder, played back in NTSC form, and input to the DSP (see Fig. 5-C). The analytic result of each verified video was produced by the DSP (see Fig. 5-D). When the system alerts, it displays a red block; when it does not alert, it displays a green block. Finally, the analytic result with the planted distance lines added is shown in Fig. 5-E.

Fig. 5. Process of recognizable-capacity evaluation. A: distance lines; B: DSP-planted distance lines; C: proof video; D: analytic result of the proof video produced by the DSP; E: analytic result with the DSP-planted distance lines added.

The analytic video was played on a PC and verified at intervals of 2 seconds, one interval per verified video unit; the verification method recorded the recognition accuracy at each interval. The judgement of the alert results is shown in Table 2: a "Pass" is judged when the system alerts exactly when it must, and a "Failure" is judged when the system alerts when it must not, or fails to alert when it must. Each verifiable video is 10 minutes long and therefore contains 300 ((60/2)×10 = 300) verifiable video units. The verifiable accuracy percentage is the number of "Pass" judgements divided by the number of verifiable video units.

D. Evaluative Results of the Approaching Object Detection System

The system accuracy under the three road situations and four weather conditions is shown in Table 3. In the high-speed road situation, the total average accuracy is 89.98%; the rainy-night condition is the best (97.67%) and the rainy-day condition is the worst (82.67%). In the rainy-day condition, the mist rolled up from the ground by the tires of the equipped vehicle makes the features of a vehicle indeterminate. The sunny-night condition also suffers from lamp reflectance, but most of the background image is at low gray levels, and the high-gray-level regions created by lamps do not cause mistakes unless a vehicle is obviously approaching. In the sunny condition, daytime is much better than night because the features of a vehicle are obvious and complete in daylight.

In the urban road situation, the accuracy is good, although the night conditions are somewhat worse. In the thoroughfare road situation, the system accuracy is also over 90%. In conclusion, the algorithm of approaching vehicle detection applied to the blind-spot area of a vehicle has the following characteristics:
1) In the sunny condition, the accuracy in daytime is better than at night.
2) In the rainy condition, the accuracy at night is better than in daytime.
3) In the comprehensive statistical results, the accuracy at night is better than in daytime.
4) The system accuracy in the thoroughfare road situation is better than in the urban road situation.
5) The system accuracy in the urban road situation is better than in the high-speed road situation.

In theory, the distance of the warning area is affected by the relative velocity between the approaching car and the equipped car: the warning distance shortens as the relative velocity increases. The research used images of actual road situations for evaluation; the relative velocity between the approaching car and the equipped car was not measured, so the endurance of the warning area, which is affected by the relative velocity, could not be obtained. The mean distance of the warning area is shown in Table 4; the system's mean warning distance is about 8.1 m.
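The unit-counting arithmetic used in the evaluation (2-second verified units over a 10-minute video, accuracy = Pass count / 300) can be stated in code; the function name is an assumption.

```python
def accuracy_percent(pass_count, video_minutes=10, unit_seconds=2):
    """Each video is split into fixed-length verified units
    ((60/2)*10 = 300 units for a 10-minute video at 2 s per unit);
    accuracy is the percentage of units judged "Pass"."""
    units = (60 // unit_seconds) * video_minutes
    return 100.0 * pass_count / units
```

For example, 273 Pass judgements out of 300 units give 91.0%, the order of magnitude reported in Table 3.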

Table 3. Percentage (%) of judgement accuracy for approaching vehicle detection in the blind-spot area.

Serial Number | Road situation | Weather condition | Accuracy (%) | Road-situation average (%)
1  | High-speed road   | Sunny day   | 91.25 |
2  | High-speed road   | Sunny night | 88.33 | 89.98
3  | High-speed road   | Rainy day   | 82.67 |
4  | High-speed road   | Rainy night | 97.67 |
5  | Urban road        | Sunny day   | 92.31 |
6  | Urban road        | Sunny night | 89.97 | 91.46
7  | Urban road        | Rainy day   | 90.64 |
8  | Urban road        | Rainy night | 92.91 |
9  | Thoroughfare road | Sunny day   | 91.02 |
10 | Thoroughfare road | Sunny night | 91.06 | 91.59
11 | Thoroughfare road | Rainy day   | 91.36 |
12 | Thoroughfare road | Rainy night | 92.93 |

Day average: 89.88; Night average: 92.15; Sunny average: 90.66; Rainy average: 91.36; Total average: 91.01.

Table 4. Average distance (m) of alert for approaching vehicle detection in the blind-spot area.

Serial Number | Road situation | Weather condition | Alert distance (m) | Road-situation average (m)
1  | High-speed road   | Sunny day   | 8.80 |
2  | High-speed road   | Sunny night | 7.50 | 8.46
3  | High-speed road   | Rainy day   | 9.17 |
4  | High-speed road   | Rainy night | 8.37 |
5  | Urban road        | Sunny day   | 8.19 |
6  | Urban road        | Sunny night | 8.38 | 8.23
7  | Urban road        | Rainy day   | 8.44 |
8  | Urban road        | Rainy night | 7.89 |
9  | Thoroughfare road | Sunny day   | 7.59 |
10 | Thoroughfare road | Sunny night | 7.21 | 7.62
11 | Thoroughfare road | Rainy day   | 7.87 |
12 | Thoroughfare road | Rainy night | 7.82 |

Day average: 8.34; Night average: 7.86; Sunny average: 7.95; Rainy average: 8.26; Total average: 8.10.

V. CONCLUSION

The image detection and warning system is a critical technology for vehicle safety studies. This research proposes an image processing method to detect moving objects. The 2-D data of a road image is first transferred to 1-D lane information by estimating the image entropy, and the possible vehicle position is determined by the differentiation process. Moreover, the position of the approaching vehicle is determined from the lane information extracted from two adjacent images in the time series.

The system was verified by using a video database of the blind-spot area covering three road situations and four weather conditions. The system accuracy is more than 91.01% and the mean distance of the warning area is about 8.1 m. The results satisfy the system specification and show that the algorithm is well suited for approaching vehicle detection in a vehicle imaging system. In the future, the algorithm will be adopted to develop a collision warning system for the around-view of vehicles, and the system will be applied widely in intelligent transportation systems.

REFERENCES
[1] A. Techmer, "Real-time Motion Analysis for Monitoring the Rear and Lateral Road," Proceedings of the 2004 IEEE Intelligent Vehicles Symposium, pp. 704-709.
[2] Y. K. Wang and S. H. Chen, "A Robust Vehicle Detection Approach," IEEE International Conference on Advanced Video and Signal-Based Surveillance, 2005, pp. 117-122.
[3] Y. Zhu, D. Comaniciu, M. Pellkofer and T. Koehler, "Reliable Detection of Overtaking Vehicles Using Robust Information Fusion," IEEE Transactions on Intelligent Transportation Systems, Vol. 7, No. 4, 2006, pp. 401-414.
[4] L. W. Tsai, J. W. Hsieh and K. C. Fan, "Vehicle detection using normalized color and edge map," Proceedings of the IEEE International Conference on Image Processing, 2005, Vol. 2, pp. 588-601.
[5] K. She, G. Bebis, H. Gu and R. Miller, "Vehicle Tracking Using On-Line Fusion of Color and Shape Features," IEEE Transactions on Intelligent Transportation Systems, 2004, pp. 731-736.
[6] P. H. Batavia, D. E. Pomerleau and C. E. Thorpe, "Overtaking Vehicle Detection Using Implicit Optical Flow," IEEE Transactions on Intelligent Transportation Systems, 1997, pp. 729-734.
[7] J. Wang, G. Bebis and R. Miller, "Overtaking Vehicle Detection Using Dynamic and Quasi-Static Background Modeling," IEEE Transactions on Intelligent Transportation Systems, 2004, pp. 731-736.
[8] Z. Sun, G. Bebis and R. Miller, "On-Road Vehicle Detection Using Optical Sensor: A Review," IEEE Transactions on Intelligent Transportation Systems, 2004, pp. 585-590.
