
Proceedings of the 2004 IEEE International Conference on Robotics & Automation
New Orleans, LA, April 2004

An interactive driver assistance system monitoring the scene in and out of the vehicle

Lars Petersson*, Luke Fletcher†, Nick Barnes‡, Alexander Zelinsky§
National ICT Australia Limited*‡, Locked Bag 8001, Canberra ACT 2601.
Research School of Information Sciences and Engineering†§, The Australian National University, Canberra ACT 0200.
Email: lars.petersson@nicta.com.au, luke@syseng.anu.edu.au, nick.barnes@nicta.com.au, alex@syseng.anu.edu.au

Abstract- This paper presents a framework for interactive driver assistance systems including techniques for fast speed sign detection and classification, car detection and tracking, and lane departure warning. In addition, the driver's actions are monitored. The integrated system uses information extracted from the road scene (speed signs, position within the lane, relative position to other cars, etc.) together with information about the driver's state, such as eye gaze and head pose, to issue adequate warnings. A touch screen monitor presents relevant information and allows the driver to interact with the system. The research is focused around robust on-line algorithms. Initial results of on-line speed sign detection and car tracking are presented in the context of a driver assistance system.

I. INTRODUCTION
Improved safety is a key technological goal in road vehicles today. One way to achieve this is by creating systems within the vehicle that support the driver in reacting to changing road conditions. Our research is focused on driver assistance systems: systems that assist the driver in controlling the car, but keep the driver in the loop. Impressive work in this and related areas has been performed by Thorpe [1] [2] [3], Dickmanns [4] [5] [6] and Broggi [7] [8]. Their work deals mostly with the sensing aspect of driver assistance, which is essential to create robust and reliable systems. An interesting research area is, however, how to handle the information flow generated. Depending on the context, information has different significance, e.g., how are warnings most efficiently conveyed?

In this paper, we present a framework for interactive driver assistance systems in which we can perform research on new technologies relevant to vehicle safety. It is centered around a touch screen monitor mounted next to the driver with a backbone of computing power and sensors/actuators. We demonstrate techniques for fast speed sign detection and classification, car detection and tracking, and lane departure warning. In addition, the driver's actions are monitored by the use of a faceLAB system from Seeing Machines [9]. Only robust on-line algorithms are used.

An important task for a driver assistance system is sign recognition. Signs give information relevant to the local conditions and appear clearly in the environment; however, a driver may not notice a sign due to distraction or a lack of concentration. In this case it is helpful to make them aware of the information that they have missed. The same is true for monitoring the position of other vehicles. Perhaps the driver is approaching the cars in front too rapidly, or he/she may try to change lanes when there is a car in the blind spot.

How can the system within the car perceive whether the driver is already aware of a particular event? An eye gaze and head pose tracking system is used to detect where the driver is looking. This way, we can ensure a warning is issued if the attention of the driver was directed elsewhere.

Fig. 1. The driver's seat showing the touch screen monitor, active stereo vision head looking out the windscreen and a passive stereo pair monitoring the driver.

Section II discusses driver assistance systems in general, outlining the purpose and demands of a driver assistance system. A comparison is made to a human co-pilot to highlight the important aspects. In Section III the software framework is described with a generic model of the system. Section IV shows the method of interfacing to the driver. Preliminary work on a system for fast visual speed sign recognition is presented in Section V. Section VI presents a method for obstacle detection and tracking, which is applied to detecting and tracking other vehicles. A lane tracker is shown in Section VI-D. The integration of the different subsystems is presented in Section VII, followed by conclusions and future work.

II. DRIVER ASSISTANCE SYSTEMS

A Driver Assistance System (DAS) may perform activities like relieving the driver of distracting routine activities, warning about upcoming situations and possibly taking control of the car if an accident is imminent. Depending on the task to be performed, a DAS must have appropriate levels of competencies in a number of areas.


If we consider the DAS to be a human co-pilot, it is easier to pick out the important aspects. To be of assistance, the co-pilot needs to be aware of what is going on outside of the car, e.g., are there any pedestrians in sight, where are they going, how is the road turning, etc. Moreover, we would like our co-pilot to warn us if we have not noticed an upcoming situation. That means that not only should the co-pilot be aware of what is going on outside of the car, but also what is happening inside, i.e., the driver's responses. In addition, our co-pilot must know where the vehicle is going, how fast, and whether we are braking, accelerating etc., to make good decisions. Good decisions are a result of good reasoning. A successful driver-co-pilot team requires good communication. The co-pilot must not be intrusive or present the driver with too much information. Finally, if the co-pilot notices that the driver does not respond to a situation that will result in an accident, he must be able to take control over the car.

Returning to our non-human co-pilot, the DAS, we can condense the above to the following key competencies:
- Traffic situation monitoring
- Driver's state monitoring
- Vehicle state monitoring
- Communication with the driver
- Vehicle control
- Reasoning system

The first three collect information which the DAS can use to analyse the current situation. The fourth, communication with the driver, provides both input to the DAS and output to the driver. E.g., the driver can specify an overall goal, or the DAS can give information to the driver. Vehicle control is necessary if it is expected that the DAS should be able to perform any semi- or fully autonomous maneuvers. A reasoning system may range from a direct mapping from an input to an output, to a complex system using the latest advances in artificial intelligence. The level of competence in each category is dependent on the specific task to be solved.

Finally, as with a human co-pilot, the DAS should possess the following behavior:
- Intuitive. The behavior of the DAS must make immediate sense in the context of the standard driving task.
- Nonintrusive. Must not distract or disrupt the driver unless it is necessary.
- Overridable. The driver has ultimate control and can refuse assistance.

III. THE SOFTWARE FRAMEWORK

Figure 2 shows the generic model of our driver assistive system. A client/server based system architecture has been designed where each of the principal information sources, such as the road scene, the driver or the vehicle, is represented by an information server. The information servers are generic across driver assistance systems; a particular driver assistance system is implemented by a simple "DAS logic" client that encapsulates the desired behaviour of the DAS system. These "DAS logic" clients effectively perform the role of cross referencing and auditing the data from the information sources.

Fig. 2. Sub-components of the Driver Assistive System.

The client/server structure is implemented using CORBA. This gives clean, well defined interfaces between modules, and distributed computing becomes less complex. Computer vision algorithms in general tend to consume a lot of computing power, and so it is necessary to run the DAS on several computers. Moreover, developing algorithms for driver assistance systems requires many diverse skills, so a modular approach is necessary to allow a large team to work on it at the same time.

A. Driver state

The driver's head position and eye gaze from the FaceLAB system are combined with the other sensors directed at the driver to estimate the driver's state. The driver's state is organised on three levels: a time-stamped log of raw events such as brake pedal depressions; an interface to inquire about the driver's current gaze direction; and a simple behaviour interpreter. The behaviour interpreter monitors the logged events and the vehicle state to form simple conclusions such as "Driver intending to accelerate", "Driver intending to stop/slow", "Driver intending to change lanes" and "Driver looking at X" (using the Seeing Machines environment segmentation tool). The interface can also be queried for "time since" inquiries about the driver gaze direction. Here a direction and confidence is given, and the engine returns the time since the driver looked in that direction by searching the FaceLAB log file.

FaceLAB is a driver monitoring system developed by Seeing Machines [9] in conjunction with ANU and Volvo Technological Development. It uses a passive stereo pair of cameras mounted on the dashboard to capture video images of the driver's head. These images are processed in real-time to determine the 3D pose of the person's face (±1 mm, ±1 deg) as well as the eye gaze direction (±3 deg), blink rates and eye closure. Clinical trials show head position and eye closure are key indicators for the detection of fatigue in drivers [10]. When augmented with information about the vehicle and traffic situation, additional inferences can be made. Land and Lee [11] investigated the relevance of gaze direction relative to road curvature. Apostoloff et al. [12] showed a correlation between eye gaze direction and road curvature, particularly when monitoring oncoming traffic.
B. Vehicle state

The state of the vehicle is monitored using a 3-axis accelerometer, a 3-axis gyro and a GPS. There is also a potentiometer on the steering shaft to measure the current steering angle. These vehicle state sensors will be combined with an Ackermann steering model and an extended Kalman filter to provide a robust estimate of the vehicle motion between video image frames.
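As a sketch of what such an estimate involves, the prediction step below propagates a planar pose with an Ackermann (bicycle) steering model; the wheelbase and the process noise are placeholder values, not the test vehicle's calibration.

```python
import numpy as np

WHEELBASE = 2.7  # metres; assumed, not the test vehicle's actual value

def ekf_predict(state, P, v, delta, dt, q=0.05):
    """EKF prediction with a bicycle model. state = [x, y, heading];
    v = speed (m/s), delta = steering angle (rad) from the
    steering-shaft potentiometer."""
    x, y, th = state
    state_new = np.array([
        x + v * dt * np.cos(th),
        y + v * dt * np.sin(th),
        th + (v / WHEELBASE) * np.tan(delta) * dt,
    ])
    # Jacobian of the motion model with respect to the state.
    F = np.array([
        [1.0, 0.0, -v * dt * np.sin(th)],
        [0.0, 1.0,  v * dt * np.cos(th)],
        [0.0, 0.0,  1.0],
    ])
    Q = q * dt * np.eye(3)  # isotropic process noise (illustrative)
    return state_new, F @ P @ F.T + Q
```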
IV. USER INTERFACE

It is important that information gathered, and decisions made, by the driver assistance system are conveyed to the driver in an effective manner. Although many warnings and selections can be made using switches, there is often a need for more context and richer ways of interaction. The user interface must not add to the driving task, but rather make it easier by selecting the appropriate information in an intelligent way. As mentioned in [13], information display issues that will affect the system's safety, usability and acceptance include
- modality (auditory, visual, tactile etc.)
- format (text, map, tone, voice)
- time (start time, duration, frequency)

We have chosen to use a touch screen monitor as the main interface, as it provides means for the driver to make advanced selections in the information flow, and context-based information can be displayed. This choice is in line with what many of the car manufacturers have chosen. The particular touch screen we have chosen is a high contrast colour 12" LCD monitor, model Elo Entuitive 1266V from Elo Graphics. It comes as a panel mount device with a serial interface. It is mounted next to the driver in a position suitable for ergonomic interaction, where the driver can interact at a glance. A head-up display (HUD), displaying information in the same field of view as the road, has many advantages, as there is no need for the driver to change gaze direction or focal distance. However, currently available HUDs suffer from poor contrast and do not work well under some lighting conditions.

Multimodal interfaces have a tendency to introduce a lower workload on the user [14]. We are researching the possibilities of adding, for example, auditory warning systems. A design process for natural warning sounds is presented in [15], which introduces methods for associability and sound imagery. Associability represents the required effort to associate sounds with their assigned alert function meaning. An associable sound requires less effort and fewer cognitive resources. Sound imagery is used to develop sound images which by their acoustic characteristics have a particular meaning to someone without prior training in a certain context.

V. SPEED SIGN RECOGNITION

Road sign recognition research has been around since the mid 1980's. A typical approach applies normalised cross correlation directly to several places in the traffic scene image. Some more recent approaches separate detection and classification steps [16]. This is particularly used when a large number of sign types are to be classified. We argue that this can be an effective means of managing computation for even a small number of sign types if a detection stage is available that has low computational cost, facilitating real-time operation.

We propose a new method of fast sign detection that is applicable to signs with a circular feature, a significant subset of signs: the fast radial symmetry detector [17]. All Australian speed signs have a red circle on a white background with black numbers. We are able to eliminate the vast majority of false positives by considering only radially symmetric regions that are stable across several images and have a high count of pixels in ratio to the radius.

Cross correlation can then be applied to the small number of candidates. For cross-correlation, scale is generally a problem, typically requiring multiple templates at different resolutions. However, from the radius returned by the fast radial symmetry detector we know the approximate scale of the template.
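The sketch below shows this classification step: plain normalised cross correlation of the candidate patch against templates that have already been rescaled to the detected radius. The function names, the 20% margin and the dictionary-of-templates interface are our illustrative choices.

```python
import numpy as np

def ncc(patch, template):
    """Normalised cross correlation score between equal-sized arrays."""
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.sqrt((p * p).sum() * (t * t).sum()) + 1e-12
    return float((p * t).sum() / denom)

def classify_candidate(gray, cx, cy, radius, templates):
    """Return the best-matching label ('40', '60', ...) for the candidate
    circle at (cx, cy). `templates` maps label -> grayscale template
    already rescaled so its circle radius matches `radius`."""
    r = int(1.2 * radius)  # small margin around the sign
    patch = gray[cy - r:cy + r, cx - r:cx + r].astype(float)
    scores = {label: ncc(patch, t) for label, t in templates.items()
              if t.shape == patch.shape}  # skips candidates clipped by the border
    return max(scores, key=scores.get) if scores else None
```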
Fig. 3. A candidate sign detected by the fast radial symmetry detector at the size it appears in the image, and close-up. The outer circle and numbers are clear. Despite its consistent appearance as a small image, it contains few pixels that could be said to be red, black, or even white.

There is much possible variation in the appearance of a speed sign in an image. Throughout the day, and at night, lighting conditions vary enormously. A sign may be well lit by direct sunlight or headlights, it may be completely in shadow even on a bright day, or heavy rain may blur the image of the sign. Ideally, signs have clear colour contrast, but over time they become faded while still being clear to drivers. Although signs appear by the road edge, this may be far from the car on a multi-lane highway, to the left or right, or very close on a single lane exit ramp. While signs are generally a standard distance above the ground, they can also appear on temporary roadworks signs at ground level. Thus, it is not simple to restrict the possible viewing positions of a sign within the image. By modelling the road [18] or the sky, it may be possible to dictate parts of the image where a sign cannot appear, but road modelling has its own computational expense, and colour-based methods are not robust.

However, the roadway is well structured. Under Australian law the appearance of speed signs is highly restricted: they must be of a particular size, and be a white sign with black numbers surrounded by a red circle. Unless the sign has been tampered with, signs will appear approximately orthogonally to the road direction. Finally, signs are always placed to be easily visible, so the driver can easily see them without having to look away from the road.


Our algorithm searches the image for near circular features. A legal speed sign must have a red circle around it, and the signs almost always appear orthogonal to the road. Provided our camera points in the direction of vehicle motion, the surface of all signs will be parallel to the image plane of the camera. On a rapidly curving road it may be that the sign only appears parallel to the image plane briefly, but this will be when the vehicle is close to the sign, so it will appear large in the image. If we are processing images at > 30 Hz and we are able to recognise a sign reliably from only a small number of frames, then generally we are safe to assume that the sign is parallel to the image plane.

Three further restrictions on the radial symmetry detector can reduce the number of candidates. Firstly, we are only interested in circles of a particular size range. A circle with a small number of pixels as its radius may well constitute a speed sign; however, there will not be enough pixels present to discern what the sign says, and so there is no point in further processing: we should wait until the sign is close enough to be recognised. Further, in normal driving conditions a sign will never appear closer to the camera than several metres. Given a camera of approximately known focal length, we can impose an upper limit on the possible radius of circles that we are interested in. In our system these limits were empirically derived based on sample road images.
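A minimal sketch of the detection stage follows: a simplified single-radius form of the radial symmetry voting, scanned over an allowed radius range and gated by a vote count proportional to the circumference (echoing the pixel-count-to-radius test above). It omits the gradient weighting and refinements of the full detector [17], and all thresholds are illustrative.

```python
import numpy as np

def radial_symmetry(gray, radius, grad_thresh=20.0):
    """Each strong-gradient pixel votes at the point `radius` along its
    gradient, so centres of circles of that radius accumulate votes."""
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)
    votes = np.zeros_like(mag)
    ys, xs = np.nonzero(mag > grad_thresh)
    for y, x in zip(ys, xs):
        ux, uy = gx[y, x] / mag[y, x], gy[y, x] / mag[y, x]
        for s in (+1, -1):  # dark-on-bright and bright-on-dark circles
            vy = int(round(y + s * radius * uy))
            vx = int(round(x + s * radius * ux))
            if 0 <= vy < votes.shape[0] and 0 <= vx < votes.shape[1]:
                votes[vy, vx] += 1.0
    return votes

def detect_circles(gray, radii=(6, 8, 10, 12), min_frac=0.5):
    """Return (x, y, r) candidates; a candidate needs at least
    min_frac * 2*pi*r votes, i.e. a high pixel count relative to the
    radius. The radius range stands in for the empirically derived
    limits described above."""
    found = []
    for r in radii:
        v = radial_symmetry(gray, r)
        ys, xs = np.nonzero(v > min_frac * 2 * np.pi * r)
        found += [(int(x), int(y), r) for y, x in zip(ys, xs)]
    return found
```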
A. Initial results on sign recognition

For evaluation purposes, the sign recognition system has been run over several image sequences taken from the research vehicle. The sequences come from cameras in a binocular head mounted approximately in the position of the rear-view mirror. All images used in the experiments were taken of signs on public roads around Canberra. Two sample signs from those used are shown in Figure 4. There is great variation in the appearance of the signs, including apparent scale, lighting, and the deterioration of the sign due to weather conditions.

Fig. 4. Sample images with speed signs present. The quality of sign and the lighting varied within our sequences, along with the scale of the sign that appeared. Also, more than one sign may appear in a single image.

Classification using the fast radial symmetry detector has proved highly successful, with only 10% false positives over many varied scenes, and a total of 1107 input images. The performance of the classifier on the candidates produced by the detection phase was encouraging. Requiring temporal consistency of classification over several frames could improve the results further, both rejecting invalid candidates and reducing the number of misclassifications.

VI. OBSTACLE DETECTION AND TRACKING

Reliable obstacle detection and tracking is a challenging problem in automotive research due to the diverse operating environment. Large variations in lighting, apparent obstacle size and relative velocity must all be accommodated. A moving sensor and unknown object appearance preclude many classic segmentation techniques such as background subtraction or appearance-based feature detection. Progress has tended along two avenues: superior sensing techniques and constraining the problem. Superior sensors include laser range finders, millimetre wave radars or large baseline stereo cameras [19][20][7]. Constrained problem solutions have used significant assumptions about the road scene, e.g., flat roads, featureless road surfaces or looking for car-like objects only [21]. Our technique combines several sensor data processing techniques and some weak assumptions about obstacles (such as a consistent size and location over time) to develop a robust detection and tracking system. The strength of the system is the ease of adding additional information sources (like better sensors, or image processing algorithms).

A. System overview

The obstacle detection and tracking engine provides the "obstacle state object". This object can be queried to provide a list of known obstacles. Each obstacle is returned with a location, size and trajectory estimate and associated confidence.

The obstacle detection and tracking engine has three phases of operation: detection, distillation and tracking. Figure 5 shows these phases, which operate concurrently, detecting new obstacles while tracking previously detected obstacles. The first phase uses a set of "bottom up", whole image techniques (stereo disparity, optical flow, colour consistency) to search the image for likely obstacles. The second phase uses the particle filter based "Distillation algorithm" [22] [12].

Fig. 5. Phases: 1. Obstacle detection, 2. Obstacle distillation, 3. Obstacle tracking.

B. Obstacle Detection

The obstacle detection phase is based on the coarse segmentation of potential obstacles from a stereo template correlation-based disparity map and image gradient-based optical flow data.

As mentioned in Franke et al. [23], the range of optical flow values and disparities encountered in the road scene is large. Disparities and image motion in a single instance can range from 0 at the horizon to over 64 pixels in the near field. Franke et al. limited the vehicle speed in their experiments to satisfy the gradient-based optical flow estimation constraint of image motion of less than 2 pixels per frame. They mention that future work could use Gaussian image pyramids to enhance the dynamic range possible in the flow estimation. We have adopted an image pyramid technique both in the optical flow and in the disparity map estimation. For optical flow we implement a method similar to Simoncelli [24]. Optical flow is computed for the coarsest images, then the result is used to warp the next higher image resolution to maintain an acceptably small image motion at each level. The penalty for using a coarse-to-fine approach is the integration of any errors.
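A minimal sketch of the coarse-to-fine scheme is given below, with a crude one-level Lucas-Kanade step standing in for the Simoncelli-style estimator actually used; SciPy is assumed for filtering and warping, and the window size and level count are illustrative.

```python
import numpy as np
from scipy.ndimage import map_coordinates, uniform_filter

def lk_flow(a, b, win=7, eps=1e-6):
    """One-level small-motion flow: per-pixel 2x2 normal equations."""
    ix = np.gradient(a, axis=1); iy = np.gradient(a, axis=0); it = b - a
    sxx = uniform_filter(ix * ix, win); syy = uniform_filter(iy * iy, win)
    sxy = uniform_filter(ix * iy, win)
    sxt = uniform_filter(ix * it, win); syt = uniform_filter(iy * it, win)
    det = sxx * syy - sxy * sxy + eps
    u = (-syy * sxt + sxy * syt) / det
    v = (sxy * sxt - sxx * syt) / det
    return u, v

def warp(im, u, v):
    """Sample `im` at positions displaced by the current flow."""
    yy, xx = np.mgrid[0:im.shape[0], 0:im.shape[1]].astype(float)
    return map_coordinates(im, [yy + v, xx + u], order=1, mode="nearest")

def upsample(f, shape):
    """Double a flow field to the next finer level (values scale by 2)."""
    f2 = 2.0 * np.repeat(np.repeat(f, 2, axis=0), 2, axis=1)
    return f2[:shape[0], :shape[1]]

def pyramid_flow(a, b, levels=3):
    """Coarse-to-fine: estimate at the coarsest level, then warp each
    finer level by the flow so far and add the residual estimate, so
    the per-level motion stays acceptably small."""
    pyr = [(a.astype(float), b.astype(float))]
    for _ in range(levels - 1):
        pa, pb = pyr[-1]
        pyr.append((pa[::2, ::2], pb[::2, ::2]))
    u = v = None
    for pa, pb in reversed(pyr):
        if u is None:
            u = np.zeros(pa.shape); v = np.zeros(pa.shape)
        else:
            u = upsample(u, pa.shape); v = upsample(v, pa.shape)
        du, dv = lk_flow(warp(pa, u, v), pb, win=7)
        u = u + du; v = v + dv
    return u, v
```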
Using an image pyramid and calculating the disparity for each image resolution with the same size correlation window means the correlation window is effectively halving for each image resolution going from coarse to fine. This property matches large objects at large disparities and smaller objects at small disparities (i.e., on the horizon). Also, by not warping the image between resolutions we avoid the propagation of errors between image resolutions. At high resolutions we are interested in finding distant objects with small disparities, whereas larger objects such as close vehicles are recovered at the coarse image resolutions. Coarse resolution images can only resolve disparities to half the accuracy of the next higher image resolution, but as this works in opposition to the property of disparity estimates deteriorating as distances increase, the effect on the resultant disparity map is acceptable.

The stereo data is further processed by removing the road surface using the V-disparity technique [25]. The disparity of the road is assumed to be dominant, making a contour in the histogram. The road surface can then be removed from the disparity image. Potential obstacles are extracted using basic constraints of a minimum height and maximum height. Figure 6 shows the resultant segmented road scene. Several false positives exist, but this is acceptable as the desired result is a low number of false negatives (missed obstacles) at the expense of some false positives (phantom obstacles), which will be filtered in the second phase of the system.
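A compact sketch of the V-disparity step follows. It takes the dominant disparity in each image row as the road contour, which is a simplification of the line fit used in [25]; the disparity range and tolerance are illustrative values.

```python
import numpy as np

def remove_road(disparity, max_d=64, tol=1.5):
    """Suppress road-surface pixels via a per-row disparity histogram."""
    h, w = disparity.shape
    d = np.clip(disparity, 0, max_d - 1).astype(int)
    vdisp = np.zeros((h, max_d))   # V-disparity image: rows x disparity
    for v in range(h):
        vdisp[v] = np.bincount(d[v], minlength=max_d)[:max_d]
    road_d = vdisp.argmax(axis=1)  # dominant (assumed road) disparity per row
    obstacle = np.abs(disparity - road_d[:, None]) > tol
    return np.where(obstacle, disparity, 0.0), vdisp
```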

Fig. 6. Road scene with obstacles identified with bounding boxes.

Fig. 7. Distillation: a visual cue processing framework.

C. Obstacle Distillation

In this phase the potential obstacles identified above are "distilled" using the "Distillation algorithm" into consistently detected obstacles. The "Distillation algorithm" (Figure 7) is the combination of a particle filter with an intelligent cue processing system. The cue processing system changes the rate at which different sensor data is incorporated into the particle filter based on how well the sensors are performing. Stereo data may be disrupted by a momentary occlusion of one camera; in this case the information from this cue is ignored in favour of other cues which are unaffected.

Sets of particles representing each obstacle candidate are injected into the state-space in a Gaussian distribution around the potential obstacle's detected location. Stereo disparity, optical flow and colour consistency cues are again used to evaluate the potential obstacles; this time, however, only the projected locations of the particles are evaluated, not the whole image. Over several iterations of the filter, particles representing unsubstantiated obstacles dissipate. Found obstacles are represented by clusters of particles which remain consistent over time. Each cluster of particles surviving a minimum number of iterations is checked against a Gaussian distribution at its centroid. If the Gaussian distribution adequately describes the cluster, an extended Kalman filter based tracker is initialised (phase three). Figure 8 illustrates clusters of particles, detected as representing obstacles, to be replaced with Kalman filters.

Fig. 8. Obstacle distillation: uni-modal clusters in the particle filter are extracted for tracking.
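The sketch below gives a minimal particle-filter reading of this distillation step: injection around a detected candidate, cue-weighted scoring of particle locations only, resampling, and a final centroid and covariance for the hand-off to tracking. The cue interface and the reliability weighting are our assumptions, not the internals of the Distillation algorithm [22].

```python
import numpy as np

def distill(candidate_xy, cues, reliabilities, n=200, iters=10, sigma=5.0):
    """cues: functions mapping an (x, y) image location to a score in
    [0, 1]; reliabilities: recent performance of each cue, used to set
    how strongly it is incorporated."""
    rng = np.random.default_rng(0)
    parts = rng.normal(candidate_xy, sigma, size=(n, 2))  # Gaussian injection
    w_cue = np.asarray(reliabilities, dtype=float)
    w_cue = w_cue / w_cue.sum()  # a failing cue (e.g. occluded camera) counts less
    for _ in range(iters):
        scores = np.zeros(n)
        for cue, wc in zip(cues, w_cue):
            scores += wc * np.array([cue(x, y) for x, y in parts])
        if scores.sum() <= 0:
            return None          # unsubstantiated candidate dissipates
        scores /= scores.sum()
        idx = rng.choice(n, size=n, p=scores)              # resample
        parts = parts[idx] + rng.normal(0.0, 1.0, (n, 2))  # diffuse
    centroid, cov = parts.mean(axis=0), np.cov(parts.T)
    return centroid, cov  # test cov against a Gaussian before starting the EKF
```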
D. Obstacle Tracking
In the third phase of the system an extended Kalman filter and template correlation are used to track each obstacle independently. Using the obstacle location extracted from the previous phase, a uniqueness detector identifies good image correlation templates from each stereo camera. This collection of templates is then used to track the obstacle.

The correlation templates are tracked independently in the image using normalised cross correlation. The collection of templates per obstacle is then evaluated for the mean shift in the image and high correlation. Templates tracking inconsistently or unreliably are discarded. The remaining templates are used to estimate the new location of the obstacle, which is fed to the extended Kalman filter. The extended Kalman filter tracks the location of the vehicle in the 3D road coordinate system using a constant velocity motion model. The size of the vehicle estimated in the previous phase is assumed to be constant. Eventually the tracked obstacle is lost: either overtaken, obscured, too far in front of the vehicle to be seen, or any other random failure. This condition is identified either by no reliable image templates remaining or by a divergence in the covariance matrix of the filter. In either case the system will discard the extended Kalman filter and, as a precaution, inject a cluster of particles at the final location of the object back into the particle filter in the above phase of the system.
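The following sketch shows a filter step of the kind described, simplified to a linear Kalman filter over 3D position and velocity with position-only measurements from the surviving templates. The noise magnitudes and the covariance divergence test are illustrative, not the system's tuning.

```python
import numpy as np

def cv_kalman_step(x, P, z, dt, q=1.0, r=0.5):
    """x = [px, py, pz, vx, vy, vz]; z = obstacle position estimated
    from the remaining correlation templates."""
    F = np.eye(6); F[:3, 3:] = dt * np.eye(3)     # constant-velocity model
    H = np.hstack([np.eye(3), np.zeros((3, 3))])  # position observed only
    Q = q * np.diag([dt**4 / 4] * 3 + [dt**2] * 3)
    R = r * np.eye(3)
    x = F @ x; P = F @ P @ F.T + Q                # predict
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                # Kalman gain
    x = x + K @ (z - H @ x)                       # update with template fix
    P = (np.eye(6) - K @ H) @ P
    lost = np.trace(P[:3, :3]) > 25.0             # crude divergence test
    return x, P, lost
```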
Figure 9 shows the output of the obstacle tracking engine. Each car is tracked using an independent extended Kalman filter and image template correlation. Later in this image sequence the centre car is lost due to a template tracking failure (only one template is tracking reliably at this stage); then the second phase of the system quickly detects the vehicle again and tracks it using a new filter and new image templates. Also in this sequence the car on the far right is obscured by an overtaking vehicle. Again this obstacle is lost, and the overtaking vehicle is detected and tracked instead.

Fig. 9. Output of obstacle tracker. Rectangles indicate obstacle bounding boxes, '+' indicates correlation template locations, '*' indicates the centroid of an obstacle.
The lane tracking engine, which is also based on the Distillation tracking algorithm [22], has been developed by Apostoloff et al. and is described in full in [12]. In this subsystem, visual cues such as edges and road colour consistency are combined with cues based on physical world constraints, such as vanishing points and plausible road shapes, to distill a winning hypothesis of the vehicle's position with respect to the road. Lane tracking estimates the road width, the lateral offset of the vehicle from the road centerline and the yaw of the vehicle with respect to the road centerline (see Figure 10).

Fig. 10. Road model used for the particle filter. The dark shaded region is used as the non-road boundary in the colour cues while the light shaded region is the road region. Note that the figure is exaggerated for clarity.

VII. INTEGRATION

The driver-DAS interface has been designed with the concepts of "intuitiveness", "nonintrusiveness" and "overridability" in mind. The interface uses standard road signage colours to convey the importance of information. Critical issues appear in red, warnings in yellow, and affirmative information in green.

In the DAS, the lane position is shown as lines overlaid on the camera image. Lane departure is shown as a green to red tinting over the left or right side of the image (based on which lane boundary is being crossed). The intensity of the colour represents the perceived seriousness of the violation. Intentional lane departures are detected through the driver's use of the indicators. Acknowledgement of the warning can be made by the driver by looking at or touching the touch-screen.

Tracked obstacles are highlighted on the touch-screen; again, a colour coding system represents the DAS's perceived estimate of the threat. Close fast-moving obstacles have red boundaries, distant vehicles green. Colour coding is adjusted based on vehicle speed, i.e., at 100 km/h vehicles 10 metres away are red, while at 50 km/h vehicles 10 metres away are yellow.

Warnings are based on driver attention. If there is an obstacle that is perceived as a high threat yet the driver is detected as looking in that direction, no warning is given. If the driver's head pose and eye gaze have not been in the direction of the obstacle, a warning is given.
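A minimal sketch of this attention-gated warning rule is given below, together with the speed-scaled colour coding described above. It reuses the hypothetical time_since_looked() query sketched in Section III-A; the time-gap and attention thresholds are our own reading of the examples in the text.

```python
ATTENTION_WINDOW_S = 2.0  # assumed horizon for "recently looked"

def threat_colour(distance_m, speed_kmh):
    """At 100 km/h a vehicle 10 m ahead is red; at 50 km/h it is yellow."""
    gap_s = distance_m / max(speed_kmh / 3.6, 0.1)  # time gap in seconds
    if gap_s < 0.5:
        return "red"
    if gap_s < 1.0:
        return "yellow"
    return "green"

def should_warn(driver_state, obstacle_direction, colour):
    """Warn only for high threats the driver has not recently looked at."""
    if colour != "red":
        return False
    t = driver_state.time_since_looked(obstacle_direction)
    return t is None or t > ATTENTION_WINDOW_S
```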
Regarding the speed warning system, it is reasonable to assume that if the driver is aware of the current speed limit, they should have adjusted their speed to a safe level. Thus, if a system can monitor speed signs and inform the driver if they are currently driving too fast (or too slow), then the driver will only gain information that they don't already know. (Alternatively, the driver may have decided not to react to the information, in which case it may not be a bad thing to inform them anyway.) Speed signs are continuously being monitored, and when a strong candidate appears it is shown in the interface together with its interpretation. Thus, the driver can tell what information the warning was based on.

Circumstances can change quickly, so instead of relying on the driver to remember to look at the screen, several auditory warnings are planned. Warnings will consist of tones based on the threat level and will only occur when sustained high level threats have been ignored (such as prolonged speeding, or lane departure).
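For completeness, a trivial sketch of the speed check behind these warnings; the tolerance band and the idea of triggering only after a sustained violation are our assumptions, not the system's parameters.

```python
def speed_warning(current_kmh, limit_kmh, sustained_s, tol=3.0, hold_s=5.0):
    """Return a warning only when the speed has been outside the
    recognised limit (within +/- tol km/h) for at least hold_s seconds."""
    if sustained_s < hold_s:
        return None
    if current_kmh > limit_kmh + tol:
        return "too fast"
    if current_kmh < limit_kmh - tol:
        return "too slow"
    return None
```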

It is planned to integrate these warning/information systems further into driver assistive systems with higher behavioural functions. These second generation systems will attempt to interact more like a human co-pilot. Instead of monitoring the lane position, these systems would look for trends and deviations in the behaviour of the driver and road environment. A draft of the GUI is shown in Figure 11. Output from the speed sign detection algorithm (Section V) and the obstacle detection algorithm (Section VI) is shown. The user interface has to be a compromise between providing clear, instantaneous messages and sufficient information for the driver to re-evaluate the decision made by the system (e.g., the image of the speed sign that may have been misread).

Fig. 11. A draft of the GUI showing output from the speed sign detection algorithm at the top, and car/obstacle detection at the bottom.

VIII. CONCLUSION & FUTURE WORK

A framework for interactive driver assistance systems was presented. Techniques for fast speed sign detection and classification, car detection and tracking, and lane tracking were included. The driver assistance framework includes means for monitoring the driver's actions, and the integrated system uses information extracted from the road scene together with information about the driver's state, such as eye gaze and head pose, to issue adequate warnings. A touch screen monitor was used to present relevant information and allow the driver to interact with the system. The research focuses on robust and reliable algorithms for extracting information from the road scene, adequate warning mechanisms and a proper integration of subsystems.

Future work includes further research on appropriate warnings, such as different auditory signals depending on the level of threat. It is planned to study the behavior of the driver and vehicle more closely and to detect when deviations from the normal or expected occur. The area of driver assistance systems is enormous, and the presented framework allows research to be carried out easily in this domain.

IX. ACKNOWLEDGEMENT

Thanks to Leanne Matuszyk for helping out with experiments, pictures and the GUI. Thanks also to Gareth Loy for proofreading.

REFERENCES

[1] R. Aufrère, J. Gowdy, C. Mertz, C. Thorpe, C.-C. Wang, and T. Yata, "Perception for collision avoidance and autonomous driving," Mechatronics, vol. 13, no. 10, pp. 1149-1161, December 2003.
[2] C. Mertz, S. McNeil, and C. Thorpe, "Side collision warning systems for transit buses," in Proc. IEEE Intelligent Vehicle Symposium (IV 2000), October 2000.
[3] L. Zhao and C. Thorpe, "Stereo- and neural network-based pedestrian detection," IEEE Transactions on Intelligent Transportation Systems, vol. 1, no. 3, pp. 148-154, September 2000.
[4] E. D. Dickmanns and A. Zapp, "Autonomous high speed road vehicle guidance by computer vision," in Proc. IFAC, 1987, pp. 221-226.
[5] E. D. Dickmanns, "Vertebrate-type vision for autonomous vehicles," in Proc. Symposium on Biologically Inspired Systems, December 2000.
[6] ——, "An expectation-based, multi-focal, saccadic (EMS) vision system for vehicle guidance," in Proc. International Symposium on Robotics Research, Salt Lake City, October 1999.
[7] A. Broggi, M. Bertozzi, and A. Fascioli, "Self-calibration of a stereo vision system for automotive applications," Seoul, Korea, May 2001, pp. 3698-3702.
[8] M. Bertozzi, A. Broggi, M. Carletti, A. Fascioli, T. Graf, P. Grisleri, and M. Meinecke, "IR pedestrian detection for advanced driver assistance systems," in Proc. Pattern Recognition Symposium, Magdeburg, Germany, September 2003.
[9] Seeing Machines, "faceLAB face and eye tracking system," http://www.seeingmachines.com, 2001.
[10] N. L. Haworth, T. Triggs, and E. M. Grey, "Driver fatigue: Concepts, measurement and crash countermeasures," Federal Office of Road Safety Contract Report 72, Human Factors Group, Department of Psychology, Monash University, Tech. Rep., 1988.
[11] M. Land and D. Lee, "Where we look when we steer," Nature, vol. 369, pp. 742-744, June 1994.
[12] N. Apostoloff and A. Zelinsky, "Vision in and out of vehicles: integrated driver and road scene monitoring," in Experimental Robotics VIII, ser. Advanced Robotics Series, B. Siciliano and P. Dario, Eds. Springer-Verlag, 2002.
[13] M. Cellario, "Human-centered intelligent vehicles: Toward multimodal interface integration," Intelligent Transportation Systems, pp. 78-81, July 2001.
[14] Y. Liu and T. Dingus, "Development of human factors guidelines for advanced traveler information systems (ATIS) and commercial vehicle operations (CVO): Human factors evaluation of the effectiveness of multi-modality displays in advanced traveler information systems," Federal Highway Administration, Washington D.C., Tech. Rep. FHWA-RD-96-150, July 1999.
[15] P. Ulfvengren, "Design of natural warning sounds in human-machine systems," Ph.D. dissertation, Department of Industrial Economics and Management, Royal Institute of Technology, Stockholm, Sweden, December 2003.
[16] P. Paclik, J. Novovicova, P. Somol, and P. Pudil, "Road sign classification using the Laplace kernel classifier," Pattern Recognition Letters, vol. 21, pp. 1165-1173, 2000.
[17] G. Loy and A. Zelinsky, "Fast radial symmetry for detecting points of interest," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 25, no. 8, pp. 959-973, August 2003.
[18] R. Labayrade, D. Aubert, and J.-P. Tarel, "Real time obstacle detection in stereovision on non flat road geometry through "v-disparity" representation," in Proc. IEEE Intelligent Vehicle Symposium, France, June 2002.
[19] J. Roberts and P. Corke, "Obstacle detection for a mining vehicle using a 2D laser," in Proc. Australian Conference on Robotics and Automation (ACRA), Melbourne, Australia, August 2000, pp. 185-190.
[20] G. Brooker and H. Durrant-Whyte, "Millimetre wave radar," Melbourne, Australia, 2001.
[21] F. Dellaert, D. Pomerleau, and C. Thorpe, "Model-based car tracking integrated with a road follower," May 1998.
[22] G. Loy, L. Fletcher, N. Apostoloff, and A. Zelinsky, "An adaptive fusion architecture for target tracking," in Proc. 5th International Conference on Automatic Face and Gesture Recognition, Washington DC, May 2002.
[23] U. Franke and S. Heinrich, "Fast obstacle detection for urban traffic situations," IEEE Transactions on Intelligent Transportation Systems, vol. 3, no. 3, pp. 173-181, September 2002.
[24] E. P. Simoncelli, Bayesian Multi-scale Differential Optical Flow. Academic Press, 1999, vol. 2, ch. 14, pp. 297-422.
[25] R. Labayrade, D. Aubert, and J.-P. Tarel, "Real time obstacle detection in stereovision on non flat road geometry through "v-disparity" representation," in Proc. IEEE Intelligent Vehicle Symposium, France, June 2002.
