Department of Electrical Engineering

Master's thesis

Observer for a vehicle longitudinal controller

Master's thesis carried out in Automatic Control
at Linköpings tekniska högskola
by
Peter Rytterstedt

LiTH-ISY-EX--2007/3950--SE
Linköping 2007

Department of Electrical Engineering
Linköpings universitet
SE-581 83 Linköping, Sweden
Supervisors: Johanna Wallén
             ISY, Linköpings universitet
             Volker Maaß
             Mercedes-Benz Technology Center, DaimlerChrysler AG
Examiner:    Thomas Schön
             ISY, Linköpings universitet

Linköping, 1 April, 2007
Avdelning, Institution (Division, Department):
    Division of Automatic Control
    Department of Electrical Engineering
    Linköpings universitet
    SE-581 83 Linköping, Sweden

Datum (Date): 2007-04-01
Språk (Language): English
Rapporttyp (Report category): Examensarbete (Master's thesis)

URL för elektronisk version (URL for electronic version):
    http://www.control.isy.liu.se
    http://www.ep.liu.se/2007/3950

ISBN: —
ISRN: LiTH-ISY-EX--2007/3950--SE
Serietitel och serienummer (Title of series, numbering): —
ISSN: —

Titel (Title):
    Observatör för en längsregulator i fordon
    Observer for a vehicle longitudinal controller

Författare (Author): Peter Rytterstedt
Sammanfattning (Abstract):

The longitudinal controller at DaimlerChrysler AG consists of two cascaded
controllers. The outer control loop contains the driver assistance functions, such
as the speed limiter, cruise control, etc. The inner control loop consists of a PID
controller and an observer. The task of the observer is to estimate the part of
the vehicle's acceleration caused by large disturbances, for example by a changed
vehicle mass or the slope of the road.

The Kalman filter is selected as observer. It is the optimal filter when the
process model is linear and the process noise and measurement noise can be
modeled as Gaussian noise. In this Master's thesis the theory for the Kalman filter
is presented and it is shown how to choose the filter parameters. Simulated
annealing is a global optimization technique which can be used for autotuning,
i.e., automatically finding the optimal parameter settings. To be able to perform
autotuning for the longitudinal controller one has to model the environment and
driving situations.

In this Master's thesis it is verified that the parameter choice is a compromise
between a fast but jerky and a slow but smooth estimate. As the output from
the Kalman filter is directly added to the control value for the engine and brakes,
it is important that the output is smooth. It is shown that the Kalman filter
implemented in the test vehicles today can be exchanged for a first-order lag
function without loss in performance. This makes the filter tuning easier, as
there is only one parameter to choose.

Change detection is a method that can be used to detect large changes in a
signal and react accordingly, for example by making the filter faster. A filter
using change detection is implemented, and simulations show that it is possible to
improve the estimate using this method. It is suggested to implement the change
detection algorithm in a test vehicle and evaluate it further.

Nyckelord (Keywords): Kalman filter, longitudinal controller, filter tuning,
simulated annealing, change detection
Acknowledgments

This Master's thesis has been performed between October 2006 and March 2007 at
the Mercedes Technology Center, DaimlerChrysler AG, in Sindelfingen, Germany.
It completes my international studies for a Master of Science degree in Applied
Physics and Electrical Engineering at Linköpings universitet, Sweden.

I would like to express my greatest gratitude to my supervisor at DaimlerChrysler,
Volker Maaß, who has always had time for my questions and helped me in any way
possible. The teams EP/ERW and GR/EAT deserve many thanks for welcoming
me at the department, and for answering the questions about the cars and development
tools that have come up during this thesis. My supervisor Johanna Wallén
and examiner Thomas Schön at Linköpings universitet, who have given insightful
comments and tips, also have a part in this thesis.

Finally, I would like to thank Marie Rytterstedt, Peter Juhlin-Dannfelt and Erik
Almgren for proofreading, and my girlfriend for her support and encouragement.

Sindelfingen, March 2007
Peter Rytterstedt
Contents

1 Introduction  1
  1.1 Background  1
  1.2 Problem Formulation  2
  1.3 Objective  2
  1.4 DaimlerChrysler AG  3
  1.5 Method  3
  1.6 Outline  3
  1.7 Limitations  4

2 Driver Assistance Systems  5
  2.1 Anti-lock Braking System  5
  2.2 Traction Control  5
  2.3 Stability Control  5
  2.4 Speed Limiter  6
  2.5 Cruise Control  6
  2.6 Hill Descent Control  6
  2.7 Forward Collision Mitigation System  6
      2.7.1 Distance Warning  7
      2.7.2 Brake Assist System  7
  2.8 Adaptive Cruise Control  7
  2.9 Lane Guidance System  8
  2.10 Blind-spot Warning  8
  2.11 Systems Supported by the Controller  8

3 Basic Filter Theory  11
  3.1 State-Space Models  11
  3.2 Discretization  12
  3.3 Observer  13
  3.4 Observability  14
  3.5 Kalman Filter  15
      3.5.1 Process and Measurement Model  15
      3.5.2 Discrete-Time Kalman Filter Equations  15
      3.5.3 Initialization  16
      3.5.4 Steady State  17
      3.5.5 Block Diagram of the Stationary Kalman Filter  17
      3.5.6 Design Parameters  17
  3.6 Shaping Filter  17
      3.6.1 Shaping Filters for Non-Gaussian Process Noise  19
      3.6.2 Shaping Filters for Non-Gaussian Measurement Noise  19

4 Choosing the Kalman Filter Parameters  21
  4.1 Estimating the Covariances  21
  4.2 Choosing Q and R Manually  22
  4.3 Simulation  23
      4.3.1 Open-Loop Simulation  23
      4.3.2 Closed-Loop Simulation  24
  4.4 Autotuning  25
      4.4.1 Evaluation Using RMSE  25
      4.4.2 Autotuning Using Matlab  25
      4.4.3 Simulated Annealing  26

5 Kalman Filter Implementation  33
  5.1 Overview of the Inner Control Loop  33
  5.2 Modeling the Acceleration  35
  5.3 Errors in the Acceleration Model  38
  5.4 Kalman Filter Model  43
  5.5 Choosing the Filter Parameters  44

6 Alternative Kalman Filter Models  49
  6.1 Vehicle Speed as Feedback  49
  6.2 Modeling the Disturbance a_z  50
      6.2.1 First-Order Lag Function  50
      6.2.2 First-Order Gauss-Markov Process  52
      6.2.3 Identifying the Time Constant  53
      6.2.4 Testing the Model of a_z  55
      6.2.5 Higher-Order Derivative of a_z  56
  6.3 Implementation and Testing in Arjeplog  59
  6.4 Comparing the Kalman Filter Models  60
  6.5 Comparing the Kalman Filter with a First-Order Lag Function  61

7 Change Detection  67
  7.1 Idea of Change Detection  67
  7.2 One Kalman Filter with Whiteness Test  68
  7.3 Implementation  70
  7.4 Results  70

8 Conclusions and Future Work  75
  8.1 Conclusions  75
  8.2 Future Work  76

List of Notations  77
Bibliography  79
A Matlab Implementation of "lsqnonlin"  81
B Matlab Implementation of Simulated Annealing  83
C Time Constant Identification  88
Chapter 1

Introduction

This chapter will give an introduction to the problem investigated in this Master's
thesis. DaimlerChrysler AG, where the thesis project has been performed, will be
presented, as well as an outline for the thesis.
1.1 Background

Driver assistance systems are more and more becoming standard in the new vehicles
built today. The task of these systems is to support and relieve the driver,
but not to take the driving task from him or her. By vehicle longitudinal regulation,
one understands influencing the vehicle in its driving direction by means
of a controller. In this Master's thesis the focus will be on the driver assistance
systems supported by the longitudinal controller at DaimlerChrysler AG.

The longitudinal controller can be thought of as two cascaded controllers, see
Figure 1.1. The outer controller contains the driver assistance functions (such as
Speedtronic, Distronic, DSR, etc.; for further information see Chapter 2). Here v is
the actual vehicle speed, and a_d1, a_d2, ..., a_dn are the desired accelerations calculated
by the assistance functions. The block called "Coord." (coordinator) in the figure
chooses which of the functions should have effect, depending on the driver's
choice. It also contains a jerk damper, so that the vehicle travels smoothly
even when switching between the different assistance functions.

The resulting calculated acceleration a_d is delivered to the inner control loop,
whose task is to make the vehicle have the same acceleration as the desired value.
The inner controller delivers a desired torque T to the actuators (engine, brake and
gearbox), which affect the vehicle. The current speed v and acceleration a
are measured and used as feedback signals by the controllers.

The inner controller contains a PID controller and an observer. The task of the
observer is to estimate the part of the vehicle's acceleration caused by disturbances
not included in the vehicle model. A Kalman filter is chosen as observer, and this
Master's thesis will explain the function and implementation of the Kalman filter
in more detail.
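The coordinator's two duties, selecting the active function's request and damping jerk at hand-over, can be sketched as follows. This is an illustrative Python sketch, not DaimlerChrysler's implementation: the function names, the selection rule and the jerk limit of 2 m/s³ are assumptions made here for the example.

```python
def coordinate(requests, active, a_prev, jerk_limit=2.0, dt=0.02):
    """Select the active assistance function's desired acceleration a_di
    and rate-limit it so that switching between functions stays smooth.

    requests   -- dict mapping function name to its desired acceleration a_di
    active     -- name of the function the driver has engaged
    a_prev     -- acceleration a_d delivered in the previous cycle
    jerk_limit -- maximum allowed |da/dt| in m/s^3 (illustrative value)
    dt         -- sample time in seconds
    """
    a_target = requests[active]
    # Jerk damper: limit the change per cycle to jerk_limit * dt.
    max_step = jerk_limit * dt
    step = max(-max_step, min(max_step, a_target - a_prev))
    return a_prev + step

# Switching from a cruise-control request (+1.5 m/s^2) to a limiter
# request (-3.0 m/s^2) is spread over many cycles instead of jumping.
a = 1.5
for _ in range(10):
    a = coordinate({"speedtronic": -3.0, "cruise": 1.5}, "speedtronic", a)
```

With the assumed limit, each 20 ms cycle moves a_d by at most 0.04 m/s², so abrupt hand-overs between assistance functions never reach the engine and brakes directly.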
[Figure 1.1: block diagram of the longitudinal controller. The assistance functions
(Speedtronic, Distronic, DSR, ...) take the speed v as input and output the desired
accelerations a_d1, ..., a_dn to the coordinator ("Coord."), which feeds a_d to the
PID controller and observer; these deliver the torque T to the vehicle, and v and a
are fed back.]

Figure 1.1. Overview of the longitudinal controller in the vehicle. The driver assistance
functions in the outer control loop calculate accelerations a_di. The coordinator
("Coord.") chooses which of the functions should have effect and delivers a desired
acceleration a_d to the inner control loop. The inner control loop consists of a PID
controller and an observer. They deliver a desired torque T to the engine, brake and
gearbox, which affect the vehicle. The speed v and acceleration a are used as feedback
signals.
1.2 Problem Formulation

The Kalman filter attached in parallel to the PID controller should estimate the part
of the vehicle's acceleration caused by large disturbances (for example a changed
mass or the slope of the road). The filter is directly attached to the engine and brakes,
and it is therefore important that the output from the filter is smooth. Otherwise
the comfort is negatively affected.

The Kalman filter has to be tuned to work optimally. For this purpose the
theory behind the filter and the methods for filter tuning had to be investigated
more thoroughly.
1.3 Objective

The goal of this thesis is to explain the function of the Kalman filter, describe
methods for choosing the parameters, and to find good parameter settings justified
by theoretical and practical methods. It should also be examined whether the
structure of the filter can be improved.
1.4 DaimlerChrysler AG

DaimlerChrysler AG is a major automobile and truck manufacturer formed in 1998
by the merger of Daimler-Benz in Germany (the manufacturer of Mercedes-Benz)
and the Chrysler Corporation in the USA. The company produces cars, trucks, vans
and buses under the brands Chrysler, Dodge, Jeep, Mercedes-Benz, Smart and
Maybach, among others. In January 2007 the company was the second largest
auto manufacturer.

At the Mercedes Technology Center in Sindelfingen, Germany, over 35,000 employees
are producing the C-Class, E-Class, S-Class, CLS-Class, CL-Class and Maybach
cars; 9,500 persons are working in development. The team working with
driver assistance systems is developing control systems such as adaptive cruise
control and hill descent control.
1.5 Method

The necessary theory describing the Kalman filter will be presented based on literature
studies. Different methods for choosing the filter parameters will be explained
and implemented. Using previous Master's theses at DaimlerChrysler, the process
model for the vehicle's longitudinal dynamics will be developed. The model of the
disturbance used in the existing Kalman filter will be examined. Then a stationary
Kalman filter is designed for the model, and well-working parameters will be found
using simulation. The development of the observer is performed in Matlab and
Simulink, and simulations are made offline. The filter will then be extended with
an algorithm for change detection. It will be examined whether this makes the filter
both work properly in the case of small noise and respond quickly in the case of
sudden larger changes (two competing goals for the Kalman filter).
1.6 Outline

In the introductory chapter the purpose and method of the thesis are presented. An
overview of the different driver assistance systems supported by the longitudinal
controller is given in the second chapter. Chapter three discusses some basic estimation
filter theory needed to implement an observer in the controller. Different
possibilities to choose and tune the filter parameters are presented in chapter four.
In chapter five the model for the longitudinal dynamics of the vehicle is derived,
and an initial version of the Kalman filter is implemented and tested. Chapter
six starts with a presentation of some more complex models used to model the
estimated parameter, and continues with a discussion of the advantages and
disadvantages of using these models. At the end of chapter six a comparison between
the developed Kalman filter and a standard low-pass filter is made. Chapter seven
discusses the possibilities given by some ideas picked up from the area of change
detection, and an algorithm that uses this theory is simulated and evaluated.
Finally, in the last chapter, conclusions are drawn and some extensions of the
thesis are presented.
1.7 Limitations

The use of an observer in an inner control loop to estimate the model errors was
suggested in [1]. A comparison between this method and the use of a PID controller
was made in [19]. However, the advantages and disadvantages of attaching an
observer in parallel to a PID controller (as described in this Master's thesis) have
not been examined. It is a method that has been used for about a year in test
vehicles at DaimlerChrysler and is accepted as a basis for this thesis.

The sensors used in this Master's thesis to measure the speed and acceleration
of the vehicle use Kalman filters and sensor fusion techniques to obtain stable
measurements. It has not been examined whether the filter described in this thesis
would make better estimates using unfiltered data.
Chapter 2

Driver Assistance Systems

To better understand the task of the controller discussed in this thesis, an introduction
to driver assistance systems is given in this chapter. Among the driver
assistance systems there are comfort functions, which relieve the driver in his/her
tasks; passive safety functions, which reduce the consequences of an accident; and
active safety functions, which help the driver to avoid accidents.
2.1 Anti-lock Braking System

The anti-lock braking system (ABS) prevents the wheels from locking and maintains
the steering ability of the vehicle during hard braking. Under bad road conditions,
ABS will also reduce the stopping distance. The system measures the velocity
of all four wheels, and if one of the sensors reports an abnormal deceleration it
concludes that the wheel is about to lock, and the pressure in the braking system
is reduced. [7]
2.2 Traction Control
The functioning of the traction control system is very similar to that of the ABS.
The system prevents the wheels from slipping during acceleration by using the
same velocity sensors as the ABS. If the vehicle starts to slip, the engine power is
reduced in order to maintain control of the vehicle. [7]
2.3 Stability Control
A stability control system basically measures the yaw rate of the vehicle, i.e., the
rotation in the ground plane, and compares it with the desired trajectory. If the
deviation is greater than a certain threshold, the system will activate the brakes
on one side of the vehicle to correct this. When the German automotive supplier
Bosch launched their stability control system they called it “electronic stability
program” (ESP). [7]
2.4 Speed Limiter

The speed limitation function is used to make sure that the driver does not exceed
a set speed. The driver can set a variable or a permanent limit speed. The
variable limit can easily be set and changed during driving. It is automatically
deactivated if the driver pushes down the accelerator pedal beyond the pressure
point, often referred to as "kick-down". This is useful when overtaking. The
permanent limit is used for permanent long-term speed restrictions, such as driving
on winter tires. The permanent limit speed is set using the on-board computer,
and it cannot be exceeded by kick-down. This function is called "Speedtronic" by
DaimlerChrysler. [25]
2.5 Cruise Control
The cruise control, sometimes called “speed control” or “autocruise”, automatically
controls the speed of the vehicle. The driver can easily set and change the desired
speed during driving, and cruise control maintains the set speed and accelerates
and brakes the vehicle automatically if necessary. If the driver pushes down the
accelerator pedal to temporarily drive faster, the cruise control adjusts the vehicle’s
speed to the last stored speed when he/she again releases the accelerator pedal.
This is useful when overtaking. [25]
2.6 Hill Descent Control

The hill descent control system is essentially a low-speed cruise control system for
steep descents. It uses the ABS brake system to control each wheel's speed and
keeps the speed of travel at the speed set in the operating system. It is possible to
drive at a higher or a lower speed than the one set in the operating system at any
time by manually braking or accelerating. The driver will be able to maintain control
of the vehicle when driving down hills on slippery or rough terrain, and the system is
therefore especially helpful in off-road conditions. The hill descent control system
used in DaimlerChrysler vehicles is called "Downhill Speed Regulation" (DSR). [26]
2.7 Forward Collision Mitigation System

A collision mitigation system (CMS) uses radar sensors to detect obstacles which
are in the path of the vehicle. Most manufacturers have a similar functionality
when it comes to the intervention strategy: they use increasing warning levels as
the threat approaches. If the driver does not brake himself, the CMS will reduce
the impact speed by applying the brakes when a collision with the leading vehicle
appears to be unavoidable. [7]
2.7.1 Distance Warning
This function warns the driver when the distance to the vehicle in front is too
small. A message or a warning lamp in the instrument cluster then lights up. If
the driver is approaching the vehicle in front at high speed, he/she will also hear a
signal. The driver has to apply the brakes in order to maintain the correct distance
and avoid a collision. [25]
If the system has detected a risk of collision and the driver does not brake
or steer himself/herself, the vehicle is automatically braked gently and the seat
belts are retracted gently two or three times to warn the driver. This helps to
reduce the consequences of an accident, but the driver has to apply the brakes
himself/herself in order to avoid a collision. This system is called "Pre-Safe Brake"
in DaimlerChrysler vehicles. [25] At this point, if the driver applies the brakes,
the system interprets this action as emergency braking and activates the brake
assist system to reduce the impact speed. [7]
2.7.2 Brake Assist System

The brake assist system (BAS) operates in emergency braking situations. If the driver
pushes down the brake pedal quickly, BAS automatically boosts the braking force
and thus shortens the stopping distance. If the vehicle is equipped with radar
sensors, the system calculates the brake pressure necessary to avoid a collision.
When the driver pushes down the brake pedal forcefully, the system automatically
boosts the braking force to a level appropriate to the traffic situation. The brake
assist system is deactivated and the brakes function as normal when the driver
releases the brake pedal, no obstacles are detected in the path of the vehicle,
and there is no longer a risk of collision. In DaimlerChrysler vehicles this function
is called "BAS Plus". [25]
2.8 Adaptive Cruise Control

Adaptive cruise control (ACC) is also known as "active cruise control" or "intelligent
cruise control". ACC uses a forward-looking sensor, usually radar or laser, to
monitor the distance to leading vehicles. If the system is active and the time gap
to the leading vehicle falls below a certain threshold, the vehicle will automatically
brake in order to maintain the distance. In Europe there are government restrictions
which limit the permitted braking rate. If the vehicle detects that a higher
deceleration is required to avoid colliding with the leading vehicle, an audible warning
is given to the driver. If there is no vehicle in front, ACC operates in the same
way as cruise control. [7]
DaimlerChrysler offers adaptive cruise control under the name "Distronic". It
functions at speeds between 30 and 200 km/h. With the Distronic system, the
distance to the leading vehicle is set as a time gap of between one and two seconds.
The system uses radar sensors to measure the distance to the vehicle in front. [25]

Some DaimlerChrysler vehicles are equipped with a system called Distronic
Plus, which functions at speeds between 0 and 200 km/h. If Distronic Plus detects
that the vehicle in front has stopped, it will cause the vehicle to brake and come to
a halt. Once the vehicle is stationary, it will remain so without the driver having to
push down the brake pedal. If the vehicle in front pulls away, and the driver pulls
the cruise control lever or brieﬂy pushes down the accelerator pedal, the vehicle
automatically pulls away and adapts its speed to the vehicle in front. [25]
2.9 Lane Guidance System
The term lane guidance system refers to systems that try to help the driver stay in the lane.
Systems typically use an audible warning or a steering wheel torque to alert the
driver if the vehicle is approaching the lane markings. The steering wheel torque
used by some of the systems will automatically steer the vehicle back into the
center of the lane, thus working almost like an autopilot. Another idea is to try
to mimic the sounds and vibrations that are generated by rumble strips, i.e., the
grooved lane markings that are sometimes used on motorways to indicate lane
departure. [7]
2.10 Blind-spot Warning

The general idea behind a blind-spot warning system is to lower the risk of lane
change accidents by warning the driver about vehicles in the blind spot. There are
different techniques for achieving this, but usually ocular vision or radar is used. [7]
2.11 Systems Supported by the Controller
In this Master’s thesis, the focus will be on those driver assistance systems that
are supported by the longitudinal controller at DaimlerChrysler AG. These are
• Speed limiter
• Cruise control
• Adaptive cruise control
• Collision mitigation system
• Brake assist system
• Hill descent control
Figure 2.1 gives an overview of the vehicles sold by DaimlerChrysler. The vehicles
are listed together with the driver assistance systems that are used in the vehicles.
[Figure 2.1: a table of DaimlerChrysler vehicles (C-Class WS203, C-Class CL203,
CL-Class 215, CL-Class 216, CLK-Class 209, CLS-Class 219, E-Class 211,
R-Class 251, S-Class 221, SL-Class 230, SLK-Class 171, M-Class 164, G-Class 463,
GL-Class X164, Sprinter, Viano 639, Vito 639 and Crafter (VW)) against the
systems Speedtronic, Cruise Control, Distance Warning, Distronic, Distronic Plus,
DSR, Pre-Safe Brake and BAS Plus.]

Figure 2.1. Driver assistance systems supported by the vehicle longitudinal controller
at DaimlerChrysler. The figure shows which DaimlerChrysler vehicles are using the
systems. The names used in the figure are the names used by DaimlerChrysler.
Chapter 3

Basic Filter Theory

This chapter starts with an introduction to the state-space models often used when
working with control systems. It is shown how to transform a continuous-time
model into a discrete-time model. Then some basic theory for observers is presented.
One popular observer is the Kalman filter, which is "certainly one of the greater
discoveries in the history of statistical estimation theory and possibly the greatest
discovery in the twentieth century" [11]. The equations for this filter are presented
and the function of the stationary Kalman filter is explained. At the end of this
chapter it is described how to construct shaping filters for non-Gaussian process
noise and non-Gaussian measurement noise.
3.1 State-Space Models

To design an estimation filter one first needs a mathematical model of the controlled
process. Sir Isaac Newton (1642-1727) discovered that the sun and its
planets are governed by laws of motion that depend only upon their current relative
positions and current velocities. By expressing these laws as a system of
differential equations and feeding them with the current positions and velocities,
he could uniquely determine the positions and velocities of the planets for all times.
Almost every physical system can in the same way be described using differential
equations. [11]

The order of a differential equation is equal to the order of its highest derivative.
When doing control design it is preferable to have all equations of first order.
One can reduce any system of higher-order differential equations to an equivalent
system of first-order differential equations by introducing new variables. [9]

In this Master's thesis a special type of differential equation will be used, called
an ordinary differential equation (ODE). This model is also referred to as the
continuous-time state-space model. When the equations describing a system are
linear, the model can be written as the linear state-space model [3]

    ẋ(t) = A x(t) + B u(t) + G w(t)                        (3.1)
    y(t) = C x(t) + e(t)                                   (3.2)

The variables x = [x_1, x_2, ..., x_n]^T in (3.1) and (3.2) are commonly called
"state variables" (or "states"), and they represent all important characteristics of
the system at the current time. The "process model" (3.1) describes the dynamics
of the system. The variable u represents a known external signal, typically available
to or controlled by the system. The variable w is used to model unknown disturbances
or model uncertainties, which cannot be directly measured; they can only be
observed through their influence on the output. The "measurement model" (3.2)
describes how the noisy measurements y are related to the internal variables, where
e is some noise added to the measurement. [3]
Usually e and w are modeled as unpredictable random processes with zero mean
and a known variance, often referred to as Gaussian noise. It can be shown that a
system described by (3.1) and (3.2), with e and w modeled as Gaussian noise, is a
Markov process. This means that it satisfies the Markov property [3]

    p[x(t) | x(τ), τ ≤ t_1] = p[x(t) | x(t_1)],  ∀t > t_1          (3.3)

The construction p[A | B] in this statement should be read "the probability of A
given B". In words this means that the past up to any time t_1 is fully characterized
by the value of the process at t_1, or: "the future is independent of the past if the
present is known". This is an important property, because when trying to predict
future states of the system with a good model, one only needs to know the current
state. [3]
3.2 Discretization
Almost every physical system is best described using a continuous-time model, but
the controller is often implemented in a computer using discrete methods. The
model therefore has to be sampled and converted into a discrete-time state-space
model. How this is done is explained in this section, based on [10] and [13].
The continuous dynamic system described by (3.1) and (3.2) can be transformed
into a discrete state-space system, assuming that the input signal u is
piecewise constant during the sampling interval T. This is usually the case when
the input u is generated by a computer. This gives the linear discrete time-invariant
state-space model
x(kT + T) = A_d x(kT) + B_d u(kT) + G_d w(kT)    (3.4)
y(kT) = C_d x(kT) + e(kT)                        (3.5)
The index d refers to the discrete form of the matrices. Often an easier and
more compact form of (3.4) and (3.5) is used

x_{k+1} = A_d x_k + B_d u_k + G_d w_k    (3.6)
y_k = C_d x_k + e_k                      (3.7)
In this Master's thesis the index d will be left out when there is no risk of
confusion.
The discrete forms of the matrices are calculated using the matrices in (3.1)
and (3.2) as

A_d = e^{AT}                 (3.8)
B_d = ∫_0^T e^{At} B dt      (3.9)
C_d = C                      (3.10)
There are several ways to calculate e^{AT}. One of them is

e^{AT} = L^{-1}{(sI − A)^{-1}}    (3.11)

where L^{-1} is the inverse Laplace transform. Other methods use a Taylor series
approximation or the Padé approximation; see [11] for a more detailed description.
There are several possibilities to calculate the matrix G_d. Assuming that the
stochastic input e also is constant during the sampling intervals, the same method
can be used as for the matrix B_d. G_d is then calculated using A and G from (3.1),
giving [13]

G_d = ∫_0^T e^{At} G dt    (3.12)

This method, called "zero-order hold", is used by the command "c2d" (continuous
to discrete) in Matlab when nothing else is specified [24].
It is seldom the case that e is constant during the sampling intervals [13].
Another alternative is "triangle approximation", where the input e is assumed
piecewise linear over the sampling period. This method is generally more accurate
than zero-order hold when the input e is assumed to be smooth [24]. Other methods
include "impulse-invariant discretization" (where the impulse response of the
discretized system is matched with that of the continuous system) and "Tustin
approximation" (which instead matches the frequency response of the systems).
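For a scalar system the integral in (3.9) has a closed form, which makes the zero-order-hold discretization easy to check by hand. The following sketch (written in Python rather than Matlab, with hypothetical system values) computes A_d and B_d for ẋ = ax + bu:

```python
import math

def c2d_zoh(a, b, T):
    # Zero-order-hold discretization of the scalar system x' = a*x + b*u,
    # assuming u is constant over each sampling interval T (cf. (3.8)-(3.9)):
    #   A_d = exp(a*T),  B_d = integral_0^T exp(a*t)*b dt
    A_d = math.exp(a * T)
    if a != 0.0:
        B_d = (A_d - 1.0) / a * b   # closed form of the integral
    else:
        B_d = b * T                 # limiting case a = 0
    return A_d, B_d

# Example: a stable first-order system sampled with T = 0.1 s
A_d, B_d = c2d_zoh(-1.0, 2.0, 0.1)
```

In Matlab the same result is obtained with c2d, which uses the zero-order-hold method by default, as noted above.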
3.3 Observer
When designing a controller it is important to have information about the states
of the system. Normally all states cannot be measured. An observer may be used
to estimate the unknown states with the help of the measured signals y and u.
In this section some basic theory about the observer is discussed, based on parts
from [9] and [3].
Consider the discrete-time system described by (3.6) and (3.7). The matrices A,
B and C are time-invariant and known, and the signals u and y can be measured.
The states x cannot be measured but are needed for controlling purposes. An
initial approach to estimate the states would be to simulate the system using only
the known input values u

x̂_{k+1} = A x̂_k + B u_k    (3.13)
where x̂_k is the estimated value of x at time step k. To measure the quality of the
estimation, the difference y_k − C x̂_k can be used. This difference should be zero
if the estimate x̂_k is equal to the real state x_k, and will also be so in the absence of
errors. In practice, however, this will never be the case, since there are always
model errors and disturbances w as well as measurement noise e. A good way to
improve the estimates is to use y_k − C x̂_k as feedback, such that

x̂_{k+1} = A x̂_k + B u_k + L(y_k − C x̂_k)    (3.14)
In words this can be written as

"Estimated state" = "Predicted state" + L · "Correction term"

The correction term reflects the difference between the predicted measurement
and the actual measurement, as explained above. In the context of filters this term
is often called the measurement "innovation" or the "residual".
The matrix L is here a design parameter that adjusts how much the residuals
should affect the estimated states. It is a trade-off between how fast the estimates
converge toward the measurement (a high L gives fast convergence) and how
sensitive the estimate is to measurement noise (a high L gives a more noise-sensitive
estimate).
The optimal value of L can be calculated in different ways, resulting in different
types of observers. One type of observer is the Kalman filter, described in detail
in Section 3.5.
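A scalar instance of the update (3.14) shows how the correction term drives the estimation error to zero: the error e_k = x_k − x̂_k decays as (A − LC)^k. The system values and the gain in this Python sketch are hypothetical, chosen only for illustration:

```python
# Scalar observer (3.14): xhat[k+1] = A*xhat[k] + B*u[k] + L*(y[k] - C*xhat[k])
A, B, C = 0.9, 0.5, 1.0
L_gain = 0.6                 # hand-picked design parameter

x, xhat = 1.0, 0.0           # true state and (wrong) initial estimate
for k in range(50):
    u = 0.1                  # known, constant input
    y = C * x                # noise-free measurement, for clarity
    xhat = A * xhat + B * u + L_gain * (y - C * xhat)
    x = A * x + B * u

error = abs(x - xhat)        # decays as |A - L*C|^k = 0.3^k
```

With noise-free measurements the estimate converges geometrically; the trade-off discussed above only appears once measurement noise e is added to y.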
3.4 Observability
The observer estimates the states x with the help of the measurements y. Therefore,
there has to be a connection between the states and the measurement; the states x
have to be "seen" in the output y. This requirement is formulated with the help of
the observability matrix O, described in [9] as

O = [C; CA; CA^2; ...; CA^(n−1)]    (3.15)

where the blocks C, CA, ..., CA^(n−1) are stacked on top of each other.
All states are observable if and only if O has full rank. This criterion does,
however, not give any information about how good the estimate will be. When a
matrix has full rank, none of the rows can be written as a linear combination of
the other rows. If the matrix does not have full rank, then one or more
rows are "unnecessary". The easiest way to compute the rank of a matrix is
Gaussian elimination. In Matlab the rank is calculated with the
command "rank".
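The observability test can be sketched in a few lines. The code below (Python used for illustration instead of Matlab; the example system is hypothetical) stacks the blocks of (3.15) and computes the rank by Gaussian elimination, as mentioned above:

```python
def obs_matrix(A, C):
    # Observability matrix O = [C; CA; ...; CA^(n-1)] as in (3.15),
    # with A an n x n nested list and C a list of measurement rows.
    n = len(A)
    def matmul(X, Y):
        return [[sum(X[i][k] * Y[k][j] for k in range(n))
                 for j in range(n)] for i in range(len(X))]
    O, block = [], [row[:] for row in C]
    for _ in range(n):
        O.extend(row[:] for row in block)
        block = matmul(block, A)
    return O

def rank(M, tol=1e-9):
    # Rank via Gaussian elimination with partial pivoting.
    M = [row[:] for row in M]
    r = 0
    for col in range(len(M[0])):
        piv = max(range(r, len(M)), key=lambda i: abs(M[i][col]), default=None)
        if piv is None or abs(M[piv][col]) < tol:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(r + 1, len(M)):
            f = M[i][col] / M[r][col]
            M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

# Example: a double integrator with position measurement is observable.
A = [[1.0, 1.0], [0.0, 1.0]]
C = [[1.0, 0.0]]
O = obs_matrix(A, C)
observable = rank(O) == len(A)
```

Here O = [[1, 0], [1, 1]] has full rank, so both position and velocity can be reconstructed from the position measurement alone.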
3.5 Kalman Filter
The Kalman filter is very powerful in several respects. It supports estimates of
past, present, and future states, and it can do so even when the precise nature of
the modeled system is unknown [11]. In the following sections the basic theory of
the Kalman filter is presented, based on parts from [23], [12] and [13].
3.5.1 Process and Measurement Model
The Kalman filter is a set of equations that calculates the optimal L, given a linear
model of the system and statistical information about the process noise w and the
measurement noise e. When the system and noise are modeled in the way described
in this section, the Kalman filter will compute the value of L that minimizes the
variance of the state estimation error x_k − x̂_k. [23]
The Kalman filter estimates the states of a discrete-time controlled process
described in the form of (3.6) and (3.7), repeated below

x_{k+1} = A x_k + B u_k + G w_k
y_k = C x_k + e_k
The random variables w and e are assumed to be independent of each other
and Gaussian, with normal probability distributions according to

p(w) ∼ N(0, Q)    (3.16)
p(e) ∼ N(0, R)    (3.17)

The covariance matrices are thus defined as R = E{e_k e_k^T} and Q = E{w_k w_k^T},
with E{e_k} = E{w_k} = 0. The process noise covariance matrix Q, the measurement
noise covariance matrix R and the matrices A, B, C and G might change with
each time step or measurement, but in this Master's thesis they will be assumed
stationary and known.
3.5.2 Discrete Time Kalman Filter Equations
In this section the Kalman filter equations are presented to give an overview
of how the filter works, following the presentation in [23] and [12]. Note that
the equations can be algebraically manipulated into several forms; therefore the
equations presented here might differ from those found in other literature. For
more information, for example on how to derive the equations, see [3], [11] or [12].
The Kalman filter estimates a process by using feedback control: the filter
estimates the process state at some time and then obtains feedback in the form
of (noisy) measurements. As such, the equations for the Kalman filter are divided
into two groups: predictor equations and measurement update equations.
The discrete Kalman filter predictor equations are [13]

x̂_{t|t−1} = A x̂_{t−1|t−1} + B u_t         (3.18)
P_{t|t−1} = A P_{t−1|t−1} A^T + G Q G^T    (3.19)

which translate the estimate from the last time step, x̂_{t−1|t−1}, into an estimate
for the current time step. x̂_{t|t−1} refers to the estimate of x at the current
time step t given all the measurements prior to this time step, also referred to as
the "a priori" state estimate. P_{t|t−1} in (3.19) is defined as

P_{t|t−1} = E[(x_t − x̂_{t|t−1})(x_t − x̂_{t|t−1})^T]    (3.20)

and is the estimation error covariance given measurements prior to this time step,
also referred to as the "a priori" estimate error covariance. Note that P_{t|t−1} is
calculated using the estimate error covariance from the last time step, P_{t−1|t−1},
and the process noise covariance (or "model uncertainties") Q. P_{t|t−1} can be
thought of as the uncertainty of how the states x are evolving.
The discrete Kalman filter measurement update equations are [13]

L_t = P_{t|t−1} C^T (C P_{t|t−1} C^T + R)^{−1}     (3.21)
x̂_{t|t} = x̂_{t|t−1} + L_t (y_t − C x̂_{t|t−1})    (3.22)
P_{t|t} = (I − L_t C) P_{t|t−1}                    (3.23)

The measurement update equations are responsible for the feedback, i.e., for
improving the estimate by incorporating a new measurement. They can also be
thought of as corrector equations. (3.21) computes the Kalman gain L_t that
minimizes the estimation error covariance P_{t|t} = E[(x_t − x̂_{t|t})(x_t − x̂_{t|t})^T].
The correction (3.22) generates the state estimate by incorporating the new
measurement y_t. Together, (3.18) and (3.22) form (3.14) in Section 3.3. The final
step (3.23) is to obtain the estimate error covariance P_{t|t}, which is needed in the
next time step.
After each predictor and measurement update pair, the calculated values x̂_{t|t}
and P_{t|t} are saved so they can be used in the next time step. This update is
performed with the time update equations

x̂_{t−1|t−1} = x̂_{t|t}    (3.24)
P_{t−1|t−1} = P_{t|t}     (3.25)

The process is then repeated using these values in the algorithms for the next
time step. This recursive nature of the Kalman filter is practical when doing the
implementation. [23]
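In the scalar case the recursion (3.18)–(3.23) collapses to a few lines of code. The sketch below (Python instead of Matlab; all numerical values are hypothetical) estimates a constant from noisy measurements, with A = 1, B = 0 and C = G = 1:

```python
import random

random.seed(0)
A, C, G = 1.0, 1.0, 1.0
Q, R = 1e-5, 0.1 ** 2            # assumed noise covariances

truth = 1.5                      # constant to be estimated
xhat, P = 0.0, 1.0               # initial estimate and its uncertainty

for _ in range(500):
    y = truth + random.gauss(0.0, 0.1)
    # Predictor equations (3.18)-(3.19), with B = 0
    xhat_prior = A * xhat
    P_prior = A * P * A + G * Q * G
    # Measurement update equations (3.21)-(3.23)
    L = P_prior * C / (C * P_prior * C + R)
    xhat = xhat_prior + L * (y - C * xhat_prior)
    P = (1.0 - L * C) * P_prior
```

Note how P, and with it the gain L, settles at a small stationary value: once the filter is confident, each new measurement only nudges the estimate. This is the steady-state behavior treated in Section 3.5.4.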
3.5.3 Initialization
The initial values x̂_{−1} and P_{−1} have to be chosen before starting the filter. x̂_{−1} is
chosen using knowledge about the state x. For example, if x is a random constant
with normal probability distribution, the best choice is x̂_{−1} = 0. P_{−1} is the
uncertainty in the initial estimate x̂_{−1}. When one is absolutely certain that the
initial state estimate is correct, P_{−1} should be set to 0. Otherwise the best
choice is the variance of x. However, the choice is not critical. [12], [23]
3.5.4 Steady State
If the matrices A, C, Q and R are time-invariant, both the estimation error covariance
P_k and the Kalman gain L_k will converge to stationary values. If this is the
case, these parameters can be precomputed, either by running the filter off-line
or by calculating the stationary value P as described in [12] and [19]

P = A P A^T + G Q G^T − A P C^T (C P C^T + R)^{−1} C P A^T    (3.26)

This equation is referred to as the algebraic Riccati equation. The stationary value
of L can then be calculated as

L = A P C^T (C P C^T + R)^{−1}    (3.27)
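For a scalar system, repeatedly applying the right-hand side of (3.26) is the same as running the covariance recursion of the filter, and for this stable example it converges to the stationary P; (3.27) then gives the stationary gain. A Python sketch with hypothetical values:

```python
# Fixed-point iteration of the algebraic Riccati equation (3.26),
# scalar case, followed by the stationary gain (3.27).
A, C, G = 0.9, 1.0, 1.0
Q, R = 0.01, 0.04

P = 1.0
for _ in range(1000):
    P = A * P * A + G * Q * G - (A * P * C) ** 2 / (C * P * C + R)

L = A * P * C / (C * P * C + R)

# Verify that P indeed satisfies (3.26)
residual = abs(P - (A * P * A + G * Q * G
                    - (A * P * C) ** 2 / (C * P * C + R)))
```

In Matlab the stationary solution can be obtained directly, without iteration, from the Control System Toolbox (e.g. the dare command for the discrete-time Riccati equation).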
3.5.5 Block Diagram of the Stationary Kalman Filter
The computational procedure and the relation of the filter to the system are
illustrated as a block diagram in Figure 3.1. The Kalman filter recursively computes
values of x̂ using the precalculated stationary values of P and L, the initial
estimate x̂_{t−1|t−1} and the input data y_t. [23]
3.5.6 Design Parameters
Design parameters for an observer are the matrices A, B, C and L. If the Kalman
filter equations (3.26) and (3.27) are used to calculate L, the design parameters
are Q, R and G instead of L. If the matrices are time dependent, functionality
to calculate new values for L also has to be implemented, but in this Master's
thesis they will be assumed stationary. In Chapter 4 it is discussed how
to choose the parameters Q and R. The model used in this Master's thesis is
developed in Chapter 5.
3.6 Shaping Filter
When implementing a Kalman filter, it is necessary to have all disturbances acting
as Gaussian noise, i.e., as random signals with zero mean. For many physical
systems encountered in practice, it may not be justified to assume that all noises
are Gaussian. If the spectrum of a signal is known, the signal can be described
as the output of a filter driven by Gaussian noise. Using this, a model with
non-Gaussian noise can be extended to a filter driven by Gaussian noise. These
filters are called shaping filters, and they shape the Gaussian noise to represent
the spectrum of the actual system. The filter can be included in the original
state-space system, giving a new linear state-space model driven by Gaussian noise.
Figure 3.1. Kalman filter block diagram. This figure shows the computational
procedure of the Kalman filter and its relation to the system. The blocks "Discrete
System" and "Measurement" are a graphical representation of the state-space model (3.6)
and (3.7). The Kalman filter recursively computes values of x̂ using the precalculated
stationary values of P and L, and the input signals y and u. It can here be seen how the
estimate x̂_{t|t} delivered from the Kalman filter is saved in the delay block, so that it can be
used in the next time step. The variable x̂_{t|t−1} is calculated as A x̂_{t−1|t−1} + B u_t as in (3.18).
The estimate for the current time step, x̂_{t|t}, is calculated as x̂_{t|t−1} + L(y_t − C x̂_{t|t−1}) as
in (3.22). [11]
This is done for systems with non-Gaussian process noise and non-Gaussian
measurement noise in the next two sections, following the theory in [11].
3.6.1 Shaping Filters for Non-Gaussian Process Noise
Consider a system given on the form

ẋ_1 = A_1 x_1 + G_1 w_1    (3.28)
y = C_1 x_1 + e            (3.29)

where w_1 is non-Gaussian noise and e is zero-mean Gaussian noise. Suppose w_1
can be modeled by a linear shaping filter according to

ẋ_SF = A_SF x_SF + G_SF w    (3.30)
w_1 = C_SF x_SF              (3.31)

where w is Gaussian noise. Then the filter can be included in the original
state-space system, giving the new state-space system

ẋ = A x + G w    (3.32)
y = C x + e      (3.33)
where

x = [x_1; x_SF]                  (3.34)
A = [A_1, G_1 C_SF; 0, A_SF]     (3.35)
G = [0; G_SF]                    (3.36)
C = [C_1, 0]                     (3.37)
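For the simplest case, where both the original system and the shaping filter are scalar, the augmented matrices (3.35)–(3.37) can be written down directly. A short Python sketch with hypothetical values:

```python
# Augmenting a scalar system with a scalar shaping filter, cf. (3.34)-(3.37).
A1, G1, C1 = -1.0, 1.0, 1.0        # original system (3.28)-(3.29)
A_sf, G_sf, C_sf = -5.0, 1.0, 2.0  # shaping filter (3.30)-(3.31)

# The shaping filter state is appended to the original state vector,
# and its output G1*C_sf*x_sf enters the original dynamics.
A_aug = [[A1, G1 * C_sf],
         [0.0, A_sf]]
G_aug = [[0.0],
         [G_sf]]
C_aug = [[C1, 0.0]]
```

The off-diagonal term G_1 C_SF is what couples the filtered (colored) noise into the original dynamics, while the zero in C_aug reflects that the shaping filter state is not measured directly.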
3.6.2 Shaping Filters for Non-Gaussian Measurement Noise
Consider a system given on the form

ẋ_1 = A_1 x_1 + G_1 w    (3.38)
y = C_1 x_1 + e_1        (3.39)

In this case, e_1 is non-Gaussian noise and w is Gaussian noise. Suppose e_1 can be
modeled by a linear shaping filter according to

ẋ_SF = A_SF x_SF + G_SF e    (3.40)
e_1 = C_SF x_SF              (3.41)

where e is Gaussian noise. In this case the new state-space system becomes

ẋ = A x + G W    (3.42)
y = C x          (3.43)
where

x = [x_1; x_SF]           (3.44)
A = [A_1, 0; 0, A_SF]     (3.45)
G = [G_1, 0; 0, G_SF]     (3.46)
C = [C_1, C_SF]           (3.47)
W = [w; e]                (3.48)
Chapter 4
Choosing the Kalman Filter Parameters
In this chapter different possibilities on how to choose and tune the Kalman filter
parameters are presented. First it will be shown how to estimate the parameters
using information about the process and measurement noise. Then it will be
described how to tune the parameters using knowledge about the parameters'
influence on the behavior of the filter, using open-loop or closed-loop simulation, and
finally using autotuning. A global optimization technique called simulated annealing
is implemented for autotuning in Matlab and Simulink.
4.1 Estimating the Covariances
The Kalman filter assumes that all disturbances are stochastic variables with
statistics known in advance. If the system is linear and both the process noise w
and the measurement noise e have a normal distribution, it can be shown that the
Kalman filter is the optimal filter (in the sense of minimizing the variance of the
estimate error). In this case the covariance matrices Q and R should be estimated
using measurements of the noises e and w. [12]
Each element of R is defined as [3]

R_ij = E[(e_i − ē_i)(e_j − ē_j)^T]    (4.1)

where ē_i is the mean value of e_i, and E[ζ] denotes the statistical expected
value of ζ. The matrix R is a symmetric n × n matrix, where n is the
number of elements in e. The diagonal elements of the covariance matrix are the
variances of the components of e, while the off-diagonal elements are the scalar
covariances between its components. If the components of e are independent of each
other, the off-diagonal elements of R should be set to 0.
By investigating the measured signals, it is possible to obtain an estimate of
the covariance matrix R. Assume that the information in the measured signal y
is constant. The elements of R can then be estimated as in [4] and [2] using

R_i = 1/(N − 1) Σ_{t=1}^{N} (y_i(t) − ȳ_i)²    (4.2)

where i denotes the i:th measured signal, ȳ_i is the mean value of y_i, and N is the
number of samples used for the estimation. The uncertainties of the measured
signals are here assumed to be independent, which results in a diagonal R matrix.
Now assume that the necessary information in the measured signal is of low
frequency, for example the speed of the vehicle. The measurement noise e can
then be estimated by low-pass filtering the signal y as [2]

e = y(t) − y_f(t)    (4.3)

where y_f is the low-pass filtered signal. The elements of R can then be calculated
as

R̂_ij = 1/N Σ_{t=1}^{N} e_i(t) e_j(t)    (4.4)

where i and j are the indices of the measured signals and N is the number of samples
used for the estimation. The estimation of the covariance matrix can be performed
in Matlab using the command "covf" [18].
The definition of the covariance matrix for the process noise Q is similar to
that of R, and Q can also be estimated using a similar method. The problem that
might arise is the fact that not all states in the state vector are measurable.
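The estimate (4.2) is the ordinary sample variance. The Python sketch below applies it to a simulated constant signal with additive noise of known variance, so the result can be checked against the true value (signal level and noise standard deviation are hypothetical):

```python
import random

random.seed(1)
sigma = 0.2                       # true noise standard deviation
y = [3.0 + random.gauss(0.0, sigma) for _ in range(10000)]

# Sample variance as in (4.2)
N = len(y)
y_bar = sum(y) / N
R_hat = sum((yi - y_bar) ** 2 for yi in y) / (N - 1)
```

With 10000 samples, R_hat lands close to the true variance sigma² = 0.04; the relative error of such an estimate shrinks roughly as 1/sqrt(N).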
4.2 Choosing Q and R Manually
A drawback of the Kalman filter is that knowledge about the process and measurement
noise statistics is required. It may be possible to determine the measurement noise
covariance from measurements, but determining the process noise covariance is
more difficult, for example when not all states are measurable. Instead, a
common approach is to test different choices of Q and R until the Kalman filter
shows acceptable behavior. To understand how the parameter choice affects the
filter, the function of the parameters is now discussed, based on parts
from [12] and [11].
L is calculated using A, C, Q and R, and will therefore depend on which
characteristics the process noise and the measurement noise are given in the model.
The influence on L from different choices of R and Q can be understood by
inserting (3.7) in (3.22), which gives

x̂_k = x̂_k^− + L_k (y_k − C x̂_k^−)
    = x̂_k^− + L_k C x_k + L_k e_k − L_k C x̂_k^−
    = x̂_k^− + L_k C (x_k − x̂_k^−) + L_k e_k    (4.5)
This shows that the state estimate x̂_k is adjusted using the difference between the
estimate x̂ and the real state x, as well as the measurement noise e. Both terms
are multiplied with the gain L. A large Q results in a large L, which means a fast
filter with good trust in the measurements, but it also makes the observer more
sensitive to the measurement noise e. A large R results in a small L, which means
that the measurements are not considered reliable. This demands good trust in
the model, which makes the observer sensitive to errors in the model.
Assume that the parameters are chosen as Q = Q_1 and R = R_1. Then the
stationary values of P and L can be calculated using (3.26) and (3.27). Assume
that the calculated values are P_1 and L_1.
If Q and R are both multiplied with the same value λ, the resulting P in (3.26)
is according to [12] also multiplied with λ. This gives P = λP_1. The calculation
of L in (3.27) then becomes

L = A P C^T (C P C^T + R)^{−1}
  = A (λP_1) C^T (C (λP_1) C^T + λR_1)^{−1}
  = λ A P_1 C^T λ^{−1} (C P_1 C^T + R_1)^{−1}
  = A P_1 C^T (C P_1 C^T + R_1)^{−1}
  = L_1    (4.6)

In other words, L remains the same when Q and R are multiplied with the
same value. The quotient between Q and R is therefore the design parameter; the
absolute values do not matter. When choosing the parameters, R can be set to a
constant value and Q adjusted until the filter shows acceptable behavior.
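The invariance shown in (4.6) is easy to verify numerically. The Python sketch below (scalar system, hypothetical values) computes the stationary gain by iterating (3.26) first with (Q, R) and then with (λQ, λR); the two gains agree:

```python
def stationary_gain(A, C, G, Q, R, iters=2000):
    # Iterate the scalar Riccati equation (3.26) to its fixed point,
    # then return the stationary gain (3.27).
    P = 1.0
    for _ in range(iters):
        P = A * P * A + G * Q * G - (A * P * C) ** 2 / (C * P * C + R)
    return A * P * C / (C * P * C + R)

A, C, G, Q, R = 0.95, 1.0, 1.0, 0.02, 0.5
lam = 7.0
L1 = stationary_gain(A, C, G, Q, R)
L2 = stationary_gain(A, C, G, lam * Q, lam * R)   # scaled noise covariances
```

Only the ratio Q/R matters for the gain, which is why one of the two can be fixed during manual tuning.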
4.3 Simulation
Using simulation in Simulink, different parameter choices can be evaluated without
having to make a test drive in a real vehicle. Here two different simulation methods
will be explained: open-loop and closed-loop.
4.3.1 Open-Loop Simulation
One method is open-loop simulation. With this method, measurements made in a
test car can be recorded and given as input back to the model in Simulink. It is
now possible to simulate the Kalman filter with different parameters and compare
the outputs. The reason why this simulation method is called open-loop is that
the different parameter choices do not affect the behavior of the vehicle. The
filter is fed with the recorded measurements, but the output from the filter is not
connected to the controller run in the simulation. This type of simulation is used
to produce all the diagrams presented in the next chapters.
4.3.2 Closed-Loop Simulation
The other type of simulation used is closed-loop simulation. With this method
a scenario including the vehicle and the road is simulated. The output from the
filter is attached to the controller, and the behavior of the vehicle is affected by how
well the filter is performing. Figure 4.1 shows the Simulink model used for the
closed-loop simulation. The model consists of several subsystems, all developed by
DaimlerChrysler, and one of them is the controller containing the Kalman filter.
Figure 4.1. Closed-loop simulation. The vehicle, its controller and the environment are
simulated together. The output of the filter will here affect the behavior of the vehicle.
The environment and the actions of the driver are specified using simulation scenarios. It
is also possible to specify another vehicle traveling in front, a so-called "rabbit".
Simulation Scenarios
For the closed-loop simulation two different scenarios are prepared. The first
scenario represents the vehicle being driven up and down a hill. The vehicle is
unloaded and the driver has activated the cruise control with a set speed of 120 km/h.
The first part of the hill has a slope of 10% ("uphill"), and the second part
has a slope of −15% ("downhill").
The second scenario represents the vehicle being driven on a straight road. The
vehicle is heavily loaded; the total mass of the vehicle is 1.8 times the normal mass.
The vehicle is driven at 80 km/h and the driver has activated the cruise control.
The driver adjusts the speed by using the cruise control lever, first by increasing
the set speed to 120 km/h and then by decreasing it again to 80 km/h.
4.4 Autotuning
Tuning the filter, i.e., choosing the values of the process noise covariance Q and
the measurement noise covariance R so that the filter performance is optimized with
respect to some performance measure, is a challenging task. Performing it manually
is time-consuming, with no guarantee of optimality. Poor tuning may result
in unsatisfactory performance of an otherwise powerful algorithm. It is therefore
often desirable to develop automated, systematic procedures for Kalman filter
tuning.
A systematic method of choosing Q and R is to perform many simulations using
different parameters and evaluate the performance. A performance evaluation
variable may be the variance of the state estimation error x_k − x̂_k (which is also
what the Kalman filter minimizes). [12]
4.4.1 Evaluation Using RMSE
The observer gives a so-called point estimate x̂ of the state vector x using the
inputs u and measurements of the output y. Suppose that it is possible to generate
M realizations of the data u and y and apply the same estimator to all of them.
For evaluation it is necessary to measure the performance of this estimation. One
such performance measure is the root mean square error (RMSE) described in [13]

RMSE(k) = sqrt( 1/M Σ_{j=1}^{M} ||x_k − x̂_k^(j)||²_2 )    (4.7)

where the subindex 2 stands for the 2-norm, also called the Euclidean norm. (The
Euclidean norm is defined as ||x||_2 = sqrt(x_1² + ... + x_n²).) This is an estimate of the
standard deviation of the estimation error norm at each time instant. A scalar
measure for the whole data sequence is

RMSE = sqrt( 1/k Σ_{i=1}^{k} 1/M Σ_{j=1}^{M} ||x_i − x̂_i^(j)||²_2 )    (4.8)

The scalar performance measure can be used for autotuning, as long as it is
possible to generate several data sets under the same premises. Optimally this
should be done using real-life testing (instead of simulation), but this might not
be possible, for instance when it is too expensive to repeat the same experiment
many times. [12]
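The scalar measure (4.8) can be sketched as follows (Python for illustration, scalar state). The M "estimator runs" are here simulated as the true sequence plus noise with a known standard deviation, so the expected RMSE is known in advance:

```python
import math
import random

random.seed(2)
M, K = 200, 50                     # number of realizations and time steps
truth = [math.sin(0.1 * k) for k in range(K)]   # hypothetical true states

# Each "estimator run" is the truth plus estimation error of std 0.05.
runs = [[xk + random.gauss(0.0, 0.05) for xk in truth] for _ in range(M)]

# Scalar RMSE over the whole sequence, as in (4.8)
rmse = math.sqrt(
    sum(sum((truth[k] - run[k]) ** 2 for run in runs) / M for k in range(K)) / K
)
```

Since the simulated errors have standard deviation 0.05, the computed RMSE comes out close to that value; in a real autotuning loop the runs would instead be produced by the filter under test.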
4.4.2 Autotuning Using Matlab
To automatically find the optimal parameters for the Kalman filter implemented
at DaimlerChrysler, an optimization algorithm is developed in Matlab. The
algorithm starts with some parameter values, then simulates the system using
these values for the Kalman filter and calculates a cost function based on the RMSE
explained in the previous section. The cost function measures how well the actual
parameters are working. The optimization algorithm then changes the values,
simulates again and calculates a new value for the cost function. The algorithm
continues until optimal values for the parameters are found.
The source code for the script implementing this optimization technique is
found in Appendix A. The code is a modified example from the Optimization
Toolbox [6], and it uses a function called "lsqnonlin".
Using this optimization technique does not give a satisfactory result. After
several (about 100) restarts with different starting parameters, the script each time
ends up giving almost the same parameters back to the user. The optimization
function does not vary the parameters enough to see if there are any better
solutions.
One reason is that Matlab's optimization functions are designed to find local
minima, and they can be fooled, especially by oscillatory functions. They will only
find a global minimum if it is the only minimum and the function is continuous.
Global optimization problems are typically quite difficult to solve. Methods for
global optimization problems can be categorized based on the properties of the
problem that are used and the types of guarantees that the methods provide for
the final solution. "Simulated annealing" is a popular approach for the global
optimization of continuous functions when derivatives of the objective function
are not available. [21]
4.4.3 Simulated Annealing
The rest of this chapter presents an algorithm implementing the theory of
simulated annealing. More theory on the algorithm can be found in [15], and the
Matlab implementation developed to do optimization with Simulink is found in
Appendix B.
Simulated annealing (SA) is a stochastic global minimization technique. Given
a function E(s) depending on some parameter vector s = [s_1, ..., s_n], the SA
algorithm attempts to locate a good approximation to the global minimum of the
function.
The name and inspiration come from annealing in metallurgy, a technique
involving heating and controlled cooling of a material to increase the size of its
crystals and reduce their defects. The heat causes the atoms to become unstuck
from their initial positions (a local minimum of the internal energy) and the slow
cooling gives them more chances of finding configurations with lower internal
energy than the initial one.
By analogy with this physical process, each step of the SA algorithm considers
some random neighbor s̃ of the current parameter state s, and probabilistically
decides between moving the system to state s̃ or staying in s. This probability is a
function P(E(s), E(s̃), T) depending on the corresponding values of the function
for the states s and s̃, and on a parameter T (called the temperature) that is
gradually decreased during the process. There are different possibilities for choosing
the function P, as long as some constraints are fulfilled. This will be explained
later.
Another explanation of the simulated annealing algorithm goes as follows.
Consider a man running in the mountains. His task is to find the place with the lowest
altitude. The cost function that should be minimized is in this case the man's
altitude, and the variable T is his current strength. To find the place with the
lowest altitude (the global minimum) he sometimes has to try running up the hills,
otherwise he may get stuck in a valley (a local minimum), not knowing that a better
solution is hiding in another valley behind the next hill. The man's will to go
"uphill" is larger at the beginning, when his strength T is large. When T tends to
zero, the man only has the strength to run "downhill".
Pseudo-Code
The following pseudo-code describes the simulated annealing algorithm. It starts
at a state s_0 and recursively explores the search space using the method described
above. The algorithm continues until a maximum number of evaluations k_max
has been reached, or until a state with the target function value e_target or less
is found. The function call "neighbor(s)" should generate a randomly chosen
neighbor of a given state s. The function call "random()" should return a random
value in the range [0, 1]. The annealing schedule is defined by "tempfunction()",
which should yield the temperature to use, given the fraction r of the time that
has passed so far.
s = s0; // Initial state
e = E(s); // Initial function value
s_best = s; // Initial best parameters
e_best = e; // Initial function minimum
T = initialtemperature(k_max); // Initial temperature
k = 0; // Evaluation count
while k < k_max and e > e_target // While not good enough
s_neighbor = neighbor(s); // Pick some neighbor
e_neighbor = E(s_neighbor) // Compute its function value
if e_neighbor < e_best then // Is this a new best?
s_best = s_neighbor; // Yes, save it
e_best = e_neighbor;
end if
if random() < P(e, e_neighbor, T) // Move to the neighbor state?
s = s_neighbor; // Yes, change state
e = e_neighbor;
end if
T = tempfunction(T,k/k_max); // Calculate new temperature
k = k + 1; // Count evaluations
end while
return s_best; // Return best solution found
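A minimal executable version of the pseudo-code above can be written in a few lines of Python (the thesis implementation is in Matlab, see Appendix B). The test function, the neighbor step and the linear annealing schedule below are chosen only for illustration:

```python
import math
import random

random.seed(3)

def E(s):
    # Multimodal 1-D test function: global minimum 0 at s = 0,
    # surrounded by higher-lying local minima.
    return s * s + 10.0 * (1.0 - math.cos(3.0 * s))

s = 4.0                            # initial state, far from the global minimum
e = E(s)
s_best, e_best = s, e
T0, k_max = 10.0, 5000

for k in range(k_max):
    T = T0 * (1.0 - k / k_max)     # linear annealing schedule
    s_new = s + random.uniform(-0.5, 0.5)   # random neighbor
    e_new = E(s_new)
    if e_new < e_best:             # record the best state seen so far
        s_best, e_best = s_new, e_new
    # Transition probability as in (4.9): always accept downhill moves,
    # accept uphill moves with probability exp((e - e_new)/T).
    if e_new < e or (T > 0 and random.random() < math.exp((e - e_new) / T)):
        s, e = s_new, e_new
```

With a high initial temperature the walk escapes the local minima near the start point; as T falls, the acceptance of uphill moves dies out and the state settles into a low-lying basin.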
Implementation of the SA Algorithm
In order to apply the SA method to a specific problem, one must specify the
parameter search space, the neighbor selection method, the probability transition
function, and the annealing schedule (temperature function). These choices can
have a significant impact on the effectiveness of the method. Unfortunately, there
are no choices that will be good for all problems, and there is no general way to
find the best choices for a given problem. It has therefore been observed that
applying the SA method is more an art than a science.
In the following subsections it is explained how the algorithm is implemented.
The general demands and calculations are described here, and the complete
implementation of the algorithm in Matlab can be found in Appendix B. The script
can be used to perform autotuning on the filter.
Choosing the Neighbors
The neighbors of the current state have to be chosen so that the function values of
the neighboring states are not too far from the function value of the current
state. This makes the probability of moving to the new state higher, and in this
way the algorithm can move on, finding a good solution. It is true that choosing
a neighbor far away from the current state could lead to finding the best solution
faster, but this also leads to a low probability of moving to the new solution, and
the risk of getting stuck in a non-optimal solution is higher.
In the Matlab implementation found in Appendix B, the neighbors s̃ of the
current state s are found by moving a random distance from s in a random direction.
The distance has been chosen as a value between −0.5 and +0.5 times the
current parameter vector s. The Matlab code is as follows

move = (rand(1,3)-0.5).*s;  % Randomize between -0.5 and +0.5
s_neigbour = s + move;      % Calculate new parameters
Transition Probability Function P
The function P calculates the probability of making the transition from the current state s to a candidate new state s̃. The function depends on the corresponding function values E(s) and E(s̃), and the temperature T. The probability (a number between 0 and 1) should be greater than 0 when E(s̃) > E(s). This is an essential requirement, meaning that the system may move to the new state even when its solution is worse than the current one. It is this feature that prevents the method from becoming stuck in a local minimum. As the algorithm evolves and T goes to zero, the probability P must tend to zero if E(s̃) > E(s) and to a value greater than zero if E(s̃) < E(s). This makes the system favor moves that go “downhill” and avoid those that go “uphill”. When T is zero the algorithm will fall down to the nearest local minimum.
In the implementation found in Appendix B the probability is calculated as

    P = 1                       if E(s̃) < E(s)
    P = e^((E(s) − E(s̃))/T)     otherwise        (4.9)

This is the method used in [21] and [15]. However, there is no mathematical justification for using this particular formula in SA, other than the fact that it corresponds to the requirements explained above. The Matlab code is as follows
function P = transition_P(E, E_neighbor, T)
if E_neighbor < E
P = 1; % Always go down the hill
else
P = exp((E-E_neighbor)/T); % Move if temperature is high
end
end
Annealing Schedule
The annealing schedule must be chosen with care. The initial temperature must be large enough to make the “uphill” and “downhill” transition probabilities nearly the same. To do that, one must have an estimate of the difference E(s̃) − E(s) for a random state and its neighbors. The temperature must then decrease so that it is zero, or nearly zero, when the algorithm is supposed to finish. For this thesis an exponential schedule has been chosen, where the temperature decreases by a fixed cooling factor 0 < α < 1 at each step. The temperature T_k for the current time step k is calculated using the cooling factor α and the temperature from the previous time step, T_{k−1}, as

    T_k = α T_{k−1}    (4.10)
The initial temperature T_0 and the cooling factor α now have to be chosen. Typically they are obtained by trial and error and tuned to the function E which should be minimized. In the problem at hand this function depends on the simulated model, and as each evaluation of the function involves running a simulation in Simulink, such tuning procedures are impractical. What is needed is an automatic and reasonable way of setting these parameters based on some initial information obtained by the algorithm. Such a method is presented in [21] and used in this Master’s thesis.
Let s_0 be the initial state of the system. To pick the initial temperature T_0, generate a set of solutions that lie in the neighborhood of s_0. Let s_bestn and s_worstn be the best and worst among the neighbor solutions. If E(s_worstn) > E(s_0), define the maximum uphill move as maxmove = E(s_worstn) − E(s_0). Otherwise, define maxmove = E(s_0) − E(s_bestn).
It is now reasonable to assume that the initial temperature T_0 is high enough if an “uphill” move of size maxmove will be accepted with a relatively high probability, say 0.9. Setting P = 0.9 for an “uphill” move of maxmove, (4.9) gives

    0.9 = e^(−maxmove/T)    (4.11)

and T = T_0 can be calculated.
Next, the cooling parameter α is calculated. To do this, assume that the final acceptance probability for an “uphill” maxmove should be very low, say 10^(−6). If the final probability is too high, the algorithm still behaves like a random search. As k_max is the maximum number of function evaluations allowed, this is also the number of times the temperature is reduced. (4.9) now gives

    10^(−6) = e^(−maxmove/(T_0 α^k_max))    (4.12)
and α can be obtained.
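Equations (4.11) and (4.12) can be solved in closed form for T_0 and α. A sketch of the computation (Python, for illustration; the acceptance probabilities 0.9 and 10⁻⁶ follow the text, while maxmove = 2.0 and k_max = 200 are invented example values):

```python
import math

def annealing_parameters(maxmove, k_max, p_start=0.9, p_end=1e-6):
    """Initial temperature and cooling factor from the largest uphill move.

    p_start = exp(-maxmove / T0)                solved for T0    (cf. (4.11))
    p_end   = exp(-maxmove / (T0 * alpha^k))    solved for alpha (cf. (4.12))
    """
    T0 = -maxmove / math.log(p_start)
    alpha = (-maxmove / (T0 * math.log(p_end))) ** (1.0 / k_max)
    return T0, alpha

# Example: a largest observed uphill move of 2.0 and 200 evaluations
T0, alpha = annealing_parameters(maxmove=2.0, k_max=200)
```

Plugging the results back into (4.11) and (4.12) recovers the two target probabilities, which is a quick way to sanity-check the algebra.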
Restarting
The SA algorithm uses a random method to find the solution, which means that several executions may give different outputs. When a better solution is needed and more computer time is available, instead of increasing the maximum number of iterations allowed (k_max), it is sometimes better to start the algorithm over with a new initial state s_0. Moving back to a solution that was significantly better, rather than always moving from the current state, is called restarting. The decision to restart could be based on a fixed number of evaluation steps, or on the current function value being too far above the best value found so far. The Matlab implementation found in Appendix B can be executed recursively, and the starting parameter state s_0 for the next iteration is set to the best solution found in the previous iteration. In this way the algorithm can be left running for a long time, restarting over and over again using the previous best solution found as the new initial solution.
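A minimal sketch of such a restart wrapper (Python, for illustration): it reruns an annealing routine, seeding each run with the best state found so far. The `one_run` routine below is a stand-in local search invented for the example, not the actual SA script from Appendix B.

```python
import random

def anneal_with_restarts(anneal, E, s0, n_restarts):
    """Run an annealing routine repeatedly, restarting from the best state so far."""
    s_best, e_best = s0, E(s0)
    for _ in range(n_restarts):
        s, e = anneal(s_best)      # Restart from the best solution found so far
        if e < e_best:             # Keep the result only if it improved
            s_best, e_best = s, e
    return s_best, e_best

# Tiny stand-in for one SA run: a short greedy random search (illustration only)
random.seed(1)
E = lambda x: abs(x - 2.0)
def one_run(s0):
    best = s0
    for _ in range(50):
        cand = best + random.uniform(-0.3, 0.3)
        if E(cand) < E(best):
            best = cand
    return best, E(best)

s_best, e_best = anneal_with_restarts(one_run, E, s0=0.0, n_restarts=5)
```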
Results
Figure 4.2 shows one execution of the SA script with 200 closed-loop simulations in the first scenario (the hill) described in Section 4.3.2. The upper diagram shows the cost function described in Section 4.4.1, calculated using the difference between the filter estimate and the “real” value. The diagram shows that the SA algorithm searches for the global minimum of the cost function without getting stuck in local minima. The tolerance for parameter changes that increase the cost function is high in the beginning, but decreases together with the temperature shown in the lower diagram.
Figure 4.2. One execution of the simulated annealing (SA) algorithm using 200 evaluations. By analogy with the physical process, each step of the SA algorithm probabilistically decides between moving the system to the new state or staying in the old state. The probability depends on the parameter T (called the temperature). The top diagram shows the RMSE cost function for all the states evaluated. The bottom diagram shows the temperature, which gradually decreases during the process. As can be seen, the current solution changes almost randomly when T is large. This allowance for “uphill” moves saves the method from becoming stuck at local minima.
Chapter 5
Kalman Filter Implementation
In this chapter the model for the longitudinal dynamics of the vehicle is derived, and an initial version of the Kalman filter is implemented and tested. First the function of the observer in the context of the inner control loop will be explained, and then a model for the expected acceleration of the vehicle will be derived. It will be shown that this model cannot take all driving situations into consideration, resulting in a large error in the calculated acceleration. To deal with this error, a Kalman filter is implemented. At the end of this chapter it is discussed how to best choose the filter parameters.
5.1 Overview of the Inner Control Loop
Before implementing the Kalman ﬁlter, a short explanation of its surroundings,
the inner control loop of the vehicle longitudinal controller, is needed.
An overview of the inner control loop is given in Figure 5.1. (For a complete diagram of the outer and inner control loops, refer to Figure 1.1.) The controlled system is the vehicle with its actuators: engine, brake and gearbox. The desired engine torque T_e and the desired brake torque T_b are calculated and given as input to the actuators. The output from the controlled system is the actual motion of the vehicle and can be thought of as the actual speed v_real and the actual acceleration a_real.

The block “Sensors” in the figure contains signal processing software which analyzes the motion of the vehicle and gives information back to the controller. The measured speed v_m and acceleration a_m are derived from wheel speed sensors.

Input to the controller is the desired acceleration a_des. The momentary deviation a_dev from the desired value is calculated as

    a_dev = a_des − a_m    (5.1)
This deviation is fed into a PID controller which calculates a control value to the
Figure 5.1. Inner control loop of the longitudinal controller. Input to the controller is the desired acceleration a_des. Sensors measure the real speed v_real and acceleration a_real of the vehicle, and the task is to get a_real = a_des. The deviation a_dev is given as input to a PID controller. The control value a_c is converted by two conversion functions F_1 and F_2 into the torques T_e and T_b, which are given as input to the vehicle’s actuators engine and brake, respectively. The observer looks at the torques T_e and T_b and calculates the expected acceleration of the vehicle. The output a_z from the observer is the estimated difference between the expected acceleration and the measured acceleration. This is summed with the output from the PID controller, forming a_c.
actuators which minimizes the deviation. The output from the PID controller will form the variables T_e and T_b described above, after passing through two conversion steps F_1 and F_2.
The block F_1 calculates the needed torque T on the wheel axis from the corresponding acceleration a_c, using the equation

    T = r_w (a_c m̃ + F_resistance)    (5.2)

Since m̃ is the standard mass of the vehicle plus the moments of inertia of the wheel axis and other rotating parts, a_c m̃ is the force needed to get the desired acceleration a_c. The other force taken into consideration here, F_resistance, acts on the vehicle against its direction of travel and is called “drive resistance”. It consists of the force due to air resistance, losses due to tire deflection, etc., and will be described in detail in Section 5.2. By adding F_resistance to a_c m̃ and then multiplying with the wheel radius r_w, T becomes the torque needed on the wheel axis to give the vehicle the acceleration a_c.
The output torque T is fed into another block, called F_2 in the figure, before delivery to the actuators. F_2 coordinates the work of the engine, brake and gearbox. Depending on the calculated torque T, action is taken either by the brake or the engine, where the engine can also be used to decelerate.

As explained above, a_c is the control value from the controller given as input to the block F_1. It consists of two parts: the output from the PID controller, and the output from the block “Observer”. The task of the observer is to estimate
Figure 5.2. Modeling engine and brake. The blocks G_e and G_b model the dynamics of the engine and brake respectively. T_e and T_b are the desired torques given as input. The engine and brake models calculate the estimated output torques T_engine and T_brake. The model of the longitudinal dynamics uses these values to calculate the speed and acceleration of the vehicle.
the drive resistance parameters and other unknown parameters not taken into consideration by F_1. This is performed by looking at the torques T_e and T_b given to the actuators, and calculating an expected acceleration. The difference between the expected acceleration and the measured acceleration gives a hint about the model error. This error, called a_z, is subtracted from the output from the PID controller to form a_c.
The block called “Vehicle” is further described in Figure 5.2. The blocks “G_e” and “G_b” model the dynamics of the engine and brake as transfer functions with torques T_engine and T_brake as outputs. These transfer functions and the equations describing the vehicle longitudinal dynamics will be presented in the next section.
5.2 Modeling the Acceleration
A model for the expected longitudinal acceleration of the vehicle will now be presented. Assume just for this section that the speed of the vehicle is v and the acceleration of the vehicle in its driving direction is a. These are the variable names used in this section for deriving the model; in later sections the acceleration calculated by the model will be called a_exp, and for the vehicle speed the measured value v_m will be used.

Using Newton’s second law, F = ma, the forces acting on the vehicle can be written as

    m a = F_drive − F_brake − F_resistance    (5.3)

where F_drive is the force acting on the vehicle through the transmission and engine, and F_brake is the force from the braking system. The drive resistance F_resistance is modeled as

    F_resistance = F_air + F_roll    (5.4)
When a wheel is rolling, energy losses occur due to deflection of the tire. This is modeled as a force acting on the wheel in the opposite direction of rolling,

    F_r = c_rr N    (5.5)

where N is the normal force acting on the wheel from the ground and c_rr is the rolling resistance coefficient [20]. N is in this case defined as

    N = mg/n    (5.6)

where m is the mass of the vehicle, g is the gravitational acceleration and n is the number of wheels. Assuming that all wheels have the same c_rr, the total rolling resistance acting on the vehicle from all wheels can be calculated as

    F_roll = F_r n = c_rr (mg/n) n = c_rr mg    (5.7)
The air resistance F_air is modeled as follows. When an object is moving through air at relatively high speed, the object experiences a force acting on it against its direction of travel. This force can according to [14] be written as

    F_air = (1/2) ρ c_d A_w (v + v_wind)^2    (5.8)

where ρ is the density of the air, c_d is the drag coefficient and A_w is a reference area related to the projected front area of the object. v_wind is the unknown speed of the wind and it will therefore be neglected in this model.
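Collecting (5.4), (5.7) and (5.8) into one function gives a compact drive-resistance model. A sketch (Python, for illustration; the parameter values in the example are rough, hypothetical figures for a passenger car, not the thesis's identified values):

```python
def drive_resistance(v, m, c_rr=0.012, rho=1.2, c_d=0.3, A_w=2.2, g=9.81):
    """F_resistance = F_air + F_roll, with v_wind neglected (cf. (5.4)-(5.8))."""
    f_roll = c_rr * m * g                    # (5.7): rolling resistance
    f_air = 0.5 * rho * c_d * A_w * v ** 2   # (5.8): air resistance, v_wind = 0
    return f_air + f_roll

# Example: an 1800 kg vehicle at 100 km/h (parameter values hypothetical)
F = drive_resistance(v=100 / 3.6, m=1800.0)
```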
F_drive and F_brake depend on the torques acting on the wheel axis, T_drive and T_brake, and the wheel radius r_w as

    F_drive = T_drive / r_w    (5.9)
    F_brake = T_brake / r_w    (5.10)
The torque acting on the wheel axis T_drive depends on the output torque from the engine T_engine, the gearbox and differential ratios i_g and i_d, the efficiency factor for the drivetrain η, the moments of inertia for engine and gear, I_e and I_g, and the moments of inertia for the front and rear wheel axes, I_f and I_r, as follows [20]:

    T_drive = η i_d i_g T_engine − (i_d^2 i_g^2 I_e + i_d^2 I_g) a/r_w − (I_f + I_r) a/r_w    (5.11)
Inserting (5.4), (5.7), (5.8), (5.9), (5.10) and (5.11) in (5.3) yields

    m a = (1/r_w) η i_d i_g T_engine − (i_d^2 i_g^2 I_e + i_d^2 I_g) a/r_w^2 − (I_f + I_r) a/r_w^2 − (1/r_w) T_brake − (1/2) ρ c_d A_w v^2 − c_rr mg    (5.12)
Now let

    m̃ = m + (I_f + I_r)/r_w^2 + i_d^2 i_g^2 I_e/r_w^2 + i_d^2 I_g/r_w^2    (5.13)
Inserting (5.13) in (5.12) gives

    m̃ a = (η i_d i_g / r_w) T_engine − (1/r_w) T_brake − (ρ c_d A_w / 2) v^2 − c_rr mg    (5.14)
Dividing by m̃ yields the equation for the vehicle acceleration as

    a = (η i_d i_g / (r_w m̃)) T_engine − (1/(r_w m̃)) T_brake − (ρ c_d A_w / (2m̃)) v^2 − c_rr mg / m̃    (5.15)
In previous work at DaimlerChrysler [19], models for the engine and brake were
prepared. The results are averages of several tests with diﬀerent vehicles and will
be presented here and used in this Master’s thesis. See [19] for more details.
The engine is modeled as a transfer function G_e(s) as

    G_e(s) = L{g_e(t)} = k_1 ω_0^2 / (s^2 + 2Dω_0 s + ω_0^2) · e^(−s T_t1)    (5.16)

which relates the input torque T_e to the output torque from the engine, T_engine, as

    T_engine(t) = g_e(t) ∗ T_e(t)    (5.17)
In the same way, the brake is modeled as a transfer function G_b(s) as

    G_b(s) = L{g_b(t)} = k_2 / (T_t2 s + 1) · e^(−s T_t3)    (5.18)

which relates the input torque T_b to the output torque from the brake, T_brake, as

    T_brake(t) = g_b(t) ∗ T_b(t)    (5.19)
The parameters in the models were found in [19] using system identification and chosen as the mean values over different test drives with different vehicles:

    k_1 = 1
    ω_0 = 16.7 rad/s
    D = 0.82
    T_t1 = 90 ms
    k_2 = 0.98
    T_t2 = 80 ms
    T_t3 = 140 ms
Adding this information to (5.15) gives the model for the longitudinal acceleration of the vehicle used by the Kalman filter:

    a(t) = (η i_d i_g / (r_w m̃)) (g_e(t) ∗ T_e(t)) − (1/(r_w m̃)) (g_b(t) ∗ T_b(t)) − (ρ c_d A_w / (2m̃)) v(t)^2 − c_rr mg / m̃    (5.20)

In Section 5.4 this calculated (expected) acceleration will be called a_exp.
[Plot “Speedtronic: step down and up again”: acceleration (m/s²) versus time (s), curves a_expected and a_measured]
Figure 5.3. Test drive using cruise control. The vehicle is traveling with a speed of 60 km/h when the driver changes the set desired speed to 30 km/h, then after nearly 30 seconds changes back to 60 km/h again. The figure shows the calculated expected acceleration (solid line) and the measured acceleration (dashed line). The measurement has been recorded during a test with a relatively nervous controller, causing the large oscillations between 10 and 20 seconds.
5.3 Errors in the Acceleration Model
A thorough validation of the acceleration model is not a subject of this Master’s thesis, but to verify that the model seems reasonable, some tests will now be presented. During test drives, the signal a_m from the sensors is recorded. This measurement of the actual acceleration is then compared with the expected acceleration calculated by the model in (5.20). In this section five such recordings will be presented.

Figure 5.3 shows a test drive using cruise control (explained in Section 2.5). The vehicle is traveling with a speed of 60 km/h when the driver changes the set desired speed to 30 km/h, then after nearly 30 seconds changes back to 60 km/h again. The figure shows the calculated expected acceleration (solid line) and the measured acceleration (dashed line).
Figure 5.4 shows a similar test drive using cruise control, this time with set
speeds 60 km/h, 30 km/h, 60 km/h and then 30 km/h again.
In both Figure 5.3 and Figure 5.4 it can be observed that the agreement between the measured and calculated expected acceleration is relatively good. The main characteristics of the acceleration have been captured by the model. Some differences between the calculated and the measured signal can be seen in the figures. This is expected, as the model does not exactly describe the specific vehicle. The model parameters have been chosen as the mean values from several test drives
[Plot “Speedtronic: step up”: acceleration (m/s²) versus time (s), curves a_expected and a_measured]
Figure 5.4. Test drive using cruise control. The vehicle is traveling with a speed
of 60 km/h when the driver changes the set desired speed ﬁrst to 30 km/h, then back
to 60 km/h, and at last to 30 km/h again. The solid line is the calculated expected
acceleration and the dashed line is the measured acceleration.
with different vehicles [19]. Therefore the model does not exactly comply with the vehicle being used here.
Figure 5.5 shows the vehicle traveling with a constant speed of 30 km/h on a bumpy road. The vehicle loses speed and the controller tries to compensate, resulting in an oscillatory behavior. As can be seen, the agreement between the measured and calculated acceleration is recognizable here as well. A notable feature of the calculated value is that it is always a bit, sometimes up to 0.5 s, “faster” than the measured value. The reason for this could be that the identified time delays in the models for the engine and brake are too small when applied to the vehicle used in the tests. Another reason is that the measurement of the acceleration in the vehicle contains a low-pass filter with some time delay. In Figure 5.5, the calculated value is always a bit higher than the measured value. The reason for this might be that the rolling resistance on the bumpy road is higher than expected, and that the mass of the vehicle is higher than set in the model, due to one extra passenger.
As can be seen in Figure 5.6 and Figure 5.7, the model does not perform as well when the working conditions change. In Figure 5.6 a test has been made using the same vehicle but with an attached trailer with a mass of 2000 kg. The vehicle is traveling using cruise control, with a desired speed of 60 km/h. After 16 seconds the desired speed is changed to 80 km/h. As can be seen, the calculated expected acceleration does not comply with the measured acceleration in this case. The large errors in the calculations are caused by a wrong value of the parameter m, the mass of the vehicle. In the current model, the vehicle mass is a constant
[Plot “Constant speed, bumpy road”: acceleration (m/s²) versus time (s), curves a_expected and a_measured]
Figure 5.5. Test drive using cruise control on a bumpy road. The vehicle is traveling with a constant speed of 30 km/h. It loses speed and the controller tries to compensate, in this test resulting in an oscillating behavior. The test has been done using a relatively nervous controller.
[Plot “Speedtronic: step up and down, trailer 2000 kg”: acceleration (m/s²) versus time (s), curves a_expected and a_measured]
Figure 5.6. Test drive with a heavy trailer (2000 kg). The vehicle and trailer are traveling using cruise control, with a desired speed of 60 km/h. After 16 seconds the desired speed is changed to 80 km/h. As can be seen, the calculated expected acceleration does not comply with the measured acceleration. The large errors in the calculations are caused by a wrong value of the parameter m, the mass of the vehicle.
[Plot “Driving up and down a hill, incline: 20% up, 15% down”: acceleration (m/s²) versus time (s), curves a_expected and a_measured]
Figure 5.7. Test drive up and down a steep hill using cruise control. First the slope of
the road is 0 % (horizontal road), then changed to 20 % (uphill), then to −15 % (downhill).
The calculated acceleration does not comply with the measured acceleration. The reason
is that the model does not include the case of a changed road slope.
parameter. Figure 5.7 shows a test drive with the same vehicle, this time without a trailer but with a changing slope. The vehicle is driven up and down a steep hill. First the slope of the road is 0 % (horizontal road), then changed to 20 % (uphill), then to −15 % (downhill). As expected, the calculated acceleration does not comply with the measured acceleration. The model does not include the case of a changing slope.
It should be mentioned that all models have errors. No matter how complex the model is, it will in practice never exactly describe the real physical system. In many implementations there are good reasons to keep the model simple. Especially for real-time systems it is good practice to keep models as simple as possible, to avoid time-consuming computations and dubious parameters.
In this case the model for the vehicle acceleration in (5.20) does not comply with the real system in all situations. The following are some examples of what might happen. Two of the cases have already been mentioned before in this text.

• The total mass of the vehicle is not m as in the model, due to extra passengers, baggage or a trailer. This affects the calculation of m̃ in (5.13) as well as F_roll in (5.7), and has a large effect on the calculation of the expected acceleration in (5.20). The value of the parameter m is set to the mass of the vehicle including a full tank, plus 80 kg for the weight of the driver. If the attached trailer is equipped with brakes, the large change in the mass will only be experienced when accelerating. When braking, the trailer brake will help and compensate partially for the extra weight.
• The parameter c_rr in (5.7) is set to a constant value in the model. In practice, however, the rolling resistance changes depending on the driving conditions, for example in case of a tire-pressure drop or when driving on sand. According to [8] the real value of the parameter can vary up to 3.5 times the standard value.
• In the calculation of the air resistance F_air in (5.8), the speed of the wind v_wind cannot be taken into account. In real life the wind speed can have a large impact on the actual resistance. The drag coefficient c_d might also change, as well as the reference area A_w (for example due to extra baggage). According to [8] the real value of F_air can be up to 9 times the calculated value.
• All the parameters in (5.16) and (5.18) have been estimated with system identification. From several test drives the mean values have been selected. These parameters differ from those found in a real vehicle. As an example, the intervals for the engine parameters in (5.16) were found to be [19]

    15.1 < ω_0 < 19.2
    0.54 < D < 1.05
    0.07 < T_t1 < 0.12
• The slope has been totally neglected in the derived model. When driving the vehicle on a slope, a force F_slope arises that has a direct effect on the vehicle’s acceleration. Assuming that the slope of the road is α, the longitudinal force acting on the vehicle is given by

    F_slope = mg sin α    (5.21)
• Engine and brake might not behave as expected due to inaccuracy, errors or changes in the friction coefficient of the brakes. This has been observed to happen relatively often. For example, the friction coefficient of the brake may vary between +10% and −15% during a normal vehicle stop maneuver.
There is actually a longitudinal acceleration sensor mounted in the vehicles that could be used to estimate the slope α. The sensor measures the sum of the vehicle’s acceleration and the gravitational component parallel to the ground, as

    a_sensor = a + g sin α    (5.22)

where a_sensor is the sensor value and a is the longitudinal acceleration of the vehicle. One problem with the sensor is that it might be difficult to make good estimates of the road slope while cornering. In [16] it is proposed how to do road slope and vehicle mass estimation using Kalman filtering.
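Solving (5.22) for the slope gives α = arcsin((a_sensor − a)/g). A sketch of such an estimate (Python; the numbers in the example are invented for illustration):

```python
import math

def road_slope(a_sensor, a_long, g=9.81):
    """Road slope angle (rad) from (5.22): a_sensor = a + g*sin(alpha)."""
    return math.asin((a_sensor - a_long) / g)

# Example: vehicle accelerating at 0.5 m/s^2 while gravity adds 0.978 m/s^2
# to the sensor reading, roughly a 10 % grade (numbers invented)
alpha = road_slope(a_sensor=1.478, a_long=0.5)
grade_percent = math.tan(alpha) * 100    # Slope expressed as a percentage
```

In practice the longitudinal acceleration a would itself have to be estimated, which is why the cited approach wraps this relation in a Kalman filter rather than inverting (5.22) sample by sample.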
5.4 Kalman Filter Model
The model for the vehicle longitudinal acceleration a in (5.20) has to be changed to comply with “the real world”. Therefore, let the calculated expected acceleration a_exp be defined as

    a_exp = (η i_d i_g / (r_w m̃)) T_engine − (1/(r_w m̃)) T_brake − (ρ c_d A_w / (2m̃)) v_m^2 − c_rr mg / m̃    (5.23)
This is the model that was derived in Section 5.2. The real vehicle acceleration a_real is then

    a_real = a_exp + a_z    (5.24)

where a_z is called the “disturbance acceleration”. This state variable represents the part of the vehicle’s acceleration caused by disturbances not described by the model for the longitudinal dynamics. It should cover all the model errors found in the previous section, and it is connected in parallel with the PID controller as described in Section 5.1 and shown in Figure 5.1. Given a good description of how the state a_z changes, the Kalman filter can be used to estimate this state.
The process noise is modeled in continuous time under the assumption that the state a_z undergoes slight changes each sampling period. According to [3] such changes can be modeled by a continuous-time Gaussian noise w as

    ȧ_z(t) = w(t)    (5.25)

where

    E[w(t)] = 0    (5.26)
    E[w(t)w(τ)] = q δ(t − τ)    (5.27)

The scalar value q is here the process noise intensity (assumed to be time-invariant) and δ(·) is the Dirac (impulse) delta function [3].
Using the state-space model presented in (3.1) and (3.2), and choosing the state vector x = a_z, the continuous-time state-space model for the Kalman filter becomes

    ẋ = A x + G w,  with A = 0, G = 1    (5.28)
    y = C x + e,    with C = 1           (5.29)

Here y = x + e = a_z + e means that the Kalman filter needs a measurement of the signal a_z. This can be provided by feeding it with a newly constructed signal a_Δ. With the definition a_m = a_real + e together with (5.24), a_Δ can be defined as

    a_Δ = a_m − a_exp = a_z + e    (5.30)
In this way, the Kalman filter will estimate the state a_z, given the information that a_Δ is a noisy measurement of the true value. How noisy the measurement is can be defined by modeling e. Here e is modeled as Gaussian noise in the same way as w:

    E[e(t)] = 0    (5.31)
    E[e(t)e(τ)] = r δ(t − τ)    (5.32)

The scalar value r is here the measurement noise intensity. It is assumed to be time-invariant.

The system is observable, as can be verified using the rank test in Section 3.4. This means that the state is uniquely determinable from the inputs and outputs in the model.
The state-space model is discretized into a digital state-space model with sample time T, using the theory in Section 3.2. In this case the discretization is straightforward, resulting in

    x_{k+1} = A_d x_k + G_d w_k,  with A_d = 1, G_d = 1    (5.33)
    y_k = C_d x_k + e_k,          with C_d = 1             (5.34)
5.5 Choosing the Filter Parameters
Different values of the noise intensities q and r will now be chosen and the performance evaluated using the open-loop simulation described in Section 4.3.1. It is possible to use other methods from Chapter 4, for example the script for autotuning developed in Section 4.4. However, since the estimated signal a_z in this case will be directly connected to the engine and brakes (see Figure 5.1), the signal directly affects the comfort of the driver and passengers. Therefore this section will show the function of the developed Kalman filter by choosing the parameters manually.

As explained in Section 4.2, the quotient between q and r is the design parameter; the absolute values do not matter. Therefore r is set to 1 and the Kalman filter is simulated using different values for q. The figures that follow have been generated using measured data from test drives. The Kalman filter is fed with the signal y = a_Δ = a_m − a_exp, and those values are also shown in the figures as dots.
Figure 5.8 shows two different Kalman filters, one with q = 1 and the other with q = 0.01. As can be seen, the one using q = 1 is faster and follows the measured values more accurately. This was expected, as choosing a high q always means a faster filter. The faster filter is more sensitive to measurement noise, while the slower filter delivers a smoother estimate of a_z. The measurement is taken from a test drive on a bumpy road.
Figure 5.9 shows the same parameter choices, but this time with a measurement of a vehicle driven up and down a steep hill. The output from the Kalman filter with q = 1 follows the measurement almost exactly. It can be seen that, comparing
[Plot: disturbance (m/s²) versus time (s); dots a_Δ = a_m − a_exp, lines Kalman filter (q = 1) and Kalman filter (q = 0.01)]
Figure 5.8. Simulation of two fast filters with different parameters (q = 1 and q = 0.01) using a measurement of a test drive on a bumpy road. The filter with q = 1 is fastest and follows the measured values most accurately. This was expected, as choosing a high q always means a faster filter. The faster filter is also more sensitive to measurement noise. The slower filter delivers a smoother estimate of a_z.
with the faster filter, a smaller q makes the time delay for large signal changes unavoidably larger. The Kalman filter with q = 0.01 still reacts relatively fast, and when looking only at these two figures (5.8 and 5.9), q = 0.01 seems a logical choice.
However, every unnecessary oscillation or jerk in the estimate a_z will have a direct effect on the control values for the engine and brakes. It has been observed during test drives that comfort is negatively affected by a filter that is too fast. With this in mind, two slower filters are evaluated, shown in the next two figures. This time q = 0.001 and q = 0.0003 are simulated. Figure 5.10 shows the drive on the bumpy road. The Kalman filter now ignores the oscillations even more, making it better for control purposes. The price one has to pay, as shown in Figure 5.11, is an even slower filter.
The time delay for the ﬁlter with q = 0.0003 is so large that the driver will
probably feel it when driving up and down the hill shown in Figure 5.11. The
hill is rather steep, and the change from the horizontal road to a slope of 20%
comes very fast. The task of choosing the optimal parameters is in this case a
compromise between ignoring small changes in the signal, or react fast to large
changes. A linear ﬁlter of this type cannot do both.
Figure 5.12 shows the slowest ﬁlter (with q = 0.0003) during a drive on a very
bumpy road. The oscillations of the signal y = a
∆
= a
m
−a
exp
are even larger
than in the previous ﬁgures. Even the slowest of the tested Kalman ﬁlters is in
[Figure: disturbance [m/s²] vs. time [s]; curves: a_∆ = a_m − a_exp, Kalman filter (q = 1), Kalman filter (q = 0.01)]
Figure 5.9. Simulation of the two fast filters using a measurement of a test drive of a vehicle driving up and down a steep hill. The output from the Kalman filter with q = 1 follows the measurement almost exactly. It can be seen that, comparing with the faster filter, a smaller q makes the time delay for large signal changes unavoidably larger. The Kalman filter with q = 0.01 still reacts relatively fast.
this case “not slow enough”. But choosing a slower ﬁlter will also make the time
delay for changes even larger. The ideal ﬁlter would ignore these oscillations, but
still react fast to large changes.
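The compromise described above can be illustrated with a scalar Kalman filter for a random-walk disturbance model. The following Python sketch is purely illustrative (the thesis' filters run in Matlab/Simulink, and all numbers here are arbitrary): a large q closes in on a step change quickly, a small q lags behind.

```python
# Scalar Kalman filter for the random-walk model
#   a_z[k+1] = a_z[k] + w,  y[k] = a_z[k] + e,
# illustrating the trade-off: large q -> fast but noise-sensitive,
# small q -> smooth but slow.  (Illustrative values, not thesis data.)
def kalman_1d(measurements, q, r, x0=0.0, p0=1.0):
    x, p = x0, p0
    estimates = []
    for y in measurements:
        p = p + q                # time update (random walk)
        k = p / (p + r)          # Kalman gain
        x = x + k * (y - x)      # measurement update
        p = (1 - k) * p
        estimates.append(x)
    return estimates

# a step change in the disturbance, e.g. the road suddenly sloping
ys = [0.0] * 20 + [1.0] * 20
fast = kalman_1d(ys, q=1.0, r=1.0)
slow = kalman_1d(ys, q=0.01, r=1.0)
# the fast filter closes the gap sooner; the slow one lags behind
```

With measurement noise added to `ys`, the same comparison would show the fast filter passing the noise through, which is the other side of the trade-off.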
[Figure: disturbance [m/s²] vs. time [s]; curves: a_∆ = a_m − a_exp, Kalman filter (q = 0.001), Kalman filter (q = 0.0003)]
Figure 5.10. Simulation of two slow filters (q = 0.001 and q = 0.0003) using measurements from a test drive on a bumpy road. The slow Kalman filters ignore the oscillations even more, making them better for controlling purposes.
[Figure: disturbance [m/s²] vs. time [s]; curves: a_∆ = a_m − a_exp, Kalman filter (q = 0.001), Kalman filter (q = 0.0003)]
Figure 5.11. Simulation of the two slow filters (q = 0.001 and q = 0.0003) using measurements from a test drive with a vehicle driving up and down a steep hill. When trying to avoid small changes and oscillations, the price one has to pay is a slower filter.
[Figure: disturbance [m/s²] vs. time [s]; curves: a_∆ = a_m − a_exp, Kalman filter (q = 0.0003)]
Figure 5.12. Simulation of the slowest Kalman filter using a measurement of a very bumpy road. Even this filter is in this driving situation "not slow enough". But choosing such a slow filter will also make the time delay for changes even larger.
Chapter 6
Alternative Kalman Filter Models
As shown in Chapter 5, the behavior of the Kalman filter is easy to understand when using simple models. Another advantage of simple models is low computational effort. This chapter derives some more complex models, with the aim of explaining how the Kalman filter implemented in the test vehicles works. First a model is presented which can be used when it is not possible to measure the acceleration. Then other models of the parameter a_z are derived. At the end of the chapter it is shown that the implemented Kalman filter behaves like a low-pass filter.
6.1 Vehicle Speed as Feedback
In some situations it is not practical to use the signal a_∆ = a_m − a_exp as feedback to the filter, for example when the signal a_m is not available. This is the case at DaimlerChrysler when using hill descent control (explained in Section 2.6), where the measurement of the acceleration by conventional methods is not considered good enough. In this case the measurements of the vehicle speed, v_m, can be used instead. The Kalman filter is then designed using the state vector
x = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} v_{real} \\ a_z \end{bmatrix}   (6.1)

so that

\dot{v}_{real} = a_{real} = a_z + a_{exp}   (6.2)
\dot{a}_z = w   (6.3)
Input to the filter is the signal a_exp, as defined in (5.23), and the (assumed noisy) measurement v_m of the vehicle speed v_real, with

v_m = v_{real} + e   (6.4)
where e is the measurement noise and w is the process noise, modeled as described
in Section 5.4. Notice that e is the noise in the measurement of the speed, and
not the acceleration as in Section 5.4. The state-space model becomes
\dot{x} = \underbrace{\begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}}_{A} x + \underbrace{\begin{bmatrix} 1 \\ 0 \end{bmatrix}}_{B} a_{exp} + \underbrace{\begin{bmatrix} 0 \\ 1 \end{bmatrix}}_{G} w   (6.5)

y = \underbrace{\begin{bmatrix} 1 & 0 \end{bmatrix}}_{C} x + e   (6.6)
The noises w and e are modeled as Gaussian noises with intensities q and r. Simulating the filter with the same measurements as in Chapter 5 shows that the basic behavior remains the same. Figure 6.1 shows the output from the filter when fed with the measurement from the test drive on the bumpy road. The process noise intensity q has been chosen as 5 and the measurement noise intensity r is 1. Figure 6.2 shows the same filter simulated with the measurement from the drive up and down the steep hill. The dotted line is a_∆ = a_m − a_exp as defined in (5.30), which can still be used as a reference. The dashed line in the figures is the output from the filter in Section 5.5, which used a_∆ as feedback. As can be seen, the overall behavior is the same.
The Kalman filter now has to estimate both v_real and a_z, resulting in higher computational costs. Choosing a smaller q makes the filter slower and a larger q makes it faster, just as before. Of course the value of q is not the same as in Section 5.5, because the measurements are no longer the same.
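To make the behavior of the model in (6.5)-(6.6) concrete, here is a Python sketch of the two-state filter discretized with an assumed sample time (the thesis implementation is in Matlab/Simulink; the sample time and the noise-free test scenario below are this sketch's assumptions, while q = 5 and r = 1 are the values quoted in the text):

```python
# Illustrative sketch of the speed-feedback filter (6.5)-(6.6),
# discretized with an assumed sample time T.  The scenario is synthetic:
# a constant unknown disturbance a_z estimated from speed measurements.
T = 0.02          # sample time [s], an assumption of this sketch
q, r = 5.0, 1.0   # noise intensities used in the text for this model

def kf_step(x, P, a_exp, v_meas):
    """One predict/update cycle for the state x = [v, a_z]."""
    # time update: v += T*(a_z + a_exp), a_z modeled as a random walk
    xp = [x[0] + T * (x[1] + a_exp), x[1]]
    Pp = [[P[0][0] + T * (P[0][1] + P[1][0]) + T * T * P[1][1],
           P[0][1] + T * P[1][1]],
          [P[1][0] + T * P[1][1],
           P[1][1] + q * T]]
    # measurement update with y = v + e
    s = Pp[0][0] + r
    K = [Pp[0][0] / s, Pp[1][0] / s]
    innov = v_meas - xp[0]
    x = [xp[0] + K[0] * innov, xp[1] + K[1] * innov]
    P = [[(1 - K[0]) * Pp[0][0], (1 - K[0]) * Pp[0][1]],
         [Pp[1][0] - K[1] * Pp[0][0], Pp[1][1] - K[1] * Pp[0][1]]]
    return x, P

# constant unknown disturbance a_z = 1 m/s^2, a_exp = 0, noise-free v
x, P = [0.0, 0.0], [[1.0, 0.0], [0.0, 1.0]]
v_true = 0.0
for _ in range(2000):
    v_true += T * 1.0
    x, P = kf_step(x, P, 0.0, v_true)
# x[1] now estimates the constant disturbance a_z
```

The point of the sketch is the one made in the text: the disturbance estimate is obtained from the speed measurement alone, at the cost of estimating two states.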
6.2 Modeling the Disturbance a_z
The model of a_z so far says that its first derivative is equal to Gaussian noise. This means that a_z undergoes slight changes each sampling period. These changes are uncorrelated, which means that the derivative changes each period to a value independent of the last value. This model allows the derivative of a_z to jump quickly from a positive value to a negative one, which does not comply with the "real" parameter the filter is trying to estimate. a_z represents large changes in the environment of the vehicle, such as changes in the mass of the vehicle or the slope of the road. These parameters do not change so quickly, and therefore another model will now be examined.
6.2.1 First-Order Lag Function
For a first-order lag function with input signal u, the output signal y satisfies the ordinary differential equation

\tau \dot{y} + y = u   (6.7)

where τ is the time constant of the step response. The transfer function is

G(s) = \frac{1}{1 + \tau s}   (6.8)
[Figure: disturbance [m/s²] vs. time [s]; curves: a_∆ = a_m − a_exp, Kalman filter with v as feedback, Kalman filter from Chapter 5 (q = 0.0003)]
Figure 6.1. Simulation of the Kalman filter using the measurement from the drive on a bumpy road. Input to the filter is the measurement of the vehicle speed, and q has been chosen as 5. The solid line is the Kalman filter with the measurement of v as feedback. The dashed line is the filter developed in Section 5.5. As can be seen, the overall behavior of the filter remains the same.
To evaluate the frequency response of the function, set s = jω and plot the magnitude of

G(j\omega) = \frac{1}{\sqrt{1 + \omega^2 \tau^2}}   (6.9)

Here ω is the frequency of the input in radians per second. Figure 6.3 shows the Bode plot of the transfer function with three different values of τ. Define the break frequency ω_0 as

\omega_0 = \frac{1}{\tau}   (6.10)

Then the magnitude of the function is approximately

G(j\omega) \approx \begin{cases} 1 & \text{when } \omega < \omega_0 \\ \frac{\omega_0}{j\omega} & \text{when } \omega > \omega_0 \end{cases}   (6.11)

The first-order lag function dampens all signals with frequencies higher than the break frequency ω_0 and can be used as a low-pass filter. As can be seen in the plot, a larger τ means a lower break frequency, which agrees with the definition in (6.10). A larger τ also means a slower response. [9]
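This low-pass behavior survives discretization. A Python sketch using a simple Euler discretization of (6.7) (the sample time and test frequencies are arbitrary choices for illustration):

```python
import math

# First-order lag (6.7) discretized with Euler's method:
#   tau*dy/dt + y = u   ->   y += (T/tau) * (u - y)
# It attenuates inputs above the break frequency w0 = 1/tau.
def first_order_lag(u_seq, tau, T=0.01, y0=0.0):
    y, out = y0, []
    alpha = T / tau
    for u in u_seq:
        y += alpha * (u - y)
        out.append(y)
    return out

T = 0.01
t = [k * T for k in range(5000)]
slow_wave = [math.sin(0.1 * ti) for ti in t]    # 0.1 rad/s << w0 = 1
fast_wave = [math.sin(100.0 * ti) for ti in t]  # 100 rad/s >> w0 = 1
passed = first_order_lag(slow_wave, tau=1.0, T=T)
damped = first_order_lag(fast_wave, tau=1.0, T=T)
# the slow component passes almost unchanged; the fast one is damped
```

With τ = 1 the break frequency is 1 rad/s, so the 0.1 rad/s component keeps essentially its full amplitude while the 100 rad/s component is attenuated by roughly a factor 100, in line with (6.11).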
[Figure: disturbance [m/s²] vs. time [s]; curves: a_∆ = a_m − a_exp, Kalman filter with v as feedback, Kalman filter from Chapter 5 (q = 0.0003)]
Figure 6.2. Simulation of the Kalman filter using the measurement from the vehicle driving up and down a steep hill. q is set to 5. The solid line is the Kalman filter with the measurement of v as feedback. The dashed line is the filter developed in Section 5.5. As can be seen, the overall behavior of the filter remains the same.
6.2.2 First-Order Gauss-Markov Process
The first-order lag function can be used to model physical systems. The input u in (6.7) is then set to w, where w represents Gaussian noise. With this choice of u the function is called a first-order Gauss-Markov process [22]. This function has turned out to be important in applied work, since it fits a large number of physical systems with reasonable accuracy while still having a very simple mathematical description [5].
This function will now be used to model a_z. Let

\dot{a}_z + \frac{1}{\tau} a_z = w   (6.12)

The problem is now to choose a reasonable τ and the intensity of the noise w. According to [3], the autocorrelation of the Gauss-Markov process in (6.12) can be written as

E[a_z(t_1) a_z(t_2)] = e^{-\frac{1}{\tau}|t_1 - t_2|} \, E[a_z(t_1)^2]   (6.13)

The autocorrelation is a measure of how well the signal matches a time-shifted version of itself, where t_1 and t_2 define the time shift. This means that the value of a_z at a sample time t_k depends on the value at the previous sample time t_{k-1}, as

a_z(t_k) = e^{-\frac{1}{\tau}T} a_z(t_{k-1}) + w(t_{k-1})   (6.14)
[Figure: Bode diagram, magnitude [dB] and phase [deg] vs. frequency [rad/s], for τ = 1, τ = 0.5 and τ = 0.25]
Figure 6.3. Bode plot of the first-order lag function, with three different values of τ. A larger τ means a lower break frequency.
where T is the sampling interval [5]. If τ is chosen very large (τ approaches ∞), a_z is integrated Gaussian noise just as before. If τ is large, the correlation between consecutive samples is high; a small τ makes the correlation decay quickly.
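A quick numerical check of (6.14) in Python (synthetic noise; T, τ and the noise variance are arbitrary illustration values): the lag-one autocorrelation of a sampled first-order Gauss-Markov process should come out close to e^{-T/τ}.

```python
import math
import random

# Sample the discrete Gauss-Markov recursion (6.14) and verify that the
# lag-one autocorrelation matches phi = exp(-T/tau).
random.seed(1)
T, tau = 0.1, 2.0
phi = math.exp(-T / tau)      # one-step correlation coefficient
n = 100000
a = [0.0] * n
for k in range(1, n):
    a[k] = phi * a[k - 1] + random.gauss(0.0, 1.0)

mean = sum(a) / n
var = sum((x - mean) ** 2 for x in a) / n
acf1 = sum((a[k] - mean) * (a[k + 1] - mean) for k in range(n - 1)) / (n * var)
# acf1 comes out close to phi = exp(-T/tau)
```

Repeating the experiment with a smaller τ shows the correlation coefficient dropping toward zero, which is the point made in the text.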
6.2.3 Identifying the Time Constant
The technique of system identification is used to build mathematical models of a dynamic system based on measurement data. This is done by adjusting parameters within a given model until its output coincides as well as possible with the measured output [17]. Matlab contains a toolbox called "System Identification Toolbox" [18]. It contains the common techniques for adjusting parameters in all kinds of linear models, including state-space models, as well as some nonlinear models. This section gives an introduction on how to use this toolbox to identify the unknown parameters in a model.
A Matlab script is found in Appendix C, which creates a model of a_z described by (6.12) and defines τ and the intensity of w as parameters to be identified. As identification data, measurements of the slope of the road during a test drive up and down the steep hill are used. This data set represents the "maximum dynamic" of a_z that the filter will have to estimate; when driving a vehicle at 30 km/h over this steep hill, the highest demands on the filter are said to be reached. As identification data, the part of a_z caused by the slope of the road is chosen. This is done by setting

a_z^{real} \equiv g \sin(\alpha)   (6.15)

where g is the gravitational acceleration and α is the slope of the road.
When the noise intensity is set to a constant value, the script can be used to choose the optimal value of τ. An intensity value of 1 results in the optimal choice τ ≈ 7. In Figure 6.4 the output from the model is compared with the measurement. As can be seen, the identified model does not fit the identification data exactly.
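The toolbox script itself is not reproduced here. As an illustration of the underlying idea only, the following Python sketch fits τ for the discrete model (6.14) by least squares on synthetic data standing in for the recorded road slope (the AR(1) reformulation, all numbers, and the synthetic data are this sketch's assumptions, not the Appendix C script):

```python
import math
import random

# Least-squares identification of tau for the discrete model (6.14),
#   a_z[k] = phi * a_z[k-1] + w,   phi = exp(-T/tau).
# Estimate phi from consecutive samples, then recover tau = -T/ln(phi).
random.seed(0)
T, tau_true = 0.5, 7.0
phi_true = math.exp(-T / tau_true)
n = 50000
a = [0.0] * n
for k in range(1, n):
    a[k] = phi_true * a[k - 1] + random.gauss(0.0, 0.1)

# least-squares AR(1) coefficient, then back to the time constant
num = sum(a[k] * a[k + 1] for k in range(n - 1))
den = sum(a[k] * a[k] for k in range(n - 1))
phi_hat = num / den
tau_hat = -T / math.log(phi_hat)
# tau_hat recovers tau_true = 7 to within a few percent
```

As in the thesis' experiment, the fit degrades gracefully: many (τ, noise intensity) pairs describe the data almost equally well, which is why the toolbox can trade one parameter against the other.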
[Figure: measured output and 1-step-ahead predicted model output y_1; model fit 83.67%]
Figure 6.4. Identification of the time constant τ. Setting the noise intensity to a constant value and identifying the parameter gives τ = 7, with a model fit of 83.67%. Letting the script identify the noise intensity with τ = 7 gives a model fit of over 99%.
Setting the parameter τ to 7 and letting Matlab identify the noise intensity results in a model fit of over 99%. The noise intensity is then identified as 70. This means that by choosing τ = 7, it is possible to find a near-perfect fit by adjusting the noise-intensity parameter. In fact, choosing any value of τ larger than 0.1 and then letting Matlab find the optimal noise intensity results in a model fit of at least 90%, and τ > 0.3 gives a fit of over 95%. For τ = 0.3 the optimal noise intensity is identified as 91. A smaller choice of τ demands a larger noise intensity to achieve a good model fit.
Figure 6.5 shows the calculated signal a_z^{real} and the predicted output from the model with τ = 0.3 and the noise intensity calculated by Matlab.
The quality of the model can be tested by looking at what the model could not reproduce in the data, the "residuals"

\epsilon(t) = y(t) - \hat{y}(t)   (6.16)
[Figure: measured output and 1-step-ahead predicted model output y_1; model fit 96.26%]
Figure 6.5. Identification of the time constant τ. Here τ = 0.3 is chosen and the noise intensity is identified as 91, giving a model fit of 96.26%.
where y is the validation data and ŷ is the output from the model. The residuals should not be correlated with the past inputs u. If they are, there is a part of y originating from the past input that has not been picked up properly by the model. The command "resid" in Matlab computes the correlation function of the residuals from the model, as well as 99% confidence intervals assuming that the residuals are Gaussian. The rule is that if the correlation function goes significantly outside these confidence intervals, the corresponding model should not be accepted as a good description of the system. In that case the model can be improved, for example by adjusting the number of parameters in the model [17].
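The whiteness side of this test can be sketched in a few lines of Python (synthetic data; the band ±2.58/√N is the usual 99% interval for the autocorrelation of white noise, an assumption of this sketch rather than a description of what "resid" computes internally):

```python
import math
import random

# Whiteness check: estimate the low-lag autocorrelation of the residuals
# and compare against the 99% confidence band +-2.58/sqrt(N) that holds
# for white noise.
def residual_acf(eps, max_lag=5):
    n = len(eps)
    m = sum(eps) / n
    c0 = sum((e - m) ** 2 for e in eps) / n
    return [sum((eps[k] - m) * (eps[k + lag] - m) for k in range(n - lag))
            / (n * c0) for lag in range(1, max_lag + 1)]

random.seed(2)
white = [random.gauss(0.0, 1.0) for _ in range(20000)]
acf = residual_acf(white)
bound = 2.58 / math.sqrt(len(white))
# for white residuals the low-lag autocorrelations stay inside the band
```

Residuals from a poorly fitted model would instead push the autocorrelation well outside the band, which is the rejection rule quoted above.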
6.2.4 Testing the Model of a_z
Modeling a_z using the equation for the first-order Gauss-Markov process (6.12) with state vector x = [x_1, x_2]^T = [v, a_z]^T gives the state-space model

\dot{x} = \underbrace{\begin{bmatrix} 0 & 1 \\ 0 & -\frac{1}{\tau} \end{bmatrix}}_{A} x + \underbrace{\begin{bmatrix} 1 \\ 0 \end{bmatrix}}_{B} a_{exp} + \underbrace{\begin{bmatrix} 0 \\ 1 \end{bmatrix}}_{G} w   (6.17)

y = \underbrace{\begin{bmatrix} 1 & 0 \end{bmatrix}}_{C} x + e   (6.18)

The Kalman filter using this model is simulated in the same way as in Section 5.5. Figure 6.6 shows the filter during the drive on the very bumpy road. Here τ is set to 7, r to 1 and q to 70. Figure 6.7 shows the estimate of a_z during a simulated drive up and down the steep hill. As can be seen, there are no relevant differences in comparison to the simple model used in Section 5.4. This means that both models can be used to estimate a_z in the simulated situations, giving the same performance.
[Figure: disturbance [m/s²] vs. time [s]; curves: a_∆ = a_m − a_exp, Kalman filter with a_z modeled as Gauss-Markov process, Kalman filter from Chapter 5 (q = 0.001)]
Figure 6.6. Kalman filter with a_z modeled as a Gauss-Markov process. The figure shows a simulation using recorded data from the vehicle driving on a very bumpy road. τ is set to 7, r to 1 and q to 70. When comparing with the Kalman filter developed in Section 5.4, no relevant changes can be found.
6.2.5 Higher-Order Derivative of a_z
In this section the model proposed by [19] will be examined. Recall Section 5.4, where it was stated that the changes in the parameter a_z can be modeled by setting the first derivative of a_z to Gaussian noise, according to

\dot{a}_z(t) = w(t)   (6.19)

where

E[w(t)] = 0   (6.20)
E[w(t)w(\tau)] = q\delta(t - \tau)   (6.21)

This was implemented and tested in Chapter 5, and it was shown that an arbitrarily fast (or slow) estimate can be obtained by adjusting the noise parameter q.
Another way of modeling the changes is to set a higher derivative of the parameter equal to Gaussian noise, for example

\ddot{a}_z(t) = w(t)   (6.22)
[Figure: disturbance [m/s²] vs. time [s]; curves: a_∆ = a_m − a_exp, Kalman filter with a_z modeled as Gauss-Markov process, Kalman filter from Chapter 5 (q = 0.001)]
Figure 6.7. Kalman filter with a_z modeled as a Gauss-Markov process. The figure is generated using recorded data from the vehicle driving up and down a steep hill. τ is set to 7, r to 1 and q to 70. When comparing with the Kalman filter developed in Section 5.4, no relevant changes can be found.
This is used in [19] and is also suggested as an alternative by [3] for estimates in kinematic models. For example, when estimating the position ξ and speed \dot{\xi} of an object, one might use the state vector x = [x_1, x_2]^T = [\xi, \dot{\xi}]^T. The speed of the object undergoes slight changes, which are often modeled as Gaussian noise with \ddot{\xi} = w.
There is still the possibility to choose τ according to

\ddot{a}_z(t) + \frac{1}{\tau}\dot{a}_z(t) = w(t)   (6.23)

For the problem at hand there is no practical need to estimate the extra state \dot{a}_z, but in order to take advantage of (6.22) or (6.23), the state vector is chosen as

x = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} v_{real} \\ a_z \\ \dot{a}_z \end{bmatrix}   (6.24)
The state-space model becomes

\dot{x} = \underbrace{\begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & -\frac{1}{\tau} \end{bmatrix}}_{A} x + \underbrace{\begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}}_{B} a_{exp} + \underbrace{\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}}_{G} w   (6.25)

y = \underbrace{\begin{bmatrix} 1 & 0 & 0 \end{bmatrix}}_{C} x + e   (6.26)
e is Gaussian noise as before, with intensity r. Here w consists of three components, w = [w_1, w_2, w_3]^T, meaning that Gaussian noise is added to all three equations in the state-space model. The covariance matrix for w is

Q = \begin{bmatrix} q_1 & 0 & 0 \\ 0 & q_2 & 0 \\ 0 & 0 & q_3 \end{bmatrix}   (6.27)

meaning that the noise components of w are independent of each other, with noise intensities q_1, q_2 and q_3. A possible approach when adding process noise to all estimated parameters in the state-space model is to set all intensities to the same value, so that

Q = q \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}   (6.28)

This makes the choice of parameters easier, but w_1, w_2 and w_3 still remain independent of each other. [13]
The design parameters are the noise intensities of the Gaussian noises w_1, w_2, w_3 and e, and the value of τ. As explained in Section 3.5.6, the noise intensities depend on each other: keeping q_1, q_2 and q_3 constant while changing r affects all three estimates in the state vector, whereas keeping r constant allows the three estimates to be adjusted individually. Therefore the parameters to choose are the intensities of w_1, w_2 and w_3, and the value of τ.
As can be seen in (6.24), (6.25) and (6.26), the equations in the model are

\dot{v}_{real} = a_{exp} + a_z + w_1   (6.29)
\dot{a}_z = \dot{a}_z + w_2   (6.30)
\ddot{a}_z = -\frac{1}{\tau}\dot{a}_z + w_3   (6.31)
y = v_m = v_{real} + e   (6.32)

This might not seem logical, as (6.29) does not comply with the definition of a_z given in (5.24). Also, according to (6.30) the process noise w_2 could be set to 0.
The state-space model is transformed into a discrete state-space model, using the theory presented in Section 3.2. With the sample time T the result is

x_{n+1} = \underbrace{\begin{bmatrix} 1 & T & \frac{1}{2}T^2 \\ 0 & 1 & T \\ 0 & 0 & 1 \end{bmatrix}}_{A_d} x_n + \underbrace{\begin{bmatrix} T \\ 0 \\ 0 \end{bmatrix}}_{B_d} u_n + \underbrace{\begin{bmatrix} T & \frac{1}{2}T^2 & \frac{1}{6}T^3 \\ 0 & T & \frac{1}{2}T^2 \\ 0 & 0 & T \end{bmatrix}}_{G_d} w   (6.33)

y = \underbrace{\begin{bmatrix} 1 & 0 & 0 \end{bmatrix}}_{C_d} x_n + e_n   (6.34)
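A sketch of this discretization step in Python: the matrices in (6.33) correspond to the matrix exponential series e^{AT} with the −1/τ entry neglected, in which case the continuous A is nilpotent and the series truncates exactly after the T²/2 term (the function names and test values below are illustrative assumptions of this sketch):

```python
# Discretization sketch for (6.33), with the -1/tau term neglected so
# the integrator chain's matrix exponential truncates exactly.
def discretize_chain(T):
    A_d = [[1.0, T, 0.5 * T * T],
           [0.0, 1.0, T],
           [0.0, 0.0, 1.0]]
    B_d = [T, 0.0, 0.0]
    G_d = [[T, 0.5 * T * T, T ** 3 / 6.0],
           [0.0, T, 0.5 * T * T],
           [0.0, 0.0, T]]
    return A_d, B_d, G_d

def step(x, u, T):
    """Propagate x = [v, a_z, a_z_dot] one sample with w = 0."""
    A_d, B_d, _ = discretize_chain(T)
    return [sum(A_d[i][j] * x[j] for j in range(3)) + B_d[i] * u
            for i in range(3)]

# a constant disturbance a_z = 1 with a_exp = 0: only the speed moves
x1 = step([0.0, 1.0, 0.0], 0.0, 0.1)
```

One step with a constant disturbance leaves a_z and its derivative untouched and advances the speed by a_z·T, exactly as the first row of A_d prescribes.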
The observability matrix O has full rank, which according to the test in Section 3.4 means that the system is observable:

O = \begin{bmatrix} C_d \\ C_d A_d \\ C_d A_d^2 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 1 & T & \frac{1}{2}T^2 \\ 1 & 2T & 2T^2 \end{bmatrix}   (6.35)
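The rank test behind (6.35) can be sketched numerically. Since O is square here (three states, one output), full rank is equivalent to a nonzero determinant (Python sketch with an illustrative sample time):

```python
# Build O = [C; C*A; C*A^2] for the discrete model and check full rank
# via the determinant (valid because O is 3x3 here).
def row_times_matrix(c, A):
    return [sum(c[k] * A[k][j] for k in range(3)) for j in range(3)]

def observability(A_d, C_d):
    r1 = C_d
    r2 = row_times_matrix(r1, A_d)
    r3 = row_times_matrix(r2, A_d)
    return [r1, r2, r3]

def det3(M):
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
            - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
            + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

T = 0.02  # illustrative sample time
A_d = [[1.0, T, 0.5 * T * T], [0.0, 1.0, T], [0.0, 0.0, 1.0]]
O = observability(A_d, [1.0, 0.0, 0.0])
# det(O) works out to T^3, nonzero for any T > 0, so observable
```

The second and third rows reproduce the rows [1, T, T²/2] and [1, 2T, 2T²] of (6.35), and the determinant T³ is nonzero for every positive sample time.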
Running the filter offline in Simulink with measurements from test drives, the parameters are tuned by trial and error until the filter shows acceptable behavior. The following parameter set is found to work properly:

q_1 = 4.0   (6.36)
q_2 = 0.1   (6.37)
q_3 = 20.0   (6.38)
r = 0.2   (6.39)
τ = 0.500   (6.40)

By "properly" is meant that the filter with these parameters has the same behavior as the other filters evaluated in this thesis; compare Section 5.5, Section 6.1 and Section 6.2.4. Plots of the Kalman filter implemented in this section are not shown here, but the filter is used in Section 6.4, where a comparison of all filter models is made. Plots of it are also shown in Section 6.5, in a comparison with a low-pass filter.
6.3 Implementation and Testing in Arjeplog
The Kalman filter is implemented in the test vehicles at DaimlerChrysler. The implementation has to take the different driver assistance functions in the outer control loop into account, as well as certain driving situations, as described below.
• The Kalman filter can be halted, for example when the vehicle is stopped. The estimate of a_z in the state vector is then kept constant, while the other estimates are set to initial values. This is done because there is no information about the acceleration while the vehicle is standing still. This is necessary, for example, when Distronic Plus is stopping the vehicle or when Downhill Speed Regulation (DSR) is active and the tires are locked. (Refer to Section 2.8 and Section 2.6 for explanations of Distronic Plus and DSR.)
• The Kalman filter can be restarted, for example when the vehicle is started, when Distronic Plus tells the vehicle to start moving, or when the vehicle has moved against the desired direction of travel.
• The output from the Kalman filter is limited to maximum and minimum values for safety reasons. These limits are changed if DSR is active.
The Kalman filter was also evaluated during a testing expedition to Arjeplog in Sweden. Some of the plots in this thesis have been generated there. The Kalman filter was then thoroughly tested in difficult situations to detect adverse behavior.
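The supervisory logic above could be sketched as follows (Python; the class name, interface and limit values are invented for illustration and are not the DaimlerChrysler implementation):

```python
# Hypothetical sketch of the supervisory logic: the filter can be
# halted (freezing the a_z estimate), restarted, and its output is
# clamped to safety limits.  All names and values are invented.
class SupervisedObserver:
    def __init__(self, a_z_limits=(-3.0, 3.0)):
        self.halted = False
        self.a_z = 0.0
        self.lo, self.hi = a_z_limits

    def halt(self):
        # keep a_z constant: no acceleration information at standstill
        self.halted = True

    def restart(self, a_z_init=0.0):
        self.halted = False
        self.a_z = a_z_init

    def update(self, a_z_estimate):
        if not self.halted:
            self.a_z = a_z_estimate
        # limit the output for safety reasons
        return min(self.hi, max(self.lo, self.a_z))

obs = SupervisedObserver()
obs.update(1.5)               # normal operation
obs.halt()                    # vehicle stopped: estimate frozen at 1.5
frozen = obs.update(9.9)      # ignored while halted
obs.restart()
restarted = obs.update(9.9)   # accepted, but clamped to the limit
```

The separation between the estimation itself and this outer supervision mirrors the list above: halting, restarting and limiting are handled around the filter rather than inside it.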
6.4 Comparing the Kalman Filter Models
It is interesting to know whether the models derived in this chapter can improve the estimate in any way. Before continuing, a comparison between the different models will therefore be made. This is done by adjusting the parameters so that the filters exhibit the same behavior when simulating the vehicle driving up and down the steep hill, and then comparing their outputs in other simulated situations. In this way, all models have almost the same performance with respect to estimating large changes, and the other driving situations (a drive on a bumpy road, for example) show whether any of the models can ignore small changes better than the others. Figure 6.8 shows the simulation of driving up and down the steep hill. The parameters have been adjusted so that the different filters have almost the same behavior. Only the output from the Kalman filter using the model derived in Section 6.1 differs slightly from the others. Figure 6.9 and Figure 6.10 show simulations of test drives on two different bumpy roads. These figures make it obvious that the behavior of the different filters remains the same.
[Figure: disturbance [m/s²] vs. time [s]; curves: a_∆ = a_m − a_exp and the a_z estimates from the Kalman filter from Chapter 5, the Kalman filter with v as feedback, the Kalman filter with a_z as Gauss-Markov process, and the Kalman filter implemented in the test vehicles]
Figure 6.8. Simulation of driving up and down the steep hill. The parameters have been adjusted so that the different filters have almost the same behavior. Only the output from the Kalman filter using the model derived in Section 6.1, plotted with a dashed line, differs slightly from the others.
In this chapter different Kalman filter models have been tested and evaluated. After some work on tuning the parameters of the models, it turns out that modeling a_z in different ways does not make the estimate much better. In fact, during this Master's project no simulated situation has been found where any of the models examined in this chapter makes a better estimate than the simple model used in Section 5.5.
[Figure: disturbance [m/s²] vs. time [s]; curves: a_∆ = a_m − a_exp and the a_z estimates from the four Kalman filter models]
Figure 6.9. Simulation of a test drive on a bumpy road. Looking at the outputs from the different Kalman filters, it cannot be said that any of the filters is better than the others.
The reason for this is that the Kalman filter remains a linear filter, leaving the user to choose between a fast but jerky and a slow but smooth estimate. For the application at hand it is important that the estimate is smooth, as the output is directly connected to the engine and brakes. Another conclusion is that introducing more complexity and more parameters into the model makes the tuning work more time-consuming and harder to understand.
6.5 Comparing the Kalman Filter with a First-Order Lag Function
For the application at hand, the process noise w and the measurement noise e cannot be measured or estimated. Instead the filter is tuned using a subjective feeling about what is a good compromise when sitting in the car. A fast filter makes a faster controller, but that does not necessarily mean that the controller works better. It has been noticed that a faster controller makes the deviation from the desired speed smaller, but at the same time the drive becomes less comfortable. The demands on the filter correspond to those of an ordinary low-pass filter. Therefore this section tries to explain the similarities between these filters.
The figures that follow show the similarities between the Kalman filter from Section 6.2.5 and a first-order lag function as described in Section 6.2.1. The figures are from simulations in Simulink, using measured data. The output from the Kalman filter is drawn with a solid line. As a reference, a_∆ = a_m − a_exp is also plotted with a dotted line. The dashed line is the output from a first-order lag function with time constant 1.1, when the signal a_∆ = a_m − a_exp is given as input.

[Figure: disturbance [m/s²] vs. time [s]; curves: a_∆ = a_m − a_exp and the a_z estimates from the four Kalman filter models]
Figure 6.10. Simulation of a test drive on a very bumpy road. Looking at the outputs from the different Kalman filters, it cannot be said that any of the filters is better than the others.
The measurement in Figure 6.11 comes from a test drive where the vehicle drives up and down the steep hill, and Figure 6.12 comes from a drive on the bumpy road. Figure 6.13 shows a test drive with an attached trailer of 2000 kg, where the driver uses the cruise control lever to quickly step the set speed up and down several times, without giving the controller enough time to adjust the speed completely.
The Kalman filter is very practical when the task is to extract information from noisy measurements (also from several sensors in combination, so-called sensor fusion) or to estimate more than one parameter in a complex state-space model. When the intensities of the process noise w and measurement noise e are known, the Kalman filter equations are used, as explained in Section 3.5, to calculate the optimal gain L for the observer. Looking at the figures, it is obvious that the Kalman filter with these parameters behaves as a low-pass filter. The calculated gain L adjusts the frequency properties of the Kalman filter so that the gain is high when the signal-to-noise ratio is high, and low when the ratio is low. This behavior is also described in [12]. When the desired behavior of the filter is known, the same work can be done using traditional filter design methods.
The Kalman filter used in the comparison does not have the signal a_∆ as input. Instead the measurement of the vehicle speed v_m is used, as described in Section 6.2.5, so it is not a regular low-pass filter. The ability to work directly from the speed measurement is one example of the advantage of developing filters using the Kalman model.
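For the simplest possible case the kinship between the two filters is exact: the steady-state scalar Kalman filter for a random-walk model is an exponential smoother, i.e. a discrete first-order lag whose coefficient equals the stationary gain. A Python sketch (illustrative values):

```python
# The steady-state scalar Kalman filter for a random-walk model equals
# a discrete first-order lag with coefficient alpha = K.
def steady_state_gain(q, r):
    p = 1.0
    for _ in range(1000):          # iterate the Riccati recursion
        pp = p + q
        p = pp * r / (pp + r)
    return (p + q) / (p + q + r)

K = steady_state_gain(q=0.01, r=1.0)
u = [0.0] * 10 + [1.0] * 40       # step input
kf, lag = [], []
x1 = x2 = 0.0
for y in u:
    x1 = x1 + K * (y - x1)        # steady-state Kalman filter
    x2 = x2 + K * (y - x2)        # discrete first-order lag, alpha = K
    kf.append(x1)
    lag.append(x2)
# the two outputs are identical, sample for sample
```

This is only the scalar case; the multi-state filter of Section 6.2.5 is not literally a first-order lag, which is why the section compares the two empirically.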
[Figure: disturbance [m/s²] vs. time [s]; curves: a_∆ = a_m − a_exp, a_z Kalman filter, a_∆ low-pass filtered]
Figure 6.11. The measurement in this figure comes from a test drive where the vehicle drives up and down the steep hill. The dashed line is the output from a first-order lag function with time constant τ = 1.1. As can be seen, the behavior is similar to the output from the Kalman filter (solid line).
According to [12], the transfer function of the stationary Kalman filter is

G_{kf}(s) = (sI - A + LCA)^{-1} L s   (6.41)

where L contains the steady-state gain parameters calculated by (3.27). Calculating G_{kf} with the model used for the Kalman filter gives a matrix containing three transfer functions, from the input v_m to each of the filter outputs v, a_z and \dot{a}_z. Taking the transfer function from v_m to a_z, letting s = e^{i\omega} and plotting its absolute value gives the magnitude plot of the Bode diagram in Figure 6.14. The solid line corresponds to the parameter set used in Section 6.2.5 and above in this section. The dashed line shows a filter with a smaller r and the dash-dotted line a filter with a larger r. The filter acts as a high-pass filter. This is expected, as the transfer function used in the plot estimates a_z using measurements of the speed v; this characteristic is normal for all differentiating filters. The filter with a smaller r has a higher break frequency, and a larger r means a lower break frequency. This was also expected, because a smaller measurement noise means that the filter can also differentiate higher frequencies. [12]
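The same reasoning can be illustrated with a simpler continuous-time analogue (an assumption of this sketch, not the discrete G_kf of (6.41)): for the two-state model of Section 6.1 the stationary gains are the classical l1 = √2·ω0 and l2 = ω0² with ω0 = (q/r)^(1/4), and the transfer function from v_m to the a_z estimate works out to H(s) = l2·s / (s² + l1·s + l2), a differentiating characteristic at low frequencies.

```python
import math

# Magnitude of H(s) = l2*s / (s^2 + l1*s + l2), the v_m -> a_z transfer
# of the continuous-time two-state Kalman observer (illustrative model).
def magnitude(omega, q, r):
    w0 = (q / r) ** 0.25
    l1, l2 = math.sqrt(2.0) * w0, w0 * w0
    s = complex(0.0, omega)
    return abs(l2 * s / (s * s + l1 * s + l2))

# differentiator behavior below the break frequency: gain grows ~ omega
low = magnitude(0.01, q=1.0, r=1.0)
mid = magnitude(0.1, q=1.0, r=1.0)
# a smaller r raises the break frequency: at 1 rad/s the small-r filter
# still differentiates while the large-r filter has already rolled off
g_small_r = magnitude(1.0, q=1.0, r=0.01)
g_large_r = magnitude(1.0, q=1.0, r=100.0)
```

This reproduces the qualitative picture of Figure 6.14: a rising, derivative-like magnitude at low frequencies, with the break frequency moving up as r decreases.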
[Figure: disturbance [m/s²] vs. time [s]; curves: a_∆ = a_m − a_exp, a_z Kalman filter, a_∆ low-pass filtered]
Figure 6.12. The figure is from a simulation using recorded data from a test drive on a bumpy road. The dashed line is the output from a first-order lag function with time constant τ = 1.1. As can be seen, the behavior is similar to the output from the Kalman filter (solid line).
[Figure: disturbance [m/s²] vs. time [s]; curves: a_∆ = a_m − a_exp, a_z Kalman filter, a_∆ low-pass filtered]
Figure 6.13. The figure is from a simulation using recorded data from a test drive with a heavily loaded trailer. The driver uses the cruise control lever to step the set speed up and down several times, without giving the controller enough time to adjust the speed completely. The dashed line is the output from a first-order lag function with time constant τ = 1.1. As can be seen, the behavior is similar to the output from the Kalman filter (solid line).
[Figure: Bode diagram, magnitude [dB] and phase [deg] vs. frequency [rad/s]; curves: r smaller (faster filter), original Kalman filter, r larger (slower filter)]
Figure 6.14. Bode diagram of the transfer function from v_m to a_z, with the Kalman filter presented in Section 6.2.5. The solid line uses the parameters chosen in Section 6.2.5, the dashed line is from a filter with a smaller r and the dash-dotted line from a filter with a larger r. The filter has the function of a normal high-pass filter. This is expected, as the transfer function used in the plot estimates a_z using measurements of the speed v. The filter with a smaller r has a higher break frequency, and a larger r means a lower break frequency. A smaller measurement noise means that the filter can also differentiate higher frequencies.
Chapter 7
Change Detection
In this chapter an overview of different change detection algorithms is given, and one of them is chosen and implemented. An introduction on how to adjust the parameters for this algorithm is given, and simulations of different driving situations are made. It will be shown that it is possible to improve the estimate of a_z using this algorithm.
7.1 Idea of Change Detection
When constructing a filter, it is desirable that the output follows the desired target signal while ignoring the noise. The gain in a linear filter is a compromise between noise attenuation and tracking ability. Choosing a large gain makes the filter fast but sensitive to measurement noise, while choosing a small gain makes it slow to react when large changes in the signal occur.
When driving the vehicle on a straight road, a slow filter could be used. The performance could be satisfactory for some time, but then suddenly the slope changes and the model used by the filter is no longer correct. It would be practical to be able to detect such changes in the environment and react by making the filter faster. The presentation that follows is based on [13], where a thorough explanation of the subject is given.
Consider a filter trying to estimate a signal x. The filter uses measurements x_m to calculate an estimate x̂. The measurement is modeled as

    x_{m,k} = x_k + e_k    (7.1)

where e is the measurement noise. The quality of the estimate x̂ can be tested by looking at the residuals

    ε_k = x_{m,k} − x̂_k    (7.2)
If there is no change in the system and the model is correct, then the residuals
are white noise, a sequence of independent stochastic variables with zero mean
and known variance. After a change either the mean or variance or both changes,
[Figure: block diagram of the change detector. The Kalman filter receives the measurements and the input u_k and outputs the estimate x̂_k and the residuals ε_k; the residuals pass through a distance measure producing s_k, which feeds a stopping rule that outputs the alarm signal.]
Figure 7.1. A change detector consists of a distance measure and a stopping rule. The distance measure transforms the residuals from the Kalman filter to a signal s, representing the change in the residuals. The stopping rule decides whether the change is significant or not. If the change in the residuals is significant, the change detector gives an alarm and the Kalman filter can take appropriate action (for example by making the filter faster).
that is, the residuals become “large” in some sense. This can be used by a change
detection algorithm. The problem is to decide what “large” is. Change detection
is also referred to as “fault detection”. It is often used to detect faults in a system,
for example when a sensor is broken or temporarily unavailable.
There are three different categories of change detection methods [13]:
• Methods using one filter, where a whiteness test is applied to the residuals. The filter is temporarily made faster when a change is detected.
• Methods using two filters, one slow and one fast, in parallel. Depending on the residuals from the two filters, one of them is chosen as the currently “best one”.
• Methods using multiple filters in parallel, each one matched to certain assumptions on the abrupt changes. For each filter the probability of that filter being correct is calculated, and the output is the weighted sum of the outputs from all the individual filters, depending on their current probabilities.
In this chapter, a method using one ﬁlter will be implemented and tested.
7.2 One Kalman Filter with Whiteness Test
The task of the change detector is to decide which of the following hypotheses is correct, concerning the residuals ε from the Kalman filter:

    H_0: ε is white noise        (7.3)
    H_1: ε is not white noise    (7.4)

A change detector consists of a distance measure and a stopping rule, as in Figure 7.1. The residuals are transformed to a distance measure s_k, which measures the deviation from the hypothesis H_0. The stopping rule decides whether the deviation is significant or not. Different implementations of the distance measure s_k are [13]:
• Change in mean. The residual itself is used, giving

    s_k = ε_k    (7.5)

• Change in variance. The squared residual minus a known “normal” residual variance λ is used, giving

    s_k = ε_k² − λ    (7.6)

• Change in correlation. The correlation between the residual ε_k at the current time step k and past outputs y_{k−l} or inputs u_{k−l} at a time step k − l is used, as

    s_k = ε_k y_{k−l}    (7.7)

or

    s_k = ε_k u_{k−l}    (7.8)

for some value l.

• Change in sign correlation. For instance, one can use the fact that white residuals should on average change sign every second sample and use

    s_k = sign(ε_k ε_{k−1})    (7.9)
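For illustration, the four distance measures can be written as small functions (a Python sketch; the function names are mine, not from the thesis):

```python
# Distance measures (7.5)-(7.9) applied to a residual sequence eps.
def s_mean(eps, k):                 # change in mean, (7.5)
    return eps[k]

def s_variance(eps, k, lam):        # change in variance, (7.6)
    return eps[k]**2 - lam

def s_correlation(eps, y, k, l):    # change in correlation, (7.7)/(7.8)
    return eps[k] * y[k - l]

def sign(x):                        # sign function: -1, 0 or +1
    return (x > 0) - (x < 0)

def s_sign(eps, k):                 # change in sign correlation, (7.9)
    return sign(eps[k] * eps[k - 1])
```

Each function maps the residuals to a scalar s_k whose average is (close to) zero as long as H_0 holds, which is what the stopping rule below exploits.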
A stopping rule is created by low-pass filtering s_k and comparing this value to a threshold h. Two common low-pass filters described in [13] are

• The CUmulative SUM (CUSUM) test

    g_k = max(g_{k−1} + s_k − ν, 0)    (7.10)

The “drift parameter” ν influences the low-pass effect.

• The Geometric Moving Average (GMA) test

    g_k = λ g_{k−1} + (1 − λ) s_k    (7.11)

Here the forgetting factor λ is used to tune the low-pass effect. λ can be chosen as 0, which means no low-pass effect; s_k will in this case be thresholded directly.

The stopping rule gives an alarm when g_k > h. When an alarm is given, the Kalman filter is temporarily made faster by adjusting the parameters, and g_k is reset to 0.
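The two stopping rules above are one-line recursions, sketched here in Python for illustration (the thesis implementation is in Matlab/Simulink; the numerical values of s_k, ν and λ below are arbitrary examples, not taken from the thesis):

```python
# CUSUM test (7.10): g_k = max(g_{k-1} + s_k - nu, 0)
def cusum_step(g_prev, s_k, nu):
    return max(g_prev + s_k - nu, 0.0)

# GMA test (7.11): g_k = lam*g_{k-1} + (1 - lam)*s_k
def gma_step(g_prev, s_k, lam):
    return lam * g_prev + (1.0 - lam) * s_k

# A small zero-mean distance signal: the drift nu keeps CUSUM at zero
g = 0.0
for s in [0.1, -0.1, 0.1, -0.1]:
    g = cusum_step(g, s, nu=0.2)

# A sustained positive distance signal: CUSUM accumulates towards an alarm
g_alarm = 0.0
for s in [1.0] * 5:
    g_alarm = cusum_step(g_alarm, s, nu=0.2)
```

With ν = 0.2 the CUSUM statistic stays at zero for the small zero-mean signal but grows by 0.8 per sample for the sustained change, which is exactly the low-pass-plus-threshold behavior described above.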
7.3 Implementation
In this section one change detection algorithm is implemented and tested. As distance measure s_k = ε_k² − λ is chosen, and as stopping rule the CUSUM test g_k = max(g_{k−1} + s_k − ν, 0). Inserting (7.6) in (7.10) gives

    g_k = max(g_{k−1} + ε_k² − λ − ν, 0) = max(g_{k−1} + ε_k² − β, 0)    (7.12)

Here β has been defined as β = λ + ν.
The Kalman filter from Chapter 5 with R = 1 and Q = 0.0003 is selected. This gives L = L_slow = 0.0172 and results in a slow filter. The change detector gives an alarm when g_k > h, and the Kalman filter is temporarily made faster by changing the calculated value L to another value L_fast. Choosing R = 1 and Q = 0.01 from Chapter 5 gives L_fast = 0.0951.
To choose the threshold h and the parameter β in (7.12) for the change detection algorithm, the following steps are taken, inspired by the general advice given in [13]:
• Start with a very large threshold h and choose β to the size of the expected change. Simulate the system with measurements from test drives on bumpy roads. The Kalman filter should in these situations remain slow, ignoring the noise. Adjust β such that g_k = 0 more than 50% of the time.
• Then simulate the system with measurements where large changes occur, for example driving up and down a steep hill, or stepping the cruise controller set speed up and down with a heavily loaded vehicle. Set the threshold h so that the delay for detection of these large changes is reasonable.
• Then simulate all the driving situations again. If faster detection is sought, try to decrease β. If fewer false alarms are wanted, try to increase β. If there is a subset of the change times that does not make sense, try to increase β.
The parameters for the change detection found using this method are β = 0.005 and h = 0.2.
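Putting (7.12) and the gain switching together gives the following sketch (in Python, for illustration; the real implementation runs in Simulink). The constants β = 0.005, h = 0.2, L_slow = 0.0172 and L_fast = 0.0951 are the values chosen above; the residual sequences are synthetic stand-ins for recorded test drives.

```python
# Whiteness-test change detector wrapped around a scalar observer gain:
# g_k = max(g_{k-1} + eps_k^2 - beta, 0), alarm when g_k > h (eq. 7.12).
def detect(residuals, beta=0.005, h=0.2, L_slow=0.0172, L_fast=0.0951):
    g, L, alarms = 0.0, L_slow, []
    for eps in residuals:
        g = max(g + eps * eps - beta, 0.0)
        if g > h:                 # significant deviation from H_0
            L, g = L_fast, 0.0    # make the filter faster, reset the test
            alarms.append(True)
        else:
            L = L_slow            # fall back to the slow, smooth filter
            alarms.append(False)
    return alarms

# Small residuals (bumpy road): eps^2 = 0.0025 < beta, so g stays at 0
quiet = detect([0.05] * 20)
# A large change (steep hill): eps = 0.5 gives g = 0.245 > h at once
change = detect([0.05] * 10 + [0.5] * 5)
```

Residuals of magnitude 0.05 never trigger the detector, while a residual of 0.5 pushes g_k above the threshold in a single step, switching the observer gain to L_fast, mirroring the simulation results in Section 7.4.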
7.4 Results
The Kalman filter with the change detection algorithm chosen in Section 7.3 is simulated using measurements from test drives representing different driving situations. The output from the simulated Kalman filter is plotted in a diagram together with the measured signal a_m − a_exp. As a reference, the output from the “original” Kalman filter without change detection, presented in Section 6.2.5, is also plotted. The discrete signal at the bottom of the diagrams is the output from the change detection algorithm, called “Alarm” in Figure 7.1. When this signal is high, the faster parameter of the Kalman filter is chosen.
Figure 7.2 is a simulation of a test drive up and down a steep hill, with a slope of 20% up and 15% down. Figure 7.3 shows a vehicle with a trailer with a weight of 2000 kg driving up and down the same hill. Figure 7.4 shows the vehicle with the trailer again, this time driving with cruise control. The driver is stepping the set speed quickly up and down without letting the vehicle reach the desired speed.
From the three figures it can be seen that the estimates from the Kalman filter with change detection are faster than those of the original implementation. Faster estimates are better in these driving situations because they would give the controller a better chance to compensate for the large changes.
Figure 7.5 and Figure 7.6 show test drives on two different bumpy roads. As can be seen, not many alarms are given, and the change detection algorithm does not affect the estimate. This shows that the change detection algorithm does not affect the estimate when it is not necessary.
[Figure: disturbance [m/s²] plotted against time [s]. Legend: a_∆ = a_m − a_exp, a_z Kalman filter, a_z Change Detection, Alarm.]
Figure 7.2. Simulation of a Kalman filter with change detection algorithm when driving up and down a steep hill. The change detection algorithm detects the large changes and makes the Kalman filter faster.
[Figure: disturbance [m/s²] plotted against time [s]. Legend: a_∆ = a_m − a_exp, a_z Kalman filter, a_z Change Detection, Alarm.]
Figure 7.3. Simulation of a Kalman filter with change detection algorithm when driving up and down a steep hill with a 2000 kg trailer. The change detection algorithm detects the large changes and makes the Kalman filter faster. (The original Kalman filter becomes saturated at −3 m/s². This is not implemented in the simulation for the filter with change detection.)
[Figure: disturbance [m/s²] plotted against time [s]. Legend: a_∆ = a_m − a_exp, a_z Kalman filter, a_z Change Detection, Alarm.]
Figure 7.4. Simulation of a Kalman filter with change detection algorithm when driving with an attached trailer. The driver uses the cruise control lever to quickly step the set speed up and down without letting the vehicle reach the desired speed. The change detection algorithm detects the large changes and makes the Kalman filter faster than the original implementation.
[Figure: disturbance [m/s²] plotted against time [s]. Legend: a_∆ = a_m − a_exp, a_z Kalman filter, a_z Change Detection, Alarm.]
Figure 7.5. Simulation of a Kalman filter with change detection algorithm when driving on a bumpy road. The change detection algorithm does not give many alarms, and therefore the estimate is not affected, as desired.
[Figure: disturbance [m/s²] plotted against time [s]. Legend: a_∆ = a_m − a_exp, a_z Kalman filter, a_z Change Detection, Alarm.]
Figure 7.6. Simulation of a Kalman filter with change detection algorithm when driving on a very bumpy road. The change detection algorithm does not give many alarms, and therefore the estimate is not affected, as desired.
Chapter 8
Conclusions and Future Work
In this chapter the thesis is concluded with a short summary of the obtained results
and observations made. The chapter also includes a section in which interesting
future work is brieﬂy introduced.
8.1 Conclusions
In this Master's thesis the theory for the Kalman filter and for filter tuning has been presented. It has been shown how to implement a Kalman filter estimating the part of the vehicle's acceleration, called a_z, caused by disturbances not included in the model of the vehicle. The easiest method is to use a constructed signal a_∆ = a_exp − a_m as input to the filter, where a_exp is the expected acceleration calculated by the model and a_m is the measured actual acceleration.
It has been shown that the filter parameters can be chosen either
• by knowledge about the noise intensities (when they are not known they can be estimated),
• by running simulations in Simulink and optimizing the parameters using a script in Matlab (for this purpose the algorithm simulated annealing has been implemented),
• or by adjusting the parameters as a compromise between a slow filter and a fast but jerky filter (to do this a subjective choice has to be made).
Some more complex models for the Kalman filter have been implemented and tested. First it was shown how to use the speed of the vehicle as input to the filter, instead of the constructed signal a_∆. Then two models of a_z were derived and tested, inspired by the first-order lag function and the Gauss-Markov process. These models have a higher computational cost, but it could not be proven that they improve the estimate in any way.
It has been shown that the Kalman filter implemented in the vehicles today can be replaced by a first-order lag function, with no loss in performance.
A change detection algorithm has also been implemented in Simulink and
simulations show that it is possible to improve the estimate using this algorithm.
8.2 Future Work
There are several interesting aspects that deserve further investigation.
• The change detection algorithm implemented in Simulink should be tested
in a real vehicle. If this test shows a positive result, the parameters can then
be adjusted by practical methods to suit the actual application.
• It is suggested to implement, simulate and test the other methods for change
detection described in this Master’s thesis. For example the method of using
two ﬁlters in parallel, one slow and one fast, may be of interest.
• It should be practically tested whether the Kalman filter can be exchanged with another, simpler type of low-pass filter, as the simulations in this thesis suggest.
• As the parameters m (the mass of the vehicle) and α (the slope of the road) have a big impact on the calculation of the expected acceleration (see Section 5.3), it would be interesting to see if the performance of the controller could be improved by estimating these parameters, instead of estimating a_z.
List of Notations
This table shows the symbols and abbreviations used in this thesis, together with
a reference to the page where they are deﬁned.
Symbol        Description                                      Page
α             Slope of the road                                  42
a_c           Output from the controller                         34
a_dev         Deviation from desired acceleration                33
a_des         Desired acceleration                               33
a_exp         Calculated expected acceleration                   43
a_m           Measured acceleration of the vehicle               33
a_real        Actual acceleration of the vehicle                 33
a_z           Output from the observer                           43
A_w           Air resistance reference area                      36
c_d           Drag coefficient                                   36
c_rr          Rolling resistance coefficient                     36
η             Efficiency factor for the drivetrain               36
e             Measurement noise                                  12
F_air         Air resistance                                     36
F_brake       Force from the brakes                              35
F_drive       Force from transmission and engine                 35
F_resistance  Drive resistance                                   35
F_roll        Rolling resistance                                 36
g             Gravitational acceleration                         36
i_d           Differential ratio                                 36
I_e           Moment of inertia for the engine                   36
I_f           Moment of inertia for the front axes               36
i_g           Gearbox ratio                                      36
I_g           Moment of inertia for the gear                     36
I_r           Moment of inertia for the rear axes                36
L             Observer gain                                      14
m             Mass of the vehicle                                36
m̃             Mass and moments of inertia                        37
P             Covariance matrix                                  16
q             Process noise intensity                            43
Q             Process noise covariance matrix                    15
ρ             Density of the air                                 36
r             Measurement noise intensity                        44
r_w           Wheel radius                                       34
R             Measurement noise covariance matrix                15
T_b           Desired brake torque                               33
T_brake       Expected output torque from the brakes             35
T_drive       Output torque from the transmission and engine     36
T_e           Desired engine torque                              33
T_engine      Expected output torque from the engine             35
u             Input signal                                       12
v_m           Measured speed of the vehicle                      33
v_real        Actual speed of the vehicle                        33
v_wind        Speed of the wind                                  36
w             Process noise                                      12
x             State                                              12
x̂             Estimate of the state                              14
Abbreviation Description Page
ABS Antilock Braking System 5
ACC Adaptive Cruise Control 7
BAS Brake Assist System 7
CMS Collision Mitigation System 6
DSR Downhill Speed Regulation 6
ESP Electronic Stability Program 5
RMSE Root Mean Square Error 25
SA Simulated Annealing 26
Bibliography
[1] B. Adiprasito, Fahrzeuglängsführung im Niedergeschwindigkeitsbereich, PhD thesis, Shaker Verlag, Aachen, 2004.
[2] P. Andersson, Air Charge Estimation in Turbocharged Spark Ignition Engines, PhD thesis No. 989, Linköpings universitet, Linköping, 2005.
[3] Y. Bar-Shalom, X. Li and T. Kirubarajan, Estimation with Applications to Tracking and Navigation, John Wiley & Sons, Inc., New York, 2001.
[4] G. Blom, Sannolikhetsteori och statistikteori med tillämpningar, Studentlitteratur, Lund, fourth edition, 1989.
[5] R.G. Brown and P.Y.C. Hwang, Introduction to Random Signals and Applied Kalman Filtering, John Wiley & Sons, Inc., New York, third edition, 1997.
[6] T. Coleman, M.A. Branch and A. Grace, Optimization Toolbox User's Guide, The MathWorks, Inc., Natick, 1999.
[7] A. Eidehall, Tracking and threat assessment for automotive collision avoidance, PhD thesis No. 1066, Linköpings universitet, Linköping, 2007.
[8] M. Fach, Robustheits- und Stabilitätsbetrachtung des Fahrzeuglängsreglers im Mercedes-Benz Pkw, Master's thesis, Universität Stuttgart, Stuttgart, 2003.
[9] T. Glad and L. Ljung, Reglerteknik - Grundläggande teori, Studentlitteratur, Lund, fourth edition, 2006.
[10] T. Glad and L. Ljung, Reglerteori - Flervariabla och olinjära metoder, Studentlitteratur, Lund, second edition, 2003.
[11] M. Grewal and A. Andrews, Kalman Filtering - Theory and Practice Using Matlab, John Wiley & Sons, Inc., New York, second edition, 2001.
[12] F. Gustafsson, L. Ljung and M. Millnert, Signalbehandling, Studentlitteratur, Lund, second edition, 2001.
[13] F. Gustafsson, Adaptive Filtering and Change Detection, John Wiley & Sons, Inc., New York, 2001.
[14] U. Kiencke and L. Nielsen, Automotive Control Systems - For Engine, Driveline and Vehicle, Springer-Verlag, Berlin, second edition, 2005.
[15] S. Kirkpatrick, C.D. Gelatt and M.P. Vecchi, Optimization by Simulated Annealing, Science, 220:671-680, 1983.
[16] P. Lingman and B. Schmidtbauer, Road Slope and Vehicle Mass Estimation Using Kalman Filtering, Vehicle System Dynamics, Supplement 37, pages 12-23, 2002.
[17] L. Ljung, System Identification - Theory for the User, Prentice Hall, Upper Saddle River, second edition, 1999.
[18] L. Ljung, System Identification Toolbox User's Guide, The MathWorks, Inc., Natick, 1995.
[19] D. Pfrommer, Regelung der Fahrzeuglängsdynamik unter Berücksichtigung der unterschiedlichen Dynamik von Motor und Bremse, Master's thesis, Universität Stuttgart, Stuttgart, 2005.
[20] K. Popp and W. Schiehlen, Fahrzeugdynamik, B.G. Teubner, Stuttgart, 1993.
[21] R. Regis and C. Shoemaker, A Stochastic Radial Basis Function Method for the Global Optimization of Expensive Functions, Online Supplement, INFORMS Journal on Computing. Accessed 2007-02-01.
http://joc.pubs.informs.org/Supplements/Regis.pdf
[22] R. Rosander, Sensor Fusion between a Synthetic Attitude and Heading Reference System and GPS, Master's thesis No. LiTH-ISY-EX-3408-2003, Linköpings universitet, Linköping, 2003.
[23] G. Welch and G. Bishop, An Introduction to the Kalman Filter, Technical Report TR 95-041, University of North Carolina at Chapel Hill, Chapel Hill, 1995.
[24] Control System Toolbox User's Guide, The MathWorks, Inc., Natick, 1998.
[25] Owner's Manual S-Class, DaimlerChrysler AG, Stuttgart, 2006.
[26] Owner's Manual M-Class, DaimlerChrysler AG, Stuttgart, 2006.
Appendix A
Matlab Implementation of “lsqnonlin”
function [q11,q22,q33] = opt_param_lsq(tolx, tolfun)
% Optimize control parameters using LSQNONLIN and a Simulink model

if (nargin < 2)
    warning('Using standard value for tolx and tolfun (0.001)');
    tolfun = 0.001;
    tolx = 0.001;
end

load_system('opt_param_model')   % Load the model
start_parameters = [0.1 1 10];   % Set initial values

% Set optimization options (for example termination options)
options = optimset('LargeScale','off','Display','iter', ...
    'TolX',tolx,'TolFun',tolfun);

% Run lsqnonlin to solve the optimization problem
best_parameters = lsqnonlin(@tracklsq, start_parameters, ...
    [], [], options);

% Save the result
q11 = best_parameters(1);
q22 = best_parameters(2);
q33 = best_parameters(3);
end

% This is the callback function used by lsqnonlin
function F = tracklsq(current_parameters)
% Current values are passed by lsqnonlin
q11 = current_parameters(1);
q22 = current_parameters(2);
q33 = current_parameters(3);

% Calculate the observer
Q_d = [q11 0 0 ; 0 q22 0 ; 0 0 q33];
simulink_model_parameters   % Script that updates the model parameters

% Create simulation options and run simulation
[tout,xout,yout] = sim('opt_param_model');

% Calculate the cost function value
% (In the model used, the error is the 2:nd output)
error = yout(:,2);

% lsqnonlin uses sqrt(F) as cost function, therefore ^2
F = rmse(error)^2;
end

function r = rmse(error)
% Calculate a cost function based on the input signal
% error = estimated_value - real_value
t = length(error);
r = sqrt(1/t*sum(error.^2));
end
Appendix B
Matlab Implementation of Simulated Annealing
This is the developed optimization script implementing the simulated annealing
(SA) algorithm in Matlab.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% [param_best, cost_best] = sim_annealing(param_start,steps_max)
%
% This Matlab function iteratively tries to find the optimal
% parameters, using Simulink simulation and
% the algorithm "simulated annealing".
%
% Required parameters:
% param_start = [q11 q22 q33], initial state for the algorithm
% steps_max, the maximum number of evaluations allowed
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function [param_best, cost_best] = ...
    sim_annealing(param_start,steps_max)

if (nargin < 2)
    error('Specify initial values and max evaluations');
end

param_current = param_start;            % Initial state
cost_current = sim_cost(param_current); % Initial error
cost_stop = 0.1;
P_start = 0.9;
P_stop = 0.000001;
[temp_start, alpha] = ...
    init(param_current,cost_current,steps_max,P_start,P_stop);

param_best = param_current;   % Initial "best" solution
cost_best = cost_current;
temp_current = temp_start;    % Initial temperature
rand('state',sum(100*clock)); % Reset the random generator

disp(sprintf('steps: \t [q11 \t q22 \t q33] = \t cost \t T=temp'));

steps = 0; % Evaluation count
% While time remains & not good enough
while steps < steps_max && cost_current > cost_stop
    % Pick some neighbour
    param_neighbour = neighbour(param_current);
    % Compute its energy
    cost_neighbour = sim_cost(param_neighbour);
    if cost_neighbour < cost_best % Is this a new best?
        param_best = param_neighbour;
        cost_best = cost_neighbour; % Yes, save it.
    end
    % Should we move to the new state?
    if rand < trans_P(cost_current, cost_neighbour, temp_current)
        param_current = param_neighbour; % Yes, change state
        cost_current = cost_neighbour;
    end
    steps = steps + 1;                  % One more evaluation
    cost_history(steps) = cost_current; % Log the cost (path)
    temp_history(steps) = temp_current;
    temp_current = alpha*temp_current;  % Cool down
    disp(sprintf('%d: \t [%0.5g \t %0.5g \t %0.5g] = \t %0.5g \t T=%0.5g', ...
        steps, param_current(1), param_current(2), param_current(3), ...
        cost_current, temp_current));
end

figure(1);
subplot(2,1,1), plot(cost_history), xlabel('cost function');
subplot(2,1,2), plot(temp_history), xlabel('temperature');
% Print the best parameters and their evaluated cost value
param_best
cost_best
% Pick some neighbour to the current parameters
% Should try to get nearby values!
function n = neighbour(param)
n = [-1 -1 -1]; % Start invalid to force at least one draw
% Can not allow negative values
while n(1) < 0 || n(2) < 0 || n(3) < 0
    % Randomize a change between -0.5 and +0.5 times param
    change = (rand(1,3)-0.5).*param;
    % Calculate new parameters
    n = param + change;
end
end
% Calculate the transition probability function P
% The probability that we move to new parameters
function P = trans_P(cost_current, cost_neighbour, temp)
if cost_neighbour < cost_current
    P = 1; % Always go down the hill
else
    % Go up the hill if the temperature is high,
    % but stay if the temperature is low
    P = exp((cost_current-cost_neighbour)/temp);
end
end
% sim_cost executes the simulation and returns the RMSE error
function cost = sim_cost(param)
global simin_Ld1 simin_Ld2 simin_Ld3
q11 = param(1);
q22 = param(2);
q33 = param(3);
r_d = 0.2;
lrg_para_lrg;
% These parameters have to be defined as global
simin_Ld1 = L_d(1);
simin_Ld2 = L_d(2);
simin_Ld3 = L_d(3);
disp('Simulating...');
sim('rdu_simmod_tl_fumo');
% Check global parameters
if (simin_Ld1 == max(simout_Ld1))
    disp('OK');
else
    error('Parameters are not received by Simulink');
end
% Calculate the cost function value
error = simout_car_pos_g - simout_LRG_az_SGB;
% Pick out the interesting part (skip the beginning!)
parterror = error(400:2600);
% Calculate cost (root mean square error)
cost = rmse(parterror);
end
% Calculate a reasonable initial temperature and alpha
function [temp_start, alpha] = ...
    init(param_start,cost_start,steps_max,P_start,P_stop)
cost_best = cost_start;  % best of all the neighbours
cost_worst = cost_start; % worst of all the neighbours
for i = 1:5
    % Generate some random neighbours, evaluate costs
    param_neighbour = neighbour(param_start);
    cost_neighbour = sim_cost(param_neighbour);
    % Save the worst and best neighbour costs
    if( cost_neighbour < cost_best )
        cost_best = cost_neighbour;
    end
    if( cost_neighbour > cost_worst )
        cost_worst = cost_neighbour;
    end
end
% Calculate the maximum uphill move needed
if( cost_worst > cost_start )
    max_change = cost_worst - cost_start;
else
    max_change = cost_start - cost_best;
end
% Set initial temperature so that this maximum move
% is accepted with a high probability P_start.
% P_start = exp(-max_change/temp_start) gives
temp_start = -max_change / log(P_start);
% Now calculate the cooling factor alpha
% P_stop = exp(-max_change/(temp_start*alpha^steps_max)) gives
alpha = (-max_change / (temp_start*log(P_stop)))^(1/steps_max);
end
Appendix C
Time Constant Identiﬁcation
Here is the Matlab identification script that was used to identify the unknown parameter τ and the intensity of w in the first-order lag function τ ȧ_z + a_z = w.
% Identifies the parameters "tau" and "K" in the model
% dot{x} = Ax + Bu + Ke
% y = Cx + Du + e
% The noise intensity of e is 1

load 'C:\Messungen\221_836_RS141p_pr21.mat'

% Calculate the disturbance a_z due to the slope alpha
alpha = atan((B_LRdeSteigSe)/100);
real_az_Slope = 9.81*sin(alpha);
az = real_az_Slope';

T = 0.02;        % Sampling time
tau_start = 0.5; % Start values
K_start = 1;

A = [-1/tau_start]; % A = -1/tau for the stable first-order lag
B = [0];
C = [1];
D = [0];
K = [K_start];
x0 = [0];

% Create state-space identification model
m = idss(A,B,C,D,K,x0,'Ts',0);
m.as = [NaN]; % NaN means "please identify"
m.ks = m.k;
m.bs = m.b;
m.cs = m.c;
m.ds = m.d;

% Automatically adjust initial values to suit the model
m_init = init(m);

% Load identification data, az output, no input
identificationData = iddata(az, zeros(length(az),1), T);

% Identify the unknown parameters
model = pem(identificationData,m_init);

% Save the identified parameters
tau_save = -inv(model.A)   % tau = -1/A for the first-order lag
K_save = model.K
lambda = model.NoiseVariance

% Check the model's ability to predict one step ahead
figure(15);
compare(identificationData,model,1);
Observer for a vehicle longitudinal controller
Examensarbete utfört i Reglerteknik vid Tekniska högskolan i Linköping av
Peter Rytterstedt LiTHISYEX2007/3950SE
Handledare:
Johanna Wallén
isy, Linköpings universitet
Volker Maaß
Mercedes Benz Technology Center, DaimlerChrysler AG
Examinator:
Thomas Schön
isy, Linköpings universitet
Linköping, 1 April, 2007
.
Avdelning. Change detection is a method that can be used to detect large changes in the signal. numbering — URL för elektronisk version http://www. i. The outer control loop contains the driver assistance functions such as speed limiter. without loss in performance. To be able to perform autotuning for the longitudinal controller one has to model the environment and driving situations. it is important that the output is smooth. Simulated annealing is a global optimization technique which can be used when autotuning.control. cruise control. Sweden Språk Language Svenska/Swedish Engelska/English Rapporttyp Report category Licentiatavhandling Examensarbete Cuppsats Duppsats Övrig rapport ISBN — ISRN Datum Date 20070401 LiTHISYEX2007/3950SE Serietitel och serienummer ISSN Title of series.liu..se http://www. etc. automatically ﬁnd the optimal parameter settings. simulated annealing. It is suggested to implement the change detection algorithm in a test vehicle and evaluate it further. or a slow but smooth estimate. As observer the Kalman ﬁlter is selected. and react accordingly – for example by making the ﬁlter faster. This makes the ﬁlter tuning easier. Nyckelord Keywords Kalman ﬁlter. In this Master’s thesis the theory for the Kalman ﬁlter is presented and it is shown how to choose the ﬁlter parameters. for example by a changed vehicle mass or the slope of the road. The task of the observer is to estimate the part of the vehicle’s acceleration caused by large disturbances.e. It is shown that the Kalman ﬁlter implemented in the test vehicles today can be exchanged with a ﬁrstorder lag function. As the output from the Kalman ﬁlter is directly added to the control value for the engine and brakes. Institution Division.liu. The inner control loop consists of a PIDcontroller and an observer. In this Master’s thesis it is veriﬁed that the parameter choice is a compromise between a fast but jerky.isy. longitudinal controller. 
It is the optimal ﬁlter when the process model is linear and the process noise and measurement noise can be modeled as Gaussian noise.ep. change detection . ﬁlter tuning.se/2007/3950 Titel Title Observatör för en längsregulator i fordon Observer for a vehicle longitudinal controller Författare Peter Rytterstedt Author Sammanfattning Abstract The longitudinal controller at DaimlerChrysler AG consists of two cascade controllers. A ﬁlter using change detection is implemented and simulations show that it is possible to improve the estimate using this method. as there is only one parameter to choose. Department Division of Automatic Control Department of Electrical Engineering Linköpings universitet SE581 83 Linköping.
.
automatically ﬁnd the optimal parameter settings. Change detection is a method that can be used to detect large changes in the signal. or a slow but smooth estimate. The inner control loop consists of a PIDcontroller and an observer. As the output from the Kalman ﬁlter is directly added to the control value for the engine and brakes. This makes the ﬁlter tuning easier. v . As observer the Kalman ﬁlter is selected. A ﬁlter using change detection is implemented and simulations show that it is possible to improve the estimate using this method. It is the optimal ﬁlter when the process model is linear and the process noise and measurement noise can be modeled as Gaussian noise.Abstract The longitudinal controller at DaimlerChrysler AG consists of two cascade controllers. The outer control loop contains the driver assistance functions such as speed limiter. i. and react accordingly – for example by making the ﬁlter faster. To be able to perform autotuning for the longitudinal controller one has to model the environment and driving situations. it is important that the output is smooth. In this Master’s thesis it is veriﬁed that the parameter choice is a compromise between a fast but jerky. cruise control. as there is only one parameter to choose. In this Master’s thesis the theory for the Kalman ﬁlter is presented and it is shown how to choose the ﬁlter parameters. Simulated annealing is a global optimization technique which can be used when autotuning. It is shown that the Kalman ﬁlter implemented in the test vehicles today can be exchanged with a ﬁrstorder lag function. etc.e. The task of the observer is to estimate the part of the vehicle’s acceleration caused by large disturbances. It is suggested to implement the change detection algorithm in a test vehicle and evaluate it further.. for example by a changed vehicle mass or the slope of the road. without loss in performance.
.
March 2007 Peter Rytterstedt vii . Sindelﬁngen. Peter JuhlinDannfelt and Erik Almgren for proofreading. It completes my international studies for a Master of Science degree in Applied Physics and Electrical Engineering at Linköpings Universitet. DaimlerChrysler AG in Sindelﬁngen.Acknowledgments This Master’s thesis has been performed between October 2006 and March 2007 at the Mercedes Technology Center. and for answering questions about the cars and development tools that have come up during this thesis. who have given insightful comments and tips. Germany. My supervisor Johanna Wallén and examiner Thomas Schön at Linköpings Universitet. The teams EP/ERW and GR/EAT deserve many thanks for welcoming me at the department. Finally I would like to thank Marie Rytterstedt. I would like to express my greatest gratitude to my supervisor at DaimlerChrysler. also have a part in this thesis. and my girlfriend for her support and encouragement. Volker Maaß. Sweden. who has always had time for my questions and helped me in any way possible.
Contents
1 Introduction
1.1 Background
1.2 Problem Formulation
1.3 Objective
1.4 DaimlerChrysler AG
1.5 Method
1.6 Outline
1.7 Limitations

2 Driver Assistance Systems
2.1 Antilock Braking System
2.2 Traction Control
2.3 Stability Control
2.4 Speed Limiter
2.5 Cruise Control
2.6 Hill Descent Control
2.7 Forward Collision Mitigation System
2.7.1 Distance Warning
2.7.2 Brake Assist System
2.8 Adaptive Cruise Control
2.9 Lane Guidance System
2.10 Blind-Spot Warning
2.11 Systems Supported by the Controller

3 Basic Filter Theory
3.1 State-Space Models
3.2 Discretization
3.3 Observer
3.4 Observability
3.5 Kalman Filter
3.5.1 Process and Measurement Model
3.5.2 Discrete Time Kalman Filter Equations
3.5.3 Initialization
3.5.4 Steady State
3.5.5 Block Diagram of the Stationary Kalman Filter
3.5.6 Design Parameters
3.6 Shaping Filter
3.6.1 Shaping Filters for Non-Gaussian Process Noise
3.6.2 Shaping Filters for Non-Gaussian Measurement Noise

4 Choosing the Kalman Filter Parameters
4.1 Estimating the Covariances
4.2 Choosing Q and R Manually
4.3 Simulation
4.3.1 Open-Loop Simulation
4.3.2 Closed-Loop Simulation
4.4 Autotuning
4.4.1 Evaluation Using RMSE
4.4.2 Autotuning Using Matlab
4.4.3 Simulated Annealing

5 Kalman Filter Implementation
5.1 Overview of the Inner Control Loop
5.2 Modeling the Acceleration
5.3 Errors in the Acceleration Model
5.4 Kalman Filter Model
5.5 Choosing the Filter Parameters

6 Alternative Kalman Filter Models
6.1 Vehicle Speed as Feedback
6.2 Modeling the Disturbance az
6.2.1 First-Order Lag Function
6.2.2 First-Order Gauss-Markov Process
6.2.3 Identifying the Time Constant
6.2.4 Testing the Model of az
6.2.5 Higher-Order Derivative of az
6.3 Implementation and Testing in Arjeplog
6.4 Comparing the Kalman Filter Models
6.5 Comparing the Kalman Filter with a First-Order Lag Function

7 Change Detection
7.1 Idea of Change Detection
7.2 One Kalman Filter with Whiteness-Test
7.3 Implementation
7.4 Results

8 Conclusions and Future Work
8.1 Conclusions
8.2 Future Work

List of Notations

Bibliography

A Matlab Implementation of "lsqnonlin"
B Matlab Implementation of Simulated Annealing
C Time Constant Identification
Chapter 1

Introduction

This chapter will give an introduction to the problem investigated in this Master's thesis. DaimlerChrysler AG, where the thesis project has been performed, will be presented, as well as an outline for the thesis.

1.1 Background

Driver assistance systems are more and more becoming standard in the new vehicles built today. The tasks for these systems are to support and relieve the driver, but not to take the driving task from him or her. By vehicle longitudinal regulation, one understands the influence on the vehicle in its driving direction by means of a controller. In this Master's thesis the focus will be on the driver assistance systems supported by the longitudinal controller at DaimlerChrysler AG.

The longitudinal controller can be thought of as two cascade controllers, see Figure 1.1. The outer controller contains the driver assistance functions (such as Speedtronic, Distronic, DSR, etc.; for further information see Chapter 2). The block called "Coord." (coordinator) in the figure chooses which of the functions should have effect, depending on the driver's choice. The resulting calculated acceleration ad is delivered to the inner control loop, whose task is to make the vehicle have the same acceleration as the desired value. The inner controller contains a PID-controller and an observer. It also contains a jerk damper, so that the vehicle is traveling smoothly, even when switching between the different assistance functions. The inner controller delivers a desired torque T to the actuators engine, brake and gearbox, which are affecting the vehicle. The current speed v and acceleration a are measured and used as feedback signals by the controllers.

The task of the observer is to estimate the part of the vehicle's acceleration caused by disturbances not included in the vehicle model. A Kalman filter is chosen as observer, and this Master's thesis will explain the function and implementation of the Kalman filter in more detail.
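The cascade structure described above can be illustrated with a schematic sketch. Note that this is only an illustration of the structure, not the DaimlerChrysler implementation (which is developed in Matlab/Simulink): the coordinator rule, the PID gains and all signal values below are invented for the example.

```python
# Schematic sketch of the cascade structure: an outer coordinator selects the
# desired acceleration a_d, and an inner PID-controller plus a disturbance
# estimate (the observer output) produce the torque demand T.
# All gains and the coordinator rule are invented for illustration only.

def coordinator(desired_accels):
    """Choose which assistance function takes effect; here simply the most
    restrictive (smallest) desired acceleration is assumed to win."""
    return min(desired_accels)

class InnerController:
    """PID-controller whose output is corrected by an observer estimate."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, a_desired, a_measured, a_disturbance_estimate):
        error = a_desired - a_measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        # The observer estimate compensates disturbances (changed mass, road
        # slope) that the integral part would otherwise remove only slowly.
        return (self.kp * error + self.ki * self.integral
                + self.kd * derivative - a_disturbance_estimate)

ctrl = InnerController(kp=100.0, ki=10.0, kd=1.0, dt=0.02)
a_d = coordinator([0.5, 1.2])   # e.g. speed limiter vs. cruise control request
T = ctrl.step(a_d, a_measured=0.3, a_disturbance_estimate=0.1)
```

In this sketch the disturbance estimate is simply subtracted from the PID output; the point is the structure (coordinator, inner loop, observer correction), not the numbers.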
[Figure 1.1. Overview of the longitudinal controller in the vehicle. The driver assistance functions (Speedtronic, Distronic, DSR, ...) in the outer control loop calculate the desired accelerations ad1, ad2, ..., adn; v is here the actual vehicle speed. The coordinator ("Coord.") chooses which of the functions should have effect and delivers a desired acceleration ad to the inner control loop. The inner control loop consists of a PID-controller and an observer. They deliver a desired torque T to the engine, brake and gearbox, which are affecting the vehicle. The speed v and acceleration a are used as feedback signals.]

1.2 Problem Formulation

The Kalman filter attached parallel to the PID-controller should estimate the part of the vehicle's acceleration caused by large disturbances (for example a changed mass or slope of the road). The filter output is directly attached to the engine and brakes, and it is therefore important that the output from the filter is smooth; otherwise the comfort is negatively affected. The Kalman filter has to be tuned to work optimally. For this purpose the theory behind the filter and the methods for filter tuning had to be better investigated. It should also be examined if the structure of the filter can be improved.

1.3 Objective

The goal of this thesis is to explain the function of the Kalman filter, describe methods for choosing the parameters, and to find good parameter settings justified by theoretical and practical methods.
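The smoothness requirement behind the problem formulation, the compromise between a fast but jerky and a slow but smooth estimate, can be illustrated with a first-order lag filter (the structure the thesis later compares the Kalman filter against). The time constants and sample time below are arbitrary illustration values, not tuned vehicle parameters.

```python
# Illustration of the speed/smoothness trade-off: a discrete first-order lag
# y[k] = y[k-1] + (dt/tau) * (u[k] - y[k-1]) applied to a step disturbance.
# tau and dt are chosen arbitrarily for the example.

def first_order_lag(signal, tau, dt):
    """Filter a sampled signal with a first-order lag of time constant tau."""
    alpha = dt / tau
    y, out = 0.0, []
    for u in signal:
        y += alpha * (u - y)
        out.append(y)
    return out

step = [0.0] * 5 + [1.0] * 45                    # sudden disturbance, e.g. a slope
fast = first_order_lag(step, tau=0.1, dt=0.02)   # converges quickly, passes noise
slow = first_order_lag(step, tau=1.0, dt=0.02)   # smooth output, but lags behind
```

After the 45 samples of the step, the fast filter has essentially converged while the slow one is still far from the final value; with noisy measurements the ranking of "jerkiness" would be reversed, which is exactly the tuning compromise described above.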
1.4 DaimlerChrysler AG

DaimlerChrysler AG is a major automobile and truck manufacturer formed in 1998 by the merge of Daimler-Benz in Germany (the manufacturer of Mercedes-Benz) and the Chrysler Corporation in the USA. In January 2007 the company was the second largest auto manufacturer. The company produces cars, trucks, vans and busses under the brands Chrysler, Dodge, Jeep, Mercedes-Benz, Smart and Maybach, among others.

At the Mercedes Technology Center in Sindelfingen, Germany, over 35000 employees are producing the C-Class, E-Class, S-Class, CL-Class, CLS-Class and Maybach cars; 9500 persons are working with development. The team working with driving assistance systems is developing control systems such as adaptive cruise control and hill descent control.

1.5 Method

The necessary theory describing the Kalman filter will be presented using literature studies. Using previous Master's theses at DaimlerChrysler, the process model for the vehicle's longitudinal dynamics will be developed. Then a stationary Kalman filter is designed for the model, and good working parameters will be found using simulation. Different methods for choosing the filter parameters will be explained and implemented. The model of the disturbance used in the existing Kalman filter will be examined. The filter will then be extended with an algorithm for change detection. It will be examined if this makes the filter both work properly in case of small noise and respond quickly in case of sudden larger changes (as these are two competing goals for the Kalman filter). The development of the observer is performed in Matlab and Simulink, and simulations are made offline.

1.6 Outline

In the introductory chapter the purpose and method of the thesis are presented. An overview of the different driver assistance systems supported by the longitudinal controller is given in the second chapter. Chapter three discusses some basic estimation filter theory needed to implement an observer in the controller. Different possibilities to choose and tune the filter parameters are presented in chapter four. In chapter five the model for the longitudinal dynamics of the vehicle is derived, and an initial version of the Kalman filter is implemented and tested. Chapter six starts with a presentation of some more complex models used to model the estimated parameter, and continues with a discussion of the advantages and disadvantages of using these models. At the end of chapter six a comparison between the developed Kalman filter and a standard low-pass filter is made. Chapter seven discusses the possibilities given by some ideas picked up from the area of "change detection", and an algorithm that uses this theory is simulated and evaluated. Finally, in the last chapter, conclusions are drawn and some extensions for the thesis are presented.
1.7 Limitations

The use of an observer in an inner control loop to estimate the model errors was suggested in [1]. A comparison between this method and the use of a PID-controller was made in [19]. It is a method that has been used for about a year in test vehicles at DaimlerChrysler and is accepted as a basis for this thesis. However, the advantages and disadvantages of attaching an observer in parallel to a PID-controller (as described in this Master's thesis) have not been examined. The sensors used in this Master's thesis to measure the speed and acceleration of the vehicle use Kalman filters and sensor fusion techniques to obtain stable measurements. It has not been examined if the filter described in this thesis would make better estimates by using unfiltered data.
Chapter 2

Driver Assistance Systems

To better understand the task of the controller discussed in this thesis, an introduction to driver assistance systems is given in this chapter. Among the driver assistance systems there are comfort functions, which relieve the driver in his/her tasks; passive safety functions, which reduce the consequence of an accident; and active safety functions, which help the driver to avoid accidents.

2.1 Antilock Braking System

Antilock braking system (ABS) prevents the wheels from locking and maintains the steering ability of the vehicle during hard braking. The system measures the velocity of all four wheels, and if one of the sensors reports an abnormal deceleration it concludes that the wheel is about to lock, and the pressure in the braking system is reduced. During bad road conditions, ABS will also reduce the stopping distance. [7]

2.2 Traction Control

The functioning of the traction control system is very similar to that of the ABS. The system prevents the wheels from slipping during acceleration by using the same velocity sensors as the ABS. If the vehicle starts to slip, the engine power is reduced in order to maintain control of the vehicle. [7]

2.3 Stability Control

A stability control system basically measures the yaw rate of the vehicle, i.e., the rotation in the ground plane, and compares it with the desired trajectory. If the deviation is greater than a certain threshold, the system will activate the brakes on one side of the vehicle to correct this. When the German automotive supplier Bosch launched their stability control system they called it "electronic stability program" (ESP). [7]

2.4 Speed Limiter

The speed limitation function is used to make sure that the driver does not exceed a set speed. This function is called "Speedtronic" by DaimlerChrysler. The driver can set a variable or permanent limit speed. The variable limit can easily be set and changed during driving. It is automatically deactivated if the driver pushes down the accelerator pedal beyond the pressure point, often referred to as "kickdown". This is useful when overtaking. The permanent limit is used for permanent long-term speed restrictions, such as driving on winter tires. The permanent limit speed is set using the on-board computer, and it cannot be exceeded by kickdown. [25]

2.5 Cruise Control

The cruise control, sometimes called "speed control" or "autocruise", automatically controls the speed of the vehicle. Cruise control maintains the set speed and accelerates and brakes the vehicle automatically if necessary. [26] The driver can easily set and change the desired speed during driving. It is possible to drive at a higher or a lower speed than that set in the operating system at any time by manually braking or accelerating. If the driver pushes down the accelerator pedal to temporarily drive faster, the cruise control adjusts the vehicle's speed to the last stored speed when he/she again releases the accelerator pedal. This is useful when overtaking. [25]

2.6 Hill Descent Control

The hill descent control system is essentially a low-speed cruise control system for steep descents. It uses the ABS brake system to control each wheel's speed and keeps the speed of travel to the speed set in the operating system. The driver will be able to maintain control of the vehicle when driving down hills with slippery or rough terrain, and the system is therefore especially helpful in off-road conditions. The hill descent control system used in DaimlerChrysler vehicles is called "Downhill Speed Regulation" (DSR). [7]

2.7 Forward Collision Mitigation System

Collision mitigation system (CMS) uses radar sensors to detect obstacles which are in the path of the vehicle. If the driver does not brake himself, the CMS will reduce the impact speed by applying the brakes when a collision with the leading vehicle appears to be unavoidable. In Europe there are government restrictions which limit the permitted braking rate. Most manufacturers have a similar functionality when it comes to the intervention strategy: they use increasing warning levels as the threat approaches. [7]

2.7.1 Distance Warning

This function warns the driver when the distance to the vehicle in front is too small. A message or a warning lamp in the instrument cluster then lights up. If the driver is approaching the vehicle in front at high speed, he/she will also hear a signal. The driver has to apply the brakes in order to maintain the correct distance and avoid a collision. [25]

If the system has detected a risk of collision and the driver does not brake or steer himself/herself, the vehicle is automatically braked gently and the seat belts are retracted gently two or three times to warn the driver. The system calculates the brake pressure necessary to avoid a collision and activates the brake assist system to reduce the impact speed. This helps to reduce the consequence of an accident. This system is called "Presafe brake" in DaimlerChrysler vehicles. [25]

2.7.2 Brake Assist System

Brake assist system (BAS) operates in emergency braking situations. If the driver pushes down the brake pedal quickly, the system interprets this action as emergency braking. BAS automatically boosts the braking force and thus shortens the stopping distance. The brake assist system is deactivated and the brakes will function as normal when the driver releases the brake pedal. [25]

If the vehicle is equipped with radar sensors, the system calculates the brake pressure necessary to avoid a collision. When the driver pushes down the brake pedal forcefully, the system automatically boosts the braking force to a level appropriate to the traffic situation. In DaimlerChrysler vehicles this function is called "BAS Plus". It is deactivated when there are no obstacles detected in the path of the vehicle and there is no longer a risk of collision. [25]

2.8 Adaptive Cruise Control

Adaptive cruise control (ACC) is also known as "active cruise control" or "intelligent cruise control". ACC uses a forward-looking sensor, usually radar or laser, to monitor the distance to leading vehicles. If there is no vehicle in front, ACC operates in the same way as cruise control. If the system is active and the time gap to the leading vehicle falls below a certain threshold, the vehicle will automatically brake in order to maintain the distance. [7]

DaimlerChrysler offers adaptive cruise control under the name "Distronic". It functions at speeds between 30 and 200 km/h. With the Distronic system, the distance to the leading vehicle is set as a time between one and two seconds. If the vehicle detects that a higher deceleration is required to avoid colliding with the leading vehicle, an audible warning is given to the driver. At this point, the driver has to apply the brakes himself/herself in order to avoid a collision. [25]

Some DaimlerChrysler vehicles are equipped with a system called Distronic Plus, which functions at speeds between 0 and 200 km/h. If Distronic Plus detects that the vehicle in front has stopped, it will cause the vehicle to brake and come to a halt. Once the vehicle is stationary, it will remain so without the driver having to push down the brake pedal. If the vehicle in front pulls away, and the driver pulls the cruise control lever or briefly pushes down the accelerator pedal, the vehicle automatically pulls away and adapts its speed to the vehicle in front. [25]

2.9 Lane Guidance System

Lane guidance system refers to systems that try to help the driver stay in the lane. There are different techniques for achieving this, but usually ocular vision or radar is used. Systems typically use an audible warning or a steering wheel torque to alert the driver if the vehicle is approaching the lane markings. Another idea is to try to mimic the sounds and vibrations that are generated by rumble strips, i.e., the grooved lane markings that are sometimes used on motorways to indicate lane departure. The steering wheel torque used by some of the systems will automatically steer the vehicle back into the center of the lane, thus working almost like an autopilot. [7]

2.10 Blind-Spot Warning

The general idea behind a blind-spot warning system is to lower the risk of lane change accidents by warning the driver about vehicles in the blind spot. [7]

2.11 Systems Supported by the Controller

In this Master's thesis, the focus will be on those driver assistance systems that are supported by the longitudinal controller at DaimlerChrysler AG. These are

• Speed limiter
• Cruise control
• Adaptive cruise control
• Collision mitigation system
• Brake assist system
• Hill descent control

Figure 2.1 gives an overview of the vehicles sold by DaimlerChrysler. The vehicles are listed together with the driver assistance systems that are used in the vehicles.
[Figure 2.1. Driver assistance systems supported by the vehicle longitudinal controller at DaimlerChrysler. The figure shows which DaimlerChrysler vehicles (C-, CL-, CLK-, CLS-, E-, G-, GL-, M-, R-, S-, SL- and SLK-Class, Sprinter, Viano, Vito and Crafter) are using the systems Speedtronic, Cruise Control, Distance Warning, Distronic, Distronic Plus, DSR, Pre-Safe Brake and BAS Plus. The names that are used in the figure are the names used by DaimlerChrysler.]
The equations for this ﬁlter are presented and the function of the stationary Kalman ﬁlter is explained. One can reduce the form of any system of higher order diﬀerential equations to an equivalent system of ﬁrstorder diﬀerential equations by introducing new variables. In the end of this chapter it is described how to construct shaping ﬁlters for nonGaussian process noise and nonGaussian measurement noise. Then some basic theory for observers is presented. This model is also referred to as the continuoustime statespace model. Almost every physical system can in the same way be described using diﬀerential equations.1 StateSpace Models To design an estimation ﬁlter one ﬁrst needs a mathematical model of the controlled process. One popular observer is the Kalman ﬁlter. 3. called ordinary diﬀerential equations (ODE). [9] In this Master’s thesis a special type of diﬀerential equations will be used. Sir Isaac Newton (16421727) discovered that the sun and its planets are governed by laws of motion that depend only upon their current relative positions and current velocities. When the equations describing a system are 11 . The Kalman ﬁlter is “certainly one of the greater discoveries in the history of statistical estimation theory and possibly the greatest discovery in the twentieth century” [11]. When doing control design it is preferable to have all equations of ﬁrst order.Chapter 3 Basic Filter Theory This chapter starts with an introduction to statespace models often used when working with control systems. It is shown how to transform a continuous time model into a time discrete model. [11] The order of a diﬀerential equation is equal to the order of the highest derivative. By expressing these laws as a system of diﬀerential equations and feed them with the current positions and velocities he could uniquely determine the positions and velocities of the planets for all times.
because when trying to predict future states of the system with a good model.12 Basic Filter Theory linear. the model can be written as the linear statespace model [3] x(t) = Ax(t) + Bu(t) + Gw(t) ˙ y(t) = Cx(t) + e(t) (3.1) and (3. but the controller is often implemented in a computer using discrete methods. This gives the linear discrete timeinvariant statespace model x(kT + T ) = Ad x(kT ) + Bd u(kT ) + Gd w(kT ) y(kT ) = Cd x(kT ) + e(kT ) (3. typically available to or controlled by the system. The variable w is used to model unknown disturbances or model uncertainties. In words this means that the past up to any time t1 is fully characterized by the value of the process at t1 . This means that it satisﬁes the Markov Property [3] p[x(t)x(τ )..2) are commonly called “state variables” (or “states”) and they represent all important characteristics of the system at the current time.. Often an easier and more compact form of (3.3) The construction p[AB] in this statement should be read “the probability of A given B”. τ < t1 ] = p[x(t)x(t1 )].5) The index d is referring to the discrete form of the matrices. often referred to as Gaussian noise. one only needs to know the current state.4) (3. They can only be observed through their inﬂuence on the output.6) (3. This is usually the case when the input u is generated by a computer. The variable u represents a known external signal. x2 . The “process model” (3.5) is being used xk+1 yk = Ad xk + Bd uk + Gd wk = Cd xk + ek (3.1) describes the dynamics of the system. which cannot be directly measured.2 Discretization Almost every physical system is best described using a continuoustime model. This is an important property. e is here some noise added to the measurement. The continuous dynamic system described by (3.4) and (3..1) (3. ∀t > t1 (3.1) and (3.1) and (3.2) can be transformed into a discrete statespace system. [3] 3. 
[3] Usually e and w are modeled as unpredictable random processes with null as mean value and a known variance. How this is done is explained in this section. based on [10] and [13].2) The variables x = [x1 . or “The future is independent of the past if the present is known”. xn ]T in (3. The “measurement model” (3.2) with e and w modeled as Gaussian noise is a Markov Process. It can be shown that a system described by (3. assuming that the input signal u is piecewise constant during the sampling interval T . The model therefore has to be sampled and changed into a discrete time statespace model.2) describes how the noisy measurements y are related to the internal variables.7) . .
This method is generally more accurate than zeroorder hold when the input e is assumed to be smooth [24]. is used by the command “c2d” (=continuous to discrete) in Matlab when nothing else is speciﬁed [24]. The discrete form of the matrices are calculated using the matrices in (3.3 Observer 13 In this Master’s thesis the indices d will be left out. An observer may be used to estimate the unknown states with help from the available measured states and the measured signals y and u.12) This method.2) as Ad Bd Cd = eAT T (3. see [11] for a more detailed description.10) = 0 eAT Bdt = C There are several ways to calculate eAT . Another alternative is “triangle approximation”. called “zeroorder hold”. One of them is eAT = L−1 (sI − A)−1 (3. The states x cannot be measured but are needed for controlling purposes. and the signals u and y can be measured.1) and (3.11) where L−1 is the inverse Laplace transform. There are several possibilities to calculate the matrix Gd . The matrices A.1) giving [13] T Gd = 0 eAT Gdt (3. Gd is then calculated using A and G from (3.3. where the input e is assumed piecewise linear over the sampling period.6) and (3. based on parts from [9] and [3].8) (3. It is seldom the case that e is constant during the sampling intervals [13].9) (3.7). 3. the same method can be used as for the matrix Bd . In this section some basic theory about the observer is discussed. Other methods include “impulseinvariant discretization” (where the impulse response of the discretized system is matched with that of the continuous system) and “Tustin approximation” (which instead matches the frequency response of the systems).3 Observer When designing a controller it is important to have information about the states of the system. when there is no risk of confusion. Other methods are using Taylor series approximation or the Padé approximation. Assuming that the stochastic input e also is constant during the sampling intervals. Normally all states cannot be measured. 
Consider the discrete time system described by (3. An . B and C are time invariant and known.
such that ˆ xk+1 = Aˆk + Buk + L(yk − C xk ) ˆ x ˆ In words this can be written as “Estimated state” = “Predicted state” + L · “Correction term” The correction term reﬂects the diﬀerence between the predicted measurement and the actual measurement as explained above. It is a trade oﬀ between how fast the estimations converge toward the measurement (high L gives a fast convergence) and how sensitive the estimate is to measurement noise (high L gives a more noise sensitive estimate). The matrix L is here a design parameter and it adjusts how much the residuals should aﬀect the estimated states.13) where xk is the estimated value of x at time step k. This limitation is formulated with the help of the observability matrix O.14 Basic Filter Theory initial approach to estimate the states would be to simulate the system using only the known input values u xk+1 = Aˆk + Buk ˆ x (3.15) All states are observable if and only if O has full rank. A good way to improve the estimates is to use yk − C xk as a feedback. This diﬀerence should be null ˆ if the estimate xk is equal to the real state xk . described in [9] as O= C CA CA2 . . however. In the context of ﬁlters this term is often called the measurement “innovation” or the “residual”. When a matrix has full rank. there has to be a connection between the states and the measurement. CAn−1 (3. This criteria does however not give any information about how good the estimate will be. (3. none of the rows can be written as a linear combination of . resulting in diﬀerent types of observers.4 Observability The observer estimates the states x with the help of measurements y. the diﬀerence yk − C xk can be used. The optimal value of L can be calculated in diﬀerent ways. and will also be so in absence of ˆ errors. never be the case. the states x has to be “seen” in the output y. To measure the quality of the ˆ estimation. . described in detail in Section 3. One type of observer is the Kalman ﬁlter. This will in practice. 
Therefore.14) 3.5. since there are always model errors and disturbances w as well as measurement noise e.
The easiest way to compute the rank of a matrix is given by the Gauss elimination algorithm, see [3]. If the matrix does not have full rank, then there is one or more rows that are "unnecessary", i.e., linear combinations of the other rows. In Matlab the rank is calculated with the command "rank".

3.5 Kalman Filter

The Kalman filter is very powerful in several aspects. It supports estimates of past, present, and future states, and it can do so even when the precise nature of the modeled system is unknown [11]. In the following sections the basic theory of the Kalman filter is presented, following the presentation in [23] and [12]. For more information, for example on how to derive the equations, see [11] or [12].

3.5.1 Process and Measurement Model

The Kalman filter is a set of equations that calculates the optimal L, given a linear model of the system and statistical information about the process noise w and measurement noise e. When the system and noise are modeled in the way described in this section, the Kalman filter will compute the value of L that minimizes the variance of the state estimation error x_k − x̂_k.

The Kalman filter estimates the states of a discrete time controlled process described in the form of (3.6) and (3.7), repeated below

  x_{k+1} = A x_k + B u_k + G w_k
  y_k = C x_k + e_k

The random variables w and e are assumed to be independent of each other, and with normal (Gaussian) probability distributions according to

  p(w) ∼ N(0, Q)    (3.16)
  p(e) ∼ N(0, R)    (3.17)

The covariance matrices are thus defined R = E{e_k e_k^T} and Q = E{w_k w_k^T}, with E{e_k} = E{w_k} = 0. The process noise covariance matrix Q, the measurement noise covariance matrix R and the matrices A, B, C and G might change with each time step or measurement, but in this Master's thesis they will be assumed stationary and known. [23]

3.5.2 Discrete Time Kalman Filter Equations

In this section the Kalman filter equations will be presented to give an overview of how the filter works, based on parts from [23], [12] and [13]. Note that the equations can be algebraically manipulated into several forms. Therefore the equations presented here might differ from those found in other literature.

The Kalman filter estimates a process by using a form of feedback control: the filter estimates the process state at some time and then obtains feedback in the form of (noisy) measurements.
As such, the equations for the Kalman filter are divided into two groups: predictor equations and measurement update equations.

The discrete Kalman filter predictor equations are [13]

  x̂_{t|t-1} = A x̂_{t-1|t-1} + B u_t    (3.18)
  P_{t|t-1} = A P_{t-1|t-1} A^T + G Q G^T    (3.19)

Equation (3.18) translates the estimate from the last time step, x̂_{t-1|t-1}, to obtain the estimate for the current time step, also referred to as the "a priori" state estimate. Here x̂_{t|t-1} refers to the estimate of x at the current time step t given all the measurements prior to this time step. P_{t|t-1} in (3.19) is defined

  P_{t|t-1} = E[(x_t − x̂_{t|t-1})(x_t − x̂_{t|t-1})^T]    (3.20)

and is the estimation error covariance given measurements prior to this time step, also referred to as the "a priori" estimate error. Note that P_{t|t-1} is calculated using the estimate error from the last time step, P_{t-1|t-1}, and the process noise covariance (or "model uncertainties") Q. P_{t|t-1} can be thought of as the uncertainty of how the states x are evolving.

The discrete Kalman filter measurement update equations are [13]

  L_t = P_{t|t-1} C^T (C P_{t|t-1} C^T + R)^{-1}    (3.21)
  x̂_{t|t} = x̂_{t|t-1} + L_t (y_t − C x̂_{t|t-1})    (3.22)
  P_{t|t} = (I − L_t C) P_{t|t-1}    (3.23)

The measurement update equations are responsible for the feedback, i.e., for improving the estimate by incorporating a new measurement. They can also be thought of as corrector equations. The first step (3.21) computes the Kalman gain L_t that minimizes the estimation error covariance P_{t|t} = E[(x_t − x̂_{t|t})(x_t − x̂_{t|t})^T]. The correction (3.22) generates the state estimate by incorporating the new measurement y_t. The final step (3.23) is to obtain the estimate error covariance P_{t|t}, which is needed in the next time step.

After each predictor and measurement update pair, the calculated values x̂_{t|t} and P_{t|t} are saved so they can be used in the next time step. This update is performed with the time update equations

  x̂_{t-1|t-1} = x̂_{t|t}    (3.24)
  P_{t-1|t-1} = P_{t|t}    (3.25)

The process is then repeated using these values in the algorithms for the next time step. This recursive nature of the Kalman filter is practical when doing the implementation. [23]

3.5.3 Initialization

The initial values x̂_{-1} and P_{-1} have to be chosen before starting the filter. x̂_{-1} is chosen with knowledge about the state x.
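The predictor and measurement update equations can be illustrated with a minimal scalar example. The following Python sketch is illustrative only; the thesis' implementation is in Matlab/Simulink, and all numeric values here are made up:

```python
# One cycle of the discrete Kalman filter in the scalar case.
# Model values are illustrative, not taken from the thesis.
A, B, C, G = 0.9, 1.0, 1.0, 1.0   # state-space matrices (scalars here)
Q, R = 0.01, 0.1                  # process / measurement noise covariances

def predict(x_prev, P_prev, u):
    # Predictor equations (3.18)-(3.19): "a priori" estimate and covariance.
    x_pred = A * x_prev + B * u
    P_pred = A * P_prev * A + G * Q * G
    return x_pred, P_pred

def update(x_pred, P_pred, y):
    # Measurement update equations (3.21)-(3.23).
    L = P_pred * C / (C * P_pred * C + R)   # Kalman gain (3.21)
    x_est = x_pred + L * (y - C * x_pred)   # corrected estimate (3.22)
    P_est = (1.0 - L * C) * P_pred          # error covariance (3.23)
    return x_est, P_est, L

x, P = 0.0, 1.0                   # initial values (Section 3.5.3)
x, P = predict(x, P, u=1.0)       # project the estimate ahead in time
x, P, L = update(x, P, y=1.2)     # incorporate the new measurement y
```

Saving x and P after each predictor/update pair, as in (3.24)-(3.25), is all that is needed to make the filter recursive.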
For example, if x is a random constant with normal probability distribution, i.e., a random signal with null as mean value, the best choice is x̂_{-1} = 0. However, the choice is not critical. P_{-1} is the uncertainty in the initial estimate x̂_{-1}. When one is absolutely certain that the initial state estimate is correct, then P_{-1} should be set to 0. Otherwise the best choice is the variance of x. [23]

3.5.4 Steady State

If the matrices A, B, C, G, Q and R are time-invariant, both the estimation error covariance P_k and the Kalman gain L_k will converge to a stationary value. [12] If this is the case, these parameters can be precomputed by either running the filter off-line, or by calculating the stationary value P as described in [12] and [19]

  P = A P A^T + G Q G^T − A P C^T (C P C^T + R)^{-1} C P A^T    (3.26)

This equation is referred to as the algebraic Riccati equation. The stationary value of L can then be calculated as

  L = A P C^T (C P C^T + R)^{-1}    (3.27)

If the matrices are time dependent, the functionality to calculate new values for L also has to be implemented, but in this Master's thesis they will be assumed stationary.

3.5.5 Block Diagram of the Stationary Kalman Filter

The computational procedure and the relation of the filter to the system is illustrated as a block diagram in Figure 3.1. The Kalman filter recursively computes values of x̂ using the precalculated stationary values of P and L, the initial estimate x̂_{t-1|t-1} and the input data y_t. [12]

3.5.6 Design Parameters

Design parameters for an observer are the matrices A, B, C and L. If the Kalman filter equations (3.26) and (3.27) are used to calculate L, the design parameters are Q, R and G instead of L. In Chapter 4 it will be discussed how to choose the parameters Q and R. The model used in this Master's thesis is developed in Chapter 5.

3.6 Shaping Filter

When implementing a Kalman filter, it is necessary to have all disturbances acting as Gaussian noise. For many physical systems encountered in practice, it may not be justified to assume that all noises are Gaussian. If the spectrum of a signal is known, the signal can be described as the output of a filter driven by Gaussian noise. These filters are called shaping filters, and they shape the Gaussian noise to represent the spectrum of the actual system. Using this, a model with non-Gaussian noise can be extended to a filter driven by Gaussian noise. The filter can be included in the original state-space system, giving a new linear state-space model driven by Gaussian noise. [11]
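For a scalar model, the stationary solution of (3.26) can be found by simple fixed-point iteration, after which (3.27) gives the stationary gain. The following Python sketch uses illustrative values only (they are not from the thesis, which uses Matlab):

```python
# Iterate the scalar algebraic Riccati equation (3.26) to its fixed point,
# then compute the stationary Kalman gain from (3.27). Values are made up.
A, C, G = 0.9, 1.0, 1.0
Q, R = 0.01, 0.1

P = 1.0                      # any positive starting value
for _ in range(200):         # converges geometrically for this stable A
    P = A * P * A + G * Q * G - (A * P * C) * (C * P * A) / (C * P * C + R)

L = A * P * C / (C * P * C + R)   # stationary gain (3.27)
```

In practice a Riccati solver (for example Matlab's kalman in the Control System Toolbox) computes this directly; the loop above only illustrates that P settles at a fixed value.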
Figure 3.1. Kalman filter block diagram. This figure shows the computational procedure of the Kalman filter, and its relation to the system. The blocks "Discrete System" and "Measurement" are a graphical representation of the state-space model (3.6) and (3.7). The variable x̂_{t|t-1} is calculated as A x̂_{t-1|t-1} + B u_t, as in (3.18). The estimate for the current time step, x̂_{t|t}, is calculated as x̂_{t|t-1} + L(y_t − C x̂_{t|t-1}), as in (3.22). It can here be seen how the estimate x̂_{t|t} delivered from the Kalman filter is saved in the delay block, so that it can be used in the next time step. The Kalman filter recursively computes values of x̂ using the precalculated stationary values of P and L, and the input signals y and u. [11]
This is done for systems with non-Gaussian process noise and non-Gaussian measurement noise in the next two sections, following the theory in [11].

3.6.1 Shaping Filters for Non-Gaussian Process Noise

Consider a system given on the form

  ẋ_1 = A_1 x_1 + G_1 w_1    (3.28)
  y = C_1 x_1 + e    (3.29)

where w_1 is non-Gaussian noise and e is zero-mean Gaussian noise. Suppose w_1 can be modeled by a linear shaping filter according to

  ẋ_SF = A_SF x_SF + G_SF w    (3.30)
  w_1 = C_SF x_SF    (3.31)

where w is Gaussian noise. Then the filter can be included in the original state-space system, giving the new state-space system

  ẋ = A x + G w    (3.32)
  y = C x + e    (3.33)

where

  x = [ x_1 ; x_SF ],  A = [ A_1  G_1 C_SF ; 0  A_SF ],  G = [ 0 ; G_SF ],  C = [ C_1  0 ]    (3.34)-(3.37)

3.6.2 Shaping Filters for Non-Gaussian Measurement Noise

Consider a system given on the form

  ẋ_1 = A_1 x_1 + G_1 w    (3.38)
  y = C_1 x_1 + e_1    (3.39)

In this case, e_1 is non-Gaussian noise and w is Gaussian noise. Suppose e_1 can be modeled by a linear shaping filter according to

  ẋ_SF = A_SF x_SF + G_SF v    (3.40)
  e_1 = C_SF x_SF    (3.41)

where v is Gaussian noise. In this case the new state-space system becomes

  ẋ = A x + G W    (3.42)
  y = C x    (3.43)

where

  x = [ x_1 ; x_SF ],  A = [ A_1  0 ; 0  A_SF ],  G = [ G_1  0 ; 0  G_SF ],  C = [ C_1  C_SF ],  W = [ w ; v ]    (3.44)-(3.48)
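The block-matrix definitions above are mechanical to assemble. A Python sketch for the process-noise case of Section 3.6.1, with a made-up one-state plant and one-state shaping filter (none of these values come from the thesis):

```python
# Augmented system of (3.34)-(3.37) for scalar blocks; values are made up.
A1, G1, C1 = -1.0, 1.0, 1.0        # original plant (3.28)-(3.29)
A_SF, G_SF, C_SF = -0.5, 1.0, 1.0  # shaping filter (3.30)-(3.31)

# Augmented state x = [x1, x_SF]:
A = [[A1, G1 * C_SF],   # x1' is driven by x1 and, via w1 = C_SF x_SF, by x_SF
     [0.0, A_SF]]       # the shaping-filter state evolves on its own
G = [[0.0],             # the Gaussian noise w enters only the shaping filter
     [G_SF]]
C = [C1, 0.0]           # the measurement still sees only x1
```

The measurement-noise case of Section 3.6.2 is assembled analogously, with block-diagonal A and G and with C = [C1, C_SF].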
Chapter 4

Choosing the Kalman Filter Parameters

In this chapter different possibilities on how to choose and tune the Kalman filter parameters are presented. First it will be shown how to estimate the parameters using information about the process and measurement noise. Then it will be described how to tune the parameters using knowledge about the parameters' influence on the behavior of the filter, using open-loop or closed-loop simulation, and finally using autotuning. A global optimization technique called simulated annealing is implemented for autotuning in Matlab and Simulink.

4.1 Estimating the Covariances

The Kalman filter assumes that all disturbances are stochastic variables known in advance. If the system is linear and both the process noise w and measurement noise e have a normal distribution, it can be shown that the Kalman filter is the optimal filter (in the sense of minimizing the variance of the estimate error). In this case the covariance matrices Q and R should be estimated using measures of the noises e and w. [12]

Each element of R is defined as [3]

  R_ij = E[(e_i − ē_i)(e_j − ē_j)^T]    (4.1)

where ē_i is the mean value of e_i, and the formulation E[ζ] means the statistical expected value of ζ. The matrix R is a symmetric n × n matrix, where n is the number of elements in e. The diagonal elements of the covariance matrix are the variances of the components of e, while the off-diagonal elements are the scalar covariances between its components. If the components of e are independent of each other, the off-diagonal elements of R should be set to 0.

By investigating the measured signals, it is possible to obtain an estimation of the covariance matrix R. Assume that the information in the measured signal y is constant, for example the speed of the vehicle. The uncertainties of the measured signals are here assumed to be independent, which results in a diagonal R matrix. The elements of R can then be estimated as in [4] and [2] using

  R_i = 1/(N−1) Σ_{t=1}^{N} (y_i(t) − ȳ_i)^2    (4.2)

where i is the i:th measured signal, ȳ_i is the mean value of y_i, and N is the number of samples used for the estimation.

Now assume that the necessary information in the measured signal is of low frequency. The measurement noise e can then be estimated by lowpass filtering the signal y as [2]

  e = y(t) − y_f(t)    (4.3)

where y_f is the lowpass filtered signal. The elements of R can then be calculated as

  R̂_ij = 1/N Σ_{t=1}^{N} e_i(t) e_j(t)    (4.4)

where i and j are the index of the measured signals and N is the number of samples used for the estimation. The estimation of the covariance matrix can be performed in Matlab using the command "covf" [18].

The definition of the covariance matrix for the process noise Q is similar as for R, and it can also be estimated using a similar method. The problem that might arise is the fact that not all states in the state vector are measurable.
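A small synthetic example of (4.2): estimating one diagonal element of R from N samples of a constant signal with additive noise. This is a pure-Python sketch with invented numbers; the thesis performs the estimation in Matlab with "covf" instead:

```python
import random

random.seed(0)
N = 10000
# Constant "information" (e.g. a held vehicle speed) plus measurement noise
# with variance 0.04 (standard deviation 0.2):
y = [20.0 + random.gauss(0.0, 0.2) for _ in range(N)]

y_mean = sum(y) / N
# Sample variance, one diagonal element of R as in (4.2):
R_i = sum((yt - y_mean) ** 2 for yt in y) / (N - 1)
# R_i comes out close to the injected noise variance 0.04
```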
4.2 Choosing Q and R Manually

A drawback of the Kalman filter is that knowledge about process and measurement noise statistics is required. It may be possible to determine the measurement noise covariance from measurements, but determining the process noise covariance is more difficult, for example when not all the states are measurable. Instead, a common approach is to test different choices of Q and R until the Kalman filter shows acceptable behavior.

L is calculated using A, C, Q and R, and will therefore depend on which characteristics the process noise and the measurement noise are given in the model. To understand how the parameter choice affects the filter, a discussion of the function of the parameters will now be held, based on parts from [12] and [11]. The influence on L from different choices of R and Q can be understood by inserting (3.7) in (3.22), which gives

  x̂_k = x̂_k^- + L_k (y_k − C x̂_k^-)
      = x̂_k^- + L_k C x_k + L_k e_k − L_k C x̂_k^-
      = x̂_k^- + L_k C (x_k − x̂_k^-) + L_k e_k    (4.5)
This shows that the state estimate x̂_k is adjusted using the difference between the estimate x̂ and the real state x, as well as the measurement noise e. Both terms are multiplied with the gain L. A large Q results in a large L, which means a fast filter with good trust in the measurements, but it also makes the observer more sensitive to the measurement noise e. A large R results in a small L, which means that the measurements are considered not reliable. This demands good trust in the model, since a small L makes the observer sensitive to errors in the model.

When choosing the parameters, the absolute values do not matter; the quotient between Q and R is the design parameter. Assume that the parameters are chosen as Q = Q_1 and R = R_1. Then the stationary values of P and L can be calculated using (3.26) and (3.27); assume that the calculated values are P_1 and L_1. If Q and R are both multiplied with the same value λ, the resulting P in (3.26) is according to [12] also multiplied with λ. This gives P = λP_1. The calculation of L in (3.27) then becomes

  L = A P C^T (C P C^T + R)^{-1}
    = A (λP_1) C^T (C (λP_1) C^T + λR_1)^{-1}
    = λ A P_1 C^T λ^{-1} (C P_1 C^T + R_1)^{-1}
    = A P_1 C^T (C P_1 C^T + R_1)^{-1}
    = L_1    (4.6)

In other words, L remains the same when Q and R are multiplied with the same value. Therefore R can be set to a constant value and Q adjusted until the filter gets acceptable behavior.

4.3 Simulation

Using simulation in Simulink, different parameter choices can be evaluated without having to make a test drive in a real vehicle. Here two different simulation methods will be explained, open-loop and closed-loop.

4.3.1 Open-Loop Simulation

One method is open-loop simulation. With this method, measurements done in a test car can be recorded and given as input back to the model in Simulink. The filter is fed with the recorded measurements, but the output from the filter is not connected to the controller run in the simulation. The reason why this simulation method is called open-loop is that the different parameter choices do not affect the behavior of the vehicle. It is now possible to simulate the Kalman filter with different parameters and compare the outputs. This type of simulation is used to produce all the diagrams presented in the next chapters.
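The invariance shown in (4.6) is easy to verify numerically for the scalar stationary filter of Section 3.5.4. In this Python sketch (illustrative values, not from the thesis), scaling Q and R by the same factor λ leaves L unchanged:

```python
def stationary_gain(A, C, G, Q, R, iters=300):
    # Fixed-point iteration of the scalar Riccati equation (3.26),
    # followed by the stationary gain (3.27).
    P = 1.0
    for _ in range(iters):
        P = A * P * A + G * Q * G - (A * P * C) ** 2 / (C * P * C + R)
    return A * P * C / (C * P * C + R)

A, C, G = 0.9, 1.0, 1.0
L1 = stationary_gain(A, C, G, Q=0.01, R=0.1)
lam = 7.3                                    # arbitrary positive factor
L2 = stationary_gain(A, C, G, Q=lam * 0.01, R=lam * 0.1)
# L1 == L2 (up to rounding): only the quotient Q/R matters
```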
4.3.2 Closed-Loop Simulation

The other type of simulation used is closed-loop simulation. With this method a scenario including the vehicle and the road is simulated: the vehicle, its controller and the environment are simulated together. Figure 4.1 shows the Simulink model used by the closed-loop simulation. The model consists of several subsystems, all developed by DaimlerChrysler, and one of them is the controller containing the Kalman filter. The output from the filter is attached to the controller, and the behavior of the vehicle is affected by how well the filter is performing.

Figure 4.1. Closed-loop simulation.

The environment and the actions of the driver are specified using simulation scenarios. It is also possible to specify another vehicle which is traveling in front, a so-called "rabbit".

Simulation Scenarios

For the closed-loop simulation two different scenarios are prepared. The first scenario represents the vehicle driven up and down a hill. The first part of the hill has a slope of 10% (meaning "uphill"), and the second part has a slope of −15% (meaning "downhill"). The vehicle is heavily loaded; the total mass of the vehicle is 1.8 times the normal mass. The vehicle is driven at 80 km/h and the driver has activated the cruise control. The driver adjusts the speed by using the cruise control lever, first by increasing the set speed to 120 km/h and then by decreasing it again to 80 km/h.

The second scenario represents the vehicle driven on a straight road. The vehicle is unloaded and the driver has activated the cruise control with a set speed of 120 km/h.
4.4 Autotuning

Tuning the filter, i.e., choosing the values of the process noise covariance Q and measurement noise covariance R so that the filter performance is optimized with respect to some performance measure, is a challenging task. Poor tuning may result in unsatisfactory performance of an otherwise powerful algorithm. It is therefore often desirable to develop automated systematic procedures for Kalman filter tuning. [12]

4.4.1 Evaluation Using RMSE

The observer gives a so-called point estimate x̂ of the state vector x using the inputs u and measurements of the output y. For evaluation it is necessary to measure the performance of this estimation. A performance evaluation variable may be the variance of the state estimation error x_k − x̂_k (which is also minimized with the Kalman filter). Optimally this should be done using real-life testing (instead of simulation), but this might not be possible, for instance when it is too expensive to repeat the same experiment many times. Simulation can be used instead, as long as it is possible to generate several data sets under the same premises. Suppose that it is possible to generate M realizations of the data u and y and apply the same estimator to all of them. One such performance measure is the root mean square error (RMSE) described in [13]

  RMSE(k) = sqrt( (1/M) Σ_{j=1}^{M} || x_k − x̂_k^{(j)} ||_2^2 )    (4.7)

where the subindex 2 stands for the 2-norm, also called the euclidean norm. (The euclidean norm is defined ||x||_2 = sqrt(x_1^2 + · · · + x_n^2).) This is an estimate of the standard deviation of the estimation error norm at each time instant. A scalar measure for the whole data sequence is

  RMSE = (1/k) Σ_{i=1}^{k} sqrt( (1/M) Σ_{j=1}^{M} || x_i − x̂_i^{(j)} ||_2^2 )    (4.8)

The scalar performance measure can be used for autotuning. [12]

4.4.2 Autotuning Using Matlab

A systematic method of choosing Q and R is to perform many simulations using different parameters and evaluate the performance. Performing it manually is time-consuming with no guarantee for optimality. To automatically find the optimal parameters for the Kalman filter implemented at DaimlerChrysler, an optimization algorithm is developed in Matlab. The algorithm starts with some parameter values, then simulates the system using these values for the Kalman filter and calculates a cost function based on the RMSE explained in the previous section. The cost function measures how good the actual parameters are working.
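Equations (4.7)-(4.8) translate directly into code. The following Python sketch uses synthetic stand-ins for the true states and the M estimate realizations; none of these numbers come from the thesis:

```python
import math

x_true = [[1.0, 2.0], [1.5, 2.5], [2.0, 3.0]]    # true states x_k, k = 0..2
x_hat = [                                        # M = 2 realizations
    [[1.1, 2.1], [1.4, 2.4], [2.2, 3.1]],
    [[0.9, 1.9], [1.6, 2.6], [1.9, 2.8]],
]
M, K = len(x_hat), len(x_true)

def rmse_at(k):
    # RMSE(k) of (4.7): root of the mean squared 2-norm error over
    # the M realizations at time instant k.
    s = 0.0
    for j in range(M):
        s += sum((a - b) ** 2 for a, b in zip(x_true[k], x_hat[j][k]))
    return math.sqrt(s / M)

# Scalar measure for the whole sequence, as in (4.8):
rmse_total = sum(rmse_at(k) for k in range(K)) / K
```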
The optimization algorithm then changes the values, simulates again and calculates a new value for the cost function. The algorithm continues until optimal values for the parameters are found. The source code for the script implementing this optimization technique is found in Appendix A. The code is a modified example from the Optimization Toolbox [6], and it uses a function called "lsqnonlin".

Using this optimization technique does not give a satisfactory result. The optimization function does not vary the parameters enough to see if there are any better solutions; after several (about 100) restarts with different starting parameters, the script each time ends up giving almost the same parameters back to the user. One reason is that Matlab's optimization functions are designed to find local minima, and they can be fooled, especially by oscillatory functions. They will only find a global minimum if it is the only minimum and the function is continuous. Global optimization problems are typically quite difficult to solve, and methods for global optimization problems can be categorized based on the properties of the problem that are used and the types of guarantees that the methods provide for the final solution.

4.4.3 Simulated Annealing

The rest of this chapter will be used to present an algorithm implementing the theory of simulated annealing. More theory of the algorithm can be found in [15].

Simulated annealing (SA) is a stochastic global minimization technique. It is a popular approach for the global optimization of continuous functions when derivatives of the objective functions are not available. The name and inspiration come from annealing in metallurgy, a technique involving heating and controlled cooling of a material to increase the size of its crystals and reduce their defects. The heat causes the atoms to become unstuck from their initial positions (a local minimum of the internal energy), and the slow cooling gives them more chances of finding configurations with lower internal energy than the initial one.

Given a function E(s) depending on some parameter vector s = [s_1, ..., s_n], the SA algorithm attempts to locate a good approximation to the global minimum of the function, as long as some constraints are fulfilled. By analogy with the physical process, each step of the SA algorithm considers some random neighbor s̃ of the current parameter state s, and probabilistically decides between moving the system to state s̃ or staying in s. This probability is a function P(E(s), E(s̃), T) depending on the corresponding values of the function E for the states s and s̃, and on a parameter T (called the temperature) that is gradually decreased during the process. There are different possibilities to choose the function P; this will be explained later.

Another explanation of the simulated annealing algorithm goes as follows. Consider a man running in the mountains. His task is to find the place with the lowest altitude. The cost function that should be minimized is in this case the man's altitude, and the variable T is his current strength. The man's will to go "uphill" is larger at the beginning, when his strength T is large. When T tends to zero he will only have the strength to run "downhill". To find the place with the lowest altitude (the global minimum) he sometimes has to try running up the hills, otherwise he may be stuck in a valley (local minimum) not knowing that a better solution is hiding in another valley behind the next hill.

Pseudo-Code

The following pseudo-code describes the simulated annealing algorithm. It starts at a state s0 and recursively explores the search space using the method described above. The algorithm continues until a maximum number of evaluations k_max has been reached, or until a state with the target function value e_target or less is found. The function call "neighbor(s)" should generate a randomly chosen neighbor of a given state s. The function call "random()" should return a random value in the range [0, 1]. The annealing schedule is defined by "tempfunction()", which should yield the temperature to use, given the fraction of the time that has passed so far.

  s = s0;                           // Initial state
  e = E(s);                         // Initial function value
  s_best = s;                       // Initial best parameters
  e_best = e;                       // Initial function minimum
  T = initialtemperature(k_max);    // Initial temperature
  k = 0;                            // Evaluation count
  while k < k_max and e > e_target        // While not good enough
      s_neighbor = neighbor(s);           // Pick some neighbor
      e_neighbor = E(s_neighbor);         // Compute its function value
      if e_neighbor < e_best then         // Is this a new best?
          s_best = s_neighbor;            // Yes, save it
          e_best = e_neighbor;
      end if
      if random() < P(e, e_neighbor, T) then   // Move to the neighbor state?
          s = s_neighbor;                      // Yes, change state
          e = e_neighbor;
      end if
      T = tempfunction(T, k/k_max);       // Calculate new temperature
      k = k + 1;                          // Count evaluations
  end while
  return s_best;                          // Return best solution found

Implementation of the SA Algorithm

In order to apply the SA method to a specific problem, one must specify the parameter search space, the neighbor selection method, the transition probability function, and the annealing schedule (temperature function). These choices can have a significant impact on the effectiveness of the method. Unfortunately, there are no choices that will be good for all problems, and there is no general way to find the best choices for a given problem. It has therefore been observed that applying the SA method is more an art than a science. In the following subsections it is explained how the algorithm is implemented. The general demands and calculations are described here, and the complete implementation of the algorithm in Matlab can be found in Appendix B. The script can be used to perform autotuning on the filter.

Choosing the Neighbors

The neighbors of the current state have to be chosen so that the function values of the neighboring states are not too far away from the function value of the current state. It is true that choosing a neighbor far away from the current state could lead to finding the best solution faster, but this also leads to a low probability of moving to the new solution, and the risk of getting stuck in a non-optimal solution is higher. In the Matlab implementation found in Appendix B, the neighbors s̃ of the current state s are found by moving a random distance from s in a random direction. The distance has been chosen as a value between −0.5 and +0.5 times the current parameter vector s. The Matlab code is as follows:

  move = (rand(1,3)-0.5).*s;   % Randomize between -0.5 and +0.5
  s_neighbour = s + move;      % Calculate new parameters

Transition Probability Function P

The function P calculates the probability of making the transition from the current state s to a candidate new state s̃. The function depends on the corresponding function values E(s) and E(s̃), and on the temperature T. The probability (a number between 0 and 1) should be greater than 0 even when E(s̃) > E(s). This is an essential requirement, meaning that the system may move to the new state even when its solution is worse than the current one. It is this feature that prevents the method from becoming stuck in a local minimum. When T is large, the probability of moving to the new state is higher, and in this way the algorithm can move on finding a good solution. As the algorithm evolves and T goes to zero, the probability P must tend to zero if E(s̃) > E(s) and to a value greater than zero if E(s̃) < E(s). This makes the system favor moves that go "downhill" and avoid those that go "uphill". When T is zero the algorithm will fall down to the nearest local minimum.

In the implementation found in Appendix B the probability is calculated as

  P = 1                        if E(s̃) < E(s)
  P = e^{(E(s) − E(s̃))/T}      otherwise    (4.9)

This is the method used in [21] and [15]. Unfortunately, there is no mathematical justification for using this particular formula in SA, other than the fact that it corresponds to the requirements explained above.
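The pseudo-code, the neighbor rule and the transition probability (4.9) fit in a short self-contained Python sketch. The one-dimensional cost function below is a toy stand-in for the RMSE-based cost evaluated by Simulink simulation in the thesis, and all tuning values here are made up:

```python
import math
import random

random.seed(1)

def E(s):
    # Toy cost with several local minima (stands in for the simulated cost).
    return s * s + 2.0 * math.sin(3.0 * s)

def neighbor(s):
    # Random move of up to +-0.5 times the current value; the small
    # absolute term keeps the search alive near s = 0.
    return s + (random.random() - 0.5) * (abs(s) + 0.5)

def P(e, e_new, T):
    # Transition probability (4.9): downhill moves are always accepted,
    # uphill moves with probability exp((e - e_new) / T).
    return 1.0 if e_new < e else math.exp((e - e_new) / T)

s = 3.0                        # initial state s0
e = E(s)
s_best, e_best = s, e
T, alpha = 5.0, 0.97           # initial temperature and cooling factor
for _ in range(500):
    s_new = neighbor(s)
    e_new = E(s_new)
    if e_new < e_best:         # remember the best state seen so far
        s_best, e_best = s_new, e_new
    if random.random() < P(e, e_new, T):
        s, e = s_new, e_new    # move, possibly uphill
    T *= alpha                 # exponential annealing schedule
```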
The Matlab code implementing (4.9) is as follows:

  function P = transition_P(E, E_neighbor, T)
  if E_neighbor < E
      P = 1;                        % Always go down the hill
  else
      P = exp((E-E_neighbor)/T);    % Move if temperature is high
  end
  end

Annealing Schedule

The annealing schedule must be chosen with care. Typically the schedule parameters are obtained by trial and error and tuned to the function E, which should be minimized. In the problem at hand, this function depends on the simulated model, and as the evaluation of this function involves running a simulation in Simulink, such tuning procedures are impractical. What is needed is an automatic and reasonable way of setting these parameters based on some initial information obtained by the algorithm. Such a method is presented in [21] and used in this Master's thesis.

For this thesis an exponential schedule has been chosen, where the temperature decreases by a fixed cooling factor 0 < α < 1 at each step. The temperature T_k for the current time step k is calculated using the cooling factor α and the temperature from the previous time step, T_{k-1}, as

  T_k = α T_{k-1}    (4.10)

The initial temperature T_0 and the cooling factor α now have to be chosen. The initial temperature must be large enough to make the "uphill" and "downhill" transition probabilities nearly the same; otherwise, the algorithm still behaves like a random search. To pick the initial temperature T_0, one must have an estimate of the difference E(s̃) − E(s) for a random state and its neighbors. Let s0 be the initial state of the system, and generate a set of solutions that lie in the neighborhood of s0. Let s_bestn and s_worstn be the best and worst among the neighbor solutions. If E(s_worstn) > E(s0), define the maximum uphill move as maxmove = E(s_worstn) − E(s0). Otherwise, define maxmove = E(s0) − E(s_bestn). It is now reasonable to assume that the initial temperature T_0 is high enough if an "uphill" move of maxmove will be accepted with a relatively high probability, say 0.9. Setting P = 0.9 for an "uphill" move of maxmove, (4.9) gives

  0.9 = e^{−maxmove/T_0}    (4.11)

and T = T_0 can be calculated.

Next, the cooling parameter α is calculated. The temperature must decrease so that it is zero, or nearly zero, when the algorithm is supposed to finish. To do this, assume that the final acceptance probability for an "uphill" move of maxmove should be very low, say 10^{-6}; if the final probability is too high, the algorithm still behaves like a random search at the end. As k_max is the maximum number of function evaluations allowed, this is also the number of times the temperature is reduced. (4.9) now gives

  10^{-6} = e^{−maxmove/(T_0 α^{k_max})}    (4.12)
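Solving (4.11) for T_0 and (4.12) for α gives closed-form expressions, sketched here in Python (maxmove and k_max are made-up example values, not from the thesis):

```python
import math

maxmove = 2.5           # example uphill move E(s_worstn) - E(s0)
k_max = 200             # maximum number of function evaluations
p0, p_end = 0.9, 1e-6   # initial / final acceptance probability for maxmove

# (4.11): p0 = exp(-maxmove / T0)  =>  T0 = -maxmove / ln(p0)
T0 = -maxmove / math.log(p0)

# (4.12): p_end = exp(-maxmove / (T0 * alpha**k_max))
#         =>  alpha = (maxmove / (-T0 * ln(p_end)))**(1 / k_max)
alpha = (maxmove / (T0 * -math.log(p_end))) ** (1.0 / k_max)
```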
which means that several executions may give diﬀerent outputs. The Matlab implementation found in Appendix B can be executed recursively and the starting parameter state s0 for the next iteration is set to the best solution found in the previous iteration. or based on the current function value being too high from the best value so far. Results Figure 4. The decision to restart could be based on a ﬁxed number of evaluation steps. instead of justifying the maximum number of iterations allowed (kmax ).2 shows one execution of the SA script with 200 closedloop simulations in the ﬁrst scenario (the hill) described in Section 4. restarting over and over again using the previous best solution found as the new initial solution. When a better solution is needed. The diagram shows that the SAalgorithm is trying to ﬁnd the global minimum of the cost function. calculated by using the diﬀerence between the ﬁlter estimate and the “real” value. In this way the algorithm can be left running for a long time. The upper diagram is the cost function described in Section 4. but decreases together with the temperature shown in the lower diagram.4. but does not get stuck in local minima. Restarting Choosing the Kalman Filter Parameters The SAalgorithm uses a random method to ﬁnd the solution. Moving back to a solution that was signiﬁcantly better rather than always moving from the current state is called restarting.2. The allowance for parameter changes causing a higher value of the cost function is high in the beginning. . and more computer time is available. it is sometimes better to start the algorithm over with a new initial state s0 .30 and α can be obtained.3.1.
the current solution changes almost randomly when T is large. This allowance for “uphill” moves saves the method from becoming stuck at local minima. .4. One execution of the simulated annealing (SA) algorithm using 200 evaluations. The probability depends on the parameter T (called the temperature). As can be seen. The bottom diagram shows the temperature that is gradually decreasing during the process. each step of the SA algorithm probabilistically decides between moving the system to the new state or staying in the old state.4 Autotuning 31 Figure 4. The top diagram shows the RMSE costfunction for all the states evaluated.2. By analogy with the physical process.
Chapter 5

Kalman Filter Implementation

In this chapter the model for the longitudinal dynamics of the vehicle is derived, and an initial version of the Kalman filter is implemented and tested. It will be shown that this model cannot take all driving situations into consideration, resulting in a large error in the calculated acceleration. To deal with this error, a Kalman filter is implemented. At the end of this chapter a discussion is held on how to best choose the filter parameters.

5.1 Overview of the Inner Control Loop

Before implementing the Kalman filter, a short explanation of its surroundings, the inner control loop of the vehicle longitudinal controller, is needed. First the function of the observer in the context of the inner control loop is explained, and then a model for the expected acceleration of the vehicle is derived.

An overview of the inner control loop is given in Figure 5.1. (For a complete diagram of the outer and inner control loops, refer to Figure 1.2.) Input to the controller is the desired acceleration ades. The desired engine torque Te and the desired brake torque Tb are calculated and given as input to the actuators. The controlled system is the vehicle with its actuators: engine, brake and gearbox. The output from the controlled system is the actual motion of the vehicle and can be thought of as the actual speed vreal and the actual acceleration areal, and the task is to get areal = ades. The block “Sensors” in the figure contains signal processing software which analyzes the motion of the vehicle and gives information back to the controller. The measured speed vm and acceleration am are derived from wheel speed sensors. The momentary deviation adev from the desired value is calculated as

adev = ades − am    (5.1)

This deviation is fed into a PID controller which calculates a control value to the actuators which minimizes the deviation. The control value ac is converted by two conversion functions F1 and F2 into the torques Te and Tb, which are given as input to the vehicle’s actuators engine and brake.

Figure 5.1. Inner control loop of the longitudinal controller. Input to the controller is the desired acceleration ades. The deviation adev is given as input to a PID controller. The output from the PID controller will form the variables Te and Tb described above, after passing through two conversion steps F1 and F2. Sensors are measuring the real speed vreal and acceleration areal of the vehicle.

ac is, as explained above, the control value from the controller given as input to the block F1. It consists of two parts: the output from the PID controller and the output from the block “Observer”. The output az from the observer is the estimated difference between the expected acceleration and the measured acceleration. This is summarized with the output from the PID controller, forming ac.

The block F1 calculates the needed torque T on the wheel axis from the corresponding acceleration ac; T is the torque needed on the wheel axis to give the vehicle the acceleration ac. Since m̃ is the standard mass of the vehicle plus the moments of inertia of the wheel axis and other rotating parts, ac m̃ is the force needed to get the desired acceleration ac. The other force taken into consideration here, Fresistance, is acting on the vehicle in its opposite direction of travel and is called “drive resistance”. It consists of the force due to air resistance, losses due to tire deflection, etc., and will be described in detail in Section 5.2. By adding Fresistance to ac m̃ and then multiplying with the wheel radius rw, F1 calculates the torque using the equation

T = rw (ac m̃ + Fresistance)    (5.2)

The output torque T is fed into another block, called F2 in the figure, before delivery to the actuators. Depending on the calculated torque T, action is taken either by the brake or the engine; the brake is used to decelerate, whereas the engine also can be used to decelerate. F2 coordinates the work of the engine, brake and gearbox.

The block called “Vehicle” is further described in Figure 5.2. Te and Tb are the desired torques given as input. The blocks “Ge” and “Gb” model the dynamics of the engine and brake, respectively, as transfer functions with torques Tengine and Tbrake as outputs. The engine and brake models calculate the estimated output torques Tengine and Tbrake, and the model of the longitudinal dynamics uses these values to calculate the speed and acceleration of the vehicle. These transfer functions and the equations describing the vehicle longitudinal dynamics will be presented in the next section.

Figure 5.2. Modeling engine and brake: the block “Vehicle” with the engine model Ge, the brake model Gb and the longitudinal dynamics with outputs vreal and areal.

The task of the observer is to estimate the part of the acceleration caused by the drive resistance parameters and other unknown parameters not taken into consideration by F1. This is performed by looking at the torques Te and Tb given to the actuators and calculating an expected acceleration; in later sections the acceleration calculated by the model will be called aexp. The difference between the expected acceleration and the measured acceleration gives a hint about the model error. This error, called az, is subtracted from the output from the PID controller to form ac.

5.2 Modeling the Acceleration

A model for the expected longitudinal acceleration of the vehicle will now be presented. Assume just for this section that the speed of the vehicle is v and the acceleration of the vehicle in its driving direction is a. These are the variable names used in this section for deriving the model; for the vehicle speed, the measured value vm will be used. Using the classical mechanical law from Newton, F = ma, the forces acting on the vehicle can be written as

ma = Fdrive − Fbrake − Fresistance    (5.3)

where Fdrive is the force acting on the vehicle through the transmission and engine, and Fbrake is the force from the braking system. The drive resistance Fresistance is modeled as

Fresistance = Fair + Froll    (5.4)
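The torque conversion performed by the block F1, eq. (5.2) above, is a single line of arithmetic. A minimal sketch; the numerical values in the usage example are invented for illustration, not parameters from the thesis:

```python
def f1_torque(a_c, m_eff, f_resistance, r_w):
    """Block F1, eq. (5.2): wheel-axis torque for a desired acceleration.

    a_c          desired acceleration [m/s^2]
    m_eff        effective mass m-tilde incl. rotating inertia [kg]
    f_resistance drive resistance F_air + F_roll [N]
    r_w          wheel radius [m]
    """
    return r_w * (a_c * m_eff + f_resistance)

# Example with invented values: 1 m/s^2, 1600 kg effective mass,
# 300 N drive resistance and 0.3 m wheel radius.
T_axle = f1_torque(1.0, 1600.0, 300.0, 0.3)   # 0.3 * (1600 + 300) = 570 N*m
```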
When a wheel is rolling, energy losses occur due to deflection of the tire. This is modeled as a force acting on the wheel in the opposite direction of rolling

Fr = crr N    (5.5)

where N is the normal force acting on the wheel from the ground and crr is the rolling resistance coefficient [20]. N is in this case defined as

N = mg/n    (5.6)

where m is the mass of the vehicle, g is the gravitational acceleration and n is the number of wheels. Assuming that all wheels have the same crr, the total rolling resistance acting on the vehicle from all wheels can be calculated as

Froll = Fr · n = crr · (mg/n) · n = crr mg    (5.7)

The air resistance Fair is modeled as follows. When an object is moving through air at relatively high speed, the object experiences a force acting on it against its direction of travel. This force can according to [14] be written as

Fair = (1/2) ρ cd Aw (v + vwind)²    (5.8)

where ρ is the density of the air, cd is the drag coefficient and Aw is a reference area related to the projected front area of the object. vwind is the unknown speed of the wind and it will therefore be neglected in this model.

Fdrive and Fbrake depend on the torques acting on the wheel axis, Tdrive and Tbrake, and the wheel radius rw as

Fdrive = Tdrive/rw    (5.9)
Fbrake = Tbrake/rw    (5.10)

The torque acting on the wheel axis Tdrive depends on the output torque from the engine Tengine, the gearbox and differential ratios ig and id, the efficiency factor for the drivetrain η, the moment of inertia for engine and gear, Ie and Ig, and the moment of inertia for the front and rear wheel axis, If and Ir, as follows [20]

Tdrive = η id ig Tengine − (id² ig² Ie + id² Ig) a/rw − (If + Ir) a/rw    (5.11)

Inserting (5.9), (5.10) and (5.11) in (5.3) yields

ma = (η id ig/rw) Tengine − (id² ig² Ie + id² Ig) a/rw² − (If + Ir) a/rw² − (1/rw) Tbrake − (1/2) ρ cd Aw v² − crr mg    (5.12)

Now let

m̃ = m + (id² ig² Ie + id² Ig)/rw² + (If + Ir)/rw²    (5.13)

Inserting (5.13) in (5.12) gives

m̃ a = (η id ig/rw) Tengine − (1/rw) Tbrake − (1/2) ρ cd Aw v² − crr mg    (5.14)

Dividing with m̃ yields the equation for the vehicle acceleration as

a = (η id ig)/(rw m̃) · Tengine − 1/(rw m̃) · Tbrake − (ρ cd Aw)/(2m̃) · v² − (crr mg)/m̃    (5.15)

In previous work at DaimlerChrysler [19], models for the engine and brake were prepared. The engine is modeled as a transfer function Ge(s) as

Ge(s) = L{ge(t)} = (k1 ω0²)/(s² + 2Dω0 s + ω0²) · e^(−sTt1)    (5.16)

which relates the input torque Te to the output torque from the engine, Tengine, as

Tengine(t) = ge(t) ∗ Te(t)    (5.17)

In the same way, the brake is modeled as a transfer function Gb(s) as

Gb(s) = L{gb(t)} = k2/(Tt2 s + 1) · e^(−sTt3)    (5.18)

which relates the input torque Tb to the output torque from the brake, Tbrake, as

Tbrake(t) = gb(t) ∗ Tb(t)    (5.19)

The parameters in the models were found in [19] using system identification and chosen to the mean values of different test drives with different vehicles as

k1 = 1
ω0 = 16.7 rad/s
D = 0.82
Tt1 = 90 ms
k2 = 0.98
Tt2 = 80 ms
Tt3 = 140 ms

The results are averages of several tests with different vehicles and will be presented here and used in this Master’s thesis. See [19] for more details. Adding this information to (5.15) gives the model for the longitudinal acceleration of the vehicle used by the Kalman filter

a(t) = (η id ig)/(rw m̃) · (ge(t) ∗ Te(t)) − 1/(rw m̃) · (gb(t) ∗ Tb(t)) − (ρ cd Aw)/(2m̃) · v(t)² − (crr mg)/m̃    (5.20)

In Section 5.4 this calculated (expected) acceleration will be called aexp.
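The static part of eq. (5.20) can be sketched as a function. The engine and brake dynamics (the convolutions with ge and gb) are left out here, so the torques passed in are assumed to already be the filtered outputs of Ge and Gb, and all default parameter values are invented for illustration only:

```python
def expected_acceleration(t_engine, t_brake, v, *, m_eff, m, eta=1.0,
                          i_d=3.0, i_g=1.5, r_w=0.3, rho=1.2,
                          c_d=0.3, a_w=2.2, c_rr=0.012, g=9.81):
    """Static part of eq. (5.20): expected longitudinal acceleration.

    t_engine, t_brake are assumed to be the filtered outputs of Ge and Gb;
    m_eff is the effective mass m-tilde of eq. (5.13).  All defaults are
    illustrative values, not identified vehicle parameters.
    """
    drive = eta * i_d * i_g * t_engine / (r_w * m_eff)   # engine term
    brake = t_brake / (r_w * m_eff)                      # brake term
    air = rho * c_d * a_w * v ** 2 / (2.0 * m_eff)       # air resistance
    roll = c_rr * m * g / m_eff                          # rolling resistance
    return drive - brake - air - roll

# 100 N*m engine torque, no braking, 20 m/s, invented masses.
a_exp = expected_acceleration(100.0, 0.0, 20.0, m_eff=1700.0, m=1600.0)
```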
5.3 Errors in the Acceleration Model

A thorough validation of the acceleration model is not a subject of this Master’s thesis. But to verify that the model seems reasonable, some tests will now be presented. During test drives, the signal am from the sensors is recorded. This measurement of the actual acceleration is then compared with the expected acceleration calculated by the model in (5.20). In this section five such recordings will be presented.

Figure 5.3 shows a test drive using cruise control (explained in Section 2.5). The vehicle is traveling with a speed of 60 km/h when the driver changes the set desired speed to 30 km/h, then after nearly 30 seconds changes back to 60 km/h again. The measurement has been recorded during a test with a relatively nervous controller, causing the large oscillations between 10 and 20 seconds.

Figure 5.3. Test drive using cruise control. The vehicle is traveling with a speed of 60 km/h when the driver changes the set desired speed to 30 km/h, then after nearly 30 seconds changes back to 60 km/h again. The figure shows the calculated expected acceleration (solid line) and the measured acceleration (dashed line).

Figure 5.4 shows a similar test drive using cruise control, this time with set speeds 60 km/h, 30 km/h, 60 km/h and then 30 km/h again. Also here the agreement of the measured and calculated acceleration can be recognized. In both Figure 5.3 and Figure 5.4 it can be observed that the agreement between the measured and calculated expected acceleration is relatively good. The main characteristics of the acceleration have been captured by the model. Some differences between the calculated and the measured signal can be seen in the figures. This is expected, as the model does not exactly describe the specified vehicle: the model parameters have been chosen as the mean values from several test drives with different vehicles [19], and therefore the model does not exactly comply with the vehicle being used here.

Figure 5.4. Test drive using cruise control. The vehicle is traveling with a speed of 60 km/h when the driver changes the set desired speed first to 30 km/h, then back to 60 km/h, and at last to 30 km/h again. The solid line is the calculated expected acceleration and the dashed line is the measured acceleration.

An outstanding feature of the calculated value is that it is always a bit, and sometimes up to 0.5 s, “faster” than the measured value. The reason for this could be that the identified time delays in the models for the engine and brake are too small when applied to the vehicle used in the tests. Another reason is that the measurement of the acceleration in the vehicle contains a lowpass filter with some time delay.

Figure 5.5 shows the vehicle traveling with a constant speed of 30 km/h on a bumpy road. As can be seen, the calculated value is always a bit higher than the measured value. The reason for this might be that the rolling resistance on the bumpy road is higher than expected, and that the mass of the vehicle is higher than set in the model, due to one extra passenger.

In Figure 5.6 a test has been made using the same vehicle but with an attached trailer with a mass of 2000 kg. The vehicle is traveling using cruise control, with a desired speed of 60 km/h. After 16 seconds the desired speed is changed to 80 km/h. The vehicle loses speed and the controller tries to compensate, resulting in an oscillatory behavior. As can be seen in Figure 5.6 and Figure 5.7, the model does not perform as well when changing the working conditions; the calculated expected acceleration does not comply with the measured acceleration in this case. The large errors in the calculations are because of a wrong value of the parameter m, the mass of the vehicle. In the current model, the vehicle mass is a constant parameter.
Figure 5.5. Test drive using cruise control on a bumpy road. The vehicle is traveling with a constant speed of 30 km/h. The test has been done using a relatively nervous controller.

Figure 5.6. Test drive with a heavy trailer (2000 kg). The vehicle and trailer are traveling using cruise control, with a desired speed of 60 km/h. After 16 seconds the desired speed is changed to 80 km/h. The vehicle loses speed and the controller tries to compensate, in this test resulting in an oscillating behavior. As can be seen, the calculated expected acceleration does not comply with the measured acceleration. The large errors in the calculations are because of a wrong value of the parameter m, the mass of the vehicle.
Figure 5.7 shows a test drive with the same vehicle, this time without trailer but with a changing slope. The vehicle is driven up and down a steep hill. First the slope of the road is 0 % (horizontal road), then changed to 20 % (uphill), then to −15 % (downhill). As expected, the calculated acceleration does not comply with the measured acceleration. The reason is that the model does not include the case of a changed road slope.

Figure 5.7. Test drive up and down a steep hill using cruise control. First the slope of the road is 0 % (horizontal road), then changed to 20 % (uphill), then to −15 % (downhill). The calculated acceleration does not comply with the measured acceleration, as the model does not include the case of a changing slope.

It should be mentioned that all models have errors. It does not matter how complex the model is; it will in practice never exactly describe the real physical system. In many implementations there are good reasons to keep the model simple. Especially for real-time systems it is a good practice to keep models as simple as possible, to avoid time-consuming computations and dubious parameters. In this case the model for the vehicle acceleration in (5.20) does not comply with the real system in all situations. The following are some examples of what might happen. Two of the cases have already been mentioned before in this text.

• The total mass of the vehicle is not m as in the model, due to extra passengers, baggage or a trailer. The value of the parameter m is set to the mass of the vehicle including full tank, plus 80 kg for the weight of the driver. This affects the calculation of m̃ in (5.13) as well as Froll in (5.7), and has a large effect on the calculation of the expected acceleration in (5.20). If the attached trailer is equipped with brakes, the trailer brake will help and compensate partially for the extra weight when braking; the large change in the mass will then only be experienced when accelerating.
• The parameter crr in (5.7) is in the model set to a constant value. In practice, however, the rolling resistance changes depending on the driving conditions, for example in case of tire-pressure drop or when driving on sand. According to [8] the real value of the parameter can vary up to 3.5 times the standard value.

• In the calculation of the air resistance Fair in (5.8), the speed of the wind vwind cannot be taken into account. The drag coefficient cd might also change, as well as the reference area Aw (for example due to extra baggage). According to [8] the real value of Fair can be up to 9 times the calculated value.

• Engine and brake might not behave as expected due to inaccuracy, errors or change in the friction coefficient of the brakes. For example, the friction coefficient of the brake may vary between +10 % and −15 % during a normal vehicle stop maneuver. This is observed to happen relatively often.

• All the parameters in (5.16) and (5.18) have been estimated with system identification. From several test drives the mean values have been selected. These parameters differ from those found in a real vehicle. As an example, the intervals for the engine parameters in (5.16) were found to be [19]

15.05 < ω0 < 19.1
0.2 < D < 1.54
0.07 < Tt1 < 0.12

• The slope has been totally neglected in the derived model. When driving the vehicle in a slope, a force Fslope arises having a direct effect on the vehicle’s acceleration. Assume that the slope of the road is α. Then the longitudinal force acting on the vehicle is given by

Fslope = mg sin α    (5.21)

Actually there is a longitudinal acceleration sensor mounted in the vehicles that could be used to estimate the slope α. The sensor measures the sum of the vehicle’s acceleration and the gravitational component parallel to the ground, as

asensor = a + g sin α    (5.22)

where asensor is the sensor value and a is the longitudinal acceleration of the vehicle. One problem with the sensor is that it might be difficult making good estimates of the road slope while cornering. In [16] it is proposed how to do road slope and vehicle mass estimation using Kalman filtering.
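Inverting (5.22) gives a simple slope estimate from the sensor value. A sketch of eqs. (5.21)–(5.22); the numbers in the usage example are invented:

```python
import math

def road_slope(a_sensor, a_long, g=9.81):
    """Invert eq. (5.22), a_sensor = a + g*sin(alpha), to get the slope angle.

    a_sensor is the longitudinal acceleration sensor value and a_long the
    vehicle's true longitudinal acceleration (here assumed known).
    """
    return math.asin((a_sensor - a_long) / g)

def slope_force(m, alpha, g=9.81):
    """Eq. (5.21): longitudinal force on a vehicle of mass m in a slope alpha."""
    return m * g * math.sin(alpha)

# A 20 % incline corresponds to alpha = atan(0.20); assume the vehicle
# itself accelerates at 0.5 m/s^2 (invented numbers).
alpha_true = math.atan(0.20)
a_sensor = 0.5 + 9.81 * math.sin(alpha_true)
alpha_est = road_slope(a_sensor, 0.5)
```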
5.4 Kalman Filter Model

The model for the vehicle longitudinal acceleration a in (5.20) has to be changed to comply with “the real world”. Therefore, let the calculated expected acceleration aexp be defined as

aexp = (η id ig)/(rw m̃) · Tengine − 1/(rw m̃) · Tbrake − (ρ cd Aw)/(2m̃) · v² − (crr mg)/m̃    (5.23)

This is the model that was derived in Section 5.2. Then the real vehicle acceleration areal is

areal = aexp + az    (5.24)

where az is called “disturbance acceleration”. This state variable represents the part of the vehicle’s acceleration caused by disturbances not described by the model for the longitudinal dynamics. It should cover all the model errors found in the previous section. Given a good description on how the state az is changing, the Kalman filter can be used to estimate this state. The observer estimating az is connected parallel to the PID controller, as described in Section 5.1 and shown in Figure 5.1.

The process noise is modeled in continuous time under the assumption that the state az undergoes slight changes each sampling period. According to [3] such changes can be modeled by a continuous time Gaussian noise w as

daz/dt (t) = w(t)    (5.25)

where

E[w(t)] = 0    (5.26)
E[w(t)w(τ)] = qδ(t − τ)    (5.27)

The scalar value q is here the process noise intensity (assumed to be time-invariant) and δ(·) is the Dirac (impulse) delta function [3]. Using the state-space model presented in (3.1) and (3.2), and choosing the state vector x = az, the continuous time state-space model for the Kalman filter becomes

dx/dt = 0 · x + 1 · w    (5.28)
y = 1 · x + e    (5.29)

with A = 0, G = 1 and C = 1. Here y = x + e = az + e means that the Kalman filter needs a measurement of the signal az. This can be provided by feeding it with a new constructed signal a∆. With the definition am = areal + e together with (5.24), a∆ can be defined as

a∆ = am − aexp = az + e    (5.30)
However. as can be veriﬁed using the rank test in Section 3. As explained in Section 4. The ﬁgures that follow have been generated using measured data from test drives.31) (5. The output from the Kalman ﬁlter with q = 1 follows the measurement almost exactly.5 Choosing the Filter Parameters Diﬀerent values of the noise intensities q and r will now be chosen and the performance evaluated using openloop simulation described in Section 4. the Kalman ﬁlter will estimate the state az . but this time with a measurement of a vehicle driven up and down a steep hill. This was expected.1. comparing . The faster ﬁlter is more sensitive to measurement noise. Figure 5. resulting in xk+1 yk = = 1 Ad xk + xk + ek 1 Gd wk (5. The measurement is taken from a test drive on a bumpy road. one with q = 1 and the other with q = 0. How noisy the measurement is can be deﬁned by modeling e. Figure 5.1). The Kalman ﬁlter is fed with the signal y = a∆ = am − aexp . The statespace model is discretized into a digital statespace model with sample time T . the signal will direct aﬀect the comfort of the driver and passengers. It can be seen that. It is assumed to be timeinvariant.2 the quotient between q and r is the design parameter.32) The scalar value r is here the measurement noise intensity.01. Therefore this section will show the function of the developed Kalman ﬁlter by choosing the parameters manually.33) (5. given the information that a∆ is a noisy measurement of the true value.44 Kalman Filter Implementation In this way.8 shows two diﬀerent Kalman ﬁlters. In this case it is easy. This means that the state is uniquely determinable from the inputs and outputs in the model.4. As can be seen. Therefore r is set to 1 and the Kalman ﬁlter is simulated using diﬀerent values for q.34) 1 Cd 5. while the slower ﬁlter delivers a smoother estimate of az . for example the script for autotuning developed in Section 4. as choosing a high q always means a faster ﬁlter. 
since the estimated signal az in this case will be directly connected to the engine and brakes (see Figure 5.3. the one using q = 1 is faster and follows the measured values more accurately. Here e will be modeled as Gaussian noise in the same way as w E[e(t)] = 0 E[e(t)e(τ )] = rδ(t − τ ) (5.4.2. using the theory in Section 3. It is possible to use other methods from Chapter 4. The system is observable. the absolute values do not matter.9 shows the same parameter choices. and those values are also shown in the ﬁgures as dots.
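With Ad = Gd = Cd = 1, the discretized filter (5.33)–(5.34) reduces to a few scalar lines. The sketch below uses a synthetic measurement (a constant disturbance of 1 m/s² plus Gaussian noise) instead of the recorded test-drive data, and the q values are only illustrative:

```python
import random

def kalman_scalar(measurements, q, r, x0=0.0, p0=1.0):
    """Scalar Kalman filter for the random-walk model (5.33)-(5.34):
    x[k+1] = x[k] + w[k],  y[k] = x[k] + e[k]."""
    x, p = x0, p0
    estimates = []
    for y in measurements:
        p = p + q                      # time update (Ad = Gd = 1)
        k = p / (p + r)                # Kalman gain
        x = x + k * (y - x)            # measurement update (Cd = 1)
        p = (1.0 - k) * p
        estimates.append(x)
    return estimates

# Synthetic a_delta: constant disturbance 1.0 plus noise; only q/r matters.
random.seed(1)
y = [1.0 + random.gauss(0.0, 0.3) for _ in range(200)]
fast = kalman_scalar(y, q=1.0, r=1.0)
slow = kalman_scalar(y, q=0.001, r=1.0)

# Sum of squared increments: a rough "jerkiness" measure of each estimate.
rough_fast = sum((a - b) ** 2 for a, b in zip(fast[1:], fast))
rough_slow = sum((a - b) ** 2 for a, b in zip(slow[1:], slow))
```

As in the figures, the fast filter tracks the noisy measurement almost exactly, while the slow filter delivers a much smoother estimate at the price of a larger time delay.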
Figure 5.8. Simulation of two fast filters with different parameters (q = 1 and q = 0.01) using a measurement of a test drive on a bumpy road. The filter with q = 1 is fastest and follows the measured values most accurately. The slower filter delivers a smoother estimate of az, making it better for controlling purposes.

The Kalman filter with q = 0.01 still reacts relatively fast, and when only looking at these two figures (5.8 and 5.9), q = 0.01 seems a logical choice. However, every unnecessary oscillation or jerk in the estimate az will have a direct effect on the control values for the engine and brakes. It has been observed during test drives that the comfort is negatively affected by having a too fast filter. With this in mind, two slower filters are evaluated, shown in the next two figures. This time q = 0.001 and q = 0.0003 are simulated.

Figure 5.9. Simulation of the two fast filters using a measurement of a test drive of a vehicle driving up and down a steep hill. The output from the Kalman filter with q = 1 follows the measurement almost exactly. The Kalman filter with q = 0.01 still reacts relatively fast. It can be seen that, comparing with the faster filter, a smaller q makes the time delay for large signal changes unavoidably larger.

Figure 5.10 shows the drive on the bumpy road. The Kalman filter now ignores the oscillations even more. The price one has to pay, comparing with the faster filter, is an even slower filter; a smaller q makes the time delay for large signal changes unavoidably larger. The time delay for the filter with q = 0.0003 is so large that the driver will probably feel it when driving up and down the hill, as shown in Figure 5.11. The hill is rather steep, and the change from the horizontal road to a slope of 20 % comes very fast.

Figure 5.12 shows the slowest filter (with q = 0.0003) during a drive on a very bumpy road. The oscillations of the signal y = a∆ = am − aexp are even larger than in the previous figures. Even the slowest of the tested Kalman filters is in this case “not slow enough”. The task of choosing the optimal parameters is in this case a compromise between ignoring small changes in the signal and reacting fast to large changes. A linear filter of this type cannot do both: the ideal filter would ignore these oscillations but still react fast to large changes, and choosing a slower filter will also make the time delay for changes even larger.
Figure 5.10. Simulation of two slow filters (q = 0.001 and q = 0.0003) using measurements from a test drive on a bumpy road. The slow Kalman filters ignore the oscillations even more, making them better for controlling purposes. When trying to avoid small changes and oscillations, the price one has to pay is a slower filter.

Figure 5.11. Simulation of the two slow filters (q = 0.001 and q = 0.0003) using measurements from a test drive with a vehicle driving up and down a steep hill.
But choosing such a slow filter will also make the time delay for changes even larger.

Figure 5.12. Simulation of the slowest Kalman filter (q = 0.0003) using a measurement of a very bumpy road. Even this filter is in this driving situation “not slow enough”.
Chapter 6

Alternative Kalman Filter Models

As shown in Chapter 5, the behavior of the Kalman filter is easy to understand when using simple models. Another advantage of using simple models is low computational effort. This chapter derives some more complex models, with the aim of explaining how the Kalman filter implemented in the test vehicles is working. First a model is presented which can be used when it is not possible to measure the acceleration, for example when the signal am is not available. Then other models of the parameter az are derived. At the end of this chapter it is shown that the implemented Kalman filter behaves like a lowpass filter.

6.1 Vehicle Speed as Feedback

In some situations it is not practical to use the signal a∆ = am − aexp as feedback to the filter. This is the case at DaimlerChrysler when using hill descent control (explained in Section 2.6); in this situation the measurement of the acceleration by conventional methods is not considered good enough. In this case the measurements of the vehicle speed, vm, can be used instead. Input to the filter is the signal aexp, as defined in (5.23), and the (assumed noisy) measurement vm of the vehicle speed vreal. The Kalman filter is then designed using the state vector

x = (x1, x2)ᵀ = (vreal, az)ᵀ    (6.1)

so that

dvreal/dt = areal = az + aexp    (6.2)
daz/dt = w    (6.3)

with

vm = vreal + e    (6.4)

where e is the measurement noise and w is the process noise.
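A discrete-time sketch of this two-state filter, using a simple Euler discretization with sample time T. The tuning values and the toy measurement below are invented for illustration; the implementation in the test vehicles may differ:

```python
def kf_speed_feedback(v_meas, a_exp, T=0.02, q=5.0, r=1.0):
    """Kalman filter with states x = (v_real, a_z) and speed measurement
    v_m = v_real + e, per (6.1)-(6.4).  Euler discretization:
    v[k+1] = v[k] + T*(a_z[k] + a_exp[k]),  a_z[k+1] = a_z[k] + w[k]."""
    v, az = v_meas[0], 0.0                      # state estimate
    p11, p12, p22 = 1.0, 0.0, 1.0               # symmetric 2x2 covariance
    az_hist = []
    for y, u in zip(v_meas, a_exp):
        # Time update: x <- A x + B u with A = [[1, T], [0, 1]], B = [T, 0].
        v = v + T * (az + u)
        p11, p12, p22 = (p11 + 2 * T * p12 + T * T * p22,
                         p12 + T * p22,
                         p22 + q * T)           # process noise on a_z only
        # Measurement update with C = [1, 0].
        s = p11 + r
        k1, k2 = p11 / s, p12 / s               # Kalman gains
        innov = y - v
        v, az = v + k1 * innov, az + k2 * innov
        p11, p12, p22 = (1 - k1) * p11, (1 - k1) * p12, p22 - k2 * p12
        az_hist.append(az)
    return az_hist

# Toy usage: constant unmodeled acceleration of 0.5 m/s^2, a_exp = 0,
# noise-free speed measurement; the a_z estimate should approach 0.5.
T = 0.02
v_true = [0.5 * T * k for k in range(500)]
az_est = kf_speed_feedback(v_true, [0.0] * 500, T=T)
```

Compared with the scalar filter of Chapter 5, the filter now has to estimate two states, which is the source of the higher computational cost mentioned below.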
The noises w and e are modeled as Gaussian noises, as described in Section 5.4. Notice that e is here the noise in the measurement of the speed, and not the acceleration as in Section 5.4. The state-space model becomes

dx/dt = [0 1; 0 0] x + [1; 0] aexp + [0; 1] w    (6.5)
y = [1 0] x + e    (6.6)

with intensities q and r for w and e, respectively. The Kalman filter now has to estimate both vreal and az, resulting in higher computational costs.

Simulating the filter with the same measurements as in Chapter 5 shows that the basic behavior remains the same. The process noise intensity q has been chosen to 5 and the measurement noise intensity r is 1. Of course the value of q is not the same as in Section 5.5, because the measurements are no longer the same. Choosing a smaller q makes the filter slower, and choosing a larger q makes it faster, just as before.

Figure 6.1 shows the output from the filter when fed with the measurement from the test drive on the bumpy road. Figure 6.2 shows the same filter simulated with the measurement from the drive up and down the steep hill. The dotted line is a∆ = am − aexp as defined in (5.30), which still can be used as a reference. The dashed line in the figures is the output from the filter in Section 5.5, which used a∆ as feedback. As can be seen, the overall behavior is the same.

6.2 Modeling the Disturbance az

The model of az so far says that its first derivative is equal to Gaussian noise. This means that az undergoes slight changes each sampling period. These changes are uncorrelated, which means that the derivative changes each period to a value independent of the last value. This model allows the derivative of az to jump quickly from a positive value to a negative. az represents large changes in the environment of the vehicle, such as changes in the mass of the vehicle or the slope of the road. These parameters do not change so quickly, which does not comply with the “real” parameter the filter is trying to estimate, and therefore another model will now be examined.

6.2.1 First-Order Lag Function

For a first-order lag function with input signal u, the output signal y satisfies the ordinary differential equation

τ dy/dt + y = u    (6.7)

where τ is the time constant of the step response. The transfer function is

G(s) = 1/(1 + τs)    (6.8)
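The time-constant interpretation of (6.7) can be checked numerically: for a unit step input, the output reaches about 63 % (1 − e⁻¹) of its final value after τ seconds. A minimal Euler-discretized sketch, assuming the step size T is much smaller than τ:

```python
import math

def lag_step_response(tau, T, n):
    """Euler-discretized first-order lag, eq. (6.7): tau*dy/dt + y = u,
    simulated for n steps with a unit step input u = 1."""
    y = 0.0
    out = []
    a = T / tau                      # assumes T << tau for accuracy
    for _ in range(n):
        y += a * (1.0 - y)           # dy/dt = (u - y)/tau with u = 1
        out.append(y)
    return out

# With tau = 0.5 s and T = 1 ms, y should be near 1 - exp(-1) after tau s.
tau, T = 0.5, 0.001
y = lag_step_response(tau, T, int(tau / T))
```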
Figure 6.1. Simulation of the Kalman filter using the measurement from the drive on a bumpy road. Input to the filter is the measurement of the vehicle speed. The solid line is the Kalman filter with the measurement of v as feedback. The dashed line is the filter developed in Section 5.5. As can be seen, the overall behavior of the filter remains the same. q has been chosen to 5.

To evaluate the frequency response for the function, set s = jω and plot the magnitude of the function

|G(jω)| = 1/√(1 + ω²τ²)    (6.9)

ω is here the frequency of the input in radians per second. Define the break frequency ω0 as

ω0 = 1/τ    (6.10)

Then the magnitude of the function is approximately

G(jω) ≈ 1 when ω < ω0, and G(jω) ≈ ω0/(jω) when ω > ω0    (6.11)

The first-order lag function dampens all signals with frequencies higher than the break frequency ω0 and can be used as a lowpass filter [9]. Figure 6.3 shows the Bode plot of the transfer function with three different values of τ. As can be seen in the plot, a larger τ means a lower break frequency, which complies with the definition in (6.10). A larger τ also means a slower response.

Figure 6.2. Simulation of the Kalman filter using the measurement from the vehicle driving up and down a steep hill. The solid line is the Kalman filter with the measurement of v as feedback. The dashed line is the filter developed in Section 5.5. As can be seen, the overall behavior of the filter remains the same. q is set to 5.

6.2.2 First-Order Gauss-Markov Process

The first-order lag function can be used to model physical systems. This function has turned out to be important in applied work since it seems to fit a large number of physical systems with reasonable accuracy but still has a very simple mathematical description [5]. The described function will now be used to model az. Let

daz/dt + (1/τ) az = w    (6.12)

where w represents Gaussian noise; the input u in (6.7) is then set to w. With this choice of u the function is called a first-order Gauss-Markov process [22]. Now the problem is to choose a reasonable τ and the intensity of the noise w. According to [3] the autocorrelation of the Gauss-Markov process in (6.12) can be written as

E[az(t1)az(t2)] = e^(−|t1 − t2|/τ) E[az(t1)²]    (6.13)

The autocorrelation is a measure of how well the signal matches a time-shifted version of itself, where t1 and t2 define the time shift; if the time shift is small, the correlation is high. This means that the value of az at a sample time tk will depend on the value at the last sample time tk−1, as

az(tk) = e^(−T/τ) az(tk−1) + w(tk−1)    (6.14)
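The discrete recursion (6.14) is easy to simulate, and the autocorrelation property (6.13) can be checked on the generated samples. A sketch with invented values for τ, T and the noise standard deviation:

```python
import math
import random

def gauss_markov(tau, T, sigma_w, n, seed=2):
    """Simulate the first-order Gauss-Markov process of eq. (6.14):
    a_z[k] = exp(-T/tau) * a_z[k-1] + w[k-1]."""
    rng = random.Random(seed)
    phi = math.exp(-T / tau)
    az = [0.0]
    for _ in range(n - 1):
        az.append(phi * az[-1] + rng.gauss(0.0, sigma_w))
    return az

# tau -> infinity gives phi -> 1, i.e. the integrated-noise model of
# Chapter 5 is recovered as a limit case.
x = gauss_markov(tau=7.0, T=0.02, sigma_w=0.1, n=1000)
phi = math.exp(-0.02 / 7.0)

# Lag-1 sample autocorrelation; should be close to phi for a slow process.
mean = sum(x) / len(x)
r1 = (sum((a - mean) * (b - mean) for a, b in zip(x[1:], x))
      / sum((v - mean) ** 2 for v in x))
```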
Figure 6.3. Bode plot of the first-order lag function, with three different values of τ (τ = 1, τ = 0.5 and τ = 0.25). A larger τ means a lower break frequency.

If T is small compared to τ, the correlation between consecutive samples is high. If τ is chosen very large (τ approaches ∞), az will be integrated Gaussian noise just as before.

6.2.3 Identifying the Time Constant

The technique of system identification is used to build mathematical models of a dynamic system based on measurement data. This is done by adjusting parameters within a given model until its output coincides as well as possible with the measured output [17]. Matlab contains a toolbox called "System Identification Toolbox" [18]. It contains all the common techniques to adjust parameters in all kinds of linear models, including state-space models, as well as some nonlinear models. In this section an introduction is given on how to use this toolbox to identify the unknown parameters in a model. A Matlab script, found in Appendix C, creates a model of az described by (6.12) and defines τ and the intensity of w as parameters to be identified.

As identification data, measurements of the slope of the road during a test drive up and down the steep hill are used. When driving a vehicle at 30 km/h over this steep hill, the highest demands on the filter are said to be reached. This data set will represent the "maximum dynamic" of az that the filter will have to estimate.
As identification data, the part of az caused by the slope of the road is chosen. This is done by setting

az^real ≡ g sin(α)    (6.15)

where g is the gravitational acceleration and α is the slope of the road.

Setting the noise intensity to a constant value and identifying the time constant gives τ = 7; an intensity value of 1 results in the optimal choice τ ≈ 7. In Figure 6.4 the output from the model is compared with the measurement. As can be seen, the identified model does not fit the identification data exactly: setting the parameter τ to 7 and letting Matlab identify the noise intensity results in a model fit of 83.67%, where the noise intensity is identified to 70.

Figure 6.4. Identification of time constant τ. Measured output and 1-step ahead predicted model output; model fit 83.67%.

In fact, it is possible to find a good fit by adjusting the parameter for the noise intensity. Letting the script identify the noise intensity with τ = 7 gives a model fit of over 99%. Choosing any value of τ larger than 0.1 and then letting Matlab find the optimal noise intensity results in a model fit of at least 90%, and τ > 0.3 gives a fit of over 95%. For τ = 0.3 the optimal noise intensity is identified to 91. A smaller choice of τ demands a larger noise intensity to make a good model fit. This means that the script can be used to choose the optimal value of τ. Figure 6.5 shows the calculated signal az^real and the predicted output from the model with τ = 0.3 and the noise intensity calculated by Matlab.

The quality of the model can be tested by looking at what the model could not reproduce in the data, the "residuals"

ε(t) = y(t) − ŷ(t)    (6.16)
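The idea behind the identification step can be illustrated without the toolbox: for the discrete model (6.14), a least-squares fit of the one-step regression az(tk) ≈ φ az(tk−1) gives φ, and then τ = −T/ln φ. A sketch on synthetic data (this is not the Appendix C script; all names and values are illustrative):

```python
import math
import random

def estimate_tau(a, T):
    """Fit a[k] = phi*a[k-1] by least squares, then tau = -T/ln(phi)."""
    num = sum(a[k] * a[k - 1] for k in range(1, len(a)))
    den = sum(a[k - 1] ** 2 for k in range(1, len(a)))
    return -T / math.log(num / den)

# Synthetic "slope disturbance" generated with a known time constant.
rng = random.Random(1)
T, tau_true = 0.1, 7.0
phi = math.exp(-T / tau_true)
a = [0.0]
for _ in range(50000):
    a.append(phi * a[-1] + rng.gauss(0.0, 0.05))

print(estimate_tau(a, T))  # should land near tau_true = 7
```

The toolbox does essentially the same thing, but jointly over τ and the noise intensity, which is why several (τ, intensity) pairs can give a similar model fit.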
where y is the validation data and ŷ is the output from the model. The residuals should not be correlated with the past inputs u. If they are, then there is a part of y that originates from the past input that has not been picked up properly by the model. In that case the model can be improved, for example by adjusting the number of parameters in the model [17]. The command "resid" in Matlab computes the correlation function of the residuals from the model, as well as 99% confidence intervals assuming that the residuals are Gaussian. The rule is that if the correlation function goes significantly outside these confidence intervals, do not accept the corresponding model as a good description of the system.

Figure 6.5. Identification of time constant τ. Here τ = 0.3 is chosen and the noise intensity is identified to 91, giving a model fit of 96.26%.

6.2.4 Testing the Model of az

Modeling az using the equation for the first-order Gauss-Markov process (6.12), with state vector x = [x1, x2]^T = [v, az]^T, gives the state-space model

ẋ = [0 1; 0 −1/τ] x + [1; 0] aexp + [0; 1] w    (6.17)

y = [1 0] x + e    (6.18)

The Kalman filter using this model is simulated in the same way as in Section 5.5. Here τ is set to 7, r is set to 1 and q is set to 70. Figure 6.6 shows the filter during the drive on the very bumpy road, and Figure 6.7 shows the estimate of az during a simulated drive up and down the steep hill. As can be seen, there are no relevant differences in comparison to the simple model used in Chapter 5.
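A discrete-time Kalman filter for the model (6.17)-(6.18) can be sketched directly. The fragment below uses a simple Euler discretization and hand-written 2×2 covariance updates; the sample time and noise levels are illustrative choices, not the thesis values:

```python
import random

def kf_step(x, P, y, a_exp, T, tau, q, r):
    """One predict/update cycle for the state [v, az] of (6.17)-(6.18),
    Euler-discretized with sample time T."""
    c = 1.0 - T / tau                       # discretized az dynamics
    # Predict: v' = v + T*(a_exp + az), az' = c*az
    xp = [x[0] + T * (a_exp + x[1]), c * x[1]]
    p00, p01, p10, p11 = P
    pp00 = p00 + T * (p10 + p01) + T * T * p11
    pp01 = c * (p01 + T * p11)
    pp10 = c * (p10 + T * p11)
    pp11 = c * c * p11 + q * T              # process noise enters on az
    # Update with y = v + e, i.e. C = [1 0]
    s = pp00 + r
    k0, k1 = pp00 / s, pp10 / s
    innov = y - xp[0]
    x = [xp[0] + k0 * innov, xp[1] + k1 * innov]
    P = ((1 - k0) * pp00, (1 - k0) * pp01,
         pp10 - k1 * pp00, pp11 - k1 * pp01)
    return x, P

# Constant unmodeled disturbance az = 1 m/s^2; the filter should find it
# from the speed measurement alone.
rng = random.Random(0)
T, tau, q, r = 0.02, 7.0, 0.01, 1e-4
x, P = [0.0, 0.0], (1.0, 0.0, 0.0, 1.0)
v_true, a_exp, a_true = 0.0, 0.5, 1.0
for _ in range(2000):                       # 40 s of simulated driving
    v_true += T * (a_exp + a_true)
    y = v_true + rng.gauss(0.0, 0.01)
    x, P = kf_step(x, P, y, a_exp, T, tau, q, r)

print(x[1])  # estimate of az, expected near 1
```

A larger q makes the az estimate faster but jerkier, mirroring the q/r trade-off discussed throughout Chapters 5 and 6.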
Figure 6.6. Kalman filter with az modeled as a Gauss-Markov process. The figure shows a simulation using recorded data from the vehicle driving on a very bumpy road. τ is set to 7, r is set to 1 and q is set to 70. When comparing with the Kalman filter from Chapter 5 (q = 0.001), no relevant changes can be found.

This means that both models can be used to estimate az in the simulated situations.

6.2.5 Higher-Order Derivative of az

In this section the model proposed by [19] will be examined. Recall Section 5.4, where it was stated that the changes in the parameter az can be modeled by setting the first derivative of az to Gaussian noise, according to

ȧz(t) = w(t)    (6.19)

where

E[w(t)] = 0    (6.20)
E[w(t)w(τ)] = q δ(t − τ)    (6.21)

This was implemented and tested in Chapter 5, and it was shown that it is possible to get an arbitrarily fast (or slow) estimate by adjusting the noise parameter q. Another way of modeling the changes is by setting a higher derivative of the parameter equal to Gaussian noise, for example

äz(t) = w(t)    (6.22)
Figure 6.7. Kalman filter with az modeled as a Gauss-Markov process. The figure is generated using recorded data from the vehicle driving up and down a steep hill. τ is set to 7, r is set to 1 and q is set to 70. When comparing with the Kalman filter from Chapter 5 (q = 0.001), no relevant changes can be found.

There is still the possibility to choose τ, according to

äz(t) + (1/τ) ȧz(t) = w(t)    (6.23)

This is used in [19] and also suggested as an alternative by [3] when making estimates for kinematic models. For example, when estimating the position ξ and the speed ξ̇ of an object, one might use the state vector x = [x1, x2]^T = [ξ, ξ̇]^T. The speed of the object undergoes slight changes, which often are modeled as Gaussian noise with ξ̈ = w. For the problem at hand, there is no practical need to estimate the extra state ȧz, but in order to take advantage of (6.22) or (6.23), the state vector is chosen as

x = [x1; x2; x3] = [vreal; az; ȧz]    (6.24)

The state-space model becomes

ẋ = [0 1 0; 0 0 1; 0 0 −1/τ] x + [1; 0; 0] aexp + G w    (6.25)

y = [1 0 0] x + e    (6.26)
e is Gaussian noise as before, with intensity r. Here w consists of three components, w = [w1, w2, w3]^T, with noise intensities q1, q2 and q3, meaning that Gaussian noise is added to all three equations in the state-space model. The covariance matrix for w is

Q = [q1 0 0; 0 q2 0; 0 0 q3]    (6.27)

meaning that the noise components of w are independent of each other. A possible approach, when adding process noise to all estimated parameters in the state-space model, is to set all intensities to the same value, so that

Q = q [1 0 0; 0 1 0; 0 0 1]    (6.28)

This method makes the choice of the parameters easier, but w1, w2 and w3 still remain independent of each other. [13]

Written out, the equations in the model are

v̇real = aexp + az + w1    (6.29)
(d/dt) az = ȧz + w2    (6.30)
(d/dt) ȧz = −(1/τ) ȧz + w3    (6.31)
y = vm = vreal + e    (6.32)

This might not seem logical, as (6.30) does not comply with the definition of az given in Chapter 5; therefore, the process noise w2 could be set to 0. As can be seen in (6.29)-(6.31), the noise w will affect all three estimates in the state vector. The design parameters are the noise intensities for each of the Gaussian noises w1, w2, w3 and e, and the value of τ. As has been explained in Section 3, the noise intensities are dependent of each other, so the three estimates can be adjusted by letting q1, q2 and q3 remain constant while changing the value of r.

The state-space model is transformed into a discrete state-space model, using the theory presented in Section 3. With the sample time T the result is

x_{n+1} = [1 T T²/2; 0 1 T; 0 0 1] x_n + [T; 0; 0] u_n + [T³/6; T²/2; T] w    (6.33)

y = [1 0 0] x_n + e_n    (6.34)
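For τ → ∞ the continuous A matrix in (6.25) is nilpotent (A³ = 0), so the series for the matrix exponential terminates and the discrete Ad in (6.33) follows in closed form. A quick numerical check (plain Python):

```python
T = 0.02

def matmul(X, Y):
    """3x3 matrix product."""
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# Continuous-time A of (6.25) in the limit tau -> infinity
A = [[0.0, 1.0, 0.0],
     [0.0, 0.0, 1.0],
     [0.0, 0.0, 0.0]]
A2 = matmul(A, A)  # A^3 = 0, so exp(A*T) = I + A*T + A^2*T^2/2 exactly

I = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
Ad = [[I[i][j] + A[i][j] * T + A2[i][j] * T * T / 2 for j in range(3)]
      for i in range(3)]

print(Ad)  # [[1, T, T^2/2], [0, 1, T], [0, 0, 1]], matching (6.33)
```

The same terminating series applied to the input and noise vectors gives Bd = [T; 0; 0] and Gd = [T³/6; T²/2; T], as in (6.33).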
The observability matrix O has full rank, which according to the test in Section 3.4 means that the system is observable:

O = [Cd; Cd Ad; Cd Ad²] = [1 0 0; 1 T T²/2; 1 2T 2T²]    (6.35)

Running the filter offline in Simulink with measurements from test drives, the parameters are set using trial-and-error until the filter gets acceptable behavior. The following parameter set is found to work properly:

q1 = 4    (6.36)
q2 = 0    (6.37)
q3 = 1    (6.38)
r = 20    (6.39)
τ = 500    (6.40)

By "properly" is meant that the filter with these parameters has the same behavior as the other filters evaluated in this thesis.
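The full-rank claim in (6.35) can be reproduced numerically by building O row by row and running a small Gaussian elimination (plain Python sketch; the sample time is an arbitrary choice):

```python
T = 0.02
Ad = [[1.0, T, T * T / 2], [0.0, 1.0, T], [0.0, 0.0, 1.0]]
Cd = [1.0, 0.0, 0.0]

def vecmat(v, M):
    """Row vector times 3x3 matrix."""
    return [sum(v[k] * M[k][j] for k in range(3)) for j in range(3)]

r1 = vecmat(Cd, Ad)   # Cd*Ad   = [1, T, T^2/2]
r2 = vecmat(r1, Ad)   # Cd*Ad^2 = [1, 2T, 2T^2]
O = [list(Cd), r1, r2]

def rank3(M, eps=1e-12):
    """Rank of a 3x3 matrix via Gaussian elimination with pivoting."""
    M = [row[:] for row in M]
    rank = 0
    for col in range(3):
        piv = max(range(rank, 3), key=lambda i: abs(M[i][col]))
        if abs(M[piv][col]) < eps:
            continue
        M[rank], M[piv] = M[piv], M[rank]
        for i in range(rank + 1, 3):
            f = M[i][col] / M[rank][col]
            for j in range(3):
                M[i][j] -= f * M[rank][j]
        rank += 1
    return rank

print(rank3(O))  # 3, so the pair (Ad, Cd) is observable
```

All three states are thus recoverable from the speed measurement alone, which is what allows the filter to estimate az and its derivative without extra sensors.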
6.3 Implementation and Testing in Arjeplog

The Kalman filter is implemented in the test vehicles at DaimlerChrysler. In the implementation, concern has to be taken to the different driver assistance functions in the outer control loop, as well as to certain driving situations, as described below. (Refer to Section 2.8 and Section 2.6 for explanations of Distronic Plus and DSR.)

• The Kalman filter can be restarted, for example when the vehicle is started, when Distronic Plus tells the vehicle to start moving, or when the vehicle has moved against the desired direction of travel.

• The Kalman filter can be halted, for example when the vehicle is stopped. This is done because there is no information about the acceleration while the vehicle is standing still, and is for example necessary when Distronic Plus is stopping the vehicle or when Downhill Speed Regulation (DSR) is active and the tires are blocked. The estimate for az in the state vector is then kept constant, while the other estimates are set to initial values.

• The output from the Kalman filter is limited to some maximum and minimum values, due to safety reasons. These values are changed if DSR is active.

Plots of the Kalman filter implemented in this section are not shown here, but it will be used in Section 6.4, where a comparison of all filter models is made, and in Section 6.5 in a comparison with a low-pass filter. The Kalman filter was also evaluated during a testing expedition to Arjeplog in Sweden, where it was thoroughly tested in difficult situations to detect adverse behavior. Some of the plots in this thesis have been generated there.
6.4 Comparing the Kalman Filter Models

In this chapter different Kalman filter models have been tested and evaluated. It is interesting to know if the models derived in this chapter can improve the estimate in any way. Before continuing, a comparison between the different models will therefore be made. This is done by adjusting the parameters so that the filters obtain the same behavior when simulating the vehicle driving up and down the steep hill, and then by comparing their output when simulating other situations. The other driving situations (a drive on a bumpy road, for example) show if any of the models can ignore small changes better than the others.

Figure 6.8 shows the simulation of driving up and down the steep hill. Figure 6.9 and Figure 6.10 show simulations of a test drive on two different bumpy roads. By looking at these figures, it becomes obvious that the behavior of the different filters remains the same. Only the output from the Kalman filter using the model derived in Section 6.1, plotted with a dashed line, is a bit different than the others. All the different models have almost the same performance with respect to estimating large changes.

After some work with the models, trying to tune the parameters, it turns out that modeling az in different ways does not make the estimate much better. In fact, during this Master's project no simulated situation has been found where any of the models examined in this chapter makes a better estimate than the simple model used in Chapter 5.
Figure 6.8. Simulation of the vehicle driving up and down the steep hill. The parameters have been adjusted so that the different filters almost have the same behavior.
By looking at the output from the different Kalman filters, it cannot be said that any of the filters is better than the others. Another conclusion is that inserting more complexity and more parameters into the model makes the tuning work more time consuming and harder to understand.

Figure 6.9. Simulation of a test drive on a bumpy road. The parameters have been adjusted so that the different filters almost have the same behavior.

6.5 Comparing the Kalman Filter with a First-Order Lag Function

With the application at hand, the process noise w and the measurement noise e cannot be measured or estimated. Instead the filter is tuned using a subjective feeling about what is a good compromise when sitting in the car. The reason is that the Kalman filter remains a linear filter, giving the user the choice between a fast but jerky, or a slow but smooth estimate. It has been noticed that a faster controller makes the deviation from the desired speed smaller, but at the same time the drive gets more uncomfortable. It is important that the estimate is smooth, as the output will be directly connected to the engine and brakes. The demands on the filter correspond to those of an ordinary low-pass filter, and therefore this section will try to explain the similarities between the Kalman filter from Section 6.2.5 and a first-order lag function as described in Section 6.2.1, using measured data. The figures that follow are from simulations in Simulink. The output from the Kalman filter is represented with a solid line; as reference, a∆ = am − aexp is also plotted with a dotted line.
A fast filter makes a faster controller, but it does not necessarily mean that the controller works better. In the figures, the dashed line is the output from a first-order lag function with time constant τ = 1.1.
Figure 6.10. Simulation of a test drive on a very bumpy road. The parameters have been adjusted so that the different filters almost have the same behavior.

The measurement in Figure 6.11 comes from a test drive when the vehicle drives up and down the steep hill, and Figure 6.12 comes from a drive on the bumpy road. Figure 6.13 shows a test drive with an attached trailer of 2000 kg, where the driver uses the cruise control lever to quickly step the set speed up and down several times, without giving the controller enough time to adjust the speed completely. Looking at the figures, it is obvious that the Kalman filter with these parameters behaves as a low-pass filter when the signal a∆ = am − aexp is given as input. Note that the Kalman filter used in the comparison does not have the signal a∆ as input; instead the measurement of the vehicle speed vm is used, as described in Section 6.1.

When the intensities of the process noise w and the measurement noise e are known, the Kalman filter equations are used to calculate the optimal gain L for the observer, as explained in Section 3.5. The calculated gain L adjusts the frequency properties of the Kalman filter so that the gain is high when the signal-to-noise ratio is high, but low when the ratio is low. Therefore it is not a regular low-pass filter. This behavior is also described in [12]. When knowing what type of behavior is wanted from the filter, the same work can be done using traditional filter methods. The Kalman filter is instead very practical when the task is to extract information from noisy measurements (also from many sensors in combination, so-called sensor fusion) or to estimate more than one parameter in a complex state-space model, and this is one example of the advantage of developing filters using the Kalman model.
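For reference, the first-order lag used in the comparison is, in discrete time, just an exponentially weighted average. A sketch with τ = 1.1 as in the figures (plain Python; the sample time is an illustrative choice):

```python
import math

def first_order_lag(u, tau, T):
    """Discrete first-order lag: y_k = a*y_{k-1} + (1 - a)*u_k,
    with a = exp(-T/tau), i.e. a low-pass filter with time constant tau."""
    a = math.exp(-T / tau)
    y, out = 0.0, []
    for uk in u:
        y = a * y + (1.0 - a) * uk
        out.append(y)
    return out

T, tau = 0.01, 1.1
step = [1.0] * 500          # unit step in the input signal
y = first_order_lag(step, tau, T)
# After one time constant (t = tau) the step response is ~63 %.
print(y[int(tau / T) - 1])
```

This is the whole comparison filter; the Kalman filter achieves a similar smoothing effect, but with its gain derived from the noise intensities rather than picked directly.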
Figure 6.11. The measurement in this figure comes from a test drive where the vehicle drives up and down the steep hill. The dashed line is the output from a first-order lag function with time constant τ = 1.1. As can be seen, the behavior is similar to the output from the Kalman filter (solid line).
According to [12], the transfer function for the stationary Kalman filter is

Gkf(s) = (sI − A + LCA)^(−1) L s    (6.41)
where L is the steady-state gain calculated by (3.27). Calculating Gkf with the model used for the Kalman filter gives a matrix containing three transfer functions, from the input vm to each of the filter outputs v, az and ȧz. Taking the transfer function from vm to az, letting s = e^(iω) and plotting its absolute value gives the magnitude plot of the Bode diagram in Figure 6.14. The solid line is given by the parameter set used in Section 6.2.5 and above in this section. The dashed line shows a filter with a smaller r and the dashed-dotted line shows a filter with a larger r. The filter has the function of a high-pass filter. This is expected, as the transfer function used in the plot estimates az using measurements of the speed v; this characteristic is normal for all differentiating filters. The filter with a smaller r has a higher break frequency, and a larger r means a lower break frequency. This was also expected, because a smaller measurement noise means that the filter can differentiate higher frequencies as well. [12]
Figure 6.12. The figure is from a simulation using recorded data from a test drive on a bumpy road. The dashed line is the output from a first-order lag function with time constant τ = 1.1. As can be seen, the behavior is similar to the output from the Kalman filter (solid line).
Figure 6.13. The figure is from a simulation using recorded data from a test drive with a heavily loaded trailer. The driver uses the cruise control lever to step the set speed up and down several times, without giving the controller enough time to adjust the speed completely. The dashed line is the output from a first-order lag function with time constant τ = 1.1. As can be seen, the behavior is similar to the output from the Kalman filter (solid line).
Figure 6.14. Bode diagram of the transfer function from vm to az, with the Kalman filter presented in Section 6.2.5. The solid line corresponds to the parameters chosen in Section 6.2.5, the dashed line is from a filter with a smaller r and the dashed-dotted line is from a filter with a larger r. The filter has the function of a normal high-pass filter. This is expected, as the transfer function used in the plot estimates az using measurements of the speed v. The filter with a smaller r has a higher break frequency, and a larger r means a lower break frequency: a smaller measurement noise means that the filter can also differentiate higher frequencies.
Chapter 7

Change Detection

In this chapter an overview of different change detection algorithms is given, and then one of them is chosen and implemented. An introduction on how to adjust the parameters for this algorithm is given and simulations of different driving situations are made. It will be shown that it is possible to improve the estimate of az using this algorithm. The presentation that follows is based on [13], where a thorough explanation of the subject is given.

7.1 Idea of Change Detection

When constructing a filter, it is desirable that the output follows the desired target signal, ignoring the noise. The gain in a linear filter is a compromise between noise attenuation and tracking ability: choosing a large gain makes the filter fast but sensitive to measurement noise, and choosing a low gain makes it slow when large changes in the signal occur. When driving the vehicle on a straight road, a slow filter could be used. The function could be satisfying for some time, but then suddenly the slope changes and the model used by the filter is no longer correct. It would be practical to be able to detect such changes in the environment, and react by making the filter faster.

Consider a filter trying to estimate a signal x. The filter uses measurements xm to calculate an estimate x̂. The measurement is modeled as

xm,k = xk + ek    (7.1)

where e is the measurement noise, a sequence of independent stochastic variables with zero mean and known variance. The quality of the estimate x̂ can be tested by looking at the residuals

εk = xm,k − x̂k    (7.2)

If there is no change in the system and the model is correct, then the residuals are white noise. After a change, either the mean or the variance or both change.
Figure 7.1. A change detector consists of a distance measure and a stopping rule. The residuals εk from the Kalman filter are transformed to a distance measure sk, and the stopping rule decides whether the deviation is significant or not; if so, an alarm is given.

Change detection is also referred to as "fault detection". It is often used to detect faults in a system, for example when a sensor is broken or temporarily unavailable. There are three different categories of change detection methods [13]:

• Methods using one filter, where a whiteness test is applied to the residuals. The filter is temporarily made faster when a change is detected.

• Methods using two filters in parallel, one slow and one fast. Depending on the residuals from the two filters, one of them is chosen as the currently "best one".

• Methods using multiple filters in parallel, each one matched to certain assumptions on the abrupt changes. For each filter the probability of that filter being correct is calculated, and the output is the weighted sum of the output from all the individual filters, depending on their current probabilities.

In this chapter, a method using one filter will be implemented and tested.

7.2 One Kalman Filter with Whiteness Test

The task of the change detector is to decide which of the following hypotheses is correct, concerning the residuals ε from the Kalman filter:

H0: ε is white noise    (7.3)
H1: ε is not white noise    (7.4)
If there is a change in the system, the residuals become "large" in some sense; the problem is to decide what "large" is. A change detector consists of a distance measure and a stopping rule, as in Figure 7.1. The distance measure transforms the residuals from the Kalman filter into a signal sk, representing the change in the residuals, that measures the deviation from the hypothesis H0. The stopping rule decides whether the change is significant or not. If it is, the change detector gives an alarm and the Kalman filter can take appropriate action, for example by temporarily making the filter faster. Different implementations of the distance measure sk are [13]:
• Change in mean. The residual itself is used, giving

sk = εk    (7.5)

• Change in variance. The squared residual minus a known "normal" residual variance λ is used, giving

sk = εk² − λ    (7.6)

• Change in correlation. The correlation between the residual εk at the current time step k and past outputs yk−l or inputs uk−l at a time step k − l is used, as

sk = εk yk−l    (7.7)

or

sk = εk uk−l    (7.8)

for some value l.

• Change in sign correlation. For instance, one can use the fact that white residuals should on average change sign every second sample and use

sk = sign(εk εk−1)    (7.9)

A stopping rule is created by low-pass filtering sk and comparing this value to a threshold h. Two common low-pass filters described in [13] are

• The CUmulative SUM (CUSUM) test

gk = max(gk−1 + sk − ν, 0)    (7.10)

The "drift parameter" ν influences the low-pass effect. The stopping rule gives an alarm when gk > h, and gk is then reset to 0.

• The Geometric Moving Average (GMA) test

gk = λ gk−1 + (1 − λ) sk    (7.11)

Here the forgetting factor λ is used to tune the low-pass effect. For instance, λ can be chosen as 0, which means no low-pass effect; sk will in this case be thresholded directly.

When an alarm is given, the Kalman filter is temporarily made faster by adjusting the parameters.
7.3 Implementation

In this section one change detection algorithm is implemented and tested. The Kalman filter from Chapter 5 with R = 1 and Q = 0.0003 is selected; this gives L = Lslow = 0.0172 and results in a slow filter. Choosing R = 1 and Q = 0.01 from Chapter 5 gives Lfast = 0.0951. The change detector gives an alarm when gk > h, and the Kalman filter is then temporarily made faster by changing the calculated value L to the value Lfast.

As distance measure sk = εk² − λ is chosen, and as stopping rule the CUSUM test gk = max(gk−1 + sk − ν, 0). Inserting (7.6) in (7.10) gives

gk = max(gk−1 + εk² − λ − ν, 0) = max(gk−1 + εk² − β, 0)    (7.12)

Here β has been defined as β = λ + ν. To choose the threshold h and the parameter β in (7.12), the following steps are taken, inspired by the general advice given in [13]:

• Start with a very large threshold h and choose β to the size of the expected change. Simulate the system with measurements where large changes occur, for example driving up and down a steep hill, or stepping the cruise controller set speed up and down with a heavily loaded vehicle. Adjust β such that gk = 0 more than 50% of the time, and then set the threshold h so the delay for detection of these large changes is reasonable.

• Simulate the system with measurements from test drives on bumpy roads. The Kalman filter should in these situations remain slow. If there is a subset of the change times that does not make sense, try to increase β.

• Then simulate all the driving situations again. If fewer false alarms are wanted, try to increase β; if faster detection is sought, try to decrease β.

The value found for the drift parameter using this method is β = 0.005, with the threshold h set so that the detection delay for the large changes is reasonable.
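The stopping rule (7.12) is only a few lines of code. A sketch in Python, where the residual sequence and the parameter values are synthetic illustrations rather than the recorded test drives:

```python
import random

def cusum_alarms(residuals, beta, h):
    """CUSUM test of (7.12): g_k = max(g_{k-1} + eps_k^2 - beta, 0).
    Returns the indices where an alarm is raised; g is reset afterwards."""
    g, alarms = 0.0, []
    for k, eps in enumerate(residuals):
        g = max(g + eps * eps - beta, 0.0)
        if g > h:
            alarms.append(k)
            g = 0.0  # reset after the alarm
    return alarms

# White residuals first (no change), then a mean jump at k = 200.
rng = random.Random(0)
eps = [rng.gauss(0.0, 0.1) for _ in range(200)]
eps += [rng.gauss(0.5, 0.1) for _ in range(100)]

alarms = cusum_alarms(eps, beta=0.05, h=0.5)
print(alarms[0])  # first alarm shortly after the change at k = 200
```

Before the change the drift β pulls gk back to zero, so no alarms are given; after the jump, εk² − β is positive on average and gk climbs past h within a few samples.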
When this signal is high, the faster parameter of the Kalman filter is chosen.

7.4 Results

The Kalman filter with the change detection algorithm chosen in Section 7.3 is simulated using measurements from test drives representing different driving situations. The output from the simulated Kalman filter is plotted in a diagram together with the measured signal am − aexp. As a reference, the output from the "original" Kalman filter without change detection is also plotted. The discrete signal at the bottom of the diagrams, called "Alarm" in the figures, is the output from the change detection algorithm. Figure 7.2 is a simulation of a test drive up and down a steep hill, with a slope of 20% up and 15% down. Figure 7.3 shows a vehicle with a trailer with a weight of 2000 kg driving up and down the same hill.
Figure 7.2. Simulation of a Kalman filter with change detection algorithm when driving up and down a steep hill. The change detection algorithm detects the large changes and makes the Kalman filter faster.

Figure 7.4 shows the vehicle with the trailer again, this time driving with cruise control. The driver is stepping the set speed quickly up and down without letting the vehicle reach the desired speed. From the three figures it can be seen that the estimates from the Kalman filter with change detection are faster than with the original implementation. Faster estimates are better in these driving situations, because they would give the controller a better chance to compensate for the large changes.

Figure 7.5 and Figure 7.6 each show a test drive on two different bumpy roads. As can be seen, not many alarms are given, and the change detection algorithm does not affect the estimate. This shows that the change detection algorithm does not affect the estimate when it is not necessary.
Figure 7.3. Simulation of a Kalman filter with change detection algorithm when driving up and down a steep hill with a 2000 kg trailer. The change detection algorithm detects the large changes and makes the Kalman filter faster than the original implementation. (The original Kalman filter becomes saturated at −3 m/s²; this is not implemented in the simulation for the filter with change detection.)

Figure 7.4. Simulation of a Kalman filter with change detection algorithm when driving with an attached trailer. The driver uses the cruise control lever to quickly step the set speed up and down without letting the vehicle reach the desired speed. The change detection algorithm detects the large changes and makes the Kalman filter faster.
Figure 7.5. Simulation of a Kalman filter with change detection algorithm when driving on a bumpy road. The change detection algorithm does not give many alarms, and therefore the estimate is not affected, as desired.

Figure 7.6. Simulation of a Kalman filter with change detection algorithm when driving on a very bumpy road. The change detection algorithm does not give many alarms, and therefore the estimate is not affected, as desired.
Chapter 8

Conclusions and Future Work

In this chapter the thesis is concluded with a short summary of the obtained results and the observations made. The chapter also includes a section in which interesting future work is briefly introduced.

8.1 Conclusions

In this Master's thesis the theory for the Kalman filter and filter tuning has been presented. It has been shown how to implement a Kalman filter estimating the part, called az, of the vehicle's acceleration that is caused by disturbances not included in the model of the vehicle. The easiest method is to use a constructed signal a∆ = aexp − am as input to the filter, where aexp is the expected acceleration calculated by the model and am is the measured actual acceleration. First it was shown how to use the speed of the vehicle as input to the filter, instead of the constructed signal a∆. Then two models of az were derived and tested, inspired by the first-order lag function and the Gauss-Markov process.

It has been shown that the filter parameters can be chosen either

• by knowledge about the noise intensities (when they are not known they can be estimated),
• by running simulations in Simulink and optimizing the parameters using a script in Matlab (for this purpose the simulated annealing algorithm has been implemented),
• or by adjusting the parameters as a compromise between a slow filter and a fast but jerky filter (to do this a subjective choice has to be made).

Some more complex models for the Kalman filter have been implemented and tested. These models have a higher computational cost, but it could not be proven that they improve the estimate in any way.

It has been shown that the Kalman filter implemented in the vehicles today can be replaced by a first-order lag function, with no loss in performance, as the simulations in this thesis suggest. A change detection algorithm has also been implemented in Simulink, and simulations show that it is possible to improve the estimate using this algorithm.

8.2 Future Work

There are several interesting aspects that deserve further investigation.

• The change detection algorithm implemented in Simulink should be tested in a real vehicle. If this test shows a positive result, the parameters can then be adjusted by practical methods to suit the actual application.
• It is suggested to implement, simulate and test the other methods for change detection described in this Master's thesis. For example, the method of using two filters in parallel, one slow and one fast, may be of interest.
• It should be practically tested if the Kalman filter can be exchanged with another, simpler type of low-pass filter.
• As the parameters m (the mass of the vehicle) and α (the slope of the road) have a big impact on the calculation of the expected acceleration (see Section 5.3), it would be interesting to see if the performance of the controller could be improved by estimating these parameters instead of estimating az.
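The conclusion that the stationary Kalman filter can be replaced by a first-order lag function can be checked numerically. The Python sketch below is not from the thesis; the noise intensities q and r and the step input are arbitrary illustration values. It computes the stationary gain K from the scalar Riccati equation for a random-walk model and shows that the resulting fixed-gain filter is exactly a discretized first-order lag with a = 1 − K, i.e. τ = −T/ln(1 − K).

```python
import numpy as np

def first_order_lag(u, tau, T):
    """Discretized first-order lag  tau*dy/dt + y = u,
    y[k] = a*y[k-1] + (1 - a)*u[k]  with  a = exp(-T/tau)."""
    a = np.exp(-T / tau)
    y = np.zeros_like(u, dtype=float)
    for k in range(1, len(u)):
        y[k] = a * y[k - 1] + (1.0 - a) * u[k]
    return y

def stationary_kalman(u, q, r):
    """Scalar stationary Kalman filter for a random-walk state measured
    in white noise: x[k] = x[k-1] + K*(u[k] - x[k-1]). With a fixed gain
    K this is exactly a first-order lag with a = 1 - K."""
    # Stationary prior covariance from the scalar Riccati equation
    P = (q + np.sqrt(q * q + 4.0 * q * r)) / 2.0
    K = P / (P + r)
    x = np.zeros_like(u, dtype=float)
    for k in range(1, len(u)):
        x[k] = x[k - 1] + K * (u[k] - x[k - 1])
    return x, K

T = 0.02
u = np.concatenate([np.zeros(50), np.ones(200)])   # step disturbance
x, K = stationary_kalman(u, q=0.01, r=0.5)
y = first_order_lag(u, tau=-T / np.log(1.0 - K), T=T)
print(np.max(np.abs(x - y)))   # the two responses coincide
```

The equivalence holds for any q and r: only the ratio q/r matters, and it maps one-to-one onto the lag time constant τ, which is why tuning the lag filter directly can give the same performance.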
List of Notations

This table shows the symbols and abbreviations used in this thesis, together with a reference to the page where they are defined.

Symbol        Description                                      Page
α             Slope of the road                                42
ac            Output from the controller                       34
adev          Deviation from desired acceleration              33
ades          Desired acceleration                             33
aexp          Calculated expected acceleration                 43
am            Measured acceleration of the vehicle             33
areal         Actual acceleration of the vehicle               33
az            Output from the observer                         43
Aw            Air resistance reference area                    36
cd            Drag coefficient                                 36
crr           Rolling resistance coefficient                   36
η             Efficiency factor for the drivetrain             36
e             Measurement noise                                12
Fair          Air resistance                                   36
Fbrake        Force from the brakes                            35
Fdrive        Force from transmission and engine               35
Fresistance   Drive resistance                                 35
Froll         Rolling resistance                               36
g             Gravitational acceleration                       36
id            Differential ratio                               36
Ie            Moment of inertia for the engine                 36
If            Moment of inertia for the front axes             36
ig            Gearbox ratio                                    36
Ig            Moment of inertia for the gear                   36
Ir            Moment of inertia for the rear axes              36
L             Observer gain                                    14
m             Mass of the vehicle                              36
m̃             Mass and moments of inertia                      37
P             Covariance matrix                                16
q             Process noise intensity                          43
Q             Process noise covariance matrix                  15
ρ             Density of the air                               36
r             Measurement noise intensity                      44
rw            Wheel radius                                     34
R             Measurement noise covariance matrix              15
Tb            Desired brake torque                             33
Tbrake        Expected output torque from the brakes           35
Tdrive        Output torque from the transmission and engine   36
Te            Desired engine torque                            33
Tengine       Expected output torque from the engine           35
u             Input signal                                     12
vm            Measured speed of the vehicle                    33
vreal         Actual speed of the vehicle                      33
vwind         Speed of the wind                                36
w             Process noise                                    12
x             State                                            12
x̂             Estimate of the state                            14

Abbreviation  Description                                      Page
ABS           Anti-lock Braking System                         5
ACC           Adaptive Cruise Control                          7
BAS           Brake Assist System                              7
CMS           Collision Mitigation System                      6
DSR           Downhill Speed Regulation                        6
ESP           Electronic Stability Program                     5
RMSE          Root Mean Square Error                           25
SA            Simulated Annealing                              26
Appendix A

Matlab Implementation of "lsqnonlin"

function [q11,q22,q33] = opt_param_lsq(tolx, tolfun)
% Optimize control parameters using LSQNONLIN and Simulink model

if (nargin < 2)
    warning('Using standard value for tolx and tolfun (0.001)');
    tolx = 0.001;
    tolfun = 0.001;
end

% Load the model
load_system('opt_param_model')

% Set initial values
start_parameters = [0.1 1 10];

% Set optimization options (for example termination options)
options = optimset('LargeScale','off','Display','iter', ...
    'TolX',tolx,'TolFun',tolfun);

% Run lsqnonlin to solve the optimization problem
best_parameters = lsqnonlin(@tracklsq, start_parameters, [], [], options);

% Save the result
q11 = best_parameters(1);
q22 = best_parameters(2);
q33 = best_parameters(3);

% This is the callback function used by lsqnonlin
function F = tracklsq(current_parameters)

% Current values are passed by lsqnonlin
q11 = current_parameters(1);
q22 = current_parameters(2);
q33 = current_parameters(3);

% Calculate the observer
Q_d = [q11 0 0 ; 0 q22 0 ; 0 0 q33];
simulink_model_parameters    % script setting the remaining model parameters

% Create simulation options and run simulation
[tout,xout,yout] = sim('opt_param_model');

% Calculate the cost function value
% (In the model used, the error is the 2:nd output)
error = yout(:,2);
% (lsqnonlin uses sqrt(F) as cost function, therefore ^2)
F = rmse(error)^2;

% Calculate a cost function based on the input signal error
% (error = estimated_value - real_value)
function r = rmse(error)
t = length(error);
r = sqrt(1/t*sum(error.^2));
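For readers without access to Matlab, the same optimization pattern can be reproduced with SciPy's least_squares, which, like lsqnonlin, minimizes the sum of squared residuals returned by a callback. The snippet below is a hypothetical stand-in: the Simulink simulation is replaced by a synthetic three-parameter response, and all names and values are illustrative, not from the thesis.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical stand-in for the Simulink simulation: a response that
# depends on three tuning parameters. In the thesis this would be a
# call to sim('opt_param_model').
t = np.linspace(0.0, 5.0, 200)
true_params = np.array([0.3, 1.2, 4.0])

def simulate(params):
    q1, q2, q3 = params
    return q3 * (1.0 - np.exp(-t / q2)) + q1 * t

target = simulate(true_params)

def residuals(params):
    # least_squares minimizes sum(residuals**2), just as lsqnonlin does,
    # so the callback returns the raw error signal, not its RMSE
    return simulate(params) - target

result = least_squares(residuals, x0=[0.1, 1.0, 10.0], xtol=1e-3, ftol=1e-3)
q11, q22, q33 = result.x
```

As in the Matlab script, the termination tolerances (xtol, ftol) trade optimization accuracy against the number of expensive simulation runs.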
Appendix B

Matlab Implementation of Simulated Annealing

This is the developed optimization script implementing the simulated annealing (SA) algorithm in Matlab.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% [param_best, cost_best] = sim_annealing(param_start, steps_max)
%
% This Matlab function recursively tries to find the optimal
% parameters, using Simulink simulation and the algorithm
% "simulated annealing".
%
% Required parameters:
% param_start = [q11 q22 q33], initial state for the algorithm
% steps_max, the maximum number of evaluations allowed
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function [param_best, cost_best] = sim_annealing(param_start, steps_max)

if (nargin < 2)
    error('Specify initial values and max evaluations');
end

P_start = 0.9;
P_stop = 0.000001;
cost_stop = 0.1;

param_current = param_start;                 % Initial state
cost_current = sim_cost(param_current);      % Initial error
param_best = param_current;                  % Initial "best" solution
cost_best = cost_current;

% Calculate a reasonable initial temperature and cooling factor
[temp_start, alpha] = init(param_current, steps_max, P_start, P_stop);
temp_current = temp_start;                   % Initial temperature

rand('state', sum(100*clock));               % Reset random generator
disp(sprintf('steps: \t [q11 \t q22 \t q33] = \t cost \t T=temp'));
steps = 0;                                   % Evaluation count

% While time remains & not good enough
while steps < steps_max & cost_current > cost_stop
    % Pick some neighbour
    param_neighbour = neighbour(param_current);
    % Compute its energy
    cost_neighbour = sim_cost(param_neighbour);

    % Should we move to the new state?
    if rand < trans_P(cost_current, cost_neighbour, temp_current)
        % Yes, change state
        param_current = param_neighbour;
        cost_current = cost_neighbour;
    end

    % Is this a new best?
    if cost_neighbour < cost_best
        % Yes, save it
        param_best = param_neighbour;
        cost_best = cost_neighbour;
    end

    steps = steps + 1;                       % One more evaluation
    % Log the cost (path)
    cost_history(steps) = cost_current;
    temp_history(steps) = temp_current;
    temp_current = alpha*temp_current;       % Cool down

    disp(sprintf('%d: \t [%0.5g \t %0.5g \t %0.5g] = \t %0.5g \t T=%0.5g', ...
        steps, param_current(1), param_current(2), param_current(3), ...
        cost_current, temp_current));
end

figure(1);
subplot(2,1,1); plot(cost_history); xlabel('costfunction');
subplot(2,1,2); plot(temp_history); xlabel('temperature');

% Print the best parameters and their evaluated cost value
param_best
cost_best

% Pick some neighbour to the current parameters
% Should try to get nearby values!
function n = neighbour(param)
n = [-1 -1 -1];
% Can not allow negative values
while n(1) < 0 | n(2) < 0 | n(3) < 0
    % Randomize between -0.5 and +0.5
    change = (rand(1,3)-0.5);
    % Calculate new parameters
    n = param + change.*param;
end

% Calculate the transition probability P
% (the probability that we move to the new parameters)
function P = trans_P(cost_current, cost_neighbour, temp)
if cost_neighbour < cost_current
    % Always go down the hill
    P = 1;
else
    % Go up the hill if the temperature is high,
    % but stay if the temperature is low
    P = exp((cost_current-cost_neighbour)/temp);
end

% sim_cost executes the simulation and returns the RMSE error
function cost = sim_cost(param)
global simin_Ld1 simin_Ld2 simin_Ld3

q11 = param(1);
q22 = param(2);
q33 = param(3);
r_d = 0.2;
lrg_para_lrg                 % script calculating the observer gain L_d

% These parameters have to be defined as global
simin_Ld1 = L_d(1);
simin_Ld2 = L_d(2);
simin_Ld3 = L_d(3);

disp(['Simulating...']);
sim('rdu_simmod_tl_fumo');

% Check global parameters
if (simin_Ld1 == max(simout_Ld1))
    disp(['OK']);
else
    error('Parameters are not received by Simulink');
end

% Calculate the cost function value
error = simout_car_pos_g - simout_LRG_az_SGB;
% Pick out the interesting part (skip the beginning!)
parterror = error(400:2600);
% Calculate cost (root mean square error)
cost = rmse(parterror);

% Calculate a reasonable initial temperature and alpha
function [temp_start, alpha] = init(param_start, steps_max, P_start, P_stop)
cost_start = sim_cost(param_start);
cost_best = cost_start;      % Best of all the neighbours
cost_worst = cost_start;     % Worst of all the neighbours
for i = 1:5
    % Generate some random neighbours, evaluate costs
    param_neighbour = neighbour(param_start);
    cost_neighbour = sim_cost(param_neighbour);
    % Save the worst and best neighbour costs
    if (cost_neighbour < cost_best)
        cost_best = cost_neighbour;
    end
    if (cost_neighbour > cost_worst)
        cost_worst = cost_neighbour;
    end
end

% Calculate the maximum uphill move needed
if (cost_worst > cost_start)
    max_change = cost_worst - cost_start;
else
    max_change = cost_start - cost_best;
end

% Set initial temperature so that this maximum move
% is accepted with a high probability P_start:
% P_start = exp(-max_change/temp_start) gives
temp_start = -max_change / log(P_start);

% Now calculate the cooling factor alpha:
% P_stop = exp(-max_change/(temp_start*alpha^steps_max))
alpha = (-max_change / (temp_start*log(P_stop)))^(1/steps_max);
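The structure of the script above, multiplicative neighbour moves that keep the parameters positive, the Metropolis acceptance rule P = exp((cost_current − cost_neighbour)/T) for uphill moves, and geometric cooling, can be condensed into a language-independent sketch. The Python version below replaces the expensive Simulink simulation with a cheap toy cost function; the tuning constants and the cost function are illustrative, not from the thesis.

```python
import math
import random

def sim_annealing(cost, param_start, steps_max, temp_start=1.0, alpha=0.995):
    """Minimal simulated-annealing loop mirroring the Matlab script:
    multiplicative neighbour moves (keeps parameters positive), the
    Metropolis acceptance rule and geometric cooling."""
    rng = random.Random(0)
    current = list(param_start)
    cost_current = cost(current)
    best, cost_best = list(current), cost_current
    temp = temp_start
    for _ in range(steps_max):
        # Neighbour: perturb each parameter by up to +/-50 % of its value
        neigh = [p * (1.0 + rng.uniform(-0.5, 0.5)) for p in current]
        cost_neigh = cost(neigh)
        # Accept downhill always; uphill with probability exp(-dcost/T)
        if cost_neigh < cost_current or \
           rng.random() < math.exp((cost_current - cost_neigh) / temp):
            current, cost_current = neigh, cost_neigh
        # Keep track of the best parameters ever seen
        if cost_neigh < cost_best:
            best, cost_best = list(neigh), cost_neigh
        temp *= alpha            # cool down
    return best, cost_best

# Toy cost with minimum at (1, 2, 3), standing in for the RMSE returned
# by the Simulink simulation
cost = lambda p: (p[0] - 1.0)**2 + (p[1] - 2.0)**2 + (p[2] - 3.0)**2
best, cost_best = sim_annealing(cost, [0.1, 1.0, 10.0], steps_max=5000)
```

Because each evaluation of the real cost function is a full Simulink run, the number of steps, the cooling rate and the neighbour step size are the main levers for trading solution quality against run time.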
Appendix C

Time Constant Identification

Here is the Matlab identification script that was used to identify the unknown parameter τ and the intensity of w in the first-order lag function τ ȧz + az = w.

% Identifies the parameters "tau" and "K" in the model
%   dot{x} = Ax + Bu + Ke
%   y = Cx + Du + e
% The noise intensity of e is 1

load 'C:\Messungen\221_836_RS141p_pr21.mat'

% Calculate the disturbance a_z due to the slope alpha
alpha = atan((B_LRdeSteigSe)/100);
real_az_Slope = 9.81*sin(alpha);
az = real_az_Slope';

% Sampling time
T = 0.02;

% Start values
tau_start = 0.5;
K_start = 1;

A = [-1/tau_start];
B = [0];
C = [1];
D = [0];
K = [K_start];
x0 = [0];

% Create state-space identification model (continuous time)
m = idss(A,B,C,D,K,x0,'Ts',0);

% NaN means "please identify"
m.as = [NaN];
m.bs = m.b;
m.cs = m.c;
m.ds = m.d;
m.ks = m.k;
m.x0s = m.x0;

% Load identification data; az is the output, there is no input
identificationData = iddata(az, zeros(length(az),1), T);

% Automatically adjust initial values to suit the model
m_init = init(m);

% Identify the unknown parameters
model = pem(identificationData, m_init);

% Save the identified parameters (A = -1/tau)
tau_save = -inv(model.A)
K_save = model.K
lambda = model.NoiseVariance

% Check the model's ability to predict one step ahead
figure(15);
compare(identificationData, model, 1);