
State Estimation

Robert Stengel
Robotics and Intelligent Systems, MAE 345
Princeton University, 2017

Learning Objectives

• Compute least-squares estimates of a constant vector
  – Unweighted and weighted batch processing of noisy data
  – Recursive processing to incorporate new data
• Estimate the state of an uncertain linear dynamic system with incomplete, noisy measurements
  – Discrete-time Kalman filter
  – Continuous-time Kalman-Bucy filter (supplement)

Copyright 2017 by Robert Stengel. All rights reserved. For educational use only.
http://www.princeton.edu/~stengel/MAE345.html

Estimate Constant Vector by Inverse Transformation

• Given
  – Measurements, y, of a constant vector, x
• Estimate x
• Assume that the output, y, is a perfect measurement and H is invertible

y = H x
  – y: (n x 1) output vector
  – H: (n x n) output matrix
  – x: (n x 1) vector to be estimated

• Estimate is based on inverse transformation

x̂ = H^{-1} y
Imperfect Measurement of a Constant Vector

• Given
  – Noisy measurements, z, of a constant vector, x
• Effects of error can be reduced if the measurement is redundant
• Noise-free output, y

y = H x
  – y: (k x 1) output vector
  – H: (k x n) output matrix, k > n
  – x: (n x 1) vector to be estimated

• Measurement of output with error, z

z = y + n = H x + n
  – z: (k x 1) measurement vector
  – n: (k x 1) error vector

Least-Squares Estimate of a Constant Vector

• Measurement-error residual

ε = z − H x̂ = z − ŷ,   dim(ε) = (k x 1)

• Squared measurement error = cost function, J (quadratic norm)

J = (1/2) ε^T ε = (1/2) (z − H x̂)^T (z − H x̂)
  = (1/2) (z^T z − x̂^T H^T z − z^T H x̂ + x̂^T H^T H x̂)

• What is the control parameter? The estimate of x:

x̂,   dim(x̂) = (n x 1)
Least-Squares Estimate of a Constant Vector

J = (1/2) (z^T z − x̂^T H^T z − z^T H x̂ + x̂^T H^T H x̂)

Necessary condition for a minimum:

∂J/∂x̂ = 0 = (1/2) [0 − (H^T z)^T − z^T H + (H^T H x̂)^T + x̂^T H^T H]

The 2nd and 4th terms are transposes of the 3rd and 5th terms.

Least-Squares Estimate of a Constant Vector

The derivative of a scalar, J, with respect to a vector, x (i.e., the gradient), is defined to be a row vector; thus,

∂J/∂x̂ = [∂J/∂x̂_1   ∂J/∂x̂_2   ...   ∂J/∂x̂_n]

0 = (1/2) [0 − (H^T z)^T − z^T H + (H^T H x̂)^T + x̂^T H^T H]
  = [−z^T H + x̂^T H^T H]
Optimal Estimate of x

Rearranging,

[x̂^T H^T H − z^T H] = 0
x̂^T H^T H = z^T H

Then, noting that [x̂^T (H^T H)] (H^T H)^{-1} = x̂^T (I), the optimal estimate is

x̂^T = z^T H (H^T H)^{-1}   (row)
x̂ = (H^T H)^{-1} H^T z   (column)

(H^T H)^{-1} H^T is called the left pseudoinverse of H.
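A brief numerical sketch of this batch estimator (hypothetical H, x, and noise values; numpy):

```python
import numpy as np

# Hypothetical overdetermined problem: k = 4 measurements of n = 2 unknowns
H = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0],
              [1.0, -1.0]])
x_true = np.array([2.0, -1.0])
rng = np.random.default_rng(0)
z = H @ x_true + 0.1 * rng.standard_normal(4)     # noisy measurements

# Left pseudoinverse solution: x_hat = (H^T H)^{-1} H^T z
x_hat = np.linalg.solve(H.T @ H, H.T @ z)
print(x_hat)   # close to x_true; same result as np.linalg.lstsq(H, z, rcond=None)[0]
```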

Estimate of Scalar Constant: Average Weight of Jelly Beans

Measurements are equally uncertain:

z_i = x + n_i,   i = 1 to k

Express measurements as

z = H x + n

Output matrix:

H = [1  1  ...  1]^T   (k x 1)

Optimal estimate:

x̂ = (H^T H)^{-1} H^T z
Estimation of a Scalar Constant: Average Weight of the Jelly Beans

Optimal estimate:

x̂ = (H^T H)^{-1} H^T z
(1 x 1) = [(1 x k)(k x 1)]^{-1} (1 x k)(k x 1)

x̂ = { [1  1  ...  1] [1  1  ...  1]^T }^{-1} [1  1  ...  1] [z_1  z_2  ...  z_k]^T

... is the average:

x̂ = (k)^{-1} (z_1 + z_2 + ... + z_k) = (1/k) Σ_{i=1}^{k} z_i   [sample mean value]
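As a quick check (hypothetical weights), the pseudoinverse with H equal to a column of ones reduces to the sample mean:

```python
import numpy as np

z = np.array([1.02, 0.98, 1.05, 0.97, 1.01])   # hypothetical jelly bean weights
H = np.ones((z.size, 1))
x_hat = np.linalg.solve(H.T @ H, H.T @ z)       # (H^T H)^{-1} H^T z
print(x_hat[0], z.mean())                        # both give the sample mean
```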

Measurements of Differing Quality

Original cost function, J, and optimal estimate of x:

J = (1/2) ε^T ε = (1/2) (z − H x̂)^T (z − H x̂)
  = (1/2) (z^T z − x̂^T H^T z − z^T H x̂ + x̂^T H^T H x̂)

x̂ = (H^T H)^{-1} H^T z

Suppose some elements of the measurement, z, are more uncertain than others:

z = H x + n,   n: error vector

Give more uncertain measurements less weight in arriving at the minimum-cost estimate.
Error-Weighted Cost Function

Measurement uncertainty matrix, R (large is worse):

R = diag[(large error), (small error), ..., (medium error)]

Error-weighting matrix, R^{-1}:

R^{-1} = diag[(low weight), (high weight), ..., (medium weight)]

Weighted cost function, J, reduces the significance of poorer measurements:

J = (1/2) ε^T R^{-1} ε = (1/2) (z − H x̂)^T R^{-1} (z − H x̂)
  = (1/2) (z^T R^{-1} z − x̂^T H^T R^{-1} z − z^T R^{-1} H x̂ + x̂^T H^T R^{-1} H x̂)

Weighted Least-Squares Estimate of a Constant Vector

Necessary condition for a minimum:

∂J/∂x̂ = 0 = (1/2) [0 − (H^T R^{-1} z)^T − z^T R^{-1} H + (H^T R^{-1} H x̂)^T + x̂^T H^T R^{-1} H]

[x̂^T H^T R^{-1} H − z^T R^{-1} H] = 0
x̂^T H^T R^{-1} H = z^T R^{-1} H

Weighted optimal estimate:

x̂ = (H^T R^{-1} H)^{-1} H^T R^{-1} z
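A minimal numerical sketch of the weighted estimator (hypothetical H, z, and measurement variances):

```python
import numpy as np

# Hypothetical problem: the third measurement is much noisier than the others
H = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
z = np.array([2.1, -0.9, 1.5])
R = np.diag([0.01, 0.01, 1.0])          # measurement error covariance (large is worse)

R_inv = np.linalg.inv(R)
# x_hat = (H^T R^{-1} H)^{-1} H^T R^{-1} z
x_hat = np.linalg.solve(H.T @ R_inv @ H, H.T @ R_inv @ z)
print(x_hat)   # the noisy third measurement has little influence on the estimate
```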
Weighted Estimate of Average Jelly Bean Weight

Error-weighting matrix based on standard deviations:

R^{-1} = A = diag[1/σ²_n1, 1/σ²_n2, ..., 1/σ²_nk] = diag[a_11, a_22, ..., a_kk]

Optimal estimate of average jelly bean weight:

x̂ = (H^T R^{-1} H)^{-1} H^T R^{-1} z
  = { [1 ... 1] A [1 ... 1]^T }^{-1} [1 ... 1] A [z_1 ... z_k]^T
  = ( Σ_{i=1}^{k} a_ii z_i ) / ( Σ_{i=1}^{k} a_ii )

Recursive Least-Squares Estimate of Constant Vector, x

• Batch-processing approach
  – All information is gathered prior to processing
  – All information is processed at once

• Recursive approach
  – Optimal estimate has been made from prior measurement set
  – Additional new measurement set
  – Optimal estimate improved by correction to prior estimate
Add One New Measurement

Initial measurement set and state estimate:

z_1 = H_1 x + n_1,   dim(z_1) = k_1 x 1
x̂_1 = (H_1^T R_1^{-1} H_1)^{-1} H_1^T R_1^{-1} z_1

R_1: error covariance of 1st measurement

New measurement set:

z_2 = H_2 x + n_2,   dim(z_2) = k_2 x 1

R_2: error covariance of 2nd measurement

Improved Estimate Incorporating New Measurement Set

P_1^{-1} ≜ H_1^T R_1^{-1} H_1,   so   x̂_1 = P_1 H_1^T R_1^{-1} z_1

New estimate is a linear correction to the old:

x̂_2 = x̂_1 + P_1 H_2^T (H_2 P_1 H_2^T + R_2)^{-1} (z_2 − H_2 x̂_1)
    ≜ x̂_1 + K (z_2 − H_2 x̂_1)

K: estimator gain matrix

K = P_1 H_2^T (H_2 P_1 H_2^T + R_2)^{-1}

See reading for details.


Recursive Optimal Estimate of Constant Vector, x

• Sequence of new measurements
• Generalize to a recursive form, with index i

x̂_i = x̂_{i-1} + K_i (z_i − H_i x̂_{i-1})

with

K_i = P_{i-1} H_i^T (H_i P_{i-1} H_i^T + R_i)^{-1}
P_i = (P_{i-1}^{-1} + H_i^T R_i^{-1} H_i)^{-1}

dim(x) = n x 1;   dim(P) = n x n
dim(z) = r x 1;   dim(R) = r x r
dim(H) = r x n;   dim(K) = n x r
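A minimal recursive-update loop implementing these equations (hypothetical measurement stream; numpy):

```python
import numpy as np

def rls_update(x_hat, P, z_i, H_i, R_i):
    """One recursive least-squares step: gain, state correction, covariance update."""
    S = H_i @ P @ H_i.T + R_i                      # innovation covariance
    K = P @ H_i.T @ np.linalg.inv(S)               # K_i = P H^T (H P H^T + R)^{-1}
    x_hat = x_hat + K @ (z_i - H_i @ x_hat)        # corrected estimate
    P = np.linalg.inv(np.linalg.inv(P) + H_i.T @ np.linalg.inv(R_i) @ H_i)
    return x_hat, P

# Hypothetical example: estimate a constant 2-vector from a stream of scalar measurements
rng = np.random.default_rng(1)
x_true = np.array([1.0, -2.0])
x_hat, P = np.zeros(2), 100.0 * np.eye(2)          # diffuse prior
for _ in range(50):
    H_i = rng.standard_normal((1, 2))
    R_i = np.array([[0.1]])
    z_i = H_i @ x_true + 0.3 * rng.standard_normal(1)
    x_hat, P = rls_update(x_hat, P, z_i, H_i, R_i)
print(x_hat)   # approaches x_true as measurements accumulate
```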

Dynamic Sampled-Data Systems with Uncertain Inputs and Disturbances
Systems with Uncertainty

• x is not constant in a dynamic system
• Dynamic systems may have uncertain
  – Initial conditions
  – Inputs
  – Measurements
  – System parameters or dynamic structure
• Design goal: estimate the state with minimum expected error
  – Mean value ≈ actual value of the state
  – Expected value of estimate error as small as possible

State Estimation

• Goals
  – Minimize effects of measurement error on knowledge of the state
  – Reconstruct the full state from a reduced measurement set (r ≤ n)
  – Average redundant measurements (r ≥ n) to produce an estimate of the full state
• Method
  – Provide optimal balance between measurements and estimates based on the dynamic model alone
  – Continuous- or discrete-time implementation
Uncertain Sampled-Data Linear Dynamic Model

Discrete-time LTI model with known coefficients:

x_k = Φ x_{k-1} + Γ u_{k-1} + Λ w_{k-1}
y_k = H_x x_k + H_u u_k
z_k = y_k + n_k

Equivalent to the continuous-time model at the sampling instants.
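As an aside, a brief sketch of forming Φ and Γ from a continuous-time model for a given sampling interval (hypothetical F, G, and T; the disturbance term is omitted):

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical continuous-time model x_dot = F x + G u, sampled every T seconds
F = np.array([[-1.0, 0.0], [1.0, 0.0]])
G = np.array([[2.5], [0.0]])
T = 0.1

# Exact discretization via an augmented matrix exponential:
# expm([[F, G], [0, 0]] * T) = [[Phi, Gamma], [0, I]]
n, m = F.shape[0], G.shape[1]
M = np.zeros((n + m, n + m))
M[:n, :n] = F
M[:n, n:] = G
E = expm(M * T)
Phi, Gamma = E[:n, :n], E[:n, n:]
print(Phi, Gamma)
```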

Optimal State Estimation

See Supplemental Material for the continuous-time filter and example.
Discrete-Time Linear-Optimal State Estimation

• Kalman filter is the optimal estimator for discrete-time linear systems with Gaussian uncertainty
• Five equations
  1) State estimate extrapolation
  2) Covariance estimate extrapolation
  3) Filter gain computation
  4) State estimate update
  5) Covariance estimate update
• Notation
  x̂_k(−): estimate at kth instant, before measurement update
  x̂_k(+): estimate at kth instant, after measurement update

Equations of the Kalman Filter

1) State estimate extrapolation (or propagation):

x̂_k(−) = Φ_{k-1} x̂_{k-1}(+) + Γ_{k-1} u_{k-1}

2) Covariance estimate extrapolation (or propagation):

P_k(−) = Φ_{k-1} P_{k-1}(+) Φ_{k-1}^T + Q_{k-1}
Equations of the Kalman Filter

3) Filter gain computation:

K_k = P_k(−) H_k^T [H_k P_k(−) H_k^T + R_k]^{-1}

4) State estimate update:

x̂_k(+) = x̂_k(−) + K_k [z_k − H_k x̂_k(−)]

5) Covariance estimate update:

P_k(+) = [P_k^{-1}(−) + H_k^T R_k^{-1} H_k]^{-1}
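A compact sketch of the five equations in code form (numpy; the system matrices are placeholders to be supplied by the user):

```python
import numpy as np

def kalman_step(x_hat, P, u, z, Phi, Gamma, H, Q, R):
    """One cycle of the discrete-time Kalman filter (equations 1-5 above)."""
    # 1) State estimate extrapolation
    x_minus = Phi @ x_hat + Gamma @ u
    # 2) Covariance estimate extrapolation
    P_minus = Phi @ P @ Phi.T + Q
    # 3) Filter gain computation
    K = P_minus @ H.T @ np.linalg.inv(H @ P_minus @ H.T + R)
    # 4) State estimate update
    x_plus = x_minus + K @ (z - H @ x_minus)
    # 5) Covariance estimate update (inverse-covariance form, as on the slide)
    P_plus = np.linalg.inv(np.linalg.inv(P_minus) + H.T @ np.linalg.inv(R) @ H)
    return x_plus, P_plus
```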

Example: Estimate Rolling Motion of an Airplane

Continuous-time model:

[Δṗ; Δφ̇] = [L_p 0; 1 0] [Δp; Δφ] + [L_δA; 0] ΔδA + [L_p; 0] Δp_w

[Δp; Δφ] = [roll rate, rad/s; roll angle, rad]
ΔδA = aileron deflection, rad
Δp_w = turbulence disturbance, rad/s

Discrete-time model:

[Δp_k; Δφ_k] = [e^{L_p T}  0;  (e^{L_p T} − 1)/L_p  1] [Δp_{k-1}; Δφ_{k-1}] + [≈L_δA T; 0] ΔδA_{k-1} + [≈L_p T; 0] Δp_w,{k-1}
             = [φ_11 0; φ_21 1] [Δp_{k-1}; Δφ_{k-1}] + [γ_1; 0] ΔδA_{k-1} + [λ_1; 0] Δp_w,{k-1}

T = sampling interval, s
Kalman Filter Example: Rate and Angle Measurement

Roll rate gyro, rad/s, and roll angle gyro, rad:

[Δp_M; Δφ_M]_k = [Δp + Δn_p; Δφ + Δn_φ]_k = I Δx_k + Δn_k

1) State estimate extrapolation:

[Δp̂_k(−); Δφ̂_k(−)] = [φ_11 0; φ_21 1] [Δp̂_{k-1}(+); Δφ̂_{k-1}(+)] + [γ_1; 0] ΔδA_{k-1}

Kalman Filter Example

2) Covariance extrapolation:

[p_11(−) p_12(−); p_21(−) p_22(−)]_k = [φ_11 0; φ_21 1] [p_11(+) p_12(+); p_21(+) p_22(+)]_{k-1} [φ_11 φ_21; 0 1] + [σ_p² 0; 0 0]

where

Q_{k-1} ≈ [L_p; 0] Q'_C [L_p 0]^T = [σ_p² 0; 0 0]

3) Gain computation:

[k_11 k_12; k_21 k_22]_k = [p_11(−) p_12(−); p_21(−) p_22(−)]_k { [p_11(−) p_12(−); p_21(−) p_22(−)]_k + [σ²_pM 0; 0 σ²_φM]_k }^{-1}

where

R_k ≜ [σ²_pM 0; 0 σ²_φM]_k
Kalman Filter Example

4) State estimate update:

[Δp̂_k(+); Δφ̂_k(+)] = [Δp̂_k(−); Δφ̂_k(−)] + [k_11 k_12; k_21 k_22]_k { [Δp_M; Δφ_M]_k − [Δp̂_k(−); Δφ̂_k(−)] }

5) Covariance update:

[p_11(+) p_12(+); p_21(+) p_22(+)]_k = { [p_11(−) p_12(−); p_21(−) p_22(−)]_k^{-1} + [1/σ²_pM 0; 0 1/σ²_φM]_k }^{-1}
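To tie the example together, here is a brief simulation sketch of this roll-axis filter, with hypothetical numerical values for L_p, L_δA, the noise statistics, and the sampling interval; it reuses the kalman_step function sketched after the five filter equations:

```python
import numpy as np

# Hypothetical roll dynamics: Lp = -1.0 /s, L_dA = 2.5 /s^2, sampling interval T = 0.1 s
Lp, L_dA, T = -1.0, 2.5, 0.1
phi11 = np.exp(Lp * T)
phi21 = (np.exp(Lp * T) - 1.0) / Lp
Phi = np.array([[phi11, 0.0], [phi21, 1.0]])
Gamma = np.array([[L_dA * T], [0.0]])
H = np.eye(2)                                   # roll rate and roll angle both measured
Q = np.array([[0.02, 0.0], [0.0, 0.0]])        # turbulence enters the rate equation only
R = np.diag([0.01, 0.001])                     # rate and angle measurement variances

rng = np.random.default_rng(2)
x = np.zeros(2)                                 # true [roll rate, roll angle]
x_hat, P = np.zeros(2), np.eye(2)
for k in range(200):
    u = np.array([0.05 * np.sin(0.1 * k)])      # aileron deflection, rad
    w = np.sqrt(Q[0, 0]) * rng.standard_normal()
    x = Phi @ x + Gamma @ u + np.array([w, 0.0])
    z = x + rng.multivariate_normal(np.zeros(2), R)
    x_hat, P = kalman_step(x_hat, P, u, z, Phi, Gamma, H, Q, R)   # defined earlier
print(x, x_hat)                                  # estimate tracks the true state
```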

Comparison of Running Average and Kalman Estimate of Velocity from Position Measurement

(figure)
Kalman Filter Estimate is Stable
Estimate is stable even if dynamic system is unstable,
i.e., asymptotically approaches actual state

1st-order LTI System


Next Time: Stochastic Control
Supplemental Material


Is the Least-Squares Solution a Minimum or a Maximum?

Gradient:

∂J/∂x̂ = [∂J/∂x̂_1   ∂J/∂x̂_2   ...   ∂J/∂x̂_n] = [−z^T H + x̂^T H^T H] = 0

Hessian matrix:

∂²J/∂x̂² = H^T H > 0,   dim = (n x n)

A minimum.
Weighted Least-Squares Estimate of a Constant Vector

Weighted cost function, J:

J = (1/2) ε^T R^{-1} ε = (1/2) (z − H x̂)^T R^{-1} (z − H x̂)
  = (1/2) (z^T R^{-1} z − x̂^T H^T R^{-1} z − z^T R^{-1} H x̂ + x̂^T H^T R^{-1} H x̂)

Necessary condition for a minimum:

∂J/∂x̂ = 0 = (1/2) [0 − (H^T R^{-1} z)^T − z^T R^{-1} H + (H^T R^{-1} H x̂)^T + x̂^T H^T R^{-1} H]

Uncertain Continuous-Time Linear Dynamic Model

Continuous-time LTI model with known coefficients:

ẋ(t) = F x(t) + G u(t) + L w(t),   x(t_0) given

x(t) = x(t_0) + ∫_{t_0}^{t} [F x(τ) + G u(τ) + L w(τ)] dτ

y(t) = H_x x(t) + H_u u(t):   output vector
z(t) = y(t) + n(t):   measurement vector

Initial condition and disturbance inputs are not known precisely.
Measurement of the state is transformed and is subject to error.
Continuous-Time Linear-Optimal State Estimation

Continuous-time linear dynamic process with random disturbance:

ẋ(t) = F(t) x(t) + G(t) u(t) + L(t) w(t)

Measurement with random error:

z(t) = H x(t) + n(t)

Uncertainty model for initial condition, disturbance input, and measurement error:

x̄(t_0) = E[x(t_0)],   P(t_0) = E{ [x(t_0) − x̄(t_0)] [x(t_0) − x̄(t_0)]^T }
w̄(t) = 0,   W(t) = E{ [w(t)] [w(τ)]^T }
n̄(t) = 0,   N(t) = E{ [n(t)] [n(τ)]^T }
ū(t) = E[u(t)],   U(t_0) = 0

Linear-Optimal State Estimator (Kalman-Bucy Filter)

• Optimal estimate of the state

dx̂(t)/dt = F(t) x̂(t) + G(t) u(t) + K(t) [z(t) − H x̂(t)],   x̂(t_0) = x̄(t_0)

K(t): optimal estimator gain matrix (n x r)

• Two parts to the optimal state estimator
  – Propagation of the expected value of x
  – Least-squares correction to the model-based estimate

dΔx̂(t)/dt = F(t) Δx̂(t) + G Δu(t) + K(t) [Δz(t) − H Δx̂(t)]

LTI system:

dΔx̂(t)/dt = [F − K H] Δx̂(t) + G Δu(t) + K Δz(t)
Estimator Gain for the Kalman-Bucy Filter

Optimal filter gain matrix:

K(t) = P(t) H^T N^{-1}(t)

Matrix Riccati equation for the estimator covariance:

Ṗ(t) = F(t) P(t) + P(t) F^T(t) + L(t) W(t) L^T(t) − P(t) H^T N^{-1} H P(t),   P(t_0) = P_0

• Same equations as those that define the LQ control gain, except
  – Solution matrix, P, is propagated forward in time
  – Matrices are modified
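A minimal sketch of propagating this Riccati equation forward with Euler integration and forming K(t), using hypothetical LTI matrices F, H, L, W, and N:

```python
import numpy as np

def kalman_bucy_gain_history(F, H, L, W, N, P0, dt, steps):
    """Propagate P_dot = F P + P F^T + L W L^T - P H^T N^{-1} H P forward in time."""
    P = P0.copy()
    N_inv = np.linalg.inv(N)
    gains = []
    for _ in range(steps):
        K = P @ H.T @ N_inv                       # K(t) = P(t) H^T N^{-1}
        gains.append(K)
        P_dot = F @ P + P @ F.T + L @ W @ L.T - P @ H.T @ N_inv @ H @ P
        P = P + dt * P_dot                        # simple forward-Euler step
    return gains

# Hypothetical 2nd-order roll example: F = [[Lp, 0], [1, 0]], full-state measurement
Lp = -1.0
F = np.array([[Lp, 0.0], [1.0, 0.0]])
H = np.eye(2)
Lmat = np.array([[Lp], [0.0]])
W = np.array([[0.05]])                            # turbulence spectral density
N = np.diag([0.01, 0.001])                        # measurement noise densities
gains = kalman_bucy_gain_history(F, H, Lmat, W, N, P0=np.eye(2), dt=0.01, steps=1000)
print(gains[-1])                                  # gain approaches a steady-state value
```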

Continuous-Time 2nd-Order Example of Kalman-Bucy Filter

Rolling motion of an airplane:

[ṗ(t); φ̇(t)] = [L_p 0; 1 0] [p(t); φ(t)] + [L_δA; 0] δA(t) + [L_p; 0] p_w(t)

[p; φ] = [roll rate, rad/s; roll angle, rad]
L_p: roll-rate damping, turbulence sensitivity
δA = aileron deflection, rad
L_δA: control effectiveness
p_w = turbulence disturbance, rad/s

Measurement of roll rate and angle:

[p_M(t); φ_M(t)] = [p(t) + n_p(t); φ(t) + n_φ(t)] = I_2 x(t) + n(t)
Second-Order Example of Kalman-Bucy Filter

Covariance extrapolation:

[ṗ_11(t) ṗ_12(t); ṗ_12(t) ṗ_22(t)] = [L_p 0; 1 0] [p_11(t) p_12(t); p_12(t) p_22(t)] + [p_11(t) p_12(t); p_12(t) p_22(t)] [L_p 1; 0 0]
  + [L_p² σ²_pW 0; 0 0] − [p_11(t) p_12(t); p_12(t) p_22(t)] [σ²_pM 0; 0 σ²_φM]^{-1} [p_11(t) p_12(t); p_12(t) p_22(t)]

Estimator gain computation:

[k_11(t) k_12(t); k_21(t) k_22(t)] = [p_11(t) p_12(t); p_12(t) p_22(t)] [σ²_pM 0; 0 σ²_φM]^{-1}

Kalman-Bucy Filter with Two Measurements

State estimate with roll rate and angle measurements:

[dp̂(t)/dt; dφ̂(t)/dt] = [L_p 0; 1 0] [p̂(t); φ̂(t)] + [L_δA; 0] δA(t) + [k_11(t) k_12(t); k_21(t) k_22(t)] [p_M(t) − p̂(t); φ_M(t) − φ̂(t)]
State Estimate with Angle Measurement Only

Covariance extrapolation:

[ṗ_11(t) ṗ_12(t); ṗ_12(t) ṗ_22(t)] = [L_p 0; 1 0] [p_11(t) p_12(t); p_12(t) p_22(t)] + [p_11(t) p_12(t); p_12(t) p_22(t)] [L_p 1; 0 0] + [L_p² σ²_pW 0; 0 0]
  − (1/σ²_φM) [p_12(t); p_22(t)] [p_12(t) p_22(t)]

Gain computation:

[k_11(t); k_21(t)] = (1/σ²_φM) [p_12(t); p_22(t)]

State estimate with roll angle measurement:

[dp̂(t)/dt; dφ̂(t)/dt] = [L_p 0; 1 0] [p̂(t); φ̂(t)] + [L_δA; 0] δA(t) + [k_11(t); k_21(t)] [φ_M(t) − φ̂(t)]

State and Output Vectors for the Quadrotor Helicopter

Longitudinal state:

x = [x; z; u; w; q; θ]
  = [Range; Height; Axial Velocity; Normal Velocity; Pitch Rate; Pitch Angle]

Longitudinal output:

y = [x; −z; −z; −z; ẋ; u̇; ẇ; q; θ]
  = [Range (GPS); Height (GPS); Height (Ultrasound); Height (Pressure Sensor);
     Ground Speed (QVGA Camera); Axial Acceleration; Normal Acceleration;
     Pitch Rate (Gyro); Pitch Angle (Magnetometer)]

y ≠ x
Output Vector and Matrix for the Quadrotor Helicopter

Neglect GPS and Pressure Sensor.

Longitudinal output, linearized at θ = 0:

y = H_x x + H_u u

[−z]   [ 0 −1  0  0  0   0 ] [x]   [    0          0    ]
[ ẋ]   [ 0  0  1  0  0   0 ] [z]   [    0          0    ]
[ u̇] = [ 0  0  0  0  0  −g ] [u] + [    0          0    ] [T_f]
[ ẇ]   [ 0  0  0  0  0   0 ] [w]   [  −2/m       −2/m   ] [T_a]
[ q]   [ 0  0  0  0  0   0 ] [q]   [ d/I_yy    −d/I_yy  ]
[ θ]   [ 0  0  0  0  1   0 ] [θ]   [    0          0    ]

How would you design the Kalman Filter?
