Least-Squares Estimation

Robert Stengel
Optimal Control and Estimation, MAE 546, Princeton University, 2013

Estimating unknown constants from redundant measurements:
- Least-squares
- Weighted least-squares

Copyright 2013 by Robert Stengel. All rights reserved. For educational use only.
http://www.princeton.edu/~stengel/MAE3546.html
http://www.princeton.edu/~stengel/OptConEst.html

Perfect Measurement of a Constant Vector

Given: measurements, y, of a constant vector, x

$$y = Hx$$

y: (n x 1) output vector
H: (n x n) output matrix
x: (n x 1) vector to be estimated

Assume that the output, y, is a perfect measurement and that H is invertible; then x is estimated exactly by inverting the output matrix:

$$\hat{x} = H^{-1}y$$
Imperfect Measurement of a Constant Vector

Given: noisy measurements, z, of a constant vector, x

$$y = Hx$$
$$z = y + n = Hx + n$$

y: (k x 1) output vector
H: (k x n) output matrix, k > n
x: (n x 1) vector to be estimated
z: (k x 1) measurement vector
n: (k x 1) error vector

Measurement-error residual:

$$\varepsilon = z - H\hat{x} = z - \hat{y}, \qquad \dim(\varepsilon) = (k \times 1), \quad \dim(\hat{x}) = (n \times 1)$$

Squared-error cost function:

$$J = \frac{1}{2}\varepsilon^T\varepsilon = \frac{1}{2}(z - H\hat{x})^T(z - H\hat{x}) = \frac{1}{2}\left[z^Tz - \hat{x}^TH^Tz - z^TH\hat{x} + \hat{x}^TH^TH\hat{x}\right]$$

Necessary condition for a minimum:

$$\frac{\partial J}{\partial \hat{x}} = 0 = \frac{1}{2}\left[-(H^Tz)^T - z^TH + (H^TH\hat{x})^T + \hat{x}^TH^TH\right] = \hat{x}^TH^TH - z^TH$$

$$\hat{x}^TH^TH = z^TH$$

$$\hat{x}^T = z^TH(H^TH)^{-1} \;\text{(row)} \qquad \text{or} \qquad \hat{x} = (H^TH)^{-1}H^Tz \;\text{(column)}$$
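As a quick numerical sketch of the column-form estimate (not part of the original lecture; the matrix and noise values are invented for illustration):

```python
import numpy as np

# Minimal sketch of x_hat = (H^T H)^(-1) H^T z with k = 4 > n = 2.
H = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0],
              [1.0, -1.0]])                      # (k x n) output matrix
x_true = np.array([2.0, -3.0])                   # constant vector to be estimated
rng = np.random.default_rng(0)
z = H @ x_true + 0.1 * rng.standard_normal(4)    # z = Hx + n

x_hat = np.linalg.solve(H.T @ H, H.T @ z)        # normal-equation solution
x_lstsq, *_ = np.linalg.lstsq(H, z, rcond=None)  # same estimate, computed more robustly
print(x_hat, x_lstsq)
```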
Average Weight of the Jelly Beans

Each measurement is the true weight plus noise:

$$z_i = x + n_i, \quad i = 1 \text{ to } k$$

Express the measurements as

$$z = Hx + n$$

Output matrix:

$$H = \begin{bmatrix} 1 & 1 & \cdots & 1 \end{bmatrix}^T$$

Optimal estimate:

$$\hat{x} = \left(H^TH\right)^{-1}H^Tz = \frac{1}{k}\begin{bmatrix} 1 & 1 & \cdots & 1 \end{bmatrix}\begin{bmatrix} z_1 \\ z_2 \\ \vdots \\ z_k \end{bmatrix} = \frac{1}{k}\left(z_1 + z_2 + \cdots + z_k\right)$$

This is the simple average:

$$\hat{x} = \frac{1}{k}\sum_{i=1}^{k} z_i$$
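A one-line check (not from the lecture; the gram values are invented) that the matrix formula reproduces the simple average:

```python
import numpy as np

# Jelly-bean case: H is a column of ones, so least squares = simple average.
z = np.array([1.02, 0.98, 1.05, 0.97, 1.01])   # hypothetical weights (grams)
H = np.ones((z.size, 1))

x_hat = np.linalg.solve(H.T @ H, H.T @ z)      # (H^T H)^(-1) H^T z
print(x_hat.item(), z.mean())                  # identical values
```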
Least-Squares Applications

- Find the trend line in noisy data
- Higher-degree curve-fitting
- Multivariate estimation
- Image restoration (original slide shows a degraded image)

Least-Squares Linear Fit to Noisy Data

Find the trend line in noisy data:

$$y = a_0 + a_1x, \qquad z_i = (a_0 + a_1x_i) + n_i$$

Measurement-error residual:

$$\varepsilon = \begin{bmatrix} z_1 - (a_0 + a_1x_1) \\ z_2 - (a_0 + a_1x_2) \\ \vdots \\ z_n - (a_0 + a_1x_n) \end{bmatrix}$$

Measurement vector:

$$z = \begin{bmatrix} 1 & x_1 \\ 1 & x_2 \\ \vdots & \vdots \\ 1 & x_n \end{bmatrix}\begin{bmatrix} a_0 \\ a_1 \end{bmatrix} + \begin{bmatrix} n_1 \\ n_2 \\ \vdots \\ n_n \end{bmatrix} = Ha + n$$

Cost function:

$$J = \frac{1}{2}(z - H\hat{a})^T(z - H\hat{a})$$

Optimal estimate of the coefficients:

$$\hat{a} = \begin{bmatrix} \hat{a}_0 \\ \hat{a}_1 \end{bmatrix} = \left(H^TH\right)^{-1}H^Tz$$

Fitted trend line:

$$\hat{y} = \hat{a}_0 + \hat{a}_1x$$
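A short trend-line sketch (not from the lecture; the true line and noise level are invented for illustration):

```python
import numpy as np

# Fit y = a0 + a1*x to noisy samples of the line a0 = 1, a1 = 2.
rng = np.random.default_rng(1)
x = np.linspace(0.0, 5.0, 20)
z = 1.0 + 2.0 * x + 0.3 * rng.standard_normal(x.size)  # z_i = a0 + a1*x_i + n_i

H = np.column_stack([np.ones_like(x), x])      # rows [1, x_i]
a_hat, *_ = np.linalg.lstsq(H, z, rcond=None)  # (H^T H)^(-1) H^T z
print(a_hat)                                   # approximately [1.0, 2.0]
```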
Measurements of Differing Quality

Suppose some elements of the measurement, z, are more uncertain than others:

$$z = Hx + n$$

Give the more uncertain measurements less weight in arriving at the minimum-cost estimate. Let S = a measure of uncertainty; then express the error cost in terms of S^(-1):

$$J = \frac{1}{2}\varepsilon^TS^{-1}\varepsilon$$
Weighted Least-Squares Estimate of a Constant Vector

$$J = \frac{1}{2}\varepsilon^TS^{-1}\varepsilon = \frac{1}{2}(z - H\hat{x})^TS^{-1}(z - H\hat{x})$$
$$= \frac{1}{2}\left[z^TS^{-1}z - \hat{x}^TH^TS^{-1}z - z^TS^{-1}H\hat{x} + \hat{x}^TH^TS^{-1}H\hat{x}\right]$$

Necessary condition for a minimum:

$$\frac{\partial J}{\partial \hat{x}} = 0 = \frac{1}{2}\left[-(H^TS^{-1}z)^T - z^TS^{-1}H + (H^TS^{-1}H\hat{x})^T + \hat{x}^TH^TS^{-1}H\right]$$

$$\hat{x}^TH^TS^{-1}H - z^TS^{-1}H = 0 \quad \Rightarrow \quad \hat{x}^TH^TS^{-1}H = z^TS^{-1}H$$

Weighted least-squares estimate:

$$\hat{x} = \left(H^TS^{-1}H\right)^{-1}H^TS^{-1}z$$
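A weighted-least-squares sketch (not from the lecture; the noise standard deviations are invented) in which the noisier measurements are down-weighted through S^(-1):

```python
import numpy as np

# The last two measurements are ten times noisier; S encodes that on its diagonal.
H = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0],
              [1.0, -1.0]])
x_true = np.array([2.0, -3.0])
sigma = np.array([0.1, 0.1, 1.0, 1.0])          # per-measurement std. deviation
rng = np.random.default_rng(2)
z = H @ x_true + sigma * rng.standard_normal(4)

S_inv = np.diag(1.0 / sigma**2)                 # diagonal S^(-1)
x_hat = np.linalg.solve(H.T @ S_inv @ H, H.T @ S_inv @ z)
print(x_hat)                                    # close to x_true
```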
Optimal Estimate of Average Jelly Bean Weight

Let the measurement-uncertainty matrix be diagonal:

$$S_A = \begin{bmatrix} a_{11} & 0 & \cdots & 0 \\ 0 & a_{22} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & a_{kk} \end{bmatrix}$$

Error cost:

$$J = \frac{1}{2}\varepsilon^TS_A^{-1}\varepsilon = \frac{1}{2}(z - y)^TS_A^{-1}(z - y) = \frac{1}{2}(z - H\hat{x})^TS_A^{-1}(z - H\hat{x})$$

With H a (k x 1) column of ones, the weighted least-squares estimate

$$\hat{x} = \left(H^TS_A^{-1}H\right)^{-1}H^TS_A^{-1}z$$

reduces to a weighted average in which each measurement is weighted inversely to its uncertainty:

$$\hat{x} = \frac{\sum_{i=1}^{k} z_i/a_{ii}}{\sum_{i=1}^{k} 1/a_{ii}}$$
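A numerical check of the weighted average (not from the lecture; the weights and variances are invented):

```python
import numpy as np

# Weighted jelly-bean average: each measurement weighted by 1/a_ii.
z = np.array([1.02, 0.98, 1.05, 0.97, 1.01])   # hypothetical weights (grams)
a = np.array([0.01, 0.01, 0.04, 0.04, 0.09])   # hypothetical diagonal of S_A

x_sum = np.sum(z / a) / np.sum(1.0 / a)        # closed-form weighted average

H = np.ones((z.size, 1))                       # matrix form, same answer
S_inv = np.diag(1.0 / a)
x_mat = np.linalg.solve(H.T @ S_inv @ H, H.T @ S_inv @ z).item()
print(x_sum, x_mat)
```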
Measurement Error Covariance, S_A

The error covariance is taken about the true output:

$$S_A = E\left[(z - y)(z - y)^T\right] = E\left[(z - Hx)(z - Hx)^T\right] = E\left[nn^T\right] \triangleq R$$

Measurement Residual Covariance, S_B

The residual covariance is taken about the estimated output, with $\varepsilon = z - H\hat{x} = H(x - \hat{x}) + n$:

$$S_B = E\left[\varepsilon\varepsilon^T\right] = E\left[\left(H(x - \hat{x}) + n\right)\left(H(x - \hat{x}) + n\right)^T\right]$$
$$= HE\left[(x - \hat{x})(x - \hat{x})^T\right]H^T + HE\left[(x - \hat{x})n^T\right] + E\left[n(x - \hat{x})^T\right]H^T + E\left[nn^T\right]$$
$$= HPH^T + HM + M^TH^T + R$$

where

$$P = E\left[(x - \hat{x})(x - \hat{x})^T\right], \quad M = E\left[(x - \hat{x})n^T\right], \quad R = E\left[nn^T\right]$$

Recursive Least-Squares Estimation
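A Monte Carlo spot-check of the S_B expansion for the unweighted least-squares estimator (a sketch, not from the lecture; all setup values are invented):

```python
import numpy as np

# Verify S_B = H P H^T + H M + M^T H^T + R by simulation.
rng = np.random.default_rng(3)
H = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
R = np.diag([0.04, 0.09, 0.01])                # true error covariance
x = np.array([2.0, -3.0])
G = np.linalg.solve(H.T @ H, H.T)              # estimator: x_hat = G z

N = 200_000
n = rng.multivariate_normal(np.zeros(3), R, size=N)
z = H @ x + n                                  # (N x 3) measurement draws
x_hat = z @ G.T                                # (N x 2) estimates
eps = z - x_hat @ H.T                          # (N x 3) residuals

S_B_mc = eps.T @ eps / N                       # sample residual covariance
M = -G @ R                                     # E[(x - x_hat) n^T] for this estimator
P = G @ R @ G.T                                # E[(x - x_hat)(x - x_hat)^T]
S_B = H @ P @ H.T + H @ M + M.T @ H.T + R
print(np.round(S_B_mc, 3), np.round(S_B, 3), sep="\n")
```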
Recursive approach:
- An optimal estimate has been made from the prior measurement set
- A new measurement set is obtained
- The optimal estimate is improved by an incremental change (or correction) to the prior optimal estimate

First measurement set:

$$z_1 = H_1x + n_1$$

$$\dim(z_1) = \dim(n_1) = k_1 \times 1; \quad \dim(H_1) = k_1 \times n; \quad \dim(R_1) = k_1 \times k_1$$

Cost of the estimate from the first set alone:

$$J_1 = \frac{1}{2}\varepsilon_1^TR_1^{-1}\varepsilon_1 = \frac{1}{2}(z_1 - H_1\hat{x}_1)^TR_1^{-1}(z_1 - H_1\hat{x}_1)$$
Second measurement set:

$$z_2 = H_2x + n_2$$

$$\dim(z_2) = \dim(n_2) = k_2 \times 1; \quad \dim(H_2) = k_2 \times n; \quad \dim(R_2) = k_2 \times k_2$$

R_2: second measurement-error covariance

Concatenate the two sets,

$$z = \begin{bmatrix} z_1 \\ z_2 \end{bmatrix}, \qquad H = \begin{bmatrix} H_1 \\ H_2 \end{bmatrix}$$

and minimize the combined weighted cost:

$$J_2 = \frac{1}{2}\begin{bmatrix} (z_1 - H_1\hat{x}_2)^T & (z_2 - H_2\hat{x}_2)^T \end{bmatrix}\begin{bmatrix} R_1^{-1} & 0 \\ 0 & R_2^{-1} \end{bmatrix}\begin{bmatrix} z_1 - H_1\hat{x}_2 \\ z_2 - H_2\hat{x}_2 \end{bmatrix}$$

The batch estimate from both measurement sets is

$$\hat{x}_2 = \left(H_1^TR_1^{-1}H_1 + H_2^TR_2^{-1}H_2\right)^{-1}\left(H_1^TR_1^{-1}z_1 + H_2^TR_2^{-1}z_2\right)$$
Apply the matrix inversion lemma. Define

$$P_1^{-1} \triangleq H_1^TR_1^{-1}H_1$$

Then

$$\left(P_1^{-1} + H_2^TR_2^{-1}H_2\right)^{-1} = P_1 - P_1H_2^T\left(H_2P_1H_2^T + R_2\right)^{-1}H_2P_1$$

which can be verified from $I = A^{-1}A = AA^{-1}$, with $A \triangleq H_2P_1H_2^T + R_2$.

Substituting in the batch estimate and using $\hat{x}_1 = P_1H_1^TR_1^{-1}z_1$,

$$\hat{x}_2 = \hat{x}_1 - P_1H_2^T\left(H_2P_1H_2^T + R_2\right)^{-1}H_2\hat{x}_1 + P_1H_2^T\left[I - \left(H_2P_1H_2^T + R_2\right)^{-1}H_2P_1H_2^T\right]R_2^{-1}z_2$$

Because $I - A^{-1}H_2P_1H_2^T = A^{-1}R_2$, the bracketed term collapses, leaving

$$\hat{x}_2 = \hat{x}_1 + P_1H_2^T\left(H_2P_1H_2^T + R_2\right)^{-1}\left(z_2 - H_2\hat{x}_1\right)$$
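A numeric spot-check of the matrix-inversion-lemma step (not from the lecture; the random values are invented):

```python
import numpy as np

# Check (P1^-1 + H2^T R2^-1 H2)^-1 = P1 - P1 H2^T (H2 P1 H2^T + R2)^-1 H2 P1.
rng = np.random.default_rng(4)
n_dim, k2 = 3, 2
A1 = rng.standard_normal((n_dim, n_dim))
P1 = A1 @ A1.T + np.eye(n_dim)                 # symmetric positive-definite P1
H2 = rng.standard_normal((k2, n_dim))
R2 = np.diag([0.5, 2.0])

lhs = np.linalg.inv(np.linalg.inv(P1) + H2.T @ np.linalg.inv(R2) @ H2)
rhs = P1 - P1 @ H2.T @ np.linalg.inv(H2 @ P1 @ H2.T + R2) @ H2 @ P1
print(np.allclose(lhs, rhs))                   # True
```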
Recursive Optimal Estimate

For the second measurement set,

$$\hat{x}_2 = \hat{x}_1 + P_1H_2^T\left(H_2P_1H_2^T + R_2\right)^{-1}\left(z_2 - H_2\hat{x}_1\right) \triangleq \hat{x}_1 + K\left(z_2 - H_2\hat{x}_1\right)$$

and, in general, for the i-th measurement set,

$$\hat{x}_i = \hat{x}_{i-1} + P_{i-1}H_i^T\left(H_iP_{i-1}H_i^T + R_i\right)^{-1}\left(z_i - H_i\hat{x}_{i-1}\right) \triangleq \hat{x}_{i-1} + K_i\left(z_i - H_i\hat{x}_{i-1}\right)$$

with

$$P_i = \left(P_{i-1}^{-1} + H_i^TR_i^{-1}H_i\right)^{-1}, \qquad K_i = P_{i-1}H_i^T\left(H_iP_{i-1}H_i^T + R_i\right)^{-1}$$

$$\dim(\hat{x}) = n \times 1; \quad \dim(P) = n \times n; \quad \dim(z) = r \times 1; \quad \dim(R) = r \times r; \quad \dim(H) = r \times n; \quad \dim(K) = n \times r$$
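A sketch showing that the two-stage recursion reproduces the batch weighted least-squares estimate (not from the lecture; all numbers invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(5)
x = np.array([2.0, -3.0])
H1 = rng.standard_normal((4, 2)); R1 = 0.1 * np.eye(4)
H2 = rng.standard_normal((3, 2)); R2 = 0.2 * np.eye(3)
z1 = H1 @ x + rng.multivariate_normal(np.zeros(4), R1)
z2 = H2 @ x + rng.multivariate_normal(np.zeros(3), R2)

# Stage 1: weighted least squares from the first set alone.
P1 = np.linalg.inv(H1.T @ np.linalg.inv(R1) @ H1)
x1 = P1 @ H1.T @ np.linalg.inv(R1) @ z1

# Stage 2: incremental correction with gain K.
K = P1 @ H2.T @ np.linalg.inv(H2 @ P1 @ H2.T + R2)
x2_rec = x1 + K @ (z2 - H2 @ x1)

# Batch estimate from both sets at once.
A = H1.T @ np.linalg.inv(R1) @ H1 + H2.T @ np.linalg.inv(R2) @ H2
b = H1.T @ np.linalg.inv(R1) @ z1 + H2.T @ np.linalg.inv(R2) @ z2
x2_batch = np.linalg.solve(A, b)
print(np.allclose(x2_rec, x2_batch))           # True
```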
Example of Recursive Optimal Estimate

Scalar measurements of a constant, with H = 1 and R = 1:

$$z = x + n$$

The covariance and gain updates reduce to

$$p_i = \left(p_{i-1}^{-1} + 1\right)^{-1} = \frac{p_{i-1}}{p_{i-1} + 1}, \qquad k_i = \frac{p_{i-1}}{p_{i-1} + 1}$$

and the state update is

$$\hat{x}_i = \hat{x}_{i-1} + \frac{p_{i-1}}{p_{i-1} + 1}\left(z_i - \hat{x}_{i-1}\right)$$

Starting from $\hat{x}_0 = z_0$ with $p_0 = 1$:

$$\hat{x}_1 = 0.5\hat{x}_0 + 0.5z_1$$
$$\hat{x}_2 = 0.667\hat{x}_1 + 0.333z_2$$
$$\hat{x}_3 = 0.75\hat{x}_2 + 0.25z_3$$
$$\hat{x}_4 = 0.8\hat{x}_3 + 0.2z_4$$

Why? Each new sample has a smaller effect on the average than the sample before.
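The recursion in a few lines (not from the lecture; the measurement values are invented). Note that with equal measurement noise the recursive estimate equals the running simple average:

```python
import numpy as np

# Scalar example: H = 1, R = 1, p_0 = 1, x_hat_0 = z_0.
rng = np.random.default_rng(6)
z = 1.0 + rng.standard_normal(5)               # z_0 ... z_4

p, x_hat = 1.0, z[0]
for i in range(1, z.size):
    k = p / (p + 1.0)                          # gains: 0.5, 0.333, 0.25, 0.2
    x_hat += k * (z[i] - x_hat)
    p = p / (p + 1.0)                          # covariance shrinks each step
    print(f"k_{i} = {k:.3f}, x_hat_{i} = {x_hat:.3f}")

print(np.isclose(x_hat, z.mean()))             # recursion equals the simple average
```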
Next Time: Propagation of Uncertainty in Dynamic Systems

Supplemental Material

Weighted Least-Squares (Kriging) Estimates (Interpolation)

- Can be used with arbitrary interpolating functions

http://en.wikipedia.org/wiki/Kriging
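A minimal simple-kriging sketch, assuming a Gaussian covariance model; the kernel, its length scale, and the data are invented for illustration:

```python
import numpy as np

def cov(d, length=1.0):
    # Assumed Gaussian covariance model between points a distance d apart.
    return np.exp(-(d / length) ** 2)

xs = np.array([0.0, 1.0, 2.5, 4.0])            # sample locations
zs = np.array([1.0, 2.0, 0.5, 1.5])            # sampled values
x0 = 1.8                                       # interpolation point

K = cov(np.abs(xs[:, None] - xs[None, :]))     # sample-to-sample covariance
k0 = cov(np.abs(xs - x0))                      # sample-to-target covariance
w = np.linalg.solve(K, k0)                     # kriging weights
z0 = w @ zs                                    # weighted estimate at x0
print(z0)
```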