
Introduction to the Kalman Filter

Henry Huang 2012/2/20

CSDSP Lab

Optimal Estimates

Given a signal x1(t) and noise x2(t), we can only observe the sum y(t) = x1(t) + x2(t). Suppose we have observed y(t0), ..., y(t), where t is the present time, and we want to estimate the signal at some time t1:

If t1 < t : interpolation (smoothing)
If t1 = t : filtering
If t1 > t : prediction

The statistical estimate is denoted by X1(t1), which is a fixed function of the observed random variables y(t0), ..., y(t). In general, X1(t1) differs from the actual value x1(t1).
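As a concrete illustration (not part of the original slides), here is a minimal Python sketch of this observation model; the sinusoidal signal, noise level, and time grid are arbitrary choices made only so the example runs:

import numpy as np

# Hypothetical observation model: only the sum y(t) = x1(t) + x2(t) is measured.
rng = np.random.default_rng(0)
t = np.arange(0.0, 10.0, 0.1)            # observation times t0, ..., t
x1 = np.sin(0.5 * t)                     # true signal x1(t)   (arbitrary example)
x2 = 0.3 * rng.standard_normal(t.size)   # noise x2(t)         (arbitrary variance)
y = x1 + x2                              # the only observable quantity

# Estimating x1(t1) from y(t0), ..., y(t):
#   t1 < t  -> interpolation (smoothing)
#   t1 = t  -> filtering
#   t1 > t  -> prediction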


Optimal Estimates

In general, X1(t1) is different from the actual value x1(t1); therefore it is natural to assign a loss function for incorrect estimates. The loss function L should be a (i) positive, (ii) nondecreasing function of the estimation error e = x1(t1) - X1(t1), and the expected loss given the observations is

E{ L[x1(t1) - X1(t1)] | y(t0), ..., y(t) }

The loss function is required to satisfy

L(0) = 0
L(e2) >= L(e1) >= 0  when  e2 >= e1 >= 0
L(e) = L(-e)
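For example (a minimal sketch added here, not from the slides; the quadratic and absolute-error losses are just convenient choices), both functions below satisfy the three conditions listed above:

import numpy as np

# Example admissible loss functions (the slides do not prescribe a specific one).
def quadratic_loss(e):
    return e ** 2            # L(e) = e^2

def absolute_loss(e):
    return np.abs(e)         # L(e) = |e|

# Verify the three conditions on a symmetric grid of errors.
e = np.linspace(-3.0, 3.0, 601)
for L in (quadratic_loss, absolute_loss):
    assert L(0.0) == 0.0                       # L(0) = 0
    assert np.allclose(L(e), L(-e))            # L(e) = L(-e)
    assert np.all(np.diff(L(e[e >= 0])) >= 0)  # nondecreasing for e >= 0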


Optimal Estimation

One way of choosing the random variable X1 is to require that this choice minimize the average loss

E[ E{ L[x1(t1) - X1(t1)] | y(t0), ..., y(t) } ]

The first (outer) expectation does not depend on the choice of X1 but only on y(t0), ..., y(t); therefore minimizing the average loss amounts to minimizing the conditional expected loss

E{ L[x1(t1) - X1(t1)] | y(t0), ..., y(t) }
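As a worked special case (an addition, not taken from the original slides): for the quadratic loss L(e) = e^2, the conditional expected loss expands as

E{ [x1(t1) - X1]^2 | y(t0), ..., y(t) } = Var[ x1(t1) | y(t0), ..., y(t) ] + ( E[ x1(t1) | y(t0), ..., y(t) ] - X1 )^2

so it is minimized by choosing X1(t1) = E[ x1(t1) | y(t0), ..., y(t) ], the conditional mean of the signal given the observations.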

