
Digital Image Processing

Hongkai Xiong
Department of Electronic Engineering
Shanghai Jiao Tong University

Today
Wiener Filter
Kalman Filter
Particle Filter

Wiener Filter
In signal processing, the Wiener filter is a filter proposed by Norbert Wiener during the 1940s and published in 1949 [1].

[1] Wiener, Norbert (1949). Extrapolation, Interpolation, and Smoothing of Stationary Time Series.

Wiener Filter Description


The goal of the Wiener filter is to filter out noise that has corrupted a signal. It is based on a statistical approach.
The design of the Wiener filter takes a different approach: one is assumed to have knowledge of the spectral properties of the original signal and the noise, and one seeks the linear time-invariant filter whose output would come as close to the original signal as possible.

Wiener Filter Description


1. Assumption: signal and (additive) noise are
stationary linear stochastic processes with
known spectral characteristics or known
autocorrelation and cross-correlation.
2. Requirement: the filter must be physically
realizable/causal.
3. Performance criterion: minimum mean-square error (MMSE).

Wiener Filter Problem Setup


The input to the Wiener filter is assumed to be a signal, $s(t)$, corrupted by additive noise, $n(t)$. The output, $\hat{s}(t)$, is calculated by means of a filter, $g(t)$, using the following convolution:

$$\hat{s}(t) = g(t) * [s(t) + n(t)]$$

where
$s(t)$ is the original signal,
$n(t)$ is the noise,
$\hat{s}(t)$ is the estimated signal, and
$g(t)$ is the Wiener filter's impulse response.

Wiener Filter Problem Setup


The error is defined as

$$e(t) = s(t + \alpha) - \hat{s}(t)$$

where $\alpha$ is the delay of the Wiener filter. In other words, the error is the difference between the estimated signal and the true signal shifted by $\alpha$.

The squared error is

$$e^2(t) = s^2(t + \alpha) - 2 s(t + \alpha)\hat{s}(t) + \hat{s}^2(t)$$

Depending on the value of $\alpha$, the problem can be described as follows:
1. If $\alpha > 0$, the problem is that of prediction (the error is reduced when $\hat{s}(t)$ is similar to a later value of $s$).
2. If $\alpha = 0$, the problem is that of filtering (the error is reduced when $\hat{s}(t)$ is similar to $s(t)$).
3. If $\alpha < 0$, the problem is that of smoothing (the error is reduced when $\hat{s}(t)$ is similar to an earlier value of $s$).

$$\hat{s}(t) = g(t) * [s(t) + n(t)]$$

Writing $\hat{s}(t)$ as a convolution integral:

$$\hat{s}(t) = \int_{-\infty}^{\infty} g(\tau)\,[s(t - \tau) + n(t - \tau)]\,d\tau$$

We denote:
$x(t) = s(t) + n(t)$ is the observed signal,
$R_s$ is the autocorrelation function of $s(t)$,
$R_x$ is the autocorrelation function of $x(t)$,
$R_{xs}$ is the cross-correlation function of $x(t)$ and $s(t)$.

$$e^2(t) = s^2(t + \alpha) - 2 s(t + \alpha)\hat{s}(t) + \hat{s}^2(t)$$

Taking the expected value of the squared error results in

$$E(e^2) = R_s(0) - 2\int_{-\infty}^{\infty} g(\tau)\, R_{xs}(\tau + \alpha)\,d\tau + \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} g(\tau)\, g(\theta)\, R_x(\tau - \theta)\,d\tau\,d\theta$$

If the signal and the noise are uncorrelated (i.e., the cross-correlation $R_{sn}$ is zero), then

$$R_{xs} = R_s, \qquad R_x = R_s + R_n$$

Wiener Filter Solutions


The Wiener filter problem has solutions for three
possible cases:
1. The case where a non-causal filter is acceptable
(requiring an infinite amount of both past and
future data)
2. The case where a causal filter is desired (using
an infinite amount of past data)
3. The finite impulse response (FIR) case where a
finite amount of past data is used.


Wiener Filter Solutions


The first case (non-causal) is simple to solve but is not suited for real-time applications.
Wiener's main accomplishment was solving the case where the causality requirement is in effect (the second case); in an appendix of Wiener's book, Levinson gave the FIR solution.

Non-causal: simple to solve, but not suitable for real-time applications
Causal: Wiener's main accomplishment
FIR: used with discrete series

Wiener Filter Solutions


Non-causal solution:

$$G(s) = \frac{S_{x,s}(s)}{S_x(s)}\, e^{\alpha s}$$

Provided that $g(t)$ is optimal, the minimum mean-square error equation reduces to

$$E(e^2) = R_s(0) - \int_{-\infty}^{\infty} g(\tau)\, R_{x,s}(\tau + \alpha)\,d\tau$$
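As a concrete illustration, here is a minimal Python/NumPy sketch of the non-causal Wiener solution applied in the frequency domain, under the simplifying assumptions $\alpha = 0$ and uncorrelated signal and noise, so that $G = S_s / (S_s + S_n)$. The PSD values and signal parameters below are illustrative assumptions, not prescribed by the slides:

```python
import numpy as np

def wiener_smooth(x, S_s, S_n):
    """Non-causal Wiener smoother applied in the frequency domain.

    x   : observed signal samples, x = s + n
    S_s : power spectral density of the signal, one value per FFT bin
    S_n : power spectral density of the noise, one value per FFT bin
    """
    X = np.fft.fft(x)
    # With alpha = 0 and uncorrelated signal/noise:
    # G = S_xs / S_x = S_s / (S_s + S_n)
    G = S_s / (S_s + S_n)
    return np.real(np.fft.ifft(G * X))

# Toy usage: slow sinusoid buried in white noise
rng = np.random.default_rng(0)
N = 512
t = np.arange(N)
s = np.sin(2 * np.pi * 5 * t / N)          # slowly varying signal
n = 0.5 * rng.standard_normal(N)           # additive white noise
x = s + n

freqs = np.fft.fftfreq(N)
S_s = np.where(np.abs(freqs) < 0.02, 1.0, 1e-6)  # assumed low-pass signal PSD
S_n = np.full(N, 0.25)                           # white-noise PSD (variance per bin)
s_hat = wiener_smooth(x, S_s, S_n)
```

In practice the PSDs $S_s$ and $S_n$ would themselves be estimated, e.g., from training data or a noise-only segment.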

Wiener Filter Solutions


Causal solution:

$$G(s) = \frac{H(s)}{S_x^{+}(s)}$$

where
$H(s)$ consists of the causal part of $\dfrac{S_{x,s}(s)\, e^{\alpha s}}{S_x^{-}(s)}$,
$S_x^{+}(s)$ is the causal component of $S_x(s)$ (i.e., the inverse Laplace transform of $S_x^{+}(s)$ is non-zero only for $t \ge 0$),
$S_x^{-}(s)$ is the anti-causal component of $S_x(s)$ (i.e., the inverse Laplace transform of $S_x^{-}(s)$ is non-zero only for $t < 0$).

Wiener Filter Solutions


Finite Impulse Response Wiener filter for
discrete series:

The causal finite impulse response (FIR) Wiener filter, instead of using some given data matrix X and output vector Y, finds optimal tap weights by using the statistics of the input and output signals.

In order to derive the coefficients of the Wiener filter, consider the signal w[n] being fed to a Wiener filter of order N. The output of the filter is denoted x[n], which is given by the expression

$$x[n] = \sum_{i=0}^{N} a_i\, w[n-i]$$

The residual error is denoted e[n] and is defined as e[n] = x[n] − s[n] (see the corresponding block diagram). The Wiener filter is designed so as to minimize the mean square error (MMSE criterion), which can be stated concisely as follows:

$$(a_0, \dots, a_N) = \arg\min\, E\{e^2[n]\}$$

Assuming that w[n] and s[n] are each stationary and jointly stationary, the sequences known respectively as the autocorrelation of w[n] and the cross-correlation between w[n] and s[n] can be defined as follows:

$$R_w(m) = E\{w[n]\, w[n+m]\}, \qquad R_{ws}(m) = E\{w[n]\, s[n+m]\}$$

The derivative of the MSE may therefore be rewritten as

$$\frac{\partial\, E\{e^2[n]\}}{\partial a_i} = 2\left(\sum_{j=0}^{N} a_j\, R_w(j-i) - R_{ws}(i)\right), \quad i = 0, \dots, N$$

Letting the derivative be equal to zero results in

$$\sum_{j=0}^{N} a_j\, R_w(j-i) = R_{ws}(i), \quad i = 0, \dots, N$$

which can be rewritten in matrix form as $\mathbf{T}\mathbf{a} = \mathbf{v}$, where $\mathbf{T}$ is the symmetric Toeplitz matrix with entries $T_{ij} = R_w(i-j)$, $\mathbf{a} = [a_0, \dots, a_N]^T$ is the vector of tap weights, and $\mathbf{v} = [R_{ws}(0), \dots, R_{ws}(N)]^T$; this system can be solved efficiently with a Toeplitz solver, as in the sketch below.
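Here is a minimal Python sketch that estimates $R_w$ and $R_{ws}$ from sample data and solves the Toeplitz system above with SciPy. The helper name fir_wiener_taps and the choice of sample correlation estimator are assumptions for illustration, not part of the original derivation:

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def fir_wiener_taps(w, s, N):
    """Estimate FIR Wiener filter taps a_0..a_N from sample statistics.

    w : observed input samples (signal + noise)
    s : desired (reference) signal samples, same length as w
    N : filter order (N + 1 taps)
    """
    M = len(w)
    # Sample estimates of R_w(m) and R_ws(m) for m = 0..N
    R_w = np.array([np.dot(w[:M - m], w[m:]) / (M - m) for m in range(N + 1)])
    R_ws = np.array([np.dot(w[:M - m], s[m:]) / (M - m) for m in range(N + 1)])
    # Solve the Toeplitz system T a = v
    return solve_toeplitz((R_w, R_w), R_ws)

# Usage sketch: estimate taps from training data, then filter
# a = fir_wiener_taps(w, s, N=16)
# s_hat = np.convolve(w, a)[:len(w)]
```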

Kalman Filter
The Kalman filter, also known as linear quadratic
estimation (LQE), is an algorithm which uses a series
of measurements observed over time, containing noise
(random variations) and other inaccuracies, and
produces estimates of unknown variables that tend to be
more precise than those that would be based on a single
measurement alone.

Kalman Filter: Overview


The Kalman filter uses a system's dynamics model (e.g.,
physical laws of motion), known control inputs to that
system, and multiple sequential measurements (such as
from sensors) to form an estimate of the system's
varying quantities (its state) that is better than the
estimate obtained by using any one measurement alone.
As such, it is a common sensor fusion and data fusion
algorithm.

Kalman Filter: Application


The Kalman filter is an efficient recursive filter that
estimates the internal state of a linear dynamic system
from a series of noisy measurements. It is used in a wide
range of engineering and econometric applications from
radar and computer vision to estimation of structural
macroeconomic models, and is an important topic in
control theory and control systems engineering.

Kalman Filter: dynamic system model

Fk, the state-transition model;


Hk, the observation model;
Qk, the covariance of the process noise;
Rk, the covariance of the observation noise;
Bk, the control-input model.

Kalman Filter: dynamic system model

The Kalman filter model assumes the true state at time k evolves from the state at time k−1 according to

$$\mathbf{x}_k = \mathbf{F}_k \mathbf{x}_{k-1} + \mathbf{B}_k \mathbf{u}_k + \mathbf{w}_k$$

where
$\mathbf{F}_k$ is the state transition model which is applied to the previous state $\mathbf{x}_{k-1}$;
$\mathbf{B}_k$ is the control-input model which is applied to the control vector $\mathbf{u}_k$;
$\mathbf{w}_k$ is the process noise, which is assumed to be drawn from a zero-mean multivariate normal distribution with covariance $\mathbf{Q}_k$.

Kalman Filter: dynamic system model

At time k an observation (or measurement) $\mathbf{z}_k$ of the true state $\mathbf{x}_k$ is made according to

$$\mathbf{z}_k = \mathbf{H}_k \mathbf{x}_k + \mathbf{v}_k$$

where $\mathbf{H}_k$ is the observation model which maps the true state space into the observed space, and $\mathbf{v}_k$ is the observation noise, which is assumed to be zero-mean Gaussian white noise with covariance $\mathbf{R}_k$.

Kalman Filter: dynamic system model


The Kalman filter is a recursive estimator. This means that only the estimated state from the previous time step and the current measurement are needed to compute the estimate for the current state.
The state of the filter is represented by two variables:

$\hat{\mathbf{x}}_{k|k}$, the a posteriori state estimate at time k given observations up to and including time k;

$\mathbf{P}_{k|k}$, the a posteriori error covariance matrix (a measure of the estimated accuracy of the state estimate).

Kalman Filter: Predict

Predicted (a priori) state estimate:

$$\hat{\mathbf{x}}_{k|k-1} = \mathbf{F}_k \hat{\mathbf{x}}_{k-1|k-1} + \mathbf{B}_k \mathbf{u}_k$$

Predicted (a priori) estimate covariance:

$$\mathbf{P}_{k|k-1} = \mathbf{F}_k \mathbf{P}_{k-1|k-1} \mathbf{F}_k^T + \mathbf{Q}_k$$

Kalman Filter: Update

Innovation or measurement residual:
$$\tilde{\mathbf{y}}_k = \mathbf{z}_k - \mathbf{H}_k \hat{\mathbf{x}}_{k|k-1}$$
Innovation (or residual) covariance:
$$\mathbf{S}_k = \mathbf{H}_k \mathbf{P}_{k|k-1} \mathbf{H}_k^T + \mathbf{R}_k$$
Optimal Kalman gain:
$$\mathbf{K}_k = \mathbf{P}_{k|k-1} \mathbf{H}_k^T \mathbf{S}_k^{-1}$$
Updated (a posteriori) state estimate:
$$\hat{\mathbf{x}}_{k|k} = \hat{\mathbf{x}}_{k|k-1} + \mathbf{K}_k \tilde{\mathbf{y}}_k$$
Updated (a posteriori) estimate covariance:
$$\mathbf{P}_{k|k} = (\mathbf{I} - \mathbf{K}_k \mathbf{H}_k)\, \mathbf{P}_{k|k-1}$$
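A minimal NumPy sketch of one predict/update cycle using exactly the equations above. The function names kf_predict and kf_update are illustrative, and the model matrices are assumed time-invariant for simplicity:

```python
import numpy as np

def kf_predict(x, P, F, Q, B=None, u=None):
    """Predict step: propagate the state estimate and its covariance."""
    x_pred = F @ x if B is None else F @ x + B @ u
    P_pred = F @ P @ F.T + Q
    return x_pred, P_pred

def kf_update(x_pred, P_pred, z, H, R):
    """Update step: correct the prediction with measurement z."""
    y = z - H @ x_pred                      # innovation
    S = H @ P_pred @ H.T + R                # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)     # optimal Kalman gain
    x = x_pred + K @ y                      # a posteriori state estimate
    P = (np.eye(len(x)) - K @ H) @ P_pred   # a posteriori covariance
    return x, P
```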

Kalman Filter: Invariants


If the model is accurate, and the values for $\hat{\mathbf{x}}_{0|0}$ and $\mathbf{P}_{0|0}$ accurately reflect the distribution of the initial state values, then the following invariants are preserved (all estimates have a mean error of zero):

$$E[\mathbf{x}_k - \hat{\mathbf{x}}_{k|k}] = E[\mathbf{x}_k - \hat{\mathbf{x}}_{k|k-1}] = 0, \qquad E[\tilde{\mathbf{y}}_k] = 0$$

and the covariance matrices accurately reflect the covariance of the estimates:

$$\mathrm{cov}(\mathbf{x}_k - \hat{\mathbf{x}}_{k|k}) = \mathbf{P}_{k|k}, \quad \mathrm{cov}(\mathbf{x}_k - \hat{\mathbf{x}}_{k|k-1}) = \mathbf{P}_{k|k-1}, \quad \mathrm{cov}(\tilde{\mathbf{y}}_k) = \mathbf{S}_k$$

Kalman Filter: Estimation of the noise covariances Qk and Rk

Practical implementation of the Kalman filter is often difficult because of the difficulty of obtaining a good estimate of the noise covariance matrices Qk and Rk. Extensive research has been done to estimate these covariances from data. One of the more promising approaches is the Auto-covariance Least-Squares (ALS) technique, which uses the autocovariances of routine operating data to estimate the covariances.

Kalman Filter: An example application


Consider the following model:
A truck is on infinitely long, straight rails. Initially the truck is stationary at position 0, but it is buffeted this way and that by random acceleration. We measure the position of the truck every $\Delta t$ seconds, but these measurements are imprecise; we want to maintain a model of where the truck is and what its velocity is.
We show here how we derive the model from which we create our Kalman filter.

The position and velocity of the truck are described by the linear state space

$$\mathbf{x}_k = \begin{bmatrix} x \\ \dot{x} \end{bmatrix}$$

where $\dot{x}$ is the velocity, that is, the derivative of position with respect to time.

Kalman Filter: An example application


We assume that between the (k−1)-th and k-th timestep the truck undergoes a constant acceleration $a_k$ that is normally distributed, with mean 0 and standard deviation $\sigma_a$. From Newton's laws of motion we conclude that

$$\mathbf{x}_k = \mathbf{F} \mathbf{x}_{k-1} + \mathbf{G} a_k$$

where

$$\mathbf{F} = \begin{bmatrix} 1 & \Delta t \\ 0 & 1 \end{bmatrix}, \qquad \mathbf{G} = \begin{bmatrix} \Delta t^2 / 2 \\ \Delta t \end{bmatrix}$$

so that the process noise covariance is $\mathbf{Q} = \mathbf{G}\mathbf{G}^T \sigma_a^2$.

Kalman Filter: An example application


At each time step, a noisy measurement of the true position of the truck is made. Let us suppose the measurement noise $v_k$ is also normally distributed, with mean 0 and standard deviation $\sigma_z$:

$$z_k = \mathbf{H} \mathbf{x}_k + v_k, \qquad \mathbf{H} = \begin{bmatrix} 1 & 0 \end{bmatrix}, \qquad \mathbf{R} = E[v_k v_k^T] = \begin{bmatrix} \sigma_z^2 \end{bmatrix}$$

We know the initial starting state of the truck with perfect precision, so we initialize

$$\hat{\mathbf{x}}_{0|0} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$$

and, to tell the filter that we know the exact position, we give it a zero covariance matrix:

$$\mathbf{P}_{0|0} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}$$
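Putting the truck model together with the kf_predict/kf_update functions sketched earlier gives the following usage sketch. The specific values for $\Delta t$, $\sigma_a$, and $\sigma_z$ are illustrative assumptions, not from the slides:

```python
import numpy as np

dt, sigma_a, sigma_z = 1.0, 0.2, 1.0           # illustrative values
F = np.array([[1.0, dt], [0.0, 1.0]])          # state transition
G = np.array([[0.5 * dt**2], [dt]])            # acceleration coupling
Q = G @ G.T * sigma_a**2                       # process noise covariance
H = np.array([[1.0, 0.0]])                     # we observe position only
R = np.array([[sigma_z**2]])                   # measurement noise covariance

x = np.zeros(2)                                # known initial state
P = np.zeros((2, 2))                           # zero initial covariance

rng = np.random.default_rng(0)
truth = np.zeros(2)
for k in range(50):
    a_k = rng.normal(0.0, sigma_a)             # random buffeting
    truth = F @ truth + (G * a_k).ravel()      # simulate the true truck
    z = H @ truth + rng.normal(0.0, sigma_z)   # noisy position measurement
    x, P = kf_predict(x, P, F, Q)              # predict (no control input)
    x, P = kf_update(x, P, z, H, R)            # update with the measurement
```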

Particle Filter:
In statistics, a particle filter, also known as a sequential Monte Carlo (SMC) method, is a sophisticated model estimation technique based on simulation [1]. Particle filters are usually used to estimate Bayesian models in which the latent variables are connected in a Markov chain, similar to a hidden Markov model (HMM), but typically where the state space of the latent variables is continuous rather than discrete, and not sufficiently restricted to make exact inference tractable.

[1] Doucet, A.; De Freitas, N.; Gordon, N. J. (2001). Sequential Monte Carlo Methods in Practice. Springer.

Particle Filter:
Particle filters are the sequential analogue of Markov chain
Monte Carlo (MCMC) batch methods and are often similar
to importance sampling methods.
Advantages:
Well-designed particle filters can often be much faster than MCMC.
With sufficient samples, they approach the Bayesian optimal estimate, so they can be made more accurate than either the extended Kalman filter (EKF) or the unscented Kalman filter (UKF).
Disadvantages:
When the simulated sample is not sufficiently large, they
might suffer from sample impoverishment.

Particle Filter: Goal


The particle filter aims to estimate the sequence of hidden parameters $x_k$ for $k = 0, 1, 2, 3, \dots$, based only on the observed data $y_k$ for $k = 0, 1, 2, 3, \dots$. All Bayesian estimates of $x_k$ follow from the posterior distribution $p(x_k \mid y_0, y_1, \dots, y_k)$. In contrast, the MCMC (Markov chain Monte Carlo) or importance sampling approach would model the full posterior $p(x_0, x_1, \dots, x_k \mid y_0, y_1, \dots, y_k)$.

Particle Filter: Model


Particle methods assume that the states $x_k$ and the observations $y_k$ can be modeled in this form:

$x_k$ is a first-order Markov process such that

$$x_k \mid x_{k-1} \sim p(x_k \mid x_{k-1})$$

with an initial distribution $p(x_0)$.

The observations $y_k$ are conditionally independent provided that the $x_k$ are known. In other words, each $y_k$ depends only on $x_k$:

$$y_k \mid x_k \sim p(y_k \mid x_k)$$

Particle Filter: Model


One example form of this scenario is

$$x_k = f(x_{k-1}) + v_k, \qquad y_k = h(x_k) + w_k$$

where both $v_k$ and $w_k$ are mutually independent and identically distributed sequences with known probability density functions, and $f(\cdot)$ and $h(\cdot)$ are known functions.
These two equations can be viewed as state space equations and look similar to the state space equations for the Kalman filter.

Particle Filter: Monte Carlo approximation


Particle methods, like all sampling-based approaches, generate a set of samples that approximate the filtering distribution $p(x_k \mid y_0, \dots, y_k)$.
So, with $P$ samples $x_k^{(L)}$, expectations with respect to the filtering distribution are approximated by

$$\int f(x_k)\, p(x_k \mid y_0, \dots, y_k)\, dx_k \approx \frac{1}{P} \sum_{L=1}^{P} f\!\left(x_k^{(L)}\right)$$
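As a small sketch, assuming samples $x_k^{(L)}$ drawn from the filtering distribution are already available, this approximation is just a sample mean in NumPy:

```python
import numpy as np

# Given P samples drawn from the filtering distribution p(x_k | y_0..y_k),
# E[f(x_k) | y_0..y_k] is approximated by the sample mean of f.
def mc_expectation(f, samples):
    return np.mean(f(samples))

# e.g. a posterior-mean estimate of the state:
# x_hat = mc_expectation(lambda x: x, samples)
```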

Particle Filter: Sequential Importance Resampling (SIR)

Sequential importance resampling (SIR), the original particle filtering algorithm, is a very commonly used particle filtering algorithm, which approximates the filtering distribution $p(x_k \mid y_0, \dots, y_k)$ by a weighted set of $P$ particles

$$\left\{ \left(w_k^{(L)}, x_k^{(L)}\right) : L = 1, \dots, P \right\}$$

The importance weights $w_k^{(L)}$ are approximations to the relative posterior probabilities (or densities) of the particles such that $\sum_{L=1}^{P} w_k^{(L)} = 1$.

Particle Filter: Sequential Importance Resampling (SIR)

SIR is a sequential version of importance sampling. As in importance sampling, the expectation of a function $f(\cdot)$ can be approximated as a weighted average:

$$\int f(x_k)\, p(x_k \mid y_0, \dots, y_k)\, dx_k \approx \sum_{L=1}^{P} w_k^{(L)} f\!\left(x_k^{(L)}\right)$$
Particle Filter: Sequential Importance Resampling (SIR)

For a finite set of particles, the algorithm performance is dependent on the choice of the proposal distribution $\pi(x_k \mid x_{0:k-1}, y_{0:k})$.
The optimal proposal distribution is given as the target distribution

$$\pi(x_k \mid x_{0:k-1}, y_{0:k}) = p(x_k \mid x_{k-1}, y_k)$$

However, the transition prior $p(x_k \mid x_{k-1})$ is often used as the importance function, since it is easier to draw particles (or samples) from and to perform the subsequent importance weight calculations:

$$\pi(x_k \mid x_{0:k-1}, y_{0:k}) = p(x_k \mid x_{k-1})$$

Particle Filter: Sequential Importance Resampling (SIR)

A single step of sequential importance resampling is as follows:
1) For $L = 1, \dots, P$, draw samples from the proposal distribution:

$$x_k^{(L)} \sim \pi\!\left(x_k \mid x_{0:k-1}^{(L)}, y_{0:k}\right)$$

2) For $L = 1, \dots, P$, update the importance weights up to a normalizing constant:

$$\hat{w}_k^{(L)} = w_{k-1}^{(L)} \frac{p\!\left(y_k \mid x_k^{(L)}\right) p\!\left(x_k^{(L)} \mid x_{k-1}^{(L)}\right)}{\pi\!\left(x_k^{(L)} \mid x_{0:k-1}^{(L)}, y_{0:k}\right)}$$

Note that when we use the transition prior as the importance function, this simplifies to the following:

$$\hat{w}_k^{(L)} = w_{k-1}^{(L)}\, p\!\left(y_k \mid x_k^{(L)}\right)$$

Particle Filter: Sequential Importance Resampling (SIR)

3) For $L = 1, \dots, P$, compute the normalized importance weights:

$$w_k^{(L)} = \frac{\hat{w}_k^{(L)}}{\sum_{J=1}^{P} \hat{w}_k^{(J)}}$$

4) Compute an estimate of the effective number of particles as

$$\hat{N}_{\mathrm{eff}} = \frac{1}{\sum_{L=1}^{P} \left(w_k^{(L)}\right)^2}$$

5) If the effective number of particles is less than a given threshold $N_{\mathrm{thr}}$, then perform resampling:
a) Draw $P$ particles from the current particle set with probabilities proportional to their weights. Replace the current particle set with this new one.
b) For $L = 1, \dots, P$, set $w_k^{(L)} = 1/P$.
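The five steps above map directly onto code. Below is a hedged Python sketch of a single SIR step using the transition prior as the proposal (so the weight update reduces to the likelihood). The functions transition_sample and likelihood are assumed user-supplied model functions, and the default threshold of P/2 is an illustrative choice:

```python
import numpy as np

def sir_step(particles, weights, y, transition_sample, likelihood, rng,
             n_thr=None):
    """One SIR step with the transition prior as the proposal distribution.

    particles         : array of shape (P, ...) holding x_{k-1}^{(L)}
    weights           : array of shape (P,), normalized weights w_{k-1}^{(L)}
    y                 : current observation y_k
    transition_sample : draws x_k ~ p(x_k | x_{k-1}) for every particle
    likelihood        : returns p(y_k | x_k) evaluated for every particle
    """
    P = len(particles)
    # 1) propagate each particle through the transition prior
    particles = transition_sample(particles, rng)
    # 2) under the prior proposal the weight update is just the likelihood
    w_hat = weights * likelihood(y, particles)
    # 3) normalize the importance weights
    weights = w_hat / np.sum(w_hat)
    # 4) effective number of particles
    n_eff = 1.0 / np.sum(weights**2)
    # 5) resample when degeneracy crosses the threshold
    if n_thr is None:
        n_thr = P / 2
    if n_eff < n_thr:
        idx = rng.choice(P, size=P, p=weights)
        particles = particles[idx]
        weights = np.full(P, 1.0 / P)
    return particles, weights
```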

Particle Filter:

[Figure: result of particle filtering (red line) based on observed data generated from the blue line.]

Particle Filter: "Direct version" algorithm

Other Particle Filters:


Auxiliary particle filter
Gaussian particle filter
Unscented particle filter
Gauss-Hermite particle filter
Cost Reference particle filter
Rao-Blackwellized particle filter
Rejection-sampling based optimal particle filter

Thank You!
