are well suited to a Bayesian approach. In such an approach one attempts to construct the posterior probability density function (pdf) of the state given all the available information, which includes the set of received measurements. For the cases where an estimate must be obtained whenever a new measurement is received, a recursive filter is an adequate solution. This filter is normally divided into two stages: (1) prediction and (2) update (or correction). In the first stage, the system model is used to predict the state pdf at the next measurement time from the previous one. Due to the presence of noise (which models the unknown disturbances that affect the system), the predicted pdf is generally translated, deformed and spread. The reception of a new measurement permits the adjustment of this pdf during the update operation. This is done using Bayes' theorem as the mechanism to update the knowledge about the system state upon the reception of new information. There are different filtering algorithms that can be derived using a Bayesian formulation, the Kalman filter (or its extended version) being one of the most used. Its requirements are that the models be linear (or differentiable) and the involved noise distributions be Gaussian.

Particle filters are, in turn, particularly well suited for vision-based tracking applications, in particular due to the non-Gaussian characteristics of the measurement functions involved. These are also known as Monte Carlo-based methods, as they involve the use of sets of random samples that approximate the probability distributions we want to estimate.
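The two-stage predict/update recursion described above can be sketched with the simplest case it admits: a one-dimensional Kalman filter, where the linear/Gaussian assumptions make both stages closed-form. All numerical values (model gain, noise variances, measurements) below are illustrative assumptions, not taken from the text.

```python
# Minimal sketch of the predict/update recursion (1-D Kalman filter).
# The state pdf is Gaussian, so it is fully described by (mean, var).

def predict(mean, var, a=1.0, q=0.5):
    # Prediction: propagate the state pdf through the system model;
    # the process-noise variance q spreads (widens) the predicted pdf.
    return a * mean, a * a * var + q

def update(mean, var, z, r=1.0):
    # Update: correct the predicted pdf with the new measurement z,
    # via Bayes' theorem (r is the measurement-noise variance).
    k = var / (var + r)                      # Kalman gain
    return mean + k * (z - mean), (1.0 - k) * var

mean, var = 0.0, 10.0                        # vague initial (prior) pdf
for z in [1.2, 0.9, 1.1]:                    # hypothetical measurements
    mean, var = predict(mean, var)
    mean, var = update(mean, var, z)
```

After a few measurements near 1.0, the posterior mean moves toward that value and the variance shrinks well below the prior's, which is exactly the "adjustment" role the update stage plays in the recursion.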
1.1 Sequential importance sampling algorithm
Suppose that there is a probability distribution $p(x)$ from which it is very difficult to draw samples, but we know that it is proportional to another distribution $\pi(x)$ which can easily be evaluated.

Let $x^{(i)} \sim q(x)$, $i = 1, \dots, N_s$, be samples that are easily generated from a proposal distribution $q(\cdot)$, called the importance density. Then, a weighted approximation to $p(\cdot)$ is

$$p(x) \approx \sum_{i=1}^{N_s} w^{(i)} \, \delta(x - x^{(i)}) \qquad (1)$$

where

$$w^{(i)} \propto \frac{\pi(x^{(i)})}{q(x^{(i)})}$$

is the normalised weight of the $i$-th particle.

Thus, if the samples were drawn from an importance density $q(x_{0:k} \mid z_{1:k})$
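The weighted approximation in equation (1) can be checked numerically: draw samples from an easy proposal $q$, weight each by $\pi/q$, normalise the weights, and use them to estimate an expectation under the hard target $p$. The concrete choices below (an unnormalised Gaussian target with mean 1, a uniform proposal) are illustrative assumptions.

```python
import math
import random

random.seed(0)
Ns = 20000                                   # number of particles

def pi_unnorm(x):
    # Unnormalised target pi(x): proportional to a Gaussian
    # with mean 1 and unit variance (easy to evaluate).
    return math.exp(-0.5 * (x - 1.0) ** 2)

# Proposal q: uniform on [-5, 7], trivially easy to sample from.
a, b = -5.0, 7.0
q_density = 1.0 / (b - a)

xs = [random.uniform(a, b) for _ in range(Ns)]
ws = [pi_unnorm(x) / q_density for x in xs]  # w^(i) ∝ pi(x^(i)) / q(x^(i))
total = sum(ws)
ws = [w / total for w in ws]                 # normalise the weights

# Weighted estimate of E_p[x]; should be close to the target mean of 1.
mean_est = sum(w * x for w, x in zip(ws, xs))
```

Note that only the ratio $\pi/q$ is ever evaluated, so the normalising constant of $p$ is never needed; the normalisation of the weights absorbs it. This is precisely why importance sampling applies when $p$ is known only up to proportionality.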
Visão por Computador, Paulo Menezes, 2011, DEEC