
2592 IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 51, NO. 10, OCTOBER 2003

Gaussian Particle Filtering


Jayesh H. Kotecha and Petar M. Djurić, Senior Member, IEEE

Abstract—Sequential Bayesian estimation for nonlinear dynamic state-space models involves recursive estimation of filtering and predictive distributions of unobserved time-varying signals based on noisy observations. This paper introduces a new filter called the Gaussian particle filter.¹ It is based on the particle filtering concept, and it approximates the posterior distributions by single Gaussians, similar to Gaussian filters like the extended Kalman filter and its variants. It is shown that under the Gaussianity assumption, the Gaussian particle filter is asymptotically optimal in the number of particles and, hence, has much-improved performance and versatility over other Gaussian filters, especially when nontrivial nonlinearities are present. Simulation results are presented to demonstrate the versatility and improved performance of the Gaussian particle filter over conventional Gaussian filters and its lower complexity compared with known particle filters. The use of the Gaussian particle filter as a building block of more complex filters is addressed in a companion paper.

Index Terms—Dynamic state space models, extended Kalman filter, Gaussian mixture, Gaussian mixture filter, Gaussian particle filter, Gaussian sum filter, Gaussian sum particle filter, Monte Carlo filters, nonlinear non-Gaussian stochastic systems, particle filters, sequential Bayesian estimation, sequential sampling methods, unscented Kalman filter.

Manuscript received July 5, 2001; revised March 4, 2003. This work was supported by the National Science Foundation under Awards CCR-0082607 and CCR-9903120. The associate editor coordinating the review of this paper and approving it for publication was Prof. Zhi Ding.
J. H. Kotecha is with the Department of Electrical and Computer Engineering, University of Wisconsin at Madison, Madison, WI 53705 USA (e-mail: jkotecha@ece.wisc.edu).
P. M. Djurić is with the Department of Electrical and Computer Engineering, State University of New York at Stony Brook, Stony Brook, NY 11794 USA (e-mail: djuric@ece.sunysb.edu).
Digital Object Identifier 10.1109/TSP.2003.816758
¹Part of this work has appeared in [1] and [2].
1053-587X/03$17.00 © 2003 IEEE

I. INTRODUCTION

NONLINEAR filtering problems arise in many fields including statistical signal processing, economics, statistics, biostatistics, and engineering applications such as communications, radar tracking, sonar ranging, target tracking, and satellite navigation. The problem consists of estimating a possibly dynamic state of a nonlinear stochastic system based on a set of noisy observations. Many of these problems can be written in the form of the so-called dynamic state space (DSS) model [3]. The DSS model represents the time-varying dynamics of an unobserved state variable x_t as the distribution p(x_t | x_{t-1}), where t indicates time (or any other physical parameter). The observations y_t in the application are usually noisy and distorted versions of x_t. The distribution p(y_t | x_t) represents the observation equation conditioned on the unknown state variable x_t, which is to be estimated. Alternatively, the model can be written as

    x_t = f_t(x_{t-1}, u_t)    (process equation)
    y_t = h_t(x_t, v_t)    (observation equation)    (1)

where f_t(·) and h_t(·) are some known functions, and u_t and v_t are random noise vectors of given distributions. The process equation represents a system evolving with time t, where the system is represented by the hidden state x_t, and the prior knowledge of the initial state is given by the probability distribution p(x_0). Our aim is to learn more about the unknown state variables, given the observations as time evolves.

We denote by x_{0:t} and y_{0:t} the signal and observations up to time t, respectively, i.e., x_{0:t} = {x_0, x_1, ..., x_t} and y_{0:t} = {y_0, y_1, ..., y_t}. In a Bayesian context, our aim is to estimate recursively in time
• the filtering distribution p(x_t | y_{0:t}) at time t, given all the observations up to time t;
• the predictive distribution p(x_{t+1} | y_{0:t}) at time t + 1, given all the observations up to time t.
From these distributions, an estimate of the state can be determined for any performance criterion suggested for the problem. The filtering distribution, or the marginal posterior of the state at time t, can be written as

    p(x_t | y_{0:t}) = (1/C_t) p(y_t | x_t) p(x_t | y_{0:t-1})    (2)

where C_t is the normalizing constant given by

    C_t = ∫ p(y_t | x_t) p(x_t | y_{0:t-1}) dx_t.

Furthermore, the predictive distribution can be expressed as

    p(x_{t+1} | y_{0:t}) = ∫ p(x_{t+1} | x_t) p(x_t | y_{0:t}) dx_t.    (3)

When the model is linear with Gaussian noise and the prior knowledge about x_0 given by p(x_0) is Gaussian, the filtering and predictive distributions are Gaussian, and the Kalman filter provides their means and covariances sequentially, which is the optimal Bayesian solution [4]. However, for most nonlinear models and non-Gaussian noise problems, closed-form analytic expressions for the posterior distributions do not exist in general. Numerical solutions often require high-dimensional integrations that are not practical to implement. As a result, several approximations that are more tractable have been proposed.

A class of filters called Gaussian filters provides Gaussian approximations to the filtering and predictive distributions, examples of which include the extended Kalman filter (EKF) and its variations [4]–[8]. Equivalently, each recursive update propagates only the mean and covariance of the densities. The justifications are that under this assumption, only the mean and covariance need to be tracked and that, given just the mean and covariance, the Gaussian maximizes the entropy of the random variable, i.e., it is the least informative distribution. The applicability of Gaussian filters to nonlinear problems depends on the nature of the nonlinearities and has to be established on a case-by-case basis.
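As a concrete illustration of the DSS model in (1), the following sketch draws a state trajectory and observations from a hypothetical scalar model with additive Gaussian noise; the functions f and h and all parameter values here are illustrative assumptions, not the models used later in the paper, and (1) also admits nonadditive noise.

```python
import numpy as np

def simulate_dss(f, h, x0, T, sigma_u, sigma_v, rng):
    """Simulate x_t = f(x_{t-1}, t) + u_t and y_t = h(x_t) + v_t
    with zero-mean Gaussian noise (additive form for illustration)."""
    x = np.empty(T)
    y = np.empty(T)
    x_prev = x0
    for t in range(1, T + 1):
        x[t - 1] = f(x_prev, t) + sigma_u * rng.standard_normal()
        y[t - 1] = h(x[t - 1]) + sigma_v * rng.standard_normal()
        x_prev = x[t - 1]
    return x, y

rng = np.random.default_rng(0)
# Hypothetical f and h, chosen only to exercise the simulator.
x, y = simulate_dss(f=lambda x, t: 0.9 * x,
                    h=lambda x: x**2 / 20.0,
                    x0=0.1, T=50, sigma_u=1.0, sigma_v=0.5, rng=rng)
```

The filtering problem of (2) and (3) is then to recover x from y alone.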
For some general guidelines, see, for example, [5] and [9]. Other approaches that propagate approximations of the first two moments of the densities include [10]–[12]. These filters, including the EKF, have been successfully implemented in some problems, but in others, they diverge or provide poor approximations. This is especially pronounced when the model is highly nonlinear or when the posterior distributions are multimodal. In such cases, however, significant improvements are possible. Efforts to improve on the EKF have recently led to new filters, like the unscented Kalman filter (UKF) by Julier et al. [13] and similar filters proposed by Ito et al. [14], which use deterministic sets of points in the space of the state variable to obtain more accurate approximations to the mean and covariance than the EKF. However, note the following.
• The improvement of these filters over the EKF can be significant, depending on the problem. However, divergence can still occur in some nonlinear problems. An example of such undesirable divergence is shown in the simulations.
• More importantly, in problems where the above filters do not diverge, improvements in the estimates of the mean and covariance are desirable. The aim is not only to minimize the mean square error but also to provide accurate estimates of the covariance, which are a measure of the confidence in the estimates.
• Finally, the versatility of the filter can be improved if the restrictive assumption of additive Gaussian noise made in the EKF-like filters is removed.

There have also been other attempts to propagate filtering densities, most of them based on Gaussian sum filters (GSFs) [15], where the posterior distributions are approximated as finite Gaussian mixtures (GMs). The GM approximation is generally more accurate, especially for multimodal systems. Other methods evaluate the required densities over grids [7], [16]–[19], but they are computationally intense, especially for high-dimensional problems.

Recently, importance sampling-based filters have been used to update the posterior distributions [20]–[23]. There, a distribution is represented by a weighted set of samples (or particles) from the distribution, which are propagated through the dynamic system using importance sampling to sequentially update the posterior distributions. These methods are collectively called sequential importance sampling (SIS) filters, or particle filters, and they provide optimal results asymptotically in the number of particles. However, a major disadvantage of particle filters is their computational complexity (in a serial implementation), a large part of which comes from a procedure called resampling. Particle filters are, however, amenable to parallel implementation, which provides possibilities for processing signals sampled at high sampling rates.

In this paper, we introduce a new Gaussian filter called the Gaussian particle filter (GPF). Essentially, the GPF approximates the posterior mean and covariance of the unknown state variable using importance sampling. The proposed filter is analyzed theoretically and studied by computer simulations. Comparisons are made with the EKF, UKF, and the standard sequential importance sampling resampling (SISR) filters. It is important to note that, unlike in the EKF, the assumption of additive Gaussian noise can be relaxed for the GPF; the noise can, in general, be non-Gaussian and nonadditive. The GPF has improved performance compared with the EKF and UKF, as demonstrated by simulations. It is shown analytically that the estimates of the unknowns converge asymptotically with probability one to the minimum mean square estimates (given that the Gaussian assumption holds true). The GPF is quite similar to the SIS filter in that importance sampling is used to obtain particles. However, unlike in the SIS filters, resampling is not required in the GPF. This results in a reduced complexity of the GPF as compared with the SIS with resampling and is a major advantage.

The GPF only propagates the mean and covariance; however, note that the importance sampling procedure makes it simple to propagate higher moments as well. This gives rise to a new class of filters, where the posterior distributions are approximated by distributions other than Gaussian simply by propagating the required moments (or functions thereof).

An example of the above is given in a companion paper [24], where we introduce three types of Gaussian sum particle filters (GSPFs), which are built from banks of GPFs, and develop a general framework for Bayesian inference for nonlinear and non-Gaussian additive noise DSS models using Gaussian mixtures.

To facilitate readability of the paper, we provide a list of abbreviations used in the sequel.

List of Abbreviations:
BOT   Bearings-only tracking.
DSS   Dynamic state space.
EKF   Extended Kalman filter.
EM    Expectation–maximization.
GS    Gaussian sum.
GM    Gaussian mixture.
GMF   Gaussian mixture filter.
GMM   Gaussian mixture model.
GPF   Gaussian particle filter.
GSF   Gaussian sum filter.
GSPF  Gaussian sum particle filter.
MMSE  Minimum mean square error.
MSE   Mean square error.
SIS   Sequential importance sampling.
SISR  Sequential importance sampling with resampling.
UKF   Unscented Kalman filter.
VLSI  Very large scale integration.

II. GAUSSIAN PARTICLE FILTERING

The GPF approximates the filtering and predictive distributions in (2) and (3) by Gaussian densities using the particle filtering methodology [20], [22], [25]. The basic idea of Monte Carlo methods is to represent a distribution p(x) of a random variable by a collection of samples (particles) from that distribution. M particles {x^(j)}, j = 1, ..., M, from a so-called importance sampling distribution π(x) (which satisfies certain conditions; see [25] for details) are generated. The particles are then weighted as w^(j) ∝ p(x^(j))/π(x^(j)). If the weights are normalized so that they sum to one, then the set {x^(j), w^(j)} represents samples from the posterior distribution p(x).
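The particle representation just described can be illustrated in a few lines; the target and importance densities below are arbitrary illustrative choices.

```python
import numpy as np

def gauss_pdf(x, mu, sigma):
    """Density of N(x; mu, sigma^2)."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

rng = np.random.default_rng(1)
M = 100_000
# Particles drawn from an overdispersed importance density pi = N(0, 9).
particles = rng.normal(0.0, 3.0, size=M)
# Weights proportional to p/pi for the target p = N(1, 1).
w = gauss_pdf(particles, 1.0, 1.0) / gauss_pdf(particles, 0.0, 3.0)
w /= w.sum()                       # normalized weights
mean_est = np.sum(w * particles)   # Monte Carlo estimate of E_p[x]
```

As M grows, mean_est converges to the target mean (here, 1), which is the convergence result the next section formalizes.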
Monte Carlo integration then suggests that the estimate of E_p[g(x)] can be computed as

    ĝ_M = Σ_{j=1}^{M} w^(j) g(x^(j)).    (4)

Using the strong law of large numbers, it can be shown that

    ĝ_M → E_p[g(x)]    (5)

almost surely as M → ∞; see, for example, [25]. The posterior distribution can be approximated as

    p(x) ≈ Σ_{j=1}^{M} w^(j) δ(x − x^(j))    (6)

where δ(·) is the Dirac delta function.

In the sequel, the density of a Gaussian random variable x is written as N(x; μ, Σ), where the d-dimensional vector μ is the mean, and the covariance is the positive definite matrix Σ. Assume that at time t = 0 we have p(x_0) = N(x_0; μ_0, Σ_0), where μ_0 and Σ_0 are chosen based on prior information. As new measurements are received, measurement and time updates are performed to obtain the filtering and predictive distributions, as discussed in the following sections.

A. Measurement Update

After receiving the t-th observation y_t, the filtering distribution in (2) is given by

    p(x_t | y_{0:t}) ∝ p(y_t | x_t) p(x_t | y_{0:t-1}).    (7)

The GPF measurement update step approximates the above density as a Gaussian, i.e.,

    p(x_t | y_{0:t}) ≈ N(x_t; μ_t, Σ_t).    (8)

In general, analytical expressions for the mean and covariance of p(x_t | y_{0:t}) are not available. However, for the GPF update, Monte Carlo estimates of μ_t and Σ_t can be computed from the samples and their weights, where the samples are obtained from an importance sampling function π(x_t). This allows for the measurement update algorithm in Table I. The updated filtering distribution is now approximated as N(x_t; μ_t, Σ_t).

TABLE I
GPF—MEASUREMENT UPDATE ALGORITHM

Now, we recall a standard theorem of importance sampling, following which we provide a corollary that underscores the improvements obtained by the GPF in comparison to other Gaussian filters.

Theorem 1: Assume that at time t, (the analytical form of) p(x_t | y_{0:t-1}) is known up to a proportionality constant. On receiving the t-th observation y_t, the GPF measurement updates the filtering distribution using the Monte Carlo integration defined in (4). Then, the k-th moment (or the k-th central moment) of p(x_t | y_{0:t}) estimated using (4) converges almost surely to the true k-th moment [or to the true k-th central moment].
Proof: From (10) and (11), the k-th moment is estimated as a weighted sum of the form (4), where the convergence is with probability one as M → ∞ [see (5)] and is due to the strong law of large numbers. A similar proof holds for the k-th central moment.

Corollary 1: Assume that at time t, p(x_t | y_{0:t-1}) = N(x_t; μ̄_t, Σ̄_t). On receiving the t-th observation y_t, the GPF measurement updates the filtering distribution, as shown by the algorithm above. Then, the mean μ_t computed in (11) converges to the MMSE estimate of x_t almost surely as M → ∞. In addition, the MMSE estimate given by Σ_t in (11) converges to the true MMSE estimate almost surely as M → ∞.

Proof: The corollary follows straightforwardly from Theorem 1, since the MMSE estimate of x_t is given by E[x_t | y_{0:t}], and the MMSE is E[(x_t − μ_t)(x_t − μ_t)ᵀ | y_{0:t}].

The above corollary shows that, given the validity of the Gaussian approximation, the GPF provides the MMSE estimate asymptotically during the measurement update, which is clearly not true for the EKF and UKF. Hence, the GPF is expected to perform better than the EKF and UKF, which is validated by simulations.

1) Choice of π(x_t): The choice of the importance sampling function depends on the problem; see [23] and [25] for details. For the GPF, a simple choice for π(x_t) is the Gaussian approximation of the predictive distribution, since samples from this density can be easily obtained. Alternatively, instead of generating completely new samples, the samples obtained in the time update step (presented in the next section) in step 2 can be used. However, this choice can be inadequate in some applications. Another choice is a Gaussian whose mean and covariance are obtained from the measurement update step of the EKF or from the unscented Kalman filter [26].

B. Time Update

Assume that at time t it is possible to draw samples from the filtering distribution. With p(x_t | y_{0:t}) approximated as a Gaussian, we would like to obtain the predictive distribution p(x_{t+1} | y_{0:t}) and approximate it as a Gaussian. We recall (3), that is

    p(x_{t+1} | y_{0:t}) = ∫ p(x_{t+1} | x_t) p(x_t | y_{0:t}) dx_t.

A Monte Carlo approximation of the predictive distribution is given by

    p(x_{t+1} | y_{0:t}) ≈ (1/M) Σ_{j=1}^{M} p(x_{t+1} | x_t^(j))    (14)

where x_t^(j) are particles from N(x_t; μ_t, Σ_t). Following this observation, samples from p(x_{t+1} | x_t^(j)), j = 1, ..., M, are obtained and denoted as x_{t+1}^(j), from which the mean and covariance of p(x_{t+1} | y_{0:t}) are computed as sample averages. The GPF time update approximates p(x_{t+1} | y_{0:t}) as a Gaussian, or

    p(x_{t+1} | y_{0:t}) ≈ N(x_{t+1}; μ̄_{t+1}, Σ̄_{t+1}).

Alternatively, we can use the weighted samples of p(x_t | y_{0:t}) obtained in the measurement update to get the following Monte Carlo approximation of the predictive distribution:

    p(x_{t+1} | y_{0:t}) ≈ Σ_{j=1}^{M} w^(j) p(x_{t+1} | x_t^(j)).

Following this observation, samples from p(x_{t+1} | x_t^(j)), j = 1, ..., M, are obtained and denoted as x_{t+1}^(j), from which the mean and covariance of p(x_{t+1} | y_{0:t}) are computed as weighted sample averages. The GPF time update steps are summarized in Table I.

Similar to Theorem 1, it can be shown that as M → ∞ and given the observations until time t, μ̄_{t+1} converges almost surely to the MMSE estimate of x_{t+1}.

III. DISCUSSION

From Corollary 1 (and its counterpart in the time update), we may deduce that the GPF provides better approximations to the integrations involved in the update processes than other Gaussian filters in the presence of severe nonlinearities. Two conclusions can be drawn from this observation. One is that divergence can be avoided by using the GPF, unlike other Gaussian filters. The other is that the GPF provides more accurate estimates of the mean and covariance (confidence intervals), as a function of the number of particles, than the other Gaussian filters. Simulation results in Section IV show that, in general, the GPF has better performance than the EKF and UKF when the mean square errors are compared.

The EKF and its variants assume that the process and observation noise processes are additive and Gaussian. However, as long as the posterior distributions can be approximated as Gaussians, these assumptions can be relaxed for the GPF, and the proposed filter can be applied to a nonadditive and non-Gaussian noise DSS model.
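The measurement and time updates of Section II can be sketched for a scalar model with additive Gaussian noise as follows; the function name, the model, and the parameter values are illustrative assumptions and do not reproduce the paper's algorithm tables.

```python
import numpy as np

def gpf_step(mu_pred, var_pred, y, f, h, sigma_u, sigma_v, M, rng):
    """One GPF recursion: importance-sample the measurement update,
    moment-match a Gaussian, then propagate for the time update."""
    # Measurement update: pi(x_t) = N(mu_pred, var_pred), so the Gaussian
    # prior factor cancels and the weights are proportional to the
    # likelihood p(y_t | x_t).
    x = rng.normal(mu_pred, np.sqrt(var_pred), size=M)
    logw = -0.5 * ((y - h(x)) / sigma_v) ** 2
    w = np.exp(logw - logw.max())
    w /= w.sum()
    mu_filt = np.sum(w * x)                     # filtered mean
    var_filt = np.sum(w * (x - mu_filt) ** 2)   # filtered variance
    # Time update: draw from the fitted Gaussian and push the particles
    # through the process equation.
    x_new = f(rng.normal(mu_filt, np.sqrt(var_filt), size=M)) \
        + sigma_u * rng.standard_normal(M)
    return mu_filt, var_filt, x_new.mean(), x_new.var()

rng = np.random.default_rng(2)
mu_f, var_f, mu_p, var_p = gpf_step(
    mu_pred=0.0, var_pred=1.0, y=0.5,
    f=lambda x: 0.9 * x, h=lambda x: x,
    sigma_u=1.0, sigma_v=0.5, M=50_000, rng=rng)
```

In this linear-Gaussian sanity check the exact posterior is N(0.4, 0.2), so the Monte Carlo moments can be verified against the Kalman filter answer.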
The measurement and time updates indicate that this is possible if p(y_t | x_t), which is required in the weight update (9), can be evaluated and if a sample from p(x_{t+1} | x_t) can be obtained.

As with most Gaussian filters, it is possible that bias accumulation may occur in certain applications. Particle filters with resampling also have bias due to resampling; see [27] for details. However, this phenomenon has not occurred in our simulation examples. A theoretical analysis of bias accumulation due to the approximations involved and finite samples is of interest but is left for future work.

SIS filters essentially obtain particles and their weights from the posterior distributions in a recursive manner. However, a phenomenon called sample degeneration occurs, wherein only a few particles representing the distribution have significant weights. A procedure called resampling [22], [28] has been introduced to mitigate this problem, but it may give limited results and may be computationally expensive. Since the GPF approximates posterior distributions as Gaussians, particle resampling is not required. This results in a computational advantage of the GPF over SIS filters. Removal of resampling has another important advantage in implementation. Resampling is a nonparallel operation in an otherwise parallel SIS algorithm; hence, the GPF is more amenable to fully parallel implementation in VLSI.

The GPF propagates only the mean and covariance of the posterior densities. However, Theorem 1 shows that all moments can be estimated using importance sampling. This suggests a natural extension, wherein higher moments are propagated as well. Alternatively, it may be motivating to approximate the posterior distributions by distributions other than Gaussian, which may be more accurate. This results in a new and much richer class of filters following our methodology. An example is given in the companion paper [24], where Gaussian mixtures approximate the distributions. Other distributions can also be considered, for example, the Student-t distribution, which has a heavier tail than the Gaussian; this may be motivating in certain applications. The mathematical tractability of propagating Gaussians has motivated researchers in the past to provide Gaussian approximations to posterior densities. However, the framework provided here makes it possible to obtain better approximations, albeit at the cost of computational power. We do not elaborate more on this new class of filters in this paper, since the resulting filtering algorithms are straightforward extensions of the ones provided here.

IV. SIMULATION RESULTS

The GPF proposed in this paper was applied to numerical examples, and here, we present the results for two models: the univariate nonstationary growth model, which is popular in econometrics and has been used previously in [17], [20], and [29], and the bearings-only tracking model, which is of interest in defense applications [20], [30].

A. Univariate Nonstationary Growth Model (UNGM)

We choose this model because it is highly nonlinear and bimodal in nature. The DSS equations for this model can be written as

    x_t = 0.5 x_{t-1} + 25 x_{t-1}/(1 + x_{t-1}²) + 8 cos(1.2t) + u_t
    y_t = x_t²/20 + v_t

where u_t is the process noise, and the distribution of v_t is specified below; the coefficients shown are the values commonly used for this model (see [17], [20]). Notice that the term 8 cos(1.2t) in the process equation is independent of x_{t-1} but varies with time; it can be interpreted as time-varying noise. The likelihood p(y_t | x_t) has a bimodal nature when y_t > 0, and when y_t ≤ 0, it is unimodal. The bimodality makes the problem more difficult to address using conventional methods.

We compare the estimation and prediction performance of the EKF, UKF, GPF, and SIS filters based on the following metrics. The filtering MSE is defined by

    MSE = (1/T) Σ_{t=1}^{T} (x_t − μ_t)²    (15)

where μ_t is the state estimate obtained from the filtering distribution. The prediction MSE is defined similarly with the estimate μ̄_t and

    MSE = (1/T) Σ_{t=1}^{T} (x_t − μ̄_t)²    (16)

where the estimate μ̄_t is obtained from the predictive distribution. The third metric is the mean square error computed from the predictive distribution of y_t, i.e., based on ŷ_t = E[y_t | y_{0:t-1}]. The MMSE estimate of y_t is given by

    ŷ_t = E[x_t²/20 | y_{0:t-1}].

For this example, with the Gaussian predictive density N(x_t; μ̄_t, σ̄_t²), we obtain

    ŷ_t = (μ̄_t² + σ̄_t²)/20.

When the ratio of the measurement noise variance to the process noise variance is small, the bimodality of the problem is more severe, and we expect to see improved performance of the GPF over that of the EKF in the presence of this high nonlinearity. The process noise u_t was zero-mean Gaussian, and the initial distribution of x_0 was Gaussian. For both the GPF and SIS, the IS function was the prior p(x_t | x_{t-1}) (in SIS terminology, the IS function is written as π(x_t | x_{0:t-1}, y_{0:t})). The applied SIS filter was the one from [31], where resampling was performed at every step using the systematic resampling scheme [32]. For the EKF, we get ŷ_t = μ̄_t²/20. For the GPF, since we draw particles from the predictive density in the measurement update, we obtain a Monte Carlo estimate for ŷ_t.

A large number of simulations were performed to compare the four filters. In Figs. 1 and 2, we show plots of the first 100 states and the estimates obtained using the EKF and GPF, respectively, for a single simulation.
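The UNGM can be simulated as follows; the coefficient values are the ones standard for this model in the literature, and the noise levels and initial state below are illustrative assumptions.

```python
import numpy as np

def ungm_simulate(T, sigma_u, sigma_v, x0, rng):
    """Simulate the univariate nonstationary growth model."""
    x = np.empty(T)
    y = np.empty(T)
    x_prev = x0
    for t in range(1, T + 1):
        x[t - 1] = (0.5 * x_prev
                    + 25.0 * x_prev / (1.0 + x_prev**2)
                    + 8.0 * np.cos(1.2 * t)
                    + sigma_u * rng.standard_normal())
        y[t - 1] = x[t - 1] ** 2 / 20.0 + sigma_v * rng.standard_normal()
        x_prev = x[t - 1]
    return x, y

rng = np.random.default_rng(3)
x, y = ungm_simulate(T=100, sigma_u=1.0, sigma_v=1.0, x0=0.1, rng=rng)
```

Note that the one-step observation predictor (μ̄² + σ̄²)/20 above follows directly from E[x²] = μ̄² + σ̄² for a Gaussian predictive density.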
Fig. 1. Plot of the true state and estimate of the EKF.
Fig. 2. Plot of the true state and estimate of the GPF.
Fig. 3. Plot of the prediction error and 3σ interval for the EKF.
Fig. 4. Plot of the prediction error and 3σ interval for the GPF.

Note the tendency of the EKF to track the opposite mode of the bimodality, especially when the true state is small. This behavior was observed in general for most simulation runs.

In Figs. 3 and 4, we plot the prediction error and the 3σ intervals, where σ was the estimated standard deviation of the prediction error. As expected, the errors lie mostly within this interval for the GPF, which is not the case for the EKF. In addition, the values of σ for the EKF are much higher, pointing to the occurrence of divergence of the filter. All of the above observations were made in most of the simulation runs. Clearly, the GPF outperforms the EKF significantly for this highly nonlinear example.

Fig. 5 shows the filtering MSE for 50 random realizations with M = 100 particles. In Fig. 7, the average filtering MSE is plotted for M = 20, 100, and 1000 particles. Even for a small number of particles (M = 20), the GPF and SISR filters perform significantly better than the EKF and UKF, whereas the MSE for the GPF is marginally higher than that of the SISR. An increase in the number of particles reduced the MSE even further for the GPF and SISR filters; however, the change in performance is negligible as the number of particles is increased beyond 100. As observed from the figures, for M = 100 and 1000, there is an insignificant difference in the MSEs of the GPF and the SISR. Similar behavior was observed for the prediction MSE (see Fig. 6) and the observation MSE metrics.

A comparison of the computation times is also shown in Fig. 7 for simulations implemented on a 450-MHz Intel Pentium III processor using MATLAB. Note that, as expected, the computation times for the GPF and SISR filters are much higher than those of the EKF and UKF. However, as noted in Section III, these times are those obtained on a serial computer, and a large reduction in computation times can be expected for the GPF and SISR when implemented in parallel. For the EKF, UKF, and GPF, a clear performance-computational time tradeoff can be seen in the figure.
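The systematic resampling step that dominates the SISR's extra cost can be sketched in O(M) as follows; this is an illustrative implementation of the scheme of [32], not code from the paper.

```python
import numpy as np

def systematic_resample(weights, rng):
    """Systematic resampling in O(M): one uniform draw sets M evenly
    spaced positions, and a single pass over the cumulative weights
    selects the surviving particle indices."""
    M = len(weights)
    cumsum = np.cumsum(weights)
    u = rng.random() / M          # single uniform offset in [0, 1/M)
    idx = np.empty(M, dtype=np.int64)
    i = 0
    for j in range(M):
        while i < M - 1 and u > cumsum[i]:
            i += 1
        idx[j] = i
        u += 1.0 / M
    return idx

rng = np.random.default_rng(4)
w = np.full(10, 0.1)                       # uniform weights
idx_uniform = systematic_resample(w, rng)  # keeps every particle once
```

Each index is replicated within one of M times its weight, which is why systematic resampling has low variance compared with multinomial resampling.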
Fig. 5. Performance comparison of EKF, UKF, GPF, and SISR filters; the filtering MSE is plotted for 50 random realizations. M = 100 for both GPF and SISR filters.
Fig. 6. Performance comparison of EKF, UKF, GPF, and SISR filters. Average prediction MSE of 50 random realizations. M = 20, 100, and 1000 for both GPF and SISR filters.
Fig. 7. Performance comparison of EKF, UKF, GPF, and SISR filters. Computation time (per realization) and average filtering MSE of 50 random realizations. M = 20, 100, and 1000 for both GPF and SISR filters.

More importantly, the GPF has a much smaller computation time than the SISR, and the difference increases as the number of particles increases. This is due to the additional resampling required in the SISR filter, which has computational complexity of O(M) for the systematic resampling scheme used here.

B. Bearings-Only Tracking (BOT)

The bearings-only model is well motivated and arises in defense applications and in engineering problems. The bearings-only tracking problem considers tracking an object moving in a 2-D space. The measurements taken by the sensor at fixed intervals to track the object are the bearings, or angles (subject to noise), with respect to the sensor. The range of the object, that is, its distance from the sensor, is not measured.

We employ a change of notation here to better facilitate understanding of the mechanics of the problem. Let the sensor be stationary and located at the origin of the x–y plane. The object moves in the x–y plane according to the following process model:

    z_t = Φ z_{t-1} + Γ u_t

where z_t = [x_t, ẋ_t, y_t, ẏ_t]ᵀ, and Φ and Γ are the transition matrices of a constant-velocity motion model. Here, x_t and y_t denote the Cartesian coordinates of the target, and ẋ_t and ẏ_t denote the velocities in the x and y directions, respectively. The system noise u_t is a zero-mean Gaussian white noise, that is, u_t ~ N(0, σ_u² I₂), where I₂ is the 2 × 2 identity matrix. The initial state z_0 describes the target's initial position and velocity. A prior for the initial state also needs to be specified for the model; we assume it is Gaussian.

The measurements consist of the true bearing of the target corrupted by a Gaussian error term. The measurement equation can be written as

    θ_t = arctan(y_t/x_t) + v_t    (17)

where the measurement noise v_t is a zero-mean Gaussian white noise, that is, v_t ~ N(0, σ_v²).

It is important to note that, from (17), we obtain no information about the range of the object from the measurement. Thus, given a series of measurements, we can in general only detect the shape of the trajectory but not its location or distance from the point of measurement. In particular, if we consider the likelihood associated with a single measurement θ_t as a function of x_t and y_t, that is, p(θ_t | x_t, y_t), it is clear that the likelihood consists of a modal ridge along the line y/x = tan(θ_t). The prior describes our knowledge about the position and velocity of the target at the initial stage.
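The BOT process and measurement models can be sketched as follows; the matrices Φ and Γ are the constant-velocity choices standard for this model in the tracking literature and are an assumption here, as are the numerical values in the usage example.

```python
import numpy as np

# Standard constant-velocity transition matrices (assumed, since the
# printed matrices did not survive extraction).
Phi = np.array([[1.0, 1.0, 0.0, 0.0],
                [0.0, 1.0, 0.0, 0.0],
                [0.0, 0.0, 1.0, 1.0],
                [0.0, 0.0, 0.0, 1.0]])
Gamma = np.array([[0.5, 0.0],
                  [1.0, 0.0],
                  [0.0, 0.5],
                  [0.0, 1.0]])

def bot_simulate(z0, T, sigma_u, sigma_v, rng):
    """State z = [x, x_dot, y, y_dot]; the measurement is the noisy
    bearing of the target as seen from a sensor at the origin."""
    z = np.empty((T, 4))
    theta = np.empty(T)
    z_prev = np.asarray(z0, dtype=float)
    for t in range(T):
        z[t] = Phi @ z_prev + Gamma @ (sigma_u * rng.standard_normal(2))
        theta[t] = np.arctan2(z[t, 2], z[t, 0]) \
            + sigma_v * rng.standard_normal()
        z_prev = z[t]
    return z, theta

rng = np.random.default_rng(5)
# Illustrative initial state and noise levels.
z, theta = bot_simulate(z0=[-0.05, 0.001, 0.7, -0.055],
                        T=24, sigma_u=0.001, sigma_v=0.005, rng=rng)
```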
Fig. 8. Tracking of a moving target in two dimensions with the EKF, UKF, GPF, and SISR filters. M = 1000 for both GPF and SISR filters.
Fig. 9. Performance comparison of EKF, UKF, GPF, and SISR filters. MSEs for position and velocities in x and y directions. M = 1000 for both GPF and SISR filters.
Fig. 10. Performance comparison of EKF, UKF, GPF, and SISR filters. Average MSEs for position and velocities in x and y directions. M = 1000 for both GPF and SISR filters.

This prior and the likelihood of the series of measurements are combined to obtain an estimate of the trajectory.

A target trajectory and associated measurements over 24 time steps were generated with a fixed initial state vector. The process and measurement noise were as specified above. Note that for this particular trajectory, in the initial stages, the bearing of the target with respect to the observer remains almost constant. As the object passes the observer, the change in the angle is large. For the prior, the assumed mean parameter was the same as the initial state vector.

The EKF, UKF, GPF, and SISR filters were applied to get an estimate of the trajectory. The number of particles used for the GPF and SISR filters was M = 1000. Systematic random resampling was used for the SISR filter. Observe that, due to the nonlinearity of the observation equation, it is not straightforward to sample from the optimal importance function. Hence, we use the Gaussian approximation of the predictive density and the prior p(z_t | z_{t-1}) as the IS functions for the GPF and SISR, respectively.

Fig. 8 displays a representative trajectory and the tracking obtained by all four filters. Fig. 9 shows the MSE for 100 random realizations for all the state variables. The EKF diverged in more than 50 realizations, whereas the UKF diverged for three realizations. Fig. 10 shows the average MSE obtained for the same 100 random realizations for all the state variables on a logarithmic scale. Fig. 11 displays a trajectory where the EKF and UKF both diverged. In cases where divergence was not observed, the MSEs of the EKF and UKF were much higher compared with the MSEs of the GPF and SISR filters. On average, the UKF performed better than the EKF; however, for some state variables, the MSE was much higher for the UKF. A comparison of the GPF and SISR filters shows that the GPF has marginally better performance than the SISR filter.

In another experiment, all the above simulations were replicated for a varying number of particles. For M = 500, the GPF and SISR filters diverged, implying that a greater number of particles was required for stability. It was observed that increasing M from 500 to 1000 improved the performance. Increasing M further gave minor improvements.

Computation times are shown in Fig. 12. As expected, the EKF and UKF have very small computation times compared with the particle filters. However, the high use of computational power is justified for the GPF and SISR since, in most cases, the target is tracked well. Again, since resampling is required in the SISR, it is more computationally expensive, and a large difference is observed in the computation times of the GPF and SISR.
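As a quick numerical illustration of the range unobservability noted above, the bearings-only likelihood depends on position only through the bearing arctan(y/x), so scaling a position along the ray from the sensor leaves the likelihood unchanged (the values below are arbitrary).

```python
import numpy as np

def bearing_loglik(theta_meas, x, y, sigma_v):
    """Gaussian log-likelihood of a bearing measurement at position (x, y)."""
    return -0.5 * ((theta_meas - np.arctan2(y, x)) / sigma_v) ** 2

l_near = bearing_loglik(0.7, 2.0, 1.5, sigma_v=0.05)
l_far = bearing_loglik(0.7, 20.0, 15.0, sigma_v=0.05)  # 10x the range
```

The two log-likelihoods are equal: this is the modal ridge along y/x = tan(θ_t) that makes the EKF and UKF divergence-prone here.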
Fig. 11. Example of divergence of the EKF and UKF in the bearings-only tracking example, whereas the GPF and SISR filters do not diverge.
Fig. 12. Performance comparison of EKF, UKF, GPF, and SISR filters; computation time (per realization) versus number of particles for both GPF and SISR filters.

V. CONCLUSION

The Gaussian particle filter provides much better performance than the EKF and UKF. Moreover, the additive Gaussian noise assumption can be relaxed without any modification to the filter algorithm. The update of the filtering and predictive distributions is implemented by importance sampling, which applies to nonlinear and non-Gaussian noise models, and hence, the GPF developed here can also be used to solve the non-Gaussian noise problem.

REFERENCES

[1] J. H. Kotecha and P. M. Djurić, "Gaussian particle filtering," in Proc. Workshop Statistical Signal Process., Singapore, Aug. 2001.
[2] J. H. Kotecha and P. M. Djurić, "Gaussian sum particle filtering for dynamic state space models," in Proc. Int. Conf. Acoust., Speech, Signal Process., Salt Lake City, UT, May 2001.
[3] P. J. Harrison and C. F. Stevens, "Bayesian forecasting (with discussion)," J. R. Statist. Soc. B, vol. 38, pp. 205–247, 1976.
[4] B. D. O. Anderson and J. B. Moore, Optimal Filtering. Englewood Cliffs, NJ: Prentice-Hall, 1979.
[5] M. S. Grewal and A. P. Andrews, Kalman Filtering, 2nd ed. New York: Wiley, 2001.
[6] S. F. Schmidt, "Advances in control systems," in Applications of State Space Methods to Navigation Problems, C. T. Leondes, Ed. New York: Academic, 1966.
[7] H. W. Sorenson, Recursive Estimation for Nonlinear Dynamic Systems. New York: Marcel Dekker, 1988.
[8] A. H. Jazwinski, Stochastic Processes and Filtering Theory. New York: Academic, 1970.
[9] Y. Bar-Shalom and X.-R. Li, Estimation and Tracking. Norwell, MA: Artech House, 1993.
[10] S. Frühwirth-Schnatter, "Applied state-space modeling of non-Gaussian time series using integration based Kalman filtering," Statist. Comput., vol. 4, pp. 259–269, 1994.
[11] C. J. Masreliez, "Approximate non-Gaussian filtering with linear state and observation relations," IEEE Trans. Automat. Contr., vol. 20, pp. 107–110, 1975.
[12] M. West, P. J. Harrison, and H. S. Migon, "Dynamic generalized linear models and Bayesian forecasting (with discussion)," J. Amer. Statist. Assoc., vol. 80, pp. 73–97, 1985.
[13] S. J. Julier, J. K. Uhlmann, and H. F. Durrant-Whyte, "A new approach for filtering nonlinear systems," in Proc. Amer. Contr. Conf., Seattle, WA, June 1995, pp. 1628–1632.
[14] K. Ito and K. Xiong, "Gaussian filters for nonlinear filtering problems," IEEE Trans. Automat. Contr., vol. 45, pp. 910–927, May 2000.
[15] D. L. Alspach and H. W. Sorenson, "Nonlinear Bayesian estimation using Gaussian sum approximation," IEEE Trans. Automat. Contr., vol. 17, pp. 439–448, July 1972.
[16] R. S. Bucy, "Bayes theorem and digital realization for nonlinear filters," J. Astronaut. Sci., vol. 80, pp. 73–97, 1969.
[17] G. Kitagawa, "Non-Gaussian state-space modeling of nonstationary time series," J. Amer. Statist. Assoc., vol. 82, no. 400, pp. 1032–1063, 1987.
[18] S. C. Kramer and H. W. Sorenson, "Recursive Bayesian estimation using piece-wise constant approximations," Automatica, vol. 24, pp. 789–801, 1988.
[19] A. Pole and M. West, "Efficient numerical integration in dynamic models," Dept. Statist., Univ. Warwick, Warwick, U.K., Res. Rep. 136, 1988.
[20] N. Gordon, D. Salmond, and C. Ewing, "Bayesian state estimation for tracking and guidance using the bootstrap filter," J. Guid., Contr., Dyn., vol. 18, no. 6, pp. 1434–1443, Nov.–Dec. 1995.
[21] J. H. Kotecha and P. M. Djurić, "Sequential Monte Carlo sampling detector for Rayleigh fast-fading channels," in Proc. Int. Conf. Acoust., Speech, Signal Process., 2000.
[22] J. S. Liu and R. Chen, "Monte Carlo methods for dynamic systems," J. Amer. Statist. Assoc., vol. 93, pp. 1032–1044, 1998.
distributions as Gaussians by particle-based approaches has Amer. Statist. Assoc., vol. 93, no. 443, pp. 1032–1044, 1998.
the advantages of easier implementation than standard SIS [23] A. Doucet, S. J. Godsill, and C. Andrieu, “On sequential Monte Carlo
sampling methods for Bayesian filtering,” Statist. Comput., vol. 10, no.
algorithms and better performance than algorithms that are also 3, pp. 197–208, 2000.
based on Gaussian approximations. The GPF is more computa- [24] J. H. Kotecha and P. M. Djurić, “Gaussian sum particle filtering,” IEEE
tionally expensive than the EKF. However, the parallelizibility Trans. Signal Processing, vol. 51, pp. 2603–2613, Oct. 2003.
[25] J. Geweke, “Bayesian inference in econometrics models using Monte
of the GPF and the absence of resampling makes it convenient Carlo integration,” Econometrica, vol. 33, no. 1, pp. 1317–1339, 1989.
for VLSI implementation and, hence, feasible for practical [26] S. J. Julier, J. K. Uhlmann, and H. F. Durrant-Whyte, “A new method
real-time applications. In a companion paper [24], we exploit for the nonlinear transformation of means and covariances in filters and
estimators,” IEEE Trans. Automat. Contr., vol. 45, pp. 477–482, Mar.
the GPF to build Gaussian sum particle filters that can be used 2000.
for models where the the filtering and predictive distributions [27] C. Berzuini, N. G. Best, W. R. Gilks, and C. Larizza, “Dynamic condi-
cannot be approximated successfully with a single Gaussian tional independence models and Markov chain Monte Carlo,” J. Amer.
Statist. Assoc., vol. 92, no. 440, pp. 1403–1412, Dec. 1997.
distribution and for models with non-Gaussian noise. The latter [28] A. Doucet, J. F. G. de Freitas, and N. J. Gordon, Sequential Monte Carlo
models can be approximated as a weighted bank of Gaussian Methods in Practice. New York: Springer-Verlag, 2000.
KOTECHA AND DJURIĆ: GAUSSIAN PARTICLE FILTERING 2601

[29] E. R. Beadle and P. M. Djurić, “A fast weighted Bayesian bootstrap filter Peter M. Djurić (SM’99) received the B.S. and
for nonlinear model state estimation,” IEEE Trans. Aerosp. Electron. M.S. degrees in electrical engineering from the
Syst., vol. 33, pp. 338–342, Jan. 1997. University of Belgrade, Belgrade, Yugoslavia, in
[30] D. Avitzour, “Stochastic simulation Bayesian approach to multitarget 1981 and 1986, respectively, and the Ph.D. degree in
tracking,” Proc. Inst. Elect. Eng.—Radar Sonar Navigat., vol. 142, no. electrical engineering from the University of Rhode
2, pp. 41–44, Apr. 1995. Island, Kingston, in 1990.
[31] N. J. Gordon, D. J. Salmond, and A. F. M. Smith, “Novel approach From 1981 to 1986, he was Research Associate
to nonlinear/non-Gaussian Bayesian state estimation,” Proc. Inst. Elect. with the Institute of Nuclear Sciences, Vinca, Bel-
Eng. F, vol. 140, no. 2, pp. 107–113, Apr. 1993. grade, Yugoslavia. Since 1990, he has been with the
[32] J. Carpenter, P. Clifford, and P. Fearnhead, “Improved particle filter for State University of New York at Stony Brook, where
nonlinear problems,” Proc. Inst. Elect. Eng.—Radar, Sonar, Navig., vol. he is Professor with the Department of Electrical and
146, no. 1, pp. 2–7, Feb. 1999. Computer Engineering. He works in the area of statistical signal processing, and
his primary interests are in the theory of modeling, detection, estimation, and
time series analysis and its application to a wide variety of disciplines, including
telecommunications, bio-medicine, and power engineering.
Jayesh H. Kotecha received the B.E. degree in elec- Prof. Djuric has served on numerous Technical Committees for the IEEE
tronics and telecommunications from the College of and SPIE and has been invited to lecture at universities in the United States
Engineering, Pune, India, in 1995 and the M.S. and and overseas. He was Associate Editor of the IEEE TRANSACTIONS ON SIGNAL
Ph.D. degrees from the State University of New York PROCESSING, and currently, he is the Treasurer of the IEEE Signal Processing
at Stony Brook in 1996 and 2001, respectively. Conference Board and Area Editor of Special Issues of the IEEE SIGNAL
Since January 2002, he has been with the Uni- PROCESSING MAGAZINE. He is also Vice Chair of the IEEE Signal Processing
versity of Wisconsin, Madison, as a post-doctoral Society Committee on Signal Processing—Theory and Methods and a Member
researcher. His research interests are primarily in of the American Statistical Association and the International Society for
the fields of communications, signal processing, Bayesian Analysis.
and information theory. Presently, he is working
in statistical signal processing, adaptive signal
processing, and particle filters and physical layer aspects in multi-input
multi-ouput communication systems, including channel modeling, tranceiver
design, space-time coding, and capacity analysis.

You might also like