
Proceedings of the 8th World Congress on Intelligent Control and Automation
June 21-25, 2011, Taipei, Taiwan

Interacting Multiple Model Gaussian Particle Filter


Zhigang Liu and Jinkuan Wang
Institute of Engineering Optimization and Smart Antenna
Northeastern University
Qinhuangdao 066004, China
{zliu,wjk}@mail.neuq.edu.cn

Abstract— For maneuvering target tracking, an interacting multiple model Gaussian particle filter is proposed that requires no resampling and therefore avoids degeneracy in the effective number of particles. The basic idea is to combine the interacting multiple model approach with a Gaussian particle filter, and the resulting algorithm lends itself to parallel implementation. Finally, simulation results show the effectiveness of the proposed algorithm.

Index Terms— Maneuvering target tracking, interacting multiple model, Gaussian particle filter, resampling.

This work was supported by the National Natural Science Foundation of China under Grant 60874108 and the Fundamental Research Funds for the Central Universities under Grant N090423002.

I. INTRODUCTION

Tracking a maneuvering target is often formulated in practice as the problem of estimating the state of a partially observed jump Markov nonlinear system [1][2]. However, for most nonlinear models and non-Gaussian noise, closed-form analytic expressions for the posterior distributions do not exist in general. As a result, several more tractable approximations have been proposed.

The particle filter approximates the posterior distribution by a weighted set of samples and can handle nonlinearities in both the dynamics and the measurements [3][4]. To reduce the computational complexity, the Gaussian particle filter (GPF) has been proposed; it removes the resampling step and is therefore more amenable to fully parallel implementation in very large scale integration (VLSI) [5]. As a suboptimal hybrid filter, the interacting multiple model (IMM) estimator can estimate the state of a dynamic system with several behavior modes that switch from one to another, which makes it a natural choice for tracking maneuvering targets. On this basis, the multiple model bootstrap filter was developed for maneuvering target tracking, but its posterior mode probabilities are approximated by the proportions of samples assigned to each mode, which leads to numerical problems [6]. Further, a combination of the IMM filter and a regularised particle filter has been presented, using a hybrid type of sampling as an alternative to direct resampling [7].

In this paper, a new method for tracking a maneuvering target is proposed that uses the Gaussian particle filter without resampling, so that degeneracy in the effective number of particles is avoided. The basic idea is to combine the IMM approach with a Gaussian particle filter. In addition, the proposed method is easy to implement in parallel.

II. PROBLEM FORMULATION

Assume that the target model jumps between a finite set of known modes representing various trajectory models. At any time $k+1$, the target state evolves according to

$$\mathbf{x}_{k+1} = \mathbf{F}(m_{k+1})\mathbf{x}_k + \mathbf{G}(m_{k+1})\mathbf{w}_k \qquad (1)$$

where $\mathbf{x}_k$ is the system state and $\mathbf{w}_k$ is a zero-mean white Gaussian noise (WGN) vector with covariance $\mathbf{Q}$. The modal state at time $k+1$ is $m_{k+1} \in S$, where $S = \{1, 2, \cdots, s\}$. The observation model is given by

$$z_k = H(\mathbf{x}_k) + v_k \qquad (2)$$

where $z_k$ is the observation and $v_k$ is zero-mean additive WGN with variance $R$.

Furthermore, the mode transition of the system is modelled by a Markov chain with

$$P\{m_{k+1} = j \mid m_k = i\} = \pi_{ij}, \quad \forall i, j \in S.$$

Define the measurements up to and including time step $k$ as $Z^k \triangleq \{z_1, z_2, \cdots, z_k\}$.

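As a concrete illustration of the model in (1) and (2), the following Python sketch simulates one step of the jump Markov system. It is not taken from the paper; the function name simulate_step and the container conventions for the mode-matched matrices are illustrative assumptions.

```python
import numpy as np

def simulate_step(x, mode, F, G, H, Pi, Q, R, rng):
    """Simulate one step of the jump Markov system in (1)-(2).

    F[m] and G[m] hold the mode-matched matrices, H is the measurement
    function, Pi[i, j] = P{m_{k+1} = j | m_k = i}, and Q, R are the
    process and measurement noise (co)variances.
    """
    # Draw the next mode from the Markov chain, using the row of Pi
    # that corresponds to the current mode.
    new_mode = rng.choice(len(Pi), p=Pi[mode])
    # Propagate the state with the mode-matched model, eq. (1).
    w = rng.multivariate_normal(np.zeros(Q.shape[0]), Q)
    x_next = F[new_mode] @ x + G[new_mode] @ w
    # Generate the scalar measurement, eq. (2).
    z = H(x_next) + rng.normal(0.0, np.sqrt(R))
    return x_next, new_mode, z
```

With the two-mode example of Section V, F and G would simply be lists holding F(1), G(1) and F(2), G(2).
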
III. GAUSSIAN PARTICLE FILTER

In a Bayesian context, our aim is to estimate the filtering distribution $p(\mathbf{x}_k \mid Z^k)$ and the predictive distribution $p(\mathbf{x}_{k+1} \mid Z^k)$ recursively in time. The filtering distribution at time $k$ can be written as

$$p(\mathbf{x}_k \mid Z^k) = C_k\, p(\mathbf{x}_k \mid Z^{k-1})\, p(z_k \mid \mathbf{x}_k) \qquad (3)$$

where $C_k$ is the normalizing constant given by

$$C_k = \left( \int p(\mathbf{x}_k \mid Z^{k-1})\, p(z_k \mid \mathbf{x}_k)\, d\mathbf{x}_k \right)^{-1}.$$

Furthermore, the predictive distribution can be expressed as

$$p(\mathbf{x}_{k+1} \mid Z^k) = \int p(\mathbf{x}_{k+1} \mid \mathbf{x}_k)\, p(\mathbf{x}_k \mid Z^k)\, d\mathbf{x}_k. \qquad (4)$$

The GPF approximates the filtering and predictive distributions in (3) and (4) by Gaussian densities. Therefore, the filtering distribution in (3) is now approximated as

$$p(\mathbf{x}_k \mid Z^k) \approx C_k\, p(\mathbf{x}_k \mid Z^{k-1})\, \mathcal{N}(\mathbf{x}_k; \bar{\mu}_k, \bar{\Sigma}_k) \approx \mathcal{N}(\mathbf{x}_k; \mu_k, \Sigma_k) \qquad (5)$$



where $\mathcal{N}(\mathbf{x}_k; \mu_k, \Sigma_k)$ denotes a Gaussian distribution with mean $\mu_k$ and covariance $\Sigma_k$.

Similarly, we can approximate the predictive distribution by a Gaussian. Using a Monte Carlo approximation, equation (4) becomes

$$p(\mathbf{x}_{k+1} \mid Z^k) \approx \int p(\mathbf{x}_{k+1} \mid \mathbf{x}_k)\, \mathcal{N}(\mathbf{x}_k; \mu_k, \Sigma_k)\, d\mathbf{x}_k \approx \frac{1}{M} \sum_{m=1}^{M} p(\mathbf{x}_{k+1} \mid \mathbf{x}_k^m) \approx \mathcal{N}(\mathbf{x}_{k+1}; \bar{\mu}_{k+1}, \bar{\Sigma}_{k+1}) \qquad (6)$$

where $\mathbf{x}_k^m$ are particles drawn from $\mathcal{N}(\mathbf{x}_k; \mu_k, \Sigma_k)$, and the mean and covariance of $p(\mathbf{x}_{k+1} \mid Z^k)$ are computed as

$$\bar{\mu}_{k+1} = \frac{1}{M} \sum_{m=1}^{M} \mathbf{x}_{k+1}^m, \qquad \bar{\Sigma}_{k+1} = \frac{1}{M} \sum_{m=1}^{M} (\bar{\mu}_{k+1} - \mathbf{x}_{k+1}^m)(\bar{\mu}_{k+1} - \mathbf{x}_{k+1}^m)^T \qquad (7)$$

where $\mathbf{x}_{k+1}^m$ are particles drawn from $p(\mathbf{x}_{k+1} \mid \mathbf{x}_k^m)$.

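To make the recursion in (3)-(7) concrete, here is a minimal Python/NumPy sketch of the two GPF updates. It is an illustrative reading of the equations rather than the authors' code; the names gpf_measurement_update, gpf_time_update, likelihood and propagate are assumptions.

```python
import numpy as np

def gpf_measurement_update(mu_pred, Sigma_pred, z, likelihood, M, rng):
    """Approximate p(x_k | Z^k) by a Gaussian, in the spirit of (3) and (5)."""
    # Sample from the Gaussian approximation of the predictive distribution.
    x = rng.multivariate_normal(mu_pred, Sigma_pred, size=M)   # shape (M, n)
    # Weight each particle by the likelihood p(z_k | x_k^m) and normalize.
    w = np.array([likelihood(z, xm) for xm in x])
    w = w / w.sum()
    # Moment-match the weighted particles to obtain N(mu_k, Sigma_k).
    mu = w @ x
    d = x - mu
    Sigma = (w[:, None] * d).T @ d
    return mu, Sigma

def gpf_time_update(mu, Sigma, propagate, M, rng):
    """Approximate p(x_{k+1} | Z^k) by a Gaussian, following (6)-(7)."""
    # Draw particles from N(mu_k, Sigma_k) and push them through the dynamics.
    x = rng.multivariate_normal(mu, Sigma, size=M)
    x_next = np.array([propagate(xm, rng) for xm in x])
    # Sample mean and covariance give mu_bar_{k+1} and Sigma_bar_{k+1}.
    mu_pred = x_next.mean(axis=0)
    d = x_next - mu_pred
    Sigma_pred = d.T @ d / M
    return mu_pred, Sigma_pred
```

Because only means and covariances are carried from one step to the next, no resampling is required, which is the property the proposed algorithm exploits.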

IV. PROPOSED ALGORITHM

The algorithm that we propose here has three stages: a mixing stage, a Gaussian particle filtering stage, and a fusion stage.

A. Mixing Stage

The mixing estimate in cycle $k$ for the filter matched to the $i$th mode is computed as

$$\bar{\mathbf{x}}_{k-1}^i = \sum_{j=1}^{S} \hat{\mathbf{x}}_{k-1}^j\, \gamma_{k-1}^{j \mid i} \qquad (8)$$

where $\gamma_{k-1}^{j \mid i}$ is the mixing weight given by

$$\gamma_{k-1}^{j \mid i} = P\{m_{k-1} = j \mid m_k = i, Z^{k-1}\} = \pi_{ji}\, \gamma_{k-1}^{j} / \gamma_{k \mid k-1}^{i} \qquad (9)$$

and $\gamma_{k \mid k-1}^{i}$ is the predicted mode probability

$$\gamma_{k \mid k-1}^{i} = P\{m_k = i \mid Z^{k-1}\} = \sum_{j=1}^{S} \pi_{ji}\, \gamma_{k-1}^{j}. \qquad (10)$$

The associated mixing covariance is

$$\bar{\mathbf{P}}_{k-1}^i = \sum_{j=1}^{S} \left[ \mathbf{P}_{k-1}^j + (\bar{\mathbf{x}}_{k-1}^i - \hat{\mathbf{x}}_{k-1}^j)(\bar{\mathbf{x}}_{k-1}^i - \hat{\mathbf{x}}_{k-1}^j)^T \right] \gamma_{k-1}^{j \mid i}. \qquad (11)$$

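The mixing computations (8)-(11) translate directly into NumPy. The sketch below is illustrative (the function and variable names are my own); Pi is the transition matrix $\Pi$ of Section II, so Pi[j, i] equals $\pi_{ji}$.

```python
import numpy as np

def mixing_stage(x_hat, P, gamma, Pi):
    """IMM mixing, eqs. (8)-(11).

    x_hat: (S, n) per-mode estimates x_hat^j_{k-1}
    P:     (S, n, n) per-mode covariances P^j_{k-1}
    gamma: (S,) mode probabilities gamma^j_{k-1}
    Pi:    (S, S) transition matrix with Pi[i, j] = pi_{ij}
    """
    S = x_hat.shape[0]
    # Predicted mode probabilities, eq. (10): gamma^i_{k|k-1} = sum_j pi_{ji} gamma^j.
    gamma_pred = Pi.T @ gamma                              # shape (S,)
    # Mixing weights, eq. (9): mix_w[j, i] = pi_{ji} gamma^j / gamma^i_{k|k-1}.
    mix_w = (Pi * gamma[:, None]) / gamma_pred[None, :]
    # Mixed estimates, eq. (8): row i is x_bar^i_{k-1}.
    x_bar = mix_w.T @ x_hat                                # shape (S, n)
    # Mixed covariances, eq. (11).
    P_bar = np.zeros_like(P)
    for i in range(S):
        for j in range(S):
            d = (x_bar[i] - x_hat[j])[:, None]
            P_bar[i] += mix_w[j, i] * (P[j] + d @ d.T)
    return x_bar, P_bar, gamma_pred
```
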
B. Gaussian Particle Filtering Stage

On the basis of the above mixing estimate and mixing covariance, we first draw the conditioning particles from $\mathcal{N}(\mathbf{x}_{k-1}; \bar{\mathbf{x}}_{k-1}^i, \bar{\mathbf{P}}_{k-1}^i)$ to obtain $\{\mathbf{x}_{k-1}^{i,m}\}_{m=1}^{M}$, generate particles by drawing samples from $p(\mathbf{x}_k \mid \mathbf{x}_{k-1} = \mathbf{x}_{k-1}^{i,m})$ to obtain $\{\mathbf{x}_k^{i,m}\}_{m=1}^{M}$, calculate the weights $\tilde{w}_k^{i,m} = p(z_k \mid \mathbf{x}_k^{i,m})$, and normalize them by

$$w_k^{i,m} = \frac{\tilde{w}_k^{i,m}}{\sum_{m=1}^{M} \tilde{w}_k^{i,m}}.$$

The mean and covariance of the filtering distribution are then estimated by

$$\hat{\mathbf{x}}_k^i = \sum_{m=1}^{M} w_k^{i,m}\, \mathbf{x}_k^{i,m}, \qquad \mathbf{P}_k^i = \sum_{m=1}^{M} w_k^{i,m} (\mathbf{x}_k^{i,m} - \hat{\mathbf{x}}_k^i)(\mathbf{x}_k^{i,m} - \hat{\mathbf{x}}_k^i)^T. \qquad (12)$$

The measurement residual can be computed as

$$\tilde{z}_k^i = z_k - \bar{z}_k^i \qquad (13)$$

where $\bar{z}_k^i$ is the predicted measurement

$$\bar{z}_k^i = \sum_{m=1}^{M} w_k^{i,m}\, H(\mathbf{x}_k^{i,m}) \qquad (14)$$

and the residual covariance is

$$S_k^i = R + \sum_{m=1}^{M} w_k^{i,m} (z_k^{i,m} - \bar{z}_k^i)(z_k^{i,m} - \bar{z}_k^i)^T \qquad (15)$$

where $z_k^{i,m} = H(\mathbf{x}_k^{i,m})$.

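A sketch of one such per-mode update, eqs. (12)-(15), is given below, assuming the linear mode-matched dynamics of (1) and a scalar measurement with Gaussian noise of variance R; gpf_stage and its arguments are illustrative names, not the paper's.

```python
import numpy as np

def gpf_stage(x_bar_i, P_bar_i, z, F_i, G_i, H, Q, R, M, rng):
    """Per-mode Gaussian particle filtering stage, eqs. (12)-(15)."""
    # Conditioning particles from N(x_bar^i_{k-1}, P_bar^i_{k-1}).
    x_prev = rng.multivariate_normal(x_bar_i, P_bar_i, size=M)
    # Propagate through the mode-matched dynamics, i.e. sample p(x_k | x_{k-1}).
    w_noise = rng.multivariate_normal(np.zeros(Q.shape[0]), Q, size=M)
    x = x_prev @ F_i.T + w_noise @ G_i.T
    # Unnormalized weights w_tilde^{i,m} = p(z_k | x^{i,m}_k), then normalize.
    z_part = np.array([H(xm) for xm in x])          # z^{i,m}_k = H(x^{i,m}_k)
    w = np.exp(-0.5 * (z - z_part) ** 2 / R)
    w = w / w.sum()
    # Filtered mean and covariance, eq. (12).
    x_hat = w @ x
    d = x - x_hat
    P = (w[:, None] * d).T @ d
    # Predicted measurement, residual and residual covariance, eqs. (13)-(15).
    z_bar = w @ z_part
    residual = z - z_bar
    S_res = R + np.sum(w * (z_part - z_bar) ** 2)
    return x_hat, P, residual, S_res
```
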
C. Fusion Stage

We can evaluate the likelihood $L_k^i$ of each mode at time $k$ using

$$L_k^i = p[\tilde{z}_k^i \mid m_k = i, Z^{k-1}] = \mathcal{N}(\tilde{z}_k^i; 0, S_k^i) \qquad (16)$$

and so the updated mode probability is given by

$$\gamma_k^i = \frac{\gamma_{k \mid k-1}^{i} L_k^i}{\sum_{j=1}^{S} \gamma_{k \mid k-1}^{j} L_k^j}. \qquad (17)$$

Finally, the overall estimate and overall covariance are computed by using the following equations:

$$\hat{\mathbf{x}}_k = \sum_{i=1}^{S} \hat{\mathbf{x}}_k^i\, \gamma_k^i \qquad (18)$$

and

$$\mathbf{P}_k = \sum_{i=1}^{S} \left[ \mathbf{P}_k^i + (\hat{\mathbf{x}}_k - \hat{\mathbf{x}}_k^i)(\hat{\mathbf{x}}_k - \hat{\mathbf{x}}_k^i)^T \right] \gamma_k^i. \qquad (19)$$

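The fusion stage (16)-(19) then combines the per-mode outputs. The sketch below assumes scalar residuals, as in the example of Section V, and again uses names of my own choosing.

```python
import numpy as np

def fusion_stage(x_hat, P, residual, S_res, gamma_pred):
    """IMM fusion, eqs. (16)-(19), for scalar measurement residuals.

    x_hat:      (S, n) per-mode estimates x_hat^i_k
    P:          (S, n, n) per-mode covariances P^i_k
    residual:   (S,) per-mode residuals z_tilde^i_k
    S_res:      (S,) per-mode residual variances S^i_k
    gamma_pred: (S,) predicted mode probabilities gamma^i_{k|k-1}
    """
    # Mode likelihoods, eq. (16): L^i_k = N(z_tilde^i_k; 0, S^i_k).
    L = np.exp(-0.5 * residual ** 2 / S_res) / np.sqrt(2.0 * np.pi * S_res)
    # Updated mode probabilities, eq. (17).
    gamma = gamma_pred * L
    gamma = gamma / gamma.sum()
    # Overall estimate, eq. (18).
    x_fused = gamma @ x_hat
    # Overall covariance, eq. (19), including the spread of the per-mode means.
    P_fused = np.zeros_like(P[0])
    for i in range(len(gamma)):
        d = (x_fused - x_hat[i])[:, None]
        P_fused += gamma[i] * (P[i] + d @ d.T)
    return x_fused, P_fused, gamma
```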


V. SIMULATION RESULTS

We present some computer simulations that illustrate the performance of the proposed algorithm. Assume that a target maneuvers along the X-axis and that the motion equation can switch between an accelerating model and a constant velocity model; in other words, $S = \{1, 2\}$.

When $m_{k+1} = 1$, the motion equation (1) is the accelerating model,

$$\mathbf{x}_{k+1} = \mathbf{F}(1)\mathbf{x}_k + \mathbf{G}(1)\mathbf{w}_k \qquad (20)$$

where $\mathbf{x}_k = [x_k, \dot{x}_k, \ddot{x}_k]^T$ is the state vector, and $\mathbf{F}(1)$ and $\mathbf{G}(1)$ are known matrices given by

$$\mathbf{F}(1) = \begin{pmatrix} 1 & T & T^2/2 \\ 0 & 1 & T \\ 0 & 0 & 1 \end{pmatrix}, \qquad \mathbf{G}(1) = \begin{pmatrix} T^2/2 \\ T \\ 1 \end{pmatrix}$$

where $T = 1$ is the sampling interval. When $m_{k+1} = 2$, the motion equation becomes the constant velocity model,

$$\mathbf{x}_{k+1} = \mathbf{F}(2)\mathbf{x}_k + \mathbf{G}(2)\mathbf{w}_k \qquad (21)$$

where $\mathbf{F}(2)$ and $\mathbf{G}(2)$ are also known matrices given by

$$\mathbf{F}(2) = \begin{pmatrix} 1 & T & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{pmatrix}, \qquad \mathbf{G}(2) = \begin{pmatrix} T^2/2 \\ T \\ 0 \end{pmatrix}.$$

The process noise $\mathbf{w}_k$ has covariance $\mathbf{Q} = 0.1$. The measurement equation is

$$z_k = \sqrt{(x_k - 1)^2 + 1} + v_k \qquad (22)$$

where the measurement noise variance is $R = 1$. The initial state is $\mathbf{x}_0 = [1, 1, 1]^T$ and the associated initial covariance is $\mathbf{P}_0 = I$, where $I$ is the identity matrix. In addition, the mode transition matrix is set to

$$\Pi = \begin{pmatrix} 0.98 & 0.02 \\ 0.02 & 0.98 \end{pmatrix}.$$

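For reference, this setup maps directly onto code. The sketch below only defines the models and parameters stated above; the driver loop that calls the three stages of Section IV at every time step is omitted, and the variable names are my own.

```python
import numpy as np

T = 1.0  # sampling interval

# Mode 1: accelerating model, eq. (20).
F1 = np.array([[1.0, T, T**2 / 2],
               [0.0, 1.0, T],
               [0.0, 0.0, 1.0]])
G1 = np.array([[T**2 / 2], [T], [1.0]])

# Mode 2: constant velocity model, eq. (21).
F2 = np.array([[1.0, T, 0.0],
               [0.0, 1.0, 0.0],
               [0.0, 0.0, 0.0]])
G2 = np.array([[T**2 / 2], [T], [0.0]])

Q = np.array([[0.1]])   # process noise covariance
R = 1.0                 # measurement noise variance

def H(x):
    """Nonlinear measurement function of eq. (22)."""
    return np.sqrt((x[0] - 1.0) ** 2 + 1.0)

x0 = np.array([1.0, 1.0, 1.0])   # initial state
P0 = np.eye(3)                   # initial covariance
Pi = np.array([[0.98, 0.02],
               [0.02, 0.98]])    # mode transition matrix
```

At each time step, mixing_stage, one gpf_stage call per mode, and fusion_stage from the earlier sketches would then be invoked in turn.
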
To show the performance of the IMMGPF algorithm, we compare its estimates with the true values in terms of position, velocity and acceleration in Figure 1. From this figure, the proposed algorithm maintains admirable tracking performance as the number of iterations increases. Figure 2 gives the curves of the position error versus the number of iterations for $R = 1$ and $R = 0.1$, and shows that the position error of the proposed algorithm becomes smaller as the measurement noise variance $R$ decreases. From a mode probability perspective, Figure 3 shows how the updated mode probabilities of the proposed algorithm evolve as the number of iterations increases. During the tracking procedure, the probability of each mode rises and falls in turn, which implies that the two modes work in collaboration with each other.

Fig. 1. The target's position, velocity and acceleration versus number of iterations.

Fig. 2. The position error versus number of iterations for R = 1 and R = 0.1.

Fig. 3. The updated mode probability versus number of iterations.

VI. CONCLUSION

In this paper, the interacting multiple model Gaussian particle filter is presented, which avoids degeneracy in the effective number of particles without resampling. The proposed algorithm consists of three stages: a mixing stage, a Gaussian particle filtering stage, and a fusion stage. In terms of the position, the velocity, the acceleration and the updated mode probabilities, simulation results show the effectiveness of the proposed algorithm.

REFERENCES

[1] H.A.P. Blom and Y. Bar-Shalom. The interacting multiple model algorithm for systems with Markovian switching coefficients. IEEE Trans. on Automatic Control, vol. 33, pp. 780-783, Aug. 1988.

[2] E. Mazor, A. Averbuch, Y. Bar-Shalom, and J. Dayan. Interacting multiple model methods in target tracking: a survey. IEEE Trans. on Aerospace and Electronic Systems, vol. 34, pp. 103-123, Jan. 1998.
[3] M.S. Arulampalam, S. Maskell, N. Gordon, and T. Clapp. A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking. IEEE Trans. on Signal Processing, vol. 50, pp. 174-188, Feb. 2002.
[4] F. Gustafsson, F. Gunnarsson, N. Bergman, et al. Particle filters for positioning, navigation, and tracking. IEEE Trans. on Signal Processing, vol. 50, pp. 425-437, Feb. 2002.
[5] J.H. Kotecha and P.M. Djurić. Gaussian particle filtering. IEEE Trans. on Signal Processing, vol. 51, pp. 2592-2600, Oct. 2003.
[6] S. McGinnity and G.W. Irwin. Multiple model bootstrap filter for maneuvering target tracking. IEEE Trans. on Aerospace and Electronic Systems, vol. 36, pp. 1006-1012, Jul. 2000.
[7] Y. Boers and J.N. Driessen. Interacting multiple model particle filter. IEE Proc. Radar Sonar Navig., vol. 150, pp. 344-349, Oct. 2003.
