From now on we further assume that the TMC process $t$ is also Gaussian. More precisely, we assume that
$$
\underbrace{\begin{bmatrix} x^*_{n+1} \\ y_n \end{bmatrix}}_{t_{n+1}}
=
\underbrace{\begin{bmatrix} F^{x^*,x^*}_n & F^{x^*,y}_n \\ F^{y,x^*}_n & F^{y,y}_n \end{bmatrix}}_{F_n}
\begin{bmatrix} x^*_n \\ y_{n-1} \end{bmatrix}
+
\underbrace{\begin{bmatrix} w^{x^*}_n \\ w^y_n \end{bmatrix}}_{w_n},
\qquad (13)
$$
in which $w = \{w_n\}_{n \in \mathbb{N}}$ is independent and independent of $t_0$, $y_{-1} = 0$, and
$$
w_n \sim \mathcal{N}\Big(0,\; \underbrace{\begin{bmatrix} Q^{x^*,x^*}_n & Q^{x^*,y}_n \\ Q^{y,x^*}_n & Q^{y,y}_n \end{bmatrix}}_{Q_n}\Big). \qquad (14)
$$
Then $p(x^*_n \,|\, y_{0:N})$ is also Gaussian, and computing this pdf amounts to computing its parameters. Let us thus introduce the following notation. For $0 \le i \le j \le N$, let $y_{i:j} = \{y_m\}_{i \le m \le j}$, and let
$$
p(x^*_n \,|\, y_{i:j}) \sim \mathcal{N}(\hat{x}^*_{n|i:j},\, P^{x^*,x^*}_{n|i:j}). \qquad (15)
$$

4.1. The Triplet RTS algorithm

Proposition 3 (The Triplet RTS algorithm) Let (13) hold. Let
$$
K^*_{n|0:N} = P^{x^*,x^*}_{n|0:n} \big[ F^{x^*,x^*}_n - Q^{x^*,y}_n (Q^{y,y}_n)^{-1} F^{y,x^*}_n \big]^T \big( P^{x^*,x^*}_{n+1|0:n} \big)^{-1}. \qquad (16)
$$

On the other hand, from (15),
$$
p(x^*_n \,|\, y_{0:n}) \sim \mathcal{N}(\hat{x}^*_{n|0:n},\, P^{x^*,x^*}_{n|0:n}).
$$
Using (20) and Proposition 7, we get
$$
p(x^*_n, x^*_{n+1} \,|\, y_{0:n}) \sim \mathcal{N}\Big(
\begin{bmatrix} \hat{x}^*_{n|0:n} \\ \hat{x}^*_{n+1|0:n} \end{bmatrix},
\begin{bmatrix} P^{x^*,x^*}_{n|0:n} & P^{x^*,x^*}_{n|0:n} A_n^T \\ A_n P^{x^*,x^*}_{n|0:n} & P^{x^*,x^*}_{n+1|0:n} \end{bmatrix}
\Big),
$$
with $\hat{x}^*_{n+1|0:n} = A_n \hat{x}^*_{n|0:n} + b_n$ and $P^{x^*,x^*}_{n+1|0:n} = \tilde{Q}_n + A_n P^{x^*,x^*}_{n|0:n} A_n^T$, where $\tilde{Q}_n = Q^{x^*,x^*}_n - Q^{x^*,y}_n (Q^{y,y}_n)^{-1} Q^{y,x^*}_n$. Using Proposition 6, we get
$$
p(x^*_n \,|\, x^*_{n+1}, y_{0:n}) \sim \mathcal{N}\big( \hat{x}^*_{n|0:n} + K^*_{n|0:N} (x^*_{n+1} - \hat{x}^*_{n+1|0:n}),\;
P^{x^*,x^*}_{n|0:n} - K^*_{n|0:N} P^{x^*,x^*}_{n+1|0:n} K^{*T}_{n|0:N} \big), \qquad (21)
$$
in which $K^*_{n|0:N}$ is given by (16). It remains to compute equation (6). Using (15), (21) and Proposition 7, we finally get (17) and (18).

4.2. The Triplet Two-Filter algorithm

Proposition 4 (The Triplet Two-Filter algorithm) Let (13) hold. Then (8)–(10) reduce to simpler recursions, from which we deduce (31), (32) and (33). Using Proposition 6, we get (34), (35) and (36).
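Both algorithms pair a forward Kalman filtering pass with a backward pass, and the gain (16) has the same structure as the classical Rauch–Tung–Striebel gain $P_{n|0:n} A_n^T (P_{n+1|0:n})^{-1}$. As a point of reference, here is a minimal numerical sketch of the classical filter/RTS pair for an ordinary linear-Gaussian model $x_{n+1} = A x_n + w_n$, $y_n = H x_n + v_n$; this is a deliberate simplification, not the triplet model (13), and all function names and numerical values are ours:

```python
import numpy as np

def kalman_filter(ys, A, H, Q, R, m0, P0):
    """Forward pass: filtered moments of p(x_n | y_{0:n}) for the
    illustrative model x_{n+1} = A x_n + w_n, y_n = H x_n + v_n."""
    mp, Pp = m0, P0                     # prior on x_0
    m_f, P_f = [], []
    for yn in ys:
        # measurement update via the Kalman gain
        S = H @ Pp @ H.T + R
        K = Pp @ H.T @ np.linalg.inv(S)
        m = mp + K @ (yn - H @ mp)
        P = Pp - K @ S @ K.T
        m_f.append(m); P_f.append(P)
        # time update: predict x_{n+1}
        mp, Pp = A @ m, A @ P @ A.T + Q
    return m_f, P_f

def rts_smoother(A, Q, m_f, P_f):
    """Backward pass with gain K_n = P_{n|0:n} A^T (P_{n+1|0:n})^{-1},
    the classical analogue of the triplet gain (16)."""
    m_s, P_s = [m_f[-1]], [P_f[-1]]
    for n in range(len(m_f) - 2, -1, -1):
        Pp = A @ P_f[n] @ A.T + Q       # one-step predicted covariance
        K = P_f[n] @ A.T @ np.linalg.inv(Pp)
        m_n = m_f[n] + K @ (m_s[0] - A @ m_f[n])
        P_n = P_f[n] + K @ (P_s[0] - Pp) @ K.T
        m_s.insert(0, m_n); P_s.insert(0, P_n)
    return m_s, P_s

# Synthetic scalar run with fixed seed (illustrative values only).
rng = np.random.default_rng(0)
A = np.array([[0.9]]); H = np.array([[1.0]])
Q = np.array([[0.1]]); R = np.array([[0.2]])
x, ys = np.zeros(1), []
for _ in range(40):
    x = A @ x + rng.normal(0.0, np.sqrt(Q[0, 0]), 1)
    ys.append(H @ x + rng.normal(0.0, np.sqrt(R[0, 0]), 1))
m_f, P_f = kalman_filter(ys, A, H, Q, R, np.zeros(1), np.eye(1))
m_s, P_s = rts_smoother(A, Q, m_f, P_f)
```

At $n = N$ the smoothed and filtered moments coincide, and the smoothed covariances are never larger than the filtered ones, exactly as in the triplet recursions above.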
Using Proposition 6, we get
$$
\hat{x}^*_{n|n-1} = \hat{x}^*_n + P^{x^*,y}_n (P^{y,y}_{n-1})^{-1} (y_{n-1} - \hat{y}_{n-1}), \qquad (27)
$$
$$
P^{x^*,x^*}_{n|n-1} = P^{x^*,x^*}_n - P^{x^*,y}_n (P^{y,y}_{n-1})^{-1} P^{y,x^*}_n. \qquad (28)
$$

Let us finally address the computation of $(\hat{x}^*_{n|n-1:N},\, P^{x^*,x^*}_{n|n-1:N})$. These parameters can be computed recursively by the TMC backward Kalman filter:

Proposition 5 (The TMC Backward Kalman filter) Let
$$
\tilde{F}_{n+1} = P^t_n F^T_n (P^t_{n+1})^{-1}
= \begin{bmatrix} \tilde{F}^{x^*,x^*}_{n+1} & \tilde{F}^{x^*,y}_{n+1} \\ \tilde{F}^{y,x^*}_{n+1} & \tilde{F}^{y,y}_{n+1} \end{bmatrix}, \qquad (29)
$$
$$
\tilde{Q}_{n+1} = P^t_n - \tilde{F}_{n+1} P^t_{n+1} \tilde{F}^T_{n+1}
= \begin{bmatrix} \tilde{Q}^{x^*,x^*}_{n+1} & \tilde{Q}^{x^*,y}_{n+1} \\ \tilde{Q}^{y,x^*}_{n+1} & \tilde{Q}^{y,y}_{n+1} \end{bmatrix}. \qquad (30)
$$

Then we have
$$
\hat{x}^*_{n|n:N} = \tilde{F}^{x^*,x^*}_{n+1} [\hat{x}^*_{n+1|n:N} - \hat{x}^*_{n+1}] + \tilde{F}^{x^*,y}_{n+1} [y_n - \hat{y}_n] + \hat{x}^*_n, \qquad (31)
$$
$$
\hat{y}_{n-1|n:N} = \tilde{F}^{y,x^*}_{n+1} [\hat{x}^*_{n+1|n:N} - \hat{x}^*_{n+1}] + \tilde{F}^{y,y}_{n+1} [y_n - \hat{y}_n] + \hat{y}_{n-1}, \qquad (32)
$$
$$
P^{x^*,x^*}_{n|n:N} = \tilde{Q}^{x^*,x^*}_{n+1} + \tilde{F}^{x^*,x^*}_{n+1} P^{x^*,x^*}_{n+1|n:N} (\tilde{F}^{x^*,x^*}_{n+1})^T, \qquad (33)
$$
$$
K^*_{n|n-1:N} = \big[\tilde{Q}^{x^*,y}_{n+1} + \tilde{F}^{x^*,x^*}_{n+1} P^{x^*,x^*}_{n+1|n:N} (\tilde{F}^{y,x^*}_{n+1})^T\big]
\times \big[\tilde{Q}^{y,y}_{n+1} + \tilde{F}^{y,x^*}_{n+1} P^{x^*,x^*}_{n+1|n:N} (\tilde{F}^{y,x^*}_{n+1})^T\big]^{-1}, \qquad (34)
$$
$$
\hat{x}^*_{n|n-1:N} = \hat{x}^*_{n|n:N} + K^*_{n|n-1:N} (y_{n-1} - \hat{y}_{n-1|n:N}), \qquad (35)
$$
$$
P^{x^*,x^*}_{n|n-1:N} = P^{x^*,x^*}_{n|n:N} - K^*_{n|n-1:N} \big[\tilde{Q}^{y,y}_{n+1} + \tilde{F}^{y,x^*}_{n+1} P^{x^*,x^*}_{n+1|n:N} (\tilde{F}^{y,x^*}_{n+1})^T\big] K^{*T}_{n|n-1:N}. \qquad (36)
$$

5. SOME PROPERTIES OF GAUSSIAN R.V.

The derivations in this paper rely on the following results, which are recalled for the convenience of the reader:

Proposition 6 Let
$$
(u_1, u_2) \sim \mathcal{N}\Big( \begin{bmatrix} \mu_1 \\ \mu_2 \end{bmatrix},
\begin{bmatrix} \Sigma_{1,1} & \Sigma_{1,2} \\ \Sigma_{2,1} & \Sigma_{2,2} \end{bmatrix} \Big).
$$
Then conditionally on $u_2$, $u_1 \sim \mathcal{N}\big(\mu_1 + \Sigma_{1,2} \Sigma_{2,2}^{-1} (u_2 - \mu_2),\; \Sigma_{1,1} - \Sigma_{1,2} \Sigma_{2,2}^{-1} \Sigma_{2,1}\big)$.

Proposition 7 Let $u_1 \sim \mathcal{N}(\mu_1, \Sigma_1)$ and conditionally on $u_1$, let $u_2 \sim \mathcal{N}(A u_1 + b, \Sigma_{2|1})$. Then
$$
(u_1, u_2) \sim \mathcal{N}\Big( \begin{bmatrix} \mu_1 \\ A \mu_1 + b \end{bmatrix},
\begin{bmatrix} \Sigma_1 & \Sigma_1 A^T \\ A \Sigma_1 & \Sigma_{2|1} + A \Sigma_1 A^T \end{bmatrix} \Big).
$$
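Propositions 6 and 7 are straightforward to implement and cross-check numerically. The sketch below (function names and numerical values are ours, not from the paper) implements Proposition 6 as `condition` and Proposition 7 as `join`; the check verifies that conditioning the joint built by Proposition 7 on $u_1$ recovers exactly $\mathcal{N}(A u_1 + b, \Sigma_{2|1})$.

```python
import numpy as np

def condition(mu1, mu2, S11, S12, S21, S22, u2):
    """Proposition 6: parameters of u1 | u2 for jointly Gaussian (u1, u2)."""
    W = S12 @ np.linalg.inv(S22)
    return mu1 + W @ (u2 - mu2), S11 - W @ S21

def join(mu1, S1, A, b, S2g1):
    """Proposition 7: joint law of (u1, u2) when u2 | u1 ~ N(A u1 + b, S2g1)."""
    mu = np.concatenate([mu1, A @ mu1 + b])
    S = np.block([[S1,     S1 @ A.T],
                  [A @ S1, S2g1 + A @ S1 @ A.T]])
    return mu, S

# Consistency check: build the joint with Proposition 7, then condition
# u2 on u1 with Proposition 6; we must get back N(A u1 + b, S2g1).
mu1 = np.array([1.0, 2.0])
S1 = np.array([[2.0, 0.3], [0.3, 1.0]])
A = np.array([[1.0, 2.0], [0.0, 1.0]])
b = np.array([0.5, -1.0])
S2g1 = np.array([[1.0, 0.0], [0.0, 0.5]])
mu, S = join(mu1, S1, A, b, S2g1)
v = np.array([0.0, 1.0])                       # a value for u1
mean21, cov21 = condition(mu[2:], mu[:2], S[2:, 2:], S[2:, :2],
                          S[:2, 2:], S[:2, :2], v)
```

This round trip is exactly the pattern used in the proofs of Propositions 3–5: Proposition 7 assembles a joint Gaussian from a marginal and a conditional, and Proposition 6 slices it back.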
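The backward gains of Proposition 5 are built from the prior moments $P^t_n$ of the triplet chain, which under (13)–(14) satisfy the recursion $P^t_{n+1} = F_n P^t_n F^T_n + Q_n$. The sketch below (with an assumed time-invariant $F$ and $Q$ whose values are illustrative placeholders, not values from the paper) iterates this recursion and forms $\tilde{F}_{n+1}$ and $\tilde{Q}_{n+1}$ as in (29)–(30), checking that each $\tilde{Q}_{n+1}$ is a valid (symmetric positive semi-definite) covariance.

```python
import numpy as np

# Prior covariance recursion for the chain t_{n+1} = F t_n + w_n:
#   P^t_{n+1} = F P^t_n F^T + Q,
# then the backward-filter matrices of (29)-(30):
#   Ftilde_{n+1} = P^t_n F^T (P^t_{n+1})^{-1}
#   Qtilde_{n+1} = P^t_n - Ftilde_{n+1} P^t_{n+1} Ftilde_{n+1}^T
F = np.array([[0.8, 0.1],
              [0.5, 0.2]])          # stable: spectral radius < 1
Q = np.array([[0.2, 0.05],
              [0.05, 0.3]])
P = np.eye(2)                       # P^t_0
Ftil, Qtil = [], []
for _ in range(20):
    P_next = F @ P @ F.T + Q
    Ft = P @ F.T @ np.linalg.inv(P_next)   # (29)
    Qt = P - Ft @ P_next @ Ft.T            # (30)
    Ftil.append(Ft); Qtil.append(Qt)
    P = P_next
```

That $\tilde{Q}_{n+1} \succeq 0$ follows because it is the Schur complement of $P^t_{n+1}$ in the (positive semi-definite) joint covariance of $(t_{n+1}, t_n)$.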