
Method 1: Constructing the Covariance Matrix

From the state equation of the state space model, we have $x_k = x_{k-1} + \epsilon_k$, with $\epsilon_k \sim N(0, \sigma^2)$. Since $(x_1, \dots, x_K)$ forms a Markov chain, the joint probability of all $x_k$ is
$$p(x_1, \dots, x_K) = p(x_K \mid x_{K-1}) \cdots p(x_2 \mid x_1)\, p(x_1).$$

Taking logs,
$$\log p(x_1, \dots, x_K) = \log p(x_1) - \frac{K-1}{2}\log(2\pi\sigma^2) - \sum_{k=2}^{K} \frac{(x_k - x_{k-1})^2}{2\sigma^2}.$$

Since $x = (x_1, \dots, x_K)$ is jointly Gaussian, denote the joint density for $x$ as $x \sim N(b, B)$, so that

$$\log p(x) = -\log\!\left(\sqrt{(2\pi)^K |B|}\right) - \frac{1}{2}(x - b)^\top B^{-1} (x - b) = \log p(x_1) - \frac{K-1}{2}\log(2\pi\sigma^2) - \sum_{k=2}^{K} \frac{(x_k - x_{k-1})^2}{2\sigma^2}.$$

Taking the second derivatives with respect to x on both sides we get,


$$B^{-1} = -H\!\left(-\sum_{k=2}^{K}\frac{(x_k - x_{k-1})^2}{2\sigma^2} + \log p(x_1)\right),$$

where $H$ is the Hessian matrix

$$
H = \begin{bmatrix}
\dfrac{\partial^2 f}{\partial x_1^2} & \dfrac{\partial^2 f}{\partial x_1 \partial x_2} & \cdots & \dfrac{\partial^2 f}{\partial x_1 \partial x_K} \\[4pt]
\dfrac{\partial^2 f}{\partial x_2 \partial x_1} & \dfrac{\partial^2 f}{\partial x_2^2} & \cdots & \dfrac{\partial^2 f}{\partial x_2 \partial x_K} \\[4pt]
\vdots & \vdots & \ddots & \vdots \\[4pt]
\dfrac{\partial^2 f}{\partial x_K \partial x_1} & \dfrac{\partial^2 f}{\partial x_K \partial x_2} & \cdots & \dfrac{\partial^2 f}{\partial x_K^2}
\end{bmatrix}.
$$

Solving the Hessian matrix (treating $\log p(x_1)$ as flat, so its second derivative vanishes), we get

$$
B^{-1} = \frac{1}{\sigma^2}\begin{bmatrix}
1 & -1 & & & 0 \\
-1 & 2 & -1 & & \\
 & \ddots & \ddots & \ddots & \\
 & & -1 & 2 & -1 \\
0 & & & -1 & 1
\end{bmatrix},
$$

$b = [0, \dots, 0]^\top$.
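As a sanity check, the tridiagonal precision matrix can be built and verified numerically. The sketch below (assuming NumPy; `K` and `sigma2` are arbitrary illustration values) confirms that the quadratic form $x^\top B^{-1} x$ reproduces the sum of squared increments $\sum_{k=2}^{K}(x_k - x_{k-1})^2/\sigma^2$ from the log density.

```python
import numpy as np

def precision_matrix(K, sigma2=1.0):
    """Tridiagonal B^{-1} for the random-walk prior x_k = x_{k-1} + eps_k."""
    Binv = 2.0 * np.eye(K)
    Binv[0, 0] = Binv[-1, -1] = 1.0          # endpoints enter only one increment
    i = np.arange(K - 1)
    Binv[i, i + 1] = Binv[i + 1, i] = -1.0   # off-diagonals
    return Binv / sigma2

K, sigma2 = 6, 0.5
Binv = precision_matrix(K, sigma2)

rng = np.random.default_rng(0)
x = rng.standard_normal(K)

# x^T B^{-1} x should equal sum_k (x_k - x_{k-1})^2 / sigma^2.
quad = x @ Binv @ x
increments = np.sum(np.diff(x) ** 2) / sigma2
print(np.allclose(quad, increments))  # True
```

This identity holds because $B^{-1}$ is $1/\sigma^2$ times the graph Laplacian of a path, whose quadratic form is exactly the sum of squared differences between neighbors.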

We now have the prior density for $x$, from which we can derive the full conditional density for $x \mid w, N_{0,K}$:
$$x \mid w, N_{0,K} \sim N(m(w), \Sigma(w)),$$
where
$$\Sigma(w) = \left(\operatorname{diag}(w) + B^{-1}\right)^{-1}, \qquad m(w) = \Sigma(w)\left(N_{0,K} - \tfrac{1}{2}\right).$$
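For positive weights $w$ the matrix $\operatorname{diag}(w) + B^{-1}$ is symmetric positive definite, so the full-conditional moments $\Sigma(w) = (\operatorname{diag}(w) + B^{-1})^{-1}$ and $m(w) = \Sigma(w)(N_{0,K} - \tfrac{1}{2})$ are well defined. A minimal NumPy sketch, with made-up values for $w$ and the count vector $N_{0,K}$ (and $\sigma^2 = 1$):

```python
import numpy as np

# Tridiagonal random-walk precision B^{-1} with sigma^2 = 1.
K = 6
Binv = 2.0 * np.eye(K)
Binv[0, 0] = Binv[-1, -1] = 1.0
i = np.arange(K - 1)
Binv[i, i + 1] = Binv[i + 1, i] = -1.0

# Hypothetical positive weights w and 0/1 count vector N_{0,K}.
rng = np.random.default_rng(1)
w = rng.gamma(2.0, 1.0, size=K)
counts = rng.integers(0, 2, size=K).astype(float)

P = np.diag(w) + Binv                 # conditional precision, SPD for w > 0
np.linalg.cholesky(P)                 # succeeds, confirming positive definiteness
Sigma = np.linalg.inv(P)              # Sigma(w)
m = np.linalg.solve(P, counts - 0.5)  # m(w) = Sigma(w) (N_{0,K} - 1/2)

print(np.allclose(Sigma @ (counts - 0.5), m))  # True
```

Using `np.linalg.solve` for the mean avoids forming $\Sigma(w)$ explicitly, which is the usual practice when only $m(w)$ (or a sample) is needed.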
