
Gaussian Mixture Model by Using EM

Shubing Wang∗
April 17, 2004

1. 2-component Gaussian Mixture Model. We consider the case that all 5 parameters
\[
\theta = (p, \mu_1, \sigma_1, \mu_2, \sigma_2)^T
\]
are unknown. By using the EM algorithm:

• Estimation Step: define $\tilde{p}_i = P(y_i \sim f_1 \mid \Theta)$; then
\[
\tilde{p}_i = \frac{p\, f_1(y_i \mid \Theta)}{p\, f_1(y_i \mid \Theta) + (1 - p)\, f_2(y_i \mid \Theta)}.
\]
So
\[
Q(\Theta, \Theta') = \sum_i \Bigl\{ \tilde{p}_i \Bigl[ \log p' - \log\sqrt{2\pi} - \log\sigma_1' - \frac{(Y_i - \mu_1')^2}{2\sigma_1'^2} \Bigr] + (1 - \tilde{p}_i) \Bigl[ \log(1 - p') - \log\sqrt{2\pi} - \log\sigma_2' - \frac{(Y_i - \mu_2')^2}{2\sigma_2'^2} \Bigr] \Bigr\}
\]

Figure 1: Left: the MRI image of a bone slice (mid-sagittal brain image). Right: histogram of the image intensity.

• Maximization Step:
\[
p' = \frac{\sum_i \tilde{p}_i}{n}
\]
\[
\mu_1' = \frac{\sum_i \tilde{p}_i Y_i}{\sum_i \tilde{p}_i}
\]
\[
\sigma_1' = \sqrt{\frac{\sum_i \tilde{p}_i (Y_i - \mu_1')^2}{\sum_i \tilde{p}_i}}
\]
\[
\mu_2' = \frac{\sum_i (1 - \tilde{p}_i) Y_i}{\sum_i (1 - \tilde{p}_i)}
\]
\[
\sigma_2' = \sqrt{\frac{\sum_i (1 - \tilde{p}_i)(Y_i - \mu_2')^2}{\sum_i (1 - \tilde{p}_i)}}
\]

Figure 2: Comparison of the original data and the fitted Gaussian Mixture Model.
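The two steps above can be sketched in a few lines of NumPy. This is an illustrative implementation, not the author's original code; the function name and interface are my own.

```python
import numpy as np

def em_2component(y, p, mu1, s1, mu2, s2, n_iter=100):
    """EM for a 2-component Gaussian mixture; y is a 1-D array of intensities.

    Returns the updated parameters (p, mu1, s1, mu2, s2)."""
    for _ in range(n_iter):
        # E-step: posterior probability that y_i came from component 1
        f1 = np.exp(-(y - mu1) ** 2 / (2 * s1 ** 2)) / (np.sqrt(2 * np.pi) * s1)
        f2 = np.exp(-(y - mu2) ** 2 / (2 * s2 ** 2)) / (np.sqrt(2 * np.pi) * s2)
        pt = p * f1 / (p * f1 + (1 - p) * f2)
        # M-step: the closed-form updates from the Maximization Step above
        p = pt.mean()
        mu1 = (pt * y).sum() / pt.sum()
        s1 = np.sqrt((pt * (y - mu1) ** 2).sum() / pt.sum())
        mu2 = ((1 - pt) * y).sum() / (1 - pt).sum()
        s2 = np.sqrt(((1 - pt) * (y - mu2) ** 2).sum() / (1 - pt).sum())
    return p, mu1, s1, mu2, s2
```

The initial values of the five parameters are passed in by the caller, which matches the point made in Section 3 that the initial guess should be read off the histogram.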
2. M-component Gaussian Mixture Model. It can be derived similarly by using EM, with the Maximization step:
\[
p_j' = \frac{\sum_i \tilde{p}_{ji}}{n}, \quad 1 \le j \le M
\]
\[
\mu_j' = \frac{\sum_i \tilde{p}_{ji} Y_i}{\sum_i \tilde{p}_{ji}}, \quad \text{if } 1 \le j < M
\]
\[
\sigma_j' = \sqrt{\frac{\sum_i \tilde{p}_{ji} (Y_i - \mu_j')^2}{\sum_i \tilde{p}_{ji}}}, \quad \text{if } 1 \le j < M
\]
\[
\mu_M' = \frac{\sum_i \bigl(1 - \sum_{j=1}^{M-1} \tilde{p}_{ji}\bigr) Y_i}{\sum_i \bigl(1 - \sum_{j=1}^{M-1} \tilde{p}_{ji}\bigr)}
\]
\[
\sigma_M' = \sqrt{\frac{\sum_i \bigl(1 - \sum_{j=1}^{M-1} \tilde{p}_{ji}\bigr)(Y_i - \mu_M')^2}{\sum_i \bigl(1 - \sum_{j=1}^{M-1} \tilde{p}_{ji}\bigr)}}
\]

3. Segmentation by Using Gaussian Mixture Model. We start with the data shown in Figure 1. One should always compute the histogram as well, since the initial guess is crucial for the EM algorithm: a good initial guess markedly improves the performance of the procedure, and the histogram provides the clues for choosing one. One can then fit the 2-component Gaussian Mixture Model shown in Figure 2; the resulting segmentation is shown in Figure 3. Intuitively, a 3-component Gaussian Mixture Model will give a better result. Figures 4 and 5 show the result of segmentation using the 3-component Gaussian Mixture Model.
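The M-component updates of Section 2 vectorize naturally if we carry all M responsibilities $\tilde{p}_{ji}$ and let each row sum to 1; the last column then equals the paper's $1 - \sum_{j=1}^{M-1} \tilde{p}_{ji}$, so the special-cased M-th update comes out of the same formula. A sketch (names and interface are my own):

```python
import numpy as np

def em_m_component(y, p, mu, sigma, n_iter=100):
    """EM for an M-component Gaussian mixture.

    y: 1-D data array; p, mu, sigma: length-M arrays of initial parameters."""
    y = y[:, None]  # shape (n, 1), broadcasts against the (M,) parameter arrays
    for _ in range(n_iter):
        # E-step: responsibilities p~_{ji}; rows sum to 1, so column M-1
        # plays the role of 1 - sum_{j<M} p~_{ji} in the text
        f = np.exp(-(y - mu) ** 2 / (2 * sigma ** 2)) / (np.sqrt(2 * np.pi) * sigma)
        r = p * f
        r /= r.sum(axis=1, keepdims=True)
        # M-step: the closed-form updates above, vectorized over j
        p = r.mean(axis=0)
        mu = (r * y).sum(axis=0) / r.sum(axis=0)
        sigma = np.sqrt((r * (y - mu) ** 2).sum(axis=0) / r.sum(axis=0))
    return p, mu, sigma
```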
∗ Email: wang6@wisc.edu
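The text does not spell out the segmentation rule, but the per-component panels in Figures 3 and 5 are consistent with labeling each pixel by its posterior probabilities under the fitted mixture. A sketch under that assumption, assigning each pixel to the component with the largest posterior (function name is mine):

```python
import numpy as np

def segment(image, p, mu, sigma):
    """Label each pixel with the index of the mixture component that has
    the largest posterior probability (an assumed reading of Figs. 3 and 5)."""
    y = image[..., None].astype(float)        # broadcast against (M,) parameters
    f = np.exp(-(y - mu) ** 2 / (2 * sigma ** 2)) / (np.sqrt(2 * np.pi) * sigma)
    post = p * f
    post /= post.sum(axis=-1, keepdims=True)  # per-pixel posterior over components
    return post.argmax(axis=-1)               # label map with values 0..M-1
```

Displaying `post[..., j]` for each j would reproduce the grayscale posterior maps shown in the figures.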

Figure 3: The two components of the 2-component Gaussian Mixture Model.


Figure 4: The fitted 3-component Gaussian Mixture Model.


Figure 5: The 3 segments of the 3-component Gaussian Mixture Model.
