Mark Stamp
January 18, 2004
1 A simple example
Suppose we want to determine the average annual temperature at a particular location on
earth over a series of years. To make it interesting, suppose the years we are concerned with
lie in the distant past, before thermometers were invented. Since we can’t go back in time,
we instead look for indirect evidence of the temperature.
To simplify the problem, we only consider two annual temperatures, “hot” and “cold”.
Suppose that modern evidence indicates that the probability of a hot year followed by another
hot year is 0.7 and the probability that a cold year is followed by another cold year is 0.6.
We’ll assume that these probabilities held in the distant past as well. The information so
far can be summarized as
          H    C
    H  ( 0.7  0.3 )
    C  ( 0.4  0.6 )                                  (1)
where H is “hot” and C is “cold”.
Also suppose that current research indicates a correlation between the size of tree growth
rings and temperature. For simplicity, we only consider three diﬀerent tree ring sizes, small,
medium and large, or S, M and L. Conceivably, the probabilistic relationship between
temperature and tree ring sizes could be given by
          S    M    L
    H  ( 0.1  0.4  0.5 )
    C  ( 0.7  0.2  0.1 ).                            (2)
For this system, the state is the average annual temperature—either H or C. The transition from one state to the next is a Markov process (of order one), since the next state
depends only on the current state and the ﬁxed probabilities in (1). However, the actual
states are “hidden” since we can’t directly observe the temperature in the past.
Although we can’t observe the state (temperature) we can observe the size of tree rings.
From (2), tree rings provide us with probabilistic information regarding the temperature.
Since the states are hidden, this type of system is known as a Hidden Markov Model (HMM).
Our goal is to make eﬀective and eﬃcient use of the observable information in order to gain
insight into various aspects of the Markov process.
The state transition matrix

    A = ( 0.7  0.3 )
        ( 0.4  0.6 )                                 (3)
comes from (1) and the observation matrix

    B = ( 0.1  0.4  0.5 )
        ( 0.7  0.2  0.1 )                            (4)
is from (2). Perhaps there is additional evidence that the initial state distribution, denoted by π, is

    π = ( 0.6  0.4 ).                                (5)
The matrices π, A and B are row stochastic, meaning that each element is a probability
and the elements of each row sum to 1.
Now consider a particular four-year period of interest where we observe the series of tree
rings S, M, S, L. Letting 0 represent S, 1 represent M and 2 represent L, this observation
sequence is
O = (0, 1, 0, 2). (6)
We might want to determine the most likely state sequence of the Markov process given
the observations (6). This is not quite as clear-cut as it seems, since there are different
possible interpretations of “most likely”. On the one hand, we could reasonably deﬁne
most likely as the state sequence with the highest probability from among all possible state
sequences of length four. Dynamic programming (DP) eﬃciently ﬁnds this particular most
likely solution. On the other hand, we might reasonably deﬁne most likely as the state
sequence that maximizes the expected number of correct states. HMMs ﬁnd this most likely
sequence.
The DP solution and the HMM solution are not necessarily the same. For example, the
DP solution must have valid state transitions, while this is not necessarily the case for the
HMM solution. And even if all state transitions are valid, the HMM solution can still differ
from the DP solution (as we illustrate in an example below).
Next, we present one of the most challenging aspects of HMMs—the notation. Then
we discuss the three fundamental problems related to HMMs and give algorithms for their
efficient solutions. We also consider some critical computational issues that must be addressed
when writing any HMM computer program. We conclude with a substantial example that
does not require any specialized knowledge, yet nicely illustrates the strength of the HMM
approach. Rabiner [1] is the best source for further introductory information on HMMs.
2 Notation
Let

    T = the length of the observation sequence
    N = the number of states in the model
    M = the number of observation symbols
    Q = {q_0, q_1, . . . , q_{N−1}} = the states of the Markov process
    V = {0, 1, . . . , M−1} = the set of possible observations
    A = the state transition probabilities
    B = the observation probability matrix
    π = the initial state distribution
    O = (O_0, O_1, . . . , O_{T−1}) = the observation sequence.

The observations are always denoted by {0, 1, . . . , M−1}, since this simplifies the notation
with no loss in generality. Then O_i ∈ V for i = 0, 1, . . . , T−1.
A generic hidden Markov model is illustrated in Figure 1, where the X_i are the hidden
states and all other notation is as given above. The Markov process—which is hidden behind
the dashed line—is determined by the initial state X_0 and the A matrix. We are only able
to observe the O_i, which are related to the actual states of the Markov process by the
matrices B and A.
    Markov process:    X_0 --A--> X_1 --A--> X_2 --A--> · · · --A--> X_{T−1}
                        |          |          |                       |
                        B          B          B                       B
                        ↓          ↓          ↓                       ↓
    Observations:      O_0        O_1        O_2        · · ·        O_{T−1}

Figure 1: Hidden Markov Model
For the temperature example of the previous section—with the observation sequence
given in (6)—we have T = 4, N = 2, M = 3, Q = {H, C}, V = {0, 1, 2} (where we let 0, 1, 2
represent “small”, “medium” and “large” tree rings). In this case, the matrices A, B and π
are given by (3), (4) and (5), respectively.
The matrix A = {a_{ij}} is N×N with

    a_{ij} = P(state q_j at t+1 | state q_i at t)

and A is row stochastic. Note that the probabilities a_{ij} are independent of t. The matrix
B = {b_j(k)} is N×M with

    b_j(k) = P(observation k at t | state q_j at t).

As with A, the matrix B is row stochastic and the probabilities b_j(k) are independent of t.
The unusual notation b_j(k) is standard in the HMM world.
An HMM is deﬁned by A, B and π (and, implicitly, by the dimensions N and M). The
HMM is denoted by λ = (A, B, π).
Consider a generic state sequence of length four

    X = (x_0, x_1, x_2, x_3)

with corresponding observations

    O = (O_0, O_1, O_2, O_3).
Then π_{x_0} is the probability of starting in state x_0. Also, b_{x_0}(O_0) is the probability of initially
observing O_0 and a_{x_0,x_1} is the probability of transiting from state x_0 to state x_1. Continuing,
we see that the probability of the state sequence X is given by

    P(X) = π_{x_0} b_{x_0}(O_0) a_{x_0,x_1} b_{x_1}(O_1) a_{x_1,x_2} b_{x_2}(O_2) a_{x_2,x_3} b_{x_3}(O_3).    (7)
Consider the temperature example in Section 1 with observation sequence O = (0, 1, 0, 2),
as given in (6). Using (7) we can compute, say,
P(HHCC) = 0.6(0.1)(0.7)(0.4)(0.3)(0.7)(0.6)(0.1) = 0.000212.
Similarly, we can compute the probability of each of the possible state sequences of length
four, assuming the ﬁxed observation sequence (6). We have listed these results in Table 1,
where the probabilities in the last column are normalized so that they sum to 1.
To ﬁnd the optimal state sequence in the dynamic programming (DP) sense, we simply
choose the sequence with the highest probability, namely, CCCH. To ﬁnd the optimal
sequence in the HMM sense, we choose the most probable symbol at each position. To this
end we sum the probabilities in Table 1 that have an H in the ﬁrst position. Doing so, we ﬁnd
the (normalized) probability of H in the ﬁrst position is 0.18817 and hence the probability
of C in the ﬁrst position is 0.81183. The HMM therefore chooses the ﬁrst element of the
optimal sequence to be C. We repeat this for each element of the sequence, obtaining the
probabilities in Table 2.
From Table 2 we ﬁnd that the optimal sequence—in the HMM sense—is CHCH. Note
that in this example, the optimal DP sequence diﬀers from the optimal HMM sequence, even
though all state transitions are valid.
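Both notions of "most likely" can be checked by direct enumeration. The following sketch (ours, in Python; the model is hard-coded from (3), (4) and (5), and all names are our own) scores every length-four state sequence with (7) and then reads off both answers:

```python
from itertools import product

# Temperature model of Section 1: states H and C, observations S = 0, M = 1, L = 2.
pi = {"H": 0.6, "C": 0.4}
A  = {("H", "H"): 0.7, ("H", "C"): 0.3, ("C", "H"): 0.4, ("C", "C"): 0.6}
B  = {"H": [0.1, 0.4, 0.5], "C": [0.7, 0.2, 0.1]}
O  = [0, 1, 0, 2]                     # the observation sequence (6)

def seq_prob(X):
    # Equation (7): pi_{x0} b_{x0}(O_0) a_{x0,x1} b_{x1}(O_1) ...
    p = pi[X[0]] * B[X[0]][O[0]]
    for t in range(1, len(O)):
        p *= A[(X[t - 1], X[t])] * B[X[t]][O[t]]
    return p

probs = {X: seq_prob(X) for X in product("HC", repeat=len(O))}

# DP-sense "most likely": the single highest-probability sequence.
dp_best = max(probs, key=probs.get)

# HMM-sense "most likely": the most probable state at each position.
hmm_best = tuple(
    max("HC", key=lambda s: sum(p for X, p in probs.items() if X[t] == s))
    for t in range(len(O))
)
```

Here dp_best comes out as ('C', 'C', 'C', 'H') and hmm_best as ('C', 'H', 'C', 'H'), matching Tables 1 and 2.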
3 The three problems
3.1 Problem 1
Given the model λ = (A, B, π) and a sequence of observations O, find P(O | λ). Here, we
want to determine the likelihood of the observed sequence O, given the model.
state     probability     normalized probability
HHHH .000412 .042743
HHHC .000035 .003664
HHCH .000706 .073274
HHCC .000212 .021982
HCHH .000050 .005234
HCHC .000004 .000449
HCCH .000302 .031403
HCCC .000091 .009421
CHHH .001098 .113982
CHHC .000094 .009770
CHCH .001882 .195398
CHCC .000564 .058619
CCHH .000470 .048849
CCHC .000040 .004187
CCCH .002822 .293096
CCCC .000847 .087929
Table 1: State sequence probabilities
element
0 1 2 3
P(H) 0.188170 0.519432 0.228878 0.803979
P(C) 0.811830 0.480568 0.771122 0.196021
Table 2: HMM probabilities
3.2 Problem 2
Given λ = (A, B, π) and an observation sequence O, ﬁnd an optimal state sequence for the
underlying Markov process. In other words, we want to uncover the hidden part of the
Hidden Markov Model. This type of problem is discussed in Section 1.
3.3 Problem 3
Given an observation sequence O and the dimensions N and M, ﬁnd the model λ = (A, B, π)
that maximizes the probability of O. This can be viewed as training the model to best ﬁt
the observed data. Alternatively, we can view this as a (discrete) hill climb on the parameter
space represented by A, B and π.
3.4 Discussion
Consider speech recognition—which happens to be one of the best-known applications of
HMMs. We can use the solution to Problem 3 to train an HMM, say λ_0, to recognize the
spoken word “no” and train another HMM, say λ_1, to recognize the spoken word “yes”.
Then given an unknown spoken word, we can use the solution to Problem 1 to score this
word against λ_0 and also λ_1 to determine whether it is more likely “no”, “yes” or neither. In
this case, we don’t need to solve Problem 2, but it is possible that such a solution—which
uncovers the hidden states—might provide additional insight into the underlying speech
model.
4 The three solutions
4.1 Solution to Problem 1
Let λ = (A, B, π) be a given model and let O = (O_0, O_1, . . . , O_{T−1}) be a series of
observations. We want to find P(O | λ).
Let X = (x_0, x_1, . . . , x_{T−1}) be a state sequence. Then by the definition of B we have

    P(O | X, λ) = b_{x_0}(O_0) b_{x_1}(O_1) · · · b_{x_{T−1}}(O_{T−1})

and by the definition of π and A it follows that

    P(X | λ) = π_{x_0} a_{x_0,x_1} a_{x_1,x_2} · · · a_{x_{T−2},x_{T−1}}.
Since

    P(O, X | λ) = P(O ∩ X ∩ λ) / P(λ)

and

    P(O | X, λ) P(X | λ) = [P(O ∩ X ∩ λ) / P(X ∩ λ)] · [P(X ∩ λ) / P(λ)]
                         = P(O ∩ X ∩ λ) / P(λ)

we have

    P(O, X | λ) = P(O | X, λ) P(X | λ).
By summing over all possible state sequences we obtain

    P(O | λ) = Σ_X P(O, X | λ)
             = Σ_X P(O | X, λ) P(X | λ)
             = Σ_X π_{x_0} b_{x_0}(O_0) a_{x_0,x_1} b_{x_1}(O_1) · · · a_{x_{T−2},x_{T−1}} b_{x_{T−1}}(O_{T−1}).
However, this direct computation is generally infeasible, since it requires about 2T·N^T
multiplications. The strength of the HMM approach derives largely from the fact that there
exists an efficient algorithm to achieve the same result.
To find P(O | λ), the so-called forward algorithm, or α-pass, is used. For t = 0, 1, . . . , T−1
and i = 0, 1, . . . , N−1, define

    α_t(i) = P(O_0, O_1, . . . , O_t, x_t = q_i | λ).    (8)

Then α_t(i) is the probability of the partial observation sequence up to time t, where the
underlying Markov process is in state q_i at time t.
The key result is that the α_t(i) can be computed recursively as follows.

1. Let α_0(i) = π_i b_i(O_0), for i = 0, 1, . . . , N−1.

2. For t = 1, 2, . . . , T−1 and i = 0, 1, . . . , N−1, compute

       α_t(i) = [ Σ_{j=0}^{N−1} α_{t−1}(j) a_{ji} ] b_i(O_t).

3. Then from (8) it is clear that

       P(O | λ) = Σ_{i=0}^{N−1} α_{T−1}(i).

The α-pass computation requires N²T multiplications, as opposed to more than 2T·N^T for
the naïve approach.
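The recursion above translates almost line for line into code. A minimal sketch (ours; matrices are plain Python lists, with A[j][i] = a_ji and B[i][k] = b_i(k)):

```python
def forward(pi, A, B, O):
    """alpha-pass: returns P(O | lambda) for row-stochastic pi, A, B."""
    N = len(pi)
    # alpha_0(i) = pi_i * b_i(O_0)
    alpha = [pi[i] * B[i][O[0]] for i in range(N)]
    for t in range(1, len(O)):
        # alpha_t(i) = [sum_j alpha_{t-1}(j) a_ji] * b_i(O_t)
        alpha = [sum(alpha[j] * A[j][i] for j in range(N)) * B[i][O[t]]
                 for i in range(N)]
    return sum(alpha)  # P(O | lambda) = sum_i alpha_{T-1}(i)

# Temperature example: states 0 = H, 1 = C; observations S = 0, M = 1, L = 2.
pi = [0.6, 0.4]
A  = [[0.7, 0.3], [0.4, 0.6]]
B  = [[0.1, 0.4, 0.5], [0.7, 0.2, 0.1]]
print(forward(pi, A, B, [0, 1, 0, 2]))  # approximately 0.0096296
```

The result agrees with the sum of the 16 sequence probabilities in Table 1, as it must.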
4.2 Solution to Problem 2
Given the model λ = (A, B, π) and a sequence of observations O, our goal is to ﬁnd the
most likely state sequence. As mentioned above, there are diﬀerent possible interpretations
of “most likely”. For HMMs we maximize the expected number of correct states. In contrast,
a dynamic program ﬁnds the highest scoring overall path. As we have seen, these solutions
are not necessarily the same.
First, we define the backward algorithm, or β-pass. This is analogous to the α-pass
discussed above, except that it starts at the end and works back toward the beginning.
For t = 0, 1, . . . , T−1 and i = 0, 1, . . . , N−1, define

    β_t(i) = P(O_{t+1}, O_{t+2}, . . . , O_{T−1} | x_t = q_i, λ).
Then the β_t(i) can be computed recursively as follows.

1. Let β_{T−1}(i) = 1, for i = 0, 1, . . . , N−1.

2. For t = T−2, T−3, . . . , 0 and i = 0, 1, . . . , N−1, compute

       β_t(i) = Σ_{j=0}^{N−1} a_{ij} b_j(O_{t+1}) β_{t+1}(j).
For t = 0, 1, . . . , T−2 and i = 0, 1, . . . , N−1, define

    γ_t(i) = P(x_t = q_i | O, λ).

Since α_t(i) measures the relevant probability up to time t and β_t(i) measures the relevant
probability after time t,

    γ_t(i) = α_t(i) β_t(i) / P(O | λ).

Recall that the denominator P(O | λ) is obtained by summing α_{T−1}(i) over i. From the
definition of γ_t(i) it follows that the most likely state at time t is the state q_i for which
γ_t(i) is maximum—where the maximum is taken over the index i.
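Combining the α-pass and β-pass yields the γ_t(i) and hence the HMM-sense answer. A sketch (ours; unscaled, so suitable only for short sequences; see Section 6 for scaling):

```python
def most_likely_states(pi, A, B, O):
    """For each t, the state i maximizing gamma_t(i) = alpha_t(i) beta_t(i) / P(O|lambda)."""
    N, T = len(pi), len(O)
    # alpha-pass, keeping every alpha_t(i)
    alpha = [[pi[i] * B[i][O[0]] for i in range(N)]]
    for t in range(1, T):
        alpha.append([sum(alpha[t - 1][j] * A[j][i] for j in range(N)) * B[i][O[t]]
                      for i in range(N)])
    # beta-pass: beta_{T-1}(i) = 1, then work backwards
    beta = [[1.0] * N for _ in range(T)]
    for t in range(T - 2, -1, -1):
        for i in range(N):
            beta[t][i] = sum(A[i][j] * B[j][O[t + 1]] * beta[t + 1][j]
                             for j in range(N))
    # gamma_t(i) is proportional to alpha_t(i) beta_t(i); P(O|lambda) cancels in the argmax
    return [max(range(N), key=lambda i: alpha[t][i] * beta[t][i]) for t in range(T)]

pi = [0.6, 0.4]                       # 0 = H, 1 = C
A  = [[0.7, 0.3], [0.4, 0.6]]
B  = [[0.1, 0.4, 0.5], [0.7, 0.2, 0.1]]
print(most_likely_states(pi, A, B, [0, 1, 0, 2]))  # [1, 0, 1, 0], i.e. CHCH
```

For the temperature example this reproduces the CHCH answer of Table 2.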
4.3 Solution to Problem 3
Here we want to adjust the model parameters to best ﬁt the observations. The sizes of the
matrices (N and M) are ﬁxed but the elements of A, B and π are free, subject only to the
row stochastic condition. The fact that we can re-estimate the model itself is one of the
more amazing aspects of HMMs.
For t = 0, 1, . . . , T−2 and i, j ∈ {0, 1, . . . , N−1}, define the “digammas” as

    γ_t(i, j) = P(x_t = q_i, x_{t+1} = q_j | O, λ).

Then γ_t(i, j) is the probability of being in state q_i at time t and transiting to state q_j at
time t+1. The digammas can be written in terms of α, β, A and B as

    γ_t(i, j) = α_t(i) a_{ij} b_j(O_{t+1}) β_{t+1}(j) / P(O | λ).

The γ_t(i) and γ_t(i, j) (or digamma) are related by

    γ_t(i) = Σ_{j=0}^{N−1} γ_t(i, j).
Given the γ and digamma we verify below that the model λ = (A, B, π) can be
re-estimated as follows.

1. For i = 0, 1, . . . , N−1, let

       π_i = γ_0(i).    (9)

2. For i = 0, 1, . . . , N−1 and j = 0, 1, . . . , N−1, compute

       a_{ij} = Σ_{t=0}^{T−2} γ_t(i, j) / Σ_{t=0}^{T−2} γ_t(i).    (10)

3. For j = 0, 1, . . . , N−1 and k = 0, 1, . . . , M−1, compute

       b_j(k) = Σ_{t∈{0,1,...,T−2}: O_t=k} γ_t(j) / Σ_{t=0}^{T−2} γ_t(j).    (11)
The numerator of the re-estimated a_{ij} can be seen to give the expected number of
transitions from state q_i to state q_j, while the denominator is the expected number of
transitions from q_i to any state. Then the ratio is the probability of transiting from state
q_i to state q_j, which is the desired value of a_{ij}.

The numerator of the re-estimated b_j(k) is the expected number of times the model is
in state q_j with observation k, while the denominator is the expected number of times the
model is in state q_j. The ratio is the probability of observing symbol k, given that the model
is in state q_j, which is the desired value of b_j(k).
Re-estimation is an iterative process. First, we initialize λ = (A, B, π) with a best
guess or, if no reasonable guess is available, we choose random values such that π_i ≈ 1/N,
a_{ij} ≈ 1/N and b_j(k) ≈ 1/M. It’s critical that A, B and π be randomized, since exactly
uniform values result in a local maximum from which the model cannot climb. As always,
π, A and B must be row stochastic.
The process can be summarized as follows.

1. Initialize λ = (A, B, π).

2. Compute α_t(i), β_t(i), γ_t(i, j) and γ_t(i).

3. Re-estimate the model λ = (A, B, π).

4. If P(O | λ) increases, goto 2.

Of course, it might be desirable to stop if P(O | λ) does not increase by at least some
predetermined threshold and/or to set a maximum number of iterations.
5 Dynamic programming
Before completing our discussion of the elementary aspects of HMMs, we make a brief detour
to show the relationship between dynamic programming (DP) and HMMs. In fact, a DP
can be viewed as an α-pass where “sum” is replaced by “max”. More precisely, for π, A
and B as above, the dynamic programming algorithm can be stated as follows.

1. Let δ_0(i) = π_i b_i(O_0), for i = 0, 1, . . . , N−1.

2. For t = 1, 2, . . . , T−1 and i = 0, 1, . . . , N−1, compute

       δ_t(i) = max_{j∈{0,1,...,N−1}} [ δ_{t−1}(j) a_{ji} b_i(O_t) ].

At each successive t, the DP determines the probability of the best path ending at each of
the states i = 0, 1, . . . , N−1. Consequently, the probability of the best overall path is

       max_{j∈{0,1,...,N−1}} [ δ_{T−1}(j) ].    (12)
Be sure to note that (12) only gives the optimal probability, not the optimal path itself.
By keeping track of each preceding state, the DP procedure can be used to recover the
optimal path by tracing back from the highest-scoring final state.
For example, consider the example of Section 1. The initial probabilities are

    P(H) = π_0 b_0(0) = 0.6(0.1) = 0.06   and   P(C) = π_1 b_1(0) = 0.4(0.7) = 0.28.
The probabilities of each path of length two are
P(HH) = 0.06(0.7)(0.4) = 0.0168
P(HC) = 0.06(0.3)(0.2) = 0.0036
P(CH) = 0.28(0.4)(0.4) = 0.0448
P(CC) = 0.28(0.6)(0.2) = 0.0336
and hence the best (most probable) path of length two ending with H is CH while the
best path of length two ending with C is CC. Continuing, we construct the diagram
in Figure 2 one “level” at a time, where each arrow points to the preceding element in the
optimal path up to that particular state. Note that at each stage, the dynamic programming
algorithm only needs to maintain the highest-scoring paths at each possible state—not a list
of all possible paths. This is the key to the efficiency of the DP algorithm.
            t = 0     t = 1         t = 2           t = 3
    H       .06       .0448 (←C)    .003136 (←H)    .002822 (←C)
    C       .28       .0336 (←C)    .014112 (←C)    .000847 (←C)

Figure 2: Dynamic programming
In Figure 2, the maximum ﬁnal probability is 0.002822 which occurs at the ﬁnal state H.
We can use the arrows to trace back from H to ﬁnd the optimal path CCCH.
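The trellis computation is easy to code directly. In the sketch below (ours; states are indexed 0 = H, 1 = C), back-pointers play the role of the arrows in Figure 2:

```python
def viterbi(pi, A, B, O):
    """DP over state sequences: returns (best path, its probability)."""
    N = len(pi)
    delta = [pi[i] * B[i][O[0]] for i in range(N)]  # delta_0(i)
    back = []       # back[t-1][i] = best predecessor of state i at time t
    for t in range(1, len(O)):
        prev, new = [], []
        for i in range(N):
            j = max(range(N), key=lambda j: delta[j] * A[j][i])
            prev.append(j)
            new.append(delta[j] * A[j][i] * B[i][O[t]])
        back.append(prev)
        delta = new
    # trace back from the highest-scoring final state
    path = [max(range(N), key=lambda i: delta[i])]
    for prev in reversed(back):
        path.append(prev[path[-1]])
    path.reverse()
    return path, max(delta)

pi = [0.6, 0.4]                       # 0 = H, 1 = C
A  = [[0.7, 0.3], [0.4, 0.6]]
B  = [[0.1, 0.4, 0.5], [0.7, 0.2, 0.1]]
print(viterbi(pi, A, B, [0, 1, 0, 2]))  # path [1, 1, 1, 0] (CCCH), probability about 0.0028224
```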
Underﬂow is a concern with a dynamic programming problem of this form, since we
compute products of probabilities. Fortunately, underﬂow is easily avoided by simply taking
logarithms. The underflow-resistant DP algorithm is as follows.

1. Let δ̂_0(i) = log[π_i b_i(O_0)], for i = 0, 1, . . . , N−1.

2. For t = 1, 2, . . . , T−1 and i = 0, 1, . . . , N−1, compute

       δ̂_t(i) = max_{j∈{0,1,...,N−1}} [ δ̂_{t−1}(j) + log[a_{ji}] + log[b_i(O_t)] ].
In this case, the optimal score is

       max_{j∈{0,1,...,N−1}} [ δ̂_{T−1}(j) ].

Of course, additional bookkeeping is still required in order to find the optimal path.
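The log-domain version changes only the arithmetic, as a sketch (ours; it assumes every π, A and B entry is strictly positive, since log 0 is undefined):

```python
import math

def viterbi_log(pi, A, B, O):
    """Underflow-resistant DP: same path as viterbi, scores are log probabilities."""
    N = len(pi)
    delta = [math.log(pi[i]) + math.log(B[i][O[0]]) for i in range(N)]
    back = []
    for t in range(1, len(O)):
        prev, new = [], []
        for i in range(N):
            j = max(range(N), key=lambda j: delta[j] + math.log(A[j][i]))
            prev.append(j)
            new.append(delta[j] + math.log(A[j][i]) + math.log(B[i][O[t]]))
        back.append(prev)
        delta = new
    path = [max(range(N), key=lambda i: delta[i])]
    for prev in reversed(back):
        path.append(prev[path[-1]])
    path.reverse()
    return path, max(delta)
```

On the temperature example this returns the path CCCH with score log(0.0028224).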
6 HMM Scaling
The three HMM solutions in Section 4 all require computations involving products of
probabilities. It is easy to see, for example, that α_t(i) tends to 0 exponentially as T increases.
Therefore, any attempt to implement the formulae as given above will inevitably result in
underflow. The solution to this underflow problem is to scale the numbers. However, care
must be taken to ensure that, for example, the re-estimation formulae remain valid.
First, consider the computation of α_t(i). The basic recurrence is

    α_t(i) = Σ_{j=0}^{N−1} α_{t−1}(j) a_{ji} b_i(O_t).

It seems sensible to normalize each α_t(i) by dividing by the sum (over j) of α_t(j). However,
we must verify that the re-estimation formulae hold.
For t = 0, let α̃_0(i) = α_0(i) for i = 0, 1, . . . , N−1. Then let c_0 = 1 / Σ_{j=0}^{N−1} α̃_0(j)
and, finally, α̂_0(i) = c_0 α̃_0(i) for i = 0, 1, . . . , N−1. Then for each t = 1, 2, . . . , T−1, do
the following.

1. For i = 0, 1, . . . , N−1, compute

       α̃_t(i) = Σ_{j=0}^{N−1} α̂_{t−1}(j) a_{ji} b_i(O_t).

2. Let

       c_t = 1 / Σ_{j=0}^{N−1} α̃_t(j).

3. For i = 0, 1, . . . , N−1, compute

       α̂_t(i) = c_t α̃_t(i).
Clearly, α̂_0(i) = c_0 α_0(i). Suppose that

    α̂_t(i) = c_0 c_1 · · · c_t α_t(i).    (13)

Then

    α̂_{t+1}(i) = c_{t+1} α̃_{t+1}(i)
               = c_{t+1} Σ_{j=0}^{N−1} α̂_t(j) a_{ji} b_i(O_{t+1})
               = c_0 c_1 · · · c_t c_{t+1} Σ_{j=0}^{N−1} α_t(j) a_{ji} b_i(O_{t+1})
               = c_0 c_1 · · · c_{t+1} α_{t+1}(i)

and hence (13) holds, by induction, for all t.
From (13) and the definitions of α̃ and α̂ it follows that

    α̂_t(i) = α_t(i) / Σ_{j=0}^{N−1} α_t(j)    (14)

and hence the α̂_t(i) are the desired scaled values of α_t(i) for all t. As a consequence of (14),

    Σ_{j=0}^{N−1} α̂_{T−1}(j) = 1.
Also, from (13) we have

    Σ_{j=0}^{N−1} α̂_{T−1}(j) = c_0 c_1 · · · c_{T−1} Σ_{j=0}^{N−1} α_{T−1}(j)
                              = c_0 c_1 · · · c_{T−1} P(O | λ).

Combining these results gives

    P(O | λ) = 1 / Π_{j=0}^{T−1} c_j.

To avoid underflow, we instead compute

    log[P(O | λ)] = − Σ_{j=0}^{T−1} log c_j.    (15)
The same scale factor is used for β_t(i) as was used for α_t(i), namely c_t, so that
β̂_t(i) = c_t β_t(i). We then compute γ_t(i, j) and γ_t(i) using the formulae of the previous
section with α̂_t(i) and β̂_t(i) in place of α_t(i) and β_t(i), respectively. These values are then
used to re-estimate π, A and B.
By writing the original re-estimation formulae (9), (10) and (11) directly in terms
of α_t(i) and β_t(i), it is an easy exercise to show that the re-estimated π, A and B
are exact when α̂_t(i) and β̂_t(i) are used in place of α_t(i) and β_t(i). Furthermore, P(O | λ)
isn’t required in the re-estimation formulae, since in each case it cancels in the numerator
and denominator. Therefore, (15) can be used to verify that P(O | λ) is increasing at each
iteration. Fortunately, we have no need for the actual value of P(O | λ), the calculation of
which would inevitably result in underflow.
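The scaled α-pass and equation (15) can be sketched as follows (ours; same list-based matrix layout as the model matrices of Section 1):

```python
import math

def forward_log_prob(pi, A, B, O):
    """Scaled alpha-pass; returns log P(O | lambda) via (15), never underflowing."""
    N = len(pi)
    log_prob = 0.0
    alpha = [pi[i] * B[i][O[0]] for i in range(N)]       # alpha-tilde at t = 0
    for t in range(len(O)):
        if t > 0:
            # alpha-tilde_t(i), built from the scaled alpha-hat_{t-1}(j)
            alpha = [sum(alpha[j] * A[j][i] for j in range(N)) * B[i][O[t]]
                     for i in range(N)]
        c = 1.0 / sum(alpha)                              # scale factor c_t
        alpha = [c * a for a in alpha]                    # alpha-hat_t(i)
        log_prob -= math.log(c)                           # accumulate -log c_t
    return log_prob
```

On the short temperature example this agrees with the direct computation: forward_log_prob(pi, A, B, [0, 1, 0, 2]) is log(0.0096296), about −4.643.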
7 Putting it all together
Here we give complete pseudo-code for solving Problem 3, including scaling. This
pseudo-code also provides everything needed to solve Problems 1 and 2.
The values N and M are fixed and the T observations

    O = (O_0, O_1, . . . , O_{T−1})

are assumed known.
1. Initialization:

   Select initial values for the matrices A, B and π, where π is 1×N, while A = {a_{ij}}
   is N×N and B = {b_j(k)} is N×M, and all three matrices are row stochastic. If
   known, use reasonable approximations for the matrix values; otherwise let π_i ≈ 1/N,
   a_{ij} ≈ 1/N and b_j(k) ≈ 1/M. Be sure that each row sums to 1 and the elements
   of each matrix are not uniform.

   Let

       maxIters = maximum number of re-estimation iterations
       iters = 0
       oldLogProb = −∞.
2. The α-pass

   // compute α_0(i)
   c_0 = 0
   for i = 0 to N−1
       α_0(i) = π_i b_i(O_0)
       c_0 = c_0 + α_0(i)
   next i

   // scale the α_0(i)
   c_0 = 1/c_0
   for i = 0 to N−1
       α_0(i) = c_0 α_0(i)
   next i

   // compute α_t(i)
   for t = 1 to T−1
       c_t = 0
       for i = 0 to N−1
           α_t(i) = 0
           for j = 0 to N−1
               α_t(i) = α_t(i) + α_{t−1}(j) a_{ji}
           next j
           α_t(i) = α_t(i) b_i(O_t)
           c_t = c_t + α_t(i)
       next i

       // scale α_t(i)
       c_t = 1/c_t
       for i = 0 to N−1
           α_t(i) = c_t α_t(i)
       next i
   next t
3. The β-pass

   // let β_{T−1}(i) = 1, scaled by c_{T−1}
   for i = 0 to N−1
       β_{T−1}(i) = c_{T−1}
   next i

   // β-pass
   for t = T−2 to 0 by −1
       for i = 0 to N−1
           β_t(i) = 0
           for j = 0 to N−1
               β_t(i) = β_t(i) + a_{ij} b_j(O_{t+1}) β_{t+1}(j)
           next j
           // scale β_t(i) with same scale factor as α_t(i)
           β_t(i) = c_t β_t(i)
       next i
   next t
4. Compute γ_t(i, j) and γ_t(i)

   for t = 0 to T−2
       denom = 0
       for i = 0 to N−1
           for j = 0 to N−1
               denom = denom + α_t(i) a_{ij} b_j(O_{t+1}) β_{t+1}(j)
           next j
       next i
       for i = 0 to N−1
           γ_t(i) = 0
           for j = 0 to N−1
               γ_t(i, j) = (α_t(i) a_{ij} b_j(O_{t+1}) β_{t+1}(j)) / denom
               γ_t(i) = γ_t(i) + γ_t(i, j)
           next j
       next i
   next t
5. Re-estimate A, B and π

   // re-estimate π
   for i = 0 to N−1
       π_i = γ_0(i)
   next i

   // re-estimate A
   for i = 0 to N−1
       for j = 0 to N−1
           numer = 0
           denom = 0
           for t = 0 to T−2
               numer = numer + γ_t(i, j)
               denom = denom + γ_t(i)
           next t
           a_{ij} = numer/denom
       next j
   next i

   // re-estimate B
   for i = 0 to N−1
       for j = 0 to M−1
           numer = 0
           denom = 0
           for t = 0 to T−2
               if (O_t == j) then
                   numer = numer + γ_t(i)
               end if
               denom = denom + γ_t(i)
           next t
           b_i(j) = numer/denom
       next j
   next i
6. Compute log[P(O | λ)]

   logProb = 0
   for i = 0 to T−1
       logProb = logProb + log(c_i)
   next i
   logProb = −logProb
7. To iterate or not to iterate, that is the question. . .
iters = iters + 1
if (iters < maxIters and logProb > oldLogProb) then
oldLogProb = logProb
goto 2
else
output λ = (π, A, B)
end if
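The pseudo-code above carries over directly to, say, Python. The sketch below (ours) is a condensed version of the whole loop, with one deliberate deviation flagged in the comments: the b_j(k) update also includes t = T−1, via γ_{T−1}(i) = α̂_{T−1}(i), which makes each pass an exact Baum–Welch step, so log P(O | λ) is guaranteed non-decreasing. The ten-symbol observation sequence in the usage line is made up purely for illustration.

```python
import math
import random

def baum_welch(O, N, M, max_iters=100, seed=0):
    """Train lambda = (A, B, pi) on observation sequence O (symbols 0..M-1)."""
    rng = random.Random(seed)
    T = len(O)

    def rand_row(n):
        # row-stochastic, close to uniform but deliberately not exactly uniform
        row = [1.0 + 0.1 * rng.random() for _ in range(n)]
        s = sum(row)
        return [x / s for x in row]

    pi = rand_row(N)
    A = [rand_row(N) for _ in range(N)]
    B = [rand_row(M) for _ in range(N)]

    old_log_prob = -math.inf
    for _ in range(max_iters):
        # scaled alpha-pass, scaling each row before the next time step uses it
        alpha = [[pi[i] * B[i][O[0]] for i in range(N)]]
        c = [1.0 / sum(alpha[0])]
        alpha[0] = [c[0] * a for a in alpha[0]]
        for t in range(1, T):
            row = [sum(alpha[t - 1][j] * A[j][i] for j in range(N)) * B[i][O[t]]
                   for i in range(N)]
            c.append(1.0 / sum(row))
            alpha.append([c[t] * a for a in row])
        # beta-pass, scaled by the same c_t
        beta = [[0.0] * N for _ in range(T)]
        beta[T - 1] = [c[T - 1]] * N
        for t in range(T - 2, -1, -1):
            for i in range(N):
                beta[t][i] = c[t] * sum(A[i][j] * B[j][O[t + 1]] * beta[t + 1][j]
                                        for j in range(N))
        # gammas and digammas (the scale factors cancel in each ratio)
        gamma, digamma = [], []
        for t in range(T - 1):
            denom = sum(alpha[t][i] * A[i][j] * B[j][O[t + 1]] * beta[t + 1][j]
                        for i in range(N) for j in range(N))
            dg = [[alpha[t][i] * A[i][j] * B[j][O[t + 1]] * beta[t + 1][j] / denom
                   for j in range(N)] for i in range(N)]
            digamma.append(dg)
            gamma.append([sum(dg[i]) for i in range(N)])
        # deviation from the pseudo-code: gamma_{T-1}(i) = alpha-hat_{T-1}(i),
        # so the B update below can include t = T-1 (exact Baum-Welch M-step)
        gamma.append(alpha[T - 1][:])
        # re-estimate pi, A and B
        pi = gamma[0][:]
        for i in range(N):
            da = sum(gamma[t][i] for t in range(T - 1))
            for j in range(N):
                A[i][j] = sum(digamma[t][i][j] for t in range(T - 1)) / da
            db = sum(gamma[t][i] for t in range(T))
            for k in range(M):
                B[i][k] = sum(gamma[t][i] for t in range(T) if O[t] == k) / db
        # log P(O | lambda) of the model scored by this pass, as in (15)
        log_prob = -sum(math.log(ct) for ct in c)
        if log_prob <= old_log_prob:
            break
        old_log_prob = log_prob
    return pi, A, B, old_log_prob

# usage on a short, made-up observation sequence over M = 3 symbols
pi_hat, A_hat, B_hat, log_prob = baum_welch([0, 1, 0, 2, 1, 0, 2, 2, 1, 0], N=2, M=3)
```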
8 A not-so-simple example
In this section we present a classic application of Hidden Markov Models due to Cave and
Neuwirth [2]. This application nicely illustrates the strength of HMMs and has the additional
advantage that it requires no background in any specialized ﬁeld such as speech processing.
Suppose Marvin the Martian obtains a large body of English text, such as the “Brown
Corpus” [3], which totals about 1,000,000 words. Marvin, who has a working knowledge of
HMMs, but no knowledge of English, would like to determine some basic properties of this
mysterious writing system. A reasonable question he might ask is whether the characters can
be partitioned into sets so that the characters in each set are “diﬀerent” in some statistically
signiﬁcant way.
Marvin might consider attempting the following. First, remove all punctuation, numbers,
etc., and convert all letters to lower case. This leaves 26 distinct letters and word space, for
a total of 27 symbols. He could then test the hypothesis that there is an underlying Markov
process (of order one) with two states. For each of these two hidden states, he assumes that
the 27 symbols are observed according to ﬁxed probability distributions.
This deﬁnes an HMM with N = 2 and M = 27, where the state transition probabilities
of the A matrix and the observation probabilities—the B matrix—are unknown, while
the observations are the series of characters found in the text. To find the most probable A
and B matrices, Marvin must solve Problem 3 of Section 7.
We programmed this experiment, using the first T = 50,000 observations (letters—
converted to lower case—and word spaces) from the “Brown Corpus” [3]. We initialized
each element of π and A randomly to approximately 1/2. The precise values used were
π = [ 0.51316 0.48684 ]
and
    A = ( 0.47468  0.52532 )
        ( 0.51656  0.48344 ).
Each element of B was initialized to approximately 1/27. The precise values in the initial B
matrix (actually, the transpose of B) appear in the second and third columns of Figure 3.
After the initial iteration, we have

    log[P(O | λ)] = −165097.29

and after 100 iterations,

    log[P(O | λ)] = −137305.28.

This shows that the model has been improved significantly.
After 100 iterations, the model λ = (A, B, π) has converged to
    π = ( 0.00000  1.00000 )

and

    A = ( 0.25596  0.74404 )
        ( 0.71571  0.28429 )
with the transpose of B appearing in the last two columns of Figure 3.
The most interesting part of this result is the B matrix. Without having made any
assumption about the two hidden states, the B matrix tells us that one hidden state contains
Initial Final
a 0.03735 0.03909 0.13845 0.00075
b 0.03408 0.03537 0.00000 0.02311
c 0.03455 0.03537 0.00062 0.05614
d 0.03828 0.03909 0.00000 0.06937
e 0.03782 0.03583 0.21404 0.00000
f 0.03922 0.03630 0.00000 0.03559
g 0.03688 0.04048 0.00081 0.02724
h 0.03408 0.03537 0.00066 0.07278
i 0.03875 0.03816 0.12275 0.00000
j 0.04062 0.03909 0.00000 0.00365
k 0.03735 0.03490 0.00182 0.00703
l 0.03968 0.03723 0.00049 0.07231
m 0.03548 0.03537 0.00000 0.03889
n 0.03735 0.03909 0.00000 0.11461
o 0.04062 0.03397 0.13156 0.00000
p 0.03595 0.03397 0.00040 0.03674
q 0.03641 0.03816 0.00000 0.00153
r 0.03408 0.03676 0.00000 0.10225
s 0.04062 0.04048 0.00000 0.11042
t 0.03548 0.03443 0.01102 0.14392
u 0.03922 0.03537 0.04508 0.00000
v 0.04062 0.03955 0.00000 0.01621
w 0.03455 0.03816 0.00000 0.02303
x 0.03595 0.03723 0.00000 0.00447
y 0.03408 0.03769 0.00019 0.02587
z 0.03408 0.03955 0.00000 0.00110
space 0.03688 0.03397 0.33211 0.01298
Figure 3: Initial and ﬁnal B transpose
the vowels while the other hidden state contains the consonants. Curiously, word-space is
more like a vowel, while y is not even sometimes a vowel. Of course, anyone familiar with
English would not be too surprised that there is a clear distinction between vowels and
consonants. But the HMM results show us that this distinction is a statistically significant
feature inherent in the language. And, thanks to HMMs, this could easily be deduced by
Marvin the Martian, who has no other knowledge of the language.
Cave and Neuwirth [2] obtain further interesting results by allowing more than two hidden
states. In fact, they are able to obtain and sensibly interpret the results for models with up
to 12 hidden states.
9 Exercises
1. Write the re-estimation formulae (9), (10) and (11) directly in terms of α and β.

2. In the re-estimation formulae obtained in Exercise 1, substitute α̂ and β̂ for α and β,
   respectively, and show that the resulting re-estimation formulae are exact.
3. Instead of using c_t to scale the β_t(i), scale each β_t(i) by

       d_t = 1 / Σ_{j=0}^{N−1} β̃_t(j)

   where the definition of β̃ is similar to that of α̃.
   a. Using the scaling factors c_t and d_t, show that the re-estimation formulae obtained
      in Exercise 1 are exact with α̂ and β̂ in place of α and β.

   b. Write log[P(O | λ)] in terms of c_t and d_t.
4. Consider an HMM where for each t the state transition matrix is time dependent.
   Then for each t, there is an N×N row-stochastic A_t = {a^t_{ij}} that is used in place
   of A in the HMM computations. For such an HMM,
a. Give pseudocode to solve Problem 1.
b. Give pseudocode to solve Problem 2.
5. Consider an HMM of order two, that is, an HMM where the underlying Markov process
   is of order two. Then the state at time t depends on the states at times t−1 and t−2
   and a fixed set of probabilities. For an HMM of order two,
a. Give pseudocode to solve Problem 1.
b. Give pseudocode to solve Problem 2.
c. Give pseudocode to solve Problem 3.
6. Write an HMM program to solve the English language problem discussed in Section 8
with
a. Three hidden states. Explain your results.
b. Four hidden states. Explain your results.
7. Write an HMM program to solve the problem discussed in Section 8, replacing English
text with
a. French.
b. Russian.
c.* Chinese.
8. Perform an HMM analysis similar to that discussed in Section 8, replacing English
with “Hamptonese”; see
http://www.cs.sjsu.edu/faculty/stamp/Hampton/hampton.html
for information on Hamptonese.
References
[1] L. R. Rabiner, A tutorial on hidden Markov models and selected applications
    in speech recognition, Proceedings of the IEEE, Vol. 77, No. 2, February 1989,
    http://www.cs.ucsb.edu/~cs281b/papers/HMMs%20%20Rabiner.pdf

[2] R. L. Cave and L. P. Neuwirth, Hidden Markov models for English, in J. D. Ferguson,
    editor, Hidden Markov Models for Speech, IDA-CRD, Princeton, NJ, October 1980.

[3] The Brown Corpus of Standard American English, available for download at
    http://www.cs.toronto.edu/~gpenn/csc401/a1res.html
Our goal is to make eﬀective and eﬃcient use of the observable information in order to gain insight into various aspects of the Markov process. The state transition matrix 0.7 0.3 A= (3) 0.4 0.6 comes from (1) and the observation matrix B= 0.1 0.4 0.5 0.7 0.2 0.1 . (4)
is from (2). Perhaps there is additional evidence that the initial state distribution, denoted by π, is π = 0.6 0.4 . (5) The matrices, π, A and B are row stochastic, meaning that each element is a probability and the elements of each row sum to 1. Now consider a particular fouryear period of interest where we observe the series of tree rings S, M, S, L. Letting 0 represent S, 1 represent M and 2 represent L, this observation sequence is O = (0, 1, 0, 2). (6) We might want to determine the most likely state sequence of the Markov process given the observations (6). This is not quite as clearcut as it seems, since there are diﬀerent possible interpretations of “most likely”. On the one hand, we could reasonably deﬁne most likely as the state sequence with the highest probability from among all possible state sequences of length four. Dynamic programming (DP) eﬃciently ﬁnds this particular most likely solution. On the other hand, we might reasonably deﬁne most likely as the state sequence that maximizes the expected number of correct states. HMMs ﬁnd this most likely sequence. The DP solution and the HMM solution are not necessarily the same. For example, the DP solution must have valid state transitions, while this is not necessarily the case for the HMMs. And even if all state transitions are valid, the HMM solution can still diﬀer from the DP solution (as we illustrate in an example below). Next, we present one of the most challenging aspects of HMMs—the notation. Then we discuss the three fundamental problems related to HMMs and give algorithms for their eﬃcient solutions. We also consider some critical computational issue that must be addressed when writing any HMM computer program. We conclude with a substantial example that does not require any specialized knowledge, yet nicely illustrates the strength of the HMM approach. Rabiner [1] is the best source for further introductory information on HMMs.
2 Notation

Let

    T = the length of the observation sequence
    N = the number of states in the model
    M = the number of observation symbols
    Q = {q_0, q_1, . . . , q_{N−1}} = the states of the Markov process
    V = {0, 1, . . . , M−1} = set of possible observations
    A = the state transition probabilities
    B = the observation probability matrix
    π = the initial state distribution
    O = (O_0, O_1, . . . , O_{T−1}) = observation sequence.

The observations are always denoted by {0, 1, . . . , M−1}, since this simplifies the notation with no loss in generality. Then O_i ∈ V for i = 0, 1, . . . , T−1.

A generic hidden Markov model is illustrated in Figure 1, where the X_i are the hidden states and all other notation is as given above. The Markov process—which is hidden behind the dashed line—is determined by the initial state X_0 and the A matrix. We are only able to observe the O_i, which are related to the actual states of the Markov process by the matrix B.

    Markov process:  X_0 --A--> X_1 --A--> X_2 --A--> · · · --A--> X_{T−1}
                      |B         |B         |B                      |B
    Observations:    O_0        O_1        O_2        · · ·        O_{T−1}

              Figure 1: Hidden Markov Model

For the temperature example of the previous section—with the observation sequence given in (6)—we have T = 4, N = 2, M = 3, Q = {H, C}, V = {0, 1, 2} (where we let 0, 1, 2 represent “small”, “medium” and “large” tree rings, respectively). In this case, the matrices A, B and π are given by (3), (4) and (5), respectively.

The matrix A = {a_ij} is N × N with

    a_ij = P(state q_j at t+1 | state q_i at t)

and A is row stochastic. Note that the probabilities a_ij are independent of t. The matrix B = {b_j(k)} is N × M with

    b_j(k) = P(observation k at t | state q_j at t).

As with A, the matrix B is row stochastic and the probabilities b_j(k) are independent of t. The unusual notation b_j(k) is standard in the HMM world.

An HMM is defined by A, B and π (and, implicitly, by the dimensions N and M). The HMM is denoted by λ = (A, B, π).

Consider a generic state sequence of length four

    X = (x_0, x_1, x_2, x_3)

with corresponding observations

    O = (O_0, O_1, O_2, O_3).

Then π_{x_0} is the probability of starting in state x_0, b_{x_0}(O_0) is the probability of initially observing O_0 and a_{x_0,x_1} is the probability of transiting from state x_0 to state x_1. Continuing, we see that the probability of the state sequence X is given by

    P(X) = π_{x_0} b_{x_0}(O_0) a_{x_0,x_1} b_{x_1}(O_1) a_{x_1,x_2} b_{x_2}(O_2) a_{x_2,x_3} b_{x_3}(O_3).   (7)

Consider the temperature example in Section 1 with observation sequence O = (0, 1, 0, 2), as given in (6). Using (7) we can compute, say,

    P(HHCC) = 0.6(0.1)(0.7)(0.4)(0.3)(0.7)(0.6)(0.1) = 0.000212.

Similarly, we can compute the probability of each of the possible state sequences of length four, assuming the fixed observation sequence (6). We have listed these results in Table 1, where the probabilities in the last column are normalized so that they sum to 1.

    state    probability    normalized probability
    HHHH      .000412          .042743
    HHHC      .000035          .003664
    HHCH      .000706          .073274
    HHCC      .000212          .021982
    HCHH      .000050          .005234
    HCHC      .000004          .000449
    HCCH      .000302          .031403
    HCCC      .000091          .009421
    CHHH      .001098          .113982
    CHHC      .000094          .009770
    CHCH      .001882          .195398
    CHCC      .000564          .058619
    CCHH      .000470          .048849
    CCHC      .000040          .004187
    CCCH      .002822          .293096
    CCCC      .000847          .087929

              Table 1: State sequence probabilities

To find the optimal state sequence in the dynamic programming (DP) sense, we simply choose the sequence with the highest probability, namely, CCCH. To find the optimal sequence in the HMM sense, we choose the most probable symbol at each position. To this end we sum the probabilities in Table 1 that have an H in the first position. Doing so, we find the (normalized) probability of H in the first position is 0.18817 and hence the probability of C in the first position is 0.81183. The HMM therefore chooses the first element of the optimal sequence to be C. We repeat this for each element of the sequence, obtaining the probabilities in Table 2.

    element     0          1          2          3
    P(H)      0.188170   0.519432   0.228878   0.803979
    P(C)      0.811830   0.480568   0.771122   0.196021

              Table 2: HMM probabilities

From Table 2 we find that the optimal sequence—in the HMM sense—is CHCH. Note that in this example, the optimal DP sequence differs from the optimal HMM sequence, even though all state transitions are valid.

3 The three problems

3.1 Problem 1

Given the model λ = (A, B, π) and a sequence of observations O, find P(O | λ). Here, we want to determine the likelihood of the observed sequence O, given the model.

3.2 Problem 2

Given λ = (A, B, π) and an observation sequence O, find an optimal state sequence for the underlying Markov process. In other words, we want to uncover the hidden part of the Hidden Markov Model. This type of problem is discussed in Section 1.

3.3 Problem 3

Given an observation sequence O and the dimensions N and M, find the model λ = (A, B, π) that maximizes the probability of O. This can be viewed as training the model to best fit the observed data. Alternatively, we can view this as a (discrete) hill climb on the parameter space represented by A, B and π.
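As a quick check on Tables 1 and 2, the brute-force computation can be sketched in Python (my own illustration, not part of the original text); states are coded 0 = H, 1 = C:

```python
from itertools import product

A  = [[0.7, 0.3], [0.4, 0.6]]
B  = [[0.1, 0.4, 0.5], [0.7, 0.2, 0.1]]
pi = [0.6, 0.4]
O  = [0, 1, 0, 2]

def seq_prob(X):
    # P(X and O) as in eq. (7): initial term, then transition * emission pairs.
    p = pi[X[0]] * B[X[0]][O[0]]
    for t in range(1, len(O)):
        p *= A[X[t-1]][X[t]] * B[X[t]][O[t]]
    return p

probs = {X: seq_prob(X) for X in product([0, 1], repeat=len(O))}
total = sum(probs.values())

# DP-optimal: the single most probable state sequence.
dp_best = max(probs, key=probs.get)

# HMM-optimal: the most probable state at each position separately.
hmm_best = tuple(
    max([0, 1], key=lambda s: sum(p for X, p in probs.items() if X[t] == s))
    for t in range(len(O)))
```

Here dp_best comes out as (1, 1, 1, 0), i.e. CCCH, and hmm_best as (1, 0, 1, 0), i.e. CHCH, in agreement with Tables 1 and 2.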
3.4 Discussion

Consider speech recognition—which happens to be one of the best-known applications of HMMs. We can use the solution to Problem 3 to train an HMM, say, λ_0, to recognize the spoken word “no” and train another HMM, say, λ_1, to recognize the spoken word “yes”. Then given an unknown spoken word, we can use the solution to Problem 1 to score this word against λ_0 and also against λ_1 to determine whether it is more likely “no”, “yes” or neither. In this case, we don’t need to solve Problem 2, but it is possible that such a solution—which uncovers the hidden states—might provide additional insight into the underlying speech model.

4 The three solutions

4.1 Solution to Problem 1

Let λ = (A, B, π) be a given model and let O = (O_0, O_1, . . . , O_{T−1}) be a series of observations. We want to find P(O | λ).

Let X = (x_0, x_1, . . . , x_{T−1}) be a state sequence. Then by the definition of B we have

    P(O | X, λ) = b_{x_0}(O_0) b_{x_1}(O_1) · · · b_{x_{T−1}}(O_{T−1})

and by the definition of π and A it follows that

    P(X | λ) = π_{x_0} a_{x_0,x_1} a_{x_1,x_2} · · · a_{x_{T−2},x_{T−1}}.

Since

    P(O, X | λ) = P(O ∩ X ∩ λ) / P(λ)

and

    P(O | X, λ) P(X | λ) = [P(O ∩ X ∩ λ) / P(X ∩ λ)] · [P(X ∩ λ) / P(λ)]
                         = P(O ∩ X ∩ λ) / P(λ)

we have

    P(O, X | λ) = P(O | X, λ) P(X | λ).

By summing over all possible state sequences we obtain

    P(O | λ) = Σ_X P(O, X | λ)
             = Σ_X P(O | X, λ) P(X | λ)
             = Σ_X π_{x_0} b_{x_0}(O_0) a_{x_0,x_1} b_{x_1}(O_1) · · · a_{x_{T−2},x_{T−1}} b_{x_{T−1}}(O_{T−1}).

However, this direct computation is generally infeasible, since it requires about 2TN^T multiplications. The strength of the HMM approach derives largely from the fact that there exists an efficient algorithm to achieve the same result.
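The direct summation above can be written out as follows (an illustrative sketch of my own, feasible here only because N = 2 and T = 4):

```python
from itertools import product

A  = [[0.7, 0.3], [0.4, 0.6]]
B  = [[0.1, 0.4, 0.5], [0.7, 0.2, 0.1]]
pi = [0.6, 0.4]
O  = [0, 1, 0, 2]
N, T = 2, len(O)

# Direct evaluation of P(O | lambda): sum P(O, X | lambda) over all N**T
# state sequences.  This needs on the order of 2*T*N**T multiplications,
# which is why it is infeasible for realistic N and T.
p_naive = 0.0
for X in product(range(N), repeat=T):
    term = pi[X[0]] * B[X[0]][O[0]]
    for t in range(1, T):
        term *= A[X[t-1]][X[t]] * B[X[t]][O[t]]
    p_naive += term
```

For the running example this gives P(O | λ) ≈ 0.0096, the quantity asked for in Problem 1.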
To find P(O | λ), the so-called forward algorithm, or α-pass, is used. For t = 0, 1, . . . , T−1 and i = 0, 1, . . . , N−1, define

    α_t(i) = P(O_0, O_1, . . . , O_t, x_t = q_i | λ).                  (8)

Then α_t(i) is the probability of the partial observation sequence up to time t, where the underlying Markov process is in state q_i at time t. The key result is that the α_t(i) can be computed recursively as

    1. Let α_0(i) = π_i b_i(O_0), for i = 0, 1, . . . , N−1.
    2. For t = 1, 2, . . . , T−1 and i = 0, 1, . . . , N−1, compute

           α_t(i) = [ Σ_{j=0}^{N−1} α_{t−1}(j) a_ji ] b_i(O_t).

    3. Then from (8) it is clear that

           P(O | λ) = Σ_{i=0}^{N−1} α_{T−1}(i).

The α-pass computation requires N²T multiplications, as opposed to more than 2TN^T for the naïve approach.

4.2 Solution to Problem 2

Given the model λ = (A, B, π) and a sequence of observations O, our goal is to find the most likely state sequence. As mentioned above, there are different possible interpretations of “most likely”. For HMMs we maximize the expected number of correct states. In contrast, a dynamic program finds the highest-scoring overall path. As we have seen, these solutions are not necessarily the same.

First, we define the backward algorithm, or β-pass. This is analogous to the α-pass discussed above, except that it starts at the end and works back toward the beginning. For t = 0, 1, . . . , T−1 and i = 0, 1, . . . , N−1, define

    β_t(i) = P(O_{t+1}, O_{t+2}, . . . , O_{T−1} | x_t = q_i, λ).

Then the β_t(i) can be computed recursively as

    1. Let β_{T−1}(i) = 1, for i = 0, 1, . . . , N−1.
    2. For t = T−2, T−3, . . . , 0 and i = 0, 1, . . . , N−1, compute

           β_t(i) = Σ_{j=0}^{N−1} a_ij b_j(O_{t+1}) β_{t+1}(j).
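A minimal sketch of the α- and β-passes for the running example follows (again my own code, not from the text). Combining the two passes also yields, for each t, the probability of each state given O (the γ's of the next section), and taking the per-position maximum reproduces the HMM answer CHCH:

```python
A  = [[0.7, 0.3], [0.4, 0.6]]
B  = [[0.1, 0.4, 0.5], [0.7, 0.2, 0.1]]
pi = [0.6, 0.4]
O  = [0, 1, 0, 2]
N, T = 2, len(O)

# Forward (alpha) pass: roughly N*N*T multiplications.
alpha = [[0.0] * N for _ in range(T)]
for i in range(N):
    alpha[0][i] = pi[i] * B[i][O[0]]
for t in range(1, T):
    for i in range(N):
        alpha[t][i] = sum(alpha[t-1][j] * A[j][i] for j in range(N)) * B[i][O[t]]
p = sum(alpha[T-1])          # P(O | lambda)

# Backward (beta) pass.
beta = [[1.0] * N for _ in range(T)]
for t in range(T-2, -1, -1):
    for i in range(N):
        beta[t][i] = sum(A[i][j] * B[j][O[t+1]] * beta[t+1][j] for j in range(N))

# gamma_t(i) = P(x_t = q_i | O, lambda); argmax over i is the HMM-optimal state.
gamma = [[alpha[t][i] * beta[t][i] / p for i in range(N)] for t in range(T)]
most_likely = [max(range(N), key=lambda i: gamma[t][i]) for t in range(T)]
```

With 0 = H and 1 = C, most_likely comes out as [1, 0, 1, 0], i.e. CHCH, matching Table 2.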
For t = 0, 1, . . . , T−1 and i = 0, 1, . . . , N−1, define

    γ_t(i) = P(x_t = q_i | O, λ).

Since α_t(i) measures the relevant probability up to time t and β_t(i) measures the relevant probability after time t,

    γ_t(i) = α_t(i) β_t(i) / P(O | λ).

Recall that the denominator P(O | λ) is obtained by summing α_{T−1}(i) over i. From the definition of γ_t(i) it follows that the most likely state at time t is the state q_i for which γ_t(i) is maximum—where the maximum is taken over the index i.

4.3 Solution to Problem 3

Here we want to adjust the model parameters to best fit the observations. The sizes of the matrices (N and M) are fixed but the elements of A, B and π are free, subject only to the row stochastic condition. The fact that we can re-estimate the model itself is one of the more amazing aspects of HMMs.

For t = 0, 1, . . . , T−2 and i, j ∈ {0, 1, . . . , N−1}, define “di-gammas” as

    γ_t(i, j) = P(x_t = q_i, x_{t+1} = q_j | O, λ).

Then γ_t(i, j) is the probability of being in state q_i at time t and transiting to state q_j at time t+1. The di-gammas can be written in terms of α, β, A and B as

    γ_t(i, j) = α_t(i) a_ij b_j(O_{t+1}) β_{t+1}(j) / P(O | λ).

The γ_t(i) and γ_t(i, j) (or di-gamma) are related by

    γ_t(i) = Σ_{j=0}^{N−1} γ_t(i, j).

Given the γ and di-gamma we verify below that the model λ = (A, B, π) can be re-estimated as

    1. For i = 0, 1, . . . , N−1, let

           π_i = γ_0(i).                                               (9)

    2. For i = 0, 1, . . . , N−1 and j = 0, 1, . . . , N−1, compute

           a_ij = Σ_{t=0}^{T−2} γ_t(i, j) / Σ_{t=0}^{T−2} γ_t(i).      (10)
. π) with a best guess or. Compute αt (i). . N − 1 and k = 0. since exactly uniform values result in a local maximum from which the model cannot climb.. . . 2. . . 1.. βt (i). 1. N − 1. . for i = 0. B. j) and γt (i). π).T −2} Ot =k γt (j) t=0 γt (j) . In fact. T − 1 and i = 0.. compute T −2 bj (k) = t∈{0. N − 1.3. First. while the denominator is the expected number of times the model is in state qj . a DP can be viewed as an αpass where “sum” is replaced by “max”. . the dynamic programming algorithm can be stated as 1. M − 1.. goto 2. Reestimation is an iterative process... Of course. More precisely. we make a brief detour to show the relationship between dynamic programming (DP) and HMMs. A and B must be row stochastic. we initialize λ = (A. The process can be summarized as 1. . For t = 1. . The numerator of the reestimated bj (k) is the expected number of times the model is in state qj with observation k. 1. .. Let δ0 (i) = πi bi (O0 ). For j = 0. . B and π be randomized. It’s critical that A. compute δt (i) = j∈{0. A and B as above. for π. Reestimate the model λ = (A. we choose random values such that πi ≈ 1/N and aij ≈ 1/N and bj (k) ≈ 1/M . π). . 1. The ratio is the probability of observing symbol k. it might be desirable to stop if P (O  λ) does not increase by at least some predetermined threshold and/or to set a maximum number of iterations. which is the desired value of aij . . π. . If P (O  λ) increases.1. 5 Dynamic programming Before completing our discussion of the elementary aspects of HMMs. 4. . . . 3. (11) The numerator of the reestimated aij can be seen to give the expected number of transitions from state qi to state qj . B. if no reasonable guess is available.N −1} max [δt−1 (j)aji bi (Ot )] 9 . B. . while the denominator is the expected number of transitions from qi to any state. Initialize.1.. 2. γt (i. . As always. 2. λ = (A. given that the model is in state qj . Then the ratio is the probability of transiting from state qi to state qj . 
which is the desired value of bj (k).
At each successive t, the DP determines the probability of the best path ending at each of the states i = 0, 1, . . . , N−1. Consequently, the probability of the best overall path is

    max_{j ∈ {0,1,...,N−1}} [ δ_{T−1}(j) ].                            (12)

Be sure to note that (12) only gives the optimal probability, not the optimal path itself. By keeping track of each preceding state, the DP procedure can be used to recover the optimal path by tracing back from the highest-scoring final state.

For example, consider the temperature example of Section 1. The initial probabilities are

    P(H) = π_0 b_0(0) = 0.6(0.1) = 0.06  and  P(C) = π_1 b_1(0) = 0.4(0.7) = 0.28.

The probabilities of each path of length two are

    P(HH) = 0.06(0.7)(0.4) = 0.0168
    P(HC) = 0.06(0.3)(0.2) = 0.0036
    P(CH) = 0.28(0.4)(0.4) = 0.0448
    P(CC) = 0.28(0.6)(0.2) = 0.0336

and hence the best (most probable) path of length two ending with H is CH while the best path of length two ending with C is CC. Continuing, we construct the diagram in Figure 2 one “level” at a time, where each arrow points to the preceding element in the optimal path up to that particular state.

    H:  .06      .0448      .003136     .002822
    C:  .28      .0336      .014112     .000847

              Figure 2: Dynamic programming

In Figure 2, the maximum final probability is 0.002822, which occurs at the final state H. We can use the arrows to trace back from H to find the optimal path CCCH. Note that at each stage, the dynamic programming algorithm only needs to maintain the highest-scoring paths at each possible state—not a list of all possible paths. This is the key to the efficiency of the DP algorithm.

Underflow is a concern with a dynamic programming problem of this form, since we compute products of probabilities. Fortunately, underflow is easily avoided by simply taking logarithms. The underflow-resistant DP algorithm is

    1. Let δ̂_0(i) = log[ π_i b_i(O_0) ], for i = 0, 1, . . . , N−1.
    2. For t = 1, 2, . . . , T−1 and i = 0, 1, . . . , N−1, compute

           δ̂_t(i) = max_{j ∈ {0,1,...,N−1}} { δ̂_{t−1}(j) + log[a_ji] + log[b_i(O_t)] }.
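The trace-back version of the DP, together with the log-space variant, might look like this in Python (an illustrative sketch of my own, not code from the text):

```python
import math

A  = [[0.7, 0.3], [0.4, 0.6]]
B  = [[0.1, 0.4, 0.5], [0.7, 0.2, 0.1]]
pi = [0.6, 0.4]
O  = [0, 1, 0, 2]
N, T = 2, len(O)

# delta[t][i] = probability of the best path ending in state i at time t;
# back[t][i] remembers the best predecessor so the path can be recovered.
delta = [[pi[i] * B[i][O[0]] for i in range(N)]] + [[0.0] * N for _ in range(T-1)]
back = [[0] * N for _ in range(T)]
for t in range(1, T):
    for i in range(N):
        cand = [delta[t-1][j] * A[j][i] * B[i][O[t]] for j in range(N)]
        back[t][i] = max(range(N), key=lambda j: cand[j])
        delta[t][i] = cand[back[t][i]]

best_prob = max(delta[T-1])                        # eq. (12)
state = max(range(N), key=lambda i: delta[T-1][i])
path = [state]
for t in range(T-1, 0, -1):                        # trace back along the arrows
    state = back[t][state]
    path.append(state)
path.reverse()

# Underflow-resistant check: the same recursion on logs gives log(best_prob).
log_delta = [math.log(pi[i] * B[i][O[0]]) for i in range(N)]
for t in range(1, T):
    log_delta = [max(log_delta[j] + math.log(A[j][i]) + math.log(B[i][O[t]])
                     for j in range(N)) for i in range(N)]
assert abs(max(log_delta) - math.log(best_prob)) < 1e-9
```

With 0 = H and 1 = C, path comes out as [1, 1, 1, 0], i.e. CCCH, with probability 0.002822, as in Figure 2.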
In this case, the optimal score is

    max_{j ∈ {0,1,...,N−1}} [ δ̂_{T−1}(j) ].

Of course, additional bookkeeping is still required in order to find the optimal path.

6 HMM Scaling

The three HMM solutions in Section 4 all require computations involving products of probabilities. It is easy to see, for example, that α_t(i) tends to 0 exponentially as T increases. Therefore, any attempt to implement the formulae as given above will inevitably result in underflow. The solution to this underflow problem is to scale the numbers. However, care must be taken to insure that, for example, the re-estimation formulae remain valid.

First, consider the computation of α_t(i). The basic recurrence is

    α_t(i) = Σ_{j=0}^{N−1} α_{t−1}(j) a_ji b_i(O_t).

It seems sensible to normalize each α_t(i) by dividing by the sum (over j) of α_t(j). The scaling procedure is as follows. Let α̃_0(i) = α_0(i) for i = 0, 1, . . . , N−1. Then let c_0 = 1 / Σ_{j=0}^{N−1} α̃_0(j) and, finally, α̂_0(i) = c_0 α̃_0(i) for i = 0, 1, . . . , N−1. Then for each t = 1, 2, . . . , T−1 do the following.

    1. For i = 0, 1, . . . , N−1, compute

           α̃_t(i) = Σ_{j=0}^{N−1} α̂_{t−1}(j) a_ji b_i(O_t).

    2. Let

           c_t = 1 / Σ_{j=0}^{N−1} α̃_t(j).

    3. For i = 0, 1, . . . , N−1, compute

           α̂_t(i) = c_t α̃_t(i).

Clearly, α̂_0(i) = c_0 α_0(i). We must verify that the re-estimation formulae still hold; to do so, suppose that

    α̂_t(i) = c_0 c_1 · · · c_t α_t(i).                                (13)
Then

    α̂_{t+1}(i) = c_{t+1} α̃_{t+1}(i)
               = c_{t+1} Σ_{j=0}^{N−1} α̂_t(j) a_ji b_i(O_{t+1})
               = c_0 c_1 · · · c_t c_{t+1} Σ_{j=0}^{N−1} α_t(j) a_ji b_i(O_{t+1})
               = c_0 c_1 · · · c_{t+1} α_{t+1}(i)

and hence (13) holds, by induction, for all t. From (13) and the definitions of α̃ and α̂ it follows that

    α̂_t(i) = α_t(i) / Σ_{j=0}^{N−1} α_t(j)                            (14)

and hence the α̂_t(i) are the desired scaled values of α_t(i) for all t. As a consequence of (14),

    Σ_{j=0}^{N−1} α̂_{T−1}(j) = 1.

Also, from (13) we have

    Σ_{j=0}^{N−1} α̂_{T−1}(j) = c_0 c_1 · · · c_{T−1} Σ_{j=0}^{N−1} α_{T−1}(j)
                              = c_0 c_1 · · · c_{T−1} P(O | λ).

Combining these results gives

    P(O | λ) = 1 / Π_{j=0}^{T−1} c_j.

To avoid underflow, we instead compute

    log[P(O | λ)] = − Σ_{j=0}^{T−1} log c_j.                           (15)

The same scale factor is used for β_t(i) as was used for α_t(i), namely c_t, so that β̂_t(i) = c_t β_t(i). We then compute γ_t(i, j) and γ_t(i) using the formulae of the previous section with α̂_t(i) and β̂_t(i) in place of α_t(i) and β_t(i), respectively. These values are then used to re-estimate π, A and B. By writing the original re-estimation formulae (9), (10) and (11) directly in terms of α̂_t(i) and β̂_t(i), it is an easy exercise to show that the re-estimated π, A and B are exact when α̂_t(i) and β̂_t(i) are used in place of α_t(i) and β_t(i). Furthermore, P(O | λ) isn’t required in the re-estimation formulae, since in each case it cancels in the numerator and denominator. Therefore, we have no need for the actual value of P(O | λ), the calculation of which would inevitably result in underflow. Fortunately, (15) can be used to verify that P(O | λ) is increasing at each iteration.

7 Putting it all together

Here we give complete pseudo-code for solving Problem 3, including scaling. This pseudo-code also provides everything needed to solve Problems 1 and 2. The values N and M are fixed and the T observations O = (O_0, O_1, . . . , O_{T−1}) are assumed known.

1. Initialization:

Select initial values for the matrices A, B and π, where π is 1 × N, while A = {a_ij} is N × N and B = {b_j(k)} is N × M, and all three matrices are row stochastic. If known, use reasonable approximations for the matrix values; otherwise let π_i ≈ 1/N and a_ij ≈ 1/N and b_j(k) ≈ 1/M. Be sure that each row sums to 1 and the elements of each matrix are not uniform. Let

    maxIters = maximum number of re-estimation iterations
    iters = 0
    oldLogProb = −∞.

2. The α-pass

    // compute α_0(i)
    c_0 = 0
    for i = 0 to N−1
        α_0(i) = π(i) b_i(O_0)
        c_0 = c_0 + α_0(i)
    next i
    // scale the α_0(i)
    c_0 = 1/c_0
    for i = 0 to N−1
        α_0(i) = c_0 α_0(i)
    next i

    // compute α_t(i)
    for t = 1 to T−1
        c_t = 0
        for i = 0 to N−1
            α_t(i) = 0
            for j = 0 to N−1
                α_t(i) = α_t(i) + α_{t−1}(j) a_ji
            next j
            α_t(i) = α_t(i) b_i(O_t)
            c_t = c_t + α_t(i)
        next i
        // scale α_t(i)
        c_t = 1/c_t
        for i = 0 to N−1
            α_t(i) = c_t α_t(i)
        next i
    next t

3. The β-pass

    // let β_{T−1}(i) = 1, scaled by c_{T−1}
    for i = 0 to N−1
        β_{T−1}(i) = c_{T−1}
    next i
    // β-pass
    for t = T−2 to 0 by −1
        for i = 0 to N−1
            β_t(i) = 0
            for j = 0 to N−1
                β_t(i) = β_t(i) + a_ij b_j(O_{t+1}) β_{t+1}(j)
            next j
            // scale β_t(i) with same scale factor as α_t(i)
            β_t(i) = c_t β_t(i)
        next i
    next t
4. Compute γ_t(i, j) and γ_t(i)

    for t = 0 to T−2
        denom = 0
        for i = 0 to N−1
            for j = 0 to N−1
                denom = denom + α_t(i) a_ij b_j(O_{t+1}) β_{t+1}(j)
            next j
        next i
        for i = 0 to N−1
            γ_t(i) = 0
            for j = 0 to N−1
                γ_t(i, j) = (α_t(i) a_ij b_j(O_{t+1}) β_{t+1}(j)) / denom
                γ_t(i) = γ_t(i) + γ_t(i, j)
            next j
        next i
    next t

5. Re-estimate A, B and π

    // re-estimate π
    for i = 0 to N−1
        π_i = γ_0(i)
    next i
    // re-estimate A
    for i = 0 to N−1
        for j = 0 to N−1
            numer = 0
            denom = 0
            for t = 0 to T−2
                numer = numer + γ_t(i, j)
                denom = denom + γ_t(i)
            next t
            a_ij = numer/denom
        next j
    next i
    // re-estimate B
    for i = 0 to N−1
        for j = 0 to M−1
            numer = 0
            denom = 0
            for t = 0 to T−2
                if (O_t == j) then
                    numer = numer + γ_t(i)
                end if
                denom = denom + γ_t(i)
            next t
            b_i(j) = numer/denom
        next j
    next i

6. Compute log[P(O | λ)]

    logProb = 0
    for i = 0 to T−1
        logProb = logProb + log(c_i)
    next i
    logProb = −logProb

7. To iterate or not to iterate, that is the question

    iters = iters + 1
    if (iters < maxIters and logProb > oldLogProb) then
        oldLogProb = logProb
        goto 2
    else
        output λ = (π, A, B)
    end if

8 A not-so-simple example

In this section we present a classic application of Hidden Markov Models due to Cave and Neuwirth [2]. This application nicely illustrates the strength of HMMs and has the additional advantage that it requires no background in any specialized field such as speech processing.
Suppose Marvin the Martian obtains a large body of English text, such as the “Brown Corpus” [3], which totals about 1,000,000 words. Marvin, who has a working knowledge of HMMs but no knowledge of English, would like to determine some basic properties of this mysterious writing system. A reasonable question he might ask is whether the characters can be partitioned into sets so that the characters in each set are “different” in some statistically significant way.

Marvin might consider attempting the following. First, remove all punctuation, numbers, etc., and convert all letters to lower case. This leaves 26 distinct letters and word space, for a total of 27 symbols. He could then test the hypothesis that there is an underlying Markov process (of order one) with two states, where the state transition probabilities of the A matrix and the observation probabilities—the B matrix—are unknown, while the observations are the series of characters found in the text. For each of these two hidden states, he assumes that the 27 symbols are observed according to fixed probability distributions.

This defines an HMM with N = 2 and M = 27. To find the most probable A and B matrices, Marvin must solve Problem 3 of Section 7.

We programmed this experiment, using the first T = 50,000 observations (letters—converted to lower case—and word spaces) from the “Brown Corpus” [3]. We initialized each element of π and A randomly to approximately 1/2. The precise values used were

    π = [ 0.51316  0.48684 ]

and

    A = [ 0.47468  0.52532 ]
        [ 0.51656  0.48344 ].

Each element of B was initialized to approximately 1/27. The precise values in the initial B matrix (actually, the transpose of B) appear in the second and third columns of Figure 3.

After the initial iteration, we have log[P(O | λ)] = −165097.29 and after 100 iterations, log[P(O | λ)] = −137305.28. This shows that the model has been improved significantly. After 100 iterations, the model λ = (A, B, π) has converged to

    π = [ 0.00000  1.00000 ]

and

    A = [ 0.25596  0.74404 ]
        [ 0.71571  0.28429 ]

with the transpose of B appearing in the last two columns of Figure 3.

[Figure 3: Initial and final B transpose. For each of the 27 symbols (a through z and word space), the figure lists the initial observation probabilities (each approximately 0.037) and the final observation probabilities in each of the two hidden states.]

The most interesting part of this result is the B matrix. Without having made any assumption about the two hidden states, the B matrix tells us that one hidden state contains the vowels while the other hidden state contains the consonants. Curiously, word space is more like a vowel, while y is not even sometimes a vowel. Of course, anyone familiar with English would not be too surprised that there is a clear distinction between vowels and consonants. But the HMM results show us that this distinction is a statistically significant feature inherent in the language. And, thanks to HMMs, this could easily be deduced by Marvin the Martian, who has no other knowledge of the language.

Cave and Neuwirth [2] obtain further interesting results by allowing more than two hidden states. In fact, they are able to obtain and sensibly interpret the results for models with up to 12 hidden states.
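An experiment of this kind requires a full Problem 3 solver. Below is a compact Python sketch of the scaled re-estimation loop of Section 7 (my own illustration, trained here on a short synthetic symbol sequence rather than the Brown Corpus). One deliberate deviation from the pseudo-code: the B update below sums over all t, the standard choice, whereas the pseudo-code of Section 7 stops at T−2.

```python
import math
import random

def baum_welch(O, N, M, iters=30, seed=1):
    """Scaled Baum-Welch re-estimation; returns (pi, A, B, log_probs)."""
    rng = random.Random(seed)

    def rand_row(n):  # roughly uniform, but deliberately not exactly uniform
        row = [1.0 + 0.1 * rng.random() for _ in range(n)]
        s = sum(row)
        return [x / s for x in row]

    pi = rand_row(N)
    A = [rand_row(N) for _ in range(N)]
    B = [rand_row(M) for _ in range(N)]
    T = len(O)
    log_probs = []

    for _ in range(iters):
        # Scaled alpha-pass.
        alpha = [[0.0] * N for _ in range(T)]
        c = [0.0] * T
        for i in range(N):
            alpha[0][i] = pi[i] * B[i][O[0]]
        c[0] = 1.0 / sum(alpha[0])
        alpha[0] = [a * c[0] for a in alpha[0]]
        for t in range(1, T):
            for i in range(N):
                alpha[t][i] = sum(alpha[t-1][j] * A[j][i] for j in range(N)) * B[i][O[t]]
            c[t] = 1.0 / sum(alpha[t])
            alpha[t] = [a * c[t] for a in alpha[t]]
        log_probs.append(-sum(math.log(ct) for ct in c))   # eq. (15)

        # Scaled beta-pass, using the same scale factors.
        beta = [[0.0] * N for _ in range(T)]
        beta[T-1] = [c[T-1]] * N
        for t in range(T-2, -1, -1):
            for i in range(N):
                beta[t][i] = c[t] * sum(A[i][j] * B[j][O[t+1]] * beta[t+1][j]
                                        for j in range(N))

        # Gammas and di-gammas (with this scaling no further division is needed).
        digam = [[[alpha[t][i] * A[i][j] * B[j][O[t+1]] * beta[t+1][j]
                   for j in range(N)] for i in range(N)] for t in range(T-1)]
        gamma = [[sum(digam[t][i]) for i in range(N)] for t in range(T-1)]
        gamma.append(list(alpha[T-1]))   # gamma_{T-1}(i) equals scaled alpha

        # Re-estimate pi, A and B.
        pi = list(gamma[0])
        for i in range(N):
            d = sum(gamma[t][i] for t in range(T-1))
            A[i] = [sum(digam[t][i][j] for t in range(T-1)) / d for j in range(N)]
            e = sum(gamma[t][i] for t in range(T))
            B[i] = [sum(gamma[t][i] for t in range(T) if O[t] == k) / e
                    for k in range(M)]
    return pi, A, B, log_probs

# Train a 2-state model on a short synthetic sequence over 3 symbols.
obs = [0, 1, 0, 2, 2, 1, 0, 1, 2, 0, 0, 1, 2, 2, 1, 0, 1, 0, 2, 1]
pi, A, B, log_probs = baum_welch(obs, N=2, M=3)
```

As (15) and the iteration rule predict, log P(O | λ) never decreases from one iteration to the next, which is a useful sanity check on any implementation.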
9 Exercises

1. Write the re-estimation formulae (9), (10) and (11) directly in terms of α̂ and β̂, that is, substitute α̂ and β̂ for α and β, and show that the resulting re-estimation formulae are exact.

2. Instead of using c_t to scale the β_t(i), scale each β_t(i) by

       d_t = 1 / Σ_{j=0}^{N−1} β̃_t(j),

   where the definition of β̃ is similar to that of α̃.
   a. Using the scaling factors c_t and d_t, show that the re-estimation formulae obtained in Exercise 1 are exact with α̂ and β̂ in place of α and β.
   b. Write log[P(O | λ)] in terms of c_t and d_t.

3. Write an HMM program to solve the problem discussed in Section 8 with
   a. Three hidden states.
   b. Four hidden states.
   Explain your results.

4. Write an HMM program to solve the English language problem discussed in Section 8, replacing the English text with
   a. French.
   b. Russian.
   c. Chinese.
   Explain your results.

5. Consider an HMM where for each t the state transition matrix is time dependent. Then for each t, there is an N × N row-stochastic matrix A_t = {a_ij(t)} that is used in place of A in the HMM computations. For such an HMM,
   a. Give pseudo-code to solve Problem 1.
   b. Give pseudo-code to solve Problem 2.
   c. Give pseudo-code to solve Problem 3.

6. Consider an HMM of order two, that is, an HMM where the underlying Markov process is of order two. Then the state at time t depends on the states at times t−1 and t−2 and a fixed set of probabilities. For an HMM of order two,
   a. Give pseudo-code to solve Problem 1.
   b. Give pseudo-code to solve Problem 2.

7. Perform an HMM analysis similar to that discussed in Section 8, replacing English with “Hamptonese”; see

       http://www.cs.sjsu.edu/faculty/stamp/Hampton/hampton.html

   for information on Hamptonese.

References

[1] L. R. Rabiner, A tutorial on hidden Markov models and selected applications in speech recognition, Proceedings of the IEEE, Vol. 77, No. 2, February 1989, available for download at http://www.cs.ucsb.edu/~cs281b/papers/HMMs%20%20Rabiner.pdf

[2] R. L. Cave and L. P. Neuwirth, Hidden Markov models for English, in J. D. Ferguson, editor, Hidden Markov Models for Speech, IDA-CRD, Princeton, NJ, October 1980.

[3] The Brown Corpus of Standard American English, available for download at http://www.cs.toronto.edu/~gpenn/csc401/a1res.html