Chapter 2

2.3 Since m is not a prime, it can be factored as the product of two integers a and b,
m = a · b
with 1 < a, b < m. It is clear that both a and b are in the set {1, 2, · · · , m − 1}. It follows
from the definition of modulo-m multiplication that
a · b = 0.
Since 0 is not an element in the set {1, 2, · · · , m−1}, the set is not closed under the modulo-m
multiplication and hence can not be a group.
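The closure argument above can be checked numerically; the following sketch (function name ours) tests whether {1, . . . , m − 1} is closed under modulo-m multiplication.

```python
# Sketch verifying the argument of Problem 2.3: for a composite modulus m,
# the set {1, ..., m-1} is not closed under modulo-m multiplication.
# (The function name is ours, not from the text.)

def is_closed_under_mod_mul(m):
    """Check whether {1, ..., m-1} is closed under modulo-m multiplication."""
    elems = set(range(1, m))
    return all((a * b) % m in elems for a in elems for b in elems)

# For a composite m = a * b, the product a * b reduces to 0 modulo m,
# which lies outside {1, ..., m-1}:
assert not is_closed_under_mod_mul(6)   # 2 * 3 = 0 (mod 6)
assert not is_closed_under_mod_mul(15)  # 3 * 5 = 0 (mod 15)
# For a prime m the set is closed (and in fact forms a group):
assert is_closed_under_mod_mul(7)
```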
2.5 It follows from Problem 2.3 that, if m is not a prime, the set {1, 2, · · · , m− 1} can not be a
group under the modulo-m multiplication. Consequently, the set {0, 1, 2, · · · , m−1} can not
be a field under the modulo-m addition and multiplication.
2.7 First we note that the set of sums of the unit element contains the zero element 0. For any 1 ≤ ℓ < λ,

∑_{i=1}^{ℓ} 1 + ∑_{i=1}^{λ−ℓ} 1 = ∑_{i=1}^{λ} 1 = 0.

Hence every sum has an inverse with respect to the addition operation of the field GF(q). Since
the sums are elements in GF(q), they must satisfy the associative and commutative laws with
respect to the addition operation of GF(q). Therefore, the sums form a commutative group
under the addition of GF(q).
Next we note that the sums contain the unit element 1 of GF(q). For each nonzero sum

∑_{i=1}^{ℓ} 1

with 1 ≤ ℓ < λ, we want to show that it has a multiplicative inverse with respect to the multiplication operation of GF(q). Since λ is prime, ℓ and λ are relatively prime and there exist two integers a and b such that

a · ℓ + b · λ = 1,   (1)

where a and λ are also relatively prime. Dividing a by λ, we obtain

a = kλ + r with 0 ≤ r < λ.   (2)

Since a and λ are relatively prime, r ≠ 0. Hence

1 ≤ r < λ.

Combining (1) and (2), we have

ℓ · r = −(b + kℓ) · λ + 1.

Consider

(∑_{i=1}^{ℓ} 1) · (∑_{i=1}^{r} 1) = ∑_{i=1}^{ℓ·r} 1 = ∑_{i=1}^{−(b+kℓ)·λ+1} 1
                                  = (∑_{i=1}^{λ} 1)(∑_{i=1}^{−(b+kℓ)} 1) + 1
                                  = 0 + 1 = 1.

Hence, every nonzero sum has an inverse with respect to the multiplication operation of GF(q).
Since the nonzero sums are elements of GF(q), they obey the associative and commutative
laws with respect to the multiplication of GF(q). Also the sums satisfy the distributive law.
As a result, the sums form a field, a subfield of GF(q).
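The key step above is the Bézout identity (1), which produces the multiplicative inverse. A small sketch (helper name ours) carries out the same extended-Euclidean computation for a prime characteristic:

```python
# Sketch of the inverse construction in Problem 2.7: for a prime lam and
# 1 <= l < lam, the identity a*l + b*lam = 1 yields r = a mod lam with
# l * r = 1 (mod lam). The function name is ours.

def inverse_mod(l, lam):
    """Return r in {1, ..., lam-1} with (l * r) % lam == 1, via extended Euclid."""
    old_r, r = l, lam
    old_a, a = 1, 0
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_a, a = a, old_a - q * a
    assert old_r == 1  # gcd(l, lam) = 1 since lam is prime and 0 < l < lam
    return old_a % lam

lam = 11  # a prime characteristic (our sample value)
for l in range(1, lam):
    r = inverse_mod(l, lam)
    assert 1 <= r < lam and (l * r) % lam == 1
```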
2.8 Consider the finite field GF(q). Let n be the maximum order of the nonzero elements of GF(q)
and let α be an element of order n. It follows from Theorem 2.9 that n divides q − 1, i.e.

q − 1 = k · n.

Thus n ≤ q − 1. Let β be any other nonzero element in GF(q) and let e be the order of β.
Suppose that e does not divide n. Let (n, e) be the greatest common factor of n and e. Then
e/(n, e) and n are relatively prime. Consider the element

β^{(n,e)}.

This element has order e/(n, e). The element

α β^{(n,e)}

has order n · e/(n, e), which is greater than n. This contradicts the fact that n is the maximum
order of nonzero elements in GF(q). Hence e must divide n. Therefore, the order of each
nonzero element of GF(q) is a factor of n. This implies that each nonzero element of GF(q)
is a root of the polynomial

X^n − 1.

Since this polynomial has at most n roots, q − 1 ≤ n. Since n ≤ q − 1 (by Theorem 2.9), we must have

n = q − 1.

Thus the maximum order of the nonzero elements in GF(q) is q − 1. The elements of order q − 1
are then primitive elements.
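The conclusion can be illustrated numerically in a small prime field, say GF(11) (our choice of q):

```python
# Numerical check of Problem 2.8 in the prime field GF(11):
# the maximum multiplicative order of a nonzero element is q - 1,
# and every order divides q - 1. Helper name is ours.

def order(x, q):
    """Multiplicative order of x in GF(q), q prime."""
    y, n = x, 1
    while y != 1:
        y = (y * x) % q
        n += 1
    return n

q = 11
orders = [order(x, q) for x in range(1, q)]
assert max(orders) == q - 1                    # a primitive element exists
assert all((q - 1) % n == 0 for n in orders)   # every order divides q - 1
```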
2.11 (a) Suppose that f(X) is irreducible but its reciprocal f*(X) is not. Then

f*(X) = a(X) · b(X),

where the degrees of a(X) and b(X) are nonzero. Let k and m be the degrees of a(X) and
b(X) respectively. Clearly, k + m = n. Since the reciprocal of f*(X) is f(X),

f(X) = X^n f*(1/X) = X^k a(1/X) · X^m b(1/X).

This says that f(X) is not irreducible, a contradiction to the hypothesis. Hence f*(X)
must be irreducible. Similarly, we can prove that if f*(X) is irreducible, f(X) is also
irreducible. Consequently, f*(X) is irreducible if and only if f(X) is irreducible.
(b) Suppose that f(X) is primitive but f*(X) is not. Then there exists a positive integer k less
than 2^n − 1 such that f*(X) divides X^k + 1. Let

X^k + 1 = f*(X) q(X).

Taking the reciprocals of both sides of the above equality, we have

X^k + 1 = X^k f*(1/X) q(1/X)
        = X^n f*(1/X) · X^{k−n} q(1/X)
        = f(X) · X^{k−n} q(1/X).

This implies that f(X) divides X^k + 1 with k < 2^n − 1, a contradiction to the hypothesis
that f(X) is primitive. Hence f*(X) must also be primitive. Similarly, if f*(X) is
primitive, f(X) must also be primitive. Consequently, f*(X) is primitive if and only if f(X)
is primitive.
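Both directions can be spot-checked by brute force over GF(2); the bit-mask encoding (bit i = coefficient of X^i) and helper names below are ours, with f(X) = 1 + X + X^4 as a sample polynomial:

```python
# Sketch illustrating Problem 2.11(a): the reciprocal of a binary polynomial
# and a brute-force irreducibility test over GF(2). Helper names are ours.

def reciprocal(p, n):
    """X^n * p(1/X) for a degree-n binary polynomial p (integer bit mask)."""
    return int(format(p, f'0{n + 1}b')[::-1], 2)

def gf2_mul(a, b):
    """Carry-free (GF(2)) polynomial multiplication of bit masks."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def is_irreducible(p):
    """True if the binary polynomial p has no nontrivial factorization."""
    deg = p.bit_length() - 1
    for a in range(2, 1 << deg):
        for b in range(a, 1 << deg):
            if gf2_mul(a, b) == p:
                return False
    return True

f = 0b10011                 # f(X) = 1 + X + X^4, irreducible over GF(2)
f_star = reciprocal(f, 4)   # f*(X) = 1 + X^3 + X^4
assert is_irreducible(f) and is_irreducible(f_star)
```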
2.15 We only need to show that β, β^2, · · · , β^{2^{e−1}} are distinct. Suppose that

β^{2^i} = β^{2^j}

for 0 ≤ i < j < e. Then

(β^{2^{j−i}−1})^{2^i} = 1.

Since the order of β is a factor of 2^m − 1, it must be odd. For (β^{2^{j−i}−1})^{2^i} = 1, we must
then have

β^{2^{j−i}−1} = 1.

Since both i and j are less than e, j − i < e. This is a contradiction to the fact that e is the
smallest positive integer such that

β^{2^e−1} = 1.

Hence β^{2^i} ≠ β^{2^j} for 0 ≤ i < j < e.
2.16 Let n′ be the order of β^{2^i}. Then

(β^{2^i})^{n′} = 1.

Hence

(β^{n′})^{2^i} = 1.   (1)

Since the order n of β is odd, n and 2^i are relatively prime. From (1), n divides n′ · 2^i, so n
divides n′ and

n′ = kn.   (2)

Now consider

(β^{2^i})^n = (β^n)^{2^i} = 1.

This implies that n′ (the order of β^{2^i}) divides n. Hence

n′ ≤ n.   (3)

From (2) and (3), we conclude that

n′ = n.
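The claims of Problems 2.15 and 2.16 can be verified exhaustively in a small field such as GF(2^4), built here modulo the primitive polynomial 1 + X + X^4 (our choice of construction; helper names are ours):

```python
# Numerical check of Problems 2.15/2.16 in GF(2^4), represented modulo
# p(X) = 1 + X + X^4: the conjugates beta^(2^i) of beta are distinct and
# all have the same order as beta.

def gf16_mul(a, b):
    """Multiply two GF(2^4) elements (bit masks) modulo 1 + X + X^4."""
    p = 0b10011
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0b10000:
            a ^= p
    return r

def gf16_order(x):
    """Multiplicative order of a nonzero element x of GF(2^4)."""
    y, n = x, 1
    while y != 1:
        y = gf16_mul(y, x)
        n += 1
    return n

for beta in range(2, 16):
    e = gf16_order(beta)
    conj, conjugates = beta, set()
    while conj not in conjugates:
        conjugates.add(conj)
        assert gf16_order(conj) == e     # Problem 2.16: same order
        conj = gf16_mul(conj, conj)      # next conjugate beta^(2^i)
```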
2.20 Note that c · v = c · (0 + v) = c · 0 + c · v. Adding −(c · v) to both sides of the above equality,
we have

c · v + [−(c · v)] = c · 0 + c · v + [−(c · v)],
0 = c · 0 + 0.

Since 0 is the additive identity of the vector space, we then have

c · 0 = 0.
2.21 Note that 0 · v = 0. Then for any c in F,

(−c + c) · v = 0,
(−c) · v + c · v = 0.

Hence (−c) · v is the additive inverse of c · v, i.e.

−(c · v) = (−c) · v.   (1)

Since c · 0 = 0 (Problem 2.20),

c · (−v + v) = 0,
c · (−v) + c · v = 0.

Hence c · (−v) is the additive inverse of c · v, i.e.

−(c · v) = c · (−v).   (2)

From (1) and (2), we obtain

−(c · v) = (−c) · v = c · (−v).
2.22 By Theorem 2.22, S is a subspace if (i) for any u and v in S, u + v is in S, and (ii) for any c
in F and u in S, c · u is in S. The first condition is given, so we only have to show that the
second condition is implied by the first for F = GF(2). Let u be any element in S.
It follows from the given condition that

u + u = 0

is also in S. Let c be an element in GF(2). Then, for any u in S,

c · u = 0 for c = 0,  and  c · u = u for c = 1.

Clearly c · u is also in S. Hence S is a subspace.
2.24 If the elements of GF(2^m) are represented by m-tuples over GF(2), the proof that GF(2^m) is
a vector space over GF(2) is straightforward.
2.27 Let u and v be any two elements in S_1 ∩ S_2. It is clear that u and v are elements in S_1, and u
and v are elements in S_2. Since S_1 and S_2 are subspaces,

u + v ∈ S_1 and u + v ∈ S_2.

Hence, u + v is in S_1 ∩ S_2. Now let x be any vector in S_1 ∩ S_2. Then x ∈ S_1 and x ∈ S_2.
Again, since S_1 and S_2 are subspaces, for any c in the field F, c · x is in S_1 and also in S_2.
Hence c · x is in the intersection S_1 ∩ S_2. It follows from Theorem 2.22 that S_1 ∩ S_2 is a
subspace.
Chapter 3
3.1 The generator and parity-check matrices are:

G =
[ 0 1 1 1 1 0 0 0 ]
[ 1 1 1 0 0 1 0 0 ]
[ 1 1 0 1 0 0 1 0 ]
[ 1 0 1 1 0 0 0 1 ]

H =
[ 1 0 0 0 0 1 1 1 ]
[ 0 1 0 0 1 1 1 0 ]
[ 0 0 1 0 1 1 0 1 ]
[ 0 0 0 1 1 0 1 1 ]

From the parity-check matrix we see that each column contains an odd number of ones, and no
two columns are alike. Thus no two columns sum to zero, and any three columns sum to a
4-tuple with an odd number of ones, hence not to zero. However, the first, the second, the third
and the sixth columns sum to zero. Therefore, the minimum distance of the code is 4.
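The minimum distance claimed above can be confirmed by enumerating all 15 nonzero codewords generated by G:

```python
# Brute-force check of the minimum distance claimed in Problem 3.1,
# using the rows of G given above.
from itertools import product

G = [[0, 1, 1, 1, 1, 0, 0, 0],
     [1, 1, 1, 0, 0, 1, 0, 0],
     [1, 1, 0, 1, 0, 0, 1, 0],
     [1, 0, 1, 1, 0, 0, 0, 1]]

def codeword(msg):
    """Encode a 4-bit message as msg * G over GF(2)."""
    return [sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G)]

weights = [sum(codeword(m)) for m in product([0, 1], repeat=4) if any(m)]
assert min(weights) == 4   # minimum distance of the code is 4
```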
3.4 (a) The matrix H_1 is an (n − k + 1) × (n + 1) matrix. First we note that the n − k rows of H are
linearly independent. It is clear that the first n − k rows of H_1 are also linearly independent.
The last row of H_1 has a '1' at its first position, but every other row of H_1 has a '0' at its first
position. Hence any linear combination including the last row of H_1 can never yield the zero vector.
Thus all the rows of H_1 are linearly independent, and the row space of H_1 has dimension
n − k + 1. The dimension of its null space, C_1, is then equal to

dim(C_1) = (n + 1) − (n − k + 1) = k.

Hence C_1 is an (n + 1, k) linear code.

(b) Note that the last row of H_1 is the all-one vector. The inner product of a vector with odd
weight and the all-one vector is '1'. Hence, for any odd-weight vector v,

v · H_1^T ≠ 0,

and v cannot be a code word in C_1. Therefore, C_1 consists of only even-weight code words.
(c) Let v be a code word in C. Then v · H^T = 0. Extend v by adding a digit v_∞ to its left.
This results in a vector of n + 1 digits,

v_1 = (v_∞, v) = (v_∞, v_0, v_1, · · · , v_{n−1}).

For v_1 to be a vector in C_1, we must require that

v_1 · H_1^T = 0.

First we note that the inner product of v_1 with any of the first n − k rows of H_1 is 0. The inner
product of v_1 with the last row of H_1 is

v_∞ + v_0 + v_1 + · · · + v_{n−1}.

For this sum to be zero, we must require that v_∞ = 1 if the vector v has odd weight and
v_∞ = 0 if the vector v has even weight. Therefore, any vector v_1 formed as above is a code
word in C_1, and there are 2^k such code words. Since the dimension of C_1 is k, these 2^k code
words are all the code words of C_1.
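The construction of parts (b) and (c) can be illustrated on a concrete code; the sketch below extends a (7, 4) Hamming code (our example, not from the text) by an overall parity digit:

```python
# Sketch of the extension in Problem 3.4(c): prepend an overall parity digit
# to each codeword of a small example code (a (7, 4) Hamming code, our
# choice), and check that the extended code has only even-weight words.
from itertools import product

G = [[1, 1, 0, 1, 0, 0, 0],   # a (7, 4) Hamming generator matrix (our choice)
     [0, 1, 1, 0, 1, 0, 0],
     [1, 1, 1, 0, 0, 1, 0],
     [1, 0, 1, 0, 0, 0, 1]]

def encode(msg):
    return [sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G)]

extended = []
for msg in product([0, 1], repeat=4):
    v = encode(msg)
    v1 = [sum(v) % 2] + v          # v_infinity = overall parity of v
    extended.append(tuple(v1))

assert all(sum(v1) % 2 == 0 for v1 in extended)  # only even-weight code words
assert len(set(extended)) == 16                  # still 2^k code words
```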
3.5 Let C_e be the set of code words in C with even weight and let C_o be the set of code words in
C with odd weight. Let x be any odd-weight code vector from C_o. Adding x to each vector in
C_o, we obtain a set C′_e of even-weight vectors. The number of vectors in C′_e is equal to the
number of vectors in C_o, i.e. |C′_e| = |C_o|. Also C′_e ⊆ C_e. Thus,

|C_o| ≤ |C_e|.   (1)

Now adding x to each vector in C_e, we obtain a set C′_o of odd-weight code words. The number
of vectors in C′_o is equal to the number of vectors in C_e, and

C′_o ⊆ C_o.

Hence

|C_e| ≤ |C_o|.   (2)

From (1) and (2), we conclude that |C_o| = |C_e|.
3.6 (a) From the given condition on G, we see that, for any digit position, there is a row in G
with a nonzero component at that position. This row is a code word in C. Hence in the code
array, each column contains at least one nonzero entry. Therefore no column in the code array
contains only zeros.
(b) Consider the ℓ-th column of the code array. From part (a) we see that this column contains
at least one '1'. Let S_0 be the set of code words with a '0' at the ℓ-th position and S_1 the
set of code words with a '1' at the ℓ-th position. Let x be a code word from S_1. Adding x to each
vector in S_0, we obtain a set S′_1 of code words with a '1' at the ℓ-th position. Clearly,

|S′_1| = |S_0|   (1)

and

S′_1 ⊆ S_1.   (2)

Adding x to each vector in S_1, we obtain a set S′_0 of code words with a '0' at the ℓ-th
location. We see that

|S′_0| = |S_1|   (3)

and

S′_0 ⊆ S_0.   (4)

From (1) and (2), we obtain

|S_0| ≤ |S_1|.   (5)

From (3) and (4), we obtain

|S_1| ≤ |S_0|.   (6)

From (5) and (6) we have |S_0| = |S_1|. This implies that the ℓ-th column of the code array
consists of 2^{k−1} zeros and 2^{k−1} ones.
(c) Let S_0 be the set of code words with a '0' at the ℓ-th position. From part (b), we see that
S_0 consists of 2^{k−1} code words. Let x and y be any two code words in S_0. The sum x + y
also has a zero at the ℓ-th location and hence is a code word in S_0. Therefore S_0 is a subspace
of the vector space of all n-tuples over GF(2). Since S_0 is a subset of C, it is a subspace of C.
The dimension of S_0 is k − 1.
3.7 Let x, y and z be any three n-tuples over GF(2). Note that

d(x, y) = w(x + y),
d(y, z) = w(y + z),
d(x, z) = w(x + z).

It is easy to see that

w(u) + w(v) ≥ w(u + v).   (1)

Let u = x + y and v = y + z. It follows from (1) that

w(x + y) + w(y + z) ≥ w(x + y + y + z) = w(x + z).

From the above inequality, we have

d(x, y) + d(y, z) ≥ d(x, z).
3.8 From the given condition, we see that λ < (d_min − 1)/2. It follows from Theorem 3.5 that all
the error patterns of λ or fewer errors can be used as coset leaders in a standard array. Hence,
they are correctable. In order to show that any error pattern of ℓ or fewer errors is detectable,
we need to show that no error pattern x of ℓ or fewer errors can be in the same coset as an
error pattern y of λ or fewer errors. Suppose that x and y are in the same coset. Then x + y
is a nonzero code word. The weight of this code word is

w(x + y) ≤ w(x) + w(y) ≤ ℓ + λ < d_min.

This is impossible since the minimum weight of the code is d_min. Hence x and y are in
different cosets. As a result, when x occurs, it will not be mistaken for y. Therefore x is
detectable.
3.11 In a systematic linear code, every nonzero code vector has at least one nonzero component in
its information section (i.e. the rightmost k positions). Hence a nonzero vector that consists of
only zeros in its rightmost k positions can not be a code word in any of the systematic codes in Γ.
Now consider a nonzero vector v = (v_0, v_1, · · · , v_{n−1}) with at least one nonzero component
in its k rightmost positions, say v_{n−k+i} = 1 for some i with 0 ≤ i < k. Consider a matrix G of the
following form, which has v as its i-th row:

G =
[ p_{0,0}      p_{0,1}      · · ·  p_{0,n−k−1}      1  0  · · ·  0  · · ·  0 ]
[ p_{1,0}      p_{1,1}      · · ·  p_{1,n−k−1}      0  1  · · ·  0  · · ·  0 ]
[   ...                                                                      ]
[ v_0          v_1          · · ·  v_{n−k−1}        v_{n−k}  v_{n−k+1}  · · ·  v_{n−1} ]
[ p_{i+1,0}    p_{i+1,1}    · · ·  p_{i+1,n−k−1}    0  · · ·  1  · · ·  0 ]
[   ...                                                                      ]
[ p_{k−1,0}    p_{k−1,1}    · · ·  p_{k−1,n−k−1}    0  0  · · ·  0  · · ·  1 ]

By elementary row operations, we can put G into systematic form G_1. The code generated
by G_1 contains v as a code word. Since each p_{ij} has 2 choices, 0 or 1, there are 2^{(k−1)(n−k)}
matrices G with v as the i-th row. Each can be put into systematic form G_1, and each G_1
generates a systematic code containing v as a code word. Hence v is contained in 2^{(k−1)(n−k)}
codes in Γ.
3.13 The generator matrix of the code is

G = [ P_1  I_k  P_2  I_k ] = [ G_1  G_2 ].

Hence a nonzero codeword in C is simply a cascade of a nonzero codeword v_1 in C_1 and a
nonzero codeword v_2 in C_2, i.e.,

(v_1, v_2).

Since w(v_1) ≥ d_1 and w(v_2) ≥ d_2, we have w[(v_1, v_2)] ≥ d_1 + d_2.
3.15 It follows from Theorem 3.5 that all the vectors of weight t or less can be used as coset leaders.
There are

C(n, 0) + C(n, 1) + · · · + C(n, t)

such vectors, where C(n, i) denotes the binomial coefficient "n choose i". Since there are 2^{n−k}
cosets, we must have

2^{n−k} ≥ C(n, 0) + C(n, 1) + · · · + C(n, t).

Taking the logarithm of both sides of the above inequality, we obtain the Hamming bound on t,

n − k ≥ log_2 {1 + C(n, 1) + · · · + C(n, t)}.
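The bound can be evaluated for specific codes; it holds with equality for the perfect (23, 12) Golay code:

```python
# The Hamming bound of Problem 3.15, checked for the single-error-correcting
# (15, 11) Hamming code (t = 1) and the triple-error-correcting (23, 12)
# Golay code (t = 3). Function name is ours.
from math import comb, log2

def hamming_bound_holds(n, k, t):
    return n - k >= log2(sum(comb(n, i) for i in range(t + 1)))

assert hamming_bound_holds(15, 11, 1)   # 2^4 = 16 >= 1 + 15
assert hamming_bound_holds(23, 12, 3)   # 2^11 = 2048 = 1 + 23 + 253 + 1771
```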
3.16 Arrange the 2^k code words as a 2^k × n array. From Problem 3.6(b), each column of this code
array contains 2^{k−1} zeros and 2^{k−1} ones. Thus the total number of ones in the array is n · 2^{k−1}.
Note that each of the 2^k − 1 nonzero code words has weight (number of ones) at least d_min. Hence

(2^k − 1) · d_min ≤ n · 2^{k−1}.

This implies that

d_min ≤ n · 2^{k−1} / (2^k − 1).
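A quick check of this bound on two small codes:

```python
# The bound of Problem 3.16, evaluated for the (8, 4) code of Problem 3.1
# (d_min = 4) and a (7, 4) Hamming code (d_min = 3). Function name is ours.

def bound(n, k):
    return n * 2 ** (k - 1) / (2 ** k - 1)

assert 4 <= bound(8, 4)   # 4 <= 64/15
assert 3 <= bound(7, 4)   # 3 <= 56/15
```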
3.17 The number of nonzero vectors of length n and weight d − 1 or less is

∑_{i=1}^{d−1} C(n, i).

From the result of Problem 3.11, each of these vectors is contained in at most 2^{(k−1)(n−k)} linear
systematic codes. Therefore there are at most

M = 2^{(k−1)(n−k)} ∑_{i=1}^{d−1} C(n, i)

linear systematic codes that contain nonzero codewords of weight d − 1 or less. The total number
of linear systematic codes is

N = 2^{k(n−k)}.

If M < N, there exists at least one code with minimum weight at least d. M < N implies
that

2^{(k−1)(n−k)} ∑_{i=1}^{d−1} C(n, i) < 2^{k(n−k)},

∑_{i=1}^{d−1} C(n, i) < 2^{n−k}.
3.18 Let d_min be the smallest positive integer such that

∑_{i=1}^{d_min−1} C(n, i) < 2^{n−k} ≤ ∑_{i=1}^{d_min} C(n, i).

From Problem 3.17, the first inequality guarantees the existence of a systematic linear code
with minimum distance at least d_min.
Chapter 4
4.1 A parity-check matrix for the (15, 11) Hamming code is

H =
[ 1 0 0 0 1 0 0 1 1 0 1 0 1 1 1 ]
[ 0 1 0 0 1 1 0 0 0 1 1 1 0 1 1 ]
[ 0 0 1 0 0 1 1 0 1 0 1 1 1 0 1 ]
[ 0 0 0 1 0 0 1 1 0 1 0 1 1 1 1 ]
Let r = (r_0, r_1, . . . , r_{14}) be the received vector. The syndrome of r is (s_0, s_1, s_2, s_3) with

s_0 = r_0 + r_4 + r_7 + r_8 + r_{10} + r_{12} + r_{13} + r_{14},
s_1 = r_1 + r_4 + r_5 + r_9 + r_{10} + r_{11} + r_{13} + r_{14},
s_2 = r_2 + r_5 + r_6 + r_8 + r_{10} + r_{11} + r_{12} + r_{14},
s_3 = r_3 + r_6 + r_7 + r_9 + r_{11} + r_{12} + r_{13} + r_{14}.
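The syndrome computation can be sketched as follows; it also confirms that the 15 single-error patterns have distinct syndromes (the columns of H):

```python
# Sketch of the syndrome computation of Problem 4.1: each single-error
# pattern yields the corresponding column of H as its syndrome, so all
# 15 single errors are distinguished. Helper names are ours.

H = [[1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1],
     [0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1, 1],
     [0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1],
     [0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1]]

def syndrome(r):
    return tuple(sum(ri * hi for ri, hi in zip(r, row)) % 2 for row in H)

single_error_syndromes = set()
for j in range(15):
    e = [0] * 15
    e[j] = 1
    s = syndrome(e)
    assert s == tuple(row[j] for row in H)   # syndrome = j-th column of H
    single_error_syndromes.add(s)

assert len(single_error_syndromes) == 15     # all single errors distinguishable
```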
Set up the decoding table as Table 4.1. From the decoding table, we find that

e_0 = s_0 s̄_1 s̄_2 s̄_3,   e_1 = s̄_0 s_1 s̄_2 s̄_3,   e_2 = s̄_0 s̄_1 s_2 s̄_3,
e_3 = s̄_0 s̄_1 s̄_2 s_3,   e_4 = s_0 s_1 s̄_2 s̄_3,   e_5 = s̄_0 s_1 s_2 s̄_3,
. . . ,   e_{13} = s_0 s_1 s̄_2 s_3,   e_{14} = s_0 s_1 s_2 s_3.
Table 4.1: Decoding Table

s_0 s_1 s_2 s_3   e_0 e_1 e_2 e_3 e_4 e_5 e_6 e_7 e_8 e_9 e_10 e_11 e_12 e_13 e_14
0 0 0 0           0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
1 0 0 0           1 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 1 0 0           0 1 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 1 0           0 0 1 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 1           0 0 0 1 0 0 0 0 0 0 0 0 0 0 0
1 1 0 0           0 0 0 0 1 0 0 0 0 0 0 0 0 0 0
0 1 1 0           0 0 0 0 0 1 0 0 0 0 0 0 0 0 0
0 0 1 1           0 0 0 0 0 0 1 0 0 0 0 0 0 0 0
1 0 0 1           0 0 0 0 0 0 0 1 0 0 0 0 0 0 0
1 0 1 0           0 0 0 0 0 0 0 0 1 0 0 0 0 0 0
0 1 0 1           0 0 0 0 0 0 0 0 0 1 0 0 0 0 0
1 1 1 0           0 0 0 0 0 0 0 0 0 0 1 0 0 0 0
0 1 1 1           0 0 0 0 0 0 0 0 0 0 0 1 0 0 0
1 0 1 1           0 0 0 0 0 0 0 0 0 0 0 0 1 0 0
1 1 0 1           0 0 0 0 0 0 0 0 0 0 0 0 0 1 0
1 1 1 1           0 0 0 0 0 0 0 0 0 0 0 0 0 0 1
(The accompanying figure shows the decoding circuit: a 15-stage buffer register holding r_0, . . . , r_{14}; four XOR trees forming the syndrome bits s_0, s_1, s_2, s_3; combinational logic realizing the error digits e_0, . . . , e_{14} from s_0, s_1, s_2, s_3; and output XOR gates adding each e_i to r_i to produce the decoded bits.)
4.3 From (4.3), the probability of an undetected error for a Hamming code is

P_u(E) = 2^{−m} {1 + (2^m − 1)(1 − 2p)^{2^{m−1}}} − (1 − p)^{2^m − 1}
       = 2^{−m} + (1 − 2^{−m})(1 − 2p)^{2^{m−1}} − (1 − p)^{2^m − 1}.   (1)

Note that

(1 − p)^2 ≥ 1 − 2p,   (2)

and

1 − 2^{−m} ≥ 0.   (3)

Using (2) and (3) in (1), we obtain the following inequality:

P_u(E) ≤ 2^{−m} + (1 − 2^{−m}) [(1 − p)^2]^{2^{m−1}} − (1 − p)^{2^m − 1}
       = 2^{−m} + (1 − p)^{2^m − 1} {(1 − 2^{−m})(1 − p) − 1}
       = 2^{−m} − (1 − p)^{2^m − 1} {1 − (1 − p)(1 − 2^{−m})}.   (4)

Note that 0 ≤ 1 − p ≤ 1 and 0 ≤ 1 − 2^{−m} < 1. Clearly 0 ≤ (1 − p) · (1 − 2^{−m}) < 1, and

1 − (1 − p) · (1 − 2^{−m}) ≥ 0.   (5)

Since (1 − p)^{2^m − 1} ≥ 0, it follows from (4) and (5) that P_u(E) ≤ 2^{−m}.
4.6 The generator matrix for the first-order RM code RM(1, 3) is

    [ v_0 ]   [ 1 1 1 1 1 1 1 1 ]
G = [ v_3 ] = [ 0 0 0 0 1 1 1 1 ]
    [ v_2 ]   [ 0 0 1 1 0 0 1 1 ]
    [ v_1 ]   [ 0 1 0 1 0 1 0 1 ]    (1)
The code generated by this matrix is an (8, 4) code with minimum distance 4. Let (a_0, a_3, a_2, a_1)
be the message to be encoded. Then its corresponding codeword is

b = (b_0, b_1, b_2, b_3, b_4, b_5, b_6, b_7) = a_0 v_0 + a_3 v_3 + a_2 v_2 + a_1 v_1.   (2)

From (1) and (2), we find that

b_0 = a_0,         b_1 = a_0 + a_1,        b_2 = a_0 + a_2,        b_3 = a_0 + a_2 + a_1,
b_4 = a_0 + a_3,   b_5 = a_0 + a_3 + a_1,  b_6 = a_0 + a_3 + a_2,
b_7 = a_0 + a_3 + a_2 + a_1.   (3)
From (3), we find that

a_1 = b_0 + b_1 = b_2 + b_3 = b_4 + b_5 = b_6 + b_7,
a_2 = b_0 + b_2 = b_1 + b_3 = b_4 + b_6 = b_5 + b_7,
a_3 = b_0 + b_4 = b_1 + b_5 = b_2 + b_6 = b_3 + b_7,
a_0 = b_0 = b_1 + a_1 = b_2 + a_2 = b_3 + a_2 + a_1 = b_4 + a_3
    = b_5 + a_3 + a_1 = b_6 + a_3 + a_2 = b_7 + a_3 + a_2 + a_1.
Let r = (r_0, r_1, r_2, r_3, r_4, r_5, r_6, r_7) be the received vector. The check-sums for decoding
a_1, a_2 and a_3 are:

(a_1)                   (a_2)                   (a_3)
A_1^(0) = r_0 + r_1     B_1^(0) = r_0 + r_2     C_1^(0) = r_0 + r_4
A_2^(0) = r_2 + r_3     B_2^(0) = r_1 + r_3     C_2^(0) = r_1 + r_5
A_3^(0) = r_4 + r_5     B_3^(0) = r_4 + r_6     C_3^(0) = r_2 + r_6
A_4^(0) = r_6 + r_7     B_4^(0) = r_5 + r_7     C_4^(0) = r_3 + r_7
After decoding a_1, a_2 and a_3, we form

r^(1) = r − a_1 v_1 − a_2 v_2 − a_3 v_3
      = (r_0^(1), r_1^(1), r_2^(1), r_3^(1), r_4^(1), r_5^(1), r_6^(1), r_7^(1)).

Then a_0 is equal to the value taken by the majority of the bits in r^(1).
For decoding r = (01000101), the four check-sums for decoding a_1, a_2 and a_3 are:

(1) A_1^(0) = 1, A_2^(0) = 0, A_3^(0) = 1, A_4^(0) = 1;
(2) B_1^(0) = 0, B_2^(0) = 1, B_3^(0) = 0, B_4^(0) = 0;
(3) C_1^(0) = 0, C_2^(0) = 0, C_3^(0) = 0, C_4^(0) = 1.

Based on these check-sums, a_1, a_2 and a_3 are decoded as 1, 0 and 0, respectively.
To decode a_0, we form

r^(1) = (01000101) − a_1 v_1 − a_2 v_2 − a_3 v_3 = (00010000).

From the bits of r^(1), we decode a_0 to 0. Therefore, the decoded message is (0001).
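The decoding procedure above can be sketched in a few lines (helper names ours); it reproduces the worked example:

```python
# Sketch of the majority-logic decoding of RM(1,3) worked out above,
# applied to the received vector (01000101). Helper names are ours.

v3 = [0, 0, 0, 0, 1, 1, 1, 1]
v2 = [0, 0, 1, 1, 0, 0, 1, 1]
v1 = [0, 1, 0, 1, 0, 1, 0, 1]

def majority(bits):
    return 1 if sum(bits) * 2 > len(bits) else 0

def decode(r):
    # check-sums for a_1, a_2, a_3, then majority vote
    a1 = majority([r[0] ^ r[1], r[2] ^ r[3], r[4] ^ r[5], r[6] ^ r[7]])
    a2 = majority([r[0] ^ r[2], r[1] ^ r[3], r[4] ^ r[6], r[5] ^ r[7]])
    a3 = majority([r[0] ^ r[4], r[1] ^ r[5], r[2] ^ r[6], r[3] ^ r[7]])
    # strip off the decoded first-order part, then majority-decode a_0
    r1 = [ri ^ (a1 * x1) ^ (a2 * x2) ^ (a3 * x3)
          for ri, x1, x2, x3 in zip(r, v1, v2, v3)]
    a0 = majority(r1)
    return (a0, a3, a2, a1)

assert decode([0, 1, 0, 0, 0, 1, 0, 1]) == (0, 0, 0, 1)  # the example above
```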
4.14
RM(1, 3) = { 0, 1, X_1, X_2, X_3, 1 + X_1, 1 + X_2, 1 + X_3,
             X_1 + X_2, X_1 + X_3, X_2 + X_3,
             1 + X_1 + X_2, 1 + X_1 + X_3, 1 + X_2 + X_3,
             X_1 + X_2 + X_3, 1 + X_1 + X_2 + X_3 }.
4.15 The RM(r, m−1) and RM(r−1, m−1) codes are given as follows (from (4.38)):

RM(r, m−1) = {v(f) : f(X_1, X_2, . . . , X_{m−1}) ∈ P(r, m−1)},
RM(r−1, m−1) = {v(g) : g(X_1, X_2, . . . , X_{m−1}) ∈ P(r−1, m−1)}.

Then

RM(r, m) = {v(h) : h = f(X_1, X_2, . . . , X_{m−1}) + X_m g(X_1, X_2, . . . , X_{m−1})
            with f ∈ P(r, m−1) and g ∈ P(r−1, m−1)}.
Chapter 5
5.6 (a) A polynomial over GF(2) with an odd number of terms is not divisible by X + 1; hence it
can not be divisible by g(X) if g(X) has X + 1 as a factor. Therefore, the code contains no
code vectors of odd weight.

(b) The polynomial X^n + 1 can be factored as follows:

X^n + 1 = (X + 1)(X^{n−1} + X^{n−2} + · · · + X + 1).

Since g(X) divides X^n + 1 and since g(X) does not have X + 1 as a factor, g(X) must divide
the polynomial X^{n−1} + X^{n−2} + · · · + X + 1. Therefore 1 + X + · · · + X^{n−2} + X^{n−1} is a
code polynomial; the corresponding code vector consists of all 1's.

(c) First, we note that no X^i is divisible by g(X). Hence, there is no code word of weight one.
Now, suppose that there is a code word v(X) of weight 2. This code word must be of the
form

v(X) = X^i + X^j

with 0 ≤ i < j < n. Put v(X) into the following form:

v(X) = X^i (1 + X^{j−i}).

Note that g(X) and X^i are relatively prime. Since v(X) is a code word, it must be divisible
by g(X). Since g(X) and X^i are relatively prime, g(X) must divide the polynomial X^{j−i} + 1.
However, j − i < n. This contradicts the fact that n is the smallest integer such that g(X)
divides X^n + 1. Hence our hypothesis that there exists a code vector of weight 2 is invalid.
Therefore, the code has minimum weight at least 3.
5.7 (a) Note that X^n + 1 = g(X)h(X). Then

X^n (X^{−n} + 1) = X^n g(X^{−1}) h(X^{−1}),
1 + X^n = [X^{n−k} g(X^{−1})] [X^k h(X^{−1})] = g*(X) h*(X),

where g*(X) and h*(X) are the reciprocals of g(X) and h(X) respectively. We see that g*(X)
is a factor of X^n + 1. Therefore, g*(X) generates an (n, k) cyclic code.
(b) Let C and C* be two (n, k) cyclic codes generated by g(X) and g*(X) respectively. Let
v(X) = v_0 + v_1 X + · · · + v_{n−1} X^{n−1} be a code polynomial in C. Then v(X) must be a
multiple of g(X), i.e.,

v(X) = a(X) g(X).

Replacing X by X^{−1} and multiplying both sides of the above equality by X^{n−1}, we obtain

X^{n−1} v(X^{−1}) = [X^{k−1} a(X^{−1})] [X^{n−k} g(X^{−1})].

Note that X^{n−1} v(X^{−1}), X^{k−1} a(X^{−1}) and X^{n−k} g(X^{−1}) are simply the reciprocals of v(X),
a(X) and g(X) respectively. Thus,

v*(X) = a*(X) g*(X).   (1)

From (1), we see that the reciprocal v*(X) of a code polynomial in C is a code polynomial in
C*. Similarly, we can show that the reciprocal of a code polynomial in C* is a code polynomial in
C. Since v*(X) and v(X) have the same weight, C* and C have the same weight distribution.
5.8 Let C_1 be the cyclic code generated by (X + 1)g(X). We know that C_1 is a subcode of C
and that C_1 consists of exactly the even-weight code vectors of C. Thus the weight
enumerator A_1(z) of C_1 consists of only the even-power terms of A(z) = ∑_{i=0}^{n} A_i z^i.
Hence

A_1(z) = ∑_{j=0}^{n/2} A_{2j} z^{2j}.   (1)

Consider the sum

A(z) + A(−z) = ∑_{i=0}^{n} A_i z^i + ∑_{i=0}^{n} A_i (−z)^i = ∑_{i=0}^{n} A_i [z^i + (−z)^i].

We see that z^i + (−z)^i = 0 if i is odd and z^i + (−z)^i = 2z^i if i is even. Hence

A(z) + A(−z) = ∑_{j=0}^{n/2} 2A_{2j} z^{2j}.   (2)

From (1) and (2), we obtain

A_1(z) = (1/2) [A(z) + A(−z)].
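The identity can be illustrated numerically: the even part of any weight enumerator is picked out by (A(z) + A(−z))/2, here checked on a (7, 4) Hamming code (our example):

```python
# Numerical illustration of Problem 5.8: A_1(z) = (A(z) + A(-z))/2 at
# sample points, where A_1 counts the even-weight subcode. The example
# code and helper names are ours.
from itertools import product

G = [[1, 1, 0, 1, 0, 0, 0],
     [0, 1, 1, 0, 1, 0, 0],
     [1, 1, 1, 0, 0, 1, 0],
     [1, 0, 1, 0, 0, 0, 1]]

code = [tuple(sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G))
        for msg in product([0, 1], repeat=4)]

def A(z):
    """Weight enumerator of the code C."""
    return sum(z ** sum(v) for v in code)

def A1(z):
    """Weight enumerator of the even-weight subcode C_1."""
    return sum(z ** sum(v) for v in code if sum(v) % 2 == 0)

for z in [0.5, 1.0, 2.0, 3.0]:
    assert A1(z) == (A(z) + A(-z)) / 2
```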
5.10 Let e_1(X) = X^i + X^{i+1} and e_2(X) = X^j + X^{j+1} be two different double-adjacent-error
patterns with i < j. Suppose that e_1(X) and e_2(X) are in the same coset. Then e_1(X) +
e_2(X) would be a code polynomial and divisible by g(X) = (X + 1)p(X). Note that

e_1(X) + e_2(X) = X^i (X + 1) + X^j (X + 1)
                = (X + 1) X^i (X^{j−i} + 1).

Since g(X) divides e_1(X) + e_2(X), p(X) must divide X^i (X^{j−i} + 1). However p(X) and
X^i are relatively prime. Therefore p(X) must divide X^{j−i} + 1. This is not possible since
j − i < 2^m − 1 and p(X) is a primitive polynomial of degree m (the smallest integer n such
that p(X) divides X^n + 1 is 2^m − 1). Thus e_1(X) and e_2(X) can not be in the same coset.
5.12 Note that e^(i)(X) is the remainder resulting from dividing X^i e(X) by X^n + 1. Thus

X^i e(X) = a(X)(X^n + 1) + e^(i)(X).   (1)

Note that g(X) divides X^n + 1, and that g(X) and X^i are relatively prime. From (1), we see that
if e(X) is not divisible by g(X), then e^(i)(X) is not divisible by g(X). Therefore, if e(X) is
detectable, e^(i)(X) is also detectable.
5.14 Suppose that ℓ does not divide n. Then

n = k · ℓ + r,  0 < r < ℓ.

Note that

v^(n)(X) = v^(k·ℓ+r)(X) = v(X).   (1)

Since v^(ℓ)(X) = v(X),

v^(k·ℓ)(X) = v(X).   (2)

From (1) and (2), we have

v^(r)(X) = v(X).

This is not possible since 0 < r < ℓ and ℓ is the smallest positive integer such that v^(ℓ)(X) =
v(X). Therefore, our hypothesis that ℓ does not divide n is invalid; ℓ must divide n.
5.17 Let n be the order of β. Then β^n = 1, and β is a root of X^n + 1. It follows from Theorem
2.14 that φ(X) is a factor of X^n + 1. Hence φ(X) generates a cyclic code of length n.
5.18 Let n_1 be the order of β_1 and n_2 the order of β_2. Let n be the least common multiple of n_1
and n_2, i.e. n = LCM(n_1, n_2). Consider X^n + 1. Clearly, β_1 and β_2 are roots of X^n + 1.
Hence φ_1(X) and φ_2(X) are factors of X^n + 1. Since φ_1(X) and φ_2(X) are relatively prime,
g(X) = φ_1(X) · φ_2(X) divides X^n + 1. Hence g(X) = φ_1(X) · φ_2(X) generates a cyclic
code of length n = LCM(n_1, n_2).
5.19 Since every code polynomial v(X) is a multiple of the generator polynomial p(X), every root
of p(X) is a root of v(X). Thus v(X) has α and its conjugates as roots. Conversely, suppose v(X) is
a binary polynomial of degree 2^m − 2 or less that has α as a root. It follows from Theorem
2.14 that v(X) is divisible by the minimal polynomial p(X) of α. Hence v(X) is a code
polynomial in the Hamming code generated by p(X).
5.20 Let v(X) be a code polynomial in both C_1 and C_2. Then v(X) is divisible by both g_1(X) and
g_2(X). Hence v(X) is divisible by the least common multiple g(X) of g_1(X) and g_2(X),
i.e. v(X) is a multiple of g(X) = LCM(g_1(X), g_2(X)). Conversely, any polynomial of
degree n − 1 or less that is a multiple of g(X) is divisible by both g_1(X) and g_2(X). Hence
v(X) is in both C_1 and C_2. Also we note that g(X) is a factor of X^n + 1. Thus the code
polynomials common to C_1 and C_2 form a cyclic code of length n whose generator polynomial
is g(X) = LCM(g_1(X), g_2(X)). The code C_3 generated by g(X) has minimum distance

d_3 ≥ max(d_1, d_2).
5.21 See Problem 4.3.
5.22 (a) First, we note that

X^{2^m−1} + 1 = p*(X) h*(X).

Since the roots of X^{2^m−1} + 1 are the 2^m − 1 nonzero elements of GF(2^m), which are all
distinct, p*(X) and h*(X) are relatively prime. Since every code polynomial v(X) in C_d
is a polynomial of degree 2^m − 2 or less, v(X) can not be divisible by p(X) (otherwise v(X)
would be divisible by p*(X)h*(X) = X^{2^m−1} + 1 and would have degree at least 2^m − 1).
Suppose that v^(i)(X) = v(X) for some i with 0 < i < 2^m − 1. It follows from (5.1) that

X^i v(X) = a(X)(X^{2^m−1} + 1) + v^(i)(X)
         = a(X)(X^{2^m−1} + 1) + v(X).

Rearranging the above equality, we have

(X^i + 1) v(X) = a(X)(X^{2^m−1} + 1).

Since p(X) divides X^{2^m−1} + 1, it must divide (X^i + 1)v(X). However p(X) and v(X) are
relatively prime. Hence p(X) must divide X^i + 1. This is not possible since 0 < i < 2^m − 1 and
p(X) is a primitive polynomial (the smallest positive integer n such that p(X) divides X^n + 1
is n = 2^m − 1). Therefore our hypothesis that v^(i)(X) = v(X) for some 0 < i < 2^m − 1 is
invalid, and v^(i)(X) ≠ v(X).

(b) From part (a), a code polynomial v(X) and its 2^m − 2 cyclic shifts form all the 2^m − 1
nonzero code polynomials in C_d. These 2^m − 1 nonzero code polynomials have the same
weight, say w. The total number of nonzero components in the code words of C_d is then
w · (2^m − 1). Now we arrange the 2^m code words in C_d as a 2^m × (2^m − 1) array. It follows
from Problem 3.6(b) that every column in this array has exactly 2^{m−1} nonzero components.
Thus the total number of nonzero components in the array is 2^{m−1} · (2^m − 1). Equating
w · (2^m − 1) to 2^{m−1} · (2^m − 1), we have

w = 2^{m−1}.
5.25 (a) Any error pattern of double errors must be of the form

e(X) = X^i + X^j,

where j > i. If the two errors are not confined to n − k = 10 consecutive positions, we must
have

j − i + 1 > 10,
15 − (j − i) + 1 > 10.

Simplifying the above inequalities, we obtain

j − i > 9,
j − i < 6.

This is impossible. Therefore any double errors are confined to 10 consecutive positions and
can be trapped.

(b) An error pattern of triple errors must be of the form

e(X) = X^i + X^j + X^k,

where 0 ≤ i < j < k ≤ 14. If these three errors can not be trapped, we must have

k − i > 9,
j − i < 6,
k − j < 6.

If we fix i, the only solutions for j and k are j = 5 + i and k = 10 + i. Hence, for three errors
not confined to 10 consecutive positions, the error pattern must be of the following form,

e(X) = X^i + X^{5+i} + X^{10+i},

for 0 ≤ i < 5. Therefore, only 5 error patterns of triple errors can not be trapped.
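Both counts can be confirmed by brute force over all cyclic windows of length 10 (helper names ours):

```python
# Brute-force confirmation of Problem 5.25: among the weight-2 and weight-3
# error patterns of length 15, none (resp. exactly 5) fail to be confined
# to 10 consecutive cyclic positions.
from itertools import combinations

def trappable(positions, n=15, window=10):
    """Can the error positions be covered by some cyclic run of `window` bits?"""
    return any(all((p - s) % n < window for p in positions)
               for s in range(n))

double_untrapped = [c for c in combinations(range(15), 2) if not trappable(c)]
triple_untrapped = [c for c in combinations(range(15), 3) if not trappable(c)]
assert len(double_untrapped) == 0   # part (a): all double errors trapped
assert len(triple_untrapped) == 5   # part (b): exactly 5 triple patterns
```

The five surviving triples are exactly {i, 5 + i, 10 + i} for 0 ≤ i < 5, matching the argument above.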
5.26 (b) Consider a double-error pattern

e(X) = X^i + X^j,

where 0 ≤ i < j < 23. If these two errors are not confined to 11 consecutive positions, we
must have

j − i + 1 > 11,
23 − (j − i − 1) > 11.

From the above inequalities, we obtain

10 < j − i < 13.

For a fixed i, j has two possible solutions, j = 11 + i and j = 12 + i. Hence a double-error
pattern that can not be trapped must be of one of the following two forms:

e_1(X) = X^i + X^{11+i},
e_2(X) = X^i + X^{12+i}.

There are a total of 23 error patterns of double errors that can not be trapped.
5.27 The coset leader weight distribution is

α_0 = 1,  α_1 = C(23, 1),  α_2 = C(23, 2),  α_3 = C(23, 3),
α_4 = α_5 = · · · = α_23 = 0.

The probability of a correct decoding is

P(C) = (1 − p)^23 + C(23, 1) p (1 − p)^22 + C(23, 2) p^2 (1 − p)^21
     + C(23, 3) p^3 (1 − p)^20.

The probability of a decoding error is

P(E) = 1 − P(C).
5.29 (a) Consider two single-error patterns, e_1(X) = X^i and e_2(X) = X^j, where j > i. Suppose
that these two error patterns are in the same coset. Then X^i + X^j must be divisible by
g(X) = (X^3 + 1)p(X). This implies that X^{j−i} + 1 must be divisible by p(X). This is
impossible since j − i < n and n is the smallest positive integer such that p(X) divides
X^n + 1. Therefore no two single-error patterns can be in the same coset. Consequently, all
single-error patterns can be used as coset leaders.

Now consider a single-error pattern e_1(X) = X^i and a double-adjacent-error pattern e_2(X) =
X^j + X^{j+1}, where j > i. Suppose that e_1(X) and e_2(X) are in the same coset. Then
X^i + X^j + X^{j+1} must be divisible by g(X) = (X^3 + 1)p(X). This is not possible since g(X)
has X + 1 as a factor, whereas X^i + X^j + X^{j+1} does not have X + 1 as a factor. Hence no
single-error pattern and double-adjacent-error pattern can be in the same coset.
Consider two double-adjacent-error patterns, X^i + X^{i+1} and X^j + X^{j+1}, where j > i. Suppose
that these two error patterns are in the same coset. Then X^i + X^{i+1} + X^j + X^{j+1} must be
divisible by (X^3 + 1)p(X). Note that

X^i + X^{i+1} + X^j + X^{j+1} = X^i (X + 1)(X^{j−i} + 1).

We see that for X^i (X + 1)(X^{j−i} + 1) to be divisible by p(X), X^{j−i} + 1 must be divisible
by p(X). This is again not possible since j − i < n. Hence no two double-adjacent-error
patterns can be in the same coset.
Consider a single-error pattern X^i and a triple-adjacent-error pattern X^j + X^{j+1} + X^{j+2}. If
these two error patterns are in the same coset, then X^i + X^j + X^{j+1} + X^{j+2} must be divisible
by (X^3 + 1)p(X). But X^i + X^j + X^{j+1} + X^{j+2} = X^i + X^j (1 + X + X^2) is not divisible by
X^3 + 1 = (X + 1)(X^2 + X + 1). Therefore, no single-error pattern and triple-adjacent-error
pattern can be in the same coset.
Now we consider a double-adjacent-error pattern X^i + X^{i+1} and a triple-adjacent-error pattern
X^j + X^{j+1} + X^{j+2}. Suppose that these two error patterns are in the same coset. Then

X^i + X^{i+1} + X^j + X^{j+1} + X^{j+2} = X^i (X + 1) + X^j (X^2 + X + 1)

must be divisible by (X^3 + 1)p(X). This is not possible since X^i + X^{i+1} + X^j + X^{j+1} + X^{j+2}
does not have X + 1 as a factor but X^3 + 1 has X + 1 as a factor. Hence a double-adjacent-error
pattern and a triple-adjacent-error pattern can not be in the same coset.
Consider two triple-adjacent-error patterns, X^i + X^{i+1} + X^{i+2} and X^j + X^{j+1} + X^{j+2}, where j > i. If they are in the same coset, then their sum
X^i (X^2 + X + 1)(1 + X^{j−i})
must be divisible by (X^3 + 1)p(X), hence by p(X). Note that the degree of p(X) is 3 or greater. Hence p(X) and X^2 + X + 1 are relatively prime. As a result, p(X) must divide X^{j−i} + 1. Again this is not possible. Hence no two triple-adjacent-error patterns can be in the same coset.
Summarizing the above results, we see that all the single-, double-adjacent-, and triple-
adjacent-error patterns can be used as coset leaders.
Chapter 6
6.1 (a) The elements β, β^2 and β^4 have the same minimal polynomial φ_1(X). From Table 2.9, we find that
φ_1(X) = 1 + X^3 + X^4.
The minimal polynomial of β^3 = α^21 = α^6 is
φ_3(X) = 1 + X + X^2 + X^3 + X^4.
Thus
g_0(X) = LCM(φ_1(X), φ_3(X))
= (1 + X^3 + X^4)(1 + X + X^2 + X^3 + X^4)
= 1 + X + X^2 + X^4 + X^8.
(b)
H = [ 1  β    β^2   β^3   β^4   β^5   β^6   β^7   β^8   β^9   β^10  β^11  β^12  β^13  β^14
      1  β^3  β^6   β^9   β^12  β^15  β^18  β^21  β^24  β^27  β^30  β^33  β^36  β^39  β^42 ]

In binary form,

H = [ 1 1 1 0 1 0 1 1 0 0 1 0 0 0 1
      0 1 0 0 0 1 1 1 1 0 1 0 1 1 0
      0 0 0 1 1 1 1 0 1 0 1 1 0 0 1
      0 1 1 1 1 0 1 0 1 1 0 0 1 0 0
      1 0 1 0 0 1 0 1 0 0 1 0 1 0 0
      0 0 1 0 1 0 0 1 0 1 0 0 1 0 1
      0 1 1 0 0 0 1 1 0 0 0 1 1 0 0
      0 1 1 1 1 0 1 1 1 1 0 1 1 1 1 ].
(c) The reciprocal of g(X) in Example 6.1 is
X^8 g(X^{−1}) = X^8 (1 + X^{−4} + X^{−6} + X^{−7} + X^{−8}) = X^8 + X^4 + X^2 + X + 1 = g_0(X).
6.2 The table for GF(2^5) with p(X) = 1 + X^2 + X^5 is given in Table P.6.2(a). The minimal polynomials of the elements of GF(2^5) are given in Table P.6.2(b). The generator polynomials of all the binary BCH codes of length 31 are given in Table P.6.2(c).

Table P.6.2(a) Galois Field GF(2^5) with p(α) = 1 + α^2 + α^5 = 0
0 (0 0 0 0 0)
1 (1 0 0 0 0)
α (0 1 0 0 0)
α^2               (0 0 1 0 0)
α^3               (0 0 0 1 0)
α^4               (0 0 0 0 1)
α^5  = 1 + α^2    (1 0 1 0 0)
α^6  = α + α^3                    (0 1 0 1 0)
α^7  = α^2 + α^4                  (0 0 1 0 1)
α^8  = 1 + α^2 + α^3              (1 0 1 1 0)
α^9  = α + α^3 + α^4              (0 1 0 1 1)
α^10 = 1 + α^4                    (1 0 0 0 1)
α^11 = 1 + α + α^2                (1 1 1 0 0)
α^12 = α + α^2 + α^3              (0 1 1 1 0)
α^13 = α^2 + α^3 + α^4            (0 0 1 1 1)
α^14 = 1 + α^2 + α^3 + α^4        (1 0 1 1 1)
α^15 = 1 + α + α^2 + α^3 + α^4    (1 1 1 1 1)
α^16 = 1 + α + α^3 + α^4          (1 1 0 1 1)
α^17 = 1 + α + α^4                (1 1 0 0 1)
α^18 = 1 + α                      (1 1 0 0 0)
α^19 = α + α^2                    (0 1 1 0 0)
α^20 = α^2 + α^3                  (0 0 1 1 0)
α^21 = α^3 + α^4                  (0 0 0 1 1)
α^22 = 1 + α^2 + α^4              (1 0 1 0 1)
α^23 = 1 + α + α^2 + α^3          (1 1 1 1 0)
α^24 = α + α^2 + α^3 + α^4        (0 1 1 1 1)
α^25 = 1 + α^3 + α^4              (1 0 0 1 1)
α^26 = 1 + α + α^2 + α^4          (1 1 1 0 1)
α^27 = 1 + α + α^3                (1 1 0 1 0)
α^28 = α + α^2 + α^4              (0 1 1 0 1)
α^29 = 1 + α^3                    (1 0 0 1 0)
α^30 = α + α^4                    (0 1 0 0 1)
Table P.6.2(b)
Conjugate Roots                      φ_i(X)
1                                    1 + X
α, α^2, α^4, α^8, α^16               1 + X^2 + X^5
α^3, α^6, α^12, α^24, α^17           1 + X^2 + X^3 + X^4 + X^5
α^5, α^10, α^20, α^9, α^18           1 + X + X^2 + X^4 + X^5
α^7, α^14, α^28, α^25, α^19          1 + X + X^2 + X^3 + X^5
α^11, α^22, α^13, α^26, α^21         1 + X + X^3 + X^4 + X^5
α^15, α^30, α^29, α^27, α^23         1 + X^3 + X^5
Table P.6.2(c)
n   k   t   g(X)
31  26  1   g_1(X) = 1 + X^2 + X^5
31  21  2   g_2(X) = φ_1(X)φ_3(X)
31  16  3   g_3(X) = φ_1(X)φ_3(X)φ_5(X)
31  11  5   g_4(X) = φ_1(X)φ_3(X)φ_5(X)φ_7(X)
31  6   7   g_5(X) = φ_1(X)φ_3(X)φ_5(X)φ_7(X)φ_11(X)
6.3 (a) Use the table for GF(2^5) constructed in Problem 6.2. The syndrome components of r_1(X) = X^7 + X^30 are:
S_1 = r_1(α) = α^7 + α^30 = α^19
S_2 = r_1(α^2) = α^14 + α^29 = α^7
S_3 = r_1(α^3) = α^21 + α^28 = α^12
S_4 = r_1(α^4) = α^28 + α^27 = α^14
The iterative procedure for finding the error-location polynomial is shown in Table P.6.3(a).

Table P.6.3(a)
μ     σ^(μ)(X)                 d_μ    l_μ   2μ − l_μ
−1/2  1                        1      0     −1
0     1                        α^19   0     0
1     1 + α^19 X               α^25   1     1 (ρ = −1/2)
2     1 + α^19 X + α^6 X^2     –      2     2 (ρ = 0)

Hence σ(X) = 1 + α^19 X + α^6 X^2. Substituting the nonzero elements of GF(2^5) into σ(X), we find that σ(X) has α and α^24 as roots. Hence the error-location numbers are α^{−1} = α^30 and α^{−24} = α^7. As a result, the error polynomial is
e(X) = X^7 + X^30.
The decoder decodes r_1(X) into r_1(X) + e(X) = 0.
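The root search can be checked by brute force. The following sketch (mine, using the same GF(2^5) table generated by p(X) = 1 + X^2 + X^5) substitutes every nonzero field element into σ(X) = 1 + α^19 X + α^6 X^2 and confirms that α and α^24 are the only roots, so the error locations are their inverses α^30 and α^7.

```python
# Chien-style root search for sigma(X) = 1 + alpha^19 X + alpha^6 X^2 in GF(2^5).

def build_gf(m=5, poly=0b100101):  # p(X) = 1 + X^2 + X^5
    exp, x = [], 1
    for _ in range((1 << m) - 1):
        exp.append(x)
        x <<= 1
        if x & (1 << m):
            x ^= poly
    return exp, {v: i for i, v in enumerate(exp)}

EXP, LOG = build_gf()

def mul(a, b):  # field multiplication of bit-vector elements
    return 0 if 0 in (a, b) else EXP[(LOG[a] + LOG[b]) % 31]

def sigma(x):  # sigma(X) = 1 + alpha^19 X + alpha^6 X^2
    return 1 ^ mul(EXP[19], x) ^ mul(EXP[6], mul(x, x))

roots = [i for i in range(31) if sigma(EXP[i]) == 0]          # exponents of roots
locations = sorted((31 - i) % 31 for i in roots)              # inverses = locations
```

The inverses of the roots give the error-location numbers, matching e(X) = X^7 + X^30.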
(b) Now we consider the decoding of r_2(X) = 1 + X^17 + X^28. The syndrome components of r_2(X) are:
S_1 = r_2(α) = α^2,
S_2 = S_1^2 = α^4,
S_4 = S_2^2 = α^8,
S_3 = r_2(α^3) = α^21.
The error-location polynomial σ(X) is found by filling Table P.6.3(b):

Table P.6.3(b)
μ     σ^(μ)(X)                 d_μ    l_μ   2μ − l_μ
−1/2  1                        1      0     −1
0     1                        α^2    0     0
1     1 + α^2 X                α^30   1     1 (ρ = −1/2)
2     1 + α^2 X + α^28 X^2     –      2     2 (ρ = 0)

The estimated error-location polynomial is
σ(X) = 1 + α^2 X + α^28 X^2.
This polynomial does not have roots in GF(2^5); hence r_2(X) cannot be decoded and must contain more than two errors.
6.4 Let n = (2t + 1)λ. Then
X^n + 1 = (X^λ + 1)(X^{2tλ} + X^{(2t−1)λ} + · · · + X^λ + 1).
The roots of X^λ + 1 are 1, α^{2t+1}, α^{2(2t+1)}, · · · , α^{(λ−1)(2t+1)}. Hence, α, α^2, · · · , α^{2t} are roots of the polynomial
u(X) = 1 + X^λ + X^{2λ} + · · · + X^{(2t−1)λ} + X^{2tλ}.
This implies that u(X) is a code polynomial of weight 2t + 1. Thus the code has minimum distance exactly 2t + 1.
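The factorization used above is easy to verify over GF(2) for a small case. The sketch below (mine) takes t = 2, λ = 3, n = 15, multiplies X^3 + 1 by u(X) = 1 + X^3 + X^6 + X^9 + X^12, and checks that the product telescopes to X^15 + 1; u(X) indeed has weight 2t + 1 = 5.

```python
# GF(2) polynomials as integer bit masks: bit i is the coefficient of X^i.

def polymul_gf2(a, b):
    """Multiply two GF(2) polynomials given as bit masks."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

t, lam = 2, 3
n = (2 * t + 1) * lam                              # n = 15
u = sum(1 << (i * lam) for i in range(2 * t + 1))  # 1 + X^3 + X^6 + X^9 + X^12
prod = polymul_gf2((1 << lam) | 1, u)              # (X^3 + 1) * u(X)
```
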
6.5 Consider the Galois field GF(2^{2m}). Note that 2^{2m} − 1 = (2^m − 1) · (2^m + 1). Let α be a primitive element in GF(2^{2m}). Then β = α^{2^m−1} is an element of order 2^m + 1. The elements 1, β, β^2, β^3, β^4, · · · , β^{2^m} are all the roots of X^{2^m+1} + 1. Let ψ_i(X) be the minimal polynomial of β^i. Then a t-error-correcting non-primitive BCH code of length n = 2^m + 1 is generated by
g(X) = LCM{ψ_1(X), ψ_2(X), · · · , ψ_{2t}(X)}.
6.10 Use Tables 6.2 and 6.3. The minimal polynomial for β^2 = α^6 and β^4 = α^12 is
ψ_2(X) = 1 + X + X^2 + X^4 + X^6.
The minimal polynomial for β^3 = α^9 is
ψ_3(X) = 1 + X^2 + X^3.
The minimal polynomial for β^5 = α^15 is
ψ_5(X) = 1 + X^2 + X^4 + X^5 + X^6.
Hence
g(X) = ψ_2(X)ψ_3(X)ψ_5(X).
The orders of β^2, β^3 and β^5 are 21, 7, and 21, respectively. Thus the length is
n = LCM(21, 7, 21) = 21,
and the code is a double-error-correcting (21, 6) BCH code.
6.11 (a) Let u(X) be a code polynomial and u*(X) = X^{n−1} u(X^{−1}) be the reciprocal of u(X). A cyclic code is said to be reversible if, whenever u(X) is a code polynomial, u*(X) is also a code polynomial. Consider
u*(β^i) = β^{(n−1)i} u(β^{−i}).
Since u(β^{−i}) = 0 for −t ≤ i ≤ t, we see that u*(X) has β^{−t}, · · · , β^{−1}, β^0, β^1, · · · , β^t as roots and is a multiple of the generator polynomial g(X). Therefore u*(X) is a code polynomial.
(b) If t is odd, t + 1 is even. Hence β^{t+1} is the conjugate of β^{(t+1)/2}, and β^{−(t+1)} is the conjugate of β^{−(t+1)/2}. Thus β^{t+1} and β^{−(t+1)} are also roots of the generator polynomial. It follows from the BCH bound that the code has minimum distance 2t + 4 (since the generator polynomial has 2t + 3 consecutive powers of β as roots).
Chapter 7
7.2 The generator polynomial of the double-error-correcting RS code over GF(2^5) is
g(X) = (X + α)(X + α^2)(X + α^3)(X + α^4)
= α^10 + α^29 X + α^19 X^2 + α^24 X^3 + X^4.
The generator polynomial of the triple-error-correcting RS code over GF(2^5) is
g(X) = (X + α)(X + α^2)(X + α^3)(X + α^4)(X + α^5)(X + α^6)
= α^21 + α^24 X + α^16 X^2 + α^24 X^3 + α^9 X^4 + α^10 X^5 + X^6.
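Expansions like the one above are mechanical once GF(2^5) arithmetic is available. The sketch below (mine, using the GF(2^5) table of Problem 6.2 with p(X) = 1 + X^2 + X^5) multiplies out (X + α)(X + α^2)(X + α^3)(X + α^4) and reads off the coefficient exponents of the double-error-correcting generator.

```python
# Expand (X + alpha)(X + alpha^2)(X + alpha^3)(X + alpha^4) over GF(2^5).

def build_gf(m=5, poly=0b100101):  # p(X) = 1 + X^2 + X^5
    exp, x = [], 1
    for _ in range((1 << m) - 1):
        exp.append(x)
        x <<= 1
        if x & (1 << m):
            x ^= poly
    return exp, {v: i for i, v in enumerate(exp)}

EXP, LOG = build_gf()

def mul(a, b):
    return 0 if 0 in (a, b) else EXP[(LOG[a] + LOG[b]) % 31]

g = [1]  # coefficient list, lowest degree first
for i in range(1, 5):  # repeatedly multiply g(X) by (X + alpha^i)
    root = EXP[i]
    g = ([mul(root, g[0])]
         + [g[j - 1] ^ mul(root, g[j]) for j in range(1, len(g))]
         + [g[-1]])

coeff_exponents = [LOG[c] for c in g]  # exponents of alpha, low degree first
```
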
7.4 The syndrome components of the received polynomial are:
S_1 = r(α) = α^7 + α^2 + α = α^13,
S_2 = r(α^2) = α^10 + α^10 + α^14 = α^14,
S_3 = r(α^3) = α^13 + α^3 + α^12 = α^9,
S_4 = r(α^4) = α + α^11 + α^10 = α^7,
S_5 = r(α^5) = α^4 + α^4 + α^8 = α^8,
S_6 = r(α^6) = α^7 + α^12 + α^6 = α^3.
The iterative procedure for finding the error-location polynomial is shown in Table P.7.4.

Table P.7.4
μ    σ^(μ)(X)                  d_μ    l_μ   μ − l_μ
−1   1                         1      0     −1
0    1                         α^13   0     0
1    1 + α^13 X                α^10   1     0 (take ρ = −1)
2    1 + αX                    α^7    1     1 (take ρ = 0)
3    1 + α^13 X + α^10 X^2     α^9    2     1 (take ρ = 1)
4    1 + α^14 X + α^12 X^2     α^8    2     2 (take ρ = 2)
5    1 + α^9 X^3               0      3     2 (take ρ = 3)
6    1 + α^9 X^3               −      −     −

The error-location polynomial is
σ(X) = 1 + α^9 X^3.
The roots of this polynomial are α^2, α^7, and α^12. Hence the error-location numbers are α^3, α^8, and α^13.
From the syndrome components of the received polynomial and the coefficients of the error-location polynomial, we find the error-value evaluator
Z_0(X) = S_1 + (S_2 + σ_1 S_1)X + (S_3 + σ_1 S_2 + σ_2 S_1)X^2
= α^13 + (α^14 + 0 · α^13)X + (α^9 + 0 · α^14 + 0 · α^13)X^2
= α^13 + α^14 X + α^9 X^2.
The error values at the positions X^3, X^8, and X^13 are:
e_3 = −Z_0(α^{−3}) / σ'(α^{−3}) = (α^13 + α^11 + α^3) / [α^3 (1 + α^8 α^{−3})(1 + α^13 α^{−3})] = α^7 / α^3 = α^4,
e_8 = −Z_0(α^{−8}) / σ'(α^{−8}) = (α^13 + α^6 + α^8) / [α^8 (1 + α^3 α^{−8})(1 + α^13 α^{−8})] = α^2 / α^8 = α^9,
e_13 = −Z_0(α^{−13}) / σ'(α^{−13}) = (α^13 + α + α^13) / [α^13 (1 + α^3 α^{−13})(1 + α^8 α^{−13})] = α / α^13 = α^3.
Consequently, the error pattern is
e(X) = α^4 X^3 + α^9 X^8 + α^3 X^13,
and the decoded codeword is the all-zero codeword.
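As a cross-check (my own sketch, not part of the manual): since the decoded codeword is all-zero, r(X) = e(X), so the computed error pattern must reproduce the six syndromes S_1, . . . , S_6 listed at the start of the problem. The code below works in GF(2^4) with p(X) = 1 + X + X^4 (the table used throughout this chapter is an assumption of this sketch) and recomputes S_k = e(α^k).

```python
# Verify that e(X) = alpha^4 X^3 + alpha^9 X^8 + alpha^3 X^13 gives the
# syndromes S_1..S_6 = alpha^13, alpha^14, alpha^9, alpha^7, alpha^8, alpha^3.

def build_gf(m=4, poly=0b10011):  # p(X) = 1 + X + X^4 (assumed table)
    exp, x = [], 1
    for _ in range((1 << m) - 1):
        exp.append(x)
        x <<= 1
        if x & (1 << m):
            x ^= poly
    return exp, {v: i for i, v in enumerate(exp)}

EXP, LOG = build_gf()
n = 15
errors = {3: EXP[4], 8: EXP[9], 13: EXP[3]}  # position -> error value

def syndrome(k):
    """S_k = e(alpha^k), summing val * alpha^(pos*k) over the error positions."""
    s = 0
    for pos, val in errors.items():
        s ^= EXP[(LOG[val] + pos * k) % n]
    return s

synd_logs = [LOG[syndrome(k)] for k in range(1, 7)]
```
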
7.5 The syndrome polynomial is
S(X) = α^13 + α^14 X + α^9 X^2 + α^7 X^3 + α^8 X^4 + α^3 X^5.
Table P.7.5 displays the steps of the Euclidean algorithm for finding the error-location and error-value polynomials.

Table P.7.5
i    Z_0^(i)(X)                                                 q_i(X)          σ_i(X)
−1   X^6                                                        −               0
0    α^13 + α^14 X + α^9 X^2 + α^7 X^3 + α^8 X^4 + α^3 X^5      −               1
1    1 + α^8 X + α^5 X^3 + α^2 X^4                              α^2 + α^12 X    α^2 + α^12 X
2    α + α^13 X + α^12 X^3                                      α^12 + αX       α^3 + αX + α^13 X^2
3    α^7 + α^8 X + α^3 X^2                                      α^8 + α^5 X     α^9 + α^3 X^3
The error-location and error-value polynomials are:
σ(X) = α^9 + α^3 X^3 = α^9 (1 + α^9 X^3),
Z_0(X) = α^7 + α^8 X + α^3 X^2 = α^9 (α^13 + α^14 X + α^9 X^2).
From these polynomials, we find that the error-location numbers are α^3, α^8, and α^13, and the error values are
e_3 = −Z_0(α^{−3}) / σ'(α^{−3}) = (α^7 + α^5 + α^12) / [α^9 α^3 (1 + α^8 α^{−3})(1 + α^13 α^{−3})] = α / α^12 = α^4,
e_8 = −Z_0(α^{−8}) / σ'(α^{−8}) = (α^7 + 1 + α^2) / [α^9 α^8 (1 + α^3 α^{−8})(1 + α^13 α^{−8})] = α^11 / α^2 = α^9,
e_13 = −Z_0(α^{−13}) / σ'(α^{−13}) = (α^7 + α^10 + α^7) / [α^9 α^13 (1 + α^3 α^{−13})(1 + α^8 α^{−13})] = α^10 / α^7 = α^3.
Hence the error pattern is
e(X) = α^4 X^3 + α^9 X^8 + α^3 X^13,
and the received polynomial is decoded into the all-zero codeword.
7.6 From the received polynomial,
r(X) = α^2 + α^21 X^12 + α^7 X^20,
we compute the syndrome:
S_1 = r(α) = α^2 + α^33 + α^27 = α^27,
S_2 = r(α^2) = α^2 + α^45 + α^47 = α,
S_3 = r(α^3) = α^2 + α^57 + α^67 = α^28,
S_4 = r(α^4) = α^2 + α^69 + α^87 = α^29,
S_5 = r(α^5) = α^2 + α^81 + α^107 = α^15,
S_6 = r(α^6) = α^2 + α^93 + α^127 = α^8.
Therefore, the syndrome polynomial is
S(X) = α^27 + αX + α^28 X^2 + α^29 X^3 + α^15 X^4 + α^8 X^5.
Using the Euclidean algorithm, we find
σ(X) = α^23 X^3 + α^9 X + α^22,
Z_0(X) = α^26 X^2 + α^6 X + α^18,
as shown in the following table:

i    Z_0^(i)(X)                                           q_i(X)          σ_i(X)
−1   X^6                                                  −               0
0    S(X)                                                 −               1
1    α^5 X^4 + α^9 X^3 + α^22 X^2 + α^11 X + α^26         α^23 X + α^30   α^23 X + α^30
2    α^8 X^3 + α^4 X + α^6                                α^3 X + α^5     α^24 X^2 + α^30 X + α^10
3    α^26 X^2 + α^6 X + α^18                              α^28 X + α      α^23 X^3 + α^9 X + α^22

The roots of σ(X) are 1 = α^0, α^11, and α^19. From these roots, we find the error-location numbers: β_1 = (α^0)^{−1} = α^0, β_2 = (α^11)^{−1} = α^20, and β_3 = (α^19)^{−1} = α^12. Hence the error pattern is
e(X) = e_0 + e_12 X^12 + e_20 X^20.
The error-location polynomial and its derivative are:
σ(X) = α^22 (1 + X)(1 + α^12 X)(1 + α^20 X),
σ'(X) = α^22 (1 + α^12 X)(1 + α^20 X) + α^3 (1 + X)(1 + α^20 X) + α^11 (1 + X)(1 + α^12 X).
The error values at the three error locations are given by:
e_0 = −Z_0(α^0) / σ'(α^0) = (α^26 + α^6 + α^18) / [α^22 (1 + α^12)(1 + α^20)] = α^2,
e_12 = −Z_0(α^{−12}) / σ'(α^{−12}) = (α^2 + α^25 + α^18) / [α^3 (1 + α^19)(1 + α^8)] = α^21,
e_20 = −Z_0(α^{−20}) / σ'(α^{−20}) = (α^17 + α^17 + α^18) / [α^11 (1 + α^11)(1 + α^23)] = α^7.
Hence, the error pattern is
e(X) = α^2 + α^21 X^12 + α^7 X^20,
and the decoded codeword is
v(X) = r(X) − e(X) = 0.
7.9 Let g(X) be the generator polynomial of a t-symbol-correcting RS code C over GF(q) with α, α^2, . . . , α^{2t} as roots, where α is a primitive element of GF(q). Since g(X) divides X^{q−1} − 1,
X^{q−1} − 1 = g(X)h(X).
The polynomial h(X) has α^{2t+1}, . . . , α^{q−1} as roots and is called the parity polynomial. The dual code C_d of C is generated by the reciprocal of h(X),
h*(X) = X^{q−1−2t} h(X^{−1}).
We see that h*(X) has α^{−(2t+1)} = α^{q−2t−2}, α^{−(2t+2)} = α^{q−2t−3}, . . . , α^{−(q−2)} = α, and α^{−(q−1)} = 1 as roots. Thus h*(X) has the following consecutive powers of α as roots:
1, α, α^2, . . . , α^{q−2t−2}.
Hence C_d is a (q − 1, 2t, q − 2t) RS code with minimum distance q − 2t.
7.10 The generator polynomial g_rs(X) of the RS code C_rs has α, α^2, . . . , α^{d−1} as roots. Note that GF(2^m) has GF(2) as a subfield. Consider the polynomials v(X) over GF(2) of degree 2^m − 2 or less that have α, α^2, . . . , α^{d−1} (and their conjugates) as roots. These polynomials over GF(2) form a primitive BCH code C_bch with designed distance d. Since these polynomials are also code polynomials of the RS code C_rs, C_bch is a subcode of C_rs.
7.11 Suppose c(X) = Σ_{i=0}^{2^m−2} c_i X^i is a minimum-weight code polynomial in the (2^m − 1, k) RS code C. The minimum weight is increased to d + 1 provided
c_∞ = −c(1) = −Σ_{i=0}^{2^m−2} c_i ≠ 0.
We know that c(X) is divisible by g(X). Thus c(X) = a(X)g(X) with a(X) ≠ 0. Consider
c(1) = a(1)g(1).
Since 1 is not a root of g(X), g(1) ≠ 0. If a(1) ≠ 0, then c_∞ = −c(1) ≠ 0 and the vector (c_∞, c_0, c_1, . . . , c_{2^m−2}) has weight d + 1. Next we show that a(1) is not equal to 0. If a(1) = 0, then a(X) has X − 1 as a factor, so c(X) is a multiple of (X − 1)g(X) and must have weight at least d + 1. This contradicts the hypothesis that c(X) is a minimum-weight code polynomial. Consequently the extended RS code has minimum distance d + 1.
7.12 To prove the minimum distance of the doubly extended RS code, we need to show that no 2t or fewer columns of H_1 sum to zero over GF(2^m) and that there are 2t + 1 columns of H_1 that sum to zero. Suppose there are δ columns of H_1 that sum to zero, with δ ≤ 2t. There are four cases to be considered:
(1) All δ columns are from the same submatrix H.
(2) The δ columns consist of the first column of H_1 and δ − 1 columns from H.
(3) The δ columns consist of the second column of H_1 and δ − 1 columns from H.
(4) The δ columns consist of the first two columns of H_1 and δ − 2 columns from H.
The first case leads to a δ × δ Vandermonde determinant. The second and third cases lead to a (δ − 1) × (δ − 1) Vandermonde determinant. The fourth case leads to a (δ − 2) × (δ − 2) Vandermonde determinant. The derivations are exactly the same as those in the book. Since Vandermonde determinants are nonzero, no δ ≤ 2t columns of H_1 can sum to zero. Hence the minimum distance of the extended RS code is at least 2t + 1. However, H generates an RS code with minimum distance exactly 2t + 1, so there are 2t + 1 columns of H (which are also in H_1) that sum to zero. Therefore the minimum distance of the extended RS code is exactly 2t + 1.
7.13 Consider
v(X) = Σ_{i=0}^{2^m−2} a(α^i) X^i = Σ_{i=0}^{2^m−2} (Σ_{j=0}^{k−1} a_j α^{ij}) X^i.
Let α be a primitive element in GF(2^m). Replacing X by α^q, we have
v(α^q) = Σ_{i=0}^{2^m−2} Σ_{j=0}^{k−1} a_j α^{ij} α^{iq} = Σ_{j=0}^{k−1} a_j (Σ_{i=0}^{2^m−2} α^{i(j+q)}).
We factor 1 + X^{2^m−1} as follows:
1 + X^{2^m−1} = (1 + X)(1 + X + X^2 + · · · + X^{2^m−2}).
Since the polynomial 1 + X + X^2 + · · · + X^{2^m−2} has α, α^2, . . . , α^{2^m−2} as roots, for 1 ≤ l ≤ 2^m − 2,
Σ_{i=0}^{2^m−2} α^{li} = 1 + α^l + α^{2l} + · · · + α^{(2^m−2)l} = 0.
Therefore, Σ_{i=0}^{2^m−2} α^{i(j+q)} = 0 when 1 ≤ j + q ≤ 2^m − 2. This implies that
v(α^q) = 0 for 0 ≤ j < k and 1 ≤ q ≤ 2^m − k − 1.
Hence v(X) has α, α^2, . . . , α^{2^m−k−1} as roots. The set {v(X)} is a set of polynomials over GF(2^m) with 2^m − k − 1 consecutive powers of α as roots, and hence it forms a (2^m − 1, k, 2^m − k) cyclic RS code over GF(2^m).
Chapter 8
8.2 The order of the perfect difference set {0, 2, 3} is q = 2.
(a) The length of the code is n = 2^2 + 2 + 1 = 7.
(b) Let z(X) = 1 + X^2 + X^3. Then the parity-check polynomial is
h(X) = GCD{1 + X^2 + X^3, X^7 + 1} = 1 + X^2 + X^3.
(c) The generator polynomial is
g(X) = (X^7 + 1)/h(X) = 1 + X^2 + X^3 + X^4.
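The division in (c) is a one-liner over GF(2). The sketch below (mine) represents GF(2) polynomials as bit masks, divides X^7 + 1 by h(X) = 1 + X^2 + X^3, and checks that the quotient is 1 + X^2 + X^3 + X^4 with zero remainder.

```python
# GF(2) polynomial division with polynomials as integer bit masks
# (bit i = coefficient of X^i).

def polydivmod_gf2(num, den):
    """Return (quotient, remainder) of num / den over GF(2)."""
    q = 0
    while num and num.bit_length() >= den.bit_length():
        shift = num.bit_length() - den.bit_length()
        q |= 1 << shift
        num ^= den << shift
    return q, num

h = 0b1101                          # h(X) = 1 + X^2 + X^3
q, r = polydivmod_gf2((1 << 7) | 1, h)  # (X^7 + 1) / h(X)
```
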
(d) From g(X), we find that the generator matrix in systematic form is
G = [ 1 0 1 1 1 0 0
      1 1 1 0 0 1 0
      0 1 1 1 0 0 1 ].
The parity-check matrix in systematic form is
H = [ 1 0 0 0 1 1 0
      0 1 0 0 0 1 1
      0 0 1 0 1 1 1
      0 0 0 1 1 0 1 ].
The check-sums orthogonal on the highest-order error digit e_6 are:
A_1 = s_0 + s_2,
A_2 = s_1,
A_3 = s_3.
Based on the above check-sums, a type-1 decoder can be implemented.
8.4 (a) Since all the columns of H are distinct and have odd weights, no two or three columns can sum to zero; hence the minimum weight of the code is at least 4. However, the first, second, third, and sixth columns sum to zero. Therefore the minimum weight, and hence the minimum distance, of the code is 4.
(b) The syndrome of the error vector e is
s = (s_0, s_1, s_2, s_3, s_4) = eH^T
with
s_0 = e_0 + e_5 + e_6 + e_7 + e_8 + e_9 + e_10,
s_1 = e_1 + e_5 + e_6 + e_8,
s_2 = e_2 + e_5 + e_7 + e_9,
s_3 = e_3 + e_6 + e_7 + e_10,
s_4 = e_4 + e_8 + e_9 + e_10.
(c) The check-sums orthogonal on e_10 are:
A_{1,10} = s_0 + s_1 + s_2 = e_0 + e_1 + e_2 + e_5 + e_10,
A_{2,10} = s_3 = e_3 + e_6 + e_7 + e_10,
A_{3,10} = s_4 = e_4 + e_8 + e_9 + e_10.
The check-sums orthogonal on e_9 are:
A_{1,9} = s_0 + s_1 + s_3,
A_{2,9} = s_2,
A_{3,9} = s_4.
The check-sums orthogonal on e_8 are:
A_{1,8} = s_0 + s_2 + s_3,
A_{2,8} = s_1,
A_{3,8} = s_4.
The check-sums orthogonal on e_7 are:
A_{1,7} = s_0 + s_1 + s_4,
A_{2,7} = s_2,
A_{3,7} = s_3.
The check-sums orthogonal on e_6 are:
A_{1,6} = s_0 + s_2 + s_4,
A_{2,6} = s_1,
A_{3,6} = s_3.
The check-sums orthogonal on e_5 are:
A_{1,5} = s_0 + s_3 + s_4,
A_{2,5} = s_1,
A_{3,5} = s_2.
(d) Yes, the code is completely orthogonalizable, since there are 3 check-sums orthogonal
on each message bit and the minimum distance of the code is 4.
8.5 For m = 6, the radix-2 form of 43 is
43 = 1 + 2 + 2^3 + 2^5.
The nonzero proper descendants of 43 are:
1, 2, 8, 32, 3, 9, 33, 10, 34, 40, 11, 35, 41, 42.
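The descendants can be enumerated as bit submasks. The sketch below (mine) lists every integer whose radix-2 digits are covered by those of 43 = 0b101011, excluding 0 and 43 itself, and recovers the fourteen values above.

```python
# The nonzero proper descendants of h are exactly the nonzero proper
# submasks of h's binary representation.

def proper_descendants(h):
    return sorted(d for d in range(1, h) if d & h == d)

desc = proper_descendants(43)
```
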
8.6
Location:  α^∞ α^0 α^1 α^2 α^3 α^4 α^5 α^6 α^7 α^8 α^9 α^10 α^11 α^12 α^13 α^14
u = ( 1 1 1 0 0 1 0 1 0 1 1 0 0 0 0 1 )
Applying the permutation Z = α^3 Y + α^11 to the above vector, the component at location α^i is permuted to the location α^3 · α^i + α^11. For example, the 1-component at location Y = α^8 is permuted to the location α^3 · α^8 + α^11 = α^11 + α^11 = α^∞. Performing this permutation on each component of the above vector, we obtain the following vector:
Location:  α^∞ α^0 α^1 α^2 α^3 α^4 α^5 α^6 α^7 α^8 α^9 α^10 α^11 α^12 α^13 α^14
v = ( 1 1 0 1 0 0 1 0 0 1 1 0 1 0 1 0 )
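The permutation is easy to replay by machine. The sketch below (mine) builds GF(2^4) from p(X) = 1 + X + X^4 (an assumption of this sketch; any table consistent with the chapter works the same way), indexes the 16 locations as α^∞ = 0 followed by α^0, . . . , α^14, applies Z = α^3 Y + α^11 to the support of u, and reproduces v.

```python
# Apply the affine permutation Z = alpha^3 * Y + alpha^11 over GF(2^4).

def build_gf(m=4, poly=0b10011):  # p(X) = 1 + X + X^4 (assumed)
    exp, x = [], 1
    for _ in range((1 << m) - 1):
        exp.append(x)
        x <<= 1
        if x & (1 << m):
            x ^= poly
    return exp, {v: i for i, v in enumerate(exp)}

EXP, LOG = build_gf()
locs = [0] + EXP  # index 0 = alpha^inf (the zero element), index i+1 = alpha^i
u = [1, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1]

def permute(y):
    """Z = alpha^3 * Y + alpha^11 in field-element (bit-vector) form."""
    prod = 0 if y == 0 else EXP[(LOG[y] + 3) % 15]
    return prod ^ EXP[11]

v = [0] * 16
for i, bit in enumerate(u):
    if bit:
        v[locs.index(permute(locs[i]))] = 1
```
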
8.7 For J = 9 and L = 7, X^63 + 1 can be factored as follows:
X^63 + 1 = (1 + X^9)(1 + X^9 + X^18 + X^27 + X^36 + X^45 + X^54).
Then π(X) = 1 + X^9 + X^18 + X^27 + X^36 + X^45 + X^54. Let α be a primitive element in GF(2^6) whose minimal polynomial is φ_1(X) = 1 + X + X^6. Because α^63 = 1, the polynomial 1 + X^9 has α^0, α^7, α^14, α^21, α^28, α^35, α^42, α^49, and α^56 as all its roots. Therefore, the polynomial π(X) has α^h as a root when h is not a multiple of 7 and 0 < h < 63. From the conditions (Theorem 8.2) on the roots of H(X), we can find H(X) as H(X) = LCM{minimal polynomials φ_i(X) of the roots of H(X)}. As the result,
H(X) = φ_1(X)φ_3(X)φ_5(X)φ_9(X)φ_11(X)φ_13(X)φ_27(X),
where φ_1 = 1 + X + X^6, φ_3 = 1 + X + X^2 + X^4 + X^6, φ_5 = 1 + X + X^2 + X^5 + X^6, φ_9 = 1 + X^2 + X^3, φ_11 = 1 + X^2 + X^3 + X^5 + X^6, φ_13 = 1 + X + X^3 + X^4 + X^6, and φ_27 = 1 + X + X^3. Then we can find G(X):
G(X) = (X^63 + 1)/H(X) = (1 + X^9)π(X)/H(X)
= (1 + X^9)(1 + X^2 + X^4 + X^5 + X^6)(1 + X + X^4 + X^5 + X^6)(1 + X^5 + X^6).
For the type-1 DTI code of length 63 and J = 9, the generator polynomial is:
g_1(X) = X^27 G(X^{−1}) / (1 + X)
= (1 + X^9)(1 + X + X^2 + X^4 + X^6)(1 + X + X^2 + X^5 + X^6)(1 + X + X^6) / (1 + X)
= (1 + X + X^2 + X^3 + X^4 + X^5 + X^6 + X^7 + X^8)(1 + X + X^2 + X^4 + X^6)(1 + X + X^2 + X^5 + X^6)(1 + X + X^6).
Represent the polynomials π(X), Xπ(X), X^2 π(X), X^3 π(X), X^4 π(X), X^5 π(X), X^6 π(X), X^7 π(X), and X^8 π(X) by 63-tuple location vectors. Add an overall parity-check digit and apply the affine permutation Y = αX + α^62 to each of these location vectors. Then remove the overall parity-check digits of all location vectors. By removing one vector with odd weight, we obtain the polynomials orthogonal on the digit position X^62. They are:
X^11 + X^16 + X^18 + X^24 + X^48 + X^58 + X^59 + X^62,
X^1 + X^7 + X^31 + X^41 + X^42 + X^45 + X^57 + X^62,
X^23 + X^33 + X^34 + X^37 + X^49 + X^54 + X^56 + X^62,
X^2 + X^14 + X^19 + X^21 + X^27 + X^51 + X^61 + X^62,
X^0 + X^3 + X^15 + X^20 + X^22 + X^28 + X^52 + X^62,
X^9 + X^10 + X^13 + X^25 + X^30 + X^32 + X^38 + X^62,
X^4 + X^6 + X^12 + X^36 + X^46 + X^47 + X^50 + X^62,
X^5 + X^29 + X^39 + X^40 + X^43 + X^55 + X^60 + X^62.
8.8 For J = 7 and L = 9,
X^63 + 1 = (1 + X^7)(1 + X^7 + X^14 + X^21 + X^28 + X^35 + X^42 + X^49 + X^56)
and π(X) = 1 + X^7 + X^14 + X^21 + X^28 + X^35 + X^42 + X^49 + X^56. Let α be a primitive element in GF(2^6) whose minimal polynomial is φ_1(X) = 1 + X + X^6. Because α^63 = 1, the polynomial 1 + X^7 has α^0, α^9, α^18, α^27, α^36, α^45, and α^54 as all its roots. Therefore, the polynomial π(X) has α^h as a root when h is not a multiple of 9 and 0 < h < 63. From the conditions (Theorem 8.2) on the roots of H(X), we can find H(X) as H(X) = LCM{minimal polynomials φ_i(X) of the roots of H(X)}. As the result,
H(X) = φ_1(X)φ_3(X)φ_5(X)φ_7(X)φ_21(X),
where φ_1 = 1 + X + X^6, φ_3 = 1 + X + X^2 + X^4 + X^6, φ_5 = 1 + X + X^2 + X^5 + X^6, φ_7 = 1 + X^3 + X^6, and φ_21 = 1 + X + X^2. Then we can find G(X):
G(X) = (X^63 + 1)/H(X) = (1 + X^7)π(X)/H(X)
= (1 + X^7)(1 + X^2 + X^3 + X^5 + X^6)(1 + X + X^3 + X^4 + X^6)(1 + X^2 + X^4 + X^5 + X^6)(1 + X + X^4 + X^5 + X^6)(1 + X^5 + X^6).
For the type-1 DTI code of length 63 and J = 7, the generator polynomial is:
g_1(X) = X^37 G(X^{−1}) / (1 + X)
= (1 + X^7)(1 + X + X^3 + X^4 + X^6)(1 + X^2 + X^3 + X^5 + X^6)(1 + X + X^2 + X^4 + X^6)(1 + X + X^2 + X^5 + X^6)(1 + X + X^6) / (1 + X)
= (1 + X + X^2 + X^3 + X^4 + X^5 + X^6)(1 + X + X^3 + X^4 + X^6)(1 + X^2 + X^3 + X^5 + X^6)(1 + X + X^2 + X^4 + X^6)(1 + X + X^2 + X^5 + X^6)(1 + X + X^6).
Represent the polynomials π(X), Xπ(X), X^2 π(X), X^3 π(X), X^4 π(X), X^5 π(X), and X^6 π(X) by 63-tuple location vectors. Add an overall parity-check digit and apply the affine permutation Y = αX + α^62 to each of these location vectors. By removing one vector with odd weight, we obtain the polynomials orthogonal on the digit position X^62. They are:
X^11 + X^14 + X^32 + X^36 + X^43 + X^44 + X^45 + X^52 + X^56 + X^62,
X^3 + X^8 + X^13 + X^19 + X^31 + X^33 + X^46 + X^48 + X^60 + X^62,
X^2 + X^10 + X^23 + X^24 + X^26 + X^28 + X^29 + X^42 + X^50 + X^62,
X^1 + X^6 + X^9 + X^15 + X^16 + X^35 + X^54 + X^55 + X^61 + X^62,
X^0 + X^4 + X^7 + X^17 + X^27 + X^30 + X^34 + X^39 + X^58 + X^62,
X^5 + X^21 + X^22 + X^38 + X^47 + X^49 + X^53 + X^57 + X^59 + X^62.
8.9 The generator polynomial of the maximum-length-sequence code of length n = 2^m − 1 is
g(X) = (X^n + 1)/p(X) = (X + 1)(1 + X + X^2 + · · · + X^{n−1})/p(X),
where p(X) is a primitive polynomial of degree m over GF(2). Since p(X) and X + 1 are relatively prime, g(X) has 1 as a root. Since the all-one polynomial 1 + X + X^2 + · · · + X^{n−1} does not have 1 as a root, it is not divisible by g(X). Therefore, the all-one vector is not a codeword in a maximum-length-sequence code.
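A small case makes the argument concrete. The sketch below (mine) takes m = 3 with p(X) = 1 + X + X^3, forms g(X) = (X^7 + 1)/p(X), and checks that the all-one polynomial 1 + X + · · · + X^6 leaves a nonzero remainder modulo g(X), so it is not a codeword.

```python
# GF(2) polynomials as bit masks; divide and check divisibility.

def polydivmod_gf2(num, den):
    q = 0
    while num and num.bit_length() >= den.bit_length():
        shift = num.bit_length() - den.bit_length()
        q |= 1 << shift
        num ^= den << shift
    return q, num

p = 0b1011                             # p(X) = 1 + X + X^3 (primitive)
g, r0 = polydivmod_gf2((1 << 7) | 1, p)  # g(X) = (X^7 + 1)/p(X)
_, r = polydivmod_gf2(0b1111111, g)      # all-one polynomial mod g(X)
```

Here g(X) = 1 + X + X^2 + X^4, which has even weight (so 1 is a root of g), while the all-one polynomial has odd weight, so the remainder r cannot be zero.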
8.17 There are five 1-flats that pass through the point α^7. The 1-flats passing through α^7 can be represented by {α^7 + βa_1}, where a_1 is linearly independent of α^7 and β ∈ GF(2^2). The five 1-flats passing through α^7 are:
L_1 = {α^7, α^9, α^13, α^6},
L_2 = {α^7, α^14, α^10, α^8},
L_3 = {α^7, α^12, 0, α^2},
L_4 = {α^7, α^4, α^11, α^5},
L_5 = {α^7, α^3, 1, α}.
8.18 (a) There are twenty-one 1-flats that pass through the point α^63. They are:
L_1 = {α^63, 0, α^42, α^21}
L_2 = {α^63, α^6, α^50, α^39}
L_3 = {α^63, α^12, α^15, α^37}
L_4 = {α^63, α^32, α^4, α^9}
L_5 = {α^63, α^24, α^11, α^30}
L_6 = {α^63, α^62, α^7, α^17}
L_7 = {α^63, α^1, α^18, α^8}
L_8 = {α^63, α^26, α^41, α^38}
L_9 = {α^63, α^48, α^60, α^22}
L_10 = {α^63, α^45, α^46, α^53}
L_11 = {α^63, α^61, α^34, α^14}
L_12 = {α^63, α^25, α^3, α^51}
L_13 = {α^63, α^2, α^16, α^36}
L_14 = {α^63, α^35, α^31, α^40}
L_15 = {α^63, α^52, α^13, α^19}
L_16 = {α^63, α^23, α^54, α^58}
L_17 = {α^63, α^33, α^44, α^57}
L_18 = {α^63, α^47, α^49, α^20}
L_19 = {α^63, α^27, α^43, α^29}
L_20 = {α^63, α^56, α^55, α^10}
L_21 = {α^63, α^59, α^28, α^5}
(b) There are five 2-flats that intersect on the 1-flat {α^63 + ηα}, where η ∈ GF(2^2). They are:
F_1 = {1, α^6, α^50, α^39, 0, α, α^22, α^43, α^42, α^29, α^18, α^48, α^21, α^60, α^27, α^8}
F_2 = {1, α^6, α^50, α^39, α^12, α^26, α^10, α^46, α^15, α^53, α^41, α^56, α^37, α^55, α^45, α^38}
F_3 = {1, α^6, α^50, α^39, α^32, α^35, α^20, α^57, α^4, α^33, α^31, α^47, α^9, α^49, α^44, α^40}
F_4 = {1, α^6, α^50, α^39, α^24, α^16, α^34, α^17, α^11, α^62, α^36, α^14, α^30, α^61, α^7, α^2}
F_5 = {1, α^6, α^50, α^39, α^25, α^5, α^54, α^52, α^3, α^13, α^59, α^58, α^51, α^23, α^19, α^28}
8.19 The 1-flats that pass through the point α^21 are:
L_1 = {α^21, α^42, α^11, α^50, α^22, α^44, α^25, α^37}
L_2 = {α^21, α^60, α^35, α^31, α^47, α^54, α^32, α^52}
L_3 = {α^21, α^58, α^9, α^26, α^6, α^5, α^28, α^34}
L_4 = {α^21, α^30, α^57, 0, α^3, α^48, α^39, α^12}
L_5 = {α^21, α^51, α^61, α^27, α^19, α^14, α^62, α^2}
L_6 = {α^21, α^38, α^40, α^33, α^46, α^17, α^18, α^7}
L_7 = {α^21, α^29, α^16, α^53, α^23, α^0, α^4, α^1}
L_8 = {α^21, α^59, α^15, α^45, α^56, α^8, α^55, α^13}
L_9 = {α^21, α^43, α^41, α^20, α^10, α^36, α^24, α^49}
8.20 a. The radix-2^3 expansion of 47 is:
47 = 7 + 5 · 2^3.
Hence, the 2^3-weight of 47 is
W_{2^3}(47) = 7 + 5 = 12.
b.
W_{2^3}(47^(0)) = W_{2^3}(47) = 7 + 5 = 12,
W_{2^3}(47^(1)) = W_{2^3}(31) = 7 + 3 = 10,
W_{2^3}(47^(2)) = W_{2^3}(62) = 6 + 7 = 13.
Hence,
max_{0≤l<3} W_{2^3}(47^(l)) = 13.
c. All the positive integers h less than 63 such that
0 < max_{0≤l<3} W_{2^3}(h^(l)) ≤ 2^3 − 1
are
1, 2, 3, 4, 5, 6, 7, 8, 10, 12, 14, 16, 17, 20, 21, 24, 28, 32, 33, 34, 35, 40, 42, 48, 49, 56.
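All three parts can be reproduced with a few lines. In the sketch below (mine), W_{2^3}(h) is the sum of the two radix-8 digits of h < 64, and h^(l) = 2^l · h mod 63 for l = 0, 1, 2, as in part b.

```python
# 2^3-weight of h < 64 and its maximum over the shifts h^(l) = 2^l * h mod 63.

def w8(h):
    """Sum of the radix-8 digits of h (h < 64, so two digits)."""
    return (h % 8) + (h // 8)

def max_shift_weight(h):
    return max(w8((h << l) % 63) for l in range(3))

valid = [h for h in range(1, 63) if 0 < max_shift_weight(h) <= 7]
```
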
Chapter 11
Convolutional Codes
11.1 (a) The encoder diagram is shown below.
[Encoder diagram: input u; output sequences v^(0), v^(1), v^(2)]
(b) The generator matrix is given by
G = [ 111 101 011
          111 101 011
              111 101 011
                  . . . ].
(c) The codeword corresponding to u = (11101) is given by
v = u · G = (111, 010, 001, 110, 100, 101, 011).
11.2 (a) The generator sequences of the convolutional encoder in Figure 11.3 on page 460 are given in
(11.21).
(b) The generator matrix is given by
G = [ 1111 0000 0000
      0101 0110 0000
      0011 0100 0011
           1111 0000 0000
           0101 0110 0000
           0011 0100 0011
                . . . ],
where each successive block of three rows is shifted four positions to the right.
(c) The codeword corresponding to u = (110, 011, 101) is given by
v = u · G = (1010, 0000, 1110, 0111, 0011).
11.3 (a) The generator matrix is given by
G(D) = [ 1 + D   1 + D^2   1 + D + D^2 ].
(b) The output sequences corresponding to u(D) = 1 + D^2 + D^3 + D^4 are
V(D) = [ v^(0)(D), v^(1)(D), v^(2)(D) ]
= [ 1 + D + D^2 + D^5, 1 + D^3 + D^5 + D^6, 1 + D + D^4 + D^6 ],
and the corresponding codeword is
v(D) = v^(0)(D^3) + Dv^(1)(D^3) + D^2 v^(2)(D^3)
= 1 + D + D^2 + D^3 + D^5 + D^6 + D^10 + D^14 + D^15 + D^16 + D^19 + D^20.
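The interleaving step is purely mechanical. The sketch below (mine) represents GF(2) polynomials in D as bit masks, multiplies u(D) by each generator of G(D), substitutes D → D^3 in each output, and recombines with offsets 0, 1, 2 to recover the codeword exponents listed above.

```python
# Encode u(D) = 1 + D^2 + D^3 + D^4 with G(D) = [1+D, 1+D^2, 1+D+D^2]
# and interleave the three output sequences into one codeword.

def polymul_gf2(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def spread(p):
    """Substitute D -> D^3 (spread each exponent by a factor of 3)."""
    r, i = 0, 0
    while p:
        if p & 1:
            r |= 1 << (3 * i)
        p >>= 1
        i += 1
    return r

u = 0b11101                      # 1 + D^2 + D^3 + D^4
gens = [0b11, 0b101, 0b111]      # 1 + D, 1 + D^2, 1 + D + D^2
outs = [polymul_gf2(u, g) for g in gens]
v = spread(outs[0]) ^ (spread(outs[1]) << 1) ^ (spread(outs[2]) << 2)
exponents = [i for i in range(v.bit_length()) if (v >> i) & 1]
```
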
11.4 (a) The generator matrix is given by
G(D) = [ 1 + D   D   1 + D
         D       1   1 ]
and the composite generator polynomials are
g_1(D) = g_1^(0)(D^3) + D g_1^(1)(D^3) + D^2 g_1^(2)(D^3)
= 1 + D^2 + D^3 + D^4 + D^5
and
g_2(D) = g_2^(0)(D^3) + D g_2^(1)(D^3) + D^2 g_2^(2)(D^3)
= D + D^2 + D^3.
(b) The codeword corresponding to the set of input sequences U(D) = [ 1 + D + D^3, 1 + D^2 + D^3 ] is
v(D) = u^(1)(D^3) g_1(D) + u^(2)(D^3) g_2(D)
= 1 + D + D^3 + D^4 + D^6 + D^10 + D^13 + D^14.
11.5 (a) The generator matrix is given by
G = [ 111 001 010 010 001 011
          111 001 010 010 001 011
              111 001 010 010 001 011
                  . . . ].
(b) The parity sequences corresponding to u = (1101) are given by
v^(1)(D) = u(D) · g^(1)(D) = (1 + D + D^3)(1 + D^2 + D^3 + D^5) = 1 + D + D^2 + D^3 + D^4 + D^8,
and
v^(2)(D) = u(D) · g^(2)(D) = (1 + D + D^3)(1 + D + D^4 + D^5) = 1 + D^2 + D^3 + D^6 + D^7 + D^8.
Hence,
v^(1) = (111110001)
v^(2) = (101100111).
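These two products are quick to check by machine. The sketch below (mine) multiplies u(D) = 1 + D + D^3 by each generator polynomial over GF(2) and prints the resulting coefficient sequences lowest degree first.

```python
# Verify v^(1)(D) and v^(2)(D) for u = (1101).

def polymul_gf2(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

u = 0b1011        # 1 + D + D^3  (u = 1101, lowest degree first)
g1 = 0b101101     # 1 + D^2 + D^3 + D^5
g2 = 0b110011     # 1 + D + D^4 + D^5
v1 = polymul_gf2(u, g1)
v2 = polymul_gf2(u, g2)

def bits(p, n):
    """Coefficient sequence of length n, lowest degree first."""
    return "".join(str((p >> i) & 1) for i in range(n))
```
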
11.6 (a) The controller canonical form encoder realization, requiring 6 delay elements, is shown below.
[Encoder diagram, controller canonical form: inputs u^(1), u^(2); outputs v^(0), v^(1), v^(2)]
(b) The observer canonical form encoder realization, requiring only 3 delay elements, is shown below.
[Encoder diagram, observer canonical form: inputs u^(1), u^(2); outputs v^(0), v^(1), v^(2)]
11.14 (a) The GCD of the generator polynomials is 1.
(b) Since the GCD is 1, the inverse transfer function matrix G^{−1}(D) must satisfy
G(D)G^{−1}(D) = [ 1 + D^2   1 + D + D^2 ] G^{−1}(D) = I.
By inspection,
G^{−1}(D) = [ 1 + D
              D ].
11.15 (a) The GCD of the generator polynomials is 1 + D^2, and a feedforward inverse does not exist.
(b) The encoder state diagram is shown below.
[Encoder state diagram: states S_0 through S_7; branches labeled input/output with 0/00, 0/01, 0/10, 0/11, 1/00, 1/01, 1/10, 1/11]
(c) The cycles S_2 S_5 S_2 and S_7 S_7 both have zero output weight.
(d) The infinite-weight information sequence
u(D) = 1/(1 + D^2) = 1 + D^2 + D^4 + D^6 + D^8 + · · ·
results in the output sequences
v^(0)(D) = u(D)(1 + D^2) = 1
v^(1)(D) = u(D)(1 + D + D^2 + D^3) = 1 + D,
and hence a codeword of finite weight.
(e) This is a catastrophic encoder realization.
11.16 For a systematic (n, k, ν) encoder, the generator matrix G(D) is a k × n matrix of the form
G(D) = [ I_k | P(D) ] =
[ 1 0 · · · 0   g_1^(k)(D)   · · ·   g_1^(n−1)(D)
  0 1 · · · 0   g_2^(k)(D)   · · ·   g_2^(n−1)(D)
  . . .
  0 0 · · · 1   g_k^(k)(D)   · · ·   g_k^(n−1)(D) ].
The transfer function matrix of a feedforward inverse G^{−1}(D) with delay l = 0 must satisfy
G(D)G^{−1}(D) = I_k.
A matrix satisfying this condition is given by
G^{−1}(D) = [ I_k
              0_{(n−k)×k} ],
the k × k identity matrix stacked above an (n − k) × k all-zero block.
11.19 (a) The encoder state diagram is shown below.
[Encoder state diagram: states S_0, S_1, S_2, S_3; branches labeled input/output: 0/000, 1/001, 1/111, 0/011, 1/100, 0/110, 1/010, 0/101]
(b) The modified state diagram is shown below.
[Modified state diagram: state S_0 split into initial and final copies; branch gains X, X^2, X^3]
(c) The WEF is given by
A(X) = Σ_i F_i Δ_i / Δ.
There are 3 cycles in the graph:
Cycle 1: S_1 S_2 S_1, C_1 = X^3
Cycle 2: S_1 S_3 S_2 S_1, C_2 = X^4
Cycle 3: S_3 S_3, C_3 = X.
There is one pair of nontouching cycles:
Cycle pair 1: (Cycle 1, Cycle 3), C_1 C_3 = X^4.
There are no more sets of nontouching cycles. Therefore,
Δ = 1 − Σ_i C_i + Σ_{i,j} C_i C_j = 1 − (X + X^3 + X^4) + X^4 = 1 − X − X^3.
There are 2 forward paths:
Forward path 1: S_0 S_1 S_2 S_0, F_1 = X^7
Forward path 2: S_0 S_1 S_3 S_2 S_0, F_2 = X^8.
Only cycle 3 does not touch forward path 1, and hence
Δ_1 = 1 − X.
Forward path 2 touches all the cycles, and hence
Δ_2 = 1.
Finally, the WEF is given by
A(X) = [X^7 (1 − X) + X^8] / (1 − X − X^3) = X^7 / (1 − X − X^3).
Carrying out the division,
A(X) = X^7 + X^8 + X^9 + 2X^10 + · · · ,
indicating that there is one codeword of weight 7, one codeword of weight 8, one codeword of weight 9, 2 codewords of weight 10, and so on.
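The long division can be replayed with the linear recurrence it implies. In the sketch below (mine), the coefficients a_n of 1/(1 − X − X^3) satisfy a_n = a_{n−1} + a_{n−3} with a_0 = a_1 = a_2 = 1, so the number of codewords of weight w is a_{w−7}.

```python
# Expand X^7 / (1 - X - X^3) as an integer power series.

def wef_coeffs(n_terms):
    """Coefficients a_0..a_{n_terms-1} of 1/(1 - X - X^3)."""
    a = [1, 1, 1]
    while len(a) < n_terms:
        a.append(a[-1] + a[-3])  # a_n = a_{n-1} + a_{n-3}
    return a

a = wef_coeffs(8)
weights = {7 + i: a[i] for i in range(8)}  # weight w -> codeword count
```
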
(d) The augmented state diagram is shown below.
[Augmented state diagram: states S_0 (split), S_1, S_2, S_3; branch gains WX^3L, X^2L, and WXL]
(e) The IOWEF is given by
A(W, X, L) = Σ_i F_i Δ_i / Δ.
There are 3 cycles in the graph:
Cycle 1: S_1 S_2 S_1, C_1 = WX^3 L^2
Cycle 2: S_1 S_3 S_2 S_1, C_2 = W^2 X^4 L^3
Cycle 3: S_3 S_3, C_3 = WXL.
There is one pair of nontouching cycles:
Cycle pair 1: (Cycle 1, Cycle 3) C
1
C
3
= W
2
X
4
L
3
.
There are no more sets of nontouching cycles. Therefore,
∆ = 1 −

i
C
i
+

i

,j

C
i
C
j

= 1 −WX
3
L
2
+W
2
X
4
L
3
+WXL +W
2
X
4
L
3
.
There are 2 forward paths:

Forward path 1: S0 S1 S2 S0, F1 = W X^7 L^3
Forward path 2: S0 S1 S3 S2 S0, F2 = W^2 X^8 L^4.

Only cycle 3 does not touch forward path 1, and hence

∆_1 = 1 − W X L.

Forward path 2 touches all the cycles, and hence

∆_2 = 1.
Finally, the IOWEF is given by

A(W, X, L) = [W X^7 L^3 (1 − W X L) + W^2 X^8 L^4] / [1 − (W X^3 L^2 + W^2 X^4 L^3 + W X L) + W^2 X^4 L^3]
           = W X^7 L^3 / (1 − W X L − W X^3 L^2).

Carrying out the division,

A(W, X, L) = W X^7 L^3 + W^2 X^8 L^4 + W^3 X^9 L^5 + · · · ,

indicating that there is one codeword of weight 7 with an information weight of 1 and length 3,
one codeword of weight 8 with an information weight of 2 and length 4, and one codeword of
weight 9 with an information weight of 3 and length 5.
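The cancellation that reduces the Mason's-rule ratio to the simple closed form can be spot-checked numerically (a sketch, not part of the original solution):

```python
# Check that the full Mason's-rule ratio equals W X^7 L^3 / (1 - WXL - W X^3 L^2).

def full_form(W, X, L):
    num = W * X**7 * L**3 * (1 - W * X * L) + W**2 * X**8 * L**4
    den = 1 - (W * X**3 * L**2 + W**2 * X**4 * L**3 + W * X * L) + W**2 * X**4 * L**3
    return num / den

def reduced_form(W, X, L):
    return W * X**7 * L**3 / (1 - W * X * L - W * X**3 * L**2)

for point in [(0.1, 0.2, 0.3), (0.05, 0.5, 0.1), (0.3, 0.1, 0.2)]:
    assert abs(full_form(*point) - reduced_form(*point)) < 1e-12
print("simplification verified")
```

The identity holds exactly: the numerator collapses to W X^7 L^3 and the two W^2 X^4 L^3 terms in the denominator cancel.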
11.20 Using the state variable method described on pp. 505–506, the WEF is given by

A(X) = −X^4 (−1 + X^3 − 2X^4 + X^6 − 2X^7 + 6X^8 − 9X^9 − 14X^10 + 9X^11 + 38X^12 + 5X^13 − 74X^14 − 9X^15 + 78X^16 + 3X^17 − 42X^18 + 9X^20)
       / (1 − X − X^2 − X^3 + 2X^4 − 4X^6 + 2X^7 + X^8 − 2X^9 + 3X^10 − 6X^11 − 8X^12 + 27X^14 + 9X^15 − 45X^16 − 5X^17 + 32X^18 − X^19 − 3X^20 + X^21 − 8X^22 + 3X^24).
Performing the division results in

A(X) = X^4 + X^5 + 2X^6 + 3X^7 + 6X^8 + 9X^9 + · · · ,

which indicates that there is one codeword of weight 4, one codeword of weight 5, two codewords of
weight 6, and so on.
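The division of these long polynomials is tedious by hand; a short script (a sketch using the coefficients as transcribed above) confirms the low-order terms:

```python
# Expand the rational WEF of Problem 11.20 as a power series and compare
# the low-order coefficients with the quoted result.

# numerator = -X^4 * n0(X); coefficients stored as {power: coefficient}
n0 = {0: -1, 3: 1, 4: -2, 6: 1, 7: -2, 8: 6, 9: -9, 10: -14, 11: 9,
      12: 38, 13: 5, 14: -74, 15: -9, 16: 78, 17: 3, 18: -42, 20: 9}
num = {p + 4: -c for p, c in n0.items()}
den = {0: 1, 1: -1, 2: -1, 3: -1, 4: 2, 6: -4, 7: 2, 8: 1, 9: -2,
      10: 3, 11: -6, 12: -8, 14: 27, 15: 9, 16: -45, 17: -5,
      18: 32, 19: -1, 20: -3, 21: 1, 22: -8, 24: 3}

def series(num, den, n_terms):
    # long division: a[n] = num[n] - sum_{k>=1} den[k] * a[n-k]
    a = [0] * n_terms
    for n in range(n_terms):
        acc = num.get(n, 0)
        for k in range(1, n + 1):
            acc -= den.get(k, 0) * a[n - k]
        a[n] = acc
    return a

print(series(num, den, 10)[4:])   # coefficients of X^4..X^9 → [1, 1, 2, 3, 6, 9]
```

The printed coefficients agree with A(X) = X^4 + X^5 + 2X^6 + 3X^7 + 6X^8 + 9X^9 + · · · .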
11.28 (a) From Problem 11.19(c), the WEF of the code is

A(X) = X^7 + X^8 + X^9 + 2X^10 + · · · ,

and the free distance of the code is therefore d_free = 7, the lowest power of X in A(X).
(b) The complete CDF is shown below.
(Figure: complete column distance function d_l versus l, showing d_min = 5 and d_free = 7.)
(c) The minimum distance is

d_min = d_l |_{l=m=2} = 5.
11.29 (a) By examining the encoder state diagram in Problem 11.15 and considering only paths that begin
and end in state S0 (see page 507), we find that the free distance of the code is d_free = 6. This
corresponds to the path S0 S1 S2 S4 S0 and the input sequence u = (1000).
(b) The complete CDF is shown below.
(Figure: complete column distance function d_l versus l, showing d_min = 3 and d_free = 6.)
(c) The minimum distance is

d_min = d_l |_{l=m=3} = 3.
11.31 By definition, the free distance d_free is the minimum weight of any path that has diverged from and
remerged with the all-zero state. Assume that [v]_j represents the shortest remerged path through the
state diagram with weight d_free. Letting [d_l]_re be the minimum weight of all remerged paths of length
l, it follows that [d_l]_re = d_free for all l ≥ j. Also, for a noncatastrophic encoder, any path that remains
unmerged must accumulate weight. Letting [d_l]_un be the minimum weight of all unmerged paths of
length l, it follows that

lim_{l→∞} [d_l]_un → ∞.

Therefore

lim_{l→∞} d_l = min { lim_{l→∞} [d_l]_re , lim_{l→∞} [d_l]_un } = d_free.

Q. E. D.
Chapter 12
Optimum Decoding of Convolutional
Codes
12.1 (Note: The problem should read "for the (3,2,2) encoder in Example 11.2" rather than "for the (3,2,2)
code in Table 12.1(d)".) The state diagram of the encoder is given by:
(Figure: encoder state diagram with states S0–S3; each state has four outgoing branches labeled with input/output pairs such as 00/000, 01/111, 11/101, 10/010, and so on.)
From the state diagram, we can draw a trellis diagram containing h +m+ 1 = 3 + 1 + 1 = 5 levels as
shown below:
(Figure: 5-level trellis diagram with states S0–S3 and input/output branch labels.)
Hence, for u = (11, 01, 10),

v^(0) = (1001)
v^(1) = (1001)
v^(2) = (0011)

and

v = (110, 000, 001, 111),

agreeing with (11.16) in Example 11.2. The path through the trellis corresponding to this codeword is
shown highlighted in the figure.
12.2 Note that

Σ_{l=0}^{N−1} c_2 [log P(r_l|v_l) + c_1] = Σ_{l=0}^{N−1} [c_2 log P(r_l|v_l) + c_2 c_1]
                                         = c_2 Σ_{l=0}^{N−1} log P(r_l|v_l) + N c_2 c_1.
Since

max_v { c_2 Σ_{l=0}^{N−1} log P(r_l|v_l) + N c_2 c_1 } = c_2 max_v { Σ_{l=0}^{N−1} log P(r_l|v_l) } + N c_2 c_1

if c_2 is positive, any path that maximizes Σ_{l=0}^{N−1} log P(r_l|v_l) also maximizes
Σ_{l=0}^{N−1} c_2 [log P(r_l|v_l) + c_1].
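A tiny numerical illustration of this invariance (the branch probabilities are made-up values, not from the text):

```python
# Scaling a path metric by c2 > 0 and adding a constant per branch
# does not change which path has the best total metric.

import math
import random

random.seed(1)
paths = [[random.random() for _ in range(5)] for _ in range(4)]  # branch probs
c1, c2 = 3.0, 7.5

def best(metric):
    """Index of the path with the largest summed branch metric."""
    return max(range(len(paths)), key=lambda i: sum(metric(p) for p in paths[i]))

plain  = best(lambda p: math.log(p))
scaled = best(lambda p: c2 * (math.log(p) + c1))
assert plain == scaled
print("argmax preserved:", plain == scaled)
```

The equality holds for any positive c2 and any c1, since the transformation is a strictly increasing affine map of the path-metric sum.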
12.3 The integer metric table becomes:

        0_1   0_2   1_2   1_1
  0      6     5     3     0
  1      0     3     5     6
The received sequence is

r = (1_1 1_2 0_1, 1_1 1_1 0_2, 1_1 1_1 0_1, 1_1 1_1 1_1, 0_1 1_2 0_1, 1_2 0_2 1_1, 1_2 0_1 1_1).

The decoded sequence is shown in the figure below, and the final survivor is

v̂ = (111, 010, 110, 011, 000, 000, 000),

which yields a decoded information sequence of

û = (11000).

This result agrees with Example 12.1.
(Figure: decoding trellis with accumulated integer metrics at each state; the final survivor is highlighted.)
12.4 For the given channel transition probabilities, the resulting metric table is:

        0_1     0_2     0_3     0_4     1_4     1_3     1_2     1_1
  0   −0.363  −0.706  −0.777  −0.955  −1.237  −1.638  −2.097  −2.699
  1   −2.699  −2.097  −1.638  −1.237  −0.955  −0.777  −0.706  −0.363

To construct an integer metric table, choose c_1 = 2.699 and c_2 = 4.28. Then the integer metric table
becomes:

        0_1  0_2  0_3  0_4  1_4  1_3  1_2  1_1
  0     10    9    8    7    6    5    3    0
  1      0    3    5    6    7    8    9   10
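The integer table can be reproduced directly from the constants c_1 and c_2 (a sketch of the computation implied above):

```python
# Each integer metric is c2 * (log-metric + c1), rounded to the nearest integer.

row0 = [-0.363, -0.706, -0.777, -0.955, -1.237, -1.638, -2.097, -2.699]
c1, c2 = 2.699, 4.28

int_row0 = [round(c2 * (m + c1)) for m in row0]
print(int_row0)   # row for a transmitted 0 → [10, 9, 8, 7, 6, 5, 3, 0]
```

The row for a transmitted 1 is the mirror image, by the symmetry of the channel.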
12.5 (a) Referring to the state diagram of Figure 11.13(a), the trellis diagram for an information sequence
of length h = 4 is shown in the figure below.

(b) After Viterbi decoding the final survivor is

v̂ = (11, 10, 01, 01, 00, 11, 00).

This corresponds to the information sequence

û = (1110).
(Figure: decoding trellis with states S0–S7 and accumulated metrics at each state; the final survivor is highlighted.)
12.6 Combining the soft decision outputs yields the following transition probabilities:

         0      1
  0    0.909  0.091
  1    0.091  0.909

For hard decision decoding, the metric is simply Hamming distance. For the received sequence

r = (11, 10, 00, 01, 10, 01, 00),

the decoding trellis is as shown in the figure below, and the final survivor is

v̂ = (11, 10, 01, 01, 00, 11, 00),

which corresponds to the information sequence

û = (1110).

This result matches the result obtained using soft decisions in Problem 12.5.
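The metric of the surviving path is easy to check (a quick sketch, not part of the original solution):

```python
# Hamming distance between the received sequence and the final survivor.

r    = ["11", "10", "00", "01", "10", "01", "00"]
vhat = ["11", "10", "01", "01", "00", "11", "00"]

dist = sum(a != b for x, y in zip(r, vhat) for a, b in zip(x, y))
print(dist)   # → 3
```

Three bit positions differ, so the survivor's Hamming metric is 3.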
(Figure: hard-decision decoding trellis with Hamming metrics at each state; the final survivor is highlighted.)
12.9 Proof: For d even,

P_d = (1/2) (d choose d/2) p^{d/2} (1 − p)^{d/2} + Σ_{e=d/2+1}^{d} (d choose e) p^e (1 − p)^{d−e}
    < Σ_{e=d/2}^{d} (d choose e) p^e (1 − p)^{d−e}
    < Σ_{e=d/2}^{d} (d choose e) p^{d/2} (1 − p)^{d/2}
    = p^{d/2} (1 − p)^{d/2} Σ_{e=d/2}^{d} (d choose e)
    < 2^d p^{d/2} (1 − p)^{d/2}

and thus (12.21) is an upper bound on P_d for d even. Q. E. D.
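A numeric spot-check of the bound (a sketch; p = 0.1 is an arbitrary test value):

```python
# Verify P_d < 2^d p^{d/2} (1-p)^{d/2} for a few even d.

from math import comb

def P_d(d, p):
    half = comb(d, d // 2) / 2 * p**(d // 2) * (1 - p)**(d // 2)
    tail = sum(comb(d, e) * p**e * (1 - p)**(d - e)
               for e in range(d // 2 + 1, d + 1))
    return half + tail

p = 0.1
for d in (2, 4, 6, 8):
    bound = 2**d * p**(d / 2) * (1 - p)**(d / 2)
    assert P_d(d, p) < bound
print("bound holds for d = 2, 4, 6, 8")
```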
12.10 The event error probability is bounded by (12.25)

P(E) < Σ_{d=d_free}^{∞} A_d P_d < A(X)|_{X=2√(p(1−p))}.

From Example 11.12,

A(X) = (X^6 + X^7 − X^8) / (1 − 2X − X^3) = X^6 + 3X^7 + 5X^8 + 11X^9 + 25X^10 + · · · ,

which yields

(a) P(E) < 1.2118 × 10^−4 for p = 0.01,
(b) P(E) < 7.7391 × 10^−8 for p = 0.001.
The bit error probability is bounded by (12.29)

P_b(E) < Σ_{d=d_free}^{∞} B_d P_d < B(X)|_{X=2√(p(1−p))} = (1/k) ∂A(W, X)/∂W |_{X=2√(p(1−p)), W=1}.

From Example 11.12,

A(W, X) = [W X^7 + W^2 (X^6 − X^8)] / [1 − W(2X + X^3)]
        = W X^7 + W^2 (X^6 + X^8 + X^10) + W^3 (2X^7 + 3X^9 + 3X^11 + X^13) + · · · .
Hence,

∂A(W, X)/∂W = [X^7 + 2W(X^6 − 3X^8 − X^10) − 3W^2 (2X^7 − X^9 − X^11)] / (1 − 2WX − WX^3)^2 + · · ·

and

∂A(W, X)/∂W |_{W=1} = (2X^6 − X^7 − 2X^8 + X^9 + X^11) / (1 − 2X − X^3)^2 = 2X^6 + 7X^7 + 18X^8 + · · · .

This yields

(a) P_b(E) < 3.0435 × 10^−4 for p = 0.01,
(b) P_b(E) < 1.6139 × 10^−7 for p = 0.001.
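Both sets of numbers can be reproduced by evaluating the closed forms directly (a sketch, not part of the original solution):

```python
# Evaluate the event- and bit-error bounds of Problem 12.10.

from math import sqrt

def A(X):
    return (X**6 + X**7 - X**8) / (1 - 2 * X - X**3)

def B(X):   # (1/k) dA/dW at W = 1, with k = 1
    return (2 * X**6 - X**7 - 2 * X**8 + X**9 + X**11) / (1 - 2 * X - X**3) ** 2

for p, pe, pb in [(0.01, 1.2118e-4, 3.0435e-4), (0.001, 7.7391e-8, 1.6139e-7)]:
    X = 2 * sqrt(p * (1 - p))
    assert abs(A(X) - pe) < 1e-3 * pe    # agree to ~0.1%
    assert abs(B(X) - pb) < 1e-3 * pb
print("bounds reproduced")
```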
12.11 The event error probability is given by (12.26)

P(E) ≈ A_{d_free} [2 √(p(1 − p))]^{d_free} ≈ A_{d_free} 2^{d_free} p^{d_free/2}

and the bit error probability (12.30) is given by

P_b(E) ≈ B_{d_free} [2 √(p(1 − p))]^{d_free} ≈ B_{d_free} 2^{d_free} p^{d_free/2}.
From Problem 12.10,

d_free = 6, A_{d_free} = 1, B_{d_free} = 2.

(a) For p = 0.01,

P(E) ≈ 1 · 2^6 · (0.01)^{6/2} = 6.4 × 10^−5
P_b(E) ≈ 2 · 2^6 · (0.01)^{6/2} = 1.28 × 10^−4.

(b) For p = 0.001,

P(E) ≈ 1 · 2^6 · (0.001)^{6/2} = 6.4 × 10^−8
P_b(E) ≈ 2 · 2^6 · (0.001)^{6/2} = 1.28 × 10^−7.
12.12 The (3, 1, 2) encoder of (12.1) has d_free = 7 and B_{d_free} = 1. Thus, expression (12.36) becomes

P_b(E) ≈ B_{d_free} 2^{d_free/2} e^{−(R d_free/2)(E_b/N_0)} = 2^{7/2} e^{−(7/6)(E_b/N_0)}

and (12.37) remains

P_b(E) ≈ (1/2) e^{−E_b/N_0}.

These expressions are plotted versus E_b/N_0 in the figure below.
(Figure: P_b(E) versus E_b/N_0 (dB) for the coded (12.36) and uncoded (12.37) expressions.)
Equating the above expressions and solving for E_b/N_0 yields

2^{7/2} e^{−(7/6)(E_b/N_0)} = (1/2) e^{−E_b/N_0}
1 = 2^{9/2} e^{−(1/6)(E_b/N_0)}
e^{−(1/6)(E_b/N_0)} = 2^{−9/2}
−(1/6)(E_b/N_0) = ln(2^{−9/2})
E_b/N_0 = −6 ln(2^{−9/2}) = 18.71,

which is E_b/N_0 = 12.72 dB, the coding threshold. The coding gain as a function of P_b(E) is plotted
below.
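The threshold arithmetic can be reproduced in two lines (a sketch of the computation above):

```python
# Coding threshold: Eb/N0 where the coded and uncoded curves cross.

from math import log, log10

x = -6 * log(2 ** (-9 / 2))     # = 27 ln 2
x_db = 10 * log10(x)
print(round(x, 2), round(x_db, 2))   # → 18.71 12.72
```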
(Figure: required E_b/N_0 (dB) versus P_b(E) for the coded and uncoded systems, and the resulting coding gain.)
Note that in this example of a short constraint length code (ν = 2) with hard-decision decoding, the
approximate expressions for P_b(E) indicate that a positive coding gain is only achieved at very small
values of P_b(E), and the asymptotic coding gain is only 0.7 dB.
12.13 The (3, 1, 2) encoder of Problem 12.1 has d_free = 7 and B_{d_free} = 1. Thus, expression (12.46) for the
unquantized AWGN channel becomes

P_b(E) ≈ B_{d_free} e^{−R d_free E_b/N_0} = e^{−(7/3)(E_b/N_0)}

and (12.37) remains

P_b(E) ≈ (1/2) e^{−E_b/N_0}.

These expressions are plotted versus E_b/N_0 in the figure below.
(Figure: P_b(E) versus E_b/N_0 (dB) for the coded (12.46) and uncoded (12.37) expressions.)
Equating the above expressions and solving for E_b/N_0 yields

e^{−(7/3)(E_b/N_0)} = (1/2) e^{−E_b/N_0}
1 = (1/2) e^{(4/3)(E_b/N_0)}
e^{(4/3)(E_b/N_0)} = 2
(4/3)(E_b/N_0) = ln(2)
E_b/N_0 = (3/4) ln(2) = 0.5199,

which is E_b/N_0 = −2.84 dB, the coding threshold. (Note: If the slightly tighter bound on Q(x) from
(1.5) is used to form the approximate expression for P_b(E), the coding threshold actually moves to
−∞ dB. But this is just an artifact of the bounds, which are not tight for small values of E_b/N_0.)
The coding gain as a function of P_b(E) is plotted below. Note that in this example of a short constraint
length code (ν = 2) with soft-decision decoding, the approximate expressions for P_b(E) indicate that
a coding gain above 3.0 dB is achieved at moderate values of P_b(E), and the asymptotic coding gain
is 3.7 dB.
14
10
-10
10
-9
10
-8
10
-7
10
-6
10
-5
10
-4
10
-3
10
-2
10
-1
-2
0
2
4
6
8
10
12
14
P
b
E
b
/
N
o

(
d
B
)
Reguired SNR, Coded
Required SNR, Uncoded
Gain
( E )
15
12.14 The IOWEF of the (3, 1, 2) encoder of (12.1) is

A(W, X) = W X^7 / (1 − WX − WX^3)

and thus (12.39b) becomes

P_b(E) < B(X)|_{X=D_0} = (1/k) ∂A(W, X)/∂W |_{X=D_0, W=1} = X^7 / (1 − WX − WX^3)^2 |_{X=D_0, W=1}.

For the DMC of Problem 12.4, D_0 = 0.42275 and the above expression becomes

P_b(E) < 9.5874 × 10^−3.
If the DMC is converted to a BSC, then the resulting crossover probability is p = 0.091. Using (12.29)
yields

P_b(E) < B(X)|_{X=2√(p(1−p))} = (1/k) ∂A(W, X)/∂W |_{X=2√(p(1−p)), W=1}
       = X^7 / (1 − WX − WX^3)^2 |_{X=2√(p(1−p)), W=1} = 3.7096 × 10^−1,

about a factor of 40 larger than the soft decision case.
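Both evaluations are easy to reproduce (a sketch; small rounding differences from the text's figures are expected):

```python
# Evaluate the two bit-error bounds of Problem 12.14.

from math import sqrt

def B(X):   # X^7 / (1 - X - X^3)^2, the derivative at W = 1
    return X**7 / (1 - X - X**3) ** 2

soft = B(0.42275)                           # unquantized DMC, X = D0
hard = B(2 * sqrt(0.091 * (1 - 0.091)))     # equivalent BSC, p = 0.091

print(soft, hard, hard / soft)   # the hard-decision bound is roughly 40x larger
```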
12.16 For the optimum (2, 1, 7) encoder in Table 12.1(c), d_free = 10, A_{d_free} = 1, and B_{d_free} = 2.

(a) From Table 12.1(c),

γ = 6.99 dB.

(b) Using (12.26) yields

P(E) ≈ A_{d_free} 2^{d_free} p^{d_free/2} = 1.02 × 10^−7.

(c) Using (12.30) yields

P_b(E) ≈ B_{d_free} 2^{d_free} p^{d_free/2} = 2.04 × 10^−7.

(d) For this encoder

G^−1 = [ D^2 , 1 + D + D^2 ]^T

and the amplification factor is A = 4.
For the quick-look-in (2, 1, 7) encoder in Table 12.2, d_free = 9, A_{d_free} = 1, and B_{d_free} = 1.

(a) From Table 12.2,

γ = 6.53 dB.

(b) Using (12.26) yields

P(E) ≈ A_{d_free} 2^{d_free} p^{d_free/2} = 5.12 × 10^−7.

(c) Using (12.30) yields

P_b(E) ≈ B_{d_free} 2^{d_free} p^{d_free/2} = 5.12 × 10^−7.

(d) For this encoder

G^−1 = [ 1 , 1 ]^T

and the amplification factor is A = 2.
12.17 The generator matrix of a rate R = 1/2 systematic feedforward encoder is of the form

G = [ 1   g^(1)(D) ].

Letting g^(1)(D) = 1 + D + D^2 + D^5 + D^7 achieves d_free = 6 with B_{d_free} = 1 and A_{d_free} = 1.

(a) The soft-decision asymptotic coding gain is

γ = 4.77 dB.

(b) Using (12.26) yields

P(E) ≈ A_{d_free} 2^{d_free} p^{d_free/2} = 6.4 × 10^−5.

(c) Using (12.30) yields

P_b(E) ≈ B_{d_free} 2^{d_free} p^{d_free/2} = 6.4 × 10^−5.

(d) For this encoder (and all systematic encoders)

G^−1 = [ 1 , 0 ]^T

and the amplification factor is A = 1.
12.18 The generator polynomial for the (15, 7) BCH code is

g(X) = 1 + X^4 + X^6 + X^7 + X^8

and d_g = 5. The generator polynomial of the dual code is

h(X) = (X^15 + 1) / (X^8 + X^7 + X^6 + X^4 + 1) = X^7 + X^6 + X^4 + 1

and hence d_h ≥ 4.
(a) The rate R = 1/2 code with composite generator polynomial g(D) = 1 + D^4 + D^6 + D^7 + D^8 has
generator matrix

G(D) = [ 1 + D^2 + D^3 + D^4   D^3 ]

and d_free ≥ min(5, 8) = 5.
(b) The rate R = 1/4 code with composite generator polynomial g(D) = g(D^2) + D h(D^2) = 1 + D +
D^8 + D^9 + D^12 + D^13 + D^14 + D^15 + D^16 has generator matrix

G(D) = [ 1 + D^2 + D^3 + D^4   1 + D^2 + D^3   D^3   D^3 ]

and d_free ≥ min(d_g + d_h, 3 d_g, 3 d_h) = min(9, 15, 12) = 9.
The generator polynomial for the (31, 16) BCH code is

g(X) = 1 + X + X^2 + X^3 + X^5 + X^7 + X^8 + X^9 + X^10 + X^11 + X^15

and d_g = 7. The generator polynomial of the dual code is

h(X) = (X^31 + 1) / g(X) = X^16 + X^12 + X^11 + X^10 + X^9 + X^4 + X + 1

and hence d_h ≥ 6.
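Both dual-code computations can be verified with carry-less (GF(2)) polynomial multiplication (a sketch, not part of the original solution):

```python
# Check that h(X) * g(X) = X^n + 1 over GF(2) for both BCH codes above,
# representing polynomials as integer bitmasks.

def gf2_mul(a, b):
    """Carry-less product of two GF(2) polynomials given as bitmasks."""
    result = 0
    while b:
        if b & 1:
            result ^= a
        a <<= 1
        b >>= 1
    return result

def poly(*powers):
    mask = 0
    for p in powers:
        mask |= 1 << p
    return mask

g15 = poly(0, 4, 6, 7, 8)                 # (15,7) BCH generator
h15 = poly(0, 4, 6, 7)                    # its dual's generator
assert gf2_mul(g15, h15) == poly(15, 0)   # X^15 + 1

g31 = poly(0, 1, 2, 3, 5, 7, 8, 9, 10, 11, 15)   # (31,16) BCH generator
h31 = poly(0, 1, 4, 9, 10, 11, 12, 16)           # its dual's generator
assert gf2_mul(g31, h31) == poly(31, 0)          # X^31 + 1
print("both factorizations verified")
```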
(a) The rate R = 1/2 code with composite generator polynomial g(D) = 1 + D + D^2 + D^3 + D^5 +
D^7 + D^8 + D^9 + D^10 + D^11 + D^15 has generator matrix

G(D) = [ 1 + D + D^4 + D^5   1 + D + D^2 + D^3 + D^4 + D^5 + D^7 ]

and d_free ≥ min(7, 12) = 7.
(b) The rate R = 1/4 code with composite generator polynomial g(D) = g(D^2) + D h(D^2) = 1 + D +
D^2 + D^3 + D^4 + D^6 + D^9 + D^10 + D^14 + D^16 + D^18 + D^19 + D^20 + D^21 + D^22 + D^23 + D^25 + D^30 + D^33
has generator matrix

G(D) = [ 1 + D + D^4 + D^5   1 + D^2 + D^5 + D^6 + D^8   1 + D + D^2 + D^3 + D^4 + D^5 + D^7   1 + D^4 + D^5 ]

and d_free ≥ min(d_g + d_h, 3 d_g, 3 d_h) = min(13, 21, 18) = 13.
12.20 (a) The augmented state diagram is shown below.
(Figure: augmented state diagram with states S0, S1, S2, S3; branches are labeled W X^3 L, X^2 L, and W X L according to input weight, output weight, and length.)
The generating function is given by

A(W, X, L) = Σ_i F_i ∆_i / ∆.
There are 3 cycles in the graph:

Cycle 1: S1 S2 S1, C1 = W X^3 L^2
Cycle 2: S1 S3 S2 S1, C2 = W^2 X^4 L^3
Cycle 3: S3 S3, C3 = W X L.

There is one pair of nontouching cycles:

Cycle pair 1: (Cycle 1, Cycle 3), C1 C3 = W^2 X^4 L^3.
There are no more sets of nontouching cycles. Therefore,

∆ = 1 − Σ_i C_i + Σ_{i,j} C_i C_j = 1 − (W X^3 L^2 + W^2 X^4 L^3 + W X L) + W^2 X^4 L^3.
There are 2 forward paths:

Forward path 1: S0 S1 S2 S0, F1 = W X^7 L^3
Forward path 2: S0 S1 S3 S2 S0, F2 = W^2 X^8 L^4.

Only cycle 3 does not touch forward path 1, and hence

∆_1 = 1 − W X L.

Forward path 2 touches all the cycles, and hence

∆_2 = 1.
Finally, the WEF A(W, X, L) is given by

A(W, X, L) = [W X^7 L^3 (1 − W X L) + W^2 X^8 L^4] / [1 − (W X^3 L^2 + W^2 X^4 L^3 + W X L) + W^2 X^4 L^3]
           = W X^7 L^3 / (1 − W X L − W X^3 L^2)
and the generating WEF's A_i(W, X, L) are given by:

A_1(W, X, L) = W X^3 L (1 − W X L) / ∆ = W X^3 L (1 − W X L) / (1 − W X L − W X^3 L^2)
             = W X^3 L + W^2 X^6 L^3 + W^3 X^7 L^4 + (W^3 X^9 + W^4 X^8) L^5
               + (2 W^4 X^10 + W^5 X^9) L^6 + · · ·
A_2(W, X, L) = [W X^5 L^2 (1 − W X L) + W^2 X^6 L^3] / ∆ = W X^5 L^2 / (1 − W X L − W X^3 L^2)
             = W X^5 L^2 + W^2 X^6 L^3 + (W^2 X^8 + W^3 X^7) L^4 + (2 W^3 X^9 + W^4 X^8) L^5
               + (W^3 X^11 + 3 W^4 X^10 + W^5 X^9) L^6 + · · ·
A_3(W, X, L) = W^2 X^4 L^2 / ∆ = W^2 X^4 L^2 / (1 − W X L − W X^3 L^2)
             = W^2 X^4 L^2 + W^3 X^5 L^3 + (W^3 X^7 + W^4 X^6) L^4 + (2 W^4 X^8 + W^5 X^7) L^5
               + (W^4 X^10 + 3 W^5 X^9 + W^6 X^8) L^6 + · · ·
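These three-variable series are error-prone by hand; a truncated multivariate expansion (a sketch, not part of the original solution) confirms representative coefficients:

```python
# Expand the closed forms of A1 and A3 in powers of L (up to L^6) and
# check a few coefficients. Monomials are keyed by exponents (w, x, l).

from collections import defaultdict

L_MAX = 6

def mul(p, q):
    out = defaultdict(int)
    for (w1, x1, l1), c1 in p.items():
        for (w2, x2, l2), c2 in q.items():
            if l1 + l2 <= L_MAX:
                out[(w1 + w2, x1 + x2, l1 + l2)] += c1 * c2
    return dict(out)

def add(p, q):
    out = defaultdict(int, p)
    for k, c in q.items():
        out[k] += c
    return dict(out)

# t = WXL + W X^3 L^2; 1/(1 - t) = sum_k t^k (t has L-degree >= 1)
t = {(1, 1, 1): 1, (1, 3, 2): 1}
geom = {(0, 0, 0): 1}
power = {(0, 0, 0): 1}
for _ in range(L_MAX):
    power = mul(power, t)
    geom = add(geom, power)

a1 = mul({(1, 3, 1): 1, (2, 4, 2): -1}, geom)   # W X^3 L (1 - WXL) / (1 - t)
assert a1[(1, 3, 1)] == 1
assert a1[(3, 7, 4)] == 1
assert a1[(4, 10, 6)] == 2       # the 2 W^4 X^10 L^6 term

a3 = mul({(2, 4, 2): 1}, geom)   # W^2 X^4 L^2 / (1 - t)
assert a3[(5, 9, 6)] == 3        # the 3 W^5 X^9 L^6 term
print("series coefficients match")
```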
(b) This code has d_free = 7, so τ_min is the minimum value of τ for which d(τ) = d_free + 1 = 8.
Examining the series expansions of A_1(W, X, L), A_2(W, X, L), and A_3(W, X, L) above yields
τ_min = 5.
(c) A table of d(τ) and A_{d(τ)} is given below.

  τ    d(τ)   A_{d(τ)}
  0     3       1
  1     4       1
  2     5       1
  3     6       1
  4     7       1
  5     8       1
(d) From part (c) and by looking at the series expansion of A_3(W, X, L), it can be seen that, for large
τ, d(τ) = τ + 3.
12.21 For a BSC, the trellis diagram of Figure 12.6 in the book may be used to decode the three possible 21-
bit subsequences using the Hamming metric. The results are shown in the three figures below. Since the
r used in the middle figure (b) below has the smallest Hamming distance (2) of the three subsequences,
it is the most likely to be correctly synchronized.
(Figure (a): decoding trellis for r = (011, 100, 110, 010, 110, 010, 001).)
(Figure (b): decoding trellis for r = (111, 001, 100, 101, 100, 100, 011).)
(Figure (c): decoding trellis for r = (110, 011, 001, 011, 001, 000, 111).)
Chapter 13
Suboptimum Decoding of
Convolutional Codes
13.1 (a) Referring to the state diagram of Figure 11.13a, the code tree for an information sequence of
length h = 4 is shown below.
(b) For u = (1001), the corresponding codeword is v = (11, 01, 11, 00, 01, 11, 11), as shown below.
(Figure: code tree of depth h + m = 7; the path for u = (1001), producing v = (11, 01, 11, 00, 01, 11, 11), is highlighted.)
13.2 Proof: A binary-input, Q-ary-output DMC is symmetric if

P(r_l = j|0) = P(r_l = Q − 1 − j|1)    (a-1)

for j = 0, 1, . . . , Q − 1. The output probability distribution may be computed as

P(r_l = j) = Σ_{i=0}^{1} P(r_l = j|i) P(i),

where i is the binary input to the channel. For equally likely input signals, P(1) = P(0) = 0.5 and

P(r_l = j) = P(r_l = j|0) P(0) + P(r_l = j|1) P(1)
           = 0.5 [P(r_l = j|0) + P(r_l = j|1)]
           = 0.5 [P(r_l = Q − 1 − j|0) + P(r_l = Q − 1 − j|1)]    [using (a-1)]
           = P(r_l = Q − 1 − j).

Q. E. D.
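A numeric illustration of the proof (the channel probabilities below are assumed example values, not from the text):

```python
# A symmetric binary-input, Q-ary-output DMC has a symmetric output
# distribution under equally likely inputs.

Q = 4
p_given_0 = [0.4, 0.3, 0.2, 0.1]      # assumed P(r = j | 0)
p_given_1 = p_given_0[::-1]           # symmetry: P(r = j | 1) = P(r = Q-1-j | 0)

p_out = [0.5 * (p_given_0[j] + p_given_1[j]) for j in range(Q)]
assert all(abs(p_out[j] - p_out[Q - 1 - j]) < 1e-12 for j in range(Q))
print(p_out)
```

Every output value j has the same probability as its mirror image Q − 1 − j, exactly as the proof asserts.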
13.3 (a) Computing the Fano metric with p = 0.045 results in the metric table:

          0        1
  0     0.434   −3.974
  1    −3.974    0.434

Dividing by 0.434 results in the following integer metric table:

         0    1
  0      1   −9
  1     −9    1
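The two table entries follow directly from the Fano metric definition (a sketch of the computation; R = 1/2 and P(r) = 1/2 for a BSC with equally likely inputs):

```python
# Fano metric for a BSC: M(r|v) = log2( P(r|v) / P(r) ) - R.

from math import log2

p, R = 0.045, 0.5
match    = log2((1 - p) / 0.5) - R
mismatch = log2(p / 0.5) - R
print(round(match, 3), round(mismatch, 3))   # → 0.434 -3.974
```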
(b) Decoding r = (11, 00, 11, 00, 01, 10, 11) using the stack algorithm results in the following eight
steps:
Step 1 Step 2 Step 3 Step 4 Step 5 Step 6 Step 7 Step 8
1(-2) 11(-6) 10(-6) 100(-4) 1001(-2) 10010(0) 100100(-8) 1001000(-6)
0(-18) 10(-6) 111(-14) 111(-14) 111(-14)
0(-18) 110(-14) 110(-14) 110(-14)
0(-18) 0(-18) 0(-18)
101(-24) 1000(-22)
101(-24)
The decoded sequence is v̂ = (11, 01, 11, 00, 01, 11, 11) and û = (1001). The Viterbi algorithm
requires fifteen steps to decode the same sequence.
(c) Decoding r = (11, 10, 00, 01, 10, 01, 00) using the stack algorithm results in the following sixteen
steps:
Step 1 Step 2 Step 3 Step 4 Step 5 Step 6 Step 7 Step 8
1(-2) 11(4) 111(-4) 1110(-2) 110(-4) 11100(-10) 1100(-12) 1101(-12)
0(-18) 10(-16) 110(-4) 110(-4) 11100(-10) 1100(-12) 1101(-12) 10(-16)
0(-18) 10(-16) 10(-16) 10(-16) 1101(-12) 10(-16) 111000(-18)
0(-18) 0(-18) 0(-18) 10(-16) 111000(-18) 0(-16)
1111(-22) 1111(-22) 0(-18) 0(-18) 11000(-20)
Step 9 Step 10 Step 11 Step 12 Step 13 Step 14 Step 15 Step 16
11010(-10) 10(-16) 101(-14) 1011(-12) 10110(-10) 101100(-18) 111000(-18) 1110000(-16)
10(-16) 111000(-18) 111000(-18) 111000(-18) 111000(-18) 111000(-18) 110100(-18) 110100(-18)
111000(-18) 110100(-18) 110100(-18) 110100(-18) 110100(-18) 110100(-18) 0(-18) 0(-18)
0(-18) 0(-18) 0(-18) 0(-18) 0(-18) 0(-18) 11000(-20) 11000(-20)
11000(-20) 11000(-20) 11000(-20) 11000(-20) 11000(-20) 11000(-20) 1010(-32) 1010(-32)
100(-34) 1010(-32) 1010(-32) 1010(-32) 100(-34) 100(-34)
100(-34) 100(-34) 100(-34) 1011000(-36) 1011000(-36)
The decoded sequence is v̂ = (11, 10, 01, 01, 00, 11, 00) and û = (1110). This agrees with the result
of Problem 12.6.
13.4 (a) Computing the Fano metric using

M(r_l|v_l) = log2 [ P(r_l|v_l) / P(r_l) ] − R

with R = 1/2 and

P(0_1) = P(1_1) = 0.218
P(0_2) = P(1_2) = 0.1025
P(0_3) = P(1_3) = 0.095
P(0_4) = P(1_4) = 0.0845

results in

        0_1     0_2     0_3     0_4     1_4     1_3     1_2     1_1
  0    0.494   0.443   0.314  −0.106  −1.043  −2.66   −4.07   −7.268
  1   −7.268  −4.07   −2.66   −1.043  −0.106   0.314   0.443   0.494

Multiplying by 3/0.106 yields the following integer metric table:

        0_1   0_2   0_3   0_4   1_4   1_3   1_2   1_1
  0     14    12     9    −3   −30   −75  −115  −167
  1   −167  −115   −75   −30    −3     9    12    14
(b) Decoding r = (1_2 1_1, 1_2 0_1, 0_3 0_1, 0_1 1_3, 1_2 0_2, 0_3 1_1, 0_3 0_2) using the stack algorithm results in the
following thirteen steps:
Step 1 Step 2 Step 3 Step 4 Step 5 Step 6 Step 7
1(26) 11(52) 110(-9) 1100(-70) 111(-106) 1110(-83) 1101(-167)
0(-282) 10(-256) 111(-106) 111(-106) 1101(-167) 11000(-173) 11100(-186)
0(-282) 10(-256) 1101(-167) 11000(-173) 11000(-173) 11100(-186)
0(-252) 10(-256) 10(-256) 10(-256) 10(-256)
0(-282) 0(-282) 0(-282) 0(-282)
1111(-348) 1111(-348)
Step 8 Step 9 Step 10 Step 11 Step 12 Step 13
11010(-143) 11000(-173) 11100(-186) 110100(-204) 111000(-247) 1110000(-226)
11000(-173) 11100(-186) 110100(-204) 111000(-247) 10(-256)
11100(-186) 110100(-204) 10(-256) 10(-256) 0(-282)
10(-256) 10(-256) 0(-282) 0(-282) 110000(-348)
0(-282) 0(-282) 110000(-331) 110000(-331) 1111(-348)
1111(-348) 1111(-348) 1111(-348) 1111(-348) 1101000(-394)
The decoded sequence is v̂ = (11, 10, 01, 01, 00, 11, 00) and û = (1110). This agrees with the result
of Problem 12.5(b).
13.6 As can be seen from the solution to Problem 13.3, the final decoded path never falls below the second
stack entry (part (b)) or the fourth stack entry (part (c)). Thus, for a stack size of 10 entries, the final
decoded path is not affected in either case.
13.7 From Example 13.5, the integer metric table is

         0    1
  0      1   −5
  1     −5    1
(a) Decoding the received sequence r = (010, 010, 001, 110, 100, 101, 011) using the stack bucket algo-
rithm with an interval of 5 results in the following decoding steps:
Interval 1 Interval 2 Interval 3 Interval 4 Interval 5 Interval 6 Interval 7
9 to 5 4 to 0 -1 to -5 -6 to -10 -11 to -15 -16 to -20 -21 to -25
Step 1 0(-3) 1(-9)
Step 2 00(-6) 01(-12)
1(-9)
Step 3 1(-9) 000(-12)
001(-15)
01(-12)
Step 4 11(-6) 000(-12) 10(-24)
001(-15)
01(-12)
Step 5 111(-3) 000(-12) 110(-21)
001(-15) 10(-24)
01(-12)
Step 6 1110(0) 000(-12) 1111(-18) 110(-21)
001(-15) 10(-24)
01(-12)
Step 7 11101(3) 11100(-15) 1111(-18) 110(-21)
000(-12) 10(-24)
001(-15)
01(-12)
Step 8 111010(6) 11100(-15) 1111(-18) 110(-21)
000(-12) 10(-24)
001(-15)
01(-12)
Step 9 1110100(6) 11100(-15) 1111(-18) 110(-21)
000(-12) 10(-24)
001(-15)
01(-12)
The decoded sequence is v̂ = (111, 010, 001, 110, 100, 101, 011) and û = (11101), which agrees with
the result of Example 13.5.
(b) Decoding the received sequence r = (010, 010, 001, 110, 100, 101, 011) using the stack bucket algo-
rithm with an interval of 9 results in the following decoding steps.
Interval 1 Interval 2 Interval 3 Interval 4
8 to 0 -1 to -9 -10 to -18 -19 to -27
Step 1 0(-3)
1(-9)
Step 2 00(-6) 01(-12)
1(-9)
Step 3 1(-9) 000(-12)
001(-15)
01(-12)
Step 4 11(-6) 000(-12) 10(-24)
001(-15)
01(-12)
Step 5 111(-3) 000(-12) 110(-21)
001(-15) 10(-24)
01(-12)
Step 6 1110(0) 1111(-18) 110(-21)
000(-12) 10(-24)
001(-15)
01(-12)
Step 7 11101(3) 11100(-15) 110(-21)
1111(-18) 10(-24)
000(-12)
001(-15)
01(-12)
Step 8 111010(6) 11100(-15) 110(-21)
1111(-18) 10(-24)
000(-12)
001(-15)
01(-12)
Step 9 1110100(9) 11100(-15) 110(-21)
1111(-18) 10(-24)
000(-12)
001(-15)
01(-12)
The decoded sequence is v̂ = (111, 010, 001, 110, 100, 101, 011) and û = (11101), which agrees with
the result of Example 13.5 and part (a).
13.8 (i) ∆ = 5
Step   Look   M_F   M_B   Node   Metric   T
0 - - - X 0 0
1 LFB -3 −∞ X 0 -5
2 LFB -3 - 0 -3 -5
3 LFB -6 0 X 0 -5
4 LFNB -9 −∞ X 0 -10
5 LFB -3 - 0 -3 -10
6 LFB -6 - 00 -6 -10
7 LFB -9 - 000 -9 -10
8 LFB -12 -9 00 -6 -10
9 LFNB -15 -3 0 -3 -10
10 LFNB -12 0 X 0 -10
11 LFNB -9 - 1 -9 -10
12 LFB -6 - 11 -6 -10
13 LFB -3 - 111 -3 -5
14 LFB 0 - 1110 0 0
15 LFB 3 - 11101 3 0
16 LFB 6 - 111010 6 5
17 LFB 9 - 1110100 9 Stop
The decoded sequence is v̂ = (111, 010, 001, 110, 100, 101, 011) and û = (11101), which agrees with the
result of Example 13.5.
(ii) ∆ = 10
Step   Look   M_F   M_B   Node   Metric   T
0 - - - X 0 0
1 LFB -3 −∞ X 0 -10
2 LFB -3 - 0 -3 -10
3 LFB -6 - 00 -6 -10
4 LFB -9 - 000 -9 -10
5 LFB -12 -6 00 -6 -10
6 LFNB -15 -3 0 -3 -10
7 LFNB -12 0 X 0 -10*
8 LFNB -9 - 1 -9 -10
9 LFB -6 - 11 -6 -10
10 LFB -3 - 111 -3 -10
11 LFB 0 - 1110 0 0
12 LFB 3 - 11101 3 0
13 LFB 6 - 111010 6 0
14 LFB 9 - 1110100 9 Stop
* Not the first visit. Therefore no tightening is done.
Note: The final decoded path agrees with the results of Examples 13.7 and 13.8 and Problem 13.7.
The number of computations in both cases is reduced compared to Examples 13.7 and 13.8.
13.20 (a)

g^(1)(D) = 1 + D + D^3 + D^5 + D^8 + D^9 + D^10 + D^11.

Therefore:

H = [ 1 1
      1 0 1 1
      0 0 1 0 1 1
      1 0 0 0 1 0 1 1
      0 0 1 0 0 0 1 0 1 1
      ⋮                 ⋱
      1 0 1 0 1 0 1 0 · · · 1 0 1 1 ]
(b) Using the parity triangle of the code, we obtain:

s_0 = e^(0)_0 + e^(1)_0
s_1 = e^(0)_0 + e^(0)_1 + e^(1)_1
s_2 = e^(0)_1 + e^(0)_2 + e^(1)_2
s_3 = e^(0)_0 + e^(0)_2 + e^(0)_3 + e^(1)_3
s_4 = e^(0)_1 + e^(0)_3 + e^(0)_4 + e^(1)_4
s_5 = e^(0)_0 + e^(0)_2 + e^(0)_4 + e^(0)_5 + e^(1)_5
s_6 = e^(0)_1 + e^(0)_3 + e^(0)_5 + e^(0)_6 + e^(1)_6
s_7 = e^(0)_2 + e^(0)_4 + e^(0)_6 + e^(0)_7 + e^(1)_7
s_8 = e^(0)_0 + e^(0)_3 + e^(0)_5 + e^(0)_7 + e^(0)_8 + e^(1)_8
s_9 = e^(0)_0 + e^(0)_1 + e^(0)_4 + e^(0)_6 + e^(0)_8 + e^(0)_9 + e^(1)_9
s_10 = e^(0)_0 + e^(0)_1 + e^(0)_2 + e^(0)_5 + e^(0)_7 + e^(0)_9 + e^(0)_10 + e^(1)_10
s_11 = e^(0)_0 + e^(0)_1 + e^(0)_2 + e^(0)_3 + e^(0)_6 + e^(0)_8 + e^(0)_10 + e^(0)_11 + e^(1)_11
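The same equations can be generated mechanically from the generator polynomial (a sketch, not part of the original solution):

```python
# Syndrome equations for the systematic (2,1,11) code with
# g(D) = 1 + D + D^3 + D^5 + D^8 + D^9 + D^10 + D^11:
# s_j checks e_i^(0) whenever g_{j-i} = 1, plus the parity error e_j^(1).

g = {0, 1, 3, 5, 8, 9, 10, 11}   # exponents of the nonzero coefficients

def syndrome_taps(j):
    """Indices i of the information error bits e_i^(0) checked by s_j."""
    return [i for i in range(j + 1) if (j - i) in g]

for j in range(12):
    print(f"s_{j} checks e^(0) at", syndrome_taps(j))
```

For example, `syndrome_taps(11)` returns `[0, 1, 2, 3, 6, 8, 10, 11]`, matching the equation for s_11 above.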
(c) The effect of error bits prior to time unit ℓ is removed. Thus, the modified syndrome bits are
given by the following equations:

s_ℓ = e^(0)_ℓ + e^(1)_ℓ
s_{ℓ+1} = e^(0)_ℓ + e^(0)_{ℓ+1} + e^(1)_{ℓ+1}
s_{ℓ+2} = e^(0)_{ℓ+1} + e^(0)_{ℓ+2} + e^(1)_{ℓ+2}
s_{ℓ+3} = e^(0)_ℓ + e^(0)_{ℓ+2} + e^(0)_{ℓ+3} + e^(1)_{ℓ+3}
s_{ℓ+4} = e^(0)_{ℓ+1} + e^(0)_{ℓ+3} + e^(0)_{ℓ+4} + e^(1)_{ℓ+4}
s_{ℓ+5} = e^(0)_ℓ + e^(0)_{ℓ+2} + e^(0)_{ℓ+4} + e^(0)_{ℓ+5} + e^(1)_{ℓ+5}
s_{ℓ+6} = e^(0)_{ℓ+1} + e^(0)_{ℓ+3} + e^(0)_{ℓ+5} + e^(0)_{ℓ+6} + e^(1)_{ℓ+6}
s_{ℓ+7} = e^(0)_{ℓ+2} + e^(0)_{ℓ+4} + e^(0)_{ℓ+6} + e^(0)_{ℓ+7} + e^(1)_{ℓ+7}
s_{ℓ+8} = e^(0)_ℓ + e^(0)_{ℓ+3} + e^(0)_{ℓ+5} + e^(0)_{ℓ+7} + e^(0)_{ℓ+8} + e^(1)_{ℓ+8}
s_{ℓ+9} = e^(0)_ℓ + e^(0)_{ℓ+1} + e^(0)_{ℓ+4} + e^(0)_{ℓ+6} + e^(0)_{ℓ+8} + e^(0)_{ℓ+9} + e^(1)_{ℓ+9}
s_{ℓ+10} = e^(0)_ℓ + e^(0)_{ℓ+1} + e^(0)_{ℓ+2} + e^(0)_{ℓ+5} + e^(0)_{ℓ+7} + e^(0)_{ℓ+9} + e^(0)_{ℓ+10} + e^(1)_{ℓ+10}
s_{ℓ+11} = e^(0)_ℓ + e^(0)_{ℓ+1} + e^(0)_{ℓ+2} + e^(0)_{ℓ+3} + e^(0)_{ℓ+6} + e^(0)_{ℓ+8} + e^(0)_{ℓ+10} + e^(0)_{ℓ+11} + e^(1)_{ℓ+11}
13.28 (a) Using either of the methods described in Examples 13.15 and 13.16, we arrive at the conclusion
that d_min = 7.

(b) From the parity triangle, we can see that this code is not self-orthogonal (for instance, s_3 and s_5
both check information error bit e^(0)_2).

(c) From the parity triangle shown below, we see that the maximum number of orthogonal parity
checks that can be formed on e^(0)_0 is 4: {s_0, s_1, s_2 + s_11, s_5}.
(Parity triangle of the code.)
(d) Because of (c), this code is not completely orthogonalizable.
13.32 (a) According to Table 13.2(a), there is only one rate R = 1/2 self-orthogonal code with d_min = 9:
the (2,1,35) code with

g^(1)(D) = 1 + D^7 + D^10 + D^16 + D^18 + D^30 + D^31 + D^35

and m^(a) = 35.

(b) According to Table 13.3(a), there is only one rate R = 1/2 orthogonalizable code with d_min = 9:
the (2,1,21) code with

g^(1)(D) = 1 + D^11 + D^13 + D^16 + D^17 + D^19 + D^20 + D^21

and m^(b) = 21.
(c) The best rate R = 1/2 systematic code with d_min = 9 is the (2,1,15) code with

g^(1)(D) = 1 + D + D^3 + D^5 + D^7 + D^8 + D^11 + D^13 + D^14 + D^15

and m^(c) = 15*.
(d) According to Table 12.1(c), the best rate R = 1/2 nonsystematic code with d_free = 9 (actually,
10 in this case) is the (2,1,6) code with

g^(0)(D) = 1 + D + D^2 + D^3 + D^6, g^(1)(D) = 1 + D^2 + D^3 + D^5 + D^6, and m^(d) = 6.

Thus m^(d) < m^(c) < m^(b) < m^(a).
* This generator polynomial can be found in Table 13.1 of the first edition of this text.
13.34 (a) This self-orthogonal code with g^(1)(D) = 1 + D^2 + D^7 + D^13 + D^16 + D^17 yields the parity
triangle:
→ 1
0 1
→ 1 0 1
0 1 0 1
0 0 1 0 1
0 0 0 1 0 1
0 0 0 0 1 0 1
→ 1 0 0 0 0 1 0 1
0 1 0 0 0 0 1 0 1
0 0 1 0 0 0 0 1 0 1
0 0 0 1 0 0 0 0 1 0 1
0 0 0 0 1 0 0 0 0 1 0 1
0 0 0 0 0 1 0 0 0 0 1 0 1
→ 1 0 0 0 0 0 1 0 0 0 0 1 0 1
0 1 0 0 0 0 0 1 0 0 0 0 1 0 1
0 0 1 0 0 0 0 0 1 0 0 0 0 1 0 1
→ 1 0 0 1 0 0 0 0 0 1 0 0 0 0 1 0 1
→ 1 1 0 0 1 0 0 0 0 0 1 0 0 0 0 1 0 1
The orthogonal check sums on e^(0)_0 are {s_0, s_2, s_7, s_13, s_16, s_17}.
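Self-orthogonality is equivalent to all pairwise differences of the tap positions of g^(1)(D) being distinct, which is easy to verify mechanically (a sketch, not part of the original solution):

```python
# Self-orthogonality check: every error bit other than e_0^(0) appears in
# at most one of the check sums {s_j : j a tap of g} exactly when all
# pairwise differences of the tap positions are distinct.

from itertools import combinations

taps = [0, 2, 7, 13, 16, 17]     # exponents of g(D) = 1 + D^2 + D^7 + ...
diffs = [b - a for a, b in combinations(taps, 2)]

assert len(diffs) == len(set(diffs))   # all 15 differences are distinct
print(sorted(diffs))
```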
(b) The block diagram of the feedback majority logic decoder for this code is:
(Figure: feedback majority-logic decoder with syndrome register, majority gate, and error-estimate feedback.)
13.37 (a) This orthogonalizable code with g^(1)(D) = 1 + D^6 + D^7 + D^9 + D^10 + D^11 yields the parity
triangle:
(Parity triangle of the code.)
The orthogonal check sums on e^(0)_0 are {s_0, s_6, s_7, s_9, s_1 + s_3 + s_10, s_4 + s_8 + s_11}.
(b) The block diagram of the feedback majority logic decoder for this code is:
(Figure: feedback majority-logic decoder with syndrome former, orthogonal check-sum combiners, majority gate, and error-estimate feedback.)
Chapter 16
Turbo Coding
16.1 Let u = [u_0, u_1, . . . , u_{K−1}] be the input to encoder 1. Then

v^(1) = [v^(1)_0, v^(1)_1, . . . , v^(1)_{K−1}] = u G^(1)

is the output of encoder 1, where G^(1) is the K × K parity generator matrix for encoder 1, i.e.,
v^(1) = u G^(1) is the parity sequence produced by encoder 1. For example, for the (2,1,2) systematic
feedback encoder with

G^(1)(D) = [ 1   (1 + D^2)/(1 + D + D^2) ],

the parity generator matrix is given by

G^(1) = [ 1 1 1 0 1 1 0 1 1 . . .
            1 1 1 0 1 1 0 1 1 . . .
              1 1 1 0 1 1 0 1 1 . . .
                              ⋱    ]   (K × K)
Now let u′ = [u′_0, u′_1, . . . , u′_{K−1}] be the (interleaved) input to encoder 2. Then we can write

u′ = u T,

where T is a K × K permutation matrix with a single one in each row and column. (If T is the identity
matrix, then u′ = u.)

The output of encoder 2 is then given by

v^(2) = [v^(2)_0, v^(2)_1, . . . , v^(2)_{K−1}] = u′ G^(2) = u T G^(2),

where G^(2) is the K × K parity generator matrix for encoder 2.
Now consider two input sequences u_1 and u_2, both of length K. The corresponding encoder outputs in each case are given by the 3-tuples

[u_1, u_1 G^(1), u'_1 G^(2)] = [u_1, u_1 G^(1), u_1 T G^(2)] = y_1

and

[u_2, u_2 G^(1), u'_2 G^(2)] = [u_2, u_2 G^(1), u_2 T G^(2)] = y_2.

Now consider the input sequence u = u_1 + u_2. The encoder output 3-tuple in this case is given by

[u, u G^(1), u' G^(2)] = [u, u G^(1), u T G^(2)]
= [u_1 + u_2, u_1 G^(1) + u_2 G^(1), u_1 T G^(2) + u_2 T G^(2)]
= [u_1, u_1 G^(1), u_1 T G^(2)] + [u_2, u_2 G^(1), u_2 T G^(2)]
= y_1 + y_2.

Thus superposition holds and the turbo encoder is linear.
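The superposition argument can also be checked numerically on a small example. The sketch below uses randomly chosen stand-in matrices G1 and G2 and a random interleaver permutation, not the actual encoder matrices from the text:

```python
import random

random.seed(0)
K = 8
G1 = [[random.randint(0, 1) for _ in range(K)] for _ in range(K)]  # stand-in for G^(1)
G2 = [[random.randint(0, 1) for _ in range(K)] for _ in range(K)]  # stand-in for G^(2)
perm = random.sample(range(K), K)        # interleaver: a permutation (matrix T)

def matvec(u, M):
    """Row vector times matrix over GF(2)."""
    return [sum(ui * row[j] for ui, row in zip(u, M)) % 2 for j in range(K)]

def encode(u):
    u_int = [u[perm[i]] for i in range(K)]           # u' = u T
    return u + matvec(u, G1) + matvec(u_int, G2)     # [u, u G1, u T G2]

u1 = [random.randint(0, 1) for _ in range(K)]
u2 = [random.randint(0, 1) for _ in range(K)]
u_sum = [(a + b) % 2 for a, b in zip(u1, u2)]
y_sum = [(a + b) % 2 for a, b in zip(encode(u1), encode(u2))]
assert encode(u_sum) == y_sum
print("superposition holds")
```

Every operation in `encode` is linear over GF(2), which is exactly why the assertion passes for any choice of matrices and permutation.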
16.3 Define C_1 to be the (7,4,3) Hamming code and C_2 to be the (8,4,4) extended Hamming code.

C_1 codeword list:
0000000
1101000
0111001
1010001
1011010
0110010
1100011
0001011
1110100
0011100
1001101
0100101
0101110
1000110
0010111
1111111
The CWEF for C_1 is given by

A_1^{C_1}(Z) = 3Z^2 + Z^3
A_2^{C_1}(Z) = 3Z + 3Z^2
A_3^{C_1}(Z) = 1 + 3Z
A_4^{C_1}(Z) = Z^3

C_2 codeword list:
00000000
11010001
01110010
10100011
10110100
01100101
11000110
00010111
11101000
00111001
10011010
01001011
01011100
10001101
00101110
11111111
The CWEF for C_2 is given by

A_1^{C_2}(Z) = 4Z^3
A_2^{C_2}(Z) = 6Z^2
A_3^{C_2}(Z) = 4Z
A_4^{C_2}(Z) = Z^4
The CWEF for the PCBC is given by equation (16.23) as

A_w^{PC}(Z) = A_w^{C_1}(Z) A_w^{C_2}(Z) / (K choose w):

A_1^{PC}(Z) = (3Z^2 + Z^3)(4Z^3)/4 = 3Z^5 + Z^6
A_2^{PC}(Z) = (3Z + 3Z^2)(6Z^2)/6 = 3Z^3 + 3Z^4
A_3^{PC}(Z) = (1 + 3Z)(4Z)/4 = Z + 3Z^2
A_4^{PC}(Z) = (Z^3)(Z^4)/1 = Z^7
The bit CWEF's are given by B_w^{PC}(Z) = (w/K) A_w^{PC}(Z):

B_1^{PC}(Z) = (1/4)(3Z^5 + Z^6) = 0.75Z^5 + 0.25Z^6
B_2^{PC}(Z) = (2/4)(3Z^3 + 3Z^4) = 1.5Z^3 + 1.5Z^4
B_3^{PC}(Z) = (3/4)(Z + 3Z^2) = 0.75Z + 2.25Z^2
B_4^{PC}(Z) = (4/4)(Z^7) = Z^7
The IRWEF's are given by

A^{PC}(W, Z) = W(3Z^5 + Z^6) + W^2(3Z^3 + 3Z^4) + W^3(Z + 3Z^2) + W^4 Z^7
B^{PC}(W, Z) = W(0.75Z^5 + 0.25Z^6) + W^2(1.5Z^3 + 1.5Z^4) + W^3(0.75Z + 2.25Z^2) + W^4 Z^7
To find the WEF's, set W = Z = X:

A^{PC}(X) = X^4 + 6X^5 + 6X^6 + X^7 + X^11
B^{PC}(X) = 0.75X^4 + 3.75X^5 + 2.25X^6 + 0.25X^7 + X^11
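The CWEF computation above can be cross-checked by brute-force enumeration. In the sketch below, the parity rows P are read off the codeword list (info bits first) and are an assumption consistent with that list, not necessarily the book's generator:

```python
from itertools import product

# Parity rows of the systematic generator [I_4 | P], read off the codeword
# list above (info bits first); an assumption consistent with that list.
P = [(1, 1, 0), (1, 0, 1), (1, 1, 1), (0, 1, 1)]

def cwef(parity_rows, extend=False):
    """Return {w: {parity_weight: count}} over all nonzero info words."""
    k = len(parity_rows)
    A = {w: {} for w in range(1, k + 1)}
    for info in product([0, 1], repeat=k):
        w = sum(info)
        if w == 0:
            continue
        par = [sum(info[i] * parity_rows[i][j] for i in range(k)) % 2
               for j in range(len(parity_rows[0]))]
        z = sum(par)
        if extend:                       # extended code appends an overall parity bit
            z += (w + z) % 2
        A[w][z] = A[w].get(z, 0) + 1
    return A

A1, A2 = cwef(P), cwef(P, extend=True)
print(A1[1], A2[1])   # {2: 3, 3: 1} {3: 4}  ->  3Z^2 + Z^3 and 4Z^3
```

The remaining info weights reproduce the other CWEF lines in the same way (e.g. A1[2] gives 3Z + 3Z^2).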
16.4 The codeword IRWEF can be found from equation (16.34) as follows:

A^{PC}(W, Z) = W(4.5Z^4 + 3Z^5 + 0.5Z^6) +
W^2(1.29Z^2 + 2.57Z^3 + 1.29Z^4 + 3.86Z^5 + 6.43Z^6 + 3Z^7 + 3.32Z^8 + 3.86Z^9 + 1.93Z^10 + 0.43Z^11 + 0.04Z^12) +
W^3(0.07 + 0.43Z + 0.64Z^2 + 1.29Z^3 + 5.57Z^4 + 5.57Z^5 + 7.07Z^6 + 15.43Z^7 + 14.14Z^8 + 5.14Z^9 + 0.64Z^10) +
W^4(3.21Z^4 + 17.14Z^5 + 29.29Z^6 + 17.14Z^7 + 3.21Z^8) +
W^5(0.64Z^2 + 5.14Z^3 + 14.14Z^4 + 15.43Z^5 + 7.07Z^6 + 5.57Z^7 + 5.57Z^8 + 1.29Z^9 + 0.64Z^10 + 0.43Z^11 + 0.07Z^12) +
W^6(0.04 + 0.43Z + 1.93Z^2 + 3.86Z^3 + 3.32Z^4 + 3Z^5 + 6.43Z^6 + 3.86Z^7 + 1.29Z^8 + 2.57Z^9 + 1.29Z^10) +
W^7(0.05Z^6 + 3Z^7 + 4.5Z^8) + W^8 Z^12

To find the bit IRWEF, take each term in the expression for A^{PC}(W, Z) and multiply it by w/8, where w is the exponent of W.
B^{PC}(W, Z) = W(0.56Z^4 + 0.38Z^5 + 0.06Z^6) +
W^2(0.32Z^2 + 0.64Z^3 + 0.32Z^4 + 0.97Z^5 + 1.61Z^6 + 0.75Z^7 + 0.83Z^8 + 0.97Z^9 + 0.48Z^10 + 0.11Z^11 + 0.01Z^12) +
W^3(0.03 + 0.16Z + 0.24Z^2 + 0.48Z^3 + 2.09Z^4 + 2.09Z^5 + 2.65Z^6 + 5.79Z^7 + 5.30Z^8 + 1.93Z^9 + 0.24Z^10) +
W^4(1.61Z^4 + 8.57Z^5 + 14.65Z^6 + 8.57Z^7 + 1.61Z^8) +
W^5(0.40Z^2 + 3.21Z^3 + 8.84Z^4 + 9.64Z^5 + 4.42Z^6 + 3.48Z^7 + 3.48Z^8 + 0.81Z^9 + 0.40Z^10 + 0.27Z^11 + 0.04Z^12) +
W^6(0.03 + 0.32Z + 1.45Z^2 + 2.90Z^3 + 2.49Z^4 + 2.25Z^5 + 4.82Z^6 + 2.90Z^7 + 0.97Z^8 + 1.93Z^9 + 0.97Z^10) +
W^7(0.04Z^6 + 2.63Z^7 + 3.94Z^8) + W^8 Z^12
To find the WEF's, substitute X = W = Z into the equations for the IRWEF's, as shown below.

A^{PC}(X) = 0.07X^3 + 1.71X^4 + 7.71X^5 + 5.61X^6 + 11X^7 + 22.29X^8 + 45.21X^9 + 66.79X^10 + 44.79X^11 + 20.14X^12 + 9.71X^13 + 9.46X^14 + 5.14X^15 + 3X^16 + 0.07X^17 + 1.29X^18 + X^20

B^{PC}(X) = 0.03X^3 + 0.48X^4 + 1.45X^5 + 1.21X^6 + 3.84X^7 + 9.96X^8 + 23.71X^9 + 33.39X^10 + 21.19X^11 + 10.71X^12 + 5.82X^13 + 5.04X^14 + 0.96X^15 + 2.20X^16 + 0.04X^17 + 0.96X^18 + X^20
16.5 Consider using an h-repeated (n, k, d_min) = (7,4,3) Hamming code as the constituent code, forming an (h[2n - k], hk) = (10h, 4h) PCBC. The output consists of the original input bits u, the parity bits from the non-interleaved encoder v^(1), and the parity bits from the interleaved encoder v^(2). For the h = 4-repeated PCBC, the IRWEF of the (28,16) constituent code is

A(W, Z) = [1 + W(3Z^2 + Z^3) + W^2(3Z + 3Z^2) + W^3(1 + 3Z) + W^4 Z^3]^4 - 1
= W(12Z^2 + 4Z^3) +
W^2(12Z + 12Z^2 + 54Z^4 + 36Z^5 + 6Z^6) +
W^3(4 + 12Z + 108Z^3 + 144Z^4 + 36Z^5 + 108Z^6 + 108Z^7 + 36Z^8 + 4Z^9) +
W^4(90Z^2 + 232Z^3 + 90Z^4 + 324Z^5 + 540Z^6 + 252Z^7 + 117Z^8 + 108Z^9 + 54Z^10 + 12Z^11 + Z^12) +
W^5(36Z + 144Z^2 + 108Z^3 + 432Z^4 + 1188Z^5 + 780Z^6 + 468Z^7 + 648Z^8 + 432Z^9 + 120Z^10 + 12Z^11) +
W^6(6 + 36Z + 54Z^2 + 324Z^3 + 1296Z^4 + 1296Z^5 + 918Z^6 + 1836Z^7 + 1620Z^8 + 556Z^9 + 66Z^10) +
W^7(144Z^2 + 780Z^3 + 1188Z^4 + 1080Z^5 + 2808Z^6 + 3456Z^7 + 1512Z^8 + 324Z^9 + 108Z^10 + 36Z^11 + 4Z^12) +
W^8(36Z + 252Z^2 + 540Z^3 + 783Z^4 + 2592Z^5 + 4464Z^6 + 2592Z^7 + 783Z^8 + 540Z^9 + 252Z^10 + 36Z^11) +
W^9(4 + 36Z + 108Z^2 + 324Z^3 + 1512Z^4 + 3456Z^5 + 2808Z^6 + 1080Z^7 + 1188Z^8 + 780Z^9 + 144Z^10) +
W^10(66Z^2 + 556Z^3 + 1620Z^4 + 1836Z^5 + 918Z^6 + 1296Z^7 + 1296Z^8 + 324Z^9 + 54Z^10 + 36Z^11 + 6Z^12) +
W^11(12Z + 120Z^2 + 432Z^3 + 648Z^4 + 468Z^5 + 780Z^6 + 1188Z^7 + 432Z^8 + 108Z^9 + 144Z^10 + 36Z^11) +
W^12(1 + 12Z + 54Z^2 + 108Z^3 + 117Z^4 + 252Z^5 + 540Z^6 + 324Z^7 + 90Z^8 + 232Z^9 + 90Z^10) +
W^13(4Z^3 + 36Z^4 + 108Z^5 + 108Z^6 + 36Z^7 + 144Z^8 + 108Z^9 + 12Z^11 + 4Z^12) +
W^14(6Z^6 + 36Z^7 + 54Z^8 + 12Z^10 + 12Z^11) +
W^15(4Z^9 + 12Z^10) +
W^16 Z^12.
The IRWEF's of the (40,16) PCBC are found, using A_w^{PC}(Z) = (K choose w)^{-1} [A_w(Z)]^2, where K = hk = 16 is the interleaver (and input) length, to be:
A_1^{PC}(Z) = 9Z^4 + 6Z^5 + Z^6

A_2^{PC}(Z) = 1.2Z^2 + 2.4Z^3 + 1.2Z^4 + 10.8Z^5 + 18Z^6 + 8.4Z^7 + 25.5Z^8 + 32.4Z^9 + 16.2Z^10 + 3.6Z^11 + 0.3Z^12

A_3^{PC}(Z) = 0.029 + 0.171Z + 0.257Z^2 + 1.543Z^3 + 6.686Z^4 + 6.686Z^5 + 23.914Z^6 + 61.714Z^7 + 56.057Z^8 + 61.771Z^9 + 99.686Z^10 + 83.314Z^11 + 54.771Z^12 + 48.343Z^13 + 35.229Z^14 + 15.429Z^15 + 3.857Z^16 + 0.514Z^17 + 0.029Z^18

A_4^{PC}(Z) = 4.451Z^4 + 22.945Z^5 + 38.475Z^6 + 54.989Z^7 + 140.459Z^8 + 194.637Z^9 + 186.903Z^10 + 257.697Z^11 + 294.389Z^12 + 216.831Z^13 + 151.273Z^14 + 117.156Z^15 + 73.844Z^16 + 36.316Z^17 + 17.268Z^18 + 8.229Z^19 + 3.155Z^20 + 0.831Z^21 + 0.138Z^22 + 0.013Z^23 + 0.001Z^24

A_5^{PC}(Z) = 0.297Z^2 + 2.374Z^3 + 6.527Z^4 + 14.242Z^5 + 50.736Z^6 + 112.549Z^7 + 160.615Z^8 + 315.099Z^9 + 550.385Z^10 + 579.363Z^11 + 551.505Z^12 + 611.802Z^13 + 540.89Z^14 + 360.791Z^15 + 238.088Z^16 + 158.176Z^17 + 80.901Z^18 + 27.297Z^19 + 5.670Z^20 + 0.659Z^21 + 0.033Z^22

A_6^{PC}(Z) = 0.004 + 0.054Z + 0.243Z^2 + 0.971Z^3 + 5.219Z^4 + 17.964Z^5 + 43.615Z^6 + 133.355Z^7 + 345.929Z^8 + 533.928Z^9 + 682.391Z^10 + 1030.59Z^11 + 1269.74Z^12 + 1130.6Z^13 + 993.686Z^14 + 891.674Z^15 + 597.803Z^16 + 255.219Z^17 + 65.307Z^18 + 9.165Z^19 + 0.544Z^20

A_7^{PC}(Z) = 1.813Z^4 + 19.636Z^5 + 83.089Z^6 + 189.189Z^7 + 341.333Z^8 + 694.221Z^9 + 1194.5Z^10 + 1462.3Z^11 + 1702.7Z^12 + 2064.99Z^13 + 1874.92Z^14 + 1101.01Z^15 + 456.243Z^16 + 169.326Z^17 + 61.439Z^18 + 18.05Z^19 + 4.116Z^20 + 0.906Z^21 + 0.189Z^22 + 0.025Z^23 + 0.001Z^24

A_8^{PC}(Z) = 0.101Z^2 + 1.41Z^3 + 7.955Z^4 + 25.527Z^5 + 67.821Z^6 + 192.185Z^7 + 454.462Z^8 + 795.877Z^9 + 1316.39Z^10 + 2201.74Z^11 + 2743.06Z^12 + 2201.74Z^13 + 1316.39Z^14 + 795.877Z^15 + 454.462Z^16 + 192.185Z^17 + 67.821Z^18 + 25.527Z^19 + 7.955Z^20 + 1.41Z^21 + 0.101Z^22

A_9^{PC}(Z) = 0.001 + 0.025Z + 0.189Z^2 + 0.906Z^3 + 4.116Z^4 + 18.05Z^5 + 61.439Z^6 + 169.326Z^7 + 456.243Z^8 + 1101.01Z^9 + 1874.92Z^10 + 2064.99Z^11 + 1702.7Z^12 + 1462.3Z^13 + 1194.5Z^14 + 694.221Z^15 + 341.333Z^16 + 189.189Z^17 + 83.089Z^18 + 19.636Z^19 + 1.813Z^20

A_10^{PC}(Z) = 0.544Z^4 + 9.165Z^5 + 65.307Z^6 + 255.219Z^7 + 597.803Z^8 + 891.674Z^9 + 993.686Z^10 + 1130.6Z^11 + 1269.74Z^12 + 1030.59Z^13 + 682.391Z^14 + 533.928Z^15 + 345.929Z^16 + 133.355Z^17 + 43.615Z^18 + 17.964Z^19 + 5.219Z^20 + 0.971Z^21 + 0.243Z^22 + 0.054Z^23 + 0.004Z^24

A_11^{PC}(Z) = 0.033Z^2 + 0.659Z^3 + 5.67Z^4 + 27.297Z^5 + 80.901Z^6 + 158.176Z^7 + 238.088Z^8 + 360.791Z^9 + 540.89Z^10 + 611.802Z^11 + 551.505Z^12 + 579.363Z^13 + 550.385Z^14 + 315.099Z^15 + 160.615Z^16 + 112.549Z^17 + 50.736Z^18 + 14.242Z^19 + 6.527Z^20 + 2.374Z^21 + 0.297Z^22

A_12^{PC}(Z) = 0.001 + 0.013Z + 0.138Z^2 + 0.831Z^3 + 3.155Z^4 + 8.229Z^5 + 17.268Z^6 + 36.316Z^7 + 73.844Z^8 + 117.156Z^9 + 151.273Z^10 + 216.831Z^11 + 294.389Z^12 + 257.697Z^13 + 186.903Z^14 + 194.637Z^15 + 140.459Z^16 + 54.989Z^17 + 38.475Z^18 + 22.945Z^19 + 4.451Z^20

A_13^{PC}(Z) = 0.029Z^6 + 0.514Z^7 + 3.857Z^8 + 15.429Z^9 + 35.229Z^10 + 48.343Z^11 + 54.771Z^12 + 83.314Z^13 + 99.686Z^14 + 61.771Z^15 + 56.057Z^16 + 61.714Z^17 + 23.914Z^18 + 6.686Z^19 + 6.686Z^20 + 1.543Z^21 + 0.257Z^22 + 0.171Z^23 + 0.029Z^24

A_14^{PC}(Z) = 0.3Z^12 + 3.6Z^13 + 16.2Z^14 + 32.4Z^15 + 25.5Z^16 + 8.4Z^17 + 18Z^18 + 10.8Z^19 + 1.2Z^20 + 2.4Z^21 + 1.2Z^22

A_15^{PC}(Z) = Z^18 + 6Z^19 + 9Z^20

A_16^{PC}(Z) = Z^24.
Using a 4x4 row-column (block) interleaver, the minimum distance of the PCBC is 5. This comes from a weight 1 input, resulting in w_H(u) = 1, w_H(v^(1)) = 2, and w_H(v^(2)) = 2, where w_H(x) is the Hamming weight of x.
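The first of the PCBC CWEFs above can be spot-checked with a few lines of polynomial arithmetic, holding polynomials as {power: coefficient} dicts:

```python
from math import comb

def poly_mul(p, q):
    """Multiply two polynomials stored as {power: coefficient} dicts."""
    out = {}
    for a, ca in p.items():
        for b, cb in q.items():
            out[a + b] = out.get(a + b, 0) + ca * cb
    return out

K = 16
A1 = {2: 12, 3: 4}                       # A_1(Z) = 12Z^2 + 4Z^3 from the expansion
# A_1^{PC}(Z) = (K choose 1)^{-1} [A_1(Z)]^2
A1_pc = {z: c // comb(K, 1) for z, c in poly_mul(A1, A1).items()}
print(A1_pc)                             # {4: 9, 5: 6, 6: 1}  ->  9Z^4 + 6Z^5 + Z^6
```

The same two helpers reproduce the higher-weight CWEFs from the squared constituent polynomials.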
16.7 On page 791 in the text, it is explained that “...any multiple error event in a codeword belonging
to a terminated convolutional code can be viewed as a succession of single error events separated by
sequences of 0’s.” Another way to look at it is that for convolutional codewords, we start in the zero
state and end in the zero state and any multiple error event can be seen as a series of deviations from
the zero state. These deviations from the zero state can be separated by any amount of time spent
in the zero state. For example, the codeword could have a deviation starting and another deviation
ending at the same zero state, or the codeword could stay in the zero state for several input zeros.
Likewise the first input bit could take the codeword out of the zero state or the last input bit could be
the one to return the codeword to the zero state.
So, if there are h error events with total length λ in a block codeword of length K, then there must be K − λ times that state 0 occurs or, as explained in the text, K − λ 0's. Now, in the codeword, there are K − λ + h places where an error event can start, since the remaining λ − h places are then determined by the error events. Since there are h events, the number of ways to choose h elements from K − λ + h places gives the multiplicity of block codewords for h-error events.
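The counting argument can be verified by brute force for one small, arbitrarily chosen set of event lengths:

```python
from itertools import combinations
from math import comb

K, lengths = 12, (3, 2, 4)               # h = 3 error events, total length lam = 9
h, lam = len(lengths), sum(lengths)

count = 0
for starts in combinations(range(K), h):  # ordered candidate start positions
    prev_end, ok = -1, True
    for s, l in zip(starts, lengths):
        if s <= prev_end:                 # events may touch but not overlap
            ok = False
            break
        prev_end = s + l - 1
    if ok and prev_end < K:               # last event must fit in the block
        count += 1
print(count, comb(K - lam + h, h))        # 20 20
```

Both numbers agree because distributing the K − λ zeros into the h + 1 gaps around the events has C(K − λ + h, h) solutions.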
16.8 The IRWEF and WEF for the PCCC in Example 16.6 are found in the same manner as for a PCBC. The following combines the equations in (16.49) for the CWEF's:

A^{PC}(W, Z) = W^2(4Z^8 + 4Z^10 + Z^12) + W^3(1.81Z^4 + 6.74Z^6 + 9.89Z^8 + 6.74Z^10 + 1.81Z^12) +
W^4(0.93Z^4 + 5.93Z^6 + 10.59Z^8 + 4.67Z^10 + 3.89Z^12 + 0.67Z^14 + 0.33Z^16) +
W^5(0.33Z^4 + 2.22Z^6 + 6.37Z^8 + 9.33Z^10 + 6.81Z^12 + 1.78Z^14 + 0.15Z^16) +
W^6(0.04Z^4 + 1.04Z^6 + 8.15Z^8 + 12.44Z^10 + 5.33Z^12) + W^7(5.44Z^8 + 3.11Z^10 + 0.44Z^12) + W^9 Z^12

A^{PC}(X) = 1.81X^7 + 0.93X^8 + 7.07X^9 + 9.97X^10 + 12.11X^11 + 15.63X^12 + 13.11X^13 + 13.82X^14 + 15.95X^15 + 16.33X^16 + 9.92X^17 + 6X^18 + 2.22X^19 + 0.33X^20 + 1.15X^21
16.9 The bit IRWEF is given by:

B^{PC}(W, Z) = W^2(0.89Z^8 + 0.89Z^10 + 0.22Z^12) + W^3(0.60Z^4 + 2.25Z^6 + 3.30Z^8 + 2.25Z^10 + 0.60Z^12) +
W^4(0.41Z^4 + 2.63Z^6 + 4.71Z^8 + 2.07Z^10 + 1.73Z^12 + 0.30Z^14 + 0.15Z^16) +
W^5(0.19Z^4 + 1.23Z^6 + 3.54Z^8 + 5.18Z^10 + 3.79Z^12 + 0.99Z^14 + 0.08Z^16) +
W^6(0.02Z^4 + 0.69Z^6 + 5.43Z^8 + 8.30Z^10 + 3.55Z^12) + W^7(4.23Z^8 + 2.42Z^10 + 0.35Z^12) + W^9 Z^12

The bit WEF is given by:

B^{PC}(X) = 0.60X^7 + 0.41X^8 + 2.43X^9 + 3.55X^10 + 4.53X^11 + 6.29X^12 + 5.79X^13 + 7.73X^14 + 10.02X^15 + 10.02X^16 + 6.20X^17 + 3.85X^18 + 1.33X^19 + 0.15X^20 + 1.08X^21
16.10 Redrawing the encoder block diagram and the IRWEF state diagram for the reversed generators, we see that the state diagram is essentially the same except that W and Z are switched. Thus, we can use equation (16.41) for the new IRWEF with W and Z reversed:

A(W, Z, L) = L^3 W^2 Z^3 + L^4 W^4 Z^2 + L^5(W^4 Z^3 + W^2 Z^4) + L^6(2W^4 Z^3 + W^4 Z^4) + L^7(W^2 Z^5 + 2W^4 Z^4 + W^4 Z^5 + W^6 Z^2) + L^8(W^4 Z^6 + 3W^4 Z^4 + 2W^4 Z^5 + 2W^6 Z^3) + L^9(W^2 Z^6 + 3W^4 Z^5 + 2W^4 Z^6 + W^4 Z^7 + 3W^6 Z^3 + 3W^6 Z^4).
After dropping all terms of order larger than L^9, the single-error event enumerators are obtained as follows:

A_{2,3}^(1)(Z) = Z^3    A_{2,5}^(1)(Z) = Z^4    A_{2,7}^(1)(Z) = Z^5    A_{2,9}^(1)(Z) = Z^6
A_{4,4}^(1)(Z) = Z^2    A_{4,5}^(1)(Z) = Z^3    A_{4,6}^(1)(Z) = 2Z^3 + Z^4    A_{4,7}^(1)(Z) = 2Z^4 + Z^5
A_{4,8}^(1)(Z) = 3Z^4 + 2Z^5 + Z^6    A_{4,9}^(1)(Z) = 3Z^5 + 2Z^6 + Z^7
A_{6,7}^(1)(Z) = Z^2    A_{6,8}^(1)(Z) = 2Z^3    A_{6,9}^(1)(Z) = 3Z^3 + 3Z^4
The double-error event IRWEF is

A^(2)(W, Z, L) = [A(W, Z, L)]^2 = L^6 W^4 Z^6 + 2L^7 W^6 Z^5 + L^8(2W^4 Z^7 + 2W^6 Z^6 + W^8 Z^4) + L^9(6W^6 Z^6 + 2W^6 Z^7 + 2W^8 Z^5) + . . .
Dropping all terms of order greater than L^9 gives the double-error event enumerators A_{w,λ}^(2)(Z):

A_{4,6}^(2)(Z) = Z^6    A_{4,8}^(2)(Z) = 2Z^7    A_{6,7}^(2)(Z) = 2Z^5    A_{6,8}^(2)(Z) = 2Z^6
A_{6,9}^(2)(Z) = 6Z^6 + 2Z^7    A_{8,8}^(2)(Z) = Z^4    A_{8,9}^(2)(Z) = 2Z^5
Following the same procedure, the triple-error event IRWEF is given by A^(3)(W, Z, L) = [A(W, Z, L)]^3 = L^9 W^6 Z^9 + . . . , and the triple-error event enumerator is given by A_{6,9}^(3)(Z) = Z^9.

Before finding the CWEF's we need to calculate h_max. Notice that there are no double-error events of input weight less than 4 and that the only triple-error event has input weight 6. Also notice that, for the terminated encoder, odd input weights don't appear. So for weight 2, h_max = 1. For weights 4 and 8, h_max = 2. For weight 6, h_max = 3. Now we can use equations (16.37) and (16.39) to find the CWEF's as follows:
A_2(Z) = c[3,1]A_{2,3}^(1)(Z) + c[5,1]A_{2,5}^(1)(Z) + c[7,1]A_{2,7}^(1)(Z) + c[9,1]A_{2,9}^(1)(Z)
= 3Z^2 + 7Z^3 + 5Z^4 + Z^6

A_4(Z) = c[4,1]A_{4,4}^(1)(Z) + c[5,1]A_{4,5}^(1)(Z) + c[6,1]A_{4,6}^(1)(Z) + c[7,1]A_{4,7}^(1)(Z) + c[8,1]A_{4,8}^(1)(Z) + c[9,1]A_{4,9}^(1)(Z) + c[6,2]A_{4,6}^(2)(Z) + c[8,2]A_{4,8}^(2)(Z)
= 6Z^2 + 13Z^3 + 16Z^4 + 10Z^5 + 14Z^6 + 7Z^7

A_6(Z) = c[7,1]A_{6,7}^(1)(Z) + c[8,1]A_{6,8}^(1)(Z) + c[9,1]A_{6,9}^(1)(Z) + c[7,2]A_{6,7}^(2)(Z) + c[8,2]A_{6,8}^(2)(Z) + c[9,2]A_{6,9}^(2)(Z) + c[9,3]A_{6,9}^(3)(Z)
= 3Z^2 + 7Z^3 + 3Z^4 + 12Z^5 + 12Z^6 + 2Z^7 + Z^9

A_8(Z) = c[8,2]A_{8,8}^(2)(Z) + c[9,2]A_{8,9}^(2)(Z)
= 3Z^4 + 2Z^5
A quick calculation shows that the CWEF’s include a total of 127 nonzero codewords, the correct
number for an (18,7) code.
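The 127-codeword sanity check can itself be automated by summing the CWEF coefficients:

```python
# CWEF coefficients {parity_weight: count} transcribed from the results above.
cwefs = {
    2: {2: 3, 3: 7, 4: 5, 6: 1},
    4: {2: 6, 3: 13, 4: 16, 5: 10, 6: 14, 7: 7},
    6: {2: 3, 3: 7, 4: 3, 5: 12, 6: 12, 7: 2, 9: 1},
    8: {4: 3, 5: 2},
}
total = sum(c for poly in cwefs.values() for c in poly.values())
print(total)                             # 127 = 2^7 - 1 nonzero codewords
```

The per-weight subtotals (16, 66, 40, 5) also match the valid input sequence counts quoted below.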
The IRWEF and WEF are given by:

A(W, Z) = W^2(3Z^2 + 7Z^3 + 5Z^4 + Z^6) + W^4(6Z^2 + 13Z^3 + 16Z^4 + 10Z^5 + 14Z^6 + 7Z^7) + W^6(3Z^2 + 7Z^3 + 3Z^4 + 12Z^5 + 12Z^6 + 2Z^7 + Z^9) + W^8(3Z^4 + 2Z^5)
A(X) = 3X^4 + 7X^5 + 11X^6 + 13X^7 + 20X^8 + 17X^9 + 17X^10 + 19X^11 + 15X^12 + 4X^13 + X^15

This convolutional code has minimum distance 4, one less than for the code of Example 16.6. As in the example, we must modify the concept of the uniform interleaver because we are using convolutional constituent codes. Therefore, note that for weight 2, there are 16 valid input sequences. For weight 4, there are 66. For weight 6, there are 40, and for weight 8, there are 5. For all other weights, there are no valid input sequences.
The CWEF's of the (27,7) PCCC can be found as follows:

A_2^{PC}(Z) = (3Z^2 + 7Z^3 + 5Z^4 + Z^6)^2/16
= 0.56Z^4 + 2.62Z^5 + 4.94Z^6 + 4.37Z^7 + 1.94Z^8 + 0.87Z^9 + 0.62Z^10 + 0.06Z^12

A_4^{PC}(Z) = (6Z^2 + 13Z^3 + 16Z^4 + 10Z^5 + 14Z^6 + 7Z^7)^2/66
= 0.55Z^4 + 2.36Z^5 + 5.47Z^6 + 8.12Z^7 + 10.36Z^8 + 11.63Z^9 + 11.06Z^10 + 7.65Z^11 + 5.09Z^12 + 2.97Z^13 + 0.74Z^14

A_6^{PC}(Z) = (3Z^2 + 7Z^3 + 3Z^4 + 12Z^5 + 12Z^6 + 2Z^7 + Z^9)^2/40
= 0.22Z^4 + 1.05Z^5 + 1.67Z^6 + 2.85Z^7 + 6.22Z^8 + 6.3Z^9 + 6.1Z^10 + 7.65Z^11 + 5.15Z^12 + 1.35Z^13 + 0.7Z^14 + 0.6Z^15 + 0.1Z^16 + 0.02Z^18

A_8^{PC}(Z) = (3Z^4 + 2Z^5)^2/5
= 1.8Z^8 + 2.4Z^9 + 0.8Z^10
16.11 As noted in Problem 16.10, odd input weights do not appear. For weight 2, h_max = 1 and equation (16.55) simplifies to

A_2^{PC}(Z) = 2[A_2^(1)(Z)]^2 and B_2^{PC}(Z) = (4/K)[A_2^(1)(Z)]^2.

For weights 4 and 8, h_max = 2, so we have

A_4^{PC}(Z) = 6[A_4^(2)(Z)]^2 and B_4^{PC}(Z) = (24/K)[A_4^(2)(Z)]^2,
A_8^{PC}(Z) = (10080/K^4)[A_8^(2)(Z)]^2 and B_8^{PC}(Z) = (80640/K^5)[A_8^(2)(Z)]^2.

For weight 6, h_max = 3, so

A_6^{PC}(Z) = 20[A_6^(3)(Z)]^2 and B_6^{PC}(Z) = (120/K)[A_6^(3)(Z)]^2.
Including only these terms in the approximate IRWEF's for the PCCC gives

A(W, Z) = 2W^2[A_2(Z)]^2 + 6W^4[A_4(Z)]^2 + 20W^6[A_6(Z)]^2 + (10080W^8/K^4)[A_8(Z)]^2
≈ 2W^2[A_2(Z)]^2 + 6W^4[A_4(Z)]^2 + 20W^6[A_6(Z)]^2

B(W, Z) = (4W^2/K)[A_2(Z)]^2 + (24W^4/K)[A_4(Z)]^2 + (120W^6/K)[A_6(Z)]^2 + (80640W^8/K^5)[A_8(Z)]^2
≈ (4W^2/K)[A_2(Z)]^2 + (24W^4/K)[A_4(Z)]^2 + (120W^6/K)[A_6(Z)]^2.
The approximate CWEF's from the previous problem are:

A_2(Z) ≈ 3Z^2 + 7Z^3 + 5Z^4 + Z^6
A_4(Z) ≈ 6Z^2 + 13Z^3 + 16Z^4 + 10Z^5 + 14Z^6 + 7Z^7
A_6(Z) ≈ 3Z^2 + 7Z^3 + 3Z^4 + 12Z^5 + 12Z^6 + 2Z^7 + Z^9
A_8(Z) ≈ 3Z^4 + 2Z^5.
Now we obtain the approximate IRWEF's as:

A(W, Z) ≈ 2W^2 Z^4(3 + 7Z + 5Z^2 + Z^4)^2 + 6W^4 Z^4(6 + 13Z + 16Z^2 + 10Z^3 + 14Z^4 + 7Z^5)^2 + 20W^6 Z^4(3 + 7Z + 3Z^2 + 12Z^3 + 12Z^4 + 2Z^5 + Z^7)^2

B(W, Z) ≈ [4W^2 Z^4(3 + 7Z + 5Z^2 + Z^4)^2 + 24W^4 Z^4(6 + 13Z + 16Z^2 + 10Z^3 + 14Z^4 + 7Z^5)^2 + 120W^6 Z^4(3 + 7Z + 3Z^2 + 12Z^3 + 12Z^4 + 2Z^5 + Z^7)^2]/K
and the approximate WEF's are given by:

A^{PC}(X) ≈ 18X^6 + 84X^7 + 374X^8 + 1076X^9 + 2408X^10 + 4084X^11 + 5464X^12 + 6888X^13 + 9362X^14 + 8064X^15 + 6896X^16 + 7296X^17 + 4414X^18 + 1080X^19 + 560X^20 + 480X^21 + 80X^22 + 20X^24

B^{PC}(X) ≈ (36X^6 + 168X^7 + 1180X^8 + 4024X^9 + 9868X^10 + 17960X^11 + 24496X^12 + 32112X^13 + 47404X^14 + 42336X^15 + 37344X^16 + 41424X^17 + 25896X^18 + 6480X^19 + 3360X^20 + 2880X^21 + 480X^22 + 120X^24)/K
16.13 First constituent encoder: G_ff^(1)(D) = [1 + D + D^2    1 + D^2]
Second constituent encoder: G_ff^(2)(D) = [1    1 + D + D^2]

For the first constituent encoder, the IOWEF is given by

A^{C_1}(W, X, L) = WX^5 L^3 + W^2 L^4(1 + L)X^6 + W^3 L^5(1 + L)^2 X^7 + . . . ,

and for the second one the IRWEF can be computed as

A^{C_2}(W, Z, L) = WZ^3 L^3 + W^2 Z^2 L^4 + W^2 Z^4 L^5 + . . . .
The effect of uniform interleaving is represented by

A_w^{PC}(X, Z) = A_w^{C_1}(X) A_w^{C_2}(Z) / (K choose w),

and for large K, this can be approximated as

A_w^{PC}(X, Z) ≈ Σ_{1 ≤ h_1 ≤ h_max} Σ_{1 ≤ h_2 ≤ h_max} (c[h_1] c[h_2] / (K choose w)) A_w^{(h_1)}(X) A_w^{(h_2)}(Z),

where A_w^{(h_1)}(X) and A_w^{(h_2)}(Z) are the conditional IO and IR h-error event WEF's of the nonsystematic and systematic constituent encoders, C_1 and C_2, respectively. With further approximations, we obtain

A_w^{PC}(X, Z) ≈ (w!/(h_1max! h_2max!)) K^{(h_1max + h_2max − w)} A_w^{(h_1max)}(X) A_w^{(h_2max)}(Z)

and

B_w^{PC}(X, Z) ≈ (w/K) A_w^{PC}(X, Z).
Going directly to the large K case, for any input weight w, w ≥ 1, h_max = w, since the weight 1 single error event can be repeated w times. It follows that

A_w^{(w)}(X) = X^{5w}, w ≥ 1, and A_w^{(w)}(Z) = Z^{3w}, w ≥ 1.

Hence

A_w^{PC}(X, Z) ≈ (w!/(w! w!)) K^{(w + w − w)} X^{5w} Z^{3w} = (K^w/w!) X^{5w} Z^{3w}

and

B_w^{PC}(X, Z) ≈ (K^{(w−1)}/(w − 1)!) X^{5w} Z^{3w}.
Therefore

A^{PC}(W, X) ≈ Σ_{1 ≤ w ≤ K} W^w (K^w/w!) X^{8w} = KWX^8 + (K^2/2)W^2 X^16 + (K^3/6)W^3 X^24 + . . . ,
B^{PC}(W, X) ≈ WX^8 + KW^2 X^16 + (K^2/2)W^3 X^24 + . . . ,
the average WEF's are

A^{PC}(X) = KX^8 + (K^2/2)X^16 + (K^3/6)X^24 + . . . ,
B^{PC}(X) = X^8 + KX^16 + (K^2/2)X^24 + . . . ,

and the free distance of the code is 8.
16.14 The encoder and augmented modified state diagram for this problem are shown below:

[Figure: (a) encoder and (b) augmented modified state diagram for G(D) = (1 + D + D^2)/(1 + D), with states S_0, S_1, S_2, S_3 and branch gains LWZ, LW, LZ, and L.]

Note that in order to leave from and return to state S_0 (i.e., terminating a K − 2 bit input sequence), an even overall-input (including termination bits) weight is required. Examination of the state diagram reveals 6 paths that contain overall-input weight W < 6 ("· · ·" means an indefinite loop around S_3):
Path                                                                  Function
{S_0, S_1, S_2, S_0}                                                  L^3 W^2 Z^3
{S_0, S_1, S_2, S_1, S_2, S_0}                                        L^5 W^4 Z^4
{S_0, S_1, S_3, · · · , S_3, S_2, S_0}                                L^4 W^2 Z^2 / (1 − ZL)
{S_0, S_1, S_2, S_1, S_3, · · · , S_3, S_2, S_0}                      L^6 W^4 Z^3 / (1 − ZL)
{S_0, S_1, S_3, · · · , S_3, S_2, S_1, S_2, S_0}                      L^6 W^4 Z^3 / (1 − ZL)
{S_0, S_1, S_3, · · · , S_3, S_2, S_1, S_3, · · · , S_3, S_2, S_0}    L^7 W^4 Z^2 / (1 − ZL)^2
resulting in the single error event IRWEF for W < 6 of

A^(1)(W, Z, L) = L^3 W^2 Z^3 + L^5 W^4 Z^4 + (L^4 W^2 Z^2 + 2L^6 W^4 Z^3)/(1 − ZL) + L^7 W^4 Z^2/(1 − ZL)^2,
and for both W < 6 and L < 10,

A^(1)(W, Z, L) = L^3 W^2 Z^3 + L^5 W^4 Z^4 + [Σ_{n=0}^{5} L^{4+n} W^2 Z^{2+n}] + [2 Σ_{n=0}^{3} L^{6+n} W^4 Z^{3+n}] + [Σ_{n=0}^{2} (n + 1)L^{7+n} W^4 Z^{2+n}].
The IRWEF for double error events for W < 6 is then

A^(2)(W, Z, L) = [A^(1)(W, Z, L)]^2 = L^6 W^4 Z^6 + 2L^7 W^4 Z^5/(1 − ZL) + L^8 W^4 Z^4/(1 − ZL)^2,
and for both W < 6 and L < 10,

A^(2)(W, Z, L) = L^6 W^4 Z^6 + [2 Σ_{n=0}^{2} L^{7+n} W^4 Z^{5+n}] + [Σ_{n=0}^{1} (n + 1)L^{8+n} W^4 Z^{4+n}].
Thus the single- and double-error event enumerators for W < 6 and L < 10 are

A_{2,3}^(1)(Z) = Z^3    A_{2,4+n}^(1)(Z) = Z^{2+n}    A_{4,5}^(1)(Z) = Z^4    A_{4,6+n}^(1)(Z) = 2Z^{3+n}
A_{4,7}^(1)(Z) = Z^2    A_{4,8}^(1)(Z) = 2Z^3    A_{4,9}^(1)(Z) = 3Z^4
A_{4,6}^(2)(Z) = Z^6    A_{4,7+n}^(2)(Z) = 2Z^{5+n}    A_{4,8}^(2)(Z) = Z^4    A_{4,9}^(2)(Z) = 2Z^5,
and thus, up to Z^7,

A_2^(1)(Z) = Z^3 + Z^2/(1 − Z) = Z^3 + Σ_{n=0}^{K−4} Z^{2+n} = Z^2 + 2Z^3 + Z^4 + Z^5 + Z^6 + Z^7 + · · · + Z^n + · · ·

A_4^(2)(Z) = Z^6 + 2Z^5/(1 − Z) + Z^4/(1 − Z)^2 = Z^6 + 2 Σ_{n=0}^{K−7} Z^{5+n} + Σ_{n=0}^{K−8} (n + 1)Z^{4+n} = Z^4 + 4Z^5 + 6Z^6 + 6Z^7 + · · · + (n − 1)Z^n + · · · .
The multiple turbo code, as shown in Figure 16.2, contains 3 encoders, and it is assumed that the interleavers are uniformly distributed, the block size K is large, and the code uses the same constituent encoders. Assume that a particular input sequence of weight w enters the first encoder, generating the parity weight enumerator A_w^{PC}(Z). Then, by the definition of the uniform interleaver, the second and third encoders will generate any of the weights in A_w^{PC}(Z) with equal probability. The codeword CWEF of this (4K, K − 2) terminated multiple turbo code is given by

A_w^{PC}(Z) = [Σ_{h_1=1}^{h_max} c[h_1, w] A_w^{(h_1)}(Z)] [Σ_{h_2=1}^{h_max} c[h_2, w] (K choose w)^{−1} A_w^{(h_2)}(Z)] [Σ_{h_3=1}^{h_max} c[h_3, w] (K choose w)^{−1} A_w^{(h_3)}(Z)]
= Σ_{h_1=1}^{h_max} Σ_{h_2=1}^{h_max} Σ_{h_3=1}^{h_max} (K choose w)^{−2} c[h_1, w] c[h_2, w] c[h_3, w] A_w^{(h_1)}(Z) A_w^{(h_2)}(Z) A_w^{(h_3)}(Z),
where h_max is the maximum number of error events for the particular choice of w and c[h, w] = (K − h + w choose w). With K ≫ h and K ≫ w, c[h, w] ≈ (K choose h) ≈ K^h/h!, and h_max = w/2. These approximations, along with keeping only the highest power of K, i.e., the term corresponding to h_1 = h_2 = h_3 = h_max, result in the codeword CWEF

A_w^{PC}(Z) ≈ ((w!)^2/(h_max!)^3) K^{(3h_max − 2w)} [A_w^{(h_max)}(Z)]^3

and the bit CWEF

B_w^{PC}(Z) = (w/K) A_w^{PC}(Z) ≈ (w (w!)^2/(h_max!)^3) K^{(3h_max − 2w − 1)} [A_w^{(h_max)}(Z)]^3.
Then, for w < 6, the approximate IRWEF's for this PCCC are

A^{PC}(W, Z) ≈ Σ_{w=2}^{5} W^w A_w^{PC}(Z) and B^{PC}(W, Z) ≈ Σ_{w=2}^{5} W^w B_w^{PC}(Z),

and the WEF's are A^{PC}(X) = A^{PC}(X, X) and B^{PC}(X) = B^{PC}(X, X). From the above,
[A_2^(1)(Z)]^3 = Z^9 + 3Z^8/(1 − Z) + 3Z^7/(1 − Z)^2 + Z^6/(1 − Z)^3
= Z^6 + 6Z^7 + 15Z^8 + 23Z^9 + 30Z^10 + 39Z^11 + · · · + ((n^2 − 3n − 10)/2)Z^n + · · ·

and

[A_4^(2)(Z)]^3 = Z^18 + 6Z^17/(1 − Z) + 15Z^16/(1 − Z)^2 + 20Z^15/(1 − Z)^3 + 15Z^14/(1 − Z)^4 + 6Z^13/(1 − Z)^5 + Z^12/(1 − Z)^6
= Z^12 + 12Z^13 + 66Z^14 + 226Z^15 + 561Z^16 + 1128Z^17 + 1995Z^18 + 3258Z^19 + 5028Z^20 + · · · + ((−21720 − 7046n + 3015n^2 − 155n^3 − 15n^4 + n^5)/120)Z^n + · · · .
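The first series expansion and its closed form can be checked numerically; the coefficients below come from the partial-fraction form given above:

```python
# Expand [A_2^(1)(Z)]^3 = Z^9 + 3Z^8/(1-Z) + 3Z^7/(1-Z)^2 + Z^6/(1-Z)^3
# term by term and compare against the closed form (n^2 - 3n - 10)/2.
N = 30
series = [0] * N
for n in range(6, N):
    c = (n - 4) * (n - 5) // 2           # Z^6/(1-Z)^3 contributes C(n-4, 2)
    if n >= 7:
        c += 3 * (n - 6)                 # 3Z^7/(1-Z)^2 contributes 3(n-6)
    if n >= 8:
        c += 3                           # 3Z^8/(1-Z) contributes 3
    if n == 9:
        c += 1                           # the isolated Z^9 term
    series[n] = c
print(series[6:12])                      # [1, 6, 15, 23, 30, 39]
assert all(series[n] == (n * n - 3 * n - 10) // 2 for n in range(10, N))
```

The closed form takes over at n = 10, once all four partial-fraction terms are active.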
(a) Using the above, the approximate CWEF's up to Z^14 are

A_2^{PC}(Z) ≈ (4/K)[A_2^(1)(Z)]^3 = (4/K)Z^6 + (24/K)Z^7 + (60/K)Z^8 + (92/K)Z^9 + (120/K)Z^10 + (156/K)Z^11 + (196/K)Z^12 + (240/K)Z^13 + (288/K)Z^14 + · · · + 2((n^2 − 3n − 10)/K)Z^n + · · ·

A_4^{PC}(Z) ≈ (72/K^2)[A_4^(2)(Z)]^3 ≈ (72/K^2)Z^12 + (864/K^2)Z^13 + (4752/K^2)Z^14 + · · ·

A_3^{PC}(Z) = A_5^{PC}(Z) = 0

B_2^{PC}(Z) = (2/K)A_2^{PC}(Z) ≈ (8/K^2)Z^6 + (48/K^2)Z^7 + (120/K^2)Z^8 + (184/K^2)Z^9 + (240/K^2)Z^10 + (312/K^2)Z^11 + (392/K^2)Z^12 + (480/K^2)Z^13 + (576/K^2)Z^14 + · · · + 4((n^2 − 3n − 10)/K^2)Z^n + · · ·

B_4^{PC}(Z) = (4/K)A_4^{PC}(Z) ≈ (288/K^3)Z^12 + (3456/K^3)Z^13 + (19008/K^3)Z^14 + · · ·

B_3^{PC}(Z) = B_5^{PC}(Z) = 0
(b) Using the above, the approximate IRWEF's up to W^5 and Z^14 are

A^{PC}(W, Z) ≈ W^2[(4/K)Z^6 + (24/K)Z^7 + (60/K)Z^8 + (92/K)Z^9 + (120/K)Z^10 + (156/K)Z^11 + (196/K)Z^12 + (240/K)Z^13 + (288/K)Z^14 + · · · + 2((n^2 − 3n − 10)/K)Z^n + · · ·] + W^4[(72/K^2)Z^12 + (864/K^2)Z^13 + (4752/K^2)Z^14 + · · ·]

and

B^{PC}(W, Z) ≈ W^2[(8/K^2)Z^6 + (48/K^2)Z^7 + (120/K^2)Z^8 + (184/K^2)Z^9 + (240/K^2)Z^10 + (312/K^2)Z^11 + (392/K^2)Z^12 + (480/K^2)Z^13 + (576/K^2)Z^14 + · · · + 4((n^2 − 3n − 10)/K^2)Z^n + · · ·] + W^4[(288/K^3)Z^12 + (3456/K^3)Z^13 + (19008/K^3)Z^14 + · · ·]
(c) Using the above, the approximate WEF's up to X^14, neglecting any higher power of K terms, are

A^{PC}(X) ≈ (4/K)X^8 + (24/K)X^9 + (60/K)X^10 + (92/K)X^11 + (120/K)X^12 + (156/K)X^13 + (196/K)X^14 + · · ·

and

B^{PC}(X) ≈ (8/K^2)X^8 + (48/K^2)X^9 + (120/K^2)X^10 + (184/K^2)X^11 + (240/K^2)X^12 + (312/K^2)X^13 + (392/K^2)X^14 + · · ·
(d) The union bounds on the BER and WER for K = 10^3 and K = 10^4, assuming a binary input, unquantized output AWGN channel, are shown below. The plots were created using the loose bounds

WER ≤ A^{PC}(X)|_{X = e^{−R E_b/N_0}} and BER ≤ B^{PC}(X)|_{X = e^{−R E_b/N_0}},

where R = 1/4 is the code rate and E_b/N_0 is the SNR.

[Figure: (a) word error rate (10^-7 to 10^-1) and (b) bit error rate (10^-10 to 10^-4) versus E_b/N_0 from 1 to 7 dB, for N = 1000 (dashed) and N = 10000 (solid).]
16.15 (a) [ 1    1/(1 + D) ]
i. Weight 2
Minimum parity weight generating information sequence = 1 +D.
Corresponding parity weight = 1.
In a PCCC (rate 1/3) we therefore have (assuming a similar input after interleaving):
Information weight = 2; Parity weight = 1 + 1 = 2; Total weight = 4.
ii. Weight 3
For this encoder a weight 3 input does not terminate the encoder and hence is not possible.
(b) [ 1    (1 + D^2)/(1 + D + D^2) ]
i. Weight 2
Minimum parity weight generating information sequence = 1 + D^3.
Corresponding parity weight = 4.
In a PCCC we therefore have:
Information weight = 2; Parity weight = 4 + 4 = 8; Total weight = 10.
ii. Weight 3
Minimum parity weight generating information sequence = 1 + D + D^2.
Corresponding parity weight = 2.
In a PCCC we therefore have:
Information weight = 3; Parity weight = 2 + 2 = 4; Total weight = 7.
(c) [ 1    (1 + D^2 + D^3)/(1 + D + D^3) ]
i. Weight 2
Minimum parity weight generating information sequence = 1 + D^7.
Corresponding parity weight = 6.
In a PCCC we therefore have:
Information weight = 2; Parity weight = 6 + 6 = 12; Total weight = 14.
ii. Weight 3
Minimum parity weight generating information sequence = 1 + D + D^3.
Corresponding parity weight = 4.
In a PCCC we therefore have:
Information weight = 3; Parity weight = 4 + 4 = 8; Total weight = 11.
(d) [ 1    (1 + D + D^2 + D^4)/(1 + D^3 + D^4) ]
i. Weight 2
Minimum parity weight generating information sequence = 1 + D^15.
Corresponding parity weight = 10.
In a PCCC we therefore have:
Information weight = 2; Parity weight = 10 + 10 = 20; Total weight = 22.
ii. Weight 3
Minimum parity weight generating information sequence = 1 + D^3 + D^4.
Corresponding parity weight = 4.
In a PCCC we therefore have:
Information weight = 3; Parity weight = 4 + 4 = 8; Total weight = 11.
(e) [ 1    (1 + D^4)/(1 + D + D^2 + D^3 + D^4) ]
i. Weight 2
Minimum parity weight generating information sequence = 1 + D^5.
Corresponding parity weight = 4.
In a PCCC we therefore have:
Information weight = 2; Parity weight = 4 + 4 = 8; Total weight = 10.
ii. Weight 3
For this encoder a weight 3 input does not terminate the encoder and hence is not possible.
In all cases the input weight two sequence gives the free distance codeword for large K.
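The exponents of the shortest terminating weight-2 inputs in parts (b)-(e) are the multiplicative orders of D modulo the feedback polynomials, which a short GF(2) routine can confirm:

```python
def polymod(a, q):
    """a mod q over GF(2); polynomials as bit masks (bit i = coeff of D^i)."""
    dq = q.bit_length() - 1
    while a.bit_length() - 1 >= dq:
        a ^= q << (a.bit_length() - 1 - dq)
    return a

def shortest_weight2_exponent(q):
    """Smallest L with q(D) dividing 1 + D^L, i.e. the order of D mod q(D)."""
    L, r = 1, polymod(2, q)              # r = D mod q
    while r != 1:
        L, r = L + 1, polymod(r << 1, q)
    return L

# Feedback polynomials of parts (b)-(e) with the exponents quoted above.
for q, L in [(0b111, 3), (0b1011, 7), (0b11001, 15), (0b11111, 5)]:
    assert shortest_weight2_exponent(q) == L
print("all exponents match")
```

Primitive feedback polynomials of degree m give the maximum exponent 2^m − 1 (parts (c) and (d)), while non-primitive ones give smaller exponents (parts (b) and (e)).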
16.16 For a systematic feedforward encoder, for input weight w, the single error event for input weight 1 can occur w times. Therefore h_max, the largest number of error events associated with a weight w input sequence, is equal to w. That is, the maximum number of error events produced by a weight w input sequence occurs by repeating the single error event for input weight 1 w times. Thus,

A_w^{(w)}(Z) = [A_1^{(1)}(Z)]^w.

This is not true for feedback encoders. Feedback encoders require at least a weight 2 input sequence to terminate. Thus, for an input sequence of weight 2w, h_max = w, i.e., we have a maximum of w error events. This occurs when an error event caused by a weight 2 input sequence is repeated w times (for a total input weight of 2w). Therefore

A_{2w}^{(w)}(Z) = [A_2^{(1)}(Z)]^w.
16.19 An (n, 1, v) systematic feedback encoder can be realized by the diagram below, with register stages m_0, m_1, . . . , m_{v−1}, input u, and outputs v^(0) = u, v^(1), . . . , v^(n−1):

[Figure: systematic feedback encoder realization — omitted.]
In the figure, the dashed lines indicate potential connections that depend on the encoder realization. It can be seen from the encoder structure that the register contents are updated as

m_{i,t} = m_{i−1,t−1}, i = 1, 2, . . . , v − 1

and

m_{0,t} = m_{v−1,t−1} + u_t + Σ_{i=0}^{v−2} a_i m_{i,t−1},

where t is the time index, u_t represents the input bit at time t, and the coefficients a_i depend on the particular encoder realization. When the encoder is in state S_{2^{v−1}} = (0 0 . . . 1) at time t − 1, it follows from the given relations that

m_{i,t} = m_{i−1,t−1} = 0, i = 1, 2, . . . , v − 1

and

m_{0,t} = m_{v−1,t−1} + u_t + Σ_{i=0}^{v−2} a_i m_{i,t−1} = 1 + u_t.

Thus if the input bit u_t = 1, the first memory position at time t is also 0, and the encoder is in the all-zero state.
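The termination property can be illustrated by simulating one register update for a small example realization (v and the taps a_i below are arbitrary choices, not a specific encoder from the text):

```python
v = 3
a = [1, 0]                               # hypothetical feedback taps a_0, ..., a_{v-2}

def step(state, u):
    """One register update: state = (m_0, m_1, ..., m_{v-1})."""
    m0 = (state[-1] + u + sum(ai * mi for ai, mi in zip(a, state))) % 2
    return [m0] + state[:-1]             # shift: m_i <- m_{i-1}

state = [0] * (v - 1) + [1]              # S_{2^{v-1}} = (0, ..., 0, 1)
print(step(state, 1))                    # [0, 0, 0] -- the all-zero state
```

With u_t = 0 instead, the new state is nonzero, matching the argument that the input bit 1 is what drives the register home.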
16.20 For the primitive encoder D with G_p(D) = [ 1    (D^4 + D^2 + D + 1)/(D^4 + D^3 + 1) ], the shortest terminating weight 2 input sequence is

u(D) = 1 + D^{2^4 − 1} = 1 + D^15,

which generates the parity sequence

v(D) = (1 + D^3 + D^4 + D^6 + D^8 + D^9 + D^10 + D^11)(D^4 + D^2 + D + 1)
= 1 + D + D^2 + D^3 + D^4 + D^8 + D^11 + D^12 + D^14 + D^15.
Thus

Z_min = 10 and d_eff = 2 + 2Z_min = 2 + 2 × 10 = 22.

For the nonprimitive encoder E with G_np(D) = [ 1    (D^4 + 1)/(D^4 + D^3 + D^2 + D + 1) ], the shortest terminating weight 2 input sequence is u(D) = 1 + D^5, which generates the parity sequence

v(D) = (D + 1)(D^4 + 1) = 1 + D + D^4 + D^5.

Thus Z_min = 4 and d_eff = 2 + 2Z_min = 10.
Chapter 20
20.1 If an (n, k) code C is designed to correct all bursts of length l or less and simultaneously to detect all bursts of length d or less with d ≥ l, then no burst of length l + d or less can be a codeword. Suppose there is a burst x of length l + d or less that is a codeword in C. We can decompose this burst x into two bursts y and z such that: (1) x = y + z; (2) the length of y is l or less and the length of z is d or less. Since the code is designed to correct all bursts of length l or less, y must be a coset leader in a standard array for the code. Since y + z = x is a codeword, z must be in the same coset as y. As a result, z has the same syndrome as y. In this case, if z occurs as an error burst during the transmission of a codeword in C, then y will be taken as the error burst for correction. This will result in an incorrect decoding, and hence the decoder will fail to detect z, which contradicts the fact that the code is designed to correct all bursts of length l or less and simultaneously to detect all bursts of length d or less with d ≥ l. Therefore, a burst of length l + d or less cannot be a codeword. It then follows from Theorem 20.2 that n − k ≥ l + d.
20.2 An error-trapping decoder is shown in Figure P20.2.
Figure P20.2 An error-trapping decoder.
The decoding operation is given below:
Step 1. The received polynomial r(X) is shifted into the buffer and syndrome registers simultaneously, with Gates 1 and 3 turned on and all the other gates turned off.

Step 2. As soon as the entire r(X) has been shifted into the syndrome register, the contents of the syndrome register form the syndrome S^(n-k)(X) of r^(n-k)(X). If the n - k - l leftmost stages of the syndrome register contain all zeros, the error burst is confined to the positions X^(n-l), ..., X^(n-1). Gate 2 and Gate 4 are activated, and the received information digits are read out of the buffer register and corrected by the error digits shifted out from the syndrome register. If the n - k - l leftmost stages of the syndrome register do not contain all zeros, go to Step 3.

Step 3. The syndrome register starts to shift with Gate 3 on. As soon as its n - k - l leftmost stages contain only zeros, its l rightmost stages contain the burst-error pattern and error correction begins. There are three cases to be considered.

Step 4. If the n - k - l leftmost stages of the syndrome register contain all zeros after the ith shift, with 0 < i ≤ 2k - n (assuming 2k - n ≥ 0; otherwise this step does not occur), the error burst is confined to the positions X^(n-i-l), ..., X^(n-i-1). In this event, Gate 2 is first activated and the i highest-order information digits are shifted out. Then Gate 4 is activated, and the remaining information digits are read out and corrected by the error digits shifted out from the syndrome register.

Step 5. If the n - k - l leftmost stages of the syndrome register contain all zeros after the ith shift, with k ≤ i ≤ n - l, the errors of the burst e(X) are confined to the parity-check positions of r(X). In this event, the k received information digits in the buffer are error free. Gate 2 is activated and the k error-free information digits in the buffer are shifted out to the data sink.

Step 6. If the n - k - l leftmost stages of the syndrome register contain all zeros after the (n - l + i)th shift, with 0 < i < l, the error burst is confined to the positions X^(n-i), ..., X^(n-1), X^0, ..., X^(l-i-1). In this event, the l - i digits contained in the l - i rightmost stages of the syndrome register match the errors at the parity-check positions of r(X), and the i digits contained in the next i stages of the syndrome register match the errors at the positions X^(n-i), ..., X^(n-1) of r(X). At this instant, a clock starts to count from n - l + i + 1. The syndrome register is then shifted with Gate 3 turned off. As soon as the clock has counted up to n, the i rightmost digits in the syndrome register match the errors at the positions X^(n-i), ..., X^(n-1) of r(X). Gate 4 and Gate 2 are then activated, and the received information digits are read out of the buffer register and corrected by the error digits shifted out from the syndrome register.

If the n - k - l leftmost stages of the syndrome register never contain all zeros during the steps above, an uncorrectable burst of errors has been detected.
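Steps 1–6 describe error trapping in hardware; the same loop can be sketched in software. The sketch below (an illustration with our own helper names, not the circuit of Figure P20.2) uses the (15, 9) burst-3-correcting code of Problem 20.6: shift the received word cyclically until the syndrome fits in the l rightmost stages, take the syndrome as the trapped burst, and shift it back.

```python
def gf2_mul(a, b):
    """GF(2) polynomial product; bit i of an int is the coefficient of X^i."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def gf2_mod(a, m):
    """Remainder of GF(2) polynomial a modulo m."""
    dm = m.bit_length() - 1
    while a.bit_length() - 1 >= dm:
        a ^= m << (a.bit_length() - 1 - dm)
    return a

def cyc_shift(x, i, n):
    """Multiply x(X) by X^i modulo X^n + 1 (cyclic shift of the bitmask)."""
    i %= n
    return ((x << i) | (x >> (n - i))) & ((1 << n) - 1)

def trap_decode(r, g, n, l):
    """Error trapping: shift r(X) until the syndrome is confined to l low-order stages."""
    for i in range(n):
        s = gf2_mod(cyc_shift(r, i, n), g)
        if s < (1 << l):                       # burst trapped in the l rightmost stages
            return r ^ cyc_shift(s, n - i, n)  # shift the burst back and correct
    return None                                # uncorrectable burst detected

# (15, 9) code with g(X) = 1 + X^3 + X^4 + X^5 + X^6, burst-correcting capability l = 3
g, n, l = 0b1111001, 15, 3
codeword = gf2_mul(0b101100101, g)             # arbitrary message times g(X)
received = codeword ^ 0b1010000000             # burst X^7 + X^9 (length 3)
decoded = trap_decode(received, g, n, l)
assert decoded == codeword
```

When no shift traps the burst, the function returns None, which corresponds to the uncorrectable-burst case at the end of the procedure.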
20.4 The generator polynomial of the desired Fire code is

g(X) = (X^(2l-1) + 1)p(X).

Since the code is designed to correct all the bursts of length 4 or less, l = 4. Hence

g(X) = (X^7 + 1)(1 + X + X^4) = 1 + X + X^4 + X^7 + X^8 + X^11.

Since p(X) is a primitive polynomial of degree 4, its period is ρ = 2^4 - 1 = 15. Hence the
length of the code is n = LCM(7, 15) = 105. Since the degree of the generator polynomial
of the code is 11, which is the number of parity-check digits of the code, the code is a (105, 94)
Fire code. The decoder of this code is shown in Figure P20.4.
Figure P20.4 A decoder for the (105, 94) Fire code.
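The expansion of g(X) and the length computation can be checked directly (a sketch with our own helper names):

```python
from math import lcm

def gf2_mul(a, b):
    """GF(2) polynomial product; bit i of an int is the coefficient of X^i."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def gf2_mod(a, m):
    """Remainder of GF(2) polynomial a modulo m."""
    dm = m.bit_length() - 1
    while a.bit_length() - 1 >= dm:
        a ^= m << (a.bit_length() - 1 - dm)
    return a

p = 0b10011                                  # p(X) = 1 + X + X^4, primitive of degree 4
g = gf2_mul((1 << 7) | 1, p)                 # g(X) = (X^7 + 1) p(X)
assert g == 0b100110010011                   # 1 + X + X^4 + X^7 + X^8 + X^11

# period of p(X): the smallest rho such that p(X) divides X^rho + 1
rho = next(e for e in range(1, 16) if gf2_mod((1 << e) | 1, p) == 0)
assert rho == 15
assert lcm(7, rho) == 105                    # code length n; n - k = deg g = 11 -> (105, 94)
```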
20.5 The high-speed decoder for the (105, 94) Fire code designed in Problem 20.4 is shown in
Figure P20.5.
Figure P20.5 A high-speed decoder for the (105, 94) Fire code.
20.6 We choose the second code given in Table 20.3, which is a (15, 9) code capable of correcting
any burst of length 3 or less; its burst-error-correcting efficiency is z = 1. The generator
polynomial of this code is g(X) = 1 + X^3 + X^4 + X^5 + X^6. Suppose we interleave this code
to degree λ = 17. This results in a (255, 153) cyclic code that is capable of correcting any
single burst of length 51 or less and has burst-error-correcting efficiency z = 1. The generator
polynomial of this code is

g'(X) = g(X^17) = 1 + X^51 + X^68 + X^85 + X^102.

A decoder for this code can be implemented with a de-interleaver and the decoder for the
(15, 9) code, as shown in Figure P20.6. First the received sequence is de-interleaved and
arranged as a 17 × 15 array. Then each row of the array is decoded based on the base (15, 9)
code.
Figure P20.6 A decoder for the (255, 153) burst-error-correcting code designed above.
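Substituting X^17 for X multiplies every exponent of g(X) by 17; a quick check (helper name is ours):

```python
def subst_power(p, t):
    """Return p(X^t) for a GF(2) polynomial p given as a bitmask."""
    r = 0
    for e in range(p.bit_length()):
        if (p >> e) & 1:
            r |= 1 << (e * t)
    return r

g = 0b1111001                       # g(X) = 1 + X^3 + X^4 + X^5 + X^6 for the (15, 9) code
gi = subst_power(g, 17)
assert gi == (1 | (1 << 51) | (1 << 68) | (1 << 85) | (1 << 102))
# interleaving to degree 17 multiplies length, dimension, and burst capability by 17
assert (17 * 15, 17 * 9, 17 * 3) == (255, 153, 51)
```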
20.7 A codeword in the interleaved code C^λ is obtained by interleaving λ codewords from the (n, k)
base code C. Let v_1(X), v_2(X), ..., v_λ(X) be λ codewords in C. Then the codeword in C^λ
obtained by interleaving these λ codewords of C, in polynomial form, is

v(X) = v_1(X^λ) + X v_2(X^λ) + X^2 v_3(X^λ) + ... + X^(λ-1) v_λ(X^λ).

Since v_i(X) is divisible by g(X) for 1 ≤ i ≤ λ, v_i(X^λ) is divisible by g(X^λ). Therefore v(X)
is divisible by g(X^λ). The degree of g(X^λ) is (n - k)λ. Since g(X^λ) is a factor of X^(λn) + 1,
g(X^λ) generates a (λn, λk) cyclic code.
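The divisibility argument can be exercised on a small case of our choosing — interleaving λ = 3 codewords of the (7, 4) Hamming code — to confirm that the interleaved polynomial is divisible by g(X^λ):

```python
def gf2_mul(a, b):
    """GF(2) polynomial product; bit i of an int is the coefficient of X^i."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def gf2_mod(a, m):
    """Remainder of GF(2) polynomial a modulo m."""
    dm = m.bit_length() - 1
    while a.bit_length() - 1 >= dm:
        a ^= m << (a.bit_length() - 1 - dm)
    return a

def subst_power(p, t):
    """Return p(X^t) for a GF(2) polynomial p given as a bitmask."""
    r = 0
    for e in range(p.bit_length()):
        if (p >> e) & 1:
            r |= 1 << (e * t)
    return r

g = 0b1011                                   # g(X) = 1 + X + X^3 for the (7, 4) Hamming code
lam = 3
codewords = [gf2_mul(m, g) for m in (0b1, 0b1011, 0b1111)]   # three codewords in C
v = 0
for i, ci in enumerate(codewords):
    v ^= subst_power(ci, lam) << i           # v(X) = sum of X^i v_{i+1}(X^lam)
g_lam = subst_power(g, lam)                  # g(X^3) = 1 + X^3 + X^9
assert gf2_mod(v, g_lam) == 0                # interleaved word divisible by g(X^lam)
assert g_lam.bit_length() - 1 == 3 * lam     # degree (n - k) * lam
```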
20.8 In order to show that the Burton code is capable of correcting all phased bursts confined to
a single subblock of m digits, it is necessary and sufficient to prove that no two such bursts
are in the same coset of a standard array for the code. Let X^(mi) A(X) and X^(mj) B(X) be the
polynomial representations of two bursts of lengths l_1 and l_2, with l_1 ≤ m and l_2 ≤ m, confined
to the ith and jth subblocks (0 ≤ i < σ, 0 ≤ j < σ), respectively, where

A(X) = 1 + a_1 X + ... + a_(l_1 - 2) X^(l_1 - 2) + X^(l_1 - 1),

and

B(X) = 1 + b_1 X + ... + b_(l_2 - 2) X^(l_2 - 2) + X^(l_2 - 1).

Suppose X^(mi) A(X) and X^(mj) B(X) are in the same coset. Then V(X) = X^(mi) A(X) +
X^(mj) B(X) must be a code polynomial and divisible by the generator polynomial g(X) =
(X^m + 1)p(X). Assume mj ≥ mi. Then mj - mi = qm and

V(X) = X^(mi) (A(X) + B(X)) + X^(mi) B(X)(X^(qm) + 1).

Since X^m + 1 is a factor of the generator polynomial g(X), V(X) must be divisible by X^m + 1. Note
that X^(qm) + 1 is divisible by X^m + 1, and X^(mi) and X^m + 1 are relatively prime. Since
V(X) and X^(mi) B(X)(X^(qm) + 1) are both divisible by X^m + 1, A(X) + B(X) must be divisible by
X^m + 1. However, since both degrees l_1 - 1 and l_2 - 1 of A(X) and B(X) are less than m, the degree of
A(X) + B(X) is less than m. Hence A(X) + B(X) is divisible by X^m + 1 only if it is zero. For V(X) to
be divisible by X^m + 1, A(X) and B(X) must therefore be identical, i.e., A(X) = B(X), and

V(X) = X^(mi) B(X)(X^(qm) + 1).

Since the degree of B(X) is less than m, B(X) and p(X) must be relatively prime. Also X^(mi)
and p(X) are relatively prime. Since V(X) is assumed to be a code polynomial, X^(qm) + 1
must be divisible by both p(X) and X^m + 1. This implies that X^(qm) + 1 is divisible by
(X^m + 1)p(X), and hence qm must be a multiple of n = σm. This is not possible, since q = j - i
is less than σ. Therefore V(X) can not be a code polynomial, and the bursts X^(mi) A(X) and
X^(mj) B(X) can not be in the same coset. Consequently, all the phased bursts confined to a
single subblock of m digits lie in different cosets of a standard array for the Burton code
generated by (X^m + 1)p(X), and they are correctable.
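The coset argument can also be confirmed exhaustively for a small Burton code — here the m = 5 code constructed in Problem 20.9: every nonzero pattern confined to a single 5-digit subblock yields a distinct nonzero syndrome (a verification sketch, helper names ours):

```python
def gf2_mul(a, b):
    """GF(2) polynomial product; bit i of an int is the coefficient of X^i."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def gf2_mod(a, m):
    """Remainder of GF(2) polynomial a modulo m."""
    dm = m.bit_length() - 1
    while a.bit_length() - 1 >= dm:
        a ^= m << (a.bit_length() - 1 - dm)
    return a

m = 5
p = 0b100101                                 # p(X) = 1 + X^2 + X^5, primitive, period 31
g = gf2_mul((1 << m) | 1, p)                 # Burton generator (X^5 + 1) p(X)
n = 155                                      # LCM(31, 5), so sigma = 31 subblocks

syndromes = set()
for sub in range(n // m):                    # each subblock position
    for pat in range(1, 1 << m):             # each nonzero phased-burst pattern
        syndromes.add(gf2_mod(pat << (m * sub), g))
assert 0 not in syndromes                    # no such burst is a codeword
assert len(syndromes) == (n // m) * ((1 << m) - 1)   # all in distinct cosets -> correctable
```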
20.9 Choose p(X) = 1 + X^2 + X^5, which is a primitive polynomial with period ρ = 2^5 - 1 = 31.
Then the generator polynomial of the Burton code C with m = 5 is

g(X) = (X^5 + 1)p(X) = 1 + X^2 + X^7 + X^10.

The length of the code is n = LCM(31, 5) = 155. Hence the code is a (155, 145) code
that is capable of correcting any phased burst confined to a single subblock of 5 digits. If the
code is interleaved to degree λ = 6, we obtain a (930, 870) code C^6 generated by

g(X^6) = 1 + X^12 + X^42 + X^60.

This code is capable of correcting any burst of length 5 × 5 + 1 = 26 or less.
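A quick check of the product, the length, and the interleaved generator (helper names ours):

```python
from math import lcm

def gf2_mul(a, b):
    """GF(2) polynomial product; bit i of an int is the coefficient of X^i."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def subst_power(p, t):
    """Return p(X^t) for a GF(2) polynomial p given as a bitmask."""
    r = 0
    for e in range(p.bit_length()):
        if (p >> e) & 1:
            r |= 1 << (e * t)
    return r

p = 0b100101                                 # p(X) = 1 + X^2 + X^5
g = gf2_mul((1 << 5) | 1, p)                 # (X^5 + 1) p(X); the two X^5 terms cancel
assert g == (1 | (1 << 2) | (1 << 7) | (1 << 10))    # 1 + X^2 + X^7 + X^10
assert lcm(31, 5) == 155                     # code length; deg g = 10 -> a (155, 145) code
g6 = subst_power(g, 6)
assert g6 == (1 | (1 << 12) | (1 << 42) | (1 << 60)) # g(X^6)
assert (6 - 1) * 5 + 1 == 26                 # burst capability after interleaving to degree 6
```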
20.10 From Table 9.3, the (164, 153) code has burst-error-correcting capability l = 4. If it is interleaved to
degree λ = 6, then λn = 6 × 164 = 984, λk = 6 × 153 = 918, and λl = 6 × 4 = 24. The
interleaved code is a (984, 918) code with burst-error-correcting capability 24.
The interleaved Burton code of Problem 20.9 is a (930, 870) code with burst-error-correcting
capability 26. The burst-error-correcting efficiency of the (984, 918) code is

z = 2λl / (λn - λk) = 48 / (984 - 918) = 8/11.

The efficiency of the interleaved Burton code of Problem 20.9 is

z' = 2[(λ - 1)m + 1] / (2λm) = 52/60 = 13/15.

Therefore z' ≥ z. The interleaved Burton code is more powerful and efficient.
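The two efficiencies as exact fractions:

```python
from fractions import Fraction

z = Fraction(2 * 6 * 4, 984 - 918)           # 2*lambda*l / (lambda*n - lambda*k)
assert z == Fraction(8, 11)
z_burton = Fraction(2 * ((6 - 1) * 5 + 1), 2 * 6 * 5)  # 2[(lambda - 1)m + 1] / (2*lambda*m)
assert z_burton == Fraction(13, 15)
assert z_burton > z                          # the interleaved Burton code is more efficient
```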
20.11 The (15, 7) BCH code has random-error-correcting capability 2. Interleaving this code to
degree 7, we obtain a (105, 49) cyclic code that is capable of correcting any combination of
two random bursts, each of length 7 or less. Of course, it is also capable of correcting any single
burst of length 14 or less. In decoding, a received sequence is de-interleaved and arranged into
a 7 × 15 array. Each row of the array is then decoded based on the (15, 7) BCH code.
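The de-interleaving step can be sketched as follows (an illustration with our own naming; stand-in symbols rather than real channel bits):

```python
def deinterleave(seq, depth):
    """Arrange an interleaved sequence into `depth` rows; row j collects symbols j, j+depth, ..."""
    return [seq[j::depth] for j in range(depth)]

# 105 received symbols -> a 7 x 15 array; each row is a (15, 7) BCH received word
received = list(range(105))                  # stand-in symbols
rows = deinterleave(received, 7)
assert len(rows) == 7 and all(len(r) == 15 for r in rows)
assert rows[0][:3] == [0, 7, 14]             # consecutive channel symbols spread across rows
```

Because consecutive channel positions land in distinct rows, a burst of 14 consecutive errors puts at most 2 errors in any row, within the (15, 7) code's random-error capability.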
20.12 If each symbol of the (31, 15) RS code over GF(2^5) is replaced by its corresponding 5-bit
byte, we obtain a (155, 75) binary code. Since the (31, 15) RS code is capable of correcting
8 or fewer random symbol errors, the binary (155, 75) code is capable of correcting: (1) any
single burst of length (8 - 1) × 5 + 1 = 36 or less; or (2) any combination of random
bursts that results in 8 or fewer symbol errors for the (31, 15) RS code.
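The bound (8 - 1) × 5 + 1 = 36 is alignment arithmetic: a burst of 36 bits can touch at most 8 of the 5-bit symbols, while one more bit can touch 9. A quick check:

```python
def symbols_touched(burst_len, m):
    """Worst-case number of m-bit symbols overlapped by a burst of the given bit length."""
    return max(((start + burst_len - 1) // m) - (start // m) + 1 for start in range(m))

assert symbols_touched((8 - 1) * 5 + 1, 5) == 8   # length 36 -> at most 8 symbol errors
assert symbols_touched(37, 5) == 9                # one bit longer can defeat the RS code
```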
20.13 From Problem 20.4, the generator polynomial

g(X) = 1 + X + X^4 + X^7 + X^8 + X^11

generates a (105, 94) Fire code. If it is shortened by deleting the 15 high-order message digits,
it becomes a (90, 79) code.
Dividing X^(n-k+15) = X^26 by g(X), we obtain the remainder

ρ(X) = X^10 + X^8 + X^5 + X^3 + X.

Then a decoder for the (90, 79) shortened Fire code is shown in Figure P20.13.
Figure P20.13 A decoder for the (90, 79) Fire code.
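The remainder ρ(X), which sets the connection taps of the shortened decoder, can be verified directly (helper name ours):

```python
def gf2_mod(a, m):
    """Remainder of GF(2) polynomial a modulo m."""
    dm = m.bit_length() - 1
    while a.bit_length() - 1 >= dm:
        a ^= m << (a.bit_length() - 1 - dm)
    return a

g = 0b100110010011                           # g(X) = 1 + X + X^4 + X^7 + X^8 + X^11
rho = gf2_mod(1 << 26, g)                    # X^(n-k+15) = X^26 mod g(X)
assert rho == (1 << 1) | (1 << 3) | (1 << 5) | (1 << 8) | (1 << 10)
```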
20.14 Let α be a primitive element of the Galois field GF(2^6). The order of α is 2^6 - 1 = 63 and the
minimal polynomial of α is

Φ(X) = 1 + X + X^6.

Since 63 = 7 × 9, we can factor X^63 + 1 as follows:

X^63 + 1 = (X^7 + 1)(1 + X^7 + X^14 + X^21 + X^28 + X^35 + X^42 + X^49 + X^56).

The code generated by the polynomial

g_1(X) = (X^7 + 1)(1 + X + X^6)

is a Fire code of length 63 which is capable of correcting any single error burst of length 4
or less. Let g_2(X) be the generator polynomial of the double-error-correcting BCH code of
length 63. From Table 6.4, we find that

g_2(X) = (1 + X + X^6)(1 + X + X^2 + X^4 + X^6).

The least common multiple of g_1(X) and g_2(X) is

g(X) = (1 + X^7)(1 + X + X^6)(1 + X + X^2 + X^4 + X^6).

Hence, g(X) generates a (63, 44) cyclic code which is a modified Fire code and is capable of
correcting any single error burst of length 4 or less as well as any combination of two or fewer
random errors.
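The degree count and the divisibility of X^63 + 1 by g(X) confirm the (63, 44) parameters (helper names ours):

```python
def gf2_mul(a, b):
    """GF(2) polynomial product; bit i of an int is the coefficient of X^i."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def gf2_mod(a, m):
    """Remainder of GF(2) polynomial a modulo m."""
    dm = m.bit_length() - 1
    while a.bit_length() - 1 >= dm:
        a ^= m << (a.bit_length() - 1 - dm)
    return a

f1 = (1 << 7) | 1                            # X^7 + 1
f2 = 0b1000011                               # 1 + X + X^6, minimal polynomial of alpha
f3 = 0b1010111                               # 1 + X + X^2 + X^4 + X^6, min. poly of alpha^3
g = gf2_mul(gf2_mul(f1, f2), f3)
assert g.bit_length() - 1 == 19              # n - k = 19 -> a (63, 44) code
assert gf2_mod((1 << 63) | 1, g) == 0        # g(X) divides X^63 + 1
```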

. .. ...1: Decoding Table e2 e3 e4 e5 e6 e7 e8 e9 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 e10 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 e11 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 e12 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 e13 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 e14 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 r0 r1 Buffer Register r14 .s0 0 1 0 0 0 1 0 0 1 1 0 1 0 1 1 1 s1 0 0 1 0 0 1 1 0 0 0 1 1 1 0 1 1 s2 0 0 0 1 0 0 1 1 0 1 0 1 1 1 0 1 s3 0 0 0 0 1 0 0 1 1 0 1 0 1 1 1 1 e0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 e1 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 Table 4.. + s0 + s1 + s2 + s3 s0 s1 s2 s3 s0 s1 s2 s3 s0 s1 s2 s3 s0 s1 s2 s3 .. .. r0 e0 r1 e1 r2 e2 e14 r14 + + + + Decoded bits 16 ....

Let (a0 . (1) (2) (3) 2m−1 − (1 − p)2 m −1 {(1 − 2−m )(1 − p) − 1} (4) m −1 {(1 − (1 − p)(1 − 2−m )}. 4) code with minimum distance 4. a3 .3 From (4. and 1 − (1 − p) · (1 − 2−m ) ≥ 0. a2 . the probability of an undetected error for a Hamming code is Pu (E) = 2−m {1 + (2m − 1)(1 − 2p)2 = 2−m + (1 − 2−m )(1 − 2p)2 Note that (1 − p)2 ≥ 1 − 2p. and (1 − 2−m ) ≥ 0.6 The generator matrix for the 1st-order RM code RM(1.3) is     1 1 1 1 1 1 1   0 0 0 1 1 1 1    0 1 1 0 0 1 1    1 0 1 0 1 0 1  v0   1     v3   0    G= =  v   0  2      0 v1 (1) The code generated by this matrix is a (8. a1 ) 17 .4.3). Clearly 0 ≤ (1 − p) · (1 − 2−m ) < 1. Since (1 − p)2 m −1 (5) ≥ 0. 4. Using (2) and (3) in (1). we obtain the following inequality: Pu (E) ≤ 2−m + (1 − 2−m ) (1 − p)2 = 2−m + (1 − p)2 = 2−m − (1 − p)2 m −1 m−1 m−1 } − (1 − p)2 m −1 m −1 − (1 − p)2 . it follows from (4) and (5) that Pu (E) ≤ 2−m . Note that 0 ≤ 1 − p ≤ 1 and 0 ≤ 1 − 2−m < 1.

b3 = a0 + a2 + a1 .be the message to be encoded. r3 . From (3). b4 = a0 + a3 . a2 = b0 + b2 = b1 + b3 = b4 + b6 = b5 + b7 . b6 = a0 + a3 + a2 . r2 . Let r = (r0 . r7 ) be the received vector. b3 . a0 = b0 = b1 + a1 = b2 + a2 = b3 + a2 + a1 = b4 + a3 = b5 + a3 + a1 = b6 + a3 + a2 = b7 + a3 + a2 + a1 . r6 . b4 . r4 . b6 . b5 = a0 + a3 + a1 . a3 = b0 + b4 = b1 + b5 = b2 + b6 = b3 + b7 . r1 . b2 . From (1) and (2). b1 . we find that b0 = a0 . b1 = a0 + a1 . b5 . b2 = a0 + a2 . b7 ) = a0 v0 + a3 v3 + a2 v2 + a1 v1 . b7 = a0 + a3 + a2 + a1 . Then its corresponding codeword is b = (b0 . a2 and a3 are: (a1 ) A 1 = r0 + r1 A 2 = r2 + r3 A 3 = r4 + r5 A 4 = r6 + r7 (0) (0) (0) (0) (0) (0) (2) (3) (a2 ) B1 = r0 + r2 B2 = r1 + r3 B3 = r4 + r6 B4 = r5 + r7 (0) (0) (0) (0) (a3 ) C1 = r0 + r4 C2 = r1 + r5 C3 = r2 + r6 C4 = r3 + r7 (0) (0) 18 . r5 . we find that a1 = b0 + b1 = b2 + b3 = b4 + b5 = b6 + b7 . The check-sum for decoding a1 .

Based on these check-sums. r6 . r5 . X1 + X3 . . 1 + X2 + X3 .14 RM (1. . (2) B1 = 0. (0) (0) (0) (0) (0) (0) (0) (0) (0) (0) (0) (0) (1) (1) (1) (1) (1) (1) (1) (1) 4. For decoding (01000101). (3) C1 = 0. . 1 + X1 + X3 . . we decode a0 to 0. 1 + X2 .15 The RM(r. 0 and 0. Therefore. a1 . C4 = 1. respectively. B4 = 0. r7 ). X2 + X3 . C2 = 0. B2 = 1. A4 = 1. To decode a0 . a2 and a3 are: (1) A1 = 1. A3 = 1. Xm−1 ) ∈ P(r − 1. . the four check-sum for decoding a1 . the decoded message is (0001). 1. . r3 . m − 1) codes are given as follows (from (4. X2 . m − 1)}. 1 + X1 . X1 + X2 . we form r(1) = r − a1 v1 − a2 v2 − a3 v3 = (r0 . r4 . Xm−1 ) ∈ P(r. r2 . Then 19 . B3 = 0. a2 and a3 are decoded as 1. 1 + X1 + X2 . X3 . Then a0 is equal to the value taken by the majority of the bits in r(1) . A2 = 0. 3) = { 0. X1 . r1 . m − 1)}. RM (r − 1. we form r(1) = (01000101) − a1 v1 − a2 v2 − a3 v3 = (00010000). m − 1) = {v(f ) : f (X1 .After decoding a1 . . X2 .38)): RM (r. 1 + X1 + X2 + X3 }. X1 + X2 + X3 . C3 = 0. 1 + X3 . From the bits of r(1) . m − 1) = {v(g) : g(X1 . 4. . X2 . m − 1) and RM(r − 1. a2 and a3 .

. 20 . . m − 1) and g ∈ P(r − 1.RM(r. m − 1)}. X2 . . . Xm−1 ) + Xm g(X1 . X2 . Xm−1 ) with f ∈ P(r. . m)={v(h) : h = f (X1 . . . .

However. Therefore 1 + X + · · · + X n−2 + X n−1 is a code polynomial. Note that g(X) and X i are relatively prime.6 (a) A polynomial over GF(2) with odd number of terms is not divisible by X + 1. we note that no X i is divisible by g(X). g(X) must divide the polynomial X n−1 + X n−2 + · · · + X + 1. Then X n (X −n + 1) = X n g(X −1 )h(X −1 ) 21 . (b) The polynomial X n + 1 can be factored as follows: X n + 1 = (X + 1)(X n−1 + X n−2 + · · · + X + 1) Since g(X) divides X n + 1 and since g(X) does not have X + 1 as a factor. Now. the corresponding code vector consists of all 1 s. v(X) = X i + X j with 0 ≤ i < j < n. suppose that there is a code word v(X) of weight 2. Hence our hypothesis that there exists a code vector of weight 2 is invalid. 5. This contradicts the fact that n is the smallest integer such that g(X) divides X n + 1. Therefore. Therefore. it must be divisible by g(X). j − i < n. (c) First. Since v(X) is a code word. the code contains no code vectors of odd weight. Since g(X) and X i are relatively prime. Put v(X) into the following form: v(X) = X i (1 + X j−i ). hence it can not be divisible by g(X) if g(X) has (X + 1) as a factor. g(X) must divide the polynomial X j−i +1.7 (a) Note that X n + 1 = g(X)h(X).Chapter 5 5. no code word with weight one. Hence. This code word must be of the form. the code has a minimum weight at least 3.

we see that the reciprocal v∗ (X) of a code polynomial in C is a code polynomial in C ∗ .8 Let C1 be the cyclic code generated by (X + 1)g(X). a(X) and g(X) respectively. Since v∗ (X) and v(X) have the same weight. v∗ (X) = a∗ (X)g∗ (X). We know that C1 is a subcode of C and C1 consists all the even-weight code vectors of C as all its code vectors.e. X k−1 a(X −1 ) and X n−k g(X −1 ) are simply the reciprocals of v(X).1 + X n = X n−k g(X −1 ) X k h(X −1 ) = g∗ (X)h∗ (X). Thus. we obtain X n−1 v(X −1 ) = X k−1 a(X −1 ) X n−k g(X −1 ) Note that X n−1 v(X −1 ). Replacing X by X −1 and multiplying both sides of above equality by X n−1 . v(X) = a(X)g(X). Similarly. 5. g∗ (X) generates an (n. Let v(X) = v0 + v1 X + · · · + vn−1 X n−1 be a code polynomial in C.. A2j z 2j (1) Consider the sum A(z) + A(−z) = n n Ai z + i=0 i=0 i Ai (−z)i 22 . k) cyclic code. (1) From (1). C ∗ and C have the same weight distribution. We see that g∗ (X) is factor of X n + 1. we can show the reciprocal of a code polynomial in C ∗ is a code polynomial in C. where h∗ (X) is the reciprocal of h(X). k) cyclic codes generated by g(X) and g∗ (X) respectively. Thus the weight enumerator A1 (z) of C1 should consists of only the even-power terms of A(z) = Hence A1 (z) = j=0 n/2 n i=0 Ai z i . i. Therefore. Then v(X) must be a multiple of g(X). (b) Let C and C ∗ be two (n.

= sum_{i=0}^{n} A_i [z^i + (-z)^i].

We see that z^i + (-z)^i = 0 if i is odd and that z^i + (-z)^i = 2z^i if i is even. Hence

A(z) + A(-z) = sum_{j=0}^{n/2} 2 A_{2j} z^{2j}.   (2)

From (1) and (2), we obtain A1(z) = (1/2)[A(z) + A(-z)].

5.10 Let e1(X) = X^i + X^{i+1} and e2(X) = X^j + X^{j+1} be two different double-adjacent-error patterns such that i < j. Suppose that e1(X) and e2(X) are in the same coset. Then e1(X) + e2(X) should be a code polynomial and must be divisible by g(X) = (X + 1)p(X). Note that
e1(X) + e2(X) = X^i (X + 1) + X^j (X + 1) = (X + 1) X^i (X^{j-i} + 1).
Since g(X) divides e1(X) + e2(X), p(X) should divide X^i (X^{j-i} + 1). However p(X) and X^i are relatively prime; therefore p(X) must divide X^{j-i} + 1. This is not possible, since j - i < 2^m - 1 and p(X) is a primitive polynomial of degree m (the smallest integer n such that p(X) divides X^n + 1 is 2^m - 1). Thus e1(X) and e2(X) cannot be in the same coset.

5.12 Note that e^{(i)}(X) is the remainder resulting from dividing X^i e(X) by X^n + 1. Thus
X^i e(X) = a(X)(X^n + 1) + e^{(i)}(X).   (1)
Note that g(X) divides X^n + 1. From (1), we see that if e(X) is not divisible by g(X), then e^{(i)}(X) is not divisible by g(X). Therefore, if e(X) is detectable, e^{(i)}(X) is also detectable.
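The shift-invariance of detectability shown in 5.12 is easy to confirm exhaustively. A sketch with GF(2) polynomials as integers; the (7,4) code with g(X) = 1 + X + X^3 is an assumed example:

```python
def gf2_mod(a: int, m: int) -> int:
    # polynomial remainder over GF(2)
    while a.bit_length() >= m.bit_length():
        a ^= m << (a.bit_length() - m.bit_length())
    return a

n, g = 7, 0b1011                  # g(X) = 1 + X + X^3 divides X^7 + 1
xn1 = (1 << n) | 1                # X^n + 1

e = 0b110                         # e(X) = X + X^2, not divisible by g(X)
assert gf2_mod(e, g) != 0

# every cyclic shift e^(i)(X) = X^i e(X) mod (X^n + 1) is also detectable
for i in range(n):
    ei = gf2_mod(e << i, xn1)
    assert gf2_mod(ei, g) != 0
```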

5.14 Suppose that l does not divide n. Then n = k·l + r with 0 < r < l. Note that
v^{(n)}(X) = v^{(k·l + r)}(X) = v(X).   (1)
Since v^{(l)}(X) = v(X),
v^{(k·l)}(X) = v(X).   (2)
From (1) and (2), we have v^{(r)}(X) = v(X). This is not possible, since 0 < r < l and l is the smallest positive integer such that v^{(l)}(X) = v(X). Therefore our hypothesis that l does not divide n is invalid, and l must divide n.

5.17 Let n be the order of beta. Then beta^n = 1, and beta is a root of X^n + 1. It follows from Theorem 2.14 that the minimal polynomial phi(X) of beta is a factor of X^n + 1. Hence phi(X) generates a cyclic code of length n.

5.18 Let n1 be the order of beta1 and n2 be the order of beta2. Let n be the least common multiple of n1 and n2, i.e. n = LCM(n1, n2). Clearly, beta1 and beta2 are roots of X^n + 1. It follows from Theorem 2.14 that phi1(X) and phi2(X) are factors of X^n + 1. Since phi1(X) and phi2(X) are relatively prime, g(X) = phi1(X)·phi2(X) divides X^n + 1. Hence g(X) = phi1(X)·phi2(X) generates a cyclic code of length n = LCM(n1, n2).

5.19 Since every code polynomial v(X) is a multiple of the generator polynomial p(X), every root of p(X) is a root of v(X). Thus v(X) has alpha and its conjugates as roots. Conversely, suppose v(X) is a binary polynomial of degree 2^m - 2 or less that has alpha as a root. It follows from Theorem 2.14 that v(X) is divisible by the minimal polynomial p(X) of alpha. Hence v(X) is a code polynomial in the Hamming code generated by p(X).

5.20 Let v(X) be a code polynomial in both C1 and C2. Then v(X) is divisible by both g1(X) and g2(X). Hence v(X) is divisible by the least common multiple g(X) of g1(X) and g2(X), i.e. v(X) is a multiple of g(X) = LCM(g1(X), g2(X)). Also we note that g(X) is a factor of X^n + 1. Conversely, any polynomial of degree n - 1 or less that is a multiple of g(X) is divisible by both g1(X) and g2(X); hence it is in both C1 and C2. Thus the code

polynomials common to C1 and C2 form a cyclic code of length n whose generator polynomial is g(X) = LCM(g1(X), g2(X)). The code C3 generated by g(X) has minimum distance d3 >= max(d1, d2).

5.21 See Problem 4.3.

5.22 (a) First, we note that X^{2^m - 1} + 1 = p*(X)h*(X). Since the roots of X^{2^m - 1} + 1 are the 2^m - 1 nonzero elements in GF(2^m), which are all distinct, p*(X) and h*(X) are relatively prime. Suppose that v^{(i)}(X) = v(X) for some i with 0 < i < 2^m - 1. It follows from (5.1) that
X^i v(X) = a(X)(X^{2^m - 1} + 1) + v^{(i)}(X) = a(X)(X^{2^m - 1} + 1) + v(X).
Rearranging the above equality, we have
(X^i + 1) v(X) = a(X)(X^{2^m - 1} + 1).
Since p(X) divides X^{2^m - 1} + 1, it must divide (X^i + 1)v(X). However p(X) and v(X) are relatively prime: v(X) cannot be divisible by p(X) (otherwise v(X) would be divisible by p*(X)h*(X) = X^{2^m - 1} + 1 and would have degree at least 2^m - 1). Hence p(X) must divide X^i + 1. This is not possible, since 0 < i < 2^m - 1 and p(X) is a primitive polynomial (the smallest positive integer n such that p(X) divides X^n + 1 is n = 2^m - 1). Therefore our hypothesis that v^{(i)}(X) = v(X) for 0 < i < 2^m - 1 is invalid. Consequently, a code polynomial v(X) and its 2^m - 2 cyclic shifts form all the 2^m - 1 nonzero code polynomials in Cd. These 2^m - 1 nonzero code polynomials have the same weight, say w.

(b) From part (a), we arrange the 2^m code words in Cd as a 2^m x (2^m - 1) array. It follows from Problem 3.6(b) that every column in this array has exactly 2^{m-1} nonzero components. Thus the total number of nonzero components in the array is 2^{m-1}·(2^m - 1). The total number of nonzero components in the code words of Cd is also w·(2^m - 1). Equating w·(2^m - 1) to 2^{m-1}·(2^m - 1), we have w = 2^{m-1}.
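The constant-weight conclusion of 5.22 can be checked directly for m = 3. A sketch, assuming the maximum-length code of length 7 generated by g(X) = (X^7 + 1)/p(X) with p(X) = 1 + X + X^3, which gives g(X) = 1 + X + X^2 + X^4:

```python
def gf2_mul(a: int, b: int) -> int:
    # carry-less product of two GF(2) polynomials
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

m = 3
g = 0b10111                       # g(X) = 1 + X + X^2 + X^4 = (X^7 + 1)/(1 + X + X^3)
weights = set()
for a in range(1, 1 << m):        # all nonzero messages a(X), degree < 3
    v = gf2_mul(a, g)
    weights.add(bin(v).count("1"))
assert weights == {4}             # every nonzero codeword has weight 2^(m-1) = 4
```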

5.25 (a) Any error pattern of double errors must be of the form
e(X) = X^i + X^j,
where j > i. If the two errors are not confined to n - k = 10 consecutive positions, we must have
j - i + 1 > 10,   15 - (j - i) + 1 > 10.
Simplifying the above inequalities, we obtain
j - i > 9,   j - i < 6.
This is impossible. Therefore any double errors are confined to 10 consecutive positions and can be trapped.

(b) An error pattern of triple errors must be of the form
e(X) = X^i + X^j + X^k,
where 0 <= i < j < k <= 14. For three errors not confined to 10 consecutive positions, we must have
k - i > 9,   j - i < 6,   k - j < 6.
If we fix i, the only solutions for j and k are j = 5 + i and k = 10 + i. Hence, if these three errors cannot be trapped, the error pattern must be of the following form:
e(X) = X^i + X^{5+i} + X^{10+i}

for 0 <= i < 5. Therefore, only 5 error patterns of triple errors cannot be trapped.

5.26 (b) Consider a double-error pattern
e(X) = X^i + X^j,
where 0 <= i < j < 23. If these two errors are not confined to 11 consecutive positions, we must have
j - i + 1 > 11,   23 - (j - i - 1) > 11.
From the above inequalities, we obtain 10 < j - i < 13. For a fixed i, j has two possible solutions, j = 11 + i and j = 12 + i. Hence, a double-error pattern that cannot be trapped must be of one of the following two forms:
e1(X) = X^i + X^{11+i},   e2(X) = X^i + X^{12+i}.
There are a total of 23 error patterns of double errors that cannot be trapped.

5.27 The coset leader weight distribution is
alpha_0 = 1, alpha_1 = 23, alpha_2 = C(23,2), alpha_3 = C(23,3), alpha_4 = alpha_5 = ... = alpha_23 = 0.
The probability of a correct decoding is
P(C) = (1 - p)^23 + 23 p (1 - p)^22 + C(23,2) p^2 (1 - p)^21

+ C(23,3) p^3 (1 - p)^20.
The probability of a decoding error is P(E) = 1 - P(C).

5.29 (a) Consider two single-error patterns, e1(X) = X^i and e2(X) = X^j, where j > i. Suppose that these two error patterns are in the same coset. Then X^i + X^j must be divisible by g(X) = (X^3 + 1)p(X). This implies that X^{j-i} + 1 must be divisible by p(X). This is impossible, since j - i < n and n is the smallest positive integer such that p(X) divides X^n + 1. Therefore no two single-error patterns can be in the same coset, and all single-error patterns can be used as coset leaders.

Now consider a single-error pattern e1(X) = X^i and a double-adjacent-error pattern e2(X) = X^j + X^{j+1}. Suppose that e1(X) and e2(X) are in the same coset. Then X^i + X^j + X^{j+1} must be divisible by g(X) = (X^3 + 1)p(X). This is not possible, since g(X) has X + 1 as a factor, while X^i + X^j + X^{j+1} does not have X + 1 as a factor. Hence no single-error pattern and double-adjacent-error pattern can be in the same coset.

Next consider a single-error pattern X^i and a triple-adjacent-error pattern X^j + X^{j+1} + X^{j+2}. Suppose that these two error patterns are in the same coset. Then X^i + X^j + X^{j+1} + X^{j+2} must be divisible by (X^3 + 1)p(X). But X^i + X^j + X^{j+1} + X^{j+2} = X^i + X^j(1 + X + X^2) is not divisible by X^3 + 1 = (X + 1)(X^2 + X + 1). Therefore no single-error pattern and triple-adjacent-error pattern can be in the same coset.

Consider two double-adjacent-error patterns, X^i + X^{i+1} and X^j + X^{j+1}, where j > i. Suppose that these two error patterns are in the same coset. Then X^i + X^{i+1} + X^j + X^{j+1} must be divisible by (X^3 + 1)p(X).
Note that X^i + X^{i+1} + X^j + X^{j+1} = X^i (X + 1)(X^{j-i} + 1). We see that for X^i (X + 1)(X^{j-i} + 1) to be divisible by p(X), X^{j-i} + 1 must be divisible by p(X). This is again not possible, since j - i < n. Hence no two double-adjacent-error patterns can be in the same coset.

Now we consider a double-adjacent-error pattern X^i + X^{i+1} and a triple-adjacent-error pattern

X^j + X^{j+1} + X^{j+2}, where j > i. If they are in the same coset, then
X^i + X^{i+1} + X^j + X^{j+1} + X^{j+2} = X^i (X + 1) + X^j (X^2 + X + 1)
must be divisible by (X^3 + 1)p(X). This is not possible, since X^i + X^{i+1} + X^j + X^{j+1} + X^{j+2} does not have X + 1 as a factor, but X^3 + 1 has X + 1 as a factor. Hence a double-adjacent-error pattern and a triple-adjacent-error pattern cannot be in the same coset.

Finally, consider two triple-adjacent-error patterns, X^i + X^{i+1} + X^{i+2} and X^j + X^{j+1} + X^{j+2}. Suppose that these two error patterns are in the same coset. Then their sum X^i (X^2 + X + 1)(1 + X^{j-i}) must be divisible by (X^3 + 1)p(X), hence by p(X). Note that the degree of p(X) is 3 or greater; hence p(X) and X^2 + X + 1 are relatively prime, and p(X) must divide X^{j-i} + 1. Again this is not possible. Hence no two triple-adjacent-error patterns can be in the same coset.

Summarizing the above results, we see that all the single-, double-adjacent-, and triple-adjacent-error patterns can be used as coset leaders.
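The counting argument of Problem 5.26(b) can be verified by brute force. A sketch, assuming n = 23 and an error-trapping window of n - k = 11 consecutive (cyclic) positions as in that problem:

```python
from itertools import combinations

n, window = 23, 11

def trappable(i: int, j: int) -> bool:
    # a double error is trapped if both errors fit in some cyclic
    # window of `window` consecutive positions
    d = (j - i) % n
    return min(d, n - d) <= window - 1

untrappable = [(i, j) for i, j in combinations(range(n), 2)
               if not trappable(i, j)]
assert len(untrappable) == 23     # matches the count in Problem 5.26(b)
```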

Chapter 6

6.1 (a) The elements beta, beta^2 and beta^4 have the same minimal polynomial phi1(X). From Table 2.9, we find that
phi1(X) = 1 + X^3 + X^4.
The minimal polynomial of beta^3 = alpha^21 = alpha^6 is
phi3(X) = 1 + X + X^2 + X^3 + X^4.
Thus
g0(X) = LCM(phi1(X), phi2(X), phi3(X), phi4(X)) = phi1(X)phi3(X)
      = (1 + X^3 + X^4)(1 + X + X^2 + X^3 + X^4) = 1 + X + X^2 + X^4 + X^8.

(b)
H = [ 1  beta    beta^2  beta^3  ...  beta^14 ]
    [ 1  beta^3  beta^6  beta^9  ...  beta^42 ]
Expanding each power of beta into its binary 4-tuple representation, we obtain

H = [ 1 1 1 0 1 0 1 1 0 0 1 0 0 0 1 ]
    [ 0 1 0 0 0 1 1 1 1 0 1 0 1 1 0 ]
    [ 0 0 0 1 1 1 1 0 1 0 1 1 0 0 1 ]
    [ 0 1 1 1 1 0 1 0 1 1 0 0 1 0 0 ]
    [ 1 0 1 0 0 1 0 1 0 0 1 0 1 0 0 ]
    [ 0 0 1 0 1 0 0 1 0 1 0 0 1 0 1 ]
    [ 0 1 1 0 0 0 1 1 0 0 0 1 1 0 0 ]
    [ 0 1 1 1 1 0 1 1 1 1 0 1 1 1 1 ]

(c) The reciprocal of g(X) in Example 6.1 is
X^8 g(X^{-1}) = X^8 (1 + X^{-4} + X^{-6} + X^{-7} + X^{-8}) = X^8 + X^4 + X^2 + X + 1 = g0(X).

6.2 The table for GF(2^5) constructed with p(X) = 1 + X^2 + X^5 is given in Table P.6.2(a). The minimal polynomials of the elements in GF(2^5) are given in Table P.6.2(b). The generator polynomials of all the binary BCH codes of length 31 are given in Table P.6.2(c).

Table P.6.2(a)  Galois field GF(2^5) with p(alpha) = 1 + alpha^2 + alpha^5 = 0

0                         (0 0 0 0 0)
1                         (1 0 0 0 0)
alpha                     (0 1 0 0 0)
alpha^2                   (0 0 1 0 0)
alpha^3                   (0 0 0 1 0)
alpha^4                   (0 0 0 0 1)
alpha^5  = 1 + alpha^2    (1 0 1 0 0)

Table P.6.2(a)  Continued

alpha^6  = alpha + alpha^3                                (0 1 0 1 0)
alpha^7  = alpha^2 + alpha^4                              (0 0 1 0 1)
alpha^8  = 1 + alpha^2 + alpha^3                          (1 0 1 1 0)
alpha^9  = alpha + alpha^3 + alpha^4                      (0 1 0 1 1)
alpha^10 = 1 + alpha^4                                    (1 0 0 0 1)
alpha^11 = 1 + alpha + alpha^2                            (1 1 1 0 0)
alpha^12 = alpha + alpha^2 + alpha^3                      (0 1 1 1 0)
alpha^13 = alpha^2 + alpha^3 + alpha^4                    (0 0 1 1 1)
alpha^14 = 1 + alpha^2 + alpha^3 + alpha^4                (1 0 1 1 1)
alpha^15 = 1 + alpha + alpha^2 + alpha^3 + alpha^4        (1 1 1 1 1)
alpha^16 = 1 + alpha + alpha^3 + alpha^4                  (1 1 0 1 1)
alpha^17 = 1 + alpha + alpha^4                            (1 1 0 0 1)
alpha^18 = 1 + alpha                                      (1 1 0 0 0)
alpha^19 = alpha + alpha^2                                (0 1 1 0 0)
alpha^20 = alpha^2 + alpha^3                              (0 0 1 1 0)
alpha^21 = alpha^3 + alpha^4                              (0 0 0 1 1)
alpha^22 = 1 + alpha^2 + alpha^4                          (1 0 1 0 1)
alpha^23 = 1 + alpha + alpha^2 + alpha^3                  (1 1 1 1 0)
alpha^24 = alpha + alpha^2 + alpha^3 + alpha^4            (0 1 1 1 1)
alpha^25 = 1 + alpha^3 + alpha^4                          (1 0 0 1 1)
alpha^26 = 1 + alpha + alpha^2 + alpha^4                  (1 1 1 0 1)
alpha^27 = 1 + alpha + alpha^3                            (1 1 0 1 0)
alpha^28 = alpha + alpha^2 + alpha^4                      (0 1 1 0 1)
alpha^29 = 1 + alpha^3                                    (1 0 0 1 0)
alpha^30 = alpha + alpha^4                                (0 1 0 0 1)
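Table P.6.2(a) can be regenerated mechanically. A sketch, building GF(2^5) from p(alpha) = 1 + alpha^2 + alpha^5, with each element a 5-bit integer whose bit i is the coefficient of alpha^i:

```python
def build_gf32():
    # powers of the primitive element alpha in GF(2^5), p(alpha) = 1 + alpha^2 + alpha^5 = 0
    powers = []
    x = 1                      # alpha^0
    for _ in range(31):
        powers.append(x)
        x <<= 1                # multiply by alpha
        if x & 0b100000:
            x ^= 0b100101      # reduce: alpha^5 = 1 + alpha^2
    return powers

gf = build_gf32()
tup = lambda v: tuple((v >> i) & 1 for i in range(5))
assert tup(gf[6]) == (0, 1, 0, 1, 0)    # alpha^6  = alpha + alpha^3
assert tup(gf[30]) == (0, 1, 0, 0, 1)   # alpha^30 = alpha + alpha^4
assert len(set(gf)) == 31               # all 31 nonzero elements appear exactly once
```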

Table P.6.2(b)

Conjugate roots                                              phi_i(X)
1                                                            1 + X
alpha, alpha^2, alpha^4, alpha^8, alpha^16                   1 + X^2 + X^5
alpha^3, alpha^6, alpha^12, alpha^24, alpha^17               1 + X^2 + X^3 + X^4 + X^5
alpha^5, alpha^10, alpha^20, alpha^9, alpha^18               1 + X + X^2 + X^4 + X^5
alpha^7, alpha^14, alpha^28, alpha^25, alpha^19              1 + X + X^2 + X^3 + X^5
alpha^11, alpha^22, alpha^13, alpha^26, alpha^21             1 + X + X^3 + X^4 + X^5
alpha^15, alpha^30, alpha^29, alpha^27, alpha^23             1 + X^3 + X^5

Table P.6.2(c)

n    k    t    g(X)
31   26   1    g1(X) = 1 + X^2 + X^5
31   21   2    g2(X) = phi1(X)phi3(X)
31   16   3    g3(X) = phi1(X)phi3(X)phi5(X)
31   11   5    g4(X) = phi1(X)phi3(X)phi5(X)phi7(X)
31    6   7    g5(X) = phi1(X)phi3(X)phi5(X)phi7(X)phi11(X)

6.3 (a) Use the table for GF(2^5) constructed in Problem 6.2. The syndrome components of r1(X) = X^7 + X^30 are:
S1 = r1(alpha) = alpha^7 + alpha^30 = alpha^19,
S2 = r1(alpha^2) = alpha^14 + alpha^29 = alpha^7,
S3 = r1(alpha^3) = alpha^21 + alpha^28 = alpha^12,

S4 = r1(alpha^4) = alpha^28 + alpha^27 = alpha^14.
The iterative procedure for finding the error-location polynomial is shown in Table P.6.3(a).

Table P.6.3(a)
mu     sigma^(mu)(X)                        d_mu        l_mu    2mu - l_mu
-1/2   1                                    1           0       -1
0      1                                    alpha^19    0       0
1      1 + alpha^19 X                       alpha^25    1       1   (take rho = -1/2)
2      1 + alpha^19 X + alpha^6 X^2         --          2       2   (take rho = 0)

Hence sigma(X) = 1 + alpha^19 X + alpha^6 X^2. Substituting the nonzero elements of GF(2^5) into sigma(X), we find that sigma(X) has alpha and alpha^24 as roots. Hence the error-location numbers are alpha^{-1} = alpha^30 and alpha^{-24} = alpha^7. As a result, the error polynomial is
e(X) = X^7 + X^30.
The decoder decodes r1(X) into r1(X) + e(X) = 0.

(b) Now we consider the decoding of r2(X) = 1 + X^17 + X^28. The syndrome components of r2(X) are:
S1 = r2(alpha) = alpha^2,
S2 = S1^2 = alpha^4,
S3 = r2(alpha^3) = alpha^21,
S4 = S2^2 = alpha^8.
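The syndrome values used above can be recomputed directly. A sketch over GF(2^5) as constructed in Table P.6.2(a), for r1(X) = X^7 + X^30:

```python
# GF(2^5) with p(alpha) = 1 + alpha^2 + alpha^5: log/antilog tables
powers, x = [], 1
for _ in range(31):
    powers.append(x)
    x <<= 1
    if x & 0b100000:
        x ^= 0b100101          # reduce: alpha^5 = 1 + alpha^2

log = {v: e for e, v in enumerate(powers)}

def r1(e: int) -> int:
    # evaluate r1(X) = X^7 + X^30 at X = alpha^e (addition in GF(2^m) is XOR)
    return powers[(7 * e) % 31] ^ powers[(30 * e) % 31]

syndromes = [log[r1(e)] for e in (1, 2, 3, 4)]
assert syndromes == [19, 7, 12, 14]   # S1 = alpha^19, S2 = alpha^7, S3 = alpha^12, S4 = alpha^14
```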

The error-location polynomial sigma(X) is found by filling Table P.6.3(b).

Table P.6.3(b)
mu     sigma^(mu)(X)                        d_mu        l_mu    2mu - l_mu
-1/2   1                                    1           0       -1
0      1                                    alpha^2     0       0
1      1 + alpha^2 X                        alpha^30    1       1   (take rho = -1/2)
2      1 + alpha^2 X + alpha^28 X^2         --          2       2   (take rho = 0)

The estimated error-location polynomial is
sigma(X) = 1 + alpha^2 X + alpha^28 X^2.
This polynomial does not have roots in GF(2^5); hence r2(X) cannot be decoded and must contain more than two errors.

6.4 Let n = (2t + 1)lambda. Then
X^n + 1 = (X^lambda + 1)(X^{2t·lambda} + X^{(2t-1)lambda} + ... + X^lambda + 1).
The roots of X^lambda + 1 are 1, alpha^{2t+1}, alpha^{2(2t+1)}, ..., alpha^{(lambda-1)(2t+1)}. Hence alpha, alpha^2, ..., alpha^{2t} are roots of the polynomial
u(X) = 1 + X^lambda + X^{2·lambda} + ... + X^{(2t-1)lambda} + X^{2t·lambda}.
This implies that u(X) is a code polynomial, which has weight 2t + 1. Thus the code has minimum distance exactly 2t + 1.

6.5 Consider the Galois field GF(2^{2m}). Note that 2^{2m} - 1 = (2^m - 1)·(2^m + 1). Let alpha be a primitive element in GF(2^{2m}). Then beta = alpha^{2^m - 1} is an element of order 2^m + 1. The 2^m + 1 elements 1, beta, beta^2, ..., beta^{2^m} are all the roots of X^{2^m + 1} + 1. Let psi_i(X) be the minimal polynomial of beta^i.

Then a t-error-correcting non-primitive BCH code of length n = 2^m + 1 is generated by
g(X) = LCM{psi_1(X), psi_2(X), ..., psi_2t(X)}.

6.10 Use Tables 6.2 and 6.3. The minimal polynomial for beta^2 = alpha^6 and beta^4 = alpha^12 is
psi_2(X) = 1 + X + X^2 + X^4 + X^6.
The minimal polynomial for beta^3 = alpha^9 is
psi_3(X) = 1 + X^2 + X^3.
The minimal polynomial for beta^5 = alpha^15 is
psi_5(X) = 1 + X^2 + X^4 + X^5 + X^6.
Hence
g(X) = psi_2(X)psi_3(X)psi_5(X).
The orders of beta^2, beta^3 and beta^5 are 21, 7 and 21, respectively. Thus the length is n = LCM(21, 7, 21) = 21, and the code is a double-error-correcting (21, 6) BCH code.

6.11 (a) Let u(X) be a code polynomial and u*(X) = X^{n-1} u(X^{-1}) be the reciprocal of u(X). A cyclic code is said to be reversible if, whenever u(X) is a code polynomial, u*(X) is also a code polynomial. Consider
u*(beta^i) = beta^{(n-1)i} u(beta^{-i}).
Since u(beta^{-i}) = 0 for -t <= i <= t, we see that u*(X) has beta^{-t}, ..., beta^{-1}, beta^0, beta^1, ..., beta^t as roots

and is a multiple of the generator polynomial g(X). Therefore u*(X) is a code polynomial, and the code is reversible.

(b) If t is odd, t + 1 is even. Hence beta^{t+1} is the conjugate of beta^{(t+1)/2} and beta^{-(t+1)} is the conjugate of beta^{-(t+1)/2}. Thus beta^{t+1} and beta^{-(t+1)} are also roots of the generator polynomial. Consequently the generator polynomial has 2t + 3 consecutive powers of beta as roots: beta^{-(t+1)}, ..., beta^{-1}, beta^0, beta^1, ..., beta^{t+1}. It follows from the BCH bound that the code has minimum distance at least 2t + 4.

Chapter 7

7.1 The generator polynomial of the triple-error-correcting RS code over GF(2^5) is
g(X) = (X + alpha)(X + alpha^2)(X + alpha^3)(X + alpha^4)(X + alpha^5)(X + alpha^6)
     = alpha^21 + alpha^24 X + alpha^16 X^2 + alpha^24 X^3 + alpha^9 X^4 + alpha^10 X^5 + X^6.

7.2 The generator polynomial of the double-error-correcting RS code over GF(2^5) is
g(X) = (X + alpha)(X + alpha^2)(X + alpha^3)(X + alpha^4)
     = alpha^10 + alpha^29 X + alpha^19 X^2 + alpha^24 X^3 + X^4.

7.4 The syndrome components of the received polynomial are:
S1 = r(alpha) = alpha^7 + alpha^2 + alpha = alpha^13,
S2 = r(alpha^2) = alpha^10 + alpha^10 + alpha^14 = alpha^14,
S3 = r(alpha^3) = alpha^13 + alpha^3 + alpha^12 = alpha^9,
S4 = r(alpha^4) = alpha + alpha^11 + alpha^10 = alpha^7,
S5 = r(alpha^5) = alpha^4 + alpha^4 + alpha^8 = alpha^8,
S6 = r(alpha^6) = alpha^7 + alpha^12 + alpha^6 = alpha^3.
The iterative procedure for finding the error-location polynomial is shown in Table P.7.4. The error-location polynomial is
sigma(X) = 1 + alpha^9 X^3.
The roots of this polynomial are alpha^2, alpha^7 and alpha^12. Hence the error-location numbers are alpha^3, alpha^8 and alpha^13. From the syndrome components of the received polynomial and the coefficients of the error-location polynomial, we find the error-value evaluator.

Table P.7.4
mu   sigma^(mu)(X)                            d_mu        l_mu   mu - l_mu
-1   1                                        1           0      -1
0    1                                        alpha^13    0      0
1    1 + alpha^13 X                           alpha^10    1      0   (take rho = -1)
2    1 + alpha X                              alpha^7     1      1   (take rho = 0)
3    1 + alpha^13 X + alpha^10 X^2            alpha^9     2      1   (take rho = 1)
4    1 + alpha^14 X + alpha^12 X^2            alpha^8     2      2   (take rho = 2)
5    1 + alpha^9 X^3                          0           3      2   (take rho = 3)
6    1 + alpha^9 X^3                          --          --     --

The error-value evaluator is
Z0(X) = S1 + (S2 + sigma1 S1)X + (S3 + sigma1 S2 + sigma2 S1)X^2
      = alpha^13 + (alpha^14 + 0·alpha^13)X + (alpha^9 + 0·alpha^14 + 0·alpha^13)X^2
      = alpha^13 + alpha^14 X + alpha^9 X^2.
The error values at the positions X^3, X^8, and X^13 are:
e3  = -Z0(alpha^{-3})/sigma'(alpha^{-3})   = (alpha^13 + alpha^11 + alpha^3) / [alpha^3 (1 + alpha^8 alpha^{-3})(1 + alpha^13 alpha^{-3})]  = alpha^7/alpha^3  = alpha^4,
e8  = -Z0(alpha^{-8})/sigma'(alpha^{-8})   = (alpha^13 + alpha^6 + alpha^8)  / [alpha^8 (1 + alpha^3 alpha^{-8})(1 + alpha^13 alpha^{-8})]  = alpha^2/alpha^8  = alpha^9,
e13 = -Z0(alpha^{-13})/sigma'(alpha^{-13}) = (alpha^13 + alpha + alpha^13)   / [alpha^13 (1 + alpha^3 alpha^{-13})(1 + alpha^8 alpha^{-13})] = alpha/alpha^13   = alpha^3.
Consequently, the error pattern is
e(X) = alpha^4 X^3 + alpha^9 X^8 + alpha^3 X^13,
and the decoded codeword is the all-zero codeword.

7.5 The syndrome polynomial is
S(X) = alpha^13 + alpha^14 X + alpha^9 X^2 + alpha^7 X^3 + alpha^8 X^4 + alpha^3 X^5.
Table P.7.5 displays the steps of the Euclidean algorithm for finding the error-location and error-value polynomials.

Table P.7.5
i    Z0^(i)(X)                                                       q_i(X)                   sigma_i(X)
-1   X^6                                                             --                       0
0    alpha^13 + alpha^14 X + alpha^9 X^2 + alpha^7 X^3
     + alpha^8 X^4 + alpha^3 X^5                                     --                       1
1    1 + alpha^8 X + alpha^5 X^3 + alpha^2 X^4                       alpha^2 + alpha^12 X     alpha^2 + alpha^12 X
2    alpha + alpha^13 X + alpha^12 X^3                               alpha^12 + alpha X       alpha^3 + alpha X + alpha^13 X^2
3    alpha^7 + alpha^8 X + alpha^3 X^2                               alpha^8 + alpha^5 X      alpha^9 + alpha^3 X^3

The error-location and error-value polynomials are:
sigma(X) = alpha^9 + alpha^3 X^3 = alpha^9 (1 + alpha^9 X^3),
Z0(X) = alpha^7 + alpha^8 X + alpha^3 X^2 = alpha^9 (alpha^13 + alpha^14 X + alpha^9 X^2).
From these polynomials, we find that the error-location numbers are alpha^3, alpha^8, and alpha^13, and the error values are
e3  = -Z0(alpha^{-3})/sigma'(alpha^{-3})   = (alpha^7 + alpha^5 + alpha^12) / [alpha^9 alpha^3 (1 + alpha^8 alpha^{-3})(1 + alpha^13 alpha^{-3})]  = alpha/alpha^12     = alpha^4,
e8  = -Z0(alpha^{-8})/sigma'(alpha^{-8})   = (alpha^7 + 1 + alpha^2)        / [alpha^9 alpha^8 (1 + alpha^3 alpha^{-8})(1 + alpha^13 alpha^{-8})]  = alpha^11/alpha^2   = alpha^9,
e13 = -Z0(alpha^{-13})/sigma'(alpha^{-13}) = (alpha^7 + alpha^10 + alpha^7) / [alpha^9 alpha^13 (1 + alpha^3 alpha^{-13})(1 + alpha^8 alpha^{-13})] = alpha^10/alpha^7   = alpha^3.
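The factorization sigma(X) = alpha^9(1 + alpha^9 X^3) obtained by both algorithms can be sanity-checked by exhaustive root search. A sketch, assuming GF(2^4) is built from the primitive polynomial 1 + X + X^4 (the field these computations take place in):

```python
# GF(2^4) with p(alpha) = 1 + alpha + alpha^4 = 0
powers, x = [], 1
for _ in range(15):
    powers.append(x)
    x <<= 1
    if x & 0b10000:
        x ^= 0b10011            # reduce: alpha^4 = 1 + alpha

# roots of sigma(X) = 1 + alpha^9 X^3: alpha^e is a root iff alpha^(9+3e) = 1
roots = [e for e in range(15) if powers[(9 + 3 * e) % 15] == 1]
assert sorted(roots) == [2, 7, 12]

# the error-location numbers are the inverses of the roots
assert sorted((-e) % 15 for e in roots) == [3, 8, 13]
```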

Hence the error pattern is
e(X) = alpha^4 X^3 + alpha^9 X^8 + alpha^3 X^13,
and the received polynomial is decoded into the all-zero codeword.

7.6 From the received polynomial
r(X) = alpha^2 + alpha^21 X^12 + alpha^7 X^20,
we compute the syndrome components:
S1 = r(alpha)   = alpha^2 + alpha^33 + alpha^27  = alpha^27,
S2 = r(alpha^2) = alpha^2 + alpha^45 + alpha^47  = alpha,
S3 = r(alpha^3) = alpha^2 + alpha^57 + alpha^67  = alpha^28,
S4 = r(alpha^4) = alpha^2 + alpha^69 + alpha^87  = alpha^29,
S5 = r(alpha^5) = alpha^2 + alpha^81 + alpha^107 = alpha^15,
S6 = r(alpha^6) = alpha^2 + alpha^93 + alpha^127 = alpha^8.
Therefore the syndrome polynomial is
S(X) = alpha^27 + alpha X + alpha^28 X^2 + alpha^29 X^3 + alpha^15 X^4 + alpha^8 X^5.
Using the Euclidean algorithm, we find
sigma(X) = alpha^23 X^3 + alpha^9 X + alpha^22,   Z0(X) = alpha^26 X^2 + alpha^6 X + alpha^18,
as shown in the following table. The roots of sigma(X) are 1 = alpha^0, alpha^11 and alpha^19. From these roots, we find the error-location numbers:
beta1 = (alpha^0)^{-1} = alpha^0,   beta2 = (alpha^11)^{-1} = alpha^20,   beta3 = (alpha^19)^{-1} = alpha^12.

i    Z0^(i)(X)                                                        q_i(X)                    sigma_i(X)
-1   X^6                                                              --                        0
0    S(X)                                                             --                        1
1    alpha^5 X^4 + alpha^9 X^3 + alpha^22 X^2 + alpha^11 X
     + alpha^26                                                       alpha^23 X + alpha^30     alpha^23 X + alpha^30
2    alpha^8 X^3 + alpha^4 X + alpha^6                                alpha^3 X + alpha^5       alpha^24 X^2 + alpha^30 X + alpha^10
3    alpha^26 X^2 + alpha^6 X + alpha^18                              alpha^28 X + alpha        alpha^23 X^3 + alpha^9 X + alpha^22

The error-location polynomial and its derivative are:
sigma(X) = alpha^22 (1 + X)(1 + alpha^12 X)(1 + alpha^20 X),
sigma'(X) = alpha^22 (1 + alpha^12 X)(1 + alpha^20 X) + alpha^3 (1 + X)(1 + alpha^20 X) + alpha^11 (1 + X)(1 + alpha^12 X).
The error values at the three error locations are given by:
e0  = -Z0(alpha^0)/sigma'(alpha^0)       = (alpha^26 + alpha^6 + alpha^18)  / [alpha^22 (1 + alpha^12)(1 + alpha^20)] = alpha^24/alpha^22 = alpha^2,
e12 = -Z0(alpha^{-12})/sigma'(alpha^{-12}) = (alpha^2 + alpha^25 + alpha^18) / [alpha^3 (1 + alpha^19)(1 + alpha^8)]   = alpha^24/alpha^3  = alpha^21,
e20 = -Z0(alpha^{-20})/sigma'(alpha^{-20}) = (alpha^17 + alpha^17 + alpha^18)/ [alpha^11 (1 + alpha^11)(1 + alpha^23)] = alpha^18/alpha^11 = alpha^7.
Hence the error pattern is
e(X) = alpha^2 + alpha^21 X^12 + alpha^7 X^20,
and the decoded codeword is v(X) = r(X) - e(X) = 0.
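Generator-polynomial products like the one in Problem 7.2 can be expanded mechanically. A sketch over GF(2^5), using the same field table as Problem 6.2:

```python
# GF(2^5) with p(alpha) = 1 + alpha^2 + alpha^5
powers, x = [], 1
for _ in range(31):
    powers.append(x)
    x <<= 1
    if x & 0b100000:
        x ^= 0b100101

log = {v: e for e, v in enumerate(powers)}

def gf_mul(a, b):
    if a == 0 or b == 0:
        return 0
    return powers[(log[a] + log[b]) % 31]

# expand g(X) = (X + alpha)(X + alpha^2)(X + alpha^3)(X + alpha^4)
g = [1]                    # g[k] = coefficient of X^k
for i in range(1, 5):
    r = powers[i]
    g = ([gf_mul(r, g[0])]
         + [g[k - 1] ^ gf_mul(r, g[k]) for k in range(1, len(g))]
         + [g[-1]])

# g(X) = alpha^10 + alpha^29 X + alpha^19 X^2 + alpha^24 X^3 + X^4
assert [log[c] for c in g] == [10, 29, 19, 24, 0]
```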

7.9 Let g(X) be the generator polynomial of a t-symbol-correcting RS code C over GF(q), where alpha is a primitive element of GF(q) and g(X) has alpha, alpha^2, ..., alpha^{2t} as roots. Since g(X) divides X^{q-1} - 1, then X^{q-1} - 1 = g(X)h(X). The polynomial h(X) has alpha^{2t+1}, ..., alpha^{q-1} as roots and is called the parity polynomial. The dual code Cd of C is generated by the reciprocal of h(X),
h*(X) = X^{q-1-2t} h(X^{-1}).
We see that h*(X) has alpha^{-(2t+1)} = alpha^{q-2t-2}, alpha^{-(2t+2)} = alpha^{q-2t-3}, ..., alpha^{-(q-2)} = alpha, and alpha^{-(q-1)} = 1 as roots. Thus h*(X) has the following consecutive powers of alpha as roots:
1, alpha, alpha^2, ..., alpha^{q-2t-2}.
Hence Cd is a (q - 1, 2t) RS code with minimum distance q - 2t.

7.10 The generator polynomial grs(X) of the RS code Crs has alpha, alpha^2, ..., alpha^{d-1} as roots. Note that GF(2^m) has GF(2) as a subfield. Consider those polynomials v(X) over GF(2) with degree 2^m - 2 or less that have alpha, alpha^2, ..., alpha^{d-1} (also their conjugates) as roots. These polynomials over GF(2) form a primitive BCH code Cbch with designed distance d. Since these polynomials are also code polynomials in the RS code Crs, Cbch is a subcode of Crs.

7.11 Suppose c(X) = sum_{i=0}^{2^m - 2} c_i X^i is a minimum-weight code polynomial in the (2^m - 1, k) RS code C. The minimum weight is increased to d + 1 provided
c_inf = -c(1) = - sum_{i=0}^{2^m - 2} c_i != 0.
We know that c(X) is divisible by g(X). Thus c(X) = a(X)g(X) with a(X) != 0. Consider c(1) = a(1)g(1). Since 1 is not a root of g(X), g(1) != 0. Next we show that a(1) is not equal to 0. If a(1) = 0,

then a(X) has X - 1 as a factor, and c(X) is a multiple of (X - 1)g(X) and must have weight at least d + 1. This contradicts the hypothesis that c(X) is a minimum-weight code polynomial. Hence a(1) != 0, so c_inf = -c(1) != 0 and the vector (c_inf, c0, c1, ..., c_{2^m - 2}) has weight d + 1. Consequently the extended RS code has minimum distance d + 1.

7.12 To prove the minimum distance of the doubly extended RS code, we need to show that no 2t or fewer columns of H1 sum to zero over GF(2^m), and that there are 2t + 1 columns in H1 that sum to zero. Suppose there are delta columns in H1 that sum to zero, with delta <= 2t. There are 4 cases to be considered:
(1) All delta columns are from the submatrix H.
(2) The delta columns consist of the first column of H1 and delta - 1 columns from H.
(3) The delta columns consist of the second column of H1 and delta - 1 columns from H.
(4) The delta columns consist of the first two columns of H1 and delta - 2 columns from H.
The first case leads to a delta x delta Vandermonde determinant; the second and third cases lead to a (delta - 1) x (delta - 1) Vandermonde determinant; the fourth case leads to a (delta - 2) x (delta - 2) Vandermonde determinant. The derivations are exactly the same as we did in the book. Since Vandermonde determinants are nonzero, delta columns of H1 cannot sum to zero. Hence the minimum distance of the extended RS code is at least 2t + 1. However, there are 2t + 1 columns in H (they are also in H1) which sum to zero. Therefore the minimum distance of the extended RS code is exactly 2t + 1.

7.13 Let alpha be a primitive element in GF(2^m). Consider
v(X) = sum_{i=0}^{2^m - 2} a(alpha^i) X^i = sum_{i=0}^{2^m - 2} ( sum_{j=0}^{k-1} a_j alpha^{ij} ) X^i.
Replacing X by alpha^q, we have
v(alpha^q) = sum_{i=0}^{2^m - 2} sum_{j=0}^{k-1} a_j alpha^{ij} alpha^{iq} = sum_{j=0}^{k-1} a_j ( sum_{i=0}^{2^m - 2} alpha^{i(j+q)} ).

We factor 1 + X^{2^m - 1} as follows:
1 + X^{2^m - 1} = (1 + X)(1 + X + X^2 + ... + X^{2^m - 2}).
Since the polynomial 1 + X + X^2 + ... + X^{2^m - 2} has alpha, alpha^2, ..., alpha^{2^m - 2} as roots, then for 1 <= l <= 2^m - 2,
sum_{i=0}^{2^m - 2} alpha^{li} = 1 + alpha^l + alpha^{2l} + ... + alpha^{(2^m - 2)l} = 0.
Therefore
sum_{i=0}^{2^m - 2} alpha^{i(j+q)} = 0 when 1 <= j + q <= 2^m - 2,
which holds for 0 <= j < k and 1 <= q <= 2^m - k - 1. This implies that v(alpha^q) = 0 for 1 <= q <= 2^m - k - 1. Hence v(X) has alpha, alpha^2, ..., alpha^{2^m - k - 1} as roots. The set {v(X)} is a set of polynomials over GF(2^m) with 2^m - k - 1 consecutive powers of alpha as roots, and hence it forms a (2^m - 1, k, 2^m - k) cyclic RS code.
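The key identity in 7.13 — the vanishing of the geometric sums sum_{i=0}^{2^m - 2} alpha^{il} for 1 <= l <= 2^m - 2 — can be confirmed numerically. A sketch in GF(2^4), assuming the primitive polynomial 1 + X + X^4:

```python
# GF(2^4) with p(alpha) = 1 + alpha + alpha^4
powers, x = [], 1
for _ in range(15):
    powers.append(x)
    x <<= 1
    if x & 0b10000:
        x ^= 0b10011

def geo_sum(l: int) -> int:
    # sum_{i=0}^{14} alpha^(i*l), with field addition = XOR
    s = 0
    for i in range(15):
        s ^= powers[(i * l) % 15]
    return s

assert all(geo_sum(l) == 0 for l in range(1, 15))
assert geo_sum(0) == 1          # fifteen 1's sum to 1 in characteristic 2
```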

Chapter 8

8.2 The order of the perfect difference set {0, 2, 3} is q = 2.
(a) The length of the code is n = 2^2 + 2 + 1 = 7.
(b) Let z(X) = 1 + X^2 + X^3.
(c) Then the parity-check polynomial is
h(X) = GCD{1 + X^2 + X^3, X^7 + 1} = 1 + X^2 + X^3,
and the generator polynomial is
g(X) = (X^7 + 1)/h(X) = 1 + X^2 + X^3 + X^4.
(d) From g(X), we find that the generator matrix in systematic form is

G = [ 1 0 1 1 1 0 0 ]
    [ 1 1 1 0 0 1 0 ]
    [ 0 1 1 1 0 0 1 ]

The parity-check matrix in systematic form is

H = [ 1 0 0 0 1 1 0 ]
    [ 0 1 0 0 0 1 1 ]
    [ 0 0 1 0 1 1 1 ]
    [ 0 0 0 1 1 0 1 ]

The check-sums orthogonal on the highest-order error digit e6 are:
A1 = s0 + s2,   A2 = s1,   A3 = s3.
Based on the above check-sums, a type-1 decoder can be implemented.
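The factorization behind 8.2(c) is easy to confirm with carry-less arithmetic. A minimal sketch, with GF(2) polynomials encoded as integers:

```python
def gf2_mul(a: int, b: int) -> int:
    # carry-less product of two GF(2) polynomials
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

h = 0b1101            # h(X) = 1 + X^2 + X^3
g = 0b11101           # g(X) = 1 + X^2 + X^3 + X^4
assert gf2_mul(g, h) == (1 << 7) | 1      # g(X)h(X) = X^7 + 1
```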

8 = s1 . A3. A3. s3 = e3 + e6 + e7 + e10 .10 = s3 = e3 + e6 + e7 + e10 . s1 = e1 + e5 + e6 + e8 . (b) The syndrome of the error vector e is s = (s0 .the second. s3 .9 = s2 . of the code is 4. the third and the 6th columns sum to zero. s2 = e2 + e5 + e7 + e9 . The check-sums orthogonal on e9 are: A1. The check-sums orthogonal on e8 are: A1. s4 ) = eHT with s0 = e0 + e5 + e6 + e7 + e8 + e9 + e10 . 2 . s4 = e4 + e8 + e9 + e10 .9 = s4 . (c) The check-sums orthogonal on e10 are: A1. A2. hence the minimum distance.10 = s0 + s1 + s2 = e0 + e1 + e2 + e5 + e10 . s1 .8 = s0 + s2 + s3 .8 = s4 . A2. Therefore the minimum weight.10 = s4 = e4 + e8 + e9 + e10 . s2 . A2. A3.9 = s0 + s1 + s3 .

The check-sums orthogonal on e7 are:
A1,7 = s0 + s1 + s4,   A2,7 = s2,   A3,7 = s3.
The check-sums orthogonal on e6 are:
A1,6 = s0 + s2 + s4,   A2,6 = s1,   A3,6 = s3.
The check-sums orthogonal on e5 are:
A1,5 = s0 + s3 + s4,   A2,5 = s1,   A3,5 = s2.

(d) Yes: since there are 3 check-sums orthogonal on each message bit and the minimum distance of the code is 4, the code is completely orthogonalizable.

8.5 For m = 6, the binary radix-2 form of 43 is
43 = 1 + 2 + 2^3 + 2^5.
The nonzero proper descendants of 43 are:
1, 2, 3, 8, 9, 10, 11, 32, 33, 34, 35, 40, 41, 42.
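The descendant list in 8.5 can be generated by enumerating submasks of the radix-2 form of 43. A minimal sketch:

```python
h = 43                               # 43 = 1 + 2 + 2^3 + 2^5
# proper nonzero descendants: nonzero integers whose binary digits are
# covered by the binary digits of h, excluding h itself
descendants = {d for d in range(1, h) if d & h == d}
assert descendants == {1, 2, 3, 8, 9, 10, 11,
                       32, 33, 34, 35, 40, 41, 42}
```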

8.6 Consider the vector

location: alpha^inf alpha^0 alpha^1 alpha^2 alpha^3 alpha^4 alpha^5 alpha^6 alpha^7 alpha^8 alpha^9 alpha^10 alpha^11 alpha^12 alpha^13 alpha^14
u =      (    1        1       1       0       0       1       0       1       0       1       1       0        0        0        0        1    )

Applying the permutation Z = alpha^3 Y + alpha^11 to the above vector, the component at the location Y = alpha^i is permuted to the location alpha^3 alpha^i + alpha^11. For example, the 1-component at the location Y = alpha^8 is permuted to the location alpha^3 alpha^8 + alpha^11 = alpha^11 + alpha^11 = 0, i.e. the location labeled alpha^inf. Performing this permutation on each component of the above vector, we obtain the following vector:

location: alpha^inf alpha^0 alpha^1 alpha^2 alpha^3 alpha^4 alpha^5 alpha^6 alpha^7 alpha^8 alpha^9 alpha^10 alpha^11 alpha^12 alpha^13 alpha^14
v =      (    1        1       0       1       0       0       1       0       0       1       1       0        1        0        1        0    )

8.7 For J = 9 and L = 7, X^63 + 1 can be factored as follows:
X^63 + 1 = (1 + X^9)(1 + X^9 + X^18 + X^27 + X^36 + X^45 + X^54).
Let pi(X) = 1 + X^9 + X^18 + X^27 + X^36 + X^45 + X^54, and let alpha be a primitive element in GF(2^6) whose minimal polynomial is phi1(X) = 1 + X + X^6. Because alpha^63 = 1, the polynomial 1 + X^9 has alpha^0, alpha^7, alpha^14, alpha^21, alpha^28, alpha^35, alpha^42, alpha^49, and alpha^56 as all its roots, and the polynomial pi(X) has alpha^h as a root when h is not a multiple of 7 and 0 < h < 63. From the conditions (Theorem 8.2) on the roots of H(X), we can find H(X) as:
H(X) = LCM{minimal polynomials phi_i(X) of the roots of H(X)} = phi1(X)phi3(X)phi5(X)phi9(X)phi11(X)phi13(X)phi27(X),
where phi1 = 1 + X + X^6, phi3 = 1 + X + X^2 + X^4 + X^6, phi5 = 1 + X + X^2 + X^5 + X^6, phi9 = 1 + X^2 + X^3, phi11 = 1 + X^2 + X^3 + X^5 + X^6, phi13 = 1 + X + X^3 + X^4 + X^6, and phi27 = 1 + X + X^3. Then,
G(X) = (X^63 + 1)/H(X) = (1 + X^9) pi(X) / H(X)
     = (1 + X^9)(1 + X^2 + X^4 + X^5 + X^6)(1 + X + X^4 + X^5 + X^6)(1 + X^5 + X^6).
For the type-1 DTI code of length 63 and J = 9, the generator polynomial is:
g1(X) = X^27 G(X^{-1}) / (1 + X)
      = (1 + X^9)(1 + X + X^2 + X^4 + X^6)(1 + X + X^2 + X^5 + X^6)(1 + X + X^6) / (1 + X)
      = (1 + X + X^2 + X^3 + X^4 + X^5 + X^6 + X^7 + X^8)(1 + X + X^2 + X^4 + X^6)(1 + X + X^2 + X^5 + X^6)(1 + X + X^6).
Represent the polynomials pi(X), X pi(X), X^2 pi(X), X^3 pi(X), X^4 pi(X), X^5 pi(X), X^6 pi(X), X^7 pi(X), and X^8 pi(X) by their 63-tuple location vectors. Add an overall parity-check digit and apply the affine permutation Y = alpha X + alpha^62 to each of these location vectors; then remove the overall parity-check digits of all location vectors. As a result,

we can obtain the polynomials orthogonal on the digit position X^62. They are:
X^11 + X^16 + X^18 + X^24 + X^48 + X^58 + X^59 + X^62,
X^9 + X^10 + X^13 + X^25 + X^30 + X^32 + X^38 + X^62,
X^5 + X^29 + X^39 + X^40 + X^43 + X^55 + X^60 + X^62,
X^1 + X^7 + X^31 + X^41 + X^42 + X^45 + X^57 + X^62,
X^23 + X^33 + X^34 + X^37 + X^49 + X^54 + X^56 + X^62,
X^0 + X^3 + X^15 + X^20 + X^22 + X^28 + X^52 + X^62,
X^2 + X^14 + X^19 + X^21 + X^27 + X^51 + X^61 + X^62,
X^4 + X^6 + X^12 + X^36 + X^46 + X^47 + X^50 + X^62.

8.8 For J = 7 and L = 9, X^63 + 1 can be factored as follows:
X^63 + 1 = (1 + X^7)(1 + X^7 + X^14 + X^21 + X^28 + X^35 + X^42 + X^49 + X^56),
so pi(X) = 1 + X^7 + X^14 + X^21 + X^28 + X^35 + X^42 + X^49 + X^56. Let alpha be a primitive element in GF(2^6) whose minimal polynomial is phi1(X) = 1 + X + X^6. Because alpha^63 = 1, the polynomial 1 + X^7 has alpha^0, alpha^9, alpha^18, alpha^27, alpha^36, alpha^45, and alpha^54 as all its roots, and the polynomial pi(X) has alpha^h as a root when h is not a multiple of 9 and 0 < h < 63. From the conditions (Theorem 8.2) on the roots of H(X), we can find H(X) as:
H(X) = LCM{minimal polynomials phi_i(X) of the roots of H(X)} = phi1(X)phi3(X)phi5(X)phi7(X)phi21(X),
where phi1 = 1 + X + X^6, phi3 = 1 + X + X^2 + X^4 + X^6, phi5 = 1 + X + X^2 + X^5 + X^6, phi7 = 1 + X^3 + X^6, and phi21 = 1 + X + X^2. Then,
G(X) = (X^63 + 1)/H(X)
     = (1 + X^7)(1 + X + X^3 + X^5 + X^6)(1 + X + X^3 + X^4 + X^6)(1 + X^2 + X^4 + X^5 + X^6)(1 + X + X^4 + X^5 + X^6)(1 + X^5 + X^6).

For the type-1 DTI code of length 63 and J = 7, the generator polynomial is:
g1(X) = X^37 G(X^{-1}) / (1 + X)
      = (1 + X^7)(1 + X + X^3 + X^4 + X^6)(1 + X^2 + X^3 + X^5 + X^6)(1 + X + X^2 + X^4 + X^6)(1 + X + X^2 + X^5 + X^6)(1 + X + X^6) / (1 + X)
      = (1 + X + X^2 + X^3 + X^4 + X^5 + X^6)(1 + X + X^3 + X^4 + X^6)(1 + X^2 + X^3 + X^5 + X^6)(1 + X + X^2 + X^4 + X^6)(1 + X + X^2 + X^5 + X^6)(1 + X + X^6).
Represent the polynomials pi(X), X pi(X), X^2 pi(X), X^3 pi(X), X^4 pi(X), X^5 pi(X), and X^6 pi(X) by their 63-tuple location vectors. Add an overall parity-check digit and apply the affine permutation Y = alpha X + alpha^62 to each of these location vectors; then remove the overall parity-check digits of all location vectors. As a result, we can obtain the polynomials orthogonal on the digit position X^62. They are:
X^11 + X^14 + X^32 + X^36 + X^43 + X^44 + X^45 + X^52 + X^56 + X^62,
X^1 + X^6 + X^9 + X^15 + X^16 + X^35 + X^54 + X^55 + X^61 + X^62,
X^2 + X^10 + X^23 + X^24 + X^26 + X^28 + X^29 + X^42 + X^50 + X^62,
X^0 + X^4 + X^7 + X^17 + X^27 + X^30 + X^34 + X^39 + X^58 + X^62,
X^3 + X^8 + X^13 + X^19 + X^31 + X^33 + X^46 + X^48 + X^60 + X^62,
X^5 + X^21 + X^22 + X^38 + X^47 + X^49 + X^53 + X^57 + X^59 + X^62.

8.9 The generator polynomial of the maximum-length sequence code of length n = 2^m - 1 is
g(X) = (X^n + 1)/p(X) = (X + 1)(1 + X + X^2 + ... + X^{n-1})/p(X),
where p(X) is a primitive polynomial of degree m over GF(2). Since p(X) and X + 1 are relatively prime, g(X) has 1 as a root. Since the all-one vector 1 + X + X^2 + ... + X^{n-1} does not have 1 as a root, it is not divisible by g(X). Therefore, the all-one vector is not a codeword in a maximum-length sequence code.

8.17 There are five 1-flats that pass through the point alpha^7. The 1-flats passing through alpha^7 can be represented by alpha^7 + beta a1, where a1 is linearly independent of alpha^7 and beta is in GF(2^2). They are

α61 . α23 . 8. α44 . α2 . α15 . α25 . 1. α45 . L5 = {α7 . α30 } L6 = {α63 . α42 . α50 .18 (a) There are twenty one 1-flats that pass through the point α63 . α3 . α33 . They are: L1 = {α63 . α12 . α8 }. α52 . α9 } L5 = {α63 . α6 }. α14 . α2 }. α20 } 7 . 0. α24 . α26 . α60 . α13 . α39 } L3 = {α63 . α57 } L18 = {α63 . α17 } L7 = {α63 . α7 . α19 } L16 = {α63 . α6 . α22 } L10 = {α63 . α58 } L17 = {α63 . α13 . α21 } L2 = {α63 . α53 } L11 = {α63 . L3 = {α7 . α37 } L4 = {α63 . α12 . α16 . α14 } L12 = {α63 . α38 } L9 = {α63 . α62 . α54 . L2 = {α7 . α4 . α41 . α}. α35 . α34 . α11 . α18 . α40 } L15 = {α63 . α3 . L4 = {α7 . α49 . α5 }. α1 . α46 . α51 } L13 = {α63 . α4 . α48 .five 1-flats passing through α7 which are: L1 = {α7 . α11 . α36 } L14 = {α63 . α32 . α47 . α8 } L8 = {α63 . α31 . α9 . α10 . 0.

α6 . α58 . α31 . α61 . α21 . α17 .L19 = {α63 . α35 . The radix-23 expansion of 47 is expressed as follows: 47 = 7 + 5 · 23 . α28 . α20 . α27 . α30 . α60 . α41 . α6 . α39 . α7 } L7 = {α21 . α26 . α54 . α40 . α52 } L3 = {α21 . α13 . α50 . α30 . α36 .19 The 1-flats that pass through the point α21 are: L1 = {α21 . α50 . α39 . α33 . α25 . α56 . α59 . α53 . α3 . α14 . α46 . α19 . α29 . α4 . 8 . α5 . α50 . α45 . α46 . α0 . α15 . α35 . α62 . α23 . α50 . α50 . α54 . α4 . α47 . α31 . α1 } L8 = {α21 . α8 } F2 = {1. α12 . α10 . α20 . α44 . α8 . α50 . α18 . α22 . α48 . α33 . α9 . α59 . α11 . α29 } L20 = {α63 . α52 . α38 } F3 = {1. α47 . α7 . α43 . where η ∈ GF (22 ). α38 . α23 . α29 . α16 . α13 } L9 = {α21 . α24 . α22 . α60 . α2 } L6 = {α21 . α6 . α16 . α5 } (b) There are five 2-flats that intersect on the 1-flat. α51 . α56 . α26 . α27 . α18 . α34 } L4 = {α21 . They are: F1 = {1. α40 } F4 = {1. α3 . α42 . α41 . α37 } L2 = {α21 . α10 . α45 . α43 . α39 . α56 . α44 . α2 } F5 = {1. α17 . α11 . α6 . 0. α28 } 8. α55 . α49 } 8. α57 . α25 . α34 . α10 } L21 = {α63 . α49 . α24 . α6 . α39 . α39 .20 a. α39 . α36 . α42 . α51 . α32 . α15 . α6 . α61 . α48 . α19 . α55 . α43 . α55 . α5 . {α63 + ηα}. α12 } L5 = {α21 . α28 . α59 . α62 . α57 . α58 . 0. α32 . α53 . α27 . α14 . α9 . α. α37 .

b. Hence, the 2^3-weight of 47 is W_{2^3}(47) = 7 + 5 = 12.

c. W_{2^3}(47^{(0)}) = W_{2^3}(47) = 7 + 5 = 12, W_{2^3}(47^{(1)}) = W_{2^3}(31) = 7 + 3 = 10, and W_{2^3}(47^{(2)}) = W_{2^3}(62) = 6 + 7 = 13. Hence,

max_{0≤l<3} W_{2^3}(47^{(l)}) = 13.

All the positive integers h less than 63 such that 0 < max_{0≤l<3} W_{2^3}(h^{(l)}) ≤ 2^3 − 1 are: 1, 2, 3, 4, 5, 6, 7, 8, 10, 12, 14, 16, 17, 20, 21, 24, 28, 32, 33, 34, 35, 40, 42, 48, 49, 56.
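The 2^3-weight computations in Problem 8.20 can be reproduced in a few lines (a sketch; h^{(l)} denotes 2^l · h mod 63):

```python
def radix8_weight(h):
    """2^3-weight: the sum of the radix-2^3 (octal) digits of h."""
    w = 0
    while h:
        w += h & 7      # low octal digit
        h >>= 3
    return w

n = 63
def max_shift_weight(h):
    # h^(l) = 2^l * h mod 63, for 0 <= l < 3
    return max(radix8_weight((h << l) % n) for l in range(3))

print(radix8_weight(47))       # 12
print(max_shift_weight(47))    # 13
hs = [h for h in range(1, n) if max_shift_weight(h) <= 2**3 - 1]
print(len(hs))                 # 26 qualifying integers
```

The list hs reproduces exactly the integers enumerated above.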

Chapter 11 Convolutional Codes

11.1 (a) The encoder diagram is shown below.

[Figure: (3, 1, 2) encoder with input u and output sequences v(0), v(1), v(2).]

(b) The generator matrix is given by

G = [ 111 101 011
          111 101 011
              111 101 011
                  . . . ].

(c) The codeword corresponding to u = (11101) is given by

v = u · G = (111, 010, 001, 110, 100, 101, 011).

11.2 (a) The generator sequences of the convolutional encoder in Figure 11.3 on page 460 are given in (11.21).
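The codeword in Problem 11.1(c) can be checked by direct convolution of u with the generator sequences read off the rows of G above, namely g(0) = (110), g(1) = (101), g(2) = (111) (a sketch):

```python
def conv_encode(u, gens):
    """Encode u with a rate-1/n feedforward convolutional encoder over GF(2).
    gens: generator sequences, e.g. (1, 1, 0) for g(X) = 1 + X."""
    m = max(len(g) for g in gens) - 1
    out = []
    for l in range(len(u) + m):                 # include the m-bit tail
        block = []
        for g in gens:
            bit = 0
            for i, gi in enumerate(g):
                if 0 <= l - i < len(u):
                    bit ^= u[l - i] & gi
            block.append(bit)
        out.append(tuple(block))
    return out

u = [1, 1, 1, 0, 1]
gens = [(1, 1, 0), (1, 0, 1), (1, 1, 1)]
v = conv_encode(u, gens)
print(v)   # (111, 010, 001, 110, 100, 101, 011)
```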

      . 101) is given by v = u · G = (1010.2 (b) The generator matrix is given by       G=     1111 0000 0000 0101 0110 0000 0011 0100 0011 1111 0000 0000 0101 0110 0000 0011 0100 0011 .. 11. 1 + D + D4 + D6 . v(1) (D). 0111. (c) The codeword corresponding to u = (110. 0000.4 (a) The generator matrix is given by G(D) = 1+D D D 1 1+D 1 and the composite generator polynomials are g1 (D) = = and g2 (D) = = g2 (D3 ) + Dg2 (D3 ) + D2 g2 (D3 ) D + D2 + D3 . . 1 + D2 + D3 is v(D) = = u(1) (D3 )g1 (D) + u(2) (D3 )g2 (D) 1 + D + D3 + D4 + D6 + D10 + D13 + D14 . 1 + D3 + D5 + D6 .. 0011). 011.     . (b) The output sequences corresponding to u(D) = 1 + D2 + D3 + D4 are V(D) = = v(0) (D). v(2) (D) 1 + D + D2 + D5 .3 (a) The generator matrix is given by G(D) = 1+D 1 + D2 1 + D + D2 . (0) (1) (2) g1 (D3 ) + Dg1 (D3 ) + D2 g1 (D3 ) 1 + D2 + D3 + D4 + D5 (0) (1) (2) (b) The codeword corresponding to the set of input sequences U(D) = 1 + D + D3 . and the corresponding codeword is v(D) = v(0) (D3 ) + Dv(1) (D3 ) + D2 v(2) (D3 ) = 1 + D + D2 + D3 + D5 + D6 + D10 + D14 + D15 + D16 + D19 + D20 . . . 11. 1110.

11.5 (a) The generator matrix is given by

G = [ 111 001 010 010 001 011
          111 001 010 010 001 011
              111 001 010 010 001 011
                  . . . ].

(b) The parity sequences corresponding to u = (1101) are given by

v(1)(D) = u(D) · g(1)(D) = (1 + D + D^3)(1 + D^2 + D^3 + D^5) = 1 + D + D^2 + D^3 + D^4 + D^8

and

v(2)(D) = u(D) · g(2)(D) = (1 + D + D^3)(1 + D + D^4 + D^5) = 1 + D^2 + D^3 + D^6 + D^7 + D^8.

Hence,

v(1) = (111110001) and v(2) = (101100111).

11.6 (a) The controller canonical form encoder realization, requiring 6 delay elements, is shown below.

[Figure: controller canonical form encoder with inputs u(1), u(2) and outputs v(0), v(1), v(2).]

(b) The observer canonical form encoder realization, requiring only 3 delay elements, is shown below.

[Figure: observer canonical form encoder.]

11.14 (a) The GCD of the generator polynomials is 1.

(b) Since the GCD is 1, a feedforward inverse exists, and the inverse transfer function matrix G^{-1}(D) must satisfy

G(D)G^{-1}(D) = [1 + D^2    1 + D + D^2] G^{-1}(D) = I.

By inspection,

G^{-1}(D) = [ 1 + D
                D   ].

11.15 (a) The GCD of the generator polynomials is 1 + D^2, and a feedforward inverse does not exist.

(b) The encoder state diagram is shown below.

[Figure: eight-state diagram S0–S7 with branch labels input/output.]

(c) The cycles S2S5S2 and S7S7 both have zero output weight.

(d) Consider the infinite-weight information sequence

u(D) = 1/(1 + D^2) = 1 + D^2 + D^4 + D^6 + D^8 + · · · .

Applying u(D) = 1/(1 + D^2) to the encoder of Problem 11.15 results in the output sequences

v(0)(D) = u(D)(1 + D^2) = 1

and

v(1)(D) = u(D)(1 + D + D^2 + D^3) = 1 + D,

and hence a codeword of finite weight.

(e) This is a catastrophic encoder realization.

11.16 For a systematic (n, k, ν) encoder, the generator matrix G(D) is a k × n matrix of the form

G(D) = [Ik | P(D)] = [ 1 0 · · · 0   g1^(k)(D) · · · g1^(n−1)(D)
                       0 1 · · · 0   g2^(k)(D) · · · g2^(n−1)(D)
                       · · ·
                       0 0 · · · 1   gk^(k)(D) · · · gk^(n−1)(D) ].

The transfer function matrix of a feedforward inverse G^{-1}(D) with delay l = 0 must be such that G(D)G^{-1}(D) = Ik. A matrix satisfying this condition is given by

G^{-1}(D) = [     Ik
              0_{(n−k)×k} ].

11.19 (a) The encoder state diagram is shown below.

[Figure: four-state diagram with branch labels 0/000, 1/111, 0/011, 1/100, 0/101, 1/010, 0/110, 1/001.]

(b) The modified state diagram is shown below.

[Figure: modified state diagram with branch gains X, X^2, and X^3.]

(c) The WEF is given by

A(X) = Σ_i F_i Δ_i / Δ.

There are 3 cycles in the graph:

Cycle 1: S1 S2 S1,     C1 = X^3
Cycle 2: S1 S3 S2 S1,  C2 = X^4
Cycle 3: S3 S3,        C3 = X.

There is one pair of nontouching cycles:

Cycle pair 1: (Cycle 1, Cycle 3),  C1C3 = X^4.

There are no more sets of nontouching cycles. Therefore,

Δ = 1 − (C1 + C2 + C3) + C1C3 = 1 − (X + X^3 + X^4) + X^4 = 1 − X − X^3.

There are 2 forward paths:

Forward path 1: S0 S1 S2 S0,     F1 = X^7
Forward path 2: S0 S1 S3 S2 S0,  F2 = X^8.

Only cycle 3 does not touch forward path 1, and hence Δ1 = 1 − X. Forward path 2 touches all the cycles, and hence Δ2 = 1. Finally, the WEF is given by

A(X) = [X^7(1 − X) + X^8] / (1 − X − X^3) = X^7 / (1 − X − X^3).

Carrying out the division,

A(X) = X^7 + X^8 + X^9 + 2X^10 + · · · ,

indicating that there is one codeword of weight 7, one codeword of weight 8, one codeword of weight 9, 2 codewords of weight 10, and so on.

(d) The augmented state diagram is shown below.

[Figure: augmented state diagram with branch gains W X L, X^2 L, and W X^3 L.]

(e) The IOWEF is given by

A(W, X, L) = Σ_i F_i Δ_i / Δ.

There are 3 cycles in the graph:

Cycle 1: S1 S2 S1,     C1 = W X^3 L^2
Cycle 2: S1 S3 S2 S1,  C2 = W^2 X^4 L^3
Cycle 3: S3 S3,        C3 = W X L.

There is one pair of nontouching cycles:

Cycle pair 1: (Cycle 1, Cycle 3),  C1C3 = W^2 X^4 L^3.

There are no more sets of nontouching cycles. Therefore,

Δ = 1 − (C1 + C2 + C3) + C1C3 = 1 − (W X^3 L^2 + W^2 X^4 L^3 + W X L) + W^2 X^4 L^3 = 1 − W X L − W X^3 L^2.

There are 2 forward paths:

Forward path 1: S0 S1 S2 S0,     F1 = W X^7 L^3
Forward path 2: S0 S1 S3 S2 S0,  F2 = W^2 X^8 L^4.

Only cycle 3 does not touch forward path 1, and hence Δ1 = 1 − W X L. Forward path 2 touches all the cycles, and hence Δ2 = 1. Finally, the IOWEF is given by

A(W, X, L) = [W X^7 L^3 (1 − W X L) + W^2 X^8 L^4] / (1 − W X L − W X^3 L^2) = W X^7 L^3 / (1 − W X L − W X^3 L^2).

Carrying out the division,

A(W, X, L) = W X^7 L^3 + W^2 X^8 L^4 + W^3 X^9 L^5 + · · · ,

indicating that there is one codeword of weight 7 with an information weight of 1 and length 3, one codeword of weight 8 with an information weight of 2 and length 4, one codeword of weight 9 with an information weight of 3 and length 5, and so on.
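The long division of the WEF from part (c), A(X) = X^7/(1 − X − X^3), can be automated to read off the weight spectrum (a sketch using integer coefficient lists):

```python
def series(num, den, nterms):
    """Power-series coefficients of num(X)/den(X); polys as coefficient lists, den[0] = 1."""
    a = [0] * nterms
    for n in range(nterms):
        s = num[n] if n < len(num) else 0
        for k in range(1, min(n, len(den) - 1) + 1):
            s -= den[k] * a[n - k]
        a[n] = s // den[0]
    return a

# A(X) = X^7 / (1 - X - X^3)
num = [0] * 7 + [1]
den = [1, -1, 0, -1]
coeffs = series(num, den, 12)
print(coeffs[7:])   # [1, 1, 1, 2, 3]: one codeword each of weight 7, 8, 9, two of weight 10, ...
```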

11.20 Using the state variable method described on pp. 505–506, the WEF is given by

A(X) = −X^4(−1 − 2X^4 + X^3 + 9X^20 − 42X^18 + 78X^16 + 38X^12 + 3X^17 + 5X^13 + 9X^11 − 9X^15 − 2X^7 − 74X^14 − 14X^10 − 9X^9 + X^6 + 6X^8)
       / (1 − X + 2X^4 − X^3 − X^2 + 3X^24 + X^21 − 3X^20 + 32X^18 − 8X^22 − X^19 − 45X^16 − 8X^12 − 5X^17 − 6X^11 + 9X^15 + 2X^7 + 27X^14 + 3X^10 − 2X^9 − 4X^6 + X^8).

Performing the division results in

A(X) = X^4 + X^5 + 2X^6 + 3X^7 + 6X^8 + 9X^9 + · · · ,

which indicates that there is one codeword of weight 4, one codeword of weight 5, two codewords of weight 6, and so on.

11.28 (a) From Problem 11.19, the WEF of the code is

A(X) = X^7 + X^8 + X^9 + 2X^10 + · · · ,

and the free distance of the code is therefore dfree = 7.

(b) The complete CDF is shown below.

[Figure: complete CDF, rising from dmin = 5 to dfree = 7.]

(c) The minimum distance is dmin = d_l|_{l=m=2} = 5.

11.29 (a) By examining the encoder state diagram in Problem 11.15 and considering only paths that begin and end in state S0 (see page 507), we find that the free distance of the code is dfree = 6. This corresponds to the path S0 S1 S2 S4 S0 and the input sequence u = (1000).

(b) The complete CDF is shown below.

[Figure: complete CDF, rising from dmin = 3 to dfree = 6.]

(c) The minimum distance is dmin = d_l|_{l=m=3} = 3.

11.31 By definition, the free distance dfree is the minimum weight of any path that has diverged from and remerged with the all-zero state. Assume that [v]_j represents the shortest remerged path through the state diagram with weight dfree. Letting [d_l]_re be the minimum weight of all remerged paths of length l, it follows that [d_l]_re = dfree for all l ≥ j. Also, for a noncatastrophic encoder, any path that remains unmerged must accumulate weight, so letting [d_l]_un be the minimum weight of all unmerged paths of length l, it follows that lim_{l→∞} [d_l]_un → ∞. Therefore,

lim_{l→∞} d_l = min{ lim_{l→∞} [d_l]_re,  lim_{l→∞} [d_l]_un } = dfree.

Q. E. D.

Chapter 12 Optimum Decoding of Convolutional Codes

12.1 (Note: The problem should read "for the (3, 2, 1) encoder in Example 11.2" rather than "for the (3, 2, 2) code in Table 12.1(d)".) The state diagram of the encoder is given by:

[Figure: state diagram of the four-state (3, 2, 1) encoder with branch labels u(1)u(2)/v(0)v(1)v(2), e.g. 00/000, 01/011, 10/101, 11/110 from S0.]

From the state diagram, we can draw a trellis diagram containing h + m + 1 = 3 + 1 + 1 = 5 levels, as shown below.

[Figure: trellis diagram for the (3, 2, 1) encoder; the path corresponding to the codeword below is shown highlighted.]

Hence, for u = (11, 01, 10),

v(0) = (1001),  v(1) = (1001),  v(2) = (0011),

and v = (110, 000, 001, 111), agreeing with (11.16) in Example 11.2.

12.2 Note that

Σ_{l=0}^{N−1} c2[log P(r_l|v_l) + c1] = c2 Σ_{l=0}^{N−1} log P(r_l|v_l) + N c2 c1.

Since

max_v { c2 Σ_{l=0}^{N−1} log P(r_l|v_l) + N c2 c1 } = c2 max_v { Σ_{l=0}^{N−1} log P(r_l|v_l) } + N c2 c1

if c2 is positive, any path that maximizes Σ_{l=0}^{N−1} log P(r_l|v_l) also maximizes Σ_{l=0}^{N−1} c2[log P(r_l|v_l) + c1].

12.3 The integer metric table becomes:

        0   1
  01    6   0
  02    5   3
  12    3   5
  11    0   6

The received sequence is

r = (11 12 01, 12 02 11, 11 11 01, 11 11 11, 01 12 01, 12 01 11).

The decoded sequence is shown in the figure below, and the final survivor is

v̂ = (111, 010, 110, 011, 000, 000, 000),

which yields a decoded information sequence of û = (11000). This result agrees with Example 12.1.

[Figure: Viterbi decoding trellis for Problem 12.3, showing branch labels and the survivor metric at each level.]
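The Viterbi decision above can be checked end to end with a small decoder. This sketch assumes the (3, 1, 2) feedforward encoder with generator sequences g(0) = (110), g(1) = (101), g(2) = (111) (the encoder of Problem 11.1), and for simplicity decodes a noiseless hard-decision transmission of u = (11000) rather than the quantized DMC output:

```python
GENS = [(1, 1, 0), (1, 0, 1), (1, 1, 1)]
M = 2  # encoder memory

def branch_output(bit, state):
    """Output block for input `bit` leaving `state` = (previous two inputs)."""
    window = (bit,) + state
    return tuple(sum(w & g for w, g in zip(window, gen)) % 2 for gen in GENS)

def encode(u):
    state, out = (0,) * M, []
    for bit in list(u) + [0] * M:          # append m zero tail bits
        out.append(branch_output(bit, state))
        state = (bit,) + state[:-1]
    return out

def viterbi(r):
    """Minimum-Hamming-distance path ending in the zero state."""
    states = [(a, b) for a in (0, 1) for b in (0, 1)]
    metric = {s: (0 if s == (0, 0) else float('inf')) for s in states}
    paths = {s: [] for s in states}
    for block in r:
        nm = {s: float('inf') for s in states}
        np_ = {s: [] for s in states}
        for s in states:
            if metric[s] == float('inf'):
                continue
            for bit in (0, 1):
                out = branch_output(bit, s)
                d = metric[s] + sum(x != y for x, y in zip(out, block))
                ns = (bit,) + s[:-1]
                if d < nm[ns]:
                    nm[ns], np_[ns] = d, paths[s] + [bit]
        metric, paths = nm, np_
    return paths[(0, 0)][:-M]              # strip the tail bits

u = [1, 1, 0, 0, 0]
r = encode(u)
print(r[:4])      # [(1, 1, 1), (0, 1, 0), (1, 1, 0), (0, 1, 1)]
print(viterbi(r)) # [1, 1, 0, 0, 0]
```

The first four encoded blocks match the survivor (111, 010, 110, 011, ...) quoted above.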

12.4 For the given channel transition probabilities, the resulting metric table is:

          01       02       03       04       14       13       12       11
  0    −0.363   −0.706   −0.777   −0.955   −1.237   −1.638   −2.097   −2.699
  1    −2.699   −2.097   −1.638   −1.237   −0.955   −0.777   −0.706   −0.363

To construct an integer metric table, choose c1 = 2.699 and c2 = 4.28. Then the integer metric table becomes:

          01   02   03   04   14   13   12   11
  0       10    9    8    7    6    5    3    0
  1        0    3    5    6    7    8    9   10

12.5 (a) Referring to the state diagram of Figure 11.13(a), the trellis diagram for an information sequence of length h = 4 is shown in the figure below.

(b) After Viterbi decoding the final survivor is

v̂ = (11, 10, 01, 01, 00, 11, 00),

which corresponds to the information sequence û = (1110).
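The integer metric table in Problem 12.4 follows from M = c2(log10 P(r|v) + c1), rounded to the nearest integer. This sketch assumes the scale factor c2 = 4.28, which reproduces the tabulated integers exactly:

```python
# log10 metrics for the row corresponding to a transmitted 0 (from the table above)
logp = [-0.363, -0.706, -0.777, -0.955, -1.237, -1.638, -2.097, -2.699]
c1, c2 = 2.699, 4.28
ints = [round(c2 * (m + c1)) for m in logp]
print(ints)   # [10, 9, 8, 7, 6, 5, 3, 0]
```

By the symmetry of the channel, the row for a transmitted 1 is the same list reversed.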

[Figure: soft-decision Viterbi decoding trellis for Problem 12.5, with integer branch metrics and the survivor metric at each level.]

12.6 Combining the soft decision outputs yields the following transition probabilities:

         0       1
  0    0.909   0.091
  1    0.091   0.909

For hard decision decoding, the metric is simply Hamming distance. For the received sequence r = (11, 01, 01, 10, 00, 01, 00), the decoding trellis is as shown in the figure below, and the final survivor is

v̂ = (11, 10, 01, 01, 00, 11, 00),

which corresponds to the information sequence û = (1110). This result matches the result obtained using soft decisions in Problem 12.5.

[Figure: hard-decision Viterbi decoding trellis for Problem 12.6, with Hamming-distance survivor metrics at each level.]

12.9 Proof: For d even,

Pd = Σ_{e=d/2+1}^{d} (d e) p^e (1 − p)^{d−e} + (1/2)(d d/2) p^{d/2}(1 − p)^{d/2}
   < Σ_{e=d/2}^{d} (d e) p^{d/2}(1 − p)^{d/2}
   = p^{d/2}(1 − p)^{d/2} Σ_{e=d/2}^{d} (d e)
   < 2^d p^{d/2}(1 − p)^{d/2},

and thus (12.21) is an upper bound on Pd for d even. Q. E. D.

12.10 The event error probability is bounded by (12.25),

P(E) < Σ_{d=dfree}^{∞} Ad Pd < A(X)|_{X=2√(p(1−p))}.

From Example 11.12,

A(X) = (X^6 + X^7 − X^8)/(1 − 2X − X^3) = X^6 + 3X^7 + 5X^8 + 11X^9 + 25X^10 + · · · ,

which yields

(a) P(E) < 1.2118 × 10^−4 for p = 0.01, and
(b) P(E) < 7.7391 × 10^−8 for p = 0.001.

The bit error probability is bounded by (12.29),

Pb(E) < Σ_{d=dfree}^{∞} Bd Pd < B(X)|_{X=2√(p(1−p))} = (1/k) ∂A(W, X)/∂W |_{X=2√(p(1−p)), W=1}.

From Example 11.12,

A(W, X) = [W X^7 + W^2(X^6 − X^8)] / [1 − W(2X + X^3)] = W X^7 + W^2(X^6 + X^8 + X^10) + W^3(2X^7 + 3X^9 + 3X^11 + X^13) + · · · ,

so that

∂A(W, X)/∂W |_{W=1} = (2X^6 − X^7 − 2X^8 + X^9 + X^11)/(1 − 2X − X^3)^2 = 2X^6 + 7X^7 + 18X^8 + · · · .

This yields

(a) Pb(E) < 3.0435 × 10^−4 for p = 0.01, and
(b) Pb(E) < 1.6139 × 10^−7 for p = 0.001.
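The numerical bounds in Problem 12.10 can be reproduced directly (a sketch for p = 0.01):

```python
from math import sqrt

p = 0.01
X = 2 * sqrt(p * (1 - p))
# event error bound: A(X) = (X^6 + X^7 - X^8) / (1 - 2X - X^3)
PE = (X**6 + X**7 - X**8) / (1 - 2*X - X**3)
# bit error bound: B(X) = (2X^6 - X^7 - 2X^8 + X^9 + X^11) / (1 - 2X - X^3)^2
PbE = (2*X**6 - X**7 - 2*X**8 + X**9 + X**11) / (1 - 2*X - X**3)**2
print(f"{PE:.4e}")    # 1.2118e-04
print(f"{PbE:.4e}")   # 3.0435e-04
```

Repeating with p = 0.001 gives 7.7391 × 10^−8 and 1.6139 × 10^−7, the values quoted above.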

12.11 The event error probability is given by (12.26),

P(E) ≈ Adfree 2^{dfree} p^{dfree/2},

and the bit error probability (12.30) is given by

Pb(E) ≈ Bdfree 2^{dfree} p^{dfree/2}.

From Problem 12.10, dfree = 6, Adfree = 1, and Bdfree = 2.

(a) For p = 0.01,

P(E) ≈ 1 · 2^6 · (0.01)^{6/2} = 6.4 × 10^−5 and Pb(E) ≈ 2 · 2^6 · (0.01)^{6/2} = 1.28 × 10^−4.

(b) For p = 0.001,

P(E) ≈ 1 · 2^6 · (0.001)^{6/2} = 6.4 × 10^−8 and Pb(E) ≈ 2 · 2^6 · (0.001)^{6/2} = 1.28 × 10^−7.

12.12 The (3, 1, 2) encoder of (12.1) has dfree = 7 and Bdfree = 1. Thus, (12.36) becomes

Pb(E) ≈ Bdfree 2^{dfree/2} e^{−(R dfree/2)(Eb/No)} = 2^{7/2} e^{−(7/6)(Eb/No)},

and (12.37) remains

Pb(E) ≈ (1/2) e^{−Eb/No}.

These expressions are plotted versus Eb/No in the figure below.

[Figure: Pb(E) versus Eb/No for the coded approximation (12.36) and the uncoded approximation (12.37).]

Equating the above expressions and solving for Eb/No yields

2^{7/2} e^{−(7/6)(Eb/No)} = (1/2) e^{−Eb/No}
e^{−(1/6)(Eb/No)} = 2^{−9/2}
Eb/No = −6 ln(2^{−9/2}) = 27 ln 2 = 18.71,

which is Eb/No = 12.72 dB, the coding threshold. The coding gain as a function of Pb(E) is plotted below. Note that in this example, a short constraint length code (ν = 2) with hard decision decoding, the approximate expressions for Pb(E) indicate that a positive coding gain is only achieved at very small values of Pb(E), and the asymptotic coding gain is only 0.7 dB.

[Figure: required SNR versus Pb(E) for the coded and uncoded systems, and the resulting coding gain (hard-decision case).]
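The hard-decision coding threshold computed above reduces to Eb/No = 27 ln 2; a quick check:

```python
from math import log, log10

# 2^(7/2) e^(-(7/6)x) = (1/2) e^(-x)  =>  e^(-x/6) = 2^(-9/2)  =>  x = 27 ln 2
x = 27 * log(2)
print(round(x, 2))              # 18.71
print(round(10 * log10(x), 2))  # 12.72  (dB)
```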

12.13 The (3, 1, 2) encoder of (12.1) has dfree = 7 and Bdfree = 1. Thus, (12.46) for the unquantized AWGN channel becomes

Pb(E) ≈ Bdfree e^{−R dfree Eb/No} = e^{−(7/3)(Eb/No)},

and (12.37) remains

Pb(E) ≈ (1/2) e^{−Eb/No}.

These expressions are plotted versus Eb/No in the figure below.

[Figure: Pb(E) versus Eb/No for the coded approximation (12.46) and the uncoded approximation (12.37).]

Equating the above expressions and solving for Eb/No yields

e^{−(7/3)(Eb/No)} = (1/2) e^{−Eb/No}
e^{(4/3)(Eb/No)} = 2
Eb/No = (3/4) ln 2 = 0.5199,

which is Eb/No = −2.84 dB, the coding threshold. (Note: If the slightly tighter bound on Q(x) from (1.5) is used to form the approximate expression for Pb(E), the coding threshold actually moves to −∞ dB. But this is just an artifact of the bounds, which are not tight for small values of Eb/No.) The coding gain as a function of Pb(E) is plotted below. Note that in this example, a short constraint length code (ν = 2) with soft decision decoding, the approximate expressions for Pb(E) indicate that a coding gain above 3.0 dB is achieved at moderate values of Pb(E), and the asymptotic coding gain is 3.7 dB.

[Figure: required SNR versus Pb(E) for the coded and uncoded systems, and the resulting coding gain (soft-decision case).]
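The approximations (12.26) and (12.30) used in Problem 12.11 can be evaluated in one line each (a sketch):

```python
def approx(A, B, dfree, p):
    # P(E) ~ A * 2^dfree * p^(dfree/2),  Pb(E) ~ B * 2^dfree * p^(dfree/2)
    return A * 2**dfree * p**(dfree / 2), B * 2**dfree * p**(dfree / 2)

print([round(v, 12) for v in approx(1, 2, 6, 0.01)])    # [6.4e-05, 0.000128]
print([round(v, 12) for v in approx(1, 2, 6, 0.001)])   # [6.4e-08, 1.28e-07]
```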

12.14 The IOWEF function of the (3, 1, 2) encoder of (12.1) is

A(W, X) = W X^7 / (1 − W X − W X^3),

and thus (12.39b) becomes

Pb(E) < B(X)|_{X=D0} = (1/k) ∂A(W, X)/∂W |_{X=D0, W=1} = X^7 / (1 − W X − W X^3)^2 |_{X=D0, W=1}.

For the DMC of Problem 12.4, D0 = 0.42275, and the above expression becomes Pb(E) < 9.5874 × 10^−3. If the DMC is converted to a BSC, then the resulting crossover probability is p = 0.091. Using (12.29) yields

Pb(E) < B(X)|_{X=2√(p(1−p))} = (1/k) ∂A(W, X)/∂W |_{X=2√(p(1−p)), W=1} = X^7 / (1 − W X − W X^3)^2 |_{X=2√(p(1−p)), W=1} = 3.7906 × 10^−1,

about a factor of 40 larger than the soft decision case.

12.16 For the optimum (2, 1, 7) encoder in Table 12.1(c), dfree = 10, Adfree = 1, and Bdfree = 2.

(a) From Table 12.1(c), γ = 6.99 dB.

(b) Using (12.26) yields P(E) ≈ Adfree 2^{dfree} p^{dfree/2} = 1.02 × 10^−7.

(c) Using (12.30) yields Pb(E) ≈ Bdfree 2^{dfree} p^{dfree/2} = 2.04 × 10^−7.

(d) For this encoder,

G^{-1} = [     D^2
           1 + D + D^2 ],

and the amplification factor is A = 4.

For the quick-look-in (2, 1, 7) encoder in Table 12.2, dfree = 9, Adfree = 1, and Bdfree = 1.

(a) From Table 12.2, γ = 6.53 dB.

(b) Using (12.26) yields P(E) ≈ Adfree 2^{dfree} p^{dfree/2} = 5.12 × 10^−7.

(c) Using (12.30) yields Pb(E) ≈ Bdfree 2^{dfree} p^{dfree/2} = 5.12 × 10^−7.
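Both evaluations of the bound B(X) = X^7/(1 − X − X^3)^2 in Problem 12.14 can be checked numerically (a sketch):

```python
from math import sqrt

def B(X):
    # B(X) = X^7 / (1 - X - X^3)^2
    return X**7 / (1 - X - X**3)**2

print(round(B(0.42275), 6))                # 0.009587  (soft-decision DMC, X = D0)
p = 0.091
print(round(B(2 * sqrt(p * (1 - p))), 4))  # 0.3791    (hard-decision BSC)
```

The ratio of the two values confirms the factor-of-40 comparison between the hard- and soft-decision cases.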


(d) For this encoder,

G^{-1} = [ 1
           1 ],

and the amplification factor is A = 2.

12.17 The generator matrix of a rate R = 1/2 systematic feedforward encoder is of the form G = [1  g(1)(D)]. Letting g(1)(D) = 1 + D + D^2 + D^5 + D^7 achieves dfree = 6 with Adfree = 1 and Bdfree = 1.

(a) The soft-decision asymptotic coding gain is γ = 4.77 dB.

(b) Using (12.26) yields P(E) ≈ Adfree 2^{dfree} p^{dfree/2} = 6.4 × 10^−5.

(c) Using (12.30) yields Pb(E) ≈ Bdfree 2^{dfree} p^{dfree/2} = 6.4 × 10^−5.

(d) For this encoder (and all systematic encoders),

G^{-1} = [ 1
           0 ],

and the amplification factor is A = 1.

12.18 The generator polynomial for the (15, 7) BCH code is g(X) = 1 + X^4 + X^6 + X^7 + X^8 and dg = 5. The generator polynomial of the dual code is

h(X) = (X^15 + 1)/(X^8 + X^7 + X^6 + X^4 + 1) = X^7 + X^6 + X^4 + 1,

and hence dh ≥ 4.

(a) The rate R = 1/2 code with composite generator polynomial g(D) = 1 + D^4 + D^6 + D^7 + D^8 has generator matrix

G(D) = [ 1 + D^2 + D^3 + D^4    D^3 ]

and dfree ≥ min(5, 8) = 5.

(b) The rate R = 1/4 code with composite generator polynomial

g(D) = g(D^2) + Dh(D^2) = 1 + D + D^8 + D^9 + D^12 + D^13 + D^14 + D^15 + D^16

has generator matrix

G(D) = [ 1 + D^2 + D^3 + D^4    1 + D^2 + D^3    D^3    D^3 ]

and dfree ≥ min(dg + dh, 3dg, 3dh) = min(9, 15, 12) = 9.

(d) For this encoder (and all systematic encoders) G−1 = and the amplification factor is A = 1. 12.18 The generator polynomial for the (15, 7) BCH code is g(X) = 1 + X 4 + X 6 + X 7 + X 8 and dg = 5. The generator polynomial of the dual code is h(X) = and hence dh ≥ 4. (a) The rate R = 1/2 code with composite generator polynomial g(D) = 1 + D4 + D6 + D7 + D8 has generator matrix G(D) = 1 + D2 + D3 + D4 D3 and df ree ≥ min(5, 8) = 5. (b) The rate R = 1/4 code with composite generator polynomial g(D) = g(D2 ) + Dh(D2 ) = 1 + D + D8 + D9 + D12 + D13 + D14 + D15 + D16 has generator matrix G(D) = 1 + D2 + D3 + D4 1 + D2 + D3 D3 D3 and df ree ≥ min(dg + dh , 3dg , 3dh ) = min(9, 15, 12) = 9. X 15 + 1 = X7 + X6 + X4 + 1 X8 + X7 + X6 + X4 + 1 1 0

18) = 13. 16) BCH code is g(X) = 1 + X + X 2 + X 3 + X 5 + X 7 + X 8 + X 9 + X 10 + X 11 + X 15 and dg = 7. X 15 + 1 = X 16 + X 12 + X 11 + X 10 + X 9 + X 4 + X + 1 g(X) S0 WX 3 L X2L S1 WXL S3 WXL WXL X2L S2 X2L S0 The generating function is given by Fi ∆i A(W. (a) The rate R = 1/2 code with composite generator polynomial g(D) = 1 + D + D2 + D3 + D5 + D7 + D8 + D9 + D10 + D11 + D15 has generator matrix G(D) = 1 + D + D4 + D5 1 + D + D2 + D3 + D4 + D5 + D7 and df ree ≥ min(7. 3dh ) = min(13. 12. L) = There are 3 cycles in the graph: i ∆ . 3dg . X.20 (a) The augmented state diagram is shown below. (b) The rate R = 1/4 code with composite generator polynomial g(D) = g(D2 ) + Dh(D2 ) = 1 + D + D2 + D3 + D4 + D6 + D9 + D10 + D14 + D16 + D18 + D19 + D20 + D21 + D22 + D23 has generator matrix G(D) = 1 + D + D4 + D5 1 + D2 + D5 + D6 + D8 1 + D + D2 + D3 + D4 + D5 + D7 1 + D5 and df ree ≥ min(dg + dh . 21.17 The generator polynomial for the (31. . 12) = 7. The generator polynomial of the dual code is h(X) = and hence dh ≥ 6.

A2 (W. X. Finally. 1− i Ci + i . and hence ∆2 = 1. L). the WEF A(W.18 Cycle 1: Cycle 2: Cycle 3: There is one pair of nontouching cycles: Cycle pair 1: (loop 1. Examining the series expansions of A1 (W. . X. L) = W X 7 L3 W X 7 L3 (1 − W XL) + W 2 X 8 L4 = 3 L2 + W 2 X 4 L3 + W XL) + W 2 X 4 L3 1 − (W X 1 − W XL − W X 3 L2 F1 = W X 7 L3 F2 = W 2 X 8 L4 . X. S 1 S2 S1 S1 S3 S2 S1 S3 S3 C1 = W X 3 L2 C2 = W 2 X 4 L3 C3 = W XL. X. L) is given by A(W. There are no more sets of nontouching cycles. and hence and the generating WEF’s Ai (W. L) above yields τ min = 5. Forward path 2 touches all the cycles. and A3 (W. Only cycle 3 does not touch forward path 1. loop 3) C1 C3 = W 2 X 4 L3 . L) = = (b) This code has df ree = 7. X. L) are given by: A1 (W. ∆ = = There are 2 forward paths: Forward path 1: Forward path 2: S 0 S1 S2 S0 S0 S 1 S 3 S 2 S 0 ∆1 = 1 − W XL. L) = = W X 3 L(1 − W XL) W X 3 L(1 − W XL) = ∆ 1 − W XL − W X 3 L2 3 2 6 3 W X L + W X L + W 3 X 7 L4 + (W 3 X 9 + W 4 X 8 ) L5 + (2 W 4 . so τ min is the minimum value of τ for which d(τ ) = df ree + 1 = 8. X.j Ci Cj 1 − (W X 3 L2 + W 2 X 4 L3 + W XL) + W 2 X 4 L3 . X. L) = = A2 (W. L). Therefore. X. X. X 10 + W 5 X 9 ) L6 + · · · W X 5 L2 W X 5 L2 (1 − W XL) + W 2 X 6 L3 = ∆ 1 − W XL − W X 3 L2 5 2 2 6 3 2 8 W X L + W X L + (W X + W 3 X 7 ) L4 + (2 W 3 X 9 + W 4 X 8 ) L5 +(W 3 X 11 + 3 W 4 X 10 + W 5 X 9 )L6 + · · · W 2 X 4 L2 W 2 X 4 L2 = ∆ 1 − W XL − W X 3 L2 2 4 2 W X L + W 3 X 5 L3 + (W 3 X 7 + W 4 X 6 ) L4 + (2 W 4 X 8 + W 5 X 7 ) L5 +(W 4 X 10 + 3 W 5 X 9 + W 6 X 8 )L6 · · · A3 (W.

the trellis diagram of Figure 12.21 For a BSC. 3 S3 1/001 0 0/11 5 S3 1/001 0 0/11 3 S3 1/001 0 0/11 6 S3 0 0/11 1/ 01 0 1/ 01 0 1/ 01 0 1 S1 4 S1 3 S1 5 1/ 01 0 S1 5 S1 1 10 0/ 1 10 0/ 1 10 0/ 1 10 0/ 1/ 10 0 2 S2 3 1/ 10 0 S2 6 1/ 10 0 1/11 1 1/11 1 1/11 1 S0 0/000 S0 0/000 S0 1/11 1 0/000 S0 0/000 S0 1/11 1 1 10 0/ S2 3 S2 7 S2 1 01 0/ 1 01 0/ 1 01 0/ 1 01 0/ 1 01 0/ 0/000 S0 0/000 S0 0/000 S0 0 r 2 011 100 3 110 4 010 4 110 6 010 4 001 6 (a) .6 in the book may be used to decode the three possible 21bit subequences using the Hamming metric. L). τ →∞ 12. τ 0 1 2 3 4 5 d(τ ) 3 4 5 6 7 8 Ad(τ ) 1 1 1 1 1 1 (d) From part (c) and by looking at the series expansion of A3 (W. The results are shown in the three figures below. it is the most likely to be correctly synchronized. Since the r used in the middle figure (b) below has the smallest Hamming distance (2) of the three subsequences. X.19 (c) A table of d(τ ) and Ad(τ ) is given below. it can be seen that lim d(τ ) = τ + 3.

20 2 S3 1/001 0 0/11 4 S3 1/001 0 0/11 4 S3 1/001 0 0/11 6 S3 0 0/11 1/ 01 0 1/ 01 0 1/ 01 0 0 S1 5 S1 1 S1 4 1/ 01 0 S1 1 S1 1 10 0/ 1 10 0/ 1 10 0/ 1 10 0/ 1/ 10 0 1 S2 3 1/ 10 0 S2 1 1/ 10 0 1/11 1 1/11 1 1/11 1 S0 0/000 S0 0/000 S0 1/11 1 0/000 S0 0/000 S0 1/11 1 1 10 0/ S2 5 S2 2 S2 1 01 0/ 1 01 0/ 1 01 0/ 1 01 0/ 1 01 0/ 0/000 S0 0/000 S0 0/000 S0 0 r 3 111 001 4 100 4 101 5 100 4 100 5 011 2 (b) .

21 2 S3 1/001 0 0/11 2 S3 1/001 0 0/11 3 S3 1/001 0 0/11 3 S3 0 0/11 1/ 01 0 1/ 01 0 1/ 01 0 1 S1 3 S1 5 S1 5 1/ 01 0 S1 6 S1 1 10 0/ 1 10 0/ 1 10 0/ 1 10 0/ 1/ 10 0 3 S2 4 1/ 10 0 S2 4 1/ 10 0 1/11 1 1/11 1 1/11 1 S0 0/000 S0 0/000 S0 1/11 1 0/000 S0 0/000 S0 1/11 1 1 10 0/ S2 6 S2 5 S2 1 01 0/ 1 01 0/ 1 01 0/ 1 01 0/ 1 01 0/ 0/000 S0 0/000 S0 0/000 S0 0 r 2 110 011 4 001 4 011 4 001 5 000 5 111 6 (c) .

00.Chapter 13 Suboptimum Decoding of Convolutional Codes 13. 11. as shown below. 01. 10 01 01 10 11 10 00 11 01 00 10 01 00 11 11 01 10 10 11 00 01 11 00 10 11 01 00 11 00 00 00 00 00 01 11 11 11 11 00 10 00 11 11 00 00 10 11 11 00 11 10 00 01 00 00 00 11 01 11 11 11 11 00 10 00 11 11 00 00 10 11 11 00 11 10 01 00 11 1 .13a.1 (a) Referring to the state diagram of Figure 11. 01. (b) For u = (1001). the corresponding codeword is v = (11. the code tree for an information sequence of length h = 4 is shown below. 11. 11).

00.2 13. Q. 00. 11. 11) and u = [1001]. Q − 1. . 13. where i is the binary input to the channel.5 [P (r = Q − 1 − j|0) + P (r = Q − 1 − j|1)] [using (a-1)] P (r = Q − 1 − j). . 01.434 results in the following integer metric table: 0 1 1 −9 −9 1 0 1 (b) Decoding r = (11.2 Proof: A binary-input. 11. P (1) = P (0) = 0. 01. 11. D.045 results in the metric table: 0 1 0. 01. . 10.3 (a) Computing the Fano metric with p = 0.5 and P (r = j) = = = = P (r = j|0)P (0) + P (r = j|1)P (1) 0. The output probability distribution may be computed as 1 (a-1) P (r = j) = i=0 P (r = j|i)P (i).434 0 1 Dividing by 0. For equally likely input signals. 01. The Viterbi algorithm requires fifteen steps to decode the same sequence. 01. Q-ary-output DMC is symmetric if P (r = j|0) = P (r = Q − 1 − j|1) for j = 0.974 −3.5 [P (r = j|0) + P (r = j|1)] 0. .974 0. (c) Decoding r = (11. 00. 1. 10.434 −3. E. 10. 00) using the stack algorithm results in the following sixteen steps: . 11) using the stack algorithm results in the following eight steps: Step 1 1(-2) 0(-18) Step 2 11(-6) 10(-6) 0(-18) Step 3 10(-6) 111(-14) 110(-14) 0(-18) Step 4 100(-4) 111(-14) 110(-14) 0(-18) 101(-24) Step 5 1001(-2) 111(-14) 110(-14) 0(-18) 1000(-22) 101(-24) Step 6 10010(0) Step 7 100100(-8) Step 8 1001000(-6) ˆ ˆ The decoded sequence is v = (11. 00.

01. 10.07 −2.07 −7. This agrees with the result of Problem 12.106 0.6. 01.314 0.443 0.268 02 03 04 14 13 12 11 0.106 yields the following integer metric table: 01 0 14 1 −167 02 12 −115 03 04 9 −3 −75 −30 14 13 12 11 −30 −75 −115 −167 −3 9 12 14 .66 −4.3 Step 1 1(-2) 0(-18) Step 2 11(4) 10(-16) 0(-18) Step 3 111(-4) 110(-4) 10(-16) 0(-18) Step 4 Step 5 Step 6 Step 7 Step 8 1110(-2) 110(-4) 11100(-10) 1100(-12) 1101(-12) 110(-4) 11100(-10) 1100(-12) 1101(-12) 10(-16) 10(-16) 10(-16) 1101(-12) 10(-16) 111000(-18) 0(-18) 0(-18) 10(-16) 111000(-18) 0(-16) 1111(-22) 1111(-22) 0(-18) 0(-18) 11000(-20) Step 12 Step 13 Step 14 Step 15 Step 16 1011(-12) 10110(-10) 101100(-18) 111000(-18) 1110000(-16) 111000(-18) 111000(-18) 111000(-18) 110100(-18) 110100(-18) 110100(-18) 110100(-18) 110100(-18) 0(-18) 0(-18) 0(-18) 0(-18) 0(-18) 11000(-20) 11000(-20) 11000(-20) 11000(-20) 11000(-20) 1010(-32) 1010(-32) 1010(-32) 1010(-32) 1010(-32) 100(-34) 100(-34) 100(-34) 100(-34) 100(-34) 1011000(-36) 1011000(-36) Step 9 11010(-10) 10(-16) 111000(-18) 0(-18) 11000(-20) Step 10 10(-16) 111000(-18) 110100(-18) 0(-18) 11000(-20) Step 11 101(-14) 111000(-18) 110100(-18) 0(-18) 11000(-20) 100(-34) ˆ ˆ The decoded sequence is v = (11. 00) and u = (1110).0845 P (r |v ) −R P (r ) Multiplying by 3/0.095 0.4 (a) Computing the Fano metric using M (r |v ) = log2 with R = 1/2 and P (01 ) = P (02 ) = P (03 ) = P (04 ) = results in 01 0 0.043 −2.494 1 −7. 00.494 P (11 ) = P (12 ) = P (13 ) = P (14 ) = 0. 11.268 −4.314 −0.66 −1.043 −0.218 0. 13.1025 0.106 −1.443 0.

7 From Example 13.6 As can be seen from the solution to Problem 13. the final decoded path never falls below the second stack entry (part (b)) or the fourth stack entry (part(c)).5. for a stacksize of 10 entries. 12 02 . 03 11 . the integer metric table is 0 1 1 −5 −5 1 0 1 . Thus. 01.4 (b) Decoding r = (12 11 . 00) and u = (1110). 12 01 . 11. This agrees with the result of Problem 12. 01. 01 13 . the final decoded path is not effected in either case. 03 01 . 10.5(b). 00. 13.3. 03 02 ) using the stack algorithm results in the following thirteen steps: Step 1 1(26) 0(-282) Step 2 11(52) 10(-256) 0(-282) Step 3 110(-9) 111(-106) 10(-256) 0(-252) Step 4 1100(-70) 111(-106) 1101(-167) 10(-256) 0(-282) Step 5 111(-106) 1101(-167) 11000(-173) 10(-256) 0(-282) Step 6 1110(-83) 11000(-173) 11000(-173) 10(-256) 0(-282) 1111(-348) Step 12 111000(-247) 10(-256) 0(-282) 110000(-348) 1111(-348) 1101000(-394) Step 7 1101(-167) 11100(-186) 11100(-186) 10(-256) 0(-282) 1111(-348) Step 13 1110000(-226) Step 8 11010(-143) 11000(-173) 11100(-186) 10(-256) 0(-282) 1111(-348) Step 9 11000(-173) 11100(-186) 110100(-204) 10(-256) 0(-282) 1111(-348) Step 10 11100(-186) 110100(-204) 10(-256) 0(-282) 110000(-331) 1111(-348) Step 11 110100(-204) 111000(-247) 10(-256) 0(-282) 110000(-331) 1111(-348) ˆ ˆ The decoded sequence is v = (11. 13.

which agrees with the result of Example 13. 011) and u = (11101). 101. 001.5. 010. . 110.5 (a) Decoding the received sequence r = (010. 001. 110. 011) using the stack bucket algorithm with an interval of 5 results in the following decoding steps: Interval 1 9 to 5 Step 1 Step 2 Step 3 Interval 2 4 to 0 Interval 3 -1 to -5 0(-3) Interval 4 -6 to -10 1(-9) 00(-6) 1(-9) 1(-9) Interval 5 -11 to -15 01(-12) 000(-12) 001(-15) 01(-12) 000(-12) 001(-15) 01(-12) 000(-12) 001(-15) 01(-12) 000(-12) 001(-15) 01(-12) 11100(-15) 000(-12) 001(-15) 01(-12) 11100(-15) 000(-12) 001(-15) 01(-12) 11100(-15) 000(-12) 001(-15) 01(-12) Interval 6 -16 to -20 Interval 7 -21 to -25 Step 4 11(-6) 10(-24) Step 5 111(-3) 110(-21) 10(-24) 1111(-18) 110(-21) 10(-24) 110(-21) 10(-24) Step 6 1110(0) Step 7 11101(3) 1111(-18) Step 8 111010(6) 1111(-18) 110(-21) 10(-24) Step 9 1110100(6) 1111(-18) 110(-21) 10(-24) ˆ ˆ The decoded sequence is v = (111. 100. 101. 100. 010.

101. 100. 011) using the stack bucket algorithm with an interval of 9 results in the following decoding steps. 011) and u = (11101). 010. 101. 110.6 (b) Decoding the received sequence r = (010. . Interval 1 8 to 0 Step 1 Step 2 Step 3 Interval 2 -1 to -9 0(-3) 1(-9) 00(-6) 1(-9) 1(-9) Interval 3 -10 to -18 Interval 4 -19 to -27 01(-12) 000(-12) 001(-15) 01(-12) 000(-12) 001(-15) 01(-12) 000(-12) 001(-15) 01(-12) 1111(-18) 000(-12) 001(-15) 01(-12) 11100(-15) 1111(-18) 000(-12) 001(-15) 01(-12) 11100(-15) 1111(-18) 000(-12) 001(-15) 01(-12) 11100(-15) 1111(-18) 000(-12) 001(-15) 01(-12) Step 4 11(-6) 10(-24) Step 5 111(-3) 110(-21) 10(-24) 110(-21) 10(-24) Step 6 1110(0) Step 7 11101(3) 110(-21) 10(-24) Step 8 111010(6) 110(-21) 10(-24) Step 9 1110100(9) 110(-21) 10(-24) ˆ ˆ The decoded sequence is v = (111. which agrees with the result of Example 13.5 and part (a). 100. 110. 010. 001. 001.

7. (ii) = 10 Step 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 Look LFB LFB LFB LFB LFB LFNB LFNB LFNB LFB LFB LFB LFB LFB LFB MF -3 -3 -6 -9 -12 -15 -12 -9 -6 -3 0 3 6 9 MB −∞ -6 -3 0 Node X X 0 00 000 00 0 X 1 11 111 1110 11101 111010 1110100 Metric 0 0 -3 -6 -9 -6 -3 0 -9 -6 -3 0 3 6 9 T 0 -10 -10 -10 -10 -10 -10 -10* -10 -10 -10 0 0 0 Stop * Not the first visit.8 (i) =5 Step 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 Look LFB LFB LFB LFNB LFB LFB LFB LFB LFNB LFNB LFNB LFB LFB LFB LFB LFB LFB MF -3 -3 -6 -9 -3 -6 -9 -12 -15 -12 -9 -6 -3 0 3 6 9 MB −∞ 0 −∞ -9 -3 0 Node X X 0 X X 0 00 000 00 0 X 1 11 111 1110 11101 111010 1110100 Metric 0 0 -3 0 0 -3 -6 -9 -6 -3 0 -9 -6 -3 0 3 6 9 T 0 -5 -5 -5 -10 -10 -10 -10 -10 -10 -10 -10 -10 -5 0 0 5 Stop ˆ ˆ The decoded sequence is v = (111.7 and 13. 100. 010. Therefore no tightening is done. 110. .5. Note: The final decoded path agrees with the results of Examples 13. 001.8. The number of computations in both cases is reduced compared to Examples 13.7 13.8 and Problem 13. 011) and u = [11101] which agrees with the result of Example 13. 101.7 and 13.

13.20 (a) g(1)(D) = 1 + D + D^3 + D^5 + D^8 + D^9 + D^10 + D^11. Writing the coefficients of g(1)(D) as (g0, g1, ..., g11) = (1, 1, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1), the parity-check matrix is the semi-infinite matrix

H = [ g0 1
      g1 0 g0 1
      g2 0 g1 0 g0 1
      g3 0 g2 0 g1 0 g0 1
      . . .                 ]

in which row l contains g_l, g_{l-1}, ..., g0 on the information positions and a single 1 on the lth parity position.

(b) Using the parity triangle of the code, we obtain:
s0  = e0^(0) + e0^(1)
s1  = e0^(0) + e1^(0) + e1^(1)
s2  = e1^(0) + e2^(0) + e2^(1)
s3  = e0^(0) + e2^(0) + e3^(0) + e3^(1)
s4  = e1^(0) + e3^(0) + e4^(0) + e4^(1)
s5  = e0^(0) + e2^(0) + e4^(0) + e5^(0) + e5^(1)
s6  = e1^(0) + e3^(0) + e5^(0) + e6^(0) + e6^(1)
s7  = e2^(0) + e4^(0) + e6^(0) + e7^(0) + e7^(1)
s8  = e0^(0) + e3^(0) + e5^(0) + e7^(0) + e8^(0) + e8^(1)
s9  = e0^(0) + e1^(0) + e4^(0) + e6^(0) + e8^(0) + e9^(0) + e9^(1)
s10 = e0^(0) + e1^(0) + e2^(0) + e5^(0) + e7^(0) + e9^(0) + e10^(0) + e10^(1)
s11 = e0^(0) + e1^(0) + e2^(0) + e3^(0) + e6^(0) + e8^(0) + e10^(0) + e11^(0) + e11^(1)

(b) From the parity triangle, we see that the maximum number of orthogonal parity checks that can be formed on e0^(0) is 4: {s0, s1, s2 + s11, s5}.

(c) From the parity triangle shown below, we can see that this code is not self-orthogonal (for instance, s3 and s5 both check information error bit e2^(0)):
[Parity triangle of g(1)(D): successive rows are shifted copies of the coefficient sequence 1 1 0 1 0 1 0 0 1 1 1 1.]

(d) So, because of (c), this code is not completely orthogonalizable. Thus, we arrive at the conclusion that dmin = 7.

(c') The effect of error bits prior to time unit l is removed; the modified syndrome bits s_{l+1}, ..., s_{l+11} are given by the syndrome equations above with the time index shifted by l, that is,
s_{l+i} = e_{l+i}^(1) + sum over j with g_j = 1 and j <= i of e_{l+i-j}^(0),   1 <= i <= 11.

13.28 (a) Using either of the methods described in Examples 13.15 and 13.16,

13.32 (a) According to Table 13.2(a), there is only one rate R = 1/2 self-orthogonal code with dmin = 9: the (2,1,35) code with
g(1)(D) = 1 + D^7 + D^10 + D^16 + D^18 + D^30 + D^31 + D^35 and m(a) = 35.
(b) According to Table 13.3(a), there is only one rate R = 1/2 orthogonalizable code with dmin = 9: the (2,1,21) code with
g(1)(D) = 1 + D^11 + D^13 + D^16 + D^17 + D^19 + D^20 + D^21 and m(b) = 21.
(c) The best rate R = 1/2 systematic code with dmin = 9 is the (2,1,15) code with
g(1)(D) = 1 + D + D^3 + D^5 + D^7 + D^8 + D^11 + D^13 + D^14 + D^15 and m(c) = 15*.
* This generator polynomial can be found in Table 13.1 of the first edition of this text.
(d) According to Table 12.1(c), the best rate R = 1/2 nonsystematic code with dfree = 9 (actually, 10 in this case) is the (2,1,6) code with
g(0)(D) = 1 + D + D^2 + D^3 + D^6, g(1)(D) = 1 + D^2 + D^3 + D^5 + D^6, and m(d) = 6.
Thus m(d) < m(c) < m(b) < m(a).

13.34 (a) This self-orthogonal code with g(1)(D) = 1 + D^2 + D^7 + D^13 + D^16 + D^17 yields the parity triangle (arrows mark the check sums orthogonal on e0^(0)):

-> 1 0 1 0 0 0 0 1 0 0 0 0 0 1 0 0 1 1
   1 0 1 0 0 0 0 1 0 0 0 0 0 1 0 0 1
-> 1 0 1 0 0 0 0 1 0 0 0 0 0 1 0 0
   1 0 1 0 0 0 0 1 0 0 0 0 0 1 0
   1 0 1 0 0 0 0 1 0 0 0 0 0 1
   1 0 1 0 0 0 0 1 0 0 0 0 0
   1 0 1 0 0 0 0 1 0 0 0 0
-> 1 0 1 0 0 0 0 1 0 0 0
   1 0 1 0 0 0 0 1 0 0
   1 0 1 0 0 0 0 1 0
   1 0 1 0 0 0 0 1
   1 0 1 0 0 0 0
   1 0 1 0 0 0
-> 1 0 1 0 0
   1 0 1 0
   1 0 1
-> 1 0
-> 1

The orthogonal check sums on e0^(0) are {s0, s2, s7, s13, s16, s17}.

(b) The block diagram of the feedback majority-logic decoder for this code is:
[Figure: r^(0) and r^(1) feed a syndrome former ending in syndrome stage s17; the orthogonal check sums enter a majority gate whose output e^^(0) corrects r17^(0) to produce u^, with feedback to the syndrome register.]

13.37 (a) This orthogonalizable code with g(1)(D) = 1 + D^6 + D^7 + D^9 + D^10 + D^11 yields the parity triangle:

1 0 0 0 0 0 1 1 0 1 1 1
1 0 0 0 0 0 1 1 0 1 1
1 0 0 0 0 0 1 1 0 1
1 0 0 0 0 0 1 1 0
1 0 0 0 0 0 1 1
1 0 0 0 0 0 1
1 0 0 0 0 0
1 0 0 0 0
1 0 0 0
1 0 0
1 0
1

The orthogonal check sums on e0^(0) are {s0, s6, s7, s9, s1 + s3 + s10, s4 + s8 + s11}.

(b) The block diagram of the feedback majority-logic decoder for this code is:
[Figure: r^(0) and r^(1) feed a syndrome former ending in syndrome stage s11; the orthogonal check sums, including the composite sums s1 + s3 + s10 and s4 + s8 + s11, enter a majority gate whose output e^^(0) corrects r11^(0) to produce u^, with feedback to the syndrome register.]

Chapter 16
Turbo Coding

16.1 Let u = [u0, u1, ..., u_{K-1}] be the input to encoder 1. Then v(1) = [v0^(1), v1^(1), ..., v_{K-1}^(1)] = u G(1) is the parity sequence produced by encoder 1, where G(1) is the K x K parity generator matrix for encoder 1. For example, for the (2,1,2) systematic feedback encoder with G(1)(D) = (1 + D^2)/(1 + D + D^2), the parity generator matrix is given by

        [ 1 1 1 0 1 1 0 1 1 0 ...
G(1) =      1 1 1 0 1 1 0 1 1 ...
              1 1 1 0 1 1 0 1 ...
                . . .             ]   (K x K)

Now let u' = [u'0, u'1, ..., u'_{K-1}] be the (interleaved) input to encoder 2. Then we can write u' = u T, where T is a K x K permutation matrix with a single one in each row and column. (If T is the identity matrix, then u' = u.) The output of encoder 2 is then given by v(2) = [v0^(2), v1^(2), ..., v_{K-1}^(2)] = u' G(2) = u T G(2), where G(2) is the K x K parity generator matrix for encoder 2.

16.2 Consider two input sequences u1 and u2, both of length K. The corresponding encoder outputs in each case are given by the 3-tuples
[u1, u1 G(1), u1 T G(2)] = y1 and [u2, u2 G(1), u2 T G(2)] = y2.
Now consider the input sequence u = u1 + u2. The encoder output 3-tuple in this case is given by
[u, u G(1), u T G(2)] = [u1 + u2, u1 G(1) + u2 G(1), u1 T G(2) + u2 T G(2)]
                      = [u1, u1 G(1), u1 T G(2)] + [u2, u2 G(1), u2 T G(2)] = y1 + y2.
Thus superposition holds and the turbo encoder is linear.

16.3 Define C1 to be the (7,4,3) Hamming code and C2 to be the (8,4,4) extended Hamming code.
C1 codeword list:
0000000 1101000 0111001 1010001
1011010 0110010 1100011 0001011
1110100 0011100 1001101 0100101
0101110 1000110 0010111 1111111
The CWEF for C1 is given by
A1^C1(Z) = 3Z^2 + Z^3
A2^C1(Z) = 3Z + 3Z^2
A3^C1(Z) = 1 + 3Z
A4^C1(Z) = Z^3
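The C1 CWEF above can be checked by enumerating all 16 codewords of a systematic (7,4) Hamming code. The parity submatrix P below is one choice consistent with the codeword list above (parity bits first); it is an inferred reconstruction, not something stated in the original solution.

```python
from itertools import product
from collections import defaultdict

# Parity rows of a systematic (7,4) Hamming code: codeword = (u*P mod 2 | u).
# This particular P is inferred to match the codeword list above.
P = [(1, 1, 0), (1, 1, 1), (0, 1, 1), (1, 0, 1)]

def cwef():
    """Return {input weight w: {parity weight z: count}} over all nonzero inputs."""
    A = defaultdict(lambda: defaultdict(int))
    for u in product((0, 1), repeat=4):
        parity = [sum(u[i] * P[i][j] for i in range(4)) % 2 for j in range(3)]
        w, z = sum(u), sum(parity)
        if w > 0:
            A[w][z] += 1
    return {w: dict(zs) for w, zs in A.items()}

A = cwef()
print(A[1])  # {2: 3, 3: 1}  ->  A_1^C1(Z) = 3Z^2 + Z^3
```

The same enumeration reproduces A_2^C1(Z) = 3Z + 3Z^2, A_3^C1(Z) = 1 + 3Z, and A_4^C1(Z) = Z^3.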

C2 codeword list:
00000000 11010001 01110010 10100011
10110100 01100101 11000110 00010111
11101000 00111001 10011010 01001011
01011100 10001101 00101110 11111111
The CWEF for C2 is given by
A1^C2(Z) = 4Z^3
A2^C2(Z) = 6Z^2
A3^C2(Z) = 4Z
A4^C2(Z) = Z^4
The CWEF for the PCBC is given by equation (16.23) as
Aw^PC(Z) = Aw^C1(Z) Aw^C2(Z) / C(K, w):
A1^PC(Z) = (3Z^2 + Z^3)(4Z^3)/4 = 3Z^5 + Z^6
A2^PC(Z) = (3Z + 3Z^2)(6Z^2)/6 = 3Z^3 + 3Z^4
A3^PC(Z) = (1 + 3Z)(4Z)/4 = Z + 3Z^2
A4^PC(Z) = (Z^3)(Z^4)/1 = Z^7
The bit CWEF's are given by Bw^PC(Z) = (w/K) Aw^PC(Z):
B1^PC(Z) = (1/4)(3Z^5 + Z^6) = .75Z^5 + .25Z^6
B2^PC(Z) = (2/4)(3Z^3 + 3Z^4) = 1.5Z^3 + 1.5Z^4
B3^PC(Z) = (3/4)(Z + 3Z^2) = .75Z + 2.25Z^2
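The uniform-interleaver combination in equation (16.23) is a polynomial product followed by division by C(K, w), which is easy to mechanize. The helper names below are ours, not the text's; exact rational arithmetic avoids decimal rounding.

```python
from math import comb
from fractions import Fraction

def poly_mul(a, b):
    """Multiply two weight enumerators stored as {exponent: coefficient}."""
    out = {}
    for ea, ca in a.items():
        for eb, cb in b.items():
            out[ea + eb] = out.get(ea + eb, 0) + ca * cb
    return out

def pcbc_cwef(a1, a2, K, w):
    """A_w^PC(Z) = A_w^C1(Z) * A_w^C2(Z) / C(K, w)  (uniform interleaver)."""
    prod = poly_mul(a1, a2)
    return {e: Fraction(c, comb(K, w)) for e, c in prod.items()}

# C1 = (7,4) Hamming, C2 = (8,4) extended Hamming, K = 4:
A1_C1 = {2: 3, 3: 1}   # 3Z^2 + Z^3
A1_C2 = {3: 4}         # 4Z^3
A1_PC = pcbc_cwef(A1_C1, A1_C2, 4, 1)   # A_1^PC(Z) = 3Z^5 + Z^6
```

The same call with the weight-2 and weight-3 enumerators reproduces 3Z^3 + 3Z^4 and Z + 3Z^2.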

29Z 8 + 2.81Z 9 + 0.25Z 6 ) + W 2 (1.82Z 6 + 2. set W = Z = X: AP C (X) = X 4 + 6X 5 + 6X 6 + X 7 + X 11 B P C (X) = .09Z 4 + 2.21Z 4 + 17.57Z 4 + 5.97Z 9 + 0.64Z 2 + 5.65Z 6 + 5.14Z 5 + 29.14Z 3 + 14.48Z 3 + 2. Z) = W (0.07 + 0.57Z 3 + 1.61Z 4 + 8.14Z 8 + 5.86Z 5 + 6.43Z 5 + 7.4 The codeword IRWEF can be found from equation (16.09Z 5 + 2.61Z 6 + .25Z 2 ) + W 4 Z 7 To find the WEF’s.45Z 2 + 2.57Z 8 + 1.07Z 6 + 5. Z) = W (4.04 + 0.32Z 4 + 0.57Z 7 + 5. where w is the exponent of W.32Z 2 + 0.48Z 7 3.93Z 2 + 3.01Z 12 ) + W 3 (0.64Z 10 + 0.84Z 4 + 9.25X 7 + X 11 16.29Z 2 + 2.5Z 3 + 1.49Z 4 + 2.42Z 6 + 3. Z) = W (.06Z 6 ) + W 2 (0.64Z 2 + 1.24Z 10 ) + W 4 (1.14Z 9 + 0.93Z 9 + 0.003 + 0.38Z 5 + 0.64Z 3 + 0.16Z + 0.57Z 7 + 1.57Z 9 + 1.75Z 7 + 0.90Z 7 + 0.29Z 10 ) + W 7 (0.90Z 3 + 2.43Z 11 + 0.61Z 8 ) + W 5 (0.07Z 6 + 15.93Z 9 + 0.14Z 4 + 15.97Z 10 .05Z 6 + 3Z 7 + 4.40Z 10 + 0.48Z 10 + 0.29Z 3 + 5.04Z 12 ) + W 3 (0.56Z 4 + 0.97Z 5 + 1.48Z 8 + 0.86Z 9 + 1.24Z 2 + 0.5Z 4 + 3Z 5 + 0.43Z 11 + 0.57Z 5 + 7.43Z + 1.5Z 6 ) + W 2 (1.04Z 12 ) + W 6 (0.75X 5 + 2.43Z 6 + 3Z 7 + 3. Z) = W (3Z 5 + Z 6 ) + W 2 (3Z 3 + 3Z 4 ) + W 3 (Z + 3Z 2 ) + W 7 Z 7 B P C (W.32Z + 1.65Z 6 + 8.4 4 4 P B4 C (Z) = (Z 7 ) = Z 7 The IRWEF’s are given by AP C (W.25X 6 + . B P C (W.21Z 8 ) + W 5 (0.64Z 5 + 4.25Z 5 4.43Z 7 + 14.57Z 5 + 14.21Z 3 + 8. Z) and divide by w/8.43Z 6 + 3.5Z 8 ) + W 8 Z 12 To find the bit IRWEF.83Z 8 + 0.32Z 4 + 3Z 5 + 6.27Z 11 + 0.40Z 2 + 3.07Z 12 ) + W 6 (0.43Z + 0.5Z 4 ) + W 3 (.75Z 5 + .75Z + 2.34) as follows: AP C (W.79Z 7 + 5.11Z 11 + 0.03 + 0.30Z 8 + 1.29Z 4 + 3.32Z 8 + 3. take each term in the expression for AP C (W.14Z 7 + 3.29Z 6 + 17.75X 4 + 3.86Z 3 + 3.86Z 7 + 1.93Z 10 + 0.29Z 9 + 0.64Z 10 ) + W 4 (3.97Z 8 + 1.

79X 10 + 44. substitute X=W=Z into the equations for the IRWEF’s. AP C (X) = 0.21X 6 + 3.48X 4 + 1.07X 3 + 1.5 Consider using an h-repeated (n.84X 7 + 9.14X 15 + 3X 16 + 0.82X 13 + 5.79X 11 + 20. k. 4h) PCBC.96X 18 + X 20 16.29X 8 + 45.5 W 7 (0. For the h = 4-repeated PCBC. dmin ) = (7. as shown below. forming an (h[2n − k]. The output consists of the original input bits u.29X 18 + X 20 B P C (X) = 0. Z) = [1 + W (3Z 2 + Z 3 ) + W 2 (3Z + 3Z 2 ) + W 3 (1 + 3Z) + W 4 Z 3 ]4 − 1 = W (12Z 2 + 4Z 3 ) + W 2 (12Z + 12Z 2 + 54Z 4 + 36Z 5 + 6Z 6 ) + W 3 (4 + 12Z + 108Z 3 + 144Z 4 + 36Z 5 + 108Z 6 + 108Z 7 + 36Z 8 + 4Z 9 ) + W 4 (90Z 2 + 232Z 3 + 90Z 4 + 324Z 5 + 540Z 6 + 252Z 7 + 117Z 8 + 108Z 9 + 54Z 10 + 12Z 11 + Z 12 ) + W 5 (36Z + 144Z 2 + 108Z 3 + 432Z 4 + 1188Z 5 + 780Z 6 + 468Z 7 + 648Z 8 + 432Z 9 + 120Z 10 + 12Z 11 ) + W 6 (6 + 36Z + 54Z 2 + 324Z 3 + 1296Z 4 + 1296Z 5 + 918Z 6 + 1836Z 7 + 1620Z 8 + 556Z 9 + 66Z 10 ) + W 7 (144Z 2 + 780Z 3 + 1188Z 4 + 1080Z 5 + 2808Z 6 + 3456Z 7 + 1512Z 8 + 324Z 9 + 108Z 10 + 36Z 11 + 4Z 12 ) + W 8 (36Z + 252Z 2 + 540Z 3 + 783Z 4 + 2592Z 5 + 4464Z 6 + 2592Z 7 + 783Z 8 + 540Z 9 + 252Z 10 + 36Z 11 ) + .46X 14 + 5. 3) Hamming code as the constituent code.04X 14 + 0.71X 12 + 5. the IRWEF of the (28.94Z 8 ) + W 8 Z 12 To find the WEF’s.96X 15 + 2. 4.14X 12 + 9.07X 17 + 1.04X 17 + 0.21X 9 + 66.96X 8 + 23.71X 9 + 33.63Z 7 + 3.19X 11 + 10. k) = (10h.39X 10 + 21.71X 5 + 5.45X 5 + 1.71X 4 + 7.20X 16 + 0.16) constituent code is A(W. and the parity bits from the interleaved encoder v(2) . the parity bits from the non-interleaved encoder v(1) .61X 6 + 11X 7 + 22.04Z 6 + 2.03X 3 + 0.71X 13 + 9.

4Z 9 + 16.316Z 17 + 17.771Z 9 + 99.297Z 2 + 2.033Z 22 .029 + 0.637Z 9 + 186.89Z 14 + 360.771Z 12 + 48.615Z 8 + 315.429Z 15 + 3.171Z + 0.736Z 6 + 112.945Z 5 + 38.2Z 4 + 10. using AP C (Z) = w is the interleaver (and input) length.514Z 17 + 0.297Z 19 + 5.363Z 11 + 551.659Z 21 + 0.156Z 15 + 73.686Z 10 + 83.242Z 5 + 50.229Z 14 + 15.343Z 13 + 35.989Z 7 + 140.176Z 17 + 80.385Z 10 + 579.057Z 8 + 61. The IRWEF’s of the (40.459Z 8 + 194.268Z 18 + 8.914Z 6 + 61.549Z 7 + 160.903Z 10 + 257.4Z 3 + 1.527Z 4 + 14.155Z 20 + 0.8Z 5 + 18Z 6 + 8.857Z 16 + 0.6Z 11 + 0.475Z 6 + 54.138Z 22 + 0.697Z 11 + 294.2Z 10 + 3.16) PCBC are found. to be: K w −1 [Aw (Z)]2 .273Z 14 + 117.314Z 11 + 54.001Z 24 AP C (Z) = 5 0. where K = hk = 16 AP C (Z) = 1 AP C (Z) = 2 AP C (Z) = 3 9Z 4 + 6Z 5 + Z 6 1.686Z 5 + 23.670Z 20 + 0.257Z 2 + 1.901Z 18 + 27.831Z 21 + 0.543Z 3 + 6.029Z 18 AP C (Z) = 4 4.844Z 16 + 36.5Z 8 + 32.505Z 12 + 611.2Z 2 + 2.3Z 12 0.088Z 16 + 158.802Z 13 + 540.099Z 9 + 550.013Z 23 + 0.686Z 4 + 6.831Z 13 + 151.451Z 4 + 22.229Z 19 + 3.791Z 15 + 238.4Z 7 + 25.714Z 7 + 56.374Z 3 + 6.6 W 9 (4 + 36Z + 108Z 2 + 324Z 3 + 1512Z 4 + 3456Z 5 + 2808Z 6 + 1080Z 7 + 1188Z 8 + 780Z 9 + 144Z 10 ) + W 10 (66Z 2 + 556Z 3 + 1620Z 4 + 1836Z 5 + 918Z 6 + 1296Z 7 + 1296Z 8 + 324Z 9 + 54Z 10 + 36Z 11 + 6Z 12 ) + W 11 (12Z + 120Z 2 + 432Z 3 + 648Z 4 + 468Z 5 + 780Z 6 + 1188Z 7 + 432Z 8 + 108Z 9 + 144Z 10 + 36Z 11 ) + W 12 (1 + 12Z + 54Z 2 + 108Z 3 + 117Z 4 + 252Z 5 + 540Z 6 + 324Z 7 + 90Z 8 + 232Z 9 + 90Z 10 ) + W 13 (4Z 3 + 36Z 4 + 108Z 5 + 108Z 6 + 36Z 7 + 144Z 8 + 108Z 9 + 12Z 11 + 4Z 12 ) + W 14 (6Z 6 + 36Z 7 + 54Z 8 + 12Z 10 + 12Z 11 ) + W 15 (4Z 9 + 12Z 10 ) + W 16 Z 12 .389Z 12 + 216.

05Z 19 + 4.363Z 13 + 550.343Z 11 + 54.06Z 12 + 2201.636Z 19 + 1.549Z 17 + 50.185Z 17 + 67.74Z 12 + 1130.116Z 4 + 18.971Z 21 + 0.001 + 0.877Z 9 + 1316.714Z 17 + 23.686Z 10 + 1130.155Z 4 + 8.41Z 3 + 7.74Z 13 + 1316.189Z 7 + 341.273Z 10 + 216.544Z 4 + 9.025Z 23 + 0.101Z 22 AP C (Z) = 9 0.451Z 20 AP C (Z) = 13 0.165Z 5 + 65.971Z 3 + 5.92Z 14 + 1101.374Z 21 + 0.243Z 22 + 0.004 + 0.297Z 22 AP C (Z) = 12 0.029Z 6 + 0.3Z 11 + 1702.165Z 19 + 0.659Z 3 + 5.901Z 6 + 158.333Z 8 + 694.033Z 2 + 0.7 AP C (Z) = 6 0.964Z 19 + 5.268Z 6 + 36.906Z 3 + 4.219Z 20 + 0.928Z 9 + 682.243Z 8 + 1101.429Z 9 + 35.219Z 17 + 65.59Z 13 + 682.001Z 24 AP C (Z) = 8 0.101Z 2 + 1.39Z 14 + 795.636Z 5 + 83.242Z 19 + 6.5Z 10 + 1462.813Z 20 AP C (Z) = 10 0.802Z 11 + 551.92Z 10 + 2064.736Z 18 + 14.771Z 12 + 83.615Z 18 + 17.844Z 8 + 117.686Z 14 + 891.013Z + 0.803Z 16 + 255.099Z 15 + 160.74Z 11 + 2743.219Z 7 + 597.391Z 14 + 533.189Z 22 + 0.99Z 13 + 1874.462Z 8 + 795.637Z 15 + 140.903Z 14 + 194.01Z 9 + 1874.831Z 3 + 3.326Z 7 + 456.176Z 7 + 238.5Z 14 + 694.771Z 15 + 56.6Z 13 + 993.989Z 17 + 38.914Z 18 + .955Z 4 + 25.39Z 10 + 2201.544Z 20 AP C (Z) = 7 1.821Z 6 + 192.138Z 2 + 0.189Z 2 + 0.314Z 13 + 99.929Z 16 + 133.116Z 20 + 0.185Z 7 + 454.7Z 12 + 2064.219Z 4 + 17.221Z 15 + 341.391Z 10 + 1030.385Z 14 + 315.297Z 5 + 80.674Z 9 + 993.514Z 7 + 3.221Z 9 + 1194.089Z 6 + 189.054Z 23 + 0.3Z 13 + 1194.156Z 9 + 151.67Z 4 + 27.906Z 21 + 0.475Z 18 + 22.389Z 12 + 257.59Z 11 + 1269.831Z 11 + 294.674Z 15 + 597.05Z 5 + 61.316Z 7 + 73.355Z 17 + 43.527Z 5 + 67.326Z 17 + 61.527Z 20 + 2.821Z 18 + 25.41Z 21 + 0.307Z 18 + 9.054Z + 0.001 + 0.243Z 16 + 169.439Z 6 + 169.813Z 4 + 19.791Z 9 + 540.355Z 7 + 345.307Z 6 + 255.01Z 15 + 456.686Z 14 + 61.928Z 15 + 345.459Z 16 + 54.857Z 8 + 15.803Z 8 + 891.439Z 18 + 18.89Z 10 + 611.615Z 16 + 112.877Z 15 + 454.697Z 13 + 186.089Z 18 + 19.189Z 17 + 83.615Z 6 + 133.229Z 10 + 48.505Z 12 + 579.057Z 16 + 61.004Z 24 AP C (Z) = 11 0.025Z + 0.964Z 5 + 43.99Z 11 + 1702.7Z 12 + 1462.333Z 16 + 189.088Z 8 + 
360.929Z 8 + 533.945Z 19 + 4.74Z 12 + 1030.229Z 5 + 17.462Z 16 + 192.6Z 11 + 1269.955Z 20 + 1.527Z 19 + 7.243Z 2 + 0.

+ 2Z^22
A15^PC(Z) = Z^18 + 6Z^19 + 9Z^20
A16^PC(Z) = Z^24.
16.6 Using a 4x4 row-column (block) interleaver, the minimum distance of the PCBC is 5. This comes from a weight-1 input, resulting in wH(u) = 1, wH(v(1)) = 2, and wH(v(2)) = 2, where wH(x) is the Hamming weight of x.

16.7 On page 791 in the text, it is explained that "any multiple error event in a codeword belonging to a terminated convolutional code can be viewed as a succession of single error events separated by sequences of 0's." Another way to look at it is that for convolutional codewords, we start in the zero state and end in the zero state, and any multiple error event can be seen as a series of deviations from the zero state. These deviations from the zero state can be separated by any amount of time spent in the zero state. For example, the codeword could have a deviation starting and another deviation ending at the same zero state, or the codeword could stay in the zero state for several input zeros. Likewise, the first input bit could take the codeword out of the zero state, or the last input bit could be the one to return the codeword to the zero state. Now, if there are h error events with total length lambda in a block codeword of length K, then there must be K - lambda times that state 0 occurs, i.e., K - lambda 0's in the codeword. Since there are h events, there are K - lambda + h places where an error event can start, since the remaining lambda - h places are then determined by the error events. Thus, the number of ways to choose h elements from K - lambda + h places gives the multiplicity of block codewords for h error events.

16.8 The IRWEF and WEF for the PCCC in Example 16.6 are found in the same manner as for a PCBC. The following combines the equations in (16.49) for the CWEF's:
A^PC(W, Z) = W^2(4Z^8 + 4Z^10 + Z^12) + W^3(...) + ... + W^9 Z^12
[The W^3 through W^7 coefficient lists are damaged in this copy; the weight-2 term W^2(4Z^8 + 4Z^10 + Z^12) and the final term W^9 Z^12 are as shown, and the corresponding bit enumerators appear in Problem 16.9 below.]
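The multiplicity count at the end of 16.7 is a single binomial coefficient, C(K - lambda + h, h), and can be sanity-checked directly. The function name below is ours, introduced only for illustration.

```python
from math import comb

def event_multiplicity(K, lam, h):
    """Number of ways to place h error events of total length lam
    in a terminated convolutional codeword spanning K trellis branches."""
    return comb(K - lam + h, h)

# e.g. two error events of total length 4 in a K = 10 block:
print(event_multiplicity(10, 4, 2))  # C(8, 2) = 28
```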

16.9 The bit IRWEF is given by:
B^PC(W, Z) = W^2(0.89Z^8 + 0.89Z^10 + 0.22Z^12) + W^3(0.60Z^4 + 2.25Z^6 + 3.30Z^8 + 2.25Z^10 + 0.60Z^12) + W^4(0.41Z^4 + 2.63Z^6 + 4.71Z^8 + 2.07Z^10 + 1.73Z^12 + 0.30Z^14 + 0.15Z^16) + W^5(0.19Z^4 + 1.23Z^6 + 3.54Z^8 + 5.18Z^10 + 3.79Z^12 + 0.99Z^14 + 0.08Z^16) + W^6(0.02Z^4 + 0.69Z^6 + 5.43Z^8 + 8.30Z^10 + 3.55Z^12) + W^7(4.23Z^8 + 2.42Z^10 + 0.35Z^12) + W^9 Z^12

The bit WEF is given by:
B^PC(X) = 0.60X^7 + 0.41X^8 + 2.43X^9 + 3.55X^10 + 4.53X^11 + 6.29X^12 + 5.79X^13 + 7.73X^14 + 10.02X^15 + 10.02X^16 + 6.20X^17 + 3.85X^18 + 1.33X^19 + 0.15X^20 + 1.08X^21

16.10 Redrawing the encoder block diagram and the IRWEF state diagram for the reversed generators, we see that the state diagram is essentially the same except that W and Z are switched. Thus, we can use equation (16.41) for the new IRWEF with W and Z reversed:
A(W, Z, L) = L^3 W^2 Z^3 + L^4 W^4 Z^2 + L^5(W^4 Z^3 + W^2 Z^4) + L^6(2W^4 Z^3 + W^4 Z^4) + L^7(W^2 Z^5 + 2W^4 Z^4 + W^4 Z^5 + W^6 Z^2) + L^8(3W^4 Z^4 + 2W^4 Z^5 + W^4 Z^6 + 2W^6 Z^3) + L^9(W^2 Z^6 + 3W^4 Z^5 + 2W^4 Z^6 + W^4 Z^7 + 3W^6 Z^3 + 2W^6 Z^4).
After dropping all terms of order larger than L^9, the single-error event enumerators are obtained as follows:

A_{2,3}^(1)(Z) = Z^3    A_{2,5}^(1)(Z) = Z^4    A_{2,7}^(1)(Z) = Z^5    A_{2,9}^(1)(Z) = Z^6
A_{4,4}^(1)(Z) = Z^2    A_{4,5}^(1)(Z) = Z^3    A_{4,6}^(1)(Z) = 2Z^3 + Z^4
A_{4,7}^(1)(Z) = 2Z^4 + Z^5    A_{4,8}^(1)(Z) = 3Z^4 + 2Z^5 + Z^6    A_{4,9}^(1)(Z) = 3Z^5 + 2Z^6 + Z^7
A_{6,7}^(1)(Z) = Z^2    A_{6,8}^(1)(Z) = 2Z^3    A_{6,9}^(1)(Z) = 3Z^3 + 2Z^4

The double-error event IRWEF is
A^(2)(W, Z, L) = [A(W, Z, L)]^2 = L^6 W^4 Z^6 + 2L^7 W^6 Z^5 + L^8(2W^4 Z^7 + 2W^6 Z^6 + W^8 Z^4) + L^9(6W^6 Z^6 + 2W^6 Z^7 + 2W^8 Z^5) + ...
Dropping all terms of order greater than L^9 gives the double-error event enumerators:

A_{4,6}^(2)(Z) = Z^6    A_{4,8}^(2)(Z) = 2Z^7
A_{6,7}^(2)(Z) = 2Z^5    A_{6,8}^(2)(Z) = 2Z^6    A_{6,9}^(2)(Z) = 6Z^6 + 2Z^7
A_{8,8}^(2)(Z) = Z^4    A_{8,9}^(2)(Z) = 2Z^5

Following the same procedure, the triple-error event IRWEF is given by A^(3)(W, Z, L) = [A(W, Z, L)]^3 = L^9 W^6 Z^9 + ..., and the triple-error event enumerator is given by A_{6,9}^(3)(Z) = Z^9.

Before finding the CWEF's we need to calculate hmax. Notice that there are no double-error events of weight less than 4 and that the only triple-error event has weight 6. Also notice that, for the terminated encoder, odd input weights do not appear. So for weight 2, hmax = 1; for weights 4 and 8, hmax = 2; for weight 6, hmax = 3. Now we can use equations (16.37) and (16.39) to find the CWEF's as follows:

A2(Z) = c[3,1]A_{2,3}^(1)(Z) + c[5,1]A_{2,5}^(1)(Z) + c[7,1]A_{2,7}^(1)(Z) + c[9,1]A_{2,9}^(1)(Z)
      = 3Z^2 + 7Z^3 + 5Z^4 + Z^6
A4(Z) = c[4,1]A_{4,4}^(1)(Z) + c[5,1]A_{4,5}^(1)(Z) + c[6,1]A_{4,6}^(1)(Z) + c[7,1]A_{4,7}^(1)(Z) + c[8,1]A_{4,8}^(1)(Z) + c[9,1]A_{4,9}^(1)(Z) + c[6,2]A_{4,6}^(2)(Z) + c[8,2]A_{4,8}^(2)(Z)
      = 6Z^2 + 13Z^3 + 16Z^4 + 10Z^5 + 14Z^6 + 7Z^7
A6(Z) = c[7,1]A_{6,7}^(1)(Z) + c[8,1]A_{6,8}^(1)(Z) + c[9,1]A_{6,9}^(1)(Z) + c[7,2]A_{6,7}^(2)(Z) + c[8,2]A_{6,8}^(2)(Z) + c[9,2]A_{6,9}^(2)(Z) + c[9,3]A_{6,9}^(3)(Z)
      = 3Z^2 + 7Z^3 + 3Z^4 + 12Z^5 + 12Z^6 + 2Z^7 + Z^9
A8(Z) = c[8,2]A_{8,8}^(2)(Z) + c[9,2]A_{8,9}^(2)(Z)
      = 3Z^4 + 2Z^5

A quick calculation shows that the CWEF's include a total of 127 nonzero codewords, the correct number for an (18,7) code. The IRWEF and WEF are given by:
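The "quick calculation" is just a sum of all CWEF coefficients, which must equal 2^7 - 1 = 127 for an (18,7) code since every nonzero input sequence produces exactly one codeword:

```python
# CWEF's of the (18,7) code, stored as {input weight w: {parity weight: count}}.
CWEFS = {
    2: {2: 3, 3: 7, 4: 5, 6: 1},
    4: {2: 6, 3: 13, 4: 16, 5: 10, 6: 14, 7: 7},
    6: {2: 3, 3: 7, 4: 3, 5: 12, 6: 12, 7: 2, 9: 1},
    8: {4: 3, 5: 2},
}

total = sum(c for poly in CWEFS.values() for c in poly.values())
print(total)  # 127 = 2**7 - 1
```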

A(W, Z) = W^2(3Z^2 + 7Z^3 + 5Z^4 + Z^6) + W^4(6Z^2 + 13Z^3 + 16Z^4 + 10Z^5 + 14Z^6 + 7Z^7) + W^6(3Z^2 + 7Z^3 + 3Z^4 + 12Z^5 + 12Z^6 + 2Z^7 + Z^9) + W^8(3Z^4 + 2Z^5)
A(X) = 3X^4 + 7X^5 + 11X^6 + 13X^7 + 20X^8 + 17X^9 + 17X^10 + 20X^11 + 15X^12 + 4X^13 + X^15

This convolutional code has minimum distance 4, one less than for the code of Example 16.6. As in the example, we must modify the concept of the uniform interleaver because we are using convolutional constituent codes. Therefore, note that for weight 2 there are 16 valid input sequences; for weight 4 there are 66; for weight 6 there are 40; and for weight 8 there are 5. For all other weights, there are no valid input sequences.

The CWEF's of the (27,7) PCCC can be found as follows:

A2^PC(Z) = (3Z^2 + 7Z^3 + 5Z^4 + Z^6)^2/16
         = 0.56Z^4 + 2.62Z^5 + 4.94Z^6 + 4.37Z^7 + 1.94Z^8 + 0.87Z^9 + 0.62Z^10 + 0.06Z^12
A4^PC(Z) = (6Z^2 + 13Z^3 + 16Z^4 + 10Z^5 + 14Z^6 + 7Z^7)^2/66
         = 0.55Z^4 + 2.36Z^5 + 5.47Z^6 + 8.12Z^7 + 10.36Z^8 + 11.63Z^9 + 11.06Z^10 + 7.65Z^11 + 5.09Z^12 + 2.97Z^13 + 0.74Z^14
A6^PC(Z) = (3Z^2 + 7Z^3 + 3Z^4 + 12Z^5 + 12Z^6 + 2Z^7 + Z^9)^2/40
         = 0.22Z^4 + 1.05Z^5 + 1.67Z^6 + 2.85Z^7 + 6.22Z^8 + 6.3Z^9 + 6.1Z^10 + 7.65Z^11 + 5.15Z^12 + 1.35Z^13 + 0.7Z^14 + 0.6Z^15 + 0.1Z^16 + 0.02Z^18
A8^PC(Z) = (3Z^4 + 2Z^5)^2/5
         = 1.8Z^8 + 2.4Z^9 + 0.8Z^10
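Each line above squares a constituent CWEF and divides by the number of valid input sequences of that weight. A small sketch (helper name ours) reproduces, for example, the weight-8 result:

```python
def poly_square_scaled(a, divisor):
    """Square a polynomial {exponent: coeff} and divide every coefficient."""
    out = {}
    for e1, c1 in a.items():
        for e2, c2 in a.items():
            out[e1 + e2] = out.get(e1 + e2, 0) + c1 * c2 / divisor
    return out

A8 = poly_square_scaled({4: 3, 5: 2}, 5)   # (3Z^4 + 2Z^5)^2 / 5
print(A8)  # {8: 1.8, 9: 2.4, 10: 0.8}
```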

16.11 As noted in Problem 16.10, odd input weights do not appear. For weight 2, hmax = 1 and equation (16.55) simplifies to
A2^PC(Z) = 2[A2^(1)(Z)]^2 and B2^PC(Z) = (4/K)[A2^(1)(Z)]^2.
For weights 4 and 8, hmax = 2, so we have
A4^PC(Z) = 6[A4^(2)(Z)]^2 and B4^PC(Z) = (24/K)[A4^(2)(Z)]^2,
A8^PC(Z) = (10080/K^4)[A8^(2)(Z)]^2 and B8^PC(Z) = (80640/K^5)[A8^(2)(Z)]^2.
For weight 6, hmax = 3, so
A6^PC(Z) = 20[A6^(3)(Z)]^2 and B6^PC(Z) = (120/K)[A6^(3)(Z)]^2.
Including only these terms in the approximate IRWEF's for the PCCC gives
A(W, Z) = 2W^2[A2(Z)]^2 + 6W^4[A4(Z)]^2 + 20W^6[A6(Z)]^2 + (10080/K^4)W^8[A8(Z)]^2
        ~ 2W^2[A2(Z)]^2 + 6W^4[A4(Z)]^2 + 20W^6[A6(Z)]^2
B(W, Z) = (4W^2/K)[A2(Z)]^2 + (24W^4/K)[A4(Z)]^2 + (120W^6/K)[A6(Z)]^2 + (80640W^8/K^5)[A8(Z)]^2
        ~ (4W^2/K)[A2(Z)]^2 + (24W^4/K)[A4(Z)]^2 + (120W^6/K)[A6(Z)]^2.
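The leading multiplicities 2, 6, 20, and 10080 all come from the same factor w!/(hmax!)^2 in the large-K approximation, which can be checked directly (function name ours):

```python
from math import factorial

def multiplicity(w, hmax):
    """Leading multiplicity w!/(hmax!)^2 in the large-K CWEF approximation."""
    return factorial(w) // factorial(hmax) ** 2

print([multiplicity(w, h) for w, h in [(2, 1), (4, 2), (6, 3), (8, 2)]])
# [2, 6, 20, 10080]
```

For w = 8 the factor 10080 is then divided by K^4 because hmax = 2 leaves the exponent 2*hmax - w = -4 on K.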

16.12 The approximate CWEF's from the previous problem are:
A2(Z) ~ 3Z^2 + 7Z^3 + 5Z^4 + Z^6
A4(Z) ~ 6Z^2 + 13Z^3 + 16Z^4 + 10Z^5 + 14Z^6 + 7Z^7
A6(Z) ~ 3Z^2 + 7Z^3 + 3Z^4 + 12Z^5 + 12Z^6 + 2Z^7 + Z^9
A8(Z) ~ 3Z^4 + 2Z^5.
Now we obtain the approximate IRWEF's as:
A(W, Z) ~ 2W^2 Z^4 (3 + 7Z + 5Z^2 + Z^4)^2 + 6W^4 Z^4 (6 + 13Z + 16Z^2 + 10Z^3 + 14Z^4 + 7Z^5)^2 + 20W^6 Z^4 (3 + 7Z + 3Z^2 + 12Z^3 + 12Z^4 + 2Z^5 + Z^7)^2
B(W, Z) ~ (4W^2 Z^4 (3 + 7Z + 5Z^2 + Z^4)^2 + 24W^4 Z^4 (6 + 13Z + 16Z^2 + 10Z^3 + 14Z^4 + 7Z^5)^2 + 120W^6 Z^4 (3 + 7Z + 3Z^2 + 12Z^3 + 12Z^4 + 2Z^5 + Z^7)^2)/K
and the approximate WEF's are given by:
A^PC(X) ~ 18X^6 + 84X^7 + 374X^8 + 1076X^9 + 2408X^10 + 4084X^11 + 5464X^12 + 6888X^13 + 9362X^14 + 8064X^15 + 6896X^16 + 7296X^17 + 4414X^18 + 1080X^19 + 560X^20 + 480X^21 + 80X^22 + 20X^24
B^PC(X) ~ 36X^6/K + 168X^7/K + 1180X^8/K + 4024X^9/K + 9868X^10/K + 17960X^11/K + 24496X^12/K + 32112X^13/K + 47404X^14/K + 42336X^15/K + 37344X^16/K + 41424X^17/K + 25896X^18/K + 6480X^19/K + 3360X^20/K + 2880X^21/K + 480X^22/K + 120X^24/K

16.13 First constituent encoder: Gff(D) = [1 + D + D^2   1 + D^2]. Second constituent encoder: Gff(D) = [1   1 + D + D^2].
For the first constituent encoder, the IOWEF is given by
A^C1(W, X, L) = W X^5 L^3 + W^2 L^4 (1 + L) X^6 + W^3 L^5 (1 + L)^2 X^7 + ...
and for the second one the IRWEF can be computed as
A^C2(W, Z, L) = W Z^3 L^3 + W^2 Z^2 L^4 + W^2 Z^4 L^5 + ...
The effect of uniform interleaving is represented by
Aw^PC(X, Z) = Aw^C1(X) Aw^C2(Z) / C(K, w).

Aw^PC(X, Z) ~ sum over 1 <= h1 <= hmax, 1 <= h2 <= hmax of (c[h1]c[h2]/C(K, w)) Aw^(h1)(X) Aw^(h2)(Z),   w >= 1,
where Aw^(h1)(X) and Aw^(h2)(Z) are the conditional IO and IR h-error event WEF's of the nonsystematic and systematic constituent encoders, C1 and C2, respectively. For large K, this can be approximated as
Aw^PC(X, Z) ~ (w!/(h1max! h2max!)) K^(h1max + h2max - w) Aw^(h1max)(X) Aw^(h2max)(Z),   w >= 1.
Going directly to the large-K case, for any input weight w, hmax = w, since the weight-1 single error event can be repeated w times. It follows that
Aw^(w)(X) = X^(5w), w >= 1, and Aw^(w)(Z) = Z^(3w), w >= 1.
Therefore
Aw^PC(X, Z) ~ (w!/(w! w!)) K^(w + w - w) X^(5w) Z^(3w) = (K^w/w!) X^(5w) Z^(3w)
and
Bw^PC(X, Z) ~ (w/K) Aw^PC(X, Z) = (K^(w-1)/(w-1)!) X^(5w) Z^(3w),   w >= 1.
Hence
A^PC(W, X) ~ sum over 1 <= w <= K of (K^w/w!) W^w X^(8w) = KWX^8 + (K^2/2)W^2X^16 + (K^3/6)W^3X^24 + ...
B^PC(W, X) ~ WX^8 + KW^2X^16 + (K^2/2)W^3X^24 + ...
With further approximations, the average WEF's are

A^PC(X) ~ KX^8 + (K^2/2)X^16 + (K^3/6)X^24 + ...
B^PC(X) ~ X^8 + KX^16 + (K^2/2)X^24 + ...
and the free distance of the code is 8.

16.14 The encoder and augmented modified state diagram for this problem are shown below:
[Figure: (a) encoder and (b) augmented modified state diagram for G(D) = (1 + D + D^2)/(1 + D), with branch gains LW, LWZ, LZ, and L on the transitions among states S0, S1, S2, S3.]
Note that in order to leave from and return to state S0 (i.e., terminating a K - 2 bit input sequence), an even overall-input (including termination bits) weight is required. Examination of the state diagram reveals 6 paths that contain overall-input weight W < 6 ("..." means an indefinite loop around S3). Each path leaves S0 through S1 and returns through S2; the four paths whose functions contain a 1/(1 - ZL) factor include an indefinite loop around S3, each loop contributing ZL. The path functions are

L^3 W^2 Z^3
L^5 W^4 Z^4
L^4 W^2 Z^2 / (1 - ZL)
L^6 W^4 Z^3 / (1 - ZL)
L^6 W^4 Z^3 / (1 - ZL)
L^7 W^4 Z^2 / (1 - ZL)^2

resulting in the single error event IRWEF for W < 6 of
A^(1)(W, Z, L) = L^3 W^2 Z^3 + L^5 W^4 Z^4 + (L^4 W^2 Z^2 + 2L^6 W^4 Z^3)/(1 - ZL) + L^7 W^4 Z^2/(1 - ZL)^2.

w]A(h1 ) (Z) w h2 =1 hmax h2 =1 hmax h3 =1 −2 c[h2 . L)]2 = L6 W 4 Z 6 + and for both W < 6 and L < 10. Then.15 5 A(1) (W. L) = L3 W 2 Z 3 + L5 W 4 Z 4 + n=0 3 L4+n W 2 Z 2+n + 2 2 n=0 L6+n W 4 Z 3+n + (n + 1)L7+n W 4 Z 2+n n=0 .9 (Z) = 3Z 4 (1) Z2 = Z3 + 1−Z 5 4 K−4 Z 2+n = Z 2 + 2Z 3 + Z 4 + Z 5 + Z 6 + Z 7 + · · · + Z n + · · · n=0 K−7 K−8 2Z Z + = Z6 + 2 Z 5+n + 1−Z (1 − Z)2 n=0 5 6 7 n (n + 1)Z 4+n n=0 Z + 4Z + 6Z + 6Z + · · · + (n − 1)Z + · · · . L) = L6 W 4 Z 6 + 2 n=0 L7+n W 4 Z 5+n + (n + 1)L8+n W 4 Z 4+n n=0 . Assume that a particular input sequence of weight w enters the first encoder.5 (Z) = Z 4 A4. contains 3 encoders.9 (Z) = 2Z 5 .7 (Z) = Z 2 A4.6+n (Z) = 2Z 3+n A4. The multiple turbo code. Z. The codeword w CWEF of this (4K.6 (Z) = Z 6 and thus. A2 (Z) = A4 (Z) = = (2) (1) (2) (1) (1) A2.4+n (Z) = Z 2+n A4. and the code uses the same constituent encoders. up to Z 7 . Z. Z. as shown in Figure 16. Z. generating the parity weight enumerator AP C (Z). The IRWEF for double error events for W < 6 is then A(2) (W. 1 − ZL (1 − ZL)2 A(2) (W.and double-error event enumerators for W < 6 and L < 10 are A2. w w w w . the block size K is large. w] c[h3 . w] K A(h2 ) (Z) w w −1 hmax c[h3 .3 (Z) = Z 3 A4. (2) (1) A4. the second w and third encoders will generate any of the weights in AP C (Z) with equal probability. by the definition of the uniform interleaver.8 (Z) = 2Z 3 A4. w] h3 =1 K A(h3 ) (Z) w w −1 = h1 =1 K c[h1 .7+n (Z) = 2Z 5+n Z3 + Z6 + 4 (2) (1) (1) A4. and it is assumed that the interleavers are uniformly distributed. L) = [A(1) (W.2. K − 2) terminated multiple turbo code is given by hmax hmax AP C (Z) = w h1 =1 hmax c[h1 . Thus the single. w] c[h2 . 2 1 2L7 W 4 Z 5 L8 W 4 Z 4 + . w] A(h1 ) (Z) A(h2 ) (Z) A(h3 ) (Z) .8 (Z) = Z 4 (2) (1) A4.

and the WEF’s are AP C (X) = AP C (X.16 where hmax is the maximum number of error events for the particular choice of w and c[h. and hmax = w . result in the codeword CWEF AP C (Z) ≈ w and the bit CWEF P Bw C (Z) = (w!)2 K (3hmax −2w) A(hmax ) (Z) w (hmax !)3 3 w (w!)2 (3hmax −2w−1) (hmax ) w PC Aw (Z) ≈ K Aw (Z) K (hmax !)3 5 5 3 .e. the approximate CWEF’s up to Z 14 are AP C (Z) ≈ 2 4 K A2 (Z) (1) 3 = 4 6 KZ + 24 7 KZ + 60 8 KZ + 92 9 KZ + 120 10 K Z + 2 156 11 K Z + 196 12 K Z + 240 13 K Z + 288 14 K Z + ··· + 2 n −3n−10 K Zn + · · · AP C (Z) ≈ 4 AP C (Z) 3 = P B2 C (Z) = 72 K2 A4 (Z) =0 AP C (Z) 2 (2) 3 ≈ 72 12 K2 Z + 864 13 K2 Z + 4752 14 K2 Z + ··· AP C (Z) 5 2 K ≈ 8 6 K2 Z + 48 7 K2 Z + 120 8 K2 Z + 184 9 K2 Z + 240 10 K2 Z + 312 11 K2 Z + P B4 C (Z) = P B3 C (Z) 4 K AP C (Z) 4 =0 ≈ 392 12 + 480 Z 13 + 576 Z 14 + · · · + K2 Z K2 K2 288 12 3456 13 19008 14 + K3 Z + K3 Z + · · · K3 Z 4 n2 −3n−10 K2 Zn + · · · = P B5 C (Z) .. the approximate IRWEF’s for this PCCC are AP C (W. X) and B P C (X) = B P C (X. for w < 6. i. w] ≈ ≈ K . X). the term corresponding to h1 = h2 = h3 = hmax . along with keeping only the highest power of K. From the above. Z) ≈ w=2 W w AP C (Z) and B P C (W. These h! 2 w h approximations. w] = h K −h+w K . Then. With K h and K w. Z) ≈ w w=2 P W w Bw C (Z). c[h. 120 (a) Using the above. A2 (Z) (1) 3 = = Z9 + 3Z 8 3Z 7 Z6 + + 2 1−Z (1 − Z) (1 − Z)3 n2 − 3n − 10 2 Zn + · · · Z 6 + 6Z 7 + 15Z 8 + 23Z 9 + 30Z 10 + 39Z 11 + · · · + and A4 (Z) (2) 3 = = Z 18 + 6Z 17 15Z 16 20Z 15 16Z 14 6Z 13 Z 12 + + + + + 1−Z (1 − Z)2 (1 − Z)3 (1 − Z)4 (1 − Z)5 (1 − Z)6 Z 12 + 12Z 13 + 66Z 14 + 226Z 15 + 561Z 16 + 1128Z 17 + 1995Z 18 + 3258Z 19 + −21720 − 7046n + 3015n2 − 155n3 − 15n4 + n5 5028Z 20 + · · · + Zn + · · · .

(c) Using the above, the approximate WEF's up to X^14, neglecting any higher power of K terms, are
A^PC(X) ~ (4/K)X^8 + (24/K)X^9 + (60/K)X^10 + (92/K)X^11 + (120/K)X^12 + (156/K)X^13 + (196/K)X^14 + ...
and
B^PC(X) ~ (8/K^2)X^8 + (48/K^2)X^9 + (120/K^2)X^10 + (184/K^2)X^11 + (240/K^2)X^12 + (312/K^2)X^13 + (392/K^2)X^14 + ...

(d) The union bounds on the BER and WER for K = 10^3 and K = 10^4, assuming a binary-input, unquantized-output AWGN channel, are shown below. The plots were created using the loose bounds
WER <= A^PC(X) at X = e^(-R Eb/N0) and BER <= B^PC(X) at X = e^(-R Eb/N0),
where R = 1/4 is the code rate and Eb/N0 is the SNR.
[Figure: union bound on the word error rate versus Eb/N0 (dB), 1-7 dB, for N = 1000 (dashed) and N = 10000 (solid).]


[Figure: union bound on the bit error rate versus Eb/N0 (dB), 1-7 dB, for N = 1000 (dashed) and N = 10000 (solid).]
16.15 (a) 1

1 1+D

i. Weight 2 Minimum parity weight generating information sequence = 1 + D. Corresponding parity weight = 1. In a PCCC (rate 1/3) we therefore have (assuming a similar input after interleaving): Information weight = 2; Parity weight = 1 + 1 = 2; Total weight = 4. ii. Weight 3 For this encoder a weight 3 input does not terminate the encoder and hence is not possible. (b) 1
1+D 2 1+D+D 2

i. Weight 2 Minimum parity weight generating information sequence = 1 + D3 . Corresponding parity weight = 4. In a PCCC we therefore have: Information weight = 2; Parity weight = 4 + 4 = 8; Total weight = 10. ii. Weight 3 Minimum parity weight generating information sequence = 1 + D + D2 . Corresponding parity weight = 2. In a PCCC we therefore have: Information weight = 3; Parity weight = 2 + 2 = 4; Total weight = 7. (c) 1
1+D 2 +D 3 1+D+D 3

i. Weight 2: The minimum-parity-weight generating information sequence is 1 + D^7, and the corresponding parity weight is 6. In a PCCC we therefore have: information weight = 2; parity weight = 6 + 6 = 12; total weight = 14.

ii. Weight 3: The minimum-parity-weight generating information sequence is 1 + D + D^3, and the corresponding parity weight is 4. In a PCCC we therefore have: information weight = 3; parity weight = 4 + 4 = 8; total weight = 11.

(d) For the encoder with G(D) = [1, (1 + D + D^2 + D^4)/(1 + D^3 + D^4)]:

i. Weight 2: The minimum-parity-weight generating information sequence is 1 + D^15, and the corresponding parity weight is 10. In a PCCC we therefore have: information weight = 2; parity weight = 10 + 10 = 20; total weight = 22.

ii. Weight 3: The minimum-parity-weight generating information sequence is 1 + D^3 + D^4, and the corresponding parity weight is 4. In a PCCC we therefore have: information weight = 3; parity weight = 4 + 4 = 8; total weight = 11.

(e) For the encoder with G(D) = [1, (1 + D^4)/(1 + D + D^2 + D^3 + D^4)]:

i. Weight 2: The minimum-parity-weight generating information sequence is 1 + D^5, and the corresponding parity weight is 4. In a PCCC we therefore have: information weight = 2; parity weight = 4 + 4 = 8; total weight = 10.

ii. Weight 3: For this encoder a weight-3 input does not terminate the encoder and hence is not possible.

In all cases the weight-2 input sequence gives the free-distance codeword for large K.

16.16 For a systematic feedforward encoder with input weight w, the single error event for input weight 1 can occur w times. Therefore h_max, the largest number of error events associated with a weight-w input sequence, is equal to w; that is, the maximum number of error events produced by a weight-w input sequence occurs by repeating the single error event for input weight 1 w times. Thus

A_w^(w)(Z) = [A_1^(1)(Z)]^w.

This is not true for feedback encoders. Feedback encoders require at least a weight-2 input sequence to terminate. Thus, for an input sequence of weight 2w, h_max = w, i.e., we have a maximum of w error events. This occurs when an error event caused by a weight-2 input sequence is repeated w times (for a total input weight of 2w). Therefore

A_{2w}^(w)(Z) = [A_2^(1)(Z)]^w.
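The minimum parity weights quoted in Problem 16.15 above can be spot-checked with a small GF(2) polynomial routine (a sketch; polynomials are integer bitmasks, with bit i holding the coefficient of D^i):

```python
def gf2_mul(a, b):
    """Multiply two GF(2) polynomials given as integer bitmasks."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def gf2_divmod(a, b):
    """Polynomial long division over GF(2); returns (quotient, remainder)."""
    q = 0
    while a.bit_length() >= b.bit_length():
        s = a.bit_length() - b.bit_length()
        a ^= b << s
        q |= 1 << s
    return q, a

def parity_weight(num, den, u):
    """Weight of the parity sequence u(D) * num(D) / den(D)."""
    q, r = gf2_divmod(u, den)
    assert r == 0, "u(D) does not terminate the encoder"
    return bin(gf2_mul(q, num)).count("1")

# encoder (b): parity polynomial (1 + D^2)/(1 + D + D^2)
print(parity_weight(0b101, 0b111, 0b1001))   # u = 1 + D^3     -> 4
print(parity_weight(0b101, 0b111, 0b111))    # u = 1 + D + D^2 -> 2
```

The same routine confirms, for example, parity weight 6 for u(D) = 1 + D^7 in encoder (c) and parity weight 4 for u(D) = 1 + D^5 in encoder (e).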

[Figure: an (n, 1, v) systematic feedback encoder with input u, memory cells m_0, m_1, ..., m_{v-1}, and outputs v^(1), v^(2), ..., v^(n-1).]

16.19 An (n, 1, v) systematic feedback encoder can be realized by the diagram shown above, in which the dashed lines indicate potential connections that depend on the encoder realization. It can be seen from the encoder structure that the register contents are updated as

m_{i,t} = m_{i-1,t-1},   i = 1, 2, ..., v-1,

and

m_{0,t} = m_{v-1,t-1} + u_t + Σ_{i=0}^{v-2} a_i m_{i,t-1},
where t is the time index, u_t represents the input bit at time t, and the coefficients a_i depend on the particular encoder realization. When the encoder is in state S_{2^{v-1}} = (0 0 ... 1) at time t-1, it follows from the relations above that

m_{i,t} = m_{i-1,t-1} = 0,   i = 1, 2, ..., v-1,

and

m_{0,t} = m_{v-1,t-1} + u_t + Σ_{i=0}^{v-2} a_i m_{i,t-1} = 1 + u_t.

Thus, if the input bit u_t = 1, the first memory position at time t is also 0, and the encoder is in the all-zero state.

16.20 For the primitive encoder D with G_p(D) = [1, (D^4 + D^2 + D + 1)/(D^4 + D^3 + 1)], the shortest terminating weight-2 input sequence is

u(D) = 1 + D^(2^4 - 1) = 1 + D^15,

which generates the parity sequence

v(D) = (1 + D^3 + D^4 + D^6 + D^8 + D^9 + D^10 + D^11)(D^4 + D^2 + D + 1) = 1 + D + D^2 + D^3 + D^4 + D^8 + D^11 + D^12 + D^14 + D^15.

Thus Z_min = 10 and d_eff = 2 + 2 Z_min = 2 + 2 × 10 = 22.

For the nonprimitive encoder E with G_np(D) = [1, (D^4 + 1)/(D^4 + D^3 + D^2 + D + 1)], the shortest terminating weight-2 input sequence is u(D) = 1 + D^5, which generates the parity sequence

v(D) = (D + 1)(D^4 + 1) = 1 + D + D^4 + D^5.

Thus Z_min = 4 and d_eff = 2 + 2 Z_min = 2 + 2 × 4 = 10.
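Both parity computations can be verified mechanically (a sketch; polynomials are integer bitmasks, with bit i holding the coefficient of D^i):

```python
def gf2_mul(a, b):
    """Multiply two GF(2) polynomials given as integer bitmasks."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def gf2_div(a, b):
    """Exact polynomial division over GF(2)."""
    q = 0
    while a.bit_length() >= b.bit_length():
        s = a.bit_length() - b.bit_length()
        a ^= b << s
        q |= 1 << s
    assert a == 0, "not divisible"
    return q

weight = lambda p: bin(p).count("1")

# primitive encoder D: feedback 1 + D^3 + D^4, feedforward 1 + D + D^2 + D^4
v = gf2_mul(gf2_div((1 << 15) | 1, 0b11001), 0b10111)
print(weight(v), 2 + 2 * weight(v))      # Z_min = 10, d_eff = 22

# nonprimitive encoder E: feedback 1 + D + D^2 + D^3 + D^4, feedforward 1 + D^4
v = gf2_mul(gf2_div((1 << 5) | 1, 0b11111), 0b10001)
print(weight(v), 2 + 2 * weight(v))      # Z_min = 4, d_eff = 10
```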

Chapter 20

20.1 Suppose an (n, k) code C is designed to correct all bursts of length l or less and simultaneously to detect all bursts of length d or less, with d ≥ l. Suppose there is a burst x of length l + d or less that is a codeword in C. We can decompose this burst x into two bursts y and z such that: (1) x = y + z; and (2) the length of y is l or less and the length of z is d or less. Since y + z = x is a codeword, z must be in the same coset as y in a standard array for the code. Since the code is designed to correct all bursts of length l or less, y must be a coset leader. As a result, if z occurs as an error burst during the transmission of a codeword in C, z has the same syndrome as y, and y will be taken as the error burst for correction. This results in an incorrect decoding, and hence the decoder fails to detect z, which contradicts the fact that the code is designed to correct all bursts of length l or less and simultaneously to detect all bursts of length d or less with d ≥ l. Therefore, no burst of length l + d or less can be a codeword, and it follows from Theorem 20.2 that n - k ≥ l + d.

20.2 An error-trapping decoder is shown in Figure P20.2.

[Figure P20.2: an error-trapping decoder with Gate 1, a buffer register, Gates 2-4, and a syndrome register whose n - k - l leftmost stages are tested for all 0's; the burst is trapped in the l rightmost stages.]

The decoding operation is given below.

Step 1. The received polynomial r(X) is shifted into the buffer and syndrome registers simultaneously, with Gates 1 and 3 turned on and all the other gates turned off. As soon as the entire r(X) has been shifted into the syndrome register, the contents of the syndrome register form the syndrome s^(n-k)(X) of r^(n-k)(X).

Step 2. If the n - k - l leftmost stages of the syndrome register contain all zeros, the errors of the burst e(X) are confined to the parity-check positions of r(X). In this event, the k received information digits in the buffer are error free. Gate 2 is activated, and the k error-free information digits in the buffer are shifted out to the data sink. If the n - k - l leftmost stages of the syndrome register do not contain all zeros, go to Step 3.

Step 3. The syndrome register starts to shift with Gate 3 on. As soon as its n - k - l leftmost stages contain only zeros, its l rightmost stages contain the burst-error pattern. There are three cases to be considered (Steps 4 to 6).

Step 4. If the n - k - l leftmost stages of the syndrome register contain all zeros after the ith shift, for 0 < i ≤ 2k - n (assume 2k - n ≥ 0; otherwise, omit this step), the error burst is confined to positions X^(n-i-l), ..., X^(n-i-1) of r(X). In this event, the l digits contained in the l rightmost stages of the syndrome register match the errors at positions X^(n-i-l), ..., X^(n-i-1). The error correction begins: Gate 2 is first activated and the i high-order information digits are shifted out; Gate 4 and Gate 2 are then activated, and the rest of the information digits are read out and corrected by the error digits shifted out from the syndrome register.

Step 5. If the n - k - l leftmost stages of the syndrome register contain all zeros after the ith shift, for k ≤ i ≤ n - l, the error burst is confined to positions X^(n-i-l), ..., X^(n-i-1) of r(X), which are parity-check positions. In this event, the k received information digits in the buffer are error free. Gate 2 is activated, and the k error-free information digits are shifted out to the data sink.

Step 6. If the n - k - l leftmost stages of the syndrome register contain all zeros after the (n - l + i)th shift, with 0 < i < l, the error burst is an end-around burst confined to positions X^(n-i), ..., X^(n-1), X^0, ..., X^(l-i-1) of r(X). In this event, the l - i digits contained in the l - i rightmost stages of the syndrome register match the errors at the parity-check positions X^0, ..., X^(l-i-1), and a clock starts to count from (n - l + i + 1). The syndrome register is then shifted with Gate 3 turned off. As soon as the clock has counted up to n, the i rightmost digits in the syndrome register match the errors at positions X^(n-i), ..., X^(n-1). Gate 4 and Gate 2 are then activated, and the received information digits are read out of the buffer register and corrected by the error digits shifted out from the syndrome register.

If the n - k - l leftmost stages of the syndrome register never contain all zeros during the steps above, an uncorrectable burst of errors has been detected.

20.4 The generator polynomial of the desired Fire code is

g(X) = (X^(2l-1) + 1) p(X).

Since the code is designed to correct all bursts of length 4 or less, l = 4. Since p(X) = 1 + X + X^4 is a primitive polynomial of degree 4, its period is ρ = 2^4 - 1 = 15. Hence

g(X) = (X^7 + 1)(1 + X + X^4) = 1 + X + X^4 + X^7 + X^8 + X^11,

and the length of the code is n = LCM(7, 15) = 105. Since the degree of the generator polynomial is 11, which is the number of parity-check digits, the code is a (105, 94) Fire code. The decoder of this code is shown in Figure P20.4.

[Figure P20.4: a decoder for the (105, 94) Fire code; the syndrome register has feedback connections corresponding to g(X) = 1 + X + X^4 + X^7 + X^8 + X^11.]
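Both the generator expansion of Problem 20.4 and the trapping procedure of Problem 20.2 can be sketched in a few lines. The following is a behavioural sketch, not a register-level model of the decoders above; polynomials are integer bitmasks with bit i holding the coefficient of X^i:

```python
from math import gcd

def gf2_mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def gf2_mod(a, b):
    while a.bit_length() >= b.bit_length():
        a ^= b << (a.bit_length() - b.bit_length())
    return a

G = gf2_mul((1 << 7) | 1, 0b10011)   # g(X) = (X^7 + 1)(1 + X + X^4)
N = 7 * 15 // gcd(7, 15)             # n = LCM(7, 15) = 105
L = 4                                # burst-correcting capability

def rotl(x, j):                      # multiply by X^j modulo X^N + 1
    j %= N
    return ((x << j) | (x >> (N - j))) & ((1 << N) - 1)

def trap_decode(r):
    """Shift r cyclically until the syndrome fits in the l rightmost stages."""
    for j in range(N):
        s = gf2_mod(rotl(r, j), G)
        if 0 < s < (1 << L):         # burst trapped
            return r ^ rotl(s, N - j)
    return r                         # nothing trapped: assume error free

v = gf2_mul(0b100000001, G)          # a codeword: (1 + X^8) g(X)
print(trap_decode(v ^ (0b1011 << 40)) == v)   # corrects a length-4 burst -> True
```

The search over j plays the role of the syndrome-register shifts, and the condition 0 < s < 2^l is the "n - k - l leftmost stages all zero" test.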

20.5 A high-speed error-trapping decoder for the (105, 94) Fire code designed in Problem 20.4 is shown in Figure P20.5.

[Figure P20.5: the high-speed decoder for the (105, 94) Fire code, consisting of an error-pattern register, an error-location register, a buffer register, a comparator that tests for a match, a test for all 0's, two counters, and a circuit that computes n - q.]

20.6 We choose the second code given in Table 20.3, which is a (15, 9) code capable of correcting any burst of length 3 or less. The generator polynomial of this code is

g(X) = 1 + X^3 + X^4 + X^5 + X^6.

Suppose we interleave this code to a degree λ = 17. This results in a (255, 153) cyclic code that is capable of correcting any single burst of length 51 or less and has burst-error-correcting efficiency z = 1. The generator polynomial of the interleaved code is

g(X^17) = 1 + X^51 + X^68 + X^85 + X^102.

A decoder for this code can be implemented with a de-interleaver and the decoder for the (15, 9) base code, as shown in Figure P20.6. First the received sequence is de-interleaved and arranged as a 17 × 15 array. Then each row of the array is decoded based on the (15, 9) base code.

[Figure P20.6: de-interleaver, buffer, and decoder for the base code C.]

20.7 A codeword in the interleaved code C^l is obtained by interleaving l codewords v_1(X), v_2(X), ..., v_l(X) from the (n, k) base code C.
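The interleaving construction can be previewed on a small example (a sketch; the (7, 4) cyclic code with g(X) = 1 + X + X^3 is an assumed stand-in for the base code C, with l = 3; polynomials are integer bitmasks):

```python
def gf2_mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def gf2_mod(a, b):
    while a.bit_length() >= b.bit_length():
        a ^= b << (a.bit_length() - b.bit_length())
    return a

def spread(p, l):
    """Map p(X) to p(X^l) on bitmask polynomials."""
    r, i = 0, 0
    while p:
        if p & 1:
            r |= 1 << (i * l)
        p >>= 1
        i += 1
    return r

g, l = 0b1011, 3                 # base code C: (7, 4) cyclic, g(X) = 1 + X + X^3
cws = [gf2_mul(m, g) for m in (0b1, 0b101, 0b1111)]   # three codewords of C
v = 0
for i, cw in enumerate(cws):     # v(X) = v1(X^3) + X v2(X^3) + X^2 v3(X^3)
    v ^= spread(cw, l) << i
print(gf2_mod(v, spread(g, l)) == 0)   # True: v(X) is divisible by g(X^3)
```

This is exactly the divisibility argument made next: each v_i(X^l) is divisible by g(X^l), so the interleaved word is too.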

The codeword in C^l obtained by interleaving these l codewords, in polynomial form, is

v(X) = v_1(X^l) + X v_2(X^l) + X^2 v_3(X^l) + ... + X^(l-1) v_l(X^l).

Since v_i(X) is divisible by g(X) for 1 ≤ i ≤ l, v_i(X^l) is divisible by g(X^l). Therefore v(X) is divisible by g(X^l). Since g(X^l) is a factor of X^(ln) + 1 and the degree of g(X^l) is (n - k)l, g(X^l) generates an (ln, lk) cyclic code.

20.8 In order to show that the Burton code is capable of correcting all phased bursts confined to a single subblock of m digits, it is necessary and sufficient to prove that no two such bursts are in the same coset of a standard array for the code. Let X^(mi) A(X) and X^(mj) B(X) be the polynomial representations of two bursts of lengths l_1 and l_2, with l_1 ≤ m and l_2 ≤ m, confined to the ith and jth subblocks (0 ≤ i, j < σ), respectively, where

A(X) = 1 + a_1 X + ... + a_(l_1 - 2) X^(l_1 - 2) + X^(l_1 - 1),
B(X) = 1 + b_1 X + ... + b_(l_2 - 2) X^(l_2 - 2) + X^(l_2 - 1).

Suppose X^(mi) A(X) and X^(mj) B(X) are in the same coset. Then

V(X) = X^(mi) A(X) + X^(mj) B(X)

must be a code polynomial and hence divisible by the generator polynomial g(X) = (X^m + 1) p(X). In particular, since X^m + 1 is a factor of g(X), V(X) must be divisible by X^m + 1. Assume mj ≥ mi; then mj - mi = qm and

V(X) = X^(mi) (A(X) + B(X)) + X^(mi) B(X) (X^(qm) + 1).

Note that X^(qm) + 1 is divisible by X^m + 1. Since V(X) and X^(mi) B(X)(X^(qm) + 1) are divisible by X^m + 1, X^(mi)(A(X) + B(X)) must be divisible by X^m + 1, and since X^(mi) and X^m + 1 are relatively prime, A(X) + B(X) must be divisible by X^m + 1. However, both degrees l_1 - 1 and l_2 - 1 of A(X) and B(X) are less than m, so the degree of A(X) + B(X) is less than m. Hence A(X) + B(X) can be divisible by X^m + 1 only if A(X) + B(X) = 0; that is, A(X) and B(X) must be identical, A(X) = B(X). Therefore

V(X) = X^(mi) B(X) (X^(qm) + 1).

Since the degree of B(X) is less than m, B(X) and p(X) must be relatively prime; also, X^(mi) and p(X) are relatively prime. Since V(X) is assumed to be a code polynomial, X^(qm) + 1 must be divisible by both p(X) and X^m + 1. This implies that X^(qm) + 1 is divisible by (X^m + 1) p(X), and hence qm must be a multiple of n = σm. However, since q = j - i is less than σ, this is not possible. Therefore V(X) cannot be a code polynomial, and the bursts X^(mi) A(X) and X^(mj) B(X) cannot be in the same coset. Consequently, all the phased bursts confined to a single subblock of m digits are in different cosets of a standard array for the Burton code generated by (X^m + 1) p(X), and they are all correctable.

20.9 Choose p(X) = 1 + X^2 + X^5, which is a primitive polynomial with period ρ = 2^5 - 1 = 31. Then the generator polynomial of the Burton code C with m = 5 is

g(X) = (X^m + 1) p(X) = (X^5 + 1)(1 + X^2 + X^5) = 1 + X^2 + X^7 + X^10.

The length of the code is n = LCM(31, 5) = 155. Hence the code is a (155, 145) code that is capable of correcting any phased burst confined to a single subblock of 5 digits. If it is interleaved to degree λ = 6, we obtain a (930, 870) code C^6 generated by

g(X^6) = 1 + X^12 + X^42 + X^60.

This interleaved Burton code is capable of correcting any single burst of length (λ - 1)m + 1 = 5 × 5 + 1 = 26 or less.

20.10 From Table 20.3, the (164, 153) code has burst-error-correcting capability l = 4. If it is interleaved to a degree λ = 6, then λn = 6 × 164 = 984, λk = 6 × 153 = 918, and λl = 6 × 4 = 24. The interleaved code is a (984, 918) code with burst-error-correcting capability 24. Its burst-error-correcting efficiency is

z = 2λl / (λn - λk) = 48/66 = 8/11.

The efficiency of the interleaved Burton code of Problem 20.9 is

z' = 2[(λ - 1)m + 1] / (2λm) = 52/60 = 13/15.

Therefore z' ≥ z, and the interleaved Burton code is more powerful and efficient.
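The two efficiency figures compare as follows (exact rational arithmetic, as a quick check of the fractions above):

```python
from fractions import Fraction

lam = 6
# interleaved (164, 153) code with l = 4
z_interleaved = Fraction(2 * lam * 4, lam * 164 - lam * 153)    # 48/66
# interleaved Burton code of Problem 20.9: m = 5, burst capability (lam - 1)m + 1
m = 5
z_burton = Fraction(2 * ((lam - 1) * m + 1), 2 * lam * m)       # 52/60
print(z_interleaved, z_burton, z_burton >= z_interleaved)       # 8/11 13/15 True
```

Fraction reduces both ratios to lowest terms automatically, which is why 48/66 and 52/60 print as 8/11 and 13/15.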

20.11 The (15, 7) BCH code has random-error-correcting capability t = 2. Interleaving this code to a degree λ = 7, we obtain a (105, 49) cyclic code that is capable of correcting any combination of two random bursts, each of length 7 or less. Of course, it is also capable of correcting any single burst of length 14 or less. In decoding, a received sequence is de-interleaved and arranged into a 7 × 15 array, and each row of the array is decoded based on the (15, 7) BCH code.

20.12 If each symbol of the (31, 15) RS code over GF(2^5) is replaced by its corresponding 5-bit byte, we obtain a (155, 75) binary code. Since the (31, 15) RS code is capable of correcting 8 or fewer random symbol errors, the binary (155, 75) code is capable of correcting: (1) any single burst of length (8 - 1) × 5 + 1 = 36 or less; or (2) any combination of random bursts that results in 8 or fewer symbol errors for the (31, 15) RS code.

20.13 From Problem 20.4, the generator polynomial g(X) = 1 + X + X^4 + X^7 + X^8 + X^11 generates a (105, 94) Fire code that is capable of correcting any single burst of length 4 or less. If this code is shortened by deleting the 15 high-order message digits, it becomes a (90, 79) code. A decoder for the (90, 79) shortened Fire code, obtained by modifying the decoder of Figure P20.4, is shown in Figure P20.13. Dividing X^(n-k+15) = X^26 by g(X), we obtain the remainder

ρ(X) = X + X^3 + X^5 + X^8 + X^10,

which determines the connections used to shift the received polynomial into the syndrome register of the modified decoder.

[Figure P20.13: a decoder for the (90, 79) shortened Fire code.]

20.14 Let α be a primitive element of the Galois field GF(2^6). The order of α is 2^6 - 1 = 63, and the minimal polynomial of α is φ(X) = 1 + X + X^6.
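That α, represented by X modulo φ(X) = 1 + X + X^6, indeed has order 63 can be checked by brute force (a sketch; the polynomial is an integer bitmask):

```python
def gf2_mod(a, b):
    """Remainder of GF(2) polynomial division on integer bitmasks."""
    while a.bit_length() >= b.bit_length():
        a ^= b << (a.bit_length() - b.bit_length())
    return a

phi = 0b1000011                 # phi(X) = 1 + X + X^6
p, order = 0b10, 1              # start from alpha = X, i.e. alpha^1
while p != 1:                   # multiply by alpha until we return to 1
    p = gf2_mod(p << 1, phi)    # p * X mod phi(X)
    order += 1
print(order)                    # 63: alpha generates all of GF(2^6)*
```

Since the multiplicative group of GF(2^6) has 63 elements, order 63 confirms that α is primitive and that φ(X) is a primitive polynomial.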

Since 63 = 7 × 9, we can factor X^63 + 1 as follows:

X^63 + 1 = (X^7 + 1)(1 + X^7 + X^14 + X^21 + X^28 + X^35 + X^42 + X^49 + X^56).

The code generated by the polynomial

g_1(X) = (X^7 + 1)(1 + X + X^6)

is a Fire code of length 63 that is capable of correcting any single error burst of length 4 or less. Let g_2(X) be the generator polynomial of the double-error-correcting BCH code of length 63. From Table 6.4, we find that

g_2(X) = (1 + X + X^6)(1 + X + X^2 + X^4 + X^6).

The least common multiple of g_1(X) and g_2(X) is

g(X) = (1 + X^7)(1 + X + X^6)(1 + X + X^2 + X^4 + X^6).

Hence, g(X) generates a (63, 44) cyclic code, which is a modified Fire code and is capable of correcting any single error burst of length 4 or less as well as any combination of two or fewer random errors.
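The final claim about the generator can be verified by expanding g(X) and checking that it divides X^63 + 1 (a sketch; bitmask polynomials as before):

```python
def gf2_mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def gf2_mod(a, b):
    while a.bit_length() >= b.bit_length():
        a ^= b << (a.bit_length() - b.bit_length())
    return a

phi1 = 0b1000011                   # 1 + X + X^6, minimal polynomial of alpha
phi3 = 0b1010111                   # 1 + X + X^2 + X^4 + X^6, min. poly of alpha^3
g1 = gf2_mul((1 << 7) | 1, phi1)   # Fire code generator (X^7 + 1)(1 + X + X^6)
g = gf2_mul(g1, phi3)              # lcm(g1, g2), since g2 = phi1 * phi3
deg = g.bit_length() - 1
print(deg, 63 - deg, gf2_mod((1 << 63) | 1, g) == 0)   # 19 44 True
```

The degree-19 generator leaves 63 - 19 = 44 message positions, matching the (63, 44) parameters, and the divisibility check confirms that g(X) generates a cyclic code of length 63.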
