• Review of probability theory
• Entropy
• Mutual information
• Data compression
• Huffman coding
• Asymptotic equipartition property
• Universal source coding
• Channel capacity
• Differential entropy
• Block codes and Convolutional codes.
Information Theory and
Coding
Pavan S. Nuggehalli
CEDT, IISc, Bangalore
Pavan S. Nuggehalli ITC/V1/2005 2 of 3
Course Outline - I
Information theory is concerned with the fundamental limits of communication.
What is the ultimate limit to data compression? For example, how many bits are required to represent a music source?
What is the ultimate limit of reliable communication over a noisy channel? For example, how many bits can be sent in one second over a telephone line?
Course Outline - II
Coding theory is concerned with practical techniques to realize the limits specified by information theory.
Source coding converts source output to bits.
• Source output can be voice, video, text, sensor output …
Channel coding adds extra bits to data transmitted over the channel.
• This redundancy helps combat the errors introduced in transmitted bits due to channel noise.
Communication System Block Diagram

Source → Source Encoder → Channel Encoder → Modulator → Channel (+ Noise) → Demodulator → Channel Decoder → Source Decoder → Sink

Source coding comprises the Source Encoder/Decoder; channel coding comprises the Channel Encoder/Decoder.
Modulation converts bits into analog waveforms suitable for transmission over physical channels. We will not discuss modulation in any detail in this course.
What is Information?
Sources can generate "information" in several formats:
• sequences of symbols, such as letters from the English or Kannada alphabet, or binary symbols from a computer file.
• analog waveforms, such as voice and video signals.
Key insight: source output is a random process.
* This fact was not appreciated before Claude Shannon developed information theory in 1948.
Randomness
Why should source output be modeled as random? Suppose it were not. Then the source output would be a known, deterministic process, and the sink could simply reproduce this process without any communication at all.
The number of bits required to describe the source output depends on the probability distribution of the source, not the actual values of the possible outputs.
Information Theory and Coding
Lecture 1
Pavan Nuggehalli — Probability Review
Probability has its origins in gambling. Laplace developed combinatorial counting for discrete problems and geometric probability for the continuum. The modern axiomatic foundation is due to A. N. Kolmogorov (1933, Berlin).
We begin with the notion of an experiment. Let Ω be the set of all possible outcomes; this set is called the sample set. Let A be a collection of subsets of Ω with some special properties; A is then a collection of events. (Ω, A) are jointly referred to as the sample space.
A probability model/space is a triplet (Ω, A, P) where P satisfies the following properties:
1. Non-negativity: P(A) ≥ 0 ∀ A ∈ A
2. Additivity: If {A_n, n ≥ 1} are disjoint events in A, then
   P(∪_{n=1}^∞ A_n) = Σ_{n=1}^∞ P(A_n)
3. Normalization: P(Ω) = 1
* Example: Ω = {T, H}, A = {φ, {T, H}, {T}, {H}}, P({H}) = 0.5
When Ω is discrete (finite or countable), A = lP(Ω), where lP denotes the power set. When Ω takes values from a continuum, A must be a much smaller set; we will wave our hands past that measure-theoretic issue, which is needed only for consistency.
* Note that we have not said anything about how events are assigned probabilities. That is the engineering aspect. The theory can guide the assignment of these probabilities, but is not overly concerned with how that is done.
There are many consequences:
1. P(A^c) = 1 − P(A) ⇒ P(φ) = 0
2. P(A ∪ B) = P(A) + P(B) − P(A ∩ B)
3. Inclusion-Exclusion principle: If A_1, A_2, . . . , A_n are events, then
   P(∪_{j=1}^n A_j) = Σ_{j=1}^n P(A_j) − Σ_{1≤i<j≤n} P(A_i ∩ A_j) + Σ_{1≤i<j<k≤n} P(A_i ∩ A_j ∩ A_k) − · · · + (−1)^{n+1} P(A_1 ∩ A_2 ∩ . . . ∩ A_n)
   This can be proved using induction.
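The identity can also be checked numerically on a small finite sample space. The sketch below (plain Python; all function names are mine) compares both sides for events under the uniform measure on a 20-point Ω.

```python
from itertools import combinations
import random

def inclusion_exclusion(events, omega_size):
    """Right-hand side: alternating sum over all non-empty sub-collections.
    Probabilities are uniform: P(A) = |A| / |Omega|."""
    n = len(events)
    total = 0.0
    for r in range(1, n + 1):
        sign = (-1) ** (r + 1)
        for subset in combinations(events, r):
            inter = set.intersection(*subset)
            total += sign * len(inter) / omega_size
    return total

def union_probability(events, omega_size):
    """Left-hand side: P(A_1 ∪ ... ∪ A_n) computed directly."""
    return len(set.union(*events)) / omega_size

random.seed(1)
omega = range(20)
events = [set(random.sample(omega, random.randint(1, 10))) for _ in range(4)]
assert abs(union_probability(events, 20) - inclusion_exclusion(events, 20)) < 1e-12
```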
4. Monotonicity: A ⊂ B ⇒ P(A) ≤ P(B)
5. Continuity: First define limits of sets: A = ∪ A_n or A = ∩ A_n.
(a) If A_n ↑ A, then P(A_n) ↑ P(A). That is, if A_1 ⊂ A_2 ⊂ . . . ⊂ A, then lim_{n→∞} P(A_n) = P(lim_{n→∞} A_n).
    (Compare continuity of functions: f continuous ⇒ lim_{x_n→x} f(x_n) = f(lim_{x_n→x} x_n) = f(x).)
(b) If A_n ↓ A, then P(A_n) ↓ P(A).
Proof:
a) A_1 ⊂ A_2 ⊂ . . . Define B_1 = A_1, B_2 = A_2\A_1 = A_2 ∩ A_1^c, B_3 = A_3\A_2, and so on.
{B_n} is a sequence of disjoint "annular rings", with ∪_{k=1}^n B_k = A_n and ∪_{n=1}^∞ B_n = ∪_{n=1}^∞ A_n = A.
By additivity of P,
P(A) = P(∪_{n=1}^∞ B_n) = Σ_{n=1}^∞ P(B_n) = lim_{n→∞} Σ_{k=1}^n P(B_k)
     = lim_{n→∞} P(∪_{k=1}^n B_k) = lim_{n→∞} P(A_n)
We have P(A) = lim_{n→∞} P(A_n).
b) A_1 ⊃ A_2 ⊃ . . . ⊃ A. Note that A ⊂ B ⇔ A^c ⊃ B^c, so
A_1^c ⊂ A_2^c ⊂ . . . ⊂ A^c
By part (a), P(A^c) = lim_{n→∞} P(A_n^c)
⇒ 1 − P(A) = lim_{n→∞} (1 − P(A_n)) = 1 − lim_{n→∞} P(A_n)
⇒ P(A) = lim_{n→∞} P(A_n)
Limits of sets: Let {A_n} be a sequence of events.
inf_{k≥n} A_k = ∩_{k=n}^∞ A_k        sup_{k≥n} A_k = ∪_{k=n}^∞ A_k
liminf_{n→∞} A_n = ∪_{n=1}^∞ ∩_{k=n}^∞ A_k        limsup_{n→∞} A_n = ∩_{n=1}^∞ ∪_{k=n}^∞ A_k
If liminf_{n→∞} A_n = limsup_{n→∞} A_n = A, then we say A_n converges to A.
Some useful interpretations:
limsup A_n = {ω : Σ_n 1_{A_n}(ω) = ∞}
           = {ω : ω ∈ A_{n_k}, k = 1, 2, . . . for some sequence n_k}
           = {A_n i.o.}   ("A_n infinitely often")
liminf A_n = {ω : ω ∈ A_n for all n except a finite number}
           = {ω : Σ_n 1_{A_n^c}(ω) < ∞}
           = {ω : ω ∈ A_n ∀ n ≥ n_0(ω)}
Borel-Cantelli Lemma: Let {A_n} be a sequence of events. If Σ_{n=1}^∞ P(A_n) < ∞, then P(A_n i.o.) = P(limsup A_n) = 0.
Proof:
P(A_n i.o.) = P(lim_{n→∞} ∪_{j≥n} A_j)
            = lim_{n→∞} P(∪_{j≥n} A_j)        (by continuity of P)
            ≤ lim_{n→∞} Σ_{j=n}^∞ P(A_j) = 0        (since Σ_n P(A_n) < ∞)
Converse to the Borel-Cantelli Lemma: If {A_n} are independent events such that Σ_n P(A_n) = ∞, then P(A_n i.o.) = 1.
P(A_n i.o.) = P(lim_{n→∞} ∪_{j≥n} A_j)
            = lim_{n→∞} P(∪_{j≥n} A_j)
            = lim_{n→∞} (1 − P(∩_{j≥n} A_j^c))
            = 1 − lim_{n→∞} lim_{m→∞} Π_{k=n}^m (1 − P(A_k))
Now 1 − P(A_k) ≤ e^{−P(A_k)}, therefore
lim_{m→∞} Π_{k=n}^m (1 − P(A_k)) ≤ lim_{m→∞} Π_{k=n}^m e^{−P(A_k)} = lim_{m→∞} e^{−Σ_{k=n}^m P(A_k)} = e^{−Σ_{k=n}^∞ P(A_k)} = e^{−∞} = 0
Hence P(A_n i.o.) = 1 − 0 = 1.
Random Variable:
Consider a random experiment with the sample space (Ω, A). A random variable is a function that assigns a real number to each outcome in Ω:
X : Ω → R
In addition, for any interval (a, b) we want X^{−1}((a, b)) ∈ A. This is a technical condition we are stating merely for completeness; such a function is called Borel measurable.
The cumulative distribution function of a random variable X is defined as
F(x) = P(X ≤ x) := P({ω ∈ Ω : X(ω) ≤ x}),   x ∈ R
A random variable is said to be discrete if it can take only a finite or countable (denumerable) set of values. The probability mass function (PMF) gives the probability that X will take a particular value:
P_X(x) = P(X = x)
We have F(x) = Σ_{y≤x} P_X(y).
A random variable is said to be continuous if there exists a function f(x), called the probability density function, such that
F(x) = P(X ≤ x) = ∫_{−∞}^x f(y) dy
Differentiating, we get f(x) = (d/dx) F(x).
The distribution function F_X(x) satisfies the following properties:
1) F(x) ≥ 0
2) F(x) is right continuous: lim_{x_n↓x} F(x_n) = F(x)
3) F(−∞) = 0, F(+∞) = 1
Proof of right continuity: Let x_n ↓ x, and let A_n = {ω : X ≤ x_n}, A = {ω : X ≤ x}.
Clearly A_1 ⊃ A_2 ⊃ . . . ⊃ A_n ⊃ . . . ⊃ A, and A = ∩_{n=1}^∞ A_n = {ω : X(ω) ≤ x}.
We have lim_{n→∞} P(A_n) = lim_{x_n↓x} F(x_n). By continuity of P, P(lim_{n→∞} A_n) = P(A) = F(x).
Independence
Suppose (Ω, A, P) is a probability space. Events A, B ∈ A are independent if
P(A ∩ B) = P(A).P(B)
In general, events A_1, A_2, . . . , A_n are said to be independent if
P(∩_{i∈I} A_i) = Π_{i∈I} P(A_i)
for all finite I ⊂ {1, . . . , n}.
Note that pairwise independence does not imply independence as defined above. Let Ω = {1, 2, 3, 4}, each outcome equally probable, and let A_1 = {1, 2}, A_2 = {1, 3}, A_3 = {1, 4}. Then any two of these events are independent, but the three together are not.
A finite collection of random variables X_1, . . . , X_k is independent if
P(X_1 ≤ x_1, . . . , X_k ≤ x_k) = Π_{i=1}^k P(X_i ≤ x_i)   ∀ x_i ∈ R, 1 ≤ i ≤ k
Independence is a key notion in probability. It is a technical condition; don't rely on intuition.
Conditional Probability
The probability of event A occurring, given that an event B has occurred, is given by
P(A|B) = P(A ∩ B) / P(B),   P(B) > 0
If A and B are independent, then
P(A|B) = P(A)P(B) / P(B) = P(A), as expected.
In general, P(∩_{i=1}^n A_i) = P(A_1) P(A_2|A_1) · · · P(A_n|A_1, A_2, . . . , A_{n−1}).
Expected Value
The expectation, average or mean of a random variable is given by
EX = Σ_x x P(X = x)            if X is discrete
   = ∫_{−∞}^∞ x f(x) dx        if X is continuous
In general, EX = ∫_{−∞}^∞ x dF(x). This has a well defined meaning which reduces to the above two special cases when X is discrete or continuous, but we will not explore this aspect any further.
We can also talk of the expected value of a function:
Eh(X) = ∫_{−∞}^∞ h(x) dF(x)
The mean is the first moment. The n-th moment is given by
EX^n = ∫_{−∞}^∞ x^n dF(x), if it exists.
Var X = E(X − EX)^2 = EX^2 − (EX)^2
√(Var X) is called the standard deviation.
Conditional Expectation:
If X and Y are discrete, the conditional p.m.f. of X given Y is defined as
P(X = x|Y = y) = P(X = x, Y = y) / P(Y = y),   P(Y = y) > 0
The conditional distribution of X given Y = y is defined as F(x|y) = P(X ≤ x|Y = y), and the conditional expectation is given by
E[X|Y = y] = Σ_x x P(X = x|Y = y)
If X and Y are continuous, we define the conditional pdf of X given Y as
f_{X|Y}(x|y) = f(x, y) / f(y)
The conditional cdf is given by
F_{X|Y}(x|y) = ∫_{−∞}^x f_{X|Y}(x|y) dx
The conditional mean is given by
E[X|Y = y] = ∫ x f_{X|Y}(x|y) dx
It is also possible to define conditional expectations of functions of random variables in a similar fashion.
Important property: If X and Y are random variables,
EX = E[E[X|Y]] = ∫ E[X|Y = y] dF_Y(y)
Markov Inequality:
Suppose X ≥ 0. Then for any a > 0,
P(X ≥ a) ≤ EX / a
Proof: EX = ∫_0^a x dF(x) + ∫_a^∞ x dF(x) ≥ ∫_a^∞ a dF(x) = a.P(X ≥ a)
Therefore P(X ≥ a) ≤ EX / a.
Chebyshev's inequality:
P(|X − EX| ≥ ε) ≤ Var(X) / ε^2
Proof: For any a,
P(|X − a| ≥ ε) = P((X − a)^2 ≥ ε^2) ≤ E(X − a)^2 / ε^2
by the Markov inequality applied to (X − a)^2; now take a = EX.
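Both inequalities are easy to sanity-check on a small discrete distribution. The sketch below (plain Python; helper names are mine) evaluates both sides exactly.

```python
def markov_bound_holds(values, probs, a):
    """Check P(X >= a) <= E[X]/a for a nonnegative discrete X."""
    ex = sum(v * p for v, p in zip(values, probs))
    tail = sum(p for v, p in zip(values, probs) if v >= a)
    return tail <= ex / a

def chebyshev_bound_holds(values, probs, eps):
    """Check P(|X - EX| >= eps) <= Var(X)/eps^2."""
    ex = sum(v * p for v, p in zip(values, probs))
    var = sum((v - ex) ** 2 * p for v, p in zip(values, probs))
    tail = sum(p for v, p in zip(values, probs) if abs(v - ex) >= eps)
    return tail <= var / eps ** 2

# X uniform on {0, ..., 9}
vals = list(range(10))
ps = [0.1] * 10
assert all(markov_bound_holds(vals, ps, a) for a in [1, 2, 5, 8])
assert all(chebyshev_bound_holds(vals, ps, e) for e in [1, 2, 3, 4])
```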
The Weak Law of Large Numbers
Let X_1, X_2, . . . be a sequence of independent and identically distributed random variables with mean μ and finite variance σ^2. Let
S_n = (1/n) Σ_{k=1}^n X_k
Then P(|S_n − μ| ≥ δ) → 0 as n → ∞, for every δ > 0.
Proof: Take any δ > 0. By Chebyshev's inequality,
P(|S_n − μ| ≥ δ) ≤ Var S_n / δ^2 = (1/n^2).nσ^2 / δ^2 = σ^2 / (nδ^2)
Hence lim_{n→∞} P(|S_n − μ| ≥ δ) = 0.
Since δ can be made arbitrarily small, S_n concentrates around μ. The above result holds even when σ^2 is infinite, as long as the mean is finite.
We say S_n → μ in probability.
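The WLLN can be illustrated by simulation. The sketch below (plain Python; the helper name is mine) estimates the deviation probability for the sample mean of fair-coin flips and shows it shrinking with n.

```python
import random

def empirical_deviation_prob(n, delta, trials=2000, seed=0):
    """Estimate P(|S_n - mu| >= delta) by Monte Carlo, where S_n is the
    sample mean of n fair-coin flips (so mu = 0.5)."""
    rng = random.Random(seed)
    bad = 0
    for _ in range(trials):
        s = sum(rng.random() < 0.5 for _ in range(n)) / n
        if abs(s - 0.5) >= delta:
            bad += 1
    return bad / trials

# The deviation probability shrinks as n grows, as the WLLN predicts.
p_small_n = empirical_deviation_prob(10, 0.1)
p_large_n = empirical_deviation_prob(1000, 0.1)
assert p_large_n < p_small_n
```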
Information Theory and Coding
Lecture 2
Pavan Nuggehalli — Entropy
The entropy H(X) of a discrete random variable X is given by
H(X) = Σ_{x∈X} P(x) log (1/P(x)) = − Σ_{x∈X} P(x) log P(x) = E log (1/P(X))
log (1/P(X)) is called the self-information of X. Entropy is the expected value of self-information.
Properties:
1. H(X) ≥ 0
   P(X) ≤ 1 ⇒ 1/P(X) ≥ 1 ⇒ log (1/P(X)) ≥ 0
2. Let H_a(X) = E log_a (1/P(X)). Then H_a(X) = (log_a 2).H(X)
3. Let |X| = M. Then H(X) ≤ log M:
   H(X) − log M = Σ_{x∈X} P(x) log (1/P(x)) − log M
                = Σ_{x∈X} P(x) log (1/P(x)) − Σ_{x∈X} P(x) log M
                = Σ_{x∈X} P(x) log (1/(M P(x)))
                = E log (1/(M P(X)))
                ≤ log E (1/(M P(X)))        (Jensen's inequality)
                = log Σ_{x∈X} P(x).(1/(M P(x))) = log 1 = 0
   Therefore H(X) ≤ log M. When P(x) = 1/M ∀ x ∈ X,
   H(X) = Σ_{x∈X} (1/M) log M = log M
4. H(X) = 0 ⇒ X is a constant
Example: X = {0, 1}, P(X = 1) = p, P(X = 0) = 1 − p
H(X) = p log (1/p) + (1 − p) log (1/(1−p)) = H(p)   (the binary entropy function)
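A small Python helper for the binary entropy function (my naming), with the boundary convention 0·log(1/0) = 0 made explicit:

```python
from math import log2

def binary_entropy(p):
    """H(p) = p log(1/p) + (1-p) log(1/(1-p)), in bits."""
    if p in (0.0, 1.0):
        return 0.0  # by convention 0 * log(1/0) = 0
    return p * log2(1 / p) + (1 - p) * log2(1 / (1 - p))

assert binary_entropy(0.5) == 1.0       # maximum at p = 1/2
assert binary_entropy(0.0) == 0.0       # a constant has zero entropy
assert abs(binary_entropy(0.11) - binary_entropy(0.89)) < 1e-12  # symmetric
```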
We can easily extend the definition of entropy to multiple random variables. For example, let Z = (X, Y) where X and Y are random variables.
Definition: The joint entropy H(X, Y) of a pair (X, Y) with joint distribution P(x, y) is given by
H(X, Y) = Σ_{x∈X} Σ_{y∈Y} P(x, y) log (1/P(x, y)) = E log (1/P(X, Y))
If X and Y are independent, then
H(X, Y) = Σ_{x∈X} Σ_{y∈Y} P(x).P(y) log (1/(P(x)P(y)))
        = Σ_{x∈X} Σ_{y∈Y} P(x).P(y) log (1/P(x)) + Σ_{x∈X} Σ_{y∈Y} P(x)P(y) log (1/P(y))
        = Σ_{y∈Y} P(y)H(X) + Σ_{x∈X} P(x)H(Y)
        = H(X) + H(Y)
In general, given X_1, . . . , X_n i.i.d. random variables,
H(X_1, . . . , X_n) = Σ_{i=1}^n H(X_i)
We showed earlier that for optimal coding,
H(X) ≤ L̄ < H(X) + 1
What happens if we encode blocks of symbols? Let us take n symbols at a time:
X^n = (X_1, . . . , X_n). Let L̄_n be the optimal expected codeword length for X^n. Then
H(X_1, . . . , X_n) ≤ L̄_n < H(X_1, . . . , X_n) + 1
H(X_1, . . . , X_n) = Σ H(X_i) = nH(X)
nH(X) ≤ L̄_n < nH(X) + 1
H(X) ≤ L̄_n / n < H(X) + 1/n
Therefore, by encoding a block of source symbols at a time, we can get as near to the entropy bound as required.
Information Theory and Coding
Lecture 3
Pavan Nuggehalli — Asymptotic Equipartition Property
The Asymptotic Equipartition Property (AEP) is a manifestation of the weak law of large numbers.
Given a discrete memoryless source, the number of strings of length n is |X|^n. The AEP asserts that there exists a typical set, whose cumulative probability is almost 1. There are around 2^{nH(X)} strings in this typical set, and each has probability around 2^{−nH(X)}.
"Almost all events are almost equally surprising."
Theorem: Suppose X_1, X_2, . . . are i.i.d. with distribution p(x). Then
−(1/n) log P(X_1, . . . , X_n) → H(X) in probability
Proof: Let Y_k = log (1/P(X_k)). Then the Y_k are i.i.d. and EY_k = H(X).
Let S_n = (1/n) Σ_{k=1}^n Y_k. By the WLLN, S_n → H(X) in probability.
But S_n = (1/n) Σ_{k=1}^n log (1/P(X_k)) = −(1/n) log Π_{k=1}^n P(X_k) = −(1/n) log P(X_1, . . . , X_n)
Definition: The typical set A_ε^n is the set of sequences x^n = (x_1, . . . , x_n) ∈ X^n such that
2^{−n(H(X)+ε)} ≤ P(x_1, . . . , x_n) ≤ 2^{−n(H(X)−ε)}
Theorem:
a) If (x_1, . . . , x_n) ∈ A_ε^n, then H(X) − ε ≤ −(1/n) log P(x_1, . . . , x_n) ≤ H(X) + ε
b) Pr(A_ε^n) > 1 − ε for large enough n
c) |A_ε^n| ≤ 2^{n(H(X)+ε)}
d) |A_ε^n| ≥ (1 − ε) 2^{n(H(X)−ε)} for large enough n
Remarks:
1. Each string in A_ε^n is approximately equiprobable.
2. The typical set occurs with probability almost 1.
3. The size of the typical set is roughly 2^{nH(X)}.
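For small n the typical set can be enumerated exhaustively. The sketch below (plain Python; names are mine) does so for an i.i.d. Bernoulli(p) source, checking the membership condition string by string; the set is a small fraction of all 2^n strings, and for large enough n it carries most of the probability.

```python
from itertools import product
from math import log2

def typical_set(p, n, eps):
    """Enumerate A_eps^n for an i.i.d. Bernoulli(p) source (small n only:
    exhaustive over all 2^n binary strings). Returns (strings, total prob)."""
    h = -p * log2(p) - (1 - p) * log2(1 - p)      # entropy H(X) in bits
    lo, hi = 2 ** (-n * (h + eps)), 2 ** (-n * (h - eps))
    typical, total_prob = [], 0.0
    for x in product([0, 1], repeat=n):
        k = sum(x)                                 # number of ones
        px = p ** k * (1 - p) ** (n - k)           # probability of the string
        if lo <= px <= hi:
            typical.append(x)
            total_prob += px
    return typical, total_prob

ts, prob = typical_set(0.3, 12, 0.1)
assert 0 < len(ts) < 2 ** 12   # typical set is a proper, nonempty subset
```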
Proof:
a) Follows from the definition.
b) The AEP gives −(1/n) log P(X_1, . . . , X_n) → H(X) in probability, so
Pr(|−(1/n) log P(X_1, . . . , X_n) − H(X)| < ε) > 1 − δ for large enough n.
Take δ = ε; then Pr(A_ε^n) > 1 − ε.
c)
1 = Σ_{x^n ∈ X^n} P(x^n) ≥ Σ_{x^n ∈ A_ε^n} P(x^n) ≥ Σ_{x^n ∈ A_ε^n} 2^{−n(H(X)+ε)} = |A_ε^n|.2^{−n(H(X)+ε)}
⇒ |A_ε^n| ≤ 2^{n(H(X)+ε)}
d) For large enough n, Pr(A_ε^n) > 1 − ε, so
1 − ε < Pr(A_ε^n) = Σ_{x^n ∈ A_ε^n} P(x^n) ≤ Σ_{x^n ∈ A_ε^n} 2^{−n(H(X)−ε)} = |A_ε^n|.2^{−n(H(X)−ε)}
⇒ |A_ε^n| ≥ (1 − ε).2^{n(H(X)−ε)}
The number of strings of length n is |X|^n, while the number of typical strings of length n is ≈ 2^{nH(X)}. The fraction of strings that are typical vanishes:
lim_{n→∞} 2^{nH(X)} / |X|^n = lim_{n→∞} 2^{−n(log|X| − H(X))} = 0   (when H(X) < log|X|)
One of the consequences of the AEP is that it provides a method for optimal coding. This has more theoretical than practical significance.
Divide all strings of length n into A_ε^n and its complement A_ε^{nc}. We know that |A_ε^n| ≤ 2^{n(H(X)+ε)}.
Each sequence in A_ε^n is represented by its index in the set. Instead of transmitting the string, we can transmit its index:
#bits required = ⌈log|A_ε^n|⌉ < n(H(X) + ε) + 1
Prefix each such sequence by a 0, so that the decoder knows that what follows is an index number:
#bits ≤ n(H(X) + ε) + 2
For x^n ∈ A_ε^{nc}, send the string uncompressed, prefixed by a 1:
#bits required ≤ n log|X| + 1 + 1
Let l(x^n) be the length of the codeword corresponding to x^n. Assume n is large enough that Pr(A_ε^n) > 1 − ε. Then
E l(X^n) = Σ_{x^n} P(x^n) l(x^n)
         = Σ_{x^n ∈ A_ε^n} P(x^n) l(x^n) + Σ_{x^n ∈ A_ε^{nc}} P(x^n) l(x^n)
         ≤ Σ_{x^n ∈ A_ε^n} P(x^n) [n(H + ε) + 2] + Σ_{x^n ∈ A_ε^{nc}} P(x^n) (n log|X| + 2)
         = Pr(A_ε^n).(n(H + ε) + 2) + Pr(A_ε^{nc}).(n log|X| + 2)
         ≤ n(H + ε) + 2 + ε.n log|X|
         = n(H + ε₁),   where ε₁ = ε + ε log|X| + 2/n
Theorem: For a DMS, there exists a uniquely decodable (UD) code which satisfies
E[(1/n) l(X^n)] ≤ H(X) + ε for n sufficiently large.
Information Theory and Coding
Lecture 4
Pavan Nuggehalli — Data Compression
The conditional entropy of a random variable Y with respect to a random variable X is defined as
H(Y|X) = Σ_{x∈X} P(x) H(Y|X = x)
       = Σ_{x∈X} P(x) Σ_{y∈Y} P(y|x) log (1/P(y|x))
       = Σ_{x∈X} Σ_{y∈Y} P(x, y) log (1/P(y|x))
       = E log (1/P(Y|X))
In general, suppose X = (X_1, . . . , X_n), Y = (Y_1, . . . , Y_m). Then H(X|Y) = E log (1/P(X|Y)).
Theorem (Chain Rule): H(X, Y) = H(X) + H(Y|X)
Proof:
H(X, Y) = − Σ_{x∈X} Σ_{y∈Y} P(x, y) log P(x, y)
        = − Σ_{x∈X} Σ_{y∈Y} P(x, y) log P(x).P(y|x)
        = − Σ_x Σ_y P(x, y) log P(x) − Σ_{x∈X} Σ_{y∈Y} P(x, y) log P(y|x)
        = − Σ_x P(x) log P(x) − Σ_{x∈X} Σ_{y∈Y} P(x, y) log P(y|x)
        = H(X) + H(Y|X)
Corollaries:
1) H(X, Y|Z) = H(X|Z) + H(Y|X, Z), where H(Y|X, Z) = E log (1/P(Y|X, Z))
2) H(X_1, . . . , X_n) = Σ_{k=1}^n H(X_k|X_{k−1}, . . . , X_1)
e.g. H(X_1, X_2) = H(X_1) + H(X_2|X_1)
H(X_1, X_2, X_3) = H(X_1) + H(X_2, X_3|X_1) = H(X_1) + H(X_2|X_1) + H(X_3|X_1, X_2)
3) H(Y|X) ≤ H(Y)   (conditioning cannot increase entropy)
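The chain rule and the conditioning inequality can be verified numerically on a small joint pmf; the numbers below are hypothetical, chosen only for illustration (Python, names mine).

```python
from math import log2

def H(dist):
    """Entropy in bits of a pmf given as {outcome: probability}."""
    return sum(p * log2(1 / p) for p in dist.values() if p > 0)

# A small joint pmf P(x, y) on {0,1} x {0,1} (illustrative numbers).
joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.3}
px = {0: 0.5, 1: 0.5}   # marginal of X
py = {0: 0.6, 1: 0.4}   # marginal of Y

# H(Y|X) = sum_x P(x) H(Y | X = x)
h_y_given_x = sum(
    px[x] * H({y: joint[(x, y)] / px[x] for y in (0, 1)}) for x in (0, 1)
)

# Chain rule: H(X, Y) = H(X) + H(Y|X)
assert abs(H(joint) - (H(px) + h_y_given_x)) < 1e-12
# Conditioning cannot increase entropy: H(Y|X) <= H(Y)
assert h_y_given_x <= H(py) + 1e-12
```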
Stationary Process: A stochastic process is said to be stationary if the joint distribution of any subset of the sequence of random variables is invariant with respect to shifts in the time index:
Pr(X_1 = x_1, . . . , X_n = x_n) = Pr(X_{1+t} = x_1, . . . , X_{n+t} = x_n)
∀ t ∈ Z and all x_1, . . . , x_n ∈ X
Remark: For a stationary process, H(X_n|X_{n−1}) = H(X_2|X_1).
Entropy Rate: The entropy rate of a stationary stochastic process is given by
H = lim_{n→∞} (1/n) H(X_1, . . . , X_n)
Theorem: For a stationary stochastic process, H exists and further satisfies
H = lim_{n→∞} (1/n) H(X_1, . . . , X_n) = lim_{n→∞} H(X_n|X_{n−1}, . . . , X_1)
Proof: We will first show that lim H(X_n|X_{n−1}, . . . , X_1) exists, and then show that
lim_{n→∞} (1/n) H(X_1, . . . , X_n) = lim_{n→∞} H(X_n|X_{n−1}, . . . , X_1)
Recall: lim_{n→∞} x_n = x means that for any ε > 0 there exists a number N_ε such that |x_n − x| < ε ∀ n ≥ N_ε.
Theorem: If x_n is a bounded, monotonically decreasing sequence, then lim_{n→∞} x_n exists.
Now,
H(X_{n+1}|X_1, . . . , X_n) ≤ H(X_{n+1}|X_2, . . . , X_n) = H(X_n|X_1, . . . , X_{n−1})   (by stationarity)
⇒ H(X_n|X_1, . . . , X_{n−1}) is monotonically decreasing with n. Also
0 ≤ H(X_n|X_1, . . . , X_{n−1}) ≤ H(X_n) ≤ log|X|
so the sequence is bounded and the limit exists.
Cesàro mean
Theorem: If a_n → a, then b_n = (1/n) Σ_{k=1}^n a_k → a.
We want to show: ∀ ε > 0, ∃ N_ε* s.t. |b_n − a| < ε ∀ n ≥ N_ε*.
We know a_n → a, so ∃ N_{ε/2} s.t. |a_n − a| ≤ ε/2 ∀ n ≥ N_{ε/2}.
|b_n − a| = |(1/n) Σ_{k=1}^n (a_k − a)|
          ≤ (1/n) Σ_{k=1}^n |a_k − a|
          ≤ (1/n) Σ_{k=1}^{N_{ε/2}} |a_k − a| + ((n − N_{ε/2})/n).(ε/2)
          ≤ (1/n) Σ_{k=1}^{N_{ε/2}} |a_k − a| + ε/2
Choose n large enough that the first term is less than ε/2. Then
|b_n − a| ≤ ε/2 + ε/2 = ε ∀ n ≥ N_ε*
Now we will show that lim H(X_n|X_1, . . . , X_{n−1}) = lim (1/n) H(X_1, . . . , X_n). By the chain rule,
H(X_1, . . . , X_n)/n = (1/n) Σ_{k=1}^n H(X_k|X_{k−1}, . . . , X_1)
The terms on the right converge to lim_{n→∞} H(X_n|X_1, . . . , X_{n−1}), so by the Cesàro mean theorem,
lim_{n→∞} H(X_1, . . . , X_n)/n = lim_{n→∞} H(X_n|X_1, . . . , X_{n−1})
Why do we care about the entropy rate H? Because the AEP holds for all stationary ergodic processes:
−(1/n) log P(X_1, . . . , X_n) → H in probability
This can be used to show that the entropy rate is the minimum number of bits per symbol required for uniquely decodable lossless compression.
Universal source coding/compression algorithms are a class of source coding algorithms which operate without knowledge of the source statistics. In this course, we will consider the Lempel-Ziv compression algorithms, which are very popular; widely used tools such as WinZip and gzip use versions of these algorithms. Lempel and Ziv developed two versions: LZ78, which uses an adaptive dictionary, and LZ77, which employs a sliding window. We will first describe the algorithm and then show why it is so good.
Assume you are given a sequence of symbols x_1, x_2, . . . to encode. The algorithm maintains a window of the W most recently encoded source symbols. The window size is fairly large, ≃ 2^10 to 2^17, and a power of 2. Complexity and performance increase with W.
a) Encode the first W letters without compression. If |X| = M, this will require ⌈W log M⌉ bits. This cost gets amortized over time over all symbols, so we are not worried about this overhead.
b) Set the pointer P to W.
c) Find the largest n such that
   x_{P+1}^{P+n} = x_{P−u}^{P−u+n−1} for some u, 0 ≤ u ≤ W − 1
   Set n = 1 if no match exists for n ≥ 1.
d) Encode n into a prefix-free codeword. The particular code used here is called the unary-binary code: n is encoded as the binary representation of n, preceded by ⌊log n⌋ zeros.
   1 : ⌊log 1⌋ = 0   →  1
   2 : ⌊log 2⌋ = 1   →  010
   3 : ⌊log 3⌋ = 1   →  011
   4 : ⌊log 4⌋ = 2   →  00100
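The unary-binary code above (also known as the Elias gamma code) takes one line of Python; the sketch below (my own helper name) reproduces the table.

```python
def unary_binary(n):
    """Unary-binary (Elias gamma) codeword for n >= 1: the binary
    representation of n preceded by floor(log2 n) zeros."""
    assert n >= 1
    b = bin(n)[2:]                    # binary representation, no leading zeros
    return "0" * (len(b) - 1) + b     # len(b) - 1 == floor(log2 n)

assert unary_binary(1) == "1"
assert unary_binary(2) == "010"
assert unary_binary(3) == "011"
assert unary_binary(4) == "00100"
```

Because the zero-run announces the length of what follows, no codeword is a prefix of another, which is what step d) requires.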
e) If n > 1, encode u using ⌈log W⌉ bits. If n = 1, encode x_{P+1} using ⌈log M⌉ bits.
f) Set P = P + n; update the window and iterate.
Let R(N) be the expected number of bits required to code N symbols. Then
lim_{W→∞} lim_{N→∞} R(N)/N = H(X)
"Baby LZ algorithm"
Assume you have been given a data sequence of W + N symbols, where W = 2^{n*(H+2ε)} and H is the entropy rate of the source. Assume n* divides N and that |X| = M is a power of 2.
Compression algorithm: If there is a match of n* symbols somewhere in the window, send the index of the match in the window using log W (= n*(H + 2ε)) bits. Else send the n* symbols uncompressed.
Note: There is no need to encode n here. The performance is suboptimal compared to full LZ, which achieves more compression because encoding n costs only about log n bits.
Let Y_k = # bits generated by the k-th segment. Then
#bits sent = Σ_{k=1}^{N/n*} Y_k
Y_k = log W        if match
    = n* log M     if no match
E(#bits sent) = Σ_{k=1}^{N/n*} EY_k = (N/n*).(P(match).log W + P(no match).n* log M)
E(#bits sent)/N = P(match).(log W)/n* + P(no match).log M
Claim: P(no match) → 0 as n* → ∞.
Given the claim,
lim_{n*→∞} E(#bits sent)/N = (log W)/n* = n*(H + 2ε)/n* = H + 2ε
Let S be the minimum number of backward shifts required to find a match for the n* symbols X_{P+1}, . . . , X_{P+n*}.
Fact: For a stationary ergodic source,
E(S | X_{P+1}, X_{P+2}, . . . , X_{P+n*}) = 1/P(X_{P+1}, . . . , X_{P+n*})
This result is due to Kac.
By the Markov inequality,
P(no match | X_{P+1}^{P+n*}) = P(S > W | X_{P+1}^{P+n*}) ≤ E(S | X_{P+1}^{P+n*})/W = 1/(P(X_{P+1}^{P+n*}).W)
Averaging over the source sequence (writing x^{n*} for the block X_{P+1}^{P+n*}),
P(no match) = P(S > W) = Σ_{x^{n*}} P(x^{n*}) P(S > W | x^{n*})
            = Σ_{x^{n*} ∈ A_ε^{n*}} P(x^{n*}) P(S > W | x^{n*}) + Σ_{x^{n*} ∈ A_ε^{n*c}} P(x^{n*}) P(S > W | x^{n*})
The second sum is at most P(A_ε^{n*c}) → 0 as n* → ∞.
For the first sum, x^{n*} ∈ A_ε^{n*} ⇒ P(x^{n*}) ≥ 2^{−n*(H+ε)}, therefore 1/P(x^{n*}) ≤ 2^{n*(H+ε)}, and
Σ_{x^{n*} ∈ A_ε^{n*}} P(x^{n*}).1/(P(x^{n*}).W) ≤ (2^{n*(H+ε)}/W) Σ_{x^{n*} ∈ A_ε^{n*}} P(x^{n*}) = (2^{n*(H+ε)}/W).P(A_ε^{n*})
≤ 2^{n*(H+ε)}/W = 2^{n*(H+ε−H−2ε)} = 2^{−n*ε} → 0 as n* → ∞
Hence P(no match) → 0, which proves the claim.
Information Theory and Coding
Lecture 5
Pavan Nuggehalli — Channel Coding
Source coding deals with representing information as concisely as possible. Channel coding is concerned with the "reliable" "transfer" of information. The purpose of channel coding is to add redundancy in a controlled manner to "manage" errors. One simple approach is that of repetition coding, wherein you repeat the same symbol a fixed (usually odd) number of times. This turns out to be very wasteful in terms of bandwidth and power. In this course we will study linear block codes. We shall see that sophisticated linear block codes can do considerably better than repetition. Good LBCs have been devised using the powerful tools of modern algebra. This algebraic framework also aids the design of encoders and decoders. We will spend some time learning just enough algebra to get a somewhat deep appreciation of modern coding theory. In this introductory lecture, I want to provide a bird's eye view of channel coding.
Channel coding is employed in almost all communication and storage applications. Examples include phone modems, satellite communications, memory modules, magnetic disks and tapes, CDs, DVDs, etc. Other examples: Digital Fountain's Tornado codes for reliable data transmission over the Internet; reliable deep submicron (DSM) VLSI circuits.
There are two modes of error control:
Error detection → Ethernet CRC
Error correction → CD
Errors can be of various types: random or bursty.
There are two basic kinds of codes: block codes and trellis codes.
This course: linear block codes.
Elementary block coding concepts
Definition: An alphabet is a discrete set of symbols.
Examples: binary alphabet {0, 1}; ternary alphabet {0, 1, 2}; letters {a, . . . , z}
Eventually these symbols will be mapped by the modulator into analog waveforms and transmitted. We will not worry about that part now.
In an (n, k) block code, the incoming data stream is divided into blocks of k symbols. Each block of k symbols, called a dataword, is used to generate a block of n symbols called a codeword, leaving (n − k) redundant symbols.
Example: (3, 1) binary repetition code (n = 3, k = 1)
0 → 000
1 → 111
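A minimal sketch of the (3, 1) repetition code with majority-vote decoding (plain Python; helper names are mine). Majority voting corrects any single error per block, the guaranteed correction ability we derive below.

```python
def rep3_encode(bits):
    """(3,1) repetition code: each data bit becomes three copies."""
    return [b for b in bits for _ in range(3)]

def rep3_decode(received):
    """Majority-vote decoding of each block of 3 (corrects 1 error/block)."""
    out = []
    for i in range(0, len(received), 3):
        block = received[i:i + 3]
        out.append(1 if sum(block) >= 2 else 0)
    return out

cw = rep3_encode([1, 0])
assert cw == [1, 1, 1, 0, 0, 0]
cw[1] ^= 1                       # flip one bit in the first block
assert rep3_decode(cw) == [1, 0]  # the single error is corrected
```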
Definition: A block code G of blocklength n over an alphabet X is a nonempty set of n-tuples of symbols from X. These n-tuples are called codewords.
The rate of a code with M codewords is given by
R = (1/n) log_q M
Let us assume |X| = q. Codewords are generated by encoding messages of k symbols, so
# messages = q^k = |G|
Rate of code = k/n
Example: Single Parity Check (SPC) code
Dataword: 010
Codeword: 0101
k = 3, n = 4, Rate = 3/4
This code can detect all single errors.
Ex: All odd numbers of errors can be detected; all even numbers of errors go undetected.
Ex: Suppose errors occur with probability p. What is the probability that error detection fails?
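For the exercise above, detection fails exactly when the error pattern has nonzero even weight (assuming independent symbol errors with probability p). A brute-force Python check (names mine) against the closed form for n = 4:

```python
from itertools import product

def spc_undetected_prob(n, p):
    """Probability that the length-n single parity check code fails to
    detect an error: sum over the even-weight, nonzero error patterns."""
    total = 0.0
    for pattern in product([0, 1], repeat=n):
        w = sum(pattern)            # error weight of this pattern
        if w > 0 and w % 2 == 0:    # even-weight errors pass the parity check
            total += p ** w * (1 - p) ** (n - w)
    return total

# For n = 4 this should equal C(4,2) p^2 (1-p)^2 + C(4,4) p^4.
p = 0.01
expected = 6 * p**2 * (1 - p)**2 + p**4
assert abs(spc_undetected_prob(4, p) - expected) < 1e-15
```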
Hamming distance: The Hamming distance d(x, y) between two q-ary sequences x and y is the number of places in which x and y differ.
Example:
x = 10111
y = 01011
d(x, y) = 1 + 1 + 1 = 3
Intuitively, we want to choose a set of codewords for which the Hamming distance between each pair is large, as this will make it harder to confuse a corrupted codeword with some other codeword.
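In code, the Hamming distance is a one-line helper (Python; the name is mine), shown with a spot-check of the triangle inequality proved next:

```python
def hamming_distance(x, y):
    """Number of positions in which equal-length sequences x and y differ."""
    assert len(x) == len(y)
    return sum(a != b for a, b in zip(x, y))

assert hamming_distance("10111", "01011") == 3   # the example above
# spot-check of the triangle inequality d(x,y) <= d(x,z) + d(z,y)
x, y, z = "10111", "01011", "11011"
assert hamming_distance(x, y) <= hamming_distance(x, z) + hamming_distance(z, y)
```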
The Hamming distance satisfies the conditions for a metric, namely:
1. d(x, y) ≥ 0, with equality iff x = y
2. d(x, y) = d(y, x)   (symmetry)
3. d(x, y) ≤ d(x, z) + d(z, y)   (triangle inequality)
The minimum Hamming distance of a block code is the distance between the two closest codewords:
d_min = min d(c_i, c_j),   c_i, c_j ∈ G, i ≠ j
An (n, k) block code with d_min = d is often referred to as an (n, k, d) block code.
Some simple consequences of d_min:
1. An (n, k, d) block code can always detect up to d − 1 errors.
Suppose codeword c was transmitted and r was received (the received word is often called a senseword). The error weight is the number of symbols changed/corrupted, i.e. d(c, r). If 0 < d(c, r) < d, then r cannot be a codeword; otherwise c and r would be two codewords whose distance is less than the minimum distance.
Note:
(a) Error detection ≠ error correction: we can have d(c_1, r) < d and d(c_2, r) < d simultaneously.
(b) This is the guaranteed error-detecting ability. In practice, errors can be detected even if the error weight exceeds d; e.g. the SPC code detects all odd patterns of errors.
2. An (n, k, d) block code can correct up to t = ⌊(d − 1)/2⌋ errors.
Proof: Suppose we decode using nearest neighbor decoding, i.e. given a senseword r, we choose the transmitted codeword to be
ĉ = arg min_{c ∈ G} d(r, c)
A Hamming sphere of radius r centered at an n-tuple c is the set of all n-tuples c′ satisfying d(c, c′) ≤ r.
t = ⌊(d_min − 1)/2⌋ ⇒ d_min ≥ 2t + 1
Therefore, Hamming spheres of radius t around distinct codewords are non-intersecting. When ≤ t errors occur, the decoder can unambiguously decide which codeword was transmitted.
Singleton Bound: For an (n, k) block code, n − k ≥ d_min − 1.
Proof: Remove the first d_min − 1 symbols of each codeword in C; denote the set of modified codewords by Ĉ. For x ∈ C, denote by x̂ its image in Ĉ.
Then x ≠ y ⇒ x̂ ≠ ŷ: for if x̂ = ŷ, then d(x, y) ≤ d_min − 1, a contradiction.
Therefore q^k = |C| = |Ĉ|. But |Ĉ| ≤ q^{n−d_min+1}
⇒ q^k ≤ q^{n−d_min+1}
⇒ k ≤ n − d_min + 1, or n − k ≥ d_min − 1.
# possible block encodings = 2^{n.2^k}   (in the binary case)
We want to find codes with good distance structure, though distance is not the whole picture. The tools of algebra have been used to discover many good codes. The primary algebraic structures of interest are Galois fields. This structure is exploited not only in discovering good codes but also in designing efficient encoders and decoders.
Information Theory and Coding
Lecture 6
Pavan Nuggehalli — Algebra
We will begin our discussion of algebraic coding theory by defining some important algebraic structures: group, ring, field and vector space.
Group: A group is an algebraic structure (G, ∗) consisting of a set G and a binary operator ∗ satisfying the following four axioms:
1. Closure: ∀ a, b ∈ G, a ∗ b ∈ G
2. Associative law: (a ∗ b) ∗ c = a ∗ (b ∗ c) ∀ a, b, c ∈ G
3. Identity: ∃ e ∈ G such that e ∗ a = a ∗ e = a ∀ a ∈ G
4. Inverse: ∀ a ∈ G, ∃ b ∈ G such that b ∗ a = a ∗ b = e
A group with a finite number of elements is called a finite group. If a ∗ b = b ∗ a ∀ a, b ∈ G, then G is called a commutative or abelian group. For abelian groups, ∗ is usually denoted by + and called addition, and the identity element is called 0.
Examples: (Z, +), (R\{0}, ·), (Z/n, +)
How about (Z, −)? Ex: Prove (Z, −) is not a group.
An example of a non-commutative group: permutation groups.
Let X = {1, 2, . . . , n}. A 1-1 map of X onto itself is called a permutation. The symmetric group S_n is made of the set of permutations of X.
e.g. n = 3: S_3 = {123, 132, 213, 231, 312, 321}
132 denotes the permutation 1 → 1, 2 → 3, 3 → 2. The group operation is defined by the composition of permutations: b ∗ c is the permutation obtained by first applying c and then applying b.
For example:
132 ∗ 213 = 312        (b = 132, c = 213)
213 ∗ 132 = 231   →   non-commutative
A finite group can be represented by an operation table, e.g. Z/2 = {0, 1} under +:
+ | 0 1
0 | 0 1
1 | 1 0
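The four group axioms can be checked by brute force on small finite sets. The sketch below (plain Python; the helper name is mine) confirms that (Z/5, +) is a group, while subtraction mod 5 fails associativity, echoing the (Z, −) exercise above.

```python
def is_group(elements, op):
    """Brute-force check of the four group axioms on a finite set."""
    elements = list(elements)
    # closure
    if any(op(a, b) not in elements for a in elements for b in elements):
        return False
    # associativity
    if any(op(op(a, b), c) != op(a, op(b, c))
           for a in elements for b in elements for c in elements):
        return False
    # identity
    ids = [e for e in elements
           if all(op(e, a) == a and op(a, e) == a for a in elements)]
    if not ids:
        return False
    e = ids[0]
    # inverses
    return all(any(op(a, b) == e and op(b, a) == e for b in elements)
               for a in elements)

assert is_group(range(5), lambda a, b: (a + b) % 5)        # (Z/5, +) is a group
assert not is_group(range(5), lambda a, b: (a - b) % 5)    # subtraction is not associative
```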
Elementary group properties
1. The identity element is unique.
   Let e_1 and e_2 be identity elements. Then e_1 = e_1 ∗ e_2 = e_2.
2. Every element has a unique inverse.
   Suppose b and b′ are two inverses of a. Then
   b = b ∗ e = b ∗ (a ∗ b′) = (b ∗ a) ∗ b′ = e ∗ b′ = b′
3. Cancellation:
   a ∗ b = a ∗ c ⇒ b = c;   b ∗ a = c ∗ a ⇒ b = c
   (e.g. a^{−1} ∗ a ∗ b = a^{−1} ∗ a ∗ c ⇒ b = c)
   ⇒ there are no duplicate elements in any row or column of the operation table.
Exercise: Denote the inverse of x ∈ G by x′. Show that (a ∗ b)′ = b′ ∗ a′.
Definition: The order of a finite group is the number of elements in the group.
Subgroups: A subgroup H of G is a subset of G that is itself a group under the operation of G:
1) Closure: a ∈ H, b ∈ H ⇒ a ∗ b ∈ H
2) Associative: (a ∗ b) ∗ c = a ∗ (b ∗ c)
3) Identity: ∃ e′ ∈ H such that a ∗ e′ = e′ ∗ a = a ∀ a ∈ H
   (Note that e′ = e, the identity of G, since a ∗ e′ = a = a ∗ e.)
4) Inverse: ∀ a ∈ H, ∃ b ∈ H such that a ∗ b = b ∗ a = e
Property (2) holds always because G is a group. Property (3) follows from (1) and (4) provided H is nonempty:
H nonempty ⇒ ∃ a ∈ H; by (4), a^{−1} ∈ H; by (1), a ∗ a^{−1} = e ∈ H.
Examples: {e} is a subgroup; so is G itself. To check whether a nonempty subset H is a subgroup, we need only check closure and inverse (properties 1 and 4). More compactly: a ∗ b^{−1} ∈ H ∀ a, b ∈ H. For a finite group, it is enough to show closure.
To see why, suppose G is finite and h ∈ G. Consider the set H = {h, h ∗ h, h ∗ h ∗ h, . . .}, which we will denote compactly as {h, h^2, h^3, . . .}; it consists of all powers of h.
Let the inverse of h be h′. Then (h^k)′ = (h′)^k. Why? For example, h^2 ∗ (h′)^2 = h ∗ h ∗ h′ ∗ h′ = e, and similarly (h′)^2 ∗ h^2 = e.
Since the set H is finite, there must be repetitions: h^i = h^j for some i > j. Then
h^i ∗ (h′)^j = h^j ∗ (h′)^j ⇒ h^{i−j} = e
So ∃ n such that h^n = e, and
H = {h, h^2, . . . , h^n}   → a cyclic group, the subgroup generated by h
Since h^n = e, h.h^{n−1} = e, so h′ = h^{n−1}; in general, (h^k)′ = h^{n−k}. Thus closure alone yields the identity and inverses.
The order of an element h is the order of the subgroup generated by h.
Ex: Given a finite subset H of a group G which satisfies the closure property, prove that H is a subgroup.
Cosets: A left coset of a subgroup H is the set denoted by g ∗ H = {g ∗ h : h ∈ H}.
Ex: g ∗ H is a subgroup iff g ∈ H.
A right coset is H ∗ g = {h ∗ g : h ∈ H}.
The coset decomposition of a finite group G with respect to H is an array constructed as follows:
a) Write down the first row, consisting of the elements of H.
b) Choose an element of G not in the first row; call it g_2. The second row consists of the elements of the coset g_2 ∗ H.
c) Continue as above, each time choosing an element of G which has not appeared in the previous rows. Stop when there is no unused element left. Because G is finite, the process has to terminate.
h_1 = e    h_2          . . .    h_n
g_2        g_2 ∗ h_2    . . .    g_2 ∗ h_n
.
.
g_m        g_m ∗ h_2    . . .    g_m ∗ h_n
h_1, g_2, g_3, . . . , g_m are called coset leaders.
Note that the coset decomposition is always rectangular; every element of G occurs exactly once in this rectangular array.
Theorem: Every element of G appears once and only once in a coset decomposition of G.
Proof: First show that an element cannot appear twice in the same row, and then show that an element cannot appear in two different rows.
Same row: g_k ∗ h_1 = g_k ∗ h_2 ⇒ h_1 = h_2, a contradiction.
Different rows: suppose g_k ∗ h_1 = g_l ∗ h_2 with k > l. Then g_k = g_l ∗ h_2 ∗ h_1^{−1} ⇒ g_k ∈ g_l ∗ H, a contradiction (g_k would already have appeared in row l).
Hence |G| = |H|.(number of cosets of G with respect to H).
Lagrange's Theorem: The order of any subgroup of a finite group divides the order of the group.
Corollary: A group of prime order has no proper subgroups.
Corollary: The order of an element divides the order of the group.
Rings : A ring is an algebraic structure consisting of a set R and the binary operations + and . satisfying the following axioms
1. (R, +) is an abelian group
2. Closure : a.b ∈ R ∀ a, b ∈ R
3. Associative Law : a.(b.c) = (a.b).c
4. Distributive Laws :
(a + b).c = a.c + b.c
c.(a + b) = c.a + c.b
There are two distributive laws because multiplication need not be commutative.
0 is additive identity, 1 is multiplicative identity
Some simple consequences
1. a.0 = 0.a = 0
   a.0 = a.(0 + 0) = a.0 + a.0, therefore 0 = a.0. Similarly 0.b = (a − a).b gives 0.b = 0.
2. a.(−b) = (−a).b = −(a.b)
   0 = a.0 = a.(b − b) = a.b + a.(−b), therefore a.(−b) = −(a.b). Similarly (−a).b = −(a.b).
Notation : . denotes multiplication, + denotes addition, and a.b is often written ab.
Examples : (Z, +, .), (R, +, .), (Z/n, +, .), and (R^(n×n), +, .), a noncommutative ring.
R[x] : the set of all polynomials with real coefficients under polynomial addition and multiplication :
R[x] = {a_0 + a_1 x + . . . + a_n x^n : n ≥ 0, a_k ∈ R}
Notions of commutative ring, ring with identity. A ring is commutative if multiplication is commutative.
Suppose there is an element 1 ∈ R such that 1.a = a.1 = a for all a.
Then R is a ring with identity.
Example : (2Z, +, .) is a ring without identity
Theorem :
In a ring with identity
i) The identity is unique
ii) If an element a has a multiplicative inverse, then the inverse is unique.
Proof : Same as the proof for groups.
An element of R with an inverse is called a unit.
(Z, +, .)        units : 1 and −1
(R, +, .)        units : R\{0}
(R^(n×n), +, .)  units : nonsingular (invertible) matrices
R[x]             units : the nonzero degree-0 polynomials, i.e. the nonzero constants
If ab = ac and a ≠ 0, then is b = c?
Zero divisors, cancellation law, integral domain
Consider Z/4 = {0, 1, 2, 3}. Suppose a.b = a.c with a ≠ 0. Is b = c? Not necessarily :
take a = 2, b = 1, c = 3; then 2.1 = 2.3 = 2 (mod 4). A ring with no zero divisors is
called an integral domain. Cancellation holds in an integral domain.
Fields :
A field is an algebraic structure consisting of a set F and the binary operators + and .
satisfying
a) (F, +) is an abelian group
b) (F − {0}, .) is an abelian group
c) Distributive law : a.(b + c) = ab + ac
Conventions : additive identity 0, multiplicative identity 1, subtraction a − b = a + (−b),
division a/b = ab^(−1), additive inverse −a, multiplicative inverse a^(−1).
Examples : (R, +, .), (C, +, .), (Q, +, .)
A finite field with q elements, if it exists, is called a Galois field and is denoted
by GF(q). We will see later that q can only be a power of a prime number. A finite field
can be described by its operation tables.
GF(2) :
+ | 0 1        . | 0 1
0 | 0 1        0 | 0 0
1 | 1 0        1 | 0 1

GF(3) :
+ | 0 1 2      . | 0 1 2
0 | 0 1 2      0 | 0 0 0
1 | 1 2 0      1 | 0 1 2
2 | 2 0 1      2 | 0 2 1

GF(4) :
+ | 0 1 2 3    . | 0 1 2 3
0 | 0 1 2 3    0 | 0 0 0 0
1 | 1 0 3 2    1 | 0 1 2 3
2 | 2 3 0 1    2 | 0 2 3 1
3 | 3 2 1 0    3 | 0 3 1 2

Note that multiplication is not modulo 4.
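These tables are small enough to transcribe and sanity-check directly. A minimal sketch (the symbols 0..3 of GF(4) are treated as table indices here, not as integers mod 4):

```python
# Addition and multiplication tables of GF(4), copied from the text above.
ADD = [
    [0, 1, 2, 3],
    [1, 0, 3, 2],
    [2, 3, 0, 1],
    [3, 2, 1, 0],
]
MUL = [
    [0, 0, 0, 0],
    [0, 1, 2, 3],
    [0, 2, 3, 1],
    [0, 3, 1, 2],
]

def gf4_add(a, b):
    return ADD[a][b]

def gf4_mul(a, b):
    return MUL[a][b]

# Multiplication is not modulo 4: here 2 . 2 = 3, whereas 2 * 2 = 0 (mod 4).
assert gf4_mul(2, 2) == 3

# Every nonzero element has an inverse, so the cancellation law holds:
# multiplying GF(4) - {0} by any nonzero a just permutes it.
for a in range(1, 4):
    assert sorted(gf4_mul(a, b) for b in range(1, 4)) == [1, 2, 3]
```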
We will see later how ﬁnite ﬁelds are constructed and study their properties in detail.
Cancellation law holds for multiplication.
Theorem : In any field
ab = ac and a ≠ 0
⇒ b = c
Proof : multiply by a^(−1).
This introduces the notion of an integral domain :
no zero divisors ⇔ the cancellation law holds.
Vector Space :
Let F be a field. The elements of F are called scalars. A vector space over a field F is
an algebraic structure consisting of a set V with a binary operator + on V and a scalar-vector product satisfying :
1. (V, +) is an abelian group
2. Unitary Law : 1.v = v ∀ v ∈ V
3. Associative Law : (c_1 c_2).v = c_1 (c_2 v)
4. Distributive Laws :
c.(v_1 + v_2) = c.v_1 + c.v_2
(c_1 + c_2).v = c_1.v + c_2.v
A linear block code is a vector subspace of GF(q)^n.
Suppose v_1, . . . , v_m are vectors in GF(q)^n.
The span of the vectors {v_1, . . . , v_m} is the set of all linear combinations of these vectors :
S = {a_1 v_1 + a_2 v_2 + . . . + a_m v_m : a_1, . . . , a_m ∈ GF(q)}
  = LS(v_1, . . . , v_m)    LS → linear span
A set of vectors {v_1, . . . , v_m} is said to be linearly independent (LI) if
a_1 v_1 + . . . + a_m v_m = 0 ⇒ a_1 = a_2 = . . . = a_m = 0
i.e. no vector in the set is a linear combination of the other vectors.
A basis for a vector space is a linearly independent set that spans the vector space :
V = LS(basis vectors).
What is a basis for GF(q)^n ?
Take
e_1 = (1, 0, . . . , 0)
e_2 = (0, 1, . . . , 0)
 .
 .
e_n = (0, 0, . . . , 1)
Then {e_k : 1 ≤ k ≤ n} is a basis.
To prove this, we need to show that the e_k are LI and span GF(q)^n.
Span : any v = (v_1, . . . , v_n) can be written as v = Σ_{k=1}^{n} v_k e_k.
Independence : immediate from the definition.
{e_k} is called the standard basis.
The dimension of a vector space is the number of vectors in its basis.
dimension of GF(q)^n = n
Now consider a vector subspace V of GF(q)^n. Suppose {b_1, . . . , b_m} is a basis for V.
Then any v ∈ V can be written as
v = a_1 b_1 + a_2 b_2 + . . . + a_m b_m,    a_1, . . . , a_m ∈ GF(q)
  = (a_1 a_2 . . . a_m) B
where B is the m × n matrix whose rows are b_1, b_2, . . . , b_m.
In short, v = a B where a = (a_1, . . . , a_m) ∈ GF(q)^m.
Is it possible to have two vectors a and a′ such that a B = a′ B ?
Theorem : Every vector can be expressed as a linear combination of basis vectors
in exactly one way.
Proof : Suppose not. Then
a.B = a′.B
⇒ (a − a′).B = 0
⇒ (a_1 − a′_1) b_1 + (a_2 − a′_2) b_2 + . . . + (a_m − a′_m) b_m = 0
But the b_k are LI
⇒ a_k = a′_k, 1 ≤ k ≤ m
⇒ a = a′
Corollary : If {b_1, . . . , b_m} is a basis for V, then V consists of q^m vectors.
Corr : Every basis for V has exactly m vectors.
Corollary : Every basis for GF(q)^n has n vectors. This is true in general for any finite-dimensional vector space : in a k-dimensional vector space, any set of k LI vectors forms a basis.
Review : Vector space basis {b_1, . . . , b_m} :
v = a B,   a ∈ GF(q)^m,   B = [b_1 ; b_2 ; . . . ; b_m] (rows)
|V| = q^m
Subspace : A vector subspace of a vector space V is a subset W that is itself a vector space. All we need to check is that W is closed under vector addition and scalar multiplication.
The inner product of two n-tuples over GF(q) is
(a_1, . . . , a_n).(b_1, . . . , b_n) = a_1 b_1 + . . . + a_n b_n = Σ a_k b_k = a.b^⊤
Two vectors are orthogonal if their inner product is zero.
The orthogonal complement of a subspace W is the set W^⊥ of n-tuples in GF(q)^n which
are orthogonal to every vector in W :
v ∈ W^⊥ iff v.w = 0 ∀ w ∈ W
Examples :
In GF(3)^2 : W = {00, 10, 20}, W^⊥ = {00, 01, 02}; dim W = 1 (basis 10), dim W^⊥ = 1 (basis 01).
In GF(2)^2 : W = {00, 11}, W^⊥ = {00, 11}, i.e. W is its own orthogonal complement.
Lemma : W^⊥ is a subspace.
Theorem : If dim W = k, then dim W^⊥ = n − k.
Corollary : W = (W^⊥)^⊥
Proof : Firstly W ⊆ (W^⊥)^⊥, since every vector of W is orthogonal to every vector of W^⊥. Let
dim W = k
⇒ dim W^⊥ = n − k
⇒ dim (W^⊥)^⊥ = n − (n − k) = k = dim W
and together with the inclusion this gives W = (W^⊥)^⊥.
Let {g_1, . . . , g_k} be a basis for W and {h_1, . . . , h_{n−k}} be a basis for W^⊥.
Let G = [g_1 ; . . . ; g_k] (a k × n matrix) and H = [h_1 ; . . . ; h_{n−k}] (an (n−k) × n matrix).
Then G H^⊤ = O_{k×(n−k)} :
G h_1^⊤ = [g_1 h_1^⊤ ; . . . ; g_k h_1^⊤] = O_{k×1}
and similarly for each h_j, hence G H^⊤ = O_{k×(n−k)}.
Theorem : A vector v ∈ W iff v H^⊤ = 0.
(⇒) v ∈ W and h_j ∈ W^⊥ ⇒ v h_j^⊤ = 0 for each j
⇒ v [h_1^⊤ h_2^⊤ . . . h_{n−k}^⊤] = 0
i.e. v H^⊤ = 0
(⇐) Suppose v H^⊤ = 0 ⇒ v h_j^⊤ = 0, 1 ≤ j ≤ n − k.
WTS v ∈ (W^⊥)^⊥ = W, i.e. v.w = 0 ∀ w ∈ W^⊥.
But any w ∈ W^⊥ is w = Σ_{j=1}^{n−k} a_j h_j, so
v.w = v w^⊤ = Σ_{j=1}^{n−k} a_j (v h_j^⊤) = 0
We have two ways of looking at a vector v in W :
v ∈ W ⇒ v = a G for some a
Also v H^⊤ = 0
How do you check that a vector v lies in W ?
Hard way : Find a vector a ∈ GF(q)^k such that v = a G.
Easy way : Compute v H^⊤. H can be easily determined from G.
Information Theory and Coding
Lecture 7
Pavan Nuggehalli Linear Block Codes
A linear block code of blocklength n over a field GF(q) is a vector subspace of GF(q)^n.
Suppose the dimension of this code is k. Recall that the rate of a code with M codewords is
given by
R = (1/n) log_q M
Here M = q^k ⇒ R = k/n
Examples : repetition codes, single parity check (SPC) codes.
Consequences of linearity
The Hamming weight of a codeword c, w(c), is the number of nonzero components of
c : w(c) = d_H(0, c)
Ex : w(0110) = 2, w(3401) = 3
The minimum Hamming weight of a block code, w_min, is the weight of the nonzero codeword
with smallest weight.
Theorem : For a linear block code, minimum weight = minimum distance
Proof : (C, +) is a group.
w_min = min_{c ≠ 0} w(c)      d_min = min_{c_i ≠ c_j} d(c_i, c_j)
We show w_min ≥ d_min and d_min ≥ w_min.
Let c_0 be the codeword of minimum weight. Since 0 is a codeword,
w_min = w(c_0) = d(0, c_0) ≥ min_{c_i ≠ c_j} d(c_i, c_j) = d_min
Now suppose c_1 and c_2 are the closest codewords. Then c_1 − c_2 is a codeword.
Therefore d_min = d(c_1, c_2) = d(0, c_1 − c_2)
= w(c_1 − c_2)
≥ min_{c ≠ 0} w(c)
= w_min
Therefore d_min ≥ w_min.
Key fact : For LBCs, weight structure is identical to distance structure.
Matrix description of LBC
A LBC C has dimension k
⇒ ∃ a basis set with k vectors (n-tuples). Call these g_0, . . . , g_{k−1}. Then any c ∈ C can be
written as
c = α_0 g_0 + α_1 g_1 + . . . + α_{k−1} g_{k−1} = [α_0 α_1 . . . α_{k−1}] [g_0 ; g_1 ; . . . ; g_{k−1}]
i.e. c = α G,   α ∈ GF(q)^k
G is called the generator matrix.
This suggests a natural encoding approach. Associate a data vector α with the codeword
αG. Note that encoding then reduces to matrix multiplication. All the trouble lies in
decoding.
The dual code of C is the orthogonal complement C^⊥ :
C^⊥ = {h : c h^⊤ = 0 ∀ c ∈ C}
Let h_0, . . . , h_{n−k−1} be the basis vectors for C^⊥ and H be the generator matrix for C^⊥.
H is called the parity check matrix for C.
Example : C = (3, 2) parity check code
C = {000, 011, 101, 110}
G =
0 1 1
1 0 1
k = 2, n − k = 1
C^⊥ = {000, 111}     H = [1 1 1]
H is the generator matrix for the repetition code, i.e. SPC and RC are dual codes.
Fact : c belongs to C iff c H^⊤ = 0
Let C be a linear block code and C^⊥ be its dual code. Any basis set of C can be used to
form G. Note that G is not unique. Similarly with H.
Note that c H^⊤ = 0 ∀ c ∈ C, in particular for all rows of G.
Therefore G H^⊤ = 0.
Conversely, suppose G H^⊤ = 0; then H is a parity check matrix if the rows of H form
a LI basis set.
                        C     C^⊥
Generator matrix :      G     H
Parity check matrix :   H     G
c ∈ C iff c H^⊤ = 0        v ∈ C^⊥ iff v G^⊤ = 0
Equivalent Codes : Suppose you are given a code C. You can form a new code
by choosing any two components and transposing the symbols in these two components
for every codeword. What you get is a linear block code which has the same minimum
distance. Codes related in this manner are called equivalent codes.
Suppose G is a generator matrix for a code C. Then any matrix obtained by linearly
combining the rows of G is also a generator matrix.
G = [g_0 ; g_1 ; . . . ; g_{k−1}] (rows)
Elementary row operations :
- Interchange any two rows
- Multiply any row by a nonzero element of GF(q)
- Replace any row by the sum of that row and a multiple of any other row
Fact : Using elementary row operations and column permutations, it is possible to
reduce G to the following form :
G = [I_{k×k} P]
This is called the systematic form of the generator matrix. Every LBC is equivalent
to a code that has a generator matrix in systematic form.
Advantages of systematic G
c = a.G = (a_0 . . . a_{k−1}) [I_{k×k} P]    (P is k × (n−k))
  = (a_0, . . . , a_{k−1}, c_k, . . . , c_{n−1})
Check matrix :
H = [−P^⊤ I_{(n−k)×(n−k)}]
1) G H^⊤ = 0
2) The rows of H form a LI set of n − k vectors.
Example
G =
1 0 0 1 0
0 1 0 0 1
0 0 1 1 1
P =
1 0
0 1
1 1
P^⊤ = −P^⊤ (over GF(2)) =
1 0 1
0 1 1
H = [−P^⊤ I] =
1 0 1 1 0
0 1 1 0 1
Singleton bound (revisited) : n − k ≥ d_min − 1, i.e.
d_min ≤ 1 + n − k
Codes which meet the bound are called maximum distance separable (MDS) codes.
We now state the relationship between the columns of H and d_min.
Let C be a linear block code (LBC) and C^⊥ be the corresponding dual code. Let G
be the generator matrix for C and H be the generator matrix for C^⊥. Then H is the
parity check matrix for C and G is the parity check matrix for C^⊥.
c.H^⊤ = 0 ⇔ c ∈ (C^⊥)^⊥ = C
v.G^⊤ = 0 ⇔ v ∈ C^⊥
Note that the generator matrix for a LBC C is not unique.
Suppose
G = [g_0 ; g_1 ; . . . ; g_{k−1}] (rows)
Then
C = LS(g_0, . . . , g_{k−1}) = LS(G)
Consider the following transformations of G :
a) Interchange two rows : C′ = LS(G′) = LS(G) = C
b) Multiply any row of G by a nonzero element α of GF(q) :
G′ = [α g_0 ; g_1 ; . . . ; g_{k−1}],   LS(G′) = LS(G) = C
c) Replace any row by the sum of that row and a multiple of any other row :
G′ = [g_0 + α g_1 ; g_1 ; . . . ; g_{k−1}],   LS(G′) = C
It is easy to see that G H^⊤ = O_{k×(n−k)} and H G^⊤ = O_{(n−k)×k}.
Suppose G is a generator matrix and H is some (n−k) × n matrix such that G H^⊤ = O_{k×(n−k)}.
Is H a parity check matrix ? (Yes, provided the rows of H are linearly independent.)
The above operations are called elementary row operations.
Fact 1 : A LB code remains unchanged if the generator matrix is subjected to elementary
row operations.
Suppose you are given a code C. You can form a new code by choosing
any two components and transposing the symbols in these two components. This gives
a new code which is only trivially different. The parameters (n, k, d) remain unchanged.
The new code is also a LBC. Suppose G = [f_0, f_1, . . . , f_{n−1}] (columns). Then G′ = [f_1, f_0, . . . , f_{n−1}].
Permutation of the components of the code corresponds to permutation of the columns of G.
Defn : Two block codes are equivalent if they are the same except for a permutation
of the codeword components.
Fact 2 : Two LBCs (with generator matrices G and G′) are equivalent iff G′ can be
obtained from G using elementary row operations and column permutations.
Fact 3 : Every generator matrix G can be reduced by elementary row operations and
column permutations to the following form :
G = [I_{k×k} P_{k×(n−k)}]
also known as row-echelon form.
Proof : Gaussian elimination. Proceed row by row, interchanging rows and columns as needed.
A generator matrix in the above form is said to be systematic and the corresponding
LBC is called a systematic code.
Theorem : Every linear block code is equivalent to a systematic code.
Proof : Combine Fact 3 and Fact 2.
There are several advantages to using a systematic generator matrix.
1) The first k symbols of the codeword form the dataword.
2) Only n − k check symbols need to be computed ⇒ reduced encoding complexity.
3) If G = [I P], then H = [−P^⊤_{(n−k)×k} I_{(n−k)×(n−k)}]
NTS : G H^⊤ = 0 :   G H^⊤ = [I P] [−P ; I] = −P + P = 0
and the rows of H are LI.
Now let us study the distance structure of LBCs.
The Hamming weight w(c) of a codeword c is the number of nonzero components
of c : w(c) = d_H(0, c).
The minimum Hamming weight of a block code is the weight of the nonzero codeword
with smallest weight :
w_min = min_{c ∈ C, c ≠ 0} w(c)
Theorem : For a linear block code, minimum weight = minimum distance
Proof : Use the fact that (C, +) is a group.
w_min = min_{c ∈ C, c ≠ 0} w(c)      d_min = min_{c_i, c_j ∈ C, c_i ≠ c_j} d(c_i, c_j)
First, w_min ≥ d_min : let c_0 be the minimum weight codeword. Since 0 ∈ C,
w_min = w(c_0) = d(0, c_0) ≥ min_{c_i ≠ c_j} d(c_i, c_j) = d_min
Next, d_min ≥ w_min : suppose c_1 and c_2 are the two closest codewords. Then c_1 − c_2 ∈ C,
therefore d_min = d(c_1, c_2) = d(0, c_1 − c_2)
= w(c_1 − c_2)
≥ min_{c ∈ C, c ≠ 0} w(c) = w_min
Key fact : For LBC’s, the weight structure is identical to the distance structure.
Given a generator matrix G, or equivalently a parity check matrix H, what is d_min ?
Brute force approach : generate C and find the minimum weight nonzero vector.
Theorem : (Revisited) The minimum distance of any linear (n, k) block code satisfies
d_min ≤ 1 + n − k
Proof : For any LBC, consider its equivalent systematic generator matrix. Let c be
the codeword corresponding to the data word (1 0 . . . 0).
Then w_min ≤ w(c) ≤ 1 + n − k
⇒ d_min ≤ 1 + n − k
Codes which meet this bound are called maximum distance separable codes. Examples
include binary SPC and RC. The best known nonbinary MDS codes are the Reed-Solomon
codes over GF(q). The RS parameters are
(n, k, d_min) = (q − 1, q − 1 − d, d + 1)    e.g. q = 256 = 2^8
Galileo mission : (255, 223, 33)
A codeword c ∈ C iff c H^⊤ = 0. Let H = [f_0, f_1, . . . , f_{n−1}], where f_k is an (n−k) × 1 column vector.
c H^⊤ = 0 ⇒ Σ_{i=0}^{n−1} c_i f_i^⊤ = 0, where f_k^⊤ is the 1 × (n−k) row vector corresponding to a column of H.
Therefore each codeword corresponds to a linear dependence among the columns of H.
A codeword with weight w implies some w columns are linearly dependent. Conversely, if
some w columns are linearly dependent, a codeword of weight at most w exists.
Theorem : The minimum weight (= d_min) of a LBC is the smallest number of linearly dependent columns of a parity check matrix.
Proof : Let w be the smallest number of linearly dependent columns of H. Then
Σ_{k=0}^{w−1} a_{n_k} f_{n_k} = 0, where the n_k index the dependent columns and none of the
a_{n_k} are 0 (a zero coefficient would violate minimality).
Consider the vector c with
c_{n_k} = a_{n_k},   c_i = 0 otherwise
Clearly c is a codeword with weight w, and by the minimality of w no nonzero codeword
of smaller weight can exist.
Examples
Consider the code used for ISBNs (International Standard Book Numbers). Each
book has a 10 digit identifying code called its ISBN. The elements of this code are from
GF(11) and denoted by 0, 1, . . . , 9, X. The first 9 digits are always in the range 0 to 9.
The last digit is the parity check symbol.
The parity check matrix is
H = [1 2 3 4 5 6 7 8 9 10]
GF(11) is isomorphic to Z/11 under addition and multiplication modulo 11.
d_min = 2
⇒ can detect one error
Ex : Can also detect a transposition error, i.e. two codeword positions being interchanged.
Blahut's book, for example, has ISBN 0521553741 :
[0 5 2 1 5 5 3 7 4 1] . [1 2 3 4 5 6 7 8 9 10]^⊤ = 198 ≡ 0 (mod 11)
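The ISBN parity check is easy to verify in code. A sketch (the function name is mine; 'X' stands for the element 10 of GF(11)):

```python
def isbn_valid(isbn):
    # valid iff sum(i * a_i) = 0 (mod 11), with weights i = 1..10
    digits = [10 if ch == 'X' else int(ch) for ch in isbn]
    return sum(i * a for i, a in zip(range(1, 11), digits)) % 11 == 0

assert isbn_valid("0521553741")      # the Blahut ISBN from the text
assert not isbn_valid("0521553714")  # a transposition of two digits is detected
```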
Hamming Codes
Two binary vectors are independent iff they are distinct and nonzero. Consider a binary
parity check matrix with m rows. If all the columns are distinct and nonzero, then
d_min ≥ 3. How many columns are possible? 2^m − 1. This allows us to obtain a binary
(2^m − 1, 2^m − 1 − m, 3) Hamming code. Note that adding two columns gives us another
column, so d_min ≤ 3.
Example : m = 3 gives us the (7, 4) Hamming code
H = [−P^⊤ I_{3×3}] =
0 1 1 1 1 0 0
1 1 0 1 0 1 0
1 0 1 1 0 0 1
G = [I P] =
1 0 0 0 0 1 1
0 1 0 0 1 1 0
0 0 1 0 1 0 1
0 0 0 1 1 1 1
Hamming codes can correct single errors and detect double errors; they are used in SIMM and DIMM memory modules.
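Single-error correction with this H can be sketched directly: the syndrome rH^⊤ of a senseword with one flipped bit equals the column of H at the error position (the codeword and error position below are my own illustrative choices):

```python
# parity check matrix of the (7,4) Hamming code, as given above
H = [
    (0, 1, 1, 1, 1, 0, 0),
    (1, 1, 0, 1, 0, 1, 0),
    (1, 0, 1, 1, 0, 0, 1),
]

def syndrome(r):
    return tuple(sum(ri * hi for ri, hi in zip(r, h)) % 2 for h in H)

def correct(r):
    s = syndrome(r)
    if s == (0, 0, 0):
        return r                 # no error detected
    i = list(zip(*H)).index(s)   # the matching column marks the error position
    return tuple(b ^ (1 if j == i else 0) for j, b in enumerate(r))

c = (1, 0, 1, 1, 0, 0, 1)        # dataword 1011 encoded with G = [I P]
assert syndrome(c) == (0, 0, 0)
r = (1, 0, 1, 0, 0, 0, 1)        # bit 3 flipped in transit
assert correct(r) == c
```

Because all 7 columns of H are distinct and nonzero, every single-bit error maps to a unique syndrome.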
Hamming codes can be easily defined over larger fields.
Any two distinct nonzero m-tuples over GF(q) need not be LI, e.g. a = 2.b.
Question : How many m-tuples exist such that any two of them are LI ?
(q^m − 1)/(q − 1)
This defines a ((q^m − 1)/(q − 1), (q^m − 1)/(q − 1) − m, 3) Hamming code over GF(q).
Consider all nonzero m-tuples (columns) that have a 1 in the topmost nonzero component. Two such columns which are distinct have to be LI.
Example : (13, 10) Hamming code over GF(3)
H =
1 1 1 1 1 1 1 1 0 0 1 0 0
0 0 1 1 1 2 2 2 1 1 0 1 0
1 2 0 1 2 0 1 2 1 2 0 0 1
n = (3^3 − 1)/(3 − 1) = 26/2 = 13
d_min of C^⊥ is always ≥ k.
Suppose a codeword c is sent through a channel and received as senseword r. The
errorword e is defined as
e = r − c
The decoding problem : Given r, which codeword c ∈ C maximizes the likelihood of
receiving the senseword r ?
Equivalently, find the most likely error pattern ê and set ĉ = r − ê (two steps).
For a binary symmetric channel, "most likely" means the smallest number of bit errors.
For a received senseword r, the decoder picks an error pattern e of smallest weight such
that r − e is a codeword. For an (n, k) binary code on a BSC with crossover probability p,
P(w(e) = j) = (n choose j) p^j (1 − p)^{n−j}, a decreasing function of j when p < 1/2.
This is the same as nearest neighbour decoding. One way to do this would be to build a
lookup table : associate every r ∈ GF(q)^n with a codeword ĉ ∈ C which is r's nearest
neighbour. A systematic way of constructing this table leads to the (Slepian) standard
array.
Note that C is a subgroup of GF(q)^n. The standard array of the code C is the coset
decomposition of GF(q)^n with respect to the subgroup C. We denote the coset {g + c :
c ∈ C} by g + C. Note that each row of the standard array is a coset and that the cosets
completely partition GF(q)^n.
The number of cosets = |GF(q)^n| / |C| = q^n / q^k = q^{n−k}
Let 0, c_2, . . . , c_{q^k} be the codewords. Then the standard array is given by

0             c_2                 c_3                 . . .   c_{q^k}
e_2           c_2 + e_2           c_3 + e_2           . . .   c_{q^k} + e_2
e_3            .
 .
e_{q^{n−k}}   c_2 + e_{q^{n−k}}   . . .                       c_{q^k} + e_{q^{n−k}}

1. The first row is the code C, with the zero vector in the first column.
2. Choose as coset leader, among the unused n-tuples, one which has least Hamming
weight ("closest to the all-zero vector").
Decoding : Find the senseword r in the standard array and decode it as the codeword
at the top of the column that contains r.
Claim : The above decoding procedure is nearest neighbour decoding.
Proof : Suppose not.
We can write r = c_j + e_j, where e_j is the coset leader of r's row. Let c_i be the nearest
neighbour, so r = c_i + e_i with w(e_i) < w(e_j).
⇒ c_j + e_j = c_i + e_i, i.e. e_i = e_j + c_j − c_i
But c_j − c_i ∈ C
⇒ e_i ∈ e_j + C and w(e_i) < w(e_j), contradicting the choice of e_j as a minimum-weight
coset leader.
Geometrically, the first column consists of Hamming spheres around the all-zero codeword,
and the k-th column consists of Hamming spheres around the k-th codeword.
Suppose d_min = 2t + 1. Then Hamming spheres of radius t are nonintersecting.
In the standard array, draw a horizontal line below the last row whose coset leader e
satisfies w(e) ≤ t. Any senseword above this line has a unique nearest neighbour codeword.
Below this line, a senseword may have more than one nearest neighbour codeword.
A bounded-distance decoder corrects all errors up to weight t. If the senseword falls
below this line (the "Lakshman Rekha"), it declares a decoding failure. A complete decoder assigns
every received senseword to a nearby codeword. It never declares a decoding failure.
Syndrome detection
For any senseword r, the syndrome is defined by s = r H^⊤.
Theorem : All vectors in the same coset have the same syndrome. Two distinct cosets
have distinct syndromes.
Proof : Suppose r and r′ are in the same coset, with coset leader e.
Then r = c + e and r′ = c′ + e
therefore S(r) = r H^⊤ = e H^⊤ and S(r′) = r′ H^⊤ = e H^⊤
Suppose two distinct cosets have the same syndrome. Let e and e′ be the corresponding
coset leaders.
e H^⊤ = e′ H^⊤ ⇒ (e − e′) H^⊤ = 0 ⇒ e − e′ ∈ C
therefore e = e′ + c ⇒ e ∈ e′ + C, a contradiction
This means we only need to tabulate syndromes and coset leaders.
Suppose you receive r. Compute the syndrome S = r H^⊤. Look up the table to find the
coset leader e. Decide ĉ = r − e.
Example : (3, 1) RC
Hamming Codes :
Basic idea : Construct a parity check matrix with as many columns as possible such
that no two columns are linearly dependent.
Binary case : just need to make sure that all columns are nonzero and distinct.
Nonbinary case :
Pick a nonzero vector V_1 ∈ GF(q)^m. The set of vectors LD with V_1 is
{0, V_1, 2V_1, . . . , (q − 1)V_1} ≜ H_1
Pick V_2 ∉ H_1 and form the set of vectors LD with V_2 :
{0, V_2, 2V_2, . . . , (q − 1)V_2}
Continue this process till all the m-tuples are used up. Two vectors in disjoint sets
are LI. Incidentally, each such set is a group under +.
#columns = (q^m − 1)/(q − 1)
Two nonzero distinct m-tuples that have a 1 as the topmost (first nonzero) component are LI. Why ?
#such m-tuples = q^{m−1} + q^{m−2} + . . . + 1 = (q^m − 1)/(q − 1)
Example : m = 2, q = 3 : n = (3^2 − 1)/(3 − 1) = 4, k = n − m = 2, giving a (4, 2, 3) code.
Suppose a codeword c is sent through a channel and received as senseword r. The
error vector or error pattern is defined as
e = r − c
The Decoding Problem : Given r, which codeword ĉ ∈ C maximizes the likelihood of
receiving the senseword r ? Equivalently, find the most likely errorword ê; then ĉ = r − ê.
For a binary symmetric channel with p < 0.5, the "most likely" error pattern is the
error pattern with the least number of 1's, i.e. the pattern with the smallest number of bit
errors. For a received senseword r, the decoder picks an error pattern ê of smallest weight
such that r − ê is a codeword.
This is the same as nearest neighbour decoding. One way to do this would be to build
a lookup table : associate every r ∈ GF(q)^n with a codeword ĉ(r) ∈ C which is r's nearest
neighbour in C.
There is an element of arbitrariness in this procedure because some r may have more
than one nearest neighbour. A systematic way of constructing this table leads us to the
(Slepian) standard array.
We begin by noting that C is a subgroup of GF(q)^n. For any g ∈ GF(q)^n, the coset
associated with g is given by the set g + C = {g + c : c ∈ C}
Recall :
1) The cosets are disjoint and completely partition GF(q)^n
2) #cosets = |GF(q)^n| / |C| = q^n / q^k = q^{n−k}
The standard array of the code C is the coset decomposition of GF(q)^n with respect
to the subgroup C.
Let 0, c_2, . . . , c_{q^k} be the codewords. Then the standard array is constructed as follows :
a) The first row is the code C with the zero vector in the first column; 0 is the coset
leader.
b) Among the vectors not in the first row, choose an element e_2 of least Hamming
weight. The second row is the coset e_2 + C, with e_2 as the coset leader.
c) Continue as above, each time choosing an unused vector g ∈ GF(q)^n of minimum
weight.
Decoding : Find the senseword r in the standard array and decode it as the codeword
at the top of the column that contains r.
Claim : Standard array decoding is nearest neighbour decoding.
Proof : Suppose not. Let c be the codeword obtained using standard array decoding
and let c′ be any of the nearest neighbours. We have r = c + e = c′ + e′, where e is the
coset leader for r and w(e) > w(e′)
⇒ e′ = e + c − c′
⇒ e′ ∈ e + C
By construction, e has minimum weight in e + C
⇒ w(e) ≤ w(e′), a contradiction
What is the geometric significance of standard array decoding ? Consider a code with
d_min = 2t + 1. Then Hamming spheres of radius t, drawn around each codeword, are
nonintersecting. In the binary standard array, the first 1 + n vectors in the first column
constitute the Hamming sphere of radius 1 drawn around the 0 vector. Similarly the
first 1 + n vectors of the k-th column correspond to the Hamming sphere of radius 1
around c_k, the first 1 + n + (n choose 2) vectors to the Hamming sphere of radius 2, and
the first 1 + n + (n choose 2) + . . . + (n choose t) vectors in the k-th column are the
elements of the Hamming sphere of radius t around c_k. Draw a horizontal line in the
standard array below these rows. Any senseword above this line has a unique nearest
neighbour in C. Below the line, a senseword may have more than one nearest neighbour.
A bounded-distance decoder corrects all errors up to weight t, i.e. it decodes all sensewords lying above the horizontal line. If a senseword lies below the line in the standard
array, it declares a decoding failure. A complete decoder simply implements standard
array decoding for all sensewords. It never declares a decoding failure.
Example : Let us construct a (5, 2) binary code with d_min = 3.
G = [I : P] =
1 0 1 1 0
0 1 1 0 1
H = [−P^⊤ : I] =
1 1 1 0 0
1 0 0 1 0
0 1 0 0 1
Codewords : C = {00000, 01101, 10110, 11011}

00000 01101 10110 11011
00001 01100 10111 11010
00010 01111 10100 11001
00100 01001 10010 11111
01000 00101 11110 10011
10000 11101 00110 01011
00011 01110 10101 11000
01010 00111 11100 10001

Note : 00011 = 11011 + 11000 = 00000 + 00011, so this senseword is equidistant from
two codewords and the choice of coset leader is arbitrary.
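The greedy construction can be sketched for this (5, 2) code; coset leaders are picked in order of increasing Hamming weight, exactly as in steps a)-c) above:

```python
from itertools import product

C = [(0, 0, 0, 0, 0), (0, 1, 1, 0, 1), (1, 0, 1, 1, 0), (1, 1, 0, 1, 1)]

def add(u, v):
    return tuple((a + b) % 2 for a, b in zip(u, v))

rows, used = [], set()
# scan all 5-tuples by increasing weight; each unused one starts a new coset
for g in sorted(product(range(2), repeat=5), key=sum):
    if g not in used:
        coset = [add(g, c) for c in C]
        rows.append(coset)
        used.update(coset)

assert len(rows) == 2 ** (5 - 2)   # q^(n-k) = 8 cosets
assert rows[0] == C                # the first row is the code itself
# each coset leader has minimal weight within its coset
assert all(sum(row[0]) == min(sum(v) for v in row) for row in rows)
```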
Syndrome detection :
For any senseword r, the syndrome is defined by s = r H^⊤.
Theorem : All vectors in the same coset (row in the standard array) have the same
syndrome. Two different cosets/rows have distinct syndromes.
Proof : Suppose r and r′ are in the same coset. Let e be the coset leader.
Then r = c + e and r′ = c′ + e
r H^⊤ = c H^⊤ + e H^⊤ = e H^⊤ = c′ H^⊤ + e H^⊤ = r′ H^⊤
Suppose two distinct cosets have the same syndrome.
Let e and e′ be the coset leaders.
Then e H^⊤ = e′ H^⊤ ⇒ (e − e′) H^⊤ = 0 ⇒ e − e′ ∈ C ⇒ e ∈ e′ + C, a contradiction.
This means we only need to tabulate syndromes and coset leaders. The syndrome decoding procedure is as follows :
1) Compute S = r H^⊤
2) Look up the corresponding coset leader e
3) Decode ĉ = r − e
The table has q^{n−k} entries. For RS(255, 223, 33), q^{n−k} = 256^32 = 2^{8×32} = 2^{256} > 10^{64},
more than the number of atoms on earth.
Why can't decoding be linear ? Suppose we use the following scheme : given the syndrome
S, we calculate ê = S.B, where B is an (n−k) × n matrix.
Let E = {ê : ê = SB for some S ∈ GF(q)^{n−k}}
Claim : E is a vector subspace of GF(q)^n, with |E| ≤ q^{n−k} ⇒ dim E ≤ n − k.
Let E_1 = {single errors, with nonzero component equal to 1, which can be corrected}.
E_1 ⊂ E.
We note that no more than n − k such single errors can be corrected, because E_1 constitutes a
LI set. In general not more than (n − k)(q − 1) single errors can be corrected. Example :
the (7, 4) Hamming code can correct all 7 single errors, in contrast to 7 − 4 = 3 errors with
linear decoding.
We need to understand Galois fields to devise good codes and develop efficient decoding
procedures.
Information Theory and Coding
Lecture 8
Pavan Nuggehalli Finite Fields
Review : We have constructed two kinds of finite fields, based on the ring of integers
and the ring of polynomials.
(GF(q) − {0}, .) is cyclic ⇒ ∃ α ∈ GF(q) such that
GF(q) − {0} = {α, α^2, . . . , α^{q−1} = 1}. There are φ(q − 1) such primitive elements.
Fact 1 : Every finite field is isomorphic to a finite field GF(p^m), p prime, constructed
using a prime polynomial f(x) ∈ GF(p)[x].
Fact 2 : There exist prime polynomials of degree m over GF(p), p prime, for all
values of p and m.
Normal addition, multiplication, smallest nontrivial subfield.
Let GF(q) be an arbitrary field with q elements. Then GF(q) must contain the additive
identity 0 and the multiplicative identity 1. By closure, GF(q) contains the sequence of
elements 0, 1, 1 + 1, 1 + 1 + 1, . . . and so on. We denote these elements by 0, 1, 2, 3, . . .
and call them the integers of the field. Since q is finite, the sequence must eventually
repeat itself. Let p be the first element which is a repeat of an earlier element r, i.e.
p = r in GF(q).
But this implies p − r = 0, so if r ≠ 0, then there must have been an earlier repeat
at p − r. We can then conclude that r = 0, i.e. p = 0 in GF(q). The set of field integers is
given by G = {0, 1, 2, . . . , p − 1}.
Claim : Addition and multiplication in G are modulo p.
Addition is modulo p because G is a cyclic group under addition. Multiplication is
modulo p because of the distributive law :
a.b = (1 + 1 + . . . + 1).b   (a times)
    = b + b + . . . + b = a × b (mod p)
Claim : (G, +, .) is a field.
(G, +) is an abelian group, (G − {0}, .) is an abelian group, and the distributive law holds.
To show (G − {0}, .) is a group, NTS closure & inverses.
G is just the set of integers modulo p, and this is a field exactly when p is prime; we can
then conclude that p is prime.
Then we have
Theorem : Each finite field contains a unique smallest subfield which has a prime
number of elements.
The number of elements of this unique smallest subfield is called the characteristic of
the field.
Corr. If q is prime, the characteristic of GF(q) is q; in other words GF(q) = Z/q.
(G is a subfield of GF(q) ⇒ p | q ⇒ p = q.)
Defn : Let GF(q) be a field with characteristic p. Let f(x) be a polynomial over
GF(p) and let α ∈ GF(q). Then f(α) is an element of GF(q). The monic polynomial of smallest degree over GF(p) with f(α) = 0 is called the minimal polynomial of
α over GF(p).
Theorem :
1) Every element α ∈ GF(q) has a unique minimal polynomial.
2) The minimal polynomial is prime.
Proof : Pick α ∈ GF(q).
Evaluate the zero polynomial and the degree-0, degree-1, degree-2, . . . monic polynomials at x = α, until
a repetition occurs. Suppose the first repetition occurs at f(x), with f(α) = h(α) where
deg f(x) > deg h(x). Then f(x) − h(x) is monic, has degree deg f(x), and evaluates to 0 at x = α :
it is the minimal polynomial of α over GF(p).
Any lower degree polynomial cannot evaluate to 0 at α; otherwise a repetition (with the
zero polynomial) would have occurred before reaching f(x).
Uniqueness : Suppose g(x) and g′(x) are two monic polynomials of lowest degree such that
g(α) = g′(α) = 0. Then h(x) = g(x) − g′(x) has degree lower than g(x) and evaluates to
0 at α, a contradiction.
Primality : Suppose g(x) = p_1(x) . p_2(x) . . . p_N(x). Then g(α) = 0 ⇒ p_k(α) = 0 for some k.
But p_k(x) has lower degree than g(x), a contradiction.
Theorem : Let α be a primitive element in a finite field GF(q) with characteristic
p. Let m be the degree of the minimal polynomial of α over GF(p). Then q = p^m, and
every element β ∈ GF(q) can be written as
β = a_{m−1} α^{m−1} + a_{m−2} α^{m−2} + . . . + a_1 α + a_0
where a_{m−1}, . . . , a_0 ∈ GF(p).
Proof : Pick some combination of a_{m−1}, a_{m−2}, …, a_0 ∈ GF(p).
Then β = a_{m−1} α^{m−1} + · · · + a_1 α + a_0 ∈ GF(q).
Two different combinations cannot give rise to the same field element. Otherwise we
would have
a_{m−1} α^{m−1} + · · · + a_0 = b_{m−1} α^{m−1} + · · · + b_0
⇒ (a_{m−1} − b_{m−1}) α^{m−1} + · · · + (a_0 − b_0) = 0
⇒ α is a zero of a nonzero polynomial of degree at most m − 1, contradicting the fact that
the minimal polynomial of α has degree m.
There are p^m such combinations, so q ≥ p^m. Pick any β ∈ GF(q) − {0}. Let f(x)
be the minimal polynomial of α, of degree m. Since α is primitive, β = α^l for some 1 ≤ l ≤ q − 1.
By the division algorithm, x^l = Q(x) f(x) + r(x) with deg r(x) ≤ m − 1, so
β = α^l = Q(α) f(α) + r(α) = r(α)
⇒ every element β can be expressed as a linear combination of α^{m−1}, α^{m−2}, …, α^0.
Therefore q ≤ p^m. Hence proved.
Corr : Every finite field is isomorphic to a field GF(p)[x]/p(x).
Proof : By the theorem, every element of GF(q) can be associated with a polynomial
of degree ≤ m − 1, replacing α with the indeterminate x. These polynomials can be thought of
as field elements. They are added and multiplied modulo f(x), the minimal polynomial
of α. This field is then isomorphic to the field GF(p)[x]/f(x).
Theorem : A finite field of size p^m exists for all primes p and all m ≥ 1.
Proof : Handwaving.
Need to show that there exists a prime polynomial over GF(p) of every degree m
(counted, e.g., with the polynomial analogue of the sieve of Eratosthenes).
Information Theory and Coding
Lecture 9
Pavan Nuggehalli Cyclic Codes
Cyclic codes are a kind of linear block code with a special structure. A LBC is called
cyclic if every cyclic shift of a codeword is a codeword.
Some advantages:
- amenable to easy encoding using shift registers
- decoding involves solving polynomial equations rather than lookup tables
- good burst-correction and error-correction capabilities. Ex: CRCs are cyclic
- almost all commonly used block codes are cyclic
Definition : A linear block code C is cyclic if
(c_0 c_1 … c_{n−1}) ∈ C ⇒ (c_{n−1} c_0 c_1 … c_{n−2}) ∈ C
Ex : derive the equivalent definition with left shifts.
It is convenient to identify a codeword c_0 c_1 … c_{n−1} in a cyclic code C with the
polynomial c(x) = c_0 + c_1 x + · · · + c_{n−1} x^{n−1}, where c_i ∈ GF(q). Note that we can
think of C as a subset of GF(q)[x]. Each polynomial in C has degree ≤ n − 1, therefore
C can also be thought of as a subset of GF(q)[x]/(x^n − 1), the ring of polynomials
modulo x^n − 1. C will be thought of as the set of n-tuples as well as the corresponding
codeword polynomials.
In this ring, a cyclic shift can be written as a multiplication by x.
Suppose c = (c_0 … c_{n−1}). Then
c(x) = c_0 + c_1 x + · · · + c_{n−1} x^{n−1}
x c(x) = c_0 x + c_1 x^2 + · · · + c_{n−1} x^n
x c(x) mod (x^n − 1) = c_{n−1} + c_0 x + · · · + c_{n−2} x^{n−1}
which corresponds to the codeword (c_{n−1} c_0 … c_{n−2}).
Thus a linear code is cyclic iff
c(x) ∈ C ⇒ x c(x) mod (x^n − 1) ∈ C
Equivalently, in terms of n-tuples,
(c_0 c_1 … c_{n−1}) ∈ C ⇒ (c_{n−1} c_0 c_1 … c_{n−2}) ∈ C
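The shift-equals-multiplication identity can be checked directly: multiplying by x and reducing modulo x^n − 1 rotates the coefficient vector by one place. A minimal sketch in Python (coefficient lists in ascending order; the modulus q = 5 and the example vector are arbitrary illustrations):

```python
def shift_via_x(c, q):
    """Compute x*c(x) mod (x^n - 1) over GF(q), with c given as
    ascending coefficients c[0] + c[1]*x + ... + c[n-1]*x^(n-1)."""
    n = len(c)
    prod = [0] * (n + 1)
    for i, ci in enumerate(c):            # multiply c(x) by x
        prod[i + 1] = ci % q
    prod[0] = (prod[0] + prod[n]) % q     # reduce: x^n == 1 mod (x^n - 1)
    return prod[:n]

# (c0 c1 c2 c3) -> (c3 c0 c1 c2), one cyclic shift
print(shift_via_x([1, 2, 3, 4], 5))       # [4, 1, 2, 3]
```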
Theorem : A set of codeword polynomials C is a cyclic code iff
1) C is a subgroup under addition
2) c(x) ∈ C ⇒ a(x) c(x) mod (x^n − 1) ∈ C for all a(x) ∈ GF(q)[x]
Theorem : Let C be an (n, k) cyclic code. Then
1) There exists a unique monic polynomial g(x) ∈ C of smallest degree among all nonzero
polynomials in C
2) c(x) ∈ C ⇒ c(x) = a(x) g(x)
3) deg g(x) = n − k
4) g(x) | x^n − 1
Let h(x) = (x^n − 1)/g(x). h(x) is called the check polynomial. We have
Theorem : c(x) ∈ C ⇔ c(x) h(x) = 0 mod (x^n − 1)
Theorem : The generator and parity check matrices for a cyclic code with generator
polynomial g(x) and check polynomial h(x) are given by
G =
[ g_0 g_1 … g_{n−k}  0   0  …  0 ]
[ 0  g_0 g_1 …  g_{n−k−1} g_{n−k}  0 … 0 ]
[ …                              ]
[ 0  …  0  g_0 g_1 …  g_{n−k} ]      (k rows, each the previous row shifted right)
H =
[ h_k h_{k−1} h_{k−2} … h_0  0  0 … 0 ]
[ 0  h_k h_{k−1} …  h_0  0 … 0 ]
[ …                           ]
[ 0  …  0  h_k h_{k−1} … h_0 ]       (n − k rows, each the previous row shifted right)
Proof : Note that g_0 ≠ 0. Otherwise, we could write
g(x) = x g′(x) ⇒ x^{n−1} g(x) = x^n g′(x) = g′(x) mod (x^n − 1)
⇒ g′(x) ∈ C, a contradiction, since deg g′(x) < deg g(x).
The rows of G correspond to the codewords g(x), x g(x), …, x^{k−1} g(x). These codewords
are LI. For H to be the parity check matrix, we need to show that GH^T = 0 and that
the n − k rows of H are LI. deg h(x) = k ⇒ h_k ≠ 0, so the rows of H are LI. It remains
to show GH^T = 0_{k×(n−k)}.
We know g(x) h(x) = x^n − 1 ⇒ the coefficients of x^1, x^2, …, x^{n−1} are 0,
i.e. u_l = Σ_{k=0}^{l} g_k h_{l−k} = 0 for 1 ≤ l ≤ n − 1.
It is easy to see by inspection that
GH^T =
[ u_k    u_{k+1} … u_{n−1} ]
[ u_{k−1} u_k    … u_{n−2} ]
[ …                        ]
[ u_1    u_2    … u_{n−k} ]
= 0_{k×(n−k)}
These matrices can be reduced to systematic form by elementary row operations.
Encoding : polynomial multiplication.
Let a(x) be the data polynomial, of degree ≤ k − 1. Then
c(x) = a(x) g(x)
Decoding :
Let v(x) = c(x) + e(x) be the received senseword; e(x) is the errorword polynomial.
Definition : The syndrome polynomial s(x) is given by s(x) = v(x) mod g(x).
We have
s(x) = [c(x) + e(x)] mod g(x) = e(x) mod g(x)
Syndrome decoding : Find the e(x) with the least number of nonzero coefficients
satisfying
s(x) = e(x) mod g(x)
Syndrome decoding can be implemented using a lookup table: there are q^{n−k} values
of s(x); store the corresponding e(x).
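As a concrete illustration (my example, not worked in the notes), take the binary (7, 4) cyclic code with g(x) = x³ + x + 1 — a generator the notes later associate with GF(8) — and build the syndrome table for all single-error patterns. Since q^{n−k} = 8 and there are exactly 8 correctable patterns (the zero pattern plus 7 single errors), the table is complete:

```python
def poly_mod(v, g):
    """Remainder of v(x) divided by g(x) over GF(2); ascending coefficients."""
    v = v[:]
    for i in range(len(v) - 1, len(g) - 2, -1):
        if v[i]:
            for j, gj in enumerate(g):
                v[i - (len(g) - 1) + j] ^= gj
    return tuple(v[:len(g) - 1])

n, g = 7, [1, 1, 0, 1]                    # g(x) = 1 + x + x^3

# syndrome lookup table: s(x) = e(x) mod g(x) for each correctable e(x)
table = {poly_mod([0] * n, g): [0] * n}
for i in range(n):
    e = [0] * n
    e[i] = 1
    table[poly_mod(e, g)] = e
assert len(table) == 8                    # all 8 syndromes are distinct

def decode(v):
    e = table[poly_mod(v, g)]
    return [vi ^ ei for vi, ei in zip(v, e)]

c = [1, 1, 0, 1, 0, 0, 0]                 # c(x) = g(x) itself, a codeword
v = c[:]; v[4] ^= 1                       # introduce one bit error
print(decode(v) == c)                     # True
```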
Theorem : Syndrome decoding is nearest-neighbour decoding.
Proof : Let e(x) be the error polynomial obtained using syndrome decoding. Syndrome
decoding differs from nearest-neighbour decoding only if there exists an error polynomial
e′(x) with weight strictly less than that of e(x) such that
v(x) − e′(x) = c′(x) ∈ C
But v(x) − e(x) = c(x) ∈ C
⇒ e′(x) − e(x) ∈ C
⇒ e′(x) = e(x) mod g(x)
But s(x) = e(x) mod g(x)
⇒ s(x) = e′(x) mod g(x), a contradiction:
by definition, e(x) is the smallest-weight error polynomial for which s(x) = e(x) mod g(x).
Let us construct a binary cyclic code which can correct two errors and can be decoded
by algebraic means. Let n = 2^m − 1. Suppose α is a primitive element of GF(2^m).
Define
C = {c(x) ∈ GF(2)[x]/(x^n − 1) : c(α) = 0 and c(α^3) = 0 in GF(2^m) = GF(n + 1)}
Note that C is cyclic: C is a group under addition, and c(x) ∈ C ⇒
a(x) c(x) mod (x^n − 1) ∈ C, because a(α) c(α) = 0 and a(α^3) c(α^3) = 0.
We have the senseword v(x) = c(x) + e(x).
Suppose at most 2 errors occur. Then e(x) = 0 or e(x) = x^i or e(x) = x^i + x^j.
Define X_1 = α^i and X_2 = α^j. X_1 and X_2 are called error location numbers and are
unique because α has order n. So if we can find X_1 and X_2, we know i and j and the
senseword is properly decoded.
Let S_1 = v(α) = α^i + α^j = X_1 + X_2 and S_3 = v(α^3) = α^{3i} + α^{3j} = X_1^3 + X_2^3.
We are given S_1 and S_3 and we have to find X_1 and X_2.
Under the assumption that at most 2 errors occur, S_1 = 0 iff no errors occur.
If the above equations can be solved uniquely for X_1 and X_2, the two errors can be
corrected. To solve these equations, consider the polynomial
(x − X_1)(x − X_2) = x^2 + (X_1 + X_2) x + X_1 X_2
X_1 + X_2 = S_1
(X_1 + X_2)^3 = (X_1 + X_2)^2 (X_1 + X_2) = (X_1^2 + X_2^2)(X_1 + X_2)
= X_1^3 + X_1^2 X_2 + X_2^2 X_1 + X_2^3 = S_3 + X_1 X_2 S_1
⇒ X_1 X_2 = (S_1^3 + S_3)/S_1
Therefore (x − X_1)(x − X_2) = x^2 + S_1 x + (S_1^3 + S_3)/S_1
We can construct the RHS polynomial. By the unique factorization theorem, the zeros
of this quadratic polynomial are unique.
One easy way to find the zeros is to evaluate the polynomial for all 2^m values in GF(2^m).
Solution to Midterm II
1. Trivial
2. a) 5! = 120 elements; not abelian.
   b) (a ∗ b)^{−1} = a ∗ b, but also (a ∗ b)^{−1} = b^{−1} ∗ a^{−1} = b ∗ a, so a ∗ b = b ∗ a.
3. Trivial
4. a) The divisors of 63: 1, 3, 7, 9, 21, 63.
   d) α ∈ GF(p) ⇒ α^{p−1} = 1 ⇒ α^p = α.
   Look at the polynomial x^p − x. It has only p zeros, given by the elements of GF(p).
   If α ∈ GF(p^m), α^p = α and α ∉ GF(p), then x^p − x would have p + 1 roots, a
   contradiction.
Procedure for decoding
1. Compute the syndromes S_1 = v(α) and S_3 = v(α^3)
2. If S_1 = 0 and S_3 = 0, assume no error
3. Construct the polynomial x^2 + S_1 x + (S_1^3 + S_3)/S_1
4. Find the roots X_1 and X_2. If either of them is 0, assume a single error. Else, let
X_1 = α^i and X_2 = α^j. Then errors occurred in locations i and j.
Many cyclic codes are characterized by the zeros of all codewords:
C = {c(x) : c(β_1) = 0, c(β_2) = 0, …, c(β_l) = 0}
where c_0, …, c_{n−1} ∈ GF(q) and β_1, …, β_l ∈ GF(Q) ⊃ GF(q), with Q = q^m
because GF(Q) is an extension field of GF(q).
Note that the above definition imposes constraints on the values of q and n:
c(β) = 0 ∀ c ∈ C ⇒ (x − β) | c(x) ⇒ (x − β) | g(x)
⇒ (x − β) | x^n − 1 ⇒ β^n = 1
so each β is an n-th root of unity; taking β of order n gives n | Q − 1, i.e. n | q^m − 1.
Lemma : Suppose n and q are coprime. Then there exists some number m such
that n | q^m − 1.
Proof :
q = Q_1 n + S_1
q^2 = Q_2 n + S_2
…
q^{n+1} = Q_{n+1} n + S_{n+1}
All the remainders lie between 0 and n − 1. Because we have n + 1 remainders, at
least two of them must be the same, say S_i and S_j with i < j.
Then we have
q^j − q^i = Q_j n + S_j − Q_i n − S_i = (Q_j − Q_i) n
or q^i (q^{j−i} − 1) = (Q_j − Q_i) n
n ∤ q^i ⇒ n | q^{j−i} − 1
Put m = j − i, and we have n | q^m − 1 for some m.
Corr : Suppose n and q are coprime and n | q^m − 1. Let C be a cyclic code over GF(q)
of length n. Then g(x) = Π_{i=1}^{l} (x − β_i), where β_i ∈ GF(q^m).
First, n | q^m − 1 ⇒ (x^n − 1) | (x^{q^m − 1} − 1):
Proof : z^k − 1 = (z − 1)(z^{k−1} + z^{k−2} + · · · + 1)
Let q^m − 1 = n·r. Put z = x^n and k = r. Then
x^{nr} − 1 = x^{q^m − 1} − 1 = (x^n − 1)(x^{n(r−1)} + · · · + 1)
⇒ (x^n − 1) | (x^{q^m − 1} − 1) ⇒ g(x) | (x^{q^m − 1} − 1)
But x^{q^m − 1} − 1 = Π_{i=1}^{q^m − 1} (x − α_i), with α_i ∈ GF(q^m), α_i ≠ 0.
Therefore g(x) = Π_{i=1}^{l} (x − β_i).
Note : A cyclic code of length n over GF(q), with n | q^m − 1, is uniquely specified
by the zeros of g(x) in GF(q^m):
C = {c(x) : c(β_i) = 0, 1 ≤ i ≤ l}
Defn. of BCH Code : Suppose n and q are relatively prime and n | q^m − 1. Let α
be a primitive element of GF(q^m). Let β = α^{(q^m − 1)/n}. Then β has order n in GF(q^m).
An (n, k) BCH code of design distance d is a cyclic code of length n given by
C = {c(x) : c(β) = c(β^2) = · · · = c(β^{d−1}) = 0}
Note : Often we have n = q^m − 1. In this case, the resulting BCH code is called a
primitive BCH code.
Theorem : The minimum distance of an (n, k) BCH code of design distance d is at
least d.
Proof :
Let H =
[ 1  β       β^2        …  β^{n−1}        ]
[ 1  β^2     β^4        …  β^{2(n−1)}     ]
[ …                                       ]
[ 1  β^{d−1} β^{2(d−1)} …  β^{(d−1)(n−1)} ]
Since c(β^i) = c_0 + c_1 β^i + c_2 β^{2i} + · · · + c_{n−1} β^{(n−1)i}, we have
c ∈ C ⇔ cH^T = 0, with c ∈ GF(q)^n.
Digression :
Theorem : A matrix A has an inverse iff det A ≠ 0.
Corr : cA = 0 and c ≠ 0_{1×n} ⇒ det A = 0.
Proof : Suppose det A ≠ 0. Then A^{−1} exists
⇒ cAA^{−1} = 0 ⇒ c = 0, a contradiction.
Lemma :
det(A) = det(A^T)
multiplying one row (or column) of A by a scalar k multiplies det A by k
Theorem : Any square matrix of the form
A =
[ 1         1         …  1         ]
[ X_1       X_2       …  X_d       ]
[ …                                ]
[ X_1^{d−1} X_2^{d−1} …  X_d^{d−1} ]
(a Vandermonde matrix) has a nonzero determinant iff all the X_i are distinct.
Proof : See pp. 44, Section 2.6.
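A quick numeric check of the Vandermonde fact det A = Π_{i<j} (X_j − X_i), which is nonzero exactly when the X_i are distinct (the values below are arbitrary illustrations):

```python
def det(M):
    """Determinant by Laplace expansion along the first row (fine for small d)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] *
               det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

xs = [2, 3, 5, 7]
A = [[x ** i for x in xs] for i in range(len(xs))]   # rows 1, X, X^2, X^3

prod = 1
for i in range(len(xs)):
    for j in range(i + 1, len(xs)):
        prod *= xs[j] - xs[i]                        # product of pairwise differences

print(det(A) == prod and prod != 0)                  # True
```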
End of digression
In order to show that the weight of every nonzero codeword is at least d, let us proceed
by contradiction. Assume there exists a nonzero codeword c with weight w(c) = w < d,
i.e. c_i ≠ 0 only for i ∈ {n_1, …, n_w}.
cH^T = 0
⇒ (c_0 … c_{n−1}) ·
[ 1        1          …  1              ]
[ β        β^2        …  β^{d−1}        ]
[ β^2      β^4        …  β^{2(d−1)}     ]
[ …                                     ]
[ β^{n−1}  β^{2(n−1)} …  β^{(d−1)(n−1)} ]
= 0
Keeping only the nonzero components c_{n_1}, …, c_{n_w}:
⇒ (c_{n_1} … c_{n_w}) ·
[ β^{n_1}  β^{2 n_1}  …  β^{(d−1) n_1} ]
[ β^{n_2}  β^{2 n_2}  …  β^{(d−1) n_2} ]
[ …                                    ]
[ β^{n_w}  β^{2 n_w}  …  β^{(d−1) n_w} ]
= 0
Keeping only the first w columns (w ≤ d − 1):
⇒ (c_{n_1} … c_{n_w}) ·
[ β^{n_1}  …  β^{w n_1} ]
[ β^{n_2}  …  β^{w n_2} ]
[ …                     ]
[ β^{n_w}  …  β^{w n_w} ]
= 0
A nonzero vector annihilates this square matrix, so its determinant is 0. Factoring
β^{n_i} out of row i:
⇒ β^{n_1} β^{n_2} · · · β^{n_w} · det
[ 1  β^{n_1}  …  β^{(w−1) n_1} ]
[ 1  β^{n_2}  …  β^{(w−1) n_2} ]
[ …                            ]
[ 1  β^{n_w}  …  β^{(w−1) n_w} ]
= 0
Using det(A) = det(A^T), this is a Vandermonde determinant in β^{n_1}, …, β^{n_w},
which are distinct because β has order n and the n_i are distinct, 0 ≤ n_i ≤ n − 1.
So the determinant is nonzero, a contradiction. Hence proved.
Suppose g(x) has zeros at β_1, …, β_l. Can we write
C = {c(x) : c(β_1) = · · · = c(β_l) = 0}?
No, not in general. Example: take n = 4 and g(x) = (x + 1)^2 = x^2 + 1, which divides
x^4 − 1 over GF(2). But
{c(x) : c(1) = 0} = {a(x)(x + 1) mod x^4 − 1} = ⟨x + 1⟩ ≠ ⟨g(x)⟩
Some special cases :
1) Binary Hamming codes : let q = 2 and n = 2^m − 1. Clearly n and q are coprime. Let
α be a primitive element of GF(2^m). Then the (n, k) BCH code of design distance 3 is
given by C = {c(x) : c(α) = 0, c(α^2) = 0}.
In GF(2)[x], c(x^2) = c(x)^2. Using (a + b)^2 = a^2 + b^2,
[Σ_{i=0}^{n−1} c_i x^i]^2 = Σ_{i=0}^{n−1} c_i^2 x^{2i} = Σ_{i=0}^{n−1} c_i x^{2i} = c(x^2)
so c(α) = 0 already implies c(α^2) = 0. Therefore C = {c(x) : c(α) = 0}.
Let H = [1 α α^2 … α^{n−1}], with each power of α written as a column over GF(2):
β ∈ GF(2^m) ⇒ β = a_0 + a_1 α + · · · + a_{m−1} α^{m−1}, a_0, …, a_{m−1} ∈ GF(2).
Then c ∈ C ⇔ cH^T = 0.
For n = 7, α ∈ GF(8) with α^3 = α + 1:
H =
[ 1 0 0 1 0 1 1 ]
[ 0 1 0 1 1 1 0 ]
[ 0 0 1 0 1 1 1 ]
2) R.S. codes : Take n = q − 1 and
C = {c(x) : c(α) = c(α^2) = · · · = c(α^{d−1}) = 0}, α primitive.
The generator polynomial for this code is
g(x) = (x − α)(x − α^2) · · · (x − α^{d−1})
Ex : Prove that g(x) is the generator polynomial of C.
deg g(x) = n − k = d − 1
⇒ d = n − k + 1
Therefore, we have an (n, k, n − k + 1) MDS code.
R.S. codes are optimal in this sense.
Example : 3-error-correcting RS code over GF(11):
n = q − 1 = 10, d = 7, k = n − d + 1 = 10 − 7 + 1 = 4
(10, 4, 7) RS code over GF(11). 2 is a primitive element, so with α = 2:
g(x) = (x − α)(x − α^2)(x − α^3)(x − α^4)(x − α^5)(x − α^6)
= (x − 2)(x − 4)(x − 8)(x − 5)(x − 10)(x − 9)
= x^6 + 6x^5 + 5x^4 + 7x^3 + 2x^2 + 8x + 2
Ex : the dual of an MDS code is MDS. Hint: columns are linearly dependent ⇔ rows
are linearly dependent.
Real-life examples :
The music CD standard RS codes are (32, 28) and (28, 24) double-error-correcting codes
over GF(256).
The NASA standard RS code is a (255, 223, 33) code.
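The expansion of g(x) in the GF(11) example above can be checked by multiplying out the factors modulo 11 (α = 2, so the roots are 2^1, …, 2^6). A short sketch, with coefficients stored in ascending order:

```python
q, alpha, d = 11, 2, 7

g = [1]                                   # start from the constant polynomial 1
for i in range(1, d):
    root = pow(alpha, i, q)               # roots: 2, 4, 8, 5, 10, 9
    # multiply g(x) by (x - root) mod 11: new[k] = g[k-1] - root*g[k]
    g = [(b - root * a) % q for a, b in zip(g + [0], [0] + g)]

# ascending coefficients of x^6 + 6x^5 + 5x^4 + 7x^3 + 2x^2 + 8x + 2
print(g)                                  # [2, 8, 2, 7, 5, 6, 1]
```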
Review : n, q coprime, n | q^m − 1, β ∈ GF(q^m) of order n. An (n, k) BCH code of
design distance d is defined as
C = {c(x) : c(β) = · · · = c(β^{d−1}) = 0}
This definition is useful in deriving the distance structure of the code. The obvious
questions to ask are
1) Encoding : need to find the generator polynomial
2) Decoding : will not be covered. The Peterson–Gorenstein–Zierler (PGZ) decoder
described in Section 6.6 is an extension of the two-error-correcting decoder we described
earlier.
Minimal polynomials : Let α ∈ GF(q^m). The monic polynomial of smallest degree
over GF(q) with α as a zero is called the minimal polynomial of α over GF(q). This is
a slight generalization of the definition given previously, where q was equal to p.
Recall : existence and uniqueness, primality.
Lemma : Let f(x) be the minimal polynomial of α ∈ GF(q^m). Suppose g(x) is a
polynomial over GF(q) such that g(α) = 0. Then f(x) | g(x).
Proof : g(x) = f(x) Q(x) + r(x), where deg r(x) < deg f(x).
g(α) = 0 ⇒ r(α) = g(α) − f(α) Q(α) = 0. Hence r(x) must be 0. (Alternate proof:
appeal to the primality of f(x).)
Primitive polynomial : The minimal polynomial of a primitive element is called a
primitive polynomial. Suppose α is a primitive element of GF(q^m) and p(x) is the minimal
polynomial of α over GF(q). Then GF(q^m) ≅ GF(q)[x]/p(x), and the polynomial
x ∈ GF(q)[x]/p(x) is a primitive element of GF(q)[x]/p(x).
Corr : If p(β) = 0 and p(x) is prime, then p(x) is the minimal polynomial of β.
Theorem : Consider a BCH code with zeros at β, β^2, …, β^{d−1}. Let f_k(x) be the minimal
polynomial of β^k ∈ GF(q^m) over GF(q). Then the generator polynomial is given by
g(x) = LCM(f_1(x), f_2(x), …, f_{d−1}(x))
Proof : For any c(x) ∈ C,
c(β^k) = 0 ⇒ f_k(x) | c(x), 1 ≤ k ≤ d − 1
⇒ LCM(f_1(x), …, f_{d−1}(x)) | c(x), by unique prime factorization
Therefore deg LCM(…) ≤ deg c(x) ∀ c(x) ∈ C. Also LCM(…) ∈ C, since it is zero at
each β^k. Therefore g(x) = LCM(f_1(x), …, f_{d−1}(x)) is the generator polynomial.
Finding the generator polynomials reduces to ﬁnding the minimal polynomials of the
zeros of the generator polynomial.
Brute-force approach : Factorize x^n − 1 into its prime polynomials over GF(q)[x]:
x^n − 1 = b_1(x) b_2(x) · · · b_l(x)
Since g(x) | x^n − 1, g(x) has to be a product of a subset of these factors. g(x) is zero
for x = β, β^2, …, β^{d−1}.
We know that g(β^k) = 0 ⇒ f_k(x) is a factor of g(x) and hence of x^n − 1. By unique
prime factorization, f_k(x) must be equal to one of the b_j(x), 1 ≤ j ≤ l.
We only need to find the polynomial b_j(x) for which b_j(β^k) = 0. Repeat this procedure to
find the minimal polynomials of all the zeros and take their LCM to find the generator
polynomial.
Problem : factorizing x^n − 1 is hard.
Theorem : Let p be the characteristic of GF(q). Let f(x) = Σ_{i=0}^{n} f_i x^i be a
polynomial over GF(q), f_i ∈ GF(q). Then, for all m,
f(x)^{p^m} = Σ_{i=0}^{n} f_i^{p^m} x^{i p^m}
Proof :
(a + b)^p = a^p + b^p, since the binomial coefficients (p choose i) are divisible by p
for 0 < i < p.
(a + b)^{p^2} = (a^p + b^p)^p = a^{p^2} + b^{p^2}
In general, (a + b)^{p^m} = a^{p^m} + b^{p^m}
(a + b + c)^p = (a + b)^p + c^p = a^p + b^p + c^p
(a + b + c)^{p^m} = a^{p^m} + b^{p^m} + c^{p^m}
In general, (Σ_{i=0}^{n} a_i)^{p^m} = a_0^{p^m} + a_1^{p^m} + · · · + a_n^{p^m}, a_i ∈ GF(q)
Therefore f(x)^{p^m} = (Σ_{i=0}^{n} f_i x^i)^{p^m} = (f_0 x^0)^{p^m} + · · · + (f_n x^n)^{p^m}
= Σ_{i=0}^{n} f_i^{p^m} x^{i p^m}
Corr : Suppose f(x) is a polynomial over GF(q). Then f(x)^q = f(x^q).
Proof : With r = deg f(x) and q = p^m,
f(x)^q = [Σ_{i=0}^{r} f_i x^i]^q = Σ_{i=0}^{r} f_i^q x^{iq} = Σ_{i=0}^{r} f_i x^{iq} = f(x^q)
using f_i ∈ GF(q) ⇒ f_i^q = f_i.
Corr : If f(x) is the minimal polynomial of β over GF(q), then f(x) is also the minimal
polynomial of β^q.
Proof : f(β^q) = f(β)^q = 0. f(x) is prime, so if g(x) is the minimal polynomial of β^q,
then f(β^q) = 0 ⇒ g(x) | f(x), and f(x) prime ⇒ g(x) = f(x).
Conjugates : The conjugates of β over GF(q) are the zeros of the minimal polynomial
of β over GF(q) (including β itself).
Theorem : The conjugates of β over GF(q) are β, β^q, β^{q^2}, …, β^{q^{r−1}},
where r is the least positive integer such that β^{q^r} = β.
First we check that the β^{q^k} are indeed conjugates of β. Let f(x) be the minimal
polynomial of β. Repeatedly using the fact that f(x)^q = f(x^q),
f(β) = 0 ⇒ f(β^q) = 0 ⇒ f(β^{q^2}) = 0, and so on.
Therefore f(β) = 0 ⇒ f(β^{q^k}) = 0, so the minimal polynomial of β has zeros at
β, β^q, …, β^{q^{r−1}}. Now let
f(x) = (x − β)(x − β^q) · · · (x − β^{q^{r−1}})
If we show that f(x) ∈ GF(q)[x], then f(x) will be the minimal polynomial, and hence
β, β^q, …, β^{q^{r−1}} are the only conjugates of β.
f(x)^q = (x − β)^q (x − β^q)^q · · · (x − β^{q^{r−1}})^q
= (x^q − β^q)(x^q − β^{q^2}) · · · (x^q − β^{q^r})
= (x^q − β^q)(x^q − β^{q^2}) · · · (x^q − β)    since β^{q^r} = β
= f(x^q)
But f(x)^q = f_0^q + f_1^q x^q + · · · + f_r^q x^{rq} and f(x^q) = f_0 + f_1 x^q + · · · + f_r x^{rq}.
Equating the coefficients, we have f_i^q = f_i ⇒ f_i ∈ GF(q) ⇒ f(x) ∈ GF(q)[x].
Examples :
GF(4) = {0, 1, 2, 3}
The conjugates of 2 over GF(2) are {2, 2^2} = {2, 3}.
Therefore the minimal polynomial of 2 (and of 3) is
(x − 2)(x − 3) = x^2 + (2 + 3)x + 2 × 3
= x^2 + x + 1
GF(8) = {0, 1, 2, …, 7} = {0, 1, α, α + 1, …, α^2 + α + 1}
The conjugates of 2 over GF(2) are {2, 2^2, 2^4} and the minimal polynomial is x^3 + x + 1.
Cyclic codes are often used to correct burst errors and detect errors.
A cyclic burst of length t is a vector whose nonzero components are confined to t
cyclically consecutive components, the first and last of which are nonzero.
We can write such an errorword as follows:
e(x) = x^i b(x) mod x^n − 1
where deg b(x) = t − 1 and b_0 ≠ 0.
In the setup we have considered so far, the optimal decoding procedure is nearest-neighbour
decoding. For cyclic codes this reduces to calculating the syndrome and then doing
a lookup (at least conceptually). Hence a cyclic code which corrects burst errors must
have syndrome polynomials that are distinct for each correctable error pattern.
Suppose a linear code (not necessarily cyclic) can correct all burst errors of length t
or less. Then it cannot have a burst of length 2t or less as a codeword: a burst of length
t could change such a codeword into a pattern that could also be obtained by a burst
of length ≤ t operating on the all-zero codeword, making the two indistinguishable.
Rieger Bound : A linear block code that corrects all burst errors of length t or less
must satisfy
n − k ≥ 2t
Proof : Consider the coset decomposition of the code; #cosets = q^{n−k}. No codeword is
a burst of length 2t or less. Consider two distinct vectors c_1, c_2 that are zero outside
their first 2t components. Then c_1 − c_2 is a burst of length ≤ 2t ⇒ c_1 − c_2 ∉ C
⇒ c_1 and c_2 are in different cosets.
Therefore #cosets ≥ #(n-tuples that are zero outside their first 2t components)
Therefore q^{n−k} ≥ q^{2t} ⇒ n − k ≥ 2t
For small n and t, good burst-correcting cyclic codes have been found by computer search.
These can be augmented by interleaving. Section 5.9 contains more details.
Example: g(x) = x^6 + x^3 + x^2 + x + 1, with n − k = 6 and 2t = 6, meets the bound.
Cyclic codes are also used extensively in error detection, where they are often called
cyclic redundancy codes (CRCs).
The generator polynomial for these codes is of the form
g(x) = (x + 1) p(x)
where p(x) is a primitive polynomial of degree m. The blocklength is n = 2^m − 1 and k =
2^m − 1 − (m + 1). Almost always, these codes are shortened considerably. The two main
advantages of these codes are:
1) error detection amounts to computing the syndrome polynomial, which can be efficiently
implemented using shift registers.
2) good performance at high rate; e.g. m = 32 gives a (2^32 − 1, 2^32 − 33) code.
(x + 1) | g(x) ⇒ (x + 1) | c(x) ⇒ c(1) = 0, so the codewords have even weight.
p(x) | c(x) ⇒ the c(x) are also Hamming codewords. Hence C is the set of
even-weight codewords of a Hamming code, and d_min = 4.
Examples
CRC-16 : g(x) = (x + 1)(x^15 + x + 1) = x^16 + x^15 + x^2 + 1
CRC-CCITT : g(x) = x^16 + x^12 + x^5 + 1
Error detection algorithm : Divide the senseword v(x) by g(x) to obtain the syndrome
polynomial (remainder) s(x). s(x) ≠ 0 implies an error has been detected.
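The division step is a few lines of code. The sketch below (my illustration) uses the CRC-CCITT generator above and checks that a multiple of g(x) has zero syndrome while a corrupted word does not:

```python
def gf2_mul(a, b):                    # polynomial product over GF(2), ascending coeffs
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                out[i + j] ^= bj
    return out

def gf2_mod(v, g):                    # remainder of v(x) / g(x) over GF(2)
    v = v[:]
    for i in range(len(v) - 1, len(g) - 2, -1):
        if v[i]:
            for j, gj in enumerate(g):
                v[i - (len(g) - 1) + j] ^= gj
    return v[:len(g) - 1]

# CRC-CCITT: g(x) = x^16 + x^12 + x^5 + 1
g = [0] * 17
for e in (0, 5, 12, 16):
    g[e] = 1

a = [1, 0, 1, 1, 0, 1]                # an arbitrary data polynomial
c = gf2_mul(a, g)                     # a codeword: multiple of g(x)
print(any(gf2_mod(c, g)))             # False: syndrome 0, no error detected
c[7] ^= 1                             # corrupt one bit
print(any(gf2_mod(c, g)))             # True: nonzero syndrome, error detected
```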
No burst of length n − k or less is a codeword:
e(x) = x^i b(x) mod x^n − 1, deg b(x) < n − k
s(x) = e(x) mod g(x) = x^i b(x) mod g(x)
If s(x) = 0, then g(x) | x^i b(x). Since g(x) and x^i are coprime (*), this forces
g(x) | b(x), which is impossible for nonzero b(x) with deg b(x) < deg g(x) = n − k.
(*) g(x) ∤ x^i : otherwise g(x) | x^i ⇒ g(x) | x^{n−i} · x^i = x^n. But g(x) | x^n − 1,
so g(x) | 1, a contradiction.
If n − k = 32, every burst of length 32 or less will be detected.
We looked at both source coding and error-control coding in this course.
Probabilistic techniques like turbo codes and trellis codes were not covered, nor were
modulation and lossy coding.
E2 231: Data Communication Systems January 7, 2005
Homework 1
1. In communication systems, the two primary resource constraints are transmitted
power and channel bandwidth. For example, the capacity (in bits/s) of a band-limited
additive white Gaussian noise channel is given by
C = W log_2 (1 + P/(N_0 W))
where W is the bandwidth, N_0/2 is the two-sided power spectral density and P is the signal
power. Find the minimum energy required to transmit a single bit over this channel. Take
W = 1, N_0 = 1.
(Hint: Energy = Power × Time)
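One numeric way into Problem 1 (my sketch, with W = 1 and N_0 = 1 as in the problem): sending one bit in time T at capacity needs T·C(P) = 1, so the energy is E(P) = P·T = P / log_2(1 + P), and letting P → 0 suggests the minimum:

```python
import math

def energy_per_bit(P, W=1.0, N0=1.0):
    """Energy P*T needed to send 1 bit at capacity: T = 1 / C(P)."""
    C = W * math.log2(1 + P / (N0 * W))   # capacity in bits/s
    return P / C

for P in (1.0, 0.1, 0.01, 1e-4):
    print(P, energy_per_bit(P))
# the values decrease toward ln(2) = 0.693... as P -> 0
```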
2. Assume that there are exactly 2^{1.5n} equally likely, grammatically correct strings of n
characters, for every n (this is, of course, only approximately true). What is then the
minimum bit rate required to transmit spoken English? You can make any reasonable
assumption about how fast people speak, language structure and so on. Compare this
with the 64 kbps bitstream produced by sampling speech 8000 times/s and then
quantizing each sample to one of 256 levels.
3. Consider the so-called Binary Symmetric Channel (BSC) shown below, in which
bits are erroneously received with probability p. Suppose that each bit is transmitted
several times to improve reliability (repetition coding). Also assume that, at the input,
bits are equally likely to be 1 or 0 and that successive bits are independent of each other.
What is the improvement in bit error rate with 3 and 5 repetitions, respectively?
[Figure 1: Binary Symmetric Channel — inputs 0, 1 map to the same output with
probability 1 − p and cross over with probability p.]
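For Problem 3, majority decoding of n repetitions fails exactly when more than half of the transmitted copies are flipped, so the bit error rate is a binomial tail. A sketch:

```python
from math import comb

def ber_repetition(p, n):
    """Bit error rate of an n-repetition code over a BSC(p), majority decoding
    (n odd, so there are no ties)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

p = 0.1
print(ber_repetition(p, 1))   # 0.1 (no coding)
print(ber_repetition(p, 3))   # ~ 0.028
print(ber_repetition(p, 5))   # ~ 0.00856
```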
E2 231: Data Communication Systems January 17, 2005
Homework 2
1. Suppose f(x) is a strictly convex function over the interval (a, b). Let λ_k, 1 ≤ k ≤ N,
be a sequence of nonnegative real numbers such that Σ_{k=1}^{N} λ_k = 1. Let x_k, 1 ≤ k ≤ N,
be a sequence of numbers in (a, b), not all of which are equal. Show that
a) f(Σ_{k=1}^{N} λ_k x_k) < Σ_{k=1}^{N} λ_k f(x_k)
b) Let X be a discrete random variable. Suppose Ef(X) = f(EX). Use part (a) to
show that X is a constant.
2–8. Solve the following problems from Chapter 5 of Cover and Thomas : 4, 6, 11, 12,
15, 18 and 22.
9. Solve parts (a) and (b) of 5.26.
E2 231: Data Communication Systems February 3, 2005
Homework 3
1. A random variable X is uniformly distributed between −10 and +10. The probability
density function of X is given by
f_X(x) = 1/20 for −10 ≤ x ≤ 10, and 0 otherwise
Suppose Y = X^2. What is the probability density function of Y?
2. Suppose X_1, X_2, …, X_N are independent and identically distributed continuous
random variables. What is the probability that X_1 = min(X_1, X_2, …, X_N)?
(Hint: Solve for N = 2 and generalize)
3–6. Solve the following problems from Chapter 3 of Cover and Thomas : 1, 2, 3 and 7.
7–12. Solve the following problems from Chapter 4 of Cover and Thomas : 2 (Hint:
H(X, Y) = H(Y, X)), 4, 8, 10, 12 and 14.
E2 231: Data Communication Systems February 18, 2005
Homework 4
1–4. Solve the following problems from Chapter 4 of Cover and Thomas : 2, 4, 9, and 10.
E2 231: Data Communication Systems March 6, 2005
Homework 5
1.
a) Perform a Lempel–Ziv compression of the string given below. Assume a window
size of 8.
000111000011100110001010011100101110111011110001101101110111000010001100011
b) Assume that the above binary string is obtained from a discrete memoryless source
which can take 8 values. Each value is encoded into a 3-bit segment to generate the
given binary string. Estimate the probability of each 3-bit segment by its relative
frequency in the given binary string. Use this probability distribution to devise a
Huffman code. How many bits do you need to encode the given binary string using
this Huffman code?
2.
a) Show that a code with minimum distance d_min can correct any pattern of p erasures
if
d_min ≥ p + 1
b) Show that a code with minimum distance d_min can correct any pattern of p erasures
and t errors if
d_min ≥ 2t + p + 1
3. For any q-ary (n, k) block code with minimum distance 2t + 1 or greater, show that
the number of data symbols satisfies
n − k ≥ log_q [1 + (n choose 1)(q − 1) + (n choose 2)(q − 1)^2 + · · · + (n choose t)(q − 1)^t]
4. Show that only one group exists with three elements. Construct this group and show
that it is abelian.
5.
a) Let G be a group and a′ denote the inverse of any element a ∈ G. Prove that
(a ∗ b)′ = b′ ∗ a′.
b) Let G be an arbitrary finite group (not necessarily abelian). Let h be an element
of G and H be the subgroup generated by h. Show that H is abelian.
6–15. Solve the following problems from Chapter 2 of Blahut : 5, 6, 7, 11, 14, 15, 16, 17,
18 and 19.
16. (Putnam, 2001) Consider a set S and a binary operation ∗, i.e., for each a, b ∈ S,
a ∗ b ∈ S. Assume (a ∗ b) ∗ a = b for all a, b ∈ S. Prove that a ∗ (b ∗ a) = b for all a, b ∈ S.
E2 231: Data Communication Systems March 14, 2005
Homework 6
1–6. Solve the following problems from Chapter 3 of Blahut : 1, 2, 4, 5, 11 and 13.
7. We defined maximum distance separable (MDS) codes as those linear block codes
which satisfy the Singleton bound with equality, i.e., d = n − k + 1. Find all binary MDS
codes. (Hint: Two binary MDS codes are the repetition codes (n, 1, n) and the single
parity check codes (n, n − 1, 2). It turns out that these are the only binary MDS codes.
All you need to show now is that a binary (n, k, n − k + 1) linear block code cannot exist
for k ≠ 1, k ≠ n − 1.)
8. A perfect code is one for which there are Hamming spheres of equal radius about the
codewords that are disjoint and that completely fill the space (see Blahut, pp. 59–61,
for a fuller discussion). Prove that all Hamming codes are perfect. Show that the (9, 7)
Hamming code over GF(8) is a perfect as well as an MDS code.
E2 231: Data Communication Systems March 28, 2005
Homework 7
1–12. Solve the following problems from Chapter 4 of Blahut : 1, 2, 3, 4, 5, 7, 9, 10, 12,
13, 15 and 18.
13. Recall that the Euler totient function, φ(n), is the number of positive integers less
than n that are relatively prime to n. Show that
a) If p is prime and k ≥ 1, then φ(p^k) = (p − 1) p^{k−1}.
b) If n and m are relatively prime, then φ(mn) = φ(m) φ(n).
c) If the factorization of n is Π_i q_i^{k_i}, then φ(n) = Π_i (q_i − 1) q_i^{k_i − 1}.
14. Suppose p(x) is a prime polynomial of degree m > 1 over GF(q). Prove that p(x)
divides x^{q^m − 1} − 1. (Hint: Think minimal polynomials and use Theorem 4.6.4 of Blahut)
15. In a finite field with characteristic p, show that
(a + b)^p = a^p + b^p, a, b ∈ GF(q)
E2 231: Data Communication Systems April 6, 2005
Homework 8
1. Suppose p(x) is a prime polynomial of degree m > 1 over GF(q). Prove that p(x)
divides x^{q^m − 1} − 1. (Hint: Think minimal polynomials and use Theorem 4.6.4 of Blahut)
2–9. Solve the following problems from Chapter 5 of Blahut : 1, 5, 6, 7, 10, 13, 14, 15.
10–11. Solve the following problems from Chapter 6 of Blahut : 5, 6.
12. This problem develops an alternate view of Reed–Solomon codes. Suppose we
construct a code over GF(q) as follows. Take n = q (note that this is different from the code
presented in class, where n was equal to q − 1). Suppose the dataword polynomial is given
by a(x) = a_0 + a_1 x + · · · + a_{k−1} x^{k−1}. Let α_1, α_2, …, α_q be the q different elements of GF(q).
The dataword polynomial is then mapped into the n-tuple (a(α_1), a(α_2), …, a(α_q)). In
other words, the j-th component of the codeword corresponding to the data polynomial
a(x) is given by
a(α_j) = Σ_{i=0}^{k−1} a_i α_j^i ∈ GF(q), 1 ≤ j ≤ q
a) Show that the code as defined is a linear block code.
b) Show that this code is an MDS code. It is often called an extended RS code.
c) Suppose this code is punctured in r ≤ q − k = d − 1 places (r components of each
codeword are removed) to yield an (n = q − r, k, d = q − k + 1 − r = n − k + 1) code. Show
that the punctured code is also an MDS code.
Hint: a(x) is a polynomial of degree less than or equal to k − 1. By the Fundamental
Theorem of Algebra, a(x) can have at most k − 1 zeros. Therefore, a nonzero codeword
can have at most k − 1 symbols equal to 0. Hence d ≥ n − k + 1. The proof follows.
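The evaluation-map construction in Problem 12 is easy to test by brute force for a small prime field; the sketch below (parameters q = 5, k = 2 chosen for illustration) builds the extended RS code and confirms that the minimum nonzero codeword weight is n − k + 1 = 4:

```python
from itertools import product

q, k = 5, 2
n = q                                        # extended RS: evaluate at all of GF(q)

def encode(a):
    """Map the dataword polynomial a(x) to (a(0), a(1), ..., a(q-1)) mod q."""
    return [sum(ai * pow(x, i, q) for i, ai in enumerate(a)) % q
            for x in range(q)]

weights = [sum(1 for s in encode(a) if s != 0)
           for a in product(range(q), repeat=k) if any(a)]
print(min(weights) == n - k + 1)             # True: the code is MDS
```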
E2 231: Data Communication Systems February 16, 2004
Midterm Exam 1
1.
a) Find the binary Huffman code for the source with probabilities (1/3, 1/5, 1/5, 2/15, 2/15).
(2 marks)
b) Let l_1, l_2, …, l_M be the codeword lengths of a binary code obtained using the Huffman
procedure. Prove that
Σ_{k=1}^{M} 2^{−l_k} = 1
(3 marks)
2. A code is said to be suffix-free if no codeword is a suffix of any other codeword.
a) Show that suffix-free codes are uniquely decodable. (2 marks)
b) Suppose you are asked to develop a suffix-free code for a discrete memoryless source.
Prove that the minimum average length over all suffix-free codes is the same as the
minimum average length over all prefix-free codes. (2 marks)
c) State a disadvantage of using suffix-free codes. (1 mark)
3. Let X_1, X_2, … be an i.i.d. sequence of discrete random variables with entropy H, taking
values in the set X. Let
B_n(s) = {x^n ∈ X^n : p(x^n) ≥ 2^{−ns}}
denote the subset of n-tuples which occur with probability at least 2^{−ns}.
a) Prove that |B_n(s)| ≤ 2^{ns}. (2 marks)
b) For any ε > 0, show that lim_{n→∞} Pr(B_n(H + ε)) = 1 and lim_{n→∞} Pr(B_n(H − ε)) = 0.
(3 marks)
4.
a) A fair coin is flipped until the first head occurs. Let X denote the number of flips
required. Find the entropy of X. (2 marks)
b) Let X_1, X_2, … be a stationary stochastic process. Show that (3 marks)
H(X_1, X_2, …, X_n)/n ≤ H(X_1, X_2, …, X_{n−1})/(n − 1)
E2 231: Data Communication Systems March 29, 2004
Midterm Exam 2
1.
a) For any qary (n, k) block code with minimum distance 2t +1 or greater, show that
the number of data symbols satisﬁes
n − k ≥ log
q
1 +
n
1
(q − 1) +
n
2
(q − 1)
2
+ . . . +
n
t
(q − 1)
t
(2 marks)
b) Show that a code with minimum distance d
min
can correct any pattern of p erasures
and t errors if
d
min
≥ 2t + p + 1
(3 marks)
2.
a) Let S
5
be the symmetric group whose elements are the permutations of the set
X = {1, 2, 3, 4, 5} and the group operation is the composition of permutations.
• How many elements does this group have? (1 mark)
• Is the group abelian? (1 mark)
b) Consider a group where every element is its own inverse. Prove that the group is
abelian. (3 marks)
3.
a) You are given a binary linear block code where every codeword c satisﬁes cA
T
= 0,
for the matrix A given below.
A =
1 0 1 0 0 0
1 0 0 1 0 0
0 1 0 0 1 0
0 1 0 0 0 1
1 0 1 0 0 0
• Find the parity check matrix and the generator matrix for this code.(2 marks)
• What is the minimum distance of this code? (1 mark)
21
b) Show that a linear block code with parameters n = 7, k = 4 and d_min = 5 cannot
exist. (1 mark)
c) Give the parity check matrix and the generator matrix for the (n, 1) repetition
code. (1 mark)
4.
a) Let α be an element of GF(64). What are the possible values for the multiplicative
order of α? (1 mark)
b) Evaluate (2x^2 + 2)(x^2 + 2) in GF(3)[x]/(x^3 + x^2 + 2). (1 mark)
c) Use the extended Euclidean algorithm to find the multiplicative inverse of 9 in
GF(13). (2 marks)
d) Suppose p is prime and α is an element of GF(p^m), m > 1. Show that α^p = α
implies that α belongs to the subfield GF(p). (2 marks)
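Part c) can be checked with a direct implementation of the extended Euclidean algorithm; the recursive sketch below is one standard formulation, not the only one.

```python
def ext_gcd(a, b):
    # Returns (g, x, y) with a*x + b*y = g = gcd(a, b).
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

g, x, _ = ext_gcd(9, 13)
assert g == 1          # 9 and 13 are coprime, so an inverse exists
inv = x % 13
print(inv)  # 3, since 9 * 3 = 27 = 2*13 + 1 in GF(13)
```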
E2 231: Data Communication Systems April 22, 2004
Final Exam
1. For each statement given below, state whether it is true or false. Provide brief
(1-2 lines) explanations.
a) The source code {0, 01} is uniquely decodable. (1 mark)
b) There exists a prefix-free code with codeword lengths {2, 2, 3, 3, 3, 4, 4, 4}. (1 mark)
c) Suppose X_1, X_2, . . . , X_n are discrete random variables with the same entropy value.
Then H(X_1, X_2, . . . , X_n) ≤ nH(X_1). (2 marks)
d) The generator polynomial of a cyclic code has order 7. This code can detect all
cyclic bursts of length at most 7. (1 mark)
2. Let {p_1 > p_2 ≥ p_3 ≥ p_4} be the symbol probabilities for a source which can assume
four values.
a) What are the possible sets of codeword lengths for a binary Huffman code for this
source? (1 mark)
b) Prove that for any binary Huffman code, if the most probable symbol has probability
p_1 > 2/5, then that symbol must be assigned a codeword of length 1. (2 marks)
c) Prove that for any binary Huffman code, if the most probable symbol has probability
p_1 < 1/3, then that symbol must be assigned a codeword of length 2. (2 marks)
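Parts b) and c) can be sanity-checked by actually building Huffman codes. The sketch below computes only codeword lengths, and the two example distributions are hypothetical choices satisfying p_1 > 2/5 and p_1 < 1/3 respectively — they are not taken from the exam.

```python
import heapq

def huffman_lengths(probs):
    # Returns the codeword length of each symbol in a binary Huffman code.
    heap = [(p, [i]) for i, p in enumerate(probs)]  # (weight, symbols under this node)
    heapq.heapify(heap)
    depth = [0] * len(probs)
    while len(heap) > 1:
        p1, s1 = heapq.heappop(heap)
        p2, s2 = heapq.heappop(heap)
        for s in s1 + s2:      # every symbol under the merged node goes one level deeper
            depth[s] += 1
        heapq.heappush(heap, (p1 + p2, s1 + s2))
    return depth

# p1 = 0.45 > 2/5 -> length 1 (part b); p1 = 0.3 < 1/3 -> length 2 (part c)
assert huffman_lengths([0.45, 0.25, 0.2, 0.1])[0] == 1
assert huffman_lengths([0.3, 0.25, 0.25, 0.2])[0] == 2
```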
3. Let X_0, X_1, X_2, . . . be independent and identically distributed random variables
taking values in the set X = {1, 2, . . . , m}, and let N be the waiting time to the next
occurrence of X_0, where N = min{n ≥ 1 : X_n = X_0}.
a) Show that EN = m. (2 marks)
b) Show that E log(N) ≤ H(X). (3 marks)
Hint : EN = Σ_{n≥1} P(N ≥ n) ; EY = E(E[Y|X]).
4. Consider a sequence of independent and identically distributed binary random variables
X_1, X_2, . . . , X_n. Suppose X_k can either be 1 or 0. Let A_ε^(n) be the typical set
associated with this process.
a) Suppose P(X_k = 1) = 0.5. For each ε and each n, determine |A_ε^(n)| and P(A_ε^(n)).
(1 mark)
b) Suppose P(X_k = 1) = 3/4. Show that
H(X_k) = 2 − (3/4) log 3
−(1/n) log P(X_1, X_2, . . . , X_n) = 2 − (Z/n) log 3
where Z is the number of ones in X_1, X_2, . . . , X_n. (2 marks)
c) For P(X_k = 1) = 3/4, n = 8 and ε = (1/8) log 3, compute |A_ε^(n)| and P(A_ε^(n)). (2 marks)
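Part c) is small enough to enumerate all 2^8 sequences directly. The sketch below counts the typical sequences and their total probability; the small slack added to ε guards the boundary case Z = 5, which satisfies the typicality condition with exact equality and could otherwise be lost to floating-point rounding.

```python
from itertools import product
from math import log2

# p = 3/4, n = 8, eps = (1/8) log2 3. A sequence x^n is typical iff
# |-(1/n) log2 P(x^n) - H(X)| <= eps, where H(X) = 2 - (3/4) log2 3.
p, n = 0.75, 8
H = 2 - 0.75 * log2(3)
eps = log2(3) / 8

size, prob = 0, 0.0
for x in product((0, 1), repeat=n):
    z = sum(x)
    px = p ** z * (1 - p) ** (n - z)
    if abs(-log2(px) / n - H) <= eps + 1e-9:  # slack for the Z = 5 boundary case
        size += 1
        prob += px

print(size, prob)  # typical iff Z in {5, 6, 7}: C(8,5)+C(8,6)+C(8,7) = 92 sequences
```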
5. Imagine you are running the pathology department of a hospital and are given 5 blood
samples to test. You know that exactly one of the samples is infected. Your task is to
ﬁnd the infected sample with the minimum number of tests. Suppose the probability
that the k-th sample is infected is given by
(p_1, p_2, . . . , p_5) = (4/12, 3/12, 2/12, 2/12, 1/12).
a) Suppose you test the samples one at a time. Find the order of testing to minimize
the expected number of tests required to determine the infected sample. (1 mark)
b) What is the expected number of tests required? (1 mark)
c) Suppose now that you can mix the samples. For the ﬁrst test, you draw a small
amount of blood from selected samples and test the mixture. You proceed, mixing
and testing, stopping when the infected sample has been determined. What is the
minimum expected number of tests in this case? (2 marks)
d) In part (c), which mixture will you test ﬁrst? (1 mark)
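Parts b) and c) can be computed as follows. Each mixture test in c) answers one yes/no question, so an optimal mixing strategy corresponds to a binary Huffman code on the five sample probabilities; the expected number of tests equals the expected codeword length, computed here via the sum-of-merged-weights identity.

```python
import heapq

# Probabilities in twelfths: (p1, ..., p5) = (4, 3, 2, 2, 1)/12
w = [4, 3, 2, 2, 1]

# b) Sequential testing in decreasing probability order costs 1, 2, 3, 4 tests for
# samples 1-4; a 4th negative test already identifies sample 5, so it also costs 4.
e_seq = sum(p * t for p, t in zip(w, [1, 2, 3, 4, 4])) / 12.0

# c) Huffman expected length = (sum of the weights of all merged nodes) / total weight
heap = list(w)
heapq.heapify(heap)
total = 0
while len(heap) > 1:
    m = heapq.heappop(heap) + heapq.heappop(heap)
    total += m
    heapq.heappush(heap, m)
e_mix = total / 12.0

print(e_seq, e_mix)  # 28/12 = 7/3 tests for b), 27/12 = 2.25 tests for c)
```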
6.
a) Let C be a binary Hamming code with n = 7 and k = 4. Suppose a new code C′ is
formed by appending to each codeword x̄ = (x_1, . . . , x_7) ∈ C an overall parity
check bit equal to x_1 + x_2 + . . . + x_7. Find the parity check matrix and the
minimum distance for the code C′. (2 marks)
b) Show that in a binary linear block code, for any bit location, either all the codewords
contain 0 in the given location or exactly half have 0 and half have 1 in the given
location. (3 marks)
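Part a) can be verified by brute force. The sketch below assumes the standard (7,4) Hamming parity check matrix whose columns are the binary representations of 1 through 7 — any other column ordering gives an equivalent code with the same minimum distance.

```python
from itertools import product

# Parity check matrix of the (7,4) Hamming code: columns = binary forms of 1..7
H = [[1, 0, 1, 0, 1, 0, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]

code = []
for c in product((0, 1), repeat=7):
    if all(sum(h * x for h, x in zip(row, c)) % 2 == 0 for row in H):
        code.append(c + (sum(c) % 2,))  # append the overall parity check bit

# For a linear code, minimum distance = minimum nonzero codeword weight
dmin = min(sum(c) for c in code if any(c))
print(len(code), dmin)  # 16 codewords; the extended code has dmin = 4
```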
7.
a) Suppose α and β are elements of order n and m respectively, of a finite Abelian
group, and that GCD(n, m) = 1. Prove that the order of the element α ∗ β is nm.
(2 marks)
b) Prove that f(x) = x^3 + x^2 + 1 is irreducible over GF(3). (1 mark)
c) What are the multiplicative orders of the elements of GF(3)[x]/(x^3 + x^2 + 1)? (1 mark)
8.
a) Suppose α is a primitive element of GF(p^m), where p is prime. Show that all the
conjugates of α (with respect to GF(p)) are also primitive. (2 marks)
b) Find the generator polynomial for a binary double error correcting code of block
length n = 15. Use a primitive α and the primitive polynomial p(x) = x^4 + x + 1.
(2 marks)
c) Suppose the code defined in part (b) is used and the received senseword is equal to
x^9 + x^8 + x^7 + x^5 + x^3 + x^2 + x. Find the error polynomial. (1 mark)
9. A linear (n, k, d) block code is said to be maximum distance separable (MDS) if
d_min = n − k + 1.
a) Construct an MDS code over GF(q) with k = 1 and k = n − 1. (1 mark)
b) Suppose an MDS code is punctured in s ≤ n − k = d − 1 places (s check symbols
are removed) to yield an (n − s, k) code. Show that the punctured code is also
an MDS code. (2 marks)
c) Suppose C is an MDS code over GF(2). Show that no binary MDS code can exist for
2 ≤ k ≤ n − 2. (2 marks)
10.
a) Find the generator polynomial of a Reed-Solomon (RS) code with n = 10 and
k = 7. What is the minimum distance of this code? (2 marks)
b) Consider an (n, k) RS code over GF(q). Suppose each codeword (c_0, c_1, . . . , c_{n−1})
is extended by adding an overall parity check c_n given by
c_n = − Σ_{j=0}^{n−1} c_j
Show that the extended code is an (n + 1, k) MDS code. (3 marks)
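For part a), one convenient construction — an assumption on my part, since the exam does not fix the field — takes the RS code over GF(11) (natural because n = q − 1 = 10) with primitive element α = 2. Then d_min = n − k + 1 = 4 and g(x) = (x − α)(x − α²)(x − α³).

```python
# Sketch for 10 a), assuming GF(11) and primitive element alpha = 2.
q, alpha = 11, 2

def polymul(a, b):
    # Polynomial product over GF(q); coefficients are listed low order first.
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % q
    return out

g = [1]
for i in range(1, 4):                    # d_min - 1 = 3 roots: alpha, alpha^2, alpha^3
    root = pow(alpha, i, q)
    g = polymul(g, [(-root) % q, 1])     # multiply by (x - alpha^i)

print(g)  # g(x) = x^3 + 8x^2 + x + 2 over GF(11), low-order coefficient first
for i in range(1, 4):                    # verify that alpha^i are indeed roots
    assert sum(c * pow(pow(alpha, i, q), k, q) for k, c in enumerate(g)) % q == 0
```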
Information Theory and Coding
Pavan S. Nuggehalli CEDT, IISc, Bangalore
Key insight : Source output is a random process.
* This fact was not appreciated before Claude Shannon developed information theory in 1948.
Randomness
Why should source output be modeled as random? Suppose not. Then source output would be a known deterministic process, which the receiver could simply reproduce without any need to communicate. The number of bits required to describe source output depends on the probability distribution of the source, not the actual values of possible outputs.
Information Theory and Coding
Lecture 1
Pavan Nuggehalli
Probability Review
Origin in gambling. Laplace - combinatorial counting, circular discrete geometric probability - continuum. A. N. Kolmogorov, 1933, Berlin.
Notion of an experiment : Let Ω be the set of all possible outcomes. This set is called the sample set. Let A be a collection of subsets of Ω with some special properties. A is then a collection of events. (Ω, A) are jointly referred to as the sample space. A probability model/space is a triplet (Ω, A, P) where P satisfies the following properties :
1. Non-negativity : P(A) ≥ 0 ∀ A ∈ A
2. Additivity : If {A_n, n ≥ 1} are disjoint events in A, then P(∪_{n=1}^∞ A_n) = Σ_{n=1}^∞ P(A_n)
3. Bounded : P(Ω) = 1
* Example : Ω = {T, H}, A = {φ, {T}, {H}, {T, H}}, P({H}) = 0.5
When Ω is discrete (finite or countable), A = P(Ω), the power set of Ω. When Ω takes values from a continuum, A is a much smaller set. Need this for consistency. We are going to handwave out of that mess.
* Note that we have not said anything about how events are assigned probabilities. The theory can guide in assigning these probabilities, but is not overly concerned with how that is done. That is the engineering aspect.
There are many consequences :
1. P(A^c) = 1 − P(A) ⇒ P(φ) = 0
2. P(A ∪ B) = P(A) + P(B) − P(A ∩ B)
3. Inclusion-Exclusion principle : If A_1, A_2, . . . , A_n are events, then
P(∪_{j=1}^n A_j) = Σ_{j=1}^n P(A_j) − Σ_{1≤i<j≤n} P(A_i ∩ A_j) + Σ_{1≤i<j<k≤n} P(A_i ∩ A_j ∩ A_k) − . . . + (−1)^{n+1} P(A_1 ∩ A_2 ∩ . . . ∩ A_n)
Can be proved using induction.
4. Monotonicity : A ⊂ B ⇒ P(A) ≤ P(B)
5. Continuity : First define limits : A = ∪ A_n or A = ∩ A_n.
(a) If A_n ↑ A, then P(A_n) ↑ P(A)
(b) If A_n ↓ A, then P(A_n) ↓ P(A)
(Compare : x_n → x and f continuous ⇒ lim f(x_n) = f(lim x_n) = f(x); here lim P(A_n) = P(lim A_n).)
Proof : (a) A_1 ⊂ A_2 ⊂ . . . ⊂ A. Let B_1 = A_1, B_2 = A_2\A_1 = A_2 ∩ A_1^c, B_3 = A_3\A_2, . . .
{B_n} is a sequence of disjoint "annular rings" with ∪_{k=1}^n B_k = A_n and ∪_{n=1}^∞ B_n = ∪_{n=1}^∞ A_n = A. By additivity of P,
P(A) = P(∪_{n=1}^∞ B_n) = Σ_{n=1}^∞ P(B_n) = lim_{n→∞} Σ_{k=1}^n P(B_k) = lim_{n→∞} P(∪_{k=1}^n B_k) = lim_{n→∞} P(A_n)
(b) A_1 ⊃ A_2 ⊃ . . . ⊃ A. Since A ⊂ B ⇔ B^c ⊂ A^c, we have A_1^c ⊂ A_2^c ⊂ . . . ⊂ A^c and A_n^c ↑ A^c, so by (a)
P(A^c) = lim_{n→∞} P(A_n^c) ⇒ 1 − P(A) = lim_{n→∞} (1 − P(A_n)) ⇒ P(A) = lim_{n→∞} P(A_n)
Limits of sets : Let {A_n} be a sequence of events.
inf_{k≥n} A_k = ∩_{k=n}^∞ A_k        sup_{k≥n} A_k = ∪_{k=n}^∞ A_k
lim inf_{n→∞} A_n = ∪_{n=1}^∞ ∩_{k=n}^∞ A_k        lim sup_{n→∞} A_n = ∩_{n=1}^∞ ∪_{k=n}^∞ A_k
If lim inf A_n = lim sup A_n = A, then we say A_n converges to A. Some useful interpretations :
lim sup A_n = {W : Σ_n 1_{A_n}(W) = ∞} = {W : W ∈ A_{n_k}, k = 1, 2, . . . for some sequence n_k} = {A_n i.o.} (infinitely often)
lim inf A_n = {W : W ∈ A_n for all n except a finite number} = {W : Σ_n 1_{A_n^c}(W) < ∞} = {W : W ∈ A_n ∀ n ≥ n_0(W)}
Borel-Cantelli Lemma : Let {A_n} be a sequence of events. If Σ_{n=1}^∞ P(A_n) < ∞ then
P(A_n i.o.) = P(lim sup A_n) = 0
Proof : P(A_n i.o.) = P(lim_{n→∞} ∪_{j≥n} A_j) = lim_{n→∞} P(∪_{j≥n} A_j) ≤ lim_{n→∞} Σ_{j=n}^∞ P(A_j) = 0, since the tail of a convergent series goes to 0.
Converse to BC Lemma : If {A_n} are independent events such that Σ_n P(A_n) = ∞, then P(A_n i.o.) = 1.
Proof : P(A_n i.o.) = P(lim_{n→∞} ∪_{j≥n} A_j) = lim_{n→∞} P(∪_{j≥n} A_j) = lim_{n→∞} (1 − P(∩_{j≥n} A_j^c)) = 1 − lim_{n→∞} lim_{m→∞} Π_{k=n}^m (1 − P(A_k))
Since 1 − P(A_k) ≤ e^{−P(A_k)},
lim_{m→∞} Π_{k=n}^m (1 − P(A_k)) ≤ lim_{m→∞} Π_{k=n}^m e^{−P(A_k)} = lim_{m→∞} e^{−Σ_{k=n}^m P(A_k)} = e^{−∞} = 0
Random Variable : Consider a random experiment with the sample space (Ω, A). A random variable is a function that assigns a real number to each outcome in Ω :
X : Ω → R
In addition, for any interval (a, b) we want X^{-1}((a, b)) ∈ A. This is a technical condition we are stating merely for completeness; such a function is called Borel measurable.
The cumulative distribution function of a random variable X is defined as
F(x) = P(X ≤ x) = P({W ∈ Ω : X(W) ≤ x}), x ∈ R
A random variable is said to be discrete if it can take only a finite or countable/denumerable set of values. The probability mass function (PMF) gives the probability that X will take a particular value :
P_X(x) = P(X = x), and we have F(x) = Σ_{y≤x} P_X(y)
A random variable is said to be continuous if there exists a function f(x), called the probability density function, such that
F(x) = P(X ≤ x) = ∫_{−∞}^x f(y) dy
Differentiating, we get f(x) = (d/dx) F(x).
The distribution function F_X(x) satisfies the following properties :
1) F(x) ≥ 0
2) F(x) is right continuous : lim_{x_n ↓ x} F(x_n) = F(x)
3) F(−∞) = 0, F(+∞) = 1
Proof of 2) : Let A_n = {W : X ≤ x_n} with x_n ↓ x. Clearly A_1 ⊃ A_2 ⊃ . . . ⊃ A_n ⊃ A, where A = ∩_{k=1}^∞ A_k = {W : X(W) ≤ x}. We have lim_{n→∞} P(A_n) = lim_{x_n ↓ x} F(x_n), and by continuity of P, P(lim_{n→∞} A_n) = P(A) = F(x).
Independence : Suppose (Ω, A, P) is a probability space. Events A, B ∈ A are independent if
P(A ∩ B) = P(A).P(B)
In general, events A_1, A_2, . . . , A_n are said to be independent if
P(∩_{i∈I} A_i) = Π_{i∈I} P(A_i) for all finite I ⊂ {1, 2, . . . , n}
Note that pairwise independence does not imply independence as defined above. Let Ω = {1, 2, 3, 4}, each outcome equally probable, and let A_1 = {1, 2}, A_2 = {1, 3} and A_3 = {1, 4}. Then any two of these events are independent, but the three together are not.
A finite collection of random variables X_1, . . . , X_k is independent if
P(X_1 ≤ x_1, . . . , X_k ≤ x_k) = Π_{i=1}^k P(X_i ≤ x_i) ∀ x_i ∈ R, 1 ≤ i ≤ k
Independence is a key notion in probability. It is a technical condition; don't rely on intuition.
Conditional Probability : The probability of event A occurring, given that an event B has occurred, is given by
P(A|B) = P(A ∩ B)/P(B), P(B) > 0
If A and B are independent, then P(A|B) = P(A)P(B)/P(B) = P(A), as expected. In general,
P(∩_{i=1}^n A_i) = P(A_1) P(A_2|A_1) . . . P(A_n|A_1, . . . , A_{n−1})
Expected Value : The expectation, average or mean of a random variable is given by
EX = Σ_x x P(X = x)  (X discrete)        EX = ∫_{−∞}^∞ x f(x) dx  (X continuous)
In general EX = ∫ x dF(x). This has a well defined meaning which reduces to the above two special cases when X is discrete or continuous, but we will not explore this aspect any further.
We can also talk of the expected value of a function : Eh(X) = ∫_{−∞}^∞ h(x) dF(x).
Mean is the first moment. The nth moment is given by EX^n = ∫_{−∞}^∞ x^n dF(x), if it exists.
Var X = E(X − EX)^2 = EX^2 − (EX)^2. √(Var X) is called the std deviation.
Conditional Expectation : If X and Y are discrete, the conditional p.m.f. of X given Y is defined as
P(X = x|Y = y) = P(X = x, Y = y)/P(Y = y), P(Y = y) > 0
The conditional distribution of X given Y = y is defined as F(x|y) = P(X ≤ x|Y = y), and the conditional expectation is given by
E[X|Y = y] = Σ_x x P(X = x|Y = y)
If X and Y are continuous, we define the conditional pdf of X given Y as
f_{X|Y}(x|y) = f(x, y)/f(y)
The conditional cdf is given by F_{X|Y}(x|y) = ∫_{−∞}^x f_{X|Y}(x|y) dx, and the conditional mean by
E[X|Y = y] = ∫ x f_{X|Y}(x|y) dx
It is also possible to define conditional expectations of functions of random variables in a similar fashion.
Important property : If X and Y are rv,
EX = E[E[X|Y]] = ∫ E[X|Y = y] dF_Y(y)
Markov Inequality : Suppose X ≥ 0. Then for any a > 0,
P(X ≥ a) ≤ EX/a
Proof : EX = ∫_0^a x dF(x) + ∫_a^∞ x dF(x) ≥ ∫_a^∞ x dF(x) ≥ ∫_a^∞ a dF(x) = a.P(X ≥ a)
Chebyshev's inequality : P(|X − EX| ≥ ε) ≤ Var(X)/ε^2
More generally, taking Y = X − a : P(|X − a| ≥ ε) = P((X − a)^2 ≥ ε^2) ≤ E(X − a)^2/ε^2
The Weak Law of Large Numbers : Let X_1, X_2, . . . be a sequence of independent and identically distributed random variables with mean μ and finite variance σ^2. Let
S_n = (1/n) Σ_{k=1}^n X_k
Then P(|S_n − μ| ≥ δ) → 0 as n → ∞, ∀ δ > 0.
Proof : Take any δ > 0. By Chebyshev,
P(|S_n − μ| ≥ δ) ≤ Var S_n/δ^2 = (1/n^2).nσ^2/δ^2 = (1/n)(σ^2/δ^2)
so lim_{n→∞} P(|S_n − μ| ≥ δ) = 0. Since δ can be made arbitrarily small, we say S_n → μ in probability. The above result holds even when σ^2 is infinite, as long as the mean is finite. (Find out about how the SLLN works, and also about the WLLN.)
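The Chebyshev step in the WLLN proof above can be checked against an exact computation for a fair coin; the choices n = 100 and δ = 0.1 below are arbitrary.

```python
from math import comb

# Chebyshev bound used in the WLLN proof, for a fair coin:
# P(|S_n - 1/2| >= delta) <= Var(X)/(n delta^2) = 0.25/(n delta^2)
n, delta = 100, 0.1
bound = 0.25 / (n * delta ** 2)

# Exact binomial tail: P(|K/n - 1/2| >= delta) with K ~ Binomial(n, 1/2)
exact = sum(comb(n, k) for k in range(n + 1) if abs(k / n - 0.5) >= delta) / 2 ** n

print(exact, "<=", bound)  # the exact tail sits well below the Chebyshev bound
assert exact <= bound
```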
Information Theory and Coding
Lecture 2
Pavan Nuggehalli
Entropy
The entropy H(X) of a discrete random variable is given by
H(X) = Σ_{x∈X} P(x) log(1/P(x)) = − Σ_{x∈X} P(x) log P(x) = E log(1/P(X))
log(1/P(X)) is called the self-information of X. Entropy is the expected value of self-information.
Properties :
1. H(X) ≥ 0 : P(x) ≤ 1 ⇒ 1/P(x) ≥ 1 ⇒ log(1/P(x)) ≥ 0
2. Let H_a(X) = E log_a(1/P(X)). Then H_a(X) = (log_a 2).H(X)
3. Let |X| = M. Then H(X) ≤ log M :
H(X) − log M = Σ_{x∈X} P(x) log(1/P(x)) − Σ_{x∈X} P(x) log M
             = Σ_{x∈X} P(x) log(1/(M P(x)))
             = E log(1/(M P(X)))
(Jensen's)   ≤ log E(1/(M P(X))) = log Σ_{x∈X} P(x).(1/(M P(x))) = log 1 = 0
Therefore H(X) ≤ log M. When P(x) = 1/M ∀ x ∈ X,
H(X) = Σ_{x∈X} (1/M) log M = log M
4. H(X) = 0 ⇒ X is a constant.
Example : X = {0, 1}, P(X = 1) = P, P(X = 0) = 1 − P :
H(X) = P log(1/P) + (1 − P) log(1/(1 − P)) = H(P)  (the binary entropy function)
We can easily extend the definition of entropy to multiple random variables. For example, let Z = (X, Y) where X and Y are random variables.
Definition : The joint entropy H(X, Y) of random variables X and Y with joint distribution P(x, y) is given by
H(X, Y) = Σ_{x∈X} Σ_{y∈Y} P(x, y) log(1/P(x, y)) = E log(1/P(X, Y))
If X and Y are independent, then
H(X, Y) = Σ_x Σ_y P(x)P(y) log(1/(P(x).P(y)))
        = Σ_x Σ_y P(x)P(y) log(1/P(x)) + Σ_x Σ_y P(x)P(y) log(1/P(y))
        = Σ_y P(y) H(X) + Σ_x P(x) H(Y) = H(X) + H(Y)
In general, for independent X_1, . . . , X_n,
H(X_1, . . . , X_n) = Σ_{i=1}^n H(X_i)  ( = n H(X) when they are also identically distributed)
We showed earlier that for optimal coding, H(X) ≤ L̄ < H(X) + 1.
What happens if we encode blocks of symbols? Let's take n symbols at a time, X^n = (X_1, . . . , X_n), and let L̄_n be the optimal codeword length for the block. Then
H(X_1, . . . , X_n) ≤ L̄_n < H(X_1, . . . , X_n) + 1
H(X_1, . . . , X_n) = Σ H(X_i) = n H(X)
⇒ n H(X) ≤ L̄_n < n H(X) + 1
⇒ H(X) ≤ L̄_n/n < H(X) + 1/n
Therefore, by encoding a block of source symbols at a time, we can get as near to the entropy bound as required.
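The entropy definition and the bound H(X) ≤ log M above can be sketched directly; the example distribution below is an arbitrary choice, not from the notes.

```python
from math import log2

def entropy(p):
    # H(X) = sum_x P(x) log2(1/P(x)), skipping zero-probability symbols
    return sum(pi * log2(1.0 / pi) for pi in p if pi > 0)

# Property 3: H(X) <= log M, with equality for the uniform distribution
p = [0.5, 0.25, 0.125, 0.125]       # arbitrary example distribution, M = 4
M = len(p)
assert entropy(p) <= log2(M)
assert abs(entropy([1.0 / M] * M) - log2(M)) < 1e-12
print(entropy(p))  # 1.75 bits
```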
Information Theory and Coding
Lecture 3
Pavan Nuggehalli
Asymptotic Equipartition Property
The Asymptotic Equipartition Property is a manifestation of the weak law of large numbers. Given a discrete memoryless source, the AEP asserts that there exists a typical set, whose cumulative probability is almost 1. There are around 2^{nH(X)} strings in this typical set and each has probability around 2^{−nH(X)}. "Almost all events are almost equally surprising."
Theorem : Suppose X_1, X_2, . . . are iid with distribution p(x). Then
−(1/n) log P(X_1, . . . , X_n) → H(X) in probability
Proof : Let Y_k = log(1/P(X_k)). Then the Y_k are iid and EY_k = H(X). Let
S_n = (1/n) Σ_{k=1}^n Y_k = −(1/n) Σ_{k=1}^n log P(X_k) = −(1/n) log P(X_1, . . . , X_n)
By WLLN, S_n → H(X) in probability.
Definition : The typical set A_ε^n is the set of sequences x^n = (x_1, . . . , x_n) ∈ X^n such that
2^{−n(H(X)+ε)} ≤ P(x_1, . . . , x_n) ≤ 2^{−n(H(X)−ε)}
Theorem :
a) If (x_1, . . . , x_n) ∈ A_ε^n, then H(X) − ε ≤ −(1/n) log P(x_1, . . . , x_n) ≤ H(X) + ε
b) Pr(A_ε^n) > 1 − ε for large enough n
c) |A_ε^n| ≤ 2^{n(H(X)+ε)}
d) |A_ε^n| ≥ (1 − ε) 2^{n(H(X)−ε)} for large enough n
Remark :
1. Each string in A_ε^n is approximately equiprobable.
2. The typical set occurs with probability ≈ 1.
3. The size of the typical set is roughly 2^{nH(X)}.
Proof :
a) Follows from the definition.
b) AEP ⇒ −(1/n) log P(X_1, . . . , X_n) → H(X) in prob, so
Pr( |−(1/n) log P(X_1, . . . , X_n) − H(X)| < ε ) > 1 − δ for large enough n.
Take δ = ε; then Pr(A_ε^n) > 1 − ε.
c) 1 = Σ_{x^n ∈ X^n} P(x^n) ≥ Σ_{x^n ∈ A_ε^n} P(x^n) ≥ Σ_{x^n ∈ A_ε^n} 2^{−n(H(X)+ε)} = |A_ε^n|.2^{−n(H(X)+ε)}
⇒ |A_ε^n| ≤ 2^{n(H(X)+ε)}
d) Pr(A_ε^n) > 1 − ε
⇒ 1 − ε < Pr(A_ε^n) = Σ_{x^n ∈ A_ε^n} P(x^n) ≤ Σ_{x^n ∈ A_ε^n} 2^{−n(H(X)−ε)} = |A_ε^n|.2^{−n(H(X)−ε)}
⇒ |A_ε^n| ≥ (1 − ε).2^{n(H(X)−ε)}
# strings of length n = |X|^n ; # typical strings of length n ∼ 2^{nH(X)} ;
lim_{n→∞} 2^{nH(X)}/|X|^n = lim_{n→∞} 2^{−n(log|X| − H(X))} → 0 (unless X is uniform)
One of the consequences of the AEP is that it provides a method for optimal coding. This has more theoretical than practical significance. Divide all strings of length n into A_ε^n and its complement A_ε^{nc}.
We know that |A_ε^n| ≤ 2^{n(H(X)+ε)}. Each sequence in A_ε^n is represented by its index in the set; instead of transmitting the string, we can transmit its index :
#bits required = ⌈log(|A_ε^n|)⌉ < n(H(X) + ε) + 1
Prefix each such sequence by a 0, so that the decoder knows that what follows is an index number : #bits ≤ n(H(X) + ε) + 2.
For x^n ∈ A_ε^{nc}, send the string itself, prefixed by a 1 : #bits required ≤ n log|X| + 1 + 1.
Let l(x^n) be the length of the codeword corresponding to x^n. Assume n is large enough that Pr(A_ε^n) > 1 − ε. Then
E l(x^n) = Σ_{x^n} P(x^n) l(x^n)
         = Σ_{x^n ∈ A_ε^n} P(x^n) l(x^n) + Σ_{x^n ∈ A_ε^{nc}} P(x^n) l(x^n)
         ≤ Σ_{x^n ∈ A_ε^n} P(x^n).[n(H + ε) + 2] + Σ_{x^n ∈ A_ε^{nc}} P(x^n).(n log|X| + 2)
         = Pr(A_ε^n).(n(H + ε) + 2) + Pr(A_ε^{nc}).(n log|X| + 2)
         ≤ n(H + ε) + 2 + ε.n log|X|
         = n(H + ε_1), where ε_1 = ε + ε log|X| + 2/n
Theorem : For a DMS, there exists a UD code which satisfies
E (1/n) l(x^n) ≤ H(X) + ε for n sufficiently large.
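The convergence −(1/n) log P(X_1, . . . , X_n) → H(X) behind the AEP can be observed by simulation; p = 0.3 and the seed below are arbitrary choices.

```python
import random
from math import log2

# Check that -(1/n) log2 P(X_1..X_n) concentrates around H(X)
# for an iid Bernoulli(p) source.
random.seed(1)
p = 0.3
H = -p * log2(p) - (1 - p) * log2(1 - p)

def normalized_log_prob(n):
    x = [1 if random.random() < p else 0 for _ in range(n)]
    return -sum(log2(p) if xi else log2(1 - p) for xi in x) / n

for n in (100, 1000, 10000):
    print(n, round(normalized_log_prob(n), 4), "H =", round(H, 4))
```

As n grows, the printed per-symbol log-probability approaches H(X) ≈ 0.881.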
Information Theory and Coding
Lecture 4
Pavan Nuggehalli
Data Compression
The conditional entropy of a random variable Y with respect to a random variable X is defined as
H(Y|X) = Σ_{x∈X} P(x) H(Y|X = x)
       = Σ_{x∈X} P(x) Σ_{y∈Y} P(y|x) log(1/P(y|x))
       = Σ_{x∈X} Σ_{y∈Y} P(x, y) log(1/P(y|x))
       = E log(1/P(Y|X))
In general X and Y may be random vectors, X = (X_1, . . . , X_n), Y = (Y_1, . . . , Y_m), and then H(X|Y) = E log(1/P(X|Y)).
Theorem (Chain Rule) : H(X, Y) = H(X) + H(Y|X)
Proof :
H(X, Y) = − Σ_{x∈X} Σ_{y∈Y} P(x, y) log P(x, y)
        = − Σ_x Σ_y P(x, y) log P(x).P(y|x)
        = − Σ_x Σ_y P(x, y) log P(x) − Σ_x Σ_y P(x, y) log P(y|x)
        = H(X) + H(Y|X)
Corollary :
1) H(X, Y|Z) = H(X|Z) + H(Y|X, Z)
2) H(X_1, . . . , X_n) = Σ_{k=1}^n H(X_k|X_{k−1}, . . . , X_1), e.g.
H(X_1, X_2) = H(X_1) + H(X_2|X_1)
H(X_1, X_2, X_3) = H(X_1) + H(X_2, X_3|X_1) = H(X_1) + H(X_2|X_1) + H(X_3|X_1, X_2)
3) H(Y|X) ≤ H(Y)
Stationary Process : A stochastic process is said to be stationary if the joint distribution of any subset of the sequence of random variables is invariant with respect to shifts in the time index :
Pr(X_1 = x_1, . . . , X_n = x_n) = Pr(X_{1+t} = x_1, . . . , X_{n+t} = x_n) ∀ t ∈ Z and all x_1, . . . , x_n ∈ X
Remark : For a stationary process, H(X_n|X_{n−1}) = H(X_2|X_1).
Entropy Rate : The entropy rate of a stationary stochastic process is given by
H = lim_{n→∞} (1/n) H(X_1, . . . , X_n)
Theorem : For a stationary stochastic process, H exists and further satisfies
H = lim_{n→∞} (1/n) H(X_1, . . . , X_n) = lim_{n→∞} H(X_n|X_{n−1}, . . . , X_1)
Proof : We will first show that lim_{n→∞} H(X_n|X_{n−1}, . . . , X_1) exists, and then show that lim_{n→∞} (1/n) H(X_1, . . . , X_n) equals it.
H(X_{n+1}|X_1, . . . , X_n) ≤ H(X_{n+1}|X_2, . . . , X_n) = H(X_n|X_1, . . . , X_{n−1}) (by stationarity)
⇒ H(X_n|X_1, . . . , X_{n−1}) is monotonically decreasing with n. Also
0 ≤ H(X_n|X_1, . . . , X_{n−1}) ≤ H(X_n) ≤ log|X|
Recall : lim_{n→∞} x_n = x means that for any ε > 0 there exists a number N_ε such that |x_n − x| < ε ∀ n ≥ N_ε.
Theorem : A bounded monotonically decreasing sequence x_n has a limit.
So lim_{n→∞} H(X_n|X_{n−1}, . . . , X_1) exists.
Cesaro mean Theorem : If a_n → a and b_n = (1/n) Σ_{k=1}^n a_k, then b_n → a.
Proof : We know a_n → a : ∀ ε > 0, ∃ N_{ε/2} s.t. |a_n − a| ≤ ε/2 ∀ n ≥ N_{ε/2}. Then
|b_n − a| = |(1/n) Σ_{k=1}^n (a_k − a)| ≤ (1/n) Σ_{k=1}^n |a_k − a|
          ≤ (1/n) Σ_{k=1}^{N_{ε/2}} |a_k − a| + ((n − N_{ε/2})/n).(ε/2)
Choose n large enough that the first term is less than ε/2; then |b_n − a| ≤ ε/2 + ε/2 = ε ∀ n ≥ N_ε*.
Now, by the chain rule,
(1/n) H(X_1, . . . , X_n) = (1/n) Σ_{k=1}^n H(X_k|X_{k−1}, . . . , X_1)
and each term converges to lim_{n→∞} H(X_n|X_{n−1}, . . . , X_1), so by the Cesaro mean theorem
lim_{n→∞} (1/n) H(X_1, . . . , X_n) = lim_{n→∞} H(X_n|X_{n−1}, . . . , X_1)
Why do we care about the entropy rate H? Because the AEP holds for all stationary ergodic processes :
−(1/n) log P(X_1, . . . , X_n) → H in prob
This can be used to show that the entropy rate is the minimum number of bits required for uniquely decodable lossless compression.
Universal data/source coding/compression is a class of source coding algorithms which operate without knowledge of source statistics. In this course, we will consider Lempel-Ziv compression algorithms, which are very popular (Winzip and gzip use versions of these algorithms). Lempel and Ziv developed two versions : LZ78, which uses an adaptive dictionary, and LZ77, which employs a sliding window. We will first describe the algorithm and then show why it is so good.
Assume you are given a sequence of symbols x_1, x_2, . . . to encode. The algorithm maintains a window of the W most recently encoded source symbols. The window size is fairly large ≃ 2^10 − 2^17 and a power of 2. Complexity and performance increase with W.
a) Encode the first W letters without compression. If |X| = M, this will require ⌈W log M⌉ bits. This gets amortized over time over all symbols, so we are not worried about this overhead.
b) Set the pointer P to W.
c) Find the largest n such that x_{P+1}^{P+n} = x_{P−u+1}^{P−u+n} for some u, 0 ≤ u ≤ W − 1. Set n = 1 if no match exists for n ≥ 1.
d) Encode n into a prefix-free codeword. The particular code here is called the unary-binary code : n is encoded into the binary representation of n, preceded by ⌊log n⌋ zeros.
1 : ⌊log 1⌋ = 0 → 1
2 : ⌊log 2⌋ = 1 → 010
3 : ⌊log 3⌋ = 1 → 011
4 : ⌊log 4⌋ = 2 → 00100
e) If n > 1, encode u using ⌈log W⌉ bits. If n = 1, encode x_{P+1} using ⌈log M⌉ bits.
f) Set P = P + n, update the window and iterate.
Let R(N) be the expected number of bits required to code N symbols. Then
lim_{W→∞} lim_{N→∞} R(N)/N = H(X)
where H is the entropy rate of the source.
"Baby LZ-algorithm" : Assume you have been given a data sequence of W + N symbols, where W = 2^{n*(H+2ε)}, n* divides N, and |X| = M is a power of 2. (This version is suboptimal compared to LZ, which gets more compression : in the full algorithm n needs only ≈ log n bits.)
Compression Algorithm : If there is a match of n* symbols in the window, send the index in the window using log W (= n*(H + 2ε)) bits; else send the n* symbols uncompressed using n* log M bits. (Note : no need to encode n.)
Let Y_k = # bits generated by the k-th segment, so that #bits sent = Σ_{k=1}^{N/n*} Y_k, with
Y_k = log W if match, n* log M if no match
E(#bits sent)/N = (1/n*)[ P(match).log W + P(no match).n* log M ]
Claim : P(no match) → 0 as n* → ∞, so
lim_{n*→∞} E(#bits sent)/N = log W/n* = n*(H + 2ε)/n* = H + 2ε
Let S be the minimum number of backward shifts required to find a match for n* symbols.
Fact : E(S|X_{P+1}, . . . , X_{P+n*}) = 1/P(X_{P+1}, . . . , X_{P+n*}) for a stationary ergodic source. (This result is due to Kac.)
By the Markov inequality,
P(no match|X^{n*}) = P(S > W|X^{n*}) ≤ E(S|X^{n*})/W = 1/(W.P(X^{n*}))
P(no match) = P(S > W) = Σ_{x^{n*} ∈ A_ε^{n*}} P(x^{n*}) P(S > W|x^{n*}) + Σ_{x^{n*} ∉ A_ε^{n*}} P(x^{n*}) P(S > W|x^{n*})
For x^{n*} ∈ A_ε^{n*}, P(x^{n*}) ≥ 2^{−n*(H+ε)}, so
Σ_{x^{n*} ∈ A_ε^{n*}} P(x^{n*}).(1/(W.P(x^{n*}))) ≤ |A_ε^{n*}|/W ≤ 2^{n*(H+ε)}/2^{n*(H+2ε)} = 2^{−n*ε}
Therefore P(no match) ≤ 2^{−n*ε} + P(A_ε^{n*c}) → 0 as n* → ∞.
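The sliding-window algorithm in steps a)-f) can be sketched for a binary alphabet (M = 2). This toy encoder follows the description above but makes no attempt at efficiency; the helper names are mine, not the notes'.

```python
from math import log2

def unary_binary(n):
    # Unary-binary code: floor(log2 n) zeros, then n in binary (1->1, 2->010, 3->011).
    b = bin(n)[2:]
    return "0" * (len(b) - 1) + b

def lz_encode(x, W=8):
    # Toy sliding-window (LZ77-style) encoder for a binary source; W is a power of 2.
    out = "".join(str(s) for s in x[:W])      # a) first W symbols uncompressed
    P = W                                     # b) set the pointer
    while P < len(x):
        best_n, best_u = 1, None
        for u in range(W):                    # c) longest match starting u + 1 back
            n = 0
            while P + n < len(x) and x[P + n] == x[P - u - 1 + n]:
                n += 1
            if n > best_n:
                best_n, best_u = n, u
        out += unary_binary(best_n)           # d) match length, prefix-free
        if best_u is not None:                # e) window index, or raw symbol if n = 1
            out += format(best_u, "0{}b".format(int(log2(W))))
        else:
            out += str(x[P])
        P += best_n                           # f) advance and iterate
    return out

enc = lz_encode([0, 1] * 32, W=8)
print(len(enc), "bits for 64 source bits")    # highly repetitive input compresses well
```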
Information Theory and Coding
Lecture 5
Pavan Nuggehalli
Channel Coding
Source coding deals with representing information as concisely as possible. Channel coding is concerned with the "reliable" "transfer" of information. The purpose of channel coding is to add redundancy in a controlled manner to "manage" errors. One simple approach is that of repetition coding, wherein you repeat the same symbol for a fixed (usually odd) number of times. This turns out to be very wasteful in terms of bandwidth and power. We shall see that sophisticated linear block codes can do considerably better than repetition.
Channel coding is employed in almost all communication and storage applications. Examples include phone modems, satellite communications, memory modules, magnetic disks and tapes, CDs, DVDs etc.
Digital Fountain : Tornado codes - reliable data transmission over the Internet; reliable DSM VLSI circuits.
There are two modes of error control :
Error detection → Ethernet CRC
Error correction → CD
Errors can be of various types : random or bursty.
There are two basic kinds of codes : Block codes and Trellis codes.
This course : Linear Block Codes. Good LBCs have been devised using the powerful tools of modern algebra. This algebraic framework also aids the design of encoders and decoders. We will spend some time learning just enough algebra to get a somewhat deep appreciation of modern coding theory. In this introductory lecture, I want to provide a bird's eye view of channel coding.
Elementary block coding concepts
Definition : An alphabet is a discrete set of symbols.
Examples : Binary alphabet {0, 1}, ternary alphabet {0, 1, 2}, letters {a, . . . , z}.
Eventually these symbols will be mapped by the modulator into analog waveforms and transmitted. We will not worry about that part now.
In an (n, k) block code, the incoming data source is divided into blocks of k symbols. Each block of k symbols, called a dataword, is used to generate a block of n symbols, called a codeword, with (n − k) redundant symbols.
Definition : A block code G of blocklength n over an alphabet X is a non-empty set of n-tuples of symbols from X. These n-tuples are called codewords. The rate of a code with M codewords is given by
R = (1/n) log_q M
Let us assume |X| = q. Codewords are generated by encoding messages of k symbols :
# messages = q^k = |G|, rate of code = k/n.
Example : (3, 1) binary repetition code : 0 → 000, 1 → 111 ; n = 3, k = 1.
Example : Single Parity Check (SPC) code : dataword 010, codeword 0101 ; k = 3, n = 4, rate = 3/4. This code can detect single errors. Ex : All odd numbers of errors can be detected; all even numbers of errors go undetected. Ex : Suppose errors occur with prob P. What is the probability that error detection fails?
Hamming distance : The Hamming distance d(x, y) between two q-ary sequences x and y is the number of places in which x and y differ.
Example : x = 10111, y = 01011, d(x, y) = 3.
Intuitively, we want to choose a set of codewords for which the Hamming distance between each other is large, as this will make it harder to confuse a corrupted codeword with some other codeword.
Hamming distance satisfies the conditions for a metric, namely
1. d(x, y) ≥ 0, with equality if x = y
2. d(x, y) = d(y, x) (symmetry)
3. d(x, y) ≤ d(x, z) + d(z, y) (triangle inequality)
The minimum Hamming distance of a block code is the distance between the two closest codewords :
d_min = min { d(c_i, c_j) : c_i, c_j ∈ G, i ≠ j }
An (n, k) block code with d_min = d is often referred to as an (n, k, d) block code.
Some simple consequences of d_min :
1. An (n, k, d) block code can always detect up to d − 1 errors.
Suppose codeword c was transmitted and r was received (the received word is often called a senseword). The error weight = # symbols changed/corrupted = d(c, r). If d(c, r) < d, then r cannot be a codeword; otherwise c and r would be two codewords whose distance is less than the minimum distance.
Note : (a) This is the guaranteed error detecting ability. In practise, errors can be detected even if the error weight exceeds d − 1, e.g. SPC detects all odd patterns of errors. (b) Error detection ≠ error correction : we can have d(c_1, r) < d and d(c_2, r) < d simultaneously.
2. An (n, k, d) block code can correct up to t = ⌊(d − 1)/2⌋ errors.
Proof : Suppose we decode using nearest neighbour decoding, i.e. given a senseword r, we choose the transmitted codeword to be ĉ = arg min_{c ∈ G} d(r, c). A Hamming sphere of radius r centered at an n-tuple c is the set of all n-tuples c′ satisfying d(c, c′) ≤ r. With t = ⌊(d_min − 1)/2⌋ we have d_min ≥ 2t + 1, so Hamming spheres of radius t around the codewords are non-intersecting, and when ≤ t errors occur the decoder can unambiguously decide which codeword was transmitted.
Singleton Bound : For an (n, k) block code, n − k ≥ d_min − 1.
Proof : Remove the first d_min − 1 symbols of each codeword in G, and denote the set of modified codewords by Ĝ. Suppose x, y ∈ G with x ≠ y; since d(x, y) ≥ d_min, the modified codewords still differ in at least one place, so x ≠ y ⇒ x̂ ≠ ŷ. Therefore
q^k = |G| = |Ĝ| ≤ q^{n − d_min + 1}
⇒ k ≤ n − d_min + 1, i.e. n − k ≥ d_min − 1.
We want to find codes with good distance structure. The tools of algebra have been used to discover many good codes. The primary algebraic structures of interest are Galois fields. This structure is exploited not only in discovering good codes but also in designing efficient encoders and decoders.
Information Theory and Coding  Lecture 6  Pavan Nuggehalli

Algebra

We will begin our discussion of algebraic coding theory by defining some important algebraic structures : Group, Ring, Field and Vector space.

Group : A group is an algebraic structure (G, ∗) consisting of a set G and a binary operator ∗ satisfying the following four axioms :
1. Closure : ∀ a, b ∈ G, a ∗ b ∈ G
2. Associative law : (a ∗ b) ∗ c = a ∗ (b ∗ c) ∀ a, b, c ∈ G
3. Identity : ∃ e ∈ G such that e ∗ a = a ∗ e = a ∀ a ∈ G
4. Inverse : ∀ a ∈ G, ∃ b ∈ G such that b ∗ a = a ∗ b = e

A group with a finite number of elements is called a finite group. If a ∗ b = b ∗ a ∀ a, b ∈ G, then G is called a commutative or abelian group. For abelian groups, ∗ is usually denoted by + and called addition, and the identity element is called 0.

Examples : (Z, +), (Z/n, +), (R\{0}, ·).
Exercise : Prove that (Z, −) is not a group. (Subtraction is not associative.)

An example of a non-commutative group : permutation groups. Let X = {1, 2, . . . , n}. A 1-1 map of X onto itself is called a permutation. The symmetric group Sn is made up of the set of permutations of X. The group operation is composition of permutations : b ∗ c is the permutation obtained by first applying c and then applying b. For example, for n = 3, S3 = {123, 132, 213, 231, 312, 321}, where 132 denotes the permutation 1 → 1, 2 → 3, 3 → 2. This group is non-commutative : 132 ∗ 213 = 312 but 213 ∗ 132 = 231.

A finite group can be represented by an operation table, e.g. Z/2 = {0, 1} :

  + | 0 1
  --+----
  0 | 0 1
  1 | 1 0

Elementary group properties
1. The identity element is unique. If e1 and e2 are both identity elements, then e1 = e1 ∗ e2 = e2.
2. Every element has a unique inverse. Suppose b and b′ are two inverses of a. Then b = b ∗ e = b ∗ (a ∗ b′) = (b ∗ a) ∗ b′ = e ∗ b′ = b′.
3. Cancellation :
   a ∗ b = a ∗ c ⇒ b = c  (premultiply : a⁻¹ ∗ a ∗ b = a⁻¹ ∗ a ∗ c ⇒ b = c)
   b ∗ a = c ∗ a ⇒ b = c
   Consequently there are no duplicate elements in any row or column of the operation table.

Exercise : Denote the inverse of x ∈ G by x′. Show that (a ∗ b)′ = b′ ∗ a′.

Definition : The order of a finite group is the number of elements in the group.

Subgroups : A subgroup of G is a subset H of G that is itself a group under the operation of G :
1) Closure : a, b ∈ H ⇒ a ∗ b ∈ H
2) Associativity : (a ∗ b) ∗ c = a ∗ (b ∗ c); property (2) always holds because G is a group.
3) Identity : ∃ e′ ∈ H such that a ∗ e′ = e′ ∗ a = a ∀ a ∈ H. Note that e′ = e, the identity of G.
4) Inverse : ∀ a ∈ H, ∃ b ∈ H such that a ∗ b = b ∗ a = e

Property (3) follows from (1) and (4) provided H is nonempty.
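As a sanity check of these axioms, here is a brute-force Python verifier for a finite set with a binary operation. Applying it to (Z/5, +) and to subtraction mod 5 mirrors the exercise above; the choice n = 5 is illustrative:

```python
# Brute-force check of the four group axioms over a finite set.

def is_group(elements, op):
    # Closure
    if any(op(a, b) not in elements for a in elements for b in elements):
        return False
    # Associativity
    if any(op(op(a, b), c) != op(a, op(b, c))
           for a in elements for b in elements for c in elements):
        return False
    # Identity (the check also confirms uniqueness)
    ids = [e for e in elements
           if all(op(e, a) == a == op(a, e) for a in elements)]
    if len(ids) != 1:
        return False
    e = ids[0]
    # Inverse
    return all(any(op(a, b) == e == op(b, a) for b in elements)
               for a in elements)

Z5 = set(range(5))
add_mod5 = lambda a, b: (a + b) % 5
```

Subtraction mod 5 is closed but fails associativity, so `is_group` rejects it, just as (Z, −) fails to be a group.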
H nonempty ⇒ ∃ a ∈ H; by (4), a⁻¹ ∈ H; by (1), a ∗ a⁻¹ = e ∈ H.

Examples : {e} is a subgroup, and so is G itself. To check whether a nonempty subset H is a subgroup we need only check closure and inverse (properties 1 and 4). More compactly : H is a subgroup iff a ∗ b⁻¹ ∈ H ∀ a, b ∈ H. For a finite group, it is enough to show closure.

Cyclic subgroups : Suppose G is finite and h ∈ G. Consider the set H = {h, h ∗ h, h ∗ h ∗ h, . . .}, which we denote compactly as {h, h², h³, . . .}; it consists of all powers of h. Since the set H is finite, closure implies that an inverse exists : if hⁱ = hʲ with i > j, then hⁱ⁻ʲ = e, so ∃ n such that hⁿ = e and

  H = {h, h², . . . , hⁿ = e}

is called the cyclic group generated by h. Let the inverse of h be h′. Then (hᵏ)′ = (h′)ᵏ. Why? h² ∗ (h′)² = h ∗ h ∗ h′ ∗ h′ = e, and similarly (h′)² ∗ h² = e; the same argument works for any k. In general, (hᵏ)′ = hⁿ⁻ᵏ and h′ = hⁿ⁻¹.

The order of an element h is the order of the subgroup generated by h.

Ex : Given a finite subset H of a group G which satisfies the closure property, prove that H is a subgroup.

Cosets : A left coset of a subgroup H is the set denoted by g ∗ H = {g ∗ h : h ∈ H}. A right coset is H ∗ g = {h ∗ g : h ∈ H}.

Ex : g ∗ H is a subgroup iff g ∈ H.

The coset decomposition of a finite group G with respect to H is an array constructed as follows :
a) Write down the first row, consisting of the elements of H.
b) Choose an element of G not in the first row; call it g2. The second row consists of the elements of the coset g2 ∗ H.
c) Continue as above, each time choosing an element of G which has not appeared in the previous rows. Because G is finite, the process has to terminate. Stop when there is no unused element left.

  h1 = e   h2        . . .  hn
  g2       g2 ∗ h2   . . .  g2 ∗ hn
  .        .                .
  gm       gm ∗ h2   . . .  gm ∗ hn

g2, g3, . . . , gm are called coset leaders. Note that the coset decomposition is always rectangular.

Theorem : Every element of G appears once and only once in a coset decomposition of G.

Proof : First show that an element cannot appear twice in the same row, and then show that an element cannot appear in two different rows.
Same row : gk ∗ h1 = gk ∗ h2 ⇒ h1 = h2, a contradiction.
Different rows : gk ∗ h1 = gl ∗ h2 with k > l ⇒ gk = gl ∗ h2 ∗ h1⁻¹ ⇒ gk ∈ gl ∗ H, a contradiction, since gk was chosen from the elements unused in earlier rows.

Therefore every element of G occurs exactly once in this rectangular array, so |G| = |H| · (number of cosets of G with respect to H).

Lagrange's Theorem : The order of any subgroup of a finite group divides the order of the group.

Corr : Prime order groups have no proper subgroups.
Corr : The order of an element divides the order of the group.
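The coset machinery can be illustrated numerically. The sketch below uses an assumed example, G = Z/6 with subgroup H = {0, 3}, and checks that the cosets partition G, as the theorem and Lagrange's theorem require:

```python
# Coset decomposition of (Z/6, +) with respect to H = {0, 3}.
# The choice of G and H is illustrative, not from the notes.

G = set(range(6))
H = frozenset({0, 3})

# Each coset g + H; using a set removes duplicates, leaving the
# distinct rows of the coset decomposition.
cosets = {frozenset((g + h) % 6 for h in H) for g in G}
```

The three cosets {0,3}, {1,4}, {2,5} are disjoint and cover G, and |H| = 2 divides |G| = 6.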
Rings : A ring is an algebraic structure consisting of a set R and two binary operations, + (addition) and · (multiplication), satisfying the following axioms :
1. (R, +) is an abelian group; the additive identity is denoted 0.
2. Closure : a · b ∈ R ∀ a, b ∈ R
3. Associative law : a · (b · c) = (a · b) · c
4. Distributive laws : (a + b) · c = a · c + b · c and c · (a + b) = c · a + c · b

Multiplication need not be commutative; a ring is commutative if multiplication is commutative. We write ab for a · b.

Suppose there is an element 1 ∈ R such that 1 · a = a · 1 = a ∀ a ∈ R. Then R is a ring with identity, and 1 is the multiplicative identity.

Examples : (Z, +, ·), (Z/n, +, ·) and (R, +, ·) are commutative rings with identity. (R^{n×n}, +, ·) is a noncommutative ring with identity. R[x], the set of all polynomials with real coefficients, R[x] = {a0 + a1 x + . . . + an x^n : n ≥ 0, ak ∈ R}, is a commutative ring with identity under polynomial addition and multiplication. (2Z, +, ·) is a ring without identity.

Some simple consequences :
1. a · 0 = 0 = 0 · a :
   a · 0 = a · (0 + 0) = a · 0 + a · 0, therefore 0 = a · 0.
2. a · (−b) = (−a) · b = −(a · b) :
   0 = a · 0 = a · (b − b) = a · b + a · (−b), therefore a · (−b) = −(a · b). Similarly (−a) · b = −(a · b).

Theorem : In a ring with identity,
i) the identity is unique;
ii) if an element a has a multiplicative inverse, then the inverse is unique.
Proof : Same as the proof for groups.

An element of R with a multiplicative inverse is called a unit :
  (Z, +, ·)          units : ±1
  (R, +, ·)          units : R\{0}
  (R^{n×n}, +, ·)    units : nonsingular or invertible matrices
  R[x]               units : polynomials of degree 0 except the zero polynomial

If ab = ac and a ≠ 0, is b = c? Not always. Consider Z/4 = {0, 1, 2, 3} with a = b = 2 and c = 0 : then ab = 4 = 0 = ac but b ≠ c. The obstruction is zero divisors, nonzero elements whose product is zero. A commutative ring with identity and no zero divisors is called an integral domain. The cancellation law holds in an integral domain.

Fields : A field is an algebraic structure consisting of a set F and the binary operators + and · satisfying
a) (F, +) is an abelian group
b) (F − {0}, ·) is an abelian group
c) Distributive law : a · (b + c) = ab + ac

Conventions :
  addition a + b, additive identity 0, negative −a, subtraction a − b = a + (−b)
  multiplication ab, multiplicative identity 1, inverse a⁻¹, division ab⁻¹

Examples : (R, +, ·), (C, +, ·), (Q, +, ·).

A finite field with q elements, if it exists, is called a finite field or Galois field and is denoted GF (q). We will see later that q can only be a power of a prime number.
A finite field can be described by its operation tables.

GF (2) :
  + | 0 1      · | 0 1
  --+----      --+----
  0 | 0 1      0 | 0 0
  1 | 1 0      1 | 0 1

GF (3) :
  + | 0 1 2    · | 0 1 2
  --+------    --+------
  0 | 0 1 2    0 | 0 0 0
  1 | 1 2 0    1 | 0 1 2
  2 | 2 0 1    2 | 0 2 1

GF (4) :
  + | 0 1 2 3    · | 0 1 2 3
  --+--------    --+--------
  0 | 0 1 2 3    0 | 0 0 0 0
  1 | 1 0 3 2    1 | 0 1 2 3
  2 | 2 3 0 1    2 | 0 2 3 1
  3 | 3 2 1 0    3 | 0 3 1 2

Note that multiplication in GF (4) is not modulo 4. We will see later how finite fields are constructed and study their properties in detail.

The cancellation law holds for multiplication in a field.

Theorem : In any field, ab = ac and a ≠ 0 ⇒ b = c.
Proof : Multiply by a⁻¹. (This connects to the notion of integral domain : no zero divisors ⇔ cancellation law.)

Vector Space : Let F be a field. The elements of F are called scalars. A vector space over a field F is an algebraic structure consisting of a set V with a binary operator + on V and a scalar-vector product satisfying :
1. (V, +) is an abelian group
2. Unitary law : 1 · v = v ∀ v ∈ V
3. Associative law : (c1 c2) · v = c1 · (c2 · v)
4. Distributive laws : c · (v1 + v2) = c · v1 + c · v2 and (c1 + c2) · v = c1 · v + c2 · v
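The GF (4) tables above can be generated mechanically by representing field elements as polynomials over GF (2) and reducing products modulo the prime polynomial x² + x + 1. The encoding below (0 → (0,0), 1 → (0,1), 2 → (1,0), 3 → (1,1)) is an illustrative sketch of this construction, which is treated properly in Lecture 8:

```python
# GF(4) elements as pairs (a1, a0) representing a1*x + a0 over GF(2);
# products are reduced modulo the prime polynomial x^2 + x + 1.

def gf4_add(p, q):
    # coefficient-wise addition over GF(2) is XOR
    return (p[0] ^ q[0], p[1] ^ q[1])

def gf4_mul(p, q):
    a1, a0 = p
    b1, b0 = q
    # (a1 x + a0)(b1 x + b0) = a1 b1 x^2 + (a1 b0 + a0 b1) x + a0 b0
    c2 = a1 & b1
    c1 = (a1 & b0) ^ (a0 & b1)
    c0 = a0 & b0
    # reduce using x^2 = x + 1  (mod x^2 + x + 1)
    return (c1 ^ c2, c0 ^ c2)
```

With the labels above, gf4_mul reproduces the GF (4) multiplication table, e.g. 2 · 2 = 3 and 2 · 3 = 1, confirming that multiplication is not modulo 4.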
A linear block code is a vector subspace of GF (q)^n.

Suppose v1, . . . , vm are vectors in GF (q)^n. The span of the vectors {v1, . . . , vm} is the set of all linear combinations of these vectors :

  S = {a1 v1 + a2 v2 + . . . + am vm : a1, . . . , am ∈ GF (q)} = LS(v1, . . . , vm)

where LS denotes the linear span.

A set of vectors {v1, . . . , vm} is said to be linearly independent (LI) if

  a1 v1 + . . . + am vm = 0 ⇒ a1 = a2 = . . . = am = 0

i.e. no vector in the set is a linear combination of the other vectors.

A basis for a vector space is a linearly independent set that spans the vector space, so V = LS(basis vectors). What is a basis for GF (q)^n? Take

  e1 = (1, 0, . . . , 0)
  e2 = (0, 1, . . . , 0)
  .
  en = (0, 0, . . . , 1)

Then {ek : 1 ≤ k ≤ n} is a basis. To prove this, we need to show that the ek are LI and span GF (q)^n.
Span : v = (v1, . . . , vn) = Σ_{k=1}^{n} vk ek.
Independence : consider Σ ak ek = (a1, . . . , an); this is zero only if every ak = 0.
{ek} is called the standard basis.

The dimension of a vector space is the number of vectors in its basis; the dimension of GF (q)^n is n.

Suppose {b1, . . . , bm} is a basis for a vector space V ⊆ GF (q)^n. Then any v ∈ V can be written as

  v = v1 b1 + v2 b2 + . . . + vm bm,  v1, . . . , vm ∈ GF (q)

i.e. v = (v1, v2, . . . , vm) B = a · B, where a ∈ GF (q)^m and B is the m × n matrix whose rows are the basis vectors b1, . . . , bm. Is it possible to have two vectors a and a′ such that aB = a′B?

Theorem : Every vector can be expressed as a linear combination of basis vectors in exactly one way.

Proof : Suppose not. Then

  a · B = a′ · B ⇒ (a − a′) · B = 0
  ⇒ (a1 − a′1) b1 + (a2 − a′2) b2 + . . . + (am − a′m) bm = 0

But the bk are LI ⇒ ak = a′k for 1 ≤ k ≤ m ⇒ a = a′.
Corollary : If {b1, . . . , bm} is a basis for V, then V consists of q^m vectors.
Corollary : Every basis for V has exactly m vectors.
Corollary : Every basis for GF (q)^n has n vectors.
More generally, in any finite-dimensional vector space of dimension k, any set of k LI vectors forms a basis.

Review : Vector space, basis {b1, . . . , bm}; with B the matrix whose rows are the basis vectors, every v ∈ V is v = aB for exactly one a ∈ GF (q)^m, and |V| = q^m.

Subspace : A vector subspace of a vector space V is a subset W that is itself a vector space. All we need to check is that W is closed under vector addition and scalar multiplication.

The inner product of two n-tuples over GF (q) is

  (a1, . . . , an) · (b1, . . . , bn) = a1 b1 + . . . + an bn = Σ ak bk = a b^T

Two vectors are orthogonal if their inner product is zero. The orthogonal complement of a subspace W is the set W⊥ of n-tuples in GF (q)^n which are orthogonal to every vector in W :

  v ∈ W⊥ iff v · w = 0 ∀ w ∈ W

Example : In GF (2)², W = {00, 11} has W⊥ = {00, 11}; dim W = dim W⊥ = 1.
Example : In GF (3)², W = {00, 01, 02} (basis 01) has W⊥ = {00, 10, 20} (basis 10); dim W = dim W⊥ = 1.

Lemma : W⊥ is a subspace.

Theorem : If dim W = k, then dim W⊥ = n − k.

Corollary : W = (W⊥)⊥.
Proof : Firstly W ⊆ (W⊥)⊥. Also dim W = k ⇒ dim W⊥ = n − k ⇒ dim (W⊥)⊥ = k = dim W, hence W = (W⊥)⊥.

Let {g1, . . . , gk} be a basis for W and {h1, . . . , h_{n−k}} be a basis for W⊥. Let G be the k × n matrix with rows g1, . . . , gk and H the (n − k) × n matrix with rows h1, . . . , h_{n−k}. Since gi hj^T = 0 for every i, j,

  G H^T = O_{k × (n−k)}

Theorem : A vector v ∈ W iff v H^T = 0.

Proof :
(⇒) v ∈ W and hj ∈ W⊥ ⇒ v hj^T = 0 for 1 ≤ j ≤ n − k, i.e. v [h1^T h2^T . . . h_{n−k}^T] = 0, i.e. v H^T = 0.
(⇐) Suppose v H^T = 0, i.e. v hj^T = 0 for 1 ≤ j ≤ n − k. We want to show v ∈ (W⊥)⊥ = W. Any w ∈ W⊥ can be written w = Σ_{j=1}^{n−k} aj hj, so

  v · w = v w^T = Σ_{j=1}^{n−k} aj v hj^T = 0

Therefore v ∈ (W⊥)⊥ = W.

We thus have two ways of looking at a vector v in W : v ∈ W ⇒ v = aG for some a, and also v H^T = 0. How do you check that a vector v lies in W?
Hard way : find a vector a ∈ GF (q)^k such that v = aG.
Easy way : compute v H^T and check that it is zero.
Information Theory and Coding  Lecture 7  Pavan Nuggehalli

Linear Block Codes

A linear block code of blocklength n over a field GF (q) is a vector subspace of GF (q)^n. Suppose the dimension of this code is k. Recall that the rate of a code with M codewords is given by

  R = (1/n) log_q M

Here M = q^k ⇒ R = k/n.

Examples : repetition codes (RC), single parity check (SPC) codes.

Consequences of linearity

The Hamming weight of a codeword c, w(c), is the number of nonzero components of c; equivalently w(c) = dH(0, c).
Ex : w(0110) = 2, w(3401) = 3.

The minimum Hamming weight of a block code, wmin, is the weight of the nonzero codeword with smallest weight.

Theorem : For a linear block code, minimum weight = minimum distance.

Proof : (C, +) is a group, and

  wmin = min_{c ∈ C, c ≠ 0} w(c),  dmin = min_{ci ≠ cj} d(ci, cj)

First, wmin ≥ dmin : let c0 be the codeword of minimum weight. Since 0 is a codeword,

  wmin = w(c0) = d(0, c0) ≥ min_{ci ≠ cj} d(ci, cj) = dmin

Next, dmin ≥ wmin : suppose c1 and c2 are the closest codewords. Then c1 − c2 is a codeword.
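The theorem can be checked numerically on a small code. This sketch uses the (3, 2) binary single parity check code as an illustrative example:

```python
# Verify wmin = dmin on the (3,2) binary single-parity-check code.
from itertools import product

# codewords are the length-3 binary tuples of even weight
C = [c for c in product((0, 1), repeat=3) if sum(c) % 2 == 0]

def weight(c):
    return sum(1 for s in c if s != 0)

def distance(a, b):
    return sum(x != y for x, y in zip(a, b))

w_min = min(weight(c) for c in C if any(c))
d_min = min(distance(a, b) for a in C for b in C if a != b)
```

For linear codes this equality always holds, which is why the distance structure can be read off the weights alone.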
  dmin = d(c1, c2) = d(0, c1 − c2) = w(c1 − c2) ≥ min_{c ≠ 0} w(c) = wmin

Therefore dmin ≥ wmin, and combining the two inequalities gives wmin = dmin.

Key fact : For LBCs, the weight structure is identical to the distance structure.

Matrix description of LBCs

A LBC C of dimension k has a basis set with k vectors (n-tuples); call these g0, g1, . . . , g_{k−1}. Then any c ∈ C can be written as

  c = α0 g0 + α1 g1 + . . . + α_{k−1} g_{k−1} = [α0 α1 . . . α_{k−1}] G = αG,  α ∈ GF (q)^k

where G, the k × n matrix with rows g0, . . . , g_{k−1}, is called the generator matrix. This suggests a natural encoding approach : associate the data vector α with the codeword αG. Note that encoding then reduces to matrix multiplication. All the trouble lies in decoding.

The dual code of C is the orthogonal complement C⊥ :

  C⊥ = {h : c h^T = 0 ∀ c ∈ C}

Let h0, . . . , h_{n−k−1} be basis vectors for C⊥ and let H be the generator matrix of C⊥. H is called the parity check matrix for C.

Fact : c belongs to C iff c H^T = 0.

Example : Let C be the (3, 2) parity check code, C = {000, 011, 101, 110}, with k = 2, n − k = 1 and generator matrix

  G = | 1 0 1 |
      | 0 1 1 |

Then C⊥ = {000, 111} is the (3, 1) repetition code, and H = [1 1 1] is the parity check matrix for C, i.e. the generator matrix for the repetition code. SPC and RC are dual codes.

Let C be a linear block code and C⊥ its dual code, with generator matrix G and parity check matrix H. Then c ∈ C iff c H^T = 0, and v ∈ C⊥ iff v G^T = 0. Note that c H^T = 0 ∀ c ∈ C holds in particular for all rows of G, therefore GH^T = 0. Conversely, if H is an (n − k) × n matrix whose rows form a LI set and GH^T = 0, then H is a parity check matrix for C, i.e. its rows form a basis of C⊥.

Note that G is not unique : any basis set of C can be used to form G, so the matrix obtained by linearly combining the rows of G is also a generator matrix. Similarly with H. The elementary row operations are :
  - interchange of any two rows;
  - multiplication of any row by a nonzero element of GF (q);
  - replacement of any row by the sum of that row and a multiple of any other row.

Equivalent codes : Suppose you are given a code C. You can form a new code by choosing any two components and transposing the symbols in these two components for every codeword. What you get is a linear block code which has the same minimum distance. Codes related in this manner are called equivalent codes.

Fact : Using elementary row operations and column permutations, it is possible to reduce G to the following form :

  G = [I_{k×k}  P]

This is called the systematic form of the generator matrix. Every LBC is equivalent to a code which has a generator matrix in systematic form.

Advantages of a systematic G : the codeword is

  c = aG = (a0, . . . , a_{k−1}) [I_{k×k}  P] = (a0, . . . , a_{k−1}, ck, . . . , c_{n−1})

so the first k symbols are the data symbols themselves and only the n − k check symbols need to be computed.

If G = [I  P], the parity check matrix is

  H = [−P^T  I_{(n−k)×(n−k)}]

since 1) GH^T = −P + P = 0 and 2) the rows of H form a LI set of n − k vectors.

Example (binary (6, 3) code, where −P^T = P^T) :

  G = | 1 0 0 0 1 1 |        H = | 0 1 1 1 0 0 |
      | 0 1 0 1 0 1 |            | 1 0 1 0 1 0 |
      | 0 0 1 1 1 0 |            | 1 1 0 0 0 1 |

Singleton bound (revisited) : dmin = min_{c ≠ 0} w(c) ≤ 1 + n − k. Codes which meet the bound are called maximum distance codes or maximum-distance separable codes.

Summary : C has generator matrix G and parity check matrix H, with c H^T = 0 ⇔ c ∈ (C⊥)⊥ = C and v G^T = 0 ⇔ v ∈ C⊥. Next we state the relationship between the columns of H and dmin.
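The relationship between a systematic G = [I | P] and H = [P^T | I] can be verified directly. The (6, 3) binary code below is an illustrative choice of P, and the helper names are ours:

```python
# Build G = [I | P] and H = [P^T | I] over GF(2) and check that every
# codeword c = aG satisfies cH^T = 0.
from itertools import product

P = [[0, 1, 1],
     [1, 0, 1],
     [1, 1, 0]]
k, m = len(P), len(P[0])          # k data symbols, m = n - k check symbols

def identity(sz):
    return [[int(i == j) for j in range(sz)] for i in range(sz)]

G = [identity(k)[i] + P[i] for i in range(k)]                         # [I | P]
H = [[P[j][i] for j in range(k)] + identity(m)[i] for i in range(m)]  # [P^T | I]

def encode(a):
    """c = aG over GF(2): the first k symbols are the dataword itself."""
    return tuple(sum(ai * gi for ai, gi in zip(a, col)) % 2
                 for col in zip(*G))

def syndrome(c):
    """s = cH^T over GF(2); zero iff c is a codeword."""
    return tuple(sum(ci * hi for ci, hi in zip(c, row)) % 2 for row in H)

codewords = [encode(a) for a in product((0, 1), repeat=k)]
```

Note how the encoder leaves the dataword visible in the first k positions, which is the practical payoff of the systematic form.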
Note that the generator matrix for a LBC C is not unique. Suppose G has rows g0, g1, . . . , g_{k−1}, so that C = LS(g0, . . . , g_{k−1}) = LS(G). Consider the following transformations of G :
a) Interchange two rows : C′ = LS(G′) = LS(G) = C.
b) Multiply any row of G by a nonzero element of GF (q) : LS(G′) = C.
c) Replace any row by the sum of that row and a multiple of any other row, e.g. replace g0 by g0 + α g1 : LS(G′) = C.
In all cases the code, and hence the parameters (n, k, d), remain unchanged. The above operations are called elementary row operations.

Fact 1 : A LB code remains unchanged if the generator matrix is subjected to elementary row operations.

Suppose you are given a code C. You can form a new code by choosing any two components and transposing the symbols in these two components. This gives a new code which is only trivially different; the new code is also a LBC. Suppose G = [f0, f1, . . . , f_{n−1}], where the fi are the columns of G. Then permutation of the components of the code corresponds to permutation of the columns of G, e.g. G′ = [f1, f0, . . . , f_{n−1}].

Defn : Two block codes are equivalent if they are the same except for a permutation of the codeword components.

Fact 2 : Two LBCs (with generator matrices G and G′) are equivalent if G′ can be obtained from G using elementary row operations and column permutations.

Fact 3 : Every generator matrix G can be reduced by elementary row operations and column permutations to the following form :

  G = [I_{k×k}  P_{k×(n−k)}]

also known as row-echelon form.
Proof : Gaussian elimination; proceed row by row, interchanging rows and columns as needed.

A generator matrix in the above form is said to be systematic, and the corresponding LBC is called a systematic code.

Theorem : Every linear block code is equivalent to a systematic code.
Proof : Combine Fact 3 and Fact 2.

There are several advantages to using a systematic generator matrix :
1) The first k symbols of the codeword form the dataword.
2) Only n − k check symbols need to be computed, which reduces encoder complexity.
3) If G = [I  P], then H = [−P^T_{(n−k)×k}  I_{(n−k)×(n−k)}]. Need to show GH^T = 0 :

  GH^T = [I  P] [−P^T  I]^T = −P + P = 0

and the rows of H form a LI set. (Is H a parity check matrix? Yes, by the converse fact above.)

Now let us study the distance structure of LBCs.

The Hamming weight w(c) of a codeword c is the number of nonzero components of c : w(c) = dH(0, c). The minimum Hamming weight of a block code is the weight of the nonzero codeword with smallest weight :

  wmin = min_{c ∈ C, c ≠ 0} w(c)

Theorem : For a linear block code, minimum weight = minimum distance.

Proof : Use the fact that (C, +) is a group.

  wmin = min_{c ∈ C, c ≠ 0} w(c),  dmin = min_{ci ≠ cj ∈ C} d(ci, cj)

Since 0 ∈ C, taking c0 to be the minimum weight codeword,

  wmin = w(c0) = d(0, c0) ≥ min_{ci ≠ cj} d(ci, cj) = dmin

Conversely, suppose c1 and c2 are the two closest codewords. Then c1 − c2 ∈ C, and

  dmin = d(c1, c2) = d(0, c1 − c2) = w(c1 − c2) ≥ min_{c ≠ 0} w(c) = wmin

Key fact : For LBCs, the weight structure is identical to the distance structure. Given a generator matrix G, or equivalently a parity check matrix H, what is dmin? Brute force approach : generate C and find the minimum weight nonzero codeword.

Theorem (revisited) : The minimum distance of any linear (n, k) block code satisfies dmin ≤ 1 + n − k.
Proof : For any LBC, consider its equivalent systematic generator matrix. Let c be the codeword corresponding to the data word (1 0 . . . 0). Then

  wmin ≤ w(c) ≤ 1 + n − k  ⇒  dmin ≤ 1 + n − k

Codes which meet this bound are called maximum distance separable (MDS) codes. Examples include the binary SPC and RC. The best known nonbinary MDS codes are the Reed-Solomon codes over GF (q), with blocklength n = q − 1 and dmin = n − k + 1. The Galileo mission used the (255, 223, 33) RS code over GF (256), q = 256 = 2⁸.

Theorem : The minimum weight (= dmin) of a LBC is the smallest number of linearly dependent columns of a parity check matrix.

Proof : A codeword c ∈ C iff c H^T = 0, i.e. iff Σ_{i=0}^{n−1} ci fi = 0, where fi is the column of H in position i. Therefore each nonzero codeword corresponds to a linear dependence among the columns of H : a codeword of weight w implies that some w columns are linearly dependent. Conversely, let w be the smallest number of linearly dependent columns of H, say Σ_k a_{n_k} f_{n_k} = 0, where none of the a_{n_k} is 0 (otherwise minimality is violated). The vector c with c_{n_k} = a_{n_k} and all other components zero is then a codeword of weight w. Hence the minimum weight equals the smallest number of linearly dependent columns of H.

Examples : Consider the code used for ISBNs (International Standardized Book Numbers). Each book has a 10-digit identifying code called its ISBN. The elements of this code are from GF (11) and denoted by 0, 1, . . . , 9, X. The first 9 digits are always in the range 0 to 9; the last digit is the parity check symbol. The parity check matrix is

  H = [1 2 3 4 5 6 7 8 9 10]

so a valid ISBN (x1, . . . , x10) satisfies Σ_{i=1}^{10} i · xi = 0 in GF (11), which is isomorphic to Z/11 under addition and multiplication modulo 11. For example, for the ISBN 0521553741 (Blahut),

  1·0 + 2·5 + 3·2 + 4·1 + 5·5 + 6·5 + 7·3 + 8·7 + 9·4 + 10·1 = 198 = 0 (mod 11)

No column of H is zero, but any two columns are linearly dependent over GF (11), so dmin = 2 ⇒ the code can detect one error.
Ex : Show that it can also detect a transposition error, i.e. an error in which two codeword positions are interchanged.

Hamming Codes

Two binary vectors are linearly independent iff they are distinct and nonzero. Consider a binary parity check matrix with m rows. How many nonzero, pairwise distinct columns are possible? 2^m − 1. This allows us to obtain a binary (2^m − 1, 2^m − 1 − m) Hamming code. If all the columns are distinct and nonzero, then dmin ≥ 3; and since adding two columns of H gives us another column, some three columns are linearly dependent, so dmin ≤ 3. Hence dmin = 3.

Example : m = 3 gives the (7, 4) Hamming code, with G = [I  P] and H = [−P^T  I] = [P^T  I] :

  G = | 1 0 0 0 0 1 1 |        H = | 0 1 1 1 1 0 0 |
      | 0 1 0 0 1 1 0 |            | 1 1 0 1 0 1 0 |
      | 0 0 1 0 1 0 1 |            | 1 0 1 1 0 0 1 |
      | 0 0 0 1 1 1 1 |

Hamming codes can correct single errors and detect double errors; they are used in SIMM and DIMM memory modules.

Hamming codes can easily be defined over larger fields, but any two distinct nonzero m-tuples over GF (q) need not be LI; for example, over GF (3) the 2-tuples (1, 2) and (2, 1) = 2 · (1, 2) are linearly dependent.
Question : How many m-tuples exist such that any two of them are LI? (q^m − 1)/(q − 1). This defines a

  ((q^m − 1)/(q − 1), (q^m − 1)/(q − 1) − m)

Hamming code over GF (q) with dmin = 3.
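The ISBN check just described is easy to implement. The sketch below treats 'X' as the digit 10; the helper name is ours, not from the notes:

```python
# ISBN-10 as a length-10 code over GF(11) with parity-check matrix
# H = [1 2 3 4 5 6 7 8 9 10]: a valid ISBN (x1, ..., x10) satisfies
# sum(i * xi) = 0 (mod 11), where the symbol 'X' stands for 10.

def isbn10_valid(isbn):
    digits = [10 if ch == 'X' else int(ch) for ch in isbn]
    return sum(i * d for i, d in enumerate(digits, start=1)) % 11 == 0
```

Because dmin = 2, any single-digit error changes the checksum, and so does swapping two unequal digits, which is the transposition-detection property of the exercise.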
Consider all nonzero m-tuples (columns) whose topmost nonzero component is 1. Two such columns which are distinct must be LI.

Example : m = 3, q = 3 gives n = (3³ − 1)/(3 − 1) = 13 and k = n − m = 10, the (13, 10) Hamming code over GF (3); its parity check matrix H has as its 13 columns the ternary 3-tuples whose topmost nonzero component is 1.

The standard array

Suppose a codeword c ∈ C is sent through a channel and received as the senseword r. The errorword e is defined as

  e = r − c

The decoding problem : Given r, which codeword c ∈ C maximizes the likelihood of receiving the senseword r? Equivalently, find the most likely error pattern e. For a binary symmetric channel, "most likely" means the smallest number of bit errors. So the decoder picks an error pattern e of smallest weight such that r − e is a codeword, and decodes in two steps :

  ê = most likely error pattern,  ĉ = r − ê

This is the same as nearest neighbour decoding : ĉ is r's nearest neighbour in C. One way to do this would be to write a lookup table associating every r ∈ GF (q)^n with a codeword ĉ(r) ∈ C. A systematic way of constructing this table leads to the (Slepian) standard array.

Note that C is a subgroup of GF (q)^n. The standard array of the code C is the coset decomposition of GF (q)^n with respect to the subgroup C. We denote the coset {g + c : c ∈ C} by g + C. Each row of the standard array is a coset, and the cosets completely partition GF (q)^n. The number of cosets is

  |GF (q)^n| / |C| = q^n / q^k = q^{n−k}

Let 0, c2, . . . , c_{q^k} be the codewords. Then the standard array is

  0            c2                c3              . . .  c_{q^k}
  e2           c2 + e2           c3 + e2         . . .  c_{q^k} + e2
  e3           .                 .                      .
  .
  e_{q^{n−k}}  c2 + e_{q^{n−k}}  .               . . .  c_{q^k} + e_{q^{n−k}}

1. The first row is the code C with the zero vector in the first column.
2. As coset leader, choose among the unused n-tuples one which has least Hamming weight, i.e. "closest to the all-zero vector".

Decoding : Find the senseword r in the standard array and decode it as the codeword at the top of the column that contains r.

Claim : The above decoding procedure is nearest neighbour decoding.

Proof : Suppose not. We can write r = cj + ej, where ej is the coset leader of r's row and cj the codeword at the top of r's column. Suppose there is a nearer codeword ci, so that r = ci + ei with w(ei) < w(ej). Then

  ci + ei = cj + ej ⇒ ei = ej + cj − ci

But cj − ci ∈ C, so ei ∈ ej + C : ei lies in the same coset as ej but has smaller weight, a contradiction, since the coset leader was chosen to have least weight.

Geometrically, the first column consists of the points in Hamming spheres around the all-zero codeword, and the k-th column of the corresponding points around the k-th codeword. Suppose dmin = 2t + 1. Then Hamming spheres of radius t are nonintersecting. In the standard array, draw a horizontal line below the last row whose coset leader satisfies w(e) ≤ t. Any senseword above this line has a unique nearest neighbour codeword; below this line, a senseword may have more than one nearest neighbour codeword.

A complete decoder assigns every received senseword to a nearby codeword; it never declares a decoding failure. A bounded-distance decoder corrects all errors up to weight t; if the senseword falls below the line (the Lakshman Rekha), it declares a decoding failure.

Syndrome decoding : For any senseword r, the syndrome is defined by s = r H^T.
Theorem : All vectors in the same coset have the same syndrome. Two distinct cosets have distinct syndromes.

Proof : Suppose r and r′ are in the same coset, with coset leader e. Then r = c + e and r′ = c′ + e for codewords c, c′, so

  s(r) = r H^T = c H^T + e H^T = e H^T  and  s(r′) = r′ H^T = e H^T

Conversely, suppose two distinct cosets have the same syndrome, and let e and e′ be the corresponding coset leaders. Then

  e H^T = e′ H^T ⇒ (e − e′) H^T = 0 ⇒ e − e′ ∈ C ⇒ e ∈ e′ + C

a contradiction, since distinct cosets are disjoint.

This means we only need to tabulate syndromes and coset leaders :
1) Compute the syndrome s = r H^T.
2) Look up the corresponding coset leader e.
3) Decide ĉ = r − e.

Example : the (3, 1) RC.

Hamming Codes : Basic idea : construct a parity check matrix with as many columns as possible such that no two columns are linearly dependent.
Binary case : we just need to make sure that all columns are nonzero and distinct.
Nonbinary case : Pick a nonzero vector V1 ∈ GF (q)^m. The set of vectors LD with V1 is H1 = {0, V1, 2V1, . . . , (q − 1)V1}. Pick V2 ∉ H1 and form the set of vectors LD with V2, {0, V2, 2V2, . . . , (q − 1)V2}. Two nonzero vectors in disjoint sets are LI. Continue this process until all the m-tuples are used up. (Incidentally, {Hn, +} is a group.) The number of usable columns is

  # columns = (q^m − 1)/(q − 1)

Equivalently, take the m-tuples whose first (topmost) nonzero component is 1; two nonzero distinct m-tuples that have a 1 as the topmost nonzero component are LI (why?), and

  # m-tuples = q^{m−1} + q^{m−2} + . . . + 1 = (q^m − 1)/(q − 1)

Example : m = 2, q = 3 : n = (3² − 1)/(3 − 1) = 4, k = n − m = 2, a (4, 2) Hamming code over GF (3).

Review : Suppose a codeword c is sent through a channel and received as senseword r. The error vector or error pattern is e = r − c. The decoding problem : given r, which codeword ĉ ∈ C maximizes the likelihood of receiving r? For a binary symmetric channel with pe < 0.5, the "most likely" error pattern is the one with the least number of 1's, i.e. the pattern with the smallest number of bit errors, so ĉ = r − ê is r's nearest neighbour in C. There is an element of arbitrariness in this procedure, because some r may have more than one nearest neighbour. One way to decode would be a lookup table associating every r ∈ GF (q)^n with a codeword ĉ(r) ∈ C; a systematic way of constructing this table leads us to the (Slepian) standard array. We begin by noting that C is a subgroup of GF (q)^n. For any g ∈ GF (q)^n, the coset associated with g is g + C = {g + c : c ∈ C}. Recall : 1) the cosets are disjoint and completely partition GF (q)^n; 2) # cosets = |GF (q)^n| / |C| = q^n / q^k = q^{n−k}. Let 0, c2, . . . , c_{q^k} be the codewords. Then the standard array is constructed as follows :
a) The first row is the code C with the zero vector in the first column; 0 is its coset leader.
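The Hamming construction, taking all pairwise linearly independent m-tuples as columns of H, can be sketched directly in the binary case. Here m = 3 is an illustrative choice, reproducing the (7, 4) code:

```python
# Binary Hamming code: the parity-check matrix has as columns all
# 2^m - 1 distinct nonzero m-tuples, giving an (n, k) = (2^m - 1, 2^m - 1 - m)
# code with dmin = 3.
from itertools import product

m = 3
columns = [c for c in product((0, 1), repeat=m) if any(c)]  # 2^m - 1 columns
H = list(zip(*columns))                                     # m x (2^m - 1)
n = len(columns)
k = n - m

def gf2_sum(cols):
    """Component-wise sum of column vectors over GF(2)."""
    return tuple(sum(bits) % 2 for bits in zip(*cols))
```

Since the columns are distinct and nonzero, no one or two of them are linearly dependent, but the sum of two columns is again a column, so some three columns sum to zero: dmin = 3.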
b) Among the vectors not in the first row, choose an element e2 which has least Hamming weight. The second row is the coset e2 + C with e2 as the coset leader.
c) Continue as above, each time choosing an unused vector g ∈ GF (q)^n of minimum weight.

Claim : Standard array decoding is nearest neighbour decoding.

Proof : Suppose not. Let c be the codeword obtained using standard array decoding and let c′ be any of the nearest neighbours of r. We have r = c + e = c′ + e′, where e is the coset leader for r's coset and w(e) > w(e′). Then

  e′ = e + c − c′ ⇒ e′ ∈ e + C

By construction, e has minimum weight in e + C, so w(e) ≤ w(e′), a contradiction.

What is the geometric significance of standard array decoding? Consider a code with dmin = 2t + 1; then the Hamming spheres of radius t drawn around each codeword are nonintersecting. In the standard array, the first n coset-leader rows after the top row describe spheres of radius 1 : the first n vectors in the first column constitute the Hamming sphere of radius 1 around the 0 vector, and the first n vectors of the k-th column correspond to the Hamming sphere of radius 1 around ck. Likewise the first n + (n choose 2) vectors in the k-th column correspond to the Hamming sphere of radius 2 around the k-th codeword, and the first n + (n choose 2) + . . . + (n choose t) vectors in the k-th column are the elements of the Hamming sphere of radius t around ck. Draw a horizontal line in the standard array below these rows. Any senseword above this line has a unique nearest neighbour in C; below the line, a senseword may have more than one nearest neighbour.

A complete decoder simply implements standard array decoding for all sensewords. It never declares a decoding failure. A bounded-distance decoder corrects all errors up to weight t, i.e. it decodes all sensewords lying above the horizontal line; if a senseword lies below the line in the standard array, it declares a decoding failure.

Example : Let us construct a (5, 2) binary code with dmin = 3, with G = [I  P] and H = [−P^T  I] = [P^T  I] :

  G = | 1 0 1 1 0 |        H = | 1 1 1 0 0 |
      | 0 1 1 0 1 |            | 1 0 0 1 0 |
                               | 0 1 0 0 1 |
The codewords are C = {00000, 10110, 01101, 11011}. A standard array for this code is shown below (coset leaders in the first column; the choice of the weight-2 leaders is somewhat arbitrary, e.g. 11000 lies in the same coset as 00011, since 11011 = 11000 + 00011 = 00011 + 11011 + 00000 shifts within one row) :

  00000  10110  01101  11011
  00001  10111  01100  11010
  00010  10100  01111  11001
  00100  10010  01001  11111
  01000  11110  00101  10011
  10000  00110  11101  01011
  00011  10101  01110  11000
  01010  11100  00111  10001

Syndrome decoding : For any senseword r, the syndrome is defined by s = r H^T.

Theorem : All vectors in the same coset (row in the standard array) have the same syndrome. Two different cosets/rows have distinct syndromes.

Proof : Suppose r and r′ are in the same coset. Let e be the coset leader; then r = c + e and r′ = c′ + e, so

  r H^T = c H^T + e H^T = e H^T = c′ H^T + e H^T = r′ H^T

Suppose two distinct cosets have the same syndrome, and let e and e′ be the coset leaders. Then e H^T = e′ H^T ⇒ (e − e′) H^T = 0 ⇒ e − e′ ∈ C ⇒ e ∈ e′ + C, a contradiction.

This means we only need to tabulate syndromes and coset leaders. The syndrome decoding procedure is as follows :
1) Compute s = r H^T.
2) Look up the corresponding coset leader e.
3) Decode ĉ = r − e.

The table has q^{n−k} entries. For RS(255, 223, 33) over GF (256),

  q^{n−k} = 256³² = 2^{8×32} = 2^{256} > 10^{64}

more than the number of atoms on earth, so even the reduced table can be impractically large.

Why can't decoding be linear? Suppose we use the following scheme : given syndrome s, we calculate ê = sB, where B is an (n − k) × n matrix. Let

  E = {ê : ê = sB for some s ∈ GF (q)^{n−k}}

Claim : E is a vector subspace of GF (q)^n with |E| ≤ q^{n−k}, so dim E ≤ n − k. Let E1 = {single error patterns with nonzero component 1 which can be corrected}. Then E1 ⊆ E, and since E1 constitutes a LI set, no more than n − k such single errors can be corrected; in general not more than (n − k)(q − 1) single errors can be corrected by a linear decoder. Example : the (7, 4) Hamming code can correct all 7 single errors, in contrast to 7 − 4 = 3 errors with linear decoding.

We need to understand Galois fields to devise good codes and develop efficient decoding procedures.
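Steps 1)-3) can be sketched for the (7, 4) binary Hamming code, where every coset leader is a single-error pattern, so the whole table has only 2^{7−4} = 8 rows. The column ordering of H below is one illustrative choice:

```python
# Syndrome decoding for the (7,4) binary Hamming code:
# s = rH^T, look up the coset leader, subtract it off.

H = [(0, 0, 0, 1, 1, 1, 1),
     (0, 1, 1, 0, 0, 1, 1),
     (1, 0, 1, 0, 1, 0, 1)]

def syndrome(r):
    return tuple(sum(ri * hi for ri, hi in zip(r, row)) % 2 for row in H)

# coset-leader table: syndrome -> error pattern of weight <= 1
leaders = {(0, 0, 0): (0,) * 7}
for i in range(7):
    e = tuple(int(j == i) for j in range(7))
    leaders[syndrome(e)] = e          # syndrome of e_i is column i of H

def correct(r):
    e = leaders[syndrome(r)]
    return tuple((ri - ei) % 2 for ri, ei in zip(r, e))

c = (1, 0, 1, 0, 1, 0, 1)             # a codeword: its syndrome is zero
r = (1, 0, 0, 0, 1, 0, 1)             # c with a single bit error in position 2
```

Since the columns of H are distinct and nonzero, each single-error syndrome is distinct, which is exactly why all 7 single errors are correctable.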
. . But this implies p − r = 0. . Then we have Theorem : Each ﬁnite ﬁeld contains a unique smallest subﬁeld which has a prime 81 . We can then conclude that p = 0.) is a ﬁeld (G. . Let GF (q) be an arbitrary ﬁeld with q elements. . Fact 1 : Every ﬁnite ﬁeld is isomorphic to a ﬁnite ﬁeld GF (pm ). This is just the ﬁeld of integers modulo p. (G − {0}.b = (1 + 1 + . α2. . Then GF (q) must contain the additive identity 0 and the multiplicative identity 1. NTS closure & inverse. + 1) . 1 + 1. P prime constructed using a prime polynomial f (x) ∈ GF (p)[x] Fact 2 : There exist prime polynomials of degree m over GF (P ). multiplication.b = b + b + . + b = a × b(mod p) a times Claim : (G. +. . 2. i. . 1 + 1 + 1. . . . . Since q is ﬁnite. Normal addition. r.e. (GF (q) − {0}. αq−1 = 1}. The set of ﬁeld integers is given by G = {0.) is a group. so if r = 0. . and call them the integers of the ﬁeld. Let p be the ﬁrst element which is a repeat of an earlier element. . 1. . Based on the ring of integers and the ring of polynomials. smallest nontrivial subﬁeld. . . 1. p = r m GF (q).Information Theory and Coding Lecture 8 Pavan Nuggehalli Finite Fields Review : We have constructed two kinds of ﬁnite ﬁelds. . and so on. Claim : Addition and multiplication in G is modulo p.) is an abelian group. There are Q(q − 1) such primitive elements. . then these must have been an earlier repeat at p − r = 0. 3. . We denote these elements by 0. Addition is modulo p because G is a cyclic group under addition. Multiplication is modulo p because of distributive law. distributive law holds. for all values of p and m. a. P − 1}. +) is an abelian group. We can then conclude that p is prime. . 2. To show (G − {0}. the sequence must eventually repeat itself. 1. p prime. . By closure GF (q) contains the sequence of elements 0.) is cyclic ⇒ ∃ α ∈ GF (q) such that GF (q) − {0} = {α.
The number of elements of this unique smallest subfield is called the characteristic of the field. If q is prime, the characteristic of GF(q) is q; in other words GF(q) ≅ Z/q. (If not: G is a subgroup of (GF(q), +) ⇒ p | q ⇒ p = q.)

Defn: Let GF(q) be a field with characteristic p, and let α ∈ GF(q). The monic polynomial f(x) of smallest degree over GF(p) with f(α) = 0 is called the minimal polynomial of α over GF(p).

Theorem: 1) Every element α ∈ GF(q) has a unique minimal polynomial. 2) The minimal polynomial is prime.
Proof: Existence: Pick α ∈ GF(q) and evaluate the monic polynomials x^0, x^1, x^2, ... at x = α until a repetition occurs. Suppose the first repetition occurs at f(α) = h(α), where deg f(x) > deg h(x). Then f(x) − h(x) is the minimal polynomial of α: any other polynomial of lower degree cannot evaluate to 0 at α, since otherwise a repetition would have occurred before reaching f(x).
Uniqueness: Suppose g(x) and g′(x) are two monic polynomials of lowest degree such that g(α) = g′(α) = 0. Then h(x) = g(x) − g′(x) has degree lower than g(x) and evaluates to 0 at α, a contradiction.
Primality: Suppose g(x) = p1(x) p2(x) ... pN(x). Then g(α) = 0 ⇒ pk(α) = 0 for some k. But pk(x) has lower degree than g(x), a contradiction.

Theorem: Let α be a primitive element in a finite field GF(q) with characteristic p, and let m be the degree of the minimal polynomial of α over GF(p). Then q = p^m, and every element β ∈ GF(q) can be written as
β = a_{m−1} α^(m−1) + a_{m−2} α^(m−2) + ... + a1 α + a0, where a_{m−1}, ..., a0 ∈ GF(p).
Proof: Let f(x) be the degree-m minimal polynomial of α. Pick some combination of a_{m−1}, ..., a0 ∈ GF(p); then β = a_{m−1} α^(m−1) + ... + a1 α + a0 ∈ GF(q). There are p^m such combinations, and two different combinations cannot give rise to the same field element: otherwise we would have
a_{m−1} α^(m−1) + ... + a0 = b_{m−1} α^(m−1) + ... + b0
⇒ (a_{m−1} − b_{m−1}) α^(m−1) + ... + (a0 − b0) = 0,
so α would be a zero of a polynomial of degree at most m − 1, contradicting the fact that the minimal polynomial of α has degree m. Therefore q ≥ p^m.
Now pick any β ∈ GF(q) − {0}. Then β = α^l for some 1 ≤ l ≤ q − 1. By the division algorithm, x^l = Q(x) f(x) + r(x) with deg r(x) ≤ m − 1, so β = α^l = Q(α) f(α) + r(α) = r(α). Thus every element β can be expressed as a linear combination of α^(m−1), α^(m−2), ..., α^0, and therefore q ≤ p^m. Hence proved.

Corollary: Every finite field is isomorphic to a field GF(p)[x]/f(x).
Proof: By the theorem, every element of GF(q) can be associated with a polynomial of degree at most m − 1 by replacing α with the indeterminate x. These polynomials can be thought of as field elements; they are added and multiplied modulo f(x), the minimal polynomial of α. This field is then isomorphic to the field GF(p)[x]/f(x).

Theorem: A finite field of size p^m exists for all primes p and all m ≥ 1.
Proof: (Hand-waving) We need to show that there exists a prime polynomial of degree m over GF(p) for every m; the count can be carried out in the spirit of the sieve of Eratosthenes.
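The corollary can be made concrete in code. A minimal sketch of my own (not from the notes): GF(8) built as GF(2)[x]/(x^3 + x + 1), with elements stored as coefficient bit-vectors and multiplication done by shift-and-reduce; the powers of α = x sweep out all of GF(8) − {0}, as the primitive-element discussion above promises.

```python
# Sketch of GF(2^m) as GF(2)[x] modulo a prime polynomial f(x).
# Here m = 3 and f(x) = x^3 + x + 1 (bit-mask 0b1011), giving GF(8).

M, F = 3, 0b1011

def gf2m_mul(a, b):
    # Carry-less (XOR) multiply of coefficient bit-vectors,
    # reducing modulo f(x) whenever the degree reaches m.
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a >> M:            # degree hit m: subtract (XOR) f(x)
            a ^= F
    return r

alpha = 0b010                 # the element x
powers = []
e = 1
for _ in range(7):
    powers.append(e)
    e = gf2m_mul(e, alpha)

print(sorted(powers))         # [1, 2, 3, 4, 5, 6, 7]: all of GF(8) - {0}
```

The choice f(x) = x^3 + x + 1 is one of the two prime cubics over GF(2); either works, only the labelling of elements changes.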
Information Theory and Coding    Lecture 9
Pavan Nuggehalli    Cyclic Codes

Cyclic codes are a kind of linear block code with a special structure. Some advantages:
- amenable to easy encoding using shift registers
- decoding involves solving polynomial equations, not lookup tables
- good burst-correction and error-correction capabilities
- almost all commonly used block codes are cyclic (e.g. CRCs are cyclic)

Definition: A linear block code C is cyclic if (c0 c1 ... c_{n−1}) ∈ C ⇒ (c_{n−1} c0 c1 ... c_{n−2}) ∈ C, i.e. every cyclic shift of a codeword is a codeword. (Ex: give the equivalent definition with left shifts.)

It is convenient to identify a codeword c0 c1 ... c_{n−1}, ci ∈ GF(q), in a cyclic code C with the polynomial c(x) = c0 + c1 x + ... + c_{n−1} x^(n−1). Then
x c(x) = c0 x + c1 x^2 + ... + c_{n−1} x^n, and
x c(x) mod (x^n − 1) = c_{n−1} + c0 x + ... + c_{n−2} x^(n−1),
which corresponds to the codeword (c_{n−1} c0 ... c_{n−2}). Thus a linear block code is cyclic iff c(x) ∈ C ⇒ x c(x) mod (x^n − 1) ∈ C: in the ring of polynomials modulo x^n − 1, a cyclic shift is multiplication by x.

Note that we can think of C as a subset of GF(q)[x]. Each polynomial in C has degree m ≤ n − 1, so C can also be thought of as a subset of GF(q)[x]/(x^n − 1), the ring of polynomials modulo x^n − 1. C will be thought of as the set of n-tuples as well as the corresponding codeword polynomials.
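The shift-equals-multiplication identity above can be checked directly; a minimal sketch (the example codeword is my own, not from the notes):

```python
# Multiplying a codeword polynomial c(x) by x modulo x^n - 1 is the same
# as cyclically shifting the n-tuple (c0, c1, ..., c_{n-1}).

def shift_as_tuple(c):
    # (c0, ..., c_{n-1}) -> (c_{n-1}, c0, ..., c_{n-2})
    return [c[-1]] + c[:-1]

def shift_as_poly(c):
    # x * c(x) mod (x^n - 1): the coefficient of x^n wraps around to x^0.
    n = len(c)
    out = [0] * n
    for i, ci in enumerate(c):
        out[(i + 1) % n] = ci
    return out

c = [1, 0, 1, 1, 0]                  # c(x) = 1 + x^2 + x^3, n = 5
print(shift_as_poly(c))              # [0, 1, 0, 1, 1]
print(shift_as_poly(c) == shift_as_tuple(c))  # True
```

The wrap-around index (i + 1) % n is precisely the reduction x^n = 1 in the ring GF(q)[x]/(x^n − 1).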
Equivalently, in terms of codeword polynomials, c(x) ∈ C ⇒ x c(x) mod (x^n − 1) ∈ C.

Theorem: A set of codeword polynomials C is a cyclic code iff
1) C is a subgroup under addition
2) c(x) ∈ C ⇒ a(x) c(x) mod (x^n − 1) ∈ C for all a(x) ∈ GF(q)[x]

Theorem: Let C be an (n, k) cyclic code. Then
1) There exists a unique monic polynomial g(x) ∈ C of smallest degree among all nonzero polynomials in C
2) c(x) ∈ C ⇒ c(x) = a(x) g(x)
3) deg g(x) = n − k
4) g(x) | x^n − 1

Let h(x) = (x^n − 1)/g(x); h(x) is called the check polynomial. We have
Theorem: c(x) ∈ C ⇔ c(x) h(x) = 0 mod (x^n − 1).

Theorem: The generator and parity check matrices for a cyclic code with generator polynomial g(x) and check polynomial h(x) are given by

G = [ g0 g1 ... g_{n−k}  0   ...  0      ]
    [ 0  g0 g1 ... g_{n−k}   ...  0      ]
    [ ...                               ]
    [ 0  ...  0  g0 g1 ...    g_{n−k}   ]    (k rows)

H = [ hk h_{k−1} ... h0  0   ...  0     ]
    [ 0  hk h_{k−1} ... h0   ...  0     ]
    [ ...                               ]
    [ 0  ...  0  hk h_{k−1} ...  h0     ]    (n − k rows)
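The shift structure of G can be written out for a concrete code; a minimal sketch of my own, using the binary (7, 4) cyclic code with g(x) = 1 + x + x^3 (which divides x^7 − 1):

```python
# Building the generator matrix of a binary (7,4) cyclic code from its
# generator polynomial g(x) = 1 + x + x^3. Row i of G holds the
# coefficients of x^i * g(x), i = 0, ..., k-1.

n, k = 7, 4
g = [1, 1, 0, 1]                      # g0, g1, g2, g3

G = []
for i in range(k):
    row = [0] * n
    for j, gj in enumerate(g):
        row[i + j] = gj               # shift g(x) by x^i
    G.append(row)

for row in G:
    print(row)
# [1, 1, 0, 1, 0, 0, 0]
# [0, 1, 1, 0, 1, 0, 0]
# [0, 0, 1, 1, 0, 1, 0]
# [0, 0, 0, 1, 1, 0, 1]
```

No wrap-around is needed here because deg(x^(k−1) g(x)) = (k − 1) + (n − k) = n − 1 < n.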
Proof: Note that g0 ≠ 0. Otherwise we could write g(x) = x g′(x), and then g′(x) = x^(n−1) g(x) mod (x^n − 1) ∈ C with deg g′(x) < deg g(x), a contradiction.
The rows of G correspond to the codewords g(x), x g(x), ..., x^(k−1) g(x), and these codewords are linearly independent. For H to be the parity check matrix, we need to show that GH^T = 0 and that the n − k rows of H are linearly independent. Since deg h(x) = k, we have hk ≠ 0, so the rows of H are linearly independent.
To show GH^T = 0_{k×(n−k)}: we know g(x) h(x) = x^n − 1, so the coefficients of x^1, x^2, ..., x^(n−1) are 0, i.e.
u_l = Σ_{k=0}^{l} g_k h_{l−k} = 0,  1 ≤ l ≤ n − 1.
It is easy to see by inspection that every entry of GH^T is one of u_k, u_{k+1}, ..., u_{n−1}; hence GH^T = 0_{k×(n−k)}.
These matrices can be reduced to systematic form by elementary row operations.

Encoding is polynomial multiplication: let a(x) be the data polynomial, of degree ≤ k − 1; then c(x) = a(x) g(x).

Decoding: Let v(x) = c(x) + e(x) be the received senseword, where e(x) is the errorword polynomial.
Definition: The syndrome polynomial s(x) is given by s(x) = v(x) mod g(x). We have
s(x) = [c(x) + e(x)] mod g(x) = e(x) mod g(x).
Syndrome decoding: find the e(x) with the least number of nonzero coefficients satisfying s(x) = e(x) mod g(x). Syndrome decoding can be implemented using a lookup table: there are q^(n−k) possible values of s(x).
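The syndrome computation is just polynomial division; a minimal sketch of my own, again with g(x) = 1 + x + x^3 for the binary (7, 4) code:

```python
# Syndrome computation for a binary cyclic code: s(x) = v(x) mod g(x)
# depends only on the error pattern, since codewords are multiples of g(x).

def poly_mod_gf2(v, g):
    # Remainder of v(x) divided by g(x), coefficients in GF(2),
    # lists ordered lowest degree first.
    v = v[:]
    dg = len(g) - 1
    for i in range(len(v) - 1, dg - 1, -1):
        if v[i]:
            for j, gj in enumerate(g):
                v[i - dg + j] ^= gj
    return v[:dg]

g = [1, 1, 0, 1]                       # g(x) = 1 + x + x^3
c = [1, 0, 1, 1, 1, 0, 0]              # c(x) = (1 + x) * g(x), a codeword
v = c[:]
v[2] ^= 1                              # error e(x) = x^2

print(poly_mod_gf2(c, g))              # [0, 0, 0]: codeword -> zero syndrome
print(poly_mod_gf2(v, g))              # [0, 0, 1]: equals e(x) mod g(x)
```

Since deg e(x) < deg g(x) here, the syndrome is the error pattern itself, which is what the lookup table exploits.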
For each of the q^(n−k) syndrome values, store the corresponding minimal-weight e(x).

Theorem: Syndrome decoding is nearest neighbour decoding.
Proof: Let e(x) be the error polynomial obtained by syndrome decoding. Syndrome decoding differs from nearest neighbour decoding only if there exists an error polynomial e′(x) with weight strictly less than that of e(x) such that v(x) − e′(x) = c′(x) ∈ C. But v(x) − e(x) = c(x) ∈ C ⇒ e′(x) − e(x) ∈ C ⇒ e′(x) mod g(x) = e(x) mod g(x) = s(x). So s(x) = e′(x) mod g(x), contradicting the fact that, by definition, e(x) is the smallest-weight error polynomial with s(x) = e(x) mod g(x).

Let us construct a binary cyclic code which can correct two errors and can be decoded by algebraic means. Let n = 2^m − 1 and suppose α is a primitive element of GF(2^m) = GF(n + 1). Define
C = {c(x) ∈ GF(2)[x]/(x^n − 1) : c(α) = 0 and c(α^3) = 0 in GF(2^m)}.
Note that C is cyclic: C is a group under addition, and c(x) ∈ C ⇒ a(x) c(x) mod (x^n − 1) evaluates to a(α) c(α) = 0 at α and a(α^3) c(α^3) = 0 at α^3, so a(x) c(x) mod (x^n − 1) ∈ C.

We have the senseword v(x) = c(x) + e(x). Suppose at most 2 errors occur. Then e(x) = 0, or e(x) = x^i, or e(x) = x^i + x^j. Define X1 = α^i and X2 = α^j; X1 and X2 are called the error-location numbers, and they are unique because α has order n. Let
S1 = v(α) = α^i + α^j = X1 + X2
S3 = v(α^3) = α^(3i) + α^(3j) = X1^3 + X2^3.
We are given S1 and S3 and we have to find X1 and X2. If we can find X1 and X2, we know i and j and the senseword is properly decoded.
Under the assumption that at most 2 errors occur, the zeros of this quadratic polynomial are unique. To solve for X1 and X2, consider the polynomial
(x − X1)(x − X2) = x^2 + (X1 + X2) x + X1 X2,
with X1 + X2 = S1. Working in characteristic 2,
S1^3 = (X1 + X2)^3 = X1^3 + X1^2 X2 + X1 X2^2 + X2^3 = S3 + X1 X2 (X1 + X2) = S3 + X1 X2 S1
⇒ X1 X2 = (S1^3 + S3)/S1.
Therefore (x − X1)(x − X2) = x^2 + S1 x + (S1^3 + S3)/S1. We can construct the right-hand-side polynomial from the syndromes; if it can be solved uniquely for X1 and X2, the two errors can be corrected.

Procedure for decoding:
1. Compute the syndromes S1 = v(α) and S3 = v(α^3). If S1 = 0 and S3 = 0, assume no error (S1 = 0 iff no errors occur).
2. Else construct the polynomial x^2 + S1 x + (S1^3 + S3)/S1.
3. Find the roots X1 and X2. One easy way to find the zeros is to evaluate the polynomial for all 2^m values in GF(2^m).
4. With X1 = α^i and X2 = α^j, the errors occur in locations i and j.

Solution to Midterm II:
1. Trivial.
2. Not abelian in general: (a ∗ b)^(−1) = b^(−1) ∗ a^(−1), which equals a^(−1) ∗ b^(−1) only when the group is abelian.
3. Trivial.
4. a) The multiplicative order of an element of GF(64) divides 63; the possible orders are 1, 3, 7, 9, 21 and 63.
d) For α ∈ GF(p), α^(p−1) = 1 ⇒ α^p = α, so every element of GF(p) is a zero of x^p − x, which has at most p zeros. If some β ∈ GF(p^m) outside GF(p) also satisfied β^p = β, then x^p − x would have p + 1 roots, a contradiction.
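The two-error decoding procedure above can be exercised end to end; a minimal sketch of my own over GF(8) (m = 3, n = 7), assuming the all-zero codeword was sent with errors at positions 1 and 4:

```python
# Two-error-correcting decoder over GF(8): compute S1 = v(alpha) and
# S3 = v(alpha^3), then find the error-location numbers as the roots of
# x^2 + S1*x + (S1^3 + S3)/S1 by trying every element of GF(8).

M, F = 3, 0b1011                        # GF(8) = GF(2)[x]/(x^3 + x + 1)

def mul(a, b):
    # carry-less multiply with reduction modulo f(x)
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a >> M:
            a ^= F
    return r

def power(a, e):
    r = 1
    for _ in range(e):
        r = mul(r, a)
    return r

def inv(a):
    return power(a, 2 ** M - 2)         # a^(q-2) = a^(-1)

alpha, n = 0b010, 7                     # alpha = x is primitive; n = 2^m - 1
errors = {1, 4}                         # assumed: all-zero codeword, 2 errors
v = [1 if i in errors else 0 for i in range(n)]

S1 = S3 = 0                             # step 1: syndromes
for i, vi in enumerate(v):
    if vi:
        S1 ^= power(alpha, i)
        S3 ^= power(alpha, 3 * i)

# steps 2-3: roots of x^2 + S1*x + c0, with c0 = (S1^3 + S3)/S1
c0 = mul(power(S1, 3) ^ S3, inv(S1))
locs = sorted(i for i in range(n)
              if mul(power(alpha, i), power(alpha, i))
                 ^ mul(S1, power(alpha, i)) ^ c0 == 0)
print(locs)                             # [1, 4]
```

The exhaustive root search over all of GF(2^m) is exactly step 3 of the procedure; addition in GF(2^m) is XOR, which is why the quadratic is evaluated with ^.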
Many cyclic codes are characterized by the zeros of all codewords. Let C be a cyclic code over GF(q) of length n:
C = {c(x) : c(β1) = 0, c(β2) = 0, ..., c(βl) = 0},
where c0, ..., c_{n−1} ∈ GF(q) and β1, ..., βl ∈ GF(Q) ⊃ GF(q), so Q = q^m.
Note that the above definition imposes constraints on the values of q and n:
c(β) = 0 for all c ∈ C ⇒ (x − β) | c(x) ⇒ (x − β) | g(x) ⇒ (x − β) | x^n − 1 ⇒ β^n = 1 ⇒ n | Q − 1,
i.e. n | q^m − 1, because GF(Q) is an extension field of GF(q).

Lemma: Suppose n and q are coprime. Then there exists some number m such that n | q^m − 1.
Proof: Consider
q = Q1 n + S1
q^2 = Q2 n + S2
...
q^(n+1) = Q_{n+1} n + S_{n+1}.
All the remainders lie between 0 and n − 1. Because we have n + 1 remainders, at least two of them must be the same, say Si and Sj with i < j. Then
q^j − q^i = (Qj − Qi) n
q^i (q^(j−i) − 1) = (Qj − Qi) n,
and since n and q^i are coprime, n | q^(j−i) − 1. Put m = j − i; then n | q^m − 1.

Corollary: Suppose n and q are coprime and n | q^m − 1. Then g(x) = Π_{i=1}^{l} (x − βi) with βi ∈ GF(q^m), and n | q^m − 1 ⇒ (x^n − 1) | (x^(q^m − 1) − 1).
Then β has order n in GF(q^m).

Proof: z^k − 1 = (z − 1)(z^(k−1) + z^(k−2) + ... + 1). Let q^m − 1 = n r, and put z = x^n and k = r. Then
x^(nr) − 1 = x^(q^m − 1) − 1 = (x^n − 1)(x^(n(r−1)) + ... + 1)
⇒ (x^n − 1) | (x^(q^m − 1) − 1) ⇒ g(x) | (x^(q^m − 1) − 1).
But x^(q^m − 1) − 1 = Π_{i=1}^{q^m − 1} (x − αi), αi ∈ GF(q^m), αi ≠ 0. Therefore g(x) = Π_{i=1}^{l} (x − βi).

Note: A cyclic code of length n over GF(q), with n | q^m − 1, is uniquely specified by the zeros of g(x) in GF(q^m): C = {c(x) : c(βi) = 0, 1 ≤ i ≤ l}.

Defn. of BCH code: Suppose n and q are relatively prime and n | q^m − 1. Let α be a primitive element of GF(q^m) and let β = α^((q^m − 1)/n), so that β has order n. An (n, k) BCH code of design distance d is a cyclic code of length n given by
C = {c(x) : c(β) = c(β^2) = ... = c(β^(d−1)) = 0}.
Note: Often we have n = q^m − 1; in this case the resulting BCH code is called a primitive BCH code.

Theorem: The minimum distance of an (n, k) BCH code of design distance d is at least d.
Proof: Let

H = [ 1  β        β^2        ...  β^(n−1)        ]
    [ 1  β^2      β^4        ...  β^(2(n−1))     ]
    [ ...                                        ]
    [ 1  β^(d−1)  β^(2(d−1)) ...  β^((d−1)(n−1)) ]

Then c ∈ C ⇔ cH^T = 0 and c ∈ GF(q)^n.
Digression:
Theorem: A matrix A has an inverse iff det A ≠ 0.
Corollary: CA = 0 and C ≠ 0_{1×n} ⇒ det A = 0.
Proof: Suppose det A ≠ 0. Then A^(−1) exists ⇒ CAA^(−1) = 0 ⇒ C = 0, a contradiction.
Lemma: det(A) = det(A^T); multiplying a row by a scalar K multiplies det A by K.
Theorem: Any square Vandermonde matrix of the form

A = [ 1         1         ...  1         ]
    [ X1        X2        ...  Xd        ]
    [ X1^2      X2^2      ...  Xd^2      ]
    [ ...                                ]
    [ X1^(d−1)  X2^(d−1)  ...  Xd^(d−1)  ]

has a nonzero determinant iff all the Xi are distinct. Proof: See pp. 44, Section 2.6.
End of digression.

In order to show that the weight of every nonzero codeword is at least d, let us proceed by contradiction. Assume there exists a nonzero codeword c with weight w(c) = w < d, i.e. ci ≠ 0 only for i ∈ {n1, ..., nw}. Then cH^T = 0 reduces to (c_{n1}, ..., c_{nw}) M = 0, where

M = [ β^(n1)  β^(2 n1)  ...  β^((d−1) n1) ]
    [ β^(n2)  β^(2 n2)  ...  β^((d−1) n2) ]
    [ ...                                 ]
    [ β^(nw)  β^(2 nw)  ...  β^((d−1) nw) ]
Taking the first w columns of M gives (c_{n1}, ..., c_{nw}) B = 0, where

B = [ β^(n1)  β^(2 n1)  ...  β^(w n1) ]
    [ β^(n2)  β^(2 n2)  ...  β^(w n2) ]
    [ ...                             ]
    [ β^(nw)  β^(2 nw)  ...  β^(w nw) ]

⇒ det B = 0. But det(B) = det(B^T), and factoring β^(ni) out of row i leaves a Vandermonde matrix in the distinct elements β^(n1), ..., β^(nw), whose determinant is nonzero, a contradiction. Hence proved.

Suppose g(x) has zeros at β1, ..., βl. Can we write C = {c(x) : c(β1) = ... = c(βl) = 0}? No, not in general. Example: take n = 4 and g(x) = (x + 1)^2 = x^2 + 1 over GF(2), which divides x^4 − 1. Then {c(x) : c(1) = 0} = {a(x)(x + 1) mod (x^4 − 1)} = <x + 1>, not <g(x)>.

Some special cases:
1) Binary Hamming codes: let q = 2 and n = 2^m − 1; clearly n and q are coprime. Let α be a primitive element of GF(2^m). The (n, k) BCH code of design distance 3 is given by
C = {c(x) : c(α) = 0, c(α^2) = 0}.
In GF(2)[x], c(x^2) = c(x)^2, because (a + b)^2 = a^2 + b^2 gives
[Σ_i ci x^i]^2 = Σ_i ci^2 x^(2i) = Σ_i ci x^(2i) = c(x^2).
So c(α) = 0 already forces c(α^2) = 0, and therefore C = {c(x) : c(α) = 0}.
Let H = [α^0 α^1 ... α^(n−1)], where each β ∈ GF(2^m) is written as a binary column vector via β = a0 + a1 x + ... + a_{m−1} x^(m−1), ai ∈ GF(2). Then c ∈ C ⇔ cH^T = 0.
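This characterization can be checked by brute force; a minimal sketch of my own for n = 7: enumerating all 2^7 binary vectors and keeping those with Σ ci α^i = 0 in GF(8) recovers exactly the 16 codewords of the (7, 4) Hamming code, with minimum weight 3.

```python
# The length-7 binary code {c(x) : c(alpha) = 0}, alpha primitive in GF(8),
# enumerated exhaustively: it is the (7,4) Hamming code.

M, F = 3, 0b1011                       # GF(8) = GF(2)[x]/(x^3 + x + 1)

def mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a >> M:
            a ^= F
    return r

apow = []                              # alpha^0 .. alpha^6, columns of H
e = 1
for _ in range(7):
    apow.append(e)
    e = mul(e, 0b010)

codewords = []
for w in range(2 ** 7):                # c as a 7-bit mask
    s = 0
    for i in range(7):
        if (w >> i) & 1:
            s ^= apow[i]               # addition in GF(8) is XOR
    if s == 0:
        codewords.append(w)

print(len(codewords))                                      # 16
print(min(bin(w).count("1") for w in codewords if w))      # 3
```

The count 2^7 / 2^3 = 16 reflects that the syndrome map onto GF(8) is linear and surjective, and minimum weight 3 matches the design distance.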
β ∈ GF (q m ) has order n. k. 7) RS code over GF (11). . codes : Take n = q − 1 n=q−1 C = {c(x) : c(α) = c(α2 ) . q coprime. . codes are optimal in this sense Example : 3 error correcting RS code over GF (11) n = q − 1 = 10 d = 7 k = n − d + 1 = 10 − 7 + 1 = 4 (10. ⇔ rows are linearly dependent. 33) code.S. . g(x) = (x − α)(x − α2 )(x − α3 )(x − α∗ )(x − α5 )(x − α6 ) 2 is a primitive element = (x − 2)(x − 4)(x − 8)(x − 5)(x − 10)(x − 9) = x6 + 6x5 + 5x4 + 7x3 + 2x2 + 8x + 2 EX : dual of an MDS code is dual : Hint columns are linearly dependent. we have an (n. Review : n. 223. k) BCH code of design d is deﬁned as C = {c(x) : c(β) = . c(αd−1 ) = 0} The generator polynomial for this code is g(x) = (x − α)(x − α2 ) . . NASA standard RS code is a (255. 28) and (28. c(β d−1 )} = 0 910 .S. Reallife examples : music CD standard RS codes are (32. 24) double error correcting codes over GF (256). . n − k + 1) MDS code R. 4. A (n. . nq m − 1.1 0 0 1 1 1 H= 0 1 0 1 1 0 0 0 1 0 1 1 2) R. (x − αd−1 )α is primitive Ex : Prove that g(x) is the generator polynomial of C deg g(x) = n − k = d − 1 ⇒d =n−k+1 Therefore.
Here β ∈ GF(q^m) has order n. This definition is useful in deriving the distance structure of the code. The obvious questions to ask are:
1) Encoding: we need to find the generator polynomial.
2) Decoding: will not be covered. The Peterson-Gorenstein-Zierler (PGZ) decoder described in Section 6.6 is an extension of the two-error-correcting decoder we described earlier.

Minimal polynomials: Let α ∈ GF(q^m). The monic polynomial of smallest degree over GF(q) with α as a zero is called the minimal polynomial of α over GF(q). This is a slight generalization of the definition given previously, where q was equal to p. Recall: existence, uniqueness, primality.

Lemma: Let f(x) be the minimal polynomial of α ∈ GF(q^m), and suppose g(x) is a polynomial over GF(q) such that g(α) = 0. Then f(x) | g(x).
Proof: g(x) = f(x) Q(x) + r(x), where deg r(x) < deg f(x). g(α) = 0 ⇒ r(α) = g(α) − f(α) Q(α) = 0. Hence r(x) must be 0.
Alternate proof: appeal to the primality of f(x).

Primitive polynomial: The minimal polynomial of a primitive element is called a primitive polynomial. Suppose α is a primitive element of GF(q^m) and p(x) is the minimal polynomial of α over GF(q). Then GF(q^m) ≅ GF(q)[x]/p(x), and the polynomial x ∈ GF(q)[x]/p(x) is a primitive element of GF(q)[x]/p(x).
Corollary: If p(β) = 0 and p(x) is prime, then p(x) is the minimal polynomial of β.

Theorem: Consider a BCH code with zeros at β, β^2, ..., β^(d−1), and let f_k(x) be the minimal polynomial of β^k over GF(q). Then the generator polynomial is given by
g(x) = LCM(f1(x), f2(x), ..., f_{d−1}(x)).
Proof: c(β^k) = 0 ⇒ f_k(x) | c(x) for 1 ≤ k ≤ d − 1 ⇒ LCM(f1(x), ..., f_{d−1}(x)) | c(x), by unique prime factorization.
Therefore deg g(x) ≤ deg c(x) for all c(x) ∈ C. Also g(x) ∈ C. Therefore g(x) is the generator polynomial.

Finding the generator polynomial thus reduces to finding the minimal polynomials of the zeros of the generator polynomial. Brute-force approach: factorize x^n − 1 into its prime polynomials over GF(q)[x]:
x^n − 1 = b1(x) b2(x) ... bl(x).
Since g(x) | x^n − 1, g(x) has to be a product of a subset of these factors. We know that g(β^k) = 0 ⇒ f_k(x) is a factor of g(x) and hence of x^n − 1. By unique prime factorization, f_k(x) must be equal to one of the bj(x), 1 ≤ j ≤ l; we only need to find the polynomial bj(x) for which bj(β^k) = 0. Repeat this procedure to find the minimal polynomials of all the zeros and take their LCM to find the generator polynomial.

Problem: factorizing x^n − 1 is hard.

Theorem: Let p be the characteristic of GF(q), and let f(x) = Σ_{i=0}^{n} f_i x^i, f_i ∈ GF(q), be a polynomial over GF(q). Then, for all m,
f(x)^(p^m) = Σ_{i=0}^{n} f_i^(p^m) x^(i p^m).
Proof: (a + b)^p = a^p + b^p, and iterating, (a + b)^(p^2) = (a^p + b^p)^p = a^(p^2) + b^(p^2); in general (a + b)^(p^m) = a^(p^m) + b^(p^m). Likewise (a + b + c)^(p^m) = a^(p^m) + b^(p^m) + c^(p^m), and in general (Σ_i a_i)^(p^m) = Σ_i a_i^(p^m). Therefore
f(x)^(p^m) = (Σ_{i=0}^{n} f_i x^i)^(p^m) = Σ_{i=0}^{n} (f_i x^i)^(p^m) = Σ_{i=0}^{n} f_i^(p^m) x^(i p^m).

Corollary: Suppose f(x) is a polynomial over GF(q). Then f(x)^q = f(x^q).
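As an aside, the brute-force factorization above can be checked for n = 7 over GF(2), where x^7 + 1 = (x + 1)(x^3 + x + 1)(x^3 + x^2 + 1); a minimal sketch of my own, multiplying the prime factors back with bit-mask arithmetic:

```python
# Polynomials over GF(2) as bit-masks (bit i = coefficient of x^i).
# Multiplying the prime factors of x^7 - 1 recovers x^7 + 1 (= x^7 - 1 in GF(2)).

def gf2_poly_mul(a, b):
    r = 0
    i = 0
    while b >> i:
        if (b >> i) & 1:
            r ^= a << i       # add (XOR) a(x) * x^i
        i += 1
    return r

f1, f2, f3 = 0b11, 0b1011, 0b1101      # x+1, x^3+x+1, x^3+x^2+1
prod = gf2_poly_mul(gf2_poly_mul(f1, f2), f3)
print(bin(prod))                        # 0b10000001, i.e. x^7 + 1
```

The two cubic factors are the minimal polynomials of α and α^3 for α primitive in GF(8), which is how the generator polynomial of the double-error-correcting code of Lecture 9 is assembled.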
Proof: Since q = p^m,
f(x)^q = [Σ_{i=0}^{r} f_i x^i]^q = Σ_{i=0}^{r} f_i^q x^(iq) = Σ_{i=0}^{r} f_i x^(iq) = f(x^q),
where r = deg f(x), using f_i ∈ GF(q) ⇒ f_i^q = f_i. Therefore f(β^q) = f(β)^q, and in particular f(β) = 0 ⇒ f(β^q) = 0.

Conjugates: The conjugates of β over GF(q) are the zeros of the minimal polynomial of β over GF(q) (including β itself).

Theorem: The conjugates of β over GF(q) are β, β^q, β^(q^2), ..., β^(q^(r−1)), where r is the least positive integer such that β^(q^r) = β.
Proof: First we check that the β^(q^k) are indeed conjugates of β: repeatedly using the fact that f(x)^q = f(x^q),
f(β) = 0 ⇒ f(β^q) = 0 ⇒ f(β^(q^2)) = 0, and so on; therefore f(β) = 0 ⇒ f(β^(q^k)) = 0.
Now let f(x) = (x − β)(x − β^q) ... (x − β^(q^(r−1))). If we show that f(x) ∈ GF(q)[x], then f(x) is the minimal polynomial and hence β, β^q, ..., β^(q^(r−1)) are the only conjugates of β. We have
f(x)^q = (x − β)^q (x − β^q)^q ... (x − β^(q^(r−1)))^q
       = (x^q − β^q)(x^q − β^(q^2)) ... (x^q − β^(q^r))
       = (x^q − β)(x^q − β^q) ... (x^q − β^(q^(r−1)))
       = f(x^q).
Writing f(x)^q = f0^q + f1^q x^q + ... + fr^q x^(rq) and f(x^q) = f0 + f1 x^q + ... + fr x^(rq) and equating the coefficients, we have f_i^q = f_i ⇒ f_i ∈ GF(q) ⇒ f(x) ∈ GF(q)[x].
(Also: suppose g(x) is the minimal polynomial of β^q. Then f(β^q) = 0 ⇒ g(x) | f(x), and f(x) prime ⇒ g(x) = f(x).)

Examples: GF(4) = {0, 1, 2, 3}.
The conjugates of 2 in GF(4) over GF(2) are {2, 2^2} = {2, 3}. Therefore the minimal polynomial of 2 and 3 is
(x − 2)(x − 3) = x^2 + (2 + 3)x + 2 × 3 = x^2 + x + 1.
In GF(8) = {0, 1, α, ..., α^2 + α + 1}, the conjugates of 2 over GF(2) are {2, 2^2, 2^4} and the minimal polynomial is x^3 + x + 1.

Cyclic codes are often used to correct burst errors and to detect errors. A cyclic burst of length t is a vector whose nonzero components are confined to t cyclically consecutive components, the first and last of which are nonzero. We can write such an errorword as
e(x) = x^i b(x) mod (x^n − 1), where deg b(x) = t − 1 and b0 ≠ 0.
In the setup we have considered so far, the optimal decoding procedure is nearest neighbour decoding; for cyclic codes this reduces to calculating the syndrome and then doing a lookup (at least conceptually). Hence a cyclic code which corrects burst errors must have syndrome polynomials that are distinct for each correctable error pattern.

Suppose a linear code (not necessarily cyclic) can correct all burst errors of length t or less. Then it cannot have a burst of length 2t or less as a codeword: a burst of length t could change such a codeword into a burst pattern of length t or less, which could also be obtained by a burst pattern of length ≤ t operating on the all-zero codeword. So no nonzero codeword is a burst of length ≤ 2t.

Rieger Bound: A linear block code that corrects all burst errors of length t or less must satisfy n − k ≥ 2t.
Proof: Consider the coset decomposition of the code; #cosets = q^(n−k). Consider two vectors c1, c2 which are nonzero only in their first 2t components and differ there. If they were in the same coset, then c1 − c2 ∈ C would be a burst of length ≤ 2t, which is impossible; so c1 and c2 are in different cosets. Therefore
#cosets ≥ #(n-tuples differing in their first 2t components)
so q^(n−k) ≥ q^(2t) ⇒ n − k ≥ 2t.

Almost always, good error-correcting cyclic codes have been found by computer search. For small n and t these codes are shortened considerably; they can be augmented by interleaving. Section 5.9 contains more details. Example: g(x) = x^6 + x^3 + x^2 + x + 1, with n − k = 6 and 2t = 6.

Cyclic codes are also used extensively in error detection, where they are often called cyclic redundancy codes. The two main advantages of these codes are:
1) Error detection amounts to computing the syndrome polynomial, which can be efficiently implemented using shift registers.
2) Good performance at high rate.
The generator polynomial for these codes is of the form g(x) = (x + 1) p(x), where p(x) is a primitive polynomial of degree m. The blocklength is n = 2^m − 1 and k = 2^m − 1 − (m + 1); for m = 32 this gives a (2^32 − 1, 2^32 − 33) code.
(x + 1) | g(x) ⇒ (x + 1) | c(x) ⇒ c(1) = 0, so the codewords have even weight. Also p(x) | c(x), so every nonzero c(x) is a Hamming codeword. Hence C is the set of even-weight codewords of a Hamming code, and dmin = 4.

Examples:
CRC-16: g(x) = (x + 1)(x^15 + x + 1) = x^16 + x^15 + x^2 + 1
CRC-CCITT: g(x) = x^16 + x^12 + x^5 + 1
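These generators can be exercised directly; a minimal sketch of my own with the CRC-CCITT polynomial, using the append-remainder construction so that every valid senseword is a multiple of g(x):

```python
# CRC error detection with g(x) = x^16 + x^12 + x^5 + 1 (CRC-CCITT).
# The sender appends the remainder of data(x)*x^16 so that g(x) | v(x);
# the receiver divides the senseword by g(x) and flags s(x) != 0.

G = (1 << 16) | (1 << 12) | (1 << 5) | 1

def poly_mod(v, g):
    # remainder of v(x) / g(x) over GF(2), polynomials as bit-masks
    dg = g.bit_length() - 1
    while v.bit_length() - 1 >= dg:
        v ^= g << (v.bit_length() - 1 - dg)
    return v

data = 0b1011001110001011                 # arbitrary example data bits
shifted = data << 16                      # data(x) * x^16
codeword = shifted | poly_mod(shifted, G) # append the 16 check bits

print(poly_mod(codeword, G))              # 0: clean senseword passes
corrupted = codeword ^ (0b1111 << 7)      # burst of length 4
print(poly_mod(corrupted, G) != 0)        # True: burst detected
```

Any burst shorter than n − k = 16 gives a nonzero syndrome here, which is the detection guarantee argued below.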
Error detection algorithm: Divide the senseword v(x) by g(x) to obtain the syndrome polynomial (remainder) s(x). s(x) ≠ 0 implies an error has been detected.

No burst of length less than n − k is a codeword: with e(x) = x^i b(x) mod (x^n − 1) and deg b(x) < n − k,
s(x) = e(x) mod g(x) = x^i b(x) mod g(x) = [x^i mod g(x)] · [b(x) mod g(x)] mod g(x).
If s(x) = 0, then g(x) | x^i b(x). But g(x) ∤ x^i: otherwise g(x) | x^i ⇒ g(x) | x^n, while g(x) | x^n − 1, so g(x) would divide 1. Hence g(x) | b(x), which is impossible since deg b(x) < deg g(x). So if n − k = 32, every burst of length less than 32 will be detected.

We looked at both source coding and error control coding in this course. Modulation, lossy coding, and probabilistic techniques like turbo codes and trellis codes were not covered.
E2 231: Data Communication Systems
January 7, 2005
Homework 1

1. In communication systems, the two primary resource constraints are transmitted power and channel bandwidth. The capacity (in bits/s) of a bandlimited additive white Gaussian noise channel is given by
C = W log2(1 + P/(N0 W)),
where W is the bandwidth, N0/2 is the two-sided power spectral density and P is the signal power. Find the minimum energy required to transmit a single bit over this channel. Take W = 1, N0 = 1. (Hint: Energy = Power × Time)

2. Consider the so-called Binary Symmetric Channel (BSC) shown below. Bits are erroneously received with probability p. Assume that at the input, bits are equally likely to be 1 or 0 and that successive bits are independent of each other. Suppose that each bit is transmitted several times to improve reliability (repetition coding). What is the improvement in bit error rate with 3 and 5 repetitions, respectively?

Figure 1: Binary Symmetric Channel (inputs 0, 1 flipped with probability p, passed through with probability 1 − p)

3. Assume that there are exactly 2^(1.5n) equally likely, grammatically correct strings of n characters, for every n (this is, of course, only approximately true). What is then the minimum bit rate required to transmit spoken English? You can make any reasonable assumption about how fast people speak, language structure and so on. Compare this with the 64 Kbps bitstream produced by sampling speech 8000 times/s and then quantizing each sample to one of 256 levels.
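For Problem 2, the post-voting error rate can be computed directly; a minimal sketch of my own (p = 0.1 is an assumed illustrative crossover probability):

```python
# With n-fold repetition over a BSC(p) and majority voting, a bit is
# decoded wrongly exactly when more than n/2 of its copies are flipped.

from math import comb

def repetition_error(p, n):
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

p = 0.1
for n in (1, 3, 5):
    print(n, repetition_error(p, n))
# 1 -> 0.1, 3 -> 0.028, 5 -> 0.00856 (approximately)
```

The error rate falls quickly with n, but so does the rate (1/n data bits per channel use), which is the trade-off the course's channel-coding results quantify.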
9. 1 ≤ k ≤ N. Let λk . 11. k=1 be a sequence of numbers in (a. Solve Parts (a) and (b) of 5. 18 and 22. Suppose Ef (X) = f (EX). Suppose f (x) be a strictly convex function over the interval (a. Solve the following problems from Chapter 5 of Cover and Thomas : 4. b). 12. all of which are not equal.E2 231: Data Communication Systems January 17. 15.26. b). Let xk . 6. 2005 Homework 2 1. Show that a) f ( N k=1 λk xk ) < N k=1 λk f (xk ) b) Let X be a discrete random variable. 1 ≤ k ≤ N be a sequence of nonnegative real numbers such that N λk = 1. Use Part(a) to show that X is a constant. 21 . 28.
E2 231: Data Communication Systems
February 3, 2005
Homework 3

1. A random variable X is uniformly distributed between −10 and +10. The probability density function of X is given by
fX(x) = 1/20 for −10 ≤ x ≤ 10, and 0 otherwise.
Suppose Y = X^2. What is the probability density function of Y?

2. Suppose X1, X2, ..., XN are independent and identically distributed continuous random variables. What is the probability that X1 = min(X1, X2, ..., XN)? (Hint: Solve for N = 2 and generalize.)

3. Solve the following problems from Chapter 3 of Cover and Thomas: 1, 2, 3, 7, 8, 10, 12 and 14.

4. Solve the following problems from Chapter 4 of Cover and Thomas: 2 (Hint: H(X, Y) = H(Y, X)).
E2 231: Data Communication Systems
February 18, 2005
Homework 4

1-4. Solve the following problems from Chapter 4 of Cover and Thomas: 2, 9, 10 and 14.
E2 231: Data Communication Systems
March 6, 2005
Homework 5

1. a) Perform a Lempel-Ziv compression of the string given below. Assume a window size of 8.
000111000011100110001010011100101110111011110001101101110111000010001100011
b) Assume that the above binary string is obtained from a discrete memoryless source which can take 8 values. Each value is encoded into a 3-bit segment to generate the given binary string. Estimate the probability of each 3-bit segment by its relative frequency in the given binary string. Use this probability distribution to devise a Huffman code. How many bits do you need to encode the given binary string using this Huffman code?

2. a) Show that a code with minimum distance dmin can correct any pattern of p erasures if dmin ≥ p + 1.
b) Show that a code with minimum distance dmin can correct any pattern of p erasures and t errors if dmin ≥ 2t + p + 1.

3. For any q-ary (n, k) block code with minimum distance 2t + 1 or greater, show that the number of data symbols satisfies
n − k ≥ log_q [ 1 + C(n,1)(q − 1) + C(n,2)(q − 1)^2 + ... + C(n,t)(q − 1)^t ]

4. Show that only one group exists with three elements. Construct this group and show that it is abelian.

5. a) Let G be a group and a′ denote the inverse of any element a ∈ G. Prove that (a ∗ b)′ = b′ ∗ a′.
b) Let G be an arbitrary finite group (not necessarily abelian). Let h be an element of G and H be the subgroup generated by h. Show that H is abelian.
c) (Putnam, 2001) Consider a set S and a binary operation ∗, i.e., for each a, b ∈ S, a ∗ b ∈ S. Assume (a ∗ b) ∗ a = b for all a, b ∈ S. Prove that a ∗ (b ∗ a) = b for all a, b ∈ S.

6-15. Solve the following problems from Chapter 2 of Blahut: 5, 6, 7, 11, 14, 15, 16, 17, 18 and 19.
E2 231: Data Communication Systems
March 14, 2005
Homework 6
1-6. Solve the following problems from Chapter 3 of Blahut: 1, 2, 4, 5, 11 and 13.

7. We defined maximum distance separable (MDS) codes as those linear block codes which satisfy the Singleton bound with equality, i.e., d = n − k + 1. Find all binary MDS codes. (Hint: Two binary MDS codes are the repetition codes (n, 1, n) and the single parity check codes (n, n − 1, 2). It turns out that these are the only nontrivial binary MDS codes. All you need to show now is that a binary (n, k, n − k + 1) linear block code cannot exist for k other than 1 and n − 1.)

8. A perfect code is one for which there are Hamming spheres of equal radius about the codewords that are disjoint and that completely fill the space (see Blahut, pp. 59-61, for a fuller discussion). Prove that all Hamming codes are perfect. Show that the (9, 7) Hamming code over GF(8) is a perfect as well as an MDS code.
E2 231: Data Communication Systems
March 28, 2005
Homework 7
1-12. Solve the following problems from Chapter 4 of Blahut: 1, 2, 3, 4, 5, 7, 9, 10, 12, 13, 15 and 18.

13. Recall that the Euler totient function, φ(n), is the number of positive integers less than n that are relatively prime to n. Show that
a) If p is prime and k ≥ 1, then φ(p^k) = (p − 1) p^(k−1).
b) If n and m are relatively prime, then φ(mn) = φ(m)φ(n).
c) If the factorization of n is n = Π_i q_i^(k_i), then φ(n) = Π_i (q_i − 1) q_i^(k_i − 1).
14. Suppose p(x) is a prime polynomial of degree m > 1 over GF(q). Prove that p(x) divides x^(q^m − 1) − 1. (Hint: Think minimal polynomials and use Theorem 4.6.4 of Blahut.)

15. In a finite field with characteristic p, show that (a + b)^p = a^p + b^p, a, b ∈ GF(q).
E2 231: Data Communication Systems
April 6, 2005
Homework 8
1. Suppose p(x) is a prime polynomial of degree m > 1 over GF(q). Prove that p(x) divides x^(q^m − 1) − 1. (Hint: Think minimal polynomials and use Theorem 4.6.4 of Blahut.)

2-9. Solve the following problems from Chapter 5 of Blahut: 1, 5, 6, 7, 10, 13, 14 and 15.

10-11. Solve the following problems from Chapter 6 of Blahut: 5 and 6.

12. This problem develops an alternate view of Reed-Solomon codes. Suppose we construct a code over GF(q) as follows. Take n = q (note that this is different from the code presented in class, where n was equal to q − 1). Suppose the dataword polynomial is given by a(x) = a0 + a1 x + ... + a_{k−1} x^(k−1). Let α1, α2, ..., αq be the q different elements of GF(q). The dataword polynomial is then mapped into the n-tuple (a(α1), a(α2), ..., a(αq)). In other words, the j-th component of the codeword corresponding to the data polynomial a(x) is given by
a(αj) = Σ_{i=0}^{k−1} a_i αj^i ∈ GF(q), 1 ≤ j ≤ q
a) Show that the code as defined is a linear block code.
b) Show that this code is an MDS code. It is often called an extended RS code.
c) Suppose this code is punctured in r ≤ q − k = d − 1 places (r components of each codeword are removed) to yield a (q − r, k, q − r − k + 1) code. Show that the punctured code is also an MDS code.
Hint: a(x) is a polynomial of degree less than or equal to k − 1. By the Fundamental Theorem of Algebra, a(x) can have at most k − 1 zeros. Therefore, a nonzero codeword can have at most k − 1 symbols equal to 0. Hence d ≥ n − k + 1. The proof follows.
E2 231: Data Communication Systems
February 16, 2004
Midterm Exam 1

1. a) A fair coin is flipped until the first head occurs. Let X denote the number of flips required. Find the entropy of X. (2 marks)
b) Let l1, l2, ..., lM be the codeword lengths of a binary code obtained using the Huffman procedure. Prove that Σ_{k=1}^{M} 2^(−lk) = 1. (3 marks)

2. Find the binary Huffman code for the source with probabilities (1/3, 1/5, 1/5, 2/15, 2/15). (2 marks)

3. A code is said to be suffix free if no codeword is a suffix of any codeword.
a) Show that suffix free codes are uniquely decodable. (2 marks)
b) Suppose you are asked to develop a suffix free code for a discrete memoryless source. Prove that the minimum average length over all suffix free codes is the same as the minimum average length over all prefix free codes. (2 marks)
c) State a disadvantage of using suffix free codes. (1 mark)

4. Let X1, X2, ... be an i.i.d sequence of discrete random variables with entropy H, taking values in the set X. Let Bn(s) = {x^n ∈ X^n : p(x^n) ≥ 2^(−ns)} denote the subset of n-tuples which occur with probability at least 2^(−ns).
a) Prove that |Bn(s)| ≤ 2^(ns). (2 marks)
b) For any ε > 0, show that lim_{n→∞} Pr(Bn(H + ε)) = 1 and lim_{n→∞} Pr(Bn(H − ε)) = 0. (3 marks)

5. Let X1, X2, ... be a stationary stochastic process. Show that
H(X1, ..., Xn)/n ≤ H(X1, ..., X_{n−1})/(n − 1). (3 marks)
E2 231: Data Communication Systems — March 29, 2004
Midterm Exam 2

1. a) For any q-ary (n, k) block code with minimum distance 2t + 1 or greater, show that the number of data symbols satisfies

      n − k ≥ log_q [ 1 + C(n, 1)(q − 1) + C(n, 2)(q − 1)² + · · · + C(n, t)(q − 1)^t ]

      (2 marks)
   b) Show that a code with minimum distance dmin can correct any pattern of p erasures and t errors if dmin ≥ 2t + p + 1. (3 marks)

2. a) Let S5 be the symmetric group whose elements are the permutations of the set X = {1, 2, 3, 4, 5} and the group operation is the composition of permutations.
      • How many elements does this group have? (1 mark)
      • Is the group abelian? (1 mark)
   b) Consider a group where every element is its own inverse. Prove that the group is abelian. (2 marks)

3. a) You are given a binary linear block code where every codeword c satisfies cA^T = 0, for the matrix A given below.

      A = | 1 1 0 0 1 0 |
          | 0 1 1 0 1 0 |
          | 0 0 1 0 1 0 |
          | 0 0 0 0 1 0 |
          | 0 0 0 0 1 0 |

      • Find the parity check matrix and the generator matrix for this code. (2 marks)
      • What is the minimum distance of this code? (1 mark)
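The bound in problem 1(a) is the sphere-packing (Hamming) bound: the q^k decoding spheres of radius t must fit inside the space of q^n words. A small numeric check is sketched below; it is illustrative only, using the binary (7, 4) Hamming code, which is a perfect code and therefore meets the bound with equality.

```python
# Illustrative check of the sphere-packing bound
#   n - k >= log_q( sum_{i=0}^{t} C(n, i) (q - 1)^i ).
import math

def sphere_packing_rhs(n, q, t):
    return math.log(sum(math.comb(n, i) * (q - 1) ** i for i in range(t + 1)), q)

# Binary (7, 4) Hamming code, t = 1: each sphere holds 1 + 7 = 8 words,
# and n - k = 3 = log2(8), so the bound is met with equality (perfect code).
assert math.isclose(sphere_packing_rhs(7, 2, 1), 3.0)

# A hypothetical single-error-correcting (7, 5) code would violate the bound:
assert 7 - 5 < sphere_packing_rhs(7, 2, 1)
```

The same function can be used to rule out parameter combinations before attempting a construction, which is exactly the style of argument problem 3(b) on the next page asks for.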
   b) Show that a linear block code with parameters n = 7, k = 4 and dmin = 5 cannot exist. (1 mark)
   c) Give the parity check matrix and the generator matrix for the (n, 1) repetition code. (1 mark)

4. a) Let α be an element of GF(64). What are the possible values for the multiplicative order of α? (1 mark)
   b) Evaluate (2x² + 2)(x² + 2) in GF(3)[x]/(x³ + x² + 2). (1 mark)
   c) Use the extended Euclidean algorithm to find the multiplicative inverse of 9 in GF(13). (2 marks)
   d) Suppose p is prime and α is an element of GF(p^m), m > 1. Show that α^p = α implies that α belongs to the subfield GF(p). (2 marks)
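Problem 4(c) asks for an inverse in GF(13) via the extended Euclidean algorithm, which produces x and y with ax + py = gcd(a, p); when the gcd is 1, x mod p is the inverse. An illustrative sketch (not part of the exam):

```python
# Extended Euclidean algorithm: find the multiplicative inverse of 9 in GF(13).
def egcd(a, b):
    # returns (g, x, y) with a*x + b*y = g = gcd(a, b)
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

def inverse_mod(a, p):
    g, x, _ = egcd(a % p, p)
    assert g == 1, "inverse exists only when gcd(a, p) = 1"
    return x % p

inv = inverse_mod(9, 13)
assert (9 * inv) % 13 == 1
assert inv == 3          # since 9 * 3 = 27 = 2*13 + 1
```

In a prime field GF(p) every nonzero element is coprime to p, so the gcd check always succeeds; for extension fields like GF(64) in part (a), the same algorithm is run on polynomials instead of integers.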
E2 231: Data Communication Systems — April 22, 2004
Final Exam

1. For each statement given below, state whether it is true or false. Provide brief (1–2 line) explanations.
   a) The source code {0, 01} is uniquely decodable. (1 mark)
   b) There exists a prefix free code with codeword lengths {2, 2, 3, 3, 4, 4}. (1 mark)
   c) Suppose X1, X2, ..., Xn are discrete random variables with the same entropy value. Then H(X1, X2, ..., Xn) ≤ nH(X1). (1 mark)
   d) The generator polynomial of a cyclic code has order 7. This code can detect all cyclic bursts of length at most 7. (1 mark)

2. Let {p1 > p2 ≥ p3 ≥ p4} be the symbol probabilities for a source which can assume four values.
   a) What are the possible sets of codeword lengths for a binary Huffman code for this source? (1 mark)
   b) Prove that for any binary Huffman code, if the most probable symbol has probability p1 > 2/5, then that symbol must be assigned a codeword of length 1. (2 marks)
   c) Prove that for any binary Huffman code, if the most probable symbol has probability p1 < 1/3, then that symbol must be assigned a codeword of length 2. (2 marks)

3. Let X0, X1, X2, ... be independent and identically distributed random variables taking values in the set X = {1, 2, ..., m} and let N be the waiting time to the next occurrence of X0, where N = min {n ≥ 1 : Xn = X0}.
   a) Show that EN = m. Hint: EN = Σ_{n≥1} P(N ≥ n). (2 marks)
   b) Show that E log(N) ≤ H(X). Hint: EY = E(E(Y | X)). (3 marks)

4. Consider a sequence of independent and identically distributed binary random variables X1, X2, ..., Xn. Suppose Xk can either be 1 or 0. Let A_ε^(n) be the typical set associated with this process.
   a) Suppose P(Xk = 1) = 0.5. For each ε and each n, determine |A_ε^(n)| and P(A_ε^(n)). (1 mark)
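The waiting-time result EN = m is distribution-free: conditioned on X0 = x, N is geometric with success probability p_x, so E[N | X0 = x] = 1/p_x and EN = Σ_x p_x · (1/p_x) = m. A Monte Carlo sketch of this (illustrative only — the distribution over m = 4 symbols below is made up, and the seed and sample count are arbitrary):

```python
# Simulate the waiting time N = min{n >= 1 : Xn = X0} and check E[N] ~ m,
# independent of the (non-uniform) symbol distribution.
import random

random.seed(0)
m = 4
weights = [0.5, 0.25, 0.125, 0.125]   # an arbitrary pmf on {0, 1, 2, 3}

def waiting_time():
    x0 = random.choices(range(m), weights)[0]
    n = 1
    while random.choices(range(m), weights)[0] != x0:
        n += 1
    return n

trials = 100_000
est = sum(waiting_time() for _ in range(trials)) / trials
assert abs(est - m) < 0.1   # estimate concentrates near m = 4
```

The surprise in the problem is exactly this independence from the distribution: rare symbols take longer to recur, but they are also rarely drawn as X0, and the two effects cancel.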
   b) Suppose P(Xk = 1) = 3/4. Show that

      H(Xk) = 2 − (3/4) log 3

      and

      −(1/n) log P(X1, X2, ..., Xn) = 2 − (Z/n) log 3

      where Z is the number of ones in X1, ..., Xn. (2 marks)
   c) For P(Xk = 1) = 3/4, n = 8 and ε = (1/8) log 3, compute |A_ε^(n)| and P(A_ε^(n)). (2 marks)

5. Imagine you are running the pathology department of a hospital and are given 5 blood samples to test. You know that exactly one of the samples is infected. Suppose the probability that the k-th sample is infected is given by (p1, p2, ..., p5) = (4/12, 3/12, 2/12, 2/12, 1/12).
   a) Suppose you test the samples one at a time. Find the order of testing to minimize the expected number of tests required to determine the infected sample. (1 mark)
   b) What is the expected number of tests required? (1 mark)
   c) Suppose now that you can mix the samples: you draw a small amount of blood from selected samples and test the mixture. You proceed, mixing and testing, stopping when the infected sample has been determined. What is the minimum expected number of tests in this case? (2 marks)
   d) In part (c), for the first test, which mixture will you test first? (1 mark)

6. a) Let C be a binary Hamming code with n = 7 and k = 4. Suppose a new code C′ is formed by appending to each codeword x = (x1, ..., x7) ∈ C an overall parity check bit equal to x1 + x2 + · · · + x7. Find the parity check matrix and the minimum distance for the code C′. (2 marks)
   b) Show that in a binary linear block code, for any bit location, either all the codewords contain 0 in the given location or exactly half have 0 and half have 1 in the given location. (2 marks)

7. a) Suppose α and β are elements of order n and m respectively, of a finite Abelian group, and that GCD(n, m) = 1. Prove that the order of the element α ∗ β is nm. (3 marks)
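In the typical-set problem above, since −(1/n) log p(x) = 2 − (Z/n) log 3 depends only on the number of ones Z, membership in A_ε^(n) reduces to a condition on Z alone: for n = 8 and ε = (1/8) log 3, the requirement |(3/4 − Z/8) log 3| ≤ ε becomes |Z − 6| ≤ 1, i.e. Z ∈ {5, 6, 7}. An illustrative sketch of the resulting count and probability:

```python
# Compute |A_eps^(n)| and P(A_eps^(n)) for P(Xk = 1) = 3/4, n = 8,
# eps = (1/8) log 3, using the fact that typicality depends only on Z.
from math import comb, log2, isclose

n, p = 8, 3 / 4
H = 2 - p * log2(3)                 # H(Xk) = 2 - (3/4) log 3
eps = log2(3) / 8

# -(1/n) log p(x) = 2 - (z/n) log 3; the 1e-12 slack absorbs float rounding
# at the boundary cases z = 5 and z = 7.
typical_Z = [z for z in range(n + 1)
             if abs((2 - (z / n) * log2(3)) - H) <= eps + 1e-12]
assert typical_Z == [5, 6, 7]

size = sum(comb(n, z) for z in typical_Z)                 # |A_eps^(n)|
prob = sum(comb(n, z) * 3 ** z for z in typical_Z) / 4 ** n
assert size == 92                                         # 56 + 28 + 8
assert isclose(prob, 51516 / 65536)                       # about 0.786
```

So the typical set holds only 92 of the 256 sequences yet carries most of the probability — the numerical face of the asymptotic equipartition property at a very small n.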
   b) Prove that f(x) = x³ + x² + 1 is irreducible over GF(3). (1 mark)
   c) What are the multiplicative orders of the elements of GF(3)[x]/(x³ + x² + 1)? (1 mark)

8. a) Suppose α is a primitive element of GF(p^m), where p is prime. Show that all the conjugates of α (with respect to GF(p)) are also primitive. (1 mark)

9. A linear (n, k, d) block code is said to be maximum distance separable (MDS) if dmin = n − k + 1.
   a) Construct an MDS code over GF(q) with k = 1 and with k = n − 1. (1 mark)
   b) Suppose an MDS code is punctured in s ≤ n − k = d − 1 places (s check symbols are removed) to yield an (n − s, k) code. Show that the punctured code is also an MDS code. (2 marks)
   c) Suppose C is an MDS code over GF(2). Show that no such MDS code can exist for 2 ≤ k ≤ n − 2. (2 marks)

10. a) Find the generator polynomial of a Reed-Solomon (RS) code with n = 10 and k = 7. What is the minimum distance of this code? (2 marks)
    b) Find the generator polynomial for a binary double error correcting code of blocklength n = 15. Use a primitive α and the primitive polynomial p(x) = x⁴ + x + 1. (2 marks)
    c) Suppose the code defined in part (b) is used and the received senseword is equal to x⁹ + x⁸ + x⁷ + x⁵ + x³ + x² + x. Find the error polynomial. (2 marks)

11. Consider an (n, k) RS code over GF(q). Suppose each codeword (c0, c1, ..., cn−1) is extended by adding an overall parity check cn given by

      cn = − Σ_{j=0}^{n−1} cj

    Show that the extended code is an (n + 1, k) MDS code. (3 marks)
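For the RS generator polynomial question, the exam does not name the field; the sketch below assumes the natural choice GF(11), for which n = q − 1 = 10, and takes α = 2, which is primitive mod 11. With d = n − k + 1 = 4, the generator is g(x) = (x − α)(x − α²)(x − α³). This is illustrative only — a different field or primitive element would give a different g(x).

```python
# Illustrative sketch: RS generator polynomial for n = 10, k = 7,
# assuming the field GF(11) and primitive element alpha = 2.
q = 11
alpha = 2

def polymul(a, b):       # coefficient lists, lowest degree first, mod q
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % q
    return out

g = [1]
for e in range(1, 4):                    # roots alpha, alpha^2, alpha^3
    root = pow(alpha, e, q)
    g = polymul(g, [(-root) % q, 1])     # multiply by (x - root)

assert g == [2, 1, 8, 1]                 # g(x) = x^3 + 8x^2 + x + 2 over GF(11)
assert len(g) - 1 == 3                   # deg g = n - k = 10 - 7 = 3

# Each root alpha^e is a 10th root of unity, so g(x) divides x^10 - 1 in GF(11):
for e in range(1, 4):
    r = pow(alpha, e, q)
    assert sum(c * pow(r, i, q) for i, c in enumerate(g)) % q == 0
    assert pow(r, 10, q) == 1
```

The binary double-error-correcting code in part (b) is built the same way, except the roots α, α², α³, α⁴ live in GF(16) defined by p(x) = x⁴ + x + 1, and the generator is the product of the distinct minimal polynomials of those roots over GF(2).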