
Quiz - 1

EC60128: Linear Algebra and Error Control Techniques

Amitalok J. Budkuley (amitalok@ece.iitkgp.ac.in)


1. Recall the discussion in class on the Shannon capacity of a BSC.

(i) What is the capacity C of the binary symmetric channel (BSC) with transition/crossover probability
p? Write the expression.

(ii) Plot the capacity C as a function of p; identify the maximum and minimum values and the corresponding
p values.

1 + 1 = 2 marks
2. Let C ⊆ {0, 1}^n be a code with minimum distance d ≜ dmin(C), where d is odd. Now let us construct a
new code C′ as follows: add an overall parity bit to each codeword of the code C to generate the
corresponding new codeword in C′. Then,

• What is the length of the new code C′?

• What is the size |C′|?

• What is the minimum distance dmin(C′)?

• What is the rate of the code R(C)?

0.5 + 0.5 + 1 + 1 = 3 marks


3. Let z ∈ {0, 1}^n and r ∈ {0, 1, · · · , n}. Define S(z, r) ≜ {y ∈ {0, 1}^n : dH(z, y) = r}. Let Vol(S(z, r)) ≜
|S(z, r)| (the size of S(z, r), i.e., the number of vectors in S(z, r)). Then

(i) What is Vol(S(z, r))? Express Vol(S(z, r)) as an exponential with base 2 (basically
use the trick x = 2^(log2(x))). Use Stirling’s approximation (mention this first; look up any resource
online) and simplify. This expression will have as one term the binary entropy function
h2(p) ≜ −p log2(p) − (1 − p) log2(1 − p), as was shown in class.

(ii) Let z0 be such that wtH(z0) = n/4. Then, what is Vol(S(z0, r))?

(iii) Define B(z, r) ≜ {y ∈ {0, 1}^n : dH(z, y) ≤ r}. Repeat (i) and (ii) for B(z, r).

1 + 0.5 + 1.5 = 3 marks


4. Let x, y ∈ R^n be real vectors of length n. Let us define the following function

d∞(x, y) ≜ max_{i=1,2,··· ,n} |xi − yi|

(i) Show that this is a valid distance measure (show that it satisfies all the three properties as discussed in
class).

(ii) Let the ‘n-ball’ be defined as follows: B∞(x, r) ≜ {z ∈ R^n : d∞(x, z) ≤ r}, where r is a non-negative real
number. This is an n-dimensional ball centered at x and with radius r (radius measured under the distance
function d∞). Draw the ball B∞(0, 1) centered at the origin 0 ∈ R^n with radius 1.

2 + 1 = 3 marks
1. Recall the binary symmetric channel (BSC) with crossover probability p, p ∈ [0, 1]; we define it below:
X = {0, 1}, Y = {0, 1} and the “channel law” PY |X given by
PY|X(y|x) = { 1 − p, if y = x;  p, if y ≠ x }    (1)

For clarity, draw this as the 2 × 2 graph with 4 edges, as done in class!
Now consider a binary block repetition code C ⊆ {0, 1}^3, where C = {000, 111}. Let the channel be a BSC(p),
p ∈ [0, 1]. For the decoder g : {0, 1}^3 → C (note that g is a mapping from received vectors to messages, which
are uniquely mapped to codewords 000 and 111 here) specified by

g(000) = g(001) = g(010) = g(100) = 1


g(111) = g(101) = g(110) = g(011) = 2.

(i) Determine the probability of error averaged over the two equally likely messages (codewords); show your
calculations.

(ii) How does this quantity depend on p (examine what happens when p < 1/2 and p > 1/2)? Is it better or
worse than sending ‘uncoded’ messages (here I mean each message corresponds to exactly one bit, and NOT
three repeated bits)?

2. Recall the discussion in class on the Shannon capacity of a BSC.

(i) What is the capacity C of the binary symmetric channel (BSC) with transition/crossover probability
p? Write the expression.

(ii) Plot the capacity C as a function of p; identify the maximum and minimum values and the corresponding
p values.

3. (Error detection v/s error correction)


Prove the following lemmas:

Lemma 1. Let C ⊆ {0, 1}^n be a code with minimum distance d ≜ dmin(C). Then, there exists a decoder
g : {0, 1}^n → C that correctly decodes every pattern of up to ⌊(d − 1)/2⌋ errors (or bit flips) over the channel.

Lemma 2. Let C ⊆ {0, 1}^n be a code with minimum distance d ≜ dmin(C). Then, there exists a decoder
g : {0, 1}^n → C ∪ {⊥} which correctly detects every pattern of up to d − 1 errors (or bit flips) over the channel.

4. Prove the following lemma:


Lemma 3. Let C ⊆ {0, 1}^n be a code with minimum distance d ≜ dmin(C). Consider the minimum distance
decoder gMDD : Y^n → C ∪ {⊥}, where Y^n = {0, 1}^n, defined below:

gMDD(y) = { x, if dH(x, y) < dH(x′, y) for all x′ ∈ C, x′ ≠ x;  ⊥, otherwise }    (2)

Then, this minimum distance decoder gMDD correctly decodes every pattern of up to ⌊(d − 1)/2⌋ errors (or bit
flips) over the channel and correctly detects every pattern of up to d − 1 errors over the channel.
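The lemma itself asks for a proof; purely as an illustration (my own sketch, with function names of my choosing), here is the decoder of (2) in Python, spot-checked on the repetition code with d = 3, where ⌊(d − 1)/2⌋ = 1 bit flip must be corrected.

```python
from itertools import product

def d_hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def g_mdd(y, code):
    """Decoder (2): return the unique closest codeword, or None (standing in for ⊥)
    when the minimum distance to the received word is not uniquely attained."""
    ranked = sorted((d_hamming(x, y), x) for x in code)
    if len(ranked) > 1 and ranked[0][0] == ranked[1][0]:
        return None                        # tie -> declare ⊥
    return ranked[0][1]

code = [(0, 0, 0), (1, 1, 1)]              # d = 3, so t = ⌊(d-1)/2⌋ = 1
for x in code:
    assert g_mdd(x, code) == x             # zero errors are decoded correctly
    for i in range(3):                     # every single-bit flip is corrected
        y = list(x); y[i] ^= 1
        assert g_mdd(tuple(y), code) == x
```

On a code with even minimum distance, e.g. {00, 11}, the received word 01 is equidistant from both codewords and the decoder outputs ⊥, matching the "otherwise" branch of (2).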
5. (Bit erasures instead of bit errors/ bit flips)
Consider the binary erasure channel (BEC) defined below:
X = {0, 1}, Y = {0, 1, ?} (? denotes an erasure) and the “channel law” PY |X given by

PY|X(y|x) = { 1 − p, if y = x;  0, if y ∈ {0, 1}, y ≠ x;  p, if y = ? }    (3)

For clarity, draw this as the 2 × 3 graph with 4 (non-trivial!) edges!


It can be shown that the capacity of this channel is C = 1 − p.
Now consider a binary block repetition code C ⊆ {0, 1}^3, where C = {000, 111}. Let the channel be a BEC(p),
p ∈ [0, 1]. For the decoder g : {0, 1, ?}^3 → C ∪ {⊥} (note that g is a mapping from received vectors to messages,
which are uniquely mapped to codewords 000 and 111 here, or to the failure symbol ⊥) specified by
g(000) = g(?00) = g(0?0) = g(00?) = 1
g(111) = g(?11) = g(1?1) = g(11?) = 2.

g(???) = g(0??) = g(?0?) = g(??0) = g(1??) = g(?1?) = g(??1) = ⊥.


Why did I not define the decoder for g(001), g(110), etc.? Explain. Did I miss something else
when defining the decoder?
(i) Determine the probability of error averaged over the two equally likely messages (codewords); show your
calculations.
(ii) How does this quantity depend on p (examine what happens when p < 1/2 and p > 1/2)? Is it better or
worse than sending ‘uncoded’ messages (here I mean each message corresponds to exactly one bit, and NOT
three repeated bits)?
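A hedged numeric check for part (i), treating an output of ⊥ as a decoding failure (an illustration of mine, not part of the problem): over the BEC no bit is ever flipped, so the decoder above fails exactly when two or more of the three symbols are erased. The sketch confirms the resulting count.

```python
from itertools import product

def p_bottom_bec(p):
    """Probability that the repetition-code decoder above outputs ⊥ over BEC(p):
    failure happens exactly when >= 2 of the 3 transmitted symbols are erased."""
    fail = 0.0
    for erased in product((0, 1), repeat=3):   # 1 = symbol erased by the channel
        prob = 1.0
        for e in erased:
            prob *= p if e else (1 - p)
        if sum(erased) >= 2:
            fail += prob
    return fail

p = 0.1
# same closed form as the BSC majority decoder, but here no wrong codeword
# is ever produced -- only ⊥:
assert abs(p_bottom_bec(p) - (3 * p**2 * (1 - p) + p**3)) < 1e-12
```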
6. Let C ⊆ {0, 1}^n be a code with minimum distance d ≜ dmin(C), where d is odd. Now let us construct
a new code C′ as follows: add an overall parity bit to each codeword of the code C to generate the
corresponding new codeword in C′. Then,
• What is the length of the new code C′?
• What is the size |C′|?
• What is the minimum distance dmin(C′)?
• What is the rate of the code R(C)?
7. A codeword of the code {01010, 10101} is transmitted through a BSC with crossover probability p = 0.1,
and a nearest neighbour decoding operation gNND is used at the decoder (you have seen this decoder in class).
Compute the error probability; assume both messages (or codewords) are equally likely.
8. Let C ⊆ {0, 1}^7 be a binary block code such that every vector in {0, 1}^7 is at Hamming distance at most
1 from exactly one codeword of C. A codeword of C is transmitted through a BSC with crossover probability
p = 10^−2.
(i) Compute the rate of C.
(ii) Show that the minimum distance of C equals 3.
(iii) What is the probability of having more than one error in the received word?
(iv) A nearest-codeword decoder (also called nearest-neighbour decoder; you have seen this in class) D is
applied to the received vector. Compute the decoding error probability under the specified decoder
(assume equally likely messages/codewords).
(v) Compare the value of error probability to the error probability when no coding is used: compute the
probability of having at least one bit in error when an (uncoded) word of four bits is transmitted through
the given BSC.
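Parts (iii)–(v) reduce to binomial tail probabilities for the BSC. As a sketch of the computation pattern (my own helper function, offered as a consistency check rather than the graded answer):

```python
from math import comb

def p_at_least(n, p, k):
    """P(at least k errors among n independent uses of a BSC(p))."""
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))

n, p = 7, 1e-2
# part (iii): probability of more than one error in the received word
p_multi = p_at_least(n, p, 2)
assert abs(p_multi - (1 - (1 - p)**n - n * p * (1 - p)**(n - 1))) < 1e-12

# part (v): probability of >= 1 bit in error in an uncoded 4-bit word
p_uncoded = p_at_least(4, p, 1)
assert abs(p_uncoded - (1 - (1 - p)**4)) < 1e-12
```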
9. Let C ⊆ {0, 1}^8 be a binary block code such that |C| = 8 and dmin(C) = 4. A codeword of C is transmitted
through a BSC with crossover probability p = 10^−2.
(i) Compute the rate of C.
(ii) Given a word y ∈ {0, 1}^8, show that if there is a codeword x ∈ C such that dH(y, x) ≤ 1, then every
other codeword x′ ∈ C \ {x} must satisfy dH(y, x′) ≥ 3.
(iii) Compute the probability of having exactly two errors in the received vector.
(iv) Compute the probability of having three or more errors in the received vector.
(v) The following decoder g : {0, 1}^8 → C ∪ {⊥} is applied to the received vector:

g(y) = { x, if ∃x ∈ C s.t. dH(y, x) ≤ 1;  ⊥, otherwise }

Compute the decoding error probability under decoder g; namely, compute the probability that decoder
g produces either ⊥ or a wrong codeword.
(vi) Show that the value computed in part (iv) bounds from above the probability that the decoder g in part
(v) produces a wrong codeword (the latter probability is called the decoding misdetection probability
and does not count the event that the decoder produces ⊥).
9. Recall the binary erasure channel (BEC) with erasure probability p (described above). Let the erasure
probability be p = 0.1. A codeword of the binary block parity code C ⊆ {0, 1}^4, where |C| = 8 and dmin(C) = 2,
is transmitted through the BEC(p), and the following decoder g : Y^4 → C ∪ {⊥}, where Y^4 = {0, 1, ?}^4, is
applied to the received vector:
g(y) = { x, if y agrees with exactly one x ∈ C on the entries in {0, 1};  ⊥, otherwise }

Compute the probability that decoder g produces ⊥. Does this probability depend on which codeword is
transmitted?
10. Let z ∈ {0, 1}^n and r ∈ {0, 1, · · · , n}. Define S(z, r) ≜ {y ∈ {0, 1}^n : dH(z, y) = r}. Let
Vol(S(z, r)) ≜ |S(z, r)| (the size of S(z, r), i.e., the number of vectors in S(z, r)). Then
(i) What is Vol(S(z, r))?
(ii) Express Vol(S(z, r)) as an exponential with base 2 (basically use the trick x = 2^(log2(x))).
Use Stirling’s approximation (mention this first; look up any resource online) and simplify. This
expression will have as one term the binary entropy function h2(p) ≜ −p log2(p) − (1 − p) log2(1 − p),
as was shown in class.
(iii) Let z0 be such that wtH(z0) = n/4. Then, what is Vol(S(z0, r))?
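The count in (i) can be checked by brute force for small n. The sketch below (an illustration of mine, not the asked-for derivation) also compares it against the entropy-based estimate 2^(n h2(r/n)) from (ii), which is accurate only up to factors polynomial in n.

```python
from itertools import product
from math import comb, log2

def vol_S(z, r):
    """|S(z, r)|: number of binary vectors at Hamming distance exactly r from z."""
    n = len(z)
    return sum(1 for y in product((0, 1), repeat=n)
               if sum(a != b for a, b in zip(z, y)) == r)

def h2(p):
    """Binary entropy function, with the convention h2(0) = h2(1) = 0."""
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

n, r = 10, 3
assert vol_S((0,) * n, r) == comb(n, r)            # = C(n, r), independent of z
assert vol_S((1, 0) * (n // 2), r) == comb(n, r)   # e.g. a z of weight n/2
estimate = 2 ** (n * h2(r / n))                    # Stirling-based estimate
print(comb(n, r), estimate)                        # same exponential order in n
```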

11. Let A, B be m × n matrices over R (each entry in the matrix comes from the set of real numbers).
Define the rank distance between A and B by rank(A − B). Show that the rank distance is a metric over the
set of all m × n matrices over R.
(Here you will need to verify all the 3 properties any distance/metric function needs to satisfy; this was
discussed in class.
A couple of hints which might be useful:
(a) rank(A + B) ≤ rank(A) + rank(B),
(b) rank(A) = rank(−A). These should be sufficient to complete the proof.)
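The proof is a short application of the two hints. Purely for intuition (an illustration of mine, using numpy's `matrix_rank`), the sketch below spot-checks the three metric properties on random integer matrices.

```python
import numpy as np

def rank_dist(A, B):
    """Rank distance between two m x n real matrices."""
    return np.linalg.matrix_rank(A - B)

rng = np.random.default_rng(0)
for _ in range(50):
    A, B, C = (rng.integers(-2, 3, size=(3, 4)) for _ in range(3))
    assert rank_dist(A, A) == 0                          # d(A, A) = 0
    if not np.array_equal(A, B):
        assert rank_dist(A, B) >= 1                      # positivity
    assert rank_dist(A, B) == rank_dist(B, A)            # symmetry, via hint (b)
    assert rank_dist(A, C) <= rank_dist(A, B) + rank_dist(B, C)  # triangle, via hint (a)
```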
12. Let x, y ∈ R^n be real vectors of length n. Let us define the following function dp(x, y), where
p ∈ [0, ∞), between vectors x, y as follows:

dp(x, y) ≜ ( ∑_{i=1}^{n} |xi − yi|^p )^{1/p}.

(i) Show that this is a valid distance measure (show that it satisfies all the three properties as discussed in
class) for all values p ≥ 1.

(ii) Let the ‘n-ball’ be defined as follows: Bp(x, r) ≜ {z ∈ R^n : dp(x, z) ≤ r}, where r is a non-negative real
number. This is an n-dimensional ball centered at x and with radius r (radius measured under the distance
function dp). Draw the ball Bp(0, 1) centered at the origin 0 ∈ R^n with radius 1, for all values of p; in
particular, draw for p = 0, p = 1/2, p = 1, p = 2, p = 3, p = ∞.

(iii) Is dp(·, ·) a valid distance function for p ∈ [0, 1)? If not, which property fails?
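For (iii), the candidate properties can be tested numerically; the sketch below (mine, and a spoiler for the exercise, so treat it as a check on your own answer) exhibits points in R^2 at which the triangle inequality fails for p = 1/2, while the same points are fine for p ≥ 1.

```python
def d_p(x, y, p):
    """The function dp defined above (well-defined pointwise for p > 0)."""
    return sum(abs(a - b) ** p for a, b in zip(x, y)) ** (1.0 / p)

# In R^2 with p = 1/2: d(x, y) = 4 but d(x, z) + d(z, y) = 1 + 1 = 2.
x, z, y = (1, 0), (0, 0), (0, 1)
assert abs(d_p(x, y, 0.5) - 4.0) < 1e-12
assert d_p(x, y, 0.5) > d_p(x, z, 0.5) + d_p(z, y, 0.5)   # triangle inequality fails

# For p >= 1 the same points satisfy the triangle inequality (Minkowski):
for p in (1, 2, 3):
    assert d_p(x, y, p) <= d_p(x, z, p) + d_p(z, y, p)
```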

13. Let G = (V, E) be a graph comprising vertices V = {v1, v2, · · · , vk} and edges E = {e1, e2, · · · , el},
k, l ∈ N. Let G be a weighted (every edge has a ‘weight’ associated with it), undirected (this means
that the weight of the edge from node u to node v is the same as the weight from node v to node u), connected
(every pair of nodes is joined by a path, either direct or via hops over intermediate nodes) graph, with all
weights strictly positive. For nodes u, v ∈ V, let d(u, v) denote the ‘length’ or ‘weight’ of the shortest path
between u and v.
Show that the shortest path distance between two nodes in the graph is a distance function (metric).
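The proof again reduces to verifying the three properties. For intuition only (my own sketch, on a made-up 4-node graph), this computes all-pairs shortest paths by Floyd–Warshall and spot-checks each property, including a case where the shortest path beats the direct edge.

```python
import math

# a small weighted, undirected, connected graph with strictly positive weights
V = ["a", "b", "c", "d"]
edges = {("a", "b"): 1.0, ("b", "c"): 2.0, ("a", "c"): 5.0, ("c", "d"): 1.0}

d = {(u, v): (0.0 if u == v else math.inf) for u in V for v in V}
for (u, v), w in edges.items():
    d[(u, v)] = d[(v, u)] = min(d[(u, v)], w)   # undirected: same weight both ways

# Floyd–Warshall relaxation
for k in V:
    for u in V:
        for v in V:
            d[(u, v)] = min(d[(u, v)], d[(u, k)] + d[(k, v)])

assert d[("a", "c")] == 3.0                     # going via b beats the direct edge
for u in V:
    for v in V:
        assert d[(u, v)] == d[(v, u)]           # symmetry
        assert (d[(u, v)] == 0.0) == (u == v)   # d(u, v) = 0 iff u = v
        for k in V:
            assert d[(u, v)] <= d[(u, k)] + d[(k, v)]   # triangle inequality
```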
