
Harvard SEAS ES250 – Information Theory

Homework 4
Solutions

1. Find the channel capacity of the following discrete memoryless channel:

[Figure: additive-noise channel with input X, additive noise Z, and output Y = X + Z]

where Pr{Z = 0} = Pr{Z = a} = 1/2. The alphabet for X is X = {0, 1}. Assume that Z is independent
of X. Observe that the channel capacity depends on the value of a.

Solution :

Y = X + Z X ∈ {0, 1}, Z ∈ {0, a}

We have to distinguish various cases depending on the values of a.

• a = 0. In this case, Y = X, and max I(X; Y ) = max H(X) = 1. Hence the capacity is 1 bit per
transmission.
• a ≠ 0, ±1. In this case, Y has four possible values 0, 1, a, and 1 + a. Knowing Y, we know the
X which was sent, and hence H(X|Y) = 0. Hence max I(X; Y) = max H(X) = 1, achieved for
a uniform distribution on the input X.
• a = 1. In this case, Y has three possible output values, 0, 1, and 2. The channel is identical to the
binary erasure channel with erasure probability 1/2. The capacity of this channel is 1/2 bit per transmission.
• a = −1. This is similar to the case when a = 1 and the capacity here is also 1/2 bit per
transmission.
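As a numerical sanity check (not part of the original solution), the cases above can be verified by brute-force maximization of I(X; Y) over input distributions; `mutual_information` is a helper name introduced here for illustration.

```python
import math

def mutual_information(px, a):
    """I(X;Y) in bits for Y = X + Z, with X ~ (px, 1-px) on {0,1} and Z uniform on {0,a}."""
    pX = {0: px, 1: 1 - px}
    joint, pY = {}, {}
    for x, p in pX.items():
        for z in (0, a):
            y = x + z
            joint[(x, y)] = joint.get((x, y), 0.0) + p * 0.5
            pY[y] = pY.get(y, 0.0) + p * 0.5
    return sum(p * math.log2(p / (pX[x] * pY[y]))
               for (x, y), p in joint.items() if p > 0)

# Grid search over the input distribution for two cases of a:
# a = 0.5 (four distinct outputs, capacity 1) and a = 1 (erasure-like, capacity 1/2).
for a in (0.5, 1):
    cap = max(mutual_information(px / 1000, a) for px in range(1, 1000))
    print(a, round(cap, 3))
```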

2. Consider a 26-key typewriter.

(a) If pushing a key results in printing the associated letter, what is the capacity C in bits?
(b) Now suppose that pushing a key results in printing that letter or the next (with equal proba-
bility). Thus A → A or B, · · · , Z → Z or A. What is the capacity?
(c) What is the highest-rate code with block length one that achieves zero
probability of error for the channel in part (b)?

Solution :

(a) If the typewriter prints out whatever key is struck, then the output Y is the same as the input
X, and

C = max I(X; Y ) = max H(X) = log 26,

attained by a uniform distribution over the letters.


(b) In this case, the output is either equal to the input (with probability 1/2) or equal to the next
letter (with probability 1/2). Hence H(Y|X) = log 2 independent of the distribution of X, and
hence
C = max I(X; Y ) = max H(Y ) − log 2 = log 26 − log 2 = log 13
attained for a uniform distribution over the output, which in turn is attained by a uniform
distribution on the input.
(c) A simple zero-error, block-length-one code is the one that uses every alternate letter, say
A, C, E, · · · , W, Y. In this case, none of the codewords will be confused, since A will produce
either A or B, C will produce C or D, etc. The rate of this code is

R = log(# codewords) / (block length) = (log 13)/1 = log 13
In this case, we can achieve capacity with a simple code with zero error.
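The zero-error claim in part (c) can be checked mechanically; this is an illustrative sketch, with `outputs` and `codebook` being names introduced here.

```python
import math

letters = [chr(ord('A') + i) for i in range(26)]

def outputs(i):
    """Possible printed letters when key i is struck: the letter itself or the next one."""
    return {letters[i], letters[(i + 1) % 26]}

codebook = list(range(0, 26, 2))         # A, C, E, ..., Y: 13 codewords
sets = [outputs(i) for i in codebook]
# Zero-error: no two codewords can ever produce the same printed letter.
assert all(sets[i].isdisjoint(sets[j])
           for i in range(len(sets)) for j in range(i + 1, len(sets)))
print(math.log2(13))                     # rate = log 13 bits, equal to the capacity
```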

3. Consider a binary symmetric channel with Yi = Xi ⊕ Zi, where ⊕ is mod 2 addition, and Xi, Yi ∈ {0, 1}.
Suppose that {Zi} has constant marginal probabilities p(Zi = 1) = p = 1 − p(Zi = 0), but that
Z1, Z2, · · · , Zn are not necessarily independent. Let C = 1 − H(p). Show that

max_{p(x1, x2, ··· , xn)} I(X1, X2, · · · , Xn; Y1, Y2, · · · , Yn) ≥ nC

Comment on the implications.

Solution :
When X1, X2, · · · , Xn are chosen i.i.d. ∼ Bern(1/2),

I(X1, · · · , Xn; Y1, · · · , Yn) = H(X1, · · · , Xn) − H(X1, · · · , Xn | Y1, · · · , Yn)
= H(X1, · · · , Xn) − H(Z1, · · · , Zn | Y1, · · · , Yn)
≥ H(X1, · · · , Xn) − H(Z1, · · · , Zn)
≥ H(X1, · · · , Xn) − Σ_i H(Zi)
= n − nH(p)

Hence, the capacity of the channel with memory over n uses of the channel is

nC^(n) = max_{p(x1, ··· , xn)} I(X1, · · · , Xn; Y1, · · · , Yn)
≥ I(X1, · · · , Xn; Y1, · · · , Yn) |_{Xi i.i.d. ∼ Bern(1/2)}
≥ n(1 − H(p))
= nC
Hence, channels with memory have higher capacity. The intuitive explanation for this result is that
the correlation between the noise decreases the effective noise; one could use the information from
the past samples of the noise to combat the present noise.
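A minimal numerical illustration of this point (assuming n = 2, an example crossover probability p = 0.2, and fully correlated noise Z1 = Z2, the extreme case of memory):

```python
import math

def H(p):
    """Binary entropy in bits."""
    return 0.0 if p in (0, 1) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

p, n = 0.2, 2                            # example crossover probability, block length
# With X^n i.i.d. Bern(1/2), Y^n is uniform, so I(X^n; Y^n) = n - H(Z^n).
# Fully correlated noise (Z1 = Z2) gives H(Z^n) = H(p):
I_correlated = n - H(p)
nC = n * (1 - H(p))                      # the memoryless benchmark nC
print(I_correlated, nC)
assert I_correlated >= nC                # memory can only increase the mutual information
```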
4. Consider the channel Y = X + Z (mod 13), where

Z = 1, 2, or 3, each with probability 1/3,

and X ∈ {0, 1, · · · , 12}.


(a) Find the capacity.


(b) What is the maximizing p∗ (x)?

Solution :

(a)

C = max_{p(x)} I(X; Y)
= max_{p(x)} H(Y) − H(Y|X)
= max_{p(x)} H(Y) − log 3
= log 13 − log 3
= log(13/3)

which is attained when Y has a uniform distribution, which occurs (by symmetry) when X has
a uniform distribution.
(b) The capacity is achieved by a uniform distribution on the inputs, that is,

p(X = i) = 1/13 for i = 0, 1, · · · , 12.
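A short check of part (a), computing H(Y) under a uniform input directly (the helper names here are introduced for illustration):

```python
import math
from collections import Counter

# Y = X + Z (mod 13) with X uniform on {0,...,12} and Z uniform on {1,2,3}
pY = Counter()
for x in range(13):
    for z in (1, 2, 3):
        pY[(x + z) % 13] += 1 / 39       # P(X = x) P(Z = z) = (1/13)(1/3)
HY = -sum(p * math.log2(p) for p in pY.values())
I = HY - math.log2(3)                    # H(Y|X) = H(Z) = log 3 for any input
print(I, math.log2(13 / 3))              # the two agree: capacity is log(13/3)
```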
5. Using two channels
(a) Consider two discrete memoryless channels (X1 , p(y1 |x1 ), Y1 ) and (X2 , p(y2 |x2 ), Y2 ) with capac-
ities C1 and C2 respectively. A new channel (X1 × X2 , p(y1 |x1 ) × p(y2 |x2 ), Y1 × Y2 ) is formed
in which x1 ∈ X1 and x2 ∈ X2 , are simultaneously sent, resulting in y1 , y2 . Find the capacity of
this channel.
(b) Find the capacity C of the union of the two channels (X1, p(y1|x1), Y1) and (X2, p(y2|x2), Y2), where, at
each time, one can send a symbol over channel 1 or channel 2 but not both. Assume the output
alphabets are distinct and do not intersect. Show that 2^C = 2^C1 + 2^C2.

Solution :

(a) To find the capacity of the product channel (X1 × X2 , p(y1 , y2 |x1 , x2 ), Y1 × Y2 ) , we have to find
the distribution p(x1 , x2 ) on the input alphabet X1 × X2 that maximizes I(X1 , X2 ; Y1 , Y2 ). Since
the transition probabilities are given as p(y1 , y2 |x1 , x2 ) = p(y1 |x1 )p(y2 |x2 ),
p(x1 , x2 , y1 , y2 ) = p(x1 , x2 )p(y1 , y2 |x1 , x2 )
= p(x1 , x2 )p(y1 |x1 )p(y2 |x2 )
Therefore, Y1 → X1 → X2 → Y2 forms a Markov chain and
I(X1 , X2 ; Y1 , Y2 ) = H(Y1 , Y2 ) − H(Y1 , Y2 |X1 , X2 )
= H(Y1 , Y2 ) − H(Y1 |X1 , X2 ) − H(Y2 |X1 , X2 ) (1)
= H(Y1 , Y2 ) − H(Y1 |X1 ) − H(Y2 |X2 ) (2)
≤ H(Y1 ) + H(Y2 ) − H(Y1 |X1 ) − H(Y2 |X2 ) (3)
= I(X1 ; Y1 ) + I(X2 ; Y2 )


where (1) and (2) follow from Markovity, and (3) is met with equality if X1 and X2 are inde-
pendent and hence Y1 and Y2 are independent. Therefore

C = max_{p(x1,x2)} I(X1, X2; Y1, Y2)
≤ max_{p(x1,x2)} I(X1; Y1) + max_{p(x1,x2)} I(X2; Y2)
= max_{p(x1)} I(X1; Y1) + max_{p(x2)} I(X2; Y2)
= C1 + C2

with equality iff p(x1 , x2 ) = p∗ (x1 )p∗ (x2 ) and p∗ (x1 ) and p∗ (x2 ) are the distributions that
maximize C1 and C2 respectively.
(b) Let

θ = 1 if the signal is sent over channel 1, and θ = 2 if the signal is sent over channel 2.

Consider the following communication scheme: the sender chooses between the two channels ac-
cording to a Bern(α) coin flip. Then the channel input is X = (θ, Xθ).
Since the output alphabets Y1 and Y2 are disjoint, θ is a function of Y, i.e., X → Y → θ.
Therefore,
Therefore,

I(X; Y ) = I(X; Y, θ)
= I(Xθ , θ; Y, θ)
= I(θ; Y, θ) + I(Xθ ; Y, θ|θ)
= I(θ; Y, θ) + I(Xθ ; Y |θ)
= H(θ) + αI(Xθ ; Y |θ = 1) + (1 − α)I(Xθ ; Y |θ = 2)
= H(α) + αI(X1 ; Y1 ) + (1 − α)I(X2 ; Y2 )

Thus, it follows that

C = sup_α {H(α) + αC1 + (1 − α)C2}

The objective is a strictly concave function of α, so the maximum exists, and by elementary
calculus one can show C = log2(2^C1 + 2^C2), attained with α = 2^C1/(2^C1 + 2^C2).
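As a numerical sanity check of part (b), with example capacities C1 = 1 and C2 = 0.5 (values assumed purely for illustration), a grid search over α recovers the closed form:

```python
import math

def H(a):
    """Binary entropy in bits."""
    return 0.0 if a in (0, 1) else -a * math.log2(a) - (1 - a) * math.log2(1 - a)

C1, C2 = 1.0, 0.5                        # example capacities (illustrative values)

def f(a):
    # Objective from the solution: H(alpha) + alpha*C1 + (1 - alpha)*C2
    return H(a) + a * C1 + (1 - a) * C2

best = max(f(k / 10000) for k in range(1, 10000))
closed_form = math.log2(2 ** C1 + 2 ** C2)   # log2(2^C1 + 2^C2)
print(best, closed_form)
```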

6. Consider a time-varying discrete memoryless binary symmetric channel. Let Y1, Y2, · · · , Yn be con-
ditionally independent given X1, X2, · · · , Xn, with conditional distribution given by
p(y^n|x^n) = ∏_{i=1}^n pi(yi|xi).

(a) Find max_{p(x)} I(X^n; Y^n).


(b) We now ask for the capacity of the time-invariant version of this problem. Replace each pi,
1 ≤ i ≤ n, by the average value p̄ = (1/n) Σ_{j=1}^n pj, and compare the capacity to part (a).

Solution :

(a)

I(X^n; Y^n) = H(Y^n) − Σ_{i=1}^n H(Yi|Xi)
≤ Σ_{i=1}^n H(Yi) − Σ_{i=1}^n H(Yi|Xi)
≤ Σ_{i=1}^n (1 − H(pi))

with equality if X1, · · · , Xn are chosen i.i.d. ∼ Bern(1/2). Hence

max_{p(x)} I(X1, · · · , Xn; Y1, · · · , Yn) = Σ_{i=1}^n (1 − H(pi))

(b) Since H(p) is concave in p, by Jensen's inequality,

(1/n) Σ_{i=1}^n H(pi) ≤ H((1/n) Σ_{i=1}^n pi) = H(p̄)

i.e.,

Σ_{i=1}^n H(pi) ≤ nH(p̄)

Hence,

C_time-varying = Σ_{i=1}^n (1 − H(pi))
= n − Σ_{i=1}^n H(pi)
≥ n − nH(p̄)
= Σ_{i=1}^n (1 − H(p̄))
= C_time-invariant
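The Jensen step can be illustrated numerically; the crossover probabilities below are example values chosen for the sketch.

```python
import math

def H(p):
    """Binary entropy in bits."""
    return 0.0 if p in (0, 1) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

ps = [0.05, 0.1, 0.2, 0.4]               # example per-use crossover probabilities
n = len(ps)
C_varying = sum(1 - H(p) for p in ps)    # part (a): sum of per-use capacities
p_bar = sum(ps) / n
C_invariant = n * (1 - H(p_bar))         # part (b): capacity after averaging the p_i
print(C_varying, C_invariant)
assert C_varying >= C_invariant          # Jensen: H is concave, so averaging can only lose
```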

7. Suppose a binary symmetric channel of capacity C1 is immediately followed by a binary erasure
channel of capacity C2. Find the capacity C of the resulting channel.


Now consider an arbitrary discrete memoryless channel (X, p(y|x), Y) followed by a binary erasure
channel, resulting in an output

Ỹ = Y with probability 1 − α, and Ỹ = e with probability α,

where e denotes erasure. Thus the output Y is erased with probability α. What is the capacity of this
channel?

Solution :

(a) Let C1 = 1 − H(p) be the capacity of the BSC with parameter p, and C2 = 1 − α be the capacity
of the BEC with parameter α. Let Ỹ denote the output of the cascaded channel, and Y the
output of the BSC. Then, the transition rule for the cascaded channel is simply
p(ỹ|x) = Σ_{y=0,1} p(ỹ|y)p(y|x)

for each (x, ỹ) pair.


Let X ∼ Bern(π) denote the input to the channel. Then,

H(Ỹ ) = H((1 − α)(π(1 − p) + p(1 − π)), α, (1 − α)(pπ + (1 − p)(1 − π)))

and also

H(Ỹ |X = 0) = H((1 − α)(1 − p), α, (1 − α)p)


H(Ỹ |X = 1) = H((1 − α)p, α, (1 − α)(1 − p)) = H(Ỹ |X = 0)

Therefore,

C = max_{p(x)} I(X; Ỹ)
= max_{p(x)} {H(Ỹ) − H(Ỹ|X)}
= max_{p(x)} {H(Ỹ)} − H(Ỹ|X)
= max_{p(x)} {H((1 − α)(π(1 − p) + p(1 − π)), α, (1 − α)(pπ + (1 − p)(1 − π)))}
  − H((1 − α)(1 − p), α, (1 − α)p)      (4)

6
Harvard SEAS ES250 – Information Theory

Note that the maximum value of H(Ỹ ) occurs when π = 1/2 by the concavity and symmetry
of H(·). (We can check this also by differentiating (4) with respect to π.)
Substituting the value π = 1/2 in the expression for the capacity yields

C = H((1 − α)/2, α, (1 − α)/2) − H((1 − p)(1 − α), α, p(1 − α))
= (1 − α)(1 + p log p + (1 − p) log(1 − p))
= C1 C2

(b) For the cascade of an arbitrary discrete memoryless channel (with capacity C) with the erasure
channel (with the erasure probability α), we will show that

I(X; Ỹ ) = (1 − α)I(X; Y ) (5)

Then, by taking suprema of both sides over all input distributions p(x), we can conclude the
capacity of the cascaded channel is (1 − α)C.
Proof of (5):
Let E = 1 if Ỹ = e, and E = 0 if Ỹ = Y.

Then, since E is a function of Ỹ,

H(Ỹ ) = H(Ỹ , E)
= H(E) + H(Ỹ |E)
= H(α) + αH(Ỹ |E = 1) + (1 − α)H(Ỹ |E = 0)
= H(α) + (1 − α)H(Y ),

where the last equality comes directly from the construction of E. Similarly,

H(Ỹ|X) = H(Ỹ, E|X)
= H(E|X) + H(Ỹ|X, E)
= H(E) + αH(Ỹ|X, E = 1) + (1 − α)H(Ỹ|X, E = 0)
= H(α) + (1 − α)H(Y|X),

whence

I(X; Ỹ ) = H(Ỹ ) − H(Ỹ |X) = (1 − α)I(X; Y )
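The C = C1·C2 result of part (a) can be confirmed numerically; p and α below are example parameter values, and `Hvec` is a helper introduced here.

```python
import math

def Hvec(*ps):
    """Entropy (bits) of a probability vector; zero terms are skipped."""
    return -sum(p * math.log2(p) for p in ps if p > 0)

p, alpha = 0.11, 0.3                     # example BSC / BEC parameters (assumed)
C1 = 1 - Hvec(p, 1 - p)                  # BSC capacity, 1 - H(p)
C2 = 1 - alpha                           # BEC capacity, 1 - alpha
# Capacity of the cascade, evaluated at the optimal uniform input (pi = 1/2):
C = Hvec((1 - alpha) / 2, alpha, (1 - alpha) / 2) \
    - Hvec((1 - p) * (1 - alpha), alpha, p * (1 - alpha))
print(C, C1 * C2)                        # the two agree: C = C1 * C2
```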

8. We wish to encode a Bernoulli(α) process V1 , V2 , · · · for transmission over a binary symmetric channel
with error probability p.

Find conditions on α and p so that the probability of error P(V̂^n ≠ V^n) can be made to go to
zero as n → ∞.

Solution :
Suppose we want to send a binary i.i.d. Bern(α) source over a binary symmetric channel with error
probability p. By the source–channel separation theorem, in order to achieve a probability of error
that vanishes asymptotically, i.e. P(V̂^n ≠ V^n) → 0, we need the entropy of the source to be less
than the capacity of the channel. Hence,

H(α) + H(p) < 1,

or, equivalently,

α^α (1 − α)^(1−α) p^p (1 − p)^(1−p) > 1/2.
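Since α^α (1−α)^(1−α) p^p (1−p)^(1−p) = 2^−(H(α)+H(p)), the two forms of the condition are equivalent; a quick check on a grid of example (α, p) values:

```python
import math

def H(x):
    """Binary entropy in bits."""
    return 0.0 if x in (0, 1) else -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def prod_form(a, p):
    return a ** a * (1 - a) ** (1 - a) * p ** p * (1 - p) ** (1 - p)

# H(a) + H(p) < 1  iff  a^a (1-a)^(1-a) p^p (1-p)^(1-p) > 1/2,
# since the product equals 2^{-(H(a) + H(p))}.
for a in (0.05, 0.2, 0.5):
    for p in (0.01, 0.11, 0.3):
        assert (H(a) + H(p) < 1) == (prod_form(a, p) > 0.5)
print("equivalence holds on the grid")
```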
9. Let (Xi, Yi, Zi) be i.i.d. according to p(x, y, z). We will say that (x^n, y^n, z^n) is jointly typical
[written (x^n, y^n, z^n) ∈ A_ε^(n)] if

• 2^−n(H(X)+ε) ≤ p(x^n) ≤ 2^−n(H(X)−ε)
• 2^−n(H(Y)+ε) ≤ p(y^n) ≤ 2^−n(H(Y)−ε)
• 2^−n(H(Z)+ε) ≤ p(z^n) ≤ 2^−n(H(Z)−ε)
• 2^−n(H(X,Y)+ε) ≤ p(x^n, y^n) ≤ 2^−n(H(X,Y)−ε)
• 2^−n(H(X,Z)+ε) ≤ p(x^n, z^n) ≤ 2^−n(H(X,Z)−ε)
• 2^−n(H(Y,Z)+ε) ≤ p(y^n, z^n) ≤ 2^−n(H(Y,Z)−ε)
• 2^−n(H(X,Y,Z)+ε) ≤ p(x^n, y^n, z^n) ≤ 2^−n(H(X,Y,Z)−ε)

Now suppose that (X̃^n, Ỹ^n, Z̃^n) is drawn according to p(x^n)p(y^n)p(z^n). Thus, X̃^n, Ỹ^n, Z̃^n have the
same marginals as p(x^n, y^n, z^n) but are independent. Find (bounds on) Pr{(X̃^n, Ỹ^n, Z̃^n) ∈ A_ε^(n)} in
terms of the entropies H(X), H(Y), H(Z), H(X,Y), H(X,Z), H(Y,Z), and H(X,Y,Z).

Solution :

Pr{(X̃^n, Ỹ^n, Z̃^n) ∈ A_ε^(n)} = Σ_{(x^n,y^n,z^n) ∈ A_ε^(n)} p(x^n)p(y^n)p(z^n)
≤ Σ_{(x^n,y^n,z^n) ∈ A_ε^(n)} 2^−n(H(X)+H(Y)+H(Z)−3ε)
= |A_ε^(n)| 2^−n(H(X)+H(Y)+H(Z)−3ε)
≤ 2^n(H(X,Y,Z)+ε) 2^−n(H(X)+H(Y)+H(Z)−3ε)
= 2^n(H(X,Y,Z)−H(X)−H(Y)−H(Z)+4ε)

Also,

Pr{(X̃^n, Ỹ^n, Z̃^n) ∈ A_ε^(n)} = Σ_{(x^n,y^n,z^n) ∈ A_ε^(n)} p(x^n)p(y^n)p(z^n)
≥ Σ_{(x^n,y^n,z^n) ∈ A_ε^(n)} 2^−n(H(X)+H(Y)+H(Z)+3ε)
= |A_ε^(n)| 2^−n(H(X)+H(Y)+H(Z)+3ε)
≥ (1 − ε) 2^n(H(X,Y,Z)−ε) 2^−n(H(X)+H(Y)+H(Z)+3ε)
= (1 − ε) 2^n(H(X,Y,Z)−H(X)−H(Y)−H(Z)−4ε)

Note that the upper bound holds for all n, but the lower bound holds only for n sufficiently large.


10. Twenty questions.

(a) Player A chooses some object in the universe, and player B attempts to identify the object with
a series of yes–no questions. Suppose that player B is clever enough to use the code achieving
the minimal expected length with respect to player A's distribution. We observe that player B
requires an average of 38.5 questions to determine the object. Find a rough lower bound on the
number of objects in the universe.
(b) Let X be uniformly distributed over {1, 2, · · · , m}. Assume that m = 2^n. We ask random
questions: Is X ∈ S1? Is X ∈ S2? · · · until only one integer remains. All 2^m subsets S of
{1, 2, · · · , m} are equally likely.
i. How many deterministic questions are needed to determine X?
ii. Without loss of generality, suppose that X = 1 is the random object. What is the probability
that object 2 yields the same answers as object 1 for k questions?
iii. What is the expected number of objects in {2, 3, · · · , m} that have the same answers to the
questions as those of the correct object 1?

Solution :

(a)

37.5 = L* − 1 < H(X) ≤ log |X|,

and hence the number of objects in the universe is > 2^37.5 ≈ 1.94 × 10^11.
(b) i. Obviously, Huffman codewords for X are all of length n. Hence, with n deterministic
questions, we can identify an object out of 2n candidates.
ii. Observe that the total number of subsets which include both object 1 and object 2, or
neither of them, is 2^(m−1). Hence, the probability that object 2 yields the same answers for k
questions as object 1 is (2^(m−1)/2^m)^k = 2^−k.
iii. For j = 2, · · · , m, let 1_j = 1 if object j yields the same answers for the k questions as
object 1, and 1_j = 0 otherwise.

Then

E[N] = E[Σ_{j=2}^m 1_j]
= Σ_{j=2}^m E[1_j]
= Σ_{j=2}^m 2^−k
= (m − 1) 2^−k
= (2^n − 1) 2^−k

where N is the number of objects in {2, 3, · · · , m} with the same answers.
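A small Monte Carlo sketch of part (b)(iii), with example sizes n = 6 and k = 10 chosen for illustration; the empirical count should be close to (2^n − 1) 2^−k.

```python
import random

random.seed(0)
n, k = 6, 10                 # m = 2^n objects, k random questions (example sizes)
m = 2 ** n
trials, total = 2000, 0
for _ in range(trials):
    # Each question: a uniformly random subset of {1..m}; object j "agrees"
    # with object 1 if both or neither are in the subset, for every question.
    agree = [True] * (m + 1)
    for _ in range(k):
        S = {j for j in range(1, m + 1) if random.random() < 0.5}
        in1 = 1 in S
        for j in range(2, m + 1):
            agree[j] &= ((j in S) == in1)
    total += sum(agree[2:])
print(total / trials, (m - 1) * 2 ** -k)   # empirical mean vs (2^n - 1) 2^{-k}
```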
