
Dr. Alaa Al-Ibadi

Chapter 4: CHANNEL CAPACITY


4.1. Channel Capacity

Consider a discrete memoryless channel (DMC) having an input alphabet X = {x_0, x_1, ..., x_{q-1}} and an output alphabet Y = {y_0, y_1, ..., y_{r-1}}. Let us denote the set of channel transition probabilities by P(y_i | x_j).
The average mutual information provided by the output Y about the input X is given by:

I(X; Y) = \sum_{j=0}^{q-1} \sum_{i=0}^{r-1} P(x_j) P(y_i | x_j) \log \frac{P(y_i | x_j)}{P(y_i)}    (4.1)

The channel transition probabilities P(y_i | x_j) are determined by the channel characteristics (particularly the noise in the channel). However, the input symbol probabilities P(x_j) are within the control of the discrete channel encoder. The value of the average mutual information I(X; Y), maximized over the set of input symbol probabilities P(x_j), is a quantity that depends only on the channel transition probabilities P(y_i | x_j). This quantity is called the capacity of the channel.
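Equation (4.1) translates directly into a short computation. The following is a minimal Python sketch (assuming numpy; the function name mutual_information is illustrative, not from the text) that evaluates I(X; Y) for a given input distribution and transition matrix:

    import numpy as np

    def mutual_information(p_x, W):
        # p_x : input probabilities P(x_j), length q
        # W   : transition matrix with W[j, i] = P(y_i | x_j), shape (q, r)
        p_x = np.asarray(p_x, dtype=float)
        W = np.asarray(W, dtype=float)
        p_y = p_x @ W                              # output probabilities P(y_i)
        I = 0.0
        for j in range(W.shape[0]):                # sum over inputs x_j
            for i in range(W.shape[1]):            # sum over outputs y_i
                if p_x[j] > 0 and W[j, i] > 0:     # skip zero-probability terms
                    I += p_x[j] * W[j, i] * np.log2(W[j, i] / p_y[i])
        return I                                   # bits per channel use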

Definition 4.1 The Capacity of a DMC is defined as the maximum average mutual information in any single use of the channel, where the maximization is over all possible input probability distributions P(x_j). That is,

C = \max_{P(x_j)} I(X; Y)

C = \max_{P(x_j)} \sum_{j=0}^{q-1} \sum_{i=0}^{r-1} P(x_j) P(y_i | x_j) \log \frac{P(y_i | x_j)}{P(y_i)}    (4.2)

The maximization of I(X; Y) is performed under the constraints


P(x_j) \ge 0, \quad \text{and} \quad \sum_{j=0}^{q-1} P(x_j) = 1

The unit of channel capacity is bits per channel use (provided the base of the logarithm is 2).
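In general the maximization in (4.2) has no closed form. One standard numerical approach is the Blahut-Arimoto algorithm; the sketch below (an illustration added here, not part of the original text) iterates it under the constraints above, reusing mutual_information from the earlier sketch:

    def channel_capacity(W, n_iter=500):
        # W[j, i] = P(y_i | x_j); returns (capacity in bits/use, optimizing P(x_j))
        W = np.asarray(W, dtype=float)
        q = W.shape[0]
        p = np.full(q, 1.0 / q)                    # start from a uniform input
        for _ in range(n_iter):
            p_y = p @ W                            # current output distribution
            # D[j] = sum_i P(y_i|x_j) log2( P(y_i|x_j) / P(y_i) )
            D = np.array([sum(W[j, i] * np.log2(W[j, i] / p_y[i])
                              for i in range(W.shape[1]) if W[j, i] > 0)
                          for j in range(q)])
            p = p * 2.0 ** D                       # Blahut-Arimoto re-weighting
            p /= p.sum()                           # renormalize to sum to 1
        return mutual_information(p, W), p

For a binary symmetric channel with p = 0.1 (transition matrix [[0.9, 0.1], [0.1, 0.9]]), this should return a capacity close to 0.531 bits/use with a uniform input distribution, matching Example 4.1 below.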

Example 4.1 Consider a binary symmetric channel (BSC) with channel transition probabilities P(0|1) = p = P(1|0). The maximum in equation (4.2) is attained for equally likely inputs, P(0) = P(1) = 1/2 (by the symmetry of the channel), and we obtain the capacity of the channel as

C = 1 + p \log_2 p + (1 - p) \log_2 (1 - p)

Let us define the entropy function:


H(p) = -p \log_2 p - (1 - p) \log_2 (1 - p)

Hence, we can rewrite the capacity of a binary symmetric channel as


C = 1 - H(p).
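As a check on Example 4.1, the closed form C = 1 - H(p) is easy to code; the helper names below are illustrative:

    def binary_entropy(p):
        # H(p) = -p log2 p - (1 - p) log2 (1 - p), with H(0) = H(1) = 0
        if p in (0.0, 1.0):
            return 0.0
        return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

    def bsc_capacity(p):
        # capacity of the binary symmetric channel, C = 1 - H(p)
        return 1.0 - binary_entropy(p)

bsc_capacity(0.1) and channel_capacity([[0.9, 0.1], [0.1, 0.9]])[0] should agree to within the tolerance of the iteration.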

[Figure: capacity C = 1 - H(p) of the binary symmetric channel versus the crossover probability p]

The plot of the capacity versus p is given in the figure above. From the plot we make the following observations (verified numerically in the sketch after the list).
(i) For p = 0 (i.e., noise free channel), the capacity is 1 bit/use, as expected. Each time we use
the channel, we can successfully transmit 1 bit of information.
(ii) For p = 0.5, the channel capacity is 0, i.e., observing the output gives no information about
the input. It is equivalent to the case when the channel is broken. We might as well discard the
channel and toss a fair coin in order to estimate what was transmitted.
(iii) For 0.5 < p < 1, the capacity increases with increasing p. In this case we simply reverse the
positions of 1 and 0 at the output of the channel.
(iv) For p = 1 (i.e., every bit gets flipped by the channel), the capacity is again 1 bit/use, as
expected. In this case, one simply flips the bit at the output of the receiver so as to undo the
effect of the channel.
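The four observations can be confirmed numerically with the bsc_capacity sketch from Example 4.1:

    for p in (0.0, 0.5, 0.6, 0.9, 1.0):
        print(f"p = {p:.1f}:  C = {bsc_capacity(p):.4f} bits/use")
    # p = 0.0:  C = 1.0000   (observation i: noise-free channel)
    # p = 0.5:  C = 0.0000   (observation ii: output carries no information)
    # p = 0.6:  C = 0.0290   (observation iii: capacity grows again beyond p = 0.5)
    # p = 0.9:  C = 0.5310
    # p = 1.0:  C = 1.0000   (observation iv: deterministic bit flipping)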
