
EE4@UoK COM-II- Ch2-lect4 ONJ

Channel Capacity of Binary Non-Symmetric Channel:

To find the channel capacity of a binary non-symmetric channel, I(X, Y) must first be expressed as a function of the input probabilities. To find the input probability that gives the maximum value of I(X, Y), differentiate I(X, Y) with respect to the input probability and set the derivative equal to zero.

Example: Find the channel capacity for the channel matrix shown below.

P(Y/X) = [0.7 0.3; 0.1 0.9]

Solution: let p(x1) = p → p(x2) = 1-p → P(X) = [p 1-p]

P(X, Y) = P(X)d × P(Y/X) = [0.7p 0.3p; 0.1(1-p) 0.9(1-p)]

p(y1) = 0.1+0.6p & p(y2) = 0.9-0.6p

I(X, Y) = H(Y) – H(Y/X)

H(Y/X) = -∑i ∑j p(xi, yj) log2 p(yj/xi)

H(Y/X) = -[0.7plog20.7+0.3plog20.3+(0.1-0.1p)log20.1+(0.9-0.9p)log20.9]

H(Y) = -∑j p(yj) log2 p(yj)

H(Y) = -[(0.1+0.6p)log2(0.1+0.6p)+(0.9-0.6p)log2(0.9-0.6p)]

d/dp [I(X, Y)] = d/dp [H(Y) - H(Y/X)] = 0

d/dp [H(Y/X)] = -[0.7log20.7 + 0.3log20.3 - 0.1log20.1 - 0.9log20.9] = 0.2858/ln2

d/dp [H(Y)] = d/dp [-(0.1+0.6p)log2(0.1+0.6p) - (0.9-0.6p)log2(0.9-0.6p)]

1
EE4@UoK COM-II- Ch2-lect4 ONJ
Apply the Product Rule [f.g]ʹ = fʹ.g +f.gʹ

d/dp [H(Y)] = -0.6[log2(0.1+0.6p) - log2(0.9-0.6p)]

d/dp [I(X, Y)] = d/dp [H(Y)] - d/dp [H(Y/X)] = 0

-0.6[log2(0.1+0.6p) - log2(0.9-0.6p)] - [0.2858/ln2] = 0

Multiplying both sides by ln2:

-0.6[ln(0.1+0.6p) - ln(0.9-0.6p)] = 0.2858

ln[(0.1+0.6p)/ (0.9-0.6p)] = - 0.4763

[(0.1+0.6p)/(0.9-0.6p)] = e^(-0.4763) = 0.621

0.1+0.6p = 0.621(0.9-0.6p) =0.559 - 0.3726p

0.9726p = 0.459 →p = 0.472 = p(x1) → p(x2) =1-p = 0.528

P(X, Y) = [0.7p 0.3p; 0.1(1-p) 0.9(1-p)] = [0.3304 0.1416; 0.0528 0.4752]

At (p= 0.472) → (0.1+0.6p) = 0.3832, & (0.9-0.6p) = 0.6168

∴ H(Y) = -[0.3832ln0.3832 + 0.6168ln0.6168]/ln2 = 0.960 bits/symbol

H(Y/X) = -[0.3304ln0.7+0.1416ln0.3+0.0528ln0.1+0.4752ln0.9]/ln2

H(Y/X) = 0.6636 bits/symbol

C = I(X, Y)max = H(Y) – H(Y/X) =0.960-0.6636 = 0.2964 bits/symbol
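The stationary-point solution can be cross-checked numerically. The following Python sketch (not part of the original notes; the function name mutual_information is illustrative) sweeps the input probability p and confirms the maximum at p ≈ 0.472 with C ≈ 0.2964 bits/symbol:

```python
# Numerical check of the example: sweep p, compute I(X, Y) = H(Y) - H(Y/X)
# for the channel matrix P(Y/X) = [0.7 0.3; 0.1 0.9], and locate the maximum.
import numpy as np

P_YX = np.array([[0.7, 0.3],
                 [0.1, 0.9]])  # rows: x1, x2; columns: y1, y2

def mutual_information(p):
    """I(X, Y) in bits for the input distribution P(X) = [p, 1-p]."""
    px = np.array([p, 1 - p])
    pxy = px[:, None] * P_YX        # joint matrix P(X, Y) = P(X)d x P(Y/X)
    py = pxy.sum(axis=0)            # P(Y) = [0.1+0.6p, 0.9-0.6p]
    h_y = -np.sum(py * np.log2(py))
    h_y_given_x = -np.sum(pxy * np.log2(P_YX))
    return h_y - h_y_given_x

ps = np.linspace(0.001, 0.999, 9999)
i_vals = [mutual_information(p) for p in ps]
best = int(np.argmax(i_vals))
print(f"p* = {ps[best]:.3f}, C = {i_vals[best]:.4f} bits/symbol")
# Expected, matching the notes: p* = 0.472, C = 0.2964 bits/symbol
```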

Example: If p(x1) = 0.862, find the mutual information I(X, Y) for the channel matrix shown below and compare it with the channel capacity of the above example.

P(Y/X) = [0.7 0.3; 0.1 0.9]

Solution: p(x1) = 0.862 → p(x2) = 1 - 0.862 = 0.138

P(X, Y) = P(X)d × P(Y/X) = [0.6034 0.2586; 0.0138 0.1242]

p(y1) = 0.6172 & p(y2) = 0.3828 → P(Y) = [0.6172 0.3828]

H(Y) = – [0.6172ln0.6172 +0.3828ln0.3828]/ln2 = 0.960 bits/symbol

H(Y/X)= -[0.6034ln0.7+0.2586ln0.3+0.0138ln0.1+0.1242ln0.9]/ln2

H(Y/X) = 0.8244 bits/symbol

I(X, Y) = H(Y) - H(Y/X) = 0.960 - 0.8244 = 0.1356 bits/symbol, which is below the capacity C = 0.2964 bits/symbol obtained with the optimal input probabilities in the previous example.
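As a quick check (a sketch, assuming the same channel matrix as above), the mutual information at this non-optimal input can be evaluated directly:

```python
# Evaluate I(X, Y) at the non-optimal input p(x1) = 0.862 for the same channel.
import numpy as np

P_YX = np.array([[0.7, 0.3], [0.1, 0.9]])
px = np.array([0.862, 0.138])
pxy = px[:, None] * P_YX                 # joint P(X, Y)
py = pxy.sum(axis=0)                     # marginal P(Y)
i_xy = -np.sum(py * np.log2(py)) + np.sum(pxy * np.log2(P_YX))  # H(Y) - H(Y/X)
print(f"I(X, Y) = {i_xy:.4f} bits/symbol")   # ~0.1356, below C = 0.2964
```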

Channel Capacity of Non-binary and Non-Symmetric Channel:

As shown for the binary non-symmetric channel, finding the channel capacity requires differentiating I(X, Y) with respect to the input probabilities. For non-binary channels this process becomes more complex, because I(X, Y) depends on more than one variable and differentiating it needs more mathematical work. To simplify the procedure, it is very helpful to look for symmetries among the input symbols: if some input symbols play identical roles in the channel matrix (their rows are permutations of each other), their probabilities are equal at capacity, as shown in the following channel models.

[Channel model 1: a ternary channel whose input symmetry gives p(x1) = p(x2) = p and p(x3) = 1-2p]

[Channel model 2: a binary channel whose input symmetry gives p(x1) = p(x2) = p = 1/2]
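Since the original channel diagrams are not reproduced here, the following sketch illustrates the symmetry trick on a hypothetical 3×3 channel matrix whose first two rows are permutations of each other; with p(x1) = p(x2) = p and p(x3) = 1-2p, the capacity search reduces to one variable:

```python
# Symmetry trick for a non-binary channel. The 3x3 matrix below is
# hypothetical (the original diagrams are not available): its first two rows
# are permutations of each other, so at capacity p(x1) = p(x2) = p and
# p(x3) = 1 - 2p, leaving a single variable to search over.
import numpy as np

P_YX = np.array([[0.8, 0.1, 0.1],
                 [0.1, 0.8, 0.1],
                 [0.2, 0.2, 0.6]])       # hypothetical channel matrix

def mutual_information(p):
    px = np.array([p, p, 1 - 2 * p])
    pxy = px[:, None] * P_YX
    py = pxy.sum(axis=0)
    return (-np.sum(py * np.log2(py))
            + np.sum(pxy * np.log2(P_YX)))   # H(Y) - H(Y/X)

ps = np.linspace(0.001, 0.499, 4999)         # keep 1 - 2p > 0
i_vals = [mutual_information(p) for p in ps]
best = int(np.argmax(i_vals))
print(f"p* = {ps[best]:.3f}, C = {i_vals[best]:.4f} bits/symbol")
```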

Source Redundancy:

As shown before, the source redundancy in percentage units is

Re(%) = 1 - η = 1 - [H(x)/H(x)max] = 1 - [H(x)/log2 n]

The source redundancy in bits is: Re = H(x)max - H(x+/x-)

where (x+) is the next symbol produced, given that symbol (x-) has already been produced.

Example: A source produces two symbols A & B with conditional probabilities p(A/A) = 5/6 and p(B/B) = 2/3. Find the source entropy and the redundancy in percentage and in bits.

Solution:

P(x+/x-) = [5/6 1/6; 1/3 2/3] (rows: x- = A, B; columns: x+ = A, B)

Let P(A+) = P(A-) = p & P(B+)= P(B-)= 1-p

P(x+, x-) = P[A B]d × P(x+/x-) = [5p/6 p/6; (1-p)/3 2(1-p)/3]

P(A+) = P(A-): 5p/6 + (1-p)/3 = p → p = 2/3 → 1-p = 1/3


P(x+, x-) = [5/9 1/9; 1/9 2/9] → P(x) = [2/3 1/3]

H(x) = –[(2/3)ln(2/3) + (1/3)ln(1/3)]/ln2 = 0.918 bits/symbol

Re(%) = 1 - [H(x)/log2 n] = 1 - 0.918/log2 2 = 8.2%

H(x+/ x-) = - [(5/9)ln(5/6)+ (1/9)ln(1/6) +(1/9)ln(1/3) + (2/9)ln(2/3)]/ln2

H(x+/x-) = 0.74 bits/symbol.

Re = H(x)max - H(x+/x-) = 1 - 0.74 = 0.26 bits
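A minimal numerical check of this example (a sketch, not part of the original notes; all values are those derived above):

```python
# Markov-source example: stationary distribution, H(x), H(x+/x-), redundancy.
import numpy as np

P = np.array([[5/6, 1/6],    # p(A/A), p(B/A)
              [1/3, 2/3]])   # p(A/B), p(B/B)

# Stationary distribution: solve p = 5p/6 + (1-p)/3  ->  p = 2/3
px = np.array([2/3, 1/3])
pxy = px[:, None] * P                    # joint P(x+, x-)

h_x = -np.sum(px * np.log2(px))          # 0.918 bits/symbol
h_cond = -np.sum(pxy * np.log2(P))       # H(x+/x-) ~ 0.74 bits/symbol
re_pct = 1 - h_x / np.log2(2)            # 8.2 %
re_bits = np.log2(2) - h_cond            # 0.26 bits

print(h_x, h_cond, re_pct, re_bits)
```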

Channel Capacity by Shannon-Hartley:

C = B.log2 [1+ S/N]

B: Bandwidth, S: Signal power, N: Noise power

The root-mean-square value of the received signal (signal power + noise power) is √(S+N), and the root-mean-square value of the noise is √N.

Let m equal the number of levels that can be separated without error:

m = √(S+N)/√N = √(1+S/N)

The digital information per pulse is

I = log2(m) = log2 √(1+S/N) = [log2(1+S/N)]/2

If the channel transmits (k = 2B) pulses per second, then the channel capacity is

C= Ik= k[log2(1+S/N)]/2 = B.log2 [1+ S/N]
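A short sketch tying the level-counting argument to the capacity formula (the helper name capacity is illustrative, not from the notes):

```python
# Level-counting derivation of Shannon-Hartley: m = sqrt(1 + S/N)
# separable levels, I = log2(m) bits per pulse, k = 2B pulses per second,
# giving C = B*log2(1 + S/N).
import math

def capacity(bandwidth_hz, snr_linear):
    m = math.sqrt(1 + snr_linear)        # separable levels per pulse
    bits_per_pulse = math.log2(m)        # I = [log2(1 + S/N)] / 2
    pulses_per_sec = 2 * bandwidth_hz    # k = 2B (Nyquist rate)
    return pulses_per_sec * bits_per_pulse

print(capacity(3000, 1000))   # ~29.9 kbit/s, matching the 30 dB / 3 kHz example below
```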


The channel capacity cannot grow without limit as the bandwidth increases, because increasing the bandwidth also increases the noise power.

If the noise power spectral density is η/2, then the noise power is N = ηB and

C = B.log2 [1+ S/ηB] = (S/η).(ηB/S).log2 [1+ S/ηB]

If B → ∞ then, using lim x→0 (1/x).log2(1+x) = log2e = 1.44,

∴ C∞ = (S/η).log2e = 1.44S/η

Example: If the SNR is 30 dB and the available bandwidth is 3 kHz, find the maximum data rate that can be transmitted through the channel.

C =B.log2 [1+ S/N] =3000log2 (1+1000) = 3000log2 (1001) = 29.9 kbit/s

Note that the value of S/N = 1000 is equivalent to the SNR of 30dB.

Example: If the SNR is 20 dB, and the bandwidth available is 4 kHz, then

C =B.log2 [1+ S/N] = 4000log2 (1 + 100) = 4000log2 (101) = 26.63 kbit/s.

Note that the value of S/N = 100 is equivalent to the SNR of 20dB.

Example: If the requirement is to transmit at 5Mbit/s, and a bandwidth of 1 MHz


is used, then the minimum S/N required is given by

C =B.log2 [1+ S/N] → 5000 = 1000log2 (1+S/N)

5000/1000 =5= log2 (1+S/N)

ln(1+S/N) = 5ln2 = 3.465 → 1+S/N = e^(3.465) = 32 → S/N = 31

SNR = 10log10 (31) = 14.91 dB.
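Inverting the Shannon-Hartley formula gives the required S/N directly, as this sketch shows (the helper name required_snr is illustrative):

```python
# Given a target rate C and bandwidth B, the required S/N is 2^(C/B) - 1.
import math

def required_snr(rate_bps, bandwidth_hz):
    snr = 2 ** (rate_bps / bandwidth_hz) - 1
    return snr, 10 * math.log10(snr)

snr, snr_db = required_snr(5e6, 1e6)          # 5 Mbit/s over 1 MHz
print(f"S/N = {snr:.0f} ({snr_db:.2f} dB)")   # S/N = 31 (14.91 dB)
```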

Error-Free Communication Over a Noisy Channel

As seen previously, messages of a source with entropy H(x) can be encoded


by using an average of H(x) digits per message. This encoding has zero
redundancy. Hence, if we transmit these coded messages over a noisy channel,
some of the information will be received erroneously. There is absolutely no
possibility of error-free communication over a noisy channel when messages are
encoded with zero redundancy. The use of redundancy, in general, helps combat
noise. This can be seen from a simple example of a single parity-check code, in
which an extra binary digit is added to each code word to ensure that the total
number of ones in the resulting code word is always even (or odd). If a single error
occurs in the received code word, the parity is violated, and the receiver requests
retransmission. This is a rather simple example to demonstrate the utility of
redundancy.
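A toy sketch of the single parity-check idea described above (the helper functions are illustrative, not from the notes):

```python
# Single parity-check code: append one bit so the total number of ones is
# even; any single bit error then violates the parity and is detected
# (but cannot be corrected, so the receiver requests retransmission).
def add_even_parity(bits):
    return bits + [sum(bits) % 2]     # parity bit makes the ones-count even

def parity_ok(codeword):
    return sum(codeword) % 2 == 0     # even parity must hold at the receiver

word = [1, 0, 1, 1]
codeword = add_even_parity(word)      # [1, 0, 1, 1, 1]
print(parity_ok(codeword))            # True: no error detected

codeword[2] ^= 1                      # flip one bit (a channel error)
print(parity_ok(codeword))            # False: request retransmission
```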

The addition of an extra digit increases the average word length to [H(m)+1],
giving η = H(m)/[H(m) + 1], and the redundancy is 1 - η = 1/[H(m) + 1]. Thus,
the addition of an extra check digit increases redundancy, but it also helps combat
noise. Immunity against channel noise can be increased by increasing redundancy.
Shannon has shown that it is possible to achieve error-free communication by
adding sufficient redundancy.
