Solutions to Chapter 7 Problems
Introduction to Communication Systems, by Upamanyu Madhow

Problem 7.1 We have a system with Rc = 1/4 and QPSK modulation. The spectral efficiency is

r = Rc log2 M = (1/4) log2 4 = 1/2.

By (7.11), (Eb/N0)min = (2^r − 1)/r = 2(√2 − 1) = −0.82 dB. This scheme operates 1.5 dB away from the Shannon limit, so (Eb/N0)dB = −0.82 + 1.5 = 0.68 dB.

By (7.2), Es = rEb = Eb/2. So (Es/N0)dB = (Eb/N0)dB − 3 = −2.32 dB.
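These numbers can be checked with a short MATLAB sketch (base MATLAB only):

Rc = 1/4; M = 4;                      % code rate and constellation size
r  = Rc*log2(M);                      % spectral efficiency r = 1/2
EbN0_min = (2^r - 1)/r;               % Shannon limit (7.11), linear scale
EbN0_min_dB = 10*log10(EbN0_min)      % -0.82 dB
EbN0_dB = EbN0_min_dB + 1.5           % operating point: 0.68 dB
EsN0_dB = EbN0_dB + 10*log10(r)       % (7.2): Es = r*Eb, gives -2.32 dB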
Problem 7.2 The bit error rate of a Gray-coded constellation is approximated by Pb ≈ Q(√(ηp Eb / (2N0))), where ηp = d²min/Eb is the power efficiency. In this problem the BER is fixed at 10^−5 for all constellations. For uncoded constellations, the spectral efficiency is r = log2 M, where M is the size of the constellation. Given the spectral efficiency, we can use Eb/N0 > (2^r − 1)/r to find the Shannon limit on the minimum Eb/N0 for reliable communication. The following table summarizes these calculations for the different constellations:

Const.                QPSK    8PSK    16QAM   64QAM
r                     2       3       4       6
(Eb/N0)min (dB)       1.76    3.67    5.74    10.21
ηp                    4       2.535   1.6     0.568
Eb/N0                 9.10    14.35   22.73   63.95
Eb/N0 (dB)            9.58    11.56   13.56   18.05
Dist. to limit (dB)   7.82    7.89    7.82    7.84
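A sketch reproducing the table, taking the ηp values above as given:

Qinv = @(p) sqrt(2)*erfcinv(2*p);     % inverse Q-function via base MATLAB
Pb   = 1e-5;
r    = [2 3 4 6];                     % r = log2(M) for the four constellations
etap = [4 2.535 1.6 0.568];           % power efficiencies from the table
EbN0 = 2*Qinv(Pb)^2 ./ etap;          % solve Pb = Q(sqrt(etap*EbN0/2))
EbN0_dB  = 10*log10(EbN0)             % ~[9.6 11.6 13.6 18.1] dB
limit_dB = 10*log10((2.^r - 1)./r);   % Shannon limit per constellation
gap_dB   = EbN0_dB - limit_dB         % distance to the limit, ~7.8-7.9 dB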

Problem 7.3 By (7.2) we know that Es/N0 = (log2 M)(Eb/N0). Using the values of ηp derived in Problem 7.2, the BER of a Gray-coded constellation can be estimated by the following formula:

Pb ≈ Q(√(ηp Es / (2 log2 M N0)))
[Plot: BER versus Es/N0 (dB) for QPSK, 8PSK, 16QAM, and 64QAM.]

Figure 1: BER of Gray-coded constellations using the nearest-neighbors approximation (Problem 7.3a).
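A minimal sketch that regenerates the Figure 1 curves from the formula above:

Q = @(x) 0.5*erfc(x/sqrt(2));
EsN0dB = -5:0.1:20; EsN0 = 10.^(EsN0dB/10);
M    = [4 8 16 64];
etap = [4 2.535 1.6 0.568];           % same eta_p values as Problem 7.2
figure; hold on;
for ii = 1:4
    Pb = Q(sqrt(etap(ii)*EsN0/(2*log2(M(ii)))));
    plot(EsN0dB, Pb);
end
xlabel('Es/N0 (dB)'); ylabel('BER');
legend('QPSK','8PSK','16QAM','64QAM');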

(b) The BSC capacity is given by:

CBSC(p) = 1 − HB(p) bits/symbol

The BER computed in part (a) is used as the crossover probability p in the above formula to compute the capacity induced by making hard decisions. The capacity of a bandlimited AWGN channel is CAWGN = log2(1 + Es/N0), which is also plotted in Figure 2 for comparison. As can be seen, hard decisions do not allow the capacity to grow like the AWGN capacity: it saturates at a maximum of one bit/symbol, while the AWGN channel potentially has much more capacity.

[Plot: BSC capacity for QPSK, 8PSK, 16QAM, and 64QAM, and the bandlimited AWGN capacity, versus Es/N0 (dB).]

Figure 2: Capacity of a hard-decision system with its equivalent BSC channel (Problem 7.3b).
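A sketch of the Figure 2 comparison; as an illustration it uses the QPSK curve from part (a) as the crossover probability (the other constellations follow the same pattern):

Q  = @(x) 0.5*erfc(x/sqrt(2));
HB = @(p) -p.*log2(p) - (1-p).*log2(1-p);   % binary entropy function
EsN0dB = -5:0.1:20; EsN0 = 10.^(EsN0dB/10);
p = Q(sqrt(EsN0));                    % part (a) BER for QPSK (eta_p = 4, log2 M = 2)
C_BSC  = 1 - HB(p);                   % saturates at 1 bit/symbol
C_AWGN = log2(1 + EsN0);              % bandlimited AWGN capacity
plot(EsN0dB, C_BSC, EsN0dB, C_AWGN);
xlabel('Es/N0 (dB)'); ylabel('bits/symbol');
legend('BSC (hard decisions, QPSK)','Bandlimited AWGN');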

Problem 7.4 We have a BICM system with Rc = 2/3 and Gray-coded QPSK modulation. The spectral efficiency is r = Rc log2 M = (2/3) log2 4 = 4/3.

(a) Since the signal energy is Es = rEb, we have Es/N0 = r(Eb/N0) = (4/3)(Eb/N0).

(b) By (7.11), the Shannon limit for this system is

Eb/N0 > (2^r − 1)/r = (2^(4/3) − 1)/(4/3) ≈ 1.139 = 0.57 dB.

(c) For the suboptimal strategy of making hard decisions, the number of information bits per channel use (for the induced BSC) is R = 2/3 (i.e., each coded bit that we send over the BSC corresponds to 2/3 of an information bit). Therefore, based on Shannon's limit for the BSC, we have

R < 1 − Hb(p),

where p = Q(√(Es/N0)) is the crossover probability of the BSC. Hence,

2/3 < 1 − Hb(p)  ⇒  Hb(p) < 1/3  ⇒  p < 0.062.

We can find the minimum required Es/N0 using p = Q(√(Es/N0)), which gives Es/N0 > 2.366. Therefore, Eb/N0 > (3/4) × 2.366 = 1.77 ≈ 2.49 dB. The degradation due to hard decisions is about 2.49 − 0.57 = 1.92 dB.
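The thresholds above can be found numerically; a sketch using fzero:

HB = @(p) -p.*log2(p) - (1-p).*log2(1-p);
p_max = fzero(@(p) HB(p) - 1/3, [1e-6 0.4])    % ~0.06
EsN0_min = 2*erfcinv(2*p_max)^2                % invert p = Q(sqrt(Es/N0)), ~2.4
EbN0_min_dB = 10*log10((3/4)*EsN0_min)         % ~2.5 dB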
Problem 7.5 (a) We have a BICM system with Rc = 1/2. The attainable data rates are determined by the following formula:

Data rate = (10 Msymbols/sec) × (log2 M bits/symbol) × (1/2 information bit per code bit)

Substituting M = 4, 16, and 64 gives 10, 20, and 30 Mbps for QPSK, 16QAM, and 64QAM, respectively.

(b) The minimum required Es/N0 is 2^r − 1, where r = Rc log2 M is the spectral efficiency, equal to 1, 2, and 3 for QPSK, 16QAM, and 64QAM, respectively. Since we must operate 2 dB away from the Shannon limit, the minimum required Es/N0 is 2, 6.77, and 10.45 dB for QPSK, 16QAM, and 64QAM, respectively.

(c) With inverse-square path loss, the minimum required Es/N0 values and the maximum attainable ranges of two schemes are related by

(Es/N0)_1 / (Es/N0)_2 = (r_2 / r_1)²

Using the minimum required Es/N0 from part (b), the maximum attainable range is 15.28 km for 16QAM and 26.45 km for QPSK.
(d) For each of the code rates Rc = 1/2, 2/3, and 3/4, the minimum required Es/N0 and the corresponding data rates are determined by

Es/N0 (dB) = 10 log10(2^r − 1) + 2
Data rate = r × Bandwidth

where r is the spectral efficiency. The results are shown in the following table (the Rc = 1/2 entries were computed in part (b)):

Rc                 2/3                      3/4
Modulation         QPSK   16QAM   64QAM    QPSK   16QAM   64QAM
(Es/N0)dB          3.81   9.28    13.76    4.6    10.45   15.35
Data rate (Mbps)   13.3   26.6    40       15     30      45

On the other hand, using the reference point stated in part (c), which says that the maximum attainable range for 64QAM with rate 1/2 is 10 km, we can compute the link constant A from the inverse-square path loss:

A − 20 log10(10^4 m) = 10.45 dB
A = 90.45 dB

The received Es/N0 at any distance d (in meters) from the transmitter is then

(Es/N0)receiver = 90.45 − 20 log10(d),

which is compared with the minimum required Es/N0 computed above to determine the maximum attainable data rate at any distance d from the transmitter.

[Plot: data rate (Mbps) versus range for Rc = 1/2, 2/3, and 3/4.]

Figure 3: Data rate versus range (Problem 7.5d).
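A sketch that regenerates Figure 3 under the stated path-loss model (using the A = 90.45 dB constant and the 2 dB margin from above):

Rc = [1/2 2/3 3/4]; M = [4 16 64]; BW = 10e6; A = 90.45;
d = logspace(2, log10(30e3), 200);            % distance in meters
EsN0_rx = A - 20*log10(d);                    % received Es/N0 in dB
figure; hold on;
for rr = 1:3
    r_all = Rc(rr)*log2(M);                   % spectral efficiencies
    EsN0_req = 10*log10(2.^r_all - 1) + 2;    % required Es/N0 (2 dB margin)
    rate = zeros(size(d));
    for dd = 1:numel(d)
        ok = EsN0_req <= EsN0_rx(dd);         % feasible modulations at this range
        if any(ok), rate(dd) = max(r_all(ok))*BW; end
    end
    plot(d/1e3, rate/1e6);
end
xlabel('Range (km)'); ylabel('Data rate (Mbps)');
legend('Rc=1/2','Rc=2/3','Rc=3/4');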

Problem 7.6 (a) Applying l'Hôpital's rule, we have

lim_{r→0} (2^r − 1)/r = lim_{r→0} [∂(2^r − 1)/∂r] / [∂r/∂r] = lim_{r→0} (2^r ln 2)/1 = ln 2 = −1.59 dB.

Therefore, the minimum value of Eb/N0 for which reliable communication is possible is −1.59 dB.

(b) By (7.10), we have r < log2(1 + Es/N0). The bound is plotted in Figure 4.

[Plot: minimum required Es/N0 (dB) versus spectral efficiency r (bps/Hz); reliable communication is possible above the curve and not possible below it.]

Figure 4: Power-bandwidth tradeoff over the AWGN channel (Problem 7.6).
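The bound is simple to regenerate:

r = 0.01:0.01:7;
EsN0_min_dB = 10*log10(2.^r - 1);     % from r < log2(1 + Es/N0)
plot(r, EsN0_min_dB); grid on;
xlabel('Spectral efficiency r (bps/Hz)');
ylabel('Minimum required Es/N0 (dB)');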

Problem 7.7 (a) The given parity check matrix has the form of a systematic generator matrix. Since HG^T = 0, G can be taken as the parity check matrix of that systematic code:

G = [1 1 0 1 0 0 0
     0 1 1 0 1 0 0
     1 0 1 0 0 1 0
     1 1 1 0 0 0 1]

(b) By (7.30), dmin is the minimum weight over the nonzero codewords. One can list all of the codewords using (7.20) and see that dmin = 3. By (7.31), such a code can correct one error with a bounded distance decoder.

(c) The standard array of a (7,4) Hamming code is a 2^3 by 2^4 array. The coset leader of the first row is the all-zero codeword. The other 7 coset leaders are the single-error patterns. Thus, unlike in Table 7.1, no binary vectors are "left over" after running through the single-error patterns. The Hamming code is a "perfect" code: the decoding spheres of radius one cover the entire space of length-7 binary vectors. "Perfect" here refers only to how well the decoding spheres can be packed into the available space; it definitely does not mean "good", since the Hamming code is a weak code.
(d) The syndrome of the i-th coset is defined as s_i = H e_i^T:

Coset leader Syndrome


0000000 000
1000000 100
0100000 010
0010000 001
0001000 110
0000100 011
0000010 101
0000001 111
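A sketch verifying dmin and the syndrome table. The H used here is the systematic parity-check matrix consistent with G and with the syndrome table above (reconstructed for illustration, since the problem's H is not reproduced in this solution):

G = [1 1 0 1 0 0 0; 0 1 1 0 1 0 0; 1 0 1 0 0 1 0; 1 1 1 0 0 0 1];
H = [1 0 0 1 0 1 1; 0 1 0 1 1 0 1; 0 0 1 0 1 1 1];
assert(all(all(mod(H*G', 2) == 0)))           % check H*G^T = 0
u  = dec2bin(0:15) - '0';                     % all 16 messages
cw = mod(u*G, 2);                             % all codewords
dmin = min(sum(cw(2:end,:), 2))               % minimum nonzero weight = 3
e = [zeros(1,7); eye(7)];                     % coset leaders (weight <= 1)
syndromes = mod(e*H', 2)                      % one distinct syndrome per coset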

Problem 7.8 Since the Hamming code is a perfect code (see Problem 7.7), the decoding spheres of radius one cover the entire space of length-7 binary vectors. Therefore, there is no decoding failure (where the received vector falls outside the spheres of all the valid codewords), and all probability of error in decoding is due to undetected errors:

Probability of undetected error = 1 − Pc,

where Pc is the probability of correct decoding (corresponding to at most one bit error) and is computed as

Pc = (1 − p)^7 + 7p(1 − p)^6 = 0.998,

therefore,

Probability of undetected error = 2 × 10^−3
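The crossover probability p is not restated above; p = 0.01 reproduces Pc = 0.998, so this sketch assumes that value:

p  = 0.01;                                    % assumed: reproduces Pc = 0.998
Pc = (1-p)^7 + nchoosek(7,1)*p*(1-p)^6;       % correct decoding: <= 1 bit error
P_undetected = 1 - Pc                         % ~2e-3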
Problem 7.9 (a) In the extended Hamming code, the number of data bits is still 4, but one bit is added to the codeword. So k = 4, n = 8.

(b) From the generator matrix of the (7,4) Hamming code determined in Problem 7.7, the codeword bits (x1, x2, x3, x4, x5, x6, x7) are as follows:

x1 = u1 ⊕ u3 ⊕ u4
x2 = u1 ⊕ u2 ⊕ u4
x3 = u2 ⊕ u3 ⊕ u4
x4 = u1
x5 = u2
x6 = u3
x7 = u4
x8 = x1 ⊕ x2 ⊕ x3 ⊕ x4 ⊕ x5 ⊕ x6 ⊕ x7 = u1 ⊕ u2 ⊕ u3

Thus, the generator matrix of the extended Hamming code is:

G = [1 1 0 1 0 0 0 1
     0 1 1 0 1 0 0 1
     1 0 1 0 0 1 0 1
     1 1 1 0 0 0 1 0]

dmin is the minimum weight over the nonzero codewords. One can list all of the codewords using (7.20) and see that dmin = 4.

Problem 7.10 (a) Since the parity-check matrix consists of all nonzero binary vectors of length m, it must be an m × (2^m − 1) matrix. This shows that the codeword length is n = 2^m − 1 and that n − k = m. Therefore, for the (7,4) Hamming code, m = n − k = 7 − 4 = 3.

(b) Based on the previous part,

n = 2^m − 1
k = n − m = 2^m − m − 1.

Problem 7.11 (a)

n = 2^m − 1 = 1023
m = 10
mt = n − k = 100
t = 10

(b)

n = 2^m − 1 = 511
m = 9
k = n − mt = 511 − 10 × 9 = 421
k/n = 421/511
Problem 7.12 (a) The number of errors among the n code bits is X ~ Bin(n, p), so

P[X = k] = [n! / (k!(n−k)!)] p^k (1−p)^{n−k}
         = [p/(1−p)] · [n! / (k!(n−k)!)] · p^{k−1} (1−p)^{n−k+1}
         = [(n−k+1)/k] · [p/(1−p)] · [n! / ((k−1)!(n−k+1)!)] · p^{k−1} (1−p)^{n−k+1}
         = [(n−k+1)/k] · [p/(1−p)] · P[X = k−1].

(b) The following MATLAB program calculates Pe based on the recursive relationship in part (a):

% Parameters
n = 1023;
% Crossover probability of the BSC
p = 2e-2;
% Bounded-distance decoder of radius t
t = 10;

%% Evaluating the binomial PMF recursively:
P_x = zeros(1,n+1);
P_x(1) = (1-p)^n;
for ii = 2:n+1
    kk = ii-1;
    P_x(ii) = (p/(1-p))*((n-kk+1)/kk)*P_x(ii-1);
end

% Plot the binomial PMF
stem(P_x)
xlim([0 40])

%% Probability of error
% Find the index for t:
t_ind = floor(t)+1;
% Probability of error:
P_e = sum(P_x(t_ind+1:end))
% Probability of error approximated by the first term of the sum
P_e_first_term = nchoosek(n, t+1)*(p^(t+1))*((1-p)^(n-t-1))
% Probability of error approximated by a Gaussian distribution
P_e_Gaussian = qfunc((t+1-n*p)/sqrt(n*p*(1-p)))

The output of the code for the (1023, 923) BCH code of Problem 7.11(a), assuming p = 0.02, is Pe = 0.9827. (Why is the probability of error so high? Note that the average number of bits in error is np = 1023 × 0.02 = 20.46, whereas the BCH code is capable of correcting at most t = 10 errors.)

Problem 7.13 The following graph shows the results of simulation and theoretical evaluation of the error, failure, and correct decoding probabilities.

[Plot: simulated and theoretical decoding error, decoding failure, and correct decoding probabilities, together with the sum Pc + Pf + Pe, versus p.]

Figure 5: Probability of error, failure, and correct decoding for a (5,2) code.

Assuming the all-zero codeword is sent, a decoding error occurs if the received vector lies in columns 2 to 4 of rows 1 to 6 of the standard array (Table 7.1). These 18 vectors correspond to 6 error patterns of weight 3, 6 of weight 2, 5 of weight 4, and one of weight 5. Thus the decoding error probability is

Perror = 6p³(1 − p)² + 6p²(1 − p)³ + 5p⁴(1 − p) + p⁵

Decoding failure occurs if the received vector lies in the last two rows, which contain 4 error patterns of weight 2 and 4 of weight 3. Thus the decoding failure probability is

Pfailure = 4p²(1 − p)³ + 4p³(1 − p)²

These are plotted in Figure 5 for different values of p.
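A sketch plotting the theoretical curves of Figure 5; the Pc expression assumes correct decoding corresponds to the zero- and single-error coset leaders (the first six rows), consistent with the accounting above, so the three probabilities sum to 1:

p  = 0:0.01:0.5;
Pe = 6*p.^3.*(1-p).^2 + 6*p.^2.*(1-p).^3 + 5*p.^4.*(1-p) + p.^5;
Pf = 4*p.^2.*(1-p).^3 + 4*p.^3.*(1-p).^2;
Pc = (1-p).^5 + 5*p.*(1-p).^4;        % zero- or single-error patterns
plot(p, Pe, p, Pf, p, Pc, p, Pe+Pf+Pc);
xlabel('p'); legend('P_e','P_f','P_c','P_e+P_f+P_c');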

Problem 7.14 (a) Bounded distance decoding is capable of correcting up to t errors. The number of errors in a codeword is distributed as X ~ Bin(n, p), so the expected number of errors is E[X] = np. By the LLN, if E[X] > t, the code will fail to correct the errors with high probability. Therefore, np ≥ t, or equivalently p ≥ t/n, is an uninteresting regime. For the (1023, 923) BCH code, the uninteresting regime is p ≥ 10/1023 ≈ 9.8 × 10^−3.
(b) By the result of Problem 7.12,

P[X = k] / P[X = k−1] = [p/(1−p)] · (n−k+1)/k < [p/(1−p)] · n/k for all k ≥ t + 1.

Note that for p ≪ t/n < 1, we have 1 − p ≈ 1 and pn/k ≪ 1. Therefore,

P[X = k] / P[X = k−1] ≪ 1 for all k ≥ t + 1.

This shows that the terms in the sum in (7.47) decrease rapidly with k, which in turn justifies approximating the sum by its first term.
(c) The Central Limit Theorem states that if X_i, i ∈ {1, . . . , n}, are i.i.d. random variables with E[X_i] = µ < ∞ and Var[X_i] = σ² < ∞ for all i, then the empirical average

S_n = (1/n) Σ_{i=1}^n X_i

converges in distribution to N(µ, σ²/n) for large n. Accordingly, the sum of the random variables converges as nS_n ~ N(nµ, nσ²) for well-chosen values of n, µ, and σ². Note that simply letting n grow does not give as nice a convergence result for nS_n as it does for the empirical average S_n (since the mean and variance of nS_n both grow with n). Therefore, by the CLT, for X_i Bernoulli random variables with E[X_i] = p and Var(X_i) = p(1 − p), the distribution of X = Σ_{i=1}^n X_i is approximated by

X ~ N(np, np(1 − p)).

(d) For the (1023, 923) BCH code with t = 10 and p = 10^−3, we have (i) Pe = 1.2 × 10^−8, (ii) Pe ≈ 1.1 × 10^−8, (iii) Pe ≈ Q((t + 1 − np)/√(np(1 − p))) = Q(9.8691) ≈ 2.8 × 10^−23. We see that for n = 1023 and p = 10^−3, the Gaussian approximation is not a good approach.

(e) For p = 10^−4, the three ways of approximating the probability of incorrect decoding give (i) Pe = 2.77 × 10^−19, (ii) Pe ≈ 2.75 × 10^−19, and (iii) Pe ≈ Q((t + 1 − np)/√(np(1 − p))) = 9.05 × 10^−255 (!). We see that for smaller p, the Gaussian approximation is way off. (Try a moderate value, for example p = 10^−2, and compare the three approximations of Pe.)

(f) For very small values of p (p ≪ t/n), the first-term approximation gives very good estimates of the probability of incorrect decoding, whereas the Gaussian approximation leads to very bad results. For moderate values of p, the first term of the sum is no longer the dominant term, while the Gaussian approximation gets closer to the actual value of Pe.
Problem 7.15 Substituting these values of n and t into the MATLAB code of Problem 7.12, and assuming p = 10^−4, the error probabilities are as follows:

n     t   Pe          Pe first-term est.  Pe Gaussian est.
1023  16  3.2924e-32  3.2740e-32          0
511   10  1.3320e-22  1.3264e-22          0
255   5   3.5230e-13  3.5104e-13          1.0878e-306


Problem 7.16 (a) For a (255, 235) RS code, we have n = 255, k = 235, m = log2(n + 1) = 8, and dmin = n − k + 1 = 21. Therefore, (i) the maximum number of symbol errors that the code can correct is ⌊(dmin − 1)/2⌋ = 10; (ii) each symbol represents m = 8 bits; (iii) the code can correct 10 symbols, and each symbol represents 8 bits, so the 10 symbols can correspond to anywhere from 10 to 80 bit errors. In the best case the RS code corrects 80 bit errors (bursty errors), while in the worst case it corrects 10 (uncorrelated bit error locations).

(b) A symbol is in error if any of the m bits representing it are in error, so the symbol error probability is ps = 1 − (1 − pb)^m = 8 × 10^−3. The probability of incorrect decoding can be calculated using the MATLAB code developed for Problem 7.12 as Pe = P[X > t] ≈ 8.61 × 10^−6.

(c) We want Pe < 10^−12. Using the MATLAB code of Problem 7.12, a simple search finds the largest ps for which the condition on Pe is satisfied: ps < 1.6 × 10^−3 is required. From ps = 1 − (1 − pb)^m we find pb < 2 × 10^−4.

(d) Again we want Pe < 10^−12, but this time pb = 10^−3 is fixed, so the symbol error probability is ps = 8 × 10^−3. We now need the smallest correction radius t for which the sum in (7.47) satisfies the condition on Pe. Using the MATLAB code of Problem 7.12, a simple investigation shows that t > 18 is required. Therefore ⌊(dmin − 1)/2⌋ > 18, hence dmin > 37. Substituting dmin = n − k + 1, we conclude that k < 219.
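A sketch of the search in part (c), reusing the PMF recursion from Problem 7.12 (n = 255 symbols, t = 10, m = 8):

n = 255; t = 10; target = 1e-12;
ps_grid = logspace(-4, -2, 400);
Pe = zeros(size(ps_grid));
for jj = 1:numel(ps_grid)
    p = ps_grid(jj);
    Px = zeros(1, n+1); Px(1) = (1-p)^n;
    for k = 1:n
        Px(k+1) = Px(k)*(p/(1-p))*((n-k+1)/k);
    end
    Pe(jj) = sum(Px(t+2:end));                % P[X > t]
end
ps_max = max(ps_grid(Pe < target))            % ~1.6e-3
pb_max = 1 - (1-ps_max)^(1/8)                 % ~2e-4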
Problem 7.17 (a) By Bayes' theorem:

P[x = 0|y] = P[y|x = 0] P[x = 0] / P(y) = π0 p(y|0) / P(y)
P[x = 1|y] = P[y|x = 1] P[x = 1] / P(y) = (1 − π0) p(y|1) / P(y)

Substituting these two expressions into the formula for L(x) gives the desired result. Clearly, we can then make the decomposition L(x) = Lchannel(x) + Lprior(x).

(b) The conditional densities can be derived as follows:

p(y|x = 0) = p(N = y − A) = (1/√(2πσ²)) e^{−(y−A)²/2σ²}, i.e., (y|x = 0) ~ N(A, σ²)
p(y|x = 1) = p(N = y + A) = (1/√(2πσ²)) e^{−(y+A)²/2σ²}, i.e., (y|x = 1) ~ N(−A, σ²)

(c)

Lchannel(x) = log[p(y|0)/p(y|1)] = log(e^{−(y−A)²/2σ² + (y+A)²/2σ²}) = log(e^{2Ay/σ²}) = 2Ay/σ²

(d) Since Lchannel(x) = 2Ay/σ² and y|x ~ N(±A, σ²),

Lchannel | x = 0 ~ N(2A²/σ², 4A²/σ²)
Lchannel | x = 1 ~ N(−2A²/σ², 4A²/σ²)

(e) The amplitude in each dimension is A. Therefore, Es = A² + A² = 2A². The noise variance in each dimension is σ². Thus, Es/N0 = 2A²/2σ² = A²/σ².

(f)

Es = (2 bits/symbol) × (Rcode data bits/bit) × Eb = 2 Rcode Eb

(g)

Es/N0 = 2 Rcode (Eb/N0) = 10^{0.3} ≈ 2
A²/σ² = Es/N0 = 2

so with σ² = 1, A = √2. Then

Lchannel(x) = 2√2 y
Lchannel | x = 0 ~ N(4, 8)
Lchannel | x = 1 ~ N(−4, 8)
(h)

Pe = Q(dmin/(2σ)) = Q(√2 A) = Q(√(2 · 2)) = Q(2)
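A quick simulation sketch corroborating the LLR statistics in (g) (assumes σ = 1 as above):

A = sqrt(2); sigma = 1; N = 1e5;
x = randi([0 1], N, 1);                       % equiprobable bits
s = A*(1 - 2*x);                              % 0 -> +A, 1 -> -A
y = s + sigma*randn(N, 1);
L = 2*A*y/sigma^2;                            % channel LLR from part (c)
[mean(L(x==0)) var(L(x==0))]                  % ~[4, 8], i.e. N(2A^2, 4A^2)
[mean(L(x==1)) var(L(x==1))]                  % ~[-4, 8]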

Problem 7.18 (a) Substituting for P[X = 0], we get

δ = P[X = 0] − 1/2 = e^L/(e^L + 1) − 1/2 = (1/2) (e^L − 1)/(e^L + 1) = (1/2) (e^{L/2} − e^{−L/2})/(e^{L/2} + e^{−L/2}) = (1/2) tanh(L/2)

(b) We know that

P[X3 = 0] = P[X1 = 0]P[X2 = 0] + P[X1 = 1]P[X2 = 1].

Substituting the probabilities in terms of the δi, we have

δ3 + 1/2 = (δ1 + 1/2)(δ2 + 1/2) + (1/2 − δ1)(1/2 − δ2)
         = δ1δ2 + δ1/2 + δ2/2 + 1/4 + 1/4 − δ1/2 − δ2/2 + δ1δ2
         = 1/2 + 2δ1δ2.

Therefore, δ3 = 2δ1δ2.

(c) Combining parts (a) and (b),

(1/2) tanh(L3/2) = 2 · [(1/2) tanh(L1/2)] · [(1/2) tanh(L2/2)],

so

tanh(L3/2) = tanh(L1/2) tanh(L2/2).
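A numerical spot-check of the tanh rule (arbitrary example LLRs):

L1 = 1.3; L2 = -0.7;                          % arbitrary input LLRs
p1 = exp(L1)/(1+exp(L1));                     % P[X1 = 0]
p2 = exp(L2)/(1+exp(L2));                     % P[X2 = 0]
p3 = p1*p2 + (1-p1)*(1-p2);                   % P[X3 = 0], X3 = X1 xor X2
L3 = log(p3/(1-p3));
tanh(L3/2) - tanh(L1/2)*tanh(L2/2)            % ~0 up to rounding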

Problem 7.19 (a) Following the hint:

p(y|x1 = 0) = p(y|x1x2 = 00) + p(y|x1x2 = 01) = p(y|s = −3A) + p(y|s = +3A),

where p(y|s = −3A) ~ N(−3A, σ²) and p(y|s = 3A) ~ N(3A, σ²). Following Problem 7.17(c),

Lchannel(x1) = 6Ay/σ²

Similarly,

p(y|x1 = 1) = p(y|x1x2 = 10) + p(y|x1x2 = 11) = p(y|s = −A) + p(y|s = A),

where p(y|s = −A) ~ N(−A, σ²) and p(y|s = A) ~ N(A, σ²), and

Lchannel(x2) = 2Ay/σ²
(b)

[Two histograms: LLR1 conditioned on x1 = 0 and x1 = 1; LLR2 conditioned on x2 = 0 and x2 = 1.]

As can be seen, they are not well separated.


(c)

Es/N0 = [(1/2)A² + (1/2)(3A)²]/σ² = 5A²/σ² = 20

According to (7.7),

Cd = (1/2) log2(1 + 20/2) = 1.73 data bits/channel use

On the other hand,

data bits/channel use = (data bits/bit) × (bits/channel use),

so Cd = Rcode × log2 M = 2 Rcode, and

Rcode = Cd/2 = 0.86
(d)

[Two histograms at the lower SNR: LLR1 conditioned on x1 = 0 and x1 = 1; LLR2 conditioned on x2 = 0 and x2 = 1.]

As expected, they overlap more.

Es/N0 = [(1/2)A² + (1/2)(3A)²]/σ² = 5A²/σ² = 5

Cd = (1/2) log2(1 + 5/2) = 0.9 data bits/channel use

Rcode = Cd/2 = 0.45
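The capacity arithmetic of parts (c) and (d) in one sketch:

for EsN0 = [20 5]                             % the two SNRs of parts (c) and (d)
    Cd = 0.5*log2(1 + EsN0/2);                % data bits per channel use, per (7.7)
    Rcode = Cd/2                              % prints 0.86, then 0.45
end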
