
NATIONAL INSTITUTE OF TECHNOLOGY ROURKELA

Department of Electrical Engineering


Mid Semester Examination, 2022-23 (Autumn)
Program: B. Tech. 7th Semester
Course Code: EE4405    Course Title: Digital Communication
Number of Pages: 4    Full Marks: 30    Duration: 2 hours

⋆ Answer all questions.


⋆ Answers to all parts of a question must be written in one place.
⋆ All variables must be clearly defined.
⋆ Missing data, if any, may be assumed suitably.
⋆ ALL THE BEST

1. The joint density function of X and Y is given by


\[
f(x, y) =
\begin{cases}
2e^{-x} e^{-2y}, & 0 < x < \infty,\ 0 < y < \infty \\
0, & \text{otherwise}
\end{cases}
\]
(a) Compute P{X < Y }. [2]
(b) Compute P{X < a}. [1]
Solution:
(a)
\[
P\{X < Y\} = \iint_{(x,y):\,x<y} f(x,y)\, dx\, dy
= \int_0^{\infty}\!\!\int_0^{y} 2e^{-x} e^{-2y}\, dx\, dy
= \int_0^{\infty} 2e^{-2y} \Big[\int_0^{y} e^{-x}\, dx\Big] dy
= \int_0^{\infty} 2e^{-2y} (1 - e^{-y})\, dy
\]
\[
= \int_0^{\infty} 2e^{-2y}\, dy - \int_0^{\infty} 2e^{-3y}\, dy = 1 - \frac{2}{3} = \frac{1}{3}
\]
(b)
\[
P\{X < a\} = \int_0^{a}\!\!\int_0^{\infty} 2e^{-x} e^{-2y}\, dy\, dx
= \int_0^{a} 2e^{-x} \Big[\int_0^{\infty} 2e^{-2y}\, dy\Big]\frac{1}{2}\cdot 2\, dx
= \int_0^{a} e^{-x}\, dx = 1 - e^{-a}
\]
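As a quick numerical cross-check (not part of the required solution), both results can be verified by Monte Carlo simulation: the joint density factorises into independent Exp(rate 1) and Exp(rate 2) marginals. The NumPy sketch below uses a threshold a = 1.5 chosen purely for illustration.

```python
import numpy as np

# Monte Carlo check of Question 1: X ~ Exp(rate=1), Y ~ Exp(rate=2), independent,
# so the joint density is 2*exp(-x)*exp(-2y) on the positive quadrant.
rng = np.random.default_rng(0)
n = 1_000_000
x = rng.exponential(scale=1.0, size=n)   # mean 1/rate = 1
y = rng.exponential(scale=0.5, size=n)   # mean 1/rate = 1/2

print((x < y).mean())                    # ~ 1/3, matching part (a)
a = 1.5                                  # illustrative threshold for part (b)
print((x < a).mean(), 1 - np.exp(-a))    # both ~ 0.777
```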

2. Suppose that 10 percent of the families in a certain community have no children, 40 percent have 1, 35 percent have 2, and 15 [3]
percent have 3 children; suppose further that each child is equally likely (and independently) to be a boy or a girl. A family is
chosen at random from this community; let B denote the number of boys and G the number of girls in that family. Find the
probability that the family has at least 1 girl.
Solution: We enumerate the joint probabilities P{B = b, G = g}.
P{B = 0, G = 0} = P{no children} = 0.10
P{B = 0, G = 1} = P{1 child} P{1 girl | 1 child} = 0.40 × (1/2) = 0.2
P{B = 0, G = 2} = P{2 children} P{2 girls | 2 children} = 0.35 × (1/2)^2 = 0.0875
P{B = 0, G = 3} = P{3 children} P{3 girls | 3 children} = 0.15 × (1/2)^3 = 0.01875
P{B = 1, G = 1} = P{2 children} P{1 girl, 1 boy | 2 children} = 0.35 × 2 × (1/2)^2 = 0.175
P{B = 2, G = 1} = P{3 children} P{1 girl, 2 boys | 3 children} = 0.15 × 3 × (1/2)^3 = 0.05625
P{B = 1, G = 2} = P{3 children} P{2 girls, 1 boy | 3 children} = 0.15 × 3 × (1/2)^3 = 0.05625
Probability of a family having at least 1 girl = P{B = 0, G = 1} + P{B = 0, G = 2} + P{B = 0, G = 3} + P{B = 1, G = 1} + P{B = 1, G = 2} + P{B = 2, G = 1} = 0.2 + 0.0875 + 0.01875 + 0.175 + 0.05625 + 0.05625 = 0.59375 ≈ 0.594
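A short enumeration sketch (illustrative only) confirms the value 0.59375 by summing over family sizes and numbers of girls.

```python
# Exhaustive check of Question 2: enumerate family sizes and boy/girl splits.
from math import comb

size_prob = {0: 0.10, 1: 0.40, 2: 0.35, 3: 0.15}
p_at_least_one_girl = 0.0
for n_children, p_n in size_prob.items():
    for girls in range(1, n_children + 1):
        # 'girls' out of n_children, each child independently a girl w.p. 1/2
        p_at_least_one_girl += p_n * comb(n_children, girls) * 0.5 ** n_children

print(p_at_least_one_girl)   # 0.59375
```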

3. Find pY(y) when pXY(x, y) = y e^{-y(x+1)} u(x) u(y). Are the random variables X and Y statistically independent? Justify your [4]
answer. Also find pX(x|y).


Solution:
\[
p_Y(y) = \int_{-\infty}^{\infty} p_{XY}(x, y)\, dx
= \int_{-\infty}^{\infty} y e^{-y(x+1)} u(x) u(y)\, dx
= y e^{-y} u(y) \int_0^{\infty} e^{-yx}\, dx
= y e^{-y} u(y)\,\frac{1}{y} = e^{-y} u(y)
\]
\[
p_X(x) = \int_{-\infty}^{\infty} p_{XY}(x, y)\, dy
= \int_{-\infty}^{\infty} y e^{-y(x+1)} u(x) u(y)\, dy
= u(x) \int_0^{\infty} y e^{-y(x+1)}\, dy
= \frac{u(x)}{(x+1)^2}
\]
Since p_{XY}(x, y) = y e^{-yx} u(x) e^{-y} u(y) ≠ p_X(x) p_Y(y), the random variables X and Y are not statistically independent.
\[
p_X(x|y) = \frac{p_{XY}(x, y)}{p_Y(y)} = \frac{y e^{-y(x+1)} u(x) u(y)}{e^{-y} u(y)} = y e^{-yx} u(x)
\]
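If SymPy is available, the marginals, the independence check, and the conditional density can be verified symbolically. The sketch below is illustrative and works on x, y > 0 so the unit-step factors can be dropped.

```python
# Symbolic check of Question 3 using SymPy (assumed available).
import sympy as sp

x, y = sp.symbols('x y', positive=True)
p_xy = y * sp.exp(-y * (x + 1))                 # joint density on x > 0, y > 0

p_y = sp.integrate(p_xy, (x, 0, sp.oo))         # exp(-y)
p_x = sp.integrate(p_xy, (y, 0, sp.oo))         # 1/(x + 1)**2
print(sp.simplify(p_y), sp.simplify(p_x))

# Independence would require p_xy == p_x * p_y for all x, y > 0; it fails here.
print(sp.simplify(p_xy - p_x * p_y) == 0)       # False -> not independent

print(sp.simplify(p_xy / p_y))                  # conditional p(x|y) = y*exp(-x*y)
```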

4. State the linearity property of the expectation operator and prove it. [4]
Solution:
Linearity property: for two random variables X and Y, E[X + Y] = E[X] + E[Y].

1. Consider a random variable Z defined by Z = X + Y, where X and Y are two continuous RVs whose PDFs are denoted by fX(x) and fY(y), respectively. By definition,
\[
E[Z] = \int_{-\infty}^{\infty} z f_Z(z)\, dz
\]

2. Since fZ(z) is given by the convolution integral fZ(z) = ∫ fX(x) fY(z − x) dx, we may express the expectation E[Z] as the double integral
\[
E[Z] = \int_{-\infty}^{\infty} z f_Z(z)\, dz
= \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} z f_X(x) f_Y(z - x)\, dx\, dz
= \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} z f_{X,Y}(x, z - x)\, dx\, dz
\]
where the joint probability density function is fX,Y(x, z − x) = fX(x) fY(z − x).

3. Changing the variables to y = z − x and x = x,
\[
E[Z] = \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} (x + y) f_{X,Y}(x, y)\, dx\, dy
= \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} x f_{X,Y}(x, y)\, dx\, dy
+ \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} y f_{X,Y}(x, y)\, dx\, dy
= \int_{-\infty}^{\infty} x f_X(x)\, dx + \int_{-\infty}^{\infty} y f_Y(y)\, dy
= E[X] + E[Y]
\]

4. Thus the expectation of a sum of random variables is equal to the sum of the individual expectations.
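A brief numerical illustration (not part of the proof) shows that the sample means obey E[X + Y] = E[X] + E[Y] even when Y is deliberately made dependent on X; the distributions below are arbitrary choices for the sketch.

```python
# Numerical illustration of Question 4: E[X + Y] = E[X] + E[Y],
# with dependent samples to emphasise that independence is not required.
import numpy as np

rng = np.random.default_rng(1)
x = rng.exponential(scale=2.0, size=500_000)
y = 3.0 * x + rng.normal(size=x.size)    # Y deliberately depends on X

print(np.mean(x + y))                    # ~ E[X] + E[Y]
print(np.mean(x) + np.mean(y))           # same value up to sampling noise
```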

5. Prove that the entropy of a discrete random variable S is bounded as 0 ≤ H(S) ≤ log2(K), where K is the number of symbols in the [6]
alphabet S.
Solution: The entropy of the discrete random variable S is bounded as follows:
0 ≤ H(S) ≤ log2(K), where K is the number of symbols in the alphabet S.    (1)

Elaborating on the two bounds on entropy in Eq. (1), we now make two statements:

1. H(S) = 0 if, and only if, the probability pk = 1 for some k and the remaining probabilities in the set are all zero; this lower
bound on entropy corresponds to no uncertainty.
2. H(S) = log2 K if, and only if, pk = 1/K for all k (i.e., all symbols in the source alphabet S are equiprobable); this upper
bound on entropy corresponds to maximum uncertainty.

To prove these properties of H(S), we proceed as follows.

– First, since each probability pk is less than or equal to unity, it follows that each term pk log2(1/pk) in the entropy sum is always
non-negative, so H(S) ≥ 0.
– Second, we note that the product term pk log2(1/pk) is zero if, and only if, pk = 0 or 1.

We therefore deduce that H(S) = 0 if, and only if, pk = 1 for some k and all the remaining probabilities are zero. This completes the proofs of
the lower bound in Eq. (1) and statement 1.
To prove the upper bound in Eq. (1) and statement 2, we make use of a property of the natural logarithm:
loge x ≤ x − 1,  x > 0    (2)

This inequality can be readily verified by plotting the functions ln(x) and (x − 1) versus x, as shown in the sketch below.


[Figure: plots of y = ln(x) and y = x − 1 versus x, illustrating that ln(x) ≤ x − 1.]

To proceed with the proof, consider first any two different probability distributions, denoted by {p0, p1, ..., pK−1} and {q0,
q1, ..., qK−1}, on the alphabet S = {s0, s1, ..., sK−1} of a discrete source. We may then define the relative entropy of these two
distributions:
• Cross-entropy(p, q): the average number of bits needed to encode symbols drawn from p using a code designed for q, i.e., the exact bits required under p plus the extra bits caused by the mismatch.

• D(p||q) = Cross-entropy(p, q) − Entropy(p)

\[
D(p\|q) = -\sum_{k=0}^{K-1} p_k \log_2 q_k + \sum_{k=0}^{K-1} p_k \log_2 p_k
= \sum_{k=0}^{K-1} p_k \log_2\!\Big(\frac{p_k}{q_k}\Big) \tag{3}
\]
Hence, changing to the natural logarithm and using the inequality of Eq. (2), we may express the summation on the right-hand
side of Eq. (3) as follows:
\[
\sum_{k=0}^{K-1} p_k \log_2\!\Big(\frac{p_k}{q_k}\Big)
= -\sum_{k=0}^{K-1} p_k \log_2\!\Big(\frac{q_k}{p_k}\Big)
\ge -\frac{1}{\ln 2} \sum_{k=0}^{K-1} p_k \Big(\frac{q_k}{p_k} - 1\Big)
= -\frac{1}{\ln 2} \sum_{k=0}^{K-1} (q_k - p_k) = 0
\]

We thus have the fundamental property of probability theory:


D(p||q) ≥ 0 (4)

In words, Eq. (4) states:


The relative entropy of a pair of different discrete distributions is always nonnegative; it is zero only when the two distributions
are identical.
Suppose we next put
\[
q_k = \frac{1}{K}, \qquad k = 0, 1, \ldots, K-1
\]
which corresponds to a source alphabet S with equiprobable symbols. Using this distribution in Eq. (3) yields
\[
D(p\|q) = \sum_{k=0}^{K-1} p_k \log_2 p_k + \log_2 K \sum_{k=0}^{K-1} p_k = -H(S) + \log_2 K
\]
Since D(p||q) ≥ 0 by Eq. (4), we obtain
\[
H(S) \le \log_2 K \tag{5}
\]


Thus, H(S) is always less than or equal to log2 K. The equality holds if, and only if, the symbols in the alphabet S are
equiprobable. This completes the proof of Eq. (1) and with it the accompanying statements 1 and 2.
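The two bounds, together with the nonnegativity of the relative entropy in Eq. (4), can be spot-checked numerically; the sketch below draws random distributions purely for illustration.

```python
# Numerical check of Question 5: 0 <= H(S) <= log2(K) and D(p||q) >= 0.
import numpy as np

rng = np.random.default_rng(2)
K = 8
for _ in range(5):
    p = rng.random(K); p /= p.sum()
    q = rng.random(K); q /= q.sum()
    H = -np.sum(p * np.log2(p))
    D = np.sum(p * np.log2(p / q))
    assert 0.0 <= H <= np.log2(K) + 1e-12
    assert D >= -1e-12
    print(f"H = {H:.4f} <= {np.log2(K):.4f},  D(p||q) = {D:.4f} >= 0")

# The uniform distribution attains the upper bound:
u = np.full(K, 1 / K)
print(-np.sum(u * np.log2(u)))   # = log2(K) = 3.0
```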

6. Prove that the mutual information of a channel is symmetric, i.e., I(X;Y) = I(Y;X). [4]
Solution: Start from the entropy of the channel input X:
\[
H(X) = \sum_{j=0}^{J-1} p(x_j) \log_2\!\Big(\frac{1}{p(x_j)}\Big)
= \sum_{j=0}^{J-1} p(x_j) \log_2\!\Big(\frac{1}{p(x_j)}\Big) \sum_{k=0}^{K-1} p(y_k|x_j)
= \sum_{j=0}^{J-1} \sum_{k=0}^{K-1} p(y_k|x_j) p(x_j) \log_2\!\Big(\frac{1}{p(x_j)}\Big)
= \sum_{j=0}^{J-1} \sum_{k=0}^{K-1} p(x_j, y_k) \log_2\!\Big(\frac{1}{p(x_j)}\Big) \tag{6}
\]
Subtracting the conditional entropy H(X|Y) = Σ_j Σ_k p(x_j, y_k) log2(1/p(x_j|y_k)) from Eq. (6) gives the mutual information
\[
I(X;Y) = H(X) - H(X|Y)
= \sum_{j=0}^{J-1} \sum_{k=0}^{K-1} p(x_j, y_k) \log_2\!\Big(\frac{p(x_j|y_k)}{p(x_j)}\Big) \tag{7}
\]

To further confirm this property, we may use Bayes’ rule for conditional probabilities,


\[
\frac{p(x_j|y_k)}{p(x_j)} = \frac{p(y_k|x_j)}{p(y_k)} \tag{8}
\]

Substituting Eq. (8) into Eq. (7) and interchanging the order of summation, we get
\[
I(X;Y) = \sum_{k=0}^{K-1} \sum_{j=0}^{J-1} p(x_j, y_k) \log_2\!\Big(\frac{p(y_k|x_j)}{p(y_k)}\Big) = I(Y;X) \tag{9}
\]
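The symmetry in Eq. (9) can be spot-checked numerically for an arbitrary joint distribution p(x_j, y_k); the sketch below uses a randomly generated 4×3 joint matrix purely for illustration.

```python
# Numerical check of Question 6: the two forms of I(X;Y) coincide.
import numpy as np

rng = np.random.default_rng(3)
p_xy = rng.random((4, 3))
p_xy /= p_xy.sum()                      # joint distribution over (x_j, y_k)
p_x = p_xy.sum(axis=1, keepdims=True)   # marginal p(x_j)
p_y = p_xy.sum(axis=0, keepdims=True)   # marginal p(y_k)

I_xy = np.sum(p_xy * np.log2(p_xy / (p_x * p_y)))
# Swapping the roles of X and Y (transpose the joint) gives the same value:
I_yx = np.sum(p_xy.T * np.log2(p_xy.T / (p_y.T * p_x.T)))
print(I_xy, I_yx)                       # equal up to floating-point error
```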
7. Suppose that equal numbers of the letter grades A, B, C, D, and F are given in a certain course. How much information in bits [2]
have you received when the instructor tells you that your grade is not F? How much more information do you need to determine
your grade?
Solution: P(not F) = 4/5 ⇒ I = log2(5/4) = 0.322 bits,
P(specific grade) = 1/5 ⇒ I = log2(5) = 2.322 bits,
so I_needed = 2.322 − 0.322 = 2 bits.
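The arithmetic can be confirmed in a few lines (illustrative only).

```python
# Quick check of the arithmetic in Question 7.
from math import log2

print(log2(5 / 4))             # 0.3219... bits received from "not F"
print(log2(5))                 # 2.3219... bits to identify one of 5 equiprobable grades
print(log2(5) - log2(5 / 4))   # 2.0 bits still needed (= log2(4))
```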
8. Consider the random variable X = {x1 , x2 , x3 , x4 , x5 , x6 , x7 } with probabilities 0.5, 0.26, 0.11, 0.04, 0.04, 0.03, 0.02, respectively.
(a) Find a binary Huffman code for X. [2]
(b) Extend the binary Huffman method to a ternary alphabet (alphabet of 3 symbols) and apply it to X. [2]
Solution:
(a) Find a binary Huffman code for X. [2]
Symbol   Code     Stage I   Stage II   Stage III   Stage IV   Stage V   Stage VI
x1       0        0.50      0.50       0.50        0.50       0.50      0.50
x2       10       0.26      0.26       0.26        0.26       0.26      0.50
x3       111      0.11      0.11       0.11        0.13       0.24
x4       11000    0.04      0.05       0.08        0.11
x5       11001    0.04      0.04       0.05
x6       11010    0.03      0.04
x7       11011    0.02

(At each stage the two lowest probabilities are merged, and the branch taken at each merge contributes one bit of the codeword.)

The expected length of the codewords for the binary Huffman code is 2 bits (H(X) = 1.99 bits).
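A minimal heap-based sketch (one possible implementation, not the tabular method used above) reproduces an optimal binary code for X and its expected length of 2 bits.

```python
# Binary Huffman coding for X using a priority queue (illustrative sketch).
import heapq

probs = {'x1': 0.50, 'x2': 0.26, 'x3': 0.11, 'x4': 0.04,
         'x5': 0.04, 'x6': 0.03, 'x7': 0.02}

# Each heap entry: (probability, tie-breaker, {symbol: partial code}).
heap = [(p, i, {s: ''}) for i, (s, p) in enumerate(probs.items())]
heapq.heapify(heap)
counter = len(heap)
while len(heap) > 1:
    p0, _, c0 = heapq.heappop(heap)     # two least probable subtrees
    p1, _, c1 = heapq.heappop(heap)
    merged = {s: '0' + c for s, c in c0.items()}
    merged.update({s: '1' + c for s, c in c1.items()})
    heapq.heappush(heap, (p0 + p1, counter, merged))
    counter += 1

code = heap[0][2]
avg_len = sum(probs[s] * len(code[s]) for s in probs)
print(code)
print(avg_len)    # ~2.0 bits/symbol, matching the table above
```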

(b) Extend the binary Huffman method to a ternary alphabet (alphabet of 3 symbols) and apply it to X. [2]

Symbol   Code   Stage I   Stage II   Stage III
x1       0      0.50      0.50       0.50
x2       1      0.26      0.26       0.26
x3       20     0.11      0.11       0.24
x4       22     0.04      0.09
x5       210    0.04      0.04
x6       211    0.03
x7       212    0.02

(At each stage the three lowest probabilities are merged, and the branch taken at each merge contributes one ternary code symbol 0, 1, or 2.)
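Since the number of symbols satisfies 7 ≡ 1 (mod 3 − 1), no dummy symbols are needed in the ternary construction. Reading the codeword lengths off the table above, the expected code length works out to 1.33 ternary symbols per source symbol; a short check (illustrative only):

```python
# Expected length of the ternary code read off the table above.
probs = [0.50, 0.26, 0.11, 0.04, 0.04, 0.03, 0.02]
lengths = [1, 1, 2, 2, 3, 3, 3]        # x1..x7 from the ternary table
print(sum(p * l for p, l in zip(probs, lengths)))   # 1.33 ternary symbols/symbol
```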

*****

