2022-23 Mid-Sem
2. Suppose that 10 percent of the families in a certain community have no children, 40 percent have 1, 35 percent have 2, and 15 percent have 3 children; suppose further that each child is equally likely (and independently) to be a boy or a girl. A family is chosen at random from this community; let B denote its number of boys and G its number of girls. Find the probability that the family has at least 1 girl. [3]
Solution: The relevant joint probabilities P{B, G} are:
P{B = 0, G = 0} = P{no children} = 0.10
P{B = 0, G = 1} = P{1 child} P{1 girl | 1 child} = 0.4 × (1/2) = 0.2
P{B = 0, G = 2} = P{2 children} P{2 girls | 2 children} = 0.35 × (1/2)^2 = 0.0875
P{B = 0, G = 3} = P{3 children} P{3 girls | 3 children} = 0.15 × (1/2)^3 = 0.01875
P{B = 1, G = 1} = P{2 children} P{1 girl, 1 boy | 2 children} = 0.35 × 2 × (1/2)^2 = 0.175
P{B = 2, G = 1} = P{3 children} P{1 girl, 2 boys | 3 children} = 0.15 × 3 × (1/2)^3 = 0.05625
P{B = 1, G = 2} = P{3 children} P{2 girls, 1 boy | 3 children} = 0.15 × 3 × (1/2)^3 = 0.05625
Probability of a family having at least 1 girl = P{B = 0, G = 1} + P{B = 0, G = 2} + P{B = 0, G = 3} + P{B = 1, G = 1} + P{B = 1, G = 2} + P{B = 2, G = 1} = 0.2 + 0.0875 + 0.01875 + 0.175 + 0.05625 + 0.05625 = 0.59375 ≈ 0.594
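The enumeration above can be cross-checked with a short script (a sketch; names are illustrative, and the family-size probabilities come from the problem statement):

```python
from fractions import Fraction
from itertools import product

# P{n children} from the problem statement
family_size = {0: Fraction(10, 100), 1: Fraction(40, 100),
               2: Fraction(35, 100), 3: Fraction(15, 100)}

p_at_least_one_girl = Fraction(0)
for n, p_n in family_size.items():
    # each of the 2**n boy/girl sequences is equally likely: (1/2)**n
    for seq in product("BG", repeat=n):
        if "G" in seq:
            p_at_least_one_girl += p_n * Fraction(1, 2 ** n)

print(p_at_least_one_girl)         # 19/32
print(float(p_at_least_one_girl))  # 0.59375
```

Exact rational arithmetic confirms the value 19/32 = 0.59375.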
3. Find pY(y) when pXY(x, y) = y e^(−y(x+1)) u(x) u(y). Determine whether the random variables X and Y are statistically independent; justify your answer. Also find pX(x|y). [4]
Digital Communication (EE4405) End Semester Examination
Solution:
pY(y) = ∫_{−∞}^{∞} pXY(x, y) dx = y e^(−y) u(y) ∫_0^∞ e^(−yx) dx = y e^(−y) u(y) × (1/y) = e^(−y) u(y)
pX(x) = ∫_{−∞}^{∞} pXY(x, y) dy = u(x) ∫_0^∞ y e^(−y(x+1)) dy = u(x) / (x + 1)^2
Since pXY(x, y) = y e^(−yx) u(x) e^(−y) u(y) ≠ pX(x) pY(y), the random variables X and Y are not statistically independent.
pX(x|y) = pXY(x, y) / pY(y) = y e^(−y(x+1)) u(x) u(y) / (e^(−y) u(y)) = y e^(−yx) u(x)
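Both marginals can be verified numerically (a sketch using a simple composite trapezoidal rule; `trapz`, `y0`, and `x0` are illustrative names):

```python
import math

def trapz(f, a, b, n=100000):
    """Composite trapezoidal rule for the integral of f over [a, b]."""
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

# joint density p_XY(x, y) = y * e^(-y(x+1)) for x, y > 0
p_xy = lambda x, y: y * math.exp(-y * (x + 1))

# marginal over x at a fixed y should equal e^(-y)
y0 = 1.3
py = trapz(lambda x: p_xy(x, y0), 0.0, 60.0)
print(py, math.exp(-y0))        # both ≈ 0.2725

# marginal over y at a fixed x should equal 1/(x+1)^2
x0 = 0.7
px = trapz(lambda y: p_xy(x0, y), 0.0, 60.0)
print(px, 1.0 / (x0 + 1) ** 2)  # both ≈ 0.3460
```

The upper limit 60 truncates the tail, whose contribution is negligible at these values of x and y.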
4. State the linearity property of the expectation operator and prove it. [4]
Solution:
1. Consider a random variable Z defined by Z = X + Y, where X and Y are two continuous RVs whose PDFs are denoted by fX(x) and fY(y), respectively. By definition, E[Z] = ∫_{−∞}^{∞} z fZ(z) dz.
2. Here fZ(z) is given by the convolution integral fZ(z) = ∫_{−∞}^{∞} fX(x) fY(z − x) dx. Accordingly, we may express the expectation E[Z] as the double integral
E[Z] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} z fX(x) fY(z − x) dx dz = ∫_{−∞}^{∞} ∫_{−∞}^{∞} z fX,Y(x, z − x) dx dz
where the joint probability density function fX,Y(x, z − x) = fX(x) fY(z − x).
3. Substituting y = z − x (so z = x + y) gives
E[Z] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} (x + y) fX,Y(x, y) dx dy = ∫_{−∞}^{∞} x fX(x) dx + ∫_{−∞}^{∞} y fY(y) dy = E[X] + E[Y]
where the last step uses the marginals ∫ fX,Y(x, y) dy = fX(x) and ∫ fX,Y(x, y) dx = fY(y). Hence E[X + Y] = E[X] + E[Y], which is the linearity property.
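Linearity can be illustrated with sample means, which obey E[X + Y] = E[X] + E[Y] exactly (up to rounding) even when Y depends on X (a small sketch):

```python
import random

# Sample-mean check of E[X + Y] = E[X] + E[Y]; Y is deliberately made
# dependent on X to show that independence is not required.
random.seed(0)
xs = [random.gauss(2.0, 1.0) for _ in range(10000)]
ys = [x * x + random.expovariate(1.0) for x in xs]

mean = lambda v: sum(v) / len(v)
lhs = mean([x + y for x, y in zip(xs, ys)])
rhs = mean(xs) + mean(ys)
print(abs(lhs - rhs) < 1e-8)  # True (equal up to floating-point rounding)
```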
5. Prove that entropy of a discrete random variable S is bounded as 0 ≤ H(S) ≤ log2 (K), where K is the number of symbols in the [6]
alphabet S.
Solution: The entropy of the discrete random variable S is bounded as follows:
0 ≤ H(S) ≤ log2(K), where K is the number of symbols in the alphabet S. (1)
Elaborating on the two bounds on entropy in Eq. (1), we now make two statements:
1 H(S) = 0, if, and only if, probability pk = 1 for some k, and remaining probabilities in the set are all zero; this lower
bound on entropy corresponds to no uncertainty.
2 H(S) = log2(K), if, and only if, pk = 1/K for all k (i.e., all symbols in the source alphabet S are equiprobable); this upper bound on entropy corresponds to maximum uncertainty.
– First, since each probability pk is less than or equal to unity, each term pk log2(1/pk) in the entropy sum H(S) = Σk pk log2(1/pk) is always non-negative, so H(S) ≥ 0.
– Second, we note that the product term pk log2(1/pk) is zero if, and only if, pk = 0 or 1.
We therefore deduce that H(S) = 0 if, and only if, pk = 1 for some k and all the remaining probabilities are zero. This completes the proofs of the lower bound in Eq. (1) and statement 1.
To prove the upper bound in Eq. (1) and statement 2, we make use of a property of the natural logarithm:
ln(x) ≤ x − 1, x > 0 (2)
This inequality can be readily verified by plotting the functions ln(x) and (x − 1) versus x: the line y = x − 1 lies on or above the curve y = ln(x), touching it only at x = 1.
[Figure: plots of y = ln(x) and y = x − 1 versus x, showing ln(x) ≤ x − 1 with equality only at x = 1.]
To proceed with the proof, consider first any two different probability distributions denoted by {p0, p1, . . . , pK−1} and {q0, q1, . . . , qK−1} on the alphabet S = {s0, s1, . . . , sK−1} of a discrete source. We may then define the relative entropy of these two distributions:
D(p‖q) = Σk pk log2(pk/qk)
Applying Eq. (2) with x = qk/pk gives −D(p‖q) = Σk pk log2(qk/pk) ≤ (1/ln 2) Σk pk (qk/pk − 1) = (1/ln 2)(Σk qk − Σk pk) = 0, so D(p‖q) ≥ 0, with equality if, and only if, qk = pk for all k. Choosing the uniform distribution qk = 1/K yields D(p‖q) = Σk pk log2(pk K) = log2(K) − H(S) ≥ 0, that is, H(S) ≤ log2(K), with equality if, and only if, pk = 1/K for all k. This proves the upper bound in Eq. (1) and statement 2.
• Cross-Entropy(p, q) = H(p) + D(p‖q) = exact bits + extra bits for storing information
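Both bounds can be checked numerically (a small sketch; `entropy` is an illustrative helper, not from the text):

```python
import math, random

def entropy(p):
    """H(S) in bits; terms with p_k = 0 contribute nothing."""
    return sum(-pk * math.log2(pk) for pk in p if pk > 0)

K = 8
# a degenerate distribution attains the lower bound H(S) = 0
print(entropy([1.0] + [0.0] * (K - 1)))      # 0.0
# the uniform distribution attains the upper bound H(S) = log2(K)
print(entropy([1.0 / K] * K), math.log2(K))  # 3.0 3.0
# any other distribution stays strictly inside the bounds
random.seed(1)
w = [random.random() for _ in range(K)]
p = [wi / sum(w) for wi in w]
print(0.0 < entropy(p) < math.log2(K))       # True
```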
6. Prove that mutual information of a channel is symmetric (i.e., I(X;Y ) = I(Y ; X)). [4]
Solution:
H(X) = Σ_{j=0}^{J−1} p(xj) log2[1/p(xj)] = Σ_{j=0}^{J−1} p(xj) log2[1/p(xj)] Σ_{k=0}^{K−1} p(yk|xj) = Σ_{j=0}^{J−1} Σ_{k=0}^{K−1} p(yk|xj) p(xj) log2[1/p(xj)]
= Σ_{j=0}^{J−1} Σ_{k=0}^{K−1} p(xj, yk) log2[1/p(xj)] (6)
(the second equality uses Σ_{k=0}^{K−1} p(yk|xj) = 1). Subtracting the conditional entropy H(X|Y) = Σ_{j=0}^{J−1} Σ_{k=0}^{K−1} p(xj, yk) log2[1/p(xj|yk)] from Eq. (6) gives the mutual information
I(X;Y) = Σ_{j=0}^{J−1} Σ_{k=0}^{K−1} p(xj, yk) log2[p(xj|yk)/p(xj)] (7)
To further confirm this property, we may use Bayes’ rule for conditional probabilities:
p(xj|yk)/p(xj) = p(yk|xj)/p(yk) (8)
Substituting Eq. (8) into Eq. (7) and interchanging the order of summation, we get
I(X;Y) = Σ_{k=0}^{K−1} Σ_{j=0}^{J−1} p(xj, yk) log2[p(yk|xj)/p(yk)] = I(Y;X) (9)
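The symmetry can be confirmed numerically for an arbitrary joint distribution (a sketch; the joint pmf below is invented for illustration):

```python
import math

def mutual_information(joint):
    """I(X;Y) in bits from a joint pmf given as a 2-D list joint[j][k]."""
    px = [sum(row) for row in joint]
    py = [sum(col) for col in zip(*joint)]
    return sum(pjk * math.log2(pjk / (px[j] * py[k]))
               for j, row in enumerate(joint)
               for k, pjk in enumerate(row) if pjk > 0)

p = [[0.30, 0.10, 0.05],
     [0.05, 0.25, 0.25]]              # invented joint pmf, sums to 1
p_T = [list(col) for col in zip(*p)]  # transposing swaps the roles of X and Y
ixy = mutual_information(p)
iyx = mutual_information(p_T)
print(abs(ixy - iyx) < 1e-12)  # True
```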
7. Suppose that equal numbers of the letter grades A, B, C, D, and F are given in a certain course. How much information in bits [2]
have you received when the instructor tells you that your grade is not F? How much more information do you need to determine
your grade?
Solution: P(not F) = 4/5 ⇒ I = log2(5/4) = 0.322 bits,
P(specific grade) = 1/5 ⇒ I = log2(5) = 2.322 bits,
so Ineeded = 2.322 − 0.322 = 2 bits.
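The arithmetic can be reproduced in a couple of lines (illustrative sketch):

```python
import math

# self-information (in bits) of an event with probability p
info = lambda p: -math.log2(p)

i_not_f = info(4 / 5)  # learning "your grade is not F"
i_grade = info(1 / 5)  # learning a specific grade out of 5 equiprobable ones
print(round(i_not_f, 3))            # 0.322
print(round(i_grade, 3))            # 2.322
print(round(i_grade - i_not_f, 3))  # 2.0 (= log2 4: one of 4 remaining grades)
```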
8. Consider the random variable X = {x1, x2, x3, x4, x5, x6, x7} with probabilities 0.5, 0.26, 0.11, 0.04, 0.04, 0.03, 0.02, respectively.
(a) Find a binary Huffman code for X. [2]
(b) Extend the binary Huffman method to a ternary code (alphabet of size 3) and apply it to X. [2]
Solution:
(a) Find a binary Huffman code for X. [2]
Code   Symbol  Stage I  Stage II  Stage III  Stage IV  Stage V  Stage VI
0      x1      0.5      0.5       0.5        0.5       0.5      0.5
10     x2      0.26     0.26      0.26       0.26      0.26     0.5
111    x3      0.11     0.11      0.11       0.13      0.24
11000  x4      0.04     0.05      0.08       0.11
11001  x5      0.04     0.04      0.05
11010  x6      0.03     0.04
11011  x7      0.02
At each stage the two smallest probabilities are merged (one branch labeled 0, the other 1) and the list is re-sorted; reading the branch labels back from the final merge to each symbol yields the codewords in the first column.
The expected length of the codewords for the binary Huffman code is 2 bits (H(X) = 1.99 bits).
(b) Extend the binary Huffman method to a ternary code (alphabet of size 3) and apply it to X. [2]
For a ternary code, merge the three smallest probabilities at each stage; since (7 − 1) is divisible by (3 − 1), no dummy symbol is needed. The merges are {0.04, 0.03, 0.02} → 0.09, then {0.11, 0.09, 0.04} → 0.24, then {0.5, 0.26, 0.24} → 1. One resulting code is x1 = 0, x2 = 1, x3 = 20, x4 = 22, x5 = 210, x6 = 211, x7 = 212, with expected length 0.5(1) + 0.26(1) + 0.11(2) + 0.04(2) + (0.04 + 0.03 + 0.02)(3) = 1.33 ternary digits.
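Both parts can be cross-checked with a generic D-ary Huffman sketch (the `huffman` helper is illustrative, not from the text; tie-breaking may assign different but equally optimal codewords, with the same lengths and averages):

```python
import heapq

def huffman(probs, arity=2):
    """D-ary Huffman code; returns {symbol: codeword over digits 0..D-1}."""
    n = len(probs)
    while (n - 1) % (arity - 1) != 0:  # pad so every merge absorbs exactly D nodes
        n += 1
    heap = [(p, i, {s: ""}) for i, (s, p) in enumerate(probs.items())]
    heap += [(0.0, len(probs) + d, {}) for d in range(n - len(probs))]  # dummies
    heapq.heapify(heap)
    counter = n
    while len(heap) > 1:
        merged, total = {}, 0.0
        for digit in range(arity):          # merge the D least probable nodes
            prob, _, codes = heapq.heappop(heap)
            total += prob
            for s, c in codes.items():
                merged[s] = str(digit) + c  # prepend this branch's digit
        heapq.heappush(heap, (total, counter, merged))
        counter += 1
    return heap[0][2]

probs = {"x1": 0.5, "x2": 0.26, "x3": 0.11, "x4": 0.04,
         "x5": 0.04, "x6": 0.03, "x7": 0.02}
avg_len = {}
for d in (2, 3):
    code = huffman(probs, arity=d)
    avg_len[d] = sum(probs[s] * len(code[s]) for s in probs)
    print(d, sorted(code.items()), round(avg_len[d], 2))
```

With these probabilities the average lengths come out to 2.00 bits for the binary code and 1.33 ternary digits for the ternary code, matching the hand computation.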
*****