Comsysf311 2023 M5L40
Channel Capacity
Channel Capacity for AWGN
Bayes’ Rule
Entropy: H(X) = \sum_i P(x_i) \log_2 \frac{1}{P(x_i)} bits per message
Entropy: H(Y) = \sum_j P(y_j) \log_2 \frac{1}{P(y_j)} bits per message
Joint entropy: H(X, Y) = \sum_i \sum_j P(x_i, y_j) \log_2 \frac{1}{P(x_i, y_j)}
[Figure: channel diagram with inputs 0, 1 and outputs a, b]
Mutual information:
I(X; Y) = H(X) − H(X|Y) = H(Y) − H(Y|X) bits/message or bits/symbol
I(X; Y) = \sum_i \sum_j P(x_i, y_j) \log_2 \frac{P(x_i, y_j)}{P(x_i) P(y_j)}
I(X; Y) = \sum_i \sum_j P(x_i) P(y_j|x_i) \log_2 \frac{P(y_j|x_i)}{\sum_i P(x_i) P(y_j|x_i)}
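A small numeric illustration of these definitions in Python; the 2x2 joint PMF is an assumed example, not taken from the slides:

import numpy as np

# Assumed joint PMF P(x_i, y_j) for a 2x2 example (not from the slides)
Pxy = np.array([[0.4, 0.1],
                [0.2, 0.3]])

Px = Pxy.sum(axis=1)   # marginal P(x_i)
Py = Pxy.sum(axis=0)   # marginal P(y_j)

H_X  = -np.sum(Px * np.log2(Px))    # H(X)
H_Y  = -np.sum(Py * np.log2(Py))    # H(Y)
H_XY = -np.sum(Pxy * np.log2(Pxy))  # H(X, Y)
I_XY = np.sum(Pxy * np.log2(Pxy / np.outer(Px, Py)))  # I(X; Y)

print(H_X, H_Y, H_XY, I_XY)                 # H(X) = 1.0, I(X;Y) ≈ 0.125 for this PMF
print(np.isclose(I_XY, H_X + H_Y - H_XY))   # identity I(X;Y) = H(X) + H(Y) - H(X,Y)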
Continuous channel model: y = x + n, where n is additive noise.
Differential entropy: H(X) = \int_{-\infty}^{\infty} f_X(x) \log_2 \frac{1}{f_X(x)} \, dx bits per message
Differential conditional entropy: H(X|Y) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} f_{XY}(x, y) \log_2 \frac{1}{f_{X|Y}(x|y)} \, dx \, dy bits per message
Mutual information:
I(X; Y) = H(X) − H(X|Y) = H(Y) − H(Y|X)
I(X; Y) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} f_{XY}(x, y) \log_2 \frac{f_{XY}(x, y)}{f_X(x) f_Y(y)} \, dx \, dy bits per message
Example: uniform distribution with f_X(x) = \frac{1}{2M} in the range −M < x < M, and 0 otherwise.
H(X) = \int_{-\infty}^{\infty} f_X(x) \log_2 \frac{1}{f_X(x)} \, dx bits per message
H(X) = \int_{-M}^{M} \frac{1}{2M} \log_2 (2M) \, dx = \log_2 (2M)
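A quick numerical check of this result (M = 2 is an assumed value; scipy's quad does the integration):

import numpy as np
from scipy.integrate import quad

M = 2.0                       # assumed half-width of the uniform range
f = lambda x: 1.0 / (2 * M)   # f_X(x) = 1/(2M) on (-M, M)

# H(X) = integral of f_X(x) log2(1/f_X(x)) dx over (-M, M)
H, _ = quad(lambda x: f(x) * np.log2(1.0 / f(x)), -M, M)
print(H, np.log2(2 * M))      # both equal 2.0 bits for M = 2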
Gaussian distribution: f_X(x) = \frac{1}{\sqrt{2\pi\sigma^2}} e^{-x^2/2\sigma^2}
H(X) = \int_{-\infty}^{\infty} f_X(x) \log_2 \frac{1}{f_X(x)} \, dx bits per message
Simplify \log_2(\cdot) and use the unit-area and variance properties of the PDF:
H(X) = \int_{-\infty}^{\infty} f_X(x) \log_2 \frac{1}{f_X(x)} \, dx = \frac{1}{2} \log_2 (2\pi e \sigma^2)
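A similar numerical check for the Gaussian case (sigma = 1.5 is an assumed value):

import numpy as np
from scipy.integrate import quad

sigma = 1.5   # assumed standard deviation for the check
f = lambda x: np.exp(-x**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)

# Numerical H(X) vs. the closed form (1/2) log2(2*pi*e*sigma^2)
H_num, _ = quad(lambda x: f(x) * np.log2(1.0 / f(x)), -np.inf, np.inf)
H_closed = 0.5 * np.log2(2 * np.pi * np.e * sigma**2)
print(H_num, H_closed)   # both about 2.63 bits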
Find the f_X(x) that maximizes H(X) = \int_{-\infty}^{\infty} f_X(x) \log_e \frac{1}{f_X(x)} \, dx subject to the constraints:
Constraint 1: \int_{-\infty}^{\infty} f_X(x) \, dx = 1
Any other constraint? The input signal should have limited power:
Constraint 2: \int_{-\infty}^{\infty} x^2 f_X(x) \, dx = \sigma^2
Main result: for a given mean-square value, the entropy is maximum for a Gaussian distribution, and the maximum entropy is \frac{1}{2} \log_2 (2\pi e \sigma^2).
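A sketch, not spelled out on the slides, of the standard Lagrange-multiplier argument behind this result (\lambda_1 and \lambda_2 are multipliers for the two constraints):

\[
J[f] = -\int f(x)\,\log_e f(x)\,dx + \lambda_1\!\left(\int f(x)\,dx - 1\right) + \lambda_2\!\left(\int x^2 f(x)\,dx - \sigma^2\right)
\]

Setting the variation with respect to f(x) to zero gives -\log_e f(x) - 1 + \lambda_1 + \lambda_2 x^2 = 0, i.e. f(x) = e^{\lambda_1 - 1} e^{\lambda_2 x^2}. Enforcing the two constraints requires \lambda_2 < 0 and yields the zero-mean Gaussian f_X(x) = \frac{1}{\sqrt{2\pi\sigma^2}} e^{-x^2/2\sigma^2}, whose entropy in bits is \frac{1}{2}\log_2(2\pi e \sigma^2).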
y = x + n
H(y|x)?
H(y|x) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} f_{XY}(x, y) \log_2 \frac{1}{f_{Y|X}(y|x)} \, dx \, dy bits per message
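The capacity formula that follows can be reconstructed from these pieces; a brief sketch of the standard argument (S and N denote signal and noise power, names assumed here):

\[
H(y|x) = H(n) = \tfrac{1}{2}\log_2(2\pi e N), \qquad
H(y) \le \tfrac{1}{2}\log_2\!\big(2\pi e (S+N)\big),
\]
\[
I(x; y) = H(y) - H(y|x) \le \tfrac{1}{2}\log_2\!\left(1 + \frac{S}{N}\right),
\]

with equality when the input (and hence y) is Gaussian. Sending 2B independent samples per second over a channel of bandwidth B then gives C = B \log_2(1 + S/N) bits per second.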
C = B \log_2 (1 + SNR)
Assumptions to get the above?
Channel: Band-limited
Noise: Additive
Noise: Gaussian distributed
Input signals: White Gaussian
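A small numeric example of this formula; the bandwidth and SNR values below are assumed for illustration:

import numpy as np

# C = B log2(1 + SNR); bandwidth and SNR are assumed example values
B = 3000.0                # bandwidth in Hz
snr = 10 ** (20.0 / 10)   # 20 dB SNR as a linear ratio (= 100)
C = B * np.log2(1 + snr)
print(C)                  # about 19975 bits per second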
Example: a two-input, two-output channel with input probabilities P(x_1) = 1/3, P(x_2) = 2/3 and transition probabilities
P(y_1|x_1) = 2/3, P(y_2|x_1) = 1/3, P(y_1|x_2) = 1/10, P(y_2|x_2) = 9/10.
H(x) = P(x_1) \log_2 \frac{1}{P(x_1)} + P(x_2) \log_2 \frac{1}{P(x_2)} = 0.918 bits/message
By total probability, P(y_1) = 13/45 and P(y_2) = 32/45, so
H(y) = P(y_1) \log_2 \frac{1}{P(y_1)} + P(y_2) \log_2 \frac{1}{P(y_2)} = 0.8673 bits/message
Bayes’ rule:
P(x_1|y_1) = \frac{P(y_1|x_1) P(x_1)}{P(y_1)} = 10/13
P(x_1|y_2) = \frac{P(y_2|x_1) P(x_1)}{P(y_2)} = 5/32
P(x_2|y_1) = \frac{P(y_1|x_2) P(x_2)}{P(y_1)} = 3/13
P(x_2|y_2) = \frac{P(y_2|x_2) P(x_2)}{P(y_2)} = 27/32
H(x|y_1) = P(x_1|y_1) \log_2 \frac{1}{P(x_1|y_1)} + P(x_2|y_1) \log_2 \frac{1}{P(x_2|y_1)} = 0.779
H(x|y_2) = P(x_1|y_2) \log_2 \frac{1}{P(x_1|y_2)} + P(x_2|y_2) \log_2 \frac{1}{P(x_2|y_2)} = 0.624
H(x|y) = P(y_1) H(x|y_1) + P(y_2) H(x|y_2) = 0.6687
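These numbers can be checked directly in Python using the probabilities from the example:

import numpy as np

def H(p):
    # entropy in bits of a probability vector p
    p = np.asarray(p, dtype=float)
    return float(-np.sum(p * np.log2(p)))

Px = np.array([1/3, 2/3])           # input probabilities from the example
Pyx = np.array([[2/3, 1/3],         # row i holds P(y1|xi), P(y2|xi)
                [1/10, 9/10]])

Py = Px @ Pyx                       # total probability: P(y1) = 13/45, P(y2) = 32/45
Post = (Pyx * Px[:, None]) / Py     # Bayes' rule: column j holds P(x1|yj), P(x2|yj)

print(H(Px), H(Py))                 # 0.918, 0.867 bits/message
print(Post)                         # [[10/13, 5/32], [3/13, 27/32]]
print(sum(Py[j] * H(Post[:, j]) for j in range(2)))   # H(x|y) ≈ 0.669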
Capacity of the binary symmetric channel with error probability P_e:
C_s = 1 − \left[ P_e \log_2 \frac{1}{P_e} + (1 − P_e) \log_2 \frac{1}{1 − P_e} \right]
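A short sketch evaluating this expression for a few error probabilities (bsc_capacity is a helper name introduced here):

import numpy as np

def bsc_capacity(Pe):
    # Cs = 1 - [Pe log2(1/Pe) + (1 - Pe) log2(1/(1 - Pe))] bits per channel use
    Hb = Pe * np.log2(1.0 / Pe) + (1 - Pe) * np.log2(1.0 / (1 - Pe))
    return 1 - Hb

for Pe in (0.01, 0.1, 0.5):
    print(Pe, bsc_capacity(Pe))   # about 0.919, 0.531, and 0.0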
Binary erasure channel: x_1 = 0, x_2 = 1, y_1 = 0, y_2 = 1, y_3 = E (erasure), q = 1 − p.
Here i = 1, 2 (two inputs) and j = 1, 2, 3 (three outputs); the double sum runs over every (i, j) pair, like a nested loop.
H(x|y) = P(x_1, y_1) \log_2 \frac{1}{P(x_1|y_1)} + P(x_1, y_2) \log_2 \frac{1}{P(x_1|y_2)} + P(x_1, y_3) \log_2 \frac{1}{P(x_1|y_3)} + P(x_2, y_1) \log_2 \frac{1}{P(x_2|y_1)} + P(x_2, y_2) \log_2 \frac{1}{P(x_2|y_2)} + P(x_2, y_3) \log_2 \frac{1}{P(x_2|y_3)}
Calculate P(x_i|y_j) and P(x_i, y_j) for each combination of i and j:
Use P(x_i|y_j) = \frac{P(y_j|x_i) P(x_i)}{P(y_j)}
Use P(x_i, y_j) = P(y_j|x_i) P(x_i)
P(y_j|x_i): given by the channel transition matrix
P(x_i): given input probabilities (1/2 each in this case)
P(y_j)? From the total probability theorem.
Solution: P(y_j)
Total probability:
P(y_1) = P(y_1|x_1) P(x_1) + P(y_1|x_2) P(x_2) = q \times 1/2 + 0 \times 1/2 = q/2
P(y_2) = P(y_2|x_1) P(x_1) + P(y_2|x_2) P(x_2) = 0 \times 1/2 + q \times 1/2 = q/2
P(y_3) = 1 − q/2 − q/2 = 1 − q = p
P(y_1|x_1), P(y_1|x_2), P(y_2|x_1), and P(y_2|x_2) are taken from the channel transition matrix.
Substituting: P(x_1|y_1) = P(x_2|y_2) = 1 (those terms contribute zero), P(x_1|y_3) = P(x_2|y_3) = 1/2, P(x_1, y_3) = P(x_2, y_3) = p/2, and P(x_1, y_2) = P(x_2, y_1) = 0. Hence
H(x|y) = \frac{p}{2} \log_2 2 + \frac{p}{2} \log_2 2 = p
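The same computation can be checked numerically; p = 0.2 is an assumed value, and the matrices mirror the quantities worked out above:

import numpy as np

p = 0.2                          # assumed erasure probability for the check
q = 1 - p
Px = np.array([1/2, 1/2])        # equiprobable inputs x1 = 0, x2 = 1

# Rows give P(y1|xi), P(y2|xi), P(y3|xi) with y3 = E (erasure)
Pyx = np.array([[q, 0, p],
                [0, q, p]])

Pxy = Pyx * Px[:, None]          # joint P(xi, yj)
Py = Pxy.sum(axis=0)             # q/2, q/2, p (matches the total-probability step above)
Post = Pxy / Py                  # P(xi|yj)

mask = Pxy > 0                   # skip 0*log(0) terms
H_x_given_y = np.sum(Pxy[mask] * np.log2(1.0 / Post[mask]))
print(H_x_given_y, p)            # both equal the erasure probability p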