Chap2 Proakis Solution Manual
Problem 2.1 :
$$P(A_i) = \sum_{j=1}^{3} P(A_i, B_j), \quad i = 1, 2, 3, 4$$

Hence:

$$P(A_1) = \sum_{j=1}^{3} P(A_1, B_j), \quad P(A_2) = \sum_{j=1}^{3} P(A_2, B_j)$$

$$P(A_3) = \sum_{j=1}^{3} P(A_3, B_j), \quad P(A_4) = \sum_{j=1}^{3} P(A_4, B_j)$$

Similarly:

$$P(B_j) = \sum_{i=1}^{4} P(A_i, B_j), \quad j = 1, 2, 3$$
Problem 2.2 :
The relationship holds for $n = 2$ (2-1-34): $p(x_1, x_2) = p(x_2|x_1)p(x_1)$.
Suppose it holds for $n = k$, i.e.:

$$p(x_1, x_2, \ldots, x_k) = p(x_k|x_{k-1}, \ldots, x_1)\,p(x_{k-1}|x_{k-2}, \ldots, x_1)\cdots p(x_1)$$

Then for $n = k + 1$:

$$p(x_1, x_2, \ldots, x_k, x_{k+1}) = p(x_{k+1}|x_k, x_{k-1}, \ldots, x_1)\,p(x_k, x_{k-1}, \ldots, x_1)$$
$$= p(x_{k+1}|x_k, x_{k-1}, \ldots, x_1)\,p(x_k|x_{k-1}, \ldots, x_1)\,p(x_{k-1}|x_{k-2}, \ldots, x_1)\cdots p(x_1)$$
Hence the relationship holds for n = k + 1, and by induction it holds for any n.
Problem 2.3 :
Following the same procedure as in example 2-1-1, we prove :
$$p_Y(y) = \frac{1}{|a|}\,p_X\!\left(\frac{y-b}{a}\right)$$
Problem 2.4 :
Relationship (2-1-44) gives:

$$p_Y(y) = \frac{1}{3a\left[(y-b)/a\right]^{2/3}}\;p_X\!\left(\left(\frac{y-b}{a}\right)^{1/3}\right)$$

[Figure: plot of $p_Y(y)$ for $a = 2$, $b = 3$ over $-10 \le y \le 10$.]
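The transformation formula above can be spot-checked by simulation. Below is a minimal Monte Carlo sketch, assuming (as the Jacobian term $3a[(y-b)/a]^{2/3}$ suggests) the underlying transformation $Y = aX^3 + b$, and, hypothetically, a standard Gaussian $p_X$; numpy is assumed available.

```python
import numpy as np

# Monte Carlo check of the density-transformation result above, assuming
# (hypothetically) Y = a*X**3 + b with X ~ N(0,1).
rng = np.random.default_rng(0)
a, b = 2.0, 3.0

def p_Y(y):
    """Closed form: p_X(((y-b)/a)^(1/3)) / (3a |(y-b)/a|^(2/3))."""
    u = (y - b) / a
    x = np.cbrt(u)  # real cube root, valid for negative u as well
    p_X = np.exp(-x ** 2 / 2) / np.sqrt(2 * np.pi)
    return p_X / (3 * a * np.abs(u) ** (2.0 / 3.0))

# Empirical density of Y at y = 5 from 10^6 samples, via a histogram bin
x = rng.standard_normal(1_000_000)
y = a * x ** 3 + b
width = 0.5
est = np.mean(np.abs(y - 5.0) < width / 2) / width
print(est, float(p_Y(5.0)))
```

The empirical bin estimate should agree with the closed form to within Monte Carlo error.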
Problem 2.5 :
(a) Since $(X_r, X_i)$ are statistically independent:

$$p_X(x_r, x_i) = p_X(x_r)\,p_X(x_i) = \frac{1}{2\pi\sigma^2}\,e^{-(x_r^2 + x_i^2)/2\sigma^2}$$

Also:

$$Y_r + jY_i = (X_r + jX_i)e^{j\phi} \Rightarrow X_r + jX_i = (Y_r + jY_i)e^{-j\phi} = Y_r\cos\phi + Y_i\sin\phi + j(-Y_r\sin\phi + Y_i\cos\phi)$$

$$\Rightarrow \quad X_r = Y_r\cos\phi + Y_i\sin\phi, \quad X_i = -Y_r\sin\phi + Y_i\cos\phi$$

The Jacobian of the transformation is:

$$J = \begin{vmatrix} \partial X_r/\partial Y_r & \partial X_r/\partial Y_i \\ \partial X_i/\partial Y_r & \partial X_i/\partial Y_i \end{vmatrix} = \begin{vmatrix} \cos\phi & \sin\phi \\ -\sin\phi & \cos\phi \end{vmatrix} = 1$$

Hence, by (2-1-55):

$$p_Y(y_r, y_i) = p_X\big((y_r\cos\phi + y_i\sin\phi),\,(-y_r\sin\phi + y_i\cos\phi)\big) = \frac{1}{2\pi\sigma^2}\,e^{-(y_r^2 + y_i^2)/2\sigma^2}$$

(b) $Y = AX$ and $X = A^{-1}Y$. Now $p_X(x) = \frac{1}{(2\pi\sigma^2)^{n/2}}\,e^{-x'x/2\sigma^2}$ (the covariance matrix $M$ of the random variables $x_1, \ldots, x_n$ is $M = \sigma^2 I$, since they are i.i.d.) and $J = 1/|\det(A)|$. Hence:

$$p_Y(y) = \frac{1}{(2\pi\sigma^2)^{n/2}\,|\det(A)|}\,e^{-y'(A^{-1})'A^{-1}y/2\sigma^2}$$
Problem 2.6 :
(a)

$$\psi_Y(jv) = E\left[e^{jvY}\right] = E\left[e^{jv\sum_{i=1}^{n} x_i}\right] = E\left[\prod_{i=1}^{n} e^{jvx_i}\right] = \prod_{i=1}^{n} E\left[e^{jvX}\right] = \left[\psi_X(jv)\right]^n$$

But:

$$p_X(x) = p\,\delta(x-1) + (1-p)\,\delta(x) \Rightarrow \psi_X(jv) = 1 - p + pe^{jv}$$

$$\Rightarrow \psi_Y(jv) = \left(1 - p + pe^{jv}\right)^n$$

(b)

$$E(Y) = -j\frac{d\psi_Y(jv)}{dv}\Big|_{v=0} = -j\,n\left(1 - p + pe^{jv}\right)^{n-1} jpe^{jv}\Big|_{v=0} = np$$

and

$$E(Y^2) = -\frac{d^2\psi_Y(jv)}{dv^2}\Big|_{v=0} = -\frac{d}{dv}\left[jn\left(1 - p + pe^{jv}\right)^{n-1} pe^{jv}\right]\Big|_{v=0} = np + np(n-1)p$$

$$\Rightarrow E(Y^2) = n^2p^2 + np(1-p)$$
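These two moments can be verified exactly against the binomial pmf that $\psi_Y(jv)$ corresponds to. A small sketch ($n = 7$, $p = 0.3$ are arbitrary test parameters):

```python
from math import comb

# Check of Problem 2.6(b): for Y ~ binomial(n, p), the derivation above gives
# E[Y] = np and E[Y^2] = n^2 p^2 + np(1-p). Verify directly from the pmf.
n, p = 7, 0.3
pmf = [comb(n, k) * p ** k * (1 - p) ** (n - k) for k in range(n + 1)]
EY = sum(k * pk for k, pk in enumerate(pmf))
EY2 = sum(k * k * pk for k, pk in enumerate(pmf))
print(EY, EY2)
```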
Problem 2.7 :
$$\psi(jv) = e^{-\frac{1}{2}v'Mv}$$

where $v = [v_1, v_2, v_3, v_4]'$ and $M = [\mu_{ij}]$.
We obtain the desired result by bringing the exponent to a scalar form and then performing quadruple differentiation. We can simplify the procedure by noting that:

$$\frac{\partial \psi(jv)}{\partial v_i} = -\mu_i' v\;e^{-\frac{1}{2}v'Mv}$$

where $\mu_i'$ denotes the $i$-th row of $M$.
Problem 2.8 :
For the central chi-square with $n$ degrees of freedom:

$$\psi(jv) = \frac{1}{(1 - j2v\sigma^2)^{n/2}}$$

Now:

$$\frac{d\psi(jv)}{dv} = \frac{jn\sigma^2}{(1 - j2v\sigma^2)^{n/2+1}} \Rightarrow E(Y) = -j\frac{d\psi(jv)}{dv}\Big|_{v=0} = n\sigma^2$$

$$\frac{d^2\psi(jv)}{dv^2} = \frac{-2n\sigma^4(n/2 + 1)}{(1 - j2v\sigma^2)^{n/2+2}} \Rightarrow E\left(Y^2\right) = -\frac{d^2\psi(jv)}{dv^2}\Big|_{v=0} = n(n+2)\sigma^4$$

For the non-central chi-square with $n$ degrees of freedom and non-centrality parameter $s^2 = \sum_{i=1}^{n} m_i^2$:

$$\psi(jv) = \frac{1}{(1 - j2v\sigma^2)^{n/2}}\,e^{jvs^2/(1 - j2v\sigma^2)}$$

Then:

$$\frac{d\psi(jv)}{dv} = \left[\frac{jn\sigma^2}{(1 - j2v\sigma^2)^{n/2+1}} + \frac{js^2}{(1 - j2v\sigma^2)^{n/2+2}}\right] e^{jvs^2/(1 - j2v\sigma^2)}$$

Hence, $E(Y) = -j\frac{d\psi(jv)}{dv}\Big|_{v=0} = n\sigma^2 + s^2$.

Differentiating once more:

$$\frac{d^2\psi(jv)}{dv^2} = -\left[\frac{n\sigma^4(n+2)}{(1 - j2v\sigma^2)^{n/2+2}} + \frac{s^2(n+4)\sigma^2 + ns^2\sigma^2}{(1 - j2v\sigma^2)^{n/2+3}} + \frac{s^4}{(1 - j2v\sigma^2)^{n/2+4}}\right] e^{jvs^2/(1 - j2v\sigma^2)}$$

Hence,

$$E\left(Y^2\right) = -\frac{d^2\psi(jv)}{dv^2}\Big|_{v=0} = 2n\sigma^4 + 4s^2\sigma^2 + \left(n\sigma^2 + s^2\right)^2$$

and

$$\sigma_Y^2 = E\left(Y^2\right) - \left[E(Y)\right]^2 = 2n\sigma^4 + 4\sigma^2 s^2$$
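Both moments admit a quick Monte Carlo sanity check by building $Y = \sum_i X_i^2$ with $X_i \sim N(m_i, \sigma^2)$; the values of $\sigma$ and $m$ below are arbitrary test parameters, and numpy is assumed available.

```python
import numpy as np

# Monte Carlo check of Problem 2.8: for Y = sum_i X_i^2, X_i ~ N(m_i, sigma^2),
# the derivation gives E[Y] = n*sigma^2 + s^2 and Var[Y] = 2n*sigma^4 + 4*sigma^2*s^2,
# where s^2 = sum_i m_i^2.
rng = np.random.default_rng(1)
sigma = 1.0
m = np.array([1.0, 2.0, 0.0])          # arbitrary means
n, s2 = len(m), float(np.sum(m ** 2))

X = m + sigma * rng.standard_normal((1_000_000, n))
Y = np.sum(X ** 2, axis=1)
mean_th = n * sigma ** 2 + s2                      # theoretical mean (= 8 here)
var_th = 2 * n * sigma ** 4 + 4 * sigma ** 2 * s2  # theoretical variance (= 26 here)
print(Y.mean(), Y.var())
```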
Problem 2.9 :
The Cauchy r.v. has: $p(x) = \dfrac{a/\pi}{x^2 + a^2}$.

(a)

$$E(X) = \int_{-\infty}^{\infty} x\,p(x)\,dx = 0$$

since the integrand is an odd function of $x$. Also:

$$E\left(X^2\right) = \int_{-\infty}^{\infty} x^2 p(x)\,dx = \frac{a}{\pi}\int_{-\infty}^{\infty} \frac{x^2}{x^2 + a^2}\,dx$$

Note that for large $x$, $\frac{x^2}{x^2 + a^2} \to 1$, so the integrand does not decay and the integral diverges. Hence:

$$E\left(X^2\right) = \infty, \quad \sigma^2 = \infty$$

(b)

$$\psi(jv) = E\left[e^{jvX}\right] = \int_{-\infty}^{\infty} \frac{a/\pi}{x^2 + a^2}\,e^{jvx}\,dx = \int_{-\infty}^{\infty} \frac{a/\pi}{(x + ja)(x - ja)}\,e^{jvx}\,dx$$

This integral can be evaluated by using the residue theorem in complex variable theory. For $v \ge 0$, closing the contour in the upper half-plane (pole at $x = ja$):

$$\psi(jv) = 2\pi j\,\frac{(a/\pi)\,e^{jvx}}{x + ja}\Big|_{x=ja} = e^{-av}$$

For $v < 0$, closing the contour in the lower half-plane (pole at $x = -ja$):

$$\psi(jv) = -2\pi j\,\frac{(a/\pi)\,e^{jvx}}{x - ja}\Big|_{x=-ja} = e^{av}$$

Therefore:

$$\psi(jv) = e^{-a|v|}$$

Note: an alternative way to find the characteristic function is to use the Fourier transform relationship between $p(x)$, $\psi(jv)$ and the Fourier pair:

$$e^{-b|t|} \leftrightarrow \frac{1}{\pi}\,\frac{c}{c^2 + f^2}, \quad c = \frac{b}{2\pi},\; f = \frac{v}{2\pi}$$
Problem 2.10 :
(a) $Y = \frac{1}{n}\sum_{i=1}^{n} X_i$, with $\psi_{X_i}(jv) = e^{-a|v|}$. Then:

$$\psi_Y(jv) = E\left[e^{j\frac{v}{n}\sum_{i=1}^{n} X_i}\right] = \prod_{i=1}^{n} E\left[e^{j\frac{v}{n}X_i}\right] = \prod_{i=1}^{n} \psi_{X_i}(jv/n) = \left(e^{-a|v|/n}\right)^n = e^{-a|v|}$$

Hence $Y$ has the same characteristic function as each $X_i$, and therefore the same pdf: $p_Y(y) = \dfrac{a/\pi}{y^2 + a^2}$.
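The characteristic function $e^{-a|v|}$, and the fact that the sample mean of i.i.d. Cauchy variables has the same one, can both be checked by simulation using the standard inverse-CDF sampler $a\tan(\pi(U - 1/2))$; numpy is assumed available, and $a$, $v$, $n$ below are arbitrary test values.

```python
import numpy as np

# Numeric check of Problems 2.9/2.10: a Cauchy r.v. with density (a/pi)/(x^2+a^2)
# has characteristic function e^{-a|v|}, and the mean of n i.i.d. Cauchy r.v.'s
# has the same characteristic function.
rng = np.random.default_rng(2)
a, v, n = 1.5, 0.7, 8
u = rng.random((1_000_000, n))
x = a * np.tan(np.pi * (u - 0.5))      # i.i.d. Cauchy samples
y = x.mean(axis=1)                     # sample mean of n Cauchy r.v.'s
psi_x = np.mean(np.cos(v * x[:, 0]))   # E[e^{jvX}] is real by symmetry
psi_y = np.mean(np.cos(v * y))
print(psi_x, psi_y, np.exp(-a * abs(v)))
```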
Problem 2.11 :
We assume that x(t), y(t), z(t) are real-valued stochastic processes. The treatment of complex-valued processes is similar.
(a)
$$\phi_{zz}(\tau) = E\{[x(t+\tau) + y(t+\tau)][x(t) + y(t)]\} = \phi_{xx}(\tau) + \phi_{xy}(\tau) + \phi_{yx}(\tau) + \phi_{yy}(\tau)$$
(c) When x(t), y(t) are uncorrelated and have zero means :
$$\phi_{zz}(\tau) = \phi_{xx}(\tau) + \phi_{yy}(\tau)$$
Problem 2.12 :
The power spectral density of the random process $x(t)$ is:

$$\Phi_{xx}(f) = \int_{-\infty}^{\infty} \phi_{xx}(\tau)\,e^{-j2\pi f\tau}\,d\tau = N_0/2$$

The power spectral density at the filter output is $\Phi_{yy}(f) = \frac{N_0}{2}|H(f)|^2$, so the output power is:

$$\int_{-\infty}^{\infty} \Phi_{yy}(f)\,df = \frac{N_0}{2}\int_{-\infty}^{\infty} |H(f)|^2\,df = \frac{N_0}{2}(2B) = N_0 B$$
Problem 2.13 :
$$M_X = E\left[(X - m_x)(X - m_x)'\right], \quad X = \begin{bmatrix} X_1 \\ X_2 \\ X_3 \end{bmatrix}$$

where $m_x$ is the corresponding vector of mean values. Then:

$$M_Y = E\left[(Y - m_y)(Y - m_y)'\right] = E\left[A(X - m_x)(A(X - m_x))'\right] = E\left[A(X - m_x)(X - m_x)'A'\right] = A\,E\left[(X - m_x)(X - m_x)'\right]A' = AM_XA'$$

Hence:

$$M_Y = \begin{bmatrix} \mu_{11} & 0 & \mu_{11} + \mu_{13} \\ 0 & 4\mu_{22} & 0 \\ \mu_{11} + \mu_{31} & 0 & \mu_{11} + \mu_{13} + \mu_{31} + \mu_{33} \end{bmatrix}$$
Problem 2.14 :
$$Y(t) = X^2(t), \quad \phi_{xx}(\tau) = E\left[x(t+\tau)x(t)\right]$$
Problem 2.15 :
$$p_R(r) = \frac{2}{\Gamma(m)}\left(\frac{m}{\Omega}\right)^m r^{2m-1} e^{-mr^2/\Omega}, \quad X = \frac{1}{\sqrt{\Omega}}\,R$$

Hence:

$$p_X(x) = \sqrt{\Omega}\;p_R\!\left(\sqrt{\Omega}\,x\right) = \sqrt{\Omega}\,\frac{2}{\Gamma(m)}\left(\frac{m}{\Omega}\right)^m \left(\sqrt{\Omega}\,x\right)^{2m-1} e^{-m(\sqrt{\Omega}x)^2/\Omega} = \frac{2}{\Gamma(m)}\,m^m x^{2m-1} e^{-mx^2}$$
Problem 2.16 :
The transfer function of the filter is:

$$H(f) = \frac{1/j\omega C}{R + 1/j\omega C} = \frac{1}{j\omega RC + 1} = \frac{1}{j2\pi fRC + 1}$$

(a)

$$\Phi_{xx}(f) = \sigma^2 \Rightarrow \Phi_{yy}(f) = \Phi_{xx}(f)\,|H(f)|^2 = \frac{\sigma^2}{(2\pi RC)^2 f^2 + 1}$$

(b)

$$\phi_{yy}(\tau) = F^{-1}\left\{\Phi_{yy}(f)\right\} = \frac{\sigma^2}{RC}\int_{-\infty}^{\infty} \frac{\frac{1}{RC}}{\left(\frac{1}{RC}\right)^2 + (2\pi f)^2}\,e^{j2\pi f\tau}\,df = \frac{\sigma^2}{2RC}\,e^{-|\tau|/RC}$$

where the last integral is evaluated in the same way as in problem P-2.9, using the pair $e^{-a|\tau|} \leftrightarrow \frac{a/\pi}{a^2 + v^2}$ with $a = 1/RC$. Finally:

$$E\left[Y^2(t)\right] = \phi_{yy}(0) = \frac{\sigma^2}{2RC}$$
Problem 2.17 :
If $\Phi_X(f) = 0$ for $|f| > W$, then $\Phi_X(f)e^{j2\pi fa}$ is also bandlimited. The corresponding autocorrelation function can be represented as (remember that $\Phi_X(f)$ is deterministic):

$$\phi_X(\tau - a) = \sum_{n=-\infty}^{\infty} \phi_X\!\left(\frac{n}{2W} - a\right)\frac{\sin 2\pi W\!\left(\tau - \frac{n}{2W}\right)}{2\pi W\!\left(\tau - \frac{n}{2W}\right)} \quad (1)$$

Let us define:

$$\hat{X}(t) = \sum_{n=-\infty}^{\infty} X\!\left(\frac{n}{2W}\right)\frac{\sin 2\pi W\!\left(t - \frac{n}{2W}\right)}{2\pi W\!\left(t - \frac{n}{2W}\right)}$$

We must show that:

$$E\left|X(t) - \hat{X}(t)\right|^2 = 0 \quad (2)$$

First we have:

$$E\left\{\left[X(t) - \hat{X}(t)\right]X\!\left(\frac{m}{2W}\right)\right\} = \phi_X\!\left(t - \frac{m}{2W}\right) - \sum_{n=-\infty}^{\infty} \phi_X\!\left(\frac{n - m}{2W}\right)\frac{\sin 2\pi W\!\left(t - \frac{n}{2W}\right)}{2\pi W\!\left(t - \frac{n}{2W}\right)}$$

which is zero by applying (1) with $a = m/2W$ and $\tau = t$. Since this is true for any $m$, it follows that $E\left\{\left[X(t) - \hat{X}(t)\right]\hat{X}(t)\right\} = 0$. Also:

$$E\left\{\left[X(t) - \hat{X}(t)\right]X(t)\right\} = \phi_X(0) - \sum_{n=-\infty}^{\infty} \phi_X\!\left(\frac{n}{2W} - t\right)\frac{\sin 2\pi W\!\left(t - \frac{n}{2W}\right)}{2\pi W\!\left(t - \frac{n}{2W}\right)}$$

Again, by applying (1) with $a = t$ and $\tau = t$, we observe that the right-hand side of the equation is also zero. Hence (2) holds.
Problem 2.18 :
$$P[N \ge x] \le e^{-vx}\,E\left[e^{vN}\right] \quad (1)$$

To obtain the tightest bound, we minimize the right-hand side over $v$:

$$E\left[Ne^{vN}\right] - xE\left[e^{vN}\right] = 0$$

Now:

$$E\left[e^{vN}\right] = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} e^{vt}\,e^{-t^2/2}\,dt = e^{v^2/2}\,\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} e^{-(t-v)^2/2}\,dt = e^{v^2/2} \quad (2)$$

and

$$E\left[Ne^{vN}\right] = \frac{d}{dv}E\left[e^{vN}\right] = ve^{v^2/2}$$

Hence the minimizing $v$ satisfies $ve^{v^2/2} = xe^{v^2/2}$, i.e. $v = x$. Substituting into (1):

$$Q(x) \le e^{-x\cdot x}\,e^{x^2/2} = e^{-x^2/2}$$
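The bound $Q(x) \le e^{-x^2/2}$ is easy to check numerically by writing $Q$ through the complementary error function, $Q(x) = \frac{1}{2}\mathrm{erfc}(x/\sqrt{2})$:

```python
from math import erfc, exp, sqrt

# Numeric check of the Chernoff bound of Problem 2.18: Q(x) <= e^{-x^2/2}
# for the standard Gaussian tail probability Q(x) = 0.5*erfc(x/sqrt(2)).
def Q(x):
    return 0.5 * erfc(x / sqrt(2.0))

for x in [0.5, 1.0, 2.0, 3.0, 4.0]:
    print(x, Q(x), exp(-x * x / 2))
```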
Problem 2.19 :
Since $H(0) = \sum_{n=-\infty}^{\infty} h(n) = 0$, we have $m_y = m_x H(0) = 0$.
The autocorrelation of the output is:

$$\phi_{yy}(k) = \sum_{i}\sum_{j} h(i)h(j)\,\phi_{xx}(k - j + i) = \sigma_x^2\sum_{i=-\infty}^{\infty} h(i)h(k + i)$$

where the last equality stems from the autocorrelation function of $X(n)$:

$$\phi_{xx}(k - j + i) = \sigma_x^2\,\delta(k - j + i) = \begin{cases} \sigma_x^2, & j = k + i \\ 0, & \text{o.w.} \end{cases}$$

Hence, $\phi_{yy}(0) = 6\sigma_x^2$, $\phi_{yy}(1) = \phi_{yy}(-1) = -4\sigma_x^2$, $\phi_{yy}(2) = \phi_{yy}(-2) = \sigma_x^2$, and $\phi_{yy}(k) = 0$ otherwise.
Finally, the frequency response of the discrete-time system is:

$$H(f) = \sum_{n} h(n)e^{-j2\pi fn} = 1 - 2e^{-j2\pi f} + e^{-j4\pi f} = \left(1 - e^{-j2\pi f}\right)^2$$
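With $h = (1, -2, 1)$ read off from $H(f)$, the autocorrelation sum can be evaluated mechanically; numpy's `correlate` in full mode computes exactly $\sum_i h(i)h(k+i)$ for $k = -2, \ldots, 2$:

```python
import numpy as np

# Check of Problem 2.19: phi_yy(k)/sigma_x^2 = sum_i h(i) h(k+i) for h = (1,-2,1),
# which should give (1, -4, 6, -4, 1) for k = -2..2.
h = np.array([1.0, -2.0, 1.0])
phi = np.correlate(h, h, mode="full")
print(phi)
```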
Problem 2.20 :
$$\phi(k) = \left(\frac{1}{2}\right)^{|k|}$$

The power density spectrum is:

$$\Phi(f) = \sum_{k=-\infty}^{\infty} \phi(k)e^{-j2\pi fk} = \sum_{k=1}^{\infty} \left(\frac{1}{2}e^{j2\pi f}\right)^{k} + \sum_{k=0}^{\infty} \left(\frac{1}{2}e^{-j2\pi f}\right)^{k}$$

$$= \frac{\frac{1}{2}e^{j2\pi f}}{1 - \frac{1}{2}e^{j2\pi f}} + \frac{1}{1 - \frac{1}{2}e^{-j2\pi f}} = \frac{3/4}{5/4 - \cos 2\pi f} = \frac{3}{5 - 4\cos 2\pi f}$$
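The closed form can be checked against a truncated version of the defining sum; truncating at $|k| = 60$ leaves a tail of order $(1/2)^{60}$, far below double precision:

```python
import numpy as np

# Numeric check of Problem 2.20: sum_k (1/2)^{|k|} e^{-j2*pi*f*k} = 3/(5 - 4 cos 2*pi*f).
f = np.linspace(-0.5, 0.5, 101)
K = 60
k = np.arange(-K, K + 1)
Phi = np.sum((0.5 ** np.abs(k))[None, :] * np.exp(-2j * np.pi * np.outer(f, k)), axis=1)
closed = 3.0 / (5.0 - 4.0 * np.cos(2 * np.pi * f))
print(np.max(np.abs(Phi - closed)))
```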
Problem 2.21 :
We will denote the discrete-time process by the subscript d and the continuous-time (analog)
process by the subscript a. Also, f will denote the analog frequency and fd the discrete-time
frequency.
(a) The sampled process is $X_d(n) = X_a(nT)$, so:

$$\phi_d(k) = E\left[X_a\big((n+k)T\big)X_a(nT)\right] = \phi_a(kT)$$

Hence, the autocorrelation function of the sampled signal is equal to the sampled autocorrelation function of $X(t)$.

(b)

$$\phi_d(k) = \phi_a(kT) = \int_{-\infty}^{\infty} \Phi_a(f)e^{j2\pi fkT}\,df = \sum_{l=-\infty}^{\infty}\int_{(2l-1)/2T}^{(2l+1)/2T} \Phi_a(f)e^{j2\pi fkT}\,df$$

$$= \sum_{l=-\infty}^{\infty}\int_{-1/2T}^{1/2T} \Phi_a\!\left(f + \frac{l}{T}\right)e^{j2\pi fkT}\,df = \int_{-1/2T}^{1/2T}\left[\sum_{l=-\infty}^{\infty} \Phi_a\!\left(f + \frac{l}{T}\right)\right]e^{j2\pi fkT}\,df$$

Let $f_d = fT$. Then:

$$\phi_d(k) = \int_{-1/2}^{1/2} \frac{1}{T}\left[\sum_{l=-\infty}^{\infty} \Phi_a\!\left(\frac{f_d + l}{T}\right)\right]e^{j2\pi f_d k}\,df_d \quad (1)$$

We know that the autocorrelation function of a discrete-time process is the inverse Fourier transform of its power spectral density:

$$\phi_d(k) = \int_{-1/2}^{1/2} \Phi_d(f_d)\,e^{j2\pi f_d k}\,df_d \quad (2)$$

Comparing (1), (2):

$$\Phi_d(f_d) = \frac{1}{T}\sum_{l=-\infty}^{\infty} \Phi_a\!\left(\frac{f_d + l}{T}\right) \quad (3)$$

This equals $\frac{1}{T}\Phi_a\!\left(\frac{f_d}{T}\right)$ iff:

$$\Phi_a(f) = 0, \quad \forall f : |f| > 1/2T$$
Otherwise, the sum of the shifted copies of a (in (3)) will overlap and aliasing will occur.
Problem 2.22 :
(a)

$$\phi_a(\tau) = \int_{-\infty}^{\infty} \Phi_a(f)e^{j2\pi f\tau}\,df = \int_{-W}^{W} e^{j2\pi f\tau}\,df = \frac{\sin 2\pi W\tau}{\pi\tau}$$

(b) If $T = \frac{1}{2W}$, then:

$$\phi_d(k) = \phi_a(kT) = \frac{\sin 2\pi WkT}{\pi kT} = \begin{cases} 2W = 1/T, & k = 0 \\ 0, & \text{otherwise} \end{cases}$$

Thus, the sequence $X(n)$ is a white-noise sequence. The fact that this is the minimum value of $T$ can be seen from the power spectral density of the sampled process, which consists of copies of $\Phi_a(f)$ occupying the bands $[lf_s - W,\,lf_s + W]$ around the multiples of the sampling rate $f_s$. We see that the maximum sampling rate $f_s$ that gives a spectrally flat sequence is obtained when the adjacent copies just touch:

$$W = f_s - W \Rightarrow f_s = 2W \Rightarrow T = \frac{1}{2W}$$

(c) The triangular-shaped spectrum $\Phi(f) = 1 - \frac{|f|}{W}$, $|f| \le W$ may be obtained by convolving two rectangular spectra, each of width $W$ and height $1/\sqrt{W}$. Hence the autocorrelation function is the product of the corresponding inverse transforms:

$$\phi(\tau) = \frac{1}{W}\left(\frac{\sin \pi W\tau}{\pi\tau}\right)^2$$

Sampling with $T = 1/W$:

$$\phi_d(k) = \frac{1}{W}\left(\frac{\sin \pi WkT}{\pi kT}\right)^2 = W\left(\frac{\sin \pi k}{\pi k}\right)^2 = \begin{cases} W, & k = 0 \\ 0, & \text{otherwise} \end{cases}$$

so in this case the sampled sequence is white for the sampling rate $1/T = W$.
Problem 2.23 :
Let us denote $y(t) = f_k(t)f_j(t)$. Then:

$$\int_{-\infty}^{\infty} f_k(t)f_j(t)\,dt = \int_{-\infty}^{\infty} y(t)\,dt = Y(f)\big|_{f=0}$$

Since $y(t)$ is a product in the time domain, its Fourier transform is the convolution:

$$Y(f) = F_k(f) \star F_j(f) = \int_{-\infty}^{\infty} F_k(a)F_j(f - a)\,da$$

and at $f = 0$:

$$Y(f)\big|_{f=0} = \int_{-\infty}^{\infty} F_k(a)F_j(-a)\,da = \left(\frac{1}{2W}\right)^2\int_{-W}^{W} e^{-j2\pi a(k - j)/2W}\,da = \begin{cases} 1/2W, & k = j \\ 0, & k \neq j \end{cases}$$
Problem 2.24 :
The equivalent noise bandwidth is defined as:

$$B_{eq} = \frac{1}{G}\int_{0}^{\infty} |H(f)|^2\,df$$

For the filter shown in Fig. P2-12 we have $G = 1$ and:

$$B_{eq} = \int_{0}^{\infty} |H(f)|^2\,df = B$$

For the lowpass RC filter:

$$H(f) = \frac{1}{1 + j2\pi fRC} \Rightarrow |H(f)|^2 = \frac{1}{1 + (2\pi fRC)^2}$$

So $G = 1$ and:

$$B_{eq} = \int_{0}^{\infty} |H(f)|^2\,df = \frac{1}{2\pi RC}\int_{0}^{\infty} \frac{du}{1 + u^2} = \frac{1}{4RC}$$

where the last integral is evaluated in the same way as in problem P-2.9.
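The value $B_{eq} = 1/(4RC)$ can be confirmed by direct numerical integration; the RC value below is arbitrary, and the substitution $u = 2\pi fRC$ reduces the integrand to $1/(1 + u^2)$:

```python
import numpy as np

# Numeric check of Problem 2.24: B_eq = int_0^inf df / (1 + (2*pi*f*RC)^2) = 1/(4*RC).
RC = 1e-3
# Trapezoidal rule on u = 2*pi*f*RC over [0, 1e4]; the neglected tail is ~1e-4.
u = np.linspace(0.0, 1e4, 2_000_001)
g = 1.0 / (1.0 + u ** 2)
h = u[1] - u[0]
integral = h * (g.sum() - 0.5 * (g[0] + g[-1])) / (2 * np.pi * RC)
print(integral, 1.0 / (4 * RC))
```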