1. (i) $\varphi_X(s) = e^{s^2/2}$.
(ii) We have
$$e^{-as}\varphi_X(s) = e^{-as}e^{s^2/2}.$$
This upper bound should be minimized with respect to $s$ to obtain the Chernoff bound.
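For a standard Gaussian $X$, minimizing $e^{-as}e^{s^2/2}$ over $s$ gives $s = a$ and the bound $e^{-a^2/2}$. A quick Monte Carlo sketch (the value $a = 2$ is chosen only for illustration) confirms the bound dominates the true tail probability:

```python
import math
import random

def chernoff_bound(a):
    # minimizing exp(-a*s + s**2/2) over s gives s = a,
    # so the bound on P[X >= a] is exp(-a**2/2)
    return math.exp(-a * a / 2)

random.seed(0)
a = 2.0
n = 200_000
tail = sum(random.gauss(0.0, 1.0) >= a for _ in range(n)) / n  # empirical P[X >= a]
bound = chernoff_bound(a)
```

The empirical tail (about 0.023 for $a = 2$) sits well below the Chernoff bound $e^{-2} \approx 0.135$, as expected.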
3.
$$E[X] = \int_0^{100} x f_X(x)\,dx = \int_0^{100} \frac{x}{100}\,dx = 50.$$
Given that $X \ge 65$, $X$ is uniformly distributed in $[65, 100]$. Therefore, we have
$$E[X \mid X \ge 65] = \int_{65}^{100} x f_X(x \mid X \ge 65)\,dx = \int_{65}^{100} \frac{x}{35}\,dx = 82.5.$$
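Both values can be checked by simulation; a minimal sketch:

```python
import random

random.seed(1)
n = 200_000
xs = [random.uniform(0.0, 100.0) for _ in range(n)]
mean = sum(xs) / n                        # ~ E[X] = 50
tail = [x for x in xs if x >= 65.0]       # condition on X >= 65
cond_mean = sum(tail) / len(tail)         # ~ E[X | X >= 65] = 82.5
```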
4. $E[X] = \sum_{k=0}^{\infty} \frac{k e^{-a} a^k}{k!}$. We know
$$\sum_{k=0}^{\infty} \frac{e^{-a} a^k}{k!} = 1. \qquad (1)$$
Differentiating with respect to $a$, we get
$$\sum_{k=0}^{\infty} \frac{k e^{-a} a^{k-1}}{k!} - e^{-a}\sum_{k=0}^{\infty} \frac{a^k}{k!} = 0,$$
i.e.,
$$\frac{1}{a} E[X] = e^{-a}\sum_{k=0}^{\infty} \frac{a^k}{k!} = 1.$$
Therefore, we have $E[X] = a$.
Differentiating (1) twice with respect to $a$, we get
$$\sum_{k=0}^{\infty} \frac{k(k-1) e^{-a} a^{k-2}}{k!} - \sum_{k=0}^{\infty} \frac{k e^{-a} a^{k-1}}{k!} - \sum_{k=0}^{\infty} \frac{k e^{-a} a^{k-1}}{k!} + \sum_{k=0}^{\infty} \frac{e^{-a} a^k}{k!} = 0,$$
i.e.,
$$\frac{1}{a^2}\sum_{k=0}^{\infty} \frac{k^2 e^{-a} a^k}{k!} - \frac{1}{a^2}\sum_{k=0}^{\infty} \frac{k e^{-a} a^k}{k!} - \frac{2}{a}\sum_{k=0}^{\infty} \frac{k e^{-a} a^k}{k!} + 1 = 0,$$
i.e.,
$$\frac{1}{a^2} E[X^2] - \frac{1}{a^2}\,a - \frac{2}{a}\,a + 1 = 0.$$
Therefore, we have $E[X^2] = a^2 + a \Rightarrow \mathrm{Var}(X) = a$.
5.
$$E[X] = \int_0^{\infty} x \lambda e^{-\lambda x}\,dx = \int_0^{\infty} x\, d\left(-e^{-\lambda x}\right) = \left.-x e^{-\lambda x}\right|_0^{\infty} + \int_0^{\infty} e^{-\lambda x}\,dx = 0 + \left.\frac{e^{-\lambda x}}{-\lambda}\right|_0^{\infty} = \frac{1}{\lambda}.$$
$$f_X(x \mid X \ge 2) = \begin{cases} 0 & x < 2 \\ \dfrac{f_X(x)}{P[X \ge 2]} & x \ge 2 \end{cases}$$
$$P[X \ge 2] = \int_2^{\infty} \lambda e^{-\lambda x}\,dx = \left.-e^{-\lambda x}\right|_2^{\infty} = e^{-2\lambda}.$$
Therefore, we have
$$f_X(x \mid X \ge 2) = \begin{cases} 0 & x < 2 \\ \lambda e^{-\lambda(x-2)} & x \ge 2 \end{cases}$$
$$E[X \mid X \ge 2] = \int_2^{\infty} x \lambda e^{-\lambda(x-2)}\,dx = \left.-x e^{-\lambda(x-2)}\right|_2^{\infty} + \int_2^{\infty} e^{-\lambda(x-2)}\,dx = 2 + \left.\frac{e^{-\lambda(x-2)}}{-\lambda}\right|_2^{\infty} = 2 + \frac{1}{\lambda}.$$
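This is the memoryless property of the exponential distribution: given $X \ge 2$, the overshoot $X - 2$ is again exponential. A Monte Carlo sketch (the rate $\lambda = 0.5$ is illustrative):

```python
import random

random.seed(2)
lam = 0.5   # illustrative rate
n = 200_000
xs = [random.expovariate(lam) for _ in range(n)]
mean = sum(xs) / n                        # ~ E[X] = 1/lam = 2
tail = [x for x in xs if x >= 2.0]        # condition on X >= 2
cond_mean = sum(tail) / len(tail)         # ~ E[X | X >= 2] = 2 + 1/lam = 4
```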
6. (i) $MSE(c) = E[(Y - c)^2] = \int_{-\infty}^{\infty} (y - c)^2 f_Y(y)\,dy$. Setting the derivative of $MSE(c)$ with respect to $c$ to be 0, we get
$$\frac{d\,MSE(c)}{dc} = -\int_{-\infty}^{\infty} 2(y - c) f_Y(y)\,dy = 0,$$
i.e.,
$$c = \int_{-\infty}^{\infty} y f_Y(y)\,dy = E[Y].$$
Also,
$$\frac{d^2 MSE(c)}{dc^2} = \int_{-\infty}^{\infty} 2 f_Y(y)\,dy = 2 > 0,$$
so $c = E[Y]$ is indeed a minimum.
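As an empirical illustration, scanning candidate constants $c$ for a simulated $Y$ shows the sample MSE is smallest near the sample mean (the distribution of $Y$ here is arbitrary, chosen only for the demonstration):

```python
import random

random.seed(3)
ys = [random.gauss(1.5, 2.0) for _ in range(20_000)]   # illustrative Y
mean_y = sum(ys) / len(ys)

def mse(c):
    # empirical mean squared error of the constant predictor c
    return sum((y - c) ** 2 for y in ys) / len(ys)

# scan candidate constants c on a grid; the empirical MSE is smallest near E[Y]
grid = [i / 10 for i in range(-20, 51)]
best = min(grid, key=mse)
```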
(ii) $E[(Y - g(X))^2] = E\left[E[(Y - g(X))^2 \mid X]\right]$, i.e.,
$$E[(Y - g(X))^2] = \int_{-\infty}^{\infty} E[(Y - g(X))^2 \mid X = x]\, f_X(x)\,dx.$$
Since $f_X(x) \ge 0$, we minimize $E[(Y - g(X))^2]$ by minimizing $E[(Y - g(X))^2 \mid X = x]$ for each $x$:
$$E[(Y - g(X))^2 \mid X = x] = \int_{-\infty}^{\infty} (y - g(x))^2 f_Y(y \mid X = x)\,dy.$$
As in part (i), the best choice for $g(x)$ is
$$g(x) = \int_{-\infty}^{\infty} y f_Y(y \mid X = x)\,dy = E[Y \mid X = x].$$
7.
$$E[X^n] = \left.\frac{\partial^n \varphi_X(s)}{\partial s^n}\right|_{s=0}.$$
Therefore, $E[X^n] = 0$ when $n$ is odd. When $n$ is even and $n = 2m$, we have
$$\left.\frac{\partial^n \varphi_X(s)}{\partial s^n}\right|_{s=0} = \frac{(2m)!}{m!\,2^m}\,\sigma^{2m} = \big(1 \cdot 3 \cdots (2m-3)(2m-1)\big)\,\sigma^{2m}.$$
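The coefficient identity $\frac{(2m)!}{m!\,2^m} = 1 \cdot 3 \cdots (2m-1)$ is easy to verify exactly in integer arithmetic:

```python
import math

def odd_product(m):
    # 1 * 3 * 5 * ... * (2m - 1)
    out = 1
    for k in range(1, 2 * m, 2):
        out *= k
    return out

# the coefficient (2m)!/(m! 2^m) equals the product of the odd numbers up to 2m-1
checks = [(math.factorial(2 * m) // (math.factorial(m) * 2**m), odd_product(m))
          for m in range(1, 10)]
```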
8. a)
$$f_X(x) = \frac{1}{(2\pi)^{3/2} |C|^{1/2}} \exp\left(-\frac{1}{2}\, x^T C^{-1} x\right)$$
where $|C| = 36$ and
$$C^{-1} = \frac{1}{36}\begin{bmatrix} 30 & -18 & 0 \\ -18 & 18 & 0 \\ 0 & 0 & 6 \end{bmatrix}.$$
b) $E[Y] = 0$ and
$$\sigma_Y^2 = \begin{bmatrix} 1 & 2 & -1 \end{bmatrix} \begin{bmatrix} 3 & 3 & 0 \\ 3 & 5 & 0 \\ 0 & 0 & 6 \end{bmatrix} \begin{bmatrix} 1 \\ 2 \\ -1 \end{bmatrix} = 41.$$
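The determinant, inverse, and quadratic form above can be checked directly with NumPy, using the covariance matrix $C$ from the solution:

```python
import numpy as np

C = np.array([[3., 3., 0.],
              [3., 5., 0.],
              [0., 0., 6.]])
detC = np.linalg.det(C)          # |C| = 36
Cinv = np.linalg.inv(C)          # should equal (1/36) [[30,-18,0],[-18,18,0],[0,0,6]]
a = np.array([1., 2., -1.])      # coefficient vector of Y
varY = a @ C @ a                 # sigma_Y^2 = 41
```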
9. (a) $X_2$ is a zero-mean Gaussian with variance 2:
$$f_{X_2}(x_2) = \frac{1}{\sqrt{4\pi}}\exp\left\{-\frac{x_2^2}{4}\right\}.$$
Therefore, we have
$$\begin{aligned}
f_{X_1}(x_1 \mid X_2 = x_2) &= \frac{\dfrac{1}{2\pi\sqrt{2-r^2}}\exp\left\{-\dfrac{1}{2(2-r^2)}\left(2x_1^2 - 2r x_1 x_2 + x_2^2\right)\right\}}{\dfrac{1}{\sqrt{4\pi}}\exp\left\{-\dfrac{x_2^2}{4}\right\}} \\
&= \frac{\sqrt{2}}{\sqrt{2\pi}\sqrt{2-r^2}}\exp\left\{-\frac{x_1^2 - r x_1 x_2 + \frac{r^2 x_2^2}{4}}{2-r^2}\right\} \\
&= \frac{\sqrt{2}}{\sqrt{2\pi}\sqrt{2-r^2}}\exp\left\{-\frac{\left(x_1 - \frac{r x_2}{2}\right)^2}{2-r^2}\right\}.
\end{aligned}$$
Given $X_2 = x_2$, $X_1$ is Gaussian with mean $\frac{r x_2}{2}$ and variance $1 - \frac{r^2}{2}$.
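Since $E[X_1 \mid X_2] = \frac{r}{2} X_2$ is linear, the conditional mean coefficient equals the regression slope $\mathrm{Cov}(X_1,X_2)/\mathrm{Var}(X_2) = r/2$, and the residual variance equals $1 - r^2/2$. A simulation sketch (the value $r = 0.8$ is illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
r = 0.8                                   # illustrative; needs |r| < sqrt(2)
n = 200_000
z1 = rng.standard_normal(n)
z2 = rng.standard_normal(n)
x1 = z1                                   # Var(X1) = 1
x2 = r * z1 + np.sqrt(2 - r**2) * z2      # Var(X2) = 2, Cov(X1, X2) = r
# E[X1 | X2] = (r/2) X2, so the regression slope of x1 on x2 is r/2
slope = np.cov(x1, x2)[0, 1] / np.var(x2)
resid_var = np.var(x1 - slope * x2)       # conditional variance, ~ 1 - r^2/2
```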
10. (a)
$$\varphi_X(s) = \exp\left(s^T m + \frac{1}{2}\, s^T C s\right).$$
$$\varphi_{X_1}(s_1) = \left.\varphi_X(s)\right|_{s_2 = 0} = \exp\left\{m_1 s_1 + \frac{\sigma_1^2 s_1^2}{2}\right\}.$$
Therefore, $X_1$ is Gaussian. Similarly, $X_2$ can be shown to be Gaussian.
(b)
$$\begin{aligned}
f_{X_1}(x_1) &= \int_{-\infty}^{\infty} f_{X_1,X_2}(x_1, x_2)\,dx_2 \\
&= \frac{1}{\sqrt{2\pi}} \exp\left\{-\frac{x_1^2}{2}\right\} \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}} \exp\left\{-\frac{x_2^2}{2}\right\} dx_2 \\
&\quad + \frac{x_1}{\sqrt{2\pi}} \exp\left\{-\frac{x_1^2}{2}\right\} \int_{-\infty}^{\infty} \frac{x_2}{\sqrt{2\pi}} \exp\left\{-\frac{x_2^2}{2}\right\} dx_2 \\
&= \frac{1}{\sqrt{2\pi}} \exp\left\{-\frac{x_1^2}{2}\right\} + 0 \\
&= \frac{1}{\sqrt{2\pi}} \exp\left\{-\frac{x_1^2}{2}\right\}.
\end{aligned}$$
Similarly, we can show that
$$f_{X_2}(x_2) = \frac{1}{\sqrt{2\pi}} \exp\left\{-\frac{x_2^2}{2}\right\}.$$
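The key step is that the cross term vanishes because its $x_2$-integrand is odd. A simple midpoint-rule check (grid and truncation chosen for illustration) confirms both the vanishing cross integral and the unit mass of the Gaussian factor:

```python
import math

a, b, n = -10.0, 10.0, 100_000
h = (b - a) / n

def std_normal(x):
    # standard Gaussian density
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

mids = [a + (i + 0.5) * h for i in range(n)]
# odd integrand x * f(x) integrates to ~0; f(x) itself integrates to ~1
cross = sum(x * std_normal(x) for x in mids) * h
total = sum(std_normal(x) for x in mids) * h
```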
11. $E[Y] = A E[X] + b$. Let $C_X$ and $C_Y$ denote the covariance matrices of $X$ and $Y$ respectively. Since $Y - E[Y] = A(X - E[X])$,
$$C_Y = E\left[A(X - E[X])(X - E[X])^T A^T\right] = A C_X A^T.$$
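Both $E[Y] = A E[X] + b$ and the covariance identity $C_Y = A C_X A^T$ that follows from the same linearity argument can be checked by simulation; the matrices below are illustrative, not from the problem:

```python
import numpy as np

rng = np.random.default_rng(1)
# illustrative parameters (not from the problem)
A = np.array([[1.0, 2.0],
              [0.0, 1.0],
              [3.0, -1.0]])
b = np.array([1.0, -1.0, 0.0])
mX = np.array([0.3, -0.2])
CX = np.array([[2.0, 0.5],
               [0.5, 1.0]])
X = rng.multivariate_normal(mX, CX, size=200_000)
Y = X @ A.T + b                 # Y = A X + b, applied row-wise
mY = Y.mean(axis=0)             # ~ A mX + b
CY = np.cov(Y.T)                # ~ A CX A^T
```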
12. $X$ is proper if all the elements of $E[(X - E[X])(X - E[X])^T]$ are zero. Writing $X = X^r + jX^i$ and setting the real and imaginary parts of this pseudo-covariance to zero gives
$$E[(X^r - E[X^r])(X^r - E[X^r])^T] = E[(X^i - E[X^i])(X^i - E[X^i])^T],$$
and
$$E[(X^r - E[X^r])(X^i - E[X^i])^T] = -E[(X^i - E[X^i])(X^r - E[X^r])^T].$$
Since $E[(X^i - E[X^i])(X^r - E[X^r])^T] = E[(X^r - E[X^r])(X^i - E[X^i])^T]^T$, we have
$$E[(X^r - E[X^r])(X^i - E[X^i])^T] = -E[(X^r - E[X^r])(X^i - E[X^i])^T]^T.$$
This means that the diagonal elements of $E[(X^r - E[X^r])(X^i - E[X^i])^T]$ are zero, i.e., the real and imaginary part of each component of $X$ are uncorrelated. Thus, the required conditions are:
(i) The vectors $X^r$ and $X^i$ should have the same covariance matrix.
(ii) The vectors $X^r$ and $X^i$ should have a cross-covariance matrix that is skew-symmetric.
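One way to see the conditions in action: a vector $X = UZ$, where $Z$ has iid circularly symmetric complex Gaussian entries, is proper for any complex matrix $U$. The sketch below (with an arbitrary illustrative $U$) checks that the empirical pseudo-covariance vanishes, the real and imaginary parts share a covariance, and their cross-covariance is skew-symmetric:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000
# Z has iid circularly symmetric unit-variance complex Gaussian entries
Z = (rng.standard_normal((n, 2)) + 1j * rng.standard_normal((n, 2))) / np.sqrt(2)
# a linear map of a proper vector stays proper; U is an arbitrary illustrative matrix
U = np.array([[1.0 + 0.5j, 0.2 - 0.3j],
              [0.0 + 1.0j, 1.5 + 0.0j]])
X = Z @ U.T
pseudo = X.T @ X / n            # pseudo-covariance E[X X^T], should be ~0
Cr = np.cov(X.real.T)           # covariance of X^r
Ci = np.cov(X.imag.T)           # covariance of X^i
K = (X.real - X.real.mean(0)).T @ (X.imag - X.imag.mean(0)) / n  # cross-covariance
```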
13.
$$s^T m = \sum_{i=1}^n s_i m_i \quad \text{and} \quad s^T C s = \sum_{i=1}^n \sum_{j=1}^n s_i C_{ij} s_j.$$
$$E[X_k] = \left.\frac{\partial \varphi_X(s)}{\partial s_k}\right|_{s=0},$$
$$\frac{\partial \varphi_X(s)}{\partial s_k} = \varphi_X(s)\left(m_k + C_{kk} s_k + \frac{1}{2}\sum_{j=1, j \ne k}^n (C_{kj} + C_{jk}) s_j\right).$$
Therefore, we have
$$E[X_k] = m_k \quad \text{and} \quad E[X] = m.$$
$$R = E[X X^T] = C + m m^T:$$
$$R_{kl} = \left.\frac{\partial}{\partial s_k}\frac{\partial}{\partial s_l}\, \varphi_X(s)\right|_{s=0},$$
$$R_{kk} = \left.\varphi_X(s)\left(C_{kk} + \left(m_k + C_{kk} s_k + \frac{1}{2}\sum_{j=1, j \ne k}^n (C_{kj} + C_{jk}) s_j\right)^2\right)\right|_{s=0} = C_{kk} + m_k^2,$$
$$\begin{aligned}
R_{kl} &= \varphi_X(s)\Bigg(\frac{1}{2}(C_{kl} + C_{lk}) + \left(m_k + C_{kk} s_k + \frac{1}{2}\sum_{j=1, j \ne k}^n (C_{kj} + C_{jk}) s_j\right) \\
&\qquad\qquad \times \left(m_l + C_{ll} s_l + \frac{1}{2}\sum_{j=1, j \ne l}^n (C_{lj} + C_{jl}) s_j\right)\Bigg)\Bigg|_{s=0} \\
&= \frac{1}{2}(C_{kl} + C_{lk}) + m_k m_l \\
&= C_{kl} + m_k m_l \quad \text{(since $C$ is symmetric)}.
\end{aligned}$$
Therefore, the covariance matrix of $X$ is $C$.
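The relation $R = E[XX^T] = C + mm^T$ can be verified by Monte Carlo for any concrete Gaussian vector; the mean and covariance below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
# illustrative mean vector and covariance matrix
m = np.array([1.0, -0.5, 2.0])
C = np.array([[2.0, 0.3, 0.0],
              [0.3, 1.0, 0.4],
              [0.0, 0.4, 1.5]])
X = rng.multivariate_normal(m, C, size=200_000)
R = X.T @ X / len(X)            # correlation matrix E[X X^T]
target = C + np.outer(m, m)     # C + m m^T
```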