# EE 562a: Random Processes in Engineering

EE department, USC, Fall 2014
Instructor: Prof. Salman Avestimehr
Homework 3
Solutions
1. (Uncorrelated vs. Independent)
(a) We have seen that independence of two random variables X and Y implies that they are
uncorrelated, but that the converse is not true. To see this, let Θ be uniformly distributed
on [0, π] and let random variables X and Y be defined by X = cos Θ and Y = sin Θ.
Show that X and Y are uncorrelated but not independent.
(b) Now show that if we make the stronger assumption that random variables X and Y
have the property that for all functions g and h, the random variables g(X) and h(Y)
are uncorrelated, then X and Y must be independent. That is, uncorrelatedness of the
random variables themselves is not enough, but uncorrelatedness of all functions of the
random variables is enough to ensure independence. For simplicity assume that X and Y
are discrete random variables.
Hint: Consider for all a and b the indicator functions
g(x) = 1_a(x) = 1 if x = a, 0 otherwise,
h(y) = 1_b(y) = 1 if y = b, 0 otherwise.
Problem 1 Solution
(a) X and Y are uncorrelated if cov(X,Y) = 0.
E[X] = E[cos Θ] = ∫_0^π (1/π) cos θ dθ = 0
E[Y] = E[sin Θ] = ∫_0^π (1/π) sin θ dθ = 2/π
cov(X, Y) = E[XY] − E[X]E[Y] = ∫_0^π (1/π) sin θ cos θ dθ − 0
          = ∫_0^π (1/(2π)) sin(2θ) dθ = 0        (1)
Hence, X and Y are uncorrelated. Moreover, if you know X, you can tell the value of Y
exactly; in other words, Y = f(X) = √(1 − X²), since sin θ ≥ 0 for θ ∈ [0, π].
Therefore, they are not independent.
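As a quick sanity check (not part of the required solution), a Monte Carlo sketch in Python can illustrate both claims: the sample covariance is near zero, yet every sample satisfies Y = √(1 − X²).

```python
import math
import random

random.seed(0)
N = 200_000

# Draw Theta uniform on [0, pi]; set X = cos(Theta), Y = sin(Theta).
theta = [random.uniform(0.0, math.pi) for _ in range(N)]
xs = [math.cos(t) for t in theta]
ys = [math.sin(t) for t in theta]

mx = sum(xs) / N          # should be near E[X] = 0
my = sum(ys) / N          # should be near E[Y] = 2/pi
cov = sum(x * y for x, y in zip(xs, ys)) / N - mx * my

print(f"E[X] ~ {mx:.4f}, E[Y] ~ {my:.4f}, cov ~ {cov:.4f}")

# Uncorrelated: the sample covariance is statistically zero.
assert abs(cov) < 0.01

# But not independent: Y is exactly sqrt(1 - X^2) for every sample.
assert all(abs(y - math.sqrt(1.0 - x * x)) < 1e-6 for x, y in zip(xs, ys))
```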
(b) By assumption, g(X) and h(Y) are uncorrelated for every pair of functions g and h;
in particular this holds for the indicator functions given in the hint, for which
E[g(X)] = P(X = a),  E[h(Y)] = P(Y = b),  E[g(X)h(Y)] = P(X = a, Y = b)
⇒ cov(g(X), h(Y)) = E[g(X)h(Y)] − E[g(X)]E[h(Y)]
= P(X = a, Y = b) − P(X = a)P(Y = b) = 0
⇒ P(X = a, Y = b) = P_X(a) P_Y(b)  ∀ a, b        (2)
which means X and Y are independent.
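To see why plain uncorrelatedness is strictly weaker than the indicator condition, here is a small illustrative sketch (the pmf is our own example, not from the problem): X uniform on {−1, 0, 1} and Y = |X| are uncorrelated, yet the hint's indicator functions expose the dependence.

```python
from fractions import Fraction

# Illustrative example: X uniform on {-1, 0, 1}, Y = |X|.
p = Fraction(1, 3)
pairs = [(x, abs(x), p) for x in (-1, 0, 1)]  # joint pmf of (X, Y)

E = lambda f: sum(prob * f(x, y) for x, y, prob in pairs)

cov_XY = E(lambda x, y: x * y) - E(lambda x, y: x) * E(lambda x, y: y)
assert cov_XY == 0  # X and Y are uncorrelated

# Indicator pair g = 1_{X=1}, h = 1_{Y=1} from the hint:
g = lambda x, y: 1 if x == 1 else 0
h = lambda x, y: 1 if y == 1 else 0
cov_gh = E(lambda x, y: g(x, y) * h(x, y)) - E(g) * E(h)
print(cov_gh)  # 1/3 - (1/3)(2/3) = 1/9, nonzero => not independent
assert cov_gh == Fraction(1, 9)
```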
2. (Bayes’ Rule for PDF) Consider a communication channel corrupted by noise. Let X be
the value of the transmitted signal and Y be the value of the received signal. Assume that
the conditional density of Y given {X = x} is Gaussian, that is
f_{Y|X}(y|x) = (1/√(2πσ²)) exp(−(y − x)²/(2σ²)),
and X is uniformly distributed on [−1, 1]. What is the conditional probability density
function of X given Y (i.e., f_{X|Y}(x|y))?
Problem 2 Solution
f_{X|Y}(x|y) = f_{Y|X}(y|x) f_X(x) / f_Y(y)
f_X(x) = (1/2) rect(x/2)
f_Y(y) = ∫_{−∞}^{∞} f_{X,Y}(x, y) dx = ∫_{−∞}^{∞} f_{Y|X}(y|x) f_X(x) dx
       = ∫_{−1}^{1} (1/2) (1/√(2πσ²)) e^{−(y−x)²/(2σ²)} dx
       = (1/2) ∫_{(−1−y)/σ}^{(1−y)/σ} (1/√(2π)) e^{−u²/2} du        (substituting u = (x − y)/σ)
       = (1/2) [Q((−1 − y)/σ) − Q((1 − y)/σ)]
       = (1/2) [Q((y − 1)/σ) − Q((1 + y)/σ)]
⇒ f_{X|Y}(x|y) = (1/√(2πσ²)) e^{−(y−x)²/(2σ²)} rect(x/2) / [Q((y − 1)/σ) − Q((1 + y)/σ)]        (3)
where Q(t) = ∫_t^∞ (1/√(2π)) e^{−u²/2} du and we used Q(−t) = 1 − Q(t) in the last step.
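A numerical sanity check of (3): for illustrative values σ = 0.5 and y = 0.3 (our choice, not from the problem), the expression should integrate to 1 over x, using Q(t) = ½ erfc(t/√2).

```python
import math

# Illustrative parameters (not from the problem statement).
Q = lambda t: 0.5 * math.erfc(t / math.sqrt(2.0))
sigma, y = 0.5, 0.3
denom = Q((y - 1) / sigma) - Q((1 + y) / sigma)

def f_cond(x):
    """f_{X|Y}(x|y) from (3); rect(x/2) = 1 on [-1, 1], 0 elsewhere."""
    if abs(x) > 1:
        return 0.0
    return math.exp(-(y - x) ** 2 / (2 * sigma**2)) / (
        math.sqrt(2 * math.pi * sigma**2) * denom)

# Trapezoidal integration over the support [-1, 1]:
# a valid conditional density must integrate to 1.
n = 100_000
h = 2.0 / n
total = sum(f_cond(-1 + i * h) for i in range(1, n)) * h
total += 0.5 * h * (f_cond(-1.0) + f_cond(1.0))
print(f"integral of f_X|Y over x: {total:.6f}")
assert abs(total - 1.0) < 1e-6
```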
3. (Cauchy-Schwarz Inequality)
(a) Let f_1(X) and f_2(X) be functions of the random variable X. Show that
(E[f_1(X)f_2(X)])² ≤ E[f_1(X)²] E[f_2(X)²].
(b) Use (a) to deduce that P(X = 0) ≤ 1 − (E[X])²/E[X²].
(c) Generalize (a) and prove that for any two random variables X and Y,
(E[XY])² ≤ E[X²] E[Y²].
Hint: Use the fact that E[(tX + Y)²] ≥ 0 for all t ∈ R.
Problem 3 Solution
(a) You can either prove this part using the hint for part (c), or use the Cauchy-Schwarz
inequality for integrals, i.e. |∫ f(x)g(x) dx|² ≤ ∫ |f(x)|² dx ∫ |g(x)|² dx, with
f(x) = f_1(x)√(f_X(x)) and g(x) = f_2(x)√(f_X(x)).
(b) Assume X is a discrete RV (for a continuous X the result is trivial as long as f_X(x)
does not include any impulse functions, since then P(X = 0) = 0). Let f_1(X) = X and
f_2(X) = 1 − δ(X), where δ(X) = 1 if and only if X = 0. Note that Xδ(X) = 0, so
f_1(X)f_2(X) = X, and that (1 − δ(X))² = 1 − δ(X). Applying the result in part (a) to
these functions:
(E[f_1(X)f_2(X)])² = (E[X])²
E[f_1(X)²] = E[X²]
E[f_2(X)²] = E[1 − δ(X)] = 1 − P(X = 0)
⇒ (E[X])² ≤ E[X²] (1 − P(X = 0))
⇒ P(X = 0) ≤ 1 − (E[X])²/E[X²]        (4)
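The bound in (4) can be checked exactly on a small example pmf (our own illustration, not from the problem):

```python
from fractions import Fraction

# Illustrative pmf: X takes 0, 1, 2 with probabilities 1/2, 1/4, 1/4.
pmf = {0: Fraction(1, 2), 1: Fraction(1, 4), 2: Fraction(1, 4)}

EX = sum(p * x for x, p in pmf.items())        # E[X]   = 3/4
EX2 = sum(p * x * x for x, p in pmf.items())   # E[X^2] = 5/4
p0 = pmf[0]                                    # P(X=0) = 1/2

bound = 1 - EX**2 / EX2   # 1 - (9/16)/(5/4) = 11/20
print(p0, "<=", bound)
assert p0 <= bound        # 1/2 <= 11/20, consistent with (4)
```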
(c)
E[(tX + Y)²] ≥ 0 for any t ∈ R
⇒ t² E[X²] + 2t E[XY] + E[Y²] ≥ 0        (5)
Now by setting t = −E[XY]/E[X²] we get
(E[XY])² ≤ E[X²] E[Y²]        (6)
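A quick empirical illustration of (6) on simulated data (the construction Y = X + noise is our own choice); note the inequality also holds exactly for sample moments, since they define an inner product on the sample vectors.

```python
import random

random.seed(1)
N = 10_000

# Illustrative correlated pair: Y = X + noise.
xs = [random.gauss(0.0, 1.0) for _ in range(N)]
ys = [x + random.gauss(0.0, 0.5) for x in xs]

EXY = sum(x * y for x, y in zip(xs, ys)) / N
EX2 = sum(x * x for x in xs) / N
EY2 = sum(y * y for y in ys) / N

# (E[XY])^2 <= E[X^2] E[Y^2]; strict here since Y is not a scalar multiple of X.
assert EXY**2 <= EX2 * EY2
```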
4. (Linear Estimation) The output of a channel is Y = X + N, where the input X and the
noise N are independent, zero mean random variables.
(a) Find the correlation coefficient between the input X and the output Y.
(b) Suppose we estimate the input X by a linear function g(Y) = aY. Find the value of a
that minimizes the mean squared error E[(X − aY)²].
(c) Express the resulting mean squared error in terms of the ratio between the variance of
X and the variance of N (i.e., σ²_X/σ²_N).
(d) Find COV(X −aY, X) for your choice of a in part (b).
Problem 4 Solution
(a)
ρ_XY = cov(X, Y)/(σ_X σ_Y)
cov(X, Y) = E[XY] − E[X]E[Y] = E[XY]
⇒ cov(X, Y) = E[X(X + N)] = E[X²] + E[XN] = E[X²]
E[X²] = var(X) + (E[X])² = var(X) = σ²_X
⇒ ρ_XY = σ²_X/(σ_X σ_Y) = σ_X/σ_Y,  where σ_Y = √(σ²_X + σ²_N)        (7)
(b)
E[(X − aY)²] = E[X²] − 2a E[XY] + a² E[Y²]
             = σ²_X − 2a σ²_X + a² (σ²_X + σ²_N)
(d/da) E[(X − aY)²] = 0 ⇒ a = σ²_X/(σ²_X + σ²_N)        (8)
(c)
MSE = σ²_X − 2 (σ²_X/(σ²_X + σ²_N)) σ²_X + (σ²_X/(σ²_X + σ²_N))² (σ²_X + σ²_N)
    = σ²_X σ²_N/(σ²_X + σ²_N) = σ²_X/(1 + σ²_X/σ²_N)        (9)
(d)
cov(X − aY, X) = E[X(X − aY)] − E[X]E[X − aY]
               = E[X²] − a E[XY] = σ²_X (1 − σ²_X/(σ²_X + σ²_N)) = σ²_X (1 − ρ²_XY)        (10)
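A Monte Carlo sketch of parts (b) and (c), with illustrative variances σ²_X = 4 and σ²_N = 1 (our choice, not from the problem): the empirical MSE at the optimal a should match (9) and beat nearby gains.

```python
import random

random.seed(2)
N = 100_000

# Illustrative variances (not from the problem statement).
var_X, var_N = 4.0, 1.0
a = var_X / (var_X + var_N)                 # optimal gain from (8): a = 0.8
mse_pred = var_X / (1.0 + var_X / var_N)    # predicted MSE from (9): 0.8

xs = [random.gauss(0.0, var_X**0.5) for _ in range(N)]
ys = [x + random.gauss(0.0, var_N**0.5) for x in xs]  # Y = X + N

mse_at = lambda g: sum((x - g * y) ** 2 for x, y in zip(xs, ys)) / N
mse_emp = mse_at(a)
print(f"empirical MSE {mse_emp:.3f} vs predicted {mse_pred:.3f}")
assert abs(mse_emp - mse_pred) < 0.05

# The optimal gain beats nearby perturbed gains.
assert mse_emp <= mse_at(a + 0.1) and mse_emp <= mse_at(a - 0.1)
```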
5. (a) Show that COV(X, E[Y |X]) = COV(X, Y ).
(b) Show that E[Y |X = x] = E[Y ] for all x implies that X and Y are uncorrelated.
Problem 5 Solution
(a)
cov(X, E[Y|X]) = E[X E[Y|X]] − E[X] E[E[Y|X]]
               = E[E[XY|X]] − E[X]E[Y]
               = E[XY] − E[X]E[Y] = cov(X, Y)        (11)
(b)
E[Y|X = x] = E[Y] ∀x ⇒ E[Y|X] = E[Y]
⇒ cov(X, Y) = cov(X, E[Y|X]) = E[X E[Y|X]] − E[X] E[E[Y|X]]
= E[X E[Y]] − E[X]E[Y] = E[X]E[Y] − E[X]E[Y] = 0        (12)
⇒ X and Y are uncorrelated.
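Identity (11) can be verified exactly on a small discrete joint pmf (the pmf below is our own illustration):

```python
from fractions import Fraction

# Illustrative joint pmf for (X, Y); probabilities sum to 1.
joint = {(0, 0): Fraction(1, 4), (0, 1): Fraction(1, 8),
         (1, 0): Fraction(1, 8), (1, 1): Fraction(1, 2)}

E = lambda f: sum(p * f(x, y) for (x, y), p in joint.items())

def cond_EY(x0):
    """E[Y | X = x0], computed from the joint pmf."""
    px = sum(p for (x, _), p in joint.items() if x == x0)
    return sum(p * y for (x, y), p in joint.items() if x == x0) / px

cov_X_condEY = (E(lambda x, y: x * cond_EY(x))
                - E(lambda x, y: x) * E(lambda x, y: cond_EY(x)))
cov_XY = E(lambda x, y: x * y) - E(lambda x, y: x) * E(lambda x, y: y)

print(cov_X_condEY, cov_XY)  # both equal 7/64
assert cov_X_condEY == cov_XY
```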
6. (Linear Combination of Two RVs) The characteristic function of a continuous random
variable X is defined as
Φ_X(ω) = E[exp(jωX)] = ∫_{−∞}^{∞} exp(jωx) f_X(x) dx,
where j = √(−1). Thus Φ_X(−ω) is the Fourier transform of the PDF. In particular, the
characteristic function uniquely determines the PDF.
(a) Suppose that X is N(µ, σ²). Show that its characteristic function equals
Φ_X(ω) = exp(jµω − σ²ω²/2).
Hint: Complete the square in the exponent.
(b) Suppose X and Y are independent. Determine the characteristic function of aX + bY
in terms of a, b, and the characteristic functions of X and Y .
Hint: You do not need to compute the density of aX + bY .
(c) Suppose that X and Y are independent and Gaussian. Use the results in (a) and (b) to
show that aX + bY is also Gaussian.
Problem 6 Solution
(a)
Y = aX + b ⇒ Φ_Y(ω) = e^{jbω} Φ_X(aω)        (*)
Let Z = (X − µ)/σ and consider M_Z(s) = E[e^{sZ}]. Completing the square in the
exponent,
M_Z(s) = (1/√(2π)) ∫_{−∞}^{∞} e^{sz} e^{−z²/2} dz
       = e^{s²/2} ∫_{−∞}^{∞} (1/√(2π)) e^{−(z−s)²/2} dz = e^{s²/2},        (13)
since the remaining integrand is a shifted Gaussian density and integrates to 1. Setting
s = jω gives Φ_Z(ω) = M_Z(jω) = e^{−ω²/2}. Now apply (*) with X = σZ + µ:
Φ_X(ω) = e^{jµω} Φ_Z(σω) = exp(jµω − σ²ω²/2).
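The closed form can be checked against an empirical characteristic function (the parameters µ = 1, σ = 2, ω = 0.5 are our own illustrative choice):

```python
import cmath
import random

random.seed(3)
N = 200_000

# Illustrative parameters (not from the problem statement).
mu, sigma, omega = 1.0, 2.0, 0.5
xs = [random.gauss(mu, sigma) for _ in range(N)]

# Empirical E[exp(j*omega*X)] vs exp(j*mu*omega - sigma^2*omega^2/2).
phi_emp = sum(cmath.exp(1j * omega * x) for x in xs) / N
phi_theory = cmath.exp(1j * mu * omega - sigma**2 * omega**2 / 2)

print(f"empirical {phi_emp:.3f} vs theory {phi_theory:.3f}")
assert abs(phi_emp - phi_theory) < 0.02
```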
(b)
Φ_{aX+bY}(ω) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} e^{jω(ax+by)} f_{X,Y}(x, y) dx dy
             = ∫_{−∞}^{∞} e^{jωax} f_X(x) dx · ∫_{−∞}^{∞} e^{jωby} f_Y(y) dy    (by independence)
             = Φ_X(aω) Φ_Y(bω)        (14)
(c)
Φ_{aX+bY}(ω) = Φ_X(aω) Φ_Y(bω)
             = e^{jµ_X aω − σ²_X a²ω²/2} e^{jµ_Y bω − σ²_Y b²ω²/2}
             = e^{j(aµ_X + bµ_Y)ω − (a²σ²_X + b²σ²_Y)ω²/2}        (15)
Since the characteristic function uniquely determines the PDF, aX + bY is Gaussian
with mean aµ_X + bµ_Y and variance a²σ²_X + b²σ²_Y.
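A simulation sketch consistent with this conclusion (all parameters are our own illustration): the empirical mean and variance of aX + bY should match aµ_X + bµ_Y and a²σ²_X + b²σ²_Y. This checks the predicted moments only, not Gaussianity itself.

```python
import random
import statistics

random.seed(4)
N = 200_000

# Illustrative parameters: X ~ N(1, 2^2), Y ~ N(-1, 1), a = 2, b = 3.
# Predicted: mean = a*mu_X + b*mu_Y = -1, variance = a^2*4 + b^2*1 = 25.
a, b = 2.0, 3.0
mu_X, sigma_X, mu_Y, sigma_Y = 1.0, 2.0, -1.0, 1.0

zs = [a * random.gauss(mu_X, sigma_X) + b * random.gauss(mu_Y, sigma_Y)
      for _ in range(N)]

mean_emp = statistics.fmean(zs)
var_emp = statistics.pvariance(zs, mean_emp)
print(f"mean {mean_emp:.2f} (expect -1), var {var_emp:.2f} (expect 25)")
assert abs(mean_emp - (a * mu_X + b * mu_Y)) < 0.1
assert abs(var_emp - (a**2 * sigma_X**2 + b**2 * sigma_Y**2)) < 0.5
```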