# ECE 4110: Random Signals in Communications and Signal Processing

ECE department, Cornell University, Fall 2011
Instructor: Salman Avestimehr

Homework 4 Solutions
By: Sina Lashgari

1. Let $Y_i = X + Z_i$ for $i = 1, 2, \ldots, n$ be $n$ observations of a signal $X \sim \mathcal{N}(0, P)$. The additive noise components $Z_1, Z_2, \ldots, Z_n$ are zero-mean jointly Gaussian random variables that are independent of $X$. Furthermore, assume that the noise components $Z_1, \ldots, Z_n$ are uncorrelated, each with variance $N$. Find the best MSE estimate of $X$ given $Y_1, Y_2, \ldots, Y_n$ and its MSE. Hint: It might be convenient to assume a form of the estimator and use the orthogonality principle to claim optimality.

Problem 1 Solution

According to the hint, we come up with a guess and prove its optimality in terms of mean square error. First, note that since $X$ and $Y = (Y_1, \ldots, Y_n)$ are jointly Gaussian, we have $E[X|Y] = L[X|Y]$. Therefore, the MMSE estimator is a linear combination of the $Y_i$'s, and it suffices to search through the class of linear functions. In other words, it suffices to find a linear combination of the $Y_i$'s whose error is orthogonal to each of $Y_1$ through $Y_n$. By symmetry, we consider $\hat{X} = c\sum_{i=1}^n Y_i$, where $c$ is an unknown constant. Since $E[XY_j] = P$ and $\sum_{i=1}^n E[Y_iY_j] = nP + N$, requiring $E[(X - c\sum_{i=1}^n Y_i)Y_j] = 0$ for every $j \in \{1, 2, \ldots, n\}$ gives $c = \frac{P}{nP+N}$. Therefore,
$$\hat{X} = \frac{P}{nP+N}\sum_{i=1}^n Y_i.$$
The resulting error is
$$E[(X - \hat{X})^2] = P - \frac{nP^2}{nP+N} = \frac{PN}{nP+N}.$$
As $n \to \infty$ or $N \to 0$ this error goes to zero.
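
As a quick numerical sanity check (not part of the original solution), the following sketch simulates the model and compares the empirical MSE of $\hat{X}$ against the formula above; the particular values of $P$, $N$, and $n$ are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
P, N, n, trials = 2.0, 0.5, 4, 200_000              # illustrative values

X = rng.normal(0.0, np.sqrt(P), size=trials)        # signal X ~ N(0, P)
Z = rng.normal(0.0, np.sqrt(N), size=(trials, n))   # noise Z_i ~ N(0, N)
Y = X[:, None] + Z                                  # observations Y_i = X + Z_i

c = P / (n * P + N)                                 # optimal coefficient
X_hat = c * Y.sum(axis=1)                           # MMSE estimate

print(np.mean((X - X_hat) ** 2))                    # empirical MSE
print(P * N / (n * P + N))                          # theoretical MSE PN/(nP + N)
```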

2. Suppose $(X, Y_1, Y_2)$ is a zero-mean Gaussian random vector with covariance matrix
$$K = \begin{pmatrix} 4 & 2 & 1 \\ 2 & 4 & 2 \\ 1 & 2 & 1 \end{pmatrix} \qquad (1)$$

(a) Find the conditional pdf $f_{X|Y_1,Y_2}(x|y_1, y_2)$.

(b) Calculate $E[X|Y_1, Y_2]$.

Problem 2 Solution

(a) First note that
$$E[(Y_1 - 2Y_2)^2] = \mathrm{var}(Y_1) - 4\,\mathrm{cov}(Y_1, Y_2) + 4\,\mathrm{var}(Y_2) = 4 - 8 + 4 = 0, \qquad (2)$$
which shows that $Y_1 = 2Y_2$ with probability one. Hence $K_Y$ is singular and $Y$ does not have a density; conditioning on $(Y_1, Y_2)$ is the same as conditioning on $Y_1$ alone, so we can just consider $f_{X|Y_1}$.

Since we are dealing with jointly Gaussian random variables, $E[X|Y_1] = L[X|Y_1] = aY_1$. The orthogonality principle requires
$$E[(X - aY_1)Y_1] = 0 \;\Rightarrow\; \mathrm{cov}(X, Y_1) - a\,\mathrm{var}(Y_1) = 0 \;\Rightarrow\; a = 0.5. \qquad (3)$$
Since we are dealing with zero-mean Gaussian random variables, having correlation zero implies that $X - 0.5Y_1$ and $Y_1$ are independent. Thus, given $Y_1 = y_1$, $X \sim \mathcal{N}(0.5y_1, \sigma^2)$, where
$$\sigma^2 = \mathrm{var}(X - 0.5Y_1) = \mathrm{var}(X) - 2(0.5)\,\mathrm{cov}(X, Y_1) + (0.5)^2\,\mathrm{var}(Y_1) = 4 - 2 + 1 = 3. \qquad (4)$$
Therefore,
$$f_{X|Y_1,Y_2}(x|y_1, y_2) = \frac{1}{\sqrt{2\pi \times 3}}\exp\left\{-\frac{(x - 0.5y_1)^2}{6}\right\} \quad \text{if } y_1 = 2y_2. \qquad (5)$$

(b) $E[X|Y_1, Y_2] = E[X|Y_1] = 0.5Y_1$.
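
To illustrate the degenerate observation pair (again a sketch, not part of the original solution), one can sample the singular covariance $K$ with NumPy's SVD-based sampler and check that $Y_1 = 2Y_2$, that the error $X - 0.5Y_1$ is orthogonal to $Y_1$, and that its variance is $3$.

```python
import numpy as np

rng = np.random.default_rng(1)
K = np.array([[4.0, 2.0, 1.0],
              [2.0, 4.0, 2.0],
              [1.0, 2.0, 1.0]])

# K is positive semidefinite but singular, so sample with the SVD method
XYY = rng.multivariate_normal(np.zeros(3), K, size=200_000, method="svd")
X, Y1, Y2 = XYY.T

print(np.max(np.abs(Y1 - 2 * Y2)))   # ~0: Y1 = 2*Y2 with probability one
err = X - 0.5 * Y1                   # estimation error X - E[X|Y1]
print(np.mean(err * Y1))             # ~0: orthogonality
print(np.var(err))                   # ~3: conditional variance
```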

3. Suppose that $g(Y)$ is the linear least-squares error (LLSE) estimator of $X$ given $Y$:
$$g(Y) = L[X|Y] = K_{XY}K_Y^{-1}(Y - E[Y]) + E[X].$$
Determine the mean square error $E[(X - g(Y))^2]$ in terms of the means, covariances, and cross-covariances of $X$ and $Y$.

Problem 3 Solution

$$\begin{aligned}
E[(X - g(Y))^2] &= E[(X - E[X] - K_{XY}K_Y^{-1}(Y - E[Y]))^2] \\
&= E[(X - E[X])^2] + E[(K_{XY}K_Y^{-1}(Y - E[Y]))^2] - 2E[(X - E[X])\,K_{XY}K_Y^{-1}(Y - E[Y])] \\
&= \mathrm{var}(X) + K_{XY}K_Y^{-1}E[(Y - E[Y])(Y - E[Y])^T](K_Y^{-1})^T K_{XY}^T - 2K_{XY}K_Y^{-1}E[(Y - E[Y])(X - E[X])] \\
&= \mathrm{var}(X) + K_{XY}K_Y^{-1}K_YK_Y^{-1}K_{XY}^T - 2K_{XY}K_Y^{-1}K_{XY}^T \\
&= \mathrm{var}(X) - K_{XY}K_Y^{-1}K_{XY}^T \qquad (6)
\end{aligned}$$
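
The identity $E[(X - g(Y))^2] = \mathrm{var}(X) - K_{XY}K_Y^{-1}K_{XY}^T$ is easy to check by simulation. Below is a minimal sketch with an arbitrarily chosen positive-definite joint covariance and nonzero means; none of these numbers come from the problem.

```python
import numpy as np

rng = np.random.default_rng(2)

A = rng.normal(size=(3, 3))
K = A @ A.T                        # random PSD joint covariance of (X, Y1, Y2)
mean = np.array([1.0, -2.0, 0.5])  # arbitrary means

var_X, K_XY, K_Y = K[0, 0], K[0, 1:], K[1:, 1:]

Z = rng.multivariate_normal(mean, K, size=500_000)
X, Y = Z[:, 0], Z[:, 1:]

g = mean[0] + (Y - mean[1:]) @ np.linalg.solve(K_Y, K_XY)  # LLSE estimate g(Y)
print(np.mean((X - g) ** 2))                               # empirical MSE
print(var_X - K_XY @ np.linalg.solve(K_Y, K_XY))           # var(X) - K_XY K_Y^{-1} K_XY^T
```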

4. Let $X$ be a Gaussian random vector with mean $[1\ 4\ 6]^T$ and covariance matrix
$$\begin{pmatrix} 3 & 1 & 0 \\ 1 & 2 & 1 \\ 0 & 1 & 1 \end{pmatrix}$$

(a) Compute $E[X_1|X_2]$ and $E[(X_1 - E[X_1|X_2])^2]$.

(b) Compute $E[X_1|X_3]$ and $E[(X_1 - E[X_1|X_3])^2]$.

(c) Compute $E[X_1|X_2, X_3]$ and $E[(X_1 - E[X_1|X_2, X_3])^2]$.

(d) Note that $X_1$ and $X_3$ are uncorrelated, therefore independent. Yet $E[X_1|X_2, X_3]$ is a function of both $X_2$ and $X_3$. Why is that?

Problem 4 Solution

(a) The MMSE estimator in this case is an affine function:
$$E[X_1|X_2] = E[X_1] + \mathrm{cov}(X_1, X_2)(\mathrm{var}(X_2))^{-1}(X_2 - E[X_2]) = 1 + \frac{1}{2}(X_2 - 4) = \frac{X_2}{2} - 1, \qquad (7)$$
$$E[(X_1 - E[X_1|X_2])^2] = \mathrm{var}(X_1) - \frac{(\mathrm{cov}(X_1, X_2))^2}{\mathrm{var}(X_2)} = 3 - \frac{1}{2} = \frac{5}{2}.$$

(b) $X_1$ and $X_3$ are Gaussian and uncorrelated, and hence independent. Therefore
$$E[X_1|X_3] = E[X_1] = 1, \qquad E[(X_1 - E[X_1|X_3])^2] = \mathrm{var}(X_1) = 3.$$

(c) Let $Y^T = [X_2\ X_3]$, so that $K_Y = \begin{pmatrix} 2 & 1 \\ 1 & 1 \end{pmatrix}$ and $K_{X_1Y} = [1\ 0]$. Then
$$E[X_1|Y] = E[X_1] + K_{X_1Y}K_Y^{-1}(Y - E[Y]) \qquad (8)$$
$$= 1 + \begin{bmatrix} 1 & 0 \end{bmatrix}\begin{pmatrix} 1 & -1 \\ -1 & 2 \end{pmatrix}\left([X_2\ X_3]^T - [4\ 6]^T\right) = X_2 - X_3 + 3, \qquad (9)$$
and
$$E[(X_1 - E[X_1|Y])^2] = \mathrm{var}(X_1) - K_{X_1Y}K_Y^{-1}K_{X_1Y}^T = 3 - 1 = 2.$$

(d) Independence of $X_1$ and $X_3$ does not imply their independence when conditioned on $X_2$, because $X_2$ might contain some common information between $X_1$ and $X_3$. In general, $X_1, X_3$ independent does not imply $(X_1, X_3)|X_2$ independent.
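
The coefficients in part (c) can be read off numerically; this short sketch uses only the given mean and covariance.

```python
import numpy as np

mean = np.array([1.0, 4.0, 6.0])
K = np.array([[3.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 1.0]])

K_Y = K[1:, 1:]                    # covariance of Y = (X2, X3)
K_X1Y = K[0, 1:]                   # cross-covariance of X1 with Y
w = np.linalg.solve(K_Y, K_X1Y)    # weights K_Y^{-1} K_{X1,Y}^T

print(w)                           # [ 1. -1.]  ->  E[X1|Y] = X2 - X3 + const
print(mean[0] - w @ mean[1:])      # const = 1 - (4 - 6) = 3
print(K[0, 0] - K_X1Y @ w)         # MSE = 3 - 1 = 2
```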

5. Let $X_n$ be a sequence of i.i.d. equiprobable Bernoulli random variables and let $Y_n = 2^nX_1X_2\cdots X_n$.

(a) Does this sequence converge almost surely, and if so, to what limit?

(b) Does this sequence converge in mean square, and if so, to what limit?

Problem 5 Solution

(a)
$$Y_n = 2^nX_1X_2\cdots X_n = \begin{cases} 2^n & \text{if } X_1 = X_2 = \cdots = X_n = 1 \\ 0 & \text{otherwise} \end{cases} \qquad (10)$$
With probability one, some $X_k$ equals zero, and then $Y_n = 0$ for every $n \geq k$; the exceptional event that all the $X_i$'s equal one has probability $\lim_{n\to\infty} 2^{-n} = 0$. Therefore
$$Y_n \xrightarrow{a.s.} 0. \qquad (11)$$

(b)
$$E[Y_n] = 2^nP[X_1 = \cdots = X_n = 1] + 0 \times (1 - P[X_1 = \cdots = X_n = 1]) = 2^n \cdot \frac{1}{2^n} = 1.$$
Furthermore,
$$E[Y_n^2] = (2^n)^2 \times \frac{1}{2^n} = 2^n \to \infty. \qquad (12)$$
Thus $Y_n$ does not converge to zero in the m.s. sense; since any m.s. limit would have to agree with the almost-sure limit $0$, the sequence does not converge in mean square at all.
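
A short simulation (illustrative only) shows both behaviors at once: almost every sample path is eventually zero, while the second moment $E[Y_n^2] = 2^n$ blows up.

```python
import numpy as np

rng = np.random.default_rng(3)
n_max, trials = 20, 200_000

X = rng.integers(0, 2, size=(trials, n_max))                # i.i.d. equiprobable Bernoulli
Y = np.cumprod(X, axis=1) * 2.0 ** np.arange(1, n_max + 1)  # Y_n = 2^n X1...Xn

print(np.mean(Y[:, -1] == 0))      # ~1: almost every path has hit zero
print(Y.mean(axis=0)[:6])          # ~1 for every n: E[Y_n] = 1
print((Y ** 2).mean(axis=0)[:6])   # ~2, 4, 8, ...: E[Y_n^2] = 2^n
```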

6. Suppose $X_n \xrightarrow{m.s.} X$ and $Y_n \xrightarrow{m.s.} Y$. Show that

(a) $X_n + Y_n \xrightarrow{m.s.} X + Y$,

(b) $E[(X_n + Y_n)^2] \to E[(X + Y)^2]$,

(c) $E[X_nY_n] \to E[XY]$.

Hint: You may find the following inequality useful: $(a + b)^2 \leq 2a^2 + 2b^2$.

Problem 6 Solution

(a)
$$\begin{aligned}
E[(X_n + Y_n - (X + Y))^2] &= E[((X_n - X) + (Y_n - Y))^2] \\
&\leq E[2(X_n - X)^2 + 2(Y_n - Y)^2] \\
&= 2E[(X_n - X)^2] + 2E[(Y_n - Y)^2] \to 0 \quad \text{as } n \to \infty, \qquad (13)
\end{aligned}$$
where we have used the hint and the fact that expectation is monotonic.

(b)
$$\begin{aligned}
E[(X_n + Y_n)^2] &= E[((X + Y) + (X_n + Y_n - (X + Y)))^2] \\
&= E[(X + Y)^2] + E[(X_n + Y_n - (X + Y))^2] + 2E[(X + Y)(X_n + Y_n - (X + Y))]. \qquad (14)
\end{aligned}$$
From part (a) we know that the second term goes to zero as $n \to \infty$. We use the Cauchy-Schwarz inequality to show that the last term also converges to zero:
$$E[|(X + Y)(X_n + Y_n - (X + Y))|] \leq E[(X + Y)^2]^{0.5}\,E[(X_n + Y_n - (X + Y))^2]^{0.5} \to 0. \qquad (15)$$
Therefore,
$$E[(X_n + Y_n)^2] \to E[(X + Y)^2]. \qquad (16)$$

(c)
$$E[X_nY_n] = E[(X + (X_n - X))(Y + (Y_n - Y))] = E[XY] + E[X(Y_n - Y)] + E[Y(X_n - X)] + E[(X_n - X)(Y_n - Y)]. \qquad (17)$$
We already know that $Y_n \to Y$ in the m.s. sense, so using the Cauchy-Schwarz inequality we have
$$E[|X(Y_n - Y)|] \leq E[X^2]^{0.5}\,E[(Y_n - Y)^2]^{0.5} \to 0. \qquad (18)$$
Similarly, the third and fourth terms also go to zero, and therefore
$$E[X_nY_n] \to E[XY]. \qquad (19)$$
Alternatively, you can use the fact that $X_nY_n = \frac{(X_n + Y_n)^2 - X_n^2 - Y_n^2}{2}$. From part (b), we know that $E[(X_n + Y_n)^2] \to E[(X + Y)^2]$, and we can similarly show that $E[X_n^2] \to E[X^2]$ and $E[Y_n^2] \to E[Y^2]$. Using these facts, we have
$$E[X_nY_n] = E\left[\frac{(X_n + Y_n)^2 - X_n^2 - Y_n^2}{2}\right] \to E\left[\frac{(X + Y)^2 - X^2 - Y^2}{2}\right] = E[XY]. \qquad (20)$$
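
A concrete illustration (my own example, not from the solutions): take $X_n = X + U_n/n$ and $Y_n = Y + V_n/n$ with i.i.d. standard normal perturbations $U_n, V_n$, so that both sequences converge in m.s.; the sums and cross-moments then behave exactly as parts (a) and (c) predict.

```python
import numpy as np

rng = np.random.default_rng(4)
trials = 500_000
X = rng.normal(size=trials)
Y = 0.5 * X + rng.normal(size=trials)        # some Y correlated with X

for n in (1, 10, 100):
    Xn = X + rng.normal(size=trials) / n     # E[(Xn - X)^2] = 1/n^2 -> 0
    Yn = Y + rng.normal(size=trials) / n
    print(n,
          np.mean((Xn + Yn - (X + Y)) ** 2),  # part (a): -> 0
          np.mean(Xn * Yn))                   # part (c): -> E[XY] = 0.5
print(np.mean(X * Y))                         # E[XY] for comparison
```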

7. Let $X_1, X_2, \ldots$ be a sequence of random variables with mean $\mu$ and covariance $\mathrm{Cov}(X_i, X_j) = \sigma^2\rho^{|i-j|}$, where $|\rho| < 1$. Let $S_n = \frac{1}{n}\sum_{i=1}^n X_i$. Show that $S_n \xrightarrow{m.s.} \mu$.

Problem 7 Solution

$$\begin{aligned}
E[(S_n - \mu)^2] &= \frac{1}{n^2}E\left[\sum_{i=1}^n\sum_{j=1}^n(X_i - \mu)(X_j - \mu)\right] \\
&= \frac{1}{n^2}\sum_{i=1}^n\sum_{j=1}^n E[(X_i - \mu)(X_j - \mu)] \\
&= \frac{1}{n^2}\sum_{i=1}^n\sum_{j=1}^n \mathrm{Cov}(X_i, X_j) \\
&= \frac{1}{n^2}\left[n\sigma^2 + 2(n-1)\sigma^2\rho + 2(n-2)\sigma^2\rho^2 + \cdots + 2\sigma^2\rho^{n-1}\right] \\
&\leq \frac{1}{n^2}\left[2n\sigma^2 + 2n\sigma^2|\rho| + 2n\sigma^2|\rho|^2 + \cdots + 2n\sigma^2|\rho|^{n-1}\right] \\
&= \frac{2\sigma^2}{n}\left[1 + |\rho| + |\rho|^2 + \cdots + |\rho|^{n-1}\right] \\
&= \frac{2\sigma^2}{n}\cdot\frac{1 - |\rho|^n}{1 - |\rho|} \to 0, \qquad (21)
\end{aligned}$$
where the inequality replaces each $\rho^k$ by $|\rho|^k$ and each coefficient by $2n$, and the last line goes to zero because $|\rho| < 1$ keeps the geometric sum bounded by $\frac{1}{1-|\rho|}$ while the $\frac{1}{n}$ factor vanishes.
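
As a final sketch (illustrative, with arbitrarily chosen $\mu$, $\sigma$, $\rho$), a stationary AR(1) sequence has exactly this covariance structure, so the empirical $E[(S_n - \mu)^2]$ can be compared against the $\frac{2\sigma^2}{n(1-|\rho|)}$ bound from (21).

```python
import numpy as np

rng = np.random.default_rng(5)
mu, sigma, rho = 2.0, 1.5, 0.7          # arbitrary illustrative parameters
n, trials = 200, 20_000

# Stationary AR(1) gives Cov(X_i, X_j) = sigma^2 * rho^{|i-j|}
X = np.empty((trials, n))
X[:, 0] = mu + sigma * rng.normal(size=trials)
for t in range(1, n):
    X[:, t] = (mu + rho * (X[:, t - 1] - mu)
               + sigma * np.sqrt(1 - rho**2) * rng.normal(size=trials))

ns = np.arange(1, n + 1)
S = np.cumsum(X, axis=1) / ns            # sample means S_1, ..., S_n
mse = np.mean((S - mu) ** 2, axis=0)     # empirical E[(S_n - mu)^2]

idx = np.array([0, 9, 49, 199])
print(mse[idx])                                  # decreasing toward 0
print(2 * sigma**2 / (ns[idx] * (1 - rho)))      # the bound from (21)
```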