
UCSD ECE 153 Handout #46

Prof. Young-Han Kim Thursday, June 5, 2014

Solutions to Homework Set #8


(Prepared by TA Fatemeh Arbabjolfaei)

1. Discrete-time Wiener process. Let Zn , n ≥ 1 be a discrete-time white Gaussian noise (WGN)
process, i.e., Z1 , Z2 , . . . are i.i.d. ∼ N (0, 1). Define the process Xn , n ≥ 0 by X0 = 0 and
Xn = Xn−1 + Zn for n ≥ 1.
(a) Is Xn an independent increment process? Justify your answer.
(b) Is Xn a Markov process? Justify your answer.
(c) Is Xn a Gaussian process? Justify your answer.
(d) Find the mean and autocorrelation functions of Xn .
(e) Specify the first and second order pdfs of Xn .
(f) Specify the joint pdf of X1 , X2 , and X3 .
(g) Find E(X20 |X1 , X2 , . . . , X10 ).
Solution:
(a) Yes. The increments Xn1 , Xn2 − Xn1 , . . . , Xnk − Xnk−1 are sums of disjoint sets of the Zi
random variables, and the Zi are i.i.d.
(b) Yes. Since the process has independent increments, it is Markov.
(c) Yes. Any set of samples of Xn , n ≥ 1 is obtained by a linear transformation of i.i.d.
N (0, 1) random variables, and therefore all nth-order distributions of Xn are jointly
Gaussian (it is not sufficient to show that the random variable Xn is Gaussian for each
n).
(d) We have

E[Xn ] = E[ Σ_{i=1}^{n} Zi ] = Σ_{i=1}^{n} E[Zi ] = 0,

RX (n1 , n2 ) = E[Xn1 Xn2 ]
             = E[ (Σ_{i=1}^{n1} Zi)(Σ_{j=1}^{n2} Zj) ]
             = Σ_{i=1}^{n1} Σ_{j=1}^{n2} E[Zi Zj ]
             = Σ_{i=1}^{min(n1 ,n2 )} E(Zi^2)        (i.i.d.)
             = min(n1 , n2 ).

(e) As shown above, Xn is Gaussian with mean zero and variance

Var(Xn ) = E[Xn^2] − (E[Xn ])^2 = RX (n, n) − 0 = n.

Thus, Xn ∼ N (0, n).

Cov(Xn1 , Xn2 ) = E(Xn1 Xn2 ) − E(Xn1 )E(Xn2 ) = min(n1 , n2 ).

Therefore, Xn1 and Xn2 are jointly Gaussian random variables with mean µ = [0 0]^T
and covariance matrix

Σ = [ n1              min(n1 , n2 ) ]
    [ min(n1 , n2 )   n2            ].
(f) Xn , n ≥ 1 is a zero-mean Gaussian random process. Thus

[X1]       ( [E[X1]]   [RX(1,1)  RX(1,2)  RX(1,3)] )
[X2]  ∼  N ( [E[X2]] , [RX(2,1)  RX(2,2)  RX(2,3)] )
[X3]       ( [E[X3]]   [RX(3,1)  RX(3,2)  RX(3,3)] )

Substituting, we get

[X1]       ( [0]   [1 1 1] )
[X2]  ∼  N ( [0] , [1 2 2] ).
[X3]       ( [0]   [1 2 3] )

(g) Since Xn is an independent increment process,

E(X20 |X1 , X2 , . . . , X10 ) = E(X20 − X10 + X10 |X1 , X2 , . . . , X10 )


= E(X20 − X10 |X1 , X2 , . . . , X10 ) + E(X10 |X1 , X2 , . . . , X10 )
= E(X20 − X10 ) + X10
= 0 + X10
= X10 .
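As an illustrative numerical check of parts (d) and (g) (not part of the original solution; the seed and sample size are arbitrary choices), one can simulate the discrete-time Wiener process and compare empirical moments with min(n1, n2) and with the independent-increment property:

```python
import numpy as np

rng = np.random.default_rng(0)
Z = rng.standard_normal((100_000, 30))  # i.i.d. N(0,1) increments Z_1..Z_30
X = np.cumsum(Z, axis=1)                # column n-1 holds X_n = Z_1 + ... + Z_n

# Empirical autocorrelation E[X_5 X_12]; the theory gives min(5, 12) = 5.
r_emp = np.mean(X[:, 4] * X[:, 11])
print(r_emp)  # close to 5

# Part (g): X_20 - X_10 is independent of X_10, so E[(X_20 - X_10) X_10] = 0.
print(np.mean((X[:, 19] - X[:, 9]) * X[:, 9]))  # close to 0
```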

2. Arrow of time. Let X0 be a Gaussian random variable with zero mean and unit variance,
and Xn = αXn−1 + Zn for n ≥ 1, where α is a fixed constant with |α| < 1 and Z1 , Z2 , . . . are
i.i.d. ∼ N (0, 1 − α2 ), independent of X0 .

(a) Is the process {Xn } Gaussian?


(b) Is {Xn } Markov?
(c) Find RX (n, m).
(d) Find the (nonlinear) MMSE estimate of X100 given (X1 , X2 , . . . , X99 ).
(e) Find the MMSE estimate of X100 given (X101 , X102 , . . . , X199 ).
(f) Find the MMSE estimate of X100 given (X1 , . . . , X99 , X101 , . . . , X199 ).

Solution:

(a) Yes, the process is Gaussian, since it is the linear transform of white Gaussian process
{Zn }.
(b) Yes, the process is Markov since Xn |{X1 = x1 , . . . , Xn−1 = xn−1 } ∼ N (αxn−1 , 1 − α2 ),
which depends only on xn−1 .
(c) First note that RX (n, n) = α2 RX (n − 1, n − 1) + (1 − α2 ) = 1 for all n. Since we can
express
Xn = αk Xn−k + αk−1 Zn−k+1 + · · · + Zn ,
and Xn−k is independent of (Zn−k+1 , . . . , Zn ), we have RX (n, n−k) = E(Xn Xn−k ) = αk .
Thus, RX (n, m) = α|n−m| .
(d) Because the process is Gaussian, the MMSE estimator is linear. From Markovity,

X̂100 = E(X100 |X1 , . . . , X99 ) = E(X100 |X99 ) = (RX (100, 99)/RX (99, 99)) X99 = αX99 .
(e) Again by Markovity and symmetry, (X1 , . . . , Xn ) has the same distribution as
(Xn , . . . , X1 ); hence X̂100 = E(X100 |X101 , . . . , X199 ) = E(X100 |X101 ) = αX101 .
(f) First note that X100 is conditionally independent of (X1 , . . . , X98 , X102 , . . . , X199 ) given
(X99 , X101 ). To see this, consider

f (x100 |x1 , . . . , x99 , x101 , . . . , x199 )

  = f (x1 , . . . , x199 ) / f (x1 , . . . , x99 , x101 , . . . , x199 )

  = [f (x1 )f (x2 |x1 ) · · · f (x99 |x98 ) f (x100 |x99 ) f (x101 |x100 ) f (x102 |x101 ) · · · f (x199 |x198 )]
    / [f (x1 )f (x2 |x1 ) · · · f (x99 |x98 ) f (x101 |x99 ) f (x102 |x101 ) · · · f (x199 |x198 )]

  = f (x100 |x99 ) f (x101 |x100 ) / f (x101 |x99 )

  = f (x100 , x101 |x99 ) / f (x101 |x99 )

  = f (x100 |x99 , x101 ).

Thus,

X̂100 = E(X100 |X99 , X101 )

      = [RX (100, 99)  RX (100, 101)] [RX (99, 99)    RX (99, 101) ]^{-1} [X99 ]
                                      [RX (101, 99)   RX (101, 101)]      [X101]

      = [α  α] [1    α^2]^{-1} [X99 ]
               [α^2  1  ]      [X101]

      = α/(1 + α^2) (X99 + X101 ).
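The weight vector in part (f) can be cross-checked numerically (an illustrative sketch; the value of α is an arbitrary choice) by solving the normal equations with the covariances RX(n, m) = α^{|n−m|}:

```python
import numpy as np

alpha = 0.7                                    # any |alpha| < 1 works
Sigma = np.array([[1.0, alpha**2],             # Cov of (X99, X101): R_X(99,101) = alpha^2
                  [alpha**2, 1.0]])
cross = np.array([alpha, alpha])               # Cov(X100, X99) = Cov(X100, X101) = alpha
w = np.linalg.solve(Sigma, cross)              # MMSE weights on (X99, X101)
print(w)  # both entries equal alpha / (1 + alpha^2)
```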

3. AM modulation. Consider the AM modulated random process

X(t) = A(t) cos(2πt + Θ) ,

where the amplitude A(t) is a zero-mean WSS process with autocorrelation function
RA (τ ) = e^{−|τ |/2}, the phase Θ is a Unif[0, 2π) random variable, and A(t) and Θ are
independent. Is X(t) a WSS process? Justify your answer.
Solution: Write ω = 2π. X(t) is wide-sense stationary if E[X(t)] is independent of t and if
RX (t1 , t2 ) depends only on t1 − t2 . Consider

E[X(t)] = E[A(t) cos(ωt + Θ)]


= E[A(t)]E[cos(ωt + Θ)] by independence
= 0,

and

RX (t1 , t2 ) = E[X(t1 )X(t2 )]
 = E[A(t1 ) cos(ωt1 + Θ) A(t2 ) cos(ωt2 + Θ)]
 = E[A(t1 )A(t2 )] E[cos(ωt1 + Θ) cos(ωt2 + Θ)]          by independence
 = RA (t1 − t2 ) E[cos(ωt1 + Θ) cos(ωt2 + Θ)]
 = RA (t1 − t2 ) (1/2) E[cos(ω(t1 + t2 ) + 2Θ) + cos(ω(t1 − t2 ))]
 = RA (t1 − t2 ) (1/2) [cos(ω(t1 + t2 )) E cos(2Θ) − sin(ω(t1 + t2 )) E sin(2Θ) + cos(ω(t1 − t2 ))]
 = (1/2) RA (t1 − t2 ) cos(ω(t1 − t2 )),
which is a function of t1 − t2 only. Hence X(t) is wide-sense stationary.
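The key phase-averaging step, E[cos(ωt1 + Θ) cos(ωt2 + Θ)] = (1/2) cos(ω(t1 − t2)), can be checked numerically (an illustrative sketch; the times t1, t2 are arbitrary choices):

```python
import numpy as np

omega, t1, t2 = 2 * np.pi, 0.3, 1.1
theta = np.linspace(0, 2 * np.pi, 100_000, endpoint=False)  # Unif[0, 2*pi) grid
avg = np.mean(np.cos(omega * t1 + theta) * np.cos(omega * t2 + theta))
print(avg, 0.5 * np.cos(omega * (t1 - t2)))  # the two values agree
```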

4. LTI system with WSS process input. Let Y (t) = h(t) ∗ X(t) and Z(t) = X(t) − Y (t) as shown
in Figure 1.

(a) Find SZ (f ).
(b) Find E(Z 2 (t)).

Your answers should be in terms of SX (f ) and the transfer function H(f ) = F[h(t)].

[Block diagram: X(t) enters an LTI filter h(t), producing Y(t); the output is Z(t) = X(t) − Y(t).]

Figure 1: LTI system.

Solution:

(a) To find SZ (f ), we first find the autocorrelation function

RZ (τ ) = E(Z(t)Z(t + τ ))
= E((X(t) − Y (t))(X(t + τ ) − Y (t + τ )))
= RX (τ ) + RY (τ ) − RY X (−τ ) − RXY (−τ )
= RX (τ ) + RY (τ ) − RXY (τ ) − RXY (−τ ).

Now, taking the Fourier Transform, we get

SZ (f ) = SX (f ) + SY (f ) − SXY (f ) − SXY (−f )
        = SX (f ) + |H(f )|^2 SX (f ) − H(−f )SX (f ) − H(f )SX (f )
        = SX (f ) (1 + |H(f )|^2 − 2 Re[H(f )])
        = SX (f ) |1 − H(f )|^2 .

(b) To find the average power of Z(t), we find the area under SZ (f ):

E(Z^2 (t)) = ∫_{−∞}^{∞} |1 − H(f )|^2 SX (f ) df.
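A discrete-time analogue can be checked by simulation (illustrative; the FIR filter, seed, and sample size are arbitrary choices). For a white input with flat SX, Parseval gives E[Z²] equal to the energy of the impulse response δ(n) − h(n):

```python
import numpy as np

rng = np.random.default_rng(1)
h = np.array([0.5, 0.25])              # hypothetical FIR impulse response
x = rng.standard_normal(500_000)       # discrete white noise, S_X(f) = 1
y = np.convolve(x, h)[: len(x)]        # Y = h * X
z = x - y                              # Z = X - Y
g = np.array([1 - h[0], -h[1]])        # impulse response of delta(n) - h(n)
# E[Z^2] = integral of |1 - H(f)|^2 S_X(f) = sum of g(n)^2 for white input.
print(np.mean(z**2), np.sum(g**2))     # the two values agree
```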

5. Echo filtering. A signal X(t) and its echo arrive at the receiver as Y (t) = X(t) + X(t − ∆) +
Z(t). Here the signal X(t) is a zero-mean WSS process with power spectral density SX (f ) and
the noise Z(t) is a zero-mean WSS with power spectral density SZ (f ) = N0 /2, uncorrelated
with X(t).

(a) Find SY (f ) in terms of SX (f ), ∆, and N0 .


(b) Find the best linear filter to estimate X(t) from {Y (s)}−∞<s<∞ .

Solution:

(a) We can write Y (t) = g(t) ∗ X(t) + Z(t) where g(t) = δ(t) + δ(t − ∆).
Thus, SY (f ) = |G(f )|^2 SX (f ) + SZ (f ) = |1 + e^{−j2π∆f }|^2 SX (f ) + N0 /2.
(b) Since SY X (f ) = (1 + e−j2π∆f )SX (f ),

X̂(t) = h(t) ∗ Y (t),

where the linear filter h(t) has the transfer function

H(f ) = SXY (f )/SY (f ) = SY X (−f )/SY (f ) = (1 + e^{j2π∆f })SX (f ) / ( |1 + e^{−j2π∆f }|^2 SX (f ) + N0 /2 ).

6. Discrete-time LTI system with white noise input. Let {Xn : −∞ < n < ∞} be a discrete-time
white noise process, i.e., E(Xn ) = 0, −∞ < n < ∞, and
RX (n) = { 1,  n = 0,
         { 0,  otherwise.

The process is filtered using a linear time invariant system with impulse response

h(n) = { α,  n = 0,
       { β,  n = 1,
       { 0,  otherwise.

Find α and β such that the output process Yn has

RY (n) = { 2,  n = 0,
         { 1,  |n| = 1,
         { 0,  otherwise.

Solution: We are given that RX (n) is a discrete-time unit impulse. Therefore

RY (n) = h(n) ∗ RX (n) ∗ h(−n) = h(n) ∗ h(−n) .

The impulse response h(n) is the sequence (α, β, 0, 0, . . .), so the correlation h(n) ∗ h(−n) has
only finitely many nonzero terms:

RY (0) = 2 = α^2 + β^2
RY (1) = 1 = αβ
RY (−1) = RY (1) = 1.

This pair of equations has two solutions: α = β = +1 or α = β = −1.
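The solution α = β = 1 can be verified directly (an illustrative check) since h(n) ∗ h(−n) is the autocorrelation of the sequence (α, β):

```python
import numpy as np

h = np.array([1.0, 1.0])              # candidate (alpha, beta) = (1, 1)
ry = np.correlate(h, h, mode="full")  # computes h(n) * h(-n)
print(ry)                             # [1. 2. 1.] -> R_Y(-1), R_Y(0), R_Y(+1)
```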

7. Finding time of flight. Finding the distance to an object is often done by sending a signal
and measuring the time of flight, the time it takes for the signal to return (assuming speed of
signal, e.g., light, is known). Let X(t) be the signal sent and Y (t) = X(t − δ) + Z(t) be the

[Plot: RY X (t) is zero outside 2 ≤ t ≤ 8 and peaks at t = 5.]

Figure 2: Crosscorrelation function.

signal received, where δ is the unknown time of flight. Assume that X(t) and Z(t) (the sensor
noise) are uncorrelated zero mean WSS processes. The estimated crosscorrelation function of
Y (t) and X(t), RY X (t) is shown in Figure 2. Find the time of flight δ.
Solution: Consider

RY X (τ ) = E(Y (t + τ )X(t)) = E((X(t − δ + τ ) + Z(t + τ ))X(t)) = RX (τ − δ).

Now since the maximum of |RX (α)| is achieved at α = 0, and the given RY X peaks at τ = 5,
we get 5 − δ = 0. Thus δ = 5.
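This peak-picking estimator can be sketched in simulation (illustrative; the delay, noise level, seed, and sample size are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)
delay = 5                                      # "time of flight" to recover
x = rng.standard_normal(100_000)
noise = 0.5 * rng.standard_normal(100_000)
y = np.concatenate([np.zeros(delay), x[:-delay]]) + noise  # Y(t) = X(t - delay) + Z(t)
# Estimate R_YX(tau) for tau = 0..9 and locate its peak.
r = np.array([np.mean(y[tau:] * x[: len(x) - tau]) for tau in range(10)])
print(int(np.argmax(r)))  # 5
```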

Additional Exercises
Do not turn in solutions to these problems.

1. Moving average process. Let Z0 , Z1 , Z2 , . . . be i.i.d. ∼ N (0, 1).

(a) Let Xn = (1/2)Zn−1 + Zn for n ≥ 1. Find the mean and autocorrelation function of Xn .
(b) Is {Xn } wide-sense stationary? Justify your answer.
(c) Is {Xn } Gaussian? Justify your answer.
(d) Is {Xn } strict-sense stationary? Justify your answer.
(e) Find E(X3 |X1 , X2 ).
(f) Find E(X3 |X2 ).
(g) Is {Xn } Markov? Justify your answer.
(h) Is {Xn } independent increment? Justify your answer.
(i) Let Yn = Zn−1 + Zn for n ≥ 1. Find the mean and autocorrelation function of Yn .
(j) Is {Yn } wide-sense stationary? Justify your answer.
(k) Is {Yn } Gaussian? Justify your answer.
(l) Is {Yn } strict-sense stationary? Justify your answer.
(m) Find E(Y3 |Y1 , Y2 ).
(n) Find E(Y3 |Y2 ).
(o) Is {Yn } Markov? Justify your answer.
(p) Is {Yn } independent increment? Justify your answer.

Solution:

(a)

E(Xn ) = (1/2)E(Zn−1 ) + E(Zn ) = 0.

RX (m, n) = E(Xm Xn )
          = E[ ((1/2)Zn−1 + Zn )((1/2)Zm−1 + Zm ) ]
          = { (1/2)E[Z^2_{n−1}],              n − m = 1
            { (1/4)E[Z^2_{n−1}] + E[Z^2_n],   n = m
            { (1/2)E[Z^2_n],                  m − n = 1
            { 0,                              otherwise
          = { 5/4,  n = m
            { 1/2,  |n − m| = 1
            { 0,    otherwise.

(b) Since the mean and autocorrelation functions are time-invariant, the process is WSS.
(c) Since (X1 , . . . , Xn ) is a linear transform of a GRV (Z0 , Z1 , . . . , Zn ), the process is Gaus-
sian.
(d) Since the process is WSS and Gaussian, it is SSS.
(e) Since the process is Gaussian, the conditional expectation (MMSE estimate) is linear.
Hence,

E(X3 |X1 , X2 ) = [0  1/2] [5/4  1/2]^{-1} [X1]  =  (1/21)(10X2 − 4X1 ).
                           [1/2  5/4]      [X2]
(f) Similarly, E(X3 |X2 ) = (2/5)X2 .
(g) Since E(X3 |X1 , X2 ) ≠ E(X3 |X2 ), the process is not Markov.
(h) Since the process is not Markov, it is not independent increment.
(i) The mean function µn = E(Yn ) = 0. The autocorrelation function

RY (n, m) = { 2,  n = m,
            { 1,  |n − m| = 1,
            { 0,  otherwise.

(j) Since the mean and autocorrelation functions are time-invariant, the process is WSS.
(k) Since (Y1 , . . . , Yn ) is a linear transform of a GRV (Z0 , Z1 , . . . , Zn ), the process is Gaus-
sian.
(l) Since the process is WSS and Gaussian, it is SSS.
(m) Since the process is Gaussian, the conditional expectation (MMSE estimate) is linear.
Hence,

E(Y3 |Y1 , Y2 ) = [0  1] [2  1]^{-1} [Y1]  =  (1/3)(2Y2 − Y1 ).
                        [1  2]      [Y2]
(n) Similarly, E(Y3 |Y2 ) = (1/2)Y2 .
(o) Since E(Y3 |Y1 , Y2 ) ≠ E(Y3 |Y2 ), the process is not Markov.
(p) Since the process is not Markov, it is not independent increment.
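The MMSE weights in parts (e) and (m) can be cross-checked by solving the normal equations Σw = c with the covariances computed above (an illustrative check, not part of the original solution):

```python
import numpy as np

# Part (e): Cov of (X1, X2) and cross-covariances Cov(X3, X1) = 0, Cov(X3, X2) = 1/2.
w_x = np.linalg.solve(np.array([[5/4, 1/2],
                                [1/2, 5/4]]),
                      np.array([0.0, 1/2]))
# Part (m): Cov of (Y1, Y2) and cross-covariances Cov(Y3, Y1) = 0, Cov(Y3, Y2) = 1.
w_y = np.linalg.solve(np.array([[2.0, 1.0],
                                [1.0, 2.0]]),
                      np.array([0.0, 1.0]))
print(w_x)  # close to [-4/21, 10/21]
print(w_y)  # close to [-1/3, 2/3]
```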

2. Gauss-Markov process. Let X0 = 0 and Xn = (1/2)Xn−1 + Zn for n ≥ 1, where Z1 , Z2 , . . .
are i.i.d. ∼ N (0, 1). Find the mean and autocorrelation function of Xn .
Solution: Using the method of lecture notes 8, we can easily verify that E(Xn ) = 0 for every
n and that
RX (n, m) = E(Xn Xm ) = 2^{−|n−m|} (4/3) [1 − (1/4)^{min{n,m}} ].
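This closed form can be checked by Monte Carlo simulation (illustrative; the seed, trial count, and the indices n, m are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(3)
trials, N = 200_000, 8
X = np.zeros((trials, N + 1))                   # X[:, 0] = X_0 = 0
for n in range(1, N + 1):
    X[:, n] = 0.5 * X[:, n - 1] + rng.standard_normal(trials)

n, m = 3, 6
r_emp = np.mean(X[:, n] * X[:, m])
r_thy = 2.0 ** (-abs(n - m)) * (4 / 3) * (1 - 0.25 ** min(n, m))
print(r_emp, r_thy)  # the two values agree
```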

3. Random binary waveform. In a digital communication channel the symbol “1” is represented
by the fixed duration rectangular pulse

g(t) = { 1,  0 ≤ t < 1
       { 0,  otherwise,

and the symbol “0” is represented by −g(t). The data transmitted over the channel is
represented by the random process

X(t) = Σ_{k=0}^{∞} Ak g(t − k),  for t ≥ 0,
where A0 , A1 , . . . are i.i.d. random variables with

Ai = { +1  w.p. 1/2
     { −1  w.p. 1/2.

(a) Find its first and second order pmfs.


(b) Find the mean and the autocorrelation function of the process X(t).

Solution:
(a) The first order pmf is

pX(t) (x) = P (X(t) = x)
          = P ( Σ_{k=0}^{∞} Ak g(t − k) = x )
          = P (A⌊t⌋ = x)
          = P (A0 = x)          (i.i.d.)
          = { 1/2,  x = ±1
            { 0,    otherwise.
Now note that X(t1 ) and X(t2 ) are dependent only if t1 and t2 fall within the same time
interval. Otherwise, they are independent. Thus, the second order pmf is
pX(t1 )X(t2 ) (x, y) = P (X(t1 ) = x, X(t2 ) = y)
 = P ( Σ_{k=0}^{∞} Ak g(t1 − k) = x, Σ_{k=0}^{∞} Ak g(t2 − k) = y )
 = P (A⌊t1 ⌋ = x, A⌊t2 ⌋ = y)
 = { P(A0 = x, A0 = y),  ⌊t1 ⌋ = ⌊t2 ⌋
   { P(A0 = x, A1 = y),  otherwise
 = { 1/2,  ⌊t1 ⌋ = ⌊t2 ⌋ and (x, y) = (1, 1) or (−1, −1)
   { 1/4,  ⌊t1 ⌋ ≠ ⌊t2 ⌋ and (x, y) = (1, 1), (1, −1), (−1, 1), or (−1, −1)
   { 0,    otherwise.
(b) For t ≥ 0,

E[X(t)] = E[ Σ_{k=0}^{∞} Ak g(t − k) ] = Σ_{k=0}^{∞} g(t − k) E[Ak ] = 0.

For the autocorrelation RX (t1 , t2 ), we note once again that only if t1 and t2 fall within
the same interval, will X(t1 ) be dependent on X(t2 ); if they do not fall in the same
interval then they are independent from one another. Then,

RX (t1 , t2 ) = E[X(t1 )X(t2 )]
             = Σ_{k=0}^{∞} g(t1 − k) g(t2 − k) E[A^2_k]
             = { 1,  ⌊t1 ⌋ = ⌊t2 ⌋
               { 0,  otherwise.
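Since X(t) = A_{⌊t⌋}, this autocorrelation reduces to E[A_{⌊t1⌋} A_{⌊t2⌋}], which is easy to check by simulation (illustrative; the seed, sample size, and sample times are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.choice([-1, 1], size=(100_000, 4))   # i.i.d. +/-1 symbols A_0..A_3
# X(t) = A_{floor(t)}; take t1 = 1.2, t2 = 1.8 (same slot), t3 = 2.5 (different slot).
x1, x2, x3 = A[:, 1], A[:, 1], A[:, 2]
print(np.mean(x1 * x2), np.mean(x1 * x3))    # 1.0 and close to 0
```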

4. Absolute-value random walk. Let Xn be a random walk defined by

X0 = 0,   Xn = Σ_{i=1}^{n} Zi ,  n ≥ 1,

where {Zi } is an i.i.d. process with P(Z1 = −1) = P(Z1 = +1) = 1/2. Define the absolute-value
random process Yn = |Xn |.

(a) Find P{Yn = k}.


(b) Find P{max1≤i<20 Yi = 10 | Y20 = 0}.

Solution:

(a) This is a straightforward calculation and we can use results from lecture notes. If k ≥ 0
then
P{Yn = k} = P{Xn = +k or Xn = −k} .
If k > 0 then P{Yn = k} = 2P{Xn = k}, while P{Yn = 0} = P{Xn = 0}. Thus

P{Yn = k} = { (n choose (n+k)/2) (1/2)^{n−1},  k > 0, n − k is even, n − k ≥ 0
            { (n choose n/2) (1/2)^n,          k = 0, n is even, n ≥ 0
            { 0,                               otherwise.

(b) If Y20 = |X20 | = 0 then there are only two sample paths with max1≤i<20 |Xi | = 10,
namely Z1 = Z2 = · · · = Z10 = +1, Z11 = · · · = Z20 = −1, and Z1 = Z2 = · · · = Z10 =
−1, Z11 = · · · = Z20 = +1. Since the total number of sample paths with X20 = 0 is
(20 choose 10) = 184756 and all such paths are equally likely,

P{ max1≤i<20 Yi = 10 | Y20 = 0 } = 2/(20 choose 10) = 2/184756 = 1/92378.
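Both parts can be checked with exact arithmetic (an illustrative check; the small case n = 6, k = 2 in part (a) is an arbitrary choice):

```python
from itertools import product
from math import comb

# Part (a), brute force for n = 6, k = 2: count paths with |X_6| = 2.
n, k = 6, 2
count = sum(abs(sum(steps)) == k for steps in product((-1, 1), repeat=n))
p_enum = count / 2**n
p_formula = 2 * comb(n, (n + k) // 2) / 2**n   # case k > 0, n - k even
print(p_enum, p_formula)  # both equal 0.46875

# Part (b): 2 / (20 choose 10).
print(2 / comb(20, 10))   # = 1/92378
```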

5. Mixture of two WSS processes. Let X(t) and Y (t) be two zero-mean WSS processes with
autocorrelation functions RX (τ ) and RY (τ ), respectively. Define the process

Z(t) = { X(t),  with probability 1/2
       { Y (t),  with probability 1/2.

Find the mean and autocorrelation functions for Z(t). Is Z(t) a WSS process? Justify your
answer.
Solution: To show that Z(t) is WSS, we show that its mean and autocorrelation functions
are time invariant. Consider

µZ (t) = E[Z(t)]
= E(Z|Z = X)P{Z = X} + E(Z|Z = Y )P{Z = Y }
1
= (µX + µY )
2
= 0,

and similarly

RZ (t + τ, t) = E[Z(t + τ )Z(t)]
1
= (RX (τ ) + RY (τ )) .
2
Since µZ (t) is independent of time and RZ (t + τ, t) depends only on τ , Z(t) is WSS.

6. Stationary Gauss-Markov process. Consider the following variation on the Gauss-Markov
process discussed in Lecture Notes 8:

X0 ∼ N(0, a)
Xn = (1/2)Xn−1 + Zn ,  n ≥ 1,

where Z1 , Z2 , Z3 , . . . are i.i.d. N(0, 1) independent of X0 .

(a) Find a such that Xn is stationary. Find the mean and autocorrelation functions of Xn .
(b) (Optional.) Consider the sample mean Sn = (1/n) Σ_{i=1}^{n} Xi , n ≥ 1. Show that Sn
converges to the process mean in probability even though the sequence Xn is not i.i.d.
(A stationary process for which the sample mean converges to the process mean is called
mean ergodic.)

Solution:

(a) We are asked to find a such that E(Xn ) is independent of n and RX (n1 , n2 ) depends
only on n1 − n2 . For Xn to be stationary, E(Xn^2) must be independent of n. Thus

E(Xn^2) = (1/4)E(X^2_{n−1}) + E(Zn^2) + E(Xn−1 Zn ) = (1/4)E(Xn^2) + 1.

Therefore, a = E(X0^2) = E(Xn^2) = 4/3. Using the method of lecture notes 8, we can easily
verify that E(Xn ) = 0 for every n and that

RX (n1 , n2 ) = E(Xn1 Xn2 ) = (4/3) 2^{−|n1 −n2 |}.

(b) To prove convergence in probability, we first prove convergence in mean square and then
use the fact that mean square convergence implies convergence in probability.

E(Sn ) = E[ (1/n) Σ_{i=1}^{n} Xi ] = (1/n) Σ_{i=1}^{n} E(Xi ) = 0.

To show convergence in mean square we show that Var(Sn ) → 0 as n → ∞.

Var(Sn ) = Var( (1/n) Σ_{i=1}^{n} Xi ) = (1/n^2) E[ (Σ_{i=1}^{n} Xi )^2 ]     (since E(Xi ) = 0)
         = (1/n^2) Σ_{i=1}^{n} Σ_{j=1}^{n} RX (i, j) = (4/(3n^2)) [ n + 2 Σ_{i=1}^{n−1} (n − i) 2^{−i} ]
         ≤ (4/(3n)) [ 1 + 2 Σ_{i=1}^{n−1} 2^{−i} ] ≤ (4/(3n)) [ 1 + 2 Σ_{i=1}^{∞} 2^{−i} ] = 4/n.

Thus Sn converges to the process mean, even though the sequence is not i.i.d.
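The bound Var(Sn) ≤ 4/n can be checked by evaluating the double sum directly (an illustrative check; the values of n are arbitrary choices):

```python
import numpy as np

for n in (10, 100, 1000):
    i = np.arange(1, n + 1)
    R = (4 / 3) * 2.0 ** (-np.abs(i[:, None] - i[None, :]))  # R_X(i, j)
    var_sn = R.sum() / n**2                                   # Var(S_n)
    print(n, var_sn, 4 / n)  # Var(S_n) stays below the 4/n bound
```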

7. QAM random process. Consider the random process
X(t) = Z1 cos ωt + Z2 sin ωt , −∞ < t < ∞ ,
where Z1 and Z2 are i.i.d. discrete random variables such that pZi (+1) = pZi (−1) = 1/2.
(a) Is X(t) wide-sense stationary? Justify your answer.
(b) Is X(t) strict-sense stationary? Justify your answer.
Solution:
(a) We first check the mean.
E(X(t)) = E(Z1 ) cos ωt + E(Z2 ) sin ωt = 0 · cos(ωt) + 0 · sin(ωt) = 0 .
The mean is independent of t. Next we consider the autocorrelation function.
E(X(t + τ )X(t)) = E((Z1 cos(ω(t + τ )) + Z2 sin(ω(t + τ )))(Z1 cos(ωt) + Z2 sin(ωt)))
= E(Z12 ) cos(ω(t + τ )) cos(ωt) + E(Z22 ) sin(ω(t + τ )) sin(ωt)
= cos(ω(t + τ )) cos(ωt) + sin(ω(t + τ )) sin(ωt)
= cos(ω(t + τ ) − ωt) = cos ωτ .
The autocorrelation function is also time invariant. Therefore X(t) is WSS.
(b) Note that X(0) = Z1 cos 0 + Z2 sin 0 = Z1 , so X(0) has the same pmf as Z1 . On the
other hand,
X(π/(4ω)) = Z1 cos(π/4) + Z2 sin(π/4)
          = (1/√2)(Z1 + Z2 )
          ∼ { 1/4,  x = ±√2
            { 1/2,  x = 0
            { 0,    otherwise.

This shows that X(π/(4ω)) does not have the same pmf, or even the same range, as X(0).
Therefore X(t) is not first-order stationary and as a result is not SSS.
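The pmf of X(π/(4ω)) can be confirmed by enumerating the four equally likely sign pairs (an illustrative check with exact probabilities):

```python
from fractions import Fraction
from math import sqrt

pmf = {}
for z1 in (-1, 1):
    for z2 in (-1, 1):                    # four equally likely sign pairs (Z1, Z2)
        x = (z1 + z2) / sqrt(2)           # X(pi/(4*omega)) = (Z1 + Z2)/sqrt(2)
        pmf[x] = pmf.get(x, Fraction(0)) + Fraction(1, 4)
print(pmf)  # values +/-sqrt(2) w.p. 1/4 each and 0 w.p. 1/2
```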
8. Finding impulse response of LTI system. To find the impulse response h(t) of an LTI system
(e.g., a concert hall), i.e., to identify the system, white noise X(t), −∞ < t < ∞, is applied
to its input and the output Y (t) is measured. Given the input and output sample functions,
the crosscorrelation RY X (τ ) is estimated. Show how RY X (τ ) can be used to find h(t).
Solution: Since white noise has a flat psd, the crosspower spectral density of the input X(t)
and the output Y (t) is just the transfer function of the system scaled by the psd of the white
noise.
SY X (f ) = H(f )SX (f ) = (N0 /2) H(f )

RY X (τ ) = F^{−1}(SY X (f )) = (N0 /2) h(τ )

Thus to estimate the impulse response of a linear time invariant system, we apply white
noise to its input, estimate the crosscorrelation function of its input and output, and scale it
by 2/N0 .
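This identification procedure can be sketched in discrete time (illustrative; the impulse response, noise power N0/2 = 1, seed, and sample size are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(4)
h_true = np.array([1.0, 0.6, 0.3])        # hypothetical unknown impulse response
x = rng.standard_normal(300_000)           # white noise input, N0/2 = 1
y = np.convolve(x, h_true)[: len(x)]       # measured output
# R_YX(tau) = (N0/2) h(tau); with N0/2 = 1 the crosscorrelation estimate is h itself.
h_est = np.array([np.mean(y[tau:] * x[: len(x) - tau]) for tau in range(3)])
print(h_est)  # close to [1.0, 0.6, 0.3]
```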

