Two processes X(t) and Y(t) are jointly WSS if each process satisfies Equation 3.28 and, for all τ ∈ T,

E{X*(t)Y(t + τ)} = R_XY(τ)    (3.29)

For random sequences, the conditions for wide-sense stationarity are

E{X(n)} = μ_X    (3.30.a)

and

E{X*(n)X(n + k)} = R_XX(k)    (3.30.b)

It is easy to show that SSS implies WSS; however, the converse is not true in general.

3.5.3 Examples

EXAMPLE 3.7.
Two random processes X(t) and Y(t) are shown in Figures 3.9 and 3.10. Find the mean and autocorrelation functions of X(t) and Y(t) and discuss their stationarity properties.

Figure 3.9 Example of a stationary random process. (Assume equal probabilities of occurrence for the six outcomes in the sample space.)

Figure 3.10 Example of a nonstationary random process. (Member functions are assumed to have equal probabilities.)

SOLUTION: E{X(t)} = 0 for all t ∈ T = (−∞, ∞), and

R_XX(t₁, t₂) = (1/6)(25 + 9 + 1 + 1 + 9 + 25) = 70/6

Furthermore, a translation of the time axis does not result in any change in any member function; hence Equation 3.23 is satisfied and X(t) is stationary in the strict sense.

For the random process Y(t), E{Y(t)} = 0, and

R_YY(t₁, t₂) = (1/6){36 + 9 sin t₁ sin t₂ + 9 sin t₁ sin t₂ + 9 cos t₁ cos t₂ + 9 cos t₁ cos t₂ + 36}
             = (1/6){72 + 18 cos(t₂ − t₁)}
             = R_YY(t₂ − t₁)

Since the mean of the random process Y(t) is constant and the autocorrelation function depends only on the time difference t₂ − t₁, Y(t) is stationary in the wide sense. However, Y(t) is not strict-sense stationary, since the values that Y(t) can take at t = 0 and at t = π/4 are different, and hence even the first-order distribution is not time invariant.

EXAMPLE 3.8.
A binary-valued Markov sequence X(n), n ∈ I = {..., −2, −1, 0, 1, 2, ...}, has the following joint probabilities:

P[X(n) = 0, X(n + 1) = 0] = 0.2,   P[X(n) = 0, X(n + 1) = 1] = 0.2
P[X(n) = 1, X(n + 1) = 0] = 0.2,   P[X(n) = 1, X(n + 1) = 1] = 0.4

Find μ_X(n), R_XX(n, n + 1), R_XX(n, n + 2), ..., and show that the sequence is wide-sense stationary.

SOLUTION:

P[X(n) = 0] = P[X(n) = 0, X(n + 1) = 0] + P[X(n) = 0, X(n + 1) = 1] = 0.4
P[X(n) = 1] = 0.6

Hence E{X(n)} = 0.6, and E{[X(n)]²} = 0.6.

E{X(n)X(n + 1)} = 1 · P[X(n) = 1, X(n + 1) = 1] = 0.4

E{X(n)X(n + 2)} = 1 · P[X(n) = 1, X(n + 2) = 1]
  = 1 · P[X(n) = 1, X(n + 1) = 1, X(n + 2) = 1] + 1 · P[X(n) = 1, X(n + 1) = 0, X(n + 2) = 1]
  = 1 · P[X(n) = 1] P[X(n + 1) = 1 | X(n) = 1] P[X(n + 2) = 1 | X(n) = 1, X(n + 1) = 1]
  + 1 · P[X(n) = 1] P[X(n + 1) = 0 | X(n) = 1] P[X(n + 2) = 1 | X(n) = 1, X(n + 1) = 0]

The Markov property implies that

P[X(n + 2) = 1 | X(n) = 1, X(n + 1) = 1] = P[X(n + 2) = 1 | X(n + 1) = 1]
P[X(n + 2) = 1 | X(n) = 1, X(n + 1) = 0] = P[X(n + 2) = 1 | X(n + 1) = 0]

and hence

E{X(n)X(n + 2)} = (0.6)(0.4/0.6)(0.4/0.6) + (0.6)(0.2/0.6)(0.2/0.4) ≈ 0.367

Thus we have

E{X(n)} = 0.6
R_XX(n, n) = 0.6
R_XX(n, n + 1) = 0.4          all independent of n
R_XX(n, n + 2) ≈ 0.367

Proceeding in a similar fashion, we can show that R_XX(n, n + k) will be independent of n, and hence this Markov sequence is wide-sense stationary.
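The result of Example 3.8 can be checked numerically. The following minimal sketch (not part of the original text) generates a long realization of the binary Markov sequence and estimates E{X(n)}, R_XX(n, n + 1), and R_XX(n, n + 2) by averaging over n; the transition probabilities are derived from the joint probabilities given in the example, and the sequence length and seed are arbitrary choices.

```python
# Simulation sketch for Example 3.8: estimate the mean and autocorrelation values of the
# binary Markov sequence by averaging over a single long realization.
import numpy as np

rng = np.random.default_rng(0)
N = 200_000
p1_given = {1: 2/3, 0: 0.5}            # P[X(n+1) = 1 | X(n)] = 0.4/0.6 and 0.2/0.4

x = np.empty(N, dtype=int)
x[0] = rng.random() < 0.6              # start from the stationary distribution P[X = 1] = 0.6
for n in range(1, N):
    x[n] = rng.random() < p1_given[int(x[n - 1])]

print("E{X(n)}      ~", x.mean())                    # ~0.6
print("R_XX(n, n+1) ~", np.mean(x[:-1] * x[1:]))     # ~0.4
print("R_XX(n, n+2) ~", np.mean(x[:-2] * x[2:]))     # ~0.367
```

The estimates do not depend on where along the sequence the averaging is done, which is consistent with the wide-sense stationarity shown above.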
EXAMPLE 3.9.
A_i and B_i, i = 1, 2, ..., n, is a set of 2n random variables that are uncorrelated and have a joint Gaussian distribution with E{A_i} = E{B_i} = 0 and E{A_i²} = E{B_i²} = σ_i². Let

X(t) = Σ_{i=1}^{n} (A_i cos ω_i t + B_i sin ω_i t)

Show that X(t) is an SSS Gaussian random process.

SOLUTION:

E{X(t)} = Σ_{i=1}^{n} [E{A_i} cos ω_i t + E{B_i} sin ω_i t] = 0

E{X(t)X(t + τ)} = E{ Σ_{i=1}^{n} Σ_{j=1}^{n} [A_i cos ω_i t + B_i sin ω_i t][A_j cos ω_j(t + τ) + B_j sin ω_j(t + τ)] }

Since E{A_iA_j}, E{A_iB_j}, E{A_iB_i}, and E{B_iB_j}, i ≠ j, are all zero, we have

E{X(t)X(t + τ)} = Σ_{i=1}^{n} [E{A_i²} cos ω_i t cos ω_i(t + τ) + E{B_i²} sin ω_i t sin ω_i(t + τ)]
               = Σ_{i=1}^{n} σ_i² cos ω_i τ = R_XX(τ)

Since E{X(t)} and E{X(t)X(t + τ)} do not depend on t, the process X(t) is WSS.

The value of X(t) at any set of times t₁, t₂, ..., t_k is a weighted sum of the 2n Gaussian random variables A_i and B_i, i = 1, 2, ..., n. Since the A_i's and B_i's have a joint Gaussian distribution, any linear combination of these variables will also have a Gaussian distribution. That is, the joint distribution of X(t₁), X(t₂), ..., X(t_k) will be Gaussian, and hence X(t) is a Gaussian process. The kth-order joint distribution of X(t₁), X(t₂), ..., X(t_k) involves only the parameters E{X(t_i)} = 0 and E{X(t_i)X(t_j)} = R_XX(|t_j − t_i|), which depend only on the time differences t_j − t_i. Hence, the joint distribution of X(t₁), X(t₂), ..., X(t_k) and the joint distribution of X(t₁ + τ), X(t₂ + τ), ..., X(t_k + τ) will be the same for all values of τ and t_i ∈ T, which proves that X(t) is SSS. A Gaussian random process provides one of the few examples where WSS implies SSS.

3.5.4 Other Forms of Stationarity

A process X(t) is asymptotically stationary if the distribution of X(t₁ + τ), X(t₂ + τ), ..., X(t_k + τ) does not depend on τ when τ is large.

Figure 3.15 Autocorrelation function of X_s(t).

If the random sequence X(n) has a nonzero mean, then S_XX(f) will have discrete frequency components at multiples of 1/T (see Problem 3.35). Otherwise, S_XX(f) will be continuous in f. The derivation leading to Equation 3.53 seems to be a convoluted way of obtaining the psd of a random sequence. The advantage of this formulation will be explained in the next chapter.

EXAMPLE 3.10.
Find the power spectral density function of the random process

X(t) = 10 cos(2000πt + Θ)

where Θ is a random variable with a uniform pdf in the interval [−π, π].

SOLUTION:

R_XX(τ) = 50 cos(2000πτ)

and hence

S_XX(f) = 25[δ(f − 1000) + δ(f + 1000)]

The psd S_XX(f), shown in Figure 3.16, has two discrete components in the frequency domain at f = ±1000 Hz. Note that

R_XX(0) = average power in the signal = 50 = ∫_{−∞}^{∞} S_XX(f) df

Figure 3.16 Psd of 10 cos(2000πt + Θ) and 10 sin(2000πt + Θ).

Also, the reader can verify that Y(t) = 10 sin(2000πt + Θ) has the same psd as X(t), which illustrates that the psd does not contain any phase information.
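As a quick numerical check of Example 3.10 (a sketch, not part of the original text), the ensemble average E{X(t)X(t + τ)} can be estimated by averaging over many draws of the random phase Θ and compared with the closed form 50 cos(2000πτ); the grid of lags, the fixed value of t, and the ensemble size are arbitrary choices.

```python
# Estimate R_XX(tau) for X(t) = 10*cos(2000*pi*t + Theta), Theta uniform on [-pi, pi],
# by averaging over the ensemble of phases, and compare with 50*cos(2000*pi*tau).
import numpy as np

rng = np.random.default_rng(1)
M = 100_000                                  # number of ensemble members (values of Theta)
theta = rng.uniform(-np.pi, np.pi, size=M)

t = 0.3e-3                                   # any fixed t; the result should not depend on it
for tau in np.linspace(0, 2e-3, 9):
    x_t = 10 * np.cos(2000 * np.pi * t + theta)
    x_t_tau = 10 * np.cos(2000 * np.pi * (t + tau) + theta)
    est = np.mean(x_t * x_t_tau)
    print(f"tau = {tau:8.2e}  estimate = {est:7.3f}  closed form = {50*np.cos(2000*np.pi*tau):7.3f}")
```

Because R_XX(τ) is a pure cosine at 1000 Hz, its Fourier transform consists of the two impulses at ±1000 Hz given above; a periodogram of a realization of X(t) would show two sharp spectral lines at those frequencies.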
EXAMPLE 3.11.
A WSS random sequence X(n) has the following autocorrelation function:

R_XX(k) = 4 + 6 exp(−0.5|k|)

Find the psd of X_s(t) and of X(n).

SOLUTION: We assume that as k → ∞ the sequence becomes uncorrelated. Thus lim_{k→∞} R_XX(k) = [E{X(n)}]² = 4, and hence E{X(n)} = ±2. If we define X(n) = Z(n) + Y(n), with Y(n) = ±2, then Z(n) is a zero-mean stationary sequence with R_ZZ(k) = R_XX(k) − 4 = 6 exp(−0.5|k|), and R_YY(k) = 4. The autocorrelation functions of the continuous-time versions of Z(n) and Y(n) are given by

R_{Z_sZ_s}(τ) = (1/T) Σ_{k=−∞}^{∞} 6 exp(−0.5|k|) δ(τ − kT)

R_{Y_sY_s}(τ) = (1/T) Σ_{k=−∞}^{∞} 4 δ(τ − kT)

and R_{X_sX_s}(τ) = R_{Z_sZ_s}(τ) + R_{Y_sY_s}(τ) (see Figure 3.17).

Figure 3.17 Autocorrelation function of the random sequence X(n).

Taking the Fourier transform, we obtain the psd's as

S_{Z_sZ_s}(f) = (1/T)[6 + Σ_{k=1}^{∞} 12 exp(−0.5k) cos 2πkfT]
             = (6/T)[−1 + Σ_{k=0}^{∞} exp{−(0.5 + j2πfT)k} + Σ_{k=0}^{∞} exp{−(0.5 − j2πfT)k}]
             = (6/T)[−1 + 1/(1 − exp{−(0.5 + j2πfT)}) + 1/(1 − exp{−(0.5 − j2πfT)})]
             = (6/T)[(1 − e⁻¹)/(1 − 2e⁻⁰·⁵ cos 2πfT + e⁻¹)]

S_{Y_sY_s}(f) = (4/T²) Σ_{k=−∞}^{∞} δ(f − k/T)

S_{X_sX_s}(f) = S_{Z_sZ_s}(f) + S_{Y_sY_s}(f)

The psd of X_s(t) has a continuous part, S_{Z_sZ_s}(f), and a discrete sequence of impulses at multiples of 1/T.

The psd of X(n) is the Fourier transform of R_ZZ(k) plus the Fourier transform of R_YY(k), where

S_YY(f) = 4δ(f),   |f| < 1/2

and

S_ZZ(f) = 6 Σ_{k=−∞}^{∞} exp(−0.5|k|) exp(−j2πfk)
        = 6(1 − e⁻¹)/(1 − 2e⁻⁰·⁵ cos 2πf + e⁻¹),   |f| < 1/2

Thus

S_XX(f) = 4δ(f) + 6(1 − e⁻¹)/(1 − 2e⁻⁰·⁵ cos 2πf + e⁻¹),   |f| < 1/2

Note the similarities and the differences between S_{X_sX_s} and S_XX. Essentially, S_XX(f) is the principal part of S_{X_sX_s}, that is, the value of S_{X_sX_s}(f) over one period −1/(2T) < f < 1/(2T), expressed in terms of the normalized frequency fT.

EXAMPLE 3.13.
X(t) is a stationary random process with the autocorrelation function

R_XX(τ) = A exp(−α|τ|),   α > 0

Find the psd and the effective bandwidth of X(t).

SOLUTION:

S_XX(f) = ∫_{−∞}^{∞} A exp(−α|τ|) exp(−j2πfτ) dτ = 2Aα/(α² + (2πf)²)

The effective bandwidth of X(t) is calculated from Equation 3.44 as

B_eff = [∫_{−∞}^{∞} S_XX(f) df] / [2 max S_XX(f)] = R_XX(0)/[2 S_XX(0)] = A/[2(2A/α)] = α/4

EXAMPLE 3.14.
The power spectral density function of a zero-mean Gaussian random process is given by (Figure 3.19)

S_XX(f) = 1,   |f| < 500 Hz
        = 0,   elsewhere

Find R_XX(τ) and show that X(t) and X(t + 1 ms) are uncorrelated and, hence, independent.

SOLUTION:

R_XX(τ) = ∫_{−B}^{B} exp(j2πfτ) df = 2B sin(2πBτ)/(2πBτ),   B = 500 Hz

To show that X(t) and X(t + 1 ms) are uncorrelated we need to show that E{X(t)X(t + 1 ms)} = 0:

E{X(t)X(t + 1 ms)} = R_XX(1 ms) = 2B sin(π)/π = 0

Hence, X(t) and X(t + 1 ms) are uncorrelated. Since X(t) and X(t + 1 ms) have a joint Gaussian distribution, being uncorrelated implies their independence.

Figure 3.19a Psd of a lowpass random process X(t).
Figure 3.19b Autocorrelation function of X(t).

EXAMPLE 3.15.
X(t) is a stationary lowpass random process with a psd S_XX(f) = 0 for |f| > B, and

Y(t) = A cos(2πf_c t + Θ)

where Θ is a random variable with a uniform distribution in the interval [−π, π]. Assume that X(t) and Y(t) are independent and find the psd of Z(t) = X(t)Y(t).

SOLUTION:

R_YY(τ) = (A²/2) cos(2πf_c τ)

and

R_ZZ(τ) = E{X(t)Y(t)X(t + τ)Y(t + τ)}
        = E{X(t)X(t + τ)} E{Y(t)Y(t + τ)}
        = R_XX(τ) R_YY(τ)
        = R_XX(τ)(A²/2) cos(2πf_c τ)
        = R_XX(τ)(A²/4)[exp(j2πf_c τ) + exp(−j2πf_c τ)]

Figure 3.20 Psd of X(t), Y(t), and X(t)Y(t): the lowpass message signal X(t), the carrier Y(t) = A cos(2πf_c t + Θ) with spectral lines (A²/4)δ(f ± f_c), and the modulated signal Z(t).

S_ZZ(f) = F{R_ZZ(τ)}
        = (A²/4)[∫_{−∞}^{∞} R_XX(τ) exp(j2πf_c τ) exp(−j2πfτ) dτ + ∫_{−∞}^{∞} R_XX(τ) exp(−j2πf_c τ) exp(−j2πfτ) dτ]
        = (A²/4)[∫_{−∞}^{∞} R_XX(τ) exp(−j2π(f − f_c)τ) dτ + ∫_{−∞}^{∞} R_XX(τ) exp(−j2π(f + f_c)τ) dτ]
        = (A²/4)[S_XX(f − f_c) + S_XX(f + f_c)]

The preceding equation shows that the spectrum of Z(t) is a translated version of the spectrum of X(t) (Figure 3.20). The operation of multiplying a "message" signal X(t) by a "carrier" Y(t) is called "modulation," and it is a fundamental operation in communication systems. Modulation is used primarily to alter the frequency content of a message signal so that it is suitable for transmission over a given communication channel.
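The frequency translation of Example 3.15 can be illustrated by simulation. The sketch below (not part of the original text) builds a lowpass Gaussian "message" by FFT-masking white noise to |f| < B, multiplies it by a carrier with a random phase, and averages periodograms of Z(t) = X(t)Y(t); the sampling rate, B, f_c, A, and the number of averaged periodograms are all arbitrary choices made for the illustration.

```python
# Simulation sketch of modulation: the averaged periodogram of Z(t) = X(t)*A*cos(2*pi*fc*t + Theta)
# should be concentrated in the band [fc - B, fc + B], i.e., the message spectrum shifted to fc.
import numpy as np

rng = np.random.default_rng(2)
fs, N = 8000.0, 4096                 # sampling rate (Hz) and samples per realization
B, fc, A = 200.0, 1000.0, 1.0
freqs = np.fft.rfftfreq(N, d=1/fs)
t = np.arange(N) / fs

S_z = np.zeros(freqs.size)
n_avg = 200
for _ in range(n_avg):
    W = np.fft.rfft(rng.standard_normal(N))
    W[freqs > B] = 0.0                           # ideal lowpass: keep only |f| < B
    x = np.fft.irfft(W, n=N)                     # lowpass message X(t)
    theta = rng.uniform(-np.pi, np.pi)
    z = x * A * np.cos(2 * np.pi * fc * t + theta)
    S_z += np.abs(np.fft.rfft(z))**2 / (fs * N)  # periodogram of Z(t)
S_z /= n_avg

band = freqs[S_z > 0.5 * S_z.max()]
print(f"power of Z(t) concentrated between {band.min():.0f} Hz and {band.max():.0f} Hz")
# Expected: roughly fc - B to fc + B, consistent with S_ZZ(f) = (A**2/4)*[S_XX(f-fc) + S_XX(f+fc)].
```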
3.7 CONTINUITY, DIFFERENTIATION, AND INTEGRATION

Many dynamic electrical systems can be considered linear as a first approximation, and their dynamic behavior can be described by linear differential or difference equations.

EXAMPLE 3.16.
Discuss whether the random binary waveform is MS continuous, and whether the MS derivative and integral exist.

SOLUTION: For the random binary waveform X(t), the autocorrelation function is the triangular function

R_XX(τ) = A²(1 − |τ|/T),   |τ| ≤ T
        = 0,               |τ| > T

where ±A are the pulse amplitudes and T is the pulse duration. Since R_XX(τ) is continuous at τ = 0, X(t) is MS continuous; since R_XX(τ) does not have a second derivative at τ = 0, the MS derivative does not exist; the MS integral of X(t) does exist.

EXAMPLE 3.18.
X(t) is a zero-mean lowpass random process with the psd

S_XX(f) = A,   |f| ≤ B
        = 0,   |f| > B

The time average Y(T) = (1/T) ∫₀ᵀ X(t) dt is formed over an interval T >> 1/B; calculate σ_T² = var{Y(T)} and compare it with σ_X².

SOLUTION:

σ_X² = E{X²} = R_XX(0) = ∫_{−∞}^{∞} S_XX(f) df = 2AB

σ_T² = E{Y²(T)} = ∫_{−∞}^{∞} S_XX(f) [sin(πfT)/(πfT)]² df

From Figure 3.24, we see that the bandwidth (or the duration) of [sin(πfT)/(πfT)]² is very small compared to the bandwidth of S_XX(f), and hence the integral of the product can be approximated as

σ_T² ≈ S_XX(0) [area under [sin(πfT)/(πfT)]²] = S_XX(0)(1/T)

Figure 3.24 Variance calculations in the frequency domain.

or

σ_T² ≈ S_XX(0)/T = A/T

and

σ_T²/σ_X² = (A/T)/(2AB) = 1/(2BT),   BT >> 1

The result derived in this example is important and states that time averaging of a lowpass random process over a long interval results in a reduction in variance by a factor of 2BT (when BT >> 1). Since this is equivalent to the reduction in variance that results from averaging 2BT uncorrelated samples of a random sequence, it is often stated that there are 2BT uncorrelated samples in a T-second interval, or that there are 2B uncorrelated samples per second, in a lowpass random process with a bandwidth B.

EXAMPLE 3.19.
Consider the problem of estimating the pulse amplitudes in a random binary waveform X(t) that is corrupted by additive Gaussian noise N(t) with μ_N = 0 and R_NN(τ) = exp(−|τ|/α). Assume that the unknown amplitude of X(t) in the interval (0, T) is 1, T = 1 ms, α = 1 μs, and compare the accuracy of the following two estimators for the unknown amplitude:

(a) Ŝ₁ = Y(t₁),   t₁ ∈ (0, T)
(b) Ŝ₂ = (1/T) ∫₀ᵀ Y(t) dt

where Y(t) = X(t) + N(t).

SOLUTION:

Ŝ₁ = X(t₁) + N(t₁) = 1 + N(t₁)
E{Ŝ₁} = 1

and

var{Ŝ₁} = R_NN(0) = 1

E{Ŝ₂} = (1/T) ∫₀ᵀ E{1 + N(t)} dt = 1

and

var{Ŝ₂} = var{(1/T) ∫₀ᵀ N(t) dt}
        = (1/T) ∫_{−T}^{T} (1 − |τ|/T) C_NN(τ) dτ
        = (2/T) ∫₀ᵀ (1 − τ/T) exp(−τ/α) dτ
        = 2α/T − (2α²/T²)[1 − exp(−T/α)]

Since α/T << 1, the second term in the preceding equation can be neglected and we have

var{Ŝ₂} ≈ 2α/T = 1/500

and the standard deviation of Ŝ₂ is approximately 1/√500 ≈ 0.0447.

Comparing the standard deviations (σ) of the two estimators, we find that the σ of Ŝ₁ is 1, which is of the same order of magnitude as the unknown signal amplitude being estimated. On the other hand, the σ of Ŝ₂ is 0.0447, which is quite small compared to the signal amplitude. Hence, the fluctuations in the estimated value due to noise will be very small for Ŝ₂ and quite large for Ŝ₁. Thus, we can expect Ŝ₂ to be a much more accurate estimator.
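A Monte Carlo version of the estimator comparison in Example 3.19 is sketched below (not part of the original text). The noise N(t), with R_NN(τ) = exp(−|τ|/α), is approximated by a discrete-time first-order Gauss-Markov (AR(1)) sequence sampled every dt seconds; dt, the sample location t₁, and the number of runs are arbitrary choices for the illustration.

```python
# Compare the single-sample estimator S1 = Y(t1) with the time-average estimator
# S2 = (1/T) * integral of Y(t) over (0, T) when Y(t) = 1 + N(t).
import numpy as np

rng = np.random.default_rng(3)
T, alpha, dt = 1e-3, 1e-6, 0.1e-6        # interval, noise correlation time, sampling step
n = int(T / dt)                          # samples per realization
rho = np.exp(-dt / alpha)                # one-step correlation of the sampled noise

runs = 500
s1, s2 = np.empty(runs), np.empty(runs)
for r in range(runs):
    w = rng.standard_normal(n)
    noise = np.empty(n)
    noise[0] = w[0]                      # unit-variance stationary start
    for k in range(1, n):
        noise[k] = rho * noise[k - 1] + np.sqrt(1 - rho**2) * w[k]
    y = 1.0 + noise                      # Y(t) = X(t) + N(t), amplitude = 1 on (0, T)
    s1[r] = y[n // 2]                    # estimator (a): one sample
    s2[r] = y.mean()                     # estimator (b): the time average over (0, T)

print("std of S1 ~", s1.std())           # ~ 1
print("std of S2 ~", s2.std())           # ~ sqrt(2*alpha/T) = 0.0447
```

The observed standard deviations should reproduce the values 1 and roughly 0.0447 obtained analytically above, illustrating the 2BT-type variance reduction achieved by time averaging.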
3.8.2 Ergodicity

In the analysis and design of systems that process random signals, we often assume that we have prior knowledge of such quantities as the means, autocorrelation functions, and power spectral densities of the random processes involved. In many applications such prior knowledge will not be available, and these ensemble averages have to be estimated from time averages of observed member functions.

EXAMPLE 3.20.
For the stationary random process shown in Figure 3.9, find E{(μ_X)_T} and var{(μ_X)_T}. Is the process ergodic in the mean?

SOLUTION: (μ_X)_T has six values: 5, 3, 1, −1, −3, −5, and

E{(μ_X)_T} = (1/6)(5 + 3 + 1 − 1 − 3 − 5) = 0

The variance of (μ_X)_T can be obtained as

var{(μ_X)_T} = (1/6)[(5)² + (3)² + (1)² + (−1)² + (−3)² + (−5)²] = 70/6

Note that the variance of (μ_X)_T does not depend on T, and it does not decrease as we increase T. Thus, the condition stated in Equation 3.77 is not met and the process is not ergodic in the mean. This is to be expected, since a single member function of this process has only one amplitude and does not contain any of the other five amplitudes that X(t) can have.

EXAMPLE 3.21.
Consider the stationary random process

X(t) = 10 cos(100t + Θ)

where Θ is a random variable with a uniform probability distribution in the interval [−π, π]. Show that X(t) is ergodic in the autocorrelation function.

SOLUTION:

R_XX(τ) = E{100 cos(100t + Θ) cos(100t + 100τ + Θ)} = 50 cos(100τ)

(R_XX(τ))_T = (1/2T) ∫_{−T}^{T} X(t)X(t + τ) dt
            = (1/2T) ∫_{−T}^{T} 100 cos(100t + Θ) cos(100t + 100τ + Θ) dt
            = (1/2T) ∫_{−T}^{T} 50 cos(100τ) dt + (1/2T) ∫_{−T}^{T} 50 cos(200t + 100τ + 2Θ) dt

Irrespective of which member function we choose to form the time-averaged correlation function (i.e., irrespective of the value of Θ), as T → ∞ we have

(R_XX(τ))_T → 50 cos(100τ) = R_XX(τ)

Hence, E{(R_XX(τ))_T} = R_XX(τ) and var{(R_XX(τ))_T} → 0 as T → ∞. Thus, the process is ergodic in the autocorrelation function.
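The conclusion of Example 3.21 can be verified numerically from a single member function. In the sketch below (not part of the original text), one arbitrary value of Θ is fixed, the time-averaged correlation is approximated by a discrete mean, and the result is compared with the ensemble value 50 cos(100τ); the averaging length, time step, and lags are arbitrary choices.

```python
# Time-averaged autocorrelation of one member function of X(t) = 10*cos(100*t + theta)
# versus the ensemble autocorrelation R_XX(tau) = 50*cos(100*tau).
import numpy as np

theta = 0.7                      # any fixed value of Theta selects one member function
T, dt = 200.0, 1e-3              # average over t in [-T, T] with step dt
t = np.arange(-T, T, dt)

for tau in (0.0, 0.01, 0.02, 0.03):
    x_t = 10 * np.cos(100 * t + theta)
    x_t_tau = 10 * np.cos(100 * (t + tau) + theta)
    time_avg = np.mean(x_t * x_t_tau)            # approximates (1/2T) * integral over [-T, T]
    print(f"tau = {tau:5.2f}  time average = {time_avg:7.3f}  "
          f"ensemble R_XX = {50*np.cos(100*tau):7.3f}")
```

Repeating the computation with other values of theta gives essentially the same time averages, which is precisely the point of the example.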
Other Forms of Ergodicity. There are several other forms of ergodicity, and some of the important ones include the following:

Wide-Sense Ergodic Processes. A random process is said to be wide-sense ergodic (WSE) if it is ergodic in the mean and in the autocorrelation function. WSE processes are also called weakly ergodic.

Distribution Ergodic Processes. A random process is said to be distribution ergodic if time-averaged estimates of distribution functions are equal to the appropriate (ensemble) distribution functions.

Jointly Ergodic Processes. Two random processes are jointly (wide-sense) ergodic if they are ergodic in their means and autocorrelation functions and also have a time-averaged cross-correlation function that equals the ensemble-averaged cross-correlation function.

Tests for Ergodicity. Conditions for ergodicity derived in the preceding sections are in general of limited use in practical applications, since they require prior knowledge of parameters that are often not available. Except for certain simple cases, it is usually very difficult to establish whether a random process meets the conditions for ergodicity of a particular parameter. In practice, we are usually forced to consider the physical origin of the random process to make an intuitive judgment about ergodicity.

For a process to be ergodic, each member function should "look" random, even though we view each member function as an ordinary time signal. For example, if we consider the member functions of a random binary waveform, randomness is evident in each member function, and it might be reasonable to expect the process to be at least weakly ergodic. On the other hand, each of the member functions of the random process shown in Figure 3.9 is a constant, and by observing one member function we learn nothing about the other member functions of the process. Hence, for this process, time averaging will tell us nothing about the ensemble averages. Thus, intuitive justification of ergodicity boils down to deciding whether a single member function is a "truly random signal" whose variations along the time axis can be assumed to represent typical variations over the ensemble.

The comments given in the previous paragraph may seem somewhat circular, and the reader may feel that the concept of ergodicity is on shaky ground. However, we would like to point out that in many practical situations we are forced to use models that are often hard to justify under rigorous examination. Fortunately, for Gaussian random processes, which are extensively used in a variety of applications, the test for ergodicity is very simple and is given below.

EXAMPLE 3.22.
Show that a stationary, zero-mean, finite-variance Gaussian random process is ergodic in the general sense if

∫_{−∞}^{∞} |R_XX(τ)| dτ < ∞

SOLUTION: Since a stationary Gaussian random process is completely specified by its mean and autocorrelation function, we need to be concerned only with the mean and the autocorrelation function (i.e., weakly ergodic implies ergodicity in the general sense for a stationary Gaussian random process).

For the process to be ergodic in the mean, we need to show that

lim_{T→∞} (1/2T) ∫_{−2T}^{2T} (1 − |τ|/2T) C_XX(τ) dτ = 0

The preceding integral can be bounded as

0 ≤ |(1/2T) ∫_{−2T}^{2T} (1 − |τ|/2T) C_XX(τ) dτ| ≤ (1/2T) ∫_{−∞}^{∞} |C_XX(τ)| dτ

Hence,

lim_{T→∞} (1/2T) ∫_{−2T}^{2T} (1 − |τ|/2T) C_XX(τ) dτ = 0

since (the process being zero mean, C_XX(τ) = R_XX(τ))

∫_{−∞}^{∞} |C_XX(τ)| dτ < ∞

To prove ergodicity of the autocorrelation function, we need to show that, for every α, the integral

V = (1/2T) ∫_{−2T}^{2T} (1 − |τ|/2T) C_ZZ(τ) dτ,   Z(t) = X(t)X(t + α)

approaches zero as T → ∞. The integral V can be bounded as

0 ≤ |V| ≤ (1/2T) ∫_{−∞}^{∞} |C_ZZ(τ)| dτ

For a zero-mean Gaussian process, the fourth-moment factorization gives C_ZZ(τ) = R_XX²(τ) + R_XX(τ + α)R_XX(τ − α), so that ∫|C_ZZ(τ)| dτ ≤ 2R_XX(0) ∫|R_XX(τ)| dτ < ∞, and hence V → 0 as T → ∞. Thus the process is also ergodic in the autocorrelation function and is therefore ergodic in the general sense.
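As a closing illustration of Example 3.22 (a sketch, not part of the original text), consider a zero-mean Gaussian AR(1) sequence with R_XX(k) = ρ^|k|, the discrete-time analogue of a process satisfying the integrability condition above (the autocorrelation is absolutely summable). Time averages from a single long realization should then converge to the ensemble averages; the value of ρ, the sequence length, and the lags checked are arbitrary choices.

```python
# Time averages from one realization of a Gaussian AR(1) sequence versus ensemble values.
import numpy as np

rng = np.random.default_rng(4)
rho, N = 0.8, 500_000
w = rng.standard_normal(N)
x = np.empty(N)
x[0] = w[0] / np.sqrt(1 - rho**2)        # start in the stationary distribution
for n in range(1, N):
    x[n] = rho * x[n - 1] + w[n]
x *= np.sqrt(1 - rho**2)                  # scale so that R_XX(0) = 1 and R_XX(k) = rho**|k|

print("time-averaged mean    :", round(x.mean(), 4), "  (ensemble mean = 0)")
for k in (1, 2, 5):
    r_hat = np.mean(x[:-k] * x[k:])
    print(f"time-averaged R_XX({k}) : {r_hat:.4f}   (ensemble value = {rho**k:.4f})")
```

The agreement between the time-averaged and ensemble quantities is what the ergodicity condition of Example 3.22 guarantees for Gaussian processes with absolutely integrable autocorrelation functions.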
