You are on page 1of 41
7 one “In this chapter, we shall study the estimation of power spectral density (PSD) of a wide. sense stationary (WSS) process. As PSD is the Fourier transform of the autocorrelation Sequence, estimating the power spectrum is equivalent to estimating the autocorrelation, Ih Section 3.7, we discussed the correlation of signals, and in Section 5.11, we discussed the PSD. From Section 5.11.2, the autocorrelation of a signal x(n) is given by : ne ii (16.1) | re(m) = lim 5; ) This shows that if x(n) is known for all n, then its power spectrum Pe(e!*) ean be estimated by, first, determining the autocorrelation sequence r and then taking its Fourier transform. However, in practical applications there are three problems that may arise with this approach. 1, The data sequence x(n) may not be known for all n. Only a finite segment of 2(n) is available, for example, seismic data from an earthquake in w hich the signal persists for only a short period of time 2, The spectral characteristics may change with time over the duration of the data collection. 3. The data sequence x(n) is often corrupted by noise Therefore, in practical applications, we have to « from a finite segment of noisy data. The spectrum estimation techniques may be categorized as (a) nonparametric or classical and (b) parametric or nonclassical. The nonparametric methods do not make @uy ssstmptions about the generation of the data and hence the name, These meth- Ode fate based on the discrete Fourier transform (DFT) of either the signal segment or its autocorrelation sequence. The nonpar ‘ametric methods inci periodogram, modified periodogram, Bartlett, Welch's, and Blackman-Tukey oe methods have the advantage of implementation using fast Fourier transform (FFT), but with the disadvantage of limited frequency resolution. » 3 Parametric methods assume that the signal segment has been generated Parametric model. These models include ree aa au ie and autoregressive moving average (ARMA). We assume that the the output of one of these systems with input white noise having. model consists of a sum of sinusoids or complex model-based methods are referred as noise stimate the power spectrum P,(e) tee rey) or discrete-time), Continuous ‘Time Fourier ‘Tra xm (DTFT), and DFT. The DFT version of 85.11. It is as follows (Eq. (8.103): Not N-1 DY |)? = ¥ yy xP n=0 io Tet us understand how we can apply Parseval’s theorem. Let a(n) be the periodic extension of an N-point signal x(n), that is, 2(n) is the one period of ap(n)- a(n) OSn=N—-] 0 otherwise a( The average power of z»(n) can be computed from one period as is | (2 P= aH SS Iam) Using Eqs. (16.2) and (16.3), we obtain sil va ae pals me- 1 xe fen This equation shows that we Equation (16.4) can be written as ree oe 1x@)P _ 1S Pte) (16.5) NR : _ [XH jg the PSD. Thus, the average power is the average’ tie SIs where P,(k) = 4?" is th can determine the average power of a signal from its DFT. P ‘The PSD is given by 16.1 Consider a 4-point signal a(n) = _ (xQ@)P _ XP 0, 2} and its DFT X(k) 4, 3 +48 2, 3— P(t) = PE = = Tily the Parseval’s theorem. «s og This power matches the true power in the Bee t=; continuous-time signal ¢> (t) = sin (2mfit), |The DTFT version of Parseval’s theorem is given in Section 5.5.12 and PSD is ‘described in Section 5.11. From Eqs (5.112) and (5.113), we have [xw(o)2 2N+1 y : 1 pel a > Prete | E Meee ON pT 2 HC") ee a ‘ el P(e!) dus a oa. Where Xn(c)— S~ 2(n)e-sen N and the PSD is P,(e*) = in pe (16.6) (16.7) 16.1.2 Performance of Estimators Let x(n) be a length-N signal segment {x(0), 2(1), (NV —1)}, An unknown param- eter (e.g. PSD) P depends on this signal segment. We wish to determine P based on the data, that is, P =9(2(0), (1), 2(N —1)) (16.8) Where q is'some function. This is the estimation problem, Biasand variance The bias and va terize the performance of spectrum e: _ Some optimality criterion. The meai nance are two basic measures often used to charac- ‘stimation. For optimal estimators, we need to adopt n-square error (MSE) optimality criterion is defined MSE = E((P — P)?] the average mean-sq it can be written as ‘= {[(e— 2p lwared deviation of the estimator from th i }) + (BIP] ~ P) } = vent | NSB =v) ‘when b{?] = 0, the MSE becomes equal t sstimator is equal to the true value. oy : oe Consistency If the variance and bias of an zero as the N — co, then the estimator is called a consistent estimator. The probabil function of the estimator tends to cluster more closely around the true increases. 7 Frequency resolution ‘The frequency resolution Af is the smallest difference in frequencies that can be reliably detected by the estimator, Quality factor and variability ‘The quality factor is defined as the ratio of the square of the mean of the PSD to its variance. ‘The variability is a normalized variance which is defined as var P ate) (16.14) 2 tht Figure of merit The figure of me resolution, a = crit is defined as the product of variability and (16.15) M=VxAf ‘ible. The figure of merit should be as small as possi HODS NONPARAMETRIC (OR CLASSICAL) MET ie cared out by taking the DE The spectrum analysis of @ deterministic aa ee iota vindgegesteal ‘ a finite length segment of the signal obtaltsection 8.12 isnot satisfactory when in Section 8.12. The method described no based on estimating the torandonn signals, Nonparamele Me Fourer trasr o ‘sequence of random process 2° rnation is al e, we can say that spectrum i SM 7, —mn), the most widely sed aiicioele an autocorrelation sequence -(N-1) 1,(™) =4 0 z [f2(m) ay Tet) aval) {o ofa (16.31) because the expected value of F(m) .d estimate of T2(™) JJation r(m). However, #(m) is an Therefore, 7,(m) is a biase I to the true autocorrel from Eq, (16.31) is not equal asymptotically unbiased estimator since fim, E[fe(m)] = 72(™) (16.20) and using Eq (16.32) (16.31), we obtain not ; Selman mantN-2) Taking the expectation of F4- Brome = ¥ ma-(N-?) 2 gorerl(o (mM) E[Phex(e™ = + iy = pet * G ima_var[Poer(e?)] = P2(e*) ‘Since the variance does not go to zero as V —> oo, the periodogram is not a consistent estimator. Frequency resolution of the periodogram The frequency resolution is the 3 dB width ‘of the main lobe of the window. In periodogram, we use rectangular window w(n), which has length NV. Therefore, the resolution of the periodogram is given by 9 Af 3 Quality factor The quality factor of the periodogram is defined as (16.38) FlPa(e*)] _ P2.,(e%) var | Pper(e?” ere?) [Poer(ei)] FB =1 The quality factor Q is fixed and independent of the length N of the signal segment, which indicates the poor quality of the estin Figure of merit The figure of merit is defined as the product of variability and resolution. HVxAf=gx0=% (16.40) ‘The figure of merit should be as smalll as possible. 16.2.2 Modified Periodogram ‘The term periodogram is used when w(n) is a rectangular window. The term modified periodogram is used when w(n) is a non-rectangular window. In other words, the peri- ‘odogram of a process, that is, windowed with a non-rectangular window w(n) is called fied periodogram and is given by = 2 re ar v2 2(n)w(n)e-*" B[Av(ei)] — Lyf where W(e) is the DTFT fa 1 WO = ay fl of the wi 10 ene et Therefore, Jim st, W(ci2) totically unbiased. Note that periodogram, ; and hence, the modified periodogram is as 1, the modified periodogram reduces to the if U Variance of the modifie the rectangular window approximately the same as d Perlodogram In periodogram, we use window Behe ‘than Hence, the variance of the modified periodogram will ibe that for the periodogram, that is, lim var [Pye (e)] = P2(ei) (16.44) Since the variance does not go to zero as N — 00, the modified periodogram is not a consistent estimatur i i ‘The use of non-rectangular windows in mod- Frequency resolution of the periodogram f ified periodogram offers no benefit in terms of reducing the variance. eres Bitdows provide trade-off benweat| frequctig) rasiaGaniammn ee b), spect masking (sidelobe amplitude), and spectral leakage (refer Section 8. i , ra Gesolution is the 3 dB width of the main lobo of the windans iaaeiaaam modified periodogram is window dependent (16.45) Resolution = (Af)s ap xd windows of length-N. Table 16.1 shows the properties of some commonly usec ’s Method: Periodogram area aie am Sct o a noe at carn a fa a com by sveraging » number, of et postive nee "The . eieee ey steps to estimate the P! oid i e 1. Divide the signal x(n) into M non-over! segments ith signal segment is given by <1, 0sis a(n) = a(n¢ it) OSS TOS OE 3. Compute the average of M periodograms. The average of M periodograms gives the estimated PSD. ia zo a(n + iL)e Jon) | (16.48) n=O A pictorial description of this method is shown in Fig, 16.1 Mean of Pa(e) The mean value of Pp(e%) is Segment M Periodogram 1 Y] pea a [Periodogram Therefore Py (ei) is a biased eotimate of P, (ey 2nd(), the periodogram is asymptotically unbiased ii P(e] = 4 js Jim E[Pa(e = Pte ) + 2nd(w)| = P,(o) Variance ot Ps(e') Assume that the non-overlapping Gigsall sageNene a(n) are uncorrelated. The variance of Pa(e) is i 1 come var[Pa(*)] = fw [P(e] = hae) esi Clearly, as M increases, the variance tends to zero. Thus, Pa(e!) provides a consistent estimate of P,(e!”). If N is fixed and N = LM, we see that increasing M to reduce variance results in a decrease of L. If we decrease L, the frequency resolution decreases. Frequency resolution Since the periodograms used in Pa(e!”) are computed using signals of length L, then the resolution is (16.52) which is M times larger (worse) than the periodogram, Quality factor The quality factor #* 2 (plPoe)]] _ Pe = Tp) var[Pa(e)] Figure of merit The figure of merit 1 M=ax4f Q 16.2.4 Welch’s Method: ‘The Welch method is ping modified periodogta™> methods: ea 1, Allow the signal sarments 200) eran oe e is no overlap (D I + if D = 4, there is 50% overlap between 2;() and 2;41(n), N L ‘segments of length L. For this case, the resolution is same as Bartlett's method while the ‘Variance reduces by a factor of 2 because the number of segments is doubled. However, with a 50% overlap, we can also form N ey bi: Segments of length 2L. Now, for this case, the variance is same as Bartlett's method while the resolution increases because the length of the segments is doubled. Now, consider the second modification. We nse the non-rectangular with each over. Tapped signal segment to obtain the PSD. The modified periodogram of the ith signal segment is given by 2 Py (oe) = P(e) = where ve a 2 ae Be, w*(n) ‘The average of M modified periodograms gives the estimated PSD. i A ee estes ee =e > Pale) => [i Yi x(n + wD) w(njen =o n= iption of this method is shown in Fig. 16.2. ) The mean value of Py(e!) is t \ / I Periodogram M| ‘Averaging Tesp estimate Fig. 16.2 Pictorial description of the Welch method, Using Eq. (16.43), we obtain (window length = L) 1 1 | = Bl P(e shel 0) | Wei) |? v(e#*)| = B[Paa(e*)] se [Pate = |(e)})] (16.57) where W(c!#) is the DIFT of the L-point window w(n). The Welch’s method is an asymptotically unbiased estimate of the PSD. Variance of Py(e!”) With a Bartlett window and a 50 % overlap, the variance of Pw(e) is var| Py(e™)] : (16.58) Comparing Fags (16.51) and (16.58), we seo that for a given M» the variance of the Welsrs neha i ee than Bartlett's method by a factor of 3. However, for a Wand 7 Gonlution), with » 50% overlap, the number of eeements fa doubled ne variance is ‘ var[ Pay(e!*)] = Bs pace) = iePale) resolution The resolution of the Weldhlg method lewinding ‘this method with the methods of Bartlett and by the averaging effect of the autocorrelation ‘of several periodograms. The Blackman~Tukey estimate i Pule)= >> #-(m)w(m)e*™ m=~(L-1) F Using only the first 1 < N more-reliable values of the autocorrelation sequence reduces e the variance of the spectrum estimate by a factor of £. However, this reduces the P resolution from about } to about } ey F ‘The reduction in resolution and variance of the Blackman-Tukey estimate Pyle) E is achieved by averaging the values of the spectrum at consecutive frequeney bins by oe windowing the F,(m). In the method: and Welch, the same effect is achieved by averaging the values of the multipl t the same frequency bin. Example 16.3 Consider a continuous-time signa Foul) 2(2) = 008 (2nfit). It is sampled at twice the Ny Fate (f, = 4fi) to obtain discrete-time s« ¢ ' (n) = cos(3#). Determine and sketch the ‘odogram for N = 4. | k Solution: D 23 <3, the signal xy (n) is ieee 40 = % $8, the signal 2v( Fig. 16.3 Periodogram for Example 163 a: 1,0, — 1 4(n) = cos ("*) 1, 0} o a4 a(n =k) = w(n) + D> by w(n — k) (16.61) tt p= where w(n) is the input signal and r(n) is the output signal. The system function 18 4 (16.62) and the frequency response is a 14+ 0 dpe-ivk which is a rational function completely speci oe t ecified by the parameters {a1, a, {his +") ba}, and o%,. Since these models are [71 output process ary. In parametric methods ae ee of spectrum estimation, we want to estimate signal mod parameters @;, bj, and 62 to Satisfy a prescribed criterion. Types of models There are three tybes of models, 1. For q = 0, we have an all pole model. The all-pole model is also known as AR model, denoted by AR(p). The difference equation is (n) =~ Sag 2(n = 8) + w(n) (16.65) 2. For p = 0, we have an all-zero model. The all-zero model is also known as MA model, denoted by MA(q). The difference equation is a(n) = w(n) + 5b, w(n=B) ase fave a pole zero model. The pole zero model is also known 3. For p > 0 and q > 0, we have a pol d n as ARMA model, denoted by ARMA(p, @). The difference equation is ‘ ee n(n) +}. be w(n—k) Sag) a(n) =~ Yax2(n—K) + mln) x isi ation 16.3.1 Autoregressive Spectrum Estim Consider the AR(p) model : =- Yann H tae) a(n) = fiance 03. Its auto ‘mean and variance 0%, Its au Where the input w(n) is white now ere h: ee a is T(m) = 03.6(m) and PSD is fe Tesponse are ets) = -Sas(n =k) + w(n) = ‘Post-multiplying this equation by 1(n — m) and taking the expectation, we obtain E[z(n)0(n — m)] = aS ay B[z(n — k)2(n — m)] + E[w(n)a(n = m)] ia p ral! = a4 re( — b) + rue) (16.70) auti = Where ru(m) is the cross-correlation between w(n) and x(n). Since a(n) = A(E)w(n — 2) (n) = h()w( ) Th > coe = la Yr, }) is given by , "The cross-correlation r,,.;(m) is given es Tux(m) = B[w(n)2(n — m)] =E }w(n) S> h(e)w(n — m—6) = VL AOE[w(n)w(n - = So h(0)02,.6(m +8) = 0% h(=m) (16:71) = = Substituting ru. = 03,h(—m) in Eq. (16.70), we obtain 2 t2(m) = = > ay r(m — k) + 03,h(—m) (16.72) In i Using the causality condition h(n) =0 for n <0, this equation can be written a ki = ¥ a re(m—h) m>0 oe 16.73) 7e(™) =) _ 9 agre(m—k) +02 m=0 Q Sey =A s t2(—m) m<0 E =1, For m =0 and m > 0, this equation can be written as. e(0) re(-1) re(=2) .0. yf re(1) re(0) se tal re(2) re(1) —r,(0) r2(P) re(P —1) re(p —2) +>. 7=(0) Since r,(m) = r,(—m), this equation can be written re(0) re(1) (2) t2(p) re(p~1)| | r(p — 2) re(1) re(0 re(1) re(2) rx(1 re(0) % 0 0 (er) r2(p) 1) re(p —2) (0) ay 0 These equat 1s Yule-Walker equations or normal equations. Here, the autocorrelation estimated using the following equation. i‘ ” m %(n) = ¥ [x(n) + x" (n)al [2(n) + a7 x(n) n= ao wo => [x(n)a(n) +2(n)a™x(n) + 2(n)x"(n)a +aTx(n) x?(ndal == J wa wa No = 3 oy a(n al = ee (n) +a S> x(n)x(n) + X zyx" (njas a? Saxe m0 J=E,+alr+r’at+aRia= EF, +207 r+aTRia Not a2) equations collectively may be written as vJ=0 where V is the gradient operator defined as the column vector v=| eo, a |e 8a; Baz *** Ba, | and 0) on the right-hand side of Eq. (16.89) denotes the column vector containing p zeros. From Eqs (16.85) and (16.89), we obtain VJ =2r+2R,a=0 R,a=-r (16.90) Tt has the following solution (16.91) Seouming that R., has an inverse. Substituting the result of Ba. (16:91) in Eq: (16380), the minimum sum of squared error is given by : pee, (16.92) MMSE criterion hold for the LSE criterion if all the formulas derived for the 3 — with the time-average operator > [-] ‘we replace the expectation operator E[-] tion 16.3.4 Moving Average Spectrum Estimatio! Consider the MA(q) mode! dy w(n— #) a(n) = w(n) +20 i i sean end variance 02, Its autocorrelation js white noise of zero mea ‘The system funetion and frequency ee ee PSD is Pale’) = 2m ; “is Tw (m) = 02; sg BB ath me = We) Moving Average Spectrum Estimation (p, 4) model . ay x(n — k) + w(n) +S by w(n ~ k) ist ‘where the input w(n) is white noise of zero mean and variance o3,. Its autocorrelation is 7,,(m) = 02,5(m) and PSD is P,,(e”) = 03. The system function and frequency ‘Tesponse are pa « a 1+) byz* 14+ 3h ewe H@) = and A(e*) = —5+—_ : : The 1+ > az-# 14+ 3 ap eduk mm : oq; ‘The PSD of the output signal is non 2 a i ; 1+ } bee 16. P,(e*) = Pw(e)|H(e? o2,|H(e)/? = 02, = (16.95) The J1+ 0 a edoe pro \ ie E which is specified by the parameters, {a1, « ap}, {b,, ba, -=+ , ba}, and @2, Since this model is LTI, the output process (n) is stationary. Therefore, if we estimate signal model parameters @;, b,, and 62,, then an estimate of the PSD will be 1+ > het é (16.96) a it EIGENVALUES AND EIGENVECTORS OF THE AUTOCORRELATION Before looking at the frequency estimation methods that are based on the eigendecompe sition ‘of the autocorrelation matrix, we begin with a discussion of the ei ‘the autocorrelation matrix. Let autocorrelation matrix of a real-valued WSS random process ! +1)]", where the superscripts H and A nonzero N 1 vector q is wer Speotrum Estimation 1028 equation wid to be an rig Raq =a envector of RL if it satisties the following here 0 is the eigenvalue associated w ector q wt lated with the eigenvect vector q. expressed a xpr This equation can be (R= ADq=o (16.99 where I is the N x N identity matrix and trivial solution q = 0, it is necessary for the the determinant of R — AT must be zero. det(R — AI) =0 is the N x 1 null vector, To prevent the matrix R = Al to be singular. This implies (16.100) This equation is called the characteristic equation of the matrix R. It is an Nth order polynomial in A, The N roots, Xo, Ai,-++, Ay. are the eigenvalues of R. For each eigenvalue, \,, 0 < t < N —1, the matrix R ~ AI will be singular and th Jeast one nonzero vector, q,, which satisfies the equation Raq, = Aa, (16.101) These vectors q, are called the eigenvectors of R. For any eigenvector q,, it is clear that oq, will also be an eigenvector for any constant a. ‘Therefore, eigenvectors are often normalized so that they have unit norm {jql| = 1 16.4.1 Properties of Eigenvalues and Eigenvectors igenvectors of the autocorrelation matrix R have the following re will be at The eigenvalues and ei properties 1. The eigenvalues of the correlation ‘The autocorrelation matrix is & He eigenvector of Rand its correspon Raq, = 4:4 i ‘ i FF, we get Premultiplying this equation PY as (16.102) matrix R are real jermitian matrix, t ding eigenvalue is As hat is, R = R”. Let q, be an ‘These are related as Rg, = sala ee his equation, we get oft ‘Taking the Hermitian transpose (16.103) (16.104) 1026 Digital Signal Processing 3. EH the eigenvalues Xj, My Aha are distinct, then the corresponding eigenvectors os Qe > Qw-1 AE linearly independent. A. The eigenvectors associated with the distinet eigenvalues of the autocorrelation — weatrix R. are orthogonal, that is, if As # Ay, then aia, =9 (16.106) Let \; and A, be two distinct eigenvalues with eigenvectors i and q,, respectively. Then, we have (16.107) (16.108) Premultiplying first equation by qi and the second by qi, we obtaining (16.109) af Ra, = sa) (16.110) af'Ra, = A,a"'4, ‘Taking the Hermitian transpose of Eq, (16.110), we obtain af R%q, = Naya @i'Raq, = yaya, Subtracting Eq. (16.111) from Eq. (16.109), we obtain Qu = Adaya =0 Noting that \, and A, are distinct, that is, Xs # Aj, therefore aja, = 0 and it follows that q, and q, are orthogonal. These orthogonal eigenvectors are the basis vectors that span the '-dimensional space. We can say that the subspaces that belong 1 distinct eigenvalues are orthogonal. Moreover, within each subspace, we can find « set of orthogonal basis vectors that span the whole subspace. 5. Let Ao, Any ++, Aw-1 be the distinct eigenvalues with unit morm eigenvectors Gp: Gy, “+ + Qy--1» Fespectively. Then, define an N x N’ matrix Q=[m a awa) Qs then a unitary matrix, that is, Q"Q=I ‘This implies that Q” = Qu" Noting that Sa: (16.111) (16.112) Power Spectrum Estimation 1027 6, Eigendecomposition The Autocorrelation matrix R can be decomposed as R= QAQ* 16.115) where Q= [ay q, ] that is, Wo] and A fa an Wx W diagonal eigenvalue matrix. Xo Os 0 OM 0 0 (16.116) 0.0 ods Let Yo, Ar »An-1 be the distinct eigenvalues with unit norm eigenvectors Wo: Qs “> + Ay—1) Tespectively, Then Rao = Aod, Ray = Arq, Ray = Aras Ray_; = Aw-14y-1 Let Q= [ao ai --- ay), then RQ=R{[q a, -- aya] = [Raq Rq, - Ray_;] [Ado Aras *** An -14n- i [qo a: -*> a-1] RQ=QA Postmultiplying this equation by R=QaQ” Pe ies ah This Sagres cor gs can abe written ms wet (16.117) Q" and noting that Q" = Q>', we obtain 1028 Digital Signal Processing ‘The transformed vector y(n) constitutes a set of uncorrelated random variables, e transformed vector y(n) consti Using Eq. (16.119), we obtain Elyin) y¥(n)] = QZE[x(n)x” (n)]Q = Q"RQ = Q"QAQ"Q I, we obtain A (16.120) Since Q”Q Ely(n)y*(n)] = Noting that A is a diagonal matrix, this shows that the elements of y(n) are uncorrelated with one another. ‘ ae Premultiplying Eq. (16.119) with Q and using QQ" = I, we obtain the inverse KL transform. x(n) = Qy(n) (16.121) 16.5 EIGENANALYSIS ALGORITHMS FOR SPECTRUM ESTIMATION In Section 16.3, we have discussed the estimation of PSD of a WSS random process that has been modelled as the output of an LTI system that is excited by white noise, However, in many applications, we use sinusoidal or harmonic model in which the signal x(n) is a sum of complex exponentials in white noise. This type of signals are found in speech processing, moving targets in radar, and spatially propagating signals in array processing, which are used for estimation of a wireless signal direction-of-arrival at a receiver. Here, our goal is to estimate the frequencies of the complex exponentials because the complex exponentials are the “information bearing” part of the signal. In this section, we describe the frequency estimation methods based on the harmonic model: PHD and MUSIC algorithm. These algorithms are based on the eigendecomposition of the auto- correlation matrix into two subspaces, a signal subspace and a noise subspace. These methods can resolve complex exponentials that are closely spaced in the frequency. Before discussing these methods, we will first discuss the harmonic model and then will deal with eigendecomposition of the autocorrelation matrix of the harmonic model. 16.5.1 Harmonic Model ‘The harmonic signal model that consists of p complex exponentials in noise is given by * 2(n) = 7 AO + w(n), (16.122) i= ere i the discrete-time frequency, A, is the complex amplitude, Amide ‘ : Power Spectrum Estimation 1029 16.5.2 Eigendecomposition of the Consider the harmonic signal model give consider a first-order harmonic process, ¢ Autocorrelation Matrix Win Eq, (16.122). Here, for simplicity, we first harmonic process, that is, p= 2. At the ert process Single complex exponential in white noise (p process 2(n) where Ay = |Aj| e?*, and w(n) is the white noise with variance o2,. Consider the signal a(n) from Eq. (16.124) at its current and future M —1 values. This can be expressed as P= 1. Then, we consider a second-order we will consider a pth order harmonic =1) Consider the first-order harmonic Aye" + w(n) (16.124) x(n) = [x(n) 2(n41) -+ 2(n+M—-1)]" (16.125) ‘We can then write the signal model consisting of single complex exponential in noise for Tength-M time window vector as oa Ay &" + w(n) oxo) + a(n-+1) Cee en Ein) =| 2(n+-2) “b= Are cai (n+ M-1) ‘Ay een M—D) + w(n + M — 1) zi w(n) nn w(n+1) cine lade cles w(n+2) a. § : iden n+M-1) vie) min ms (n) (16.126) bh iin 4 w(n) = 9(0) + x(n) = Ar v(4) gone? (16.127) pep frequency vector ‘ion of 2(n) is = Efz(n)z"(n— 2) a e vin + w(n) {Are a. men ot . : junio Mn) + 100) 12" (8 — »] poenerrerenl -setm-® + w*(n— #)}] +Aje fare _ +E) age [wearer = »] 1030 Digital Signal Processing k) = [Aa|? e** + 02,6(k) = Pret" + 03,0() (16.128) where P; = |4,|? is the power in the complex exponential. Using the fact ro(k) = rg(—k), the M x M autocorrelation matrix is given by a(n) x(n +1) R, = B[x(n)x"(n)] =E [e*(n) 2*(n +1) +++ 2*(n+ M ~1)] a a(n+M -1) a(n)z*(n) a(n)z*(n +1) a(n +1)a*(n) a(n + 1)2*(n+1) z(n)z*(n+M ~1) x(n -+1)z"(n+M—-1) ; : a a(n+M —1)a*(n) (n+ M s(n + 1) a a(n+M— 1)a*(n + M—1) r2(0) reQl). +++ g(t —1) i pu) 20) ear-2) t2(M —1) t2(M—2)--- (0) Using Eq. (16.128), the autocorrelation matrix can be written as ‘ Prto} Pret — Pye Pye“HM—1Don Pyeiws Pi+o2, Pret... Pye HM —2)un Re=| Pie = Pel Ph to + Prensa Pyet(M Wer Py ofa der Py e(MoAbn Py ge wee @oH(M—Mor eA I(M—2)wr eM Boy 1 een gi oan 1 evi Sp oe eee 1 or GM Son. AM Der gil R, R,=R,+Rw where R, is the signal autocorrelation matrix and Ry = 02,1 is the noise matrix. The rank of a matrix is defined as the number of linearly ind ‘that matrix. Here, the signal matrix R, has a rank of. ‘matrix Ry has fll rank, Note that the frst column of ee Power : ns he ome ign is = a wer Spectrum Estimation 1091 other eigenvalues of R, are » a HP, with the corres Since R. is Hermitian, its othe aq, that is. Ponding eigenvector qy. All 13) M + Gy will be orthogonal to ai'q=0 i=23.. (16.132 or equivalently ) Ha=0 § (16.133) and R, can be written as R.q, = fq, (16.134) The eigenvalues and eigenvectors of R, are Req, Ry +Rw) a, = Raq, + Rua, = The eigenvectors of Re are qy, are ia; + o21q, = (Af +02)q, (16.135) which are same as those of R,. The eigenvalues of R, Mt+oz i + pt a (uteR Mtoe ibe i285. i A (16.136) Since Aj = MP; and Xf = 0 for i = 2,3,+++ , M, we obtain (16.137) 3, £=2,3,---,M i (ea é=1 Therefore, the largest eigenvalue is max = Ai- Amax = Ai = MP) +0% ee 2. The autocorrelation matrix R, Z ues are equal to 02, 5 and the remaining M — 1 eigenval a can also be written in terms of ee 5 (16.139) Ri => 4 = gat +Eaat R, = (MP, + 2a,at +029. (16.140) ‘are known as signal subspace and ech ether since the atororeation matrix i we 1032 Digital Signal Processing ’ using least-square method) P=2(2"2)"'2" (16.141) ‘Therefore, the matrix that projects any arbitrary vector onto signal space is a P, = @,( Q%Q,) Q! = Q,1-'Q! = 9,07 (16142) J and the matrix that projects any arbitrary vector onto noise space is Pu = Q,(Q%Q,) 'QY = QuIQE = We (16.143) . ‘The two subspaces are orthogonal, that is, ge P,Q, =0 and P,Q, =0 (16.144) Since these two subspaces are orthogonal, then all the time-window frequency vector x from Eq. (16.127) must lie completely in the signal space, that is, Pyv(wi) = ver) Puv(wi) =0 (16.145) This is the central concept in subspace-based frequency estimation methods, which we will discuss in the subsequent subsections. ‘Two complex exponentials in white noise (p = 2) Consider the second-order harmonic _ process as ii 2 2(n) = > Ay 2%" + w(n) = Ay oP" + Ap 2" + w(n) (16.146) rm where A; = |Ai| ©, for i = 1,2, and w(n) is the white noise with variance o2,. Consider the signal 2(n) from Eq. (16.146) at its current and future M —1 values. This can be j a ; ‘ x(n) = [2(n) 2(n-+1) + en M1)" (aosun) ] ‘We can then write the signal model consisting of two ii hak Eee inal a complex exponentials in noise for Power Spectrum Estimation 1033 P, = |Ar|? and Pp = {ayia where 1? and Pa = |Aal? are the Pi = ® Power in the two complex exponentials, The M x M autocorrelation matrix is given ky Seepage ee eri eft, 9-i(M—11 7 1 ewe € Bea ee 1 eIs~ the eM M—1)0r 6i(M =a), ei(M—3)u, I eT. gun 1 enh eAM—Yoa gi(M~2)un 6i(M—3)un =R,+R, R, is the signal autocorrelation matrix and Ry = 021 is the noise autocorrelation > The rank of a matrix is defined as the number of linearly independent columns in matrix. Here, the signal matrix R, has a rank of two and the noise autocorrelation fix R,, has full rank. If we define y(wr) ay = [1 eh AB edn (16.150) wa}? veg ¥(w2) = G2 = [1 ef? 2... ei(M-Nua] (16.151) 4s can be written as DR, = Piqial + Poa.al! 2 Dra (16.152) the rank of R, is equal to two, it has only two nonzero eigenvalues. These two lues are oie 7% age Ryq, = Praral’ay + P22 a2 9 = Pin (91 as) = MPiay = Aja, % . eee Sp ff a.) = MP2q, = 3 Ryqy = Pray af!ag +P2a292 Qe = Ped (4742) 242 = A302 v F ‘ are Af = MP; and 4g = MPz with the corresponding ‘the a eee ‘All other eigenvalues of R, are zero, that is, Af =0, AR, is Hermitinn, its other eigenvectors as, 4, + Qu Will be orthogonal to $=34--M (16.153) (16.154) (26.155) 084 Digital Signal Processing ‘The eigenvalues and eigenvectors of Ry are Rea, = (Re + Ru), = Rea + Rua = NO: +o3lq, = (A +o); (16.156) athe eigenvectors of Fz are qu which are same 28 those of R,. The elgenvalies of Re are < a 40%, i a i The autocorrelation matrix R, can also be written in terms of eigendecomposition as (16.157) M 2 Mw R, =D saa! = raat +d raat (16.158) 1=3 2 mM =D (ur +o2) aati + Donat j=l i= R, = Q,A,Q% +02.9,08 (16.159) MP\+o, 0 where Q,= [a1 2), Qu=[a3 a“ du), A=[ 7 ees ‘The columns of Q, and Q,, are the signal and noise eigenvectors, respectively. The ‘M-dimensional subspace can be split into two subspaces spanned by the signal and noise eigenvectors, respectively. These two subspaces are known as signal subspace and noise subspace, They are orthogonal to each other since the autocorrelation matrix is Hermitian, ‘Therefore, the matrices that project any arbitrary vector onto signal and noise subspaces are P,=Q,Q! and Pu =Q,Q0 (16.160) ‘The two subspaces are orthogonal, that is, P.Qu=0, PuQ,=0 (16.161) Since these two subspaces are orthogonal, then all the time-window ‘must lie completely in the signal space, that is, Pyv(w,) = v(w,) and Pow) =0 i=1,2 in white poem ppemgpatne oad Power Spectrum Estimation 1035 We can then write the signal model coa si jength-M time window vector as isting of p complex exponential in noise for : x(n) = 7 Ai (ws) "+ wn) = a(n) +-w(n) (16.164) 2 gift) is ges (16.165) where v(w) is the time-window frequency vector. The autocorrelation matrix is given by p R, = E[x(n)x"(n)] = 7 |Aif’vw)v" (wi) +021 = VA =I where V = [v(wr) vive) «+: v(up)] (16.168) is an M x p matrix and |Ai? 0 0 0 2... 2 \Aal . (16.167) ate. 0: [Apl?. i cl lex exponentials. The diagonal matrix of the powers of each of the Tespue tt complex exponential P —— ie sank of p and the noise autocorrelation matrix Rw has full rank. we choose M > P- ye define gana)? (16.168) ae feet com . . : gion ae Att] (16.169) ¥(w2) = 92 = (MD) a (16.170) hh clon ohegee ae =a, =[ ‘can be written a5 Ee 2B (16.171) => Paar eo ete has only p nonzero eigenvalues. These eigenvalues Pie 7 = 18 ifs a , for #= 1.2, p with the corresponding 5 os ae Mee erst X =0, for i= p+, p+ am ay will be orthogonal ts orboreigonrectors dre 4a” ae 1036 Digital Signal Processing or equivalently vig, =0 j=ptipt2 (16.173) and R, can be written as | Rig = 4a (16.174) ‘The eigenvalues and eigenvectors of Ry are Rig, = (Bs +Ro)a) = Bag; + Regs = Mi + els = Qf +o%)a, (16.175) ‘The eigenvectors of R, and q; are same as those of R.,. The eigenvalues of Rez are 2 = + Po ee css The autocorrelation matrix R can also be written in terms of eigendecomposition (the eigenvectors are in descending order, that is, A: > A2 > -+- 2 Aa) M P M R,=DAqal =D saal+ So saat (16.17) i=l =I i=pHt er PR i ; =D (Mr +a)aal+ SO haat ta od R. =Q,A, QF +02,Q4 Qe (16.178) where MP\+03 0 = 0 0 MP, +02, 0 Ay= : ; é 0 0 “MP, +03, Qe= [ar az ay], Qe = [Spr dear -- aaed Power Spectrum Estimation 1037 Note that in our analysis, we 1 ) is, we used true corre Ao tl meta Bete hat sorelation mates Rtn practice, we etmate fi, «58% N (16.182) where the data matrix X has dimensions N x M and it can be formed as r x7'(0) 2(0) (1) 2(M -1) xM(1) z(1) (2) (M) ‘. r ¥ x xT(n) [=] a(n) a(n +1) --2(n+M-1) (16.183) x7 (N ~2) a(N -2) 2(N-1)---2(n+M -3) x7(N —1) a(N-1) 2(N) ~--2(n+M-2) Pisarenko Harmonic Decomposition Method hhod is based on the eigendecomposition of the autocorrelation matrix and tioning into signal and noise subspaces. In PHD method, it is assumed that 27) ‘of p complex exponentials in white noise and the length of the time window is (16.184) _ M=pt+!l Boos than the complet exponentials. Therefore, the noise subspace consists Mgenvector that corresponds tothe minimum eigenv:ve 2, This method geavecos ated with the soalest eigenvalue to estimate the frequencies ‘exponentials. The signal model is 5S yy mane) (16.185) ror ‘ ing ofp complex exponcatia woe for kength-M — p + 1 s(n) +w(n) (16.186) id (16.187) aq = Na. + ona, = (i +02) (16.188) 1038 Digital Signal Processing The eigendecomposition of the autocorrelation matrix Rz is given by (Eq, (16.177) _ with M =p +1) M ; M R. => daa! => aal+ > daar (16.190) ‘= = 41 2 = raal+ val ‘ot 5 = vaal + clay att = R. = Q, A, Qi + QQ (16.191) where Q,=[a) a2 -- ap], Qe = [aw] (16.192) The noise subspace consists of a single eigenvector Q,, = qu corresponding to the minimum eigenvalue o2,. The signal subspace and noise subspace are orthogonal. Therefore, the noise eigenvector qy, is orthogonal to each of the signal eigenvector q,, 1 08, ae des MP, +03, 11,27 (16.201) o., i=ptlpt2r 7M If the eigenvalues of R. are arranged in descending order, that is, A, 2 A2 2-2 Am, rand if qy.da.-* sy are the corresponding eigenvectors, then isptt M , M Ree yal = aaa + daa! =Q,A.QH + 7%QuQe = ‘a (16.202) where the eigenvalues of R, are MP, +03, i=1,2,---, . ? (16.203) amet 0, i=ptlp+2, ‘The p signal eigenvectors q, = v(w), 1 < 4 $ p, correspond to the p largest eigenvalues, and fhe 'M ~p noise eigenvectors qj, +1 < j < M, correspond to the eigenvalues equal to o3,. Because of the orthogonality between the noise and signal subspaces, we obtain Mot a ala, =v" (wa =F u(be™=0 j=ptipt2--M it for all p frequencies of the complex fiat: anata: exponent .. The pseudospectrum for each noise Pa aey es ae : \v4 was ‘The polynomial Q,(e!4) has M —1 roots. Ideally, p of ‘ ‘decia uf thal froauetile of. this cowinlea p of these roots will lie on the unit ay exponentials. The remaining M —1~ I acl lae=r e _— “ » first two autocorrelation val. m process consisting of p = lL complex n white noise are and re(1)=2+ 52 D method to find the frequency and power ‘complex exponential. that 2(n) is a first-order harmonic process isting of a single complex exponential in white Fale) = As "+ w(n) Mis, given that M = p41 Mitecorrelation matrix is given by r2(0) r2(1 Ble 20) |~ (2 Ticcigenvalues of R. are A= 342V2 do=3-2V2 The corresponding eigenvectors mde 1 a= | ey, waitin Therefore, the maximum eigenvalue is Ymax = Ai = 3-4 2V2 = MP1 + 70 hd the variance of the noise is Aain = Ap = 3 — 22 = oH Power in the complex exponential is a F7Ormne = desta) = 2V? Example 16.9 The first three a values of a random process consisting of a sinusoid in white noise are 72(0)=3, ra(L and r,(2)=0 Use PHD method to Jind the frequency and power of the sinusoid. Find the variance of the additive noise. Solution: ‘The harmonic model for the given case is: 2(n) = Asinuin + u(n) Ajgiwnn - fle Since a single sinusoid consists of two complex expo- nentials, so p = 2 and M =p+1=3, The 3x3 autocorrelation matrix is [ (0) r2(2) 20) [333] re(1) re(0) rx (1) | = | 131 r2(2) re(1) r2(0) 013 2") + wn) The eigenvalues of Ry are 34+Vv2 ‘The variance of the noise is o3 =Amm = 3-9 following equation {Eq. (16.197) Thus, the frequency of the sinusoid is wy = a5 Since the autocorrelation sequence for a single sinusoid in white noise is cos (kw1) + 02,6(k) obtain B programs are given. ous-time signal s(t) = 0 Hz. It is sampled at has standard deviation (Ow = 1) all; close all; p(j*2*pit+0.2*n)... p(j*2*pi+0.3%n) +wn; @ nonparametric methods of PSD estima- m ate based on autocorrelation function of €signal segment. The nonparametric (classi- Methods do not depend on any particular el of the process but use estimators that determined entirely by the data. dogram is not an unbiased estimator, , its variance do not decrease with the signal segment length. [ am, a rectangular window is used. periodogram, we use none 3. d that the inconsistency of .d could be overcome by Ss =< spectrum (dB) Bu Ss =now ones MUSIC sj So .5—0.4-0.3-0.2-0.1 0 0.1 0.2 03 047 hi Frequency in pi units Fig. 16.14 Results of M 16.8 - 5 6. The periodogram averaging is done by seg menting the signal into non-overlapping signal segments. 7. In Welch’s method, modified periodograms were averaged. 8. The parametric methods are based on the models parameters rather than autocorrela- tion function. These models include AR, MA and ARMA model. It is assumed that finite-length signal is the output one of systems with white noise input. models describe a system with a fi of parameters. aa 9, The harmonic signal model cox plex exponentials in noise. 1046 Digital Signal Processing nal subspace and g 10, The frequency estimation methods based on a oe pace, These methods amie 7 oe PED: eat SUR complex exponentials closely spaced fn hg 11. These algorithms are based on the eigen- frequency. decomposition of the autocorrelation matrix 6, Phe frequency resolution of the periodogram is (a) MA model (c) ARMA tmodel nc G pees See (b) AR model (d) AZ model io 5 Consider a difference equation @ % © a 16.5 Consider a difference equation (b) 99 (4) oom fa ay 2(n — k) + w(n) 162 The quay factor of the pidge anes (a) 1 (c) 0.9 It represents (b) 0 (a) 26 (a) MA model (c) ARMA model 16.3 The frequency resolution of the Bartlett's (b) AR model (d) AZ model method is given by 16.6 Consider a difference equation @) t © db ‘ (b) 92 (@) ght a(n) = w(n) + > by w(n—k) where L is the length of the signal segment. sia 16.4 Consider a difference equation It represents » (2) MA model (c) ARMA model a(n) =~ Say 2(n—k) (b) AR model (4) AP model -—_ ¢ 16.7 Let g, and q, be two orthogonal vectors of + w(n) + 5° by w(n ~ k) length M, then fa (a) aia (©) aq, =M It represents (b) aa, (4) aia 16.1 Consider a continuous-time signal «(t) = pee eee cos (2zfit). It is sampled at the rate f, Seen ae sate 8fi to obtain discrete-time sequence x(n) = (0) = s om (22). oe za ene re(0)=6, r-(1) = 1.93 + 4.58 and edogram for N = 4, te(2) = ~3.42 + 73.49 16.2 Consider a discrete-time signal Use PHD method to find the frequencies and -2(n) = 008(0.25%n) + c0s (0.5mm)

You might also like