
Spectral Estimation Notes

ECE 246/446 Fall 2014

Deterministic Power Spectra and Autocorrelations
If $x[n]$ is a signal of finite length, or even a signal of infinite length that is absolutely summable, then the Fourier transform $X(\omega)$ of the signal can be computed in the ordinary way and, in that case, (by Parseval's formula) the squared magnitude $|X(\omega)|^2$ of the transform may be interpreted as the energy distribution of the signal with respect to frequency. This function is called the Deterministic Power Spectrum of the signal and is denoted

$$\Phi_{xx}^{d}(\omega) = |X(\omega)|^{2}$$
The $xx$ subscript indicates that the power spectrum is of the signal $x[n]$, and also that the spectral values are quantities that are second order in the sense that they are derived from products of pairs of $x[n]$ values. The $d$ superscript implies that the power spectrum is for an isolated signal rather than an estimate of the power spectrum of a random process (see below).
Notice that $\Phi_{xx}^{d}(\omega) = X(\omega)\,X^{*}(\omega)$ is the discrete-time Fourier transform of $x[n] * x^{*}[-n]$, which is called the Deterministic Autocorrelation of the signal and is denoted

$$\phi_{xx}^{d}[n] = x[n] * x^{*}[-n]$$

Thus, the deterministic power spectrum and the deterministic autocorrelation are a discrete-time Fourier transform pair (i.e. $\phi_{xx}^{d}[n] \leftrightarrow \Phi_{xx}^{d}(\omega)$).
When the original signal is real, the deterministic autocorrelation function is

$$\phi_{xx}^{d}[n] = x[n] * x[-n] = \sum_{m=-\infty}^{\infty} x[m]\, x[m+n]$$

Expressed in words, $\phi_{xx}^{d}[n]$ is the sum of all products of pairs of signal values that are $n$ steps apart.
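As a concrete illustration (not part of the original notes), the short NumPy sketch below computes the deterministic autocorrelation of an arbitrarily chosen real signal and checks numerically that its DFT matches the squared magnitude of the signal's DFT, once both are evaluated on a grid of at least $2N-1$ points so that no circular wrap-around occurs. The signal values and grid length are assumptions made only for this example.

```python
import numpy as np

# A short real test signal (arbitrary choice for this illustration).
x = np.array([1.0, 2.0, 0.5, -1.0, 0.25])
N = len(x)

# Deterministic autocorrelation phi[n] = sum_m x[m] x[m+n],
# returned for lags n = -(N-1), ..., 0, ..., N-1.
phi = np.correlate(x, x, mode="full")

# Check the transform pair on an L-point frequency grid (L >= 2N-1 avoids
# circular wrap-around): DFT{phi} should equal |DFT{x}|^2 on the same grid.
L = 2 * N - 1
X = np.fft.fft(x, L)
spec_from_x = np.abs(X) ** 2                        # |X(w)|^2 sampled on the grid
spec_from_phi = np.fft.fft(np.roll(phi, -(N - 1)))  # lag 0 moved to index 0

print(np.allclose(spec_from_phi.real, spec_from_x))  # expect True
```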

A Heuristic Approach to Estimating Power Spectra
Now suppose we want to form an estimate of the power spectrum (i.e. of the way that
signal energy is distributed with respect to frequency) in a very long signal that
essentially goes on indefinitely. Taking the Fourier transform of the entire signal, or of a
very long section of the signal, is not a good alternative because this transform will be complicated by variations that represent every detail of the original signal. A better approach, sketched in code below, is to:

1. Divide a long section of the signal $x[n]$ into blocks of length $N$:

   $x_0[n] = \{\, x[0],\, x[1],\, \ldots,\, x[N-1] \,\}$
   $x_1[n] = \{\, x[N],\, x[N+1],\, \ldots,\, x[2N-1] \,\}$
   $\;\;\vdots$
   $x_{K-1}[n] = \{\, x[(K-1)N],\, \ldots,\, x[KN-1] \,\}$

2. Compute the normalized deterministic power spectra $\Phi_{x_0 x_0}^{d}(\omega)/N,\ \ldots,\ \Phi_{x_{K-1} x_{K-1}}^{d}(\omega)/N$ of the blocks $x_0[n], \ldots, x_{K-1}[n]$.

3. Average the normalized power spectra:

   $$\frac{1}{K} \sum_{k=0}^{K-1} \frac{\Phi_{x_k x_k}^{d}(\omega)}{N}$$

4. Use a large value of $K$ so that the average closely approximates

   $$\tilde{\Phi}_{xx}(\omega) = \lim_{K\to\infty} \frac{1}{K} \sum_{k=0}^{K-1} \frac{\Phi_{x_k x_k}^{d}(\omega)}{N}$$

   which is the mathematical expression for the spectral estimate.

Averaging together the deterministic power spectra of smaller sections eliminates spectral characteristics that vary from section to section, while enhancing characteristics that are the same in each section. The normalization in step 2 is introduced to scale the total energy to be independent of $N$.

Random Signals
A deeper meaning can be attached to the power spectrum estimate developed above by relating the estimate to the statistics of a random process. The concept of a random process is an extension of the concept of a random variable, which is briefly recalled first.

A random variable $x$ is a variable that assumes specific numeric values depending on the outcome of a random experiment. For example, rolling a die is an experiment with 6 possible outcomes, and a random variable for this experiment is usually assigned the values 1, 2, 3, 4, 5, 6. Let $x$ denote this random variable. All probabilities and statistics associated with this random variable are determined by a probability density function $p_x(\alpha)$ that, in this case, specifies that the probability of each assigned value is $1/6$. For example, the mean $\mu_x$ of $x$ is

$$\mu_x = \langle x \rangle = \int \alpha\, p_x(\alpha)\, d\alpha = \left(1 + 2 + 3 + 4 + 5 + 6\right)\tfrac{1}{6} = 3.5$$

and the variance is

$$\sigma_x^2 = \langle (x - \mu_x)^2 \rangle = \int (\alpha - \mu_x)^2\, p_x(\alpha)\, d\alpha = \left[(2.5)^2 + (1.5)^2 + (0.5)^2 + (0.5)^2 + (1.5)^2 + (2.5)^2\right]\tfrac{1}{6} = \tfrac{17.5}{6} \approx 2.92$$

A random process $x[n]$ is a sequence (there are also continuous random processes) that assumes specific numeric values at each index $n$ depending on the outcome of a random experiment. A good example of a random process is time-varying thermal noise. Each time the noise is sampled a different sequence of values is produced, but all the instantiations have common features. In this discussion we only consider real-valued random processes.

One way to think of the random process is as a sequence of random variables $(\ldots, x[-1], x[0], x[1], \ldots)$ that are characterized by a corresponding sequence of probability density functions $\ldots,\, p_{x[-1]}(\alpha),\, p_{x[0]}(\alpha),\, p_{x[1]}(\alpha),\, \ldots$. These PDFs produce the sequences of means and variances:

$$\ldots,\; \mu_{x[-1]} = \langle x[-1] \rangle,\; \mu_{x[0]} = \langle x[0] \rangle,\; \mu_{x[1]} = \langle x[1] \rangle,\; \ldots$$
$$\ldots,\; \sigma_{x[-1]}^2 = \langle (x[-1]-\mu_{x[-1]})^2 \rangle,\; \sigma_{x[0]}^2 = \langle (x[0]-\mu_{x[0]})^2 \rangle,\; \sigma_{x[1]}^2 = \langle (x[1]-\mu_{x[1]})^2 \rangle,\; \ldots$$
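As a quick numerical check (not part of the notes), the die's mean and variance worked out above follow from its PDF in a few lines of NumPy:

```python
import numpy as np

vals = np.arange(1, 7)              # die values 1..6
p = np.full(6, 1 / 6)               # uniform probability density (mass) function

mu = np.sum(vals * p)               # mean: 3.5
var = np.sum((vals - mu) ** 2 * p)  # variance: 17.5/6, about 2.92
print(mu, round(var, 4))
```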

In addition, since the values assigned to neighboring indices may not be independent of each other, joint probabilities are also of interest:

$$p_{x[n]x[m]}(\alpha_1, \alpha_2)$$
$$p_{x[n]x[m]x[o]}(\alpha_1, \alpha_2, \alpha_3)$$
$$\vdots$$

We limit our attention to joint probabilities of pairs of indices, $p_{x[n]x[m]}(\alpha_1, \alpha_2)$, which determine second order statistics such as

$$\langle x[n]\, x[m] \rangle = \iint \alpha_1 \alpha_2\, p_{x[n]x[m]}(\alpha_1, \alpha_2)\, d\alpha_1\, d\alpha_2$$

Several simplifying assumptions about the random process allow us to characterize random signals using only a few key statistics. The first simplifying assumption is that the random process is stationary. This means that all the probability density functions associated with the random process are translation invariant:

$$p_{x[n]}(\alpha) = p_{x[n-k]}(\alpha)$$
$$p_{x[n]x[m]}(\alpha_1, \alpha_2) = p_{x[n-k]x[m-k]}(\alpha_1, \alpha_2)$$
$$p_{x[n]x[m]x[o]}(\alpha_1, \alpha_2, \alpha_3) = p_{x[n-k]x[m-k]x[o-k]}(\alpha_1, \alpha_2, \alpha_3)$$
$$\vdots$$

for all integer values of the shift $k$. Important consequences of this assumption are:

1. The means $\langle x[n] \rangle$, $n = 0, \pm 1, \pm 2, \ldots$, are all the same ($= \mu_x$).
2. The variances $\langle (x[n]-\mu_x)^2 \rangle$, $n = 0, \pm 1, \pm 2, \ldots$, are all the same ($= \sigma_x^2$).
3. The second order statistic $\langle x[n]\, x[m] \rangle$ depends only on the difference $n - m$.

If the three consequences of the stationarity assumption listed above are the only consequences needed (as is often the case), then the three consequences are assumed explicitly rather than derived from the stationarity assumption. This weaker assumption is called wide sense stationarity.

Because of consequence 3, a sequence $\phi_{xx}[q]$, called the autocorrelation sequence, may be defined by setting

$$\phi_{xx}[q] = \langle x[n]\, x[n+q] \rangle$$

where $n$ can be chosen arbitrarily. The discrete-time Fourier transform of the autocorrelation sequence is denoted $\Phi_{xx}(\omega)$ and is called the Power Spectrum of the random process.
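To make the ensemble averages concrete, here is an illustrative sketch (not from the notes) that estimates $\phi_{xx}[q] = \langle x[n]\, x[n+q] \rangle$ by averaging over many independently generated realizations of a simple wide-sense-stationary process. The MA(1) model $x[n] = w[n] + 0.8\, w[n-1]$ (with $w[n]$ white and of unit variance) and the ensemble size are assumptions made only for this example; its theoretical values are $\phi_{xx}[0] = 1.64$, $\phi_{xx}[\pm 1] = 0.8$, and 0 at other lags.

```python
import numpy as np

rng = np.random.default_rng(1)
a, n_real, length = 0.8, 5000, 64        # MA(1) coefficient, ensemble size, length

# Ensemble of realizations of x[n] = w[n] + a*w[n-1], w white with unit variance.
w = rng.standard_normal((n_real, length + 1))
x = w[:, 1:] + a * w[:, :-1]

# Ensemble-average estimate of phi_xx[q] = <x[n] x[n+q]> at a fixed index n.
n0 = length // 2
for q in range(-3, 4):
    est = np.mean(x[:, n0] * x[:, n0 + q])
    print(q, round(est, 3))              # expect ~1.64 at q=0, ~0.8 at |q|=1, ~0 else
```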

A further assumption is also needed so that the statistics of a random process can be estimated when only a single instance of the process is available. This assumption, called ergodicity, is that ensemble averages (denoted by the angle brackets) can be evaluated using time averages from a single instance of the process. This means, in particular, that

$$\mu_x = \langle x[n] \rangle = \lim_{K\to\infty} \frac{1}{K} \sum_{k=0}^{K-1} x[n+kN]$$

$$\sigma_x^2 = \langle (x[n]-\mu_x)^2 \rangle = \lim_{K\to\infty} \frac{1}{K} \sum_{k=0}^{K-1} \left( x[n+kN] - \mu_x \right)^2$$

$$\phi_{xx}[q] = \langle x[n]\, x[n+q] \rangle = \lim_{K\to\infty} \frac{1}{K} \sum_{k=0}^{K-1} x[n+kN]\, x[n+q+kN]$$

The term wide sense ergodic refers to the more limited assumption that time averaging gives the same results as ensemble averaging in the above three cases.
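For contrast with the ensemble average computed earlier, the following sketch (again only an illustration, using the same hypothetical MA(1) process) estimates the mean, variance, and first few autocorrelation lags by time averaging a single long realization; under the ergodicity assumption these should agree with the ensemble values (mean 0, variance 1.64, $\phi_{xx}[\pm 1] = 0.8$).

```python
import numpy as np

rng = np.random.default_rng(2)
a, M = 0.8, 200_000                       # MA(1) coefficient, samples in one realization

# One long realization of x[n] = w[n] + a*w[n-1].
w = rng.standard_normal(M + 1)
x = w[1:] + a * w[:-1]

mean_t = x.mean()                         # time-average estimate of <x[n]>
var_t = np.mean((x - mean_t) ** 2)        # time-average estimate of the variance
print(round(mean_t, 3), round(var_t, 3))  # expect ~0 and ~1.64

for q in range(4):                        # time-average estimate of phi_xx[q]
    print(q, round(np.mean(x[:M - q] * x[q:]), 3))   # expect ~1.64, ~0.8, ~0, ~0
```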

A Second Look at the Spectral Estimation Formula
The spectral estimate that was derived heuristically can now be reconsidered in the context of a random process. Recall that the formula for the estimate is

$$\tilde{\Phi}_{xx}(\omega) = \lim_{K\to\infty} \frac{1}{K} \sum_{k=0}^{K-1} \frac{\Phi_{x_k x_k}^{d}(\omega)}{N} = \lim_{K\to\infty} \frac{1}{NK} \sum_{k=0}^{K-1} X_k(\omega)\, X_k^{*}(\omega)$$

and making the substitutions

$$X_k(\omega) = \sum_{n=0}^{N-1} x[kN+n]\, e^{-j\omega n} \qquad X_k^{*}(\omega) = \sum_{m=0}^{N-1} x[kN+m]\, e^{+j\omega m}$$

gives

$$\tilde{\Phi}_{xx}(\omega) = \lim_{K\to\infty} \frac{1}{NK} \sum_{k=0}^{K-1} \left( \sum_{n=0}^{N-1} x[kN+n]\, e^{-j\omega n} \right) \left( \sum_{m=0}^{N-1} x[kN+m]\, e^{+j\omega m} \right)$$

$$= \frac{1}{N} \sum_{m=0}^{N-1} \sum_{n=0}^{N-1} \left[ \lim_{K\to\infty} \frac{1}{K} \sum_{k=0}^{K-1} x[kN+n]\, x[kN+m] \right] e^{-j\omega (n-m)}$$

The third of the listed consequences of ergodicity says that the bracketed term is the autocorrelation sequence $\phi_{xx}[n-m]$, so that

$$\tilde{\Phi}_{xx}(\omega) = \frac{1}{N} \sum_{m=0}^{N-1} \sum_{n=0}^{N-1} \phi_{xx}[n-m]\, e^{-j\omega (n-m)}$$

Now, since the summand of the double sum only depends on the difference $n-m$, we can express the double sum as the single weighted sum (where $q = n-m$)

$$\tilde{\Phi}_{xx}(\omega) = \frac{1}{N} \sum_{q=-(N-1)}^{N-1} (N-|q|)\, \phi_{xx}[q]\, e^{-j\omega q}$$

Finally, if the values of the autocorrelation sequence are only appreciable when $|q|$ is much smaller than $N$, then $N-|q| \approx N$ and

$$\tilde{\Phi}_{xx}(\omega) \approx \sum_{q=-(N-1)}^{N-1} \phi_{xx}[q]\, e^{-j\omega q} \approx \Phi_{xx}(\omega)$$

In other words, the estimated power spectrum is equal to the power spectrum of the random process.

Important note: In practice there are many ways to improve on the estimate described above. These include ways to improve statistical convergence (e.g. by using blocks of data that overlap), compensating for the factor $(N-|q|)/N$, and using parametric models for the Power Spectrum.
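As a small illustration of the "blocks of data that overlap" idea (a sketch under assumed parameters, not the notes' prescription), the estimator below averages windowed, 50%-overlapping block periodograms, each normalized by the window energy, and compares the result with the known power spectrum $\Phi_{xx}(\omega) = 1 + a^2 + 2a\cos\omega$ of the MA(1) test process used earlier. The function name welch_like_estimate, the Hann window, and the block sizes are choices made only for this example.

```python
import numpy as np

def welch_like_estimate(x, block_len, overlap, n_fft=None):
    """Average windowed, overlapping block periodograms (a Welch-style refinement
    of the non-overlapping block average described in the notes)."""
    if n_fft is None:
        n_fft = block_len
    win = np.hanning(block_len)
    step = block_len - overlap
    acc, count = np.zeros(n_fft), 0
    for start in range(0, len(x) - block_len + 1, step):
        seg = win * x[start:start + block_len]
        acc += np.abs(np.fft.fft(seg, n_fft)) ** 2 / np.sum(win ** 2)
        count += 1
    return acc / count

# MA(1) test process with known power spectrum 1 + a^2 + 2a*cos(w).
rng = np.random.default_rng(3)
a = 0.8
w = rng.standard_normal(100_001)
x = w[1:] + a * w[:-1]

est = welch_like_estimate(x, block_len=256, overlap=128)
omega = 2 * np.pi * np.arange(256) / 256
true = 1 + a**2 + 2 * a * np.cos(omega)
for k in (0, 64, 128):                    # estimate vs. true value at w = 0, pi/2, pi
    print(round(omega[k], 2), round(est[k], 2), round(true[k], 2))
```

Overlapping the blocks reuses samples that the window taper would otherwise largely discard, which improves statistical convergence for a fixed record length at little extra cost.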