(These slides: real-valued signals. See book for the complex case.)
AR model: $x[n] = -\sum_{k=1}^{p} a_k x[n-k] + e[n]$, with input $e[n]$
white noise: $E[e[n]] = 0$, $E[e[n]\,e[n-k]] = \sigma^2 \delta[k]$.
The input signal $e[n]$ is also known as the "innovation".
White implies that $e[n]$ is not correlated to past samples ($e[n-1]$, $e[n-2]$, etc.), and hence
is uncorrelated to $x[n-1]$, $x[n-2]$, etc.: $E[e[n]\,x[n-k]] = 0$ for $k \geq 1$.
Parametric spectrum estimation is: given the correlation sequence $r[k]$ of $x[n]$,
find the power spectrum $S_x(e^{j\omega})$. For this we need to find the $a_k$ and $\sigma^2$, i.e. model identification.
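As a concrete illustration (a sketch with assumed example values, not from the slides), an AR(2) realization can be generated by driving the model with white noise, and the whiteness property $E[e[n]\,x[n-k]] = 0$ for $k \geq 1$ checked numerically:

```python
import numpy as np

rng = np.random.default_rng(0)
a = [-0.9, 0.5]          # assumed AR(2) coefficients a_1, a_2 (stable example)
sigma2 = 1.0             # innovation variance
N = 200_000

e = rng.normal(0.0, np.sqrt(sigma2), N)   # white noise "innovation"
x = np.zeros(N)
for n in range(N):
    # AR model: x[n] = -a_1 x[n-1] - a_2 x[n-2] + e[n]
    x[n] = e[n]
    if n >= 1:
        x[n] -= a[0] * x[n - 1]
    if n >= 2:
        x[n] -= a[1] * x[n - 2]

# e[n] is uncorrelated with the past samples x[n-1], x[n-2], ...
c1 = np.mean(e[1:] * x[:-1])      # estimate of E[e[n] x[n-1]]
c2 = np.mean(e[2:] * x[:-2])      # estimate of E[e[n] x[n-2]]
c0 = np.mean(e * x)               # estimate of E[e[n] x[n]] = sigma^2
print(abs(c1) < 0.02, abs(c2) < 0.02)
```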
We are given a signal $x[n]$ and wish to predict the current sample as a linear
combination of $p$ past samples $x[n-1], \dots, x[n-p]$:
$$\hat{x}[n] = -\sum_{k=1}^{p} a_k x[n-k]$$
To estimate the optimal coefficients, we can minimize the prediction error
$$e[n] = x[n] - \hat{x}[n] = x[n] + \sum_{k=1}^{p} a_k x[n-k]$$
In a deterministic setting, we can minimize $\sum_n e^2[n]$.
In a stochastic setting, we minimize $E[e^2[n]]$.
This leads to the same equations as before (with $r[k] = E[x[n]\,x[n-k]]$ in the stochastic
case and the estimate $\hat{r}[k] = \sum_n x[n]\,x[n-k]$ in the deterministic case).
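In the deterministic setting this is a standard least-squares problem. A minimal sketch (the AR(2) test signal and its coefficients are assumed example values; `numpy.linalg.lstsq` does the minimization):

```python
import numpy as np

rng = np.random.default_rng(1)
# synthesize a test signal from an assumed AR(2) model with a = [-0.9, 0.5]
e = rng.normal(size=50_000)
x = np.zeros_like(e)
for n in range(len(x)):
    x[n] = e[n] + 0.9 * (x[n - 1] if n >= 1 else 0.0) - 0.5 * (x[n - 2] if n >= 2 else 0.0)

p = 2
# e[n] = x[n] + sum_k a_k x[n-k]: minimize sum_n e^2[n] over the a_k
X = np.column_stack([x[p - k:len(x) - k] for k in range(1, p + 1)])  # columns x[n-1], ..., x[n-p]
a_hat, *_ = np.linalg.lstsq(X, -x[p:], rcond=None)
sigma2_hat = np.mean((x[p:] + X @ a_hat) ** 2)   # prediction error power
print(a_hat, sigma2_hat)
```

With enough data, `a_hat` approaches the model coefficients and `sigma2_hat` approaches the innovation variance.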
4 the levinson algorithm October 2, 2011
From prediction filter to Yule-Walker equations
Model for $x[n]$:
$$x[n] + \sum_{k=1}^{p} a_k x[n-k] = e[n]$$
In general, a random process will not precisely satisfy such a model. For any
model order $p$, we try to find the best fitting parameters $a_1, \dots, a_p$ that will
minimize the residual noise variance $\sigma^2$. That is, we consider
$$e[n] = x[n] + \sum_{k=1}^{p} a_k x[n-k]$$
and minimize the power $E[e^2[n]]$ of the prediction error $e[n]$.
If the model holds, we can easily derive the Yule-Walker equations, as follows.
Multiply by $x[n-k]$ and take expectation:
$$r[k] + \sum_{i=1}^{p} a_i\, r[k-i] = E[e[n]\,x[n-k]]$$
For $k = 0, 1, \dots, p$ this gives, with $E[e[n]\,x[n-k]] = \sigma^2 \delta[k]$,
$$r[k] + \sum_{i=1}^{p} a_i\, r[k-i] = \sigma^2 \delta[k], \qquad k = 0, 1, \dots, p$$
Yule-Walker equations
In matrix form, for $k = 0, \dots, p$:
$$
\begin{bmatrix}
r[0] & r[1] & \cdots & r[p] \\
r[1] & r[0] & \cdots & r[p-1] \\
\vdots & \vdots & \ddots & \vdots \\
r[p] & r[p-1] & \cdots & r[0]
\end{bmatrix}
\begin{bmatrix} 1 \\ a_1 \\ \vdots \\ a_p \end{bmatrix}
=
\begin{bmatrix} \sigma^2 \\ 0 \\ \vdots \\ 0 \end{bmatrix}
$$
The matrix is constant along diagonals: a Toeplitz matrix. It is also symmetric,
because $r[-k] = r[k]$ for real-valued signals (see before).
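A direct solve of these equations can be sketched as follows; the correlation values are the exact correlation sequence of an assumed AR(2) model ($a = [-0.9, 0.5]$, $\sigma^2 = 1$), and solving recovers the model parameters:

```python
import numpy as np

# correlation sequence r[0], r[1], r[2] of an assumed AR(2) example
r = np.array([1 / 0.48, 0.6 / 0.48, 0.04 / 0.48])
p = 2

# Yule-Walker equations for k = 1..p:  sum_i a_i r[|k-i|] = -r[k]
R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])  # p x p Toeplitz block
a = np.linalg.solve(R, -r[1:])

# the k = 0 equation then gives the residual noise variance
sigma2 = r[0] + a @ r[1:]
print(a, sigma2)
```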
Yule-Walker equations
Model identification amounts to solving the above equation for $a_1, \dots, a_p$ and $\sigma^2$.
This has a complexity of order $p^3$ multiplications. The Levinson algorithm can
reduce this to order $p^2$, by exploiting the Toeplitz structure.
The algorithm is recursive in the model order: for $m = 0, 1, \dots, p$ it computes the
coefficients $a_1^{(m)}, \dots, a_m^{(m)}$ and modeling/prediction error power $\sigma_m^2$.
For order $m = 0$, the YW equations give $\sigma_0^2 = r[0]$ (there is no filter).
For order $m = 1$, we have $r[1] + a_1^{(1)} r[0] = 0$, or $a_1^{(1)} = -r[1]/r[0]$, and
$\sigma_1^2 = r[0] + a_1^{(1)} r[1] = r[0]\left(1 - (r[1]/r[0])^2\right)$.
For order $m+1$, combine the YW equations for $k = 0, \dots, m+1$ into a single matrix
equation of size $(m+2) \times (m+2)$:
$$
\begin{bmatrix}
r[0] & r[1] & \cdots & r[m+1] \\
r[1] & r[0] & \cdots & r[m] \\
\vdots & \vdots & \ddots & \vdots \\
r[m+1] & r[m] & \cdots & r[0]
\end{bmatrix}
\begin{bmatrix} 1 \\ a_1^{(m+1)} \\ \vdots \\ a_{m+1}^{(m+1)} \end{bmatrix}
=
\begin{bmatrix} \sigma_{m+1}^2 \\ 0 \\ \vdots \\ 0 \end{bmatrix}
$$
Call the Toeplitz matrix $R_{m+1}$; note that $R_m$ is its leading $(m+1) \times (m+1)$ submatrix.
Suppose we know the solution for order $m$; how can we find the solution for order $m+1$?
Take as trial solution for order $m+1$ the old solution for order $m$, extended by $0$:
$$
R_{m+1}
\begin{bmatrix} 1 \\ a_1^{(m)} \\ \vdots \\ a_m^{(m)} \\ 0 \end{bmatrix}
=
\begin{bmatrix} \sigma_m^2 \\ 0 \\ \vdots \\ 0 \\ \Delta_m \end{bmatrix},
\qquad
\Delta_m = r[m+1] + \sum_{i=1}^{m} a_i^{(m)}\, r[m+1-i]
$$
This is not the correct solution of the YW equations, because in general $\Delta_m \neq 0$.
We need to modify the trial solution such that the last RHS entry becomes $0$; in that case the RHS has the
required form and we have a solution to the YW equations of order $m+1$.
To modify the trial solution, we apply the following two tricks.
Trick 1: $R_{m+1}$ is Toeplitz and symmetric, hence $J R_{m+1} J = R_{m+1}$ for the exchange
matrix $J$ (ones on the antidiagonal). Therefore reversing the entries of a solution vector
also reverses the RHS. Applied to the trial solution:
$$
R_{m+1}
\begin{bmatrix} 0 \\ a_m^{(m)} \\ \vdots \\ a_1^{(m)} \\ 1 \end{bmatrix}
=
\begin{bmatrix} \Delta_m \\ 0 \\ \vdots \\ 0 \\ \sigma_m^2 \end{bmatrix}
$$
Trick 2: The two equations can be combined:
$$
R_{m+1}\left(
\begin{bmatrix} 1 \\ a_1^{(m)} \\ \vdots \\ a_m^{(m)} \\ 0 \end{bmatrix}
+ k_{m+1}
\begin{bmatrix} 0 \\ a_m^{(m)} \\ \vdots \\ a_1^{(m)} \\ 1 \end{bmatrix}
\right)
=
\begin{bmatrix} \sigma_m^2 \\ 0 \\ \vdots \\ 0 \\ \Delta_m \end{bmatrix}
+ k_{m+1}
\begin{bmatrix} \Delta_m \\ 0 \\ \vdots \\ 0 \\ \sigma_m^2 \end{bmatrix}
$$
for any scalar $k_{m+1}$.
Levinson algorithm
Recall the two trial solutions:
$$
R_{m+1}
\begin{bmatrix} 1 \\ a_1^{(m)} \\ \vdots \\ a_m^{(m)} \\ 0 \end{bmatrix}
=
\begin{bmatrix} \sigma_m^2 \\ 0 \\ \vdots \\ 0 \\ \Delta_m \end{bmatrix},
\qquad
R_{m+1}
\begin{bmatrix} 0 \\ a_m^{(m)} \\ \vdots \\ a_1^{(m)} \\ 1 \end{bmatrix}
=
\begin{bmatrix} \Delta_m \\ 0 \\ \vdots \\ 0 \\ \sigma_m^2 \end{bmatrix}
$$
Both trial solutions are not correct solutions of the YW equations of order $m+1$,
because $\Delta_m \neq 0$. But we can take linear combinations of the two trial solutions:
$$
R_{m+1}\left(
\begin{bmatrix} 1 \\ a_1^{(m)} \\ \vdots \\ a_m^{(m)} \\ 0 \end{bmatrix}
+ k_{m+1}
\begin{bmatrix} 0 \\ a_m^{(m)} \\ \vdots \\ a_1^{(m)} \\ 1 \end{bmatrix}
\right)
=
\begin{bmatrix} \sigma_m^2 + k_{m+1} \Delta_m \\ 0 \\ \vdots \\ 0 \\ \underbrace{\Delta_m + k_{m+1} \sigma_m^2}_{\textstyle = 0} \end{bmatrix}
$$
For $k_{m+1} = -\Delta_m / \sigma_m^2$, this will bring the RHS into the required form. The LHS is then the
solution of order $m+1$:
$$
\begin{bmatrix} 1 \\ a_1^{(m+1)} \\ \vdots \\ a_m^{(m+1)} \\ a_{m+1}^{(m+1)} \end{bmatrix}
=
\begin{bmatrix} 1 \\ a_1^{(m)} \\ \vdots \\ a_m^{(m)} \\ 0 \end{bmatrix}
+ k_{m+1}
\begin{bmatrix} 0 \\ a_m^{(m)} \\ \vdots \\ a_1^{(m)} \\ 1 \end{bmatrix},
\qquad
\sigma_{m+1}^2 = \sigma_m^2 + k_{m+1} \Delta_m = \sigma_m^2 \left(1 - k_{m+1}^2\right)
$$
This is the update step in the Levinson recursion. The recursion is initiated by $\sigma_0^2 = r[0]$.
With this, we enter the recursion. In the next step, we obtain the equations for order $1$, etc.
The $m$th step in the recursion has complexity $O(m)$. Overall, the complexity of solving
the order-$p$ YW equations is $O(p^2)$ multiplications.
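The full recursion can be sketched as follows (a Levinson-Durbin implementation under the sign conventions used here; the correlation values are the exact sequence of an assumed AR(2) example with $a = [-0.9, 0.5]$, $\sigma^2 = 1$):

```python
import numpy as np

def levinson(r):
    """Solve the order-p Yule-Walker equations given r = [r[0], ..., r[p]].
    Returns (a, k, sigma2): the AR coefficient vector [1, a_1, ..., a_p],
    the reflection coefficients k_1..k_p, and the error powers sigma2[0..p].
    Each step m costs O(m), so the total is O(p^2)."""
    p = len(r) - 1
    a = np.zeros(p + 1)
    a[0] = 1.0                       # current solution [1, a_1^(m), ..., a_m^(m)]
    k = np.zeros(p + 1)
    sigma2 = np.zeros(p + 1)
    sigma2[0] = r[0]                 # order 0: no filter
    for m in range(p):
        # mismatch Delta_m of the zero-extended trial solution
        delta = r[m + 1] + a[1:m + 1] @ r[m:0:-1]
        k[m + 1] = -delta / sigma2[m]
        # update: a^(m+1) = [a^(m); 0] + k_{m+1} * [0; reversed a^(m)]
        rev = a[m::-1].copy()
        a[1:m + 2] += k[m + 1] * rev
        sigma2[m + 1] = sigma2[m] * (1.0 - k[m + 1] ** 2)
    return a, k, sigma2

# exact correlations of the assumed AR(2) example
r = np.array([1 / 0.48, 0.6 / 0.48, 0.04 / 0.48])
a, k, sigma2 = levinson(r)
print(a)        # approx [1, -0.9, 0.5]
print(sigma2)   # decreasing error powers, final value approx 1.0
```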
Assume the correlation sequence is valid, i.e., the covariance matrix is strictly positive
definite (all its eigenvalues are positive). Then $\sigma_m^2 > 0$ for all $m$. This follows
from the YW equations:
$$
R_m
\begin{bmatrix} 1 \\ a_1^{(m)} \\ \vdots \\ a_m^{(m)} \end{bmatrix}
=
\begin{bmatrix} \sigma_m^2 \\ 0 \\ \vdots \\ 0 \end{bmatrix}
\quad \Rightarrow \quad
\begin{bmatrix} 1 \\ a_1^{(m)} \\ \vdots \\ a_m^{(m)} \end{bmatrix}
= \sigma_m^2 \, R_m^{-1}
\begin{bmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{bmatrix}
$$
Since $R_m$ is strictly positive definite (hence
invertible), the inverse $R_m^{-1}$ exists and is also strictly positive definite. This implies,
comparing the first entries, $\sigma_m^2 = 1 / [R_m^{-1}]_{11} > 0$.
The update equation is $\sigma_{m+1}^2 = \sigma_m^2 (1 - k_{m+1}^2)$; since both error powers are positive,
it follows that $|k_{m+1}| < 1$.
From this we also see that $\sigma_{m+1}^2 \leq \sigma_m^2$: the modeling/prediction error always de-
creases.
It may happen that $|k_{m+1}| = 1$; then $\sigma_{m+1}^2 = 0$ and the order-$(m+1)$ model fits exactly.
At this point the recursion stops (we can take $k_{m+2} = k_{m+3} = \cdots = 0$, so that the
AR coefficients do not change anymore).
This can occur only if $R_p$ is singular (thus the matrix is not strictly positive definite).
Szegö polynomials
Define $A_m(z) = 1 + \sum_{k=1}^{m} a_k^{(m)} z^{-k}$ and its reverse $\tilde{A}_m(z) = z^{-m} A_m(z^{-1})$.
This allows to write the Levinson recursion compactly as
$$
A_{m+1}(z) = A_m(z) + k_{m+1}\, z^{-1} \tilde{A}_m(z), \qquad
\tilde{A}_{m+1}(z) = z^{-1} \tilde{A}_m(z) + k_{m+1}\, A_m(z),
$$
with initialization ($m = 0$) as $A_0(z) = \tilde{A}_0(z) = 1$.
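The compact recursion operates directly on coefficient vectors. A sketch (the reflection coefficients $k = [-0.6, 0.5]$ are an assumed example, for which $A_2(z) = 1 - 0.9 z^{-1} + 0.5 z^{-2}$):

```python
import numpy as np

k = [-0.6, 0.5]                      # assumed reflection coefficients k_1, k_2
A = np.array([1.0])                  # A_0(z) = 1
At = np.array([1.0])                 # Atilde_0(z) = 1
for km in k:
    zAt = np.concatenate([[0.0], At])    # z^-1 Atilde_m(z)
    Az = np.concatenate([A, [0.0]])      # A_m(z), zero-padded to the new degree
    # A_{m+1} = A_m + k z^-1 Atilde_m ;  Atilde_{m+1} = z^-1 Atilde_m + k A_m
    A, At = Az + km * zAt, zAt + km * Az
print(A)    # coefficients of A_2(z), lowest power of z^-1 first
```

Note that at every order, `At` stays the coefficient-reverse of `A`, as the definition requires.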
The Levinson algorithm
Thus, the resulting system has impulse response given by the coefficients of $A_p(z)$ (and also
those of $\tilde{A}_p(z)$, not used).
If we use this system with input $x[n]$, we obtain the prediction error sequence $e[n]$;
see Slide 4.
The filter structure is known as a lattice filter; note it is an FIR filter.
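A sketch of this lattice structure (the reflection coefficients $k = [-0.6, 0.5]$ are an assumed example, corresponding to $A(z) = 1 - 0.9 z^{-1} + 0.5 z^{-2}$), checked against direct FIR filtering:

```python
import numpy as np

def lattice_analysis(x, k):
    """FIR lattice filter: compute the prediction error e[n] = f_p[n]
    from x[n] using only the reflection coefficients k_1..k_p."""
    f = np.array(x, dtype=float)    # forward error, order 0: x[n]
    b = f.copy()                    # backward error, order 0: x[n]
    for km in k:
        b_del = np.concatenate([[0.0], b[:-1]])   # b_{m-1}[n-1]
        # f_m[n] = f_{m-1}[n] + k_m b_{m-1}[n-1]
        # b_m[n] = b_{m-1}[n-1] + k_m f_{m-1}[n]
        f, b = f + km * b_del, b_del + km * f
    return f

x = np.random.default_rng(2).normal(size=1000)
e = lattice_analysis(x, [-0.6, 0.5])

# reference: direct FIR filtering with A(z) = 1 - 0.9 z^-1 + 0.5 z^-2
e_direct = (x
            + np.concatenate([[0.0], -0.9 * x[:-1]])
            + np.concatenate([[0.0, 0.0], 0.5 * x[:-2]]))
print(np.allclose(e, e_direct))   # True
```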
The recursion can be reversed as follows:
$$
A_m(z) = \frac{A_{m+1}(z) - k_{m+1}\, \tilde{A}_{m+1}(z)}{1 - k_{m+1}^2}, \qquad
z^{-1} \tilde{A}_m(z) = \frac{\tilde{A}_{m+1}(z) - k_{m+1}\, A_{m+1}(z)}{1 - k_{m+1}^2}
$$
The Levinson algorithm
This allows to compute $x[n]$ from $e[n]$: the filter transfer function is $1/A_p(z)$. This is
an IIR filter; the filter structure is recursive, i.e., the filter coefficients of $1/A_p(z)$ are not
explicitly computed.
It can be shown that the filter is stable as long as all reflection coefficients are strictly
smaller than $1$ in magnitude ($|k_m| < 1$). This is the case if the original covariance matrix is strictly positive definite.
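A sketch of this recursive (IIR) synthesis lattice, run sample by sample on the reflection coefficients alone ($k = [-0.6, 0.5]$ is an assumed example); an analysis/synthesis round trip recovers the input:

```python
import numpy as np

def lattice_synthesis(e, k):
    """IIR lattice filter 1/A(z): reconstruct x[n] from the prediction
    error e[n], using only the reflection coefficients k_1..k_p
    (stable when all |k_m| < 1)."""
    p = len(k)
    b = np.zeros(p + 1)        # b[m] holds the state b_m[n-1]
    x = np.zeros(len(e))
    for n in range(len(e)):
        f = e[n]                              # f_p[n]
        for m in range(p, 0, -1):
            f = f - k[m - 1] * b[m - 1]       # f_{m-1}[n] = f_m[n] - k_m b_{m-1}[n-1]
            b[m] = b[m - 1] + k[m - 1] * f    # b_m[n] = b_{m-1}[n-1] + k_m f_{m-1}[n]
        b[0] = f                              # b_0[n] = f_0[n] = x[n]
        x[n] = f
    return x

k = [-0.6, 0.5]
x0 = np.random.default_rng(3).normal(size=500)

# forward direction: FIR analysis lattice, as a reference
f = x0.copy(); b = x0.copy()
for km in k:
    b_del = np.concatenate([[0.0], b[:-1]])
    f, b = f + km * b_del, b_del + km * f

x1 = lattice_synthesis(f, k)     # invert: e[n] -> x[n]
print(np.allclose(x0, x1))       # True
```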
If we replace $e[n]$ by any white noise sequence (with variance $\sigma_p^2$), then the resulting
output signal is a random process with the same correlation sequence as $x[n]$.
At the transmitter, the speech signal is split into short frames, each of 20 ms. For
each frame, the correlation sequence $r[0], \dots, r[p]$ is estimated. Using the
Levinson algorithm, the reflection coefficients $k_1, \dots, k_p$ and residual noise power $\sigma_p^2$ are
estimated, coded and transmitted to the receiver.
At the receiver, a synthetic signal is generated by driving the synthesis filter with white
noise of power $\sigma_p^2$.
It is not the same as the original speech signal, but has the same correlation
sequence, which is good enough for the ear (at least for unvoiced speech).
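The whole transmitter/receiver chain can be sketched for a single frame (a synthetic AR(2) signal stands in for an unvoiced speech frame; frame length, model order, and coefficients are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)

# one 20 ms "frame" at 8 kHz = 160 samples; a synthetic AR(2) stand-in
# for an unvoiced speech frame (illustrative, not real speech data)
e = rng.normal(size=160)
frame = np.zeros(160)
for n in range(160):
    frame[n] = e[n] + 0.9 * (frame[n - 1] if n >= 1 else 0.0) \
                    - 0.5 * (frame[n - 2] if n >= 2 else 0.0)

p = 2
N = len(frame)
# transmitter: estimate the correlation sequence r[0..p] from the frame
r = np.array([frame[:N - m] @ frame[m:] / N for m in range(p + 1)])

# solve the Yule-Walker equations for this frame (direct solve here;
# the Levinson algorithm would give the same result in O(p^2))
R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
a = np.linalg.solve(R, -r[1:])
sigma2 = r[0] + a @ r[1:]

# receiver: drive the all-pole synthesis filter with fresh white noise
w = rng.normal(0.0, np.sqrt(sigma2), 100_000)
y = np.zeros(len(w))
for n in range(len(y)):
    y[n] = w[n]
    for i in range(1, p + 1):
        if n >= i:
            y[n] -= a[i - 1] * y[n - i]

# y is not the original frame, but reproduces its correlation sequence
r_y = np.array([y[:len(y) - m] @ y[m:] / len(y) for m in range(p + 1)])
print(r, r_y)
```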