Accepted 1991 February 26. Received 1991 February 15; in original form 1990 July 18
400 R. J. O'Dowd
upon both the accuracy desired in a numerically computed solution and upon the precision available on the computer employed. For example, if a computer provides t binary digits, and a solution with an accuracy, measured in terms of norms, is required to within 2^(−t'), then any condition number greater than 2^(t−t') may be considered indicative of an ill-conditioned numerical problem, because the desired accuracy cannot be guaranteed (e.g. Wilkinson 1961). In particular, if the condition number exceeds 2^t, then the matrix may be considered ill-conditioned because the solution cannot be guaranteed to any accuracy.

Concepts of stability most commonly seen in numerical analysis are based on the fundamental works of Wilkinson (1961, 1963, 1965). In Wilkinson's terminology, an

the other hand, may be proven by a backward-type error analysis to be strongly stable on the class of real matrices, and stable on the class of symmetric, positive definite, Toeplitz coefficient matrices.

In terms of Wiener filtering, the fact that the Wiener-Levinson algorithm is weakly stable means that, when the condition number of the autocorrelation matrix is not large, the Wiener-Levinson algorithm is guaranteed to produce a filter exhibiting small error. This means that in most practical cases, the issue of conditioning need not be one of concern. However, when the autocorrelation matrix is ill-conditioned, the Wiener-Levinson algorithm can no longer be proven to behave in a stable fashion, unlike
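The accuracy rule of thumb above can be made concrete with a small sketch (our illustration, not from the paper; the 2 × 2 symmetric Toeplitz example and the single-precision word length t = 24 are assumptions):

```python
import math

def toeplitz2_condition(r0, r1):
    """Spectral condition number of the 2 x 2 symmetric Toeplitz
    matrix [[r0, r1], [r1, r0]], whose eigenvalues are r0 +/- r1."""
    return (r0 + abs(r1)) / (r0 - abs(r1))

def guaranteed_bits(kappa, t=24):
    """Binary digits of solution accuracy that a Wilkinson-style
    analysis still guarantees on a t-digit machine: once the
    condition number reaches 2**t, no accuracy is guaranteed."""
    return max(0.0, t - math.log2(kappa))

for r1 in (0.5, 0.99, 0.9999999):
    kappa = toeplitz2_condition(1.0, r1)
    print(f"r1 = {r1}: kappa = {kappa:.3g}, "
          f"guaranteed bits = {guaranteed_bits(kappa):.1f}")
```

As the off-diagonal element approaches r0 the matrix approaches singularity and the guaranteed accuracy falls to zero, matching the criterion described in the text.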
[Figure: panels of filter values for the algorithms under comparison; vertical axes span −40 to 40, horizontal axes 10 to 50.]
This differs from the approach used by Treitel & Wang (1976), who stopped the iteration when the norm of the residual had reached a small level. The solution employed here introduces a cost greater than that involved with the approach of Treitel & Wang (1976). However, in terms of accuracy (as measured in terms of the norm of the residual) there is little difference between solutions which would be produced after 50 iterations and those produced by the approach of Treitel & Wang (1976). The formulation of the Conjugate Gradient algorithm guarantees that the norm of the residual does not increase from iteration to iteration. Because the Conjugate Gradient algorithm measures the accuracy of solutions produced in successive iterations in terms of the norm of the residual, the solution computed using the Conjugate Gradient algorithm in this article would also have a residual which would pass the error criterion of Treitel & Wang (1976). In reality, there are many solutions which satisfy a given error criterion. Depending upon the initial estimate of the solution, and the precision of the computer being employed, any of these solutions may be produced by the Conjugate Gradient algorithm. Details of these considerations are described by Varga (1963) and Wang & Treitel (1973).

It may be readily observed that the filter produced by Gaussian elimination in single precision is visually similar to the higher precision filter, while the filters produced by the Wiener-Levinson and conjugate gradient algorithms differ quite significantly, both in magnitudes of filter values and in observable oscillations. The summary statistics of Table 1 allow the observation that the Wiener-Levinson algorithm has produced results which are slightly more accurate than those of the conjugate gradient scheme. Both the Wiener-Levinson and conjugate gradient schemes exhibit a relative error of the order of 1000 per cent (in terms of error norms)
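The two stopping strategies discussed above can be compared with a short sketch (ours, not the paper's program; the 3 × 3 system is an arbitrary well-conditioned example). The residual-norm test plays the role of the Treitel & Wang (1976) criterion, while max_iter caps the iteration count:

```python
def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def norm(v):
    return sum(vi * vi for vi in v) ** 0.5

def conjugate_gradient(A, b, tol=1e-10, max_iter=50):
    """Conjugate Gradient for a symmetric positive definite A.
    Stops when the residual norm falls below tol (a Treitel & Wang
    style criterion) or after max_iter iterations."""
    n = len(b)
    x = [0.0] * n                      # initial estimate
    r = b[:]                           # residual b - A x
    p = r[:]
    rs_old = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        if norm(r) < tol:
            break
        Ap = matvec(A, p)
        alpha = rs_old / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        p = [ri + (rs_new / rs_old) * pi for ri, pi in zip(r, p)]
        rs_old = rs_new
    return x

# A small symmetric Toeplitz test system; exact solution [1, -1, 1].
A = [[2.0, 1.0, 0.0], [1.0, 2.0, 1.0], [0.0, 1.0, 2.0]]
b = [1.0, 0.0, 1.0]
x = conjugate_gradient(A, b)
```

Because the residual norm is monotone non-increasing, any iterate accepted by the fixed-iteration rule would also pass a residual-norm criterion of the same size.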
The Wiener-Levinson Algorithm 403
which means they have produced a much poorer solution.

Table 1. Summary statistics for filters.

                           Higher      Gaussian      Conjugate   Wiener-
                           precision   elimination   gradient    Levinson
  ||f||                    13.54       13.53         139.9       141.4
  ||f_a − f_h||            0           0.1289        146.8       137.8
  ||f_a − f_h|| / ||f_h||  0           0.009525      10.84       10.18
  min f                    −5.728      −5.725        −47.05      −41.03
  max f                    4.622       4.618         40.76       44.49

behave as stably as Gaussian elimination when solving ill-conditioned problems. The observation that the Wiener-Levinson algorithm performed better than the conjugate gradient scheme in this case indicates that the result of Treitel & Wang (1976), in which the conjugate gradient algorithm produced significantly better results, is not a general one: while the conjugate gradient algorithm may perform better than the Wiener-Levinson algorithm on ill-conditioned problems, there is no guarantee that it will do so.

Deconvolved outputs, produced by applying prediction error filters, are illustrated in Fig. 4. It may be readily observed that prediction error filters produced from the
[Figure 4. Deconvolved outputs; horizontal axes span samples 200 to 1000.]
until the nth order system is solved. The procedure, which is described in simple terms by Claerbout (1976), works in a recursive fashion requiring fewer arithmetic operations and less computer storage than algorithms like Gaussian elimination.

Consider the case where the right-hand side of the (k + 1)th-order Wiener equation is a positive spike, and f_0 = 1 is assumed:

  R f = (v_k, 0, . . . , 0)^T,   f_0 = 1,   (2)

where, in the case where the coefficient matrix is positive definite, the quantity v_k is denoted the 'prediction error variance'. Using D(k) to denote the determinant of the (k + 1)th-order system, it may be shown (e.g. Robinson 1967) that

  v_k = D(k)/D(k − 1).   (3)

Therefore, the determinant of the n by n coefficient matrix may be expressed as

  D(n − 1) = ∏_{k=0}^{n−1} v_k,   (4)

with D(k) > 0 ∀ k = 0, . . . , n − 1.

  κ(R) = λ_max/λ_min = λ_1/λ_n,

and from equation (6) it may be seen that

  0 < λ_n ≤ r_0.

The lower bound in this equation arises because R is being assumed positive definite. The upper bound may be obtained using a contradiction argument; substituting the assumption that the minimum eigenvalue satisfies

  λ_n > r_0   (8)

into equation (6) gives

  Σ_{k=2}^{n} λ_k < (n − 1) r_0.   (9)

There are n − 1 terms in the summation on the left-hand side of this equation; therefore at least one of the remaining eigenvalues, λ_k, k = 2, . . . , n, must be less than r_0 [i.e. λ_k < r_0 (k ≠ 1)], and the assumption of equation (8) is violated. In a similar fashion, it may be shown that

  r_0 ≤ λ_1 < n r_0.   (10)

The vectors in equation (2), with k = n − 1, may be expressed (e.g. Bellman 1960) in the forms

  Σ_{i=1}^{n} x_i e_{i,1} = 1,

  v_{n−1} = Σ_{i=1}^{n} λ_i x_i e_{i,1}.
As λ_i ≥ λ_n, these equations may be combined to obtain

  v_{n−1} = Σ_{i=1}^{n} λ_i x_i e_{i,1} ≥ λ_n Σ_{i=1}^{n} x_i e_{i,1} = λ_n,

which produces the result

  v_{n−1} ≥ λ_n.   (11)

Combining equations (9) to (11) it may be seen that

  λ_1 ≥ v_0 = r_0 ≥ v_{n−1} ≥ λ_n,   (12)

from which a lower bound for the spectral condition number, κ(R), may be written

  κ(R) = λ_1/λ_n ≥ r_0/v_{n−1}.

Table 2. Some computed prediction error variances for the previous example.

  k     Prediction error variance v_k
  0      1
  29     2.758 × 10^…
  30    −5.278 × 10^…
  31     1.651 × 10^…
  32     7.333 × 10^−5
  33    −2.293 × 10^…
  34     1.466 × 10^…
  35     7.575 × 10^−4
  36     2.368 × 10^…
  37    −2.716 × 10^−3
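The derivation above can be checked numerically with a small sketch (our illustration, not the paper's program): a generic Levinson-Durbin recursion returns the prediction error variances v_k, whose positivity is the natural test for numerical loss of positive definiteness, and for a 2 × 2 autocorrelation matrix the eigenvalues r0 ± r1 are available in closed form to verify the bound κ(R) ≥ r0/v_{n−1}:

```python
def levinson_durbin(r):
    """Generic Levinson-Durbin recursion for the autocorrelation
    sequence r[0..n].  Returns the prediction error filter a
    (with a[0] == 1) and the prediction error variances v[0..n]."""
    a = [1.0]
    v = [r[0]]                              # v_0 = r_0
    for k in range(1, len(r)):
        acc = sum(a[i] * r[k - i] for i in range(k))
        ref = -acc / v[-1]                  # reflection coefficient
        a_pad = a + [0.0]
        a = [a_pad[i] + ref * a_pad[k - i] for i in range(k + 1)]
        v.append(v[-1] * (1.0 - ref * ref))
    return a, v

r0, r1 = 1.0, 0.9
_, v = levinson_durbin([r0, r1])

# Positive variances: the matrix is numerically positive definite.
# A computed v_k <= 0 (as for v_30 in Table 2) signals trouble.
assert all(vk > 0.0 for vk in v)

# Eigenvalues of [[r0, r1], [r1, r0]] are r0 + r1 and r0 - r1.
lam_1, lam_n = r0 + r1, r0 - r1
kappa = lam_1 / lam_n

# The chain of equation (12) and the condition number lower bound.
assert lam_1 >= v[0] >= v[-1] >= lam_n
assert kappa >= r0 / v[-1]
```

Since each v_k is a ratio of determinants of positive definite systems, a non-positive computed variance can only arise from rounding error, which is why it serves as the simple ill-conditioning test mentioned in the conclusions.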
noted that the conjugate gradient method does not necessarily produce a filter of greater accuracy than does the Wiener-Levinson algorithm. Intermediate results of the Wiener-Levinson algorithm may be used in a simple test to determine when large errors may occur.

ACKNOWLEDGMENTS

The author wishes to acknowledge the reviewers of this paper, particularly the one who, while not offering complete agreement, pointed out an error in the derivation of equation (11) and provided the proof which appears herein.

REFERENCES

Levinson, N., 1946. The Wiener RMS (root mean square) error criterion in filter design and prediction, J. Math. Phys., 25, 261-278.
Martin, R., Peters, G. & Wilkinson, J., 1971. Symmetric decomposition of a positive definite matrix, Handbook for Automatic Computation, vol. 2, pp. 9-30, eds Wilkinson, J. & Reinsch, C., Springer-Verlag, Berlin.
O'Dowd, R., 1990. Ill-conditioning and pre-whitening in seismic deconvolution, Geophys. J. Int., 101, 489-491.
Ralston, A. & Rabinowitz, P., 1978. A First Course in Numerical Analysis, International Series in Pure and Applied Mathematics, 2nd edn, McGraw-Hill, New York.
Robinson, E. A., 1967. Multichannel Time Series Analysis with Digital Computer Programs, Holden Day, San Francisco.