
Geophys. J. Int. (1991) 106, 399-406

The Wiener-Levinson algorithm and ill-conditioned normal equations


R. J. O'Dowd*
Department of Geology and Geophysics, The University of Adelaide, South Australia 5001, Australia

* Now at: Acoustic Surveillance Composite, Maritime Systems Division, WSRL Bldg 64 TSAN, PO Box 1700, Salisbury, South Australia 5108, Australia.

Accepted 1991 February 26. Received 1991 February 15; in original form 1990 July 18



SUMMARY
Treitel & Wang (1976) noted that, when autocorrelation matrices are ill-
conditioned, elements of Wiener filters are significantly different when the normal
equations are solved on different computers. They presented an example in which
the Wiener-Levinson algorithm produced a prediction filter exhibiting significant
error. In recent years there has been controversy in mathematical literature relating
t o stability of algorithms, such as the Wiener-Levinson algorithm, for solving linear
systems of equations with a Toeplitz coefficient matrix. In this paper, it is argued
that poor-quality results produced by the Wiener-Levinson algorithm, when applied
to problems exhibiting an ill-conditioned autocorrelation matrix, may be attributed
to stability properties of that algorithm. An example is presented, comparing the
results of Gaussian elimination, the Wiener-Levinson algorithm, and the conjugate
gradient algorithm. The use of intermediate results of the Wiener-Levinson
algorithm to detect ill-conditioned normal equations is discussed.
Key words: conditioning, linear equations, stability, Toeplitz.

STABILITY OF TOEPLITZ ALGORITHMS

A number of algorithms for solving linear equations involving Toeplitz coefficient matrices, or inverting Toeplitz matrices, are described in the literature, e.g. Levinson (1946), Trench (1964), Zohar (1974). All are based on partitioning of the matrices, which results in a recursive property in which the solution of a system of order n - 1 is used to compute the solution of a system of order n. Because of this property, these algorithms require much less storage and computer time than do classical algorithms such as Gaussian elimination. This is the main reason why algorithms of this type are employed for seismic deconvolution (e.g. Robinson 1967; Claerbout 1976). They are direct algorithms which produce the exact solution, if it exists, after a fixed number of steps, provided all calculations are done exactly. However, when solution is performed on a computer, calculations may be inexact due to the occurrence of round-off. This means that there can be no guarantee, in general, that these algorithms will produce satisfactory results when utilized on a computer.

The (spectral) condition number of the autocorrelation matrix is defined as

    κ = λ_max / λ_min,    (1)

where λ_max and λ_min are, respectively, the largest and smallest eigenvalues of the autocorrelation matrix. An autocorrelation matrix with a 'large' condition number is referred to as ill-conditioned. If an autocorrelation matrix is not ill-conditioned, it is referred to as well-conditioned. An example of an ill-conditioned autocorrelation matrix is described by Treitel & Wang (1976). A survey of causes of ill-conditioned autocorrelation matrices is given by O'Dowd (1990).

As solution of the normal equations is commonly performed on a computer, issues of stability of algorithms become paramount. An algorithm for solving linear systems is described as unstable if cases exist in which that algorithm is incapable of producing a solution which is 'acceptably close' to the true, analytic, solution even when the coefficient matrix is not ill-conditioned. For example, there is no guarantee that Gaussian elimination, performed without any pivoting, can produce any solution at all, even if the coefficient matrix is well-conditioned. On the other hand, Gaussian elimination with partial or complete pivoting will always produce a solution to acceptable accuracy when the matrix is well-conditioned. Examples of this type of behaviour of Gaussian elimination are described in many basic texts, e.g. Gerald & Wheatley (1984), as a justification for the use of pivoting. This behaviour means that Gaussian elimination without pivoting is unstable, while Gaussian elimination with pivoting exhibits some form of stability.
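The effect of pivoting can be seen in a two-line experiment; the following is a minimal sketch (assuming NumPy; the 2 × 2 system is a standard textbook illustration, not from this paper):

```python
import numpy as np

# A well-conditioned system on which elimination *without* pivoting fails:
# the tiny pivot forces a huge multiplier, and round-off then destroys the
# information carried by the second row.
A = np.array([[1e-17, 1.0],
              [1.0,   1.0]])
b = np.array([1.0, 2.0])          # true solution is approximately (1, 1)

m = A[1, 0] / A[0, 0]             # multiplier ~ 1e17
u22 = A[1, 1] - m * A[0, 1]       # fl(1 - 1e17) = -1e17
y2 = b[1] - m * b[0]              # fl(2 - 1e17) = -1e17
x2 = y2 / u22                     # 1.0
x1 = (b[0] - A[0, 1] * x2) / A[0, 0]   # 0.0, a large error despite kappa ~ 2.6

print("no pivoting   :", x1, x2)
print("with pivoting :", np.linalg.solve(A, b))  # LAPACK partial pivoting
print("condition no. :", np.linalg.cond(A))
```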
The definition of a 'large' condition number is dependent upon both the accuracy desired in a numerically computed solution and upon the precision available on the computer employed. For example, if a computer provides t binary digits, and a solution with an accuracy, measured in terms of norms, of 2^(-s) is required, then any condition number greater than 2^(t-s) may be considered indicative of an ill-conditioned numerical problem, because the desired accuracy cannot be guaranteed (e.g. Wilkinson 1961). In particular, if κ > 2^t, then the matrix may be considered ill-conditioned because a solution cannot be guaranteed to any accuracy at all.

Concepts of stability most commonly seen in numerical analysis are based on the fundamental works of Wilkinson (1961, 1963, 1965). In Wilkinson's terminology, an algorithm is said to be stable on a class of matrices, 𝒜, if, when solving a linear system

    A x = b,

where A ∈ 𝒜, the computed solution is actually the solution of a linear system

    A_s x = b_s,

where A_s is a matrix 'close' to A and b_s is 'close' to b. Stewart (1973) restricts Wilkinson's definition of stability even further, specifying that, in addition to A_s being close to A and b_s being close to b, A_s is in the same matrix class, 𝒜, as the matrix A. For example, if A is Toeplitz, then A_s is also Toeplitz. In the terminology of Bunch (1987), an algorithm which satisfies Stewart's form of stability is described as 'strongly stable'.

Bunch (1985) noted that algorithms based on partitioning, such as the Wiener-Levinson algorithm, are, in general, unstable when applied to systems involving a Toeplitz coefficient matrix which is not symmetric and positive definite; this means that, even when the coefficient matrix is well-conditioned, there is no guarantee that these algorithms will produce results with small error. Fortunately, in seismic deconvolution we are concerned with autocorrelation matrices, which are symmetric and positive definite in most practical cases (in general, the autocorrelation matrix is only guaranteed to be positive semi-definite, but autocorrelation matrices which are not positive definite are unusual in practice). This means that we may ignore cases in which the coefficient matrix is not symmetric and positive definite.

On a more positive note, Cybenko (1980) claimed that the Levinson-Durbin algorithm is stable for the class of symmetric, positive definite, Toeplitz coefficient matrices, allowing the conclusion that other algorithms, such as the Wiener-Levinson algorithm, are also stable. However, a backward-type error analysis in the sense of the works of Wilkinson (1961, 1963, 1965) was not performed. In fact, it was noted that a proof of stability is not possible using a backward-type error analysis. This raised some controversy, because Cybenko had not actually demonstrated that these algorithms are stable in the sense of the works of Wilkinson. Bunch (1987) observed that Cybenko had actually proven that these algorithms produce solutions exhibiting small error if the coefficient matrix is symmetric, Toeplitz, and well-conditioned. In terms of Bunch's paper, these algorithms are described as 'weakly stable' on the class of symmetric, positive definite, Toeplitz matrices. This result is weaker than the definition of stability employed by the works of Wilkinson. Gaussian elimination with pivoting, on the other hand, may be proven by a backward-type error analysis to be strongly stable on the class of real matrices, and stable on the class of symmetric, positive definite, Toeplitz coefficient matrices.

In terms of Wiener filtering, the fact that the Wiener-Levinson algorithm is weakly stable means that, when the condition number of the autocorrelation matrix is not large, the Wiener-Levinson algorithm is guaranteed to produce a filter exhibiting small error. This means that in most practical cases, the issue of conditioning need not be one of concern. However, when the autocorrelation matrix is ill-conditioned, the Wiener-Levinson algorithm can no longer be proven to behave in a stable fashion, unlike Gaussian elimination, which can. It must be noted, however, that the fact that the Wiener-Levinson algorithm has been proven to be weakly stable does not discount the possibility that it may be stable (or even strongly stable) on the class of symmetric, positive definite Toeplitz matrices. Such results remain to be proven or disproven.

The statement that an algorithm exhibits some form of stability, as defined here, only allows conclusions about the possible sets of problems in which large errors may occur. The definitions of the different forms of stability (on a given matrix class) are independent of the particular problem being solved. None of the forms of stability discussed here implies that large errors in a computed solution cannot occur. For example, an unstable algorithm may produce large error in a solution to a well-conditioned problem. A weakly stable algorithm is guaranteed to produce a solution exhibiting small error if the problem is well-conditioned, but may produce a large error on an ill-conditioned problem. If an algorithm is stable, small errors are guaranteed on a larger set of problems than would be the case if that algorithm were only weakly stable. Even an algorithm which is strongly stable (e.g. Gaussian elimination with pivoting) may produce solutions exhibiting large error; the set of problems on which Gaussian elimination will compute a solution exhibiting large error is smaller than the set of problems on which a weakly stable algorithm may compute a solution with large error. None of the algorithms employed in the example of the next section is unstable; all are guaranteed to produce a solution to the normal equations exhibiting small error if the autocorrelation matrix is well-conditioned.
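Equation (1) and the 2^t rule of thumb are straightforward to evaluate in practice; a minimal sketch (assuming NumPy/SciPy; the trace below is a synthetic, hypothetical stand-in):

```python
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(0)
x = rng.standard_normal(500)          # hypothetical stand-in trace

# Biased sample autocorrelation at lags 0..49; the biased estimate keeps
# the resulting Toeplitz matrix positive semi-definite.
n = 50
r = np.array([x[: len(x) - k] @ x[k:] for k in range(n)]) / len(x)
R = toeplitz(r)

lam = np.linalg.eigvalsh(R)           # eigenvalues in ascending order
kappa = lam[-1] / lam[0]              # equation (1)

# With t = 23 mantissa bits (IEEE single precision), kappa > 2**23 means
# no accuracy can be guaranteed in a single precision solution.
print(kappa, kappa > 2.0 ** 23)
```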
AN EXAMPLE

It has been pointed out that the Wiener-Levinson algorithm may not be as stable as Gaussian elimination on the class of symmetric, positive definite, Toeplitz matrices. In this section, a synthetic example is presented to compare prediction filters produced by the Wiener-Levinson algorithm with those of Gaussian elimination and the conjugate gradient algorithm of Hestenes & Stiefel (1952). Deconvolved traces will also be presented to demonstrate that the poor-quality filters affect the quality of deconvolved traces. Treitel & Wang (1976) provided an example in which solutions produced by the Wiener-Levinson algorithm were compared with those produced by the conjugate gradient algorithm. However, the filters of Treitel & Wang (1976) were not compared with results computed using higher precision, or with classical techniques such as Gaussian elimination, as will be performed here. Gaussian elimination produces a solution at greater cost (e.g. more arithmetic operations) than do the approaches employed by Treitel & Wang (1976). The purpose behind the presentation of filters produced by Gaussian elimination, in this article, is to compare the filters which could be produced by the different algorithms when applied to ill-conditioned problems.

A synthetic trace, with a 4 ms sampling rate, was generated by convolving the wavelet and impulse response of Fig. 1. This trace was convolved with a chirp sweep signal with a 4 s duration and a linearly varying frequency between 25 and 85 Hz, thereby synthesizing a vibroseis trace. This vibroseis trace was cross-correlated with the sweep signal to produce a cross-correlation. This procedure for simulating a vibroseis cross-correlation is described by Yilmaz (1987). The first 1000 ms windows of the cross-correlation, and the autocorrelation of the entire cross-correlation, are illustrated in Fig. 2.

[Figure 1. Synthetic wavelet (approximately minimum phase) and impulse response, plotted against lag (ms).]

[Figure 2. First 1000 ms window of the cross-correlation and normalized autocorrelation functions, plotted against lag (ms).]

Prediction filters, computed using different algorithms, are illustrated in Fig. 3. The filter being computed is a 50 element prediction filter for a prediction distance of 12 ms, corresponding to three sampling units, which is the lag of the first zero crossing of the autocorrelation. The condition number of the corresponding normal equations is approximately 6.3213 × 10^6.

For the purposes of comparison, all computations were performed on the same computer, which provides a 23 binary digit mantissa in single precision floating point numbers. The higher precision solution was obtained by employing Gaussian elimination with higher precision (55 binary digit) floating point numbers. The conjugate gradient algorithm, which is an iterative scheme, was applied with a zero starting vector, and 50 iterations were performed, which (in theory) should produce the exact filter.
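The synthetic input described above can be reproduced in outline; the sketch below uses a made-up wavelet and reflectivity in place of those of Fig. 1 (assuming NumPy/SciPy):

```python
import numpy as np
from scipy.signal import chirp

dt = 0.004                                  # 4 ms sampling rate
rng = np.random.default_rng(1)

# Hypothetical sparse impulse response and decaying wavelet.
reflectivity = np.zeros(500)
reflectivity[rng.choice(500, 20, replace=False)] = rng.standard_normal(20)
t_w = np.arange(40) * dt
wavelet = np.exp(-20.0 * t_w) * np.sin(2.0 * np.pi * 40.0 * t_w)

trace = np.convolve(reflectivity, wavelet)

# 4 s linear sweep, 25-85 Hz, then cross-correlation with the sweep
# (the vibroseis simulation of Yilmaz 1987).
t_s = np.arange(0.0, 4.0, dt)
sweep = chirp(t_s, f0=25.0, t1=4.0, f1=85.0, method='linear')
vibroseis = np.convolve(trace, sweep)
xcorr = np.correlate(vibroseis, sweep, mode='full')[len(sweep) - 1:]

# Autocorrelation of the cross-correlation, normalized to unit zero lag.
acorr = np.correlate(xcorr, xcorr, mode='full')[len(xcorr) - 1:]
acorr /= acorr[0]
```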
[Figure 3. Prediction filters produced by the different algorithms: higher precision solution, Wiener-Levinson algorithm, conjugate gradient, and Gaussian elimination (filter coefficient index 1-50).]

This differs from the approach used by Treitel & Wang (1976), who stopped the iteration when the norm of the residual had reached a small level. The solution employed here introduces a cost greater than that involved with the approach of Treitel & Wang (1976). However, in terms of accuracy (as measured in terms of the norm of the residual) there is little difference between solutions which would be produced after 50 iterations, and those produced by the approach of Treitel & Wang (1976). The formulation of the conjugate gradient algorithm guarantees that the norm of the residual does not increase from iteration to iteration. Because the conjugate gradient algorithm measures the accuracy of solutions produced in successive iterations in terms of the norm of the residual, the solution computed using the conjugate gradient algorithm in this article would also have a residual which would pass the error criterion of Treitel & Wang (1976). In reality, there are many solutions which satisfy a given error criterion. Depending upon the initial estimate of the solution, and the precision of the computer being employed, any of these solutions may be produced by the conjugate gradient algorithm. Details of these considerations are described by Varga (1963), and Wang & Treitel (1973).
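For reference, the Hestenes & Stiefel (1952) iteration with the residual-norm stopping rule just described can be sketched as follows (the tolerance is a hypothetical choice):

```python
import numpy as np

def conjugate_gradient(R, g, tol=1e-6, max_iter=None):
    """Conjugate gradients for a symmetric positive definite R, stopping
    when the residual norm falls below tol * ||g|| (or after max_iter
    steps; n steps give the exact answer in exact arithmetic)."""
    n = len(g)
    max_iter = n if max_iter is None else max_iter
    f = np.zeros(n)                  # zero starting vector, as in the text
    r = g - R @ f                    # residual
    p = r.copy()                     # search direction
    for _ in range(max_iter):
        if np.linalg.norm(r) <= tol * np.linalg.norm(g):
            break
        Rp = R @ p
        alpha = (r @ r) / (p @ Rp)
        f += alpha * p
        r_new = r - alpha * Rp
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return f
```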
It may be readily observed that the filter produced by Gaussian elimination in single precision is visually similar to the higher precision filter, while the filters produced by the Wiener-Levinson and conjugate gradient algorithms differ quite significantly, both in the magnitudes of the filter values and in the observable oscillations. The summary statistics of Table 1 allow the observation that the Wiener-Levinson algorithm has produced results which are slightly more accurate than those of the conjugate gradient scheme. Both the Wiener-Levinson and conjugate gradient schemes exhibit a relative error of the order of 1000 per cent (in terms of error norms), which means they have produced a much poorer solution than has Gaussian elimination, which exhibits a relative error slightly less than 1 per cent. The observation that the Wiener-Levinson algorithm produces a filter of much poorer accuracy than Gaussian elimination lends support to the possibility that the Wiener-Levinson algorithm does not behave as stably as Gaussian elimination when solving ill-conditioned problems. The observation that the Wiener-Levinson algorithm performed better than the conjugate gradient scheme in this case indicates that the result of Treitel & Wang (1976), in which the conjugate gradient algorithm produced significantly better results, is not a general one: while the conjugate gradient algorithm may perform better than the Wiener-Levinson algorithm on ill-conditioned problems, there is no guarantee that it will do so.

Table 1. Summary statistics for the filters; f_h is the higher precision filter and f_a the filter computed by each algorithm.

                             Higher            Gaussian      Conjugate    Wiener-
                             precision (f_h)   elimination   gradient     Levinson
    ||f_a||                  13.54             13.53         139.9        141.4
    ||f_a - f_h||            0                 0.1289        146.8        137.8
    ||f_a - f_h|| / ||f_h||  0                 0.009525      10.84        10.18
    min f_a                  -5.728            -5.725        -47.05       -41.03
    max f_a                  4.622             4.618         40.76        44.49

Deconvolved outputs, produced by applying prediction error filters, are illustrated in Fig. 4. It may be readily observed that the prediction error filters produced from the prediction filters of the Wiener-Levinson and conjugate gradient algorithms have been much less effective than that of Gaussian elimination. This means that ill-conditioning in the normal equations may propagate through computations, resulting in a relatively poor-quality deconvolved output.

[Figure 4. First 1000 ms window of the deconvolved outputs: higher precision solution, Wiener-Levinson algorithm, conjugate gradient, and Gaussian elimination (time in ms).]
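The deconvolution step applies a prediction error filter assembled from each prediction filter; a minimal sketch, with the prediction distance of three samples used in this example:

```python
import numpy as np

def prediction_error_filter(f, distance):
    """Prediction error filter: a unit spike, (distance - 1) zeros, then
    the negated prediction filter coefficients."""
    return np.concatenate(([1.0], np.zeros(distance - 1), -np.asarray(f)))

# Hypothetical usage with a 50-element prediction filter f and the
# cross-correlated trace xcorr from the earlier sketch:
#   pef = prediction_error_filter(f, 3)          # 12 ms at 4 ms sampling
#   deconvolved = np.convolve(xcorr, pef)[:len(xcorr)]
```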
RECOGNIZING ILL-CONDITIONING

Causes of ill-conditioned autocorrelation matrices have been discussed by O'Dowd (1990). Rather than using those properties to identify tests of whether the autocorrelation matrix is ill-conditioned, this section focuses on the question: can intermediate results of the Wiener-Levinson algorithm be employed to provide an indication of ill-conditioning?

The Wiener-Levinson algorithm solves a linear equation of the form

    R f = g,

where R is a symmetric, Toeplitz matrix:

    R = ( r_0      r_1      ...  r_{n-1} )
        ( r_1      r_0      ...  r_{n-2} )
        ( ...      ...           ...     )
        ( r_{n-1}  r_{n-2}  ...  r_0     ).

The algorithm works by progressively solving the equations

    r_0 f_0 = g_0,

then the corresponding system of order two, and so on, until the nth order system is solved. The procedure, which is described in simple terms by Claerbout (1976), works in a recursive fashion requiring fewer arithmetic operations and less computer storage than algorithms like Gaussian elimination.
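A minimal sketch of the recursion (a sketch, not the author's code) makes the role of the intermediate quantities explicit; it solves Rf = g for a symmetric Toeplitz R specified by its first row r, and returns the prediction error variances v_k introduced below:

```python
import numpy as np

def wiener_levinson(r, g):
    """Levinson recursion for R f = g, R symmetric Toeplitz with first
    row r.  Returns f and the prediction error variances v[0..n-1]."""
    n = len(g)
    a = np.array([1.0])                 # order-0 prediction error operator
    v = np.empty(n)
    v[0] = r[0]                         # v_0 = r_0
    f = np.array([g[0] / r[0]])         # solution of the order-1 system
    for k in range(1, n):
        # reflection coefficient from the current error operator
        c = -(a @ r[k:0:-1]) / v[k - 1]
        a = np.concatenate((a, [0.0])) + c * np.concatenate(([0.0], a[::-1]))
        v[k] = v[k - 1] * (1.0 - c * c)
        # extend the solution of the order-k system to order k + 1
        q = (g[k] - f @ r[k:0:-1]) / v[k]
        f = np.concatenate((f, [0.0])) + q * a[::-1]
    return f, v
```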
Consider the case where the right-hand side of the (k + 1)th-order Wiener equation is a positive spike, and f_0 = 1 is assumed:

    R_k (1, f_1, ..., f_k)^T = (v_k, 0, ..., 0)^T,    (2)

where R_k denotes the (k + 1)th-order autocorrelation matrix and, in the case where the coefficient matrix is positive definite, the quantity v_k is denoted the 'prediction error variance'. Using D(k) to denote the determinant of the (k + 1)th-order system, it may be shown (e.g. Robinson 1967) that

    v_k = D(k) / D(k - 1).    (3)

Therefore, the determinant of the n by n coefficient matrix may be expressed as

    D(n - 1) = ∏_{k=0}^{n-1} v_k    (4)

for n ≥ 1, where it may be seen that

    v_0 = D(0) = r_0.

For the purposes of discussion here, the most important property of the quantities v_k is that they are produced as intermediate results of the Wiener-Levinson algorithm.

If the autocorrelation matrix, R, is positive definite, the determinants of all principal submatrices are positive:

    D(k) > 0,  k = 0, ..., n - 1

(Bellman 1960), from which it may be concluded that

    v_k > 0,  k = 0, ..., n - 1.    (5)

Properties of the eigenvalues of a matrix, given in many texts (e.g. Ralston & Rabinowitz 1978; Gerald & Wheatley 1984), mean that

    D(n - 1) = ∏_{k=1}^{n} λ_k

and

    ∑_{k=1}^{n} λ_k = n r_0,    (6)

where the λ_i are the eigenvalues of R, which are real and positive (Bellman 1960). Applying the properties of prediction error variances provided by Claerbout (1976, pp. 55-57), it may be seen that

    r_0 = v_0 ≥ v_1 ≥ ... ≥ v_{n-1} > 0.    (7)

Considering, without loss of generality, the eigenvalues to be ordered

    λ_n ≥ λ_{n-1} ≥ ... ≥ λ_1 > 0,    (8)

the spectral condition number may be written

    κ(R) = λ_max / λ_min = λ_n / λ_1,    (9)

and from equation (6) it may be seen that

    0 < λ_1 ≤ r_0.

The lower bound in this equation arises because R is being assumed positive definite. The upper bound may be obtained using a contradiction argument; substituting an assumption that the minimum eigenvalue λ_1 > r_0 into equation (6) gives

    ∑_{k=2}^{n} λ_k < (n - 1) r_0.

There are n - 1 terms in the summation on the left-hand side of this equation, therefore at least one of the remaining eigenvalues λ_k, k = 2, ..., n, must be less than r_0 [i.e. λ_k < r_0 (k ≠ 1)], and the assumption of equation (8) is violated. In a similar fashion, it may be shown that

    r_0 ≤ λ_n < n r_0.    (10)

The vectors in equation (2), with k = n - 1, may be expressed (e.g. Bellman 1960) in the forms

    (1, f_1, ..., f_{n-1})^T = ∑_{i=1}^{n} x_i e_i

and

    (v_{n-1}, 0, ..., 0)^T = ∑_{i=1}^{n} λ_i x_i e_i,

where e_i is the eigenvector corresponding to the eigenvalue λ_i. Examining the first element of both of these vectors allows the observation that

    ∑_{i=1}^{n} x_i e_{i,1} = 1,    v_{n-1} = ∑_{i=1}^{n} λ_i x_i e_{i,1}.
As λ_i ≥ λ_1, these equations may be combined to obtain

    v_{n-1} = ∑_{i=1}^{n} λ_i x_i e_{i,1} ≥ ∑_{i=1}^{n} λ_1 x_i e_{i,1} = λ_1 ∑_{i=1}^{n} x_i e_{i,1} = λ_1,

which produces the result

    v_{n-1} ≥ λ_1.    (11)

Combining equations (9) to (11) it may be seen that

    λ_n ≥ v_0 = r_0 ≥ v_{n-1} ≥ λ_1,    (12)

from which a lower bound for the spectral condition number, κ(R), may be written

    κ(R) = λ_n / λ_1 ≥ v_0 / v_{n-1} = r_0 / v_{n-1}.    (13)

It may therefore be seen that intermediate results of the Wiener-Levinson algorithm may be employed to give an indication of the conditioning of the autocorrelation matrix, R. From equation (7), this lower bound will increase with the order of the matrix. This lower bound is sharp in theory, being attained trivially for the n by n identity matrix, I. For more general symmetric, Toeplitz, positive definite matrices, this lower bound may be extremely conservative.
Ill-conditioning may be associated with the occurrence of eigenvalues of relatively small magnitude, in comparison with the one of maximum magnitude. Numerical errors when computing small eigenvalues may cause them to appear negative, an effect noted by Treitel & Wang (1976) (the condition number of the synthetic example of this paper was also computed using double precision floating point numbers for this reason). In a similar fashion, from equation (13), ill-conditioning may also be expected to be associated with prediction error variances of relatively small magnitude. This means that computed prediction error variances may be expected to exhibit effects similar to those exhibited by computed eigenvalues. So, if any prediction error variances obtained as intermediate results of the Wiener-Levinson algorithm appear negative, this may be interpreted as an indication of ill-conditioning when solving a linear system involving a symmetric, positive definite, Toeplitz coefficient matrix.

The use of intermediate results of a solution algorithm to gauge the effect of round-off error is not unique. For example, the implementation of the Cholesky decomposition given by Martin, Peters & Wilkinson (1971) tests whether the numerically obtained determinant of the matrix is positive, and indicates an error if this is not true. The results of this section indicate that a similar test may be applied in the Wiener-Levinson algorithm (and, by extension, other related Toeplitz algorithms) when solving a linear system with a symmetric, positive definite, Toeplitz coefficient matrix.

Table 2 lists computed prediction error variances, obtained as intermediate results of the Wiener-Levinson algorithm. The occurrence of negative prediction error variances, the first of which is v_30 (corresponding to a system of order 31), indicates that the conditioning of the normal equations may be affecting the accuracy of computed Wiener filters of greater order. In such cases other, more accurate, approaches (e.g. Gaussian elimination, or higher precision solutions) may be desirable. It should be noted, however, that these approaches involve a much greater computing cost (e.g. n^3 versus n^2 operations to solve a system of order n) than the Wiener-Levinson algorithm, and this cost may become prohibitive for large-order systems. Pre-whitening has been noted by Treitel & Wang (1976) and O'Dowd (1990) to be a procedure which results in less ill-conditioned normal equations.

Table 2. Some computed prediction error variances for the previous example.

    v_0     1
    v_29    2.758 × 10^…
    v_30    -5.278 × 10^…
    v_31    1.651 × 10^…
    v_32    7.333 × 10^-5
    v_33    -2.293 × 10^…
    v_34    1.466 × 10^…
    v_35    7.575 × 10^-4
    v_36    2.368 × 10^…
    v_37    -2.716 × 10^-3
    v_38    9.798 × 10^-4
    v_39    1.619 × 10^…
    v_40    -3.962 × 10^-3
    v_41    9.239 × 10^-4
    v_42    -1.073 × 10^…
    v_43    1.235 × 10^…
    v_44    1.319 × 10^…
    v_45    7.477 × 10^-4
    v_46    2.796 × 10^…
    v_47    -1.439 × 10^…
    v_48    1.108 × 10^…
    v_49    4.430 × 10^…

As prediction error variances are produced as intermediate results of the Wiener-Levinson algorithm, they may be readily examined while the solution of the normal equations is being computed, and therefore provide an indication of conditioning. The cost of performing such a test is much less than, for example, the calculation of a condition number. If prediction error variances indicate significant numerical error, it must then be decided what approach should be taken. The decision of what approach to take (e.g. pre-whitening, Gaussian elimination, higher precision, or some other approach) will amount to a balancing of the opposing criteria of acceptable accuracy and moderate computational costs.
CONCLUSIONS

When numerically solving the normal equations, the effects of numerical round-off must be considered. The Wiener-Levinson algorithm is guaranteed to produce Wiener filters exhibiting little error when the autocorrelation matrix is well-conditioned. However, large errors may occur when the autocorrelation matrix is ill-conditioned. This in turn may have a bearing on the effectiveness of deconvolution. Gaussian elimination, on the other hand, produces filters exhibiting significantly less error. These effects may be explained by considering the stability properties of these algorithms. The net result is that, while the Wiener-Levinson algorithm produces a solution with much less cost in terms of computer time and storage than does Gaussian elimination, a significant loss of accuracy may occur when the autocorrelation matrix is ill-conditioned. It has also been noted that the conjugate gradient method does not necessarily produce a filter of greater accuracy than does the Wiener-Levinson algorithm. Intermediate results of the Wiener-Levinson algorithm may be used in a simple test to determine when large errors may occur.

ACKNOWLEDGMENTS

The author wishes to acknowledge the reviewers of this paper, particularly the one who, while not offering complete agreement, pointed out an error in the derivation of equation (11) and provided the proof which appears herein.

REFERENCES

Bellman, R., 1960. Introduction to Matrix Analysis, McGraw-Hill, New York.
Bunch, J. R., 1985. Stability of methods for solving Toeplitz systems of equations, SIAM J. Sci. Stat. Comp., 6, 349-364.
Bunch, J. R., 1987. The weak and strong stability of algorithms in numerical linear algebra, Linear Algebra Appl., 88/89, 49-66.
Claerbout, J., 1976. Fundamentals of Geophysical Data Processing with Applications to Petroleum Prospecting, McGraw-Hill, New York.
Cybenko, G., 1980. The numerical stability of the Levinson-Durbin algorithm for Toeplitz systems of equations, SIAM J. Sci. Stat. Comp., 1, 303-320.
Gerald, C. & Wheatley, P., 1984. Applied Numerical Analysis, 3rd edn, Addison-Wesley, California.
Hestenes, M. R. & Stiefel, E., 1952. The method of conjugate gradients for solving linear systems, US Nat. Bureau Stand. J. Res., 49, 409-436.
Levinson, N., 1946. The Wiener RMS (root mean square) error criterion in filter design and prediction, J. Math. Phys., 25, 261-278.
Martin, R., Peters, G. & Wilkinson, J., 1971. Symmetric decomposition of a positive definite matrix, in Handbook for Automatic Computation, vol. 2, pp. 9-30, eds Wilkinson, J. & Reinsch, C., Springer-Verlag, Berlin.
O'Dowd, R., 1990. Ill-conditioning and pre-whitening in seismic deconvolution, Geophys. J. Int., 101, 489-491.
Ralston, A. & Rabinowitz, P., 1978. A First Course in Numerical Analysis, International Series in Pure and Applied Mathematics, 2nd edn, McGraw-Hill, New York.
Robinson, E. A., 1967. Multichannel Time Series Analysis with Digital Computer Programs, Holden Day, San Francisco.
Stewart, G. W., 1973. Introduction to Matrix Computations, Academic Press, New York.
Treitel, S. & Wang, R. J., 1976. The determination of digital Wiener filters from an ill-conditioned system of normal equations, Geophys. Prosp., 24, 317-327.
Trench, W. F., 1964. An algorithm for the inversion of finite Toeplitz matrices, J. Soc. Ind. Appl. Math., 12, 515-525.
Varga, R., 1963. Matrix Iterative Analysis, Prentice-Hall, Englewood Cliffs, NJ.
Wilkinson, J., 1961. Error analysis of direct methods of matrix inversion, J. Assoc. Comp. Mach., 8, 281-330.
Wilkinson, J., 1963. Rounding Errors in Algebraic Processes, HMSO, London.
Wilkinson, J. H., 1965. The Algebraic Eigenvalue Problem, Clarendon Press, Oxford.
Yilmaz, O., 1987. Seismic Data Processing: Investigations in Geophysics Vol. 2, Society of Exploration Geophysicists, Tulsa, OK.
Zohar, S., 1974. The solution of a Toeplitz set of linear equations, J. Assoc. Comp. Mach., 21, 272-276.
