
Finally, we point out a clever result of Hestenes which seems to have been largely ignored. In the paper [Hes75] he proves the following. Let $r$ be the rank of an arbitrary matrix $A$, let $p_i$ and $q_i$ be the CGLS search vectors, and let $x_0 = 0$. Then

\[
A^\dagger = \left[ \frac{p_0 p_0^T}{(q_0, q_0)} + \frac{p_1 p_1^T}{(q_1, q_1)} + \cdots + \frac{p_{r-1} p_{r-1}^T}{(q_{r-1}, q_{r-1})} \right] A^T
\tag{11.85}
\]

is the generalized pseudo-inverse of $A$. A generalized pseudo-inverse satisfies only two of the four Penrose conditions, to wit:

\[
A A^\dagger A = A
\tag{11.86}
\]

\[
A^\dagger A A^\dagger = A^\dagger
\tag{11.87}
\]

(The remaining two Penrose conditions, that $A A^\dagger$ and $A^\dagger A$ be symmetric, need not hold for a generalized pseudo-inverse.)

To illustrate this result, consider the following least squares problem:

\[
\begin{bmatrix} -1 & 2 \\ 4 & 5 \\ 1 & 3 \\ 2 & 7 \end{bmatrix}
\begin{bmatrix} x \\ y \end{bmatrix}
=
\begin{bmatrix} 5 \\ 6 \\ 5 \\ 12 \end{bmatrix}.
\]

The column rank of the matrix is 2. It is straightforward to show that

\[
A^T A = \begin{bmatrix} 22 & 35 \\ 35 & 87 \end{bmatrix},
\qquad
(A^T A)^{-1} = \frac{1}{689} \begin{bmatrix} 87 & -35 \\ -35 & 22 \end{bmatrix}.
\]

Therefore the pseudo-inverse is

\[
A^\dagger = (A^T A)^{-1} A^T = \frac{1}{689}
\begin{bmatrix} -157 & 173 & -18 & -71 \\ 79 & -30 & 31 & 84 \end{bmatrix}.
\]
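As a quick numerical check, this closed-form pseudo-inverse is easy to verify by machine. The following NumPy sketch (our illustration, not part of the original text) rebuilds $A^\dagger = (A^TA)^{-1}A^T$ for the example and compares it with NumPy's SVD-based pseudo-inverse:

\begin{verbatim}
import numpy as np

# The 4x2 matrix and right-hand side of the example.
A = np.array([[-1., 2.],
              [ 4., 5.],
              [ 1., 3.],
              [ 2., 7.]])
b = np.array([5., 6., 5., 12.])

# Closed-form pseudo-inverse; valid because A has full column rank.
A_dag = np.linalg.inv(A.T @ A) @ A.T

print(np.allclose(A_dag, np.linalg.pinv(A)))  # True: matches the SVD-based pseudo-inverse
print(np.round(689 * A_dag))                  # [[-157. 173. -18. -71.], [79. -30. 31. 84.]]
print(A_dag @ b)                              # least squares solution: [-1. 2.]
\end{verbatim}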

Now apply the CGLS algorithm. The relevant calculations are

\[
p_0 = \begin{bmatrix} 48 \\ 139 \end{bmatrix},
\qquad
q_0 = \begin{bmatrix} 230 \\ 887 \\ 465 \\ 1069 \end{bmatrix},
\]

\[
p_1 = \begin{bmatrix} -9.97601 \\ 4.28871 \end{bmatrix},
\qquad
q_1 = \begin{bmatrix} 18.55343 \\ -18.46049 \\ 2.89012 \\ 10.06985 \end{bmatrix},
\qquad
x_2 = \begin{bmatrix} -1.00000 \\ 2.00000 \end{bmatrix},
\]

which is the solution. Recalling (11.85),

\[
A^\dagger = \left[ \frac{p_0 p_0^T}{(q_0, q_0)} + \frac{p_1 p_1^T}{(q_1, q_1)} + \cdots + \frac{p_{r-1} p_{r-1}^T}{(q_{r-1}, q_{r-1})} \right] A^T
\tag{11.88}
\]


one has

\[
\frac{p_0 p_0^T}{(q_0, q_0)} + \frac{p_1 p_1^T}{(q_1, q_1)}
=
\begin{bmatrix} 0.12627 & -0.05080 \\ -0.05080 & 0.03193 \end{bmatrix}.
\]

But this is nothing more than $(A^T A)^{-1}$, which was previously calculated:

\[
(A^T A)^{-1} = \frac{1}{689} \begin{bmatrix} 87 & -35 \\ -35 & 22 \end{bmatrix}
= \begin{bmatrix} 0.12627 & -0.05080 \\ -0.05080 & 0.03193 \end{bmatrix}.
\]

In this particular case $A^\dagger A = I$, so the parameters are perfectly well resolved in the absence of noise.
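The iteration above is equally easy to reproduce. Here is a minimal sketch of the standard CGLS recursion (in the usual Hestenes-Stiefel form; the function and variable names are ours, and details may differ from other CGLS variants) which starts from $x_0 = 0$, accumulates the bracketed sum of (11.85) as it goes, and then checks the Penrose conditions (11.86)-(11.87) and the resolution matrix:

\begin{verbatim}
import numpy as np

def cgls_pinv(A, b, rank):
    """Run CGLS from x0 = 0, accumulating Hestenes' sum (11.85).

    Returns the final iterate x and the generalized pseudo-inverse
    H @ A.T, where H = sum_i p_i p_i^T / (q_i, q_i)."""
    m, n = A.shape
    x = np.zeros(n)
    r = b.copy()          # residual b - A x  (x0 = 0)
    s = A.T @ r           # residual of the normal equations, A^T r
    p = s.copy()          # first search vector
    H = np.zeros((n, n))
    for _ in range(rank):
        q = A @ p
        qq = q @ q
        H += np.outer(p, p) / qq        # Hestenes' accumulation
        alpha = (s @ s) / qq
        x += alpha * p
        r -= alpha * q
        s_new = A.T @ r
        beta = (s_new @ s_new) / (s @ s)
        p = s_new + beta * p
        s = s_new
    return x, H @ A.T

A = np.array([[-1., 2.], [4., 5.], [1., 3.], [2., 7.]])
b = np.array([5., 6., 5., 12.])
x, A_dag = cgls_pinv(A, b, rank=2)

print(x)                                      # [-1. 2.], the solution found at x2
print(np.allclose(A_dag, np.linalg.pinv(A)))  # True for this full-column-rank example
print(np.allclose(A @ A_dag @ A, A))          # Penrose condition (11.86)
print(np.allclose(A_dag @ A @ A_dag, A_dag))  # Penrose condition (11.87)
print(np.allclose(A_dag @ A, np.eye(2)))      # A†A = I: perfect resolution
\end{verbatim}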

Exercises

1. Prove Equation (11.22).

2. Show that

\[
f(z) - f(x_k) = -\tfrac{1}{2}\,\big( x_k - z,\; A(x_k - z) \big),
\]

where $z$ is a solution to $Ax = h$ and $A$ is a symmetric, positive definite matrix.

3. Prove Lemma 4.

4. With steepest descent, we saw that in order for the residual vector to be exactly zero, the initial approximation to the solution had to lie on one of the principal axes of the quadratic form. Show that with CG, in order for the residual vector to be exactly zero we require that

\[
(r_i, p_i) = (r_i, r_i),
\]

which is always true by virtue of Lemma 3.

Bibliography

[Bjö75] A. Björck. Methods for sparse linear least-squares problems. In J. Bunch and D. Rose, editors, Sparse Matrix Computations. Academic, New York, 1975.

[Cha78] R. Chandra. Conjugate gradient methods for partial differential equations. PhD thesis, Yale University, New Haven, CT, 1978.

[CM79] S. Campbell and C. Meyer. Generalized Inverses of Linear Transformations. Pitman, London, 1979.

[CW80] J. Cullum and R. Willoughby. The Lanczos phenomenon—an interpretation based upon conjugate gradient optimization. Linear Algebra and its Applications, 29:63–90, 1980.


[CW85] J. Cullum and R. Willoughby. Lanczos Algorithms for Large Symmetric Eigenvalue Computations. Birkhäuser, Boston, 1985.

[FHW49] L. Fox, H. Huskey, and J. Wilkinson. Notes on the solution of algebraic linear simultaneous equations. Q. J. Mech. Appl. Math., 1:149–173, 1949.

[GR70] G. Golub and C. Reinsch. Singular value decomposition. Numerische Math., 14:403–420, 1970.

[GvL83] G. Golub and C. van Loan. Matrix Computations. Johns Hopkins, Baltimore, 1983.

[Hes51] M. Hestenes. Iterative methods for solving linear equations. Technical report, National Bureau of Standards, 1951.

[Hes75] M. Hestenes. Pseudoinverses and conjugate gradients. Communications of the ACM, 18:40–43, 1975.

[HS52] M. Hestenes and E. Stiefel. Methods of conjugate gradients for solving linear systems. NBS J. Research, 49:409–436, 1952.

[Ker78] D. Kershaw. The incomplete Cholesky-conjugate gradient method for the iterative solution of systems of linear equations. Journal of Computational Physics, 26:43–65, 1978.

[Läu59] P. Läuchli. Iterative Lösung und Fehlerabschätzung in der Ausgleichsrechnung. Zeit. angew. Math. Physik, 10:245–280, 1959.

[Law73] C. Lawson. Sparse matrix methods based on orthogonality and conjugacy. Technical Report 33-627, Jet Propulsion Laboratory, 1973.

[Man80] T. A. Manteuffel. An incomplete factorization technique for positive definite linear systems. Mathematics of Computation, 34:473–497, 1980.

[Pai71] C. Paige. The computation of eigenvalues and eigenvectors of very large sparse matrices. PhD thesis, University of London, London, England, 1971.

[Par80] B. Parlett. The Symmetric Eigenvalue Problem. Prentice-Hall, 1980.

[Par94] R. L. Parker. Geophysical Inverse Theory. Princeton University Press, 1994.

[Pis84] S. Pissanetsky. Sparse Matrix Technology. Academic, N.Y., 1984.

[PS82] C. Paige and M. Saunders. LSQR: An algorithm for sparse linear equations and sparse least squares. ACM Trans. Math. Softw., 8:43–71, 1982.

[SB80] J. Stoer and R. Bulirsch. Introduction to Numerical Analysis. Springer, N.Y., 1980.


[Sca89] J. A. Scales. Using conjugate gradient to calculate the eigenvalues and singular values of large, sparse matrices. Geophysical Journal, 97:179–183, 1989.

[SDG90] J. A. Scales, P. Docherty, and A. Gersztenkorn. Regularization of nonlinear inverse problems: imaging the near-surface weathering layer. Inverse Problems, 6:115–131, 1990.

[Sti52] E. Stiefel. Über einige Methoden der Relaxationsrechnung. Zeit. angew. Math. Physik, 3, 1952.

[You71] D. M. Young. Iterative Solution of Large Linear Systems. Academic, N.Y., 1971.


Chapter 12

More on the Resolution-Variance Tradeoff
