

Finally, we point out a clever result of Hestenes which seems to have been largely ignored. In the paper [Hes75] he proves the following. Let $r$ be the rank of $A$, an arbitrary matrix, let $p_i$ and $q_i$ be the CGLS search vectors, and let $x_0 = 0$. Then
$$
A^\dagger = \left[ \frac{p_0 p_0^T}{(q_0, q_0)} + \frac{p_1 p_1^T}{(q_1, q_1)} + \cdots + \frac{p_{r-1} p_{r-1}^T}{(q_{r-1}, q_{r-1})} \right] A^T
\qquad (11.85)
$$
is the generalized pseudo-inverse of $A$. A generalized pseudo-inverse satisfies only two of the four Penrose conditions, to wit:
$$
A^\dagger A A^\dagger = A^\dagger \qquad (11.86)
$$
$$
A A^\dagger A = A \qquad (11.87)
$$
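Hestenes' identity is easy to check numerically. The sketch below (variable names are ours, not from the text) runs $r$ steps of CGLS on a random rank-deficient matrix, accumulates the rank-one terms $p_i p_i^T/(q_i,q_i)$, and verifies that the resulting $A^\dagger$ satisfies the two Penrose conditions above:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 4))  # 5x4 matrix of rank 2
b = rng.standard_normal(5)
r = np.linalg.matrix_rank(A)

# CGLS (conjugate gradients on the normal equations), with x0 = 0
x = np.zeros(A.shape[1])
res = b.copy()                    # residual b - A x
s = A.T @ res
p = s.copy()
H = np.zeros((A.shape[1],) * 2)   # accumulates sum_i p_i p_i^T / (q_i, q_i)
for _ in range(r):
    q = A @ p
    H += np.outer(p, p) / (q @ q)
    alpha = (s @ s) / (q @ q)
    x += alpha * p
    res -= alpha * q
    s_new = A.T @ res
    p = s_new + (s_new @ s_new) / (s @ s) * p
    s = s_new

Adag = H @ A.T                    # Hestenes' generalized pseudo-inverse (11.85)

print(np.allclose(Adag @ A @ Adag, Adag))   # Penrose condition (11.86)
print(np.allclose(A @ Adag @ A, A))         # Penrose condition (11.87)
```

The remaining two Penrose conditions (symmetry of $A^\dagger A$ and $A A^\dagger$) need not hold for a generalized pseudo-inverse, which is exactly the point of the theorem.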

To illustrate this result, consider the following least squares problem:
$$
\begin{pmatrix} 1 & 2 \\ -4 & 5 \\ -1 & 3 \\ 2 & -7 \end{pmatrix}
\begin{pmatrix} x \\ y \end{pmatrix}
=
\begin{pmatrix} 5 \\ 6 \\ 5 \\ -12 \end{pmatrix}.
$$

The column rank of the matrix is 2. It is straightforward to show that
$$
\left( A^T A \right)^{-1} = \frac{1}{689}
\begin{pmatrix} 87 & 35 \\ 35 & 22 \end{pmatrix}.
$$

Therefore the pseudo-inverse is
$$
A^\dagger = \left( A^T A \right)^{-1} A^T = \frac{1}{689}
\begin{pmatrix} 157 & -173 & 18 & -71 \\ 79 & -30 & 31 & -84 \end{pmatrix}.
$$
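As a quick numerical check of this matrix (a NumPy sketch, not part of the original text):

```python
import numpy as np

A = np.array([[1., 2.], [-4., 5.], [-1., 3.], [2., -7.]])
Adag = np.linalg.inv(A.T @ A) @ A.T

# 689 * Adag recovers the integer entries above
print(np.round(689 * Adag).astype(int))
# Since A has full column rank, this coincides with the Moore-Penrose pseudo-inverse
print(np.allclose(Adag, np.linalg.pinv(A)))
```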

Now apply the CGLS algorithm. The relevant calculations are
$$
p_0 = \begin{pmatrix} -48 \\ 139 \end{pmatrix}, \qquad
q_0 = \begin{pmatrix} 230 \\ 887 \\ 465 \\ -1069 \end{pmatrix},
$$
$$
p_1 = \begin{pmatrix} 9.97601 \\ 4.28871 \end{pmatrix}, \qquad
q_1 = \begin{pmatrix} 18.55343 \\ -18.46049 \\ 2.89012 \\ -10.06985 \end{pmatrix}, \qquad
x_2 = \begin{pmatrix} 1.00000 \\ 2.00000 \end{pmatrix},
$$
which is the solution. Recalling (11.85),
$$
A^\dagger = \left[ \frac{p_0 p_0^T}{(q_0, q_0)} + \frac{p_1 p_1^T}{(q_1, q_1)} + \cdots + \frac{p_{r-1} p_{r-1}^T}{(q_{r-1}, q_{r-1})} \right] A^T
\qquad (11.88)
$$


one has
$$
\frac{p_0 p_0^T}{(q_0, q_0)} + \frac{p_1 p_1^T}{(q_1, q_1)}
= \begin{pmatrix} 0.12627 & 0.05080 \\ 0.05080 & 0.03193 \end{pmatrix}.
$$
But this is nothing more than $\left( A^T A \right)^{-1}$, which was previously calculated:
$$
\left( A^T A \right)^{-1} = \frac{1}{689}
\begin{pmatrix} 87 & 35 \\ 35 & 22 \end{pmatrix}
= \begin{pmatrix} 0.12627 & 0.05080 \\ 0.05080 & 0.03193 \end{pmatrix}.
$$
In this particular case $A^\dagger A = I$, so the parameters are perfectly well resolved in the absence of noise.
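The whole worked example can be reproduced in a few lines of NumPy (a sketch using the standard CGLS recurrence with $x_0 = 0$; variable names are ours):

```python
import numpy as np

A = np.array([[1., 2.], [-4., 5.], [-1., 3.], [2., -7.]])
b = np.array([5., 6., 5., -12.])

x = np.zeros(2)
res = b.copy()             # residual b - A x, with x0 = 0
s = A.T @ res
p = s.copy()               # p0 = A^T b = (-48, 139)
H = np.zeros((2, 2))       # accumulates p_i p_i^T / (q_i, q_i)
for _ in range(2):         # the rank of A is 2, so two steps suffice
    q = A @ p              # q0 = A p0 = (230, 887, 465, -1069)
    H += np.outer(p, p) / (q @ q)
    alpha = (s @ s) / (q @ q)
    x += alpha * p
    res -= alpha * q
    s_new = A.T @ res
    p = s_new + (s_new @ s_new) / (s @ s) * p
    s = s_new

print(np.round(x, 5))      # the solution (1, 2)
print(np.round(H, 5))      # matches (A^T A)^{-1} = (1/689) [[87, 35], [35, 22]]
```

Because $b$ lies exactly in the range of $A$ here, CGLS terminates at the exact solution after two steps, and the accumulated sum $H$ reproduces $(A^T A)^{-1}$ as claimed.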

Exercises

1. Prove Equation (11.22).

2. Show that
$$
f(z) - f(x_k) = -\frac{1}{2}\left( x_k - z,\, A(x_k - z) \right)
$$
where $z$ is a solution to $Ax = h$ and $A$ is a symmetric, positive definite matrix.

3. Prove Lemma 4.

4. With steepest descent, we saw that in order for the residual vector to be exactly zero, it was necessary for the initial approximation to the solution to lie on one of the principal axes of the quadratic form. Show that with CG, in order for the residual vector to be exactly zero we require that
$$
(r_i, p_i) = (r_i, r_i),
$$
which is always true by virtue of Lemma 3.

Bibliography

[Bjö75] A. Björck. Methods for sparse linear least-squares problems. In J. Bunch and D. Rose, editors, *Sparse Matrix Computations*. Academic, New York, 1975.

[Cha78] R. Chandra. *Conjugate gradient methods for partial differential equations*. PhD thesis, Yale University, New Haven, CT, 1978.

[CM79] S. Campbell and C. Meyer. *Generalized inverses of linear transformations*. Pitman, London, 1979.

[CW80] J. Cullum and R. Willoughby. The Lanczos phenomenon—an interpretation based upon conjugate gradient optimization. *Linear Algebra and its Applications*, 29:63–90, 1980.


[CW85] J. Cullum and R. Willoughby. *Lanczos Algorithms for Large Symmetric Eigenvalue Computations*. Birkhäuser, Boston, 1985.

[FHW49] L. Fox, H. Huskey, and J. Wilkinson. Notes on the solution of algebraic linear simultaneous equations. *Q. J. Mech. Appl. Math.*, 1:149–173, 1949.

[GR70] G. Golub and C. Reinsch. Singular value decomposition. *Numerische Math.*, 14:403–420, 1970.

[GvL83] G. Golub and C. van Loan. *Matrix Computations*. Johns Hopkins, Baltimore, 1983.

[Hes51] M. Hestenes. Iterative methods for solving linear equations. Technical report, National Bureau of Standards, 1951.

[Hes75] M. Hestenes. Pseudoinverses and conjugate gradients. *Communications of the ACM*, 18:40–43, 1975.

[HS52] M. Hestenes and E. Stiefel. Methods of conjugate gradients for solving linear systems. *NBS J. Research*, 49:409–436, 1952.

[Ker78] D. Kershaw. The incomplete Cholesky-conjugate gradient method for the iterative solution of systems of linear equations. *Journal of Computational Physics*, 26:43–65, 1978.

[Läu59] P. Läuchli. Iterative Lösung und Fehlerabschätzung in der Ausgleichsrechnung. *Zeit. angew. Math. Physik*, 10:245–280, 1959.

[Law73] C. Lawson. Sparse matrix methods based on orthogonality and conjugacy. Technical Report 33-627, Jet Propulsion Laboratory, 1973.

[Man80] T. A. Manteuffel. An incomplete factorization technique for positive definite linear systems. *Mathematics of Computation*, 34:473–497, 1980.

[Pai71] C. Paige. *The computation of eigenvalues and eigenvectors of very large sparse matrices*. PhD thesis, University of London, London, England, 1971.

[Par80] B. Parlett. *The Symmetric Eigenvalue Problem*. Prentice-Hall, 1980.

[Par94] R. L. Parker. *Geophysical Inverse Theory*. Princeton University Press, 1994.

[Pis84] S. Pissanetsky. *Sparse Matrix Technology*. Academic, N.Y., 1984.

[PS82] C. Paige and M. Saunders. LSQR: An algorithm for sparse linear equations and sparse least squares. *ACM Trans. Math. Softw.*, 8:43–71, 1982.

[SB80] J. Stoer and R. Bulirsch. *Introduction to Numerical Analysis*. Springer, N.Y., 1980.


[Sca89] J. A. Scales. Using conjugate gradient to calculate the eigenvalues and singular values of large, sparse matrices. *Geophysical Journal*, 97:179–183, 1989.

[SDG90] J. A. Scales, P. Docherty, and A. Gersztenkorn. Regularization of nonlinear inverse problems: imaging the near-surface weathering layer. *Inverse Problems*, 6:115–131, 1990.

[Sti52] E. Stiefel. Über einige Methoden der Relaxationsrechnung. *Zeit. angew. Math. Physik*, 3, 1952.

[You71] D. M. Young. *Iterative Solution of Large Linear Systems*. Academic, N.Y., 1971.


Chapter 12

More on the Resolution-Variance Tradeoff
