This is Volume 45 in the INTERNATIONAL GEOPHYSICS SERIES
A series of monographs and textbooks
Edited by RENATA DMOWSKA and JAMES R. HOLTON

GEOPHYSICAL DATA ANALYSIS: DISCRETE INVERSE THEORY
Revised Edition

William Menke
Department of Geological Sciences, Columbia University
College of Oceanography, Oregon State University

ACADEMIC PRESS, INC.
Harcourt Brace Jovanovich, Publishers
San Diego  New York  Berkeley  Boston  London  Sydney  Tokyo  Toronto

CONTENTS

PREFACE
INTRODUCTION

1 DESCRIBING INVERSE PROBLEMS
1.1 Formulating Inverse Problems
1.2 The Linear Inverse Problem
1.3 Examples of Formulating Inverse Problems
1.4 Solutions to Inverse Problems

2 SOME COMMENTS ON PROBABILITY THEORY
2.1 Noise and Random Variables
2.2 Correlated Data
2.3 Functions of Random Variables
2.4 Gaussian Distributions
2.5 Testing the Assumption of Gaussian Statistics
2.6 Confidence Intervals

3 SOLUTION OF THE LINEAR, GAUSSIAN INVERSE PROBLEM, VIEWPOINT 1: THE LENGTH METHOD
3.1 The Lengths of Estimates
3.2 Measures of Length
3.3 Least Squares for a Straight Line
3.4 The Least Squares Solution of the Linear Inverse Problem
3.5 Some Examples
3.6 The Existence of the Least Squares Solution
3.7 The Purely Underdetermined Problem
3.8 Mixed-Determined Problems
3.9 Weighted Measures of Length as a Type of A Priori Information
3.10 Other Types of A Priori Information
3.11 The Variance of the Model Parameter Estimates
3.12 Variance and Prediction Error of the Least Squares Solution

4 SOLUTION OF THE LINEAR, GAUSSIAN INVERSE PROBLEM, VIEWPOINT 2: GENERALIZED INVERSES
4.1 Solutions versus Operators
4.2 The Data Resolution Matrix
4.3 The Model Resolution Matrix
4.4 The Unit Covariance Matrix
4.5 Resolution and Covariance of Some Generalized Inverses
4.6 Measures of Goodness of Resolution and Covariance
4.7 Generalized Inverses with Good Resolution and Covariance
4.8 Sidelobes and the Backus-Gilbert Spread Function
4.9 The Backus-Gilbert Generalized Inverse for the Underdetermined Problem
4.10 Including the Covariance Size
4.11 The Trade-off of Resolution and Variance

5 SOLUTION OF THE LINEAR, GAUSSIAN INVERSE PROBLEM, VIEWPOINT 3: MAXIMUM LIKELIHOOD METHODS
5.1 The Mean of a Group of Measurements
5.2 Maximum Likelihood Solution of the Linear Inverse Problem
5.3 A Priori Distributions
5.4 Maximum Likelihood for an Exact Theory
5.5 Inexact Theories
5.6 The Simple Gaussian Case with a Linear Theory
5.7 The General Linear, Gaussian Case
5.8 Equivalence of the Three Viewpoints
5.9 The F Test of Error Improvement Significance
5.10 Derivation of the Formulas of Section 5.7

6 NONUNIQUENESS AND LOCALIZED AVERAGES
6.1 Null Vectors and Nonuniqueness
6.2 Null Vectors of a Simple Inverse Problem
6.3 Localized Averages of Model Parameters
6.4 Relationship to the Resolution Matrix
6.5 Averages versus Estimates
6.6 Nonunique Averaging Vectors and A Priori Information

7 APPLICATIONS OF VECTOR SPACES
7.1 Model and Data Spaces
7.2 Householder Transformations
7.3 Designing Householder Transformations
7.4 Transformations That Do Not Preserve Length
7.5 The Solution of the Mixed-Determined Problem
7.6 Singular-Value Decomposition and the Natural Generalized Inverse
7.7 Derivation of the Singular-Value Decomposition
7.8 Simplifying Linear Equality and Inequality Constraints
7.9 Inequality Constraints

8 LINEAR INVERSE PROBLEMS AND NON-GAUSSIAN DISTRIBUTIONS
8.1 L1 Norms and Exponential Distributions
8.2 Maximum Likelihood Estimate of the Mean of an Exponential Distribution
8.3 The General Linear Problem
8.4 Solving L1 Norm Problems
8.5 The L-Infinity Norm

9 NONLINEAR INVERSE PROBLEMS
9.1 Parameterizations
9.2 Linearizing Parameterizations
9.3 The Nonlinear Inverse Problem with Gaussian Data
9.4 Special Cases
9.5 Convergence and Nonuniqueness of Nonlinear L2 Problems
9.6 Non-Gaussian Distributions

10 FACTOR ANALYSIS
10.1 The Factor Analysis Problem
10.2 Normalization and Physicality Constraints
10.3 Q-Mode and R-Mode Factor Analysis
10.4 Empirical Orthogonal Function Analysis

11 CONTINUOUS INVERSE THEORY AND TOMOGRAPHY
11.1 The Backus-Gilbert Inverse Problem
11.2 Resolution and Variance Trade-Off
11.3 Approximating Continuous Inverse Problems as Discrete Problems
11.4 Tomography and Continuous Inverse Theory
11.5 Tomography and the Radon Transform
11.6 The Fourier Slice Theorem
11.7 Backprojection

12 SAMPLE INVERSE PROBLEMS
12.1 An Image Enhancement Problem
12.2 Digital Filter Design
12.3 Adjustment of Crossover Errors
12.4 An Acoustic Tomography Problem
12.5 Temperature Distribution in an Igneous Intrusion
12.6 L1, L2, and L-Infinity Fitting of Straight Lines
12.7 Finding the Mean of a Set of Unit Vectors
12.8 Gaussian Curve Fitting
12.9 Earthquake Location
12.10 Vibrational Problems

13 NUMERICAL ALGORITHMS
13.1 Solving Even-Determined Problems
13.2 Inverting a Square Matrix
13.3 Solving Underdetermined and Overdetermined Problems
13.4 L2 Problems with Inequality Constraints
13.5 Finding the Eigenvalues and Eigenvectors of a Real Symmetric Matrix
13.6 The Singular-Value Decomposition of a Matrix
13.7 The Simplex Method and the Linear Programming Problem

14 APPLICATIONS OF INVERSE THEORY TO GEOPHYSICS
14.1 Earthquake Location and the Determination of the Velocity Structure of the Earth from Travel Time Data
14.2 Velocity Structure from Free Oscillations and Seismic Surface Waves
14.3 Seismic Attenuation
14.4 Signal Correlation
14.5 Gravity and Geomagnetism
14.6 Electromagnetic Induction and the Magnetotelluric Method

APPENDIX A: Implementing Constraints with Lagrange Multipliers
APPENDIX B: L2 Inverse Theory with Complex Quantities
REFERENCES
INDEX
INTERNATIONAL GEOPHYSICS SERIES
PREFACE

Every researcher in the applied sciences who has analyzed data has practiced inverse theory. Inverse theory is simply the set of methods used to extract useful inferences about the world from physical measurements. The fitting of a straight line to data involves a simple application of inverse theory. Tomography, popularized by the physician's CAT scanner, uses it on a more sophisticated level.

The study of inverse theory, however, is more than the cataloging of methods of data analysis. It is an attempt to organize these techniques, to bring out their underlying similarities and pin down their differences, and to deal with the fundamental question of the limits of information that can be gleaned from any given data set.

Physical properties fall into two general classes: those that can be [...]

1.3.4 EXAMPLE 4: X-RAY IMAGING

[...]

$$\begin{bmatrix} d_1 \\ d_2 \\ \vdots \\ d_N \end{bmatrix} = \begin{bmatrix} G_{11} & G_{12} & \cdots & G_{1M} \\ G_{21} & G_{22} & \cdots & G_{2M} \\ \vdots & & & \vdots \\ G_{N1} & G_{N2} & \cdots & G_{NM} \end{bmatrix} \begin{bmatrix} m_1 \\ m_2 \\ \vdots \\ m_M \end{bmatrix}$$

Since each beam passes through only a few of the many boxes, many of the elements of the data kernel are zero, and the matrix is very sparse.

1.3.5 EXAMPLE 5: GAUSSIAN CURVE FITTING

Not every inverse problem can be adequately represented by the discrete linear equation Gm = d. Consider, for example, an experiment in which the intensity of the x rays scattered from a crystal lattice is measured as a function of Bragg angle. Near one of the diffraction peaks the x-ray intensity varies as

$$d_i = A \exp\!\left[\frac{-(\theta_i - \bar{\theta})^2}{2\sigma^2}\right]$$

Here θ_i is the Bragg angle, θ̄ the Bragg angle of the peak's maximum (Fig. 1.3), A the amplitude of the peak, and σ a measure of its width.

[Fig. 1.3. X-ray intensity as a function of Bragg angle θ near a diffraction peak. The inverse problem is to determine the peak's size, width, and position.]

In a typical experiment, d is measured at many different θ's, and the model parameters A, σ, and θ̄ are to be determined. The data and model are therefore related by the nonlinear explicit equation d = g(m). Furthermore, since the argument of the exponential is large in the vicinity of the peak, the problem cannot be linearized.
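Models of this explicitly nonlinear type are usually fit by iterative methods, which Chapter 9 treats in detail. As a concrete illustration, here is a minimal numerical sketch, not taken from the text: the synthetic angles, noise level, starting guess, and the use of scipy's general-purpose curve_fit routine are all assumptions made for this example.

```python
import numpy as np
from scipy.optimize import curve_fit

# Nonlinear explicit model d = g(m): a Gaussian peak with
# amplitude A, center theta_bar, and width sigma.
def gaussian_peak(theta, A, theta_bar, sigma):
    return A * np.exp(-(theta - theta_bar) ** 2 / (2.0 * sigma ** 2))

# Synthetic "measured" intensities near a diffraction peak (assumed values).
rng = np.random.default_rng(0)
theta = np.linspace(20.0, 30.0, 50)                      # Bragg angles (degrees)
d_true = gaussian_peak(theta, A=100.0, theta_bar=25.0, sigma=1.5)
d_obs = d_true + rng.normal(scale=2.0, size=theta.size)  # add measurement noise

# Iterative nonlinear least squares; a starting guess is required
# because the model is nonlinear in theta_bar and sigma.
m0 = [80.0, 24.0, 1.0]
m_est, m_cov = curve_fit(gaussian_peak, theta, d_obs, p0=m0)
print("estimated A, theta_bar, sigma:", m_est)
```

Because the model parameters sit inside the exponential, the routine must iterate from the starting guess m0, and a poor guess can strand it in a secondary minimum; this is one reason nonlinear problems are harder than the linear ones treated in the next several chapters.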
1.3.6 EXAMPLE 6: FACTOR ANALYSIS

Another example of a nonlinear inverse problem is that of determining the composition of chemical end members on the basis of the chemistry of a suite of mixtures of the end members. Consider a simplified "ocean" (Fig. 1.4) in which sediments are composed of mixtures of several chemically distinct rocks eroded from the continents. One expects the fraction of chemical j in the ith sediment sample, S_ij, to be related to the amount of end-member rock k in sediment sample i, C_ik, and to the amount of the jth chemical in the kth end-member rock, F_kj, as

$$[\text{sample composition}]_{ij} = \sum_{k=1}^{p} [\text{amount of end member } k]_{ik}\,[\text{end member } k \text{ composition}]_{kj}$$

that is,

$$S_{ij} = \sum_{k=1}^{p} C_{ik} F_{kj} \qquad \text{or} \qquad S = CF \tag{1.19}$$

[Fig. 1.4. A simplified "ocean" in which sediment samples are mixtures of end-member rocks eroded from the continents. The inverse problem is to determine the end members.]

In a typical experiment, the number of end members p, the end-member compositions F, and the amounts of the end members in the samples C are all unknown model parameters. Since the data S are on one side of the equation, this problem is also of the explicit nonlinear type. Note that basically the problem is to factor a matrix S into two other matrices, C and F. This factoring problem is a well-studied part of the theory of matrices, and methods are available to solve it. As will be discussed in Chapter 10, this problem (which is often called factor analysis) is very closely related to the algebraic eigenvalue problem of linear algebra.
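The observation that the problem is basically to factor S into C and F can be made concrete with the singular-value decomposition, one of the standard matrix methods alluded to here (Chapter 10 develops the connection properly). The sketch below is illustrative only: the sample matrix and the number of end members are invented, and the SVD factors are just one of infinitely many valid factorizations, with no physicality constraints imposed.

```python
import numpy as np

# Invented sample-by-chemical matrix S (4 samples, 3 chemical species),
# built here from p = 2 hidden end members so the factorization is exact.
C_true = np.array([[0.7, 0.3],
                   [0.5, 0.5],
                   [0.2, 0.8],
                   [0.9, 0.1]])          # amounts of each end member per sample
F_true = np.array([[0.6, 0.3, 0.1],
                   [0.1, 0.4, 0.5]])     # composition of each end member
S = C_true @ F_true

# SVD factors S = U diag(s) V^T; truncating to rank p gives one
# mathematically valid factorization S = (U_p diag(s_p)) (V_p^T) = C F.
U, s, Vt = np.linalg.svd(S, full_matrices=False)
p = np.sum(s > 1e-10 * s[0])             # effective number of end members
C = U[:, :p] * s[:p]                     # one choice of "amounts"
F = Vt[:p, :]                            # one choice of "end members"
print("p =", p, " reconstruction error:", np.linalg.norm(S - C @ F))
```

The factorization is intrinsically non-unique: for any invertible p by p matrix T, the pair CT and T⁻¹F reproduces S equally well, which is why the physicality constraints discussed in Chapter 10 are needed to single out chemically meaningful end members.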
1.4 Solutions to Inverse Problems

We shall use the terms "solution" and "answer" to indicate broadly whatever information we are able to determine about the problem under consideration. As we shall see, there are many different points of view regarding what constitutes a solution to an inverse problem. Of course, one generally wants to know the numerical values of the model parameters (we call this kind of answer an estimate of the model parameters). Unfortunately, only very infrequently can an inverse problem be solved in such a way as to yield this kind of exact information. More typically, the practitioner of inverse theory is forced to make various compromises between the kind of information he or she actually wants and the kind of information that can in fact be obtained from any given data set. These compromises lead to other kinds of "answers" that are more abstract than simple estimates of the model parameters. Part of the practice of inverse theory is the identifying of what features of a solution are most valuable and making the compromises that emphasize these features. Some of the possible forms an "answer" to an inverse problem might take are described below.

1.4.1 ESTIMATES OF MODEL PARAMETERS

The simplest kind of solution to an inverse problem is an estimate m^est of the model parameters: simply a set of numerical values for the model parameters. Estimates are generally the most useful kind of solution to an inverse problem. Nevertheless, in many situations they can be very misleading. For instance, estimates in themselves give no insight into the quality of the solution. Depending on the structure of the particular problem, measurement errors might be averaged out (in which case the estimates might be meaningful) or amplified (in which case the estimates might be nonsense). In other problems, many solutions might exist. To single out arbitrarily only one of these solutions and call it m^est gives the false impression that a unique solution has been obtained.

1.4.2 BOUNDING VALUES

One remedy to the problem of defining the quality of an estimate is to state additionally some bounds that define its certainty. These bounds can be either absolute or probabilistic. Absolute bounds imply that the true value of the model parameter lies between two stated values, for example, 1.3 ≤ m_1 ≤ 1.5. Probabilistic bounds imply that the estimate is likely to be between the bounds, with some given degree of certainty. For instance, m_1^est = 1.4 ± 0.1 might mean that there is a 99% probability that m_1 lies between 1.3 and 1.5.

When they exist, bounding values can often provide the supplementary information needed to interpret properly the solution to an inverse problem. There are, however, many instances in which bounding values do not exist.

1.4.3 PROBABILITY DISTRIBUTIONS

A generalization of the stating of bounding values is the stating of the complete probability distribution for the model parameters. The usefulness of this technique depends in part on how complicated the distribution is. If it has only one peak (Fig. 1.5a), then stating the distribution provides little more information than stating an estimate based on the position of the peak's center, with error bounds based on the peak's shape. On the other hand, if the distribution is very complicated (Fig. 1.5c), it is basically uninterpretable, except in the sense that it implies that the model parameter cannot be well estimated. Only in those exceptional instances in which the distribution has some intermediate complexity (Fig. 1.5b) does it really provide information toward the solution of an inverse problem. In practice, the most interesting distributions are exceedingly difficult to compute, so this technique is of rather limited usefulness.

[Fig. 1.5. Probability distributions for a model parameter: (a) a simple single-peaked distribution, (b) a distribution of intermediate complexity, (c) a very complicated distribution.]

1.4.4 WEIGHTED AVERAGES OF MODEL PARAMETERS

In many instances it is possible to identify combinations, or averages, of the model parameters that are in some sense better determined than the model parameters themselves. For instance, given m = [m_1, m_2]^T, it may turn out that the average ⟨m⟩ = 0.2 m_1 + 0.8 m_2 is better determined than either m_1 or m_2. Unfortunately, one might not have the slightest interest in such an average, be it well determined or not, because it may not have physical significance.

Averages can be of considerable interest when the model parameters represent a discretized version of some continuous function. If the weights are large only for a few physically adjacent parameters, then the average is said to be localized. The meaning of the average in such a case is that although the data cannot resolve the model parameters at a particular point, they can resolve the average of the model parameters in the neighborhood of that point.

In the following chapters we shall derive methods for determining each of these different kinds of solutions to inverse problems. We note here, however, that there is a great deal of underlying similarity between these types of "answers." In fact, it will turn out that the same numerical "answer" will be interpretable as any of several classes of solutions.
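The statement that a weighted average can be better determined than any single parameter is easy to demonstrate once a covariance matrix for the estimates is available (covariances are defined in Chapter 2). A minimal sketch follows, with an invented covariance in which the errors of the two estimates are strongly anticorrelated.

```python
import numpy as np

# Hypothetical covariance of two estimated model parameters whose errors
# are strongly anticorrelated (values invented for illustration).
cov_m = np.array([[ 1.00, -0.95],
                  [-0.95,  1.00]])

w = np.array([0.5, 0.5])          # weights defining the average <m> = w . m

# Variance of a linear combination w.m is w^T [cov m] w.
var_avg = w @ cov_m @ w
print("var(m1)  =", cov_m[0, 0])  # 1.00
print("var(<m>) =", var_avg)      # 0.025: far better determined
```

The anticorrelated errors largely cancel in the sum, so the average carries only a small fraction of the variance of either parameter, even though neither parameter is individually well determined.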
2 SOME COMMENTS ON PROBABILITY THEORY

2.1 Noise and Random Variables

In the preceding chapter we represented the results of an experiment as a vector d whose elements were individual measurements. Sometimes, however, a single number is insufficient to represent a single observation. Measurements are known to contain noise, so that if an observation were to be performed several times, each measurement would be different (Fig. 2.1). To characterize the data completely, information about the range and shape of this scatter must also be provided. The concept of a random variable is used to describe this property. Each random variable has definite and precise properties governing the range and shape of the scatter of values one observes. These properties cannot be measured directly, however; one can only make individual measurements, or realizations, of the random variable and try to estimate its true properties from these data.

The true properties of the random variable d are specified by a distribution P(d). This function gives the probability that a particular realization of the random variable will have a value in the neighborhood of d: the probability that the measurement is between d and d + ∂d is P(d) ∂d (Fig. 2.2). We have used the partial derivative sign ∂ only for the sake of clarity, to avoid adding another d to the notation. Since each measurement must have some value, the probability that d lies somewhere between −∞ and +∞ is complete certainty (usually given the value of 100% or unity), which is written as

$$\int_{-\infty}^{+\infty} P(d)\, \partial d = 1$$

The distribution completely describes the random variable. Unfortunately, it is a continuous function that may be quite complicated. It is helpful, therefore, to derive a few numbers from the distribution that try to summarize its major properties. One such kind of number tries to indicate the typical numerical value of a measurement. The most likely measurement is the one with the highest probability (Fig. 2.3).

If the distribution is skewed, the peak, or maximum likelihood point, may not be a good indication of the typical measurement, since a wide range of other values also have high probability. In such instances the mean, or expected measurement, E(d), is a better characterization of a typical measurement. This number is the "balancing point" of the distribution and is given by

$$E(d) = \int_{-\infty}^{+\infty} d\, P(d)\, \partial d$$

Another kind of number indicates the amount of scatter of the measurements about their typical value, that is, the width of the distribution. The usual measure is the variance, defined as

$$\sigma^2 = \int_{-\infty}^{+\infty} \left[d - E(d)\right]^2 P(d)\, \partial d$$

The square root of the variance, σ, is a measure of the width of the distribution.
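These moment integrals are easy to check numerically. The following sketch, written for this discussion rather than taken from the text, discretizes an assumed skewed distribution (a log-normal, chosen only so that the peak and the mean differ visibly) and evaluates the normalization, mean, and variance integrals with simple Riemann sums.

```python
import numpy as np

# An assumed skewed distribution P(d): a log-normal density.
d = np.linspace(1e-6, 10.0, 200_000)
P = np.exp(-0.5 * np.log(d) ** 2) / (d * np.sqrt(2.0 * np.pi))

dd = d[1] - d[0]
area = np.sum(P) * dd                   # int P(d) dd       -> ~1 (certainty)
mean = np.sum(d * P) * dd               # E(d) = int d P dd -> "balancing point"
var = np.sum((d - mean) ** 2 * P) * dd  # sigma^2 = int (d - E)^2 P dd
ml_point = d[np.argmax(P)]              # most likely measurement (the peak)

print(f"area={area:.4f}  mean={mean:.3f}  peak={ml_point:.3f}  std={np.sqrt(var):.3f}")
```

For this skewed distribution the maximum likelihood point (about 0.37) and the mean (about 1.6) differ markedly, which is exactly why the mean is often the safer characterization of a typical measurement.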
2.2 Correlated Data

Experiments usually involve the collection of more than one datum. We therefore need to quantify the probability that a set of random variables will take on a given value. The joint distribution P(d) is the probability that the first datum will be in the neighborhood of d_1, that the second will be in the neighborhood of d_2, etc. If the data are independent, that is, if there are no patterns in the occurrence of the values between two random variables, then this joint distribution is just the product of the individual distributions (Fig. 2.5):

$$P(\mathbf{d}) = P(d_1)\, P(d_2)\, \cdots\, P(d_N)$$

In some experiments, on the other hand, measurements are correlated: high values of one datum tend to occur consistently with either high or low values of another datum (Fig. 2.6). The joint distribution for those two data must be constructed to take this correlation into account. Given a joint distribution, one can test for correlation by selecting a function that divides the (d_1, d_2) plane into four quadrants of alternating sign, centered on the center of the distribution (Fig. 2.7). If one multiplies the distribution by this function and then sums up the area, the result will be zero for uncorrelated distributions, since they tend to lie equally in all four quadrants. Correlated distributions will have either positive or negative area, since they tend to be concentrated in two opposite quadrants (Fig. 2.8). If

$$[d_1 - E(d_1)][d_2 - E(d_2)]$$

is used as the function, the resulting measure of correlation is called the covariance:

$$\operatorname{cov}(d_1, d_2) = \int_{-\infty}^{+\infty}\!\!\int_{-\infty}^{+\infty} [d_1 - E(d_1)][d_2 - E(d_2)]\, P(d_1, d_2)\, \partial d_1\, \partial d_2$$

Note that the covariance of a datum with itself is just the variance. The covariance, therefore, characterizes the basic shape of the joint distribution.

When there are many data given by the vector d, it is convenient to define a vector of expected values and a matrix of covariances as

$$\langle d \rangle_i = E(d_i) = \int_{-\infty}^{+\infty} d_i\, P(\mathbf{d})\, \partial^N d$$

$$[\operatorname{cov}\,\mathbf{d}]_{ij} = \int_{-\infty}^{+\infty} [d_i - \langle d_i \rangle][d_j - \langle d_j \rangle]\, P(\mathbf{d})\, \partial^N d$$

The diagonal elements of the covariance matrix are a measure of the width of the distribution of the data, and the off-diagonal elements indicate the degree to which pairs of data are correlated.

2.3 Functions of Random Variables

The basic premise of inverse theory is that the data and model parameters are related. Any method that solves the inverse problem, that is, estimates a model parameter on the basis of data, will therefore tend to map errors from the data to the estimated model parameters. Thus the estimates of the model parameters are themselves random variables, which are described by a distribution P(m^est). Whether or not the true model parameters are random variables depends on the problem: it is appropriate to consider them deterministic quantities in some problems, and random variables in others. Estimates of the parameters, however, are always random variables.

If the distribution of the data is known, then the distribution for any function of the data, including estimated model parameters, can be found. Consider two uncorrelated data that are known to have white distributions on the interval [0, 1]; that is, they can take on any value between 0 and 1 with equal probability (Fig. 2.9). Suppose that some model parameter is estimated to be the sum of these two data, m_1 = d_1 + d_2. [...]

3 SOLUTION OF THE LINEAR, GAUSSIAN INVERSE PROBLEM, VIEWPOINT 1: THE LENGTH METHOD

3.2 Measures of Length

[...] Vector norms satisfy

$$\|\mathbf{x}\| > 0 \text{ as long as } \mathbf{x} \neq 0 \tag{3.3a}$$
$$\|a\mathbf{x}\| = |a|\,\|\mathbf{x}\| \tag{3.3b}$$
$$\|\mathbf{x} + \mathbf{y}\| \leq \|\mathbf{x}\| + \|\mathbf{y}\| \tag{3.3c}$$

Matrix norms satisfy the analogous relations

$$\|A\| > 0 \text{ as long as } A \neq 0 \tag{3.3d}$$
$$\|aA\| = |a|\,\|A\| \tag{3.3e}$$
$$\|A + B\| \leq \|A\| + \|B\| \tag{3.3f}$$
$$\|A\mathbf{x}\| \leq \|A\|\,\|\mathbf{x}\| \tag{3.3g}$$

Equations (3.3c) and (3.3f) are called triangle inequalities because of their similarity to Pythagoras' law for right triangles.

3.3 Least Squares for a Straight Line

The elementary problem of fitting a straight line to data illustrates the basic procedures applied in this technique. The model is the assertion that the data can be described by the linear equation d_i = m_1 + m_2 z_i. Note that there are two model parameters, M = 2, and that typically there are many more than two data, N > M. Since a line is defined by precisely two points, it is clearly impossible to choose a straight line that passes through every one of the data, except in the rare instance that they all lie precisely on the same straight line. In practice, when measurements are influenced by noise, collinearity rarely occurs.

As we shall discuss in more detail below, the fact that the equation d_i = m_1 + m_2 z_i cannot be satisfied for every i means that the inverse problem is overdetermined, that is, it has no exact solution. One therefore seeks values of the model parameters that solve d_i = m_1 + m_2 z_i approximately, where the goodness of the approximation is defined by the error

$$E = \sum_{i=1}^{N} e_i^2 = \sum_{i=1}^{N} (d_i - m_1 - m_2 z_i)^2$$

This problem is then the elementary calculus problem of locating the minimum of the function E(m_1, m_2) and is solved by setting the derivatives of E to zero and solving the resulting equations:

$$\frac{\partial E}{\partial m_1} = \frac{\partial}{\partial m_1} \sum_{i=1}^{N} (d_i - m_1 - m_2 z_i)^2 = 2Nm_1 + 2m_2 \sum_{i=1}^{N} z_i - 2\sum_{i=1}^{N} d_i = 0$$

$$\frac{\partial E}{\partial m_2} = \frac{\partial}{\partial m_2} \sum_{i=1}^{N} (d_i - m_1 - m_2 z_i)^2 = 2m_1 \sum_{i=1}^{N} z_i + 2m_2 \sum_{i=1}^{N} z_i^2 - 2\sum_{i=1}^{N} z_i d_i = 0$$

These two equations are then solved simultaneously for m_1 and m_2, yielding the classic formulas for the least squares fitting of a line.
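A direct transcription of these two simultaneous equations into code looks as follows. This is a sketch with synthetic data, written for this section; it solves the 2 by 2 system explicitly so that each matrix entry can be matched against the sums in the derivation.

```python
import numpy as np

# Synthetic data scattered around a "true" line d = 2 + 3z (values assumed).
rng = np.random.default_rng(1)
z = np.linspace(0.0, 10.0, 30)
d = 2.0 + 3.0 * z + rng.normal(scale=0.5, size=z.size)
N = z.size

# The two normal equations in matrix form:
#   [ N       sum(z)   ] [m1]   [ sum(d)   ]
#   [ sum(z)  sum(z^2) ] [m2] = [ sum(z*d) ]
A = np.array([[N,       z.sum()],
              [z.sum(), (z ** 2).sum()]])
b = np.array([d.sum(), (z * d).sum()])

m1, m2 = np.linalg.solve(A, b)
print(f"intercept m1 = {m1:.3f}, slope m2 = {m2:.3f}")
```

The same 2 by 2 system reappears in Section 3.5.1 as G^T G m = G^T d, with G the matrix whose rows are [1, z_i].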
3.4 The Least Squares Solution of the Linear Inverse Problem

Least squares can be extended to the general linear inverse problem in a very straightforward manner. Again, one computes the derivative of the error E with respect to one of the model parameters, say m_q, and sets the result to zero. The error E is

$$E = \mathbf{e}^T\mathbf{e} = (\mathbf{d} - G\mathbf{m})^T(\mathbf{d} - G\mathbf{m}) = \sum_{i=1}^{N} \left[d_i - \sum_{j=1}^{M} G_{ij} m_j\right]\left[d_i - \sum_{k=1}^{M} G_{ik} m_k\right]$$

Note that the indices on the sums within the parentheses are different dummy variables, to prevent confusion. Multiplying out the terms gives

$$E = \sum_{j=1}^{M} \sum_{k=1}^{M} m_j m_k \sum_{i=1}^{N} G_{ij} G_{ik} - 2\sum_{j=1}^{M} m_j \sum_{i=1}^{N} G_{ij} d_i + \sum_{i=1}^{N} d_i d_i$$

The derivatives ∂E/∂m_q are now computed. Performing this differentiation term by term gives

$$\frac{\partial}{\partial m_q} \left[\sum_{j=1}^{M} \sum_{k=1}^{M} m_j m_k \sum_{i=1}^{N} G_{ij} G_{ik}\right] = 2 \sum_{k=1}^{M} m_k \sum_{i=1}^{N} G_{iq} G_{ik}$$

for the first term. Note that derivatives of the form ∂m_j/∂m_q are just the Kronecker delta δ_jq: since the m_j are independent variables, the derivative is zero unless j = q. The second term gives

$$\frac{\partial}{\partial m_q} \left[2 \sum_{j=1}^{M} m_j \sum_{i=1}^{N} G_{ij} d_i\right] = 2 \sum_{i=1}^{N} G_{iq} d_i$$

Since the third term does not contain any m's, its derivative is zero:

$$\frac{\partial}{\partial m_q} \left[\sum_{i=1}^{N} d_i d_i\right] = 0$$

Combining the three terms gives

$$\frac{\partial E}{\partial m_q} = 2 \sum_{k=1}^{M} m_k \sum_{i=1}^{N} G_{iq} G_{ik} - 2 \sum_{i=1}^{N} G_{iq} d_i = 0$$

Writing this equation in matrix notation yields

$$G^T G \mathbf{m} - G^T \mathbf{d} = 0$$

Note that the quantity G^T G is a square M × M matrix and that it multiplies a vector m of length M. The quantity G^T d is also a vector of length M. This equation is therefore a square matrix equation for the unknown model parameters. Presuming that [G^T G]^{-1} exists (an important question that we shall return to later), we have the following estimate:

$$\mathbf{m}^{\text{est}} = [G^T G]^{-1} G^T \mathbf{d}$$

which is the least squares solution to the inverse problem Gm = d.

3.5 Some Examples

3.5.1 THE STRAIGHT LINE PROBLEM

In the straight line problem the model is d_i = m_1 + m_2 z_i, so the equation Gm = d has the form

$$\begin{bmatrix} 1 & z_1 \\ 1 & z_2 \\ \vdots & \vdots \\ 1 & z_N \end{bmatrix} \begin{bmatrix} m_1 \\ m_2 \end{bmatrix} = \begin{bmatrix} d_1 \\ d_2 \\ \vdots \\ d_N \end{bmatrix}$$

Forming the matrix products

$$G^T G = \begin{bmatrix} N & \sum z_i \\ \sum z_i & \sum z_i^2 \end{bmatrix} \qquad G^T \mathbf{d} = \begin{bmatrix} \sum d_i \\ \sum z_i d_i \end{bmatrix}$$

yields the least squares solution m^est = [G^T G]^{-1} G^T d, reproducing the equations of Section 3.3.

3.5.2 FITTING A PARABOLA

The problem of fitting a parabola is a trivial generalization of fitting a straight line (Fig. 3.4). Now the model is d_i = m_1 + m_2 z_i + m_3 z_i^2, so the equation Gm = d has the form

$$\begin{bmatrix} 1 & z_1 & z_1^2 \\ 1 & z_2 & z_2^2 \\ \vdots & \vdots & \vdots \\ 1 & z_N & z_N^2 \end{bmatrix} \begin{bmatrix} m_1 \\ m_2 \\ m_3 \end{bmatrix} = \begin{bmatrix} d_1 \\ d_2 \\ \vdots \\ d_N \end{bmatrix}$$

Forming the matrix products

$$G^T G = \begin{bmatrix} N & \sum z_i & \sum z_i^2 \\ \sum z_i & \sum z_i^2 & \sum z_i^3 \\ \sum z_i^2 & \sum z_i^3 & \sum z_i^4 \end{bmatrix} \qquad G^T \mathbf{d} = \begin{bmatrix} \sum d_i \\ \sum z_i d_i \\ \sum z_i^2 d_i \end{bmatrix}$$

gives the least squares solution m^est = [G^T G]^{-1} G^T d.

3.5.3 FITTING A PLANE SURFACE

To fit a plane surface, two auxiliary variables, say x and y, are needed. The model (Fig. 3.5) is d_i = m_1 + m_2 x_i + m_3 y_i, so the equation Gm = d has the form

$$\begin{bmatrix} 1 & x_1 & y_1 \\ 1 & x_2 & y_2 \\ \vdots & \vdots & \vdots \\ 1 & x_N & y_N \end{bmatrix} \begin{bmatrix} m_1 \\ m_2 \\ m_3 \end{bmatrix} = \begin{bmatrix} d_1 \\ d_2 \\ \vdots \\ d_N \end{bmatrix}$$

Forming the matrix products

$$G^T G = \begin{bmatrix} N & \sum x_i & \sum y_i \\ \sum x_i & \sum x_i^2 & \sum x_i y_i \\ \sum y_i & \sum x_i y_i & \sum y_i^2 \end{bmatrix} \qquad G^T \mathbf{d} = \begin{bmatrix} \sum d_i \\ \sum x_i d_i \\ \sum y_i d_i \end{bmatrix}$$

gives the least squares solution m^est = [G^T G]^{-1} G^T d.
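The pattern common to all three examples (build G, form G^T G and G^T d, solve) is mechanical enough that one generic implementation covers every case. The sketch below, with invented plane-surface data, computes the solution both from the normal equations and with numpy's least squares routine.

```python
import numpy as np

# Synthetic observations of a plane d = 1 + 2x - 0.5y (coefficients assumed).
rng = np.random.default_rng(2)
x, y = rng.uniform(0, 10, (2, 40))
d = 1.0 + 2.0 * x - 0.5 * y + rng.normal(scale=0.1, size=x.size)

# Data kernel G with rows [1, x_i, y_i].
G = np.column_stack([np.ones_like(x), x, y])

# Normal equations: m = [G^T G]^{-1} G^T d (fine when G^T G is well conditioned).
m_normal = np.linalg.solve(G.T @ G, G.T @ d)

# Factorization-based solver: numerically safer, same answer here.
m_lstsq, *_ = np.linalg.lstsq(G, d, rcond=None)
print(m_normal, m_lstsq)
```

Forming G^T G explicitly squares the condition number of the problem, which is why library routines that factor G directly are usually preferred for ill-conditioned cases.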
3.6 The Existence of the Least Squares Solution

The least squares solution arose from consideration of an inverse problem that had no exact solution. Since there was no exact solution, we chose to do the next best thing: to estimate the solution by those values of the model parameters that gave the best approximate solution (where "best" meant minimizing the L2 prediction error). By writing a single formula m^est = [G^T G]^{-1} G^T d, we implicitly assumed that there was only one such "best" solution. As we shall prove later, least squares fails if the number of solutions that give the same minimum prediction error is greater than one.

To see that least squares fails for problems with nonunique solutions, consider the straight line problem with only one data point (Fig. 3.6). It is clear that this problem is nonunique: many possible lines pass through the point, and each has zero prediction error. The least squares solution then contains the matrix

$$G^T G = \begin{bmatrix} 1 \\ z_1 \end{bmatrix} \begin{bmatrix} 1 & z_1 \end{bmatrix} = \begin{bmatrix} 1 & z_1 \\ z_1 & z_1^2 \end{bmatrix}$$

The inverse of a matrix is proportional to the reciprocal of the determinant of the matrix, so that

$$[G^T G]^{-1} \propto \frac{1}{z_1^2 - z_1^2} = \frac{1}{0}$$

This expression is clearly singular: the formula for the least squares solution fails.

The question of whether the equation Gm = d provides enough information to specify uniquely the model parameters serves as a basis for classifying inverse problems. A classification system based on this criterion is discussed in the following sections (3.6.1 to 3.6.3).

3.6.1 UNDERDETERMINED PROBLEMS

When the equation Gm = d does not provide enough information to determine uniquely all the model parameters, the problem is said to be underdetermined. As we saw in the example above, this can happen if there are several solutions that have zero prediction error. From elementary linear algebra we know that underdetermined problems occur when there are more unknowns than data, that is, when M > N.

We must note, however, that there is no special reason why the prediction error must be zero for an underdetermined problem. Frequently the data uniquely determine some of the model parameters but not others. For example, consider the acoustic experiment in Figure 3.7. Since no measurements are made of the acoustic slowness in the second brick, it is clear that this model parameter is completely unconstrained by the data. In contrast, the acoustic slowness of the first brick is overdetermined, since in the presence of measurement noise no choice of s_1 can satisfy the data exactly. The equation describing this experiment is

$$\begin{bmatrix} t_1 \\ t_2 \\ \vdots \\ t_N \end{bmatrix} = h \begin{bmatrix} 1 & 0 \\ 1 & 0 \\ \vdots & \vdots \\ 1 & 0 \end{bmatrix} \begin{bmatrix} s_1 \\ s_2 \end{bmatrix}$$

where the s_i are the slownesses in the bricks, h is the brick width, and the t_i are the measurements of travel time. If one were to attempt to solve this problem with least squares, one would find that the term [G^T G]^{-1} is singular. Even though N > M, the problem is still underdetermined, since the data kernel has a very poor structure. Although this is a rather trivial case in which only some of the model parameters are underdetermined, analogous situations arise in more complicated experiments.

We shall refer to underdetermined problems that have nonzero prediction error as mixed-determined problems, to distinguish them from purely underdetermined problems that have zero prediction error.

[Fig. 3.7. An acoustic travel time experiment in which source S and receiver R are positioned so that every measured ray passes through only the first brick.]

3.6.2 EVEN-DETERMINED PROBLEMS

In even-determined problems there is exactly enough information to determine the model parameters. There is only one solution, and it has zero prediction error.

3.6.3 OVERDETERMINED PROBLEMS

When there is too much information contained in the equation Gm = d for it to possess an exact solution, we speak of it as being overdetermined. This is the case in which we can employ least squares to select a "best" approximate solution. Overdetermined problems typically have more data than unknowns, that is, N > M, although for the reasons discussed above it is possible to have problems that are to some degree overdetermined even when N < M and to have problems that are to some degree underdetermined even when N > M.

To deal successfully with the full range of inverse problems, we shall need to be able to characterize whether an inverse problem is under- or overdetermined (or some combination of the two).
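The brick experiment makes the notion of "poor structure" easy to see numerically: what matters is the rank of G, not the number of rows. A small sketch along these lines, with invented travel times:

```python
import numpy as np

h = 1.0                                    # brick width (assumed)
G = h * np.array([[1.0, 0.0]] * 4)         # four rays, all through brick 1 only

GtG = G.T @ G
print("N, M =", G.shape)                         # (4, 2): more data than unknowns
print("rank of G =", np.linalg.matrix_rank(G))   # 1 < M: underdetermined
print("det(GtG) =", np.linalg.det(GtG))          # 0: normal equations singular

t = np.array([1.1, 0.9, 1.0, 1.05])        # noisy travel times (invented)
# lstsq still returns an answer by picking the minimum-norm solution (s2 = 0).
s, *_ = np.linalg.lstsq(G, t, rcond=None)
print("lstsq solution:", s)
```

Here [G^T G]^{-1} does not exist, exactly as stated; numpy's lstsq sidesteps the failure by returning the shortest of the infinitely many candidates, an idea developed in Section 3.7 and Chapter 7.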
We shall develop quantitative methods for making this characterization in Chapter 7. For the moment we assume that it is possible to characterize the problem intuitively on the basis of the kind of experiment the problem represents.

3.7 The Purely Underdetermined Problem

Suppose that an inverse problem Gm = d has been identified as one that is purely underdetermined. For simplicity, assume that there are fewer equations than unknown model parameters, that is, N < M, and that there are no inconsistencies in these equations. It is therefore possible to find more than one solution for which the prediction error E is zero. (In fact, we shall show that underdetermined linear inverse problems have an infinite number of such solutions.) Although the data provide information about the model parameters, they do not provide enough to determine them uniquely.

To obtain a solution m^est to the inverse problem, we must have some means of singling out precisely one of the infinite number of solutions with zero prediction error E. To do this, we must add to the problem some information not contained in the equation Gm = d. This extra information is called a priori information [Ref. 10]. A priori information can take many forms, but in each case it quantifies expectations about the character of the solution that are not based on the actual data.

For instance, in the case of fitting a straight line through a single data point, one might have the expectation that the line also passes through the origin. This a priori information now provides enough information to solve the inverse problem uniquely, since two points (one datum, one a priori) determine a line.

3.10 Other Types of A Priori Information

[...] Such constraints apply to other cases in which the solution is known to possess some kind of bounds. One could therefore propose a new kind of constrained least squares solution of overdetermined problems, one that minimizes the error subject to the given inequality constraints. A priori inequality constraints also have application to underdetermined problems: one can find the smallest solution that satisfies both Gm = d and Fm ≥ h. These problems can be solved in a straightforward fashion, which will be discussed in Chapter 7.

3.11 The Variance of the Model Parameter Estimates

The data invariably contain noise that causes errors in the estimates of the model parameters. We can calculate how this measurement error "maps into" errors in m^est by noting that all of the formulas derived above for estimates of the model parameters are linear functions of the data, of the form m^est = Md + v, where M is some matrix and v some vector. Therefore, if we assume that the data have a distribution characterized by some covariance matrix [cov d], the estimates of the model parameters have a distribution characterized by a covariance matrix [cov m] = M [cov d] M^T. The covariance of the solution can therefore be calculated in a straightforward fashion.

If the data are uncorrelated and all of equal variance σ_d², then very simple formulas are obtained for the covariance of some of the more simple inverse problem solutions. The simple least squares solution m^est = [G^T G]^{-1} G^T d has covariance

$$[\operatorname{cov}\,\mathbf{m}] = \left\{[G^T G]^{-1} G^T\right\} [\sigma_d^2 I] \left\{[G^T G]^{-1} G^T\right\}^T = \sigma_d^2 [G^T G]^{-1} \tag{3.48}$$

and the simple minimum length solution m^est = G^T [G G^T]^{-1} d has covariance

$$[\operatorname{cov}\,\mathbf{m}] = \left\{G^T [G G^T]^{-1}\right\} [\sigma_d^2 I] \left\{G^T [G G^T]^{-1}\right\}^T = \sigma_d^2 G^T [G G^T]^{-2} G \tag{3.49}$$
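Equation (3.48) can be sanity-checked by simulation. The sketch below, with synthetic straight-line data invented for this note, generates many noisy realizations of the same experiment, recomputes the least squares solution for each, and compares the empirical covariance of the estimates with σ_d² [G^T G]^{-1}.

```python
import numpy as np

rng = np.random.default_rng(3)
z = np.linspace(0.0, 10.0, 25)
G = np.column_stack([np.ones_like(z), z])   # straight-line data kernel
m_true = np.array([2.0, 3.0])
sigma_d = 0.5

# Many repeated experiments with fresh noise each time.
trials = 20_000
D = G @ m_true + rng.normal(scale=sigma_d, size=(trials, z.size))
M_ls = np.linalg.solve(G.T @ G, G.T @ D.T).T   # least squares estimate per trial

cov_empirical = np.cov(M_ls.T)
cov_theory = sigma_d ** 2 * np.linalg.inv(G.T @ G)
print(np.round(cov_empirical, 5))
print(np.round(cov_theory, 5))
```

The two matrices agree to within sampling error, including the negative intercept-slope covariance that comes from the off-diagonal term of [G^T G]^{-1}.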
3.12 Variance and Prediction Error of the Least Squares Solution

If the prediction error E(m) = e^T e of an overdetermined problem has a very sharp minimum in the vicinity of the estimated solution m^est, we would expect that the solution is well determined in the sense that it has small variance: small errors in determining the shape of E(m) due to random fluctuations in the data lead to only small errors in m^est (Fig. 3.10a). Conversely, if E(m) has a broad minimum, we expect that m^est has a large variance (Fig. 3.10b). Since the curvature of a function is a measure of the sharpness of its minimum, we expect that the variance of the solution is related to the curvature of E(m) at its minimum, which can be quantified by expanding E in a Taylor series about m^est:

$$E(\mathbf{m}) = E(\mathbf{m}^{\text{est}}) + \frac{1}{2} (\mathbf{m} - \mathbf{m}^{\text{est}})^T \left[\frac{\partial^2 E}{\partial \mathbf{m}^2}\right]_{\mathbf{m}=\mathbf{m}^{\text{est}}} (\mathbf{m} - \mathbf{m}^{\text{est}})$$

[Fig. 3.10. (a) If the minimum of E(m) is sharp, small random fluctuations in E lead to only small errors in m^est. (b) If the minimum is broad, the same fluctuations produce large errors in m^est.]

Note that the first-order term is zero, since the expansion is made at a minimum. The second derivative can also be computed directly from the expression

$$E(\mathbf{m}) = \mathbf{e}^T\mathbf{e} = [\mathbf{d} - G\mathbf{m}]^T[\mathbf{d} - G\mathbf{m}] \qquad \text{whence} \qquad \frac{1}{2}\frac{\partial^2 E}{\partial \mathbf{m}^2} = G^T G$$

The covariance of the least squares solution (assuming uncorrelated data, all with equal variance σ_d²) is therefore

$$[\operatorname{cov}\,\mathbf{m}] = \sigma_d^2 [G^T G]^{-1} = \sigma_d^2 \left[\frac{1}{2}\frac{\partial^2 E}{\partial \mathbf{m}^2}\right]^{-1}_{\mathbf{m}=\mathbf{m}^{\text{est}}}$$

The prediction error E = e^T e is the sum of squares of Gaussian data minus a constant. It is, therefore, a random variable with a χ² distribution with N − M degrees of freedom, which has mean (N − M)σ_d² and variance 2(N − M)σ_d⁴. (The degrees of freedom are reduced by M since the model can force M linear combinations of the e_i to zero.) We can use the standard deviation of E, σ_E = [2(N − M)]^{1/2} σ_d², in the expression for the variance:

$$[\operatorname{cov}\,\mathbf{m}] = \sigma_d^2 [G^T G]^{-1} = \frac{\sigma_E}{[2(N-M)]^{1/2}} \left[\frac{1}{2}\frac{\partial^2 E}{\partial \mathbf{m}^2}\right]^{-1}_{\mathbf{m}=\mathbf{m}^{\text{est}}}$$

The covariance [cov m] can be interpreted as being controlled either by the variance of the data times a measure of how error in the data is mapped into error in the model parameters, or by the standard deviation of the prediction error times a measure of the curvature of the prediction error at its minimum.
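The claim that E has mean (N − M)σ_d², which underlies the common practice of estimating the data variance as E/(N − M) when it is not known in advance, can be checked the same way. A small sketch with assumed values:

```python
import numpy as np

rng = np.random.default_rng(4)
N, M = 25, 2
z = np.linspace(0.0, 10.0, N)
G = np.column_stack([np.ones(N), z])
m_true = np.array([2.0, 3.0])
sigma_d = 0.5

E = []
for _ in range(10_000):
    d = G @ m_true + rng.normal(scale=sigma_d, size=N)
    m_est = np.linalg.solve(G.T @ G, G.T @ d)
    e = d - G @ m_est
    E.append(e @ e)                        # prediction error e^T e
E = np.array(E)

print("mean(E)           =", E.mean())    # close to (N - M) sigma_d^2 = 5.75
print("(N-M) sigma_d^2   =", (N - M) * sigma_d ** 2)
print("var(E)            =", E.var())     # close to 2 (N - M) sigma_d^4 = 2.875
```

Both the simulated mean and variance of E match the chi-squared predictions, so E/(N − M) serves as an unbiased estimate of σ_d².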
