Numerical Methods for Engineers and Scientists

Second Edition, Revised and Expanded

Joe D. Hoffman
Department of Mechanical Engineering, Purdue University, West Lafayette, Indiana

MARCEL DEKKER, INC.
NEW YORK · BASEL

The first edition of this book was published by McGraw-Hill, Inc. (New York, 1992).

ISBN: 0-8247-0443-6

This book is printed on acid-free paper.

Headquarters
Marcel Dekker, Inc.
270 Madison Avenue, New York, NY 10016
tel: 212-696-9000; fax: 212-685-4540

Eastern Hemisphere Distribution
Marcel Dekker AG
Hutgasse 4, Postfach 812, CH-4001 Basel, Switzerland
tel: 41-61-261-8482; fax: 41-61-261-8896

World Wide Web
http://www.dekker.com

The publisher offers discounts on this book when ordered in bulk quantities. For more information, write to Special Sales/Professional Marketing at the headquarters address above.

Copyright © 2001 by Marcel Dekker, Inc. All Rights Reserved.

Neither this book nor any part may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, microfilming, and recording, or by any information storage and retrieval system, without permission in writing from the publisher.

Current printing (last digit): 10 9 8 7 6 5 4 3 2 1

PRINTED IN THE UNITED STATES OF AMERICA

To Cynthia Louise Hoffman

Preface
The second edition of this book contains several major improvements over the first edition. Some of these improvements involve format and presentation philosophy, and some of the changes involve old material which has been deleted and new material which has been added.

Each chapter begins with a chapter table of contents. The first figure carries a sketch of the application used as the example problem in the chapter. Section 1 of each chapter is an introduction to the chapter, which discusses the example application, the general subject matter of the chapter, special features, and solution approaches. The objectives of the chapter are presented, and the organization of the chapter is illustrated pictorially. Each chapter ends with a summary section, which presents a list of recommendations, dos and don'ts, and a list of what you should be able to do after studying the chapter. This list is actually an itemization of what the student should have learned from the chapter. It serves as a list of objectives, a study guide, and a review guide for the chapter.

Chapter 0, Introduction, has been added to give a thorough introduction to the book and to present several fundamental concepts of relevance to the entire book.

Chapters 1 to 6, which comprise Part I, Basic Tools of Numerical Analysis, have been expanded to include more approaches for solving problems. Discussions of pitfalls of selected algorithms have been added where appropriate. Part I is suitable for second-semester sophomores or first-semester juniors through beginning graduate students.

Chapters 7 and 8, which comprise Part II, Ordinary Differential Equations, have been rewritten to get to the methods for solving problems more quickly, with less emphasis on theory. A new section presenting extrapolation methods has been added in Chapter 7. All of the material has been rewritten to flow more smoothly with less repetition and less theoretical background. Part II is suitable for juniors through graduate students.

Chapters 9 to 15 of the first edition, which comprised Part III, Partial Differential Equations, have been shortened considerably to only four chapters in the present edition. Chapter 9 introduces elliptic partial differential equations, Chapter 10 introduces parabolic partial differential equations, and Chapter 11 introduces hyperbolic partial differential equations. These three chapters are a major condensation of the material in Part III of the first edition. The material has been revised to flow more smoothly with less emphasis on theoretical background. A new chapter, Chapter 12, The Finite Element Method, has been added to present an introduction to that important method for solving differential equations.

A new section, Programs, has been added to each chapter. This section presents several FORTRAN programs for implementing the algorithms developed in each chapter to solve the example application for that chapter. The application subroutines are written in a form similar to pseudocode to facilitate the implementation of the algorithms in other programming languages.


More examples and more problems have been added throughout the book. The overall objective of the second edition is to improve the presentation format and material content of the first edition in a manner that not only maintains but enhances the usefulness and ease of use of the first edition.

Many people have contributed to the writing of this book. All of the people acknowledged in the Preface to the First Edition are again acknowledged, especially my loving wife, Cynthia Louise Hoffman. My many graduate students provided much help and feedback, especially Drs. D. Hofer, R. Harwood, R. Moore, and R. Stwalley. Thanks, guys. All of the figures were prepared by Mr. Mark Bass. Thanks, Mark. Once again, my expert word processing specialist, Ms. Janice Napier, devoted herself unsparingly to this second edition. Thank you, Janice. Finally, I would like to acknowledge my colleague, Mr. B. J. Clark, Executive Acquisitions Editor at Marcel Dekker, Inc., for his encouragement and support during the preparation of both editions of this book.

Joe D. Hoffman

Contents

Preface

Chapter 0. Introduction
0.1. Objectives and Approach
0.2. Organization of the Book
0.3. Examples
0.4. Programs
0.5. Problems
0.6. Significant Digits, Precision, Accuracy, Errors, and Number Representation
0.7. Software Packages and Libraries
0.8. The Taylor Series and the Taylor Polynomial

Part I. Basic Tools of Numerical Analysis
I.1. Systems of Linear Algebraic Equations
I.2. Eigenproblems
I.3. Roots of Nonlinear Equations
I.4. Polynomial Approximation and Interpolation
I.5. Numerical Differentiation and Difference Formulas
I.6. Numerical Integration
I.7. Summary

Chapter 1. Systems of Linear Algebraic Equations
1.1. Introduction
1.2. Properties of Matrices and Determinants
1.3. Direct Elimination Methods
1.4. LU Factorization
1.5. Tridiagonal Systems of Equations
1.6. Pitfalls of Elimination Methods
1.7. Iterative Methods
1.8. Programs
1.9. Summary
Exercise Problems

Chapter 2. Eigenproblems
2.1. Introduction
2.2. Mathematical Characteristics of Eigenproblems
2.3. The Power Method
2.4. The Direct Method
2.5. The QR Method
2.6. Eigenvectors
2.7. Other Methods
2.8. Programs
2.9. Summary
Exercise Problems

Chapter 3. Nonlinear Equations
3.1. Introduction
3.2. General Features of Root Finding
3.3. Closed Domain (Bracketing) Methods
3.4. Open Domain Methods
3.5. Polynomials
3.6. Pitfalls of Root Finding Methods and Other Methods of Root Finding
3.7. Systems of Nonlinear Equations
3.8. Programs
3.9. Summary
Exercise Problems

Chapter 4. Polynomial Approximation and Interpolation
4.1. Introduction
4.2. Properties of Polynomials
4.3. Direct Fit Polynomials
4.4. Lagrange Polynomials
4.5. Divided Difference Tables and Divided Difference Polynomials
4.6. Difference Tables and Difference Polynomials
4.7. Inverse Interpolation
4.8. Multivariate Approximation
4.9. Cubic Splines
4.10. Least Squares Approximation
4.11. Programs
4.12. Summary
Exercise Problems

Chapter 5. Numerical Differentiation and Difference Formulas
5.1. Introduction
5.2. Unequally Spaced Data
5.3. Equally Spaced Data
5.4. Taylor Series Approach
5.5. Difference Formulas
5.6. Error Estimation and Extrapolation
5.7. Programs
5.8. Summary
Exercise Problems

Chapter 6. Numerical Integration
6.1. Introduction
6.2. Direct Fit Polynomials
6.3. Newton-Cotes Formulas
6.4. Extrapolation and Romberg Integration
6.5. Adaptive Integration
6.6. Gaussian Quadrature
6.7. Multiple Integrals
6.8. Programs
6.9. Summary
Exercise Problems

Part II. Ordinary Differential Equations
II.1. Introduction
II.2. General Features of Ordinary Differential Equations
II.3. Classification of Ordinary Differential Equations
II.4. Classification of Physical Problems
II.5. Initial-Value Ordinary Differential Equations
II.6. Boundary-Value Ordinary Differential Equations
II.7. Summary

Chapter 7. One-Dimensional Initial-Value Ordinary Differential Equations
7.1. Introduction
7.2. General Features of Initial-Value ODEs
7.3. The Taylor Series Method
7.4. The Finite Difference Method
7.5. The First-Order Euler Methods
7.6. Consistency, Order, Stability, and Convergence
7.7. Single-Point Methods
7.8. Extrapolation Methods
7.9. Multipoint Methods
7.10. Summary of Methods and Results
7.11. Nonlinear Implicit Finite Difference Equations
7.12. Higher-Order Ordinary Differential Equations
7.13. Systems of First-Order Ordinary Differential Equations
7.14. Stiff Ordinary Differential Equations
7.15. Programs
7.16. Summary
Exercise Problems

Chapter 8. One-Dimensional Boundary-Value Ordinary Differential Equations
8.1. Introduction
8.2. General Features of Boundary-Value ODEs
8.3. The Shooting (Initial-Value) Method
8.4. The Equilibrium (Boundary-Value) Method
8.5. Derivative (and Other) Boundary Conditions
8.6. Higher-Order Equilibrium Methods
8.7. The Equilibrium Method for Nonlinear Boundary-Value Problems
8.8. The Equilibrium Method on Nonuniform Grids
8.9. Eigenproblems
8.10. Programs
8.11. Summary
Exercise Problems

Part III. Partial Differential Equations
III.1. Introduction
III.2. General Features of Partial Differential Equations
III.3. Classification of Partial Differential Equations
III.4. Classification of Physical Problems
III.5. Elliptic Partial Differential Equations
III.6. Parabolic Partial Differential Equations
III.7. Hyperbolic Partial Differential Equations
III.8. The Convection-Diffusion Equation
III.9. Initial Values and Boundary Conditions
III.10. Well-Posed Problems
III.11. Summary

Chapter 9. Elliptic Partial Differential Equations
9.1. Introduction
9.2. General Features of Elliptic PDEs
9.3. The Finite Difference Method
9.4. Finite Difference Solution of the Laplace Equation
9.5. Consistency, Order, and Convergence
9.6. Iterative Methods of Solution
9.7. Derivative Boundary Conditions
9.8. Finite Difference Solution of the Poisson Equation
9.9. Higher-Order Methods
9.10. Nonrectangular Domains
9.11. Nonlinear Equations and Three-Dimensional Problems
9.12. The Control Volume Method
9.13. Programs
9.14. Summary
Exercise Problems

Chapter 10. Parabolic Partial Differential Equations
10.1. Introduction
10.2. General Features of Parabolic PDEs
10.3. The Finite Difference Method
10.4. The Forward-Time Centered-Space (FTCS) Method
10.5. Consistency, Order, Stability, and Convergence
10.6. The Richardson and DuFort-Frankel Methods
10.7. Implicit Methods
10.8. Derivative Boundary Conditions
10.9. Nonlinear Equations and Multidimensional Problems
10.10. The Convection-Diffusion Equation
10.11. Asymptotic Steady State Solution to Propagation Problems
10.12. Programs
10.13. Summary
Exercise Problems

Chapter 11. Hyperbolic Partial Differential Equations
11.1. Introduction
11.2. General Features of Hyperbolic PDEs
11.3. The Finite Difference Method
11.4. The Forward-Time Centered-Space (FTCS) Method and the Lax Method
11.5. Lax-Wendroff Type Methods
11.6. Upwind Methods
11.7. The Backward-Time Centered-Space (BTCS) Method
11.8. Nonlinear Equations and Multidimensional Problems
11.9. The Wave Equation
11.10. Programs
11.11. Summary
Exercise Problems

Chapter 12. The Finite Element Method
12.1. Introduction
12.2. The Rayleigh-Ritz, Collocation, and Galerkin Methods
12.3. The Finite Element Method for Boundary-Value Problems
12.4. The Finite Element Method for the Laplace (Poisson) Equation
12.5. The Finite Element Method for the Diffusion Equation
12.6. Programs
12.7. Summary
Exercise Problems

References
Answers to Selected Problems
Index

Introduction
0.1. Objectives and Approach
0.2. Organization of the Book
0.3. Examples
0.4. Programs
0.5. Problems
0.6. Significant Digits, Precision, Accuracy, Errors, and Number Representation
0.7. Software Packages and Libraries
0.8. The Taylor Series and the Taylor Polynomial

This Introduction contains a brief description of the objectives, approach, and organization of the book. The philosophy behind the Examples, Programs, and Problems is discussed. Several years' experience with the first edition of the book has identified several simple, but significant, concepts which are relevant throughout the book, but the place to include them is not clear. These concepts, which are presented in this Introduction, include the definitions of significant digits, precision, accuracy, and errors, and a discussion of number representation. A brief description of software packages and libraries is presented. Last, the Taylor series and the Taylor polynomial, which are indispensable in developing and understanding many numerical algorithms, are presented and discussed.

0.1 OBJECTIVES AND APPROACH

The objective of this book is to introduce the engineer and scientist to numerical methods which can be used to solve mathematical problems arising in engineering and science that cannot be solved by exact methods. With the general accessibility of high-speed digital computers, it is now possible to obtain rapid and accurate solutions to many complex problems that face the engineer and scientist.

The approach taken is as follows:
1. Introduce a type of problem.
2. Present sufficient background to understand the problem and possible methods of solution.
3. Develop one or more numerical methods for solving the problem.
4. Illustrate the numerical methods with examples.

In most cases, the numerical methods presented to solve a particular problem proceed from simple methods to complex methods, which in many cases parallels the chronological development of the methods. Some poor methods and some bad methods, as well as good methods, are presented for pedagogical reasons. Why one method does not work is almost as important as why another method does work.

0.2 ORGANIZATION OF THE BOOK

The material in the book is divided into three main parts:

I. Basic Tools of Numerical Analysis
II. Ordinary Differential Equations
III. Partial Differential Equations

Part I considers many of the basic problems that arise in all branches of engineering and science. These problems include: solution of systems of linear algebraic equations, eigenproblems, solution of nonlinear equations, polynomial approximation and interpolation, numerical differentiation and difference formulas, and numerical integration. These topics are important both in their own right and as the foundation for Parts II and III.

Part II is devoted to the numerical solution of ordinary differential equations (ODEs). The general features of ODEs are discussed. The two classes of ODEs (i.e., initial-value ODEs and boundary-value ODEs) are introduced, and the two types of physical problems (i.e., propagation problems and equilibrium problems) are discussed. Numerous numerical methods for solving ODEs are presented.

Part III is devoted to the numerical solution of partial differential equations (PDEs). Some general features of PDEs are discussed. The three classes of PDEs (i.e., elliptic PDEs, parabolic PDEs, and hyperbolic PDEs) are introduced, and the two types of physical problems (i.e., equilibrium problems and propagation problems) are discussed. Several model PDEs are presented. Numerous numerical methods for solving the model PDEs are presented.

The material presented in this book is an introduction to numerical methods. Many practical problems can be solved by the methods presented here. Many other practical problems require other or more advanced numerical methods. Mastery of the material presented in this book will prepare engineers and scientists to solve many of their everyday problems, give them the insight to recognize when other methods are required, and give them the background to study other methods in other books and journals.

0.3 EXAMPLES

All of the numerical methods presented in this book are illustrated by applying them to solve an example problem. Each chapter has one or two example problems, which are solved by all of the methods presented in the chapter. This approach allows the analyst to compare various methods for the same problem, so accuracy, efficiency, robustness, and ease of application of the various methods can be evaluated.


Most of the example problems are rather simple and straightforward, thus allowing the special features of the various methods to be demonstrated clearly. All of the example problems have exact solutions, so the errors of the various methods can be compared. Each example problem begins with a reference to the problem to be solved, a description of the numerical method to be employed, details of the calculations for at least one application of the algorithm, and a summary of the remaining results. Some comments about the solution are presented at the end of the calculations in most cases.

0.4 PROGRAMS

Most numerical algorithms are generally expressed in the form of a computer program. This is especially true for algorithms that require a lot of computational effort and for algorithms that are applied many times. Several programming languages are available for preparing computer programs: FORTRAN, Basic, C, PASCAL, etc., and their variations, to name a few. Pseudocode, which is a set of instructions for implementing an algorithm expressed in conceptual form, is also quite popular. Pseudocode can be expressed in the detailed form of any specific programming language.

FORTRAN is one of the oldest programming languages. When carefully prepared, FORTRAN can approach pseudocode. Consequently, the programs presented in this book are written in simple FORTRAN. There are several vintages of FORTRAN: FORTRAN I, FORTRAN II, FORTRAN 66, 77, and 90. The programs presented in this book are compatible with FORTRAN 77 and 90.

Several programs are presented in each chapter for implementing the more prominent numerical algorithms presented in the chapter. Each program is applied to solve the example problem relevant to that chapter. The implementation of the numerical algorithm is contained within a completely self-contained application subroutine which can be used in other programs. These application subroutines are written as simply as possible so that conversion to other programming languages is as straightforward as possible. These subroutines can be used as they stand or easily modified for other applications.

Each application subroutine is accompanied by a program main. The variables employed in the application subroutine are defined by comment statements in program main. The numerical values of the variables are defined in program main, which then calls the application subroutine to solve the example problem and to print the solution. These main programs are not intended to be convertible to other programming languages. In some problems where a function of some type is part of the specification of the problem, that function is defined in a function subprogram which is called by the application subroutine.

FORTRAN compilers do not distinguish between uppercase and lowercase letters. FORTRAN programs are conventionally written in uppercase letters. However, in this book, all FORTRAN programs are written in lowercase letters.
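As a concrete illustration of this structure, the following minimal sketch (not one of the book's actual programs; the subroutine name sum1 and the data values are hypothetical) shows a program main that defines its variables in comment statements and calls a self-contained application subroutine:

      program main
c     n      number of data values
c     x(n)   array of data values
c     s      sum of the data values, returned by subroutine sum1
      integer n
      real x(3), s
      data n, x / 3, 1.0, 2.0, 3.0 /
      call sum1 (n, x, s)
      print *, 'sum = ', s
      stop
      end

      subroutine sum1 (n, x, s)
c     self-contained application subroutine, usable as is in other programs
      integer n, i
      real x(n), s
      s = 0.0
      do 10 i = 1, n
        s = s + x(i)
   10 continue
      return
      end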

0.5 PROBLEMS

Two types of problems are presented at the end of each chapter:
1. Exercise problems
2. Applied problems


Exercise problems are straightforward problems designed to give practice in the application of the numerical algorithms presented in each chapter. Exercise problems emphasize the mechanics of the methods. Applied problems involve more applied engineering and scientific applications which require numerical solutions.

Many of the problems can be solved by hand calculation. A large number of the problems require a lot of computational effort. Those problems should be solved by writing a computer program to perform the calculations. Even in those cases, however, it is recommended that one or two passes through the algorithm be made by hand calculation to ensure that the analyst fully understands the details of the algorithm. These results also can be used to validate the computer program.

Answers to selected problems are presented in a section at the end of the book. All of the problems for which answers are given are denoted by an asterisk appearing with the corresponding problem number in the problem sections at the end of each chapter. The Solutions Manual contains the answers to nearly all of the problems.

0.6 SIGNIFICANT DIGITS, PRECISION, ACCURACY, ERRORS, AND NUMBER REPRESENTATION

Numerical calculations obviously involve the manipulation (i.e., addition, multiplication, etc.) of numbers. Numbers can be integers (e.g., 4, 17, -23, etc.), fractions (e.g., -2/3, etc.), or an infinite string of digits (e.g., π = 3.1415926535...). When dealing with numerical values and numerical calculations, there are several concepts that must be considered:
1. Significant digits
2. Precision and accuracy
3. Errors
4. Number representation

These concepts are discussed briefly in this section.

Significant Digits

The significant digits, or figures, in a number are the digits of the number which are known to be correct. Engineering and scientific calculations generally begin with a set of data having a known number of significant digits. When these numbers are processed through a numerical algorithm, it is important to be able to estimate how many significant digits are present in the final computed result.

Precision and Accuracy

Precision refers to how closely a number represents the number it is representing. Accuracy refers to how closely a number agrees with the true value of the number it is representing. Precision is governed by the number of digits being carried in the numerical calculations. Accuracy is governed by the errors in the numerical approximation. Precision and accuracy are quantified by the errors in a numerical calculation.

Errors

The accuracy of a numerical calculation is quantified by the error of the calculation. Several types of errors can occur in numerical calculations:
1. Errors in the parameters of the problem (assumed nonexistent).
2. Algebraic errors in the calculations (assumed nonexistent).
3. Iteration errors.
4. Approximation errors.
5. Roundoff errors.

Iteration error is the error in an iterative method that approaches the exact solution of an exact problem asymptotically. Iteration errors must decrease toward zero as the iterative process progresses. The iteration error itself may be used to determine the successive approximations to the exact solution. Iteration errors can be reduced to the limit of the computing device. The errors in the solution of a system of linear algebraic equations by the successive-over-relaxation (SOR) method presented in Section 1.5 are examples of this type of error.

Approximation error is the difference between the exact solution of an exact problem and the exact solution of an approximation of the exact problem. Approximation error can be reduced only by choosing a more accurate approximation of the exact problem. The error in the approximation of a function by a polynomial, as described in Chapter 4, is an example of this type of error. The error in the solution of a differential equation where the exact derivatives are replaced by algebraic difference approximations, which have truncation errors, is another example of this type of error.

Roundoff error is the error caused by the finite word length employed in the calculations. Roundoff error is more significant when small differences between large numbers are calculated. Most computers have either 32 bit or 64 bit word length, corresponding to approximately 7 or 13 significant decimal digits, respectively. Some computers have extended precision capability, which increases the number of bits to 128. Care must be exercised to ensure that enough significant digits are maintained in numerical calculations so that roundoff is not significant.

Number Representation

Numbers are represented in number systems. Any number of bases can be employed as the base of a number system, for example, the base 10 (i.e., decimal) system, the base 8 (i.e., octal) system, the base 2 (i.e., binary) system, etc. The base 10, or decimal, system is the most common system used for human communication. Digital computers use the base 2, or binary, system. In a digital computer, a binary number consists of a number of binary bits. The number of binary bits in a binary number determines the precision with which the binary number represents a decimal number. The most common size binary number is a 32 bit number, which can represent approximately seven digits of a decimal number. Some digital computers have 64 bit binary numbers, which can represent 13 to 14 decimal digits. In many engineering and scientific calculations, 32 bit arithmetic is adequate. However, in many other applications, 64 bit arithmetic is required. In a few special situations, 128 bit arithmetic may be required. On 32 bit computers, 64 bit arithmetic, or even 128 bit arithmetic, can be accomplished using software enhancements. Such calculations are called double precision or quad precision, respectively. Such software enhanced precision can require as much as 10 times the execution time of a single precision calculation.


Consequently, some care must be exercised when deciding whether or not higher precision arithmetic is required. All of the examples in this book are evaluated using 64 bit arithmetic to ensure that roundoff is not significant.

Except for integers and some fractions, all binary representations of decimal numbers are approximations, owing to the finite word length of binary numbers. Thus, some loss of precision in the binary representation of a decimal number is unavoidable. When binary numbers are combined in arithmetic operations such as addition, multiplication, etc., the true result is typically a longer binary number which cannot be represented exactly with the number of available bits in the binary number capability of the digital computer. Thus, the results are rounded off in the last available binary bit. This rounding off gives rise to roundoff error, which can accumulate as the number of calculations increases.
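The following minimal sketch (not from the book) illustrates the effect described above: the small difference between two large numbers, evaluated in both 32 bit (single precision) and 64 bit (double precision) arithmetic. Single precision carries only about seven significant decimal digits, so most of the digits of the difference are lost:

      program round
      real a, b
      double precision da, db
      a  = 1000000.1
      b  = 1000000.0
      da = 1000000.1d0
      db = 1000000.0d0
c     the exact difference is 0.1; compare the two computed results
      print *, 'single precision: ', a - b
      print *, 'double precision: ', da - db
      stop
      end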

0.7 SOFTWARE PACKAGES AND LIBRARIES

Numerous commercial software packages and libraries are available for implementing the numerical solution of engineering and scientific problems. Two of the more versatile software packages are Mathcad and Matlab. These software packages, as well as several other packages and several libraries, are listed below with a brief description of each one and references to sources for the software packages and libraries.

A. Software Packages

Excel   Excel is a spreadsheet developed by Microsoft, Inc., as part of Microsoft Office. It enables calculations to be performed on rows and columns of numbers. The calculations to be performed are specified for each column. When any number on the spreadsheet is changed, all of the calculations are updated. Excel contains several built-in numerical algorithms. It also includes the Visual Basic programming language and some plotting capability. Although its basic function is not numerical analysis, Excel can be used productively for many types of numerical problems. Microsoft, Inc. www.microsoft.com/office/Excel.

Macsyma   Macsyma is the world's first artificial intelligence based math engine, providing easy to use, powerful math software for both symbolic and numerical computing. Macsyma, Inc., 20 Academy St., Arlington, MA 02476-6412. (781) 646-4550, webmaster@macsyma.com, www.macsyma.com.

Maple   Maple 6 is a technologically advanced computational system with both algorithms and numeric solvers. Maple 6 includes an extensive set of NAG (Numerical Algorithms Group) solvers for computational linear algebra. Waterloo Maple, Inc., 57 Erb Street W., Waterloo, Ontario, Canada N2L 6C2. (800) 267-6583, (519) 747-2373, info@maplesoft.com, www.maplesoft.com.

Mathematica   Mathematica 4 is a comprehensive software package which performs both symbolic and numeric computations. It includes a flexible and intuitive programming language and comprehensive plotting capabilities. Wolfram Research, Inc., 100 Trade Center Drive, Champaign, IL 61820-7237. (800) 965-3726, (217) 398-0700, info@wolfram.com, www.wolfram.com.

Mathcad   Mathcad 8 provides a free-form interface which permits the integration of real math notation, graphs, and text within a single interactive worksheet. It includes statistical and data analysis functions, powerful solvers, advanced matrix manipulation, and the capability to create your own functions. Mathsoft, Inc., 101 Main Street, Cambridge, MA 02142-1521. (800) 628-4223, (617) 577-1017, info@mathsoft.com, www.mathcad.com.


Matlab   Matlab is an integrated computing environment that combines numeric computation, advanced graphics and visualization, and a high-level programming language. It provides core mathematics and advanced graphics tools for data analysis, visualization, and algorithm and application development, with more than 500 mathematical, statistical, and engineering functions. The Mathworks, Inc., 3 Apple Hill Drive, Natick, MA 01760-2090. (508) 647-7000, info@mathworks.com, www.mathworks.com.

B. Libraries

GAMS   GAMS (Guide to Available Mathematical Software) is a guide to over 9000 software modules contained in some 80 software packages at NIST (National Institute for Standards and Technology) and NETLIB. gams.nist.gov.

IMSL   IMSL (International Mathematics and Statistical Library) is a comprehensive resource of more than 900 FORTRAN subroutines for use in general mathematics and statistical data analysis. Also available in C and Java. Visual Numerics, Inc., 1300 W. Sam Houston Parkway S., Suite 150, Houston, TX 77042. (800) 364-8880, (713) 781-9260, info@houston.vni.com, www.vni.com.

LAPACK   LAPACK is a library of FORTRAN 77 subroutines for solving linear algebra problems and eigenproblems. Individual subroutines can be obtained through NETLIB. The complete package can be obtained from NAG.

NAG   NAG is a mathematical software library that contains over 1000 mathematical and statistical functions. Available in FORTRAN and C. NAG, Inc., 1400 Opus Place, Suite 200, Downers Grove, IL 60515-5702. (630) 971-2337, naginfo@nag.com, www.nag.com.

NETLIB   NETLIB is a large collection of numerical libraries. netlib@research.att.com, netlib@ornl.gov, netlib@nec.no.

C. Numerical Recipes

Numerical Recipes is a book by William H. Press, Brian P. Flannery, Saul A. Teukolsky, and William T. Vetterling. It contains over 300 subroutines for numerical algorithms. Versions of the subroutines are available in FORTRAN, C, Pascal, and Basic. The source codes are available on disk. Cambridge University Press, 40 West 20th Street, New York, NY 10011. www.cup.org.

0.8 THE TAYLOR SERIES AND THE TAYLOR POLYNOMIAL

A power series in powers of x is a series of the form

$$\sum_{n=0}^{\infty} a_n x^n = a_0 + a_1 x + a_2 x^2 + \cdots \tag{0.1}$$

A power series in powers of (x - x_0) is given by

$$\sum_{n=0}^{\infty} a_n (x - x_0)^n = a_0 + a_1 (x - x_0) + a_2 (x - x_0)^2 + \cdots \tag{0.2}$$


Within its radius of convergence, r, any continuous function, f(x), can be represented exactly by a power series. Thus,

$$f(x) = \sum_{n=0}^{\infty} a_n (x - x_0)^n \tag{0.3}$$

is continuous for (x_0 - r) < x < (x_0 + r).

A. Taylor Series in One Independent Variable

If the coefficients, a_n, in Eq. (0.3) are given by the rule

$$a_0 = f(x_0), \quad a_1 = \frac{1}{1!} f'(x_0), \quad a_2 = \frac{1}{2!} f''(x_0), \;\ldots \tag{0.4}$$

then Eq. (0.3) becomes the Taylor series of f(x) at x = x_0. Thus,

$$f(x) = f(x_0) + \frac{1}{1!} f'(x_0)(x - x_0) + \frac{1}{2!} f''(x_0)(x - x_0)^2 + \cdots \tag{0.5}$$

Equation (0.5) can be written in the simpler appearing form

$$f(x) = f_0 + f_0'\,\Delta x + \frac{1}{2!} f_0''\,\Delta x^2 + \cdots + \frac{1}{n!} f_0^{(n)}\,\Delta x^n + \cdots \tag{0.6}$$

where $f_0 = f(x_0)$, $f^{(n)} = d^n f / dx^n$, and $\Delta x = (x - x_0)$. Equation (0.6) can be written in the compact form

$$f(x) = \sum_{n=0}^{\infty} \frac{1}{n!} f_0^{(n)} (x - x_0)^n \tag{0.7}$$

When x_0 = 0, the Taylor series is known as the Maclaurin series. In that case, Eqs. (0.5) and (0.7) become

$$f(x) = f(0) + f'(0)\,x + \frac{1}{2!} f''(0)\,x^2 + \cdots \tag{0.8}$$

$$f(x) = \sum_{n=0}^{\infty} \frac{1}{n!} f^{(n)}(0)\,x^n \tag{0.9}$$

It is, of course, impractical to evaluate an infinite Taylor series term by term. The Taylor series can be written as the finite Taylor series, also known as the Taylor formula or Taylor polynomial with remainder, as follows:

$$f(x) = f(x_0) + f'(x_0)(x - x_0) + \frac{1}{2!} f''(x_0)(x - x_0)^2 + \cdots + \frac{1}{n!} f^{(n)}(x_0)(x - x_0)^n + R^{n+1} \tag{0.10}$$

where the term $R^{n+1}$ is the remainder term, given by

$$R^{n+1} = \frac{1}{(n+1)!} f^{(n+1)}(\xi)\,(x - x_0)^{n+1} \tag{0.11}$$

where ξ lies between x_0 and x. Equation (0.10) is quite useful in numerical analysis, where an approximation of f(x) is obtained by truncating the remainder term.
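As a brief numerical illustration (a sketch, not an example from the book), the following program evaluates the Taylor polynomial of Eq. (0.10) for f(x) = e^x about x_0 = 0 at x = 0.5, adding one term at a time; the difference between the polynomial and the exact value is the truncated remainder:

      program taylor
      integer k
      double precision x, term, poly
      x = 0.5d0
c     start with the n = 0 polynomial, f(x0) = exp(0) = 1
      term = 1.0d0
      poly = 1.0d0
      do 10 k = 1, 4
c       each pass adds the term x**k / k! to the polynomial
        term = term * x / dble (k)
        poly = poly + term
        print *, 'degree', k, '  polynomial =', poly,
     &           '  error =', exp (x) - poly
   10 continue
      stop
      end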

B. Taylor Series in Two Independent Variables

Power series can also be written for functions of more than one independent variable. For a function of two independent variables, f(x, y), the Taylor series of f(x, y) at (x_0, y_0) is given by

$$f(x,y) = f_0 + \left.\frac{\partial f}{\partial x}\right|_0 (x - x_0) + \left.\frac{\partial f}{\partial y}\right|_0 (y - y_0) + \frac{1}{2!}\left( \left.\frac{\partial^2 f}{\partial x^2}\right|_0 (x - x_0)^2 + 2\left.\frac{\partial^2 f}{\partial x\,\partial y}\right|_0 (x - x_0)(y - y_0) + \left.\frac{\partial^2 f}{\partial y^2}\right|_0 (y - y_0)^2 \right) + \cdots \tag{0.12}$$

Equation (0.12) can be written in the general form

$$f(x,y) = \sum_{n=0}^{\infty} \frac{1}{n!} \left[ (x - x_0)\frac{\partial}{\partial x} + (y - y_0)\frac{\partial}{\partial y} \right]^n f(x,y)\Big|_0 \tag{0.13}$$

where the term $(\cdots)^n$ is expanded by the binomial expansion and the resulting expansion operates on the function f(x, y) and is evaluated at (x_0, y_0). The Taylor formula with remainder for a function of two independent variables is obtained by evaluating the derivatives in the (n + 1)st term at the point (ξ, η), where (ξ, η) lies in the region between the points (x_0, y_0) and (x, y).

Basic Tools of Numerical Analysis

I.1. Systems of Linear Algebraic Equations
I.2. Eigenproblems
I.3. Roots of Nonlinear Equations
I.4. Polynomial Approximation and Interpolation
I.5. Numerical Differentiation and Difference Formulas
I.6. Numerical Integration
I.7. Summary

Many different types of algebraic processes are required in engineering and science. These processes include the solution of systems of linear algebraic equations, the solution of eigenproblems, finding the roots of nonlinear equations, polynomial approximation and interpolation, numerical differentiation and difference formulas, and numerical integration. These topics are not only important in their own right, they lay the foundation for the solution of ordinary and partial differential equations, which are discussed in Parts II and III, respectively. Figure I.1 illustrates the types of problems considered in Part I. The objective of Part I is to introduce and discuss the general features of each of these algebraic processes, which are the basic tools of numerical analysis.

I.1 SYSTEMS OF LINEAR ALGEBRAIC EQUATIONS

Systems of equations arise in all branches of engineering and science. These equations may be algebraic, transcendental (i.e., involving trigonometric, logarithmic, exponential, etc., functions), ordinary differential equations, or partial differential equations. The equations may be linear or nonlinear. Chapter 1 is devoted to the solution of systems of linear algebraic equations of the following form:

$$a_{11}x_1 + a_{12}x_2 + a_{13}x_3 + \cdots + a_{1n}x_n = b_1 \tag{I.1a}$$
$$a_{21}x_1 + a_{22}x_2 + a_{23}x_3 + \cdots + a_{2n}x_n = b_2 \tag{I.1b}$$
$$\vdots$$
$$a_{n1}x_1 + a_{n2}x_2 + a_{n3}x_3 + \cdots + a_{nn}x_n = b_n \tag{I.1n}$$


where x_j (j = 1, 2, ..., n) denotes the unknown variables, a_{i,j} (i, j = 1, 2, ..., n) denotes the coefficients of the unknown variables, and b_i (i = 1, 2, ..., n) denotes the nonhomogeneous terms. For the coefficients a_{i,j}, the first subscript i corresponds to equation i, and the second subscript j corresponds to variable x_j. The number of equations can range from two to hundreds, thousands, and even millions.

Systems of linear algebraic equations arise in many different problems, for example, (a) network problems (e.g., electrical networks), (b) fitting approximating functions (see Chapter 4), and (c) systems of finite difference equations that arise in the numerical solution of differential equations (see Chapters 7 to 12). The list is endless. Figure I.1a illustrates a static spring-mass system, whose static equilibrium configuration is governed by a system of linear algebraic equations. That system of equations is used throughout Chapter 1 as an example problem.

[Figure I.1: Basic tools of numerical analysis. (a) Static spring-mass system. (b) Dynamic spring-mass system. (c) Roots of nonlinear equations. (d) Polynomial approximation and interpolation. (e) Numerical differentiation. (f) Numerical integration.]


Systems of linear algebraic equations can be expressed very conveniently in terms of matrix notation. Solution methods can be developed very compactly in terms of matrix notation. Consequently, the elementary properties of matrices and determinants are reviewed at the beginning of Chapter 1.

Two fundamentally different approaches can be used to solve systems of linear algebraic equations:
1. Direct methods
2. Iterative methods

Direct methods are systematic procedures based on algebraic elimination. Several direct elimination methods, for example, Gauss elimination, are presented in Chapter 1. Iterative methods obtain the solution asymptotically by an iterative procedure in which a trial solution is assumed, the trial solution is substituted into the system of equations to determine the mismatch, or error, and an improved solution is obtained from the mismatch data. Several iterative methods, for example, successive-over-relaxation (SOR), are presented in Chapter 1.

The notation, concepts, and procedures presented in Chapter 1 are used throughout the remainder of the book. A solid understanding of systems of linear algebraic equations is essential in numerical analysis.

I.2 EIGENPROBLEMS

Eigenproblems arise in the special case where a system of algebraic equations is homogeneous; that is, the nonhomogeneous terms, b_i in Eq. (I.1), are all zero, and the coefficients contain an unspecified parameter, say λ. In general, when b_i = 0, the only solution to Eq. (I.1) is the trivial solution, x_1 = x_2 = ... = x_n = 0. However, when the coefficients a_{i,j} contain an unspecified parameter, say λ, the value of that parameter can be chosen so that the system of equations is redundant, and an infinite number of solutions exist. The unspecified parameter λ is an eigenvalue of the system of equations. For example,

$$(a_{11} - \lambda)x_1 + a_{12}x_2 = 0 \tag{I.2a}$$
$$a_{21}x_1 + (a_{22} - \lambda)x_2 = 0 \tag{I.2b}$$

is a linear eigenproblem. The value (or values) of λ that make Eqs. (I.2a) and (I.2b) identical are the eigenvalues of Eqs. (I.2). In that case, the two equations are redundant, and the only unique solution is x_1 = x_2 = 0. However, an infinite number of solutions can be obtained by specifying either x_1 or x_2, then calculating the other from either of the two redundant equations. The set of values of x_1 and x_2 corresponding to a particular value of λ is an eigenvector of Eq. (I.2).

Chapter 2 is devoted to the solution of eigenproblems. Eigenproblems arise in the analysis of many physical systems. They arise in the analysis of the dynamic behavior of mechanical, electrical, fluid, thermal, and structural systems. They also arise in the analysis of control systems. Figure I.1b illustrates a dynamic spring-mass system, whose dynamic equilibrium configuration is governed by a system of homogeneous linear algebraic equations. That system of equations is used throughout Chapter 2 as an example problem. When the static equilibrium configuration of the system is disturbed and then allowed to vibrate freely, the system of masses will oscillate at special frequencies, which depend on the values of the masses and the spring constants. These special frequencies are the eigenvalues of the system. The relative values of x_1, x_2, etc., corresponding to each eigenvalue λ are the eigenvectors of the system.

The objectives of Chapter 2 are to introduce the general features of eigenproblems and to present several methods for solving eigenproblems. Eigenproblems are special problems of interest only in themselves. Consequently, an understanding of eigenproblems is not essential to the other concepts presented in this book.
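The values of λ that make the two equations of Eq. (I.2) redundant are those for which the determinant of the coefficients vanishes, that is, (a_11 - λ)(a_22 - λ) - a_12 a_21 = 0, a quadratic in λ. The following minimal sketch (not from the book; the coefficient values are hypothetical) finds the two eigenvalues from that quadratic:

      program eigen2
      double precision a11, a12, a21, a22, b, c, d
      data a11, a12, a21, a22 / 4.0d0, 1.0d0, 2.0d0, 3.0d0 /
c     characteristic polynomial:
c     lambda**2 - (a11 + a22)*lambda + (a11*a22 - a12*a21) = 0
      b = a11 + a22
      c = a11 * a22 - a12 * a21
      d = sqrt (b * b - 4.0d0 * c)
      print *, 'eigenvalues: ', (b + d) / 2.0d0, (b - d) / 2.0d0
      stop
      end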


I.3 ROOTS OF NONLINEAR EQUATIONS

Nonlinear equations arise in many physical problems. Finding their roots, or zeros, is a common problem. The problem can be stated as follows:

Given the continuous nonlinear function f(x), find the value of x = α such that f(α) = 0

where α is the root, or zero, of the nonlinear equation. Figure I.1c illustrates the problem graphically. The function f(x) may be an algebraic function, a transcendental function, the solution of a differential equation, or any nonlinear relationship between an input x and a response f(x). Chapter 3 is devoted to the solution of nonlinear equations.

Nonlinear equations are solved by iterative methods. A trial solution is assumed, the trial solution is substituted into the nonlinear equation to determine the error, or mismatch, and the mismatch is used in some systematic manner to generate an improved estimate of the solution. Several methods for finding the roots of nonlinear equations are presented in Chapter 3. The workhorse methods of choice for solving nonlinear equations are Newton's method and the secant method. A detailed discussion of finding the roots of polynomials is presented. A brief introduction to the problems of solving systems of nonlinear equations is also presented.

Nonlinear equations occur throughout engineering and science. Nonlinear equations also arise in other areas of numerical analysis. For example, the shooting method for solving boundary-value ordinary differential equations, presented in Section 8.3, requires the solution of a nonlinear equation. Implicit methods for solving nonlinear differential equations yield nonlinear difference equations. The solution of such problems is discussed in Sections 7.11, 8.7, 9.11, 10.9, and 11.8. Consequently, a thorough understanding of methods for solving nonlinear equations is an essential requirement for the numerical analyst.
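A minimal sketch (not from the book) of Newton's method mentioned above, x_{i+1} = x_i - f(x_i)/f'(x_i), applied to the illustrative function f(x) = x**2 - 2, whose positive root is the square root of 2:

      program newton
      integer i
      double precision x, f, fp
      x = 1.0d0
      do 10 i = 1, 5
        f  = x * x - 2.0d0
        fp = 2.0d0 * x
        x  = x - f / fp
        print *, 'iteration', i, '  x =', x
   10 continue
      stop
      end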

I.4 POLYNOMIAL APPROXIMATION AND INTERPOLATION

In many problems in engineering and science, the data under consideration are known only at discrete points, not as a continuous function. For example, as illustrated in Figure I.1d, the continuous function f(x) may be known only at n discrete values of x:

$$y_i = y(x_i) \quad (i = 1, 2, \ldots, n) \tag{I.3}$$

Values of the function at points other than the known discrete points may be needed (i.e., interpolation). The derivative of the function at some point may be needed (i.e., differentiation). The integral of the function over some range may be required (i.e., integration). These processes, for discrete data, are performed by fitting an approximating function to the set of discrete data and performing the desired processes on the approximating function. Many types of approximating functions can be used.


Because of their simplicity, ease of manipulation, and ease of evaluation, polynomials are an excellent choice for an approximating function. The general nth-degree polynomial is specified by

$$P_n(x) = a_0 + a_1 x + a_2 x^2 + \cdots + a_n x^n \tag{I.4}$$

Polynomials can be fit to a set of discrete data in two ways:
1. Exact fit
2. Approximate fit

An exact fit passes exactly through all the discrete data points. Direct fit polynomials, divided-difference polynomials, and Lagrange polynomials are presented in Chapter 4 for fitting nonequally spaced data or equally spaced data. Newton difference polynomials are presented for fitting equally spaced data. The least squares procedure is presented for determining approximate polynomial fits. Figure I.1d illustrates the problem of interpolating within a set of discrete data. Procedures for interpolating within a set of discrete data are presented in Chapter 4.

Polynomial approximation is essential for interpolation, differentiation, and integration of sets of discrete data. A good understanding of polynomial approximation is a necessary requirement for the numerical analyst.
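As a brief illustration of an exact fit (a sketch, not from the book; the data are hypothetical samples of f(x) = 1/x), a second-degree Lagrange polynomial of the kind developed in Chapter 4 can be passed through three discrete points and evaluated between them:

      program lagran
      double precision x1, y1, x2, y2, x3, y3, x, p
      data x1, x2, x3 / 2.0d0, 3.0d0, 4.0d0 /
      data y1, y2, y3 / 0.500000d0, 0.333333d0, 0.250000d0 /
      x = 2.5d0
c     second-degree lagrange polynomial through the three points
      p = y1 * (x - x2) * (x - x3) / ((x1 - x2) * (x1 - x3))
     &  + y2 * (x - x1) * (x - x3) / ((x2 - x1) * (x2 - x3))
     &  + y3 * (x - x1) * (x - x2) / ((x3 - x1) * (x3 - x2))
      print *, 'interpolated value at x = 2.5: ', p
      stop
      end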

I.5 NUMERICAL DIFFERENTIATION AND DIFFERENCE FORMULAS

The evaluation of derivatives, a process known as differentiation, is required in many problems in engineering and science. Differentiation of the function f(x) is denoted by

$$\frac{d}{dx}\big(f(x)\big) = f'(x) \tag{I.5}$$

The function f(x) may be a known function or a set of discrete data. In general, known functions can be differentiated exactly. Differentiation of discrete data requires an approximate numerical procedure. Numerical differentiation formulas can be developed by fitting approximating functions (e.g., polynomials) to a set of discrete data and differentiating the approximating function. For polynomial approximating functions, this yields

$$f'(x) \cong P_n'(x) \tag{I.6}$$

Figure I.1e illustrates the problem of numerical differentiation of a set of discrete data. Numerical differentiation procedures are developed in Chapter 5. The approximating polynomial may be fit exactly to a set of discrete data by the methods presented in Chapter 4, or fit approximately by the least squares procedure described in Chapter 4. Several numerical differentiation formulas based on differentiation of polynomials are presented in Chapter 5.

Numerical differentiation formulas also can be developed using Taylor series. This approach is quite useful for developing difference formulas for approximating exact derivatives in the numerical solution of differential equations. Section 5.5 presents a table of difference formulas for use in the solution of differential equations.

Numerical differentiation of discrete data is not required very often. However, the numerical solution of differential equations, which is the subject of Parts II and III, is one of the most important areas of numerical analysis. The use of difference formulas is essential in that application.


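A minimal sketch (not from the book) of a difference formula of the kind tabulated in Section 5.5: the second-order central difference approximation f'(x) = (f(x + h) - f(x - h)) / (2h), applied to the illustrative function f(x) = sin x, whose exact derivative is cos x:

      program differ
      double precision x, h, fp
      x = 1.0d0
      h = 0.01d0
      fp = (sin (x + h) - sin (x - h)) / (2.0d0 * h)
      print *, 'approximate derivative: ', fp
      print *, 'exact derivative:       ', cos (x)
      stop
      end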

I.6 NUMERICAL INTEGRATION

The evaluation of integrals, a process known as integration, or quadrature, is required in many problems in engineering and science. Integration of the function f(x) is denoted by

$$I = \int_a^b f(x)\,dx \tag{I.7}$$

The function f(x) may be a known function or a set of discrete data. Some known functions have an exact integral. Many known functions, however, do not have an exact integral, and an approximate numerical procedure is required to evaluate Eq. (I.7). When a known function is to be integrated numerically, it must first be discretized. Integration of discrete data always requires an approximate numerical procedure. Numerical integration (quadrature) formulas can be developed by fitting approximating functions (e.g., polynomials) to a set of discrete data and integrating the approximating function. For polynomial approximating functions, this gives

$$I = \int_a^b f(x)\,dx \cong \int_a^b P_n(x)\,dx \tag{I.8}$$

Figure I.1f illustrates the problem of numerical integration of a set of discrete data. Numerical integration procedures are developed in Chapter 6. The approximating function can be fit exactly to a set of discrete data by direct fit methods, or fit approximately by the least squares method. For unequally spaced data, direct fit polynomials can be used. For equally spaced data, the Newton forward-difference polynomials of different degrees can be integrated to yield the Newton-Cotes quadrature formulas. The most prominent of these are the trapezoid rule and Simpson's 1/3 rule. Romberg integration, which is a higher-order extrapolation of the trapezoid rule, is introduced. Adaptive integration, in which the range of integration is subdivided automatically until a specified accuracy is obtained, is presented. Gaussian quadrature, which achieves higher-order accuracy for integrating known functions by specifying the locations of the discrete points, is presented. The evaluation of multiple integrals is discussed.

Numerical integration of both known functions and discrete data is a common problem. The concepts involved in numerical integration lead directly to numerical methods for solving differential equations.
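A minimal sketch (not from the book) of the trapezoid rule mentioned above, applied to equally spaced samples of the illustrative function f(x) = x**2 on the interval [0, 1], whose exact integral is 1/3:

      program trap
      integer n, i
      double precision a, b, h, x, s
      a = 0.0d0
      b = 1.0d0
      n = 10
      h = (b - a) / dble (n)
c     endpoint contributions carry a weight of one-half
      s = 0.5d0 * (a * a + b * b)
      do 10 i = 1, n - 1
        x = a + dble (i) * h
        s = s + x * x
   10 continue
      s = s * h
      print *, 'trapezoid rule integral: ', s
      stop
      end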

I.7 SUMMARY

Part I of this book is devoted to the basic tools of numerical analysis. These topics are important in their own right. In addition, they provide the foundation for the solution of ordinary and partial differential equations, which are discussed in Parts II and III, respectively. The material presented in Part I comprises the basic language of numerical analysis. Familiarity and mastery of this material is essential for the understanding and use of more advanced numerical methods.

1
Systems of Linear Algebraic Equations

1.1. Introduction
1.2. Properties of Matrices and Determinants
1.3. Direct Elimination Methods
1.4. LU Factorization
1.5. Tridiagonal Systems of Equations
1.6. Pitfalls of Elimination Methods
1.7. Iterative Methods
1.8. Programs
1.9. Summary
Problems

Examples
1.1. Matrix addition
1.2. Matrix multiplication
1.3. Evaluation of a 3 × 3 determinant by the diagonal method
1.4. Evaluation of a 3 × 3 determinant by the cofactor method
1.5. Cramer's rule
1.6. Elimination
1.7. Simple elimination
1.8. Simple elimination for multiple b vectors
1.9. Elimination with pivoting to avoid zero pivot elements
1.10. Elimination with scaled pivoting to reduce round-off errors
1.11. Gauss-Jordan elimination
1.12. Matrix inverse by Gauss-Jordan elimination
1.13. The matrix inverse method
1.14. Evaluation of a 3 × 3 determinant by the elimination method
1.15. The Doolittle LU method
1.16. Matrix inverse by the Doolittle LU method
1.17. The Thomas algorithm
1.18. Effects of round-off errors
1.19. System condition
1.20. Norms and condition numbers
1.21. The Jacobi iteration method
1.22. The Gauss-Seidel iteration method
1.23. The SOR method

1.1 INTRODUCTION

The static mechanical spring-mass system illustrated in Figure 1.1 consists of three masses m1 to m3, having weights W1 to W3, interconnected by five linear springs K1 to K5. In the configuration illustrated on the left, the three masses are supported by forces F1 to F3 equal to weights W1 to W3, respectively, so that the five springs are in a stable static equilibrium configuration. When the supporting forces F1 to F3 are removed, the masses move downward and reach a new static equilibrium configuration, denoted by x1, x2, and x3, where x1, x2, and x3 are measured from the original locations of the corresponding masses. Free-body diagrams of the three masses are presented at the bottom of Figure 1.1. Performing a static force balance on the three masses yields the following system of three linear algebraic equations:
$$(K_1 + K_2 + K_3)x_1 - K_2 x_2 - K_3 x_3 = W_1 \tag{1.1a}$$
$$-K_2 x_1 + (K_2 + K_4)x_2 - K_4 x_3 = W_2 \tag{1.1b}$$
$$-K_3 x_1 - K_4 x_2 + (K_3 + K_4 + K_5)x_3 = W_3 \tag{1.1c}$$

When values of K1 to K5 and W1 to W3 are specified, the equilibrium displacements x1 to x3 can be determined by solving Eq. (1.1). The static mechanical spring-mass system illustrated in Figure 1.1 is used as the example problem in this chapter to illustrate methods for solving systems of linear algebraic equations.

[Figure 1.1: Static mechanical spring-mass system, with free-body diagrams of the three masses.]
Systems Linear AlgebraicEquations of

19

algebraic equations. For that purpose, let K1 = 40 N/cm, K = K = K = 20 N/cm, and 2 3 4 K = 90N/cm. Let W = W2= W3= 20N. For these values, Eq. (1.1) becomes: 5 1 80xt - 20x2 - 20x3 = 20 -20xl + 40x2 - 20x3 = 20 -20x~ - 20x2 + 130x3 = 20 (1.2a) (1.2b) (1.2c)

The solution to Eq. (1.2) is x~ = 0.6 cm, 2 =1. 0 cm an d x 3= 0.4cm, whic can be , h verified by direct substitution. Systems of equations arise in all branches of engineering and science. These equations maybe algebraic, transcendental (i.e., involving trigonometric, logarithmetic, exponential, etc. functions), ordinary differential equations, or partial differential equations. The equations maybe linear or nonlinear. Chapter 1 is devoted to the solution of systems of linear algebraic equations of the following form: allXt q--al2X q-al3x3 q-... q-alnX = b 2 n1 a21x q-- az2x -+- az3x -~ ... q- a2nX=b 1 2 3 n2 a,~lx~ a,,2x2+an3X3 " " -’}- a,,,,x,~ =b,~ + "~(1.3a) (1.3b) (1.3n)

wherexj (j = 1, 2 ..... n) denotes the unknown variables, ai, j (i,j = 1, 2 ..... n) denotes the constant coefficients of the unknown variables, and bi (i = 1, 2 ..... n) denotes the nonhomogeneous terms. For the coefficients ai,j, the first subscript, i, denotes equation i, and the second subscript, j, denotes variable xj. The number equations can range from of two to hundreds, thousands, and even millions. In the most general case, the number variables is not required to be the sameas of the number equations. However, most practical problems,they are the same. That is of in the case considered in this chapter. Evenwhenthe numberof variables is the sameas the number equations, several solution possibilities exist, as illustrated in Figure 1.2 for the of following system of two linear algebraic equations: a~ix~ + al2x2 = b~ azlx1 q- a22x = b 22 Thefour solution possibilities are: 1. A uniquesolution (a consistent set of equations), as illustrated in Figure 1.2a 2. Nosolution (an inconsistent set of equations), as illustrated in Figure 1.2b of 3. Aninfinite number solutions (a redundantset of equations), as illustrated Figure 1.2c 4. The trivial solution, xj =0 (j = 1,2 ..... n), for a homogeneous of equa set 7 tions, as illustrated in Figure1.2d Chapter t is concernedwith the first case wherea unique solution exists. Systemsof linear algebraic equations arise in many different types of problems,for example: 1. 2. Network problems(e.g., electrical networks) Fitting approximatingfunctions (see Chapter 4) (1.4a) " (1.4b)

20

Chapter1

x2

(a) Unique solution.

(b) Nosolution.

./
(c) Infinite number solutions. of ~(6) Trivial solution. x I Figure 1.2 Solutionof a systemof two linear algebraic equations. 3. Systemsof finite difference equations that arise in the numerical solution of differential equations(see Parts II and III)

Thelist is endless. There are two fundamentally different approaches for solving systems of linear algebraic equations: 1. Direct elimination methods 2. Iterative methods Direct elimination methodsare systematic procedures based on algebraic elimination, whichobtain the solution in a fixed numberof operations. Examples direct elimination of methodsare Gausselimination, Gauss-Jordan elimination, the matrix inverse method, and Doolittle LUfactorization. Iterative methods, on the other hand, obtain the solution asymptoticallyby an iterative procedure. A trial solution is assumed,the trial solution is substituted into the systemof equations to determinethe mismatch,or error, in the trial solution, and an improved solution is obtained from the mismatchdata. Examplesof iterative methodsare Jacobi iteration, Gauss-Seideliteration, and successive-over-relaxation (SOR). Althoughno absolutely rigid rules apply, direct elimination methodsare generally used whenone or more of the following conditions holds: (a) The numberof equations small (100 or less), (b) mostof the coefficients in the equationsare nonzero,(c) the of equations is not diagonally dominant[see Eq. (1.15)], or (d) the systemof equations ill conditioned (see Section 1.6.2). Iterative methods are used when the number equations is large and mostof the coefficients are zero (i.e., a sparse matrix). Iterative methodsgenerally diverge unless the system of equations is diagonally dominant [see Eq. (1.15)1. The organizationof Chapter1 is illustrated in Figure 1.3. Following introductory the material discussed in this section, the properties of matrices and determinants are reviewed. The presentation then splits into a discussion of direct elimination methods

Systems Linear Algebraic Equations of Systems Linear of AlgebraicEquations

21

Properties Matrices of andDeterminants

Direct Methods

Iterative Methods Jacobi Iteration

Gauss Elimination .~ Gauss-Jordan Elimination Matrix Inverse LU Factodzation Determinants Tridiagonal Systems

~

~

__.~ Accuracy and Convergence Successive Overrelaxation

Gauss-Seidel Iteration

P¢ograms

Summary Figure1.3 Organization of Chapter 1. followed by a discussion of iterative methods. Several methods, both direct elimination and iterative, for solving systemsof linear algebraic equationsare presentedin this chapter. Proceduresfor special problems,such as tridiagonal systems of equations, are presented. All these procedures are illustrated by examples. Althoughthe methodsapply to large systems of equations, they are illustrated by applying themto the small system of only three equations given by Eq. (1.2). After the presentation of the methods,three computer programs are presented for implementing the Gauss elimination method, the Thomas algorithm, and successive-over-relaxation (SOR). The chapter closes with a Summary, which discusses somephilosophy to help you choose the right methodfor every problem and lists the things you should be able to do after studying Chapter 1.

1.2 PROPERTIES OF MATRICES AND DETERMINANTS

Systems of linear algebraic equations can be expressed very conveniently in terms of matrix notation. Solution methods for systems of linear algebraic equations can be developed very compactly using matrix algebra. Consequently, the elementary properties of matrices and determinants are presented in this section.

1.2.1. Matrix Definitions

A matrix is a rectangular array of elements (either numbers or symbols), which are arranged in orderly rows and columns. Each element of the matrix is distinct and separate. The location of an element in the matrix is important. Elements of a matrix are generally identified by a double subscripted lowercase letter, ai,j, where the first subscript i identifies the row of the matrix and the second subscript j identifies the column of the matrix. The size of a matrix is specified by the number of rows times the number of columns. A matrix with n rows and m columns is said to be an n by m, or n x m, matrix. Matrices are generally represented by either a boldface capital letter, for example A, the general element enclosed in brackets, for example [ai,j], or the full array of elements, as illustrated in Eq. (1.5):

    A = [ai,j] = [ a1,1  a1,2  ...  a1,m ]
                 [ a2,1  a2,2  ...  a2,m ]    (i = 1, 2, ..., n;  j = 1, 2, ..., m)    (1.5)
                 [ ...                   ]
                 [ an,1  an,2  ...  an,m ]

Comparing Eqs. (1.3) and (1.5) shows that the coefficients of a system of linear algebraic equations form the elements of an n x n matrix.

Equation (1.5) illustrates a convention used throughout this book for simplicity of appearance. When the general element ai,j is considered, the subscripts i and j are separated by a comma. When a specific element is specified, for example a31, the subscripts 3 and 1, which denote the element in row 3 and column 1, will not be separated by a comma, unless i or j is greater than 9. For example, a37 denotes the element in row 3 and column 7, whereas a13,17 denotes the element in row 13 and column 17.

Vectors are a special type of matrix which has only one column or one row. Vectors are represented by either a boldface lowercase letter, for example x or y, the general element enclosed in brackets, for example [xi] or [yj], or the full column or row of elements. A column vector is an n x 1 matrix. Thus,

    x = [xi] = [ x1 ]
               [ x2 ]    (i = 1, 2, ..., n)    (1.6a)
               [ ...]
               [ xn ]

A row vector is a 1 x n matrix. Thus,

    y = [yj] = [ y1  y2  ...  yn ]    (j = 1, 2, ..., n)    (1.6b)

Unit vectors, i, are special vectors which have a magnitude of unity. Thus,

    ||i|| = (i1^2 + i2^2 + ... + in^2)^(1/2) = 1    (1.7)

where the notation ||i|| denotes the length of vector i. Orthogonal systems of unit vectors, in which all of the elements of each unit vector except one are zero, are used to define coordinate systems.

There are several special matrices of interest. A square matrix S is a matrix which has the same number of rows and columns, that is, m = n. For example,

    S = [ a1,1  a1,2  ...  a1,n ]
        [ a2,1  a2,2  ...  a2,n ]    (1.8)
        [ ...                   ]
        [ an,1  an,2  ...  an,n ]

is a square n x n matrix. Our interest will be devoted entirely to square matrices. The left-to-right downward-sloping line of elements from a11 to ann is called the major diagonal of the matrix.

A diagonal matrix D is a square matrix with all elements equal to zero except the elements on the major diagonal. For example,

    D = [ a11   0    0    0  ]
        [  0   a22   0    0  ]    (1.9)
        [  0    0   a33   0  ]
        [  0    0    0   a44 ]

is a 4 x 4 diagonal matrix.

The identity matrix I is a diagonal matrix with unity diagonal elements. The identity matrix is the matrix equivalent of the scalar number unity. The matrix

    I = [ 1 0 0 0 ]
        [ 0 1 0 0 ]    (1.10)
        [ 0 0 1 0 ]
        [ 0 0 0 1 ]

is the 4 x 4 identity matrix.

A triangular matrix is a square matrix in which all of the elements on one side of the major diagonal are zero. The remaining elements may be zero or nonzero. An upper triangular matrix U has all zero elements below the major diagonal. The matrix

    U = [ a11  a12  a13  a14 ]
        [  0   a22  a23  a24 ]    (1.11)
        [  0    0   a33  a34 ]
        [  0    0    0   a44 ]

is a 4 x 4 upper triangular matrix. A lower triangular matrix L has all zero elements above the major diagonal. The matrix

    L = [ a11   0    0    0  ]
        [ a21  a22   0    0  ]    (1.12)
        [ a31  a32  a33   0  ]
        [ a41  a42  a43  a44 ]

is a 4 x 4 lower triangular matrix.

A tridiagonal matrix T is a square matrix in which all of the elements not on the major diagonal and the two diagonals surrounding the major diagonal are zero. The elements on these three diagonals may or may not be zero. The matrix

    T = [ a11  a12   0    0    0  ]
        [ a21  a22  a23   0    0  ]
        [  0   a32  a33  a34   0  ]    (1.13)
        [  0    0   a43  a44  a45 ]
        [  0    0    0   a54  a55 ]

is a 5 x 5 tridiagonal matrix.

A banded matrix B has all zero elements except along particular diagonals. For example,

    B = [ a11  a12   0   a14   0  ]
        [ a21  a22  a23   0   a25 ]
        [  0   a32  a33  a34   0  ]    (1.14)
        [ a41   0   a43  a44  a45 ]
        [  0   a52   0   a54  a55 ]

is a 5 x 5 banded matrix.

The transpose of an n x m matrix A is the m x n matrix, A^T, which has elements a^T(i,j) = a(j,i). The transpose of a column vector is a row vector, and vice versa. Symmetric square matrices have identical corresponding elements on either side of the major diagonal. That is, ai,j = aj,i. In that case, A = A^T.

A sparse matrix is one in which most of the elements are zero. Most large matrices arising in the solution of ordinary and partial differential equations are sparse matrices.

A matrix is diagonally dominant if the absolute value of each element on the major diagonal is equal to, or larger than, the sum of the absolute values of all the other elements in that row, with the diagonal element being larger than the corresponding sum of the other elements for at least one row. Thus, diagonal dominance is defined as

    |ai,i| >= sum over j (j != i) of |ai,j|    (i = 1, ..., n)    (1.15)

with > true for at least one row.

1.2.2. Matrix Algebra

Matrix algebra consists of matrix addition, matrix subtraction, and matrix multiplication. Matrix division is not defined. An analogous operation is accomplished using the matrix inverse.

Matrix addition and subtraction consist of adding or subtracting the corresponding elements of two matrices of equal size. Let A and B be two matrices of equal size. Then,

    A + B = [ai,j] + [bi,j] = [ai,j + bi,j] = [ci,j] = C    (1.16a)
    A - B = [ai,j] - [bi,j] = [ai,j - bi,j] = [ci,j] = C    (1.16b)

Unequal size matrices cannot be added or subtracted. Matrices of the same size are associative on addition. Thus,

    A + (B + C) = (A + B) + C    (1.17)

Matrices of the same size are commutative on addition. Thus,

    A + B = B + A    (1.18)

Example 1.1. Matrix addition.

Add the two 3 x 3 matrices A and B to obtain the 3 x 3 matrix C, where

    A = [ 1  2  3 ]            B = [  3  2  1 ]
        [ 2  1  4 ]    and         [ -4  1  2 ]    (1.19)
        [ 1  4  3 ]                [  2  3 -1 ]

From Eq. (1.16a),

    ci,j = ai,j + bi,j    (1.20)

Thus, c11 = a11 + b11 = 1 + 3 = 4, c12 = a12 + b12 = 2 + 2 = 4, etc. The result is

    A + B = [ (1+3)  (2+2)  (3+1) ]   [  4  4  4 ]
            [ (2-4)  (1+1)  (4+2) ] = [ -2  2  6 ]    (1.21)
            [ (1+2)  (4+3)  (3-1) ]   [  3  7  2 ]

Multiplication of the matrix A by the scalar k consists of multiplying each element of A by k. Thus,

    kA = k[ai,j] = [k ai,j] = [bi,j] = B    (1.22)

Matrix multiplication consists of row-element to column-element multiplication and summation of the resulting products. Multiplication of the two matrices A and B is defined only when the number of columns of matrix A is the same as the number of rows of matrix B. Matrices that satisfy this condition are called conformable in the order AB. Thus, if the size of matrix A is n x m and the size of matrix B is m x r, then

    AB = [ai,j][bi,j] = [ci,j] = C    with    ci,j = sum from k=1 to m of (ai,k bk,j)    (i = 1, 2, ..., n;  j = 1, 2, ..., r)    (1.23)

The size of matrix C is n x r. Matrices that are not conformable cannot be multiplied.

It is easy to make errors when performing matrix multiplication by hand. It is helpful to trace across the rows of A with the left index finger while tracing down the columns of B with the right index finger, multiplying the corresponding elements, and summing the products. Matrix algebra is much better suited to computers than to humans.

Example 1.2. Matrix multiplication.

Multiply the 3 x 3 matrix A and the 3 x 2 matrix B to obtain the 3 x 2 matrix C, where

    A = [ 1  2  3 ]            B = [ 2  1 ]
        [ 2  1  4 ]    and         [ 1  2 ]    (1.24)
        [ 1  4  3 ]                [ 2  1 ]

From Eq. (1.23),

    ci,j = sum from k=1 to 3 of (ai,k bk,j)    (i = 1, 2, 3;  j = 1, 2)    (1.25)

Evaluating Eq. (1.25) yields

    c11 = a11 b11 + a12 b21 + a13 b31 = (1)(2) + (2)(1) + (3)(2) = 10
    c12 = a11 b12 + a12 b22 + a13 b32 = (1)(1) + (2)(2) + (3)(1) = 8
    ...
    c32 = a31 b12 + a32 b22 + a33 b32 = (1)(1) + (4)(2) + (3)(1) = 12

Thus,

    C = [ci,j] = [ 10   8 ]
                 [ 13   8 ]    (1.26)
                 [ 12  12 ]

Multiply the 3 x 2 matrix C by the scalar k = 2 to obtain the 3 x 2 matrix D. From Eq. (1.22), d11 = k c11 = (2)(10) = 20, d12 = k c12 = (2)(8) = 16, etc. The result is

    D = kC = 2C = [ (2)(10)  (2)(8)  ]   [ 20  16 ]
                  [ (2)(13)  (2)(8)  ] = [ 26  16 ]    (1.27)
                  [ (2)(12)  (2)(12) ]   [ 24  24 ]

Matrices that are suitably conformable are associative on multiplication. Thus,

    A(BC) = (AB)C    (1.28)

Square matrices are conformable in either order. Thus, if A and B are n x n matrices,

    AB = C    and    BA = D    (1.29)

where C and D are n x n matrices. However, square matrices in general are not commutative on multiplication. That is, in general,

    AB != BA    (1.30)

Matrices A, B, and C are distributive if B and C are the same size and A is conformable to B and C. Thus,

    A(B + C) = AB + AC    (1.31)

Consider the two square matrices A and B. Multiplying yields

    AB = C    (1.32)

It might appear logical that the inverse operation of multiplication, that is, division, would give

    A = C/B    (1.33)

Unfortunately, matrix division is not defined. However, for square matrices, an analogous concept is provided by the matrix inverse.
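The hand calculations in Examples 1.1 and 1.2 are easy to check on a computer. The following Fortran 90 sketch (an illustrative aside, not one of the chapter's program listings; the program name is hypothetical) builds the matrices of Eqs. (1.19) and (1.24) and verifies the sum, the product, and a scalar multiple using the intrinsic matmul function.

program matrix_algebra_demo
  implicit none
  real :: a(3,3), b(3,3), p(3,2), c(3,2)
  ! order=[2,1] lets the matrices be listed row by row
  a = reshape([1., 2., 3.,   2., 1., 4.,   1., 4., 3.], [3,3], order=[2,1])
  b = reshape([3., 2., 1.,  -4., 1., 2.,   2., 3.,-1.], [3,3], order=[2,1])
  p = reshape([2., 1.,   1., 2.,   2., 1.], [3,2], order=[2,1])
  write (*,'(3f6.1)') transpose(a + b)   ! Example 1.1: rows 4 4 4 / -2 2 6 / 3 7 2
  c = matmul(a, p)                       ! Example 1.2: c(i,j) = sum_k a(i,k)*p(k,j)
  write (*,'(2f6.1)') transpose(c)       ! rows 10 8 / 13 8 / 12 12
  write (*,'(2f6.1)') transpose(2.0*c)   ! scalar multiple, Eq. (1.27): 20 16 / 26 16 / 24 24
end program matrix_algebra_demo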

Consider the two square matrices A and B. If AB = I, then B is the inverse of A, which is denoted as A^-1. Matrix inverses commute on multiplication. Thus,

    A A^-1 = A^-1 A = I    (1.36)

The operation desired by Eq. (1.33) can be accomplished using the matrix inverse. Thus, the inverse of the matrix multiplication specified by Eq. (1.32) is given by

    A = C B^-1    (1.35)

Procedures for evaluating the inverse of a square matrix are presented in Examples 1.12 and 1.16. Not all matrices have inverses; that point is discussed further in Section 1.3.5.

Matrix factorization refers to the representation of a matrix as the product of two other matrices. Thus, a known matrix A can be represented as the product of two unknown matrices B and C:

    A = BC    (1.37)

Factorization is not a unique process. There are, in general, an infinite number of matrices B and C whose product is A. A particularly useful factorization for square matrices is

    A = LU    (1.38)

where L and U are lower and upper triangular matrices, respectively. The LU factorization method for solving systems of linear algebraic equations, which is presented in Section 1.4, is based on such a factorization.

A matrix can be partitioned by grouping the elements of the matrix into submatrices. These submatrices can then be treated as elements of a smaller matrix. To ensure that the operations of matrix algebra can be applied to the submatrices of two partitioned matrices, the partitioning is generally into square submatrices of equal size. Matrix partitioning is especially convenient when solving systems of algebraic equations that arise in the finite difference solution of systems of differential equations.

1.2.3. Systems of Linear Algebraic Equations

Systems of linear algebraic equations, such as Eq. (1.2), can be expressed very compactly in matrix notation. Thus, Eq. (1.3) can be written as the matrix equation

    Ax = b    (1.39)

where

    A = [ a1,1  a1,2  ...  a1,n ]
        [ a2,1  a2,2  ...  a2,n ]    (1.40)
        [ ...                   ]
        [ an,1  an,2  ...  an,n ]

and x and b are column vectors. Equation (1.3) can also be written as

    sum from j=1 to n of (ai,j xj) = bi    (i = 1, ..., n)    (1.41)

or equivalently as

    ai,j xj = bi    (i, j = 1, ..., n)    (1.42)

where the summation convention holds, that is, the repeated index j in Eq. (1.42) is summed over its range, 1 to n. Equation (1.39) will be used throughout this book to represent a system of linear algebraic equations.

There are three so-called row operations that are useful when solving systems of linear algebraic equations. They are:

1. Any row (equation) may be multiplied by a constant (a process known as scaling).
2. The order of the rows (equations) may be interchanged (a process known as pivoting).
3. Any row (equation) can be replaced by a weighted linear combination of that row (equation) with any other row (equation) (a process known as elimination).

In the context of the solution of a system of linear algebraic equations, these three row operations clearly do not change the solution. The appearance of the system of equations is obviously changed by any of these row operations, but the solution is unaffected. When solving systems of linear algebraic equations expressed in matrix notation, these row operations apply to the rows of the matrices representing the system of linear algebraic equations.

1.2.4. Determinants

The term determinant of a square matrix A, denoted det(A) or |A|, refers to both the collection of the elements of the square matrix, enclosed in vertical lines, and the scalar value represented by that array. Thus,

    det(A) = |A| = | a1,1  a1,2  ...  a1,n |
                   | a2,1  a2,2  ...  a2,n |    (1.43)
                   | ...                   |
                   | an,1  an,2  ...  an,n |

Only square matrices have determinants.

The scalar value of the determinant of a 2 x 2 matrix is the product of the elements on the major diagonal minus the product of the elements on the minor diagonal. Thus,

    det(A) = |A| = | a11  a12 | = a11 a22 - a21 a12    (1.44)
                   | a21  a22 |

The scalar value of the determinant of a 3 x 3 matrix is composed of the sum of six triple products which can be obtained from the augmented determinant:

    | a11  a12  a13 |  a11  a12
    | a21  a22  a23 |  a21  a22    (1.45)
    | a31  a32  a33 |  a31  a32

The 3 x 3 determinant is augmented by repeating the first two columns of the determinant on the right-hand side of the determinant. Three triple products are formed,

starting with the elements of the first row multiplied by the two remaining elements on the right-downward-sloping diagonals. Three more triple products are formed, starting with the elements of the third row multiplied by the two remaining elements on the right-upward-sloping diagonals. The value of the determinant is the sum of the first three triple products minus the sum of the last three triple products. Thus,

    det(A) = |A| = a11 a22 a33 + a12 a23 a31 + a13 a21 a32
                 - a31 a22 a13 - a32 a23 a11 - a33 a21 a12    (1.46)

Example 1.3. Evaluation of a 3 x 3 determinant by the diagonal method.

Let's evaluate the determinant of the coefficient matrix of Eq. (1.2) by the diagonal method. Thus,

    A = [  80  -20  -20 ]
        [ -20   40  -20 ]    (1.47)
        [ -20  -20  130 ]

The augmented determinant is

    |  80  -20  -20 |  80  -20
    | -20   40  -20 | -20   40    (1.48)
    | -20  -20  130 | -20  -20

Applying Eq. (1.46) yields

    |A| = (80)(40)(130) + (-20)(-20)(-20) + (-20)(-20)(-20)
        - (-20)(40)(-20) - (-20)(-20)(80) - (130)(-20)(-20)
        = 416,000 - 8,000 - 8,000 - 16,000 - 32,000 - 52,000 = 300,000    (1.49)

The diagonal method of evaluating determinants applies only to 2 x 2 and 3 x 3 determinants. It is incorrect for 4 x 4 or larger determinants. In general, the expansion of an n x n determinant is the sum of all possible products formed by choosing one and only one element from each row and each column of the determinant, with a plus or minus sign determined by the number of permutations of the row and column elements. One formal procedure for evaluating determinants is called expansion by minors, or the method of cofactors. In this procedure there are n! products to be summed, where each product has n elements. Thus, the expansion of a 10 x 10 determinant requires the summation of 10! products (10! = 3,628,800), where each product involves 9 multiplications (the product of 10 elements). This is a total of 32,659,200 multiplications and 3,628,799 additions, not counting the work needed to keep track of the signs. Consequently, the evaluation of determinants by the method of cofactors is impractical, except for very small determinants.

Although the method of cofactors is not recommended for anything larger than a 4 x 4 determinant, it is useful to understand the concepts involved. The minor Mi,j is the determinant of the (n-1) x (n-1) submatrix of the n x n matrix A obtained by deleting the ith row and the jth column. The cofactor Ai,j associated with the minor Mi,j is defined as

    Ai,j = (-1)^(i+j) Mi,j    (1.50)
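As a numerical check on Example 1.3, the six triple products of Eq. (1.46) can be summed directly. A minimal Fortran 90 sketch (illustrative only; the matrix is symmetric, so the column-major fill below lists it correctly):

program diagonal_method_demo
  implicit none
  real :: a(3,3), det
  a = reshape([80.,-20.,-20.,  -20.,40.,-20.,  -20.,-20.,130.], [3,3])
  ! Eq. (1.46): downward-sloping triple products minus upward-sloping ones
  det = a(1,1)*a(2,2)*a(3,3) + a(1,2)*a(2,3)*a(3,1) + a(1,3)*a(2,1)*a(3,2) &
      - a(3,1)*a(2,2)*a(1,3) - a(3,2)*a(2,3)*a(1,1) - a(3,3)*a(2,1)*a(1,2)
  write (*,'(a,f10.1)') 'det(A) = ', det   ! expect 300000.0
end program diagonal_method_demo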

Using cofactors, the determinant of matrix A is the sum of the products of the elements of any row or column, multiplied by their corresponding cofactors. Thus, expanding across any fixed row i yields

    det(A) = |A| = sum from j=1 to n of (ai,j Ai,j) = sum from j=1 to n of ((-1)^(i+j) ai,j Mi,j)    (1.51)

Alternatively, expanding down any fixed column j yields

    det(A) = |A| = sum from i=1 to n of (ai,j Ai,j) = sum from i=1 to n of ((-1)^(i+j) ai,j Mi,j)    (1.52)

Each cofactor expansion reduces the order of the determinant by one, so there are n determinants of order n-1 to evaluate. By repeated application, the cofactors are eventually reduced to 3 x 3 determinants, which can be evaluated by the diagonal method. The amount of work can be reduced by choosing the expansion row or column with as many zeros as possible.

Example 1.4. Evaluation of a 3 x 3 determinant by the cofactor method.

Let's rework Example 1.3 using the cofactor method. Recall Eq. (1.47):

    A = [  80  -20  -20 ]
        [ -20   40  -20 ]    (1.53)
        [ -20  -20  130 ]

Evaluate |A| by expanding across the first row. Thus,

    |A| = (80) |  40  -20 | - (-20) | -20  -20 | + (-20) | -20   40 |
               | -20  130 |         | -20  130 |         | -20  -20 |    (1.54)

    |A| = 80(5200 - 400) - (-20)(-2600 - 400) + (-20)(400 + 800)
        = 384,000 - 60,000 - 24,000 = 300,000    (1.55)

If the value of the determinant of a matrix is zero, the matrix is said to be singular. A nonsingular matrix has a determinant that is nonzero. If any row or column of a matrix has all zero elements, that matrix is singular.

The determinant of a triangular matrix, either upper or lower triangular, is the product of the elements on the major diagonal. It is possible to transform any nonsingular matrix into a triangular matrix in such a way that the value of the determinant is either unchanged or changed in a well-defined way. That procedure is presented in Section 1.3.6. The value of the determinant can then be evaluated quite easily as the product of the elements on the major diagonal.
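Expansion by cofactors translates directly into a recursive routine. The sketch below is a hypothetical illustration of Eq. (1.51) with i = 1; its n! cost makes it impractical beyond small n, exactly as noted above.

program cofactor_demo
  implicit none
  real :: a(3,3)
  a = reshape([80.,-20.,-20.,  -20.,40.,-20.,  -20.,-20.,130.], [3,3])
  write (*,'(a,f10.1)') 'det(A) = ', det(a)   ! expect 300000.0
contains
  recursive function det(m) result(d)
    real, intent(in) :: m(:,:)
    real, allocatable :: minor(:,:)
    real :: d
    integer :: n, j
    n = size(m, 1)
    if (n == 1) then
      d = m(1,1)
      return
    end if
    d = 0.0
    allocate(minor(n-1, n-1))
    do j = 1, n
      ! minor M(1,j): delete row 1 and column j
      minor(:, 1:j-1) = m(2:n, 1:j-1)
      minor(:, j:n-1) = m(2:n, j+1:n)
      d = d + (-1.0)**(1+j) * m(1,j) * det(minor)
    end do
    deallocate(minor)
  end function det
end program cofactor_demo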

1.3 DIRECT ELIMINATION METHODS

There are a number of methods for the direct solution of systems of linear algebraic equations. One of the more well-known methods is Cramer's rule, which requires the evaluation of numerous determinants. Cramer's rule is highly inefficient, and thus not recommended. More efficient methods, based on the elimination concept, are recommended. Both Cramer's rule and elimination methods are presented in this section. After presenting Cramer's rule, the elimination concept is applied to develop Gauss elimination, Gauss-Jordan elimination, matrix inversion, and determinant evaluation. These concepts are extended to LU factorization and tridiagonal systems of equations in Sections 1.4 and 1.5, respectively.

1.3.1. Cramer's Rule

Although it is not an elimination method, Cramer's rule is a direct method for solving systems of linear algebraic equations. Consider the system of linear algebraic equations, Ax = b, which represents n equations. Cramer's rule states that the solution for xj (j = 1, ..., n) is given by

    xj = det(A^j) / det(A)    (j = 1, ..., n)    (1.56)

where A^j is the n x n matrix obtained by replacing column j in matrix A by the column vector b. For example, consider the system of two linear algebraic equations:

    a11 x1 + a12 x2 = b1    (1.57a)
    a21 x1 + a22 x2 = b2    (1.57b)

Applying Cramer's rule yields

    x1 = | b1  a12 | / | a11  a12 |    and    x2 = | a11  b1 | / | a11  a12 |    (1.58)
         | b2  a22 |   | a21  a22 |               | a21  b2 |   | a21  a22 |

The determinants in Eqs. (1.58) can be evaluated by the diagonal method described in Section 1.2.4.

For systems containing more than three equations, the diagonal method presented in Section 1.2.4 does not work. In such cases, the method of cofactors presented in Section 1.2.4 could be used. The number of multiplications and divisions N required by the method of cofactors is N = (n-1)(n+1)!, which is an enormous number of calculations. For a relatively small system of 10 equations (i.e., n = 10), N = 360,000,000, which is obviously ridiculous. For n = 100, N = 10^157. The preferred method for evaluating determinants is the elimination method presented in Section 1.3.6. The number of multiplications and divisions required by the elimination method is approximately N = n^3 + n^2 - n. For n = 10, N = 1,090, and for n = 100, N = 1,009,900. Obviously, the elimination method is preferred.

Example 1.5. Cramer's rule.

Let's illustrate Cramer's rule by solving Eq. (1.2). Thus,

    80 x1 - 20 x2 - 20 x3 = 20    (1.59a)
    -20 x1 + 40 x2 - 20 x3 = 20    (1.59b)
    -20 x1 - 20 x2 + 130 x3 = 20    (1.59c)

First, calculate det(A). From Example 1.3,

    det(A) = |  80  -20  -20 |
             | -20   40  -20 | = 300,000    (1.60)
             | -20  -20  130 |

Next, calculate det(A^1), det(A^2), and det(A^3). For det(A^1),

    det(A^1) = | 20  -20  -20 |
               | 20   40  -20 | = 180,000    (1.61)
               | 20  -20  130 |

In a similar manner, det(A^2) = 300,000 and det(A^3) = 120,000. Thus,

    x1 = det(A^1)/det(A) = 180,000/300,000 = 0.60
    x2 = 300,000/300,000 = 1.00    (1.62)
    x3 = 120,000/300,000 = 0.40

1.3.2. Elimination Methods

Elimination methods solve a system of linear algebraic equations by solving one equation, say the first equation, for one of the unknowns, say x1, in terms of the remaining unknowns, x2 to xn, then substituting the expression for x1 into the remaining n-1 equations to determine n-1 equations involving x2 to xn only. This elimination procedure is performed n-1 times until the last step yields an equation involving only xn. This process is called elimination.

The value of xn can be calculated from the final equation in the elimination procedure. Then x(n-1) can be calculated from modified equation n-1, which contains only xn and x(n-1). Then x(n-2) can be calculated from modified equation n-2, which contains only xn, x(n-1), and x(n-2). This procedure is performed n-1 times to calculate x(n-1) to x1. This process is called back substitution.

1.3.2.1. Row Operations

The elimination process employs the row operations presented in Section 1.2.3, which are repeated below:

1. Any row (equation) may be multiplied by a constant (scaling).
2. The order of the rows (equations) may be interchanged (pivoting).
3. Any row (equation) can be replaced by a weighted linear combination of that row (equation) with any other row (equation) (elimination).

These row operations, which change the values of the elements of matrix A and b, do not change the solution x to the system of equations. The first row operation is used to scale the equations, if necessary. The second row operation is used to prevent divisions by zero and to reduce round-off errors. The third row operation is used to implement the systematic elimination process described above.

1.3.2.2. Elimination

Let's illustrate the elimination method by solving Eq. (1.2). Thus,

    80 x1 - 20 x2 - 20 x3 = 20    (1.63a)
    -20 x1 + 40 x2 - 20 x3 = 20    (1.63b)
    -20 x1 - 20 x2 + 130 x3 = 20    (1.63c)

Solve Eq. (1.63a) for x1. Thus,

    x1 = [20 - (-20)x2 - (-20)x3]/80    (1.64)

Substituting Eq. (1.64) into Eq. (1.63b) gives

    -20{[20 - (-20)x2 - (-20)x3]/80} + 40 x2 - 20 x3 = 20    (1.65)

which can be simplified to give

    35 x2 - 25 x3 = 25    (1.66)

Substituting Eq. (1.64) into Eq. (1.63c) gives

    -20{[20 - (-20)x2 - (-20)x3]/80} - 20 x2 + 130 x3 = 20    (1.67)

which can be simplified to give

    -25 x2 + 125 x3 = 25    (1.68)

Next solve Eq. (1.66) for x2. Thus,

    x2 = [25 - (-25)x3]/35    (1.69)

Substituting Eq. (1.69) into Eq. (1.68) gives

    -25{[25 - (-25)x3]/35} + 125 x3 = 25    (1.70)

which can be simplified to give

    (750/7) x3 = 300/7    (1.71)

Thus, Eq. (1.63) has been reduced to the upper triangular system

    80 x1 - 20 x2 - 20 x3 = 20    (1.72a)
    35 x2 - 25 x3 = 25    (1.72b)
    (750/7) x3 = 300/7    (1.72c)

which is equivalent to the original equation. This completes the elimination process.

1.3.2.3. Back Substitution

The solution to Eq. (1.72) is accomplished easily by back substitution. Starting with Eq. (1.72c) and working backward yields

    x3 = 300/750 = 0.40    (1.73a)
    x2 = [25 - (-25)(0.40)]/35 = 1.00    (1.73b)
    x1 = [20 - (-20)(1.00) - (-20)(0.40)]/80 = 0.60    (1.73c)
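Before moving on, the Cramer's rule calculation of Example 1.5 can be checked numerically. A minimal Fortran 90 sketch (illustrative only; the 3 x 3 determinant is hard-wired as a cofactor expansion):

program cramer_demo
  implicit none
  real :: a(3,3), aj(3,3), b(3), x(3)
  integer :: j
  a = reshape([80.,-20.,-20.,  -20.,40.,-20.,  -20.,-20.,130.], [3,3])
  b = [20., 20., 20.]
  do j = 1, 3
    aj = a
    aj(:, j) = b                    ! A^j: column j replaced by b, Eq. (1.56)
    x(j) = det3(aj) / det3(a)
  end do
  write (*,'(3f6.2)') x             ! expect 0.60 1.00 0.40
contains
  pure function det3(m) result(d)   ! cofactor expansion across row 1
    real, intent(in) :: m(3,3)
    real :: d
    d = m(1,1)*(m(2,2)*m(3,3) - m(2,3)*m(3,2)) &
      - m(1,2)*(m(2,1)*m(3,3) - m(2,3)*m(3,1)) &
      + m(1,3)*(m(2,1)*m(3,2) - m(2,2)*m(3,1))
  end function det3
end program cramer_demo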

Example 1.6. Elimination.

Let's solve Eq. (1.2) by elimination. Recall Eq. (1.2):

    80 x1 - 20 x2 - 20 x3 = 20    (1.74a)
    -20 x1 + 40 x2 - 20 x3 = 20    (1.74b)
    -20 x1 - 20 x2 + 130 x3 = 20    (1.74c)

Elimination involves normalizing the equation above the element to be eliminated by the element immediately above the element to be eliminated, which is called the pivot element, multiplying the normalized equation by the element to be eliminated, and subtracting the result from the equation containing the element to be eliminated. This process systematically eliminates terms below the major diagonal, column by column, as illustrated below. The notation Ri - (em)Rj next to the ith equation indicates that the ith equation is to be replaced by the ith equation minus em times the jth equation, where the elimination multiplier, em, is the quotient of the element to be eliminated and the pivot element.

All of the coefficients below the major diagonal in the first column are eliminated by linear combinations of each equation with the first equation. Thus,

    |  80 x1 - 20 x2 -  20 x3 = 20 |
    | -20 x1 + 40 x2 -  20 x3 = 20 |  R2 - (-20/80)R1    (1.75)
    | -20 x1 - 20 x2 + 130 x3 = 20 |  R3 - (-20/80)R1

The result of this first elimination step is presented in Eq. (1.76), which also shows the elimination operation for the second elimination step. The coefficients below the major diagonal in the second column are eliminated by linear combinations with the second equation. Thus,

    | 80 x1 - 20 x2 -  20 x3 = 20 |
    |  0 x1 + 35 x2 -  25 x3 = 25 |    (1.76)
    |  0 x1 - 25 x2 + 125 x3 = 25 |  R3 - (-25/35)R2

This process is continued until all the coefficients below the major diagonal are eliminated. In the present example with three equations, this process is now complete. The final result is presented in Eq. (1.77):

    | 80 x1 - 20 x2 - 20 x3 = 20 |
    |  0 x1 + 35 x2 - 25 x3 = 25 |    (1.77)
    |  0 x1 +  0 x2 + (750/7) x3 = 300/7 |

This is the process of elimination. At this point, the last equation contains only one unknown, x3 in the present example, which can be solved for. Using that result, the next to last equation can be solved

for x2. Using the results for x3 and x2, the first equation can be solved for x1. This is the back substitution process. Thus,

    x3 = 300/750 = 0.40    (1.78a)
    x2 = [25 - (-25)(0.40)]/35 = 1.00    (1.78b)
    x1 = [20 - (-20)(1.00) - (-20)(0.40)]/80 = 0.60    (1.78c)

The extension of the elimination procedure to n equations is straightforward.

1.3.2.4. Simple Elimination

The elimination procedure illustrated in Example 1.6 involves manipulation of the coefficient matrix A and the nonhomogeneous vector b. Components of the x vector are fixed in their locations in the set of equations. As long as the columns are not interchanged, column j corresponds to xj. Consequently, the xj notation does not need to be carried throughout the operations. Only the numerical elements of A and b need to be considered. Thus, the elimination procedure can be simplified by augmenting the A matrix with the b vector and performing the row operations on the elements of the augmented A matrix to accomplish the elimination process, then performing the back substitution process to determine the solution vector. This simplified elimination procedure is illustrated in Example 1.7.

Example 1.7. Simple elimination.

Let's rework Example 1.6 using simple elimination. From Example 1.6, the A matrix augmented by the b vector is

    [A|b] = [  80  -20  -20 | 20 ]
            [ -20   40  -20 | 20 ]    (1.79)
            [ -20  -20  130 | 20 ]

Performing the row operations to accomplish the elimination process yields

    [  80  -20  -20 | 20 ]
    [ -20   40  -20 | 20 ]  R2 - (-20/80)R1    (1.80)
    [ -20  -20  130 | 20 ]  R3 - (-20/80)R1

    [ 80  -20  -20 | 20 ]
    [  0   35  -25 | 25 ]    (1.81)
    [  0  -25  125 | 25 ]  R3 - (-25/35)R2

    [ 80  -20  -20    | 20    ]       x1 = [20 - (-20)(1.00) - (-20)(0.40)]/80 = 0.60
    [  0   35  -25    | 25    ]  -->  x2 = [25 - (-25)(0.40)]/35 = 1.00    (1.82)
    [  0    0   750/7 | 300/7 ]       x3 = 300/750 = 0.40

The back substitution step is presented beside the triangularized augmented A matrix.
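The simple elimination procedure of Example 1.7 maps almost line for line onto array operations. The following Fortran 90 sketch (an illustration without pivoting, so it is safe only for systems like Eq. (1.79) whose pivots never vanish) triangularizes the augmented matrix and back-substitutes:

program simple_elimination_demo
  implicit none
  real :: ab(3,4), em, x(3)
  integer :: i, k
  ! augmented matrix [A|b] of Eq. (1.79), listed row by row
  ab = reshape([ 80.,-20.,-20.,20.,  -20.,40.,-20.,20.,  -20.,-20.,130.,20. ], &
               [3,4], order=[2,1])
  do k = 1, 2                             ! eliminate below the pivot, column by column
    do i = k+1, 3
      em = ab(i,k) / ab(k,k)              ! elimination multiplier em = a(i,k)/a(k,k)
      ab(i,k:) = ab(i,k:) - em*ab(k,k:)   ! R_i - (em)R_k
    end do
  end do
  do i = 3, 1, -1                         ! back substitution, Eq. (1.78)
    x(i) = (ab(i,4) - dot_product(ab(i,i+1:3), x(i+1:3))) / ab(i,i)
  end do
  write (*,'(3f6.2)') x                   ! expect 0.60 1.00 0.40
end program simple_elimination_demo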

1.3.2.5. Multiple b Vectors

If more than one b vector is to be considered, the A matrix is simply augmented by all of the b vectors simultaneously. The elimination process is then applied to the multiply augmented A matrix. Back substitution is then applied one column at a time to the modified b vectors.

Example 1.8. Simple elimination for multiple b vectors.

Consider the system of equations presented in Example 1.7 with two b vectors, b1^T = [20 20 20] and b2^T = [20 10 20]. The doubly augmented A matrix is

    [A|b1 b2] = [  80  -20  -20 | 20 | 20 ]
                [ -20   40  -20 | 20 | 10 ]    (1.83)
                [ -20  -20  130 | 20 | 20 ]

Performing the elimination process yields

    [ 80  -20  -20    | 20    | 20    ]
    [  0   35  -25    | 25    | 15    ]    (1.84)
    [  0    0   750/7 | 300/7 | 250/7 ]

Performing the back substitution process one column at a time yields

    x1 = [ 0.60 ]             x2 = [ 1/2 ]
         [ 1.00 ]    and           [ 2/3 ]    (1.85)
         [ 0.40 ]                  [ 1/3 ]

1.3.2.6. Pivoting

The element on the major diagonal is called the pivot element. The elimination procedure described so far fails immediately if the first pivot element a11 is zero. The procedure also fails if any subsequent pivot element ai,i is zero. Even though there may be no zeros on the major diagonal in the original matrix, the elimination process may create zeros on the major diagonal. The simple elimination procedure described so far must be modified to avoid zeros on the major diagonal. This result can be accomplished by rearranging the equations, by interchanging equations (rows) or variables (columns), before each elimination step to put the element of largest magnitude on the diagonal. This process is called pivoting. Interchanging both rows and columns is called full pivoting. Full pivoting is quite complicated, and thus it is rarely used. Interchanging only rows is called partial pivoting. Only partial pivoting is considered in this book.

Pivoting eliminates zeros in the pivot element locations during the elimination process. Pivoting also reduces round-off errors, since the pivot element is a divisor during the elimination process, and division by large numbers introduces smaller round-off errors than division by small numbers. When the procedure is repeated, round-off errors can compound. This problem becomes more severe as the number of equations is increased.

Example 1.9. Elimination with pivoting to avoid zero pivot elements.

Use simple elimination with partial pivoting to solve the following system of linear algebraic equations, Ax = b:

    [  0  2   1 ] [x1]   [  5 ]
    [  4  1  -1 ] [x2] = [ -3 ]    (1.86)
    [ -2  3  -3 ] [x3]   [  5 ]

Let's apply the elimination procedure by augmenting A with b. The first pivot element is zero, so pivoting is required. The largest number (in magnitude) in the first column under the pivot element occurs in the second row. Thus, interchanging the first and second rows and evaluating the elimination multipliers yields

    [  4  1  -1 | -3 ]
    [  0  2   1 |  5 ]  R2 - (0/4)R1    (1.87)
    [ -2  3  -3 |  5 ]  R3 - (-2/4)R1

Performing the elimination operations yields

    [ 4   1     -1   | -3  ]
    [ 0   2      1   |  5  ]    (1.88)
    [ 0  7/2  -7/2   | 7/2 ]

Although the pivot element in the second row is not zero, it is not the largest element in the second column underneath the pivot element. Thus, pivoting is called for again. Note that pivoting is based only on the rows below the pivot element. The rows above the pivot element have already been through the elimination process. Using one of the rows above the pivot element would destroy the elimination already accomplished. Interchanging the second and third rows and evaluating the elimination multiplier yields

    [ 4   1    -1  | -3  ]
    [ 0  7/2 -7/2  | 7/2 ]    (1.89)
    [ 0   2    1   |  5  ]  R3 - (4/7)R2

Performing the elimination operation yields

    [ 4   1    -1  | -3  ]       x1 = -1
    [ 0  7/2 -7/2  | 7/2 ]  -->  x2 = 2    (1.90)
    [ 0   0    3   |  3  ]       x3 = 1

The back substitution results are presented beside the triangularized augmented A matrix.

1.3.2.7. Scaling

The elimination process described so far can incur significant round-off errors when the magnitudes of the pivot elements are smaller than the magnitudes of the other elements in the equations containing the pivot elements. In such cases, scaling is employed to select the pivot elements. Scaling is employed only to select the pivot elements. All calculations should be made with the original unscaled equations.

Scaled pivoting is implemented as follows. Before elimination is applied to the first column, all of the elements in the first column are scaled (i.e., normalized) by the largest elements in the corresponding rows. Pivoting is implemented based on the scaled elements in the first column, and elimination is applied to obtain zero elements in the first column below the pivot element. Before elimination is applied to the second column, all of the elements from 2 to n in column 2 are scaled, pivoting is implemented, and elimination is applied to obtain zero elements in column 2 below the pivot element. The procedure is applied to the remaining columns 3 to n-1. Back substitution is then applied to obtain x.

Example 1.10. Elimination with scaled pivoting to reduce round-off errors.

Let's investigate the advantage of scaling by solving the following linear system:

    [ 3   2  105 ] [x1]   [ 104 ]
    [ 2  -3  103 ] [x2] = [  98 ]    (1.91)
    [ 1   1    3 ] [x3]   [   3 ]

which has the exact solution x1 = -1.0, x2 = 1.0, and x3 = 1.0. To accentuate the effects of round-off, carry only three significant figures in the calculations. For the first column, pivoting does not appear to be required. Thus, the augmented A matrix and the first set of row operations are given by

    [ 3   2  105 | 104 ]
    [ 2  -3  103 |  98 ]  R2 - (0.667)R1    (1.92)
    [ 1   1    3 |   3 ]  R3 - (0.333)R1

which gives

    [ 3   2      105   | 104   ]
    [ 0  -4.33    33.0 |  28.6 ]    (1.93)
    [ 0   0.334  -32.0 | -31.6 ]  R3 - (-0.0771)R2

Pivoting is not required for the second column. Performing the elimination indicated in Eq. (1.93) yields the triangularized matrix

    [ 3   2     105   | 104   ]
    [ 0  -4.33   33.0 |  28.6 ]    (1.94)
    [ 0   0     -29.5 | -29.4 ]

Performing back substitution yields x3 = 0.997 and correspondingly degraded values of x2 and x1, which do not agree very well with the exact solution x1 = -1.0, x2 = 1.0, and x3 = 1.0. Round-off errors due to the three-digit precision have polluted the solution.

The effects of round-off can be reduced by scaling the equations before pivoting. Since scaling itself introduces round-off, it should be used only to determine if pivoting is required. All calculations should be made with the original unscaled equations. Let's rework the problem using scaling to determine if pivoting is required. The first step in the elimination procedure eliminates all the elements in the first column under element a11. Before performing that step, let's scale all the elements in column 1 by the largest element in each row. The result is

    a1 = [ 3/105 ]   [ 0.0286 ]
         [ 2/103 ] = [ 0.0194 ]    (1.95)
         [ 1/3   ]   [ 0.3333 ]

where the notation a1 denotes the column vector consisting of the scaled elements from the first column of matrix A. The third element of a1 is the largest element in a1, which indicates that rows 1 and 3 of matrix A should be interchanged. Thus, pivoting is indicated. Interchanging rows 1 and 3 and evaluating the elimination multipliers yields

    [ 1   1    3 |   3 ]
    [ 2  -3  103 |  98 ]  R2 - (2/1)R1    (1.96)
    [ 3   2  105 | 104 ]  R3 - (3/1)R1

Performing the elimination and indicating the next elimination multiplier yields

    [ 1   1   3 |  3 ]
    [ 0  -5  97 | 92 ]    (1.97)
    [ 0  -1  96 | 95 ]  R3 - (1/5)R2

Scaling the second and third elements of column 2 gives

    a2 = [ -5/97 ]   [ -0.0515 ]
         [ -1/96 ] = [ -0.0104 ]    (1.98)

Consequently, pivoting is not indicated. Performing the elimination indicated in Eq. (1.97) yields

    [ 1   1   3    |  3    ]
    [ 0  -5  97    | 92    ]    (1.99)
    [ 0   0  76.6  | 76.6  ]

Solving Eq. (1.99) by back substitution yields x1 = -1.00, x2 = 1.00, and x3 = 1.00, which is the exact solution. Thus, scaling to determine the pivot element has eliminated the round-off error in this simple example.

1.3.3. Gauss Elimination

The elimination procedure described in the previous section, including scaled pivoting, is commonly called Gauss elimination. It is the most important and most useful direct elimination method for solving systems of linear algebraic equations. The Gauss-Jordan method, the matrix inverse method, the LU factorization method, and the Thomas algorithm are all modifications or extensions of the Gauss elimination method. Pivoting is an essential element of Gauss elimination. In cases where all of the elements of the coefficient matrix A are the same order of magnitude, scaling is not necessary. However, pivoting to avoid zero pivot elements is always required. Scaled pivoting to decrease round-off errors, while very desirable in general, can be omitted at some risk to the accuracy of the solution. When performing Gauss elimination by hand, decisions about pivoting can be made on a case by case basis. When writing a general-purpose computer program to apply Gauss elimination to arbitrary systems of equations, however, scaled pivoting is an absolute necessity.

When solving large systems of linear algebraic equations on a computer, the pivoting step is generally implemented by simply keeping track of the order of the rows as they are interchanged, without actually interchanging rows, which is a time-consuming and unnecessary operation. This is accomplished by using an order vector o whose elements denote the order in which the rows of the coefficient matrix A and the right-hand-side vector b are to be processed. The rows of the A matrix and the b vector are processed in the order indicated by the order vector o during both the elimination step and the back substitution step. As an example, consider the second part of Example 1.10. The order vector has the initial value o^T = [1 2 3]. After scaling, rows 1 and 3 are to be interchanged. Instead of actually interchanging these rows as done in Example 1.10, the corresponding elements of the order vector are interchanged to yield o^T = [3 2 1]. The first elimination step then uses the third row to eliminate x1 from the second and first rows. Pivoting is not required for the second elimination step, so the order vector is unchanged, and the second row is used to eliminate x2 from the first row. Back substitution is then performed in the reverse order of the order vector. This procedure saves computer time for large systems of equations, but at the expense of a slightly more complicated program.

The Gauss elimination procedure, in a format suitable for programming on a computer, is summarized as follows:

1. Define the n x n coefficient matrix A, the n x 1 column vector b, and the n x 1 order vector o.
2. Starting with column 1, scale column k (k = 1, 2, ..., n-1), search for the element of largest scaled magnitude in column k, and pivot (interchange rows) to put that coefficient into the ak,k pivot position. This step is actually accomplished by interchanging the corresponding elements of the n x 1 order vector o.
3. For column k (k = 1, 2, ..., n-1), apply the elimination procedure to rows i (i = k+1, k+2, ..., n) to create zeros in column k below the pivot element, ak,k. Do not actually calculate the zeros in column k. In fact, storing the elimination multipliers, em = (ai,k/ak,k), in place of the eliminated elements creates the Doolittle LU factorization presented in Section 1.4. Thus,

    ai,j = ai,j - (ai,k/ak,k) ak,j    (i, j = k+1, k+2, ..., n)    (1.100a)
    bi = bi - (ai,k/ak,k) bk    (i = k+1, k+2, ..., n)    (1.100b)

After step 3 is applied to all k columns, the original A matrix is upper triangular.

4. Solve for x using back substitution. If more than one b vector is present, solve for the corresponding x vectors one at a time. Thus,

    xn = bn/an,n    (1.101a)
    xi = (bi - sum from j=i+1 to n of (ai,j xj))/ai,i    (i = n-1, n-2, ..., 1)    (1.101b)

The number of multiplications and divisions required for Gauss elimination is approximately N = (n^3/3 - n/3) for matrix A and n^2 for each b vector. For n = 10, N = 430, and for n = 100, N = 343,300. This is a considerable reduction compared to Cramer's rule.
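The four-step procedure above condenses into a short subroutine. The Fortran 90 sketch below is illustrative only (it is not the chapter's program listing, and for brevity it physically interchanges rows instead of using the order vector o); scaled partial pivoting follows step 2:

program gauss_demo
  implicit none
  integer, parameter :: n = 3
  real :: a(n,n), b(n), x(n)
  a = reshape([80.,-20.,-20.,  -20.,40.,-20.,  -20.,-20.,130.], [n,n])
  b = [20., 20., 20.]
  call gauss(a, b, x, n)
  write (*,'(3f6.2)') x                        ! expect 0.60 1.00 0.40
contains
  subroutine gauss(a, b, x, n)
    integer, intent(in) :: n
    real, intent(inout) :: a(n,n), b(n)
    real, intent(out) :: x(n)
    real :: s(n), row(n), em, t
    integer :: i, k, p
    do k = 1, n-1
      do i = k, n                              ! step 2: scale column k by the largest
        s(i) = abs(a(i,k)) / maxval(abs(a(i,k:n)))   ! element in each remaining row
      end do
      p = k - 1 + maxloc(s(k:n), dim=1)        ! row of largest scaled element
      if (p /= k) then                         ! pivot: interchange rows k and p
        row = a(k,:);  a(k,:) = a(p,:);  a(p,:) = row
        t = b(k);      b(k) = b(p);      b(p) = t
      end if
      do i = k+1, n                            ! step 3: eliminate below the pivot, Eq. (1.100)
        em = a(i,k) / a(k,k)
        a(i,k:n) = a(i,k:n) - em*a(k,k:n)
        b(i) = b(i) - em*b(k)
      end do
    end do
    do i = n, 1, -1                            ! step 4: back substitution, Eq. (1.101)
      x(i) = (b(i) - dot_product(a(i,i+1:n), x(i+1:n))) / a(i,i)
    end do
  end subroutine gauss
end program gauss_demo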

1.3.4. Gauss-Jordan Elimination

Gauss-Jordan elimination is a variation of Gauss elimination in which the elements above the major diagonal are eliminated (made zero) as well as the elements below the major diagonal. The A matrix is transformed to a diagonal matrix. The rows are usually scaled to yield unity diagonal elements, which transforms the A matrix to the identity matrix, I. The transformed b vector is then the solution vector x. Gauss-Jordan elimination can be used for single or multiple b vectors.

The number of multiplications and divisions for Gauss-Jordan elimination is approximately N = (n^3/2 - n/2) + n^2, which is approximately 50 percent larger than for Gauss elimination. Consequently, Gauss elimination is preferred.

Example 1.11. Gauss-Jordan elimination.

Let's rework Example 1.7 using simple Gauss-Jordan elimination, that is, elimination without pivoting. The augmented A matrix is [see Eq. (1.79)]

    [  80  -20  -20 | 20 ]  R1/80
    [ -20   40  -20 | 20 ]    (1.102)
    [ -20  -20  130 | 20 ]

Scaling row 1 to give a11 = 1 gives

    [   1  -1/4  -1/4 | 1/4 ]
    [ -20    40   -20 | 20  ]  R2 - (-20)R1    (1.103)
    [ -20   -20   130 | 20  ]  R3 - (-20)R1

Applying elimination below row 1 yields

    [ 1  -1/4  -1/4 | 1/4 ]
    [ 0    35   -25 | 25  ]  R2/35    (1.104)
    [ 0   -25   125 | 25  ]

Scaling row 2 to give a22 = 1 gives

    [ 1  -1/4  -1/4 | 1/4 ]  R1 - (-1/4)R2
    [ 0     1  -5/7 | 5/7 ]    (1.105)
    [ 0   -25   125 | 25  ]  R3 - (-25)R2

Applying elimination both above and below row 2 yields

    [ 1  0  -3/7   | 3/7   ]
    [ 0  1  -5/7   | 5/7   ]    (1.106)
    [ 0  0   750/7 | 300/7 ]  R3/(750/7)

Scaling row 3 to give a33 = 1 gives

    [ 1  0  -3/7 | 3/7 ]  R1 - (-3/7)R3
    [ 0  1  -5/7 | 5/7 ]  R2 - (-5/7)R3    (1.107)
    [ 0  0     1 | 0.4 ]

Applying elimination above row 3 completes the process. Thus,

    [ 1  0  0 | 0.60 ]
    [ 0  1  0 | 1.00 ]    (1.108)
    [ 0  0  1 | 0.40 ]

The A matrix has been transformed to the identity matrix I, and the b vector has been transformed to the solution vector, x^T = [0.60 1.00 0.40].

The Gauss-Jordan elimination procedure, in a format suitable for programming on a computer, can be developed by modifying the Gauss elimination procedure presented in Section 1.3.3. Step 1 is changed to augment the n x n A matrix with the n x n identity matrix, I. Steps 2 and 3 of the procedure are the same. Before performing step 3, the pivot element is scaled to unity by dividing all elements in the row by the pivot element. Step 3 is expanded to perform elimination above the pivot element as well as below the pivot element.

The inverse of a square matrix A is the matrix A^-1 such that A A^-1 = A^-1 A = I. Gauss-Jordan elimination can be used to evaluate the inverse of matrix A by augmenting A with the identity matrix I and applying the Gauss-Jordan algorithm. At the conclusion of the process, the A matrix has been transformed to the identity matrix, I, and the original identity matrix, I, has been transformed to the matrix inverse, A^-1. Thus, applying Gauss-Jordan elimination yields

    [A | I]  -->  [I | A^-1]    (1.109)

Example 1.12. Matrix inverse by Gauss-Jordan elimination.

Let's evaluate the inverse of matrix A presented in Example 1.7. First, augment matrix A with the identity matrix, I. Thus,

    [A | I] = [  80  -20  -20 | 1 0 0 ]
              [ -20   40  -20 | 0 1 0 ]    (1.110)
              [ -20  -20  130 | 0 0 1 ]

Performing Gauss-Jordan elimination transforms Eq. (1.110) to

    [ 1 0 0 | 2/125  1/100  1/250 ]
    [ 0 1 0 | 1/100  1/30   1/150 ]    (1.111)
    [ 0 0 1 | 1/250  1/150  7/750 ]

from which

    A^-1 = [ 2/125  1/100  1/250 ]   [ 0.016000  0.010000  0.004000 ]
           [ 1/100  1/30   1/150 ] = [ 0.010000  0.033333  0.006667 ]    (1.112)
           [ 1/250  1/150  7/750 ]   [ 0.004000  0.006667  0.009333 ]

Multiplying A times A^-1 yields the identity matrix I, thus verifying the computations.

1.3.5. The Matrix Inverse Method

Systems of linear algebraic equations can be solved using the matrix inverse. Consider the general system of linear algebraic equations:

    Ax = b    (1.113)

Multiplying Eq. (1.113) by A^-1 yields

    A^-1 A x = I x = x = A^-1 b    (1.114)

from which

    x = A^-1 b    (1.115)

Thus, when the matrix inverse A^-1 of the coefficient matrix A is known, the solution vector x is simply the product of the matrix inverse A^-1 and the right-hand-side vector b. Not all matrices have inverses. Singular matrices, that is, matrices whose determinant is zero, do not have inverses. The corresponding system of equations does not have a unique solution.

Example 1.13. The matrix inverse method.

Let's solve the linear system considered in Example 1.7 using the matrix inverse method. The matrix inverse A^-1 of the coefficient matrix A for that linear system is evaluated in Example 1.12. Multiplying A^-1 by the vector b from Example 1.7 gives

    x = A^-1 b = [ 2/125  1/100  1/250 ] [ 20 ]
                 [ 1/100  1/30   1/150 ] [ 20 ]    (1.116)
                 [ 1/250  1/150  7/750 ] [ 20 ]

Performing the matrix multiplication yields

    x1 = (2/125)(20) + (1/100)(20) + (1/250)(20) = 0.60    (1.117a)
    x2 = (1/100)(20) + (1/30)(20) + (1/150)(20) = 1.00    (1.117b)
    x3 = (1/250)(20) + (1/150)(20) + (7/750)(20) = 0.40    (1.117c)

Thus, x^T = [0.60 1.00 0.40].
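Examples 1.11 to 1.13 can be bundled into one sketch: apply Gauss-Jordan elimination to [A|I] to obtain A^-1 per Eq. (1.109), then form x = A^-1 b per Eq. (1.115). The following is an illustrative Fortran 90 version without pivoting, not the book's listing:

program gauss_jordan_demo
  implicit none
  integer, parameter :: n = 3
  real :: ai(n,2*n), ainv(n,n), b(n)
  integer :: i, k
  ai = 0.0
  ai(:,1:n) = reshape([80.,-20.,-20.,  -20.,40.,-20.,  -20.,-20.,130.], [n,n])
  do i = 1, n
    ai(i, n+i) = 1.0                   ! augment A with the identity matrix, Eq. (1.110)
  end do
  do k = 1, n
    ai(k,:) = ai(k,:) / ai(k,k)        ! scale the pivot row so a(k,k) = 1
    do i = 1, n                        ! eliminate above AND below the pivot
      if (i /= k) ai(i,:) = ai(i,:) - ai(i,k)*ai(k,:)
    end do
  end do
  ainv = ai(:, n+1:2*n)                ! [A|I] has become [I|A^-1], Eq. (1.109)
  b = [20., 20., 20.]
  write (*,'(3f6.2)') matmul(ainv, b)  ! x = A^-1 b, Eq. (1.115): 0.60 1.00 0.40
end program gauss_jordan_demo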

1.3.6. Determinants

The evaluation of determinants by the cofactor method is discussed in Section 1.2.4 and illustrated in Example 1.4. Approximately N = (n-1)n! multiplications are required to evaluate the determinant of an n x n matrix by the cofactor method. For n = 10, N = 32,659,200. Evaluation of the determinants of large matrices by the cofactor method is prohibitively expensive, if not impossible. Fortunately, determinants can be evaluated much more efficiently by a variation of the elimination method.

First, consider the matrix A expressed in upper triangular form:

    A = [ a11  a12  a13  ...  a1n ]
        [  0   a22  a23  ...  a2n ]
        [  0    0   a33  ...  a3n ]    (1.118)
        [ ...                     ]
        [  0    0    0   ...  ann ]

Expanding the determinant of A by cofactors down the first column gives a11 times the (n-1) x (n-1) determinant having a22 as its first element in its first column, with the remaining elements in its first column all zero. Expanding that determinant by cofactors down its first column yields a22 times the (n-2) x (n-2) determinant having a33 as its first element in its first column, with the remaining elements in its first column all zero. Continuing in this manner yields the result that the determinant of an upper triangular matrix (or a lower triangular matrix) is simply the product of the elements on the major diagonal. Thus,

    det(A) = |A| = product from i=1 to n of ai,i = a11 a22 ... ann    (1.119)

where the product notation denotes the product of the ai,i.

This result suggests the use of elimination to triangularize a general square matrix, then to evaluate its determinant using Eq. (1.119). This procedure works exactly as stated if no pivoting is used. When pivoting is used, the value of the determinant is changed, but in a predictable manner. The row operations must be modified as follows to use elimination for the evaluation of determinants:

1. Multiplying a row by a constant multiplies the determinant by that constant.
2. Interchanging any two rows changes the sign of the determinant.
3. Any row may be added to the multiple of any other row without changing the value of the determinant.

Thus, an even number of row interchanges does not change the sign of the determinant, whereas an odd number of row interchanges does change the sign of the determinant. The modified elimination method based on the above row operations is an efficient way to evaluate the determinant of a matrix. The number of multiplications required is approximately N = n^3 + n^2 - n, which is orders and orders of magnitude less effort than the N = (n-1)n! multiplications required by the cofactor method.

Example 1.14. Evaluation of a 3 x 3 determinant by the elimination method.

Let's rework Example 1.4 using the elimination method. Recall Eq. (1.47):

    A = [  80  -20  -20 ]
        [ -20   40  -20 ]    (1.121)
        [ -20  -20  130 ]

From Example 1.7, after Gauss elimination, matrix A becomes

    A = [ 80  -20  -20   ]
        [  0   35  -25   ]    (1.122)
        [  0    0  750/7 ]

There are no row interchanges or multiplications of the matrix by scalars in this example. Thus,

    det(A) = |A| = (80)(35)(750/7) = 300,000    (1.123)
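The elimination evaluation of Example 1.14 in sketch form (illustrative only; no pivoting is needed for this matrix, and with pivoting the sign would simply be flipped once per row interchange, per row operation 2):

program det_elimination_demo
  implicit none
  real :: a(3,3), em, det
  integer :: i, k
  a = reshape([80.,-20.,-20.,  -20.,40.,-20.,  -20.,-20.,130.], [3,3])
  do k = 1, 2                          ! triangularize by elimination (row operation 3
    do i = k+1, 3                      ! leaves the determinant unchanged)
      em = a(i,k) / a(k,k)
      a(i,k:) = a(i,k:) - em*a(k,k:)
    end do
  end do
  det = a(1,1) * a(2,2) * a(3,3)       ! Eq. (1.119): product of the diagonal elements
  write (*,'(a,f10.1)') 'det(A) = ', det   ! expect 300000.0
end program det_elimination_demo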

1.4 LU FACTORIZATION

Matrices (like scalars) can be factored into the product of two other matrices in an infinite number of ways. Thus,

    A = BC    (1.124)

When B and C are lower triangular and upper triangular matrices, respectively,

    A = LU    (1.125)

Specifying the diagonal elements of either L or U makes the factoring unique. The procedure based on unity elements on the major diagonal of L is called the Doolittle method. The procedure based on unity elements on the major diagonal of U is called the Crout method.

Matrix factoring can be used to reduce the work involved in Gauss elimination when multiple unknown b vectors are to be considered. In the Doolittle LU method, this is accomplished by defining the elimination multipliers, em, determined in the elimination step of Gauss elimination as the elements of the L matrix. The U matrix is defined as the upper triangular matrix determined by the elimination step of Gauss elimination. In this manner, multiple b vectors can be processed through the elimination step using the L matrix and through the back substitution step using the elements of the U matrix.

Consider the linear system, Ax = b. Let A be factored into the product LU, as illustrated in Eq. (1.125). The linear system becomes

    LUx = b    (1.126)

Multiplying Eq. (1.126) by L^-1 gives

    L^-1 LUx = IUx = Ux = L^-1 b    (1.127)

The last two terms in Eq. (1.127) give

    Ux = L^-1 b    (1.128)

Define the vector b' as follows:

    b' = L^-1 b    (1.129)

Multiplying Eq. (1.129) by L gives

    Lb' = L L^-1 b = Ib = b    (1.130)

Equating the first and last terms in Eq. (1.130) yields

    Lb' = b    (1.131)

Substituting Eq. (1.129) into Eq. (1.128) yields

    Ux = b'    (1.132)

Equation (1.131) applies the steps performed in the triangularization of A to U to the b vector, transforming b to b'. Equation (1.132) is then used to determine the solution vector x. Since Eq. (1.131) is lower triangular, forward substitution (analogous to back substitution presented earlier) is used to solve for b'. Since Eq. (1.132) is upper triangular, back substitution is used to solve for x.

In the Doolittle LU method, the U matrix is the upper triangular matrix obtained by Gauss elimination. The L matrix is the lower triangular matrix containing the elimination multipliers, em, obtained in the Gauss elimination process as the elements below the diagonal, with unity elements on the major diagonal. Thus, once L and U have been determined, any b vector can be considered at any later time, and the corresponding solution vector x can be obtained simply by solving Eqs. (1.131) and (1.132), in that order. The number of multiplicative operations required for each b vector is n^2.

Example 1.15. The Doolittle LU method.

Let's solve Example 1.7 using the Doolittle LU method. The first step is to determine the L and U matrices. The U matrix is simply the upper triangular matrix determined by the Gauss elimination procedure in Example 1.7. The L matrix is simply the record of the elimination multipliers, em, used to transform A to U. These multipliers are the numbers in parentheses in the row operations indicated in Eqs. (1.80) and (1.81) in Example 1.7. Thus, L and U are given by

    L = [   1     0    0 ]            U = [ 80  -20  -20   ]
        [ -1/4    1    0 ]    and         [  0   35  -25   ]    (1.133)
        [ -1/4  -5/7   1 ]                [  0    0  750/7 ]

Consider the first b vector from Example 1.8: b1^T = [20 20 20]. Equation (1.131) gives

    [   1     0    0 ] [ b'1 ]   [ 20 ]
    [ -1/4    1    0 ] [ b'2 ] = [ 20 ]    (1.134)
    [ -1/4  -5/7   1 ] [ b'3 ]   [ 20 ]

Performing forward substitution yields

    b'1 = 20    (1.135a)
    b'2 = 20 - (-1/4)(20) = 25    (1.135b)
    b'3 = 20 - (-1/4)(20) - (-5/7)(25) = 300/7    (1.135c)

The b' vector is simply the transformed b vector determined in Eq. (1.81). Equation (1.132) gives

    [ 80  -20  -20   ] [ x1 ]   [ 20    ]
    [  0   35  -25   ] [ x2 ] = [ 25    ]    (1.136)
    [  0    0  750/7 ] [ x3 ]   [ 300/7 ]

Performing back substitution yields x^T = [0.60 1.00 0.40]. Repeating the process for the second b vector, b2^T = [20 10 20], yields

    b' = [ 20    ]            x = [ 1/2 ]
         [ 15    ]    and         [ 2/3 ]    (1.137)
         [ 250/7 ]                [ 1/3 ]

The Doolittle LU method, in a format suitable for programming on a computer, is summarized as follows:

1. Perform steps 1, 2, and 3 of the Gauss elimination procedure presented in Section 1.3.3. Store the pivoting information in the order vector o. Store the row elimination multipliers, em, in the locations of the eliminated elements. The results of this step are the L and U matrices.
2. Compute the b' vector in the order of the elements of the order vector o using forward substitution:

    b'i = bi - sum from k=1 to i-1 of (li,k b'k)    (i = 2, 3, ..., n)    (1.138)

where li,k are the elements of the L matrix.

3. Compute the x vector using back substitution:

    xi = (b'i - sum from k=i+1 to n of (ui,k xk))/ui,i    (i = n, n-1, ..., 1)    (1.139)

where ui,k and ui,i are elements of the U matrix.

The number of multiplications and divisions required to determine L and U is N = (n^3/3 - n/3). After L and U have been determined, the forward substitution step required to solve Lb' = b requires N = (n^2/2 - n/2) multiplicative operations, and the back substitution step required to solve Ux = b' requires N = (n^2/2 + n/2) multiplicative operations. Thus, each additional b vector costs only n^2 multiplicative operations, which is much less work than repeating the complete Gauss elimination, especially for large systems.

When pivoting is used with LU factorization, it is necessary to keep track of the row order, for example, by an order vector o. When the rows of A are interchanged during the elimination process, the corresponding elements of the order vector o are interchanged. When a new b vector is considered, it is processed in the order corresponding to the elements of the order vector o.

The major advantage of LU factorization methods is their efficiency when multiple unknown b vectors must be considered.
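A compact Fortran 90 sketch of the three steps (illustrative, without pivoting, and not the book's listing): the factorization overwrites A, storing the multipliers l(i,k) below the diagonal and U on and above it, and each b vector then costs only the two substitution sweeps.

program doolittle_demo
  implicit none
  integer, parameter :: n = 3
  real :: a(n,n), b(n), bp(n), x(n)
  integer :: i, k
  a = reshape([80.,-20.,-20.,  -20.,40.,-20.,  -20.,-20.,130.], [n,n])
  b = [20., 20., 20.]
  do k = 1, n-1                       ! step 1: factor A = LU in place
    do i = k+1, n
      a(i,k) = a(i,k) / a(k,k)                      ! multiplier em becomes l(i,k)
      a(i,k+1:n) = a(i,k+1:n) - a(i,k)*a(k,k+1:n)   ! update the U part of row i
    end do
  end do
  do i = 1, n                         ! step 2: forward substitution, Eq. (1.138)
    bp(i) = b(i) - dot_product(a(i,1:i-1), bp(1:i-1))
  end do
  do i = n, 1, -1                     ! step 3: back substitution, Eq. (1.139)
    x(i) = (bp(i) - dot_product(a(i,i+1:n), x(i+1:n))) / a(i,i)
  end do
  write (*,'(3f6.2)') x               ! expect 0.60 1.00 0.40
end program doolittle_demo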

As a final application of LU factorization, it can be used to evaluate the inverse of matrix A, that is, A^-1. The matrix inverse is calculated in a column by column manner using unit vectors for the right-hand-side vector b. Thus, if b1^T = [1 0 ... 0], x1 will be the first column of A^-1. The succeeding columns of A^-1 are calculated by letting b2^T = [0 1 0 ... 0], b3^T = [0 0 1 ... 0], etc., and bn^T = [0 0 ... 1]. The number of multiplicative operations for each column is n^2. There are n columns, so the total number of multiplicative operations is n^3. The number of multiplicative operations required to determine L and U is (n^3/3 - n/3). Thus, the total number of multiplicative operations required is 4n^3/3 - n/3, which is smaller than the 3n^3/2 - n/2 operations required by the Gauss-Jordan method.

Example 1.16. Matrix inverse by the Doolittle LU method.

Let's evaluate the inverse of matrix A presented in Example 1.7 by the Doolittle LU method:

    A = [  80  -20  -20 ]
        [ -20   40  -20 ]    (1.141)
        [ -20  -20  130 ]

Evaluate the L and U matrices by Doolittle LU factorization. Thus,

    L = [   1     0    0 ]            U = [ 80  -20  -20   ]
        [ -1/4    1    0 ]    and         [  0   35  -25   ]    (1.142)
        [ -1/4  -5/7   1 ]                [  0    0  750/7 ]

Let b1^T = [1 0 0]. Then Lb'1 = b1 gives b'1^T = [1 1/4 3/7]. Solving Ux1 = b'1 determines x1, where x1 is the first column of A^-1. Thus, x1^T = [2/125 1/100 1/250]. Letting b2^T = [0 1 0] gives b'2^T = [0 1 5/7] and x2^T = [1/100 1/30 1/150]. Letting b3^T = [0 0 1] gives b'3^T = [0 0 1] and x3^T = [1/250 1/150 7/750]. Thus, A^-1 is given by

    A^-1 = [x1 x2 x3] = [ 2/125  1/100  1/250 ]   [ 0.016000  0.010000  0.004000 ]
                        [ 1/100  1/30   1/150 ] = [ 0.010000  0.033333  0.006667 ]    (1.143)
                        [ 1/250  1/150  7/750 ]   [ 0.004000  0.006667  0.009333 ]

which is the same result obtained by Gauss-Jordan elimination in Example 1.12.
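Reusing the factorization from the previous sketch for the n unit vectors reproduces the column by column inverse of Example 1.16. Again a minimal, illustrative Fortran 90 version:

program lu_inverse_demo
  implicit none
  integer, parameter :: n = 3
  real :: a(n,n), ainv(n,n), e(n), bp(n)
  integer :: i, j, k
  a = reshape([80.,-20.,-20.,  -20.,40.,-20.,  -20.,-20.,130.], [n,n])
  do k = 1, n-1                       ! factor A = LU in place (Doolittle)
    do i = k+1, n
      a(i,k) = a(i,k) / a(k,k)
      a(i,k+1:n) = a(i,k+1:n) - a(i,k)*a(k,k+1:n)
    end do
  end do
  do j = 1, n                         ! one forward/back sweep per unit vector b_j
    e = 0.0
    e(j) = 1.0
    do i = 1, n
      bp(i) = e(i) - dot_product(a(i,1:i-1), bp(1:i-1))
    end do
    do i = n, 1, -1                   ! column j of the inverse
      ainv(i,j) = (bp(i) - dot_product(a(i,i+1:n), ainv(i+1:n,j))) / a(i,i)
    end do
  end do
  write (*,'(3f10.6)') transpose(ainv)   ! expect rows 0.016 0.010 0.004 / ...
end program lu_inverse_demo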

1.5 TRIDIAGONAL SYSTEMS OF EQUATIONS

When a large system of linear algebraic equations has a special pattern, such as a tridiagonal pattern, it is usually worthwhile to develop special methods for that unique pattern. There are a number of direct elimination methods for solving systems of linear algebraic equations which have special patterns in the coefficient matrix. These methods are generally very efficient in computer time and storage. Such methods should be considered when the coefficient matrix fits the required pattern, and when computer storage and/or execution time are important. One algorithm that deserves special attention is the algorithm for tridiagonal matrices, often referred to as the Thomas (1949) algorithm. Large tridiagonal systems arise naturally in a number of problems, especially in the numerical solution of differential equations by implicit methods. Consequently, the Thomas algorithm has found a large number of applications.

To derive the Thomas algorithm, let's apply the Gauss elimination procedure to a tridiagonal matrix T, modifying the procedure to eliminate all unnecessary computations involving zeros. Consider the matrix equation

    Tx = b    (1.144)

where T is a tridiagonal matrix. Thus,

    T = [ a11  a12   0    0   ...     0          0          0      ]
        [ a21  a22  a23   0   ...     0          0          0      ]
        [  0   a32  a33  a34  ...     0          0          0      ]    (1.145)
        [ ...                                                      ]
        [  0    0    0    0   ...  a(n-1,n-2)  a(n-1,n-1)  a(n-1,n)]
        [  0    0    0    0   ...     0        a(n,n-1)    a(n,n)  ]

Since all the elements of column 1 below row 2 are already zero, the only element to be eliminated in row 2 is a21. Thus, replace row 2 by R2 - (a21/a11)R1. Row 2 becomes

    [0   a22 - (a21/a11)a12   a23   0 ... 0 0 0]    (1.146)

Similarly, only a32 in column 2 must be eliminated from row 3, only a43 in column 3 must be eliminated from row 4, etc. The eliminated element itself does not need to be calculated. In fact, storing the elimination multipliers, em = (a21/a11), etc., in place of the eliminated elements allows this procedure to be used as an LU factorization method. Only the diagonal element in each row is affected by the elimination. Elimination in rows 2 to n is accomplished as follows:

    ai,i = ai,i - (a(i,i-1)/a(i-1,i-1)) a(i-1,i)    (i = 2, ..., n)    (1.147)

Thus, the elimination step involves only 2n multiplicative operations to place T in upper triangular form.

The elements of the b vector are also affected by the elimination process. The first element b1 is unchanged. The second element b2 becomes

    b2 = b2 - (a21/a11) b1    (1.148)

Subsequent elements of the b vector are changed in a similar manner. Processing the b vector requires only one multiplicative operation per row, since the elimination multiplier, em = (a21/a11), is already calculated. Thus, the total process of elimination, including the operation on the b vector, requires only 3n multiplicative operations.

The n x n tridiagonal matrix T can be stored as an n x 3 matrix A', since there is no need to store the zeros. The first column of matrix A', elements a'i,1, corresponds to the subdiagonal of matrix T, elements a(i,i-1). The second column of matrix A', elements a'i,2, corresponds to the diagonal elements of matrix T, elements a(i,i). The third column of matrix A', elements a'i,3, corresponds to the superdiagonal of matrix T, elements a(i,i+1). The elements a'1,1 and a'n,3 do not exist.

0 1. (1.0 (1.2)ai_l. . elements a’i.ati. matrix A elementsati.0 and T = 100. The first columnof matrix A elements al. In that 8.25 -2.the finite difference equation 2 Ti_ -. 2 ~.2 t a2. The to of elements and atl.152b) The Thomas algorithm.2 t a 3.25 -2.4.(ati.25 -2..al.. .0 0.25 1.0 1.(al.. t a2.0 1. for which (2 + ~X2 ~2) = 2. that is.01.~ Example 1.2 .151b) Afterali. 3 . elements ai.0 -100.149) t When elementsof column of matrix A are eliminated.2 an.0 1. the total process of elimination. is already calculated. 1 t A = at2. n .1 an-l. elementsai.1 an.0....0- At ~ and b = 0.1 an.4 yields -1.150b) (i = 2. with T1 = 0..ns obtained in Example Eq.1/ai_l. Thus. (i = n .. 3 .0 0. i.25 -2. The third columnof t.(2 + c~ z~r2)Ti -]..3. of Example correspond to the elements of the x vector) 8.151a) (1.0 and Ax = 0.0 0. requires only 3n multiplicative operations.0 -2. 2 ati. for i = 2 . Thus.i+1.25.1 t a 3.3xi+l)/a~. n) (1.153) 1 is solved for ~ = 4.150a) (i= 2 3 .54). the 1 1. 1) (1. the backsubstitution step is as follows: Xn = b n/a’n.0 -0... elements ai.2 = ati. Let’s solve the tridiagonal systemof equatio.z)bi_l (1. example.1. .0 1..25 -2.0 0. need to store the zeros.152a) xi = (bi . t become the elements of column2 of matrix A " ’ al. (8.50 Chapter 1 em= (a21/all)..0 1.Tt-+l = 0 (1..3’ donotexist.3 an-l.. The secondcolumnof matrix A’. n) (1.17. including the operation on the b vector.i_ 1.. corresponds the superdiagonal matrix T. the elementsa’i.2 -.153) in the form of the n x 3 matrix 9 (where the temperatures T.125.0 1.25 -2.3 The b vector is modifiedas follows: b1 = b 1 bi = bi .. Writing Eq. n) and b are evaluated.3 a3.154) . The n x n tridiagonal matrix T can be stored as an n x 3 matrix A’ since there is no t.2 (1.0 1.1..3 -- (1. corresponds to the diagonal elements of matrix T. corresponds to the subdiagonalof matrix T. 8.2..1/a~_l.2 (i =2.2 an-l.0 1.

608602) x7 -2.0 1.078251 60.158a) (1. (1. Thus.078251 Processing theremaining rows yields the solution vector: 1. 6 (1.0 0. N = 3n .158c) Pivoting destroys the tridiagonality of the systemof linear algebraic equations. whereelementsb~ to b. (1.0 . (1.923667 Equ~ion (1.2 = -2.0 (1. for which N = (n3/3 .157).2 for subsequent b vectors.0/(-2.a~.4 for the completeThomas algorithm. and thus cannot be used with the Thomas algorithm.0 (1. (1.0/(-2.444444) (-0.150).So pivoting is not necessary.158c) is the solution presented in Table 8.805556 -1.n/3) + 2.696154 -1.0 0.0 1. (1. If the T matrix is constant and multiple b vectors are to be considered.25)](1.2)al. Most large tridiagonal systems which represent real physical problemsare diagonally dominant. In that case.589569) (-0.966751 4. The advantages of the Thomasalgorithm are quite apparent when compared with either the Gauss elimination method.0)(60.2. 2 = (-100)/(-1.157) The solution vector is computed using Eq.0 0.25 and a’ is given by a’ 2.Systems Linear AlgebraicEquations of 51 The major diagonal terms (the center column of the A’ matrix) are transformed according to Eq.923667)]/(.0" 1.425190 7. For this particular b vector.25 .805556 (1.0 1.0) = -1. f or .This is certainly not the case in general. The results are presented in Eq.1/al.923667 (1.156) The remaining elements of b are processed in the samemanner.158b) X = (b 6 . The A’ matrix after elimination is presented in Eq.641398 1.0.[1. Thus.(a2..502398 37.9.0 1. only the back substitution step is required once the T matrix has been factored into L and U matrices.602253) (-0.0 -0.2 a2.643111 -1. 2 -.0- and b’ = 0.660431 -1.3 and the numberof multiplicative operations required by the back substitution step is N = 3n .151).643111) = 37. 3 = -2._~ are all zero.989926 x = 13. where the elimination multipliers are presented in parentheses in column1.553846) (-0. 1. The b vector is transformed according to Eq. The number of multiplicative operations required by the elimination step is N = 2n .641398) 2 = [0 - = 60.250000 -1.(-0.1. or t he D oolittle L U m ethod. = b7/a~.647747 -1. the total numberof multiplicative operations is N = 5n . 2 = a2. (1.152).157).552144 22. Thus.155) The remaining elements of column2 are processed in the same manner.0 0.25)](0. The final result is: (-0. and b2 = b2 .(a’2A/atL2)b~ = 0.0) = 0. the vector does not change.3x7)/a~.0 -100. Thus.[1.606889) . b~ = 0.

0003 (1A6Oa) (1. on An algorithm similar to the Thomas algorithm can be developed for other special types of systems of linear algebraic equations. Store the elimination multipliers.6.159b) Solve Eq.18.0002 -9999 I 1 [ 0. is summarized follows: for as 1. Thus. in a format suitable for programming a computer.52 Chapter 1 which N = n2 . (1.0003 3. and quad precision representation typically contains about 28 significant digits. 2 terms from Eq. Store the n x n tridiagonal matrix T in the n x 3 matrix A’. The Thomas algorithm. 3. in which the elements of T are partitioned into submatrices having similar patterns. Compute a~.n/2. 1. and (b) ill-conditioned systems. Round-Off Errors Round-offerrors occur whenexact infinite precision numbersare approximatedby finite precision numbers. as which is presentedat the end of this section. 1]1 I01. Compute bi terms from Eq. In most computers.159) by Gauss elimination.159a) (1. In theory. Effects of round-off errors.1. PITFALLS OF ELIMINATION METHODS All nonsingular systems of linear algebraic equations have a solution. except that matrix operations are employed the submatrix elements. there are two major pitfalls in the application of Gausselimination(or its variations): (a) the presenceof roundoff errors. (1. 2. 4. single precision representation of numbers typically contains about 7 significant digits.1/al_l.0002 1.0002]R2 .6. (1. the Solve for xi by back substitution using Eq.150). However.152). An extended form of the Thomasalgorithm can be applied to block tridiagonal matrices. The solution procedureis analogousto that just presented for scalar elements.0~03 0. 1.151).2.R1/0. the em= al.6.0003xl + 3x2 = 1. the solution can always be obtained by Gauss elimination. double precision representation typically contains about 14 significant digits. for each b vector after the first one. a pentadiagonal system of linear algebraic equations is illustrated in Example 8. 1. 1. Considerthe following system of linear algebraic equations: 0. Theeffects of round-offerrors are illustrated in the following example. in place of a~. Example 1. (1. The right-hand-side vector b is an n x 1 columnvector.Thosepitfalls are discussedin this section.160b) .0003 31 1. For example.0002 XI "+ X2 = 1 (1. The effects of round-off can be reduced by a procedure known iterative improvement.

0003 .164b) x2 -- and x~ = 1 .163) by Gauss elimination.0003 -9999 _ 0.3332 0.0003 . (1. I.3(1/3) 0.9997 0.164a) (1. Thus.159).0003 1 0 3 1. (1. 1 x2 -1.163b) 0. .3x2 0.0002.333333 0.333 0.161a) (1.162) The results are presented in Table 1.0002 .163a) (1.9999 0.161) using finite precision arithmetic with two to eight significant figures.66670000 53 The exact solution of Eq. (1.0002 -0. The results are presented Table 1.2.0003R~ 1 1 2.3333333 0.1.162) x 2 0.0003 0.0002 2 Xl .33333 0. The algorithmis clearly performingvery poorly.33 1.0002 0.70000 0. (1. Thus.3x2 _ 1.33333333 x I 3.0003 _ -9999 0.0002 0.164c) Let’s solve Eq. Let’s rework the problemby interchanging rows 1 and 2 in Eq.0003 (1.164c) using finite precision arithmetic.9997 2 (1.0002.0003 -9999 and x1 1.0.Systems Linear AlgebraicEquations of Table 1.0003x1 + 3x2 = 1.x (1.0002 R2-0.160) 1 x2 -1.161b) 1. Precision 3 4 5 6 7 8 Solution of Eq.6670000 0.0003 _ ± -9999 3 (1.0. (1. (1. Theseresults clearly demonstratethe benefits of pivoting.670000 0.333 0. Thus.1.9999 2. (1.0002 Solve Eq.9999 0. xI +x2 = 1 0.0003 3 Let’s solve Eq.

(1.2. However.3333 0.0001 R 2-R1 -0. System condition.0001 0.167b) . An ill-conditioned problemis one in whicha small change in any of the elements of the problemcauses a large change in the solution of the problem. 1.9999: xl +x2 =2 x~ ÷ 0.166b) 0.0001 to 0. they can be minimizedby using high precision arithmetic and pivoting.168a) (1.165a) x1 + 1. they are also extremelysensitive to round-off errors.54 Table 1.0001 Solving Eq.e..6667 0.19. Consider the following slightly modified form of Eq. Thus.667 0. The presence of round-off errors alters the solution of the problem.0001 (1. the exact solution can always be obtained using fractions or infinite precision numbers(i.66667 Chapter1 Round-off errors can never be completely eliminated. (1.0001 ] [10 Solving Eq.165) by Gauss elimination.6.2.168b) (1. A well-conditioned problemis one in whicha small changein any of the elements of the problemcauses only a small change in the solution of the problem. x 2 0. In theory. Example1. System Condition All well-posednonsingular numericalproblemshave an exact solution.0001 (1.165) in which aa2 changedslightly from 1.167) by Gauss elimination gives 1 0.167a) (1.0001 I 0. (1.0001 R2-RI (1.165b) Solve Eq.0001 2.33333 x 1 0. However. (1.166a) 1 2 (1. [I 2 ] 11. an infinite number significant digits). practical calculations are done with of all finite precision numberswhich necessarily contain round-off errors. Since ill-conditioned systems are extremely sensitive to small changes in the elements of the problem.9999 12.333 0.164c) Pmcision 3 4 5 Solution of Eq. (1.0001x2 = 2. Let’s illustrate the behavior of an ill-conditioned systemby the following problem: x1 +x2 = 2 (1.166b) yields 2 =1 and x I = 1.9999x2 = 2.

165). and the solution repeated.2 1 2 x1 --~ 1.. which is defined in terms of the normsof of the matrix and its inverse. (1. Thesurest wayto detect ill-conditioning is of to evaluate the condition number the matrix. Consider another slightly modified form of Eq.170a) (1. with finite precision arithmetic.Systems Linear Algebraic Equations of 55 Solving Eq. ill-conditioning is not a problem. If the magnitude of the determinantof the matrix is small. be this is -~ -1 not a foolproof test.0001 to 2: X -~-X ~. Similary. Assuming that scaled pivoting has been performed. the only possible remedyto ill-conditioning is to use higher precision arithmetic. wh ichis gre atl y differ ent from t he solution of Eq. and AA can be computed and compared to I.169b) 1 1. of 1. as discussed in the next subsection. and are . round-off errors effectively changethe elements of A and b slightly. the matrix may ill-conditioned. If a drastically different solution is obtained.165). Norms and the Condition Number The problemsassociated with an ill-conditioned systemof linear algebraic equations are illustrated in the previous discussion. (1.170b) Solving Eq. Sucha systemis ill-conditioned. (1. whi h is greatly dif ferent fro m the c solution of Eq.0001x = 2 2 Solving Eq. A close comparisonin either case suggests that matrix A is well-conditioned.(1. (1. the matrix is probably ill-conditioned. and none give a quantitative measure ill-conditioning.e. errors) can occur in the solution. A poor comparison suggests that the matrix is ill-conditioned. (A-~)-1 can be computed and compared to A.168b) yields 2 =-1andx1 =3. This problemillustrates that very small changesin any of the elements of Aor b can cause extremelylarge changesin the solution. x.169a) (1.170) yields 2 =0 and xl= 2. Some the elements of A and/or b can be of changed slightly.However.3.0001 R 2 -R 1 (1. However.169) by Gauss elimination gives (1. Noneof these approaches is foolproof.165) in which b2 is changed slightly from 2. large changes(i. ill-conditioning is quantified by the condition number a matrix.. Withinfinite precision arithmetic. The inverse matrix A can be calculated. (1. There are several waysto check a matrix A for ill-conditioning. Norms the condition number discussed in this section. In the following discussion. and if the systemis ill-conditioned.6.

171d) (1. (1.171e) IIAII = 0 only if A = 0 IIkAII= IklllAII IIA+ Bll _< IIAI[ + IIBII IIABII IIAIIIIBll _< The normof a scalar is its absolute value. Norms have the following properties: IIAII > 0 Ilxll. (1.175) Considera slightly modifiedform of Eq.174).172a) (1.3. Considera system of linear algebraic equations: Ax = b For Eq.177) 3b (1. respectively.174) (1.178) . Thus.173b) (1. (1. ’J (eigenvalue) Condition Number The condition numberof a systemis a measureof the sensitivity of the systemto small changes in any of its elements.6. In a similar manner.<~_ni= ~ n Maximumcolumn sum Maximum row sum Spectral norm Euclidean norm (1. Thus.173a) (1.171b) (1. A(x + fix) = b + fib Subtracting Eq.56 1.176) gives A fix = fib SolvingEq. llai~il --.176) (1.173d) IIAII~ -.6. n IIAII1 l~a<x.171c) (1. (1. of and Ilbll. whichcauses a changein the solution fix.3.172c) Ilxll2 = IlXlle = (~X~/)1/2 Euclidean norm Maximummagnitude norm II x II o~ max = Ixil l<i<n The Euclidean normis the length of the vector in n-space. Ilkll definitions of the normof a vector.174) in whichb is altered by fib. x.172b) (1.171a) (1. (1. Thus.177) for fix gives ~x = A-1 (1.2. Ilbll _<IIAIIIIxlt (1.174) from Eq. There are several (1. Thus.there are several definitions of the normof a matrix.173c) (1. Norms Chapter 1 The measureof the magnitude A.max~ lai~il l<i<nj= 1 IIAII2 : min2i IIAlle \i:~j:~ 1. Ilxl[1 -. or b is called its normand denotedby IIAII.~ ]xi[ Sum of magnitudes = Ikl. (1.1.

3.182) determinesthe sensitivity of the solution.to changes in the vector b.159): 0"000313]i A=[ The Euclidian normof matrix A is 2 1/2 I[AI[e = [(0.182) Equation(1.999 (1.186) 1 (0.185) A. Considerthe coefficient matrix of Eq.178).184) (1. Thesensitivity is determined directly by the value of the condition numberC(A).1057.3166 The inverse of matrix A is 1 10.6672 (1. ll6bll/llbll. 57 116xll IIA-tllll.999 -1 -1 The Euclidian normof A is IIA lie = 1. Example 1.180)byIlbllllxll yields (1.175) and (1. (1. Sucha problemis well-conditioned.000 1 (1.Systems Linear AlgebraicEquations of For Eq. II~xll/Ilxll. I[Obll [I-~ = C(A) I[-~ where C(A) is the condition numberof matrix A: [ C(A): IIA[IIIA-I[[ I (1.179) gives (1. It can be shownby a similar analysis that perturbing the matrix A instead of the vector b gives II~xll < C(A)~ IIx+ ~xllI1 The use of the condition numberis illustrated in Example 1.1 = (0. Sucha problemis ill-conditioned. (1.1057) = 3.0003)9.0003)9. Small values of C(A). (1. Large values of C(A)show large sensitivity of the solution to changesin b. of the order of unity.180) IlOxll5 t[A[IIIA-11i ll6bl[ . Thus.179) -lllll6bll I[bllll6xll IIAIIIIxlIIIA _< DividingEqs. (1. Normsand condition numbers. showa small sensitivity of solution to changesin b.3166(1.~81) (1.12] --.0003) + 32 + 12 q.187) .Sbll _< Multiplying the left-hand and right-hand sides of Eqs.183) (1.20. the condition numberof matrix A is C(A) = I[AllelIA-~ lie = 3.20.

(1.b + Afx into Eq. (1.000. which can be added to i to give an improved approximation to x. The procedure can be repeated (i.192). (1. The accumulatedeffect of round-off is roundoff error in the computed values.190) 1.b (1..002.165): A= 1. (1. even thoughit is well-conditioned.000.. Eq. This is a precision problem.193) gives a system of linear algebraic equations for fx.1. 6b.193) (1. The inverse of matrix A is A-’ [ 10.191) by a direct elimination method yields ~. Further reduction in round-off errors can be achieved by a procedure known iterative improvement.4. the condition number matrix of A is C(A) = IlAIlell A-1 lie = (2. (1. LUfactorization should be used to reduce the .000 10. Rotmd-off errors in direct elimination methodsof solving systems of linear algebraic equations are minimizedby using scaled pivoting.0 This large condition numbershowsthat matrix A is ill-conditioned.58 Chapter 1 This relatively small condition number showsthat matrix A is well-conditioned.192) Subtracting A~---. convergencecheck on the value of fix can be used to determine if the procedure should be repeated. iterated) if necessary.001-10.191) gives A~ = A(x + 6x) = Ax +A fix = b + From first and last terms in Eq.159) is sensitive to the precision of the arithmetic (i. where ~ = x + 6x..e. as Considera systemof linear algebraic equations: Ax b ---(1.] (1.5 = 40.4.189) -1 The Euclidean normofA is [I. A fix = fbJ (1.00005)20. moresignificant digits) arithmetic. (1. (1. Thus.194) can be solved for fx.0001 The Euclidean normof matrix A is IIAlle = 2.e. where~ differs from the exact solution x by the error 6x. Round-offerrors in any calculation can be decreased by using higher precision (i.not a condition problem. As shown in Section 1.00005. If the procedure is iterated.5.194) Equation (1.6. -1 lie = 20. Substituting ~ into Eq. round-off effects).6.000] =_ -10. Considerthe coefficient matrix of Eq. Iterative Improvement In all direct elimination methods.e. is given the fb = A~ .A(x + fx) ---. Thus.191) Solving Eq.000. the effects of round-off propagate as the solution progresses through the systemof equations.

194) should be solved with higher precision than the precision used in the solution of Eq. the number iterations required to satisfy the convergence of criterion decreases.. it is generally moreefficient to solve such systems of linear algebraic equations by iterative methods than by direct elimination methods. of Iterative methodsdo not converge for all sets of equations. Gauss-Seidel iteration. This procedureis x repeated (i. Equation (1.for any initial solution vector. The Jacobi Iteration Method Consider the general system of linear algebraic equations.The procedure is convergent if each iteration producesapproximations the solution vector that approachthe exact solution vector as to the number iterations increases.e. Iterative methodsbegin by assumingan initial solution vector x(°~. The methodof iteration used.. but convergence not assured.15)]. The dominance of the diagonal coefficients. 3. Diagonal of dominance a sufficient condition is for convergenceof Jacobi iteration. As the diagonal dominance increases.191)... nor for all possible arrangements a particular set of equations. Ax = b. iterated) to convergence. 2.. The convergence criterion specified. the procedureshould be terminated. Three iterative methodsare presented in this section: Jacobi iteration. Diagonal dominance defined by Eq. The initial solution vector is used to generate an improved solution vector x(1) basedon somestrategy for reducingthe difference between (°) and the actual solution vector x. 1. n) (1. In other words. Ax= b.15). by row interchanges) to makethemdiagonally dominant. 2 . (1. (1. If the matrix is diagonally dominant[see Eq. the algorithm is repeated (iterated) until somespecified convergence criterion is achieved. Somesystems that are not is diagonally dominantcan be rearranged (i. Some systems that are not diagonally dominantmayconvergefor certain initial solution vectors. the coefficient matrix A is extremelysparse. Gauss-Seideliteration. Theinitial solution vector. That is.7. 4.7 ITERATIVE METHODS For many large systems of linear algebraic equations. The numberof iterations required to achieve convergence depends on: 1. (1.195) j=l . and successive-over-relaxation (SOR).e. and SOR. When repeated application of an iterative methodproducesinsignificant changes in the solution vector. Iterative methodsshould not be used for is systems of linear algebraic equations that cannot be madediagonally dominant.Systems Linear AlgebraicEquations of 59 computational effort since matrix A is constant. most of the elements of A are zero.1. Convergence achieved whensomemeasureof the relative or absolute changein the solution vector is less than a specified convergence criterion.jx j = bi (i = 1. 1. written in index notation: ~ ai..

..2.196) to yield the first improved solution vector (~).200a) (1.(1) la i. that is. n) (1.vJ j=l (i= 1. The Jacobi iteration method.200b) ~) wherethe term ri s called the residual of equation i.. The superscript in parentheses denotes the iteration number. 1 / i--1 n ~ xi=~i.2..2 . n) (1.. Xg.. T hus.199) is generally written in the form x(k+])_ ..198) can be obtained by adding and subtracting x}k) fromthe right-hand side of Eq.dxJ°~ ) (i= 1..21.=~i+lai.jx j) (i= 1.i ~ .. n) (1.198) Anequivalent. n) (1. Toillustrate the Jacobi iteration method.199) Equation(1. Thus..196) An initial solution vector x(°) is chosen. of the equations evaluated for the approximatesolution vector x values The Jacobi methodis sometimescalled the methodof simultaneous iteration because k+l) dependonly on the all values ofx i are iterated simultaneously. (1.... with zero denotingthe initial solution vector..bi ai. The residuals RI are simplythe net ~(~) (~).. but more convenient. values of xl Example1. """ ’ n) (1. The initial solution vector x(°) is substituted into Eq.j=l~’aidx!°)J i-I -..201) . (1.60 Chapter1 In Jacobi iteration. Theorder of processing the equations is immaterial..2 . n) (i = 1.198) to yield =xi i + 1 ( biai. iterated) until some convergence criterion is satisfied. The Jacobi algorithmfor the general iteration step (k) is: xi =-. all values ofx~ k).197) This procedureis repeated(i.i \h i ..That is.i j~----1 (k) S~a id j x j 1 aid k) (i = 1.. .i n j=l k (i : 1. each equation of the systemis solved for the component the solution of vector associated with the diagonal element.2 .let’s solve the followingsystemof linear algebraic equations: -1 0 1 0 4 -1 0 1 -1 4 --1 0 0 -1 4 -1 1 0 --1 4 x 2 x 5 100 1 X = 100 3 / x 1001 4 100_] (1.. form of Eq.j n ~ =/~+t a. (1.e.(~) k) R~ ai. .i~bi--j~=laidxj--j.2 ..

000000 25. Substituting these values into Eq.714284 35.375000 35.X4 = 100 X -.3) R4= 100--x (1.0 0. The calculations were carried out on a 13-digit precision computerand iterated until all IAx.203. Thus.375000 35. --x 1 +4x2 --x 3 +x5 = 100 --X 2 -[4X3 -.X "].4x2 + x3 .0 0.4) (1.x 2 5 R3 = 100+x 2 -4x 3 +x4 (1.000000 25.000000 25.000000 25.000000 25.000000 25. The procedure is then repeated with these values to obtain x(2~.3.4X -.all real calculations are performedwith finite precision numbers.202) can be rearranged to yield expressions for the residuals.000000 31.714284 35. Due to the symmetryof the coefficient matrix A and the symmetryof the b vector.000000 25.250000 34.2) (1.714285 35. the exact solution can be obtained.203.Systems Linear AlgebraicEquations of Table 1.156250 35. when solved by direct methods.000000 31.201).714285 0.578125 42.000000 25.202.x2 + x4 -.4x (1. x1 = x5 and xz = x4.000000 25.000000 25.625000 42. The first and subsequent iterations are summarizedin Table 1.X = 100 1 3 4 5 X2 -.X4 -~.857140 42.000000 0. (1.3.000000 25.250000 34. R~ = 100-4x~ +x 2 -x 4 R = 100 + xI .202.. when expanded.857143 0.156250 35.1) (1.7.I changed by less than 0.203.000000 25. becomes 4X1 --X 2 -~-X 4 = 100 (1.714285 0.0 0.202.3) (1. which required 18 iterations.0 0.203) gives °) = 100.000000 Equation (1.000000 25.000001 between iterations.1) (1.714285 35.200a) gives x]l)= x~l)= x~)= x]0= x~l)= 25.0].4x5= 100 Equation(1.000000 25.203.5) 5 5 To initiate the solution.000000 25.2.4) l+x3-4x4+x5 R = 100 .500000 40.0.000000 25.2) (1.202.000000 25.. 5).203.546875 35.so round-off errors pollute the .0 (i = 1. However.546875 35.857142 42. let x(°)r = [0.187500 42. 1.000000 37. Solution by the Jacobi Iteration Method k x 1 x 2 x 3 x 4 x 5 61 0 1 2 3 4 5 16 17 18 0.000000 25.000000 25. (1. In principle. .5) Ri.202. Substituting these values into Eq. Accuracy and Convergenceof Iterative Methods All nonsingular systemsof linear algebraic equations have an exact solution. etc.

7. a relative accuracy criterion is preferable.00001. whichhas five significant digits. Consider an iterative calculation for whichthe desired absolute error is +0.001000. machineaccuracyis not required.001to satisfy the relative error criterion.000. whichhas no significant digits. the numerical solution yields the exact solution within the round-off limit of the computing device. This exampleillustrates the danger of using absolute error as an accuracy criterion. There are two waysto specify error: absolute error and relative error.001. personal computer.000-t.an absolute accuracy criterion can be specified to yield a specified numberof significant digits in the approximatesolution.then the approximate value is 100. Arelative error criterion yields the samenumber significant figures in the approximatevalue. but even the most careful calculations are subject to the round-off characteristics of the computing device (i. When the numberof iterations increases without bound. or mainframecomputer). regardless of the of magnitude the exact solution.00001)= 4-0.204) and relative error is defined as absolute error Relative error -. However. hand calculator.exact value (1.62 Chapter 1 solution.001.001000.000. Consideran iterative calculation for whichthe desired relative error is 4-0.the term accuracyrefers to the number of significant figures obtained in the calculations. the iterative process should be terminated whensometype of accuracy criterion (or criteria) has beensatisfied. This yields five significant digits in the approximate value.7.e. Since the exact solution is unknown. In mostpractical solutions. When solved by iterative methods. When magnitudeof the exact the solution is known.then the approximate if value is 0.000x (4-0.4-0. of 1. Suchsolutions are said to be correct to machine accuracy. Convergence Convergence an iterative procedure is achieved whenthe desired accuracy criterion (or of criteria) is satisfied. If the exact solution is 100. workstation. and (c) each iteration through the system equations is independentof the round-off errors of the previous iteration. Accuracy The accuracy of any approximatemethodis measuredin terms of the error of the method.205) Relative error can be stated directly or as a percentage. error at any step in the iterative the . Otherwise. Iterative methodsare less susceptable to round-off errors than direct elimination methodsfor three reasons: (a) The system of equations is diagonally dominant.001000 x (4-0.00001)--. hand computation. Absolute error is defined as Absolute error = approximatevalue . 1. In iterative methods.2. Convergence criteria can be specified in terms of absolute error or relative error.0010004-0.0.the exact solution of a systemof linear algebraic equations is approachedasymptotically as the numberof iterations increases. Thus.2.1. If the exact solution is 100.00000001to satisfy the relative error criterion. This yields five significant digits in the approximate solution. (b) system of equations is typically sparse. If the exact solution is 0.2. then the absolute error must be 100..exact valu~ (1. Round-off errors can be minimizedby pivoting.001. and the term convergence refers to the point in the iterative process whenthe desired accuracyis obtained.then the absolute error must be 0. the exact solution is 0.

(1. by using (~+l) x) values in the summation fromj = 1 to i . Thus..7.206) For a relative error criterion. They are relevant to the solution of eigenvalue problems(Chapter 2). Therefore.210) i-1 x) R}k) : bi .care is be neededto ensure that the desired accuracyof the completesystemof equations is achieved. brief. The Gauss-Seidelmethodis similar to the Jacobi method. .J V J=~ The Gauss-Seidel methodis sometimescalled the methodof successive iteration because the most recent values of all xi are used in all the calculations..208) k) Equation(1. Several convergence criteria are possible.except that the most recently computed values of all xi are used in all computations.207) i=1 i=1 \ Xi /I J -- The concepts of accuracy and convergence discussed in this section apply to all iterative procedures. the magnitudes the residuals Ri. ~i = "xi" (k+l) __ Xiexact . to the solution nonlinear equations (Chapter3). use themimmediately.i (i = 1. Eq. someof the residuals may near zero while others are still quite large.x} The error canalso be specified by. k).n) .1 (assuming the sweeps through the equations proceed from i = 1 to n). Gauss-Seidel iteration generally converges faster than Jacobi iteration.. the residuals are all zero.(k+l)__ j=t j=i+l ~ ai. ¯ (1. the Gauss-Seidel methodrequires diagonal dominanceto ensure convergence¯ The Gauss-Seidel algorithm is obtained from the Jacobi algorithm.~ ai. is approximatedx} by k+l)... .208) can be written in terms of the residuals Ri by addingand subtracting xl from the right-hand side of the equation and rearranging to yield X(k+l) ~_. . a.jx~:)) (i= 1 2. the error. for the iterative solution of a system of linear algebraic equations.209) (i = 1.. The Gauss-Seidel Iteration Method In the Jacobi method. 2 .198). Let 5 be the magnitude the convergence of tolerance. etc... as better values ofxi are obtained. In Like the Jacobi method. xlk+l) : 1 bi ai.. For an absolute error criterion... When exact answer(or the exact answerto machine of the accuracy)is obtained.i ( \ -- i~ aid’~J .3.j j=l ~-.not just the iterative solution of a systemof linear algebraic equations. all values of x(~+1) are based on x(~). At each step in the iterative procedure.x!~) ~ ~.~ X}k) k) R} + -i ai. ’ (1. the followingchoices are possible: n Vn 1/2 "1 I(AXi)maxl _<5 ~IAxil _<5 i=1 or ki=l J <5 (1. 2 . 1. the followingchoices are possible: _ 5 (1.Systems Linear AlgebraicEquations of 63 process is based on the changein the quantity being calculated from one step to the next¯ Thus. n) (k+l) __ (1.

l) ~) x(4l) = 26. which is three less than required by the Jacobi methodin Example1.0 0. The first and subsequent iterations are summarized Table 1.857142 42.714285 35.and x~ = 23. Substituting that result into Eq.857143 42.714286 23. and then relaxing the corresponding equation by calculating a new value of xi so that (Ri)ma x = 0.0 0. The calculations were carried out on a 13-digit precision computerand iterated until all [Axi[ changedby less than 0. the relaxation order is determined by visually searching for the residual of greatest magnitude. the methodof relaxation.022510 25.0 0.953125 34. (1.000000 Example 1. Southwell’s relaxation method embodies two procedures for accelerating the convergenceof the basic iteration scheme. whichrequired 15 iterations.1) gives l) = 25. R~ = 95.211b) Continuing in this manner yields R~l)= 131.740234 34.000000 32.IRilmax.4. Substituting the initial solution vector. Solution by the Gauss-Seidel Iteration Method k x l x 2 x 3 x 4 Chapter1 x5 0 1 2 3 4 5 13 14 15 0.686612 35.4. The Gauss-Seidel iteration method. refers to a specific procedureattributed to Southwell (1940).000000 25.863474 42.0 0. The intermediate in iterates are no longer symmetrical as they were in Example 1. Historically. The residuals are given by Eq.363453 42. since the iterative procedurecan be viewedas relaxing x(°) to the exact value x.808502 24. 1.0.210.000000 0.926760 25.925781 25.498485 35.173340 42.812500 40. (1. As the other residuals are relaxed.714286 0.0 =31. or just the term relaxation.714286 35.947586 35. x~l)= 32.191498 25.25 (1. The Successive-Over-Relaxation (SOR) Method Iterative methodsare frequently referred to as relaxation methods.250000 33.714285 35.mple1.1)gives R~°)=100.21 la) Substituting this result into Eq.0 0.184757 25. First.81250.000002 25.073240 25.0. x(0)T [0.64 Table 1.0 + 25.2) gives R(1) = (100.000001betweeniterations.999999 25. into Eq.000000 26. (1.925781.210.21.791447 35.209. .210). the value of R moves awayfrom zero. This changesthe other residuals that dependon xi. Substituting x v= [25.857143 0.000000 25.752489 35.506226 35.21.662448 35.000001 25.0 0.815243 24.0 2 (1. (1. Let’s rework the problem presented in Exa.81250. (1.000000 26. R(41) = 107.0.074219 24.21 using Gauss-Seidel iteration.209.22.0) = 125.703125.953125.000000 31.4.2) yields =0.7.250. The procedureis applied repetitively until all the residuals i satisfy the convergence criterion (or criteria).796274 42.0] into Eq.0+125.714287 35.0 0.0].0 0.

Southwell’smethodis quite efficient for hand calculation.0. there is not a goodgeneral methodfor determiningthe optimum over-relaxation factor. the system of equations is under-relaxed. 4 Southwell observedthat in many cases the changesin xi from iteration to iteration were alwaysin the samedirections. Underco relaxation is appropriate whenthe Gauss-Seidelalgorithm causes the solution vector to overshoot and movefarther awayfrom the exact solution. (1. the successive-over-relaxation methodis given by (1. by the over-relaxation factor.209). This procedureis illustrated in Figure 1.e.0. Unfortunately. of .Systems Linear AlgebraicEquations of 65 relaxation 2 3 Relaxation step k Figure 1. When < co < 2.0. Over-relaxationis appropriate for systemsof linear algebraic equations. On the other hand. the over-relaxation concept is easy to implementon the computerand is very effective in accelerating the convergencerate of the Gauss-Seidel method. Eq. (1. The Gauss-Seidel method can be modified to include over-relaxation simply by multiplying the residual RIk) in Eq.0 the systemof equations is over-relaxed.4.4 Over-relaxation. search for the the largest residual is inefficient for computer application.. over-relaxing) the values of xi by the right amountaccelerates convergence.e. The relaxation factor does not change the final solution since it multiplies the residual Ri. the The major difficulty with the over-relaxation methodis the determinationof the best value for the over-relaxation factor. Consequently. the number equations) and the nature of the equations(i. The optimum value of the over-relaxation factor COop dependson the size of the t systemof equations(i. Thus. However.212) yields the Gauss-Seidel method.. The iterative methoddiverges if co > 2.e..0. co 1.212) (l.213) When = 1. whichis zero when final solution is reached. This behavior is generally associated with the iterative solution of systems of nonlinear algebraic equations. co. co. When < 1. over-correcting (i. coopt. since it can take almostas much computer time to search for the largest residual as it does to make completepass through a the iteration procedure.

For serious calculations with a large number equations.2) gives x~1) = 0.0 ÷ 27.50 Substituting this result into Eq. the potential is too great to ignore.6.).66 Chapter 1 strength of the diagonal dominance. The first and subsequent iterations are summarized Table 1. larger values of coopt are associated with larger systems of equations.714285 35. The value of over-relaxation is modestin this example. As general rule. (1.the structure of the coefficient matrix.0 0.0 0.149503 25.1) with co = 1.23. Its value becomes moresignificant as the numberof equations increases.1)gives R~°)= I00.151602 35.0 0.500000 26. into Eq.714286 0.000000 25.10 gives 100.968342 35. In Section 9.010375 24. Substituting the initial solution vector.0 0.714286 35.000000 37.the computationtime can be reduced by factors as large as 10 to 50. To illustrate the SOR method.996719 25.857143 42.0].213.0 x~1) = 0. Substituting that value into Eq.000000 30. one must resort to numericalexperimentation to determinecoopt" In spite of this inconvenience.480925 42.915308 42.355629 25.714286 0.000001betweeniterations.214c) (1.857143 0.000000 25.5.212. The calculations in were carried out on a 13-digit precision computerand iterated until all IAxil changedby less than 0.10.714287 35. (9. etc.717992 35.855114 24.500000 Substituting xr = [27. The residuals are given by Eq. Eqs.214a) Continuingin this manneryields the results presented in Table 1.10 ~ = 27.212.692519 35.100497 24.000000 25.000000 X 2 X 3 X 4 X 5 0.0 0.062500 4 (1.062500 34.213.51) and (9. which required 13 iterations.0] into Eq. which is 5 less than required by the Jacobi methodand 2 less than required by the Gauss-Seidel method.a near optimum value of co if a systemof equations is to be solved many times. In someproblems.213).50 0.5. a procedureis described for estimating coopt for system of equations obtained whensolving the Laplace equation in a rectangular domain with Dirichlet boundaryconditions.000000 35.000000 27.142188 41.0.167386 25.905571 35.714286 35.914285 42. x(°)r=[0. Table 1. k 0 1 2 3 4 5 11 12 13 Solution by the SORMethod x 1 0.999996 25.52).2) gives °) R~ = (100. (1. of Example 1.194375 35.22 using co = 1. In general.875627 42.50) = 127.000000 .50_ 35.857145 42. (1.0 0.0 ÷ 1.5.726188 35.000000 26.214b) (1.it is almost alwaysworthwhile to search for.419371 24.0 + 1.0 0. (1. (1. let’s rework the problempresented in Example1.230346 35.10 127.987475 24. The SOR method.790750 35.

Number Iterations k as a Funcof tion of m ~o 1.215a) (1.216b) .00 1. j bi =bi . Simple GaussElimination The elimination step of simple Gausselimination is based on Eq. n bi ~ aidxj j=i+ 1 ai.11 k 13 13 13 13 13 13 ¢o 1.05 < ~o < 1. 3.10 1. Input data and output statements are contained in a main(or drives) program written specifically to illustrate the use of each subroutine.06 1.14 1. 1) (1.~)b~ (i.(ai..04 1. 2.. then a search for O)op t maybe worthwhile. n-l)... Much more dramatic results are obtained for large systems of equations.. 4.05 k 15 14 14 14 14 13 ¢o 1.Systems Linear AlgebraicEquations of Table 1.100)..12 1.02 1.03 1. Ira problemis to be workedonly once.23..2 .i (1. Simple Gauss elimination Doolittle LUfactorization The Thomasalgorithm Successive-over-relaxation (SOR) The basic computational algorithms are presented as completely self-contained subroutines suitable for use in other programs.6.k)ak.15 k 13 13 13 14 67 The optimum value of~o can be determined by experimentation.. aid = aid -.8. (1.k/a~. For each column k(k= 1. n) n) (1.k/ak.216a) x i- (i=n--l.k+2 ..13 1..08 1. that procedure is not worthwhile.8... 1.101): X n = bn/an. 1.6 presents the results of such a search for the problem considered in Example 1.1. PROGRAMS Four FORTRAN subroutines for solving systems of linear algebraic equations are presentedin this section: 1.n--2 .(ai. .1. Table 1.14 yields the most efficient solution.01 1.. a problemis to be worked if many times with the same A matrix for manydifferent b vectors.j=k+l.k+2 (i=k+l..215b) The back substitution step is based on Eq..09 1. (1. For this problem.However.07 1..

b ( ndim) .3) / 80. n.j=l. a (i. n-i do i=k+l. x(i) dimension a(6.b.0.i=l. (a(i.3) /-20. 20. 20.j). ndim = 6 in this example number of equations. x (ndim) forward elimination do k=l. end do end do end do c c c c c c .0.n write (6. n.x) simple gauss elimination dimension a (ndim.a.8. program main main program to illustrate linear equation solvers ndim array dimension. 1000) do i=l.2.0.j=l.i=l.n / 6. -20. 7f12. 1020) do i=l.3) / 20. for solving these equations. j) =a (i. Program1.i=l. 3 / data (a(i. era.6) 1020 format (" ’/’ A.0 data (a(i.j) a b right-hand side vector.0 write (6.x) write (6.3) /-20.b. A(i.0 data (a(i.n a (i.j).6) . (a(i.0 data (b(i). k) =em b (i) =b (i) -em *b do j=k+l. Simple Gauss elimination program.0. calls subroutinegauss to implement main the solution. and x after elimination’/’ ") end subroutine gauss (ndim.3). b.68 Chapter 1 A FORTRAN subroutine.0.0.1010) i.l).n).2). Note that the eliminated elements from matrix A have been replaced by the elimination multipliers. 130. -20. and prints the solution.x(6) data ndim. n n coefficient matrix. b(i) x solution vector.0. is presented below.n). subroutine gauss.a.b(i) end do call gauss (ndim. Program defines the data set and prints it.b(6) .b(i).1. whichis presented in Section 1. -20. ndim) .n em=a (i. 40. j) -em*a(k. -20.0. without pivoting. k)/a (k.x(i) end do stop ’/" A and b’/" ’) 1000 format (’ Simple Gauss elimination’/’ I010 format (i2.n write (6.1010) i.i=l. so subroutine gauss actually evaluates the L and U matrices neededfor Doolittle LUfactorization.

kb ~ (i = 2.000000 -20. A secondsubroutine. (1. n) k=l (1. 1) FORTRAN subroutines for implementing Doolittle LUfactorization are presented below.8. Solution by simple Gauss elimination.857143 0.000000 -20.7.000000 -20. /a end do return end The data set used to illustrate subroutine gauss is taken from Example 1.~ ui.~ li.000000 Gauss elimination A.kxk/ui.1 is modifiedto evaluate the L and U matrices simplyby removing line evaluating b(/) from the first groupof statements and entirely the deleting the second group of statements. and x after 1 2 3 80. The output generated by the simple Gauss elimination programis presented below.. Doolittle LU Factorization Doolittle LUfactorization is based on the LUfactorization implicit in Gausselimination.Program defines the data set and prints it. subroutine solve.2 . calls subroutinelufactor to evaluate main the L and U matrices. b.000000 20.600000 1.. n ..2.000000 -0.-I x(i)=b(i) do j=n.. Thesesteps are given by Eqs.Systems Linear Algebraic Equations of c back substitution 69 x (n)=b(n)/a (n.000000 107.000000 -20..000000 35. calls subroutine solve to implementthe solution for a specified b .000000 -20.139) and (1.000000 0.217b) xi = b~ .142857 20.000000 20. -i x(i) =x(i) -a (i. i+l..000000 130.217a) (1.000000 25.000000 40.4.8. Simple A and b 1 ¯2 3 80. which evaluates the back substitution step. j) end do x(i) =x(i) (i. based on steps 2 and 3 in the description of the Doolittle LUfactorization methodin Section 1.1.250000 -0. do i=n-l. Output1.000000 20. 3 .000000 42.250000 elimination -20. i k=i+1 (i = n . is required to process the b vector to the b’ vector and to process the b’ vector to the x vector.000000 -25. The modified subroutine is namedsubroutine lufactor.140): i-1 b’i = bi .1. Subroutine gauss presented in Section 1.400000 -20.714286 1..l.000000 -20..000000 -0.

bp(i).b(i). which are c c program main main program to illustrate linear bp b’ vector.n bp(i)=b(i) do j=l. bp(i) dimension a(6.bp(6) equation solvers 1000 1010 1020 1030 call lufactor (ndim.j).1010) i. b ( ndim) . n a (i.b(6) .1010) i.n) end do call solve (ndim. k)/a (k. and x vectors’/’ ’) end subroutine lufactor (ndim.k) =era do j=k+l.8. (a(i. stores dimension a (ndim.6) format (" "/" L and U stored in A’/’ ’) format (" ’/’ b. and prints the solution. 6) .2. Doolittle LU factorization program. end do end do end do return end .) L and U in A subroutine solve (ndim.b. Program main below shows only the statements different from the statements in program main in Section 1. bp.n em=a (i. ndim) do k=l.1030) do i=l. x ( a forward elimination step to calculate b’ bp(1)=b(1) do i=2.j) end do end do . n. Program 1. j) -em*a(k.a. x) processes b to b’ and b’ to x dimension ( ndim.ndim) . i-I bp(i)=bp(i)-a(i.x) write (6. bp ( ndim).n write (6.j=l. a (i. 7f12.70 Chapter 1 vector. bprime. bp.a) write (6.n write (6.n-i do i=k+l.x(i) end do stop f o rmat (" Doolittle LU factorization’/’ ’/’ A and b’/" forma t (i2. n.1020) do i=l. a. n.a) Doolittle LU factorization.1. b. j) =a (i. n.

2)ai_l. Output 1.000000 25.(a~.000000 130. -i x(i)=bp(i) do j=n.~Xi+l)/a~i.(a~.714286 b. (1.218d) is based on Eq.600000 1.a~i.2. 2 -= a~l.000000 35.000000 107.000000 and x vectors 20.218a) 3 (i = 2. 2 : a~.219b) .150) and (1.152): (1. 3 ..000000 40.. step (i = 2.000000 42. 2 -.3. LU fac~orization L and U matrices 1 2 3 80.8.. 3 .000000 20. 1) (1.000000 0.l/aS_l. Doolittle A and b 1 2 3 80.400000 1.2)bi_ The back substitution X = bn/atn.15. bprime.000000 stored -20.857143 0.1. j) end do x(i) =x(i) /a (i.000000 -20. n .000000 -0.2 a~.250000 -0. The Thomas step Algorithm of the Thomas algorithm is based on Eqs.000000 -20.000000 Solution by Doolittle LU factorization..Systems of Linear Algebraic Equations x 71 back substitution step to calculate x (n) =bp (n)/a (n.142857 -20.000000 20.218c) 1 The elimination al. i+l.000000 -20..2 .. i 2 3 20. n 2 xi = (b i .000000 -20.218b) (1.000000 -20. (1..219a) 2 (i = n . do i=n-l. i..000000 -0. b1 = b 1 bi = bi -. n) (1... n) (1.151): (1. -I x(i) =x(i) -a (i.250000 in A matrix -20.000000 -25. The output generated by the Doolittle LU factorization program is presented below. end do return end The data set used to illustrate subroutines lufactor and solve is taken from Example 1..l/a~_l..

0. 1. 0. for solving these equations is presented below.72 Chapter 1 A FORTRAN subroutine.j=l. (a(i. -2.6f12.6) 1020 format (’ ’/’ A.a. A(i.n em=a (i.j=l.3) / 1.j=l. 1.3. program main main program to illustrate linear equation solvers ndim array dimension.0 data (b(i).3) . 1. -2.0.0 data (a(2. -I00.25.x(9) data ndim.25. 0. n.0. em.n write (6.b. (a(i.j). so subroutine thomasactually evaluates the L and U matrices needed for Doolittle LUfactorization.0 data (a(6. 1.0.b(i).b.x) the Thomas algorithm for a tridiagonal dimension ( ndim.3) b right-hand side vector. b. x ( ndim a forward elimination do i=2. 0.0 / write (6. -2.25.25. 1.3) / 1.3).j).j=l. 1. and x after elimination’/" ") end subroutine thomas (ndim.0 data (a(4.0.j=l. 0.3) / 1.0. the Program 1. 7 / data (a(l.7) / 0.j=l.0. b(i) =b(i) -a (i.25. a (i.0. b ( ndim) .3 ). n n a coefficient matrix.j). and prints the solution. i)/a (i-l. The Thomas algorithm program. ndim -. -2.1000) do i=l.1010) i.j).b(9) .3) / 0.j). subroutine thomas. n / 9.j=l.x) write (6.n. calls subroutinethomasto implement solution.0 data (a(3.j).25.) 1010 format (i2.0 data (a(5. x(i) dimension a(9. j).j=l.0 data (a(7. 2) -em*a(i-l. n write (6.3) / 1. i) *b(i-l) end do c c system . j).25.0.3). Program main defines the data set and prints it.3) / 1.i) =era a (i.3) / 1.0.1010) i. -2.i=l.x(i) end do s top 1000 format (" The Thomas algorithm’/" "/" A and b’/" .0.j).0.a. -2. Note that the eliminated elements from matrix A have been replaced by the elimination multipliers. 0.j=l.b(i) end do call thomas (ndim.9 in this example number of equations. b(i) x solution vector.1020) do i=l. -2. 2) =a (i.

j j A FORTRAN subroutine. "¢i(k+l) (k) ~ Xi + ~O--’ ai.000000 0. Solution by the Thomas algorithm. (1.n) (1. Input variable iw is a flag for output of intermediate results.~ .000000 0.000000 1.000000 1.000000 1. and prints the solution.552144 22.000000 algorithm A.000000 1.000000 -2. Successive-Over-Relaxation (SOR) Successive-over-relaxation (SOR)is based on Eqs.000000 0.(k+~) (i = 1.647747 -1. n) __ ’a’’x !k) (i: 1.000000 0.000000 1.000000 1.000000 1. subroutine sot.696154 -1...l.000000 1. When = 1.4.8.553846 -0.923667 1.250000 -2.250000 -1.000000 0.000000 0.444444 -0.641398 1.805556 -1.000000 -0. .2. no intermediate results are output.000000 0.000000 -100.606889 -0.Systems Linear AlgebraicEquations of c back subscitution x (n) =b (n)/a (n. When = 0.212) and (1.000000 0..966751 4.425190 7.660431 -1.000000 1.989926 13. intermediate iw iw results are output. 2.000000 1.608602 -2. 3) *x(i+l))/a end do re t urn end 73 The data set used to illustrate subroutine thomasis taken from Example 1.589569 -0.602253 -0. Program maindefines the data set and prints it.250000 elimination 1.j~j j=l .220a) ..000000 -100.i. .i i-1 .250000 -2. The Thomas A and b 0.250000 -2.000000 1.250000 -2.502398 37.220b) ~) RI = bi . calls subroutine sor to implement the solution.213): .000000 1.-I x(i) = (b(i)-a (i. and x after 1 2 3 4 5 6 7 0.17.000000 0.000000 0. do i=n-l.000000 0.250000 -2. (1.3. The output generated by the Thomas algorithm programis presented below..000000 1.000000 0. for solving these equations is presented below.000000 0.643111 -1.000000 1. Output 1. b.078251 60.000000 1.¯ ’.000000 0.250000 -2.000000 1. .000000 1.~.

0. 5. i t) if (iw. -I.5) / 0.0.0. -I.0.l). x.0 / data (a(i. dxmax) dxmax=abs (residual) x (i) =x (i) +omega*residual/a (i. 0.1020) it=O write (6. iw.74 Program1.4).0.iw.2).5) / I00.1010) i. end do c .1010) it. 7f12.n) call sor (ndim. b. a. 1000) do i=l. b.0. 1.0 / data (a(i.i=l. 0. omega.x(6) data ndim. omega.0.0 data (x(i). 0.4. -1. x (ndim) do it=l. ndim) . i ter.i=l. (x(i). x. n.0.6) . Successive-over-relaxation (SOR) program. eq. 100.0 / data (a(i.j).0.0.1. 0. Chapter 1 c c c c c c c c c c program main main program to illustrate linear equation solvers ndim array dimension. n a coefficient matrix.5) / 0. b (ndim) .n) stop 1000 format (’ SOR iteration’/" ’/’ A and b’/" ’) 1010 format (i2. i t) sor iteration dimension a (ndim.n residual ( i =b do j=l.0.i=l.b(6) .0.6) 1020 format (’ ’/’ i x(1) to x(n)’/’ ’) end subroutine sor (ndim.0 write (6.0. -i.i=l. 0 do i=l. -1.n).0.b(i) end do write (6.i=l.j) b right-hand side vector.O) write (6. iter.0. ndim = 6 in this example n number of equations.i=l. 4.5) / 0.0 / data (a(i.5) /-1. tol.iter dxmax=O.0.i=l.i=l. 0. n. tol .3). 100. j ) *x ( -a end do if (abs (residual) . n. 1. 000001. a. 100. gt.0.n residual=residual ( i .0.0.i=l. 4. -1.0.1 / data (a(i. I00.5).0.0.n write (6. -1. 4. iw / 6. 0. tol. 0.0. (x(i).1010) it.5) / 1. x(i) i ter number of iterations allowed tol convergence tolerance omega over-relaxation factor iw flag for intermediate output: 0 no. A(i. (a(i. 0.0.5) / 4. 0.j=l.omega. 1 yes dimension a(6. b(i) x solution vector.0. i ter. 0.0.0 / data (b(i). 25.

such as Mathematica. the book Numerical Recipes (Press et al.947586 42.714286 35.8.000000 0. .815243 35.000000 4.000000 -1.1010) return 1000 format (i2.000000 0.lt.000000 26.857143 0. and Maple. eq.1000) it.i=l.714286 35.000000 1.740234 40.074219 33. libraries such as IMSL(International Mathematics and Statistics Library) or LINPACK (Argonne National Laboratory) can be added to the operating systems.184757 35.812500 26.000000 -1.tol) return end do write (6.000000 4.000000 i00.000000 -i. Packages for Systems of Linear Algebraic Equations Numerous libraries and software packages are available for solving systems of linear algebraic equations.000000 1.808502 34.000000 1.000000 -1.000000 -1.000000 (SOR). Macsyma. 1989) contains several subroutines for solving systems of linear algebraic equations. x(1) to x(n) 0.925781 34.000000 0.000000 25.4.173340 24. The output generated by the SORprogram is presented below.000000 0.000000 0.Systemsof Linear Algebraic Equations if (iw.953125 23. (x(i). Finally.000000 -1.000000 -1.000000 4.23.250000 32. Many work stations and mainframe computers have such libraries attached to their operating systems.000000 0.n) if (dxmax.000000 0.363453 24.5.498485 42.l) write (6.000000 35.000000 0.000000 25.000000 100.000000 1.000000 0.000000 100.857143 42. More sophisticated packages.000000 31.6) 1010 format (" ’/’ Solution failed to converge’/" ’) end 75 The data set used to illustrate subroutine sor is taken from Example1. Solution by successive-over-relaxation SOR iteration A and b 1 2 3 4 5 i 0 1 2 3 4 15 16 4.000000 25. Some of the more prominent packages are Matlab and Mathcad.191498 35.714286 42.686612 25.000000 -1. The spreadsheet Excel can also be used to solve systems of equations.000000 1.000000 0.000000 100.. Most commercial sottware packages contain solvers for systems of linear algebraic equations.714286 25. If not. 7f12.000000 4. Output 1. also contain linear equation solvers.506226 25.000000 0.073240 35.796274 25.000000 100.791447 25.

Define the determinantof a matrix. Iterative methodsare preferred for large. Somegeneral guidelines for selecting a methodfor solving systems of linear algebraic equations are given below. Explain howDoolittle LUfactorization is obtained by Gausselimination. Solve a systemof linear algebraic equations by Doolittle LUfactorization. 12. the identity matrix. 19. Explain the general concept behind direct elimination methods. 25. 7.. LUfactorization methods(e. the Doolittle method)are the methodsof choice when more than one b vector must be considered. 5. ApplyCramer’srule to solve a systemof linear algebraic equations. Performelementary matrix algebra. 10. 3. 14. Evaluate 2 x 2 and 3 x 3 determinants by the diagonal method. sparse matrices that are diagonally dominant. 22. 18. you should be able to: 1. upper and lower triangular matrices. Recognizethe structures of square matrices. and singular matrices. 15. Evaluate a determinant by the cofactor method. Solve a system of linear algebraic equations by Gausselimination. 23. b ¯ ¯ ¯ ¯ After studying Chapter 1.g. For tridiagonal systems. 9. Describe the general structure of a systemof linear algebraic equations. 4. Understandpivoting and scaling in Gauss elimination. Explain the concept of matrix factorization. Gauss elimination is the method choice. tridiagonal matrices. Evaluate a determinant by Gauss elimination.the round-off errors can be large. 24. ¯ Direct elimination methodsare preferred for small systems (n ~< 50 to 100) and systems with few zeros (nonsparse systems). Understandmatrix row operations. Understand the concept of diagonal dominance.76 1. diagonal matrices. 21. . the Thomas algorithm is the methodof choice. (c) an infinite number solutions. of (d) the trivial solution. 8. Explainthe solution possibilities for a systemof linear algebraic equations: (a) a unique solution. Solve a system of linear algebraic equations by the matrix inverse method. 2. Express a systemof linear algebraic equations in matrix form. 6. 11. Solve a system of linear algebraic equations by Gauss-Jordanelimination. The SOR methodis the methodof choice.9 SUMMARY Chapter1 The basic methodsfor solving systems of linear algebraic equations are presented in this chapter. For large systems that are not diagonally dominant. Numerical experimentation to find the optimum over-relaxation factor ~Oop is usually worthwhile if the t systemof equations is to be solved for many vectors. banded matrices. 13. 20. 16. Understandthe elementary properties of matrices and determinants. Determinethe inverse of a matrix by Doolittle LUfactorization. Understandthe differences between direct elimination methodsand iterative methods. 17. (b) no solution. Determinethe inverse of a matrix by Gauss-Jordanelimination. Understandthe concept of the inverse of a matrix.

37. 43. the Solve a tridiagonal system of linear algebraic equation by the Thomas algorithm. A+D.B. (e) A .C. (h) the 3. 36. 45. Solve a systemof linear algebraic equations by Gauss-Seideliteration. (d) B .2. 42.O. 31. Explain the significance of the condition number a matrix.C. h) CrB. for Solve a systemof linear algebraic equations by Jacobi iteration.Systems Linear AlgebraicEquations of 26.(c) CA. Determine followingquantities. CrA. (d) A+C. 44. 28. Define the meaning significance of the residual in an iterative method. b2. Understand difference betweenabsolute error and relative error. Appreciatethe importanceof using the optimum overrelaxation factor. 38. if defined: (a) A + B. and b3 are the columnsof B. Explain the concept of system condition. 77 Understand special properties of a tridiagonal matrix. Describethe effects of ill-c0nditioning. Show that: (a) (AB)r = BrAr and (b) AB = (Ab~ 2 Ab . 40. Understand the concept of block tridiagonal systems of linear algebraic equations. 6. (e) 2.A. (k) BB (1) DO ( 4.B. (i) C rD. the Explain the importance of diagonal dominance iterative methods. Determine the following quantities. Understand pitfalls of direct elimination methods.(d) r. (b) B. 29. Determine the following quantities. if defined: (a) A. 39.(f) BD. WorkProblem 5 for the general 3 x 3 matrices A and B. (e) AD. Define the condition numberof a matrix. (c) ComputeBDand DBand show that BD~ DB. (b) B + A. 33. 32. Solve a system of linear algebraic equations by successive-over relaxation (SOR). Verify that (a) A + (B + D) = (A + B) + D and (b) A(BD) . 27. A . the Understandconvergenceof an iterative methodand convergencecriteria. 35. EXERCISE PROBLEMS Section 1. of Explain the general concept behind iterative methods. the Explain the effects of round-off on numerical algorithms. Understand structure and significance of a sparse matrix. (a) Compute ABand BAand show that AB5~ BA.O. 41. (g) C . (b) Compute ADand and show that AD5~ DA. 46. 34. wh 3) ere bl . and Explain accuracy of an approximate method.(b) BC. ~Oop t. Define the normof a vector or matrix. Choosea methodfor solving a system of linear algebraic equations based on its size andstructure. 7. 5. if defined: (a) AC. (j) A r. r. 30. (f) B . Properties of Matrices and Determinants The following four matrices are considered in this section: A= 5 3 2 3 B= 1 3 C= D= 3 3 3 5 1. 47.

x4 = 9 1 2 4x~ + 2x2 + 5x3 + x4 = 27 3X -. (B) by Cramer’s rule. WorkProblem 8 using the cofactor method. without pivoting. Section 1. GaussElimination 21. Solve Eq. 10. if defined: (a) det(A) (b) det(B) (c) det(C) (d) det(D) (f) det(AD) (g) det(BA) (h) det(I)A) (i) det(CD) 9. Solve Eq. Ax= b: 3x1 + 4x2 -. 14. 15. Solve Eq.78 8. 26. Solve Eq.2x3 -. Showthat det(A) det(D) = det(AD). Solve Eq. Solve Eq. Solve Eq. Solve Eq.2X + 4X = 19 1 2 3 4 -x 1 +2x2. Solve Eq. (A) by Cramer’s rule. 25.3. Solve Eq. 17. . (E) by Cramer’s rule. Solve Eq.0 (A) x1 -. 18. without pivoting. (F) by Cramer’s rule. Solve Eq. (G) by Cramer’s rule.X = -4 3 X + 3X -I. 16. 20. 27. (H) by Cramer’s rule. without pivoting. Direct Elimination Methods Consider the following eight systems of linear algebraic equations. Showthat for the general 2 × 2 matrices A and B. without pivoting. 19. Solve Eq. det(A) det(B) = det(AB).2X2 q.3x3 ÷5x4 = 14 3 (C) 5 3 1 2 3 1 1 1 3 1 -1 -2 2 1 3 0 -2 5 (B) 4 [Xi] = (D) - 1 1 x2 = x 3 (E) 3 1 -2//x2/= 4 J kX3 J (F) Cramer’s Rule 13. 28. without pivoting. Chapter 1 Calculate the following determinants by the diagonal method. Showthat det(A) det(B) = det(AB). 11. Solve Eq. without pivoting. 24.3X -’l. (A) (B) (C) (D) (E) (F) (G) (H) by by by by by by by by Gauss elimination Gauss elimination Gauss elimination Gauss elimination Gauss elimination Gauss elimination Gauss elimination Gauss elimination without pivoting. (D) by Cramer’s rule. (C) by Cramer’s rule.5x3 ---. 12. without pivoting. Solve Eq. 3 4 -4 2 3 -2 = -1 (G) 2 l||x 21 3 43 [_x3_ J = - (H) Solve Eq. 22. 23.

41. 40. 46.4. method. method. (D) by Gauss-Jordanelimination. 49. (D) by the Solve Eq. Section 1. (B) using the Solve Eq. (E) by the Solve Eq. Solve Eq. (A) using the Solve Eq. (A) by the Solve Eq. method. method. method. Solve Eq. Solve Eq. 35. (B) by Gauss-Jordanelimination. Solve Eq. Tridiagonal Systemsof Equations Considerthe following tridiagonal systems of linear algebraic equations: 2 1 0 x2 1 2 1 0 ] 2_]Lx4_ ] 0 x 2 1 -2 1 x 0 1 -2 / kx43 4 -1 -1 4 0 -1 8 2 3 2 0/Ix2/: x3 ---. (G) by Gauss-Jordanelimination. 31. method. (C) using the Solve Eq. (E) using the Solve Eq. 79 The MatrixInverse Method 37. (F) by the Solve Eq. (F) using the Solve Eq. 38. (C) by the Solve Eq. method. 51. 42.Systems Linear AlgebraicEquations of Gauss-Jordan Elimination 29. (C) by Gauss-Jordanelimination. (G) using the Solve Eq. ] -2 1 0 17 14 (J) 7 0 3 0 -1 0 0 2 -7 L-1 (K) 1 k 0 - 1 -2 1 / / x3 / 1 -2 / Lx4J (g) -- 12001(M) [Xi]= 1150 / [_100_] 2 -1 0 -1 2 -1 0 -1 2 x2 x 3 x 4 = (N) . 47. 48. Solve (H) by Gauss-Jordanelimination. Solve Eq. 44. 50. (A) by Gauss-Jordanelimination. (E) by Gauss-Jordanelimination. 39. 32.5. method. 36. 52. method. method. Solve Eq. method. method. (B) by the Solve Eq. method. 34. Solve Eq. (G) by the Solve Eq. method. (H) by the Doolittle Doolittle Doolittle Doolittle Doblittle Doolittle Doolittle Doolittle LUfactorization LUfactorization LUfactorization LUfactorization LUfactorization LUfactorization LUfactorization LUfactorization method. (F) by Gauss-Jordanelimination. 43. Section 1. 33. Solve Eq. (H) using the matrix reverse matrix inverse matrix inverse matrix inverse matrix inverse matrix inverse matrix inverse matrix inverse method. (D) using the Solve Eq.12 (I) 0 2 3 2//x3/ 0 [_]lJ L 0 2 3_][_x4. Solve Eq. LU Factorization 45. 30.

(I) by Jacobi iteration. Let x(°)r = [0. Solve Eq. Section 1.35 with Ao9 = 0.35 with Ao9= 0. 78.01.27.0 0. Solve Eq.25 < o9 = 1.25. (N) by Gauss-Seidel iteration. Solve Eq. (N) by the SOR methodfor 1.2.8.01. Methods Chapter 1 Section 1.25 < co < 1.25 < o9 < 1. Solve Eq. Gauss-Seidel Iteration 64. Solve Eq.35 with Ao9= 0. 70. the Checkout the programusing the given data. Solve Eq. Solve Eq.8. (L) by Gauss-Seidel iteration.25 < 09 < 1. Solve Eq.8. Solve any of Eqs.3. (I) to (N) using the Thomas algorithm program. 62. (M) by the SOR method for 1. (I) by the Thomas algorithm. Implement the SORprogram presented in Section 1. (M) by Gauss-Seidel iteration. (M) by the Thomasalgorithm. 86. 83. (K) by the Thomasalgorithm.80 53. 74. (N) by the SOR methodfor 1. Solve Eq.0 0. 58. 67. 56.05. 63. Solve Eq. (L) by Jacobi iteration. Implement Doolittle LUfactorization programpresented in Section 1. (K) by the SORmethodfor 1. Check out the programusing the given data.01. Solve any of Eqs.00 < o9 = 1. iterate until six digits after the decimal place converges. (A) to (H) using the Doolittle LUfactorization program. method with o9 = 1. Solve Eq. Implement the Thomasalgorithm program presented in Section 1. makeat least five iterations. 57. Solve Eq.0 0. For computersolutions. (I) by the SOR methodwith o9 = 1. 65. 77. (K) by the SOR method with o9 = 1.27. Iterative Solve the following problems by iterative methods. (L) by the Thomas algorithm. Checkout the programusing the given data. Programs 79. Solve any of Eqs. Check out the programusing the given data. (K) by Jacobi iteration. 76.27. 54. (J) by the Thomasalgorithm. 71. Implementthe simple Gauss elimination programpresented in Section 1.35 with Ao9= 0. (A) to (H) using the Gauss elimination program. 82.01. Solve Eq. SuccessiveOver-Relaxation 69.4.8.1. Solve Eq. 55. (I) by the SOR 75.7. 60. 66. 84. Solve any of Eqs.0]. 68. 80. 73. Solve Eq. (I) and (K) to (N) using the SOR program. Solve Eq. 81. Solve Eq. Solve Eq. Solve Eq. Solve Eq. Solve Eq. 72. (M) by the SOR method with ~o = 1. For hand calculations. 61. (N) by the Thomasalgorithm. Solve Eq. Solve Eq.10 with Ao9= 0. JacobiIteration 59. Solve Eq. (N) by Jacobi iteration. Solve Eq. Solve Eq. (L) by the SOR methodwith o9 = 1. (I) by Gauss-Seideliteration.8.01. (K) by Gauss-Seidel iteration. (M) by Jacobi iteration. . (L) by the SOR methodfor 1. 85.

K4(x . The shifted inverse powermethodfor accelerating convergence 2.1b) (2.1. ~ F = m~.1c).4. Eigenvectors 2. 2.6.9.xz) . The inverse power method 2. The shifted direct powermethodfor opposite extreme eigenvalues 2.6.2.1a) (2. The shifted inverse powermethodfor intermediate eigenvalues 2. Introduction MathematicalCharacteristics of Eigenproblems The Power Method The Direct Method The QR Method Eigenvectors Other Methods Programs Summary Problems Examples 2.7.3. 2. 2.Ksx = m33~ 3 3 3 3 (2.Xl) -.K~x~ m13~ z 3 ~ = -Kz(x 2 -Xl) +K4(x -x2) = m2572 3 -K3(x . The basic QRmethod 2.1 INTRODUCTION Consider the dynamicmechanical spring-mass system illustrated in Figure 2.xl) + K3(x -.2.7.1. 2.5.Xl) .8.9. 2.3. The direct power method 2. 2.4.8. 2. The direct methodfor a nonlinear eigenproblem 2.1.5.2 Eigenproblems 2. 2. to each individual mass gives K:z(x . The direct methodfor a linear eigenproblem 2. 81 . Applying Newton’ssecond law of motion.

(2.2c) K2Xl- (K2 -I.K4x3 = m2J~ 2 g3x 1 + g4x 2 .3b) into Eq.4b) (2. RearrangingEq.4c) .2a) (2.2b) (2.82 Chapter2 K3(x3"xl) K2(x2"xl) ~ K4(x3"x2) K3(X3"Xl) ~ K4(x~-x~) Ksx~ Figure 2.1) yields: --(g I ~.4a) (2. X 2 of the masses.K2x --~ g3x 3 = ml~ 2 2 1 (2.3a) gives dx ~ = i = coX cos(cot) and d2x dt-. (2. x(t) = X sin(cot) where x(0r = [x~(t) x2(t (2. Differentiating Eq.(g 3 -[-g 4 + g5)x 3 = m3J~ 3 For steady periodic motionat large time.K -~. (2. (2.3b) Substituting Eq.1 Dynamic mechanical spring-mass system.(K 2 + K4)X2 + KaX3 = -m2co2X2 K3X1 + K4X2 . and co is the undamped natural frequencyof the system.g4)x 2 -[.(K 3 + K4 + K5)X 3 = -m3co2X3 (2.3a) r = [X~ X X3]is the amplitudeof the oscillation ) x3(t)].T = i/= -co2X sin(cot) (2.2) gives -(K I + K + K3)X + K2X + K3X = -m~co~X1 2 ~ 2 3 K2X1 -.g3)x 1 -q.

The correspondingvalues of X are 3 called eigenvectors.9) can be written I (A .e. ~ = m/mref and f. The values of 2 that satisfy Eq. dependson the special values of 2.m20)Z)x2 (K 3 ifK4 q-K2X 2 -K3X 3 = 0 K4 X3 = 0 K5 .. (2. For s 3 f these values.~3X3 =0 (2.9c) -2X~ + (4 . ~(~ = K/Kre Substituting these definitions into Eq.5) using Kref and mre Thus.mtr_o2)Xl K4 .7) into Eq. and 2 (i. f [/~1 dv ~2 q-/~3 -if/1 ~(mref~2)Kref ]jqX1 -- ~2X2 . for every value of 2. Eq.~)x3= Equation (2.6a) (2.8a) (2.9) are called eigenvalues.(/~’3 -}-/~4 q-/~5 -.2I)X = (2.Xz. and ml = m~= m = 2 kg.2X = 0 3 -zx~.2x: + (13.Equation (2. the only solution.5c) -K3X 1 . Equation(2. X~. (2.X3). The eigenvectors determinethe modeof oscillation (i.8b) (2.5b) (2...~12)xl ~:2x2 ~3x3 + = -~qxl+ (~:2 + ~:4 . and X can be determined.7) (/~1 + R2 ~:3 . the relative values of X~.ff/32)X3 --~ (2. There are four unknowns: X X3. (2. T Unique values of X = IX1 X2 X3] cannot be determined.K4X 2 q- Let’s nondimensionalize Eq.6b) (2.~2 ~(mref~2)gref jjqX2 ~4X3= -- 0 -- ~4X2 + [~3 + ~4 + ~5 -- ~3~(m~ef~)gref )j]X3 Define the parameter 2 as follows: mrefC°2 2. ~o).8c) Consider a specific system for which K~ = 40N/cm. (2.9) is classical eigenproblem.8) becomes: (8-2)S 1 --2)( 2-2X"3 =0 (2.5) and dividing by Kre gives: f. 2.9b) (2.Eigenproblems RearrangingEq.6c) + [~2 + ~4 -. cannot be determinedby three equations.5a) (2.10) .e. In fact. (2. Clearly unique values of the foul unknowns X~. relative values ofX~. K = K = K = 20N/cm. called eigenvalues.Krof Substituting Eq.9) is a systemof three homogeneous linear algebraic equations. (2. However. Let Kref = 10 N/cmand mre = 2 kg.9a) (2.6) gives nond imensional system equa tion: (2 . (2.m22)x: R4x3 = --/~’3Xl --/~4X2 q. bther than the trivial solution X = 0. 2 3 4 and K = 90 N/cm.m3092)y3 = 83 (2.2)Xz .4) yields the system equation: (K 1 A-K 2 q-K3) -K2X 1 q(K 2 q.

if the of system of equations is homogeneous: Ax = 0 (2.84 where A= 4 -2 -~3 Chapter2 (2. However. n) and n eigenvectors xi (i = 1 ..14) The eigenvalues maybe real or complexand distinct or repeated.13) Chapter 2 is concerned with solutions to Eq. and structural . or the trivial solution. The elements of the corresponding eigenvectors xi are not tmique. The values of 2 which makem = m = rn are called eigenvalues. Everynonsingular square n x n matrix Ahas a set of n eigenvalues2i (i = 1 . Eq. both passing through the origin x~ = x2 = 0. fluid.15) represents two straight lines in the xlx 2 plane. (2. The ratio of x2 to x1 is specified by value of the slope m.15) gives (all -. (2. then the ifm 2 twostraight lines lie on top of each other.. For of any value of x~.2a. If the slopes ml and m2are different.2)xl al 2X2 = 0 (2 .2)x 2 = 0 Equation(2. Both straight lines pass throughthe origin whereXl ---.16) m 2 the x~x plane.. no solution. as illustrated in Figure 2.electrical.2 illustrates Eq. (2. x = 0...(a22 a21 ~)x I ~-. whichare possible the coefficient matrix A contains an unknown parameter. However. n) that satisfy the equation (A .12) As discussed in Section 1.10) the classical form of an eigenproblem.16b) where 1 and m are the slopes of the twostraight lines. whichis the 2 trivial solution. and there are an infinite number solutions. called an eigenvalue.2I)x = 0 or Ax = 2x (2. their relative magnitudescan be determined. (2. 1 2 and the solution vector x correspondingto 2 is called an eigenvector. They arise in the analysis of the dynamic behavior of mechanical.13) other than x = 0.x2 = 0.11) Equation (2.2b. Problemsinvolving eigenvalues and eigenvectors are called eigenproblems.m2x _ 2 1 (2. thermal. Consider an eigenproblemspecified by two homogeneous linear algebraic equations: (all . 1 = m = m..2) X2 = ----Xl = m~x~ a12 X -.10) is a systemof homogeneous linear algebraic equations.. Equation (2.16a) (2.15a) (2.. there is no other solution.15b) (azlX1 -ff (a22 -.12) mayhave a unique solution (the case considered Chapter1). there is a corresponding value of x2. Rearranging Eq. Figure 2. Eigenproblemsarise in the analysis of manyphysical systems. an infinite number solutions. as illustrated in Figure 2.1. Chapter 1 is devoted to the solution of systems of nonhomogeneous linear algebraic equations: Ax = b (2.

In general det(C) ~ and unique values are found for xj.. Figure 2.2. The most powerful method. There are several methodsfor solving eigenproblems. the mathematicalcharacteristics of eigenproblems are discussed in Section 2.Eigenproblems 85 ~ (a)mI ~m 2.2 MATHEMATICAL CHARACTERISTICS OF EIGENPROBLEMS The general features of eigenproblemsare introduced in Section 2. and to illustrate those methodsby examples. A brief mention of other methods is then presented.18) (2.. After a discussion of the general features of eigenproblemsin this section.6.5.whichis called the characteristic equation. l.. The chapter closes with a Summary. is based on more advanced concepts.called the power method. The evaluation of eigenvectors is discussed in Section 2. Theyalso arise in the analysis of control systems. The QRmethod is presented in Section 2. . and lists the things you should be able to do after studying Chapter 2. (2. The objectives of this chapter are to introduce the general features of eigenproblems.21) equal to zero and solving the resulting polynomial. called the QRmethod. 1 ~1~m2 =m X~ (b) m~= 2 =m. are Consider a system of nonhomogeneous linear algebraic equations: Cx = h Solving for x by Cramer’srule yields j) det(C ~’ . n) (2.14) can be solved directly by setting the determinant of (A.4 presents the direct method. the QRmethod. which eventually yields both 2 and x.Equation (2. 2. Theorganization of Chapter 2 is illustrated in Figure 2. The power methodand its variations are presented in Section 2. The power method and the direct methodare illustrated in this chapter. Serious students of eigenproblemsshould use the QRmethod.2 Graphicalrepresentation of Eq. is developed in Section 2. Aniterative method. is based on the repetitive matrix multiplication of an assumed eigenvector x by matrix A.det(C) (j-1 . which presents somegeneral philosophy about solving eigenproblems.. Two programs for solving eigenproblems follow. systems.3. A more general and morepowerful method. for 2.16).3. to present several methodsfor solving simple eigenproblems.5. The mathematical characteristics of eigenproblems presented in this section. Section 2.17) wherematrix CJ is matrix C with columnjreplaced by the vector h.

x = 0. the value of 2 can be chosento force det(C) = 0.86 Chapter2 Eigenproblems Mathematical Characteristics The Power Method TheInverse PowerMethod Shifting Eigenvalues I TheDirect Method The QR Method I I I Eigenvectors -_ I Other Methods Programs Summary Figure 2. so that a solution other than the trivial solution.~.. In that case x is not unique.det(C) (j = 1 . n) (2. is possible. x = 0 unless det(C) = 0.. but relative values of xj can be found.19) Therefore.B (2.. x = 0. Considerthe coefficient matrix C to be of the form C = A .21) . and the only solution is trivial solution. det(C) ¢ 0. Consider a system of homogeneous linear algebraic equations: Cx = 0 Solving for x by Cramer’srule yields det(C j) 0 xj.. For certain formsof C that involvean unspecifiedarbitrary scalar 2. In general.3 Organization of Chapter 2.20) (2.

21) = 0 and finding the roots the resulting nth-order polynomial.-2 -2 x = 2) -2 (8__-~2) -2 (13 . This procedureis illustrated in the followingdiscussion. The homogeneous system of equations is generally written in the form Ax = 2Bx In manyproblems B = I.24) lAx2x = 1 I.28) The characteristic equation.2B)x = The values of 2 are determined so that det(C) = det(A . IA .24) becomes (2.300 = 0 The eigenvalues are 2 = 13. 8.25). Consider the dynamicspring-mass problemspecified by Eq.2) [(4 . (2. (2.9): (A .2) .25) In problems where B -¢ I. (2.4] + (-2)[4 + 23 .~x = 2x t (2.2I)x I (4 .31) (2.29) (2.4] .508981 (2. which is called the characteristic equation.2) .22) (2.620434.2I)x = (2. 2. Then Cx = (A. The eigenvalues can be found by expandingdet(A . is (8 .23) (2. and Eq. Equation(2. = (B-~A).2I[ = 0.2)(13 .2522 + 1762 .(-2) [(-2)(13 . define the matrix ~.2B) The correspondingvalues of 2 are the eigenvalues.30) .2) (2.Eigenproblems where 2 is an unspecified scalar.870585.26) whichhas the sameform as Eq. (2.27) which is the most common form of an eigenproblem statement.24) becomes 87 (2. Then Eq.25) can be written in the alternate (A .

33b) 2 Solving Eqs. find the amplitudes X2and 3 relative t o t he amplitude X by ~ letting X = 1. From Eq.eigenproblems arise from homogeneoussystems of equations that contain an unspecified arbitrary parameterin the coefficients.34a) The modes oscillation corresponding these results are illustrated in Figure 2. FromEqs. expandingthe characteristic determinantto obtain the characteristic equation is difficult. In principle. where092= )ur(’ref/mref.34c) -0. 23~ .. For each eigenvalue 2i(i = 1. whenthe is size of the systemof equations is very large..~z~r~/ /(13.33a) -2X~ . are f~ = 2rrog~= 2re.33c) 0. .33c) yields: For 2t = 13. In practice. Solving the characteristic equation yields n eigenvalues2i (i = 1.. The characteristic equation is determined by expanding the determinant det(A .000000 For 22 = 8.2 ..0.000000 2.32c) where Hz = Hertz = 1.9) can be used to solve for 1 X and X with X = 1.~ 1 -.656 Hz (2.084 ~/ mref (2. which is then solved for the eigenvectors.21r~/(2"508~--81)(10) = 7. of to In summary.4.35) which yields an nth-degree polynomialin 2.130 = ¥ Z ~ mref (2. Solving high-degree polynomialsfor the eigenvalues presents yet another difficult problem. Then eigenvectors i (i =1. the solution of eigenproblems straightforward.88 Chapter2 which can be demonstrated by direct substitution.33a) and (2. correspondingto the n eigenvalues2i (i = 1.33b) for X3and substituting that result in Eq.15 Substituting 2~ to 23 into Eq.21r~/(8"6204z34)(10) 13. in terms off = 2n~o..526465 0.7)..870585)(10) ~ ~ mref = 16.620434: X = [1. 2 . (2.0 cycle/sec.34b) X3 X3 (2.427072] (2.508981: X3=[1.32b) r~ j~= 2rcco3 = 2n.. the corresponding natural frequenciesof oscillation.216247] (2.2I) = (2. n)... (2. Consequently.2 . Anytwo of the three equations given by Eq. The eigenvectors corresponding to 21 to 23 are determined as follows. (2.9c).9a) and (2.33a) (10-2) 2 and 2 ( 8-2) 2 .2X -.000000 2 For 23 = 2. (2. 2~r~ ~.32a) f2 = 2~zc02= 2~r../2~Kr~e-. (2.2X ÷ (13 . 3).599712] (2. (2.. 2. n).2).145797 0.870586: X~ = [1. 2 3 1 (8 -. n) are found by substituting the individual eigenvalues into the homogeneous system of equations.2)X3 = 0 (2.491779 -3.0. more straightforward .2X" = 0 2 3 (2.

K K3/4 K3/4 . 89 1..3.. f3 0. 1.49S.0 0.. proceduresfor solving eigenproblems desired.. (b) f2 = 13. .5...3 THE POWER METHOD Consider the linear eigenproblem: Ax = 2x (2. The powermethodand several of its variations are presented in this section.52. The most general method. (c) =-7. Aniterative numericalprocedure.. the QRmethod. 1.and its variations are presentedin Section 2. called are the powermethod. 0. so that the scaling factor approaches largest the eigenvalue 2 and the scaled y vector approaches the corresponding eigenvector x. The direct method is presented in Section 2. is presented in Section 2.59.14.42..0 2. (a) fl = 16. Figure 2.3 to illustrate the numerical solution of eigenproblems.130 Hz.656 Hz.4.21.4 Modeshapes.0 -O.084Hz..Eigenproblems . 2.36) The powermethod basedon repetitive multiplication of a trial eigenvector x(°) by matrix is A with a scaling of the resulting vector y..

Scale y(1) so that the unity component remains unity: y0) = 20)x0) 4. as 2. Thenapply Eq. fails.37i The general algorithm for the powermethodis as follows: Ax(k)= y(k+O = 2(k+l)x(k+l) [ (2.00 O) Ax = -0. Find the largest (in absolute value) eigenvalue and the corresponding eigenvector of the matrix given by Eq. Scale the third component 3 to unity.ooooooj/-2. Choose component x to be unity.555555 1.0].the powermethod.111111_J I 0. Iterate to convergence.000000 A -0. Ax (°)= -2 4 -2 -2 1.1.00_] 2 (1) = 9.11): A= 4 -2 1.39). The direct power method.41) Assume (°)r = [1.11t111 X(2) 7 [ 1. (2. and the vector x is the corresponding eigenvector (scaled to unity on the unity component). its value can be found the using an iterative technique called the direct powermethod.3.0 =/°’°°/ L9. Example2. (2.444444 -2 -2-2 4 "~3 o.238532 l .444444 0. The method is slow to converge when the magnitudes (in absolute value) of the largest eigenvalues are nearly the same.42) 1.000000 (2. Whenthe largest eigenvalues are of equal magnitude.0 (2.At convergence.0.The procedure is as follows: a one of 1.90 2.39) When iterations indicate that the unity component the could be zero.0 .40) 1.(2. The Direct Power Method Chapter2 When largest (in absolute value) eigenvalue of A is distinct.888888 : 2(2) = 12.1. x .38) (1). Assume trial value x(°) for the eigenvector x.3 1. Designate that component the unity component.0 x (2.ooooool_ 12. as described.000000 1.128440q (2. a different unity componentmust be chosen.the Repeatsteps 2 and 3 with x = x value 2 is the largest (in absolute value) eigenvalueof A. Performthe matrix multiplication: Ax(0)= y(1) 3.

291793 -0.The iterations were continued until 2 changedby less than 0.143499 -0.. 2. xi (i = 1.e.45) _~_ y(2) (2...43) This problemconvergedvery slowly (30 iterations).000000 12. A k.694744 13..5.870583 13. yields Ax = CiAx i i=1 : ~ Cil~iX i _~. Basis of the Power Method The basis of the powermethodis as follows.000000 0.000000 1.870584 and x~ = [-0.111111 13.2.. + Cnx. The Power Method k 0 1 2 3 4 5 29 30 2 9. C/2/~-~Ax = ~ Ci2~ixi = y(~) Akx = Ay e i=1 i=1 (2..143499 x 3 1. A procedure for accelerating the convergenceof a slowly converging eigenproblemis presented in Example2. 22 . 2n.188895 -0.133770 -0. Theseresults were obtained on a 13-digit precision computer..44) by A.1.. y0) i=1 = ~ Ci~iAx i : ~ Cil~2ixi n i=1 i=1 (2.. = ~ Cix 2 i (2. 21. Since the eigenvectors. 2 .220183 13.238532 -0.242887 -0.143499 1.000000 0. The final solution for the largest eigenvalue. and recalling that Axi = )~ixi.000001betweeniterations.192991 -0. etc. any arbitrary vector x can be expressed as a linear combination the eigenvectors.44) Multiplyingboth sides of Eq. .560722 13..000000 1. x2 ..... (2.000000 91 The resuks of the first twoiterations presented aboveand subsequentiterations are presented in Table 2.000000 1. whichis a large number iterations of for a 3 × 3 matrix. xn.037474 -0.47) .213602 -0.444444 0... are linearly independent (i.128440 -0. Assume that A is an n x n nonsingular matrix having n eigenvalues.291794 x 2 1. denotedas 21.3. they spanthe n-dimensional space).1.46) A2x Aye1) = (x-~ = ~.000000 1.. 2. Assume further that IAll > 12el >’" > IAnl.Eigenproblems Table 2. Thus.000000 1. w here the superscript denotes repetitive matrix multiplication.000000 1..000000 1. n).000000] (2. xl. of x = Clx1 + C2x +. and the corresponding eigenvector x~ is )~1 = 13..000000 -0.291794] -0.870584 x 1 1. with n correspondinglinearly independent eigenvectors.

52) (2.49) Equation (2. (2. 2. Eq. The convergencerate is proportional to the ratio 12.51) to Eq.~k Akx= 21~ ~ C/|’°i/ xi = y(~) Chapter2 (2.. Recall the original eigenproblem: Ax = 2x (2.Essentially. (2..47) yields n i/2 ".y~ Yl essentially factors 21 out of vector y. (2.. approachesthe limit A~x = 2~ClX~= y(~) n. etc. Choose the first componentof vector y(~). 1. the ratios (2i/2~) ~ ~ 0 as k ~ ~. then (k+l) = scaling a (k+l) is scaled by 21 so thaty (k+l) = iteration particular component vector .. In the limit as k --~ ~o.51) . so that of c.50) gives y(k+l) 21k+l 1 (2.49) one more time (i. to be that component. which is the smallest (in magnitude) eigenvalue of matrix A. 3.3.0. °) The initial guess x~ must contain somecomponent eigenvector xi. if y~) = Consequently. 3 .49) of y~k)= 2~C 1 ApplyingEq.49) approaches zero if 1211 < 1 and approaches infini~ if 1211 > 1. x1 = 1.49) convergesto a finite value.50) from k to k + 1) yields (2. 2. 1. each 1. Scaling c~ be accomplished by scaling any componentof vector y(~) to ~ity each step in the process. The Inverse Power Method When smallest (in absolute value) eigenvalue of matrix A is distinct. The n eigenvectors must be independent.49) must be scaled be~eensteps. (2.Thus.48) S~ce12~1> 12i1 for i = 2. y(k+l) = 21k+lc1 1 Takingthe ratio of Eq. (2.¢0. The largest eigenvalue must be distinct. 4.92 Factoring 2~ out of the next to last term in Eq. so that Eq..I 12i_11 where 2 i is the largest (in magnitude) eigenvalue and 2i_1 is the second largest (in magnitude) eigenvalue. and the first component Eq. then of Thus. Thus.53) .y~).3. this involves finding the largest (in magnitude) eigenvalue of the inverse matrix -1. Several restrictions apply to the powermethod. (2. and the scaled vector y approachesthe eigenvector x 1. 21.48) (2. and Eq. its value can the found using a variation of the powermethodcalled the inverse powermethod. If . (2.e.. yl y~k+2)= 21. the scaling factor approaches21. (2.

x of Solve for x’ by forward substitution using the equation (2. (2.56) by A gives (k) (k+0 AA-Ix = Ix (k) = x(k) = Ay which can be written as (2.59) ’ = Lx (° x 4.The procedure is as follows: Solve for L and U such that LU= A by the Doolittle LUmethod.58) is in the standard form Ax= b. Th fo r a us.55) -1. Repeat steps 3 to 5 with x 2 = 1/2i~verse. Solve for y0) by back substitution using the equation Uy(1) = X’ (2. The powermethodapplied to matrix A is given by A-lx(k) = y(k+~) Multiplying Eq. where= y( k+l) and b = x(k). The eigenvaluesof matrix A that is.A-ix gives 93 (2. At convergence.54) yields an eigenproblemfor -~. 2~wrs Thereciprocal of that eigenvalueis the smallest (in absolute value) eigenvalue e. (2.61) 0). (k+~) = (k) X (2.63) (2. T hus. The powermethodcan be used to find the largest (in absolute value) eigenvalue of matrix -1 A .57) lAy 1.53) by A-1 A-lAx = Ix = x = )]. is y0) = )~inverse (1) 6. of matrix A. y(k+l) can be found by the Doolittle LUmethod. Iterate to convergence. Scale y(1) so that the unity component unity. Thus. (2. 2.62) (2. given x(k). In practice the LUmethodis used to solve the inverse eigenproblem instead of -1 -1 calculating the inverse matrix A .Eigenproblems Multiplying Eq.60) 5. Assume (°). I A-ix = (~)x : 2inverseX (2.are the reciprocals of the eigenvalues of -I matrix A.64) y(k+l) x ~inverse(~+0 = (~+0 .54) Rearranging Eq.58) Equation(2. The eigenvectors of matrix A are the same as the eigenvectors of matrix A.56) (2. (2. The inverse powermethodalgorithm is as follows: ~ (~ Lx = x Uy(k+l) = X’ (2. A~verse. and x(k+l) is the corresponding eigenvector. 3. Designate a component x to be unity.

145797 0.508981 and F0.000000 2.(-1/4)(1. (2.67) Solve for y(l~ by back substitution using Uy(1) = X’. Chapter2 Find the smallest (in absolute value) eigenvalue and the correspondingeigenvector of the matrix given by Eq.70) .65) Assume(°)r = [1. Theseresults were obtained on a 13-digit precision computer. Scale the first component to unity.2)]/(7/2) Y2 y~) = (15/7)/(75/7) = (2.(-5/7)(5/4) (2. (1) y(~)= |0.666667 1 The results of the first iteration presented above and subsequent iterations are presented in Table 2.2invers e -.000ooo q x~=[1. The inverse power method.599712] (2.0) ~ = 1.0].50/ (2.(-2.0)(0.2.1.2.39856~ -.0 ~3 1.2)]/8 Yl (1) = [5/4 .0 .(-2)(0.666667 [0.68) is Scale yO) so that the unity component unity.0) = .0 x’~ 1. The first step is to x ofx solve for L and U by the Doolittle LUmethod.(-5/2)(0.(-1/4)(1.0 ---x~ = 1. The results are L= -1/4 -1/4 1 0 -5/7 and 1 U= 7/2 -5/2 0 75/7 (2. The final solution for the smallest eigenvalue 23 and the correspondingeigenvector x3 is 1 1 23 .0 .0. The iterations were continued until 2~verse changedby less than 0.0 1. 7/2--5/2 0 75/7 y~2) = 5/4 Ly~ 1) [_15/7 / (l) : [1.301 / F1. for x’ by forward substitution using Lx’ = x Solve -1/4 -1/4 1 -5/7 ~ = 1.0 .000001betweeniterations.2.5) .66) (°).94 Example2.11): A= 4 -2 -~3 (2.69) ~inverse = 0.300000 X(1) ---.0 1.20 [0.

000000 1.981132 2.666667 0. Accelerate convergence for slowly converging eigenproblems 2. The Shifted Power Method The eigenvalues of a matrix A maybe shifted by a scalar s by subtracting six = sx from both sides of the standard eigenproblem.4. Find intermediate eigenvalues 3.’~Largest= 8.000000 1.000000 1. Ax= 2x. For 1.597565 0.1.000000 1.000000 1. /~shifted.000000 1. Shifting Eigenvalues Find the Opposite ExtremeEigenvalue to Considera matrix whoseeigenvaluesare all the samesign. .Solvefor the largest (in absolute value) eigenvalue. Thus.S is the eigenvalue the of shifted matrix.599712 0.382264 0.1.3.598460 0.Shifting the eigenvaluesof a matrix can be used to: 1.393346 12 13 0.666667 1.8 ~ -7 + 8 = 1. this matrix. for example. .4. for example 2. Shifting a matrix by a scalar does not affect the eigenvectors.six = 2x .145797 1. 8 is the largest (in absolutevalue) eigenvalue .sx which yields (A .603774 0. Then/~Smallest = ’~shitied. 8 is the largest (in absolute value) eigenvalueand 1 is the opposite extreme eigenvalue. 2. by the power method. This procedure yields the same eigenvalue as the inverse power method applied to the original matrix. whichis either the smallest (in absolute value) eigenvalueor the largest (in absolute value) eigenvalueof opposite sign 2.2. For this matrix.71) (2.72) (2.094439 2. Ax . Consider a matrix whoseeigenvalues are both positive and negative. -4. Find the opposite extreme eigenvalue. and8. Solve for the largest (in absolute value) eigenvalue of the shifted matrix. Shifting the eigenvalues by s = 8 yields the shifted eigenvalues -7. and 8. 4.398568 0.3. shifts the eigenvaluesby s.000000 1.398568 1. 4.000000 0.sI)x = (2 which can be written as x AshiftedX = 2shifted (2. by the direct power method.Largest 3t. -6.353333 0.145796 2.1 is the and opposite extreme eigenvalue.130396 2. The Inverse Power Method k ’~inverse Xl X2 X3 95 0. s.000000 1. Shifting a matrix Aby a scalar.599712 2.Eigenproblems Table 2.Largest = -7.73) where Ashifte d = (A .sI) is the shifted matrixand~’shifted = "~" -. and O. Solve for the largest (in absolute value) eigenvalue.300000 0.

Example2. Find the opposite extreme eigenvalue of matrix A by shifting s = )~Largest= 13. -6.000000 -2.870584 = I -5.The procedureis as follows: 1. the largest eigenvalue of opposite sign will be obtained.000000 --2.Theoriginal and shifted matrices are: 8 -2 A= -2 -2 Ashifled = the eigenvalues by -2J (2.-9 + 8 = -1. The shifted direct power methodfor opposite extreme eigenvalues.870584 I _9. the smallest (in absolute value) eigenvaluewill be obtained. Generally speaking.75) 4 -2 -2 13 (8 .000000 -2.870584 -2. If all the a eigenvaluesof a matrix havethe samesign. Calculate the opposite extremeeigenvalue of matrix A by 2 = 2shitte d q.000000 = --13.000000 / 1. by the powermethod. Then )’Largest. 2.8705841 --4.74) -2 (4 -13. -4.0 1.870584 -2.870584 --2.96 Chapter2 /~Largest = 8.870584_] 1.Larges t =--9. The above procedure is called the shifted direct powermethod.000000 -0.000000 --2. Scale the second component to unity. it is not known priori which result will be obtained. Applying the powermethodto matrix Ashifte d gives Ashified x(0) = --2.0 --0.870584) (2. by the power method.0 (2. Solve for the largest (in absolute value) eigenvalue of the shifted matrix. 2shifted.Negative "~shif~ed.870584 -2 -2 (13 .76) .3. Solvefor the largest (in absolute value) eigenvalue ~’Largest" Shift the eigenvalues of matrix A by s = 2Larges t to obtain the shifted matrix Ashitted.870584.000000 1 -2.0 1.000000 Assumex(°)r = [1.870584) -2 -2 -5.13.870584 -2. 4. Solvefor the eigenvalue ~’shit~edof the shifted matrix Ashitte by the direct power d method.0]. and 0. 3. qBoth of the cases described aboveare solved by shifting the matrix by the largest (in absolute value) eigenvalue and applying the direct powermethodto the shifted matrix.000000 -9.Shifting the eigenvalues by s = 8 yields the shifted eigenvalues -9.13. If a matrix has both positive and negative eigenvalues.000000 --9.Largest 8 -~.870584) -2 -2.S.

Considera matrix whoseeigenvalues are 1. Two intermediate eigenvalues.285948 0. can be used for that purpose.000000 1.488187 0.11. -3.351145/ (2. Solvefor the eigenvalue )]’shifted.573512 0.279482 . 2.466027 1.000001. Thus.000000 1. If "~Inter is guessed to be 2Gues = 5 and the s eigenvaluesare shifted by s = 5.508980 (2. )~Inter = 2 and 4.486864 1. Theseresults were obtained on a 13-digit precision computerwith an absolute convergencetolerance of 0.000000 1.293629 0.78) Since this eigenvalue.279482 0.870584. and all the eigenvalues of matrix A are positive.870584 and x =[ 1. it is the smallest (in absolute value) eigenvalue matrix A.310846 0. The largest (in magnitude)eigenvalue of Ashifled is )~shifled.13. has the samesign as the largest (in absolute value) eigenvalue.Largest =-11.639299 11.351145 0.2. 2.inverse of the inverseshifted matrixAshifled bythe inverse powermethodapplied to matrix Astti~t ~. .000000 1.361604+ 13. by the inverse powermethod. s 2. Shifting Eigenvalues to Find the Opposite ExtremeEigenvalue k "~shifted Xl X2 x3 97 19 20 13. the opposite extreme eigenvalue of matrix A is 2 = )’shitted.996114 11. and 8.361604 Scaling the unity component y(1) to unity gives of [-0.000000 1. Shifting Eigenvalues to Find Intermediate Eigenvalues Intermediate eigenvalues hinte r lie between the largest eigenvalue and the smallest eigenvalue. Shift the eigenvaluesby s = )~Guess obtain the shifted matrix Ashitte tO d. 2 = 13.000000 1.361604 . it The above procedure is called the shifted inverse powermethod. 4.508980. Applying inverse powermethod the shifted matrix gives ’~shifled = -. Guessa value 2Gues for the intermediate eigenvalue of the shifted matrix.Largest q.1 + 5 = 4.3.466027 0.000000[ /0.000000 1. Solve for the largest (in absolute value) eigenvalue.1.000000 0.4.Eigenproblems Table 2. fromwhich the to 2 =)]’shitted -b S = -.11. by the power method and the smallest t eigenvalue. 2 = 2. remainto be determined. -1.000000 0.711620-] 5(1) O) "~shifted = -13.3.3. The procedure as follows: 1. However. the eigenvaluesof the shifted matrix are -4.361604.514510 0. and 3. The powermethodis not an efficient methodfor finding intermediate eigenvatues. 2Larges = 8.77) Theresults of the first iteration presentedaboveand subsequentiterations are presentedin Table 2. ~’Smallest = 1.870584= --11.870584 11.711620 0.870586= 2. -1 3.

84c) .(0.0 0.(-2.84a) (2.0(1. The shifted inverse power methodfor intermediate eigenvalues.84b) (2.0 -2.0)(0.0(0.0) o.0 x~ = 1.0 -2.0)]/(-2.0 [-2.0(1.0 -6.0 0.0].0 0.0 .0 -(-2.0 1-2.0 5.0 ~ = 1 ~ 1 (2.0 -4.0 . (2.0) -2 --2 -2 (4.0)(0.83) which yields = 0.0-2. Solve for xI by x of I forward substitution using Lx = x(°): 1.0) .2Guess I) = I (8 .0 .2Gues = 10.82a) (2.0) -2 -2 -2 (13 .0)(0.0 0. Let’s attempt to find an intermediate eigenvalue of matrix A by guessing its value.0) = Solve for y(1) by back substitution using O) = x’.0 which yields 0.79) = -2.0 .0 1.inverse’ for Solvefor the intermediate eigenvalue )~Inter Chapter2 = J’shifted q.0 and --2.o = y~) = [0.81) x~ = 1.0 1. The correspondingshifted matrix Ashified is s Ashified = (A .0)]/(-4.1.98 4.0 0.82b) (2.0 -2.0) y~l) = [1. Example2.0/(5.0 -2.0 1.0.4.10.0) = 0.0 0.0 1.0 1. for example.0 0.0 3.0 .82c) (2.0) (2.S.0) .0 (2.0 0.10.0 1.0 0.0 1.1.0 Solving for L and U by the Doolittle LUmethodyields: L= Ii.80) Assume (°)r = [1.0) (2.0 -2.1. 5.10. Scale the first component x to unity. Solve *~shifted = 1/)~shifted.

2.86) /~shifted -. Thus.172840 0.~Est of an eigenvalue of an matrix A is known.000000 1. When estimate .The procedureis as follows: 1.inverse X1 X2 X3 99 -0. Obtainan estimate 2Es of the eigenvalue)-. the largest (in absolute value) eigenvalueof matrix A~Iftedis 2shifted.3.724865 -0. example.85) The first iteration and subsequentiterations are smrmaafized Table 2.00/" ko.invers = e --0.216247].00[ ~(1) ~shifled.4.736364 -0.000000 0. for example. is F-o.inverse -.000000 0.00 ~ (2. for the eigenvalues can be shifted by this approximate value so that the shifted matrix has an eigenvalue near zero.000000 1.000000 1.708025 14 15 -0.4.ooA 1.-0.527463 -0. Ashifte d"~shifled. The aboveprocedureis called the shifted inverse powermethod.526465 1.000001.87) .550000 -0.216247 Scale y(~) so that the unity component unity.000000 1.000000 0.724866 1.379566 Thus.3.493827 -0.from several initial iterations of the direct powermethod.000000 -0.526465 -0. This eigenvalue can then be found by the inverse powermethod.000000 1.233653 0.526465 0.50 L 0. Shifting Eigenvalues to Accelerate Convergence The shifting eigenvalue concept can be used to accelerate the convergenceof the power methodfor a slowly convergingeigenproblem. 2.ooA |0. Consequently.0 -0.620434 and the corresponding eigenvector is xT = [1.000000 1. Let the first guess for x be the value of x correspondingto 2~st. Shift the eigenvaluesby s = )-Est to obtain the shifted matrix. Solvefor the eigenvalue A-shifte inverse powermethod applied to matrix Ashifte d.inverse =--0.724866.000000= 8.724866 1. (2. Theseresults in were obtained on a 13-digit precision computerwith an absolute convergencetolerance of 0.inverse of the inverseshifted matrix-1 d bythe 3.500000 -0.216248 0.5ol y(1) 0.by several applications t of the direct powermethod. the intermediate eigenvalue of matrix A is 21 = )-shifted + s = --1.4.379566 + 10.363636 0.454545 -0.Eigenproblems Table 2.the correspondingeigenvalue of matrix Ashifted is 1 1 (2./~shifted.000000 1. Shifting EigenvaIues Find Intermediate Eigento values k /~shifled.

144300 1.00000 -0.686952 -0.90) Table2. convergedvery slowly since the two largest eigenvalues of matrix A are close together (i.194902J Let x(°~T = x(5~r and continue scaling the third component x to unity.143498 -0.620434). These results were obtained on a 13-digit precision computer with an absolute convergence tolerance of 0.143498 1. after 5 iterations.000000 0. Example 2.100 4.992342 0. Convergence be accelerated by using the results of an early iteration. to shift the eigenvaluesby the approximate eigenvalue. 2(5) = 13.175841 -z~hiaed.0.568216 5.694744 0.694744 -2.192991 -0. for Solve for ~ = 2~hi~d+ S. say iteration can 5.291794 -0.1. From Example2.000000 1.000000 0.297598 / 0.0.sI) = -2.000001.291674 -0.143554 -0.000000 1.000000 Ashifled : I -2.000000 1.1.141881 -0.88) The corresponding L and U matrices are I I "~shifted-- 1.291794 -0.000000 (A . Applyingthe of inverse powermethodto matrix Ashifte d yields the results presented in Table 2.inverse.5.188895 -0.000000 1. Chapter2 Example2.694744 -2.5. 5. The shifted inverse power methodfor accelerating convergence.188895 1.694744 (2.291799 -0.0000000.000000].686952 -. Thus.686807 5.143498 -0.000000 -9. shift matrix A by s = 13.. and then using the inverse power methodon the shifted matrix to accelerate convergence.295286 -0. Solve ~shitted = I/~’shiited.351201 1.686952 5.5.000000 1 (2.351201 -5.00000 1 -2.000000 1.686957 5.291794 -0.000000 -2.The eigenvalue2shif~e of the shifted matrix Ashified is d 1 1 -.000000 -2.89) -1.000000 0.000000 0.000000 0.000000 1. inverse X1 X2 X3 5. Shifting Eigenvaluesto Accelerate Convergence "~shified.192991 .000000 0.694744: -5.000000 .694744 and x(5)r = [-0. The first exampleof the powermethod.143496 -0.5.000000 -8.~ver~e (2.000000 -2.691139 5.e. 13.870584 and 8.

Twoinitial approximations 2 are assumed.the characteristic equation is obtained from det(A . the smallest eigenvalue.620434.the characteristic equation is obtained from det[A . to Eqs.694744 = 13.92) where B(2) is a nonlinear function of 2. These results agree with the exact solution of this problem presented in Section 2. 2 = 2. 2. This can be accomplishedby applying secant method. the correspondingvalues of the characteristic of determinant are computed.B(2)] = (2.95) Expanding (2. and the procedure is repeated . 2 = 8.3 apply to linear eigenproblems of the form Ax = 2x Nonlinear eigenproblems of the form [ Ax= B(2)x (2. 2 = 13.4.Eigenproblems Thus.870585 101 (2.2o and 21. was found by the powermethod. presented in Section 3. was found by shifting eigenvalues.94). Thepresent solution required only 11 total iterations: 5 for the initial solution and 6 for the final solution. was found by both the inverse power methodand by shifting the eigenvalues by the largest eigenvalue. 2.4 THE DIRECT METHOD The power methodand its variations presented in Section 2.zeros of the characteristic equation directly. yields an nthdegree polynomialin 2. (2. The corresponding eigenvectors were also found.508981. and the third (and intermediate) eigenvalue. whichcan be solved by the methods Eq.175841 + 13. (2. Linear eigenproblems and nonlinear eigenproblems both can be solved by a direct approach whichinvolves finding the. The roots of the characteristic polynomialcan be determinedby the methodspresented in Section 3. presented in Chapter 3. Summary In summary.91) This is the sameresult obtainedin the first example with 30 iterations. For a nonlinear eigenproblem.95) yields a nonlinear function of 2. (2.3.94) and (2. For a linear eigenproblem.95).93) (2.94) ExpandingEq.2.94) and (2.2I) = (2.5.870584. The solution of that linear relationship is taken as the next approximation to 2.3. which can be time consumingfor a large system. Analternate approach solving for the roots of the characteristic equationdirectly for is to solve Eqs.95) iteratively. the eigenvalue 2 of the original matrix A is 2 = 2shil~e d + S = 0.and these results are used to construct a linear relationship between2 and the value of the characteristic determinant.5 for finding the roots of polynomials.the largest eigenvalue. cannot be solved by the power method.

A: 4 -2 (2. Thus. 40.2I) (4 .0) _ -65.13.0) -2 -2 -2 = 40.0 Solving Eq.0 (13 .6.100) The results of the first iteration presented above and subsequent iterations are presented in Table 2.2) (2.3. Let’s find the largest eigenvalue of the matrix given by Eq.15. Write the linear relationship between4 and f(4): f(41 ) -41 -.(-90.0.41 (2.0) (2.98a) (8 .0) (2. (8 . Thus.870585.6.102 Chapter2 iteratively to convergence. (2.97) Equation (2. Let 2o = 15.615385 (-65.96) f(2) = det(A .0.The solution is quite sensitive to the two initial guesses.6.0 . or by applying the inverse power methodone time as illustrated in Section 2. (2.11) by the direct method.i3.101) (2.Reasonable initial approximations required.6. especially for are nonlinear eigenproblems.4.99) for 42 to give f(42) = 0.40 f(40) -. as described Section 1.0) -2 -2 -2 = -90. The direct methodfor a linear eigenproblem.0) -2 -2 -2 (4 .15.0 gives 42 = 41 f(41) -.97) can be solved by the secant method presented in Section 3.2) -2 = -2 (13 .99) wheref(42) = 0 is the desired solution.0) -2 -2 -2 (4.15.slope -f(42 ) -.0 _ 13.98) were evaluated by Gauss elimination. The corresponding eigenvectors must be determined by substituting the eigenvalues into the system of equations and solving for the corresponding eigenvectors directly. The solution is 4 = 13. The direct methoddetermines only the eigenvalues. (2. Theseresults were obtained on a 13-digit precision computer.0 slope 40.13. .13. (2.0) f(4~) =f(13.96) The characteristic determinant correspondingto Eq. Thus. Example2.0) (2.98b) The determinants in Eq.0 slope = 13.3.15.0 (13 .0 and 21 = 13.0) f(20) =f(15.f()~l 42 -.

003572) -.(-0.2 [1 .994083 -56.59.7.357212 (2.870585 13.-0.105) for 2 t o give f (22) =0. f(Xo) = I [1 .f(21 42 . (2. Thus.4 I= 0.000014 0. Thus.0 yields (-0.000000 13. The Direct Method a Linear Eigenfor problem k 0 1 2 3 4 5 6 7 ~’k(deg) 15.102a) (2. Slope = (-0.547441 -59. Consider the nonlinear eigenproblem: x1 ÷ 0.180848 [1 .cos(50)] 10.sin(55)] 0. Let 20 = 50.157487 -4.106) Solving Eq. (2.0 (2.000000 (slope)k -65.4x 2 = sin(2) 1 0.615385 13.2x1 + x2 = cos(2) 2 The characteristic determinant correspondingto Eq. The direct methodfor a nonlinear eigenproblem.2~ (2. 2. Example2.cos(55)]I 0.426424 (2.~ f(21) -.0 = 52.103) by the secant method.4 0.4 sin(2)] 0.55.slope f(22) .6.cos(2)] = (2.003572 0.001292) (2.0 deg and 21 = 55.000000 14.102b) Let’s solve Eq.001292 55.870585 f(~.107) .008098 -0.2 Writing the linear relationship between2 and f(~) yields f(21) .002882 0.k) -90.767276 slope (--0.999194 0.916914 .20 .000000 13.50.2 [1 .0 is the desired solution.870449 13.822743 -60.410.360200 0.0 deg.103) (2.864536 13. (2. as illustrated in Example2.B(2)] [1 0.647887 103 Example presents the solution of a linear eigenproblemby the direct method.952515 13.104a) 0.2 0.f(2 o) 21 .000000 40.105) wheref(22) = 0.102) f(2) = det[A .104b) f(~l) 0.002882) 22 = .sin(50)] 0.7.002882) .0 .000000 -41.Eigenproblems Table2.6 Nonlinear eigenproblems also can be solved by the direct method.2 [1 .4 I = -0.233956: 0.

A similarity transformation is defined as A’ = M-~AM.2I).002882 0.003572 -0. The QRmethod.5 THE QR METHOD The powermethodpresented in Section 2. .4. The Direct Method a Nonlinear for Eigenproblem 2k. 2.IU .110) The roots of Eq. as does the direct methodpresented in Section 2.7. deg 50. L 0 0 0 ’’’ Un. 2i =ui.2) (//33 -.3 finds individual eigenvalues. Triangular matrices have their eigenvalues on the diagonal of the matrix.000496 0. Matrices A and A’ are said to be similar. The eigenvalues of similar matrices are identical. finds all of the eigenvalues of a matrix simultaneously.~’) = (2.001365 Chapter2 The results of the first iteration presented above and subsequent iterations are presented in Table 2...000049 -0. Thus.001292 -0.. Consider the upper triangular matrix U: F Ull 0 U12 //13 U22 U23 ¯ .~) (U22 -.095189 53.0 52.2I) 0 (2.104 Table 2./~)""" (Unn -. Theseresults were obtained on a 13-digit precision computerand terminated whenthe change in f(2) betweeniterations was less than 0.131096deg.131096 f(2k) 0. on the other hand. The eigenproblem. is given (Ull -. (U .. i (i= 1.767276 53.000001. without proof.0 55. "’" . Uln U2n .. The implementationof the QRmethod.111) The QRmethoduse similarity transformations to transform matrix A into triangular form.2II.110) are the eigenvalues of matrix U.2) 0 U12 (U22--2) 0 UI3 U23 (U33 --2) "’" "’" "’" Uln UZn U3n (U .. but the eigenvectors are different. (2.001513 -0.. n) (2. yields (Ull -. is presented this section. The solution is 2 = 53.7.000001 (Slope)k -0.109) 0 0 0 "’" (Unn--~) The characteristic polynomial.2 . The developmentof the QRmethodis presented by Strang (1988).

. Thus. . 2 .118a) To determine q2. (2.2 .i " "’’ayn" = (2. Thus.. which is normalto ql.2 ..115) Equation (2.j = 1. whose columns comprise the columnvectors a1. q2 . The matrix that connects matrix A to matrix Q is the upper triangular matrix R whoseelements are the vector products rid = qfaj (i.115) by Q to obtain Q-1AQ = RQ = A’ (2.......116) (2. an. That process then reversed to give A’ = RQ (2.113) by Q-1 to to obtain Q-1A = Q-1QR = IR = R Postmultipty Eq. a2 . any arbitrary vector can be expressed as a linear combination of the columnvectors ai (i = I.. n) are linearly independent..113) The QRprocess starts with the Gauss-Schmidt process.(q~a2)qi (2.113)..114) Matrices A and A’ can be shown be similar as follows. n) (2.... n) by the followingsteps. . Eq2(2.112) Theresult is the QRfactorization: [ A = QR) (2. Then normalize a~ to obtain q~: al qt=[lalI-’---~ whereIlal II denotes the magnitude a~: of Ila~ II = [a~l + a122+’" + ~2 11/2 (2. n) can be created fromthe columnvectors i (i =1.Eigenproblems 105 The Gram-Schmidt process starts with matrix A. An orthonormal set of column vectors qi (i = 1.... a~ = a2 .116) shows that matrices A and A’ are similar. 2 .. The steps in the Gram-Schmidt process are as follows. (2. Start with the matrix A expressed as a set of columnvectors: [al a2 "" an] A= / ’ayl’’ "’a~2"’’i. Chooseq~ to have the direction of aI .. and constructs the matrix Q.. they span the n-dimensional space. n)...119a) .118b) (2.2 . A set of orthonormalvectors is a set of mutually orthogonal unit vectors.. first subtract the component a2 in the direction of ql to determine of vector a~. and thus have the same eigenvalues. Premultiply Eq.117) [_ On 1 gin2 " " ann Assuming that the columnvectors ai (i = 1.. qn. whosecolumnscomprisea set of orthonormalvectors ql.

A is factored by the Gram-Schmidt process to obtain Qo) and (1).. Thus...Ila’. Thus.. /..120a) q3= Thegeneral expressionfor a’i is i-1 ai’ (2.(qkai)qk ~ T k=l (i = 2..... The diagonal elements of R are the magnitudesof the a~ vectors: ri.127) 0) (1) A is similar to A. rln ~..~ -. so the eigenvaluesare preserved.r! R= ~0 0 .-II (i = 1....’.. 2 . A(z) = R(1)Q 0) (2..’(.120b) = ai.. T hus.. R is simply assembledfrom already calculated values.. 3 . i and rid are calculated during the evaluation of the orthonormal unit vectors qi..(q~ra3)q2 Chooseq3 to have the direction of a~. (2.. Thus.2 . rnn (2.j = i+ 1 .. first subtract the components a3 in the directions of qt and q2. n) (2. n) n).r!2.121) and the general expression for qi is qi : a’i ~ (i = 1. n.. of a~ = a3 ..122) The matrix Q is composed the columnvectors qi (i : 1.106 Chooseq2 to have the direction of a~.123) The upper triangular matrix R is assembled from the elements computed in the evaluation of Q. Thennormalize a~ to obtain q3: (2...119b) This process continues until a completeset ofn orthonormal unit vectors is obtained. 2 . n) (2.. n) (2..128) .0. Then normalize a~ to obtain q2: q2 -- Chapter2 Ila~l[ (2.(q~ra3)q] ..2 ... Thus. T next step i s t o reverse the factors Q a nd R to (°) Schrnidt process into Q(0) and he (0) obtain (0 = A (° R(°)Q (2. Let’s evaluate one moreorthonormalvector q3 to illustrate the process. of Q = [ql q2 "’" qn] (2.... j = qTaj (i = 1.125) The values of ri.. and the factors a re r eversed to obtain A(2). Thus..126) (°) (°) The first step in the QRprocess is to set A = A and factor A by the Gram(°).124) The off-diagonal elements of R are the components the a~ vectors whichare subtracted of fromthe a~ vectors in the evaluation of the a’i vectors. ri. To determineq3... V rll r12 ¯.

131) The colmnnvectors associated with matrix A.Recall: A= 4 -2 -~3 (2. it can be fairly slow. Although it generally converges.130) Equations (2.. Let’s apply the QRmethodto find all the eigenvalues of matrix A given by Eq.485281 (2.(q~a2)ql a2’ = .942809 . Example2.235702_] (2.129) and (2.[0..135a) | (2.Eigenproblems 107 (n). A A triangular form. are a1 : a2 ~ 113: -3 (2.8. Thus.235702] Next let’s solve for q2. The basic QR~nethod.134) . the QRalgorithm is generally the preferred methodfor solving eigenproblems. (2.235702 (2.942809 -0. Preprocessing matrix A into a more nearly triangular form 2.132) First let’s solve for ql.235702 -0. First subtract the component a2 in the direction of ql: of a 2 = a2 .133) Solvingfor ql = al/l[al I[ gives q~r = [0.L_ -0.135b) . and divide by the magnitude of aI . within sometolerance.235702 -. the eigenvalues of A are the diagonal elements.0. When (n) approaches The process is continued to determine A(3).11) simultaneously.129) (2.0. Two modifications are usually employed increase its to speed: 1. The process is as follows: (k) : A (kQ(’~)R (k+l) : R(~)Q(~) A (2.130) are the basic QRalgorithm. Shifting the eigenvalues as the process proceeds Withthese modifications. Let ql havethe direction of al.235702] /-0. Ilal II = [82 -~ (--2) 2 -b (--2)2]1/2 = 8. A(4) ... A = [a 1 a 2 a3].

and r33 = Ila~ll 8.329293 0.294700.137) Finally let’s solve for q3. q~’a = --9.232319.294700. Thus.matrix Q(O)= [q~ q2 0.802022 . and r23 = q~’a3 = -9.000000 8.235702 -0.139) Performingthe calculations gives qlTa3 = --4.555555) / -i -(-2.768350] 1713 ] (2.710840"] d 6.136) The magnitude a~ is Ila~ll = 4.478343 0.443165 0.235702J / -0. Thus.802022 -0.485281. The off-diagonal elements are r12 = q~’a 2 = -2.055554) I -(-7.329293 0.(q~a3)ql .357023.051743 0.143) (°) It can be shownby matrix multiplication that A = Q(°)R(°).(q~’a3)q2 a~ = . q3 = a~/lla~ [I gives of q~=[0.222222) a~ = -2 -(1. and 3 -2 -(-4.232319.325300 (2.000000 4. Thus. r13 = (°) q~’a3 = -4.235702] (2. matrix R is given by (°) R = -8.478343. rll = Ila~ll = 8.140) The magnitude a~ is Ila~ II : 8.488618)1 = |4.235702 -0.444444 k -2.357023 and Chapter2 I -(0.141) is given by In summary. .0.595049] (2.294700 -9.595049] / 0"8020221 ~.478343..443165.555555) 3.443165.108 Performing the calculations gives q~a2 = -2.595049J -~1 F 0"0517431 (2. First subtract the components 3 in the directions ofql and q2: ofa a~ = a3 .051743 0.485281 -2.518072| [_[2.619146) -(-0.548821 0.357023 -4.Thus.000000 0.235702 0. r22 = [la~l] =4.555557 A (2. k-O.768350 (2. q2 = a~/lla~ll gives of q2 = [0.573626) -(5.942809 3 -[0.055554) 13 -(1.232819J (2.138) ~3 -0.595049 0.222222) =| -(0.802022 0.051743 0.142) (°) Matrix R is assembled from the elements computedin the calculation of matrix Q(O).942809 Q(O) = -0.235702 -0.[0.548821 -0.

595049 0.232819 0.235702 -0.000000 0.898631 -1.620434 0.712949 2.543169 10.560916 8.743882 11.870585 0.000000 (19) = R 0.620435 8. R.508981 2.000000 0.235702 0) A = 0. The QRmethoddoes not yield the corresponding at eigenvectors.063588 -4.O00000J (2.000000 -2.942809 -0.148) The final values agree well with the values obtained by the power method and summarized the end of Section 2.443165 4.l 9.508981 13.329293 0.051743 0.145) -4.000141 1.8.611111 10.000000 1.147) (2°) A = (2.000000 0.000000 0.508712 9.611111 1. The results of the first iteration presented aboveand subsequentiterations are presented in Table 2.001215 0.508981 109 19 20 (1) (°). The next step is to evaluate matrix A = R(°)Q O) A = /0.000000 0.325301 2. The eigenvector corresponding to each eigenvalue can be found by the inverse shifted powermethodpresented in Section 2. The Basic QRMethod k -’].8.509360 2.000000 8.000000 0. .000000] 0.213505 9.000000 8.000000 2.508981 (2.000000 -0.325301 (~) The diagonal elements in matrix A are the first approximationto the eigenvalues of matrix A.000000 / 0.802022 -0.768350 (2.974170 12.929724 13.870585 22 9.213505 -1.9403761 6.898631 (2.870584 13.478343 F8.063588 11. and A are given below: I 9.Eigenproblems Table 2.000000 0.357023 -4.870585 0.517118 2.000000 1 0.6. Thus.620434 0.485281 -9.001215 8.3.146) 13.144) 1.003171 0.940376 1.294700 L0. The final values of matrices Q.548821 0.000000 2.620434 23 6.000141 1 0.

6 EIGENVECTORS Chapter2 Somemethods for solving eigenproblems.on. I 0.sI) = [ -2.000000 -2.870584 -2.such as the direct method and the QRmethod. such as the power method. gives -2.000000-Iyll 0.000000-1.000000 Applyingthe Doolittle LUmethodto Ashitted yields L and U: 1.000000_] (2. In these cases.151) Let the initial guess for x(°)r = [1. Let’s apply this technique to evaluate the eigenvector x~ corresponding to the largest eigenvalue of matrix A.318637 Y2 = 1.189221 -1.9. A= -2 -2 4 -2 (2.564707 (2. 21 -.189221 0.000000 0.149) Shifting matrix A by 2 = 13.000000 -9.0000000. Example2.340682 0.000001 Y3 0. From that example.000000 Ashit~e d = (A . yield only the eigenvalues.318637 0. Eigenvectors.000000--2.000000 0.110 2.000000 0.000000 -2.0 ~3 = 0. Other methods.0 1.000001 (2.000000/ 0.000000 0.000000 -9.0 -~ ~2:0.000000 0.870584. which was obtained in Example 2.143498 1.152) Solve ~r y by back s~stitution using Uy = x’. Solvefor x’ by forwardsubstitution using I 1. the corresponding eigenvectors can be evaluated by shifting the matrix by the eigenvalues and applying the inverse power methodone time. (2.0000000.870584 0.154) 0.000000J .659318 -5.870584-2. y = -0.454642 x 106 L 1.340682 1.000000 -9.0].000000 0.870584.340682 1.000000 0.870584 -2.340682 0.143498 1.870584 L -2.652404 (2.15o) U = I -5.000000 ~2 : 1.454642 x 106 -0.000000 -2. yield both the eigenvalues and the correspondingeigenvectors.153) The sol.000000[ -5.000000 -0.0000007 / L = 0.1. 1.0 ~ Lx : x.143498.000000 0.13.000000 0.659318 1.564707 including scaling ~e ~st componem ~i~ is to x 106 ~0.000000 0.

Iterative methods are then employedto evaluate the eigenvalues. The results obtained by deflation can be used to shift matrix A by the approximate eigenvalues.7 OTHER METHODS Thepowermethod.000000]. and Press.The resulting tridiagonal matrix can be expanded. However. whichis presented in Section 2. The Jacobi method transforms a symmetric matrix into a diagonal matrix. Ralston Rabinowitz (1978). In the second step.291794 -0. Many them of apply to symmetric matrices. Householder (1964).143498 1. See Rice (1983) and Smithet al.1.4 to find moreaccurate values. they are moreefficient than the Jacobi method. Moreinformation on the subject can be found in Fadeev and Fadeeva (1963). deflation techniques can be employedfor symmetricmatrices. The off-diagonal elements are eliminated in a systematic manner.5. the transformation approaches a diagonal matrix iteratively. whose eigenvalues can then be found by the QRmethod. 111 to the value 2. which is identical obtained by the direct powermethodin Example2. Due the QRmethodis generally the methodof choice. Numerous compute programs for solving eigenproblems can be found in the IMSL(International Mathematical and Statistics Library) library and in the EISPACK program (Argonne National Laboratory). for example.elimination of subsequent off-diagonal elements creates nonzero values in previously eliminated elements.3. In the first step. round-offerrors generally pollute the results a~ter a few deflations. Teukolsky. Steward (1973). the original matrix is transformed to a simpler form that has the same eigenvalues as the original matrix. For more general matrices. Finally. The Given method and the Householder method reduce a symmetric matrix to a tridiagonal matrix in a direct rather than an iterative manner. the original matrix is transformedinto a simpler form that has the same eigenvalues. the QRmethodis recommended. In principle. by the power method. Wilkinson (1965). deflation can be applied repetitively to find all the eigenvaluesof matrix A. a new matrix B is formedwhoseeigenvalues are the same as the eigenvalues of matrix A. and Vetterling (1989). Mostof the morepowerful methodsapply to special types of matrices. The Householder method can be applied to nonsymmetricalmatrices to reduce them to Hessenberg matrices. However. except that the largest eigenvalue 21 is replaced by zero in matrix B. which is the second largest eigenvalue22 of matrixA. to its robustness. See Wilkinson(1965) and Strang (1988) for a discussion of the QRmethod. (1976) for a discussion of programs. Generally speaking.Eigenproblems Thus. Consequently. x~’= [-0. .including its variations. and the direct method very inefficient when are all the eigenvalues of a large matrix are desired. The best general purpose methodis generally considered to be the QR method. Mostof these methodsare based on a two-step procedure. iterative procedures are used to determine these eigenvalues. The powermethodcan then be applied to matrix B to determine its largest eigenvalue. After the largest eigenvalue 21 of matrix A is found. Several other methodsare available in such cases. Consequently. and the corresponding characteristic equation can be solved for the eigenvaluesby iterative techniques. which are then solved for by the shifted inverse power methodpresented in Section 2. Flannery.

The Power Method The direct powermethodevaluates the largest (in magnitude)eigenvalue of a matrix.0.8 PROGRAMS Chapter2 TwoFORTRAN subroutines for solving eigenproblems are presented in this section: 1.0 data (x(i).3) /-2.155) A FORTRAN subroutine.1.50.112 2.norm.x(i) end do c c c c c c c c c c c c .i=l. -2. subroutine power.1000) do i=l. After iter iterations.3. -2. -2.shift. factors out the approximateeigenvalue 2 to obtain the approximateeigenvector x.0.1/ data ndim.0. 1.870584.1. -2. and prints the solution. n. Program defines the data set and prints it.1010) i.j) x eigenvector.0.j=l. calls main subroutine powerto implement solution. The general algorithm for the powermethodis given by Eq. iter. 1 yes dimension a(6.6) . tol. The direct power method program.shift.13. n a coefficient matrix. program main main program to illustrate eigenproblem solvers ndim array dimension.0. 2. an error message printed and is out and the iteration is terminated.2).i=l.e-6. Subroutine powerperforms the matrix multiplication.2/ data (a(i. 4. returns or continues.0. n. tol.3) /-2. y(i) specifies unity component of eigenvector norm iter number of iterations allowed convergence tolerance tol shift amount by which eigenvalue is shifted iw intermediate results output flag: 0 no.i=l.0.0.3) / 1.x(6) . iw/6. (a(i. the Program2.39): Ax(k)= y(k+l) = 2(k+t)X(k+~) (2.0 write (6. ndim = 6 in this example n number of equations.norm. for implementing the direct power methodis presented below. n write (6. x(i) y intermediate vector. 13.50. iter.n). l.3. Input data and output statements are contained in a main(or driver) programwritten specifically to illustrate the use of each subroutine.l).3.0 data (a(i. The power method The inverse power method The basic computational algorithms are presented as completely self-contained subroutines suitable for use in other programs.3) / 8.0.i=l. (2. 1.0 data (a(i. checks for convergence. 2.3).j).y(6) data ndim. A(i.3. Ax = y. iw / 6.0.8.e-6.1.

l) write (6.O.n) end do end if call power (ndim.1040) evl.6/" ’) format (Ix. ev2) the direct power method dimensi on a (ndim.0) then write (6.n) check for convergence if (abs(ev2-evl).0 do j=l.n) stop format (’ The power method’/’ ’/’ A and x(O)’/’ format (’ "/" A shifted by shift = ".1010) i.1010) k. 6f12.6) format (" ’/’ k lambda and eigenvector components’/" end 113 i000 1005 i010 1020 ") c subrou tine power (ndim. evl. x (ndim) .n y(i) =0. 0 if (iw.n a (i. (x(i).i=l. x.j).n) do k=l. Oe-3) write (6. (x(i). y. tol. tol) if (shift. x. i) =a (i.1010) k. i) -shift write (6. (a(i.l) write (6.ev2 end if return else evl =ev2 .O.O) then evl =ev2 ev2 =ev2 +shi f t write (6. eq. k.i=l.iter calculate y(i) do i=l.f10.le. ev2. a. norm.l) write (6.n y(i)=y(i)+a(i.1010) k.j=l. i3. eq. tol. norm. (x(i). 1020) write (6. shift.le.i=l. n.1005) shift do i=l. shift. iw. y (ndim) evl =O. n. i ter. ndim) . i ter. iw.gt.ne. ev2) write (6.1000) if (iw. eq.n x(i) =y(i) end do end if if (iw.j) end do end do calculate lambda and x(i) ev2=y (norm) if (abs(ev2) . y.l. a.Eigenproblems if (shift. k. ev2.1020) ev2 return else do i=l.

Example2.000000 lambda -2.114 Chapter2 end if end do write (6. stop’) format (" "/" Iterations did not converge.6.000000 components 1.188895 -0. 1030) return format (’ ’/’ k lambda and eigenvector components’/’ ") format (ix. shift. stop’) format (’ ’/’ lambda shifted =’.870584 lambda 13. i3.242887 -0.6f12.291793 -0.000000 0.000000 -2.444444 0.000000 1.matrix Ais shifted by the value of shift before the direct power methodis implemented.220183 13.000000 12.000000 4.1.000000 -2.000000 1.143499 components -0.128440 -0.e10.037474 -0.000000 -2. If the input variable.111111 13.291794 and eigenvector -0.000000 1.291794 0.6) end 1000 1010 1020 1030 1040 The data set used to illustrate the use of subroutine poweris taken from Example 2.870584 the data statement. Solution by the direct powermethod.192991 -0.000000 method and eigenvector 1.000000 -2.000000 13. .870584 Subroutine power also implements the shifted direct powermethod.000000 1.870583 13. ’ approaching zero.6) format (’ ’/’ lambda = ’.000000 1.000000 -0.1.3 illustrating the shifted direct powermethodcan be solved by subroutine powersimply by defining norm= 2 and shift = 13.238532 -0. The power A and x(O) 1 2 3 k 0 1 2 3 4 5 29 30 k 30 8.000000 9.000000 1.000000 1.000000 1." and lambda =’.f12. Output2.143499 -0.000000 1. is nonzero.2.000000 -2.000000 1. The data statement for this in additional case is included in programmain as a comment statement.f12.560722 13.213602 -0.143499 1.000000 0.694744 13.000000 1. The output generated by the powermethodprogramis presented below.133770 -0. This implements the shifted direct power method.

tol .n. 3 .157a) (2.13.x(ndim).0 cal I invpower(ndim.Eigenproblems 2. The subroutines themselves must be included whensubroutine ~(~+~) invpoweris to be executed.50.1. The Inverse Power Method 115 The inverse powermethodevaluates the largest (in magnitude)eigenvalue of the inverse -1 matrix. shi ft. shift.0. tol. 1. tol.n.shift. Programmain defines the data set and prints it. 3.157c) A FORTRAN subroutine.157b) (2. and prints the solution.0. tol.norm. 1 ev2) the inverse power method. and the solution continues or returns.e-6.1/ data ndim.2.1. calls subroutine invpower to implementthe inverse power method. xp. Subroutineinvpowerthen evaluates x’.3.3.62) to (2. A . iter.0. subroutine invpower.3. 50. that approach is taken here.n.x(6) . 3.3) / 1. norm. for implementing the inverse power method is presented below.13.10.1/ data (x(i). shi f t.8. iw / 6. y(~+~).2. program main main program to illustrate eigenproblem solvers xp intermediate solution vector dimension a( 6.e-6. (2. l .xp(ndim). -0. "’inverse. 6) . Program 2.norm.1.0.shift.iw.2.1 / data ndim. and (k+l). iw / 6. an error message printed and the solution is terminated.3) /-0.e-6. The inverse power method program.y(ndim) i000 c . x. tol. 694 744.1. Since a subroutine for Doolittle LUfactorization is presented in Section 1.shift. 3. l.n. xp. ndim). 1. norm. iw. This is indicated in subroutine invpower by including the subroutine declaration statements. 1. Program in this is main section contains only the statements whichare different from the statements in program main in Section 2. Subroutine invpowercalls subroutine lufactor and subroutine solve from Section 1.870584.8. 3 . Convergence 2 is checked.1. The general equation for the inverse powermethodis given by A-Ix = 2inverseX (2.xp(6) data ndim.192991.norm.x.a. ev2) format (’ The inverse power method’/’ "/’ A and x(O) ’/’ end subroutlne invpower (ndim.k.188895.50. y.0 data (x(i). i / data ndim. n. i ter. e-6.0. iw/6. iter.2 to evaluate L and U.i=l. i ter. iter. tol. n.156) -1 This can be accomplishedby evaluating A by Gauss-Jordan elimination applied to the identity matrix I or by using the Doolittle LUfactorization approachdescribed in Section 2. After iter x of iterations.64): (k) = x Lx’ (k+l) = x’ Uy (~+1) x(~+1) y(k+~)-~. iw/6.--inverse (2.i=l. The general algorithm for the inverse powermethodbased on the LUfactorization approach is given by Eqs. iter.y. dimension a(ndim. a.1. k.8. norm.8.

i3.l) write (6.n) end do end if if (iw. row 3: x.n) write (6.ev2 if (shift.ev2 end if return else evl =ev2 end if end do if (iter.1000) do i=l.le.6) 1010 format (Ix.l) write (6.n). Oe-3) write (6. ev2" 1 /" "/4x. tol) evl =ev2 ev2=l. O/ev2 if (iw. (xp(i). n. gt.6.1015) (y(i).i=l.n write (6.f12. (a(i. eq.1010) k.l.0) then evl =ev2 ev2 =ev2 +shi f t if (iw. xprime. n.6) 1015 format (4x.a. stop’) 1030 format (" ’/’ Iterations did not converge.n) iteration loop do k=l. row 2: y. eq.6) 1020 format (’ ’/’ ev2 = ".a) if (iw.1050) evl.j=l.le.i=l. 6f12.1010) i.l) write (6.6) .ne. stop’) 1040 format (’ ’/’ lambda inverse =’.x.6f12.i=l.1040) evl. 6f12. eq.n) write (6.l) write (6. eq.e10.l) then write (6.y) ev2=y (norm) if (abs (ev2) .2.j). xp.116 c Chapter2 perform the LU factorization call lufactor (ndim.ev2 end if check for convergence c if (abs(ev2-evl).i=l. eq.iter call solve (ndim.l) then write (6.1005) (x(i).n x (i) =y (i)/ev2 end do end if if (iw.1030) return 1000 format (’ ’/’ L and U matrices stored in matrix A’/" ") 1005 format (’ ’/’ row i: k.O.’ and lambda =’f12. 1015) (x(i)." is approaching zero.1020) ev2 return else do i=l.

213333 0.500000 1.000000 .231132 0. n.285714 0.000000 -2.250000 -0.2.000000 -2.6) subroutine implements end lufactor (ndim.398568 2.500000 -0. ev2 1.f12. a. b.000000 lambda inverse k 12 = row 2: y.000000 A 1. x) process b to b’ and b’ to x end The data set used to illustrate subroutine invpower is taken from Example 2.Eigenproblems 1050 format end (’ ’/’ lambda shifted =’.142857 0.094439 2.000000 -2.560994 0.250000 in matrix -2.000000 -2.228428 0.353333 0.382264 0.000000 1 000000 0 300000 1 000000 1 000000 0 353333 1 000000 1 000000 0.599712 = 0. bp.300000 2 0.855246 2.239026 0.000000 -2.6.a) LU factorization and stores L and U in A c subrou fine solve (ndim.n.2. generated by the inverse power method program is presented below.382264 1.145796 0.395794 0.666667 2.000000 2.000000 4.500000 10.597565 2.599712 and lambda components 2. row 3: x.000000 L and U matrices 1 2 3 8.714286 -2.000000 stored -2.000000 1.508983 1. 1. xprime.398568 1.200000 0.666667 1.000000 13.508983 lambda and eigenvector 2.447439 0.700000 1. The output Output 2.603774 2.916667 0.800629 2.714286 row i: k.250000 0. The inverse A and x(O) 1 2 3 8.000000 -0.145796 0.000000 0.000000 1.398568 1.000000 -2.000000 1.000000 1. Solution power by the inverse method power method.f12.000000 3.’ and lambda 117 =’.981132 2.

such as ISML. 1.694744 in the data statement and defining x(i) = -0. ¯ ¯ ¯ ¯ ¯ When only the l~rgest and/or smallest eigenvalue of a matrix is required. For serious eigenproblems.If the input variable. libraries such as EISPACK be added to the operating systems. Solve for the smallest (in absolute value) eigenvalue of a matrix by the inverse power method.5 by the shifted inverse power methodcan be solved by subroutine invpower by defining norm= 3 and shift = 13.it can be used for solving nonlinear eigenproblems.870584 and iter = 1 in the data statement. is nonzero. However. 2. Explain the physical significance of an eigenproblem.Mathematica. Finally. Moresophisticated packages.Macsyma. 2.4 illustrating the evaluation of an intermediate eigenvalue by the shifted inverse power methodcan be solved by subroutine invpower simply by defining shift = 10. Althoughit is rather inefficient.188895.192991. can Manycommercial sol.This implements the shifted inverse powermethod.3. 4. Maple. Explain the basis of the powermethod. The data statements for these additional cases are included in programmain as comment statements. you should be able to: 1. Solve for the largest (in absolute value) eigenvalue of a matrix by the power method. also contain eigenproblemsolvers. -0. Example illustrating the evaluation of the eigenvector corresponding 2.matrixAis shifted by the value of shift before the inverse power methodis implemented. Explain the mathematicalcharacteristics of an eigenproblem. After studying Chapter 2.are packages contain eigenproblem solvers. The direct method is not a good method for solving linear eigenproblems. Packages for Eigenproblems Numerous libraries and software packages are available for solving eigenproblems. Many workstations and mainframecomputers have such libraries attached to their operating systems. (1989)] contains subroutines and advice for solving eigenproblems. the power method can be employed. 3.0. If not.8. 6. Solve for the opposite extreme eigenvalue of a matrix by shifting the eigenvalues of the matrix by the largest (in absolute value) eigenvalue and applying .0 in the data statement.118 Chapter2 Subroutine invpoweralso implementsthe shifted inverse powermethod. the QRmethod is recommended. Someof the more prominent packages are Matlab and Mathcad. Eigenvectors corresponding to a knowneigenvalue can be determined by one application of the shifted inverse powermethod. the powermethodcan be used to solve for intermediate eigenvalues. Example illustrating shifting eigenvalues to accelerate convergence 2. 5. the and bookNumericalRecipes [Pross et al. Example2. 2.9 SUMMARY Somegeneral guidelines for solving eigenproblems are summarizedbelow. shift.9 to a knowneigenvalue can be solved by subroutine invpower simply _by defining shift = 13.

Solve for an intermediate eigenvalue of a matrix by shifting the eigenvalues of the matrix by an estimate of the intermediate eigenvalue and applying the inverse powermethodto the shifted matrix. 11. Solve for the eigenvalues of a matrix by the QRmethod. Solve for the eigenvalues of a linear or nonlinear eigenproblemby the direct method. 9.. Showall the results for the first three iterations.2I)x = 0 and solving for x.Eigenproblems 119 7. Solve the problemspresented belowfor the specified matrices. Solve for the correspondingeigenvectors by substituting the eigenvaluesinto the equation (A . EXERCISE PROBLEMS Consider the linear eigenproblem. Tabulate the results of subsequent iterations. Several of these problems require a large number iterations. Accelerate the convergenceof an eigenproblemby shifting the eigenvalues of the matrix by an approximate value of the eigenvalue obtained by another method.2 Basic Characteristics of Eigenproblems 1. Begin all problems with x(°)r = [1.0 . and applying the inverse power methodto the shifted matrix. 2. This procedure yields either the smallest (in absolute value) eigenvalue or the largest (in absolute value) eigenvalue of opposite sign. of Solve for the eigenvalues of (a) matrix D. Carry at least six figures after the decimalplace. Iterate until the values of 2 changeby less than three digits after the decimal place.0] unless otherwise specified. and (c) matrix expandingthe determinant of (A . Solve for the corresponding eigenvectors by substituting the eigenvalues into the equation (A . 8. such as the direct power method. the inverse powermethodto the shifted matrix..21) and solving the characteristic equation by Newton’smethod. Solve for the eigenvalues of (a) matrix A. Let the first componentof x be unity.21) and solving the characteristic equation by the quadratic formula. 10. and (c) matrix expandingthe determinant of (A . (b) matrix B. of 341 D= 1 2 1 E= 1 3 1 1 G= 3 2 1 1 H= 1 2 1 1 1 2. (b) matrix E.2I)x = 0 and solving for x.0 1. 1. Ax = 2x. Solve for the eigenvector correspondingto a known eigenvalue of a matrix by applying the inverse powermethodone time. for the matrices given below. . Let the first component x be unity.

Chapter2 4.0 1. (b) Let the second component of x be the unity component.0 0.0 0. (b) For each (°).0].0 1.0 0.0]. Solve Problem10 for matrix F.0 1.0 0. (a) For each (°).120 2.0 0. Solve Problem3 for matrix B. Solve Problem 16 for matrix . 12. l et t he s econd c omponent fox be the unity component. l et t he f irst c omponent o x bethe unit y f component. (c) For each (°). Solve Problem9 for matrix E. l et t he f ourth c omponent o x betheunit y f component.0 0. Solve for the largest (in magnitude) eigenvalue of matrix D and the corresponding eigenvector x by the power method with x(°)r = [1. (e) Show that the eigenvectors obtained in parts (a) to (d) are equivalent. (a) Let the first component of be the unity component. Solve for the largest (in magnitude) eigenvalue of matrix A and the corresponding eigenvector x by the power method with x(°)r = [1.0 0.0] and [0. 13.0 0. Solve Problem4 for matrix B.3 The Power Method TheDirect PowerMethod 3.(d) of the fourth component of x be the unity component.0 0. 11. 5. (a) For each (°). Solve Problem10 for matrix E.0].(d) ofx that the eigenvectorsobtained in parts (a). l et t he t hird c omponent o x betheunit y f component. Solve for the largest (in magnitude) eigenvalue of matrix D and the corresponding eigenvector x by the powermethod. and (c) are equivalent. (b) Let the second componentof x be the unity component. For each (°). Solve for the largest (in magnitude) eigenvalue of matrix G and the corresponding eigenvector x by the power method with x(°)r -[1. and [0. 14. (b). 18. Solve Problem 15 for matrix H. Solve Problem9 for matrix F. l et t he s econd c (b) omponentfox betheunity component. [0.0 1. Solve Problem3 for matrix C. l et t he s econd component o x betheunit y f component.(b) For each (°). 16.0]. [0. and [0.0 0. Solve for the largest (in magnitude) eigenvalue of matrix G and the corresponding eigenvector x by the powermethod. 6.0]. Solve for the largest (in magnitude) eigenvalue of matrix A and the corresponding eigenvector x by the powermethod.(c) Let the third component be the unity component.0 1. (a) Let the first component of be the unity component. (d) For each (°).0]. (a) Let the first component of be the unity component. 9.0 0. 17. 10. For each (°).0 0.0 1. (b) Let the second component of x be the unity component. [0. Solve Problem4 for matrix C.(c) Showthat the eigenvectors obtained in parts (a) an (b) equivalent.0]. (a) For each (°).(c) Let the third component x be the unity component. l et t he f irst c omponent o x bethe f unity component.0 0. 7.0 0.0]. l et t he f irst c omponent of x be the unity component. l et t he t hird componentfox bethe (c) unity component. 15. 8.

0 0.0 0.(c) For each of x(°). [0.0 0.0 0. 31. 25. (a) Let the first component x be the unity component.(b) Let the second component of x be the unity component.0].(d) Let the fourth componentof x be the unity component.0 1. .0 1. Solve Problem19 for matrix B. For each (°). Solve Problem20 for matrix C.(b) For each (°). and [0.0 1.0 1. Solve Problem25 for matrix E.(d) Show that the eigenvectors obtained in parts (a). Use Gauss-Jordan elimination to find the matrix inverse. let t he first componentof x be the unity component.0 0.0 0.0]. l et t he first c omponent of x be the unity component. l et t he s econd componentof x be the unity component. Solve for the smallest (in magnitude) eigenvalue of matrix G and the corresponding eigenvector x by the inverse powermethod using the matrix inverse. Solve Problem25 for matrix 17. Solve for the smallest (in magnitude) eigenvalue of matrix D and the corresponding eigenvector x by the inverse powermethod using the matrix inverse with x(°)~=[1. l et t he s econd c (b) omponentfox be the unity component.0 0. 26. For each (°).0]. (a) For each (°).0 1.0 0. 20. 22. 28. Show (c) that the eigenvectors obtained in parts (a) and (b) are equivalent.0]. Solve for the smallest (in magnitude) eigenvalue of matrix G and the corresponding eigenvector x by the inverse power methodusing the matrix inverse with x(°)z=[1. (a) For each (°).0 0. (c) Let the third componentof x be the unity component. let the second component x be the unity component. 29. and [0. (a) Let the first component x be the unity component. of 27. (a) For each (°). 23. (a) Let the first component x be the unity component. For each (°). [0. 30.0 1.0]. l et t he t hird componentfox bethe (c) unity component.(b) Let the second component of x be the unity component. (b).0].[0. (b) For each x(°). 21. Use Gauss-Jordanelimination to find the matrix inverse. Solve Problem19 for matrix C.0 0.0 0.0] and [0.0 0. 24. let the third component x be the unity component.0 0. l et t he fourth componentfox betheunity (d) component.Eigenproblems TheInverse Power Method 121 19. and are equivalent. Solve Problem26 for matrix E. l et t he first c omponentfo betheunity comp x onent. Solve for the smallest (in magnitude) eigenvalue of matrix A and the corresponding eigenvector x by the inverse power methodusing the matrix inverse with x(°)z = [1. Solve for the smallest (in magnitude) eigenvalue of matrix A and the corresponding eigenvector x by the inverse power method using the matrix inverse. 32. Solve Problem26 for matrix 17. Use Gauss-Jordanelimination to find the matrix inverse. Solve for the smallest (in magnitude) eigenvalue of matrix D and the corresponding eigenvector x by the inverse power methodusing the matrix inverse.0].(b) Let the second component of x be the unity component.0 0.(e) Show that the eigenvectors obtained in parts (a) to (d) are equivalent. Solve Problem20 for matrix B. (c) Let the third componentof x be the unity component.0].

The third and fourth eigenvalues of matrix G and the corresponding eigen- . Repeat Problem51 for matrix F by shifting by s = 0. Shifting Eigenvalues FindIntermediate to Eigenvalues 51. Let the first component x be the unity component. of Solve for the smallest eigenvalue of matrix E and the corresponding elgenvector x by shifting the elgenvalues by s = 4.0 and applying the shifted power method.6 and applying the shifted power method. Solve Problem32 for matrix H. Repeat Problem51 for matrix E by shifting by s = -0.0 and applying the shifted power method. 48.shifting the matrix by that value. LUfactorization. 49. Shifting Eigenvalues Findthe Opposite to Extreme Eigenvalue 43.8 and applying the shifted power method.0 and applying the shifted power method. LUfactorization. of 44. Let the first component x be the unity component. 42. of Solve for the smallest eigenvalue of matrix C and the corresponding elgenvector x by shifting the elgenvalues by s = 5. Let the first component x be the unity component. 37. Let the first component x be the unity component. Let the first component x be the unity component. Let the first component x be the unity component. of Solve for the smallest eigenvalue of matrix B and the corresponding elgenvector x by shifting the eigenvalues by s = 6. 35. 54. Let the first component x be the unity component. Solve for the third eigenvalue of matrix D by shifting by s = 0. LUfactorization.0 and applying the shifted power method. LUfactorization. 36. The third eigenvalue of matrix D and the corresponding eigenvector x can be found in a trial and error manner by assuming a value for 2 between the smallest (in absolute value) and largest (in absolute value) eigenvalues. of Solve for the smallest eigenvalue of matrix H and the corresponding elgenvector x by shifting the eigenvalues by s = 6. 47. LUfactorization. 53. LUfactorization. 41. 34.122 33. Solve for the smallest eigenvalue of matrix A and the corresponding elgenvector x by shifting the eigenvalues by s = 5.8 and applying the shifted inverse powermethodusing Doolittle LUfactorization.5 and applying the shifted power method. and applying the inverse powermethodto the shifted matrix.0 and applying the shifted power method. LUfactorization. of Solve for the smallest eigenvalue of matrix F and the corresponding elgenvector x by shifting the elgenvalues by s = 4. 39. 45. Let the first component x be the unity component.4. of Solve for the smallest eigenvalue of matrix D and the corresponding elgenvector x by shifting the elgenvalues by s = 4.6. Solve Problem31 for matrix H. 40. 46. of Solve for the smallest eigenvalue of matrix G and the corresponding elgenvector x by shifting the elgenvalues by s = 6. 38. Let the first component x be the unity component. Solve Problem19 using Doolittle Solve Problem21 using Doolittle Solve Problem23 using Doolittle Solve Problem25 using Doolittle Solve Problem27 using Doolittle Solve Problem29 using Doolittle Solve Problem31 using Doolittle Solve Problem33 using Doolittle Chapter2 LUfactorization. 50. of 52.

2~1°) = -4.0.270088].330367 0. After 20 iterations in Problem49.439458 -5. After 10 iterations in Problem46. 2~1°) = -4.Let 2(o) = 7.5.0 -1.587978 -0.-1.Let 2(°) = -0.0 -0. 64. 63.8 and 2(1) : -1. Let 4(0) = -0.0 -1. 60.Let 2(°) = 5. Let the first component of x be the unity component.722050 and x(1°)~ -.047476]. Solve for the largest eigenvalue of matrix H by the direct secant method. Let 2(°) = -0.397633 and x(1°)~ = [1. This procedure can be quite time consuming large matrices. 66. Repeat Problem54 for matrix tt by shifting by s = 1.7 and -0. Solve for the smallest eigenvalue of matrix H by the direct secant method.074527].5 and -0. shifting the original matrix by this improved approximation of 2. Solve for the smallest eigenvalue of matrix G by the direct secant method.0 and 2~) = -0.0 and 20) = 4.0 and 2(~) = 6.256981 1.[1. 62. shifting the matrix by that value.304477 and x(2°)v = [1.0 -0. Solve for these two eigenvalues by shifting G for by s = 1. 68.0.4 The Direct Method 61. 59. 2~2°) = -7.0 and 4(1) = 4. methodusing the methodusing the methodusing the methodusing the methodusing the methodusing the methodusing the methodusing the methodusing the methodusing the . 65. Applythe aboveprocedure to Problem47.5. After 10 iterations in Problem48.0.0 9. Let 2(o) : -1. Let 2°) : 5.385861 -0.0. 2.249896 0. Solve for the largest eigenvalue of matrix F by the direct secant method.5 and 20) = -1. 2~2°) = --4. Shifting Eigenvalues AccelerateConvergence to The convergence rate of an eigenproblem can be accelerated by stopping the iterative procedure after a few iterations. and continuing with the inverse powermethod. shifting the approximateresult back to determine an improved approximationof 4.0. After 20 iterations in Problem50. 56. Let 2(°) = 0. After 20 iterations in Problem47.250521 -1. Solve for the largest eigenvalue of matrix G by the direct secant method.0. Applythe above procedure to Problem46.683851 and x(2°)T = [0. 70.0 and (1) =6.5. Solve for the largest eigenvalue of matrix E by the direct secant method.Let 2(°) = 5.Let )o(0) = 7.961342]. Solve for the smallest eigenvalue of matrix D by the direct secant method.0. Applythe above procedure to Problem48.388013 and x(2°)T = [1. 2~2°) = -8. Solve for the smallest eigenvalue of matrix F by the direct secant method. 57. 69. 55.1 and 2(~) : -1. Solve for the smallest eigenvalue of matrix E by the direct secant method.5 and 20) --.732794]. Solve for the largest eigenvalue of matrix D by the direct secant method.0 and 2(1) = 4.Eigenproblems 123 vectors x can be found in a trial and error mannerby assuminga value for 2 between the smallest (in absolute value) and largest (in absolute value) eigenvalues.5 and applying the shifted inverse power method using Doolittle LU factorization.0. and applyingthe shifted inverse power method to the shifted matrix. 58. Applythe above procedure to Problem50. Applythe above procedure to Problem49. 67.

matrix C by the QR method. Check out the programusing the given data set. 72. matrix D by the QR method. Let the first component x be the unity component. Let the first component x be the unity component. of 80.8. 81. 75. 74. 86. 82. of Solve for the elgenvectors of matrix H correspondingto the elgenvalues found ~n Problem78 by applying the shifted inverse powermethodone time. 91. Let the first component x be the unity component. matrix F by the QRmethod. 83. Chapter2 2. Solve for Solve for Solve for Solve for Solve for Solve for Solve for Solve for the elgenvalues of the elgenvalues of the elgenvaluesof the e~genvalues of the elgenvaluesof the e~genvalues of the e~genvaluesof the e~genvaluesof matrix A by the QR method. of Solve for the elgenvectors of matrix F correspondingto the elgenvalues found mProblem76 by applying the shifted inverse powermethodone time. 90. Solve any of Problems 43 to 50 using the shifted direct power method program. Solve for the eigenvectors of matrix A correspondingto the eigenvalues found in Problem71 by applying the shifted inverse powermethodone time. matrix H by the QRmethod.6 Eigenvectors 79. Implement the inverse power method program presented in Section 2. Solve any of Problems3 to 18 using the direct powermethodprogram. . 73. Checkout the programusing the given data set. Check out the shifted direct powermethodof finding the opposite extreme eigenvalue using the data set specified by the comment statements. matrix G by the QRmethod. 84. of Solve for the elgenvectors of matrix C correspondingto the elgenvalues found in Problem73 by applying the shifted inverse powermethodone time. of Solve for the elgenvectors of matrix G correspondingto the e~genvaluesfound in Problem77 by applying the shifted inverse powermethodone time.8 Programs 87. of Solve for the elgenvectors of matrix B correspondingto the eigenvalues found in Problem72 by applying the shifted inverse powermethodone time. Let the first component x be the unity component. Let the first component x be the unity component. 76° 77.2. Let the first component x be the unity component.8. 2. 88. of Solve for the e~genvectorsof matrix E correspondingto the elgenvalues found in Problem75 by applying the shifted inverse powermethodone time. Let the first component x be the unity component.1. Implementthe direct powermethodprogrampresented in Section 2. Let the first component x be the unity component.124 2. matrix E by the QRmethod. matrix B by the QR method. of Solve for the elgenvectors of matrix D correspondingto the elgenvalues found in Problem74 by applying the shifted inverse powermethodone time. 89. 85.5 The QR Method 71.

Solve any of Problems 56 to 60 using the shifted inverse power method program. 94. Solve any of Problems 79 to 86 using the shifted inverse power method program. . Check out the shifted inverse power method programfor evaluating eigenvectors for a specified eigenvalue using the data set specified by the comment statements. 93. Check out the shifted inverse powermethodfor finding intermediate eigenvalues using the data set specified by the comment statements. Solve any of Problems 19 to 34 using the inverse power methodprogram. 96. 97. 95.Eigenproblems 125 92. Checkout the shifted inverse powermethodfor accelerating convergenceusing the data set specified by the comment statements. Solve any of Problems 51 to 55 using the shifted inverse power method program. 98.

3 NonlinearEquations 3.5.1.6.5.3. Polynomialdeflation 3.1 INTRODUCTION Consider four-bar linkage illustrated in Figure3. 3. Theangle e = 04 .7.9. 3.11.12.1.9. Arelationship between and ~b can be and e obtained by writing the vector loop equation: (3. Newton’smethodfor multiple roots 3.4.8. Fixed-pointiteration 3.1) 127 . False position (regula falsi) 3. Newton’s methodfor complex roots 3. Introduction General Features of Root Finding Closed Domain(Bracketing) Methods Open Domain Methods Polynomials Pitfalls of Root Finding Methodsand Other Methodsof Root Finding Systems of Nonlinear Equations Programs Summary Problems Examples 3. Bairstow’s methodfor quadratic factors 3. 3.4.6. the angle q5 = 02 is the output.3. Muller’s method 3. Newton’smethodfor simple roots 3. 3.8. 3.n is the input to the this mechanism. 3.1. Newton’s method 3.2. The secant method 3.2.10. 3.7. 3. Newton’smethodfor two coupled nonlinear equations 3. Interval halving (bisection) 3.

Thus.0 Combining (3.[-R 3 . R3 = !~. as illustrated by the small insert in Figure 3.cos(G 4) - (3. Equation (3.1.2 correspond to the case where links 2. and sim ying yields plif (3. and 4 are in the lower hall-plane.3 c0s(03) q r 4 c0s(04) -1 = 0 r2 sin(02)q-/’3 sin(03) + sin(04) ~-.1 and illustrated in Figure3.4) Considerthe particular four-bar linkage specified by r 1 = 10. link 2 is in the lowerhalf-plane.2 c0s(4 ) . letting 02 = 4 Eqs. 3. and link 3 crosses the x axis. and Eq. r 3 = 8. of r 2 c0s(02)-~. r 2 = 6.1 and Figure 3.3) becomes cos(G) ~ cos(4)+ ~ .128 Chapter3 Input Figure3.2a) and (3.5) is tabulated in Table3. This problem will be used throughoutChapter 3 to illustrate methodsof solving for the roots of nonlinear equations. (3. and 4 are in the upper halfplane.3) e cos(G) . Table 3.0 ] where R.2. corresponding to the x and y components the 7 vectors.cos(~ 4) ~-. R1 = I’ R2 = I.1 Four-bar linkage.2a) (3. A mirror image solution is obtained for the case where links 2. = r_~ r2 R2_-r_~ r4 R3 ~ ~ =rl +4+ r3 +~ 2r2r4 (3..1) can be written as two scalar equations.1.5) Theexact solution of Eq.2b). 3. Anothersolution and its mirror imageabout the x axis are obtainedif link 4 is in the upperhalf plane. and r 4 = 4. Thus. Let 7t lie along the x axis. .2b) --~n. Freudenstein’s (1955) equation: 0 4 =G (3. whichis illustrated in Figure 3. (3.

822497 92.2 Exactsolution of the four-bar linkage problem. Polynomial root finding is consideredas a special case.069445 86.734963 89.015180 39. There are two phases to finding the roots of a nonlinear equation: boundingthe root and refining the root to the desired accuracy.0 90. deg 0.0 140.620630 Manyproblems in engineering and science require the solution of a nonlinear equation. Several classical methodsof both types are presented in this chapter.0 50.. a transcendental equation (i. etc.270873 81.0 60. and radicals)..0 100. an equation involving trigonometric.0 170.887763 62.823533 93. 90 0 0 I I I I I I I I I I I ~ ~ 90 Input c~. Thenonlinear equation. -.0 180.0 160.104946 32. deg 180 Figure 3. the solution of a differential equation.306031 83.810401 47.0 q~.0 150.1. deg 130.0 40. an equation involving +. exponential. Exact Solution of the Four-Bar LinkageProblem ~.888734 75. deg 90. may be algebraic equation (i. Twogeneral types of root-finding methods exist: closed domain(bracketing) methodswhich bracket the root in an ever-shrinking closed interval.113229 24.0 129 ~b. The problemcan be stated as follows: Given the continuous nonlinear functionf(x).059980 68. or any nonlinear relationship betweenan input x and an output f(x)./. logarithmic.deg 0.e. x.0 ~b.069345 16. f (x) = 0. find the value x = ~ such thatf(~) = Figure 3.450827 ~.e.3 illustrates the problem graphically.0 80. whichare discussed in somedetail.000000 8.0 20.0 120. There are numerous pitfalls in finding the roots of nonlinear equations. .0 110.0 30.124080 92. and open domain(nonbracketing) methods.0 10.deg 54.101495 c~. deg 70.NonlinearEquations Table 3.. functions).

3.2 GENERAL FEATURES OF ROOT FINDING Solving for the zeros of an equation. Several special procedures applicable to polynomials are presented. After the introductory material presented in this section. nonlinear equations can behavein many different waysin the vicinity of a root. Graphingthe function 2. After the presentation of the root finding methods. in a systematic procedurethat refines the solution to a specified tolerance in an efficient manner. There are two distinct phases in finding the roots of a nonlinear equation: (1) boundingthe solution and (2) refining the solution. respectively. someof the general features of root finding are discussed.which presents some philosophy to help you choose a specific methodfor a particular problemand lists the things you should be able to do after studying Chapter3. These two phases are discussed in Sections 3.or the starting point. 3. Incremental search . Typical behaviorsare discussed in Section 3.4. The chapter closes with a Summary.3 Solution of a nonlinear equation. A section presenting several programs for solving nonlinear equations follows.1.a section discussing some the pitfalls of of root finding and someother methodsof root finding follows. Figure 3. Somegeneral features of root finding are discussed in this section.130 Chapter3 f(x) l = f(¢1) 0 = f(~2) 0 Figure 3.2.2.4 illustrates the organization of Chapter 3. is one of the oldest as problems in mathematics.2. The material then splits into a discussion of closed domain(bracketing) methodsand open domainmethods. the root should be bracketed between two points at which the value of the nonlinear function has opposite signs.3. Boundingthe Solution Bounding solution involves finding a rough estimate of the solution that can be used as the the initial approximation. a process known root finding. In general.1 and 3. A brief introduction to finding the roots of systems of nonlinear equations is presented. Some general philosophy of root finding is discussed in Section 3.If possible.2.2. Several possible boundingproceduresare: 1.2.

The resolution of the plots is generally not precise enoughfor an accurate result. Plots of a nonlinear function display the general behavior of the nonlinear equation and permit the anticipation of problems. 4. However. Past experience with the problemor a similar problem Solution of a simplified approximatemodel Previous solution in a sequence of solutions Graphingthe function involves plotting the nonlinear function over the range of interest.4 Organizationof Chapter 3. the results are generally accurate enoughto boundthe solution. Spreadsheets generally have graphing capability. Many hand calculators have the capability to graph a function simply by defining the function and specifying the range of interest.NonlinearEquations Roots of I Nonlinear Equations 131 GeneralFeaturesof RootFinding I Closed Domain (Bracketing) Methods Open Domain Methods I Polynomials I Root Finding Other Methods Systems of Nonlinear Equations Pr#grams I Summary Figure 3.Verylittle effort is required. 5. 3. as does software like Matlaband Mathcad. .

186426 4.909909 2.0 300.0 60. To illustrate an incremental search.959306 6.0 270.0 110.259945 6.623104 4. check for sign changes in the derivative of the function betweenthe ends of the interval.0 80.5) when ~ = 40deg: one root between ~b = 30deg and q~ = 40deg.0 240. As an exampleof graphing a functiOn to bounda root.0 230. When the value of the function changessign.0 50.0 10. deg f(~b) 100.178850 -0. deg f(q~) 190.0 40.701851 5.132 Chapter3 6.260843 0.0 360. Consideran input of ~ = 40 deg.5.175954 2.0 350.0 220.5) for ~t = 40 deg.0 260.2.0 340.0 140.217971 -0.0 2.0 160.0 330.0 170.617158 5.0 0 90 -1.0 180.0 4. The graph of Eq.155970 -0. Thetwo end points of the interval containing the root can be used as initial guesses for a refining method. The graph showsthat there are tworoots of Eq.831150 4.438119 6.021315 0.0 130.0 90. (3.155970 . An incremental search is conductedby starting at one end of the region of interest and evaluating the nonlinear function at small increments across the region.299767 4.0 6.025185 5.5) with ~ = 40 deg is presented in Figure 3.310239 3.398988 6.2.5) with ~ = 40 deg for from 0 to 360 deg for Aq5= 10 deg.0 150.5) with ~ = 40 deg ~b.503105 1.0 200.467286 q~.0 f(q~) 3.920381 1.0 5.deg 270 360 Figure 3.752862 q~.0 f(~b) -0.0 120.0 70.0 1. The sametwo roots identified by graphingthe function are located.0 180 ~.194963 0. it is assumed that a root lies in that interval.5 Graphof Eq. The results are presented in Table 3. consider the four-bar linkage problempresented in Section 3.376119 ~b.602990 0.198833 5.0 20.039797 0.597044 2.0 250.518297 0.005267 -0. If multiple roots are suspected.1. (3.044194 3.033722 1.0 3. deg 0. deg 280. IncrementalSearchfor Eq. let’s evaluate Eq. (3. (3.214881 6.717043 5.0 290.0 310. and one root between q5 = 350 (or -10)deg and ~b = 360 (or 0)deg.0 210.0 30.0 320.388998 1. (3.0 3. Table3.

Figure 3. 2. they use informationabout the nonlinear function itself to refine the estimates of the root. Four open domainmethodsare presented in Section 3.. This approachis totally unacceptable. Figure 3. and continueuntilf(e) to close enoughto zero. If the polynomial coefficients are all real.6a illustrates the case of a single real root. guess anothere. Twosuch methodsare presented in Section 3. x = e.4: 1. 2.e.NonlinearEquations 133 Whateverprocedure is used to boundthe solution. Consequently. Open domainmethodsdo not restrict the root to remaintrapped in a closed interval. 4. Polynomials may have real or complex roots. Figure3. If not.single complexroots can occur.. Closed domain(bracketing) methods are methodsthat start with two values of x whichbracket the root. The fixed-point iteration method Newton’s method The secant method Muller’s method 3. However. evaluatef(~). Iff(e) is close enough zero. Behavior of Nonlinear Equations Nonlinear equations can behave in various ways in the vicinity of a root. Complex roots may . Theycan be slow to converge. Interval halving (bisection) 2. simple)real roots.6b illustrates a case whereno real roots exist.3. repeated(i.6 illustrates several distinct types of behaviorof nonlinearequations in the vicinity of a root. Trial and error Closed domain(bracketing) methods Open domain methods Trial and error methodssimply guess the root. complexroots occur in conjugate pairs. Several methodsfor refining the solution are: 1. the initial approximationmust be sufficiently close to the exact solution to ensure (a) that the systematic refinement procedure converges. they are considerably moreefficient than bracketing methods.2. 3. False position (regula falsi) Bracketing methodsare robust in that they are guaranteed to obtain a solution since the root is trapped in the closed interval. x = cq and systematically reduce the interval while keepingthe root trapped within the interval. 3.2.e.2. they are not as robust as bracketing methodsand can actually diverge. Algebraic and transcendentalequationsmay havedistinct (i. or complex roots. whichis called a simple root. 3. multiple) real roots. If the polynomialcoefficients are complex. and compare zero.3: 1. Thus. quit. Refining the Solution Refining the solution involves determining the solution to a specified tolerance by an efficient systematic procedure. and (b) that the solution converges to the desired root of the nonlinear equation.

(a) Simple root. (f) Three multiple roots. Many problems in engineering and science involve a simple root. (b) Noreal roots.6h illustrates the general case whereany number of simple or multiple roots can exist. extremecare is maybe required to find the desired roots. . (e) Two simpleroots. Almostany root-finding methodcan find such a root if a reasonable initial approximation furnished. (e) Two multipleroots. respectively. Situations with two and three multiple roots are illustrated in Figure 3. (g) One simpleandtwomultiple (h) General case.3 --- (h) Figure3. Figure 3.6a.6e and f.6g. A situation with one simple root and two multiple roots is illustrated in Figure 3. exist in such a case. (d) simple roots. respectively. Lastly.~i Solutionbehavior. In the other situations illustrated in Figure 3. as illustrated in Figure 3. Situations with two and three simple roots are illustrated in Figure 3.134 f(x)! f(x) Chapter3 \ ¢ X (a) fix)’ (b) f(x) (c) f(x)’ f(x) (e) (f) fix) f(x (g) (Z2 0.6c and d.6.

When problemis to be solved only once or a few times.4. a straightforward open domainmethod. and the methodused to find the roots does not affect the values of the roots. 4. when a problem is to be solved manytimes. if possible. 1.3 Two the simplest methodsfor finding the roots of a nonlinear equation are: of 1. However. Open domain methods. In such cases. 9.4.4. CLOSEDDOMAIN (BRACKETING) METHODS 3. can be applied without worrying about special cases and peculiar behavior. of or the magnitude the nonlinear function. Analyst’s time versus computer time must be considered when selecting a method. However. 2. Root-finding algorithms should contain the following features: 1. it should be monitoredto ensure that does not approach zero. [f(xi+l)[. 4. Somegeneral philosophy of root finding is presented below. the efficiency of the a method is not of major concem. 11.NonlinearEquations 3. most algorithms will always converge if the initial approximation is close enough.xi[. Anupper limit on the numberof iterations. must be included. Closed domain methods are more robust than open domain methods because they keep the root bracketedin a closed interval. If the methoduses the derivativef’(x). 3. Bounding methodsshould bracket a root. A convergence test for the changein the magnitude the solution. generally converge faster than closed domainmethods. of When convergence indicated. 10. [xi+~ . Polynomials can be solved by any of the methods for solving nonlinear equations.such as Newton’smethodpresented in Section 3. 2. 5. efficiency of the methodis of major concern. 8. 2. can 7. The rate of convergence of most algorithms can be determined in advance. 3. 6. special techniques applicable to polynomials should the be considered. Interval halving (bisection) False position (regula falsi) . methodcan determine whether or not the roots can be found and the the amount of work required to find them. Many.3. For smoothly varying functions. If a nonlinear equation has complexroots. Goodinitial approximations are extremely important. problemsin engineering and science are well behavedand straightforward. If problems arise during the solution. Blanket generalizations about root-finding methodsare generally not possible. the final root estimate should be inserted into is the nonlinear function f(x) to guarantee that f(x)= 0 within the desired tolerance. when they converge.2 or the secant methodpresented in Section 3. Howev6r. Some General Philosophy of Root Finding 135 There are numerous methodsfor finding the roots of a nonlinear equation.if not most. then the peculiarities of the nonlinear equation and the choice of solution method be reevaluated.2. that must be anticipated when choosing a method. The roots have specific values.

is theroot. whichis the case in Figure 3. b). whichbracket the root.6) and (3. The algorithm is as follows: a+b 2 C-- (3. the root is in the interval (a. x --. set a = c and continue. interval halving will locate the discontinuity. (b) . x = e. in the interval (a. such asf(x) 1/(x . that is. c = (a + b)/2.7a) (3. Ifa nonlinear equation. obviouslylies between and b. that is.The solution is not obtained directly by a single calculation. b). fix) f(x) (a) Figure 3. is being found. not a root. is c a bracketed betweena and b. Thus.7b. Thus. The interval containing the root. b). x = a to the left of the as root and x = b to the right of the root.7) is an iteration. set b = c and continue. or the value off(x) decreases belowa prespecified tolerance ez. Eachapplication of Eqs. The objective is to locate the root to within a specified tolerance by a systematicprocedurewhile keeping the root bracketed. Iff(a)f(c) > 0.3. methods. mustfirst be obtained. or bracketing. [bi . that is. (3.ai[ < el. Iff(a)f(c) =O. is bracketed by the two estimates. [f(ci)[ <ez.~. The root.136 Chapter3 In these two methods.d)whi h has si ngularity at x = d.7 Interval halving(bisection). Theinterval a betweena and b can be halved by averaging a and b. c) and (c.two estimates of the root whichbracket the root must first be found by the boundingprocess. x = e. whichillustrates the twopossibilities withf’(x) > 0 andf(x) Theroot. the root is in the interval (c.7. A check on If(x)l as x ---> d wouldindicate that a discontinuity. Thus. whichis the case in Figure 3. x = d. Interval Halving (Bisection) Oneof the simplest methodsfor finding a root of a nonlinear equation is interval halving (also known bisection).7a.7b) Iff(a)f(c) Iff(a)f(c) < 0: > 0: a = a and b = c a = c and b = b Interval halving is an iterative procedure. 3. or both. In this method.twoestimates of the root.1. Methodswhich keep the root bracketed during the refinement process are called closed domain. Theiterations continueduntil the size of the interval decreases belowa prespecified tolerance e~. Iff(a)f(c) < 0. dependson value off(c). Terminatethe iteration. There are now two intervals: (a. as illustrated in Figure 3. c).6) (3.

9a) (3. convergence is rather slow.00358236 0.Nonlinear Equations Example 3. degrees (i.8).03979719 -0.0 30.0) .000001 deg.015180 f(~b. Recall Eq.~cos(30.0) = (3.250 31.0) + ~ .6).0 deg 2 (3. However. ~b : ~9c for the next iteration and ~a remains the same.0) -~ cos(35.0deg and f(~ba) =f(30.01015060 -0.0) = ~ cos(40.0) + ~ .1 for an input of e = 40 deg by interval halving.00000009 -0.5) with c~ = 40. which requires 24 iterations. Interval halving (bisection).10) Substituting q5c = 35. In calculations involving trigonometric functions.0 35. The results presented in Table 3.031250 31.01015060 0. Interval Halving (Bisection) i 1 2 3 4 5 6 7 22 23 24 ~a.0 deg. ~ba = 30. the angles must be expressed in radians.e.00000001 q~b. From Eq.031250 f(~b) 0. From Eq.19496296 0.00000000 .9b) 2 30.0 deg into Eq. presented in Section 3. let ~a = (3. (3. deg) are a more common unit of angular measure.1.250 31. The convergence criterion is [q5a -~bg[ < 0.015180 f(qSc) 0. but the calculations are performed in radians.3.cos(40.11) Sincef(~a)f(c~c ) < O.015181 0.8750 31.03979719 -0.00000001 32.~b) From the bounding procedure ~bb = 40.00000000 32.0 = -0. (3.0 .03979719 -0.00000000 32.00000002 -0. Table 3.0) .o) + !~ _ cos(a0.0) = ~ cos(40.0 _ 35.953125 32.~ cos(qS) + !~ _ cos(40.015181 0.3. (3.35.2.0) .0 .00289347 -0.00000004 -0.06599926 0. deg 40.01015060 0.00358236 0. q~c -~a + ~b _ (3.18750 32. in all of the examples in this chapter.00033288 -0.8750 31.0 deg bracket the solution.8750 32.) -0.50 32.06599926 0.00289347 -0.00128318 -0.015178 32.01556712 -0.~ cos(no.18750 32. The solution is presented in Table 3.01556712 -0.deg 35.0) = ~cos(40.8) 30.50 31. Clearly.03979719 f(~bb) =f(40.3 were obtained on a 13-digit precision computer. deg 30.50 32.01015060 0.. (3.00000004 -0.0 +40.0 32.015178 32.015181 0.015179 32. angles are expressed in degrees in the equations and in the tabular results.015179 32.8) yields f(q~c) =/(35.0 32. 137 Let’s solve the four-bar linkage problem presented in Section 3.015176 32.0 = o.50 32.0 31.8750 32. 19496296 Thus.0 deg and ~bb = 40.cos(a0.00000002 -0.00289347 -0.00033288 ~Pc.0 deg: f(~b) = 35-cos(40. Consequently.00289347 0.0 30.

and the root of the linear function g(x). regula falsi). (bn . Thus. required to reduce the initial interval.16) .ao) since each iteration reduces the interval size by a factor of 2. so the root remains bracketed.14) for the ~¢alue of c whichgivesf(c) = 0 yields c = b f(b) (3. Wenowhave two intervals.3.a°’~ (3.8.1. as described in Section 3.138 Chapter3 The results in the table are roundedin the sixth digit after the decimal place.e.13) The major disadvantage of the interval halving (bisection) methodis that the solution converges slowly.14) Solving Eq. and the slope of the linear function g’(x) is given g’(x) f(b)-f(a) b-a (3. it can take a large numberof iterations. and thus function evaluations. The maximum error in the root is Ibn . so the methodis guaranteed to converge.an) = ~ (bo . and thus the number of function evaluations. The root of the linear functiong(x). (3. 3.an).the nonlinear functionf(x) is assumed be a linear function g(x) in the interval to (a. c) and (c. (a.2. is given by (3. The number of iterations n. which gives the methodits name. 2.12) 1 (bn . As in the interval-halving (bisection) method. is taken as the next approximation of the root of the nonlinear function f(x). In the false position (regula fals the 0 method. The process is illustrated graphically in Figure 3. This methodis also called the Bnear interpolation method. b). The interval halving (bisection) methodhas several advantages: 1. x = c. The root is bracketed (i. b). x = e.It is a false position (in Latin. 3. The equation of the linear function g(x) is f(c) . That is.the interval containingthe root of the nonlinearfunction f(x) is retained.3.f(b) _ g’(x) c-b wheref(c) = 0. that is.1 to six digits after the decimal place.15) (3. n is given by 1 1 /’b° .. The final solution agrees with the exact solution presentedin Table3. to reach the convergence criterion.an[. is not the root of the nonlinearfunctionf(x). (b0 -a0). b) approximates root as the midpointof the interval. trapped) within the boundsof the interval. to a specified interval. x = c. False Position (Regula Falsi) The interval-halving (bisection) method brackets a root in the interval (a.

02347602 Substituting these results into Eq. FromEq. 19496296 _ 31.0 = -0.14) and (3. 3. Recall Eq. FromEq.0) ÷ ~ .0) -.8 Note thatf(a) and a could have been used in Eqs.19b) (3.0---(--0"03979730. 4’a = 30. (3.0 -= 0.NonlinearEquations 139 f(x)’ g(xi~f(x) Figure False position (regula falsi). (3.0) = ~ cos(40.16) is applied repetitively until either one or both of the following two convergence criteria are satisfied: Ib .019) = 0.al _< el Example 3.0 deg and 4’b = 40.0) + ~ -.5) with ~ = 40. (3.16) instead off(b) Equation (3.0 deg bracket the solution. False position (regula falsi). As an example the false position (regula falsi) method.let’s solve the four-bar linkage of problempresented in Section 3. f(4’a) =f(30.0 0.1.0 deg and 4’b = 40.0 deg: f(4.~ cos(4.18).19a) (3.cos(40.0) = ~ cos(40.20) (3.17) .0) .2.~ cos(30. g’(4’b) = 0.) + ~! _ cos(40.16) yields f(4’b) 4’c = 4’b g’(4’b) 40.) = ~ cos(40.0 .15).695228 de g 0.cos(a0.1949629640.18) and/or If(c)l _< (3.19496296 Thus.02347602 (3. (3.4’) Let 4’a = 30.0) .0 deg. (3.03979719 f(4’b) =f(40.~ cos(a0.21) (3.

Iq~b.015180 Chapter3 -0.4.00101233 -0.00000008 -0.00657688 -0. b Theseresults and the results of subsequentiterations are presented in Table 3. This type of behavior is common the false position method.19496296 0.4 are generally preferred because they convergemuch morerapidly.015176 32.00000000 ~b deg b.19496296 dpc deg .0 40.3. More efficient methodsfor finding the roots of a nonlinear equation are desirable.19496296 0.695228) + ~1 _ cos(40. of these methodsare guaranteed to convergebecause they keep the root bracketed within a continually shfi~aking closed interval. It convergesrather slowly.19496296 0.015154 32. is satisfied on the ninth iteration.3 convergeslowly. 31. Four such methodsare presented in this section: 1.0) .695228 31.015180 32.00657688 (3.00000356 -0.00000001 -0.00002342 -0.695228 31.00002342 -0.00000008 -0.3.000001 deg. The root is approachedmonotonicallyfrom the b left. Notice that ~b does not change in this example.19496296 0.0 40.00000054 -0.00000054 -0.00101233 -0.00657688 -0.00000356 -0.31. they maydiverge. Summary Twoclosed domain(bracketing) methodsfor finding the roots of a nonlinear equation are presentedin this section: interval halving(bisection) and false position (regula falsi).18) gives c f(4)c) = ~ cos(40.This choice of q~a and ~b keeps the root bracketed.0 40.19496296 0. and thus.140 Table 3.966238 32.0 f(c~b ) 0. 2.015009 32. q~a is set equal to ~bc and q50remainsthe same.015176 32. for 3.015009 32. Thefalse position methodgenerally converges morerapidly than the interval halving method.19496296 0.015180 32. However.0 40.22) Sincef(C~a)f(4c)> 0.~ba[ -< 0. The convergence criterion. the more slowly converging bracketing methodsmaybe preferred. they do not keep the root bracketed. the interval size.0 40.015180 f(~b~) -0.00015410 -0.19496296 0. 3.014050 32.but it does not give a boundon the error of the solution.966238 32.0 40.00015410 -0.0 .007738 32.~ cos(31.014050 32.695228) = -0. 40.deg 30.007738 32. In such cases. (3. Both methods are quite robust.4 OPEN DOMAIN METHODS The interval halving (bisection) methodand the false position (regula falsi) method presented in Section 3. False Position (RegulaFalsi) q~a.00000001 0.00000000 Substituting ~b into Eq. but converge slowly.19496296 0.0 40.0 31.4. Fixed-point iteration Newton’s method .03979719 -0.015154 32. The interval-halving methodgives an exact upper boundon the error of the solution. The open domain methods presented in Section 3.0 40.0.

9 Fixed-point iteration.24) ! g(x) x(x) Figure 3. An initial approximationto the solution x~ must be determinedby a boundingmethod. The point of intersection of these two functions is the solution to x = g(x). It is for included simply for completeness since it is a well-knownmethod. . and thus to f(x) = 0. Newton’smethodand the secant methodare two of the most efficient methods refining the roots of a nonlinear equation.23) Theprocedureis repeated (iterated) until a convergence criterion is satisfied. Ixi+~ -xil < ~ and/or If(xi+~)l <ee (3. Fixed-Point Iteration The procedure known fixed-point iteration involves solving the problemf(x) = 0 by as rearranging f(x) into the form x = g(x). Since g(x) is also a nonlinearfunction. However. Muller’s methodis similar to the secant method. This value is substituted into the function g(x) to determine the next approximation. for 3.it is slightly more complicated. the solution mustbe obtained iteratively. The algorithmis as follows: [xi+l = g(xi) (3. Fixed-point iteration essentially solves two functions simultaneously: x(x) and g(x). 4. The secant method Muller’s method 141 These methods are called open domainmethods since they are not required to keep the root bracketed in a closed domainduring the refinement process. then finding x = e such that e = g(e). The value of x such that x g(x) is cal led a f ixed point of the relationship x = g(x).1.NonlinearEquations 3. which equivalent to f(e) = 0. so the secant methodis generally preferred.4.9. For example. This process is illustrated in Figure 3. Fixed-point iteration is not a reliable methodand is not recommended use.

910798deg into Eq.35) (3.25) can be rearranged into the form q5 = g(~b) by separating the cos(~ .28) Differentiating Eq.0deg into Eq.co-1 R3 1 [U( ~)] : g(qS) (3.29) For the four-bar linkage problempresented in Section 3. (3.26) where U(~b)= 1 cos(~z) -. Recall Eq.34) (3.27). -1 ~ : ~ -.0) = ~ cos(a0. Recall d(cos-1 1 (U)) -- du (3. g’(q~).1. Substituting qS~=30. Equations (3.27) Thederivative ofg(~b).O) ~ cos(~bi) - (3.142 Example 3.945011) = 20.~ cos(30.26) gives 1 du g’(qS) _ l~/~Z~_u2 dq~ which yields 2 sin(q~) g’(q~) --R 7¢/~ (3.25) Equation (3.36) -1 ~bi+l = g(cki) = 40.0 .(0. -0. (3. .31) to (3. and 1 e R3 = !~.0) . R = 35-.3): f(~b) = R~cos(s) 2 cos(~b) ÷ R .910798deg.33) (3. that is.945011 q52 = 40.R2 C ) +R OS(b 3 (3.910798 deg g’(20.cos-~(0.33) u(30.c os(~z .~) and solving for ~. (3. (3.17027956.0) ÷ ~! = 0.945012 Substituting ~be = 20.2 cos(~b) + ] = ~ . Let’s find the output q~ for an input ~ = 40 deg. Thus. R ----~.3. (3. Fixed-point iteration.COS [R cos(s) .26).25) givesf(q52) = -0.910798) -25"sin(20"910798) = 2.03979719.31) (3.1 by fixed-point iteration.cos [u(~b/)] g’(49i) = 5 ~/1 2 sin(~bi) 2 [u(~i)] Let qSl=30.0deg.32) (3. Chapter3 Let’s solve the four-bar linkage problempresented in Section 3.728369 1) ~/1 .30) (3. is of interest in the analysis of convergence presented at the end of this section.~b)--3 (3.30) become U(~)i) = ~ cos(40.25) gives f(~b~)= (3. The entire procedure is nowrepeated with q~2 = 20. Equations (3.0 .

These results and the results of 29 more iterations are summarized in Table 3.5. Entries that cannot be recovered are elided.

Table 3.5. Fixed-Point Iteration for the First Form of g(φ)

i    φi, deg      f(φi)         u(φi)        φi+1, deg
1    30.000000    -0.03979719   0.94501056   20.910798
2    20.910798    -0.17027956   0.77473100   0.780651
3    0.780651     -0.16442489   0.61030612   -12.388360
4    -12.388360   0.05797824    0.66828436   -8.065211
...
29   -9.747106    0.00000001    0.64616254   -9.747105
30   -9.747105    -0.00000001   0.64616253   -9.747105

The results presented in Table 3.5 illustrate the major problem of open (nonbracketing) methods. Even though the desired root is bracketed by the two initial guesses, 30 deg and 40 deg, the method converged to a root outside that interval. The solution has converged to the undesired root illustrated by the small insert inside the linkage illustrated in Figure 3.1. This configuration cannot be reached if the linkage is connected as illustrated in the large diagram. However, this configuration could be reached by reconnecting the linkage as illustrated in the small insert.

Let's rework the problem by rearranging Eq. (3.25) into another form of φ = g(φ) by separating the term R2 cos(φ) and solving for φ. Thus,

φ = cos^-1{(1/R2)[R1 cos(α) + R3 - cos(α - φ)]} = cos^-1[u(φ)] = g(φ)    (3.37)

where

u(φ) = (1/R2)[R1 cos(α) + R3 - cos(α - φ)]    (3.38)

The derivative of g(φ) is

g'(φ) = sin(α - φ)/(R2 sqrt(1 - u^2))    (3.39)

For the four-bar linkage problem presented in Section 3.1, R1 = 5/3, R2 = 5/2, and R3 = 11/6. Let's find the output φ for the input α = 40 deg. Equations (3.37) to (3.39) become

φ_{i+1} = g(φ_i) = cos^-1[u(φ_i)]    (3.40)
u(φ_i) = (2/5)[(5/3) cos(40.0) + 11/6 - cos(40.0 - φ_i)]    (3.41)
g'(φ_i) = sin(40.0 - φ_i)/((5/2) sqrt(1 - [u(φ_i)]^2))    (3.42)

Let φ1 = 30.0 deg. Substituting φ1 = 30.0 deg into Eq. (3.25) gives f(φ1) = -0.03979719. Equations (3.40) to (3.42) give

u(30.0) = (2/5)[(5/3) cos(40.0) + 11/6 - cos(40.0 - 30.0)] = 0.850107    (3.43)
φ2 = g(30.0) = cos^-1[u(30.0)] = cos^-1(0.850107) = 31.776742 deg    (3.44)
g'(30.0) = sin(40.0 - 30.0)/((5/2) sqrt(1 - (0.850107)^2)) = 0.131899    (3.45)

Substituting φ2 = 31.776742 deg into Eq. (3.25) gives f(φ2) = -0.00491050. The entire procedure is now repeated with φ2 = 31.776742 deg. These results and the results of the subsequent iterations are presented in Table 3.6. The convergence criterion, |φ_{i+1} - φi| ≤ 0.000001 deg, is satisfied on the eighth iteration, which requires eight iterations. This is a considerable improvement over the interval halving (bisection) method presented in Example 3.1, which requires 24 iterations. It is comparable to the false position (regula falsi) method presented in Example 3.2, which requires nine iterations.

Table 3.6. Fixed-Point Iteration for the Second Form of g(φ)

i   φi, deg     f(φi)         u(φi)        g'(φi)     φi+1, deg   f(φi+1)
1   30.000000   -0.03979719   0.85010653   0.131899   31.776742   -0.00491050
2   31.776742   -0.00491050   0.84814233   0.107995   31.989810   -0.00052505
3   31.989810   -0.00052505   0.84793231   0.105148   32.012517   -0.00005515
4   32.012517   -0.00005515   0.84791025   0.104845   32.014901   -0.00000578
5   32.014901   -0.00000578   0.84790794   0.104814   32.015151   -0.00000061
6   32.015151   -0.00000061   0.84790769   0.104810   32.015177   -0.00000006
7   32.015177   -0.00000006   0.84790767   0.104810   32.015180   -0.00000001
8   32.015180   -0.00000001   0.84790767   0.104810   32.015180   0.00000000

The convergence of the fixed-point iteration method, or any iterative method of the form x_{i+1} = g(x_i), is analyzed as follows. Consider the iteration formula:

x_{i+1} = g(x_i)    (3.46)

Let x = α denote the solution and let e = (x - α) denote the error. Subtracting α = g(α) from Eq. (3.46) gives

x_{i+1} - α = e_{i+1} = g(x_i) - g(α)    (3.47)

Expressing g(α) in a Taylor series about x_i gives

g(α) = g(x_i) + g'(ξ)(α - x_i) + ...    (3.48)

where x_i ≤ ξ ≤ α. Truncating Eq. (3.48) after the first-order term, and substituting the result into Eq. (3.47), yields

e_{i+1} = g'(ξ) e_i    (3.49)

Equation (3.49) can be used to determine whether or not a method is convergent, and if it is convergent, its rate of convergence. For any iterative method to converge,

|e_{i+1}/e_i| = |g'(ξ)| < 1    (3.50)

Consequently, the fixed-point iteration method is convergent only if |g'(ξ)| < 1. Convergence is linear, since e_{i+1} is linearly dependent on e_i. If |g'(ξ)| > 1, the procedure diverges. If |g'(ξ)| < 1 but close to 1, convergence is quite slow. For the example presented in Table 3.6, g'(α) = 0.104810, which means that the error decreases by a factor of approximately 10 at each iteration. Such rapid convergence does not occur when |g'(α)| is close to 1. For |g'(α)| = 0.9, approximately 22 times as many iterations would be required to reach the same convergence criterion.

If the nonlinear equation, f(φ) = 0, is rearranged into the form φ = φ + f(φ) = g(φ), the fixed-point iteration formula becomes

φ_{i+1} = φi + f(φi) = g(φi)    (3.51)

and g'(φ) is given by

g'(φ) = 1 + R2 sin(φ) - sin(α - φ)    (3.52)

Substituting the final solution value, φ = 32.015180 deg, into Eq. (3.52) gives g'(φ) = 2.186449, which is larger than 1. The iteration method would not converge to the desired solution for this rearrangement of f(φ) = 0 into φ = g(φ). In fact, the solution converges to φ = -9.747105 deg, for which g'(φ) = -0.186449. This is also a solution of the four-bar linkage problem, but not the desired solution. Methods which sometimes work and sometimes fail are undesirable. Consequently, the fixed-point iteration method for solving nonlinear equations is not recommended.

Methods for accelerating the convergence of iterative methods based on a knowledge of the convergence rate can be developed. Aitken's Δ^2 acceleration method applies to linearly converging methods, in which e_{i+1} = k e_i. The method is based on starting with an initial approximation x_i for which

x_i = α + e_i    (3.53a)

Two more iterations are made to give

x_{i+1} = α + e_{i+1} = α + k e_i    (3.53b)
x_{i+2} = α + e_{i+2} = α + k e_{i+1} = α + k^2 e_i    (3.53c)

There are three unknowns in Eqs. (3.53a) to (3.53c): α, e_i, and k. These three equations can be solved for these three unknowns. The value of α obtained by the procedure is not the exact root, since higher-order terms have been neglected. However, it is an improved approximation of the root. The procedure is then repeated using α as the initial approximation. When applied to the fixed-point iteration method for finding the roots of a nonlinear equation, this procedure is known as Steffensen's method. Steffensen's method is not developed in this book, since Newton's method, which is presented in Section 3.4.2, is a more straightforward procedure for achieving quadratic convergence.
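Eliminating k and e_i from Eqs. (3.53a) to (3.53c) gives the familiar Δ^2 extrapolation formula. A minimal Python sketch of one such step follows; the function name is an illustrative assumption.

def aitken(x0, x1, x2):
    # One Aitken delta-squared step: given three iterates obeying the
    # linear error model of Eqs. (3.53a)-(3.53c), solve for alpha.
    d1 = x1 - x0                      # e_i (k - 1)
    d2 = x2 - x1                      # e_i k (k - 1)
    return x0 - d1 * d1 / (d2 - d1)   # alpha = x_i - e_i

Restarting the fixed-point iteration from the extrapolated value after every two iterations is, as noted above, Steffensen's method.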

3.4.2. Newton's Method

Newton's method (sometimes called the Newton-Raphson method) for solving nonlinear equations is one of the most well-known and powerful procedures in all of numerical analysis. It always converges if the initial approximation is sufficiently close to the root, and it converges quadratically. Its only disadvantage is that the derivative f'(x) of the nonlinear function f(x) must be evaluated.

Newton's method is illustrated graphically in Figure 3.10. The function f(x) is nonlinear. Let's locally approximate f(x) by the linear function g(x), which is tangent to f(x), and find the solution for g(x) = 0. That solution is then taken as the next approximation to the solution of f(x) = 0. The procedure is applied iteratively to convergence. Thus, Newton's method is sometimes called the tangent method.

Figure 3.10 Newton's method.

From the geometry of Figure 3.10,

f'(x_i) = slope of f(x) = [f(x_i) - 0]/(x_i - x_{i+1})    (3.54)

Solving Eq. (3.54) for x_{i+1} with f(x_{i+1}) = 0 yields

x_{i+1} = x_i - f(x_i)/f'(x_i)    (3.55)

Equation (3.55) is applied repetitively until either one or both of the following convergence criteria are satisfied:

|x_{i+1} - x_i| ≤ ε1 and/or |f(x_{i+1})| ≤ ε2    (3.56)

Newton's method also can be obtained from the Taylor series:

f(x_{i+1}) = f(x_i) + f'(x_i)(x_{i+1} - x_i) + ...    (3.57)

Truncating Eq. (3.57) after the first derivative term, setting f(x_{i+1}) = 0, and solving for x_{i+1} yields

x_{i+1} = x_i - f(x_i)/f'(x_i)    (3.58)

Equation (3.58) is the same as Eq. (3.55).

Example 3.5. Newton's method.

To illustrate Newton's method, let's solve the four-bar linkage problem presented in Section 3.1. Recall Eq. (3.3):

f(φ) = R1 cos(α) - R2 cos(φ) + R3 - cos(α - φ) = 0    (3.59)

The derivative of f(φ), f'(φ), is

f'(φ) = R2 sin(φ) - sin(α - φ)    (3.60)

Thus, Eq. (3.55) becomes

φ_{i+1} = φi - f(φi)/f'(φi)    (3.61)

For R1 = 5/3, R2 = 5/2, R3 = 11/6, and α = 40.0 deg, Eqs. (3.59) and (3.60) yield

f(φ) = (5/3) cos(40.0) - (5/2) cos(φ) + 11/6 - cos(40.0 - φ) = 0    (3.62)
f'(φ) = (5/2) sin(φ) - sin(40.0 - φ)    (3.63)

For the first iteration let φ1 = 30.0 deg. Equations (3.62) and (3.63) give

f(φ1) = (5/3) cos(40.0) - (5/2) cos(30.0) + 11/6 - cos(40.0 - 30.0) = -0.03979719    (3.64)
f'(φ1) = (5/2) sin(30.0) - sin(40.0 - 30.0) = 1.07635182    (3.65)

Substituting these results into Eq. (3.61) gives

φ2 = 30.0 - (-0.03979719/1.07635182)(180/π) = 32.118463 deg    (3.66)

Substituting φ2 = 32.118463 deg into Eq. (3.62) gives f(φ2) = 0.00214376. These results and the results of subsequent iterations are presented in Table 3.7. The convergence criterion, |φ_{i+1} - φi| ≤ 0.000001 deg, is satisfied on the fourth iteration. This is a considerable improvement over the interval-halving method, the false position method, and the fixed-point iteration method.

Convergence of Newton's method is determined as follows. Recall Eq. (3.55):

x_{i+1} = x_i - f(x_i)/f'(x_i)    (3.67)

Table 3.7. Newton's Method

φi, deg     f(φi)         f'(φi)       φi+1, deg   f(φi+1)
30.000000   -0.03979719   1.07635182   32.118463   0.00214376
32.118463   0.00214376    1.19205359   32.015423   0.00000503
32.015423   0.00000503    1.18646209   32.015180   0.00000000
32.015180   0.00000000    1.18644892   32.015180   0.00000000

Equation (3.67) is of the form x_{i+1} = g(x_i), where g(x) is given by

g(x) = x - f(x)/f'(x)    (3.68)

Differentiating g(x) gives

g'(x) = 1 - {[f'(x)]^2 - f(x) f''(x)}/[f'(x)]^2 = f(x) f''(x)/[f'(x)]^2    (3.69)

At the root, x = α and f(α) = 0. Thus, g'(α) = 0. As shown by Eq. (3.50), for convergence of any iterative method in the form of Eq. (3.68),

|g'(ξ)| < 1    (3.70)

Since g'(α) = 0, Eq. (3.70) is satisfied near the root. Consequently, Newton's method is convergent.

The convergence rate of Newton's method is determined as follows. Expressing f(x) in a Taylor series about x_i, truncating after the second-order term, and evaluating at x = α yields

f(α) = f(x_i) + f'(x_i)(α - x_i) + (1/2) f''(ξ)(α - x_i)^2 = 0,  x_i ≤ ξ ≤ α    (3.71)

where ξ lies between x_i and α. At the root, x = α and f(α) = 0. Recall Eq. (3.67):

x_{i+1} = x_i - f(x_i)/f'(x_i)    (3.72)

Let e = x - α denote the error. Subtracting α from both sides of Eq. (3.72) gives

x_{i+1} - α = e_{i+1} = e_i - f(x_i)/f'(x_i)    (3.73)

Solving Eq. (3.71) for f(x_i) gives

f(x_i) = f'(x_i) e_i - (1/2) f''(ξ) e_i^2    (3.74)

Substituting Eq. (3.74) into Eq. (3.73) gives

e_{i+1} = e_i - [f'(x_i) e_i - (1/2) f''(ξ) e_i^2]/f'(x_i) = (1/2) [f''(ξ)/f'(x_i)] e_i^2    (3.75)

In the limit as i → ∞, x_i → α, f''(ξ) → f''(α), f'(x_i) → f'(α), and Eq. (3.75) becomes

e_{i+1} = (1/2) [f''(α)/f'(α)] e_i^2    (3.76)

Equation (3.76) shows that convergence is second-order, or quadratic. The number of significant figures essentially doubles each iteration.

As the solution is approached, f(x_i) → 0, and Eq. (3.70) is satisfied. For a poor initial estimate, however, Eq. (3.70) may not be satisfied. In that case, the procedure may converge to an alternate solution, or the solution may jump around wildly for a while and then converge to the desired solution or an alternate solution. The procedure will not diverge disastrously like the fixed-point iteration method when |g'(x)| > 1. Newton's method has excellent local convergence properties. However, its global convergence properties can be very poor.

Newton's method requires the value of the derivative f'(x) in addition to the value of the function f(x). When the function f(x) is an algebraic function or a transcendental function, f'(x) can be determined analytically. Some functions are difficult to differentiate analytically, and some functions cannot be differentiated analytically at all. However, when the function f(x) is a general nonlinear relationship between an input x and an output f(x), f'(x) cannot be determined analytically. In such cases, f'(x) can be estimated numerically by evaluating f(x) at x_i and x_i + δ, and approximating f'(x_i) as

f'(x_i) = [f(x_i + δ) - f(x_i)]/δ    (3.77)

This process is called the approximate Newton method. This procedure doubles the number of function evaluations at each iteration. However, it eliminates the evaluation of f'(x) at each iteration. In problems where evaluation of f'(x) is more costly than evaluation of f(x), this procedure may be less work. If δ is small, round-off errors are introduced, and if δ is too large, the convergence rate is decreased. Due to the neglect of the higher-order terms in the Taylor series presented in Eq. (3.57), the second-order convergence is lost, and convergence drops to first order. When f'(x) is unavailable, the approximate Newton method defined by Eq. (3.77) or the secant method presented in Section 3.4.3 is recommended.

In some cases, the efficiency of Newton's method can be increased by using the same value of f'(x) for several iterations. As long as the sign of f'(x) does not change, the iterates x_i move toward the root. However, the second-order convergence is lost, so the overall procedure converges more slowly. This procedure is called the lagged Newton's method. This is especially true in the solution of systems of nonlinear equations, which is discussed in Section 3.7. A higher-order version of Newton's method can be obtained by retaining the second derivative term in the Taylor series presented in Eq. (3.57). This procedure requires the evaluation of f''(x) and the solution of a quadratic equation for Δx = x_{i+1} - x_i. This procedure is not used very often.

Newton's method can be used to determine complex roots of real equations, or complex roots of complex equations, simply by using complex arithmetic. Newton's method also can be used to find multiple roots of nonlinear equations. When multiple roots occur, convergence drops to first order. Both of these applications of Newton's method, complex roots and multiple roots, are discussed in Section 3.5, which is concerned with polynomials, which can have both complex roots and multiple roots. Newton's method is also an excellent method for polishing roots obtained by other methods which yield results polluted by round-off errors, such as roots of deflated functions (see Section 3.5.2).
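A minimal Python sketch of Eqs. (3.55) and (3.56) is given below; when no derivative is supplied, it falls back on the one-sided difference of Eq. (3.77). The names, the default increment delta, and the default tolerances are illustrative assumptions.

def newton(f, x1, fprime=None, delta=1.0e-6, eps1=1.0e-6, eps2=1.0e-6,
           max_iter=50):
    # Newton's method, Eq. (3.55): x_{i+1} = x_i - f(x_i)/f'(x_i).
    x = x1
    for _ in range(max_iter):
        fx = f(x)
        # Approximate Newton method, Eq. (3.77), if f'(x) is unavailable.
        dfx = fprime(x) if fprime is not None else (f(x + delta) - fx) / delta
        x_new = x - fx / dfx
        if abs(x_new - x) <= eps1 or abs(f(x_new)) <= eps2:   # Eq. (3.56)
            return x_new
        x = x_new
    raise RuntimeError("Newton's method did not converge")

With complex starting values and complex-valued f, the same few lines find complex roots, since Python's arithmetic operators act on complex numbers directly.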

The presence of a local extremum (i.e., a maximum or minimum) in f(x) in the neighborhood of a root may cause oscillations in the solution. The presence of inflection points in f(x) in the neighborhood of a root can cause problems. These last two situations are discussed in Section 3.6.1, which is concerned with pitfalls in root finding. When Newton's method misbehaves, it may be necessary to bracket the solution within a closed interval and ensure that successive approximations remain within the interval. In extremely difficult cases, it may be necessary to make several iterations with the interval halving method to reduce the size of the interval before continuing with Newton's method.

3.4.3. The Secant Method

When the derivative function, f'(x), is unavailable or prohibitively costly to evaluate, an alternative to Newton's method is required. The preferred alternative is the secant method.

A secant to a curve is the straight line which passes through two points on the curve. The secant method is illustrated graphically in Figure 3.11. The nonlinear function f(x) is approximated locally by the linear function g(x), which is the secant to f(x), and the root of g(x) is taken as an improved approximation to the root of the nonlinear function f(x). The procedure is applied repetitively to convergence. Two initial approximations, x0 and x1, which are not required to bracket the root, are required to initiate the secant method.

Figure 3.11 The secant method.

The slope of the secant passing through two points, x_{i-1} and x_i, is given by

g'(x_i) = [f(x_i) - f(x_{i-1})]/(x_i - x_{i-1})    (3.78)

The equation of the secant line is given by

[f(x_{i+1}) - f(x_i)]/(x_{i+1} - x_i) = g'(x_i)    (3.79)

where f(x_{i+1}) = 0. Solving Eq. (3.79) for x_{i+1} yields

x_{i+1} = x_i - f(x_i)/g'(x_i)    (3.80)

Equation (3.80) is applied repetitively until either one or both of the following two convergence criteria are satisfied:

|x_{i+1} - x_i| ≤ ε1 and/or |f(x_{i+1})| ≤ ε2    (3.81)

Example 3.6. The secant method.

Let's solve the four-bar linkage problem presented in Section 3.1 by the secant method. Recall Eq. (3.3):

f(φ) = R1 cos(α) - R2 cos(φ) + R3 - cos(α - φ) = 0    (3.82)

Thus, Eq. (3.80) becomes

φ_{i+1} = φi - f(φi)/g'(φi)    (3.83)

where g'(φi) is given by

g'(φi) = [f(φi) - f(φ_{i-1})]/(φi - φ_{i-1})    (3.84)

For R1 = 5/3, R2 = 5/2, R3 = 11/6, and α = 40.0 deg, Eq. (3.82) yields

f(φ) = (5/3) cos(40.0) - (5/2) cos(φ) + 11/6 - cos(40.0 - φ) = 0    (3.85)

For the first iteration, let φ0 = 30.0 deg and φ1 = 40.0 deg. Equation (3.85) gives

f(φ0) = (5/3) cos(40.0) - (5/2) cos(30.0) + 11/6 - cos(40.0 - 30.0) = -0.03979719    (3.86a)
f(φ1) = (5/3) cos(40.0) - (5/2) cos(40.0) + 11/6 - cos(40.0 - 40.0) = 0.19496296    (3.86b)

Substituting these results into Eq. (3.84) gives

g'(φ1) = [0.19496296 - (-0.03979719)]/(40.0 - 30.0) = 0.02347602    (3.87)

Substituting g'(φ1) into Eq. (3.83) gives

φ2 = 40.0 - 0.19496296/0.02347602 = 31.695228 deg    (3.88)

Substituting φ2 = 31.695228 deg into Eq. (3.85) gives f(φ2) = -0.00657688. These results and the results of subsequent iterations are presented in Table 3.8. The convergence criterion, |φ_{i+1} - φi| ≤ 0.000001 deg, is satisfied on the fifth iteration, which is one iteration more than Newton's method requires.
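The update of Eqs. (3.80) and (3.84) maps directly onto a short loop. A minimal Python sketch, with illustrative names and tolerances, assuming the two starting values are distinct:

def secant(f, x0, x1, eps1=1.0e-6, eps2=1.0e-6, max_iter=50):
    # Secant method, Eq. (3.80): Newton's method with f'(x_i) replaced
    # by the secant slope g'(x_i) of Eq. (3.78).
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        slope = (f1 - f0) / (x1 - x0)
        x2 = x1 - f1 / slope
        if abs(x2 - x1) <= eps1 or abs(f(x2)) <= eps2:   # Eq. (3.81)
            return x2
        x0, f0 = x1, f1
        x1, f1 = x2, f(x2)
    raise RuntimeError("secant method did not converge")

Note that only one new function evaluation is needed per iteration, which is the source of the method's efficiency advantage discussed below.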

Table 3.8. The Secant Method

i   φi, deg     f(φi)         g'(φi)       φi+1, deg   f(φi+1)
0   30.000000   -0.03979719
1   40.000000   0.19496296    0.02347602   31.695228   -0.00657688
2   31.695228   -0.00657688   0.02426795   31.966238   -0.00101233
3   31.966238   -0.00101233   0.02053257   32.015542   0.00000749
4   32.015542   0.00000749    0.02068443   32.015180   -0.00000001
5   32.015180   -0.00000001   0.02070761   32.015180   0.00000000

The convergence rate of the secant method was analyzed by Jeeves (1958), who showed that

e_{i+1} = [(1/2) f''(α)/f'(α)]^0.62... e_i^1.62...    (3.89)

Convergence occurs at the rate 1.62..., which is considerably faster than the linear convergence rate of the fixed-point iteration method, but somewhat slower than the quadratic convergence rate of Newton's method. The question of which method is more efficient, Newton's method or the secant method, was also answered by Jeeves. He showed that if the effort required to evaluate f'(x) is less than 43 percent of the effort required to evaluate f(x), then Newton's method is more efficient. Otherwise, the secant method is more efficient. The problems with Newton's method discussed at the end of Section 3.4.2 also apply to the secant method.

3.4.4. Muller's Method

Muller's method (1956) is based on locally approximating the nonlinear function f(x) by a quadratic function g(x), and the root of the quadratic function g(x) is taken as an improved approximation to the root of the nonlinear function f(x). The procedure is applied repetitively to convergence. Three initial approximations, x1, x2, and x3, which are not required to bracket the root, are required to start the algorithm. The only difference between Muller's method and the secant method is that g(x) is a quadratic function in Muller's method and a linear function in the secant method.

Muller's method is illustrated graphically in Figure 3.12. The quadratic function g(x) is specified as follows:

g(x) = a(x - x_i)^2 + b(x - x_i) + c    (3.90)

The coefficients (i.e., a, b, and c) are determined by requiring g(x) to pass through the three known points (x_i, f_i), (x_{i-1}, f_{i-1}), and (x_{i-2}, f_{i-2}). Thus,

g(x_i) = f_i = a(x_i - x_i)^2 + b(x_i - x_i) + c = c    (3.91a)
g(x_{i-1}) = f_{i-1} = a(x_{i-1} - x_i)^2 + b(x_{i-1} - x_i) + c    (3.91b)
g(x_{i-2}) = f_{i-2} = a(x_{i-2} - x_i)^2 + b(x_{i-2} - x_i) + c    (3.91c)

Figure 3.12 Muller's method.

Equation (3.91a) shows that c = f_i. Define the following parameters:

h1 = (x_{i-1} - x_i) and h2 = (x_{i-2} - x_i)    (3.92a)
δ1 = (f_{i-1} - f_i) and δ2 = (f_{i-2} - f_i)    (3.92b)

Then Eqs. (3.91b) and (3.91c) become

h1^2 a + h1 b = δ1    (3.93a)
h2^2 a + h2 b = δ2    (3.93b)

Solving Eq. (3.93) by Cramer's rule yields

a = (δ1 h2 - δ2 h1)/[h1 h2 (h1 - h2)] and b = (δ2 h1^2 - δ1 h2^2)/[h1 h2 (h1 - h2)]    (3.94)

Now that the coefficients (i.e., a, b, and c) have been evaluated, Eq. (3.90) can be solved for the value of (x_{i+1} - x_i) which gives g(x_{i+1}) = 0. Thus,

g(x_{i+1}) = a(x_{i+1} - x_i)^2 + b(x_{i+1} - x_i) + c = 0    (3.95)

Solving Eq. (3.95) for (x_{i+1} - x_i) by the rationalized quadratic formula gives

(x_{i+1} - x_i) = -2c/(b ± sqrt(b^2 - 4ac))    (3.96)

Solving Eq. (3.96) for x_{i+1} yields

x_{i+1} = x_i - 2c/(b ± sqrt(b^2 - 4ac))    (3.97)

The sign, + or -, of the square root term in the denominator of Eq. (3.97) is chosen to be the same as the sign of b to keep x_{i+1} close to x_i. Equation (3.97) is applied repetitively until either one or both of the following two convergence criteria are satisfied:

|x_{i+1} - x_i| ≤ ε1 and/or |f(x_{i+1})| ≤ ε2    (3.98)
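A minimal Python sketch of Eqs. (3.92) to (3.97) follows. The names are illustrative assumptions; cmath.sqrt is used so that the parabola's zero may be complex, which lets the sketch find complex roots even from real starting values. Recomputing all three function values each pass is wasteful but keeps the loop clear.

import cmath

def muller(f, x0, x1, x2, eps1=1.0e-6, max_iter=50):
    # Muller's method: fit a parabola through the last three iterates
    # and take its zero nearest x2, Eqs. (3.92)-(3.97).
    for _ in range(max_iter):
        f0, f1, f2 = f(x0), f(x1), f(x2)
        h1, h2 = x1 - x2, x0 - x2              # Eq. (3.92a)
        d1, d2 = f1 - f2, f0 - f2              # Eq. (3.92b)
        den = h1 * h2 * (h1 - h2)
        a = (d1 * h2 - d2 * h1) / den          # Eq. (3.94)
        b = (d2 * h1**2 - d1 * h2**2) / den    # Eq. (3.94)
        c = f2                                 # Eq. (3.91a)
        sq = cmath.sqrt(b * b - 4.0 * a * c)
        # Pick the sign that maximizes |denominator|, per Eq. (3.97).
        den1, den2 = b + sq, b - sq
        x_new = x2 - 2.0 * c / (den1 if abs(den1) >= abs(den2) else den2)
        if abs(x_new - x2) <= eps1:            # Eq. (3.98)
            return x_new
        x0, x1, x2 = x1, x2, x_new
    raise RuntimeError("Muller's method did not converge")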

Example 3.7. Muller's method.

Let's illustrate Muller's method by solving the four-bar linkage problem presented in Section 3.1. Recall Eq. (3.3):

f(φ) = R1 cos(α) - R2 cos(φ) + R3 - cos(α - φ) = 0    (3.99)

Equation (3.97) becomes

φ_{i+1} = φi - 2c/(b ± sqrt(b^2 - 4ac))    (3.100)

where c = f(φi), and a and b are given by Eq. (3.94). For R1 = 5/3, R2 = 5/2, R3 = 11/6, and α = 40.0 deg, Eq. (3.99) becomes

f(φ) = (5/3) cos(40.0) - (5/2) cos(φ) + 11/6 - cos(40.0 - φ) = 0    (3.101)

For the first iteration, let φ1 = 30.0 deg, φ2 = 30.5 deg, and φ3 = 31.0 deg. Substituting these values into Eq. (3.101) gives

f(φ1) = -0.03979719, f(φ2) = -0.03028443, and f(φ3) = -0.02053252    (3.102)

Thus, c = f3 = -0.02053252. Substituting these values of φ1, φ2, φ3, f1, f2, and f3 into Eq. (3.92) gives

h1 = (φ2 - φ3) = -0.50 and h2 = (φ1 - φ3) = -1.00
δ1 = (f2 - f3) = -0.0097519103 and δ2 = (f1 - f3) = -0.019264670    (3.103)

Substituting these results into Eq. (3.94) yields

a = (δ1 h2 - δ2 h1)/[h1 h2 (h1 - h2)] = 0.00047830092
b = (δ2 h1^2 - δ1 h2^2)/[h1 h2 (h1 - h2)] = 0.019742971    (3.104)

Substituting these results into Eq. (3.100) yields

φ4 = 31.0 - 2.0(-0.02053252)/[0.019742971 + sqrt((0.019742971)^2 - 4.0(0.00047830092)(-0.02053252))]    (3.105)

which yields φ4 = 32.015031 deg. These results and the results of subsequent iterations are presented in Table 3.9. The convergence criterion, |φ_{i+1} - φi| ≤ 0.000001 deg, is satisfied on the third iteration.

Table 3.9. Muller's Method

φ_{i-2}, deg   φ_{i-1}, deg   φi, deg     φ_{i+1}, deg   f(φ_{i+1})
30.000000      30.500000      31.000000   32.015031      -0.00000309
30.500000      31.000000      32.015031   32.015180      0.00000000
31.000000      32.015031      32.015180   32.015180      0.00000000

The convergence rate of Muller's method is 1.84, which is faster than the 1.62 rate of the secant method and slower than the 2.0 rate of Newton's method.

Summary

Four open methods for finding the roots of a nonlinear equation are presented in this section: the fixed-point iteration method, Newton's method, the secant method, and Muller's method. The fixed-point iteration method has a linear convergence rate and converges slowly, or not at all if |g'(α)| > 1. Consequently, this method is not recommended. Newton's method, the secant method, and Muller's method all have a higher-order convergence rate (2.0 for Newton's method, 1.62 for the secant method, and 1.84 for Muller's method). All three methods converge rapidly in the vicinity of a root. When the derivative f'(x) is difficult to determine or time consuming to evaluate, the secant method is more efficient. In extremely sensitive problems, all three methods may misbehave and require some bracketing technique. All three of the methods can find complex roots simply by using complex arithmetic and choosing complex initial approximations. Generally speaking, the secant method is preferred because of its simplicity, even though its convergence rate, 1.62, is slightly smaller than the convergence rate of Muller's method, 1.84. The secant method and Newton's method are highly recommended for finding the roots of nonlinear equations.

3.5 POLYNOMIALS

The methods of solving for the roots of nonlinear equations presented in Sections 3.3 and 3.4 apply to any form of nonlinear equation. One very common form of nonlinear equation is a polynomial. Several special features of solving for the roots of polynomials are discussed in this section.

3.5.1. Introduction

The basic properties of polynomials are presented in Section 4.2. Polynomials arise as the characteristic equation in eigenproblems, as the characteristic equation of higher-order ordinary differential equations, as the characteristic equation in systems of first-order ordinary differential equations, in curve fitting tabular data, etc. In all these cases, the roots of the polynomials must be determined.

The general form of an nth-degree polynomial is

Pn(x) = a0 + a1 x + a2 x^2 + ... + an x^n    (3.106)

where n denotes the degree of the polynomial and a0 to an are constant coefficients. The coefficients a0 to an may be real or complex. The roots may be real or complex. The roots may be single (i.e., simple) or repeated (i.e., multiple). The fundamental theorem of algebra states that an nth-degree polynomial has exactly n zeros, or roots. If the coefficients are all real, complex roots always occur in conjugate pairs. The evaluation of a polynomial with real coefficients and its derivatives is straightforward, using nested multiplication and synthetic division as discussed in Section 4.2. The evaluation of a polynomial with complex coefficients requires complex arithmetic.

The single root of a linear polynomial can be determined directly. Thus,

P1(x) = ax + b = 0    (3.107)

has the single root, x = α, given by

α = -b/a    (3.108)

The two roots of a second-degree polynomial can also be determined directly. Thus,

P2(x) = ax^2 + bx + c = 0    (3.109)

has two roots, α1 and α2, given by the quadratic formula:

α1, α2 = [-b ± sqrt(b^2 - 4ac)]/2a    (3.110)

Equation (3.110) yields two distinct real roots when b^2 > 4ac, two repeated real roots when b^2 = 4ac, and a pair of complex conjugate roots when b^2 < 4ac. When b^2 >> 4ac, Eq. (3.110) yields two distinct real roots which are the sum and difference of two nearly identical numbers. In that case, a more accurate result can be obtained by rationalizing Eq. (3.110):

x = [(-b ± sqrt(b^2 - 4ac))/2a][(-b ∓ sqrt(b^2 - 4ac))/(-b ∓ sqrt(b^2 - 4ac))]    (3.111)

which yields the rationalized quadratic formula:

x = 2c/(-b ∓ sqrt(b^2 - 4ac))    (3.112)

Exact formulas also exist for the roots of third-degree and fourth-degree polynomials, but they are quite complicated and rarely used. Iterative methods are used to find the roots of higher-degree polynomials.
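Combining Eqs. (3.110) and (3.112) gives a cancellation-free way to compute both roots: evaluate the large-magnitude root from Eq. (3.110) with the sign that adds quantities of like sign, and the companion root from Eq. (3.112). A minimal Python sketch, assuming real coefficients, b^2 ≥ 4ac, and c ≠ 0:

import math

def quadratic_roots(a, b, c):
    # Real roots of a*x**2 + b*x + c = 0, assuming b**2 >= 4*a*c.
    disc = math.sqrt(b * b - 4.0 * a * c)
    # Eq. (3.110) with the sign of b: -b and the root term add, no cancellation.
    x1 = (-b - math.copysign(disc, b)) / (2.0 * a)
    # Companion root from Eq. (3.112), equivalent to x1*x2 = c/a.
    x2 = c / (a * x1)
    return x1, x2

For example, with a = 1, b = -1000, c = 1, direct use of Eq. (3.110) for the small root subtracts two numbers agreeing to about six figures, while the sketch above returns both roots to full precision.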

Descartes' rule of signs, which applies to polynomials having real coefficients, states that the number of positive roots of Pn(x) is equal to the number of sign changes in the nonzero coefficients of Pn(x), or is smaller by an even integer. The number of negative roots is found in a similar manner by considering Pn(-x). For example, the fourth-degree polynomial

P4(x) = -4 + 2x + 3x^2 - 2x^3 + x^4    (3.113)

has three sign changes in the coefficients of P4(x) and one sign change in the coefficients of P4(-x) = -4 - 2x + 3x^2 + 2x^3 + x^4. Thus, the polynomial must have either three positive real roots and one negative real root, or one positive real root, one negative real root, and a pair of complex conjugate roots.

The bracketing methods presented in Section 3.3, interval halving and false position, cannot be used to find repeated roots with an even multiplicity, since the nonlinear function f(x) does not change sign at such roots. The first derivative f'(x) does change sign at such roots, but using f'(x) to keep the root bracketed increases the amount of work. Repeated roots with an odd multiplicity can be bracketed by monitoring the sign of f(x), but even in this case the open methods presented in Section 3.4 are more efficient. Three of the methods presented in Section 3.4 can be used to find the roots of polynomials: Newton's method, the secant method, and Muller's method.

One procedure for finding the roots of high-degree polynomials is to find one root by any method, then deflate the polynomial one degree by factoring out the known root using synthetic division, as discussed in Section 4.2. The deflated (n - 1)st-degree polynomial is then solved for the next root. This procedure can be repeated until all the roots are determined. The last two roots should be determined by applying the quadratic formula to the P2(x) determined after all the deflations. This procedure reduces the work, as the subsequent deflated polynomials are of lower and lower degree. It also avoids converging to an already converged root. The major limitation of this approach is that the coefficients of the deflated polynomials are not exact, so the roots of the deflated polynomials are not the precise roots of the original polynomial. Each deflation propagates the errors more and more, so the subsequent roots become less and less accurate. This problem is less serious if the roots are found in order from the smallest to the largest. In general, the roots of the deflated polynomials should be used as first approximations for those roots, which are then refined by solving the original polynomial using the roots of the deflated polynomials as the initial approximations. This process is known as root polishing.

The roots of high-degree polynomials can be quite sensitive to small changes in the values of the coefficients. In other words, high-degree polynomials can be ill-conditioned. Consider the factored fifth-degree polynomial:

P5(x) = (x - 1)(x - 2)(x - 3)(x - 4)(x - 5)    (3.114)

which has five positive real roots, 1, 2, 3, 4, and 5. Expanding Eq. (3.114) yields the standard polynomial form:

P5(x) = -120 + 274x - 225x^2 + 85x^3 - 15x^4 + x^5    (3.115)

Descartes' rule of signs shows that there are either five positive real roots, or three positive real roots and two complex conjugate roots, or one positive real root and two pairs of complex conjugate roots. To illustrate the sensitivity of the roots to the values of the coefficients, let's change the coefficient of x^2, which is 225, to 226, which is a change of only 0.44 percent in one coefficient. The five roots are now 1.0514..., 1.6191..., 3.4110... + I1.0793..., 3.4110... - I1.0793..., and 5.5075..., where I = sqrt(-1). Thus, a change of only 0.44 percent in one coefficient has made a major change in the roots, including the introduction of two complex conjugate roots. This simple example illustrates the difficulty associated with finding the roots of high-degree polynomials.

These three methods also can be used for finding the complex roots of polynomials, provided that complex arithmetic is used and reasonably good complex initial approximations are specified. Complex arithmetic is straightforward on digital computers. However, hand calculation using complex arithmetic is tedious and time consuming.

Several methods exist for extracting the complex roots of polynomials that have real coefficients which do not require complex arithmetic. Among these are Bairstow's method, the QD (quotient-difference) method [see Henrici (1964)], and Graeffe's method [see Hildebrand (1956)]. These three methods use only real arithmetic. The QD method and Graeffe's method can find all the roots of a polynomial, whereas Bairstow's method extracts quadratic factors which can then be solved by the quadratic formula. Bairstow's method is presented in Section 3.5.3. When a polynomial has complex coefficients, Newton's method or the secant method using complex arithmetic and complex initial approximations are the methods of choice.

3.5.2. Newton's Method

Newton's method for solving for the root, x = α, of a nonlinear equation, f(x) = 0, is presented in Section 3.4.2. Recall Eq. (3.55):

x_{i+1} = x_i - f(x_i)/f'(x_i)    (3.116)

Equation (3.116) will be called Newton's basic method in this section, to differentiate it from two variations of Newton's method which are presented in this section for finding multiple roots. Newton's basic method can be used to find simple roots of polynomials, multiple roots of polynomials (where the rate of convergence drops to first order), complex conjugate roots of polynomials with real coefficients, and complex roots of polynomials with complex coefficients. The first three of these applications are illustrated in this section. Accurate initial approximations are desirable, and in some cases they are necessary to achieve convergence. For maximum efficiency, f(x) and f'(x) should be evaluated by the nested multiplication algorithm presented in Section 4.2. In the examples in this section, f(x_i) and f'(x_i) will be evaluated directly for the sake of clarity.

Newton's Method for Simple Roots. Newton's basic method can be applied directly to find simple roots of polynomials. No special problems arise.

Example 3.8. Newton's method for simple roots.

Let's apply Newton's basic method to find the simple root of the following cubic polynomial in the neighborhood of x = 1.5:

f(x) = P3(x) = x^3 - 3x^2 + 4x - 2 = 0    (3.117)

The derivative of f(x), f'(x), is given by the second-degree polynomial:

f'(x) = P2(x) = 3x^2 - 6x + 4    (3.118)

Let x1 = 1.5. Substituting this value into Eqs. (3.117) and (3.118) gives f(1.5) = 0.6250 and f'(1.5) = 1.750. Substituting these values into Eq. (3.116) gives

x2 = x1 - f(x1)/f'(x1) = 1.5 - 0.6250/1.750 = 1.142857    (3.119)

These results and the results of subsequent iterations are presented in Table 3.10. Four iterations are required to satisfy the convergence tolerance, |x_{i+1} - x_i| ≤ 0.000001. Newton's method is an extremely rapid procedure for finding the roots of a polynomial if a reasonable initial approximation is available.

Table 3.10. Newton's Method for Simple Roots

x_i        f(x_i)       f'(x_i)      x_{i+1}    f(x_{i+1})
1.500000   0.62500000   1.75000000   1.142857   0.14577259
1.142857   0.14577259   1.06122449   1.005495   0.00549467
1.005495   0.00549467   1.00009057   1.000000   0.00000033
1.000000   0.00000033   1.00000000   1.000000   0.00000000

The remaining roots of Eq. (3.117) can be found in a similar manner by choosing different initial approximations. An alternate approach for finding the remaining roots is to deflate the original polynomial by factoring out the linear factor corresponding to the known root and solving for the roots of the deflated polynomial.

Polynomial Deflation. Let's illustrate polynomial deflation by factoring out the linear factor, (x - 1.0), from Eq. (3.117). Thus, Eq. (3.117) becomes

P3(x) = (x - 1.0) Q2(x)    (3.120)

The coefficients of the deflated polynomial Q2(x) can be determined by applying the synthetic division algorithm presented in Eq. (4.26). Recall Eq. (3.117):

P3(x) = x^3 - 3x^2 + 4x - 2    (3.121)

Applying Eq. (4.26) for x = 1.0 gives

b3 = a3 = 1.0
b2 = a2 + x b3 = -3.0 + (1.0)(1.0) = -2.0
b1 = a1 + x b2 = 4.0 + (1.0)(-2.0) = 2.0    (3.122)

Thus, Q2(x) is given by

Q2(x) = x^2 - 2.0x + 2.0    (3.123)
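The b_i recurrence used above, nested multiplication with synthetic division, is one short loop in code. A minimal Python sketch (the function name and the highest-degree-first coefficient convention are illustrative assumptions):

def deflate(a, root):
    # Synthetic division of P_n(x) by (x - root), as in Eq. (4.26).
    # a = [a_n, ..., a_1, a_0], highest degree first. Returns the
    # coefficients of Q_{n-1}(x) and the remainder, which equals P_n(root).
    b = [a[0]]
    for coeff in a[1:]:
        b.append(coeff + root * b[-1])
    return b[:-1], b[-1]

# The deflation above: P3(x) = x**3 - 3*x**2 + 4*x - 2 divided by (x - 1)
q, rem = deflate([1.0, -3.0, 4.0, -2.0], 1.0)
# q = [1.0, -2.0, 2.0], i.e., Q2(x) = x**2 - 2*x + 2, and rem = 0.0

A nonzero remainder signals that the supplied root is inexact, which is exactly the error that accumulates through repeated deflations and motivates root polishing.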

(3.(x): f(x) . (3.128) where the deflated function h(x) does not have a root at x = e. Including the multiplicity rn in Eq.~z)g/(x) (x -.mh(x)+ (x . but care must be exercised to discontinue the iterations as f’(x) approacheszero.128) into Eq.~)h(x) (3.129) (x .124) whichyields the complexconjugate roots.~)mg’(x) h(x) which yields u(x) -. (3. Next consider the variation where Newton’sbasic methodis applied to the function . 2. Substituting this result into Eq. f(x) can be expressedas (3. Newton’s Methodfor Multiple Roots Newton’s method. 3. (3.116) Solving for the root of the modified function.127) gives u(x) m(x ~)m-l +(x . Two variations of Newton’smethod restore the second-order convergenceof the basic method: 1.125) converges quadratically. (3. rate of the convergencedrops to first-order for a multiple root. h(7)¢ Substituting Eq. Thus. x = -b ± v~.123) is the desired deflated polynomial. Since Eq. Newton’sbasic methodcan be used.4.123) is a second-degree polynomial.4a¢ -(-2.160 Chapter3 Equation(3.0) = 2a + V/(-2.127) f (x) =(x . (3. However.50) showsthat Eq.r)mh(x) (3. Differentiating g(x) and evaluating the result at x = ~ yields g’(~) = 0.116): rn f(xi) Xi+ 1 Xi -~ (3.125) Equation(3.3.0)(2. u(x) =f(x)/f’(x) Thesetwo variations are presented in the following discussion. in various forms.126)" where ~ is betweenxi and ~.2 = 1 4. xi+1 = g(xi). Further analysis yields g"(~) ei+~ = Tei (3.0) 2(1.2.(x) -~’(x) If f (x) has rn repeated roots. (3. that is. its roots can be determinedby the quadratic formula. can be used to calculate multiple real roots.0) (3.125) is in the general iteration form.0(1. which showsthat Eq.5. Thus. First consider the variation whichincludes the multiplicity rn in Eq.130) .0) 2 . (3.125) is convergent. Ralston and Rabinowitz(1978) showthat a nonlinear functionf(x) approacheszero faster than derivativef’(x) approaches zero. ~1.~)mh(x) (3.I1.

Equation (3.130) shows that u(x) has a single root at x = α. Thus, Newton's basic method, with second-order convergence, can be applied to u(x) to give

x_{i+1} = x_i - u(x_i)/u'(x_i)    (3.131)

Differentiating Eq. (3.127) gives

u'(x) = {f'(x) f'(x) - f(x) f''(x)}/[f'(x)]^2    (3.132)

Substituting Eqs. (3.127) and (3.132) into Eq. (3.131) yields an alternate form of Eq. (3.131):

x_{i+1} = x_i - f(x_i) f'(x_i)/{[f'(x_i)]^2 - f(x_i) f''(x_i)}    (3.133)

The advantage of Eq. (3.133) over Newton's basic method for repeated roots is that Eq. (3.133) has second-order convergence. There are several disadvantages. There is additional calculation for f''(x_i), so Eq. (3.133) requires additional effort to evaluate. Round-off errors may be introduced due to the difference appearing in the denominator of Eq. (3.133). This method can also be used for simple roots, but it is less efficient than Newton's basic method in that case.

These methods can be applied to any nonlinear equation. They are presented in this section, devoted to polynomials, simply because the problem of multiple roots generally occurs more frequently for polynomials than for other nonlinear functions. The techniques presented here can also be applied with the secant method, although the evaluation of f''(x) is more complicated in that case.

In summary, three methods are presented for evaluating repeated roots: Newton's basic method (which reduces to first-order convergence), Newton's basic method including the multiplicity m, and Newton's basic method applied to the modified function u(x) = f(x)/f'(x). These three methods are illustrated in Example 3.9. The three methods are repeated below:

x_{i+1} = x_i - f(x_i)/f'(x_i)    (3.134)
x_{i+1} = x_i - m f(x_i)/f'(x_i)    (3.135)
x_{i+1} = x_i - u(x_i)/u'(x_i)    (3.136)

where m is the multiplicity of the root, and where u(x) = f(x)/f'(x) has the same roots as f(x).
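For comparison, here is a minimal Python sketch of one step of each of the two modified formulas (the basic update of Eq. (3.134) is the ordinary Newton step already sketched in Section 3.4.2). The function names and the derivative arguments fp and fpp are illustrative assumptions supplied by the caller.

def newton_multiplicity_step(f, fp, x, m):
    # One step of Eq. (3.135): x - m*f(x)/f'(x), m = known multiplicity.
    return x - m * f(x) / fp(x)

def newton_modified_step(f, fp, fpp, x):
    # One step of Eq. (3.136) with u(x) = f(x)/f'(x), written in the
    # alternate form of Eq. (3.133): x - f*f' / (f'**2 - f*f'').
    fx, fpx = f(x), fp(x)
    return x - fx * fpx / (fpx * fpx - fx * fpp(x))

Note the denominator of newton_modified_step is the difference of two nearly equal quantities near a repeated root, which is the round-off hazard mentioned above.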

Example 3.9. Newton's method for multiple roots.

Let's solve for the repeated root, r = 1, of the following third-degree polynomial:

f(x) = P3(x) = (x + 1)(x - 1)(x - 1) = 0    (3.137)
f(x) = x^3 - x^2 - x + 1 = 0    (3.138)

The first and second derivatives of f(x) are

f'(x) = 3x^2 - 2x - 1    (3.139)
f''(x) = 6x - 2    (3.140)

Let the initial approximation be x1 = 1.50. From Eqs. (3.138) to (3.140), f(1.50) = 0.6250, f'(1.50) = 2.750, and f''(1.50) = 7.0. Substituting these values into Eqs. (3.134) to (3.136), with m = 2 in Eq. (3.135), gives

x2 = 1.5 - 0.6250/2.750 = 1.272727    (3.141)
x2 = 1.5 - 2(0.6250)/2.750 = 1.045455    (3.142)
x2 = 1.5 - (0.6250)(2.750)/[(2.750)^2 - (0.6250)(7.0)] = 0.960784    (3.143)

These results and the results of subsequent iterations required to achieve the convergence tolerance, |Δx_{i+1}| ≤ 0.000001, are summarized in Table 3.11. Newton's basic method required 20 iterations, while the two other methods required only four iterations each. The advantage of these two methods over the basic method for repeated roots is obvious.

Newton's Method for Complex Roots. Newton's method, the secant method, and Muller's method can be used to calculate complex roots simply by using complex arithmetic and choosing complex initial approximations. Bracketing methods, such as interval halving and false position, cannot be used to find complex roots, since the sign of f(x) generally does not change at a complex root. The basic Newton method can find complex roots by using complex arithmetic and choosing a complex initial approximation. Newton's method is applied in this section to find the complex conjugate roots of a polynomial with real coefficients.

Example 3.10. Newton's method for complex roots.

Consider the third-degree polynomial:

f(x) = P3(x) = (x - 1)(x - 1 - I1)(x - 1 + I1) = 0    (3.144)
f(x) = x^3 - 3x^2 + 4x - 2 = 0    (3.145)

The roots of Eq. (3.144) are r = 1.0, 1 + I1, and 1 - I1. Let's find the complex root r = 1 + I1 starting with x1 = 0.5 + I0.5.

Table 3.11. Newton's Method for Multiple Real Roots

Newton's basic method, Eq. (3.134)

i    x_i        f(x_i)       x_{i+1}    f(x_{i+1})
1    1.500000   0.62500000   1.272727   0.16904583
2    1.272727   0.16904583   1.144082   0.04451055
3    1.144082   0.04451055   1.074383   0.01147723
...
19   1.000002   0.00000000   1.000001   0.00000000
20   1.000001   0.00000000   1.000000   0.00000000

Newton's multiplicity method, Eq. (3.135), with m = 2

i    x_i        f(x_i)       x_{i+1}    f(x_{i+1})
1    1.500000   0.62500000   1.045455   0.00422615
2    1.045455   0.00422615   1.000502   0.00000050
3    1.000502   0.00000050   1.000000   0.00000000
4    1.000000   0.00000000   1.000000   0.00000000

Newton's modified method, Eq. (3.136)

i    x_i        f(x_i)       x_{i+1}    f(x_{i+1})
1    1.500000   0.62500000   0.960784   0.00301543
2    0.960784   0.00301543   0.999600   0.00000032
3    0.999600   0.00000032   1.000000   0.00000000
4    1.000000   0.00000000   1.000000   0.00000000

The complex arithmetic was performed by a FORTRAN program for Newton's method. The results are presented in Table 3.12, with the unrecoverable intermediate iterates elided; the iterates converge to x = 1.000000 + I1.000000.

Table 3.12. Newton's Method for Complex Roots

i   x_i                    f(x_i)
1   0.500000 + I0.500000   -0.250000 + I0.750000
2   0.923077 + I0.384615   -0.043241 + I0.334546
...
    1.000000 + I1.000000   0.000000 + I0.000000

3.5.3. Bairstow's Method

A special problem associated with polynomials Pn(x) is the possibility of complex roots. Newton's method, the secant method, and Muller's method all can find complex roots if complex arithmetic is used and complex initial approximations are specified. Fortunately, complex arithmetic is available in several programming languages, such as FORTRAN. However, hand calculation using complex arithmetic is tedious and time consuming. When polynomials with real coefficients have complex roots, they occur in conjugate pairs, which corresponds to a quadratic factor of the polynomial Pn(x). Bairstow's method extracts quadratic factors from a polynomial using only real arithmetic. The quadratic formula can then be used to determine the corresponding pair of real roots or complex conjugate roots.

Consider the general nth-degree polynomial, Pn(x):

Pn(x) = x^n + a_{n-1} x^(n-1) + ... + a1 x + a0    (3.146)

Let's factor out a quadratic factor from Pn(x). Thus,

Pn(x) = (x^2 - rx - s) Q_{n-2}(x) + remainder    (3.147)

This form of the quadratic factor (i.e., x^2 - rx - s) is generally specified. Performing the division of Pn(x) by the quadratic factor yields

Pn(x) = (x^2 - rx - s)(x^(n-2) + b_{n-1} x^(n-3) + ... + b3 x + b2) + remainder    (3.148)

where the remainder is given by

Remainder = b1(x - r) + b0    (3.149)

When the remainder is zero, (x^2 - rx - s) is an exact factor of Pn(x). The roots of the quadratic factor, real or complex, can be determined by the quadratic formula.

For the remainder to be zero, both b1 and b0 must be zero. Both b1 and b0 depend on both r and s. Thus,

b1 = b1(r, s) and b0 = b0(r, s)    (3.150)

Thus, we have a two-variable root-finding problem: find the values r = r* and s = s* which yield b1 = b0 = 0. This problem can be solved by Newton's method for a system of nonlinear equations, which is presented in Section 3.7. Expressing Eq. (3.150) in the form of two two-variable Taylor series in terms of Δr = (r* - r) and Δs = (s* - s), where b1, b0, and the four partial derivatives are evaluated at the point (r, s), gives

b1(r*, s*) = b1 + (∂b1/∂r) Δr + (∂b1/∂s) Δs + ... = 0    (3.151a)
b0(r*, s*) = b0 + (∂b0/∂r) Δr + (∂b0/∂s) Δs + ... = 0    (3.151b)

Truncating Eq. (3.151) after the first-order terms and solving for Δr and Δs gives

(∂b1/∂r) Δr + (∂b1/∂s) Δs = -b1    (3.152)
(∂b0/∂r) Δr + (∂b0/∂s) Δs = -b0    (3.153)

Equations (3.152) and (3.153) can be solved for Δr and Δs by Cramer's rule or Gauss elimination. All that remains is to relate b1, b0, and the four partial derivatives to the coefficients of the polynomial Pn(x), that is, a_i (i = 0, 1, ..., n). Expanding the right-hand side of Eq. (3.148), including the remainder term, and comparing the two sides term by term, yields, using the synthetic division algorithm,

b_n = a_n    (3.154.n)
b_{n-1} = a_{n-1} + r b_n    (3.154.n-1)
b_{n-2} = a_{n-2} + r b_{n-1} + s b_n    (3.154.n-2)
...
b_1 = a_1 + r b_2 + s b_3    (3.154.1)
b_0 = a_0 + r b_1 + s b_2    (3.154.0)

Equation (3.154) is simply the synthetic division algorithm presented in Section 4.2 applied for a quadratic factor.

The four partial derivatives required in Eqs. (3.152) and (3.153) can be obtained by differentiating the coefficients b_i (i = n, n-1, ..., 1, 0) with respect to r and s, respectively. Since each coefficient b_i contains b_{i+1} and b_{i+2}, we must start with the partial derivatives of b_n and work our way down to the partial derivatives of b_1 and b_0. Bairstow showed that the results are identical to dividing Q_{n-2}(x) by the quadratic factor (x^2 - rx - s), using the synthetic division algorithm. The details are presented by Gerald and Wheatley (1999). The results are presented below:

c_n = b_n    (3.155.n)
c_{n-1} = b_{n-1} + r c_n    (3.155.n-1)
c_{n-2} = b_{n-2} + r c_{n-1} + s c_n    (3.155.n-2)
...
c_2 = b_2 + r c_3 + s c_4    (3.155.2)
c_1 = b_1 + r c_2 + s c_3    (3.155.1)

The required partial derivatives are given by

∂b1/∂r = c2 and ∂b1/∂s = c3    (3.156a)
∂b0/∂r = c1 and ∂b0/∂s = c2    (3.156b)

Thus, Eqs. (3.152) and (3.153) become

c2 Δr + c3 Δs = -b1    (3.157a)
c1 Δr + c2 Δs = -b0    (3.157b)
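The two synthetic divisions of Eqs. (3.154) and (3.155) and the 2x2 solve of Eq. (3.157) fit in one short loop. A minimal Python sketch, assuming the polynomial degree is at least 3 and that the coefficient list a[i] holds the coefficient of x**i:

def bairstow(a, r, s, eps=1.0e-6, max_iter=50):
    # Extract a quadratic factor x**2 - r*x - s from
    # a[0] + a[1]*x + ... + a[n]*x**n, Eqs. (3.154)-(3.157).
    n = len(a) - 1
    for _ in range(max_iter):
        b = [0.0] * (n + 1)
        c = [0.0] * (n + 1)
        b[n] = a[n]
        b[n - 1] = a[n - 1] + r * b[n]
        for i in range(n - 2, -1, -1):          # Eq. (3.154)
            b[i] = a[i] + r * b[i + 1] + s * b[i + 2]
        c[n] = b[n]
        c[n - 1] = b[n - 1] + r * c[n]
        for i in range(n - 2, 0, -1):           # Eq. (3.155)
            c[i] = b[i] + r * c[i + 1] + s * c[i + 2]
        det = c[2] * c[2] - c[1] * c[3]         # Cramer's rule on Eq. (3.157)
        dr = (b[0] * c[3] - b[1] * c[2]) / det
        ds = (b[1] * c[1] - b[0] * c[2]) / det
        r, s = r + dr, s + ds
        if abs(dr) <= eps and abs(ds) <= eps:
            return r, s
    raise RuntimeError("Bairstow's method did not converge")

# The worked case below: x**3 - 3*x**2 + 4*x - 2 with r1 = 1.5, s1 = -2.5
# returns approximately (2.0, -2.0), i.e., the factor x**2 - 2*x + 2.
r_fac, s_fac = bairstow([-2.0, 4.0, -3.0, 1.0], 1.5, -2.5)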

where Δr = (r* - r) and Δs = (s* - s). Thus,

r_{i+1} = r_i + Δr_i    (3.158a)
s_{i+1} = s_i + Δs_i    (3.158b)

Equations (3.157) and (3.158) are applied repetitively until either one or both of the following convergence criteria are satisfied:

|Δr_i| ≤ ε1 and |Δs_i| ≤ ε1    (3.159a)
|(b1)_{i+1} - (b1)_i| ≤ ε2 and |(b0)_{i+1} - (b0)_i| ≤ ε2    (3.159b)

Example 3.11. Bairstow's method for quadratic factors.

Let's illustrate Bairstow's method by solving for a quadratic factor of Eq. (3.145):

f(x) = x^3 - 3x^2 + 4x - 2 = 0    (3.160)

The roots of Eq. (3.160) are r = 1.0, 1 + I1, and 1 - I1. To initiate the calculations, let r1 = 1.5 and s1 = -2.5. Substituting these values into Eq. (3.154) gives

b3 = a3 = 1.0
b2 = a2 + r b3 = -3.0 + (1.5)(1.0) = -1.5
b1 = a1 + r b2 + s b3 = 4.0 + (1.5)(-1.5) + (-2.5)(1.0) = -0.750
b0 = a0 + r b1 + s b2 = -2.0 + (1.5)(-0.750) + (-2.5)(-1.5) = 0.6250    (3.161)

Substituting these results into Eq. (3.155) gives

c3 = b3 = 1.0
c2 = b2 + r c3 = (-1.5) + (1.5)(1.0) = 0.0
c1 = b1 + r c2 + s c3 = (-0.750) + (1.5)(0.0) + (-2.5)(1.0) = -3.250    (3.162)

Substituting the values of b1, b0, c1, c2, and c3 into Eq. (3.157) gives

(0.0) Δr + (1.0) Δs = -(-0.750)
(-3.250) Δr + (0.0) Δs = -0.6250    (3.163)

Solving Eq. (3.163) gives Δr = 0.192308 and Δs = 0.750000    (3.164)

Substituting Δr and Δs into Eq. (3.158) gives

r2 = r1 + Δr = 1.50 + 0.192308 = 1.692308    (3.165a)

Table 3.13. Bairstow's Method for Quadratic Factors

i   r          s           Δr          Δs
1   1.500000   -2.500000   0.192308    0.750000
2   1.692308   -1.750000   0.278352    -0.144041
3   1.970660   -1.894041   0.034644    -0.110091
4   2.005304   -2.004132   -0.005317   0.004173
5   1.999988   -1.999959   0.000012    -0.000041
6   2.000000   -2.000000   0.000000    0.000000

s2 = s1 + Δs = -2.50 + 0.750 = -1.750    (3.165b)

These results and the results of subsequent iterations are presented in Table 3.13. The convergence criteria, |Δr_i| ≤ 0.000001 and |Δs_i| ≤ 0.000001, are satisfied on the sixth iteration, where r = 2.0 and s = -2.0. Thus, the desired quadratic factor is

x^2 - rx - s = x^2 - 2.0x + 2.0 = 0    (3.166)

Solving for the roots of Eq. (3.166) by the quadratic formula yields the pair of complex conjugate roots:

x = [-b ± sqrt(b^2 - 4ac)]/2a = [2.0 ± sqrt((-2.0)^2 - 4(1.0)(2.0))]/[2(1.0)] = 1 ± I1    (3.167)

3.5.4. Summary

Polynomials are a special case of nonlinear equation. Any of the methods presented in Sections 3.3 to 3.4 can be used to find the roots of polynomials. Newton's method is especially well suited for this purpose. It can find simple roots and multiple roots directly. However, its rate of convergence drops to first-order for multiple roots. Two variations of Newton's method for multiple roots restore the second-order convergence. Newton's method, like the secant method and Muller's method, can be used to find complex roots simply by using complex arithmetic with complex initial approximations. Bairstow's method can find quadratic factors using real arithmetic. Good initial guesses are desirable, and may be necessary to find the roots of high-degree polynomials.

3.6 PITFALLS OF ROOT FINDING METHODS AND OTHER METHODS OF ROOT FINDING

The root-finding methods presented in Sections 3.3 to 3.5 generally perform as described. However, there are several pitfalls, or problems, which can arise in their application. Most of these pitfalls are discussed in Sections 3.3 to 3.5. They are summarized and discussed in Section 3.6.1. The collection of root-finding methods presented in Sections 3.3 to 3.5 includes the more popular methods and the most well-known methods. Several less well-known root-finding methods are listed in Section 3.6.2.

3.6.1. Pitfalls of Root Finding Methods

Numerous pitfalls, or problems, associated with root finding are noted in Sections 3.3 to 3.5. These include:

1. Lack of a good initial approximation
2. Convergence to the wrong root
3. Closely spaced roots
4. Multiple roots
5. Inflection points
6. Complex roots
7. Ill-conditioning of the nonlinear equation
8. Slow convergence

These problems, and some strategies to avoid the problems, are discussed in this section.

Figure 3.13 Pitfalls of root finding. (a) Closely spaced roots. (b) Inflection point.

Probably the most serious pitfall associated with root finding is the lack of a good initial approximation. Lack of a good initial approximation can lead to convergence to the wrong root, slow convergence, or divergence. The obvious way to avoid this problem is to obtain a better initial approximation. This can be accomplished by either graphing the function or a fine incremental search.

Closely spaced roots can be difficult to evaluate. Consider the situation illustrated in Figure 3.13. It can be difficult to determine where there are no roots, a double root, or two closely spaced distinct roots. This dilemma can be resolved by an enlargement of a graph of the function or the use of a smaller increment near the root in an incremental search.

Multiple roots, when known to exist, can be evaluated as described for Newton's method in Section 3.5.2. The major problem concerning multiple roots is not knowing they exist. Graphing the function or an incremental search can help identify the possibility of multiple roots. Roots at an inflection point can send the root-finding procedure far away from the root. A better initial approximation can eliminate this problem.

Complex roots do not present a problem if they are expected. Newton's method or the secant method, using complex arithmetic and complex initial approximations, can find complex roots in a straightforward manner. However, if complex roots are not expected, and the root-finding method is using real arithmetic, complex roots cannot be evaluated. One solution to this problem is to use Bairstow's method for quadratic factors.

Ill-conditioning of the nonlinear function can cause serious difficulties in root finding. The problems are similar to those discussed in Section 1.2 for solving ill-conditioned systems of linear algebraic equations. In root-finding problems, the best approach for finding the roots of ill-conditioned nonlinear equations is to use a computing device with more precision (i.e., a larger number of significant digits).

The problem of slow convergence can be addressed by obtaining a better initial approximation or by a different root-finding method.

Most root-finding problems in engineering and science are well behaved and can be solved in a straightforward manner by one or more of the methods presented in this chapter. Consequently, each problem should be approached with the expectation of success. However, one must always be open to the possibility of unexpected difficulties and be ready and willing to pursue other approaches.

3.6.2. Other Methods of Root Finding

Most of the straightforward popular methods for root finding are presented in Sections 3.3 to 3.5. Several additional methods are identified, but not developed, in this section.

Brent's (1978) method uses a superlinear method (i.e., inverse quadratic interpolation) and monitors its behavior to ensure that it is behaving properly. If not, some interval halving steps are used to ensure at least linear behavior until the root is approached more closely, at which time the procedure reverts to the superlinear method. Brent's method does not require evaluation of the derivative. This approach combines the efficiency of open methods with the robustness of closed methods.

Muller's method (1956), mentioned in Section 3.4.4, is an extension of the secant method which approximates the nonlinear function f(x) with a quadratic function g(x), and uses the root of g(x) as the next approximation to the root of f(x). A higher-order version of Newton's method, mentioned in Section 3.4.2, retains the second-order term in the Taylor series for f(x). This method is not used very often because the increased complexity, compared to the secant method and Newton's method, respectively, is not justified by the slightly increased efficiency.

Several additional methods have been proposed for finding the roots of polynomials. Graeffe's root squaring method (see Hildebrand, 1956), the Lehmer-Schur method (see Acton, 1970), and the QD (quotient-difference) method (see Henrici, 1964) are three such methods. Two of the more important additional methods for polynomials are Laguerre's method (Householder, 1970) and the Jenkins-Traub method. An algorithm for Laguerre's method is presented by Press et al. (1989). The Jenkins-Traub method is implemented in the IMSL library. Ralston and Rabinowitz (1979) present a discussion of these methods.

Most root-finding problems in engineering and science can be solved by one or more of the methods presented in this chapter, so these additional methods are not developed here.

3.7 SYSTEMS OF NONLINEAR EQUATIONS

Many problems in engineering and science require the solution of a system of nonlinear equations. Consider a system of two nonlinear equations:

f(x, y) = 0    (3.168a)
g(x, y) = 0    (3.168b)

The problem can be stated as follows: Given the continuous functions f(x, y) and g(x, y), find the values x = x* and y = y* such that f(x*, y*) = 0 and g(x*, y*) = 0. The functions f(x, y) and g(x, y) may be algebraic equations, transcendental equations, the solution of differential equations, or any nonlinear relationships between the inputs x and y and the outputs f(x, y) and g(x, y).

The problem is illustrated graphically in Figure 3.14. The f(x, y) = 0 and g(x, y) = 0 contours divide the xy plane into regions where f(x, y) and g(x, y) are positive or negative. The solutions to Eq. (3.168) are the intersections of the f(x, y) = 0 and g(x, y) = 0 contours, if any. Four such intersections are illustrated in Figure 3.14. The number of solutions is not known a priori. This problem is considerably more complicated than the solution of a single nonlinear equation.

Figure 3.14 Solution of two nonlinear equations.

Interval halving and fixed-point iteration are not readily extendable to systems of nonlinear equations. Newton's method, however, can be extended to solve systems of nonlinear equations. In this section, Newton's method is extended to solve the system of two nonlinear equations specified by Eq. (3.168).

Assume that an approximate solution to Eq. (3.168) is known: (x_i, y_i). Express f(x, y) and g(x, y) in two-variable Taylor series about (x_i, y_i), and evaluate the Taylor series at (x*, y*). Thus,

f(x*, y*) = f_i + f_x|_i (x* - x_i) + f_y|_i (y* - y_i) + ... = 0    (3.169a)
g(x*, y*) = g_i + g_x|_i (x* - x_i) + g_y|_i (y* - y_i) + ... = 0    (3.169b)

WritingTaylor series for f(02.. Eq.169) after the first derivative terms and rearranging yields 171 fxli ~i+fylizXy.03)A02 (go3102. (3.171) are applied repetitively until either one or both of followingconvergence criteria are satisfied: IAxil _< e x and [AYil < ey If(xi+~. i xi+ =xi q-.4 CO ( S(04) -.12.Ax 1 i Yi+l = Yi + Ayi (3.173).176b) (goz 102.03).174a) (3.Yi+l)[ < eg (3.1. Thus.171a) (3.174b) Solving Eqs.. 03) (3.03) A03 = --f(02. As an exampleof Newton’smethodfor solving two nonlinear equations.03) + A02 (J4~ 3102.171b) Equations (3. 03) and evaluating 2. g(O 03) = 2 sin(02) ÷3 sin(03) + r 4 sin(04) = 0 2. (3.176a) (3. Newton’s method for two coupled nonlinear equations.173a) (3..02) and A03= (0~’ .176a) and (3. 03) + A03 Equations (3.= 0 2.170) and (3.172a) (3.r1--.174) for A02and A03yields the following equations: (J~2102.o3 + . (3. (3. Let 0~’ and 0~’ be the solution to Eq. and 02 and 03 are the twooutput angles. j~: = -r 2 sin(02) go2 = r2 cos(02) and Jb~ = -r3 sin(03) and go3 = r3 c°s(03) (3.0~) = glo2.03) = -g(02. . at 0* (2. respectively.03 A03+ . 0 0* g(2.0~) gives 0* f(2.172b) Example3. FromEq. 0 A02 A03 where A02= (0~’ .o3 +go31o2.03A02+Jb310:.173b) wherer~ to r 4 are specified. 0 4 is the input angle.o3 +Jb210~. -f = gxli Axi +gyli zXYi= -gi where Ax and Ayi denote (x* -xi) and (y* -Yi)..Nonlinear Equations TruncatingEq.. Recall the two scalar components the of vector loop equation.0~) =flo2.170b) (3. 03) and g(O 03) about (02.176b) can be solved by Cramer’srule or Gauss elimination.. and 02 and 03 be an approximation the solution.174b) (3.yi+~)<_ and Ig(xi+l. let’s solve the four-bar linkage problempresented in Section 3.175a) (3. (3. (3.173).o3 +go~1o2..170a) (3.2): f(O 03) = 2 COS(02) +r3cos03) -t.

(3.319833E--01 -0.178a) (3.0 cos(0.0) = 8.405454E--07 -0.181) (3. FromTable 3.375061 -0.131975E + 00 0.180a) (3.-0. Four iterations are required’to satisfy the convergence criteria IA02I_< 0. 02 = ~b.0) = 0.000001. 0.370988 -0.0 deg and 1) = 0.131975 5.173): f(30.520530 deg A03 = -0.0 Substituting these results into Eq. Thus.520530 -4.0) = Equations (3.520530 deg and 03 = -4.082180(180/r 0 = -4.000001and 1031 < 0.000001 0.175b) J~2 = -6.370987 f(02.520530 32.180b) (3.000000 0.223639E--02 -0.020311 32.0) .708541 -0.182) where f(x) r = [J~(x) j~(x) (3.004073 -4. deg g(O2.0 sin(30.0 cos(0. (3.177a) g(30.179a) (3. Newton’sMethod Two for CoupledNonlinear Equations 02. In the general case. 03) 03.428850E+ 00 2.0 sin(0.196152 and go3 = 8.0.175a) and (3.1.043992(180/~z) = 2.which is the same as 02.0 sin(220.1.10. Consider the Case where 04 = 220.0) = -3.171) become AA=f ] r=[xl xz "" ¯ .005130 0.].0 deg..0 cos(30. and r 4 = 4.172 Table 3. (3.333480 -4.177b) go2 = 6.03 -0.1.500219 0.015180 Chapter3 AO deg A03. Let 0(21) = 30.328234E--03 -0.0cos(30. L(x)]andx x.183) .170) and (3. ~b = 32. r 2 = 6.0 sin(30.0 sin(0.196152 A02+ 8.000000 A02 + 0. In this case.0 A03 = -0.0 = (3.179b) (3.178b) (3.0 (3. Eqs.708541 -4.0) + 8.14.0) + 8.000000 -4.0. r 1 = 10.708541 deg Theseresults and the results of subsequentiterations are presented in Table 3.14.428850 Solving Eq. 03) For the problem presented in Section 3. As illustrated in Figure 3.0) + 4.176) gives -3. deg 30. Fr om Eq.0) = 6. 0z = 32.000000 32. (3.708541 deg where the factor (180/re) is needed to convert radians to degrees.0) = 5. 0 deg.0) = 6.0 cos(220. 0.deg 2.000000 and J~3 = -8.Theseresults were obtained on a 13-digit precision computer.111507E-.015180deg. 0.015181 32.0) + 4.179) gives A02 = 0.0 A03---.112109E--07 -0. (3. r 3 = 8.

ro~d-off e~ors pollute the solution. (3.. n) (3. A. (L)x2 . ~d if ~j is too l~ge. (1983) discuss such procedures.] and f is the col~ vector of ~nction values = [~ A "’" The most costly p~ of solving systems of nonline~ equations is ~e evaluation of the matrix of p~ial derivatives. f(x) = 0.. which yields the solution to the system of nonlinear equations.186) ~r=[~ fr ~ . then reevaluating A. Neve~heless. 2. The nonlinear ~ction F(x) has a global minimum zero whenall of the individual ~ctions are zero. However. =f(x + ~j) -f(x) (i. An alternate approach involves constructing a single nonline~ Nnction F(x) adding together the sums of the squ~es of the individual Nnctions f(x).2 . LI ~. is one procedurefor solving ~is systems of nonlinear equations where the pa~ial derivatives of f(x) cabot be dete~ined ~alytically. 3. . Le~ingA be const~t mayyield a muchless costly solution. (L)xo (3. (3.. A is the columnvector of conections.8 PROGRAMS Three FORTRAN subroutines for solving nonlinear equations are presented in this section: 1. Thus. De~iset al.j = 1.184) (LL. 3.NonlinearEquations whereA is the n × n matrix of partial derivatives. First.185) (3. 173 (f~)x~(Jl)x~ " A= (f~)x~ (fz)x2 "’" (Jq)xo (J~)x. of Multidimensional minimization tec~iques can be applied to minimizeF(x).. the numberof calculations is increased.184) numerically. mayyield the most economical solution.187) This procedure has ~o disadvantages. In si~ations wherethe paaial derivatives off(x) cannot be evaluated anal~ically. One alternate approach is to estimate the pa~ial derivatives in Eq. Newton’s method The secant method Newton’smethod for two simultaneous equations The basic computational algorithms are presented as completely self-contained subroutines suitable for use in other programs. Input data and output statements are contained in a main(or driver) programwritten specifically to illustrate the use of each subroutine. the above procedure cannot be applied. Secon~if ~j is too small. A strategy A based on makingseveral co~ections using const~t A. the convergence rate can decrease to first order... must be reasonably acetate for ~is procedure to work..

fl.4.fl sto~ format ( ’ Newtons method’) format (" "/’ i’.l) write (6. 1 / write (6. Every nonlinear equation requires its ownindividual function funct. and the values off(x) andf’(x) are returned asf j~v.g. calls subroutinenewtonto implement solution. ’fpi’.1. f12.1 the data set and prints it.8. 0. Function funct must be completely self-contained. Subroutine newton requires a function subprogram. eq. subroutine newton. Program defines is 3.iter call funct (xl.iter.xl.188). main program 1000 1010 1020 main program to illustrate nonlinear equation solvers xl first guess for the root iter number of iterations allowed tol convergence tolerance 1 yes intermediate results output flag: 0 no.55): f(xi) xi+1 = xi f.4.xl. The value ofx is passed to function funct.(xi) (3. "xi+l’/’ ’) format (i4. which evaluates the nonlinear equation of interest. tol. iw. I0.8) end c subroutinenewton (xl . and prints the the solution. iw.1.6) .g.fl. i ter. iw data xl.f12.174 3. Program 3. tol .fpl) dx=-fl/fpl x2 =xl +dx if (iw. f12. Newton’s method program.1000) write (6. tol) return end do write (6.8. (3.fl.188) A FORTRAN subroutine. After iter iterations. including all numerical values and conversion factors. "xi’.lOx.1020) i.1.1000) i. for implementing Newton’s method is presented in Program3. i call funct (xl. function funct. checks for convergence. respectively. 1010) call newton (xl.x2 xl =x2 if (abs(dx). (3.0.iw / 30. Newton’s Method Chapter3 The general algorithm for Newton’smethodis given by Eq. iter.fpl.000001. and either continues or returns.6x. "fi’.le.1010) return i000 format (i4.fpl) write (6. i Newton "s method do i=l. tol. f14. an error message printed and the solution is terminated. 2f14.12x. Subroutinenewtoncalls function funct to evaluatef(x) andf’(x) for a specified value of applies Eq.llx.

fp) evaluates a polynomial of u. 0)/180.7. Complex roots can be evaluated simply by declaring all variables to be complex variables.a2.NonlinearEquations 1010 format end (" ’/’ Iterations failed to converge’) 175 function funct (x.02070744 xi +i 32.00000000 0. 2.00000503 0.5.a3.0.4. Polynomialfunction funct.0.015180 32. The values in the data statement must be changed accordingly.015423 32. function funct (x.function funct. f.015180 Subroutine newton can be used to solve most of the nonlinear equations presented in this chapter.000000 32. 40.02070767 0.0 *a2 *x+3 . This function funct is illustrated by solving Example 3. Solution by Newton’smethod. al. f.00214376 0.03979719 0.0.9 to fourth degree data aO. Output 3.015180 fi -0.015180 32. The output generated by the Newtonmethodprogramis presented in Output 3. by including the multiplicity mas coefficient off(x) in function funct. function funct presented below evaluates a real coefficient polynomialup to fourth degree. Simple roots can be evaluated directly.66666667.0 / rad=acos (-i. Newtons i 1 2 3 4 4 method xi 30. Multiple roots can be evaluated in three ways: directly (which reduces the order to first order).118463 32.02080526 0.r3. 1. 0 *a3 *x* *2+4. -3.83333333. and the function subprogram.1. or by defining u(x) = f(x)/f’(x) in function funct.r2. 0. 1. must evaluate the desired nonlinear equation.0 / f=a O+al *x +a2 *x * "2+a3*x* "3+a4* x* *4 fp=al +2.0*a4 *x* *3 return end .alpha / 1. 4.015423 32.0.01878588 0.118463 32.a4 / -2. fp) evaluates the nonlinear function data rl. As an additional example.1.00000000 fpi 0.0 f=rl *cos (alpha *rad) -r2 *cos (x*rad) +r3-cos ((alpha-x) fp= (r2*sin (x*rad) -sin ((alpha-x) *rad) return end The data set used to illustrate subroutine newton IS taken from Example 3.

The Secant Method for the secant method is given by Eq. "gpi’.8.2.005495 1. iter.142857 1.75000000 1. iter call funct (x2.1. The secant method program. i) call funct (xl. and prints the solution.000000 1. eq.00000000 fpi 1.000000 3.fl do i=l.00000000 xi +i 1. "xi+l’/" .80): The general algorithm f(xi) Xi+ 1 = Xi gt(xi ) (3.00000033 0. 1 / call secant (xl.1000) i.12x.142857 1. Solution Newtons i 1 2 3 4 4 by Newton’s method. I0. x~ and x2. except two values of x. funct Chapter 3 is taken from Example 3.f2/gp2 x3 =x2 +dx . tol.2 shows only the statements which are different from the statements in Program 3.0.x2.fl) i000 format (" The secant method’) i’. c c program main main program to illustrate nonlinear equation solvers x2 second guess for the root data xl.7.00009057 1. method xi 1. Program 3. fl) if (iw. subroutine secant.1.xl.0. Program 3. iw.06122449 1. Program 3.62500000 0.llx.2 defines the data set and prints it. 0.000001. for implementing the secant method is presented below.000000 fi 0. "xi’. i) the secant method call funct (xl.500000 1. tol. calls subroutine secant to implement the secant method. f2) gp2= (f2-fl) / (x2-xl) dx=.6x.x2. iter.14577259 0.) i010 format (" "/" end subroutine secant (xl.000000 1.lOx.00549467 0.2.005495 1. 40.8. Subroutine secant works essentially like subroutine newton discussed in Section 3. "fi’. (3. iter.x2.l) write (6. iw.iw / 30.176 The data set used to illustrate the polynomial function The results are presented below. tol. are supplied instead of only one value.189) A FORTRAN subroutine.

6) format (" "/’ Iterations failed to converge’) end function funct (x.3 it. Newton’s Method for TwoCoupled Nonlinear Equations The general algorithm for Newton’smethodfor two simultaneous nonlinear equations is given by Eqs.000000 31.19496296 -0. f12.966238 32. and prints the solution.8.8.Ax i 1 The general approach to this problem is the same as the approach presented in Section 3.00657688 -0.1000) i.gp2. Output 3.x3 xl =x3 if (abs(dx).l) write (6. f2.015542 32.NonlinearEquations if (iw.02053257 0.00101233 0.190a) (3.f12.03979719 0.02070761 31.966238 32.190b) (3.3. for implementing procedureis presented below. f) evaluates the nonlinear end 177 1000 1010 function The data set used to illustrate subroutine secant is taken from Example3.015180 32. subroutine simul.695228 31. (3.00000001 0.695228 31.8.191a) (3.015180 32.00000749 -0. tol) return xl =x2 fl=f2 x2=x3 end do write (6. Program defines the data set and prints the 3. Solution by the secant method.015180 3.4.000000 40.1 for a single nonlinear equation. The output generated by the secant methodprogramis presented in Output 3.191b) gxli Axi ’~ gyli Ayi = --gi Xi+ ~.le.015180 method fi -0.170) and(3. 1010) return format (i4.x2.2. The secant i 0 1 2 3 4 5 5 xi 30. eq. 2f14.00000000 gpi xi + 1 0. calls subroutines#nul to implement solution. the . A FORTRAN subroutine.5.02068443 0.02426795 0.Yi ~t_ Ay i i Yi+l = X -~.g.02347602 0.2.015542 32.17 fxli Axi + fyli AYi = -fi (3.

¯ 178 Chapter 3 Newton’s method for simultaneous equations program. 4.0 f=r2 *cos (x’tad) +r3 *cos (y’tad) +r4 *cos ( theta4 *rad) g=r2 *sin (x*rad) +r3 *sin (y*rad) +r4 *sin ( theta4 fx= (-r2 *sin (x*rad) ) * fy= ( -r3 *sin (y*rad)) gx= (r2 *cos (x*rad) ) gy= (r3 *cos (y*rad) ) return end c . fl .f. c c c c c program main main program to illustrate nonlinear equation solvers xl.yl. gx. "fi’. i) call funct (xl .r4. gl.6. 8.yl first guess for the root iter number of iterations allowed tol convergence tolerance iw intermediate results output flag: 0 no.iw.9x.2e12.0. tol. tol) end do write (6.0. tol.lOx.0.yl.fl.6.yl.2f8.3. fy. yl . 1010) return i000 format (i3.lOx.1000) write (6.yl.0.0 / rad=acos (-I. 0. 220.4) end subroutine simul (xl.l) write (6.2f12.4.r2.gl. "dy’/" ") 1020 format (i3. gx.xl.0. gy) evaluates the two nonlinear functions data rl.yl.iw / 30.xl. Program 3.iter call funct (xl. fy.fl. 1 6x.1000) i. le.y. fx.1010) call simul (xl.4) 1010 format (’ "/" Iteration failed to converge’) end function funct (x. gy) write (6.000001. i) Newton’s method for two coupled nonlinear do i=l. "yi’.iter. gy) del =fx*gy-fy*gx dx= ( fy*gl -fl *gy/ /del equations dy= ( fl *gx-fx*gl )/del x2 =xl +dx y2 =yl +dy if (iw. theta4 / I0. tol.r3. fx. xl =x2 yl=y2 if ( (abs (dx) .dx.gl stop 1000 format (" Newtons method for two coupled nonlinear equations’) 1010 format (’ ’/’ i’.1020) i. i0. "gi’. and. 6. eq.0.2f12.g.9x. gl. 1 yes data xl.6x. fx. gx. O)/180. fl. ’dx’.yl.2e12. iw. le.iter. 1 / write (6. iter. tol) . (abs (dy) . 0. "xi’. fy.

respectively.020311 32.1115E-03-0..375061-0.NonlinearEquations The data set used to illustrate output is presented in Output3. Newtonsmethod for two couplednonlinearequations i 1 2 3 4 4 xi 30. Newton’smethod and the secant method are both effective methods for solving nonlinear equations.62 order).5002 0. Manycommercial software packages contain nonlinear equation solvers. The secant methodis recommended the as best general purpose method. also contain nonlinear equation solvers.0000E+00 3.015180 yi fi gi dx dy 0. Finally. but Newton’s method requires the evaluation of the derivative of the nonlinear function.. such as IMSL.370988-0. second order compared to 1. the bookNumerical Recipes (Press et al.and Maple.1320E+00 0.0000E+00 0.1121E-070. 179 subroutine sitnu[ is taken from Example 3.12. Interval halving(bisection) and false position (regula falsi) converge very slowly.3335 -4. 3. or any nonlinear relationship between input x and a response an f(x).that is.the secant methodrequires less total effort.0041 -4.0051 0. The Output 3.8. while quite effective.520530 32. Manyworkstations and mainframe computers have such libraries attached to their operating systems. For functions whose derivative cannot be evaluated.9 SUMMARY Several methods for solving nonlinear equations are presented in this chapter. Packagesfor Nonlinear Equations Numerous libraries and software packages are available for solving nonlinear equations.370987 0.3.000000 0.708541-0. the solution of a differential equation.4055E-07-0. methodscan find Both complexroots if complexarithmetic is used.e. Both methodsgenerally require reasonable initial approximations.4288E+00 2. Newton’smethodconverges faster than the secant method(i. Macsyma. Newton’s method requires less total effort than the secant method.015181 32. 1989) contains numerous subroutines for solving nonlinear equations. the secant methodis recommended. Thesemethodsare not recommended unless the nonlinear equation is so poorly behaved that all other methods fail. Consequently. are certain to converge because the root lies in a closed interval. a transcendental equation. it is not recommended.2236E-02-0.5205 -4.3198E-01 -0. the second-orderTaylor series method Muller’s method.0000 0. The higher-order variations of Newton’smethodand the secant method. Fixed-pointiteration converges only if the derivative of the nonlinearfunction is less than unity in magnitude. Moresophisticated packages. Someof the more prominent packages are Matlab and Mathcad. and are not used frequently.3. If the effort requiredto evaluatethe derivative is less than 43 percentof the effort required to evaluate the function itself.7085 -4.4. Solution by Newton’s method simultaneous for equations.3282E-03 -0. This is probably because Newton’smethodand the secant method .0000 -4.000000 32.Otherwise.Mathematica. The nonlinear equation maybe an algebraic equation.

12. 27. 16. multidimensional minimization techniques may be preferred. 29. 11. 2. 28. 7. special features of polynomialsshould be taken into account. After studying Chapter 3. 15. Otherwise. 25. 26. Solving systemsof nonlinear equations is a difficult task. 30. No single approach has proven to be the most effective. 10. 23. 13. Polynomialscan be solved by any of the methodsfor solving nonlinear equations. Complexroots can also be evaluated by Bairstow’s methodfor quadratic factors. 14. 19. Complex roots can be evaluated by Newton’smethodor the secant methodby using complex arithmetic and complex initial approximations. 17. 20. 22. the Multiple roots can be evalt/ated using Newton’s basic methodor its variations. 6. 18. Newton’s method can be used. For systems of nonlinear equations which have analytical partial derivatives. However. 21. Solving systems of nonlinear equations remains a difficult problem.180 Chapter3 are so efficient that the slightly morecomplicated logic of the higher-order methods not is justified. 8. Discuss the general features of root finding for nonlinear equations Explain the concept of bounding a root Discuss the benefits of graphing a function Explain how to conduct an incremental search Explain the concept of refining a root Explain the difference between closed domain(bracketing methods) and open domain methods List several closed domain(bracketing) methods List several open domain methods Discuss the types of behavior of nonlinear equations in the neighborhood a of root Discuss the general philosophy of root finding List two closed domain(bracketing) methods Explain howthe internal halving (bisection) methodworks Applythe interval halving (bisection) method List the advantages and disadvantages of the interval halving (bisection) method Explain howthe false position (regula falsi) methodworks Applythe false position (regula falsi) method List the advantages and disadvantages of the false position (regula falsi) method List several open domain methods Explain howthe fixed-point iteration methodworks Applythe fixed-point iteration method List the advantages and disadvantages of the fixed-point iteration method Explain how Newton’s method works Apply Newton’s method List the advantages and disadvantages of Newton’smethod Explain and apply the approximate Newton’s method Explain and apply the lagged Newton’smethod Explain how the secant method works Apply the secant method List the advantages and disadvantages of the secant method Explain the lagged secant method . 5. you should be able to: 1. 4. 24. 9. 3.

5 and 1. 32. (D) by interval-halving starting with x = 0.0 and 0.0) as starting values. Find the two points of intersection of the two curves y = ex and y = 3x + 2 using interval halving.0 and 2. 41. 37.0 and 3. 7. 42.5 and -2.NonlinearEquations 31. 39. -43. unless otherwise noted. carry at least six significant figuresin all calculations. 44. (B) by false position starting with x = -3.0 and 0. 5.5. 33.5 and 1. Solve Eq. in unless otherwisenoted. (C) by interval-halving starting with x = 1. Solve Eq. 2. Problems to 6 can be solved using any two initial values ofx that bracket the 1 root.0.0 and 2. Use(-1. Continueall iterative proceduresuntil four significant figures have converged. Use (-1. Choose other sets of initial values ofx to gain additional experiencewith interval halving. 03) by interval-halving starting with x = -3. 10. (A) by false position starting with x = 0. 45.0.0) and (1. 35.2x . Considerthe following four nonlinear equations: f(x) = x cos(x) = (A) f(x) = ex .0) and (2. (A) by interval-halving starting with x = 0.0. 181 Explain how Muller’s method works Apply Muller’s method List the advantages and disadvantages of Muller’s method Discuss the special features of polynomials Applythe quadratic formula and the rationalized quadratic formula Discuss the applicability (or nonapplicability) of closed domainmethodsand open domainmethodsfor finding the roots of polynomials Discuss the problemsassociated with finding multiple roots and complex roots ApplyNewton’sbasic methodand its variations to find all the roots of a polynomial Applydeflation to a polynomial Explain the concepts tmderlying Bairstow’s method for finding quadratic factors ApplyBairstow’s methodto find real or complexroots Discuss and give examplesof the pitfalls of root finding Suggest waysto get aroundthe pitfalls Explain the concepts underlying Newton’smethodfor a system of nonlinear equations Apply Newton’smethodto a system of nonlinear equations EXERCISE PROBLEMS In all of the problems this chapter. 6.2x2 . Find the two points of intersection of the two curves y = ex and y = x4 using interval halving. Solve Eq. 40.5 and -2. 34. Solve Eq.2x + 1 = 0 3.0.0. 9.2 = 0 (C) f(x) = x3 . 38.0) as starting values.0 and 1. (C) by false position starting with x = 1. 36. Solve Eq. False Position 8. 4.5. Solve Eq.sin(rex/3) = f(x) = e x . Solve Eq. . 3.0 and 2.2 Closed Domain Methods (B) (D) Interval Halving 1.

2x .[e~/3]1/2. (C) by fixed-point iteration with x0 = 1. The function can be rearrangedinto the formx = -I. 24. The function f(x) = (x 2)(x .0) as starting values. (D) by false position starting with x = 0. Solve Eq. 25. Solve Eq. For what starting values ofx might the expression x = 1/(x + 1) not converge? The function f(x)= e ~ -3x2= 0 has three roots. and (c) x = 2 -8)/2.sign (near (c) The third root is near x = 4. one form always converges to x = 4. (D) by fixed-point iteration with 0 =1. Starting with x0 = 0. (A) by Newton’smethod. (a) x 8/(x . Use (-1. Find the two points of intersection of the two curves y = ex and y = x4 using false position. Solve Eq. and solve for the third root. starting with x0 = 1.0 and 0.5.182 Chapter3 11.4) 2 .2x. Solve for the positive root by fixedpoint iteration for all three forms. 27. Thefunction f(x) can be rearranged in several ways. Solve Eq. (A) by fixed-point iteration with x0 = 0. Solve Eq.4 Open Domain Methods Fixed-PointIteration 15. (b x = (2x+ 8) 1/2. find the roots correspondingto (a) the + sign (near x = 1. (A) by fixed-point iteration with x0 = 1.0 and 0.0. starting with 0 =. Solve Eq. Solve Problem6 by fixed-point iteration with x0 = -1. 18. 21. Solve these two formsfor the root. evenwith an initial guessclose to the exact root. On e fo rm al ways converges to x = -2. Choose other sets of initial values of x to gain additional experience with false position.0.0. 12.0. and (c) x = ln(3x + 2).8 = 0 hasth e two ro ots x = -2 and 4. (D) by fixed-point iteration with x0 --. Problem 5 considers the function f(x)= ex -(3x + 2)= 0. . Find two forms of x = g(x) that will converge to this root. Solve Eq. 22.1 and 3. Use (-1. 13. 26.0) and (2.respectively. Solve Eq.5. ( x = (e~ . 17.5.0.0.0 as the starting value. which can rearranged into the following three forms: (a) x = x .0 and 3. 28. (B) by fixed-point iteration with x0 = -3.4 = 0 has a root near x = 1. with xo = 1. Solve Eq. 19. The cubic polynomial f(x) = x3 + 3x2 . Solve Eq. 20.0) and (1.0. (B) by Newton’smethod.2)/3.0) and (b) the .0) as starting values. (C) by fixed-point iteration with x0 = 2. Show that the aboveform will not converge to this root. and one form always diverges. Find the two points of intersection of the two curves y = e~ and y = 3x + 2 using false position.0. Rearrange f(x) into the form x = g(x) to obtain the root (a) x = -2 and (b) x = 4. 14.0 and 1.2). 23.0.0 and x0 = 1. Use 0 =1. Develop formof a x =g(x) that will convergeto this root.0. 16.0 and 2. Determinethe behavior of the three forms.(2x+2). Problems8 to 13 can be solved using any two initial values ofx that bracket the root. Use Xo = -3.0.0 as thestar ting valu e. for example. Solve Eq. (B) by fixed-point iteration with x0 = -2. Newton’s Method 29.0. 3.0. 30.

Use 0 =1. (A) by the secant method. 38. Polynomials 45. (b) Find the negative root using Newton’smethod. 43. 33. Use x = 0. (D) by Newton’smethod.0 assta rting val ues.0 and 1.226x2 + 274x . Solve Eq.120 = 0 . SolveProblems35(a) to (c) by the secant method. 32. 44.1= 0 bythe secant meth od usin g x = 1. Use Newton’smethodto find the real roots of the following polynomials: (a) (c) x3-5x2+7x-3=O 3-2x2-2x+I =0 (b) x4-9x3+24x2-36x+80=0 (d) 3+4x2-8x-2=0 46. Use the above result to solve the following problems: (a) (161) t/3. lx0. since a large numberof iterations maybe required. Usex = 6. Solve Eq. 39.1. 3. since a large numberof iterations maybe required. (a) For this equation. Solve Problem36 using the secant method. showthat Newton’smethodgives (E) Xi+l -- n 36.N = 0.0 assta rting val ues. The nth root of the number N can be found by solving the equation x~ . (a) Find the two positive roots using Newton’smethod. 42.56) 1/5.5.1. Youmay 33 want to solve this problemon a computer.0. The Secant Method 37.5 as sta rting values. 34. Find the positive root off(x)= 15. 0 asthestar ting valu e. sta rting wit h x0 -.5 and 0. Use 0 =1.75) values.0 and 2.2 and 1. (b) 2/4.0 as thestar ting valu e.Use the starting values given there for x0 and let x~ = 1. 2.0 an d -2 . (B) by the secant method. Solve Eq. and 3. (D) by the secant method. Solve Eq.0.5 as the initial guess. Use 0 =1. Solve Eq. (C) by the secant method. (C) by Newton’smethod. as starting (21. Solve Problem using x = 0. (c) (238. Use Newton’smethodto find the complexroots of the following polynomials: (a) x4-9x3-k24x2-36x+80=0 (b) x3-t-2x2q-x-t-2-----0 (c) 5 .0. respectively. Use 0 =-3. 40. You maywant to solve this problem on a computer.Use 0 =0. Solve Eq.NonlinearEquations 183 31. 41. 35.1 as starting values.85 x3 .5 and 1.15x4 -1.0 as starting values. Find the positive root off(x) = ~5 -1 = 0 byNew ton’s method. Consider the function f(x)= e ~ -2x2= 0.6 as starting values. Solve Problem 41 by the secant method with x = 0.

or 47 to 53 using the Newtonmethod program. (K) in Problem71. Eq. 52.1. 63.3 for solving simultaneous equations. 900. Write a computerprogramto solve Freudenstein’s equation.0. Eq. (xz + y~)2 = 2xy and y = 3. 51. Designthe program 69 output as illustrated in Example3. Solve for bothroots . 0 Write a computerprogramto solve the M. 1500. (a) Solve Problem 71.8.8.3). = 49. Implementthe Newtonmethodprogrampresented in Section 3. 1400.00001 deg. Implementthe Newton methodprogrampresented in Section 3. 1000. Follow the procedure described in Problem to initiate the calculations. let M =0. Design the programoutput as illustrated in Example 3. and 90 percent of that value as initial approximations. Checkout the programusing the given data set.0 < e < 10. For subsequent values of ~. Write a computerprogramto solve the van der Waalequation of state. (I) in Problem 70. Solve Problem70 using the program. (2x) 2/3 +y2/3 = (9)1/3 and x~/4 +y~= 1. (3.000kPa.1. 55. 3. Re. Implementthe secant methodprogrampresented in Section 3. 1300. (b) Construct a table of Mversus e for 1. (b) Construct a table of Mversus e for 1. 56. 47. 62. Eq. calculate v corresponding to T = 700.8. 800.e equation.1) 2 q. 59. 50. by Newton’smethodfor specified values of ? and 5. Write a computerprogramto solve the M. Eq. 1100. Use the approximation proposed by Genereaux(1939). X3 _~ y3 _ 3xy = 0 and x2 + y2 = 1. Solve for all roots. by the secant method. Write the programso that all the cases can be calculated in one run by stacking input data decks. (a) Solve Problem using the program. 1200.2) 2 = 3 and x2/4 +y2/3 = 1. 6.e equation. for subsonic flow.5.5. Workany of Problems 5. 58. For 5= 1. and 1600K. let 05obe the solution value for the previous value of ~. Eq. (x -. y = cosh(x) and z +y~ 2. Check out the programusing the given data set. 57. (K) in Problem71. 48. let M be the previous solution value.1. Programs 54. for 61.2. by the secant methodfor specified values of 7 and 5. y2(] _x)=x 3 andx2+y2 = 1. Solve for all four roots. by the secant method.(y .8. (G) in Problem 69. 60. and 051 = 05o + 1. For P = 10. For ~ = 40deg. Workany of Problems 37 to 44 using the secant method program. . Workany of Problems 29 to 36 using the Newton method program.184 3. by the secant method the friction coefficientf for specified value of the for roughness ratio e/D and the Reynolds number. let ~b0 = 25deg and 051 = 30deg.0 < ~ < 10. Continue the calculations until 05 changes by less than 0.7 Systemsof Nonlinear Equations Chapter3 Solve the following systems of nonlinear equations using Newton’smethod. For 0 subsequentvalues of 5.8. in increments Ae=0. Eq. 64. using the program. Write a computerprogramto solve the Colebrookequation. Check out the programusing the given data set. (J). 53. Calculate 05 for ~ = 40deg to 90deg in increments As = 10deg. x2 +y2 = 2x +y and x2/4 +y~ = 1.

APPLIED PROBLEMS Several applied problems from various disciplines are presented in this section. let M0be the previous solution value and M~= 1. Numerous variations of this problemcan be obtained by specifying combinations of ~b and a. or both.08.01M 0. Write a computerprogramto solve Eq.4 and M~= 1. Rearrangethis problemto solve for the value of r 1 such that ~b = 60 deg whena = 75 deg.000kPa and T = 800K. Show calculations for the all first iteration. let M be the previous solution value and 0 M~= 1.0 deg.00169099(m3/kg). for which R=461.NonlinearEquations 185 supersonic flow.1. Considerthe four-bar linkage problempresented in Section 3. 67. as illustrated in Example 3. (b) Construct a table of Mversus 6 for ~ = 1. The van der Waalequation of state for a vapor is whereP is the pressure (Pa = N/m2). the temperature(K).2 and 0 M1= 1.(Pb + RT)v2 + av .1.495J/kg-K. (3.00001deg. 65. v is the specific volume(m3/kg). For 6 = 1. in increments A6 = 1. All of these problemscan be solved by any of the methodspresented in this chapter. Continue the calculations until 02 and 03 changeby less than 0. Solve the four-bar linkage problemfor 04 = 210 deg by solving the two scalar components the vector loop equation. for 0 < 6 < 40deg. 68. 70.5fpV~(~) (H) . and b=0.2). Summarize first iteration andsubsequent the iterations in a table. (L) in Problem73 by the secant method for specified values of y. Consider water vapor. by Newton’smethod. 66.1. and a and b are empirical constants.12. by changingthe starting values. When incompressiblefluid flows steadily through a round pipe. and M1.06 and M~ 1. For subsequent values of e.ab = 0 (G) Calculate the specific volume v for P = 10. Present the results in the format illustrated in the examples. the pressure an drop due to the effects of wall friction is given by the empirical formula: AP=-O.0.28Pa-(m3/kg) 3. Considerthe four-bar linkage problempresented in Section 3.3.0 deg. Let of the initial guesses be 02 = 20 deg and 03 = 0 deg. 3. Eq.1. 69. R is the gas constant (J/kg-K).(a) Solve Problem73 using the program. For ~ = 1. Use the ideal gas law. to obtain the initial guess (or guesses). Equation (F) can rearranged into the form Pv3 . Aninfinite variety of exercises can be constructed by changingthe numericalvalues of the parameters of the problem. a= 1703. For = o subsequent values of 6. let M = 1. Solve for any (or all) of the results presentedin Table3. Pv = RT. in increments Ae = 0.1M 0.1 let M = 1.

5.0.. Consider isentropic supersonic flow around a sharp expansion comer. subsonicflow) and one greater than unity (i.4 by Newton’s method. For the subsonic root. For each value of e. supersonic flow). Use the approximationproposed by Genereaux (1939) to determinethe initial approximation(s): (J) n. L is the pipe length.4 0 and M~= 0.. Developa procedure to determinef for specified values of ~/D and Re. and M~. Consider quasi-one-dimensional isentropic flow of a perfect gas through a variable-area channel. one less than unity (i.0 and 7 = 1.0 deg. the area whereM= 1) and y is the specific heat ratio of the flowirlg gas. ~M! 1.001 for Re = 10 71.4. 5.29)]. 1. let M = 3. Eq. is given -° ’] 6 f = 0. p is the density.e. Solve Problem71 by the secant method.0 and M~ = (c) 1. is given by -- ((tan-l((M~ -- 1) 1/2) --tan-l((M? .1) and ~ is the specific heat ratio of the gas. and (d) 1.1)1/2)) whereb = (7 + 1)/(7 . andf is the D’Arcy friction coefficient. and 7.0deg. derived by Zucrow Hoffman and [1976. V is the velocity. c5. .16 Re whereA*is the chokingarea (i.0.5 and 20.. D is the pipe diameter.For 7 = 1. let M = 0. derived by Zucrow Hoffman and [1976. (8. 0 72. Develop procedureto solve for M for specified values of 7.e. where # is the viscosity.2. 2 solve for M for the following combinationsof M~ ~: (a) 1. two values of Mexist..0deg. Solve forf for apipe having e/D = 0. 6. M~)and after the comer(i. for n = 4. and 2 1) 0) = and 20. (4. For the subsonic root. Eq. Colebrook (1939) developed the following empiricalequation for the friction coefficient f: f~/2 - (3.51 "~ -2 log10 e/D+ where ~ is the pipe surface roughness.0 and I0. Several empirical formulas exist for the friction coefficient f as a function of the dimensionless Reynolds number.0deg.0 2.e. Calculate both values of Mfor e = 10. let M = 0. For flow in the turbulent regime between completely smooth pipe surfaces and wholly rough pipe surfaces.. The relationship between the MachnumberMand the flow area A.e. For the supersonic root. Re = DVp/#.11)]. For the supersonic root.7 2.6. let 0 M = 5. M2).e. The relationship betweenthe Mach number before the comer(i. 0 73.186 Chapter3 where~ is the pressure drop.5 and 10.0 and M~= 4.

11 Programs 4.4 Neville’s method 4.10 Least Squares Approximation 4.12 Direct multivariate linear interpolation 4.7 Inverse Interpolation 4.10 Inverse interpolation 4.9 Cubic Splines 4.1 Introduction 4.6 Difference Tables and Difference Polynomials 4.2 Properties of Polynomials 4.9 Newtonbackward-difference polynomial 4.5 Divideddifference table 4.15 Least squares quadratic polynomial approximation 4.8 Newtonforward-difference polynomial 4.8 Multivariate Approximation 4.3 Lagrange polynomials 4.7 Difference table 4.3 Direct Fit Polynomials 4.12 Summary Problems Examples 4.1 Polynomialevaluation 4.5 Divided Difference Tables and Divided Difference Polynomials 4.11 Successiveunivariate quadratic interpolation 4.13 Cubic splines 4.4 Polynomial Approximation and Interpolation 4.14 Least squares straight line approximation 4.2 Direct fit polynomials 4.16 Least squares quadratic bivariate polynomial approximation 187 .4 Lagrange Polynomials 4.6 Divided difference polynomial 4.

f(x)] pairs.. However. In manyproblemsin engineering and science.30 0. The value of the at function can be determined at any of the eight values of x simply by a table lookup.2 .1 INTRODUCTION Chapter4 Figure4. The actual function is not known cannot be and determined from the tabular values.40 0.20 0. This process.188 4. is the subject of Chapter 4. whichis called interpolation. whch is i used as the exampleproblem in this chapter.a problem arises when the value of the function is needed at any value of x betweenthe discrete values in the table. The discrete data in Figure 4. However. For example. actual function can be approximatedby the someknownfunction.5"0 0.60 0.312500 3.273973 3. the data being considered are known only at a set of discrete points.70 0. of .270270 Figure 4.35 0. and the value of the approximating function can be determined at any desired value ofx.the continuous function [y =f(x) maybe known only at n discrete values of x: [Yi = y(xi) (i = 1.1) X i= X x f(x) 3.294118 3.277778 3.298507 3.1 illustrates a set of tabular data in the formof a set of [x.285714 3.1 Approximation tabular data. Thefunction f(x) is known a finite set (actually eight) of discrete values of x.2) (4..1 are actually values of the functionf(x) 1/x.. not as a continuousfunction. n) (4..65 0.303030 3.

. differentiation). Theseprocesses are illustrated in Figure 4.3b. Anapproximate yields a polynomial fit that passes through the set of data in the best mannerpossible. as illustrated in Figure4.3a. Approximate are useful for large sets of smooth fits data and . Trigonometric functions 3. Integration. Polynomials 2. It should be easy to differentiate. Three of the more common approximating functions are: 1. differentiation. Theintegral of the function may of interest (i.Polynomial Approximation Interpolation and 189 (a) x (b) x (c) x Figure 4.2 Applications approximating of functions.3 to 4. The derivative of the function maybe required (i. Discrete data. as illustrated in Figure 4. 3. small sets of rough data.e. Polynomialssatisfy all four of these properties. the values of the discrete data at the specific points are not all that is needed. There are two fundamentallydifferent waysto fit a polynomialto a set of discrete data: 1. Severaldefinitions of best mannerpossible exist.2. interpolation).polynomialapproximating functions are usedin this bookto fit sets of discrete data for interpolation. Exponential functions Approximating functions should have the following properties: 1. and integration of a set of discrete data are of interest. Thus. and integration. without being required to pass exactly throughany of the data points. In many applications. Valuesof the function at points other than the known discrete points may be needed (i.9. be processes of interpolation. 2. Many types of approximating functions exist.. Theseprocesses are performedby fitting an approximatingfunction to the set of discrete data and performingthe desired process on the approximatingfunction.e. large sets of smoothdata. This type of fit is useful for small sets of smooth data. or large sets of rough data. In fact. The approximating function should be easy to determine. or tabular data. Consequently. 2. any analytical function can be used as an approximating function. integration).(a) Interpolation. 4. mayconsist of small sets of smoothdata. It should be easy to integrate.(b) Differentiation.e. Exactfits Approximate fits An exactfit yields a polynomialthat passes exactly through all of the discrete points.. It should be easy to evaluate. Exact polynomial fits are discussed in Sections 4. differentiation.

and (c) divided difference polynomials.3) wheren denotes the degree of the polynomialand a0 to an are constant coefficients. (a) the Newton forward-difference polynomial. Numerical differentiation and numericalintegration are discussed in Chapters 5 and 6. These methodsare quite easy to apply. for example. A discussion of inverse interpolation follows next. In this book. The final topic is a presentation of least squares approximation. Several programsfor polynomial fitting are then presented. (b) Approximate (a) small or large sets of rough data. Exactfit.2 PROPERTIES OF POLYNOMIALS The general form of an nth-degree polynomial is [ P.3 x (b) ¯ Discrete points x Polynomial approximation. so n + 1 discrete data points are required to obtain uniquevalues for the coefficients. (a) direct fit polynomials.190 Chapter4 Polynomial Polynomial ¯ Discretepoints (a) Figure 4. . and (c) several other difference polynomials. (b) the Newton backward-difference polynomial. respectively. The chapter closes with a Summary which summarizesthe main points of the chapter and presents a list of what you should be able to do after studying Chapter 4. Several procedures for polynomial approximation are developed in this chapter. Application of these procedures for interpolation is illustrated by examples. several procedurescan be used to fit approximating polynomials. + anx" ] (4. 4. the least squares method is used for approximate fits. procedures the based on differences can be used. After the brief introduction in this section. A set of discrete data maybe equally spacedor unequally spacedin the independent variable x.(x)= o + a~x + a2x 2 +.4 illustrates the organization of Chapter 4. (b) Lagrange polynomials.. In the general case wherethe data are unequallyspaced. the properties of polynomials which make them useful as approximating functions are presented. When data are equally spaced. Both types of methods are consideredin this chapter. Multivariate interpolation is then discussed. That is followed by an introduction to cubic splines. The presentation then splits into methodsfor fitting unequally spaced data and methods for fitting equally spaced data.. Figure 4. for example. Methodssuch as these require a considerable amountof effort. There are n + 1 coefficients.

Polynomial Approximation Interpolation and PolynomialApproximation andInterpolation Properties of Polynomials I 191 I unequally spaced Data I DirectFitPolynomials Lagrange Polynomials Divided Difference Polynomials I I Equally Spaced Data Difference Polynomials I I . The property of polynomialsthat makesthemsuitable as approximatingfunctions is stated by the Weierstrass approximationtheorem: If f(x) is a continuousfunction in the closed interval a _< x < b.4 Organization of Chapter 4. wherethe the value of n dependson the value of e. IP. InverseInterpolation MultivariateInterpolation I Least Squares Approximation Programs Summary Figure 4. then for every e > 0 there exists a polynomial P.(x). such that for all x in the closed interval a < x _<b.(x) -f(x)l < ~ .

is fit exactly to a set of n + 1 discrete data a points. (X n. When polynomialof degree n. Error(x).9) wherexo < ~ < xn.f...However. It does have great significance.xtensively in the error analysis of procedures based on approximating polynomials. impossible to evaluate an infinite number terms..g.).. f"(xo)(X x0) 2 -[¯ ¯ ¯ (4. The Taylor polynomialcannot be used as an approximating function for discrete data because the derivatives required in the coefficients cannot be determined. of course. . This form of the error term is used e. + ~ f(")(Xo)(X 1 (4. asill ustrated in Fig 4. term. however.X0) x° < ~ <. The Taylor series is a polynomialof infinite order. whichmakesthe product of those terms the smallest possible.8) (4.(x) -f (x) It can be shown that the error term. low-degree polynomials are employed.f ~) . Polynomialssatisfy a uniqueness theorem: I A polynomial of degree n passing exactly through n + 1 discrete points is unique The polynomial through a specific set of points maytake manydifferent forms. differentiation. 1 .5.4) It is. . so care must be taken to achieve the desired accuracy. P~(x). Thus. In practice.fo).for polynomialapproximation.. Equation (4.Xo)(X Xl)" " " (X-.because it has an explicit error term. has the form Error(x) 1 (X (n + 1)! -. Any form can be manipulated into any other form by simple algebraic rearrangement. interpolation. and the remainder term Rn+l(x are given by ) 1 P.Xo) +.Xo) ÷~.5) (4. 1 f(x) =f(x0) +f’(xo)(X . but all forms are equivalent. (xo. or error.192 Chapter4 Consequently.x The Taylor polynomialis a truncated Taylor series.9) shows that the error in any polynomialapproximation discrete data (e. with an explicit remainder..7) Rn+~(x) (n 1)! f( n+l)(¢)(x -- n+l -. any continuous function can be approximated to any accuracy by a polynomial of high enough degree. the poly ure nomial has e no rror at the data points themselves. The Taylor polynomial of of degree n is defined by f(x) = Pn(x) Rn ) +l(X where the Taylor polynomial Pn(x).xi) terms as small as possible. or of integration) will be the smallest possible whenthe approximation centered in the discrete is data because that makesthe (x .. there is an at the error whichis defined by Error(x) P. the locations between data points.Xn)f(n +l)(~) (4.6) (4.(x) =f(xo) + f’(xo)(X .

Integration of polynomialsis equally straightforward.dx I--~-~d =PE(x) 2a2 + 6a3x +’.xn)ax l = aox + a_l x2 + .14) (4.1In) (4. For the general term ai l ai xi dx /~ ai xi+l + constant (4.5 Error in polynomial approximation. + n anxn-1 = P n_~(x) dx d2Pn(x) (4.(x) 1 + 2azx +.15) .(x)dx=I(ao+alx +" " +a.(x) is I =J e.. Differentiation of polynomialsis straightforward. . p~n+~(X) = + n(n .10) The derivatives of the nth-degree polynomialPn(x) are dP. = P~. . ai d ~X (aixi) : iaixi-1 (4."~(x) = n!a. For the general term xi._2(x) (4.13) The integral of the nth-degree polynomialP.1 lb) (4. Pn(x)-f(x) X Figure 4.1)anxn-2 = P.Polynomial Approximation Interpolation and 193 f(x)! Pn(x) Error(x) ~rmr(x) . .12) xi.(x) _ P’. + an n+l + constant x 2 n+l Pn+l(x) (4.1 la) dx2..

The derivative of a polynomial P~(x) can be obtained from Eq. and R is a constant remainder. (4. then (x . For example.16a) (4. and its integral. P~(x).. +x(an_ ~ + anx)]}) a0 (4. The nested multiplication algorithm is based on the following rearrangement of Eq. or its integral. Pn(x).16b) requires (2 + 3 + 4) -. The division algorithm states that Pn(x) = (x . E quations (4. whichmeans ) that Nis a root. P’~(x) = Qn_~(x)+ -N)Q’n_~(x ) (4. Thus.21) Pn(N) = The factor theoremstates that ifPn(N = 0. (4.18) can be evaluated constructing the nested multiplication sequence: bn = an bi = ai q. the evaluation of Eq.xbi+ 1 (4. four divisions. fP4(x)dx: P4(X) = a 0 + alx + a2 x2 + a3 x3 qa4 x4 (4. (x -N) = 0.16 a2 3 -4. a polynomialmust be evaluated if manytimes.1. is straightforward. O) wherePn (x) = 0.. ~. and ~ = ). or very high degree polynomials must be evaluated. nested multiplication is given by Pn(x) +x(al +x {a2 +x[a 3 +. fPn(x) dx.16a): P4(x) = ao + x{a 1 q-x[a 2 q-x(a 3 q-a4x]} (4. The remainder theoremstates that (4. or zero.16a) requires (0 ÷ 1 + 2 + 3 + 4) = 10 multiplications four additions.P4(x).16b) 3 P~4(x) = al + 2azx + 3a3x2 + 4a4x = P3(x) I P4(x) dx = aox_~+ 2 +~ x _ -~ _ ~ + cons tant = P5 (x ) ( 4. (4. and five additions.16c) requires (1 + 2 + 3 + 4 + 5) multiplications.18) which requires n multiplications and n additions.20).2 .20) where N is any number.16b) and (4. consider the fourth-degree polynomial. P~(x).N) is a factor of P~(x). Nested multiplication is sometimescalled Horner’s algorithm. its derivative.1. For a polynomial of degree n. However. a more efficient procedure is desirable.16c) can be evaluated in asi milar ma ner wi n minor modifications to account for the proper coefficients. and the evaluation of Eq. ..N)O.22) . its derivative. n -..194 Chapter4 Theevaluationof a polynomial.9 multiplications three additions. Several other properties of polynomials are quite useful. Qn_l(x) is a polynomial of degree n .a3 x4 + a4 5 x The evaluation of Eq.19) (i =n -. This is a modest amountof work. or manypolynomials must be evaluated.. (4. (4. for a particular value of x. Pn(x). even for polynomialsof degree as high as 10. Equation (4. The nested multiplication algorithm is such a procedure. ofPn(x That is. ] 7) which requires four multiplications and four additions._~(x) (4.

Eq.28) .19). or zeros.24b) Substituting Eqs.1)st-degree polynomial Qn_l(x). and Eq. (3. Ira root. Let’s illustrate polynomialevaluation using nested multiplication.25. 195 P~.26) Equation(4. and polynomialdeflation using synthetic division.c~) to yield the (n ..24a) and (4. = an bi = ai + xbi+ 1 (i = n -..25. (4. etc.(c0 = 0 and R = 0. (4.25. ifc~ is a root Eq.25) can be written b.23) Consequently. (4.24a) and Q.n) (4.27) The deflated polynomialQn_~(x)has n . + anx n lx (4. (4. Qn_~(x). O) (4. Polynomial evaluation. first derivatives of an nth-degree polynomial can be evaluated from the (n.. Substituting x = Ninto Eq.15x4 + x (4.Pn(x).(x~is known.1.1.5. (4._~(x) in the form Qn_l(X) = 1 +b2x + b3x2 +. then P.From (4.26) is identical to the neste~ltiplication algorithm presented in Eq. (4.2 . can be evaluated by the ) synthetic division algorithm. c~.1) (4. which can be used to evaluate derivative Un(x and the remainder R.0) bl = at + xb 2 bo = ao + xb1 = R Equation (4..20) and equating coefficients of powersof x yields: b~ = a n bn_ 1 : an_ 1 + xb n (4.3): Pn(x) x2 a0÷ a + a2 +.25. n .. whichare the remainingroots. ConsiderP~(x) in the form given by Eq. or zero.. The (n.. Example4.(N) = Qn_I(N) (4.. + bn_l Xn-2 -] .1)st-degree polynomial Qn_~(x).Polynomial Approximation Interpolation and Atx=N.(N)..24b/Ls/ields the value of P’. Higher-order derivatives can be determined applying the synthetic division algorithmto Qn_l(x). of P.20) yields Qn_~(x) = 0 (4.20). of the original polynomial.bn Xn-1 (4.n-1) (4. Pn(x). or zeros.Pn(x) can be deflated by removing factor the (x . polynomialderivative evaluation using synthetic division.1 roots.225x + 85x3 . Considerthe fifth-degree polynomialconsidered in Section 3. The properties of polynomialspresented in this section makethemextremely useful as approximatingfunctions. which yields P~(N)= R.1)st-degree polynomial.115): 2 5 Ps(x) = -120 + 274x .1.24b) into Eq.

4) (4.29.5.2) (4. Qa(x) = (x .120. (4.0 q.0x3 4 +x (4.6250 274. P~(2.30) with x = 2.0 yields 1.0x 2 . T his r esult c an b e v erified b y direct e valuation o f Eq.32.29.3) (4.30) Evaluating Q4(2.0) = 59. Evaluate P5(2. gives c4 c3 c2 c1 co = 1.5(1.1) (4.0 Thus.5(47.5).0(60.3) (4.29. (4.26).0) Thus.5). the deflated fourth-degree polynomialis Qa(x) = 60. P5(2.50 85.5) = Q4(2.1)(x . (4.32.90.2) (4.3)(x .4375 ÷ 2. (4.0) = -13. Toillustrate polynomial deflation.5 + 2.32.31.32.50)= 0 =-1. (4.5(53.43750) = .43750 .0 -120.0(59.50x3 +x (4.0 = -12.3) (4.0 + 2.0) = -10.4) (4. 4. (4. 3.0) = -107.1.28) are x = 1.5) (4.0 + 2.0) = 60. and 5.29.19).0) This result can be verified directly by expandingthe product of the four remaininglinear factors.56250. Eq.5(-18.43750 .6250) = 47.32.29. and determine the deflated polynomialP4(x) obtained by removingthe factor (x Evaluating P5(2. P~(2.2) (4.33) (4.0x ÷ 59. This res uh canbe veri fied by d ire ct eval uation of Eq.31.5(28.0 -225.750x2 .6250x + 53.0 + 2.406250.196 Chapter4 Recall that the roots of Eq.0) Thus.5) by nested multiplication using Eq.5.1) 4.0 + 2.5(-10.19) yields 1.75 + 2.5(-12.0 -15.0 274.5) = Q4(2. (4. let’s deflate Ps(x) by removing factor (x the Applyingthe synthetic division algorithm.0 + 2.31.18. where Q4(x) is 4 Q4(x) = 47.750) = -90.0(-13.4) (4.4)(x .5(-90.107.2.0(-107.0) = -12.750) = .2.0 = 53.750 = -90.12. with x = 2.0) = 0.29.23).0.1) (4.13.56250 (4.750 -225.406250 (4.5(1. with bi replaced by ¢i.0 85.31.5) by nested multiplication using Eq.5) = 53.0 + 2.0) = 28.0 + 2.750) = 0.5) (4.750 = 47.0 + 2.5) and P~(2.0 -15.0 + 2.32.0(1.625 + 2.31.28) with x = 2.5) o = 0. FromEq.

3 DIRECT FIT POLYNOMIALS 197 First let’s consider a completely general procedure for fitting a polynomialto a set of equally spaced or unequally spaced data... consider the simple function y =f(x) l/ x.35 3.285714 0. Substituting each data point into Eq.J ]) .~ fl =ao-l-alxl-ba2x~-l-’"q-anx~l f.44) -.n) There are n + 1 linear equations containing the n + 1 coefficients a0 to an. letf(xi) =fi.2.60 f(x) 0.3. 1 . .bx +2 (4.294118 0. Ix1 . The resulting polynomialis the unique nth-degree polynomialthat passes exactly through the n + 1 data points..+a.35. The exact value is 1 y(3.290698.35) can be solved for a0 to an by Gauss elimination. Given n + 1 sets of data [xo.277778 Let’s interpolate for y at x = 3.44) =f(3...34) yields n + 1 equations: fo=ao+alxo+a2~+. and cubic interpolation. Equation(4.Polynomial Approximation Interpolation and 4. Let’s illustrate the procedurein detail for a quadratic polynomial: P2(x) = a -t.. Direct fit polynomials.37) (4.J~).34) For simplicity ofnotation.. quadratic.44 using linear.36) . and construct th e following set of sixsignificant figu re data : x 3.44 -.=ao+alxn+a2X]n+’"+an~ (4:35. [Xn.0.0) (4.fn).. Example4.35. (4.. To illustrate interpolation by a direct fit polynomial. The direct fit polynomial procedure works for both equally spaced data and unequally spaced data.. determine the unique nth-degree polynomialPn(x) that passes exactly through the n + 1 points: [ P~(x) = ao + alx + a2x2 xn "-b an +"" (4.298507 0.f(x~)] .. whichwill be written as (x0.1) (4.f(xn)]. (x n.50 3.f(xo) ].40 3.

198 Chapter4 To center the data aroundx = 3.290698 linear interpolation quadratic inteqaolation cubic interpolation Error = 0.0.00613333x (4. This procedure is quite laborious using direct fit polynomials.50 to center that data aroundx = 3.50) + c(3.0. P(3.0.0.290698.000001 = 0.290697 (4.0840400x (4.all of the workrequired to fit the newpolynomialmust be redone.44) = 0.44) = 0. The results are summarized below.42) gives P3(3.44) = P2(3.294118 = a + b(3.290697 = 0. Error(3.42) Substituting x = 3. all four points must be used.44) = 0.0.38.290698 = -0. and the errors. (4.41) (4. For a linear polynomial. has several disadvantages.44 into Eq.0. A second advantageis that the data can be unequally spaced.0249333x Substituting x = 3. One approach for deciding whenpolynomial interpolation is accurate enough is to interpolate with successively higher-degreepolynomials until the changein the result is within an acceptable range. and cubic interpolation.44 into Eq.40 and 3.290756.44) 2 = 0.3.290697 .256080x + 0. b.40) + c(3.44 into Eq.38.44)----P(3. (4.44.876561 .40) The error is Error (3.4 LAGRANGE POLYNOMIALS Thedirect fit polynomial presented in Section4. and c by Gausselimination without scaling or pivoting yields 2 Pz(x) -.40) 2 0.876561 .121066 .000058 -.298507 = a + b(3.0249333(3.44) = 0.39) gives P2(3.290756 = 0. the first three points are used.0878000x 3 0.50) (4.38) for a.35) 2 0.44.000001.44) + 0.-0.44). The resulting linear polynomialis P~ (x) -. are tabulated.44) = 0.000000 The main advantage of direct fit polynomials is that the explicit form of the approximating function is obtained. 4.285714 = a + b(3. The work required to obtain the polynomial does not have to be redone for each value of x.470839x + 0. (4. and interpolation at several values of x can be accomplishedsimply by evaluating the polynomial at each value of x.0.579854 . (4.2) (4. while quite straightforward in principle.290698.35) + c(3. where the results of linear. It requires a considerable amountof effort to solve the system .use x = 3.256080(3.0.44) -f(3.41) gives P~(3.39) Substituting x = 3. advantagesof higher-degree interpolation are obvious. Applying Pz(x) at each of these data points gives the followingthree equations: 2 0.38.For a cubic polynomial. The maindisadvantage of direct fit polynomialsis that each time the degree of the polynomialis changed. quadratic. The results obtained from fitting other degree polynomials of no help in fitting the next is polynomial.1) (4. The resulting cubic polynomialis 2 P3(x) = 1.3) Solving Eqs.

f(c)]..4. Given three points..f(a)]. This procedure can be applied to any set of n + 1 points to determine an nth-degree polynomial.44b) Pl(b).~f(a) +(b-a) (a (4..f(b)].called Neville’s algorithm.b) P2(x) .f(b)] .43) Substituting x = a and x = b into Eq. [a. (4.f(b)]. For a high-degreepolynomial greater than about 4).(a b)(a .~)) (x . Eq. The form of the Lagrangepolynomialis quite different in appearancefrom the form of the direct fit polynomial. However. However. (4.by the uniqueness theorem. A variation of the Lagrange polynomial.43) passes through the two points. especially for higher-degree polynomials.(x-k) (a b)(a c) . [k.f(a)] and [b.(b (a (b. a considerable amountof computational effort is involved. A simpler.45) Substitution of the values ofx substantiates that Eq..b)(x . [b.a)(x . The Lagrange polynomial is presented in Section 4. the two forms both represent the unique polynomialthat passes exactly through a set of points. Nosystem of equations must be solved to evaluate the polynomial.4. the nth degree Lagrange polynomial Pn(x) which passes through the n + 1 points is given by: P. Lagrange Polynomials Consider two points.a) ..a )( b f(b ) q ( c a)( f(c) (4.Polynomial Approximation Interpolation and 199 of equationsfor the coefficients.. .f(a)].. more direct procedure is desired.1... The linear Lagrangepolynomial Pl(X) which passes through these two points is given by P1 (x) .~)f(a) + (4.Givenn + 1 points. ~f(a) ~Z--~j (o) = f(b) which demonstrates that Eq...b) =f(a P1 (a).(x) (x-b)(x-c).34).1. whichcauses large errors in the values of the coefficients.c) -’a" (x a)(x .43) yields (a -~a)f (a .f(k)]. and [c. [a.44a) (4. whichhas somecomputationaladvantages over the Lagrangepolynomial. which can be fit to unequally spaced data or equally spaced data. One such procedure is the Lagrangepolynomial.4.j( ) -t (b .46) (x-a)(x-b)’"(x-j)f(k) a)(k ~Z (k-j) The Lagrangepolynomial can be used for both unequally spaced data and equally spaced data. 4.45) passes throughthe three points..(a +’"+(k (x-a)(x-c).(x-k) k)f(a)+(b a)(b f(b (4. is presented in Section 4. (4. the (n systemof equations can be ill-conditioned. the quadratic Lagrangepolynomial Pz(x) which passes throughthe three points is given by: (x . [a.2. [b. (4.

44).3..3. 60) (0.3.35)(3..40)(3.290698 (4.50)(3.285714) = 0.0.290697 (4.40.60~.44 .50)(3. and cubic interpolation.40 3.40 3.4118) (u" (3.= ~-.298507) (3.50 ~(0.44 -__3.290697 = 0..44 .35)(3.50 Cubicinterpolation using all four points yields P3(3.50 3. advantages of higher-degree interpolation are obvious.44) (3.40 3.48) 0. x = 3.35. Linear interpolation using the two closest points. Chapter4 Consider the four points given in Example 4.40) (3.3.3.-.40 3. quadratic..35)(3. 4. quadratic.44) using linear.40)(3.3.2857 14) 3.44 .44 .44 .44 .294118 0.3.50 3. and cubic Lagrangeinterpolating polynomials. gives P2(3"44) (3.35)(3.3 60) 4(3.50.40)(3.(3.000058 = -0.298507 0. .277778 Let’s interpolate for y =f(3.60 f(x) 0.40 tu’z~‘4 18) (3.35)(3.44 .50) .to" 2985.44 ..50 3.44 .40) -t (3.50) .2 90756 (4.~o) 0.40)(3.60 3..290756 = 0.44 .2.40 3. 1 -~ (3.44 .3.000000 .3.3.44 .3.35)(3.44~=~ .44 .40)(3.285714 0.60) .50 3 . Lagrange polynomials..50. where the results of linear.3.44)= P(3.50) (0.000001 = 0.285714) (3.44) (3.40 and 3.44 .294118) -~(3.50)(0.3.35)(3.3. The exact value is y = 1/3. Error(3.3.44 = 0.290698 linear interpolation quadratic interpolation cubic interpolation Error = 0. and the errors.35)(3..49) The results are summarized below.40 3. x --.35)(3..44 . which satisfy the simple function y =f(x)= l/x: x 3.4 0)(0. yields P1(3. P(3.44 .3. 3.3.60 3.50)(3.290698.44 ..--~ u/) (3.~ C ~ _--3.277778) -~ (3.3.50) (3.3.44) = 0.40)(3.47) Quadraticinterpolation using the three closest points. are tabulated. 29"" .50) 44 ..60) (3.50 (0.35)(3.35 3.20O Example.35 3. and 3.3.290698.

second.) and the superscript (n) denotesthe degreeof the interpolation (e.g.). (4.(x . For the first interpolation of the data. or in any structured order.(x .(X -. However. (°) f.a)f(b) .50) f(x)= (x . whichare presented in Section 4.2.52) where the subscript i denotes the base point of the value (e.. is basedon a series of linear It interpolations. zeroth.53) . Eq.5. Considerthe following set of data: Recall the linear Lagrangeinterpolating polynomial. A table of linearly interpolated values is constructedfor the original data. Neville’s Algorithm Neville’s algorithmis equivalent to a Lagrange polynomial.. Thedata do not have to be in monotonic order.51) yields f/(n) (X-. i..Xiq_n)fi f/~? Xi+n -. i + 1.X)f(O)i+l .x~+OZ (4. first.(~) (x -. Both disadvantagesare eliminated by using divided differences.2 as they should be.51) In terms of general notation.4.ta) whichcan be written in the following form: (4.g. All the workmust be redone for each value of x. which is presented in the next subsection. etc. whichare denotedas f(0). All of the work must be redone for each degree polynomial.etc. There are several disadvantages. (4. The first disadvantage is eliminated by Neville’s algorithm. 4.xi) (n-l) 1)__. Eq.the most accurate results are obtained if the data are arranged in order of closeness to the point to be interpolated. since the same data points are used in both examples.Polynomial Approximation Interpolation and 201 Theseresults are identical to the results obtained in Example by direct fit polynomials.43): (x-a) f(x) (x-b)r. 4.a) (4. The main advantage of the Lagrangepolynomial is that the data maybe unequally spaced.b)f(a) (b .X i (4.

(a) First set of linear interpolations. f/(2) ~--. (4.~ = fl x~ ~ ~o~ ~) f(3 ~ ¯ fl~) (x-x~). to (2) and f2(2) toobtain .52). Third of linear interpolations (c) set as illustrated in Figure 4. 2.xi)fi(+ll -.)~l f~3) (c) Figure 4. (b) Second of linear set interpolation. This process is repeated to create a third column which off values.1 values off (L). and so on. as illustrated in Figure4.1 is identical by to a Lagran~e polynomialbasedon the data points used to calculate the specific value.6c. Eq.0)-(x_x~) 2 fl2) = X3 -X 1 X f4 = f(40) 4 (b) =~ (x-x~)~-(x-x.~ (a) J O) ~--f. It can be shown direct substitution that each specific value in Table4.Xi+2)fi (X Xi+ -.6b. The advantage of Neville’s algorithm over direct Lagrangepolynomialinterpolation is nowapparent.54) (3) is illustrated in Figure 4.X 2i (4. For example. Theformof the resulting table is illustrated in Table4.(1)-.(X -.202 Chapter4 fl 1) = °~ (x-x~)~°>(xx~)(l X --X 21 x~ ~=~(~°~ °~~(4 = x~ ~.6a. This creates a columnof n . The third-degree Lagrangepolynomialbased on points 1 to 4 is obtained simplyby applyingthe linear interpolation formula. Thus.1. f~’) is identical to a second-degreeLagrangepolynomial based on points 1. . A second (1) columnof n -2 values off/(2) is obtained by linearly interpolating the columnoff values.6 Neville’s method. and 3.

3.290756 : (3.44 .To evaluate f~(2). each value in the table.44 3.If the original data are arrangedin order of closenessto the interpolation point.3. Noneof the prior workmust be redone.55a) Thus. Rearranging data in the order of closeness to x = 3.44 .3.)f--. and cubic interpolation using Neville’s algorithm.285714 (4.x3)f2 (0) x3 .290831 .1. f.44 .44) using linear.(3.3.50 .x2 0.35)0.35 .52) to the values offi (°) gives Eq.3.3.298507 0.3.50 .40 (4.4. Let’s interpolate for f(3.277778 Applying (4. Neville’s algorithm.(3.44 . Consider the four data points given in Example 4.40 3.285714 .50)0.40)0.(x -x3~(~) = (3.294118 0. as it wouldhave to be redone to evaluate third-degree Lagrange polynomial.X 3.290696 (4.X (x X2 1 = 0.f2(1) mustfirst be evaluated.3.<. f2(1 ) = (X -X2)f3 (0) -(X -.35 .44 yields the following set of data: x 3.298507 (3. represents a centeredinterpolation.40)0.290756 x3 -.x. f(n).): (x. Example4.35 3.290831 (3. the result of linear interpolation isf(3.40 1 0.60 f~ 0.55b) Evaluating (2) gives f~ (x -x~)f~(1) .56) .35)0.50)0. Thus.Polynomial Approximation Interpolation and Table 4. quadratic.294118 3.44) =f~(’) = 0.3.290756.c(o) I (1) fl fl(2) ] f3(1) X3 X4 f(O) AO) I J4 I f~(3).285714 0.3. Table for Neville’s Algorithm 203 Xl J .50 3.44 .

All of the work must be redone for each new value of x.35 3. the result of quadratic interpolation is ft3. To evaluate (2) fl(3).5 DIVIDED DIFFERENCE TABLES AND DIVIDED DIFFERENCE POLYNOMIALS A divided difference is defined as the ratio of the difference in the function values at two points divided by the difference in the values of the correspondingindependentvariable.Theseresults.3. Xi+l] -. are presented in Table4.204 Table 4.290703 0. Neville’s Algorithm X i (0) fi (1) fi (2) fi (3) fi Chapter4 3. Xi+I.277778 0.57) The second divided difference is defined as f[xi.Xi (4.285714 0.5. Xi+2] f[xi+l’ Xi+2] = --f[xi’ Xi+2 -. The divided difference polynomial presented in Section 4.290698 Thus.1.A(1) and A mustfirst be evaluated.290756 0.50 3.290831 0.40 3.60 0.58) Similar expressions can be obtained for divided differences of any order. The advantage of Neville’s algorithm over a Lagrangeinterpolating polynomial. These results are the same as the results obtained by Lagrange polynomials in Example 4. 4.5 minimizes these disadvantages. the first divided difference at point i is defined as f[xi.291045 0.298507 0.X i Xi+l] (4. 4. Neville’s algorithm has a couple of minor disadvantages.2. is that none of the work performed to obtain a specific degree result must be redone to evaluate the next higher degree result. and the results calculated above. if the data are arranged in order of closeness to the interpolated point.44)=f~(2)= 0.2.290696.290697 0. Thus.Thenfl ( ) can be evaluated. Xo X1 £ fo fl x2 X3 f2 f3 .294118 0. Divided Difference Tables Considera table of data: x. Approximating polynomialsfor nonequally spaced data can be constructed using divided differences.f+l -f Xi+ 1 -. The amount of work is essentially the same as for a Lagrange polynomial.

xi+ f[Xi+l. xi+ .3. x2 =f[xj . the Lagrangepolynomial.61) (x.fi....59a) (4.60) The second divided difference is defined as follows: f[xo’ xl. x1 .just as for the direct fit polynomial.63a) (4...63b) Table 4. x2] -fix o.. The notation presented aboveis a bit clumsy. ... x~ .xo) -) (f2 -J]) etc.xt f[xl. x. whichare denoted byf(°}. x~ .. whichis presented in Section 4.. However. in terms of standard notation.xo) In general.] _f[xl.moreaccurate results are obtained if the data are arrangedin order of closeness to the interpolated point.Polynomial Approximation Interpolation and Thefirst divided differences..f[xo... f[xi] -. A morecompactnotation is defined in the same manneras the notation used in Neville’s method. wherethe subscript i denotes the base point of the value and the superscript (n) denotes the degree of the divided difference. Note that (f+l --f/) (f --f+l) f[xi’ xi+~ ] ="(xi+l . Thus.._~] (4. Xn].3 illustrates the formation of a divided difference table.Xi] (4. xi+. x. f[xo.] ~ (4.fo) 205 (xl . xi+2 ] In general. differences Xi Xl f/(0) f/(1) Table of Divided fi(2) f/(o) f2(O) f3(O) f4(O) fl (I) fl(2) (1) f2 ~(22) (1) f3 (3) fl . x~] ] (x2. f(1) =f[xi. Table 4.2. The first column contains the values ofxi and the secondcolumncontains the values off(xi) =f.62) Bydefinition.59b) (4. The remaining columnscontain the values of the divided differences. xi+l.4.64) (4. and Neville’s method. The data points do not have to be in any specific order to apply the divided difference concept. are f[xo’ xl] _ ( f~ ..(xi . f(") =f[xi.Xo) (4. Xi+l ] f/(2) =f[xi.. x2]=(x2"---.--~i) -.

~1~ G(x) =f]o~ (x .(x) passes exactly through the data points. 0 Pn(Xl) --f0 + (A -fo) (X -.70 fi(O) fi(1) fi(2) fi(3) f/(4) 0. Chapter 4 Let’s construct a six-place divided difference table for the data presented in Section 4. Divided Difference Polynomials are identical to the divided Let’s define a power series for P.006132 -0.(X2 -..0.285714 0.50 3.021733 0.X0) pn(x2) :f/(o) q_ 2 _ Xo )f(1) 14.4.20 3.010677 -.5.009335 -0.(x2.xn_~)f (4. + (x .5.xl). Divided Difference Table xi 3.68a) (A-fo) (x~ x0) .(x2) =f0 + (f2 -f~) + (f~ -f0) =J~ - (4. .xo)(x ..(xo) =f(0) + (0)f(1) +..65 3.Xo)(X 2~+.68c) Since Pn(x) is a polynomial of degree n and passes exactly through the n ÷ 1 data points.XO) (4.028267 0. + xl)f] (" (x .. To demonstrate that P.020400 -0.65) P~(x) is clearly a polynomial of degree n.094700 -0.312500 0.2.4. 2 Pn(x2) :f0 "}.270270 -0.000667 0.x0)(x 2 ..30 3.079360 -0.40 3.206 Exa~nple 4.35 3.68b) (4.090460 -0.67b) P. (4.303030 0.X0) + (x 2 .(x) such that the coefficients differences. let’s substitute the data points into Eq.1.x~) (f~ -fl)/(x2 P.006668 -0.087780 -0...66) (4. Thus.Xo)J.xl)f/ (2 =fo q.x~) . it is obviously one form of the unique polynomial passing through the data points.084040 -0.074060 0. Table 4.007335 -0.000010 4.001787 0.273973 0.294118 0.60 3. +(x .67a) :J] (4.Xo)( )f/(2) q-.x0) (A-fo) + (x 2 . Divided difference Table.65). Thus.(fl -fo)/(x~ (X2 -.Xo)(X .(x~) =f(o) + (x~ .023400 0.. f(n).298507 0.Xo2 .Xoff~/(1) I .277778 0.026800 0. The results are presented in Table 4..0..Xl)(O)f (3) +.f(o) (4.024933 0. p.076100 -0.(Xl -.006665 -.

69) °) Substituting the values ofxo to x2 and~ tofo(3) into Eq.3.0.000089 + 0.44 .4)(3.5)(-0.0.006150) Evaluating Eq. Let’s interpolate for f(3.44) = 0.000001 (4.290607 = 0.44)=fo(°) + (3.65). Eq.294118 . RearrangedDividedDifference Table (4.44 .44 .003362 .3. FromEq.72) xi 3.3.084040 -0.024933) + (3.4)(0.44 . using 0 =3.50)(3.35)(-0.50)(0.44 . The aboveresults are not the most accurate possible since the data points in Table 4.40 3.4 are in monotonicorder.50 3.xl)(3.3. FromEq.44) ----.0.35)(3.44 .60 f.084040) + ÷ (3. (4.290696 = 0. (4.44) = 0.40)(--0.44 (1) X0)A ÷ (3. (4.Xo)(3.65): Pn(3.44 .Polynomial Approximation Interpolation and Example4.087780) + (3. Rearrangingthe data in order of closeness to x = 3.023710 f.298507 0.298507 .65): Pn(3.~0~ 0.44 .~2) 0. Divided difference polynomials.006132) Evaluating Eq.5.Xo)(3.3..294118 (3.277778 f.44 .285714 0.3.44)= 1/3.44 .44 .44) = -0.44) = 0.085287 -0.298507 + (3. (4.44) = 0.72) term by term gives P.(3.000001 Tabl~4.35)(3.5.73) (4.44 . 207 Consider the divided difference table presented in Example 4.40)(3.~l~ -0.44(2 Xl)A + (3.0.294118 0.44 .3.000002 = -0. (4.6.44 = 0. (4.3.024940) + (3.35 3.006150 ..000091 = -0.5.3.~3~ -0.35 asthebase point.024940 0.3.000060 + 0.35)(-0.44) using the divided difference polynomial.70) The advantage of higher-degree interpolation is obvious.71) (4. The exact solution isf(3.44 .40)(3. which makethe linear interpolation result actually linear extrapolation.082916 f.3.007900 + 0.44 .290697 linear interpolation quadratic interpolation cubic interpolation E~or(3.69) gives Pn(3.70) term by term gives Pn(3.44 .44 yields the results presented in Table 4.3.290698.000001 Summing terms yields the following results and errors: the P(3.3) - (4.

4. 4.three different interpretations can be assigned to these numbers. such as Table 4.000058 = -0. x xo xl x2 x3 f(x) fo f~ f2 f3 Tableof Differences (f~ -A) (A -A) (A -A) (A -3A+3~-A) .1. in a table with the x values in of monotonicascending order. because the samepoints are used in those two interpolations. each with its unique notation.000000 The linear interpolation value is much moreaccurate due to the centering of the data. forward difference operator A is defined as Af(xi) Af= (fi +~ .000001 = 0. a backward-difference table. The quadratic and cubic interpolation values are the same as before. Theseresults are the sameas the results obtained in the previous examples. in The numbersappearing in a difference table are unique. as illustrated in Figure Table4.6.76) (4.44) = 0.75) (4.Atriangular array is obtained. Consequently. Difference Tables A difference table is an arrangement a set of data.6. [x. the concept of differences.208 Summing terms yields the following results and errors: the P(3. can be interpreted as a forward-difference table. Implementation polynomialfitting for of equally spaced data is best accomplished in terms of differences. However. The forwarddifference relative to point i is (f+l -f).6 DIFFERENCE TABLES AND DIFFERENCE POLYNOMIALS Fitting approximating polynomialsto tabular data is considerably simpler whenthe values of the independentvariable are equally spaced.290698 linear interpolation quadratic interpolation cubic interpolation Chapter4 Error = 0.6.290697 = 0. as illustrated in Table 4.290756 = 0. with additional columnscomposed the differences of the of numbers the precedingcolumn.74) A difference table. difference tables. or a centered-differencetable.6. and the centered difference relative to point i + 1/2 is (f+a -f). except for round-off errors. and difference polynomials introducedin this are section.f(x)].f i) The backwarddifference operator V is defined as Vf(xi+~) = Vf+~ = (f+~ -f) The centered difference operator (5 is defined as ~f(xi+~/2) = 6f+~/2 = (f+l -f) (4. the backward difference relative to point i + 1 is (f+a -f).

Table 4.000040 -0.000007 -0.2 to 4. 1 < x < 3.263158 0.5 3.1 3. 8f3~ 52ft (a) Forward-difference table.6. whichuses the forward-difference notation to denote the columnsof differences.322581 0.294118 0.007508 -0.007936 -0.000010 0.000468 0.7 3.256410 -0.000008 -0.303030 0.009470 -0. Example 4.2 3.000010 0.000008 Af(x) A2f(x) A3f(x) Aaf(x) ASf(x) .7.000008 0.312500 0.270270 0.000611 0.000558 0. Difference Table x 3.000053 -0.000003 0.008404 -0. The three different types of interpretation and notation simplify the use of difference tables in the construction of approximating polynomials.285714 0. Let’s construct a six-place difference table for the function f(x) 1Ix for 3.whichis discussed in Sections 4.9 f(x) 0. (b) Backward-difference (c) Centered-difference table.6.000040 -0.7.3 3.000032 -0.7.000032 0.000000 0. 4.000508 0. The results are presented in Table 4.010081 -0. The numbersin the tables are identical.8 3.9 with Ax= 0.6 3.007112 -0.4 3.4.006748 0. Only the notation is different.000050 -0.7 table.1.000000 0.008912 -0.7. Difference table.000396 0.277778 0.Polynomial Approximation Interpolation and x f ~xo fo x~ f~ x2 f2 x3 f3 &f &~f ~fo Afl ~f2 ~fo A2fl A~f x f x_~ &~fo x_~ f_~ x~ xo f_l fo Vf Vf-2 Vf_t Vfo V2f V2f_~ V~fo V3f~ V3f 209 x f ~f ~:f ~f xl fl x2 f2 Figure 4.000364 -0.000428 0.

FromEq. while monotonic.8 showsthat the errors due to round-off in the original data oscillate and double in magnitudefor each higher-order difference. Theoriginal data set has errors.79) (4. The nth-degree polynomialP.78) In Sedtion 4. the maximum round-off error in ASf is 4-25-1 =-t-16. (4.210 Chapter4 Several observations can be madefrom Table 4. The last digit is typically roundedoff. The degree of polynomial needed to give a satisfactory fit to a set of tabular data can be estimated by considering the properties of polynomials.80t Table 4. The magnitudesof the higher-order differences decrease rapidly.(x) is given by P. Table4. 2. 3. Differencetables are useful for evaluating the quality of a set of tabular data.11n) and (4. of Round-off an effect on the accuracy of the higher-order differences. Theremaybe a singularity in f(x) or its derivatives in the range of the table. The maximum error in the differences is given by [ Maximum round-off error in Anf = n-1 ] -1-2 (4.77) For the results presented in Table 4. (4.8. = constant P~n+l)(x) (4. consider a difference table showing only the round-off error in the last significant digit.. If the differences are not smoothand decreasing.The fourth differences are not monotonic.12)] P(nn)(x) = n!a. Errors x f +1/2 -1/2 +1/2 -1/2 +1/2 -1/2 Difference Table of Round-off Af -1 1 -1 1 -1 Aef 2 -2 2 -2 -4 4 -4 8 -8 A3f A4f . it is shown that [see Eqs.~(x) = a. The increment Ax maybe too large. To illustrate this has effect. the Asf values are completely maskedby the accumulatedround-off error.8.. The worst possible round-off situation occurs whenevery other numberis rounded off by one-half in opposingdirections. as illustrated in Table4. The first and seconddifferences are quite smooth. several possible explanations exist: 1. ASf oscillates between -10 and -t-8.7. Polynomialfitting can be accomplishedusing the values in a difference table.2.The third differences. Tabular data have a finite number digits.are not very smooth.7. and the fitth differences are extremely ragged.77).x ~ + (Lower DegreeTerms)= an Xn -t- (LDTs) (4. Consequently.

.2).a.6. In fact.(n -. s = (x . it must be one form of the uniquepolynomial that passes throughthis set of data.Polynomial Approximation Interpolation and Let’s evaluate the first forwarddifference of Pn(x): A[P..h"] .2) A~f°~ +.(x)] a. last term the in Eq. x = x0._l(X) Evaluating the second forward difference of P.. n(n .Xo)/h. Let s = 1.x0 and x = xo + sh (4. and P. (4.88) is a polynomial degree n and passes exactly through if of n ÷ 1 data points. then A"(x) = constant.. (4.")(x) A" P.f(x)].1)(s3! . Consequently.(x) gives A2[P~(x)] A[(a. and Pn(Xl) ~-f0 + Af0 =J~ + (J] -f0) =f~.(x). is linear in x. + a.1)(s -.88) is an nth-degree polynomial.89) Equation(4. [x. Pn(x) is the desired unique nth-degree polynomial.(x)I = A(a. Eq. f =f0..2.1)h2xn-2 + (LDTs) In a similar mannerit can be shownthat 211 (4. Therefore.1)1 A"J~ n! wheres is the interpolating variable x .(x)] = [a. x" + (LDTs) Expanding(x + h)" in a binomial expansion gives A[P.86) (4. nhx~-~ + (LDTs) = P.(x + h)" .88) is order n. or the divided difference polynomial [see Eq.81) (4.87) n.88) does not look anything like the direct fit polynomial[see Eq. f(x ) canbe f ( approximatedby Pn(x). However. Then x = x0 + h = x~ .nx"-l h +. and An+IP~(x) = 0 A~P~(x) = a~n!h~ = constant 4..(x)/h us.(x)] a. P(.a. -.x~) + A(LDTs) A[P.46)]..65)].x n + a. (4.88) (4.84) (4. Consequently. Th Notethe similarity between P(. (4. .x0 x . nh )x "-1] + A( DTs) L which yields A2[p.(x) =f0 +sAf0+ ~ A2f0x(x-1] q s(s . +s(s -. if f (x) P. and Eq.82) (4.f =J]. Let s = 0.x n + (LDTs) which yields A[P. (4. In a similar manner.(x)] a. Theinterpolating variable.85) (4.88) is called the Newtonforward-difference polynomial.(x). Lagrange polynomial [see Eq. it can be shownthat P. The NewtonForward-Difference Polynomial Given n ÷ 1 data points. one form of the unique nth-degree polynomial that passes through the n ÷ 1 points is given by P."l(x) and A"P. Equation (4. if A"f x) ~ c onstant.83) (4.34)].(x) =f(x) for the n + 1 discrete points.(xo) =f0. (4.

whereall of the work must be repeated each time the degree of the polynomialis changed.290756 = 0.1)(0. was selected so that the point of interpolation.40.1 h Equation (4.91) A major advantage of the Newton forward-difference polynomial. in addition to its simplicity.4 .44 = 0.x0 3. In Table 4. 6 Evaluating Eq.94) The advantage of higher-degree interpolation is obvious.000040) 4-.000002 = 0.4.. The work already performed for the lower-degree polynomialdoes not have to be repeated.. Newtonforward-difference polynomial. This feature makesit simple to determine when the desired accuracy has been obtained.~ AZf(3.4) s(s .44) = 0. the Newton forward-difference polynomialis (4.4) 4-s Af(3.4 and the values of the differences from Table4..44) the Newton forward-difference polynomial. Whenthe next term in the polynomialis less than someprespecified value.4 (.0.44) 1/3.. that is.4)(-0. h = 0.44) = 0.7 into Eq.008404) 4.0. (4. is that each higher-degree polynomial is obtained from the previous lowerdegree polynomial simply by adding the next term. Choosex0 = 3.92) +.88) gives s P(3.000058 = 0.290700 ---.90) I (~)=s(s-1)(s-2)’"(s-[i-1])i! In terms of binomial coefficients.2941184.1) 2 d (0..4)j 3! (4.4) 4.94) term by term yields the following results and errors: P(3.93) gives P(3. x .44. This feature is in sharp contrast to the direct fit polynomial ahd the Lagrangepolynomial.0.290698 .1. x = 3.44) =/(3. The exact solution is f(3. If x0 is chosenso that x does not fall within the range of fit.4 0. From the six-place difference table for f(x)= l/x. (4.4) (0.(0.44) = 0.44 .8.212 Chapter4 The Newtonforward-difference polynomial can be expressed in a more compact form by introducing the definition of the binomialcoefficient. (4.4)(0. falls within the range of data used to determinethe polynomial. In this example.1)(s 2)A3C (3.. calculate P(3.000000 (4. .4 . the desired accuracy has been obtained. interpolation occurs.7. x0 = 3.93) Substituting s = 0. Thus.(0.7.3.. (4. Example4. Table 4.40 .290698 linear interpolation quadratic interpolation cubic interpolation Error(3. Then. the base point.

Sucha polynomialis developedin this section.97) The error term. (x -Xn) f(n+l)(~) (4.98).95) FromEq..2.3 The Newton Baekaeard-Differenee PolynomMI The Newton forward-difference polynomial.000011 = 0. Eq.290709 = 0. (4. . and the Newton forward-difference polynomialcannot be used.89). (4. an approach that uses the upward-slopingbackwarddifferences illustrated in Figure 4. However.000926 = 0.Polynomial Approximation Interpolation and 213 extrapolation occurs. (4. (4.95) gives 1 Error(x) ~s(s .4.n which can be written as Error(x) = n + hn+~f(n+~)(~) FromEq..96a) (4.-.2).n)h Substituting Eq. (x .44) = 0.Xo s = (x . the bottomof a set of tabular data.7b is required.96) into Eq.99) by the replacement zx"+ -+ A (4.xo) = o + sh) .xl= sh .( n .96b) (4. 4.91).2.7a exist.99) (4. For example. for which s = 2.9)]: 1 Error(x) = ~ (x -Xo)(X x1) . (4. for Pn(x). the term after the nth-term is An+if° (Sn+l) ’ (4.000000 Theincrease in error is significant for linear and quadratic extrapolation.Xo) = (s1) (x .6.Xo) = (s.xn= sh . the required forward at differences do not exist. can be obtained from Eq.xn) = o + sh) .let x0 = 3.( x~.289772 = 0.Eq. wherethe downward-sloping forwarddifferences illustrated in Figure 4. (4. the cubic yields an interpolating polynomial. (4.xl) = o + sh) .290698 linear extrapolation quadratic extrapolation cubic interpolation Error = -0. and the results are less accurate. (s .98) )h n+lf(n+l)(~) (4. can be applied at the top or in the middleof a set of tabular data.88).1)(s . The error term for the Newton forward-difference polynomialcan be obtained from the general error term [see Eq. (4. The following results and errors are obtained: P(3.100) This procedurecan be used to obtain the error term for all polynomialsbased on a set of discrete data points.. In that case. For x0 = 3.96n) (4.

f=fo.104) Example4. Fromthe six-place difference table for f(x)= 1/x. X=Xo. Then X=xo-h=X_l.. h 0.(fo-f-~) =f-1 In a similar manner.101) is order n.9. In Table 4. Equation (4.101) gives P(3.44 .-..5) s(s + 1)(s 2) )Jv3q3.44) by the Newton backward-difference polynomial.106) . P.5) + ~ V2. s =(x . and Pn(x_l) =)Co.5) + s Vf(3. s x . [x. The exact solution isf(3. Let s=-l.101) is an nth-degree polynomial.7.Xo)/h.50..5 3! ÷. (4.44) =f(3. calculate P(3.7. Let s = 0. Then. polynomial is (4. and Eq.f=f_ ~.(s + [iIn terms of the nonstandard binomial coefficients..214 Chapter4 Givenn + 1 data points.102) Theinterpolating variable. and Pn(x0)=f0.101) wheres is the interpolating variable sx .x Ax . (4.50 . last term the in Eq.105) Equation (4.j~ + " " + s(s + 1).1 0. (4. (4...0 h and x = x o + sh (4.101) is called the Newtonbackward-difference polynomial. one form of the unique nth-degreepolynomialthat passes through the n + 1 points is given by Pn(x) = fo + S Vfo + vZ q S(S + 1)(S 3! 2)V3f + . "] =s(s + l)(s + 2). Consequently.103) the Newtonbackward-difference s. Choosex0 = 3.f(3.f(x)].44) = 0.6 (4.[s+(n- 1)]V"f0 n! (4. is linear in x.(x) is the desired unique nthdegree polynomial. Newtonbackward-difference polynomial. Table 4. h = 0.x0 x . Thus. Therefore.1.Vfo =fo. it can be shown that Pn(x) =f(x) for the n + 1 discrete points.. The Newtonbackward-difference polynomial can be expressed in a more compact form using a nonstandarddefinition of the binomial coefficient..290698 .3.x0 3...

The error term for the Newtonbackward-difference polynomial can be obtained from the general error term. The Newton backward-difference polynomial uses backward differences.104) by the following replacement in (n + 1)st term: V’~+’fo-~ h"+lf(n+l)(¢) 4.4.106) gives P(3. of a set of (4..111) . ’’ 6 linear interpolation quadratic interpolation cubic i.107) Evaluating Eq..000003 = 0.6)(-0. (s + n.290695 = 0.000000 The advantages of higher-degree interpolation are obvious.110) can be obtained from Eq.xn) = (Xo + sh) n = sh + ( o . Eq. (4.008404) (-0.290698 Error = 0.6.(x 0 .7b.6. Twoof the more important ones are presentedin this section.1)n)hn+lf(n+l)(~) which can be written as Error(x) = n+ (4. Other Difference Polynomials The Newtonforward-difference polynomial and the Newton-backward-difference polynomialpresented in Sections 4.6 + 2 + 1)(-0.290756 = 0. which follow an upward-slopingpath in the backward-difference table illustrated in Figure 4.44) = 0. (4. (4.108) into Eq.110) Equation (4.6 and the values of the differences from Table 4.3. (4.9).000058 = -0.9) yields 1 Error(x) ~s(s + 1)(s + 2).000508) (4.XI) = 0 -J r.108a) (4.6. respectively. (4.7 into Eq.nterpolation (0.6)(-0.44) = 0. The Newton forward-difference and backward-difference polynomials are essential for fitting an approximating polynomialat the beginningand end.109) (4.Polynomial Approximation Interpolation and 215 Substituting s = -0.285714 + (-0.6 + 2)(-0. by makingthe following ~ubstitutions: (x . respectively.x1 ) = (s 1) (4.6)(-0.7a.108n (x .000050) +.6 ~ (-0.107) term by term yields the following results and errors: P(3.Xo) = (X -. Numerous other difference polynomials based on other paths through a difference table can be constructed. are examplesof approximating polynomials based on differences.108b) (4.sh ) 1 = s h -].2 and 4. which follow a downward-slopingpath in the forward-difference table illustrated in Figure 4. The Newton forward-difference polynomial uses forward differences.xn) = (s+ n)h Substituting Eq. (4.

The odd-degree polynomials use data from one additional point. Oddcentered differences are based on the averages of the centered differences at the half points x-1/2 and xl/2. The base point for the Bessel centered-difference polynomial is point x~/2.7c. but even centered differences do not. However. oddcentereddifferences with respect to x~/2 exist. The Stifling centered-difference polynomial is (4. Evencentered differences are based on the averages of the centered differences at points x0 and x~. from the uniqueness theorem for polynomials (see Section 4. but odd centered differences do not.7c. the Newton of polynomials are recommended. is the polynomialfollows a horizontal path through the difference table whichis centered on point x~/2. The Bessel centered-difference polynomial is: (4.other forms of difference polynomials can be developed in the middleof a set of tabular data by using the centered-difference table presented in Figure 4. Thus. As illustrated in Figure 4. Centered-difference polynomials are useful in the middle of a set of tabular data where the centered differences exist to the full extent of the table. That is.216 Chapter4 tabular data. Asillustrated in Figure4.113) It can be shown direct substitution that the Bessel centered-difference polynomialsof by odd degree are order n and pass exactly through the data points used to construct the centered differences appearing in the polynomial. The are Newtonpolynomials are somewhatsimpler to evaluate.112) It can be shown direct substitution that the Stifling centered-difference polynomialsof by even degree are order n and pass exactly through the data points used to construct the differences appearing in the polynomial. . the polynomial of degree n that passes through a specific set of n + 1 points is unique.7c. the is polynomial follows a horizontal path through the difference table which is centered on point x0. The even-degree polynomials use data from one additional point. This polynomial based on the values of the centered differences with respect to xl/2. the Newton polynomialsand the centered-difference polynomials all equivalent whenfit to the samedata points. However.2). This polynomial basedon the values of the centereddifferences with respect to x0. Consequently. when the exact number data points to be fit by a polynomialis prespecified. The base point for the Stifling centered-difference polynomial is point x0. even centered differences with respect to x0 exist. That is.

inverse interpolation is evaluation of the inverse function x(f).116) .0. Alternatively.30 .0.333333 .303030)(0. let’s evaluate the quadratic Newton forward-differencepolynomialfor f =f(x).0.10.5) (0.Polynomial Approximation Interpolation and 4.0. forf = 0.008912) + ~ (0. Inverse interpolation is the processof determining the value of the independentvariable x correspondingto a particular value of the dependent variablef In other words.0.0. Considerthe following set of tabular data.5 f(x) 0.294118 0. whichcorresponds to the functionf(x) 1/x: x 3. Solving a direct polynomialf(x) iteratively for x(f) Fitting a polynomial to the inverse function x(f) appears to be the obvious approach.303030 .0.294118)(0.303030 . In such cases..7 INVERSE INTERPOLATION 217 Interpolation is the process of determining the value of the dependent variable f correspondingto a particular value of the independentvariable x whenthe function f(x) is describedby a set of tabular data.30 . some problems may occur.303030 0.115) Substituting the values from the table into Eq. Example4. Inverse interpolation can be accomplishedby: 1.000508) (4. Inverse interpolation. Fitting a polynomialto the inverse function x(f) 2.0. However.30 .3 3. f(x) = fo + Af o + ~AZJ~ (4.333333---. by Newton’s method.0.30.0.3. Thus. a direct fit polynomialf(x) maybe preferred. The error is Error= 3.4 3.30 .30 .008404 2f(x)) 0.0.294118) (3.285714 Af(x) -0.285714 .294118)(0.303030 + s(-0.115) yields 0.. x = x(f).285714) x = (0.333301.294118) (4.4) (0.285714) (3.294118 .000032.285714) (0.000508 Let’s find the value of x for which f(x)= 0.303030)(0.303030)(0.294118 .0.285714 .-. The inverse function x(f) may not resemble a polynomial.3) + + (0.30.008912 -0. (0. The exact solution is x= 1/ f(x) = 1/0.285714) (3. Thus. Let’s evaluate the quadratic Lagrangepolynomial.114) which yields x = 3.30 = 0.Thevaluesoff most certainly are not equally spaced.3 = 3.0.30 . example. even though it must be solved iteratively for x(f).303030)(0.333301.. (4.

The second root is obviously extraneous. or integration is then performed each on of these univariate approximating functions to yield values of the desired operation at the specified value of the other independent variable.that is. for example.1.9.006060 = 0 Chapter4 (4.117) Solving Eq. successive univariate approximationfirst fits a set of univariate approximatingfunctions. X).10. fit the univariate polynomials zi(X) -:. zi(x* ) = zi(Yi. (4.000032. A univariate approximatingfunction is then fit to those results as a function of the first independent variable.018332s + 0.0.333333 = 0.3.333654)(0.000508s2 . Such functions in general are called multivariate functions.Z( Yi. the dependent variable is a function of a single independentvariable: y =f(x).8. for example. polynomials. at each value of Yi.333365 . the Simplystated. Successive Univariate Polynomial Approximation Consider bivariate function z =f(x. For example. polynomials. fit the univariate polynomial z = z(y) to the values Table 4. or univariate.333654 and 35. differentiation. differentiation.333365 The error is Error = 3. (4. function. For example.117) by the quadratic formula yields two solutions. y) is a two-variable. Interpolation.y).3 + (0. Direct multivariate polynomial approximation Approximatefit procedures for multivariate polynomial approximation are discussed in Section 4. Bivariate Tabular Data X1 Yl Y2 Y3 Y4 Zll ~21 z31 z41 X2 Z12 222 Z32 z42 x 3 ZI3 Z23 z33 Z43 x 4 ZI4 Z24 Z34 z44 . 4.8 MULTIVARIATE APPROXIMATION All of the approximating polynomialsdiscussed so far are single-variable.9. Solving for x from the first root yields x = xo + sh = 3. Many problems arise in engineering and science wherethe dependent variable a function of twoor moreindependentvariables. When multivariate functions are given by tabular data. For example.116) gives 0.118) 4. Two exact fit proceduresfor multivariate approximationare presented in this section: 1 Successive univariate polynomial approximation 2.4.218 Simplifying Eq. or bivariate.752960. s = 0.1) = 3.z =f(x. (4. x*). multivariate approximation required for is interpolation. at each value of one of the independent variables. A set of tabular data is illustrated in Table 4. and integration.

and c by Gausselimination yields 2 h(1150. 1100) = 1555.0 1497.7 1375. Successive univariate polynomialapproximationis generally used.120a) (4. h(P. similar manner.F ~ psia 1150 1200 1250 800 1380.14Btu/tbm.50 ÷ 0.6 (4.120c) Evaluating Eq. T) = 823. 219 Thefinal processof interpolation.0 Btu/lbm).2 1000 1500. T) = 846.72275T .123c) (4.04 a + 1200b + (1200)2c = 1557.119) for the values of h at T = 1100 E At (4. differentiation. Consider the following values of enthalpy.123b) (4.120b) (4.h(1250.2 1499. 0.65 Table 4.0. Use successive univariate quadratic polynomial interpolation to calculate the enthalpy h at P = 1225 psia and T = l104F (the value from the steam tables is 1556.0. Table 4.00008000T and h(1250.75725T.4 a + 1000b + (1000)2c = 1500.5 Solving for a.122) . Successive univariate quadratic interpolation. 2.14 a + 1250b + (1250)2c = 1555. h(1200. First fit the quadratic polynomial h(P a ~. at P = 1200psia.Polynomial Approximation Interpolation and Zi(X*) = zi( Yi.00006875T (4.04Btu/lbm.10.a + bT + eT i.0.75350T. b. The approximating functions employed for successive univariate approximation can be of any functional form.20 + 0.2 a + 1200b + (1200)2c = 1614.1 1200 1614.10.11. Enthalpy of Steam T.60 2. 1100) = 1557.4 1377. and interpolate P = ll50psia: a + 800b + (800)2c = 1380. . Example 4.121) at T= 1100 gives h(l150.65 Btu/lbm. a + 1150b + (1150)2c = 1558. T) Btu/lbm. 1100)= 1558. or integration is then performed on that univariate approximating function.5 1613. 1100) 2 a + bP + cP = at T = ll00E and interpolate for the value ofh at P = 1225 psia. Next fit the quadratic polynomial h(P.121) (4. (4.123a) (4.6 1612. T) = 825. T) at each pressure level Pi. At P= 1250psia. from a table of properties of steam.00008375T and h(1200. X*).

11 by direct multivariate linear interpolation. A linear bivariate polynomial x and y is obtained by including the first four terms in Eq.127c) (4. b. (4. Direct Multivariate Polynomial Approximation Considerthe bivariate function.5.125).12.124) at P = 1225 yields h(1225.10.4650T . Direct multivariate linear interpolation. The number of terms in the approximating polynomial increases rapidly as the degree of approximationincreases. z =f(x. multivariate high-degree approximationmust be used with caution.1556.00011750P Chapter4 (4.1280P + 0.127d) Solving for .y) 2 = a +bx+cy+dxy+ex +fy2 +gxZy+hxy2 + ix 3 +jy3 +. The tabular data can be fit by a multivariate polynomialof the form z =f(x.126) Substituting the four data points that bracket P= 1225psia and T= ll00F into Eq. 1225) = 1555. and the set of tabular data presented in Table 4.0.0.. c.128) Substituting T = 1100 and P = 1225 into Eq.a. . and c by Gauss elimination gives 2 h(P. 1100) = 1556.0900 x 10-3Tp (4.1 1613. if required. The advantage of this approach is that Eq.220 Solving for a.127b) (4. P).0= -0.258125P .125) The numberof data points must equal the numberof coefficients in the polynomial.1556.5 Btu/lbm.60 + 0.5 . A quadratic bivariate polynomialin x and y is obtained by including the first eight terms in Eq. The resulting polynomialis exactly equivalent to successive univariate linear polynomial approximation if the same four data points are used.127a) (4. (4.126) gives 1499. without reevaluating the polynomial coefficients. and d by Gausselimination yields h = 1079.59 + 0. 1100) = 1416. (4. Example 4.6 = = = = a a a a ÷ + + + (1000)b (1000)b (1200)b (1200)b + (1200)c + (1250)c + (1200)c + (1250)c ÷ + + + (1000)(1200)d (1000)(1250)d (1200)(1200)d (1200)(1250)d (4.128) can be evaluated for other values (T. Let’s solve the interpolation problem presented in Example4.0 = 0. The form of the approximatingpolynomialis h =a+bT+cP+dTP (4. The error in this result is Error=1555. Consequently.6 1612.125). y). (4. (4. in (4. b.0 1497.5 Btu/lbm..5 Btu/lbm.2.5Btu/lbm.8.124) Evaluating Eq. The error this result is Error-= 1556. 4. (4.128) gives h(1100. This leads to ill-conditioned systems of linear equations for determining the coefficients.

3 . This type of polynomial is called a spline function. Thereare n + 1 total points. Figure 4.The slopes of the quadratic splines can be forced to be continuousat each data point. first derivatives) and curvature (i.xi (i = 1. xi (i = 1.e. . A smoothcurve is then traced along the rod.. used by draftsmen to draw smoothcurves through a series of discrete points. Problemscan arise whena single high-degree polynomial is fit to a large number points.. Linear splines yield first-order approximating polynomials. A cubic spline yields a third-degree polynomial connectingeach pair of data points.. The namespline comes from the thin flexible rod. High-degreepolynomialswouldobviously pass through of all the data points themselves.. Linear splines are independent of each other from interval to interval. Linear splines are simply straight line segments connecting each pair of data points.3 to 4. The slopes and curvatures of the cubic splines can be forced to be continuousat each data point...1 interior grid points..fi(x) (i = 1.. However. these requirements necessaryto obtain the additional conditions required are to fit a cubic polynomialto two data points.cubic splines have proven to be a good compromisebetween accuracy and complexity.9 CUBIC SPLINES 221 Several procedures for fitting approximating polynomials to a set of tabular data are presented in Sections 4. xi (i = 2.. Acubic spline is to be fit to each interval.2 . Dueto the flexure properties of a flexible rod (typically of rectangular cross section).. Splines can be of any degree..Polynomial Approximation Interpolation and 4.. 2 . Consequently. An alternate approach is to fit a lower-degree polynomial to connect each pair of data points and to require the set of lower-degree polynomials to be consistent with each other in somesense. 2 . 3 . but they can oscillate wildly betweendata points due to round-off errors and overshoot.2 .. n+l) n intervals. Quadratic splines yield second-orderapproximatingpolynomials.. but the curvatures (i.. Thus. yielding a spline c~trvc...... n) 1 n cubic splines..xi (i = 2. n intervals..8 illustrates the discrete x space and defines the indexing convection. n) (4..xi < x < xi+ (i = 1. the secondderivatives) are still discontinuous. secondderivatives) are discontinuousat every data point... If(x) =ai+bix+cixZ+di x3 (i= 1.The slopes (i. the slope and curvatureof the rod are continuous at each point..6. If the lower-degreepolynomials independentof each other. n).e. and n . 2 . The spline is placed over the points and either weighted or pinned at each point..129) interval1 1 2 interval i-1 interval i i-1 i i+1 interval n n n+l n+l grid points.the concept of cubic splines is developedin this section.8 Cubic splines. In such cases. In fact.e. called a spline. n) n-1 interior grid points. n +1). or simplya spline.. Higher-degreesplines can be defined in a similar manner.. n) Figure 4.. lower-degree polynomials can be fit to subsets of the data points. a are piecewise approximation is obtained..

mustbe equal at all of the n . must be the same in the two splines on either side of xi at all of the n .130) twice yields expressions for f’(x) and f/(x).133) . 3..Xi __Xi) (4. is givenby ~ fitt(x "Ji q--Ji+l ) -. That is.1) conditions.1 interior points.e. This set equations can be solved by Gauss elimination.129). xi <_x <_xi+ (i = 1. xl) and last (i.e. fl(Xl) ----fl andfn(X. 2 ..Xi Evaluating Eq.+ C x + D Xi -. n).132) f(x) x3/6 . The approach presented by Chapraand Canale is followed below.. 4n boundaryconditions.130) Integrating Eq..1) conditions.Xi (4. This constraint yields (n . f’(x) -.1.e..x~ /2 -X i-Xi+1 XXi+l fitt ~ x2/2-- XX~ fi~_ 1 "~ C Xi+l --X i (4. When of the conditions given aboveare assembled.x Zxi/2. 5.1 interior points. (4. f"(x). 2 .fi~-.+l)=fn+l" This constraint yields 2 conditions.xZxi+~/Z f"(x) -~ x3/6 . fl"(X~) =f~" andf~"(xn+~) fn’~-l’ This constraint yields = conditions. This constraint yields 2(n . This constraint yields (n . Eq.4n linear algebraic equations all are obtained for the 4n spline coefficients ai. Xn+l)points. (4. 4. (1989) present three such approaches. Chapra and Canale (1998).. f(xi) =f (i = 2.1 interior points.132) at xi and xi+~ and combiningthe results to eliminate the constants of integration C and D gives fi(x) -. The secondderivative of the twosplines on either side of point xi mustbe equal at all of the n . Thefirst and last spline mustpass throughthe first (i. is a linear function ofx.simpler approaches exist for determining cubic splines.222 Chapter4 defines the cubic spline in interval i..the followingconstraints are applied. (4.e.-__ Xi) 3 (X f" .. ci.Xi+ 1 Xi+ 1 -. and d/ (i--.e. That is. Gerald and Wheatley(1999).. n). Press et al. Xn+l) points. 3 . However.. must be available. In the direct approach.6(Xi+l -.1) conditions. bi. there are 4n coefficients to be determined..f"(x)] mustbe specified at the first (i. n). n). The first-order Lagrangepolynomial the for secondderivative f"(x) in interval i..](X 6 Lxi+ 1 -....131) (4. Thefirst derivative of the two splines on either side of point x. x~) and last (i. it is obviousthat the secondderivative within each interval. f~-~ -.Xi) (Xi+1 -...... 2. The function values. Thus. Since each cubic spline has four coefficients and there are n cubic splines. or constraints. Fromthe cubic equation.6(Xi+l-. xi ~ x ~ xi+1 (i = 1.X)35.. 1. Thecurvature[i.Xi) "q-[~gi+l-fi--xi fitt(Xi+~--Xi)](Xi+l--X) "~-F" ~f/+__[1 f/~-l(Xiq-1--Xi).X--Xi+ 1 ¢q! ± X--X i ¢’t! Xi -. 2 .Xi+ 1 Xi+ 1 -.

exp ression for f’(x can be obtained by differentiating Eq.00 f(x) 0.. Example4. letting f{’ =fn~_~= 0. Let’s illustrate the cubic spline by applyingit to the data in the followingtable. (4. Letting f[’ = 0 and/orf.133).1 coupled equations for the n + 1 secondderivatives.tt 2(Xi+l -- Xi-1)f/ "~.x2)f3"= J) ...0. 4. The first approach. Several approachesare available for specifying these two values: 1. (4.. n). the function f (x) = i 1 2 3 4 x -0. Letfl" =f2" andfn’~-l =fn"’ Extrapolatef~" andf~qfrom interior values off/". Let’s the apply the natural spline boundary condition at i = 1 and i = 4.j~ 3 X -. Thefirst step is to determine values off" at the four grid points.Xi.0 There are n + 1 = 4 grid points. whichis not developedin this book. Equating those expressions yields Xi -. ApplyingEq.000000 1. n = 3 intervals. Thus.134) Xi+ 1 -.X 3 2 6J~ -)q X2 -.1 = 2 interior grid points at which f" must be evaluated. 3 .f~" andf~’. Applying that expressionto intervals i and i and evaluating those results at x = x[’ gives expressions for f’_l(Xi) and f’(xi).(Xi+l -..xi)f/+l tt tt : 6f/+l 6f/ (4.Xi-1. f~" =f4"= 0. which for is x -x3.. is the most commonly employed approach. An expression for the second derivatives at the interior grid points.268400 1.. Twomore values off" are required to close the system of equations.1 ApplyingEq. and n . f/"(i = 2.2(x -.25 1.131) to develop a relationship betweenfl and/orf~’+~ andf~’. The two additional conditions are obtained by specifying the values off{’ andf~’~_|. (4.’~q if they are known.Ji-1 q~ ¢. etc.xl)f2" + (x3 -.133) is the desired cubic spline for incrementi expressed in terms of the two unknown second derivatives f" and f~-l.135) i = 3: (X 3 --x2)fff +2(X4 --x2)fffq- (X 4 --X3)2[~ t : 6f4 -f3 x4 . f/’(i = 1. 3.’~q = 0 specifies a natural spline. i = 1 to 4.13.2 .50 0.00 0.134) at grid points 2 and 3 gives i = 2: (x2 -. 2.718282 f"(x) 0.X 3 2 . Cubic splines. This requires the evaluation of the constant of integration C. n = 3 cubic splines to be fit.1 interior points gives n .134) at the n ..0 0. ~ Specify f{ and/orf~+l and use Eq.Polynomial Approximation Interpolation and 223 Equation(4. Specify f(’ andf.731531 1. (4.Xl)fff ~.Xi Xi -. can be obtained by setting f’_~(xi)=f’(xi).136) X -.x 3 6f3 -J~ (4.X 1 (4. n + 1).

137) gives 1.] (x _ 0.0) -~ 6(0.o (4.842544 Solving Eq.x)3 -~ 6(x f~l. Substituting values into these three equations gives 2.133) becomes fill 3 Jq(x) . (4.5)] 3 + \ 0.75)(0. and (x3.139) The secondstep is to substitute the numericalvalues into Eq.x~) (x .725~52(0.0 .0)f3" + (0.5)(0.0) +[0{~5 2"4342~(0"25~] (0"25 .137a) (4. (xl.25)£’ . Thus.x4).~.o)(o.434240 (_0.(-0.718282(0.x2).138) by Gauss elimination yields 2. -x)3+ -1.x) 3 + (0.25)] (x . (x2. 75). for interval 1.219972 0.x) (4.434240 0 25 3 J~(x)~ ( .0) + 1"268400 6(0.0) + 2(0.6(x2 _ xl) (x2 .75 Evaluating Eqs.141a) +[~50 2"4342~-0(0"5)] Ix .25f~" + 2.141b) + L[1"2684°°~ .725552 0.5 o.(-0.268400~ b-~ ) Chapter4 (4.449882 \ 0.x~) 2 (4.136) gives (0.0) -1.725552 6(0.25) (x-0.138b) (4.0).434240 and f3" = -1.140) Similar equations are obtained for intervals 2 and 3. (4.0) 0. x2).25) L 0.138a) (4.75)k 0. (4.133) for the three intervals.75)f~" + (0.25f3" = 3.135) and (4.1. Eq.25)f~ -t. that is.25 (0. (x1. (4.75 .731531 j~(x) = (0.224 Substituting values from the table into Eqs.725552 (1.2(1.268400 \ 0. to determine the cubic splines for the three intervals.5~[x .5)] 2.1.0.268469) ~-~ J 0.137b) (4.0f3" = -2.50f~" + 0.75 + [. (4.x3).0 .

Polynomial Approximation Interpolation and Evaluating Eq.g.1 discusses the need for approximating functions for sets of discrete data (e. 4.25 .9 Approximate fit. For small sets of smoothdata.2 91043(x . Y(xi)] = (xi. and integration).. Yi).9.3 to 4.142b) f3(x) = -0. for interpolation.e. (4. and the approximatepolynomialy(x) chosento represent the set of discrete points.811413(x + 0.x)+ 2. for Approximate polynomialfits are the subject of this section.2 5) (4.3 + 1. approximatefits are desirable. The deviations (i. 150368x + 3.9 are desirable. 906894(1. For example.141) yields f~(x) = 0. Several definitions of best possible manner exist.145498x (4. Consider the set of discrete points.5) 3 - 225 3 1.5) (4. exact fits such as presented in Sections 4. large sets of data and sets of rough data. Equation(4. The discrete points do not fall on the approximating polynomial.463062x + 1. However. the desirable properties of approximating functions.25 .0 .if the values of the independent variable x i y(x)’ Yi(xi) Yi Figure 4.0 . 898573(0.3 . .1. as illustrated in Figure 4. Someambiguity is possible in the definition of the deviation.142) can be verified by substituting the values in the table into the equations. distances) of the points from the approximating function must be minimizedin some manner..622827(0.1 Introduction An approximatefit yields a polynomialthat passes through the set of points in the best possible mannerwithout being required to pass exactly through any of the points.383456(1. differentiation.142a) 3 J~(x) = 1.10. [xi.x)+ 5. and the benefits of using polynomials approximating for functions.797147(x + 0.10 LEAST SQUARES APPROXIMATION Section 4. 4.0 .

(c) Mi imax. but any line passing between these two points yields the samevalue for the sum of the absolute values of the deviations. [ ei = Yi-Yi l (4. Y(X) 2 Y~(ei) (c) X . however. as specified by Eq. Y. In that case.9.10a illustrates the situation where the sumof the deviations at two points is minimized.. then the perpendicular distance between a point and the approximating function would be the deviation. (b) Minimize ~-~e leil.143) It is certainly possiblethat the values of Y. (a) Minimize i. both have uncertainties in their values. Figure 4. and the deviation ei is the vertical distance between and Yi =f(xi). then all the deviation is assignedto the dependent variable Y~. Figure 4. Any straight line that passes through the midpoint of the line segment connecting the two points yields the sum of the deviations equal to zero..10 squares. (d Le n ) . The usual approachin approximatefitting of tabular data is to assumethat the deviation is the vertical distance betweena point and the approximating function. If xi and Y. (4. The best straight line obviously passes midway between these two points. the deviation wouldbe measured the horizontal distance by illustrated in Figure 4.01/ (a) r ~e i x Y(X) (b) ~’ x y(x) Y(x) y(x) (ei)max~ ~. Minimizing the sumof the absolute values of the deviations wouldyield the unique line that passes exactly through the two points. The minimax criterion is illustrated in Figure 4. but the corresponding values ofxi are in error.10c. ~is illustrated in Figure 4.226 Chapter4 are consideredexact. (d) X .10b. Thus. Bestfit criteria. Figure y(x) Y(x) / . That procedure also has deficiencies.This procedure gives poor results whenone point is far removedfrom the other points. Severalbest fit criteria are illustrated in Figure4.. where two points having the same value of the independent variable have different values of the dependentvariable.143). wherethe maximum deviation is minimized.10 for a straight line approximation. are quite accurate.

bxi)(-xi) Dividing Eqs.144) The deviation ei at each value of xi is ei = ~ . and minimizethe sumof the squares of the deviations. and the l (4. Yi). Consider the constant pressure specific heat for air at low temperatures i3resented in Table 4. whereTis the temperature(K) and Cp is the specific heat (J/gm-K). OS Oa ~2(Y/ OS N a bxi)(-1) (4.Polynomial Approximation Interpolation and 227 4. (4..148a) (4. Least squares straight line approximation. [xi.i49) are called the normalequations of the least squares fit.14. Given N data points.146) The sumof the squares of the deviations defines the function S(a. Y(xi)] = (xi. 4.144) gives Yi = a + bxi (i = 1 . Least squaresstraight line is the approximations are an extremely useful and common approximate fit.11. N) (4.148) by 2 and rearranging yields (4.148b) -Ob = ~ 2(Yi .2 The Straight Line Approximation The simplest polynomial a linear polynomial.. fit the best straight as N line through the set of data.. Given data points. (xi. The least squares straight line fit is determined follows. The approximatingfunction is y=a+bx l At each value ofxi..149b) i=1 i=1 . N) (4. The exact values. Example 4.. Theycan be solved for a and b by Gauss elimination. choosethe functional formof the approximating function to be fit. The least squares method is defined as follows.10dillustrates the least squarescriteria.10... straight line. b) = Y~(ei) i=1 2= N Y~ (Yi i=1 -- a 2 bxi) - (4.yi (i = 1 . Eq.Yi). ei = (Y/. Thus.147) The function S(a. Yi).a ..145) (4. b): N S(a. (4. approximatevalues from the least squares straight line approximation.The least squares procedure yields a goodcompromise criterion for the best fit approximation. y = y(x). in whichthe sumof the squaresof the deviations is minimized.149a) N N i=1 N Equations (4. b) is a minimum when OS/Oa= OS/Ob= 0.

0045 1.Giventhe Ndata points.0153 1. K 3O0 400 5OO 6OO 700 8OO 900 1000 Cp. (4.11..19 0.0974 1.0507 1.800.933194 + 0.000b 5632.0296 1. Higher-Degree Polynomial Approximation The least squares procedure developed in Section 4. +anx" ] (4. Considerthe nth-degree polynomial: y = ao x2 +alx+a2 + . 2 = E i=1 i=1 i=1 Evaluating the summations substituting into Eqs.10.1410 0. 4.97 0.152) aE Ti+bE T.29 -0.1385 percent error are also presented in the table.0769 1.205298 x 10-3T ] (4. Specific Heatof Air at Low Temperatures T.i i=1 8 (4.24 -0.percent error.exac t Cp..app.11 presents the exact data and the least squares straight line approximation.153a) (4.and the . (xi. (4. Figure 4.1212 1. % -0.0743 1.0564 1.0984 1. Eq.154) (4.153b) Substituting the initial values of T into Eq.1180 1.150) 8a + b Z Ti = Z Cp.11.9948 1..5331 5200a ÷ 3.155) .61 ¯ 0. Yi).149) becomes 8 i=1 8 8 8 (4.152) gives and 8a + 5200b = 8..151) (4.The straight line is not a very goodapproximationof the data.10.0134 1. fit the best nth-degree polynomialthrough the set of data. (4.22 1.154) gives the results presented in Table 4.0358 1.09 -0.151) and (4. whichpresents the exact data.54 0.228 Table 4.2 can be applied to higher-degree polynomials. Determinea least squares straight line approximation the set of data: for Cp =a+bT For this problem. Chapter4 Error. the least squares straight line approximation.74 Solving for a and b by Gausselimination without scaling or pivoting yields Cp = 0.3..

alx i . (4... N ~n..157) by 2 and rearranging yields the normalequations: aoN +alZ xi +’"+an~ X~" = ~ Yi i=1 i=1 i=1 N N N (4. Figure 4.15 8b) Equation (4.1.. A problemarises for high-degreepolynomials.158a) N i=1 N i=1 N i=1 ao~ X]i + al Y~ x’~+l + .11 Least squares straight line approximation.0( 0 500 Temperature K T...alxi .ao ~an i= 1 .157a) ~v OS -..158) can be solved for 0 t o an by G auss elimination..ao . ao -.= ~ 2(Y/..05 1.. 2(Yi . which gives rise to illconditioned systems.= Y~..10 I ~ 1.156) The function S(a0.alxi . (4.. anxT)(-~) (4. an~//)(-1 ) = 0 ~a0 i=1 (4. an) is a minimumwhen OS N -.... al .Polynomial Approximation Interpolation and 229 E ~.The coefficients in Eq..157b) Dividing Eqs. (4. a1 . Normalizing each equation helps the situation. can vary over a range of several orders of magnitude.-q-anY~ N n = ~ x~i i i=1 Y (4...158).~ (ei) 2 = ~ (Yi -i=1 i=1 2 anX~ ) 1000 I S(a0.. Double precision ... The sum of the squares of the deviations is given by N N an) -=..

12.2782 1. Consider the constant pressure specific heat of air at high temperatures presented in Table 4. and c by Gauss elimination yields 2 Cp= 0.162) gives the results presented Table 4.15 -0. Table 4.2059 1. i (4.0339143 x 10-6T (4.q.c E V?: E TiCp.160b) (4. The quadratic polynomial is a reasonable approximation of the discrete data.29 0. approximatevalues from a least squares quadratic polynomialapproximation.12. Eq.2095 1.15. The exact values.approx" Error.5186 x 106 Solving for a.2955 1. Figure 4.i Evalu~ingthe summ~ions and substituting into Eq. whereT is the temperature(K) and Cp is the specific heat (J/gm-K).230 Chapter4 calculations are frequently required.2938 .1762 10 x 103a + 22. b. K 1000 150o 2000 2500 3000 Cp.02 0.162) Substituting the initial values of T into Eq. Valuesof n up to 5 or 6 generally yield goodresults.16qc) a Z Tg .2520 1.965460 + 0. % 0.161a) (4. SpecificHeatof Air at High Temperatures T.160) gives (4.5 × 106c = 6.5 x 106b + 55 x 109c = 12.12 presents the exact data and the least squares quadratic polynomial approximation.161b) (4.2522 1. Determinea least squares quadratic polynomialapproximation this set of data: for 2 Cp = a + bT + cT For this problem.0.5 x 106a + 55 x 109b + 142. and the percent error are also presented in the table. Example4. values ofn between 6 and 10 mayor maynot yield good results.161c) 5a + l0 × 103b + 22.1427 1.159) (4.125 x 1012c = 288.2815 1.26 -0.exac t Cp.-[.158) becomes 5a + b ~ Tg + c y~ Ti 2 = ~ Cp.b E ri2 .1410 1.12. and values ofn greater than approximately10 generally yield poor results.160a) (4.211197 x 10-3T -.5413 x 103 22. (4.(4. (4. Least squares quadratic polynomial approximation.13 1.

Figure 4.12. Least squares quadratic polynomial approximation.

Multivariate Polynomial Approximation

Many problems arise in engineering and science where the dependent variable is a function of two or more independent variables, for example, z = f(x, y) is a two-variable, or bivariate, function. Two exact fit procedures for multivariate approximation are presented in Section 4.8. Least squares multivariate approximation is considered in this section.

Given the N data points, (xi, Yi, Zi), fit the best linear bivariate polynomial through the set of data. Consider the linear polynomial:

    z = a + bx + cy    (4.163)

The sum of the squares of the deviations is given by

    S(a, b, c) = Σ (ei)^2 = Σ (Zi - a - b xi - c yi)^2    (4.164)

The function S(a, b, c) is a minimum when

    dS/da = Σ 2(Zi - a - b xi - c yi)(-1) = 0    (4.165a)
    dS/db = Σ 2(Zi - a - b xi - c yi)(-xi) = 0    (4.165b)
    dS/dc = Σ 2(Zi - a - b xi - c yi)(-yi) = 0    (4.165c)

Dividing Eqs. (4.165) by 2 and rearranging yields the normal equations:

    aN + b Σ xi + c Σ yi = Σ Zi    (4.166a)
    a Σ xi + b Σ xi^2 + c Σ xi yi = Σ xi Zi    (4.166b)
    a Σ yi + b Σ xi yi + c Σ yi^2 = Σ yi Zi    (4.166c)

Equation (4.166) can be solved for a, b, and c by Gauss elimination.

A linear fit to a set of bivariate data may be inadequate. Consider the quadratic bivariate polynomial:

    z = a + bx + cy + dx^2 + ey^2 + fxy    (4.167)

The sum of the squares of the deviations is given by

    S(a, ..., f) = Σ (Zi - a - b xi - c yi - d xi^2 - e yi^2 - f xi yi)^2    (4.168)

The function S(a, b, c, d, e, f) is a minimum when

    dS/da = Σ 2(Zi - a - b xi - c yi - d xi^2 - e yi^2 - f xi yi)(-1) = 0    (4.169a)
    ...
    dS/df = Σ 2(Zi - a - b xi - c yi - d xi^2 - e yi^2 - f xi yi)(-xi yi) = 0    (4.169f)

Dividing Eqs. (4.169) by 2 and rearranging yields the normal equations:

    aN + b Σ xi + c Σ yi + d Σ xi^2 + e Σ yi^2 + f Σ xi yi = Σ Zi    (4.170a)
    a Σ xi + b Σ xi^2 + c Σ xi yi + d Σ xi^3 + e Σ xi yi^2 + f Σ xi^2 yi = Σ xi Zi    (4.170b)
    a Σ yi + b Σ xi yi + c Σ yi^2 + d Σ xi^2 yi + e Σ yi^3 + f Σ xi yi^2 = Σ yi Zi    (4.170c)
    a Σ xi^2 + b Σ xi^3 + c Σ xi^2 yi + d Σ xi^4 + e Σ xi^2 yi^2 + f Σ xi^3 yi = Σ xi^2 Zi    (4.170d)
    a Σ yi^2 + b Σ xi yi^2 + c Σ yi^3 + d Σ xi^2 yi^2 + e Σ yi^4 + f Σ xi yi^3 = Σ yi^2 Zi    (4.170e)
    a Σ xi yi + b Σ xi^2 yi + c Σ xi yi^2 + d Σ xi^3 yi + e Σ xi yi^3 + f Σ xi^2 yi^2 = Σ xi yi Zi    (4.170f)

Equation (4.170) can be solved for a to f by Gauss elimination.
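A minimal sketch of how the 6 x 6 system (4.170) can be assembled is shown below (an added illustration, not from the text; the routine name is an assumption). The basis-function view makes the pattern explicit: A(j,k) = Σ phi_j(xi, yi) phi_k(xi, yi) and b(j) = Σ phi_j(xi, yi) Zi.

    subroutine normal_biquad(npts, x, y, z, a, b)
      implicit none
      integer, intent(in) :: npts
      real(8), intent(in) :: x(npts), y(npts), z(npts)
      real(8), intent(out) :: a(6,6), b(6)
      real(8) :: phi(npts,6)
      integer :: j, k
      phi(:,1) = 1d0;   phi(:,2) = x;     phi(:,3) = y
      phi(:,4) = x*x;   phi(:,5) = y*y;   phi(:,6) = x*y
      do j = 1, 6
         b(j) = sum(phi(:,j)*z)
         do k = 1, 6
            a(j,k) = sum(phi(:,j)*phi(:,k))
         end do
      end do
    end subroutine normal_biquad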

Example: Least squares quadratic bivariate polynomial approximation.

Let's rework Example 4.12 to calculate the enthalpy of steam at P = 1225 psia and T = 1100 F using a least squares quadratic bivariate polynomial based on the same data. Let the variables x, y, and z in Eq. (4.170) correspond to T, P, and h, respectively. Evaluating the summations and substituting into Eq. (4.170) yields a 6 x 6 system of normal equations for the coefficients a to f (4.171). Due to the large magnitudes of the coefficients, they are expressed in exponential format (i.e., x.xxx En = x.xxx x 10^n). Each equation should be normalized by the exponential term in its first coefficient. Solving the normalized system by Gauss elimination yields the least squares quadratic bivariate polynomial h(T, P) (4.172). Evaluating Eq. (4.172) yields

    h(1100.0, 1225.0) = 1556.3 Btu/lbm

The error is Error = 1556.3 - 1556.0 = 0.3 Btu/lbm, which is smaller than the error incurred in Example 4.12 obtained by direct multivariate linear interpolation.

Equation (4.170) can be written as the matrix equation

    Ac = b    (4.173)

where A is the 6 x 6 coefficient matrix, c is the 6 x 1 column vector of polynomial coefficients (i.e., a to f), and b is the 6 x 1 column vector of nonhomogeneous terms. In general, solving Eq. (4.173) for c by Gauss elimination is more efficient than calculating A^-1. However, for equally spaced data (i.e., dx = constant and dy = constant) in a large table, a considerable simplification can be achieved. This is accomplished by locally transforming the independent variables so that x = y = 0 at the center point of each subset of data. In this case, A is constant for the entire table, so A^-1 can be determined once for the entire table. Only the nonhomogeneous vector b changes from point to point and must be recalculated. The solution to Eq. (4.173) is then

    c = A^-1 b    (4.174)

which can be used to calculate the coefficients a to f at any point in the table very efficiently.

Nonlinear Functions

One advantage of using polynomials for least squares approximation is that the normal equations are linear, so it is straightforward to solve for the coefficients of the polynomials. In some engineering and science problems, however, the underlying physics suggests forms of approximating functions other than polynomials. Examples of several such functions are presented in this section.

Many physical processes are governed by a nonlinear equation of the form

    y = a x^b    (4.175)

Taking the natural logarithm of Eq. (4.175) gives

    ln(y) = ln(a) + b ln(x)    (4.176)

Let y' = ln(y), a' = ln(a), and x' = ln(x). Equation (4.176) becomes

    y' = a' + bx'    (4.177)

which is a linear polynomial. Equation (4.177) can be used as a least squares approximation by applying the results developed in Section 4.10.2.

Another functional form that occurs frequently in physical processes is

    y = a e^(bx)    (4.178)

Taking the natural logarithm of Eq. (4.178) gives

    ln(y) = ln(a) + bx    (4.179)

which can be written as the linear polynomial

    y' = a' + bx    (4.180)

Equations (4.175) and (4.178) are examples of nonlinear functions that can be manipulated into a linear form. Some nonlinear functions cannot be so manipulated. For example, consider the nonlinear function

    y = a/(1 + bx)    (4.181)

For a given set of data, (xi, Yi) (i = 1, ..., N), the sum of the squares of the deviations is

    S(a, b) = Σ (Yi - a/(1 + b xi))^2    (4.182)

The function S(a, b) is a minimum when

    dS/da = Σ 2(Yi - a/(1 + b xi))(-1/(1 + b xi)) = 0    (4.183a)
    dS/db = Σ 2(Yi - a/(1 + b xi))(a xi/(1 + b xi)^2) = 0    (4.183b)

Equation (4.183) comprises a pair of nonlinear equations for determining the coefficients a and b. They can be solved by methods for solving systems of nonlinear equations, for example, by Newton's method, as discussed in Chapter 3.
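A minimal sketch of the log-transformation fit of Eqs. (4.175) to (4.177) is shown below (an added illustration, not from the text; the routine name is an assumption). A straight line is fit to (ln x, ln y), and a = exp(a') recovers the original coefficient.

    subroutine fit_power_law(n, x, y, a, b)
      implicit none
      integer, intent(in) :: n
      real(8), intent(in) :: x(n), y(n)   ! all values must be positive
      real(8), intent(out) :: a, b
      real(8) :: xp(n), yp(n), sx, sy, sxx, sxy, det, aprime
      xp = log(x); yp = log(y)
      sx = sum(xp); sy = sum(yp); sxx = sum(xp*xp); sxy = sum(xp*yp)
      det = n*sxx - sx*sx
      aprime = (sy*sxx - sx*sxy)/det     ! intercept a' = ln(a)
      b      = (n*sxy - sx*sy)/det       ! slope b
      a = exp(aprime)
      ! Note: this minimizes the deviations of ln(y), not of y itself,
      ! which weights the data differently than a direct nonlinear fit.
    end subroutine fit_power_law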

4.11 PROGRAMS

Five FORTRAN subroutines for fitting approximating polynomials are presented in this section:

1. Direct fit polynomials
2. Lagrange polynomials
3. Divided difference polynomials
4. Newton forward-difference polynomials
5. Least squares polynomials

All of the subroutines are written for a quadratic polynomial, except the linear least squares polynomial. Higher-degree polynomials can be constructed by following the patterns of the quadratic polynomials. The variable ndeg, which specifies the degree of the polynomials, is thus not needed in the present programs. It is included, however, to facilitate the addition of other-degree polynomials to the subroutines, in which case the degree of the polynomial to be evaluated must be specified.

The basic computational algorithms are presented as completely self-contained subroutines suitable for use in other programs. A common program main defines the data set and prints it, calls one of the subroutines to implement the solution, and prints the solution. The complete program main is presented with subroutine direct. For the subroutines lagrange, divdiff, and newtonfd, only the changed statements are included. Program main for subroutine lesq is completely self-contained.

4.11.1. Direct Fit Polynomials

The direct fit polynomial is specified by Eq. (4.34):

    Pn(x) = a0 + a1 x + a2 x^2 + ... + an x^n    (4.184)

A FORTRAN subroutine, subroutine direct, for implementing a quadratic direct fit polynomial is presented below. Program main (Program 4.1) defines the data set and prints it, calls subroutine direct to implement the solution, and prints the solution. Subroutine gauss is taken from Section 1.8.

Program 4.1. Direct fit polynomial program.

      program main
c     main program to illustrate polynomial fitting subroutines
c     ndim   array dimension, ndim = 6 in this program
c     ndeg   degree of the polynomial, ndeg = 2 in this example
c     n      number of data points and polynomial coefficients
c     xp     value of x at which the polynomial is evaluated
c     x      values of the independent variable, x(i)
c     f      values of the dependent variable, f(i)
c     fxp    interpolated value, f(xp)
c     c      polynomial coefficients, c(i)
      dimension x(6),f(6),c(6),a(6,6),b(6)
      data ndim,ndeg,n,xp / 6, 2, 3, 3.44 /
      data (x(i),i=1,3) / 3.35, 3.40, 3.50 /
      data (f(i),i=1,3) / 0.298507, 0.294118, 0.285714 /
      write (6,1000)
      do i=1,n
         write (6,1010) i,x(i),f(i)
      end do
      call direct (ndim,n,x,f,xp,fxp,a,b,c)
      write (6,1020) xp,fxp
      stop
 1000 format (' Quadratic direct fit polynomial'/' '/' x and f'/' ')
 1010 format (i4,2f12.6)
 1020 format (' '/' f(',f6.3,') = ',f12.6)
      end

      subroutine direct (ndim,n,x,f,xp,fxp,a,b,c)
c     quadratic direct fit polynomial
      dimension x(ndim),f(ndim),a(ndim,ndim),b(ndim),c(ndim)
      do i=1,n
         a(i,1)=1.0
         a(i,2)=x(i)
         a(i,3)=x(i)**2
         b(i)=f(i)
      end do
      call gauss (ndim,n,a,b,c)
      write (6,1000) (c(i),i=1,n)
      fxp=c(1)+c(2)*xp+c(3)*xp**2
      return
 1000 format (' '/' f(x) = ',e13.7,' + ',e13.7,' x + ',e13.7,' x**2')
      end

      subroutine gauss (ndim,n,a,b,x)
c     implements simple Gauss elimination
      .
      .
      end

The data set used to illustrate subroutine direct is taken from the quadratic direct fit polynomial example presented earlier in this chapter. The output generated by the direct fit polynomial program is presented in Output 4.1.

Output 4.1. Solution by a quadratic direct fit polynomial.

 Quadratic direct fit polynomial

 x and f

   1    3.350000    0.298507
   2    3.400000    0.294118
   3    3.500000    0.285714

 f(x) = 0.8765607E+00 + -.2560800E+00 x + 0.2493333E-01 x**2

 f( 3.440) =     0.290697

4.11.2. Lagrange Polynomials

The general Lagrange polynomial is presented in Section 4.4. The quadratic Lagrange polynomial is given by Eq. (4.45):

    P2(x) = (x - b)(x - c)/[(a - b)(a - c)] f(a) + (x - a)(x - c)/[(b - a)(b - c)] f(b)
          + (x - a)(x - b)/[(c - a)(c - b)] f(c)    (4.185)

A FORTRAN subroutine, subroutine lagrange, for implementing the quadratic Lagrange polynomial is presented below. Program main (Program 4.2) defines the data set and prints it, calls subroutine lagrange to implement the solution, and prints the solution. It shows only the statements which are different from the statements in Program 4.1.

Program 4.2. Quadratic Lagrange polynomial program.

      program main
c     main program to illustrate polynomial fitting subroutines
      dimension x(6),f(6)
      call lagrange (ndim,n,x,f,xp,fxp)
 1000 format (' Quadratic Lagrange polynomial'/' '/' x and f'/' ')
      end

      subroutine lagrange (ndim,n,x,f,xp,fxp)
c     quadratic Lagrange polynomial
      dimension x(ndim),f(ndim)
      fp1=(xp-x(2))*(xp-x(3))/(x(1)-x(2))/(x(1)-x(3))*f(1)
      fp2=(xp-x(1))*(xp-x(3))/(x(2)-x(1))/(x(2)-x(3))*f(2)
      fp3=(xp-x(1))*(xp-x(2))/(x(3)-x(1))/(x(3)-x(2))*f(3)
      fxp=fp1+fp2+fp3
      return
      end

The data set used to illustrate subroutine lagrange is taken from the quadratic Lagrange polynomial example presented earlier in this chapter. The output generated by the Lagrange polynomial program is presented in Output 4.2.

Output 4.2. Solution by a quadratic Lagrange polynomial.

 Quadratic Lagrange polynomial

 x and f

   1    3.350000    0.298507
   2    3.400000    0.294118
   3    3.500000    0.285714

 f( 3.440) =     0.290697

3. Divided Difference Polynomial Chapter 4 The general formula for the divided difference polynomialis given by Eq. Program main (Program4.290697 divided diff. Quadratic divided difference polynomial program. and prints the solution.350000 3.500000 0.3. xp. Solution by a quadratic divided difference polynomial. program main main program to illustrate polynomial fitting subroutines dimension x(6) . f (6) call divdiff (ndim. (4.xo)f/O)(2(x . subroutine divdiff for implementing a quadratic divided difference polynomial is presented below.6.294118 0. f (ndim) fll= (f (2) -f (i)) / (x(2) f21= (f (3) -f (2)) / (x(3) f12= (f21-fll) / (x(3) -x fxp=f (1) + (xp-x(1)) *fll + (xp-x(1)) * (xp-x(2) return end The data set used to illustrate subroutine divdiff is taken from Example4.238 4. polynomial f( 3. f. xp.Xo)(XXl . poly.xn_ ) ~)f/(") (4.400000 3.Xo)(X. n. Output4.1. calls subroutinedivdiffto implement solution.285714 0. ’/’ ’/’ x and f’/’ ’) end subroutine divdiff (ndim.298507 0.(x) = f/(o) + (x .x. Quadratic x and f 1 2 3 3.440) ..3. Program4. The output generated by the divided difference polynomialprogramis presented in Output4.x.11.3. ¯ ¯ (x. fxp) lOOO format (" Quadratic divided diff..65): P. the It showsonly the statements whichare different from the statements in Program 4. fxp) quadratic divided difference polynomial dimension x (ndim) .3) defines the data set and prints it.xl)fi + +.n.186) A FORTRAN subroutine. + (x . f.

4.11.4. Newton Forward-Difference Polynomial

The general formula for the Newton forward-difference polynomial is given by Eq. (4.88):

    Pn(x) = f0 + s Δf0 + s(s - 1)/2! Δ^2 f0 + ... + s(s - 1)(s - 2)...[s - (n - 1)]/n! Δ^n f0    (4.187)

A FORTRAN subroutine, subroutine newtonfd, for implementing a quadratic Newton forward-difference polynomial is presented below. Program main (Program 4.4) defines the data set and prints it, calls subroutine newtonfd to implement the solution, and prints the solution. It shows only the statements which are different from the statements in Program 4.1.

Program 4.4. Quadratic Newton forward-difference polynomial program.

      program main
c     main program to illustrate polynomial fitting subroutines
      dimension x(6),f(6)
      data (x(i),i=1,3) / 3.40, 3.50, 3.60 /
      data (f(i),i=1,3) / 0.294118, 0.285714, 0.277778 /
      call newtonfd (ndim,n,x,f,xp,fxp)
 1000 format (' Quadratic Newton FD polynomial'/' '/' x and f'/' ')
      end

      subroutine newtonfd (ndim,n,x,f,xp,fxp)
c     quadratic Newton forward-difference polynomial
      dimension x(ndim),f(ndim)
      d11=f(2)-f(1)
      d12=f(3)-f(2)
      d21=d12-d11
      s=(xp-x(1))/(x(2)-x(1))
      fxp=f(1)+s*d11+s*(s-1.0)/2.0*d21
      return
      end

The data set used to illustrate subroutine newtonfd is equally spaced. The output generated by the Newton forward-difference polynomial program is presented in Output 4.4.

Output 4.4. Solution by a quadratic Newton forward-difference polynomial.

 Quadratic Newton FD polynomial

 x and f

   1    3.400000    0.294118
   2    3.500000    0.285714
   3    3.600000    0.277778

 f( 3.440) =     0.290700

4.11.5. Least Squares Polynomial

The normal equations for a general least squares polynomial are given by Eq. (4.158). The normal equations for a linear least squares polynomial are given by Eq. (4.149):

    aN + b Σ xi = Σ Yi    (4.188a)
    a Σ xi + b Σ xi^2 = Σ xi Yi    (4.188b)

A FORTRAN subroutine, subroutine lesq, for implementing a linear least squares polynomial is presented below. Program main (Program 4.5) defines the data set and prints it, calls subroutine lesq to implement the solution, and prints the solution.

Program 4.5. Linear least squares polynomial program.

      program main
c     main program to illustrate subroutine lesq
c     ndim   array dimension, ndim = 10 in this example
c     ndeg   degree of the polynomial
c     n      number of data points and polynomial coefficients
c     x      values of the independent variable, x(i)
c     f      values of the dependent variable, f(i)
c     c      coefficients of the least squares polynomial, c(i)
      dimension x(10),f(10),a(10,10),b(10),c(10)
      data ndim,ndeg,n / 10, 1, 8 /
      data (x(i),i=1,8) / 300.0, 400.0, 500.0, 600.0, 700.0,
     1  800.0, 900.0, 1000.0 /
      data (f(i),i=1,8) / 1.0045, 1.0134, 1.0296, 1.0507, 1.0743,
     1  1.0984, 1.1212, 1.1410 /
      write (6,1000)
      call lesq (ndim,ndeg,n,x,f,a,b,c)
      write (6,1010)
      do i=1,n
         fi=c(1)+c(2)*x(i)
         error=100.0*(f(i)-fi)
         write (6,1020) i,x(i),f(i),fi,error
      end do
      stop
 1000 format (' Linear least squares polynomial')
 1010 format (' '/' i',9x,'x',9x,'f',8x,'fi',7x,'error'/' ')
 1020 format (i4,f10.3,2f10.4,f10.2)
      end

      subroutine lesq (ndim,ndeg,n,x,f,a,b,c)
c     linear least squares polynomial
      dimension x(ndim),f(ndim),a(ndim,ndim),b(ndim),c(ndim)
      sumx=0.0
      sumxx=0.0
      sumf=0.0
      sumxf=0.0

      do i=1,n
         sumx=sumx+x(i)
         sumxx=sumxx+x(i)*x(i)
         sumf=sumf+f(i)
         sumxf=sumxf+x(i)*f(i)
      end do
      a(1,1)=float(n)
      a(1,2)=sumx
      a(2,1)=sumx
      a(2,2)=sumxx
      b(1)=sumf
      b(2)=sumxf
      call gauss (ndim,ndeg+1,a,b,c)
      write (6,1000) (c(i),i=1,ndeg+1)
      return
 1000 format (' '/' f(x) = ',e13.7,' + ',e13.7,' x')
      end

      subroutine gauss (ndim,n,a,b,x)
c     implements simple gauss elimination
      .
      .
      end

The data set used to illustrate subroutine lesq is taken from the least squares straight line example of Section 4.10. The output generated by the linear least squares polynomial program is presented in Output 4.5.

Output 4.5. Solution by a linear least squares polynomial.

 Linear least squares polynomial

 f(x) = 0.9331940E+00 + 0.2052976E-03 x

   i         x         f        fi     error
   1   300.000    1.0045    0.9948      0.97
   2   400.000    1.0134    1.0153     -0.19
   3   500.000    1.0296    1.0358     -0.62
   4   600.000    1.0507    1.0564     -0.57
   5   700.000    1.0743    1.0769     -0.26
   6   800.000    1.0984    1.0974      0.10
   7   900.000    1.1212    1.1180      0.32
   8  1000.000    1.1410    1.1385      0.25

4.11.6. Packages for Polynomial Approximation

Numerous libraries and software packages are available for polynomial approximation and interpolation. Many workstations and mainframe computers have such libraries attached to their operating systems. Many commercial software packages contain routines for fitting approximating polynomials and interpolation. Some of the more prominent packages are Matlab and Mathcad. More sophisticated packages, such as IMSL, Mathematica, Macsyma, and Maple, also contain routines for fitting approximating polynomials and interpolation. Finally, the book Numerical Recipes, Press et al. (1989), contains numerous subroutines for fitting approximating polynomials and interpolation.

4.12 SUMMARY

Procedures for developing approximating polynomials for discrete data are presented in this chapter. For small sets of smooth data, exact fits are desirable. The direct fit polynomial, the Lagrange polynomial, and the divided difference polynomial work well for nonequally spaced data. For equally spaced data, polynomials based on differences are recommended. The Newton forward-difference and backward-difference polynomials are simple to fit and evaluate. These two polynomials are used extensively in Chapters 5 and 6 to develop procedures for numerical differentiation and numerical integration, respectively.

Procedures are discussed for inverse interpolation and multivariate approximation. Procedures for developing least squares approximations for discrete data also are presented in this chapter. Least squares approximations are useful for large sets of data and sets of rough data. Least squares polynomial approximation is a straightforward, simple, and accurate procedure for obtaining approximating functions for large sets of data or sets of rough data, for both one independent variable and more than one independent variable. The least squares normal equations corresponding to polynomial approximating functions are linear, which leads to very efficient solution procedures. For nonlinear approximating functions, the least squares normal equations are nonlinear, which leads to complicated solution procedures.

After studying Chapter 4, you should be able to:

1. Discuss the general features of functional approximation
2. List the uses of functional approximation
3. List several types of approximating functions
4. List the required properties of an approximating function
5. Explain why polynomials are good approximating functions
6. List the types of discrete data which influence the type of fit required
7. Describe the difference between exact fits and approximate fits
8. Explain the significance of the Weierstrass approximation theorem
9. Explain the significance of the uniqueness theorem for polynomials
10. List the Taylor series and the Taylor polynomial
11. Discuss the error term of a polynomial
12. Evaluate, differentiate, and integrate a polynomial
13. Explain and apply the nested multiplication algorithm
14. State and apply the division algorithm, the remainder theorems, and the factor theorem
15. Explain and apply the synthetic division algorithm
16. Explain and implement polynomial deflation
17. Discuss the advantages and disadvantages of direct fit polynomials
18. Fit a direct fit polynomial of any degree to a set of tabular data
19. Explain the concept underlying Lagrange polynomials
20. Discuss the advantages and disadvantages of Lagrange polynomials
21. Fit a Lagrange polynomial of any degree to a set of tabular data

22. Discuss the advantages and disadvantages of Neville's algorithm
23. Apply Neville's algorithm
24. Define a divided difference and construct a divided difference table
25. Discuss the advantages and disadvantages of divided difference polynomials
26. Construct a divided difference polynomial
27. Define a difference and construct a difference table
28. Discuss the effects of round-off on a difference table
29. Discuss the advantages and disadvantages of difference polynomials
30. Apply the Newton forward-difference polynomial
31. Apply the Newton backward-difference polynomial
32. Discuss the advantages of centered difference polynomials
33. Discuss and apply inverse interpolation
34. Describe approaches for multivariate approximation
35. Apply successive univariate polynomial approximation
36. Apply direct multivariate polynomial approximation
37. Describe the concepts underlying cubic splines
38. List the advantages and disadvantages of cubic splines
39. Apply cubic splines to a set of tabular data
40. Discuss the concepts underlying least squares approximation
41. List the advantages and disadvantages of least squares approximation
42. Derive the normal equations for a least squares polynomial of any degree
43. Apply least squares polynomials to fit a set of tabular data
44. Derive and solve the normal equations for a least squares approximation of a nonlinear function
45. Apply the computer programs presented in Section 4.11 to fit polynomials and to interpolate
46. Develop computer programs for applications not implemented in Section 4.11

EXERCISE PROBLEMS

4.2 Properties of Polynomials

1. For the polynomial P3(x) = x^3 - 9x^2 + 26x - 24, calculate (a) P3(1.5) by nested multiplication, (b) P'3(1.5) by constructing Q2(x) and evaluating Q2(1.5), and (c) the deflated polynomial Q2(x) obtained by removing the factor (x - 2).
2. Work Problem 1 for x = 2.5 and remove the factor (x - 3).
3. Work Problem 1 for x = 3.5 and remove the factor (x - 4).
4. For the polynomial P4(x) = x^4 - 10x^3 + 35x^2 - 50x + 24, calculate (a) P4(1.5) by nested multiplication, (b) P'4(1.5) by constructing Q3(x) and evaluating Q3(1.5), and (c) the deflated polynomial Q3(x) obtained by removing the factor (x - 1).
5. Work Problem 4 for x = 2.5 and remove the factor (x - 2).
6. Work Problem 4 for x = 3.5 and remove the factor (x - 3).
7. Work Problem 4 for x = 4.5 and remove the factor (x - 4).
8. For the polynomial P5(x) = x^5 - 20x^4 + 155x^3 - 580x^2 + 1044x - 720, calculate (a) P5(1.5) by nested multiplication, (b) P'5(1.5) by constructing Q4(x) and evaluating Q4(1.5), and (c) the deflated polynomial Q4(x) obtained by removing the factor (x - 2).

6 and 2.6933 3.) in terms a Taylor series at the base point and comparing that result to the Taylor series forf(x) at the base point..5) by nested multiplication.9)using last four points.5933 7.9) the last three points. WorkProblem8 for x = 3. show that the order is 0(Ax for part (a) and 0(Ax2_) 0(A~+) part + for 16. Error = C Ax Estimate the order of a linear direct fit polynomial calculating f(2. Tabular Data.f/_1 .7491 6.0 j~x) 3.5543 13. Using direct fit polynomials..3 Direct Fit Polynomials Table 1 is for the function f(x) = 2/x + a.3511 5.2 (i.8).1067 x 1. 4.0000 x 2. WorkProblem16 for the range 1.0 for x = 1. (c) P3(0. x 0.4 2. WorkProblem8 for x = 4.5 and removethe factor (x .8 2. Ax=0.8 fix) 5. Ax=0.2 2. 10.3886 3. Considerthe tabular set of generic data shown Table 2: Determine direct in the fit quadratic polynomials for the tabular set of generic data for (a) Ax_ = Ax+ = Ax and (b) Ax_ ~ Ax+.0) for the data in Table1 using by x=1.5. calculating the errors. and (e) P4(0. that is.2 < x < 2. and calculating the ratio of the errors. 17.9)using the first four points. T t able i s c onsidered i n several o f t he his problems which follow.8 1. 14.0000 3.8t00 4.6). Table 2. 12.9) using all five data points. Table 1.4 1.~(x) can be determined by expressingall function values in the polynomial (i.1600 3. 11. Consider the data in the range 0.4 < x < 1.4).4 0.9) using the first three points.e.5292 8. and (c) the deflated polynomial Q4(x) obtained by removingthe factor (x 9. and x=1.0 1.2 in Table 1.1400 3. The order n of an approximating polynomial specifies the rate at which the error of the polynomialapproximationapproacheszero as the incrementin the n.244 Chapter4 and evaluating Q4(1.e. (b) P2(0.4 (i.5).f+l.e.6 1.8 and 2. (d) P3(0. Work Problem8 for x = 2. WorkProblem8 for x = 5. etc. For the direct fit polynomialsdevelopedin Problem 3) 14. tabular data approacheszero.5 and removethe factor (x ..6 0.2 fix) 5. Generic Data x xi_j = -Ax_ xi=0 xi+~ = +Ax+ f_j f f+~ 15. .4).3). The formal order of a direct fit polynomial P.6 2.5 and removethe factor (x .calculate (a) P2(0.5 and removethe factor (x .

9.90. Usingdirect fit polynomials the base point as close to with the specified value of ras possible.1410 1.l/kg 1515. WorkProblem 19 using Lagrange polynomials. 1.174 1278.2197 h.1573 1. evaluate f(0. WorkProblem18 using Neville’s algorithm. and (c) four points. 34.8 for x = 2. 24. In each use the closest points to x = 1.5.2095 1. 23.1982 1. 19. Divided Difference Polynomials 35.248 1162. Cp.657 WorkProblem 19 for h(T) instead of @(T). 27. 25. Table 3. 1. 4.9) using: (a) twopoints. 28.792 1636. 4. (b) three points.4 Lagrange Polynomials 21. WorkProblem16 for the range 2. WorkProblem 17 using Lagrange polynomials.75. and (c) four points. WorkProblem19 using Neville’s algorithm.45. Cp(1120)using three points.663 1396. The constant pressure specific heat Cp and enthalpy h of low pressure air are tabulated in Table3. WorkProblem20 using Neville’s algorithm.0 < x < 2. 1.24 in the range 1. and (d) Cp(1480) using three points.1722 1. 33. (c) Cp(1480)using two points.578 ZK 1400 1500 1600 Cp. Construct a divided difference table for the data in Table 1. Constructa divided difference table for h(T) for the data presentedin Table3. 1. k. kJ/kg-K 1. 30. (b) three points.1 < x < 1.Polynomial Approximation Interpolation and 245 18.9x~ +26x. Construct a divided difference table for Cp(T) for the data presented in Table 3.30.9 for x = 1. 1. WorkProblem16 using Neville’s algorithm.188 1757. kJ/kg 1047. Fromthe divided difference table constructed in Problem31. Using the divided difference table constructed in Problem32.10. 1.20. 36.15. 29. 1. 32. and 1. interpolate for f(1. 26.44) using: (a) twopoints.5 Divided Difference Tables and Divided Difference Polynomials Divided Difference Tables 31. WorkProblem17 using Neville’s algorithm. .1858 h. 22. WorkProblem 16 using Lagrange polynomials. calculate (a) C~(1120) using two points. WorkProblem 20 using Lagrange polynomials. Propmies of Air ~ K 1000 1100 1200 1300 20.35. kJ/kg-K 1.6. WorkProblem 18 using Lagrange polynomials.44. Construct a six-place divided difference table for the function f(x) =x3 . In each case use closest points to x = 0.

WorkProblem16 WorkProblem17 WorkProblem18 WorkProblem19 WorkProblem20 using using using using using the the the the the Stirling Stifling Stifling Stifling Stifling centered-difference centered-difference centered-difference centered-difference centered-difference polynomial. Rewrite both Newtonpolynomials in the form Cp(T) = a + bT CT2. and showthat the three polynomialsare identical. The Newton Backward-Difference Polynomial 48. middle. calculate Cp(ll20) using: (a) two points. 47. . 46. 57.246 Chapter4 37. polynomial. Other Difference Polynomials 55. WorkProblem 17 using Newtonbackward-difference polynomials. polynomials. Discuss the results. For the data in Table 3 in the temperaturerange 1200< T _< 1400. 38. the 40. and a quadratic Newtonbackward-difference polynomial for Cp(T).6 Difference Tables and Difference Polynomials Difference Tables Construct a six-place difference table for the function f(x)= x3 . 56. 52. (b) three points. 39. polynomials. Discuss the results. WorkProblem 16 WorkProblem 17 WorkProblem 18 WorkProblem 19 WorkProblem 20 using using using using using Newtonforward-difference Newtonforward-difference Newtonforward-difference Newtonforward-difference Newtonforward-difference polynomials. 51. WorkProblem 41 for h(T). Comment the degree of polynomial on required to approximatethese data at the beginning. WorkProblem 19 using Newtonbackward-difference polynomials. a quadratic Newton forward-difference polynomial. 58. 4. 50.9 for Ax--= 0. Fromthe divided difference table constructed in Problem34. construct a quadratic direct fit polynomial. Construct a difference table for the data in Table 1. 41. polynomial. The NewtonForward-Difference Polynomial 43. 54. polynomial. Analyze effects of roundoff. 45. WorkProblem 20 using Newtonbackward-difference polynomials. 44. case use the closest points to T = 1120 K. WorkProblem 16 using Newtonbackward-difference polynomials. 53. Analyze the effects of round-off. and end of the table. Discuss the results. polynomial. (b) three points. polynomials. From the divided difference table constructed in Problem 33. and(c) four points.1 < x < 1. WorkProblem 53 including a quadratic Lagrange polynomial. polynomials. and (c)four points. What degree of polynomialis required to approximatethis set of data? 42. ~Work Problem18 using Newtonbackward-difference polynomials. 59. 49.9x~ + 26x . Analyze the effects of round off.24 in the range 1. In each case use closest points to T = l120K. calculate h(1120) using: (a) twopoints.1. Construct a difference table for Cp(T) for the data presented in Table 3.

as a function of pressure P (kN/m and 2 and T = 800K. WorkProblem 65 for v(10500.750): v=a+bT +cP +dPT 70. Theexact value from the steam tables is v = 0. WorkProblem 73 for v(10500. temperature T(K) in the neighborhood ofP = 10. WorkProblem 65 for v(9500. polynomial. 76. WorkProblem 69 for v(10500. the 78. 75.850). 73. WorkProblem 73 for v(9500. The specific volumev (m3/kg) of steam.028345 0.039053 0. WorkProblem 73 for v(10500. WorkProblem 69 for v(9500. 68. 9. the set the the _.029466m3/kg. 64.400.Polynomial Approximation Interpolation and 60.037948 0.000 < T < 1. The exact value is 0.K 2 kN/m P.034590m3/kg. 63. Compare results with the results of Problem77. WorkProblem77 using every other data point. 61.750). corresponding to the van der Waal 2) equation of state (see Problem3. 66. 67.038534m3/kg.750): v = a + bT + cP + dPT+ eT2 74. WorkProblem 69 for v(10500.031980 0.025360 800 0. Solve Problem65 by direct quadratic bivariate interpolation for v(9500.035270 Usesuccessive quadratic univariate interpolation to calculate v(9500.850). 71. 79.000 700 0.8 WorkProblem 16 WorkProblem 17 WorkProblem 18 WorkProblem 19 WorkProblem20 using using using using using the the the the the Bessel Bessel Bessel Besset Bessel centered-difference centered-difference centered-difference centered-difference centered-difference polynomial.030452 900 0. WorkProblem77 for a quadratic polynomial. WorkProblem 65 for v(10500. 247 Multivariate Interpolation 65. Comparethe results with results of Problem77.]_j.043675 0.033827 0. Solve Problem65 by direct linear bivariate interpolation for v(9500.750).850). 69.850).750).850). 72. Considerthe data for the specific heat Cp of air. Table 4. 4. p2 . Compute deviations at each data point.000 10. polynomial. Specific Volume Steam of T. presented in Table3 for range 1.000 I1.000kN/m is tabulated in Table4. polynomial. Find the best straight line approximation this to of data. 4.10 Least Squares Approximation 77.850). polynomial.750). The exact value is 0. 62. The exact value is 0.69).032965m3/kg.

Modifythe quadratic divided difference polynomial program to consider a cubic divided difference polynomial. Modifythe quadratic direct fit polynomialprogramto consider a cubic direct fit polynomial. 97. Solve any of Problems16 to 20 with the program.11 Programs 83. 87. Fit the Cp(T) data in Table 3 to a fourth-degree polynomial and computethe deviations. Implementthe quadratic direct fit polynomial programpresented in Section 4. WorkProblem 80 for the least squares quadratic bivariate polynomial and computethe deviations. 88. 84. Considerthe data for the specific volume steam. Modify the quadratic Newton forward-difference polynomial program to consider a linear Newton forward-difference polynomial. Implementthe quadratic divided difference polynomial programpresented in Section 4. Modify the quadratic Lagrange polynomial program to consider a linear Lagrange polynomial. Implementthe quadratic Lagrange polynomial program presented in Section 4. Checkout the programwith the given data. given in Table 4. 86.248 Chapter4 of 80. Solve any of Problems16 to 20 with the modified program.750) and compare the with the result from Problem69. v = v(P. Modify the quadratic Lagrange polynomial program to consider a cubic Lagrange polynomial. Checkout the programwith the given data. T). 95.3.4.11. Cp(T) = a + bT + 2 + dT3 + eT4 4. 89. Modifythe quadratic direct fit polynomialprogramto consider a linear direct fit polynomial. Solve any of Problems 16 to 20 with the modified program. 94. Checkout the programwith the given data. 92. Solve any of Problems 16 to 20 with the program. 96. v = a + bT + cP + dPT + eT2 +fp2 Compare result with the result from Problem73. Calculate v(9500.11. Checkout the programwith the given data. Solve any of Problems16 to 20 with the modified program. Solve any of Problems16 to 20 with the program. 81. Solve any of Problems16 to 20 with the program. 91. Solve any of Problems16 to 20 with the modified program. Solve any of Problems 16 to 20 with the modified program.11. .2. 90.1. the 82. 85. Implement the quadratic Newton forward-difference polynomial program presented in Section 4. Develop least squareslinear bivariate polynomial the set of data in the a for form v = a+ bT+cP+dPT Compute derivation at each data point. Solve any of Problems 16 to 20 with the modified program. Solve any of Problems16 to 20 with the modified program. 93. Modifythe quadratic divided difference polynomial programto consider a linear divided difference polynomial.11.

Polynomial Approximation Interpolation and 249 98. Checkout the programwith the given data. 99. Solve Problem77 or 78 with the program.5 E÷08 1. For laminar flow.0080 106. 102. Whenan incompressible fluid flows steadily through a round pipe. measured values of the forward and backward reaction rates Kf andK6. Solve Problem77 using the program.9E÷15 1. the pressure drop APdue to friction is given by AP = -O. Solve Problem82 using the program. Extend the linear least squares polynomialprogramto consider a quadratic least squares polynomial. Reaction rates for chemicalreactions are usually expressed in the form ~ K=BT exp -(~-~) For a particular reaction.0107 2000 0.0160 1500 0.5 E ÷ 11 . Solve any of Problems16 to 20 with the modified program A. 101. Re. LID is the pipe length-todiameterratio. the friction coefficient f can be related to the Reynolds number. andf is the D’Arcy friction coefficient. Friction Coefficient Re f 500 0.6 E+07 5. are given by Table 6.K 1000 2000 3000 4000 5000 x s 7.0320 1000 0.5f pV2(L/D) where p is the fluid density.11. ReactionRates T. Extend the linear least squares polynomial programto consider a fourthdegree least squares polynomial. 103.5.9 E ÷ 04 2. V is the velocity. Modify the linear least squares polynomial program to consider a linear bivariate least squares polynomial. Implement linear least squares polynomialprogrampresented in Section the 4. Solve Problem80 with the program. respectively. Modify the quadratic Newton forward-difference polynomial program to consider a cubic Newton forward-difference polynomial. by a relationship of the form b = a Re f Use the measured data in Table 5 to determinea and b by a least squares fit.5 E ÷ 15 4.8E÷ 15 2. 100. APPLIED PROBLEMS 105.4E÷10 1.5 E÷ 15 3. Solve Problem81 with the program. Table5.5 E÷ 15 1. Modifythe linear least squares polynomialprogramto consider a quadratic bivariate least squares polynomial. 104.

250 Chapter4 (a) DetermineB and c~ for the backwardreaction rate K for which E/R = O. e and E/R for the forward reaction rate Kf. 107. The data in Table 1 can be fit by the expression f =a+ bx2X Developa least squares procedureto determinea and b. b (b) DetermineB. Solve for a and b and computethe deviations. .

5

Numerical Differentiation and Difference Formulas

5.1. Introduction
5.2. Unequally Spaced Data
5.3. Equally Spaced Data
5.4. Taylor Series Approach
5.5. Difference Formulas
5.6. Error Estimation and Extrapolation
5.7. Programs
5.8. Summary
Problems

Examples
5.1. Direct fit, Lagrange, and divided difference polynomials
5.2. Newton forward-difference polynomial, one-sided
5.3. Newton forward-difference polynomial, centered
5.4. Newton polynomial difference formulas
5.5. Taylor series difference formulas
5.6. Third-order nonsymmetrical difference formula for fx
5.7. Error estimation and extrapolation

5.1 INTRODUCTION

Figure 5.1 presents a set of tabular data in the form of a set of [x, f(x)] pairs. The function f(x) is known only at discrete values of x. Interpolation within a set of discrete data is discussed in Chapter 4. Differentiation within a set of discrete data is presented in this chapter. The discrete data presented in Figure 5.1 are values of the function f(x) = 1/x, which are used as the example problem in this chapter.

The evaluation of a derivative is required in many problems in engineering and science:

    d/dx[f(x)] = f'(x) = fx(x)    (5.1)

x       f(x)
3.20    0.312500
3.30    0.303030
3.35    0.298507
3.40    0.294118
3.50    0.285714
3.60    0.277778
3.65    0.273973
3.70    0.270270

Figure 5.1. Differentiation of tabular data.

where the alternate notations f'(x) and fx(x) are used for the derivatives. The evaluation of derivatives by approximate numerical procedures is the subject of this chapter.

The function f(x), which is to be differentiated, may be a known function or a set of discrete data. In general, known functions can be differentiated exactly. Differentiation of discrete data, however, requires an approximate numerical procedure. Numerical differentiation formulas can be developed by fitting approximating functions (e.g., polynomials) to a set of discrete data and differentiating the approximating function. Thus,

    d/dx[f(x)] ~ d/dx[Pn(x)]    (5.2)

This process is illustrated in Figure 5.2. To perform numerical differentiation, an approximating polynomial is fit to the discrete data, or a subset of the discrete data, and the approximating polynomial is differentiated.

Figure 5.2. Numerical differentiation.

As illustrated in Figure 5.2, even though the approximating polynomial Pn(x) passes through the discrete data points exactly, the derivative of the polynomial, P'n(x), may not be a very accurate approximation of the derivative of the exact function f(x), even at the known data points themselves. In general, numerical differentiation is an inherently inaccurate process.

The polynomial may be fit exactly to a set of discrete data by the methods presented in Sections 4.3 to 4.9, or approximately by a least squares fit as described in Section 4.10. In both cases, the degree of the approximating polynomial chosen to represent the discrete data is the only parameter under our control.

Several numerical differentiation procedures are presented in this chapter. Differentiation of direct fit polynomials, Lagrange polynomials, and divided difference polynomials can be applied to both unequally spaced data and equally spaced data. Differentiation formulas based on both Newton forward-difference polynomials and Newton backward-difference polynomials can be applied to equally spaced data. Numerical differentiation formulas can also be developed using Taylor series. This approach is quite useful for developing difference formulas for approximating exact derivatives in the numerical solution of differential equations.

The simple function

    f(x) = 1/x    (5.3)

which has the exact derivatives

    f'(x) = d/dx(1/x) = -1/x^2    (5.4a)
    f''(x) = d^2/dx^2(1/x) = 2/x^3    (5.4b)

is considered in this chapter to illustrate numerical differentiation procedures. In particular, at x = 3.5:

    f'(3.5) = -1/(3.5)^2 = -0.081633...    (5.5a)
    f''(3.5) = 2/(3.5)^3 = 0.046647...    (5.5b)

Figure 5.3. Organization of Chapter 5.

The organization of Chapter 5 is illustrated in Figure 5.3. Following the introductory discussion in this section, differentiation using direct fit polynomials, Lagrange polynomials, and divided difference polynomials as the approximating function is discussed. The development of differentiation formulas based on Newton difference polynomials is presented in Section 5.3. Section 5.4 develops difference formulas directly from the Taylor series. A table of difference formulas is presented in Section 5.5. A discussion of error estimation and extrapolation is presented in Section 5.6. Several programs for differentiating tabular data numerically are presented in Section 5.7. The chapter closes with a Summary, which includes a list of what you should be able to do after studying Chapter 5.

5.2 UNEQUALLY SPACED DATA

Three straightforward numerical differentiation procedures that can be used for both unequally spaced data and equally spaced data are presented in this section:

1. Direct fit polynomials
2. Lagrange polynomials
3. Divided difference polynomials

5.2.1. Direct Fit Polynomials

A direct fit polynomial procedure is based on fitting the data directly by a polynomial and differentiating the polynomial. Recall the direct fit polynomial, Eq. (4.34):

    Pn(x) = a0 + a1 x + a2 x^2 + ... + an x^n    (5.6)

where Pn(x) is determined by one of the following methods:

1. Given N = n + 1 points, [xi, f(xi)], determine the exact nth-degree polynomial that passes through the data points, as discussed in Section 4.3.
2. Given N > n + 1 points, [xi, f(xi)], determine the least squares nth-degree polynomial that best fits the data points, as discussed in Section 4.10.

After the approximating polynomial has been fit, the derivatives are determined by differentiating the approximating polynomial. Thus,

    f'(x) ~ P'n(x) = a1 + 2 a2 x + 3 a3 x^2 + ...    (5.7a)
    f''(x) ~ P''n(x) = 2 a2 + 6 a3 x + ...    (5.7b)

Equations (5.7a) and (5.7b) are illustrated in Example 5.1.

5.2.2. Lagrange Polynomials

The second procedure that can be used for both unequally spaced data and equally spaced data is based on differentiating a Lagrange polynomial. For example, consider the second-degree Lagrange polynomial, Eq. (4.45):

    P2(x) = (x - b)(x - c)/[(a - b)(a - c)] f(a) + (x - a)(x - c)/[(b - a)(b - c)] f(b)
          + (x - a)(x - b)/[(c - a)(c - b)] f(c)    (5.8)

Differentiating Eq. (5.8) yields:

    f'(x) ~ P'2(x) = [2x - (b + c)]/[(a - b)(a - c)] f(a) + [2x - (a + c)]/[(b - a)(b - c)] f(b)
                   + [2x - (a + b)]/[(c - a)(c - b)] f(c)    (5.9a)

Differentiating Eq. (5.9a) yields:

    f''(x) ~ P''2(x) = 2 f(a)/[(a - b)(a - c)] + 2 f(b)/[(b - a)(b - c)] + 2 f(c)/[(c - a)(c - b)]    (5.9b)

Equations (5.9a) and (5.9b) are illustrated in Example 5.1.

5.2.3. Divided Difference Polynomials

The third procedure that can be used for both unequally spaced data and equally spaced data is based on differentiating a divided difference polynomial, Eq. (4.65):

    Pn(x) = f(0) + (x - x0)f(1) + (x - x0)(x - x1)f(2) + (x - x0)(x - x1)(x - x2)f(3) + ...    (5.10)

Differentiating Eq. (5.10) gives

    f'(x) ~ P'n(x) = f(1) + [2x - (x0 + x1)]f(2)
                   + [3x^2 - 2(x0 + x1 + x2)x + (x0 x1 + x0 x2 + x1 x2)]f(3) + ...    (5.11a)

Differentiating Eq. (5.11a) gives

    f''(x) ~ P''n(x) = 2 f(2) + [6x - 2(x0 + x1 + x2)]f(3) + ...    (5.11b)

Equations (5.11a) and (5.11b) are illustrated in Example 5.1.
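A minimal modern-Fortran sketch of Eqs. (5.9a) and (5.9b) follows (an added illustration; the routine name and interface are assumptions, not from the programs of Section 5.7):

    subroutine dlagrange2(a, b, c, fa, fb, fc, x, fp, fpp)
      implicit none
      real(8), intent(in)  :: a, b, c, fa, fb, fc, x
      real(8), intent(out) :: fp, fpp
      ! First derivative, Eq. (5.9a)
      fp  = (2d0*x-(b+c))/((a-b)*(a-c))*fa + (2d0*x-(a+c))/((b-a)*(b-c))*fb &
          + (2d0*x-(a+b))/((c-a)*(c-b))*fc
      ! Second derivative, Eq. (5.9b)
      fpp = 2d0*fa/((a-b)*(a-c)) + 2d0*fb/((b-a)*(b-c)) + 2d0*fc/((c-a)*(c-b))
    end subroutine dlagrange2

Called with a = 3.4, b = 3.5, c = 3.6, the corresponding values of f(x) = 1/x, and x = 3.5, it reproduces the results of Example 5.1: fp = -0.081700 and fpp = 0.046800.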

Example 5.1. Direct fit, Lagrange, and divided difference polynomials.

Let's solve the example problem presented in Section 5.1 by the three procedures presented above. Consider the following three data points:

x     f(x)
3.4   0.294118
3.5   0.285714
3.6   0.277778

First, fit the quadratic polynomial P2(x) = a0 + a1 x + a2 x^2 to the three data points:

    0.294118 = a0 + a1(3.4) + a2(3.4)^2    (5.12a)
    0.285714 = a0 + a1(3.5) + a2(3.5)^2    (5.12b)
    0.277778 = a0 + a1(3.6) + a2(3.6)^2    (5.12c)

Solving for a0, a1, and a2 by Gauss elimination without scaling or pivoting yields a0 = 0.858314, a1 = -0.245500, and a2 = 0.023400. Substituting these values into Eqs. (5.7a) and (5.7b) and evaluating at x = 3.5 yields the solution for the direct fit polynomial:

    P'2(3.5) = -0.245500 + 2(0.023400)(3.5) = -0.081700    (5.12d)
    P''2(3.5) = 2(0.023400) = 0.046800    (5.12e)

Substituting the tabular values into Eqs. (5.9a) and (5.9b) and evaluating at x = 3.5 yields the solution for the Lagrange polynomial:

    P'2(3.5) = [2(3.5) - (3.5 + 3.6)]/[(3.4 - 3.5)(3.4 - 3.6)] (0.294118)
             + [2(3.5) - (3.4 + 3.6)]/[(3.5 - 3.4)(3.5 - 3.6)] (0.285714)
             + [2(3.5) - (3.4 + 3.5)]/[(3.6 - 3.4)(3.6 - 3.5)] (0.277778) = -0.081700    (5.13a)
    P''2(3.5) = 2(0.294118)/[(3.4 - 3.5)(3.4 - 3.6)] + 2(0.285714)/[(3.5 - 3.4)(3.5 - 3.6)]
              + 2(0.277778)/[(3.6 - 3.4)(3.6 - 3.5)] = 0.046800    (5.13b)
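The direct fit result can also be checked with a few lines of code. A minimal sketch follows (an added illustration; the program name is an assumption):

    program direct_deriv
      implicit none
      real(8) :: a1, a2, x, fp, fpp
      a1 = -0.245500d0; a2 = 0.023400d0   ! coefficients from Eqs. (5.12)
      x = 3.5d0
      fp  = a1 + 2d0*a2*x                 ! P2'(x)  = a1 + 2*a2*x
      fpp = 2d0*a2                        ! P2''(x) = 2*a2
      write (*,'(2f12.6)') fp, fpp        ! prints -0.081700  0.046800
    end program direct_deriv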

A divided difference table must be constructed for the tabular data to use the divided difference polynomial:

x     f(x)       f(1)        f(2)
3.4   0.294118   -0.084040   0.023400
3.5   0.285714   -0.079360
3.6   0.277778

Substituting these values into Eqs. (5.11a) and (5.11b) and evaluating at x = 3.5 yields the solution for the divided difference polynomial:

    P'2(3.5) = -0.084040 + [2(3.5) - (3.4 + 3.5)](0.023400) = -0.081700    (5.14a)
    P''2(3.5) = 2(0.023400) = 0.046800    (5.14b)

The results obtained by the three procedures are identical since the same three points are used in all three procedures. The error in f'(3.5) is

    Error = P'2(3.5) - f'(3.5) = -0.081700 - (-0.081633) = -0.000067

and the error in f''(3.5) is

    Error = P''2(3.5) - f''(3.5) = 0.046800 - 0.046647 = 0.000153

5.3 EQUALLY SPACED DATA

When the tabular data to be differentiated are known at equally spaced points, the Newton forward-difference and backward-difference polynomials, presented in Section 4.6, can be fit to the discrete data with much less effort than a direct fit polynomial, a Lagrange polynomial, or a divided difference polynomial. This can significantly decrease the amount of effort required to evaluate derivatives. Thus,

    f'(x) ~ d/dx[Pn(x)]    (5.15)

where Pn(x) is either the Newton forward-difference or backward-difference polynomial.

5.3.1. Newton Forward-Difference Polynomial

Recall the Newton forward-difference polynomial, Eq. (4.88):

    Pn(x) = f0 + s Δf0 + s(s - 1)/2! Δ^2 f0 + s(s - 1)(s - 2)/3! Δ^3 f0 + ... + Error    (5.16)

where the interpolating parameter s is given by

    s = (x - x0)/h,  that is,  x = x0 + s h    (5.17)

and

    Error = s(s - 1)(s - 2)...(s - n)/(n + 1)! h^(n+1) f^(n+1)(ξ),  x0 <= ξ <= xn    (5.18)

Equation (5.15) requires that the approximating polynomial be an explicit function of x, whereas Eq. (5.16) is implicit in x. Either Eq. (5.16) must be made explicit in x by introducing Eq. (5.17), or the differentiation operations in Eq. (5.15) must be transformed into explicit operations in terms of s. The first approach leads to a complicated procedure, so the second approach is taken.

From Eq. (5.17), x = x(s). Thus,

    f'(x) ~ d/dx[Pn(x)] = P'n(x) = d/ds[Pn(s)] (ds/dx)    (5.19)

From Eq. (5.17),

    ds/dx = 1/h    (5.20)

Thus, Eq. (5.19) gives

    f'(x) ~ P'n(x) = (1/h) d/ds[Pn(s)]    (5.21)

Substituting Eq. (5.16) into Eq. (5.21) and differentiating gives

    P'n(x) = (1/h){Δf0 + [(s - 1) + s]/2 Δ^2 f0 + [(s - 1)(s - 2) + s(s - 2) + s(s - 1)]/6 Δ^3 f0 + ...}    (5.22)

Simplifying Eq. (5.22) yields

    P'n(x) = (1/h)[Δf0 + (2s - 1)/2 Δ^2 f0 + (3s^2 - 6s + 2)/6 Δ^3 f0 + ...]    (5.23)

The second derivative is obtained as follows:

    f''(x) ~ d/dx[P'n(x)] = P''n(x) = d/ds[P'n(s)] (ds/dx) = (1/h) d/ds[P'n(s)]    (5.24)

Substituting Eq. (5.23) into Eq. (5.24), differentiating, and simplifying yields

    P''n(x) = (1/h^2)[Δ^2 f0 + (s - 1) Δ^3 f0 + ...]    (5.25)

Higher-order derivatives can be obtained in a similar manner. Recall that Δ^n f becomes less and less accurate as n increases. Consequently, higher-order derivatives become increasingly less accurate.

At x = x0, s = 0, and Eqs. (5.23) and (5.25) become

    P'n(x0) = (1/h)(Δf0 - (1/2)Δ^2 f0 + (1/3)Δ^3 f0 - ...)    (5.26)
    P''n(x0) = (1/h^2)(Δ^2 f0 - Δ^3 f0 + ...)    (5.27)

Equations (5.26) and (5.27) are one-sided forward-difference formulas.

The error associated with numerical differentiation can be determined by differentiating the error term, Eq. (5.18). Thus,

    d/dx[Error] = (1/h) d/ds[Error]    (5.28)
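A minimal sketch of Eq. (5.26), truncated after the third difference, follows (an added illustration; the function name is an assumption):

    function dfwd(h, f0, f1, f2, f3) result(fp)
      implicit none
      real(8), intent(in) :: h, f0, f1, f2, f3
      real(8) :: fp, d1, d2, d3
      d1 = f1 - f0                       ! delta f0
      d2 = f2 - 2d0*f1 + f0              ! delta^2 f0
      d3 = f3 - 3d0*f2 + 3d0*f1 - f0     ! delta^3 f0
      fp = (d1 - 0.5d0*d2 + d3/3d0)/h    ! Eq. (5.26)
    end function dfwd

With f0 = f(3.5), f1 = f(3.6), f2 = f(3.7), f3 = f(3.8) from Figure 5.1 and h = 0.1, it returns -0.08161, which matches the third-order evaluation of Example 5.2 below.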

From the definition of the binomial coefficient, Eq. (4.90),

    (s over n+1) = s(s - 1)(s - 2)...(s - n)/(n + 1)!    (5.29)

Substituting Eq. (5.29) into Eq. (5.28), with Δ^(n+1) f replaced by h^(n+1) f^(n+1)(ξ), and differentiating yields

    d/dx[Error] = h^n f^(n+1)(ξ)/(n + 1)! [(s - 1)(s - 2)...(s - n) + ... + s(s - 1)...(s - n + 1)]    (5.30)

At x = x0, s = 0, and Eq. (5.30) gives

    d/dx[Error(x0)] = (-1)^n/(n + 1) h^n f^(n+1)(ξ)    (5.31)

Even though there is no error in Pn(x0), there is error in P'n(x0). Equation (5.31) shows that the one-sided first derivative approximation P'n(x0) is 0(h^n).

The order of an approximation is the rate at which the error of the approximation approaches zero as the interval h approaches zero. Thus, when the polynomial Pn(x) includes the nth forward difference, P'n(x0) is order n, which is written 0(h^n). For example, P'1(x) is 0(h), P'2(x) is 0(h^2), etc. Each additional differentiation introduces another h into the denominator of the error term, so the order of the result drops by one for each additional differentiation. Thus, P''n(x0) is 0(h^(n-1)), etc. To achieve 0(h^2) for P''n(x0), P3(x) must be used.

A more direct way to determine the order of a derivative approximation is to recall that the error term of all difference polynomials is given by the first neglected term in the polynomial, with Δ^(n+1) f replaced by h^(n+1) f^(n+1)(ξ). Each differentiation introduces an additional h into the denominator of the error term. For example, if terms through Δ^2 f0 are accounted for, the error for P'2(x) is 0(h^3)/h = 0(h^2), and the error for P''2(x) is 0(h^3)/h^2 = 0(h).
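The 0(h) behavior can be seen concretely in a short check (an added illustration using f(x) = 1/x; the value f(3.55) = 0.281690 is computed from the function, not taken from Figure 5.1):

    h = 0.1:   P'1(3.5) = (0.277778 - 0.285714)/0.1 = -0.079360,  Error = 0.002273
    h = 0.05:  P'1(3.5) = (0.281690 - 0.285714)/0.05 = -0.080480,  Error = 0.001153

Halving h reduces the error by a factor of 0.002273/0.001153 = 1.97, approximately 2, as expected for a first-order, 0(h), approximation.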

Example 5.2. Newton forward-difference polynomial, one-sided.

Let's solve the example problem presented in Section 5.1 using a Newton forward-difference polynomial with base point x0 = 3.5. Selecting data points from Figure 5.1 and constructing the difference table gives:

x     f(x)       Δf          Δ^2 f      Δ^3 f
3.5   0.285714   -0.007936   0.000428   -0.000032
3.6   0.277778   -0.007508   0.000396
3.7   0.270270   -0.007112
3.8   0.263158

Substituting values into Eq. (5.26) gives

    P'n(3.5) = (1/0.1)[(-0.007936) - (1/2)(0.000428) + (1/3)(-0.000032) + ...]    (5.32)

Evaluating Eq. (5.32) term by term yields

    P'n(3.5) = -0.07936_,  first order,   Error = 0.00227_
             = -0.08150_,  second order,  Error = 0.00013_
             = -0.08161_,  third order,   Error = 0.00002_

The first term in Eq. (5.32) is Δf0, so evaluating that term gives an 0(h) result. The second term in Eq. (5.32) is Δ^2 f0, so evaluating that term yields an 0(h^2) result, etc. The order of the approximation of P'n(x) is the same as the order of the highest-order difference included in the evaluation. The first-order result is quite inaccurate; the second- and third-order results are quite good.

Substituting values into Eq. (5.27) gives

    P''n(3.5) = (1/0.1^2)[0.000428 - (-0.000032) + ...]    (5.33)

Evaluating Eq. (5.33) term by term yields

    P''n(3.5) = 0.0428__,  first order,   Error = -0.0038__
              = 0.0460__,  second order,  Error = -0.0006__

The order of the approximation of P''n(x) is one less than the order of the highest-order difference included in the evaluation. The first-order result is very poor; the second-order result, although much more accurate, has only a few significant digits after the decimal place. These results illustrate the inherent inaccuracy associated with numerical differentiation.

Equations (5.26) and (5.27) are both one-sided formulas. More accurate results can be obtained with centered differentiation formulas. Centered-difference formulas can be obtained by evaluating the Newton forward-difference polynomial at points within the range of fit. For example, at x = x1, s = 1, and Eqs. (5.23) and (5.25) give

    P'n(x1) = (1/h)[Δf0 + (1/2)Δ^2 f0 - (1/6)Δ^3 f0 + ...]    (5.34)
    P''n(x1) = (1/h^2)(Δ^2 f0 + 0 Δ^3 f0 + ...)    (5.35)

From Eq. (5.34), P'1(x1) is 0(h) and P'2(x1) is 0(h^2). From Eq. (5.35), P''2(x1) is 0(h^2), since the Δ^3 f0 term is missing. The increased order of approximation of P''2(x1) is due to centering the polynomial fit at point x1.

Example 5.3. Newton forward-difference polynomial, centered.

To illustrate a centered-difference formula, let's rework Example 5.2 using x0 = 3.4 as the base point, so that x1 = 3.5 is in the middle of the range of fit. Selecting data points from Figure 5.1 and constructing the difference table gives:

x     f(x)       Δf          Δ^2 f      Δ^3 f
3.4   0.294118   -0.008404   0.000468   -0.000040
3.5   0.285714   -0.007936   0.000428
3.6   0.277778   -0.007508
3.7   0.270270

At x = 3.5, s = 1. Substituting values into Eq. (5.34) gives

    P'n(3.5) = (1/0.1)[-0.008404 + (1/2)(0.000468) - (1/6)(-0.000040) + ...]    (5.36)

Evaluating Eq. (5.36) term by term yields

    P'n(3.5) = -0.08404_,  first order,   Error = -0.00241_
             = -0.08170_,  second order,  Error = -0.00007_
             = -0.08163_,  third order,   Error =  0.00000_

The error of the first-order result for f'(3.5) is approximately the same magnitude as the error of the first-order result obtained in Example 5.2. The current result is a backward-difference approximation, whereas the result in Example 5.2 is a forward-difference approximation. The second- and third-order results are quite good.

Substituting values into Eq. (5.35) gives

    P''n(3.5) = (1/0.1^2)(0.000468 + ...)    (5.37)

which yields

    P''n(3.5) = 0.0468__,  second order,  Error = 0.0002__

The second-order centered-difference approximation of f''(3.5) is more accurate than the second-order forward-difference approximation of f''(3.5) obtained in Example 5.2.

5.3.2. Newton Backward-Difference Polynomial

Recall the Newton backward-difference polynomial, Eq. (4.101):

    Pn(x) = f0 + s ∇f0 + s(s + 1)/2! ∇^2 f0 + s(s + 1)(s + 2)/3! ∇^3 f0 + ...    (5.38)

The first derivative f'(x) is obtained from Pn(x) as illustrated in Eq. (5.21):

    f'(x) ~ P'n(x) = (1/h) d/ds[Pn(s)]    (5.39)

Substituting Eq. (5.38) into Eq. (5.39), differentiating, and simplifying gives

    P'n(x) = (1/h)[∇f0 + (2s + 1)/2 ∇^2 f0 + (3s^2 + 6s + 2)/6 ∇^3 f0 + ...]    (5.40)

The second derivative P''n(x) is given by

    P''n(x) = (1/h^2)[∇^2 f0 + (s + 1) ∇^3 f0 + ...]    (5.41)

Higher-order derivatives can be obtained in a similar manner. Recall that ∇^n f becomes less and less accurate as n increases. Consequently, higher-order derivatives become increasingly less accurate.

At x = x0, s = 0, and Eqs. (5.40) and (5.41) become

    P'n(x0) = (1/h)(∇f0 + (1/2)∇^2 f0 + (1/3)∇^3 f0 + ...)    (5.42)
    P''n(x0) = (1/h^2)(∇^2 f0 + ∇^3 f0 + ...)    (5.43)

Equations (5.42) and (5.43) are one-sided backward-difference formulas.

Centered-difference formulas are obtained by evaluating the Newton backward-difference polynomial at points within the range of fit. For example, at x = x-1, s = -1, and Eqs. (5.40) and (5.41) give

    P'n(x-1) = (1/h)(∇f0 - (1/2)∇^2 f0 - (1/6)∇^3 f0 + ...)    (5.44)
    P''n(x-1) = (1/h^2)(∇^2 f0 + 0 ∇^3 f0 + ...)    (5.45)

5.3.3. Difference Formulas

The formulas for derivatives developed in the previous subsections are expressed in terms of differences. Those formulas can be expressed directly in terms of function values if the order of the approximation is specified and the expressions for the differences in terms of function values are substituted into the formulas. The resulting formulas are called difference formulas. Several difference formulas are developed in this subsection to illustrate the procedure. The order of a derivative approximation is obtained by dividing the order of the first neglected term in each formula by the appropriate power of h.

Consider the one-sided forward-difference formula for the first derivative, Eq. (5.26):

    P'n(x0) = (1/h)(Δf0 - (1/2)Δ^2 f0 + (1/3)Δ^3 f0 - ...)    (5.46)

Recall that the error term associated with truncating the Newton forward-difference polynomial is obtained from the leading truncated term by replacing Δ^n f0 by f^(n)(ξ) h^n. Truncating Eq. (5.46) after Δf0 gives

    P'n(x0) = (1/h)[Δf0 + 0(h^2)]    (5.47)

where 0(h^2) denotes the error term and indicates the dependence of the error on the step size, h. Substituting Δf0 = (f1 - f0) into Eq. (5.47) yields

    P'1(x0) = (f1 - f0)/h + 0(h)    (5.48)

Equation (5.48) is a one-sided first-order forward-difference formula for f'(x0). Truncating Eq. (5.46) after the Δ^2 f0 term gives

    P'2(x0) = (1/h)[Δf0 - (1/2)Δ^2 f0 + 0(h^3)]    (5.49)

Substituting Δf0 and Δ^2 f0 into Eq. (5.49) and simplifying yields

    P'2(x0) = (-3f0 + 4f1 - f2)/(2h) + 0(h^2)    (5.50)

Higher-order difference formulas can be obtained in a similar manner. Consider the one-sided forward-difference formula for the second derivative, Eq. (5.27). The following difference formulas can be obtained in the same manner that Eqs. (5.48) and (5.50) are developed:

    P''1(x0) = (f0 - 2f1 + f2)/h^2 + 0(h)    (5.51)
    P''2(x0) = (2f0 - 5f1 + 4f2 - f3)/h^2 + 0(h^2)    (5.52)

Centered-difference formulas for P'n(x1) and P''n(x1) can be derived from Eqs. (5.34) and (5.35):

    P'2(x1) = (f2 - f0)/(2h) + 0(h^2)    (5.53)
    P''2(x1) = (f0 - 2f1 + f2)/h^2 + 0(h^2)    (5.54)

Difference formulas of any order can be developed in a similar manner for derivatives of any order, based on one-sided, centered, or nonsymmetrical differences. A selection of difference formulas is presented in Table 5.1.
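The stated orders of Eqs. (5.48) and (5.53) can be verified numerically by halving h and watching the error-reduction ratio (2 for 0(h), 4 for 0(h^2)). A minimal sketch follows (an added illustration; the program name is an assumption):

    program order_check
      implicit none
      real(8) :: h, e1a, e1b, e2a, e2b, x, fe
      x = 3.5d0
      fe = -1d0/x**2                        ! exact f'(x) for f = 1/x
      h = 0.1d0
      e1a = (f(x+h)-f(x))/h - fe            ! Eq. (5.48), one-sided
      e2a = (f(x+h)-f(x-h))/(2d0*h) - fe    ! Eq. (5.53), centered
      h = 0.05d0
      e1b = (f(x+h)-f(x))/h - fe
      e2b = (f(x+h)-f(x-h))/(2d0*h) - fe
      write (*,'(2f8.3)') e1a/e1b, e2a/e2b  ! prints approximately 2. and 4.
    contains
      function f(z) result(r)
        real(8), intent(in) :: z
        real(8) :: r
        r = 1d0/z
      end function f
    end program order_check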

2(0. etc. + n! Jo Ax~+"" (5.58) wherethe subscript i denotes a particular spatial location.277778 . (5. 1 tt f(t) =fo + fo’ At + ~f~ 2 +. The continuous spatial domainD(x) must be dis-cretized into an equally spacegrid of discrete points.4..4 TAYLOR SERIES APPROACH Difference formulas can also be developedusing Taylor series.4..6) -f(3.57) wheref0 =f(x0). (0. Eq.6) . (5. For the discretized x space. Chapter5 Let’s illustrate the use of difference formulas obtained from Newtonpolynomials by solving the example problem presented in Section 5..1) (5..1)2 = 0.54).294118 (0.5) =f(3.. 5.f ~ =f’(x0).6): 1 .08170_ 2(0. Difference formulas for functions of time. Thus.53) and (5. TheTaylor series forf(x) at grid points surroundingpoint i can be combined obtain difference formulas forf’(xi).1. "-~ ~.56) Theseresults are identical to the second-order results obtained in Example 5..5) +/(3. Newtonpolynomial difference formulas.3. f(t).294118 = -0.5) =f(3.2f(3.4) = 0.0.IA(n) n At +. for example.1)2 (0. Difference formulas for functions of a single variable. Calculate the second-order centered-difference approximation off’(3.5) using Eqs.f"(xi) to . U2(3. can be developed from the Taylor series for f(t): . f(x). etc.264 Example5. O(x) i+1 i+2 x Figure5.4) = 0. .55) U~’(3. 1 F(n) f(x) =fo +f~ Ax + ~f~ 2 +.5) and f"(3. f(xi) : (5.4 Continuous spatial domain D(x) and discretized x space.0468__ (5. This approachis especially useful for deriving finite difference approximations of exact derivatives (both total derivatives and partial derivatives) that appear in differential equations.59) .. as illustrated in Figure5.277778 . can be developed from the Taylor series for a function of a single variable.1) 2(0.285714) 0.

f(x. terms such as (O/Ox) etc. can be developedfromthe Taylor series for a function of several variables. etc. TheTaylor series forf(t) grid points surrounding grid point n can be combined obtain difference formulas for to f’(t~). t).lo AxAt +ftlo Aft) (5. where f0 =f(t°). n+l D(t) n n-1 Figure5.)" is expandedby 0. must be discretized into an orthogonal equally spaced grid of discrete points. xt D(x. The continuous temporal domainD(t) must be discretized into a grid of discrete points. as illustrated in Figure5. Differenceformulas for functions of several variables.6.. etc.61) I(AX~x"~-At~)n fo+. for example. t).fd =if(t°).Numerical Differentiation andDifferenceFormulas 265 t. binomial expansion.. and ~. n f(x i. For the discretized t space.. For the discrete grid. t n) =fi D(x. The expression (. (0.fxlo =fx(X to). D(x. to). are interpreted as 0~/0x etc..5 Continuous temporaldomainD(t) and discretized t space. f(t ~) =f" (5.12): f(x. The continuous xt domain. where J~ =f(xo.6 Continuous domain t) and discretized xt space. as illustrated in Figure5. ~..62) n-1 i+1 i+2 x Figure 5. etc. Eq. the increments in Axand At are raised to the indicated powers.60) wherethe superscript n denotes a particular temporallocation.t) n+l (5. t) =~ + (LIo Ax+filo At) + ~(f=lo zkx2 + 2f~.5. .f"(t").

fxt.)x]0 Zk~ + Rn+l "+1 where the remainder term R is given by Rn+l _ 1 (n + 1)! f.68). Truncatingthe Taylor series is equivalen. (5. f~.+~)x(~) (5. (5.10): 1 1 f(x) =f0 +fxl0 z~x + ~fxxl0 ~ +"" + ~f. t) with respect to x. This approach does not workfor mixedpartial derivatives.67). the following common notation for derivatives will be used in the development of difference formulasfor total derivatives and partial derivatives: d (5.partial derivatives off(x. and the Taylor formula with remainder.57) and (5. of the function f(x) is obtained from Eq. wheref~ correspondsto fx[0. and then either .57). Finite difference approximationsof exact derivatives can be obtained by solving for the exact derivative from either the infinite Taylor series or the Taylor formula.65) ~x (f(x.61).266 Chapter5 wherethe subscript i denotes a particular spatial location and the superscript n denotes a particular time level. Eq. Eq. (5. n) be combined obtain difference formulas forf~.63) Equation(5. such as fxt. can be written 1 1 (5.fn)xlo &’f +"" (5. (0. Eq. (5.64) ~x (f(x)) = fx 0 (5. f(.63) are identical in form.66) f(x o. (5. TheTaylor series forf(x.. The Taylor series for the functionf(x). t) with respect to t with x = 0 =constant can be obtained from the expression 1 1 (5. to For partial derivatives off(x.69) (5. t = o =co nstant.63) in exactly samemanneras the total derivative.t to droppingthe remainder term of the Taylor formula. (5. to) =f0 +fxl0 ~c + ~fx~10ZLvz+’:’ + ~. the difference formulas for f0’ andf~[0 are identical if the samediscrete grid points are used to developthe difference formulas.)x]o ~ +"" The Taylor formula with remainder is given by Eq. To emphasize this concept. partial derivativefx[0 of the functionf(x. t) at grid points surrounding point (i. (5.)t[0 At~ +’" Partial derivatives off(x. Consequently.61) becomes 1 1 f(x.. At= 0 and . are equivalent. etc. difference formulasfor partial derivatives of a function of several variables can be derived from the Taylor series for a function of a single variable. (5. Since Eqs. t) =f0 +f[0 At + -~ ft[o Atz +"" + ~J~. t)) =fx In a similar manner. etc. Difference formulas for mixedpartial derivatives must be determineddirectly from the Taylor series for several variables.68) wherex0 _< The infinite Taylor series. t) can be obtained from Eq.63) is identical in form to Eq.67) f (x) :f0 +fx[0 z~c + -~ fxx[O ~c2 +"" + ~. The error incurred by truncating the infinite Taylor series after the nth derivative is exactly the remainderterm of the nth-order Taylor formula.57). Eq. (5.57).f. Eq. t) withrespect to t are identical in formto total derivatives off(t) withrespect to t.

and write the Taylor series for f/+l andf-_l: f+l =f +fxli Z~x ½ fxx[i + ~ Av ~ fxxxli z~x3 "nt. _f/+~.=f+l2 ..53).76) where xi_~ < ~ < xi+~. If the remainder term is truncated.. whichare identical to the remainderterm of the Taylor formula. f+l -f-I = 2fxli Ax+ ½fxxxli/~X3 -t.72) Letting the fxx~term be the remainderterm and solving for fxli yields fxb -f+~2Ax-f-~ ~f~xx(~) Ax2 (5.71) for 1 fr om Eq(5 . (5. o li.77) Thetruncated result is identical to the result obtained from the Newton forward-difference polynomial. (5.70) fo r f+ l gi ves .70) forf+~ and Eq. .71) for f_l gives f/+l ’~-ft-i = 1 Ax " 2f + fxxli ~c2 + -f~fx~xxli4 +. Letting the fxx~xterm be the remainder term and solving for fxx l i yields fxxli _. The terms whichare truncated fromthe infinite Taylor series. Eq. are called the truncation error of the finite difference approximation the exact derivative. our main of concernis the order of the mmcation error. Equation(5. ~1~-f+~-f/-I 2Ax (5.Numerical Differentiation andDifferenceFormulas 267 truncating the Taylor series or dropping the remainder term of the Taylor formula.70) and (5.73) where xi_~ <_~ < xi+~. Choosepoint i as the base point. Thus.Eq. AddingEq. Tnmcating remainder term yields a finite difference approxthe imation forfxxli. (5.73) yields an 2) fini te diff erence appr ximation off~ Thus . (5." " " (5.2f +~-1 12f~x~(¢) Ax2 1 Ax (5. The order of the truncation error.71).75). ¯ (5.73) is an exact expression for fxli. In mostcases.74) The truncated result is identical to the result obtained fromthe Newton forward-difference polynomial. whichis the rate at whichthe truncation error approacheszero as Ax--~ 0.54).70) (5. (5.4. Eq. whichis the order of the remainder term. is denoted by the notation 0(Ax"). (5. Eqs. (5.71) fi-1 : fi -fxli Ax + ½ fxxli Ax2 -~fxxxli Ax3 + ~f~xli Subtracting Eq.. Considerthe equally spaceddiscrete finite difference grid illustrated in Figure 5.¯ ¯ ¯ z~x4 .~4fxxxxli /~x4 -~.2f +f_~ (5. (5. These two proceduresare identical. whichis equivalent to truncating the infinite Taylor series.

82) n. To obtain the most accurate results possible. Choosepoint n as the base point. (5. and write the Taylor series forf n+l andf~-l: f~+l=f~+fl~At+Eft fn-1 1 n2+. as illustrated in Figure 5.56).4 and 3..5) andf’(3.294118 = -0. Let’s illustrate the use of difference formulas obtained from Taylor series by evaluating f’(3..4) = 0. Thus.f" ~L(z) fin (5.79) Theseare the sameresults that are obtained with the difference formulas developedfrom the Newtonforward-difference polynomials illustrated in Example5.0(0.2.279778 .279778 .4.2. use the closest data points to x = 3.6) .0f(3. Difference formulas for time derivatives can be developed in a similar manner.80) (5.55) and (5. (5.285714) 0. (5.5) =/(3. Example5. Eqs. f’(3.77) are difference formulas for spatial derivatives. The time dimensioncan be discretized into a discrete temporal grid. wherethe superscript n denotes a specific value of time.81) =f. from Eq.2 At (5.74).77) are centered-difference formulas.82) is a first-order forward-difference formula for ftl Subtracting Eq. 294118 2. [ At (5.83) Letting the I n term be the remainder term and solving for f in yields 1 -~ftt(:c) At2 fin fn+l _ fn-1 -.6.4) = 0.08170_ 2(0.80) for f[" yields f"+~ -. Equation(5. (5.78) FromEq.0468__ +/(3.5. wheret ~ < z < t n+l.77).At . (5. (5.1) (5.0(0.5. -fl" At +iftl At2 -- Letting theftl ~ term be the remainder term and solving Eq. respectively.5) =f(3.5) for the data presented in Figure 5. They are inherently more accurate than one-sided difference formulas.0(0.6) -f(3.1 using Eqs.74) and (5.77). f(t ") =f~. x = 3.1) 2(0.1) (5.84) . that is.81) forf "-1 from Eq.80) forf ~+~ gives fn+l __fn-1 fttt :2ftln At "t- 1 n "~fttt[ At3 +"" (5.74) and (5. Taylor series difference formulas. Thus.268 Chapter5 Equations (5. f’(3. (5.0.1) = 0. Equations(5.5) 2.74) and (5.5.

Higher-order difference formulas require more grid points.AX4 -~.90) (5. one-sided backwarddifferences." " " + + (5.. and (c) whichgrid points are to be used? The coefficient offx is Ax.89) where xi_2 < ~ < xi+~. If the remainderterm the difference formulais to be 0(Ax3).~fxxxli 3 Ax .2fxli Ax-t.. and f/_ 2 are: f+~ =f +fxliAx+½fxxliAx2 fi-I :f/ --fx]i Ax-t-½ +~fx~liAx3 +~4fxx~(~OAx4+.91) forfxl i yields 4 (5. For a backward-biased difference formula.(4f/+~ -f/_2) = -3f~+ 6fxl~ + ~x~x. based on one-sided forward differences.. in addition to the base point i..~fxxli Ax2. nonsymmetricaldifferences.87) (5.~fx~x~(~-2) . Equation (5. Ax Ax1~. difference formula for fx.~4fxxxx(~) AX4 0(AX5) Ji(5. (5. (5. AX4 . Third-order nonsymmetricaldifference formula for f~. such as Eq. since the formulas must be divided by Axwhensolving forfx. Centred-difference formulas are inherently more accurate than one-sided difference formulas.89) by 6 and subtracting Eq. can be obtained by different combinationsof the Taylor series for f(x) or f(t) at various grid points.92) . Forming the combination (f+l -f-~) gives (f+l -f-~) -. Consequently.. thus giving a third-order difference formula forf~. centered differences. Let’s develop a third-order.~ ~ Solving Eq.82).f_~. (b) howmany grid points are required. Difference formulas of any order.85) Three questions must be answeredbefore the difference formula can be developed: (a) Whatis to be the order of the remainderterm. are required. The Taylor series forf(x) is: f(x) =f ’~-fx[i Ax--~ ½fx~l.~fx~xli Ax3-t.6. ~nonsymmetrical.2. as do formulas for higher-orderderivatives. so that fxx and fxxx can be eliminated from the Taylor series expansions..then the remainderterm in the Taylor series must 0(Ax4). The fourth-order Taylor series forf+~. Ax~ -~ f~l.6fi_~+3f~+:Zf~+~f~’~’(~) Ax3 6Ax (5.1.2fxli Ax~fxxxli Ax q-0(A 3 x5) Forming the combination (4f+ 1 -f/-2) gives (4f/+l --f/-2) = 3fi all6fxli Ax l. (5.Numerical Differentiation andDifferenceFormulas 269 where t n-~ < ~ < t n+l .three grid points. Example5.. and i ..86) (5. i . Ax3 ~4fxxx~l.91) L[~fi_~.88) fxxli Ax2--~ fxxxli AX3 -I-~4fxxxx(~-l) AX4 f-2 ----f . backward-biased.84) is a second-order centred-difference formula for fl n. (5.90) gives 6(f~+1 -f/-1) . (5. etc.. choosegrid points i + 1. Multiplying Eq.

.115) . and solve for the the desired derivative and the leading truncation error term.1] 3) where n is the order of the leading error term and rn is the incrementin the order of the following error terms.1) grid points. (5. (5. replace x by t in steps 1 to 7. Specify the order n of the derivativeJin)x for whichthe difference formulais to be developed. m.backward-biased. Specify the type of difference formula desired: centered. Determinethe numberof grid points required. 5. In summary.the error can be the estimated by evaluating the algorithm for twodifferent incrementsizes. Consider a numerical algorithm which approximates an exact calculation with an error that dependson an increment. (5.5 DIFFERENCE FORMULAS Table 5. For temporalderivatives. Thus. 1. h1 = h and h2 = h/R.O(h n+m) (5..92) yields a third-order. Combine Taylor series to eliminate the undesired derivatives.114a) gives 0 =f(h) -f(h/R) Ah n -I -~t(h/R) n q.1 presents several difference formulas for both time derivatives and space derivatives. backward. procedure for developing difference formulas by the Taylor series the approachis as follows.114b) from Eq. 5. Applying the algorithm at two increment sizes. Ax 3. nonsymmetrical. 5. and in Chapters 9 to 11 in the numericalsolution of partial differential equations. 2..1 14a) (5.O(h n+m) (5. which is at most (m ÷ n . These difference formulas are used extensively in Chapters 7 and 8 in the numerical solution of ordinary differential equations. The error estimate can be used both for error control and extrapolation. 4. forward. 6.6 ERROR ESTIMATION AND EXTRAPOLATION When functional form of the error of a numerical algorithm is known. gives fexact fexact n =f(h) Ah q. Chbosethe order m of the reminder term in the difference formula Ax Determinethe order of the remainder term in the Taylor series.1). h.270 Chapter5 Truncating Eq.difference formulafor f~li.--}-.or nonsymmetrical. (5. m+n. fexact =f(h) + Ah n + Bh n+m -}- Chn+2m .114b) = f(h/R) + A(h/R)" + n+m) Subtracting Eq. 7. Write the Taylor series of order (m + n) at the (m + n .

Numerical Differentiation and Difference Formulas Table 5..107) (5.95) (5.2f+l +f+2 ~f2 fx.104) (5.~2 zXx 3fi ~f=({) -1.96) (5.5f/_ 1 -~..112) .:ccxxx(~)4 3 fxli =-2f/_~ .6f/-1 6Ax "~ 3f/’~ 2f/+l 12f~-~({) Ax3 --f+2 ~f=~x(~) + 1 .94) (5.~2 {.108) fxli.+z ~f’~(~) fxli= f-2 -. Difference Formulas f’+~ -f~ f [~ .8f/-112 +/~ 8fi+l --fi+2 fxx[i -.2fi 2 Ax 11 + ]~fxxxx({) ZL~c2 30f ~0 f=li =--f-2 + 16f/-12 -Z~¢ + 16fi+~ -f+z+ fxxxx~v 12 (5.97) (5.1o3) (5.5f/+l 1 fx~rx(~) 2 12 Ax (5.99) (5.102) (5.106) (5.-~fxxxx( {) AX 2 fxxli = --f/_3 -~" 4fi_2 -.100) (5.110) (5.101) "~f~xx( Ax3 ) 9f/+2 fxli -.2 At Ll~ __fi+~ -fi Ax fxli f/+l2 Ax ~-fxxx Axz --f/-1 (¢) Ax2 fxli = -3f.f -.109) (5.3fi -~6Ax 6fi+l fx[i-.98) f I" -.1.Ax2 2f/"q-f-1 2f -. + 18f’+16Ax- (5.93) (5.-1 If.~f.--2f/-3 -~.At 1 ~f~(z) At At 1 ftt(z) At2 24 1 ~ftt(z) At2 1 2fxx(~ Ax ) 271 (5.-I .111) (~) + 4f+ 2 --fi+3 11 .4f/-i -I.(r) ftl n+l/2 _fn+~ _fn At fin fn+l _f.At q.~(~) ~f fxxli__fi-2 2f/-1 --~-f/ fxxli __f+l -.q_ 3~f.fi-2 -.2fi+3 -- (5. + 4f/+~ 2 Ax-f.9fi-2 -.18f-1+llf 1 ({) Ax3 6Ax " + ~f~ 1 fxli - f-2 -.105) (5.

(5. Error(Ax/2) = ~ [-0. Thevalue of error estimation and extrapolation are obvious in this example.73) with Ax = 0.5 where f~(3.R n R 1 ( f( h/R) .117).5) using Eq.2 gives f’(3.2.f(h)) n+m) + O(h (5.114b) n n Error(h) Ah -.73) with Ax = 0. (5. Thus.115) for the leading error terms in Eqs.000067. (5. l16a) (5. Example5. n + m.117).270270 .117) The error of the extrapolated value is O(hn+m). AddingEq.7) -f(3.081700 ÷ 0. As seen in Figure 5. gives Extrapolated value = -0. This process is called extrapolation.5) = -0.1.05.3 and 3. lts to yield an improved approximation.114b) gives 1 Extrapolated value = f(h/R) + ~-f (f(h/R) .(-0. (5.45 and 3.081900)] = 0. TwoO(hn+m) extrapolated results can be extrapolated to give an 0(h"+2’~)result. .5) are available.1.3) 2(0. the error estimate for the result with the smaller Axcan be calculated from Eq. (5.5) =f(3.081900 2(0.f (h Chapter5 __ (5. in Eq.116b) to Eq. (5. which requires data x = 3. I 18) The exact error in this result is Error = -0. those points are not available.081900 .116b) n1 Error(h/R) = A(h/R)n -. Eq. However.(-0.55.120) (5.n.000067 Applyingthe extrapolation formula.73) with Ax = 0. (5. Error estimation and extrapolation.081633) -0. for which the exact error is Error = -0.116) can be used to estimate the leading error terms in Eq.117) is replaced with the exponent.0. wherethe exponent. (5.081633 (5.2) (5.R _ 1 (f(h/R) -f(h))’ Equation (5. (5. ApplyingEq.000267. which is approximately four times larger than the exact error obtained in Example5. Higher-order extrapolations can be obtained by successive applications of Eq. Example5.272 Solving Eq.303030 _ -0.119) whichis the exact value to six digits after the decimalplace. A more accurate result could be obtained by evaluating Eq.000067 = -0.114).7.081700 .(-0. Now that two estimates off~(3.081633) = -0.081700.081700 . data are available at x = 3.7. (5.116b). (5. (5.5 evaluates f~(3. for which Ax= 0.114a) and (5. The error estimates can be addedto the approximateresu.2) = 0.

+ n(n . whichspecifies the degree of the polynomials. n = 3 in this example ndeg degree of polynomial. A common programmain defines the data sets and prints them. Differentiation of a direct fit polynomial Differentiation of a Lagrangepolynomial Differentiation of a divided difference polynomial FORTRAN subroutines direct. in which case the degree of the polynomial to be evaluated must be specified. + n-1 nanx P~(x) = 2a2 + 6a3x +. for implementingEq.7.to facilitate the addition of other-degree polynomials to the subroutines.7 PROGRAMS 273 Two proceduresfor numericaldifferentiation of tabular data are presented in this section: 1.7.7): P’n(x) = 1 + 2a2x + 3a3x2 +. (5.It is included..xn-2 1)an (5.Numerical Differentiation andDifferenceFormulas 5.1.121b) A FORTRAN subroutine. and divdiff are presented in this subsection for implementingthese procedures. 2. subroutines ndim array dimension. however. ndeg = 2 in this example n number of data points xp value of x at which to evaluate the derivatives x independent variable. Differentiation of a direct fit polynomial program.1..1 Direct Fit Polynomial The first. The the only change in programmain for the three subroutines is the call statement to the particular subroutine and the output format statementidentifying the particular subroutine.121a) (5. subroutine direct. is thus not needed in the present programs. Input data and output statements are contained in a main(or driver) programwritten specifically to illustrate the use of each subroutine. x(i) C C C C C C . 5. 2. Derivatives of unequally spaced data Derivatives of equally space data All of the subroutines are written for a quadratic polynomial. 3. lagrange.and second-orderderivatives of a direct fit polynomialare given by Eq.. (5.1..1. The basic computational algorithms are presented as completely self-contained subroutines suitable for use in other programs. Higher-degreepolynomials can be constructed by following the patterns of the quadratic polynomials. The variable ndeg. Derivatives of Unequally SpacedData Three procedures presented differentiating unequallyspaced are for data: 1.121) for a quadratic direct fit polynomialis presented in Program 5. calls one of the subroutines to implement solution. and prints the solution. program main main program to illustrate numerical diff. 5. Program 5.

f(i) fx numerical approximation of first derivative fxx numerical approximation of second derivative dimension x(3). 1020) fx. 3. 1000) do i=l.llx.274 c c c Chapter5 f dependent variable.x) simple gauss elimination 1000 1010 1020 . 0. "f’/’ format (i3. fx.081700 f 0.n write (6. ’ and fxx =’. 3. ndeg.x(i).c) fx=c (2) +2. n. 3.a(3. n.i=l.c(3) data ndim. Direct i 1 2 3 fx = fit polynomial x 3. n. f (ndim). xp. f.a. ndeg.400000 3. xp.ndim) . fxx.500000 3. Solution by differentiation of a direct fit polynomial.f(i) end do call direct (ndim. 2. a (ndim.b(3).6) end subroutine direct (ndim.5 / data (x(i). n. "x’.6. 2) =x(i) a (i.b.3). fxx.294118 0.O a (i. a.6x.xp / 3.4.6) format (’ ’/" fx =’.5f12.285714.b.n a(i. do i=l.3) / 0.i=l.5.1. O*c (3) return end subroutine implements end gauss (ndim.f12.6 data (f(i). f.294118.285714 0.1010) i.) c c The data set used to illustrate subroutine direct is taken from Example5. a. Output 5. O*c (3) fxx=2. 3) =x(i) b(i)=f(i) end do call gauss (ndim.1. c) write (6. b (ndim).1.3) / 3.f12.046800 .600000 -0. fx.f(3).a. ndeg.l)=l.x. 3. n.b. fxx s top format (’ Direct fit 19olynomial’/’ "/" i’. 0. c) direct fit polynomial differentiation dimension x (ndim).277778 write (6. x. b.277778 and fxx = 0. The output generated by the programis presented in Output 5.

Numerical Differentiation andDifferenceFormulas 5. xp. for implementing Eq. . fx.1.7.6x.b)(a .b.llx.b)(a (.2 Lagrange Polynomial 275 The first. a.2.a)(b .2x . 19rogram main main program to illustrate numerical diff. (5.c) (b . x.(b +Jta)~. ’x’. n. c) 1000 format (" Lagrange polynomial’/’ ’/’ i’. f’(x) ~ Fz(X) -.2. Output5. output generated by the programis presented in Output 5. Lagrange i 1 2 3 fx = x 3.294118 0. O*xp.500000 3.O* f (I) / (a-b)/ (a-c)+2. n.(a c) + (a . subroutine lagrange.9): 2x . f. ndeg.P~(x) f"(x) = 2f(a) q 2f(b) ~ 2f(c) (5. The is 5. f (3) call lagrange (ndim.~---a-~ _-~f(b) + (. fxx.(b+c) ) * f (I) /(a-b)/(a-c) (2.(a+c) ) + 1 /(b-c)+(2.and second-order derivatives of a quadratic Lagrangepolynomialare given by Eq.277778 and fxx = 0.122) is presented in Program5. (5. Differentiation of a Lagrangepolynomial program.600000 -0. subroutines dimension x(3) .122a) ~. fxx) Lagrange polynomial differentiation dimension x (ndim) .285714 0. fx.(a + b) ~. O*xp.~---a)-~ ~ ~) Jtc) (5. O* f (2) / (b-a)/ (b-c)+2. f .400000 3. ’f’/’ ’) end subroutine lagrange (ndim. Program5. Solution by differentiation of a Lagrange polynomial.081700 polynomial f 0.2.122b) (a .0*xp-(a+b))*f(3)/(c-a)/(c-b) fxx=2. ndeg. O* f 1 / (c-b) return end The data set used to illustrate subroutine Lagrange taken from Example I. xp. f (ndim) a=x(1) b=x(2) c=x(3) fx= (2.c) (c A FORTRAN subroutine. 2x .046800 .2.x.

subroutines dimension x(3) . polynomial x 3. Solution by differentiation of a divided difference polynomial.. xp.6x. x.¯ ¯ ¯ 1 2 (5.P’~(x) = 2f(i 2) + [6x .1.0* fl 2 return end The data set used to illustrate subroutine divdiff is taken from Example 5. 046800 -0.llx. f (3 call divdi ff (ndim.1. f. xp.(x) =f0) + [2x .2(x q-X1 o Xl)]f/(2) JcXz)X ~.277778 0. Program 5.XlX2)]fi (3) -~.400000 3.276 5.7. (5.3. f.294118 0.2(xo +x1 + x2)]~ +.P’. x. b.285714 0.500000 3.3. program main main program to illustrate numerical diff.3. 081700 and fxx = . fx. fxx.123a) 3) f"(x) ~.(5.123b) A FORTRAN subroutine. n.. Differentiation of a divided difference polynomial program.(X0X JcXoX -~.and second-order derivatives of a divided difference polynomial given by are Eq. (5. poly. subroutine divdiff for implementing Eq.3. fx.11): f’(x) ~. Output5. c) iooo format (" Divided diff.’/" ’/" i’. n.600000 f 0. "x’.3.123) for Pz(x) presented in Program5. f (ndim) fll= (f (2) -f (1) ) / (x(2) f21=(f (3) -f (2)) / (x(3 ) f12= (f21-fll)/ (x(3) -x(1) fx=fll+(2.(xo "~~ + [3x . Divided i 1 2 3 fx = diff. "f’/" ’) end subroutine divdiff (ndim. a. The output generated by the programis presented in Output5. O*xp-x(1) -x(2) ) fxx=2. fxx) divided difference polynomial di fferen t ia t i on dimension x (ndim) . ndeg. ndeg. Divided Difference Polynomial Chapter 5 Thefirst.

125) P2(x)is presentedin Program It definesthe data set and prints it... i).7. 1030) stop c c c c c c c c . 1030) dx(2) . 3.303030. 0.124) and (5.4. dx(2) . Error(h/2) ½[’(h/2) -f ’(h)] f Extrapolated value =f’(h/2) Error(h/2) (5.) (5. num.Numerical Differentiation andDifferenceFormulas 5. num. n = 5 in this example ndeg degree of polynomial. (5. program main main program to illustrate numerical diff.5) / 3. 3. f. data ndim. f (i) f fx numerical approximation of derivative. fx(l. fxx(2..4. to implement solution.27).285714. subroutines ndim array dimension. fxx) write (6.4.i=l.1010) write (6. the Program 5. x (i) x dependent variable.7 data (f(i).j) dimension x (5). i).5. (5. ndeg = 2 in this example number of derivative evaluations for extrapolation num number of data points n independent variable.117).f(i) end do call deriv (ndim. 0.dx. 3. 1000) do i=l. subroutine deriv. ndeg.124b) P~(xo) = ~-ff (A~f0.i=l. fxx(2.) The extrapolation formulas are given by Eqs.x(i). write (6. write (6. write (6.277778. fx(l. 1 0..3.26) and (5. 3. Differentiation of a quadratic Newtonforward-difference polynomial program.n.1005) i.A3f0+. f (5). fx(i.125a) (5. ndeg.1040) dx(1) . 2. fx.270270 / write (6. 2).x. fx(2. and prints the solution. 2. fx(2.116b) and (5.5) / 0. 0. 1050) write (6. fxx(l.2 Derivatives of Equally SpacedData 277 The first and second derivatives of a Newton forward-difference polynomialare given by Eqs.125b) A FORTRAN subroutine. respectively: 1 +. calls subroutinederiv 5. dx(2) . 5 / data (x(i). fxx(l. n ~ 5.294118.n write (6. for evaluating Eqs (5.6. 1020) dx(1) .

6) subroutinederi v (ndim. num) .6) (4x. dx(1) =x(3) -x(1) dx(2) =x(2) -x(1) fx(l.4. i) -fxx(l.285714 0. n. O*fx(2. O*fxx(2.) (i4.100000 0.6) (’ fxx’.200000 0.081700 0.200000 0. 2) = (4. dx.500000 3. x.’O(h**4)’/" (’ fx ". and software packages are available for numerical differentiation. l)=(f (4) -2.600000 3.2 and 5. i))/3. O*f (3) +f (1) )/dx fxx(2.4. num. fx.5* ( f (5) -f (I))/dx(1) fx(2.046800 0. fxx (num. I) return end The data set used to illustrate subroutine deriv is taken from Examples 5.081900 -0.300000 3. f.3f12. and mainframe computers have such libraries attached to their .277778 0. 2)= (4.5x. data f 0. i) =0.5* ( f (4)-f (2))/dx(2) fx(l. f (ndim). The output generated by the program is presented in Output 5. O’f(3) +f (2) ) fxx(l.3fl2. ’O(h**2)’.400000 3.i00000 fxx 0. ’dx’.294118 0.081633 fx 0.303030 0. Output 5.3. ’f’/" .278 i000 1005 i010 1020 1030 1040 1050 forma t format format format format format format end Chapter 5 (’ Equally spaced data’/’ "/" i’.046800 0(h*’4) -0. i)=0.7. fx (num.270270 O(h**2) -0.7.8x. "x’. dx (num) .046800 5.llx.700000 dx by differentiation of a Newton forward-difference polynomial.6) (’ ’/lOx. l)-fx(l. ndeg.6x. Packages for Numerical Differentiation Numerous libraries Many workstations operating systems. fxx numerical differentiation and extrapolation for P2 (x) dimension x (ndim).i) = (f (5) -2. Equally i 1 2 3 4 5 Solution spaced x 3. 2f12. 3f12.0 fxx(l.

15. 4. 10. 9. 5. and the divided difference polynomial work well for both unequally spaced data and equally spaced data. 11. The Newtonpolynomials yield simple differentiation formulas for equally spaced data. you should be able to: 1. are derived by both the Newtonpolynomial approachand the Taylor series approach. 2. Difference formulas are used extensively in the numericalsolution of differential equations. 6. Describe the general features of numericaldifferentiation Explainthe procedurefor numericaldifferentiation using direct fit polynomials Applydirect fit polynomialsto evaluate a derivative in a set of tabular data ApplyLagrangepolynomialsto evaluate a derivative in a set of tabular data Applydivided difference polynomials evaluate a derivative in a set of tabular to data Describe the procedure for numerical differentiation using Newtonforwarddifference polynomials Describe the procedure for numerical differentiation using Newton backwarddifference polynomials Describe the procedure for developing difference formulas from Newton difference polynomials Develop a difference formula of any order for any derivative from Newton polynomials Describe the procedurefor developing difference formulas from Taylor series Developa difference formula of any order for any derivative by the Taylor series approach Be able to use the difference formulas presented in Table 5. Least squares fit polynomials can be used for large sets of data or sets of roughdata. MACSYMA.also contain and routines for numericaldifferentiation. This table is used in several of the problems in chapter. 14. MAPLE. 5. the Lagrange polynomial. 1989)contains a routine for numericaldifferentiation. Difference formulas. More sophisticated packages.. 3. MATHEMATICA. 7. 8.Numerical Differentiation andDifferenceFormulas 279 ManycommercialsoRwarepackages contain numerical differentiation algorithms. the bookNumerical Recipes (Press et al.8 SUMMARY Procedures for numerical differentiation of discrete data and procedures for developing difference formulas are presented in this chapter. such as IMSL. Finally.1 Explain the concepts of error estimation and extrapolation Applyerror estimation Applyextrapolation EXERCISE PROBLEMS Table1 gives values off(x) ---. The numerical differentiation formulas are based on approximating polynomials. The direct fit polynomial. 12. 13. . which approximate derivatives in terms of function values in the neighborhood of a particular point. After studying Chapter 5.exp(x). Someof the more prominent packages are Matlab and Mathcad.

the For the data in Table1. and 1.98. Compute comparethe errors. evaluatef’(1.1 and i. 0. Compute comparethe errors. to workthe following problems. and 1.55998142 2.01. 3.Notethat the algebra is simplified considerablyby letting the base point value of x be zero and the other values of x be multiples of the constant increment size Ax. 0.Use the symbolic Table 2. Compute errors and compare the the ratios of the errors for parts (a) and (c) and parts (b) and (d). and 1.01. and For the data in Table1.99 and 1.00.01. (c) i . and 1. and For the data in Table1. (b) 1.04 1.96 0. and i+ 2. (b) i . Table 2.00.71828183 2.02. Difference formulascan be derived from direct fit polynomials fitting a polynomialto a by set of symbolicdata and differentiating the resulting polynomial. andi+2.82921701 2. Compute and compare the errors. evaluatef’(1. and 1.98 and 1.95 0. Compare errors in Problems1 to 3 and discuss. 0. (c) 1.03. whereAx is considered constant.01.88637099 5. .63794446 2.99. and 1. i. i.280 Table 1.1.94 0.98 f(x) 2. Values off(x) x 0.i+ 1.0) using direct fit polynomials with the followingdata points: (a) 0.96 and 1.06 f(x) Chapter5 2.66445624 x 0.00.i+ 1.99.01.0) using direct fit polynomials with the followingdata points: (a) 0.00.98.00.80106584 2.1.00. and 1. evaluatef’(1.05 1.99. 1.85765112 2. 1. 5.03 1.00 and 1. 1.02. and i+ 1.0) andf’(1.61169647 2. (b) 0.2 Unequally Spaced Data Direct Fit Polynomials 1. 4. evaluatef’(1.00 1.The truncation errors of such difference formulas can be obtained by substituting Taylor series into the difference formulas to recover the derivative being approximated accompanied all of the neglected by terms in the approximation.0) using direct fit polynomials with the followingdata points: (a) 1.99 1.58570966 2. 0.00.0) using direct fit polynomials with the followingdata points: (a) 0. and (b) 0. 1.74560102 2. 1. 1.04.0) andf’(1. SymbolicValues off(x) x Xi-2 Xi-1 f(x) fi-2 ~i-1 x Xi+I Xi+2 f(x) fi+l f/+2 xi ~ Derive difference formulas for f’(x) by direct polynomial fit using the followingdata points: (a) i and i + 1.(e) i. For the data in Table1.02. (c) 0.04.96.00.1 and i + i.97 0.0) andf’(1. and(f) i-2.0) andf’(1. and (d) 0.i. (b) 0. Compare results with the results of Problem 3.99 and 1.0:2.02 f(x) 2.01 1.98.97.77319476 x 1.69123447 2.

(5.03. and 3 with the followingpoints: (a) 0. and 3 with the followingpoints: (a) 1. Derive Eq. 23.06. evaluate fi(1.23). . Derive Eq. determinethe leading tnmcationerror term. andi + 1. determine the leading truncation error term. 2. (5. 19. evaluate f~(1. Derive Eq. (c) i .Construct a difference table for that set of data.41). polynomials. divided divided divided divided difference difference difference difference polynomials.00. 11. polynomials. Derivedifference formulasforf’i(x) by direct polynomial using the following fit data points:(a) i . 17.98 to 1. and (d) i. 14. 13. 21. and i + 3. i + 1. and (b) 0.3 Equally Spaced Data The data presented in Table 1 are used in the following problems. and i ÷ 2. evaluatef’(1.1.97 of to 1.00. Lagrange polynomials.02. i ÷ 2. formulas for the third-degree Lagrangepolynomial. For the data in Table 1.94 to 1. Compare errors and ratios of the errors for the the two incrementsizes. 18.0) andf~(1. Lagrange Polynomials 8. Difference formulas can be derived from Newton polynomials by fitting a polynomialto a set of symbolicdata and differentiating the resulting polynomial. For the data in Table 1. (b) i.00 to 1.1.99 of 1.1. 15. 2. andi + 2. 12. Compare with the results of Problems17 and 18 and discuss. For each result. Construct a difference table for that set of data throughthird differences for use in these problems. i .Thetruncation error can be determinedfrom the error term of the Newton polynomial.0) andf~1(l.Numerical Differentiation andDifferenceFormulas 281 7. Derive Eq.25). polynomials. 10.0) using Newton forwarddifference polynomials orders 1. For each result. (5.0) andf"(1. 22. Comparewith the results presented in Table5. Compare with the results presented in Table5. i. DividedDifferencePolynomials 5. WorkProblem 1 using WorkProblem 2 using WorkProblem 3 using WorkProblem 5 using Derive differentiation WorkProblem1 WorkProblem2 WorkProblem 3 WorkProblem 5 using using using using Lagrange polynomials.2. i + 1.00 of to 1. Compare errors and the ratios of the errors for the these two increment sizes.0) using Newtonbackwarddifference polynomials orders 1. i. For the data in Table 1. Compare errors and the ratios of the errors for the the two incrementsizes.0) using Newtonforwarddifference polynomials orders 1 and 2 with the followingpoints: (a) 0. and (b) 0. (5. 16. Lagrange polynomials. The symbolic data in Table 2 are used in the following problems. Lagrange polynomials. and (b) 1.1. 9. 20.01.40).

For the data in Table1.112).282 Chapter5 forward-difference poly24. (5. For each result. Compare with the results presented in Table 5.96) by substituting’Taylor series for the function values to recover the first derivative and the leading truncation error term. Compare with the results presented in Table 5.01.109) by substituting Taylor series for the function values recover the secondderivative and the leading truncation error term. Error Estimation and Extrapolation 33. 25. (a) Estimatethe error for the Ax= 0. Verify Eq. (b) i.1. Derive Eqs. and i÷2.02. Modifythe programto consider a linear direct fit polynomial. i + 1. and 0.107) to (5. i + 1. and (d) i. (b) Estimatethe error for the Ax= 0. and i÷3.7. Verify Eq.7.99) by substituting Taylor series for the function values to recover the first derivative and the leading truncation error term. (5. 0. 27. (c) Extrapolate the results to 0(Ax4). Solve Problems lc and 2c using the program. Solve Problems lb and 2b using the program. and i+2. 30.01 result.4 Taylor Series Approach 26. evaluatef’r(1.97) to (5. Checkout the programusing the given data. and 0.0. 5. determinethe leading truncation error term.i.1.106). Solve Problems lb and 2b using the program.1. Derive Eqs.1 and i. Verify Eq. (b) Estimatethe error for the Ax= 0. Implement programpresented in Section 5. and i ÷ 2. evaluate fr(1.2. 32. and i + 1. Checkout the programusing the given data. Derive difference formulas for f’(x) using Newton nomialsusing the followingdata points: (a) i and i + 1. Implement programpresented in Section 5. 28.02. (5.i. Derive difference formulas for f’(x) using Newton forward-difference polynomialsusing the followingdata points: (a) i .i+ 1. i.93) to (5.0) using Eq.02 result. i. (b) i .01. (5.i+2. i. 37. (5. 5.1 5.0) using Eq. For each result.5 Difference Formulas 29. Extendthe programto consider a cubic direct fit polynomial.92) by substituting Taylor series for the function values to recover the first derivative and the leading truncation error term.04. (e) i. Verify Eq. (5.1 for the differentiation of a the quadratic direct fit polynomial.6 .1. 5. (5. Derive Eqs. 0. determine the leading truncation error term. For the data in Table1.i+l.04. (c) i-2.99) for Ax= 0. (c) and i+l. (5. Solve Problems 1 a and 2a using the program.2 for the differentiation of a the quadratic Lagrangepolynomial.01 result. 34. i . 40. 39.7 Programs 35. and i+l. (5.109) for Ax-. 36. 38.96). (c) Extrapolate the results to 0(Ax4).i+ 1. 31. (a) Estimatethe error for the Ax= 0. i÷2. (d) i-l.02 result. and (f) i .

89 100.56 y 2. 42.3 for the differentiation of a the quadratic divided difference polynomial.u is the velocity parallel to the surface (m/s). Modify the program to consider a linear Newtonforward-difference polynomial. When fluid flows over a surface.7. the shear stress z (N/m at the surface is a given by the expression db/ surface z=#~y (1) where# is the viscosity (N-s/m2). Solve Problemslb and 2b using the program. Check out the program using the given data. Solve Problems la and 2a using the program. The values given in Table 3 were obtained. Solve Problems la and 2a using the program. y 0.7. and (d) the shear force acting on a flat plate 10 cmlong and 5 cmwide.(c) the corresponding values of the shear stress at the surface. 45.4 for the differentiation of a the quadratic Newton forward-difference polynomial. 50. second-. Solve Problems lb and 2b using the program. Solve Problems 1 c and 2c using the program. Modifythe programto consider a linear divided difference polynomial. APPLIED PROBLEMS 2) 51. Solve Problems lc and 2c using the program. and third-order polynomials. Modifythe programto consider a linear Lagrangepolynomial. 47. Extend the programto consider a cubic Lagrangepolynomial.Numerical Differentiation andDifferenceFormulas 4l. Table 3. .0 Velocity Measurements u 0. Implement programpresented in Section 5. Measurements the velocity of of an air stream flowing abovea surface are madewith an LDV (laser-Dopplervelocimeter).00024N-s/m Calculate (a) the difference table for u(y).0 u 88. 48.00 2.0 3. 46. /z = 0. Implement programpresented in Section 5. and y is the distance normalto the surface (cm). Solve Problems la and 2a using the program. 283 44. 49. Extend the programto consider a cubic divided difference polynomial. Check out the programusing the given data. Solve Problems lc and 2c using the program. At the local temperature. (b) du/dy at the surface basedon first-.00 55. 43.0 1. Extend the program to consider a cubic Newtonforward-difference polynomial.

the heat transfer rate ~ (J/s) to the surface a given by the expression ft =-to4 dT surface dy (2) wherek is the thermal conductivity (J/s-m-K).030 J/s-m-K. When fluid flowsover a surface. Measurements the temperature of an air stream flowing above a surface are made with a thermocouple. The values given in Table 4 were obtained.284 Chapter5 52.0 T 355. .(c) the corresponding values of the heat flux it/A at the surface. Table 4.00 At the average temperature. k = 0. and third-order polynomials.0 Temperature Measurements T 1000. and is the distance normal to the surface (cm).33 y 2. (b) dT/dyat the surface basedon first-. and (d) the heat transfer to a flat plate 10cm long and 5 cmwide. y 0.0 3.00 533. T is the temperature(K). second-.0 1.56 300. Calculate (a) the difference table for T(y).

1 INTRODUCTION Aset of tabular data is illustrated in Figure 6.4.1 are actually values of the function f(x) = 1/x. The evaluation of integrals. Programs 6.2. The function f(x) is known a finite set of discrete values of x.1.4. The discrete data in Figure 6.5. Newton-Cotes Formulas 6. Differentiation within a set of tabular data is presentedin Chapter5. Integration of a set of tabular data is presentedin this chapter.7.6. Adaptive Integration 6. Direct fit polynomial 6.3.6. Direct Fit Polynomials 6. The trapezoid rule 6. I = f(x) (6.5. Gaussian Quadrature 6. whichis used the exampleproblemin this chapter. is required in manyproblems in engineering and science.1 in the formof a set of [x.8.3. Doubleintegral 6. Summary Problems Examples 6. Romberg integration 6.2. Extrapolation and Romberg Integration 6. a process knownas integration or quadrature.l. Gaussian quadrature 6.7. Simpson’s1/3 rule 6.9.6 Numerical Integration 6. Multiple Integrals 6. Introduction 6. Interpolation within a set of at discrete data is presented in Chapter 4. Adaptiveintegration using the trapezoid rule 6.f(x)] pairs.1) 285 .

(b) Approximate integral.1) can evaluated exactly in closed form.2. Someknownfunctions have an exact integral.2 Numerical integration. .7 3. Many knownfunctions. f(x) 0.9 Figure 6.286 Chapter6 x 3. Numericalintegration (quadrature) formulas can be developed by fitting approximating functions (e.26315789 0.2 3. (a) Exactintegral. (6. many cases. however.. (a) (b) Figure6.32258065 0.2) This process is illustrated in Figure 6.4 3. maybe a known function or a set of discrete data.8 3. polynomials)to discrete data and integrating the approximating function: I I= f(x) dx-~ Pn(x) (6. in whichcase an approximatenumericalprocedure is again required to evaluate Eq.31250000 0. and an approximatenumerical procedure is required to evaluate Eq. the function f(x) is known only at a set of discrete points.6 3.27777778 0. in which case Eq. (6.29411765 0. whichis to be integrated.30303030 9.1).g.3 3. The evaluation integrals by approximatenumericalproceduresis the subject of this chapter.25641026 The functionf(x).28571429 0.1).1 Integralof tabulardata.5 3.27027027 0. do not have an exact integral. (6.1 3.

An important method. is presented next. Adaptive integration. The total number of discrete points can be chosen arbitrarily. Thenumericalevaluation of multiple integrals is discussed briefly.0. a procedure for minimizing the number of function evaluations required to integrate a known function. 3"91 = I =[ . Romberg integration.22957444. the degree of the approximatingpolynomialchosento represent the discrete data is the only parameterunder our control. based on extrapolation of solutions tbr successively halved increments. When known a function is to be integrated.3) is consideredin this chapter to illustrate numericalintegration methods. whichspecifies the locations of the points at whichknown functions are discretized. or several subsets of the discrete points. are developed for equally spaced data. . which are called NewtonCotes formulas. The simple function 1 f(x) (6. The following section presents Gaussian quadrature. is integrated. Proceduresare presented in this chapter for all of the situations discussed above.6. or polynomials.. an extremelyaccurate procedurefor evaluating the integral of known functions in whichthe points at whichthe integral is evaluated is chosenin a manner whichdoublesthe order of the integration formula. The organization of Chapter 6 is illustrated in Figure 6. Direct fit polynomials are applied to prespecified unequally spaced data. In that case. Adaptive integration is then discussed. is discussed. The degree of the approximating polynomial chosen to represent the discrete data can be chosen.10. is presented.9"~ ---.4) Theprocedurespresented in this chapter for evaluating integrals lead directly into integration techniquesfor ordinary differential equations.1x In 3.3. The polynomialmaybe fit exactly to a set of points by the methods presented in Sections 4. an extremely accurate and efficient algorithmwhichis basedon extrapolationof the trapezoid rule. \3. and the resulting polynomial. an approximating polynomial fit to the discrete is points. Romberg integration..Numerical Integration 287 Several types of problemsarise. amongwhich the trapezoid rule and Simpson’s1/3 rule are the most useful. The function to be integrated maybe known only at a finite set of discrete points.9 = (3. is discussed. Integration formulas based on Newtonforward-difference polynomials. A brief introduction to multiple integrals follows.3 to 4. This section is followed by a development of Newton-Cotesformulas for equally spaced data. A summary closes the chapter and presents a list of what you should be able to do after studying Chapter 6. The locations of the points at which the known function is discretized can also be chosen to enhancethe accuracy of the procedure. Gaussian quadrature. In either case.1J 3. integration using direct fit polynomials as the approximating function is discussed. whichis discussedin Chapters7 and 8.1 (6.In particular. several parametersare under our control. or by a least squares fit as described in Section 4.dx In(x) 03. After the background material presented in this section.

[xi.2 DIRECT FIT POLYNOMIALS A straightforward numerical integration procedure that can be used for both unequally spaced data and equally spaced data is based on fitting the data by a direct fit polynomial and integrating that polynomial.P.3 Organizationof Chapter 6.(x) = ao +alx + a2x2. ] where P. as discussed in Section 4.n + 1 sets of discrete data.5) .(x) is determined by one of the following methods: 1.288 Chapter6 Numerical Integration Direct Fit Polynomials Newton-CotesFormulas Extrapolation Romberg Integration AdaptiveIntegration GaussianQuadrature Multiple Integrals Programs Summary Figure 6. . as discussed in Section 4. [ f(x) ~-. GivenN >. (6. Thus. determinethe exact nth-degree polynomialthat passes through the data points.f(xi)]. .f(xi) ].3. 6. determine the least squares nth-degreepolynomial that best fits the data points.10. [xi. GivenN = n + 1 sets of discrete data.

86470519 .5) + 0.1: x 3.25641026 Fit the quadratic polynomial.25641026= ao 2 a~(3.dx ~.Numerical Integration 3.6) Substituting Eq.9 f~ 0.24813896)x + ½ (0.22957974 .6) and integrating yields I= aox +al-~+a2-~+..12) (6.5 3. and a2 by Gauss elimination gives ~ Pz(x) = 0.11) (6. (6.02363228)x31331~ Evaluating Eq.10) into Eq.9) + 0.00000530. (6.86470519)x + ½ (-0.7) gives the value of the integral. by After the approximatingpolynomialhas been fit.1) + a2(3. Pz(x)= ao + alx-I-a2 x2. Recall: I = .24813896x + 0.28571429 0.1 (6.32258065 0.Pn(x) J3.9b) (6. 289 Given a knownfunction f(x) evaluate f(x) at N discrete points and fit a polynomial an exact fit or a least squares fit. a (6.22957974 I The error is Error = 0.0.9) + a2(3. Example6. the integral becomes I = dx ~ Pn(x) (6.8) and integrating gives 2 I = [(0.9a) (6.0.7) Introducingthe limits of integration and evaluatingEq. (6.l X . (6.1 3. to the three data points by the direct fit method: 2 a1(3.9c) (6.22957444 = 0.10) . al. (6.1.5) into Eq.28571429= ao Solving for ao. (6.11) yields li = 0..32258065= ao 0.1 by a direct fit polynomial.02363228x Substituting Eq.5) + a2(3.1) + 2 a~(3. (6.8) Considerthe following three data points from Figure 6. Direct fit polynomial Let’s solve the example problem presented in Section 6.

17) yields I =h o + sh) ds (6. x = a and x = b. Higher-order formulas have been developed [see Abramowitz Stegun (1964)]. The rectangle rule has . or the second integral in Eq. so that x = a corresponds to s = 0 and x = b corresponds to s = s.14) is implicit in x.19) Each choice of the degree n of the interpolating polynomial yields a different Newton-Cotes formula. (6. Either Eq..X +’" +s(s. Table 6. Introducing these results into Eq.2)A3f ° 6 2). the Newton at forward-difference polynomial presented in Section 4. thus significantly decreasing the amountof effort required.14) must be madeexplicit in introducing Eq. so the secondapproach is taken.0h X -.1 are sufficient for most problems in engineering and science.88): P.3 NEWTON-COTES FORMULAS Chapter6 The direct fit polynomialprocedurepresented in Section 6.l)(s .15) dx=hds (6.(x) :fo +s Afo + ~ Azf0 +s(s . (6.Isn! wherethe interpolating parameter s is given by s-. so that Eq.290 6.6. from Eq. When function to be the integrated is known equally spaced points. whereasEq.15) and the Error term is Err°r=( s hn+’f(n+O(~)n + x°<X<Xn_ (6.1)(s- 1)] Anj~+ Error (6. Thefirst approach leads to a complicatedresult. Thus. (4. Thus. The resulting formulas are called Newton-Cotes formulas. I : X) dx ~ Pn(x) dx (6. (6.18) The limits of integration.17) where.2 requires a significant amount of effort in the evaluation of the polynomial coefficients. (6.14) ~ x = x o + sh (6. but those presented in Table and 6.1 lists the more common formulas.2 can be fit to the discrete data with muchless effort. (6.14) can be used directly. (6.14). Eq. (6.13) must transformed into an explicit function of s.16) Equation (6. I : f(x) dx ~Pn(x) dx : h Pn(s) s(a) (6.16) into Eq.13) where Pn(x) is the Newton forward-difference polynomial.. (6.13) requires that the approximatingpolynomialbe an explicit function of x. are expressed in terms of the interpolating parameter s by choosing x = a as the base point of the polynomial.

The upper limit of integration x 1 corresponds to s = 1. Andso on.3. Figure 6.4 Range. as illustrated in Figure 6. A quadratic polynomial requires an interval containing two increments.]o o where h = Ax. (6.4 illustrates the region of integration. b . Eq. depending on the degree of the approximating polynomial. (6. Thus.22) ~ increment Figure 6. so it is not consideredfurther. Someterminology must be defined before proceeding with the developmentof the Newton-Cotes formulas.21) whereA/denotesthe integral for a single interval. The group of increments required to fit a polynomial is called an interval. Evaluating Eq. Simplifyingyields the trapezoid rule for a single interval: A/= ½h(f0 +fl) rangeof integration interval 1 interval 2 interval n 1 (6.Numerical Integration Table 6. A quadratic polynomial requires two incrementsand three data points to obtain a fit.19) gives t/ AI= hI "~ + s Afo) ds = h~sfo +S-Afo (fo 2 . Eachinterval consists of one or more increments. andincrements. A linear polynomialrequires one increment and two data points to obtain a fit.+ ½fifo) = h[f0 + ½(f~ -f0)] (6.5. intervals.1 Newton-Cotes Formulas n 0 1 2 3 Formula Rectangle rule Trapezoid rule Simpson’s rule 1/3 Simpson’s rule 3/8 291 poor accuracy. The total range of integration can consist of one or more intervals. Andso on for higher-degree polynomials. Theother three rules are developed this in section. The distance betweenany two data points is called an increment.1 The Trapezoid Rule The trapezoid rule for a single interval is obtained by fitting a first-degree polynomial to two discrete points. A linear polynomialrequires an interval consisting of only one increment.20) AI = h(f0. 6.20) and introducing Af0 = (f~ -f0) yields 1 2 1 (6. The distance betweenthe lower and upper limits of integration is called the range of integration.

26) where x0 < ~ < xn. Eq.e. Total Error = . Thetotal error for equally spaceddata is given n-1 n--1 ~ Error i=0 = ~ .27) Thus.8 gives l(h = 0.the global (i.292 f(x) Chapter6 n-1 n Figure 6. n-1 n-1 : I = E ~i i=0 E ½hi(f i=0 +f+..~h3f"(¢) i=0 = n[. i The error of the trapezoid rule for a single interval is obtained by integrating the error term given by Eq..23) where hi = (xi+1 -xi).16). Error = h h2f"(¢) ds 3) -l~h3f"(~) = = 0(h (6. the local error is 0(h3).Xo)/h.22) over all the intervals of interest. (6. + 2fn_~ +fn) (6.23159636 Z (6. total) error is 0(h2).25) Thus. Thus.~z (Xn -Xo)h2fll(~) = 0(h2) (6. Thecomposite trapezoid rule is obtained by applyingEq.5 The trapezoid rule.. The numberof steps n = (xn .8) = -~(0. Thus. (6.28) . Solvingthe problem the range of integration consisting of only one interval for of h = 0.24) where Ax = Ax = h = constant. Equation (6. Recall that f(x) = 1 Ix.) (6.2.~h3f"(~)] (6. When data are equally spaced. Example 6.25641026) = 0. The trapezoid rule Let’s solve the example problempresented in Section 6. (6.23) simplifies I = ½h(fo + 2f~ + 2f~ +.1 by the trapezoid rule.23) does not require equally spaced data.32258065 + 0. Therefore.

00003192 Ratio .2 Simpson’s 1/3 Rule Simpson’s1/3 rule is obtained by fitting a second-degree polynomial to three equally spaced discrete points.2 Results for the Trapezoid Rule h 0.25641026] 0. E(h) 0(h2) = 22 = Ratio -.1 I 0.32258065 + 2(0..00202192 3.2 0.31) (6.97 -0. I(h = 0.4 and apply the compositerule. Error -0.Numerical Integration 293 Let’s break the total range of integration into two intervals of h = 0. the compositerule yields I(h = 0. for successive interval halvings.22957444. Thus.23008389 0.32258065 + 2(0. for eight intervals of h = 0. 6. Thus.00012762 4. I(h = 0.22970206 0. + 0. whichalso presents the errors and the ratios of the errors between successiveinterval sizes.22960636.22960636 (6.2.3.99 -0.1. (6. Thus.31250000 ÷.22970206 Finally.19) gives Table 6. Theresults are tabulated in Table6.2 0(h/2) (6.32) The results presentedin Table6. Theglobal error of the trapezoid rule is 0(h2). as illustrated in Figure 6.30303030 0. The upper limit of integration x 2 corresponds to s = 2.4) = ? [0.25641026] = 0.28571429) + 0.2 illustrate the second-order behaviorof the trapezoid rule.1) = ~-~ [0.8 0.32258065 + 2(0.23159636 0.30) Recall that the exact answeris I --= 0.00050945 3.25641026] = 0.23008389 (6.2.4 0.6..00 -0.29) For four intervals of h = 0.E(h/2) . Eq.27027027) ÷ 0.28571429 + 0.26315789) ÷ 0.2) = ~-~[0.

4.4) = ~[0. Thus.1)(s 2) Error = h h3f"({) ds0 = (6.. (6.22957974 (6. It simply means that the cubic term is identically zero. Example 6. Solving the problem for two increments of h = 0.1)(s . I = ½h(f + 4f~ + 2f2 + 4f3 +.36) 6 0 j This surprising result does not meanthat the error is zero.) 0 (6.._ 1 +f. the local error is 0(hS). Thus. (6. By an analysis similar to that performedfor the trapezoid rule.1 using Simpson’s rule.6 Simpson’s1/3 rule. the minimum permissible numberof increments for Simpson’s1/3 rule. Simpson’s 1/3 Rule Let’s solve the exampleproblempresented in Section 6.35) Theerror of Simpson’s rule for a single interval of twoincrementsis obtained by 1/3 evaluating the error term given by Eq.34) over the entire range of integration.38) . + 4f.3.294 Chapter6 0 1 2 n x Figure 6. Thus. Performingthe integration. Error = h Jis(s .3) h4fiv(~) as 24 9-~hsfiv(¢) (6. and introducing the expressions for Afo and A2fo.32258065 + 4(0. Recall 1/3 that f(x)= 1Ix.16).34) The composite Simpson’s1/3 rule for equally spaced points is obtained by applying Eq.25641026] 0. and one interval yields I(h = 0.28571429) + 0. it c~/n be shown that the global error is 0(h4). and the error is obtained from the next term in the Newton forward-difference polynomial. 2 s(s . Notethat the total number incrementsmust of be even.2)(s . yields Simpson’s rule for a single interval of two increments: 1/3 [ M= ½h(f0 + 4f~ +f2) (6. evaluating the result.37) Thus.

41) the fourth-order behavior of Simpson’s 1/3 The results presented in Table 6.32258065 ÷ 4(0. Eq.27027027) + 4(0.28571429) + 4(0.29411765) ÷ 2(0. Ratio E(h/2) E(h) __ 4 0(h) 0(h/2) 4 _ 24 = 16 (6.2 0.22957478 Finally.2) = ~ [0. for successive incrementhalvings.59 -0.40) Recall that the exact answer is I = 0. I(h = 0.2 and two intervals and applying the compositerule yields: I(h = 0.3 illustrate rule.Numerical Integration Table 6. evaluating ~e result.39) (6. Theresults are tabulated in Table6. Thus.1 and four intervals.45 -0.27027027) + 0.25641026] = 0. and in~oducingexpressions for A~.3.3 Simpson’s 3/8 Rule Simpson’s rule is obtained by fii:ting a third-degree polynomial four equally spaced 3/8 to discrete points.00000002 Ratio 295 Breakingthe total range of integration into four incrementsof h = 0.28571429) ÷ 4(0.32258065 + 4(0. whichalso presents the errors and the ratios of the errors betweensuccessive incrementsizes.43) .22957446 Error -0.30303030) ÷ 2(0. as illustrated in Figure6.4 0.31250000) + 2(0.1)= --0il [0.26315789) + 0. for eight incrementsof h = 0.22957974 0.3.3 Results for Simpson’s 1/3 Rule h 0. The global error of Simpson’s1/3 rule is 0(h4).00000034 15.22957446 (6.30303030) + 4(0. (6.22957444. ~d k3~ yields Simpson’s3/8 role for a single inte~al of tDee increments: [ M = ~h(~ + 3~ + 3~ +A) (6.27777778) + 2(0. 6.00000530 15. Theupperlimit of integration x3 corresponds to s = 3.7.25641026] = 0.22967478 0.19) gives AI = h + s k~ + k2~ + 6 Perfo~ingthe integration. Thus.1 I 0.

Newton-Cotes formulas can be expressed in the general form: I = f(x) dx = nflh(aof o + ~1 f~ +’" ") + Error (6.4 presents n. Thus. Note that the total number incrementsmust of be a multiple of three. if any.1)(s_2_~2)(s -. The first 10 Newton-Cotes formulas are presented in Abramowitz and Stegun (1964).45).r = h(fo + +3A+2A 3A+. 1/3 whereas the corresponding coefficient for Simpson’s 3/8 rule is -3/80.4 Higher-Order Newton-Cotes Formulas Numerical integration formulas based on equally spaced increments are called NewtonCotes formulas.16)._. + 3f. and Error denotes the local error term. the local error is 0(h and the global error is 0(h4). t. Table 6. + I. (6. Simpson’s1/3 rule and Simpson’s3/8 rule have the same order. Thus. In view of this result. 0(ha).296 ~ Chapter6 f(x) 0 1 2 3 n Figure 6.. Error = h f3 s(s -. 6. Three incrementscan be evaluated by the 3/8 rule.44) The error of Simpson’s rule for a single interval of three incrementsis obtained 3/8 by evaluating the error term given by Eq.3) h4fiv(¢) ds ~hsfiv(~) = d0 (6.7 Simpson’s3/8 rule. 1/3 3/8 whatuse. as shown Eqs. and Error for the first seven Newton-Cotes formulas.fl and ai are coefficients. Consequently. . is Simpson’s rule? Simpson’s rule is useful whenthe total number 3/8 3/8 of incrementsis odd.3. and the remaining even numberof increments can be evaluated by the 1/3 rule. Simpson’s rule should be moreaccurate than Simpson’s rule. +f. The composite Simpson’s3/8 rule for equally spaced points is obtained by applying Eq.45) 5) Thus. (6.37) and (6. ai. The trapezoid rule and Simpson’s1/3 and 3/8 rules are the first three Newton-Cotes formulas. (6..) (6.43) over the entire range of integration.46) where n denotes both the numberof incrementsand the degree of the polynomial. Thecoefficient in the local error of Simpson’s rule is -1/90.

117): Extrapolated value = f(h/R) + Error(h/R) (6. the basic algorithmis 0(h2). Recall the compositetrapezoid rule.I( h)] Equation(6.. Eq.48). Theerror estimate can be used both for error control and extrapolation. the result is called Romberg integration. Error(h/R) -Rn _ 1 [I(h/R) 1 I( h)] (6. The extrapolation formula is given by Eq. as discussed in Section 5. Recall the error estimation formula.24): n--1 I = ~ A//= ½h(J~ +2f~ +2j~ +. (6.53) .6.q-.52) (6. Eq.47)..n _1 [I(h/2) -I (h)] For the trapezoid rule itself. Thefollowingerror terms increase in order incrementsof 2... n = 2.47) whereR is the ratio of the incrementsizes and n is the global order of the algorithm. .49) It can be shownthat the error of the compositetrapezoid rule has the functional form h2 h4 h6 Error = C1 --b C2 -[.C3 .1/90f(4)h 5 -3/80f(4)h 7 75 27 2989 7 -8/945f(6)h 7 19 -275/12096f(6)h 9 216 41 -9/1400f(8)h 1323 3577 751 9 -8183/518400f(8)h 6. Eq.51) becomes Error(h/2) = [I(h/2) .48) When extrapolation is applied to numerical integration by the trapezoid rule. (6. for R = 2 gives Extrapolated value =f(h/2) Error(h/2) + 0(4) q-.the error can be the estimated by evaluating the algorithm for two different incrementsizes. (6. Applyingthe extrapolation formula. (5..4 EXTRAPOLATION AND ROMBERGINTEGRATION When functional form of the error of a numericalalgorithm is known. Applyingthe error estimation formula.50) Thus.52) can be used for error estimation and error control. (6. Thus. i=0 + 2f~_~ +f~) (6. whereeach successive increment size is one-half of the preceding increment size.116b). gives 1 Error(h/2) -. written for the process of numericalintegration. and Eq.4 Newton-Cotes Formulas n fl ~0 51 52 ~3 54 55 ~6 297 57 Local Error 1 2 3 4 5 6 7 1 1 1/2 1 4 1/6 1 3 1/8 7 32 1/90 75 1/288 19 216 1/840 41 1/17280 751 3577 1 3 1 12 32 50 50 27 272 1323 2989 3 -1/12f(2)h 5 . (6.51) (6. that is. so n = 2. (5. (6. with f(h) = l(h).Numerical Integration Table 6.Eq. Thus. R = h/(h/2) = 2. Let’s apply the trapezoid rule for a successionof smaller and smaller incrementsizes.

54) Repeating the procedure for the h = 0.22957973)= -0.2.4.Each successive higher-order extrapolation begins with an additional application of the 0(he) trapezoid rule. whichis then combined with the previously extrapolated values to obtain the next higher-order extrapolated result. Recall that the exact answer is I = 0. (6.22957444.57) (6.53) gives Extrapolated value = 0. 2) 0(h Error 4) 0(h Error 6) 0(h 0.22957478.23008389 .Substituting l(h = 0.23159636 -0. Substituting the two0(h4) extrapolated values into Eq. Rombergintegration Let’s apply extrapolation to the results obtained in Example 6.00012728 0.47) with n = 4 to estimate the 0(h4) error.22970206 -0.00000033 Substituting this result into Eq.23008389 + (-0. If two extrapolated 0(h4) values are available.2 results gives Error = -0.22957973 -0. Example 6.00050416) = 0.52) gives Error(h/2) = ½ (0. The second 0(h6) result agrees with the exact value to eight significant digits. Table 6.53) showsthat the result obtained by extrapolating the 0(h2) trapezoid rule is 0(h4).23008389 -0.22960636 0. are presented in Table6.00000033 0. gives 1 Error(h/2) = 2--T:-]-_ 1 (0.5. those two values can be extrapolated to obtain an 0(h6) value by applying Eq.00000002 0.3.22957445 .4 and h = 0.51).55) (6. and the results of one moreapplication of the trapezoid rule and its associated extrapolations.56) Theseresults.22957478 -0.22957478.1 I. (6.0.53) gives Extrapolated value = 0.2 into Eq.5 RombergIntegration h 0.298 Chapter6 Equation(6.1. Successively higher-order extrapolations can be performed until round-off error masksany further improvement.22957973 (6.22957478 + (-0.4.23159636) = -0. The 0(h4) results are identical to the results for the 0(h4) Simpson’s 1/3 rule presented in Table 6. and adding that error to the moreaccurate 0(h4) value.4 0.00003190 0.00012728 and Extrapolated value = 0.Both of the extrapolated values are 0(h4).00000033) = 0.00050416 Substituting this result into Eq.22957446 0.22957444 0.22957445 (6. (6. (6.8 0.2 0. with n --.0. in whichthe trapezoid rule is used to solve the example problempresented in Section 6. whichrequires three 0(he) trapezoid rule results.8) and l(h = 0. (6.4) from Table 6.00050416 0.

5 illustrate the error estimation and extrapolation concepts. which is very close to the error estimate (except the signs are reversed as discussed below).3 by taking smaller and smaller increments. One would continue and add the error estimate of -0. region a-d should be brokeninto three regions. 6.22957478. a large numberof points maybe required to achieve the desired accuracy. Error it estimation. but opposite signs. that is. In fact. since evaluation of the integrand function f(x) is the most time-consuming portion of the calculation. and the increment h must be very small.22957445 . However. behavior of the integrand functionf(x) may the not require a uniformstep size to achievethe desired overall accuracy.Numerical Integration 299 The results presented in Table6. This approachis generally undesirable. When function to be integrated is known that it can be evaluated at any the so location.00000033 the 0(h 4) value to obtain the first 0(h 6) value. the process to but wouldthen be terminated and no further trapezoid rule values or extrapolations wouldbe calculated. is not obvious howto chooseh to achieve a desired accuracy. Romberg integration. can be used to chooseh to satisfy a prespecified error criterion. In regions where the integrand function is varying rapidly.5.00000100l specified. that limit is obtained for the first 0(h4) error is estimate presented in Table 6. In regions wherethe integrand function is varying slowly. However. f(x) in varies rapidly. Consider the integrand function illustrated in Figure 6.22957444 = 0.4.8. as illustrated.22957444= 0. Error control is accomplishedby comparingthe estimated error to a specified error limit and terminatingthe process when comparison satisfied.0. . f(x) is essentially constant. A more straightforward automatic numericalprocedureis required to break the overall region of integration into subregions in whichthe values of h maydiffer greatly.00000034. the actual error for the more accurate 0(h4) value is Error = 0. In Regiond-e. The actual error in the corresponding 0(h6) extrapolation is Error = 0.22957445.constructing a plot of the integrand function is time consumingand undesirable. Successive extrapolation.5 ADAPTIVE INTEGRATION Any desired accuracy (within round-off limits) can be obtained by the numerical integration formulas presented in Section 6. For the present case.00000001. For example. Visual inspection of the integrand function can identify regions where h can be large or small. only a few points should be required to achieve the desired accuracy. so the incrementcan be reducedas far as desired.if an the is error limit of 10. Care must be exercised whenboth types of Error terms are discussed at the sametime.0. and the incrementh maybe very large. can be used to increase the accuracyfurther. 0. region a-d. However. This procedurerequires the step size h to be a constant over the entire region of integration. However. as described in Section 6. The Error calculated by the extrapolation formula is based on the formula fexact =f(h) + Error(h) which yields Error(h) = fexact -f(h) The Error in the numericalresults throughout this bookis defined as Error(h) =f(h) -fexact These two Error terms have the same magnitude. the step size h can be chosenarbitrarily.

1 < x < 0. Recall Eq.61) whichagrees with the exact value to six digits after the decimalplace. let’s break the total range of integration into twosubranges. Extrapolating the final result yields I = 2.58) (6. To satisfy the error criterion requires 65 and 17 function evaluations. To satisfy the error criterion.197225.52) is less than 0. and successively havingh until the error estimate given by Eq.000321) = 2. and each subrangeis evaluated to the desired accuracy by subdividing each individual subrange as required until the desired accuracy is obtained. Example 6.1x The exact solution is I = ln(x)l°oi91 In(0.1) = 0.the trapezoid rule or Simpson’s rule.60) The results are presented in Table 6..197546 + (-0.59) First.001 in absolute value. (6.7.. .9 . The error criterion for the two subranges is 0.dx 3o. The overall 1/3 range of integration is brokeninto several subranges.197225 (6.0. Next. = (6.9 1 1 = . and apply the trapezoid rule in each subrange.001/2 = 0.001.5 < x < 0.00625 are required. Extrapolation mayor maynot be used in each subrange. A basic integration formula must be chosen.6.5 and 0. (6. 129 integrand function evaluations with h = 0.0005.58) with a uniformincrement h over the total range integration.8 Integrand function.5.52): Error(h/2) = [I(h/2) .9/0. 0.I( h)] (6. for example. Adaptive integration using the trapezoid rule Let’s illustrate adaptiveintegration by evaluating the followingintegral using the trapezoid rule: 0.8.300 Chapter6 a b c d ~ X Figure 6. The results for the two subranges are presented in Table 6.1) = 2. let’s evaluate Eq. Adaptive integration is a generic namedenoting a strategy to achieve the desired accuracy with the minimum numberof integrand function evaluations. (6.9. starting with h = (0. [Errorl < 0.

587787 (6.40000 0.217330 2.018694 -0.01250 0.463492 2.587931 + (-0.02500 Using the Trapezoid (6.001276 -0.596825 0.20000 0.587931 E~or -0.197546 Error -0.40000 0.000144) which yields I = I 1 +[2 = 2.002249 -0.018122 -0.1 < x < 0.610686 1.5 is a simple example of adaptive integration.02500 0.202337 2.6 Integration Rule n 2 3 5 9 17 33 65 129 h 0.00625 Using the Trapezoid I 4. More sophistiTable 6.20000 0.000321 301 respectively. in the two subranges.273413 2.05000 0.622222 0.10000 0.001240 -0.444444 3.186243 -0.008466 -0.80000 0.400000 1.63) I 2. Extrapolating the two final results yields I 1 = 1.05000 0.197225 which agrees with the exact answer to six digits after the decimal place.10000 0. This is 47 less than before.20000 0.000572 -0.10000 0.590079 0.063360 -0.62) (6.022222 2.061111 -0. The extrapolation step increases the accuracy significantly. Example 6.05000 0.02500 0.40000 0.609750 0. Further increases in efficiency may be obtainable by subdividing the total range of integration into more than two subranges.5 n 2 3 5 9 17 33 65 2 3 5 9 17 h 0.5 <x< 0.609750 + (-0.614406 1.628968 1.000312) I 2 = 0.64) = 1.7 Adaptive Integration Rule Subrange 0.198509 2.866667 1.01250 0.000144 .00625 0. which is a reduction of 36 percent.9 -0.588362 0.Numerical Integration Table 6.004998 -0.000312 0.004854 -0.683333 1.609438 = 0.177778 -0.474074 -0. This suggests that using Rombergintegration as the basic integration method within each subrange may yield a significant decrease in the number of function evaluations. for a total of 82 function evaluations.

67c) (6. However. The strategy employed Example is the simplest possible strategy.65) so that the integral of a polynomial of degree 2n . n = 2). C and 1... Consequently. this approach yields the best the possible result..5 6..302 Chapter6 cated strategies can be employed increase the efficiency of adaptive integration even to further. 2n parameters are available: x i (i = 1. an (n + 1)st-degree polynomialcan be fit to the n points and integrated. in 6. C so that I is exact for the followingfour polynomials:F(t) = 1. and t 3. consider the integral of the function F(t) betweenthe limits of oft = -1 and t = ÷1: I = F(t) dt = ~ CiF(ti) -1 i=1 l l n (6. (6. Consequently. (i = 1.2 . Thus..67d) I[F(t) = t] = t dt = ½ ~ ~= 0 -1 = C1t I -~. whena known function is to be integrated. considertwo points (i.65) where x i are the locations at which the integrand function f(x) is knownand C are i weighting factors. if n points are used.66) First. To simplify the development the formulas.68) . The resulting formulas have the form: I = llf(x) n dx = ~ Cif(xi) i=1 (6.1.6 GAUSSIAN QUADRATURE The numerical integration methods presented in Section 6.. t. it should be possible to obtain numerical integration methodsof muchgreater accuracy by choosing the values ofxi appropriately. t 2.9.2 .3 are all based on equally spaced data. i (6. as illustrated in Figure6.1 is exact.67a) (6.C~t2~ C~4 -+ -1 I[F(t) = t 3] = t3 -1 dt = ¼t411_l = 0 = C~t~ + C~t32 Solving Eqs.67) yields 1 1 (6. 1. Gaussian quadrature formulas are obtained by choosingthe values ofxi and C in Eq. Gaussianquadratureis one such method.67b) (6... 2 I[F(t) = ]] = (1) -1 i I dt = t[~_l = 2 = c1(1) + c2(1) = c~ + c 2 (6. an additional degree of freedomexists: the locations xi at which the integrand function f(x) is evaluated.. With 2n parameters it is possible to fit a polynomial of degree 2n .e. Thus. n) and Ci. Choose t2.C2t 2 I[F(t) t ~1 = t 2dt = ½ t31~_l = ~ -. When locations xi are prespecified. n).if n points are considered.

Thus. x = b --~ t= 1.73) and b = m(1) + (6.72) (6. (6.69) Z=._.66) yields (6.9 Gaussian quadrature.Numerical Integration 303 f(t)~ -1 t~ 0 t2 1 t Figure 6. and dx = m dt. a = m(-1) + which gives b-a m = -2 and c b+a 2 (6.70) The problempresented in Eq. Eq.71) Thus.74) and Eq.70) can be transformed from x space to t space by the transformation x = mt+ c wherex=a--~ t= -1. Eq. (6.76) . (6.70) becomes I = f(x) dx = fix(t)] f(mt + c)m (6. Thus.75) Define the function F(t): F(t) = fix(t)] = f(mt (6.F(t) -b at-- -~ The actual problemof interest is I = ] f(x) a (6. (6.71) becomes ~ (b-a ~ b+a }t-t-(6.

8611363116 Substituting Eqs. 3.76) into Eq.3399810436 0.5 and F(t) 0. mb-a 2 -.3478548451 0.0.79) Equations (6. From (6.8611363116 -0. Consider th e two-point fo rmula applied to the total range of integration as a single interval. 5 Substituting these results into Eq. (6.3478548451 -0.80) I = 0. 9. (6.3.73) and (6. Eq. a = 3.6. 1.4 f(O at = 0.3399810436 0.4t + 3. Higher-order results are presented by i Abramowitzand Stegun (1964). an d b = 3.77) Higher-order formulas can be developed in a similar manner.78) Table 6. where f(x) 1/ x. and 4.a ~ i=l CiF(ti) (6.4 and c b+a .8 presents t i and C for n = 2.6521451549 0. Example 6.78) with n = 2 gives (6.75) I =b-a[’ J-I 2 F(t) (6.304 Table 6.74) and (6. (6.4 1)~ -1 + (1)f (6.73).6521451549 0. 4t + 3.6 0 1 1 5/9 8/9 3 5 7 ~ 5/9 0.76) become 1 x = 0. Ii f (x) dx-- b .1.8 Gaussian Quadrature Parameters n t~ Ci Order Chapter6 -1/~ 1/~/~ -~0~.81) .5 2 (6. Gaussian quadrature Toillustrate Gaussianquadrature. let’s solve the example problem presented in Section 6. Thus.

dx =I~ J3. (6. (b .30589834 1) -~ = 0.30589834) + (1)(0. Thus.3 + 0.22957421 -0.a)/2 = 0. and (b + a)/2 = 3.91) The error is Error = 0. Next.00000352. Thus. h = 0. 2t + 3.3 F(t) 0.+ (1)F ) + 3.1 X +12 (6. each one-half of the total range of integration. 1 x = 0. b = 3.86) (6.26802896 ~ = 0.0.22957092 Recall that the exact value is I=0.4[(1)(0.Numerical Integration Evaluating F(t) gives F( 1 =0.2.26802896)] = 0.22957444.22957444 -.2 F(t) -1 (6. let’s apply the two-pointformulaover two intervals. This result is comparable to Simpson’s 1/3 rule applied over the entire range of integration in a single step.82a) (6.5.9.85) (6.2. b = 3.22957421 (6.2 0.3.10821350 (6.5 X I =j3"91dx= 3. 7 I2 = 0.3 = 0. .10821350 = 0.2(1/~’-~) + = 0. (b .5 F(1) 305 (6.-0.2 I~_~ F(t) dt = O.5. and (b + a)/2 = 3.2(1/~"~) + 3.87) dt = 0. Thus.4.22957444 = -0.4(1/V’~) + 3.90) Summing results yields the value of the total integral: the I = I 1 + Iz = 0.88) (6.2t + 3.1.82b) Substituting Eq.81) yields 1 = 0.0.12136071 + 0.4(-1/V~)+3.84) For I1.2t + 3.2 (1)F .a)/2 = 0. that is.7 0. a = 3. 1 x --. (6. 3 I t = 0.2(-1/~/3) + 3.1 X . a = 3.89) (6.7.2[(1)F(---~3) + (1)F(-~3) I2 = 0. [3"91 .12136071 I~ = 0.00000023. The error is Error = 0.5 1 -0.2[ _1_ ~ L0.2.82) into Eq.22957092 . This result is comparable to Simpson’s1/3 rule with h = 0.7 F(t) 0.2t + 3.dx+ J3’51 3.2(-1/ For I2.

4t + 3.26) + 50(1/3.22957444= 0.1 b = 3.0.7 MULTIPLE INTEGRALS The numerical integration formulas developed in the preceding sections for evalulating single integrals can be used to evaluate multiple integrals.5(~0@1. As a final example.306 Chapter6 Nextlet’s apply the three-point formulaover the total range of integration as a single interval.22957444 = -0.22957445.~) + ~F(0) + ~F(~-~)] 1 8 1 I= 0.3.y)dx)dy=JiF(y)dy where f(y) = f(x.22957445 (6.10)+ 75(1/3.3.let’s evaluate the integral using the sixth-order formulabasedon the fifth-degree Newton forward-difference polynomial.4 ~=÷ [ 0.6) + 3.74) + 19(1/3.6) ÷ 3. Thus. That formula is (see Table 6.94) The error is Error = 0.00000001.96) The error is Error = 0.4t + 3.16.97) (6.5 90.95) (6. a (6.93) I = 0.22957443 [~ 5 1 +9 0.58) + 75(1/3. This result is comparable to Gaussian quadrature with three points.42) + 50(1/3. a=3. h = (3.9 b-a 2 .9 .92) x = 0.5 f(t) 1 0.99) The double integral is evaluated in two steps: .97) can be written in the form: I=Ii(Jif(x. 6.6)[19(1/3.0.1)/5 = 0.00000001.4[~F(-x/-0--.5 (6.4(-~/0.4(0~f6~.90)] = 0.4 and b+a .4(0) + = 0.1. y) dx Equation(6. 2 (6.4) I 7)2~8 (19J~ + 75f~ + 50f2 + 50f3 + 75f4 + 19f5) + 0(h = For five equally spaced increments. I . Considerthe double integral: d b I = Jc laf(X.5 (6.0. Thus.98) y) dx y = Constant (6. This result is comparable to Simpson’s1/3 rule with h = 0.22957443 .

b x If the limits of integration are variable. (b) Meridional plane view.100) gives z(r) = z~ + ~ . Double integral To illustrate doubleintegration with variable limits of integration. Evaluate ! = f F(y) dy by any numerical integration formula.1 lb. let’s calculate the mass of water in a cylindrical container whichis rotating at a constant angular velocity co. Froma basic fluid mechanics analysis. . A meridional plane view through the axis of rotation is presented in Figure 6. Su bstituting th ese values in to Eq. 2.11 Spinningcylindrical container. Example6.zl)r (z2 (6.10.11a. (6 .7. Evaluate F(y) at selected values ofy by any numerical integration formula. 1.100) Frommeasureddata.10 Doubleintegration. z(0) = 1 and z(R) =z 2. that mustbe accounted for. Figure6.y)dx a Figure6.101) (a) Physical arrangement. as illustrated in Figure 6. as illustrated in Figure6.Numerical Integration 307 F(y) = f(x. it can be shown that the shapeof the free surface is given by z(r) = A + 2 (6.

Figure 6. z~. Figure 6. zI = 10. water is p = 1. Toillustrate the process of numericaldoubleintegration. and the differential mass in a differential columnof height z(r) is given by dm = pz(r) Substituting Eq. R.308 In a specific experiment.0 cm. the massin the container can be expressedin cylindrical coordinatesas m = dm = whichhas the exact integral z~)R2] m = ~p[ziR2 + (z2 -~ (6. (6. (6. let’s solve this problem in Cartesian coordinates. R. and z2 into Eq. In Cartesian coordinates.0 + 0. Zl.105).R = 10.106) gives m = l.12 Cartesian coordinates. where 2 =x~+y~ and inte grating gives .105) 0 6 (a) Discretized grid.104) = 2~p z 1 -~ R2 (6.101) gives 2 z(r) = 10.388980g.l(xZ + yZ)] dx (6. (6.106) Substituting the specific values of p.107) (6.101) into Eq. (6.In this case.O If [lO + O. and z2 into Eq.1r = Chapter6 20.0 g/cm Due to axial symmetryin the geometryof the container and the height distribution. and 2"2 Eq. The density of 3.0 cm. 00 . dA = dx dy. (6.12(a) illustrates a discretized Cartesian grid on the bottom of the container. (6. (6.102) Let’s calculate the massof water in the container at this condition.104) yields m 1500rcg = 4712.0 cm.103) Substituting the specified values of p.

O)JoLJ [10+0.400 14.660254 0.141428 0.041941 j 7 8 9 10 11 ~.0 d0 + 0.p2)] ~ +~2)]dx dy=4 F(~)dy (6.0 10.0 8.0 8.0 2.1(x ° where FC~) is defined as F(~) = [10. Thus.0 F@) 133.107) can be expressed m_-4(1.000000 9. (6.141428 6.110) Table 6.128255 130.492600 133.9 Geometrical j 1 2 3 4 5 6 ~7 8 9 10 11 ~.000000 0.000000 7.0 9.0 4.539392 9.0cm i 1 2 3 4 5 F(x) 12. (6.000000 0.724144 0. 1 2 3 4 5 6 cm 0.Numerical Integration Table 6.0 8. cm 0.600 12.539392 0.0 7.0 1.0 9.000000 .109) and ~ and ~(~) are illustrated in Figure 6.000 Due to symmetry about the x and y axes.664426 105.358899 0.0 9.1(x z +.108) dx (6.900 20.000000 309 Table 6.0 F~5) 126.12(b).400 18. Eq.0 6.500000 133.0 5.900 13.000000 0. 10.700000 81.0 (AX)nna 1 0.0 7.11 Values of F(~) j ~.797959 0.100 17.0 5.068144 132.0 9.660254 8.0 10.410710 133.10 Integrand of Eq.000 16.0 8.100 i 6 7 8 9 10 F(x) 15.949874 9. cm 6.000000 118.000000 4.111) at~ = 5.0 3.0 3.358899 0.0 4.949874 0.165151 8.0 0. (6.0 6.165151 0.0 ~(~).0 1.0 4.500 12.0 9.0 2.0 9.797959 9.0 7.000000 Parameters imax(~) 11 10 10 10 10 9 9 8 7 5 1 x(imax(~)) 10.

When integrand is a known the function.000 + 16.946528 -3. (6.442452 4709. Repeating the calculations for nx = 21. 6.500 ÷ 2(12. 21.0 cm.861705 -8.248087 Error.0~-~[133. but not quite secondorder.111) Table 6. 2 .068144 132.200880 -1.11. are evaluated by the trapezoid rule.400 ÷ 14.75 2.664426 + 105.900] = 130.000000 . Equation (6.108) by the trapezoid rule.80 2. with a corresponding increase in complexity. 11) let’s calculate ~) from Eq. 41.as can Gaussian quadrature.1) Ax I ~(~).7 1/3 trapezoid rule.920883 g (6.1) Ay (j = 1.112) Repeating this procedure for every value of 33 in Table 6.000000 F(5.000) 2 (6. The resulting geometrical parametersare presented in Table 6.600 ÷ 12.0 [10. with a final increment (AX)~na between x = (imax@ 1) . For each value of33 = (j . These results show procedureis behavingbetter than first order.660254 (18. adaptive integration techniques can be employed.110).527276 4703.113) The error is Error = 4643.920883 4687.0 cm.109).. and 161 yields the results presented in Table 6.equally spaced increments 1) with Ax = 1. The procedure is not difficult. 41. the Example illustrates the complexityof multiple integration by numericalmethods.000000 + 118. just complicated. The values of F(~).109) becomes 8. (6.128255 + 130.724144) + 0.400) + 18.. let’s discretize the x axis into (imax@).111).111) by the trapezoid rule gives F(5. As example. Example uses the trapezoid rule. 81 and 161 nx 11 21 41 81 161 m. (6. using the values ofF@) presented Table 6. g 4643.81 Chapter6 Let’s discretize the y axis into 10 equally spaced incrementswith Ay = 1.468047g. Simpson’s rule could be used instead of the 6. (6.140893 Errorratio 2.0 ÷ 0.12.7 especially with variable limits of integration. (6.-68.000000] = 4643. considerj = 6 for which 33 = 5.1 (x~ + 25. 81. The procedure can be extended to three or more independent variables in a straightforward manner. .000 ÷ 15.920883 4712..0) d0.100 + 17.11.. Integrating Eq.900 + 20.0) -.500 ÷ 2(133.700000 + 81.78 2.9 yields the results presented in Table 6.041941 + 126. defined in Eq.9.900 ÷ 13. Integrating Eq.388980 ---. each value of33.0cm.188100 4711. yields m = 4.0 F(x) (6. g -68.0)] dx : ll.310 Table 6.492600 ÷ 133.~[12.041941 0.468097 -24.10 presents the integrand F(x) of Eq. Results for nx = 11.12.410710 + 133.

The Trapezoid Rule Thegeneral algorithm for the trapezoid rule is give by Eq. 2 0. 30303030. Romberg The basic computational algorithms are presented as completely self-contained subroutines suitable for use in other programs. "x’. 0.27027027.f(i) end do call trap (ndim.1). 1 0. 9 / data (x(i). 3. f (9) data ndim. 26315789.32258065. calls subroutine main trap to implement solution.f(ndim) sum=f(1)+f(n) sum) c c c c c c .6. and prints the solution (Output6. (6. ndim = 9 in this example n number of data points x independent variable array.1 The trapezoid rule program.24): I = ½h(f0 + 2J] +2f2+ "" + 2f~-1 +fn) (6. the Program6.4. 0.27777778. n.9/ data (f(i). trapezoid rule integration dimension x(ndim). 3.1010) i.1020) sum stop i000 format (" Trapezoid rule’~’ ’/’ i’.8. x(i) f deigendent variable array.Numerical Integration 6.x. for implementing the trapezoid rule is presented in Program 6.9)/3.7.i=l.1000) do i=l.28571429.2.2f14. n / 9.114) A FORTRAN subroutine. 0. 0. sum) write (6.x.n. Simpson’s1/3 rule integration 3. f(i) sum value of the integral dimension x(9) . The trapezoid rule 2. 3.31250000. program main main 19rogram to illustrate numerical integration subroutines ndim array dimension.8) 1020 format (" "/" I =’. 7x. 0.i=l.n write (6.1. 6. 3.f14.1.25641026 / write (6.f.8 PROGRAMS 311 Three FORTRAN subroutines for numerical integration are presented in this section: 1.13x. 3.9) / 0. f.8) end subroutine trap (ndim. Input data and output statements are contained in a main(or driver) program written specifically to illustrate the use of each subroutine.8.x(i). 0. 3. subroutine trap. ’f’/’ ’) i010 format (i4.3.1.29411765. 3. Program defines the data set and prints it.

31250000 0. Subroutine simpson works essentially like subroutine trap discussed in Section 6.29411765 0. sum) 1000 format (’ Simpsons 1/3 rule’/" ’/’ i’.8. Onlythe statements in programmain which are different from programmainin Section 6. except Simpson’s rule is used instead of the trapezoid rule.13x. subroutine simpson.27027027 0.2.1.7x.f.2 Simpson’s 1/3 rule program program main main program to illustrate numerical integration subroutines call simpson (ndim. Simpson’s 1/3 Rule The general algorithm for Simpson’s1/3 rule is given by Eq.28571429 0.25641026 3 6..20000000 3 30000000 40000000 3 50000000 3 60000000 3 70000000 3 80000000 3 90000000 0.1 are presented. calls subroutine simpson to implement Simpson’s1/3 rule.22960636 f 0. and prints the solution. Output 6.2.1 Solution by the trapezoid rule Trapezoid i 1 2 3 4 5 6 7 8 9 I = rule x 3.’x’.’f’/" ’) end subroutine simpson (ndim.1. f.0 return end Chapter6 The data set used to illustrate subroutinetrap is taken from Example 6..x. + 4fn-I +fn) (6.8.35): I = ½h(f0 +4A + 2J~ +4J~ +. 1/3 Program main defines the data set and prints it. n.8.27777778 0.26315789 0. x.115) A FORTRAN subroutine.n-I sum=sum+2. Theoutput generated by the trapezoid rule programis presented in Output6.30303030 0. sum) c . (6.0 f ( i * end do sum=sum* (x (n) -x (1) )/float (n-l)/2. Program 6.10000000 3.2. for implementing Simpson’s 1/3 rule is presented in Program6.32258065 0.312 do i=2. n.

70000000 3.80000000 3. Eq.50000000 3. calls subroutine main romberg to implement the solution.3. (6.Numerical Integration C Simpson’s 1/3 rule integration dimension x (ndim) .8.22957446 f 0.114). (6.30303030 0.n-i. 0 sum4=O.3). Romberg Integration Romberg integration is based on extrapolation of the trapezoid rule. and prints the solution (Output 6.n _l[ [(h/2) -I (h)] (6. subroutine romberg.31250000 0.116) A FORTRAN subroutine.40000000 3.2 sum2=sum2 f ( i + end do do i=2.32258065 0.27777778 0. O’sum2+4.60000000 3. for implementing the procedure is presented in Program 6.2 sum4=sum4+f (i end do sum= (f (1) +2.10000000 3.2 Solution by Simpson’s 1/3 rule Simpsons i 1 2 3 4 5 6 7 8 9 I = 1/3 rule x 3. .8. 0 313 do i=3.1 are presented.n-I. f (ndim) sum2=O.28571429 0.30000000 3.25641026 6.3.29411765 0.2. The general extrapolation formula for Romberg integration is given by Eq.O*sum4+f (n) ) * (x(2)-x(1) return end The data set used to illustrate subroutines simpsonis taken from Example 6.26315789 0. The output generated by Simpson’s1/3 rule programis presented in Output 6.3. Output 6.20000000 3. Program defines the data set and prints it.51): 1 Error(h/2) -.27027027 0. Only the statements in programmain which are different from programmain in Section 6.90000000 0.

j-l) end do end do return end used to illustrate in Output 6. ’f’/" .314 Chapter 6 6.n-l.k sum=sum+£( i end do s (j. n.4)’/’ end subroutine romberg (ndim. s) write (6.8) 1020 format (’ ’/’ i’. 4 / call romberg (ndim. s (ndim. data ndim. O’sum+f (n)) end do Romberg extrapo l a t i on do ]=2. 1020) do i=l.2) ’. The c c c The data set output is presented .8x. ’s(i. s (9.j). s(i. f (ndim) . (s(i.num. j-l) -s (i.5. hum ex=float (2* (j-l)) den=2.13x.3 Romberg integration program Program c c program main main program to illustrate numerical integration subroutines s array of integrals and extrapolations. subroutine romberg is taken from Example 6. 9.num+l-i) end do stop i000 format (" Romberg integration’/" ’/’ i’. j) =s (i+l. 1 ) = (£ (I) +£ (n)) do j=2. I) = (f(1) +2. "s(i.) 1010 format (i3. x.x.3) ’. 7x.8x. f.j) dimensionx (9).num write (6. 6x. s) Romberg integration dimension x(ndim) . f.num. "s(i. f (9). "x’.8x.num / 9. n. 0 k=num+ l -j do i=l. j-l) + (s (i+l. 0 sum=O. n. trapezoid rule integration k=n dx=(x(n) -x(1) s (I. 1 ’s(i. 0 k=k/2 do i=k+l. num dx=dx/2.3.j=l.l) ". O**ex-l.1010) i.k s (i.4f14.

and MAPLE.22957445 0. also contain numerical integration algorithms. Least squares fit polynomials can be used for large sets of data or sets of rough data.Numerical Integration Output 6. Adaptive integration procedures are suggested to reduce the effort required to integrate widely varying functions. More sophisticated packages.27777778 0.26315789 0.22957478 0. The direct fit polynomial method works well for both equally spaced data and nonequally spaced data.23159636 0.50000000 3. Romberg integration.10000000 3. is developed. give simple integration formulas for equally spaced data.40000000 3. Many workstations and mainframe computers have such libraries attached to their operating systems.22957974 0.22957444 s(i.31250000 0. is discussed.32258065 0.4. These procedures are based on fitting approximating polynomials to the data and integrating the approximating polynomials. Many commercial software packages contain numerical integration algorithms. Extrapolation. which .28571429 0.30000000 3. Gaussian quadrature.8.3 Solution by Rombergintegration Romberg integration i 1 2 3 4 5 6 7 8 9 i 1 2 3 4 x 3. such as IMSL.29411765 0.90000000 s(i.60000000 3. The Newton-Cotes formulas.80000000 3. Some of the more prominent are Matlab and Mathcad.9 SUMMARY Procedures for the numerical integration of both discrete data and known functions are presented in this chapter. Methods of error estimation and error control are presented. Finally.20000000 3.27027027 0. the book Numerical Recipes [Press et al.2) 0. or the deferred approach to the limit.70000000 3.22957446 s(i.22970206 0.30303030 0.22957444 315 6. MATHEMATICA.25641026 s(i. MACSYMA.3) 0. 6. which is extrapolation of the trapezoid rule. Packages for Numerical Integration Numerouslibraries and soft’ware packages are available for numerical integration.22960636 f 0.4) 0. which are based on Newton foward-difference polynomials.l) 0.23008390 0. (1989)] contains numerous routines for numerical integration.

17.All of these integrals have exact solutions. 3. 24. it is likely that Romberg integration is the most efficient. . Derive Simpson’s 1/3 rule Apply Simpson’s 1/3 rule Derive Simpson’s 3/8 rule Apply Simpson’s 3/8 rule Describe the relative advantage and disadvantages of Simpson’s1/3 rule and Simpson’s3/8 rule Derive Newton-Cotesformulas of any order Apply Newton-Cotesformulas of any order Explain the concepts of error estimation. you should be able to: 1. but that yields no advantage over Romberg integration. 5. 4. 9. Simpson’s1/3 rule could be developed into an extrapolation procedure. 13. 15. 6. 23. Describe the general features of numerical integration Explain the procedure for numerical integration using direct fit polynomials Applydirect fit polynomialsto integrate a set of tabular data Describe the procedure for numerical integration using Newtonforwarddifference polynomials Derive the trapezoid rule Applythe trapezoid rule. which should be used for error analysis in the numerical problems. 12. 11. but the first extrapolation of Romberg integration gives comparable results. 10. error control. 2. Of all the methodsconsidered.316 Chapter6 increases the accuracy of integrating known functions wherethe samplinglocations can be chosen arbitrarily. and extrapolation Describe Romberg integration Apply Romberg integration Describe the concept of adaptive integration Developand apply a simple adaptive integration algorithm Explain the concepts underlying Gaussian quadrature Derive the two-point Gaussian quadrature formula Describe howto develop Gaussian quadrature formula for morethan two points Apply Gaussian quadrature Describe the process for numericallyevaluating multiple integrals Developa procedure for evaluating double integrals Applythe double integral numerical integration procedure EXERCISE PROBLEMS 6. 25. 21. After studying Chapter 6. 8. Subsequent extrapolations of Romberg integration increase the order at a fantastic rate.1 Thefollowingintegrals are used throughout chapterto illustrate numerical this integration methods. 16. Simpson’s rules are elegant and intellectually interesting. 19. An exampleof multiple integration is presented to illustrate the extension of the one-dimensionalintegration formulas to multiple dimensions. 14. is presented. 20. 18. 22. 7.

5933 7. intervals. Compute errors and the ratios of the errors.0000 x 2.6 1. Compute errors and the ratios the of the errors for the twointervals. Evaluateintegrals (D). 4.5 ~ dx (H) [1. and 8 intervals.Numerical Integration (X 3 + 317 3x2 + 2x + 1) dx (A) (C) + 2) dx (B) 1.4 2.6933 3.0 e 0. the 7*. the term range of integration denotes the entire integration range.1400 3. Evaluate integrals (G) and (H) by direct fit polynomialsof order 2 and order over the total range of integration.4 1.8 2.Compute errors and the ratios of the errors. Evaluate integrals (G) and (H) by the trapezoid rule for n : 1. breakingthe total range of integration into two equal intervals. breakingthe total range of integration into two equal intervals. and the wordincrement denotes a single increment Ax. Evaluateintegrals (D).6 2. The exact value is 5. (B). Compute errors and the the ratios of the errors for the twointervals.86420916. 6.5543 4. Repeatthe calculations.4 f(x) dx using the trapezoid rule with n = 1. the Table 1 Tabular Data x 0.8 1. the word interval denotes subdivisions of the total range of integration.5292 8. 4.6 0. and intervals. (E).1067 x 1. and (F) by direct fit polynomials order 2 and of 3 over the total range of integration. the 6.2 tanx .2 dx In the numerical integration problems. 3. 2.2 Direct Fit Polynomials 1. 2.1 lnx 1.1600 3. and (F) by the trapezoid rule for n = 1. . the 5*. intervals. 2. breakingthe total range of integration into two equal intervals.8100 4. Repeatthe calculations.3511 5. Compute errors and the ratios of the errors.2 f(x) 5.3 Newton-Cotes Formulas The TrapezoidRule Evaluateintegrals (A). 2. 6. Compute errors and the ratios of the errors. Compute errors and the ratios the of the errors for the twointervals.3886 3.4X q.0 f~.2 2.4 0. Repeatthe calculations.0000 3.7491 6. and (C) by the trapezoid rule for n = 1./0. 4. 2.8 f(x) 5.0 1. Consider the function f(x) tabulated in Table 1. (E).2X + 3) dx (D) o (5 + sinx) 0 (E) (G) ~ + cosx) dx (F) i1.0 f(x) 3. (B). Evaluate integrals (A).0 4 3 (5X q. Evaluate the integral 2. 4. and (C) by direct fit polynomialsof order order 3 over the total range of integration.

Compute errors and the ratio of the errors. 3/8 4 intervals] Compute errors and the ratio of the errors. .’~ f(x) dx using the trapezoid rule with n = 1.0900 7. and intervals.4782 3. 3/8 22.2. the 19". Evaluate the integral 2. Consider the function f(x) tabulated in Table 2. 4. and (F) using Simpson’s rule for n = 1.4 f(x) dx using Simpson’s rule with n = 1. 2. and 8 intervals.4 2. Consider the function f(x) tabulated in Table 2. Compute errors and the ratio of the errors. 2.4 0. Simpson’s3/8 Rule rulefor n= 1. 03). and (C) using Simpson’s rule for n = 1.~f(x) dx using Simpson’s rule with n = 1. 2.~6f(x) dx using Simpson’s rule with n = 1 and 2 intervals. 3/8 . and intervals.4 f(x)dx using the trapezoid rule with n = 1. Evaluate the integral f~. and 4 intervals. and 4 intervals. 4. 16. the Table 2 Tabular Data x 0. 2. f~. Simpson’s1/3 Rule 11. and 4 intervals. 1/3 4 intervals. The exact value is 8. and (F) using Simpson’s rule for n = 1. 2.6150 2. fi.Compute errors and the ratios of the errors.5000 7. 2.6 1.1400 7.Compute errors and the ratios of the errors. the 12".9686 6. Evaluate the integral 1.~8/(x) dx using Simpson’s1/3 rule with n = 1. Evaluate integrals iA). (E). Compute errors and the ratio of the errors. Consider the function f(x) tabulated in Table 2.2 2. Evaluate the integral J’~. Consider the function f(x) tabulated in Table 1.3100 x 1.0 . Consider the function f(x) tabulated in Table 2. Evaluate the integral ~. Compute errors and the ratio of the errors.0 1. and 8 intervals. 3/8 23. 1/3 4 intervals. Evaluate the integral 2O . 1/3 15.5025 5. 1/3 17". The exact value is 10. Evaluate integrals (D). 2. Evaluate integrals (G) and (H) using Simpson’s3/8 rule for n = 1.6631 1.8 2. Consider the function f(x) tabulated in Table 1.2500 x 2.~gf(x) dx using the trapezoid rule with n = 1. Evaluate the integral 16 ~[~.0 J~.9267 5.43592905.and 18. Evaluate the integral ~S/(x) dx using Simpson’s1/3 rule with n = 1.[~.4850 7.92130980. Consider the function f(x) tabulated in Table 2. the 13. Consider the function f(x) tabulated in Table 1. Evaluateintegrals (D). the 20.0 f(x) 6.4 1. and 4 intervals.4 f(x) dx using Simpson’s rule with n = 1 and 2 intervals. Evaluate the integral 2. the 21. (E).6 0.6 J’~. the 14.8 1. (B).2 f(x) 6.8 f(x) 4.6 2. 4.318 Chapter6 Consider the function f(x) tabulated in Table 1. the 10. 2. Evaluate integrals (G) and (H) using Simpson’s1/3 rule for n = 1. 2.6243 9. Evaluate the integral ¯ 28 . Consider the function f(x) tabulated in Table 1. . Evaluateintegrals (A). and (C) using Simpson’s 4 intervals. and 8 intervals. Compute errors and the ratio of the errors.~f(x) dx using Simpson’s rule with n = 1 and 2 intervals. .

Evaluate the following integrals using Romberg integration with four intervals. 35*. (E). Evaluate integrals (D). and (F) by the Newton-Cotes fourth-order formula with one and two intervals. Higher-Order Newton-Cotes Formulas 28. (B). where(a) k = 3. Evaluate the integral 2. and (c) k = 7? Verify your conclusion f~xk dx. 3/8 25. 26. and 8 intervals. (b) Dividethe total range of integration into two equal intervals and use Romberg integration in each interval. 36. (E). 38.5 Adaptive Integration 04 integration.0 < .0 ~. Compute errors and the ratios of the errors. (a) Let the first interval 40. 4. (E). (E). and 7. the fifth-order formula 30. Evaluate integrals (A). Evaluate integrals (A). Let the first interval be the total range of integration. Compute errors and the ratios of the the errors. and (C) using Romberg integration with intervals. Compute errors and the ratios of the errors. and (F) using Romberg integration with intervals. Evaluate integrals (G) and (H) using Romberg integration with four intervals. 6. (E).. 6. c0. Evaluate integrals (D). Evaluate integrals (D). Let the first subinterval in each interval be the total range of integration for that interval. Start with one interval.~ f(x) dx using Romberg integration with n = 1. (B). (b) k = 5. Evaluate integrals (D).5 -x (a) ~:/4 tanx dx (b) J0 dx(c) I~/4 e-x~ dx (d) ff’S(x5 . -2. Let the first interval be the total range of integration. Evaluate integrals (D). Evaluate integrals (G) and (H) using Simpson’s rules with n = 5 and increments. and (C) using Simpson’srules with n = 5 increments. 2. Evaluate the integral 2O . b 39. Consider the function f(x) tabulated in Table 2.~6f(x) dx using Simpson’s rule with n = 1 and 2 intervals.Numerical Integration 24. Compute errors and the ratios of the errors. 5. the 32. 27. 2. and 8 intervals. and (F) by the Newton-Cotes sixth-order formula with one and two intervals. Evaluate the integral J’~. 4. 37. 319 Consider the function f(x) tabulated in Table 2.4 f(x) dx using Romberg integration with n = 1.[~. 29. 6. Let the first interval be the total range of integration. EvaluateI = f2"2.o e-x dx using Romberg be the total range of integration. and (F) using Simpson’srules with n = 5 increments. Evaluate integrals (D). the 31. Derive the Newton-Cotesformulas for polynomials of degree n = 4.4 Extrapolation and Romberg Integration 33. and (F) by the Newton-Cotes with one and two intervals. Let the first interval be the total range of integration. Consider the function f(x) tabulated in Table 1.2) dx 34. (c) Divide the total range of integration into the two intervals. Which of a Romberg row table yields f2 f(x) dx exactly iff(x) is a polynomial of degreek. (E). and (F) by the Newton-Cotesseventh-order formula with one and two intervals.

2. the Evaluate integrals (G) and (H) by four-point Gaussianquadrature for n = 1. Compute errors and the ratios of the errors. (E). .4 and -0. and ~c) using Simpson’s rule. the Evaluate integrals (D). and repeat part (b). 2. Compute errors and the ratios of the errors.[~(4x3 . 45. and (c) using Simpson’s rule.4. Compute errors and the ratios of the errors. Compute errors and the ratios of the errors. and 4 intervals. and 4 intervals. (E). 2. Let the first subinterval in each interval be the total interval. (B). 51. (a)~6xe-X~dx (b)l~?coshxdx (c)l~sinhxdx f21e -Xsinxdx (a) Whatis the smallest k for whichk-point Gaussianquadrature is exact for polynomial of degree 7? (b) Verify the answer to part (a) by evaluating -f~ x7 dx.7 Multiple Integrals 2) 53.4 < x < 0. 2. 2. and (F) by two-point Gaussian quadrature n = 1. 47.320 Chapter6 x _< -0. Compute errors and the ratios of the errors. and (e) using three-point Gaussian quadraturefor the y integral and2the trapezoid rule for the x integral¯ ¯ o ¯ | x 56. (b) using the trapezoid rule. 41. the Evaluate integrals (G) and (H) by two-point Gaussian quadrature for n = 1. (B). Compute errors and the ratios of the errors. Compute errors and the ratios of the errors. 46*. 49*. (d) Comparethe errors incurred for the three cases. and 4. Evaluate the multiple integral f~ |f~e (x2 + 1/y) dy dx: (a) analytically. 1/3 r ¯ ¯ ¯ 55. and (F) by three-point Gaussian quadrature n = 1. Evaluate integrals (A). 48. Compare results with the exact the solutions. and 4 intervals. (c) using three-point Gaussianquadrature for both integrals. Evaluate the multiple integral j’~ J’0 ~ sin(x2 ÷y2)dxdy: (a) analytically. 2. 2. 50.2x2y+ 3xy dx dy: (a) analytically. and (F) by four-point Gaussian quadrature n = 1. Compute errors and the ratios of the errors. Compute errors and the ratios of the errors. (b) using the trapezoid rule.[13 In(x) dx using Romberg integration. and 4 intervals. and 4 intervals. 43*. and (C) by three-point Gaussian quadrature n = 1. Evaluate ! = . the Evaluate integrals (A). Evaluate the multiple ~ntegral J’-i f~ xy dy dx by the proceduresdescribed in the previous problem. 6. 3. the Evaluate integrals (D). and (C) by four-point Gaussian quadrature n = 1. (b) using the trapezoid rule for both integrals. and 4 intervals. 1/3 54. and 4 intervals. (d) using the trapezoidrule for the y integral and three-point Gaussian quadrature for the x integral. Let the first subintervalbe the total interval. 6. (E). (b) Dividethe total range of integration into two equal intervals and use Romberg integration in each interval. and 4 intervals. 44. Evaluatethe multiple integral J~_~l . 52. the Evaluate integrals (D). and 4 intervals. with n = 1. (a) Start with the total interval. the Evaluateintegrals (G) and (H) by three-point Gaussianquadrature for n = 1. the Evaluate the following integrals using k-point Gaussian quadrature for k = 2. and 4 intervals. the Evaluate integrals (A).6 Gaussian Quadrature 42. and (C) by two-point Gaussian quadrature n = 1. (B).

whereR is the area enclosed by the circle x2 + y2 = 1. 64.3. Use Cartesian coordinates. Implementthe Simpson’s1/3 rule programpresented in Section 6. Check out the programusing the given data set. 62. Evaluatethe integral j’fR exp[(x .8 Programs 321 57. . Implementthe trapezoid rule programpresented in Section 6.2.~_/)1/2] dy dx.. Check out the programusing the given data set. APPLIED PROBLEMS 63. Solve any of Problems11 to 17 using the program.Numerical Integration 6.8. Use Cartesian coordinates. 61. 59. 58. Implementthe Simpson’s3/8 rule programpresented in Section 6. Solve any of Problems18 to 24 using the program. 60. Checkout the programusing the given data set. Solve any of Problems4 to 10 using the program.8. Find the volumeof a circular pyramidwith height and base radius equal to 1.1.8.

The two classes of ODEs (i. then partial derivatives occur. and partial 323 . Thevariable y is used as a generic dependentvariable throughout Part II. In most problems in engineering and science. If morethan one independent variable exists. and are The two types of physical problems(i.6.. mostreal physical processes involve morethan one independentvariable.3. the class of the corresponding governing ODE. Ordinary differential equations are the subject of Part II. 11. (b) to discuss the relationship between the type of physical problem being solved.5. Mostreal physical processes are governedby differential equations. the type of numerical method and required.e. propagationproblemsand equilibrium problems) are discussed. Some general features of ordinary differential equations are discussed in Part II. 11. In general. and the correspondingdifferential equations are partial differential equations (PDEs). to Part II is devotedto the solution of ordinary differential equations. II. and (c) to present examplesto illustrate these concepts.4. 11.2. the independent variable is either time t or space x. The dependent variable depends on the physical problem being modeled..7.simplifying assumptions are madewhichreduce the PDEs ordinary differential equations (ODEs).e. however. In manycases. Theobjectivesof Part II are (a) to present the general features of ordinarydifferential equations. 11.II Ordinary Differential Equations II.1.2 GENERAL FEATURES OF ORDINARY DIFFERENTIAL EQUATIONS An ordinary differential equation (ODE) an equation stating a relationship betweena is function of a single independentvariable and the total derivatives of this function with respect to the independentvariable. I1. initial-value ODEs boundary-valueODEs) introduced. II. Introduction General Features of Ordinary Differential Equations Classification of OrdinaryDifferential Equations Classification of Physical Problems Initial-Value OrdinaryDifferential Equations Boundary-Value Ordinary Differential Equations Summary I1.1 INTRODUCTION Differential equations arise in all fields of engineering and science.

y) The general nth-order ODE y(t) has the form for (n~ (’-~) + .. which do not involve the dependent variable. then the ODE is nonlinear. denote nth-order differentiation. whereas y’ + ~ty = F(t) (II. y) is called the derivative function. Eq. A linear ODE one in whichall of the derivatives appear in linear form and none of is the coefficients dependson the dependentvariable. A nonhomogeneous differential equation contains additional terms. For simplicity of notation. y(t) or y(x).4) (II. The solution of an ordinary differential equation is that particular function. source terms. y’ + ~y = 0 (II. Partial differential equations are the subject are Part III of this book. knownas nonhomogeneous terms. first-order ODE. For example. D(t) or D(x).3) (II. etc. the solution of an ODE be expressed in closed form. y) (II. In the can majority of problems in engineering and science. variable-coefficient.324 Part II differential equations (PDEs) obtained.7) (II. differentiation usually will be denoted by the superscript "prime" notation: y.1). The coefficients maybe functions of the independentvariable. + a~y" + aly’ + aoy = F(t) anY + an_ly (II. is For example. in which case the ODE a variable-coefficient linear ODE. = dy dt Thus. first-order ODE. in of and satisfies the auxiliary conditions specified on the boundariesof the domain interest.5) is a linear.1) can be written y’ =f(t. 1) wheref(t. A homogeneousdifferential equation is one in which each term involves the dependent variable or one of its derivatives.2) wherethe superscripts (n). y’ + ~y = F(t) is a linear. respectively. (II. or the derivatives appear in a nonlinear form.. or forcing functions. The order of an ODE the order of the highest-order derivative in the differential is equation. that identically satisfies the ODE the domain interest. (n . yy’ + ay = 0 (y. of In a few special cases. For example. constant-coefficient.9) .8) are nonlinear first-order ODEs. the solution must be obtained by numerical methods. Such problemsare the subject of Part II. The general first-order ODE is ~ t = f(t.6) (II.)2 + ey = (II. the coefficients depend on the If dependent variable.

a wide variety of ordinary differential equations exists. first-order.where F(t) is the knownnonhomogeneous term. each of which is a function of the same single independent variable and one or more of the dependent variables. The number constants of integration of is equal to the order of the differential equation.the number of auxiliary conditions must equal the number constants of integration. the differential equation is a boundary-value ODE.3 CLASSIFICATION OF ORDINARY DIFFERENTIAL EQUATIONS Physical problemsare governedby many different ordinary differential equations.z) z’ =g(t. Eachproblemhas its ownspecial governingequation or equations and its ownpeculiarities whichmust be consideredindividually. the two coupled ODEs y’ =f(t. z) (II.y) ] (11.y)y= F(x) Thesetwo special cases are studied in the following sections. 11. useful insights into the general features of ODEs be obtained by studying two special cases.y. 11b) comprisea systemof two coupledfirst-order ordinary differential equations. The particular member the family of of solutions whichis of interest is determinedby auxiliary conditions. Obviously.y)y’ + Q(x. homogeneous ODE. The secondspecial case the general nonlinear second-order ODE: [ y" + P(x. or classes. depending the type on of auxiliary conditions specified.12) wheref(t. and the solution domainD(t) is open. and each of which is governed by an ordinary differential equation.13) . If all the auxiliary conditions are specified at the same value of the independent variable and the solution is to be marchedforward from that initial point. Thus. are (II. Initial-value ODEs solved by marchingnumerical methods.10) is a linear. of ordinary differential equations. Figure ILl illustrates the solution of an initial-value ODE. The general solution of a differential equation contains one or moreconstants of integration. and y’ + ~y = F(t) 325 (II. initial value of the The dependentvariable is specified at one value of the independentvariable. As illustrated in the preceding discussion. the auxiliary conditions If are specified at two different values of the independent variable. Thus. the end points or boundaries of the domainof interest. y. a family of solutions is obtained. the differential equation is an initial-value ODE. y) is a nonlinear function of the dependent variable y. There are two different types. The first special can case is the general nonlinear first-order ODE: { y’=f(t. nonhomogeneous ODE.1 la) (II. However. first-order. Such coupled sets of ordinary differential equations are called systems of ordinarydifferential equations. whichis the of sameas the order of the differential equation. Many practical problems involve several dependent variables.Ordinary Differential Equations is a linear.

2 illustrates the solution of a boundary-value ODE. 2. 11. 3. The order of the governingordinary differential equation maybe one or greater. . are specified at one value of the independent variable. Boundary-value ODEs can be solved by both marching numerical methods and equilibrium numerical methods. its owntype of auxiliary conditions. and the solution domain D(x) is closed. and its ownnumericalsolution methods.A clear understandingof these conceptsis essential if meaningfulnumerical solutions are to be obtained. x Figure II.1 Initial-value ODE.326 y(t)’ Part II YO’ to Figure H. the initial values.2 Boundary-value ODE. boundaryvalues of The the dependentvariable are specified at two values of the independent variable.that is. Theknown information. y(x) ~Y2 I Yl x 2 Figure II. Propagation problems Equilibrium problems Eigenproblems Each of these three types of physical problems has its ownspecial features. Propagation problems are initial-value problems in open domains in which the known information (initial values) are marchedforward in time or space from the initial state.4 CLASSIFICATION OF PHYSICAL PROBLEMS Physical problems fall into one of the followingthree general classifications: 1. The number initial values must be equal to the order of the differential of equation. its own particular type of ordinary differential equation. Propagation problems are governed by initial-value ordinary differential equations.

The numberof boundaryvalues must be equal to the order of the differential equation. 8 the Stefan-Boltzmann constant (5. x) marching problems.14) applies to manyproblems in engineering and science. The marching direction in a steady space marching problemis sometimes called the time-like direction. Equilibrium problemsare steady state problemsin closed domains. respectively.e.e.. 11.3 Solutiondomain propagation for problems. and a i s t he ambient .5. Equilibrium problems are boundary-value problems in closed domainsin which the knowninformation (boundary values) are specified at two different values of the independent variable.14) Equation (11.The eigenvaluesare to be determinedin addition to the correspondingconfiguration of the system.4 Solution domainfor equilibrium problems.3 illustrates the open solution domainsD(t) and D(x) associated with time marching and space marching propagation problems. t) marchingproblems or steady space (i. CloseddomainD(x) X 1 X2 X Figure II. and maybe greater.T4~) (II. the end points (boundaries) of the solution domain. (II. Figure II. Eigenproblems a special type of problemin which the solution exists only for are special values (i. Heattransfer from the lumped mass m to its surroundings by radiation is governed by the Stefan-Boltzmann law of radiation: 4 fir = Aea(T . The order of the governing differential equation must be at least 2.14) are illustrated for the problem transient of heat transfer by radiation from a lumpedmassto its surroundingsand the transient motion of a mass-damper-springsystem. Propagation problems maybe unsteady time (i. eigenvalues) of a parameterof the problem... Equilibrium problemsare governedby boundary-value ordinary differential equations.Ordinary Differential Equations Open domain D(t)[or D(x)] 0 t [or x] 327 Figure]I.67 x 10-8 j/m2-K4-s).15) where6r is the heat transfer rate (J/s). the general features of Eq.5 INITIAL-VALUE ORDINARYDIFFERENTIAL EQUATIONS A classical exampleof an initial-value ODE the general nonlinear first-order ODE: is l y’=f(t. rr is the emissivity of the body (dimensionless). Considerthe lumped massmillustrated in Figure II. Figure 11. whichis the ratio of the actual radiation to the radiation from a black body.e. In the following discussion. T is the internal temperature of the lumped mass (K).y) y(to)=Yol (11. and the correspondingcoordinate is called the time-like coordinate. Ais the surface area of the lumped mass (m2).4 illustrates the closed solution domainD(x) associated with equilibrium problems.

E quation (II.-Or = -A~r( T4 . ApplyingNewton’ssecond law of motion. 17) is required so that the rate of changeof stored energy negative whenT is greater than T For constant mand C. Eq. Thus. 01. T(0) = 0.D = Ma = MV’ = My" (I1. 17) The minussign in Eq. 19) Consider the case where the temperature of the surroundings is constant and the initial temperatureof the lumpedmassis T(0. the initial conditions for Eq. 14). -. (II. y(0. 2F = ma.21) are V(0.mCT (II.20) is a nonlinear order initial-value ODE. Thus.20) is in the general formof Eq.e.0 and y(0.5 Heattransfer by radiation froma lumpedmass.20) Equation (II.21) where T is the thrust developedby the rocket motor (N). and y is the altitude of the rocket (m).0)= Y0.22) ~ T(O) = O / ma -~ 4) ~r=As(~(T4-Ta Figure11. and the initial elevation. a is the acceleration of the rocket (m/sZ).. V is the velocity of the rocket (m/s).6.Mg .0) = 0. the temperature of the surroundings). Equation (II.20) i s a n example o a nonlinear fi rst-order in itial-value OD f An exampleof a higher-order initial-value ODE given by the nonlinear secondis order ODE governingthe vertical flight of a rocket. (II. V(0. Mis the instantaneous mass the rocket (kg).= T’ = -a(T4 4) T~ dt where c~-Aea mC (II. is zero.18) II. The physical systemis illustrated in Figure II. whichdepends the altitude y on D is the aerodynamic drag (N). (II.17) can be written a. is zero. el(taCT) dt .0) = 0.0) = V0. 16) where m is the mass of the lumpedmass (kg) and C is the specific heat of the material (J/kg-K). An energy balance states that the rate at whichthe energy stored in the lumped masschangesis equal to the rate at whichheat is transferred to the surroundings.~) =f(t. T) T(0) T0J (II.0) T(t) = (II. g is the acceleration of gravity (m/se). The initial velocity. yields ZF = T .0) = y’(0. whichdescribes The the temperature history of the lumped mass corresponding to the initial condition. .328 Part II temperature (K) (i.20) is the function T(t). solution of Eq. (II. T i nitial-value p roblem is s tated he as follows: [ T’= -e(T 4 .T4=) (II. The energy E stored in lumped mass is given by E ---.

In general, the thrust T is a variable, which depends on time and altitude. The instantaneous mass M is given by

    M(t) = M_0 - ∫₀ᵗ ṁ(t) dt    (II.23)

where M_0 is the initial mass of the rocket (kg) and ṁ(t) is the instantaneous mass flow rate being expelled by the rocket (kg/s). The instantaneous aerodynamic drag D is given by

    D(ρ, V, y) = C_D(ρ, V, y) (1/2) ρ(y) A V²    (II.24)

where C_D is an empirical drag coefficient (dimensionless), which depends on the rocket geometry, the rocket velocity V, and the properties of the atmosphere at the altitude y (m); ρ is the density of the atmosphere (kg/m³), which depends on the altitude y (m); and A is the cross-sectional frontal area of the rocket (m²).

Combining Eqs. (II.21) to (II.24) yields the following second-order nonlinear initial-value ODE:

    y'' = T / (M_0 - ∫₀ᵗ ṁ dt) - g(y) - C_D(ρ, V, y) (1/2) ρ(y) A V² / (M_0 - ∫₀ᵗ ṁ dt)
    y(0.0) = 0.0  and  y'(0.0) = 0.0    (II.25)

Consider a simpler model where T, ṁ, and g are constant, and the aerodynamic drag D is neglected. In that case, Eq. (II.25) becomes

    y'' = T / (M_0 - ṁ t) - g    y(0.0) = 0.0  and  y'(0.0) = 0.0    (II.26)

The solution to Eq. (II.25) or (II.26) is the function y(t), which describes the vertical motion of the rocket as a function of time t.

Figure II.6 Vertical flight of a rocket (y(t) = ? and V(t) = ?).

In summary, two classical examples of initial-value ODEs are represented by Eqs. (II.20) and (II.26). Equations (II.25) and (II.26) are examples of second-order initial-value ODEs. Many more complicated examples of initial-value ODEs arise in the fields of engineering and science. Higher-order ODEs and systems of ODEs occur frequently.

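The simplified rocket model, Eq. (II.26), integrates in closed form once for the velocity: V(t) = -(T/ṁ) ln(1 - ṁt/M_0) - gt. The following minimal sketch (the thrust, mass, and flow-rate values are assumed for illustration and are not from the text) checks that closed form against a crude numerical integration:

```python
# Sketch of Eq. (II.26) with assumed data: T = 10000 N, M0 = 100 kg,
# mdot = 5 kg/s, g = 9.8 m/s^2. Not from the text; for illustration only.
import math

T, M0, mdot, g = 10000.0, 100.0, 5.0, 9.8

def V_exact(t):
    # Closed-form integral of V' = T/(M0 - mdot*t) - g with V(0) = 0
    return -(T / mdot) * math.log(1.0 - mdot * t / M0) - g * t

# Crude numerical check: march V' forward with a very small step
dt, t, V = 1.0e-4, 0.0, 0.0
while t < 10.0 - 0.5 * dt:
    V += dt * (T / (M0 - mdot * t) - g)
    t += dt

print(V_exact(10.0), V)   # the two values should agree closely
```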
Numerical procedures for solving initial-value ODEs are presented in Chapter 7.

II.6 BOUNDARY-VALUE ORDINARY DIFFERENTIAL EQUATIONS

A classical example of a boundary-value ODE is the general second-order ODE:

    y'' + P(x, y) y' + Q(x, y) y = F(x)    y(x1) = y1  and  y(x2) = y2    (II.27)

Equation (II.27) applies to many problems in engineering and science. In the following discussion, the general features of Eq. (II.27) are illustrated for the problem of steady one-dimensional heat diffusion (i.e., conduction) in a rod.

Consider the constant cross-sectional-area rod illustrated in Figure II.7. Heat diffusion transfers energy along the rod, and energy is transferred from the rod to the surroundings by convection. An energy balance on the differential control volume yields

    q(x) = q(x + dx) + q_c(x)    (II.28)

which can be written as

    q(x) = q(x) + (d/dx)[q(x)] dx + q_c(x)    (II.29)

which yields

    (d/dx)[q(x)] dx + q_c(x) = 0    (II.30)

Heat diffusion is governed by Fourier's law of conduction, which states that

    q(x) = -kA dT/dx    (II.31)

where q(x) is the energy transfer rate (J/s), k is the thermal conductivity of the solid (J/s-m-K), A is the cross-sectional area of the rod (m²), and dT/dx is the temperature gradient (K/m). Heat transfer by convection is governed by Newton's law of cooling:

    q_c(x) = hA_s(T - T_a)    (II.32)

where h is an empirical heat transfer coefficient (J/s-m²-K), A_s is the surface area of the rod (A_s = P dx, m²), P is the perimeter of the rod (m), and T_a is the ambient temperature (K) (i.e., the temperature of the surroundings). Substituting Eqs. (II.31) and (II.32) into Eq. (II.30) yields

    -(d/dx)(kA dT/dx) dx + h(P dx)(T - T_a) = 0    (II.33)

For constant k, A, and P, Eq. (II.33) gives

    d²T/dx² - (hP/kA)(T - T_a) = 0    (II.34)

which can be written as

    T'' - α²T = -α²T_a    (II.35)

where α² = hP/kA.

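Equation (II.35) has the closed-form general solution T(x) = T_a + C1 e^(αx) + C2 e^(-αx); the two constants follow from the two boundary values. A minimal sketch (the values of α², T_a, and the boundary temperatures below are assumed for illustration, not taken from the text):

```python
# Sketch: exact solution of T'' - alpha^2 T = -alpha^2 Ta with two
# boundary values, via T(x) = Ta + C1*exp(a*x) + C2*exp(-a*x).
import math

alpha, Ta = 2.0, 0.0                     # assumed alpha^2 = hP/kA = 4.0
x1, T1, x2, T2 = 0.0, 100.0, 1.0, 0.0    # assumed boundary conditions

# Solve the 2x2 linear system given by the two boundary conditions
a11, a12, b1 = math.exp(alpha * x1), math.exp(-alpha * x1), T1 - Ta
a21, a22, b2 = math.exp(alpha * x2), math.exp(-alpha * x2), T2 - Ta
det = a11 * a22 - a12 * a21
C1 = (b1 * a22 - b2 * a12) / det
C2 = (a11 * b2 - a21 * b1) / det

def T(x):
    return Ta + C1 * math.exp(alpha * x) + C2 * math.exp(-alpha * x)

print(T(x1), T(0.5), T(x2))   # endpoints reproduce T1 and T2
```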
Figure II.7 Steady heat conduction in a rod (energy rates q(x), q(x+dx), and q_c(x); end temperatures T1 and T2).

Equation (II.35) is a second-order boundary-value ODE. Equation (II.35) is in the general form of Eq. (II.27). Equation (II.35) is an example of a second-order linear boundary-value problem. The solution of Eq. (II.35) is the function T(x), which describes the temperature distribution in the rod corresponding to the boundary conditions

    T(x1) = T1  and  T(x2) = T2    (II.36)

An example of a higher-order boundary-value ODE is given by the fourth-order ODE governing the deflection of a laterally loaded symmetrical beam. The physical system is illustrated in Figure II.8. Bending takes place in the plane of symmetry, which causes deflections in the beam. The neutral axis of the beam is the axis along which the fibers do not undergo strain during bending. When no load is applied (i.e., neglecting the weight of the beam itself), the neutral axis is coincident with the x axis. When a distributed load q(x) is applied, the beam deflects, and the neutral axis is displaced, as illustrated by the dashed line in Figure II.8. The shape of the neutral axis is called the deflection curve.

As shown in many strength of materials books (e.g., Timoshenko, 1955), the differential equation of the deflection curve is

    E I(x) d²y/dx² = -M(x)    (II.37)

where E is the modulus of elasticity of the beam material, I(x) is the moment of inertia of the beam cross section, which can vary along the length of the beam, and M(x) is the bending moment due to transverse forces acting on the beam, which can vary along the length of the beam.

Figure II.8 Deflection of a beam.

The bending moment M(x) is related to the shearing force V(x) acting on each cross section of the beam as follows:

    dM(x)/dx = V(x)    (II.38)

The shearing force V(x) is related to the distributed load q(x) as follows:

    dV(x)/dx = -q(x)    (II.39)

Combining Eqs. (II.37) to (II.39) yields the differential equation for the beam deflection curve:

    E I(x) d⁴y/dx⁴ = q(x)    (II.40)

Equation (II.40) requires four boundary conditions. For a horizontal beam of length L,

    y(0.0) = y(L) = 0.0    (II.41)

For a beam pinned (i.e., hinged) at both ends,

    y''(0.0) = y''(L) = 0.0    (II.42)

For a beam fixed (i.e., clamped) at both ends,

    y'(0.0) = y'(L) = 0.0    (II.43)

For a beam cantilevered (i.e., free) at either end,

    y'' = 0.0  or  y''' = 0.0    (II.44)

Any two combinations of these four boundary conditions can be specified at each end. Equation (II.40) is a linear example of the general nonlinear fourth-order boundary-value ODE:

    y'''' = f(x, y, y', y'', y''')    (II.45)

which requires four boundary conditions at the boundaries of the closed physical domain.

II.7 SUMMARY

Examples of several ordinary differential equations that arise in engineering and science are presented in this section. The general features of ordinary differential equations are discussed. Ordinary differential equations are classified as initial-value differential equations or boundary-value differential equations according to whether the auxiliary conditions are specified at a single initial time (or location) or at two locations, respectively. The classical example of a first-order nonlinear initial-value ODE is

    y' = f(t, y)    y(t0) = y0    (II.46)

Chapter 7 presents several numerical methods for solving Eq. (II.46) and illustrates those methods with numerical examples. The classical example of a second-order nonlinear boundary-value ODE is

    y'' + P(x, y) y' + Q(x, y) y = F(x)    y(x1) = y1  and  y(x2) = y2    (II.47)

Chapter 8 presents several numerical methods for solving Eq. (II.47) and illustrates those methods with numerical examples.

7 One-Dimensional Initial-Value Ordinary Differential Equations

7.1. Introduction
7.2. General Features of Initial-Value ODEs
7.3. The Taylor Series Method
7.4. The Finite Difference Method
7.5. The First-Order Euler Methods
7.6. Consistency, Order, Stability, and Convergence
7.7. Single-Point Methods
7.8. Extrapolation Methods
7.9. Multipoint Methods
7.10. Summary of Methods and Results
7.11. Nonlinear Implicit Finite Difference Equations
7.12. Higher-Order Ordinary Differential Equations
7.13. Systems of First-Order Ordinary Differential Equations
7.14. Stiff Ordinary Differential Equations
7.15. Programs
7.16. Summary
Problems

Examples
7.1. The Taylor series method
7.2. The explicit Euler method
7.3. The implicit Euler method
7.4. Consistency and order analysis of the explicit Euler FDE
7.5. Linearization of a nonlinear ODE
7.6. Stability analysis of the Euler methods
7.7. The modified midpoint method
7.8. The modified Euler method
7.9. Consistency and order analysis of the fourth-order Runge-Kutta method
7.10. Stability analysis of the fourth-order Runge-Kutta method
7.11. The fourth-order Runge-Kutta method
7.12. The extrapolated modified midpoint method
7.13. Stability analysis of the fourth-order Adams-Bashforth FDE

7.14. The fourth-order Adams-Bashforth-Moulton method
7.15. Time linearization
7.16. Newton's method
7.17. Reduction of a second-order ODE to two first-order ODEs
7.18. Two coupled first-order ODEs
7.19. The stiff ODE by the explicit Euler method
7.20. The stiff ODE by the implicit Euler method

7.1 INTRODUCTION

Most people have some physical understanding of heat transfer due to its presence in many aspects of our daily life. Consequently, the nonlinear first-order ODE governing unsteady radiation heat transfer from a lumped mass, illustrated at the top of Figure 7.1 and discussed in Section II.5, is considered in this chapter to illustrate finite difference methods for solving first-order initial-value ODEs. That equation, Eq. (II.20), is

    T' = -α(T⁴ - T_a⁴) = f(t, T)    T(0.0) = T_0 = 2500.0 K    (7.1)

where α is defined in Section II.5. The vertical flight of a rocket, illustrated at the bottom of Figure 7.1 and discussed in Section II.5, is considered in this chapter to illustrate finite difference methods for solving higher-order initial-value ODEs. That problem is discussed in Section 7.1 and solved in Sections 7.12 and 7.13.

The general features of initial-value ordinary differential equations (ODEs) are discussed in Section II.5. In that section it is shown that initial-value ODEs govern propagation problems, which are initial-value problems in open domains. Consequently, initial-value ODEs are solved numerically by marching methods. This chapter is devoted to presenting the basic properties of finite difference methods for solving initial-value (i.e., propagation) problems and to developing several specific finite difference methods.

The exact solution to Eq. (7.1) can be obtained by separating variables, expressing 1/(T⁴ - T_a⁴) by a partial fraction expansion, integrating, and applying the initial condition. The result is

    t = (1/(4αT_a³)) ln[(T + T_a)/(T - T_a)] + (1/(2αT_a³)) tan⁻¹(T/T_a) + constant    (7.2)

Evaluating the constant from T(0.0) = T_0 gives

    t = (1/(4αT_a³)) ln[(T_0 - T_a)(T + T_a) / ((T_0 + T_a)(T - T_a))]
        + (1/(2αT_a³)) [tan⁻¹(T/T_a) - tan⁻¹(T_0/T_a)]    (7.3)

For T_0 = 2500.0 K, T_a = 250.0 K, and α = 4.0 × 10⁻¹² (K³-s)⁻¹, Eq. (7.3) becomes

    t = 4000 ln[(2250)(T + 250) / ((2750)(T - 250))] + 8000 [tan⁻¹(T/250) - tan⁻¹(10)]

The exact solution at selected values of time, obtained by solving Eq. (7.3) for T by the secant method, is tabulated in Table 7.1 and illustrated in Figure 7.2.

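Equation (7.3) gives t as an explicit function of T, so evaluating the exact solution at a chosen time requires a root solve, exactly as stated above. A minimal sketch of that procedure (the bracketing guesses and tolerance are assumptions for illustration):

```python
# Sketch: evaluate the exact solution of Eq. (7.1) by solving Eq. (7.3)
# for T at a given t with the secant method, as done for Table 7.1.
import math

alpha, Ta, T0 = 4.0e-12, 250.0, 2500.0

def t_of_T(T):
    # Eq. (7.3): the time at which the temperature equals T
    return (math.log((T0 - Ta) * (T + Ta) / ((T0 + Ta) * (T - Ta)))
            / (4.0 * alpha * Ta**3)
            + (math.atan(T / Ta) - math.atan(T0 / Ta)) / (2.0 * alpha * Ta**3))

def T_exact(t, x0=2000.0, x1=2400.0, tol=1.0e-10):
    # Secant iteration on g(T) = t_of_T(T) - t
    g0, g1 = t_of_T(x0) - t, t_of_T(x1) - t
    while abs(x1 - x0) > tol and g1 != g0:
        x2 = x1 - g1 * (x1 - x0) / (g1 - g0)
        x0, g0, x1, g1 = x1, g1, x2, t_of_T(x2) - t
    return x1

print(T_exact(5.0))   # should reproduce 2005.41636581 K from Table 7.1
```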
The objective of a finite difference method for solving an ordinary differential equation (ODE) is to transform a calculus problem into an algebra problem by:

1. Discretizing the continuous physical domain into a discrete finite difference grid
2. Approximating the exact derivatives in the ODE by algebraic finite difference approximations (FDAs)
3. Substituting the FDAs into the ODE to obtain an algebraic finite difference equation (FDE)
4. Solving the resulting algebraic FDE

Figure 7.1 Heat transfer by radiation and rocket trajectory. (Top: q_r = A ε σ (T⁴ - T_a⁴), dT/dt = -α(T⁴ - T_a⁴), T(0) = T_0, T(t) = ? Bottom: ΣF = T - Mg - D = Ma = MV' = My'', y(0.0) = 0.0 and V(0.0) = 0.0, y(t) = ? and V(t) = ?)

Table 7.1 Exact Solution of the Radiation Problem

t, s      T(t), K            t, s      T(t), K
0.0       2500.00000000      6.0       1944.61841314
1.0       2360.82998845      7.0       1890.58286508
2.0       2248.24731405      8.0       1842.09450785
3.0       2154.47079576      9.0       1798.22786679
4.0       2074.61189788      10.0      1758.26337470
5.0       2005.41636581

Figure 7.2 Exact solution of the radiation problem (T, K, from 2500 down to about 1700, versus time t, s, from 0 to 10).

Numerous initial-value ordinary differential equations arise in engineering and science. Single ODEs governing a single dependent variable arise frequently, as do coupled systems of ODEs governing several dependent variables. Initial-value ODEs may be linear or nonlinear, first- or higher-order, and homogeneous or nonhomogeneous. In this chapter, the majority of attention is devoted to the general nonlinear first-order ordinary differential equation (ODE):

    y' = dy/dt = f(t, y)    y(t0) = y0    (7.4)

where y' denotes the first derivative and f(t, y) is the nonlinear derivative function. The solution to Eq. (7.4) is the function y(t). This function must satisfy an initial condition at t = t0, y(t0) = y0. In most physical problems, the initial time is t = 0.0 and y(t0) = y(0.0). The solution domain is open; that is, the independent variable t has an unspecified (i.e., open) final value.

The objective of Chapter 7 is to develop finite difference methods for solving initial-value ODEs such as Eq. (7.4). Several finite difference methods for solving Eq. (7.4) are developed in this chapter, and procedures for solving higher-order ODEs and systems of ODEs, based on the methods for solving Eq. (7.4), are discussed. Three major types of methods are considered:

1. Single-point methods
2. Extrapolation methods
3. Multipoint methods

Single-point methods advance the solution from one grid point to the next grid point using only the data at the single grid point. Several examples of single-point methods are presented. The most significant of these is the fourth-order Runge-Kutta method, which is presented in Section 7.7. Extrapolation methods evaluate the solution at a grid point for several values of grid size and extrapolate those results to obtain a more accurate solution. The extrapolated modified midpoint method is presented in Section 7.8 as an example of extrapolation methods.

Multipoint methods advance the solution from one grid point to the next grid point using the data at several known points. The fourth-order Adams-Bashforth-Moulton method is presented in Section 7.9 as an example of multipoint methods. These three types of methods can be applied to solve linear and nonlinear single first-order ODEs. The Gear method for solving stiff ODEs is presented in Section 7.14.

Figure 7.3 Organization of Chapter 7. (Initial-Value Ordinary Differential Equations; Taylor Series Method; Finite Difference Methods; Euler Methods; Consistency, Order, Stability, and Convergence; Single-Point Methods, Extrapolation Methods, Multipoint Methods; Nonlinear FDEs, Higher-Order ODEs, Systems of ODEs; Stiff ODEs; Programs; Summary.)

The organization of Chapter 7 is illustrated in Figure 7.3. The general features of initial-value ordinary differential equations (ODEs) are discussed first. The Taylor series method is then presented. This is followed by an introduction to finite difference methods. Two first-order methods are then developed: the explicit Euler method and the implicit Euler method. Definitions and discussion of the concepts of consistency, order, stability, and convergence are presented. The material then splits into a presentation of the three fundamentally different types of methods for solving initial-value ODEs: single-point methods, extrapolation methods, and multipoint methods. Then follows a discussion of nonlinear finite difference equations, systems of first-order ODEs, and higher-order ODEs. A brief introduction to stiff ODEs follows. Programs are then presented for the fourth-order Runge-Kutta method, the extrapolated modified midpoint method, and the fourth-order Adams-Bashforth-Moulton method. The chapter closes with a summary, which summarizes the contents of the chapter, discusses the advantages and disadvantages of the various methods, and lists the things you should be able to do after studying Chapter 7.

7.2 GENERAL FEATURES OF INITIAL-VALUE ODEs

Several general features of initial-value ordinary differential equations (ODEs) are presented in this section. The properties of the linear first-order ODE are discussed in some detail. The properties of nonlinear first-order ODEs, higher-order ODEs, and systems of ODEs are discussed briefly.

7.2.1 The General Linear First-Order ODE

The general nonlinear first-order ODE is given by Eq. (7.4). The general linear first-order ODE is given by

    y' + αy = F(t)    y(t0) = y0    (7.5)

where α is a real constant, which can be positive or negative. The exact solution of Eq. (7.5) is the sum of the complementary solution y_c(t) and the particular solution y_p(t):

    y(t) = y_c(t) + y_p(t)    (7.6)

The complementary solution y_c(t) is the solution of the homogeneous ODE:

    y_c' + α y_c = 0    (7.7)

The complementary solution, which depends only on the homogeneous ODE, describes the inherent properties of the ODE and the response of the ODE in the absence of external stimuli. The particular solution y_p(t) is the function which satisfies the nonhomogeneous term F(t):

    y_p' + α y_p = F(t)    (7.8)

The particular solution describes the response of the ODE to external stimuli specified by the nonhomogeneous term F(t). The complementary solution y_c(t) is given by

    y_c(t) = A e^(-αt)    (7.9)

which can be shown to satisfy Eq. (7.7) by direct substitution. The particular solution y_p(t) is given by

    y_p(t) = B0 F(t) + B1 F'(t) + B2 F''(t) + ...    (7.10)

where the terms F'(t), F''(t), etc. are the derivatives of the nonhomogeneous term. These terms continue until F^(i)(t) repeats its functional form or becomes zero. The coefficients B0, B1, etc. can be determined by substituting Eq. (7.10) into Eq. (7.8), grouping similar functional forms, and requiring that the coefficient of each functional form be zero so that Eq. (7.8) is satisfied for all values of the independent variable t.

The total solution of Eq. (7.5) is obtained by substituting Eqs. (7.9) and (7.10) into Eq. (7.6). Thus,

    y(t) = A e^(-αt) + y_p(t)    (7.11)

The constant of integration A can be determined by requiring Eq. (7.11) to satisfy the initial condition, y(t0) = y0, after the complete solution y(t) to the ODE has been obtained.

Consider the pair of ODEs:

    y' + αy = 0    y(0) = y0    (7.12)
    y' - αy = 0    y(0) = y0    (7.13)

where α is a positive real constant. The solutions to Eqs. (7.12) and (7.13) are

    y(t) = y0 e^(-αt)    (7.14)
    y(t) = y0 e^(αt)     (7.15)

For the ODE y' + αy = 0, the solution, y(t) = y0 e^(-αt), decays exponentially with time. This is a stable ODE. For the ODE y' - αy = 0, the solution, y(t) = y0 e^(αt), grows exponentially with time without bound. This is an unstable ODE. Any numerical method for solving these ODEs must behave in a similar manner. Equations (7.14) and (7.15) each specify a family of solutions, as illustrated in Figure 7.4. A particular member of either family is chosen by specifying the initial condition, y(0) = y0, as illustrated by the dashed curves in Figure 7.4.

7.2.2 The General Nonlinear First-Order ODE

The general nonlinear first-order ODE is given by

    y' = f(t, y)    y(t0) = y0    (7.16)

where the derivative function f(t, y) is a nonlinear function of y. The general features of the solution of a nonlinear first-order ODE in a small neighborhood of a point are similar to the general features of the solution of the linear first-order ODE at that point.

Figure 7.4 Exact solutions of the linear first-order homogeneous ODEs: (a) y' + αy = 0; (b) y' - αy = 0.

The nonlinear ODE can be linearized by expressing the derivative function f(t, y) in a Taylor series at an initial point (t0, y0) and dropping all terms higher than first order:

    f(t, y) = f0 + f_t|0 (t - t0) + f_y|0 (y - y0) + ...    (7.17a)

which can be written as

    f(t, y) = (f0 - f_t|0 t0 - f_y|0 y0) + f_t|0 t + f_y|0 y + ...    (7.17b)

Thus,

    f(t, y) = -αy + F(t) + higher-order terms    (7.18)

where α and F(t) are defined as follows:

    α = -f_y|0    (7.19)
    F(t) = (f0 - f_t|0 t0 - f_y|0 y0) + f_t|0 t    (7.20)

Truncating the higher-order terms and substituting Eqs. (7.18) to (7.20) into the nonlinear ODE, Eq. (7.16), gives

    y' + αy = F(t)    y(t0) = y0    (7.21)

Most of the general features of the numerical solution of a nonlinear ODE can be determined by analyzing the linearized form of the nonlinear ODE. Consequently, Eq. (7.21) will be used extensively as a model ODE to investigate the general behavior of numerical methods for solving initial-value ODEs.

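As a concrete illustration of Eqs. (7.17) to (7.21), the sketch below linearizes the radiation derivative function f(T) = -α(T⁴ - T_a⁴) about T_0 = 2500 K (the variable names alpha_model and F0 are introduced here to keep the radiation constant distinct from the model-ODE coefficient; they are not the book's notation):

```python
# Sketch: linearize f(T) = -alpha*(T^4 - Ta^4) about T0, per Eqs. (7.17)-(7.21).
alpha, Ta, T0 = 4.0e-12, 250.0, 2500.0

def f(T):
    return -alpha * (T**4 - Ta**4)

f0 = f(T0)                        # f at the base point: -156.234375
fy0 = -4.0 * alpha * T0**3        # analytical df/dT at the base point: -0.25

alpha_model = -fy0                # Eq. (7.19): alpha of the model ODE
F0 = f0 - fy0 * T0                # Eq. (7.20); constant, since f has no explicit t

print(alpha_model, F0)            # 0.25 and 468.765625
# Linearized model near T = 2500 K:  T' + 0.25*T = 468.765625
```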
7.2.3 Higher-Order ODEs

Higher-order ODEs generally can be replaced by a system of first-order ODEs. Each higher-order ODE in a system of higher-order ODEs can be replaced by a system of first-order ODEs, thus yielding coupled systems of first-order ODEs. Consequently, Chapter 7 is devoted mainly to solving first-order ODEs. A discussion of the solution of higher-order ODEs is presented in Section 7.12.

7.2.4 Systems of First-Order ODEs

Systems of coupled first-order ODEs can be solved by solving all the coupled first-order differential equations simultaneously using methods developed for solving single first-order ODEs. A discussion of the solution of systems of coupled first-order ODEs is presented in Section 7.13.

7.2.5 Summary

Four types of initial-value ODEs have been considered:

1. The linear first-order ODE
2. The nonlinear first-order ODE
3. Higher-order ODEs
4. Systems of first-order ODEs

The general features of all four types of ODEs are similar. Consequently, Chapter 7 is devoted mainly to the numerical solution of nonlinear first-order ODEs.

The foregoing discussion considers initial-value (propagation) problems in time (i.e., t), which are unsteady time marching initial-value problems. Steady space marching (i.e., x) initial-value problems are also propagation problems. Both types of problems are initial-value (propagation) problems. Occasionally the space coordinate in a steady space marching problem is referred to as a time-like coordinate. All the results presented in Chapter 7 can be applied directly to steady space marching initial-value problems simply by changing t to x in all of the discussions. Throughout Chapter 7, for convenience, time marching initial-value problems are used to illustrate the general features, behavior, and solution of initial-value ODEs.

7.3 THE TAYLOR SERIES METHOD

In theory, the infinite Taylor series can be used to evaluate a function, given its derivative function and its value at some point. Consider the nonlinear first-order ODE:

    y' = f(t, y)    y(t0) = y0    (7.22)

The Taylor series for y(t) at t = t0 is

    y(t) = y(t0) + y'(t0)(t - t0) + (1/2) y''(t0)(t - t0)² + (1/6) y'''(t0)(t - t0)³
           + ... + (1/n!) y^(n)(t0)(t - t0)ⁿ + ...    (7.23)

Equation (7.23) can be written in the simpler appearing form:

    y(t) = y0 + y'|0 Δt + (1/2!) y''|0 Δt² + (1/3!) y'''|0 Δt³ + ...    (7.24)

where Δt = (t - t0) and the convention (Δt)ⁿ → Δtⁿ, not Δ(tⁿ), is employed for compactness.

Equation (7.24) can be employed to evaluate y(t) if y0 and the values of the derivatives at t0 can be determined. The value of y0 is the initial condition specified in Eq. (7.22). The first derivative y'|0 can be determined by evaluating the derivative function f(t, y) at t0:

    y'|0 = f(t0, y0)    (7.25a)

The higher-order derivatives in Eq. (7.24) can be determined by successively differentiating the lower-order derivatives, starting with y'. Recall that dy/dt = y'. Thus,

    y'' = (y')' = d(y')/dt = ∂y'/∂t + (∂y'/∂y)(dy/dt) = y'_t + y'_y y'    (7.25b)

    y''' = (y'')' = ∂y''/∂t + (∂y''/∂y) y'
         = y_tt + 2 y_ty y' + y_t y_y + (y_y)² y' + y_yy (y')²    (7.25c)

Higher-order derivatives become progressively more complicated. It is not practical to evaluate a large number of the higher-order derivatives. Consequently, the Taylor series must be truncated. The remainder term in a finite Taylor series is

    Remainder = (1/(n+1)!) y^(n+1)(τ) Δt^(n+1)    (7.26)

where t0 ≤ τ ≤ t. Truncating the remainder term yields a finite truncated Taylor series. Error estimation is difficult, since τ is unknown.

Example 7.1. The Taylor series method

Let's solve the radiation problem presented in Section 7.1 by the Taylor series method. Recall Eq. (7.1):

    T' = f(t, T) = -α(T⁴ - T_a⁴)    T(0.0) = T_0 = 2500.0 K    (7.27)

where α = 4.0 × 10⁻¹² (K³-s)⁻¹ and T_a = 250.0 K. The Taylor series for T(t) is given by

    T(t) = T_0 + T'|0 t + (1/2) T''|0 t² + (1/6) T'''|0 t³ + (1/24) T⁗|0 t⁴ + ...    (7.28)

where Δt = t - t0 = t. From Eq. (7.27),

    T'|0 = -α(T⁴ - T_a⁴)|0 = -(4.0 × 10⁻¹²)(2500.0⁴ - 250.0⁴) = -156.234375    (7.29)

Solving for the higher-order derivatives yields:

    T'' = (T')' = ∂T'/∂t + (∂T'/∂T) T' = 0 + (-4αT³) T' = -4αT³ T'
        = 4α²T³(T⁴ - T_a⁴) = 4α²(T⁷ - T³T_a⁴)    (7.30)

    T''|0 = -4(4.0 × 10⁻¹²)(2500.0³)(-156.234375) = 39.058594    (7.31)

    T''' = 4α²(7T⁶ - 3T²T_a⁴) T' = -4α³(7T¹⁰ - 10T⁶T_a⁴ + 3T²T_a⁸)    (7.32)

    T'''|0 = 4(4.0 × 10⁻¹²)²(7 × 2500.0⁶ - 3 × 2500.0² × 250.0⁴)(-156.234375)
           = -17.087402    (7.32a)

    T⁗ = -4α³(70T⁹ - 60T⁵T_a⁴ + 6TT_a⁸) T'    (7.33)

    T⁗|0 = 10.679169    (7.33a)

Substituting these values into Eq. (7.28) yields

    T(t) = 2500.0 - 156.234375 t + 19.529297 t² - 2.847900 t³ + 0.444965 t⁴    (7.34)

The exact solution and the solution obtained from Eq. (7.34) are tabulated in Table 7.2, where (1), (2), etc. denote the Taylor series through the first, second, etc. derivative terms. These results are also illustrated in Figure 7.5.

Table 7.2 Solution by the Taylor Series Method

t, s   T(t)(1), K   T(t)(2), K   T(t)(3), K   T(t)(4), K   T_exact(t), K
0.0    2500.00      2500.00      2500.00      2500.00      2500.00
1.0    2343.77      2363.29      2360.45      2360.89      2360.83
2.0    2187.53      2265.65      2242.87      2249.98      2248.25
3.0    2031.30      2207.06      2130.17      2166.21      2154.47
4.0    1875.06      2187.53      2005.27      2119.18      2074.61
5.0    1718.83      2207.06      1851.07      2129.18      2005.42

From Figure 7.5, it is obvious that the accuracy of the solution improves as the number of terms in the Taylor series increases. However, even with four terms, the solution is not very accurate for t > 2.0 s. The Taylor series method is not an efficient method for solving initial-value ODEs.

Even though the Taylor series method is not an efficient method for solving initial-value ODEs, it is the basis of many excellent numerical methods. Simply put, use the Taylor series method for a small step in the neighborhood of the initial point. Then reevaluate the coefficients (i.e., the derivatives) at the new point. Successive reevaluation of the coefficients as the solution progresses yields a much more accurate solution. Therein lies the basis for more accurate methods of solving ODEs.

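The derivative values used in Eq. (7.34) follow mechanically from the chain rule, so they can be checked numerically. A minimal sketch (the recurrence expressions below are the differentiated forms of Eq. (7.27); the printed reference values are those quoted above):

```python
# Sketch: build and evaluate the four-term Taylor polynomial, Eq. (7.34),
# from the derivatives of Eq. (7.1) at t = 0.
alpha, Ta, T0 = 4.0e-12, 250.0, 2500.0

d1 = -alpha * (T0**4 - Ta**4)                              # T'(0)
d2 = -4.0 * alpha * T0**3 * d1                             # T''(0)
d3 = -alpha * (12.0 * T0**2 * d1**2 + 4.0 * T0**3 * d2)    # T'''(0)
d4 = -alpha * (24.0 * T0 * d1**3 + 36.0 * T0**2 * d1 * d2
               + 4.0 * T0**3 * d3)                         # T''''(0)

def T_taylor(t):
    return T0 + d1*t + d2*t**2/2.0 + d3*t**3/6.0 + d4*t**4/24.0

print(d1, d2, d3, d4)   # -156.234375, 39.058594, -17.087402, 10.679169
print(T_taylor(1.0))    # about 2360.89 K; exact value is 2360.83 K
```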
Figure 7.5 Solution by the Taylor series method (exact solution compared with the first-, second-, third-, and fourth-order Taylor polynomials; T, K versus time t, s).

7.4 THE FINITE DIFFERENCE METHOD

The objective of a finite difference method for solving an ordinary differential equation (ODE) is to transform a calculus problem into an algebra problem by:

1. Discretizing the continuous physical domain into a discrete finite difference grid
2. Approximating the exact derivatives in the initial-value ODE by algebraic finite difference approximations (FDAs)
3. Substituting the FDAs into the ODE to obtain an algebraic finite difference equation (FDE)
4. Solving the resulting algebraic FDE

This concept is the underlying basis of most numerical methods for solving initial-value ODEs. Discretizing the continuous physical domain, approximating the exact derivatives by finite difference approximations, and developing finite difference equations are discussed in this section. Brief discussions of errors and smoothness close the section.

7.4.1 Finite Difference Grids

The solution domain D(t) [or D(x)] and a discrete finite difference grid are illustrated in Figure 7.6. The solution domain is discretized by a one-dimensional set of discrete grid points, which yields the finite difference grid. The finite difference solution of the ODE is obtained at these grid points. For the present, let these grid points be equally spaced, having uniform spacing Δt (or Δx). The resulting finite difference grid is illustrated in Figure 7.6.

Figure 7.6 Solution domain, D(t) [or D(x)], and discrete finite difference grid (grid points ..., n-2, n-1, n, n+1, ...).

Nonuniform grids in which Δt (or Δx) is variable are considered later in the chapter. The subscript n is used to denote the physical grid points; that is, grid point n corresponds to location t_n (or x_n) in the solution domain D(t) [or D(x)]. The total number of grid points is denoted by nmax. For the remainder of this chapter, time t will be chosen as the independent variable. Similar results hold for steady space marching problems in which space x is the independent variable. All the results in terms of time t can be applied directly to space x simply by changing t to x everywhere.

The dependent variable at grid point n is denoted by the same subscript notation that is used to denote the grid points themselves. Thus, the function y(t) at grid point n is denoted by y(t_n) = y_n. In a similar manner, derivatives are denoted by

    dy/dt (t_n) = y'(t_n) = y'_n

In the development of finite difference approximations of differential equations, a distinction must be made between the exact solution of the differential equation and the approximate solution of the finite difference equation, which is an approximation of the exact differential equation. For the remainder of this chapter, the exact solution of the ODE is denoted by an overbar on the symbol for the dependent variable [i.e., ȳ(t)], and the approximate solution is denoted by the symbol for the dependent variable without an overbar [i.e., y(t)]. Thus,

    ȳ(t) = exact solution
    y(t) = approximate solution    (7.35)

This very precise distinction between the exact solution of a differential equation and the approximate solution of a differential equation is required for studies of consistency, order, stability, and convergence, which are defined and discussed in Section 7.6.

7.4.2 Finite Difference Approximations

Now that the finite difference grid has been specified, finite difference approximations (FDAs) of the exact derivatives in the ODE must be developed. This is accomplished using the Taylor series approach developed in Section 5.4, where approximations of various types (i.e., forward, backward, and centered) of various orders (i.e., first order, second order, etc.) are developed for various derivatives (i.e., first derivative, second derivative, etc.). Those results are presented in Table 5.1.    (7.36)

Exact derivatives, such as ȳ', can be approximated at a grid point in terms of the values of ȳ at that grid point and adjacent grid points in several ways. Writing the Taylor series for ȳ_n+1 using grid point n as the base point gives

    ȳ_n+1 = ȳ_n + ȳ'|_n Δt + (1/2) ȳ''|_n Δt² + (1/6) ȳ'''|_n Δt³ + ...    (7.37)

where the convention (Δt)^m → Δt^m is employed for compactness. Equation (7.37) can be expressed as the Taylor polynomial with remainder:

    ȳ_n+1 = ȳ_n + ȳ'|_n Δt + ... + (1/m!) ȳ^(m)|_n Δt^m + R^(m+1)    (7.38a)

where the remainder term R^(m+1) is given by

    R^(m+1) = (1/(m+1)!) ȳ^(m+1)(τ) Δt^(m+1)    (7.38b)

where t_n ≤ τ ≤ t_n + Δt. The remainder term is simply the next term in the Taylor series evaluated at t = τ. If the infinite Taylor series is truncated after the mth derivative term to obtain an approximation of ȳ_n+1, the remainder term R^(m+1) is the error associated with the truncated Taylor series. In most cases, our main concern is the order of the error, which is the rate at which the error goes to zero as Δt → 0.

Solving Eq. (7.37) for ȳ'|_n yields

    ȳ'|_n = (ȳ_n+1 - ȳ_n)/Δt - (1/2) ȳ''|_n Δt - ...    (7.39)

If Eq. (7.39) is terminated after the first term on the right-hand side, it becomes

    ȳ'|_n = (ȳ_n+1 - ȳ_n)/Δt - (1/2) ȳ''(τ) Δt    (7.40)

Truncating the remainder term yields a finite difference approximation of ȳ'|_n, which will be denoted by y'|_n:

    y'|_n = (y_n+1 - y_n)/Δt    0(Δt)    (7.41)

where the 0(Δt) term is shown to remind us of the order of the remainder term which was truncated, which is the order of the approximation of ȳ'|_n. Equation (7.41) is a first-order forward-difference approximation of ȳ' at grid point n. The remainder term which has been truncated to obtain Eq. (7.41) is called the truncation error of the finite difference approximation of ȳ'|_n.

A first-order backward-difference approximation of ȳ' at grid point n+1 can be obtained by writing the Taylor series for ȳ_n using grid point n+1 as the base point and solving for ȳ'|_n+1. Thus,

    ȳ_n = ȳ_n+1 + ȳ'|_n+1 (-Δt) + (1/2) ȳ''|_n+1 (-Δt)² + ...    (7.42)

    ȳ'|_n+1 = (ȳ_n+1 - ȳ_n)/Δt + (1/2) ȳ''(τ) Δt    (7.43)

    y'|_n+1 = (y_n+1 - y_n)/Δt    0(Δt)    (7.44)

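The order statement 0(Δt) is easy to confirm numerically: halving Δt should roughly halve the error of the forward-difference approximation. A minimal sketch (the test function y = e^t is an assumed example, not from the text):

```python
# Sketch: confirm that the forward-difference FDA of y', Eq. (7.41), is
# first order by halving dt and watching the error drop by about half.
import math

def fda(dt, t=1.0):
    return (math.exp(t + dt) - math.exp(t)) / dt   # (y[n+1] - y[n]) / dt

exact = math.exp(1.0)                              # y'(1) for y = exp(t)
for dt in (0.1, 0.05, 0.025):
    print(dt, fda(dt) - exact)                     # error roughly ~ dt
```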
A second-order centered-difference approximation of ȳ' at grid point n+1/2 can be obtained by writing the Taylor series for ȳ_n+1 and ȳ_n using grid point n+1/2 as the base point, subtracting the two Taylor series, and solving for ȳ'|_n+1/2. Thus,

    ȳ_n+1 = ȳ_n+1/2 + ȳ'|_n+1/2 (Δt/2) + (1/2) ȳ''|_n+1/2 (Δt/2)² + (1/6) ȳ'''|_n+1/2 (Δt/2)³ + ...    (7.45a)

    ȳ_n = ȳ_n+1/2 + ȳ'|_n+1/2 (-Δt/2) + (1/2) ȳ''|_n+1/2 (-Δt/2)² + (1/6) ȳ'''|_n+1/2 (-Δt/2)³ + ...    (7.45b)

Subtracting Eq. (7.45b) from Eq. (7.45a) and solving for ȳ'|_n+1/2 yields

    ȳ'|_n+1/2 = (ȳ_n+1 - ȳ_n)/Δt - (1/24) ȳ'''(τ) Δt²    (7.46)

Truncating the remainder term yields

    y'|_n+1/2 = (y_n+1 - y_n)/Δt    0(Δt²)    (7.47)

Note that Eqs. (7.41), (7.44), and (7.47) are identical algebraic expressions. They all have the same numerical value. The differences in the three finite difference approximations are in the values of the truncation errors, given by Eqs. (7.40), (7.43), and (7.46), respectively. Equations (7.41), (7.44), and (7.47) can be applied to steady space marching problems simply by changing t to x in all the equations.

Occasionally a finite difference approximation of an exact derivative is presented without its development. In such cases, the truncation error and order can be determined by a consistency analysis using Taylor series. A finite difference approximation (FDA) of an exact derivative is consistent with the exact derivative if the FDA approaches the exact derivative as Δt → 0. For example, consider the following finite difference approximation (FDA):

    FDA = (y_n+1 - y_n)/Δt    (7.48)

The Taylor series for the approximate solution y(t) with base point n is

    y_n+1 = y_n + y'|_n Δt + (1/2) y''|_n Δt² + ...    (7.49)

Substituting the Taylor series for y_n+1 into the FDA, Eq. (7.48), gives

    FDA = y'|_n + (1/2) y''|_n Δt + ...    (7.50)

As Δt → 0, FDA → y'|_n, which shows that FDA is an approximation of the exact derivative ȳ' at grid point n. The order of FDA is 0(Δt). The exact form of the truncation error relative to grid point n is determined from Eq. (7.50). Choosing other base points for the Taylor series yields the truncation errors relative to those base points. Consistency is an important property of the finite difference approximation of derivatives.

7.4.3 Finite Difference Equations

Finite difference solutions of differential equations are obtained by discretizing the continuous solution domain and replacing the exact derivatives in the differential equation by finite difference approximations, to obtain a finite difference approximation of the differential equation. Such approximations are called finite difference equations (FDEs).

Consider the general nonlinear initial-value ODE:

    ȳ' = f(t, ȳ)    ȳ(t0) = ȳ0    (7.51)

Choose a finite difference approximation (FDA) for ȳ', for example, Eq. (7.41) or Eq. (7.44):

    y'|_n = (y_n+1 - y_n)/Δt    or    y'|_n+1 = (y_n+1 - y_n)/Δt    (7.52)

Substitute the FDA into the exact ODE, Eq. (7.51), and solve for y_n+1. From Eq. (7.41),

    y'|_n = (y_n+1 - y_n)/Δt = f(t_n, y_n) = f_n    (7.53a)

Solving Eq. (7.53a) for y_n+1 yields

    y_n+1 = y_n + Δt f(t_n, y_n) = y_n + Δt f_n    (7.54a)

From Eq. (7.44),

    y'|_n+1 = (y_n+1 - y_n)/Δt = f(t_n+1, y_n+1) = f_n+1    (7.53b)

Solving Eq. (7.53b) for y_n+1 yields

    y_n+1 = y_n + Δt f(t_n+1, y_n+1) = y_n + Δt f_n+1    (7.54b)

Equation (7.54a) is an explicit finite difference equation, since f_n does not depend on y_n+1, and Eq. (7.54a) can be solved explicitly for y_n+1. Equation (7.54b) is an implicit finite difference equation, since f_n+1 depends on y_n+1. If the ODE is linear, then f_n+1 is linear in y_n+1, and Eq. (7.54b) can be solved directly for y_n+1. If the ODE is nonlinear, then f_n+1 is nonlinear in y_n+1, and additional effort is required to solve Eq. (7.54b) for y_n+1.

7.4.4 Smoothness

Smoothness refers to the continuity of a function and its derivatives. The finite difference method of solving a differential equation employs Taylor series to develop finite difference approximations (FDAs) of the exact derivatives in the differential equation. If a problem has discontinuous derivatives of some order at some point in the solution domain, then FDAs based on the Taylor series may misbehave at that point.

Problems which do not have any discontinuities in the function or its derivatives are called smoothly varying problems. Problems which have discontinuities in the function or its derivatives are called nonsmoothly varying problems. For example, consider the vertical flight of a rocket illustrated in Section 7.1 and Example 7.18. When the rocket engine is turned off, the thrust drops to zero instantly. This causes a discontinuity in the acceleration of the rocket, which causes a discontinuity in the second derivative of the altitude y(t). The solution is not smooth in the neighborhood of the discontinuity in the second derivative of y(t).

At a discontinuity, single-point methods such as presented in Section 7.7, or extrapolation methods such as presented in Section 7.8, should be employed, since the step size in the neighborhood of a discontinuity can be chosen so that the discontinuity occurs at a grid point. Multipoint methods such as presented in Section 7.9 should not be employed in the neighborhood of a discontinuity in the function or its derivatives.

7.4.5 Errors

Five types of errors can occur in the numerical solution of differential equations:

1. Errors in the initial data (assumed nonexistent)
2. Algebraic errors (assumed nonexistent)
3. Truncation errors
4. Round-off errors
5. Inherited error

These errors, their interactions, and their effects on the numerical solution are discussed in this section. This discussion is equally relevant to the numerical solution of ODEs, which is the subject of Part II, and the numerical solution of PDEs, which is the subject of Part III.

A differential equation has an infinite number of solutions, depending on the initial conditions. Thus, a family of solutions exists, as illustrated in Figure 7.7 for the linear first-order homogeneous ODEs given by Eqs. (7.12) and (7.13). Figure 7.7a illustrates a family of converging solutions for a stable ODE. Figure 7.7b illustrates a family of diverging solutions for an unstable ODE. Any error in the numerical solution essentially moves the numerical solution to a different member of the solution family. An error in the initial condition or an algebraic error simply moves the solution to a different member of the solution family. Such errors are assumed to be nonexistent.

Truncation error is the error incurred in a single step caused by truncating the Taylor series approximations for the exact derivatives. Truncation error depends on the step size, 0(Δtⁿ). Truncation error decreases as the step size Δt decreases. Truncation errors propagate from step to step and accumulate as the number of steps increases.

Consider the converging family of solutions illustrated in Figure 7.7a. Since the members of the solution family converge as t increases, errors in the numerical solution of any type tend to diminish as t increases. By contrast, for the diverging family of solutions illustrated in Figure 7.7b, errors in the numerical solution of any type tend to grow as t increases.

Figure 7.7 Numerical solutions of the first-order linear homogeneous ODEs: (a) y' + αy = 0, y(t) = y0 e^(-αt); (b) y' - αy = 0, y(t) = y0 e^(αt).

Round-off error is the error caused by the finite word length employed in the calculations. Round-off error is more significant when small differences between large numbers are calculated. Most computers have either 32-bit or 64-bit word length, corresponding to approximately 7 or 13 significant decimal digits, respectively. Some computers have extended precision capability, which increases the number of bits to 128. Care must be exercised to ensure that enough significant digits are maintained in numerical calculations so that round-off is not significant. Round-off errors propagate from step to step and tend to accumulate as the number of calculations (i.e., steps) increases.

Inherited error is the sum of all accumulated errors from all previous steps. Assuming that algebraic errors are nonexistent and that round-off errors are negligible, inherited error is the sum of all previous truncation errors. On the first step, the total error at the first solution point is the local truncation error. The initial point for the second step is on a different member of the solution family. Another truncation error is made on the second step. This truncation error is relative to the exact solution passing through the first solution point. The total error at the second solution point is due both to the truncation error at the first solution point, which is now called inherited error, and the local truncation error of the second step. The presence of inherited error means that the initial condition for the next step is incorrect. This dual error source, inherited error and local truncation error, affects the solution at each step. Essentially, each step places the numerical solution on a different member of the solution family.

For a converging solution family, inherited error remains bounded as the solution progresses. For a diverging solution family, inherited error tends to grow as the solution progresses. One practical consequence of these effects is that smaller step sizes may be required when solving unstable ODEs which govern diverging solution families than when solving stable ODEs which govern converging solution families.

7.5 THE FIRST-ORDER EULER METHODS

The explicit Euler method and the implicit Euler method are two first-order finite difference methods for solving initial-value ODEs. Although these methods are too inaccurate to be of much practical value, they are useful to illustrate many concepts relevant to the finite difference solution of initial-value ODEs.

7.5.1 The Explicit Euler Method

Consider the general nonlinear first-order ODE:

    ȳ' = f(t, ȳ)    ȳ(t0) = ȳ0    (7.55)

Choose point n as the base point and develop a finite difference approximation of Eq. (7.55) at that point. The finite difference grid is illustrated in Figure 7.8, where the cross (i.e., ×) denotes the base point for the finite difference approximation of Eq. (7.55). The first-order forward-difference finite difference approximation of ȳ' is given by Eq. (7.40):

    ȳ'|_n = (ȳ_n+1 - ȳ_n)/Δt - (1/2) ȳ''(τ_n) Δt    (7.56)

Figure 7.8 Finite difference grid for the explicit Euler method (base point at n; grid points n and n+1).

Substituting Eq. (7.56) into Eq. (7.55) and evaluating f(t, ȳ) at point n gives

    (ȳ_n+1 - ȳ_n)/Δt - (1/2) ȳ''(τ_n) Δt = f(t_n, ȳ_n) = f̄_n    (7.57)

Solving Eq. (7.57) for ȳ_n+1 gives

    ȳ_n+1 = ȳ_n + Δt f̄_n + (1/2) ȳ''(τ_n) Δt² = ȳ_n + Δt f̄_n + 0(Δt²)    (7.58)

Truncating the remainder term, which is 0(Δt²), yields the explicit Euler finite difference equation (FDE):

    y_n+1 = y_n + Δt f_n    0(Δt²)    (7.59)

where the 0(Δt²) term is included as a reminder of the order of the local truncation error. Several features of Eq. (7.59) are summarized below.

1. The FDE is explicit, since f_n does not depend on y_n+1.
2. The FDE requires only one derivative function evaluation [i.e., f(t, y)] per step.
3. The FDE requires only one known point. Hence, it is a single-point method.
4. The error in calculating y_n+1 for a single step, the local truncation error, is 0(Δt²).
5. The global (i.e., total) error accumulated after N steps is 0(Δt). This result is derived in the following paragraph.

Equation (7.59) is applied repetitively to march from the initial point t_0 to the final point, t_N, as illustrated in Figure 7.9. The solution at point N is

    y_N = y_0 + Σ (y_n+1 - y_n) = y_0 + Σ Δy_n    (n = 0 to N-1)    (7.60)

The total truncation error is given by

    Error = Σ (1/2) ȳ''(τ_n) Δt² = N (1/2) ȳ''(τ) Δt²    (7.61)

where t_0 ≤ τ ≤ t_N. The number of steps N is related to the step size Δt as follows:

    N = (t_N - t_0)/Δt    (7.62)

Substituting Eq. (7.62) into Eq. (7.61) yields

    Error = (1/2)(t_N - t_0) ȳ''(τ) Δt = 0(Δt)    (7.63)

Consequently, the global (i.e., total) error of the explicit Euler FDE is 0(Δt), which is the same as the order of the finite difference approximation of the exact derivative ȳ'. The order of the global error is always equal to the order of the finite difference approximation of the exact derivative ȳ'. The result developed in the preceding paragraph applies to all finite difference approximations of first-order ordinary differential equations.

Figure 7.9 Repetitive application of the explicit Euler method (Error = y_N - ȳ(t_N)).

The algorithm based on the repetitive application of the explicit Euler FDE to solve initial-value ODEs is called the explicit Euler method.

Example 7.2. The explicit Euler method

Let's solve the radiation problem presented in Section 7.1 using Eq. (7.59). The derivative function is f(t, T) = -α(T⁴ - T_a⁴). The explicit Euler FDE is

    T_n+1 = T_n - Δt α(T_n⁴ - T_a⁴)    (7.64)

Let Δt = 2.0 s. For the first time step,

    f_0 = -(4.0 × 10⁻¹²)(2500.0⁴ - 250.0⁴) = -156.234375    (7.65a)

    T_1 = 2500.0 + 2.0(-156.234375) = 2187.531250    (7.65b)

These results and the results of subsequent time steps for t from 4.0 s to 10.0 s are summarized in Table 7.3. The results for Δt = 1.0 s are also presented in Table 7.3.

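The march in Example 7.2 is mechanical, so a short sketch suffices to reproduce it (this is an illustration of Eq. (7.59), not the book's FORTRAN program):

```python
# Sketch: the explicit Euler method, Eq. (7.59), applied to the radiation
# problem; with dt = 2.0 s this reproduces the T column of Table 7.3.
alpha, Ta = 4.0e-12, 250.0

def f(t, T):
    return -alpha * (T**4 - Ta**4)

dt, t, T = 2.0, 0.0, 2500.0
while t < 10.0 - 0.5 * dt:
    T = T + dt * f(t, T)      # y[n+1] = y[n] + dt * f[n]
    t = t + dt
    print(t, T)               # first step: T(2.0) = 2187.531250
```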
17.61) yields [Error= ½(tN--to)Y"(Z ) At = 0(At)I (7.0 0.1 using Eq.247314 2154. (7.397688 1768.680938 -81.354 Chapter7 wheret o < z <_t N.644115 L ~’.0 4.0 s.452290 -39.078626 2125.515406 . whichis the is sameas the order of the finite difference approximation the exact derivative ~’. as shown Eq. T) = -~(T4 . Thus.234375 -91.720030 -61.3.470796 1798.56). (7.531250 1 (7. (7. The derivative function isf(t.T4~) 1 Let At = 2.0 2.234375) = 2187.0 8.241628 -69.374478 1696.0s are summarized Table 7. in The result developedin the preceding paragraph applies to all finite difference approximations first-order ordinary differential equations.168688 -29.0 3.0 1.0 × 10-12)(2500.747960 2500.0s to 10.59). global (i.156.611898 1944.780668 1729.447199 -28. of Thealgorithm based on the repetitive application of the explicit Euler FDE solve to initial-value ODEs called the explicit Euler method.136553 2248.120.234375 .545606 -49.04 -.e.Ta4).04) -156. The explicit Euler method Let’s solve the radiation problempresented in Section 7.3 Solution by the Explicit Euler Method t.0 6.765625 2223.619260 . The number steps N is related to the step size At as follows: of N = tN --to At Substituting Eq.59).65b) (7.094508 1758.531250 2004.064363 -25.608926 -39.000000 2343.716064 .70.62) into Eq. fo ----.2.580490 -64.0(-156.370270 1875.813255 . The results for At = 1. tn+l 0.227867 1758.0 9.0 s are also presented in Table 7. whichis of 0(At). Theorder of the global error of is alwaysequal to the order of the finite difference approximation the exact derivative ~’.234375 T = 2500.65a) (7. For the first time step.339355 -65. (7.618413 1842.64) These results and the results of subsequent time steps for t from 4.62) Consequently.000000 2187.+1 .156.829988 2248. T.279058 1776.0 2.3.263375 Error -60.+I 2500.-(4. (7.0 10.263375 2360.247314 2074.686999 -97.0 + 2. The explicit Euler FDEis given by Eq.0 10.0 T.63) (7. in Table 7..At ~(T4~.073108 -29. is Example7. Tn+ = Tn .250. total) error of the explicit Euler FDE 0(At).

15 E(At = 1.Convergence necessary for is a finite difference methodto be of any use in solving a differential equation.0 E(At = 1.An exampleof an implicit FDE presented in the next section.66a) and (7. Theerrors are all negative.0) (7. at t = 10. FromEq. 0(At).0) -61. The solution for the smaller step size is moreaccurate than the solution for the larger step size. ~).0 is achievedonly in the limit as At --+ 0.5. This is due to the large first-order.0 due to the finite step size. Convergence is discussed in moredetail in Section 7.6. Thefinal feature of the explicit Euler method whichis illustrated in Table7.15 is not exactly equal is to the theoretical value of 2. is 7. (7. When base point for the finite difference approximation of an ODE point the is n + 1.515406 _ 2.3 is that the numerical solution approaches the exact solution as the step size decreases.the numerical solution leads the exact solution. This occurs because the derivative functionf(t. the solutions for both step sizes are followingthe general trend of the exact solution correctly. SuchFDEs called explicit FDEs.to)T’(z)(2. Orderis discussed in moredetail in Section 7. whereit has is its largest value for the interval.66b) Assuming the values of T"(z) in Eqs. T) decreasesas t increases. The theoretical value of 2. that ratio of the theoretical errors is Ratio.= 2.0) 1.66b) are approximatelyequal.0 s.67) Equation(7. as illustrated in Table7.68) showsthat the method first order.6. (7. the order of the methodcan be estimated by comparing errors the at t = 10. In fact.One-Dimensional Initial-Value Ordinary Differential Equations 355 Several importantfeatures of the explicit Euler methodare illustrated in Table 7. but not in the derivative functionf(t. indicating that the numerical solution leads the exact solution. When base point for the finite difference approximation an ODE point n.0) -28.The explicit Euler method are is the simplest exampleof an explicit FDE.O) E(At = 1. the the of is unknown value Yn+l appears in the finite difference approximationofF’.0s.E(At = 2.3 is that the errors are relatively large. the ratio of the numericalerrors is Ratio.2 The Implicit Euler Method Consider the general nonlinear first-order ODE: I )5’ = f(t’ )5) )5(t°) = 95° (7. Consequently. The value of 2. the unknown valueyn+ appears in the finite difference approximation of~’ and also 1 in the derivative functionf(t.3. This property of a finite difference methodis called convergence.0 FromTable7. the beginningof the interval of integration. SuchFDEsare called implicit FDEs. Another feature illustrated in Table7.63).619260 (7.0) = ½ (t N . truncation error.~). First.68) (7.t0)T"(z)(1.3.69) . E(At = 2.3.0) = ½ (t N . Thederivative function in the FDE evaluatedat point n.66a) (7.0) E(At _--=2"0 2.

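A short sketch of the implicit march, with the quartic Eq. (7.75) solved at each step by Newton's method as stated above (an illustration, not the book's program; the iteration count is an assumption):

```python
# Sketch: the implicit Euler method, Eq. (7.73), for the radiation problem;
# each step solves T + dt*alpha*(T^4 - Ta^4) - T_old = 0 by Newton's method.
alpha, Ta = 4.0e-12, 250.0
dt, t, T = 2.0, 0.0, 2500.0

while t < 10.0 - 0.5 * dt:
    Tnew = T                                             # initial Newton guess
    for _ in range(20):
        g  = Tnew - T + dt * alpha * (Tnew**4 - Ta**4)   # residual of Eq. (7.75)
        dg = 1.0 + 4.0 * dt * alpha * Tnew**3            # dg/dTnew
        Tnew -= g / dg
    T, t = Tnew, t + dt
    print(t, T)                                          # first step: 2282.785819
```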
69) and evaluatingf(t. It is solved by Newton’smethodin Example 7. thenf.+~ = Yn + A0r.70) into Eq.356 Chapter7 n+l t Figure 7. is Tn+ = Tn 4)At~(T~4+I Ta (7. (7. 3.43): Y In÷l ~n÷l ~. (7. Eq.70) Substituting Eq. The FDEis a single-point FDE.04) (7. Iff(t. y) linear in y.75) 1. Thefinite difference grid is illustrated in Figure7. 2. This result and the results of subsequenttime steps .10. Let At = 2.+~/At =f(tn+~.74) is a nonlinear FDE. of . The derivative 4 function isf(t.73). (7.)~n÷l) At Solving Eq.73) is a linear FDE whichcan be solved directly for Y.71) for ~n+~gives ~n+l -. T) = -~(T .y) is linear iny. (7.Ta4).72) (7.~n + Atf.+~ is is linear in Yn+l.73) is nonlinearin y.~n -~’fn+l (7. (7. (7.74) 1 Equation (7.+~. Iff(t.2.0(4.10 Finite differencegrid for the implicit Eulermethod.1 using Eq.+1. Thefirst-order backward-differencefinite difference approximationofF’ is given by Eq. y) is nonlinearin y.0 s.~n At + ~Y [’~n÷l) At (7. and additional effort is required to solve for Y.250. (7.0 × 10-~2)(T~ .½~"(z.+i.69) at that point.73).+l. Equation (7.3. and Eq. (7. For the first time step.Procedures for solving nonlinear implicit FDEsare presented in Section 7. andseveral evaluations of the derivative function maybe required to solve the nonlinear FDE.~ tr. and the global error is 0(At).73) is a nonlinear FDE. Thus. The algorithm based on the repetitive application of the implicit Euler FDE solve to initial-value ODEs called the implicit Euler method.y) is nonlinear in y. 4 T~ = 2500.0 . The FDE requires only one derivative function evaluation per step if f (t.73) are summarized below.11.73) The FDE implicit. The result is T~ = 2373. sincefn+~dependsony.+I 0(Ate) Several features of Eq. then Eq. (7.+~) 2 = ~n+ A .75) is a fourth-order polynomial. Choosepoint n + 1 as the base point and develop a finite difference approximation Eq. Thesingle-step truncation error is 0(At2). (7. (7.+l . is Example7.16.~) at point n + 1 1 ~ + ~y-.+1 = .(7.145960. The implicit Euler FDE given by Eq. If f(t. The implicit Euler method Let’s solve the radiation problem presented in Section 7.+~ + 0(A tf 2) Truncating the 0(At remainder term yields the implicit Euler FDE: I Y.71) a) ~.

if any.3 Comparison the Explicit and Implicit Euler Methods of The explicit Euler methodand the implicit Euler methodare both first-order [i.0 1.431887 1824.094508 1758.wherea leading error was observed.90 E(At = 1.247314 2074.227867 1758.0 0.2 and 7. For nonlinear ODEs.468684 0.718992 2500.5. whichis moredifficult to solve.77) ~(t) = e-’ Table 7.e.209295 1783.0 6. Anerror analysis at t = 10.981428 25.One-Dimensional Initial-Value Ordinary Differential Equations 357 for t from 4. numericalsolution lags the the exact solution.394933 1891. Theerrors are all positive.~.3.776520 49.0 9.0 8. the errors in these two methodsare comparable(although of opposite sign) for the same step size. but the implicit Euler methodyields a nonlinear FDE.0s are presented in Table 7.934807 1994.77) (7. In the present case. t.000000 2373.~(0) = for which2(t.~ = 0 .000000 2282. ~) = -.145960 2267.78) Solution by the Implicit Euler Method T n Tn+ 1 Tn+ 1 2248.834998 48.0s are also presented in Table 7. This difference can be illustrated by solving the linear firstis order homogeneous ODE ~’ +.929506 1806. whereasthe explicit Euler method conditionallystable.263375 Error 34. of the implicit Euler method? The implicit Euler methodis unconditionally stable.538505 46.732059 .247314 1798.0 4.E(At = 2.0 s gives Ratio .3 and discussed in Example 7.455617 12.4.611898 1944.315972 19.0 2500. by both methods.785819 2120.0 2.455617 _ 1.0s to 10. The exact solution of Eq.4.2.263375 2360. whereit has its smallest value. indicating that the is numericalsolution lags the exact solution..468684 (7. tn+ a (7. So whatis the advantage.618413 1842. (7.0) _ 48. This result is in direct contrast to the error behaviorof the explicit. The results for At = 1.the explicit Euler methodis straightforward.4. 7.0 10.0 2.184573 25. As illustrated in Examples7. The results presented in Table 7.0) 25.4 behavegenerally the sameas the results presented in Table 7.829988 2248. Consequently.322909 49.0 10. the end of the interval is of integration. the derivative function in the FDE evaluated at point n + 1.76) whichshowsthat the method first order. Euler method. 0(At)] methods.

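The step-size regimes described above are easy to observe directly. A minimal sketch (the particular Δt values and step count are assumptions chosen to display each regime of Figure 7.11):

```python
# Sketch: behavior of the explicit Euler FDE y[n+1] = (1 - dt)*y[n] for
# y' + y = 0, y(0) = 1; compare with the regimes shown in Figure 7.11.
for dt in (0.5, 1.0, 1.5, 2.0, 2.5):
    y, history = 1.0, []
    for _ in range(6):
        y = (1.0 - dt) * y        # Eq. (7.80)
        history.append(round(y, 4))
    print(dt, history)
# dt=0.5: monotone decay; dt=1.0: exact asymptote in one step;
# dt=1.5: damped oscillation (overshoot); dt=2.0: neutral oscillation;
# dt=2.5: unbounded oscillatory growth (instability).
```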
(7. Thus.Atfn = Yn q.The step size At generally must be 50 percent or less of the stable step size to avoid overshoot.11 Behaviorof the explicit Euler method. the numericalsolution oscillates about the exact asymptoticsolution in an unstable mannerthat growsexponentially without bound. Solving Eq.At)y n Solutions of Eq. solutions are stable for At < 2.. (7. The numerical solutions behavein a physically correct manner (i.0.11.0. The oscillatory behavior for 1.82) -2 Figure7.80) for several values of At are presented in Figure 7.358 Chapter7 Solving Eq. does not modelphysical reality. Consequently.+I. ~(~) = 0.0 < At < 2.0. This is numerical instability. it is stable only for At < 2.79) (7.0 is called overshoot and must be avoided.e.81) is linear in Y.0 for as t --~ ~x~.+I = Yn + Atfn+l = Yn + At(-yn+~) (7.0 < At < 2. thus it is it is unacceptable. the numerical solution reaches the exact asymptoticsolution.0. ~(c~) = 0. However. Overshoot not instability. For At > 2. (7.81) Since Eq.+I to yield Yn+l -- 1 + At Y" I (7. For 1. For At = 1. explicit Euler method conditionally stable for this ODE. and approach the exact asymptotic solution. in one step. the is that is.At(-Yn) (7. it can be solved directly for Y. the numericalsolution overshoots and oscillates about the exact asymptotic solution. decrease monotonically) At _< 1.80) l Yn+l= (1 -. the numerical solution oscillates about the exact asymptoticsolution in a stable mannerbut never approachesthe exact asymptoticsolution. . ~(~) = 0.0.77) by the implicit Euler methodgives the following FDE: Y. For At = 2. in a damped mannerand approaches the exact asymptotic solution as t--~ ~x~. (7.77) by the explicit Euler methodyields the following FDE: Yn+l = Yn q.0.

e. A FDE consistent with an ODE the difference betweenthem(i..0 Timet Figure7..5. Theyare (a) consistency. This is unconditional stability.. The error increases as At increases.0 ~ /-~t = 3.e. (7. ORDER. decrease monotonically) for all values of At. . the Adams-Bashforth-Moulton methodis developedin Section 7.. Such methods are called multipoint methods.0 ~ /-At = 2. Theseconcepts are defined and discussed this section..One-Dimensional Initial-Value Ordinary Differential Equations 359 >" 1 [~At /-&t = 1. One such method. Before proceeding to the more accurate methods. Solutions of Eq..12. Thenumerical solutions behave in a physically correct manner(i.~ /-~t = 4.. Moreaccurate results also can be obtained by extrapolating the results obtained by low-order single-point methods. is developed in Section 7. the solution at point n ÷ 1 is basedonly on values at point n). (c) stability.0 = 0.~"’. the FDEapproaches the ODE.e. STABILITY. Onesuch method. but this is an accuracy problem.e. can at several locations between point n and point n ÷ 1..82) for several values of At are presentedin Figure 7. higher-order) methods be developedby samplingthe derivative function. These concepts are consistency. 7. More accurate (i.12 Behaviorof the implicit Euler method.6. f(t.~"-. Moreaccurate (i. 7. the extrapolated modified midpoint method.5 Z". several theoretical concepts need to be discussed in more detail. and (d) convergence.. higher-order) methodsalso can be developed by using more knownpoints. stability.9.8.4 Summary The two first-order Euler methodspresented in this section are rather simple single-point methods (i. the Runge-Kutta method. not a stability problem.e.. however. Stability is discussed in moredetail in Section 7.6 CONSISTENCY. is developed in Section 7.7.. The order of a FDE the rate at whichthe global error decreases as the grid size is approaches zero. AND CONVERGENCE There are several important concepts which must be considered when developing finite difference approximations initial-value differential equations. of (b) order. the truncation is if error) vanishes as At -~ 0. order. One such method. y). which is the main advantage of implicit methods. and convergence. In other words.

86) into Eq. consider the linear first-order ODE: ~’ + c~5= F(t) The explicit Euler FDE is: Yn+l = Yn + A~ Substituting Eq.1 Consistency and Order All finite difference equations (FDEs) must he analyzed for consistency with the differential equation which they approximate. is Letting At --~ 0 in the modified differential equation (MDE) yields a finite-order differential equation.. an infinite-order differential equation is obtained. and write the Taylor series for yn+l.e. (7. (7. Consistency and order analysis of the explicit Euler FDE As an example. ~ + gh ttt [~ + . and rearranging terms yields the modified differential equation (MDE): lh " lh2"" (7.88) is the actual differential equation whichis solved by the explicit Euler method.86) Substituting Eq. Let grid point n be the base point.. Thus. I Yn+~=Yn+hY’ln+½h2y"ln+~hY3 ttt In+"" (7.6.4.~hYn + hFn (7.The MDE the actual differential equation whichis solved by the FDE. dividing through by h. (7. The order ofa FDE the order of the lowest-order terms in the modifieddifferential is equation (/VIDE). If this finite order differential equation is identical to the exact differential equation whose solution is desired. terms). Consistency analysis involves changing the FDE back into a differential equation and determining if that differential equation approachesthe exact differential equation of interest as At --~ 0..87) Cancelling zero-order terms (i. Equation(7.°~hyn+ hFn (7. Thus..85) (7.. A finite difference methodis convergentif the numericalsolution of the FDE (i. the approximatesolution.84) (7.-~ Y In . Example7.83) whereh = At.88) [ I Y’lnd-~Yn =Fn-gnY . This is accomplished by expressing all terms in the FDE a Taylor series having the samebase point as the FDE. 1 3 y Yn . (7.83) into Eq..360 Chapter7 A FDE stable if it producesa bounded is solution for a stable ODE is unstable if and it produces an unbounded solution for a stable ODE.. the numerical values) approaches the exact solution of the ODE At ~ 0. then the FDE a consistent approximation is of that exact differential equation. as 7. 2.e.85) gives Yn + hy’in +12h yt/. . This infinite-order differential equation is called the modified differential equation (MDE)..84) gives Yn+l= Yn -. by This Taylor series is an infinite series.. the y.

is (7. Stability is not relevant in that case. Co ~’ nsequently..2 Stability The phenomenon numerical instability was illustrated in Section 7. y’ Ytln (7..2: Eq. Nonlinear differential equations must be linearized locally.5 for the explicit of Euler method.One-Dimensional Initial-Value Ordinary Differential Equations 361 Let h = At --~ 0 in Eq. (7. The exact solution of the FDEcan be expressed as [Y~+I = GY~ 1 (7. The order of the FDE the order of the lowest-order term in Eq. Eq. ÷ ~Yn= Fn.88) becomes Y’I. Eq.6. Determiningthe conditions to ensure that IGI < 1 Stability analyses can be performedonly for linear differential equations.is called the amplification factor of the FDE.93) .½ (O)y"ln -~(0)2y"ln .90) Equation(7. (7.85) is consistent. whichin general is a complexnumber. and a finite difference approximation to it. the numerical solution the is mustalso be unstable. When ODE unstable. [ IGI < 1 ] Stability analysis thus reduces to: 1. ~’ y’ =f(t.. Recall the linearization of a nonlinear ODE. Thus. Determining the amplification factor G of the FDE 2. l y’in÷O~yn=FnJ (7.94) (7. + ~ F(t).~).88) to obtain the ODE whichEq. (7. Eq. A FDEis stable if it producesa bounded solution for a stable ODE is unstable if it producesan and unbounded solution for a stable ODE.85) is an 0(At) approximationof the exact ODE. and the FDEthat approximates the linearized differential equation is analyzed for stability. (7. FromEq. + ey F(t). (7.88). y).92) where G. with Thus.89) (7.85) is consistent with that equation. All finite difference equations must be analyzed for stability. Experience has shownthat applying the resulting stability criteria to the FDEwhich approximates the nonlinear differential equation yields stable numerical solutions.90) is identical to the linear first-order ODE. ÷ ~Yn = Fn + O(h) +.. (7.91) 7.95) (7.88). presented in Section 7. The global solution of the FDEat T = N At is YN = GNyo For YNto remain boundedas N --+ ~x~.~’ + ~ = F(t) where c~ = -~yl0 I (7. [ .. (7.21). =f(t. Consider the exact ODE.

and the particular solution yp(t) which satisfies the nonhomogeneous F(t). gives ~ = -]’rl0 = 4a]’g = 4(4.03 = 0.5. the step size of required to obtain the desired accuracy is considerablysmaller than the step size required . of the FDE.0 x 10-12)2500. (7. (7.~’ + 7~ = 0. ]’) =~ + (0)(t . which is the solution of the homogeneous differential equation. In general.103) Stability analysis is accomplished follows: as 1.99) (7.100) into Eq. ~’) in a Taylor series with base point to: f(t. Experiencehas also shownthat.96)). + fi~ . ]’) = -(4a]’03)~ (7. 2. The complementary solution yc(t) can growor decay dependingon the sign of ~. Stability analyses can be performedonly for linear differential equations. Determinethe amplification factor. Linearization of a nonlinear ODE Consider the nonlinear first-order Chapter7 ODE governing the exampleradiation problem: (7. ~’) =~ q-~10(t .]’0) +"" " + (~0 + 4a~’03~’0) +"" f(t.~’0) "3 = 0 ~ and ]’~. At must be 50 percent or less of the stable At to avoid overshoot and 10 percent or less of the stable At to obtain an accurate solution. The particular term solution yp(t) can grow or decay. Nonlinear differential equations mustbe linearized locally. ~’) = -(4~’03)~" + (f0 + 4~03 ~’0) (7. Stability is not relevant to the numerical approximationof the particular solution. for most ODEs practical interest.102) Note that a changesas T changes.362 Example7.to) +j~TI0(~".97) (7.100) Substituting Eq. Experience has shown that applyingthe stability criteria applicable to a linearized ODE the correspondto ing nonlinear ODE yields stable numerical solutions.96) ~" =f(t. For a linear differential equation.Thus. Determinethe conditions to ensure that ]GI < 1. the stable step size changesas the solution O progresses. and a stability analysis performed the on finite difference approximationof the linearized differential equation. 3.98) (7.101) Comparing this ODE with the modellinear ODE. Thus. (7. 9) = _a(~. the total solution is the sumof the complementary solution yc(t).96) and trtmcating the higher-order terms yields linearized ODE: ~I" =f(t.to) + (-4~"03)(~ . Stability is relevant only whena > 0. depending on F(t).0 (where fi is used instead of ~’ a to avoid confusion with e in Eq.= -4~ " f(t. the modeldifferential equation for stability analysis is the linear first-order homogeneous differential equation: l ~’+~=01 (7.2500 (7.0) Expressf(t. Construct the FDEfor the model ODE.4 _ T~4) ~’(0. G.

Thus. . Stability analysis of the Euler methods Consider the explicit Euler method: Y~+I= Y~ + At f~ Applying the explicit f(t.At)y~ = Gy [ G:(1-c~At) For stability. Instability is a serious problem the solution of partial differential equations.4 7. in Example 7. -1 < (1 . gives .~ At) <_ (7. the explicit Euler methodis conditionally stable.the explicit Euler methodis convergent. ~) = -e~.109) (7.111) (7. Consequently. gives Euler method to the model ODE. Consequently. (7. The left-hand inequality is satisfied only if I At 2 I (7. Example demonstratesconsistency and Example demonstratesconditional stability. Consequently. 7. Consider the implicit Euler method: Y~+I = Y~ + Atf~+l Applyingthe implicit Euler methodto the model ODE.110) (7.~’ Yn+l : Yn + At(-~Yn+~) 1 Y"+~-. implicit Euler the methodis unconditionally stable. For example.106) Yn+l = Yn + At(-~y~) = (1 r.107) The right-hand inequality is always satisfied for ~ At > 0.1 + ~ Aiy" = Gy.105) (7. except for stiff ODEs. ~+~ = 0. IGI _< 1.108) which requires that At < 2/c~. 7.104) for which (7.One-Dimensional Initial-Value Ordinary Differential Equations 363 for stability.6. which are discussed in Section 7.112) For stability. 1 G--I+eA t (7.3 Convergence Convergenceof a finite difference methodis ensured by demonstrating that the finite difference equation is consistent and stable. whichis true for all values of~ At.6 Consequently. IGI _< 1. + ~ = 0.14.6. instability is generally not a problem ordinarydifferential for equations. for the explicit Euler method.

(b) the modified midpoint method. The first. third.~5) . is very important.+l and ~./2+~1.+. it is not necessaryto actually developthe modifieddifferential equation to ensure consistency and to determine order for a finite difference approximationof a first-order ODE._.. (c) the trapezoid method.+1/2~-) +½~".+1 =~./2/ff [At\3 ) +. [At\ ~. (7. Single-point methodsare sometimescalled single-step methodsor single-value methods. The global order of the finite difference equation is alwaysthe sameas the order of the finite difference approximation the exact first derivative. as presented in Example7.114) (7. and fourth of these second-order methods not very popular. Both Euler methodsare first-order single-point methods. the finite difference equation will alwaysbe consistent.113) Choosepoint n + 1/z as the base point. Methods error estimation and error control for single-point methodsare also presented.+. The finite difference grid is illustrated in Figure 7. 7.6.. Express ~..~) ~(t°) = 1 (7. Evenso.8. concepts of consistency.13.7. since it is quite are straightforward to develop fourth-order single-point methods.+.1 Second-Order Single-Point Methods Consider the general nonlinear first-order ODE: ~’ =f(t..7 SINGLE-POINT METHODS The explicit Euler methodand the implicit Euler methodare both single-point methods. The fourth-order Runge-Kutta method is an extremely popular method for solving initial-value ODEs. Consistency and order can be determined from the modifieddifferential equation (MDE). to advancethe solution to point n + 1.4 Summary Chapter7 In summary. Simply by approximating the first derivative and the nonhomogeneous term at the samebase point.4. Single-point methodsare methodsthat use data at a single point. Runge-Kutta methods are introduced in the second subsection. Higherorder single-point methodscan be obtained by using higher-order approximations of~’. The second-order modified midpoint method. in Taylor series with base point n + 1/2: . Convergencecan be ensured by demonstrating consistency and stability./2\=! +gy (_~_~2.6. illustrated as in Example7. Four second-ordersingle-point methodsare presented in the first subsection: (a) the midpoint method. order. In general. I. and (d) modified trapezoid method(which is generally called the modified Euler method). of 7. it is importantto understand of the conceptof consistencyand to l~aowthat it is satisfied. since it is the basis of the higher-order extrapolation methodpresented in Section 7. and convergencemust always be the considered whensolving a differential equation by finite difference methods. however. stability.364 7. point n. Stability can be determined by a stability analysis..

(7.fin t.~+~ =. (7. (7.+1/2) =~. ~. (7. (7.124) ~) = 1 -.118) (7. and stability. (7.59).0(At ~. the modified midpoint FDEs are obtained: At Y~+I/2 = Yn + Tfn C Yn+l = Yn + (7.120) (7.~n+l/2 + 0(A/3) Truncating the remainder term yields the implicit midpoint FDE: [yn+l=yn+Atfn+l/2 0(At3) (7. (7. (7.~ -a At 1 -~-jy~ +0(At 2) 3) +0(At (7.The implicit midpoint FDE itself is of very little use sincef.115) from Eq.122) Substituting Eq.118) gives f.123) (7. (7.+l/2.58) for At/2. and combining the resulting FDEs to obtain single-step FDE.116) At 2_14y.113) gives 2) ~n+l . for which f(t.114) and solving for ~tln+i/2 ~. Subtracting Eq.+l/2. (7.+1 .0(At =f(t. Consistencyanalysis and stability analysis of two-step predictor-corrector methods are performedby applying each step of the predictor-corrector methodto the modelODE.117) 3) where the 0(At term is a reminder of the local order of the FDE. ifyn+~/2is first predicted by the first-order explicit Euler FDE. the superscript P in Eq. Substituting Eq.n+l--~n ~’]n+l/2 gives (7. 3’ +~ = 0.(v) At2 wheret n < z < in+1.+1/2 At Solvingfor ~3n+ gives 1 ~n+l = ~n + At. However.~)=-c~.120) denotes that Y~n+l/2is a predictor value. Eq..13 Finite differencegrid for the midpoint method.121) denotes that fnP+l/2 is evaluated using Yff~+l/2.116) into Eq.121) denotes that ynC+lis the corrected second-orderresult. andfn+t/2 is then evaluated using that value ofy.122) into Eq.One-Dimensional Initial-Value Ordinary Differential Equations 365 t n n+l Figure 7.-T-)y. order.119) (7. is FromEq.The single-step FDE analyzed for consistency.a At + -~/Y.+l/2 dependson Yn+~/2.which is unknown.121) AtJn+l/2 ~P where the superscript P in Eq. + 0(At (7. (7. and the superscript C in Eq. -4. = + 2 + ~ At’~_ = 1 .

7. Example 7. ~ At < 2).105 (7.128) (7.121).~ At+ (~ At)2 2 Substituting values of (~ At) into Eq. (7. Tn) = -°~(T4n 250"04) At T~e+l/2= Tn q. is G = 1 .125). The FDEsare consistent and conditionally stable. Equations (7. (7. + At f~I/2 (7.000 0.+i/y.1 using Eqs.5 1.. = --~y.1 G 0.625 1. The FDEsare an explicit predictor-corrector set of FDEswhich requires two ¯derivative function evaluations per step. convergent. whichshowsthat Eq.125) and letting At ~ 0 gives y~.125).+~/~. + @= ~’ Equation(7. The modified midpoint method To illustrate the modifiedmidpointmethod. and thus. The derivative function f(t .129) (7. the amplification factor.0 G 1. T)= -~(T4 .120) and (7. are 4.T~4). for whichf(t. The algorithm based on the repetitive application of the modified midpoint FDEs is called the modified midpoint method.126) Theseresults showthat IGI < 1 if ~ At < 2. 3.~) = -~. (7.000 1.c+~ = T.250.366 Chapter7 Truncating the 0(At3) remainder term yields the single-step FDEcorresponding to the modified midpoint FDE: (~ ~t)~7 y.120) and (7. The general features of the modified midpoint FDEsare presented below.~fn f~+i/2 =f(t.121) fn = f(tn. FromEq.0 0.125) Substituting the Taylor series for Yn+l into Eq. 0(At3) locally and 0(At globally. G =y. 1.500 c~At 1.0 2. ~) 2.~’ + ~ = 0.625 0. The amplification factor G for the modified midpointF_DE determinedby applying is the FDEto solve the model ODE. (7. The single-step FDE correspondingto Eqs. Tne+~/2)= -7[(Tff+~/2)4 ..121) is given by Eq.e.124) showsthat the local truncation error is 0(At3).let’s solve the radiation problempresented in Section 7. (7.130) .5 2.126) yields the following results: ~ At 0. (7. (7.+~= 1-c~At+ (7.127) (7.120) and (7.125) is consistent with the exact ODE.04] T. The FDEsare consistent. The FDEs conditionally stable (i.

0 s to 10.156.655325 10.E(At -~ 2.765625 2258.0) _ 8.5.0 + (2.081152 -86.041033 -42.0 10.378687 11.686999 c T~ = 2500.134) Theseresults. The samereduction in error was achieved with the secondorder methodat the expense of twice as manyderivative function evaluations.996968 1804.587815 1897.135) t~ tn+l 0.131) (7.0 x 10-12)(2343.855320 2360.094508 1758.0 9.0/2)(. the predictor FDE gives f0 = -(4. An error analysis at t = 10.942740 102.0 s.611898 1944.0(-120.0 0.0 s.855320 4.0 s are approximately times smaller than the errors presented in Table7.829988 2248.000000 2421.0 8.247314 1798.234375 137.071233 Tn+ l Error 2248.208442 2.179389 -75.955935 1767.267223 2010.118695 2500. which increases the numberof derivative function evaluations by a factor of 15.263375 10.104.0 s are presented in Table 7.171468 .908094 .000000 2343.0 4.583087 -41.997427 -40.247314 2074.112824 2250.486182 -51.227867 1758.132) (7.626001 2154.3 for the 15 first-order explicit Euler method.686999) = 2258.250.505442 1955.04) = -120.0 x 10-12)(2500.686999 .571344 111.339704 -58. This illustrates the advantage of the second-order method.64 E(At = 1.0 10. and the solution for At = 1.234375 T~/2 = 2500. the results for subsequent time steps for t = 4.243988 1760.133) (7.882812 2362.618413 1842.263375 1.7656254 .761780 -65. The errors presented in Table 7.One-Dimensional Initial-Value Ordinary Differential Equations Let At = 2.765625 The corrector FDEyields f1~2 = -(4.908094 Table 7.234375) = 2343.0 2.0 1.568508 2.626001 367 (7. For the first time step.439137 156.5 for the second-order modifiedmidpointmethodfor At = 1.234375 .101633 1851.04) = -156.0 2.242702 1779.0s gives Ratio-.398496 2300.5 Solution by the Modified Midpoint Method (7.601504 124.969402 9.014835 1.0 + 2.To achieve a factor of 15 decrease in the error for a first-order methodrequires a reduction of 15 in the step size.795424 -47.455756 1800.0 rn Tn+ll2 L fn+l/2 rn+ l 2500.120.156.0 6.0) 1.902460 8.544849 2086.04 -250.

The corrector step of the modifiedEuler FDEs be iterated.143) denotes thatfff+~ is evaluated using Y~+Iand the superscript C in Eq.143) showsthat are consistent with the general nonlinear first-order ODE that the global error is and . (7. but the method still 0(Aft). (7.T[fn -]-¢n+l -~.143) are usually called modiied Eule r FDEs In s ome f .141) must be solved iteratively for Yn+~. Eq. they have been called the Heun FDEs.140) Truncating the third-order remainderterms yields the implicit trapezoid FDE: At Yn "~-~(fn Yn+l ~" "~fn+l) 0(At3) (7. if desired. (7.Weshall call them the modified Euler FDEs. nonlinear ODEs. ify. (7. (7.137) and (7.0(A/2)] q- (7.142) and (7. Recall Eq.142) (7.138) AddingEqs. (7.59). then the modified trapezoid FDEs are obtained: y~+z = y. (7.141) can be solved directly for yn+ for linear ODEs. 1 (7. instances. Performing a consistency analysis of Eqs. + Atfn c At +T(L Yn+l = Yn (7. Iteration is generally not is worthwhileas using a smaller step size.142) and Eq.368 Chapter7 whichdemonstratesthat the method secondorder.139) into Eq.141) Equation(7. (7.+~/z ~-) + 0(Aft) ~ =~n+l/2"~t’n+.142) denotesthat y~n+~ a predictor value./2 +f [. obtained as follows. whichmay can increase the absolute accuracy. andf~+~is then evaluated using that value ofy. (7.+~.138) and solving forCn+~/2gives j2.136) yields At ~n+l ---~n -~.119).139) 0(At3) (7.137) (7. However.136) (7./2(--~ t) 2) "~-O(A~ (7. the superscript P is Eq.143) denotes that y~C+~ the corrected second-order is result.118): ~n+~= ~:~ + Ate+z~2 = 0(At3) Write Taylor series for)Tn+zand)Tnwith base point n + 1/2: -. An alternate approach for solving the implicit midpoint FDE. (7. since the theoretical error ratio for an is 0(Aft) methodis 4.0. For Eq.+t/2 = ½ (j2n +~. /At\ j2~+~ =)7~+.Eq.143) +f~+~) Thesuperscript P in Eq. (7. Equations (7.+~ is first predicted by the first-order explicit Euler FDE.+~)+ 0(Aft) Substituting Eq.

This illustrates the advantage of the second-order method. (7.143) fn = f(tn. and thus.At f~ =n 4 fnP+l =f(tn+. For the first time step. and the solution for At = 1.143). (7.152) (7. -.04] = 1 At T~c+.04 . The derivative function f(t . The amplification factor G for the modifiedEuler FDEs determinedby applying is the algorithm to solve the model ODE + c~ = 0.0)(-156.0 × 10-~2)(2500.0 + ½(2. + ff+~) Let At = 2. 3) 2) The FDEs consistent. Example7.8.) .148) (7. for which[G[ < 1 for c~ At < 2. e At < 2). Equations (7.0 × 10-~2)(2187. let’s solve the radiation problempresented in Section 7.(fn +L-t-l) =Y. the predictor FDE gives f0 = -(4.234375) = 2187.~ At)y. = T.580490 c T~ = 2500.145) (7. The algorithm based on the repetitive application of the modified Euler FDEsis called the modified Euler method.-(4. convergent.To achieve a factor of 32 decrease in the error for a first-order methodrequires .5312504-250.One-Dimensional Initial-Value Ordinary Differential Equations 369 0(At2).0(-156. The FDEsare an explicit predictor-corrector set of FDEswhich requires two derivative function evaluations per step.[(-c~y.~ y.147) (7. The modified Euler method To illustrate the modified Euler method.04) TP~+~T -t. 3) = -~.0 s.6.3 for the 32 first-order explicit Euler method. T)= -~(T4 .126).234375 TIP = 2500.(~At)2 2 (7.. +-~(f.0 + 2.~ At)y n At At c Yn+l ----Y. 0(At locally. +-~. The general features of the modified Euler FDEsare presented below. Consequently. are 3. 3’ Y~n+l= Yn + Atfn = Yn + At(-~Yn) = (1 . Thus. and 0(At globally.150) Theseresults.T~4).234375 .250.149) (7.04) = -91.0 s.153) (7. The errors presented in Table 7.531250 The corrector FDEyields f~P ---.144) (7.04) = -156.250.1 using Eqs.146) This expression for G is identical to the amplification factor of the modified midpoint FDEs. Tn) = -~(T4n 250.Eq. for whiehj~(t. 4.154) (7. are The FDEs conditionally stable (i. TnP+~) -c~[(T.] G=--=Y"C+l At+ 1.91.0 s to 10.151) (7.0 s are approximately times smaller than the errors presentedin Table7.0 s are presented in Table7. are The FDEs consistent and conditionally stable.142) and (7. the results for subsequent time steps for t = 4.142) and (7.e. + ~.6 for the second-order modified Euler methodfor At = 1.580490) = 2252.~e+~) .the sameresult applies to the modified Euler FDEs.c~(1 .185135 (7.

185135 2046. ).0) _ 3..156.6 tn t.698650 -45.196543 -46.263375 0.094508 1758..+t 0.0 2.580490 .765625 2361.E(At = 2. y).255256 1799.C+l 2500.0 0.102. An error analysis at t = 10. 1 (7.263375 3.0 10.897804 -38.55.149114 2249.898821 -70. and the C/(i = 1. called Runge-Kuttamethods.161712 fn ~’.234375 -91.946689 0.0 1.733668 .597515 2360.131759 -74.154554 1929. 2 . whereeach Ay is evaluated i i as At multiplied by the derivative function f(t. is developed in the following subsection.124.0 9.860889 2500.7.575660 1846.611898 1944.0 s gives Ratio-.387492 2079.0 4. A more accurate class of single-point methods.127884 Error Chapter7 2248.177915 .0 10.. since the theoretical error ratio for an is 2) 0(At methodis 4.174556 1757.442316 -37.2 Runge-Kutta Methods Runge-Kutta methods are a family of single-point methods which evaluate Ay = (Yn+l-Yn) as the weightedsumof several Ay (i = 1.774562 .898337 a reduction of 32 in the step size.686999 .247314 2074.0.156. The samereduction in error was achieved with the secondorder methodat the expense of twice as manyderivative function evaluations.390198 -100.077767 1753.597515 = 4. Thus.0 6.364338 -41.0 Solution by the Modified Euler Method Tn T..539313 2237. The modified midpoint method and the modified Euler method are two of the simplest single-point methods.0) 0.531250 2252.2 .234375 .0 2.155) whichdemonstrates that the method secondorder.972960 1833.687219 1948.709324 1.898337 (7.156) Yn+! : Y~ + AYn = Yn + (Y~+! --Y~) .102. which increases the numberof derivative function evaluations by a factor of 32.447926 -57.276752 1759..+1 .120..000000 2343.354547 3.0 8.000000 2187.983259 3.247314 1798.937821 4.193135 1761. 7.227867 1758.829988 2248.618413 1842.00 E(At = 1. evaluated at somepoint in the range tn < t < tn+ . ) are the weightingfactors.542656 4.007942 0.370 Table 7.

. fi) at t = t.. + Ci(hfn) + Cehf[tn + (~h).y.e.~.. ~.+l =Y. + (eh)fl. + (eh).)fyln 2) Substituting this result into Eq. Substituting Ay and Ay into Eq.164) and (7.164) The four free parameters. y.159) and Ay is based onf(t. (7.158) (7.166) and (7.)) =f. and/7.~) in a Taylor series at grid point n gives f(t. +(fl Ayl) e ] (7.P’ln +’" (7.163) (7.) = Atf. y. = @’)’In =a2tln = ~ n=~ln (7. +a2tlnh +~yln Ay+.) =J~. can be determined by requiring Eq. Ce.168) term by term gives (7.167) Substituting Eqs. Let At = h. + (/Thf.One-Dimensional Initial-Value Ordinary Differential Equations where Ay is given by [ A y=C1 i~yl÷C2i~y2÷C3 Ay3-t -.. J~yln) + 0(h3) Equating Eqs. + Ayi) Expressing f(t. Ay gives f(t. (7.168) I C1+ C 2 eC1 = ~ /7C 2 = ½ (7. (7.161) Evaluatingf(t. where~’ln --a~n. (7. (7.164) to matchthe Taylor series for ~(t) through second-orderterms. ~.160) wherec~ and/7 are to be determined.(Yn+l .~) =a~.169) . (7. At = eh) and y = Yn+ (/7 Ayn)(i.. (7.e. (7.+l =fin ÷ h~"n+½hZ(ftln ÷a~.165) (7.165).166) +~l. That series he ~n+l = ~n + ~’[nh + ½~"ln +’’" ~’ln =J2(t.162) (7. + h2(c~C2 ftl~ +/7C2f~ fyln) 3) + 0(h (7.157) assuming that The second-order Runge-Kutta method Ay_. + (eh) (i. Ci..Yn) is a weightedsum of two Ay’s: [ Yn+l = Yn + C1 AYl+ Ce Ay2 ] where Ay~is given by the explicit Euler FDE: Ay~ = At f(t. [ is obtained by 371 (7.161) and collecting terms yields Y~+l= Yn+ (C1 + C2)hf. + (~ At)...167) into Eq.158) 1 2 gives Y..y) evaluated somewhere the interval t n < t < tn+l: in 2 Ay = At fit. + (/7 Ay.y.

which yields the modified midpoint FDEs. AY2 = hf(tn+l Yn+l Yn) = h-fn . +5. Thus. Ayl = hf(tn.175) Y~+I Yn+(0) Ay~ Ay =y~ hn+~/E = (1) E + f Other methodsresuk for other choices for C~ and C2.) Ay3 =hf(tn +~.Thus. Runge-Kutta formulas frequently denote the Ayi’s by k’s (i = 1.170) to (7.. (7.Eqs.yn + =hf~+~/2 (7. Letting C~= ½ gives C = ½. y. ~ = ½. + ~ y. are given Yn+l = Yn +½(kl kl = hf(tn. Eqs. yn + k~) = hfn+m (7. +½Ay =y.180a) (7.178) 7. +~(f~ +L+.180b) To performa consistency and order analysis and a stability analysis of the fourthorder Runge-Kuttamethod. and/3 = ½.372 Chapter 7 Thereare an infinite number possibilities.172). 2 Thus.173) (7. and of 2 fl = 1..179) ay~=hf(tn. ~ = 1. which yields the modified Euler FDEs.179) and (7.) 2 Letting C1 = 0 gives C = 1. Ay4 = hf(t n + h.172) =Yn +½Ay.180) must be applied to the model .3 The Fourth-Order Runge-Kutta Method Runge-Kuttamethodsof higher order have been devised. (7. In the general literature.171) (7. One of the most popular is the following fourth-order method: Yn+l = Yn + ~ (Ayl + 2 Ay + 2 Ay +/~Y4) 2 3 (7. ). the second-order Runge-Kutta FDEswhich are identical to the modified Euler FDEs. ay~= hf(t.2 ..Yn+l) "~ hfn+l (7.Yn +-~) aye = hf(t.176) (7. +k2) Yn) = hfn k2 = hf(t n + At.174) (7..yn) Ay2 =hf t.178) (7.7.170) (7.yn + Ay 3) (7.

183b) ay4=hf(t.180): (7. + @= 0. (7. ..181) At Ayl\ Ay = hf t. .(eh 4 2 4 Substituting Eqs. (7. (7. (7. y~ + ~) = h(-~n+ ~[-(~h)y. ~’ + @= 0.188) (7.179) and (7. (7. Thus. = -c~y.185).(1-~)]})(7. (7.186) For the modelODE.185) is performedby substituting the Taylor series for y. .183b).~Y Le~ing h ~ 0 in Eq.183a..189). Consistency and order analysis of the Runge-Kuttamethod A consistency and order analysis of Eq.(eh)y n + ½ (~xh)2y. FromEq.y. which is consistent wi~ the model equation.One-Dimensional Initial-Value Ordinary Differential Equations 373 ~’ + @= 0.h + ~y(V)(eh)5 + .185) is 0(h4).~ (~h)3yn + ~ (~h)4yn Canceling like te~s yields y’[.182a) (7. = ~4yn. (7. = ~y. Thus h2 Yn +Y’lnh ÷ ½Y"ln + ~Ytttlnh3 ÷ 2"~y(iV)]n h4 ÷ l@6y(V)ln h5 ÷’’" = y.187) Dividing t~ough by h yields the modified differential (~)~5h4 +"" Y’n + ~Y~= .179) single-step FDEco~espondingto Eqs... = -eSy.189) gives y~ + ~y~ = 0.181). + ~.]}) 2 . y~) = h(-~y~) = -(eh)y~ (7.~) = -@. (7. fi’ and y(V)l.. Substituting these results ~to ~e le~-h~d side of Eq. Eq.(~h)Yn + ~ (~h)2yn .189) (7..184b) into Eq. (eh)y.182b).186) gives Yn + Y’ln h + ~ (~h)Zyn -~ (~h)3yn + ~ (~h)~n + ~ (~h)Syn = Yn -. +ay3) ~ (~h) (~h) ~Ay = -(eh)yn l l . Ay~ = hf(tn. ~d (7. y(iV)l. (7.~ +~] +~ n ~[-(eh.+~)=h(-~{y Ay 3 = -(eh N 1 .y.+1 with base point n into Eq. y"[.-~ (eh)3yn + ~ (<xh)4y. (7.9. = -~%. Y’I. y"[. equation (MDE): (7.182b) ~f~tn ~t +~. for whichf(t. and the results must be combinedinto a single-step FDE.. +~t.. (7. (7.184b) Example7.. y.z = -(~h)yn(1- (7. (7.

185). let’s solve the radiation problem presented in Section 7.606771 0.10. In summary. Solving equationfor the amplificationfactor.Ta4).5 1.333333 (Th) 2. The FDEs explicit and require four derivative function evaluations per step. T) = -e(T 4 . are 5) 4) The FDEsare consistent. Example7.. (7.5 2. so that (eh) is positive.) = At n AT = At f(tn 2 + ~-. Substitutingvalues of (eh) into Eq.8 G 0.11.. (7. A stability analysis is presented in Example 7.179) is given by Eq... Stability analysis of the Runge-Kuttamethod The single-step FDEcorresponding to Eq. 0(At locally and 0(At globally.Yn+llYn.785.. 2.~ (eh) (7. 2.785.0 G 1.190) Recall that e is positive.(’~h) -b ½(~h) .6 2.-~ (~h) q.10. = At f(t. 3.000000 1. T. yields 2 34 G= 1 -. Example 7.0 0.). Tn q--~. 2. G --.179) and (7.375000 0. (7. (7.191) (7. convergent..179) and (7. 1 2 3 AT..7 2.0 1.. and thus.878838 1..754733 0. eat _< 2..1 using Eqs.000000 0.180).9 demonstrates that the fourth-order Runge-Kuttamethodis consistent and 0(h4). The fourth-order Runge-Kutta method To illustrate the fourth-order Runge-Kuttamethod. Tn + ~) (7. The FDEs conditionally stable (i.e..190) gives the followingresults: (~h) 0.374 Chapter7 Example7.. Tn + AT3) .192) (7.596.) AT4 -~ At f(tn -I.022400 < Theseresults show that IGI _5_ 1 if (eh) < 2. Equations (7.785 . 4. The derivative function f(t.the fourth-order Runge-Kutta FDEshave the following characteristics: 1. are The FDEs consistent and conditionally stable.193) AT3 = At f tn q. are application of Runge-Kutta FDEsare called Algorithms based on the repetitive Rm~ge-Kuttamethods.At.648438 0.270395 0.180) yield Tn+ = Tn + ~(AT + 2 AT + 2 AT + ZXT4) .

35592518) 4 .114.0)(-4.468750000 -256.0 312"46~75000. T n by the Fourth-Order AT 1 aT 3 Runge-Kutta Method AT 2 AT4 EITOr 0.0 0.0 9.101.196) = -202.669705704 102.202.012819719 -0.35592518 AT4= (2.46875000 + 2(-256.010558967 -0.0 2500.160603003 .46875000 × 10-12)I(2500.147.263114333 .100880073 .0 s.0 8.083948884 1758.114.39.0 × 10-12) I(2500.0 10.0 1.656671860 .69306346] (7.596234925 1944.130.000000000 2360.312.0)(-4.37399871 AT3 = (2.210831240 .250. the results for subsequent time steps for t = 4.605593419 1842.7.04) = -312.04 .010660768 -83.492235905 -92.254519132 2500.198) These results.0)(-4.One-Dimensional Initial-Value Ordinary Differential Equations 375 Let At = 2.0 .0 × 10-~2)[(2500.017590925 -0.123860057 -0.234375000 139.355925175 -204.693063461 .0 s.169.0 2.102.601503560 124.000506244 -41.689901700 .000260369 .) 4_250.175.000000000 2248.0 = -241.194) AT2 = (2.246807810 1798.0 s are presented in Table 7.0 10.041 (7.038652301 -241.335495214 .884298188 -92.984283934 -39. AT --1 (2.195) -= -256.227583359 1758.256.809631378 .000283433 -0.197) .7 Solution t.22972313 + 2(-241.04] (7.373998706 -202.829563365 2248.04 ] (7.389312398 -0.000425090 -0.0 4.84.0 6.008855569 - 156.211873099 -0.0 × 10-12)(2500.0 2.731281344 124.35592518) =2248. Table 7.083178136 .898370905 -38.0 s to 10.128.69306346 = 2500.0)(-4.710427814 .366138259 .0 241"37~99871)4_250.213391686 -76. and the solution for At = 1.122675002 111.201682488 .37399871) (7.0 +I[-312.250.148. For the first time step.896277161 - 137.015662958 -0.229723129 2074.240707542 112.

6 for the modified Euler method. Results such as these clearly demonstratethe advantageof higher-order methods. it should be used only occasionally.000 times smaller than the error presented in Table 7. Care must be taken in the specification the values of (lower error limit) and (upper error limit).~(t’n+l)=y tn+l.4 Error Estimation and Error Control for Single-Point Methods Consider a FDEof 0(Ate). At) denotes the approximate m+~ solution at tn+1 with incrementAt.0 is obtained since the higher-order terms in the truncation error are still significant. If IErrorl > (upper error limit).199) where .34.008855569 -.500 times smaller than the error presented in Table 7. For a single step: m+~ ~(tn+l) = y(tn+ .0 s gives E(At = 2.7. At) + A At ~ (7.376 Chapter7 The error at t = 10.. How the can +t magnitude the local truncation error A A~ be estimated? Repeatthe calculation using of step size At/2. (7.0s for At = 1.14.200) fromEq. Y(tn+~. Subtract Eq. for A At Error = A Atm+~ [yn+l (tn+l.Anerror analysis at t = 10. and A At is the local truncation error.01 instead of 16. Thus. 7.000260369 Ratio- 4) whichshowsthat the methodis fourth order. The value Ratio = 34.-~-) -Yn+l(tn+l. whichis the local trucation error.199) and solve ~+~.0) -0. since the theoretical error ratio for 0(At methodis 16.201) If [Errorl < (lower error limit). . hacrease (double) the step size. (7.-0.200) This process is illustrated in Figure 7.~ +2 A (7. Consequently.0s is approximately110.01 E(At = 1. This methodof error estimation requires 200 percent more work.0) . .3 for the explicit Euler methodand 3.~(tn+l) denotes the exact solution at tn+~.0. decrease (halve) the step size. At)] (2m2m~_ = (7.

are 6) 5) The FDEs consistent. _ 8~: + ~ &.y.204c) (7. The general features of the Runge-Kutta-Fehlberg methodare presented below. (7. 4~9~.1 -4-~128 -. 0(h6) (7.202) and (7.202) (7.12--~ --n. k 6 = Atf(t.845u ~ ~ ~4) + _~kl+2k 2 3544r. Eqs. + (~J~ki +~k3 +~-~-d"4-~k5 + ~5 k6) Yn+l=Yn "~ (2-~6 kl 25 ~ 14081~ ± 2197/~ ~.+I.~n÷l -~... 1932 ~ ~ 7200/. as the final value ofy.~. Use Y.203) 5) 0(h ~ks) k I = Atf(t..203). 21977296+r. (7.) k2 = (7. The Runge-Kutta-Fehlberg method [Fehlberg(1966)] uses six derivative function evaluations: Y.204f) The error is estimated as follows.206) subtracting yields Error-3-ff6kl.Error 6) 0(h + Substituting Y.205) and (7..421971. are (7.3 T 4-]"6Tr~4 -28561/. 1859V +~h.205) (7.y. (7.+ 1. tn+l/2 tn+l y (tn+~.204e) 11 ~k5) Atf(tn +l~ h.206) .202) and (7. 2. into Eqs. The FDEs explicit and require six derivative function evaluations per step.207) 6) Theerror estimate.204a) (7.204b) (7.+I =Y.~3) ~ yn ~ K1 ~ ~2 k 5 = Atf(t.One-Dimensional Initial-Value Ordinary Differential Equations 377 ¯ ~/(tn+ ) ~ /..203) can be expressed follows: ~n+l --~" Yn+l + 0(h6) ~n+l = ~n+l -~- 0(hS)+ 0(h6) ---. Equations (7. Error.7.+I and ~n+l. ~k + ~k 6)0(h k3 + 5 6+ (7.~ __ tn Figure 7. is used for step size control.2-~n..÷ ¼kl) Yn k 3 = Atf(t n + ~h. whichis 0(h locally.. 7. 0(At locally and 0(At globally.-I.5 Runge-Kutta Methodswith Error Estimation Runge-Kutta methods with more function evaluations have been devised in which the additional results are used for error estimation. At/2) At) Yll Stepsize halvingfor error estimation. 1. -.y n + ~2kl + 3~k2) k4 -= Atf(t.14 ~y(tn+~. 8 -~3 + ~4 -+ h.y.204d) (7.

Summary Several single-point methods have been presented. convergent. successive extrapolations increase the order of the result by 2 for each extrapolation.208) wherejT(ti) denotesthe exact value off(t) at t t i.t i computed with incrementh. f(t i . and in Section 6. The results for .6 Chapter7 The FDEs conditionally stable.Thus.1 The Extrapolated Modified Midpoint Method Gragg(1965) has shown that the functional form of the truncation error for the modified midpoint method is ~(tn+l) y( tn+l. 4. (7. The first-order Euler methodsare useful for illustrating the basic features of finite difference methodsfor solving initial-value ODEs. h) deno the appr ximate valu tes o e off(t) at t --. 4) 6) At) ÷ 0( 2) + 0(At ÷ 0(At +--.. but they also are too inaccurate to be of muchpractical value. The but second-order single-point methodsare useful for illustrating techniques for achieving higher-order accuracy. A similar procedure can be developedfor solving initialvalue ODEs.8 EXTRAPOLATION METHODS The concept of extrapolation is applied in Section 5. Extrapolation can be applied to any approximation an exact processf(t) if the functional of form of the truncation error dependence the increment in the numerical processf(t. r = 2. n is the order of the leading truncation error term. 4. Runge-Kutta methods can be developed for any order desired. 7. h) + O(hn) + 0(h n+r) +. are The FDEs consistent and conditionally stable. Extrapolation is applied to the modified midpoint methodin this section to obtain the extrapolated modified midpoint method. In mostnumerical processes. they are too inaccurate to be of any practical value. This effect accounts for the high accuracy obtainable by the two processes mentionedabove.7. 16.. The substeps are taken for h = At~M. The step from n to n ÷ 1 is taken for several substeps using the modifiedmidpoint method. 7. r = 2. Ro_mberg integration). h) on is known.6 to increase the accuracy of secondorder finite difference approximations derivatives.. In the two processes mentionedabove. and the order of the error increases by one for successive error terms.209) Thus. are Anestimate of the local error is obtained for step size control. and thus. Thus. (7. f(ti) = f(t i. 8.378 3. etc.for M= 2. 7. Single-point methodswork well for both smoothly varying problems and nonsmoothlyvarying problems.8.extrapolation yields a rapidly decreasing error. and the order of the error increases by 2 for each successive error term. The fourth-order RungeKutta methodpresented in this section is the methodof choice whena single-point method is desired.e.4 to increase the of accuracy of the second-order trapezoid rule for integrals (i. Consequently. and r is the increase in order of successive truncation error terms. The objective of the procedure is to marchthe solution from grid point n to grid point n + 1 with time step At. 5. r = 1.

(7...1 (7.0 (7.hf (t n q. Recall Eq.213) Applythe modified midpoint methodwith At = 2.210a) z~= z0+ W(t.210c) (7. smaller h result) value of the tworesults beingextrapolated.1)h. At~M)is constructed for the selected number values of M. The extrapolated modified midpoint method Let’s solve the radiation problempresented in Section 7. Eq. Example7. let M= 2. larger h result) value of the two values being extrapolated. MAV .This processis illustrated in Figure7.0) = 2500. of and these results are successively extrapolated to higher and higher orders using the general extrapolation formula.At. The version of the modified midpoint method to be used with extrapolation is presented below: z0 = y. The algorithm based on the repetitive application of the extrapolated modified midpoint FDEsis called the extrapolated modified midpoint method.e.1): T’ = -cffT 4 .e. Thesefour values will be successively extrapolated fromthe 0(At results obtained for . improved)value. y(tn+~.LAV 2nMAV . four values of Tn+ will be calculated at each time step ~ 2) At.. zi_~] (i = 2 ..0 s to solve this problem.. MAV denotes the more accurate (i.250.LAV IV = MAV4 2 ~ . and LAV denotesthe less accurate (i.1 by the extrapolated modified midpoint method.15. ] ZM) Yn+l : ½ [ZM-1 -t- A table of values for y(tn+~.. (7.212) where IV denotes the extrapolated (i.04) =f(t.e. For each time step.One-Dimensional Initial-Value Ordinary Differential Equations 379 n n+l t Figure7.. At~M) the various substeps are then extrapolated to give a highly accurate result for for Yn+l. 4.117). z0) Zi ~--Zi_ 2 -t- (7. and 16. T) T(0.12. Thus.211) 2hf[tn + (i .. (5.210b) (7. 8.1 = 2n .15 Grids for the extrapolated modifiedmidpointmethod. M) ZM -~.

626001 2 + 1.765625 ÷ 2258. Let’s compare these results with the results obtained by the fourth-order Runge-Kutta method in Example 7. Extrapolating the three 0(h4) results yields the two 0(h results presented in column 4. 8.30754055 2248.250. a total of 340 derivative function evaluations wouldbe required by the RungeTable 7. Since each time step requires four derivative function evaluations.0 s yields the results presentedin the at Table7.04 .7656254.2hf~ 2 0 z2 = 2500.626001 T = ½(z + z2 + hJ~) 2 I T = ½ [2343. of the error of the extrapolated modified midpoint methodwould require a step size reduction of approximately (0. and 10. 6.24731430 is accepted as the final result for T K 2.00000010K.0 x 10-12)(2500.215) x 10-12)(2343. 2) Extrapolatingthe four 0(h results yields the three 0(h4) results presented in column 6) 3.0 + 1. 8.2343.0 x 10-1~)(2258.0.0(-4.380 Chapter7 the modifiedmidpoint method itself to 0(At4). 2249.00885557/0.0 ÷ 2(1.0 (7. 0(Atr).0 s for the Runge-Kutta solution is -0.24740700 2248.214d) (7.24737802 16 MAV LAV 15 2248.00885557 To reduce this error to the size K.214a) (7.0s to march from t = 0s to t = 10. Repeating procedure t -.0.0. The fourth-order Runge-Kutta method with At = 2. For the first time step M= 2. The extrapolated modified midpoint method required 31 derivative function evaluations per overall time step.48522585 2248.00000010)~/4 = 17.26241865 3 2248.0s.0(-4. h = At~2 = 1.0s. The final error at t = 10. The 0(h8) value of T~ = 2248.15523693 2248.8. The final error at t = 10.8 First Step Solution for the Extrapolated Modified Midpoint Method 4 MAV . and 0(AtS).0s is 0.250.11.26188883 2248.6260014.24731574 64 MAV LAV 63 2248.214e) (7. 6) Extrapolatingthe two 0(h results yields the final 0(h8) result presented in column5.04) --.4.0s required four derivative function evaluations per overall time step.0)(-4.214b) (7.24731430 .15523693 (7.LAV M 2 4 8 16 2) T 0(h 2.214c) (7.250.04) = 2258. The following results are obtained z 0 = T = 2500.216) Repeatingthese steps for M= 4.04)] = 2249.24831212 2248.9. which would require total of 5 x 17 --= 85 time steps.Z -~. and 16 yields the second-orderresults for T presented 2 in the second columnof Table 7. for a total of 151 derivative function evaluations with At----2.0 s.765625 Z ~.0 o zl = z0 + hJ~ zI = 2500. for a total of 20 derivative function evaluations to marchfrom t = 0s to t = 10.

26241865 2248.that is.00000010 0.61191099 2074. 7.26770120 1758. This comparison not intended to show is that the extrapolated modified midpoint methodis more efficient than the fourth-order Runge-Kuttamethod.00000024 2074.61189824 0.2 The Bulirsch-Stoer Method Stoer and Bulirsch (1980) proposed a variation of the extrapolated modified midpoint methodpresented abovein which the substeps are taken for h = At~M.8.0 T(At/16) Tn+ 1 2500.26337480 2248.24731430 2.26337470 Error 2248.9 Solution by the Extrapolated Modified Midpoint Method T(At/2) T(At/4) T(At/8) 381 4) 0(At 4) 0(At 4) 0(At 6) 0(At 6) 0(At 8) 0(zXt t.00025328 2074. 12.00000012 1758.26337496 1758.00000019 1842.. is required to advancethe solution to point n ÷ 1.61189807 4.61190855 2074.9 MULTIPOINT METHODS Themethods consideredso far in this chapter are all single-point methods.71131895 2074.09450797 1758. Higher-order ..61500750 2074. This methodsworks well for both smoothly varying problems and nonsmoothlyvarying problems. 6.63690642 2074.24740700 2248.61189788 1758.3 Summary The extrapolated modified midpoint methodis an excellent methodfor achieving highorder results with a rather simple second-order algorithm. 8.0 10.00000000 2249.48522585 2248.0 2074. 16.33199904 1758.26445688 1758.61210224 2074.24737802 2248. 7.for M= 2.26338489 1758.24731574 0.30754055 2248. only one known point. 7..15523693 2248. point n. 4.8.24731405 2074..26337480 ~’n+ l 2248.09450785 1758.26337543 1758.0 Kutta method to achieve the accuracy achieved by the extrapolated modified midpoint methodwith 151 derivative functions evaluations. Comparableaccuracy and efficiency can be obtained with the two methods.26188883 2248.+ ~ 0.0 8. and the extrapolation is performed using rational functions instead of the extrapolation formula used above.24831212 2248.61815984 2074.One-Dimensional Initial-Value Ordinary Differential Equations Table 7.26337480 0. These modifications somewhat increase the efficiency of the extrapolated modified midpoint methodpresented above.24731430 2075.26353381 1758.28065012 1758.61189807 1842.

n .e.218) Consider uniformfinite difference grid illustrated in Figure 7. Suchmethods called multipoint methods.16.1 .. explicit and implicit methodscan be derived by using morepoints to advancethe solution (i.217) which can be written in the form dy =f(t.+. fi(t)] dt = F(t) (7. A two-parameter family of explicit multipoint FDEs results correspondingto selected combinationsof q and k.. Jt. n . to point n 4. Consider the general nonlinear first-order ODE: ~’ = -~ = f(t. the subscript k denotes the degree of the Newton backward-difference polynomial.16 Finite difference grid for general explicit multipoint methods.382 I Pk(t) ¯ ¯ ¯ n-3 q=4 n-2 q=3 n-1 q=2 n q=l n+l t Chapter7 Figure7.q.Multipoint are methodsare sometimescalled multistep methods or multivalue methods. n 4.17 Finite difference grid for general implicit multipointmethods. points n. and the subscript n denotes that the base point for the Newtonbackward-difference polynomial is point n.1. There are several waysto derive multipoint methods.).2. . General implicit FDEsare obtained as follows: I = [~"+t ’~n+l--q d~= [Pk(t)ln+l ft.+l-q dt (7.1.17.220) I Pk(t) ¯¯¯ n-3 q=4 n-2 q=3 n-1 q=2 ~ n q=l n+l Figure7. Generalexplicit the FDEsare obtained as follows: I : [ ~n+t d~ : [tn+ t [Pk(t)]n (7. etc.all of whichare equivalent. We shall derive themby fitting Newton backward-difference polynomialsto the selected points and integrating from someback point.~) dt =f[t.219) wherethe subscript q identifies the back point. Consider the uniform finite difference grid illustrated in Figure 7. dt ~) ~(to) (7.

the resulting FDEs the are called AdamsFDEs. .221) Recall the third-degree Newton backward-difference polynomial with base point n.219). the set of equations is called an AdamsBashforth-Moulton FDEset.7.222) The polynomial P3(s) is expressed in terms of the variable s. can be constructed using an explicit multipoint methodfor the predictor and an implicit multipoint methodfor the corrector.. Two approachescan be taken to evaluate this integral: (a) substitute the expression for s terms of t into the polynomial integrate with respect to t.+ ~ I = d~ = [P3(t)]. Considerthe uniformfinite difference grid illustrated in Figure 7.18 Finite difference grid for the fourth-order Adams-Bashforth method. q = 1).such as the modifiedEuler methodpresented in Section 7.h + t = t n + sh --+ dt = h ds The limits of integration in Eq. (7. (7.223) P3(t) n-3 n-2 n-1 n n+l t Figure7.221).102): t . (4. and implicit AdamsFDEsare called Adams-Moulton FDEs.t n s-.+ l --~ s = 1 (7.e. Eq. (4. 7. or (b) express the integral and terms of the variable s and integrate with respect to s. .9. not the variable t. Predictor-corrector methods. Eq. 1883). We shall take the secondapproach. are t n --~ s = 0 and t. where the subscript n ÷ 1 denotes that the base point for the Newtonbackward-difference polynomial is point n ÷ 1.18. When used in a predictor-corrector combination.1 The Fourth-Order Adams-Bashforth-Moulton Method The fourth-order Adams-Bash forth FDE developed by letting q = 1 and k = 3 in the is general explicit multipoint formula. The fourth-order Adams-Bashforth-Moulton FDEs are developedin the following subsection.101): (7. in terms of s. Eq. Recall the definition of the variable s.One-Dimensional Initial-Value Ordinary Differential Equations 383 A two-parameter family of implicit multipoint FDEs results corresponding to selected combinationsof q and k. Thus.Explicit Adams FDEsare called Adams-Bashforth FDEs(Bashforth and Adams.224) (7. When lower limit of integration is point n (i. ~n J t~ if dt (7.

-0 - (g. wherethe secondintegral in Eq.3).225).~n -~- Chapter7 h P3(s) ds + h Error(s) ds (7. (7.(.251 hSF.+ 3)~_~ Substituting the expressions for the appropriate backwarddifferences into Eq.-). conditionally stable (c~ At ~< 0.3).)~n 3Z_1 + 3Z_2 --?n-3)] ~. ~n+l-fin=hIi(fCn+sV~ +h +zV____~__ +s3+3s2+2Sv3~.+. 5) 4) consistent.227) gives -I- 83.n) 2~s2+s ds6 (7. (7.226) and evaluating the result at the limits of integration yields 251 h57(4) r. (7. and thus.228) Collecting terms and truncating the remainderterm yields the fou~h-order AdamsBashfo~thFDE: h Yn+l = Yn q.3)~_~ + --)n-~) (). The FDE is The FDE is TheFDE is The FDE is Adams-BashforthFDEare summarized explicit and requires one derivative function evaluation per step.221) becomes ~n+l --. (7. (7. 0(At locally and 0(At globally..229) The general features of the fourth-order below.2).226) Iis -q. (7.384 Thus. . 3.~ (7.6s3 + 2 ls 1 24 + 6s h4y~(4)(z) ds Integrating Eq.(5)r.225) error term..22n-2 "~-Z--3) (97.3)~_. +). +Z-9 (). 2. 4. gives.225) Substituting Eq. convergent. Eq.227) The backward differences in Eq.+~ 2).-3 (?n+l tn+ l " ~n+l (Z._. (7.r~ (7. 1.~ (55fn -59fn-1 -]37fn-2 -. consistent and conditionally stable.9fn-3) (7.227) can be expressed in terms of function values from the following backward-differencetable: (fn-2 tn-2 In-1 )n-2 (Z--I --?n--2) (2n-1 -..222) into Eq.

The FDEis (7. and simplifying. with n --~ n + P3(S) =J~n+l-+-s Vj~+I q (s J~ 1)s V2~’n+l + (s + 2)(s 1)S V3j2 n+l (7. . Consider the uniform finite difference grid illustrated in Figure 7. convergent.234) Truncating the remainder term yields the fourth-order Adams-Moulton FDE: h Yn+I=Y.230) Recall the third-degree Newtonbackward-difference polynomial with base point n + 1. integrating. fn+l --fin = h -1 P3(s) ds + h and t. The FDE is 3.235) Adams-Moulton FDEare summarized implicit and requires one derivative function evaluation per step. Eq.233).19. I = df = [P3(t)]n+l Jtn dt (7.220). and thus. The FDE is 2. substituting for the appropriate backward differences.3)(S -~. (7. (7. (7.+1 19j~n 5J~n-IJI-J~n-2) + - -- 7~0h5~(5)(z) (7.101). The FDE is 4.231) (S -J. + ~-~ (9fn+l + 19fn .-2) The general features of the fourth-order below.0).230). are t n -~ s = -1 Thus. 0(At locally and 0(At globally.the integral will be expressed in terms of s. 1. The limits of integration in Eq. Thus. consistent and conditionally stable. 5) 4) consistent.+ 1 --> s = 0 (7. The fourth-order Adams-Moulton is developed by letting q = 1 and k = 3 in FDE the general implicit multipoint equation. collecting terms.232) Error(s) -1 ds (7. (4.233) Substituting P3(s) into Eq. in terms of s.One-Dimensional Initial-Value Ordinary Differential Equations I P3(t) n-3 n-2 n-1 ~ n n+l t 385 Figure7.2)(6’ l)S h4~’(4)(17) 24 As done for the Adams-Bashforth FDE. gives ~n+l = ~n "q- 2~(9J7. conditionally stable (~ At <~ 3.5f~_1 +f.19 Finite difference grid for the fourth-order Adams-Moulton method. Eq.

000 1.000 0. Eq...59(-~Yn-1) 37(--~Yn-2) -.000 + I0.2894.~ -~ 1.3 (7.-~ Yn-2 = -~ Yn-3 = ~-S Substituting these values into Eq. 4). let’s perform a stability analysis of the fourth-order Adams-Bashforth FDE. Solving Eq.236b) Y.-3 gives Yn Y..239) for y.248 + I0.242) by 3 and rearranging yields (55(7h) ~ 59(~h) G2 . It can be used as a corrector FDEwith the FDE Adams-Bashforth as a predictor FDE.4 Stability analysis of multipoint methodsis a little morecomplicatedthan stability analysis of single-point methods.-~Yn-3)] For a multipoint methodapplied to a linear ODE with constant At. (7.~4 (55fn -.281 0.59fn-1 "q. G = Yn+t _ Yn __ Yn-~ Yn-2 Yn Yn.235): FDE Yffnq-1 =Yn-~.31 GI 1.1 Yn. For stability.265 0.000 0.. (7. Stability analysis of the fourth-order Adams-Bashforth FDE The amplification factor G is determined by applying the FDEto solve the model ODE.773 1.000 0. Y~ Yn-I =.10.236a) (7. gives (c~h) 59 37 9) G=I-~ 55---~+G2 ~3 Multiplying Eq. Thus. there are four values of G.344 0.239) Solving Eq.59-~ + 37 y" .905 0.9fn--3) (7.2 Yn.13.229): h (7.394 .238) gives Yn+l = Yn -- (7.523 0.20 0.~ 4 G -t..37fn-2 -.+I/Y.238) Yn+I : .237)yields h (7. ~’+ ~j5 = 0. all four i values of Gi must satisfy }Gil < 1.+l =Yn +--(9L~+l + 19f~ .735 G2 0.203 0.389 0.240) (~h) -~ (55yn. for whichf(t.022 and G 4 0.9(.2854.] G324 + ~ tr 24 .229) and (7.~)= -~)5.9fn-3) = Example7.243) For each value of (~h). (7.5L-1 +L-Z) . (7.237) Yn+~ Yn nt" ~-~ (55fn .243) for the four roots by Newton’s methodgives the following results: (c~h) 0. and Y. Yn-2.Eq. (7.819 0. G (i = 1 . To illustrate the procedure. from Eqs.0 (7.10 0. Thus. the amplification factor G is the samefor all time steps.9Y~ -~ G3 ] (7.386 Chapter7 The Adams-Moulton is implicit.000 0.30 0.10.241) Solving for G =Y.Vn "if. (7.59fn-1 + 37fn-2 .268 G 3 [G3Iand[G41 0."~ [55(--~Yn) -._~.239 0.00 0. 37(~h).742 0. (7.195-I-I0. Thus.242) (7.

let’s solve the radiation problem 4presented in Section 7.0).245b) Let At = 1. 1.0 [55(-86.0. Thesevalues are given in Table7.250.00026037. since the exact solution is known.47079576 + ~’~ [9(-74.At ~ ~-~(9fn~+l 19fn -.16753966) . T~+I)= -c~[(Tff+l)4 .18094603) + 37(. 2. Example 7. 4.244a) (7.04] ie c n+l : Tn -].0s of -0.17350708 (7.24079704)] = 2074. and 3.0 s to t = 10.0 s for starting values. 5.2504) = -74. T)=-~(T Ta4). 387 FDEsare The FDEs explicit and require two derivative function evaluations per step. starting values wouldbe obtained by the fourth-order RungeKutta method.’.let’s use the exact solution at t = 1.247) rg : 2154.248338224.9fn_3) 1 n 1 2 fnP+l = f(tn+l. The algorithm based on the repetitive application of the Adams-Bashforth-Moulton FDEsis called the Adams-Bashforth-Moulton method.23437500)] = 2075.24079704) .59fn_ ÷ 37fn_ -. The Fourth-Order Adams-Bashforth-Moulton method To illustrate the Adams-Bashforth-Moulton method.10.124. e T4 = 2154. Use the fourth-order RungeKutta method. 3.16753966) .102.0 s are presented in Table 7.3.0 s.However.1.14.One-Dimensional Initial-Value Ordinary Differential Equations Theseresults showthat IGI _< 1 for (a At) < 0. Recall Eq. 0(At5) locally and 0(At globally.124. Let’s compare these results with the results obtained in Table7.10.47079576+ 1. For t 4 = 4.7 gives an error at t= 10.0 x 10-12)(2075.17350708) + 19(-86. for which the derivative function is f(t. are Four equally spaced starting points are needed. are 4) The FDEs consistent.18094603)+ (.7 by the fourth-order Runge-Kutta method. In general.248) Theseresults and the results of the remainingtime steps from t = 5.55075892 (7. Table 7.236): fn = f(tn.0 s. The general features of the Adams-Bashforth-Moultonpredictor-corrector summarized below.0.9(.158. are The FDEs conditionally stable (c~ At <~ 3.246) (7. are The FDEs consistent and conditionally stable. For At = 1.24833822 f4 P = -(4. 2. convergent.245a) (7.0s.244b) (7.5 (. and thus.5fn_1 +fn-2) + (7. Tn) = --e(T4~ 250 -04) At Tff+ = T + ~-~ (55£ . which is approximately 284 times smaller than the corresponding error . (7. along with the correspondingvalues of£.59(-102.

26337470 -0.102.80233360 -38. which is approximately 8.249) 0 Pk(t) ¯¯¯ n-3 q=4 n-2 q=3 n-1 q=2 n q=l n+l Figure 7.0 T.4 times larger.24079704 .18932752 r~ L Error .86.10 is -0.66995205 -57. The error at t = 10.0 9."+.00000000 2360.08167726 -0. The finite difference to grid for the general explicit Adams-Bashforth FDEsis illustrated in Figure 7.0 1.16753966 -74. 7.07872845 -0. The Runge-Kuttaresults for At = 2.10.0s.71557210 -64.55075892 2005.24731405 2154.61841314 1798.17350708 -74. However.008855569.124.07404718.0 s in Table 7. The general formula for the explicit Adams-Bashforth FDES is: iin+ t d~ = h l[Pk(S)ln I (7.388 Table 7. .10 Method Solution by the Fourth-Order Adams-Bashforth-Moulton Chapter7 t.0 s presented in Table 7.7 is -0. The fourth-order RungeKutta methodis moreefficient in this problem.20946276 exact solution 2074.08716907 -0. 2500.23437500 .18094603 .24833822 2074.9.61189788 2005.7 required the same numberof derivative function evaluations as the results in Table 7.2 General AdamsMethods Adamsmethods of any order can be derived by choosing different degree Newton backward-difference polynomials fit the solution at the data points. 0.19500703 -57.68816488 2005.The correspondingerror in Table 7.14913834 1758.0 6.0 3.0 5.07380486 -64.0 2.0 10.0 s for At = 2.22786679 1758.53124406 1798.82998845 2248.10 for At = 1.17432197 -41. the Runge-Kutta method requires four derivative function evaluations per step compared to two for the Adams-Bashforth-Moultonmethod.20.21558333 1758.47079576 2075.33468855 1944.0 4.156.41636581 1944.20 Finite difference grid for general Adams-Bashforth methods.06113896 -0.70704983 1944.07404718 in Table 7.

+l l ds (7. and simplifying the result yields the general explicit Adams-Bashforth FDE: IYn+l =Yn+/3h(eofn+~_lfn_l+e_2fn_2+ . and the stability i limit. Integrating Eq..0.11 Coefficients for the General Explicit Adams-Bashforth FDEs k fl ~0 ~-1 ~-2 ~-3 ~-4 ~-5 n 389 C 0 1 2 3 4 5 1 1/2 1/12 1/24 1/720 1 / 1440 1 3 23 55 1901 4277 .. -1. are presented in Table 7.One-Dimensional Initial-Value Ordinary Differential Equations Table 7. the global order n.251).251) wherek denotes the order of the Newton backward-differencepolynomialfit at base point n+ 1.475 1 2 3 4 5 6 2.7923 5 37 2616 9982 -9 . Integrating Eq. introducing the expressions for the appropriate backward differences at point n + 1. Thefinite difference grid for the general implicit Adams-Moulton is illustrated FDEs in Figure 7. and simplifying the result yields the general implicit Adams-Moulton FDE: 0(h").249). (7... ~At<CI (7. ~ At < C. ).7298 251 2877 .3 0.5 0.21 Finite difference grid for general Adams-Moulton methods.. e At < C.0 1.) O(hn).1274 .2 where k denotes the order of the Newton backward-differencepolynomialfit at base point n.11.1.0 0. introducingthe expressions for the appropriate backward differences at point n.252) Yn+l = Yn -Jr/3h(~lfn+ 1 + o~of n -+. -1 . ). I Pk(t) ¯¯¯ n-3 q=4 n-2 q=3 n-1 q--2 ~ n q=l n+l t Figure 7. evaluatingthe result for the limits of integration. -2 . evaluating the results for the limits of integration. (7.O~_l fn_ 1 "~ "" ") wherethe coefficients /3 and 0~ (i = 1. the global order n and the stability limit. .21..250) where the coefficients/3 and ~i (i = 0. are presented in Table 7. -16 -59 -2774 . c~At_< C (7.12.. The general formula for the implicit Adams-Moulton is: FDEs d~ = h 0[Pk(S)]..

Eqs.254) is given 19 5 Corrector Error -.253) and (7. When step size is halved. ynC+l is not known until the corrector FDE evaluated.19 --173 27 1 2 3 4 5 6 oo oo 6.0 1. every other previous solution point is used to the determine the four knownpoints. This methodof error estimation requires no additional derivative function evaluations. increase (double) the step size.253) (7. the truncation error is given by 251 Predictor Error = 251 Y(V)(z) -. assume that .258) Unfortunately. it can be used to extrapolate the solution. For the predictor. These equations can be expressed .9.Pn+l ~ Y~n+l -I. Assuming y(V)(ze) = y(V)(rc) = z that which is a reasonable approximation. When step size is doubled. Oncean estimate of the error has been obtained.7-~" ~" Yn+l -.12 Coefficients for the General Implicit Adams-Moulton FDEs Chapter7 0 1 2 3 4 5 1 1/2 1/12 1/24 1/720 1/1440 1 1 5 9 251 475 1 8 19 646 1427 -1 --5 -264 --798 1 106 482 -.19 + 251 (ynC+~ y~. (7.9 7. Thus..257) (7. decrease(halve) the step size.3 Error Estimation.254) can be combinedto C Yn+l --Y~n+l = A/’5 Y(v)(’C)(7~0 +720’2515 (7.~20 At5 Y(V)(zc) fin+ 1 C (7.255) from which 720 c _ y~.390 Table 7. (7.720At y(V)(z) 19 c 19 + 251(Yn+~ --Yffn+l) (7. the corrector error in Eq. the solution at the two the additional points at n + ½ and n + 23. or the solution can be restarted by the fourth-order Runge-Kutta methodat grid point n. Anestimate of the is C predictor error can be obtained by lagging the term (Yn+~-Y~+I).+l) kt5 Y(V~(v) 19 + -Thus.can be obtained by fourth-order interpolation. Consider the fourth-order Adams-Bashforth-Moulton FDEsgiven by Eq. (7.0 3.236).251 At5Y(V)(rP) -.256) If ICorrector Errorl is less than a prescribed lowererror limit.+~) 720 5 At _ (7.254) wheretn_3 <_5_ P <_tn and tn_ 2 5 "cC <~tn+l. Error Control. and Extrapolation Anefficient methodof error estimation and error control can be developedfor predictorcorrector methods. If ICorrector Errorl is greater than a prescribed uppererror limit.

(7.single-point methodsand extrapolation methodsare preferred..M n+l y~+~ Ay~_~ + (7. + ~--~ (9f~_~ + 19£ .264) 7.Ayn+l [ C n+l ] (7. since the values on the right-hand side are lagged one step.. Thus. example this type of method.261) (7. fn~ =f£~ =f(tn+.5fn_~ +fn-2) (7. The radiation problempresented in Section 7.22. and a simple and inexpensiveerror estimation procedure..0 s are presentedin Figure7.C+1 .259) ]y~.+l.e.+I -19 19 + 251(Y.M yC.Eq.e. Thefirst-order explicit . not y~. extrapolated) predictor solution up (7.9.4 Summary Multipoint methods work well for smoothly varying problems. up corrector solution is C.10 SUMMARY OF METHODS AND RESULTS Severalfinite difference methods solving first-order initial-value ordinary differential for equations are presented in Sections 7.. Multipoint methods other than Adams-Bashforth-Moultonmethods can be derived by integrating from back points other than point n.262) The mopup correction (i. extrapolation) for the fourth-order Adams-Moulton corrector FDEis then C. (7. The mopped (i.5 to 7. For nonsmoothlyvarying problems. if not the best.M ~ Yn+t q.Y~+0 (7.260) Onthe first time step at to.e.. (7. Seven of the moreprominentmethodsare summarizedin Table 7.e.~_~ 19 ÷ 251 (v~c -y~) The mopped (i.260) cannot be applied.1 was solved by these seven methods. The fourth-order Adams-BashforthMoulton methodis one of the best..9.It has of excellent stability limits. It is reconamendedas the method of choice when a multipoint method is desired.236b).263).263) extrapolated) Note that y~.13.One-Dimensional Initial-Value Ordinary Differential Equations (-Vn+l 391 Themop correction (i. Y~-~) h eM c Yn+x=Y. Eq. extrapolation) for the fourth-order up Adams-Bashforthpredictor FDEis --Y~n+l) c c ~ (Yn --Y~n)" 251 Ay~. The fourth-order Adams-Bashforth-Moulton method is an excellent example of a multipoint method. 7.a~ is used in Eq. excellent accuracy.M AY.. Theerrors for a step size of At -~ 1. The value of y~_ff is used to evaluate f~ for use in the fourth-order AdamsMoultoncorrector FDE.

_I + 37f. The fourth-order Adams-Bashforth-Moulton method and the fourth-order extrapolated modified midpoint method yield comparable results. +~.392 Table 7.+~ =y... zo) zi=zi_2+2hf[t~+(i-1)h.~ --Sfn_~ +fn-2).269) (7.+t = ½[ZM_+ z M + hf(t n + At. + ~--~ (5 ~ ... it lacks an efficient error control procedure.270) Ay = Atf t.265) (7.276) At PM c yn+~=y~+ ~-~(gf. M) y.1 The Adams-Bashforth-Moulton Method At 5 y~+~=y..y. yn + Ay3) (7.M _ 19 ~. =f(tn+~. C. Both of these methods have excellent error control procedures. c Yn+l = Yn + ½ At(f~ +f~l) Y~n+l (7.~. AY~n~_Ml 251 [. + At fn The lmplicit Euler Method Yn+l = Yn+ Atf~+l The Modified Midpoint Method At f~+l/2P =f(in+l/2.272) (7.+ At f(tn + At.yn + tn+--f. ZM) ~ ] MAV .y.y~+~) (7. The higher-order (sixth.267) The Modified Euler Method =Yo + Atf~. 2 .LAV 2"MAV .275) (7.. the second-order modified Euler method.268) The Fourth-Order Runge-Kutta Method y. and the second-order modified midpoint method are clearly inferior to the fourth-order methods.271) The Extrapolated Modified Midpoint Method z o = yn.273) (7.266) Y~n+l/2 ~-~ Y"+Tf"’ (7.274) (7.9f~_3). .59f..C -.13 Summary Selected Finite Difference of The Explicit Euler Method Yn+l= Y.y~+~).C iT] __ (7. zi_~] (i=2 . f~.~ + 19f. zI = zo + hf(t n. C Yn+l = Yn + At fnP+l/2 Chapter 7 Methods (7. /Xy3=atf my 4 = (7._2 .LAV 1V = MAV-} 2" . + ~(Ayl +2 Ay +2 Ay +Ay4) 2 3 Ay~ = Atf(t.277) Euler method.and eighth-order) extrapolated modified midpoint methods yield extremely accurate results for this smoothly varying problem. Y~n+i/2). However.1 2" . The fourth-order RungeKutta method is an excellent method for both smoothly varying and nonsmoothly varying problems.).

Two proceduresfor solving nonlinear implicit FDEsare: . the correspondingFDEis linear in Y.M = 2 4 10 Fourth-order Adams-Bashford-Moulton method "2 10 Fourth-orderextrapolated modified midpointmethod. When f(t.11 NONLINEARIMPLICIT FINITE DIFFERENCEEQUATIONS Several finite difference methodshave been developed in this chapter for solving the general nonlinear first-order initial-value ordinary differential equation: IF ’ =f(t.When f(t. explicit FDEs still linear in Y.. 10 Figure 7.+l.278) The derivative function f(t. ~) is linear in ~. = 16 M -7 10 0 5 Time s t.~ = 4 Fourth-orderRunge-Kutta method 10.+l. 7. ~) maybe linear or nonlinearin 3.~) is nonlinearin ~.+~. However. and special proceduresare required to solve for y.~) ~(t0) =D0 (7.+I. = 8 M 10-5 10-6 Eighth-order extrapolatedmodified midpoint method. for both explicit FDEsand implicit FDEs.22 Errorsin the solutionof the radiation problem.One-Dimensional Initial-Value Ordinary Differential Equations 393 First-order explicit Euler method Second-order modified Euler method 0 10 Second-order modified midpoint method.3 10-4 Sixth-order extrapolated modifiedmidpoint method. are implicit FDEs are nonlinear in y..

Thus.137662 58.281) (7. considerthe implicit Euler method[see Eq. ~) in a two-variable Taylor series.129192 -0.279) Yn+l = (7.394 1.282) Example7.0 2. 7.0 10.0 Tn+~ 2500.247314 1798. Timelinearization 2.0 10.73)]: Yn+l = Yn + Atf.+l Expressf(t.+I. tn+ 1 0.572473 -0. + At(f~ +f[.(Y.0 0.012500 2270.+I = Y.At fyl.606661 .11.263375 Error 43.220672 1827.440186 57.234375 .000000 2375.797133 61.261400 2500.250000 -0. The FDEis Tn+ = Tn + Atfn+l ~ Table 7.263375 2360.110305 -0.687500 2132. (7.192569 -0.0 2.280) TruncatingEq.618413 1842.82.409031 2006.253656 .0 1.15. in which the nonlinear derivative function is expressed in a Taylor series about the known point n and truncated after the first derivative term. Solving for Y.+1yields Yn +At fn + AtZ ftin -.214347 -0.281) is linear in Y.829988 2248.279) yields Y.247314 2074.155143 -0.235194 -44. the radiation problempresented in Section 7.468392 .280) and substituting into Eq.234375 .187208 -0.250000 -0.127.780411 -52.0 8.998026 14.973358 28.) +"" (7.0 9.987498 28.1 is solved by the implicit Euler method. At +fyl.227867 1758.106. jT~+1 =jT~ +El. (7.182512 21.1 Time Linearization Chapter7 One approach for solving a nonlinear implicit FDEis time linearization.0 6.)] Equation(7.(Y.611898 1944.343286 .094508 1758.311316 .0 4. (7.283) Solution by TimeLinearization Tn fn fTI.232170 1817.156.110. Newton’s method Thesetwo procedures are presented in this section. Time linearization In Example 7.215365 1786.691332 -64.3. At +~yl.571820 61.000000 2291.156.At ynfyl n 1 -.14 tn (7.+l -y.190233 1903. (7.097609 Tn+ 1 2248. Toillustrate this procedure.+l -Y.

+l) in a Taylor series about the value Y.04 . Example 7.286) Let At = 2. f = -4~T 395 (7.At Znfzl n 1 .0 × 10-12)(2500.290) can be rearranged into the form F(yn+I) Yn+l -.0(-156.0 s to 10. T) is given by 4) f(t.[n (7. For the first time step. fo = -(4.285) Substituting Eq.294) .290). 7.4.0 s.At f~.288) (7. Newton’s method Let’s illustrate Newton’s methodby solving the radiation problempresented in Section 7.14.234375 frlo = -4(4.0)(-0.250000) = 2291.2 Newton’s Method Newton’smethodfor solving nonlinear equations is presented in Section 3.250.T~ Thus.G(Y = 0 .+I) (7.One-Dimensional Initial-Value Ordinary Differential Equations The derivative function f(t.A goodinitial guess maybe required.3 by the implicit Euler method.292) after the first-order is and solving for y.Tna) (7. (7..+~ yields y(I¢+~) Yn+~FQV~k+) 1) n+l ~ (k).0 + 2.687500 (7.285) into Eq.250000) T1 = 1 .1 and solved in Example7.234375) . Newton’smethodworks well for nonlinear implicit FDEs. A nonlinear implicit FDEcan be expressed in the form Yn+l = G(Yn+I) Equation (7.11. ~t~ (k) (7.289) These results and the results of subsequent time steps for t from 4.+I) F(yn+~) +F’(y~+~)QV*n+l -Y + . 3 = 0 and f~.03 = -0. (7. (7.291) (7.0(2500.284) (7.293) Equation (7.2. T) = -a(T4 .2. Tn+ = Tn 1 q- At f~+l = Tn . (7.0 s are presented in Table 7.292) whereY*~+1 the solution of Eq.250000 2500. n+l) 0 (7. Thus.287) (7.~ At(T4n+~ .16.0 × 10-12)2500.04) = -156.290) Expanding F(y.+I and evaluating at Y’n+1 yields F(y*. TruncatingEq.293) must be solved iteratively.0 s. whichalso presents the results for At = 1.0(-0.282) yields Tn+l-Tn + Atfn .

297) gives Chapter7 (7.687500 2282.834998 48.+ .000000 2500.0)(4.T. + At ~(T~4+1.019859 0.(~) k~n+l) Let At = 2.000000 1.14 and 7.+ 1 tn+l k 0 1 2 3 4 T.380667 2248. F(T1) = T1 .297) (7.296) (7.04) = 312.468750 12.776520 49.15 differ due to the additional truncation error associated with time linearization.785819 2120.929506 1806. t~’ n Tn+l Error 0.295) (7.468750 _ 2291.+I F.(2.04 .263375 34.299) (7.03= 1.0 .302) 1.301) ~) T~ = 2500.298) (7.538505 46.0 + (2.2500.0 x 10-12)(2500.+I) F’(Tn+I) = 1 4.611898 1944.0 x 10-12)T13 °) Let T~ = 2500. Theseresults are presented in Table 7.15.~.0 8.0 s to 10.785819. differences are quite small.r( k +t~) ~n+l k ) F ( T~.0 4.800203 2282.291) yields F(Tn+1) T.0 x 10-12)2500.394933 1891.0)(4.0 6.15 Solution by Newton’s Method t.000000 2291.380674 1.500000 Repeating the procedurethree moretimes yields the convergedresult T~I 4) = 2282.247314 2074.0 10.385138 1.785819 2282.0 x 10-~2)(~ F’(TI) = 1 4-4(2.0 4.310131 0.4(2.934807 1994.500000 1.718992 312. along with the final results for the subsequent time steps fromt = 4.0 s.+ 1 T.0 2.0 s.455617 .+ .500000 Substituting these values into Eq. However.094508 1758.4 At ~T~3+~ Equation (7.293) yields .r( ~n+l F.0 312. (7. (7. For the first time step.2500.250.0)(4.468250 °)) F’(T~ = 1 4.Ta4) = 0 = 1 The derivative of F(T.0 2500. Then °)) F(~ = 2500.687500 (7.300) (7.618413 1842.322909 49.294) into the form of Eq.396 RearrangingEql (7.0)(4.0 K. The results presented in Tables 7. Time the linearization is quite popular for solving nonlinear implicit FDEswhich approximate Table 7.

303) (7.305) can be replaced by an equivalent system of n coupled first-order ODEs by defining n auxiliary variable.y. each individual is higher-order ODE can be replaced by a system of first-order ODEs.(O) = "(~-~) Yn-l(0) (7.307.2) to (7.n) and substituting these results and Eq. .304) both can be reduced to a systemof two coupledinitial-value ODEs the procedure described below.12 HIGHER-ORDER ORDINARY DIFFERENTIAL EQUATIONS Sections7.11 are devotedto the solution of first-order ordinarydifferential equations by finite difference methods.. n-.1) (7...) y.n) gives Y’n=y(n) (7.Y y(n-1)) (i= 1.13. a higher-order ODE be replaced by a system of can first-order ODEs.0) Equations(7. (7.308) Rearranging Eqs. by Consider the general nth-order ODE: y(n) =J~ . can The systems of first-order ODEs be solved as described in Section 7. y~. .When exact solution to an implicit nonlinear FDE desired. " .25).306) .305) yields the following systemof n coupled first-order ODEs: yt~ ~_ Y2 Y~= Y3 t yn -1 ~-Yn yl (0) = yz(0) = .2) (7.0) = 0.304) Mo -#.303) and (7.1) (7. ..2 . 01.F(t.309.26): y.307. Thus.307.309.. = F(t.y .0) = V(0.n-1) (7.n) (7..~(to) -~ .305) (7.5 for the vertical flight of a rocket. (n--2) =Y0 Y. and the simpler modelgiven by Eq. y~ = y Y2 =Y’ =Y’l Y3 = Y" = Y~ Yn= Y(’-~) = Y’~-~ Differentiating Eq.307.309.t y(0.. (7..Whena system of higher-order ODEs involved.and the coupled system of higher-order ODEs be replaced by coupled systems of first-order ODEs.[~ ~n(t) 0 (7. V.2) (7..307... (7.3) (7.308) Eq.n) ’ Yn -.~ in(t) o F y" -g 2 _ g(Y) CDp.309. 7.5 to 7..y) ½ p ( (y)A V M -. Y2 . Eq.1) (7.307. y) M .One-Dimensional Initial-Value Ordinary Differential Equations 397 nonlinear PDEs. can Consider the second-order initial-value ODE developed in Section II.0 and y’(0. (7.~o and ~(i)(to) -: Equation (7. Many applications in engineering and science are governed by higher-order ODEs..In general. Newton’s the is method is recommended.307. (II.

0) = 0.0 (7.. Example7. Then Eq.310) reduces to the following pair of coupled first-order ODEs: y’ = V V’ = T y(0.y 2 . (7..312) comprisea system of two coupled first-order ODEs y(t) and V(t). auxiliary variablesYi (i =1.11 are devotedto the solution of a single first-order ordinarydifferential equation by finite difference methods. (7.311) and (7.Recall Eq.13 SYSTEMS OF FIRST-ORDER ORDINARY DIFFERENTIAL EQUATIONS Sections 7. each step must be applied to all the equations before proceedingto the next step. The result is a system of n coupled first-order ODEs. In many applications in engineering and science.0) (7.which can be solved by the procedure discussed in Section 7. (7.311) and (7. VVqaen predictor-corrector or multistep methodsare used. Thestep size must be the samefor all the equations.314) Each ODE the system of ODEscan be solved by any of the methods developed for in solving single ODEs.2 .312) by the fourth-order Runge-Kutta methodis presented in Example7...18 in Section 7.309.310) Let y’ = V. (7.0) = 0. .13. the general features of a higher-order ODE similar to the general features of a first-order are ODE...y~. The solution to Eqs..0 and y’(0. expressed in terms of Eq.n) is the original nth-order ODE..0) : V(0. f.313) (7.305). (7. (7..5 to 7.304) to a systemof two coupled first-order ODEs.0) = 0. Consider the general system of n coupled first-order ODEs: ~ =~(t.... 2 . 7..Care must be taken to ensure the proper coupling of the solutions. Reduction of a second-order ODE two coupled first-order to ODEs To illustrate the reduction of a higher-order ODE a systemof coupled first-order ODEs. This reduction can nearly always be done. n)..13.17. Thus. to let’s reduce Eq.0 -g V(0.312) M0 ~t - Equations (7.398 Chapter7 whereEq.rht 0 g y(0... Yi(O) = Yi (i : 1. (7..311) (7. systems of coupled first-order ODEs governing several dependent variables arise.) (i = 1.304): y" -T M . 2 . The methodsfor solving a single first-order ODE be used to solve systemsof coupled firstcan order ODEs.

rh is the massexpulsion rate.31 I) and (7. (7. for V(10.316) M0 rht - whereM is the initial mass.0) and y(10. Yn+l ~Yn q.5.315) and integrating yields the exact solution of (7. Thus. Theseparametersare discussedin detail in Section II.180).~(z~Yl +2 /~Y2 -~2 /~Y3 -’[~Y4) (7. 2.z (7. V) = y(0.0. Eqs.0kg/s.8t y(t) = 10. andg = 9. 100.8 V(0.~h~00)-gt (7. ~h = 5.3 ! 7.315): Tt _ ~o) y(t) =M°(T~(lrh\rh/ ~nt ln(1 .323) V.0.0) = 0.05t) .2.320) Equations (7.One-Dimensional Initial-Value Ordinary Differential Equations Example7. (7.½gt~th (7. 3.000(1 . 3.315) (7.315) and (7.0) = 0. let T = 10. Ayi( i= 1. (7. Eqs. Solution of two coupled first-order ODEs 399 Consider the system of two coupled linear first-order initial-value ODEs developed in Example in Section7.+1 = V~ + ~(AV + 2 AV + 2 AV + AV4) 1 2 3 (7.4) deno te the incrementsin Y(Oand A V~.9.000 ln(1 .0 .17 of y’ = V V’ T y(0.0 10.0 g V(0.324) .317) Substituting Eq.0) with At= 1.9.6.318) As an example.0 kg. and g is the acceleration of 0 gravity.318) become V(t) = -1.0) = (7.8 m/s o = Equations (7.0 V) = 100.(i = 1.179) and (7.317) into Eq.0 (7.316) become y’ =f(t. 4) denote the incrementsin V(t).18. V’ = g(t.05t)ln(1 .0] . (7.0. The exact solution of Eq.319) (7.000N.y.05t) + 2000t . (7.y.0s.~o) +--. M 2.0) = 0.000.316) V(t)= -~ln(1 .322) Let’s solve this problemby the fourth-order Runge-Kutta method.321) (7.317) and (7.12 for the vertical motion a rocket.

327f) (7.0 + 95"4~3158) = 47.0 + 1.0(0. ~) and g(t.325a) (7.y.325c) aG=atg(t~ + at.326d) 10.y.327e) (7.0 .327c) (7. ~Vl~ ~.~.463158 4 AV4=1"0100.200000 ] Ay2 = (7.9.400 whereAy (i = 1.0[.Vn+ t~+~.9.326b) (7. Eq.0 + 1.100.45.327b) (7.326a) .0 ) .0(t.0(0.0(0.327g) (7. 4) and AV/(i = 1. y~ + aye. (7.9.5.0 AV.000.0 + 90"2~ --.0(0. ~Yl . { 10.V. are given by Eqs.5.yn.327d) (7. 4) are given by Eq. Thus. = 1.8 (7.y.000.9.0 "100.327h) 1. 0 10.0 s~_ At~2) AV At[.000.0 -0 000) ( 0.8 = 95.0(0.8 ) (7.0 + 95.~.0 ] =101"311111 .100000 AV = 1.000000 I F 10.0(0. . F). yn.325d) Due to the coupling.0 100.180): i Ay~ = Atf(tn.731579 AV = 1. +~. Yn.0+1.000. Ay = 1. (7.8 = 90.0 AVz = At 100.8] 3 = Z ~-+ At~2) Ay = At(Vn -+.0/2) .319) and (7.AV3) 4 [ 10.8 3 Let At = 1.9. Vn) Ay2 = At f AV~=Atg AY3=Atf AF 3=Atg t. For the first time step.0 -9. 3.8 = 95.000.463158 2 Ay3= 1. .y.0 9. (7.325) reduces &~ = ~t fG~ ~z~ = at~io~t~ .. G + az~) (7.0/2) .0 AV = At 100.0)-9"8 [ 10.0 . respectively.0_5.2. f(t. Ay~ and AF~both must be computed before Ay and AF c~ be 2 2 computeG and A Fe both must be computedbefore Ay and A F3 can be computeG Ay2 etc.325b) t~+~. + At) .0 .463158) = 95.327a) (7. 3 TMdefvative ~nctions.G+~) (?.~) Chapter7 (7.326c) (7.2.000. V~) A 1 = Atg(t ~.0. 3.463138 3 Ay = 1.320).5.0) = 0.100~5~0.

20000000 AYl Ay2 Ay3 AY4 0.~ fn(t) Mo S~ ~n(t) o (7.67619048 190.303).77660049 101.0 to 10.25599278 295.76410256 92.47425871 1193.100000 + 47.854386 V = I[90.00 4450.20000000 92.38205128 92.00 45.18 are deceptively simple since Eq. Vn).00000000 0.47425871 172. V.y n + Aye~2. The evaluation of AV is done at 2 (t~ + At/2.00 5647.324) gives Yl = I[ 0.00000000 1. (7.15044919 1107.29474933 .463158] = 46.00 430.y n.05250670 1288.463158) + 101.51817364 95.84705882 111. (7. There are several definitions of stiffness: 1.05000399 111.48571429 295.48334962 1197.00 Yn Yn+x V.One-Dimensional Initial-Value Ordinary Differential Equations Substituting these results into Eqs.56141219 107. higher-order linear and nonlinear ODEs. p.316) is simplified version of the actual problem specified by Eq. However.84705882 351.63788278 349.12104493 241.328) (7.the basic features of the Runge-Kuttamethodare unchanged. Yn + Ayl/2. This is a considerably more complicated n calculation.48571429 107.303) for from which V’ = y" is given by V’ = F(t.95470085 92.68315419 1107. Recall Eq.31111111 243.63788278 9.330) The evaluation of AV1is done at (tn. (7. V + AVe~2). and C be evaluated n D at t~ + At~2. Table 7.31111111 104.46315789 98.16 Solution of TwoCoupled First-Order ODEs tn tn+l 0. y) g(Y) Co(p.329) These results and the results for the subsequent time steps for t = 2.323) and (7.200000 + 2(95.30810811 191.09470280 98.0 ÷ 2(45.10000000 90.311111] 1 = 95.30810811 101.41229123 191.41212121 46.81235395 1288. y) ½ p(y)A ~ V M .36390207 295. M.60675922 104. 7. This requires that F.76410256 95.+ ~ 0.and systems of linear and nonlinear ODEs.0s are presented in Table 7. This problem is occurs in single linear and nonlinear ODEs.20000000 10. and V + AVe~2.67619048 180. The results presented in Example 7.16.560624 m/s 401 (7.41212121 115.78659469 2.463158 ÷ 95. (7.76410256 92.14 STIFF ORDINARYDIFFERENTIAL EQUATIONS A special problemarising in the numerical solution of ODEs stiffness.731579) + 95.46315789 141.01818182 180.I2104493 3.00000000 45.34394338 407.00 187.94064875 191. AnODE stiff if the step size required for stability is much is smaller than the step size required for accuracy.78659469 140.

Eq. a is computational time) is too large to obtain an accurate (i. for ~ = 1000.5/0.1 A Single First-Order ODE Althoughstiffness is usually associated with a system of ODEs. ~) = -a(33 F(t)) + F’(t). To avoid overshoot.002. and an is example of a system of stiff ODEs presented in the following subsection. To reach t = 5. gives At < 2/c~ --. For reasonable accuracy. which. one time scale associated with the complementary solution and one time scale associated with the particular solution.~) = -1000~. (7. a (7. than one time scale of interest.F(O)) (7.332) becomes [~(t) -.24. The exact solution for large values of t. An example of a single stiff ODE presented in the next subsection. Equation (7. whichis dominated by the (t + 2) term.. is presented in Figure 7.0002.~(to) = .402 2. As an example. .2/1000 = 0.0002 = 25.~o] (7. Chapter7 An ODE stiff if it contains some componentsof the solution that decay is rapidly comparedto other components the solution. y(0) (7.000 time steps are required..331-) becomes ~’ =](t.it can also occur in a single ODE that has more.14. N = . For stability of many explicit methods. presented in Figure 7. F(t) = t + 2. but stability is controlled by ~ At. to From practical point of view.332) When is a large positive constant and F(t) is a smoothslowly varying function.001. 3. 4. Note how rapidly the exponential term decays.333) and Eq. is 7.an ODE stiff if the step size basedon cost (i. Gear (1971) considered the following ODE: L~’ =f(t.e.002/2 = 0.(t + 2)) + 1.23.002/10 = 0. ~ At < 2. At < 0. The Gear is (1971) methodfor solving stiff ODEs presented in the final subsection.334) The exact solution at small values of t. which has the exact solution .332) exhibits two widely different time scales: a rapidly changingterm associated with exp(-at) and a slowly varying term associated with F(t).-e-1000t t q.e. stable) solution. let a = 1000. of A systemof ODEs stiff if at least one eigenvalueof the systemis negative and is large compared the other eigenvalues of the system. Theerror is controlled by At. At < 0. andS(0) = 1.331) l e-~t + F(t) f~(t) = ~o .2 ] -I- (7. which is dominatedby the exp(-1000t) term.

y).336) Considerfour different step sizes: At = 0.333). + 2)) (7. Theresults are tabulated in Table 7.23 Exactsolutionof the stiff ODE small time. (7.010 Figure 7. 0.335) gives y.(t.59): Yn+]= Yn + At f~ (7. and 0. at 0 LL +t+2 2 " O0 1 2 3 Rmet 4 5 Figure 7.335) Substituting the derivative function.005 Timet 1000t 4. (7. (7. f(t.19.002.0025.333) by the explicit Euler method. into Eq. 0. .One-Dimensional Initial-Value Ordinary Differential Equations 403 ~=-eI I I 0.Eq.001.25. at Example7. 0.t + 2 I I I ~. (7.0005. . definedin Eq. Solution of the stiff ODE the explicit by Euler method Let’s solve Eq.24 Exactsolutionof the stiff ODE large time.+] =y~ + At(-1000(y.17 and illustrated in Figure 7.

0000 0. that transient has completely died out and the solution is dominatedtotally by the asymptotic large time solution.004000 2. y(t) = t + 2. y).633121 1.000000 3.939500 2.0005 At = 0.245000 5.500500 1.985684 2. explicit methods be used. For At = 0.In Example 7.876500 1. (7. in one step.17 Solution of the Stiff ODE the Explicit Euler Method by At = 0. y(t) = t -t.000000 3.0020 0.73): Yn+l = Yn + Atf.002.(tn+l -t. then the small step size required for stability maybe larger than the small step size required for accuracy.0025 0. Example 7.006 0.002 0.010 1.778370 1.338) 0.004 0.Eq.333).002. and can if an accurate large time solution is required.0.0025 1.003 0. definedin Eq. Solution of the stiff ODE the implicit Euler method by Let’s solve Eq.751000 1.001 (7. the solution reaches the asymptoticlarge time solution.009955 .2)) Table 7.009999 At = 0.003521 2.006000 1.004000 3.393969 1.0005.001 0.004 0.0000 0.337) yields Yn+l=Yn+ At(-1000(Yn+l.0010 0. Fromthis example.3.0015 0.+l (7.002 0.866665 2. Example7. However. the solution is clearly unstable.000 0. For At ---.009955 0.0075 0. the implicit Euler methodis used to reduce the problemsassociated with stiffness.001000 2.382500 . the solution is stable but oscillates about the large time solution.0025.008000 3.920415 1.002 1.2. into Eq.998262 2. For At = 0.002000 2.010 1. At large times. (7.19 clearly illustrates the effect of stiffness of the ODE the numerical on solution by the explicit Euler method. For At = 0.000000 2.000 0.866665 1.866665 1.010000 At = 0.009955 0.000000 1. although the errors are rather large. (7. (7.009955 0. In that case.006947 2.20.it is obvious that the stable step size is controlled by the rapid transient associated with the exponentialtenn.337) Substituting the derivative function. explicit methods unsuitable becauseof the are small stable step size.001.502500 -0.052500 1.007665 2.010000 1. the early transient solution is of no interest.985684 2.0100 1.003000 2.0100 1.2. If an accurate small time solution is required.953213 1.404 Chapter7 The stability limit is At < 2/~ = 0. y(t) = t -t.0050 0.008 0.20.0005 0.002000 1.633121 1. f (t. the solution is a reasonable approximation of the exact solution.333) by the implicit Euler method.

01 tn 0. ’.050000 2.070000 2.y) linear in y~+l.339) Consider four different step sizes: 0.10 Yn 1.".06 0. and 0.049994 2. Consequently.03 0.07 0.000000 1.[] .009955 2. 1.09 0.18.000000 2. Solution of the Stiff At = 0.030392 2. However.18 and Figure 7..1 ~n 2.05 Y.919091 2.1 1. ’. the entire early time transient is completely lost for large values of At. . The implicit Euler method is unconditionally stable.05 0.. (7.070000 2.10.059999 2.090099 2. However.0005 At = 0.0020 At = 0. As illustrated in Table 7.01 0. 0.050000 2.01.039932 2. and the results for At = 0. there is no limit on the stable step size.00 0.05 0.030000 2.000000 2.25 Solution of the stiff ODE the explicit Euler method. sincef(t.+~ + 2) At + At) (7. .p.060000 2.05.029249 2.090000 2.100000 tn 0. The results for the three small time steps are tabulated in Table 7.080000 2.080000 2.One-Dimensional Initial-Value o ¯ [] ¯ -At = 0. If the early time transient is of interest. In fact.10 ODEby the Implicit Euler Method At = 0.0 0.02 0. by Equation (7. 0.’ .08 0.100000 .as the step size is increased.100000 ~n 2.099616 At =0. ’.+1.26. the accuracy of the early time transient due to the exponential term suffers.338) is implicit in Y.0.18.020000 2. a. the large time solution is predicted quite accurately. Eq.even in those cases.001 Time t Figure 7.090000 2. the solutions are all stable.338) Y"+~ = 1 + 1000 At (y" + 1000(t.0010 At = 0.00 0.5 are illustrated in Figure 7.5.04 0. then a small step size is required for Table 7.0025 Exact Ordinary Differential Equations 405 ¯ . However. and it can be rearranged to give: is linear in y.26..100000 0.011736 2.040000 2.

Stability and stiffness are related to the eigenvalues. by accuracy. 2 .z) (7. employed reduce the computational effort.343) (7.340) can be expressed ~’ = A~ + F (7.342) A system of coupled linear ODEs be uncoupled to yield a system of uncoupled can linear ODEs. For stability. I(Xil ~ 1 (i = l . Eq..Eq. n).26 Solution of the stiff ODE the implicit Euler method.340) For a system of coupled linear ODEs.345) (7...Fn].. n).~2 .406 Chapter7 ¯ O0 1 2 Implicit Euler 3 4 5 Time t Figure 7.313): ~ =~(t. (7... u..14. ~n) (i = 1.].~i (i = 1 ... and Fr= [F1F2.2 Systems of First-Order ODEs Consider the general systemof first-order ODEs discussed in Section 7.346) . u.. If only the large time solution is of interest. v) These two ODEs can be uncoupled to yield y’ =f(t.. of the matrix A...MaxlRe(~i)l Min[Re(c~i)[ (7. The to stiffness ratio is definedas: Stiffness ratio-..y) (7. to then implicit methodscan be 7..~. A is an n x n matrix.. (7.. n) (7.13. .341) where yz= [~1~2 ..~1. v) v’ = G(t.Consider the system of two coupled linear ODEs: u’ =F(t..344) z’ =g(t. Asystemof ODEs stiff if at least one eigenvaluehas a large is negative real part which causes the corresponding componentof the solution to vary rapidly compared the typical scale of variation displayed by the rest of the solution..

7. ~ = 1 and ~2 = 1000.347) and (7.14. Section 3. Theexac solu tions of E qs. is solved by an explicit method.7. In such cases. whenthe derivative function is nonlinear. whereC is a constant of order unity.350) (7. respectively.001C. (7. When even the most rapid transient component the solution is of of interest.001C (7. Consideran explicit finite difference methodfor whichthe stability limit is a At < C.343) and (7.347) and (7. However. However. whenthe effects of the rapid transients have decayed to insignificant levels. small time steps are required for accuracyas well as stability.347) (7.10 . whenthe system of ODEs cannot be uncoupled.344).(7. as in the previous paragraph.347) and (7. (7. When system of ODEs be uncoupled. The system of nonlinear implicit FDEscan be solved by Newton’smethod. For Eqs.348). the problemof stiffness of the systembecomescritical. that is. nonlinear implicit FDEs result. time (7. At 2 _< 0 . Thus. during whichtime the function y(t) has changed only slightly. small time steps must still be Table 7. (7.348) which corresponds to some coupled system of ODEs terms of the variables u and v.348) are: y(t) = e-’ z(t) -1000/ =e (7. implicit finite difference methodsare useful.3 Higher-Order Implicit Methods The problemsof stiffness illustrated in the previous sections occur for both single ODEs and systems of ODEs. C At 1 _< -]C = C and At2 -< 100~ = 0.348). Smalltime steps must still be taken in the solution for y(t) becauseof the stability limit associated withz(t). whichare equivalent an uncoupled system of ODEs such as Eqs. y(0) z(0) = 1 = 1 407 (7. C=I. and explicit finite difference methodscan be used to generate the solution. the common step must be the smaller of the two values correspondingto Eqs. However.349) If a coupled systemof equations such as Eqs. each ODE a can can be solved by a methodappropriate to its ownpeculiarities. Consider the uncoupled system of ODEs: y’= -y.348).One-Dimensional Initial-Value Ordinary Differential Equations where y and z are functions of u and v.347) and t (7. z’= -1000z.347) and (7.19 Coefficients for the Gear FDEs k 1 2 3 4 5 6 7 1 1/3 1/11 1/25 1/137 1/147 /~ 1 2 6 12 60 60 so 1 4 18 48 300 360 ~-~ -1 -9 -36 -300 -450 ~-2 ~-3 ~-4 2 16 200 400 -3 -75 -225 12 72 .348) can be analyzedseparately for stability. in Equations(7.351) The function z(t) decaysto a negligible value after a few time steps.

353) (7. /3. The fourth-order Runge-Kutta method The extrapolated modified midpoint method The fourth-order Adams-Bashforth-Moultonmethod The basic computational algoritlmas are presented as completely self-contained subroutines suitable for use in other programs. + Ay3) (7.354b) . especially the family of Gear methods. (7. 7. The implicit trapezoid FDE. stability limits of these FDEs quite restrictive whenapplied to the are stiff ODEs.Eq. (7. + ~ .) 1 Ay3 = hf t. However. Higher-ordermethodsare desirable. is also unconditionally stable...y.Eq. +-~.-I + ~-2Y. 2 . 2.However.73).15 PROGRAMS Three FORTRAN subroutines for integrating initial-value ordinary differential equations are presented in this section: 1. The Gear FDEshave been incorporated into a FORTRAN package named LSODE. The Gear formulas are presented below: l Y"+! = Y(/3hfn+l + (0~0yn + ~-lY. (7.352) where k denotes the global order of the FDEand the coefficients 7. that case. not accuracyrequirements. but it is only second-orderaccurate.408 Chapter7 employed to the stability limit..Implicit methods. Gear (1971) has devised a series of implicit FDEsthat has muchlarger stability limits.y. for 7.Ay4) 3 Ay = hf(t. + 2 Ay4 = hf(t.14.Explicit methodsare generally unsuitable are for solving stiff ODEs. and ~i (i = 0.141). which was developed at the LawrenceLivermore National Laboratory.-2 +’" ")) I (7.15.180): Yn+l =" Yn + ~ (AYl+ 2 Ay2+ 2 Ay q. 3.179) and (7. ) are presentedin Table7. Any of the Adams-Moulton FDEscan be used to devise a higher-order implicit method.4 Summary Stiff ODEs especially challenging problems.1 The Fourth-Order Runge-Kutta Method The general algorithm for the fourth-order Runge-Kuttamethodis given by Eqs.354a) (7.. implicit finite due In difference methodscan be used to take larger time steps.Yn + Ay = hf t. are recommended solving stiff ODEs. The implicit Euler FDE. This package has an elaborate error control procedure based on using various-order Gear FDEs in conjunction with step size halving and doubling. 1.19. is unconditionally stable. is only it first-order accurate.y. Input data and output statements are contained in a main(or driver) programwritten specifically to illustrate the use of each subroutine. 7. + h.

8) end . c c c c c c z 7 ~ 7 lO00 i010 2020 1030 program main main program to illustrate ODE solvers ndim array dimension. nmax-i call rk (ndim. defines the data set and prints it. n. y(n) +dy2/2. is presented in Program 7. t. yp(n) intermediate output flag: 0 no.4x. 1.llx.O) write (6. .dt. y(ndim) dyl=dt*f (t (n) .8. ’tn’. i. for implementing the fourth-order Rungeand (7.l) write (6.One-Dimensional Initial-Value A FORTRAN subroutine. y (n) +dyl/2 * O dy3=dt*f ( t (n) +dr~2. t(n+l).0*(dy2 +dy3 ) +dy4 ) t (n+l) =t (n) if (iw. y ( n ) +dy3 t y(n+l) =y(n)+ (dyl +2.9x.y(n) dy2=dt f ( t (n) +dr~2. n. eq.1020) write (6. dr.yp(101) data ndim. Program main and prints function. ’dy3’. tO initial value of y. t(1).O. yO y(1) time step dt dimension t(lOl) . nmax.0 data y(1) / 2500. dt / 101.llx.353) Ordinary Differential Equations 409 Kutta subroutine rk.y(lOl) . 1 yes iw t(1) initial value of t. iw) implements the fourth-order Runge-Kutta method dimension t (ndim) .0 write (6.0 / data t(1) / 0.y(1) do n=l.dy2. format (’ ’) format (18x.y.y(n+l) end do stop format (’ Fourth-order Runge-Kutta’/" ’/’ n’. specifies the derivative y’ =f(t.3f14. y(n) yp derivative function array. (7.2flS. method.dy4 return i000 format (llx. iw.1000) if (iw. i. eq. ndim = i01 in this example number of integration steps nmax t independent variable array.dy3. eq. t (n) y dependent variable array.354). iw) write (6. A FORTRANfunction.f8. function f. Program 7. "dy4’/" ’) format (i3. t.1. ii.1030) n.3.l) write (6. the solution.y-n’ subroutine rk (ndim.1030) n+l.1010) if (iw. ’dyl’. calls subroutine rk to implement the solution. dy4=d * f ( t ( n ) +dt . "dy2". Eqs.1 The fourth-order Runge-Kutta method program. y.8) end . y). n.llx. f15.1000) dyl.

. + At.12386006 2. and prints the solution. .355a) (7.2.26311433 -43.1.00000000 -156.04261410 1798.23437500 -137.355c) (7.98428393 -41.000 9.000 2360.09419793 -46. .357) A FORTRAN subroutine.89190534 -39.60150356 -139.000 10. ZM)] = IV = MAV -~ MAV .1 Solution by the fourth-order Runge-Kuttamethod Fourth-order n tn Runge-Kutta yn dyl dy2 dy3 dy4 1 2 3 9 10 ii 0. Program maindefines the data set and prints it.15.82956337 -124.1 2.. y~+~ ½ [z~t_~ + ZM+ hf(t.21187310 7. (7.11.I M) (7.78299210 -39.1. Onlythe statements in programmainwhich are different from the statements in programmainin Section 7.1 are presented.24070754 -111.15. zo) z i = zi_ 2 + 2hf(tn + (i 1)h.210) to (7.) (i = 2 . subroutine midpt.89837091 -43. .22758336 -41.LAV = 2.y) derivative function alpha=4.0e-12 f=-alpha*(y**4-250.2 The Extrapolated Modified Midpoint Method The general algorithm for the extrapolated modified midpoint methodis given by Eqs. except the extrapolated modified midpoint methodis used instead of the fourth-order Runge-Kutta method.89627716 -102.356) (7. calls subroutine midpt to implementthe extrapolated modified midpoint method.0**4) return end Chapter7 The data set used to illustrate subroutine rk is taken from Example 7..15. .212): z0 = Yn z~ = zo + hf(t.000 1842.66970570 -112.000 2248. zi _.LAV 2"MAV .73128134 -124.24680781 8.000 2500.355b) (7. Subroutine midpt works essentially like subroutine rk discussed in Section 7. for implementing the extrapolated modified midpoint methodis presented in Program 7.80727846 -38.12267500 1. The output generated by the fourth-order Runge-Kuttaprogramis presented in Output 7. Output 7.410 function f(t.80963138 1758.

j=l. kmax c=2.w(4.kmax+l-k) end do end if t (n+l) =t (n) y (n+l)=w (kmax. iw) implements the extrapolated modified midpoint method kmax number of segmentations.8. M. iw. 0 jmax=2 * jmax. f15. midpoint method’/’ ’/’ n’.f8. t.4x.l) then do k=l.6x.One-Dimensional Initial-Value Ordinary Differential Equations Program7.k)=y(n+l) for k=kmax z(1)=y(n) g(1)=f (t (n) . dt / 101.8) end subroutine midpt (ndim.z (~) =f end do w(l. iw) 1000 format (’ Extrapolated mod. nmax. I.2 The extrapolated modified midpoint method program 411 c program main main program to illustrate ODE solvers data ndim. j) ) end do end do if (iw. eq. "0(h*’4) ’.0 / call midpt (ndim. z(2) do j=3. I. 7x.z(33) kmax = 4 jmax=2 dtk=dt calculate w(l.y(ndim) . kmax dtk=dtk/2. ’yn. "O(h**6) ’.k). "tn’. kmax write (6. t.4) . f14.3f14. n. n. 2. within each interval dimension t(ndim) .1 z (2) =z (i) +dtk*g(1) g(2)=f(t(n)+dtk. n. 6. dt.dt. 7x. y(n) do k=l. jmax tj=t (n) +float (j-l) z (j) =z (j-2) +2. O*dtk*g(j-l) c c c segments g(j) (tj. k) =0.5 * (z (jmax-i ) +z (jmax)+dtk*g (jmax) end do c extrapolation do k=2.j+l) -w(k-l.0"* (2. O(h**2) ’. kmax+l-k w(k. "O(h**8) ’/’ ’) 1030 format (i3.f15o8. j) = (c*w(k-l.8) end . O’float (k-l)) do j=l. 1 5x.3. y.1000) (w(j. return 1000 format (llx.y.

00000000 2249.26241865 2. The output generated by the extrapolated modified midpoint methodprogramis presented in Output 7. and prints the solution. 61189807 8.358b) A FORTRAN subroutine. . y) deriva t i ve func t i on end Chapter7 c The data set used to illustrate subroutine midpt is taken from Example 7. 61210224 2074.26337496 1758.000 7. 61815984 4. 26188883 2248.33199904 1758.24740700 2248.412 function f (t. except the fourth-order Adams-Bashforth-Moulton methodis used instead of the fourth-order Runge-Kuttamethod. -k (9f~e+l + 19fn -. Subroutine abmworks essentially like subroutine rk discussed in Section 7. 000 1842. 30754055 2248. 000 2074. 26445688 1758.+I = Y.26338489 1758. 61500750 2074. (7.24731574 2248.15. Onlythe statements in the programmain which are different from the statements in programmainin Section 7.26337480 6 10.358a) (7. 15523693 2248.24 731430 2248.48522585 2248.9fn-3) Y. 000 2500. for implementing the procedure is presented in Program7.26770120 1758. 63690642 2074. 61189824 2074.3 The Fourth-Order Adams-Bashforth-Moulton Method The general algorithm for the fourth-order Adams-Bashforth-Moulton methodis given by Eq.1.26337480 1758.26337480 1758. midpoint n 1 tn yn.26337543 1758. 61190855 2074.15. 24731430 2075.59fn-~ + 37fn-z . O(h**2) method O(h**4) O(h**6) O(h**8) 2 3 5 0.26353381 1758. + ~(55fn .2 Solution by the extrapolated modified midpoint method Extrapolated rood. 61191099 2074. I are presented. 71131895 2074.28065012 1758.3. Output 7.2.5fn--1 +fn--2) (7. 09450797 1758. subroutine abm. 61189807 2074.236): h Y~n+~ =Y. 00025328 2074.12. Program main defines the data set and prints it.15. calls subroutine abmto implement solution. 000 2248.24831212 2248. 24737802 2248.

n.y. i.0 fpred=f ( t (n) +dt. dr.24731405. t.000 2. iw) the fourth-order Adams-Bashforth-Moul ton method dimension t (ndim) . 4 write (6. 2f15. 16753966 / do n=l.y(n). O*yp(n-3) )/24. O*yp 1 -9. 3.24731405 fn fPred -156. The Adams-Bashforth-Moulton program is presented in Output 7. "fn’) 1020 format (16x. eq.23437500 -124.23437500.1000) ypred. ’fPred’/" ’) end subroutine abm (ndim. subroutine abm is taken from Example 7. 18094603. yp. ’tn’.9x.lOx. ii. 1 2154.0.iw. *fpred+l9.4) / 2500.4) / -156.0. O*yp (n-i ) +37.0 data (y(n). yp (ndim) ypred=y(n) +dr * (55.l) write (6.yp(n) end do do n=4.14. 1. t. n.82998845. nmax-i call abm (ndim.4) / 0.y) derivative function end The data set used to illustrate output generated by the fourth-order Output 7. 1.82998845 2248. y (ndim) . t(n+l) .0.n=l. i.yp(n+l) end do 1000 format (’ Adams-B-M method’/" ’/’n’. 2. dt. *yp (n) -5.0 / data (t(n). iw) write (6.000 .3 The fourth-order Ordinary Differential Equations method program 413 Adams-Bashforth-Moulton program main main I~rogram to illustrate ODE solvers data ndim.One-Dimensional Initial-Value Program 7.1030) n+l.n=l. *yp (n-I) +yp (n-2) t (n+l) =t (n) yp(n+l)=f(t (n+l) .8) end function f(t.24079704 -102. n.3 Solution method by the fourth-order Adams-Bashforth-Moulton method Adams-B-M tn yn yPred 2500.5x. t(n).47079576 / data (yp(n).y(n+l) . "yn’.n=l. 1 -102. yp. -86.24079704.y(n+l) if (iw.00000000 2360. fpred return i000 format (llx.0.000 1. -124. 2248. dt / i01. ypred) y (n+l) =y (n) +dr * (9. 2360.nmax.y.18094603 1 2 3 0.3.13x.0 *yp (n) -59. "yPred’.1030) n.

such as IMSL.and developingfinite difference equations are discussed.18932752-38. 55075892 -74.47079576 -86.68816487 -64. 7. A procedure for determining stability criteria by analyzing the amplification factor G. Procedures for discretizing the continuous solution domain.000 2154. 000 2005. the bookNumerical Recipes [Press et al.000 2248.000 1758.74.Someof the more prominent packages are Matlab and Mathcad. the single-step exact solution of a finite difference equation.18094603 3. which works well for both smoothly-varying and nonsmoothly-varyingproblems.24731405 -102. Single-point methods 2.24833822 . and convergence are defined and discussed. 33468855 -64. Extrapolation methods 3. . Multipoint methods These methods can be used to solve single first-order ODEs. fourth-order methods are the best compromisebetween accuracy and simplicity for the single-point and multipoint methods. order.000 1798.66995205 9. representing exact derivatives by finite difference approximations. However. Manywork stations and main frame computers have such libraries attached to their operating systems.17350708 4.and systems of first-order ODEs.4 Packages for Integrating Initial-Value ODEs Numerous libraries and software packages are available for integrating initial-value ordinary differential equations.71557210 5. is presented. also contain algorithms for integrating initial-value ODEs. More sophisticated packages. MATHEMATICA. error estimation and error control can be expensive. stability. Three types of methodsfor solving initial-value ODEs presented: are 1.80233360 1758. The extrapolated modified midpoint methodcan be extended easily to higher order.15. and MAPLE.20717951 Chapter7 10 7.16753966 2075. Finally.414 3 4 5 2.higher-order ODEs. Many commercialsoftware packages contain algorithms for integrating initial-value ODEs. The concepts of consistency.21558333 -38. 000 2074. Finite difference equations of any order can be developed for all three types of methods. A procedure for investigating consistency and determining order by developing and analyzing a modified differential equation is presented.20946276 10. (1989)] contains numerous subroutines for integrating initial-value ordinary differential equations.07380486 2005. MACSYMA. so it maynot be the most efficient method.16 SUMMARY Methodsfor solving one-dimensional initial-value ordinary differential equations are presented in this chapter. Generally speaking. The fourth-order Runge-Kuttamethodis an excellent general purpose single-point method.14913834 -41.

One-Dimensional Initial-Value Ordinary Differential Equations 415 The extrapolated modified midpoint method is an excellent method for both smoothly varying problems and nonsmoothly varying problems. Describe how higher-order ODEs and systems of first-order ODEscan be solved using the proceduresfor solving a single first-order ODE 8. the RungeKutta methodgenerally behaves better. Define and discuss the concept of order 21. Define and discuss the concept of consistency 20. Derive and use the first-order explicit Euler method 17. Discretize a continuoussolution domaininto a discrete finite difference grid 13. Linearize a nonlinear first-order ODE 5.including the complimentary solution and the particular solution 3. the Runge-Kutta method may be more straightforward. for nonsmoothly varying problems. Error estimation and error control are straightforward and efficient. Multipoint methods.For nonlinear stiff ODEs. Explain the relative advantagesand disadvantagesof the explicit and implicit Euler methods 19. Explain howto develop an explicit FDEand an implicit FDE 15. the extrapolated modified midpoint methodis generally moreefficient than a multipoint method. Anyof the methodspresented in this chapter for solving initial-value ODEs be can used to solve any initial-value ODE. Explain the difference between a stable ODE an unstable ODE and 6. Althoughexpensive. It can give extremely accurate results.The Gear family of FDEsis recommended stiff ODEs. Explain the objective of a finite difference methodfor solving an ODE 11. Define and discuss the concept of convergence . After studying Chapter 7. Explain the concept of a family of solutions of an ODE howa particular and memberis chosen 7. Discuss the general features of the linear first-order ODE. Error estimation and error control are straightforward and efficient. Describe the effect of truncation error on the solution of an ODE 16. Even for smoothly varying problems. Explain the relationship between the solution of time-marchingpropagation problems and space marching propagation problems 9. choice of a methodfor a particular problem The dependson both the characteristics of the problem itself and the personal preference of the analyst. Discuss the general features of a nonlinear first-order ODE 4. you should be able to: 1.the resulting nonlinear for FDEsmust be solved iteratively.for nonsmoothlyvarying problems. However. Describe the general features of initial-value ordinary differential equations 2. Developa finite difference approximation an exact derivative by the Taylor of series approach 14. Defineand discuss the concept of stability 22. Stiff initial-value ODEs present an especially difficult problem. Describe the steps in the finite difference solution of an ODE 12. this procedure gives goodresults. Derive and use the first-order implicit Euler method 18. such as the fourth-order Adams-Bashforth-Moulton method. Explain and implementthe Taylor series method 10. work well for smoothly varying problems. usually by Newton’smethod. However.

Applytime linearization to solve a nonlinear implicit FDE 43. Derive and use the trapezoid and modified Euler methods 31. Describe and apply the Gear methodfor solving stiff ODEs 49. + b + ct + dt 2 ~(to) = . Derive the exact solution of the following ODE: ~’ = af.5). Derive an explicit multipoint method 38. G.1 Introduction 1. Developthe amplification factor. Derive and apply the extrapolated modified midpoint method 36. ODEs Derivethe exact solution of Eq. Chooseand implementa finite difference methodfor solving initial-value ODEs EXERCISE PROBLEMS 7. (7. Derive an implicit multipoint method method 39. for a FDE 26. to determinethe stability criterion for a FDE 27. Derive the exact solution of the radiation problempresented in Eq. Derive and apply error estimation. Explain the concepts underlying multipoint methods 37.1. Apply Newton’smethodto solve a nonlinear implicit FDE 44.3).3) for the times presented in Table 7. 4. error control. Reducea higher-order ODE a system of first-order ODEs to 45. Discuss the problemsarising in the solution of stiff ODEs 48. Explain the concept underlying extrapolation methods 35.2 General Features of Initial-Value 3. and extrapolation methods for multipoint methods 41. Explain the concept of a single-point method 29. Derive and apply error estimation and error control methodsfor single-point methods 34. G. (7. (7. Determinewhether or not a finite difference methodis convergent 28. Derive and apply the fourth-order Adams-Bashforth-Moulton 40. Derive and use the midpoint and modified midpoint methods 30. extrapolation methods.416 Chapter7 23. Analyze the MDE determine consistency and order of a FDE to 25. and multipoint methods 42. Solve a system of first-order ODEs 46. Apply the fourth-order Runge-Kutta method 33. Analyzethe amplification factor. 7. Derive the modified differential equation (MDE) corresponding to a FDE 24. Discuss the relative advantages and disadvantages of single-point methods. 2. Explain the concepts underlying Runge-Kutta methods 32. Explain the concept of a stiff ODE 47. Use the secant methodto solve Eq.

0.0. 0) = 1. Compare results with the results presented in Table 7. derive the following finite difference approximations (FDAs)of f~ = d~/dt. (7.0 to 5.0.0 and 10.~ y(O) = 7. y’ = t . UsingTaylor series.0 for ~ = 1. including the leading trtmcation error term: (a)y’ln+0(At).21) and identify in ~’ = ~3t q-t 2 ~(0) = 7. 0)= 1. Let ~(0. 9. Solve the following ODE the Taylor series methodincluding the fourth by derivative term for t = 0.0. 6.0 at intervals of At -.9 are concerned with solving initial-value ODEsby finite difference methods. ¯ Y’I.3 The Taylor Series Method Solve the exampleradiation problempresented in Section 7.+~ + 3yn ..0 to 10. Derive the exact solution of Eq. Compare results the with the exact solution.. Plot the solution from t = 0.0.13).21) and identify in ~. Those problems are concerned with the ODEs presentedbelow.1.0 to 10. (7.2. 11. 7. determinewhat derivative is represented by the following finite difference approximation(FDA) the leading truncation error term.4 The Finite Difference Method Finite DifferenceApproximations 12.7 to 7. Compare results the with the exact solution.1 by the Taylor series method including(a) the fifth derivative termand (b) the sixth derivative term.One-Dimensional Initial-Value Ordinary Differential Equations 417 5. Solve the following ODE the Taylor series methodincluding the fourth by derivative term for t = 0.12). (c)y’]n+0(At2). Using the Taylor series approach.0.0 and 10. Derive the exact solution of Eq.0 for ~ = 1. (7.6yn_ -’[-Yn-2 = ~ 6 At The problemsin Sections 7.5 and 7.0 at intervals of At = 1. and FDA 2y. = ~2 sin t + 10 ~(0) = Express the following ODE the linearized form of Eq. (7. Plot the’solution from t = 0. the 10. Let ~(0. (b)Y’ln+l +0(A0.along with their exact solutions.0 to 5. Carryat least six digits after the decimal .+~/2+ 0(At2) 13. Express the following ODE the linearized form of Eq.

0 to 1. 1.0 with At (D) and 0. 3(0 ep /3 3(0) =3o = 1.~. Solve ODE by the explicit Euler methodfrom t = 0.0 to 1. 3(t)=et(#o-1)+sint-cost 1 2. 20.0 to 1. ~(t)=(~o+l)et-t-1 (A) (B) (C) t 2 e-t 3’= e-t+y. Y’=~. #’=2cost+3.0 to 1.1. 15.2 = 0.4t4. 18. Solve ODE by the explicit Euler methodfrom t = 0. 7.1.5.1. #’=1+0. 19. ~(0)=~0= Y(0)=P0= 1.0 with At (E) and 0.2 = 0. 21. 16.2t + 4tz .2 = 0.1.t 4 -.0 with At (A) and 0. I.418 Chapter7 place in all calculations. ~(t)=~(t 2 + 2~ ~i~ (j) (K) ~(t)- t 2 _ 2/~ 0 t+~’ ~(0)=~°=1’ e@-~°)@o+l)-~-l=t Aninfinite variety of additional problemscan be obtained from Eqs.2 = 0. changingthe initial conditions.~(0) = . (A) to (K) (a) changingthe coefficients in the ODEs. ~(0)~o~0.1.0 with At (C) and 0.sint 3’ = 2sint+3. Solve ODE by the explicit Euler methodfrom t = 0. 3(0 = et(#o + 1) .2 with = 0. y(O)=Yo=O. For all problems.0 = 1.2 and 0. (d) changingthe range of integration.0 with At (G) and 0. (c) changing (b) the integration step size.compare errors for the two step sizes and the calculate the ratio of the errors at t = 1.~0 = ~(t) = ~0+ 2t . 17. .0 to 1.5# 3(0)=30=0. and (e) combinations of above changes.4t3 .0 to 1.t 3-. Solve ODE by the explicit Euler methodfrom t = 0.0 At -.0.0 to 1. 3(0)=3o=1. 35’ = 2 .0 to 1. y(t)=O30+½)e (D) (E) (F) (G) = 30 3’ = t~3.5 The First-Order Euler Methods TheExplicit Euler Method 14.* Solve ODE (H) by the explicit Euler method from t = 0.1.0 At = 0.2 and 0. Solve ODE by the explicit Euler methodfrom t = 0.1.* Solve ODE (B) by the explicit Euler method from t = 0. ~(t)=@o-1) ~’=tq-~. #(t)=~t~[~t+t~-l(~#o)] ~’=t~.2 +34.0 with At (F) and0.2 with . ~(0)=~o=1 .0. = 0.~ t 5 et+l ~’~1-. 3(0) = %. Solve ODE by the explicit Euler methodfrom t = 0.

Solve ODE (B) by the implicit Euler method from t = 0.0 with At = 0. 419 Solve ODE by the explicit Euler methodfrom t = 0.0 to 1. 37. 23.0 to 1. letting the mostaccurate solution (the solution for the smallest step size) an approximationof the exact solution. . Solve Problem 32 for the modified midpoint method.1. For step size halving. 28. Investigate consistency and order. is givenby Error(h) _ 2n Error(h/2) wheren is the order of the FDE. Solve ODE by the implicit Euler methodfrom t = 0. Solve Problem 32 for the modified Euler method.2 (I) and 0. 24. Consistencyand Order 32.2 (F) and 0. Solve ODE the implicit Euler methodfrom t = 0.6 Consistency. Solve ODE by the implicit Euler methodfrom t = 0.1.0 to 1.0 with At = 0. order can be estimated by solving the ODE numericallyfor three step sizes. Solve Problem 32 for the fourth-order Runge-Kuttamethod.0 with At = 0. and applying the procedure described above. Solve ODE by the implicit Euler methodfrom t = 0.0 to 1.2 (A) and 0. 30. Solve Problem 32 for the implicit midpoint method. Order.2 (D) and 0. the the ratio of the errors. 33.0 to 1.1. Solve ODE by the implicit Euler methodfrom t = 0.1.* 27. Solve ODE by the explicit Euler methodfrom t = 0.including the leading mmcationerror term. 38.2 and 0.2 (K) and 0.2 (C) and 0.1.0 to 1.1. Solve Problem 32 for the implicit trapezoid method.~’+ @= F(t). 29. each one being one-half the previous one. as At --~ 0.1. Solve ODE by the explicit Euler methodfrom t = 0. The Implicit Euler Method 25.2 (G) and 0. Develop the explicit Euler approximation of the linear first-order ODE. 36.1. The order of a finite difference equation is the order of the leading truncation error term. and Convergence 7. Order can be estimated numericallyby solving an ODE that has an exact solution for two different steps sizes and comparing ratio of the errors. Solve Problem 32 for the implicit Euler method.0 with At = 0. 31.1.0 with At = 0. Stability. .0 to 1.For ODEs that do not havean exact solution.0 to 1.0 with At = 0. 34.0 with At = 0.0 with At = 0.0 to 1. Derive the corresponding modified differential equation (MDE). 26.2 (E) and 0.One-Dimensional Initial-Value Ordinary Differential Equations 22.0 with At = 0. Solve ODE by the implicit Euler methodfrom t = 0.2 (J) and 0.1.0 with At = 0.0 to 1. 35.

The predictor and corrector FDEs must be combined into a single-step FDE. (b) n = 2. Perform a stability analysis of the explicit Euler FDE. 48. 61.Eq. (H) solved by the explicit Euler method.Eq. Solve ODE(C) by the modified midpointmethod. (H) solved by the Adams-BashforthMoulton method. 57. FDEs. Perform a stability analysis of the implicit trapezoid FDE.229).* Applythe above procedure to Eq. (F) 64. Perform a stability analysis of the modified Euler FDEs. Eq. Eq. Solve ODE by the modified midpoint method.* Solve ODE (B) by the modified midpoint method.121).180). Solve ODE (G) by the modified midpoint method.59). Solve ODE by the modified midpoint method. 40. 41. 44. (H) solved by the modified Euler method. (E) 63. (7.235). 53. 60. (7.252). The four-step FDEsmust be combinedinto a single-step FDE. (b) n = 2. (7. Eq.* Solve ODE by the modified midpoint method. The predictor and corrector FDEs FDE. Apply the above procedure to Eq. Perform a stability analysis of the implicit Euler FDE. Solve ODE by the modified midpoint method. (7. 46. Solve ODE by the modified midpoint method. Perform a stability analysis of the fourth-order Adams-Bashforth FDE. Stability 47. Eq. for (a) n = 1. (J) solved by the Adams-Bashforth-Moulton method. Perform a stability analysis of the implicit midpointFDE. (7. Solve ODE by the modified midpoint method.142) and (7.3 (see Table 51. 52. 7. (J) solved by the explicit Euler method. Perform a stability analysis of the nth-order Adams-Moulton (7. for (a) n = 1. Perform a stability analysis for the modifiedmidpointFDEs.Eq. 45. (7.Eqs. (D) 62. (J) . (H) solved by the fourth-order Runge-Kutta method. (H) 66. Applythe above procedure to Eq.179) and (7. (7. Applythe above procedure to Eq. and (c) n = 3 (see Table Perform a stability analysis of the fourth-order Adams-Moulton FDE.143). (7. 50. 43. (J) solved by the modified Euler method. Applythe above procedure to Eq. 55. 42. 49.250). Applythe above procedure to Eq. 56.7 Single-Point Methods Second-Order Single-Point Methods 58. (I) 67.420 Chapter7 39.73). Perform a stability analysis of the nth-order Adams-BashforthFDEs. (7. (7. Solve ODE (A) by the modified midpoint method. 65.Eq.141).* Applythe above procedure to Eq. and (c) n -. Eq.120) and must be combined into a single-step (7. 59. Applythe above procedure to Eq. Eq.119). (J) solved by the fourth-order Runge-Kutta method. Performa stability analysis of the fourth-order Runge-Kutta FDEs.

81. Derive the general third-order Runge-Kuttamethod: Yn+l= Yn + C~k~+ Czk2 -~. (F) 74. (7. Solve ODE(K) by the Runge-Kutta-Fehlberg method. (I) (J) 78. Eqs.179) and (7.202) (7.* Solve ODE by the modified Euler method. Solve ODE (K) by the modified midpoint method. (C) 72.yn+ Fourth-OrderRunge-Kutta Method 82. Derive the general second-order Runge-Kutta method. Solve ODE by the modified Euler method.204). Eqs. + k2:Atf (k~+ 3k~ k~) + 4 k~ = At f(t n. Solve ODE by the fourth-order Runge-Kutta method. Solve ODE by the fourth-order Rtmge-Kuttamethod. (F) 89. 71. Solve ODE by the modified Euler method. Runge-Kutta Methods with ErrorEstimation 94.C3k 3 Showthat one such methodis given by y~+~ = y. (G) (H) 90. Solve ODE by the fourth-order Runge-Kutta method. (7. Solve ODE by the fourth-order Runge-Kutta method. (I) 91.202) (7.180) comprise one such method. Solve ODE by the fourth-order Runge-Kutta method. (C) 86. Compare results with the results of Problem92. (7. (7. yn 2k2"~ --~ + ~-) tnq-~. (K) 93. Runge-Kutta Methods 421 80. Solve ODE by the modified Euler method.158) and (7. Solve ODE (A) by the modified Euler method. Showthat Eqs. (G) 76. Solve ODE by the fourth-order Runge-Kuttamethod. Eqs.One-Dimensional Initial-Value Ordinary Differential Equations 68. (J) 92. Solve ODE by the fourth-order Runge-Kutta method. (E) 73. Derive the general fourth-order Runge-Kutta method. (D) 87. 75.* Solve ODE (B) by the modified Euler method. 83. Evaluate the the error at each step.204). Compare results with the results of Problem93.169). Solve ODE by the modified Euler method.* Solve ODE by the fourth-order Runge-Kutta method. Solve ODE by the modified Euler method. Solve ODE by the fourth-order Runge-Kutta method. (E) 88. (H) 77. Solve ODE by the modified Euler method. Evaluate the the error at each step. Solve ODE by the fourth-order Runge-Kutta method. (A) 84. (B) 85.* Solve ODE (J) by the Runge-Kutta-Fehlberg method. 95. 69. (D) Solve ODE by the modified Euler method.* Solve ODE by the fourth-order Runge-Kutta method. 70. (K) 79. Solve ODE by the modified Euler method. Yn) k3 = At f tn +2 At. .

4. k2 = Atf(t n + ½At.2 and M=2. k5 = Atf(t. 4. +½At. (C) by the extrapolated modifiedmidpoint methodfor At = 0.2 and M= 2.ks). +At. (7.250). 4. 7. k3 =Atf(t. 4. 8. + ½k~ 3 --~ 97. (7.2. (G) by the extrapolated modified midpoint methodfor At --. 4. 8. and 16. (K) by the extrapolated modified midpoint methodfor At = 0.2 and M= 2. Derive the fourth-order Adams-Bashforth FDE. 8. (D) by the extrapolated modified midpoint methodfor At = 0. (J) by the extrapolated modified midpoint methodfor At = 0. yn + ½k~). 4.2 and M= 2. Solve Eq.* Solve Eq. (H) by the extrapolated modifiedmidpointmethodfor At ----. 105. 8. and 16.9 Multipoint Methods 110. (I) by the extrapolated modifiedmidpointmethodfor At ---= 0. +~k~ q. with the leading truncation error term. Solve Eq. n = 2. 107. + ~(kl + 4k4 + ks). Solve Eq. and 16. 98. k~ = At f(t~. (E) by the extrapolated modified midpoint methodfor At = 0. 8. 4. 106. Eq. 4.2 and M--. Comparethe results with the results of Problems92 and 94. with the leading truncation error term. 109.8 Extrapolation Methods 99.0. 8. SolveEq. Solve Eq. Solve ODE (K) by the Runge-Kutta-Merson method. (B) by the extrapolated modified midpoint methodfor At = 0. .2 and M= 2. (7.Eq. 101. y. and 16. and 16. 8. The Runge-Kutta-Merson method is as follows: Chapter7 Y. for (a) n = 1. + ~ At. 111. Derive the fourth-order Adams-Moulton FDE. 100.2 and M= 2. 4. and 16. Solve Eq. and 16. 8.2 and M= 2. Solve Eq.422 96. 7.229).9k3 + 8k4 . (A) by the extrapolated modified midpoint methodfor At = 0.+I = Y. and 16. + ~1kl "+ 1 k2) ~ k 4 ~. y. 108. Solve Eq.2 and M= 2.0. and 16. 4.* Solve Eq. 8. and 16. 4. Derive the nth-order Adams-Bashforth FDEs. (F) by the extrapolated modified midpoint methodfor At = 0.Atf(t. and 16.235).~k3) . Solve Eq. 104.y. 8. 8.2 and M=2. and (c) n = 112. 3 -b2k4) Solve ODE by the Runge-Kutta-Merson (J) method. Comparethe results with the results of Problems93 and 95. Error = ~(2k1 .Eq.2 and M=2. 102. 103.

(I) method.* Solve ODE by the fourth-order Adams-Bashforth-Moulton (H) method with mopup. 125. Solve ODE (F) by the fourth-order Adams-Bashforth-Moulton method with mopup. 133. Solve ODE by the fourth-order Adams-Bashforth-Moulton (F) method. Solve ODE by the fourth-order Adams-Bashforth-Moulton (G) method.257) and (7. Adams-Bashforth-Moulton method. (A) 116.* Solve ODE by the fourth-order Adams-Bashforth-Moulton (H) method. use the fourth-order Runge-Kutta methodto obtain starting values.252). 131. Solve ODE mop up. 129. Solve ODE by the fourth-order Adams-Bashforth-Moulton (J) method. 122. Adams-Bashforth-Moulton method. 136. Solve ODE (A) by the fourth-order Adams-Bashforth-Moultonmethod with mop up. (7. 135. Solve ODE by the fourth-order Adams-Bashforth-Moulton (D) method. 115. Solve ODE (D) by the fourth-order Adams-Bashforth-Moulton method with mop up. (C) method. Solve ODE by the fourth-order Adams-Bashforth-Moulton (K) method with mopup. Compare results with the results for the corresponding ODEs Problems115 the in to 125. 118. 139. use the exact solution for starting values. Eqs. 119.* Solve ODE (B) by the fourth-order Adams-Bashforth-Moultonmethod. (7. In Problemst 37 to 147. Solve Solve Solve Solve ODE (A) ODE (B) ODE (C) ODE (D) by by by by the the the the fourth-order fourth-order fourth-order fourth-order Adams-Bashforth-Moulton method. 132. 117. Adams-Bashforth-Moultonmethod. Solve ODE by the fourth-order Adams-Bashforth-Moulton (G) method with mopup. 140. 138. (E) method with 130. . Solve ODE by the fourth-order Adams-Bashforth-Moulton (K) method. Solve ODE by the fourth-order Adams-Bashforth-Moultonmethod. and (c) n = 114. Solve ODE (B) by the fourth-order Adams-Bashforth-Moulton method with mop up.One-Dimensional Initial-Value Ordinary Differential Equations 423 113. 127. Developthe error estimation formulas for the fourth-order Adams-BashforthMoultonmethod. Solve ODE by the fourth-order Adams-Bashforth-Moulton (E) method. Solve ODE by the fourth-order Adams-Bashforth-Moulton 120. 121. (C) by the fourth-order Adams-Bashforth-Moulton method with 128. 126. 134. Solve ODE by the fourth-order Adams-Bashforth-Moulton (I) method with mopup. Compare the results in Problems126 to 136 with the results for the corresponding ODEs Problems in 115 to 125. In Problems115 to 136. for (a) n = 1.258). n = 2. Solve ODE by the fourth-order Adams-Bashforth-Moulton (J) method with mop up. 137. Solve ODE by the fourth-order Adams-Bashforth-Moulton 124.Eq. Solve ODE by the fourth-order Adams-Bashforth-Moulton mopup. Derive the nth-order Adams-Moulton FDEs. 123.

170. Euler methodusing Newton’smethod. and 145. Solve ODE(H) by the fourth-order Adams-Bashforth-Moulton method. (A) Solve ODE by the implicit Euler methodusing time linearization. the 149. Euler methodusing Newton’smethod. 147. 145. Euler methodusing Newton’smethod. 172. 156. Solve ODE mopup using the fourth-order Runge-Kutta methodto obtain starting values. 144.11 Nonlinear Implicit Finite Difference Equations TimeLinearization 152. 169. 134. Solve Solve Solve Solve Solve Solve Solve Solve Solve Solve Solve . 157. Compare results with the results of Problems124. Newton’sMethod 163. and 144. Solve ODE(J) by the fourth-order Adams-Bashforth-Moulton method. Solve ODE(F) by the implicit Euler methodusing time linearization. Euler methodusing Newton’smethod. 155. and 146. Solve ODE (I) by the fourth-order Adams-Bashforth-Moultonmethod with mopup using the fourth-order Runge-Kutta methodto obtain starting values. 148. 166. Compare results with the results of Problems123. 173. the 7. Solve ODE mopup using the fourth-order Runge-Kutta methodto obtain starting values. Compare results with the results of Problems125. Solve ODE (H) by the fourth-order Adams-Bashforth-Moultonmethod with mopup using the fourth-order Runge-Kutta methodto obtain starting values. 160. 164.* 154. Euler methodusing Newton’smethod. Euler methodusing Newton’smethod. Solve ODE(E) by the fourth-order Adams-Bashforth-Moulton Solve ODE(F) by the fourth-order Adams-Bashforth-Moulton method. 168. and 147. 162. 146. the (K) by the fourth-order Adams-Bashforth-Moultonmethod with 151. 158. (B) Solve ODE by the implicit Euler methodusing time linearization. the (J) by the fourth-order Adams-Bashforth-Moultonmethod with 150. ODE(A) ODE(B) ODE(C) ODE(D) ODE(E) ODE(F) ODE(G) ODE(H) ODE(I) ODE(J) ODE(K) by by by by by by by by by by by the the the the the the the the the the the implicit implicit implicit implicit implicit implicit implicit implicit implicit implicit implicit Euler methodusing Newton’smethod. 136. 159. Solve ODE(J) by the implicit Euler methodusing time linearization. (C) Solve ODE(D) by the implicit Euler methodusing time linearization.424 141. Solve ODE(I) by the implicit Euler methodusing time linearization. Solve ODE(E) by the implicit Euler methodusing time linearization. 161. 142. Compare results with the results of Problems122. Euler methodusing Newton’smethod. Solve ODE(K) by the fourth-order Adams-Bashforth-Moulton method. 165. 167. Euler methodusing Newton’smethod. Solve ODE(H) by the implicit Euler methodusing time linearization. Solve ODE(I) by the fourth-order Adams-Bashforth-Moulton method. 133. Chapter7 method. Euler methodusing Newton’smethod. 171. Solve ODE(G) by the implicit Euler methodusing time linearization. Solve ODE by the implicit Euler methodusing time linearization. Solve ODE(G) by the fourth-order Adams-Bashforth-Moulton method. Euler methodusing Newton’smethod. Solve ODE(K) by the implicit Euler methodusing time linearization. 153. 143. 135.

Reducethe following ODE a pair of first-order to ay" + by’ + cy = d y(O) = Yo and y’(O) = 175. to 180. 0 = tan-~(v/u). ThegoverningODEs the position. Reducethe following ODE a set of first-order to ay" + by" + cy’ + dy = e y(O) = Yo. x(t) and y(t).One-Dimensional Initial-Value Ordinary Differential Equations 7. and g is the acceleration of gravity. Reducethis pair of coupled second-order ODEs a set of four first-order ODEs. Reduce this second-orderODE a pair of first-order to ODEs. to 181. The governing equation for the displacement y(t) of a projectile vertically upwardis my" + C[ VI V = -mg y(O) = O. y’(O) = Y’o and y"(O) 177. The ODE governingthe charge q(t) in a series L (inductance). Reducethe following pair of ODEs a set of four first-order ODES: to ay" + by’ + cz’ + dy + ez = F(t) y(O) = andy’(0 ) = y~ Az" + By’ + Cz’ + Dy + Ez = G(t) z(O) o andz’(O) = Z~ 178.q(O) = qo and q’(O) Reducethis second-order ODE two coupled first-order ODEs. C is a drag parameter. 1 dV(t) Lq’t + Rq + ~ q = d----~-. is a d rag perameter. to 179. The ODE governing the displacement x(t) of a mass-damper-springsystem is rex" + Cx’ + Kx = F(t) x(0) = 0 and x’(0) = Reducethis second-order ODE a pair of coupled first-order ODEs. R (resistance). and C (capacitance) circuit . Reducethe following ODE a pair of first-order to ay" + bly’ly’ ODES: + cy = F(t) y(0) =Y0and y’(0) ODES: 176.12 Higher-Order Ordinary Differential Equations ODES: 425 174. y’(O) = V(O) o whereV = dy/dt is the projectile velocity. v = dy/dt. y’(O) = v(O) o sin c~ where V = (u 2 + v2)1/2. u = dx/dt. 182.and g is the acceleration of gravity. The angular displacementO(t) of a frictionless pendulum governedby the is ODE 0" + ~ sin 0 = 0 0(0) = 00 and 0’(0) 0~ shot Reducethis second-order ODE a pair of first-order ODEs. x"(0) = u(0) = V0 y(O) = O. of a projectile shot at an for angle e with respect to the horizontal are rex" + CI VI g cos 0 = 0. my" + CIVIVsinO = -mg x(0) = 0. to .

~(0) -= (P) (P) 198.0 with At = 0. ~’ = 2f~ + ~ + et + 1 + t.f’(0)= 0. ~(0) (L) (L) 186.0 to 1. Solve ODE by the modified Euler method. ~(0) = 1 and ~’ = ~ + ~ + 1. . Solve ODE by the modified Euler method.d2f d~/~-~ mJ~-g~2 0 f(0) = 1. Solve ODE by the fourth-order Runge-Kutta method.2 and 0. ~’ = ~. (N) 193. 187.5~ + 1 + t + et. Solve ODE by the modified Euler method. to = 0. f~ = 2f~ + ~ + et. ~(0) = 1 and ~’ = ~ + ~ + t. ~’ = 2~ + ~ + t. ~(0) = 1 and ~’ = -6.2 and 0. Solve ODE (N) by the fourth-order Runge-Kutta method.1.1. ~(0) (N) 192.. 190. ~’ = 2p + ~ + 1.0 with At = 2.0 with At = 0.13 Systems First-Order Ordinary Differential of Equations 185.p .426 Chapter7 183. Solve ODE by the modified Euler method. 194. :P(0) = 1 and ~’ = :P + ~ + 1. The governing equation for a laminar mixing layer is d3f+f-~+(df~ 2= 0. Solve the following pair of initial-value ODEs the explicit Euler method by from t = 0. ~(0) = 1 and ~’ = -4.0 with At = 0.1. to 184. ~(0) (M) 189. ~(0) = 1 (Q) .25~ .0 to 1. andf’(t/) 7.0 f(0) as q-+ o~ Reducethis ODE a set of three first-order ODEs. (L) 188. Solve ODE (M) by the modified Euler method. Solve ODE by the fourth-order Runge-Kutta method. Solve the following pair of initial-value ODEs the explicit Euler method by from t = 0. Solve ODE by the fourth-order Runge-Kutta method. ~’ = ~.2 and 0. (O) 197.0 with At = 0.1. Solve ODE by the fourth-order Runge-Kutta method. ~(0) (O) 195.1. Solve the following pair of initial-value ODEs the explicit Euler method from t = 0. (O) 196.1. The governing equation for the laminar boundarylayer over a flat plate is d3f .2 and 0.4~ + 1 + t + 2et. Solve the following pair of initial-value ODEs the explicit Euler method by from t = 0. f’(0) = 0. Solve the following pair of initial-value ODEs the explicit Euler method by from t = 0.0 and 0.2 and 0. (M) by 191.2 to 1. ~(0) = 0 and ~’ = ) + ~ + t. andf’(t/) --~ 1 as n --Reducethis ODE a set of three first-order ODEs.0 to 1.0 with At = 0.0 to 1. by 200. Solve the following pair of intial-value ODEs the explicit Euler method from t = 0. (P) 199.0 to 1.

0025.002. 208. 0.0 to 0. (T) by the implicit trapezoid methodfrom t = 0.0 to 0.1 0.2 and 0.0 to 0.2. (T) by the second-order Gear method from t = 0. 218.1~ + O. Use the exact solution for starting values.01. 0. and third-order Gear methodsto obtain starting values.001.One-Dimensional Initial-Value Ordinary Differential Equations 427 201.0 for e = 0. Solve Eq.programsshould be written to solve these problems.02.01 with At = 0. Solve Eq. 0.0 to 1. (S) by the implicit trapezoid methodfrom t = 0.25. (R) (R) 7. Solve the second-order ODE ~" + ~’ + ~ = 1 ~(0) = 0 and ~’(0) (T) 214. 0.001. 217. (S) by the fourth-order Gear methodfrom t = 0. 215. and 0.01 and 0. 0.0025.025.1.0 to 1. Solve ODE by the modified Euler method. ](0) 204.01 using the first-.0 to 0. and 0. (S) by the implicit Euler method from t = 0.1 and 0. Solve ODE (S) by the explicit Euler method from t = 0. Solve Eq. Use the first-order Gear methodfor starting values.0 with At = 0. 0.25.1.01 with At = 0.1 with At = 0. and 0. Solve Eq. Compare solution with the exact the solution.0 with At = 0. Solve Eq.~’ --. (T) by the first-order Gear method from t = 0. (S) by the fourth-order Gear methodfrom t = 0. 209. . and 0. Solve Eq.2.1 with At = 0.0 to 1. (R) Solve ODE by the fourth-order Runge-Kutta method. 216.0 to 1. 210.1.01. second-.0005. Solve Eq. 0. 212.0 with At = 0. Solve Eq.1 with At = 0. 206.05.02. (T) by the implicit Euler methodfrom t = 0. (S) by the modified Euler method from t = 0. Solve the following pair of initial-value ODEs the explicit Euler method by from t = 0.0005.1. and 0.0 to 1. (Q) 203. (T) by the modified Euler method from t = 0.2 and 0. and 0.002.05. Solve Eq.01 with At = 0.1 with At = 0.0 to 0.0 to 1. by the explicit Euler methodfrom t = 0.1 with At = 0.0 with At = 0.14 Stiff Ordinary Differential Equations The following problemsinvolving stiffODEs require small step sizes and large numbers of steps.025. 205. ~(0) = 1 and ~’ = 0.02.lfl. Consider the model stiff ODE: y’ = -1000[y . . (Q) 202.0. Use the exact solution for starting values. 0. (S) by the second-order Gear methodfrom t = 0.(t + 2)] + 1 y(0) (S) 207. 213.1. Consequently.0 to 0.0 with At = 0.0 to 0. and 0.0 to 1. 211.02. Solve ODE by the fourth-order Runge-Kutta method.01.0 with At = 0. 0. Solve Eq. Solve Eq.01 and 0.01.1~2 + 1. Solve ODE by the modified Euler method.

0. and third-order Gear methodsto obtain starting values. (T) by the fourth-order Gear method from t = 0.9’ = -~.25.9’ = 998. Consider the coupled pair of ODEs: . and 0. 0. 235. with At = 0. 232. Solve Eq.2.0.0 to 1.0 by the fourth-order Gear method with At = 0. and 25.9’ = -~ + 0. .2.0 to 1.025.0 to 1.0 with At = 0.0.01.9 + 1998~. 224.02.001.0. ~(0) (U) 220. and 0.0.1 using the first-. (V) by the implicit trapezoid methodfrom t = 0. 221. second-.1.0025. (U) by the first-order Gear method from t = 0. 226.0 to 0.0.0 with At = 10.001. (V) by the fourth-order Gear method from t = 0.1 with At = 0.0 to 100.01 and 0.0 with At = 0. Solve Eq. (W) by the implicit trapezoid methodfrom t = 0. 231.0 with At = 0. 2. Solve Eq. Chapter7 Solve Eq. Solve Eq. and 0. . 222.002.1.1 with At = 0. Solve these equations by the explicit Euler methodfrom t = 0.0 to 0. ~(0) (W) 228. and 0. Solve Eq. Let At = 1. Solve Eq.0025.025. 230.0 to 1. (U) by the implicit trapezoid methodfrom t = 0. Use the first-order Gear methodfor starting values.1 and 0.0 to 100.1 At = 0.9 .999~.002.2. Solve Eq. (W) by the implicit Euler method from t = 0.0: . 0.9(0) = 1 and ~’ = -100.9(0) = 1 and ~’ = 0.01 using the first-.1 and 0.02. 0.0. second-.9(0) = 1 and ~’ = -999.02. (U) by the modified Euler method from t = 0. (V) by the implicit Euler method from t = 0. Consider the pair of ODEs . Solve Eq. Use the first-.1. and 25.25. and 0. 20. Solve Eq.0 to 1.0 to 0.1 with At = 0.2. and 0.0 to 0.0 with At = 0. 227.0.0 with At = 10. Solve Eq. . 236.2. 0.1 and 0. and third-order Gear methodsto obtain starting values.5. Solve Eq. (U) by the second-order Gear methodfrom t = 0.1 with At = 0.0 to 0. and 2. 0. ~(0) = 1 Solve these two equations by the explicit Euler methodfrom t = 0. and third-order Gear methodsto obtain starting values.25. 225.9. Solve Eq. Assumethat these two equations must be solved simultaneously with the same At as part of a larger problem. and 0.0 to 1. 0.1 with At = 0. 229. 223. (V) by the second-order Gear method from t = 0.01. Solve Eq. . Solve Eq.1 with At = 0. 0.0 with At = 0.1.0 to 1.0.01 and 0.0.0 to 1. and 0. (U) by the implicit Euler methodfrom t = 0.1999~.0 to 0. Let At = 0. (U) from t = 0.2.0 to 100. Use the first-order Gear methodfor starting values.0 to 0. 234.02. (V) by the modified Euler method from t = 0. 233.001~.25.428 219. (V) by the first-order Gear methodfrom t = 0. 20. Solve the following coupled pair of ODEs the explicit Euler methodfrom by t = 0. second-.

0 to 100.0 with At = 10. 238.0 and 20. 239.0 to 100. 253.0 to 100.One-Dimensional Initial-Value Ordinary Differential Equations 237. and 2. (W) by the modified Euler method from t = 0.0 to 100. 255.0 with At = 1. Checkout the programusing the given data set. Solve any of Problems 83 to 93 with the Runge-Kutta-Fehlberg method program.15. 247.0. 249. Modify the fourth-order Runge-Kutta method program to implement the Runge-Kutta-Fehlberg method. ~(0) 242.0. Use the first-order Gear methodfor starting values.0 by the fourth-order Gear methodwith At ---. (W) from t = 0. 240. Solve Eq. second-.0 and 20. Solve the following coupled pair of ODEs the explicit Euler methodfrom by t = 0. 2. (X) Let At = 1.0. and third-order Gear methodsto obtain starting values.5.0 with At -.0 with At = I0.0. Solve Eq. and 2.0 to 100. Checkout the programusing the given data set.0.0 to 100. Use the exact solution for starting values. Solve any of Problems 83 to 93 with the Runge-Kutta-Merson method program.2. Solve Eq.0.0. 250.15 Programs 248. Implement the fourth-order Runge-Kutta method program presented in Section 7. and 2. and 25.0.0 to 100.0 to 100. 244. (X) from t = 0.0: ~’ = -~ + 0.0.0 to 100.0 with At = 10.15. (X) by the modified Euler method from t = 0. (X) by the second-order Gear methodfrom t = 0. Use the first-. 2.0.0.0 and 20.10. (X) by the implicit trapezoid methodfrom t = 0.0 with At = 10. 243. ~(0) = 2 and ~’ = -0. 241.1. and third-order Gear methodsto obtain starting values. Solve any of Problems 99 to 109 with the extrapolated modified midpoint method program.0 and 20.0. Solve Eq. .10.0.0. Solve Eq.0 to 100.5. (X) by the implicit Euler method from t = 0.999~. second-.0 with At = 1. Check out the modified program using the given data set.0 with At = 10. Use the first-. Modify the fourth-order Runge-Kutta method program to implement the Runge-Kutta-Merson method. Solve Eq.0 by the fourth-order Gear methodwith At = 10. 251.0. (W) by the second-order Gear methodfrom t = 0. 429 Solve Eq. 245. Implementthe extrapolated modified midpoint methodprogrampresented in Section 7. 20. 2.5.0. 20. (X) by the first-order Gear method from t = 0. (W) by the first-order Gear methodfrom t = 0. Solve Eq. 7. 246. and 25.0. Solve Eq.001~. Solve any of Problems 83 to 93 with the fourth-order Runge-Kutta method program. Check out the modified program using the given data set. 252. Solve Eq. 254.0.0 to 100.

0C.1. Population growth of any species is frequently modeledby an ODE the of form dN d--[ = aN . where h is the convective cooling coefficient and A is the surface area of the mass. If h = 500.0 cm made of an alloy for which p = 3 3.0K.000kg/m s and C= 500. and so forth.0 J/(s-m2-K).500.0 J/(s-m2-K). A lumpedmass rn initially at the temperature T is cooled by convection to o its surroundings at the temperature T From Newton’s law of cooling.bN2 N(O) = o 2 whereN is the population. Checkout the programusing the given data set. T(0) 500. Froman energy balance. and so on. 260. 261. the step size At. whereC is the specific heat. Consequently. APPLIED PROBLEMS Several applied problemsfrom various disciplines are presented in this section. such as disease. Mostof these problems require a large amountof computation to obtain accurate answers. Calculate T(t) for t = 0.0 to 10.0 s and compare results the with the results of the previous problems.430 Chapter7 256. aN represents the birthrate. 258.0000008. Aninfinite variety of exercises can be constructed by changing the numerical values of the parameters.T4a) r(o)= o Consider a sphere of radius r = 1.000. and b = 0. and bN represents the death rate due to all causes.000. T(0)= 2.0 to 10. the rate of changeof E must equal the rate of cooling due to convectionOconv. calculate N(t) for t = 0. The Stefan-Boltzmann Constant a = 5.000. calculate T(t) for t = 0. The energy E stored in the mass is E = mCT.5.Ta)."Thus. These problems be solved by any of the methods can presented in this chapter.~ ( . and Ta = 50. a = 0. Eq.20): dT dt Ae~r T4 .67 x 10-8 J/(s-m2-K4).0 to 10. Consider the radiation problempresented in Part II.0 kg/m and C = 1. 259. If e = 0.0 J/(kg-K).0 cm made of an alloy for which p = 8. dT hA -(Tdt mC Ta) T(O) O Consider a sphere of radius r = 1. = hA(T.15. (II. ~¢conv. Ta = 100.5.0s.0J/(kg-K). competitionfor food supplies. 257. Let the convective cooling coefficient h = 600.0C. it is recommended they be solve by computer that programs. Solve any of Problems 115 to 125 with the fourth-order Adams-BashforthMoulton method program.0s. . IfN0 = 100.0K.0 years. calculate T(t) for t = 0.3.0 to 20. a. Combine Problems 259 and 260 to consider simultaneous cooling by radiation and convection. Implement the fourth-order Adams-Bashforth-Moulton method program presented in Section 7.

7 and (b) M/= For Problem 262.0 to 5. g is the 263. 264.0 to i 5. and D = 1.One-Dimensional Initial-Value Ordinary Differential Equations 431 262.25 cm/cm. The governingequation for a projectile shot vertically upwardis d2y m -~ = --rag -. = 0.. Calculate M(x) for x = 0.5 and (b) M~ = Solve Problem264 for fl = -50.0 cm. ]. i wherea = 0.112) in Zucrow Hoffman.0cmfor (a) M.0 J/cm.0cm. ~ c whereQ(x) is the heat transfer along the flow passage (J/cm) and C is the specific heat (kJ/kg-K). When ideal gas flows in a variable-area passagein the presenceof friction an and heat transfer.4. f is the friction coefficient (dimensionless). Thus. Calculate M(x) for x = 0. (9. = 1.0K. dT 1 dQ dx-Cdx For a linear heat transfer rate. 267. It is generally assumedto be constant for a specific set of flow conditions. and D = 1.005.0. 1 (1976)]: and dM M[I+(y-I)M2/2]I = ~ -~-x i2-~M ~dA 1 24f (+7M2)~ dT 1 ~xx+~TM ~+~ ~x wherex is the distance along the passage(cm). A = riD2~4.CI V[ V y(0) = 0 and y’(0) ---.Vol. let a = fl = 0. wh D is theinle t ere i diameter. For a conical flow passage with a circular cross section. and the stagnation temperature (K). Q= Qi + fix. Considercombined friction and heat transfer in the fluid mechanicsproblem described in Problem262.0 cm for (a) M/= 0.0kJ/(kg-K).005. and dT 1 d ~ dx . Consider a problem where f = fi = 0. Thus. dA d(~D2 dx--ax ) n d n D D =~xx ( /+~x)2=~(Di+°~x)a=a~ n The stagnation temperature T is given by Q(x) T(x) = r.0 cm.4. C= 1. Calculate M(x) for x = 0.000.C dx (Qi + fix) = The friction coefficientf is an empirical function of the Reynoldsnumber and passage surface roughness. 268. . = For Problem262. D is the diameter of the passage (cm). . 265.0 to 5. where D(x)= i + ~x. = 0.0.0 specifies a constant-area tube. the Mach numberMis governedby the following ODE [see Eq. Consider combinedarea change and friction in the fluid mechanicsproblem described in Problem262.7 and (b)/14.0J/cm. Solve Problem262 with the addition off = 0. let a =f=0.o wheremis the massof the projectile (kg). 266.0.7 = 1. and D = 1. T/= 1. A is the cross-sectional flow area (cm2). a = 0.~ is the ratio of specific heats (dimensionless).005. Solve Problem264 with the addition off = 0..0cm for (a) M. f = 0. fl = 50. y(t) is the height (m).

Calculatefit) and q(t) for t = 0. FromNewton’ssecond law of motion.0 rad/s.0N/m.0 cycles/s.dq/dt = i. calculate (a) the maximum height attained by the projectile.0cm. Solve Problem270 for V = 10.0m. q is the charge (coulombs). and y(0) = y’(0) = 270.000.0 1 dt dN2 d--~. 272.1N-s2/m2.0. V = 10.R = 10.0 sin wt.432 Chapter7 acceleration of gravity (9. o9 = 100.05 s. whichis given by o9 = 2~zf.0ohms. and L = 0. K = 50.= N2(A2 B2N2. What the is maximum current. and 10. For small 0. whereo9 is the frequency(1/s) of the applied voltage. 269. 0’(0. A machineof massm (kg) rests on a support that exerts both a dampingforce and a spring force on the machine.1.T+ sinO=O 0(0)=00 0 andO’(O)=O ’ 2) where g is the acceleration of gravity (9. C = 5. using the simplified equation. Determine the motionof the machineduring the first cycle of oscillation of the support for rn = 1.80665 m/s and L is the length of the pendulum (m). C=O.0 to 0. LetL = 100.0mH.1 and 0. d2y _ m-d~ where (y.0kg. the governing equation simplifies d20 g dt--g +-£0 =0 Solvefor O(t) for one period of oscillation for 0(0. 273. i 0 = 0. and V is the applied voltage (volts). C is an aerodynamic drag parameter.0N-s/m. L is the inductance (henrys).0. C = 1 mf. For m= 10.Y) is the relative displacement between the machine and the support.0. The angular displacement 0(t) (radians) of a frictionless pendulum governed by the equation d20 ~ dr-. and V = 500. 274.The support is subjected to the displacement y(t)= Yo sin~ot.0. C is the damping coefficient. and V=dy/dt is the velocity.0) -.0) = 0. and at what time does it occur? 271.0 re~s. Solve Problem272 using the exact governing equation. 1.0volts. Compare results the with the results of Problem272.0. C is the capacitance (farads). and q0 = 0. and K is the spring constant.C2N1) N2(0 = N2.000. Y0 = 1. and (c) the time required to return to the original elevation.5 radians. The current i(t) in a series L-R-Ccircuit is governedby the ODE di I L ~ + Ri + -~ q = V(t) i(O) o andq(O)= qo wherei is the current (amps). wheref= 60. The population of two species competing for the same food supply can be modeled by= N1 (A _ of ODEs: dN___~the pair BINI _ C1N2 1 ) N (0) = 1.000. ) 0 . 0 (b) the time required to reach the maximum height.0kg.80665 m/s2).

If 2 Nl(0. by 276.0kg. and CN1N models the death rate due to competition for the food supply. x(t) and y(t) (m). c(o) = .5.0000008. O=tan-~(v/u). 275.0000010.0. C is a drag parameter.0 = 100.0) = 0. Cl = 0. The two ODEs that 0 governthe displacement. of the projectile from the launch location are dZx m-d~= -cI vI v cos 0 x(0) = 0 and x’(0) = u(0) 0 cosc~ d2y -CIVIVsinO -mg y(0) = 0 andy’(0) = v(0) = m-~= where the vector velocity V = iu +jr.01.0m/s.0001. That is. B = 0. V = 500.80665m/s2). ) A = 0.1.0000001. 0 ~ = 1. C = Ce.One-Dimensional Initial-Value Ordinary Differential Equations 433 2 where ANis the birthrate. ~ = 0.0000008. Let e e C(0. and a level terrain.o = 0. u = dx/dt and v = dy/dt.1. BN models the death rate due to disease. For rn = 10. calculate (a) the maximum height attained by the projectile.0 to 10. and ~ = 0. and g is the acceleration of gravity (9. and C = 0.1N-s2/m 2. The inherent features of finite rate chemicalreactions can be modeled the prototype rate equation de Ce C dt ¯ where C is the instantaneous nonequilibrium mass fraction of the species under consideration. Ce.0 to 0. Assume that C varies quadratically with time.0) = N2(0. Solve for C(t) from t = 0. and (e) the velocity V at impact.o + ~t~.000. (b) the correspondingtime.0 years.1. A~ = 0.0 radian. (c) the maximum range of projectile. (d) the corresponding time. C is its equilibrium massfraction correspondingto the e local conditions. Considera projectile of massrn (kg) shot upward the angle ~ (radians) at respect to the horizontal at the initial velocity V (m/s). calculate Nl(t ) and Nz(t 2 2 2 ) for t = 0. C = 0. B1 = 0. and z has the character of a chemical relaxation time.

8
One-Dimensional Boundary-Value Ordinary Differential Equations
8.1. 8.2. 8.3. 8.4. 8.5. 8.6. 8.7. 8.8. 8.9. 8.10. 8.11. Introduction General Features of Boundary-Value ODEs The Shooting (Initial-Value) Method The Equilibrium (Boundary-Value) Method Derivative (and Other) BoundaryConditions Higher-Order Equilibrium Methods The Equilibrium Methodfor Nonlinear Boundary-ValueProblems The Equilibrium Methodon Nonuniform Grids Eigenproblems Programs Summary Problems

Examples 8.1. The second-order shooting methodby iteration 8.2. The second-order shooting methodby superposition 8.3. The second-order shooting methodby extrapolation 8.4. The second-order equilibrium method 8.5. The second-order equilibrium methodby extrapolation 8.6. A fourth-order ODE the second-order equilibrium method by 8.7. A derivative BCby the shooting method 8.8. A derivative BCby the equilibrium method 8.9. The compactthree-point fourth-order equilibrium method 8.10. A nonlinear implicit FDEusing iteration 8.11. A nonlinear implicit FDEusing Newton’s method 8.12. The equilibrium methodon a nonuniform grid 8.13. Approximationof eigenvalues by the equilibrium method

435

436 8.1 INTRODUCTION

Chapter8

Thesteady one-dimensional heat transfer problemillustrated in Figure 8.1 consists of heat diffusion (i.e., heat conduction) along a constant-area rod with heat convection to the surroundings. The ends of the rod are maintained at the constant temperatures T and T 1 2. Anenergybalance on a differential elementof the rod, presented in Section 11.6, yields the following second-order boundary-valueordinary differential equation (ODE):

r" - ~2r= -~2:r :r(x0 = :r(0.0) = T1:r(x:) = r(~;) a

(8.1)

where T is the temperature of the rod (C), a2 hP (c -2) (wh h, P, k, andA ar /kA ere defined in Section II.6), and Ta is the ambient(i.e., surroundings) temperature(C). The general solution of Eq. (8.1) T(x) = ~ + Be-~ + Ta (8.2) which can be demonstratedby direct substitution. Substituting the boundaryconditions into Eq. (8.2) yields A. = (T2 - Ta) - (T1 - T2)e -~L and e~L _ e_~L ~t B---- (T~ - T2)e _ - (T2 - Ta) (8.3) e~L ~ e-

T a
m 1 / x 1

~c(x) /

~(x)""~ IT2 x 2

d2T- ¢t2T = - (~2T T(Xl) = T1, and T(x2) 2 a, 2 dx

~

L
E I(x) dd-~4xY4 =g(x)

~

y(0) = 0, y"(0) = 0, y(L) = 0, andy"(L) Figure8.1 Steady one-dimensional transfer anddeflectionof a beam. heat

Boundary-Value OrdinaryDifferential Equations

437

Consider a rod 1.0cm long (i.e., L = 1.0cm) with T(0.0) = 0.0C and T(1.0) 100.0 C. .Let ~z=16.0cm-Z and Ta=0.0C. For these conditions, Eq. (8.2) yields 4x IT(x) = 1.832179(e -e-’~x)] (8.4)

The exact solution at selected values of x is tabulated in Table 8.1 and illustrated in Figure 8.2. The deflection of a laterally loaded symmetricalbeam,illustrated at the bottomof Figure 8.1, is consideredin this chapter to illustrate finite difference methods solving for fourth-order boundaryvalue ODEs. This problemis discussed in Section 11.6 and solved in Section 8.4,3. The general features of boundary-valueordinary differential equations (ODEs) are discussed in Section 11.6. In that section it is shownthat boundary-valueODEs govern equilibrium problems, which are boundary-value problems in closed domains. This chapter is devoted to presenting the basic properties of boundary-valueproblemsand to developing several methods for solving boundary-value ODEs. Numerous boundary-valueordinary differential equations arise in engineering and science. Single ODEs governing a single dependent variable are quite common. Coupled systems of ODEsgoverning several dependent variables are also quite common. Boundary-value ODEs be linear or nonlinear, second- or higher-order, and homomay geneousor nonhomogeneous. this chapter, the majority of attention is devoted to the In Table 8.1 Problem x, cm 0.000 0.125 0.250 0,375 0.500 Exact Solution of the Heat Transfer T(x), 0.000000 1.909479 4.306357 7.802440 13.290tli x, cm 0.625 0.750 0.875 1.000 T(x), 22.170109 36.709070 60.618093 100.000000

IO0

.~ 60 ¯ ~ 40 ~. 2O 0 0 0.2 0.4 0.6 0.8 1.0 Location x, cm

Figure 8.2 Exactsolutionof the heat transfer problem.

438

Chapter8

general second-order nonlinear boundary-valueODE the general second-order linear and boundary-value ODE: y" + P(x, y)yt + Q(x, y)y = F(x) y(xl) andy(x2) =Y2 y"+Py’+Qy=F(x) y(xl) =Yl andy(x2)

(8.5) (8.6)

The solution to these ODEs the function y(x). This function must satisfy two boundary is conditions at the two boundaries of the solution domain. The solution domain D(x) is closed, that is, x1 < x < x2. Several numerical methods for solving second-order boundary-valueODEs presented in this chapter. Procedures for solving higher-order are ODEs and systems of ODEs are discussed. First, consider a class of methodscalled finite difference methods. There are two fundamentally different types of finite difference methodsfor solving boundary-value ODEs: 1. 2. The shooting (initial-value) method The equilibrium (boundary-value) method

Several variations of both of these methodsare presented in this chapter. The shooting method transforms the boundary-value ODE into a system of firstorder ODEs,which can be solved by any of the initial-value (propagation) methods developedin Chapter 7. The boundaryconditions on one side of the closed domaincan be used as initial conditions. Unfortunately, however,the boundaryconditions on the other side of the closed domaincannot be used as initial conditions. The additional initial conditions neededare assumed,the initial-value problem solved, and the solution at the is other boundary is compared to the knownboundary conditions on that boundary. An iterative approach, called shooting, is employed vary the assumed to initial conditions on one boundaryuntil the boundaryconditions on the other boundaryare satisfied. The equilibrium method constructs a finite difference approximation of the exact ODE every point on a discrete finite difference grid, including the boundaries. A system at of coupled finite difference equations results, whichmust be solved simultaneously, thus relaxing the entire solution, including the boundaryconditions, simultaneously. A second class of methods for solving boundary-value ODEs based on approxis imating the solution by a linear combination trial functions, for example,polynomials, of and determiningthe coefficients in the trial functions so as to satisfy the boundary-value ODEin some optimummanner. The most common examples of this type of method are 1. The Rayleigh-Ritz method 2. The collocation method 3. The Galerkin method 4. The finite element method These four methodsare discussed in Chapter 12. The organizationof Chapter8 is illustrated in Figure 8.3. A discussion of the general features of boundary-value ordinary differential equations (ODEs) begins the chapter. The material then splits into a presentation of the two fundamentallydifferent approachesfor solving boundary-value ODEs: the shooting method and the equilibrium method. Procedures are then presented for implementingderivative (and other) boundaryconditions. Then follows a discussion of higher-order equilibrium methods, nonlinear problems, and nonuniform grids. A brief introduction to eigenproblemsarising from boundary-value

Boundary-Value OrdinaryDifferential Equations Boundary-Value Ordinary Differential Equations General Features of Boundary-ValueODEs

439

Shooting Method ]

I Equilibrium Method

I
Derivative BoundaryConditions ¯ Other Boundary Conditions

I

Higher-OrderMethods

Nonlinear Problems }

Nonuniform Grids

I Eigenproblems ]

Programs

Summary Figure8.3 Organization of Chapter 8. ODEs follows. The chapter closes with a Summary, which discusses the advantages and disadvantagesof the shooting methodand the equilibrium method,and lists the things you should be able to do after studying Chapter 8. 8.2 GENERAL FEATURES OF BOUNDARY-VALUE ODEs

Several general features of boundary-valueordinary differential equations (ODEs) are presented in this section. The general second-orderlinear ODE discussed in somedetail. is Nonlinear second-order ODEs,systems of ODEs,and higher-order ODEsare discussed briefly. 8.2.1 The Linear Second-Order Boundary-Value ODE The general linear second-order boundary-valueODE given by Eq. (8.6): is [ y" + Py’ + Qy = F(x) y(xl) = Yl and y(x2) = Y2 where P and Q are constants. (8.7)

440

Chapter8

As discussed in Section 7.2 for initial-value ODEs, exact solution of an ordinary the differential equation is the sum of the complementary solution yc(x) of the homogeneous ODE the particular solution yp(X) of the nonhomogeneous and ODE.The complementary solution yc(x) has the form yc(x) = z~ (8.8)

Substituting Eq. (8.8) into Eq. (8.7) with F(x) = 0 yields the characteristic equation: 22 + P2 + Q = 0 (8.9)

Equation (8.9) has two solutions, 21 and 2. T can both be r eal, o r t hey can be a hey complexconjugate pair. Thus, the complementary solution is given by y~(x) = xlx + Be~x As discussed in Section 7.2, the particular solution has the form yp(x) = o F (x) +B F’(x) +B F 1 2 "(x) + ... (8.11) (8.10)

wherethe terms F’(x), F"(x), etc., are the derivatives of the function F(x). Thus, the total solution is [ y(x)= ~x + Bex2x + yp(X) l (8.12)

The constants of integration A and B are determinedby requiring Eq. (8.12) to satisfy the two boundary conditions. 8.2.2 The Nonlinear Second-Order Boundary-Value ODE

The general nonlinear second-order boundary-value ODE given by Eq. (8.5): is [ y" + P(x,y)y’ + Q(x,y)y = F(x) y(xl) = Yl and y(x2) = (8.13)

where the coefficients P(x, y) and Q(x, maye l inear or n onlinear functions ofy. When b solved by the shooting method,which is based on methodsfor solving initial-value ODEs, the nonlinear terms pose no special problems. However,whensolved by the equilibrium method, in which the exact ODE replaced by an algebraic finite difference equation is whichis applied at every point in a discrete finite difference grid, a systemof nonlinear finite difference equations results. Solving such systems of nonlinear finite difference equations can be quite difficult. 8.2.3 Higher-Order Boundary-Value ODEs Higher-order boundary-value ODEs be solved by the shooting method by replacing can each higher-order ODE a system of first-order ODEs,which can then be solved by the by shooting method. Somehigher-order ODEs can be reduced to systems of second-order ODEs,which can be solved by the equilibrium method. Direct solution of higher-order ODEs the equilibrium methodcan be quite difficult. by

Boundary-Value OrdinaryDifferential Equations 8.2,4 Systems of Second-Order Boundary-Value ODEs

441

Systems of coupled second-order boundary-value ODEs can be solved by the shooting method by replacing each second-order ODE two first-order ODEs by and solving the coupled systems of first-order ODEs.Alternatively, each second-order ODE be solved can by the equilibrium method, and the coupling between the individual second-order ODEs can be accomplished by relaxation. By either approach, solving coupled systems of second-order ODEs be quite difficult. can 8.2.5 Boundary Conditions Boundary conditions (BCs) are required at the boundaries of the closed solution domain. Three types of boundaryconditions are possible: 1. 2. 3. The function y(x) maybe specified (Dirichlet boundarycondition) The derivative y’(x) maybe specified (Neumann boundarycondition) A combination ofy(x) and y’(x) maybe specified (mixed boundarycondition)

The terminology Dirichlet, Neumann, mixed, whenapplied to boundaryconditions, is and borrowed from partial differential equation (PDE) terminology. The procedures for implementing these three types of boundary conditions are the same for ODEsand PDEs. Consequently, the same terminology is used to identify these three types of boundary conditions for both ODEs and PDEs. 8.2.6 Summary equations (ODEs) have been consid-

Four types of boundary-valueordinary differential ered:

1. The general linear second-order boundary-value ODE 2. The general nonlinear second-order boundary-value ODE 3. Higher-order boundary-value ODEs 4. Systems of second-order boundary-value ODEs Solution methodsfor ODE types 2, 3, and 4 are based on the solution methodsfor ODE type 1. Consequently, Chapter 8 is devoted mainly to the developmentof solution methodsfor the general linear second-order boundary-value ODE, (8.7), subject Eq. boundaryconditions at the boundaries of the closed solution domain. Keepin mind that equilibrium problems are steady state problems in closed solution domains. Equilibrium problems are not unsteady time dependent problems. 8.3 THE SHOOTING(INITIAL-VALUE) METHOD

The shooting method transforms a boundary-value ODE into a system of first-order ODEs,which can be solved by any of the initial-value methodspresented in Chapter 7. The boundaryconditions on one side of the closed solution domainD(x) can be used as initial conditions for the system of initial-value ODEs.Unfortunately, however, the boundaryconditions on the other side of the closed solution domaincannot be used as initial conditions on the first side of the closed solution domain.The additional initial conditions needed must be assumed.The initial-value problemcan then be solved, and the solution at the other boundarycan be compared the known to boundaryconditions on that

442

Chapter8

side of the closed solution domain.Aniterative approach, called shooting is employed to vary the assumed initial conditions until the specified boundary conditions are satisfied. In this section, the shooting method is applied to the general nonlinear second-order boundary-value ODE with knownfunction (i.e., Dirichlet) boundary conditions, Eq. (8.13). A brief discussion of the solution of higher-order boundary-valueODEs the by shooting method is presented. Derivative (i.e., Neumann) boundary conditions are discussed in Section 8.5. As discussed in Section 7.4.2, whensolving differential equations by an approximate method,a distinction must be madebetweenthe exact solution of the differential equation and the approximatesolution of the differential equation. Analogouslyto the approach taken in Section 7.4, the exact solution of a boundary-value is denoted by an overbar ODE on the symbolfor the dependent variable [i.e., ~(x)], and the approximatesolution denoted by the symbolfor the dependentvariable without an overbar [i.e., y(x)]. Thus, ~(x) = exact solution y(x) approximate solution This very precise distinction betweenthe exact solution of a differential equation and the approximate solution of a differential equationis required for studies of consistency, order, stability, and convergence. Whensolving boundary-value ODEs the shooting method, consistency, order, by stability, and convergenceof the initial-value ODE solution methodmust be considered. These requirements are discussed thoroughly in Section 7.6 for marching methodsfor solving initial-value problems.Identical results are obtained whensolving boundary-value problems by marching methods. Consequently, no further attention is given to these conceptsin this section. 8.3.1 The Second-Order Boundary-Value ODE Consider the general nonlinear second-order boundary-value ODE with Dirichlet boundary conditions, written in the following form:
[~tt=f(x,~,~t)

~(Xl) = ~1 and.~(x2) =

(8.14)

The exact solution of Eq. (8.14) is illustrated in Figure 8.4. The boundaryconditions ~(xl) =~1 and ~(x2)=32 are both specified. The first derivative y(x~)=~/11 specified as a boundary condition, but it does havea uniquevalue whichis obtained as part of the solution. Let’s rewrite the second-order ODE, (8.14), as two first-order ODEs.Define the Eq. auxiliary variable ~(x) = ~/(x). Equation (8.14) can be expressed in terms off and follows: ~’ = ] ~’ =.~" =/(x,~,~) ~(xl) = ~(Xl) = ~’(x~) =~’11 (8.15) (8.16)

Aninitial-value problemis created by assuminga value for ~(x~) = ~’11. Choosean initial estimate for ~’l~, denoted by y’l~1), and integrate the two coupled first-order ODEs, Eqs. (8.15) and (8.16), by any initial-value ODE integration method(e.g., the Runge-Kutta ~). method).The solution is illustrated in Figure 8.5 by the curve labeledy’l~ The solution at

Boundary-Value OrdinaryDifferential Equations

443

~2

~ Yl
x 1 Figure 8.4 $o|ution X2 X

of the ~cncral second-order bounda~-wluc proN¢m.

x2 is y(x2) y~ which is notequal to t he specified boun dary cond 0, ition ~(X2 ) = ~2 Assume second value for ~(xl)= y’l~ 2), and repeat the process to obtain the solution a labeled y,]~2). The solution at 2 i s y(x2) =y~ wh again is notequa to t he spec 2), ich l ified boundarycondition ~(x2) = ~2. This procedureis continued until the value for z(x 1) =Y’]I is determinedfor whichy(x2) = ~2 within a specified tolerance. For a nonlinear ODE, this is a zero finding problemfor the function y(x2) =f(z(xl)) -which can be solved by the secant method(see Section 3.6). Thus, ~2_ y~n) Y~")- Y2 ’0 -Y’I]n)-y’I~ "-1) -- slope (8.17)

y,,(n+l)-Y’I~ I1

(8.18)

wherethe superscript (n) denotes the iteration number. SolvingEq. (8.18) for n+l) gives

y,,(,,+l)=y,l~,,) I1

slope

(8.19)
Y(X2) approaches

The entire initial-value problem reworked is with z(x~) = y’l{~+1)until within a specified tolerance.

¯ .1(2)

Y~
x 1 ×2

Figure 8.5

Iterative solutionfor the boundary condition.

444 Example8.1 The second-order shooting method using iteration

Chapter8

As an exampleof the shooting methodusing iteration, let’s solve the heat transfer problem presented in Section 8.1. The boundary-valueODE [see Eq. (8.1)]: is T" - o~2T = -~2Ta T(0.0) = 0.0 C and T(1.0) = 100.0 Rewrite the second-order ODE, (8.20), as two first-order ODEs: Eq. T’ = U T(0) = 0.0 U’ = a2(T - Ta) U(0.0) = T’(0.0) (8.21) (8.22) (8.20)

Equations (8.21) and (8.22) can be solved by the implicit trapezoid method, Eq. (7.141): T,.+a = T,. +--~(Ui+ U,.+I) Ui+l = U/+ ~- [a2(T,. - Ta) + a2(T/+l - Ta)] (8.23a) (8.23b)

For nonlinear ODEs,the implicit trapezoid method yields nonlinear implicit finite difference equations, which can be solved by the modified Euler (i.e., the modified trapezoid) method, Eqs. (7.141) and (7.142), or by Newton’s method for systems nonlinear equations, Section 3.7. However,for linear ODEs,such as Eqs. (8.21) and (8.22), the finite difference equations are linear and can be solved directly. Rearranging Eq. (8.23) yields
ri+ 1 -"~ Ui+ 1 = T~. + -~- U i 2 O~ ,~Y (8.24a)

(8.24b) 2T~) + U i Equation(8.24) can be solved directly for Ti+ and U,.+~. 1 -2, T = 0.0C, and Ax = 0.25cm. To begin the solution, let Let e2 = 16.0 cm a U(0.0)(1) = T’(0.0) O) = 7.5 C/cm, and U(0.0)(2) = T’(0.0) (2) = 12.5 C/cm. The solutions for these two values of U(0.0) are presented in Table 8.2 and Figure 8.6. From Table 8.2, T(1.0)(1) = 75.925926C, which does not equal the specified boundarycondition ~’(1.0) = 100.0 C, and T(1.0)(2) = 126.543210 whichalso does not equal 100.0 C. C, Applyingthe secant methodyields: 2 Ti+l
+

2 O~ l~Y

Ui+l

= ~ -(Ti

--

T(1.0) (2) - T(1.0) 0) 126.543210 - 75.925926 = 10.123457 (8.25) Slope = U(0.0)(2) _ U(0.0)(1) = 12.5 - 7.5 (2) 100.0 -- T(1.0) (3) U(0.0) = U(0.0)(2) ÷ = 9.878049 (8.26) Slope The solution for U(0.0)O) = 9.878049 C/cmis presented in Table 8.3 and illustrated in Figure 8.6. For this linear problem,T(1.0)(3) = 100.0 C, whichis the desired value. The (2), and T’(0.0) are presentedin Figure8.6. Thefinal (3) three solutions for T’(0.0) (1), T’(0.0) solution is presented in Table 8.4,. along with the exact solution ~’(x) and the error, Error(x) = [T(x) - ~’(x)]. Repeatingthe solution for Ax= 0.125 cm yields the results presented in Table 8.5. The accuracy of a numericalalgorithm is generally assessed by the magnitude the of errors it produces. Individual errors, such as presented in Tables 8.4 and 8.5, present a

Boundary-Value OrdinaryDifferential Equations Table 8.2 Solution x 0.00 0.25 0.50 0.75 1.00
for U(0.0) (1) =

445
(2) =

7.5 C/cmand O) U(x)

U(0.0)

12.5 C/cm
(2) U(X)

(~) T(x) 0.000000 2.500000 8.333333 25.277778 75.925926

(2) T(x) 0.000000 4.166667 13.888889 42.129630 126.543210

7.500000 12.500000 34.166667 101.388889 303.796296

12.500000 20.833333 56.944444 168.981481 506.327160

"~4o! ¯ T" (0.0) = 9.878049 C/cm
120 100 80 0 T" (0.0) = 7.5 C/cm ,/~

8o
40

~o
O~ 0 Figure 8.6 0.4 0.6 0.8 1.0 Location x, cm Solution the shooting by method. 0.2

Table 8.3 x 0.00 0.25 0.50 0.75 1.00

Solution

for

U(0.0) (3)

=

9.878049 C/cm
(3) U(x)

(3) T(x) 0.000000 3.292683 10.975610 33.292683 100.000000

9.878049 16.463415 45.000000 133.536585 400.121951

Table 8.4 Solution by Shooting Method for Ax = 0.25 cm x, cm 0.00 0.25 0.50 0.75 1.00 T(x), 0.000000 3.292683 10.975610 33.292683 100.000000 ~’(x), 0.000000 4.306357 13.290111 36.709070 100.000000 Error(x), -].013674 -2.314502 -3.416388

446 Table 8.5 Solution by the Shooting Methodfor Ax = 0.125 cm x, cm 0.000 0.125 0.250 0.375 0.500 0.625 0.750 0.875 1.000 T(x), C 0.000000 1.792096 4.062084 7.415295 12.745918 21.475452 35.931773 59.969900 100.000000 ~’(x), 0.000000 1.909479 4.306357 7.802440 13.290111 22.170109 36.709070 60.618093 100.000000 Error(x), -0.117383 -0.244273 -0.387145 -0.544194 -0.694658 - 0.777298 -0.648193

Chapter8

detailed picture of the error distribution over the entire solution domain.A measureof the overall, or global, accuracy of an algorithm is given by the Euclideannormof the errors, whichis the square root of the sumof the squares of the individual errors. The individual errors and the Euclideannormsof the errors are presented for all of the results obtained in Chapter 8. The Euclidean normof the errors in Table 8.4 is 4.249254C. The Euclidean normof the errors in Table8.5 at the three common points (i.e., x = 0.25, 0.50, and 0.75 cm) grid is 0.979800 Theratio of the normsis 4.34. Theratios of the individual errors at the three C. common points in the two grids are 4.14, 4.25, and 4.39, respectively. Bothof these results demonstratethat the method secondorder (for step size halving the ratio of errors is 4.0 is for a second-order methodin the limit as Ax ~ 0). Theerrors in Table8.5 are rather large, indicating that a smaller step size or a higherorder methodis needed. The errors from Tables 8.4 and 8.5 are plotted in Figure 8.7, which also presents the errors from the extrapolation of the second-ordershooting method results, whichis presented in Example 8.3, as well as the errors for the solution by the

0 10

~ ~

~~__J

Second-order trapezoid

method

II- "1 ~ 10

~
Ax = J ~

Ax = 0.125 0.25 y J ~ Extrapolationof the second-order"Fourth-order Runge-Kutta method method

-2 LU10

~
0

-3 10

o.5 o.o o.5 1.bo
Location x, cm

Figure 8.7 Errors in the solution by the shootingmethod.

Boundary-Value OrdinaryDifferential Equations

447

fourth-order Runge-Kuttamethod(not presented here). For the Runge-Kuttamethod, the Euclideannormsof the errors at the three common points for the two step sizes are grid 0.159057 and 0.015528 respectively. The ratio of the normsis 10.24. The ratios of the C C, individual errors at the three common points in the two grids are 8.68, 10.05, and 10.46. Bothof these results suggestthat the method fourth order (for step size halvingthe ratio is of the errors is 16.0 for a fourth-order method the limit as Ax--~ 0). in

8.3.2 Superposition For a linear ODE, principle of superposition applies. First, compute solutions for the two z(xl) = )/111) and z(xl) = y,[~2), denotedby y(x) (0 and y(x)(2), respectively. Thenforma linear combinationof these two solutions:

(~ (~ y(x) =Gy(x) +c2y(x) I
ApplyEq. (8.27) at x = 1 and x= x2. Th us, At x = x~: At x = xz: ~ = C1~ + Cz~ ~ 2) ~2 = CIY(21) C2Y~ q-

(8.27)

(8.28) (8.29)

Solving Eqs. (8.28) and (8.29) for C~ and 2 yields
.~2 -- y~2) C1--Y2"~) -- -~.(2)y2

and

C2 - y~l)

Y2 -.~2 _ y~2)

(1)

(8.30)

Substituting C and C into Eq. (8.27) yields the solution. Noiteration is required for 1 2 linear ODEs. Example8.2. The second-order shooting method using superposition The heat transfer problem considered in Example 8.1 is governed by a linear boundary-value ODE.Consequently, the solution can be obtained by generating two solutions for two assumed values of U(0.0)= T’(0.0) and superimposing those solutions. The results of the first two solutions by the implicit trapezoid methodwith Ax = 0.25 cm are presented in Table 8.2 and repeated in Table 8.6. From Table 8.6, (1) T2 = 75.925926 T~2) = 126.543210. specified value of ~’2 is 100.0 C. Substitutand The ing these values into Eq. (8.30) gives 100.000000 - 126.543210 Cl = 75.925926 - 126.543210 = 0.524389 75.925926 - 100.000000 ~2=75.925926 - 126.543210 0.475611

(8.31) (8.32)

Substituting these results into Eq. (8.27) yields

I

O) (2) T(x) = 0.524389T(x) + 0.47561IT(x) 1

(8.33)

448 Table 8.6 Solution by Superposition for Ax = 0.25 cm x, cm 0.00 0.25 0.50 0.75 1.00 T(x) ~), C 0.000000 2.500000 8.333333 25.277778 75.925926 (2), T(x) C 0.000000 4.166667 13.888889 42.129630 126.543210 T(x), 0.000000 3.292683 10.975610 33.292683 100.000000

Chapter8

Solving Eq. (8.33) for y(x) gives the values presented in the final columnof Table 8.6, whichare identical to the values presented in Table 8.4. 8.3.3 Extrapolation The conceptof extrapolation is presented in Section 5.6. As shown there, the error of any numericalalgorithm that approximatesan exact calculation by an approximatecalculation having an error that depends on an increment h can be estimated if the functional dependence the error on the increment h is known.The estimated error can be addedto of the approximate solution to obtain an improved solution. This process is knownas extrapolation, or the deferred approach the limit. The extrapolation formulais given by to Eq. (5.117): Improvedvalue = more accurate value 1 + n-(moreaccurate value - less accurate value) R - 1 (8.34) where improvedvalue is the extrapolated result, less accurate value and more accurate value are the results of applying the numerical algorithm for two increments h and h/R, respectively, where is the ratio of the twostep sizes (usually 2.0) and n is the order of the R leading truncation error term of the numericalalgorithm. Example8.3. The second-order shooting methodusing extrapolation Let’s apply extrapolation to the heat transfer problempresented in Section 8.1. The boundary-valueODE [see Eq. (8.1)]: is T" - e2T = -e2T a T(0.0) = 0.0 C and T(1.0) = 100.0 (8.35) -2 Let c~2 ----- 16.0 cm and Ta = 0.0C. This problem is solved in Example8.1 by the second-order implicit trapezoid method for Ax = 0.25 and 0.125cm. The results are presented in Tables 8.4 and 8.5, respectively. The results at the three common points in the two grids are summarized grid in Table 8.7. For these results, R = 0.25/0.125 = 2.0, and Eq. (8.34) gives 1 4MAV - LAV IV = MAV 2-T-Z-f_ 1 (MAV LAV) + 3 (8.36)

Boundary-Value OrdinaryDifferential Equations Table 8.7 Solution by Extrapolation of the Second-Order Shooting MethodResults x, cm 0.00 0.25 0.50 0.75 1.00 T(LAV), 0.000000 3.292683 10.975610 33.292683 100.000000 T(MAV), 0.000000 4.062084 12.745918 35.931773 100.000000 T(IV), 0.000000 4.318551 13.336020 36.811469 100.000000 ~’(x), 0.000000 4.306357 13.290111 36.709070 100.000000

449

Error(x), 0.012195 0.045909 0.102399

where IV denotes improved value, MAV denotes more accurate value corresponding to h = 0.125 cm, and LAV denotes less accurate value corresponding to h = 0.25 cm. Table 8.7 presents the results obtained from Eq. (8.36). The Euclidean normof these errors 0.112924C, which is 8.68 times smaller than the Euclidean normof the errors in Table 8.5. Theseresults and the correspondingerrors are presented in Figure 8.6.

8.3,4 Higher-Order Boundary-Value ODEs Consider the third-order boundary-valueproblem:

[~

" =f(x,~,~’,~")

~(Xl) ~--~1,

~t(X1)

=~’tl,

andS(x2)

(8.37)

RewritingEq. (8.37) as three first-order ODEs gives:

~’ = ~ ~(xl)= ~’ = ~ =y’ ~(x~) Y(Xl) = t ¢v =~" =~t~’ =f(x,~,~, ~) l~(Xl) =.~’t(Xl) =~"11 =.9

(8.38) (8.39) (8.40)

Aninitial-value problem created by assuming value for ¢v(x = ~/t(x 1 ) ~-- ~"11. Assume is a 1) values for Y"II and proceed as discussed for the second-order boundary-valueproblem. For fourth- and higher-order boundary-valueproblems, proceed in a similar manner. Start the initial-value problem from the boundaryhaving the most specified boundary conditions. In many cases, more than one boundarycondition mayhave to be iterated. In such cases, use Newton’smethodfor systems of nonlinear equations (see Section 3.7) conductthe iteration. The solution of linear higher-order boundary-valueproblemscan be determinedby superposition. For an nth-order boundary-value problem, n solutions are combined linearly: y(x) = l y(x) (1) +C y(x(2) +... + C y(x)(n) 2 n (8.41)

The weighting factors C (i = 1, 2 ..... n) are determinedby substituting the n boundary i conditions into Eq. (8.41), as illustrated in Eqs. (8.28) to (8.30) for the second-order boundary-value problem.

450 8.4 THE EQUILIBRIUM (BOUNDARY-VALUE) METHOD

Chapter8

The solution of boundary-valueproblems by the equilibrium (boundary-value) method accomplishedby the following steps: 1. Discretizing the continuoussolution domain into a discrete finite difference grid 2. Approximatingthe exact derivatives in the boundary-value ODE algebraic by finite difference approximations(FDAs) 3. Substituting the FDAsinto the ODE obtain an algebraic finite difference to equation fiDE) 4. Solving the resulting system of algebraic FDEs When finite difference equationis applied at every point in the discrete finite difference the grid, a system of coupled finite difference equations results, which must be solved simultaneously, thus relaxing the entire solution, including the boundarypoints, simultaneously. In this section, the equilibrium methodis applied to the linear, variable coefficient, second-order boundary-value ODE with knownfunction (i.e., Dirichlet) boundaryconditions. Derivative (i.e., Neumann) boundaryconditions are considered Section 8.5, and nonlinear boundary-valueproblemsare considered in Section 8.7. Whensolving boundary-value problems by the equilibrium method, consistency, order, and convergenceof the solution methodmust be considered. Stability is not an issue, since a relaxation procedure, not a marchingprocedure is employed.Consistency and order are determinedby a Taylor series consistency analysis, which is discussed in Section 7.6 for marching methods. The same procedure is applicable to relaxation methods. Convergence guaranteed for consistent finite difference approximationsof a is boundary-valueODE, long as the system of FDEs be solved. In principle, this can as can always be accomplishedby direct solution methods, such as Gauss elimination. 8.4.1 The Second-Order Boundary-Value ODE Consider the linear, variable coefficient, Dirichlet boundaryconditions: second-order boundary-value problem with

l~

"+P(x)f/+Q(x)~=F(x)

~(xl)

=~1 and~(x2)

(8.42)

The discrete finite difference grid for solving Eq. (8.42) by the equilibrium method illustrated in Figure 8.8. Recall the second-ordercentered-differenceapproximations of~/li and ~"li at grid point i developed Section 5.4 [Eqs. (5.73) and (5.76), respectively]: in ~’li - ~i+1- ~i-~ z) 0(Ax q_ 2Ax .~i+1 2.~i "~-~i-1 -~ttli -~2 +

(8.43) 0(~)
(8.44)

Substituting Eqs. (8.43) and (8.44) into Eq. (8.42) and evaluating the coefficients P(x) and Q(x) at grid point i yields

+ °(zXx) + ’L 2

Ax

q-

O(l~)

q-

Qi~i

= Fi

(8.45)

(8. The boundary-value ODE Eq. (8.0 (8.50.51a) T .0) = 0. respectively. and 0.0 C and T(1. The second-order equilibrium method Let’s solve the heat transfer problempresented in Section 8.5).49) becomes I T~.75: r 3 . and truncating the remainderterm yields the FDE: Ti_1 .0 Transferring T~ and T to the right-hand sides of Eqs. in gathering terms. x = 0. 5 yields the following tridiagonal system of FDEs: 1. (8.0 -3.0 1. and Ax = 0. (8.25. Then Eq.. system of FDEs.25: x = 0.50) at the three interior grid points.25 cm. Multiplyingthrough by Ax gathering terms.48) 2.0r 3 + r 4 = 0. and evaluating all the terms at grid point i gives ~’i+I.0 (8.0r 2 + r3 = 0.0:2T = -0:2Ta T(0.1 by the second-order equilibrium method.0) = 100.49) -2.0 cm Ta = 0. gives r.3.0 r. 0.0 0.46) Applying (8.4. Let 0:2 __ 16.51a) and (8.45) are 0(Ax2).0r 4 + T5 = 0.50: . (8. .52) x = 0. Eq. D(x) Figure 8.51c) T5 = ~5 = 100.3.(2 + 0:2 Ax.Boundary-Value OrdinaryDifferential Equations 451 D(x) 1 ~T i-1 i i+1 imax’~.0 (8.45) through 2.51b) 2 x = 0. (8.3.8 Solutiondomain andfinite differencegrid.0C. MultiplyingEq. All of the approximations Eq. (8. and truncating the remainderterms yields: (1-~Axp 2 Oi )Yi-F (1-~i)Yi+l:Ax2Fi +Axp i)Yi_l q-(--2+ (8.46) at each point in a discrete finite difference grid yields a tridiagonal Eq. which can be solved by the Thomas algorithm (see Section 1.0 1.51c).0 -100.47) Replacing T" by the second-order centered-difference approximation. Example8.75 cm. = 0.50) ApplyingEq.2~’i + ~i I Ax 2 .0 (8.0 T3 = T 4 0.44).0 -3.0T i + Ti+ .F 0(~ "2) -a 0~2~i = --~2T (8.1): is T" .0 (8._. = ~’.3. =0 I (8.2)T/+ Ti+~ = _0:2 Ax2 Ta (8.

452 Table $.306357 7.0 9 T9 = ~’9 = 100.2.25 0.170109 36.000 0.54a) and (8. C 0.25T6 + T = 0.17 in Section 1.0 4 T~ = f~ = 0.552144 22.000000 4. The results are presented in Table8.923667 100.118833 0.802440 13.25T/+ T.50 0.625 0.095238 I00.2.250 0.995603 1. (8.25T5+ Z ~--.709070 60.500 0.0. 0.25 cm x.000000 ~’(x).875: T~ .54c) (8.+~ = 0 (8.9.262033 0.2.750 0.125 0. C 0. The exact solution and the errors are presented for comparison.369181 0.2.53) at the seven interior grid points gives x = 0.375 0.332288 0.54f) (8.52) by the Thomas algorithm yields the results presented in Table 8.250: x = 0.709070 100.125 cm x.305575 .966751 4..187486 0.25~’4~’5= 0. cm 0.2.625: x = 0.0 3 T2 .761905 14.285714 38.000000 ~’(x). 0.49) becomes L T/_ -- 2.455548 0. 0.54b) ~3.54g). Table 8.5.000000 Error(x). (8.618093 100.989926 13.290111 36.000000 1.386168 Chapter8 Solving Eq.75 1.8.25T3 + Z = 0.00 T(x). (8.125 cm.54d) (8.425190 7.500: x = 0.502398 37. Let’s repeat the solution for Ax= 0.057272 0.0 7 Z -.909479 4.53) ApplyingEq.290111 22. 9 yields a tridiagonal system of equations.2.125: x = 0.54g) Transferring T~ and T to the right-hand sides of Eqs.000000 1.000000 Error(x).000 T(x).00 0.9 Solution by the Equilibrium Methodfor Ax = 0.0 + T .54e) (8.25r7 + T = 0.306357 13.25r 8 + T = 0.875 1. (8. In this case Eq.8 Solution by the Equilibrium Methodfor Ax = 0. cm 0.750: x = 0.0 6 8 T7 .078251 60. 0.2.54a) (8. respectively.0 (8.000000 4.0 (8.0 4 6 T5 . That tridiagonal systemof equations is solved by the Thomas algorithm in Example1.375: x = 0.25T: + T = 0.

62. 0 -3 10 The Euclidean normof the errors in Table 8. respectively.75. whichdemonstratesthe fourth-order behavior of the method.2 Extrapolation Extrapolation was applied in Example8.83.Boundary-Value OrdinaryDifferential Equations 453 c 10 ~ o.9. and 3.9 at the three common points is 0. For the fourth-order method. 10 1 ~Second-order _ / equilibrium method II-I x -2 .5. The errors in Tables 8. as well as the errors from extrapolation of the second-order method. 3.9 are about 40 percent of the magnitude the errors of in Tables 8. the Euclidean norms of the errors at the three common grid points in the two grids are 0.50 0.00 Locationx.80.8 is 1. which also presents the errors for the compactthree-point fourth-order equilibrium methodpresented in Example in Section 8.766412C. 3.8 and 8.4.9 Errors in the solution by the equilibrium method.The errors in both cases can be decreasedby using a smaller step size or a higherorder method. (8. 8.005919C. respectively.092448 C and 0.75 1.10 ~colatio ~ nof Fourth-order ond-order equilibrium method method i 0. . The ratios of the individual errors at the three common points in the two grids are 3.3 to the results obtained by the second-order shooting method. Eqs.6.9 8.77.468057 Theratio of the normsis grid C.5. The Euclidean normof the errors in Table8.4 and 8. The second-order results obtained in Example8. Theratio of the normsis 15. cm Figure 8. Both of these results demonstratethat the methodis second order.34) and (8.4 by the equilibrium methodcan be extrapolated by the same procedure presented in Section 8.Theerrors are illustrated in Figure 8.25 0. which is presented in Example8. whichpresent the solution by the second-order shooting method.3.36).

_ q(x) EI(x) (8. expressed in the form of Eq. Thethird and fourth derivatives involve five grid points. (8.000000 ~’(x). points i .8 and 8. Consequently. R = 0. 0.006595 0.55).078251 100. (8.Y (8. (II. resulting finite difference the equation (FDE)involves five grid points. Second-order centered-difference FDAs can be developed for all four derivatives in Eq. y".095238 100.1 to i + 1.18 times smaller than the Euclidean normof the errors presented in Table 8. or y".10 and Figure 8.9.y.y . Eq.. C 0.739255 100.25 0. These boundaryconditions can dependon y.552144 37.9. The first and second derivatives involvethree grid points.454 Example8. points i .307621 36.017510 0.0.6.709070 100. points i .9. 0. The two remaining boundaryconditions can be specified on the same boundary.10 are also presented Table 8.45): =J( .. A fourth-order ODE the second-order equilibrium method by Let’s solve the deflection problempresented in Section II.+ 2. All the FDAsshould be the same order. (II. A finite difference approximation(FDA) must be developedfor every derivative Eq.75 1.761905 14.55) Since Eq.000000 T(IV).36) to the results presented in Table 8.55).55) is fourth-order. Example8. y’. 0. For these results.035514C.6 for a laterally loaded symmetricalbeam.56) . which is 13.000000 4. four boundaryconditions are required.55): y. Table 8.000000 T(MAV). cm 0.10.6.Eq. (8. (8. Thoseresults are presented in Tables 8.000000 4.4. Applyingthis FDE each at point in a finite difference grid yields a pentadiagonal system of FDEs.5 The second-order equilibrium method by extrapolation Chapter8 Let’s apply extrapolation to the results obtained in Example 8.4.312952 13.125 = 2. The results at the three common grid points in the two grids are summarized Table 8.2 to i + 2.030185 8.40).. (8. or one on each boundary.2 to i.285714 38. The results in obtained by applyingEq. At least one boundarycondition must be specified on each boundaryof the closed solution domain. The Euclidean norm of these errors is 0.425190 13.00 T(LAV).000000 Error(x).25/0. 0.50 0.3 Higher-Order Boundary-Value ODEs Consider the general nonlinear fourth-order boundary-valueproblempresented in Section I1.290111 36.y.000000 4.00 0.306357 13.000000 4.which can be solved by an algorithm similar to the Thomas algorithm for a tridiagonal system of FDEs.10 Solution by Extrapolation of the Second-Order Equilibrium MethodResults x.

(8. Tnmcatingthe remainder term yields a second-order centereddifference approximation ~"’li: for Y"li ~_Yi-2 -.e.~i-2 gives 0~i+2 "q.~i+2 and . condition is required at each end of the beam. /’(0.0) = 0 (8 . clamped)..0) y( L) = y’(L) = 0.57) y(x)(8.000100x4 ."~ff tttli ~x4 q-~. y(0. I(x) is the moment inertia of of of the beamcross section.63) 8-tr 2f"i +"~Yi zfx2 q-32-ttlt [i Z4Y ~4 + ~.-tttt ~y16 li Z~X4 s 4-~60~(v)]iAx + 7~0~(vi)li6 -b.0cm.62) +’~Y li --720--" ’.~i+1 q--~i--1) -~.61) 5 -~ .-~i-2) = Z~X2 2 -rot..and a uniform distributed load q(x) = q = constant. ~C4 _1_~2 i3(vi)l ’ 6 ~. then y’ is specified. Adding~i+~ and ~i-~ gives 2-rr @i+1 +~i-1) =2~i+~Y i Adding .0) =/’(L) = Thus. for which I = wh3/12..0) y(L) = 0.60) Let’s solve this problemusing a second-order centered-difference approximationfor y"’. including the boundaryconditions. is given by y" = q y(0. (8.Boundary-Value OrdinaryDifferential Equations 455 whereE is the modulus elasticity of the beammaterial. h = 10.4yi-I -F 6y t -.64) Subtracting 4(~i+t +35i_~)from 05i+z +.4~i+1+~i+2_ ~. If is the beam pinned(i.1 -tl! mgy iZ~x 3 1 +~y -tttl li 4 Z~X (8.5 y(x) = 0.0) = y"(0. and E = 90 x 10 9 N/m2.001000x3 + 0.0m.~-. cantilevered).e. let q = -2000...~(vi)li Ax 6 q-¯ ¯ ¯ [i 1~2-’b gyS-ttt i Z~d¢3 2~t]i~C"~ t4 -t ~f. Let’s consider a rectangular cross-section beam. If the beam free (i. where w is the width of the beam and h is the height of the beam.0.If the beam fixed. (8.57) and (8.4~i-1 + @i.64) for ~"[i yields ~"’li = ~i-2 .7~ 0.0 E1 The exact solution of Eq..0) = y’(0. which can be chosen as y = 0.58) 24EI 12EI + 24EI As an example.65) Solving Eq. L = 5. (8. +"" (8.0. the problemto be solved.57) qx4 qLxS qLSx (8.4Yi+ 1 -’F yi+ 2 (8.0N/m. Then Eqs. ~(vi)]i lk~¢ 6 q_. Write Taylor series for ~i±1 and ~i±2 with base point i: fei±l~---fciq-f/[i~’iv~Yli ~ l~0~(v)li ~i±2: ~i q1 :’tl~ z~¢2 -.r 128~(vi) i z~df6q_.012500x 9) (8.66) where xi_2 < ¢ < i + 2. (8.X (8.58) become y" = -0. The ends of the beamare at the same elevation.0cm. w = 5. Let’s assume that both ends of the beam are pinned.0) y( 5. Thus.0.then y" = 0.0. where L is thelength of t he beam One additional boundary . is is then y"=0.. 0.0024 y(0.~i-2) gives (~i+2 q--~i-2) --4(.67) .~(vi)(~) 2 (8. Thus.0) -= f’( 5. _. and q(x) is the distributed load on the beam.

Thus.4y3 + Y4 = -0.0024 (8. Eq.69b) are zero. ApplyingEq.72a) (8.0 2._-0.69c) (8.YB -. (8.0 (8.0: x = 2. (8. respectively.0: YA.0 m.0 4.4y4 + 6y5 .73) yields the A 1 2 3 4 5 6 B i x 0.68) Let Ax= 1.4y~ ÷ 6y2 .69a) (8.0: x = 4. .4y3 +Y4= -0. which can solved very efficiently by a modified Gausselimination algorithm similar to the Thomas algorithm for tridiagonal matrices presented in Section 1.10 Finite differencegrid for the beam.69d).73) is a pentadiagonal matrix.69a) andy in Eq. gives Ya = -Y2 and Ya = -Y5 (8.4yi-i + 6yi .0.0: x = 3.69a) and (8.456 Substituting Eq.0 1.72d) - = |-0"00241 |-0.70a) (8.0024 Expressing Eq.~z Y2 2yl +=YA 0.69b) (8.4y5 +Y6 = -0.0024_] (8. (8.These values are determined by applying the boundary conditions .0024 Y~ .5.4y4 +Y5 = -0.59) yields Yi-2 .72) in matrix form yields -4 1 0 6 -4 -4 6 1 -4 Y3 Y4 Y5 (8.0024 Y2 .0024 Ax Chapter8 (8.4Yz + 6y3 . (8.10 gives x = 1. (8.4y4 + 5y5 = -0.0 Solving Eq.4Y3 + 6y4 .69d) Note lhaty1 in Eq.0024 Y3 .68) at the four interior points illustrated in Figure Eq.73) Althoughit is not readily apparent.V’(0.. (8.2y6 q-Y~ /~x2 ~--- 0.44) at points 1 and 6 gives Y"lt -. Grid points A and B are outside 6 the physical domain. Applying (8.4yi+! 4 +Yi+2= -0.0024 Y3 . Solving Eq.4Y4 + Y5 = -0. and rearranging those two equations yields the following system equation: 5y. with yl = Y6= 0.0.4Y5= -0.4Y3 + 6y4 .72b) (8.0 Figure 8.0 5. .0024 -4y2 + 6y3 .0024 Y2 .0) = y’(L) = 0. (8. 8.70) for YA and y~. (8. (8. (8.71) Substituting these values into Eq.67) into Eq.4y6 +YB= -0.72c) (8.0024| ]. yA and yB are unknown.0 3.70b) Y"I6 -.

000100 0.5 5. (8. The exact solution and the errors are presented for comparison. The ratio of the grid normsis 3. (8. m 0.00 2.018600 0.011600 0.00 3.000150 Y7 0.000600 0.12 Solution by the Equilibrium Method with Ax = 0. 0.000150 Y3 0.006188 0.000131 0.12.000600 0.5 2.019688 0.5 4.000150 Y8 0.011600 0.012000 0. Theratios of the individual errors at the four common points in the two grid Table 8.000150 0.006131 0. m 0.00 4.11.000000 0.11 Solution by the Equilibrium Method with Ax = 1. The Euclidean norm of the errors in Table 8. 0.Boundary-Value OrdinaryDifferential Equations Table 8. 0.0 1.5 1.73) becomes 5 -4 1 0 0 0 0 0 0 -4 6 -4 1 0 0 0 0 0 1 -4 6 -4 1 0 0 0 0 0 1 -4 6 -4 1 0 0 0 0 0 l -4 6 -4 1 0 0 0 0 0 0 0 0 1 0 -4 1 6 -4 -4 6 1 -4 0 1 0 0 0 0 0 1 -4 6 -4 00 0 0 0 0 1 -4 5 -0.000400 0.019200 0.000100 0. 0. Solving Eq.000131 0.019531 0.019200 0.018600 0.000000 Error(x).000000 Error(x).012000 0. In this case.016013 0.000000 0.011600 0.5 3.000056 0.000150 0.74) is readily apparent.000150 Y9 0.018600 0.5 m x.11 is 0. (8.0 3. The Euclidean normof the errors in Table 8.000056 .015881 0.000150 0.018750 0.012345m.12 at the four common points is 0.0 4.0 m x.018750 0.000000 0.011600 0.006131 0.00 1. (8.00 y(x).000000 ~(x).000150 Y6 0.74) The pentadiagonal structure of Eq.123456 m.006188 0. Eq.000400 457 results presented in Table 8.99.000156 0.74) yields results presented in Table8.000150 Y5 = 0.000150 Y4 0.000000 0.00 5.000150 _Yl0.0 y(x).011700 0.5 m.016013 0. 0.0 2.015881 0. Let’s repeat the solution for Ax= 0.011700 0. 0.0 0.018600 0.000000 ~(x).

T’(1.-1 eZ~L) (8.77) and the results into Eq.6 and 10.12.03353501(e + 99. 3.13 and illustrated in Figure 8. Implementation derivative boundaryconditions by of both the shooting methodand the equilibrium methodare discussed in this section.e.11.458 Chapter8 grids are 3. The boundary-valueproblemis specified as follows: T" . A procedure for implementing derivative boundary conditions for one-dimensional boundary-value problems is developed in this section.97. (~c(x)h P (T-Ta)dX : Ti / ~(x). (8. so T’(L) = 0.98.0) = T1 and U(L) = 0. knownfunction value) boundary conditions.2)] T(x) = ~x + Be-~x + T~ Substituting the boundaryconditions into Eq. 3. .99.0.kA ~--~ T’(L) =0 x2 = L Figure 8.11 Heattransfer in a rod with an insulated end. Ta =0..~ = 0 ~(x)= .o~2T = -o~2Ta T(0.78) The solution at intervals of Ax= 0. 8. whichare discussed in Sections 9..75) is [see Eq. T(0.76) -2.5 DERIVATIVE (AND OTHER) BOUNDARY CONDITIONS The boundary-value problemsconsideredso far in this chapter have all had Dirichlet (i.eZ~L) and B = (T1 .7. This procedure is directly applicable to derivative boundary conditions for elliptic and parabolic partial differential equations.96646499e (8.96. (8. (8. and 3.Ta)(1 d. respectively.0C/cm. (8. Let a2 = 16.0) = 0.--~--~ ’--~~(x+dx) dT x.0cm.e.0) = 100.125 cmis tabulated in Table 8.0 C.0 cm L = 1.1 is modifiedto create a derivative boundary condition by insulating the right end of the rod.0 (8. Manyproblems in engineering and science have derivative (i. (8. as illustrated in Figure 8.75) The general solution of Eq.Ta)eZ~L(1q.76) yields -1 A = (T1 .76) gives the exact solution: 4x -4x) T(x) = 0. Substituting these values into Eq. The heat transfer problempresented in Section 8.0C. Both of these results demonstratethat the methodis secondorder.77) (8. Neumann) boundary conditions.

0 Location x.000 0.650606 4.500 ~’(x).250 0.750 0.consider the linear second-order boundary-valueproblem: [~"+P~’+Q~=F(x) ~(Xl) = ~1 and ~’(x2) = ] (8.866765 22.6 0.625 0.79) as a systemof two first-order ODEs: ~’ = ~ ~(x~) ~ E = F(x) .000000 60.Boundary-Value OrdinaryDifferential Equations Table 8.125 0. 8.5.P~ . we are shooting for ~(x2) = Y12rather than for y(X2)= 32" .776782 x.80b) Consequently. cm Figure 8.3 applies directly. cm 0. cm 0.455827 13. except that we shoot for the value of the derivative instead of the value of the function at the boundary.375 0.80a) (8.4 0.81) (8.129253 3.000 ~’(x). 100.661899 459 IO0 8O 6O 40 2O O0 0.Q~ ~(Xl) = ~’(xl) = ~’11 The derivative boundarycondition ~’(x2) = ~’12 yields ~(x2) = Y’I2 (8.79) Rewrite Eq.Asan example. with the single changethat as we vary z(xl) = Y’I1.875 1.614287 5.2 0.688016 36.13 Exact Solution for a Rod with an Insulated End x. the shooting methoddescribed in Section 8.1 The Shooting Method The shooting methodfor derivative boundaryconditions is analogous to the shooting methodfor Dirichlet boundary conditions. 8. (8.12 Exactsolutionfor a rod withan insulatedend.8 1.

777778 197.469136 C/era -395.000000 -21. and shoot for the boundary condition at x = 1.0) = -399.0 C/era and U(0.0C/cm. (8. T = 0.382716 (t).141.0 C/cm.let’s solve Eq.7 A derivative BCby the shooting method Chapter8 As an exampleof the shooting methodwith a derivative BC. (t).82.666667 -67.0)(1)=-405.23) and (8.148148 -49. ¯ T" (0.1.000000 35.13.555556 .0) be the initial conditions at x = 0.000000 .460 Example8.0 ) = 100.0) = T’(0.0) are presented in Table 8. L T(0.0 cm. em 0.15. T(x) C 100.14 and Figure 8. T’(1.0)0)= -207. and for U(0.592593 Repeating the solution with Ax = 0. T(x) C 100.0 C/era.25 0.0 C/cm and U(0.13 . Table 8.25 cm.0C.4 0.000000 31.0) -= -405.666667 20.0) = -395.13.407407 -207.000000 . 0.000000 16.14 Solution x.0 C/cm 4O 2O 0 -20 -4O Locationx. and Ax = 0. ~’(1. Thefinal solution is also plotted in Figure 8.0) = 0.0 -2.24).555556 51. Let ~2 = 16.2 0. (8. The implicit trapezoid methodfor this problemis presented in Example 8.0 C and et a U(0.125 cm yields the results presented in Table 8.~ Figure 8.0)(2) = 395.125.0)(1) = -405.75).75 1.0)(2)=-395.0)(2) = 197. The solution for these two values of U(0.592593 The final solution (obtained by superposition).0) (2) = -395. Let U(0.00 (1) for U(0. Eqs. For U(0.878086 C/cm 100’ 8O 60 o T" (0.00 0. T’(1. cm Solution the shooting by method.666667 52. the exact solution.50 0.469136C/cm.13.16.222222 . and the errors are presented in Table 8.0) = U(1.851852 U(x) (2).0C/cm.0 era.666667 5.0 C/cm (2). C/cm U(x) -405.

136502 5.722974 16.246571 4. cm 0.068189 21.798576 -0.125 cm x.75 1.661899 Error(x).614287 5.00 0.i)Yl+l ~.173962 8.5.193140 461 Table 8.604444 22.1.866765 22.82) at boundary point 2 i s given by Eq. a finite difference proceduremust be developedto solve for the value of the function at the boundary where the derivative boundary condition is imposed.114599 2. second-order boundary-value problem: [ ~" +P(x)~’+ O(x)~ = F(x) ~(xl) =.1. which is illustrated in Figure 8.657932 -0.665600 2.358285 ~(x).50 0.960000 7. variable coefficient.000000 60.~l and~’(x2) =~’[2 ] (8. T(x) C 100. ( 8.000 100.16 at the four common points is 1.25 0.625 0.602820 -0.266667 36. cm T(x)(1).600000 12.46) evaluated at i = (1-~AXp t)Yi_l + (-2-k zXx’2 Qi)Yi + kXp (1-k~.000000 36. Consider the finite difference grid in the neighborhood of point x2. C 100.373971 11.000000 60.25 cm x.679616 The Euclidean normof the errors in Table 8.799360 1.C -0.83) .536007 .875 1.14.012304 11.15 Solution with a Derivative BC by the Shooting Methodfor z~c = 0. The Euclidean normof the errors in Table 8.477786 -0.559771 T(x).731224.~C 2 F I (8.000000 33.500 0. Consider the linear.129253 3.303615 0.703407 14.25 cm is 4.125 0. 100.00 T(x).661899 Error(x). The second-order centered-difference approximation of Eq. C (2). 100.776782 5.650606 3.724478 13.5. (8.000000 60.731349 -0.530211 .381831 -0.375 0.866765 13..455827 13.650606 4.776000 4.806056 3.268775 3.82) A finite difference equation must be developedto evaluate y(x2). 4. TheseEuclidean normsare comparable the Euclideannormsfor the errors in Tables 8.25. C 100.250 0.000000 60. 8.492794 -2. respectively. Theseresults demonstratethat the methodis secondorder.4 to and 8. -3.856612 10.15 for Ax = 0.16 Solution with a Derivative BC by the Shooting Methodfor Ax = 0. grid The ratio of the normsis 4.000000 36.Boundary-Value OrdinaryDifferential Equations Table 8.971581 10.000 0.750 0. which were obtained for the heat transfer problemwith Dirichlet boundaryconditions.2 The Equilibrium Method When equilibrium methodis used to solve a boundary-valueproblemwith a derivative the boundarycondition.030083 36.688016 36.113145C.249254C and 0.323197 -0.776782 8.468760 ~’(x).000000 21.979800C.

3. 2TI_1 . 2 . ~"(1.91a) (8.46): is Ti_1 -- (8..Z~c(2 + PI)~/II (8.84) for ~I+l gives ~I+1 =~I-1 -F 2 2) Ax~’li + Ax 0(Ax (8. FromEq.46).0 C.0T. Example8.87) When (8. (8.4.0 cm Ta = 0. Equations (8.YI+I --YI-1 + 0(Ax 2Ax (8.0) = The interior point FDE presented in Section 8.25 cm.0 and T’(1. A derivative BCby the equilibrium method Let’s solve the heat transfer problem a rod with an insulated end.43). 2) Y’ll -.. (8.75): T" . (8. Let ~2 = 16.0) = 0.14 Finite difference grid at the right boundary. (8.86) Substituting Eq.85) Truncating the remainder term yields an expression for YI+I: YI+I =7I-1 + 2 Ec)’lz (8.88) 2 (2 + cfl A~c2)T/+ l = -o~2 Ax T Ti+ a (8. can It be approximated by expressing the derivative boundary condition at point I in finite difference form as follows. (8. and Ax= 0. (8.86) into Eq. Recall Eq.0) = 100. (8.0.3. (8.89) (8.84) Solving Eq. a tridiagonal systemof equations results.89) ApplyingEq. (8.. using in the equilibrium method.8.462 Chapter8 I-I I~ X=X 2 I+I x Figure 8.87) at the-right end of the rod with P = F = 0 and Q = _~2 gives 2TI_ 1 -- (2 + ~2 l~2)Tl = --2 /~t ~"ll (8. Eq. Eq.87) is used in conjunction with the centered-difference approximation Eq.1) (8. wherepoint I + 1 is outside of the solution domain.~2T = -~2Ta T(0..0T I = 0 (i = I) I .83) and simplifying yields the desired 2yi_ + (--2 ~2QI)YI = FI -. + T/+~ = 0 (i = 1. whichcan solved by the Thomas algorithm.75). Eq.90) -2. at the interior points.90) become T/_l . The value ofyl+1 is unknown.91b) ..

0 The results are presented in Table 8.125 cm x.688016 36.250 0.000000 1.661899 Error(x).0 3 T2 .129253 3.000 0.0 (8.25 cm x.382979 4. C 100.292144 0.380231 0.776782 5.0T4 + T5 = 0.289942 3. 100. respectively. The Euclidean normof the errors in Table 8.776782 8.188482 0.92a) (8.3.614287 5. C 100.0 4 T .255319 ~’(x).Boundary-Value OrdinaryDifferential Equations 463 Applying Eqs.875 1.17 Solution with a Derivative BC by the Equilibrium Methodfor Ax = 0. cm 0.0= 0.661899 Error(x).00: Tl .0 3 2.0 T1 = ~’1 = 100. cm 0.431107 1. (8.000000 38.3.455827 13. 0.75 1.18 Solution with a Derivative BCby the Equilibrium Method for Ax = 0.81.848006 5.8. 1.91b) at the four grid points yields the following tridiagonal system of equations.17 is 2. Repeatingthe solution with Ax = 0.151383 . TheseEuclidean norms are comparable to the Euclidean norms for the errors in Tables 8.00 0.839088 4.0T2 + T = 0.650606 4.500 0.866765 22.92c) (8.732373 0.50 0.813282 ~’(x).650606 3.310649 0.000 T(x).0T5 = 0.000000 60. whichalso present the errors for extrapolation of these second-orderresults.3.536997 The ratio of the grid C.00 T(x).351249 0.297872 14. errors is 3.0T3 + T = 0.807076 14.375 0. 100.75: x----. Table 8.766412C and 0.15.17. The Euclidean norm of the errors in Table 8.116835 0.246996 22.92b) (8.25 0.18. Theseresults demonstratethat the methodis secondorder.000000 60.866765 13.593420 Table 8.92d) ~’x[1.893617 6.125 0.233719 0. which can be solved by the Thomas algorithm: x = 0. whichwere obtained for the solution of the heat transfer problemwith Dirichlet boundaryconditions by the equilibrium method.0T4 .18 at the four common points is 0.25: x = 0.125 cmyields the results presented in Table 8.045460 C.000000 36.468057C.50: x = 0. 0.998665 37.625 0.068925 8. The errors are presented in Figure 8.750 0.91a) and (8.7 and 8.160690 0.3.1.

~(x2)-F B~/(x2):C (8.Ayi) (8. (8. (8.75 1.5. which can be solved by the Thomas algorithm. 0 8.94) for ~i+1 and truncating the remainderterm yields YI+I = YI-1 + ~--(C .A finite difference approximation for Yi+i is obtained by approximating ~/(x2) by the second-ordercentered-difference approximation. A mixed boundary condition is implemented in the shooting method simply by varying the assumedinitial condition until the mixedboundarycondition. (8.46).93). yields a tridiagonal at system of FDEs. Eq. cm Figure 8.95) Solving Eq.93) becomes z) A~:1 +B~i+1 -~z-~ + 0(Ax = C 2Ax Solving Eq. Eq. with minormodifications. Eq.95) in conjunction with the second-order centered-difference approximation of the boundary-valueODE the interior points. (8. Eq.464 Chapter8 ° 10 -1 10 10_2 -3 10 0.50 0. A mixedboundary condition is implementedin the equilibrium methodin the same manneras a derivative boundarycondition is implemented. Thus.93) Mixedboundary conditions are implementedin the same manneras derivative boundary conditions. (8.3 Mixed Boundary Conditions A mixedboundary condition at the right-hand boundary x2 has the form I A. satisfied at the other boundary.00 Location x.25 0. (8. .94) (8.43).15 Errors in the solution by the equilibrium method.

In such a case.Boundary-Value OrdinaryDifferential Equations 8. 8. x = X.17 Finite domainapproximation.17.16.16 Boundary condition infinity.4 Boundary Condition at Infinity 465 Occasionallyone boundary conditionis given at infinity. ~(oo) = 0 X1 X2 X 3 X Figure 8. . infinity simply meansvery far away.5. The boundary-value problemis then solved in the usual manner. Replace(x~ with a large value of x. the boundaryconditions might be ] ~(0) = D0 and ~(~x~) = ~o~ [ (8.96) Twoprocedures for Derivative boundaryconditions can also be specified at infinity. as illustrated in Figure 8. [ ~(~x~) = ~oo --> ~(X) = ~oo large X (8. Anasymptoticsolution at large values of x.4. For example. our interest is in the near BY] ~(oo) B = ~ X 0 Figure 8.1 Finite Domain In this approach. 2. The major problemwith this approachis determiningwhat value of X. for bodies moving through the atmosphere. yields a reasonable solution to the original problem. the boundary condition at x = oo is simply replaced by the same boundarycondition applied at a finite location. In most cases. Thus.5.97) This procedureis illustrated in Figure 8. say x = X. at _ ~__~ Near region. if any. implementing boundaryconditions at infinity are: 1.

successively larger values of X.466 Chapter8 region far awayfrominfinity. ~" + P~/+ Q~. . x = X.4. In manyproblems.the behavior of the solution near x = ~ is much simpler than the behavior in the near region. it is quite easy develop higher-order methods(e.~2 (8. X2. However.~(x2) = . (8.g. 8. including the boundarycondition at infinity. to yield the solution . illustrated in Figure 8. = F(x) . and substituting that value of x into Eq.99) The boundary condition ~(~x~) = ~ is replaced by the boundarycondition ~(X) = Y.Pl and. 8.. (8.98) Theboundarycondition for the solution of the original differential equation is determined by choosing a finite location. Twoprocedures for obtaining fourth-order equilibrium methodsare presented in this section: The five-point method The compact three-point method . As discussed in the previous subsection.100) Whensolving ODEs such as Eq.5. etc.18. it more difficult to develop equilibrium methodshigher than second order. the value of X can be varied to determineits effect on the solution in the near region. In that case. The solution in the near region can be monitoredas X increases.100) by the shooting method.~(xl) = . and the boundary-value problemis solved for each value of X. the fourth-order Runge-Kutta method).2 Asymptotic Solution A second approach for implementing boundary conditions at infinity is based on an asymptoticsolution for large values of x. and the simplified differential equation can be solved exactly.. 2. until successive solutions in the region of interest changeby less than someprescribed tolerance.98) obtain ~asymptotic(X) F(X) : (8.18 Asymptoticsolution approximation.6 HIGHER-ORDER EQUILIBRIUM METHODS Consider the general linear second-order boundary-value ODE: [ 1.~asymptotic(X) -~- F(x) X< x < cx~ (8. perhaps by linearization. can be chosen.__~Near region ~(~) = N A~ ~ ~~~~tic solution 0 X x Figure 8. denotedby X1. The governing differential equation can be simplified.

A methodsimilar to the Thomas algorithm for tridiagonal matrices can be used to give an efficient solution of the pentadiagonal systemof FDEs. Fourth-order approximationsfor ~’ and ~/’ require five grid points: i. as it is whensolving boundary-value problems by the equilibrium method.30~i q.2 to i + 2. Y’li ~--. .~i +~’1i(2 AX)+ ½~"1i(22 + gy i~-AX)+ ~ivli (2 6 + 7~Ylit 5 -t-i~0yvli(2 Ax) 1 . wherethe centered fourthorder FDAcannot be applied.~’1i(2 Ax)+ ½~"1i(2 . +. ~Y -~ 1 = iv Ii( 2 Ax) 4 2 -.--Yi+2 @ 8"~i+112~X-..101 d) A finite difference approximation (FDA)for ~li can be obtained by forming the combination-~i+2 + 8~i+1.101b) (8..6. Nonsymmetricalfourth-order FDAscan be used at these points..1 -vi Ax6 1 -v li +iTgY Z~X -7--NY .101a) (8. [i(2 (8.~’li z~c-t-½~ttli -iv 5 4.1 The Five-Point Fourth-Order Equilibrium Method 467 Consider the five-point finite difference grid illustrated in Figure 8. When overall algorithm is the implicit.102) A FDA for ~ttli can be obtained by forming 16~i+1 -.V"Ii(2 Ax)3 I -v 2 1 -~i ff0YIi( ~x)s + 7-WOY ~c)6 +.6.19 Five-point finite difference grid. A pentadiagonal matrix results. The sixth-order Taylor series for ~(x) at these points with base point i are given 1-m (9 3 ~i+2 = .gYi-" i -}ff0Y li ~i-2 =~i 1 -v z~X54.8~i-1 -t-~i-2" Thus. Problemsarise at the points adjacent to the boundaries. J--2 iil "i i+’l i@2 x Figure8.1 -vi ~c3 +~yl -iv i AX4 --~-rY i /~C6 +.16~i_1 --~i-2 2 --~i+2 12 Ax (8.Boundary-Value OrdinaryDifferential Equations 8.Ptli ~C ½~"[i -. z2 Ax) + "’" z~’2 -t.--~i+2 "-~ 1@i+1-. implicitness is already present in the FDEs.2 The CompactThree-Point Fourth-Order Equilibrium Method Implicit three-point fourth-order FDAscan be devised.~vi.~i-2t_ l_~0Yv(~ ).8~i-~ -t- 4 .l r’’’’-gy h AX3 q_ _~yl i ~f4 ~i+l = ~i -t.~. ~)4 (8..30~i + 16~i_ 1 --~i-2" The result is ~ttli -.19.103) These FDAs can be substituted into the boundary-value ODE give an 0(/~x 4) finite to difference equation (FDE).. 8.. Ax the combination (8.101c) z~c2 ~i-l -~ Pi -. or second-order FDEscan be used with someloss of accuracy.

116) .2~’1i ÷ #’li-1 = AxZY"li ÷ 0(AX4) (8.101b).105) WriteTaylor series for ~’1i+1 and ~’[i-1 with base point i.. and ~i-1. Y -. ÷ ~ (~2~tli = ~ ÷ 0(AX4) (8. AX21 +gYAX3 ’~-2~v[i 4 ’~-"" -iv -’ -’ -" --.~1i--1) gO --~i+1 2Ax --~i--1 4) 0(Ax ÷ (8.107) yields #’li+~ ." --gY -iv Ax +~yVlli’ ~4.~i--1 -. (8.111) 62~’1i ~’1i+1 2~’1i÷~’li-I ----Substituting Eqs. Thus. Addingthe Taylor series for ~i+l. respectively. Equatingthe left-hand sides of those two equations gives -t Yli + 1 -t Ii+l .1 13) SolvingEq. Eq.115) Now let’s develop the compactthree-point fourth-order finite difference approximation for ~/. (8.112) (8. adding and subtracting f/l i.107) AddingEqs... i Ax Y [i+l =Y[i +Yli z_~x~-~y li 21 .110) Definethe first-order and second-order centereddifferences 6~i and 62~/I i. (8. (8. Eq. as follows: (~i = ~/+1 -. (8.101b): ~i+1 --~i--1 = 3 2 Ax~’li + ½ Ax ~"[i ÷ 0(AxS) (8.~’li-1=-’ [i --~/’li Z-~ct~y -.468 Chapter8 First..1 -ttt.112) into Eq.2~i +~i-1 = Ax2~"li + ~2 Ax4~ivli + 0(Ax6) (8.109) are identical.110) ~/l.101c).108) Dividing Eq.106) and (8.114) Truncating the remainder term yields an implicit three-point fourth-order centereddifference approximation ~’1i: for 5Yi Y’li = 2 Ax(1 ÷ 32/6) (8.105) and (8.--1 li Ax 3 i (8.104) by 2 Axgives ~i+1 -..109) Theright-hand sides of Eqs.2~’1i +Y’li-a) = Y’li + ~ Ax2y’li+ 0(Ax4) g05 (8. Eq. Eq.101c).106) (8.104) Dividing Eq.108) by 6.Yli+~ Ax2y -tit Ii÷0(AX4) (8. fromthe Taylorseries for ~i+1. (8. (8.2~’1i ÷..Sx(! + 52/6) + O(Ax4) (8. let’s developthe compact three-point fourth-order finite difference approximation for ~/.111) and (8.113) for ~/li yields Yl. - ~Y~ 2 .~i--1 (8. Subtract the Taylor series for ~i_1. and rearranging gives Y’li + 1 -! Ii+l . (8. 2Ax -. (8. (8. (8. gives ~i+1 .

P). (8.116) by 2 gives -" a=Y i-~ ~ Ax2~ivliW0(Ax4 ) kx ~ WriteTaylor series for Y"li+~and )"[i-~ with base point i.~" li + ~2 62~’t li = ~ Off 0(Ax4) SolvingEq.119) yields . If not...VOff O(x. .p= (8. ~i+1 -.121) are identical.115) and (8.120) Dividing Eq.. .tli = (8.122) Definethe second-ordercentered-differences 625~i and 62y"1ias follows: 625’i 5~i+~ +5~i-~ = .125) for ~"li yields (8.119) AddingEqs..P"Ii+I. Equatingthe left-hand sides of those equations gives ~t.117) (8.128) yields the implicit fourth-order difference equation: 62yi ZDC2(1 Off 32/12) 6y i t-P~2 kx(1 + 32/6) ~. (8.2~Pi off ~i-1 469 (8. li Off ~ @"li÷l 2.PiOff~i-1) Off 0(~f4) (8.t li3ivl Ax2 +gY/~x3 off2-~vl]i " /~x4off’’’ ]i l-v ~"Ii+I=Y li+Y IiAx+$.123) (8. Thus.. .i .127) into Eq.. (8.124) into Eq.124) (8.P"IiOff.123) and (8.~ivli 4) off 0(/~x (8. the systemis linear.Qiyi =F~ (8. the systemof FDEs nonlinear. (8.~).126) Truncating the remainder term yields an implicit three-point fourth-order centereddifference approximation ~"1i: for 62y i ~0C2(1 Off 62/12) y.118) and (8.2. (8. -m 1-iv 1~62 1-’¢ ~kx3_[_ 1-vi =Y [i--Y ]iz~XOff~Y i --gY [i --~Y Z~x i .120) by 12.121) Theright-hand sides of Eqs. (8. (8.~"li-1) ~-~_ -= 1 (~i÷l -.129) If P and/or Q dependon y.117) and (8.122) .127) Consider the second-order boundary-value ODE: ~" + P(x.2~i = Y -OffY 62Y"1i -" Ii+l2.2~"1ioff’It[i1 = 2 Z~ . (8.118) (8.~"1i-1 4 -. and rearranging gives ~ttli off-~2(~ttli+l -- 2~"1ioff~ttli-1) =~t*[i-{-~2 2~’2 ~ivli + 0(~4) (8.125) 4) = Y"le ~2(1+ a~/12) O(~x + (8.~ li -" -"li-~ Substituting Eqs... t. adding and subtracting Y"li.Boundary-Value OrdinaryDifferential Equations Dividing Eq.128) Substituting Eqs. (8. is .

1.00 T(x). let’s solve the heat transfer problempresented in Section 8.000000 4.709070 100.083333.000000 4. T" .o~2T : -o~2Ta T(0.916667Ti_~ .o~2Ti : -o~2Ta Ax2(1 + 62/12) Simplifying Eq. : --c~2 Ax2(1+ 62/12)Ta (8. 0.131) yields ~2T i -52 (8.0) = 100.238512 36. (8.766412C for the errors in Table 8.11 times smaller than the Euclidean normof 1.833333Ti + 0.8.2T/+ T/_~) = _~2 kx2 Ta i (8. Repeating the solution with Ax= 0.75 1.0) = 0.132) Expandingthe second-order centered-difference operator 32 gives 2 kx 52 (T/+~ .131) ~2(1 q.+1 0 (8.50 0.0 C. -0. The Euclidean normof the errors in Table 8.25 cm x.005918C.52 zgv2 T/ 12 (T/+~ . and Ax = 0.2.127) into Eq.134) becomes cm. The compact three-point fourth-order equilibrium method As an exampleof the compactthree-point fourth-order method. grid Table 8.9.fl)T/_ z . (8. Equation (8.20.916667T..25)2/12 = 0.283048 13.470 Chapter8 Example8.133) where fi2Ta = 0 since Ta is constant. The Euclidean normof the errors in Table 8.fl)T/+ 1 = -12fiT a ] Let 52 = 16..0 C and T(1. (8.051599 -0.134) l (1 .092448C.130) Substituting Eq.125 cmyields the results presented in Table 8.0 (8.290111 36. (8. C 0.25 fl = 16(0. and solving the tridiagonal systemof FDEs the by Thomas algorithm yields the results presented in Table 8.62/12)T.135) ApplyingEq. Defining fl = 52 kx2/12and gathering terms yields (8.00 0.2T + Ti_~) .15) at the three interior grid points.25 0.073081 .19 Solution by the CompactFourth-Order Equilibrium Method for kx = 0. T a = 0.635989 100. Thus. cm 0.023309 -0. transferring the boundaryconditions to the right-hand sides of the equations. whichis 19.20 at the common points of the two grids is 0.19.0 cm-2.000000 Error(x). which gives I = 0.19 is 0.000000 ~’(x).130) gives ~52Ti .306357 13.(2 + 10ft)T/+ (1 .

9.290111 22.62. (8.004677 -0.4 for solving single nonlinear equations.136) The solution of Eq.136) by the shooting method. 2.the problemf(x)= 0 is rearranged into the form x g(x).7 THE EQUILIBRIUM METHOD FOR NONLINEAR BOUNDARY-VALUE PROBLEMS Consider the general nonlinear second-order boundary-value ODE: f/’+P(x’fe)f/+Q(x’fOf~=F(x) ~(XI) =~l and~(x2) =~2 (8.375 0.500 0. such methodscan be applied directly to nonlinear boundary-value ODEs described in Section 8. and anini tial val ue . -0.003305 -0.003880 471 whichis 79.20 Solution by the CompactFourth-Order Equilibrium Methodfor Ax = 0.306357 7.170109 36.750 0. The solution of Eq.709070 60.002360 -0.as discussed in Section 8. In that method.3 for linear boundary-value as ODEs.000000 ~"(x).3.909479 4.000000 Error(x). The ratio of the normsis 15.875 1. The shooting methodis based on finite difference methodsfor solving initial-value problems.09 times smaller than the Euclideannormof 0.908760 4. straightforward.125 0. whichyields a system nonlinear FDEs. (8. C 0. 8.000719 -0.001494 -0. whichdemonstratesthat the methodis fourth order.solve nonlinear initial-values ODEs directly. era 0. Iteration Newton’s method 8.304863 7.800080 13.286807 22.000 T(x).000000 1.125 cm x. Consequently.000000 1.7.802440 13.704394 60. Two methods for solving nonlinear boundary-value ODEsby the equilibrium methodare presented in this section: 1.Boundary-Value OrdinaryDifferential Equations Table 8.1 Iteration The iteration methodfor solving nonlinear boundary-valueODEs similar to the fixedis point iteration methodpresented in Section 3.625 0.468057C for the errors in Table 8.000 0.004200 -0.165910 36.Explicit methods.614212 100.such as the Runge-Kutta method. 0.250 0.136) by the equilibrium methodis more complicated.618093 100. since the correspondingfinite difference equation (FDE)is nonlinear.

Preserve the general character of the ODE by lagging the lowest-order terms in a group of terms. a linear variation fromx1 to x2. y Initial stepfunction . Assume initial approximation for the solution: y(x) (°). 3. the initial approximation y(x) (°) could be a step functionat xl. The solution of a nonlinear boundary-value ODE iteration proceeds in the by followingsteps. y(x)(°)] = P(x)(°). a step functionat x2. it should be used. . Developa finite difference approximation of the ODE. P(x.)(k+l)(~t)(k)(~l/2)(k) (8. Solve the systemof linear FDEs obtain y(x) to (1). P(x) Repeat steps 4 and 5 to convergence.20 Initial approximations iteration. for the Dirichlet boundary conditions ~(xl)=~1 and ~(X2)=~2.’//~.20. etc. Calculatethe lagged coefficients basedon the newsolution. (~. y) = P[x..472 Chapter8 of x is assumedand substituted into g(x) to give the next approximation for x. Choosefinite difference approximationsfor the derivatives and construct the linearized FDE. For example. Choose the initial approximation similar in form to the expected form of the solution. 1. (1). or a quadratic variation fromx1 to x2. y(x)(0. 6.Linearize the ODE by lagging all the nonlinear coefficients. 5. as illustrated in Figure8.~. Thestep functions are obviously less desirable than the linear or quadratic variations. For example.~t~l/2 . etc.~ .137) 2. A good initial an approximationcan reduce the numberof iterations.. 4._).Linearfunction ~ Final.. Calculatethe laggedcoefficients.step I A a b Figure 8. The linear variation is quite acceptablein many cases.. The procedure is repeated to convergence. for . If additional insight into the problem suggests a quadratic variation.// Quadratic function [~. A bad initial approximation maynot converge.

Eqs.10.143) (°~ are presented in the first line of Table 8.138) (8.050781 0.44).2y~ k+l) +(1 + 0. assumea linear variation betweenthe of boundaryvalues.25)2(1 + (1.43) and (8.50) 3) (8. (8.0. Substituting these values into Eq. x=1. (8.142b) k+l) ~+~)+ ~) (1 .138) by iteration using the second-order equilibrium method. This solution y(x) (2) is presented in the y(x) (°) = -0.5x .142a) ~+0.21.25)2(1 + (1.25y~))y(4~+O (1.00000 Y4 -7.0 and ~(2.144) 0.25y % = 4(0. y(x)(°).093750 (8.SO: +(1+0. and lag the nonlinear coefficient of~’.25y(4~))y~ .0) = 4. (8.21.0.142c).140) Multiplying Eq. For the initial approximation y(x).75: Transferring ~(1.25)2(1 q. This gives yi+l -.0.25Y(4k))y~ (1 = 4(0.25.25y)y k x: 1. respectively.0) = 2.25y~k))y~ = 4(0. (8. Applying (8. (8. (8.0) = The exact solution to Eq.142c) x = 1.which can be solved by the Thomas algorithm: 0. whichcorrespondsto The values of y(x) k = 0.140) by 2 and gathering te rms yields th e nonlinear FD E: /Axyi )Yi-I --/--wcYi )Yi+l : 4 AxZ(1+x~) (8.81250 Y3 = 1.141) at the three interior points of the uniformgrid gives Eq.0) = 2.142) is then reevaluated with y(x) (1~.138) ~(x) = ~ +_1 x 473 (8.000001 [y2 1 [ 0.142) yields the following system of linear algebraic FDEs. (8.25: (1. and a new tridiagonal system of linear FDEsis assembledand solved by the Thomas algorithm. Equation (8.5 to the right-hand sides of Eqs.142a) (8.2yi +Yi (k+l) 1~ (k)(~i+l (k+l) --Yi-l~ =4+ 4x~ (8.5 + 2.00000 1 The solution y(x)(0 to Eq. A nonlinear implicit FDEusing iteration Consider the following nonlinear boundary-value ODE: ~" + 2~/= 4 + 4x3 ~(1.141) Let Ax= 0.75) 3) (8. Let’s approximate~’ and ~/’ by second-order centered-difference approximations.00000 1.269531 I-2.3oundary-Value OrdinaryDifferential Equations Example8.25) 3) (8.144) is presented in the secondline of Table 8.(1.139) Let’s solve Eq.whichcan be solved by the Thomasalgorithm. (8.0 and ~(2.03125 -2.18750 --2. Thus. yields a tridiagonal system of FDEs.2y(4 ~+ + 0.00000 1.65625 0.

for y(x) y(x)(1) = Y(x)(°)+r/(x)(°) O) Y(x) (8.477350 2.911306 2. 8.354750 2.2 Newton’s Method Newton’s method for solving nonlinear boundary-value problems by the equilibrium methodconsists of choosing an approximate solution to the problem Y(x) and assuming that the exact solution/~(x) is the sum of the approximate solution Y(x) and a small perturbation r/(x).5 third line of Table 8. Convergence faster if a good initial is approximation available.21 Solution of the Nonlinear Implicit FDEby Iteration x Chapter8 (k) 0 1 2 11 y(x) ~°) y(x) (l) y(x) (2) y(x) ¢11) y(x) ~(x) Error 1.395004 2. ~(x) = Y(x) + (8. The procedurecan diverge for a poor (i.25 2.50 3.146) The next trial value for Y(x) is Y(x)~)=y(x) (~).5 4. The general iteration algorithmis y(x)(k+l) = y(x)(~+~) Y(x)(~) + q(x)(~) ] = (8.0 2.022177 2. for the correspondingr/(x) Solve Substituting these results into Eq.643216 3.5 4. Thus.0 2.147) Newton’smethodconverges quadratically. The procedureis repeated with the approximatesolution until q(x) changesby less than some prescribed tolerance at each point in the finite difference grid. The procedure is implemented follows.943774 2.with its quadratic convergence.362500 -0.631998 3. This procedureis repeated a total of 11 times until the solution satifies the convergence criterion [y}k+l) _ y~k)[ < 0. ~1).145) to obtain (1). This process yields a linear ODE q(x).625000 2. Substitute these results in Eq.145) is substituted into the nonlinear ODE.0 1. terms involving products and r/(x) and its derivatives are assumed be negligible compared linear terms in q(x) and to to its derivatives.00 2. Assume (°).is especially useful if the .474 Table 8.7.0 2.75 3. (8.5 4.0 2.00 4. The solution is not exact.145) Equation (8.916667 -0.5 4. which can be solved by the for equilibrium method.. Solve the corresponding as Y(x) linear problem q(x)(°).007750 1.633929 -0.875000 3. since higher-order terms in q(x) and its derivatives were neglected. (8.000001. This procedure is applied repetitively until ~/(x) changes by less than someprescribed tolerance at each point the finite difference grid.681987 3.145) gives y(x) (2).Newton’s method.250000 3.001931 2.005361 1.21. unrealistic) initial is approximation.e.

0) = Let Y(x) be an approximate solution.2 Gi (8.151) into Eq.149) (8.0) = 0. A nonlinear implicit FDEusing Newton’s method Let’s solve the problem presented in Example8.155) The boundary conditions on ~(x) must be transformed to boundary conditions 7(x). r/(1. etc.153) (8. Example8.159) . r/(2.0. yields y(x) = Y(x) ÷ rl(x).154) (8.2YY’ (8.150) ~"= }~"+ n" Substituting Eqs.Boundary-Value OrdinaryDifferential Equations same boundary-value problem must be worked many times with different conditions.Ax 2 ~ and gathering te rms yields th e li near FD Multiplying Eq.10 by Newton’smethod.11.Y" . can be solved by the equilibrium methodto yield 7(x). Equation (8.154).0 .0) =3(1.157) are also approximated by second-order centered-difference Y"[i =Y~+~ 2Y.151) (8. At x = 1.0.0) = q(2. 3" + 2~’ = 4 + 4x3 3(1.158) (8. + Yi-~ ..149) to (8.0 (8.157) by E: and (1 .7i-1 ~-2Y~ 7i+1-2--~c7i-~ -. The procedureis applied repetitively to convergence.227i ~_2Y.0) = 2.148) (8. (8.0) = 0.0.0) . Equation (8.0. Let’s approximate7’ and 7" by second-order centered-difference approximations.148) (Y" + 7") + 2(Y + r/)(Y’ + 7’) = 3 Expandingthe nonlinear term yields (Y" + 7") 2(YY’ +Y7 + nY + nq’) = 4 + 3 ’ ’ Neglecting the nonlinear term qT’ in Eq.Y(1. with the boundary conditions 7(1.0)= 0.156) since the approximate solution Y(x) must also satisfy the boundarycondition. (8. (8. Then ~(x) = Y(x) + 3’ = Y’ + 7’ 475 boundary (8.0 and ~(2. In a similar manner.154) becomes 7i+1 Z~c q.Y’+~ Yi-I 2Ax (8.153) yields the linear ODE: [ rl" + 2Y7’ + 2Y’7 = G(x) ] where G(x) is given by G(x) = 4 + 4x3 . (8.[i7i Gi = Values of Y’li and approximations: Y’li . grid sizes.152) (8.AxY/)Ti-~ + (-2 + Ax 2y’li)qi + (1+ A x Y/) ~]i+I -~~.

0.75: 0.50 2.81250 q3 : 0.1.0 4.65625 0. (°).22.25 .875 4.633929 -.68750q2 + 1.476 (°).1.000000 2.0 4.0 0.0625 Chapter8 Let Ax = 0.250 3.0 2.23.631998 0.500 Y’(x)(°) 2.18750 0.81540r/4 = 0. (8. Table8.160c) Substituting r/1 = q5 = 0 into Eq. Solving Eq.5 0.0 2.3125 1.378906 (8.00 1.0. Thefirst line presents the values of x.161) by the Thomas algorithmyields the values of t/(x) (°) shown line 3 of Table8.23 Solution of the Nonlinear Implicit FDEby Newton’s Method k 0 1 2 3 4 (~) (~ y(x) =y(x) (k) t/(x) (°) Y(x) y(x) = (°) (°) tl(x) (1) = y(x) (1Y(x) (0 r/(x) (2) (2) y(x) =Y(x) (2) q(x) (3) = y(x) (3Y(x) (3) /’](x) (4) y(x) 35(x) Error(x) 1.5 0. Y"(x)(°).25.153).0 G(x) .250000 -0.160) and writing the result in matrix form gives --1.161) Theresults are presentedin Table8.631998 3.2500 6.50 1.0 1.0 0.50 Y"(x)(°) 0.25 2.00 Y(x) (°) 2.23 gives y(x) (0 = Y(x) (°) + r/(x) (°) presented in line 4.1.354752 .269211 2. Y(x) Y"(x) and G(x) x 1.23.50 3. and the second line presents the values ofy(x)(°) = Y(x)(°).078125 0.362500 -.323819 2. (8.005361 1.0 0. which will be used as Table 8.160b) (8. and G(x) are also are presented in column2 of Table 8.96875r/5 = 0.625 3. 1. as given by Eq.078125 0.0. ApplyingEq.0 0. Adding in lines 2 and 3 of Table 8.22.00 2.926181 -0.0 4.03125r/3 . (8.00000 1.082031 1 -1.68750 0.18750r/2 .632027 .911306 0.34375ql .014870 2.000005 2.644466 -0. (8.0 0.0.911306 2.159) at the three interior points of the uniform grid gives x = 1.5 4.082031 0. The values of Y’(x) presented in Table 8.50 2.875000 -0.355789 -0.00 4.03125 -1.354750 0.0 2.5 0. Y’(x) (°).354750 2.68750q3 + 1.007750 1.0 0.916667 .625000 -0.0.50: x = 1.001037 2.68750 1.68750q + 1.378906 4 (8.5 .000029 3.75 2.000000 3.230533 3.22 Valuesof x.0 4.25: x = 1. These values ofy(x) (°) = Y(x) (°).75 3.5 0.00000 1 [ q2 1 I -0. For the initial approximation y(x) (°) assume a linear variation (°) betweenthe boundaryvalues.1.001931 2.0 2.911311 .0.68750 q4 0.012439 3.160a) (8.65625q3 = -0.000000 2.000 2.0 2.000002 2.

23.162) (8.xi_~).. In problems wherethe solution is nonlinear.¯ ¯ ’ (8. the exact solution ~(x). (8.~(x) ar presented at the bottomof the table.10. Some general considerations apply to the use of nonuniform grids.~f+ Z~2_)J~xxli i Al - . Abrupt changesin the grid point distribution can yield undesirable abrupt changesin the solution._respectively. a nonuniform distribution of grid points generally yields a moreaccurate solution if the grid points are clustered closer together in regions of large gradients and spread out in regions of small gradients.+"" where Ax+= (xi+1 . The use of nonuniform grids in the finite element methodis discussed in Example 12. Four iterations are required to reach convergence [ylk+~) -ylk)l < 0.163) ~-~ =f -fxl.3. Ax2.162) by Ax_and multiply Eq.163) by Ax+and add the results to obtain Z~C_j~//+l -~. The secondapproach is not developedin this book.12. Ax_ + ½J~z[._ 1 = (Z~X_ + Z~dC+)j~/ -’]’1 ~ + ~( _ ~3+_ ~x+ (0)J~xl/--]/~X3)j~xxx[ ½ (/~X_ ~ "~. Secondarily. Direct solution on the nonuniform grid using nonequallyspacedfinite difference approximations (FDAs) 2. the differential equation is transformed from the nonuniformly discretized physical space D(x) to a uniformly discretized transformed space/3(~). Foremostof these considerations is that the nonuniform grid point distribution should reflect the nonuniform nature of the solution. and the Error(x) y(x)(~.xi) and Ax_= (xi . The first approachis illustrated in Example 8.8 THE EQUILIBRIUM METHODON NONUNIFORMGRIDS All of the results presented in Sections 8.164) .5 in Section 12. Considerf"(x) first.Z~X+j~t.000001. nonequallyspaced finite difference approximationsfor all of the derivatives in the differential equation are developeddirectly on the nonuniform grid. Multiply Eq. and equally spaced finite difference approximations employed the uniformlydiscretized are in transformedspace. to The final solution y(x) (4). Repeatingthe solution with Y(x) yields lines 5 and 6 in Table 8. there are two approachesfor developinga finite difference solution on the nonuniform grid: 1.21.--~r~li Ax3. the nonuniform grid point distribution should be relatively smooth. on the nonuniform finite difference grid illustrated in Figure 8. Let’s_ developcentered-differenceapproximations the first and secondderivatives for )7’(x) andf"(x). 8. the nonuniformgrid point distribution should attempt to match that functional form. Eleveniterations are required by the iteration method presented in Example8. Write Taylor series for~+~ and~_~: J~i+l =f/ "3l-fx[i z~X+ "q-½L[i z~2+ "q-~J~xxx[i ~X~_ "~. Methodsfor implementingthe solution by the finite difference approach on nonuniformgrids are presented in this section. Once a nonuniform grid point distribution has been determined..if the exact solution has a particular functional form. In the second approach. Solution on a transformed uniform grid using equally spaced FDAs In the first approach.7 are basedon a uniformgrid. (8.3 to 8. For example. (8.Boundary-Value OrdinaryDifferential Equations 477 (t) (t) Y(x) for the next iteration.

~xli +.167) Truncatingthe remainderterm yields the desired result.(1 + 13)~ + i Ax+ Ax_(1 + 3) ~ 3 Ax+(fl . (8.165) gives ½J~xr]i Ax_ Ax+(j~ 1) ~- =d~/+l -- 2 (1 ÷ fl~/+ fir//-1 .168) = AX+ AX_(1 3) + A finite difference approximation the first derivative.(1 .1)f~li +’" (8.fl2)j~/ _ fl2j~//_l _ lax Ax~xxxli +"" (8.fl2)f _ fl2f_l AX+(1 fl) + Note that Eq.0) = 100.li = 2[~/+1..o~2T : -o~2Ta T(0..166) Solving Eq. The equilibrium method on a nonuniform grid Let’s solve the heat transfer problempresented in Section 8._~] ~2T/----.170) f~li =fi+1 . (8.170) is second-order accurate even on a nonuniform grid.166) forfxxli yields ~.(l + fl)f + flf_~] (8. (8. (8. whichis first-order accurate: fxxli 2[f+~ . The boundary-value problem is T" .169) 6 + Ax+(1÷ fl) Truncatingthe remainderterm yields the desired result: (8. Dividing Eq. can be developed a for in similar manner. _ (8.1 using the first-order nonuniform grid finite difference approximationgiven by Eq.165) Rearranging Eq. (8..478 Ax_ i-1 Ax+ i+1 x Chapter8 Figure 8.. .(1 + ]3)T/+ flT..164) by Ax_and letting fl = Ax+/Ax_ gives j~/+l + fl~i_l= 1+ (l + fl)~ii + ½(A~+ Ax+ + Ax_)fxxli ~ (Ax+3 Ax+ Ax2_ff’xxxli . (8.168) for T" yields 2[T/+~.1).171) Substituting Eq.0) = 0.172) .(1 .0 (8.168). (8.The result is ~rxli _~ii+.~Ax+ Ax2_(fl .12.-~2T a Axe_ + 3) 3(1 (8. ~xli.21 Nonuniform finite differencegrid.0 and T(1. Example 8.

0 where~ = 0. For example.5~ . 2 x = a + b~ + cYc (8.50 0.[(1 +/3) + 8 2_/3(1 + fl )]Ti + Ti+ = 0 1 (8.25 0.547619T ÷ T4 = 0 2 3 0. ~) values into Eq.let’s choosea second-orderfunction to relate the nonuniformly-spaced points.875 where~ --. Figure 8.75.000000 0.172) yields a flTi_1 .8751. Thus.3.4.0 0.0 where~ = 1.600000T . the grid points shouldbe sparsely spacednear the left end of the rod and closely spaced near the right end of the rod. (8.000000 0. grid . The third set is chosento effect the desired grid point distribution.24 Nonuniform Grid Geometry 0.25 presents the values of x.0.00 0.933333T + T5 : 0 3 4 (8.175) Substituting ~ -. in Eq.173) From results obtainedin Sections8.714286T . denoted by ~.0 C. Thus. Rearranging Eq.24.1.458333 ÷ 0.173) at grid points 2 to 4 specified in Table8.0.666667 0.173) at the three Eq. Ax+.666667 0.25 and ~ = 0.2.let x = 0. and c gives [ x = -0.875000 1.777778T~ . b.875000: 0. and c. (8.375 0.375000: x : 0.22 illustrates the nonuniform point grid distribution. ~) values are required to determinethe coefficients a.3 and 8.0.176b) (8.0 cm and T = 0.00 0.174) Threesets of (x. we see that the solution to the heat the transfer problem changesslowlyat the left end of the rod and rapidly at the right end of the rod..25 0.175) yields the complete nonuniform grid point distribution presentedin Table8. (8.50 0.Boundary-Value OrdinaryDifferential Equations Let 52 = 479 -2 16. (8.75 1. b. Two sets values are obviously specified by the boundarypoints: x = 0.174) and solving the resulting systemof equations for a.00 x Figure 8.0 and x = 1. Ax_.5 into Eq.375000 0.666667: x = 0.00 0. denoted by x./3. and the coefficient of T.176a) (8. to the grid uniformly-spacedgrid points.176c) Table 8. Table 8. interior grid points gives: x = 0.75 1.0 /// x 0.333333T + T3 = 0 2 0.0.24.22 Nonuniform point distribution.041667Yc 2 ] (8. Substituting these three defining sets of (x. As a simple nontmiform grid example. Applying (8.

)T/ Chapter8 0.547619 0.000000 Error(x).959002 Transferring T = 0.666667 0.766412C for the errors obtained for the uniformgrid solution presented in Table 8.0 to the right-hand sides of Eqs.26 Solution by the Equilibrium Method on a Nonuniform Grid ~ 1 2 3 4 5 x.000000 ~’(x).777778 -3.0 and T5 = 100.208333 0. -0.673072 -0.000000 7. eigenproblemsare generally solved by the equilibrium method.~" + k2. which is about 33 percent smaller than the Euclideannormof 1.176a) and 1 (8.666667 0.000000 -2.547610 1. cm 0.176c).9.-3.. (8.000000/ T3 = 0.568182 59.670455 25. cm fl (.291667 0..000000 1.618093 100.333333 0.25 Metric Data for the NonuniformGrid i 1 2 3 4 5 x. cm Ax+. The Euclidean normof the errors in Table 8.0 0.333333 0.375000 0. The eigenvalues are to be determinedin addition to the correspondingequilibrium configuration of the system.179038 C.7.875000 1.0 T4 -100. 8.375000 0. respectively..e. yields the following tridiagonal systemof FDEs: . cm Ax_.480 Table 8.802440 26. 0.600000 -1.131986 -0.000000 7.375000 0.0 T(x).600000 -1.177) Solving Eq. Shooting methods are not well suited for solving eigenproblems.933333/ (8.0 0.0 0.26.~(0) ---.. 8.1 Exact Eigenvalues Consider the linear homogeneous boundary-value problem: [ .291667 0.0 Table 8.178) .714286 -2.933333 1.875000 0.~(1) (8. (8.177) by the Thomas algorithm yields the results presented in Table 8.9 EIGENPROBLEMS Eigenproblems arise in equilibrium problemsin whichthe solution exists only for special values (i..~ = 0 .714286 0.125000 0. C 0. Consequently. eigenvalues) of a parameter of the problem.26 is 1. Eigenproblems occur when homogeneousboundary-value ODEsalso have homogeneousboundary conditions.659091 100.208333 0.241253 60.

One of the major items of interest in eigenproblems are the eigenvalues of the system. 8.183). there is an eigenfunction ~(x) given by Eq.04kZ)y5-I-y6 : Y6 = 0 y~ = 0 (8.180) (8.Choose equally spacedgrid with four interior points.183) The value of A is not uniquely determined. (8. as illustrated in Figure 8. at the four interior points: x = 0. (8.9.186b) (8. The solution of the differential equation is .184) 2 Ax 2. Multiplying by Ax truncating the remainder term.2.0.~i--L + 0(Ax + k2~’/= 0 2) (8.(2 . for which k = :knu n = 1.179) where k is an unknown parameter to be determined.181) The values of k are the eigenvalues of the problem.(2 ..(2 .8: yt .0.2 Approximate Eigenvalues Eigenvalues of homogeneous boundary-value problems can also be obtained by numerical methods. There are an infinite eigenvalues.0.44)..~(x) = Asin(nztx) (8. (8.178) for its eigenvaluesby the finite difference method. These values are approximationsof the exact eigenvalues of the boundary-valueproblem.185) Applythe FDE.186a) (8. and rearranging gives the FDE: ~iq-I -- l Yi-1 -- ~ (2 . (8. In this approach.13. Example8.182) numberof (8.04kZ)y3 +Y4= Y3 . For each eigenvalue of the system. the boundary-valueODE approximatedby a systemof finite is difference equations.2: x = 0.179) gives ~(0) = A sin(k0) + B cos(k0) = 0 ---> )(1) = A sin(kl) Either A = 0 (undesired) or sin(k) = 0.04kZ)y4 +Y5= Y4.23..Boundary-Value OrdinaryDifferential Equations The exact solution to this problemis ~(x) = A sin(kx) + B cos(kx) 481 (8. (8.04/fl)y 2 +Y3= 0 y2 .6: x = 0.. The corresponding finite difference equationis: 2~i"~. Substituting the boundaryvalues into Eq.186d) . and the values of the unknown parameter (i.Ax k2)yi -}-Yi+l : 0 (8. Approximationof eigenvalues by the equilibrium method Let’s solve Eq.186c) (8.0. Approximate ~/’ with the second-order centered-difference approximation. the eigenvalues) whichsatisfy the system of FDEsare determined.with Ax= 0.4: x = 0.e. Eq.(2 . 2 .

618 k -4-3.187) which can be expressed as (8. and percent error are presented in Table 8.618 .0.66 q:6.04k . The first eigenvalue is reasonably accurate.16 qz24.l 2) .618. To improvethe accuracy of the eigenvalues and to obtain higherorder eigenvalues.0..21) = Define Z = (2.190) equation is determined by expanding the (8.2) 4k 0.31 .. (8.04k (2 -1 [yi]= 0 (8. q.189) A = -1 2 -1 0J 0 -1 2 -1 0 0 -1 2 This is a classical eigenproblem.186) in matrix form gives .482 1 ~ 2 3 4 5 Chapter8 6.192) The values of Z. however.618 0.04k2 and A is defined as -1 0 (8. and finding the zeros of highthe order polynomialsis moredifficult.511 k(exact) -4-r~= 4-3.04k -1 0 0 2) .0.1 (2 0 0 I(2 (A . more grid points are required. The characteristic determinant IA .283 -I-3~z= 4-9.566 Error. maybe used to determinethe eigenvalues for large systems of FDEs.~ Figure8.188) where 2 = 0.Expanding determinantbecomesmoredifficult.04k2) = ~1.090 4-5.191) 2.878 4-8.23 Finite difference grid for the eigenproblern. -. % q:1..0. k(exact).27.090 4-9. which are introduced in Chapter 2.142 4-2n = 4-6.0. (8.618 -0.04k2).618. Table 8. which is quadratic in Z Solving Eq. which gives Z4.1. Writing Eq. Numerical methodsfor finding the eigenvalues.45 ~:14. The higher-order eigenvalues become less and less accurate. (8.The characteristic equation is given by det(A .27 Solution of the Eigenproblem Z 1.425 4-4~z= 4-12. This is not without disadvantages.191) by the quadratic formula yields Z = (2 .. k.10 (2 2) 0 .3Z2 + 1 = 0 (8.2I)y = 2 0 -1 0.211 = 0.

8.10 PROGRAMS

Two FORTRAN subroutines for integrating boundary-value ordinary differential equations are presented in this section:

1. The fourth-order Runge-Kutta shooting method
2. The second-order equilibrium method

The basic computational algorithms are presented as completely self-contained subroutines suitable for use in other programs. Input data and output statements are contained in a main (or driver) program written specifically to illustrate the use of each subroutine.

8.10.1 The Fourth-Order Runge-Kutta Shooting Method

The general nonlinear second-order boundary-value ODE is given by Eq. (8.193):

ȳ″ = f(x, ȳ, ȳ′)    ȳ(x1) = ȳ1 and ȳ(x2) = ȳ2    (8.193)

Equation (8.193) can be written as a pair of coupled nonlinear first-order initial-value ODEs:

ȳ′ = z̄    ȳ(x1) = ȳ1    (8.194)
z̄′ = f(x, ȳ, z̄)    z̄(x1) = z̄1    (8.195)

The general algorithm for the fourth-order Runge-Kutta method is given by Eqs. (7.179) and (7.180). These equations are implemented in subroutine rk in Section 7.15.1 for a single first-order initial-value ODE. That subroutine has been expanded in this section to solve the system of two coupled first-order ODEs, Eqs. (8.194) and (8.195), which arise from Eq. (8.193). Equations (8.194) and (8.195) are solved by the Runge-Kutta method in subroutine rk2. The secant method for satisfying the right-hand side boundary condition is given by Eqs. (8.18) and (8.19):

slope = [ȳ2|(n) − ȳ2|(n−1)] / [z̄1|(n) − z̄1|(n−1)]    (8.196)

where the superscript (n) denotes the iteration number. Solving Eq. (8.196) for z̄1|(n+1) gives

z̄1|(n+1) = z̄1|(n) + [ȳ2 − ȳ2|(n)] / slope    (8.197)

A FORTRAN subroutine, subroutine shoot, for implementing the fourth-order Runge-Kutta shooting method is presented in Program 8.1. Program main defines the data set and prints it, calls subroutine shoot to implement the solution, and prints the solution. A FORTRAN function, function f, specifies the derivative function defined by Eq. (8.193).

Program 8.1. The fourth-order Runge-Kutta shooting method program.

      program main
c     main program to illustrate boundary-value ODE solvers
c     ndim    array dimension, ndim = 9 in this example

c     imax    number of grid points
c     x       independent variable array, x(i)
c     y       dependent variable array, y(i)
c     z       derivative dy/dx array, z(i)
c     by,bz   right-hand-side boundary condition flags: 0.0 or 1.0
c     yl,zl   left-hand side boundary condition values
c     y2,z2   right-hand side boundary condition values
c     za,zb   first and second guesses for z(1)
c     dx      grid increment
c     iter    maximum number of iterations
c     tol     convergence tolerance
c     iw      intermediate results output flag: 0 none, 1 some, 2 all
      dimension x(9),y(9),z(9)
      data ndim,imax,iter,tol,iw / 9, 5, 3, 1.0e-06, 1 /
      data (x(i),i=1,5) / 0.00, 0.25, 0.50, 0.75, 1.00 /
      data y(1),yl,zl,za,zb / 0.0, 0.0, 0.0, 7.5, 12.5 /
      data by,bz,y2,z2 / 1.0, 0.0, 100.0, 0.0 /
      data dx / 0.25 /
      write (6,1000)
      call shoot (ndim,imax,x,y,z,by,bz,yl,zl,y2,z2,za,zb,dx,iter,
     1            tol,iw)
      if (iw.gt.0) write (6,1000)
      write (6,1010) (i,x(i),y(i),z(i),i=1,imax)
      stop
 1000 format (' Shooting method'/' '/' i',7x,'x',12x,'y',12x,'z'/' ')
 1010 format (i3,3f13.6)
 1020 format (' ')
      end

      subroutine shoot (ndim,imax,x,y,z,by,bz,yl,zl,y2,z2,za,zb,dx,
     1                  iter,tol,iw)
c     the shooting method
      dimension x(ndim),y(ndim),z(ndim)
      rhs=by*y2+bz*z2
      z(1)=za
      do it=1,iter
         call rk2 (ndim,imax,x,y,z,dx,iw)
         if (iw.gt.1) write (6,1010) (i,x(i),y(i),z(i),i=1,imax)
         if (it.eq.1) then
            rhs1=by*y(imax)+bz*z(imax)
            z(1)=zb
         else
            rhs2=by*y(imax)+bz*z(imax)
            if (abs(rhs2-rhs).le.tol) return
            zb=z(1)
            slope=(rhs2-rhs1)/(zb-za)
            za=zb
            rhs1=rhs2
            z(1)=z(1)+(rhs-rhs2)/slope
         end if
      end do

c     iteration limit exceeded
      write (6,1020) iter
      return
 1000 format (' ')
 1010 format (i3,3f13.6)
 1020 format (' '/' The solution failed to converge, iter = ',i3)
      end

      subroutine rk2 (ndim,imax,x,y,z,dx,iw)
c     implements the fourth-order Runge-Kutta method for two odes
      dimension x(ndim),y(ndim),z(ndim)
      do i=2,imax
         dy1=dx*z(i-1)
         dz1=dx*f(x(i-1),y(i-1),z(i-1))
         dy2=dx*(z(i-1)+dz1/2.0)
         dz2=dx*f(x(i-1)+dx/2.0,y(i-1)+dy1/2.0,z(i-1)+dz1/2.0)
         dy3=dx*(z(i-1)+dz2/2.0)
         dz3=dx*f(x(i-1)+dx/2.0,y(i-1)+dy2/2.0,z(i-1)+dz2/2.0)
         dy4=dx*(z(i-1)+dz3)
         dz4=dx*f(x(i-1)+dx,y(i-1)+dy3,z(i-1)+dz3)
         y(i)=y(i-1)+(dy1+2.0*(dy2+dy3)+dy4)/6.0
         z(i)=z(i-1)+(dz1+2.0*(dz2+dz3)+dz4)/6.0
      end do
      return
      end

      function f (x,y,z)
c     derivative function
c     p   coefficient of yp in the ode
c     q   coefficient of y in the ode
c     fx  nonhomogeneous term
      data p,q,fx / 0.0, -16.0, 0.0 /
      f=fx-p*z-q*y
      return
      end

Program 8.1 is the same as its initial-value counterpart, with the fourth-order Runge-Kutta method replacing the second-order implicit trapezoid method. The data set used to illustrate subroutine shoot is taken from Example 8.1. The output generated by the fourth-order Runge-Kutta program is presented in Output 8.1.

Output 8.1. Solution by the fourth-order Runge-Kutta shooting method (i, x, y, and z at the five grid points for each secant iteration).

Example 8.7 is concerned with a derivative (i.e., Neumann) boundary condition. That problem can be solved by subroutine shoot by changing the following variable values: y(1) = 100.0, y2 = 0.0, by = 0.0, bz = 1.0, za = -405.0, and zb = -395.0.

8.10.2 The Second-Order Equilibrium Method

The general nonlinear second-order boundary-value ODE is given by Eq. (8.136):

ȳ″ + P(x, ȳ)ȳ′ + Q(x, ȳ)ȳ = F(x)    ȳ(x1) = ȳ1 and ȳ(x2) = ȳ2    (8.198)

The second-order centered-difference FDE which approximates Eq. (8.198) is given by Eq. (8.46):

(1 − ½Δx Pi) y(i−1) + (−2 + Δx² Qi) y(i) + (1 + ½Δx Pi) y(i+1) = Δx² Fi    (8.199)

Equation (8.199) is applied at every interior point in a finite difference grid. The resulting system of FDEs is solved by the Thomas algorithm. An initial approximation y(x)(0) must be specified. If the ODE is linear, the solution is obtained in one pass. If the ODE is nonlinear, the solution is obtained iteratively.

A FORTRAN subroutine, subroutine equil, for implementing the second-order equilibrium method is presented in Program 8.2. Program main defines the data set and prints it, calls subroutine equil to set up and solve the system of FDEs, and prints the solution. Subroutine thomas is used to solve the system of equations. A first guess for the solution y(i) must be supplied in a data statement.

Program 8.2. The second-order equilibrium method program.

      program main
c     main program to illustrate boundary-value ODE solvers
c     insert comment statements from subroutine shoot main program
      dimension x(9),y(9),a(9,3),b(9),w(9)
      data ndim,imax,iter,tol,iw / 9, 5, 1, 1.0e-06, 1 /
      data (x(i),i=1,5) / 0.00, 0.25, 0.50, 0.75, 1.00 /
      data (y(i),i=1,5) / 0.0, 25.0, 50.0, 75.0, 100.0 /
      data by,bz,y2,z2 / 1.0, 0.0, 100.0, 0.0 /
      data yl,zl,dx / 0.0, 0.0, 0.25 /
      write (6,1000)
      call equil (ndim,imax,x,y,a,b,w,yl,zl,y2,z2,by,bz,dx,iter,
     1            tol,iw)
      if (iw.ne.0) write (6,1000)
      write (6,1010) (i,x(i),y(i),i=1,imax)
      stop
 1000 format (' Equilibrium method'/' '/' i',7x,'x',12x,'y'/' ')
 1010 format (i3,2f13.6)
      end

      subroutine equil (ndim,imax,x,y,a,b,w,yl,zl,y2,z2,by,bz,dx,
     1                  iter,tol,iw)
c     the equilibrium method for a nonlinear second-order ode
c     fx  nonhomogeneous term
c     p   coefficient of yp in the ode
c     q   coefficient of y in the ode
      dimension x(ndim),y(ndim),a(ndim,3),b(ndim),w(ndim)
      data fx,p,q / 0.0, 0.0, -16.0 /
c     left-hand side Dirichlet boundary condition
      a(1,2)=1.0
      a(1,3)=0.0
      b(1)=y(1)
c     right-hand side boundary condition: Dirichlet or Neumann
      if (by.eq.1.0) then
         a(imax,1)=0.0
         a(imax,2)=1.0
         b(imax)=y2
      else
         a(imax,1)=2.0
         a(imax,2)=-2.0+q*dx**2
         b(imax)=fx*dx**2-2.0*z2*dx
      end if
      do it=1,iter
         do i=2,imax-1
            a(i,1)=1.0-0.5*p*dx
            a(i,2)=-2.0+q*dx**2
            a(i,3)=1.0+0.5*p*dx
            b(i)=fx*dx**2
         end do
         call thomas (ndim,imax,a,b,w)
         dymax=0.0
         do i=1,imax
            dy=abs(y(i)-w(i))
            if (dy.gt.dymax) dymax=dy
            y(i)=w(i)
         end do
         if (iw.gt.0) write (6,1010) (i,x(i),y(i),i=1,imax)
         if (dymax.le.tol) return
      end do
      write (6,1020) it
      return
 1000 format (' ')
 1010 format (i3,2f13.6)
 1020 format (' '/' The solution failed to converge, it = ',i3)
      end

      subroutine thomas (ndim,n,a,b,x)
c     the Thomas algorithm for a tridiagonal system
      end
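The body of subroutine thomas is not listed above. A minimal sketch consistent with the call in subroutine equil, assuming the same storage convention (a(i,1), a(i,2), and a(i,3) holding the sub-, main, and super-diagonal elements, b the right-hand side, and x the returned solution), might look like the following; this is an illustrative reconstruction, not the text's listing.

      subroutine thomas (ndim,n,a,b,x)
c     illustrative sketch of the Thomas algorithm for a
c     tridiagonal system stored by diagonals in a(n,3)
      dimension a(ndim,3),b(ndim),x(ndim)
c     forward elimination of the subdiagonal
      do i=2,n
         em=a(i,1)/a(i-1,2)
         a(i,2)=a(i,2)-em*a(i-1,3)
         b(i)=b(i)-em*b(i-1)
      end do
c     back substitution
      x(n)=b(n)/a(n,2)
      do i=n-1,1,-1
         x(i)=(b(i)-a(i,3)*x(i+1))/a(i,2)
      end do
      return
      end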

The data set used to illustrate subroutine equil is taken from the equilibrium-method example presented earlier in this chapter. The output generated by the second-order equilibrium method program is presented in Output 8.2.

Output 8.2. Solution by the second-order equilibrium method.

 Equilibrium method

  i      x            y
  1   0.000000     0.000000
  2   0.250000     4.761905
  3   0.500000    14.285714
  4   0.750000    38.095238
  5   1.000000   100.000000

Example 8.8 is concerned with a derivative (i.e., Neumann) boundary condition. That problem can be solved by subroutine equil by changing the following variables in the data statements: (y(i), i = 1, 5) / 100.0, 75.0, 50.0, 25.0, 0.0 /, y(1) = 100.0, by = 0.0, and bz = 1.0.

8.10.3 Packages for Integrating Boundary-Value ODEs

Numerous libraries and software packages are available for integrating boundary-value ordinary differential equations. Many work stations and mainframe computers have such libraries attached to their operating systems. Many commercial software packages contain algorithms for integrating boundary-value ODEs. Some of the more prominent packages are Matlab and Mathcad. More sophisticated packages, such as IMSL, MATHEMATICA, MACSYMA, and MAPLE, also contain algorithms for integrating boundary-value ODEs. Finally, the book Numerical Recipes (Press et al., 1989) contains numerous routines for integrating boundary-value ordinary differential equations.

8.11 SUMMARY

Two finite difference approaches for solving boundary-value ordinary differential equations are presented in this chapter: (1) the shooting method and (2) the equilibrium method. The advantages and disadvantages of these methods are summarized in this section.

The shooting method is based on marching methods for solving initial-value ODEs. The advantages of the shooting method are:

1. Any initial-value ODE solution method can be used.
2. Nonlinear ODEs are solved directly.
3. It is easy to achieve fourth- or higher-order accuracy.
4. There is no system of FDEs to solve.

The disadvantages of the shooting method are:

1. One or more boundary conditions must be satisfied iteratively (by shooting).
2. Shooting for more than one boundary condition is time consuming.
3. Nonlinear problems require an iterative procedure (e.g., the secant method) to satisfy the boundary conditions.

The equilibrium method is based on relaxing a system of FDEs simultaneously. The major advantage of the equilibrium method is that the boundary conditions are applied directly and automatically satisfied. The disadvantages of the equilibrium method are:

1. A system of FDEs must be solved.
2. Nonlinear ODEs yield a system of nonlinear FDEs, which must be solved by iterative methods.
3. It is difficult to achieve higher than second-order accuracy.

No rigid guidelines exist for choosing between the shooting method and the equilibrium method. Experience is the best guide. Shooting methods work well for nonsmoothly varying problems and oscillatory problems where their error control and variable grid size capacity are of great value. Shooting methods frequently require more computational effort, but they are generally more certain of producing a solution. Equilibrium methods work well for smoothly varying problems and for problems with complicated or delicate boundary conditions.

After studying Chapter 8, you should be able to:

1. Describe the general features of boundary-value ordinary differential equations (ODEs)
2. Discuss the general features of the linear second-order ODE, including the complementary solution and the particular solution
3. Discuss the general features of the nonlinear second-order ODE
4. Describe how higher-order ODEs and systems of second-order ODEs can be solved using the procedures for solving a single second-order ODE
5. Discuss the number of boundary conditions required to solve a boundary-value ODE
6. Discuss the types of boundary conditions: Dirichlet, Neumann, and mixed
7. Explain the concept underlying the shooting (initial-value) method
8. Reformulate a boundary-value ODE as a system of initial-value ODEs
9. Apply any initial-value ODE finite difference method to solve a boundary-value ODE by the shooting method
10. Solve a nonlinear boundary-value ODE by the shooting method with iteration
11. Solve a linear boundary-value ODE by the shooting method with superposition
12. Explain the concept of extrapolation as it applies to boundary-value ODEs
13. Apply extrapolation to increase the accuracy of the solution of boundary-value problems by the shooting method
14. Explain the concepts underlying the equilibrium (boundary-value) method
15. Solve a linear second-order boundary-value ODE by the second-order equilibrium method
16. Apply extrapolation to increase the accuracy of the solution of boundary-value problems by the equilibrium method

17. Solve a boundary-value problem with derivative boundary conditions by the shooting method
18. Solve a boundary-value problem with derivative boundary conditions by the equilibrium method
19. Discuss mixed boundary conditions and boundary conditions at infinity
20. Derive the five-point fourth-order equilibrium method and discuss its limitations
21. Derive the compact three-point fourth-order finite difference approximations (FDAs) for ȳ′ and ȳ″
22. Apply the compact three-point fourth-order FDAs to solve a second-order boundary-value ODE
23. Explain the difficulties encountered when solving a nonlinear boundary-value problem by the equilibrium method
24. Explain and apply the iteration method for solving nonlinear boundary-value problems
25. Explain and apply Newton's method for solving nonlinear boundary-value problems
26. Derive finite difference approximations for ȳ′ and ȳ″ on a nonuniform grid
27. Solve a second-order boundary-value ODE on a nonuniform grid
28. Discuss the occurrence of eigenproblems in boundary-value ODEs
29. Solve simple eigenproblems arising in the solution of boundary-value ODEs
30. List the advantages and disadvantages of the shooting method for solving boundary-value ODEs
31. List the advantages and disadvantages of the equilibrium method for solving boundary-value ODEs
32. Choose and implement a finite difference method for solving boundary-value ODEs

EXERCISE PROBLEMS

The following problems involve the numerical solution of boundary-value ODEs. These problems can be worked by hand calculation or computer programs. Carry at least six digits after the decimal place in all calculations. An infinite variety of additional problems can be obtained from these problems by (a) changing the coefficients in the ODEs, (b) changing the boundary conditions, (c) changing the step size, (d) changing the range of integration, and (e) combinations of the above changes.

8.1 Introduction

1. Derive the exact solution of the heat transfer problem presented in Section 8.1. Calculate the solution presented in Table 8.1.

8.2 General Features of Boundary-Value ODEs

2. Derive the exact solution of the general linear second-order boundary-value ODE, where ȳ(x1) = ȳ1, ȳ(x2) = ȳ2, and F(x) = exp(bx) + c + dx.
3. Evaluate the exact solution of Problem 2 for P = 5.0, Q = 4.0, F = 1.0, ȳ(0.0) = 0.0, and ȳ(1.0) = 1.0. Tabulate the solution for x = 0.0 to 1.0 at intervals of Δx = 0.125. Plot the solution.

For all problems solved by a shooting method, unless otherwise noted, let ȳ′(0.0) = 0.0 and 1.0 for the first two guesses. For all nonlinear ODEs, repeat the overall solution until |Δy|max ≤ 0.001. For all problems solved by an equilibrium method, let the first approximation for y(x) be a linear variation consistent with the boundary conditions.

8.3 The Shooting (Initial-Value) Method

4. *Solve the following ODE by the shooting method using the first-order explicit Euler method with (a) Δx = 0.25, (b) Δx = 0.125, and (c) Δx = 0.0625. Compare the errors and calculate the ratio of the errors at x = 0.5.
   ȳ″ + 5ȳ′ + 4ȳ = 1    ȳ(0) = 0 and ȳ(1) = 1    (A)
5. *Solve Problem 4 by the second-order modified Euler method.
6. *Solve Problem 4 by the fourth-order Runge-Kutta method.
7. Solve the following ODE by the shooting method using the first-order explicit Euler method with (a) Δx = 0.25, (b) Δx = 0.125, and (c) Δx = 0.0625. Compare the errors and calculate the ratio of the errors at x = 0.5.
   ȳ″ + 4ȳ′ + 6.25ȳ = 1    ȳ(0) = 0 and ȳ(1) = 1    (B)
8. Solve Problem 7 by the second-order modified Euler method.
9. Solve Problem 7 by the fourth-order Runge-Kutta method.
10. Solve the following ODE by the shooting method using the first-order explicit Euler method with (a) Δx = 0.25, (b) Δx = 0.125, and (c) Δx = 0.0625. Compare the errors and calculate the ratio of the errors at x = 0.5.
   ȳ″ + 5ȳ′ + 4ȳ = e^x    ȳ(0) = 0 and ȳ(1) = 1    (C)
11. Solve Problem 10 by the second-order modified Euler method.
12. Solve Problem 10 by the fourth-order Runge-Kutta method.
13. Solve the following ODE by the shooting method using the first-order explicit Euler method with (a) Δx = 0.25, (b) Δx = 0.125, and (c) Δx = 0.0625. Compare the errors and calculate the ratio of the errors at x = 0.5.
   ȳ″ + 4ȳ′ + 6.25ȳ = e^x    ȳ(0) = 0 and ȳ(1) = 1    (D)
14. Solve Problem 13 by the second-order modified Euler method.
15. Solve Problem 13 by the fourth-order Runge-Kutta method.
16. Solve the following ODE by the shooting method using the first-order explicit Euler method with (a) Δx = 0.25, (b) Δx = 0.125, and (c) Δx = 0.0625. Compare the errors and calculate the ratio of the errors at x = 0.5.
   ȳ″ + 5ȳ′ + 4ȳ = 2e^(x/2) + 1 + x    ȳ(0) = 0 and ȳ(1) = 1    (E)
17. Solve Problem 16 by the second-order modified Euler method.
18. Solve Problem 16 by the fourth-order Runge-Kutta method.
19. Solve the following ODE by the shooting method using the first-order explicit Euler method with (a) Δx = 0.25, (b) Δx = 0.125, and (c) Δx = 0.0625. Compare the errors and calculate the ratio of the errors at x = 0.5.
   ȳ″ + 4ȳ′ + 6.25ȳ = 2e^(x/2) + 1 + x    ȳ(0) = 0 and ȳ(1) = 1    (F)
20. Solve Problem 19 by the second-order modified Euler method.
21. Solve Problem 19 by the fourth-order Runge-Kutta method.

22. *Solve the following ODE by the shooting method using the first-order explicit Euler method with (a) Δx = 0.25, (b) Δx = 0.125, and (c) Δx = 0.0625. Compare the errors and calculate the ratio of the errors at x = 0.5.
   ȳ″ + (1 + x)ȳ′ + (1 + x)ȳ = 1    ȳ(0) = 0 and ȳ(1) = 1    (G)
23. Solve Problem 22 by the second-order modified Euler method.
24. Solve Problem 22 by the fourth-order Runge-Kutta method.
25. Solve the following ODE by the shooting method using the first-order explicit Euler method with (a) Δx = 0.25, (b) Δx = 0.125, and (c) Δx = 0.0625. Compare the errors and calculate the ratio of the errors at x = 0.5.
   ȳ″ + (1 + x)ȳ′ + (1 + x)ȳ = e^x    ȳ(0) = 0 and ȳ(1) = 1    (H)
26. Solve Problem 25 by the second-order modified Euler method.
27. Solve Problem 25 by the fourth-order Runge-Kutta method.
28. Solve the following ODE by the shooting method using the first-order explicit Euler method with (a) Δx = 0.25, (b) Δx = 0.125, and (c) Δx = 0.0625. Compare the errors and calculate the ratio of the errors at x = 0.5.
   ȳ″ + (1 + x)ȳ′ + (1 + x)ȳ = 2e^(x/2) + 1 + x    ȳ(0) = 0 and ȳ(1) = 1    (I)
29. Solve Problem 28 by the second-order modified Euler method.
30. Solve Problem 28 by the fourth-order Runge-Kutta method.
31. Solve the following third-order boundary-value ODE by the shooting method using the first-order explicit Euler method with (a) Δx = 0.25 and (b) Δx = 0.125. Compare the solutions at x = 0.5.
   ȳ‴ − 7ȳ″ + 14ȳ′ − 8ȳ = 1    ȳ(0) = 0, ȳ′(0) = 1, and ȳ(1) = 1    (J)
32. Solve Problem 31 by the second-order modified Euler method.
33. Solve Problem 31 by the fourth-order Runge-Kutta method.
34. Solve the following third-order boundary-value ODE by the shooting method using the first-order explicit Euler method with (a) Δx = 0.25 and (b) Δx = 0.125. Compare the solutions at x = 0.5.
   ȳ‴ − 7ȳ″ + 14ȳ′ − 8ȳ = 2e^(x/2) + 1 + x    ȳ(0) = 0, ȳ′(0) = 1, and ȳ(1) = 1    (K)
35. Solve Problem 34 by the second-order modified Euler method.
36. Solve Problem 34 by the fourth-order Runge-Kutta method.
37. Solve the following ODE by the shooting method using the first-order explicit Euler method with (a) Δx = 0.25 and (b) Δx = 0.125. Compare the solutions at x = 0.5.
   ȳ″ + (1 + ȳ)ȳ′ + (1 + ȳ)ȳ = 1    ȳ(0) = 0 and ȳ(1) = 1    (L)
38. Solve Problem 37 by the second-order modified Euler method.
39. Solve Problem 37 by the fourth-order Runge-Kutta method.

40. Solve the following ODE by the shooting method using the first-order explicit Euler method with (a) Δx = 0.25 and (b) Δx = 0.125. Compare the solutions at x = 0.5.
   ȳ″ + (1 + x + ȳ)ȳ′ + (1 + x + ȳ)ȳ = 2e^(x/2) + 1 + x    ȳ(0) = 0 and ȳ(1) = 1    (M)
41. Solve Problem 40 by the second-order modified Euler method.
42. Solve Problem 40 by the fourth-order Runge-Kutta method.

8.4 The Equilibrium (Boundary-Value) Method

43. *Solve Problem 4 by the second-order equilibrium method.
44. Solve Problem 7 by the second-order equilibrium method.
45. Solve Problem 10 by the second-order equilibrium method.
46. Solve Problem 13 by the second-order equilibrium method.
47. Solve Problem 16 by the second-order equilibrium method.
48. Solve Problem 19 by the second-order equilibrium method.
49. Solve Problem 22 by the second-order equilibrium method.
50. Solve Problem 25 by the second-order equilibrium method.
51. Solve Problem 28 by the second-order equilibrium method.
52. Solve Problem 31 by letting z̄ = ȳ′, thus reducing the third-order ODE to a second-order ODE for z̄(x). Solve this system of two coupled ODEs by solving the second-order ODE for z̄(x) by the second-order equilibrium method and the first-order ODE for ȳ(x) by the second-order modified Euler method. Solve the problem for (a) Δx = 0.25 and (b) Δx = 0.125.
53. Solve Problem 34 by the procedure described in Problem 52.

8.5 Derivative (and Other) Boundary Conditions

Shooting Method

54. *Solve the following ODE by the shooting method using the second-order modified Euler method with Δx = 0.125.
   ȳ″ + 5ȳ′ + 4ȳ = 1    ȳ(0) = 1 and ȳ′(1) = 0    (N)
55. Solve the following ODE by the shooting method using the second-order modified Euler method with Δx = 0.125.
   ȳ″ + 4ȳ′ + 6.25ȳ = 1    ȳ(0) = 1 and ȳ′(1) = 0    (O)
56. Solve the following ODE by the shooting method using the second-order modified Euler method with Δx = 0.125.
   ȳ″ + 5ȳ′ + 4ȳ = e^x    ȳ(0) = 1 and ȳ′(1) = 0    (P)
57. Solve the following ODE by the shooting method using the second-order modified Euler method with Δx = 0.125.
   ȳ″ + 4ȳ′ + 6.25ȳ = e^x    ȳ(0) = 1 and ȳ′(1) = 0    (Q)

58. Solve the following ODE by the shooting method using the second-order modified Euler method with Δx = 0.125.
   ȳ″ + (1 + x)ȳ′ + (1 + x)ȳ = 1    ȳ(0) = 1 and ȳ′(1) = 0    (R)
59. Solve the following ODE by the shooting method using the second-order modified Euler method with Δx = 0.125.
   ȳ″ + (1 + x)ȳ′ + (1 + x)ȳ = 2e^(x/2) + 1 + x    ȳ(0) = 1 and ȳ′(1) = 0    (S)

The Equilibrium Method

60. *Solve Problem 54 by the second-order equilibrium method.
61. Solve Problem 55 by the second-order equilibrium method.
62. Solve Problem 56 by the second-order equilibrium method.
63. Solve Problem 57 by the second-order equilibrium method.
64. Solve Problem 58 by the second-order equilibrium method.
65. Solve Problem 59 by the second-order equilibrium method.

Mixed Boundary Conditions

66. *Solve the following ODE by the shooting method using the first-order explicit Euler method with (a) Δx = 0.125 and (b) Δx = 0.0625.
   ȳ″ + 5ȳ′ + 4ȳ = 1    ȳ(0) = 0 and ȳ(1) − 0.5ȳ′(1) = 1    (T)
67. Solve Problem 66 by the second-order modified Euler method.
68. Solve Problem 66 by the fourth-order Runge-Kutta method.
69. Solve the following ODE by the shooting method using the first-order explicit Euler method with (a) Δx = 0.125 and (b) Δx = 0.0625.
   ȳ″ + 4ȳ′ + 6.25ȳ = e^x    ȳ(0) = 0 and ȳ(1) − ȳ′(1) = 1    (U)
70. Solve Problem 69 by the second-order modified Euler method.
71. Solve Problem 69 by the fourth-order Runge-Kutta method.
72. Solve Problem 66 by the second-order equilibrium method. Implement the mixed boundary condition by applying the PDE at the boundary.
73. Solve Problem 69 by the second-order equilibrium method. Implement the mixed boundary condition by applying the PDE at the boundary.

Boundary Conditions at Infinity

74. *Solve the following ODE by the shooting method using the first-order explicit Euler method with (a) Δx = 0.25 and (b) Δx = 0.125.
   ȳ″ − ȳ = 0    ȳ(0) = 1 and ȳ(∞) = 0    (V)
   Let ȳ′(0.0) = 0.0 and −1.0 for the first two passes. Implement the boundary condition at infinity by applying that BC at x = 2.0, 5.0, 10.0, etc., until the solution at x = 1.0 changes by less than 0.001.
75. Work Problem 74 by the second-order modified Euler method.
76. Work Problem 74 by the fourth-order Runge-Kutta method.

77. Solve the following ODE by the shooting method using the first-order explicit Euler method with (a) Δx = 0.25 and (b) Δx = 0.125.
   ȳ″ + ȳ′ − 2ȳ = 1    ȳ(0) = 1 and ȳ(∞) = 0    (W)
78. Work Problem 77 by the second-order modified Euler method.
79. Work Problem 77 by the fourth-order Runge-Kutta method.
80. Solve Problem 77 by the second-order equilibrium method.
81. Solve Problem 74 by the second-order equilibrium method.

8.6 Higher-Order Equilibrium Methods

The Five-Point Fourth-Order Equilibrium Method

82. Solve ODE (A) using the five-point fourth-order equilibrium method for Δx = 0.25 and 0.125. Use the three-point second-order equilibrium method at points adjacent to the boundaries. Compare the solutions at x = 0.25, 0.5, and 0.75.
83. Solve ODE (B) by the procedure described in Problem 82.
84. Solve ODE (C) by the procedure described in Problem 82.
85. Solve ODE (D) by the procedure described in Problem 82.

The Compact Three-Point Fourth-Order Equilibrium Method

86. *Compact three-point fourth-order finite difference approximations are presented in Section 8.6.2 for ȳ′(x) and ȳ″(x). When used in a second-order boundary-value ODE, an implicit fourth-order FDE, such as Eq. (8.129), is obtained. This equation is straightforward in concept but difficult to implement numerically. When the first derivative, ȳ′(x), does not appear in the ODE, however, a simple FDE can be developed, such as Eq. (8.134) in Example 8.12. Apply this procedure to solve the following ODE with (a) Δx = 0.25 and (b) Δx = 0.125. Compare the solutions at x = 0.5.
   ȳ″ + ȳ = 1    ȳ(0) = 0 and ȳ(1) = 1
87. Solve the following ODE by the procedure described in Problem 86:
   ȳ″ + ȳ = 1 + x + e^x    ȳ(0) = 0 and ȳ(1) = 1
88. Solve the following ODE by the procedure described in Problem 86:
   ȳ″ − ȳ = 1    ȳ(0) = 0 and ȳ(1) = 1
89. Solve the following ODE by the procedure described in Problem 86:
   ȳ″ − ȳ = 1 + x + e^x    ȳ(0) = 0 and ȳ(1) = 1

8.7 The Equilibrium Method for Nonlinear Boundary-Value Problems

Iteration

90. Solve the following ODE by the second-order equilibrium method with Δx = 0.25 and 0.125 by iteration. Compare the solutions at x = 0.5.
   ȳ″ + (1 + ȳ)ȳ′ + (1 + ȳ)ȳ = 1    ȳ(0) = 0 and ȳ(1) = 1    (X)
91. Solve the following ODE by the second-order equilibrium method with Δx = 0.25 and 0.125 by iteration. Compare the solutions at x = 0.5.
   ȳ″ + (1 + x + ȳ)ȳ′ + (1 + x + ȳ)ȳ = 2e^(x/2) + 1 + x    ȳ(0) = 0 and ȳ(1) = 1    (Y)

Newton's Method

92. Solve the following ODE by the second-order equilibrium method with Δx = 0.125 by Newton's method.
   ȳ″ + (1 + ȳ)ȳ′ + (1 + ȳ)ȳ = 1    ȳ(0) = 0 and ȳ(1) = 1    (Z)
93. Solve the following ODE by the second-order equilibrium method with Δx = 0.125 by Newton's method.
   ȳ″ + (1 + x + ȳ)ȳ′ + (1 + x + ȳ)ȳ = 2e^(x/2) + 1 + x    ȳ(0) = 0 and ȳ(1) = 1

8.8 The Equilibrium Method on Nonuniform Grids

94. Solve the following ODE by the second-order equilibrium method on the nonuniform grid specified in Table 8.25. Compare the results with the results obtained in Problem 5.
   ȳ″ + 5ȳ′ + 4ȳ = 1    ȳ(0) = 0 and ȳ(1) = 1    (BB)
95. Solve Problem 94 by the second-order equilibrium method on the corresponding transformed grid.
96. Solve Problem 94 by the second-order modified Euler method. Compare the results with the results obtained in Problem 5.
97. Solve the following ODE by the second-order equilibrium method on the nonuniform grid specified in Table 8.25. Compare the results with the results obtained in Problem 8.
   ȳ″ + 4ȳ′ + 6.25ȳ = 1    ȳ(0) = 0 and ȳ(1) = 1    (CC)
98. Solve Problem 97 by the second-order equilibrium method on the corresponding transformed grid.
99. Solve Problem 97 by the second-order modified Euler method. Compare the results with the results obtained in Problem 8.
100. Solve the following ODE by the second-order equilibrium method on the nonuniform grid specified in Table 8.25. Compare the results with the results obtained in Problem 11.
   ȳ″ + 5ȳ′ + 4ȳ = e^x    ȳ(0) = 0 and ȳ(1) = 1    (DD)
101. Solve Problem 100 by the second-order equilibrium method on the corresponding transformed grid.
102. Solve Problem 100 by the second-order modified Euler method. Compare the results with the results obtained in Problem 11.

103. Solve the following ODE by the second-order equilibrium method on the nonuniform grid specified in Table 8.25. Compare the results with the results obtained in Problem 14.
   ȳ″ + 4ȳ′ + 6.25ȳ = e^x    ȳ(0) = 0 and ȳ(1) = 1    (EE)
104. Solve Problem 103 by the second-order equilibrium method on the corresponding transformed grid.
105. Solve Problem 103 by the second-order modified Euler method. Compare the results with the results obtained in Problem 14.

8.9 Eigenproblems

106. Consider the eigenproblem
   ȳ″ − k²ȳ = 0    ȳ(0) = ȳ(1) = 0
   This problem has no solution except the trivial solution ȳ(x) = 0. (a) Demonstrate this result analytically, and show that there are no real values of k. (b) Illustrate this result numerically by setting up a system of second-order finite difference equations with Δx = 0.25, and show that there are no real values of k.
107. Consider the eigenproblem described by Eq. (8.178). The exact solution of this eigenproblem is k = ±nπ (n = 1, 2, ...). The corresponding finite difference equation is given by Eq. (8.185). The numerical solution of this eigenproblem for Δx = 0.2 is presented in Table 8.27. Determine the solution to this eigenproblem for two other grid spacings and compare the three sets of results in normalized form.
108. Consider the eigenproblem
   ȳ″ + ȳ′ + k²ȳ = 0    ȳ(0) = ȳ(1) = 0
   Estimate the first three eigenvalues by setting up a system of second-order finite difference equations with Δx = 0.25.
109. Work Problem 108 for the eigenproblem
   ȳ″ + (1 + x)ȳ′ + k²ȳ = 0    ȳ(0) = ȳ(1) = 0
110. Work Problem 108 for the eigenproblem
   ȳ″ + ȳ′ + k²(1 + x)ȳ = 0    ȳ(0) = ȳ(1) = 0

8.10 Programs

111. Implement the fourth-order Runge-Kutta shooting method program presented in Section 8.10.1. Check out the program using the given data set.
112. Solve any of Problems 6, 9, 12, ..., 42 with the fourth-order Runge-Kutta method program.
113. Implement the second-order equilibrium method program presented in Section 8.10.2. Check out the program using the given data set.
114. Solve any of Problems 43 to 51 with the second-order equilibrium method program.

APPLIED PROBLEMS

Several applied problems from various disciplines are presented in this section. All these problems can be solved by any of the methods presented in this chapter. An infinite variety of exercises can be constructed by changing the numerical values of the parameters, the grid size Δx, and so on.

115. The temperature distribution in the wall of a pipe through which a hot liquid is flowing is given by the ODE
   d²T/dr² + (1/r) dT/dr = 0    T(1) = 100 C and T(2) = 0 C
   Determine the temperature distribution in the wall.
116. The pipe described in Problem 115 is cooled by convection on the outer surface. Thus, the heat conduction q̇cond at the outer wall is equal to the heat convection q̇conv to the surroundings:
   q̇cond = −k dT/dr = q̇conv = h(T − Ta)
   where the thermal conductivity k = 100.0 J/(s-m-K), the convective cooling coefficient h = 500.0 J/(s-m²-K), and Ta = 0.0 C is the temperature of the surroundings. Determine the temperature distribution in the wall.
117. The temperature distribution in a cylindrical rod made of a radioactive isotope is governed by the ordinary differential equation
   (1/r) d/dr (r dT/dr) = −A(1 + r/R)    T′(0) = 0 and T(R) = 0
   where R = 1.0 and A = −100.0. Determine the temperature distribution in the rod.
118. The velocity distribution in the laminar boundary layer formed when an incompressible fluid flows over a flat plate is related to the solution of the Blasius ordinary differential equation with
   f(0) = 0, f′(0) = 0, and f′(η) → 1 as η → ∞
   where f is a dimensionless stream function, the velocity u is proportional to f′(η), and η is proportional to distance normal to the plate. Solve this problem for f(η).
119. The deflection of a simply supported and uniformly loaded beam is governed by the ordinary differential equation (for small deflections)
   EI d²y/dx² = qLx/2 − qx²/2    y(0) = 0 and y(L) = 0
   where q is the uniform load per unit length, I is the moment of inertia of the beam cross section, E is the modulus of elasticity, and L is the length of the beam.
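As an illustration of how the chapter's second-order equilibrium method applies to Applied Problem 115, the sketch below (the program name, grid size, and layout are assumptions, and it reuses the illustrative thomas routine sketched in Section 8.10) discretizes T″ + (1/r)T′ = 0 with centered differences on a uniform grid in r and solves the resulting tridiagonal system. For the boundary values stated above, the exact solution T = 100[1 − ln(r)/ln(2)] provides a check.

      program pipe
c     illustrative sketch for Applied Problem 115:
c     T'' + (1/r) T' = 0, T(1) = 100, T(2) = 0
      parameter (imax=11)
      dimension a(imax,3),b(imax),t(imax),r(imax)
      dr=1.0/float(imax-1)
      do i=1,imax
         r(i)=1.0+float(i-1)*dr
      end do
c     boundary rows: Dirichlet at both walls
      a(1,2)=1.0
      a(1,3)=0.0
      b(1)=100.0
      a(imax,1)=0.0
      a(imax,2)=1.0
      b(imax)=0.0
c     interior rows: centered differences for T'' and T'
      do i=2,imax-1
         a(i,1)=1.0-0.5*dr/r(i)
         a(i,2)=-2.0
         a(i,3)=1.0+0.5*dr/r(i)
         b(i)=0.0
      end do
      call thomas (imax,imax,a,b,t)
      write (6,1000) (r(i),t(i),i=1,imax)
 1000 format (2f12.6)
      end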

120. Consider a beam (E = 200 GN/m², 5.0 m long, 5.0 cm wide, and 10.0 cm high) which is subjected to the uniform load q = −1,500.0 N/m on the 5.0 cm face. Solve for the deflection y(x). For a rectangular beam, I = wh³/12, where w is the width and h is the height.
121. When the load on the beam described in Problem 120 is applied on the 10.0 cm face, the deflection will be large. In that case, the governing differential equation is
   EI (d²y/dx²) / [1 + (dy/dx)²]^(3/2) = qLx/2 − qx²/2
   For the properties specified in Problem 120, determine y(x).

III. Partial Differential Equations

III.1. Introduction
III.2. General Features of Partial Differential Equations
III.3. Classification of Partial Differential Equations
III.4. Classification of Physical Problems
III.5. Elliptic Partial Differential Equations
III.6. Parabolic Partial Differential Equations
III.7. Hyperbolic Partial Differential Equations
III.8. The Convection-Diffusion Equation
III.9. Initial Values and Boundary Conditions
III.10. Well-Posed Problems
III.11. Summary

III.1 INTRODUCTION

Partial differential equations (PDEs) arise in all fields of engineering and science. Most real physical processes are governed by partial differential equations. In many cases, simplifying approximations are made to reduce the governing PDEs to ordinary differential equations (ODEs) or even to algebraic equations. However, because of the ever increasing requirement for more accurate modeling of physical processes, engineers and scientists are more and more required to solve the actual PDEs that govern the physical problem being investigated. Part III is devoted to the solution of partial differential equations by finite difference methods.

Some general features of partial differential equations are discussed in this section. The three classes of PDEs (i.e., elliptic, parabolic, and hyperbolic PDEs) are introduced. The two types of physical problems (i.e., equilibrium and propagation problems) are discussed. For simplicity of notation, the phrase partial differential equation frequently will be replaced by the acronym PDE in Part III. This replacement generally makes the text flow more smoothly and more succinctly, without losing the meaning of the material.

The objectives of Part III are:

1. To present the general features of partial differential equations
2. To discuss the relationship between the type of physical problem being solved, the classification of the corresponding governing partial differential equation, and the type of numerical method required
3. To present examples to illustrate these concepts.

III.2 GENERAL FEATURES OF PARTIAL DIFFERENTIAL EQUATIONS

A partial differential equation (PDE) is an equation stating a relationship between a function of two or more independent variables and the partial derivatives of this function with respect to these independent variables. In most problems in engineering and science, the independent variables are either space (x, y, z) or space and time (x, y, z, t). The dependent variable depends on the physical problem being modeled. The dependent variable f is used as a generic dependent variable throughout Part III. Examples of three simple partial differential equations having two independent variables are presented below:

∂²f/∂x² + ∂²f/∂y² = 0    (III.1)
∂f/∂t = α ∂²f/∂x²    (III.2)
∂²f/∂t² = c² ∂²f/∂x²    (III.3)

Equation (III.1) is the two-dimensional Laplace equation, Eq. (III.2) is the one-dimensional diffusion equation, and Eq. (III.3) is the one-dimensional wave equation. For simplicity of notation, Eqs. (III.1) to (III.3) usually will be written as

f_xx + f_yy = 0    (III.4)
f_t = α f_xx    (III.5)
f_tt = c² f_xx    (III.6)

where the subscripts denote partial differentiation.

The solution of a partial differential equation is that particular function, f(x, y) or f(x, t), which satisfies the PDE in the domain of interest, D(x, y) or D(x, t), respectively, and satisfies the initial and/or boundary conditions specified on the boundaries of the domain of interest. In a very few special cases, the solution of a PDE can be expressed in closed form. In the majority of problems in engineering and science, the solution must be obtained by numerical methods. Such problems are the subject of Part III.

Equations (III.4) to (III.6) are examples of partial differential equations in two independent variables, x and y, or x and t. Equation (III.4), which is the two-dimensional Laplace equation, in three independent variables is

∇²f = f_xx + f_yy + f_zz = 0    (III.7)

Partial Differential Equations where V2 is the Laplacian operator. Equations(III. is ff~ + Z. is a forcing function.6). a source term. in four independentvariables is f. 12) are linear partial diff erential equa tions. then the PDE nonlinear.13) wherea and b are constants. y. A linear PDE is one in which all of the partial derivatives appear in linear form and none of the coefficients dependson the dependentvariable. The nonhomogeneous F(x. Equations(III. in which case the PDE a linear. The order of a PDE determinedby the highest-order derivative appearing in the equation. 16) Equation(III.g = 0 (In. 14) . An example of a nonhomogeneous PDEis given by v~U=f~x + f~ + f~. y. in four independent (III. For example.4) to (III.4) (IlL 12) are all linear PDEs.nor does it usually change or complicate the numerical methodof solution. Other physical problemsare governedby fourth-order PDEs such as f~ +f=~y +fyyyy 0 = (III. is a variable coefficient linear PDE. The appearance of a nonhomogeneous term in a partial differential equation does not changethe general features of the PDE. + bfx = 0 (Ill.10) The parameter c is the wave propagation speed. dissipation function. Problems in two. depending on the application. z). whichis known the Poisson as equation. which in Cartesian coordinates is 503 V2= a2 ~+~ a2 a~ Equation (III.f~+ef~=0 {ii~. or the derivatives appear in a nonlinear form.9) f = ~(f~ +f~y +f~z) ~ VZ f The parametera is the diffusion coefficient. .5).4) to (III. 16) is the nonhomogeneous Laplace equation. Equation(III.. 1 l) where a and b are constants. z) (III.whereasEqs. (III. are nonlinear PDEs.Xfx 0 = (m. variables is (1II. = c2(Lx +Az)2 v +fyy = 2f (III.4) to (III.8) which is the one-dimensionaldiffusion equation. A is large number of physical problems are governed by second-order PDEs. Somephysical problems are governed by a first-order PDEof the form af.If the coefficients dependon the dependentvariable. 15) are homo geneouspartial diff erential equations. three.12) Equations (III. PDE. +~.10) are second-orderpartial diff erential equations. whichis the one-dimensional waveequation. and four independentvariables occur throughout engineering and science. variable coefficient. af.For is example. or a term. The coefficients maybe functions of the independentvariables. =F(x.

Equations (III.4) to (III.16) are all examples of a single partial differential equation governing one dependent variable. Many physical problems are governed by a system of PDEs involving several dependent variables. For example, the two PDEs

a f_t + b g_x = 0    (III.17a)
A g_t + B f_x = 0    (III.17b)

comprise a system of two coupled partial differential equations in two independent variables (x and t) for determining the two dependent variables, f(x, t) and g(x, t). Systems containing several PDEs occur frequently, and systems containing higher-order PDEs occur occasionally. Systems of PDEs are generally more difficult to solve numerically than a single PDE.

As illustrated in the preceding discussion, a wide variety of partial differential equations exists. Each problem has its own special governing equation or equations and its own peculiarities which must be considered individually. However, useful insights into the general features of PDEs can be obtained by studying three special cases. The first special case is the general quasilinear (i.e., linear in the highest-order derivative) second-order nonhomogeneous PDE in two independent variables, which is

A f_xx + B f_xy + C f_yy + D f_x + E f_y + F f = G    (III.18)

The second special case is the general quasilinear first-order nonhomogeneous PDE in two independent variables, which is

a f_t + b f_x = c    (III.19)

The third special case is the system of two general quasilinear first-order nonhomogeneous PDEs in two independent variables, which can be written as

a f_t + b f_x + c g_t + d g_x = e    (III.20a)
A f_t + B f_x + C g_t + D g_x = E    (III.20b)

where the coefficients a to d and A to D and the nonhomogeneous terms e and E may depend on x, t, f, and g. The general features of these three special cases are similar to the general features of all the PDEs discussed in this book. Consequently, these three special cases are studied thoroughly in the following sections.

III.3 CLASSIFICATION OF PARTIAL DIFFERENTIAL EQUATIONS

Physical problems are governed by many different partial differential equations. A few problems are governed by a single first-order PDE. Numerous problems are governed by a system of first-order PDEs. Some problems are governed by a single second-order PDE, and numerous problems are governed by a system of second-order PDEs.

A few problems are governed by fourth-order PDEs. The classification of PDEs is most easily explained for a single second-order PDE. Consequently, in the following discussion, the general quasilinear (i.e., linear in the highest-order derivative) second-order nonhomogeneous PDE in two independent variables [i.e., Eq. (III.18)] is classified first. The classification of the general quasilinear first-order nonhomogeneous PDE in two independent variables [i.e., Eq. (III.19)] is studied next. Finally, the classification of the system of two quasilinear first-order nonhomogeneous PDEs [i.e., Eq. (III.20)] is studied. The classification of higher-order PDEs, larger systems of PDEs, and PDEs having more than two independent variables is considerably more complicated.

The general quasilinear second-order nonhomogeneous partial differential equation in two independent variables is [see Eq. (III.18)]

A f_xx + B f_xy + C f_yy + D f_x + E f_y + F f = G    (III.21)

where the coefficients A to C may depend on x, y, f_x, and f_y, the coefficients D to F may depend on x, y, and f, and the nonhomogeneous term G may depend on x and y. The classification of Eq. (III.21) depends on the sign of the discriminant, B² − 4AC. The terminology elliptic, parabolic, and hyperbolic chosen to classify PDEs reflects the analogy between the form of the discriminant, B² − 4AC, for PDEs and the form of the discriminant, B² − 4AC, which classifies conic sections. Conic sections are described by the general second-order algebraic equation

Ax² + Bxy + Cy² + Dx + Ey + F = 0    (III.22)

The type of curve represented by Eq. (III.22) depends on the sign of the discriminant, B² − 4AC, as follows:

B² − 4AC    Type of curve
Negative    Ellipse
Zero        Parabola
Positive    Hyperbola

The analogy to the classification of PDEs is obvious. There is no other significance to the terminology. Accordingly, Eq. (III.21) is classified as follows:

B² − 4AC    Classification
Negative    Elliptic
Zero        Parabolic
Positive    Hyperbolic

What is the significance of the above classification? What impact, if any, does the classification of a PDE have on the allowable and/or required initial and boundary conditions? Does the classification of a PDE have any effect on the choice of numerical method employed to solve the equation? These questions are discussed in this section, and the results are applied to physical problems in the next section.

The classification of a PDE is intimately related to the characteristics of the PDE. Characteristics are (n − 1)-dimensional hypersurfaces in n-dimensional hyperspace that have some very special features.
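The discriminant test is trivial to mechanize. The following fragment is a minimal illustrative sketch (the program name and the sample coefficients are assumptions, not from the text) that classifies the three model equations, Eqs. (III.4) to (III.6), each written in the form of Eq. (III.21): the Laplace equation gives B² − 4AC = −4 (elliptic), the diffusion equation gives 0 (parabolic), and the wave equation gives a positive discriminant (hyperbolic).

      program pdetyp
c     illustrative sketch: classify Eq. (III.21) by the sign of
c     the discriminant b**2 - 4*a*c for three sample equations
      dimension a(3),b(3),c(3)
c     Laplace:   fxx + fyy = 0         -> a=1, b=0, c=1
c     diffusion: fxx - ft = 0 (alpha=1)-> a=1, b=0, c=0
c     wave:      fxx - ftt = 0 (cc=1)  -> a=1, b=0, c=-1
      data a / 1.0, 1.0, 1.0 /
      data b / 0.0, 0.0, 0.0 /
      data c / 1.0, 0.0, -1.0 /
      do 10 n=1,3
         disc=b(n)*b(n)-4.0*a(n)*c(n)
         if (disc.lt.0.0) write (6,*) n,' elliptic,   disc = ',disc
         if (disc.eq.0.0) write (6,*) n,' parabolic,  disc = ',disc
         if (disc.gt.0.0) write (6,*) n,' hyperbolic, disc = ',disc
   10 continue
      end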

The prefix hyper is used to denote spaces of more than three dimensions, that is, xyzt spaces, and curves and surfaces within those spaces. In two-dimensional space, which is the case considered here, characteristics are paths (curved, in general) in the solution domain along which information propagates. If a PDE possesses real characteristics, then information propagates along these characteristics. If no real characteristics exist, then there are no preferred paths of information propagation. Consequently, the presence or absence of characteristics has a significant impact on the solution of a PDE (by both analytical and numerical methods).

A simple physical example can be used to illustrate the physical significance of characteristic paths. Convection is the process in which a physical property is propagated (i.e., convected) through space by the motion of the medium occupying the space. Fluid flow is a common example of convection. The convection of a property f of a fluid particle in one dimension is governed by the convection equation

f_t + u f_x = 0    (III.23)

where u is the convection velocity. A moving fluid particle carries (convects) its mass, momentum, and energy with it as it moves through space. The location x(t) of the fluid particle is related to its velocity u(t) by the relationship

dx/dt = u    (III.24)

The path of the fluid particle, called its pathline, is given by

x = x0 + ∫ u dt    (from t0 to t)    (III.25)

The pathline is illustrated in Figure III.1a. Along the pathline, the convection equation [i.e., Eq. (III.23)] can be written as

f_t + u f_x = f_t + (dx/dt) f_x = df/dt = 0    (III.26)

which can be integrated to yield f = constant. Consequently, the fluid property f is convected along the pathline, which is the characteristic path associated with the convection equation. Equation (III.24), which is the differential equation of the characteristic path, is generally called the characteristic equation. Equation (III.26), which is the differential equation which applies along the characteristic path, is generally called the compatibility equation. The physical significance of the pathline (i.e., the characteristic path) as the path of propagation of the fluid property f is quite apparent for fluid convection.

Figure III.1. Pathline as the characteristic for the convection equation: (a) Pathline. (b) Triangular property distribution.
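Because f is constant along each pathline, the exact solution of Eq. (III.23) for constant u is simply f(x, t) = f(x − ut, 0): the initial distribution is carried along the characteristics unchanged. The following short program is a minimal illustrative sketch of this result (the program and function names, the grid, and the triangular initial profile, like the one in Figure III.1b, are assumptions):

      program convect
c     illustrative sketch: f(x,t) = f0(x - u*t) solves ft + u*fx = 0,
c     i.e., the initial profile is carried along the characteristics
      parameter (imax=21)
      dimension f(imax)
      u=1.0
      t=0.5
      dx=1.0/float(imax-1)
      do i=1,imax
         x=float(i-1)*dx
         f(i)=f0(x-u*t)
      end do
      write (6,1000) (f(i),i=1,imax)
 1000 format (5f12.6)
      end
      function f0 (x)
c     triangular initial property distribution on 0 <= x <= 1
      f0=0.0
      if (x.ge.0.0.and.x.le.0.5) f0=2.0*x
      if (x.gt.0.5.and.x.le.1.0) f0=2.0*(1.0-x)
      return
      end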

To illustrate further the property of a characteristic path as the path of propagation in a convection problem, consider the triangular property distribution illustrated in Figure III.1b. As the fluid particles move to the right at the constant convection velocity u, each particle carries with it its value of the property f. Consequently, the triangular property distribution simply moves (i.e., convects) to the right at the constant convection velocity u, unchanged in magnitude and shape. The apex of the triangle, which is a point of discontinuous slope in the property distribution, convects as a discontinuity in slope at the convection velocity u. This simple convection example illustrates the significance of characteristic paths.

Let's return to the classification of Eq. (III.21). Several procedures exist for determining the characteristics, and hence the classification, of PDEs. One approach is to answer the following question: Are there any paths in the solution domain D(x, y) passing through a general point P along which the second derivatives of f(x, y), that is, f_xx, f_xy, and f_yy, are multivalued or discontinuous? Such paths, if they exist, are the paths of information propagation, that is, the characteristics. Discontinuities in the derivatives of the dependent variable (if they exist) also propagate along the characteristic paths.

One relationship for determining the three second derivatives of f(x, y) is given by the partial differential equation itself, Eq. (III.21). Two more relationships are obtained by applying the chain rule to determine the total derivatives of f_x and f_y, which are themselves functions of x and y:

d(f_x) = f_xx dx + f_xy dy    (III.27a)
d(f_y) = f_yx dx + f_yy dy    (III.27b)

Equations (III.21) and (III.27) can be written in matrix form as follows:

| A   B   C  | |f_xx|   | G − D f_x − E f_y − F f |
| dx  dy  0  | |f_xy| = | d(f_x)                  |
| 0   dx  dy | |f_yy|   | d(f_y)                  |    (III.28)

Equation (III.28) can be solved by Cramer's rule to yield unique finite values of f_xx, f_xy, and f_yy, unless the determinant of the coefficient matrix vanishes. In that case, the second derivatives of f(x, y) are either infinite, which is physically meaningless, or they are indeterminate, and thus multivalued or discontinuous.

Setting the determinant of the coefficient matrix of Eq. (III.28) equal to zero yields

A(dy)² − B(dx)(dy) + C(dx)² = 0    (III.29)

Equation (III.29) is the characteristic equation corresponding to Eq. (III.21). Equation (III.29) can be solved by the quadratic formula to yield

dy/dx = [B ± √(B² − 4AC)] / 2A    (III.30)

Equation (III.30) is the differential equation for two families of curves in the xy plane, corresponding to the ± signs. Along these two families of curves, the second derivatives of f(x, y) may be multivalued or discontinuous. These two families of curves, if they exist, are the characteristic paths of the original PDE, Eq. (III.21).
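Equation (III.30) also yields the characteristic directions directly. As a minimal illustrative sketch (the program name and sample coefficients are assumptions), the fragment below evaluates dy/dx for the wave equation written in the form of Eq. (III.21), with y playing the role of t (A = c², B = 0, C = −1), for which the two characteristic slopes are dt/dx = ±1/c, that is, propagation speeds dx/dt = ±c:

      program chslope
c     illustrative sketch: characteristic slopes from Eq. (III.30)
c     sample coefficients for the wave equation with cc = 2:
c     a = cc**2, b = 0, c = -1
      cc=2.0
      a=cc*cc
      b=0.0
      c=-1.0
      disc=b*b-4.0*a*c
      if (disc.lt.0.0) then
         write (6,*) 'elliptic: characteristics are complex'
      else
         s1=(b+sqrt(disc))/(2.0*a)
         s2=(b-sqrt(disc))/(2.0*a)
         write (6,*) 'dy/dx = ',s1,' and ',s2
      end if
      end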

Because Eq. (III.30) depends on the coefficients A, B, and C, which are themselves functions of x and y, the two families of characteristic curves may be complex, real and repeated, or real and distinct, according to whether the discriminant, B² − 4AC, is negative, zero, or positive, respectively. Accordingly, Eq. (III.21) is classified as follows:

B² − 4AC    Characteristic curves    Classification
Negative    Complex                  Elliptic
Zero        Real and repeated        Parabolic
Positive    Real and distinct        Hyperbolic

Consequently, elliptic PDEs have no real characteristic paths, parabolic PDEs have one real repeated characteristic path, and hyperbolic PDEs have two real distinct characteristic paths.

The presence of characteristic paths in the solution domain leads to the concepts of domain of dependence and range of influence. Consider a point P in the solution domain, D(x, y). The domain of dependence of point P is defined as the region of the solution domain upon which the solution at point P, f(x_P, y_P), depends. In other words, f(x_P, y_P) depends on everything that has happened in the domain of dependence. The range of influence of point P is defined as the region of the solution domain in which the solution f(x, y) is influenced by the solution at point P. In other words, f(x_P, y_P) influences the solution at all points in the range of influence.

Recall that parabolic and hyperbolic PDEs have real characteristic paths. Consequently, they have specific domains of dependence and ranges of influence. Elliptic PDEs, on the other hand, do not have real characteristic paths. Consequently, they have no specific domains of dependence or ranges of influence. In effect, the entire solution domain of an elliptic PDE is both the domain of dependence and the range of influence of every point in the solution domain. Figure III.2 illustrates the concepts of domain of dependence and range of influence for elliptic, parabolic, and hyperbolic PDEs.

Figure III.2. Domain of dependence (horizontal hatching) and range of influence (vertical hatching) of PDEs: (a) elliptic PDE; (b) parabolic PDE; (c) hyperbolic PDE.

Unlike the second-order PDE just discussed, a single first-order PDE is always hyperbolic. Consider the classification of the single general quasilinear first-order nonhomogeneous PDE, Eq. (III.19):

a f_t + b f_x = c    (III.31)

The characteristic paths, if they exist, are determined by answering the following question: Are there any paths in the solution domain D(x, t) passing through a general point P along which the first derivatives of f(x, t) may be discontinuous? Such paths, if they exist, are the characteristics of Eq. (III.31).

One relationship for determining f_t and f_x is given by Eq. (III.31). Another relationship is given by the total derivative of f(t, x):

df = f_t dt + f_x dx    (III.32)

Equations (III.31) and (III.32) can be written in matrix form as

| a   b  | |f_t|   | c  |
| dt  dx | |f_x| = | df |    (III.33)

As before, the partial derivatives f_t and f_x are uniquely determined unless the determinant of the coefficient matrix of Eq. (III.33) is zero. Setting that determinant equal to zero gives the characteristic equation, which is

a dx − b dt = 0    (III.34)

Solving Eq. (III.34) for dx/dt gives

dx/dt = b/a    (III.35)

Equation (III.35) is the differential equation for a family of paths in the solution domain along which f_t and f_x may be discontinuous, or multivalued, that is, the characteristics of Eq. (III.31). Since a and b are real functions, the characteristic paths always exist. Consequently, a single quasilinear first-order PDE is always hyperbolic. The convection equation, Eq. (III.23), is an example of such a PDE.

As a third example, consider the classification of the system of two general coupled quasilinear first-order nonhomogeneous partial differential equations, Eq. (III.20):

a f_t + b f_x + c g_t + d g_x = e    (III.36a)
A f_t + B f_x + C g_t + D g_x = E    (III.36b)

The characteristic paths, if they exist, are determined by answering the following question: Are there any paths in the solution domain D(x, t) passing through a general point P along which the first derivatives of f(x, t) and g(x, t) are not uniquely determined? Such paths, if they exist, are the characteristics of Eq. (III.36).

Two relationships for determining the four first derivatives of f(x, t) and g(x, t) are given by Eq. (III.36). Two more relationships are given by the total derivatives of f and g:

df = f_t dt + f_x dx    (III.37a)
dg = g_t dt + g_x dx    (III.37b)

Equations (III.36) and (III.37), which comprise a system of four equations for determining f_t, f_x, g_t, and g_x, can be written in matrix form as follows:

| a   b   c   d  | |f_t|   | e  |
| A   B   C   D  | |f_x| = | E  |
| dt  dx  0   0  | |g_t|   | df |
| 0   0   dt  dx | |g_x|   | dg |    (III.38)

As before, the partial derivatives are uniquely determined unless the determinant of the coefficient matrix of Eq. (III.38) is zero. Setting that determinant equal to zero yields the characteristic equation, which is

(aC − Ac)(dx)² − (aD − Ad + bC − Bc)(dx)(dt) + (bD − Bd)(dt)² = 0    (III.39)

Equation (III.39), which is a quadratic equation in dx/dt, may be written as

α(dx)² − β(dx)(dt) + γ(dt)² = 0    (III.40)

where α = (aC − Ac), β = (aD − Ad + bC − Bc), and γ = (bD − Bd). Equation (III.40) can be solved by the quadratic formula to yield

dx/dt = [β ± √(β² − 4αγ)] / 2α    (III.41)

Equation (III.41) is the differential equation for two families of curves in the xt plane, corresponding to the ± signs. These two families of curves, if they exist, are the characteristic paths of the original system of PDEs. Along these two families of curves, the first derivatives of f(x, t) and g(x, t) may be multivalued. The slopes of the two families of characteristic paths may be complex, real and repeated, or real and distinct, according to whether the discriminant, β² − 4αγ, is negative, zero, or positive, respectively. Accordingly, Eq. (III.36) is classified as follows:

β² − 4αγ    Classification
Negative    Elliptic
Zero        Parabolic
Positive    Hyperbolic

In summary, the physical interpretation of the classification of a partial differential equation can be explained in terms of its characteristics. If real characteristics exist, preferred paths of information propagation exist. The speed of propagation of information through the solution domain depends on the slopes of the characteristics. Specific domains of dependence and ranges of influence exist for every point in the solution domain. Physical problems governed by PDEs that have real characteristics are propagation problems. Thus, parabolic and hyperbolic PDEs govern propagation problems.

If the slopes of the characteristics are complex, then no real characteristics exist. There are no preferred paths of information propagation. The domain of dependence and range of influence of every point is the entire solution domain. The solution at every point depends on the solution at all the other points, and the solution at each point influences the solution at all the other points.
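The characteristic speeds of a system follow directly from Eq. (III.41). The fragment below is a minimal illustrative sketch (the program name and sample coefficients are assumptions): with f = u_t and g = u_x, the wave equation u_tt = c² u_xx reduces to the system u_t − c² v_x = 0, v_t − u_x = 0, for which α = 1, β = 0, and γ = −c², so the computed speeds are dx/dt = ±c, as expected.

      program charsys
c     illustrative sketch: characteristic speeds of the system
c     (III.36) from Eq. (III.41); sample coefficients reduce the
c     wave equation (cc = 2) to ut - cc**2*vx = 0, vt - ux = 0
      cc=2.0
      a1=1.0
      b1=0.0
      c1=0.0
      d1=-cc*cc
      a2=0.0
      b2=-1.0
      c2=1.0
      d2=0.0
      alpha=a1*c2-a2*c1
      beta=a1*d2-a2*d1+b1*c2-b2*c1
      gamma=b1*d2-b2*d1
      disc=beta*beta-4.0*alpha*gamma
      if (disc.lt.0.0) then
         write (6,*) 'elliptic: no real characteristics'
      else
         s1=(beta+sqrt(disc))/(2.0*alpha)
         s2=(beta-sqrt(disc))/(2.0*alpha)
         write (6,*) 'dx/dt = ',s1,' and ',s2
      end if
      end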

Physical problems governed by PDEs that have complex characteristics are equilibrium problems. Consequently, elliptic PDEs govern equilibrium problems.

The significance of the classification of a PDE as it relates to the numerical approximation of the PDE is as follows. For an elliptic PDE, which contains only second-order spatial derivatives, there are no preferred physical information propagation paths: the solution at every point in the solution domain is influenced by the solution at all the other points, and the solution at each point influences the solution at all the other points. This physical behavior should be accounted for in the numerical approximation of the PDE.

For a parabolic PDE which contains only second-order spatial derivatives, the preferred physical information propagation paths are lines (or surfaces) of constant time (or constant timelike variable). In other words, at each time (or timelike variable) level, points are dependent on all other points and all points influence all other points. This physical behavior should be accounted for in the numerical approximation of the PDE.

For a hyperbolic PDE which contains only first-order spatial derivatives, distinct physical information propagation paths exist. Physical information propagates along these distinct physical propagation paths. This physical behavior should be accounted for in the numerical approximation of the PDE.

Elliptic and parabolic PDEs exist which contain first-order spatial derivatives in addition to second-order spatial derivatives. In such cases, the physical behavior associated with the second-order spatial derivatives is the same as before, but the physical behavior associated with the first-order spatial derivatives acts similarly to the behavior of the first-order spatial derivatives in a hyperbolic PDE. This physical behavior should be accounted for in the numerical approximation of such PDEs.

These concepts are related to the classification of physical problems in the next section. A clear understanding of these concepts is essential if meaningful numerical solutions are to be obtained.

III.4 CLASSIFICATION OF PHYSICAL PROBLEMS

Physical problems fall into one of the following three general classifications:

1. Equilibrium problems
2. Propagation problems
3. Eigenproblems

Each of these three types of physical problems has its own special features, its own particular type of governing partial differential equation, and its own special numerical solution method.

III.4.1 Equilibrium Problems

Equilibrium problems are steady-state problems in closed domains D(x, y) in which the solution f(x, y) is governed by an elliptic PDE subject to boundary conditions specified at each point on the boundary B of the domain. Equilibrium problems are jury problems, in which the entire solution is passed on by a jury requiring satisfaction of all internal requirements (i.e., the PDE) and all the boundary conditions simultaneously. As illustrated in the previous section, elliptic PDEs have no real characteristics. Thus, the solution at every point in the solution domain is influenced by the solution at all the other points, and the solution at each point influences the solution at all the other points. Since there are no curves along which the derivatives may be discontinuous, the solution throughout the entire solution domain must be continuous.

Equilibrium problems arise in all fields of engineering and science. Equilibrium problems in partial differential equations are analogous to boundary-value problems in ordinary differential equations, which are considered in Chapter 8.

A classical example of an equilibrium problem governed by an elliptic PDE is steady heat diffusion (i.e., conduction) in a solid (see Section III.5). The governing PDE is Laplace's equation

∇²T = 0    (III.42)

where T is the temperature of the solid. In two dimensions, Eq. (III.42) is

T_xx + T_yy = 0    (III.43)

Along the boundary B, the temperature T(x, y) is subject to the boundary condition

aT + bT_n = c    (III.44)

at each point on the boundary, where T_n denotes the derivative normal to the boundary. Figure III.3 illustrates the closed solution domain D(x, y) and its boundary B. Consequently, equilibrium problems are solved numerically by relaxation methods.

[Figure III.3. Solution domain for an equilibrium problem: the closed domain D(x, y) and its boundary B, on which f(B) is given; the domain of dependence and range of influence of every point is the entire domain.]

III.4.2 Propagation Problems

Propagation problems are initial-value problems in open domains (open with respect to one of the independent variables) in which the solution f(x, t) in the domain of interest D(x, t) is marched forward from the initial state, guided and modified by boundary conditions. Propagation problems are governed by parabolic or hyperbolic PDEs. Propagation problems in PDEs are analogous to initial-value problems in ODEs, which are considered in Chapter 7.

The majority of propagation problems are unsteady problems. The diffusion equation

f_t = α f_xx    (III.45a)

is an example of an unsteady propagation problem, in which the initial property distribution at time t₀, f(x, t₀) = F(x), is marched forward in time. A few propagation problems are steady-state problems. An example of a steady-state propagation problem is

f_y = β f_xx    (III.45b)

in which the initial property distribution at location y₀, f(x, y₀) = F(x), is marched forward in space in the y direction.
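Returning briefly to the equilibrium example above, the following minimal sketch (free-form Fortran, not one of the book's Programs; the grid size, boundary values, and tolerance are arbitrary choices) applies Jacobi relaxation to the standard five-point approximation of Eq. (III.43) on a unit square, with Dirichlet data, i.e., b = 0 in Eq. (III.44).

program laplace_relaxation
  implicit none
  integer, parameter :: n = 21          ! grid points per side (illustrative)
  real, parameter    :: tol = 1.0e-4    ! convergence tolerance (illustrative)
  real :: t(n,n), tnew(n,n), change
  integer :: i, j, iter
  t = 0.0
  t(:,n) = 100.0                        ! Dirichlet data: T = 100 on the top side
  tnew = t
  do iter = 1, 10000
     change = 0.0
     do j = 2, n-1
        do i = 2, n-1
           ! five-point average: the discrete form of Eq. (III.43)
           tnew(i,j) = 0.25*(t(i-1,j) + t(i+1,j) + t(i,j-1) + t(i,j+1))
           change = max(change, abs(tnew(i,j) - t(i,j)))
        end do
     end do
     t = tnew
     if (change < tol) exit             ! relax until the solution stops changing
  end do
  print *, 'iterations:', iter, '  T at center:', t((n+1)/2, (n+1)/2)
end program laplace_relaxation

Note how every interior point is repeatedly averaged with its neighbors: the converged solution at each point reflects the boundary data on the entire boundary, consistent with the jury character of equilibrium problems.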

The marching direction in a steady-state space propagation problem is called the timelike direction, and the corresponding coordinate is called the timelike coordinate. The space direction in which diffusion occurs [i.e., the x direction in Eqs. (III.45a) and (III.45b)] is called the spacelike direction, and the corresponding coordinate is called the spacelike coordinate. In the present discussion, unsteady and steady propagation problems are considered simultaneously by considering the time coordinate t in the diffusion equation, Eq. (III.45a), to be the timelike coordinate, with the space coordinate y in Eq. (III.45b) taking on the character of the time coordinate t in the diffusion equation. The general features of these two PDEs are identical; thus, Eq. (III.45a) models both unsteady and steady propagation problems.

The solution of a propagation problem is subject to initial conditions specified at a particular value of the timelike coordinate and boundary conditions specified at each point on the spacelike boundary. The domain of interest D(x, t) is open in the direction of the timelike coordinate. Figure III.4 illustrates the open solution domain D(x, t) and its boundary B, which is composed of the initial time boundary and the two physical boundaries. Propagation problems are initial-value problems, which are solved by marching methods.

[Figure III.4. Solution domain for a propagation problem: the domain is open in the march direction t, with boundary conditions f(0, t) and f(L, t) specified on the physical boundaries and the initial condition f(x, 0) specified on the initial time boundary.]

A classical example of a propagation problem governed by a parabolic PDE is unsteady heat diffusion in a solid (see Section III.6). The governing PDE is the diffusion equation

T_t = α ∇²T    (III.46)

where T is the temperature and α is the thermal diffusivity of the solid. In one space dimension, Eq. (III.46) is

T_t = α T_xx    (III.47)

Since Eq. (III.47) is first order in time, values of T must be specified along the initial time boundary. Since Eq. (III.47) is second order in space, values of T must be specified along both space boundaries.

As shown in Section III.6 for the diffusion equation, parabolic PDEs have real repeated characteristics. Consequently, parabolic PDEs have specific domains of dependence and ranges of influence and infinite information propagation speed. Thus, the solution at each point in the solution domain depends on a specific domain of dependence and influences the solution in a specific range of influence.
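A minimal marching sketch for Eq. (III.47) follows (all numerical values are illustrative): the explicit forward-time centered-space scheme advances T from an initial distribution, with T specified along both space boundaries as required above. The diffusion number αΔt/Δx² is held at 1/2, the usual stability bound for this particular explicit scheme.

program diffusion_march
  implicit none
  integer, parameter :: imax = 21, nsteps = 100
  real, parameter :: alpha = 0.01               ! thermal diffusivity (illustrative)
  real, parameter :: dx = 1.0/real(imax-1)
  real, parameter :: d  = 0.5                   ! diffusion number alpha*dt/dx**2
  real, parameter :: dt = d*dx**2/alpha
  real, parameter :: pi = 3.1415927
  real :: t(imax), tnew(imax)
  integer :: i, n
  do i = 1, imax
     t(i) = sin(pi*real(i-1)*dx)                ! initial condition on the time boundary
  end do
  do n = 1, nsteps                              ! march in the timelike direction
     tnew(1) = 0.0; tnew(imax) = 0.0            ! T specified on both space boundaries
     do i = 2, imax-1
        tnew(i) = t(i) + d*(t(i+1) - 2.0*t(i) + t(i-1))
     end do
     t = tnew
  end do
  print *, 'T at x = 0.5 after', nsteps, 'steps:', t((imax+1)/2)
end program diffusion_march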

In two variables (e.g., space x and time t), parabolic PDEs have two real repeated families of characteristics. As illustrated in Figure III.5, both families of characteristics have zero slope in the xt plane, which corresponds to an infinite information propagation speed. Thus, parabolic PDEs behave like hyperbolic PDEs in the limit where the information propagation speed is infinite. The solution at point P depends on the entire solution domain upstream of and including the horizontal line through point P itself, and the solution at point P influences the entire solution domain downstream of and including the horizontal line through point P itself. However, the solution at point P does not depend on the solution downstream of the horizontal line through point P, nor does the solution at point P influence the solution upstream of the horizontal line through point P. Numerical methods for solving propagation problems governed by parabolic PDEs must take the infinite information propagation speed into account.

[Figure III.5. Solution domain for a parabolic propagation problem: open boundary, march direction t, and the domain of dependence and range of influence of a point P, separated by the horizontal line through P.]

A classical example of a propagation problem governed by a hyperbolic PDE is acoustic wave propagation (see Section III.7). The governing PDE is the wave equation

P'_tt = a² ∇²P'    (III.48)

where P' is the acoustic pressure (i.e., the pressure disturbance) and a is the speed of propagation of small disturbances (i.e., the speed of sound). In one space dimension, Eq. (III.48) is

P'_tt = a² P'_xx    (III.49)

Since Eq. (III.49) is second order in time, initial values of both P' and P'_t must be specified along the initial time boundary. Since Eq. (III.49) is second order in space, values of P' must be specified along both space boundaries.

As shown in Section III.7 for the wave equation, hyperbolic PDEs have real distinct characteristics. Consequently, hyperbolic PDEs have finite domains of dependence and ranges of influence and finite information propagation speed. Thus, the solution at each point in the solution domain depends only on the solution in a finite domain of dependence and influences the solution only in a finite range of influence.
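For contrast with the diffusion sketch above, here is a marching sketch for Eq. (III.49) (again with illustrative values): a centered-time centered-space scheme, started from the initial values of P' and P'_t and with P' specified on both space boundaries. The Courant number aΔt/Δx is kept below 1, one way an explicit scheme respects the finite information propagation speed, since it keeps the numerical domain of dependence at least as large as the physical one.

program wave_march
  implicit none
  integer, parameter :: imax = 21, nsteps = 100
  real, parameter :: a  = 1.0                  ! wave speed (illustrative)
  real, parameter :: dx = 1.0/real(imax-1)
  real, parameter :: cn = 0.9                  ! Courant number a*dt/dx, kept <= 1
  real, parameter :: dt = cn*dx/a
  real, parameter :: pi = 3.1415927
  real :: pold(imax), p(imax), pnew(imax)
  integer :: i, n
  do i = 1, imax
     pold(i) = sin(pi*real(i-1)*dx)            ! initial P' at time zero
  end do
  p = pold      ! first step from P'_t(x,0) = 0 (a simple first-order start)
  do n = 2, nsteps                             ! march in time
     pnew(1) = 0.0; pnew(imax) = 0.0           ! P' specified on both boundaries
     do i = 2, imax-1
        pnew(i) = 2.0*p(i) - pold(i) + cn**2*(p(i+1) - 2.0*p(i) + p(i-1))
     end do
     pold = p
     p = pnew
  end do
  print *, 'P at x = 0.5 after', nsteps, 'steps:', p((imax+1)/2)
end program wave_march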

In two variables (e.g., space x and time t), hyperbolic PDEs have two real and distinct families of characteristics. As illustrated in Figure III.6, both families of characteristics have finite slope in the xt plane, which corresponds to a finite information propagation speed. For the wave equation, the two families correspond to right-running (i.e., in the positive x direction) and left-running (i.e., in the negative x direction) acoustic waves. The solution at point P depends only on the solution within the domain of dependence defined by the upstream propagating characteristics and the boundary data, and the solution at point P influences only the solution within the range of influence defined by the downstream propagating characteristics. Numerical methods for solving propagation problems governed by hyperbolic PDEs must take the finite information propagation speed into account.

[Figure III.6. Solution domain for a hyperbolic propagation problem: open boundary, march direction t, and the finite domain of dependence and range of influence of a point P, bounded by the characteristics through P.]

Propagation problems governed by hyperbolic PDEs are somewhat analogous to initial-value problems in ODEs. However, there are significant differences between propagation problems governed by parabolic PDEs and hyperbolic PDEs, due to the infinite information propagation speed associated with parabolic PDEs and the finite information propagation speed associated with hyperbolic PDEs. These differences must be accounted for when applying marching methods to these two types of partial differential equations.

III.4.3 Eigenproblems

Eigenproblems are special problems in which the solution exists only for special values (i.e., eigenvalues) of a parameter of the problem. The eigenvalues are to be determined in addition to the corresponding configuration of the system.
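A small sketch of an eigenproblem in this sense (one-dimensional and illustrative; the equation, interval, and sweep range are arbitrary choices): nontrivial solutions of f'' + k²f = 0 with f(0) = f(1) = 0 exist only for the special values k = nπ. The shooting test below integrates from f(0) = 0, f'(0) = 1 and brackets the values of k at which the end condition f(1) = 0 can be satisfied.

program string_eigenvalues
  implicit none
  integer :: m
  real :: k, fend, fend_old
  fend_old = endpoint(0.5)
  do m = 1, 40
     k = 0.5 + 0.25*real(m)          ! sweep the parameter k from 0.75 to 10.5
     fend = endpoint(k)
     ! a sign change of f(1) brackets an eigenvalue (expect k near n*pi)
     if (fend*fend_old < 0.0) print *, 'eigenvalue between', k - 0.25, 'and', k
     fend_old = fend
  end do
contains
  real function endpoint(k)          ! f(1) from a midpoint (RK2) integration
    real, intent(in) :: k
    integer, parameter :: nsteps = 1000
    real, parameter :: h = 1.0/real(nsteps)
    real :: f, fp, fm, fpm
    integer :: n
    f = 0.0; fp = 1.0                ! shooting initial conditions f(0)=0, f'(0)=1
    do n = 1, nsteps
       fm  = f  + 0.5*h*fp           ! midpoint predictor for f and f'
       fpm = fp - 0.5*h*k**2*f
       f   = f  + h*fpm              ! full step using midpoint slopes
       fp  = fp - h*k**2*fm
    end do
    endpoint = f
  end function endpoint
end program string_eigenvalues

Only for the bracketed values of k does a nontrivial solution satisfy both boundary conditions, which is exactly the sense in which the solution of an eigenproblem exists only for special values of a parameter.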