Brice Carnahan, H. A. Luther, and James O. Wilkes, Applied Numerical Methods. Krieger Publishing Co., 1990 (PDF).


Applied Numerical Methods & Krieger Publishing Company Malabar, Florida f ve ‘99F cot 43 FOO 1990 a nga Editon 1969 Reprint Editon 1990 Printed and Published by ROBERT E. KRIEGER PUBLISHING COMPANY, INC. KRIEGER DRIVE MALABAR, FLORIDA 32950 Copyright ©1969 by John Wiley and Sons, Inc. Reprinted by Arrangement All sights reserved. No part of thisbook may be reproduced in any formorby any means, electronic or mechanical, including information storage and retrieval systems without permission in writing from the publisher, No liability is assumed with respect to the use of the information contained herein. Printed in the United States of America Library of Congress Cataloging-in-Publication Data (Camaban, Brice. Applied numerical methods / Brice Camahan, H.A. Luther, James O. wil pom, Reprint. Originally published: New York : Wiley, 1969. Includes bibliographical references and index. ISBN 0-89464-486-6 (alk. paper) 1. Numerical analysis. 2. Algorithms, I. Luther, HLA. TL Wilkes, James 0. DI. Title QA297.C34 1990 519.4-.de20 190.36060 cp w9e 765 4 to DONALD L. KATZ A, H, White University Professor of Chemical Engineering ‘The University of Michigan Preface This book is intended to be an intermediate treatment of the theory and appli- cations of numerical methods. Much of the material has been presented at the University of Michigan in a course for senior and graduate engineering students. ‘The main feature of this volume is that the various numerical methods are not only discussed in the text but are also illustrated by completely documented computer programs. Many of these programs relate to problems in engineering and applied ‘mathematics, The reader should gain an appreciation of what to expect during the implementation of particular numerical techniques on a digital computer. Although the emphasis here is on numerical methods (in contrast to numerical analysis), short proofs or their outlines are given throughout the text. The more important numerical methods are illustrated by worked computer examples. The appendix explains the general manner in which the computer examples are presented, and also describes the flow-diagram convention that is adopted. In addition to the computer examples, which are numbered, there are several much shorter examples appearing throughout the text. These shorter examples are not numbered, and usually illustrate a particular point by means of a short hand-calculation. The computer programs are written in the FORTRANGIV language and have been run on an IBM 360/67 computer. We assume that the reader is already moderately familiar with the FORTRAN-IV language. There is a substantial set of unworked problems at the end of each chapter. Some of these involve the derivation of formulas or proofs; others involve hand calculations; and the rest are concerned with the computer solution of a variety of problems, many of which are drawn from various branches of engineering and applied mathematics. Brice Carnahan H. A. Luther James 0. Wilkes Contents COMPUTER EXAMPLES CHAPTER 1 Interpolation and Approximation 1.1 Introduction 1.2 Approximating Functi 1.3 Polynomial Approximation—A Survey The Interpolating Polynomial The Least-Squares Polynomial The Minimax Polynomial Power Series 1.4 Evaluation of Polynomials and Their Derivatives 1S. 
The Interpolating Polynomial 1.6 Newton's Divided-Difference Interpolating Polynomial 1.7 _Lagrange’s Interpolating Polynomial 1.8 Polynomial Interpolation with Equally Spaced Base Points Forward Differences Backward Differences Central Diferences 1.9 Concluding Remarks on Polynomia! Interpolation 1.10 Chebyshev Polynomials 11 Minimizing the Maximum Error 1.12 Chebyshev Economization—Telescoping a Power Series Problems Bibliography CHAPTER 2 Numerical Integration 2.1 Introduction 2.2. Numerical Integration with Equally Spaced Base Points 23. Newton-Cotes Closed Integration Formulas 2.4 Newton-Cotes Open Integration Formulas 2.5. Integration Error Using the Newton-Cotes Formulas 2.6 Composite Integration Formulas 2.7. Repeated Interval-Halving and Romberg Integration 2.8 Numerical Integration with Unequally Spaced Base Points xv 2” 35 35 36 3” 39 39 4 43 68 9 0 1 5 B83a ix X Conents 2.9 Orthogonal Polynomials Legendre Polynomials: P,(x) Laguerre Polynomials: ,(x) Chebyshev Polynomials: T(x) Hermite Polynomials: Hy(x} General Comments on Orthogonal Polynomials 2.10 Gaussian Quadrature Gauss-Legendre Quadrature Gauss-Laguerre Quadrature Gauss-Chebyshev Quadrature Gauss-Hermite Quadrature Other Gaussian Quadrature Formulas 2.11 Numerical Differentiation Problems Bibliography CHAPTER 3 Solution of Equations 3 32 33 34 35 36 37 38 39 Introduction Graeffe's Method Bernoulli's Method Iterative Factorization of Polynomials Method of Successive Substitutions ‘Ward's Method Newton's Method Regula Falsi and Related Methods Rutishauser’s QD Algorithm Problems Bibliography CHAPTER 4 Matrices and Related Topics 4a 42 43 44 45 46 47 48 49 ‘Notation and Pretiminary Concepts Vectors Linear Transformations and Subspaces Similar Matrices and Polynomials in a Matrix Symmetric and Hermitian Matrices The Power Method of Mises Method of Rutishauser Jacobi’s Method for Symmetric Matrices Method of Daailevski Problems Bibliography 100 100 100 100 tor 101 101 101 13 15 116 116 128 131 140 4 14 14a 142 156 168 169 m 178 196 198 210 210 213 219 21 224 226 236 250 261 263 26R, ~ CHAPTER 5 Systems of Equations 269 5.1 Introduction 269 §.2 Elementary Transformations of Matrices 269 5.3 Gaussian Elimination ne 5.4 Gauss-Jordan Elimination 272 5.5 A Finite Form of the Method of Kaczmarz 297 5.6 Jacobi Iterative Method 298 5.7 Gauss-Seidel Iterative Method 299 5.8 Iterative Methods for Nonlinear Equations 308 5.9 Newton-Raphson Iteration for Nonlinear Equations 319 Problems 330 Bibliography 340 ~ CHAPTER 6 The Approximation of the Solution of Ordinary Differential Equations 341 6.1 Introduction 341 6.2 Solution of First-Order Ordinary Differential Equations 341 Taylor's Expansion Approach 343 6.3. Euler's Method 344 6.4 Error Propagation in Euler's Method 346 65 Runge-Kutta Methods 3ot 6.6 Truncation Error, Stability, and Step-Size Control in the Runge-Kutta Algorithms 363 6.7 Simultaneous Ordinary Differential Equations 365 68 Multistep Methods 381 6.9 Open Integration Formulas 381 6.10 Closed Integration Formulas 383 6.11 Predictor-Corrector Methods 384 6.12 Truncation Error, Stability, and Step-Size Control in ‘the Multistep Algorithms 386 6.13 Other Integration Formulas 390 6.14 Boundary-Value Problems 405 Problems 416 Bibliography 428 { CHAPTER 7 Approximation of the Solution of Partial Differential Equations 429 7.1 Introduction 429 7.2 Examples of Partial Differential Equations 429 7.3. 
The Approximation of Derivatives by Finite Differences 430 7.4 A Simple Parabolic Differential Equation 431 7.5 The Explicit Form of the Difference Equation 432 xii Contents 7.6 Convergence of the Explicit Form 7.7 The Implicit Form of the Difference Equation 78 Convergence of the Implicit Form 79 Solution of Equations Resulting from the Impticit Afetfiod 7.10 Stability 7.11 Consistency 7.12. The Crank-Nicolson Method 7.13 Unconditionally Stable Explicit Procedures DuFort-Frankel Method Saul"yeo Method Barakat and Clark Method 7.14 The Implicit Alternating-Direction Method 1.15 Additional Methods for Two and Three Space Dimensions 7.16 Simultaneous First- and Second-Order Space Derivatives 7.17 Types of Boundary Condit 7.18 Finite-Difference Approximations at the Interface between ‘Two Different M 7.19 Irregular Boundaries 7.20 The Solution of Nonlinear Partial Differential Equations 7.21 Derivation of the Elliptic Difference Equation 7.22 Laplace's Equation in a Rectangle 7.23 Alternative Treatment of the Boundary Points 7.24 Iterative Methods of Solution 7.25 Successive Overrelaxation and Alternating-Direction Methods 7.26 Characteristic-Value Problems Problems Bibliography CHAPTER 8 Statistical Methods 8.1 Introduction: The Use of Statistical Methods, 8.2. Definitions and Notation 83 Laws of Probability 84 Permutations and Combinations 8.5. Population Statistics 8.6 Sample Statistics 8.7 Moment-Generating Functions 8.8 The Binomial Distribution 8.9. The Multinomial Distribution 8.10 The Poisson Distribution 8.11 The Normal Distribution 8.12 Derivation of the Normal Distribution Frequency Function 8.13 The z* Distribution 8.14 as a Measure of Goodness-of-Fit 8.15 Contingency Tables 432 440 440 449 450 451 451 451 451 452 452 453 462 462 462 463 464 482 483 483 484 508 508, 520 530 331 331 532 533 533 533 5a 542 543 543 543 $52 553 559 560 561 8.16 The Sample Variance 8.17 Student's ¢ Distribution 8.18 The F Distribution 8.19 Linear Regression and Method of Least Squares 8.20 Multiple and Polynomial Regression 8.21 Alternative Formulation of Regression Equations 8.22 Rogression in Terms of Orthogonal Polynomials 8,23 Introduction to the Analysis of Variance Problems Bibliography APPENDIX Presentation of Computer Examples Flow-Diagram Convention INDEX 568 568 510 sm 573 314 374, 585 587 592 593, 593 594. 597 Computer Examples CHAPTER | 1.1 Interpolation with Finite Divided Differences 1.2 Lagrangian Interpolation 1.3. Chebyshev Economization CHAPTER 2 21 Radiant Interchange between Parallel Plates—Composite ‘Simpson's Rule 2.2 Fourier Coefficients Using Romberg Integration 2.3 Gauss-Legendre Quadrature 24 Velocity Distribution Using Gaussian Quadrature CHAPTER 3 3.1 Graeffe's Root-Squaring Method—Mechanical Vibration Frequencies 3.2 Iterative Factorization of Polynomials 3.3 Solution of an Equation of State Using Newton's Method 3.4 Gauss-Legendre Base Points and Weight Factors by the Half-Interval Method 3.5. Displacement of a Cam Follower Using the Regula Falsi Method CHAPTER 4 4.1 Matrix Operations 4.2 The Power Method 4.3. 
Rutishauser’s Method 44° Jacobi's Method ” » 46 80 2 106 m 144 158 173 180 190 216 28 238 252 xv xvi Computer Examples CHAPTER 5 5.1 $2 $3 5a 35 Gauss-Jordan Reduction—Voltages and Currents in an Electrical Network Calculation of the Inverse Matrix Using the Maximum Pivot Strategy—Member Forces in a Plane Truss Gauss-Seidel Method Flow in a Pipe Network—Successive-Substitution Method Chemical Equilibrium —Newton-Raphson Method CHAPTER 6 61 62 63 64 65 Euler's Method Ethane Pyrolysis in a Tubular Reactor Fourth-Order Runge-Kutta Method—Transient Behavior of a Resonant Circuit Hamming's Method A Boundary-Value Problem in Fluid Mechanies CHAPTER 7 1 72 13 14 15 16 1 18 19 Unsteady-State Heat Conduction in an Infinite, Parallel-Sided Slab (Explicit Method) Unsteady-State Heat Conduction in an Infinite, Parallel-Sided ‘Slab (Implicit Method) Unsteady-State Heat Conduction in a Long Bar of Square Cross Section (Implicit Alternating- Direction Method) Unsteady-State Heat Conduction in a Solidifying Alloy Natural Convection at a Heated Vertical Plate ‘Steady-State Heat Conduction in a Square Plate Deflection of a Loaded Plate Torsion with Curved Boundary Unsteady Conduction between Cylinders (Characteristic- Value Problem) 24 282 302 310 321 348, 353 367 393 407 434 443 454 465 414 486 491 498 510 CHAPTER 8 8.1 Distribution of Points in a Bridge Hand 8.2. Poisson Distribution Random Number Generator 8.3 Tabulation of the Standardized Normal Distribution 8.4. 42 Test for Goodnest-of-Fit 8.5 Polynomial Regression with Plotting sas 545 562 516 xvii Applied Numerical Methods CHAPTER 1 Interpolation and Approximation 4.1 Introduction This text is concerned with the practical solution of problems in engineering, science, and applied mathe- matics. Special emphasis is given to those aspects of prob- lem formulation and mathematical analysis which lead to the construction of a solutian algorithm or procedure suit- able for execution on a digital computer. The identifica- tion and analysis of computational errors resulting from mathematical approximations present in the algorithms will be emphasized throughout. ‘To the question, “Why approximate?", we can only answer, “Because we must!” Mathematical models of physical or natural processes inevitably contain some in- herent errors. These errors result from incomplete under- standing of natural phenomena, the stochastic or random nature of many processes, and uncertainties in experimen- tal measurements. Often, a model includes only the most Pertinera features of the physical process and is deliber- ately stripped of superfluous detail related to second-level effects Even if an error-free mathematical model could be de- veloped, it could not, in general, be solved exactly on a digital computer. A digital computer can perform only @ Nimited number of simple arithmetic operations (prin- cipally addition, subtraction, multiplication, and division) on finite, rational numbers. Fundamentally important ‘mathematical operations such as differentiation, integra- tion, and evaluation of infinite series cannot be imple- ‘mented directly on a digital computer. All such computers hhave finite memories and computational registers; only a discrete subset of the real, rational numbers may be generated, manipulated, and stored. Thus, it is impossible to represent infinitesimally small or infinitely large quan- tities, or even @ continuum of the real numbers on a finite interval. 
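As a concrete illustration of the finite machine number set just described, the short program below is an editor's sketch in modern free-form Fortran (the text's own listings are FORTRAN-IV); the program name and output labels are illustrative only. It prints the spacing of the representable numbers near 1.0 and the largest and smallest normal magnitudes the machine can hold.

program machine_sketch
  implicit none
  ! Standard Fortran inquiry intrinsics expose the discrete, finite set of
  ! machine numbers discussed above (values depend on machine and compiler).
  print *, 'spacing of machine numbers near 1.0 :', epsilon(1.0)
  print *, 'next representable number above 1.0 :', nearest(1.0, 1.0)
  print *, 'largest finite value                :', huge(1.0)
  print *, 'smallest normal magnitude           :', tiny(1.0)
end program machine_sketch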
Algorithms that use only arithmetic operations and cer- tain logical operations such as algebraic comparison are called numerical methods. The error introduced in approxi mating the solution of a mathematical problem by a numerical method is usually termed the truncation error of the method. We shall devote considerable attention to the truncation errors associated with the numerical approximations developed in this text. ‘When @ numerical method is actually run on a digital ‘computer after transcription to computer program form, another kind of erfor, termed round-off error, is intro- duced. Round-off errors are caused by the rounding of results from individual arithmetic operations because only 4 finite number of digits can be retained after each opera tion, and will differ from computer to computer, even when the same numerical method is being used. ‘We begin with the important problem of approximating, one function f(x) by another “suitable” function g(x). This may be weirten S(e) = ox). There are two principal reasons for developing such approximations. The first is to replace a function f(x) which is dificult to evaluate or manipulate (for example, differentiate o¢ integrate) by a simpler, more amenable function g(x). Transcendental functions given in closed form, such as In x, sin x, and erf x, are examples of func- tions which cannot be evaluated by strictly arithmetic operations without first finding approximating functions such as finite power series. The second reason is for inter- polating in tables of functional values. The function /(x) is known quantitatively fora finite (usually small) number of arguments called base points; the sampled functional values may then be tabulated at the n+ 1 base points os Xap «+11 By aS follows: Xo S(%o) aay I) aa SG) x Sx) Xn SG). We wish to generate an approximating function that will allow an estimation of the value of f(x) for x # x, 40,1, ..,. In some cases, /(x) is known analytically but is difficult to evaluate. We have tables of functional values for the trigonometric functions, Bessel functions, etc. In others, we may know the general class of functions to which f(x) belongs, without knowing the values of specific functional parameters. Ineerpolation and Approximation In the general case, however, only the base-point func tional information is given and little is known about f(x) for other arguments, except perhaps that it is continuous, in some interval of interest, a0, there Is some polynomial p,(x) of degree n = n(c) such that U@)-pA<6 acxeb. ‘Unfortunately, although it is reassuring to know that some polynomial will approximate f(x) to a specified ac- curacy, the usual criteria for generating approximating polynomials in no way guarantee that the polynomial found is the one which the Weierstrass theorem shows must exist. If (x) is in fact unknown exoept for a few sampled values, then the theorem is of little relevance, (It is comforting nonetheless!) The case for polynomials as approximating functions is not so strong that other possibilities should be ruled out completely. Periodic functions can often be approximated ‘very efficiently with Fourier funotions; functions with an obvious exponential character wili be described more compactly with a sum of expanentials, etc. Nevertheless, for the general approximation problem, polynomial approximations are usually adequate and reasonably easy to generate, ‘The remainder of this chapter will be devoted to poly- nomial approximations of the form SO) = pa) = Fae. 
Fora thorough discussion of several other approximating functions, see Hamming (2). (uy 1.3 Polynomial Approximation—A Survey After selection of an nth-degree polynomial (1.1) as the approximating function, we must choose the criterion for “fitting the data.” This is equivalent to establishing the procedure for computing the values of the coefficients Boris ++» Oye ‘The Interpolating Polynomial. Given the paired values (xn f(x)), 10, 1, sm pethaps the most obvious criterion for determining the coefficients of p,(x) is to require that Pal) = SDs (1.2) ‘Thus the nth degree polynomial p,(x) must reproduce /(3) exactly for the m+ 1 arguments x = x,. This criterion seems especially pertinent since (from a fundamental theorem of algebra) there is one and only one polynomial of degree 1 or less which assumes specified values for n+ 1 distinct arguments. This polynomial, called the nth degree interpolating polynomial, is illustrated schematic. ally for n= 3 in Fig. 1.1. Note that requirement (1.2) Bhan Mey (0, fea) Figure 1.1. The interpolating pelynomie, establishes the value of p,(x) for all x, but in no way guarantees accurate approximation of f(x) for x ¥ xj, that is, for arguments other than the given base points. Iff(2) should be a polynomial of degree m or less, agree- ‘meat is of course exact for all x. ‘The interpolating polynomial will be developed in con- siderable detail in Sections 1.5 to 1.9. ‘The Least-Squares Polynomial If there is some question as to the accuracy of the individual values f(x), i= 0, 1, «+97 (often the case with experimental data), then it may be unreasonable to require that a polynomial fit the /(x,) exactly. In addition, it often happens that the desired polynomial is of low degree, say m, but that there are ‘many data values available, so that n > m. Since the exact, ‘matching criterion of (1.2) for m + 1 functional values ca bbe satisfied only by one polynomial of degree m or less, itis. generally impossible to find an interpolating polynomial of degree m using all n +1 of the sampled functional values. Some other measure of goodness-of-fit is needed. In- stead of requiring that the approximating polynomial reproduce the given functional values exactly, we ask only that it fit the data as closely as possible. Of the many 4 Interpolation and Approximation meanings which might be ascribed to “‘as closely as possible,” the most popular involves application of the least-squares principle. We ft the given n + 1 functional values with p,(x), a polynomial of degree m, requiring that the sum of the squares of the discrepancies between the f(x) and p,(x) be a minimum. If the discrepancy at the ith base point x, is given by 5, = pax) — f(x), the least-squares criterion requires that the ay, = 0, 1, ....m™, be chosen so that the aggregate squared error E= Sot= Eton) seo? -S[Zemt-sa] an be as small as possible. If m should equal m, the minimum error E is exactly zero, and the least-squares polynomial is identical with the interpolating polynomial. Figure 1.2 1) pals) Gare) Figure 12 The least-squares polynomial. illustrates the fitting of five functional values (n = 4) with 1 least-squares polynomial of degree one (m = 1), that is, a straight line. ‘When the values f(x) are thought to be of unequal re- liability or precision, the least-squares criterion is some- times modified to require that the squared error at x, be mulkiplied by a nonnegative weight factor w(x) before the aggregate squared error is calculated, that is, (1.3) assumes the form B= 5 west. 
‘The weight w(x; is thus a measure of the degree of pre- cision or relative importance of the value f(x) in deter- mining the coefficients of the weighted least-squares Polynomial Py(#) ‘The least-squares principle may also be used to find an approximating polynomial p,(x) for a known continuous function f(x) on the interval [a,b]. In this case the object is to choose the coefficients of p,(x) which minimize E where B= [wolpats) —JO0T a. Here, w(x) is a nonnegative weighting function; in many cases, w(x) = I. Since the motivation for the least-squares criterion is essentially statistical in nature, further description of the least-squares polynomial will be delayed until Chapter 8. ‘The Minimax Polynomial. Another popular criterion, termed the minimax principle, requires that the co- efficients of the approximating polynomial p,(x) be chosen so that the maximum magnitude of the differences. SG) — P(X), 1=0,1,..42,¢ where ao, 0, a3 is any permutation of the integers 2, 1, 0. In general it follows by induction that PPeekpaay oo Xo) =SPeayhaur J (1.16) where the sequence of integers oy, a, permutation of m,n — 1, — 2,0. 19% is any tis also apparent from the definition that I(x) So) (x1 = x0) (%o~ x1) Sl5 2 common name. Individual elements of the vector are identifi! fs 8 single subscript attached to the name, In this case, the x and v tors consist of the elements a and Ys Yay sonsdny pe Uh ‘matrix Wil describe a rcis"gularatray (rows and col= lumns) of numbers identiied by ac 98 "on name, Individual ele- ‘ments of the matrix are identified 1% > subseipts, the ftst to indicate the row index and the second 4. dicate the column index, “Thos Fi the element appearing in we", row and jth eclomn of the matrix 7 (see Chapter 4). Nort that, alike the divided-difference tables such as Table 1.2, there is no base point (xoY> =/(%)). In order to facilitate the simple subscription scheme of (LAW), the divided-difference poction of the table is no longer symmetric about the base points near the middle of the table, All elements of the T matrix, 7, for j>i, {>m and 127, are unused. The subroutine DTABLE assumes that T is a k by k matrix, and checks for argument consistency to insure that.m x, iis assigned the value 7. The base points used to determine the interpolating polyno‘sial are noroally Xpex-a-- Xan Where max = i+ dj2 for d even and max = i+ (d= D)2 for d odd. Should max be smaller than d+ 1 of larger than n, then max is reassigned the value d+ 1 (OF 1, respectively. This insures that only given base points are used to fit the polynomial In terms of the divided differences, the interpolant value, 5(3) is desesibed by the polynomial of degree d: FR) = Ynazmat (% = Xena) fmax~a4 15 Smaxa] + (8 ~ Xmar—a+ ME = Xs a) % SUemesmas.ar Smas- $b @ f[&mars + Ximena) 18 ‘The corresponding error term, from (1.32) and (1.39), is RAB) = S(5) ~ HR) = (5 — Xned)* ( = Xue ed X SLR X pags oes Xmanndls (11.3) or ue RAB) = Baa) ~ ee TO, Fin (SaaS LA) Rewritter in the nested form of (1.10), (1.1.2) becomes HR) = Co Pe maae «9 Xa al % = Xmas 1) +S Lt mac= a9 +++ Xmax-a)) — Xmae—2) FS Dmx 29 009 mass) CE ~ Xmae~3) He Smee ae 1s Xmoxa]} % (R= Xmaxma) + Yoax— as (11.5) Flow Diagram Main Program 1m, i=1,2, Interpolation and Approximation or, from (1 5G) = ), ° {Tipax= 1h — Xmax~1) + Trae 2,4-1} x@ %nar=2) 4 Tay 34-2 F ~ Enea) ++ Tawra) XB Knead) + Ymecoee (1.1.6) FNEWT uses this nested form to evaluate 5(3). 
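To make the construction concrete, the sketch below is an editor's illustration in modern free-form Fortran, not the FORTRAN-IV subroutine DTABLE and function FNEWT listed later in this example. It builds a divided-difference table for five of the cos x base points used here and evaluates the full-degree Newton polynomial by the nested scheme of (1.1.6). Unlike FNEWT, it simply takes x(1) as the leading base point rather than centering the base points about the argument; the program and variable names are illustrative.

program newton_sketch
  implicit none
  integer, parameter :: n = 5
  real :: x(n), y(n), t(n,n), xbar, yint
  integer :: i, j

  x = [0.0, 0.2, 0.3, 0.4, 0.6]        ! five of the base points from this example
  y = cos(x)
  xbar = 0.25

  ! Build divided differences: t(i,j) = f[x(i),...,x(i+j)]
  do j = 1, n - 1
     do i = 1, n - j
        if (j == 1) then
           t(i,1) = (y(i+1) - y(i)) / (x(i+1) - x(i))
        else
           t(i,j) = (t(i+1,j-1) - t(i,j-1)) / (x(i+j) - x(i))
        end if
     end do
  end do

  ! Nested (Horner-like) evaluation of the (n-1)th-degree Newton polynomial
  yint = t(1,n-1)
  do j = n - 2, 1, -1
     yint = yint*(xbar - x(j+1)) + t(1,j)
  end do
  yint = yint*(xbar - x(1)) + y(1)

  print *, 'interpolant:', yint, '   cos(xbar):', cos(xbar)
end program newton_sketch

For an argument inside the base-point range, the printed interpolant should agree closely with cos(xbar).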
Should there be an inconsistency in the function arguments, that is, if d> m, the value assigned to FNEWT is zero; otherwise, the value is (3). In both OTABLE and FNEWT, a computational switch, trubl,is set to 1 when argument inconsistency is found; otherwise, trubl is set to 0. FNEWT does not check to insure that the elements x, i=1,2,...,m are in ascending order, although such a test could be incor- porated easily. yee cos x) Compute finite divided. difference table for id FA Compute interpolant value (8) using a “*well-centered” Newton's divided-difference differences of order m or less Ty yf eter Xoo igs) (Gubroutine DTAaLE) Trotm ad interpolating polynomial of degree d. (Function FNEWT) JR), cos Example 1.1 Subroutine DTABLE (Arguments: x, ys 7, n,m, trubl, k) i Interpolation with Finite Divided Differences 72H Hit =X Tagot = Tint gat Bier Fetay snub} -0 9 20 Interpolation and Approximation Fimetion FNEWT (Arguments: x, y, T, 1m, my dy X, trubl, k) trubl = 1 d maxei+$ max d+ Evaluate Newton’s divided- difference polynomial using nested algorithm. Fe HE Xnar + Tyas=i-ta~i I Of trubl 0 Example 11. Interpolation with Finite Divided Differences a FORTRAN Jmplementation List of Principal Variables Program Symbol (Main) wt DEG M N NMt TABLE TRUBL TRUVAL x XARG Y YINTER (Subroutine TABLE) 1suB. kK (Function NEWT) IDeGMa ssuB1, IsuBz MAX vest Definition Subscripts fj, Degree, d, of the interpolating polynomial. 1m, highest-onder divided difference to be computed by DTABLE. 1, the number of paired values (xy. ¥: = f(x). nai. Matrix of divided differences, 7. ‘Computational switch: set to 1 if argument inconsistency is encountered, otherwise set to 0. Value of cos £ computed by the library function COs. Vestor of base points, x). Interpolation argument, ¥. Vector of functional values, y, = f(x). Interpolant value, 5(3). pitl—j. Row and colusnn dimensions, k, of matrix T: d-1. Subscripts, max — i, d= i Subscript of largest base point used to determine the interpolating polynomial, max. Variable used in nested evaluation of interpolating polynomial, (1.1.6). 2 Interpolation and Approximation Program Listing Main Program c APPLIED NUMERICAL METHODS, EXANPLE 1.3 c NEWTON'S DIVIDED-DIFFERENCE INTERPOLATING POLYNOMIAL c c TEST PROGRAM FOR THE SUBROUTINE OTABLE AND THE FUNCTION c FNEWT.” THIS PROGRAM READS A SET OF N VALUES X(1).. XCM), ¢ COMPUTES "A CORRESPONDING SET OF VALUES Y(1)...YCN) WHERE c YC) = COSCXC1)), AND_THEN CALLS. ON SUBROUTINE DTABLE c TO. CALCULATE ALL’ FINITE DIVIDED DIFFERENCES OF ORDER M on c LESS," WITH THE DIVIDED DIFFERENCES STORED IN MATRIX TABLE, € THE PROGRAM READS VALUES FOR XARG, THE INTERPOLATION c ARGUNENT, AND IDEG, THE DEGREE OF THE INTERPOLATING c POLYNOMIAL TO BE EVALUATED BY THE FUNCTION FNENT. c FNEWT COMPUTES. THE INTERPOLANT VALUE, YINTER, WHICH IS. c COMPARED WITH THE TRUE VALUE, TRUVAL'= COS(xARG). c DIMENSION X(20), ¥¢20), TABLE(20,20) c c READ DATA, COMPUTE Y VALUES, AND PRINT . AeAD"(5, 100) NLM, (XCD, 1#1,N) waite (6, 200) 00.1 Iin YO) = cosexc19) 1 MRITE (6,200) 1, XCD, WED c COMPUTE AND PRINT DIVIDED DIFFERENCES .. DTABLE( X,Y, TABLE,N,M, TRUBL, 20 ) Te“CrRUBL.NE.O.05 cALL EXIT. WhiTe (6,202) mar Doe yaa, nMa ter Te CoT.w os 6 WRITE'(6,203) (TABLECI,J), Jet, L) c © saa. READ XARG AND IDEG, CALL ON FNEWT TD INTERPOLATE y+ Waite (6,200) 7 REaa (5,402) xano, 1986 YINTER © FNEWT( X,¥,TABLE,N,M, DEG, XARG, TRUBL,20 ¢ © sa... COMPUTE TRUE VALUE OF COS(XARG) AND PRINT RESULTS FRUVAL = cosCxaRG) WRITE (6,205) XARG, IDEG, YINTER, TRUVAL, TRUBL Go 10 7 ¢ c .. 
FORMATS FOR INPUT AND OUTPUT STATEMENTS 100 FORMAT C4X, 13, 10K, 13. / (15x, 5F10.N) ) 101 FORMAT (7K, FBL4, 15x, 15.) 200 FORMAT ( 33H1THE SAMPLE FUNCTIONAL VALUES ARE / SHO I, 8X, 1 WWKCNY, 9X, MHYCLY aH) 201 FORMAT CLM? 14, 2713.6 202 FORMAT ( QHIFOR =, i2, 29H, THE DIVIDED DIFFERENCES ARE ) 203 FORMAT (AW / (AH, BE16.7) ) 208 FORMAT ( 25HITHE DATA AND RESULTS ARE / 1HQ, 5X, UHEARG, 5X, 1 GHIDEG, 5X, SHYINTER, 6X, GHTAUVAL, 3x, SHTRUBL / 1H” ) 205 FORMAT CAH, F9.4, 18, 2F12.6, F7.1 3 e END Subroutine DTABLE SUBROUTINE DTABLE ( X,Y,TABLE,N,M,TRUBL,K ) DTABLE COMPUTES THE FINITE DIVIDED D|FFERENCES OF YC)...YON) FOR ALL ORDERS M OR LESS AND STORES THEM IN THE iGWER TRIANGULAR PORTION OF THE FIRST M COLUMNS OF THE FIRST NeL ROWS OF THE MATRIX TABLE. FOR INCONSISTENT ARGUMENTS, TRUBL * 1,0 ON EXIT, OTHERWISE, TRUL = 0.0 ON EXIT. DIMENSION X(N), YEND, TABLE(K,K) Example 1.1. Interpolation with Finite Divided Differences Program Listing (Continued) c € sera CHECK FOR ARGUMENT CONSISTENCY iF GAT MD Go TO 2 TRUBL"» i,0 RETURN sezss CALCULATE FIRST-ORDER DIFFERENCES. jad = DOS en, NMd TABLECH,1)"= CYCde2) = YCD/OKCIe) = XCD) IF OH,LE.1) GO 10 6 ves CALOULATE HIGHER-ORDER DIFFERENCES 0... 5 dea 00.5 tad,NMa 1SUB = Wied TABLECL,g) = (TABLECL,g-2) = TABLECL=: «SD OKC) = XCSUBD) TRUBL = 0.0 RETURN eno Function FNEWT FUNCTION FNEWT ( X,Y, TABLE,N,M,1DEG,XARG,TRUBL, K ) FNEWT ASSUMES THAT X(1)...XUN) ARE (M ASCENDING ORDER AND FIRST SCANS THE X VECTOR’ TO DETERMINE WHICH ELEMENT 1S NEAREST (.GE,) THE INTERPOLATION ARGUMENT, XARG. SWE (Geel BASE POINTS NEEDED FOR THE EVALUATION OF THE DIVIDED-DIFFERENCE POLYNOMLAL QF DEGREE IDEGt1 ARE THEN CENTERED ABOUT THE CHOSEN ELEMENT WITH THE LARGEST WAVING THE SUBSCRIPT MAX. 17 1S ASSUMED THAT THE FIRST M DIVIDED QUFFERENCES HAVE BEEN COMPUTED BY THE SUBROUTINE DTABLE AND ARE ALREADY PRESENT IN THE MATRIX TABLE. MAX IS CHECKED TO INSURE THAT ALL REQUIRED BASE POINTS ARE AVAILABLE, AND THE INTERPOLANT VALUE IS COMPUTED USING NESTED POLYNOMIAL EVALUATION. | THE INTERPOLANT 1S RETURNED AS THE VALUE OF THE FUNCTION. FOR INCONSISTENT ARGUMENTS, ‘TRUBL = 1.0 ON EXIT. OTHERWISE, TAUBL = 0.0 ON EXIT. DIMENSION XCH), YCND, TABLECK,K) CHECK FOR ARGUMENT INCONSISTEWCY . DEG.LE.M) GO TO 2 i TRL = id ENENT = 0/0 RETURN part SFAROH X VECTOR FOR ELEMENT .cE, xARG TatN TE U.EQ.N JOR, KARG.LELKCID) 60 TO 5 conTi ue MAK © 1s 1DEG/2 INSURE THAT ALL REQUIRED DIFFERENCES ARE IN TABLE .~... GF Guax.LE. 1G) MAX = IDES + TF WAXLGTIN) MAK = 24 {Interpolation and Approximation Program Listing (Continued) c © sense COMPUTE INTERPOLANT VALUE .. 004 VEST" = TABLE (HAX-1, 106) IF (1DEG,LE.1) GOTO 13 tech) = 1086 - 1 0012 “te1, 1oEGH2 1SuB1 = MAX’= 1 1sue2 = 1DeG - 1 12. YEST = YEST*(KARG = X(1SUBL)) + TABLE(ISUBI-1, 1SUB2) 15 15UB1 = MAX IDEG TRUBL = 0,0 FNERT. = YEST*(XARG ~ X(ISUB1)) + Y(1SUBL) RETURN c END Data Ne 8 Ne 6 XCD. x5) = 0.0000 9.2000 0.3000 0.4000 0.6000 XB) LLIx(8) = 0.7000 0.9000 1.0000 XARG'=" 0.2500 Wes = i KARG = 012500 Wes = 2 XARG = 012500, Ie = 3 XARG = 012500 IDE = XARG = 012500 WDec = 5 KARG = 012500 Woes = 6 XARG = 014500 ibe > 1 XARG = 014500 IDE = 2 XARG = 024500, Ioes = 5 KARG = 00500, IoeG = 1 XaRg = 0.0500 IDec =. 
2 XARG = 0.0500 Ioec = 3 XARG = 0.9500 Weg = 1 XARG = 019500 IDec = 2 KARG = 0.9500, IDeG = 3 XARG = 011000 ines su XARG = ~01000 Wes = 4 XARG = 0.5500, IDec = 7 XARG = 1,100 Woec = 1 KARG 000, Ibes = 1 XARG = 210000 iec = 2 XARG = 210000 Ibe = 3 xaRG = 210000 IDEs =u XARG = 210000 IDEs = § XARG = 2,000, Ine = 6 Example 1.1 Interpolation with Finite Divided Differences RE 00 00 ‘Computer Output THE SAMPLE FUNCTIONAL VALUES Ai ' xD rn 1 0.0 1.000000 2 0;200000 0, 980087 3 0.300000 0955337 & — o%eoo000 0921081 5 07600000 0.825336 6 0.700000 0: 764542 7 9.900000 0621610 % — oooo00 a suo 307 FOR M = 6, THE DIVIDED DIFFERENCES ARE -0,99666716-01 -0,2673003E 00 -0,49211216 -0.34275526 00 ~0,477270NE -0, =o. 0. 70. 47862716 00 60u9359E 00 7161614 00 81307696 00 =0,0529063E 00 -0,42102276 -0,3707584E =0,3250510E THE DATA AND RESULTS ARE 0. ° °: o o ° ° vn °. °. ° ° °. °, °: o =o; 0: as 2 2 2 2 2 2 XARG «IDES 2500 a 2500 2 2500 5 2500 ‘ 2500 5 2500, 6 6500 1 S500 2 4500, > ‘0500, 1 0500 2 0500, 3 9500 1 9500 2 9500 3 1000, ‘ 1000 ¥ 5500 7 1000 1 ‘0000 1 ‘0000, 2 0000, 3 0000 ‘ 3000 5 ‘0000 6 YINTER 9.967702 0.968895 0:968909 0.968912 07968913 0.968913 0.897130 0,900287 91900837 07995017 0.99708 0.998777 01580956 01581768 01581689 07395005 9994996 010 0.458995 201272778 701626128 201457578 701395053 0.410927 16679 oo 00 00 0,3709%356-01 0, b082027-01 0,79708816-01 0,1005287€ 00 0,1192673E 00 TRUVAL 0.968913 ‘ol9gagi3 0.968913 0:968933 01968933 01968933 0900047 0.900447 0; 900u47 ‘al99750 0,998 750 01998750 0.581683 01581683 01581683 0995008 995008 91652525 0.453597 70:416187 0.416187 416187 701416187 =0,816187 2416147 TRUBL 0,5970985¢-61 0,37577096-03 0,3469983E-01 0,31251006-01 =0,3006803E-02 -0,4110366€-02 -0,4955015E-02 25 -0.1181737€=02 -0.1056310E-02 26 Interpolation and Approximation Discussion of Results ‘The programs have been written to allow calculation of the desired table of divided differences just once (by calling DTABLE once). Subsequently, an interpolating polynomial of any degree d Sint do Point a In addition, write'a main program that reads data values 1, 1, 25-20 Xp Y4s Yar oon Jor Xo and min, and then calls upon FLAGR to evaluate the appropriate interpolating polynomial and return the interpolant value, 7(@). AS test data, use information from Table 1.2.1 relating observed voltage and temperature for the Platinum to Platinum-10 percent Rhodium thermo- couple with cold junctions at 32°F. Table 1.2.1. Reference Table for the P-10% Rh Thermocouple 21) ° 320 300 i224 500 1760 1000 2964 1500 405.7 1700 476 2000 509.0 2500 608.4 3000 7087 3300 7614 3500 7990 4000 8919) 4500 9830 3000 10726 300 1128.7 3500 11608 +900 12303 6000 1473 Read tabulated values for the 13 selected base points , 2 = 500, x5 = 1000, ..., x13 = 6000, and the Flow Diagram ‘Main Program corresponding functional values yy, Ya» «== Jiyr Where ¥, = f(x). Then call on FLAGR to evaluate (3) for argu- ments = 300, 1700, 2500, 3300, 5300, and 5900, with various values for d and min, Compare the results with the experimentally observed values from Table 1.2.1 ‘Method of Solution In terms of the problem parameters, Lagrange’s form of the interpolating polynomial (1.43) becomes: I@= YL LAY, (1.2.1) where (1.2.2) min, min +1, mint d. ‘The program that follows is a straightforward imple- mentation of (1.2.1) and (1.2.2), except that some calculations (about d? multiplications and d? 
subtrac- tions, at the expense of d+ I divisions) are saved by writing (1.2.2) in the form La) TL@-x) Cy i= min, min +1, ..., min + d,%# x, where c= T@-*). (124) The restriction in (1.2.3), ¥# x» causes no difficulty, since, if ¥ =x, the interpolant (3) is known to be y,; no additional computation is required. Compute interpolant value (3) using Lagrange’s interpolating polynomial of degree d with base points nin (Function FLAGR) 30 Function FLAGR (Arguments: x, y, X, d, min, n) Interpolation and Approximation eel >| ce e—x) j= min, min +1, mind FORTRAN Implementation List of Principal Variables Program Symbol (Main) 1 IEG MIN N x XARG Y YINTER (Function FLAGR) FACTOR ie MAX TERM YEsT Definition Subscript, i Degree, d, of the interpolating polynomial. ‘Smallest subscript for base points used to determine the interpolating polynomial, min. rn, the number of paired values (x,, ys =f) Vector of base points, x, Interpolation argument, &. Vector of functional vaiues, y; = f(x). Interpolant value, 5(3). The factor ¢ (see (1.2.4)) Subscript, j Largest subscript for base points used to determine the interpolating polynomial, min + d. 1,a variable that assumes successively the values £(%)y; in (1.2.1), Interpolant value, 7(3).. Example 12. Lagrangian Interpolation 31 Program Listing ‘Main Program APPLIED NUMERICAL METHODS, EXAMPLE 1,2 UAGRANGE"S INTERPOLATING POLYNOMIAL TEST PROGRAM FOR THE FUNCTION FLAGR. THIS PROGRAM READS A SET OF N VALUES X(1)...XCN) AND A CORRESPONDING SET OF FUNCTIONAL VALUES YCi)...¥(N) WHERE YC) = FOX(I)), THE PROGRAM THEN READS VALUES FOR XARG, IDEG, AND MIN (SEE FLAGR FOR MEANINGS) AND CALLS ON FLAGR TO PRODUCE THE INTERPOLANT VALUE, YINTER, IMPLICIT. REAL®8(AcH, 0-2) DIMENSION X(100), ¥(400) + READ N, X AND Y VALUES, AND PRINT READ'(5, 100)" Wy (XCD, Pa1,80) READ (5,101) (YI), (41,0) waite (6,200) 001. tei,x WRITE (6,201) 1, XC1), YCL tz READ INTERPOLATION ARGUMENTS, CALL ON FLAGR, AND PRINT ...., WRITE (6,202) READ (5,102) XARG, IDEG, MIN YINTER = FLAGR (X,Y, XARG, IDEG,MIN, WN) WRITE (6,203) XARG,"IDEG, MIN, INTER 60 To 2 «FORMATS FOR INPUT ANO OUTPUT STATEMENTS FORMAT C ux, 137 (15x, 5F20.8) ) FORMAT ( 15%, 5F10.8 FORMAT ( 7k, F10,4, 13X, 12, 12K, 12) FORMAT ( 3SHITHE' SAMPLE’ FUNCTIONAL VALUES ARE / SHO 1, 8X, 1 WHKCI), 9X, SHVCL) 7a) FORMAT ("1H 5 14, 2623.8) FORMAT ( 25HITHE DATA AND RESULTS ARE / 1HO, 5X, SHXARG, 5X, 1 "UHIDEG, 5x, SHMIH, 5X, GHYINTER 7 1H ) Format Cin,"Fa.4, i8, 18, F124) Function FLAGR 2 FUNCTION FLAGR ( X,Y,XARG,IDEG,MIN,N ) ELAGR USES THE LAGRANGE FORMULA TO EVALUATE THE INTERPOLATING POLYNOMIAL OF OEGREE IDEG FOR ARGUMENT XARG USING THE DATA VALUES. XCKIN) +5 0X(MAX). AND YCMIN). «¥CMAX). WHERE MAX = WIN + IDEG, NO ASSUMPTION 13. HADE REGARDING ORDER OF THE X(1), AND NO’ ARGUMENT CHECKING 15 DONE. TERM IS A VARIABLE WHICH CONTAINS SUCCESSIVELY EACH TERM OF THE LAGRANGE FORMULA. THE FINAL YALUE OF YEST 1S THE INTERPOLATED VALUE, SEE TEXT FOR A DESCRIPTION OF FACTOR. IMPLICIT REAL#B¢A-H, 0-2) REAL*S x, y, XARG, FLAGR DIMENSION "xCHD, YO, + COMPUTE VALUE OF FACTOR . Factor ~' 1.0 MAX = MINS 1DEG OZ JeMIN, MAX. VE CXARG.NE,XCJ)) 60 TO 2 FLAGR = Yu} RETURN FACTOR = FACTOR®(XARG = X(J)) 32 Interpolation and Approximation Program Listing (Continued) 5 ya. Yes, yan) XARG. KARG XARG XARG xARG XARG xARG ARG, xARG XARG. XARG KARG KARG KARG KARG KARG XARG ARG ARG ARG. 
KARG ARG KARG KARG KARG ARG ARG ARG KARG XARG XARG XARG XARG xaRs XARG KARG XARG ARG EVALUATE INTERPOLATING POLYNOMIAL iy DOS” IeMIN MAX TERM = YC1)#FACTOR/(KARG = XC1)) DOM” SeMIN MAX VE CURE J) TERA = TERM/(KCT)=X(W)) YEst 2 YeST + TERM FLAGR = YEST RETURN eno. X05) = 0. 500. 1000, 1500, 2000. x(10) = 2500, 3000. 3500, 4000, © S00. X(3) #5000, 5500. 000. Igy = "732.8 176.0 296.4 © 405.7 509.0 Uivdioy"= 608.4 7001779910 ©8919 983.0 YES) © 107226 116028 1247.5 F300, Wess Lo Me 2 = 300) WEG = 2 0 WIN ® 1 = 300. Wes = 3 MIN = 1 = 300, IDES = MINS 1 = 1700: WEG * 1 MIN= 4 = 100; Wee * 2 MIN 3 = 1700. IDEs * 2 MIN = 8 > 1700 IDeG = 5 MIN = 3 = 1700, Weg = 8 MIN = 2 = 1700, DEG = 8 MINS 3 = 2500; IDEs = 1 wine 5 = 2500, IDES = 1 MINS 6 = 2500; IDEs = 2 MIN? 5 = 2500; IEG = 3 MIN & = 2500; IEG = 3 MINS 5 = 2500 IEG = & MIN = 8 = 3300 (Deg = 1 MIN = 7 = 3300. (eG = 2 MIN = 6 = 3300: eG = 2 0 MINS 7 = 3300 IDEs = 3 MINS & = 3300. IDEG = & MINS 5 3300. IEG = 4 MINS 6 3500. 1DEG = 5 MINS 5 3300. IDEG = 6 MINS 4 3300, IDE = 6 5 = 3300 IDEs = 7 MIN = 8 = 3300, (Dec = MIN = 3 3300, 1DEG = 3 MIN 8 3300. 1DEG = 8 MIN 3 5300. 1DeG = 1 MIW = 5300. 1DEG = 2 MIN = 5300: IEG = 2 MIN = 5300. IDeG = 5 = 5300. 1DEG = = s9c0) IDEG = 1 Min = 5300. WEG = 2 MIN = 5900. IDEG = 3 0 WIN = 5900; WEG = MIN Example 1.2 Lagrangian Inverpolation Computer Output THE SAMPLE FUNCTIONAL VALUES. ARE 1 xD yy 1 0.0 32,0000 2 $0000 178.0000, 5 1000;0000 236.4000 4 150070000 605: 7000, $ 2000:0000 $08: 0000 & — 2500;0000 608.4000 4 300070000 704: 7000 & 350020000 © 788:0000 $ hooo.aca 81,9000 10 4500;0000 383.0000 11 5000,0000 © 1072..6000 12 5§00,0000 1160: 8000 15 6000;0000 1247.50 THE DATA AND RESULTS ARE WEG MIN YINTER 118, 4000 u 170070000 1700, 0000 1700: 0900, 170070000 3700,0000 1700.00 2500,0000 2560,0000 25000000 2500, 0000 250.0000 2500, 0000 3300,0000 3300, 0000 3300,.0000 3300; 0000, 3300,.0000 3300, 0000 3300, 0000 ‘3300,0000 3300, 0000 5300;0000 3300.0000, 3300-0000 3300,0000 530.0000 3300:0000 300.0000 5300-0000 $300, 0000 5300.00 400: 0000 $900.00, 5300, 0000 608, 4000 761.4518 761.4547 ai 1125,5200, 1D 112816880 1112527000 10112516904 9 1225,6899, 12 1230,1600 3 1230:2800 10 120.2808 9 1230:2915, au Interpolation and Approximation Discussion of Results All computations have been carried out using double- precision arithmetic (8 byte REAL operands); for the test data used, however, single-precision arithmetic would have yielded results of comparable accuracy. Since the true function f(x), for which y, = f(x; is unknown, it is not possible to determine an upper bound for the inter- polation error from (1.398). However, comparison of the interpolant values with known functional values from Table 1.2.1 for the arguments used as test data, shows that interpolation of degree 2 or more when the argu- ment is well-centered with respect to the base points, generally yields results comparable in accuracy to the experimentally measured values. The results for ¥ = 300 are less satisfactory than for the other arguments, possibly because the argument is near the beginning of the table where the function appears to have considerable curvature, and because centering of the argument among. the base points is not practicable for large d. While the base points used as test data were ordered in. ascending sequence and equally spaced, the program is not limited with respect to either ordering or spacing of the base points. 
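For reference, the Lagrange form (1.2.1)-(1.2.2) itself condenses to a few lines of code. The sketch below is an editor's illustration in modern free-form Fortran, not the FLAGR function listed above, and it omits the FACTOR economy of (1.2.3)-(1.2.4); the four base points and functional values are taken from Table 1.2.1, and the argument 1700 is one of the test arguments used above.

program lagrange_sketch
  implicit none
  real :: xb(4), yb(4)

  xb = [1000.0, 1500.0, 2000.0, 2500.0]   ! base points from Table 1.2.1
  yb = [ 296.4,  405.7,  509.0,  608.4]   ! corresponding functional values
  print *, 'interpolant at 1700:', lagr(xb, yb, 4, 1700.0)

contains
  real function lagr(x, y, nb, xbar)
    integer, intent(in) :: nb             ! number of base points, d + 1
    real, intent(in)    :: x(nb), y(nb), xbar
    real    :: term
    integer :: i, j
    lagr = 0.0
    do i = 1, nb
       term = y(i)                        ! accumulate y(i) * L_i(xbar)
       do j = 1, nb
          if (j /= i) term = term*(xbar - x(j))/(x(i) - x(j))
       end do
       lagr = lagr + term
    end do
  end function lagr
end program lagrange_sketch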
For greater accuracy, however, one would normally order the base points in either ascending or descending sequence, and attempt to center the inter- polation argument among those base points used to determine the interpolating polynomial. This tends to ‘Keep the polynomial factor TL @-x0, (1.2.5) and hence the error term comparable to (1.396), as small as possible, This practice will usually lead to more satisfactory low-order interpolant values as well. The rogram does not test to insure that the required base points are in fact available, although a check to insure that min > 0 and max 5¢/(xe) BY (x0 + hi2) ‘The differences of Table 1.8 with subscripts adjusted as shown in Table 1.10 are illustrated in Table 1.11 Table 1.11 Centrat Diference Table Example, Use the Gauss forward formula of (1.65) and central-differences of Table 1.11 to compute the interpolant value for interpolation argument x — 2.5 with a ~ 3. Following the zigzag path across the table, a= (x= xol/h=Q5—2/1 =05, PAl2.5) ~ 9+ (0.5)(16) + (0.5)(—0.5)8)/2! +(0.5)(—0.5)1.5)(6)/3! ~ 154. 140. Chebysheo Polynomials 39 Evaluation of the generating function S08) = pix) 2° — De 4-5 for x ~ 2.5 yields the same value, as expected. 19° Concluding Remarks on Polynomial Interpolation ‘Approximating polynomials which use information about derivatives as well as functional values may also be constructed. For example, a third-degree polynomial could be found which reproduces functional values f(x) and f(x,) and derivative values f"(xo) and f(x.) at xp and 1x, respectively. The simultaneous equations to be solved in this case would be (for x(x) = Y?-o@e*'): Wy + 4x0 + 23x8 + 03x) = f(X0) + ayxy + ax] + agx} =f) S80) 4 + 2px, + Saye} =f'(x)- a, + 2agxy + 3ayx3 This system has the determinant Lx x8 x3 Pax xtoxt 0 1 2x0 3x5 0 1 2x 3:7]. Higher-order derivatives may be used as well, subject to the restriction that the determinant of the system of equations may not vanish. The logical limit to this pro- sess, when f(x), S'(o), f"(xa)s «+f %Cxo) are employed, yields the nth-degree polynomial produced by Taylor's expansion, truncated after the term in x". The gener of appropriate interpolation formulas for these special cases is somewhat more tedious, but fundamentally no more difficult than cases for which only the f(x) are specified. Unfortunately, there are no simple ground rules for deciding what degree interpolation will yield best results. When it is possible to evaluate higher-order derivatives, then, of course, an error bound can be computed using (1.39). In most situations, however, it is not possible to compute such a bound and the error estimate of (1.33) is the only information available. As the degree of the interpolating polynomial increases, whe interval contain- ing the points x, x, ..., x, also increases in size, tending, for a given x, to increase the magnitude of the polynomial term [Tino (¥ — x,) in the error (1.39) or error estimate (1.33), And, of course, the derivatives and divided dif- ferences do not necessarily become smaller as n increases in fact, for many functions (13] the derivatives at first tend to decrease in magnitude with increasing n and then eventually increase without bound as becomes larger and larger. Therefore the error may well increase rather than decrease as additional terms are retained in the approximation, that is, as the degree of the interpolating polynomial is increased. ‘One final word of caution. The functional values f(x) are usually known to a few significant figures at best. 
Successive differencing operations on these data, which are normally of comparable magnitude, inevitably lead to loss of significance in the computed results; in some cases, calculated high-order differences may be com- pletely meaningless ‘On the reassuring side, low-degree interpolating poly- nomials usually have Very good convergence properties, ‘that is, most of the functional value can be represented by low-order terms. In practice, we can almost always achieve the desired degree of accuracy with low-degree polynomial approximations, provided that base-point functional values are available on the interval of interest, 1.40 Chebyshey Polynomials ‘The only approximating functions employed thus for have been the polynomials, that is, linear combinations of the monomiais §, x, x?,.... x7. An examination of the monomials on the interval [~1,1] shows that each achieves its maximum magnitude (1) at x= +1 and its minimum magnitude (0) at x=0. If a function f(x) is approximated by a polynomial Pat) = Og + OX +O oo FOL, where p,(x) is presumably a good approximation, the dropping of high-etder terms or modification of the co- efficients a,,..., a, will produce litte error for small x (near zero), but probably substantial error near the ends of the interval (x near +1). Unfortunately it is in general true that polynomial approximations (for example those following from Tay- lor’s series expansions) for arbitrary functions f(z) ex- hibit this same uneven error distribution over arbitrary imcervals a 0, D(x.) <0, etc., and if D(x) >0, then D(x,) <0, D(x,)>0, ete. Thus D(x) must change sign m times, or equivalently, have n roots in the interval ( 1,1]. But D(x) isa polynomial of degree n=1, because both qy(x) and 5,(x) have leading co- efficent unity. Since an (1 — I)th-degree polynomial has only n— 1 roots, there is no polynomial S,(x). The pro- position that ¢,(x)is the monic polynomial of degree n that deviates east from zero on{-I,t]is proved by contradiction, Consider the illustration of the proof for $,(+) 74(x)/2 shown in Fig, 1.11. The solid curve is 6,(x), that is 2 _ 4, which has three extreme values at Xo = 1, x =0, and x2 = 1. The dotted curve shows a proposed S,(x) which has a smaller maximum magnitude on the interval than $,(x). The difference in the ordi- nates $,(x) and S,(x) at 9, x1, and x, are shown as D(x), D(x), and D(x). As indicated by the direction of the arrows, D(x) must change ign twice on the interval, an impossibility since #3(x) — (x) is only a first-degree polynomial Da= (75) (1.76) 1.11 Minimizing the Maximum Error Since the nth-degree polynomial ,(x) = T,(x)/2"~* has the smallest maximum magnitude of all possible nth- ‘degree monic polynomials on the interval [—1,!], any error that can be expressed as an nth-degree polynomial can be minimized for the interval [1,1] by equating it with $,(2). For example, the error term for the inter- Polating polynomial has been shown to be of the form (see 1.39b) [fe-»] We can do very little about s**"(@). The only effec- ‘way of minimizing R,(x) is to minimize the maxi- mum magnitude of the (n+ I)th-degree polynomial (x — x). Treat f**"*8) as though it were constant, ‘Now equate []}.o(¥ ~ x) with $,4(8), and notice that the (x — x, terms are simply the n + I factors of by.s(%); the x; are therefore roots of $,.4(x), or equivalently, the LOO, ne Grn! 
a Interpolation and Approximation eats) we Dé) -al Des») Figure 1414s, the second degree monic polynomial that deviates least from zero on {~1) roots of the corresponding Chebyshev polynomial Ty4s(2), given by i+ De an+2 am xi = c0s| For an arbitrary interval, a 9,0 DoT TeusMPi2 7 CSTAR(S) # ESTAR) + BCI DeTIOXC! J) WRITE (6,203) MPL, COSTAR(I), I=1,MP1) ¢ © Lees TRANSFORM ECONOMIZED POLYNOMIAL IN X ON INTERVAL c {-1,1) TOA POLYNOMIAL IN Z ON INTERVAL (AL,AR) CALL_TRANS(’M,CSTAR,C,ONEM, ONE, AL,AR } WRITE (6, 200)" AL, AR, MH, EMAK, MPL, (CCI), 1=1,MPL) 60 10 1 c © seas FORMATS FOR THE INPUT AND OUTPUT STATEMENTS .. 100 FORMAT C4X, 12, 12K, €12.5, 2C10K, F10.5) ) 101 FORMAT ¢ lek, bELH.6") 200 FORMAT (GHIN’ =, 15/6H EPS mp €15.5/6H AL =,F10,0/6H AR =,F10.4/ 11HO/27H THE COEFFICIENTS AC1I...AG I1, SH) ARE/] (1H , 1P5E16.6) 201 FORMAT C 1HO/ 1HO/ 35H THE COEFFICIENTS ASTARCL). ..ASTARC, 11, 25H) ARE; 1M / (1H , 1P5E16.6)) 202 FORMAT ( 1HO/ 10) 27H THE COEFFICIENTS 6¢1)...6(, 12, SH) ARE/ 1 aH / GH, 1PSE16,6)) 203 FORMAT C1Nb/ 1MO/ "35H THE COEFFICIENTS CSTAR(1)...CSTAR(, 1, 2 SH) ARE/ TH / CH, 1P5E16.8)) 208 FORMAT ( 1H0/°1HO/ {SH THE ECONOMIZED POLYNOMIAL ON THE TWTERLAL ¢ VAL = ,F10.4,2H, ,5H AR = ,F10.4,8H ) 1S OF/IIH DEGREE M =,12, 2H. 2 SSH THE MAXIMUM ERROR ON THIS INTERVAL IS. NO LARGER THAN, iPELS.7, 3 1M. /27HOTHE COEFFICIENTS C¢1)...CC, V1, SH) ARE/ IW / © GW, 1Pse16.6)) c ND ‘Subroutine TRANS. SUBROUTINE TRANS( N, COEFF), COEFFT, ENDLI, ENDRI,ENOLT, ENDRT ) TRANS CONVERTS AN N-TH DEGREE POLYNOMIAL IN ONE VARIABLE (SAY_Z) ON THE INTERVAL CENDLI,ENORI) HAVING COEFFICIENTS. COEFFLCI).«.COEFFI(N+1) INTO AW N TH DEGREE POLYNOMIAL INA SECOMD'VARLABLE (SAY X) ON THE INTERVAL CENDLT, ENDRT) WITH COEFFICIENTS COEFFY(LI...COFFFT(N+1) WHERE THE TWO VARIABLES X AND Z ARE RELATED BY THE TRANSFORNATION 2 = (CENDRI~ENDLI)#x * CENDLI *ENORT-ENDRI * ENDLT) )/(ENDRT-ENDLT) X = CCENDRT-ENOLT)«Z + (ENDR1*ENDLT-ENDLI © ENDRT) )/CENDRI-ENDLI> IMPLICIT REAL#8(A-H, 0-2) REAL*S COEFFI, COEFFT, ENDLI, ENORI, ENOLT, ENORT DIMENSION COEFFIC10), COEFFT(10) <... COMPUTE CONSTANT PARAMETERS. Goidi = CENDRT-ENDLID/CENDRT-ENOLT) CON? = (ENDL! +ENORT=ENDR1eENDLT) /UENORT- ENDLT) NPL = Ws 2 Example 1.3 Chebysheo Ezonomization ‘Program Listing (Continued) c Doe e1,NP2 {+ HECK FOR CON2=0 TO AVOID COMPUTING 0.0*0 ..... ie" {'con2.ne.0.0°) "Go To 2 COEFFTCI) = COEFFLCID 60.7) § 2 GOEFFTCI) = 0.0 50-5. d=! ,wP2. BINOM = NOMIAL(J=1, 11) 3 COEFFTCI) = COEFFT(I) + COEFFI(y)*CON2e*C 3-1), 4 COEFFTCI) = COEFFT( 1 )*coNd#*( 1-1) RETURN c NOM eno Function NOMIAL FUNCTION NOMIAL {XL} c c NOMIAL COMPUTES THE BINOMIAL COEFFICIENT (K,L). ¢ NOW = 2 IF CRUEL OR. LyEQ.0 ) GO TO & 00.3 icounT=1, L 3 NOM = NOMe(K- 1 COUNTS1)/1 COUNT 4 NOMIAL © NOM RETURN c END Data N= 3 EPS = 0,000000E00 © AL = 100000 aR = 3.00000 ACD. AC) = :000000E00 © 0,000000E00° 9.00000000 1. 000000E00 N= ‘3° eps = 3:260000E00 “AL =" 1,00000 aR = "3. 00000 ACD. AC = 2:000000€00 © 0.00000000" 000000600 1, 000000E00 N= 3" eps = 0;000000E00 “AL = -3.00000 AR = "1.00000 ACD. AC) = 2,000000E00 © 3,000000E00 1.000000 0, 500000E00 N= "3" "eps = 07400000800 "AL = =1,00000 aR = "1.00000 ACD. AC) 2/000000E00 3,000000E00" 1.000000E00 0, 5000800 N= 3" "eps = 0:700000800 “AL * -1,00000 aR = "1.00000 RD. AC) = 2/000000E00 3, 000000z00" 1.000000E00 0, 500000E00 EPS * 0,000000E00 "AL = 0.00000 AR = "1.57080 AD. 
AC © 15000000600 0, 000000£00° =0,500000E00 0, 000000E00 AGYLIAC) = 4l166667E-2 0,000000E00 -1.388888E-3 —0.000000E00 2iweo1sse-5 EPS = 5:00000¢-5 AL = 000000 AR = 1.57080 ACD. ACAD 1,000000€00 0.000000£00" -0.500000E00 0. 000000E00 ACS) ACD) WI166667E-2 0.0D9900E00 -1;388s88E-3 0 ,000000E00 aca) 2, 6801596-5 N= 3 eS = O;000000E00 AL = -1.00000 aR = 1.00000 AD. .AG) © Looo000€00 —9.000000E00" =0.500800E00 0, 000000E00 \lig6667E-2 0,000000E00 -1,3a8z88e-3 000000000 2i4g0159e-5 = Sioo0000c-5 AL = ~1,00000 aR = 1.00000 100000000 0, 00000000" -0.500000F00 0..000000E00 Al166657E-2 000000000 -1:3e8sasE-3 0.000000600 2ineoisge-s. = 0100000600 = AL = 1.00000 ar = 1.90000 81250000E00 0380000600" -0,086300E00 9§ = 0,000000E00 “AL = 100000000 AR = 2000.00000 8.300000E00 1,820000E-3" 0, susco0E-6 = 02050000E00 “AL = 1000.00000° AR = 2000..00000 ACD. AG) 6,300000E00 1.320000E~3" 0, 34s 000E-6 53 54 ‘Computer Output Interpolation and Approximation Results for the 2nd Data Set Noe 3 eps = aL 10000 aR = 30000 THE COEFEICIENTS 0.0 COEFFICIENTS '8,0000000 00 THE COEFFICIENTS 1,1000000 01 THE COEFFICIENTS 1.100000 o1 THE ECONOMIZED POLYNOMIAL ON THE INTERVAL ( AL = THE MAXIMUM ERROR ON THIS INTERVAL 1S NO LARGER THAN DEGREE M = 1, THE COEFFICIENTS 1.450000 01 0.326000 01 ac. 0:0 AC) ARE 0.0 1,0000000 00 ASTAR(L)., -ASTAR(N) ARE 1,2000000 01 6.000000 00 1, 0000000 00 BOLD. .BCA) ARE 1,2750000 01 —3.0000000 00 —2.000000-01 CSTAR(1).. .CSTARC2) ARE 1,2750000 01 1.0000, C1). 6002) ARE 1.275000 01 Results for the 7th Data Set Nos 8 EPS = 0,500000-08 aL 0:0. AR 115708 THE COEFFICIENTS AC1)...A(9) ARE 1,0000000 000.6 -5.0000000-01 0.0 0:0 <1lSeseesd-03 010 21480159005 THE COEFFICIENTS. 7.0710550-01 =1,7508860-03, THE COEFFICIENTS 6,0219750-01 10653830-08 ASTAR(1)...ASTARCS) ARE -2.1808900-01 727420-05 5,7099140-02 315909280-06 75.5536240-01 22125u5030-08 2. BCL) -+6B(9) ARE -5.1361920-01 76 8208880-06, 1.3735810-02 2)8054130-08 ars 3.0000 ) 15 oF 3.250000 00, 4,1666670-02 1,1215910-02 1,3605030-03, Example 1.3 Chebysheo Economization ‘Computer Output (Continued) THE COEFFICIENTS CSTAR(L)...CSTARCE) ARE 7,0709870-01 =5.553595D-01 -2.179653D-015.707401D-02 1, 0884071 #1570¥6130-03 THE ECONOMIZED POLYNOMIAL ON THE INTERVAL ( AL = 0.0 AR# 1.5708.) 18 OF DEGREE M = 5. 
THE MAXIMUM ERROR ON THIS INTERVAL 1S HO LARGER THAN 7.29780800-06, THE COEFFICIENTS C(1)...0(6) ARE 1,0000070 00 ~3.5836380-08 9782320-01 =7,2012u90-05 —_$,1003120-02 5, 7038910-03 Results for the 8th Data Set 8 nos ers = 0.0 AL = =1:0000 AR + "120000 THE COEFFICIENTS AC1)..,AC9) ARE 1.000000 00 0.0 -5.0000000-01 0.0 416666 70-02 a0 -1l3egseep-03 0.0 2iha01590-05 THE COEFFICIENTS ASTARGL)...ASTARCO) ARE 1,0000000 00 0,0 5.0000000-01 0.0 4, 1666670-02 °, <1l3essss0-05 0.0 2, 4801590-05, WE COEFFLCLENTS 8(12,..8(9) ARE 7,6519780-01 0.0 -2.2980690-01 0.0 b.9933430-05 oo ohl1852650-05 00 1,9376280-07 THE COEFFICIENTS CSTAR(1)...CSTAR(9) ARE 1,0000000 00 0.0 -$.0009000-01 9,0 .Me8s670-02 00 o1,3asss80-03 0:0 21401590-05 THE ECONOHIZED POLYNOMIAL ON THE INTERVAL ( AL = 71,0000, AR = 1.0000) 15 OF DEGREE Mn be HE HARUM ERROR ON THIS INTERVAL 15 'HO LARGER THAN 020 THE COEFFICIENTS C(1)...0(9) ARE 1,0000000 00 0.0 -5.0000000-01 0.0 4,1666570-02 o0 -113e888s0-03 010 274801590-05, 56 Interpolation and Approximation Computer Output (Continued) Results for the 9th Data Set nos 8 EPS = 0,$00000-08 AL = -150000 AR = 10000 THE COEFFICIENTS ACL)...A(9) ARE 1,0000000 00° 0.0 =5.0000000-01 0.0 oro <1i38speeo-03 0.0 2iag01s90-05 THE COEFFICIENTS ASTAR(1)...ASTAR(S) ARE 1,0000000 00 9.0 =5:0000000-01 0.0 ao -1:3a8ssep-03 0:0 21¥e01590-05 THE COEFFICIENTS B(1)...8(9) ARE 7,6519780-01 0.0 -2.2980690-01 0.9 oro 811852650-05 0.0 119376240-07 THE COEFFICIENTS CSTAR(1)...CSTAR(S) ARE 9.9995800-01 0,0 9928050-01 0.0 THE ECONOMIZED POLYNOMIAL ON THE INTERVAL CAL ® =1.0000, AR = DEGREE = &, THE MAXIMUM ERROR OW THIS INTERVAL IS NO LARGER THAN THE COEFFICIENTS C(1)..20¢5) ARE 9.9995800-01 0.0 4,9924050-01 0.0 Results for the 12th Data Set Noe 2 Eps = 0,s00000-01 AL = 1000, 0000 AR = 20000000 THE COEFFICIENTS AC1),..AC3) ARE 6.3000000 00 1.8200800-03 -5,4500000-07, THE COEFFICIENTS ASTAR(1)...ASTAR(S) ARE 8.253700 00 —_-3.9250000-01 6250000-02 THE COEFFICIENTS 8(1)...8(3) ARE 8,2106230 00 3.9250000-01 ~4.3125000-02 4,1666670-02 4,3666670-02 4, 9533u30-03, 3,9626780-02, 1,000 ) 1s oF lzougeis0-05, 3.9626740-02 Example 13. Chebyshev Ezanomization ‘Computer Outpat (Continued) THE COEFFICIENTS CSTARC1)...CSTAR(2) ARE 8,2106250 00 $.9250000-01 THE ECONOMIZED POLYNOMIAL ON THE INTERVAL ( AL = 1000.0000, AR = 200.0000 ) ss OF DEGREE M= 1, THE MAXIMUM ERROR ON THIS INTERVAL IS HO LARGER THAN 4.31250000-02. THE COEFFICIENTS C(1)...C(2) ARE 7,0331250 00 7.8500000-0% ST 58 {Interpolation and Approximation Discussion of Results Five different polynomials were used in the twelve test data sets as follows: Daa ‘Maximum Polynomial Allowable Error, € Interval Set LR] 1 ta 0 2 13) 3.26 3° EA ° 4 Ly 4 5 07 0 0.0000 0 10,0000 ° Pe 0.05 oa ps(z) = 8.25 + 0.392 — 0.08632 6.3 + 0.001822 — 3.45 x 10-72 Data sets t, 3, 6, 8, and 11 allow no error to be intro- duced by the economization process. Hence, the econo- mized polynomial for these cases must be equivalent to the original polynomials; significant discrepancies could be accounted for only as errors in one or more elements of the X or T matrices, assuming that the executable portion of the program is free of error. Results for the 8th data set, included in the computer output, illustrate these cases, Results for data set 2, shown in the computer output, ate those outlined in (1.88) to (1.92). Results for data sets 4 and 5, not shown, are, respectively: » Emex = 0-125, pp(z) = 2 + 3.3752 + 27, m= and m= 1, Ener = 0.625, p(s} = 2.5 + 3.3752. 
The starting polynomial for data sets 6-9 is the power series for cos z, expanded about 29 = 0, and truncated after the term in 2°; it has been used as an example in Section 1.12. The results for data set 9, shown in the computer output, correspond to those of (1.82) to (1.87). ‘The results for data set 7, shown in the computer output, are similar to those for data set 9, except that the interval is 0< z < 7/2. The economized polynomial is 000007 — 3.383638 x 10"*z ~ 0.4974232:* = 7.241249 x 109 2 + 5.100312 x 107? 24 — 5.703891 x 10°? 25 (13.16) In this case, Enge = 7.298 x 10°, The total possible error in the approximation is given by Engg plus the maximum possible error introduced in truncating the power series expansion after the term in 28, that is, by cos 2 Eas + Foose] » 2 €in[0,/2]. (1.3.17) ‘Thus, the maximum possible magnitude of the error in (1.3.16) for the interval [0,n/2] is 7.298 x 10-6 + (n/2)'°/10! (1.3.18) By taking advantage of the fact that cos z is periodic with period 2n, that cos(x/2+a)=—cos(r/2—2), and that cos(x + B) = cos(r — B), (1.3.16) may, after suitable adjustment of the argument, be used to find the cosine of any angle within the accuracy of (1.3.18). In fact, since cos z= sin(z + 5/2), (1.3.16), with an appropriate transformation of variable, could be used 10 calcutate the sine of any angle as well, with the same bound for the error, Results of the economization process for the 10th data set, not shown, are: 04315, p(z) = 8.206850 + 0.392. Results for the 12th data set (sce the computer output) show the first-order minimax polynomial approximation to a second-degree polynomial representation of the molar heat capacity for gascous nitrogen at low pressures, inthe temperature range 1000-2000°K. See Problem 1.46 at the end of the chapter for more details. Double-precision arithmetic has been used for all calculations. In order to generate accurate coefficients for the economized polynomial, particularly when only small errors, «, are aifowed, it is important to carry as. many digits as possible throughout the calculations; double-precision arithmetic should be used, if available. AS written, the program can handle only starting. 3.25 x 1075, m=, Ege = Example 1.3 Chebyshev Economization 9 polynomials of degree nine or less. The matrices Tand Y could be expanded to allow higher-degree starting polynomials, although the storage requirements for 7 and X could become prohibitive for targe 1, if the sub- scription scheme outlined earlier were used. An alterna- tive approach would be to pack the coefficients from Tables |.12 and 1.13 using a more efficient assignment of ‘memory. Yet another, more elegant, approach to the economiza- tion process is to use the recursion relation of (1.72) to ‘generate the needed coefficients. This avoids the need for saving tabulated information. In addition, since trun- cation always starts with the highest-order term of (1.34), ax" can be expanded to yield b, directly. If the term byT, can be dropped without exceeding the maxi- mum allowable error, the af, i= 0,1,...,2—1, can be modified appropriately, to a? for example, using the recursion relation, Next, d2_,x*"! can be expanded to yield b,-. directly. If 2, -7,-, can be dropped, the af, i=0,1,...,2 2, can be modified appropriately, again using the recursion relation. 
This process of expanding only the highest-order untruncated power of x in terms of the Chebyshev polynomials, followed by proper adjustment of the coefficients of lower powers of x, leads directly to the economized polyaomial (1.3.8), without ever evaluating bg, ... by. Arden [1] and Hamming {2} suggest some other approaches to the economization process which use only the recursion relation of (1.72). o Interpolation and Approximation Problems Tobie P18 an tom | reo fat al at rat 1] at eaters Es Eanah, 5 a nrg i on how large must mbe to yeidan approximation fore thatis 1 | ©) 2] 9] tf, accurate within 10°? a2] o2 lowe ° o 12 For small values of x, the approximations 6 : is esltx singex a] 4) uyoy we 3 ° 3 are sometimes employed. In each case, use the error term from + Salmey a 1b Taylor's expansion to estimate how large avalue of x(iothe 5 |g | gy nearest 0.001) may be employed with the assurance thatthe error in the approximation is smaller than 0.01. Check your conclusions against tables of exponentials and sines. 1.3 Let M be the maximum magnitude of f(x) on the terval (x9, x,). Show that the error for linear interpolation for f(x), using the functional valuesat xp and x, is bounded by 4M (4, x0)" for x_ 21? 1.4 Use the algorithm of (12) to evaluate Bet fart 28 + Sa pols) and each of its derivatives at x =2.5. 15 Write a function, named POLY, that implements the algorithm of (1.12) to evaluate an nth-degree polynomial pix) = Staoanx', and each of its derivatives of order 1 through’ at x= %, The function should have the dummy argument list (NA, XBAR, DVAL) ‘where N is the degre ofthe polynomial, n, A is a vect imensional array) containing. the coeficien!s ao, @y, sy inclements A(1),A(2) ..., A(N + 1), XBAR is the independent variable value, %, and DVAL is a vector containing. piG%), 7°13), PPA imelements VAL (1), DVAL(2), ..-, DVAL(N) ‘upon return from the function, The value of POLY should be PA. Write a short main program that reads values for m, ae, 44, +5 and &, calls on POLY, prints the values returned for (42), pla), .-» POX), and then reads another data set. Test POLY with several different polynomials, including that of Problem 1.4 1.6 (@) Show that the mth divided difference of y= x*is unity, no matter which base points x», 24,» Xs are chosen. (©) Show that the nth divided diference of any polynomial aa) = 3 tea aux! ts given by ay, regardless of the choice of base points. 17 Investigate the relation, ifany, between the number of significant figures to which tabulated values of f(x) vs x are s3xen, and the highest order far which fnite divided differences ate likely to be meaningful 18 Consider the divided differences in Table P1.8. {@) What is the significance of the zero at the top of the seventh column? (b) Without renumbering the abscissas, write down the divided. difference polynomial that uses the elements denoted by asterisks. (c} What is the likely error bound for a fourth-degree inter- polation with x» Mf Deuter tols Ry(o + ah 2.3 Newton-Cotes Closed Integration Formulas The simplest case of closed integration is shown schematically in Fig. 2.3, Here, the two base points Xo =a and x, =6 are used to determine a first-degree polynomial, py(x) = py(xo + ah) or straight-line approxi- mation of f(x). The appropriate form of (1.54) is given by SC) =o + th) = fle) + 2 AF (0) + Rilo + ah) = Pilg +h) + Ri(xe tah) (2.4) where Ryl%o + a rata yh wae SP Fin (xox), (2.5) Riso ta) Wale — Df La.xo.ns). aso asl Figure 2.3. The trapecoidal rae. 
‘Using this polynomial approximation for f(x) and transforming the integration variable from x to , (a = (x — xy)/h), we have Jf) ax = [700 dx = [po ae sh [pile + at) de, o6 ls where the integral on the right, is given, from (2.4), by ‘ 4 ['Ua) + aa/Geald rfeveor+£ ssc), = [reo +A ee) @7 From the definition of the first forward difference, Fe) = fo + &) — fo), (2.7) assumes the final form, f "pax off) +e Lo) 3 Bree) +00 (28) the familiar trapezoidal rule. The required area under the ‘solid curve of Fig. 2.3 is approximated by the area under the dotted straight line (the shaded trapezoid). The error involved in using the trapezoidal approxima- tion is given by the integral of the remainder term (2.5), (aoa =h [Rue + ath) de =e [ae ae, A in (xox). (2.9) n ‘Numerical Integration If f(x) is a continvous function of x, then £7€) or its equivalent, 21/(x,x0.%1] (see (2.5)], is a continuous, but unknown, function of x, so direct evaluation of (2.9) is impossible, Since x is simply a transformed value of x, f°@ is a continuous function of the integration variable a; this property simplifies the estimation of (29). The factor f"(2) can be taken outside the integral sign by applying the integral mean-value theorem from the calculus: If two functions, q(x) and g(x), are continuous for @ be. like quantities, resulting from the use of Simpson's second rote. Then Qi) In terms of the integration limits a and b, we can write (b— - Coe pone) = 2.32) ses) where , and &, are different values of € in (a,b). If we assume that f“*(é,) approximately equals f"(é,), then (2.32) reduces to (233) and (2.31) becomes =H, -th. ‘The validity of (2.34) hinges completely upon the assump tion that f%E,) and f(g.) are equal. Example. Evalvate (234) ff teyac~ fc —28 +1245) de= 205 235) by using the trapezoidal rule (2.21a) and the one-point open formula of (2.30a), each with degree of precision one, Then estimate /* by using the technique outlined above. For this ‘polynomial function [see (2.23)], the estimates of /* computed from (2.21a) and (2.300) are, tespectively, J, =26 and J; = 18. The ratio of error terms is n 0 py m6) aoa =a Sin @b. 2.36) or Assuming that £°(E,) is equal to /“(E2) leads to Ey= 2B ay ‘Note that for this case, the open formula is apparently more accurate than the closed formula of identical degree of pre- Cision (not the usual case). Substitution of (2.37) into (2.31) leads to 2x18 bis bw 4 2X8 209, pest 238) which is the true value of the integral ‘examination, (2.38) is seen to reduce to It = AYA) + 4/2) + £0, 2.39) {Simpson's rule (2.21b)]. Thus the expression of the error in terms of twa formulas with degree of precision one has resulted in a compound formula with degree of precision three. this case. Upon closer 2.6 Composite Integration Formulas ‘One way to reduce the error associated with a low- order integration formula is to subdivide the interval of integration [a,b] into smaller jntervals and then to use the formula separately on each subinterval. Repeated appli- cation of a low-order formula is usually preferred to the single application of a high-order formula, partly because of the simplicity of the low-order formulas and partly ‘because of computational difficulties, already mentioned in the previous section, associated with the high-order formulas. 
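The error-combination idea worked through in the example above is easy to mechanize. The sketch below is an illustration only (it is not one of the book's programs; the integrand FTEST and the limits A and B are assumed): it forms the single-panel trapezoidal estimate T of (2.21a) and the one-point open estimate P of (2.30a), and then combines them as S = (2P + T)/3 in the manner of (2.38), which is just Simpson's rule (2.39) and is therefore exact for the cubic chosen here.

C     ERROR-COMBINATION SKETCH.  T IS THE TRAPEZOIDAL ESTIMATE OF
C     (2.21A), P IS THE ONE-POINT OPEN ESTIMATE OF (2.30A), AND
C     S = (2.0*P + T)/3.0 IS THE COMPOUND ESTIMATE, WHICH IS JUST
C     SIMPSON'S RULE AND IS EXACT FOR THE CUBIC USED HERE.
C     THE INTEGRAND FTEST AND THE LIMITS A AND B ARE ASSUMED
C     FOR ILLUSTRATION ONLY.
      A = 0.0
      B = 2.0
      T = (B - A)*(FTEST(A) + FTEST(B))/2.0
      P = (B - A)*FTEST((A + B)/2.0)
      S = (2.0*P + T)/3.0
      WRITE (6,200) T, P, S
  200 FORMAT (1H , 3F12.6)
      STOP
      END

      FUNCTION FTEST( X )
C     ASSUMED TEST INTEGRAND
      FTEST = X**3 - 2.0*X**2 + X + 5.0
      RETURN
      END

The same device, pairing two estimates whose error terms are known multiples of the same derivative, reappears as Richardson extrapolation in Section 2.7.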
Integration formulas resulting from interval subdivision and repeated application of a low-order formla are called composite integration formulas, B Numerteal Integration Although any of the Newton-Cotes or other simple (one-application} formulas can be written in composite form, the closed formulas are especially attractive since, except for the base points x =a and x = b, base points at the ends of each subinterval are also base points for adincent subintervals. Thus, although one might suspect chat n repeated applications of an m-point formula would require nm functional evaluations, in fact only n(m—1)+1 such evaluations are needed, a considerable saving, especially when mis small. The simplest composite formula is generated by re- peated application of the trapezoidal rule of (2.21a). For 1n applications of the rule, each subinterval is of length f= (b= a)in Let x; =X + th J = fig I(x) dx -f f(x) dx fg Sx) dx + [year + { LO) dx h $a) + sea] h +5 UG) +e) + + Ee) 4/6091 Me <0 Distance from centerline (ft), either x, or x2 Anil. XLOW Upper and lower limits on x. (Function F81) sus Function for converting distance to corresponding subscript. e Function for evaluating equation (2.1.3). (Function siMPS) H Stepsize, h. SUMEND, SUMMID The first and second summations of (2.1.6), S, and Sq, respectively t Because of FORTRAN limitations, we have 1<1 e eno Functions F81, FB2, DIF1, DIF2, FUNCTION FBLC) i c } REAL INTL, INT? COMMON B1, 82, M,N, X, DSQ, INTL, INTZ DIMENSION’ 81¢ 4025), "82020255, INTIC1025), INT2(1025) es STATEMENT FUNCTION DEFINITIONS iSUbie) = InTeaBscz.aePywrsrLoATON +°8-001) + 1 FCP) = 0, S#DSQ/(BSO'4 (X-Plee2}e01.5, t 1 = ssuacy i Fea © BCLS CN) { RETURN eurey £02(¥9 a Fa) e BCdeF OD RETURN ENTRY DIFACY) i 1s tsuscy) DLE = LC) = INTEC) RETURN i ewTay DIF2CY) 1 1suBcy) DIF2 = B2(1) = INT2CI) RETURN END Function SIMPS. FUNCTION SINPS( A, By Me FD THE FUNCTION SIMPS USES N APPLICATIONS OF SIMPSON'S RULE XO CALCULATE NUMERICALLY THE INTEGRAL OF F(X)«DX BETWEEN INTEGRATION LIMITS A AND 8, SUMEND 13 THE Sum OF ALL FCK(1D) FOR EVEN I CEXCEPT FOR F(X¢2eN)) WHILE SUMMID IS. THE SUM OF ALL F(XC)) FOR 1 ODD. HIS THE STEPSIZE BETWEEN ADJACENT C1) AND TWOW 1S THE LENGTH OF THE INTERVAL OF INTEGRATION ‘Numerical Inegration Program Listing (Continued) c FOR EACH INDIVIDUAL APPLICATION OF SIMPSON'S RULE, K 1S THE c ITERATION COUNTER, ¢ © seaes INITIALIZE PARAMETERS «4.4 THON’ = (B-A 7H, is TWoH/2. SUMEND = 0. SmI So © ses, EVALUATE SUMEND AND SUMMID . B0°2 “ket X= A FLOATCK=1)¢TWOH SUMEND = SUNEND + F(X g 2 Sumito Sumito + Few © ese, RETURN ESTIMATED VALUE OF THE INTEGRAL SimPs © (2, 005UMEND + u.08SUMMID - FCA) + FCB) RETURN c END Data 11 = 1000.00 12 = 500,00 EPsi* 0.20 eps? = 0.60 D> 100 wot ioe SIGMA® 1I712E+9 FOL #1. 08-5 Na VMAX = 25 Ti © 100,00 12 = 500,00 ers = 0.80 EPs? = 0.60 Ds 100 wos Neo SIGMAs 1,7126-9 TOL 91. 08-5 Ns f1Max = 2 11 = 1900,00 2 800.00 EPs1 = 0.20 EPS? = 0,60 O27 1.00 wot 100 SIGMA® 1,722E-9 TOL #1, 08-6 we} VMAX = 25 Computer Outpt Results for the Ist Data Set RADIANT HEAT TRANSFER BETWEEN JNFIW(TE PARALLEL PLATES nos 1000.00 mf 500.00 EPs. = 0:80 eps? = 0:60 D : 1.00 W 100 SoMa = oLi723¢-08 To. 0; 1000-05 N : 2 Tmax 5 a 0.13708 Ob es 0,68206 02 xtow 0:50 XHIGH = 0,50 ter = a 1 pic B21) wyrcey wtacy 1 0,13753uE 08 © 0,510218F 95 0.287179 02 0.515045E 03 2 011375016 O% 9;295956E 03 0.270533 02 0.579390E 03 3 O;T374IGE 0% 01258636E 05 0.226972E 02 0. 
(The computer output continues through the remaining iterations for the first data set and then gives partial results for the 2nd and 3rd data sets; in each case the tabulated values of B1(I), B2(I), INT1(I), and INT2(I) are printed at every iteration until the convergence tolerance is met, together with the corresponding fluxes Q1 and Q2.)

Discussion of Results

The first two sets of results are for T1 = 1000°R, T2 = 500°R, ε1 = 0.8, and ε2 = 0.6. Convergence within the specified tolerance is rapid, occurring after five iterations. The results for n = 2 are almost identical with those for n = 8, indicating that good accuracy can be obtained with just a few subdivisions. The radiant fluxes are most intense at the center of the plate (I = 1), since end leakage is least important at this point. Q1 is negative, since the upper plate receives more energy from the lower plate than it can radiate and reflect to its surroundings, and so must be cooled to maintain its temperature constant. The third set of results is for T1 = 1000, T2 = 800, ε1 = 0.2, and ε2 = 0.6. These conditions are such that heat must be supplied to both plates at approximately equal rates.

2.7 Repeated Interval-Halving and Romberg Integration

Let T_{N,1} be the computed estimate of an integral ∫ f(x) dx on [a,b] obtained by using the composite trapezoidal rule of (2.42) with n = 2^N. Then T_{0,1} is the estimate of the integral using the simple trapezoidal rule, T_{1,1} the estimate for two applications, T_{2,1} the estimate for four applications, etc. T_{N,1} involves twice as many subintervals as T_{N-1,1}. Hence N can be viewed as the number of times the initial integration interval [a,b] has been halved to produce subintervals of length h = (b - a)/2^N. From (2.42):

T_{0,1} = ((b - a)/2) [f(a) + f(b)],   (2.52a)

T_{1,1} = ((b - a)/4) [f(a) + 2 f(a + (b - a)/2) + f(b)]
        = (1/2) T_{0,1} + ((b - a)/2) f(a + (b - a)/2),   (2.52b)

T_{2,1} = (1/2) T_{1,1} + ((b - a)/4) [f(a + (b - a)/4) + f(a + 3(b - a)/4)],   (2.52c)

T_{3,1} = (1/2) T_{2,1} + ((b - a)/8) [f(a + (b - a)/8) + f(a + 3(b - a)/8) + f(a + 5(b - a)/8) + f(a + 7(b - a)/8)].   (2.52d)

By induction, the general recursion relation for T_{N,1} is

T_{N,1} = (1/2) T_{N-1,1} + ((b - a)/2^N) Σ_{i=1}^{2^{N-1}} f(a + (2i - 1)(b - a)/2^N),   N ≥ 1.   (2.53)

The recursion relation of (2.53) can be used to compute the sequence T_{1,1}, T_{2,1}, ..., T_{N,1} once T_{0,1} has been calculated. The function f(x) need be evaluated just 2^N + 1 times to compute the entire sequence.

Corresponding to T_{N,1}, the error term given by (2.42) is

-((b - a)^3 / (12 (2^N)^2)) f''(ξ),   ξ in (a,b).   (2.54)

Provided that f(x) has a continuous and bounded second derivative on the interval (a,b), (2.54) assures that the sequence T_{0,1}, ..., T_{N,1} converges to the true integral, assuming that no round-off error enters into the calculations.

The Richardson extrapolation technique of (2.47) can now be applied to each pair of adjacent elements in the sequence T_{0,1}, T_{1,1}, ..., to produce a third (hopefully improved) estimate of the integral. Let I* in (2.47), corresponding to the pair of estimates T_{N,1} and T_{N+1,1}, be denoted T_{N,2}, so that

T_{N,2} = (4 T_{N+1,1} - T_{N,1}) / 3.   (2.55)

In particular,

T_{0,2} = ((b - a)/6) [f(a) + 4 f(a + (b - a)/2) + f(b)],   (2.56)

which is just the integral predicted by one application of Simpson's rule. Investigation of the sequence T_{0,2}, T_{1,2}, ..., T_{N,2} leads, by induction, to the conclusion that T_{N,2} is the estimate of the integral which would be computed by using the composite Simpson's rule of (2.49) with n = 2^N. Thus T_{N,2} is the value computed for the integral by Simpson's rule after halving the initial interval [a,b] N times. From (2.49), the error in T_{N,2} is given by

-((b - a)^5 / (2880 (2^N)^4)) f^(4)(ξ),   ξ in (a,b).   (2.57)

Provided that f(x) has a continuous and bounded fourth derivative on the interval (a,b), (2.57) assures that the sequence T_{0,2}, ..., T_{N,2} converges to the true integral, assuming no round-off error.

The Richardson extrapolation technique of (2.51) can now be applied to each pair of adjacent elements in the sequence T_{0,2}, T_{1,2}, ..., to produce yet another sequence of estimates,

T_{N,3} = (16 T_{N+1,2} - T_{N,2}) / 15.   (2.58)

Investigation of the sequence T_{0,3}, T_{1,3}, ... shows that T_{N,3} is the estimate of the integral that would be computed by using the composite version of the five-point Newton-Cotes closed formula of (2.21d) with 2^N repeated applications or, alternatively, after halving the original integration interval [a,b] N times. The error term for T_{N,3} has the form

-((b - a)^7 / (1935360 (2^N)^6)) f^(6)(ξ),   ξ in (a,b).   (2.59)

Provided that f(x) has a continuous and bounded sixth derivative on the interval (a,b), (2.59) assures that the sequence T_{0,3}, ..., T_{N,3} converges to the true integral, assuming no round-off error.

The Richardson extrapolation technique can be applied to each pair of adjacent elements in the sequence T_{0,3}, T_{1,3}, ..., to produce another sequence of estimates,

T_{N,4} = (64 T_{N+1,3} - T_{N,3}) / 63,   (2.60)

which can be shown (see Bauer et al. [6]) to converge to the true integral.

The relationships of (2.55), (2.58), and (2.60) are special cases of the general extrapolation formula

T_{N,j} = (4^{j-1} T_{N+1,j-1} - T_{N,j-1}) / (4^{j-1} - 1),   (2.61)

credited to Romberg and described in detail by Bauer et al. [6], who show that each of the sequences T_{N,j}, for j = 1, 2, ..., converges to the true integral with increasing N. In addition, the sequence T_{0,1}, T_{0,2}, ..., T_{0,j} also converges to the true integral for increasing j.
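The recursion (2.53) and the extrapolation formula (2.61) translate directly into a short program. The sketch below is an illustration only, separate from the more general subroutine TROMB of Example 2.2 that follows; the integrand FUNC, the limits A and B, and the tableau dimensions NMAX and JMAX are assumed.

C     ROMBERG TABLEAU SKETCH.  THE FIRST COLUMN T(N+1,1) IS THE
C     COMPOSITE TRAPEZOIDAL ESTIMATE AFTER N INTERVAL HALVINGS
C     (EQS. 2.52A AND 2.53); THE REMAINING COLUMNS FOLLOW FROM
C     THE EXTRAPOLATION FORMULA (2.61).  THE INTEGRAND FUNC AND
C     THE LIMITS A AND B ARE ASSUMED FOR ILLUSTRATION ONLY.
      DIMENSION T(6,6)
      NMAX = 5
      JMAX = 5
      A = 1.0
      B = 2.0
      H = B - A
      T(1,1) = H*(FUNC(A) + FUNC(B))/2.0
      DO 2 N = 1, NMAX
         T(N+1,1) = 0.0
         STEP = H/2.0**N
         IMAX = 2**N - 1
         DO 1 I = 1, IMAX, 2
    1    T(N+1,1) = T(N+1,1) + FUNC(A + FLOAT(I)*STEP)
    2 T(N+1,1) = T(N,1)/2.0 + STEP*T(N+1,1)
      DO 4 J = 2, JMAX
         FAC = 4.0**(J-1)
         NROW = NMAX - J + 2
         DO 3 N = 1, NROW
    3    T(N,J) = (FAC*T(N+1,J-1) - T(N,J-1))/(FAC - 1.0)
    4 CONTINUE
      WRITE (6,200) ((T(N,J), J=1,3), N=1,3)
  200 FORMAT (1H , 3F14.8)
      STOP
      END

      FUNCTION FUNC( X )
C     ASSUMED TEST INTEGRAND
      FUNC = 1.0/X
      RETURN
      END

With FUNC(X) = 1.0/X on (1,2), the tabulated entries approach ln 2 = 0.693147, both down each column and across each row.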
The sequences Ty,; for j > 3 do not correspond to composite rules for Newton-Cotes closed integration as do those for js3 These Romberg sequences can be arranged in simple tabular form as follows: To2 Tos Tos Tos (2.61) To use this technique, we simply compute the elements of the first column using (2.52a) and (2.53), fill out the remaining elements of the triangular array using (2.61), and then examine the number sequences down each column and across each row. Each of the sequences should converge to the true integral. ‘The error corresponding to Ty,,, as defined in (2.61), can be shown [6] to be equal to Bron, Sin(ab), where K(j) is @ constant that depends on a, band j, but is independent of N. Example. Use the Romberg integration scheme to estimate the value of In 137.2 from the integral ‘Table 2.1 shows the results inthe tabular form of (2.62). The first column contains the results for evaluation of the integral using the composite trapezoidal rule of (2.53), after computing the first entry with the simple trapezoidal rule of (2.52a). The remaining entries in the Romberg tableau are the results of Tie Ths ‘repeated extrapolation using (2.61). 7 7 The true integral to six figures is 4.92144. Clearly, each column sequence is converging to this value. The sequence, across the top row is also converging to this value. The apparent divergence of the last entry or two in each column results from round-off errors in the calculations (recall that the last entry in the first column involves 2? = 8192 repeated (2.62) applications of the trapezoidal rule). Table 2.1 Romberg Tableau for Evaluation of In 137.2 Nii 2 3 4 6 7 8 8 10 0 | 6ess6 24.1795 12.7845 8.05264 GoM! $2408 4.98842 493035 4.92208 4.92146 1 | 352837 134967 8.12658 6.03672 5.24125 498848 4.93035 4.92208 4.92146 492148 2 | 189434 8.46221 6.06937 5.236 4.98873 4.93037 4.92208 492145 4.92144 492144 3 | 102s 6.21893 «$.25725 498973 4.93042 492309 4.92146 4.2144 4.92144 4.92143 4 | 7asdsz 531736499391 4.93065 4.92209 492146 4.92144 4szids 4.92183 452143, s | ssasr2 Sor4i2 4.93164 © 4.92213 4.92146 497144 4.92144 © 492143 4.92183 6 | 522237 A3ee 4.92228 492146 492144 © 4.92144 492143 492143 7 | Soosi7 482318 4924s 4924s Aeziat 492143 492143 8 | 49443 49rise 4.92144 a92i4e © 4.92143 492143 9 | 492730 4214s dgzias 4.92143 492143 Jo | 493201 a9ni44 492143, 4.92143 n | 4st 9744 492143 2] 492153 aszias Bb] 4s2as EXAMPLE 2.2 FOURIER COEFFICIENTS USING ROMBERG INTEGRATION Problem Statement Write a general-purpose subroutine named TROMB that uses the Romberg integration algorithm outlined in Section 2.7 to evaluate numerically the integral J SQ) dx (2.24) where f(x) is any single-valued function and a and 8 are finite. The program should first use the trapezoidal rule ‘with repeated interval halving to determine 7,1, Ty,1» ---» Tres ftom (2.52a) and the recursion relation (2.53). Then the Romberg sequences {Ty,,} should be computed from the general extrapolation formula (2.61) for all ‘4 €Jnax: The Romberg Tableau should be organized as illustrated in Table 2.1. To test the subroutine, write a general purpose program that calls on TROMB to evaluate the coefficients of the Fourier expansion for any arbitrary function 9(x), periodic with period 2n, such that g(x) = g(x + 2kn) for integral k. 
The Fourier expansion may be written [14] of) = ¥ cqcos mx + F dy sin ms, (2.2.2) where Ce -t { a(2) cos mx dx 2 fF g(x) sin mx dx, a 2.24) Write the program so that the coefficients (Cjyd,) are caleulated in pairs, for m= 0, 1, ...5 Mma As a test periodic function, g(x), use the sawtooth function of Fig, 2.2.1. Figure 22.1 A periodic sawiooth function. Method of Solution ‘The subroutine TROM is a straightforward implemen- tation of the trapezoidal rule of (2.52a), To,s = (b = aL f(a) + f(O)1/2, followed by repeated interval halving using the recursion relation of (2.53), Tan fe at See (as +S?) The Romberg extrapolation for N=1,2,.. Nee formula of (2.61), is then employed for j= 2, 3, . Naw ~j-+ 1, t0 fill out the remaining clements in the fitst jnax Columns of the matrix 7. ‘The integrands for the integrals of (2.2.3) and (2.2.4), Sd) = [a(x) €08 mein, 25) L4x) = (g(x) sin mx]/n, (2.2.6) are evaluated by the functions FUNCTC and FUNCTO, respectively, defined in one multiple-entry function. The Periodic function g(x), which for the suggested function of Fig. 2.2.1 is given by a) =x, 2.7 is also defined in the multiple-entry function. From (2.2.4), itis clear that for all g(x), do = 0. For the periodic function of (2.2.7), the coefficients cy and dy of (2.2.3) and (2.2.4) may be found analytically, and are given by =f x cos mx dx = 0, m=0,1,... (2.28) IP e 2 2 ne ay = =f xsin mx dx oosma= —(-)% m=1,2,... (22.9) Then the Fourier expansion of (2.2.7) is os) = = 2(sin x — Sg SS --)- 210) Inthe programs that follow, all ¢, and d, are evaluated for m=0,1, ++ Mqaxr The Romberg tableaus for cy and d, are stored in the matrices C and D respectively. Example 2.2. Fourier Coefciens using Romberg Integration 93 ‘Flow Diagram ‘Main Program Compute elements of Romberg +] Tableaus Cnjs Djs f= 1s 2, os Jnat Ay sos Nox —J + 1, Where P pe) ax 0,1 Mag “i+ 1 (Subroutine TROMB) Functions FUNCTC, FUNCTD, 6 (Argument: x) joo aman (Function 6) (x) sin mx hen (Function 6) Subroutine TROMB (Dummy arguments: Naas @ 2, $, Ts jnass i calling arguments: Nyeo —%, %, FUNCTC or FUNCTD, C oF D, jnan 1) (on) Ta SU +50) |_-< [2.01 cy ‘Numerical Integration FORTRAN Implementation (Main) of s aM Max M MMAX Nt NMA NMAXPL Pi MMAXPY Mptust (Functions FUNCTC, FUNCTD. 9 x (Subroutine TROMB) AB F FR FORIMI 4 ' Imax. NRC NxXMJP2 tt List of Principal Variables Program Symbol Definition ‘iatrices C and D, containing the Romberg tableaus for c, and dy, respectively. Column subscript for tableaus, j. Maximum column subscript in Nth row of tableau. ‘ines, Rumider of columns in tableau. im, index on Fourier coefficients ¢,, and dy. nex, MAXIMUM value of m. Row subscript for tableaus, N. Noens Maximum value of N. Now + 1. ings + 1 mei. ‘The variable of integration, x. Lower and upper limits of integration, a and b. ‘The integrand function, f- (o—ay/2®. at a index on repeated sum of (2.53). wed nn, number of rows and columns in tableau T. Nyox = J +2. ‘Matrix containing the Romberg tableau, T { Because of FORTRAN limitations, the row subscripts of the text and flow diagrams are advanced by one when they appear in the program, For example, W assumes valves 1, 2... Nou + 1, so that To 51, 1, Tres = TM +f 15 et Example 22° Fourier Coefficients using Romberg Integration. 
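The analytical results (2.2.8) and (2.2.9) give an independent check on the program whose listing follows: the partial sums of the expansion (2.2.10) should approach g(x) = x at any argument interior to (-π, π). The short sketch below is an illustration only and is not part of the book's listing; the sample argument X and the truncation limit MTERM are assumed.

C     PARTIAL SUMS OF THE SAWTOOTH EXPANSION (2.2.10),
C     G(X) = 2*(SIN(X) - SIN(2*X)/2 + SIN(3*X)/3 - ...), EVALUATED
C     AS AN INDEPENDENT CHECK ON THE COMPUTED COEFFICIENTS.
C     THE SAMPLE ARGUMENT X AND THE TRUNCATION LIMIT MTERM ARE
C     ASSUMED FOR ILLUSTRATION ONLY.
      X = 1.0
      MTERM = 25
      SUM = 0.0
      SGN = 1.0
      DO 1 M = 1, MTERM
         SUM = SUM + SGN*SIN(FLOAT(M)*X)/FLOAT(M)
    1 SGN = -SGN
      SUM = 2.0*SUM
      WRITE (6,200) X, SUM
  200 FORMAT (1H , 2F12.6)
      STOP
      END

Comparing such partial sums with x gives a quick consistency check on the coefficients d(m) produced by the Romberg integrations.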
98 Program Listing Main Program 100 200 201 202 203 APPLIED NUMERICAL METHODS, EXAMPLE 2.2 FOURIER COEFFICIENTS USING ROMBERG INTEGRATION THIS TEST PROGRAM CALLS ON THE SUBROUTINE TROMB TO COMPUTE THE INTEGRALS NECESSARY TO DETERMINE THE COEFFICIENTS OF THE FOURIER EXPANSION FOR A FUNCTION G(X) ON THE INTERVAL (PL, PI) WHERE THE FUNCTION IS PERIODIC FOR ALL X'SUCH THAT G(X) = G(X + 2eKePI), K-BEING AN ANTEGER, THE FIRST MMAX COEFFICIENTS OF THE COSINE AND SINE TERMS (THE CCM) AND O(M) OF THE TEXT) ARE COMPUTED USING THE TRAPEZOIDAL RULE WITH REPEATED INTERVAL HALVING FOLLOWED BY THE ROMBERG EXTRAPOLATION PROCEDURE. THE ROMBERG TABLEAUS FOR C(M) AND O(M) ARE STORED IN THE UPPER TALANGULAR PORTIONS OF THE FIRST NMAX+1 ROWS OF THE FIRST JMAX COLUMNS OF THE C AND D MATRICES RESPECTIVELY. FOURIER COEFFICIENTS FOR ANY ARBITRARY PERIODIC FUNCTION CAN BE FOUND BY DEFINING G(X) APPROPRIATELY (SEE THE FUNCTIONS FUNCTC AND FUNCTD). IMPLICIT. REAL®S(A-H, 0-2) DIMENSION C(20, 20), 6(20,20) EXTERNAL FUNCTC, FUNCTO COMHON DATA PI / 3,1025826535898 / sess READ DATA, CALL TROMB TO COMPUTE INTEGRALS ...., READ’ (5,100) MWAX, NMAX, JMAX WRITE (6,200) MAK, NMAK, UMAX MMAXPL = NMAX «1 DO 3 MPLUSL=1,MMAKPL Ms mplusi = 1 CALL TROMB( NNAX, -PI, PI, FUNCTC, C, MAX, 20 > CALL TROMB( NNAX, -PI; PI, FUNCTD, D, JMAX, 20 ) sees PRINT OUT ROMBERG TABLEAUS ..... WRITE (6,202) NMAXPL.*° MAX 1 | 00-2) NeL,NMAXPL aM = gMAX i TF (N.GT.MAXPLe1-UMAX ) JM = NMAXPL #1 = W WRITE (6,202) (C(N,J), Jet JM) WRITE (8,203) DOS) Nel, NMAXPL aM = MAX TE (N-GT.NMAXPLe1-UMAX ) JM © NMAXPL #1 - WRITE (6,202) (O(N, J), Jed) 60 To 1 ese FORMATS FOR INPUT AND OUTPUT STATEMENTS ....+ FORMATE 7x, 13, 2012x, 139 > FORMAT( BHINWAK @ 4 12/ 8H RMAK = , 12/ OH UMAX =, 12) FORMATC 1HO/ 1H0, 9X, 2HCC,12,1M)/ 1H) FORMAT( 1H, 1P7E17-8 ) FORMAT 1H0/ 1MO,9X,2HD(,12/1H)/ 1M) END Functions FUNCTC, FUNCTD, 6 FUNCTION FUNCTCt X THE FUNCTIONS FUNCTC AND FUNCTD COMPUTE RESPECTIVELY THE INTEGRANO FOR THE M(TH) COEFFICIENT OF THE COSINE AND SINE TERMS OF THE FOURIER EXPANSION OF THE PERIODIC FUNCTION BO) = x. 36 ‘Numerical Integration Program Listing (Continued) IMPLICIT. REAL®B(AH, 0-2) REAL®S x, FUNCTC, FONCTD COMMON DATA PI / 3.1415926535898 / + DEFINE PERIODIC FUNCTION Gtai = x © FUNCTC © G(X) DCOS(FLOAT(M)#X)/PI RETURN c ENTRY FUNCTDC xX > FUNCTD = G(X)*DSINCFLOAT(M) #X)/PP RETURN c ND ‘Subroutine TROMB SUBROUTINE TROMBC NMAX, A, B, F, T, JMAX, NRC ) ¢ c THE SUBROUTINE TROMB FIRST APPROXIMATES THE INTEGRAL OF c FOX)40X ON THE INTERVAL (A,B) USING THE TRAPEZOIDAL c RULE WITH REPEATED INTERVAL HALVING. T(Ne1,1) 1S THE VALUE ¢ OF THE INTEGRAL COMPUTED AFTER THE N(TH) INTERVAL-HALVING c OPERATION. ALL T(N¢L,1) VALUES ARE COMPUTED FOR 'N 6 0 TO ¢ N= NMAX." HIS THE LENGTH OF TNE STARTING INTERVAL (A,B). ¢ REMAINING ELEMENTS OF THE ROMBERG TABLEAD ARE THEN ENTERED ¢ INTO THE FIRST JMAX COLUMNS OF THE FIRST NMAX+1 ROWS OF THE € MATRIX T. c IMPLICIT REAL#E(A=H, 0-2) REALE A,B, Fy T DIMENSION’ TCHRC, NRC) c € COMPUTE H AND FIRST INTEGRAL APPROXIMATION TOLL) = CFCAD + F(BD)6H/2,0 c c WALVE INTERVAL REPEATEDLY, COMPUTE T(N61,1) «+06 Ned, AMAX TIN#L,1) ='040 FR = H/2. 000i IMAX = 200 = 1 DOL ts a tMaK,2 2 TEN2,1) & TENeL,1) + FCFLOATCIDSER # AD 2 TONFLSTD @ TEN15 7/200 + HOTEL, 19/2, 0008 ¢ ¢ COMPUTE ROMBERG TABLEAU... 50°5" “Jeo, umax NIMJPE = NHAX = Jo 2 FORJM1 = 4, 0e0(J~1) DO 3 Nel, NxMJP?. 
3 TON) = CFORIMLeTONeL SeL) = TEN, J=1)9/(FORIML = 1.0) RETURN e eNO Data WMAX © 1000 -MMAX® 13 OMAK © 7 97 Example 2.2 Fourier Coefficients using Romberg Integration 00 angsseess't 00 anooono00"z 00 dtseeeese:T 90 a00000000"z 00 Go0000000"2 90 ax vesEEsE"T 00 aooono000"z 90 900000000"z 90 goooo00007z 00 aceeesssE"T 90 aoooo0000"z 08 900090000"Z 00 000000000"z 90 dooonn000"2 00 ad rLESBE"T 50 Gooooe000"e 00 400000000"z 00 Go0000000"z 00 GooO00000"2 90 aD seER55E"T 90 aoo00000) 89 G00020000"% 90 Go0000000"< 99 GoD000000"Z 00 s, and dyo to conserve space. All calcu- m ty a lations were performed in double-precision arithmetic. (Calculated) (Exact) The results shown for co are typical of those found for all the Cqy m= 0, 1, ...5 10. All entries in the C matrices are in the range 10" to 5 x 107", giving exceptionally good agreement with the true values, cy, = 0. Assuming that 0(8,7), corresponding to T;,,, is the most accurate approximation to dy, the best estimates found by the ae ‘program are shown in Table 2.2.1. In every case, there on is at least nine-figure agreement (the number of figures 1 printed) between the results of the Romberg integration 28 and the exact value of the integral. -us 100 Numerical Integration 2.8 Numerical Integration with Unequally Spaced Base Points All the integration formulas developed in the preceding. sections are of the form given by (2.22) [roars Emseed, where the n+ 1 values w, are the weights to be given to the n+ | functional values f(x,). The x, have been speci- fied to be equally spaced so there is no choice in the selection of the base points. If the x, are not so fixed and if we place no other restrictions on them, it follows that, there are 2n+2 undetermined parameters (the w, and x), which apparently might suffice to define a polynomial of degree 2n + 1. The Gaussian quadrature formulas to bbe developed in Section 2.10 have a form identical with (2.22), that is, they involve the weighted sum of n+ 1 functional values. The x, values to be used are not evenly spaced, however, but are chosen so that the sum of the n+ | appropriately weighted functional values in (2.22) yields the integral exactly when f(x) is a polynomial of degree 2n + 1 or less. Before proceeding with the develop- ment, some background material on orthogonal poly- nomials is required. 29. Orthogonal Polynomials Two functions g,(x) and gq(x) selected from a family of related functions g,(x) are said to be orthogonal with respect to a weighting function w(x) on the intervat (a,6] if Jocadlsdan(s) dx =0, nm, . (2.63) . JwCotaCoF dx = ela # 0. In general, ¢ depends on n. If these relationships hold for all n, the family of functions {94(x)} constitutes a set of orthogonal functions. Some common families of orth- ogonal functions are the sets {sin kx} and {cos kx}. Orthogonatity can be viewed as a generalization of the perpendicularity property for two vectors inn dimen- sional space where n becomes very large and the elements, (coordinates) of the vectors can be represented as con- tinuous functions of some indepenctent variable (see (1] for an interesting discussion and geomettic interpreta- tion), For our purposes, the definition (2.63) is adequate. ‘The functions 1, x, x3, x°,...,x" are not orthogonal. ‘However, several families of well-known polynomials do possess a property of orthogonality. Four such sets are the Legendre, Laguetre, Chebyshev, and Hermite poly- nomial. Legendre Polynomials: P,(x). 
The Legendre poly- nomials are orthogonal on’ the interval [—1,1] with respect to the weighting function w(x) = 1, that is, [rscor,oo dx=0, ném, (2.64) I \ (P,G)} dx = e(n) #0. The first few Legendre polynomials are: Polx) = 1, Pix) =x, P2(x) = 4Gx? — 1), Pa(x) = 45x? ~ 32), Pax) = 435x* ~ 30x? + 3), (2.65) The general recursion relation is P(x) = Py 1() — = Pre xf). (2.66) Laguerre Polynomials: ,(x). The Laguerte poly- nomials are orthogonal on the interval [0,00] with respect to the weighting function w(x) = e~*, that is, [e*e peux) dx =O, nxm, vo (2.67) f eT L(x}? dx = on) £0. (2.68) The general recursion relation is LA3) = On = x = NL,4) = (0 IL,-202), (2.69) Chebyshev Polynomials: 7,(x). The Chebyshev poly- nomials, already described in some detail in Chapter 1, are orthogonal on the interval {—1,1] with respect to the weighting function w(x) = 1/V1—27, that is, FAyTAx) dx =0, nem, (2.70) [7,09] dx = e(n) #0. ‘The first few polynomials (see Table 1.12 for a more complete list) are Tox) = 1, Tx) TQ) = Ty) Qn) 210 Gaussian Quadrature 101 ‘The general recursion relation is T(x) = 2x, (2) — Tra Hermite Polynomials: H,(x). The Hermite polynomials are orthogonal on the interval [— 00,00] with respect to the weighting function e-*, that is, (2.72) [0 2? HGOHG) dx =0, men, o° (2.73) [CoP oor dx = 0) 0. ‘The first few Hermite polynomials are: Ho(x) = 1, H,®) = 2x, HAG) wae —2, (274) Hy(2) =x? ~ 12. ‘The general recursion relation is H,(3) = 2xH,-(2) ~ 20 = )Hy-2(0). 2.73) General Comments on Orthogonal Polynomials. The sequences of polynomials (P,(x)}, (.(x)}, {7,(x)}, and {H{x)), respectively satisfying relationships (2.64), (2.67), (2.70), and (2.73), are unique. Each of the polynomials P(x), L(x), TAx), and Hx) is an nth-degree polyno- rial in x with real coefficients and n distinct real roots interior to the appropriate interval of integrati example, all n roots of P(x) lie on the open ii (-LD. Stroud and Secrest {7] discuss these and other properties of several families of orthogonal polynomials in detail. An arbitrary mth-degree polynomial p,(x) = Diao ax! may be represented by a linear function of any of the above families of orthogonal polynomials. Thus Pal) = BoZolx) + ByZa(x) + -*° + BZ(X) § fizks, (2.76) where Z;{x) is the ith-degree polynomial of one of the families of orthogonal polynomials. Expansions for the monomials x! in terms of the Chebyshev polynomials are given in Table 1.13, Example. Expand the fourth-degree polynomial p.(x) in ‘terms of the Legendre polynomials. Substitution of the Polynomials P(x) of (2.65) into (2.76) leads to pals) = Pot Bc Pa - Ja vis) (35 Is 3) tn flae Head Colestng the coefceats of ike powers of, 1,3 3 ras) (Bo—} 643A) + (es—30,)x +Ga ~Saler} +f 4B paw Ande, Antler Za) F(as$ peneitensin Boma th ~paerwtjaty ae. ‘The reader should verify that the polynomial pe(x) — Pt de— De 4 2x is equivalent to 2 pay +2 PA) 10) = — 2 ry 3S PW 16 6 8 = pp Pala) + PAG) + 35 Pals, 2.10 Ganaslan Quadrature Gausa-Legendre Quadrature, As before, we estimate the value ofthe integral f°/(x) dx by approximating the function f(x) with an nth-degree interpolating poly- nomial p,(x), and integrate as follows: » » ‘ [s@)ax= [playde+ [ROax, 277) Here, Rx) is the ettor term for the nth-degree inter- polating polynomial. Since the x; are not yet specified, the Lagrangian form of the interpolating polynoinial, (1.43), which permits arbitrarily spaced base points, wil be used with its error term, (1.39). 
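The two-point case shows the mechanics with a minimum of machinery: the roots of the second-degree Legendre polynomial are ±1/√3, both weight factors are unity, and the change of variable from (-1,1) to (a,b) uses c = (b - a)/2 and d = (b + a)/2 as above. The sketch below is an illustration only, separate from the function GAUSS of Example 2.3; the limits A and B and the integrand FX are assumed.

C     TWO-POINT GAUSS-LEGENDRE SKETCH.  Z IS THE POSITIVE ROOT OF
C     THE SECOND-DEGREE LEGENDRE POLYNOMIAL, THE WEIGHTS ARE BOTH
C     UNITY, AND C AND D EFFECT THE CHANGE OF VARIABLE FROM (-1,1)
C     TO (A,B).  THE LIMITS A, B AND THE INTEGRAND FX ARE ASSUMED
C     FOR ILLUSTRATION ONLY.
      A = 1.0
      B = 2.0
      Z = 0.5773503
      C = (B - A)/2.0
      D = (B + A)/2.0
      AREA = C*(FX(Z*C + D) + FX(-Z*C + D))
      WRITE (6,200) AREA
  200 FORMAT (1H , F12.6)
      STOP
      END

      FUNCTION FX( X )
C     ASSUMED TEST INTEGRAND
      FX = 1.0/X
      RETURN
      END

With FX(X) = 1.0/X and (A,B) = (1,2) the printed value is 0.692308, in agreement with the two-point entry in the computer output of Example 2.3 and already close to ln 2 = 0.693147.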
SCR) = Pal) + Ral) = Zure+ [fJoe-v] E98 ler act ' To find the elements of the z and w vectors for the m= point formula, the p array is searched for an element equal to m, ie. for py =m. The desired roots and weight FACtOrs ATE Zp -su5 Ztzg yi» ANG My, -- Wh, yy =a» FESPEct= ively. For example, the proper constants for the 5-point formula can be found by scanning the p vector and noting that pg = 5. Then k, = 6and ks = 9. The desired elements of the z and w vectors are Z5, 2, 75 and Wo, Wr, Wy respectively. Let c = (b — a)/2and d = (6 + a)/2. Then (2.98) may be rewritten as [eax = oS wiser) + + f(-er) +0), @32) for even values of m. For odd values of m, (2.3.2) also applies except when j = k,. In this case z, has 2 zero value and does not occur in a root pair; the factor w,f(d) should be added just once ¢o the accumulated sum. Inthe program that follows, the function GAUSS checks to insure that m has been assigned one of the legitimate values 2, 3, 4,5, 6,10, or 15. If not, GAUSS returns a true zero as its value. ‘A short calling program is included to test GAUSS. It reads values for a, b, and m, calls on GAUSS to compute the required integral, and prints the results. The function, ‘f(x is defined as a function named FUNCTN. In this example, f(x) = f/x, ie, the integral to be evaluated is r dx When a= 1, the results can be compared directly with tabulated values of In 6. Inb— Ina. 23.3) Example 23 Gaus Legendre Quadrature Flow Diagram Main Program Evaluate d= f° 02) ds sing the m-peint Gauss-Legendre quadrature formula (Function Gauss) Function FUNCTN (Dummy argument: x) Function GAUSS* (Dummy arguments: a, b, m, f; calling arguments: a, b, m, FUNCTN) Ses4 w{Se +a) +f(-ez, +d) (Function FUNCTN) sest wf (Funetion FUNCTN) "The vectors p, k, w, and z are assumed to have appropriate values upon entry (see text). 107 108 ‘Numerical Integration FORTRAN Implementation List of Principal Variables Program Symbol (Main) AB AREA “ (Function GAUSS) J JFIRST Juast Key NFOINT SUM WEIGHT z (Funetion FUNCTN) x Defii Integration limits, a and 6. Computed value of integral (2.3.1), a. Number of points in the Gauss-Legendre quadrature formula, m. c=(~a)/2. b+ a2. Subscript for vectors p and k, i. Index on the repeated sum of (2.3.2). Initial value of j, ki. Final value of j, kya — 1. Vector k. Vector p. Repeated sum of (2.3.2). Vector of weight factors, wy. Vector of Legendre polynomial roots, z, Integration variable, x. Example 2.3 Gauss-Legendre Quadrature Program Listing Main Program APPLIED NUMERICAL METHODS, EXAMPLE 2.3 ‘GAUSS"LEGENORE QUADRATURE THIS CALLING PROGRAM READS VALUES FOR A, B, ANO M, CALLS ON THE FUNCTION GAUSS TO COMPUTE THE NUMERICAL APPROXIMATION OF THE INTEGRAL OF FUNCTN(X)«DX BETWEEN INTEGRATION LIMITS A.AND BUSING THE M POINT GAUSS-LEGENDRE QUADRATURE FORMULA, PRINTS THE RESULTS, AND RETURNS TO READ A NEW SET OF DATA VALUES, IMPLICIT REAL#@(A-H, 0-2) EXTERNAL FUNCTN WRITE (6,200) READ (5,100) A, 8, M AREA = GAUSS CA, 8, M, FUNCTN ) WRITE (6,201), 8) My AREA 60 TO 1 «FORMATS FOR INPUT AND OUTPUT STATEMENTS ..... FORMAT Coax, FB.4, 10X, FB.4, 10K, 13) FORMAT ( Hi, 10x,"3HA,”14X, IMB, fox, IHM, 9X, SHAREA / 1H) FORMAT (1H; 215.6, 17, F15.6°) ENO Function FUNCTN FUNCTION FUNCTNG x 1 sees THIS FUNCTION RETURNS 1/X AS ITS VALUE ss. REALts x, FUNCTW FUNCTN = 1.0/x RETURN END Function Gauss FUNCTION GAUSS( A, 8, M, FUNCTN ) THE FUNCTION GAUSS USES THE N-POLNT GAUSS-LEGENDRE QUADRATURE FORMULA TO COMPUTE THE INTEGRAL OF FUNCTN(A)*DX BETWEEN INTEGRATION LIMITS A AND 8. 
THE ROOTS OF SEVEN LEGENDRE POLYNOMIALS AND THE WEIGHT FACTORS FOR THE CORRESPOND! NG QUADRATURES ARE STORED IN THE Z AND WEIGHT ARRAYS RESPECTIVELY. MAY ASSUME VALUES. 2,5,4,5,6,10, AND 15 ONLY. THE APPROPRIATE VALUES FOR THE M-POINT FORMULA ARE LOCATED IN ELEMENTS ZUREY(31),.qZ(KEY(I*1)=1) AND WEIGHT(KEY(T)) «2 .WEIGHT(KEY(IS1)~1) WHERE THE PROPER VALUE OF 1°15 DETERMINED BY FINDING THE SUBSCRIPT OF THE ELEMENT OF THE ARRAY NPOINT WHICH HAS THE VALUE MN, IF AN INVALID VALUE OF MIS USED, A TRUE ZERO IS RETURNED AS THE VALUE OF GAUSS. IMPLICIT REAL#8CAcH, 0-2) REAL®S GAUSS, A, 8," FUNCTN DIMENSION NPOINT(7), KEY(8), Z(24), WEIGHT(24) 109 n0 [Numerical Integration Program Listing (Continued) c sess. PRESET NPOINT, KEY, Z, AND WEIGHT ARRAYS . BATA’ pornt 72, 3, 4, 5, 6, 10, 157 c DATA KEY / 1, 2, 4, 6, 9, 12, 17, 25 / c DATA 7 1 0,577350268,0.0 10,774596668, 1 0,339981084,0,261135312,0.0 102538469310, 2 07906179846, 0.238619186-9.661209387,.0.932469514, 3 07148874339, 0433395394, 0.67940056%,0,865063367, ‘ 01973906529,0,0, $0, 5 0570972173, 0. 724417732, 0, 848206583,0.937273352, 6 01987992518// c DATA WEIGHT 11.0 0,888888889,0,555555556, 1 0,652145155, 0, 347854845, 0,568888889,0,478628671, 2 0,256926885,0.467913935,0.360761573,0.171326493, 3 0:295524225,0.269266719, 0.219086363,0.149451549, 4 0,066671344,0,202578202,0,198431485,0, 186161000, 5 0,166269206, 0139570678, 0.107158221,0.070366047, 6 0030753282°/ c © sssae FIND SUBSCRIPT OF FIRST Z AND WEIGHT VALUE 601" 11,7 1 (M.EQ.NBOINTCI)) GO TO 2 2 conTINvE © canny INVALID MUSED v0. Gabss = 0.0 RETURN’ c © seeee SET UP INITIAL PARAMETERS 5.044 2 OFiRST = kevci) ULAST = KEYCI+1) = 2 C= (B-4)/2.0 D = (Bea)/220, ¢ © geass ACOUMULATE THE SUM IN THE M-POINT FORMULA ... 4. Sih's Oo DO's sauFiRST,JLAST IF ( Z(3),£Q,0-0 ) SUM » SUM + WEIGHTLoD*FUNCTHO) SIF ( 20).NE2020 9 SUM = SUM + HEIGHT(UD*(FUNCTNLZ( S140 + DD. 1 + FUNCINC-ZéyeC + 0D) c ¢ = MAKE INTERVAL CORRECTION AND RETURN... GAiss = cesum RETURN c END Data As 1.0000 8 Meg a= 10000 5 we As iyes00 5 wey A= 20000 2 wes As 20000 8 Me 6 a= Yoooo Ms 0 A= Loo00 Me 15 A= 10000 Me 2 As 10000 8 Me 5 A= 1.0000 8 Mei R= 10000 8 Mes A= Lo000 8 Me 6 A= Lego 8 Me 19 A= 1oo00 8 Me 15 A= 10000 6 Me 2 a» Loo00 8 ae A» Yoo0o 8 = 10:0000 wee Example 2.3 Gauss-Legendre Quadrature m Program Listing (Continued) A= 10000 © B= 10.0000 w= 5 As 10000 = B= 1010000 6 &= Loo = 8 + io!o000 | =a a= 1o000 = 10:0000 = a5 As Lo000 8 = 20.0000 ws 2 A= Lagoa = B= 20/0000 = 5 A= 10000 = B= 2b.0000 ks k A> ioo0o 8S gor0000 ks A= io00o = B= 3oc000 hs A= 10000 © 8 > 20,0000 =H = 10 A= 10000 = B= 20,0000 =H = 15 a= ioo00 = 20/0000 w= i A= 10000 8 > 20.0000 m= 8 As 10000 = 8 = 20loms | Ka? Computer Output A 8 " AREA 3.000000 2.000000 2 0.692308 3000000 2000000 3 02693122 1000000 20000008 0.693105, 1000000, ziogoo90 «= s 0.693107 1000000, 2000000. 02695147 1000000 2:000000 10 0.693147 vivo Piseoaoo = 15, oLeasiu7 1000000 51000000 2 3.565217, 1000000 5:000000 5 1602656 1,000000, 5lo00000 32608430 1000000, 5:oo000 5 1609283 1000000 Sio0000 1609016 1000000, 5.000000 10, 2609038 x.p09080 Sioq0000 is. 
1609K58 1000000 aolooo000 “2 21108383 Yao00¢0 19000000, 5 2: 246610 1000000 © 10700000 2.286970 3000000 © 1070000005 21298283 1000000 10000000 g 2:so1s08 2000000 4000000010 21302579 Yragagoo == 49000000 is 21502585 1eg0000 = 2000000" Tanesss 1000000 20,e00000 5 22779872 1ooocoo 20000000 205192 Looocoo 2070000005 2igsaz21 1ooov00 20000000 & 21980328 1ooovo0 2000000010 21995311 100co00 200000005 21995728 2000000 = 20;000000 a ae 1000070 © 207000000, 0:0 10000 20-0000 * 47 0:0 42 Numerical Integration Discussion of Results The program has been tested for the integrand func- ion (8) = I/x with integration limits a= 1 and 5 = 2, 5, 10, 20 leading to the true integrals In 2, In 5, In 10, and In 20 respectively (see Table 2.3.2). Each integral has been evaluated numerically using the 2, 3,4, 5, 6, 10, and 15- point Gauss-Legendre quadrature formulas. {n terms of the transformed variable 2, —1 <2 <1 (see (2.95)), the integrand function is given by 2 FQ) = Gaye hs 2.3.4 x (2.3.4) Then, from (2.99), the error for the m-point quadrature is 22= 4 mdb" Qm + N(QmP eb — + 6 + -1 1, the maximum truncation errors for the computed logarithms are given by 2(m')*(b — 1)" \Enmex(bsma)| = Gm + Demi” 1E(2)) bei. Q36) The error bound increases with increasing b, ie., as the length of the integration interval increases, and decreases With inereasing m, the number of points in the quadra- ture formula. The computed approximations for In 2, In 5, In 10, and In 20 show similar trends for the actual error, Table 22.2 True Integral Values Data Seis True Integral “er 0.693147 eld 1.609438 15.21 2.302585 22-28 2.995732 29.31 egal data values An alternative, though computationally less efficient, approach to integration by Gauss-Legendre quadrature is illustrated in example 3.4. Recursion relation (2.66) is used to generate the coefficients of the appropriate Legendre polynomial, and the base points z,are found by using the half-interval root-finding: method of Section 3.8. The corresponding weight factors w, are generated by evaluating the integral of (2.85), which can be deter- mined analytically. With only minor modifications, GAUSS could bechang- ed to allow evaluation of Gauss-Legendre quadrature formulas of the composite type. 2.10 Gaussian Quadrature 13 Gauss-Laguerre Quadrature. The Laguerre polynomials of (2.68) can be used to generate a Gaussian quadrature formula to evaluate integrals of the form J e*F(2) de = Y mie), The derivation of the integration formula of (2.100), which is known as the Gauss-Laguerre quadrature, is very ‘similar to that for the Gauss-Legendre quadrature of the preceding section. As before, the Lagrange form of the interpolating polynomial (1.43) with its error term (1.39) is used to approximate the function F(), since the z, are as yet unspecified. Thus (2.100) FRE Gen O<é c c THE FUNCTION FUNCTN COMPUTES THE INTEGRAND FOR THE QUADRATURE c FUNCTIONS GAUSS AND CHERY. C, D, YPLUSM AND PHI ARE DESCRIBED © UC THE PROBLEM STATEMENT. YP{ 15 THE DIMENSIONLESS DISTANCE ¢ FROM THE TUBE WALL. IMPLICIT. REAL@a(AsH, 0-2) REAL®8 FUNCTN, YPL. MON PH vPCUSK Ds 2. + yPL/yeLUsH © = 0.3640, 38e¥PLeYPLe( 1. -DEXP(~PHI*YPL/YPLUSM) #42 FUNCTN = 2.00/(1, *DSQRT(1, + 4. #C*D)) ¢ | RETAN ew Function CHEBY FUNCTION CHEBY( A, 8, M, FD c ¢ THE FUNCTION CHEBY COMPUTES THE VALUE OF THE INTEGRAL OF c FOX*DX BETWEEN THE INTEGRATION LINITS A AND BUSING THE c N=POINT GAUSS=CHESYSHEV QUADRATURE FORMULA. 
ZI IS THE 1-TH ROOT ¢ OF THE CHEBYSHEV POLYNOMIAL OF DEGREE M ON THE INTERVAL c (21,1) AND XI_IS THE VALUE OF 21 TRANSFORMED To THE INTERVAL c (4,8), SUM Is THE Sum OF THE FUNCTIONAL VALUES AT THE M c VALUES OF X1 CORRECTED FOR THE WEIGHTING FUNCTION. THE ¢ UNIFORM WEIGHT FACTOR 15 THEN APPLIED AND THE APPROXIMATED c VALUE OF THE INTEGRAL 15 RETURNED AS THE VALUE OF THE FUNCTION. c IMPLICIT REAL*R(A-H, 0-2) REAL®B CHEBY, A, 8, F c SUM = 0, ool eam, Zi = DCOS(FLOAT(2*(1=1) + 1)#3,1415927/ FLOAT (249) XI = (Z1s(B-A) +B AN/2. } 1 SUM = SUM + F(xIp@DSQRT(1--Z1¥Z1) CHEBY = (B-A) 5, 1415927 450M/FLOAT( 20M) RETURN ‘ END Data nye = a2 YPLUS(2)..,¥PLUS(6) 1,000 2.000 $,000 10,000 20.000, TypLustaay $0,000 100.a0a_ 200.000 $00:000 1000-000 RE =" '* 500,000 4 7 2 RE = 30007000 ” su RE = ~—-$000:000 4 6 RE = ~—-§000,000 4 = 10 RE == 5000000 ” re RE =~ 10900.000 4 = 3 RE =~ 10000,000 4 4 RE = 1000-000 4 = RE =~ 10000000 4 = 10 RE = ~—10000,000 » 7 RE = 25000000 " = 2 RE = 25000,000 4 = RE 250.000 4 6 RE = 25000;000 4 + 10 RE = 2500-000 » tas 2 ‘Numerical Integration Program Listing (Continued) Re = $000,000 Hoof 2 RE = 30000;000 moos 8 Re = $0000;000 Hoos 8 RE + ‘$ogod.000 re a = 50000;000 Hof as Computer Output Results for the 11th Data Set Re = 2500.0 Yeuusm = "700159 a 2 VELOCITY DISTRIBUTION USING GAUSS~LEGENORE QUADRATURE ypLus upLus bupwus 0.9 9992603. 0.999243 9958208 0,9965765 8775186 817953 [8923031 046865 a1 sagua 7a 121713 1571074984 130202 16,9996247 9921763 2817000495 7008247 20:5245821 18241326 VELOCITY DISTRIBUTION USING GAUSS-CHEBYSHEV QUADRATURE vypLus upwus bupus 8 2.0 1.00 111098685 2,1098645 2100 212166574 111067930 5:00 524059707 311893133 10,00 4129022 20069715 20,00 1313053378 318923955 50,00 © 27.9892826 3. 78Soue5 100/00 192499871 211607005 200.00 © 21,1825355 1,9525484 500.00 2312999978 2ianne22 Results yor the 12th Data Set RE = 5000.0 YPLUSM = 700.59 N " VELOCITY OISTRIBUTION USING GAUSS-LEGENORE QUADRATURE yewus vupwus oupwus 0.0 0.0 1.00 019992432 249982432 2100 119958190 0.996758 5:00 418779561 228821371 10:00 316026715 20,00 © 11, 8947849 3iu1is72 50.00 15.1339881 312391632 100200 1710308282 118966302 200,00 187343299 1:7037017 500.00 -20,.5672100 18320801 Example 24 Velocity Distribution using Gaussian Quadrature Computer Output (Continued) VELOCITY DISTRIBUTION USING GAUSS-CHEEYSHEV QUADRATURE yews vuptus bupLus 0.0 0.0 op 110253935 1.0253935, 2200 210030377, 150226442 5:00 510043534 219563157 817014534 516971001 32!2166106 15151572 1525693088 313526973, 1715230008 319536919 0l00 192766312 317536309 500.00 2371692723 alas26011 Results for the 13th Data Set RE = 2500.0. YPLUSH = 790.59 " 6 VELOCITY DISTRIBUTION USING GAUSS-LEGENORE QUADRATURE yews upwus oup.us do 0,0 1,00 p,39azu32 0.999432 2:00 119958190 0,9965758 5100 12779561 208821372 10:00 314806278 316026716 20,00 © 11/8987777 324101500 50:00 1511340019 3l2ss22u2 300.00 1770306880 15896686], 200.00 © 1817343936 157037056, 500.00 20.5632892 318325056 VELOCITY DISTRIBUTION USING GAUSS-CHEBYSHEY QUADRATURE vPLus upwus bupius 2.0 0.0 1,00 Moov: 1.0107488 2100 2.018747 10080459 5.00 l93sei7a 2i9n4a22y 10:00 815778605 516442275 20,00 © 12.0359825 34580978 50,00 1515233605, 3i2a7a79 100700 17.2887375 119213768 200.09 189701337 157253964 50:00 © -20,a286561 ilesasz2y Results for the 14th Data Set RE = 2500.0 YPLUSM = 700159 ” . 
10 14 ‘Numerical Integration Compater Output (Continued) VELOCITY DISTRIBUTION USING GAUSS-LEGENORE QUADRATURE rews pws ouptus 0.0 0.0 1100 O:ssgze32 0.999432 2:00 129958190 019965758 5100 14779561 228821372 10:00 814306278 316026736 20:00 © 11.8987777 574101500 50,00 15.4340019 Si2sea2ez 00:00 —17.9308880 333966861 200.00 18. 7343936 317037056 500,00 20.5672995 214529087 VELOC? TY O1STRIBUTION USING GAUSS-CHEBYSNEV QUADRATURE vyeLus uptus pupwus 0.9 10033640 10033640 22o0k0873 \lag7esgy SI5150558 1118452665 1512015168 17,1069899 13,83ge741 500:00 © 20,6604113, 358419872 Results for the 5th, Data Set RE 5000.0 YPLUSM 171.36 M : 15 VELOEITY DISTRIBUTION USING GAUSS-LEGENORE QUADRATURE ypLus vpLvs bupwus oo 0.0 1,00 0.9970600 0.970600 2100 119876531 9,9905930 5100 ala710430 10.00 817551440 3.agh1010 20°00 © 1217210159 319658719 50,00 16.2839475 3. 5529516 00:00 © -17,8935651 1,6396356 VELOCITY DISTRIBUTION USING GAUSS-CHEBYSHEY QUADRATURE vypLus urwus. bupLus 0.0 0:sagsse6 0.9pssans 912901 9.392405 228436212 318911527 9738043 50,00 5425640 100700 17.9293563 1.6451400 Example 24 Velocity Disribuion using Gaussian Quadrature Computer Ontpot (Continued) Results for the 10th Data Set RE =~ 10000.0 YeLUSM = 314.25 ” : 15 VELOCITY OLSTRIBUTLGW USING GAUSS-LEGENDRE QUADRATURE plus vupwus pupius 2.9 2.9. 1,00 0:9985751 0,9983751, 2/00 119925962 0.998219} 5,00 424764971 218839023 10:00 BLSESBIDY 317083135 20:00 © 2,1a47202 31s9ss098 50:00 © 15,5076555, 313229355, 100.00 © 17: 3040266 357963711 200.00 © 18,7074835 158030573 VELOCITY DISTRIBUTION USING GAUS$~CHEBYSHEV QUADRATURE yews vuptus oupuus 0.0 0.0 3.00 Lloaazeza 1.002020 2100 119962400 0,3960379, 5:00 13853570 218891170 10:00 316014337 317160767 20.00 y.e061339 50.00 513307388 100.00 4.800108 200-00 326053356 Results for the 15th Data Set RE = 2500.0 YeLUsM = 700,58 " 5 15 VELOCITY DISTRIBUTION USING GAUSS-LEGENDRE QUADRATURE yPLus uptus bupus 0.0. 019992432 0,9992432 Lissseiaa olegas75a 418779561 2.821372 814806278 316026716 3218987777 3.414150 3513580029 Siassataa 170306880 113966861 18/7343935 i703 7056 500.00 20,5672993, 118329057 VELOCITY DISTRIBUTION USING GAUSS-CHEBYSHEV QUADRATURE yews vets oupwus 0.0 0.0 100 1:0010737 1,0010737 Tos. 
yisaaura% 09983987 5:00 418868067 218873362 10:00 524960833 316092786 20:00 © 11.9171720 314210887 50,00 1511633220, S.ak67438 100700 © -17;0645000 1:3005780, 200700 © 18:7716300 1,7071299, 500-00 206085537 128369237 126 Numerical Integration Computer Output (Continued) Results for the 20th Data Set RE = $0000.90 yptusm = 1284.89 " 5 VELOCITY DISTRISUTION USING GAUSS-LEGENORE QUADRATURE vyptus upwus buptus 2.0 2.0 1:00 O139956u0 0,9995540 220 119970033 019974393 5:00 wre7eni94 228811161 10.00 8L4u32825 315653632 20:00 © 11:7980045 313567220 50.00 15.0140866 312160820 100,00 © 16.9474237 319333371 200,00 © 18.7487224 118012987 500.09 0.932136, 21 1g3uin3 1000.00 © 2211927505 312603336 VELOCITY DISTRIBUTION USING GAUSS~CHEBYSHEV QUADRATURE yPuus uptus bupus 0.0 0.0 100 110013931 1,0013932 2500 210006568 0;9992637 5.00 \Ise69659 218863001 10:00 314586789 515717123 20,00 © 118202273 513615491 50.00 15.0u37528, $.2235209 10:09 16.9810396 119372868 200.00 1817859387 1!s0u8992 500.00 20.9740274 211880887 1000;00 = -22.2367209 117626835 Example 24 Velocity Distribution using Goussian Quadrature 127 Discussion of Results ‘Complete computer output is shown for data sets 11, 12, 13, 14, and 15 to illustrate the influence of the number of points in the quadrature formula on the results for Npe= 25000, and for data sets 10, 15, 20, and 25 to show results for the 1S-point quadrature formulas (probably the most accurate of those used} for Ng, = 5000, 10000, 25000, and 50000. All computations were done using for y* = $00 and Nae = 25000 1m (Gauss-Legendre _(Gauss-Chebyshev Quadrature) Quadrature) 2 205245821 23.2999976 4 205672100 24.1682723 6 20.5672992 20.8286561, 10 205672993 20,6608113 15 20.5672993 20.6088537 ‘The results for the Gauss-Legendre quadratures are nearly constant, indicating that the integrand function of (2.4.1) can probably be represented adequately by polynomials of low degree, at least for the short intervals of integra- tion weed. Since 2d Jim weno Lt JT + bed and since f(y") <1 for y* >0, any result that gives lim f(y") u* > y* is clearly in error (see, for example, the results for small y* values for the 2-point Gauss-Chebyshev quadrature). It appears that the Gauss-Legendre quad- ratures yield better values for the integrals, although the results for all Ng, using the 15-point quadrature formulas are quite consistent (in no case differing by more than 0.25 percent). The results for the 15-point quadratures with Ngy = 50000 are shown in Fig. 2.4.1. They agree in every case with the plotted values reported by Gill and Scher. Unfortunately, the authors failed to indicate how they evaluated the integral, and did not report the results in tabular form. Thus it was possible to check only the most significant digits of the results. ii 4 > Dimensiontess ava velocity — ut 10 100 To00 Dimensionless distance fom tube wall = y* Figure 24.1 Dimensionless velocity distribution in smooth ctcalar tubes—Nae = 50000. 128 ‘Numerical Integration 2.11 Numerical Differentiation Having described numerical integration in some detail, a few words about numerical differentiation are in order. ‘The differentiation problem involves the evaluation of ago) dx ‘at some arbitrary x, given only a few sample values of fie) at the base points x, +5 Hy The problem seems intuitively no more difficult than the integration problem of (2.1). 
The obvious solution is to find a suitable approxi- mation of f(x), say g(x), which is simple to differentizee and evaluate, that is, f(x) , dalx) dx ~ dx ; 2.118) The usual choice for the approximating function is p,(x), the nth-degree interpolating polynomial which passes through the points (Xo, S(%o)), (pf UI +5 SSO). ‘Thea (2.118) ig given by Af), dps) ie ae (2.139) Provided that f(x) is m times differentiable, higher-order derivatives may be approximated by evaluating the higher- order derivatives of (2) a <1, alo) _ ax) ax dx (2.120) Unfortunately, considerable care is required if serious errors are to be avoided. The inherent difficulty with this approach is that differentiation tends to magnify small discrepancies or errors in the approximating function Lust as the integration process tends to damp or smooth them out). Figure 2.9 shows the situation, fs ° ° Figure 2.9 Differentiation ofthe interpolating polynomial. Note that if the polynomial approximetion p,(x) is ox) dx may be an reasonably good, the integral [ excellent approximation to f(x) dx. On the other hand, dp,{x){dx, which is simply the slope of the line tangent to pax), may vary significantly in magnitude from df(x)/dx, ‘even at the base points, where p,(x) and f(x) agree exactly ; the sign of the derivative might even be in error. Higher- order differentiation tends to magnify these discrepancies still further. Admittedly, the deviations may be exagger- ated in the figure, but the point is clear. Numerical dif- ferentiation is an inherently less accurate process than numerical integration Because of this tendency toward error, numerical dif- ferentiation should be avoided wherever possible. This is Particularly true when the f(x;) values are themselves subject to some error, as they would probably be if determined experimentally {engineers and scientists, in fact, often use differentiation tests on laboratory data as an indication of experimental precision). If derivative values must be computed in such cases, particularly when the results are (o be used in subsequent calculations, it is usually better to use one of the least-squares polynomials (see Chapter 8) to smooth the data before differentiating them, in spite of the problems associated with the differentia. tion process, the approximation of (2.120) is often used. Then LG) = px) + Ral), (2.121) where R,(x) is the error associated with the nth-degree interpolating polynomial for which p,(x,) = f(x). We can then write Af) _ Aral), ARC) is at de (2.122) Any of the severaf formulations of the interpolating polynomial from Chapter 1 may be used for p,(x). To illustrate, assume that the base points x9, x), ...,%, are evenly spaced by intervals of length 4, so that p,(x) may de written as Pax) =f(X0) + (x — Xo) ‘Then ds) , Afla) Fo) LO AIG) 5 2x — xp — xy ALGO + [3x = 2x9 +x, +x) $Gvons + note + xs ep) LG (2.123) 211 Numerical Differentiation Now we must choose an appropriate value for n, For example, for m= 1 and n= 2, (2.123) becomes, respec tively, f(x) Aste oo =flxon] 1 1 ‘ =f fle) jf, (2.1248) af) Sites AF (x0) Se pe tx Xo) Sa =xXe- xy - 2h e SoM ag + es = cee Qxy+ jes + [P= ]ro0. (2.1240) Second derivatives can be computed similarly example, for n = 2, PIG), A°F(%0) dx a Yixaxe} 1 2 1 + palleo)— GaSe + pala). (2125) Other derivatives follow in obvious fashion, although the formulas quickly become rather cumbersome when written in general fore. 
Error terms for the differentiation formulas are given by the appropriate derivative of the remainder term for the nth-degree interpolating polynomial R,(x), given by (1.39) as LOO = 0/6 where ¢=&(x) is an unknown point on the interval (fo... Unfortunately, &(x) may not be single- valued or differentiable, although /*#(£) is, provided AE) itself possesses derivatives of higher order. If we let (x) = []feo(x — x,) and use the finite divided-difference equivalent of f°* Y(€), [see (1.39a)], then R,(x)is given by RAS) = WOOL heats eos (2.127) and its first derivative is IR, _ daw Fe HO anor eot) $20) 2 Drsratectseentols (2128) ‘We can show (see (2)) that cn Doerr nate oe, iy aD Bory with & in (x,x9,..-42,)- Then (2.128) can be written as a(x) f°) de Gi * £0) @+Dr’ Su ba in (X05 (x) (2.130) For x= x,, that is, when x is one of the base points, Ry _ dae) £*%E) +n) dx Oa EH A LOG) ~ [Bo] Tr: 213) For example, (2.124b) can be written with its error term afl) _ 1 BE peay Ten 7 EHP) + ASCs) “Sad + FSW (2.132a) df(x) 1 BF re Sie RUC) —SO0 - ZIM (2,132) afta) _ 3 Wyo Pm (se) = AFC) + 3/42) +E L%E) (2.132¢) Notice that the leading factor in the error term for the derivative at x, is just half that for the error term at x9 and at x,, We would expect that the computed derivative estimate at the midpoint would be more accurate than ‘those computed at the end points, since functional values ‘on both sides of the midpoint are used in the midpoint formula (2.132b) but not in the end-point formulas. Com- parison of the first derivative approximations, using hhigher-degree interpolating polynomials (see, for example, the extensive tables in [9]), shows that the magnitudes of the coefficients of s*"(6) at the base points decrease monatonically as x approaches the midpoint of the interval. In addition, the midpoint formula with n even is invariably simpler (for example, (2.132b)] than formulas for other base points. For these reasons, the odd-point formulas (n even) are usually favored over the even-point ones, and the base-point values should, if possible, lie on both sides of the argument at which df(2)[dx is to be approximated. To determine d"/(x)/ds" for m JMAX Ur =n) om) (@) ° where c,, 4, and k (the specific heat, viscosity, and thermal conductivity of the Mud, respectively) are functions of tem- perature F. Al goats in the above formulas ens be i Consistent units Write a program that will read values for m, T;, T2, Ts, D. and information concerning the temperature dependency of > mand k, as data. The program should then compute the required exchanger length, £. in masking approximate heat-ekchanger calculations, one often assumes mean values for cp and k, evaluating them just once, at the mean fluid temperature (7; + 73)/2. Let the Drogram eximate the erot involved in making this asurmp- tion. ‘Suggested Test Data Case A Cate B Fluid Carbon dioxide Ethylene glycol gs liquid rm, Ibybr 2s 45,000 iF cy 0 °F 280 and $00 $0 and 180 TE $50 250 Din, 0495 1.022 7 nap 14400 BTU 0st 4 246 x toe — EE 0.53 + 0.000657 roots a2") 0.0133 @12) 0.153 (constand) KBTUIMENF | OOat Gon (o.o28 (512) 22 WF 82.1 (50) pstostne — ooas2(7E5) 30 (100) 126 (130) 5.57 (200) ‘The above physical properties have been derived from Perry [15]. See Problen 1.19, concerning the interpolating poly- ‘nomials fesulting from the tabulated values of k for carbon ide and p for ethylene glycol. 
2.16  The fugacity f (atm) of a gas at a pressure P (atm) and a specified temperature T is given by Denbigh [16] as

    ln(f/P) = ∫ from 0 to P of [(C − 1)/p] dp,

where C = Pv/RT is the experimentally determined compressibility factor at the same temperature T, R is the gas constant, and v is the molal volume. For a perfect gas, C = 1, and the fugacity is identical with the pressure.
    Suppose that n values are available for C, namely, C_1, C_2, ..., C_n, for which the corresponding pressures are p_1, p_2, ..., p_n (not necessarily equally spaced, but arranged in ascending order of magnitude). Assume that the value of p_1 (typically 1 atm) is sufficiently low for the gas to be perfect in the range 0 < p ≤ p_1.
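A fugacity calculation of this kind reduces to quadrature over unequally spaced base points (Section 2.8). The minimal sketch below is not the text's solution to this problem; the pressure and compressibility values are purely illustrative, and the trapezoidal rule is only one reasonable choice. It takes C = 1 below the first tabulated pressure, so that the portion of the integral from 0 to p_1 contributes nothing.

    ! Minimal sketch (assumed data, not the text's solution to Problem 2.16):
    ! estimate ln(f/P) = integral from 0 to P of (C-1)/p dp by the trapezoidal
    ! rule over unequally spaced (p_i, C_i) pairs, with C = 1 assumed below p_1.
    program fugacity_sketch
      implicit none
      integer, parameter :: n = 5
      real(8) :: p(n), c(n), g(n), acc, f_over_p
      integer :: i
      p = [1.0d0, 10.0d0, 20.0d0, 30.0d0, 40.0d0]       ! pressures, atm (illustrative)
      c = [0.998d0, 0.983d0, 0.966d0, 0.950d0, 0.935d0]  ! compressibility factors (illustrative)
      g = (c - 1.0d0) / p                                 ! integrand (C - 1)/p
      acc = 0.0d0
      do i = 2, n                                         ! trapezoidal rule, unequal spacing
         acc = acc + 0.5d0*(g(i) + g(i-1))*(p(i) - p(i-1))
      end do
      f_over_p = exp(acc)                                 ! fugacity coefficient f/P at p(n)
      print '(a,f10.6)', ' f/P at the highest tabulated pressure = ', f_over_p
    end program fugacity_sketch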

las, evaluate both of these integrals for x = 0.8, and compare with the tabulated value, erf (0.5) = 0.520500. 220 Since (n+ 1) = ni, the Gauss-Laguerre three-point formula could be used 10 spproximate n!: * ese des $ at ‘What is the largest integer n for which this formula would be exact? 221 Modify the function GAUSS of Example 2.3 x0 that integrals of the form [ree are evaluated using a composite formula, equivalent torepeated application of the m-point Gauss-Legendre quadrature over nonoverlapping subintervals of {a,b} of length ( — a)/n. 2.22 Write a function, named LAGUER, that employs the m-point Gauss-Laguerre quadrature formula (2.109) to evaluate numerically an integral of the form lever as, where f(x) is an arbitrary function and aii finite, The function should incorporate the necessary Laguerze polynomist roots and the corresponding weight factors (see Table 2.4) for the -mepoint quadrature where m may be any of 2, 3,4, 5, 6, 10, or 35. Let the argument fist be (A. M,F), where A and M have ‘obvious interpretations, and F is a function that evaluates FG) for any x. To test the routine, write a short main program and approp- riate functions to compute: () the gamma function TY) for «= 1.0, 1,2, 14, 1.6, 1.8, 2.0. (6) the exponential integral, for a= 0.5, 1.0, 1.5, 2.0. Ia each case the integral should be evaluated using the 2, 3,4, 5, 6, 10, and 1$-point quadratures. Compare the results with tabulated values. 223 Write and test a function, named HERMIT, that implements the Gauss-Hermite quadrature of (2.115) for n= 1,2,3,4,9, and 19 (see Table 2.6 and reference (3)), Let the argument ist be (N, F), where N and F correspond to n and. Fin 2.115), Use HERMIT to evaluate the integral e ne and compare with the true value, #/[2 sin(x/4)). 224 in a study by Carnahan [17], the fraction fas of certain fission neutrons having energies above a threshold ‘energy Er, (Mev) was found to be mo fn oes | sinb( /2B)e"* dE. Evaluate fra within 40.001 for Ex =0.5, 2.9, 5.3, and 8. Mev. ‘2.25 The following relation is available (3) for P(x), the derivative of the mh-degree Legendre polynomial: (2? — Pia) = mePx) ~ nP ea). Show that the family of polynomials Pi(x) is orthogonal on the interval {~1,1] with respect to the weighting function (x? — 1), ‘Show also that a Gaussian quadrature of the form [i feree= Sms ‘can be developed for the case in witich (wo base points are reassigned (xo —1, x4=1, that the quadrature is exaet when f(2) is a polynomial of degree 2n—1 or less, that Hes Kays Fant Ate the 20708 of Ux), and that the weight factors are 1p wen Eyl, MPAA, Le wo BRET + PHO de, 1 ot & GR TF Note. It can be shown by further manipulation that 2 2 meri ne DIPLO The above is known as Lobatto quadrature. 226 Show that a Gaussian quadrature, [iL ferae= Swsoco, for which x= —1 and the remaining base points are the » roots of 1 PAD + Pass) +) l+x is exact when f(2)is a polynomial of degree 2a or less. Find the weight factors ws, i= 0,1, ...1m, for n= 1,2,3. 227 A useful quadrature formula, attributed to Chebys shev, is given by . 2 [se 4s > Sot, Problems 135, where the x, are the n+ 1 roots of an (n+ I)th-degree poly- nomial C,,,(x). The first few of these polynomials are: It can be shown [26] that the roots of these polynomials all lie {in (~1,}) and are real, whereas some ofthe roots of C,(x), for 8 and m> 10 are complex. Note that this quadrature has the attractive feature that all the weight factors are equal, And the weighting function forthe integral is w(x) = 1. 
(@) Show that the polynomials C,(x) are not orthogonal with respect to the weighting function w(x) = I on the interval Cun (©) Find the error term for the quadrature. () What is the degree of precision of the formula for n even and n odd? (@) Show that the two-point formula is identical to the two- point Gauss-Legendre quadrature of (2.84), (©) Modify the quadrature to allow est of the form ation of integrals frees, where @ and 6 are finite, 2.28 Using the roots of the polynomials C,(x) of Problem 2.27 and found by the program developed in Problem 3.32, Write a function, named CHEB2, with argument list (A. 8, F, 'M, R) that implements the composite version of the quadrature developed in part (e) of Problem 2.27. Here, A and B are the lower and upper integration limits, M is the number of points in the quadrature formula, and R (integer) is the number of repeated applications of the quadrature. Thus the M-point quadrature is to be applied to R nonoverlapping subintervals ‘of (AB], each of length (8 — A)/R, F is another function that evaluates f(x) for any x. Write main program and accompanying function F to allow computation of the complete elliptic integral of the second kind, Ba) = [7 — asin? 2) a, ‘The main program should read values for ALPHA (a), M, and R, call upon CHEB to return the estimated value of the integral, print the results, and return to read another data set, Since CHEB2 should be a general routine, F should have only ‘one argument (say X). ALPHA should be available to the func- tion F through a COMMON (or equivalent) declaration, For a variety of values for M and R, calculate values of E(e), for «= 0.,0.1, 0.25, and 0.5. Compare your results with the tabulated vatues [3]: (0.00) — 1.570796327 E(0.10) = 1.530757637 E(0.25) = 1.467462209 E(0.50) = 1.350643881 2.29 A very light spring of length Z hss Young's motuhos Eand cross-sectional moment of inertia J; itis rigidly clamped at its lower end B and is initially vertical (Fig. P2.29). A downward force P at the free end A causes the spring to bend ‘over. If 8s the angle of slope at any point ané 3s the distance ‘along the spring measured from A, then integration of the exact governing equation El(dijds)=~Py, noting that dylds = sin 6, leads to [21}: —— I@PIETV C68 6 cos [2 {2°01 = sinterapin* HY where a is the value of 8 at A a and Figure P229 Show from the above that the Euler load Ps for which the spring just begins to rend is given by wEL aw ‘Let x, and ys denote the vertical distance of 4 above the datum plane and the horizontal distance of B from A, respectively. Compute the values of P/P, for which x,/L = 0.99, 0.95, 0.9, 0.5, and 0. What are the corresponding values of ys/L and a? (Note that the above expression for Landaa related expres- sion for x/L involve elliptic integrals.) 2.30 A semiinfinite medium (x > 0) has a thermal diffuse ivity « and a zero initial temperature at time = 0, For 1>0, the surface at x= 0 is maintained at a temperature 7, ~ T(t). By using Duhamel’s theorem (see p. 62 of Carstaw and Jaeger 136 Numerical Integration 122, for example), the subsequent temperature T(x) inside the medium can be shown to be given by ar 1,2 Gar Te) = (An alternative form of the integral can be obtained by intro- ducing a new variable: x = x/(2Vale— )]) Let T, represent the periodic temperature in "F ata point on the earth’s surface. For example, the mean monthly air tem- peratures in Table P2.30 have been reported [23] at the loca- tions indicated. 
Table P2.30 Nagpur Cape Royds Yakutsk January 669 26.1 ~45.0 February ne 208 “354 March a5 49 —na Apsil 399 199) 140 May 924 55 0 June 864 at 580 July 804 9 67 ‘August 803 157 50 September 80.2 =57 2.0 October 783 45 165 November 23 119 21 December 67 300 $12 Compute the likely mean monthly ground temperatures at 5, 10, 20, and 50 feet below the earth’s surface at each of the above locations. Plot these computed temperatures ¢o show their relation to the corresponding surface temperatures. In each case, assume: (a) dry ground with «= 0.0926 5q ft/hr, (b) the mean monthly ground and air temperatures at the surface are approximately equal, and (¢) the pattern of air temperature repeats itself indefinitely from one year to the ext 231 Surpose that (m+ 1)(n-+1) functional values 109) are ‘ailable forall combinations of m-+1 levels of AFRO oot and nt T fevels Of 3)f= 051, vgn Define Laerangian interpolation coefficients %,,(8), 0,1, man YsA)),j-=0, 1, sm, asin Problem 1.38. Let the integral in she rectangular domain a laal > “++ > lal and a, #0. 32. Gracffe’s Method OF the numerous methods of solution that have been suggested, the most completely “global” ones, in the sense of yielding simultaneous approximations to all roots, are ‘probably Graeffe’s root-squaring technique [11] and the QD method of Section 3.9. Consider the function defined by Hx) =(-DICS(—2) = (3? = aot = a)? = G3) Since $(x) is a polynomial containing only even powers, ‘we may define the polynomial Lul3) = O/8) = ( = ax ~ 03) which has the property that the roots of f,(x) = 0 are the squares of the foots of (3.2). Repeating this operation, wwe obtain a sequence of polynomials fa, fa fas fies. such that the equation Sot) (8 = ax = a3)- (x = af) Ga) where m is a positive integral power of 2, has the roots af, a3, ..., a. The coefficients of fy are determined from the coefficients of the preceding polynomial by an algorithm that is described later. The object of this procedure is to produce an equation having roots differing greatly in magnitude, since the roots of such an equation can be approximzted by simple functions of its coefficients. If the roots of (3.2) are real and |a,| > || >--~ > |a,|, then the ratios laS/aePl, laS/ee3l, «5 legos can all be made as small as desired by making m large enough. Expanding (3.4) leads to Fnl2) SH + (ated tox — (afalgay e Y? ove e (= Dfaagy a. (x28), =0, 5) Writing the right-hand side of (3.5) inthe form = A + AQF +o + (-1A, 3.6) ‘we derive the approximations A 37) From these, by taking mth roots, We may approximate the values of the roots 03,0... 44 Of (3.2). Since the signs of the roots are not determined, they must be checked by substitution or otherwise. Ifthe roots do not all have different absolute values, as will be true if multiple or complex roots are present, the situation is more complicated. 1f {2,|= {s{, while the other roots it 342 Solution of Equations are of lesser magnitude, the frst terms on the right of (3.5) are approximated by x" — (af + aja‘ + (afag)x"-? It is seen from the above that in this case a] and a are approximated by the roots of the quadratic equation v—Aye+ dy =0, while if a, and 2,4, are the roots of equal magnitude, the equation Apso AX + Ayes G8) yields approximations to a? and a7... OF course, this leaves m possible values from which a, must be calcu- Isied by reference to the original equation. 
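Before taking up these special cases further, the basic root-squaring cycle itself is easy to sketch. The short program below is not the text's Example 3.1; the coefficient recursion is written directly from the product f(x)f(−x), the cubic (x − 1)(x − 2)(x − 3) and the six squarings are assumed purely for illustration, and only root magnitudes are produced.

    ! Minimal sketch (not the text's Example 3.1) of Graeffe root squaring for a
    ! monic polynomial with real, distinct roots; the test polynomial and the
    ! number of squarings are assumptions chosen for illustration.
    program graeffe_sketch
      implicit none
      integer, parameter :: n = 3, nsq = 6          ! degree and number of squarings
      real(8) :: a(0:n), b(0:n), root
      integer :: j, k, m
      a = [1.0d0, -6.0d0, 11.0d0, -6.0d0]            ! coefficients of (x-1)(x-2)(x-3)
      do j = 1, nsq                                  ! repeated root squaring
         do k = 0, n
            b(k) = a(k)**2
            do m = 1, min(k, n - k)
               b(k) = b(k) + 2.0d0*(-1.0d0)**m * a(k-m)*a(k+m)
            end do
            b(k) = (-1.0d0)**k * b(k)                ! coefficient of the squared-root polynomial
         end do
         a = b
      end do
      do k = 1, n                                    ! |root_k| ~ |a_k/a_(k-1)|**(1/2**nsq)
         root = abs(a(k)/a(k-1))**(1.0d0/2.0d0**nsq)
         print '(a,i0,a,f12.6)', ' |root ', k, '| = ', root
      end do
    end program graeffe_sketch

For the well-separated real roots assumed here the magnitude estimates settle after only a few squarings; fixing the signs of the roots, and handling roots of equal modulus or complex roots, proceeds as described in the surrounding discussion.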
Should a, be real, this offers no computational difficulty After a sufficient number of steps have been taken, itis ‘fear from (3.5) that if the roots are real and distinct, the coefficients 4), 43, .... 4, will be approximately squared at each iteration, However, if 3,1 = (aes, this will not bee true of the coefficient ,. In spite of its advantage of furnishing approximations to all roots simultaneously, Graeffe’s method does not Seem to have aroused great enthusiasm among users of automatic equipment. One drawback is the need for aking decisions not readily mechanized, The difficulties in locating complex roots have already been mentioned. Moreover, errors introduced at any stage have the effect of replacing the original problem with a new one, thus affecting the correctness of the roots arrived at rather than merely the rate of convergence. A technique for obviating this last difficulty will be explained in Section 3.4, Another feature of concern is that coefficients developed can quickly leave the usta! oating-point range. ‘The coefficients in (3.6) can be found by performing the polynomial multiplication (—1)* f(x) f(—2) and then compressing the result by ignoring the zero contents of alternate locations, starting with that corresponding to 271, and storing the contents of the remaining locations ix some suitable sequence of locations. IF desired, the coefficients may be found iteratively by the relations ids E oat $20 (0! Aan Aci O ~ 3.80859375x7 — 4.80859375x +0,31640825, ‘folx) = x4 ~ 24,6328125x? ~ 24,53269958x* ~ 25.5327025x + 0,10011292. Thus the predicted values of af, af, #8, and af are, respectively, 24.6, 0.498 + 0.888, —0.498 — 0.888), and 0,00392. These compare fairly well with the truncated true values 25.63, 0.5 +0866), —0.5 — 0.866, and 0.00391 3.3 Bernoulli's Method Let uy, O, define wan Zany es) Bernoulli's method for finding a zero of fo)= Fat is to observe relations involving the sequence {u) and certain related sequences of which we use only those given by OUP hee (3.10) fy atlas Masa The simplest theorem states that if (x4{> (2yl, then 1 = lim oa Gan An important advantage of Bernoulli's method over Graeffe’s method is that (3.11) remains true, if suitable initial values are used, for che case x, = 4 provided |a4| > |a,+,|. In addition, if two or more roots are of equal modulus and one is of greater multiplicity than the others, the timit in (3.11) exists and is that root of greater multiplicity. Moreover, the technique for finding a conjugate pair of complex roots can be con- siderably simpler than when using the root-squaring method. The price paid is lack of knowledge of other roots, If there are no multiple roots, it can be shown [1] that the unique solution of (3.9) for k > 0 is of the form + eth @.12) where the character of the ¢, is controlled by the initial choice of the y,, 0-< &< 7 ~ {. This being 50, it is clear that the conclusion of (3.11) is warranted provided that ¢, is not zero, Should a, =a, and la,| > Iayl, then the character of the solution is uy = cat + eyes +o ath + epkal + cya to + at 3.3 Bernoulli's Method 143 Similar relations hold for the general case, and the truth of (.11} for the conditions described is then clear, pro- vided that a properly chosen coefficient does not vanish. Suppose nest that faq| = |a,| > lal. It follows that 3.43) ay ta. ‘The discussion is given only for «, #22, although the proposition is valid as stated. 
(Indeed, it is true for yy yy = pag = a5 My tae > [ese al ao wellas when a = 3g = -~° = 205 dey = Bye = asi Hi}= all = Waisab provided that ayo and following roots of equal modulus have multiplicity tess than i.) Note that the limit of ty/oy_1 is e/vi. where sah + ena)? — esa! + cant Meat? + cna = ead "a 'Qayay ~ a9 ~ 0). Also, the limit of 44 ,/vy is (2. ,/vi, where = (rat + cxa8(eyeh"! + exah') ~ Geuah! + exah Meat? + egal?) = exenah el + a)2a 2 ~ af ~ a3) Consider next the problem of choosing to is so that none of the coefficients of dominant powers vanish, Since the relations of (3.12) (or similar ones in the case of multiple roots) hold for 0 < k n, are arbitrarily set to zero. After the computation of the By i= 1, 2,+.. 1, each B, is tested to insure that its value is well within the floating-point number range. The permitted number range is {-7, -1/T},0, (1/T, T]. The nonzero By are subjected to two magnitude tests, \Bl #0, Should all B, pass both tests, the rootsquaring process is continued, The C; are assigned the newly computed values of the B, that is, CHB, 12,6 Then, the B, for the next iteration are computed using GS). The sequence (3.1.5), G.1.6), and (3.1.7) is repeated cyclically until at least one B; fails one of the magnitude tests (3.1.6), at which time the real and distinct roots, «, of (3.1.1) should be well separated and such that B, ze where jis the number of iterations. af pofynomial (3.1.1) is then evaluated using and —& to determine which produces the smattest magnitude, and hence which sign should be assigned to aj, Of course, the a, found may not be roots at ait (p,(x) may have multiple ot complex roots). A value a, with proper sign is assumed to be a root if [Play Tor Bl < 1/7 and B #0 Fone a : latt> ert poe p-a) lB T F “Probably =| Not a Root” / By iter |____ [Probably a Root" 146 Solution of Equations FORTRAN Implementation List of Principal Variables Program Symbol at Bt ct fps 1s int, IMLIPL (PRINT ITER max 4 L N Net PMINUS Prius PyaL ROOT Tor Definition Vector of polynomial coefficients, a, Vector of coefficients, B,, of the squared polynomial after the current iteration (3.1.3). Vector of coefficients, C,, of the squared polynomial prior to the current iteration (3.1.3) Small positive number, ¢, used in test (3.1.9) to determine if is to be considered a root. Jin Msi +e, cespectively. Print control variable. If nonzero, coefficients B, are printed after each iteration, Iteration counter, j. Maximum number of iterations allowed. Subscript (not to be confused with iteration counter), L rn, degree of starting polynomial. nel p> value of p(~&) pt, value of pi). Pati 4, possibly a root of p,(x). T, uppér limit on the magnitudes of coefficients B,, produced by the root-squaring process. } Because of FORTRAN limitations (subscripts smaller than fone are not allowed) all subscripts on the vectors of coeficients a, 2B. and C that appeat in the text and flow diagram are advanced by ‘one when they appear in the program; for example, ay Becomes A(!), B, becomes B(N + 1), et, Example 3.1 Graeffe's Root-Squaring Method—Mechanleal Vibration Frequencies 17 Program Listing 10 APPLIED MUNERICAL KETHODS, EXAMPLE 3.2 GRAEFFE'S ROOT-SQUARING NETHOD THIS. PROGRAM USES GRAEFFE'S ROOT SQUARING TECHNIQUE TO FIND THE REAL ANO DISTINCT ROOTS OF THE NTH DEGREE POLYNOMIAL WHOSE COEFFICIENTS ARE READ INTO A(2),..A(N#L) IN DECREASING POWERS OF THE VARIABLE WITH THE CONSTANT TEaM IN ACNe1), THE COEFFICIENTS ARE FIRST NORMALIZED BY DIVIDING EACH BY ALI), YIELDING ACL) = 3. 
THE ITERATIVE ROOT SQUARING PROCESS USES TWO TEMPORARY VECTORS, CANO B, IN ALTERNATE SUCCESSION AND CONTINUES UNTIL ONE OF THE COEFFICIENTS EXCEEDS TOP IN MAGNITUDE OR UNTIL THE HAXIMUM NUMBER OF ITERATIONS. ITHAX Vs EXCEEDED, AT THIS POINT THE MAGNITUDES OF THE POSSIBLE ROOTS ARE COMPUTED (ROOT), IN ORDER TO DETERMINE THE PROPER SIGN FOR ROOT, THE NORMALIZED POLYNORIAL 1S EVALUATED FOR POSITIVE (PPLUS) AND NEGATIVE (PMINUS) ARGUMENTS, IF THE MAGNITUDE OF PPLUS OR PHINUS IS SMALLER THAN EPS, IT 1S ASSUMED THAT & ROOT HAS BEEN FOUND, AN APPROPRIATE COMMENT 1S PRINTED ALONG WITH THE VALUE OF ROOT (WATH PROPER SIGN) AND THE CORRESPONDING VALUE OF THE POLYNOMIAL, PVAL. TE IPRINT IS NONZERO, COEFFICIENTS OF THE SQUARED POLYNOMIAL ARE PRINTED AFTER EACH ITERATION. DIMENSION A(00), 86100), C(100) BaD) #1. ca) 21, READ (5,100) N, ITMAX, TOP, EPS, IPRINT NPL = Ns 1 READ (5,101) (C4), L91,NPL) se... NORMALIZE A COEFFICIENTS, INITIALIZE C VECTOR ..... b0'R 1 = 2, NPL ACI) # ACLY7AGY, cel) © act) AQ) #1, WRITE” (6/200) N,ITMAX, TOP, EPS, IPRINT,NPL, (ACI), #1,NPL) esse. BEGIN GRAEFFE'S ITERATION s. 004 bo'id TTeR = 1, 1TMax . COMPUTE COEFFICUENTS OF F(X2*F(=x)_ (WITH APPROPRIATE SIGN), IGNORING ALTERNATE ZERO COEFFICIENTS «0... Dou b= 3, NPL BC) # cerpacery so Ta 4 Le? eCC1PLyecC Im) TPRINT.EQ.0_) GO To 6 (6,201) VTER,NPL, CECI), 122,NPL) 5 TAYE ANY COEFFICIENTS EXCEEDED SIZE LIMITS «1... 1a? NPT TF ( ABS(BU1D) .GF,TOP OR, ABS(B(ID).LT.1,/TOP .AND, BC1).NE.O.0 ) 1 go 70 11 CONTINUE , ase: SHIFT SOEFFICIENTS FROM 8 TOC FOR NEXT ITERATION ... B0'i6 Ne 2 wen cc) = BCL) WRITE (6,2027 Tren = ua sees THE FOLLOWING STATEMENTS COMPUTE THE MAGNITUDES OF THE POSSIBLE ROOTS AND EVALUATE THE ORIGINAL POLYNOMIAL FOR BOTH POSITIVE AND NEGATIVE VALUES OF THESE ROOTS « 148 Solution of Equations Program Listing (Continued) c LL WRITE (6,203) 00201 2, NPL c c see COMPUTE ESTIMATE OF ROOT FROM COEFFICIENTS OF SQUARED c POLYNOMTAL ROOT® ABS(ECT)/BCi=1)}##C1./2.98 TER) c © 2... EVALUATE POLYNOMIAL AT ROOT AND -ROOT brits = 1. exits = 1. DOW “yt 2, npr PPLUS = PPLUS@ROOT + ACJ) 1b PMINUS = PMINUS#(-ROOT) + ACJ) ¢ © sey, CHOOSE LIKELY ROOT AND MINIMUM POLYNOMIAL VALUE 4.6 iFC Aws(PPLUs).cT.aBs(PMINUS) ) GO 70 i6 VAL = PLUS, G0 To 27 16 BVAL = Pains hoot = -RooT c c so 18 PVAL SMAECER THAN EPS 4... 17 TFC ABSCPVAL).GE.EPS ) GO TO 19 WRITE (6,208) ROOT, PVAL, ITER Go To 20 19 WRITE (6,205) ROOT, PVAL, ITER 20 cONTINWE 60 To1 c c FORMATS FOR INPUT AND OUTPUT STATEMENTS 4... oo FORMAT (30x, #2, 18X,12,18%,E6.1/ 10X,E6.1, 18K, 11) 206i FORMAT (20x, 510.4 ) 200 FORMAT (OHIN =, 18 / 10K ITMaX = | 18 / LOH TOP « , 1 1PELW,1/10H EPS”, 1PEI¥.2/00H HPRINT = 4 18/ 2 20H0 ACL. AC, 13, 1H), /7 OM, OPSF13,6) 9 7D2 FORMAT (LOHOITER = , 18 /20H0 BCLD...BC, 12, DJ aH / 1H, 1PS5E15.6)_) 202 FORMAT’ (W3HOITER EXCEEDS ITMAX = CALCULATION CONTINUES 203 FORMAT (1HO/ 55H AT POLYNOMIAL VALUE ITER co WENT / 1K) 204 FORMAT (1H , F10.6, 1PEI7.6, 18, 7X, ISHPROBABLY A ROOT 7 205 FORMAT (1H 1 F10.6, 1PE17.6, 18, 7X, JSMPROBABLY NOT A ROOT ) c ENO Data N = 2 rwax +25. Tor = 2.0850 EPS aor-2 PRINT = 1 AQ), AG) © 7.0000 ~3.0000 2.0000 Noe imax ss Top = 1.0830 EPS = yloe-1 —1PRINT = 0 AG). AG) # 1.0000 -3.0000 7.0000 x = 3 VIMAX = 25 Top = 1.0830 EPS = 0g-2 PRINT = ACD Aa) = 2.0000 -12.0000 22.0000 ~12.0000 u * Tmax 2s, Top = 1.0836 EPs. TPRINT =D ac), 1,0000 10,0000 35.0000 50.0000 24.0000 n Wimax" = 25 Tor = 1.0830 EPs. 1PRINT «0. ago. 0.5000 4.0000 10.5000 9.0000 toma a 2s Top = 1.0830 EPs. 
APRINT © 0 4a), 1,000 19.0000 55.0000 75.0000 N rma’ 225 TOP = 1.0630, EPs TpRINT = 2 AQ). 0.2500 1.2500 2.0000 1.0000 Example 3.1 Graeffe’s Root Squaring Method—Mechanical Vibration Frequencies Program Listing (Continued) N 4 EPS = 108-2 ACD). ACS) N ’ EPS = 108+ ACD. AG) ry 3 EPs = 1,06-1 ACD. ACH) eee EPs = 108-1 ACD. .ACS) y =e EPS = 1.0E-1 ACD). ACS) N 4 EPS = LE ACD). ACS) # N = 38 PS = ort ACL). .ACS) # A()ICIACS) & N . EPS = 108-1 AD). AGS) (6). .AC3) VMAX = 25, Tor IPRINT =O vrmax TPRINT = 0. 1.0000 16,0000 78, P| vax" 25 IPRINT = 0) 1,000 9.0000, =3. ‘Top WTwax” = 25, IPRINT = 0. 1,000 -28.0000_ 150, ‘Tor lima” = 25 TPRINT = 0) 1.0000 -1,0000 0, imax’ = 25 Tor APRINT = 8 1.0000 =2,0000 1, TOP Imax) = 25, UPRINT =a) 10000 -16.0000 105.1 792/000 467.0000 -120. ITWAX = 25 ‘TOP ITPRINT =O 1.0000 -20.0000 157. +1557:6560 917.2167 -227, ‘Computer Output Results for the Ist Data Set Nos 2 tax = 3 we 1.06 30 es 108-01 VPRiwy y AC ac) r.agaaan “*"=5,) ier = 1 BC)... BC 1.000000 00-5, ITER + 2 Ba). 8 1.000006 00-1. ITER = 3 8C1). BC 1.090006 00-2. ier = ‘ BC). BC 10000008 00-6, 3) 000000 2, 000000, 2 000000 00 40000008 00 » 700000 01 1,600000E 02 3» 570000€ 02 2.560000 07 » 5537006 Ob 6, 5S3600E OF 1.0000. -35.0000 146, 35 ‘Yo? = 1.0880 0000 ~100.0000 = LOe8t 0000 -412. 0000 = 1.0830 0000 1.0000 = 1.0830 0000 ~200,0000 = 1.0830 7500 0.2500 = 1.0830 2500 0.7500 = 1.030 0000 364.0000 0000 9.0000, = 1.0830 0625 -625.437% 9003 15,6643 1,0000 624.0000 =375,0000 0.2500 -0,7500 715.0000 1301, 4450 149 150 ‘Solution of Equations Compates Output (Continued) ier + 5 BCL). ..BC 3) 1,000000E 00 -4,298967E 09 © 4,204967E 09 Iter = 6 BCL). BC 3) 1,000000E 00 -1,80N674E 19 1,80N67KE 19 1TeR = 1 BCD)... BC 3) 1,000000E 00 -3,40282NE 38 $,4O282NE 38, ROOT POLYNOMIAL VALUE ITER COMMENT 2.000000 0.0 1 PROBABLY A ROOT Vooeooa = 0.0 7 PROBABLY A ROOT Results for the 2nd Data Set " 2 wax oP es. Fran 3 1,06 30 TOE-01 o ACL) AC 3D 1.000090 """=3.000000 2, 000000 ITER EXCEEDS ITMAK - CALCULATION CONTINUES. ROOT POLYNOMIAL VALUE TER COMMENT 2.000975 9. 756088E-04 3 01999513 W882812E-04 3 Results for the 7th Data Set Nos 3 iar = 5 ORs ioe 30 ul : S0E-02 CeRcuT 1 ACD. AC WD 1.000000 """=$.000000 8, 000000 ‘00600 ITER = 1 BD. BC wD 2.000000 00 -9,000000¢ 00 2.400000 01 -1.600000E 01 IR 2 BC). BCD 3,000000€ 00 -3,s00000€ 012 O000E 02 +2. 5600008 02 Example 3.1 Graeffe's Root-Squaring Method—Mechanical Vibration Frequencies ‘Computer Output (Continued) 5 tee aa. 10000008 00 IR + BaD. 1.000000 00 imeR = BD. 1.000008 00 ier aD, 1.000000 00 oor Pal 2.021778 1978456 1000000, ow ~5.1300008 02 ‘ 2ow) 1.510730 05 5 ow) 589935 09 6 BO) -5.6895096 19 LXNOMIAL VALUE b.a54202e-00 41539N90E-04 0.0 Results for the 9th Data Set 4" : 4 Tmax = 25 Top = Lge 30 EP: 10-01 TPRINT = ° AC).. AC 5) 1,000000 ""216,000000 ROOT POLYNOMIAL VALUE 11,999994 8, 300781603 51324727 -91699263E 02 W.ge2gso 317228888 02 2ioooo00 0.0 6.608800 04 1,8n4674E 19 3.402826 38 ITER 78.0000 ITER 2950985 09 ~ 6.553606 08 2949676 09 10-01 TPRINT = a AD. AC 9) 1.000000 "18, c00000 105.0000 -364,000000 715. 
000000, -792,090000 462.000000 -120.000000 -$,000000 Results for she 15ck Data Set ROOT POLYNOMIAL VALUE ITER comment 3.931828 -2.785826€ 00 4 PROBABLY NOT A ROOT 506101 _3.037500E-01 4 PROBABLY NOT A ROOT 21987610 5.7604 79E-02 4 PROBABLY A ROOT 223u4709 —-1,475048E-02 4 PROBABLY & ROOT 15652350 =318357716-08 “ PROBABLY & ROOT Ol999920 2 /506805E-08 4 PROBABLY & ROOT 9.u67911 -3,538743E-08 4 PROBABLY & ROOT 07120615 -W.7683726-05 4 PROBABLY & ROOT x : 8 imax = 25 oP = 1,06 30 EPS 1108-01 TPRINT = ° ACD. AC 9) 1.000000 ""=20,000000 _157.062500 -623.437256 1541.44usz4 715571655762 912.216553 -227,900293 15,664500 ROOT «POLYNOMIAL VALUE ITER. COMMENT 6.595325 5,.43N6QE 02 4 PROBABLY NOY & ROOT 4.706691 -41292822E-01 ‘ PROBABLY NOT A ROOT 3.365898 -7.332039€-02 6 PROBABLY A ROOT 21347098 1,755998E-02 4 PROBABLY A ROOT Liswulos — -1!212120E-03, ‘ PROBABLY A ROOT 912852 1 974106E-08 ‘ PRoBagty A ROOT 01425053 -3.536743E-07 4 PROBABLY A ROOT 0:106776 — -31814697E-06 4 PROBABLY A ROOT Example 3. Graeffe's Root-Sauaring Method —Mechanical Vibration Frequencies 153 Discussion of Results Single-precision arithmetic was used for all calcula- ‘ions, Results are shown for data sets U, 2,7,9, 14, and 15, In most cases, the printing of intermediate coefficients was not requested, Final results far each data set are listed below, along with the polynomial p,(x) and the known roots. Calculated Data Set I (results shown) Roots True Roots par 42 2.000000 2.000000 Number of iterations, 1.000000 1.000000 WER =7 Caleufated Data Set 2 (results shown) _Roots__True Roots, PAX) =x? 3x42 2.000975 2.000000 ITER = 3 0.999513 1.000000 Inthis case, the maximum number of iterations permitted was 3. Hence the roots of the squared polynomial have not been separated far enough to attain the accuracy shown for the same polynomial in data set 1. Calculated Date Set 3, Roots True Roots a(x) = 2x9 — 12x? + 22x — 12 2.999999 3.000000 ITER =6 2.000000 2.000000 1.000000 1.000000 Calculated Data Set 4 Roots True Roots PAX) = x4 = 10x? + 35x? 4.000011 4.000000 ~ S0x+ 24 2.999990 3.000000 ITER = 5 1.999999 2,000000 090000 1.000000 Calculated Data Set 5 Roots True Roots PAX) =0.5x7 + Ax? + 10.5x 3.065690 —3,000000 +9 =2.938715 ~3,000000 ime = 5 1.999999 — 2.000000 In this case the dominant root has multiplicity 2. The rogram indicated that ~3.065690 and —2.935715 were probably roots, since the polynomial values were 0.00593 and —0,00386, respectively, smalier than the data valve read for © (0.1). Typically, Graeffe’s method estimates of multiple real roots will span the true value, that is, (~3.066 — 2.936)/2 = —3.0 Calculated Data Set 6 Roots True Roots 2? = 19x? 455x475 14.999993 15,0000 4 4.999995 5.000000 ~ 1.000000, — 1.900000 Here the roots are well separated, and good accuracy is attained in just four iterations, Caleulated Data Set 7 (results shown) _Roots__True Roots Pal) = 0.25x? — 1.2527 2.021778 2.000000 re 1.978456 2.000000 ITER = 6 1.000000 1.000000 Again, as in data set 5, there are multiple real roots. If a multiple root were known to exist, then (3.8) could be applied to find better root values from the solution of the quadratic equation x? — 3.689349 x 10°? + 3.402824 x 10°° ‘The predicted roots are: 8446745 x 101° 8446745 x 10! which to seven significant figures yields a, =a; = 2.000000, Calculated Data Set 8 Roats Dalx) = x4 ~ 35x? 
+ 146x7 — 100x +1 30.288681 ier = 4 3.858056 843107 10130 This polynomial is the characteristic function found by the method of Danilevski (see Chapter 4) for the sym- metric matrix 0 9 7 S gw 8 6 a 8 6 7 S, Calculation of the roots of p,(x) (the eigenvalues of the matrix) using the Power, Rutishauser, and Jacobi methods (see Examples 4.2, 4.3, and 4.4, respectively) yielded the following results: Power Method Rutishauser Method” Jacobi Method 30288685 30.288685 30.288685 3.858057 3.838057 3.858057 0.843107 0.843107 0.843107 0.010150 0.010150 0.010150 Calculated Data Set 9(results shown) ___Roots__ True Roots Pal) =X ~ 16x" + 7822 11,999998 12,000000 ~ 412x + 624 5.324727 (NR) 1.04 5i Wren = 4 4.82880 (NR) 1.0 ~ 51 2.000000 2.000000 The program indicated correctly that 5,32 and 4.88 were probably not roots (NR). The two real roots were found, 154 Solution of Equations ‘but not the pair of complex conjugate roots. The starting polynomial in this case is the characteristic function of, the matrix 4-5 0 3 0 4-3 -5 5-3 4 of 30 5 4. Calculated Roots Truc Roots = 1.879385 = 1.879385 1.532089 1.532089 0.347296 0.347296 The computed roots agree exactly with the roots calcu- ated for p,(x) by the method of successive substitutions (see Table 3.5). Calculated Data Set [1 Roots True Roots. Pisa xt ~ 240 14.999993 45,000000 +150x4-200x 5.221369 (NR) 5.000000 375 4.788013 (NR) 5.000000 ITER = 4 = 1.000000, —1,000000 This is the characteristic function of the matrix 6441 4614 416 4) 14 4 6. Calculated Data Set 12 Roots True Roots pas) = x*~ x? 40.7534 1,000000(NR) 9.5, V3, +0.25x ~ 0.25 - ren = 6 1.000000 (NR) 0.5 4 0.505445, 0.500000 0.494614 0.500000 In this case there are two pairs of roots, one complex conjugate, the other reai, As before, the routine fails to find accurately any root from a root pair. Calculated Data Set 13 Roots True Roots Als) = x* = 2° + 1.25x*1,500000 1.500000 —0.25x — (en 8 1.75 1.000000 (NR 9 5, V3; oe This polynomial is used in the numerical example of Sec- tion 3.2. The two real and distinct roots were found, The pair of complex conjugate roots was not found, as before. Equation 3.8 could be used to find them, however. Cafeulated Roots 3.931828 (NR) 3.804101 (NR) 2.987610 2.344709 1.652354 0.999980 0.467911 0.120615 Data Set 14 (results shown) x8 = 16x" + 10526 — 3645 + 15x" ~ 792x* + 462x2 = 120x +9 This polynomial is the characteristic function for the eighth-order striped symmetric matrix (3.1.10) for Case A. tis also used as an example data set for catculation of the roots (eigenvalues) by the Power, Rutishauser, and Jacobi methods (see Examples 4.2,4.3,4.4). Power Method Rutishauser Method Jacobi Method 3.879385 3.879385 3.879385 3.532089 3.532089 3.532089 3.000000 3,000000 3.000000 2.347296 2.347296 2347296 1.652704 1.652704 1.652704 1.000000 1,000000 1,000000 0.467911 0.457911 0.467911 — 0.120615 0.120615 ‘The vibrational frequencies predicted by Graeffe's method for the mechanical system of Fig. 3.1.2 with the parameter values for Case A are: 1,982884 sec“! 1.871924 1.728470 1.531244 1.285439 0.999990 0.684040 0.347297. ‘The two or shree highest frequencies computed using Graeffe's method may not be of adequate accuracy. However, the method has produced good approximations to these roots. These may be used as first estimates in cone of the iterative root-finding procedures to be dis- cussed later in this chapter. Data Set 15 (results shown) _ Roots ay = x ~ 20" + 157.0625x¢ = 623.4374x° + 1341.4450x* — 1557.6560x* + NDT? 
(~227.9003x + 15.6683 wena 4 ‘This is the characteristic function for matrix (3.1.10) for Case B. Again, the largest root estimates may not be of Example 3.1 Graeffe's Root Squaring Method—-Mechanical Vibration Frequencies 155 Calculated adequate accuracy, but can serve as very good starting ‘True Roots —_-valuts for some other root-finding method, Based on the Graeffe's method results, the natural frequencies for the Sroseoi (NR) 70g399 mechanical sytem of Fig. 3.1. with the parameter 3.365894 3.366539 Yalues of Case B are 2342498 23342620 -1 1.546105 1.544235 Preeneiiae agtzasz 0912889 facet 0.425053 0.425054 areas 0.108776 0.506776 Gee 0.955433 0.651961 0.326766. 156 ‘Sotution of Equations 34 Iterative Factorization of Polynomials ‘The techniques discussed thus far are usually useful in finding approximate values of one or more roots of f(x). Should a root be located which is not multiple-valued, a procedure such as Newton's method (p.J71) can be em ployed to determine the root more accurately. In such event, a polynomial of reduced degree can be formed for further investigation by ordinary synthetic division (p.7). Proceeding in this manner, if no multiple roots are present, the solution can be completed. However, this procedure can be expected to make approximations to successive Toots more and more inaccurate, and the presence of multiple roots causes difficulty. In this section (see also Luther[6]), a method is presented for improving the accu- racy for single roots, and for maintaining a balance of accuracy among the Various roots as solution progresses. Bairstow's method (4] is an algorithm for finding quadratic factors iteratively for the polynomial of (3.2). The iterative technique which follows produces factors of arbitrary degree m for this same polynomial. Program- ‘ming in terms of m as a parameter is simple: when m is tunity, the method is Newton’s method; for m equal to Wo, the method is closely related to Bairstow’s method, but is somewhat simpler. Consider frst the problem, useful in its own right, of dividing foy= Bast ay by as)= Eres peel G18) it being understood that m —b: ei +P 0.25 — bps bap, = by eps CaP, 0.25 —brps—brp, ca = ~ba esx ery Pi + e\p3 = —2bs — es, epi + csp = —2ba ~ 0. Table 3. illustrates the resulting sequences. Thus, in terms ‘of quadratic factors, the polynomial is fla) = G2 = + Nee 025). For the same polynomial, the sequence using a stesting factor (x? + 0x +0) ig shown in Table 3.4, Thus, the same ‘quadratic factors, So) = G -025)x4# — FD, hhave been found, but this time in ceverse order. 
Table 3.4 Merative Factorization—Starting Factor x* kel kas a 0.00000 0.07692 0.00000 Ps ‘0.00000 0.30769 9.25000 by —1,90000 0.92308 =1,00000 bs 0.75000 0.98669 1.00000 bs 0.25000 0.04188 2.90000 be ~0.23000 0.05682 0.00000 rn 1.00000 0.84615 1.00000 eo 0.75000 ~1.22929 125000 fs 0.25000 0.12392 0.25000 % 0.25000 0.42553 031250 Bi 0.07692 0.00613 0.00000, Py 0.30769, 0.25434 0.25000 2, = = 1.00000 1.09000 EXAMPLE 32 ITERATIVE FACTORIZATION OF POLYNOMIALS Problema Statement Write a program that employs the method of iterative factorization outlined in Section 3.4 to find an mth- degree factor, gx) = Dan's (3.2.1) of the nth-degree polynomial 440) = Yan 22) Method of Solution ‘The program that follows reads data values for n, m, ta, tr Bay «0 Bes Pay» Ps WREEE Py 5 Pm BEE CHE initial estimates of the coefficients of the mth-degree factor 9x}; max isthe maximum number of factorization cycles permitted, eisthe maximumallowablepivot size inthe solu- tion of the simultaneous equations (3.20)by Gauss-Jordan reduction (see Example 5.2), and, isdescribed below. The program frst normalizes the coefficients doy dy by diving each by the initia ay, that is, AQ, = Boon aired 323) aol, fm Pht emt Syme Pi + SumP2 + Cyn Pht GqnaPa te and assigns the value one to coefficient 7p. Dropping the iteration counter as a superscript, and solving (3.18) and (3.19) for the by and cy #=0, yo. Mt leads to Xipe eres = Xp, Xmas © QD mee + bam haat Baa Yep» tem) Gry ba a- ¥ Pb» 12m 158 ot yams Ps +6, +i> nom—jri< =by=-1 ~Reten ism! Gas i Vpn iam. fi Note that in (3.2.4) and (3.2.5), is simply a summation index, and is not the iteration counter of 3.18) and (3.19). One measure of convergence is the magnitude of the coefficients by mss Eremety-on by which will vanish when a perfect factor g(x) has been found. After each iteration, the convergence test, 2X, lblsen (3.2.6) is made; if the testis passed, computation is discontinued and the »,, i~0, }, ... are takert to be the coefficients of (x). IF convergence test (3.2.6) is failed, then the m simul- taneous linear equations of (3.20) are solved for the new Pini = 1,2, 2.5m, here denoted pj: 2b, man + on-me) Am+2Pm = —(2by sz + fxmme2) 3.2.7) Cy-mPm = — (2b, = C4) Any cy for which k <0 should be assigned the value zero, The coefficients of the linear equations, with the right-hand side vector appended as an additional column, are stored in the m x m+ 1 matrix X, such that: sam, (3.2.8) The finetion SIMUL is called to solve the equations for the pj, which are then overstored in the p vector, Pim ph f= 1,2, 29) ‘The iterative factorization process is continued until the convergence testis passed, or until the maximum numbes of factorization cycles has been completed. sam. Example 3.2 Iterative Factorization of Polynomials 159 T $ xCIyu) = CCNMUT+LD MMe N= Me ta 17 XCL,MPL) = =(2,08B(NMLL) + CCNMILDD © © sass SOLVE LINEAR EQUATIONS FOR NEW P'S = SHIFT SUBSCRIPTS ..... CALL’ sIMULC M, x, P, EPS, 1, 21) Dols t= 1, Ww MPIMI = MP1 =") 18 POMPIMI¢1) = PCMPAMID i PCD ed c WRITE (6, 205) 60 TO 1 c ©... FORMATS FOR INPUT AND OUTPUT STATEMENTS «2... 100 FORMAT ( 3(10x, 12, 8X)/ 10x, E6.1, 1hX, E761 / (20x, 510.3) ) 101 FORNAT (20x, 5#10,5) 200 FORMAT ( 10MIN = 18 / 108M 1s 10H MAX =, 18 1 /10H CHEK» , FINIS/ 10H Eps = | “elb.1 / 20H0 aC 2. AC 12, H)/) CAN, $25.6) ) 201 FORMAT ( z0H0 Ba). 2PC, 12, 1s CH, SFI5.6) 202 FORMAT ( 33H BAD DATA ENCOUNTERED AND IGNORED.) 203 FORMAT ( 3SHOCONVERGENCE CRITERION HAS BEEN MET/ 1OHOITER = , 118s 2aH0 PCD. 
.PC, 12, IMD/1M / (1M, SFL5.6) 2 200, FORMAT ( 20HO Be1}...8C 1%, aKD/ CAN, 515.6) 9 205 FORMAT ( 1SHONO CONVERGENCE’) ¢ eno Data N = ” = 2 imax = 30 CHEK = 1,08-2 EPS = 108-20 ACD). ACS) = 1,000" -10.000 35,000 50,000 24,000 POD) LPG) = 10/000 10,000 N "4 M = 2 ITWAX 30 CHEK = 1,06-8 © EPS = 108-20 ACL). 66S) = 1,000" =10,000 35,000 50.000 24.000 C2022 PC3) = 10:000 10,000 N * " = 2 Imax = 30 ACD) ACS) 1,000 "10.000 35,000 50.000 24,000, 0:000 0.000 " = 3 IWAX = 30 EPS = _1,08-20 1,000." =10.000 35.000 50,000 24.000 10:00 19:000 10,000 ” = 3 imac "= 30 fPs = 1,0€-20 1,000 "10.000 35.000 -50,000 24.000 0:000 0.000 “0000 ” st Vmax = 30 EPS = 10-20 1.000" =10.000 35.000 50,000 24.000 CHEK = 1,08-0 EPS = 10-20 107000 N 74 " sa imax +30 CHEK = Leoe-u © EPS = 106-20 ACD). AGS) © 1,000" =10.000 35,000 -50,000 24.000, PC2) . 0000 N = 4 4 = 4 Imax = 30 CHEK = 1.084 EPS = 1 0F~20 ACD). AGS) = 1,000 "10.000 35,000 50.000 24,000 PC2d.LL PCS) = 10,000 10,000 10,000 10,000 Example 3.2 Uterative Factorization of Polynomials 163 Program Listing (Continued) 4 ‘ imax = 30 EPS = 108-20 2,000" 20.000 70,000 -100.000 48,000 10:00 10.000 10,000, 10.000 4” ° Hwar "= 30 EPS = _1,08-20 1,000" 10,000 35,000 -50,000 24,000 Noe Moot imax = 30 CHER = a,oe-4 EPS = 1.08-20 AQ) ...AG)"* 000°" =24,000 150.000 200.000 275.000 Pa) : 000 Noe a) imax + 30 CHEK = 1.0E-S EPS = 108-20 AG). Mt)" +000" 1.000 0,750 0.250 ~0,250 Pez 2 1PG) toca “2000 Ry we TAK = 30 CHEK 5 toes EPS a. oe-z0 AGD...ac3)"* +000"""'=2.000 0.750 0,250 0,250 PODDLIIPG) = ooo “0000 Computer Output Results for the Ist Data Set Nos 4 hoe 2 Tmax = 30 cHee 0.01000 es 010-19 AC). AU 5) 1,000008°"" =10,000000 5.080000 =50.000000 24. 000000 Pa). .PC 3) 1.000008" “10.0000 —_10,000000 CONVERGENCE CRITERION HAS BEEN MET Wee = n POD) PC 3) 1.000000 4.999683, 3.995807 BC1)...8 5) 1,000008 5.000337 6.008194 =0,000676 0.005026 164 Solution of Equations Computer Output (Continued) Results for the 2nd Data Set " : 4 4 : 2 THax = 30 CHEK = 0.00010 EPS os 0110-19 ACL). AC 5) 1,009000°" =10,000000 35,000000 PCL). .PC 3) 1.000008 10,000000 10,000000 CONVERGENCE CRITERION HAS BEEN MET tTeR = uy PCL). PC 3D 1,000000 4.999999 3.999991 BC)...8C 5) 1.000000 '=5 000002 6.000009 Results for the 9h Data Set 4 4 30 0, 00010 0710-19 ACL). AC 5) 1,000000°"” =10.000000 35.000000 PCa) .. PC 5) 1.000000 10.000000 10,000000 BAD DATA'ENCOUNTERED AND IGNORED Results for the 11th Data Set 4" : 4 M 1 TTMAK = 30 CHER = 0.00010 es = 0:10-19 AC). AC 5) 1,000000""" -24,000000 150, 000000 PCL)... PC 2) 1.000008 0.0 CONVERGENCE CRITERION HAS BEEN MET ime 5 PCD PC 2) 1.000000 0.812840 BCL). BC 5) 1.000000" =24,812840 170, 168865 50,000000 0000002 -50,000000 10,000000 -200,000000 -338.320032 2% ,000000 0.000006 2%,000000 10,000000 -275,000000 0.000000 Example 3.2 erative Factorization of Polynomials Computer Output (Continued) Results for the 13th Data Set " ‘ 4 TTMAK CHEK. EPS 2 30 ‘0.00001 210-19 ACD). AC 5) 1, 000000 1.000000 0.750000 0.250000 pa. 1,000000 0.0 0.0 CONVERGENCE CRITERION HAS BEEN MET ITER + 5 PCL). PC 3D 1.000000 © =0.000000 © =0.250000 BC1)...BC 5) 1, 000000 '=1,000000 1.000000 0.000000 0.250000 0.000000 166 ‘Solution of Equations Discussion of Results All calculations have been made using double-preci- sion arithmetic. Results for the Ist, 2nd, 9th, 11th, and 13th data sets are shown in the computer output. The first ten data sets are for the starting polynomial of fourth degree, $x) = which in factored form is: x — Ax — 3x — (x= 1) = (0) — 9x? + 26x — 24x — 1) (<3 — 8x? 
+ 19x — 12x 2) (x3 — Tx? + Mx — 8Yx—3) (x3 — 6x? + Lx - Ox - 4) = (0? — Tx + 12M? — 3x +2) = (x? — 6x + 8x? — 4x + 3) (x? — Sx + 4x? ~ Sx + 6). ‘A summary of the results for the first ten data sets fol- lows, 10x? + 35x? ~ 50x + 24, Data Set 1 (results shown) Starting factor: x? + 10x +10 Iterations required: 11 Factors found: x? — 4.999663x + 3.995807 x? — $.000337x +. 6,004194 Data Set 2 (results shown) Starting factor: x? + 10x +10 Iterations required: 12 Factors found: x? — 4.999999x + 3.999991 x? — 5.000001x + 6.000009 Data Set 3 Starting factor: x? Iterations required: 11 Factors found: 2 — 5,000000x + 4,000000 x? — 5.000000x + 6.000000 Data Set 4 Starting factor: x? + 10x? + 10x +10 erations required: 12 Factors found: + — 9,000000x? + 25.999999x. — 23,999998, x — 1.000000 Data Set § Starting factor: x Iterations required: 9 Factors found: x? ~ 7,000000x? + 14,000000x Data Set 6 Starting factor: x + 10 Iterations required: 13 Factors found: x — 1.000000 x? — 9,000000x? + 26,000000x ~~ 24.000000 Data Set 7 Starting factor: x Iterations required: 7 Factors found: x ~ 1.000000 x? — 9,000000x? + 26,000000x — 24,000000 Data Set 8: Bad data encountered (m = 1). Data Set 9 (results shown): Bad data encountered (m=n). Data Set 10: Bad data encountered (m = 0). Note that although the starting factors for the first and second data sets are identical, an additional iteration is required for the second set because of the more stringent convergence requirement. The results for the fourth and fifth data sets show that different starting factors may lead to different combinations of fina! factors. The results for data sets 8, %, and 10 indicate that the program is checking properly for inconsistent values of m and 2, while the printed results for the ninth data set show that the coefficients a, are being normalized as specified in 23). ‘The starting polynomial for the eleventh data set is a(x) = x* — 24x? + 150%? — 200x — 275. The results, shown in the computer output, are: Starting factor: x Iterations required: 6 Factors found: x +0.812840 2? = 2481284007 + 170.168865x — 338.320032 To six figure accuracy, one root of this polynomial is 0.812840. The polynomial for the twelfth and thirteenth data sets is $alx) = x — 2 + 0.757 + 0.25e ~ 0.25 0,250? — x +1). This is the polynomial used in the examples of Tables 3.3 and 3.4. Results for the two cases are: Data Set 12 Starting factor; x? 2x +2 Iterations required: 15 Factors found: x? — 0,000000x ~ 0.250000 2 — 1,000000x + 1.000000 Example 3.2 Iterative Factorization of Polynomials 107 Data Set 13 (results shown) The results for these two data sets clearly show the ; Fi importance of the staring factor on the number of an eee z iterations required to converge to the same factors with Factors found: equivalent accuracy. 168 Solution of Equations 3S Method of Successive Substitutions ‘The following discussion is not confined to polynomial equations. We rewrite (3.1) in the form x= F(x), @.21) such that if f() = 0, then a = F(a) If an initial approxi- mation x, to a root a is provided, a sequence x3, x3, may be defined by the recursion relation ain = PC), (3.22) with the hope that the sequence will converge to a. The successive iterations are interpreted graphically in Fig. 3.1a, previously mentioned. If we set xj, =cos(x;), then F(x) = ~sin x. The reader should establish graphically that [F'(@)| < 1 for the unique solution a. Note that the equation x = F(x) can be formed from the original equation, f(x) = 0, in an unlimited number of ways. 
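As a concrete illustration of (3.22), the rearrangement x = cos x mentioned above can be iterated directly. The following minimal sketch is not from the text; the starting value x_1 = 1 and the tolerance are assumptions made only for illustration.

    ! Minimal sketch (not from the text) of the method of successive
    ! substitutions applied to cos(x) - x = 0 via the rearrangement x = F(x) = cos(x).
    program fixed_point_sketch
      implicit none
      real(8) :: x, xnew
      integer :: i
      x = 1.0d0                              ! assumed starting value x_1
      do i = 1, 100
         xnew = cos(x)                       ! x_(i+1) = F(x_i), equation (3.22)
         if (abs(xnew - x) < 1.0d-10) exit   ! stop when successive iterates agree
         x = xnew
      end do
      print '(a,f12.8,a,i0,a)', ' root = ', xnew, '  after ', i, ' iterations'
    end program fixed_point_sketch

The iterates converge, but only linearly: each step reduces the error by roughly the factor |F′(α)| = sin α ≈ 0.67, so several dozen iterations are needed for ten-figure agreement.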
While the choice of F will depend on the particular situation, one suggestion is offered. If α = F(α), then also α = (1 − k)α + kF(α); therefore, instead of (3.22), we can consider

    x_{i+1} = (1 − k)x_i + kF(x_i)    (3.25)

to see whether a suitable choice of k will affect convergence favorably.

[Figure 3.1  Graphical interpretation of the method of successive substitutions.]

Convergence will certainly occur if, for some constant p where 0 < p < 1, the inequality

    |F(x) − F(α)| ≤ p|x − α|    (3.23)

holds true whenever |x − α| ≤ |x_1 − α|. For, if (3.23) holds, we find that

    |x_2 − α| = |F(x_1) − α| = |F(x_1) − F(α)| ≤ p|x_1 − α|,

since α = F(α). Proceeding,

    |x_3 − α| = |F(x_2) − F(α)| ≤ p|x_2 − α| ≤ p²|x_1 − α|.

Continuing in this manner, we conclude that |x_{j+1} − α| ≤ p^j |x_1 − α|, and thus that lim x_j = α. Condition (3.23) is clearly satisfied if F possesses a derivative F′ such that |F′(x)| ≤ p <

1 in the region of interest. Generally, witen x, is close to a, the approximate relation af = (F(x2) — Fall < plxz — al < p|x, — at. Fax; — 9) (3.24) holds true; F(a) is called the asymptotic convergence ‘factor. ‘An example is furnished by the equation cos x — x =0 1 Further discussion of this and other iterative methods can be found in Traub [14]. Example. Consider the equation {O)= 2 3x +1=0, oh Figure 3.2 Roots of fx) +1 36 Ward's Method ‘The function f(x) is illustrated in Fig. 3.2; by inspection, 2>a,> 1,1 >a >0,and ~1 >a, > —2. Consider first the version dui = 509 +0, 6.260) Corresponding to (3.260), F(x)=x%, and [F'(x)| <1 if [x] <1. Consequently, (3.26a), if used to define the iterative process, can be expected to yield as, but not to yield ay or as If, however, in (3.25) we use k= 4, so that 31, xere5 ZPD, (6.266) then F(x) becomes (3/2)x— (x? + 1/6 and F’(x)= 3/2 — 27/2, In particular, for 1<|x|0, choose ¢ =x and r < rz. Then lw + ra cos $+ Mk cos yi =u — ra + rk cosy Ku-ratrk 0, choose ¢ = =n/2; if v <0, choose $ = 1/2. Then for r 1.0166, os 09 0.1086 0.0 0.1086 . 06 09 0.0576 0.333 0.3906 04 0s 0.0876 0.333, 0.3906 os 08 0.1804 = >0.1804 05s 09 0.0958312 0.16785 0.263681, 0.45 09 0.0958312 0.16785 0.263681 0s 0.83 0.0473688 0.0 0.047368 035 0.85 0.088825 - 0.088825 0.45 08s 0.058825 - 0.058825 0.305 0.85 0.0474834 = >0.0474834 0.495 08s 0.0474834 — >0,0474834 os 0.855 0.0328462 0.0 0.0328462 0.505 0.855 0.0329621 - >-0.0329621 0.495 0.855, 0.0329621 — >-0.0329621 0s 0.86 0.0180918 0.0 0.0180918 3.7 Newton's Method m 3.7 Newton's Method Newton's method for finding the zeros of f(x) is the most widely known, and it is not limited to polynomial functions. It will be presented here for the complex case. Consider, then, z=x+ iy and f(c) = u(x,y) + io(x,y). For an iterative process, the complex expression _ fed Si is equivalent to the two expressions ea) ser emt (TE) Et Uy), fun. = 24 (3.29) (3.30) ne weuy Ya Here, u, and u, mean u/0x and du/dy, respectively. The expressions in parentheses are to be evaluated for x = x, = yu The manipulation establishing the equivalence of (3.29) and (3.30) requires only knowledge of the Cauchy- Riemann equations, stating that Jas ‘The expressions (3.30) ean alternately be found by considering the simultaneous solution of u(y) =0, (x,y) =0, using the Newton-Raphson technique (p.319). This may be consulted for an indication of proof of convergence when 2 is near a zero of f(z). As previously remarked, Newton's method, when applied to polynomials, can be considered at the case m= of the iterative factorization technique. Yet another approach is to view it in terms of the method of successive substitutions. In (3.21) let Fo) ‘Then the asymptotic convergence factor for a zero, a, of f(x) becomes, for x real, SS") @r This means that convergence is guaranteed (for f"(a) # 0) if the initial value x; is near enough to a. If f(3) is the polynomial (3.2), observe that f(r) may be written in the form (Cr + ar + ar ‘This is really synthetic division. It may be phrased iteratively as f(x) =(x—r)Ytzd bat?! +b, with diss = bar + A144 [See equation (3.15), using m= I and py=rh Observe that f'(r)= Yi=$ bet ?"4; therefore, ‘F(0) and (0) can be calculated simultaneously. 
F@ = Jr ay It can be shown that if « is a simple zero of f(x) and if lim, % = % then S'@) 2F@Q" This means that once x; is near a, the error in the next step is proportional to the square of the error in x,; the resulting quadratic convergence is then rapid in com- parison with the linear convergence of several other methods. To understand (3.31), recall, by Taylor's theorem, that S(%) -f@) = (x — a f'@) +4 - 2") +4 -9F"O, where lies between x and «. Then the Newton’s method algorithm, modified by subtracting a from both sides and noting that f(a) = 0, him 21 ve G= oF 31) _ Seay) te) Moa * may be written as 1 ; arena ez [-osr@) 1 +H wFO +104 -04°C| Oe (FON -SO Fa) -V@- Laas]. Dividing by (x, — a)? and noting that lim x, =a, (3.31) follows: we 1 sp 1. S@ Sig [r@-jr@]-F2. Example. Starting at the point (1,i), use Newton’s method to find a zero of the following function, in which = x-+ iy: Sle) = x* ~ 22 + 1.252? — 0.282 ~0.75 = (2-1 Se + 0.52? -2 +1) = (—LNe +0.52—0.5(1 + HVE — 0.51 — 13). For this function, w(x,y) = x* — 6xty? + y* — Dx? + Gxy? 4 1.257 12593 0.28 ~0.75, hay) = Axty — day? — Gxty + 29? + 2.Sxy — 0.25, ‘uy(x,y) = 4x? — 12xy? — 6x? + by? + 2.5x — 0.25, u,lx,y) = —12x4y + 4y? + Ixy — 25S. Table 3.7 lists x’ and y' as the iterative successors of x and y. im Table 3.7 Illustration of Newton's Method ‘Solution of Equations ket k=2 kas =5 k=6 x 10 0.762832 0459373 0.524586 0.502265 0.500005 » 10 0.757522 0.731024 0.866673 0.866018 « | -10 0.527593 0.336523 0.001940 0.000022 v | -175 0.501543, 0.078138. 0.006880 0.000014 wu | 5.75 = 186867 0.280873 0.249786 0.021549 0.000044 w | 15 1.48959 1.91366 3.33214 3.03719 3.03102 x | 0.762832 0.459373 0.524586 0.502265 0.500005 ‘0.500000 y | 078752 0.731028 0.897316 0.866673 0.866018 0.866025 Therefore, a root is 0.5 + 0,866025i. EXAMPLE 33 SOLUTION OF AN EQUATION OF STATE USING NEWTON'S METHOD Introduction Many equations of state have been developed to de- sctibe the P-V-T (pressure, Volume, temperature) relation- ships of gases. One of the better known equations of state is the Beattie-Bridgeman equation, AT B vw , wie G3.) where P is the pressure, V is the molar volurve, T'is the temperature, f,7, and 5 are temperature-dependent parameters characteristic of the gas, and 2 is the univer- sal gas constant in compatible units. The second, third, and fourth terms on the right-hand side of (3.3.1) may bbe viewed as corrections of the ideal gas law Pa (3.3.2) ascribable to “non-ideal” behavior. ‘The parameters fy, and 6 are defined by Re ; B= RIB. Ay ~ 35, 633) RT Bob + Aya "282, 3.3.4) T (3.3.5) Ag, By, a, b, and ¢ are widely tabulated constants, deter- mined empirically from experimental data, and are different for each gas. Equation (3.3.1) is explicit in pressure P but implicit in temperature T and volume V. Hence some iterative root finding procedure is required to find the volume which corresponds to given values of pressure and temperature. Problem Statement Write a program that uses Newton's method to solve (3.3.1) for the molar volume of any gas, given the pres- sure, P, temperature, T, and the constants R, Ao, Bos a by and c. After computing ¥, calculate the compressibility factor z, where PV RF The compressibility factor is a useful index of the depar- ture of real gas behavior from that predicted by the ideal as law (z= 1 for an idea! gas). 
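The hand calculation summarized in Table 3.7 is easily repeated by machine. The minimal sketch below is not the text's program: it carries out the Newton step (3.29) directly in complex arithmetic using Fortran's complex type, rather than iterating on u(x, y) and v(x, y) separately as in (3.30); the starting value 1 + i is taken from the worked example, and the tolerance and iteration limit are assumptions.

    ! Minimal sketch (not the text's program): Newton's method in complex
    ! arithmetic for f(z) = z**4 - 2z**3 + 1.25z**2 - 0.25z - 0.75, starting at z = 1 + i.
    program complex_newton_sketch
      implicit none
      integer, parameter :: dp = kind(1.0d0)
      complex(dp) :: z, fz, dfz
      integer :: k
      z = (1.0_dp, 1.0_dp)                    ! starting value from the worked example
      do k = 1, 25
         fz  = z**4 - 2.0_dp*z**3 + 1.25_dp*z**2 - 0.25_dp*z - 0.75_dp
         dfz = 4.0_dp*z**3 - 6.0_dp*z**2 + 2.5_dp*z - 0.25_dp
         if (abs(fz) < 1.0e-12_dp) exit
         z = z - fz/dfz                       ! Newton step, equation (3.29)
      end do
      print '(a,2f12.6)', ' root (real and imaginary parts): ', real(z), aimag(z)
    end program complex_newton_sketch

With this starting point the iterates reproduce the behavior of Table 3.7, settling on the root 0.5 + 0.866025i after a handful of steps, the error being roughly squared at each iteration once the estimate is close.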
As test cases, compute the compressibility factors for ‘gascous methane natural gas) at temperatures of 0°C and 200°C for the following pressures (in atmospheres): 1, 2, 5, 10, 20, 40, 60, 80, 100, 120, 140, 160, 180, 200. ‘Compare the calculated results with experimental values. 36) Method of Solution Rewriting (3.3.1) in the form aT B78 4 Wap tpt pt pn Pao and diferentating with respect to Vat constant T and P yields G37) 46 p38) The Newton’s method algorithm from (3.29) is then Id Yar =nr ae, 3. fe Y TW (3.3.9) or Vege VERTIS DY? #924 OV = PYF oy RIV? + 2BV? + 3yV + 45 where for simplicity the subscripts & have been omitted from the volume terms on the right-hand side. Using units of atmospheres (1 atm = 14.7 Iby/in?) for P, liters/g mole (I g mole of methane(CH,)is approximately 16 grams) for V, and °K (°K = °C + 273.15) for 7, the ‘gas constant R is equal to 0,0820S liter atm/*K g mole. For this set of selected units, the appropriate constants for methane are (15) Ao 2.2769 By 0.05587 a Q01855 b -0.01587 © 12,83 x 104, ‘The ideal gas law should give a reasonable first estimate for the molar volume ¥: RT y 3a A criterion for terminating the iterative procedure of (3.3.10) is (33.12) Here ¢ is a small positive number. For ¢= 10°", the final value of V+, should be accurate to approximately NV significant figures. This does not imply that the cal- culated volume will agree with experimental measurement, to NV significant figures, but rather that the equation has ‘been solved this accurately given the set of constants 4, Ags b, Py, Boy €s 8, and y. Newton’s method may fail to converge to a root, if given a bad starting value. The number of iterations should bbe limited to a small integer, itmax. 3 174 Solution of Equations Flow Diagram Re P< RTBy ~ hy T3 “a, Aay by Bos, «, itm, T, P FORTRAN Implementation List of Principal Variables Program Symbol Definition A,AZERO, 8, ZERO, C BETA, DELTA, GAMMA, DELTAY EPs. rer. iTMax Neqgia% Material-dependent constants, a, Ag, b, By, ¢. Temperature-dependent parameters, p, 5, and y. ReBo +“ RTBob + Aga — “oy y= = RTBob + a “No Convergence” Incremental change in molar volume for the Ath iteration, V,,, — M4, titer/g mole. Tolerance for convergence criterion, e. Iteration counter, k. ‘Maximum number of iterations pern Pressure, P, atm, Universal gas constant, R, liter atm/*K g mole. Temperature, 7, °K. Temperature, T, °C. Molar volume, %, titer/e mole. Compressibility factor, z tted, imax. Example 33 Solution of an Equation of State Using Newton's Method 1s Program Listing APPLIED NUMERICAL METHODS, EXAMPLE 3.5 SOLUTION OF AN EQUATION OF STATE USING NEWTON'S METHOD GIVEN A TEMPERATURE T AND PRESSURE P, THIS PROGRAM USES NEWTON'S METHOD TO COMPUTE THE MOLAR’ VOLUME V OF A GAS WHOSE PRESSURE-VOLUME-TEMPERATURE BEHAVIOR 1S DESCRIBED BY THE BEATTIE-BRIDGEMAN EQUATION OF STATE, R 1S THE UNIVERSAL GAS CONSTANT. A, AZERO, 8, BZERO AND C’ARE EMPIRICAL CONSTANTS, DIFFERENT FOR EACH GAS. BETA, GAMMA, AND DELTA ARE TENPERATURE- DEPENDENT PARAMETERS DESCRIBES IN THE PROBLEM STATEMENT. ITER IS THE ITERATION COUNTER AND DELTAV THE CHANGE IN ¥ PRODUCED BY ONE APPLICATION OF NEWTON'S ALGORITHM. FOR CONVERGENCE, THE MAGNITUDE OF DELTAV/V 1S REQUIRED TO BE SMALLER THAN SOME SMALL POSITIVE NUMBER EPS. AT MOST {TMAX ITERATIONS ARE ALLOWED, IF THE CONVERGENCE TEST 15 PASSED, THE COMPRESSIBILITY FACTOR Z IS ALSO COMPUTED. THE IDEAL GAS LAW 1S USED TO GET A FIRST ESTIMATE OF V. 
IT IS ASSUMED THAT TC, THE TEMPERATURE READ IN AS DATA, HAS UNITS OF DEGREES CENTIGRADE._T HAS UNITS OF DEGREES KELVIN, UNITS FOR ALL OTHER PARAMETERS MUST BE DIMENSIONALLY CONSISTENT. 1 READ (5,100) A,AZERO,B,BZERO, C, A, EPS ,ITHAX WRITE (6,200) A,AZERO,8, BZERO,C,R, EPS, 1 TMAX c 2 READ (5,101) Tc,P ¢ © sees COMPUTE TEMPERATURE-DEPENDENT PARAMETERS FOR GAS ...++ Tete + 273.15 BETA = ReTeBiERO = AZERO - ReC/(T#T) GAMMA = -ReT#BZEROSE + AZERO*A - ReCeBZERO/(T#T) DELTA = ReBZERO*B*C/(T#T) c € s:USE IDEAL GAS LAW FOR FIRST VOLUME ESTIMATE... Vite c c BEGIN NEWTON METHOD ITERATION .. OO'i" “Ter = 1, 1THax. DELTAV = (((((-PeVeReT) eVeBETA) @V+GAMMA) #V+DELTA) #V)/ 1 (CURsTsVe2 ,sBETA)*V#5. 000 pO 0100 = ooo Fe Solo Fe Fue kt > tle = 000 PE = oo pe > a0 BF a00 = 200100 Pe + 20000 BF + 200.00 ps = bool0 pe 2 i001 BF = 20000 Be zooroo BF = do0t00 pF = holo PS + 20000 BS = 20000 pS = 200100 pe = 200100» «=» 180.00 s po0l00 + 200700 Computer Output as 0.01855 zero = 2127680 oe 001587 BzeRo = 0105887 © > 32430000000 an 0208208, fs > 30-06 VTMax > 20 z VTeR 0.997678 3 01995355, 3 01398342 3 0/9767 3 0953826 4 07906962 4 07861555 4 02328736, 5 02740686 5 : 02749800 5 140000 02728305 5 160.000 02716385, é 180:000 01733085 & 200:000 07736636 7 1000, 02399898 2 22000, 07999798 2 5.000, 07339517, 2 10:000 0398096 i 20.000 Ocsgeuus 5 0,000 02397911 3 60.000 07998387 5 200000 80.000 07999865 2 2007000 1007000 11002338 5 2007000 120000 1005715, 3 2007000 10000 iotoon 5 200,000 160.000 17015156 5 200000 1807000 Liozioss 5 200,000 200000 1027773 ‘ Example 33 Solution of an Equation of Site Using Newton's Method im Discussion of Results ‘The calculated results shown in the computer output are plotted in Fig. 3.3.1 along with experimental values reported by Brown, Katz, Oberfell, and Alden [16] (the solid tines), Inthe pressure range 0-200 atm, agreement is quite good at 200°C (maximum deviation is epproxima- tely 0.3 pescent). Predicted values for O°C are quite good at pressures below 100 atm, but at higher pressures are in considerable ercar; the Beattie-Bridgeman equation ‘would have to be used with some caution at higher pres- sures, 103 5 ‘ camorestity fate (ote seal charg 8 8 070, T= 200°C Toc G 0 1 100 Pressure (atm) 331 sl for thane Figse 33.1 Conpresiliy factor for methane (CH). * Values predicted by Beatie-Bridgeman equation of Sate withthe given set of constants 178 Solutow of Equations 3:8 Regula Falsi and Related Methods For the real case of Newton's method, the expression [see (3.29): Le) Sev has the interpretation illustrated in Fig. 3.3. We draw a tangent to the curve y = f(x) at the point (f(x). This Mees =e We) (ex.s) Figuce 33 Newton's method for the real root x tangent meets the x-axis at the point (x,,,,0). If, then, the curve crosses the x-axis at a point (2,0) sufficiently near (x,f(%,)), and it is concave up or down in a region including these two points, it may easily be seen that the number x;,, iS nearer a than was %, This kind of pictorial approach suggests other methods. One method, sometimes called the rule of false position, may be constructed as follows. Referring to Fig. 3.4, let fe) HO) fee ve) Figure 3.4 Method of false position. (.f(e)) be a fixed point on y= f(x). Draw a chord through this point and the point (x,f(%)) so that it intersects the x-axis in a point (x,,.,,0). Thus, Sx) ~ S(O) ISG) -fO ~ This new point may well yield a better approximation than x, to a, ‘The procedure may be justified by the method of successive substitutions. 
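A short sketch may make the chord construction concrete; the formal justification by successive substitutions follows. Python is used here purely for compactness (the book's own programs are in FORTRAN IV), the function is the cubic used in the worked example later in this section, and the tolerance is an arbitrary choice.

def f(x):
    return x**3 - 3.0*x + 1.0        # has a root between 1 and 2

c, fc = 2.0, f(2.0)                  # fixed end (c, f(c)) of the chord
x = 1.0                              # initial estimate
for k in range(50):
    # the chord through (c, f(c)) and (x, f(x)) meets the x-axis here
    x_new = (x*fc - c*f(x)) / (fc - f(x))
    if abs(x_new - x) < 1.0e-6:
        x = x_new
        break
    x = x_new

print(x)                             # approaches 1.532089

Only function values are needed, never derivatives, which is the practical attraction of the scheme.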
Let f(x) = xf) we (3.32) FO ="F)=IO* where f(c) #0. I¢ is clear that f(a) = 0 implies Fla) = a, and that « = F(a) implies f(a) = 0. Since EMF HO =f) POS T= HOF it follows that in the neighborhood of a zero, 2, for /(x), the asymptotic convergence factor is fa) +957, OF Applying the mean-value theorem, together with f(z) = 0, we see that f(€) ~ f(a) = (6) = (c= 2"), in which & lies between a and ¢. Thus, F@)= fa) IO so that convergence can be expected for proper values ofc Since only functional values are involved (no derivative values are required), the resulting iterative formula (3.32) involves little computational effort. Also, in common with the other procedures given in this section, the method of false position is not confined to roots of poly- nomial equations. ‘Another technique with a simple graphical explanation is illustrated in Fig. 3.5. It gives a root, if values x1, and Xn ate known, stich that f(%,,) and f(%Ri) ate OPPo- site in sig. For continuous. functions, the number ‘S(Geus + Xq,)/2), being the valve of the function at the halfway point, will be either zero or have the sign of ‘FG,:) oF the sign of f(g). If the value is not zero, a second pair x,2 and x2 can be chosen from the three numbers xp,, xe1, and (xz + q,)/2 so that fx,,) and ‘Slxq2) are opposite in sign, while P= Wea — Xnal = HL ~ Xai Continuing in this manner, there is always a point 2 in the interval [p.m] for which f(a) = 0; a is uniquely determined by the process even though the interval may contain more than one zero for f(x). Because each new application of the iterative scheme 38 Regula Falsi and Related Methods 19 Figure 3.5 Halfinterval method. reduces by half the length of the interval in x known to contain a, this procedure is called the half-interval method. Note that since the interval of uncertainty is always known, we can specify, a priori, the umber of iterations required to locate the root within a prescribed tolerance. If A, is the length of the starting interval, then the number 1 of interval-halving operations required to reduce the interval of uncertainty to A, is given by In(Ai/d,) Tea (3.33) A technique which in some senses combines the features of the two preceding ones may also be constructed. Referring to Fig. 3.6, let x1, and xg, be numbers such that f(x.) and /(%q,) are opposite in sign. Let x be the abscissa of the point of intersection of the x-axis and the chord joining the points (x11, f(x.1)), (xpi fe); that Xu SOms) = Xm) FC ny) Feu) If/(x) = O, the process terminates with a zer0 of f(x). IF f(x) has the same sign as fm), choose X13 = ¥L1 and xp = x. If/ (x2) has the same sign as f(x), choose X1q =X, and Xgz = Xpy. The process is then continued to create the sequence of pairs (x,4, Xp.). 4 3.34) ‘The process just described is linear inverse interpolation and sometimes bears the name regula falsi Example, Find the root of (x)= x? —3x-+1=0, that is known to lie between x, = 1 and xq, = 2. ‘The results for ten iterations of the regula falsi method are siven in Table 3.8, Because of the nature of the function and. 
Table 38 tlustration of Regula Falsi Method a 1 1,000 2.000 1.000 3.000 2 1250 2000 0.797 3.000 3 1407 2.000 ~0.434 3.000 4 1.482 2000 0.190 3.000 5 1513 2000-0075 3.000 6 1.525 2000-0028 3.000 7 1329 2000-0011 3.000 8 1531 2.000 0.008 3.000 9 1582 2.000 0.001 3.000, 101332 2000 0.001 3.000 the points chosen, this isalso anexample of the method of false Position; note that xa remains unchanged throughout the course of the iteration and this is equivalent to c in equation (3.32), Hence, the required root is approximately 1.532; it is computed more accurately to be 1.532089. 10) Figure 3.6 Regula falst method. EXAMPLE 3.4 GAUSS-LEGENDRE BASE POINTS AND WEIGHT FACTORS BY THE HALF-INTERVAL METHOD Problem Statement Write a program that computes the base-point values and weight factors for use in the (n + 1)-point (n< 9) ‘Gauss-Legendre quadrature (see Section 2.10 and Exam- ple 23): Gat) Use the half-interval method to compute the base-point valués, which are the roots z, of the (n+ I)th degree Legendre polynomial P,,,(z). Find the corresponding weight factors w, by evaluating the integral of equation (2.85). AS a check on the calculations, use (3.4.1) to ‘evaluate a few simple test integrals. Method of Solution The coetticients of aif the Legendre pofynomials PJ), 0| 0 [22-1 in2 (ounded up) 24-2, + 0.05 fe 2.0244 005] 4 ___T op. ary + 0.05) <0 fim Prod) pee Surat Pari@ria) Example 3.4 Gauss-Legendre Base Points and Weight Factors by the Half-Interoal Method 183 FORTRAN Implementation List of Principal Variables Program Symbol Definition (Main) at Matrix whose rows contain coefficients of the successive Legendre polynomials. EXACTI, EXACT2, Exact values of the four integrals, fy, Ja, Jy, and Ja. EXACTS, EXACTS Fit Fi2, ‘Approximations to the integrals J, 1, 15 and I, #13, N n RooTs Subroutine for determining the roots of Pyss(2). wr Vector of weight factors). Weight Subroutine for finding the weight factors w,. zt Vector of roots 2. (Subroutine ROOTS) FIL, FIHALE Values of Py4,(z) at the left end and midpoint of the current interval, f; and f,,., respectively. ITER Number of half-interval iterations, k. PoLy Function for evaluating P,,(2). 21,2R, ZHALF Values of z at the left end, right end, and midpoint of the current interval, 2,, 2,. and z;2. respectively. (Subroutine WEIGHT) ct Vector of coefficients ¢ in (3.4.3). +t Because of FORTRAN limitations all subscripts in the text are advanced by one when they appear in the program; e.., the roots Ze through z, Become 2(1) through Z(N +1). 184 Solstion of Equations Program Listing Main Program © APPLIED NUMERICAL METHODS, EXAMPLE 5, GAUSS-LEGENORE BASE POINTS AND WEIGHTS BY HALF-INTERVAL METHOD. THIS PROGRAM AND ITS ASSOCIATED SUBROUTINES RO075 AKO WEIGHT REPRESENT A COMPLETE PACKAGE FOR GAUSS-LEGENDRE QUADRATURE. FIRST, THE COEFFICIENTS OF THE LEGENDRE POLYNOMIALS OF ORDERS ZERO THROUGH TEN ARE COMPUTED Ano STORED IN THE ARRAY A, A VALUE FOR N (WOT BIGGER THAN 9) 1S THEN READ AND THE ROOTS OF THE (Ne1)TH ORDER LEGENDRE POLYNOMIAL ARE COMPUTED AND STORED IN Z(1)...2(N) BY THE SUBROUTINE ROOTS, WHICH EMPLOYS THE HALF-2#7ERVAL METHOD. 
THE CORRESPONDING WEIGHT FACTORS WC1)...W(N) ARE COMPUTED BY THE SUBROUTINE WEIGHT, FINALLY, THE INTEGRALS OF FOUR COMMON FUNCTIONS ARE ESTIMATED BY AN (N61) Solution of Equations 188 atesez"o s99sez"o 2£9990°O ngngeteo GOGEL6"0 BSosgR"O ETHELI"O atusez"o ntesszo nuova ua 6606tz"0 sows or] 0-0 + GLOWKSGEzEEZ"T = LLDVK3—tonOss"z = HOW 3uv SivyOaLNI 40 S3N1A 1OVK3 ssovoo'o = = cid taeceert thd SoRtgetz THE uv STYYOBLNY 40 SMTA O3LYHILSI UsSt9z°0 nG9s6z"0 OS9S6z"0 sTzE9z"O 96zEIZO NezEAT“O 019990" 3WY SHOLOVS LHOLIK ONIGNOSSENOD 3HL nessenro SLBRNT Os HEEEEN*O- ETAELI“O- BSOS9E-O- LOEELE"O- BUY TVIWONATOE B3OHO HACT+ND 3HL 40 SLOOH IHL ots ua 6 oe oN Heim “sunuv¥avND 34aN3037-ssnvO Bag DING yI6 241 40f sysay oo = cuovxs © GSzeee"T = thoWea © zoNnse"z = UOWKa aay STY¥OINI 40 santVA 19¥Ka oooo0"o = = gid oneeuz"t = = zd tonoserz = td 3uv SV¥D3INI 40 S3M1VA O3LvMILSI Lzeasz"o 829819°9 resE9s"O BzBELNTO czEasz"O BWV SuoLavs ANDI3M ONTONOSI¥¥OD 3H 08t906°0 6948£5°0 coD000"0- BONES "O~ OFT906"0- 3B TIWONATOA ¥3quO HL ) HL 40 Stoow aH ot = ua ye 8 Lim *3unuvuayaD 340N3931-ssn¥9 22g c100 Yh #4 20h synsoy (pomnyw03) ywdy00 s9)ndwW0-y Example 34 Gauss Legendre Base Poims and Weight Factors by the Holf-Intereal Method 189 Discussion of Results ‘The printout is reproduced above only for the three, five, and ten-point formulas (n = 2, 4, and9, respectively). Within the specified tolerance, the roots 2, agree with those given in Table 2.2 The five-point Gauss-Legendre quadrature is highly accurate for all four test integrals. Even the three-point formula fails seriously only for /,, in which the integrand is 26; however, as predicted in Section 2.10, it handles / exactly, since the integrand, 2°, is now a polynomial only of degree 2n +1. The ten-point formula shows obvious signs of accumulated round-off error; this difficulty could bbe overcome by working in double-precision arithmetic. Note that the actual application of the Gauss-Legendre quadrature is treated much more thoroughly in Examples 23 and 2.4, EXAMPLE 3.5 DISPLACEMENT OF A CAM FOLLOWER USING THE REGULA FALSI METHOD Problem Statement Consider the rotating cam with follower shown in Fig. 3.5.1, Let d (inches) be the displacement of the follower tip measured from the center of rotation of the cam. The Figure 35.1 (0) Cam and follower (®) Rotation angle, x, and radias,r. radius of the cam, r (inches), measured from the center of rotation, is a function of the rotation angle, x (radians): 1x) ORe ! 1 | | 1 ! a “No |Convergence” 192 Solution of Equations FORTRAN Implementation List of Principal Variables Program Symbol Definition CAME Function that computes the value of f(x), in. (see equation 3.5.3). D ‘om follower displacement, D, in EPS Tolerance for convergence criterion, ¢, in. FX2, FAL, FXR Pci) Alera), and flaps in. ITER Iteration counter, k. ITMAX Maximum number of iterations permitted, itmax. 22, XL XR Xeets iar and Xp, radians. Example 3.5. Displacement of a Cam Follower Using the Regula Falsi Method Program Listing ‘100 200 201 202 203 APPLIED NUMERICAL METHODS, EXAMPLE 3.5 DISPLACEMENT OF A CAM FOLLOWER USING A METHOD OF FALSE POSITION THIS PROGRAM FINDS X2, THE ANGULAR DISPLACEMENT (IN RADIANS) OF A CAM ON THE ANGULAR INTERVAL (XL JR) WHICH CORRESPONDS TO A GIVEN FOLLOWER DISPLACEMENT D, USING THE REGULA FALSI ALGORITHM, THE CAM DISPLACEMENT EQUATION IS DEFINED 8Y THE STATEMENT FUNCTION CAMF. FXL, FXR, AND FX2 ARE THE VALUES OF CAMF AT XL, XR, AND X2 RESPECTIVELY. 
ITERATION CONTINUES UNTIL THE ITERATION COUNTER ITER EXCEEDS ITMAX OR UNTIL THE MAGNITUDE OF FX2 1S LESS THAN OR EQUAL TO EPS. seees DEFINE CAM FUNCTION «10. ChHECXY = 0,5 + 0.50 (EXP(~K76,283185)*S1N(XI) = 0 READ AND PRINT OATA oo... REAG"(5,100) 0, XL, KR, EPS, TTMAX WRITE (6,200) 6, XL, xR, EPS, \TMAX seees EVALUATE FUNCTION AT ENOS OF INTERVAL . ERLE CAMFCXL) FAR © CAMFOXR) CHECK FOR PRESENCE OF A ROOT... iF Cextseye ) 5, 3, 2 WRITE (6,201) 60 TO 1 ITER #1 TEU FxL Ne x= it Fx2 = 0, Go 10 8 32 = xR Fx2 = 0. co To 8 » 60 TO4 sees BEGIN REGULA FALSI ITERATION .. 46 80°F" “Teka, THAX X2 = CKL#PXR © XR@FALD/(FAR = FXL) FAQ © CAMECK2) asses CHECK FOR CONVERGENCE 444, iF’ Casscexay.ce.ePs ) Go 10's ses KEEP RIGHT OR LEFT SUBINTERVAL Crx2erxt.tT.0. 2 6070 6 XU = x2 FxL = Fx? co 107 xR = x2 FAR = FX? conTINUE WRITE (6,202) ITHAX WRITE (6,203) ITER, x2, FX2 60 701 eee. FORMATS FOR INPUT AND OUTPUT STATEMENTS. 5.444 FORMATO Sx, F10.8, 2(10x,F10,6) / 5X, FL0.8, 15%, 13 ) FORMAT( 1H0/ 10400. =, Fi0,6/ 10H XL" = | 610.6/ 1 TOK xR =, F10,6/"10H EPS =, F10,6/"10H ITNAX +, 13) FORMAT( W2HOPOSST BLY NO ROOT ON THE STARTING’ INTERVAL.) FORMAT( 21HONO CONVERGENCE AFTER, 13, 11H ITERATIONS.) FORMAT( LOMOITER = , I3/ 10H X2 = , F10,6/ 10H FX2 =, 1 Fi0.6 ) en 194 Program Listing (Continued) Data D+ 0.500000 EPS = 01000010, b= 0.750000 FPS = 0.000050 Die” 0: 700000 ees = 0:000001, d= 1000000 EPS = 02000100 Compater Output 0 = 0.500000 XL = 9:700000 xR == (31700000 Eps = _0,000010 itwax = 50 Wer = 5 ws S.tnisow Fx = -0000000 2 = 0.750000 XL 291250000 xR > 1.500000 Eps = oLoe00so imax = 50 ier +s x2 > (0580566 Feo 0000045 d= 0.700000 XLS St1s9999 3m > -Bl000000 fps 2 Osoo0001 iTwax = 100 Possiaty D = 1.000000 m+ oto R= Siasa909 Hs 2 arbaeiee IMAK = 100 PossisLy XL_= 09,7000 WTMax = *) 50 XL=. "0.250000 Niwax = "50 xt, "3.140000 Twa =" 100 xL = "0,000000 Tmax ©" 100 ‘Soluron of Equations m xn m xR NO ROOT ON THE STARTING INTERVAL NO ROOT ON THE STARTING INTERVAL 3.700000 1.500000 5.000000 3.140000 Example 3.5 Displacement of a Cam Follower Using the Regula Falsi Method 195 Discussion of Resalts used to find the angular displacement corresponding to a ‘The computer outpat shows results for two data sets Bivea follower displacement for any cam; only the furc- that yield solutions and two that do not. (The exact solu- tio CAMF need be modified to include the appropriate tion for the first data set is x= n.) The program can be "dius function, r(x). 196 Solution of Equations 3.9 Rutishauser’s QD Algorithm ‘A modernization of the classical method of Bernoulli is afforded by the QD (quotient-difference) algorithm. The ‘scheme starts in the manner described in Section 3.3. Thus et m, O n, define = ~Zem- , (3.35) The coefficients a, are, of course, those of the polynomial Se)y= Vax, As in Bernoulli's method, we form the sequence {q{!”} where {= Mats, 3.36) Under suitable circumstances, if @ is a uniquely deter- mined dominant zero, lim.» 9{? = a. Rutishauser’s extension of Bemoull’s method builds additional sequences, {f"), {a}, .... {qf}, which can converge to ay, respectively. As with Bernoull’s method, itis also possible to use subsidiary sequences when com- plex conjugate roots occur, and so on. Todefine the new sequences {g{"}, m = 2,3, -.. itis convenient to construct sequences {e{"}, m= 0, 1,2, n= 1, We then have = [al — gi 1+ f459, — B.37a) " ant? = Te al. G37) Clearly, nothing has been defined unless e{® is known. We have, always, £2 =0. For n=4, the relations are shown schematically in Table 3.9. 
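The recursions of (3.37), together with the starting column formed from the Bernoulli-type sequence described above, translate directly into code. The sketch below is illustrative only: it is written in Python rather than the book's FORTRAN IV, the cubic x**3 - 6x**2 + 11x - 6 (zeros 3, 2, 1) is not an example taken from the text, and the detailed superscript conventions may differ slightly from those of Table 3.9.

a = [1.0, -6.0, 11.0, -6.0]          # x**3 - 6x**2 + 11x - 6, zeros 3, 2, 1
deg = len(a) - 1
rows = 25

# Bernoulli sequence: u_0 = 1, with earlier values taken as zero.
u = [1.0]
for k in range(1, rows + deg):
    s = sum(a[i] * u[k - i] for i in range(1, deg + 1) if k - i >= 0)
    u.append(-s / a[0])

q = {1: [u[n + 1] / u[n] for n in range(rows)]}    # q1 column, ratios u_{n+1}/u_n
e = {0: [0.0] * rows}                              # e0 column is identically zero

# Rhombus rules (3.37): differences of q entries build each e column,
# and ratios of e entries build the next q column.
for m in range(1, deg):
    e[m] = [q[m][n + 1] - q[m][n] + e[m - 1][n + 1] for n in range(len(q[m]) - 1)]
    q[m + 1] = [q[m][n + 1] * e[m][n + 1] / e[m][n] for n in range(len(e[m]) - 1)]

print([q[m][-1] for m in range(1, deg + 1)])        # roughly [3.0, 2.0, 1.0]

For this polynomial the zeros have distinct moduli, so each q column settles down to one zero, in order of decreasing modulus, exactly as in the convergence discussion that follows.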
For obvious reasons, relations (3.37) are some- times called rhombus rules. If a rhombus is centered on a q-column, pairs are added, as indicated; if a rhombus is, centered on an e-column, pairs are multiplied. ‘As might be expected for a scheme involving division, it is difficult to guarantee feasibility for every starting procedure tio, 1, 1» and for every solution set 4, 3)... iy Some results are known (C1], p. 166). Consider first the condition ey] > aa] > +++ > fagl > 0. 8.38) It is then known that if the QD sequences exist, then Time gf” = ay and limn ef) = 0 ‘Consider next the condition Jay] > laa] > +++ > Ie] > 0, (3.39) again requiring that the QD sequences exist. Then, for every m such that [y;> lol > liye, Tim. gh = Om. Also, for every m such that lanl > Itassl, lTimp+e ef =0, Here, |ao| is interpreted as 00 and [ay4,1 as 0. We thus see that the columns of a QD table can be divided into subtables by those e'™ columns that approach. zero. Then all the q columns contained in a subtable pertain to a; values having the same modulus. Thus if a subtable has a single g" column, a, is its limit. ‘One necessary and sufficient condition is known for the existence of the elements of the QD scheme. It is the non- vanishing of the determinants cn Maen 0 Meme pats Me eee | Gago) Mesmat Maem 1° aetna k2Oand I 0, and, for 0 0: wy at OBS tt BE. (3.45) Another starting procedure is essentially that employed in Section 3.3 for Bernoulli's method. Let uo = I and, for l EPS, | 0 and f(x) and /“(x) do no change sign on (x.2) (©) Show that Newton's method exhibits oscillatory con vergence (that is, successive iterates alternate from one side of to the other) if /(xe)/“(xe) <0 om the interval (xo,x,) wher xosaan. Problems 201 3.16 Based upon the nations of inverse interpolation (see Problems 1.14 and 1.15) and of iterated linear interpolation Gee Problems 1.11, 1.12, and 1.13), develop a root-fnding method for a function f(x), single valued on an interval Lro,xa] known to contain a zero, Use your method to find the root of the function I)=P -3041 with xo = 0,11 = 0.1, .... 4 = 0.5, Compare your results for interpolation of degrees 1 through $ with the true solution 0.347296 (see Section 3.5). ‘Suppose that f(x) can be calculated for any x. How would ‘you modify the method just developed to achieve greater ‘accuracy with improved computational efficiency? What con- ditions must be imposed on f(x) in the interval of interest to insure convergence to a root? 3.17 This protiem does not involve numerical methods directly; however, it establishes two functions that will be needed in Problems 3.18, 5.27, 5.30, and 6.34. Consider two infinitely long surfaces, 1 and 2, that are generated by moving the curves AB and CD in Fig. P3.17a Figure P3172 normal to the plane of the paper. Suppose we wish to evaluate Fix, the fraction of thecmal radiation emitted by surface 1 that is directly intercepted by surface 2. In the string method (see Hottel and Sarofim (23), for example), four threads are stretched tightly between 4D, BC, AC, and BD. The geometsic view factor F,; (see Problem 2.41) is then given in terms of the lengths of the threads by BD+AC~ AD~ BC 2h where L, is the distance between A and Balong the surface AB. ‘Two infinitely long parallel cylinders of diameter d have their axes a distance w apart, as shown in Fig. P3,l7b, Show by the string method that Figure P3176 (he points 4, B, C, and D between which the threads may be stretched are shown in Fis. P3.17b.) 
Write a function named GYLEYL that will compute Fis. Anticipate that a typical reference will be F12 = CYLCYL (0, W) ‘Also write a function named CYLPLN that will compute Fy for the infinitely long cylinder and plane shown in Fig. P3.17¢, te that a typical reference will be F12 = CYLPLN (0, H, L, W) where the arguments have obvious counterparts in Fig. P3.17e, © 1 ® Figure Putte 3.18 (a) An experiment is to be performed in which the two eylinders of Problem 3.17 are to be spaced so that Fx has « specified value 8. Write a program hat will use the function CYLCYL to compute the appropriate value of wd, if such exists, for 00.5, 02, 0.1, 0.05, 0.02, and 0.01. (©) Thecylinder and plane of Problem 3.17 are situated with A= d and [= w/2, Write a program that will use the function CYLPLN to compute those values of wd, if any, for which Fx equals 0.01, 0.02, 0.05, 0.1, 0.2, 0.49, and 0.5. 349° Consider the following approach to the problem of finding a solution to the equation f(x) = 0. Start with three ois, (xo./(%2)), (x1fCx)), and (f(x). Find the second degree interpolating polynomial (use Lagrange's form (1.43)) passing through the three points, so that FO) = P&e) = 05 +04 + ase Now, solve for the two roots of ps(2) wsing the quadratic formula, Let the two roots be r, and rs, and evaluate the corresponding functional values, f(rs) and f(r). Let x» be rs, ‘oF rs, depending on which of |] or [f(r is smaller. Next, fit the points (xs, /(eu) (4ay/(4s)),and (xf) with a second ‘degree interpolating polynomial, solve for the roots, choose 2s to be that root which yields the smaller magnitude for the functional value, etc. Continue tis process of fitting successive second-degree polynomials until, if convergence occurs, the functional value assumes some arbitrarily small value. Note that this procedure, known as Muller's method [26], allows the isolation of complex as well as real roots of f(x). ‘Implement Muller's method asa function, named MULLER, with argument list (% F, EPS, ITMAX, ITER) where x isa vector of length three containing xs, x2, and.x3 in X(1), (2), and X(3), respectively, F is another funetion that ‘computes f(x) when required, EPS is a small positive number used in the convergence test for terminating the iteration, ITMAX is a maximum allowable number of iterations, should convergence not take place, and ITER is the number of itera- tions actually performed (set to ITMAX +1 for failure to converge). The final estimate of the root should be returned as the value of MULLER. Assume that all the roots of f(x) are real; ifthe algorithm yields complex roots at some stage, the function should set ITER to zero and return to the calling program. Test the program with some trial functions whose roots are known. 202 Solution of Equations How would you modify the function so that it could isolate ‘complex conjugate roots of a polynomial with real coefficients? 320 The 615 triode shown in Fig. P3.20 is used for ampli- fying an a-c input signal of small amplitude. The amplifier gain High-tension supoly Fi bas, oy ot Figure P3.20 A (ratio of rms output a-c voltage developed across the load, resistor R to the rms input voltage) is ane aR where r, is the dynamic plate resistance and gm is the trans- conductance (see Problem 2.48). if the high-tension supply constant at »,, we have the following additional wolving the anode voltage x, and the anode current fas unknowns: Ri=v,-0, If vg», and Rare specified, devise a procedure for compu- ting v, and the amplifier gain 4. 
Writea program to implement the method, making use of the functions already developed in Problems 1.42 and 2.48 for the 6)S triode. Suggested test data (ey andy in V, Rink): 4 = 300, vy = —6.5, R= 10; 0,300, dy = ~162, R= 22; v5 300, vy = —2.3, R= 22. 3.21 The first-order irreversible chemical reaction A > 5 has a reaction rate constant & fir-* at a temperature T. A stirred batch reactor is charged with Vcu ftof reactant solution Of initial concentration ap Ib moles/cu ft and is operated isothermally at a temperature T for fo hours. The reaction products are then removed for product separation, and the reactor vessel is cleaned for subsequent reloading with fresh reactart, ‘The reactor is operated cyclicly; that is, the process of loading fresh reactant, allowing the reaction to proceed, dump ag the product, and cleaning the reactor is repeated indefinitely. If the down time between reaction cycles is te hours. show that the reaction time fo required to maximize the overall yield of product B per hour is given by the solution of the eouation| to In tok + tk + 1k =0, Solve this equation for the following test data: yo (uty w 1 1 10 a (Ibmoleseuft) «= OE key) 25, 28 10 10 es Gn os 10 10 20 522 The isothermal irreversible second-order constant volume reaction A+ B > C+ D has a velocity constant k. A volumetric flow rate » of a solution containing equal inlet concentrations as each of 4 and 2 is fed to two CSTRs (con- tinuous stirred tank reactors) in series, each of volume ¥. Denoting the exit concentrations of A from the first and second CSTRS by a, and a,, respectively, rate balances give: kat kai If k= 0.075 liter/g mole min, y= 30 liter/min, ay = 1.6 moles/liter, and the final conversion is 80% (that is, ax/ae = 0.2), determine the necessary volume V (liter) of each reactor. For an extension ton CSTRs in series, see Problem 5.31. 323 Assume that three successive ferates, x, Xas1, and a2, have been obtained using (3.22), x)+1 = F(x), and that «is the required solution of /(x) = 0. Using the mean-value theorem (see Section 1.6), show that Mier 0= (a FED, wae (ei — OPE), where fis in (x4,a) and &2 is in (xa +442). I & = &2, show that the solution 2 is given by _ xa aan Se, Where Bios = aor — xasvand At = Xa — Dana 3.24 In general, £, and £, of Problem 3.23 will not be equal. However iff, X++1. and xs47 re near a, then ‘sO that the next iterate may be taken as (Qxn.o* ax See mae If-xy.1 = F(x) isa first-order process (soe Problem 3.14), this extrapolation technique usually accelerates convergence to a, that is, i, — | <[xaea— al, where x43 Flxas). The iterative process that employs the sequence of calcula- tions is known as Aitken's A? process, Use Aitken’s 4? process to find the three roots of f(x) = 03x41 = 0 with x= GPF DB, ox ees Hy 5m gO 4D, x 03; In each case, compare the sequence of iterates with the results of Table 3.5. Problems 203 3.25 Devise 2 procedure, based on one of the standard cequation-solving techniques, for evaluating x". Assume that x isa real positive number, that is a positive integer, and that the real postive nth root is required. 3.26 Devise a procedure for evaluating the n complex roots, x", of a real positive number x. Assume that mis a positive integer. 3.27 Determine the first ten positive roots of the equation cosh $cos $= =I, which is important in the theory of vibrating reeds (see Wylie [18}, for example). These roots may be compared with similar values found in Problem 4.32. 
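One possible approach to Problem 3.27, sketched here only to show the mechanics, is to scan for sign changes and then refine each bracket with the half-interval method of Section 3.8. Python is used instead of FORTRAN IV, and the scan step and iteration counts are arbitrary choices rather than anything prescribed by the problem.

import math

def f(b):
    return math.cosh(b) * math.cos(b) + 1.0     # zero where cosh(b)*cos(b) = -1

roots, step, b = [], 0.01, 0.0
while len(roots) < 10:
    left, right = b, b + step
    if f(left) * f(right) < 0.0:                # a sign change brackets one root
        for _ in range(60):                     # half-interval refinement
            mid = 0.5 * (left + right)
            if f(left) * f(mid) <= 0.0:
                right = mid
            else:
                left = mid
        roots.append(0.5 * (left + right))
    b += step

print(roots)   # the first root is near 1.8751; later roots approach odd multiples of pi/2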
3.28 Compute to at least six significant figures the first $0 positive roots of the equation B tan B= h, for k= 0.1, 1, and 10, These roots will be needed later in Problem 7.24. 3.29 Write a subroutine, named WARD, that implements ‘Ward's method (see Section 3.6) for finding a root, « = B+ iy, of a function f(2) of the complex variable : = x + iy, where Ska) = uy) + Oly). Let a typical call on the subroutine be CALL WARD (X, Y, FR, Fl, HX, HY, EPSHX, EPSHY, ITMAX,W) Here, FR and Fl are functions of two variables, X and Y, that return the real part of f(z), that is, u(X.Y), and the imaginary part of f(2), that is, (X.Y), respectively, when needed. Upon centering the routine, the first estimates of f and y should be available in X and Y. The searching strategy should employ HX and HY as the initial step sizes in the x and y directions respectively. The searching step sizes should be halved when appropriate (without modifying the values of HX and HY in the calling program) until the step size in the x direction is smaller than EPSHX and the step size in the y direction is smaller than EPSHY, or until the total number of calls upon FR(X, ¥) and FI(X, ¥) exceeds ITMAX. In either event, upon and Y should contain the final estimates of B and y, ly; W should be assigned the value of w(x») for the final iteration, where (x,y) = |u(x.»)| + [oC ‘Write a short calling program, and appropriate fun FR and Fi, to find a root of Se) = 24 — 22? + 1.2524 — 0.282 — 0.75 with various starting points, including ‘your results with those of Tables 3.6 and 3.7. 3.30 Van der Waals's equation of state for an imperfect (+3) a where P is the pressure (atm), o is the molal volume (liters) mole), Tis the absolute temperature (°K), Ris the gas constant (0.082084 liter atm/mole °K), and a and 6 are constants that depend on the particular gas. Write a program that will read values for P, T, a, and b as data, and that will compute the corresponding value(s) of o that satisfies the van der Waals equation. The test values in ‘Table P3.30 for a (liter? atm/mole?) and 6 (liter/mole) are ziven by Keller (27). +1 Compare Table P3.30 Gas a ’ Carbon dioxide 3.592 (0.08267 Dimethylanitine 37.49 0.1970 Helium 0.03412 0.02370 Nitric oxide 1.340 0.02789 Note: If T To, where T, = 8a/27Rb is the critical tem- perature (above which a gas cannot be liquefied), there will be only one real root for v; otherwise, there will be either one or three real roots, depending on the particular values of P and T- If there are three roots, v, VgDi. If values of uw and D3 are known, devise one or more schemes for determining: (a) ifa hydraulic jump is possible, and (b), if so, the corresponding value of D,. Write a pro- gram to implement the method; the input data will consist of pairs of values for u: and Ds. 3.32 Write « function, named CPOLY, with argument list (N,X), that evaluates the mth-degree polynomial Cy(x) (see Problem 2.27) at argument x = X, where =: N may be any of 2,3, 4, 5,6, 7, 9. The coefficients of the polynomials should be. preset in a suitable arrangement using a DATA (or equiva- lent) statement, The value of C(x) should be returned as the value of CPOLY. Next, write a main program that uses the halfsinterval ‘method (see Fig. 3.5) to find the roots of each of the seven polynomials evaluated by CPOLY (see Problem 2.28 for a further use for the roots). The program should reada value for DELTA, an error tolerance for the roots, equivalent to 4. 
in @.33), then calculate the n roots of C(x), for n= 2, 3,4, 5,6, 7, 9, and print the results in tabular form. Calculational effort can be reduced considerably by noting the following: G@) All roots of Cx(x) lie on (—1,1) (©) For m even, Co(x) is an even function of x, Hence, the ‘n roots occur in n/2 root-pairs, symmetrically arranged about the origin, Attention can be confined to (— 1,0) or ©,1). (©) For m odd, C,(x) is an odd function of x. Hence, there is root at the origin, and the remaining n—1 roots occur in (n= 1)?2 root-pairs, symmetrically arranged about the ‘origin. Again, attention can be confined to (—1,0) or (0.1), (@) No two roots for any of the polynomials are more ‘closely spaced than 0.05. A strategy similar to that used in Example 3.4 may be employed for solving the problem. Note that, because the above com- ments also hold for the Legendre polynomials, the program 204 ‘Solution of Equations cof Example 3.4 could be modified to halve approximately the number of calculations required to isolate the roots. 3.33. Devise two or more practical schemes for evaluating, within a prescribed tolerance «, all the roots of the nth- degree Laguerre polynomial. Write © program that imple- ‘ments one of these methods. The input data should consist of values for m and ¢, Your program should automatically ‘generate the necessary coefficients of the appropriate Laguerre polynomial, according to equation (2.68). Check your compu- ted values with those in Table 2.4. The problem may be repeated for the Hermite polynomials, summarized in equation (2.75). The computed roots may be checked against those given in Table 2.6. ‘3.64 For the isentropic flow of a perfect gas from a reser- voir through a converging-diverging nozzle, operating wi sonic velocity at the throat, it may be shown that ey EE ley") Here, Pis the pressure at a point where the cross-sectional area of the nozzle is 4, P, is the reservoir pressure, A, is the throat area, and y isthe ratio of the specific heat at constant pressure to that at constant volume, If As, 7, Pi, and 4 (>A) are known, devise a scheme for ‘computing the two possible pressures P that satisfy the above equation, Implement your method on the computer, ‘Suggested Test Data A= 0.159 ft, y= 1.41, Py = 100 paia, and A= 0.12 sq ft. 3.35 A spherical pocket of high-pressure gas, initially of radius re and pressure pe, expands radially outwards in an adiabatic submarine explosion. For the special case of a gas ‘with 7 = 4/3 (ratio of specific heat at constant pressure to that at constant volume) the radius r at any subsequent time 1 is given by [22]: ee nN (1432452) com, in which « = (rire) — 1, pis the density of water, and consistent its are assumed. During the adiabatic expansion, the gas pressure is given by p/po= (ralr)” Develop a procedure for computing the pressure and radius, Of the gas at any time. Suggested Test Data (Not in Consistent Units) Po = 10* Iby/sq in, p = G4 Ibafou ft,re= I ft, =0.5, 1, 2,3 $, and 10 milliseconds. 336 Fmoles/hr of an n-component natural gas stream are introduced as feed to the flash vaporization tank shown in Fig. P3.36. The resulting vapor and liquid streams are with- drawn at the rates of Vand L moles/br, respectively. The mole fractions of the components in the feed, vapor, and liquid streams are designated by 2, y, and x, respectively (/ Bevan). 
Assuming vapor/liquid equilibrium and steady-state opera- tion, we have Overall balance Fm ¥, Individual balance .F= b+ yV) Bnav reton ‘Kanfae oH Figare P3.36 the equilibrium constant forthe ith component at the prevailing temperature and pressure in the tank. From these equations and the fact that St.1%=S-191= 1, show that (K-11) WK D+F Write @ program that reads values for F, the z,, and the Ky fas data, and then uses Newton's method to solve the last equation above for ¥. The program should also compute the values of L, the x, nd the y; by using the first three equations siven above. ‘The test data [16], shown in Table P3.36, relate to the flashing of a natural gas stream at 1600 psia and 120°F. 0. Table PI.36 ‘Component ton K Carton dioxide 165 Methane 2 309 Ethane 3 on Propane 4 039 Isobutane 5 021 Butane 6 017s Pentanes 7 0.053 Hexanes a 0.065 Heptanes-+ 9 0.036 ‘Assume that F = 1000 moles/hr. A small tolerance © and an ‘upper limit on the number of iterations should also be read as data, What would bea good value V, for starting the iteration ? 337 For turbulent flow of a fluid in a smooth pipe, the following relation exists between the friction factor ¢, and the Reynolds number Re (see, for example, Kay [17D : dt 04+ 174 in(ReJe). Compute ¢, for Re= 10%, 10%, and 10°, ‘3.38 For steady flow of an incompressible luid in a rough pipe of length £ ft and inside diameter D in., the pressure drop AP tb,/sq in. is given by where p is the fluid density (Iba/cu f), ue is the mean fluid velocity ({t/sec), and fig is the Moody friction factor (dimen- sionless). For the indicated units, the conversion factor 9. Problems 205 ‘equals 32.2 Iba fi/lby sec*, The friction factor fis a function of the pipe roughness « (in.) and the Reynolds number, Dptin Re Ta ‘where is the fuid viscosity in Iba/ft sec. For Re < 2000, fu = GIRe, while for Re > 2000, fu is given by the solution of the Cole ‘brook equation, solution of this equation ‘may be found from the Blasius equation, Su O316Re=?8, appropriate for turbulent ‘low in smooth (¢ = 0) pipes. Write a function, named POROP, that could be called with the statement DELTAP = PDROP (0, D, L, RHO. MU, E) where the value of POROP is the pressure drop in Ib,/sq in. for a flow rate of Q gal/min of a fluid with density RHO and vis- Ccosity MU through a pipe of length t, inside diameter 0, and roughness E, Note that MU and L must be of type REAL. To test POROP, write a program that reads values for , L, RHO, MU, and E, calls upon POROP to compute the pressure drop, prints the data and result, and returns to read another data set. Suggested Test Data a eal/min) 170 4 D (in) 3.068 0.822 L @ 10000 100 RHO (Iba/cu ft) os 80.2 MU (Iba /ft ec) 0.0007 0.05 E (in) 0.002 0.0005 3.39 Asshown in Fig. P3.39, a centrifugal pump is used to transfer a liquid from one tank to another, with both tanks at the same level, Figure P3.39 ‘The pump taises the pressure of the liquid from ps (atmos- pheric pressure) to ps, but this pressure is gradually lost because of friction inside the long pipe and, at the exit, ps is back down to atmospheric pressure. ‘The pressure rise in psig across the pump mately by the empirical relation siven approxi= Pa—pi=a—bO"s, where a and 6 are constants that depend on the particular Pump being used, and @ is the flow rate in gpm. Also, from equations (5.4.2) and (5.4.3) the pressure drop in a horizontal pipe of length Z feet and Pi—P> where p is the density (Ib./cu ft) of the liquid being pumped. 
For the present purposes, the Moody friction factor fu treated as a constant although, as discussed in Problem 3.38, it really depends on the pipe roughness and on the Reynolds number in the pipe. ‘Write a program Whose input will include values for a, 6 p, L, D, fy € (@ tolerance used in convergence testing), and 7 (maximum number of iterations), and that will proceed to compute the flow rate Q. The program output should consist Of printed values for the specified input data, the solution Q, the intermediate pressure p:, and the actual number of itera tions used. If the method fails to converge, a message should be printed to that effect. Use the two sets of test data shown in Table P3.39. Select values of ¢ and m that seem appropriate. Table 73.39 Sel Set2 D,in. 1m 2469 Lt soo 206 prlbeleu ft (Kerosene) 51.4 _ (water) = a4 a dimensionless 0032 0026 “psi 167 385 , psiligpm)! * 0s. = 0.0296 340. Rework Problem 3.39, now allowing for a variation of the Moody friction factor fy with pipe roughness and Reynolds number, as in Problem 3.38. Use the same two sets Of test data, and assume in both cases that the pipe roughness is 0,000 ft, corresponding to galvanized iron pipe. Approp- riate viscosities (at 68°F, in centipoise) are = 2.46 (kerosene) ‘and 1,005 (water). For each data set, the program should priat values for fu and the Reynolds number. 3.dl_A semi-infinite medium is at a uniform initial tem- perature Tp. For 1 > 0, a constant heat flux density q is main- tained into the medium at the surface x= 0, If the thermal conductivity and thermal diffusivity of the medium are k and a, respectively, it can be shown that the resulting temperature T= T(t) is given by af [a oa x To+ $2] evan — verte]. ipJEeem rere ste] all other values are given, devise a scheme for finding the time 1 taken for the temperature at a distance x to reach a preassigned value 7°. Implement your method on the com- puter, Suggested Test Data T.= 10°F, q= 300 BTUhr sq ft, k= 1.0 BTU/hr ft °F, 0.08 sq fifhr, x= 1.0 ft, and T* = 120°F, 206 Solution of Equations 342A bate vertical wall of a combustion chamber con- taining hot gases is exposed t0 the surrounding air. Heat is Jost at a rate q BTU/hr sq ft by conduction through the wall and by subsequent radiation and convection to the Surround- ings, assumed to behave as a black-body radiator. Let T,, T, and T, denote the temperatures of the gases, the exposed surface of the wall, and the air, respectively, if isthe Stefan- Boltzmann constant, 0.11 x 10-* BTU/hr sqft °R*, and g, t, and k denote the emissivity, thickness, and thermal conduct vity, respectively, of the wall, we have G2 K(T,— Tolt= col Tin = +H(Te— PD). ‘The extra subscript Remphasizes that the absolute or Rankine temperature must be used in the radiation term (*R="F +460). ‘The convection heat transfer coefficient 4, BTU/hr 5a ft °F, is given by the correlation = 0.21 (T~—T,)", suggested by Rohsenow and Choi [24]. Assuming that T,.T.. ¢, k,and t ate specified, rearrange the above relations to give a single equation in the outside wall temperature Ty. Compute 1’. for the following test dat T= 100°F, ¢ = 0,0625 ft, with (a) Tz = 2100°F, k= 1.8 (fused alumina) BTUJhr ft °F, ¢ =0.39, (6) T,= 3100, k= 25.9 (Gteel), € = 0.14 (Freshly polished) and & = 0.79 (oxidized). In each case, also compute g and the relative importance of radia~ tion and convection as mechanisms for transferring heat from the hot wall to the air. 
343 The rate gdA at which radiant energy leaves unit area of the surface of a black body within the wavelength range A to A dA, is given by Planck's law: Dake? dd ergs dX =e GN Se ecT TY’ Gemtsee where ¢= speed of light. 2.497925 x 10" cmjsec, ‘fr Planck’s constant, 6.6256 x 10-2" erg sec, ‘k= Boltzmann's constant, 1.38054 x 10-"* ergi"K, T= absolute temperavure, °K, A= wavelength, em, For a given surface temperature T, devise a scheme for determining the wavelength Anas for Which the radiant energy is the most intense, that is, A corresponding to dq/dA = 0, Write a program that implements the scheme, The input data should consist of values for 7, suchas 1000, 2000, 3000, 4000°K the output should consist of printed valves for Anes and the corresponding value of 4. Verify that Wien’s displacement Ja, AnesT = constant, is obeyed. 344” Write a function, named INVERF, that computes the inverse error function, x, Where the error function is given by The function should have arguments (ERFXTOL) where ERFX isthe specified value of the error function (0 < ERFX10,, x4) =0 and xq, = 10. are satisfactory starting values for (3.34). Write a short main program to test INVERF, Suggested ‘Values for ERFX and the corresponding tabulated values for x are: EREX x ‘0.0000000000 0.00 0,1124629161 0.10 07111556337 0.75 0,8427007929 1.00 0,9661051465, 1.50 10.9953222650 2.00 10000000000 10.00 34S Repeat Problem 2.15, with the following variation. ‘Assume that the exchanger length L is a known quantity, to be included in the data, and that the resulting exit temperature Tis to be computed by the program. The problem is now to find the root of age aDJe, WT.—T) for which the repute faisi, halt-interval, or false-position method could be used. ‘An alternative method for finding 7 is discussed in Prob- Jem 6.19. 3.46 A vertical mast of length L has Young's modulus £ and a weight w per unit length; its second moment of area is J. ‘Timoshenko (251 shows that the mast will ust begin to buckle under its own weight when = 4mL+,9E7 is the smallest root of IT 14 Sah= The first coefficient js ¢y = —3/8, and the subsequent ones are given by the recursion relation O° Ga ) Determine the appropriate value of B. 3.47 The stream function $= Uy(1 5) 40m! represents inviscid fluid flow past a circular cylinder of radius @ with two effects superimposed [22]: (a) a stream whose velocity is uniformly U in the negative x direction far away from the cylinder, and (b) an amticlockwise circulation of strength « round the cylinder. Here, r= (x? — y*)*?* is the radial distance from the center of the cylinder, Whose cross section is shown in Fig, P3.47. Note that the streamline y= 0 includes the surface of the eylinder. Write a program that wil! produce graphical output (in the style of Examples 6.3, 6.5, 7.1 and 8.5) showing points lying on selected streamlines within the dimensionless interval Problems 207 Figure F347 xla= 45, Fora given y, increment x/a in steps of & between these limits, and use the above equation to compute the corre sponding dimensionless ordinates yla. Suggested Test Data A=0.5, with $/aU (dimensiosless) varying from 2.5 to 2.5 in ten equal steps, for «/aU/ == 1, 2, and 3 in turn. ‘This problem could be extended to compute the streamlines for flow past an aerofoil (22) 348 The analytical solution of Problem 7.31 is given on page 285 of Carslaw and Jaeger (28). In slightly rearranged form, we have d= aah where 7 is the solution of en bat T\ ene abet ety” Read \T,—T.) 
extol Jaala) eos = 1) See Problem 7.31 for the definitions and units of all quantities, In this connection, write a program that will ead values for To (Suggested esi value= 37), 7,20), 7,32), k4(1.30), ‘kxl0.343), pa(S7.3), pr(624), 640.502), eyXL.O1), L(144), and «and that will then proceed to solve the last equation above for 7, within the specified tolerance «, The value of 7 thus ‘computed can then be used to evaluate the analytical solution for the purpose mentioned in Problem 7.31 349 Methane and excess moist air are fed continuously to ‘torch, Write a program that will compute, within 5°F, the adiabatic fame temperature T* for complete combustion according to the reaction CHa + 20; = Co. + 2H:0, ‘The data for the program should include the table of thermal properties given below, together with values for Tx and Te (the incoming methane and air temperatures, respectively, ‘the percentage of excess and w (Ib moles water vapor per Il mole incoming air on adry basis). Assume that dry air contuins 79 mole % nitrogen and 21 mole % oxygen. ‘The heat capacities for the five components present can be ‘computed as a function of temperature from the general relation y= a+ iT. + TE + d/TE calle mole *K, where i= 1, 2,3, 4, and 5 for CH, O25 Nix Hs0 (vapor), and. COs, respectively, and 7, is in °K. Table P3.49 [19] also shows the standard heat of formation at 298°K, AMZ» cal/g mole, for each component. Table-P3.49 ta 4 4, AHL 1 534 00s 00 00 ~ 17889, 2 827 0.000258 00 —187700. 09 3 650 o.9010 00 00 00 4 822 0.00015 1.348 10-* 00 —s7198. 5 1034 0.00274 00 198500, 94052. Suggested Test Data Ta = T.= 60°F, p = 0 to 100% in steps of 10%, w= Oand. o.015. 3.50 Producer gas consisting of 35 mole % carbon mon- ‘oxide and 65 mole % nitrogen is to be burned with an oxidant (Cither dry air or pure oxygen) in an adiabatic reactor; all gases enter at a temperature To °F. The pressure is uniformly atm throughout the system. For the reaction, CO+40:=C0;, which does not necessarily go to completion, the standard free energy and enthalpy changes at 298°K are [19] AGio = 61,452 and AH%s4 = —67,636 cal/g mole, respectively. The standard states for all components are pure gases at one atmosphere, and ideal gas behavior may be assumed. ‘The conttant-pressure heat capacities for the four compon- cents present can be computed as functions of temperature 7,°K from the general relation. n= 01+ biTa + csT? callg mole °K. For the gases involved, the constants are shown in Table P3.50, Table P3.50 Gas fy XI eX 10% co 1 625 20m 0.459 0; 2 613-2990 © —0.806 CO, 3 685 8533-2475 MN 4 630 1819-0345 Write a programt that will compute, within $°F, the adiaba- tic flame temperature 7* and the fractional conversion z of ‘carbon monoxide. investigating both oxidants in turn. Assame, that air is 79 mole % nitrogen and 21 mole ‘Suggested Test Data To=200°F, P=1 and 10 atm, f (oxidant supplied, as 3 fraction of the stoichiometric amount required) varying from 06 to 1.8 in steps of 0.1. 208 Solution of Equations that implements the guotion-Siterence algorithm of Section ®ORSider the possibility of having OD improve eat of the aot 39, for estimating the roots of @ polynomial function of a *Stmates, using Newton's method. ‘complex variable, palt)er aes" 4 a2"! by, Bibliography Bibliography 4 2 - 1. A. Ward, “The Down-Hill Method of Solving (2) P. Henrici, Elements of Numerical Analysis, Wiley, New York, 1964, HA. 
Luther, "A Cla tion of Polynomials, 179 (1968), ‘of Iterative Techniques forthe Pactoriza- Communications of the A.CM., 1, 177 Journal of the A.C.M., 8, 88-180 (1957) F.B, Hildebrand, Introduction 10 Numerical Analysis, McGraw Hill, New York, 1956, G. Chrystal. Textbook of Alaebra, 1, Chelsea, New York, 1952. MLA. Luther, “An erative Factorization Technique for Polynomials,” Commusieotons of: the A.CM., 6, 108-110 (1963), HL. S. Will, Marhemavis for the Physieal Sciences, Wiley, New York, 1962 A. M.Ostrowski, Solution of Equationsand Systems of Equations, Academie Press, New York, 1966 . Froberg, Introduction to Numerical Analysis, Addison Wesley, Reading, Maseachusets, 1965. Modern Computing Methods, Notes on Applied Science, No, 16, Hier Majesty's Stationery Office, London, 1961. A.S. Householder, Principles of Numerical Analysis, McGraw Hill, New York, 1953, D. R. Hartree, Numerical Analysis, Oxford University Press, London, 1958. EL. Stielel, An snroduetion 10 Numerical Mathemaies, Academic Press, New York, 198. J.-F, Traub, erarie Methods for the Solution of Eqeetions, Prentice-Hall, Englewood Cliffs, New Jersey, 1964 1s 2 2s 26, 2. 28. vs. 3. CR. Wylie, Adeanced Engineering Mothemaris, McGraw) 209 C.F, Prutton and S. H. Maron, Fusdamental Principles of Physical Chemistry, Macrmitian, New York, 1953, D. L. Katz et al, Handbook of Netural Gas Engineering, McGraw-Hill, New York, 1959. JM, Kay, 4m Invvoduction 10 Fluid Mechanics ond Heat Transfer, 2nd ed., Cambridge University Press, London, 1963. New York, 1951 J.-H. Perty, ed., Chemical Engineers’ Handbook, 3d 6, McGraw-Hill, New York, 1950. HLH, Silling, Electrical Engineering Cireits, nd ed., Wiley, 1965, M. R, Spiegel, Theory and Problems of Laplace Transforms, ‘Schaum Publishing Co.. New York, 1965, L. M, Milne-Thomson, Theorerial Hydrodynamics, 4th ed., ‘Macmitian, Landon, 1960. H.C, Hotel and A. F. Sarotim, Rodiative Transfer, MeGraw- Hill, New York, 1967. 1. W. M, Rohsenow and H. ¥. Choi, Heat, Mass, and Momentum Transfer, Prentice-Hall, Englewood Cif, New Jersey, 1961 S. Timoshenko, Theory of Elasie Slability, McGraw-Hill, New York, 1936. D. E. Muller, “A Method for Solving Algebraic Equations Using an Automatic Computer,” Math. Tables Aids Comput,, 410, 205-208 (1956), R. Keller, Basie Tables in Chemistry, McGraw-Hill, 1967, 1H, S.Carslaw and J, C. Jaeger, Conduction of Heat i Solids, 208 ed., Oxford University Press, London, 1959 S. Lin, “A Method for Finding Roots of Algebraic Equations J. Math. and Phys, 22, 60-77 (1943). New York, CHAPTER 4 Matrices and Related Topics 4.1 Notation and Preliminary Concepts This section serves to summarize the knowledge assumed concerning determinants, matrices, and simul- taneous equations. The elements are real or complex numbers. At the same time, the notational pattern is established. Let A=(a,)=| : th be mx n,n x p,n xp, and p xq matrices respectively. By the matrix sum E-= B+ C, we mean the n x p matrix E=(e,) in which e,, for permissible values of the indices. Moreover, B+ C= C + B. By the matrix product F = AB, we mean an m x p matrix F=(f,) in which fi aiybu;- Ibis always true that A(B + C) = AB +AC and (B+ C)D=BD+CD, so long as the dimensions are compatible; it may not be true that AB=BA even though m=n=p. Should AB=BA, A and B are said to commute. The n x p matrix, all of whose entries are zero, iscalled the null matrix; the context serves to specify the number of rows and columns. The null matrix is commonly denoted 0. 
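In code the element-by-element definitions just given read as follows; this is a sketch only (Python with NumPy, an assumption rather than the notation of the text), and the two matrices are arbitrary illustrations. The text's own numerical example continues below.

import numpy as np

def matprod(A, B):
    # F = AB with f_ij = sum over k of a_ik * b_kj, exactly as in the definition above
    m, n = A.shape
    n2, p = B.shape
    assert n == n2, "inner dimensions must agree"
    F = np.zeros((m, p))
    for i in range(m):
        for j in range(p):
            F[i, j] = sum(A[i, k] * B[k, j] for k in range(n))
    return F

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[0.0, 1.0], [1.0, 1.0]])

print(matprod(A, B))                                 # agrees with A @ B
print(np.allclose(matprod(A, B), matprod(B, A)))     # False: AB and BA differ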
Thus, if for all indices ¢ and j, by 1 vs we[t i] xeso-[-$ J} [3 ‘: 1S 20 10) B+OQa=|5 6 SI, Ale -19 -6 —10 a-maen-[ B 7 304 4, 1 wwe 5 Notice that (A — BXA +B) # A? — B®. Tt will be found that (A—BY(A +B) = A* —BA + AB—B?. Let dy, denote the complex conjugate of the complex number a, and let A denote the matrix derived from A by replacing a, by ,,. Because the conjugate of the sum of ‘two numbers is the sum of their conjugates, and the con- Jugate of the product is the product of the conjugates, it follows that AB = XB. By the transpose A' of the m x m matrix A we mean the matrix (g,,) such that g,, = ay. It is true that (B+ C)' = B'+C" and that (AB)' = BYA' (more generally that (ABD)! = D'BIA’, etc.). 4.1 Notation and Preliminary Concepts mn ‘By the conjugate transpose A* of matrix A, we mean (BY. Clearly, (A)' = (A and (AB)* = BYA*. A matrix A. is Hermitian if and only if A* = A (this requires of course that m=n). A matrix A is symmetric if and only if A=A"A real matrix is Hermitian if and only if it is symmetric, Example. Choose ‘Then fo-240 0 -1-3r A=) -i 3-21 2} 3-2 Iai i 2-7-8 342r =)-2e5 -a-m% 1-7], bos 2 ded 2 34H S460 AAt= | 3-1 18-5461 Sabi -S-& 17 Note that AA* is Hermitian Hisa diagonal matrix if and only it His a square (n X ni) matrix (hj) and hy =0 58 C4 J. IE we let hy= hy @ diagonal matrix is often denoted fhy,/gs yf) (with oF without the commas). An alternate nomenclature is diag (hyhy.--h). Observe that AH is the m x m matrix (hg) and that HB is the nx p matrix (hb,), These features of multiplication by diagonal matrices are of special interest in the study of eigenvectors and eigen- values described later. Hisa scalar matrix if and only if His a diagonal matrix all of whose elements /,, have the common value h. We write KA = Ah= AH and /iB=Bh= WB, The ceal or complex numbers used as the elements of the matrices are sometimes called scalars, and formations such as hA and Ahare referred to as scalar multiplications. Ifthe common value his the number one, the resulting matrix is’ called the identity matrix and denoted I (the context serves to describe the urder, namely the number of rows n in an nxn matrix), Thus IA = Al = A. If m =n, so that A is square, and if K is a matrix such that AK = 1, then K is unique, is called the inverse of A, and isdenoted A~'.Itis then true that A"A = AA~* ‘The square matrix A is called nonsingular if and only if A“! exists. If m= n =p and both AW! and Bo! exist, then (AB)"! =Bo!A~!. A square matrix A is unitary if and only if A*=A-', and orthogonal if and only if Ata At = AL Example. If, for the matrix A, we take ani vis Vis sat “TER BS si sya S3 it will be found that A®A =I, whence A is unitary. ‘The (main) diagonal of a square matrix A of order n ‘consists of the elements, aji,1 m matrix U determines uniquely m colurnn matrices witich may be construed as vectors. For any nx m matrix U, the maximum number of linearly, independent columns is called the column rank; the ‘maximum number of linearly independent rows is called the row rank. It can be shown that these numbers are the same, and are the same number as the (determinant) rank of U previously defined. If we know that row rank is determinant rank, a knowledge of transposed matrices and determinants convinces us that column rank is the same number. No attempt is made here to show that row rank is determinant rank. It is worth observing that U, US, and U' all have the same rank. If U~ exists, it has the same rank as U and, in such event, the rank is the order. 
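The defining properties just listed are easy to test numerically. The following sketch (Python with NumPy, an assumption rather than anything in the text; the matrices are illustrative) checks a Hermitian matrix, a unitary matrix, and the statement that a matrix and its transpose have the same rank.

import numpy as np

A = np.array([[2.0, 1.0 + 1.0j],
              [1.0 - 1.0j, 3.0]])
print(np.allclose(A, A.conj().T))               # True: A is Hermitian, A* = A

# A unitary matrix satisfies Q*Q = I; one is obtained here from a QR factorization.
Q, _ = np.linalg.qr(np.random.rand(3, 3))
print(np.allclose(Q.conj().T @ Q, np.eye(3)))   # True: Q is unitary (real, hence orthogonal)

# Rank is unchanged by transposition.
U = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],
              [0.0, 1.0, 1.0]])
print(np.linalg.matrix_rank(U), np.linalg.matrix_rank(U.T))   # both 2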
A feature of value, concerning matrices of order whose rank r is less than 7, is that they can always be written as the product of ann x r matrix and an rx 1 matrix, each of rank r. Formal proof is not difficult; however, we merely indicate the proof for a matrix of order 4 and rank 2. Let the first two columns be linearly independent; then the matrix may be written (and rephrased) as i. My tat Brain + Bea My a2 dry + daa Bidar + Badr A As dy, + Mae Bras, + Bade May May ay + Bas Bday + Bates ay aa =| @t a2 b Ow Al “Yas asa |lO0 1 a2 Be ay A Conversely, if two such matrices, each of rank 2, are multiplied together, we can always find, in the product matrix, a minor that is the product of two nonvanishing determinants, one from each factor. This is readily seen by partitioning, so that the submatrices of rank two (one from each factor) can be seen to yield a submatrix in the product, also of rank two. Moreover, three or more columns must be linearly dependent. Another factorization theorem applies to nonsingular ‘matrices. The leading submatrices of a square matrix (a) are the matrices woo fox ae) fi Se yy 432 Ay The corresponding determinants are commonly denoted Ay, Bas +o» Boe [Fall the leading submatrices of a square matrix are nonsingular, it may be written as the product (and indeed in more than one way) of a lower triangular matrix L and an upper triangular matrix U. Proof is by induction and is sketched (for convenience), using always a one for the diagonal elements of L. (Any nonzero element may be used.) Clearly, if the rank r is one, [a1] = [ley1] where 144, = ay). Now assume that, for all matrices of order r — 1 (having the requisite proper- ties), the proposition is true, Let ut = [a,, 2 --- 4,-1) 42° Vectors 25 and v= (a1, di, ..d,-1,}. Then, using partitioning, any suitable matrix A = (a,,) leads to fe where A, is the (7 ~ 1)th leading submatrix of A. A, has the form L,_, U,-1, where the diagonal elements of the lower triangular matrix L,-, are all ones. Since the determinant of a triangular matrix is the product of its diagonal entries, and the determinant of the product isthe product of the determinants, we are assured that L,, A and U,—1 afe nonsingular. Let s'=w'U!, and tet w= Liv. Then ae ali a) where by = dey — 8'W. ‘The fequirement that A, #0, 1 DIMENSION A(10,10), (10,10), C(10,10), T(10,10), U(10,10), 1 VSO, 10), x(10), ¥¢10),"2¢16) oot 1 ='t, mH DO sel Pe TJ) = 0, vod 1 =i, do2 v=, P DO? K =, NW T1yd) = ACL RIAUCK UD # TEL) RETURN CMAN) = AUD # BOND eee | ENTRY waTADD (A, By Cc, M,N} D3 1e1,M 003 yen CC)" = ald) ¢ BCI, u) RETURN sees COM,N) ADHD = CHAD ENTRY maTSua (A, B,C, My WD Dow tea, # Dow Jae Cw) = ACES) = BCL) } RETORK : seeee ZUM) # AON) # XC) areas ENTRY MATWEC (A,X, 2, M,N) 005 t= 4, m 2) = 0. do6 1a, DO Jat N ZOD @ ACL, dex) 6 200) RETURN sesee YON) = 5 # KON) vee ENTRY scavec (s, x, ¥, ND 007 11, YD © sexcty RETURN sesee B(MSN) = S#ACH)ND oe ENTRY scaMaT (S, 4, BM, 10 Doe te 1,4 Dok yt, N BCl,u) = S#ACI J) RETORN 5S KO) # VOD vee ENTRY vecvec (x, ¥, S,'N sO Bos 1 =a, 8 Ses xUevc) RETURN 5 XCM) = 2M) # RID ENTRY VECMAT (2, a, XK, My MD Dol vst, N Xd) + 0. D010 x Beas « £Ca) S BeKD ACK) RETURN ‘Masrices and Related Topics Program Listing (Continued) c a w BOW = AHO matig (A,B, HM, Wi ReTéan aeeee VECTOR LENGTH $, OF XCW) 4 eeraY VECLEN (x, S, 8} sumsax, Dol? 
t= aN SUMSQX * SUMEQX 6 XC1)2XC1) $= DsqRTiSUMsa) RETURN YON) © XO) ses VECEQ (x,y, Wi) bois tea, W YO) = xc) RETURN seese VON)M) * AQH,N) TRANSPOSED 4.06 ENTRY TANSPZ (A, V, My ND Doi ted, mt Dow =a, N Voy) © ACS) RETORN ENO 43 Linear Transformations and Subspaces 219 43° Linear Transformations and Subspaces A set of vectors IV from a vector space V may be said to form a (vector) subspace of V if and only if, for every scalar ¢ and every pair of vectors u and v in W, it is true that both w+ and cw are vectors of W. Let A be any (fixed) nxn matrix. Then the set of all vectors of the form Au, where w is any (variable) vector of ¥,, consti- tutes a subspace of V,. For Aw is ann x I matrix; there- fore, itis an element of V,. Moreover, itis readily verified that uty= Au, +Ay, = AQ, ty), (4.6) That the set of all vectors Au is a subspace of V, means, in part, that a set of linearly independent vectors of the form Au will serve as a basis for the space of all vectors of that form, Note that Ae, is the first column of A, ‘Ae, is the second column of A, and so on, Sitce Ler we), by an obvious extension of equation (4.6) it is seen that cu =cAu, = Acu, ane Fue, ‘Then for a basis of the space, we may choose any maxi- ‘mum number of linearly independent columns of A. itis seen that the dimension of the subspace W of vectors ‘Au is the rank of A. ‘A transformation of a space V into a subspace W of V may be viewed as a device for associating with each element of V a uniquely defined image in # (more precisely, such a transformation is a many-one corzes- pondence from V to a subset of V). Expressed in operator form, Tu =v, meaning that the transform or image of tis v. The transformation is linear if and only if for all and y in V and any scalar c, Tat y= Tasty, and (cu) = cTu. Observe immediately, reading equation (4.6) in reverse, that square mattices A of order n may be viewed as linear operators for V, Let A be an nx 7 matrix and u a vector of ¥, such that Au=0. If A is nonsingular, its determinant is not zero, and the set of simultaneous linear equations, repre- sented by Au = 0, has only the trivial solution w= 0. If, however, & is singular (that is, det(A) = 0), then w exists such that u #0 and Au = 0, for in such cases, the simul- ancous eyuations involved have & nontrivial solution. The set of all vectors, whose image (using a fixed matrix A) is the null-vector, is a subspace of F,. This is readily verified in the manner by which equation (4.6) was ‘established. This set is called the mull-space of A. Its rank is called the mullity of A, and it will be shown that the rank of A plus the nullity of A is the dimension of ¥,. (The trivial space consisting of the null-vector only has rank ze10,) ‘To demonstrate the proposition above, let €,, €3, be a basis for the nullespace of A. Let eQyeis.-€ be additional vectors such that €, €,,...,€ i” foto consti- tute a basis for ¥,, Since every vector w= Si-, me, transforms as J"... 4,Ae;, it follows that every vector of the space W of all transforms is a linear combination of the vectors Ae,,m-+1- Fo laf, Visa, (4.14) & fat orit lad> Slade 1 (AYP = P diag [9(A1), 922), --» 9A] (4.22) For distinct eigenvectors, then, it is clear that the eigen- values of g(A) are g(} and, in addition, that the eigen- vectors of g(A) are the same as those of A. 4 and Example. 
AS a case in point, consider 9s) yo 6 -2 a=|-2 18 1 2 4 13 ‘The eigenvalues of A are found to be 7, 14, and 21 with the eigenvectors (1,0, 2, 2, 1, OF and (0,1, 31, respectively Tt develops that g(A) is 1299 126 ax ~70 3323s], 336672213. with eigenvectors as above and eigenvalues 45, 192, and 437, respectively. The use of similar matrices, and certain analogies between polynomials f(x), in an indeterminate x, and matrix polynomials f(A), combine to produce powerful results. We proceed with an example. First of ali, it develops that a necessary and sufficient condition for lim. A* to be zero (A being a square matrix) is that all the eigenvalues of A shall be, in absolute value, less than one. If A=P™'AP, where A= ld, Aayordah this is obvious, since AY= PAP and At = 4,24, ..., Ah Proof ean also be accomplished if A is not similar to a diagonal matrix. On the strength of the above statement, we now show that a necessary and sufficient condition that the matric series LEAH ARE RARE converge is that all eigenvalues of A be less than unity in modulus. In such event, (Ay STH AGAR HO ARES (4.23) Here, the meaning of the equal sign is that each element of the matrix on the left is the sur of an infinite series built from corresponding elements on the right. Teis neces- sary that lim, A* be zero; therefore, by the above, 44° Similar Matrices and Polynomials in a Mateix 23 each eigenvalue must be less than unity in modulus. Now let all eigenvalues have an absolute value less than unity, and consider sufficiency. Since det(I ~ A) implies 1 is an eigenvalue, it follows that det(I — A) #0, and that (I— A)"! exists. The identity (+ A+ Ab + + AN0-A) awe is easy to establish. Then postmultiplication by (— A)"* gives + abe Ay Atay Since lim... At“? = 0, (4.23) follows. ‘At this juncture, note that if, for a square matrix B, an upper bound can be found for the moduli of the eigen- values, then a closely related matrix A can be found whose cigenvalues are less than unity in absolute value and whose eigenvectors are those of B. Let such an upper bound be p. Then let A=(I/p)B; it follows that if Ba = 2u then T+A+Ar +: 1 a + Bu = Au=“u, (4.24) 5 A (4.24) Wis not necessary Lo know the eigenvalues to find such a bound p. Two simple tests may be derived from (4.16) and (4.17), Let 4, denote any eigenvalue of A = (a,). The first ofthe relations mentioned above yields the inequality Val 0. Such a matrix is positive semi- definite if and only if, for w xO, (x, Aw) > 0. Correspond- ing statements define negative definite and negative semi- definite Hermitian matrices. Hermitian matrices thus far not characterized are called indefinite. It isa simple matter to verify that forthe real Hermitian matrix -4..1 0 A=} 1-4 4}, ot =4 we may uss A =[—4 — /2, —4, -4 4/2) and ie Yap ee 12 ~/2}2 1/2, P Recall that the leading submatrices of a square matrix A are those square matrices of order m, 1 Alt wv + af =Ja, whence By+aw =v, (4.32) 45. Symmetric and Hermitian Matrices ns Let v= Ux, Then the first of these equations (4.32) yields UAx + ow=4Ux; then AXx-+altw = 2x and GA—A)x-=oU*w. When the latter is substituted in the second equation of (4.32), there results UGH ~ AY !attw + of = da This is, of course, valid only if 4 is not an eigenvalue of B, The factor « may be removed, since we may view the components of the matrix A as variables. Since Al ~ Ais diagonal, its inverse is disg (i, nj" 4) where A= (i ~ 4). 
Using this leads o the equation rage y ml Bima, uy), Should (mw) = lo, (4.33) Where U = [uy U3, it is easy to verity hat [{}=[@/] is an eigensector and 2, an eigenvalue for A**"", Should 2, be a multiple root such that for [different indices j,. jz... jw Ais the eigenvalue for u,, and (w, u,,) #0, then 2, is an cigenvatue of multi- licity at least /— 1 for A“*, This is illustrated for 1= 2, and for convenience we suppose (w, u,) and (Ww, ,) are not zero, while both w, and u, have the eigenvalue Ay. Then investigation shows that the vector figs 4 bul, with proper choice of 7 and 6, will serve ds an eigen- vector of A'**" corresponding to the eigenvalue 4. We need only require that yw'u, + dwt = 0, with y and 6 not both zero. Thus under all ciccumstances (4.33) leads to a polynomial equation of requisite degree for deter- mining the cigenvalues of C distinct from those of B. {Gf all the eigenvalues of B are eigenvalues of C, and (w, w,) = O for all values of j, then & = f is the new eigen- value and may equal a previous eigenvalue.) The remain- ing details are omitted, with the remark that a sketch of a simple situation involving real values of 2, is informa- tive. This sketch may take the form of plotting z= A ~ 8 and * twuj? $ mul on the same coordinate plane. Example. An example is furnished by the matrix -4 100 0 1-4 1 0 o 1-44 o 0 1-4 The leading submatrix [4] has the characteristic function —4—N and the eigenvalue —4. The leading. submetcix =4 1 the eigenvalues ~3 and — three hes the characteristic function —56 and the eigenvalues 4 /2,—4, and i The lading ubmatxof order Wehr 4+ J, The matrix itself has the characteristic function 209 +252 493X416)? +A and the eigenvalues (—9~ ./91! ($7 ~ $5)/2, (—9 + 4/52, and (-7+ J9)/2. AC the matrix A is Hermitian, each of the following statements give a necessary and sufficient condition for A to be positive definite: (@) All the eigenvalues of A are positive (b) The coefficients of the characteristic equation for A alternate in sign (0) Each leading submatrix of A is positive definite. @A,>0, 10, (u,u)>0 and Au=du imply 2>0, because (u, Au) = 2(u, u). Now suppose Ay, dass 4y vositive Since A is Hermitian, it has » corresponding eigenvectors Xj» Xap +o) Xe that are orthonormal, Then for any u, We CpX HeaKy to ECM whence w= (Xp, wr, + Oa, BK be + Oot Me Moreover, ‘At = Hayy Waa + ACs Wg + 1° + A, ty, so that Ge, At) = AL Oxy, WP? + A [O82, WP +o AO WP. To establish (6), observe that if the coefficients of the characteristic equation alternate in sign, there can be no negative root. Since the matrix is Hermitian, all roots are real, hence all are positive. Then, by (a), A is positive definite. Conversely, if A is positive definite, then all eigenvalues are positive and the coefficients of the char- acteristic equation alternate in sign. Consider next (c) and suppose A is positive definite. Let WE [ess Wes gy, «OF be nonzero, yet such that the last — mentriesare eros. Theinner product (u, Av) > 0% it is also the inner product for an arbitrary vector in medimensional space, one which uses as its matrix the leading submatrix of order m. Hence this submatrix is positive definite. The condition of part (c) is obviously sufficient, since A itself is then positive definite. Turning to part (d), suppose A is positive definite. Since the determinant of a matrix is the product of its cigenvalues, from part (c) it follows that each 4, > 0. 
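Conditions (a) through (d) can be compared side by side on a small example. The sketch below, in Python with NumPy rather than FORTRAN-IV, uses the tridiagonal matrix of the example above with its sign reversed, so that it becomes positive definite; it checks (a) through the computed eigenvalues and (d) through the leading principal minors, and, as a practical stand-in for (c), attempts a Cholesky factorization A = LL', which exists exactly when A is positive definite. (The Cholesky test is an illustrative addition, not a method introduced in the text at this point.)

    import numpy as np

    # Sign-reversed form of the tridiagonal example above; with +4 on the
    # diagonal the matrix is positive definite.
    A = np.array([[ 4., -1.,  0.,  0.],
                  [-1.,  4., -1.,  0.],
                  [ 0., -1.,  4., -1.],
                  [ 0.,  0., -1.,  4.]])

    # (a) every eigenvalue is positive
    assert np.all(np.linalg.eigvalsh(A) > 0)

    # (d) every leading principal minor is positive: 4, 15, 56, 209
    minors = [np.linalg.det(A[:k, :k]) for k in range(1, 5)]
    print(np.round(minors, 6))
    assert all(m > 0 for m in minors)

    # (c), in effect: a Cholesky factorization A = L L' exists exactly when
    # A is positive definite (numpy raises LinAlgError otherwise).
    L = np.linalg.cholesky(A)
    assert np.allclose(L @ L.T, A)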
To see that the condition of (d) is sufficient, proceed by in- duction, The proposition is obviously true for n= Now assume the proposition to be true for all Hermitian matrices of order & and consider one, say A'*") of order k+1. Let 2 be (if possible) a negative eigenvalue for A“*D. Then since Au. >0,and Ay. ; is the product of all eigenvalues for A", Al" has a second negative eigenvalue. By the theorem just presented, A has an igenvalue Which is negative. This cannot be; thus the ‘eigenvalues of A are positive and A is positive definite. 26 ‘Matrices and Related Teples Example, An example of a positive definite quadratic form is 13y8 + 998 + S793 + By.y2 + 8y.ys —6yays.. That this is positive definite may be seen by writing it as Sy? + (2ys + 2y:)? + Ay} + 2ys + 2ya}? + 44y3 + (va — 3y4)*, When written in the form (y, Ay) where y* = [ys 2 ysl, M4 4 49 -3|. 4-30 97. ‘The first two leading submatrices have the corresponding quadratic forms 13yi and 13y? + 8y,ys +9y2. These are seen to be positive definite, The characteristic function is $400 — 1330A + 79\* — A? and the eigenvalues are positive, since the functional values for A= 0, 10,20, and 60 are $400, ~1000, 2400, and —6400, respectively. For positive definite Hermitian matrices A, it is true that |, (u, Au) (Au, Au) An Any (4.34) Ga) < Gan © a where 4, is the least and 4, the greatest of the eigenvalues of A, and w is any vector of proper dimension. That (u, Au) _ (Au, Au) (u, Au) follows quickly from (4.28) if for v we use wand for w we use Au {note too that (u, Au) = (Au, u)). Now let A= diag (A,, 42, ...,4,), and A = P*AP. Then (u, Av) (uu) (Pa, Puy 'u/|\Pul| ; thus (x, x) = 1. But (x, Ax), where x= (Ax =F ait ia Now let u = A~'y, Then (Au, Au) (u, Au) (Aa, u) AT) (ary) where (y, y) = I and y = Pyjl|Pyl. Since As Act = diag! az',. it follows that 1 1 ay) ( Aa Since the guarantee for convergence using the power method (Section 4.6) is based on the existence of linearly independent cigenvectors, and since Hermitian matrices fulfill this requirement, the method is adapted to them, Moreover, once an eigenvalue and its corresponding eigenvector have been found, there is in this ease a par- ticularly simple theory for continuing. Let the eigenvalues and corresponding eigenvectors be Ay, Aay-..y4y and uy, Uz, +, (Ordering by magnitude is not demanded). ‘The eigenvectors must be orthogonal and we may, in addition, require that they be of unit length, Let Ay and define recursively Aes = Ana Amu. For k = 1, itis a simple matter to see that Agu, = Ayu, — Ayu, =0, and that, for 7 1, Agu, = Ant — Ayu, w) Aju, = du, Thus Az has the same eigenvalues and cigenvectors as A,, except for 2, which has been replaced by0. Itis seen that the process continues: the eigenvectors are retained, and the eigenvalues are successively replaced by zero. If, in addition, the Hermitian matrix is positive definite, all eigenvalues are real and positive. It is still necessary to consider multiple roots for the characteristic equal 4.6 ‘The Power Method of Mises In Sections 4.6-4.9, we consider various methods for determining eigenvalues and eigenvectors. For additional information concerning such methods, see Bodewig [1] and Faddeev and Faddeeva {5} ‘When the eigenvalues of a matrix A are so ordered that VAL > [ial 2 o> Ugh, and when [24] > Val or 2y = 42 = s+ 2, and || > [Apssl, it is customary to call 2, the dominant eigenvalue. If the matrix is of order n. 
has linearly independent eigenvectors u,,Us,....0, and a dominant eigenvalue 2,, then 2, and an associated eigenvector, m, of unit length, can be appioximated as follows. Suppose first that [2] >[23I. Let ¥, be an (almost) arbitrary vector to be described more exactly later. Define the sequence ¥,, by Ave =1/IA¥m— il y, mel. (4.38) Then lim, = W15 To see this, let vp be any vector Jim JAvq—1l = [rl (4.36) Yo = 1M, + Cg, +77" + Cally provided only that c, #0. (Since u, is not known, there is an element of hazard in choosing vp. However, even though the vp chosen should have a zero component involving u;, a round-off error might eventually provide such a component.) Then We = BiAvy = Bilerdits + eats 427+ + eli and, in general, vn = Baler, + teat], (4.37) where fis merely a normalizing factor. For large values of im, this is substantially Bye,27u,, which in turn is 1/9 Should the dominant eigenvalue have multiplicity i, itis seen that ¥, is substantially Bale Tuy + cgAfuy +--+ + ela) 4 The Power Method of Mises 27 and (4.36) is still valid if w, be interpreted as one of a family of unit eigenvectors associated with 2,. Tt is pos- sible to find the remaining eigenvectors associated with 2, by repeating the process using a different vector Yo the procedure being most effective if the multiplicity is known. An alternate procedure is to solve the system of linear equations Au = 2,u, When an eigenvalue 2, (not necessarily dominant) and fan associated eigenvector w, have been found for matrix A, it is often possible to proceed with the process. Suppose, for example, that a unit vector h, can be found such that A*h, = J,h,. (Note that J, is an eigenvalue for A*) Then let B be defined by B=A~A,hhy Observe that if Aw = Aw and / ¥ 2), then hfw = 0, For hyAw = Ahfw and also BTAw = (A*h,)*w-= (J,h,)*w Ahfw. Then from ahfw= A,hfw, we find btw =0. This means that Bw = AW — 2,h, x 0 or Bw = AW = 2m. Thus B has all the eigenvalues of A that are different from 4. Also the trace of B is 4, 1eSs than the trace of A. For Spthht) = (h,,h,) = 1; hence Sp(Jh,h) = 2, and ‘Sp{B) = Sp(A) — 2. Therefore, if 2, is not a multiple root of the characteristic equation for A, the matrix B will yield all the remaining eigenvalues and eigenvectors of A. Compare this procedure, when A= AM, with the ‘method described at the end of Section 4.5. in some physical problems, one is primarily concerned with the dominant eigenvalue, In others, the eigenvalue ‘of concern is the one of least absolute value. It is now shown that the eigenvalues of A~! are the reciprocals of those for A. For A nonsingular and det(A ~ (I/1)1) = 0 imply det((1/)(Au—) =0, hence det(An- 1) =0. Then since det(A~!(Au — 1) = det(A~!) det(Ay— 1), it follows that det(yl ~ A ). Procedures for finding AW are covered in Chapter 5. At this paint, itis worth n ig that every polynomial of the form Si. a;x°~% where dy ~ 1, determines a companion matrix a rn 1 0 0 . 0 0 o 1 0 any A 0 0 1 0 of (4.38) 0 0 1 0 and that its characteristic function is precisely rE axe This can be verified by induction or by multiplying each column in Axl in succession by x (starting with the first) and adding to the next. When the resulting determinant is evaluated in terms of the last column, the statement becomes evident, Now use this companion matrix in connection with Mises’ process, and for the Initial vector ¥o we Vo = [ya iy Mmm as +++ Mol where the meaning of the components of va is that given in Bernoulli's method. 
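The iteration of equations (4.35) and (4.36), and its use on a companion matrix, can be sketched in a few lines. The program below is in Python with NumPy rather than FORTRAN-IV; the cubic chosen, the stopping rule, and the Rayleigh-quotient test used to fix the sign of the eigenvalue are illustrative choices and are not taken from the text, and the sign convention of the companion matrix may be written slightly differently from (4.38).

    import numpy as np

    def power_method(A, v0, tol=1.0e-10, itmax=500):
        # Dominant eigenvalue and a unit eigenvector by the iteration (4.35):
        # v_{m+1} = A v_m / ||A v_m||, with ||A v_m|| -> |lambda_1|  (4.36).
        v = v0 / np.linalg.norm(v0)
        lam_old = 0.0
        for _ in range(itmax):
            w = A @ v
            lam = np.linalg.norm(w)
            v = w / lam
            if abs(lam - lam_old) < tol * lam:
                break
            lam_old = lam
        if np.dot(A @ v, v) < 0.0:      # attach the sign of lambda_1
            lam = -lam
        return lam, v

    # Companion matrix of x**3 - 6x**2 + 11x - 6 = (x-1)(x-2)(x-3);
    # its dominant eigenvalue is the zero of largest modulus, namely 3.
    C = np.array([[ 6., -11.,  6.],
                  [ 1.,   0.,  0.],
                  [ 0.,   1.,  0.]])
    lam, v = power_method(C, np.ones(3))
    print(round(lam, 6))                # prints 3.0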
For this particular matrix, the ‘two methods are seen to coincide, Also, note that methods of finding eigenvalues, which do not depend on direct solution of the characteristic equation, might prove useful in finding the zeros of polynomial functions. EXAMPLE 4.2 THE POWER METHOD Problem Statement Write a program, based on the method of Section 4.6, that determines the eigenvalues and eigenvectors of an nx meal matrix A. Method of Solution ‘The method for determining 2, and uw, is expressed in equations (435) and (4.36). Note that after the mth iteration, the old eigenvector estimate v,. is discarded in favor of the new Yq. Hence, we need only use a single vector, ¥, for containing successive such estimates, apart from the starting vector ¥o, which is reserved for an additional purpose indicated below. ‘An alternative procedure is used here for finding the remaining eigenvalues and eigenvectors, assuming that there are no repeated eigenvalues. To obtain A,, first observe that the vector (A= ADvo = Gy — Ande + Ua — Areata FAA Ate does not contain a component u,. Thus, a repetition of the procedure with a starting vector Bvo, where B= AAI, will yield 2, and u,. Similarly, once 2, and 4, are known, a starting vector Byg, where B= (A— a1) x(A— AnD, will generate Ay and uy, etc. Because round-off error is likely ¢o introduce un- wanted components involving 1, us, etC., we must periodically eliminate such quantities. This is achieved by replacing the current approximation v with Bv /IBV|., after every mje iterations. Computations for each eigenvalue are discontinued, when, from one iteration to the next, there is litle further fractional change in the length of v. Thus, if = [Avs I and Jy = |Av,..,, convergence is assumed ‘where ¢ is a preassigned small quantity. If convergence does not occur within gg, iterations, the calculations are discontinued. From equation (4.36), note that the power method yields only the magnitude of the eigenvalues. To append their correct signs, we must compare the first nonzero elements of y on two successive iterations. If these elements are of the same sign, then 2, hank Example 4.2. The Power Method 229 Flow Diagram 1, Mgans Mage E You A “No convergence” wp kad, JQ. Fr e “4 é 1B yay. Find first k for which fos! > 107?| del Place v in the ith] column of U 230 ‘Matrices and Related Topics FORTRAN Implementation List of Principal Variables Program Symbol A 8 cD EPS IDENT 1ZERO LAMBDA, MFREQ. MMAX ZERO Definition Matrix A, whose eigenvalues are required. Repeated product matrix, B= (A — 4,IA — A.D. Matrices used for intermediate storage. Tolerance, e, used in convergence test. Identity matrix, I. Length, /, of v. J, used for temporary storage of /. ‘Vector containing eigenvalues, Iteration counter, m. Number of rows, n, of matrix A. Number of iterations, m;j,q, between periodic reorthogonalization of v. Maximum number of iterations permitted, Mee Matrix U whose columns contain the eigenvectors of A, Current approximation, v, to eigenvector. Starting guess, Yo, for eigenvector. Vector for temporary storage, y. Example 42. The Power Method 21 Program Listing c APPLIED MUMERLCAL METHODS, EXAMPLE 4.2 c POWER METHOD FOR DETERMINING EIGENVALUES AND EIGENVECTORS, c & ‘THE FOLLOWING PROGRAM MAKES EXTENSIVE USE OF THE MATRIX c OPERATIONS DEFINED IN THE SUBROUTINES OF EXAMPLE 4.2 c 1s IMPLICIT REAL@S(AH, 0-2) REAL#S L, LZERO, LANDA, (DENT DIMENSION A(10,10), 6(10,10), C¢10,10), 0(10,20), 1DENTC10,103, 1 UC1O,10), LAMBOACIO), Vi10D, VZERO(I05, ¥(30) + READ AND CHECK INPUT DATA. 
READ'(5, 100) N, MHAX, MFREQ, EPS, (WZERO(I), 1 = 1, ND WRLTE (6,200) hp MAS, MEREQ, EPS WRITE (6,207) (VZEROCID, 1 = 3, ND READ (5,401) (CAC, J), J # 1,°W), 1 a, ND WRITE (6,201) D035 1'= 2, w WRITE (6,207) (ACW), d= 1, agree (MUTHMCGE EQUATE @ TO THE HoEwTITY WArRIX pele Doz yaa toewr(d a) * 9. do3 ie a HENTCI,1) 1. CALL MATEQ (IDENT, B, N,N) asses PERFORM POWER METHOD FOR ALL N EIGENVALUES «1... iui Tea w + MODIFY STARTING VECTOR So THAT 17 ts ORTHO GONAL TO ALL PREVIOUSLY COMPUTED EIGENVECTORS GALL MATVEC (B, VZERO, V, N,N) CALL VECLEN CV, L2ERD, NS sesee PERFORM SUCCESSIVE POWER METHOD ITERATIONS «++. BOs w= a, Ma PERIODICALLY RE-ORTHOGONALIZE THE VECTOR V .. GF" ClnqmeRean«MPREQ NE. M) GO-TO & GALL MATVEC (B, V, Y, Ny ®) CALL VECLEN (Y, Ly WS CALL ScAVEC Cio/t, Y, Vv, ND COMPUTE NEW VECTOR V AND ITS LENGTH ., GALi"WATVEC (A, ¥. ¥, Ne ND TALL VECLEN Gr Uy at CALL scavec (ai0/i, ¥, v, ™) sree CHECK FOR CONVERGENCE ie" GbaBsc(L ~ LZERO)/LZERD) L2eRO = jagss SALVAGE PARTIAL RESULTS LE METHOD DID NOT CONVERGE WRiTE (5,202) 1, M, CLAMBOACKD, K = 1, 1M) WRITE (6,203) DOG K = 1, NW WRITE (6,209) (U(K J). J = 2, IML) 60 10 1 ESTABLISH THE SIGN OF THE EIGENVALUE , MATVEC (A, Vy Y, Ny ND BOs Ks TW IF (DARS(V(K)) «LT, 1.0D°3) GO TO 8 IF (VKDaY(K) ET," 000) Low = 50 To 9 CONTINUE Veps) oo a7 232 ‘Matrices ond Related Topics Program Listing (Continued) c ase STORE CURRENT EIGENVALUE AND EIGENVECTOR . 3 CaWebAC) = 00.10 k= 4, 8 10 UK,» VERS WRITE (6,208) 1, My L c c MODIFY MATRIX B iFG) Ge, ny G0 10 11 CALL SCAMAT (L, IDENT, C, N,N) CALL MATSUB (A, C, 0,°N,/ND CALL MATMLT (0, 8, Cy N,N, ND CALL MATEG (C,'B,"N, A) 32 CONTINUE, c © seu, PRINT EIGENVALUES AND EIGENVECTORS 2... WRITE (6,205) CLAMBDACI), | = 1, ND WRITE (6,208) po t= 1, W 32 WRITE (86,2075 (UCL, J = 1, ND GO TO 1 © ese FORMATS FOR INPUT AND OUTPUT STATEMENTS .,... 100 FORMAT (3(12x, 13), 12x, €8.2/ (10x, BF5,177 101 FORMAT (10x, 45.15 200 FORMAT (1H1, 4X, 47H POWER NETHOD FOR OETERMINING EIGENVALUES, WIT 1M/ HO, 6X," 20H N s, Wks TX, 10H MMAX =, 14/ 2) 7x, 1H MEREQ » , 14) 7X, 10H EPS = | €13.3/ 1H, 3_ux, 384 STARTING VECTOR VZERO(1)... VZEROCN) IS) 201 FORMAT (1HO, 4X, 39H THE STARTING MATRIX ACL,1),,.ACN/N)_ 1S) 202 FORMAT (1HO, 4x, 37H NO CONVERGENCE, PARTIAL RESULTS’ ARE/ 1HO, 16x, HI *, {2, 10X, 9HM*. 13/ 1HO, 2 6x, 27H LAMROACAS... LAMBDACI=1) = 7 (7x, 10F11,6)) 203 FORMAT (AHO, 4X, 23H FIRST |-1 EIGENVECTORS ARE} 204 FORMAT (HO, Gx, GH 1 =, 12, 5X, SHM =, 13, sx, 6H LS, F1I.6) 205 FORWAT (1HO, Gx, 38H EIGENVALUES LAMBDAC1)...LAKBDA(N) ARE/ 17x, 10F11.6)) 206 FORMAT (1HO, 4X, 39H EIGENVECTORS ARE SUCCESSIVE COLUMNS OF) 207 FORMAT “7K,” 1042.6) c END Data N © 3, MAX = 100, FREQ = 15, EPS = 1,0E~9 veg ea a » Sal 6) -2t-2, a8, a, a2. ae, Bt N= “4, MMAX = 50, MFREQ * 1, EPS = 1,0E-9 wero = 1.70. 0, 9 Roos al 8 7 sl ao, Pa a a MMAX’ = 300, MFREQ’ = 10, 0.0. 0, Oy “Ilo o.oo: Example 42. The Power Method Computer Output Results for the Ist Data Set POWER METHOD FOR DETERMINING EIGENVALUES, WITH N so max = 190 MFREQ = 15 EPS = 0,1000-08, STARTING VECTOR VZERO(1)...VZERO(N) IS 1.000000 1,000000 " i,c00000 THE STARTING MATRIX ACL,1) 4. 0ACM)ND 15. 11.000000 6.000000" ~2:000000 =2/000000 18,000000 1.000000 -12'000000 24000000 13000000, bed M8 as L* 21,0000 tea Ho 8 29 L = 18000000 res M+ 16 L + 7.000000 EIGENVALUES LAMBDACL)... 
LAMBDACN) ARE 21,000000 14,000060° 7.000000 EIGENVECTORS ARE SUCCESSIVE COLUMNS OF 9.000000 0894027 0.047214 0.316278 0.447214 _0,000000 0.948683 -0.000000 0.894427 Results for the 2nd Data Set POWER METHOD FOR DETERMINING EIGENVALUES, WITH " - 8 Max = 50 MEREQ = 1 EPS = (0.200008 VZERO(N) 1S 0 o. STARTING VECTOR VZERO(1) 1.000000 0,0 THE STARTING MATRIX ACL,1) 4. 1,L',L, Lj will be used to denote lower triangular matrices whose diagonal elements are all ones, while R, R’, R,, Rj will be used co denote upper triangular matrices (here R’, for instance, is not the transpose of R—for this we consistently use the notation R’) Let A = A, bea real square matrix such that A, = L,R, (this can always be accomplished if, as previously shown, Avis positive definite). Define A, a8 RL. It may be that LAR,. If s0, define Ay as RyL». In general, assume A= LARy Ase: = Riba (4.39) Note that in such event Ags = Le AL, = RA (4.40) Since the product of lower (upper) triangular matrices is also lower (upper) triangular, and since the inverse of a Jower (upper) triangular matrix is also of the same kind, itis readily verified that if Lis bby Las R; (4.41) then Ang = Li TALL (4.42) ‘This shows that Ay is similar to A; therefore, as previously shown, i€ has the same eigenvalues and related eigen- vectors. Suppose now that lien Bi = b (4.43) This means that lim Ly =1, lim Ay = fim Ry (4.48) where R is upper triangular. There results from (4.42) URL =A, (4.45) where R not only has the same eigenvalues as A, but has them as its diagonal entries (if A is to be real, this means ‘A must have only real eigenvalues). If Rv = 2x, then AL'y = AL'v, Thus we see that when convergence takes place, a knowledge of L' and R will yield all eigenvalues and eigenvectors for A, provided that we can find the eigenvectors for R. This is particularly straightforward if there are no. multiple eigenvalues. Consider, for example, O rss faa Fae 0 0 rs rh 0 0 0 r Solve the system Rv=ryv, where v'=[1,0,0,0], to obtain the eigenvector for ry. Solve the system Ry = r23¥, where v' = [a, 1, 0, 0}, to obtain the eigenvector for 713. Solve the system Rv = ry,¥, where ¥" = (a,b. 1.0], to obtain the eigenvector for rss, etc. The technique for individual eigenvectors is that of back substitution in Gaussian elimination (see page 270). {In application, we can adopt the philosophy of using the results if convergence occurs. We emphasize chat convergence does take place for positive definite matrices. A technique is still needed for writing a suitable matrix B as CD where C is lower triangular (with ones om the diagonal) and D is upper triangular. Let B= (b,), C= (¢,). D = (dj), it being understood that cy 20 for F) Since b,; = Dies Cu diy by the above. 6, i {4a > "++ > Val, So that in addition, the eigenvalues are real. Note first that from (4.39), itis possible to see that Ate 4.47) because LAR, = (Ly g+** La. 2)La—sLaRaRi (Re 2° R2Ry) (CE yLy Ly 2) Ry - La iRu- (Re 2" R2R,) = (LaDy La~2)Ra- alg 2Ra-2La—2(Ra_ 2 RiR,) (LR) a {cis possible to show [4] that instead of the formula listed in (4.46), we can write, for all indices i and j, by ba by ba, baa + bay Bean baa 1 Byaay bi bin by Let U=(u,) be a matrix such that AU= U diag (4, Ay... 4,) and let V = (0,,) = (U"")'. 
Thus, Abe U diag Ah + AVE Then om Yan uaa at ad From this it follows for B= At = CD, that c,, 47° Method of Rutishauser Mi ffeutt earat Yay | O1242 P2242 ve tm Lede Pak = det(D,)/det(D,,), where 237 ey wy Mae " PM Pad, oy dt ua aa tay [ert vat Paki Wyn Myon Uyete LPS Pale oo Dye Ma Ma cs ‘As & increases, because the 2, det(Dj,) where are in order of decreasing magnitude, the value of det(D,,) approaches that of at uy My oath vy aah Ua, ola yar LOus vail Pu uy Thats, Hi ta My on My aa tay os n= diag ha 4) Hyena Mata oe aie ea Py Pay Oy To aid in seeing this, observe that D,, is the sum of similar terms, all other terms having a middle factor, diag (4,4, ... 34), which for k large is small compared to the one shown. Use partitioning to sce that Dj, contains all terms of the type described, Thus for k large, ¢1,s approximated by det(Dj,) _ det 9,) de(Dj,) ~ deu@,,) where wy U3 9n= . Myon 4-12 or ‘This means that L = lim... Li = L, if det(U,)det(V,) #0, where wens wy We tha us, Pea My. YH. Here, U=L,R, expresses U as the product of a lower triangular matrix (with ones on the diagonal) by an upper triangular matrix. Therefore also, R = R. diag (4,2... 4,)Rz' ‘There is a modification which can be used in conjunction with real matrices A whose eigenvalues may be complex. My Pay EXAMPLE 43 RUTISHAUSER'S METHOD. Problem Statement Write a subroutine, called AUTIS, that will apply Rutis- hauser's LR transformation to a given matrix, and that will also find the eigenvectors as a byproduct. The subroutine should inrorporate the following features (a) economy of storage, (b) special handling of tridiagonal matrices, 10 take advantage of their high proportion of zeros, (¢) acceleration of convergence, When appro- priate, and (d) double-precision arithmetic. Check the subroutine for several test matrices by writing a main Program that handles input and output and calls on AUTIS Method of Solution and FORTRAN Implementation This example elaborates considerably on the basic LR method. The resulting program is long and involved, subsequent transformed matrices A, Ay, ..., Ay, and their lower and upper triangular decomposition matrices L and R (with the exception of the diagonal of L, which consists of 1's, and need not be stored). This arrange- ‘ment results in considerable economy of storage. The upper triangular portion of matrix X is reserved for the eigenvectors of Ay; its lower triangular portion stores the accumulated product Li =LyLy ...Lx- The diagonal elements of X ate all I’s and may be regarded as common to both the cigenvector and accumulated product portions. Matrix U is employed only for storing the normalized eigenvectors of A. ‘The following algorithms are used in the subroutine (1) Decomposition of matrix Ay into the product of lower and upper triangular matrices Ly and Ry (Ax ~+ LaRy): but because certain extra features are included, it is ty =4y— Dar, §= 120.45 computationally quite efficient. The complete method is ay most easily discussed by referring partly to program ay ~ San symbols from the outset. a a ‘The calling statement for the subroutine will be aes eee a CALL RUTIS (N.A ANEW, U FREO,ITMAK.EPST,EPS2,EF53, Note: (a) Also lyj= 1, j= Iy2, um, but need not be EPS4, EIGVEC, STRIPD, SWEEP, TAGI, TAG2, ITER) a The various arguments are defined as follows Program Symbol’ Definition a Array containing the » x n starting matrix, A. Anew ‘Array that is to contain the final transformed matrix. 
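Stripped of the sweeping device and the eigenvector recovery of the worked example that follows, the basic transformation of equation (4.39) takes only a few lines. The sketch below is in Python with NumPy rather than the FORTRAN subroutine of Example 4.3; it factors each A_k = L_k R_k by the usual recursive formulas (cf. (4.46)), with unit diagonal in L_k and no row interchanges, and then forms A_{k+1} = R_k L_k. The test matrix is positive definite, for which the required factorizations exist and the diagonal settles down to the eigenvalues.

    import numpy as np

    def lu_nopivot(A):
        # A = L R with unit diagonal in L and no interchanges, as in (4.46);
        # assumes every leading submatrix of A is nonsingular.
        n = A.shape[0]
        L = np.eye(n)
        R = np.zeros_like(A)
        for j in range(n):
            for i in range(j + 1):                  # rows of R in column j
                R[i, j] = A[i, j] - L[i, :i] @ R[:i, j]
            for i in range(j + 1, n):               # subdiagonal of L
                L[i, j] = (A[i, j] - L[i, :j] @ R[:j, j]) / R[j, j]
        return L, R

    # Positive definite test matrix; eigenvalues 4 - 2 cos(k pi/5), k = 1..4.
    A = np.array([[ 4., -1.,  0.,  0.],
                  [-1.,  4., -1.,  0.],
                  [ 0., -1.,  4., -1.],
                  [ 0.,  0., -1.,  4.]])

    Ak = A.copy()
    for _ in range(200):                            # A_{k+1} = R_k L_k, (4.39)
        L, R = lu_nopivot(Ak)
        Ak = R @ L

    print(np.round(np.sort(np.diag(Ak)), 6))        # diagonal of the limit
    print(np.round(np.sort(np.linalg.eigvalsh(A)), 6))   # eigenvalues of A

The rate at which the subdiagonal elements die out is governed by the ratios of adjacent eigenvalue moduli, which is why the sweeping procedure of the following example is worthwhile when those ratios are close to one.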
eigvec Logical variable, having the value T/F (true/false) if the eigenvectors are/are not required. ePst Tolerance used in convergence testing. If the sum, SUBSUM, of the absolute values of the subdiagonal elements of the transformed matrix falls below €PS1, the LR transformations will be discontinued. €Ps2, £PS3 Tolerances. The eigenvectors of A will be computed if and only if: (a) EIGVEC = 7, (b) no two eigenvalues tie within a small amount EPS2 of each other (if they do, TAG2 will be returned as 7), and (c) SUBSUM is not greater than EPS3 (if itis, TAGT will be returned as 7). eps Tolerance for the sweeping procedure (see below), which will occur only if: (a) SWEEP and (b) SUBSUM < EPSé. FREQ Number of LR steps elapsing between successive " sweeps,” if any ime Returned as the number of LR steps actually performed by the subroutine. iTMAX Maximum number of LR steps to be performed. N Dimension of starting matrix, 1 sTRIPD Logical variable, having the value 7/¢ if A is/is not tridiagonal. Used to avoid unnecessary ‘muiplication of zeros. sweer Logical variable, having the value 1/F if the sweeping procedure (see below) isfis not to be applied, v 1x n matrix whose columns contain the eigenvectors of A. The subroutine employs two additional m x n working matrices, B and X. As soon as RUTIS is entered, B is equated to the starting matrix A, which is left untouched, should it be required further in the main program. Matrix B is then employed for storing, in turn, all 238 (b) Any element a,, is used only once, namely, to compute the corresponding ry, oF /,;. Hence the elements of R and L can replace those of ‘A.as soon as they are calculated. Example 43. Rutishauser's Method 239 (©) In the program, A, L, and R occupy the same array (8) and are therefore referred to by the common symbol 8. (@ For the special case of a tridiagonal starting matrix, all subsequent LR. transformations also yield tridiagonal matrices, and the above algorithm would result in an unnecessarily high proportion of multiplications invalving zero. We therefore introduce two integer vectors, BEGIN and FINISH, such that for any column J, BEGIN(J) and FINISH(J) contain the row subscript of the first and last nonzero elements, respectively. For full matrices, BEGIN(J) and FINISH(J) are always 1 and N respectively, but they will be suitably modified for tridiagonal matrices. In the program, the ower and upper limits on | in the above algorithm will appear as BEGIN(!) and FINISH(4), respectively, and the lower limit on K will be BEGIN(J) (Q) Recombination of lower and upper triangular matrices, in reverse order, to give ttansformed matrix Aus (Aner = Rly): Fan EN am a= rit rules Jeiitheon Note: (a) As soon as an element a; has been computed, the corresponding element r,, or , is no longer needed. Hence, the elements of A can replace those of R or Las soon as they are calculated, (b) in the program, for reasons previously stated, the lower and upper limits on J are BEGIN() and FINISH(1), respectively. The upper limit on Kis FINISH()) for J < | and FINISH(J) for J > 1. (3) Determination of the accumulated lower triangular product matrix: Lis, -L,Li+y. where the newest matrix Lj, has elements given by Keely tlt & lida ie 1,2,cc00 FAL iah Note: (a) The substitution operator «= merely em- phasizes thata newly computed element of Li. ‘occupies the same storage location asthe corres- ponding element of Li. (b) In the program, L'is stored in the matrix X. 
(4) Determination of the eigenvectors (to be placed in the upper triangular portion of matrix X) of the final transformed matrix (occupying the upper triangular portion of matrix B, since the strictly lower triangular portion should now consist of numbers very close to zero): ay aay fo. ay da” feJ—bj-Beats Note that the eigenvectors can only be determined using this algorithm if no two eigenvalues are very close together. (5) Determination of the starting matrix eigenvectors, as successive columns of matrix U: eee (6) The algorithm given below relates to the acceleration ‘of convergence by “sweeping.” Note first that the sub- diagonal elements of Ay tend 10 zero most quickly when ‘A has eigenvalues whose moduli are well-spaced. For less favorable cases, convergence may not be especially rapid. Since the LR algorithm is only a means to the ultimate ‘end of producing an upper triangular matrix, an alter: native transformation could be used at any stage should a better one be available. The convergence can indeed be accelerated by the following technique, once a matrix A,, has been obtained whose subdiagonal clements have been made moderately small by the conventional LR transformation. mya 8yt Slats Jah M oo i Bilas J uy Let Aas = 15 'Anin where 1 L 21 0 -,1 0 2 = 1 7 1 0 4 -, O 1 By multiplying out, the v's could be chosen so as to bring all the subdiagonal elements of the first column of A,.+1 to zero; however, finding these values of v would require an excessive amnniint of camputation Rot, by neglecting quadratic terms in the v's and product terms of the 0's with the subdiagonal elements of columns 23,4, n—1 of A,, it can be shown readily that a choice of v’s determined by the following equations will, in most cases, effect a substantial reduction in the magnitude of the subdiagonal elements of column 1 240 ‘Matrices and Related Topies ay2- Gy) as a6 Qy3— a1 Ose Gua Further transformations Ager = Lahde mt ie Ines = Ltr Am +2 s ay Cty where vy 1 Tye vm 1 5 » 1 1 1 0 1 ee md , et., ., 1 (and the o's ate chosen properly, but will vary from one adhe, On, column to the next) will also reduce the subdiagonal elements of the remaining columns substantially. In the subroutine, the elements v,, which are stored in the vector V, are determined for a particular column j from the relation ay ~ dy i=nant,. oJ +1, This process of 'sweeping” through the columns may be repeated with ever-increasing effect. Finally, note that if the sweeping procedure is applied to a tridiagonal matrix, it ill introduce small nonzero values into elements that were previously zero. The technique of (Id) above is then no longer advantageous, but this is of little consequence, since the very introduction of the sweeping procedure means that the required degree of convergence will soon occur, ‘We have not attempted to write a concise flow diagram for this example, Instead, the overall approach can be understood best by reading the program comment cards in conjunction with the above algorithms. Example 43. Rutshauser's Method ut Program Listing Main Program ¢ APPLIED NUMERICAL METHODS, EXAMPLE 4.3 c EIGENVALUES BY RUTISHAUSER'S LEFT-RIGHT TRANSFORMATION, ¢ ¢ THE FOLLOWING MAIN PROGRAM HANDLES INPUT AND OUTPUT ONLY, AND ¢ CALLS ON THE SUBROUTINE RUTIS TO IMPLEMENT THE ALGORITHM. c IMPLICIT REAL#® (AH, 0-2) INTEGER FREQ LOGICAL EI8VEC, STRIPD, SHEEP, TAGL, TAG? 
DIMENSION ACII,139, ANEWC11,11), UCi2, 11) COUNT = 1) WRITE (6,200) READ (5,100) €PS1, EPS2, EPSS, EPSH, FREQ, SHEEP WRITE (6,201) EPSi, EPS?, EPS3, EPSi, FREQ, SWEEP c c READ STARTING MATRIX A AND OTHER PARAMETERS 1 WRITE (6,202) IcouNT TeOUNT =” 1COUNT + 1 READ (5,101) N, ITMAX, EIGVEC, STRIPD wRiTe (6,203) fh, ITHak, EIGVEC, STRIPD dort el WN 2 READ (5,102) (ACLU), J #2, WRITE. (6,208) sts LN 3 WRITE (6,205) (ACI, J), J * 2, ND c c see CALL ON RUTIS TO FIND EIGENVALUES AND EIGENVECTORS. GALL"RUTIS (MN, A, ANEW, U, FREQ, ITHAX, EPS2, EPS2, EPS3, LEPS4, EIGVEC, STRIPD, “SWEEP, TAG, TAG2, ITER) ¢ ¢ PRINT VARIOUS RESULTS, AS APPROPRIATE WRITE (6,206) ITER, TaG!, TAG? WRITE. (8,207) DORI eT WN WRITE (6,205) (ANEWCI,J), J = 1, ¥) IF ceigvéc) Go TO 5 WRITE (6,208) 60 To 1 5 IF (NOT, TAG1) 60 TO 6 wai Te" (6,209) 60 To 1 6 IF (NOT. Taga) Go To 7 WRITE" (6, 2109 0 To 1. 7 WRITE (6,211) doa tel, w 8 WRITE (6,205) (UCI), J = 2, ND 60 70 1 c c «FORMATS FOR INPUT AND OUTPUT STATEMENTS 4... 00 FORMAT (6x, E71, 10X, E7.1, 10K, E71, 10X, Fass 16x, 15, tix, 125 tor FoRWaT (3x, 13, dix, 6, 2¢12x, 12) 102 FORMAT (10%, 1465.15 200 FORMAT (62Hi DETERMINATION OF EIGENVALUES OF A MATRIX BY RUTISH AAUSER'S/ W7H__LEFT-RIGHT TRANSFORMATION, WITH PARAMETERS/1H ) 201 FORMAT (7X, 10H EBS] =, E12.3/ 7x, 10H EPS? =, E12.3/ 1 7x, YOH EPS3 = 5 E12.3/ 7X, 10H EPSH =, FEA / 2 7K, WOH FREQ = 2 Th" 7 7X, 10H SWEEP =) Lu”) 202 FORMAT (IHi, WX, THEXAMPLE, 13, 17H,"WITH PARAMETERS/IH ) 203. FORMAT (7%, 20H'N Sw Pk OH LTMAK Tk 1 7X, 10H EIGVEC = % Lu? 9x7 oH STRIPD = 7 Le) 204 FORMAT (29H0 THE STARTING MATRIX & 1S) 205 FORMAT (7%) 10F10.8) 206 FORMAT (1HO, 6x, JOH ITER =, 18/ 7X, 10H TAG. =, Las 1 7x, 10H TAG = Le) 207 FORMAT (35HO THE TRANSFORMED MATRIX ANEW 15) 208 FORMAT (30HO EIGENVECTORS NOT REQUIRED) 209 FORMAT (4SHO EIGENVECTORS NOT COMPUTED BECAUSE ONE OR/ 2AIH) MORE SUB-DIAGONAL ELEMENTS TOO LARGE) m Matrices and Related Topics Program Listing (Continued) 210 FORMAT (UBHO EIGENVECTORS NOT COMPUTED BECAUSE PAIR OF/ 126K EIGENVALUES TOO CLOSED 211 “FORMAT (62HO THE FOLLOWING MATRIX CONTAINS THE NORMALIZED EIGEN avecToRs) c END Subroutine RUTIS DETERMINATION OF THE ELGENVALUES OF AN NXN REAL MATRIX A, BY RUTISHAUSER'S LEFT-RIGHT TRANSFORMATION. — NOTE (1) THE UPPER AND LONER TRIANGULAR MATRICES (WITH THE EXCEPTION OF THE UNIT DIAGONAL OF THE LATTER) INTO WHICH A 1s FACTORIZED, AND ALL SUCCESSIVE TRANSFORMATIONS OF A, occupy array 6. (2) THE ACCUMULATED PRODUCT OF THE LOWER TRIANGULAR, DECOMPOSITION MATRICES IS STORED IN ARRAY X. (3) THE PROGRAM STOPS EITHER KEN ITHAK SUCCESSIVE DECOMPOSE TIONS AND MULTIPLICATIONS HAVE BEEN MADE, OR WHEN THE SUM (SUBSUN) OF THE ABSOLUTE VALUES OF THE SUB- DIAGONAL ELEMENTS OF THE TRANSFORMED MATRIX FALLS BELOW A SMALL VALUE EPS1, (4) FOR THE SPECIAL CASE OF AN INITIAL TRIDIAGONAL MATRIX, SETTING THE LOGICAL VARIABLE STRIPO = TRUE, WICC REDUCE THE COMPUTATION TIME CONSIDERABLY, (5) IF THE LOG(CAL VARIABLE SWEEP = .TRUE., A SPECIAL ROUTINE 1S INTRODUCED TO ACCELERATE CONVERGENCE, PROVIDED THAT SUBSUM IS. LESS THAN EPS (6) THE EIGENVECTORS OF A WILL BE COMPUTED AND STORED IN THE ARRAY U IF THE PARAMETER EIGVEC = TRUE., PROVIDED THAT NO TWO EIGENVALUES LIE WITHIN A SHALL VALUE EPS? OF EACH OTHER, AND THAT SUBSUM Is NOT GREATER THAN EPS3. 
on annnnennnnenanneaanena SUBROUTINE RUTIS (N, Ay B, U, FREQ, ITMAX, EPS1, EPS2, 1EPSS, EPSu, EIGVEC, STRIPD, SWEEP, TAGL, TAG2, ITER) IMPLICIT REAL#S (AcH, O-Z) REAL+6 A, 8, U, EPS1, EPS2, EPS3, EPS4y LENGTH DIMENSION AC1111), 8C11,239, UCL,11), VAI), (22,227 INTEGER BEGINCII), “FINISH(115, FREQ LOGI cat E/GVEC, SfRiPD, SWEEP, TAG, TAG? fate N= 2 dor t= 1,8 TAGL * FALSE. TAG? * JFALSE. y+ DETERKINE THE VECTORS BEGIN AND FINISH ..... Ulnor, “stRIPD) Go TO 3 BEGINGD) = 1 BEGINCN) © NMI FINISH(L) = 2 FINISHEN) = 8 IFW VLE. 2) Go TO 5 Do 2b s'2, aT BEGINCJ) = J = 1 FINISH(J) = J 62 co 1's Dok ved» BEGING) = i FINISH() = continue: DOS tea W SLOW = BEGINGT) SHIGH = FINTSH(TD Do 6 J = JLoW, uMiGH 6 RCL) AL SS Example 43. Rutishauser's Method Program Listing (Continued) as 6 cy rn 19 n 2 23 Ey 5 START LEFT-RIGHT TRANSFORMATION, ITERATING UNTIL CONVERGENCE SATISFACTORY OR ITERATIONS EXCEED ITMAX ..... Do 51 ITER = 1, 1TMAX seees THE MATRIX BIS DECOMPOSED INTO UPPER AND LOWER TRIANGULAR FACTORS D010 ve ly N TOW = BEGINGA) 00.8 1 = ILOW, v SUA 8, iat = = 1 KLOW = BEGIN!) Tetkton cers M1) GOTO 8 DO?” kos KLOW, ThE SUM = SUM + BCT, KD*B(K, J) Bole) = BC,d) = SUM wle ged TMIGH = FINtSH(y) 1F (JPL GT. THIGH) GO TO 15 po 1D 4s del, IHIGH SUM = 0. KLOW » BEGINGI) Os ond TF (KLOW GT. v1) GO TO 10 DO's) k= "KLOW, uM SUM = SUM + BCL RD@DIK, 3) BCL.) = (BC, u) ="SUM)/BCd, 0) THE ACCUMULATED PRODUCT OF THE SUCCESSIVE LOWER TRIANGULAR DECOMPOSITION MATRICES I$ COMPUTED IF CNOT, EIGVEC) GO TO 10. Boa t= 2 W Was od O17 v= 1, Md Xray BCL) © KCL VD KloWs J +1 TF CKLOW .GT, 141) 60 To 17 Do 16 “K = KLOW, IML XC, d) = Cd) # XC KD BCK, AD conrinbe - AND THE FACTORS ARE COMBINED IN REVERSE ORDER .... be TW SLOW» BEGING) Wea IF GLOW GT. 1M1) GO 10 21 DO20 “J = JLOW, IMI BCL,d) = BCL, 19681 u) IPl's te 1 KHIGH = FINiSHCLD TF (1P1..GT, RHIGH) GO 10 20 DO 19” K'= IPL, KHIGH BCI,J) = BCLyu) + BCT-RI@B(K,) ‘CONTINUE SHIGH = FINTSHGS 00.25 “y+ 1, uHIGH PL ed KHIGH = FiNISH(y) NF GIPL GT. KHIGH) GO 10 23 DO 22K" = gPi, KHIGH BIJ) = BCL yd) # BCLKDEBIK, 3) CONT! NOE: CONTINUE DOI tea W Slow» BEGINtY) Satay = FINTSHCLD 0025 “y"» JLOW, JHIGH IF (DABS(HC1,U)5 .LT. 1.0010) 8(1,0) = 0, velel 24 Matrices and Related Topics Program Listing (Continued) 28 30 3 3 33 3 35 so 51 52 33 su 55 ‘THE SUM OF THE ABSOLUTE VALUES OF THE SUB-DIAGONAL ELEMENTS IS COMPUTED . SUBSUM = 0, Do 26 1 = 2, hy SUBSUM = SUBSUM + DABS(B(I,1=1)) DETERMINE COLUMN VECTORS FOR SWEEPING PROCEDURE iF UnoT. (L.£d.PREQ .AND. SUBSUM,LT,EPS® .AND. SWEEP)) GO’ To 47 06377 = i, nm REJECT CASES FOR WHICH DIAG, ELEMENTS TOO CLOSE 6030 PW NE COABS(B(Jeu7 = BCI, 1)2.LT.EPS2 «AND. JANE.1) GO TO 37 Plas + Bo 32 1T = vPa, W Vane gph tT Vcty = BUI,d) Ie ted TFC ea.) GO To 32 Do $1 °K = 1Pi, 0 VO) = VOL) # BCL RDeViK) VO) = vO /BO) 2 BCD) zs MODIFY LOWER TRIANGULAR PRODUCT MATRIX , Bit uP Vane uPL> IT XCLd) = KCL) + VOD) mars = 1 IF GPL Jet, 1Ma) 0 To 38 0033" K'= JPL, IML XC0) = XC, SS + KC KDOVORD cONTINGE POSTMULTIPLY 8 WITH SWEEPING MATRIX... O35 De aw 035 Kk 2 Jbl, W BEd)» BCL, dS + BLP, KD eVEKD esse PREMULTIPLY 8 MITH INVERSE OF SWEEPING MATRIX. 60°38 1 vPa, WN 0036 K 21,8 BEARD = BCILK) = VCID«RC RD CONTINUE TF GNOT, STRIPD) GO TO 41 duo Se BEGING) = 2 FINISH(J) = N CONTINUE CONTINUE CHECK FOR CONVERGENCE HF CwoT. (Loea.FREQ .OR. ITER:EGLITMAX JOR, SUBSUM.LT.ES1)) 1 0 To so L=o CONTINUE fF (SUBSUM LT, EPS1) GO To 52 poss T= 1, it XCD) = 0. CHECK TO SEE IF EIGENVECTORS ARE REQUIRED OR IF ANY TWO EIGENVALUES ARE CLOSER TOGETHER THAN EPS? 1E (NOT. 
EIGVEC) GO 70 72 HF (3usSUM «LE. EPS3) GO TO Su TAGL = TRUE. G0 T0 72 0055 1 = 1, NHL tre tea. DO ss y= IPL, VF (aps(BC115 = 8(y,u)) .GE. EPS2? GO TO 55 TAG? =. TRUE. 60 To 72 CONTINE Example 43 Rutishauser's Method Program Listing (Continued) c © 4.4.5 COMPUTE EIGENVECTORS OF TRANSFORMED MATRIX ....+ 60°82 VN X98) 21. TF & .£0."1) 99 10 62 wae dy Do 61 “iT = 3, uma beg - it SUM = BCL,u) ipl et +'1 IF (IPL let. gM2) GO 70 61 00 60" K'* IPL, JMi 60 SUM = SUM + BUY, KlexK,J) BL XC1yJ) = SUM/(BOJ,J) = BCT, 199 52 CONTINUE c © s.,24 COMPUTE EIGENVECTORS OF ORIGINAL MATRIX ..... 087 Teak IND) = IF Cl £0. 1) G0 10 65 DO gy = 1, IMD UCL) = XC ea) wae yy TF i eq. 3) 60 10 64 Bo 3 "k= 1, umn 63 Vere) = UCEaI# XC KDEXCK, dD Sh conTINDE 85 0067 Jw, N UCSD) ® XC) Wis tT IF CL .Q, 1) 60 10 67 D0 6 "k= 1, 1M BE UCL) = UES) # XC KIAXCK SD commie © ayese NORMALIZE THE EIGENVECTORS ..... OH ee TN SUMSQ = 0 00.70 1 5 a,N 79 SUMSQ » SUMSQ + UC1,J)*02 LENGTH = DSQRT (SUMSQ) pO t «4, W TA UCTS) = UCL) LENGTH ° 72 CONTINUE RETURN c Sy Data EPS1 = 1,0E-7_ EFS2_= 1.0E-5 EPS3 = 1,0E-8 PSH * 0,1 FReQ = 3 SWEEP = T Neu | NMAX™ 25 Elgvec s T STRIPD = F AG) = 10. "9, 5. 9. 10. 8 Tok Tr 5.6: 5: Ne 4 imix "25" Elgvec = T — sTRIPD = F MAM © 6 kw Sogo so 8h oa! ook al gt Ns 4 ITMAx ='"25 " Ergvec =F sTRIPD # F AGD = 5. Be Bl o3l os “eal ost 46 Program Listing (Continued) Data (Continued) ‘Matrices and Related Topics N= g _ ITMAX = 100 ElgvVEC = T | STRIPD = T AGQy #2. oO, a oo 0 “oo ° goal °. alot ° oll o: 0: os °, oon Neg | LTWAx eigvec = T AG = 2S oO. a o.oo ° oil oo ° nol ° at U2 0. ofl °: oa 0. ot o. oo Computer Output DETERMINAT/ON OF EIGENVALUES OF A MATRIX BY RUTISHAUSER'S CEFT-RIGHT TRANSFORMATION, WITH PARAMETERS, €Ps1 = 0,1000-06, eps? =~ (0:100D=02 EPs3 = 0:1000-03 EPss = old Freq = 5 SWEEP > T EXAMPLE 1, W/TH PARAMETERS. N . 4 iTwax S25 Elovec = oT STRIPD =F YHE STARTING MATRIX A 15 10,000000 9.000000 9,000000 10.0000 7.000000, 5.000000, 17ER TAG TAG? 35000000 10,000000 51000000 7; 000000 900000 ‘000000 5.000000, 6:000000 7.000009 5.000000 u THE TRANSFORMED MATRIX ANEW 15 '30,288685 42, 279007 10,019862 0.8 $8057 1.007316 0.0 [au3i07 oo 5.000000 0,702164 01316835 0,010150 FOLLOWING MATRIX CONTAINS THE NORMALIZED EIGENVECTORS, 0.520925 -0.625396 0.567641 0.123697 01552955 =0:271601 -01760318 -0,208554 0.528568 0.614861 0.301652 -0, 501565, 101380262 01596306 -0,093305 0:8304KN Example 4.3 Rutishauser's Method ui Computer Output (Continued) EXAMPLE 2, WITH PARAMETERS EXAMPLE 3, LTH PARAMETERS 4 - 4 N eo Vax = a5 iWax = 25 Elevec = Tt Elvec * F STRIPD =F STRIPD =F THE STARTING MATRIX A IS THE STARTING MATRIX A 15 6.000000 4.000000 4.000000 1, 000000 4.000000 -5,000000 0.9 3.000000, 4000000 6.000000 1,000000 4: 900000 00 '$1000000 -3:000000 5.000000, 2000000 1.000000 6-000000 4000000 5.000000 ~$:000000 4000000 0.0 1000000 4.000000 v.scaa00 §.000000 31000000 0.0 51000000 4.000000 iter = 12 Iter = 25 Tcl =F Tag) ‘ mez = 7 mgr =F THE TRANSFORMED MATRIX ANEW 1S THE TRANSFORMED MATRIX ANEW IS 15,000000 4.999915 5,088¢00 1.000000 12,0000 -8,678738 3.000000 3.000000 =0;000000 5.000000 -0,000000 37000000. 0;000000 2.131230 -5.000000 -2.000000, 000000 -0.000000 5.000000 3.00005 01000000 5.255936 0.151250 -3.452492 0.0 (0.000000 0,000000 -1.000000 0:0 =01000000 -0:000000 2:000000 E\BENVECTORS NOT COMPUTED SECAUSE PAIR OF EIGENVECTORS NOT REQUIRED EIGENVALUES TOO CLOSE. 
EXAMPLE 4, WITH PARAMETERS N = 8 Tmax = 100 Elovec = STRIPD = JME STARTING MATRIX ATS 2.000000 =1.000000 0.0 0.0 0.0 9 2.9 000000 2000000 1.000000 0.0 0.0 : 0:0 30 15000000 2.000000 1.000000 0:0 0:0 0:0 0:0 010 15000000 2.000000 1.000009 0,0 9.0 0:0 0:0 a:0 -1!000000 "2.000000 -1: 000000 or 0:0 0:0 0:0 0:0 1000000 2000000 -1:000000 0.0 a2 0:0 0:0 9.0 0:0 =11000000 2.900000 ~1: 000000 0:0 Bie 010 0:0 0:0 0:0 11000000 2,000000 ITeR = asl =F maz =F THE TRANSFORMED MATRIX ANEW 15 3.879385 -1,002900 0.0 2.0 0.0 0.0 0.0 31532089 -1, 000000 0.0 010 0:0 9.0 0000000 3:000000 -1:000000 9.9 0.0 0:0 010 2,347296 -1.000000 0. 0:0 0:0 O10 12652708 ~1,000000 0.0 of0 O10 0.0 1000000 oo 0.0 0:0 O00 a0 11000000 ovo 0:0 oro 0:0 0,120615, THE FOLLOWING MATRIX CONTAINS THE NORMALIZED EIGENVECTORS 0.161230 0, 303015 O,e6u243 0.464243 D.NoRzAS 503013 -01464243 ~0,408248 -0,161250 0.161250 0.408243, sog2ea 0.408248 -0,000000 -0.uo8z¥s -0.4082u8 0.000000, 0, konDsa =DiNeKz03 -0;361230 01408244 01308013 0.503013 ~0,40B2NE 0. 46u243 .46N243 =0.361250 -0/408242 0.303013 0.303013 -0.408248 -0.161230 0.464203 0.408248 0, 408748 -0, 000009 -0.408748 0.408748 -0.000000 0.408248 9. 408zN8 0.303013 ~0.464245 0,40826E ~0,162230 0.161250 0.408248 0.464243 0.303015, -0:161230 01303013 -0,40828 0.864243 -0.u6u243 DLNOE2UE 0.503013 0.161230 248, Matrices and Related Topies ‘Computer Output (Continued) EXAMPLE 5, WITH PARAMETERS N = 9 imax = 100 Elovec = STRIPD = T THE STARTING MATRIX AIS 2.500000 -2,000000 0,0 0.0 ° 0.0 0.0 0.0 11000009 2/000002 -1 000000 0.0 0.0 0.0 0.0 0:0 =11000000 2:000000 -1:000000 0.0 0:0 oro 0:0 O10 15000000 2.000000 -1:000000 0.0 0:0 oro 0.0 0:0 11000000 2.000000 -1:000000 9.0 0.0 oro 0.0 0.0 11000000 27pga00 -1:000000 0.0 0.0 0.0 0:0 =1/000000 2000000 -1:000000 oro 0.0 0:0 010 11000000 2.000000 -1:000000 0.0 0:0 a0 0.0 o%0 0:0 21000000 2500000 ITER = 62 wel + F magz =F THE TRANSFORMED MATRIX ANEW 15 4.000000 ~2,000000 0.0 0.0 oo 2.9 0;000000 4088793 ~1.000000 0,0 0.0 0.0 31839861 ~1,000000 0,0 0.0 9229 ~11000000 0.0 ‘21122654 -1:000000 1,355416 0:0 =15 000000 OL au6343 THE FOLLOWING MATRIX CONTAINS THE NORMALIZED EIGENVECTORS. 0.516398 ~0,41843 0.441028 0,439670 0.437037 0.431576 0.428417 0.379102 0.251585 =01587298 01350999 -0.229308 -0,085566 0.082461 0.266983 0.376084 0.425475 (0,258285 -0.291520 20.087920 -0.363582 ~0.447150 -0.272572 0.069608 0.365200 70/129089 0.257503 "0.364695 0.408875 0.027626 -0.422554 -0.285756 0.208180 07000000 ~0.246562 -0:473667 -0.000000 0.450538 -0.000000 -0.440421 ~0,000000 0.386978, 0,129099 0.257509 0.364695 -0,408873 -0.027676 0.422555 -0.285756 ~0,208180 0.378011 07258199 ~0.291320 -0.087920 0.363582 0.447150 0.272372 0.069609 -0,363200 0.351576 01387298 0.350999 -0.229308 0.085566 0.082462 -0.24696% 0.376084 -0,425475 0.308750 07516398 -0.441843 0.442928 -0,439670 0.437037 -0.431576 0418417 -0,379102 0.251665 Example 4.3 Rutishauser's Method 9 Discussion of Results (Examples 1-5) (1) The eigenvalues are 30.288685, 3.858057, 0.843107, and 0.010150; they are remarkably well-spaced. The conditionsussuM < EPS1 (= 10" *)stoppedtheiterations. (2) The eigenvalues, 15.0, 5.0, 5.0, and ~1.0, include a coincident pair. The eigenvectors were requested but ‘were not computed because a pair of eigenvalues was found to be closer together than EPS2 = 0,001, (3) The eigenvalues, 12.0, 1.0 + 5.0%, and 2.0, include 3 complex conjugate pair. In such cases, it is typical that a two-row minor, in this example 4x aay 3. 
May does not converge, but that the eigenvalues of its matrix do converge (to 1.0 + 5.0/ and 1.0 - 5.0i). (4) This tridiagonal matrix arises when treating unsteady- state heat conduction in @ slab as @ characteristc~ value problem (see Section 7.26), Note that the eigen- vectors are alternately antisymmetric and symmetric. (5) This unsymmetrical tridiagonal matrix arises when treating flow of a reacting fluid between two parallel plates as @ characteristic-value problem (see Problem 724). 250 ‘Matrices and Related Topics 4.8 Jacobi’s Method for Symmetric Matrices We consider here an iterative method, credited 10 Jacobi, for transforming a real symmetric matrix A into diagonal form. This method consists of applying to A 1a succession of plane rotations designed to reduce the off-diagonal elements to zero. Let Ao = A andlet U,,i> 1, tbe orthogonal matrices. Define Ay = Uy*AoUy, A; U;A,U,, and, in general, (4.48) Anes = Up AU is (4.49) and Aj, has the same eigenvalues as A. Moreover, for all indices &, A, is real and symmetric. Recall that Sp (A*A) = Ye Dj yyy OF fOr A teal and symmetric, Sp(A®) = Sey Var aiy If T is any non- singular transformation, T~'A7P has the same eigen- values as A?, and thus Sp (T7!A?T) =p (A?), since it is the sum of the eigenvalues. Thus, if Ax = (aff), we have, for (4.48) (apy = SF ad, Dy aes If, now, the sequence {U,) can be so chosen that Seater > Say, the sequence (err) will be convergent, since it is monotonic and bounded above by Sp(A?). If, in addition, Som Serr approaches 2, then (4.50) and, if lim... Uy = U, AU =U disg (4, Ay vA): (451) ‘The further character of U,s1 is now described. Let afP(@#,) be a nonzero clement of Ay. Let the matrix Uns, be the m xn identity matrix I except that the ith row has been replaced by a row of zeros, other than cos a in column i and sin « in column j, while the jth row has been replaced by a row of zeros, other than sina in column ‘and ~cos ain column j. The method of choosing will be detailed shortly. Note that Ux. is self-inverse. ‘The result of both premultiplyingand postmultiplying A, by Urs1 is to leave the elements of A, unaltered, except those in rows i and j and columns é and j. At the moment, we concern ourselves only with aff*”, aff*! and ait*), When A, is premultiplied by U, ,, the new entry in the position is af? cos « + af® sin a, that in the j position is ai cos + a) sina that in the positon i sin a~ at) cos, and that in the ji position is —a{) cos a+ ‘ay sin a, When the result this far is postmultiplied by Uger, We have =at “0 costa + alsin? a + 2alf sin a cos 23 yer aff ait) = giP sin? a+ alt cos? a — 2a? sin 20s a3 aft? = (af? — alt) sin a cos a ~ aft (cos? a — sin? a). s+!) = 0. Then Now choose « so that af aft) a g(t) LF 608 28 | guy 1 = 80828 Grin 2a 2 2 cart) — (ay L008 2a | yy L008 20 ay a7 = ofp LOBE 4. gy ARCO of) sin 2a; and, unless aff? tan 2 = 2afP (ait) — af), (4.52) where Direct verification shows that Cot) (lp? = (oy? (ay + 20 (4.53) Thus, since the sum of the squares of all entries is constant, the’diagonal terms have gained in dominance. It can also be verified that 0 4 git) a git) 4 git aff + aff? = ai? + af) aff? = Hal? + af) +P; aft = Ha? + of) — 0; ff) di? copy = (E52) scalps Of) = ff)sin 29; ree at = (PSD) eos ae (4.54) Using (4.53) it is relatively easy to prove that the process actually does have the desired result if at each stage the off-diagonal element of largest value is used in the role of aif above. Let Sem BP Cal? — Blot? 
The number of nonzero off-diagonal elements contribu- ting to S, does not exceed r =n? — n — 2 (since at least ‘two elements are zeroed at each iteration). Hence, the 48 Jacobi's Method for Symmetrie Matrices 251 largest of the quantities (a(P)? equals or exceeds Sy/r. When the next step is performed, we have 2 Shor SS) = S, = Sy, where 0 < ys < J. Thus S, < p!~1S;, and itis clear that S, has the limit zero. If the program is carried out on a computer, the time taken to search for the largest current |a{!)| may be con- siderable, It is therefore natural to inquire whether it suffices to choose the position (i,j) of the element to be zeroed at step k +1 in some definite sequence, say in the order (1,2), (1s3)y ++ (yt) (23) (2,8)s --s- 2sM), (x= 1,n), then return to (1,2) and sweep through the same sequence again. That this leads to the desired result has been established by Forsythe and Henrici (3). In the fimit we have diag (Ay Ay... 3 (4.55) where U = lim)... U, Uz ... Up Thus the columns of U are the eigenvectors of A. Observe that we have proved, as a by-product, that a real Hermitian matrix of order n has n linearly independent eigenvectors, mutually orthogonal Since the matrices of the sequence A, formed by the process described are symmetric, we can economize on, storage by storing about half the original matrix, and overwriting all subsequent matrices in the same storage locations. Schematically, for n = 5, we have A 2 My hie As 422 429 24 ds 43) ase as =UAU, If we need the column [22423 dyz agp as;)' of the original matrix, we use the “bent” arrangement ayy 23 O34 Aas. The number of storage locations for this arrangement is (+ m2 ‘The program for determining the quantiies sin 2 and cos from tan 2x needs care if accuracy is to be main- tained, If we have tan 2a = a/b, we write this in the form tan 2a = plq, where q = [6] and p= a (sign 8). Then sec? 2a= 1 + pig? and cos? 2x = g?/(p® + 42), whence cos deal ee, and, as required, ~n/4 0, there is no loss of accuraey. Note that (4.56) is valid for determining cos a, even though q should be zero. To obtain sine from tan22™ plq, sin2x= pls/p* + @ (even if q = 0) and ——— 2eosalp +a A refinement of the procedure is as follows. Suppose that at an early stage of the iteration aft is already small ‘There will be litle virtue in making this zero since it will not mean decreasing the sum of the squares of the off-diagonal elements appreciably. Therefore, during the eth sweep of the off-diagonal elements systematically, 3 rotation is omitted if Ja] <¢, The set of values & might be a decreasing set and ¢ should be zero after 2 smnall number of sweeps have been made. If no eigenvectors are required, then the rotation matrices U, can be discarded as used. If required, the product IU, U, ...U, may be formed in n? additional locations, wherein the identity matrix I is stored intially. cosa = Vil + ai/p? +a"). sin 457) EXAMPLE 4.4 JACOBI'S METHOD Problem Statement Write a program that implements Jacobi’s method for finding the eigenvalues and eigenvectors of an n x 1 real symmetric matrix A. Method of Solution ‘The program given below follows in detail the pro- cedure described in Section 4.8, However, we can dis- pense with the iteration index k, so that the starting matrix and all its subsequent transformations will be denoted by the common symbol A. The product of the successive orthogonal annihilation matrices is the matrix T = 1U,U,U3... 
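The rotation just described is compact enough to sketch directly. The program below is in Python with NumPy rather than the FORTRAN program of this example; it sweeps cyclically through the off-diagonal positions, takes the angle from tan 2α = 2a_ij/(a_ii − a_jj) as in (4.52), skips elements already below a tolerance, and accumulates the product of the rotations so that its columns approximate the eigenvectors. The plane rotation is written here in the ordinary orthogonal-rotation form rather than the self-inverse form used in the text; either choice, with the angle taken from (4.52), reduces the selected element to zero. The test matrix is arbitrary.

    import numpy as np

    def jacobi_eig(A, sweeps=10, eps=1.0e-12):
        # Cyclic Jacobi sweeps for a real symmetric matrix.  Returns the
        # diagonal of the final matrix (the eigenvalues) and the accumulated
        # product T of the rotations (its columns are the eigenvectors).
        A = A.astype(float)
        n = A.shape[0]
        T = np.eye(n)
        for _ in range(sweeps):
            if np.sum(np.abs(A - np.diag(np.diag(A)))) < eps:
                break
            for i in range(n - 1):
                for j in range(i + 1, n):
                    if abs(A[i, j]) < eps:
                        continue
                    alpha = 0.5 * np.arctan2(2.0 * A[i, j], A[i, i] - A[j, j])
                    c, s = np.cos(alpha), np.sin(alpha)
                    U = np.eye(n)
                    U[i, i] = c; U[j, j] = c
                    U[i, j] = -s; U[j, i] = s
                    A = U.T @ A @ U          # annihilates A[i, j] and A[j, i]
                    T = T @ U
        return np.diag(A), T

    # Arbitrary symmetric test matrix.
    A = np.array([[ 4., 1., 2.],
                  [ 1., 3., 0.],
                  [ 2., 0., 5.]])
    lam, T = jacobi_eig(A)
    print(np.round(np.sort(lam), 6))
    print(np.round(np.sort(np.linalg.eigvalsh(A)), 6))   # same values
    assert np.allclose(A @ T[:, 0], lam[0] * T[:, 0], atol=1.0e-8)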
‘The program takes account of the symmetry of the starting matrix and its subsequent transformations, so that the calculations do not involve elements in the strictly lower triangular portion of A. That is, the “bent” vector arrangement of Section 4.8 is used. Unfortunately, because of FORTRAN restrictions, we must also reserve storage for the lower triangular portion of A, even though it is aot used. In some other programming languages, such as MAD (Michigan Algorithm De- coder), that permit the definition of unique subscription functions, it would be simple to compact A into tri- angular tather than square shape, For each Jacobi iteration, each element in the strictly ‘upper triangular portion of Ais annihilated row by row, in the order: 42,4, + pni 4335 Bray +--+ Bayi 282 4,.-y- Ian element aj, is already smaller in magnitude than some value ¢, it s simply bypassed in the iterations. Before the first iteration, the sum of squares S of all elements in the full matrix A is computed. The sum of the squares of the diagonal elements of A before and after each complete Jacobi iteration is computed and saved in a, and oa, respectively. The criterion for ending the procedure is normally a See % At this point, both 9, and 2, should almost equal 5: in fact, this correspondence could alternatively be used as the criterion for termination. An upper limit, Kgs, iS also placed on the total aumber of iterations. The eigenvalues 4,, 43, .., 2, are the diagonal elements of the final transformed matrix A. The elements of the corre- sponding eigenvectors are in successive cofumns of T. The following flow diagram is intended to give an overall picture of the method, However, because of the special nature of the rotation matrix, U-'AU does not have to be expanded fully itr the program; in fact, only elements in the ith and jth row and in the ith and jth column of A will be modified at each step. It is also unnecessary for U to appear specifically, since it is simply I modified by sin and cosa in a few (known) positions. The product matrix T, however, is updated at each step. Example 44 Jocobl's Method 253 Flow Diagram Ter 1 cosae cs qe Fa-tac—aul] ay) T nm: 1 4<4 a 4 singe v2 7 laud Sea i 3 [Mt core [it vp +g 4 t_---____-: singe > 2cosaVp? +4? Ta U+l 1y #608, 14+ sin TeTU a As U7AU (=) hy oy Sty 254 Matrices ant Related Topies FORTRAN Implementation List of Principal Variables Program Symbol A AIK CSA SNA EIGEN esi EPS2 EPS3 TER imax N oFFDsa Ba s ‘SIGMAt, SIGMA2 spa T Definition Upper triangular matrix, A. Vector used for temporary storage. cos and sin a (see equations (4,56) and (4.57)). Vector of the eigenvalues 4a, 43, ... Tolerance, ¢,; for g
