Numerical Solution of Differential Equations (Second Edition)

M. K. JAIN
Department of Mathematics
Indian Institute of Technology, Delhi

NEW AGE INTERNATIONAL (P) LIMITED, PUBLISHERS
New Delhi, Bangalore, Calcutta, Chennai, Guwahati, Hyderabad, Lucknow, Mumbai

Copyright © 1979, 1984 New Age International (P) Ltd., Publishers
Reprint, 2002

NEW AGE INTERNATIONAL (P) LIMITED, PUBLISHERS, 4835/24, Ansari Road, Daryaganj, New Delhi-110 002
Offices at: Bangalore, Chennai, Guwahati, Hyderabad, Kolkata, Lucknow and Mumbai

This book or any part thereof may not be reproduced in any form without the written permission of the publisher.

ISBN : 0 85226 432 T

5 6 7 8 9 10

Published by K.K. Gupta for New Age International (P) Ltd., 4835/24, Ansari Road, Daryaganj, New Delhi-110 002, and printed in India at Sarasgraphics, New Delhi-110 064.

FOREWORD

It is now nearly thirty years since the advent of the high speed electronic digital computer transformed numerical analysis from a somewhat esoteric science to a practical discipline of immense importance. For twenty-five of those years Professor Jain has been studying the numerical solution of differential equations, a study culminating in the present book. There used to be an academic custom that a Professor, having professed his subject for a sizeable proportion of a lifetime, should crown his work by producing a definitive book on the subject. The custom has sadly fallen into disuse of late, and it is pleasant to see Professor Jain revive it. Jowett of Balliol was epitomised in the couplet:

"I am the Master of this College,
What I don't know isn't knowledge."

Readers can be assured that, so far as the numerical solution of differential equations is concerned, that which is not in this book isn't knowledge either.

D.W. Barron
University of Southampton
Southampton, U.K.

PREFACE TO THE SECOND EDITION

Of the number of different approximation methods for solving differential equations, the most important are the methods of finite difference and finite element. The fundamental requirements common to these two methods are consistency, stability and convergence. With the development of high-speed computers, it has been realized that numerical methods with strong stability are economical from the computational view-point. The stability requirements to be satisfied are: the stiff system in the left-half plane, the highly oscillatory system on the imaginary axis, and the convection-diffusion system in the right-half plane.

In this second edition nearly all present-day methods of solving differential equations are presented, and the convergence and stability of the methods are derived. Chapters 1-4 deal with numerical methods for the initial and boundary value problems of ordinary differential equations. The adaptive methods developed to stabilize the numerical methods for a particular problem have been discussed in Chapters 2 and 3. The variable step methods to solve stiff differential equations of singular perturbation problems have been presented in Chapters 3 and 4. The cubic spline and the compact implicit methods for the general second order differential equation, and the convergence analysis for the eigenvalues of the Sturm-Liouville problem, are discussed in Chapter 4. Chapters 5-7 are concerned with difference methods for the partial differential equations of three types: parabolic, hyperbolic and elliptic.
The difference methods for time-dependent convection-diffusion and cylindrically symmetric equations are given in Chapter 5. The system of conservation laws in one and two space dimensions is included in Chapter 6. The elliptic equations with convection terms have been presented in Chapter 7. Chapter 8 on 'Finite Element Methods' has been enlarged to include in detail the solution of ordinary and partial differential equations.

Also included in this edition are an additional 78 references and 120 problems, including BIT problems. A number of problems have been solved to illustrate the methods currently used. The advantages and disadvantages of the methods have been discussed from the computational view-point. A bibliography at the end of the book and a bibliographical note at the end of each chapter are given. Answers and hints to the problems are listed at the end of the book.

Numerical Solution of Differential Equations will be appropriate as a text for at least two courses, for first year graduate students and possibly advanced undergraduates in mathematics, engineering, computer science and physical sciences such as biophysics, meteorology, physics and geoscience. The book will also be useful as a reference tool for researchers using finite difference and finite element methods.

This edition has incorporated the suggestions received from teachers teaching courses on the numerical solution of ordinary differential equations and partial differential equations. In particular I am indebted to the reviewers, Prof. D. Greenspan, Prof. B.A. Finlayson, Prof. A. Iserles, Prof. Dr M.N. Spijker, Prof. A. van der Sluis and Dr W.H. Mills, for advice on additions, corrections and deletions. Thanks are also due to Prof. C.E. Fröberg for allowing the use of BIT problems. I am grateful to Prof. S.R.K. Iyengar and Dr R.K. Jain for suggesting several important improvements in the manuscript. My thanks are also due to the Director, IIT Delhi, for providing facilities for writing this book. I express my sincere thanks to Mr Ranjit Kumar for his assistance in the preparation of the manuscript. Finally, I wish to thank my mother Manbhari, wife Usha and son Rabindra for their tolerance and understanding while this book was in preparation.

New Delhi
September 1983
M.K. JAIN

PREFACE TO THE FIRST EDITION

It is a well-known fact that the majority of differential equations in science and engineering cannot be integrated analytically. In these cases it is necessary to apply some method of approximation. There exists a large number of different approximation methods for solving differential equations, the most important of which are the methods of finite difference and finite element.

This book is based on a series of lectures given to undergraduate and postgraduate students, and at short-term courses under the Quality Improvement Programme at the Indian Institute of Technology, Delhi, during the last six years. The book is intended to serve as a text for students of mathematics, science and engineering who have acquired some knowledge of advanced calculus and elementary numerical analysis.

Chapter 1 provides an introduction to the problem of numerical integration of differential equations. Chapter 2 contains a discussion of the Runge-Kutta and allied singlestep methods. Chapter 3 presents a detailed treatment of multistep methods. Chapter 4 gives the derivation and implementation of the numerical algorithms for two point boundary value problems.
Chapters 5, 6 and 7 contain numerical methods for the solution of parabolic, hyperbolic and elliptic differential equations. Chapter 8 includes a brief discussion on finite element methods, The problems given at the end of Chapters 2—8 are mainly theore- tical, and may be supplemented by programming and running some examp- les by the methods discussed in the text. A short bibliographical note at the end of each chapter is given to guide the reader to-a few standatd books and research papers which may be profitably consulted either along with the present book or subsequently for a more rigorous and detailed study. 1 gratefully acknowledge the generous help I have received from many colleagues and students in the preparation of this book. I am particularly grateful to DrS.R.K. Iyengar and Dr R.K. Arora for Chapters 2—3, to Pro- fessor M.M. Chawla, Dr J.S.V. Saldanha and Dr R.G. Gupta for Chapter 4, to Dr A.G. Lone for Chapter 5, and Dr (Mrs) Raj Ahuja for Chapter 6. 1 am also thankful to Dr R.K. Jain for suggesting several important im- provement in the manuscript, My thanks are also due to Mr U. Anantha xii PREFACE Krishnaiah for working out the problems in the book and Mr J.K. Jain and Mr J. Kurian for the computational work. I am deeply obliged to Mr M.P. Jain for critically reading the manuscript. I am thankful to Professor P.G. Reddy and Professor K.D. Sharma for helpful discussions. My thanks also go to Mr R.S, Yadav and Mr D.R. Joshi for their assistance. Finally, 1 ex- press my sincere thanks to Miss Neelam Dhody and Mr Ranjit Kumar who patiently typed the manuscript over the last five years as it progressed from year to year. New Delhi MLK, JAIn December 1978 CONTENTS Foreword Preface to the Second Edition Preface to the First Edition Cuarrer 1, Elements of Ordinary Differcatial Initia! Value Problem Approximation 1. Introduction 1 1.2 Initial Value Problems 2 1.3 Difference Equations 3 1.4 Numerical Methods 7 1.5 Stability Analysis 12 Anterval of absolute stability 1.6 Convergence Analysis 17 Bibliographical note Problems Cuaprer 2. Singlestep Methods 2.1 Introduction 27 2.2. Taylor Series Method 28 Convergence 2.3 Runge-Kutta Methods 3/ Second order methods Third order methods Fourth order methods High order Runge-Kutta methods Results from computations for Runge-Kutta methods Convergence Approximation of truncation error 2.4 Extrapolation Method 46 Euler extrapolation method 2.8 Stability Analysis 50 Fourth order Runge-Kutta method Buler extrapolation method vit xt 27 ai CHaPrer 3. 
CONTENTS: Implicit Runge-Kutta Methods Second order meitiod Third order method Fourth order method High order implicit Runge-Kuttit methods ObrechkolT Methods 6/ Second order methods Third order methods Fourth order method Systems of Diflerential Equations 67, Taylor series method Runge-Kutta methods Stability analysis Still system of ditlerential equations {ligher Order Dillerential Equations 73 Runge-Kutra methods Stability analysis Adaptive Numerical Methods 1 Runge-Kutia-Treanor method Runge-Kutta-Liniger-Willoughby method Runge-Kuita-Nystrom-Treanor method Bibliographical now Problems Multistep Methods 3 3.2 33 34 3.5 3.6 Introduction 43 Explicit Multistep Methods 94 Adams-Bashforth formulas (/=0) Nystrom formulas (7=1) Formulas fur, /=9, 1, 3,5 Results from computation for predictor methods Implicit Multistep Methods |/U2 Adams-Moutlton formulas ( Milne-Simpson formulas ( Multistep Methods based on Differentiation /05 General Linear Multistep Methods /(16 Determination of aj and bi Estimate of truncation error Stability and convergence ‘Other stability results Propagated etror estimates Predictor-Corrector Methods 126 Use of implicit multistep methods PU:C)"E scheme Results from computation for Adams P-C methods Modified predicior-corrector methods Hybrid Methods /40 One step hybrid methods ‘Two step hybrid methous Implementation of hybrid predictor-corrector methods ONTENTS W 3.8 Higher Order Differential Equations /47 Hybrid methods Obrechkoff methods Adaptive numerical methods Results from computation 3.9 Non-uniform Step Methods 160 ‘Adams-Bashforth methods Adams-Moulton methods Results from computation Bibliographical note Problems Difference Methods for Boundary Value Problems in Ordinary Differential Equations 173 4.1 Introduction 773 4.2. Approximate Methods 174 Shooting methods Difference methods Difference approximation to derivatives 4.3 Nonlinear Boundary Value Problem w=flx, x) 180 Difference scheme based on quadrature formulas Second order linear boundary value problems Solution of tridiagonal system Mixed boundary conditions Boundary condition at infinity High order methods 4.4 Nonlinear Boundary Value Problem SI) 200 Difference schemes Compact implicit difference schemes Difference schemes based on cuble spline function Second order linear differential equation with significant first derivative 4.5 Convergence of Diflerence Schemes 2/3 4.6 Nonlinear Boundary Value Problem Mid=flx,y) 2S Difference schemes Pourth order linear boundary value problem Solution of five-band system 4.7 Linear Eigenvalue Problems 225 Eigenvalues and eigenvectors The iteration method Convergence analysis 4.8 Results from Computation 232 4.9 Nonuniform Grid Methods for the Second (Order Boundary Value Problems 235 Nonlinear boundary value problems 1 =f(x..1) Nou linear boundary value problems y”=ftr, ¥,9") xvi CONTENTS Results from computation Bibliographical note Problems CapTeR 5. 
Difference Methods for Parabolic Partial Differential Equations 251 5.1 5.2 5.3 5.4 55 5.6 5.7 5.9 Introduction 25/7 Difference Methods 254 Difference Schemes for Equations in One Space Dimension with Constant Coefficients 258 Two level explicit difference schemes Multilevel explicit difference schemes Explicit difference schemes for the diffusion convection equation ‘Two level implicit difference schemes Multilevel implicit difference schemes Implicit difference schemes for the diffusion convection equation Implementation of Difference Schemes 277 The initial value problem The initial Dirichlet boundary value problem The initial mixed boundary value problem Results from computation Stability Analysis and Convergence of Difference Schemes 288 Matrix stability analysis Convergence of difference schemes Difference Schemes for Equations in Two Space Variables with Constant Coefficients 296 Explicit difference schemes Implicit difference schemes Alternating direction implicit (ADI) methods ADI Methods for Equations in Two Space Variables with a Mixed Derivative 309 Two level implicit difference schemes Three level methods Results from computation ADI Methods for Equations in Three Space Variables with Constant Coefficients 3/6 Difference Schemes for Equations with Variable Coefficients 32/7 ‘One space dimension Two space dimensions ‘Three space dimensions Stability analysis ADI formulas Results from computation 5.10 Difference Schemes for Fourth Order Equations 335 Direct procedure CONTENTS Cnaptre 6. xvii The Richtmyer procedure Results from computation 5.11 Nonlinear Parabolic Equations 349 Heration methods 5.12 Difference Schemes for Equations with Cylindrical Symmetry 359 Implicit 1wo level schemes Approximation at the boundary Two space variables Results from computation Bibhiographical note Problems Difference Methods for Hyperbolic Partial Differential Equations 380 6.1 Introduction 380 6.2 Difference Schemes for Hyperbolic Equations in One Space Variable with Constant Coefficients 380. Explicit difference schemes Implicit difference schemes Results from computation 6.3 Difference Schemes for Equations in Two Space Variables with Constant Coefficients 389 Explicil difference schemes Implicit difference schemes ADI methods Results from computation 6.4 Diflerence Schemes for Equations in Three Space Variables with Constant Coefficients 398 6.5 Difference Schemes for Equations with Variable Coelicients 400) ‘One space dimension Two spiive dimensions Results fram computation 6.6 Locally One Dimensional (LOD) Methods 405 Two Space dimensions Three space dimensions esulls frony computation 6.7 Diflerence Schemes for System of Equations in One Space Variable 407 First order byperbolie scalar equation System of equations Systems in conservation form Stability i sis 6.8 Implementation of DilTerence Schemes 4/4 Initial boundary value problem Results from computation xviii CHAPTER 7. CHarrr 8. 
CONTENTS 6.9 Dillerence Schemes for System of Equations in Two Space Variables 427 Stability analysis Systems of conservation laws in two space dimensions Results from computation Bibliographical note Problems Difference Methods for Elliptic Partial Differential Equations 448 7.1 Introduction 448 7.2 Difference Schemes 448 Difference approximation to r* Difference approximation so p# 7.3 Dirichlet Problem 458 74 Iterative Methods 46/ Jacobi method Gauss-Seidel method Successive aver-relaxation (SOR) method Richardson method Results from computation 7.5 Alternating Direction Method 476 7.6 Neumann Problem 484 Derivative condition at the curved boundary 7.7. Third Boundary Value Problem 487 7.8 Dillusion Convection Equation 489 7.9 Axially Symmetric Equation 493 7.10 Biharmonic Equation 496 Bibliographical note Problems: Finite Element Methods 513 8.1 Introduction 5/3 8.2 Weighted Residual Methods 5/3 Least square method tition method jalerkin metho Moment method Collocation method $8.3. Variational Methods 522 Ritz method Rd Finite Elements Line seginent element Triangular element Rectangular element Quadrilaeral element ron element sdron element CONTENTS xix ‘Curved boundary element ‘Numerical integration over finite elements 8,5 Finite Element Methods 559 Ritz fiaite clement method. Least square finite element method Galerkin finite element method ‘Convergence analysis 8.6 Boundary Value Problems in Ordinary Differential Equations 563 Assembly of element equations Mixed boundary conditions Galerkin method 8.7 Boundary Value Problem in Partial Differential Equations 575 Assembly of clement equations Mixed boundary conditions Boundary points Galerkin method 8.8 Nonlinear Differential Equations 606 8.9 Initial Value Problems in Ordinary Differential Equations 609 First order init Second order 8.10 Initial Boundary Value Problems 6/5 Parabolic equation First order hyperbolic equation ‘Second order hyperbolic equation Bibliographical note Problems Bibliography 645 Answers and Hints to the Problems 660 Index 695 1 Elements of Ordinary Differential Initial Value Problem Approximation 1.1 INTRODUCTION To obtain accurate numerical solutions (o differential equations govern- ing physical systems has always been an important problem with scientists and engineers. These differential equations basically fall into two classes, ordinary and partial, depending on the number of independent variables present in the differential equations: one for ordinary and more than one for partial, The general form of the ordinary differentia Liyj=r (1) where Z is a differential operator and r is a given function of the indepen- dent variable 1. The order of the differential equation is the order of its highest derivative and its degree is the degree of the derivative of the highest order after the equation has been rationalized. [fo product of the dependent variable »(r) with itself or any of its derivatives occur, the equation is said to be linear, otherwise it is nonlinear. A linear differential equation of order m can be expressed in the form equation can be written as Lil= ¥ pore m=ro (12) in which f,(t) are known functions. The general nonlinear differential equa- tion of order m-can be written as F(t y ys ces YO, yi) = 0 (1.3) or V1) = At Ye Hy ees OMY (14) which is called a canonical representation of differential equation (1.3). In such a form, the highest order derivative is expressed in terms of the lower order derivatives and the independent variable. 
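To make the canonical form concrete, here is a minimal Python sketch (not from the text; the pendulum equation y'' = -sin y and the names f_canonical and f_system are illustrative choices). It writes a second order equation in the form (1.4) and, anticipating the reduction given in Section 1.2, as an equivalent system of two first order equations.

    import numpy as np

    # Second order equation y'' = -sin(y), written in the canonical form (1.4):
    # the highest order derivative expressed through t, y and y'.
    def f_canonical(t, y, yprime):
        return -np.sin(y)

    # Equivalent first order system: v = (v1, v2) = (y, y'), so v' = (v2, f(t, v1, v2)).
    def f_system(t, v):
        v1, v2 = v
        return np.array([v2, f_canonical(t, v1, v2)])

    # The initial conditions y(0) = 1, y'(0) = 0 become the vector v(0) = (1, 0).
    print(f_system(0.0, np.array([1.0, 0.0])))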
The general solution of the mith order ordinary differential equation contains m independent arbitrary 2 NUMERICAL SOLUTIONS constints. In order to determine the arbitrary constants in the general solu- tion if jhe m conditions are prescribed at one point, these are called initial conditions, The differential equation together with the initio! conditions is called the initial value problen. Thus, the mth order initial value problem an be expressed as pern) At) qs) Ifthe wm conditions are prescribed at more than one point, these are called houndary conditions, The differential equation together with the boundary conditions is known as the bondary value problem. We shall now discuss the basic concepts needed for the solution of initial value problems, Tiny, Wp OL, 2, oem cy MOD) 1.2 INITIAL VALUE PROBLEMS ‘The mth order initial value problem of Fquation (1.5) is equivalent to the following system of ya first order equations: ville) = Yo p(ta) = Ny Py (lo) vn 2 Py) ralig rgd (6) In vector notations it can be written as de dt where ve fry). vito) = 1) (1.7) Poy ty ie aa) IU, My P2, oor f(t, v) = [ra ta oye a= [i004 We shall, therefore, be concerned with methods for finding out numerical sipproximations to the solution of the equation Tip Lee Meta) = V0 (1.8) However, before attempting to approximate the solution numerically, we must ask if the problem has any solution. This can be answered casily in the case of initial value problem for ordinary differential equation by Theorem 1.1. THEOREM 1.1 We assume thar f(t, .") satisfies the following condition G) fa. v) iva real finetion, (i) fit.) as defined and continuous Mm the strip 1E Io, hy & fe, <9), ORDINARY DIFFERENIAL INITIAL VALUE PROBLEM 3 (iii) there exists a constant L such that for any 1 © (tg, 6) and for any two numbers yy and y> LG. ft yd 1S Lbs, where L is called the Lipschitz constant. Then, for any ¥o the initial value problem (\.8) has a unique solution y(t) for vE (ko, 5}. We will always assume the existence and uniqueness of the solution and also that f(¢, y) has continuous partial derivatives with respect to ¢ and y of as high an order as we desire. 1.3 DIFFERENCE EQUATIONS In order to develop approximations to differential equations, we define the following operators: Ey(t) = y(t+h) ‘The shift operator y(t) = y(+h—y (0) "The forward-difference operator Py) = y(O-¥-M) The backward-difference operator a= y (+ 4)-y(i- 4) ‘The central-difference operator wy) = +f y (4+ t)y (- 4)] The average operator Dy(s) = y'()) The differential operator where h is the difference interval. Repeated applications of the difference operators lead to the following higher order differences: ate) = ¥ (- gtigg 7 eo) sy Petty = 3 (1 agg ¥ kl) (1.10) = (Qn) ayO=F Or gE ito) (4) For linking the difference operators with the differential operator we con- sider Taylor's formula ‘ H+ =yO+W' Ot HOt (1.12) In operator notations we can write Ey) = (1440-00" tee ) 0) 4 ‘NUMERICAL SOLUTIONS ‘The series in parentheses is the expression for the exponential and hence we have (formally) E= (1.13) Treating Equation (1.13) as an identity, we may derive expressions for any order derivative in terms of the various difference operators. In Table 1.1, we list the relations between the operators. There are many difference approximations possible for a given differential equation. The selection of a particular difference relation is usually determined by the nature of the truncation error associated with the approximation. 
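The orders of accuracy derived in the next paragraph for the forward, backward and averaged central differences can also be checked numerically. The following Python sketch is illustrative only (the test function y(t) = e^t, the point t = 1 and the step sizes are arbitrary choices): it compares the three difference quotients with the exact derivative and shows the first two errors roughly halving, and the third roughly quartering, each time h is halved.

    import numpy as np

    y = np.exp                 # test function, with y'(t) = exp(t)
    t, exact = 1.0, np.exp(1.0)

    for h in (0.1, 0.05, 0.025):
        forward  = (y(t + h) - y(t)) / h           # (Delta y)/h, accurate to O(h)
        backward = (y(t) - y(t - h)) / h           # (nabla y)/h, accurate to O(h)
        central  = (y(t + h) - y(t - h)) / (2*h)   # (mu delta y)/h, accurate to O(h^2)
        print(h, forward - exact, backward - exact, central - exact)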
‘As an example, let us develop expressions for the first order derivative in terms of the forward, backward, and central difference operators. We assume that the function y(t) may be expanded in a Taylor series in the closed interval t—h & ¢ € t-+h. We have * ‘ (CEA) = y aEhy (+ AT) bn EE iy nt where a prime denotes differentiation with respect to s. We have then NV Ly get y+ 01r) Ay() _ dy or 20 8 0m) where the notation (4) means that the first term neglected is of order h. Similarly we obtain py(t)_ dy “ho dt +0(h) However, wdy(t)_ dy h a tO) The forward and backward differences are accurate to order h; the average central difference is accurate to order h?. Thus replacing the derivatives in a differential equation by the difference approximations results in an equation which may be considered as relating differences of an unknown function and may be called a difference equation. The order of a difference equation is the number of intervals separating the Targest and the smallest arguments of the dependent variable. A linear difference equation is one in which no product of the dependent variable with itself or any of its differences appear. A difference equation is homoge- neous if all nonzero terms involve the dependent variable; otherwise it is inkomogeneous. Furthermore, a difference equation may have coefficients which are constants or functions of the independent variable. If the coeffi- cients are constants, we call it a difference equation with constant coeffi- clents, otherwise we call it an equation with variable coefficients, ORDINARY DIFFERENTIAL INITIAL VALUE PROBLEM av ( =) ,-sox z (4-1) 80,— (+1) 801 zi 6 oy sae ae —p( 4- = a (4) Sara ot-0($-1) wev+n(F+1) Gratuat 4 2 wel4—4 lee = (4) quisz 2 [x-(4-1) wile +iP wa-uT ot ot (Stt)sete 4- 4 P+D—4 5-1 4 Io (441)pete + 1=1-(4-D ° “zoo Ve 1 pes a — et (4 +1) Aghe Tt rf4—1) +7 a gz ay 2 4 r a svouvagag ait NawMuse amssNONYTTY [} TEVL S SIERO DAIHE SA, AGS, GA MRE GREASE AMET AQ, AGEING QS, WAV ARENA = SORA Ee WAS®K ag Qy WWE QE SL, = OR. SRE, SRA WE, TRAIN RGM) HIATT HA WAR IA TD AW UWA ANA SEARS. SEAR = MAA NEAT LES Sen QAI NEES SL WYSE NE SRR LSE EAE .kQreS SAN WETS SER SAAT. ASSEN MOA, AE w= S48, SALSA US HE ASSN, Wane KE De acandamd SARE GL WE, EE SHALE ASSAY A WADE WE ABARAT | SQES SAN WR Ge, Ha EAGT T WE MAE ENGNG Ws aya MA, SAREE SSNS HQ AD MEW AH WET XN x + WFAN WA RED AEE RA ANG, Ai Se I RARAS ES BASIE IER DANA VR GAL DL OS TEAR QOL S SA EE ER ERSTE San RAGAN SR, SNR A ARN SEER EMER ANIT AS MEK ENS OSS DA DOV Mg, SE a QO NSO, QE RAL WS Aa AAC ATEN NSIS EQ TA Ss LESH MERLE AER, WAH GQhaoadka | Ng da ay WWD RAMEN | ee Ares Ry, SER AAR Re Weaw aos Sear, ORDINARY DIFPERENTIAL INITIAL VALUE PROBLEM 7 where k constants bjm, m = 1, 2, ++, ¥) 7 = 1,2, --+p, are arbitrary. To find the particular solution y? of (1.14), we shall confine ourselves ¥ to the case when gn (= g) is independent of n. If 3) ay # 0, we can then choose as particular solution Ye) = (1.21) BY r= The general solution of (1.14) becomes yn = yh 4S (1.22) The & arbitrary parameters in y(“) can be determined in such a way that for n= 0, 1, 2, .... k—-1, the variable y takes the assigned values You His e005 Where 1.4 NUMERICAL METHODS The numerical methods for the solution of the differential Equation (1.8) are the algorithms which will produce a table of approximate values of y(t) at certain equally spaced points called grid, nodal, net or mesh points along the ¢ coordinate. 
Each grid point in terms of the previous point is given by the relationship fay = teth, un = 0,1, 2,+-,N-1 tn=b where it is called the step size. Alternatively, we may write ta =totnh, n=l We shall now discuss the numerical methods and related basic concepts with reference to a simple differential equation ds dt The exact solution of the above equation is given by pac (1.24) where c is an arbitrary constant. Using the initial condition \(/o) = Jo. We can write (1.24) in the form ¥ (A = y(t) eM In order to compute the valucs of y(r) at the nodal points ¢ = f9+kh, k =1,2,..., N, we write a recurrence relation between the values of W1) atta+iand fas V (tres) = OM yt), = 0,1, 2, 0 NAT (1.25) AY, to) = You 1 Lo, 4] (1.23) 8 NUMERICAL SOLUTIONS This gives an algorithm for determining the values of y (t,), y (t2), «=, ¥ (tw) from the given value y (fo) at 1 = fp. However, from the computation view- point each value has to be multiplied by &“, which is an exponcntial func- tion and difficult to calculate exactly. We, therefore, take suitable approxima- tion to &“~. For example, for sufficiently small | Ah |, the polynomials appro- ximation to eM can be written A = 14.0400 M2) Oh EME OH) OC MAP) OF EME OE ot FOO (ARP) We can also take other types of approximation to e”. The Padé approxi- mations are =a tO (| KP) lta a T +0 (| AP) 1- aM that on an San fou 1- tn (aay 1+ Lan dont h amp ohn Jeger Ta OM (Mr) 1 bat han a Qn? Let us denote the approximation to e™ by E(AK). The numerical method for obtaining the approximate values ys of y(t,) can be written as Ynty = E(Ah) yo, t= 0,1, 2, 0, N-1 (1.26) The approximate values of (1) may contain the following errors. DEFINITION 1.1 The round-off error is the quantity R which must be added to a finite representation of a computed number in order to make it the exact representation of that number. y (machine representation)+R =: y (true representation) DEFINITION 1.2 The truncation error is the quantity T which must be added to the true representation of the computed quantity in order that the result be exactly equal to the quantity we are seeking to generate. y (true representation) +7 = y (exact) ORDINARY DIFFERENTIAL INITIAL VALUE PROBLEM ai 1 (nye If, EQ) = 1 EM + op OWE oe tO then (1.26) becomes Voor = ¥ (tos) + 0 (AP) (1.27) The integer p is called the order of the method (1.26). The remainder term or em, 0<0<1 which is neglected, is the relative discretization or local truncation error. The numerical methods for finding solution of the initial value problem of Equation (1.8) may broadly be classified into the following two types: (i) Singlestep methods These methods enable us to find approximation to the true solution p(t) at nes if yn, ¥,, and h are kaown. Gi) Multistep methods These methods use recurrence relations, which ex- Press the function value y(t) at tn+; in terms of the function values Xt) and derivative values y'(t) at (241 and at previous nodal points. It is obvious from (1.27) that the aumerical methods of order p will produce exact results for all differential equations whose solutions are polynomials of degree p or less. If (N= aptaet where ap and a, are arbitrary constants, then the singlestep method of order one will be recurrence relation between the values Yeti, Ys and y,. 
We may write Ine = Gotay tows In = Mot; tn In = Eliminating ap and a), we obtain Ynei =Ynthyy Thus the singlestep numerical method of order one for Equation (1.8) will be of the form mh fry A= 0, 1, 25 ey N= fn = f (bn Yn) The exact values of y(t) will satisfy Vltnss) = V(t) +H ta, Vt) To (1,28) where Ta is the local truncation error of the form Tn = Crh? y" (£2), ta < bp < test To determine C;, we substitute y(t) = 1? in (1.28), and get C2 = 1/2. Dae where My 10 NUMERICAL SOLUTIONS Next, we construct a multistep numerical method that will produce exact tesults whenever y(r) is a polynomial of degree three or less. Consider Yt) = agtayt+ant? +250? where ap, a), a2 and ay are arbitrary constants. A simple third order multistep method uses a recurrence relation between the values Yaris Yn Yas Yay atid yi». Eliminating ao, a), a2and a3 from the equations Vary = A+ tags t G2 thy, + ashy, Yn = Oot arty ta212 +as03 Yo = a4 +2 +30; 2 Yay = M$ 2a best 3a t2 Vo g= 4% +2a2 teat3ast2 , we obtain A ; ’ ’ Yass = dart jy (23y/,~ LO, +572) The third order multistep method for Equation (1.8) becomes. yas = et LOY U6faart Sead, = 23 0N=1 (1.29) Here we necd y, and y3 initially to start the computation. The exact values of y(¢) will satisfy V(t) = Pod + ALI la, ¥ C= 16F (tna, 7D) FSf(tae2y ¥ (te-+T (1.30) where 7» is the local truncation error given by Tr = Cyl YE), tra <€ 0 there exists a8= 8() such that differ- ence between two different numerical solutions ys and jx is such that lyn-Fal 0) or decreases (A < 0) with the factor e™ whereas the approximate value yx (see (1.26)) grows or dies out with the factor £ (Ah). In order to ORDINRAY DIFFERENTIAL INITIAG VALUE PROBLEM 13 haye meaningful numerical results, it is necessary that the growth factor of the numerical method either should not increase faster than the growth factor of the exact solution or should decay at least as fast as the growth factor of the exact solution. DEFINITION 1.4 A numerical method of the form (1,26) is called relatively stable if [EQ |< (1.34) For example, the methods based upon polynomial approximation are al- ways relatively stable for A > 0, as E(M) < &, A>0 For A < 0, the numerical methods will be absolutely stable if | E (Ah) | <1, ie. |14An | <1, -2 << 0, first order method, |i One| < 1,-2<"<0, second order method, [eae foes t (|< 1, -2.5< 4 <0, third order method, 1 1 1 | opant 5 0n7-+ 5 amet ine \ coand if any | &{ > 1, the error | «n | increases unboundedly. If two or more é), are equal and equal to one, then also | ¢, | increases unboundedly. « DEFINITION 1.5 A multistep method when applied toy’ = Ay, A < 0, is said to be absolutely stable if the roots of the characteristic equation of the homogeneous difference equation for the error are either inside the unit circle or on the unit circle and simple. The roots of Equation (1.40) are plotted in Figure 1.2. Inthe graph the roots are displayed in the following fashion: for real roots the absolute value of the roots is plotted, and for conjugate complex roots the modulus of the ORDINARY PIFFERENTIAL INITIAL VALUE PROBLEM 15 plotted asa single quantity (thus conjugate pair of roots are shown as a single curve). In Figure 1.2, it can be seen that | £4] is greater than one at = —0.55 and in Equation (1.41) the term containing | | grows without bound as n gets large. Thus the third order method (1.29) is absolutely stable for -0.55< Ah < 0, © Positive real root 0 4S Negative reat root ' x Modulus of complex root Ban ~16 0 -12 -08 -0.4 0 0.4 0.8 12 Fig. 
1.2 Roots of the characteristic equation of the third order method 1.5.1 Interval of Absolute Stability In the previous section we have determined roots of the characteristic equation (1.40) by repeatedly solving the polynomial equation for a range of values of Ak. A plot of the roots against Ah (see Figure 1.2) then enables us to obtain the interval of absolute stability as (--.55, 0), This procedure is known as the root locus method. An alternative to this procedure consists of applying to the characteristic equation (1.40) a transformation which maps the interior of the unit circle ‘onto the left half-plane and then using the Routh-Hurwitz criterion which 16 ‘NUMERICAL SOLUTIONS gives the necessary and sufficient conditions for the roots of the character- istic equation to have negative real part. The transformation Itz jo (1.42) Maps the interior of the unit circle onto the left half-plane and the unit circle onto the imaginary axis, the point = 1 onto z = 0, and the point € =—1 onto z =—ce, This is shown in Figure 1.3, Z~-Plane Zw %-Ptane Q Fig. 1.3 Mapping of the unit circle onto left half-plane THEOREM (Hurwitz) 1.2 Let P(z) = ag za, 2 '+...-bae and 4 a M@ D=|0 «& (1.43) ee 6 6 where a, > Ofor all j. Then, the real parts of the roots of P(z) = 0 are negative if and only if, the leading principal minors of D are positive. For k = 3, we have % > 0, a, >0, a, >0,a3>0 (1.44) @,02~a5 a9 > 0 as the necessary and sufficient conditions for the real parts of the roots to be negative. ORDINARY DIFFERENTIAL INITIAL VALUE PROBLEM 7 Substituting (1.42) into (1.40), we get ag 2a 2+arzta, = where ay = 24 ia = 42 fi, = 20M, a = Conditions (1.44) are satisfied if —.55 < fi < 0, Thus the interval of abso- lute stability is given by (—.55, 0). Example 1.2 Find y(v) at ¢ = | with the help of the fourth order method for the initial value problem y =-10y, 90) = 1,0<1<1 using step lengths 2-", m = 1(1)8. The exact value of y(z) at = 1 is obtained from the solution Y(t) = eas y(1) = 0.45399930 x 10-4 The approximate value is given by awe (1100. AQP GER, Cy" 2 3! a where Nh = 1 The values of yw are listed as integers with an exponent (e.g., 43670—08 = 43670 x 10-* ~ 4,3670% 1074) in Table 1.3. TABLE 1.3 EFFecT OF AnsoLute STABILITY ON THE VALUES ¥(1) N a al) 2 $ 0.18791840-+03 4 25 0,17679602 8 125 0.79844684—04 16 0625 0,46385288—04 2 03125 0.45446811—04 6 015625 0,48402499—04 128 0078125 0.45400080—04 256 00390625 0.45399939 04 The fourth order method is absolutely stable if | AW | < 2.78, ie. 104 < 2,78 or i < .278, It is obvious from Table 1.3 that the results are reasonably good as soon as the absolute stability criterion is satisfied. 1.6 CONVERGENCE ANALYSIS The difference approximation to the differential equation does not guaran- tee that the solution of the difference equation approximates the analytical solution of the differential equation. Here the convergence problem arises by way of the conditions for which the difference solutions converge to the analytical solution if the difference approximate is refined 1B NUMERICAL SOLUTIONS DEFINITION 1,6 A numerical method of the form (1.26) is said to be convergent if ti Ya = 3(tu) for all ta E [to, 5) 0 In = ty ban The true value y (fa) will satisfy PV (tot) = E(M)y (te) +Tn (1.45) where 7; denotes the truncation error. 
The approximate solution will satisfy Ynun = E (MA) yn— Rn (1.46) where Rr denotes the round-off error, Subtracting (1.45) from (1,46) and by substituting én = y»—(ta), we get nay = E (MN) eur— Rn—-Ta (1.47) Let us denote max | Ry| = Rand max | T| = Tand assume these (tte) ye ty) as constants. Then (1.47) becomes Veen | SEA) [en | RT, a= 0, 1,2. (1,48) By induction, we can write (1.48) for E(Ah) + Las ‘ EA (\i)— 4 Leal SOM | eo) FEBS RE) (1.49) Let E (Ah) be the pth order arpeerinatia, then (ne* =£00+ a Moss Where My_; is a constant, Thus (1.49) becomes (lot) — Leh < [ea] Mtr ee _(R ae \ : «(4 toe Men ) (1.50) It is obvious that in the absence of the initial and round-olf errors, | en | -- Oas h -> 0 like Ch? where C is a constant, independent of h. For | ¢9 | = Oand p = 1,(1,50) becomes eM) — 1 R bal 0, the truncation error tends to zero whereas the round-off error becomes infinite, On the other hand as h->e9, the round-off error tends to zero but the (1.51) ORDINARY DIFFERENTIAL INITIAL VALUE PROBLEM 19 Total error Truncation error Round - oft error hopt Fig. 1.4 Truncation and round-off errors as function of A truncation error becomes infinitely large. The chaice of A for which the bound (1.51) is minimum is obtained when n= | PR BM, We also observe from (1,51) that the first order miethod converges as h > 0 if the round-off error is of order /?. To discuss the convergence of the multistep method (1.29), we determine the constants ¢;, ¢2, cs in (1.41), Let us denote Bren P. J= 012 The constants ¢\, cz. ¢; can be found by solving the linear system ebestes, = Eurber faites ban, eB bea th bes By 20 ‘NUMERICAL SOLUTIONS. If we assume that the initial errors «, €), €2 are constant and equal to «, then (1,41) becomes 3 -(<+) (=E=Ea) py Eu En) gy : Fe DLE w= 66) Ei fan) “En Eaa) (Can Ean) (Ew 1-9 +e u= En (En Ew) wlie i (1552) For the stable method, as h > 0, £4 > 1, £2» and £54 approach to zero; for sufficiently small values of | M1|, a behaves like e and, é., and y, are less than onc. Thus, (1.52) can be written as LY) an T as ( -f jew +5 of fa SMe td + FE (1—P—Wd) (1.53) We may also write (1.53) as e | <2 aM 4 (1 — eet) where we have put lef =0 3 A \TI 0 (1+7i by) > 0 -h>o Thereforc, the method is unstable when fi > 0. For hk < 0, b; = 4, h & (—4, 0), the above conditions are satisfied and the interval of absolute stability is (—4, 0). We now study the relation of stability to truncation error. We have bd < aR ne vaca hey where [max means the signed fA value corresponding to | AN max We wish to compare (#+h3) to Ivimax a5 AMmax decreases in magnitude from 4 to 0, We write as $th = =lt+p--=Q ie The values of /\max, ae and C; are listed in Table 1.4. TApLe 14 VALUES OF Jay AND Cy = = a a © al-4|- 1 4 2 x|- als ale 1 22 NUMERICAL SOLUTIONS These results show that for this method, the magnitude of the truncation error coefficient Cz decreases towards zero (from the right) and then increa- ses negatively as the magnitude of fm decreases to zero, The values of Cz and fidmas are shown in Figure 1.5. Fig. 1.5 Truncation error coefficient C, as function of hAmax. Bibliographical Note There are many numerical analysis books having chapters concerning nu- merical solution of ordinary differential equations, e.g. 61, 103, 114, 121 and 237. Some useful books which deal with the numerical inethods for ordi- nary differential equations in detail are 33, 93, 113, 161 and 163. 
The difference methods for ordinary and partial differential equations are given in 46, 88 and 147, Problems 1, Prove that each of the following ordinary differential equation has a unique solution on the interval indicated: (i) y' = 2 exp (—y?), y (0) = 1 on [0, 10) (i) y= yee, ¥(0) = 0 on [0,41 2. Verify that the function 1) = Joexp (f, 2) a )+ex (f), ple) dr ) ffsn( fa} ORDINARY DIFFERENTIAL INITIAL VALUE PROBLEM 23 3. satisfies the initial value problem Fe = WD y + Rt), ¥t0) = Yo (i) Solve the initial value problem y'—ky = A sin wt, 90) = yo where k, 4, # and yo are given constants. (ii) Assuming that k < 0 show that ‘re steady-state response is equal Ay ve A is called the amplification factor, and ¢ is called the phase angle. to Ap sin (t=) where 4g = = and tanp = 2 4. Consider a system of first order ordinary differential equations es SE = Silt 5») whte) = 9 i= Wim Assume that each of the functions f(/, 1, 22,-.-, mm) iscontinuous and bounded and satisfies a Lipschitz condition in iy Uzyse+, Pm fOr LE [lg bl and —0e < 04, py,000, Ye 0 Solve the difference equation. (BIT13 (1973), 123) . The sequence {y4} is defined by Jom. ee gee Sia mw (280yn=261)5)-1+81 yaw > 2 Show that lim yn =—2, (BITIT (1977), 494) ee For which values of a and b do the solutions ao difference equation Yass Jind (4te+1 re ($ +4 ra po = remain finite. (BITI9 (1979), 425) . Show that the functions 7 cos ms—COS nt vat) = i: 60s §— cos satisfy the recurrence relation Fogalt) = 2 cos ft yn (t)— Salt) and hence obtain an explicit expression for y(t). (BIT4 (1964), 61) 7 as, n integer, n > 0 . Show that all solutions of the difference equation Jap Wyat, =0 ORDINARY DIFFERENTIAL INITIAL VALUE PROBLEM 25 20. at are bounded, when n->co if|A| <1, while for all other complex values of A there is at least one unbounded solution, (BIT4 (1964), 261) . Find the general solution of the recurrence relation Inst 2bynt i Feyn = 0 where 6 and c are real constants. Show that all solutions tend to zero asm ->co, if and only if, the point (6, c) lies in the interior of a certain region in the b-c plane, and determine this region. . Find the general solution of the difference equation De + 2)yy1- 2 +2€] yy +120 — Oye = 0 Determine the conditions under which there are no oscillations in the solution, . Determine a, bo, a2 and bz such that the relation ¥'(a4-b)/2) = aoy(a)+ boy(b)-+a23"(a)+ bay"(b) is of O((b—a)*), The truncation error tends to (—(b—a)*/2880)y"(a+-6)/2) as | b—a | > 0 (BITI6 (1976) 111) . Define Sih) = [—p(t-+2h)+4y(e-+h)—3y(t)/2h Show that ¥'(t)— Sh) = ey? + coh? +esh*+-»-and state ¢. (BITI9 (1979), 285) . Determine the exponents é; in the difference formula (oto-) 8 goo x ayhti assuming that 3(¢) has a convergent Taylor expansion in a sufficiently large interval around to. (BIT!3 (1973), 123) Consider the method of the form Yer = Ynt bboy nts + b1y'n) where by and 6, are arbitrary constants. Determine the coefficients by) and 6, for y(t) to be of the following form: (i) {1,4 7} (i) (1, t, exp (A} (iii) (1, 4, rexp (Ar)} Also show that the boundary of the stability of these methods is a circle with the centre at point (1/(b)—0,), 0) if by # 0). If by = by the stability region is the left half plane, 4 < 0. Determine the coefficients in the method Yngy = Yat ACboy'nerF Bryn) by assuming the function y(t) = {1, cos we, sin wi}, where wis a parameter. 26 22 23. 
24, NUMERICAL SOLUTIONS: Estimate the parameter w by requiring that the method be also exact for the following functions: (i) pt} = {Le t, (ii) y(e) = {1, 4, exp (—we)} Consider the recursion formula: Li hk 2 e(t+ 27% na ibhtie (Z+4+8) Show that Jn-e" = Oh) as h > 0, nh = constant, (BIT14 (1974), 482) Show that the general solution of the initial value problem y"+y = r(t), 1) = yo, »(0) = y’o can be written as “ y(t) = yy cos t+y’g sin t+ j; r(r) sin (¢—1) dr Show that the numerical solution of the initial value problem y+y = 0, (0) = 1, y'(0) = 0 may be obtained by satisfying the formula Fat + Vaan St oe . Consider the following system of difference equations: [ Un | [ ~—I+cos (h) sin (i) | [ Ua-1 | tw dL sina) 1008 (0) IL ony How small should / be chosen so that tm -- 0 and mm — 0 when nee, (BIT20 (1980), 389) 2 Singlestep Methods 2. INTRODUCTION A singlesteyy method for the solution of the differential equation ay dt is one in which the solution of the differential equation is approximated by calculating the solution of a related first order difference equation. Thus, a general singlestep method can be written in the form Yney = Inthe (tay Yin Ay = 04 1, 2ye2) NAD (2,2) where 4(, ¥, f) is afunetion of the arguments . y, # and, in addition, “ depends on the right-hand side of (2.1). The function #(1, 1, 4) is called the jacrement fonction. UE yn4; can be obtained simply by evaluating the right- hand side of (2.2). then the singlestep method is called explicit otherwise it is called implicit, The truc value (¢,) will satisfy Y(t) = V(t) thd (ty Y (tr), + Tn, w= 0, 1, 2, ey N= 1 (2.3) where Ty is the truncation error. fs The largest integer p such that | 7! T, | = 0 (i?) is called the order of the singlestep method. Betore stating the main result about convergence, we introduce a few definitions. = 1, 9), Uo) = Yo. 4 E [to, 4] (2.1) DEFINITION 2.1 The singlestep method (2.2) is said to be regular if the function $(t, y, h) is defined and continuous in the domain fo <¢ < b, 2

0; ie, the error tends to zero as ht -» 0 like Ch? for some constant C for tx € io, b], We shall now state a resujt which tells us about how the error €, tends to zero, THEOREM 2.3 Iff is sufficiently differentiable, €, will satisfy n= WP B (ta) +0 (hE) (2.11) where 8 (1) is the solution of the ini value problem 8) = Ale, yO) BO Let us denote »,, (i) the aie en of y (tn) calculated from (2.6) with step 4. Then (2.11) can be written as Yn (It) = y(t) AP 8 (tn) +0 (he) (2.13) In conclusion, Taylor’s series method has advantages; it is easily derived in * any order, and values of y (1) for 1 not on the grid are easily obtained, How- ever, the method suffers from the time consumed in calculating the higher derivatives. + PHN), 8 (to) = 0 (2.12) 2.3 RUNGE-KUTTA METHODS We first explain the principle involved in the Runge-Kutta methods. By the Mean-Value Theorem any solution of L(ts §))s to) = Yor 1 E [oy 6] satisfies 9" (tuts) = y(ta)-thy’ (En) Vita) hf (ny ¥ (End) tt nh OK On h f oui = self (tot Fe sot E flim on) @u4) Alternatively, again using Euler's method, we proceed as follows: a” (ot b )= $b" (Fy (nea) 7 LLC MAL Unb Seen 32 NUMERICAL SOLUTIONS. and thus we have the approximation yaa = Yet LLM Altes In FAO Dd (15) Either (2.14) or (2.15) can be regarded as Yati = Yah (average slope) (2.16) This is the underlying idea of the Runge-Kutta approach. In general, we find the slope at f, and at several other points: average these slopes, multiply by A, and add the result to yn, Thus the Runge-Kutta method with v slopes can be written as Ki = hf lteter, Yok i ayKi)s er = OE = Vy Queers 9 (2.17) or K, =hf(ta; ya) Ky = hf (tat eah, yebon Ki) Ky = hflturbes, yottay Ki tas: Ka) Kg = hf (tut cqhy Yn aay Ki t@ag Kat aus Ks) and Ines = Yat a wi Ky where the parameters c2. C3, +1, Co, 42) +++) @etv_y) aNd wy are arbitrary, From (2.16), we may interpret the increment function as the linear combination of the slopes at f, and at several other points between ta and ta4,..To obtain specific values for the parameters, we expand yn41 in powers of f such that it agrees with the Taylor series expansion of the solution of the differential equation to a specified number of terms, Let us study this approach with just two slopes. 2.3.1 Second Order Methods Define Ky = hf(te yn) Ky = hf(tetberh, yo an Ky) and Yat, = Yow, Ki pw, Kz (2.18) where the parameters c>, a1, ; and ware chosen to make y,+, closer to Hines). Now Taylor's series gives 2 I oes) = ¥ Ce) bhy" (10+ y(t (2.19) where Sit {fe Lut Uf firth AL LAG) SINGLESTEP METHODS 33 The values of y' (tn), y" (tn), «+ ate obtained by substituting ¢ = t,. We expand X, and K, about the point (tn, yn). Ki=hfe Ky = hf (tee h, yw ay h fn) = AE flte, Yn)-Hlez bh fb an hi fr fr Fy (GIA Sich Den aa fF IE ooo] = hifurt Heafitayfefe)+ FPG hic Rehr oh Pio Substituting the values of K, and K in (2.18), we get Yarn = Yok (re bwadh fob A? (wea fie waar fif)-+ F Wulf Restarhfirb a f3fylt (220) Comparing (2.19) with (2.20) and matching coefficients of powers of h, we obtain three equations for the parameters Wy ti = 1 2 Wy = 1/2 ayy W2 = 1/2 From these equations, we see that if cz is chosen arbitrarily (nonzero), then 1 1 Ay = 62 55 1-5 (2.21) Using (2.21) in (2.20), we get 2 13 Yast = InN GAGA + fut aft fehodt- (2.22) Subtracting (2.22) from (2.19), we obtain the local truncation error Ta = ¥ (tats) —Ynt =P [(F-B)ter enter in) +t (seths )]e 2 = Hr2—3ea +302 frp] ten We observe that no choice of the parameter c, will make the leading term of 7» vanish for all f(t, y). 
The local truncation error depends not only on derivatives of the solution y(r) but also on the function f. This is typical of all the Runge-Kutta methods; in most other methods the truncation error depends only on certain derivatives of y(t). Generally, cz would be chosen between 0 and I. From the definition of the Runge-Kutta equations, we see 34 NUMERICAL SOLUTIONS that any Runge-Kutta formula must reduce to a quadrature formula of the same order or greater for,f(t, 3’) independent of y,, {wy} and fey) being, the weights and abscissas of such formula. In the present case, cp = 1/2 and ¢z = 1 give the mid-point and the trapezoidal rule of integration for f(t. y) independent of y. An alternative way of choosing the arbitrary parameters isto produce zeros among iiy’s, where possible, to simplify the final formula, The choice of c; = 1/2, for example, makes w, = 0. Sometimes the free parameters are chosen either to have as large as possible the interval of absolute stability or to minimize the sum of the absolute values of the coefficients in the term Ty. Such a choice is called optimal. In the latter case we define atty rey. aren < rp hia O12 We find \fI RRA SR SAS SNS SA A RS 2/2 1h 3| 3 2)2 2 2 3 3 FNS eo Ce 2? 9 fe) oot 2 3 ey 8 8 8 9 #9 9 Nystrom Nearly Optimal wef ty 2/2 3|3 1|-1 2 2 2 a|* = je 19 2 6 6 6 4 4 Classical Heun 2.3.3 Fourth order methods The details of the derivation of the fourth order method will be omitted, since they follow the same pattern as above. In the above notations we can write the fourth order formulas as: Ay db aol pit 2 2 3 3 I 1 2 1 1 (ce z)-7 ! 1 o 0 1 1 1-1 1 A228 123 31 6 6 6 6 8 8 8 8B Classical Kutta A /i 2 2 al. 2 | (v2-/2 (@—v2)/2 1] 0 -V2i2 1142/2 = Q-v26 @+VvzI6 + Gill SINGLESTEP METHODS 37 aja 3,3 dfaoon 3 6 1 3 TIF Ste 1 3 > 0 -z 2 1 2 1 ‘eu ‘Ho Z 5 Merson Example 2.3 Solve the initial value problem y =r+y, 90) = 11 € [0, 1] by classical fourth order Runge-Kutta method with # = .1. Forn =0 to = 0, yo = 1 Ky = (to, yo) = (.1) O+1) => «1 Kyau ( ot, et 4) = oo 0+ +( 1+¢)]=y = (ott, n+) = wo g+( 1434) ]= 1105 Ka = lf (to+h, yor Ka) = (1) (0+. +-(+.1105)] = 121 n= +L +224 2210 12105) = 1.11034167 Forn =1 = Ay yy = 1.11034167 hf(ti, 1) ‘ = (1) [1+1.11034167] = 121034167 Kye Watts nt-At) = co[(a+t)+( 1.110341674 (121034167) | = .132085875 38 NUMERICAL SOLUTIONS i A Dee B= it ( 14 Ft yk) ae «o[(a4 it )a( 1.11084167+4-(.132085875)) | = 132638461 Kg = hf +h, yt Ks) = (1) [(1+-1)+(1.11034167-+.132638461)] = .144303013 2 = 1110341674 [.121034167 +2(.132085875) +2(.132638461) +.144303013] = 1.24280514 The exact solution is y(t) = 2e—t=-1 The computed solution is given in Table 2.1. TABLE 2.1 SOLUTION oF y’ = ¢+, )(0) = 1, BY CLASSICAL FOURTH ORDER RUNGE-KUTTA METHOD WiTH fh = 0.1 vn its) 0 1 1 0.1 1,11034167 111034184 02 1.24280514 1.24280552 03 1,39971699 1,39971762 04 1,58364848 1,58364940 0s 1.79744128 1,79744254 06 2.04423592 2,04423760 0.7 2,32750325 2.32750842 08 2.65107913 2.65108186 09 3.01920283 3.01920622 1.0 3.43655949 3.43656366 2.3.4 High order Runge-Kutta methods ‘We have seen that the values » = 2, 3, and 4 in (2.17) give the formulas of order two, three and four, respectively. Surprisingly » = 5 gives only fourth order formula. We need v = 6 to give a fifth order method, » = 7 or 8 to give a sixth order method and » = k to give a(k—2)thorder method, k & 9. We now list a few high order methods. 
SINGLESTEP METHODS é 4l 2.3.5 Results from computations for Runge-Kutta methods We haye uscd various order Runge-Kutta methods to solve the initial value problem dy _ ad with analytic solution j(r) = 1/(1-+4). The step sizes ft = 2-", 1 = 4(1) 8 have been used in cach case. The error values €) = Jn—ypl(in) arc tabulated at ¢ = 5 in Table 2.2. To avoid round- off errors, we have performed all calculations in double precision floating- point arithmetic throughout. Thus the error values presented in the Table 2.2 are practically pure discretization errors. The graph of the error (=logio | ¥n—}Un) |) against —log, 4 is shown in Figure 2.1, The number attached to cach curve indicates the order of the corresponding method. 16 Iz ‘ ss 3 2 a 8 2 76 La 0 o 0.2 Ok —bog)h Fig. 2.1 Representation of error If éf/@y = A, a constant, then the methods of the same order produce identical results. However, if é//éy =—g(7, y), a function of rand y, then the methods of the same order produce slightly different results, though of the same order of magnitude. Thus, we may conclude that it is immaterial which one of the same order methods is chosen for solving the initial value problems, If we choose a fixed step size to solve an initial value problem over a given interval, we observe from the Table 2,2 and Figure 2.1 that the low-order methods produce less accurate results as compared to high- order methods. The error values decrease as the step size h diminishes or the ad NUMERICAL SOLUTIONS order of the method increases. If N denotes the number of evaluation of f, then basing our comparison on an equal number of evaluation of f for a given interval of integration, we find that high-order methods are economi- cal to get high accuracy with small step size. Improved tangent y Second Order Methods Euler-Cauchy TABLE 2.2 Comparison oF ERROR IN RUNGE-KUTTA METHODS, WO) = Le 5 Optimat ht N a 721696—10 637310—10 46862910 160 a4 174836—10 154920—10 11509310 320 43038211 381970—11 28514911 640 Pa Ww6774—11 948376—12 709647121280 ro 26591912 236282 —12 177009-12 2560 ‘Third Order Methods a Nystrom Heun Nearly Optimal = -N a —118873—11 =157779=11 =177S3—1 240 a 142853 = 12 —190086 —12 14219912 480 Bet — 17509513 3323913. —174700—13, 960 ar —216736—14 28884314 —216493-14 1920 = —269597--15 —359382—15 —269447—15 3840. Fourth Order Methods h Classical Kutta nN a 58197314 283369—14 320 2-8 366262—15 214773—15 640 2 23523416 150162—16 1280 27 20739617 157031—17 2560 z 732941—18 memes 18 5120 2.3.6 Convergence To discuss the convergence of the Runge-Kutta methods, we shall apply Theorem 2.1 to the third order method. The increment function is given by (2.27) Ata, Yn A) = PM Ky + w2K2t wk), We know from Theorent 1.1, that /(¢, ») satisfies a Lipschitz condition. ‘Thus Ky, Ky and Ka satisfy SSSR NERS, xs WER MRA’ KE QS ASIF ABE TN WE WAS EAEY MRS EA = ERMA RANE OSS VE SEY KWHAADM AS YA = AANA HART ESE ARID EE AW AERATED ME EERE NEADS OL, MSA EA SNES, SRR SRO AR SOX SERA REE < SQM DARA. x MAAR AEA AMADA Yi 3 SRR EEES ESSER. Sem OMA iE TA SRAM AK. EUrOnwW RRS SAAR SA IN EAE AAR ARSE, ARREARS ERNST AG HAN X DAS ASS. GE HH EE AE AANA GHEE YATE AEE, ASAT EY HERA ANESTH ASQ TIER HANDS, GGG STINGER EEA GEA SAE, SAMI: “Wah AO eras GQ ag, We Leak RAS G QE AVA AKG NEUGAE BEG NS ANTE SEL WUE CERI TEL AH. AES) RANT AMET, Na a TSE A. G NEE, EAE, AH AAA RAE GQ ARES ARRAS ANAND MAH GH AMET SS SS ACE, PARANA RA AA ME HEA, TL, GAD TSE SET SAE RSS AAS RS HOM MANA SHE ANE ce DEQ RARER BEES HANAN QSAR HEL HSA 8 SSQe RAS ASAD. 
EAGT SSSA EIS AL ET SIRENS SATA SS ten QR WSSAIRO ANITA NSIS AIS ASWELL, SAL SAGE SE SISK ARIAT OO SSAA, BRE TES TAL REE RAR, sMkemie at VAL he, Sea that EAE SSA Daaiody edrogihny MA 44 NUMERICAL SOLUTIONS (not local) truncation error for any method. If the function f(t, y) is suffi- ciently differentiable and if p is the order of the numerical method, then from Theorem 2.3 we have 4 Eu = APU) + OCHP +) (2,28) where 8(1) is called the magnified error function and &q = Ya—y(tx). Suppose we calculate p(¢) using a certain A and get y»/h). Then we repeat the calcula- tion using 4/2, and obtain ya(ti/2). It follows from (2.28) that dA) yt) = hPB(tu)-F OCP +) yn(£) ~vtea = (Jae +008" (2.28) Hence waht) —ye(L) = (1- 4 \ir8(tay +008") (2.30) 2 3 From the equations in (2.28) and (2,30) we obtain the equation 2» h a= gj [padd—m (4) ] ocr Thus, we obtain the Richardson extrapolation to the true solution at the mesh point ty I ry 4) —>ain 2-1 The right sides of the relations of the following ae ca (van) vn (+ )) (2.31) Ht) = +0(hP*!) (2.32) help, respectively, to determine the estimate of the accumulated truncation error and the true solution at t. with an error whose order exceeds the order of the singlestep method by one. We denote the predicted accumulated error by P, and the actual error in the extrapolated solution by T, where _ ey h r= seq am (F)) h zy 4 )—vatty and m= 35-0 We have estimated the truncation error at t=5 for the differential equation # =», yo=t dt SINGLESTEP METHODS 45 when solyed by the various order Runge-Kutta methods with step sizes Amo", m = 4(1) 8 ‘Using the following methods: (a) the second order Euler-Cauchy method, p = (b) the nearly optimal third order method, p = (c) the classical fourth order method, p = 4, we have tabulated the error eq, the predicted error Pa, the extrapolated error T, and the magnified error function 4(1,) in Table 2.3, TABLE 2.3 ESTIMATION OF THE TRUNCATION ERROR IN y’ = —", 30) = 1, aTi = S Second order Euler-Cauchy method h me P, qT Bt) za 468629-10 47138210 -215291—12 11996907 z 115093—10 115437—10 344974 -13 117855—07 zm 28$149—11 288579—-11 —430039-14 116797~07 nw 10964712. 110184—12 —S371—-15 116269—07 mo 177009 —12 177076 —)2 67106316 116008—07 Nearly optimal third order method h G Pe qT Mt) = =117753-11 =118328~11 570S77—14 48231708 mo —142199-12 14254712 349358—15 46595708 ra =174700—13 17491613 21513716 —457967—08 nm 21649314 21662714 133650—17 —454019—08 and —269447—15 —269530—-15, 832775 19 45205708 Classical fourth order method A cA P. qT Bt) zw 58973—14 S817" —14 269744—17 381402—09 road 366262—15 363588—15 674144—18 384053—09 ra 23523416 228794—16 644000—18 394657—09 rad 207396—17 14304217 643540—18 536725—09 2 732941 —18 894086—19 64353218 31479608 From the numerical results we can draw the following conclusions: (a) The predicted error P, gives good estimate for the error value ¢,. (b) The extrapolated error 7. is smaller than the corresponding predicted error P, and compares favourably with the predicted error Py of one order higher method, (c) The value of the magnified error function (ta) tends to a constant value as h decreases, 46 NUMERICAL SOLUTIONS 2.4 EXTRAPOLATION METHOD In Section 2.3 we considered the ‘Richardson extrapolation method’ or the deferred approach to the limit for estimating the accumulated discreti- zation error aad improving the approximate value of y(n). It was shown that the order of the improved y, exceeds the order of the method by one. 
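A minimal Python sketch of the single extrapolation step (2.31), using Euler's method (p = 1) on the illustrative test problem y' = -y, y(0) = 1 (none of these choices come from the text): the extrapolated value is noticeably more accurate than either Euler run, which is the observation that the repeated procedure developed below exploits.

    import math

    def f(t, y):                       # illustrative test problem y' = -y
        return -y

    def euler(h, t_end):
        # Euler's method (order p = 1) from t = 0, y = 1 up to t = t_end.
        n_steps = round(t_end / h)
        t, y = 0.0, 1.0
        for _ in range(n_steps):
            y += h * f(t, y)
            t += h
        return y

    p, t_end, exact = 1, 5.0, math.exp(-5.0)
    for h in (0.1, 0.05, 0.025):
        y_h, y_h2 = euler(h, t_end), euler(h/2, t_end)
        extrapolated = (2**p * y_h2 - y_h) / (2**p - 1)   # formula (2.31)
        print(h, y_h - exact, y_h2 - exact, extrapolated - exact)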
We shall now discuss a successive repeated application of this procedure so that the approximate value of solution tends to the exact value as h > 0. Let y(r) be the true solution of differential equation Y= flt,y), Vtg) = Yoo E Ito, 6] and y(t, h) be the approximate solution obtained by using step length 4 and a suitable numerical method. The value of y(t, fi) will contain error, We assume that y(f, 4) admits an asymptotic expansion in /i of the following form Wt A) = Ae A by hp ty hP bee btm hein HM oe (2.33) where 0-< 7) > hye of fy = hyb', 0 hy \ > hm is discussed below in detail. Equation (2,33) becomes tA) == Uther i era Pp ry hp Tm HO tgs HM. (2.3) Beha, We now attempt to climinate 71, 72, by evaluating y(t, A) forho > hy > hy ++, we get Wt, fig) = PCA if beh rah foo eth baa vt, fy) = yb rib rohit brah. bth wt, hha) = yl) tT hy brah + ahd +». bth to we Him) = Yt) ETE ATMA TOL so Penh ae (2.35) Eliminating 1, in Equation (2.35) we obtain Hoyt, In) —hiylt, ho) Tig Ht Atylt, hn) —hgv(4, hy) aa = NM AEN LH s = 30) — Aihigra = MACH Hide os ints Sim) — AEWA, Him) ae HW lata ina Ay Wi, ACA g Hh a (2.36) * SINGLESTEP METHODS, 47 Let us use the following notation BYRD Hh em Yon Ti Fh where the superscript k stands for )(t, hx), i.e, the approximation obtained by the numerical method with step length hx and the subscript mm denotes the number of times the linear combination has been applied to eliminate Tis Tay cory 7m. Thus Yi) = y(t, he), and YU implies m times linear combi- nation performed to eliminate tm. Equation (2.36) can now be written as Y{) = Aa) dghtea— hat hig + fp)ra— = YY = y(t) —hiytra— MAGE g)ry— Yu = FeO = OI shige Mya, Fg ITS 0 Eliminating 72, we get, YO) = y(t) hpagays+- YY) = ylQ+ heh t..- PYRO = yl) UE, ght yesh Thus to eliminate z», we need ¥® which is given by YQ = YOHAWPAL Hg My mar +0 (OD) @3n The calculation of ¥ can be simplified if we form the triangular array shown as follows: Y-scheme Y(t, hg) = YY yo rGmM=1P YD yw ry y(t, hy) = YO ¥? y? yal yy y(t hs) = yo) ¥p v(t, ha) = YO where two values are used to form the next approximation according to the rule ry (+t) pr (ey yim MYSEPHie YO aan 48 NUMERICAL SOLUTIONS For fiy = hob! with 0 NSN He, WER AANA ag ae AGERE HW WD QE Weare s SEE SAAS WOE, Was aE RE OAGA WON ASRS DNB aA “SSE, FRE WA YAMS SEE WIS We NEGA AAA AES BRAEMAR EY UREA AE ANE, SAAN WA EIS SANTA RASCAL FARA CAE EGER WAR Disa Lauds Te SEY QE NENT, Ne Wemae QE As GARE DAA GAA AAG AE BAR Srsai.ans dae RA AREAS EST ALAS SAL SHEA DAK WSWAKS See agg. SSE AS SEMEN TA EAR GAGE, RRR GOOG TRATES EAL MAMAN TAS, RENEE KERR AE QE ITE, REE, AGERE SH YAS ANDE A WS ENG AAT AL TAME TENA AS WAH GAKK WATTS NE WA, CREAN ENG SHE TAT LAGER EAANSQ GQ NAMES EAE SE, JAS AAS AS, RSH te TAMA AQ HAH ATE AEG DUG GALE ASKIN NAiaats nn. Tas TAA QQ AN NE RENE IS] EEG G Loe. RRM ASR UE TENA A ITN AST HS UN BAER GEL Senn. RR ERATE ASSL ESE] BSG. QE ASIOABad x ay FY A RES ESE: ASS Sx Sass LN SINGLESTEP METHODS 5) It can be argued that the stability characteristics of the linear equation (2.47) are very similar to the stability charactetistics of the equation of the form given by (2.46). Since the terms Br and C will give rise to correspond- ing terms in both numerical and exact solutions which are also linear in 1(A40), we conclude that (2.46) exhibits short-range numerical instability in the neighbourhood of (t,, yn), when the corresponding equation (2.45) with d= fy (tw Yn), exhibits numerical instability. 
Therefore, the stability analysis will be based on the equation V=flt, W=Ay, y (to) = Yo (2.48) gy where A= (2), and it is assumed that (2f/0y) is relatively invariant in the region of interest. Equation (2.48) has as its solution y (Q=y (to) exp (Att) which at f=fo++-nh becomes ¥(tm=Y (to) OM = yp (CM" A singlestep method when applied to (2.48) will lead toa first order difference equation which has solution of the form Jn=er (E (A) where c; is a constant to be determined from the initial condition and E (AA) is an approximation to e“. We call the singlestep method Absolutely stable if | E(Ah) | <1 Relatively stable if | E(Ah) | < e** If A < 0, the exact solution decreases as f, increases and the important condition is the absolute stability, since the numerical solution must also decrease with f,. If Euler's method is used, we obtain Yat =Yot Hfn=E (Nh) Yn where EQA)=14+0. Obviously, Euler's method is absolutely stable if {1444 |< lor-2€<0 If A > 0, the exact solution increases as ¢, increases. The numerical solution must also increase with /,. Thus, we are concerned with the relative accuracy, to a fixed number of significant figures, than with the absolute accuracy, to a fixed number of decimal places, Here, the relative stability is an important condition. This is ensured if the numerical solution does not increase faster than the true solution. For Euler's method we have JIA | Se, ADO which shows that the method is always relatively stable, 52 NUMERICAL SOLUTIONS 2.5.1 Fourth order Runge-Kutta method We apply the classical fourth order Runge-Kutta method to Equation (2.43) and get K=hf (tu Ya) hh Yn =A (tit By Yn FAK) =A (Yu BAh Yad =P b Ohl yn Ky=hf (tnt 3h, Yat 4K) = Mi ind MME (MP) Yn) = [ME (MEE ON) Lo Ky = lif (toth, yt Ka) = Mi Lya b+ $d (Ah)PA+E (AND) Yad = [MEAP +E ANP +4 (AR) Yn Your =Yoh (Ki+2K24-2K3+Ka) =Yobd AR) yn E ARTE AR) an +E@A+E AAP +3 MY) Yn FE ONT OMPTE ANP +S ANY) Jn [LFA NPE Oh) Be OYA] Yn Thus, the growth factor for the fourth order method is Qari EY ie +en whereas the growth factor of the exact solution is &, If Mi > 0, then cMt > E(Ah); so the fourth order Runge-Kutta method is always relatively stable, However, if Mt < 0, then to find the interval of absolute stability we construct the following table: An 0 =1 =2 =2.2, -2.6 3.0 E (Ah) 1 0.3750 0.3330» 0.4212 0.7547 1.375 ‘The graph of E(Ah) and e™ for various order Runge-Kutta methods is shown in Figure 2.3. We notice from this graph that for A < 0 the fourth order Runge-Kutta method first fails to be relatively stable, and then to be absolutely stable. The interval of absolute stability is -2.78 < Ah < 0. 2.5.2 Eulcr extrapolation method The first column of the Y-scheme for the Euler extrapolation method for (2.48) is given by ke Mo\? YQ= (14-4) vn SINGLESTEP METHODS 53 JE(Ah)| Fig, 2.3 Stability of Runge-Kutta method and the other columns can be generated from the relation (2.40), am yen yd, Y= (2.49) ety < Mp \® = [ Sem Pe ( 1 +35) |» where Conny = PE MM aI Ets matet Cnn = tmp = 0 If for some k = Kand m = M, the extrapolated value Y{ js taken as yne1 then we can write (2.49) as Yass = E (Mio, K, M) yn a Mig \st where E (ho, KM) = cates (1456 Thus the Euler extrapolation method is absolutely stable if | Eh, K,M)| <1 In order to find the interval of absolute stability for variousvalues of K and M, we determine the values of Aho corresponding to K and M such that | £ (Ato, K, M) | = 1. 
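This determination can also be carried out numerically. The sketch below is an illustration only (the helper name, the step ratio b = 1/2 and the bisection bracket are assumptions): it rebuilds E(λh0, 0, M) by applying the Y-scheme elimination rule of Section 2.4 to the first-column values (1 + λh0/2^k)^(2^k), and then solves |E| = 1 on the negative axis by bisection.

```python
def E_extrap(x, M):
    """Growth factor E(x, 0, M) of the Euler extrapolation method, x = lambda * h0.

    First column: Y_0^(k) = (1 + x / 2**k) ** (2**k), i.e. Euler's method over one
    step h0 taken with substep h_k = h0 / 2**k.  The diagonal entry Y_M^(0) is then
    built with the elimination rule of the Y-scheme (Section 2.4).
    """
    h = [1.0 / 2 ** k for k in range(M + 1)]                  # h_k / h0
    Y = [(1.0 + x * hk) ** (2 ** k) for k, hk in enumerate(h)]
    for m in range(1, M + 1):
        Y = [(h[k] * Y[k + 1] - h[k + m] * Y[k]) / (h[k] - h[k + m])
             for k in range(M + 1 - m)]
    return Y[0]

for M in (1, 2, 3):
    a, b = -1.0, -1.0e-3             # near the origin |E| < 1, since E(x) is close to exp(x)
    while abs(E_extrap(a, M)) <= 1.0:
        a -= 0.5                     # move left until the method is no longer stable
    for _ in range(60):              # bisect |E(x)| = 1 between a (unstable) and b (stable)
        mid = 0.5 * (a + b)
        if abs(E_extrap(mid, M)) > 1.0:
            a = mid
        else:
            b = mid
    print(M, round(b, 3))            # left end of the interval of absolute stability
```

As a check, for M = 1 the diagonal entry reduces to 2(1 + x/2)^2 - (1 + x) = 1 + x + x^2/2, so the computed boundary is -2, the stability limit of a second order growth factor.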
The principal diagonal of the ¥-scheme converges faster than any other diagonal or column and so we determine the interval of absolute stability for K = 0 and various values of M, We have for M=5 54 NUMERICAL SOLUTIONS ryey opt TF 7 ¥(0) -1 i; | | ¥en 1 6 8 | ¥f) 3-3 + | ¥@ sult des 3.) ob | ye Ml=|-y mn TT oF j iy Be at (Zee: = 960: 1024 “ ye 35 "31S 315 315.35 | " yo | |__b 62 _ 1240 9920 _ 31744 32768 | | yo SL 9765 9765 9765 9765 ~ 9765 «9765 | L Consider, for example, M = 2, YP = J OY -orpssry -[4 (Akg) = a( + Me) $(4%)']» Ya = [ato a 4 Mo cr] Thus E (Mo, 0, 3 = ah MO? one + Sit which gives the stability interval as — 2.785 < Mf <0. The graph of the function E (Mg, 0, Mf) for various values of M is shown in Figure 2.4. JE ,0,m)| SINGLESTEP METHODS 55 2.6 IMPLICIT RUNGE-KUTTA METHODS We define an implicit Runge-Kutta method with » slopes by the following equations: Ki = hflteben hy Yo Lav Kp), f= AQ (2.50) and yout = Job Ewe Ki (51) where a= Lan i=1,2,00 (2.52) x and ayy, 1 ws (ay er-taia eats (621 er-baa 6a) = miedtia d= and C1 = ayy +ay2, C2 = ayy tage (2.63) The two arbitrary parameters can be chosen on the basis that either K, is explicit or K is explicit. If we want K, to be explicit then we choose a = a2 =0 On solving (2.63), we get 2 I =O ams ana = z 3 1 3 wep ma 58 NUMERICAL SOLUTIONS Thus the third order implicit method which is explicit in K; is given by Ky = bf (tms Yn) ¢ 2 1 1 Ky = hs (tet Z ha roby Ket Ka) 1 Yat =e g Kit > Ks (2.64) In a similar manner, if @2=42.=0, a third order implicit method, which is explicit in Ka, is obtained as 1 1 Katy (t Zh tsk ) Ki=if let hy Yo K) dnemyrt SKA EK (2.65) 2.6.3 Fourth order method The following equations are obtained wytwge] 1 mertwa=s- 1 ws (auser-ta acs) +2 (aac banca) = wicttind=— (aut wraan) (arse tFarzea)+ (matwiae) (ane bane) 34 1 wrer(ayyert aiaca) +waca (aarey t+ aaaca) = -g- wi (ayic}+a1203) +2 (andtond = wich +wc}= + where C=O tay, Coan tan Here we have cight equations in six unknowns. Solving the equations which are independent of aj, 42, 21 and agg, we get 1 ti v3 Livi Le A On substituting these values in the rest of the equations, we find 1 1 1 Ay On= G4 MAC gy A= C2 SINGLESTEP METHODS 59 Thus the fourth order implicit Runge-Kutta method is given by Kei“ wr(4-%3), yt tK+(4- V3 )s,) mew is yi, Yat +(5 +28 )e+tie) Jnpa= Yat Ski +Ka) (2,66) which in terms of the earlier notations can be written as (3-/3)/6 | 1/4 (3-2/5 /12 B4+V36 | G4+2V3)/12 1/4 1/2 1/2 Runge-Kutta-Butcher method Applying the formula (2,66) to y'=Ay, we obtain Lb pat hah? ns =—-—] — Yn aban bene | Thus the implicit Runge-Kutta method with y=2 has order four and it is absolutely stable in the interval (—o9, 0). 2.6.4 High order implicit Runge-Kutta methods If f (J, y) is independent of y, then (2.51) corresponds to a quadrature formula n= ¥ =yeth ; tnt cah, 2. Inst Jat Y wi Kyat Bw flo berh) (2.67) where (t+ cy) and w;, i=1, 2,..-, 2 are the abscissas and the weights respec- tively. The quadrature formula (2.67) attains the order 2y if ¢; and ws are given by the Gauss-Legendre quadrature formula. The abscissas c,, i = 1, 2 pare the roots of the modified Legendre polynomial of degree v, ic. P, (2c—1)=0 and the weights w, i=l, 2,--., » are given by the equations ta tt uM : =a k (2.68) where ¢, #0. The parameters a), 1 & i,j < » are the solution of the linear system ¥ ay tap RED, Zoos 0 (2.69) i ® AEA ASTER EO WME NE NOSE GENE CHL Me SHEESH RAL SS Ane MUA RA QE. BASSE TAT VG, E.G SSS RQ. 
CERES ERS A Gut ESSE RN SSS TYR RR REISE ESE ERG x Ay WS, SEES SQ BA QE EEN AAA ARR GEA WEAISEG, “RAREST Qh AME BRIG QRWN X EK Ws SA&ass SX See QE EEL AONG & DETTE WANA SAE WALA ESR DEAN, NERA LAE GER Auras GAGS MEL SHAE aN VER QED BER QS HEED AR SERIE EARS RARE ATE, ee FEEENSS ARAGM AA ARNT GRRL RRO, Wade wabsady, Ss S x Xx Le. SETES RIA QR TENTNASH LAIRIR QAI KSAT WHATS . e WEAs QNSH Nga a Mer wala WAC RADA WAC ~ REYES OH FRE WAND . RIN RAEI ETE a NS XS OE RSAC SNS NS YAAK WA Ae HORNA TeaQNaoTBAQer WALA SINGLESTEP METHODS 61 Sixth order methods (6-V 75 )/10 5/36 (10-315 /45. (25-6415 )/180 1/2 (10-+34/ 15 )/72 2/9 (10—39/ 75/72 (S+-ViI5)/10| (254-6V 15/180 (104+34/TS 45 5/36 5/18 B/l8 5/18 Runge-Kutta- Buicher method with Gaussian nodes 0 | 0 0 0 0 (5—/35)/10| (5-+-4/5 )/60 1/6 (1s—79/'5 )/60 0 (+V 510) (5-75 )60 (15+75)/60 1/6 0 bh 1/6 G-V5yI2 S+V5VI2 0 12 5/12 5/12 Wiz Runge-Kutta-Butcher method with Lobatto nodes Finally, the implicit Runge-Kutta methods have these advantages: They have large stability interval, and high order for the number of K;'s or the function evaluations, A disadvantage of the methods is that they require 4 system of linear or nonlinear equations depending on f, to be solved at each step. 2.7 OBRECHKOFF METHODS The Taylor series method of order p can be obtained easily if it is possible to find the second and higher order derivatives from the given differential equation. The method is explicit and gives a pth degree polynomial approxi- mation to e™ when it is applied to the differential equation y’=Ay, y(f9)=o- ‘The interval of absolute stability is finite. We shall now develop implicit single step method based on first p deriva- tives of y(r) at ft, and fn4;. The method has maximum order 2p and it is absolutely stable on (—©>, 0), The general method is defined by A 2 Baro Ye ach Wt YS bi i yO (2.73) where a; and b; are arbitrary, The true value y (1,) will satisfy T, y Cnet —¥ Cn) SS a HY Ona) = Be eH C65) (2.74) 62 NUMERICAL SOLUTIONS where T;, is the local truncation error. Expanding each term in (2.74) about fs in the Taylor series and equating the coefficients of the like powers of 4, we get p+q equations to determine the arbitrary parameters, The order of the method (2.73) becomes (p+q). The general treatment of (2.73) is difficult, we shall onl give details for p=¢=2. From (2.73), we get Yast =Inbh aynsi th? asynsrt hbiya+h? bayn (2.75) Expanding each term in the Taylor series in (2.74)(p=q=2) and equating the coefficient of the powers of h, we get methods of various orders. 2.7.1 Second order methods On equating the coefficients of h and j?, we obtain ath=1 aytartby=3 (2.76) For a,=a,=0, we get Taylor's series method ym ynbh tt hy, (2.17) which is absolutely stable for —2 < Ah < 0. The values a,=b,=0 lead to the trapezoidal method varia Tot E gas b94) (2.78) with stability interval (—=9, 0). For bi=0, we obtain net =oth (1 bi) Yount (61-5) Weyath bey, (2.79) as one parameter family of second order methods. The parameter 6, can be determined from the stability consideration. Applying method (2.79) to the differential equation p’=Ay, »(to)=yo, we get the characteristic equation as fi =A (bi) & pe (4 y Your—(1+Mt Bi) yr=0 (2.80) The principal root é of (2.80) is given by i. A+ Mb f= 1= An (1—,)— 0? Fb, — 8) 281) The method (2.79) will be absolutely stable if | § | <1. The value of € given by (2.81) crosses the line =++-1 when Ah =0 (2.82) SINGLESTEP METHODS 63 or Ah = 2/(1—2b,) and the line =—1 when Ah = 14[1-4/(1 —2,))? 
(2.83) Thus, between —3/2 & by < 1/2 the value of ¢ is never less than 1 for any AA, and is above the line 1 only in the range 0 1 only for 0 < Ak < 1/2, The stability interval for the method (2.79) is shown in Figure 2.5. Fig, 2.5 Principal root of the second order method 2.7.2 Third order methods Here we get the following equations: atbh=1 avtastb; =4 Lata st 2.84) acta =t (284) If we choose by = 0, then we find a= 2, b= 4d and gat 64 NUMPRICAT. SOLUTIONS Thus the method (2.75) becomes Lee ody Thy oo ey ae (2.85) The principal root of the characteristic equation of method (2.85) is given by I+ Ek 5 6 2 0 2 6 6 8 1 Ah Fig. 2.6 Principal root of the third order method We find that the method (2.85) is not only absolutely stable in. the inter- val (—o, 0) but itis also stable for all positive Ah > 6. Keepins a; arbit- rary, we obtain from (2.84) by = 1-a eid “ego toi be po Substituting in (2.75), we get Sati = Yobh ay math (Z-t a ) Mey thay, 1 3 +n(4 ++ a) i (2.86) SINGLESTEP METHODS 65 a third order method with one arbitrary parameter. The principal root is found to be 14a —a)+a2% (2-4 a) 1a # (LF a) The value a, = 1-++/3/3 gives a third order method which has optimal Stability. 2.7.3 Fourth order method If, in addition to (2.84), we take 1 1 1 ea tz a= 4 (2.87) then we obtain a fourth order method as A, ny ft "4 rs Ja, = Baty her tI ay (yet) ) (2.88) The method (2.88) is absolutely stable on (— 29, 0). Alternatively, we may write (2.73) in the form Ynsr = Pye (AD) Vn (2.89) where Py, ¢(hD) = Pp (hD)/Qq(hD) P, (iD) = 1+ X44 (hDy and Qa(hd) = 1= Ba (hD¥ Equation (2,89) represents an approximation to the equation (bari) = oh? y (ta) The function P,, q (AD) is a rational approximation to e? Table 2.4 con- tains approximations of e”. Thus, depending on the values of p and q we obtain the following cases: (i) q = 0, we get b) = 1/i! and (2,73) becomes the Taylor series method of order p. (ii) p = 0, we obtain a = (= 1)!" 1/i! and (2.73) becomes the backward Taylor series method of order g which is absolutely stable on (—o0, 0). (iii) p = g, we find that a = (—1)!*! by and by = oar The method (2.73) becomes ° Yann = Yat Per Ser MUD to] and it has order 2p. The interval of absolute stability is (~20, 0). ‘NUMERICAL SOLUTIONS 66 9 z 2 ty Ltt 3 Ee dy = = (2) dxo of NoUVAIXOMsaY 92 TTAVL SINGLESTEP METHODS a7 (iv) p g, we obtain a method order of (p+q) which has a finite stability interval. 2.8 SYSTEMS OF DIFFERENTIAL EQUATIONS We have already shown that any mth order initial value problem can be replaced by a system of m first order initial value problems. The system of n equations in the vector form may be written as ya Beatynciss (2.90) ¥ (to) = Yo ” FFi Ct Yin Yas vere In) where y=| 92 | fGy) “lf (te Ys Yar seen Yn) Ya Sn (ty Yas Yay 00s Yn) Yio and r= Ye e 1% Yom o The singlestep methods developed for a single first order equation can be directly written for the system (2.90). 
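In computational terms this only means carrying vectors through the same formulas. A minimal Python sketch (the routine name is illustrative, not from the text) of the classical fourth-order Runge-Kutta method applied to the system x' = y, y' = -x, x(0) = 0, y(0) = 1, which is treated again in Example 2.4 below with a second order formula, is given here.

```python
import numpy as np

def rk4_system(f, t0, y0, h, n_steps):
    """Classical fourth-order Runge-Kutta method for a vector system y' = f(t, y)."""
    t, y = t0, np.asarray(y0, dtype=float)
    values = [(t, y.copy())]
    for _ in range(n_steps):
        K1 = h * f(t, y)
        K2 = h * f(t + h / 2, y + K1 / 2)
        K3 = h * f(t + h / 2, y + K2 / 2)
        K4 = h * f(t + h, y + K3)
        y = y + (K1 + 2 * K2 + 2 * K3 + K4) / 6
        t += h
        values.append((t, y.copy()))
    return values

# x' = y, y' = -x with x(0) = 0, y(0) = 1; the exact solution is x = sin t, y = cos t
f = lambda t, u: np.array([u[1], -u[0]])
for t, u in rk4_system(f, 0.0, [0.0, 1.0], 0.1, 10):
    print(f"{t:3.1f}  x = {u[0]:9.6f}  y = {u[1]:9.6f}")
```

Because the slopes K_i are arrays, the same code serves for any number of simultaneous equations.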
For example, the general singlestep method is given by Yin = yrthO(t, yuh) 2.91) a PPE Mt 25 i as ty oon Yon te AD where — DB (try yoy) == | a (tts Yardy Hae ts ovey Yes ty A) Yar ty h) $a (th Yu Pare « 2.8.1 Taylor series method We write i a é ie we Yin = hy ay PMP T= 0,12, NL (292) Pa hilu, Yi Yas ty s+ Yost) 7 BP (toy Yate tr Yas te eos Yoo ae Ea he Yt Yr te oes You) | Ge where = y) = 68 NUMERICAL SOLUTIONS 2.8.2 Runge-Kutta methods The classical fourth order Runge-Kutta formula becomes 1 e- Yun = wt se (K, +2K2+2K3+Kq) (2.93) Ku Ky Ky Ky where = Ky =] Ka |, Kp =] Kaz [Ks =| Kos | Ka =| Kaw Rut Kee Krs Ka and Ky = AF Mls Vs ts F2 te e Ye 1 1 Kaa ity (try haw chy Kinde chy Kay man chy Kn ) I 1 1 I Kiam hy ( ub by rt Kins p Kan vs tn ck Ka) Kig = hf (ith vy, Kia, Ya, it Kos, oy Yn Keg), f= 12) In an explicit form (2.93) may be expressed as Jy iti | te {[{ Ku Ka Ka Ke] ly ! Yaist |=] Yaa it e4| Ka fr2 +2) Ka [+] Kae | > i ge ll i} | : =f Yay 4 dmid UL Ra Kat Lk Ku JJ Example 2.4 Solve the initial value problem x=y, x0) =0 y x, M0) = 1,2 € (0, 1] by second order Runge-Kutta method with h = 0. 1. Forn=0 to = 0, x0 = 0, y= 1 = lif (toy Xo, Yo) = ACL) = «1 hfs (tor Xo. Jo) = -1(0) = 0 Afi (to+h, Xo+Ki\. Jo+K21) = .L +0) = hf (to +h, xo Ki, Yor Kar) = 1 (O-.1) = =.01 Ud ea! 1 x = xotog (Kit Kia) = 04 5 GID = yr =e b (Ky+- Ka) = 14-b 0.01) = 1=.005 = .995 Form =1 clyxy = My = 995 If (ty, Xi, Yr) = 1.995) = 0995 ifs (ty, X01) = A(—.1) =—.01 tbh, Xi: +Kie YA Ka1) = 1 (.995—01) 0985 SINGLESTEP METHODS. 69 Ra = Ahh, xt Kis vit Ku) == 01995 el [=(.14+.0995)] xe mt (Kut Ku) = 1 ++ (.0995-+.0985) = 1990 gi bl pea yas net (Kart Re) = 99544 (—.01=.01995) rt = .980025, The exact solution is given by x(t) = sin t, v1) = cos t The computed solution is listed in Table 2.5. TABLE 2.5 Sotuion oF y" = 1,1" ==, x(0) = 0, (0) = I ay THE SFcOND Orprr RUNGE-Kutra METHOD WITH h = 0.1 a Mb) v(t) 0 1 1 ot ot 0.995 0.099833 0.995005 0.2 0.1990 0.980025 0.198669 0.980067 03 0.296008 0.955225 0.295520 0.955336, 04 0.390950 0.920848 0.389418 0.921061 0.5 0.480185 0.877239 0.479426 0.877583, 06 0.565507 0.824834 0.564642 0.825336 “0.7 0.645163 0.764459 0.644218 0.764842 08 0.718353 0.695822 0.757356 0.696707 09 0.784344 0.620508 0.783327 0.621610 10 0.842473 0.538971 0.841471 0.540302 2.8.3 Stability analysis The stability of the numerical methods for the system of first order differ- ential equations is discussed by applying the numerical methods to the homogeneous locally linearized form of the equation (2.90). Assuming that a the functions f, have continuous partial derivatives SH = qi; and A denotes 7] the » bay (2,113) The substitution of K, and K2 in (2.108) yields + it Ip : Ma = rethyet Sows Wy) et 3 a2W Din 4 + Ea? fick Wye fu fi+OU8) (2.114) Nog = tt WE WD fot Pa D hh Magn = boy MURS Wi) fh Fae fi 5 + Wj eh D ft Wy ais jnfid+0U4) On comparing (2.114) with (2,111), we obtain WitW, = 1, WitW, = 2 a=, aW,=1 (2,115) The coefficients of h* in yas, and of f? in y;,, of equation (2.114) will not match with the corresponding coefficients in Equation (2.111) for any choice “of az, ax), W2 and W,, Thus the local truncation error is 0 (AS) in y and 0 (43) in y’. A simple solution of (2.113) and (2,115) may be written as Wy=Wy= tb 2 2 4 9 = ay = Fy ba = Thus the Runge-Kutta method (2.108) becomes 2 =F sora) i 2 Sy mn he oy, 4 k= Furs hawk Shyyt 2K at 3th) AS SEAN ERA ENA MEE WAL SEES, XY RE ERAS WES SS, Se TAS SAY SRSA VQ WS RARER GS QEAE SHAME TRNAS. 
AW oeione Se SRS NOSE ARIE ESET ESL NE SESE TOK SS SIOSEQ, SY RA MAE Sak QQ Ag Ween WEAR NX, 2 Ss SINGLESTEP METHODS 7 son = sob ot EUR) Nan = Set (Kit 3K) (2.119) The Runge-Kutta-Nystrom formula is written as 2 Ky =F Sltm 3) ReZz (+4 hh yack hy + #%) 2 25 Ka Es (mt thrmtdine $ *) K= Fi (ute he ree 5 at 35 Fe (Kit K)) Snes = yeblye + Sie (23K,+75Ky— 27K +25K) Mae it al BK+ 125K3—81K3-+125K,) (2.120) where the truncation error in y and y’ is 0 (#5). A formula based on three function evaluations with truncation error 0(h*) is given by 2 Ky er (u+tn, mtth %) 2 Ky= Ef (tt Fa set Shy $k) 3 i . i re Ce 2 nets (et Fh wt Zhnt aned 2) Ins = Jnth Nt Ty (10K +4245) Waa = hy, t Tpll2K+8Ki+ 1243) (2.121) Example 2.5 Solve the initial value problem x = (1+) y, 940) = 1,9") = 0,6 € [0,1] by the Runge-Kutta method (2.119) with h = 0.1. For »=0 bh =0,y=1,9,=0 ( Dy ie : Ks Zilla yo) = +0)1 = .005 Ht 2 cs aA Mah (+24, sor ha+GK,) 78 NUMERICAL SOLUTIONS wef, 4 = (+ 4c) (14 2a oF $(005)) = .0050333827 = wth vt PUR Ka) = = 1-+0+ 4 (.005 +.0050333827) = 1.0050167 y= OF sey 005+ 0151001481) = 0,100S0074 The exact solution is given by Ha) = et ‘The computed solution is listed in Table 2.6. TABLE 2.6 SovuTiON OF »’ = (141%) y, » (0) = 1, 9’ (0) = 0 BY THE Runae-Kutta Merton with hi = 0.1 b yn a Hh) vn) o t 0 1 o on 1,0050167 0.100501 1.050125 0.100501 02 1.022098 0.204038 1,0202013 0.204040 03 1,0860407 0.313802 1,0460279 0.313808 o4 10833046 0.433303, 1,0832871 0.433215 os 1.131710 0.566554 1,1331485 0.566574 06 1,1972453 0.718298 1.1972174 0.718330 07 1.2776552 0.894286, 1,2776213 0.894335 08 13771681 1101629 13771278 4.101702 09 14993498 1.349266, 1,4993030 4.349372 1.0 16487762 1.648568 1.6487213 1.648722 2.9.2 Stability analysis We can discuss the stability and the error analysis of the Runge-Kutta method (2.119) in a manner similar to that adopted in Section 2,5. Let us consider the differential equation yi say (2,122 subject to the initial conditions ¥ (to) = Yo," (to) = Gt E [to, 6) where & is a real number. SINGLESTEP METHODS 79 We shall discuss the three cases a = into (2.118), we get a - M4 Ud BF oy Kim am Ba = (Ft AE Jat Pang Substituting the expressions for Ki and K into (2.119), we find [ Yost | [ a aie | [ Mn | i. = ; (2.123) Lo Fntt a ay My —K2, 2 Using Equation (2.122) Wa hod Ix where ays ley ep aah 1 ig eg, y= ah We, an= (2.124) For «=0, we have (2.125) We now consider the case x= —A?; the solutions in this case are oscillat- ing. We, therefore, consider the eigenvalues of the matrix in (2.123), which ane given by MA faba flan asa) a)" 1 (2.126) Substituting #=—A? jnto (2.124) and inserting the resulting values into (2,126), we get Wks aky® 14 EK HY pes — 36 H4K4 2-12 e+ is 2[(7) (hKS —36N4k' 8 + 432I2h2— 1296) | 1] a Poppe MA (By apr [2 anes 2] (RY cute —4.4goaar37 pe (ik KS — 2 hh? a8 rr] ] Where #)=15.779763 and y=6,.5467418. Computing A, and A, as functions of i? 2, we find that the roots have unit modulus for 0 € /? k* < 4.44. Thus the stability interval of the Runge- Kutta method (2,119) is 0 < h®k? < 4.44, SS WERT SAN WS SNOT AREA RE SEMA UNE, SAA ISSA GK SE PARES WANA SOPA, SE ERASE. St San WENA WQS rs DARA OA ACN ESS ERRAND AA MAMIE WAVE QAR EE GR, HT SSNS WR GK AKG SQ AE EL DEAR WMO AQT AQUA MAH WET LEE S. 
(QRS MAAS ASE WER LQ GAS Ss Qn ASS DC AGS SOAs AEA DANAE SN CETERA ANAT WR ANE, SS BAO WAL | As ASS AS , \ ew’ a AEA ESTES, VS S LWW WO SS Ses SN = = : SAS NS AW USSAT \ \ & SN SS \ S wo SS ee SANS SS RAK NE RAGE AT RETA DUG GSAET SE SE SS DAQTITR QS SSM AAAI |, SAAN MESE SSN, TRA EASA AE GQ NET, Sa ES SS EAS SY YEAST SOV SS SS Aw ST = v SN ‘ AEE NEAT EEG HAI NEAQELE EA QQAASS WAH EG HUE REALS NASA EE GAAE ASIETG YAW VD aa AES WEA AGES YSQRASQEREE WKS _ TSSRAARE DQ AES ERAS RAS RAGS MAGA YOESAABLT ae ausinnas: SAA EEA WAIST QU SHAE MAGEE GH BEA “AREA Rze t Stas SRA SHURA A, WS Nn CESSRRA. ‘ TNE < SS N SSSA —~ WSS R Re NSSOQH SS WSRASHK te SSIES. S&S SON, W\ WWE NER EL AS NSS SWIG WEE NENG MENA TE SIE, WRASSE WIAD MAG NE MANETS WELT WSR AS WAALAK TE NSE ARES ES AAA INS MEERA RS SANLAQ, THAAG |B Nona HENNS Nort WL Te GAT REEL RSL SQUIER AEH Tage reat AGEL EQ AIRE NSSAS SEAR 2 AREER, RAC MEE AEN RAS RA Wah Mai, Deady AN AK EER AS WSAMAQY MAMA SRST ANN SR BH TT TE ARASH HET CARR SERIA NN BS SOSSSSSSNANG ASS Sie FTN RRL, TSE SA. GR ARE GSI ASQ RARER MA, Y= - ws BR Nera SeAoora QS Se SARE ASAE, EE AOE AEA AVR DAA SET TA MANE SA FANE SHE GGA EOMAHaH LOA AE AK ARAHGSAGHALE + SSS WONG. Gas WASH ANE ASE SE SEAT LA TDA WANE FAK QV SA AMAL, AT. SD Was ‘ SOQVAE AW ANE 5 SS < RAK SANS SSSA NS ASSL KARAS, AY SEEN ADS SAND VE SEEN SOAR MIN EIA WIS, DC perce WANG LOSE Loe TENNIS Nc ne, AQ ae MEQ A ASE Renee WOVAN BEDS DARL RAS Ways \ : BWERMUAI : =. SSE SAE SS ced a RASS Wy REE, x AREER a SSS ONY, WWWGAGEETE, Wer AW war as BS AEE EEE AQ DAE REE AH, SQSaUsc AAMC VQ SEIS, Nw WA CR DANES AD SARA, NRA AMEE RABREA DAA Sy SERRE SH OM RRNA TEER REE AENEAN RERE SINGLESTEP METHODS, 83 where ettph=1 fe =p? 8 (phy om — Lp) ph—1 B= (ply Fin= ye SHB 4 (2.133) (~ph) Equation (2.132) becomes Vn =Yot KP +[— 3K + ph yn) +2 Kot ph Fisiya) HAI Kst ph Yur 2d— (Kat eh Fas) Fotb4 (Ki tphy'n) —(Ky+ph Vnsiiz)—(Katph Jasin) + (Ket ph Fas Fs (2.134) which is the required Runge-Kutla- Treanor method. Substituting the values 1 from (2.130), the equation (2.134) can be written as OF Fuetas Pasiz and vyunesJurb (Ki + 2Ka + 2K + Ke) —(pit)? [(K2—K3) Fy-+-(K, ~4K2+-2K3 + Ki)Fa-4( Ky, ~ Ka — Ks + Ka) Fs)] (2.135) The value of p is given by A jn —( Kaa Ko peli (34) (2.136) The first part in (2.135) is due to the fourth order Runge-Kutta method and the additional term is fifth order and higher in /, It is seen that when the equation (2,134) or (2.135) is used to integrate over the interval where pit is small, the result will be identical with the Runge-Kutta method. If ph is large, a condition where the Runge-Kutta method is known to be unstable then the equation (2.134) gives a far superior solution, DEFINITION 2,7 An adaptive numerical method is said to be 4-stable in the sense of Dahlquist if when the method is applied to the equation y’=Ay, P(ty)=No. A < O with exact initial condition, it gives the true solution which is identical to that of the differential equation for arbitrary h and p=A, Here, f(t, r)=Ay, then p= -A and the equation (2.134) gives ya =OM yn (2.137) Since 4 < 0 and therefore yO when noe and for any fixed h. SA MEANING AAS AR RRR AINA HSA ASE GAS, WUE HOIST ERD ES ARR RRR EASA REIS ERS, WENA REM QAR -DBA, AQAA, satay QA SRSA TRAM AS EE NOUN SSN QAKQ™\ Wak, NENA YA ROA EEE MH WSR WEN AANA ATES AQEHA|GWB HAHAH. GAA AQUQSMIS LAN ASA NAA RAE WEA VQ MAM S RAISE GAGA) MAAS WMA, AWHSANS TE TaGMiaQ Gay, SAREE IRE R AL Ses he es, See WESTIE WAAL NAMIE NG, Aas SLaH ANY =Ahg AACA DEH Ss SSN . = . 
“SASS SANS, NS WEusMEhsdys, REWER AHHESAQIWA WAV, SARK AE AED SOAK seat SOW Ww Sa SQM\A, XK _ sss QS SS S TNS SO SESS AS WQS GNE SNE AMAIA HA MES GAG BSS NESS B® |S ~ [SESS SSS SS KE DQAHU=|ANNL RAS MAES BEN MQ MAHER MLAS Oe OG ERMAN MH OHA, SHA, USS RAS RAE NENG AEG TA, WRIA SOSA EE Shoe, SATAN Dae Ra E MEAT DEAD MMM YI QA, NY AR AER ARE CORMAN ML MONE BAAS s OA WA, Y SQ AYE FHA SARE NAS SINGLESTEP METHODS 85 Integrating (2.143), we obtain the following equations Inet =Suk ly Fit PAF + BPs th Cky Wyqr =H YyFob WRAP, +HBF;+ CEs (2.144) where w=Vph, foscow, 7 = oe w WF na = A Fy m= 0, 1, 2 (2.145) Using the Runge-Kuita-Nystrom nodes, tay tot Zh, ntti toh weget four equations for the determination of the four unknowns A, B, C and p. Denoting k= frm 9) ie KF (tot 4, ynsas) 2,4 Vng21s=Sn tah urd K, me 2 Ky= E(t fh sna) ae Yara = drt Shy tS Ky 2 ram root sean) HG ged Vagats = Yb hy tyg(Ki + Ka) (2.146) We obtain the values of A, B, C and p as WA=2K, 2 WB=— Ke K,- FK+ why, (2.147) 86 NUMERICAL SOLUTIONS Substituting the above values in (2.144) and using (2.145) to simplify, we heve I Tara hy +t agl4hit al 16K, +25K> = 9K5)Fs+(30K,— 75K3+45Ky)Fe] War =h ALK 3K) 16Ky+ 15K —9K3)F,+ (30K, —15K2+-45Ka)Fs) (2,148) The term in w? in (2.148) is the modification to the Runge-Kulla-Nysirom method. DEFINITION 2.9 An adaptive numerical method is satu to be P-stable if, when the method is applied to the equation y""=—Ay, A> 0, 3ig)=yn. ¥'(f)=1;, with exact initial conditions it gives rise to the solution which is identical to that of the differential equation for an arbitrary f and the free parameter is chosen as the square of the frequency. Here, p=) and the cquations (2.144) become 7 Sin w Ying = Jn cos wh — Aynst = —JnW'sin WHA yn COS (2.149) which may be written in the matrix form as Yai Ja [ e el 7 | (2.150) Iyn se Ayn sin w cos nw, E(w)= * -wsinw — cosw is a2%2 matrix, The eigenvalues of the matrix E(w) are complex and of unit moduli. where Bibliographical Note There are many text books which deal with the singlestep methods for solving initial value problems of ordinary differential equations. Particularly useful are 33, 46, 93, 113, 16] and 163. An automatic integration programme based on the Taylor series method for soiving initial value problems is given in 94. The Runge-Kutta methods of various order are studied in 23, 25, 174, 175, 209 and 222. The stability of the Runge-Kutta formulas is given in 68. We find the methods with minimum truncation error in 118, 156 and 199, the methods with extended SINGLESTEP METHODS 87 region of stability in 165 and 166, and the error bounds of the methods in 34, 137 and 220, By an m-fold predifferentiation of the differential equations and a simple transformation of the variables, the Runge-Kutta formulas of high accuracy have been discussed in 82, 83, 145 and 146. The extrapolation algorithms for the initial value problems are established in 21 and 100. The implicit Runge-Kutta methods are given in 22, 24, 28, 208 and 252. The two point Runge-Kutta formulas are found in 27, Using higher order derivatives, the obrechkoff methods are obtained in 177 and 188. The singlestep methods based upon quadratures and interpolations have been studied in Sl, 63, 64, 65, 66, 140 and 223, The Runge-Kutta methods for the system and the higher order initial value problems are discussed in 5, 42, 109, 111, 148, 213, 219 and 260. The adaptive numerical methods are given in 136 and 238. 
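The unit-modulus property of the eigenvalues of E(w) claimed above is easy to confirm numerically. The short check below assumes the matrix entries as displayed in (2.150); the w-scaling of the off-diagonal entries does not affect the moduli, since the eigenvalues are cos w ± i sin w in either case.

```python
import numpy as np

def E(w):
    """Amplification matrix of the adaptive method for y'' = -lambda*y, as in (2.150)."""
    return np.array([[np.cos(w),       np.sin(w) / w],
                     [-w * np.sin(w),  np.cos(w)]])

for w in (0.5, 2.0, 10.0, 50.0):
    print(w, np.abs(np.linalg.eigvals(E(w))))   # both moduli equal 1 for every w
```

The check simply confirms, for a few sample frequencies, the P-stability property of the adaptive method.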
Problems 1, Obtain the Taylor series solution of the initial value problem ~ ¥ = 1-2ty, yO) = and determine: (i) t when the crror in y{r) obtained from four terms only is to be Jess than 10-® after rounding. (ii) The number of terms in the series to find results correct to 10! forO 0. Let us assume that f(t, y) has k+1 continuous derivatives. The Newton backward difference formula which interpolates at these k+1 points in terms of u = (t—tn)/h is given by Pa (oft) = far HU=3) Phan + 2D 92 fn (eae (u+ke—2) +t O* Sra 4 We) ductal peer pen (8) = er( = Oe aieccet aa OED La (3.14) Substituting (3.14) into (3.6), we get 1 z Late Hing) = ite +h JCS cor( wa) oe on “i + (-11( Ea ) sat gar © du . or Wine) = Wlrth Ye 89) 7 fant TES) (3.18) 1 where rg =e) case ({7T reo © ae si 1 l-u a= | (-0r( i ) du (3.16) aI Neglecting 7;{ in (3.15), we get E Jars = Iaith TBD "for (3.17) o where a) = 1+) 3) = Fy 1 , = ay) =" (+i (27) MULTISTEP METHODS 103 i 1 ‘ 5 BY) = = R= JP 1 are i) = — jj? | — 38}- 2 — 673 ay 799 ITA? 19-38)4277?- 7) zal 1440 If we replace the difference operator 7 fn: in terms of the function values, we obtain a) = CAP (275474457? - 1673 +274) ® Int = Ynnsth x Br fammet (3.18) From (3.17) or (3.18) we can obtain a number of multistep formulas for various values of j. It is obvious from (3.15) that the implicit multistep methods are of one order higher than the corresponding explicit multistep methods with the same number of previously calculated ordinates and slopes. 3.3.1 Adams-Moulton formulas (j = 0) Substituting j = 0 in (3.17), we get 1 1 1 Fats = Vath [ foam y Phot Ty VP Sani 9g FP Ins = 29 pep 7 ps ] Fa9 Pi Pe ggg fete The error term associated with truncation after kth 7 is 1 rag =f (—yes EDD WEED puen dy (31) a Since the coefficient of f+!) (€) does not change sign in (0, 1), it is possible to write (3,19) as Teo) = HE? 8), FHV (B) The coefficients 8° in the formula E Fase = ob YB frome mo are given in Table 3.7. 3.3.2 Milne-Simpson formulas (j = 1) These formulas can be obtained by substituting j = 1 in (3.17) and we find 1 Det = dearth ( Yoor—27 Sas Poor +P fa 1 1 = 35 Pins BP Sies ) (3.20) The coefficients 5:(" of formula (3.18) are listed in Table 3.8, 104 NUMERICAL SOLUTIONS. TABLE 3.7 Coericients ror THE FORMULA i Vos, eH BO fami at k *(0) 40) (0) (0 200 7 5! 2 ai ) ax » By ) ay ao 0 1 4 z 5 1 ® r ir 9 1 3 x 24 4 251 646 106 ae 720 nr0 720 720 5 475 1427 482 3 au 1440 1440 440 1440 1440 With k = 2 in (3.20), the formula i Yon = Saar g fats torte) rn — 9 (E 3 £0 (6) is of special interest since in this case the coefficient of the third difference is zero and the use of second or third difference gives the same accuracy. This formula also reduces to Simpson’s rule of integration when f(t, y) is independent of y. TABLE 3.8 CogeFicients For THE FORMULA k Yarn = Yat LBA fms =o k a am ayy ao ay) ay) 0 2 1 0 2 1 4 1 . v v z 1 4 1 3 v > 7 7 4 29 124 24 4 0 90 0 oi 5 Be 129 14, 1 90. 
Cu 50 10 MULTISTEP METHODS: 105 3.4 MULTISTEP METHODS BASED ON DIFFERENTIATION We have so far discussed implicit multistep methods which required the replacement of (7, 7) under the integral sign by an interpolation polynomial which takes values futis fie <5 fi-ker AU nets fay os biker We shall now develop methods which are based on the replacement of the function y(t) on the left hand side of v(t) = ft (0) (3.21) by an interpolation polynomial and differentiating it, We write (3.21) at Fung US MDY (v1) = Af tors Klneid) (3.22) Using operator relation h D =—log (I-P), we may express (3.22) as slog (=F) (tis) = Af Getty Mtoe) or [EE] oad = oes none (23) If we truncate the series on the left hand side of (3.23) after kth difference, we get k op [3 ED st = at (3.24) ty The above expression (3.24) in terms of function values y,, becomes i Liymrnnts = Mires (3.28) where y» for 1 < k < 6 are given in Table 3.9. TABLE 3.9 Coerricients For THE FORMULA Yo rorems = her a k % oh “% 1 1 3 1 2 zr -2 nL 3 l SY a FF 25 4 L 4 # -4 a at + 37 0 5 1 5 1 = aa Sl H . a 3 4 3 147 1s 20 1s 6 é 147 a 5 20 is = t o a 2 3 a 5 6 | 106 NUMERICAL SOLUTIONS 3.5 GENERAL LINEAR MULTISTEP METHODS Let us consider the general linear multistep methods of the form Koby = aut dapat s+ baeyu-et) Fhlboy gy Pot oe FERN ag) (3.26) or Jn = k yn th Ey bon-iet Symbolically, we can write (3.26) as ACE) Yoneri he (E) Ya pg = 0 where p and o are polynomials defined by (6) = fF ayt*! ager... — ae O(E) = bof b+ oo tbe The above formula (3.26) can only be used if we know the values of the solution y(t) and y'(t) at k successive points. These k values will be assumed to be given. Further, if bp = 0, the resulting equation is called an explicit or predictor formula because y,4; occurs only on the left hand side of the formula. In other words, y+; can be calculated directly from the right hand side yalues. If bo # 0, the equation is referred to as an ~ implicit or corrector formula since y,;; occurs in both sides of the equation. In other words the unknown yn+; cannot be calculated directly since it is contained within y,,,. We can also assume that the polynomials p() and o(€) haye no common factors since, otherwise, (3,26) can be reduced to an equation of lower order. In order that the difference equation (3.26) should be useful for numerical integration, it is necessary that (3.26) be satisfied with good accuracy by the solution of the differential equatiun yp’ = f(s, y), when his small for an arbitrary function f(s, y). This imposes restrictions on the coefficients a; and by. With the difference equation (3.26), we associate the difference operator L defined by i & L(t), A = Mtg YL ay Criss bay toe 3.27 DO), A = Ht —2 ay Crate) “ABs bran) 3.27) We assume that the function y(t) has continuous derivatives of sufficiently high order. Expanding y(t.—ij1) and y(tn-2+1) in Taylor's series, we have Winaitt) = Vt) =i hy'r) a (=i hs s EE iva t + EP woven) froin nin —s)PYOrM(s) ds Ble fe MULTISTEP METHODS 107 Hons) = Hl) ahyted+ 9S ray"tn) =r + Se We-ly(t,) tes rn 1 + oa | (tater)? 1y@*N(5) ds Substituting in (3.27), we get L(t), fh] = Coy ta) + Cyhty' (tn) + Cah?y! Cin) + + CphPy (te) + Tr (3.28) where -> a(1-i)]— roa ‘ 5 oa-ar, g=1,2 fess 3] j (tara s)?yP*%(s) ds h fer = y OH \ Ctnattr—sPyPtO(s) ds trae hp | bltners¥iye*%s) ds ‘5 Trait, : =, by | Guia UfgMas | (3.29) i DEFINITION 3.1 The difference operator (3.27) and the. associated linear multistep method (3.26) are said to be of order p if, in (3.28) C=C ==... 
= Cp = Oand Cy #0 (3.30) Thus for any fuaction y(¢) € C?) and for some nonzero constant Cysi, we have L(t), A == Cpe 49425? + 0(H2 #2) (3.31) where Cp+,/o(1) is called the error constant. In particular, L{y(¢), A] vanishes identically when s(t) is a polynomial whose degree is less than or equal to p. We now introduce the following definitions. 108 NUMERICAL SOLUTIONS DEFINITION 3.2 The linear multistep method (3.26) is said to be consis- tent ifit has order p > 1. DEFINITION 3,3 The linear multistep method (3.26) is said to satisfy the root condition if the roots of the equation p(€) = 0 be inside the unit circle in the complex plane, and are simple if they lie on the circle, We shall now use the definitions of order, consistency, and root condi- tion to determine the parameters a; and 6, in the linear multistep method (3.26). 3.5.1 Determination of a; and Equation (3.31) holds good for any function (1) € C*?), The constants C, and p are independent of (7). These can thus be determined by a_parti- cular case y(t) = e', and substituting it in (3.31) we obtain L(e!, hi] = efesx—ayetn—... ag etemert = Alby efte Eby efrb oon t be ett) == Cpa hPt! ote + O(hPt2) Simplifying we get Let, h) = (Ce —aye@-4 = ak) — hi boet+ bye“... + by] etre ‘yet HP* ole +0 (he#2) or ple) —holeh) = — Cer hPH-LO (he?) Putting e* = 2, as h+0, é-+1, the above equation becomes p(E)—(log £)o(E) = — Cys (E—1)?*+0((E = 1)P*) (3.32) or PED af) == — Cras =P -0E= 1") (3.33) Equations (3.32) and (3.33) provide us with the methods for determining p(é) or o(£) for maximum order if o(£) or p(é) is given. If o(2) is specified, (3.32) can be used to determine a p(é) of degree k such that the order is at least k. The (log é) o(¢) can be expanded as a power series in (€—1) and the terms up to (—1)* can be used to find p(é). If, on the other hand, we are given (£) we caa use (3.33) to determine o(é) of degree < k such that the order is at least k-+1. The p(€)/log £ is expanded as a power series in (€—1), and terms up to (f—1)* are used to get o(). For example, a few choices of the polynomial p(é) and the resulting poly- nomials o(€) which give the well-known methods are as follows: Adams-Bashforth Methods e(é) = €*1 (€—1) and o(€) of degree kK—1 iS ot) = 8S yal 9" me MULTISTEP METHODS. 109 ite mri ™ where mth Ym yo = 1,m = 0,1, ae Nystrom Methods p(é) = £2 (€2—1) and o(€) of degree k—1 o(8) = 2 SS yn (9 where rte Init 2 etn oy sm Adams-Moulton Methods p(é) = 1 (£—1) and o(é) of degree k 6) = SD yall EH where Milne-Simpson Methods pl) = €&2 (2-1) and o(¢) of degree k b off) = YY yal Ee & F 2,m=0- l pa ape at war tS where Ym vet maT lm=1 i 0,m = 2,3... As the number of coefficients in (3.26) is equal to 2k-+1 we may expect that they can be chosen so that 2k-+1 relations of the type (3.30) are satisfied, in which p is equal to 2k. However, the root condition to be satisfied by the method considerably restricts this order. We now state the fundamental theorem which specifies the maximun; order of a linear k-step method THEOREM 31.1 For any positive integer k although there exists a-consistent method of order p == 2k, the order of a k-step method satisfying the root condition cwmot exceed k-+-2. If k is odd it cannot excecd k+1. 110 NUMERICAL SOLUTIONS Example 3.3 Let p(£) = (-1)(€-A) where A is real and -1 0, &\n behaves as ‘he exact solution and £1 dies out since | &4| <1, but for 4 < 0, &\4 decreases as does the exact solution but £2» oscillates with increasing amplitude. 
This behaviour is independent of h, Therefore, Milne’s method is stable for fi = 0 but unstable for h <0. It is a weakly stable method. 4.5.5 Propagated error estimates The constants Aj, A2, «-., de in (3.47) are chosen so that the initial con- ditions are satisfied; thus Ey =Ay +Ag tet de Ey = Auk +g ban poet Anka Ee = Ay Shy An Bp + Ae where By 9 Fey Oe bd The principal root £,4 of the characteristic equation for sufficiently small Mh is approximately equal to e™. The other roots €2%, &yn, ---, fea are extra- neous roots. The stability of the numerical method requires that these extraneous roots have magnitude less than unity so that the corresponding components of the error are negligible. For stable methods we therefore do not need to know A, «++, At To find A), we use Cramer's rule and obtain Ey 1 sie Tt bn fen 2h ti (3.61) 1 I Ene Ee | [aise at on at | Substituting CUE) = cea Byler BpP bobo in Equation (3.61), we can write tek 2 Exot + coFo Cn) which, if the initial errors ¢ are constant and equal to «, becomes: ) ca) Ap'(1)} C(E wn) 126 NUMERICAL SOLUTIONS In (3.47) we now substitute this last expression for A, and put ,, = &™. Substituting mh = t,—f and neglecting the factor C(1)/C(E) which is close to unity, since £,,as # > 0 is equal to 1, we get the estimate of the propa- gated error for any stable formula as nel gag) Memtd)+ ogy O62) The first term is dominant when A > 0, while the second term is dominant when A <0. For small A it is worth noting the existence of the limit in (3.62) as A» 0. It yields an expression which increases linearly with (ta—to). 3.6 PREDICTOR-CORRECTOR METHODS We now discuss the application of the multistep methods for the solution of the initial value problems. 3.6.1 Use of implicit multistep methods Let us assume that the values of the ordinates and slopes are given at k points. We are required to determine yns1 from the formula & Yast = he bof ney Yars)+ YE [a1 Yotseth brie] As we cannot solve yn44 directly, we use an iterative procedure: P: Predict some value y(®, for Yaer E: Evaluate f(tn+1, ¥) C: Correct {°), to obtain a new yl!) for Ynes b Kity = bof tmsas Yes A BE [a1 Vota tlt bi faces] E: Evaluate f(tms 94) C: Correct y), z HA, = MN bo ftasi Ws VE las Yeterth bi facta] The sequence of operations PECECE... determines for y»+1 a sequence of values We Le Wie (3.63) Let us examine the convergence of this sequence, MULTISTEP METHODS 127 THEOREM 3.7 Let 4), be a sequence of approximations 10 ys. If for all values of y close to Yny1 and including the values y = Y,, yl), .., we have | f (tm y) |
