


Copyright © 2009, New Age International (P) Ltd., Publishers Published by New Age International (P) Ltd., Publishers All rights reserved. No part of this ebook may be reproduced in any form, by photostat, microfilm, xerography, or any other means, or incorporated into any information retrieval system, electronic or mechanical, without the written permission of the publisher. All inquiries should be emailed to rights@newagepublishers.com

ISBN (13) : 978-81-224-2707-3

PUBLISHING FOR ONE WORLD

NEW AGE INTERNATIONAL (P) LIMITED, PUBLISHERS 4835/24, Ansari Road, Daryaganj, New Delhi - 110002 Visit us at www.newagepublishers.com

Preface

This book is based on the experience and the lecture notes of the authors while teaching Numerical Analysis for almost four decades at the Indian Institute of Technology, New Delhi. This comprehensive textbook covers the material for a one-semester course on Numerical Methods of Anna University. The emphasis in the book is on the presentation of fundamentals and theoretical concepts in an intelligible and easy-to-understand manner. The book is written as a textbook rather than as a problem/guide book. The textbook offers a logical presentation of both the theory and techniques for problem solving to motivate the students for the study and application of Numerical Methods. Examples and problems in exercises are used to explain each theoretical concept and the application of these concepts in problem solving. Answers for every problem and hints for difficult problems are provided to encourage the students towards self-learning.

The authors are highly grateful to Prof. M.K. Jain, who was their teacher, colleague and co-author of their earlier books on Numerical Analysis. With his approval, we have freely used the material from our book, Numerical Methods for Scientific and Engineering Computation, published by the same publishers. This book is the outcome of the request of Mr. Saumya Gupta, Managing Director, New Age International Publishers, for writing a good book on Numerical Methods for Anna University. The authors are thankful to him for following it up until the book was complete.

The first author is thankful to Dr. Gokaraju Gangaraju, President of the college, Prof. P.S. Raju, Director, and Prof. Jandhyala N. Murthy, Principal, Gokaraju Rangaraju Institute of Engineering and Technology, Hyderabad, for their encouragement during the preparation of the manuscript. The second author is thankful to the entire management of Manav Rachna Educational Institutions, Faridabad, and the Director-Principal of Manav Rachna College of Engineering, Faridabad, for providing a congenial environment during the writing of this book.

S.R.K. Iyengar
R.K. Jain


Contents

Preface (v)

1. SOLUTION OF EQUATIONS AND EIGEN VALUE PROBLEMS 1–62
   1.1 Solution of Algebraic and Transcendental Equations 1
       1.1.1 Introduction 1
       1.1.2 Initial Approximation for an Iterative Procedure 4
       1.1.3 Method of False Position 6
       1.1.4 Newton-Raphson Method 11
       1.1.5 General Iteration Method 15
       1.1.6 Convergence of Iteration Methods 19
   1.2 Linear System of Algebraic Equations 25
       1.2.1 Introduction 25
       1.2.2 Direct Methods 26
             1.2.2.1 Gauss Elimination Method 28
             1.2.2.2 Gauss-Jordan Method 33
             1.2.2.3 Inverse of a Matrix by Gauss-Jordan Method 35
       1.2.3 Iterative Methods 41
             1.2.3.1 Gauss-Jacobi Iteration Method 41
             1.2.3.2 Gauss-Seidel Iteration Method 46
   1.3 Eigen Value Problems 52
       1.3.1 Introduction 52
       1.3.2 Power Method 53
   1.4 Answers and Hints 59

2. INTERPOLATION AND APPROXIMATION 63–108
   2.1 Introduction 63
   2.2 Interpolation with Unevenly Spaced Points 64
       2.2.1 Lagrange Interpolation 64
       2.2.2 Newton's Divided Difference Interpolation 72
   2.3 Interpolation with Evenly Spaced Points 80
       2.3.1 Newton's Forward Difference Interpolation Formula 89
       2.3.2 Newton's Backward Difference Interpolation Formula 92
   2.4 Spline Interpolation and Cubic Splines 99
   2.5 Answers and Hints 108

3. NUMERICAL DIFFERENTIATION AND INTEGRATION 109–179
   3.1 Introduction 109
   3.2 Numerical Differentiation 109
       3.2.1 Derivatives Using Newton's Forward Difference Formula 109
       3.2.2 Derivatives Using Newton's Backward Difference Formula 117
       3.2.3 Derivatives Using Newton's Divided Difference Formula 122
   3.3 Numerical Integration 128
       3.3.1 Introduction 128
       3.3.2 Integration Rules Based on Uniform Mesh Spacing 129
             3.3.2.1 Trapezium Rule 129
             3.3.2.2 Simpson's 1/3 Rule 136
             3.3.2.3 Simpson's 3/8 Rule 144
             3.3.2.4 Romberg Method 147
       3.3.3 Integration Rules Based on Non-uniform Mesh Spacing 159
             3.3.3.1 Gauss-Legendre Integration Rules 160
       3.3.4 Evaluation of Double Integrals 169
             3.3.4.1 Evaluation of Double Integrals Using Trapezium Rule 169
             3.3.4.2 Evaluation of Double Integrals by Simpson's Rule 173
   3.4 Answers and Hints 177

4. INITIAL VALUE PROBLEMS FOR ORDINARY DIFFERENTIAL EQUATIONS 180–240
   4.1 Introduction 180
   4.2 Single Step and Multi Step Methods 182
   4.3 Taylor Series Method 184
       4.3.1 Modified Euler and Heun's Methods 192
   4.4 Runge-Kutta Methods 200
   4.5 System of First Order Initial Value Problems 207
       4.5.1 Taylor Series Method 208
       4.5.2 Runge-Kutta Fourth Order Method 208
   4.6 Multi Step Methods and Predictor-Corrector Methods 216
       4.6.1 Predictor Methods (Adams-Bashforth Methods) 217
       4.6.2 Corrector Methods 221
             4.6.2.1 Adams-Moulton Methods 221
             4.6.2.2 Milne-Simpson Methods 224
       4.6.3 Predictor-Corrector Methods 225
   4.7 Stability of Numerical Methods 237
   4.8 Answers and Hints 238

5. BOUNDARY VALUE PROBLEMS IN ORDINARY DIFFERENTIAL EQUATIONS AND INITIAL & BOUNDARY VALUE PROBLEMS IN PARTIAL DIFFERENTIAL EQUATIONS 241–309
   5.1 Introduction 241
   5.2 Boundary Value Problems Governed by Second Order Ordinary Differential Equations 241
   5.3 Classification of Linear Second Order Partial Differential Equations 250
   5.4 Finite Difference Methods for Laplace and Poisson Equations 252
   5.5 Finite Difference Method for Heat Conduction Equation 274
   5.6 Finite Difference Method for Wave Equation 291
   5.7 Answers and Hints 308

Bibliography 311–312
Index 313–315


1. Solution of Equations and Eigen Value Problems

1.1 SOLUTION OF ALGEBRAIC AND TRANSCENDENTAL EQUATIONS

1.1.1 Introduction

A problem of great importance in science and engineering is that of determining the roots/zeros of an equation of the form

f(x) = 0.    (1.1)

A polynomial equation of the form

f(x) = Pn(x) = a0 xⁿ + a1 xⁿ⁻¹ + a2 xⁿ⁻² + ... + an–1 x + an = 0    (1.2)

is called an algebraic equation. An equation which contains polynomials, exponential functions, logarithmic functions, trigonometric functions, etc. is called a transcendental equation. For example,

3x³ – 2x² – x – 5 = 0,  x⁴ – 3x² + 1 = 0,  x² – 3x + 1 = 0

are algebraic (polynomial) equations, and

xe^(2x) – 1 = 0,  cos x – xe^x = 0,  tan x = x

are transcendental equations. We assume that the function f(x) is continuous in the required interval. We define the following.

Root/zero A number α, for which f(α) ≡ 0, is called a root of the equation f(x) = 0, or a zero of f(x). Geometrically, a root of an equation f(x) = 0 is the value of x at which the graph of the equation y = f(x) intersects the x-axis (see Fig. 1.1).

[Fig. 1.1. Root of f(x) = 0.]

Simple root A number α is a simple root of f(x) = 0, if f(α) = 0 and f′(α) ≠ 0. Then, we can write f(x) as

f(x) = (x – α) g(x),  g(α) ≠ 0.    (1.3)

For example, since (x – 1) is a factor of f(x) = x³ + x – 2 = 0, we can write

f(x) = (x – 1)(x² + x + 2) = (x – 1) g(x),  g(1) ≠ 0.

Further, we find f(1) = 0, f′(x) = 3x² + 1, f′(1) = 4 ≠ 0. Hence, x = 1 is a simple root of f(x) = x³ + x – 2 = 0.

Multiple root A number α is a multiple root, of multiplicity m, of f(x) = 0, if f(α) = 0, f′(α) = 0, ..., f^(m–1)(α) = 0, and f^(m)(α) ≠ 0. Then, we can write f(x) as

f(x) = (x – α)^m g(x),  g(α) ≠ 0.    (1.4)

For example, consider the equation f(x) = x³ – 3x² + 4 = 0. We find f(2) = 8 – 12 + 4 = 0, f′(x) = 3x² – 6x, f′(2) = 12 – 12 = 0, f″(x) = 6x – 6, f″(2) = 6 ≠ 0. Hence, x = 2 is a multiple root of multiplicity 2 (double root) of f(x) = x³ – 3x² + 4 = 0. We can write f(x) = (x – 2)² (x + 1) = (x – 2)² g(x), g(2) = 3 ≠ 0.

In this chapter, we shall be considering the case of simple roots only.

Remark 1 A polynomial equation of degree n has exactly n roots, real or complex, simple or multiple, whereas a transcendental equation may have one root, infinite number of roots or no root. We shall derive methods for finding only the real roots.

The methods for finding the roots are classified as (i) direct methods, and (ii) iterative methods.

Direct methods These methods give the exact values of all the roots in a finite number of steps (disregarding the round-off errors). Therefore, for any direct method, we can give the total number of operations (additions, subtractions, divisions and multiplications). This number is called the operational count of the method. For example, the roots of the quadratic equation ax² + bx + c = 0, a ≠ 0, can be obtained using the method

x = [– b ± √(b² – 4ac)] / (2a).

For this method, we can give the count of the total number of operations. There are direct methods for finding all the roots of cubic and fourth degree polynomials. However, these methods are difficult to use. Direct methods for finding the roots of polynomial equations of degree greater than 4 or transcendental equations are not available in literature.
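The quadratic formula above is the simplest direct method. A minimal sketch of it as code (the function name and the restriction to real roots, i.e. b² – 4ac ≥ 0, are our own assumptions for the illustration):

```python
import math

def quadratic_roots(a, b, c):
    """Direct method: both roots of ax^2 + bx + c = 0 in a fixed number
    of operations. Real roots (b^2 - 4ac >= 0) are assumed here."""
    disc = math.sqrt(b * b - 4 * a * c)   # square root of the discriminant
    return (-b + disc) / (2 * a), (-b - disc) / (2 * a)
```

For x² – 3x + 1 = 0, one of the algebraic equations quoted above, this returns both simple roots at once, and the operational count can be read off directly from the formula.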

Iterative methods These methods are based on the idea of successive approximations. We start with one or two initial approximations to the root and obtain a sequence of approximations x0, x1, ..., xk, ..., which in the limit as k → ∞, converge to the exact root α. An iterative method for finding a root of the equation f(x) = 0 can be obtained as

xk+1 = φ(xk),  k = 0, 1, 2, ...    (1.5)

This method uses one initial approximation to the root, x0. The sequence of approximations is given by

x1 = φ(x0),  x2 = φ(x1),  x3 = φ(x2), ...

The function φ is called an iteration function and x0 is called an initial approximation. If a method uses two initial approximations x0, x1, then we can write the method as

xk+1 = φ(xk–1, xk),  k = 1, 2, ...    (1.6)

Convergence of iterative methods The sequence of iterates, {xk}, is said to converge to the exact root α, if

lim (k → ∞) xk = α,  or  lim (k → ∞) | xk – α | = 0.    (1.7)

The error of approximation at the kth iterate is defined as εk = xk – α. Then, we can write (1.7) as

lim (k → ∞) | error of approximation | = lim (k → ∞) | xk – α | = lim (k → ∞) | εk | = 0.

Remark 2 Given one or two initial approximations to the root, we require a suitable iteration function φ for a given function f(x), such that the sequence of iterates, {xk}, converge to the exact root α. Further, we also require a suitable criterion to terminate the iteration.

Criterion to terminate iteration procedure Since we cannot perform an infinite number of iterations, we need a criterion to stop the iterations. We use one or both of the following criteria:

(i) The equation f(x) = 0 is satisfied to a given accuracy, that is, f(xk) is bounded by an error tolerance ε:

| f(xk) | ≤ ε.    (1.8)

(ii) The magnitude of the difference between two successive iterates is smaller than a given accuracy or error bound ε:

| xk+1 – xk | ≤ ε.    (1.9)

Generally, we use the second criterion. In some very special problems, we require to use both the criteria. For example, if we require two decimal place accuracy, then we iterate until | xk+1 – xk | < 0.005. If we require three decimal place accuracy, then we iterate until | xk+1 – xk | < 0.0005.

In the next section, we give a method to find initial approximation(s).
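The second criterion, Eq. (1.9), can be wired into a generic iteration loop as follows. This is a sketch: the particular iteration function used below, x = (x³ + 1)/3 for the equation x³ – 3x + 1 = 0, is our own illustrative rewrite, chosen only to exercise the stopping rule.

```python
def iterate(phi, x0, eps=0.0005, maxit=100):
    """Generic one-point iteration x_{k+1} = phi(x_k), stopped by
    criterion (1.9): |x_{k+1} - x_k| <= eps.
    eps = 0.0005 corresponds to three decimal place accuracy."""
    xk = x0
    for _ in range(maxit):
        xk1 = phi(xk)
        if abs(xk1 - xk) <= eps:
            return xk1
        xk = xk1
    raise RuntimeError("iteration did not converge within maxit steps")

# x^3 - 3x + 1 = 0 rewritten as x = (x^3 + 1)/3, started near the small root
root = iterate(lambda x: (x**3 + 1) / 3, x0=0.5)
```

The loop returns as soon as two successive iterates agree to within eps, which is exactly the test used in the worked examples later in this section.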

1.1.2 Initial Approximation for an Iterative Procedure

For polynomial equations, Descartes' rule of signs gives the bound for the number of positive and negative real roots.

(i) We count the number of changes of signs in the coefficients of Pn(x) for the equation f(x) = Pn(x) = 0. The number of positive roots cannot exceed the number of changes of signs. For example, if there are four changes in signs, then the equation may have four positive roots or two positive roots or no positive root. If there are three changes in signs, then the equation may have three positive roots or definitely one positive root. (For polynomial equations with real coefficients, complex roots occur in conjugate pairs.)

(ii) We write the equation f(– x) = Pn(– x) = 0, and count the number of changes of signs in the coefficients of Pn(– x). The number of negative roots cannot exceed the number of changes of signs. Again, if there are four changes in signs, then the equation may have four negative roots or two negative roots or no negative root. If there are three changes in signs, then the equation may have three negative roots or definitely one negative root.

We use the following theorem of calculus to determine an initial approximation. It is also called the intermediate value theorem.

Theorem 1.1 If f(x) is continuous on some interval [a, b] and f(a)f(b) < 0, then the equation f(x) = 0 has at least one real root or an odd number of real roots in the interval (a, b).

This result is very simple to use. We set up a table of values of f(x) for various values of x. Studying the changes in signs in the values of f(x), we determine the intervals in which the roots lie. For example, if f(1) and f(2) are of opposite signs, then there is a root in the interval (1, 2). Let us illustrate through the following examples.

Example 1.1 Determine the maximum number of positive and negative roots and intervals of length one unit in which the real roots lie for the following equations.

(i) 8x³ – 12x² – 2x + 3 = 0    (ii) 3x³ – 2x² – x – 5 = 0.

Solution (i) Let f(x) = 8x³ – 12x² – 2x + 3 = 0. The number of changes in the signs of the coefficients (8, – 12, – 2, 3) is 2. Therefore, the equation has 2 or no positive roots. Now, f(– x) = – 8x³ – 12x² + 2x + 3. The number of changes in signs in the coefficients (– 8, – 12, 2, 3) is 1. Therefore, the equation has one negative root. We have the following table of values for f(x) (Table 1.1).

Table 1.1. Values of f(x), Example 1.1(i).

x        – 2     – 1      0      1      2      3
f(x)   – 105    – 15      3    – 3     15    105

Since f(– 1) f(0) < 0, there is a root in the interval (– 1, 0). Since f(0) f(1) < 0, there is a root in the interval (0, 1). Since f(1) f(2) < 0, there is a root in the interval (1, 2).

Therefore, there are three real roots and the roots lie in the intervals (– 1, 0), (0, 1), (1, 2).

(ii) Let f(x) = 3x³ – 2x² – x – 5 = 0. The number of changes in the signs of the coefficients (3, – 2, – 1, – 5) is 1. Therefore, the equation has one positive root. Now, f(– x) = – 3x³ – 2x² + x – 5. The number of changes in signs in the coefficients (– 3, – 2, 1, – 5) is 2. Therefore, the equation has two negative or no negative roots. We have the table of values for f(x) (Table 1.2).

Table 1.2. Values of f(x), Example 1.1(ii).

x        – 3     – 2     – 1      0      1      2      3
f(x)   – 101    – 35     – 9    – 5    – 5      9     55

From the table, we find that there is one real positive root in the interval (1, 2). The equation has no negative real root.

Example 1.2 Determine an interval of length one unit in which the negative real root, which is smallest in magnitude, lies for the equation 9x³ + 18x² – 37x – 70 = 0.

Solution Let f(x) = 9x³ + 18x² – 37x – 70 = 0. Since the smallest negative real root in magnitude is required, we form a table of values for x < 0 (Table 1.3).

Table 1.3. Values of f(x), Example 1.2.

x        – 5     – 4     – 3     – 2     – 1      0
f(x)   – 560   – 210    – 40      4    – 24   – 70

Since f(– 2) f(– 1) < 0, the negative root of smallest magnitude lies in the interval (– 2, – 1).

Example 1.3 Locate the smallest positive root of the equations

(i) xe^x = cos x.    (ii) tan x = 2x.

Solution (i) Let f(x) = xe^x – cos x = 0. We have f(0) = – 1, and f(1) = e – cos 1 = 2.718 – 0.540 = 2.178. Since f(0) f(1) < 0, the smallest positive root lies in the interval (0, 1).

(ii) Let f(x) = tan x – 2x = 0. We have f(0) = 0, f(0.1) = – 0.0997, f(0.5) = – 0.4537, f(1) = – 0.4426, f(1.1) = – 0.2352, f(1.2) = 0.1722. Since f(1.1) f(1.2) < 0, the smallest positive root lies in the interval (1.1, 1.2).

In the following sections, we present some iterative methods for finding a root of the given algebraic or transcendental equation. We know from calculus that, in the neighborhood of a point on a curve, the curve can be approximated by a straight line.
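Both tools of this subsection, the Descartes sign count and the tabulation based on Theorem 1.1, are mechanical and can be sketched in a few lines of code (the function names are our own):

```python
def sign_changes(coeffs):
    """Number of sign changes in a coefficient list. By Descartes' rule
    this bounds the number of positive real roots of P(x) = 0; apply it
    to the coefficients of P(-x) to bound the negative roots."""
    nonzero = [c for c in coeffs if c != 0]
    return sum(1 for u, v in zip(nonzero, nonzero[1:]) if u * v < 0)

def bracket_roots(f, a, b, step=1.0):
    """Tabulate f on [a, b] and return the subintervals on which f
    changes sign (Theorem 1.1, the intermediate value theorem)."""
    intervals, x = [], a
    while x < b:
        if f(x) * f(x + step) < 0:
            intervals.append((x, x + step))
        x += step
    return intervals
```

For Example 1.1(i), sign_changes([8, -12, -2, 3]) gives 2 and sign_changes([-8, -12, 2, 3]) gives 1, and bracket_roots recovers the three unit intervals found in the table of values.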

For deriving numerical methods to find a root of an equation f(x) = 0, we approximate the curve in a sufficiently small interval which contains the root, by a straight line. That is, in the neighborhood of a root, we approximate

f(x) ≈ ax + b,  a ≠ 0,

where a and b are arbitrary parameters to be determined by prescribing two appropriate conditions on f(x) and/or its derivatives. Setting ax + b = 0, we get the next approximation to the root as x = – b/a. Different ways of approximating the curve by a straight line give different methods. These methods are also called chord methods. Method of false position (also called regula-falsi method) and Newton-Raphson method fall in this category of chord methods.

1.1.3 Method of False Position

The method is also called linear interpolation method or chord method or regula-falsi method.

At the start of all iterations of the method, we require the interval in which the root lies. Let the root of the equation f(x) = 0 lie in the interval (xk–1, xk), that is, fk–1 fk < 0, where f(xk–1) = fk–1, and f(xk) = fk. Then, P(xk–1, fk–1), Q(xk, fk) are points on the curve f(x) = 0. Draw a straight line joining the points P and Q (Figs. 1.2a, b). The line PQ is taken as an approximation of the curve in the interval [xk–1, xk]. The equation of the line PQ is given by

(y – fk)/(fk–1 – fk) = (x – xk)/(xk–1 – xk).

The point of intersection of this line PQ with the x-axis is taken as the next approximation to the root. Setting y = 0, and solving for x, we get

x = xk – [(xk–1 – xk)/(fk–1 – fk)] fk = xk – [(xk – xk–1)/(fk – fk–1)] fk.

The next approximation to the root is taken as

xk+1 = xk – [(xk – xk–1)/(fk – fk–1)] fk.    (1.10)

Simplifying, we can also write the approximation as

xk+1 = [xk(fk – fk–1) – (xk – xk–1) fk]/(fk – fk–1) = (xk–1 fk – xk fk–1)/(fk – fk–1),  k = 1, 2, ...    (1.11)

Therefore, starting with the initial interval (x0, x1), in which the root lies, we compute

x2 = (x0 f1 – x1 f0)/(f1 – f0).

Now, if f(x0) f(x2) < 0, then the root lies in the interval (x0, x2). Otherwise, the root lies in the interval (x2, x1). The iteration is continued using the interval in which the root lies, until the required accuracy criterion given in Eq. (1.8) or Eq. (1.9) is satisfied.

Alternate derivation of the method

Let the root of the equation f(x) = 0 lie in the interval (xk–1, xk). Then, P(xk–1, fk–1), Q(xk, fk) are points on the curve f(x) = 0. Draw the chord joining the points P and Q (Figs. 1.2a, b). We approximate the curve in this interval by the chord, that is, f(x) ≈ ax + b. The next approximation to the root is given by x = – b/a. Since the chord passes through the points P and Q, we get

fk–1 = a xk–1 + b,  and  fk = a xk + b.

Subtracting the two equations, we get

fk – fk–1 = a(xk – xk–1),  or  a = (fk – fk–1)/(xk – xk–1).

The second equation gives b = fk – a xk. Hence, the next approximation is given by

xk+1 = – b/a = – (fk – a xk)/a = xk – fk/a = xk – [(xk – xk–1)/(fk – fk–1)] fk

which is same as the method given in Eq. (1.10).

[Figs. 1.2a, b. Method of false position.]

Remark 3 At the start of each iteration, the required root lies in an interval whose length is decreasing. Hence, the method always converges.

Remark 4 The method of false position has a disadvantage. If the root lies initially in the interval (x0, x1), then one of the end points is fixed for all iterations. For example, in Fig. 1.2a, the left end point x0 is fixed and the right end point moves towards the required root. Therefore, in actual computations, the method behaves like

xk+1 = (x0 fk – xk f0)/(fk – f0),  k = 1, 2, ...    (1.12)

In Fig. 1.2b, the right end point x1 is fixed and the left end point moves towards the required root. Therefore, in actual computations, the method behaves like

xk+1 = (xk f1 – x1 fk)/(f1 – fk),  k = 1, 2, ...    (1.13)

Remark 5 The computational cost of the method is one evaluation of the function f(x), for each iteration.

Remark 6 We would like to know why the method is also called a linear interpolation method. Graphically, a linear interpolation polynomial describes a straight line or a chord. The linear interpolation polynomial that fits the data (xk–1, fk–1), (xk, fk) is given by

f(x) = [(x – xk)/(xk–1 – xk)] fk–1 + [(x – xk–1)/(xk – xk–1)] fk.

(We shall be discussing the concept of interpolation polynomials in Chapter 2.) Setting f(x) = 0, we get

(x – xk–1) fk – (x – xk) fk–1 = 0,  or  x(fk – fk–1) = xk–1 fk – xk fk–1,

or x = xk+1 = (xk–1 fk – xk fk–1)/(fk – fk–1).

This gives the next approximation as given in Eq. (1.11).

Example 1.4 Locate the intervals which contain the positive real roots of the equation x³ – 3x + 1 = 0. Obtain these roots correct to three decimal places, using the method of false position.

Solution We form the following table of values for the function f(x).

x        0      1      2      3
f(x)     1    – 1      3     19

There is one positive real root in the interval (0, 1) and another in the interval (1, 2). There is no real root for x > 2 as f(x) > 0, for all x > 2.

First, we find the root in (0, 1). We have x0 = 0, x1 = 1, f0 = f(x0) = f(0) = 1, f1 = f(x1) = f(1) = – 1.

x2 = (x0 f1 – x1 f0)/(f1 – f0) = (0 – 1)/(– 1 – 1) = 0.5,  f(x2) = f(0.5) = – 0.375.

Since f(0) f(0.5) < 0, the root lies in the interval (0, 0.5).

x3 = (x0 f2 – x2 f0)/(f2 – f0) = (0 – 0.5(1))/(– 0.375 – 1) = 0.36364,  f(x3) = f(0.36364) = – 0.04283.

Since f(0) f(0.36364) < 0, the root lies in the interval (0, 0.36364).

x4 = (x0 f3 – x3 f0)/(f3 – f0) = (0 – 0.36364(1))/(– 0.04283 – 1) = 0.34870,  f(x4) = f(0.34870) = – 0.00370.

Since f(0) f(0.34870) < 0, the root lies in the interval (0, 0.34870).

x5 = (x0 f4 – x4 f0)/(f4 – f0) = (0 – 0.3487(1))/(– 0.00370 – 1) = 0.34741,  f(x5) = f(0.34741) = – 0.00030.

Since f(0) f(0.34741) < 0, the root lies in the interval (0, 0.34741).

x6 = (x0 f5 – x5 f0)/(f5 – f0) = (0 – 0.34741(1))/(– 0.0003 – 1) = 0.347306.

Now, | x6 – x5 | = | 0.347306 – 0.34741 | ≈ 0.0001 < 0.0005. The root has been computed correct to three decimal places. The required root can be taken as x ≈ x6 = 0.347306. We may also give the result as 0.347, even though x6 is more accurate. Note that the left end point x = 0 is fixed for all iterations.

Now, we compute the root in (1, 2). We have x0 = 1, x1 = 2, f0 = f(x0) = f(1) = – 1, f1 = f(x1) = f(2) = 3. We use the formula given in Eq. (1.13).

x2 = (x0 f1 – x1 f0)/(f1 – f0) = (3 – 2(– 1))/(3 – (– 1)) = 1.25,  f(x2) = f(1.25) = – 0.796875.

Since f(1.25) f(2) < 0, the root lies in the interval (1.25, 2).

x3 = (x2 f1 – x1 f2)/(f1 – f2) = (1.25(3) – 2(– 0.796875))/(3 – (– 0.796875)) = 1.407407,  f(x3) = f(1.407407) = – 0.434437.

Since f(1.407407) f(2) < 0, the root lies in the interval (1.407407, 2).

x4 = (x3 f1 – x1 f3)/(f1 – f3) = (1.407407(3) – 2(– 0.434437))/(3 – (– 0.434437)) = 1.482367,  f(x4) = f(1.482367) = – 0.189730.

Since f(1.482367) f(2) < 0, the root lies in the interval (1.482367, 2).

x5 = (x4 f1 – x1 f4)/(f1 – f4) = (1.482367(3) – 2(– 0.18973))/(3 – (– 0.18973)) = 1.513156,  f(x5) = f(1.513156) = – 0.074884.

Since f(1.513156) f(2) < 0, the root lies in the interval (1.513156, 2).

x6 = (x5 f1 – x1 f5)/(f1 – f5) = (1.513156(3) – 2(– 0.074884))/(3 – (– 0.074884)) = 1.525012,  f(x6) = f(1.525012) = – 0.028374.

Since f(1.525012) f(2) < 0, the root lies in the interval (1.525012, 2).

x7 = (x6 f1 – x1 f6)/(f1 – f6) = (1.525012(3) – 2(– 0.028374))/(3 – (– 0.028374)) = 1.529462,  f(x7) = f(1.529462) = – 0.010586.

Since f(1.529462) f(2) < 0, the root lies in the interval (1.529462, 2).

x8 = (x7 f1 – x1 f7)/(f1 – f7) = (1.529462(3) – 2(– 0.010586))/(3 – (– 0.010586)) = 1.531116,  f(x8) = f(1.531116) = – 0.003928.

Since f(1.531116) f(2) < 0, the root lies in the interval (1.531116, 2).

x9 = (x8 f1 – x1 f8)/(f1 – f8) = (1.531116(3) – 2(– 0.003928))/(3 – (– 0.003928)) = 1.531729,  f(x9) = f(1.531729) = – 0.001454.

Since f(1.531729) f(2) < 0, the root lies in the interval (1.531729, 2).

x10 = (x9 f1 – x1 f9)/(f1 – f9) = (1.531729(3) – 2(– 0.001454))/(3 – (– 0.001454)) = 1.531956.

Now, | x10 – x9 | = | 1.531956 – 1.531729 | ≈ 0.000227 < 0.0005. The root has been computed correct to three decimal places. The required root can be taken as x ≈ x10 = 1.531956. Note that the right end point x = 2 is fixed for all iterations.

Example 1.5 Find the root correct to two decimal places of the equation xe^x = cos x, using the method of false position.

Solution Define f(x) = cos x – xe^x = 0. There is no negative root for the equation. We have f(0) = 1, f(1) = cos 1 – e = – 2.17798. A root of the equation lies in the interval (0, 1). Let x0 = 0, x1 = 1. Using the method of false position, we obtain the following results.

x2 = (x0 f1 – x1 f0)/(f1 – f0) = (0 – 1(1))/(– 2.17798 – 1) = 0.31467,  f(x2) = f(0.31467) = 0.51986.

Since f(0.31467) f(1) < 0, the root lies in the interval (0.31467, 1).

x3 = (x2 f1 – x1 f2)/(f1 – f2) = (0.31467(– 2.17798) – 1(0.51986))/(– 2.17798 – 0.51986) = 0.44673,  f(x3) = f(0.44673) = 0.20354.

Since f(0.44673) f(1) < 0, the root lies in the interval (0.44673, 1).

x4 = (x3 f1 – x1 f3)/(f1 – f3) = (0.44673(– 2.17798) – 1(0.20354))/(– 2.17798 – 0.20354) = 0.49402,  f(x4) = f(0.49402) = 0.07079.

Since f(0.49402) f(1) < 0, the root lies in the interval (0.49402, 1).

x5 = (x4 f1 – x1 f4)/(f1 – f4) = (0.49402(– 2.17798) – 1(0.07079))/(– 2.17798 – 0.07079) = 0.50995,  f(x5) = f(0.50995) = 0.02360.

Since f(0.50995) f(1) < 0, the root lies in the interval (0.50995, 1).

x6 = (x5 f1 – x1 f5)/(f1 – f5) = (0.50995(– 2.17798) – 1(0.0236))/(– 2.17798 – 0.0236) = 0.51520,  f(x6) = f(0.51520) = 0.00776.

Since f(0.51520) f(1) < 0, the root lies in the interval (0.51520, 1).

x7 = (x6 f1 – x1 f6)/(f1 – f6) = (0.5152(– 2.17798) – 1(0.00776))/(– 2.17798 – 0.00776) = 0.51692.

Now, | x7 – x6 | = | 0.51692 – 0.51520 | ≈ 0.00172 < 0.005. The root has been computed correct to two decimal places. The required root can be taken as x ≈ x7 = 0.51692. Note that the right end point x = 1 is fixed for all iterations.
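The complete procedure of Examples 1.4 and 1.5, formula (1.11) together with the sign test that keeps the root bracketed, can be sketched as follows (the function name and the tolerance defaults are our own):

```python
import math

def false_position(f, a, b, eps=0.005, maxit=50):
    """Method of false position on an interval (a, b) with f(a) f(b) < 0.
    Each step applies Eq. (1.11) and keeps the subinterval in which the
    sign change (and hence the root) lies; stops by criterion (1.9)."""
    fa, fb = f(a), f(b)
    x_old = a
    for _ in range(maxit):
        x = (a * fb - b * fa) / (fb - fa)     # Eq. (1.11)
        if abs(x - x_old) <= eps:
            return x
        x_old = x
        fx = f(x)
        if fa * fx < 0:
            b, fb = x, fx                     # root lies in (a, x)
        else:
            a, fa = x, fx                     # root lies in (x, b)
    return x

# Example 1.5: cos x - x e^x = 0 on (0, 1), two decimal place accuracy
root = false_position(lambda x: math.cos(x) - x * math.exp(x), 0.0, 1.0)
```

Running this reproduces the behaviour noted in Remark 4: for this function the right end point stays fixed at 1 while the left end point creeps toward the root near 0.5177.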

1.1.4 Newton-Raphson Method

This method is also a chord method in which we approximate the curve near a root, by a straight line. The method is also called Newton's method or the tangent method.

Let x0 be an initial approximation to the root of f(x) = 0. Then, P(x0, f0), where f0 = f(x0), is a point on the curve (Fig. 1.3). Draw the tangent to the curve at P. We approximate the curve in the neighborhood of the root by the tangent to the curve at the point P. The point of intersection of the tangent with the x-axis is taken as the next approximation to the root. The process is repeated until the required accuracy is obtained. The equation of the tangent to the curve y = f(x) at the point P(x0, f0) is given by

y – f(x0) = (x – x0) f′(x0),

where f′(x0) is the slope of the tangent to the curve at P. Setting y = 0 and solving for x, we get

x = x0 – f(x0)/f′(x0),  f′(x0) ≠ 0.

The next approximation to the root is given by

x1 = x0 – f(x0)/f′(x0),  f′(x0) ≠ 0.

We repeat the procedure. The iteration method is defined as

xk+1 = xk – f(xk)/f′(xk),  f′(xk) ≠ 0.    (1.14)

This method is called the Newton-Raphson method or simply the Newton's method.

[Fig. 1.3. Newton-Raphson method.]

Alternate derivation of the method

Let xk be an approximation to the root of the equation f(x) = 0. Let Δx be an increment in x such that xk + Δx is the exact root, that is f(xk + Δx) ≡ 0.

Expanding in Taylor's series about the point xk, we get

f(xk) + Δx f′(xk) + [(Δx)²/2!] f″(xk) + ... = 0.    (1.15)

Neglecting the second and higher powers of Δx, we obtain

f(xk) + Δx f′(xk) ≈ 0,  or  Δx = – f(xk)/f′(xk).

Hence, we obtain the iteration method

xk+1 = xk + Δx = xk – f(xk)/f′(xk),  f′(xk) ≠ 0,  k = 0, 1, 2, ...

which is same as the method derived earlier.

Remark 7 Convergence of the Newton's method depends on the initial approximation to the root. If the approximation is far away from the exact root, the method diverges (see Example 1.6). However, if a root lies in a small interval (a, b) and x0 ∈ (a, b), then the method converges.

Remark 8 From Eq. (1.14), we observe that the method may fail when f′(x) is close to zero in the neighborhood of the root. Later, in this section, we shall give the condition for convergence of the method.

Remark 9 The computational cost of the method is one evaluation of the function f(x) and one evaluation of the derivative f′(x), for each iteration.

Example 1.6 Derive the Newton's method for finding 1/N, where N > 0. Hence, find 1/17, using the initial approximation as (i) 0.05, (ii) 0.15. Do the iterations converge?

Solution Let x = 1/N. Define f(x) = (1/x) – N. Then, f′(x) = – 1/x². Newton's method gives

xk+1 = xk – f(xk)/f′(xk) = xk – [(1/xk) – N]/[– 1/xk²] = xk + [xk – N xk²] = 2xk – N xk².

(i) With N = 17, and x0 = 0.05, we obtain the sequence of approximations

x1 = 2x0 – N x0² = 2(0.05) – 17(0.05)² = 0.0575,
x2 = 2x1 – N x1² = 2(0.0575) – 17(0.0575)² = 0.058794,
x3 = 2x2 – N x2² = 2(0.058794) – 17(0.058794)² = 0.058823,
x4 = 2x3 – N x3² = 2(0.058823) – 17(0.058823)² = 0.058823.

Since | x4 – x3 | = 0, the iterations converge to the root. The required root is 0.058823.

(ii) With N = 17, and x0 = 0.15, we obtain the sequence of approximations

x1 = 2x0 – N x0² = 2(0.15) – 17(0.15)² = – 0.0825,
x2 = 2x1 – N x1² = 2(– 0.0825) – 17(– 0.0825)² = – 0.280706,
x3 = 2x2 – N x2² = 2(– 0.280706) – 17(– 0.280706)² = – 1.900942,
x4 = 2x3 – N x3² = 2(– 1.900942) – 17(– 1.900942)² = – 65.23275.

We find that xk → – ∞ as k increases. Therefore, the iterations diverge very fast. This shows the importance of choosing a proper initial approximation.
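Example 1.6's iteration xk+1 = 2xk – N xk² is notable for containing no division, and both behaviours, convergence from 0.05 and divergence from 0.15, are easy to reproduce. A sketch (the function name, step counts and the choice to run part (ii) for only four steps are our own):

```python
def reciprocal_newton(N, x0, steps=25):
    """Newton iteration x_{k+1} = 2 x_k - N x_k^2 for 1/N (Example 1.6).
    Note that no division is performed."""
    x = x0
    for _ in range(steps):
        x = 2 * x - N * x * x
    return x

good = reciprocal_newton(17, 0.05)            # part (i): converges to 1/17
bad = reciprocal_newton(17, 0.15, steps=4)    # part (ii): already near -65
```

Once an iterate lands close enough to 1/17, it is a fixed point of the map, so the extra steps in the convergent run are harmless; the divergent run is cut off after four steps before the iterates overflow.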

Example 1.7 Derive the Newton's method for finding the qth root of a positive number N, N^(1/q), where N > 0. Hence, compute 17^(1/3) correct to four decimal places, assuming the initial approximation as x0 = 2.

Solution Let x = N^(1/q), or x^q = N, q > 0. Define f(x) = x^q – N. Then, f′(x) = q x^(q–1). Newton's method gives the iteration

xk+1 = xk – (xk^q – N)/(q xk^(q–1)) = (q xk^q – xk^q + N)/(q xk^(q–1)) = [(q – 1) xk^q + N]/(q xk^(q–1)).

For computing 17^(1/3), we have q = 3 and N = 17. Hence, the method becomes

xk+1 = (2xk³ + 17)/(3xk²),  k = 0, 1, 2, ...

With x0 = 2, we obtain the following results.

x1 = (2x0³ + 17)/(3x0²) = (2(8) + 17)/(3(4)) = 2.75,
x2 = (2x1³ + 17)/(3x1²) = (2(2.75)³ + 17)/(3(2.75)²) = 2.582645,
x3 = (2x2³ + 17)/(3x2²) = (2(2.582645)³ + 17)/(3(2.582645)²) = 2.571332,
x4 = (2x3³ + 17)/(3x3²) = (2(2.571332)³ + 17)/(3(2.571332)²) = 2.571282.

Now, | x4 – x3 | = | 2.571282 – 2.571332 | = 0.00005. We may take x ≈ 2.571282 as the required root correct to four decimal places.

Example 1.8 Perform four iterations of the Newton's method to find the smallest positive root of the equation f(x) = x³ – 5x + 1 = 0.

Solution We have f(0) = 1, f(1) = – 3. Since f(0) f(1) < 0, the smallest positive root lies in the interval (0, 1). Applying the Newton's method, we obtain

xk+1 = xk – (xk³ – 5xk + 1)/(3xk² – 5) = (2xk³ – 1)/(3xk² – 5),  k = 0, 1, 2, ...

Let x0 = 0.5.

Let x0 = 0.5. We obtain the following results.

x1 = (2x0³ – 1)/(3x0² – 5) = (2(0.5)³ – 1)/(3(0.5)² – 5) = 0.176471,
x2 = (2x1³ – 1)/(3x1² – 5) = (2(0.176471)³ – 1)/(3(0.176471)² – 5) = 0.201568,
x3 = (2x2³ – 1)/(3x2² – 5) = (2(0.201568)³ – 1)/(3(0.201568)² – 5) = 0.201640,
x4 = (2x3³ – 1)/(3x3² – 5) = (2(0.201640)³ – 1)/(3(0.201640)² – 5) = 0.201640.

Therefore, the root correct to six decimal places is x ≈ 0.201640.

Example 1.9 Using the Newton-Raphson method, solve x log10 x = 12.34 with x0 = 10. (A.U. Apr/May 2004)

Solution Define f(x) = x log10 x – 12.34. Then

f′(x) = log10 x + (1/loge 10) = log10 x + 0.434294.

Using the Newton-Raphson method, we obtain

xk+1 = xk – (xk log10 xk – 12.34)/(log10 xk + 0.434294),  k = 0, 1, 2, ...

With x0 = 10, we obtain the following results.

x1 = x0 – (x0 log10 x0 – 12.34)/(log10 x0 + 0.434294) = 10 – (10 log10 10 – 12.34)/(log10 10 + 0.434294) = 11.631465,
x2 = x1 – (x1 log10 x1 – 12.34)/(log10 x1 + 0.434294) = 11.631465 – (11.631465 log10 11.631465 – 12.34)/(log10 11.631465 + 0.434294) = 11.594870,
x3 = x2 – (x2 log10 x2 – 12.34)/(log10 x2 + 0.434294) = 11.59487 – (11.59487 log10 11.59487 – 12.34)/(log10 11.59487 + 0.434294) = 11.594854.

We have | x3 – x2 | = | 11.594854 – 11.594870 | = 0.000016. We may take x ≈ 11.594854 as the root correct to four decimal places.
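The arithmetic in Example 1.9 is mechanical enough to script with a generic Newton-Raphson driver. A hedged sketch (the helper name and tolerances are ours):

```python
import math

def newton_raphson(f, fprime, x0, eps=1e-6, max_iter=50):
    # Iterate x_{k+1} = x_k - f(x_k)/f'(x_k) until |x_{k+1} - x_k| <= eps.
    x = x0
    for _ in range(max_iter):
        x_new = x - f(x) / fprime(x)
        if abs(x_new - x) <= eps:
            return x_new
        x = x_new
    return x

f = lambda x: x * math.log10(x) - 12.34
fprime = lambda x: math.log10(x) + 0.434294   # log10(x) + log10(e)

root = newton_raphson(f, fprime, 10.0)
print(root)   # about 11.5949, matching Example 1.9
```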

1.1.5 General Iteration Method

The method is also called the iteration method or the method of successive approximations or the fixed point iteration method. The first step in this method is to rewrite the given equation f(x) = 0 in an equivalent form as

x = φ(x).    (1.16)

The function φ(x) is called the iteration function. Starting with the initial approximation x0, we compute the next approximations as x1 = φ(x0), x2 = φ(x1), x3 = φ(x2), etc. Using a suitable initial approximation x0, the iteration method is written as

xk+1 = φ(xk),  k = 0, 1, 2, ...    (1.17)

A fixed point of a function φ is a point α such that α = φ(α). Since α = φ(α), finding a root of f(x) = 0 is the same as finding a number α such that α = φ(α), that is, a fixed point of φ(x). This result is also called the fixed point theorem. The stopping criterion is the same as used earlier.

There are many ways of rewriting f(x) = 0 in the form x = φ(x). For example, consider again the equation f(x) = x³ – 5x + 1 = 0. It can be rewritten in the following forms:

x = (x³ + 1)/5,  x = (5x – 1)^(1/3),  x = √((5x – 1)/x),  ...    (1.18)

The positive root lies in the interval (0, 1). The corresponding iteration methods are

(i) xk+1 = (xk³ + 1)/5,    (1.19)
(ii) xk+1 = (5xk – 1)^(1/3),    (1.20)
(iii) xk+1 = √((5xk – 1)/xk),  k = 0, 1, 2, ...    (1.21)

With x0 = 1, we get from (1.19) the sequence of approximations

x1 = 0.4, x2 = 0.2128, x3 = 0.20193, x4 = 0.20165, x5 = 0.20164, ...

The method converges and x ≈ x5 = 0.20164 is taken as the required approximation to the root.

With x0 = 1, we get from (1.20) the sequence of approximations

x1 = 1.5874, x2 = 1.9072, x3 = 2.0437, x4 = 2.0968, ...

which does not converge to the root in (0, 1).

Remark 10 Convergence of an iteration method xk+1 = φ(xk) depends on the choice of the iteration function φ(x) and a suitable initial approximation x0. Since there are many ways of writing f(x) = 0 as x = φ(x), it is important to know whether all or at least one of these iteration methods converges to the root.
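The contrasting behaviour of the different iteration functions for x³ – 5x + 1 = 0 can be reproduced directly. A small sketch (our own helper, using x0 = 1 as in the text):

```python
def fixed_point(phi, x0, iterations):
    # x_{k+1} = phi(x_k); returns the list of iterates x1, x2, ...
    xs = []
    x = x0
    for _ in range(iterations):
        x = phi(x)
        xs.append(x)
    return xs

phi1 = lambda x: (x**3 + 1) / 5              # form (i)
phi2 = lambda x: (5 * x - 1) ** (1 / 3)      # form (ii)
phi3 = lambda x: ((5 * x - 1) / x) ** 0.5    # form (iii)

print(fixed_point(phi1, 1.0, 5))   # 0.4, 0.2128, ... -> 0.20164
print(fixed_point(phi2, 1.0, 4))   # 1.5874, 1.9072, ... leaves (0, 1)
print(fixed_point(phi3, 1.0, 4))   # 2.0, 2.1213, ... -> the root near 2.1284
```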

With x0 = 1, we get from (1.21) the sequence of approximations

x1 = 2.0, x2 = 2.1213, x3 = 2.1280, x4 = 2.1284, ...

which does not converge to the root in (0, 1).

Condition of convergence

The iteration method for finding a root of f(x) = 0 is written as

xk+1 = φ(xk),  k = 0, 1, 2, ...    (1.22)

Let α be the exact root. Then,

α = φ(α).    (1.23)

We define the error of approximation at the kth iterate as εk = xk – α, k = 0, 1, 2, ... Subtracting (1.23) from (1.22), we obtain

xk+1 – α = φ(xk) – φ(α) = (xk – α) φ′(tk),  xk < tk < α  (using the mean value theorem)

or  εk+1 = φ′(tk) εk.    (1.24)

Setting k = k – 1, we get εk = φ′(tk–1) εk–1, xk–1 < tk–1 < α. Hence,

εk+1 = φ′(tk) φ′(tk–1) εk–1.

Using (1.24) recursively, we get

εk+1 = φ′(tk) φ′(tk–1) ... φ′(t0) ε0.    (1.25)

The initial error ε0 is known and is a constant. We have

| εk+1 | = | φ′(tk) | | φ′(tk–1) | ... | φ′(t0) | | ε0 |.

Let | φ′(tk) | ≤ c, k = 0, 1, 2, ... Then,

| εk+1 | ≤ c^(k+1) | ε0 |.    (1.26)

For convergence, we require that | εk+1 | → 0 as k → ∞. This result is possible, if and only if c < 1. Therefore, the iteration method (1.22) converges, if and only if

| φ′(xk) | ≤ c < 1,  k = 0, 1, 2, ...

or  | φ′(x) | ≤ c < 1, for all x in the interval (a, b). We can test this condition using x0, the initial approximation, before the computations are done.

Let us now check whether the methods (1.19), (1.20), (1.21) converge to a root in (0, 1) of the equation f(x) = x³ – 5x + 1 = 0.

(i) We have φ(x) = (x³ + 1)/5, φ′(x) = 3x²/5, and | φ′(x) | = 3x²/5 ≤ 3/5 < 1 for all x in (0, 1). Hence, the method converges to a root in (0, 1).
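As the text notes, the condition | φ′(x) | < 1 can be tested at x0 before any iterations are run, for instance with a numerical derivative. A sketch (the step size h and helper name are our assumptions):

```python
def deriv(phi, x, h=1e-6):
    # Central-difference estimate of phi'(x).
    return (phi(x + h) - phi(x - h)) / (2 * h)

phi1 = lambda x: (x**3 + 1) / 5            # form (i):  phi'(x) = 3x^2/5
phi2 = lambda x: (5 * x - 1) ** (1 / 3)    # form (ii)

# On (0, 1), |phi1'(x)| <= 3/5 < 1, so form (i) is safe to iterate:
print(abs(deriv(phi1, 0.5)))   # 0.15 = 3(0.5)^2/5
# At x = 0.5, |phi2'(x)| = 5/(3(5x - 1)^(2/3)) exceeds 1, so form (ii) is not:
print(abs(deriv(phi2, 0.5)))   # about 1.27
```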

(ii) We have φ(x) = (5x – 1)^(1/3), φ′(x) = 5/[3(5x – 1)^(2/3)]. Now, | φ′(x) | < 1 when x is close to 1, and | φ′(x) | > 1 in the other part of the interval. Convergence is not guaranteed.

(iii) We have φ(x) = √((5x – 1)/x), φ′(x) = 1/[2x^(3/2) (5x – 1)^(1/2)]. Now, | φ′(x) | < 1 when x is close to 1, and | φ′(x) | > 1 in the other part of the interval. Convergence is not guaranteed.

Remark 11 Sometimes, it may not be possible to find a suitable iteration function φ(x) by manipulating the given function f(x). Then, we may use the following procedure. Write f(x) = 0 as x = x + α f(x) = φ(x), where α is a constant to be determined. Let x0 be an initial approximation contained in the interval in which the root lies. For convergence, we require

| φ′(x0) | = | 1 + α f′(x0) | < 1.    (1.27)

Simplifying, we find the interval in which α lies. We choose a value for α from this interval and compute the approximations. A judicious choice of a value in this interval may give faster convergence.

Example 1.10 Find the smallest positive root of the equation x³ – x – 10 = 0, using the general iteration method.

Solution We have f(x) = x³ – x – 10, f(0) = – 10, f(1) = – 10, f(2) = 8 – 2 – 10 = – 4, f(3) = 27 – 3 – 10 = 14. Since f(2) f(3) < 0, the smallest positive root lies in the interval (2, 3). Write x³ = x + 10, and x = (x + 10)^(1/3) = φ(x). We define the iteration method as

xk+1 = (xk + 10)^(1/3).

We obtain φ′(x) = 1/[3(x + 10)^(2/3)]. We find | φ′(x) | < 1 for all x in the interval (2, 3). Hence, the iteration converges. Let x0 = 2.5. We obtain the following results.

x1 = (12.5)^(1/3) = 2.3208,
x2 = (12.3208)^(1/3) = 2.3097,
x3 = (12.3097)^(1/3) = 2.3090,
x4 = (12.3090)^(1/3) = 2.3089.

Since | x4 – x3 | = | 2.3089 – 2.3090 | = 0.0001, we take the required root as x ≈ 2.3089.

Example 1.11 Find the smallest negative root in magnitude of the equation 3x⁴ + x³ + 12x + 4 = 0, using the method of successive approximations.

Solution We have f(x) = 3x⁴ + x³ + 12x + 4 = 0, f(0) = 4, f(– 1) = 3 – 1 – 12 + 4 = – 6. Since f(– 1) f(0) < 0, the smallest negative root in magnitude lies in the interval (– 1, 0).
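The iteration of Example 1.10 is a one-liner to verify. A sketch with the same starting value x0 = 2.5 (the driver and stopping tolerance are ours):

```python
def iterate(phi, x0, eps=1e-4, max_iter=100):
    # Fixed point iteration with stopping criterion |x_{k+1} - x_k| <= eps.
    x = x0
    for _ in range(max_iter):
        x_new = phi(x)
        if abs(x_new - x) <= eps:
            return x_new
        x = x_new
    return x

root = iterate(lambda x: (x + 10) ** (1 / 3), 2.5)
print(root)   # about 2.3089; root**3 - root - 10 is then near 0
```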

Write the given equation as x(3x³ + x² + 12) + 4 = 0, and

x = – 4/(3x³ + x² + 12) = φ(x).

The iteration method is written as

xk+1 = – 4/(3xk³ + xk² + 12).

We obtain

φ′(x) = 4(9x² + 2x)/(3x³ + x² + 12)².

We find | φ′(x) | < 1 for all x in the interval (– 1, 0). Hence, the iteration converges. Let x0 = – 0.25. We obtain the following results.

x1 = – 4/(3(– 0.25)³ + (– 0.25)² + 12) = – 0.33290,
x2 = – 4/(3(– 0.3329)³ + (– 0.3329)² + 12) = – 0.33333,
x3 = – 4/(3(– 0.33333)³ + (– 0.33333)² + 12) = – 0.33333.

The required approximation to the root is x ≈ – 0.33333.

Example 1.12 The equation f(x) = 3x³ + 4x² + 4x + 1 = 0 has a root in the interval (– 1, 0). Determine an iteration function φ(x), such that the sequence of iterations obtained from xk+1 = φ(xk), k = 0, 1, 2, ..., converges to the root.

Solution We illustrate the method given in Remark 11. We write the given equation as

x = x + α(3x³ + 4x² + 4x + 1) = φ(x)

where α is a constant to be determined such that

| φ′(x) | = | 1 + α f′(x) | = | 1 + α(9x² + 8x + 4) | < 1

for all x ∈ (– 1, 0). This condition is also to be satisfied at the initial approximation. Setting x0 = – 0.5, we get

| φ′(x0) | = | 1 + α f′(x0) | = | 1 + (9α/4) | < 1

or  – 1 < 1 + (9α/4) < 1,  or  – 8/9 < α < 0.

The interval for α depends on the initial approximation x0. Let us choose the value α = – 0.5. We obtain the iteration method as

xk+1 = xk – 0.5(3xk³ + 4xk² + 4xk + 1) = φ(xk).

Starting with x0 = – 0.5, we obtain the following results.

x1 = φ(x0) = – 0.5 – 0.5[3(– 0.5)³ + 4(– 0.5)² + 4(– 0.5) + 1] = – 0.3125,
x2 = φ(x1) = – 0.3125 – 0.5[3(– 0.3125)³ + 4(– 0.3125)² + 4(– 0.3125) + 1] = – 0.337036,
x3 = φ(x2) = – 0.337036 – 0.5[3(– 0.337036)³ + 4(– 0.337036)² + 4(– 0.337036) + 1] = – 0.332723,
x4 = φ(x3) = – 0.332723 – 0.5[3(– 0.332723)³ + 4(– 0.332723)² + 4(– 0.332723) + 1] = – 0.333435,
x5 = φ(x4) = – 0.333435 – 0.5[3(– 0.333435)³ + 4(– 0.333435)² + 4(– 0.333435) + 1] = – 0.333316.

The exact root is x = – 1/3. Since | x5 – x4 | = | – 0.333316 + 0.333435 | = 0.000119 < 0.0005, the result is correct to three decimal places. We can take the approximation as x ≈ x5 = – 0.333316. We can verify that | φ′(xj) | < 1 for all j.

1.1.6 Convergence of the Iteration Methods

We now study the rate at which the iteration methods converge to the exact root, if the initial approximation is sufficiently close to the desired root. Define the error of approximation at the kth iterate as εk = xk – α, k = 0, 1, 2, ...

Definition An iterative method is said to be of order p or has the rate of convergence p, if p is the largest positive real number for which there exists a finite constant C ≠ 0, such that

| εk+1 | ≤ C | εk |^p.    (1.28)

The constant C, which is independent of k, is called the asymptotic error constant and it depends on the derivatives of f(x) at x = α. Let us now obtain the orders of the methods that were derived earlier.

Method of false position We have noted earlier (see Remark 4) that if the root lies initially in the interval (x0, x1), then one of the end points is fixed for all iterations. If the left end point x0 is fixed and the right end point moves towards the required root, the method behaves like (see Fig. 1.2a)

xk+1 = (x0 fk – xk f0)/(fk – f0).

Substituting xk = εk + α, xk+1 = εk+1 + α, x0 = ε0 + α, we expand each term in Taylor's series and simplify using the fact that f(α) = 0. We obtain the error equation as

εk+1 = C ε0 εk,  where C = f″(α)/(2f′(α)).

Since ε0 is finite and fixed, the error equation becomes

| εk+1 | = | C* | | εk |,  where C* = C ε0.    (1.29)

Hence, the method of false position has order 1 or has linear rate of convergence.

Method of successive approximations or fixed point iteration method

We have xk+1 = φ(xk), and α = φ(α). Subtracting, we get

xk+1 – α = φ(xk) – φ(α) = φ(α + xk – α) – φ(α) = [φ(α) + (xk – α) φ′(α) + ...] – φ(α)

or  εk+1 = εk φ′(α) + O(εk²).

Therefore,

| εk+1 | = C | εk |,  where C = | φ′(α) |.    (1.30)

Hence, the fixed point iteration method has order 1 or has linear rate of convergence.

Newton-Raphson method The method is given by

xk+1 = xk – f(xk)/f′(xk),  f′(xk) ≠ 0.

Substituting xk = εk + α, xk+1 = εk+1 + α, we obtain

εk+1 + α = εk + α – f(εk + α)/f′(εk + α).

Expand the terms in Taylor's series. Using the fact that f(α) = 0, and canceling f′(α), we obtain

εk+1 = εk – [εk + (f″(α)/(2f′(α))) εk² + ...] [1 + (f″(α)/f′(α)) εk + ...]^(–1)

= εk – [εk + (f″(α)/(2f′(α))) εk² + ...] [1 – (f″(α)/f′(α)) εk + ...]

= εk – [εk – (f″(α)/(2f′(α))) εk² + ...]

= (f″(α)/(2f′(α))) εk² + ...

Neglecting the terms containing εk³ and higher powers of εk, we get

εk+1 = C εk²,  where C = f″(α)/(2f′(α)),    (1.31)
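The orders derived above can be observed numerically: once the errors εk are small, the ratio ln(εk+1/εk)/ln(εk/εk–1) approaches p. A sketch comparing Newton's method with fixed point iteration on x³ – 5x + 1 = 0 (the estimator and helper names are ours, and the ratios are only approximate for finitely many iterates):

```python
import math

def g(x):
    # One Newton step for f(x) = x^3 - 5x + 1.
    return (2 * x**3 - 1) / (3 * x**2 - 5)

# Reference root alpha, driven to machine precision from a good start.
alpha = 0.2
for _ in range(10):
    alpha = g(alpha)

def order_estimate(xs):
    # p ~ ln(e_{k+1}/e_k) / ln(e_k/e_{k-1}), with e_k = |x_k - alpha|.
    e = [abs(x - alpha) for x in xs]
    return [math.log(e[k + 1] / e[k]) / math.log(e[k] / e[k - 1])
            for k in range(1, len(e) - 1)]

newton = [0.5]
for _ in range(3):
    newton.append(g(newton[-1]))

fixed = [0.5]
for _ in range(3):
    fixed.append((fixed[-1]**3 + 1) / 5)

print(order_estimate(newton)[-1])   # near 2: quadratic convergence
print(order_estimate(fixed)[-1])    # near 1: linear convergence
```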

and  | εk+1 | = | C | | εk |².

Hence, Newton's method is of order 2 or has quadratic rate of convergence.

Remark 12 What is the importance of defining the order or rate of convergence of a method? Suppose that we are using Newton's method for computing a root of f(x) = 0. Let us assume that at a particular stage of iteration, the error in magnitude in computing the root is 10⁻¹ = 0.1. We observe from (1.31), that in the next iteration, the error behaves like C(0.1)² = C(10⁻²). That is, we may possibly get an accuracy of two decimal places. Because of the quadratic convergence of the method, we may possibly get an accuracy of four decimal places in the next iteration. However, it also depends on the value of C. From this discussion, we conclude that both fixed point iteration and regula-falsi methods converge slowly as they have only linear rate of convergence. Further, Newton's method converges at least twice as fast as the fixed point iteration and regula-falsi methods.

Remark 13 When does the Newton-Raphson method fail? (i) The method may fail when the initial approximation x0 is far away from the exact root α (see Example 1.6). However, if the root lies in a small interval (a, b) and x0 ∈ (a, b), then the method converges. (ii) From Eq. (1.31), we note that if f′(α) ≈ 0, and f″(x) is finite, then C → ∞ and the method may fail. That is, in this case, the graph of y = f(x) is almost parallel to the x-axis at the root α.

Remark 14 Let us have a re-look at the error equation. We have defined the error of approximation at the kth iterate as εk = xk – α, k = 0, 1, 2, ... From xk+1 = φ(xk), k = 0, 1, 2, ..., and α = φ(α), we obtain (see Eq. (1.24))

xk+1 – α = φ(xk) – φ(α) = φ(α + εk) – φ(α) = [φ(α) + εk φ′(α) + (1/2) εk² φ″(α) + ...] – φ(α)

or  εk+1 = a1 εk + a2 εk² + ...    (1.32)

where a1 = φ′(α), a2 = (1/2) φ″(α). The exact root satisfies the equation α = φ(α). If a1 ≠ 0, that is, φ′(α) ≠ 0, then the method is of order 1 or has linear convergence. For the general iteration method, which is of first order, we have derived that the condition of convergence is | φ′(x) | < 1 for all x in the interval (a, b) in which the root lies. Note that in this method, | φ′(x) | ≠ 0 for all x in the neighborhood of the root α. If a1 = φ′(α) = 0, and a2 = (1/2) φ″(α) ≠ 0, then from Eq. (1.32), the method is of order 2 or has quadratic convergence.

Let us verify this result for the Newton-Raphson method. For the Newton-Raphson method

xk+1 = xk – f(xk)/f′(xk),  we have  φ(x) = x – f(x)/f′(x).

Then,

φ′(x) = 1 – {[f′(x)]² – f(x) f″(x)}/[f′(x)]² = f(x) f″(x)/[f′(x)]²

and  φ′(α) = f(α) f″(α)/[f′(α)]² = 0,

since f(α) = 0 and f′(α) ≠ 0 (α is a simple root). Now,

φ″(x) = (1/[f′(x)]³) [f′(x) {f′(x) f″(x) + f(x) f′″(x)} – 2 f(x) {f″(x)}²]

and  φ″(α) = f″(α)/f′(α) ≠ 0.

Therefore, a2 = (1/2) φ″(α) ≠ 0 and the second order convergence of the Newton's method is verified.

REVIEW QUESTIONS

1. Define a (i) root, (ii) simple root and (iii) multiple root of an algebraic equation f(x) = 0.

Solution (i) A number α, such that f(α) ≡ 0, is called a root of f(x) = 0. (ii) Let α be a root of f(x) = 0. If f(α) ≡ 0 and f′(α) ≠ 0, then α is said to be a simple root. Then, we can write f(x) as f(x) = (x – α) g(x), g(α) ≠ 0. (iii) Let α be a root of f(x) = 0. If f(α) = 0, f′(α) = 0, ..., f^(m–1)(α) = 0, and f^(m)(α) ≠ 0, then α is said to be a multiple root of multiplicity m. Then, we can write f(x) as f(x) = (x – α)^m g(x), g(α) ≠ 0.

2. State the intermediate value theorem.

Solution If f(x) is continuous on some interval [a, b] and f(a) f(b) < 0, then the equation f(x) = 0 has at least one real root or an odd number of real roots in the interval (a, b).

3. How can we find an initial approximation to the root of f(x) = 0?

Solution Using the intermediate value theorem, we find an interval (a, b) which contains the root of the equation f(x) = 0. This implies that f(a) f(b) < 0. Any point in this interval (including the end points) can be taken as an initial approximation to the root of f(x) = 0.

4. What is the Descartes' rule of signs?

Solution Let f(x) = Pn(x) = 0 be a polynomial equation. We count the number of changes of signs in the coefficients of Pn(x). The number of positive roots cannot exceed the number of changes of signs in the coefficients of Pn(x). Now, we write the equation f(– x) = Pn(– x) = 0, and count the number of changes of signs in the coefficients of Pn(– x). The number of negative roots cannot exceed the number of changes of signs in the coefficients of this equation.

5. Define convergence of an iterative method.

Solution Using any iteration method, we obtain a sequence of iterates (approximations to the root of f(x) = 0), x1, x2, ..., xk, ... If

lim (k→∞) xk = α,  or  lim (k→∞) | xk – α | = 0,

where α is the exact root, then the method is said to be convergent.

6. What are the criteria used to terminate an iterative procedure?

Solution Let ε be the prescribed error tolerance. We terminate the iterations when either of the following criteria is satisfied:

(i) | f(xk) | ≤ ε,  (ii) | xk+1 – xk | ≤ ε.

Sometimes, we may use both the criteria.

7. Define the fixed point iteration method to obtain a root of f(x) = 0. When does the method converge?

Solution Let a root of f(x) = 0 lie in the interval (a, b). Let x0 be an initial approximation to the root. We write f(x) = 0 in an equivalent form as x = φ(x), and define the fixed point iteration method as xk+1 = φ(xk), k = 0, 1, 2, ... Starting with x0, we obtain a sequence of approximations x1, x2, ..., xk, ..., such that in the limit as k → ∞, xk → α. The method converges when | φ′(x) | < 1, for all x in the interval (a, b). We normally check this condition at x0.

8. Write the method of false position to obtain a root of f(x) = 0. What is the computational cost of the method?

Solution Let a root of f(x) = 0 lie in the interval (a, b). Let x0, x1 be two initial approximations to the root in this interval. The method of false position is defined by

xk+1 = (xk–1 fk – xk fk–1)/(fk – fk–1),  k = 1, 2, ...

The computational cost of the method is one evaluation of f(x) per iteration.

9. What is the disadvantage of the method of false position?

Solution If the root lies initially in the interval (x0, x1), then one of the end points is fixed for all iterations. For example, in Fig. 1.2a, the left end point x0 is fixed and the right end point moves towards the required root. Therefore, in actual computations, the method behaves like

xk+1 = (x0 fk – xk f0)/(fk – f0).

In Fig. 1.2b, the right end point x1 is fixed and the left end point moves towards the required root. Therefore, in actual computations, the method behaves like

xk+1 = (xk f1 – x1 fk)/(f1 – fk).

10. Write the Newton-Raphson method to obtain a root of f(x) = 0. What is the computational cost of the method?

Solution Let a root of f(x) = 0 lie in the interval (a, b). Let x0 be an initial approximation to the root in this interval. The Newton-Raphson method to find this root is defined by
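The method of false position described in review questions 8 and 9 can be sketched in code. A hedged version that keeps the root bracketed (regula-falsi); the function name and stopping rule are ours:

```python
def false_position(f, a, b, eps=1e-6, max_iter=100):
    # Requires f(a) * f(b) < 0. Each iteration costs one evaluation of f.
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "root must be bracketed"
    for _ in range(max_iter):
        c = (a * fb - b * fa) / (fb - fa)
        fc = f(c)
        if abs(fc) <= eps:
            return c
        if fa * fc < 0:
            b, fb = c, fc    # root in (a, c): the right end point moves
        else:
            a, fa = c, fc    # root in (c, b): the left end point moves
    return c

# Smallest positive root of x^3 - 5x + 1 = 0 lies in (0, 1).
print(false_position(lambda x: x**3 - 5 * x + 1, 0, 1))   # about 0.20164
```

Note how, for this f, one end point indeed stays fixed while the other moves toward the root, which is the linear-convergence behaviour discussed above.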

xk+1 = xk – f(xk)/f′(xk),  f′(xk) ≠ 0,  k = 0, 1, 2, ...

The computational cost of the method is one evaluation of f(x) and one evaluation of the derivative f′(x) per iteration.

11. Define the order (rate) of convergence of an iterative method for finding the root of an equation f(x) = 0.

Solution Let α be the exact root of f(x) = 0. Define the error of approximation at the kth iterate as εk = xk – α, k = 0, 1, 2, ... An iterative method is said to be of order p or has the rate of convergence p, if p is the largest positive real number for which there exists a finite constant C ≠ 0, such that | εk+1 | ≤ C | εk |^p. The constant C, which is independent of k, is called the asymptotic error constant and it depends on the derivatives of f(x) at x = α.

12. What is the rate of convergence of the following methods: (i) Method of false position, (ii) Newton-Raphson method, (iii) Fixed point iteration method?

Solution (i) One. (ii) Two. (iii) One.

EXERCISE 1.1

In the following problems, find the root as specified using the regula-falsi method (method of false position).

1. Find the positive root of x³ = 2x + 5. (Do only four iterations.) (A.U. Nov./Dec. 2006)
2. Find an approximate root of x log10 x – 1.2 = 0. (A.U. Nov./Dec. 2004)
3. Find the root between 0 and 1 of x³ = 6x – 4, correct to two decimal places. (A.U. Nov./Dec. 2003)
4. Find the real root of the equation 3x = cos x + 1, correct to three decimal places.
5. Find the smallest positive root of x – e^(–x) = 0, correct to three decimal places.
6. Solve the equation x tan x = – 1, starting with a = 2.5 and b = 3, correct to three decimal places.

In the following problems, find the root as specified using the Newton-Raphson method.

7. Find the smallest positive root of x⁴ – x = 10, correct to three decimal places.
8. Find the root of x = 2 sin x, correct to three decimal places.
9. Find the root of xe^x = 3, near 1, correct to two decimal places.
10. Find the smallest positive root of x⁴ – x – 10 = 0, correct to three decimal places.
11. Find a root of x log10 x – 1.2 = 0, correct to three decimal places.
12. (i) Write an iteration formula for finding √N, where N is a real number. (ii) Hence, evaluate √142, correct to three decimal places. (A.U. Nov./Dec. 2006)

In the following problems, find the root as specified using the iteration method/method of successive approximations/fixed point iteration method.

13. Find the smallest positive root of x = e^(–x), correct to three decimal places.
14. Find the smallest positive root of x² – 5x + 1 = 0, correct to four decimal places.
15. Find the smallest positive root of x⁵ – 64x + 30 = 0, correct to four decimal places.
16. Find the smallest negative root in magnitude of 3x³ – x + 1 = 0, correct to four decimal places.
17. Find the real root of the equation cos x = 3x – 1, correct to three decimal places. (A.U. Nov./Dec. 2006)
18. Find the approximate root of xe^x = 3, correct to three decimal places.
19. Find the root of the equation sin x = 1 + x³, which lies in the interval (– 2, – 1), correct to three decimal places.
20. (i) Write an iteration formula for finding the value of 1/N, where N is a real number. (ii) Hence, evaluate 1/26, correct to four decimal places.
21. The equation x² + ax + b = 0 has two real roots α and β. Show that the iteration method (i) xk+1 = – (axk + b)/xk is convergent near x = α, if | α | > | β |; (ii) xk+1 = – b/(xk + a) is convergent near x = α, if | α | < | β |.

1.2 LINEAR SYSTEM OF ALGEBRAIC EQUATIONS

1.2.1 Introduction

Consider a system of n linear algebraic equations in n unknowns

a11 x1 + a12 x2 + ... + a1n xn = b1
a21 x1 + a22 x2 + ... + a2n xn = b2
...
an1 x1 + an2 x2 + ... + ann xn = bn

where aij, i = 1, 2, ..., n, j = 1, 2, ..., n, are the known coefficients, bi, i = 1, 2, ..., n, are the known right hand side values and xi, i = 1, 2, ..., n, are the unknowns to be determined. In matrix notation we write the system as

Ax = b    (1.33)

where

A = [a11 a12 ... a1n; a21 a22 ... a2n; ...; an1 an2 ... ann],  x = [x1, x2, ..., xn]^T,  b = [b1, b2, ..., bn]^T.

The matrix [A | b], obtained by appending the column b to the matrix A, is called the augmented matrix. That is

[A | b] = [a11 a12 ... a1n b1; a21 a22 ... a2n b2; ...; an1 an2 ... ann bn].

We define the following.

(i) The system of equations (1.33) is consistent (has at least one solution), if rank (A) = rank [A | b] = r. If r = n, then the system has a unique solution. If r < n, then the system has an (n – r) parameter family of infinite number of solutions.

(ii) The system of equations (1.33) is inconsistent (has no solution) if rank (A) ≠ rank [A | b].

We assume that the given system is consistent. The methods of solution of the linear algebraic system of equations (1.33) may be classified as direct and iterative methods.

(a) Direct methods produce the exact solution after a finite number of steps (disregarding the round-off errors). In these methods, we can determine the total number of operations (additions, subtractions, divisions and multiplications). This number is called the operational count of the method.

(b) Iterative methods are based on the idea of successive approximations. We start with an initial approximation to the solution vector x = x0, and obtain a sequence of approximate vectors x0, x1, ..., xk, ..., which in the limit as k → ∞, converge to the exact solution vector x.

1.2.2 Direct Methods

If the system of equations has some special forms, then the solution is obtained directly. We consider two such special forms.

(a) Let A be a diagonal matrix, A = D. That is, we consider the system of equations Dx = b as

a11 x1 = b1,  a22 x2 = b2,  ...,  ann xn = bn.    (1.34)

This system is called a diagonal system of equations. Solving directly, we obtain

xi = bi/aii,  aii ≠ 0,  i = 1, 2, ..., n.    (1.35)

(b) Let A be an upper triangular matrix, A = U. That is, we consider the system of equations Ux = b as
then the solution is obtained directly. then the system has (n – r) parameter family of infinite number of solutions. then the system has unique solution.. Now. 11 21 n1 a12 a22 … an 2 … a1n … a2n … … … ann b1 b2 … bn OP PP PQ (i) The system of equations (1. which in the limit as k → ∞. we derive some direct methods. we consider the system of equations Ux = b as . In these methods. = bn–1 (1. If r < n. n. We start with an initial approximation to the solution vector x = x0. If r = n. if rank (A) = rank [A | b] = r.. That is. xk.. i = 1. aii ≠ 0. we consider the system of equations Dx = b as a11x1 a22x2 . and obtain a sequence of approximate vectors x0.. (ii) The system of equations (1. The methods of solution of the linear algebraic system of equations (1. (a) Direct methods produce the exact solution after a finite number of steps (disregarding the round-off errors).33) may be classified as direct and iterative methods..2...35) (b) Let A be an upper triangular matrix.33) is inconsistent (has no solution) if rank (A) ≠ rank [A | b]. .. That is. subtractions. . converge to the exact solution vector x.33) is consistent (has at least one solution)..2 Direct Methods If the system of equations has some special forms. divisions and multiplications). (b) Iterative methods are based on the idea of successive approximations.. A = U. Solving directly. . This number is called the operational count of the method. . aii (1. A = D.. we obtain xi = bi . We assume that the given system is consistent.. x1.34) annxn = bn This system is called a diagonal system of equations.. an–1.26 NUMERICAL METHODS LMa a [A|b] = M MMNa… We define the following.. we can determine the total number of operations (additions. (a) Let A be a diagonal matrix. n – 1 xn– 1 = b1 = b2 . 2... . 1.. We consider two such special forms.

when the given system of equations is one of the above two forms. x1 F Gb − ∑ a G H = 1 j=2 a11 I JJ F K = GGb − ∑ a H n 1 j=2 1. the solution is obtained directly. 1.. then we usually denote the operation as Ri ↔ Rj... The matrix B obtained after the elementary row operations is said to be row equivalent with A. These row operations change the form of A. Before we derive some direct methods... (iii) Adding/subtracting a scalar multiple of any row to any other row..36) an–1.. . If all the elements of the jth row are multiplied by a scalar p and added to the corresponding elements of the ith row. In the context of the solution of the system of algebraic equations.SOLUTION OF EQUATIONS AND EIGEN VALUE PROBLEMS 27 a11x1 + a12x2 + . we get xn = bn/ann. Solving for the unknowns in the order xn. . . .37) The unknowns are obtained by back substitution and this procedure is called the back substitution method.. j x j . nxn)/an–1. we define elementary row operations that can be performed on the rows of a matrix. n . n–1.. we shall be using only the elementary row operations.. .... the solution of the new system is identical with the solution of the original system... we usually denote this operation as Ri ← Ri + pRj. Elementary row transformations (operations) The following operations on the rows of a matrix A are called the elementary row transformations (operations). (i) Interchange of any two rows. xn–1... then we usually denote this operation as pRi. but do not change the row-rank of A.. + a1n xn = b1 + a2n xn = b2 . If we interchange the ith row with the jth row. . Therefore. The above elementary operations performed on the columns of A (column C in place of row R) are called elementary column transformations (operations). n xn = bn–1 This system is called an upper triangular system of equations. xn–1 = (bn–1 – an–1... n–1 xn–1 + an–1... The elements of the jth row remain unchanged and the elements of the ith row get changed. then. .. 
Note the order in which the operation Ri ← Ri + pRj is written. The elements of the jth row remain unchanged and the elements of the ith row get changed.

In this section, we derive two direct methods for the solution of the given system of equations, namely, the Gauss elimination method and the Gauss-Jordan method.

1.2.2.1 Gauss Elimination Method

The method is based on the idea of reducing the given system of equations Ax = b, to an upper triangular system of equations Ux = z, using elementary row operations. We know that these two systems are equivalent. That is, the solutions of both the systems are identical. This reduced system Ux = z, is then solved by the back substitution method to obtain the solution vector x.

We illustrate the method using the 3 × 3 system

a11 x1 + a12 x2 + a13 x3 = b1
a21 x1 + a22 x2 + a23 x3 = b2
a31 x1 + a32 x2 + a33 x3 = b3    (1.38)

We write the augmented matrix [A | b] and reduce it to the following form

[A | b] → [U | z].

The augmented matrix of the system (1.38) is

[ a11  a12  a13 | b1 ]
[ a21  a22  a23 | b2 ]
[ a31  a32  a33 | b3 ]    (1.39)

First stage of elimination We assume a11 ≠ 0. This element a11 in the 1 × 1 position is called the first pivot. We use this pivot to reduce all the elements below this pivot in the first column as zeros. Multiply the first row in (1.39) by a21/a11 and a31/a11 respectively and subtract from the second and third rows. That is, we are performing the elementary row operations R2 – (a21/a11)R1 and R3 – (a31/a11)R1 respectively. We obtain the new augmented matrix as

[ a11  a12      a13     | b1      ]
[ 0    a22^(1)  a23^(1) | b2^(1)  ]
[ 0    a32^(1)  a33^(1) | b3^(1)  ]    (1.40)

where

a22^(1) = a22 – (a21/a11) a12,  a23^(1) = a23 – (a21/a11) a13,  b2^(1) = b2 – (a21/a11) b1,
a32^(1) = a32 – (a31/a11) a12,  a33^(1) = a33 – (a31/a11) a13,  b3^(1) = b3 – (a31/a11) b1.

Second stage of elimination We assume a22^(1) ≠ 0. This element a22^(1) in the 2 × 2 position is called the second pivot.

We use this pivot to reduce the element below this pivot in the second column as zero. Multiply the second row in (1.40) by a32^(1)/a22^(1) and subtract from the third row. That is, we are performing the elementary row operation R3 – (a32^(1)/a22^(1))R2. We obtain the new augmented matrix as

[ a11  a12      a13     | b1      ]
[ 0    a22^(1)  a23^(1) | b2^(1)  ]
[ 0    0        a33^(2) | b3^(2)  ]    (1.41)

where

a33^(2) = a33^(1) – (a32^(1)/a22^(1)) a23^(1),  b3^(2) = b3^(1) – (a32^(1)/a22^(1)) b2^(1).

The element a33^(2) ≠ 0 is called the third pivot. This system is in the required upper triangular form [U | z]. The solution vector x is now obtained by back substitution.

From the third row, we get x3 = b3^(2)/a33^(2).
From the second row, we get x2 = (b2^(1) – a23^(1) x3)/a22^(1).
From the first row, we get x1 = (b1 – a12 x2 – a13 x3)/a11.

In general, using a pivot, all the elements below that pivot in that column are made zeros. This procedure is continued until the upper triangular system is obtained.

Remark 15 When does the Gauss elimination method as described above fail? It fails when any one of the pivots is zero or it is a very small number, since division by zero is not defined. If a pivot is zero, then division by it gives over flow error. If a pivot is a very small number, then division by it introduces large round-off errors and the solution may contain large errors. For example, we may have the system

2x2 + 5x3 = 7
7x1 + x2 – 2x3 = 6
2x1 + 3x2 + 8x3 = 13

in which the first pivot is zero.

Pivoting Procedures How do we avoid computational errors in Gauss elimination? To avoid computational errors, we follow the procedure of partial pivoting. In the first stage of elimination, the first column of the augmented matrix is searched for the largest element in magnitude and brought as the first pivot by interchanging the first row of the augmented matrix (first equation) with the row (equation) having the largest element in magnitude. In the second stage of elimination, the second column is searched for the largest element in magnitude among the n – 1 elements leaving the first element, and this element is brought as the second pivot by interchanging the second row of the augmented matrix with the later row having the largest element in magnitude. This procedure is continued until the upper triangular system is obtained. Alternately, we may also make the pivot as 1, by dividing that particular row by the pivot.
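The two elimination stages (1.39) → (1.41) generalize to any n. A sketch without pivoting, so every pivot encountered is assumed nonzero (the function name is ours), followed by back substitution:

```python
def gauss_eliminate(A, b):
    # Reduce [A | b] to upper triangular [U | z] with the row operations
    # R_i <- R_i - (a_ik / a_kk) R_k, then back substitute.
    # No pivoting: each pivot a_kk must be nonzero.
    n = len(b)
    A = [row[:] for row in A]   # work on copies
    b = b[:]
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

A = [[2.0, 1.0, 1.0],
     [4.0, 3.0, 3.0],
     [8.0, 7.0, 9.0]]
b = [4.0, 10.0, 24.0]
print(gauss_eliminate(A, b))   # [1.0, 1.0, 1.0]
```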
In general. using a pivot. partial pivoting is done after every stage of elimination. For example. we search the entire matrix A in the augmented matrix for the largest element in magnitude and bring it as the first pivot. Pivoting Procedures How do we avoid computational errors in Gauss elimination? To avoid computational errors. we follow the procedure of partial pivoting. In the first stage of elimination. (1) (1) (1) x2 = (b2 − a 23 x 3 )/ a 22 . If a pivot is zero. That is. then division by it gives over flow error. In this procedure. There is another procedure called complete pivoting. by dividing that particular row by the pivot. x1 = (b1 – a12x2 – a13 x3)/a11.

Remark 16 The Gauss elimination method is a direct method. Therefore, it is possible to count the total number of operations, that is, additions, subtractions, divisions and multiplications. Without going into details, we mention that the total number of divisions and multiplications (division and multiplication take the same amount of computer time) is n(n² + 3n – 1)/3. The total number of additions and subtractions (addition and subtraction take the same amount of computer time) is n(n – 1)(2n + 5)/6.

Remark 17 When the system of algebraic equations is large, how do we conclude whether it is consistent or not, using the Gauss elimination method? A way of determining the consistency is from the form of the reduced system (1.41). We know that if the system is inconsistent then rank (A) ≠ rank [A|b]. By checking the elements of the last rows, a conclusion can be drawn about the consistency or inconsistency.

Suppose that in (1.41), a33^(2) ≠ 0 and b3^(2) ≠ 0. Then, rank (A) = rank [A|b] = 3. The system is consistent and has a unique solution.

Suppose that we obtain the reduced system as

[ a11   a12       a13      |  b1      ]
[ 0     a22^(1)   a23^(1)  |  b2^(1)  ]
[ 0     0         0        |  b3^(2)  ],  b3^(2) ≠ 0.

Then, rank (A) = 2, rank [A|b] = 3 and rank (A) ≠ rank [A|b]. Therefore, the system is inconsistent and has no solution.

Suppose that we obtain the reduced system as

[ a11   a12       a13      |  b1      ]
[ 0     a22^(1)   a23^(1)  |  b2^(1)  ]
[ 0     0         0        |  0       ].

Then, rank (A) = rank [A|b] = 2 < 3. Therefore, the system has a (3 – 2) = 1 parameter family of infinite number of solutions.

Example 1.13 Solve the system of equations

x1 + 10x2 – x3 = 3
2x1 + 3x2 + 20x3 = 7
10x1 – x2 + 2x3 = 4

using the Gauss elimination method with partial pivoting.

Solution We have the augmented matrix as

[ 1    10   –1  |  3 ]
[ 2     3   20  |  7 ]
[ 10   –1    2  |  4 ]
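The rank criterion of Remark 17 can also be checked numerically. The following is an illustrative sketch (the helper name consistency is ours), using NumPy's matrix_rank routine:

```python
# Consistency test of Remark 17: compare rank(A) with rank([A|b]).
import numpy as np

def consistency(A, b):
    """Return 'unique', 'infinite' or 'inconsistent' for Ax = b."""
    A = np.asarray(A, dtype=float)
    aug = np.hstack([A, np.asarray(b, dtype=float).reshape(-1, 1)])
    r = np.linalg.matrix_rank(A)
    r_aug = np.linalg.matrix_rank(aug)
    n = A.shape[1]
    if r != r_aug:
        return "inconsistent"                 # rank(A) != rank[A|b]: no solution
    return "unique" if r == n else "infinite"  # (n - r)-parameter family
```

For instance, the system of Example 1.16 below (x1 + 10x2 – x3 = 3, 2x1 + 3x2 + 20x3 = 7, 9x1 + 22x2 + 79x3 = 45) is reported as inconsistent, in agreement with the hand computation.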

We perform the following elementary row transformations and do the eliminations.

R1 ↔ R3:
[ 10   –1    2  |  4 ]
[ 2     3   20  |  7 ]
[ 1    10   –1  |  3 ]

R2 – (R1/5), R3 – (R1/10):
[ 10   –1      2    |  4   ]
[ 0     3.2   19.6  |  6.2 ]
[ 0    10.1   –1.2  |  2.6 ]

R2 ↔ R3:
[ 10   –1      2    |  4   ]
[ 0    10.1   –1.2  |  2.6 ]
[ 0     3.2   19.6  |  6.2 ]

R3 – (3.2/10.1)R2:
[ 10   –1      2        |  4       ]
[ 0    10.1   –1.2      |  2.6     ]
[ 0     0     19.98020  |  5.37624 ]

Back substitution gives the solution.

Third equation gives x3 = 5.37624/19.98020 = 0.26908.
Second equation gives x2 = (1/10.1)(2.6 + 1.2x3) = (1/10.1)(2.6 + 1.2(0.26908)) = 0.28940.
First equation gives x1 = (1/10)(4 + x2 – 2x3) = (1/10)(4 + 0.28940 – 2(0.26908)) = 0.37512.

Example 1.14 Solve the system of equations

2x1 + x2 + x3 – 2x4 = – 10
4x1 + 2x3 + x4 = 8
3x1 + 2x2 + 2x3 = 7
x1 + 3x2 + 2x3 – x4 = – 5

using the Gauss elimination method with partial pivoting.

Solution The augmented matrix is given by

[ 2   1   1   –2  |  –10 ]
[ 4   0   2    1  |    8 ]
[ 3   2   2    0  |    7 ]
[ 1   3   2   –1  |   –5 ]

We perform the following elementary row transformations and do the eliminations.

R1 ↔ R2:
[ 4   0   2    1  |    8 ]
[ 2   1   1   –2  |  –10 ]
[ 3   2   2    0  |    7 ]
[ 1   3   2   –1  |   –5 ]

R2 – (1/2)R1, R3 – (3/4)R1, R4 – (1/4)R1:
[ 4   0    2     1    |    8 ]
[ 0   1    0    –5/2  |  –14 ]
[ 0   2   1/2   –3/4  |    1 ]
[ 0   3   3/2   –5/4  |   –7 ]

R2 ↔ R4:
[ 4   0    2     1    |    8 ]
[ 0   3   3/2   –5/4  |   –7 ]
[ 0   2   1/2   –3/4  |    1 ]
[ 0   1    0    –5/2  |  –14 ]

R3 – (2/3)R2, R4 – (1/3)R2:
[ 4   0     2       1     |     8  ]
[ 0   3    3/2    –5/4    |    –7  ]
[ 0   0   –1/2    1/12    |  17/3  ]
[ 0   0   –1/2   –25/12   | –35/3  ]

R4 – R3:
[ 4   0     2       1     |     8  ]
[ 0   3    3/2    –5/4    |    –7  ]
[ 0   0   –1/2    1/12    |  17/3  ]
[ 0   0     0    –13/6    | –52/3  ]

Using back substitution, we obtain

x4 = (– 52/3)/(– 13/6) = 8,
x3 = – 2[17/3 – (1/12)x4] = – 2[17/3 – (1/12)(8)] = – 10,
x2 = (1/3)[– 7 – (3/2)x3 + (5/4)x4] = (1/3)[– 7 – (3/2)(– 10) + (5/4)(8)] = 6,
x1 = (1/4)[8 – 2x3 – x4] = (1/4)[8 – 2(– 10) – 8] = 5.

Example 1.15 Solve the system of equations

3x1 + 3x2 + 4x3 = 20
2x1 + x2 + 3x3 = 13
x1 + x2 + 3x3 = 6

using the Gauss elimination method.

Solution Let us solve this problem by making the pivots as 1. The augmented matrix is given by

[ 3   3   4  |  20 ]
[ 2   1   3  |  13 ]
[ 1   1   3  |   6 ]

We perform the following elementary row transformations and do the eliminations.

R1/3:
[ 1   1   4/3  |  20/3 ]
[ 2   1    3   |   13  ]
[ 1   1    3   |    6  ]

R2 – 2R1, R3 – R1:
[ 1    1   4/3  |  20/3 ]
[ 0   –1   1/3  |  –1/3 ]
[ 0    0   5/3  |  –2/3 ]

Back substitution gives the solution as

x3 = (– 2/3)(3/5) = – 2/5,
x2 = (1/3)x3 + 1/3 = (1/3)(– 2/5) + 1/3 = 1/5,
x1 = 20/3 – x2 – (4/3)x3 = 20/3 – 1/5 + 8/15 = 105/15 = 7.

Example 1.16 Test the consistency of the following system of equations

x1 + 10x2 – x3 = 3
2x1 + 3x2 + 20x3 = 7
9x1 + 22x2 + 79x3 = 45

using the Gauss elimination method.

Solution We have the augmented matrix as

[ 1   10   –1  |   3 ]
[ 2    3   20  |   7 ]
[ 9   22   79  |  45 ]

We perform the following elementary row transformations and do the eliminations.

R2 – 2R1, R3 – 9R1:
[ 1    10   –1  |   3 ]
[ 0   –17   22  |   1 ]
[ 0   –68   88  |  18 ]

R3 – 4R2:
[ 1    10   –1  |   3 ]
[ 0   –17   22  |   1 ]
[ 0     0    0  |  14 ]

Now, rank (A) = 2, and rank [A|b] = 3. Therefore, the system is inconsistent and has no solution.

1.2.2.2 Gauss-Jordan Method

The method is based on the idea of reducing the given system of equations Ax = b to a diagonal system of equations Ix = d, where I is the identity matrix, using elementary row operations. We know that the solutions of both the systems are identical. This reduced system gives the solution vector x. This reduction is equivalent to finding the solution as x = A^(-1)b,

[A|b] → [I|X].

In this case, after the eliminations are completed, we obtain the augmented matrix for a 3 × 3 system as

[ 1   0   0  |  d1 ]
[ 0   1   0  |  d2 ]          (1.42)
[ 0   0   1  |  d3 ]

and the solution is xi = di, i = 1, 2, 3.
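The reduction [A|b] → [I|X] can be sketched in code as follows. This is our own illustrative sketch, not the book's program; each pivot is made 1 (as the method permits) and the elements below and above it are then made zero.

```python
# Sketch of the Gauss-Jordan reduction [A|b] -> [I|x].

def gauss_jordan_solve(A, b):
    n = len(A)
    aug = [[float(x) for x in row] + [float(v)] for row, v in zip(A, b)]
    for k in range(n):
        aug[k] = [v / aug[k][k] for v in aug[k]]    # make the pivot 1
        for i in range(n):
            if i != k:                              # zero out column k above and below
                m = aug[i][k]
                aug[i] = [vi - m * vk for vi, vk in zip(aug[i], aug[k])]
    return [row[n] for row in aug]                  # x_i = d_i
```

For the system of Example 1.17 below (x1 + x2 + x3 = 1, 4x1 + 3x2 – x3 = 6, 3x1 + 5x2 + 3x3 = 4) the sketch returns x1 = 1, x2 = 1/2, x3 = – 1/2, matching the hand computation.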

Elimination procedure The first step is same as in the Gauss elimination method, that is, we make the elements below the first pivot as zeros, using the elementary row transformations. From the second step onwards, we make the elements below and above the pivots as zeros using the elementary row transformations. Lastly, we divide each row by its pivot so that the final augmented matrix is of the form (1.42). We may also make the pivots as 1 before performing the elimination. Partial pivoting can also be used in the solution. Let us illustrate the method.

Example 1.17 Solve the following system of equations

x1 + x2 + x3 = 1
4x1 + 3x2 – x3 = 6
3x1 + 5x2 + 3x3 = 4

using the Gauss-Jordan method (i) without partial pivoting, and (ii) with partial pivoting.

Solution We have the augmented matrix as

[ 1   1    1  |  1 ]
[ 4   3   –1  |  6 ]
[ 3   5    3  |  4 ]

(i) We perform the following elementary row transformations and do the eliminations.

R2 – 4R1, R3 – 3R1:
[ 1    1    1  |  1 ]
[ 0   –1   –5  |  2 ]
[ 0    2    0  |  1 ]

R1 + R2, R3 + 2R2:
[ 1    0    –4  |  3 ]
[ 0   –1    –5  |  2 ]
[ 0    0   –10  |  5 ]

R1 – (4/10)R3, R2 – (5/10)R3:
[ 1    0     0  |    1  ]
[ 0   –1     0  |  –1/2 ]
[ 0    0   –10  |    5  ]

Now, making the pivots as 1, ((– R2), (R3/(– 10))), we get

[ 1   0   0  |    1  ]
[ 0   1   0  |   1/2 ]
[ 0   0   1  |  –1/2 ]

Therefore, the solution of the system is x1 = 1, x2 = 1/2, x3 = – 1/2.

(ii) We perform the following elementary row transformations and do the eliminations.

R1 ↔ R2:
[ 4   3   –1  |  6 ]
[ 1   1    1  |  1 ]
[ 3   5    3  |  4 ]

R1/4:
[ 1   3/4   –1/4  |  3/2 ]
[ 1    1      1   |   1  ]
[ 3    5      3   |   4  ]

R2 – R1, R3 – 3R1:
[ 1    3/4    –1/4  |   3/2 ]
[ 0    1/4     5/4  |  –1/2 ]
[ 0   11/4    15/4  |  –1/2 ]

R2 ↔ R3:
[ 1    3/4    –1/4  |   3/2 ]
[ 0   11/4    15/4  |  –1/2 ]
[ 0    1/4     5/4  |  –1/2 ]

R2/(11/4):
[ 1   3/4    –1/4   |    3/2  ]
[ 0    1    15/11   |  –2/11  ]
[ 0   1/4     5/4   |   –1/2  ]

R1 – (3/4)R2, R3 – (1/4)R2:
[ 1   0   –14/11  |  18/11 ]
[ 0   1    15/11  |  –2/11 ]
[ 0   0    10/11  |  –5/11 ]

R3/(10/11):
[ 1   0   –14/11  |  18/11 ]
[ 0   1    15/11  |  –2/11 ]
[ 0   0      1    |  –1/2  ]

R1 + (14/11)R3, R2 – (15/11)R3:
[ 1   0   0  |    1  ]
[ 0   1   0  |   1/2 ]
[ 0   0   1  |  –1/2 ]

Therefore, the solution of the system is x1 = 1, x2 = 1/2, x3 = – 1/2.

Remark 18 The Gauss-Jordan method looks very elegant as the solution is obtained directly. However, it is computationally more expensive than Gauss elimination. For large n, the total number of divisions and multiplications for the Gauss-Jordan method is almost 1.5 times the total number of divisions and multiplications required for Gauss elimination. Hence, we do not normally use this method for the solution of the system of equations. The most important application of this method is to find the inverse of a non-singular matrix. We present this method in the following section.

Remark 19 Partial pivoting can also be done using the augmented matrix [A|I]. However, we cannot first interchange the rows of A and then find the inverse. Then, we would be finding the inverse of a different matrix.

1.2.2.3 Inverse of a Matrix by Gauss-Jordan Method

As given in Remark 18, the important application of the Gauss-Jordan method is to find the inverse of a non-singular matrix A. We start with the augmented matrix of A with the identity matrix I of the same order. When the Gauss-Jordan procedure is completed, we obtain

[A | I] → [I | A^(-1)]

since AA^(-1) = I.

Example 1.18 Find the inverse of the matrix

[ 1   1    1 ]
[ 4   3   –1 ]
[ 3   5    3 ]

using the Gauss-Jordan method (i) without partial pivoting, and (ii) with partial pivoting.

Solution Consider the augmented matrix

[ 1   1    1  |  1   0   0 ]
[ 4   3   –1  |  0   1   0 ]
[ 3   5    3  |  0   0   1 ]

(i) We perform the following elementary row transformations and do the eliminations.

R2 – 4R1, R3 – 3R1:
[ 1    1    1  |   1   0   0 ]
[ 0   –1   –5  |  –4   1   0 ]
[ 0    2    0  |  –3   0   1 ]

– R2:
[ 1   1    1  |   1    0   0 ]
[ 0   1    5  |   4   –1   0 ]
[ 0   2    0  |  –3    0   1 ]

R1 – R2, R3 – 2R2:
[ 1   0    –4  |   –3    1   0 ]
[ 0   1     5  |    4   –1   0 ]
[ 0   0   –10  |  –11    2   1 ]

R3/(– 10):
[ 1   0   –4  |    –3       1      0    ]
[ 0   1    5  |     4      –1      0    ]
[ 0   0    1  |  11/10   –2/10  –1/10   ]

R1 + 4R3, R2 – 5R3:
[ 1   0   0  |   14/10    2/10   –4/10 ]
[ 0   1   0  |  –15/10     0      5/10 ]
[ 0   0   1  |   11/10   –2/10   –1/10 ]

Therefore, the inverse of the given matrix is

[   7/5    1/5   –2/5  ]
[  –3/2     0     1/2  ]
[  11/10   –1/5  –1/10 ]

(ii) We perform the following elementary row transformations and do the eliminations.

R1 ↔ R2, R1/4:
[ 1   3/4   –1/4  |  0   1/4   0 ]
[ 1    1      1   |  1    0    0 ]
[ 3    5      3   |  0    0    1 ]

R2 – R1, R3 – 3R1:
[ 1    3/4    –1/4  |  0    1/4   0 ]
[ 0    1/4     5/4  |  1   –1/4   0 ]
[ 0   11/4    15/4  |  0   –3/4   1 ]

R2 ↔ R3:
[ 1    3/4    –1/4  |  0    1/4   0 ]
[ 0   11/4    15/4  |  0   –3/4   1 ]
[ 0    1/4     5/4  |  1   –1/4   0 ]

R2/(11/4):
[ 1   3/4    –1/4   |  0     1/4     0    ]
[ 0    1    15/11   |  0   –3/11   4/11   ]
[ 0   1/4     5/4   |  1    –1/4     0    ]

R1 – (3/4)R2, R3 – (1/4)R2:
[ 1   0   –14/11  |   0     5/11   –3/11 ]
[ 0   1    15/11  |   0    –3/11    4/11 ]
[ 0   0    10/11  |   1    –2/11   –1/11 ]

R3/(10/11):
[ 1   0   –14/11  |    0      5/11   –3/11 ]
[ 0   1    15/11  |    0     –3/11    4/11 ]
[ 0   0      1    |  11/10   –1/5    –1/10 ]

R1 + (14/11)R3, R2 – (15/11)R3:
[ 1   0   0  |   7/5     1/5   –2/5  ]
[ 0   1   0  |  –3/2      0     1/2  ]
[ 0   0   1  |  11/10   –1/5   –1/10 ]

Therefore, the inverse of the matrix is

[   7/5    1/5   –2/5  ]
[  –3/2     0     1/2  ]
[  11/10   –1/5  –1/10 ]

Example 1.19 Using the Gauss-Jordan method, find the inverse of

[ 2   2   3 ]
[ 2   1   1 ]
[ 1   3   5 ]
                                                     (A.U. Apr./May 2004)

Solution We have the following augmented matrix.

[ 2   2   3  |  1   0   0 ]
[ 2   1   1  |  0   1   0 ]
[ 1   3   5  |  0   0   1 ]

We perform the following elementary row transformations and do the eliminations.

R1/2:
[ 1   1   3/2  |  1/2   0   0 ]
[ 2   1    1   |   0    1   0 ]
[ 1   3    5   |   0    0   1 ]

R2 – 2R1, R3 – R1:
[ 1    1   3/2  |   1/2   0   0 ]
[ 0   –1   –2   |   –1    1   0 ]
[ 0    2   7/2  |  –1/2   0   1 ]

– R2:
[ 1   1   3/2  |   1/2    0   0 ]
[ 0   1    2   |    1    –1   0 ]
[ 0   2   7/2  |  –1/2    0   1 ]

R1 – R2, R3 – 2R2:
[ 1   0   –1/2  |  –1/2    1   0 ]
[ 0   1     2   |    1    –1   0 ]
[ 0   0   –1/2  |  –5/2    2   1 ]

R3 × (– 2):
[ 1   0   –1/2  |  –1/2    1    0 ]
[ 0   1     2   |    1    –1    0 ]
[ 0   0     1   |    5    –4   –2 ]

R1 + (1/2)R3, R2 – 2R3:
[ 1   0   0  |   2   –1   –1 ]
[ 0   1   0  |  –9    7    4 ]
[ 0   0   1  |   5   –4   –2 ]

Therefore, the inverse of the given matrix is

[  2   –1   –1 ]
[ –9    7    4 ]
[  5   –4   –2 ]

REVIEW QUESTIONS

1. What is a direct method for solving a linear system of algebraic equations Ax = b?
Solution Direct methods produce the solutions in a finite number of steps. The number of operations, called the operational count, can be calculated.

2. What is an augmented matrix of the system of algebraic equations Ax = b?
Solution The augmented matrix is denoted by [A | b], where A and b are the coefficient matrix and right hand side vector respectively. If A is an n × n matrix and b is an n × 1 vector, then the augmented matrix is of order n × (n + 1).

3. Define the rank of a matrix.
Solution The number of linearly independent rows/columns of a matrix define the row-rank/column-rank of that matrix. We note that row-rank = column-rank = rank.

4. Define consistency and inconsistency of a linear system of algebraic equations Ax = b.
Solution Let the augmented matrix of the system be [A | b].
(i) The system of equations Ax = b is consistent (has at least one solution) if rank (A) = rank [A | b] = r. If r = n, then the system has a unique solution. If r < n, then the system has an (n – r) parameter family of infinite number of solutions.
(ii) The system of equations Ax = b is inconsistent (has no solution) if rank (A) ≠ rank [A | b].

5. Define elementary row transformations.
Solution We define the following operations as elementary row transformations.
(i) Interchange of any two rows. If we interchange the ith row with the jth row, then we usually denote the operation as Ri ↔ Rj.
(ii) Division/multiplication of any row by a non-zero number p. If the ith row is multiplied by p, then we usually denote this operation as pRi.
(iii) Adding/subtracting a scalar multiple of any row to any other row. If all the elements of the jth row are multiplied by a scalar p and added to the corresponding elements of the ith row, then we usually denote this operation as Ri ← Ri + pRj. Note the order in which the operation Ri + pRj is written. The elements of the jth row remain unchanged and the elements of the ith row get changed.

6. Which direct methods do we use for (i) solving the system of equations Ax = b, and (ii) finding the inverse of a square matrix A?
Solution (i) Gauss elimination method and Gauss-Jordan method. (ii) Gauss-Jordan method.

7. When does the Gauss elimination method fail?
Solution Gauss elimination method fails when any one of the pivots is zero or is a very small number, as the elimination progresses. If a pivot is zero, then division by it is not defined. If a pivot is a very small number, then division by it introduces large round off errors and the solution may contain large errors.

8. How do we avoid computational errors in Gauss elimination?
Solution To avoid computational errors, we follow the procedure of partial pivoting. In the first stage of elimination, the first column of the augmented matrix is searched for the largest element in magnitude and brought as the first pivot by interchanging the first row of the augmented matrix (first equation) with the row (equation) having the largest element in magnitude. In the second stage of elimination, the second column is searched for the largest element in magnitude among the n – 1 elements leaving the first element, and this element is brought as the second pivot by interchanging the second row of the augmented matrix with the later row having the largest element in magnitude. This procedure is continued until the upper triangular system is obtained. Partial pivoting is done after every stage of elimination.

9. Define complete pivoting in Gauss elimination.
Solution In complete pivoting, we search the entire matrix A in the augmented matrix for the largest element in magnitude and bring it as the first pivot. This requires not only an interchange of the equations, but also an interchange of the positions of the variables. It is possible that the position of a variable is changed a number of times during this pivoting. We need to keep track of the positions of all the variables. Hence, the procedure is computationally expensive and is not used in any software.

10. Describe the principle involved in the Gauss elimination method.
Solution The method is based on the idea of reducing the given system of equations Ax = b to an upper triangular system of equations Ux = z, using elementary row operations. We know that these two systems are equivalent, that is, the solutions of both the systems are identical. This reduced system Ux = z is then solved by the back substitution method to obtain the solution vector x.

11. Describe the principle involved in the Gauss-Jordan method for finding the inverse of a square matrix A.

Solution We start with the augmented matrix of A with the identity matrix I of the same order. When the Gauss-Jordan elimination procedure using elementary row transformations is completed, we obtain

[A | I] → [I | A^(-1)]

since AA^(-1) = I.

12. Can we use partial pivoting in the Gauss-Jordan method?
Solution Yes. Partial pivoting can also be done using the augmented matrix [A | I]. However, we cannot first interchange the rows of A and then find the inverse. Then, we would be finding the inverse of a different matrix.

EXERCISE 1.2

Solve the following systems of equations by the Gauss elimination method.

1. 10x – 2y + 3z = 23, 2x + 10y – 5z = – 53, 3x – 4y + 10z = 33.

2. 3.15x – 1.96y + 3.85z = 12.95, 2.13x + 5.12y – 2.89z = – 8.61, 5.92x + 3.05y + 2.15z = 6.88.

3. [ 2  2  3 ; 4  1  1 ; 1  3  1 ] [x ; y ; z] = [1 ; 2 ; 3].

4. [ 2  1  1  2 ; …  0  2  1 ; 3  2  2  0 ; 1  3  2  6 ] [x1 ; x2 ; x3 ; x4] = [2 ; – 3 ; … ; …].

Solve the following systems of equations by the Gauss-Jordan method.

5. x1 + x2 + x3 = 1, 4x1 + 3x2 – x3 = 6, 3x1 + 5x2 + 3x3 = 4.

6. x + 3y + 3z = 16, x + 4y + 3z = 18, x + 3y + 4z = 19.    (A.U. Apr/May 2005)

7. 10x – 2y + 3z = 23, 2x + 10y – 5z = – 53, 3x – 4y + 10z = 33.

8. 10x + y + z = 12, 2x + 10y + z = 13, x + y + 5z = 7.    (A.U. Nov/Dec 2004)

Find the inverses of the following matrices by the Gauss-Jordan method.

9. [ 2  1  1 ; 3  2  3 ; 1  4  9 ].    (A.U. Nov/Dec 2006)

10. [ 2  2  6 ; 2  6  – 6 ; 4  – 8  – 8 ].    (A.U. Nov/Dec 2006)

11. [ 1  1  3 ; 1  3  – 3 ; – 2  – 4  – 4 ].

12. [ 2  0  1 ; 3  …  5 ; 1  – 2  0 ].    (A.U. Nov/Dec 2005)
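The [A | I] → [I | A^(-1)] reduction of review questions 11 and 12, with partial pivoting performed on the rows of [A | I], can be sketched in code. This is an illustrative sketch with our own names, and a non-singular matrix is assumed.

```python
# Sketch of matrix inversion by Gauss-Jordan on [A|I], with partial
# pivoting (the rows of [A|I] are interchanged, never the rows of A alone).

def gauss_jordan_inverse(A):
    n = len(A)
    aug = [[float(x) for x in row] + [1.0 if i == j else 0.0 for j in range(n)]
           for i, row in enumerate(A)]
    for k in range(n):
        # Partial pivoting on column k.
        p = max(range(k, n), key=lambda i: abs(aug[i][k]))
        aug[k], aug[p] = aug[p], aug[k]
        aug[k] = [v / aug[k][k] for v in aug[k]]   # make the pivot 1
        for i in range(n):
            if i != k:                             # clear column k above and below
                m = aug[i][k]
                aug[i] = [vi - m * vk for vi, vk in zip(aug[i], aug[k])]
    return [row[n:] for row in aug]                # the right half is A^(-1)
```

For the matrix of Example 1.18 this reproduces the inverse found there, [7/5, 1/5, –2/5; –3/2, 0, 1/2; 11/10, –1/5, –1/10].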

Show that the following systems of equations are inconsistent using the Gauss elimination method.

13. 2x1 + x2 – 3x3 = 0, 5x1 + 8x2 + x3 = 14, 4x1 + 13x2 + 11x3 = 25.

14. x1 – 3x2 + 4x3 = 2, x1 + x2 – x3 = 0, 3x1 – x2 + 2x3 = 4.

Show that the following systems of equations have an infinite number of solutions using the Gauss elimination method.

15. 2x1 + x2 – 3x3 = 0, 5x1 + 8x2 + x3 = 14, 4x1 + 13x2 + 11x3 = 28.

16. x1 + 5x2 – x3 = 0, 2x1 + 3x2 + x3 = 11, 5x1 + 11x2 + x3 = 22.

1.2.3 Iterative Methods

As discussed earlier, iterative methods are based on the idea of successive approximations. We start with an initial approximation to the solution vector x = x0, to solve the system of equations Ax = b, and obtain a sequence of approximate vectors x0, x1, ..., xk, ..., which in the limit as k → ∞ converges to the exact solution vector x = A^(-1)b. A general linear iterative method for the solution of the system of equations Ax = b can be written in matrix form as

x^(k+1) = Hx^(k) + c,   k = 0, 1, 2, ...                (1.43)

where x^(k+1) and x^(k) are the approximations for x at the (k + 1)th and kth iterations respectively, H is called the iteration matrix, which depends on A, and c is a column vector, which depends on A and b.

When to stop the iteration We stop the iteration procedure when the magnitudes of the differences between the two successive iterates of all the variables are smaller than a given accuracy or error tolerance or an error bound ε, that is,

| xi^(k+1) – xi^(k) | ≤ ε,  for all i.                  (1.44)

For example, if we require two decimal places of accuracy, then we iterate until | xi^(k+1) – xi^(k) | < 0.005, for all i. If we require three decimal places of accuracy, then we iterate until | xi^(k+1) – xi^(k) | < 0.0005, for all i. Convergence property of an iterative method depends on the iteration matrix H.

Now, we derive two iterative methods for the solution of the system of algebraic equations

a11x1 + a12x2 + a13x3 = b1
a21x1 + a22x2 + a23x3 = b2                (1.45)
a31x1 + a32x2 + a33x3 = b3

1.2.3.1 Gauss-Jacobi Iteration Method

Sometimes, the method is called the Jacobi method.
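The Gauss-Jacobi iteration defined in this subsection — every component of the new iterate computed from the complete previous iterate — can be sketched in code. This is our own illustrative sketch, not the book's; the stopping test follows the tolerance criterion (1.44), and nonzero pivots aii are assumed.

```python
# Sketch of the Gauss-Jacobi iteration ("simultaneous displacement"):
# all components of x(k+1) are computed from the full previous vector x(k).

def jacobi(A, b, x0=None, eps=0.0005, itmax=100):
    n = len(A)
    x = list(x0) if x0 is not None else [0.0] * n
    for _ in range(itmax):
        x_new = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
                 for i in range(n)]
        # Stop when every |x_i^(k+1) - x_i^(k)| is below the tolerance.
        if max(abs(xn - xo) for xn, xo in zip(x_new, x)) < eps:
            return x_new
        x = x_new
    return x
```

Applied to the diagonally dominant system of Example 1.20 (4x1 + x2 + x3 = 2, x1 + 5x2 + 2x3 = – 6, x1 + 2x2 + 3x3 = – 4), the iterates oscillate toward the exact solution x1 = 1, x2 = – 1, x3 = – 1, as the worked iterations show.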

We assume that the pivots aii ≠ 0, for all i. Write the equations as

a11x1 = b1 – (a12x2 + a13x3)
a22x2 = b2 – (a21x1 + a23x3)
a33x3 = b3 – (a31x1 + a32x2)

The Jacobi iteration method is defined as

x1^(k+1) = (1/a11)[b1 – (a12x2^(k) + a13x3^(k))]
x2^(k+1) = (1/a22)[b2 – (a21x1^(k) + a23x3^(k))]          (1.46)
x3^(k+1) = (1/a33)[b3 – (a31x1^(k) + a32x2^(k))],   k = 0, 1, 2, ...

Since we replace the complete vector x^(k) in the right hand side of (1.46) at the end of each iteration, this method is also called the method of simultaneous displacement.

Remark 20 A sufficient condition for convergence of the Jacobi method is that the system of equations is diagonally dominant, that is, the coefficient matrix A is diagonally dominant: we can verify that | aii | ≥ Σ (j = 1, j ≠ i to n) | aij |. This implies that convergence may be obtained even if the system is not diagonally dominant. If the system is not diagonally dominant, we may exchange the equations, if possible, such that the new system is diagonally dominant and convergence is guaranteed. However, such manual verification or exchange of equations may not be possible for the large systems that we obtain in application problems. The necessary and sufficient condition for convergence is that the spectral radius of the iteration matrix H is less than one unit, that is, ρ(H) < 1, where ρ(H) is the largest eigen value in magnitude of H. Testing of this condition is beyond the scope of the syllabus.

Remark 21 How do we find the initial approximations to start the iteration? If the system is diagonally dominant, then the iteration converges for any initial solution vector. If no suitable approximation is available, we can choose x = 0, that is, xi = 0 for all i. Then, the initial approximation becomes xi = bi/aii, for all i.

Example 1.20 Solve the system of equations

4x1 + x2 + x3 = 2
x1 + 5x2 + 2x3 = – 6
x1 + 2x2 + 3x3 = – 4

using the Jacobi iteration method. Use the initial approximations as (i) xi = 0, i = 1, 2, 3, (ii) x1 = 0.5, x2 = – 0.5, x3 = – 0.5. Perform five iterations in each case.

Solution Note that the given system is diagonally dominant. Jacobi method gives the iterations as

x1^(k+1) = 0.25[2 – (x2^(k) + x3^(k))]

x2^(k+1) = 0.2[– 6 – (x1^(k) + 2x3^(k))]
x3^(k+1) = 0.33333[– 4 – (x1^(k) + 2x2^(k))],   k = 0, 1, 2, ...

We have the following results.

(i) x1^(0) = 0, x2^(0) = 0, x3^(0) = 0.

First iteration
x1^(1) = 0.25[2 – (x2^(0) + x3^(0))] = 0.5,
x2^(1) = 0.2[– 6 – (x1^(0) + 2x3^(0))] = – 1.2,
x3^(1) = 0.33333[– 4 – (x1^(0) + 2x2^(0))] = – 1.33333.

Second iteration
x1^(2) = 0.25[2 – (– 1.2 – 1.33333)] = 1.13333,
x2^(2) = 0.2[– 6 – (0.5 + 2(– 1.33333))] = – 0.76668,
x3^(2) = 0.33333[– 4 – (0.5 + 2(– 1.2))] = – 0.7.

Third iteration
x1^(3) = 0.25[2 – (– 0.76668 – 0.7)] = 0.86667,
x2^(3) = 0.2[– 6 – (1.13333 + 2(– 0.7))] = – 1.14667,
x3^(3) = 0.33333[– 4 – (1.13333 + 2(– 0.76668))] = – 1.19999.

Fourth iteration
x1^(4) = 0.25[2 – (– 1.14667 – 1.19999)] = 1.08666,
x2^(4) = 0.2[– 6 – (0.86667 + 2(– 1.19999))] = – 0.89334,
x3^(4) = 0.33333[– 4 – (0.86667 + 2(– 1.14667))] = – 0.85777.

Fifth iteration
x1^(5) = 0.25[2 – (– 0.89334 – 0.85777)] = 0.93778,
x2^(5) = 0.2[– 6 – (1.08666 + 2(– 0.85777))] = – 1.07422,
x3^(5) = 0.33333[– 4 – (1.08666 + 2(– 0.89334))] = – 1.09999.

It is interesting to note that the iterations oscillate and converge to the exact solution x1 = 1, x2 = – 1, x3 = – 1.

(ii) x1^(0) = 0.5, x2^(0) = – 0.5, x3^(0) = – 0.5.

First iteration
x1^(1) = 0.25[2 – (– 0.5 – 0.5)] = 0.75,
x2^(1) = 0.2[– 6 – (0.5 + 2(– 0.5))] = – 1.1,
x3^(1) = 0.33333[– 4 – (0.5 + 2(– 0.5))] = – 1.16667.

Second iteration
x1^(2) = 0.25[2 – (– 1.1 – 1.16667)] = 1.06667,
x2^(2) = 0.2[– 6 – (0.75 + 2(– 1.16667))] = – 0.88333,
x3^(2) = 0.33333[– 4 – (0.75 + 2(– 1.1))] = – 0.84999.

Third iteration
x1^(3) = 0.25[2 – (– 0.88333 – 0.84999)] = 0.93333,
x2^(3) = 0.2[– 6 – (1.06667 + 2(– 0.84999))] = – 1.07334,
x3^(3) = 0.33333[– 4 – (1.06667 + 2(– 0.88333))] = – 1.1.

Fourth iteration
x1^(4) = 0.25[2 – (– 1.07334 – 1.1)] = 1.04333,
x2^(4) = 0.2[– 6 – (0.93333 + 2(– 1.1))] = – 0.94667,
x3^(4) = 0.33333[– 4 – (0.93333 + 2(– 1.07334))] = – 0.92887.

Fifth iteration
x1^(5) = 0.25[2 – (– 0.94667 – 0.92887)] = 0.96889,
x2^(5) = 0.2[– 6 – (1.04333 + 2(– 0.92887))] = – 1.03712,
x3^(5) = 0.33333[– 4 – (1.04333 + 2(– 0.94667))] = – 1.04999.

Example 1.21 Solve the system of equations

26x1 + 2x2 + 2x3 = 12.6
3x1 + 27x2 + x3 = – 14.3
2x1 + 3x2 + 17x3 = 6.0

using the Jacobi iteration method. Obtain the result correct to three decimal places.

Solution The given system of equations is strongly diagonally dominant. Hence, we can expect faster convergence. Jacobi method gives the iterations as

x1^(k+1) = [12.6 – (2x2^(k) + 2x3^(k))]/26
x2^(k+1) = [– 14.3 – (3x1^(k) + x3^(k))]/27
x3^(k+1) = [6.0 – (2x1^(k) + 3x2^(k))]/17,   k = 0, 1, 2, ...

Choose the initial approximation as x1^(0) = 0, x2^(0) = 0, x3^(0) = 0. We obtain the following results.

First iteration
x1^(1) = [12.6 – (2x2^(0) + 2x3^(0))]/26 = [12.6]/26 = 0.48462.

x2^(1) = [– 14.3 – (3x1^(0) + x3^(0))]/27 = [– 14.3]/27 = – 0.52963,
x3^(1) = [6.0 – (2x1^(0) + 3x2^(0))]/17 = [6.0]/17 = 0.35294.

Second iteration
x1^(2) = [12.6 – 2(– 0.52963 + 0.35294)]/26 = 0.49821,
x2^(2) = [– 14.3 – (3(0.48462) + 0.35294)]/27 = – 0.59655,
x3^(2) = [6.0 – (2(0.48462) + 3(– 0.52963))]/17 = 0.38939.

Third iteration
x1^(3) = [12.6 – 2(– 0.59655 + 0.38939)]/26 = 0.50006,
x2^(3) = [– 14.3 – (3(0.49821) + 0.38939)]/27 = – 0.59941,
x3^(3) = [6.0 – (2(0.49821) + 3(– 0.59655))]/17 = 0.39960.

Fourth iteration
x1^(4) = [12.6 – 2(– 0.59941 + 0.39960)]/26 = 0.50000,
x2^(4) = [– 14.3 – (3(0.50006) + 0.39960)]/27 = – 0.59999,
x3^(4) = [6.0 – (2(0.50006) + 3(– 0.59941))]/17 = 0.39989.

We find
| x1^(4) – x1^(3) | = | 0.50000 – 0.50006 | = 0.00006,
| x2^(4) – x2^(3) | = | – 0.59999 + 0.59941 | = 0.00058,
| x3^(4) – x3^(3) | = | 0.39989 – 0.39960 | = 0.00029.

Three decimal places of accuracy have not been obtained at this iteration.

Fifth iteration
x1^(5) = [12.6 – 2(– 0.59999 + 0.39989)]/26 = 0.50001,
x2^(5) = [– 14.3 – (3(0.50000) + 0.39989)]/27 = – 0.60000,

x3^(5) = [6.0 – (2(0.50000) + 3(– 0.59999))]/17 = 0.40000.

We find
| x1^(5) – x1^(4) | = | 0.50001 – 0.50000 | = 0.00001,
| x2^(5) – x2^(4) | = | – 0.60000 + 0.59999 | = 0.00001,
| x3^(5) – x3^(4) | = | 0.40000 – 0.39989 | = 0.00011.

Since all the errors in magnitude are less than 0.0005, the required solution is

x1 = 0.5,  x2 = – 0.6,  x3 = 0.4.

Remark 22 What is the disadvantage of the Gauss-Jacobi method? At any iteration step, the value of the first variable x1 is obtained using the values of the previous iteration. The value of the second variable x2 is also obtained using the values of the previous iteration, even though the updated value of x1 is available. In general, at every stage in the iteration, the values of the previous iteration are used even though the updated values of the previous variables are available. If we use the updated values of x1, x2, ..., x(i–1) in computing the value of the variable xi, then we obtain a new method called the Gauss-Seidel iteration method.

1.2.3.2 Gauss-Seidel Iteration Method

As pointed out in Remark 22, we use the updated values of x1, x2, ..., x(i–1) in computing the value of the variable xi. This method is also called the method of successive displacement. We assume that the pivots aii ≠ 0, for all i. We write the equations as

a11x1 = b1 – (a12x2 + a13x3)
a22x2 = b2 – (a21x1 + a23x3)
a33x3 = b3 – (a31x1 + a32x2)

The Gauss-Seidel iteration method is defined as

x1^(k+1) = (1/a11)[b1 – (a12x2^(k) + a13x3^(k))]
x2^(k+1) = (1/a22)[b2 – (a21x1^(k+1) + a23x3^(k))]          (1.47)
x3^(k+1) = (1/a33)[b3 – (a31x1^(k+1) + a32x2^(k+1))],   k = 0, 1, 2, ...

We observe that (1.47) is same as writing the given system as

a11x1^(k+1) = b1 – (a12x2^(k) + a13x3^(k))
a21x1^(k+1) + a22x2^(k+1) = b2 – a23x3^(k)          (1.48)
a31x1^(k+1) + a32x2^(k+1) + a33x3^(k+1) = b3
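The iteration (1.47) uses each updated component as soon as it is available. A sketch in code (ours, not the book's; nonzero pivots aii and the tolerance test (1.44) are assumed):

```python
# Sketch of the Gauss-Seidel iteration (successive displacement):
# updated values x_1, ..., x_{i-1} are used at once when computing x_i.

def gauss_seidel(A, b, x0=None, eps=0.0005, itmax=100):
    n = len(A)
    x = list(x0) if x0 is not None else [0.0] * n
    for _ in range(itmax):
        diff = 0.0
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            xi_new = (b[i] - s) / A[i][i]
            diff = max(diff, abs(xi_new - x[i]))
            x[i] = xi_new          # use the updated value immediately
        if diff < eps:
            break
    return x
```

For the strongly diagonally dominant system of Example 1.22 below (45x1 + 2x2 + 3x3 = 58, – 3x1 + 22x2 + 2x3 = 47, 5x1 + x2 + 20x3 = 67) the sketch converges quickly to the exact solution x1 = 1, x2 = 2, x3 = 3.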

Remark 23 A sufficient condition for convergence of the Gauss-Seidel method is that the system of equations is diagonally dominant, that is, the coefficient matrix A is diagonally dominant. This implies that convergence may be obtained even if the system is not diagonally dominant. If the system is not diagonally dominant, we may exchange the equations, if possible, such that the new system is diagonally dominant and convergence is guaranteed. The necessary and sufficient condition for convergence is that the spectral radius of the iteration matrix H is less than one unit, that is, ρ(H) < 1, where ρ(H) is the largest eigen value in magnitude of H. Testing of this condition is beyond the scope of the syllabus. If both the Gauss-Jacobi and Gauss-Seidel methods converge, then the Gauss-Seidel method converges at least two times faster than the Gauss-Jacobi method.

Example 1.22 Find the solution of the system of equations

45x1 + 2x2 + 3x3 = 58
– 3x1 + 22x2 + 2x3 = 47
5x1 + x2 + 20x3 = 67

correct to three decimal places, using the Gauss-Seidel iteration method.

Solution The given system of equations is strongly diagonally dominant. Hence, we can expect fast convergence. Gauss-Seidel method gives the iteration

x1^(k+1) = (1/45)(58 – 2x2^(k) – 3x3^(k)),
x2^(k+1) = (1/22)(47 + 3x1^(k+1) – 2x3^(k)),
x3^(k+1) = (1/20)(67 – 5x1^(k+1) – x2^(k+1)).

Starting with x1^(0) = 0, x2^(0) = 0, x3^(0) = 0, we get the following results.

First iteration
x1^(1) = (1/45)(58 – 2x2^(0) – 3x3^(0)) = (1/45)(58) = 1.28889,
x2^(1) = (1/22)(47 + 3x1^(1) – 2x3^(0)) = (1/22)(47 + 3(1.28889)) = 2.31212,
x3^(1) = (1/20)(67 – 5x1^(1) – x2^(1)) = (1/20)(67 – 5(1.28889) – 2.31212) = 2.91217.

Second iteration
x1^(2) = (1/45)(58 – 2(2.31212) – 3(2.91217)) = 0.99198,
x2^(2) = (1/22)(47 + 3(0.99198) – 2(2.91217)) = 2.00689,

x3^(2) = (1/20)(67 – 5(0.99198) – 2.00689) = 3.00166.

Third iteration
x1^(3) = (1/45)(58 – 2(2.00689) – 3(3.00166)) = 0.99958,
x2^(3) = (1/22)(47 + 3(0.99958) – 2(3.00166)) = 1.99979,
x3^(3) = (1/20)(67 – 5(0.99958) – 1.99979) = 3.00012.

Fourth iteration
x1^(4) = (1/45)(58 – 2(1.99979) – 3(3.00012)) = 1.00000,
x2^(4) = (1/22)(47 + 3(1.00000) – 2(3.00012)) = 1.99999,
x3^(4) = (1/20)(67 – 5(1.00000) – 1.99999) = 3.00000.

We find
| x1^(4) – x1^(3) | = | 1.00000 – 0.99958 | = 0.00042,
| x2^(4) – x2^(3) | = | 1.99999 – 1.99979 | = 0.00020,
| x3^(4) – x3^(3) | = | 3.00000 – 3.00012 | = 0.00012.

Since all the errors in magnitude are less than 0.0005, the required solution is x1 = 1.00000, x2 = 1.99999, x3 = 3.00000. Rounding to three decimal places, we get x1 = 1.000, x2 = 2.000, x3 = 3.000. The exact solution is x1 = 1.0, x2 = 2.0, x3 = 3.0.

Example 1.23 Computationally show that the Gauss-Seidel method applied to the system of equations

3x1 – 6x2 + 2x3 = 23
– 4x1 + x2 – x3 = – 8
x1 – 3x2 + 7x3 = 17

diverges. Take the initial approximations as x1 = 0.9, x2 = – 3.1, x3 = 0.9. Interchange the first and second equations and solve the resulting system by the Gauss-Seidel method. Again take the initial approximations as x1 = 0.9, x2 = – 3.1, x3 = 0.9, and obtain the result correct to two decimal places. The exact solution is x1 = 1.0, x2 = – 3.0, x3 = 1.0.

Solution Note that the system of equations is not diagonally dominant. Gauss-Seidel method gives the iteration

x1^(k+1) = [23 + 6x2^(k) – 2x3^(k)]/3
x2^(k+1) = [– 8 + 4x1^(k+1) + x3^(k)]

x3(k+1) = [17 – x1(k+1) + 3x2(k+1)]/7.

Starting with the initial approximations x1 = 0.9, x2 = – 3.1, x3 = 0.9, we obtain the following results.

First iteration
x1(1) = [23 + 6x2(0) – 2x3(0)]/3 = [23 + 6(– 3.1) – 2(0.9)]/3 = 0.8667,
x2(1) = – 8 + 4x1(1) + x3(0) = – 8 + 4(0.8667) + 0.9 = – 3.6332,
x3(1) = [17 – x1(1) + 3x2(1)]/7 = [17 – 0.8667 + 3(– 3.6332)]/7 = 0.7477.

Second iteration
x1(2) = [23 + 6x2(1) – 2x3(1)]/3 = [23 + 6(– 3.6332) – 2(0.7477)]/3 = – 0.0982,
x2(2) = – 8 + 4x1(2) + x3(1) = – 8 + 4(– 0.0982) + 0.7477 = – 7.6451,
x3(2) = [17 – x1(2) + 3x2(2)]/7 = [17 + 0.0982 + 3(– 7.6451)]/7 = – 0.8339.

Third iteration
x1(3) = [23 + 6x2(2) – 2x3(2)]/3 = [23 + 6(– 7.6451) – 2(– 0.8339)]/3 = – 7.0676,
x2(3) = – 8 + 4x1(3) + x3(2) = – 8 + 4(– 7.0676) – 0.8339 = – 37.1043,
x3(3) = [17 – x1(3) + 3x2(3)]/7 = [17 + 7.0676 + 3(– 37.1043)]/7 = – 12.4636.

It can be observed that the iterations are diverging very fast.

Now, we exchange the first and second equations to obtain the system

– 4x1 + x2 – x3 = – 8
3x1 – 6x2 + 2x3 = 23
x1 – 3x2 + 7x3 = 17.

The system of equations is now diagonally dominant. The Gauss-Seidel method gives the iteration

x1(k+1) = [8 + x2(k) – x3(k)]/4,
x2(k+1) = – [23 – 3x1(k+1) – 2x3(k)]/6,
x3(k+1) = [17 – x1(k+1) + 3x2(k+1)]/7.

Starting with the initial approximations x1 = 0.9, x2 = – 3.1, x3 = 0.9, we obtain the following results.

First iteration
x1(1) = [8 + x2(0) – x3(0)]/4 = [8 – 3.1 – 0.9]/4 = 1.0,

x2(1) = – [23 – 3x1(1) – 2x3(0)]/6 = – [23 – 3(1.0) – 2(0.9)]/6 = – 3.0333,
x3(1) = [17 – x1(1) + 3x2(1)]/7 = [17 – 1.0 + 3(– 3.0333)]/7 = 0.9857.

Second iteration
x1(2) = [8 + x2(1) – x3(1)]/4 = [8 – 3.0333 – 0.9857]/4 = 0.9953,
x2(2) = – [23 – 3x1(2) – 2x3(1)]/6 = – [23 – 3(0.9953) – 2(0.9857)]/6 = – 3.0071,
x3(2) = [17 – x1(2) + 3x2(2)]/7 = [17 – 0.9953 + 3(– 3.0071)]/7 = 0.9976.

Third iteration
x1(3) = [8 + x2(2) – x3(2)]/4 = [8 – 3.0071 – 0.9976]/4 = 0.9988,
x2(3) = – [23 – 3x1(3) – 2x3(2)]/6 = – [23 – 3(0.9988) – 2(0.9976)]/6 = – 3.0014,
x3(3) = [17 – x1(3) + 3x2(3)]/7 = [17 – 0.9988 + 3(– 3.0014)]/7 = 0.9996.

Fourth iteration
x1(4) = [8 + x2(3) – x3(3)]/4 = [8 – 3.0014 – 0.9996]/4 = 0.9998,
x2(4) = – [23 – 3x1(4) – 2x3(3)]/6 = – [23 – 3(0.9998) – 2(0.9996)]/6 = – 3.0002,
x3(4) = [17 – x1(4) + 3x2(4)]/7 = [17 – 0.9998 + 3(– 3.0002)]/7 = 0.9999.

We find
| x1(4) – x1(3) | = | 0.9998 – 0.9988 | = 0.0010,
| x2(4) – x2(3) | = | – 3.0002 + 3.0014 | = 0.0012,
| x3(4) – x3(3) | = | 0.9999 – 0.9996 | = 0.0003.

Since all the errors in magnitude are less than 0.005, the required solution is x1 = 0.9998, x2 = – 3.0002, x3 = 0.9999. Rounding to two decimal places, we get x1 = 1.0, x2 = – 3.0, x3 = 1.0.
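The two worked examples above can be checked with a short program. The sketch below (the helper names are ours, not from the text) implements the Gauss-Seidel iteration and the diagonal dominance test, and applies them to the systems of Examples 1.22 and 1.23 (the latter after the row exchange).

```python
# Gauss-Seidel iteration and a diagonal-dominance test (illustrative sketch;
# function names are our own, not from the text).

def diagonally_dominant(A):
    # |aii| >= sum of |aij|, j != i, for every row i.
    n = len(A)
    return all(abs(A[i][i]) >= sum(abs(A[i][j]) for j in range(n) if j != i)
               for i in range(n))

def gauss_seidel(A, b, x0, eps=0.0005, max_iter=100):
    # Update each component in turn, using the newest values available,
    # and stop when all successive differences are below eps.
    x = list(x0)
    for _ in range(max_iter):
        old = list(x)
        for i in range(len(b)):
            s = sum(A[i][j] * x[j] for j in range(len(b)) if j != i)
            x[i] = (b[i] - s) / A[i][i]
        if max(abs(x[i] - old[i]) for i in range(len(b))) < eps:
            break
    return x

# Example 1.22: strongly diagonally dominant; converges to (1, 2, 3).
A1 = [[45.0, 2.0, 3.0], [-3.0, 22.0, 2.0], [5.0, 1.0, 20.0]]
sol1 = gauss_seidel(A1, [58.0, 47.0, 67.0], [0.0, 0.0, 0.0])

# Example 1.23: the original ordering is not diagonally dominant; after
# exchanging the first two equations the iteration converges to (1, -3, 1).
A2_original = [[3.0, -6.0, 2.0], [-4.0, 1.0, -1.0], [1.0, -3.0, 7.0]]
A2_swapped = [[-4.0, 1.0, -1.0], [3.0, -6.0, 2.0], [1.0, -3.0, 7.0]]
sol2 = gauss_seidel(A2_swapped, [-8.0, 23.0, 17.0], [0.9, -3.1, 0.9], eps=0.005)
```

Running the sketch reproduces the numbers in the text: the first system converges in a few iterations to (1, 2, 3), and the exchanged second system to (1, – 3, 1), while `diagonally_dominant(A2_original)` is false.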

REVIEW QUESTIONS

1. Define an iterative procedure for solving a system of algebraic equations Ax = b.
Solution A general linear iterative method for the solution of the system of equations Ax = b can be written in matrix form as
x(k+1) = Hx(k) + c, k = 0, 1, 2, ...
where x(k+1) and x(k) are the approximations for x at the (k + 1)th and kth iterations respectively. H is called the iteration matrix, which depends on A, and c is a column vector, which depends on A and b.

2. What do we mean by convergence of an iterative procedure?
Solution We start with an initial approximation to the solution vector x = x0, and obtain a sequence of approximate vectors x0, x1, ..., xk, ... We say that the iteration converges if, in the limit as k → ∞, the sequence of approximate vectors x0, x1, ..., xk, ... converge to the exact solution vector x = A–1 b.

3. How do we terminate an iterative procedure for the solution of a system of algebraic equations Ax = b?
Solution We terminate an iteration procedure when the magnitudes of the differences between the two successive iterates of all the variables are smaller than a given accuracy or an error bound ε, that is,
| xi(k+1) – xi(k) | ≤ ε, for all i.
For example, if we require two decimal places of accuracy, then we iterate until | xi(k+1) – xi(k) | < 0.005, for all i. If we require three decimal places of accuracy, then we iterate until | xi(k+1) – xi(k) | < 0.0005, for all i.

4. What is the condition of convergence of an iterative procedure for the solution of a system of linear algebraic equations Ax = b?
Solution A sufficient condition for convergence of an iterative method is that the system of equations is diagonally dominant, that is, the coefficient matrix A is diagonally dominant. We can verify that | aii | ≥ Σ (j = 1, j ≠ i, to n) | aij |, for all i. This implies that convergence may be obtained even if the system is not diagonally dominant. If the system is not diagonally dominant, we may exchange the equations, if possible, such that the new system is diagonally dominant and convergence is guaranteed. The necessary and sufficient condition for convergence is that the spectral radius of the iteration matrix H is less than one unit, that is, ρ(H) < 1, where ρ(H) is the largest eigen value in magnitude of H.

5. Which method, the Gauss-Jacobi method or the Gauss-Seidel method, converges faster, for the solution of a system of algebraic equations Ax = b?
Solution If both the Gauss-Jacobi and Gauss-Seidel methods converge, then the Gauss-Seidel method converges at least two times faster than the Gauss-Jacobi method.
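The spectral radius criterion in the answers above can be checked numerically. The sketch below (assuming NumPy is available) builds the Gauss-Jacobi and Gauss-Seidel iteration matrices H for the system of Example 1.22 from the splitting A = D + L + U, and verifies that ρ(H) < 1 for both; the smaller Gauss-Seidel radius is consistent with the remark on faster convergence for this system.

```python
import numpy as np

# Coefficient matrix of Example 1.22 and its splitting A = D + L + U.
A = np.array([[45.0, 2.0, 3.0], [-3.0, 22.0, 2.0], [5.0, 1.0, 20.0]])
D = np.diag(np.diag(A))
L = np.tril(A, -1)
U = np.triu(A, 1)

# Iteration matrices of the two methods, x(k+1) = H x(k) + c.
H_jacobi = -np.linalg.inv(D) @ (L + U)
H_seidel = -np.linalg.inv(D + L) @ U

# Spectral radius = largest eigen value in magnitude of H.
rho_jacobi = max(abs(np.linalg.eigvals(H_jacobi)))
rho_seidel = max(abs(np.linalg.eigvals(H_seidel)))
```

Both spectral radii come out well below one, so both iterations converge for this system, as observed in the worked example.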

EXERCISE 1.3

Solve the following systems of equations using the Gauss-Jacobi iteration method.

1. 27x + 6y – z = 85, x + y + 54z = 110, 6x + 15y + 2z = 72. (A.U. Apr/May 2004)
2. 20x + y – 2z = 17, 3x + 20y – z = – 18, 2x – 3y + 20z = 25. (A.U. Nov/Dec 2006)
3. 10x + 4y – 2z = 20, 3x + 12y – z = 28, x + 4y + 7z = 2.
4. 4x + 2y + z = 14, x + 5y – z = 10, x + y + 8z = 20. (A.U. Nov/Dec 2003)
5. 25x + y – 5z = 19, x + 20y + z = – 18, 3x + 4y + 8z = 7.

Solve the following systems of equations using the Gauss-Seidel iteration method.

6. 27x + 6y – z = 85, x + y + 54z = 110, 6x + 15y + 2z = 72. (A.U. May/June 2006)
7. 20x + y – 2z = 17, 3x + 20y – z = – 18, 2x – 3y + 20z = 25. (A.U. Apr/May 2005)
8. 41x – 2y + 3z = 65, x – 27y + 2z = 71, x + 3y + 52z = 173.
9. 10x + 4y – 2z = 20, 3x + 12y – z = 28, x + 4y + 7z = 2.
10. 25x + y – 5z = 19, x + 20y + z = – 18, 3x + 4y + 8z = 7.

1.3 EIGEN VALUE PROBLEMS

1.3.1 Introduction

The concept of eigen values and finding eigen values and eigen vectors of a given matrix are very important for engineers and scientists. Consider the eigen value problem
Ax = λx. (1.49)
The eigen values of a matrix A are given by the roots of the characteristic equation
| A – λI | = 0. (1.50)
If the matrix A is of order n, then expanding the determinant, we obtain the characteristic equation as
p(λ) = (– 1)^n λ^n + a1 λ^(n–1) + ... + an–1 λ + an = 0. (1.51)
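As a quick illustration of (1.49)–(1.51), the sketch below (assuming NumPy; the matrix is our own illustrative choice) forms the coefficients of the characteristic polynomial of a small matrix and checks that its roots are the eigen values.

```python
import numpy as np

# A small symmetric matrix (illustrative choice, not from the text).
A = np.array([[2.0, 1.0], [1.0, 2.0]])

# For a square matrix, numpy.poly returns the monic characteristic
# polynomial coefficients: here lambda^2 - 4*lambda + 3.
coeffs = np.poly(A)
eigs = np.roots(coeffs)   # roots of p(lambda): the eigen values 3 and 1
```

The roots 1 and 3 agree with the eigen values of the matrix, since p(λ) = λ² – 4λ + 3 = (λ – 1)(λ – 3).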

For any given matrix we write the characteristic equation (1.50), expand it and find the roots λ1, λ2, ..., λn, which are the eigen values. The roots may be real, repeated or complex. Let xi be the solution of the system of the homogeneous equations (1.49), corresponding to the eigen value λi. These vectors xi, i = 1, 2, ..., n, are called the eigen vectors of the system. There are several methods for finding the eigen values of a general matrix or a symmetric matrix. In the syllabus, only the power method for finding the largest eigen value in magnitude of a matrix and the corresponding eigen vector is included.

1.3.2 Power Method

The method for finding the largest eigen value in magnitude and the corresponding eigen vector of the eigen value problem Ax = λx is called the power method.

What is the importance of this method? Let us re-look at the Remarks 20 and 23. The necessary and sufficient condition for convergence of the Gauss-Jacobi and Gauss-Seidel iteration methods is that the spectral radius of the iteration matrix H is less than one unit, that is, ρ(H) < 1, where ρ(H) is the largest eigen value in magnitude of H. If we write the matrix formulations of the methods, then we know H. We can now find the largest eigen value in magnitude of H, which determines whether the methods converge or not.

The method is applicable if a complete system of n linearly independent eigen vectors exist, even though some of the eigen values λ2, λ3, ..., λn may not be distinct. We assume that λ1, λ2, ..., λn are distinct eigen values such that | λ1 | > | λ2 | > ... > | λn |. Let v1, v2, ..., vn be the eigen vectors corresponding to the eigen values λ1, λ2, ..., λn respectively. The n linearly independent eigen vectors form an n-dimensional vector space. Any vector v in this space of eigen vectors v1, v2, ..., vn can be written as a linear combination of these vectors. That is,
v = c1v1 + c2v2 + ... + cnvn. (1.52)
Premultiplying by A and substituting Av1 = λ1v1, Av2 = λ2v2, ..., Avn = λnvn, we get
Av = c1λ1v1 + c2λ2v2 + ... + cnλnvn = λ1[c1v1 + c2(λ2/λ1)v2 + ... + cn(λn/λ1)vn]. (1.53)
Premultiplying repeatedly by A and simplifying, we get
A^k v = λ1^k [c1v1 + c2(λ2/λ1)^k v2 + ... + cn(λn/λ1)^k vn]. (1.54)

Similarly,
A^(k+1) v = λ1^(k+1) [c1v1 + c2(λ2/λ1)^(k+1) v2 + ... + cn(λn/λ1)^(k+1) vn]. (1.55)
As k → ∞, since | λi/λ1 | < 1, i = 2, 3, ..., n, the right hand sides of (1.54) and (1.55) tend to λ1^k c1v1 and λ1^(k+1) c1v1. Both the bracketed vectors in (1.54) and (1.55), that is, [c1v1 + c2(λ2/λ1)^k v2 + ... + cn(λn/λ1)^k vn] and [c1v1 + c2(λ2/λ1)^(k+1) v2 + ... + cn(λn/λ1)^(k+1) vn], tend to c1v1, which is the eigen vector corresponding to λ1. The eigen value λ1 is obtained as the ratio of the corresponding components of A^(k+1)v and A^k v. That is,
λ1 = lim (k → ∞) (A^(k+1) v)r / (A^k v)r, r = 1, 2, ..., n (1.56)
where the suffix r denotes the rth component of the vector. Therefore, we obtain n ratios, all of them tending to the same value, which is the largest eigen value in magnitude, λ1.

When do we stop the iteration? The iterations are stopped when all the magnitudes of the differences of the ratios are less than the given error tolerance.

Remark 24 The choice of the initial approximation vector v0 is important. If no suitable approximation is available, we can choose v0 with all its components as one unit, that is, v0 = [1, 1, 1, ..., 1]T. However, this initial approximation to the vector should be non-orthogonal to v1.

Remark 25 Faster convergence is obtained when | λ2 | << | λ1 |.

As k → ∞, premultiplication each time by A may introduce round-off errors. In order to keep the round-off errors under control, we normalize the vector before premultiplying by A. The normalization that we use is to make the largest element in magnitude as unity. If we use this normalization, a simple algorithm for the power method can be written as follows:
yk+1 = Avk, (1.57)
vk+1 = yk+1/mk+1 (1.58)
where mk+1 is the largest element in magnitude of yk+1. Now, the largest element in magnitude of vk+1 is one unit. Then, (1.56) can be written as
λ1 = lim (k → ∞) (yk+1)r / (vk)r, r = 1, 2, ..., n (1.59)
and vk+1 is the required eigen vector.

Remark 26 It may be noted that as k → ∞, mk+1 also gives λ1.

Remark 27 Power method gives the largest eigen value in magnitude. If the sign of the eigen value is required, then we substitute this value in the determinant | A – λ1I | and find its value. If this value is approximately zero, then the eigen value is of positive sign. Otherwise, it is of negative sign.

Example 1.24 Determine the dominant eigen value of
A = | 1  2 |
    | 3  4 |
by power method. (A.U. Nov/Dec 2004)

Solution Let the initial approximation to the eigen vector be v0 = [1, 1]T. Then, the power method is given by
yk+1 = Avk, vk+1 = yk+1/mk+1
where mk+1 is the largest element in magnitude of yk+1. We have the following results.

y1 = Av0 = [3, 7]T, m1 = 7, v1 = y1/m1 = [0.42857, 1]T.
y2 = Av1 = [2.42857, 5.28571]T, m2 = 5.28571, v2 = y2/m2 = [0.45946, 1]T.
y3 = Av2 = [2.45946, 5.37838]T, m3 = 5.37838, v3 = y3/m3 = [0.45729, 1]T.
y4 = Av3 = [2.45729, 5.37187]T, m4 = 5.37187, v4 = y4/m4 = [0.45744, 1]T.
y5 = Av4 = [2.45744, 5.37232]T, m5 = 5.37232, v5 = y5/m5 = [0.45743, 1]T.
y6 = Av5 = [2.45743, 5.37229]T, m6 = 5.37229.

Now, we find the ratios
λ1 = lim (k → ∞) (yk+1)r/(vk)r, r = 1, 2.
We obtain the ratios as
2.45743/0.45743 = 5.37225, 5.37229/1 = 5.37229.
The magnitude of the error between the ratios is | 5.37225 – 5.37229 | = 0.00004 < 0.00005. Hence, the dominant eigen value, correct to four decimal places, is 5.3722.

Example 1.25 Determine the numerically largest eigen value and the corresponding eigen vector of the following matrix, using the power method.
| 25  1   2 |
|  1  3   0 |
|  2  0  –4 |
(A.U. May/June 2006)

Solution Let the initial approximation to the eigen vector be v0 = [1, 1, 1]T. Then, the power method is given by
yk+1 = Avk, vk+1 = yk+1/mk+1
where mk+1 is the largest element in magnitude of yk+1. We obtain the following results.

y1 = Av0 = [28, 4, – 2]T, m1 = 28, v1 = y1/m1 = [1, 0.14286, – 0.07143]T.
y2 = Av1 = [25.00000, 1.42858, 2.28572]T, m2 = 25.0, v2 = y2/m2 = [1, 0.05714, 0.09143]T.
y3 = Av2 = [25.24000, 1.17142, 1.63428]T, m3 = 25.24, v3 = y3/m3 = [1, 0.04641, 0.06475]T.

y4 = Av3 = [25.17591, 1.13923, 1.74100]T, m4 = 25.17591, v4 = y4/m4 = [1, 0.04525, 0.06915]T.
y5 = Av4 = [25.18355, 1.13575, 1.72340]T, m5 = 25.18355, v5 = y5/m5 = [1, 0.04510, 0.06843]T.
y6 = Av5 = [25.18196, 1.13530, 1.72628]T, m6 = 25.18196, v6 = y6/m6 = [1, 0.04508, 0.06855]T.
y7 = Av6 = [25.18218, 1.13524, 1.72580]T, m7 = 25.18218, v7 = y7/m7 = [1, 0.04508, 0.06853]T.
y8 = Av7 = [25.18214, 1.13524, 1.72588]T, m8 = 25.18214.

Now, we find the ratios
λ1 = lim (k → ∞) (yk+1)r/(vk)r, r = 1, 2, 3.

We obtain the ratios as
25.18214/1 = 25.18214, 1.13524/0.04508 = 25.18279, 1.72588/0.06853 = 25.18430.
The magnitudes of the errors of the differences of these ratios are 0.00065, 0.00151, 0.00216, which are less than 0.005. Therefore, the results are correct to two decimal places. Hence, the largest eigen value in magnitude is | λ1 | = 25.18. The corresponding eigen vector is v8,
v8 = y8/m8 = [1, 0.04508, 0.06854]T.
In Remark 26, we have noted that as k → ∞, mk+1 also gives | λ1 |. We find that this statement is true since | m8 – m7 | = | 25.18214 – 25.18218 | = 0.00004. If we require the sign of the eigen value, we substitute λ1 in the characteristic equation. In the present problem, we find that | A – 25.18 I | = 1.4018, while | A + 25.18 I | is very large. Therefore, the required eigen value is 25.18.

REVIEW QUESTIONS

1. Describe the power method.
Solution The power method can be written as follows:
yk+1 = Avk, vk+1 = yk+1/mk+1
where mk+1 is the largest element in magnitude of yk+1. Now, the largest element in magnitude of vk+1 is one unit. The largest eigen value in magnitude is given by
λ1 = lim (k → ∞) (yk+1)r/(vk)r, r = 1, 2, ..., n
and vk+1 is the required eigen vector.

2. When do we use the power method?
Solution We use the power method to find the largest eigen value in magnitude and the corresponding eigen vector of a matrix A.

3. When do we stop the iterations in power method?
Solution All the ratios in
λ1 = lim (k → ∞) (yk+1)r/(vk)r, r = 1, 2, ..., n
tend to the same number. The iterations are stopped when all the magnitudes of the differences of the ratios are less than the given error tolerance.
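The computations of Examples 1.24 and 1.25 can be reproduced with a few lines of code. The sketch below (helper names are ours) implements the normalized power method of (1.57)–(1.59) and, following Remark 27, checks the sign of the eigen value of Example 1.25 by evaluating the characteristic determinant at ± 25.18.

```python
def power_method(A, v, n_iter=25):
    # Normalized power method: y = A v, m = largest element in magnitude,
    # v = y / m; after convergence m approximates the dominant eigen value.
    m = 0.0
    for _ in range(n_iter):
        y = [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(v))]
        m = max(y, key=abs)
        v = [yi / m for yi in y]
    return m, v

def det3(M):
    # Determinant of a 3x3 matrix by cofactor expansion along the first row.
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
            - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
            + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

# Example 1.24: dominant eigen value of [[1, 2], [3, 4]] is 5.3722...
m2, v2 = power_method([[1.0, 2.0], [3.0, 4.0]], [1.0, 1.0])

# Example 1.25: |lambda_1| = 25.18 for the 3x3 matrix below.
A = [[25.0, 1.0, 2.0], [1.0, 3.0, 0.0], [2.0, 0.0, -4.0]]
m3, v3 = power_method(A, [1.0, 1.0, 1.0])

# Remark 27: the sign is positive when det(A - lam*I) is near zero.
lam = 25.18
d_plus = det3([[A[i][j] - (lam if i == j else 0.0) for j in range(3)] for i in range(3)])
d_minus = det3([[A[i][j] + (lam if i == j else 0.0) for j in range(3)] for i in range(3)])
```

Here `d_plus` comes out near the text's value 1.4018 while `d_minus` is of the order of 10^4, so the eigen value is + 25.18.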

Root correct up to 3 decimal places is x4. 5. MN5 2 − 1PQ 4.801252.721014. x4 = 2. x1 = 3.000103. Use suitable initial approximation to the eigen vector. All the ratios in the above equation tend to the same number. Power method gives the largest eigen value in magnitude.. x3 = 2.798390. 2.089639. Root correct up to 4 decimal places is x4. . x4 = 2. | x4 – x3 | = 0. 3).5. x4 = 2. > | λn |. x3 = 2. 0 1 6 0 5 0 LM MN L65 8. Apr/May 2005) LM20 1 1OP 3 MN 1 0 0PQ . LM 1 3 2 MN−31 4 LM35 2 3 MN 2 0 1 LM15 2 MN 0 3 0 0 O P P Q 1O 0 P. ….740637. x2 = 2. | x4 – x3 | = 0. we assume | λ1 | > | λ2 | > . x1 = 3. Otherwise. (2. x0 = 2. x2 = 2. Faster convergence is obtained when | λ2 | << | λ1 |. 1 1 L3 1 5 O 7. 2P Q 1. If the sign of the eigen value is required.SOLUTION OF EQUATIONS AND EIGEN VALUE PROBLEMS 59 λ1 = lim k→∞ ( y k +1 )r . 2.1 1.058824. 4 MN6 −3 2 4 −1 3 5 OP PQ (A. 6 1 0 5. (2. EXERCISE 1. x1 = 3. it is of negative sign.081264.4 Determine the largest eigen value in magnitude and the corresponding eigen vector of the following matrices by power method. x0 = 2. If | A – λ1I | = 0 is satisfied approximately. then we substitute this value in the characteristic determinant | A – λ1I | and determine the sign of the eigen value. Does the power method give the sign of the largest eigen value? Solution No.U. x5 = 2.. x3 = 2.740205. 3). That is.092740. 4. then it is of positive sign. n ( v k )r and vk+1 is the required eigen vector. 3. 1. 10 (A. 6. x2 = 2. − 1P Q 1O 2 P. Nov/Dec 2003) LM1 2. M 0 MN 1 OP PQ 1O 0P .4 ANSWERS AND HINTS Exercise 1. The iterations are stopped when all the magnitudes of the differences of the ratios are less than the given error tolerance. x0 = 2.000432. 3. the leading eigen value in magnitude is much larger than the remaining eigen values in magnitudes. − 1P Q −1 4 . 3.798493.U. 1 40 1 . r = 1. When can we expect faster convergence in power method? Solution To apply power method. M1 0 2 P .

.87358.1. x2 = 11. Root correct up to 2 decimal places is x13.250869..636364. x0 = 12. x3 = 0.. x1 = 0.249055. | x3 – x2 | = 0. x6 = 1. 1). k = 0. x1 = 0. 2). k = 0. x3 = 0.045980.. x4 = – 1. |x5 – x4 | = 0.895494. x1 = 1. (1.855585. (2. 1. | x3 – x2 | = 0. x1 = 1. 15. Root correct up to 3 decimal places is x3. 1).049909.038461.500473.732049.000003. x2 = 0. (0.60 NUMERICAL METHODS 4. 7.000316.000003. Root correct up to 3 decimal places is x5. 18. Root correct up to 4 decimal places is x3. x1 = 0.854772. .. 2.249052. xk+1 = [(xk5 + 30)/64] = φ(xk). Root correct up to 3 decimal places is x2.620016. 2 xk (ii) N = 142. Root correct up to 3 decimal places is x2. 1).851395. x4 = 0. | x5 – x4| = 0.469105.. 2).04.916667.000196.730159. x0 = 0. 6. x0 = 0.00351. Root correct up to 2 decimal places is x3.5.2. 2 (i) xk+1 = 2xk – N x k . x3 = 1. (ii) N = 26. . x3 = 1. x0 = 1.201639. x2 = 1. 1). x1 = 11.000012.000001. (0..895506.. Root correct up to 2 decimal places is x5. k = 0. k = 0. | x3 – x2 | = 0. x4 = 1. x2 = 0.740649.. | x6 – x5 | = 0. (0.692201. x0 = 1.855228. 1). x3 = 0. Root correct up to 3 decimal places is x3. x4 = 1. x3 = – 0. x0 = 0. | x2 – x1 | = 0.. x2 = 1. 2. (0.. x0 = 0.612700. x0 = 2. | x3 – x2 | = 0.000497. 13. Root correct up to 4 decimal places is x6. 16. 19. x13 = 0.567206.916375. x2 = 1. (0..851463. x2 = – 1.051819. x5 = – 0. x3 = 2. 1).042470. x3 = 0. x1 = 0. x2 = 1. x3 = 0. x1 = 0. | x6 – x5 | = 0. (1.469104.038462.9.740646. x0 = 1. x0 = 0. (0.855781.666667. x3 = – 1.828197.023360. k = 0. 8. (0.049912. 1). Root correct up to 3 decimal places is x6.567703. x1 = 1. x3 = 1. Root correct up to 4 decimal places is x3. x0 = – 1.. x5 = 1. x1 = 1.0384. x1 = 2. x2 = 1.293764. 0). x0 = 1..2016. x2 = 0. 9. x1 = – 0.000003. 1. 1. xk+1 = [(xk – 1)/3]1/3 = φ(xk). | x2 – x1 | = 0.46875. x2 = – 0.2. x1 = 2. x3 = 0.870968.855544. x2 = 2. x3 = 0. x1 = – 1. 10. x2 = 0. x12 = 0. | x3 – x2 | = 0.566415. xk+1 = [(xk3 + 1)/5] = φ(xk). x5 = 1. | x3 – x2 | = 0. x1 = 1. 
| x3 – x2 | = 0. 3).607121. 5. x0 = – 1. x2 = 0. 12. (– 1. x6 = – 0. 14. | x13 – x12 | = 0.00189. Root correct up to 3 decimal places is x3. x2 = 0. x0 = 1.000019.852441..001142. x4 = 0. (1.. x2 = 0. (i) xk +1 = 2 xk + N .572182. The required root is x3.000039.. 2). x0 = 1. x0 = 3.000001. x3 = 1.746149.851902.00001.367879. 17.035841.851385. x4 = – 0. | x3 – x2 | = 0. Root correct up to 3 decimal places is x4. Root correct up to 4 decimal places is x3.567557. 1. 11. x1 = 2.. . x5 = 0.000292. | x4 – x3 | = 0.607102.

24 8 12 1 − 10 − 2 − 6 . φ′(x) = – 2 = − 2 . The system is consistent and has 0 0 0 5 −1 0 − 7 3 11 . x1 = 1. rank (A) = 2.607182. z = – 1/2. x = 1. −6 −1 5 1 24 − 17 3 . O P P Q . M 5 5 M− 5 N 10.000096. rank (A|b) = 2. k = 0.595296. We have α + β = – a. x6 = 0. y = 1. rank (A|b) = 3. z = 5.5. or (ii) φ(x) = – | φ′(α) | = | (αβ)/β2 | < 1. we have ( x + a) 2 ( x − α − β) 2 x+a | φ′(α) | = | – (αβ)/α2 | < 1. rank (A) = 2. x2 = 0. 1. z = 3. | x6 – x5 | = 0. y = 2. z = 1. 16.666667. Root correct up to 3 decimal places is x6.. z = 1. L12 M1 M5 N 4 6 5 −3 . z = 1. x2 = – 1.80032. x = 1. −3 0 0 −3 4 2 4 − 5 − 2 .606678.607086. 2. rank (A) = 2. x = – 4. 15.04909. xk+1 = [(1 + cos xk)/3] = φ(xk). one parameter family of solutions. 13. 0 0 2 −3 2 1 11 / 2 17 / 2 14 . x0 = 0. y = – 5. αβ b b = . 8.. 2 4 OP PQ OP PQ 14. x = 1. rank (A|b) = 2.SOLUTION OF EQUATIONS AND EIGEN VALUE PROBLEMS 61 20. LM2 MN0 0 LM1 MN0 0 LM2 MN0 0 LM1 MN0 0 −3 0 1 11 / 2 17 / 2 14 . α β = b. y = 2. 1). −3 −1 O P P Q O P P Q LM MN 5 1L 12. 6. x5 = 0. Gauss elimination gives the following results. 9. The system is consistent and has one 0 0 0 O P P Q O P P Q O P P Q parameter family of solutions. rank (A) = 2. rank (A|b) = 3.2 or | α | < | β |. we have x x x | α | > | β |. x1 = 0. x4 = 0. For convergence to α. x4 = 1. Exercise 1.609328. 5. x = 1. x3 = 0.70869. 2 − 10 −1 7 1 56 L M M N 11. 8 −2 −2 −2 −1 −2 −1 −7 . The system is inconsistent.5. 2. 3. For convergence to α. x = 1. 7. φ′(x) = . 21. y = – 5. x = 1.. In Problems 13-16. x3 = – 1. 4. The system is inconsistent. y = – 1. z = 1. 1. y = 1/2. (i) φ(x) = – ax + b b αβ . (0.

0]. [0.06.45000].42180. [0. 0.01237].49167.059]. Interchange first and second rows. v = [0. – 1].3 NUMERICAL METHODS In all the Problems. 1]. 1.99965].02766]. [1. – 1. | λ | = 40. 0. [2.54513. 3.3. – 0.33333.98175. [0. 3].938. Exact: [1. 3. – 0.0]. 1.92265]. 3. 1.0.9. 3. – 1. 1. 1.0015. – 2. 7.04].28571]. [2.01088]. [2.875]. [1. 0. [0.5. [1. [0. 4. 8. [1. 0. 1. [0. 3.12381.93655].8. 1. | λ | = 35. 0]. – 0.00040. 3. [1. 9.76. – 0. – 1. 3.0.99821]. 0.04759]. 7.965. [3. 0. 1. [2. – 1. 0].00342. [0.90202. [0.14815.45003].40093.02936]. 3.04762].36969.62 Exercise 1. 1.43218. 1. [0. [1.0.98469.00125. 1.99990. 0. Exchange the first and second rows. [2. – 0.76. 4. .0]. – 1. 1. 3. – 0. 1.99755]. [1.57301.00002. 1.23000. 0.00325]. values for four iterations have been given.99989]. [1. – 0. 0. 0. – 1. [0.45000]. 0].06220.0]. [2. 0.34000.99700. – 0. 3. 4.83333. 2.00771. – 0.92585]. [1. 1. 1. – 1.15693.02.02. 0.34000. 1. Exact: [2. – 0. 0]. v = [1. [2.0.90200. | λ | = 6.93605]. 1. 0. | λ | = 65. Exact: [1. 0. 0. [1. Exact: [1.03704].375. 2. 5. 1.94127.0. [1.00727. 0. [0.96508]. 1.00775.92595].905.0.22856.98175.57294.00037.0. 0.00055. [2.15. – 0.03]. [0. 0. 0. 1].9. 3.0.00045. [1. [2.85714.85. 1. 1. Exact: [1.22999. 1.66. [0. 1. 1. 2.00932. Solutions are the transposes of the given vectors.91317]. | λ | = 6. 1. 1. 0.99773]. – 2. 0. 6. 2. 0]. 2.85. 3. v = [1.05625.34007. – 1.01210]. [2.00003].971.99525]. [0. 2].25]. The results obtained after 8 iterations are given. [1.98.72091].99978. [1. [2.05316. 1.35080.00011]. 1.00175].99048. – 1.99825. 2. v = [1. – 2.02496. 0].54074. – 0.01031.01587]. 2.99990. Exercise 1.0. – 0.02936.89971. Solutions are the transposes of the given vectors.14815. 1.0275. 1.99982. 2.92.06690.0.92595]. 0. 0]. 10. 1. 0]. [0. | λ | = 11. 8. 0. – 1. [0. Interchange first and third rows.32829. – 2. v = [1.01210]. 0. 0.44982].68525.42549. – 0. 0].42569.0. [1. | λ | = 15. | λ | = 20.965].57204. we have taken v(0) = [1.98441. 1.03628.88985].98468.4 In all problems. 
v = [1. – 1.11. 1. – 1. 1. [3.9]. 6. [0. [0. [0. [1. v = [0.98175. [3.99909]. 1.05714. – 1]. 3.29737.33333]. 1. 0. v = [0.00068. 0. 5. 1]. 1. [1.26913. 1.

Interpolation and Approximation

2.1 INTRODUCTION

In this chapter, we discuss the problem of approximating a given function by polynomials. There are two main uses of these approximating polynomials. The first use is to reconstruct the function f(x) when it is not given explicitly and only values of f(x) and/or its certain order derivatives are given at a set of distinct points called nodes or tabular points. The second use is to perform the required operations which were intended for f(x), like determination of roots, differentiation and integration etc.; these can be carried out using the approximating polynomial P(x). The approximating polynomial P(x) can be used to predict the value of f(x) at a non-tabular point. The deviation of P(x) from f(x), that is f(x) – P(x), is called the error of approximation.

Let f(x) be a continuous function defined on some interval [a, b], and be prescribed at n + 1 distinct tabular points x0, x1, ..., xn such that a = x0 < x1 < x2 < ... < xn = b. The distinct tabular points x0, x1, ..., xn may be non-equispaced or equispaced, that is xk+1 – xk = h, k = 0, 1, 2, ..., n – 1. The problem of polynomial approximation is to find a polynomial Pn(x), of degree ≤ n, which fits the given data exactly, that is,
Pn(xi) = f(xi), i = 0, 1, 2, ..., n. (2.1)
The polynomial Pn(x) is called the interpolating polynomial. The conditions given in (2.1) are called the interpolating conditions.

Remark 1 Through two distinct points, we can construct a unique polynomial of degree 1 (straight line). Through three distinct points, we can construct a unique polynomial of degree 2 (parabola) or a unique polynomial of degree ≤ 2. In general, through n + 1 distinct points, we can construct a unique polynomial of degree ≤ n. The interpolation polynomial fitting a given data is unique. We may express it in various forms, but they are otherwise the same polynomial.
For example, f(x) = x² – 2x – 1 can be written as
x² – 2x – 1 = – 2 + (x – 1) + (x – 1)(x – 2).
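The point of Remark 1, that the same interpolating polynomial can be written in different forms, is easy to check numerically. The sketch below compares the two forms of f(x) = x² – 2x – 1 from the example above at a few points.

```python
# Two algebraic forms of the same quadratic (from the example above).
def f_power_form(x):
    return x**2 - 2*x - 1

def f_factored_form(x):
    return -2 + (x - 1) + (x - 1) * (x - 2)

# The two forms agree identically, e.g. on a small grid of test points.
pairs = [(f_power_form(x), f_factored_form(x)) for x in (-1.0, 0.0, 0.5, 2.0, 3.7)]
```

Expanding the second form gives – 2 + x – 1 + x² – 3x + 2 = x² – 2x – 1, which is why the values coincide everywhere.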

2.2 INTERPOLATION WITH UNEVENLY SPACED POINTS

2.2.1 Lagrange Interpolation

Let the data
x:    x0, x1, x2, ..., xn
f(x): f(x0), f(x1), f(x2), ..., f(xn)
be given at distinct unevenly spaced points or non-uniform points x0, x1, ..., xn. This data may also be given at evenly spaced points. For this data, we can fit a unique polynomial of degree ≤ n. Since the interpolating polynomial must use all the ordinates f(x0), f(x1), ..., f(xn), it can be written as a linear combination of these ordinates. That is, we can write the polynomial as
Pn(x) = l0(x) f(x0) + l1(x) f(x1) + ... + ln(x) f(xn)
      = l0(x) f0 + l1(x) f1 + ... + ln(x) fn (2.2)
where f(xi) = fi and li(x), i = 0, 1, 2, ..., n are polynomials of degree n. This polynomial fits the data given in (2.1) exactly.

At x = x0, we get
f(x0) ≡ Pn(x0) = l0(x0) f(x0) + l1(x0) f(x1) + ... + ln(x0) f(xn).
This equation is satisfied only when l0(x0) = 1 and li(x0) = 0, i ≠ 0. At a general point x = xi, we get
f(xi) ≡ Pn(xi) = l0(xi) f(x0) + ... + li(xi) f(xi) + ... + ln(xi) f(xn).
This equation is satisfied only when li(xi) = 1 and lj(xi) = 0, i ≠ j. Therefore, li(x), which are polynomials of degree n, satisfy the conditions
li(xj) = 0, i ≠ j, and li(xj) = 1, i = j. (2.3)
Since li(x) = 0 at x = x0, x1, ..., xi–1, xi+1, ..., xn, we know that (x – x0), (x – x1), ..., (x – xi–1), (x – xi+1), ..., (x – xn) are factors of li(x). The product of these factors is a polynomial of degree n. Therefore, we can write
li(x) = C(x – x0)(x – x1)...(x – xi–1)(x – xi+1)...(x – xn)
where C is a constant. Now, since li(xi) = 1, we get
li(xi) = 1 = C(xi – x0)(xi – x1)...(xi – xi–1)(xi – xi+1)...(xi – xn).

Hence,
C = 1/[(xi – x0)(xi – x1)...(xi – xi–1)(xi – xi+1)...(xi – xn)].
Therefore,
li(x) = [(x – x0)(x – x1)...(x – xi–1)(x – xi+1)...(x – xn)] / [(xi – x0)(xi – x1)...(xi – xi–1)(xi – xi+1)...(xi – xn)]. (2.4)
Note that the denominator on the right hand side of li(x) is obtained by setting x = xi in the numerator.

The polynomial given in (2.2), where li(x) are defined by (2.4), is called the Lagrange interpolating polynomial and li(x) are called the Lagrange fundamental polynomials.

We can write the Lagrange fundamental polynomials li(x) in a simple notation. Denote
w(x) = (x – x0)(x – x1)...(x – xn)
which is the product of all factors. Differentiating w(x) with respect to x and substituting x = xi, we get
w′(xi) = (xi – x0)(xi – x1)...(xi – xi–1)(xi – xi+1)...(xi – xn)
since all other terms vanish. Therefore, we can also write li(x) as
li(x) = w(x)/[(x – xi) w′(xi)]. (2.5)
Let us derive the linear and quadratic interpolating polynomials.

Linear interpolation
For n = 1, we have the data
x:    x0, x1
f(x): f(x0), f(x1)
The Lagrange fundamental polynomials are given by
l0(x) = (x – x1)/(x0 – x1), l1(x) = (x – x0)/(x1 – x0). (2.6)
The Lagrange linear interpolation polynomial is given by
P1(x) = l0(x) f(x0) + l1(x) f(x1).

Quadratic interpolation
For n = 2, we have the data
x:    x0, x1, x2
f(x): f(x0), f(x1), f(x2) (2.7)
The Lagrange fundamental polynomials are given by
l0(x) = [(x – x1)(x – x2)] / [(x0 – x1)(x0 – x2)],
l1(x) = [(x – x0)(x – x2)] / [(x1 – x0)(x1 – x2)],
l2(x) = [(x – x0)(x – x1)] / [(x2 – x0)(x2 – x1)].
..2) = 0. f(0. the error of interpolation is also unique. x) < ξ < max(x0. the error is same whichever form of the polynomial is used.14925. x1.19867) = 0.. that is. x) = f(x) – Pn(x) = (2. x) = f(x) – Pn(x).15) by Lagrange interpolation.09983) + (0. However. we can find a bound of the error. The bound of the error is obtained as | E(f.09983 and sin (0..1) = 0.11).2) ( 0.( x − x n ) | (n + 1) ! a ≤ x ≤b LM N OP LMmax | f QN a ≤ x ≤b ( n +1 ) (x) | (2.. it is difficult to find the value of the error.2) ( 0. The Lagrange linear polynomial is given by P1(x) = = ( x − x1 ) (x − x0 ) f(x0) + f(x1) ( x 0 − x1 ) ( x1 − x 0 ) ( x − 0.09983) + (0..1) Hence. Since. xn. Error of interpolation NUMERICAL METHODS (2.15 − 0.66 The Lagrange quadratic interpolation polynomial is given by P1(x) = l0(x) f(x0) + l1(x) f(x1) + l2(x) f(x2). we write the expression for the error of interpolation as E(f.1 Using the data sin(0.. x). xn.19867) ( 0.9) ( x − x 0 ) ( x − x1 ).11) OP Q Note that in (2. Since the interpolating polynomial is unique.1 − 0. ξ is an unknown. . the results contain errors. x1.(x – xn) | | f (n+1) (ξ) | ( n + 1) ! ≤ 1 max | ( x − x 0 ) ( x − x1 ) . We define the error of interpolation or truncation error as E(f. we compute the maximum absolute value of w(x) = (x – x0)(x – x1).5) (0..15 − 0.10) (n + 1) ! ( n + 1) ! where min(x0.2) ( 0.19867.. that is max | w(x) | and not the maximum of w(x).8) We assume that f(x) has continuous derivatives of order up to n + 1 for all x ∈ (a.15) = = (0.( x − x n ) w( x ) f (n+1) (ξ) = f (n+1) (ξ) (2.. Since.15) = P1(0. f(x) is approximated by Pn(x)..2 − 0.15.(x – xn). b). Example 2.1) ( 0.. Obtain a bound on the error at x = 0.1) (0.19867). x) | = 1 | (x – x0) (x – x1).09983) + (0. ( 0.2 − 0. Without giving the derivation.. find an approximate value of sin (0.5) (0... Solution We have two data values.2) ( x − 0.1) (0..1 − 0.

The truncation error is given by
T.E. = [(x – x0)(x – x1)/2] f ″(ξ) = [(x – 0.1)(x – 0.2)/2] (– sin ξ), 0.1 < ξ < 0.2,
since f(x) = sin x. At x = 0.15, we obtain
T.E. = [(0.15 – 0.1)(0.15 – 0.2)/2] (– sin ξ) = 0.00125 sin ξ,
and | T.E. | ≤ 0.00125 max over 0.1 ≤ x ≤ 0.2 of | sin x | = 0.00125 sin(0.2) = 0.00125(0.19867) = 0.00025.
Example 2.2 Use Lagrange's formula, to find the quadratic polynomial that takes the values
x: 0 1 3
y: 0 1 0   (A.U Nov/Dec. 2005)
Solution Since f0 = 0 and f2 = 0, we need to compute l1(x) only. We have
l1(x) = [(x – x0)(x – x2)]/[(x1 – x0)(x1 – x2)] = [x(x – 3)]/[(1)(– 2)] = (3x – x2)/2.
The Lagrange quadratic polynomial is given by
f(x) = l1(x) f(x1) = (1/2)(3x – x2)(1) = (3x – x2)/2.
Example 2.3 Given that f(0) = 1, f(1) = 3, f(3) = 55, find the unique polynomial of degree 2 or less, which fits the given data.
Solution We have x0 = 0, f0 = 1, x1 = 1, f1 = 3, x2 = 3, f2 = 55. The Lagrange fundamental polynomials are given by
l0(x) = [(x – x1)(x – x2)]/[(x0 – x1)(x0 – x2)] = [(x – 1)(x – 3)]/[(– 1)(– 3)] = (x2 – 4x + 3)/3,
l1(x) = [(x – x0)(x – x2)]/[(x1 – x0)(x1 – x2)] = [x(x – 3)]/[(1)(– 2)] = (3x – x2)/2,
l2(x) = [(x – x0)(x – x1)]/[(x2 – x0)(x2 – x1)] = [x(x – 1)]/[(3)(2)] = (x2 – x)/6.
The Lagrange quadratic polynomial is given by
P2(x) = l0(x) f(x0) + l1(x) f(x1) + l2(x) f(x2)
= (1/3)(x2 – 4x + 3)(1) + (1/2)(3x – x2)(3) + (1/6)(x2 – x)(55) = 8x2 – 6x + 1.
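Examples 2.1 and 2.3 can be reproduced with a direct implementation of the Lagrange form (a minimal sketch; the function name is ours, not from the text):

```python
def lagrange_interpolate(xs, fs, x):
    """Evaluate P(x) = sum_i l_i(x) f(x_i), the Lagrange form (2.2)."""
    total = 0.0
    for i, (xi, fi) in enumerate(zip(xs, fs)):
        li = 1.0                      # fundamental polynomial l_i(x)
        for j, xj in enumerate(xs):
            if j != i:
                li *= (x - xj) / (xi - xj)
        total += li * fi
    return total

# Example 2.1: linear interpolation of sin x
print(lagrange_interpolate([0.1, 0.2], [0.09983, 0.19867], 0.15))  # about 0.14925
# Example 2.3: the quadratic through (0, 1), (1, 3), (3, 55) is 8x^2 - 6x + 1
print(lagrange_interpolate([0, 1, 3], [1, 3, 55], 2.0))            # 8(4) - 12 + 1 = 21
```

Each call rebuilds every fundamental polynomial from scratch, which is exactly the inefficiency discussed in Remark 2 below.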

Example 2.4 The following values of the function f(x) = sin x + cos x are given.
x:    10°     20°     30°
f(x): 1.1585  1.2817  1.3660
Construct the quadratic Lagrange interpolating polynomial that fits the data. Hence, find f(π/12). Compare with the exact value.
Solution Since the value of f at π/12 radians is required, we convert the data into radian measure. We have
x0 = 10° = π/18 = 0.1745, x1 = 20° = π/9 = 0.3491, x2 = 30° = π/6 = 0.5236.
The Lagrange fundamental polynomials are given by
l0(x) = [(x – x1)(x – x2)]/[(x0 – x1)(x0 – x2)] = [(x – 0.3491)(x – 0.5236)]/[(– 0.1746)(– 0.3491)]
= 16.4061(x2 – 0.8727x + 0.1828),
l1(x) = [(x – x0)(x – x2)]/[(x1 – x0)(x1 – x2)] = [(x – 0.1745)(x – 0.5236)]/[(0.1746)(– 0.1745)]
= – 32.8216(x2 – 0.6981x + 0.0914),
l2(x) = [(x – x0)(x – x1)]/[(x2 – x0)(x2 – x1)] = [(x – 0.1745)(x – 0.3491)]/[(0.3491)(0.1745)]
= 16.4155(x2 – 0.5236x + 0.0609).
The Lagrange quadratic polynomial is given by
P2(x) = l0(x) f(x0) + l1(x) f(x1) + l2(x) f(x2)
= 16.4061(x2 – 0.8727x + 0.1828)(1.1585) – 32.8216(x2 – 0.6981x + 0.0914)(1.2817)
+ 16.4155(x2 – 0.5236x + 0.0609)(1.3660)
= – 0.6374x2 + 1.0394x + 0.9950.
Hence, f(π/12) = f(0.2618) = P2(0.2618) = 1.2234.
The exact value is f(0.2618) = sin(0.2618) + cos(0.2618) = 1.2247.
Example 2.5 Construct the Lagrange interpolation polynomial for the data
x:    –1   1   4    7
f(x): –2   0   63   342
Hence, interpolate at x = 5.
Solution The Lagrange fundamental polynomials are given by
l0(x) = [(x – x1)(x – x2)(x – x3)]/[(x0 – x1)(x0 – x2)(x0 – x3)] = [(x – 1)(x – 4)(x – 7)]/[(– 1 – 1)(– 1 – 4)(– 1 – 7)]

= – (1/80)(x3 – 12x2 + 39x – 28),
l1(x) = [(x – x0)(x – x2)(x – x3)]/[(x1 – x0)(x1 – x2)(x1 – x3)] = [(x + 1)(x – 4)(x – 7)]/[(1 + 1)(1 – 4)(1 – 7)]
= (1/36)(x3 – 10x2 + 17x + 28),
l2(x) = [(x – x0)(x – x1)(x – x3)]/[(x2 – x0)(x2 – x1)(x2 – x3)] = [(x + 1)(x – 1)(x – 7)]/[(4 + 1)(4 – 1)(4 – 7)]
= – (1/45)(x3 – 7x2 – x + 7),
l3(x) = [(x – x0)(x – x1)(x – x2)]/[(x3 – x0)(x3 – x1)(x3 – x2)] = [(x + 1)(x – 1)(x – 4)]/[(7 + 1)(7 – 1)(7 – 4)]
= (1/144)(x3 – 4x2 – x + 4).
Note that we need not compute l1(x) since f(x1) = 0.
The Lagrange interpolation polynomial is given by
P3(x) = l0(x) f(x0) + l1(x) f(x1) + l2(x) f(x2) + l3(x) f(x3)
= – (1/80)(x3 – 12x2 + 39x – 28)(– 2) – (1/45)(x3 – 7x2 – x + 7)(63) + (1/144)(x3 – 4x2 – x + 4)(342)
= (1/40 – 7/5 + 171/72)x3 + (– 3/10 + 49/5 – 171/18)x2 + (39/40 + 7/5 – 171/72)x + (– 7/10 – 49/5 + 171/18)
= x3 – 1.
Hence, f(5) = P3(5) = 53 – 1 = 124.
Remark 2 For a given data, it is possible to construct the Lagrange interpolation polynomial. However, it is very difficult and time consuming to collect and simplify the coefficients of xi, i = 0, 1, 2, ..., n. Now, assume that we have determined the Lagrange interpolation polynomial of degree n based on the data values (xi, f(xi)), i = 0, 1, 2, ..., n, at the (n + 1) distinct points. Suppose that to this given data, a new value (xn+1, f(xn+1)) at the distinct point xn+1 is added at the end of the table. If we require the Lagrange interpolating polynomial for this new data, then we need to compute all the Lagrange fundamental polynomials again. The nth degree Lagrange polynomial obtained earlier is of no use. This is the disadvantage of the Lagrange interpolation. However, Lagrange interpolation is a fundamental result and is used in proving many theoretical results of interpolation.

Remark 3 Suppose that the data (xi, f(xi)), i = 0, 1, 2, ..., n, is given. Assume that a new value (xn+1, f(xn+1)) at the distinct point xn+1 is added at the end of the table. The data, (xi, f(xi)), i = 0, 1, 2, ..., n + 1, represents a polynomial of degree ≤ (n + 1). If this polynomial of degree (n + 1) can be obtained by adding an extra term to the previously obtained nth degree interpolating polynomial, then the interpolating polynomial is said to have the permanence property. We observe that the Lagrange interpolating polynomial does not have the permanence property.

Divided differences
Let the data, (xi, f(xi)), i = 0, 1, 2, ..., n, be given. We define the divided differences as follows.
First divided difference Consider any two consecutive data values (xi, f(xi)), (xi+1, f(xi+1)). Then, we define the first divided difference as
f [xi, xi+1] = [f(xi+1) – f(xi)]/(xi+1 – xi), i = 0, 1, 2, ..., n – 1.   (2.12)
Therefore,
f [x0, x1] = [f(x1) – f(x0)]/(x1 – x0), f [x1, x2] = [f(x2) – f(x1)]/(x2 – x1), etc.
Note that
f [xi, xi+1] = f [xi+1, xi] = f(xi)/(xi – xi+1) + f(xi+1)/(xi+1 – xi).
We say that the divided differences are symmetrical about their arguments.
Second divided difference Consider any three consecutive data values (xi, f(xi)), (xi+1, f(xi+1)), (xi+2, f(xi+2)). Then, we define the second divided difference as
f [xi, xi+1, xi+2] = [f [xi+1, xi+2] – f [xi, xi+1]]/(xi+2 – xi), i = 0, 1, 2, ..., n – 2.   (2.13)
Therefore,
f [x0, x1, x2] = [f [x1, x2] – f [x0, x1]]/(x2 – x0), etc.
We can express the divided differences in terms of the ordinates. We have
f [x0, x1, x2] = [1/(x2 – x0)] [(f2 – f1)/(x2 – x1) – (f1 – f0)/(x1 – x0)]
= f0/[(x0 – x1)(x0 – x2)] + f1/[(x1 – x0)(x1 – x2)] + f2/[(x2 – x0)(x2 – x1)].
Notice that the denominators are same as the denominators of the Lagrange fundamental polynomials.

In general, we have the second divided difference as
f [xi, xi+1, xi+2] = fi/[(xi – xi+1)(xi – xi+2)] + fi+1/[(xi+1 – xi)(xi+1 – xi+2)] + fi+2/[(xi+2 – xi)(xi+2 – xi+1)].
The nth divided difference using all the data values in the table is defined as
f [x0, x1, ..., xn] = [f [x1, x2, ..., xn] – f [x0, x1, ..., xn–1]]/(xn – x0).   (2.14)
The nth divided difference can also be expressed in terms of the ordinates fi. The denominators of the terms are same as the denominators of the Lagrange fundamental polynomials. The divided differences can be written in a tabular form as in Table 2.1.

Table 2.1. Divided differences (d.d).

x     f(x)   First d.d     Second d.d        Third d.d
x0    f0
             f [x0, x1]
x1    f1                   f [x0, x1, x2]
             f [x1, x2]                      f [x0, x1, x2, x3]
x2    f2                   f [x1, x2, x3]
             f [x2, x3]
x3    f3

Example 2.6 Find the second divided difference of f(x) = 1/x, using the points a, b, c.
Solution We have
f [a, b] = [f(b) – f(a)]/(b – a) = [1/b – 1/a]/(b – a) = (a – b)/[(b – a)ab] = – 1/(ab),
f [b, c] = [f(c) – f(b)]/(c – b) = [1/c – 1/b]/(c – b) = (b – c)/[(c – b)bc] = – 1/(bc),
f [a, b, c] = [f [b, c] – f [a, b]]/(c – a) = [1/(c – a)] [– 1/(bc) + 1/(ab)] = (c – a)/[(c – a)abc] = 1/(abc).
Example 2.7 Obtain the divided difference table for the data
x:    –1   0   2   3
f(x): –8   3   1   12   (A.U Nov/Dec 2006)

**72 Solution We have the following divided difference table for the data.
**

Divided difference table. Example 2.7. x –1 f(x) –8 First d.d Second d.d

NUMERICAL METHODS

Third d.d

3+8 = 11 0+1

0 3

− 1 − 11 =–4 2+1 1− 3 =−1 2−0 4+4 =2 3+1 11 + 1 =4 3−0 12 − 1 = 11 3−2

2

1

3

12
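The definitions (2.12)-(2.14) translate directly into a short table-building routine. A minimal sketch (our own helper), using exact rational arithmetic so the entries match the worked examples:

```python
from fractions import Fraction

def divided_differences(xs, fs):
    """Return the table as a list of columns; column k holds f[x_i, ..., x_{i+k}]."""
    table = [list(fs)]
    for k in range(1, len(xs)):
        prev = table[-1]
        table.append([(prev[i + 1] - prev[i]) / (xs[i + k] - xs[i])
                      for i in range(len(prev) - 1)])
    return table

# Example 2.6 with concrete points a = 2, b = 3, c = 4 and f(x) = 1/x:
a, b, c = Fraction(2), Fraction(3), Fraction(4)
print(divided_differences([a, b, c], [1 / a, 1 / b, 1 / c])[2][0])  # 1/24 = 1/(abc)

# Example 2.7: the columns are 11, -1, 11 / -4, 4 / 2, as in the table above
print(divided_differences([-1, 0, 2, 3],
                          [Fraction(-8), Fraction(3), Fraction(1), Fraction(12)]))
```

The symmetry of the divided differences means the result does not depend on the order in which the points are listed.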

2.2.2 Newton's Divided Difference Interpolation
We mentioned earlier that the interpolating polynomial representing a given data values is unique, but the polynomial can be represented in various forms. We write the interpolating polynomial as
f(x) = Pn(x) = c0 + (x – x0) c1 + (x – x0)(x – x1) c2 + ... + (x – x0)(x – x1)...(x – xn–1) cn.   (2.15)
The polynomial fits the data Pn(xi) = f(xi) = fi.
Setting Pn(x0) = f0, we obtain
Pn(x0) = f0 = c0
since all the remaining terms vanish.
Setting Pn(x1) = f1, we obtain
f1 = c0 + (x1 – x0) c1, or c1 = (f1 – c0)/(x1 – x0) = (f1 – f0)/(x1 – x0) = f [x0, x1].
Setting Pn(x2) = f2, we obtain
f2 = c0 + (x2 – x0) c1 + (x2 – x0)(x2 – x1) c2,
or c2 = [f2 – f0 – (x2 – x0) f [x0, x1]]/[(x2 – x0)(x2 – x1)]
= [1/((x2 – x0)(x2 – x1))] [f2 – f0 – (x2 – x0)(f1 – f0)/(x1 – x0)]
= f0/[(x0 – x1)(x0 – x2)] + f1/[(x1 – x0)(x1 – x2)] + f2/[(x2 – x0)(x2 – x1)]

= f [x0, x1, x2].
By induction, we can prove that cn = f [x0, x1, x2, ..., xn]. Hence, we can write the interpolating polynomial as
f(x) = Pn(x) = f(x0) + (x – x0) f [x0, x1] + (x – x0)(x – x1) f [x0, x1, x2] + ...
+ (x – x0)(x – x1)...(x – xn–1) f [x0, x1, ..., xn]   (2.16)
This polynomial is called the Newton's divided difference interpolating polynomial.
Remark 4 From the divided difference table, we can determine the degree of the interpolating polynomial. Suppose that all the kth divided differences in the kth column are equal (same). Then, all the (k + 1)th divided differences in the (k + 1)th column are zeros. Therefore, from (2.16), we conclude that the data represents a kth degree polynomial. Otherwise, the data represents an nth degree polynomial.
Remark 5 Newton's divided difference interpolating polynomial possesses the permanence property. Suppose that we add a new data value (xn+1, f(xn+1)) at the distinct point xn+1, at the end of the given table of values. This new data of values can be represented by a (n + 1)th degree polynomial. Now, the (n + 1)th column of the divided difference table contains the (n + 1)th divided difference. Therefore, we require to add the term
(x – x0)(x – x1)...(x – xn–1)(x – xn) f [x0, x1, ..., xn, xn+1]
to the previously obtained nth degree interpolating polynomial given in (2.16).
Example 2.8 Find f(x) as a polynomial in x for the following data by Newton's divided difference formula
x:    –4     –1   0   2   5
f(x): 1245   33   5   9   1335   (A.U Nov/Dec. 2004)
Solution We form the divided difference table for the data. The Newton's divided difference formula gives
f(x) = f(x0) + (x – x0) f [x0, x1] + (x – x0)(x – x1) f [x0, x1, x2] + (x – x0)(x – x1)(x – x2) f [x0, x1, x2, x3]
+ (x – x0)(x – x1)(x – x2)(x – x3) f [x0, x1, x2, x3, x4]


= 1245 + (x + 4)(– 404) + (x + 4)(x + 1)(94) + (x + 4)(x + 1) x (– 14) + (x + 4)(x + 1) x (x – 2)(3)
= 1245 – 404x – 1616 + (x2 + 5x + 4)(94) + (x3 + 5x2 + 4x)(– 14) + (x4 + 3x3 – 6x2 – 8x)(3)
= 3x4 – 5x3 + 6x2 – 14x + 5.

Divided difference table. Example 2.8.

x     f(x)    First d.d   Second d.d   Third d.d   Fourth d.d
–4    1245
              – 404
–1    33                  94
              – 28                     – 14
0     5                   10                       3
              2                        13
2     9                   88
              442
5     1335

Example 2.9 Find f(x) as a polynomial in x for the following data by Newton's divided difference formula

x:    –2   –1   0    1    3    4
f(x): 9    16   17   18   44   81

Hence, interpolate at x = 0.5 and x = 3.1. Solution We form the divided difference table for the given data. Since, the fourth order differences are zeros, the data represents a third degree polynomial. Newton’s divided difference formula gives the polynomial as f(x) = f(x0) + (x – x0) f [x0, x1] + (x – x0)(x – x1) f [x0, x1, x2] + (x – x0)(x – x1) (x – x2) f [x0, x1, x2, x3] = 9 + (x + 2)(7) + (x + 2)(x + 1)(– 3) + (x + 2)(x + 1) x(1) = 9 + 7x + 14 – 3x2 – 9x – 6 + x3 + 3x2 + 2x = x3 + 17. Hence, f(0.5) = (0.5)3 + 17 = 17.125. f(3.1) = (3.1)3 + 17 = 47.791.


Divided difference table. Example 2.9.

x     f(x)    First d.d   Second d.d   Third d.d   Fourth d.d
–2    9
              7
–1    16                  – 3
              1                        1
0     17                  0                        0
              1                        1
1     18                  4                        0
              13                       1
3     44                  8
              37
4     81

Example 2.10 Find f(x) as a polynomial in x for the following data by Newton's divided difference formula

x:    1   3    4    5     7     10
f(x): 3   31   69   131   351   1011

Hence, interpolate at x = 3.5 and x = 8.0. Also find f ′(3) and f ″(1.5). Solution We form the divided difference table for the data.

Divided difference table. Example 2.10.

x     f(x)    First d.d   Second d.d   Third d.d   Fourth d.d
1     3
              14
3     31                  8
              38                       1
4     69                  12                       0
              62                       1
5     131                 16                       0
              110                      1
7     351                 22
              220
10    1011

Since, the fourth order differences are zeros, the data represents a third degree polynomial. Newton’s divided difference formula gives the polynomial as


f(x) = f(x0) + (x – x0) f [x0, x1] + (x – x0)(x – x1) f [x0, x1, x2] + (x – x0)(x – x1)(x – x2) f [x0, x1, x2, x3]
= 3 + (x – 1)(14) + (x – 1)(x – 3)(8) + (x – 1)(x – 3)(x – 4)(1)
= 3 + 14x – 14 + 8x2 – 32x + 24 + x3 – 8x2 + 19x – 12 = x3 + x + 1.
Hence,
f(3.5) ≈ P3(3.5) = (3.5)3 + 3.5 + 1 = 47.375,
f(8.0) ≈ P3(8.0) = (8.0)3 + 8.0 + 1 = 521.0.
Now, P′3(x) = 3x2 + 1, and P″3(x) = 6x. Therefore,
f ′(3) ≈ P′3(3) = 3(9) + 1 = 28, f ″(1.5) ≈ P″3(1.5) = 6(1.5) = 9.

Inverse interpolation Suppose that a data (xi, f(xi)), i = 0, 1, 2,…, n, is given. In interpolation, we predict the value of the ordinate f(x′) at a non-tabular point x = x′. In many applications, we require the value of the abscissa x′ for a given value of the ordinate f(x′). For this problem, we consider the given data as (f(xi), xi), i = 0, 1, 2,…, n, and construct the interpolation polynomial. That is, we consider f(x) as the independent variable and x as the dependent variable. This procedure is called inverse interpolation.
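A minimal sketch of the procedure (the data, for f(x) = x3, is our own illustration, not from the text): the Lagrange form is applied with the roles of x and f(x) exchanged.

```python
def inverse_interpolate(xs, fs, fval):
    """Estimate the abscissa x at which f(x) = fval.

    Lagrange interpolation with x treated as a function of the ordinates.
    """
    total = 0.0
    for i, (xi, fi) in enumerate(zip(xs, fs)):
        li = 1.0
        for j, fj in enumerate(fs):
            if j != i:
                li *= (fval - fj) / (fi - fj)
        total += li * xi
    return total

# f(x) = x^3 on x = 1, 2, 3, 4
print(inverse_interpolate([1, 2, 3, 4], [1, 8, 27, 64], 27.0))  # 3.0 (a tabulated ordinate)
print(inverse_interpolate([1, 2, 3, 4], [1, 8, 27, 64], 20.0))  # rough estimate of 20 ** (1/3)
```

The quality of the estimate depends on how well x, viewed as a function of f, is represented by a polynomial over the tabulated range.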

REVIEW QUESTIONS

1. Give two uses of interpolating polynomials.
Solution The first use is to reconstruct the function f(x) when it is not given explicitly and only values of f(x) and/or its certain order derivatives are given at a set of distinct points called nodes or tabular points. The second use is to perform the required operations which were intended for f(x), like determination of roots, differentiation and integration, using the approximating polynomial P(x). The approximating polynomial P(x) can also be used to predict the value of f(x) at a non-tabular point.
2. Write the property satisfied by Lagrange fundamental polynomials li(x).
Solution The Lagrange fundamental polynomials li(x) satisfy the property
li(xj) = 0, i ≠ j, and li(xj) = 1, i = j.
3. Write the expression for the bound on the error in Lagrange interpolation.
Solution The bound for the error in Lagrange interpolation is given by
| E(f, x) | = (1/(n + 1) !) | (x – x0)(x – x1)...(x – xn) | | f (n+1)(ξ) |
≤ (1/(n + 1) !) [max over a ≤ x ≤ b of | (x – x0)(x – x1)...(x – xn) |] [max over a ≤ x ≤ b of | f (n+1)(x) |]
4. What is the disadvantage of Lagrange interpolation?

Solution Assume that we have determined the Lagrange interpolation polynomial of degree n based on the data values (xi, f(xi)), i = 0, 1, 2,…, n given at the (n + 1) distinct points.


Suppose that to this given data, a new value (xn+1, f(xn+1)) at the distinct point xn+1 is added at the end of the table. If we require the Lagrange interpolating polynomial of degree (n + 1) for this new data, then we need to compute all the Lagrange fundamental polynomials again. The nth degree Lagrange polynomial obtained earlier is of no use. This is the disadvantage of the Lagrange interpolation. 5. Define the permanence property of interpolating polynomials. Solution Suppose that a data (xi, f(xi)), i = 0, 1, 2,…, n, is given. Assume that a new value (xn+1, f(xn+1)) at the distinct point xn+1 is added at the end of the table. The data, (xi, f(xi)), i = 0, 1, 2,…, n + 1, represents a polynomial of degree (n + 1). If this polynomial of degree (n + 1) can be obtained by adding an extra term to the previously obtained nth degree interpolating polynomial, then the interpolating polynomial is said to have the permanence property. 6. Does the Lagrange interpolating polynomial have the permanence property? Solution Lagrange interpolating polynomial does not have the permanence property. Suppose that to the given data (xi, f(xi)), i = 0, 1, 2,…, n, a new value (xn+1, f(xn+1)) at the distinct point xn+1 is added at the end of the table. If we require the Lagrange interpolating polynomial for this new data, then we need to compute all the Lagrange fundamental polynomials again. The nth degree Lagrange polynomial obtained earlier is of no use. 7. Does the Newton’s divided difference interpolating polynomial have the permanence property? Solution Newton’s divided difference interpolating polynomial has the permanence property. Suppose that to the given data (xi, f(xi)), i = 0, 1, 2,…, n, a new data value (xn+1, f(xn+1)) at the distinct point xn+1 is added at the end of the table. Then, the (n + 1) th column of the divided difference table has the (n + 1)th divided difference. Hence, the data represents a polynomial of degree (n + 1). 
We need to add only one extra term (x – x0)(x – x1)...(x – xn) f [x0, x1, ..., xn+1] to the previously obtained nth degree divided difference polynomial.
8. Define inverse interpolation.
Solution Suppose that a data (xi, f(xi)), i = 0, 1, 2,…, n, is given. In interpolation, we predict the value of the ordinate f(x′) at a non-tabular point x = x′. In many applications, we require the value of the abscissa x′ for a given value of the ordinate f(x′). For this problem, we consider the given data as (f(xi), xi), i = 0, 1, 2,…, n, and construct the interpolation polynomial. That is, we consider f(x) as the independent variable and x as the dependent variable. This procedure is called inverse interpolation.

EXERCISE 2.1

1. Use the Lagrange's formula to find the quadratic polynomial that takes these values
   x: 0 1 3
   y: 0 1 0
   Then, find f(2).   (A.U Nov/Dec. 2005)

2. Using Lagrange's method, find the unique polynomial P(x) of degree 2 or less such that P(1) = 1, P(3) = 27, P(4) = 64.
3. A third degree polynomial passes through the points (0, – 1), (1, 1), (2, 1), and (3, – 2). Determine this polynomial using Lagrange's interpolation. Hence, find the value at 1.5.
4. From the given values, evaluate f(9) using Lagrange's formula.
   x: 5 7 11 13 17
   f(x): 150 392 1452 2366 5202   (A.U May/Jun 2006, Nov/Dec. 2006)
5. Using Lagrange interpolation, find y(10) given that y(5) = 12, y(6) = 13, y(9) = 14, y(11) = 16.   (A.U Apr/May 2005)
6. Using Lagrange's method, fit a polynomial to the data
   x: 0 1 2 4
   y: – 12 0 6 12
   Also find y at x = 2.5.   (A.U Nov/Dec. 2006)
7. Using Lagrange interpolation, calculate the profit in the year 2000 from the following data.
   Year: 1997 1999 2001 2002
   Profit in lakhs of Rs.: 43 65 159 248   (A.U Nov/Dec. 2004)
8. Find the polynomial f(x) by using Lagrange's formula and hence find f(3) for
   x: 0 1 2 5
   f(x): 2 3 12 147   (A.U Nov/Dec. 2003)
9. Given the values
   x: 14 17 31 35
   f(x): 68.7 64.0 44.0 39.1
   find f(27) by using Lagrange's interpolation formula.
10. From the given values, evaluate f(3) using Lagrange's formula.
   x: – 1 2 4 5
   f(x): – 5 13 255 625

11. Find the missing term in the table using Lagrange's interpolation.
   x: 0 1 2 3 4
   y: 1 3 9 – 81
12. Obtain the root of f(x) = 0 by Lagrange's interpolation given that f(30) = – 30, f(34) = – 13, f(38) = 3, f(42) = 18.   (A.U Nov/Dec. 2004)
13. Using the Lagrange interpolation with the truncation error, find f(1.5) using the data f(1.0) = 0.7651977, f(1.3) = 0.6200860, f(1.6) = 0.4554022, f(1.9) = 0.2818186, and f(2.2) = 0.1103623.
14. Show that the Lagrange interpolation polynomial for f(x) = xn+1 at the points x0, x1, ..., xn is given by xn+1 – (x – x0)(x – x1)...(x – xn).
15. If f(x) = 1/x2, find the divided difference f [x1, x2, x3, x4].
16. Calculate the nth divided difference of 1/x, based on the points x0, x1, x2, ..., xn.
17. Show that ∆3bcd (1/a) = – 1/(abcd).   (A.U Nov/Dec. 2004)
18. Using Newton's divided difference interpolation, find y(10) given that y(5) = 12, y(6) = 13, y(9) = 14, y(11) = 16.
19. Using Newton's divided difference interpolation, determine f(3) for the data
   x: 0 1 2 4 5
   f(x): 1 14 15 5 6
20. Using Newton's divided difference formula, find u(3) given u(1) = – 26, u(2) = 12, u(4) = 256, and u(6) = 844.   (A.U Apr/May 2005)
21. Find f(8) by Newton's divided difference formula, for the data
   x: 4 5 7 10 11 13
   f(x): 48 100 294 900 1210 2028   (A.U Nov/Dec. 2004)
22. From the given values, evaluate f(3) using Newton's divided difference formula.
   x: – 1 2 4 5
   f(x): – 5 13 255 625   (A.U Nov/Dec. 2005)

2.3 INTERPOLATION WITH EVENLY SPACED POINTS
Let the data (xi, f(xi)) be given with uniform spacing, that is, the nodal points are given by xi = x0 + ih, i = 0, 1, 2, ..., n. In this case, Lagrange and divided difference interpolation polynomials can also be used for interpolation. However, we can derive simpler interpolation formulas for the uniform mesh case. We define finite difference operators and finite differences to derive these formulas.
Finite difference operators and finite differences We define the following five difference operators.
Shift operator E When the operator E is applied on f(xi), we obtain
Ef(xi) = f(xi + h) = f(xi+1).   (2.17)
That is, the operator E when applied on f(x) shifts it to the value at the next nodal point. For example,
Ef(x0) = f(x0 + h) = f(x1), Ef(x1) = f(x1 + h) = f(x2), etc.
We have
E2 f(xi) = E[Ef(xi)] = E[f(xi + h)] = f(xi + 2h) = f(xi+2).
In general, we have
Ek f(xi) = f(xi + kh) = f(xi+k)   (2.18)
where k is any real number. For example, we define
E1/2 f(xi) = f(xi + h/2) = f(xi+1/2).   (2.19)
Forward difference operator ∆ When the operator ∆ is applied on f(xi), we obtain
∆ f(xi) = f(xi + h) – f(xi) = fi+1 – fi.
These differences are called the first forward differences. For example,
∆ f(x0) = f(x0 + h) – f(x0) = f(x1) – f(x0), ∆ f(x1) = f(x1 + h) – f(x1) = f(x2) – f(x1), etc.
The second forward difference is defined by
∆2 f(xi) = ∆[∆ f(xi)] = ∆[f(xi + h) – f(xi)] = ∆ f(xi + h) – ∆ f(xi)
= [f(xi + 2h) – f(xi + h)] – [f(xi + h) – f(xi)] = f(xi + 2h) – 2f(xi + h) + f(xi) = fi+2 – 2fi+1 + fi.
The third forward difference is defined by
∆3 f(xi) = ∆[∆2 f(xi)] = ∆ f(xi + 2h) – 2∆ f(xi + h) + ∆ f(xi) = fi+3 – 3fi+2 + 3fi+1 – fi.
Now, from (2.17), we get
∆ f(xi) = f(xi + h) – f(xi) = Efi – fi = (E – 1) fi.

Comparing, we obtain the operator relation
∆ = E – 1, or E = 1 + ∆.   (2.20)
Using this relation, we can write the nth forward difference of f(xi) as
∆n f(xi) = (E – 1)n f(xi) = Σ (k = 0 to n) (– 1)k [n !/(k ! (n – k) !)] fi+n–k.   (2.21)
The forward differences can be written in a tabular form as in Table 2.2.

Table 2.2. Forward differences.

x     f(x)     ∆f              ∆2f                   ∆3f
x0    f(x0)
               ∆f0 = f1 – f0
x1    f(x1)                    ∆2f0 = ∆f1 – ∆f0
               ∆f1 = f2 – f1                         ∆3f0 = ∆2f1 – ∆2f0
x2    f(x2)                    ∆2f1 = ∆f2 – ∆f1
               ∆f2 = f3 – f2
x3    f(x3)

Backward difference operator ∇ When the operator ∇ is applied on f(xi), we obtain
∇ f(xi) = f(xi) – f(xi – h) = fi – fi–1.   (2.22)
These differences are called the first backward differences. For example,
∇ f(x1) = f(x1) – f(x0), ∇ f(x2) = f(x2) – f(x1), etc.
The second backward difference is defined by
∇2 f(xi) = ∇[∇ f(xi)] = ∇[f(xi) – f(xi – h)] = ∇ f(xi) – ∇ f(xi – h)
= [f(xi) – f(xi – h)] – [f(xi – h) – f(xi – 2h)] = f(xi) – 2f(xi – h) + f(xi – 2h) = fi – 2fi–1 + fi–2.
The third backward difference is defined by
∇3 f(xi) = ∇[∇2 f(xi)] = ∇ f(xi) – 2∇ f(xi – h) + ∇ f(xi – 2h) = fi – 3fi–1 + 3fi–2 – fi–3.
Now, we get
∇ f(xi) = f(xi) – f(xi – h) = fi – E–1fi = (1 – E–1) fi.
Comparing, we obtain the operator relation
∇ = 1 – E–1, or E–1 = 1 – ∇, or E = (1 – ∇)–1.   (2.23)

Using this relation, we can write the nth backward difference of f(xi) as
∇n f(xi) = (1 – E–1)n f(xi) = Σ (k = 0 to n) (– 1)k [n !/(k ! (n – k) !)] fi–k.   (2.24)
The backward differences can be written in a tabular form as in Table 2.3.

Table 2.3. Backward differences.

x     f(x)     ∇f              ∇2f                   ∇3f
x0    f(x0)
               ∇f1 = f1 – f0
x1    f(x1)                    ∇2f2 = ∇f2 – ∇f1
               ∇f2 = f2 – f1                         ∇3f3 = ∇2f3 – ∇2f2
x2    f(x2)                    ∇2f3 = ∇f3 – ∇f2
               ∇f3 = f3 – f2
x3    f(x3)

Remark 6 From the difference tables 2.2, 2.3, we note that the numbers (values of differences) in all the columns in the two tables are same. We identify these numbers as the required forward or backward difference. For example, from the columns of the table, we have ∆f0 = ∇f1, ∆f1 = ∇f2, ∆f2 = ∇f3, ..., ∆3f0 = ∇3f3.

Example 2.11 Construct the forward difference table for the data
x:    –1   0   2... (see below)
x:    –1   0   1   2
f(x): –8   3   1   12
Solution We have the following difference table.

Forward difference table. Example 2.11.

x     f(x)   ∆f                 ∆2f                    ∆3f
–1    –8
             3 + 8 = 11
0     3                         – 2 – 11 = – 13
             1 – 3 = – 2                               13 + 13 = 26
1     1                         11 + 2 = 13
             12 – 1 = 11
2     12
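The table of Example 2.11 can be generated by repeatedly differencing each column; a small sketch (the helper name is ours):

```python
def forward_differences(fs):
    """Return the columns [delta f, delta^2 f, ...] for the ordinates fs."""
    cols, col = [], list(fs)
    while len(col) > 1:
        col = [col[i + 1] - col[i] for i in range(len(col) - 1)]
        cols.append(col)
    return cols

# Example 2.11: f(x) = -8, 3, 1, 12 at x = -1, 0, 1, 2
print(forward_differences([-8, 3, 1, 12]))
# [[11, -2, 11], [-13, 13], [26]]
```

As Remark 6 states, the backward difference table of the same data contains exactly these numbers; only the labelling of the entries changes.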

Example 2.12 Construct the backward difference table for the data
x:    –1   0   1   2
f(x): –8   3   1   12
Solution We have the following difference table.

Backward difference table. Example 2.12.

x     f(x)   ∇f                 ∇2f                    ∇3f
–1    –8
             3 + 8 = 11
0     3                         – 2 – 11 = – 13
             1 – 3 = – 2                               13 + 13 = 26
1     1                         11 + 2 = 13
             12 – 1 = 11
2     12

Central difference operator δ When the operator δ is applied on f(xi), we obtain
δ f(xi) = f(xi + h/2) – f(xi – h/2) = fi+1/2 – fi–1/2.   (2.25)
We note that the ordinates on the right hand side of (2.25) are not the data values. These differences are called the first central differences. Alternately, we can define the first central differences as
δ f(xi + h/2) = δ fi+1/2 = f(xi + h) – f(xi) = fi+1 – fi.   (2.26)
That is, δ f1/2 = f1 – f0, δ f3/2 = f2 – f1, etc. The ordinates on the right hand side of (2.26) are the data values.
The second central difference is defined by
δ2 f(xi) = δ[δ f(xi)] = δ[fi+1/2 – fi–1/2] = δ fi+1/2 – δ fi–1/2 = [fi+1 – fi] – [fi – fi–1] = fi+1 – 2fi + fi–1.
The third central difference is defined by
δ3 f(xi) = δ[δ2 f(xi)] = δ fi+1 – 2δ fi + δ fi–1
= (fi+3/2 – fi+1/2) – 2(fi+1/2 – fi–1/2) + (fi–1/2 – fi–3/2) = fi+3/2 – 3fi+1/2 + 3fi–1/2 – fi–3/2.
All the odd central differences contain non-nodal values and the even central differences contain nodal values.

Now, we get
δ f(xi) = fi+1/2 – fi–1/2 = E1/2 fi – E–1/2 fi = (E1/2 – E–1/2) fi.
Comparing, we obtain the operator relation
δ = E1/2 – E–1/2.   (2.27)
Using this relation, we can write the nth central difference of f(xi) as
δn f(xi) = (E1/2 – E–1/2)n f(xi) = Σ (k = 0 to n) (– 1)k [n !/(k ! (n – k) !)] fi+(n/2)–k.   (2.28)
The central differences can be written in a tabular form as in Table 2.4.

Table 2.4. Central differences.

x     f(x)     δf               δ2f                    δ3f
x0    f(x0)
               δf1/2 = f1 – f0
x1    f(x1)                     δ2f1 = δf3/2 – δf1/2
               δf3/2 = f2 – f1                         δ3f3/2 = δ2f2 – δ2f1
x2    f(x2)                     δ2f2 = δf5/2 – δf3/2
               δf5/2 = f3 – f2
x3    f(x3)

Very often, we may denote the reference point as x0 and the previous points as x–1, x–2, ..., and the later points as x1, x2, .... Then, the central difference table can be written as in Table 2.5. Note that all the differences of even order lie on the same line as the abscissa and the ordinate.
Remark 7 We show that ∆nfi = ∇nfi+n = δnfi+(n/2). We have
∇ = 1 – E–1 = (E – 1)E–1 = ∆E–1. Then, ∇nfi+n = ∆nE–nfi+n = ∆nfi.
δ = (E1/2 – E–1/2) = (E – 1)E–1/2 = ∆E–1/2. Then, δnfi+(n/2) = ∆nE–n/2fi+(n/2) = ∆nfi.
Remark 8 Let Pn(x) = a0xn + a1xn–1 + a2xn–2 + ... + an be a polynomial of degree n. Then,
∆k Pn(x) = 0 for k > n, and ∆n Pn(x) = a0(n !);
∇k Pn(x) = 0 for k > n, and ∇n Pn(x) = a0(n !).
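Remark 8 can be checked numerically with h = 1: three differences of any cubic leave only the constant a0 (3 !). A minimal sketch (our own check), using the cubic (1 – 2x)(1 – 3x)(1 – 4x) = – 24x3 + lower order terms:

```python
def f(x):
    # a cubic with leading coefficient a0 = -24
    return (1 - 2 * x) * (1 - 3 * x) * (1 - 4 * x)

vals = [f(x) for x in range(4)]                 # four equally spaced points, h = 1
d1 = [vals[i + 1] - vals[i] for i in range(3)]  # first forward differences
d2 = [d1[i + 1] - d1[i] for i in range(2)]      # second forward differences
d3 = [d2[1] - d2[0]]                            # third forward difference
print(d3)  # [-144], i.e. a0 * 3! = -24 * 6
```

A fourth difference of the same data would be identically zero, as Remark 8 asserts for k > n.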

Table 2.5. Central differences.

x     f(x)   δf        δ2f       δ3f        δ4f
x–2   f–2
             δf–3/2
x–1   f–1              δ2f–1
             δf–1/2              δ3f–1/2
x0    f0               δ2f0                 δ4f0
             δf1/2               δ3f1/2
x1    f1               δ2f1
             δf3/2
x2    f2

Remark 9 For well behaved functions, the forward differences, backward differences and central differences decrease in magnitude along each column. That is, the second difference is smaller in magnitude than the first difference, the third difference is smaller in magnitude than the second difference, etc.
Mean operator µ When the operator µ is applied on f(xi), we obtain
µ f(xi) = (1/2)[f(xi + h/2) + f(xi – h/2)] = (1/2)[fi+1/2 + fi–1/2] = (1/2)[E1/2 + E–1/2] fi.
Comparing, we have the operator relation
µ = (1/2)[E1/2 + E–1/2].   (2.29)
Example 2.13 Compute ∆3(1 – 2x)(1 – 3x)(1 – 4x).
Solution We have
∆3(1 – 2x)(1 – 3x)(1 – 4x) = ∆3(– 24x3 + lower order terms) = – 24(3 !) = – 144
since ∆3 (polynomial of degree 2 and less) = 0.
Example 2.14 Construct the forward difference table for the sequence of values
f(x): 0, 0, 0, ε, 0, 0, 0
where ε is the magnitude of the error in one ordinate value and all other ordinates are exact. Show that the errors propagate and increase in magnitude and that the errors in each column are binomial coefficients.
Solution We have the following difference table. From the table, it can be observed that the errors propagate and increase in magnitude and that the errors in each column are binomial coefficients.

Forward difference table. Example 2.14.

f(x):  0, 0, 0, ε, 0, 0, 0
∆f:    0, 0, ε, – ε, 0, 0
∆2f:   0, ε, – 2ε, ε, 0
∆3f:   ε, – 3ε, 3ε, – ε
∆4f:   – 4ε, 6ε, – 4ε
∆5f:   10ε, – 10ε
∆6f:   – 20ε

Example 2.15 Prove the following.
(i) δ = ∇(1 – ∇)–1/2   (ii) µ = [1 + δ2/4]1/2
(iii) ∆(fi2) = (fi + fi+1) ∆fi   (iv) ∆[f(x)/g(x)] = [g(x) ∆f(x) – f(x) ∆g(x)]/[g(x) g(x + h)]
Solution
(i) ∇(1 – ∇)–1/2 = (1 – E–1)[1 – (1 – E–1)]–1/2 = (1 – E–1)E1/2 = E1/2 – E–1/2 = δ.
(ii) [1 + δ2/4]1/2 = [1 + (1/4)(E + E–1 – 2)]1/2 = (1/2)[E + E–1 + 2]1/2 = (1/2)[E1/2 + E–1/2] = µ,
since δ2 = (E1/2 – E–1/2)2 = E + E–1 – 2.
(iii) ∆(fi2) = fi+12 – fi2 = (fi+1 + fi)(fi+1 – fi) = (fi+1 + fi) ∆fi.
(iv) ∆[f(x)/g(x)] = f(x + h)/g(x + h) – f(x)/g(x) = [f(x + h) g(x) – f(x) g(x + h)]/[g(x + h) g(x)]
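The binomial pattern claimed in Example 2.14 can be cross-checked against math.comb (taking ε = 1):

```python
from math import comb

fs = [0, 0, 0, 1, 0, 0, 0]   # a single erroneous ordinate of size eps = 1
cols, col = [], fs
while len(col) > 1:
    col = [col[i + 1] - col[i] for i in range(len(col) - 1)]
    cols.append(col)
for n, c in enumerate(cols, start=1):
    print(n, c)
# the nth difference column carries the alternating binomial coefficients C(n, k)
assert cols[3] == [-comb(4, 1), comb(4, 2), -comb(4, 3)]   # [-4, 6, -4]
assert cols[5] == [-comb(6, 3)]                            # [-20]
```

This is why a single bad ordinate is easy to spot in a difference table: the columns grow instead of shrinking as Remark 9 predicts for smooth data.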

= [g(x)[f(x + h) – f(x)] – f(x)[g(x + h) – g(x)]]/[g(x) g(x + h)]
= [g(x) ∆f(x) – f(x) ∆g(x)]/[g(x) g(x + h)].
Relations between differences and derivatives
We write Ef(x) as
Ef(x) = f(x + h) = f(x) + hf ′(x) + (h2/2 !) f ″(x) + ... = [1 + hD + (h2D2/2 !) + ...] f(x) = ehD f(x)
where Drf(x) = drf/dxr, r = 1, 2, .... Hence, we obtain the operator relation
E = ehD, or hD = ln(E).   (2.30)
In terms of the forward, backward and central differences, we obtain
hD = ln(E) = ln(1 + ∆) = ∆ – (1/2)∆2 + (1/3)∆3 – ...   (2.31)
hD = ln(E) = ln(1 – ∇)–1 = – ln(1 – ∇) = ∇ + (1/2)∇2 + (1/3)∇3 + ...   (2.32)
Also, δ = E1/2 – E–1/2 = ehD/2 – e–hD/2 = 2 sinh(hD/2).   (2.33)
Hence, hD = 2 sinh–1(δ/2), and h2D2 = 4[sinh–1(δ/2)]2.   (2.34)
Using the Taylor series expansions, we get
∆f(x) = f(x + h) – f(x) = [f(x) + hf ′(x) + (h2/2) f ″(x) + ...] – f(x) = hf ′(x) + (h2/2) f ″(x) + ...
Neglecting the higher order terms, we get the approximation
∆f(x) ≈ hf ′(x), or f ′(x) ≈ (1/h) ∆f(x).   (2.35)
The error term is given by
f ′(x) – (1/h) ∆f(x) = – (h/2) f ″(x) + ...
Hence, we call the approximation given in (2.35) as a first order approximation or of order O(h).

We have
∆2 f(x) = f(x + 2h) – 2f(x + h) + f(x)
= [f(x) + 2hf ′(x) + (4h2/2) f ″(x) + (8h3/6) f ′″(x) + ...] – 2[f(x) + hf ′(x) + (h2/2) f ″(x) + (h3/6) f ′″(x) + ...] + f(x)
= h2 f ″(x) + h3 f ′″(x) + ...
Neglecting the higher order terms, we get the approximation
∆2 f(x) ≈ h2 f ″(x), or f ″(x) ≈ (1/h2) ∆2 f(x).   (2.36)
The error term is given by
f ″(x) – (1/h2) ∆2 f(x) = – h f ′″(x) + ...
Hence, we call the approximation given in (2.36) as a first order approximation or of order O(h).
Similarly, we have the following results for backward differences.
∇ f(x) ≈ hf ′(x), or f ′(x) ≈ (1/h) ∇ f(x).  [O(h) approximation]   (2.37)
∇2 f(x) ≈ h2 f ″(x), or f ″(x) ≈ (1/h2) ∇2 f(x).  [O(h) approximation]   (2.38)
For the central differences, we have
δ f(x) = f(x + h/2) – f(x – h/2)
= [f(x) + (h/2) f ′(x) + (h2/8) f ″(x) + (h3/48) f ′″(x) + ...] – [f(x) – (h/2) f ′(x) + (h2/8) f ″(x) – (h3/48) f ′″(x) + ...]
= hf ′(x) + (h3/24) f ′″(x) + ...
Neglecting the higher order terms, we get the approximation
δ f(x) ≈ hf ′(x), or f ′(x) ≈ (1/h) δ f(x).   (2.39)
The error term is given by
f ′(x) – (1/h) δ f(x) = – (h2/24) f ′″(x) + ...


Hence, we call the approximation given in (2.39) a second order approximation, or of order O(h²).
We have
    δ²f(x) = f(x + h) – 2f(x) + f(x – h)
           = [f(x) + hf ′(x) + (h²/2) f ″(x) + ...] – 2f(x)
             + [f(x) – hf ′(x) + (h²/2) f ″(x) – ...].
Neglecting the higher order terms, we get the approximation
    δ²f(x) ≈ h²f ″(x),   or   f ″(x) ≈ (1/h²) δ²f(x).                       (2.40)
The error term is given by
    f ″(x) – (1/h²) δ²f(x) = – (h²/12) f ⁽ⁱᵛ⁾(x) + ...

Hence, we call the approximation given in (2.40) a second order approximation, or of order O(h²). Note that the central differences give approximations of higher order because of the symmetry of the arguments.
The divided differences are also related to the forward and backward differences. In the uniform mesh case, the divided differences can be written as
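This difference in order can be observed numerically. Below is a minimal Python sketch (the language and helper names are our own; the book gives no code) comparing the forward difference approximation (1/h)∆f(x) with the central difference approximation (1/h)δf(x) for f(x) = eˣ: halving h roughly halves the forward difference error, but roughly quarters the central difference error, consistent with O(h) and O(h²).

```python
import math

def forward_diff(f, x, h):
    # (1/h) ∆f(x) = [f(x + h) - f(x)] / h, first order, O(h)
    return (f(x + h) - f(x)) / h

def central_diff(f, x, h):
    # (1/h) δf(x) = [f(x + h/2) - f(x - h/2)] / h, second order, O(h^2)
    return (f(x + h/2) - f(x - h/2)) / h

exact = math.exp(1.0)               # f(x) = e^x, so f'(1) = e
for h in (0.1, 0.05):
    e_fwd = abs(forward_diff(math.exp, 1.0, h) - exact)
    e_cen = abs(central_diff(math.exp, 1.0, h) - exact)
    print(f"h = {h:5.3f}   forward error = {e_fwd:.2e}   central error = {e_cen:.2e}")
```

Running this, the forward errors shrink by a factor close to 2 between the two step sizes, while the central errors shrink by a factor close to 4.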

    f[xᵢ, xᵢ₊₁] = [f(xᵢ₊₁) – f(xᵢ)]/(xᵢ₊₁ – xᵢ) = (1/h) ∆fᵢ = (1/h) ∇fᵢ₊₁,
    f[xᵢ, xᵢ₊₁, xᵢ₊₂] = {f[xᵢ₊₁, xᵢ₊₂] – f[xᵢ, xᵢ₊₁]}/(xᵢ₊₂ – xᵢ)
                      = [(1/h) ∆fᵢ₊₁ – (1/h) ∆fᵢ]/(2h)
                      = (1/(2! h²)) ∆²fᵢ = (1/(2! h²)) ∇²fᵢ₊₂.

By induction, we can prove that the nth divided difference can be written as
    f[x₀, x₁, ..., xₙ] = (1/(n! hⁿ)) ∆ⁿf₀ = (1/(n! hⁿ)) ∇ⁿfₙ.               (2.41)

2.3.1 Newton’s Forward Difference Interpolation Formula

Let h be the step length in the given data. In terms of the divided differences, we have the interpolation formula as
    f(x) = f(x₀) + (x – x₀) f[x₀, x₁] + (x – x₀)(x – x₁) f[x₀, x₁, x₂] + ...
Using the relations for the divided differences given in (2.41),
    f[x₀, x₁, ..., xₙ] = (1/(n! hⁿ)) ∆ⁿf₀,
we get
    f(x) = f(x₀) + (x – x₀) (∆f₀/(1! h)) + (x – x₀)(x – x₁) (∆²f₀/(2! h²)) + ...
           + (x – x₀)(x – x₁)...(x – xₙ₋₁) (∆ⁿf₀/(n! hⁿ)).                  (2.42)

This relation is called the Newton’s forward difference interpolation formula.
Suppose that we want to interpolate near the point x₀. Set x = x₀ + sh. Then,
    x – xᵢ = x₀ + sh – (x₀ + ih) = (s – i)h.
Therefore, x – x₀ = sh, x – x₁ = (s – 1)h, x – x₂ = (s – 2)h, etc. Substituting in (2.42), we obtain
    f(x) = f(x₀ + sh) = f(x₀) + s∆f₀ + [s(s – 1)/2!] ∆²f₀ + ...
           + [s(s – 1)(s – 2)...(s – n + 1)/n!] ∆ⁿf₀.                       (2.43)
We note that the coefficients are the binomial coefficients sC₀, sC₁, ..., sCₙ. Hence, we can write the formula (2.43) as
    f(x) = f(x₀ + sh) = sC₀ f(x₀) + sC₁ ∆f₀ + sC₂ ∆²f₀ + ... + sCₙ ∆ⁿf₀.
Note that s = [(x – x₀)/h] > 0. This is an alternate form of the Newton’s forward difference interpolation formula. The magnitudes of the successive terms on the right hand side become smaller and smaller.

Error of interpolation

The error of interpolation is the same as in the Lagrange interpolation. Therefore, the error of interpolation is given by
    Eₙ(f, x) = [(x – x₀)(x – x₁)...(x – xₙ)/(n + 1)!] f⁽ⁿ⁺¹⁾(ξ)
             = [s(s – 1)(s – 2)...(s – n)/(n + 1)!] hⁿ⁺¹ f⁽ⁿ⁺¹⁾(ξ)
             = sCₙ₊₁ hⁿ⁺¹ f⁽ⁿ⁺¹⁾(ξ),                                        (2.44)
where x₀ < ξ < xₙ. The coefficient in the error expression is the next binomial coefficient sCₙ₊₁.

Remark 10 The Newton’s forward difference formula has the permanence property. Suppose we add a new data value (xₙ₊₁, f(xₙ₊₁)) at the end of the given table of values. Then, the (n + 1)th column of the forward difference table contains the (n + 1)th forward difference. The Newton’s forward difference formula then becomes
    f(x) = f(x₀) + (x – x₀) (∆f₀/(1! h)) + (x – x₀)(x – x₁) (∆²f₀/(2! h²)) + ...
           + (x – x₀)(x – x₁)...(x – xₙ) (∆ⁿ⁺¹f₀/((n + 1)! hⁿ⁺¹)).


Example 2.16 Derive the Newton’s forward difference formula using the operator relations.
Solution We have
    f(x₀ + sh) = Eˢ f(x₀) = (1 + ∆)ˢ f(x₀).
Symbolically, expanding the right hand side, we obtain
    f(x₀ + sh) = (sC₀ + sC₁∆ + sC₂∆² + ...) f(x₀)
              = sC₀ f(x₀) + sC₁ ∆f(x₀) + sC₂ ∆²f(x₀) + ... + sCₙ ∆ⁿf(x₀) + ...
We neglect the (n + 1)th and higher order differences to obtain the Newton’s forward difference formula as
    f(x₀ + sh) = sC₀ f(x₀) + sC₁ ∆f(x₀) + sC₂ ∆²f(x₀) + ... + sCₙ ∆ⁿf(x₀).

Example 2.17 For the data

    x    : – 2   – 1   0   1   2    3
    f(x) :  15     5   1   3   11   25

construct the forward difference formula. Hence, find f(0.5).
Solution We have the following forward difference table.

    Forward difference table. Example 2.17.

    x      f(x)    ∆f     ∆²f    ∆³f
    – 2     15
                  – 10
    – 1      5             6
                   – 4            0
      0      1             6
                     2            0
      1      3             6
                     8            0
      2     11             6
                    14
      3     25

From the table, we conclude that the data represents a quadratic polynomial. We have h = 1. The Newton’s forward difference formula is given by
    f(x) = f(x₀) + (x – x₀)(∆f₀/h) + (x – x₀)(x – x₁)(∆²f₀/(2h²))
         = 15 + (x + 2)(– 10) + (x + 2)(x + 1)(6/2)
         = 15 – 10x – 20 + 3x² + 9x + 6 = 3x² – x + 1.
We obtain f(0.5) = 3(0.5)² – 0.5 + 1 = 0.75 – 0.5 + 1 = 1.25.
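The forward difference computation above is easy to check by machine. The following Python sketch (function names are ours; the book gives no code) builds the forward difference table and evaluates formula (2.43); for the data of Example 2.17 it reproduces f(0.5) = 1.25.

```python
def forward_difference_table(y):
    """Return [y, ∆y, ∆²y, ...] for equispaced data values y."""
    table = [list(y)]
    while len(table[-1]) > 1:
        prev = table[-1]
        table.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])
    return table

def newton_forward(x0, h, y, x):
    """Evaluate the Newton forward difference polynomial at x."""
    table = forward_difference_table(y)
    s = (x - x0) / h
    value, coeff = 0.0, 1.0            # coeff accumulates sC_k = s(s-1)...(s-k+1)/k!
    for k in range(len(y)):
        value += coeff * table[k][0]   # kth difference at the top of the table
        coeff *= (s - k) / (k + 1)
    return value

# Data of Example 2.17: x = -2, -1, 0, 1, 2, 3 with h = 1
y = [15, 5, 1, 3, 11, 25]
print(newton_forward(-2, 1, y, 0.5))   # 1.25, as obtained from 3x² - x + 1
```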

2.3.2 Newton’s Backward Difference Interpolation Formula

Again, we use the Newton’s divided difference interpolation polynomial to derive the Newton’s backward difference interpolation formula. Since the divided differences are symmetric with respect to their arguments, we write the arguments of the divided differences in the order xₙ, xₙ₋₁, ..., x₁, x₀. The Newton’s divided difference interpolation polynomial can then be written as
    f(x) = f(xₙ) + (x – xₙ) f[xₙ, xₙ₋₁] + (x – xₙ)(x – xₙ₋₁) f[xₙ, xₙ₋₁, xₙ₋₂] + ...
           + (x – xₙ)(x – xₙ₋₁)...(x – x₁) f[xₙ, xₙ₋₁, ..., x₀].            (2.45)
Since the divided differences are symmetric with respect to their arguments, we have
    f[xₙ, xₙ₋₁] = f[xₙ₋₁, xₙ] = (1/h) ∇fₙ,
    f[xₙ, xₙ₋₁, xₙ₋₂] = f[xₙ₋₂, xₙ₋₁, xₙ] = (1/(2! h²)) ∇²fₙ, ...,
    f[xₙ, xₙ₋₁, ..., x₀] = f[x₀, x₁, ..., xₙ] = (1/(n! hⁿ)) ∇ⁿfₙ.
Substituting in (2.45), we obtain the Newton’s backward difference interpolation formula as
    f(x) = f(xₙ) + (x – xₙ)(1/(1! h)) ∇f(xₙ) + (x – xₙ)(x – xₙ₋₁)(1/(2! h²)) ∇²f(xₙ) + ...
           + (x – xₙ)(x – xₙ₋₁)...(x – x₁)(1/(n! hⁿ)) ∇ⁿf(xₙ).              (2.46)
Let x be any point near xₙ. Let x – xₙ = sh. Then, for i = n – 1, n – 2, ..., 0,
    x – xᵢ = x – xₙ + xₙ – xᵢ = x – xₙ + (x₀ + nh) – (x₀ + ih) = sh + h(n – i) = (s + n – i)h.
Therefore, x – xₙ = sh, x – xₙ₋₁ = (s + 1)h, x – xₙ₋₂ = (s + 2)h, ..., x – x₁ = (s + n – 1)h.
Substituting in (2.46), we obtain the formula as
    f(x) = f(xₙ + sh) = f(xₙ) + s∇f(xₙ) + [s(s + 1)/2!] ∇²f(xₙ) + ...
           + [s(s + 1)(s + 2)...(s + n – 1)/n!] ∇ⁿf(xₙ).                    (2.47)


Note that s = [(x – xₙ)/h] < 0. The magnitudes of the successive terms on the right hand side become smaller and smaller. Note that the coefficients are the binomial coefficients [(– 1)ᵏ (– s)Cₖ].

Error of interpolation

The expression for the error becomes
    Eₙ(f, x) = [(x – xₙ)(x – xₙ₋₁)...(x – x₀)/(n + 1)!] f⁽ⁿ⁺¹⁾(ξ)
             = [s(s + 1)(s + 2)...(s + n)/(n + 1)!] hⁿ⁺¹ f⁽ⁿ⁺¹⁾(ξ),         (2.48)
where x₀ < ξ < xₙ.

Remark 11 As in divided differences, given a table of values, we can determine the degree of the forward/backward difference polynomial using the difference table. The kth column of the difference table contains the kth forward/backward differences. If the values of these differences are the same, then the (k + 1)th and higher order differences are zero. Hence, the given data represents a kth degree polynomial.

Remark 12 We use the forward difference interpolation when we want to interpolate near the top of the table, and the backward difference interpolation when we want to interpolate near the bottom of the table.

Example 2.18 Derive the Newton’s backward difference formula using the operator relations.
Solution Let x = xₙ + sh. Then, s = [(x – xₙ)/h] < 0. We have
    f(xₙ + sh) = Eˢ f(xₙ) = (1 – ∇)⁻ˢ f(xₙ).
Symbolically, expanding the right hand side, we obtain
    f(xₙ + sh) = f(xₙ) + s∇f(xₙ) + [s(s + 1)/2!] ∇²f(xₙ) + ...
                 + [s(s + 1)...(s + n – 1)/n!] ∇ⁿf(xₙ) + ...
We neglect the (n + 1)th and higher order differences to obtain the Newton’s backward difference formula as
    f(xₙ + sh) = f(xₙ) + s∇f(xₙ) + [s(s + 1)/2!] ∇²f(xₙ) + ...
                 + [s(s + 1)...(s + n – 1)/n!] ∇ⁿf(xₙ).

Example 2.19 For the following data, calculate the differences and obtain the Newton’s forward and backward difference interpolation polynomials. Are these polynomials different? Interpolate at x = 0.25 and x = 0.35.

    x    : 0.1    0.2    0.3    0.4    0.5
    f(x) : 1.40   1.56   1.76   2.00   2.28

Solution The step length is h = 0.1. We have the following difference table. Since the third and higher order differences are zero, the data represents a quadratic polynomial. The third column contains the first forward/backward differences and the fourth column contains the second forward/backward differences.

    Difference table. Example 2.19.

    x     f(x)     ∆f     ∆²f    ∆³f    ∆⁴f
    0.1   1.40
                  0.16
    0.2   1.56           0.04
                  0.20          0.0
    0.3   1.76           0.04          0.0
                  0.24          0.0
    0.4   2.00           0.04
                  0.28
    0.5   2.28

The forward difference polynomial is given by
    f(x) = f(x₀) + (x – x₀)(∆f₀/h) + (x – x₀)(x – x₁)(∆²f₀/(2! h²))
         = 1.4 + (x – 0.1)(0.16/0.1) + (x – 0.1)(x – 0.2)(0.04/0.02)
         = 2x² + x + 1.28.
The backward difference polynomial is given by
    f(x) = f(xₙ) + (x – xₙ)(∇fₙ/h) + (x – xₙ)(x – xₙ₋₁)(∇²fₙ/(2! h²))
         = 2.28 + (x – 0.5)(0.28/0.1) + (x – 0.5)(x – 0.4)(0.04/0.02)
         = 2x² + x + 1.28.
Both the polynomials are identical, since the interpolation polynomial is unique. We obtain
    f(0.25) = 2(0.25)² + 0.25 + 1.28 = 1.655,
    f(0.35) = 2(0.35)² + 0.35 + 1.28 = 1.875.
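Since the interpolation polynomial is unique, the forward and backward forms of Example 2.19 must agree at every point. This can be sketched and checked in Python (the helper names are our own; the book gives no code):

```python
def difference_table(y):
    """Return [y, ∆y, ∆²y, ...]; ∇ᵏfₙ is the last entry of the kth column."""
    table = [list(y)]
    while len(table[-1]) > 1:
        prev = table[-1]
        table.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])
    return table

def newton_forward(x0, h, y, x):
    table, s = difference_table(y), (x - x0) / h
    value, coeff = 0.0, 1.0
    for k in range(len(y)):
        value += coeff * table[k][0]       # ∆ᵏf₀
        coeff *= (s - k) / (k + 1)
    return value

def newton_backward(xn, h, y, x):
    table, s = difference_table(y), (x - xn) / h
    value, coeff = 0.0, 1.0
    for k in range(len(y)):
        value += coeff * table[k][-1]      # ∇ᵏfₙ
        coeff *= (s + k) / (k + 1)
    return value

y = [1.40, 1.56, 1.76, 2.00, 2.28]         # data of Example 2.19, h = 0.1
for x in (0.25, 0.35):
    print(newton_forward(0.1, 0.1, y, x), newton_backward(0.5, 0.1, y, x))
```

Both columns agree (up to rounding), printing values equal to 1.655 at x = 0.25 and 1.875 at x = 0.35.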

Example 2.20 Using Newton’s backward difference interpolation, interpolate at x = 1.0 from the following data.

    x    : 0.1       0.3       0.5       0.7     0.9     1.1
    f(x) : – 1.699   – 1.073   – 0.375   0.443   1.429   2.631

Solution The step length is h = 0.2. We have the difference table as given below.

    Difference table. Example 2.20.

    x      f(x)      ∇f      ∇²f     ∇³f     ∇⁴f    ∇⁵f
    0.1   – 1.699
                    0.626
    0.3   – 1.073           0.072
                    0.698           0.048
    0.5   – 0.375           0.120           0.0
                    0.818           0.048          0.0
    0.7     0.443           0.168           0.0
                    0.986           0.048
    0.9     1.429           0.216
                    1.202
    1.1     2.631

Since the fourth and higher order differences are zero, the data represents a third degree polynomial. The Newton’s backward difference interpolation polynomial is given by
    f(x) = f(xₙ) + (x – xₙ)(1/(1! h)) ∇f(xₙ) + (x – xₙ)(x – xₙ₋₁)(1/(2! h²)) ∇²f(xₙ)
           + (x – xₙ)(x – xₙ₋₁)(x – xₙ₋₂)(1/(3! h³)) ∇³f(xₙ)
         = 2.631 + (x – 1.1)(1.202/0.2) + (x – 1.1)(x – 0.9)(0.216/(2(0.04)))
           + (x – 1.1)(x – 0.9)(x – 0.7)(0.048/(6(0.008)))
         = 2.631 + 6.01(x – 1.1) + 2.7(x – 1.1)(x – 0.9) + (x – 1.1)(x – 0.9)(x – 0.7).
Since we have not been asked to find the interpolation polynomial, we may not simplify this expression. At x = 1.0, we obtain
    f(1.0) = 2.631 + 6.01(– 0.1) + 2.7(– 0.1)(0.1) + (– 0.1)(0.1)(0.3)
           = 2.631 – 0.601 – 0.027 – 0.003 = 2.000.
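The backward difference evaluation of Example 2.20 can be cross-checked numerically. A minimal Python sketch (helper names are ours; the book gives no code):

```python
def difference_table(y):
    table = [list(y)]
    while len(table[-1]) > 1:
        prev = table[-1]
        table.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])
    return table

def newton_backward(xn, h, y, x):
    """Evaluate the Newton backward difference polynomial at x."""
    table, s = difference_table(y), (x - xn) / h
    value, coeff = 0.0, 1.0            # coeff accumulates s(s+1)...(s+k-1)/k!
    for k in range(len(y)):
        value += coeff * table[k][-1]  # ∇ᵏfₙ = last entry of the kth column
        coeff *= (s + k) / (k + 1)
    return value

y = [-1.699, -1.073, -0.375, 0.443, 1.429, 2.631]   # data of Example 2.20, h = 0.2
print(round(newton_backward(1.1, 0.2, y, 1.0), 6))  # 2.0
```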

REVIEW QUESTIONS

1. Write the expression for the derivative operator D in terms of the forward difference operator ∆.
Solution The required expression is
    hD = ln(E) = ln(1 + ∆) = ∆ – (1/2)∆² + (1/3)∆³ – ...,
    or D = (1/h) ln(1 + ∆) = (1/h)[∆ – (1/2)∆² + (1/3)∆³ – ...].
2. Write the expression for the derivative operator D in terms of the backward difference operator ∇.
Solution The required expression is
    hD = ln(E) = ln(1 – ∇)⁻¹ = – ln(1 – ∇) = ∇ + (1/2)∇² + (1/3)∇³ + ...,
    or D = – (1/h) ln(1 – ∇) = (1/h)[∇ + (1/2)∇² + (1/3)∇³ + ...].
3. What is the order of the approximation f ′(x) ≈ (1/h) ∆f(x)?
Solution The error term is given by
    f ′(x) – (1/h) ∆f(x) = – (h/2) f ″(x) + ...
Hence, it is a first order approximation, or of order O(h).
4. What is the order of the approximation f ″(x) ≈ (1/h²) ∆²f(x)?
Solution The error term is given by
    f ″(x) – (1/h²) ∆²f(x) = – h f ‴(x) + ...
Hence, it is a first order approximation, or of order O(h).
5. What is the order of the approximation f ″(x) ≈ (1/h²) δ²f(x)?
Solution The error term is given by
    f ″(x) – (1/h²) δ²f(x) = – (h²/12) f ⁽ⁱᵛ⁾(x) + ...
Hence, it is a second order approximation, or of order O(h²).
6. Give the relation between the divided differences and forward or backward differences.
Solution The required relation is
    f[x₀, x₁, ..., xₙ] = (1/(n! hⁿ)) ∆ⁿf₀ = (1/(n! hⁿ)) ∇ⁿfₙ.
7. Does the Newton’s forward difference formula have the permanence property?
Solution Yes. The Newton’s forward difference formula has the permanence property. Suppose we add a new data value (xₙ₊₁, f(xₙ₊₁)) at the end of the given table of values. Then, the (n + 1)th column of the forward difference table contains the (n + 1)th forward difference. The Newton’s forward difference formula then becomes
    f(x) = f(x₀) + (x – x₀) (∆f₀/(1! h)) + (x – x₀)(x – x₁) (∆²f₀/(2! h²)) + ...
           + (x – x₀)(x – x₁)...(x – xₙ) (∆ⁿ⁺¹f₀/((n + 1)! hⁿ⁺¹)).

8. If x = x₀ + sh, write the error expression in the Newton’s forward difference formula.
Solution The error expression is given by
    Eₙ(f, x) = [s(s – 1)(s – 2)...(s – n)/(n + 1)!] hⁿ⁺¹ f⁽ⁿ⁺¹⁾(ξ) = sCₙ₊₁ hⁿ⁺¹ f⁽ⁿ⁺¹⁾(ξ),  x₀ < ξ < xₙ.
9. If x = xₙ + sh, write the error expression in the Newton’s backward difference formula.
Solution The error expression is given by
    Eₙ(f, x) = [s(s + 1)(s + 2)...(s + n)/(n + 1)!] hⁿ⁺¹ f⁽ⁿ⁺¹⁾(ξ),  x₀ < ξ < xₙ.
10. Can we decide the degree of the polynomial that a data represents by writing the forward or backward difference tables?
Solution Given a table of values, we can determine the degree of the forward/backward difference polynomial using the difference table. The kth column of the difference table contains the kth forward/backward differences. If the values of these differences are the same, then the (k + 1)th and higher order differences are zero. Hence, the given data represents a kth degree polynomial.
11. For performing interpolation for a given data, when do we use the Newton’s forward and backward difference formulas?
Solution We use the forward difference interpolation when we want to interpolate near the top of the table, and the backward difference interpolation when we want to interpolate near the bottom of the table.

EXERCISE 2.2

1. Prove the following.
    (i) ∆(1/fᵢ) = – ∆fᵢ/(fᵢ fᵢ₊₁)
    (ii) ∆ + ∇ = (∆/∇) – (∇/∆)
    (iii) Σₖ₌₀ⁿ ∆²fₖ = ∆fₙ₊₁ – ∆f₀
    (iv) ∆ – ∇ = ∆∇
    (v) µδ = (∆ + ∇)/2
    (vi) (1 + ∆)(1 – ∇) = 1
    (vii) δ = ∇E^{1/2}
    (viii) 1 + δ²µ² = [1 + (1/2)δ²]².
2. Using the Newton’s forward difference formula, find the polynomial f(x) satisfying the following data. Hence, evaluate y at x = 5.

    x : 4   6   8   10
    y : 1   3   8   10
                                                        (A.U. May/Jun 2006)

3. Using the Newton’s forward interpolation formula, find the polynomial f(x) satisfying the following data. Hence, find f(2).

    x : 0    5     10     15
    y : 14   379   1444   3584
                                                        (A.U. Apr/May 2003)
4. Using the Newton’s forward difference formula, find the cubic polynomial which takes the following values. Evaluate y(4) using the Newton’s backward interpolation formula. Is it the same as obtained from the cubic polynomial found above?

    x : 0   1   2   3
    y : 1   2   1   10
                                                        (A.U. Nov/Dec 2003)
5. A third degree polynomial passes through the points (0, – 1), (1, 1), (2, 1) and (3, – 2). Determine this polynomial using the Newton’s forward interpolation formula.
                                                        (A.U. Nov/Dec 2003)
6. Using the Newton’s backward interpolation formula, construct an interpolation polynomial of degree 3 for the data
    f(– 0.75) = – 0.07181250,  f(– 0.5) = – 0.024750,  f(– 0.25) = 0.33493750,  f(0) = 1.1010.
Hence, find f(– 1/3).
7. Obtain the interpolating quadratic polynomial for the given data by using the Newton’s forward difference formula.

    x : 0     2   4    6
    y : – 3   5   21   45
                                                        (A.U. Nov/Dec 2003)
8. For the following data, estimate the number of persons earning weekly wages between 60 and 70 rupees.

    Wage (in Rs.)                 : below 40   40–60   60–80   80–100   100–120
    No. of persons (in thousands) : 250        120     100     70       50
                                                        (A.U. Apr/May 2004)
9. The following data represents the function f(x) = eˣ.

    x    : 1.0      1.5      2.0      2.5
    f(x) : 2.7183   4.4817   7.3891   12.1825

Estimate the value of f(2.25) using the (i) Newton’s forward difference interpolation and (ii) Newton’s backward difference interpolation. Compare with the exact value.

10. The following data represents the function f(x) = cos (x + 1), where x is in radians.

    x    : 0.0      0.2      0.4      0.6
    f(x) : 0.5403   0.3624   0.1700   – 0.0292

Calculate f(0.5) using the Newton’s backward difference interpolation. Compare with the exact value.
11. The following data are part of a table for f(x) = cos x / x, where x is in radians.

    x    : 0.1      0.2      0.3      0.4
    f(x) : 9.9500   4.9003   3.1845   2.3027

Estimate the value of f(0.12), (i) by interpolating directly from the table, (ii) by first tabulating x f(x) and then interpolating from the table. Explain the difference between the results.

2.4 SPLINE INTERPOLATION AND CUBIC SPLINES

In the earlier days of development of engineering devices, the draftsman used a device to draw smooth curves through a given set of points such that the slope and the curvature are also continuous along the curves, that is, f(x), f ′(x) and f ″(x) are continuous on the curves. Such a device was called a spline, and plotting of the curve was called spline fitting.
We now define a spline. Let the given interval [a, b] be subdivided into n subintervals [x₀, x₁], [x₁, x₂], ..., [xₙ₋₁, xₙ], where a = x₀ < x₁ < x₂ < ... < xₙ = b. The points x₀, x₁, ..., xₙ are called nodes or knots, and x₁, ..., xₙ₋₁ are called internal nodes.

Spline function A spline function of degree n with nodes x₀, x₁, ..., xₙ is a function F(x) satisfying the following properties.
(i) F(xᵢ) = f(xᵢ), i = 0, 1, ..., n. (Interpolation conditions).
(ii) On each subinterval [xᵢ₋₁, xᵢ], 1 ≤ i ≤ n, F(x) is a polynomial of degree n.
(iii) F(x) and its first (n – 1) derivatives are continuous on (a, b).
For our discussion, we shall consider cubic splines only. From the definition, a cubic spline has the following properties.
(i) F(xᵢ) = f(xᵢ), i = 0, 1, ..., n. (Interpolation conditions).
(ii) On each subinterval [xᵢ₋₁, xᵢ], 1 ≤ i ≤ n, F(x) is a third degree (cubic) polynomial.
(iii) F(x), F ′(x) and F ″(x) are continuous on (a, b).
Let F(x) = Pᵢ(x) = aᵢx³ + bᵢx² + cᵢx + dᵢ on [xᵢ₋₁, xᵢ],
and F(x) = Pᵢ₊₁(x) = aᵢ₊₁x³ + bᵢ₊₁x² + cᵢ₊₁x + dᵢ₊₁ on [xᵢ, xᵢ₊₁].

On each interval, we have four unknowns aᵢ, bᵢ, cᵢ and dᵢ, i = 1, 2, ..., n. Therefore, the total number of unknowns is 4n.
Continuity of F(x), F ′(x) and F ″(x) on (a, b) implies the following.
(i) Continuity of F(x):
    On [xᵢ₋₁, xᵢ]: Pᵢ(xᵢ) = f(xᵢ) = aᵢxᵢ³ + bᵢxᵢ² + cᵢxᵢ + dᵢ,               (2.49)
    On [xᵢ, xᵢ₊₁]: Pᵢ₊₁(xᵢ) = f(xᵢ) = aᵢ₊₁xᵢ³ + bᵢ₊₁xᵢ² + cᵢ₊₁xᵢ + dᵢ₊₁,
        i = 1, 2, ..., n – 1.                                               (2.50)
(ii) Continuity of F ′(x):
    3aᵢxᵢ² + 2bᵢxᵢ + cᵢ = 3aᵢ₊₁xᵢ² + 2bᵢ₊₁xᵢ + cᵢ₊₁, i = 1, 2, ..., n – 1.  (2.51)
(iii) Continuity of F ″(x):
    6aᵢxᵢ + 2bᵢ = 6aᵢ₊₁xᵢ + 2bᵢ₊₁, i = 1, 2, ..., n – 1.                    (2.52)
At the end points x₀ and xₙ, we have the interpolation conditions
    f(x₀) = a₁x₀³ + b₁x₀² + c₁x₀ + d₁, and f(xₙ) = aₙxₙ³ + bₙxₙ² + cₙxₙ + dₙ.
We have 2(n – 1) equations from (2.49) and (2.50), (n – 1) equations from (2.51), (n – 1) equations from (2.52), and 2 equations from the interpolation conditions at the end points. That is, we have a total of 4n – 2 equations. We need two more equations to obtain the polynomial uniquely. There are various types of conditions that can be prescribed to obtain two more equations. We shall consider the case of a natural spline, in which we set the two conditions as F ″(x₀) = 0, F ″(xₙ) = 0.
The above procedure is a direct way of obtaining a cubic spline. However, we shall derive a simple method to determine the cubic spline.

Example 2.21 Find whether the following functions are cubic splines?
(i) f(x) = 5x³ – 3x², – 1 ≤ x ≤ 0;   = – 5x³ – 3x², 0 ≤ x ≤ 1.
(ii) f(x) = – 2x³ – x², – 1 ≤ x ≤ 0;   = 2x³ + x², 0 ≤ x ≤ 1.
Solution In both the examples, f(x) is a cubic polynomial in both intervals (– 1, 0) and (0, 1).
(i) We have
    lim_{x→0⁺} f(x) = 0 = lim_{x→0⁻} f(x).
The given function f(x) is continuous on (– 1, 1). Next,
    f ′(x) = 15x² – 6x, – 1 ≤ x ≤ 0;   = – 15x² – 6x, 0 ≤ x ≤ 1,
and
    lim_{x→0⁺} f ′(x) = 0 = lim_{x→0⁻} f ′(x).
The function f ′(x) is continuous on (– 1, 1). Further,
    f ″(x) = 30x – 6, – 1 ≤ x ≤ 0;   = – 30x – 6, 0 ≤ x ≤ 1.

We have
    lim_{x→0⁺} f ″(x) = – 6 = lim_{x→0⁻} f ″(x).
The function f ″(x) is continuous on (– 1, 1). We conclude that the given function f(x) is a cubic spline.
(ii) We have
    lim_{x→0⁺} f(x) = 0 = lim_{x→0⁻} f(x).
The given function f(x) is continuous on (– 1, 1). Next,
    f ′(x) = – 6x² – 2x, – 1 ≤ x ≤ 0;   = 6x² + 2x, 0 ≤ x ≤ 1,
and
    lim_{x→0⁺} f ′(x) = 0 = lim_{x→0⁻} f ′(x).
The function f ′(x) is continuous on (– 1, 1). Further,
    f ″(x) = – 12x – 2, – 1 ≤ x ≤ 0;   = 12x + 2, 0 ≤ x ≤ 1,
and
    lim_{x→0⁺} f ″(x) = 2,   lim_{x→0⁻} f ″(x) = – 2.
The function f ″(x) is not continuous on (– 1, 1). We conclude that the given function f(x) is not a cubic spline.

Cubic spline From the definition, the spline is a piecewise continuous cubic polynomial. Hence, F ″(x) is a linear function of x in all the intervals. Consider the interval [xᵢ₋₁, xᵢ]. Using Lagrange interpolation in this interval, F ″(x) can be written as
    F ″(x) = [(x – xᵢ)/(xᵢ₋₁ – xᵢ)] F ″(xᵢ₋₁) + [(x – xᵢ₋₁)/(xᵢ – xᵢ₋₁)] F ″(xᵢ)
           = [(xᵢ – x)/(xᵢ – xᵢ₋₁)] F ″(xᵢ₋₁) + [(x – xᵢ₋₁)/(xᵢ – xᵢ₋₁)] F ″(xᵢ).
Denote F ″(xᵢ₋₁) = Mᵢ₋₁, and F ″(xᵢ) = Mᵢ. Then,
    F ″(x) = [(xᵢ – x) Mᵢ₋₁ + (x – xᵢ₋₁) Mᵢ]/(xᵢ – xᵢ₋₁).                    (2.53)
Integrating (2.53) with respect to x, we get
    F ′(x) = – [(xᵢ – x)²/(2(xᵢ – xᵢ₋₁))] Mᵢ₋₁ + [(x – xᵢ₋₁)²/(2(xᵢ – xᵢ₋₁))] Mᵢ + a.   (2.54)
Integrating (2.54) again with respect to x, we get
    F(x) = [(xᵢ – x)³/(6(xᵢ – xᵢ₋₁))] Mᵢ₋₁ + [(x – xᵢ₋₁)³/(6(xᵢ – xᵢ₋₁))] Mᵢ + ax + b,  (2.55)
where a, b are arbitrary constants to be determined by using the conditions

    F(xᵢ₋₁) = f(xᵢ₋₁) and F(xᵢ) = f(xᵢ).
Denote xᵢ – xᵢ₋₁ = hᵢ. Note that hᵢ is the length of the interval [xᵢ₋₁, xᵢ]. To ease the computations, we write
    ax + b = c(xᵢ – x) + d(x – xᵢ₋₁), where a = d – c, b = cxᵢ – dxᵢ₋₁.
That is, we write (2.55) as
    F(x) = [(xᵢ – x)³ Mᵢ₋₁ + (x – xᵢ₋₁)³ Mᵢ]/(6hᵢ) + c(xᵢ – x) + d(x – xᵢ₋₁).           (2.56)
Using the condition F(xᵢ₋₁) = f(xᵢ₋₁) = fᵢ₋₁, we get
    fᵢ₋₁ = hᵢ³ Mᵢ₋₁/(6hᵢ) + chᵢ = (hᵢ²/6) Mᵢ₋₁ + chᵢ,   or   c = (1/hᵢ)[fᵢ₋₁ – (hᵢ²/6) Mᵢ₋₁].
Using the condition F(xᵢ) = f(xᵢ) = fᵢ, we get
    fᵢ = (hᵢ²/6) Mᵢ + dhᵢ,   or   d = (1/hᵢ)[fᵢ – (hᵢ²/6) Mᵢ].
Substituting the expressions for c and d in (2.56), we obtain the spline in the interval [xᵢ₋₁, xᵢ] as
    Fᵢ(x) = (1/(6hᵢ))[(xᵢ – x)³ Mᵢ₋₁ + (x – xᵢ₋₁)³ Mᵢ]
            + [(xᵢ – x)/hᵢ][fᵢ₋₁ – (hᵢ²/6) Mᵢ₋₁] + [(x – xᵢ₋₁)/hᵢ][fᵢ – (hᵢ²/6) Mᵢ].    (2.57)
Note that the spline second derivatives Mᵢ₋₁, Mᵢ are unknowns and are to be determined. Setting i = i + 1 in (2.57), we get the spline valid in the interval [xᵢ, xᵢ₊₁] as
    Fᵢ₊₁(x) = (1/(6hᵢ₊₁))[(xᵢ₊₁ – x)³ Mᵢ + (x – xᵢ)³ Mᵢ₊₁]
              + [(xᵢ₊₁ – x)/hᵢ₊₁][fᵢ – (hᵢ₊₁²/6) Mᵢ] + [(x – xᵢ)/hᵢ₊₁][fᵢ₊₁ – (hᵢ₊₁²/6) Mᵢ₊₁],   (2.58)
where hᵢ₊₁ = xᵢ₊₁ – xᵢ. Differentiating (2.57) and (2.58), we get

    F ′ᵢ(x) = (1/(6hᵢ))[– 3(xᵢ – x)² Mᵢ₋₁ + 3(x – xᵢ₋₁)² Mᵢ]
              – (1/hᵢ)[fᵢ₋₁ – (hᵢ²/6) Mᵢ₋₁] + (1/hᵢ)[fᵢ – (hᵢ²/6) Mᵢ],      (2.59)
valid in the interval [xᵢ₋₁, xᵢ], and
    F ′ᵢ₊₁(x) = (1/(6hᵢ₊₁))[– 3(xᵢ₊₁ – x)² Mᵢ + 3(x – xᵢ)² Mᵢ₊₁]
                – (1/hᵢ₊₁)[fᵢ – (hᵢ₊₁²/6) Mᵢ] + (1/hᵢ₊₁)[fᵢ₊₁ – (hᵢ₊₁²/6) Mᵢ₊₁],   (2.60)
valid in the interval [xᵢ, xᵢ₊₁]. Now, we require that the derivative F ′(x) be continuous at x = xᵢ. That is, the left hand and right hand derivatives of F ′(x) at x = xᵢ must be equal:
    lim_{ε→0} F ′ᵢ(xᵢ – ε) = lim_{ε→0} F ′ᵢ₊₁(xᵢ + ε).
Using (2.59) and (2.60), we obtain
    (hᵢ/2) Mᵢ – (1/hᵢ) fᵢ₋₁ + (hᵢ/6) Mᵢ₋₁ + (1/hᵢ) fᵢ – (hᵢ/6) Mᵢ
        = – (hᵢ₊₁/2) Mᵢ – (1/hᵢ₊₁) fᵢ + (hᵢ₊₁/6) Mᵢ + (1/hᵢ₊₁) fᵢ₊₁ – (hᵢ₊₁/6) Mᵢ₊₁,
or
    (hᵢ/6) Mᵢ₋₁ + (1/3)(hᵢ + hᵢ₊₁) Mᵢ + (hᵢ₊₁/6) Mᵢ₊₁
        = (1/hᵢ₊₁)(fᵢ₊₁ – fᵢ) – (1/hᵢ)(fᵢ – fᵢ₋₁), i = 1, 2, ..., n – 1.    (2.61)
This relation gives a system of n – 1 linear equations in the n + 1 unknowns M₀, M₁, M₂, ..., Mₙ. The two additional conditions required are the natural spline conditions M₀ = 0 = Mₙ. These equations are solved for M₁, M₂, ..., Mₙ₋₁. Substituting these values in (2.57), we obtain the spline valid in the interval [xᵢ₋₁, xᵢ]. If the derivative is required, we can find it from (2.59).

Equispaced data When the data is equispaced, we have hᵢ = hᵢ₊₁ = h and xᵢ = x₀ + ih. Then, the spline in the interval [xᵢ₋₁, xᵢ], given in (2.57), and the relation between the second derivatives, given in (2.61), simplify as
    Fᵢ(x) = (1/(6h))[(xᵢ – x)³ Mᵢ₋₁ + (x – xᵢ₋₁)³ Mᵢ]
            + [(xᵢ – x)/h][fᵢ₋₁ – (h²/6) Mᵢ₋₁] + [(x – xᵢ₋₁)/h][fᵢ – (h²/6) Mᵢ],   (2.62)

Example 2.. M2 = 0. 2]: F(x) = = LM N OP Q LM N O P Q 1 1 1 [(x1 – x)3 M0 + (x – x0)3 M1] + (x1 – x) f0 − M0 + ( x − x0 ) f1 − M1 6 6 6 1 1 (33)x3 + (1 – x)(– 1) + x 3 − (33) 6 6 1 (11x3 – 3x – 2). The spline is given by Fi(x) = 1 1 1 [( xi − x)3 Mi − 1 + ( x − xi − 1 ) 3 Mi ] + ( xi − x) fi − 1 − M i − 1 + (x – xi–1) fi − M i 6 6 6 We have the following splines.. Hence. splines perform better than higher order polynomial approximations. Solution We have equispaced data with h = 1. x f(x) 0 –1 1 3 2 29 with M0 = 0. we get M0 + 4M1 + M2 = 6(f2 – 2f1 + f0). M2 = 0.5. .. 1.63) i = 1.63). n – 1. On [0.5. interpolate at x = 0. Remark 12 Splines provide better approximation to the behaviour of functions that have abrupt local changes. 2 1 1 1 [( x2 − x) 3 M 1 + ( x − x1 ) 3 M2 ] + ( x2 − x) f1 − M1 + ( x − x1 ) f2 − M2 6 6 6 1 1 [(2 − x) 3 (33)] + (2 − x) 3 − (33) + ( x − 1) [29] 6 6 FG H IJ K LM N OP Q LM N O P Q LM N OP Q LM N OP Q LM N O P Q .22 Obtain the cubic spline approximation for the following data. or M1 = 33. i = 1.104 Mi–1 + 4Mi + Mi+1 = 6 h2 ( fi + 1 − 2 fi + fi − 1 ) NUMERICAL METHODS (2. For i = 1. Since. we get 4M1 = 6[29 – 2(3) – 1] = 132. Further. Mi–1 + 4Mi + Mi+1 = 6(fi+1 – 2fi + fi–1). 1]: F(x) = = = On [1. 2. M0 = 0. We obtain from (2.

5 lies in the interval (0. we get M0 + 4M1 + M2 = 6(f2 – 2f1 + f0) = 6(33 – 4 + 1) = 180. we obtain F(0. 1 1 1 [( x1 − x) 3 M 0 + ( x − x0 ) 3 M 1 ] + ( x1 − x) f0 − M 0 + (x – x0) f1 − M1 6 6 6 1 3 1 x (− 24) + (1 − x) + x 2 − (− 24) = − 4 x 3 + 5 x + 1 .5 lies in the interval (1. For i = 1. we obtain F(1.5) − 68] = . 1]: F(x) = = On [1. we get M1 + 4M2 + M3 = 6(f3 – 2f2 + f1) = 6(244 – 66 + 2) = 1080. 2 8 2 16 1 223 [11(2 − 15) 3 + 63(1. 2. M0 = 0.5. 0.23 Obtain the cubic spline approximation for the following data. The cubic splines in the corresponding intervals are as follows. 2]: F(x) = = M1 = – 24. interpolate at x = 2. 1). Since. x f(x) 0 1 1 2 2 33 3 244 with M0 = 0. 2 16 Since. Mi–1 + 4Mi + Mi+1 = 6(fi+1 – 2fi + fi–1) i = 1. 6 6 1 1 [( x2 − x)3 M 1 + ( x − x1 ) 3 M2 ] + ( x2 − x) f1 − M1 + (x – x1) f2 − 1 M 2 6 6 6 1 [(2 – x)3 (– 24) + (x – 1)3 6 LM N OP Q LM N OP Q LM N O P Q LM OP LM OP N Q N Q L 1 O L 1 O (276)] + (2 – x) M2 − ( − 24) P + ( x − 1) M33 − (276) P N 6 Q N 6 Q . M3 = 0. M2 = 276.63). 1.INTERPOLATION AND APPROXIMATION 105 = 11 63 (2 − x) 3 + x – 34. . Hence. M1 + 4M2 = 1080. The solution is On [0. M3 = 0.5) = Example 2. Solution We have equispaced data with h = 1. 2). we get 4M1 + M2 = 180.5) = F G H IJ K Since. For i = 2. We obtain from (2. 2 2 1 11 3 17 − −2 =− .

REVIEW QUESTIONS

1. Write the relation between the second derivatives Mᵢ(x) in cubic splines with equal mesh spacing.
Solution The required relation is
    Mᵢ₋₁ + 4Mᵢ + Mᵢ₊₁ = (6/h²)(fᵢ₊₁ – 2fᵢ + fᵢ₋₁), i = 1, 2, ..., n – 1.
2. Write the end conditions on Mᵢ(x) in natural cubic splines.
Solution The required conditions are M₀(x) = 0, Mₙ(x) = 0.
3. What are the advantages of cubic spline fitting?
Solution Splines provide better approximation to the behaviour of functions that have abrupt local changes. Further, splines perform better than higher order polynomial approximations.

EXERCISE 2.3

Are the following functions cubic splines?
1. f(x) = x³ – 2x + 3, 0 ≤ x ≤ 1;   = 2x³ – 3x² + x + 2, 1 ≤ x ≤ 2.
2. f(x) = 5x³ – 3x², 0 ≤ x ≤ 1;   = 2x³ + 6x² – 9x + 4, 1 ≤ x ≤ 2.

3. f(x) = 3x² + x + 1, 0 ≤ x ≤ 1;   = x³ + 3x² + 1, 1 ≤ x ≤ 2.
4. f(x) = x³ – 5x + 1, 0 ≤ x ≤ 1;   = 3x² + 1, 1 ≤ x ≤ 2.
5. Find the values of α, β such that the given function is a cubic spline.
    f(x) = αx³ + βx² + 2x, – 1 ≤ x ≤ 0;   = 3x³ + x² + 2x, 0 ≤ x ≤ 1.
6. Fit the following four points by the cubic spline using natural spline conditions M(1) = 0, M(4) = 0. Hence, interpolate at x = 1.5.

    x    : 1   2   3    4
    f(x) : 1   5   11   8

7. Fit the following four points by the cubic spline using natural spline conditions M(1) = 0, M(4) = 0.

    x    : 1   2   3   4
    f(x) : 0   1   0   0

8. The following values of x and y are given. Find the cubic splines and evaluate y(1.5).

    x : 1   2   3   4
    y : 1   2   5   11
                                                        (A.U. Nov/Dec 2004)
9. Obtain the cubic spline approximation valid in the interval [3, 4], for the function given in the tabular form, under the natural cubic spline conditions f ″(1) = M(1) = 0, and f ″(4) = M(4) = 0.

    x    : 1   2    3    4
    f(x) : 3   10   29   65

10. Obtain the cubic spline approximation valid in the interval [1, 2], for the function given in the tabular form, under the natural cubic spline conditions f ″(0) = M(0) = 0, and f ″(3) = M(3) = 0. Hence, interpolate at x = 1.5.

    x    : 0   1   2    3
    f(x) : 1   4   10   8

2.5 ANSWERS AND HINTS

Exercise 2.1

2. (– 1)ⁿ/(x₀x₁ ... xₙ). 3. – [x₂x₃(x₁ + x₄) + x₁x₄(x₂ + x₃)]/[x₁²x₂²x₃²x₄²]. Hint: for f(x) = xⁿ⁺¹, f⁽ⁿ⁺¹⁾(ξ) = (n + 1)!. Other answers (in order): x³ + x² – x + 2; x³ – 9x² + 21x + 1; (3x³ – 70x² + 557x – 690)/60, 44/3; x³ + x², 810; (x³ – 9x² + 32x – 24)/2; (x³ – 5x² + 8x – 2)/2; 10x³ – 27x² + 3x + 35; 3x³ + 7x² – 4x – 32; 2x³ – 4x² + 4x + 1; 8x² – 19x + 12; (3x – x²)/2; – (x² – 200x + 6060)/32, profit in the year 2000 = 100, roots of f(x) = 0 are x = 162.7694 and x = 37.2306; P(70) ≈ 424.

Exercise 2.2

2. (– x³ + 21x² – 126x + 240)/8; y(5) = 1.25. 3. (x³ + 13x² + 56x + 28)/2; f(2) = 100. 4. 2x³ – 7x² + 6x + 1; y(4) = 41; the backward difference polynomial is the same. 5. (– x³ – 3x² + 16x – 6)/6. 6. 0.1745. 7. x² + 2x – 3. 8. Number of persons with wages between 60 and 70 = (persons with wages ≤ 70) – (persons with wages ≤ 60) = 424 – 370 = 54. 9. 9.5037; exact value: 9.4877. 10. 0.0708; exact value: 0.0707. 11. (a) [cos (0.12)/0.12] = 8.5534; (b) cos (0.12) = 0.9928, [cos (0.12)/0.12] = 8.2734. Differences in (b) decrease very fast. Hence, results from (b) will be more accurate.

Exercise 2.3

1. Cubic spline. 2. f(x) is not continuous; not a cubic spline. 3. f ′(x) is not continuous; not a cubic spline. 4. f(x) is not continuous; not a cubic spline. 5. f(x) is a cubic spline for β = 1 and all values of α. 6. M₁ = 34/5, M₂ = – 76/5; spline in [1, 2]: (17x³ – 51x² + 94x – 45)/15; y(1.5) = 2.575. 7. M₁ = – 18/5, M₂ = 12/5; spline in [1, 2]: (– 3x³ + 9x² – x – 5)/5; spline in [3, 4]: 2(– x³ + 12x² – 47x + 60)/5. 8. M₁ = 2, M₂ = 4; spline in [1, 2]: (x³ – 3x² + 5x)/3; y(1.5) = 1.375. 9. M₁ = 62/5, M₂ = 112/5; spline in [3, 4]: (– 112x³ + 1344x² – 4184x + 4350)/30. 10. M₁ = 8, M₂ = – 14; spline in [1, 2]: (– 22x³ + 90x² – 80x + 36)/6; y(1.5) = 7.375.

3. Numerical Differentiation and Integration

3.1 INTRODUCTION

We assume that a function f(x) is given in a tabular form at a set of n + 1 distinct points x₀, x₁, ..., xₙ. From the given tabular data, we require approximations to the derivatives f⁽ʳ⁾(x′), r ≥ 1, where x′ may be a tabular or a non-tabular point. We consider the cases r = 1, 2.
In many applications of science and engineering, we require to compute the value of the definite integral ∫ₐᵇ f(x) dx, where f(x) may be given explicitly or as a tabulated data. Even when f(x) is given explicitly, it may be a complicated function such that integration is not easily carried out.
In this chapter, we shall derive numerical methods to compute the derivatives or evaluate an integral numerically.

3.2 NUMERICAL DIFFERENTIATION

Approximations to the derivatives can be obtained numerically using the following two approaches:
(i) Methods based on finite differences for equispaced data.
(ii) Methods based on divided differences or Lagrange interpolation for non-uniform data.

3.2.1 Methods Based on Finite Differences

3.2.1.1 Derivatives Using Newton’s Forward Difference Formula

Consider the data (xᵢ, f(xᵢ)) given at equispaced points xᵢ = x₀ + ih, i = 0, 1, 2, ..., n, where h is the step length. The Newton’s forward difference formula is given by

f(x) = f(x0) + (x – x0) (∆f0)/(1! h) + (x – x0)(x – x1) (∆²f0)/(2! h²) + ...
+ (x – x0)(x – x1) ... (x – xn–1) (∆ⁿf0)/(n! hⁿ).   (3.1)

Set x = x0 + sh. Then, (3.1) becomes

f(x) = f(x0 + sh) = f(x0) + s∆f0 + (1/2!) s(s – 1) ∆²f0 + (1/3!) s(s – 1)(s – 2) ∆³f0
+ (1/4!) s(s – 1)(s – 2)(s – 3) ∆⁴f0 + (1/5!) s(s – 1)(s – 2)(s – 3)(s – 4) ∆⁵f0 + ...
+ (1/n!) s(s – 1)(s – 2) ... (s – n + 1) ∆ⁿf0.   (3.2)

Note that s = [x – x0]/h > 0. The magnitudes of the successive terms on the right hand side become smaller and smaller.

Differentiating (3.2) with respect to x, we get

df/dx = (df/ds)(ds/dx) = (1/h)(df/ds)

= (1/h) [∆f0 + (1/2)(2s – 1) ∆²f0 + (1/6)(3s² – 6s + 2) ∆³f0 + (1/24)(4s³ – 18s² + 22s – 6) ∆⁴f0
+ (1/120)(5s⁴ – 40s³ + 105s² – 100s + 24) ∆⁵f0 + ...]   (3.3)

At x = x0, that is, at s = 0, we obtain the approximation to the derivative f′(x) as

f′(x0) = (1/h) [∆f0 – (1/2) ∆²f0 + (1/3) ∆³f0 – (1/4) ∆⁴f0 + (1/5) ∆⁵f0 – ...]   (3.4)

Differentiating (3.3) with respect to x, we get

d²f/dx² = (1/h)(d/ds)(df/ds)(ds/dx) = (1/h²)(d/ds)(df/ds)

= (1/h²) [∆²f0 + (1/6)(6s – 6) ∆³f0 + (1/24)(12s² – 36s + 22) ∆⁴f0
+ (1/120)(20s³ – 120s² + 210s – 100) ∆⁵f0 + ...]   (3.5)

At x = x0, that is, at s = 0, we obtain the approximation to the second derivative f″(x) as

f″(x0) = (1/h²) [∆²f0 – ∆³f0 + (11/12) ∆⁴f0 – (5/6) ∆⁵f0 + (137/180) ∆⁶f0 – ...]   (3.6)

We use formulas (3.4) and (3.6) when the entire data is to be used.

Very often, we may require only lower order approximations to the derivatives. Taking a few terms in (3.4), we get the following approximations.

Taking one term in (3.4), we get

f′(x0) = (1/h) ∆f0 = (1/h)[f(x1) – f(x0)], or, in general,

f′(xk) = (1/h) ∆fk = (1/h)[f(xk+1) – f(xk)].   (3.7)

Taking two terms in (3.4), we get

f′(x0) = (1/h) [∆f0 – (1/2) ∆²f0] = (1/h) [{f(x1) – f(x0)} – (1/2){f(x2) – 2f(x1) + f(x0)}]

= (1/2h) [– 3f(x0) + 4f(x1) – f(x2)]   (3.8)

or, in general,

f′(xk) = (1/2h) [– 3f(xk) + 4f(xk+1) – f(xk+2)].   (3.9)

Similarly, we have the approximation for f″(x0) as

f″(x0) = (1/h²) ∆²f0 = (1/h²) [f(x2) – 2f(x1) + f(x0)]

or, in general,

f″(xk) = (1/h²) [f(xk+2) – 2f(xk+1) + f(xk)].   (3.10)

Errors of approximations. Using Taylor series approximations, we obtain the error in the formula (3.7) for f′(x) at x = xk as

E(f, xk) = f′(xk) – (1/h) [f(xk + h) – f(xk)]

= f′(xk) – (1/h) [{f(xk) + hf′(xk) + (h²/2) f″(xk) + ...} – f(xk)]

= – (h/2) f″(xk) + ...   (3.11)

The error is of order O(h), or the formula is of first order.

The error in the formula (3.8) for f′(x) at x = xk is obtained as

E(f, xk) = f′(xk) – (1/2h) [– 3 f(xk) + 4f(xk+1) – f(xk+2)]

= f′(xk) – (1/2h) [– 3f(xk) + 4{f(xk) + hf′(xk) + (h²/2) f″(xk) + (h³/6) f″′(xk) + ...}
– {f(xk) + 2hf′(xk) + ((2h)²/2) f″(xk) + ((2h)³/6) f″′(xk) + ...}]

= (h²/3) f″′(xk) + ...   (3.12)

The error is of order O(h²), or the formula is of second order.

The error in the formula (3.10) for f″(x) at x = xk is obtained as

E(f, xk) = f″(xk) – (1/h²) [f(xk + 2h) – 2 f(xk + h) + f(xk)]

= f″(xk) – (1/h²) [{f(xk) + 2hf′(xk) + ((2h)²/2) f″(xk) + ((2h)³/6) f″′(xk) + ...}
– 2{f(xk) + hf′(xk) + (h²/2) f″(xk) + (h³/6) f″′(xk) + ...} + f(xk)]

= – h f″′(xk) + ...   (3.13)

The error is of order O(h), or the formula is of first order.

Remark 1 It can be noted that on the right hand side of the approximation to f′(x), we have the multiplying factor 1/h, and on the right hand side of the approximation to f″(x), we have the multiplying factor 1/h². Since h is small, this implies that we may be multiplying by a large number. For example, if h = 0.01, the multiplying factor on the right hand side of the approximation to f′(x) is 100, while the multiplying factor on the right hand side of the approximation to f″(x) is 10000. Therefore, the round-off errors in the values of f(x), and hence in the forward differences, when multiplied by these multiplying factors may seriously affect the solution and the numerical process may become unstable.

Remark 2 Numerical differentiation must be done with care. When a data is given, we do not know whether it represents a continuous function or a piecewise continuous function. It is possible that the function may not be differentiable at some points in its domain. What happens if we try to find the derivatives at these points where the function is not differentiable? For example, if f(x) = | x |, and a data is prepared in the interval [– 1, 1], what value do we get when we try to find f′(x) at x = 0, where the function is not differentiable? This is one of the drawbacks of numerical differentiation.

Remark 3 We use the forward difference formulas for derivatives, when we need the values of the derivatives at points near the top of the table of the values.
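Formula (3.4) is easy to put to work on a small table. The sketch below is our own illustration, not part of the text, and the helper names are invented; it builds the leading forward differences ∆f0, ∆²f0, ... of a tabulated function and sums the alternating series for f′(x0), checked on f(x) = x³, for which f′(1) = 3.

```python
# Hypothetical helper names; a sketch of formula (3.4), not the book's code.

def forward_differences(y):
    """Leading entries f0, delta f0, delta^2 f0, ... of the forward difference table."""
    table = [list(y)]
    while len(table[-1]) > 1:
        prev = table[-1]
        table.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])
    return [col[0] for col in table]

def derivative_at_x0(y, h):
    # f'(x0) = (1/h)[delta f0 - (1/2) delta^2 f0 + (1/3) delta^3 f0 - ...]   (3.4)
    d = forward_differences(y)
    return sum((-1) ** (k + 1) * d[k] / k for k in range(1, len(d))) / h

# f(x) = x^3 tabulated at x = 1, 2, 3, 4 (h = 1); exact f'(1) = 3.
print(derivative_at_x0([1, 8, 27, 64], 1.0))   # 3.0
```

For polynomial data the truncated series is exact, which is why the cubic is reproduced; for general data the truncation error and the round-off amplification discussed in the remarks above both apply.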

Example 3.1 Find dy/dx at x = 1 from the following table of values

x : 1, 2, 3, 4
y : 1, 8, 27, 64

Solution We have the following forward difference table.

Forward difference table.

x      y      ∆y     ∆²y    ∆³y
1      1
              7
2      8             12
              19             6
3      27            18
              37
4      64

We have h = 1, x0 = 1, and x = x0 + sh = 1 + s. For x = 1, we get s = 0. Therefore,

dy/dx at x = 1: (1/h) [∆f0 – (1/2) ∆²f0 + (1/3) ∆³f0] = 7 – (1/2)(12) + (1/3)(6) = 3.

Example 3.2 Using the operator relations, derive the approximations to the derivatives f′(x0) and f′′(x0) in terms of forward differences.

Solution From the operator relation E = ehD, where D = d/dx, we obtain

hDf(x0) = log E [f(x0)] = log (1 + ∆) f(x0) = [∆ – ∆²/2 + ∆³/3 – ∆⁴/4 + ∆⁵/5 – ...] f(x0)

or f′(x0) = (1/h) [∆f0 – (1/2) ∆²f0 + (1/3) ∆³f0 – (1/4) ∆⁴f0 + (1/5) ∆⁵f0 – ...]

h²D² f(x0) = [log (1 + ∆)]² f(x0) = [∆ – ∆²/2 + ∆³/3 – ∆⁴/4 + ∆⁵/5 – ...]² f(x0)

928 6.6 – 0.2 2 3 4 5 OP Q .120 ∆f ∆2 f ∆3 f ∆4 f ∆5 f We have the following results: f ′(x0) = f ′ (3.400 1.040 – 0.768) + (− 0.584 – 3.0) = 1 1 1 1 1 ∆f0 − ∆2 f0 + ∆3 f0 − ∆4 f0 + ∆5 f0 h 2 3 4 5 LM N 1 L 11 5 f ″(x ) = MN∆ f − ∆ f + 12 ∆ f − 6 ∆ f OPQ h 1 L 11 5 f ″ (3.768 – 0.0 f(x) – 14 3.968 − (0.968 – 10.072 0.0) = MN0.032 3.120) = 9.0 – 14 3. Example 3.3. For x = 3.672 [A.4 3. Forward difference table.256 3. x 3. 12 6 180 or f ″(x0) = 1 h2 Example 3.4 – 5.768 + 0.488 0. we get s = 0.2).8 LM∆ f N 2 OP Q 0 − ∆3 f0 + OP Q 4.328 14 0.12)OPQ = 184.U.04 0 2 2 0 3 0 4 0 5 0 LM N OP Q 1 1 1 1 1 3..4667. April/May 2005] Solution We have h = 0.4 0.464 2.048) − 6 (− 5.2 3.304 1. We have the following difference table.048) + (− 5..256 6.0 3.2 – 10.0 14 6.464 + 12 (2.3 Find f ′(3) and f ″(3) for the following data: x f(x) 3.296 3.464) − (2.0 + s(0.114 2 3 = ∆ −∆ + NUMERICAL METHODS LM N 11 4 5 5 137 6 ∆ − ∆ + ∆ + .2 and x = x0 + sh = 3.048 – 5. f ( x0 ) 12 6 180 11 4 5 137 6 ∆ f0 − ∆5 f0 + ∆ f0 − . 0.736 – 5.888 –1.032 4..672 7..296 5.8 4.6 3.

6 0.0000 0.3).4 The following data represents the function f(x) = e2x.0000 0.3 1.2315 1.7295 6.3201 2.0496 1.8221 1.3201 0.3 and x = x0 + sh = 0. x 0.0126 0.0232 2. Using the forward differences and the entire data.8221 + ( 0. f ′(x0 + sh) = f ′(x0 + h) = 1 1 1 1 4 ∆f0 + ∆2 f0 − ∆3 f0 + ∆ f0 h 2 6 12 f ′(0.3) = 1 1 ∆ f(0.9933.3 0. 0. 2h From (3.3 2 6 12 LM N LM N OP Q OP Q The first order approximation gives f ′(0.3 1 [– 3 f(xk) + 4 f(xk+1) – f(xk+2)].8221 1.6759 0. x f (x) 0.3) = [f (0. find the first order and second order approximations to f′(0.3) = 1 1 1 1 0. we have the following approximation for s = 1.4980 [3.0 1.3).5556 0.9).6 3.4570) = 3.6851.3).3201 – 1. compute the approximation to f′(0.3).4. 0.NUMERICAL DIFFERENTIATION AND INTEGRATION 115 Example 3.4570 ∆f ∆2 f ∆3 f ∆4 f From (3. Also.2 f(x) 1.5556) + ( 0.0232 Solution The step length is h = 0.0496 4.8221 0.9 6. Compute the magnitudes of actual errors in each case.3 0.0 0.3) using the entire data and the first order approximation.3. f ′(xk) = .6759) − ( 0. We have the following forward difference table.4980 3.0 + s(0.3)] h 0.3 = 1 1. Compute the approximation to f′′(0.6) – f(0. we get s = 1. For x = 0.2 11. Example 3.2441 1.8221] = = 4. Forward difference table.9 1.9736 11.

| 7. From (3. | 3.09 12 LM N LM∆ f N 2 0 − 1 4 ∆ f0 12 OP Q OP Q The first order approximation gives f ″(0.6442. The step length is h = 2.3949.6442 | = 0.8221) = 3.7034.9408 – 3.3) = 2e0.6442 | = 1.116 We get the second order approximation as f ′(0.6) + f(0.3) = 1 [f(0.6833.0409.2884 | = 0.0869 – 7.9408.6 1 [– 3(1.6833 – 7.6) – f(0.3) = 1 h 2 1 1 0.3) = = The exact value is NUMERICAL METHODS 1 [– 3 f(0.2884 | = 6.9933 – 3.6 = 2(1.0496 – 2(3. Time (sec) Velocity (m/sec) 0 0 2 172 4 1304 6 4356 8 10288 Solution If v is the velocity. | 4.3)] h2 1 [6. 0. f ″(x0 + sh) = f ″(x0 + h) = f ″(0.3201) – 6.6 = 7. 0.5 The following data gives the velocity of a particle for 8 seconds at an interval of 2 seconds.6851 – 3.2884 The errors in the approximations are as follows: First order approximation: Full data: | 13.6759 − (0. then initial acceleration is given by FG dv IJ H dt K . Example 3.2015. 0.4570) = 7.8221) + 4(3. t =0 We shall use the forward difference formula to compute the first derivative at t = 0.8221] = 13.6 f′(0. We form the forward difference table for the given data. Find the initial acceleration using the entire data.3) = = 1 h 2 ∆2 f(0.0869 .5). .3201) + 1.9)] 0.3491.0496] = 2.9) – 2f(0.3) = 4e0. we have the following approximation for s = 1.3) + 4 f(0.09 The exact value is f ″(0. The errors in the approximations are as follows: First order approximation: Full data: Second order approximation: | 2.6442 | = 0.

Forward difference table.

x      f(x)       ∆f       ∆²f      ∆³f      ∆⁴f
0      0
                 172
2      172                960
                1132                960
4      1304               1920                0
                3052                960
6      4356               2880
                5932
8      10288

We have the following result:

f′(x0) = (1/h) [∆f0 – (1/2) ∆²f0 + (1/3) ∆³f0 – (1/4) ∆⁴f0]

f′(0) = (1/2) [172 – (1/2)(960) + (1/3)(960) – (1/4)(0)] = 6.

The initial acceleration is 6 m/sec².

3.2.1.2 Derivatives Using Newton's Backward Difference Formula

Consider the data (xi, f(xi)) given at equispaced points xi = x0 + ih, i = 0, 1, 2, ..., n, where h is the step length. The Newton's backward difference formula is given by

f(x) = f(xn) + (x – xn) (1/(1! h)) ∇f(xn) + (x – xn)(x – xn–1) (1/(2! h²)) ∇²f(xn) + ...
+ (x – xn)(x – xn–1) ... (x – x1) (1/(n! hⁿ)) ∇ⁿf(xn).   (3.14)

Let x be any point near xn. Let x – xn = sh. Then, the formula simplifies as

f(x) = f(xn + sh) = f(xn) + s∇f(xn) + (s(s + 1)/2!) ∇²f(xn) + (s(s + 1)(s + 2)/3!) ∇³f(xn)
+ (s(s + 1)(s + 2)(s + 3)/4!) ∇⁴f(xn) + (s(s + 1)(s + 2)(s + 3)(s + 4)/5!) ∇⁵f(xn) + ...
+ (s(s + 1)(s + 2) ... (s + n – 1)/n!) ∇ⁿf(xn).   (3.15)

Note that s = [(x – xn)/h] < 0. The magnitudes of the successive terms on the right hand side become smaller and smaller.

Differentiating (3.15) with respect to x, we get

df/dx = (df/ds)(ds/dx) = (1/h)(df/ds)

= (1/h) [∇fn + (1/2)(2s + 1) ∇²fn + (1/6)(3s² + 6s + 2) ∇³fn + (1/24)(4s³ + 18s² + 22s + 6) ∇⁴fn
+ (1/120)(5s⁴ + 40s³ + 105s² + 100s + 24) ∇⁵fn + ...]   (3.16)

At x = xn, we get s = 0. Hence, we obtain the approximation to the first derivative f′(xn) as

f′(xn) = (1/h) [∇fn + (1/2) ∇²fn + (1/3) ∇³fn + (1/4) ∇⁴fn + (1/5) ∇⁵fn + ...]   (3.17)

At x = xn–1, we have xn–1 = xn – h = xn + sh. We obtain s = – 1. Hence, the approximation to the first derivative f′(xn–1) is given by

f′(xn–1) = (1/h) [∇fn – (1/2) ∇²fn – (1/6) ∇³fn – (1/12) ∇⁴fn – (1/20) ∇⁵fn + ...]   (3.18)

Differentiating (3.16) with respect to x again, we get

d²f/dx² = (1/h)(d/ds)(df/ds)(ds/dx) = (1/h²)(d/ds)(df/ds)

= (1/h²) [∇²fn + (1/6)(6s + 6) ∇³fn + (1/24)(12s² + 36s + 22) ∇⁴fn
+ (1/120)(20s³ + 120s² + 210s + 100) ∇⁵fn + ...]   (3.19)

At x = xn, that is, at s = 0, we obtain the approximation to the second derivative f″(x) as

f″(xn) = (1/h²) [∇²fn + ∇³fn + (11/12) ∇⁴fn + (5/6) ∇⁵fn + (137/180) ∇⁶fn + ...]   (3.20)

At x = xn–1, we get s = – 1. Hence, we obtain the approximation to the second derivative f″(xn–1) as

f″(xn–1) = (1/h²) [∇²fn – (1/12) ∇⁴fn – (1/12) ∇⁵fn + ...]   (3.21)

We use the formulas (3.17) and (3.20) when the entire data is to be used.
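As a check on (3.17) and (3.20), the same idea can be scripted with the trailing (backward) differences. This is a sketch of our own, not part of the text, and the helper names are invented; it is verified on f(x) = x³, where f′(4) = 48 and f″(4) = 24.

```python
# Hypothetical helper names; a sketch of formulas (3.17) and (3.20).

def backward_differences(y):
    """Trailing entries fn, nabla fn, nabla^2 fn, ... of the backward difference table."""
    table = [list(y)]
    while len(table[-1]) > 1:
        prev = table[-1]
        table.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])
    return [col[-1] for col in table]

def d1_at_xn(y, h):
    # f'(xn) = (1/h)[nabla fn + (1/2) nabla^2 fn + (1/3) nabla^3 fn + ...]   (3.17)
    d = backward_differences(y)
    return sum(d[k] / k for k in range(1, len(d))) / h

def d2_at_xn(y, h):
    # f''(xn) = (1/h^2)[nabla^2 fn + nabla^3 fn + (11/12) nabla^4 fn + ...]   (3.20)
    coeff = [1.0, 1.0, 11.0 / 12.0, 5.0 / 6.0, 137.0 / 180.0]
    d = backward_differences(y)
    return sum(c * dk for c, dk in zip(coeff, d[2:])) / h ** 2

y = [1, 8, 27, 64]          # f(x) = x^3 at x = 1, 2, 3, 4
print(d1_at_xn(y, 1.0))     # 48.0  (exact 3 * 4^2)
print(d2_at_xn(y, 1.0))     # 24.0  (exact 6 * 4)
```

The only change from the forward-difference version is that each column contributes its last entry instead of its first, matching Remark 4: these formulas are the natural choice near the end of the table.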

Remark 4 We use the backward difference formulas for derivatives, when we need the values of the derivatives at points near the end of the table of values.

Example 3.6 Using the operator relation, derive approximations to the derivatives f′(xn), f″(xn) in terms of the backward differences.

Solution From the operator relation E = ehD, where D = d/dx, we obtain

hDf(xn) = [log E] f(xn) = log [(1 – ∇)–1] f(xn) = – log (1 – ∇) f(xn)
= [∇ + ∇²/2 + ∇³/3 + ∇⁴/4 + ∇⁵/5 + ...] f(xn)

or f′(xn) = (1/h) [∇fn + (1/2) ∇²fn + (1/3) ∇³fn + (1/4) ∇⁴fn + (1/5) ∇⁵fn + ...]

h²D² f(xn) = [log (1 – ∇)]² f(xn) = [∇ + ∇²/2 + ∇³/3 + ∇⁴/4 + ∇⁵/5 + ...]² f(xn)
= [∇² + ∇³ + (11/12) ∇⁴ + (5/6) ∇⁵ + ...] f(xn)

or f″(xn) = (1/h²) [∇²fn + ∇³fn + (11/12) ∇⁴fn + (5/6) ∇⁵fn + ...]

Example 3.7 Find f′(3) using the Newton's backward difference formula, for the data

x : 1.0, 1.5, 2.0, 2.5, 3.0
f(x) : – 1.5, – 2.875, – 3.5, – 2.625, 0.5

Solution The step length is h = 0.5 and x = xn + sh = 3.0 + s(0.5). For x = 3.0, we get s = 0. We have the following backward difference table.

Backward difference table.

x      f(x)        ∇f        ∇²f      ∇³f      ∇⁴f
1.0    – 1.5
                 – 1.375
1.5    – 2.875              0.75
                 – 0.625              0.75
2.0    – 3.5                1.5                 0
                   0.875              0.75
2.5    – 2.625              2.25
                   3.125
3.0      0.5

Example 3.0 3.120 From the formula f ′(xn) = we obtain f ′(3) = NUMERICAL METHODS 1 1 1 1 1 ∇fn + ∇ 2 f n + ∇ 3 f n + ∇ 4 f n + ∇ 5 fn + .8.7183 1. 0.5.9675. we get s = – 1. The magnitude of the error in the solution is | Error | = | 12.5 2. 0.7934 13..5 2 3 LM N OP Q OP Q The exact value is f ′(2.7183 1. .7420 ∇f ∇2 f ∇3 f From the formula f ′(xn) = we obtain f ′(2. .5) using the Newton’s backward difference method. for the data of the function f(x) = ex + 1.1875 – 11.9074 8. Solution The step length is h = 0.75) = 9.5 5.4817 2.9675 | = 0. From the formula f ′(xn–1) = 1 1 1 1 4 1 5 ∇ fn − ∇ 2 f n − ∇ 3 f n − ∇ fn − ∇ fn + .0.5 f(x) 3.. The backward difference table is given below. we get s = 0.5).5 2 3 LM N OP Q OP Q Example 3.2150.5 13.8860 1.5) = 1 1 1 1 1 ∇f n + ∇ 2 fn + ∇ 3 fn + ∇ 4 fn + ∇ 5 fn + ..0 2.3891 4. 2 6 12 20 h LM N OP Q .7634 5.0 8.125 + (2. f ′(2) and f ″(2. x 1.5 and x = xn + sh = 2.7420) = 11..7934 + (18860) + (0. 2 3 4 5 h LM N 1 1 1 3. For x = 2. 2 3 4 5 h LM N 1 1 1 4.4817 2.8 Find f ′(2. For x = 2. x f(x) 1..1875.5 + s(0.0 1. . .1825 1. Backward difference table.1825 Find the magnitudes of the actual errors.5).25) + (0..1440 0.5) = e2.3891 2.5 = 12.

5556 0.9736 11.5) = e2.2).3 0. Backward difference table.8660 + 0..3891 – 7.3201 0.5.2 11.9 6. 12 6 OP Q 1 [1.6705.2) f ′(0.6759 0.3 1.2315 1..3201 2.0644. x 0.4570 ∇f ∇2 f ∇3 f ∇4 f .4980 3.0232 Find f ′(1.5 2 6 1 h 2 LM∇ f N 2 n + ∇ 3 fn + 11 4 5 ∇ fn + ∇5 fn + . The magnitude of the error in the solution is | Error | = | 12.9) and f ″(1.9 The following data represents the function f(x) = e2x. Example 3.5 1 1 1 ∇ f n − ∇ 2 fn − ∇ 3 fn 0. The magnitude of the error in the solution is | Error | = | 7.0000 0.6 3.8221 1.5120.0126 0. .3.25 The exact value is f ′′(2.5120 | = 1.4535. For x = 2.2441 1. 0. Solution The step length is h = 0.6 0.0000 0.0 0.7295 6.8860) − 6 (0.4535 | = 0.7420] = 10.0496 1.NUMERICAL DIFFERENTIATION AND INTEGRATION 121 we get f ′(2) = The exact value is f ′(2) = e2 = 7.1875 – 10.7934 − 2 (1.0 1.2 f(x) 1.5) = LM OP N Q 1 L 1 1 = MN4.1875. We have the following backward difference table. Example 3. 0. Compute the magnitudes of the errors.9 1. using the Newton’s backward difference method.3891. x f(x) 0. we get s = 0.9.7420)OPQ = 7.0232 2.8221 1. From the formula f ″(xn) = we get f ′′(2.5 = 12.8221 0.0496 4.

2 + s(0..9736 − 2 (2.0126) − 12 (0.2.2441) + 3 (1.3 Derivatives Using Divided Difference Formula The divided difference interpolation polynomial fitting the data (xi.0126 + (0.9736 + 2 (2.0464. ...2) = 0.8248.4 = 22. h 2 6 12 20 | Error | = | 12. + (x – x0)(x – x1) (x – x2) f [x0.22) .3).2441 + 1.3 1 1 1 1 4 1 5 ∇fn − ∇ 2 fn − ∇3 fn − ∇ fn − ∇ fn + ..1. (x – xn–1) f [x0.3). we get for x = 1.4570) P = 40. x1. s = 0..09 M 12 N Q 1 2 2 n 3 n 4 n 5 n The exact value is f ″(1.0497.4570)OPQ = 21.2 + s(0.OP . x1.. x1] + (x – x0)(x – x1) f [x0.2441) − 6 (1.0464 – 21.3).3 1 1 1 1 1 ∇fn + ∇ 2 fn + ∇ 3 fn + ∇ 4 fn + ∇ 5 fn + . f ″(1. s = – 1. h 2 3 4 5 LM N The exact value is f ′(1.2) = 4e2. + (x – x0)(x – x1) . 12 6 Q h N 1 L 11 O 2.2) = 2e2..0126) + 4 (0. Using the formula f ″ (xn) = we get LM∇ f + ∇ f + 11 ∇ f + 5 ∇ f + .9) = MN4. i = 0. 0.. x2] + .4570)OPQ = 12. xn] (3.. 3.0993.1490.2. The magnitude of the error is From x = xn + sh = 1. Using the formula f ′(xn–1) = we get OP Q 1 L 1 1 1 f ′(0.8248 | = 0.2. we get for x = 0.0927 | = 3.8 = 12. f(xi))..…. we get for x = 1.2525.0993 – 12...122 NUMERICAL METHODS From x = xn + sh = 1. 2. The magnitude of the error is | Error | = | 40.1490 | = 0. x1. The magnitude of the error is | Error | = | 22.2216.2 + s(0.4 = 44..9) = 2e1. LM N The exact value is f ′(0. Using the formula f ′(xn) = we get OP Q 1 L 1 1 1 f ′(1. s = 0. n is given by f(x) = f(x0) + (x – x0) f [x0. From x = xn + sh = 1. x2] + .9.8402 – 44.2) = MN4. 1.. . 0..0927.8402.

x1.09861 [A. x1.113267 – 0.10 Find the first and second derivatives at x = 1.23) again. x1] + [(x – x0) + (x – x1)] f [x0. x3] . if the data is equispaced. x1] + [(x – x0) + (x – x1)] f [x0.81094 0. Example 3.0 0. x3.69315 0.23).24).40547 0.69315 3. we obtain f ″(x) = 2f [x0. x3.10.235580 0. Differentiating (3.00000 0. x 1.NUMERICAL DIFFERENTIATION AND INTEGRATION 123 Differentiating with respect to x.24) If the second derivative f ″(x) is required at any point x = then we substitute x = x* in (3. we get f ′(x) = f [x0. (3. then we substitute x = x* in (3. x1.40547 2. If the data is equispaced. x1. 2005] Solution The data is not equispaced.. x2. x1. x3] + [(x – x1)(x – x2)(x – x3) + (x – x0)(x – x2)(x – x3) + (x – x0)(x – x1) (x – x3) + (x – x0)(x – x1)(x – x2)] f [x0. we can also determine the Newton’s divided differences interpolation polynomial and differentiate it to obtain f ′(x) and f ″(x). x1. then the formula is simplified.0 1. x1. x2] + 2[(x – x0) + (x – x1) + (x – x2)] f [x0./Dec.. x2.0 1.d Substituting x = 1. x2..0 0.0 1.0 3. Nov. x*. Again. x2] + [(x – x1)(x – x2) + (x – x0)(x – x2) + (x – x0)(x – x1)] f [x0. However. We use the divided difference formulas to find the derivatives. x4] + .5 2.23) If the derivative f ′(x) is required at any particular point x = x*. x2. Example 3.5 0.09861 – 0.061157 First d. x3] + 2[(x – x0)(x – x1) + (x – x0)(x – x2) + (x – x0)(x – x3) + (x – x1)(x – x2) + (x – x1)(x – x3) + (x – x2)(x – x3)] f [x0.d Third d.57536 0.d Second d. for the function represented by the following tabular data: x f(x) 1. x1. We have the following difference table: Divided differences table. then the formula is simplified. x2] + [(x – x1)(x – x2) + (x – x0)(x – x2) + (x – x0)(x – x1)] f [x0.. x4] + .40546 1.6.6 in the formula f ′(x) = f [x0.U. x2. (3.0 f(x) 0.

23558) + 2[(1.7 (– 0.81094 + 0. f(xi)).061157) = 0.43447..01. we do not know whether it represents a continuous function or a piecewise continuous function. It is possible that the function may not be differentiable at some points in its domain. we have the multiplying factor 1/h.6 – 2. 2. If we try to find the derivatives at these points where the function is not differentiable.6 – 1. x1. Given the data (xi. f(xi)).6 – 1. set it equal to zero to find the stationary points.6 – 2. LM N OP Q . .6 – 1. x1. Therefore. REVIEW QUESTIONS 1. where h is the step length. Substituting x = 1.6 – 1.6 in the formula f ″(x) = 2 f [x0. x3] we obtain f ″(1. write the formula to compute f ″(x0). this implies that we may be multiplying by a large number. n at equispaced points xi = x0 + ih where h is the step length.23558) – 0.….0) + (1. write the formula to compute f ′(x0). Alternatively.22(0.6) = 0. n at equispaced points xi = x0 + ih.5) + (1. 2. 1. we require the maximum and/ or minimum of a function given as a tabulated data. 2 3 4 5 h 3. using the Newton’s forward difference formula. and on the right hand side of the approximation to f ″(x). in applications.5)(1.6 – 2.0)](0. we have the formula f ′(x0) = 1 1 1 1 1 ∆f0 − ∆2 f0 + ∆3 f0 − ∆4 f0 + ∆5 f0 − . the round-off errors in the values of f(x) and hence in the forward differences. differentiate it and set it equal to zero to find the stationary points. (i) On the right hand side of the approximation to f ′(x).…. (ii) When a data is given. x2.6 – 1.03669 = – 0.6 – 1. Solution In terms of the forward differences. i = 0. while the multiplying factor on the right hand side of the approximation to f ″(x) is 10000.0)(1. The numerical values obtained for the second derivatives at these stationary points decides whether there is a maximum or a minimum at these points.63258. we have the multiplying factor 1/h2. Since h is small. 
we can use the numerical differentiation formula for finding the first derivative.6 – 1.5)] (0.6) = 2(– 0.81094 + [(1. x2] + 2[(x – x0) + (x – x1) + (x – x2)] f [x0. We may obtain the interpolation polynomial.061157) = 0. 1. i = 0.5)] (– 0..23558) + [(1. using the Newton’s forward difference formula. For example.0) + (1. Given the data (xi. 2. the multiplying factor on the right hand side of the approximation to f ′(x) is 100.061157) = – 0. Remark 5 Often. What are the drawbacks of numerical differentiation? Solution Numerical differentiation has two main drawbacks. when multiplied by these multiplying factors may seriously effect the solution and the numerical process may become unstable.124 we obtain NUMERICAL METHODS f ′(1.6 – 1.0) + (1.47116 + 0.0) + (1. the result is unpredictable.0)(1. if h = 0.

…. What is the error in the following approximation? f ′(xk) = LM∇ N 2 fn + ∇ 3 f n + 11 4 5 137 6 ∇ f n + ∇ 5 fn + ∇ fn + . 1. .. . using the Newton’s backward difference formula. 2h Solution Using the Taylor series expansion of f(xk+1) and f(xk+2). we have the formula f ′′(x0) = 1 h 2 4. n at equispaced points xi = x0 + ih. f(xi)).. 1. where h is the step length.. where h is the step length. we have the formula f ″(xn) = 1 h 2 7.. h 2 8. 2. write the formula to compute f ′(xn–1). Solution In terms of the backward differences. f(xi)). 2. we have the formula f ′(xn–1) = LM N OP Q 1 1 1 1 4 1 5 ∇fn − ∇ 2 fn − ∇3 fn − ∇ fn − ∇ fn + . What is the error in the following approximation? E(f. Solution In terms of the backward differences. using the Newton’s backward difference formula. h 2 6 12 20 LM N OP Q 6. i = 0.NUMERICAL DIFFERENTIATION AND INTEGRATION 125 Solution In terms of the forward differences..…. xk) = f ′(xk) – h2 1 f ′′′( x k ) + . 12 6 180 OP Q 1 [f(xk+1) – f(xk )]. 1. write the formula to compute f ′(xn) using the Newton’s backward difference formula. h Solution Using the Taylor series expansion of f(xk+1). i = 0.. write the formula to compute f ″(xn). we get the error of approximation as E(f. h 2 3 4 5 5. n at equispaced points xi = x0 + ih. . 2. Given the data (xi. n at equispaced points xi = x0 + ih.….. f(xi)). where h is the step length. we have the formula f ′(xn) = LM∆ f N 2 0 − ∆3 f0 + 11 4 5 137 6 ∆ f0 − ∆5 f0 + ∆ f0 − . [– 3 f(xk) + 4f(xk+1) – f(xk+2)] = 3 2h . 12 6 180 OP Q 1 1 1 1 1 ∇fn + ∇ 2 fn + ∇ 3 fn + ∇ 4 fn + ∇ 5 fn + .. . xk) = f ′(xk) – f ′(xk) = 1 [– 3 f(xk) + 4f(xk+1) – f(xk+2)].. Solution In terms of the backward differences. Given the data (xi. Given the data (xi. we get the error of approximation as 1 h [f(xk + h) – f(xk)] = – f ″(xk) + .... i = 0.

6 10.989 1. 6.750 1. Find the maximum and minimum values of y tabulated below.2.1 8. Time (sec) Velocity (m/sec) 0 0 5 3 10 14 15 69 20 228 (A. 2004) 5.0552 1.451 1.75 4 56 4.0250 obtain dy/dx.0 7.381 3 20.4 9.718 2 7. x y 9 1330 10 1340 11 1320 12 1250 13 1120 14 930 (A.0496 (A./Dec. x y –2 1 –1 – 0.5 9.1 1. 2003) 2. The following data gives the velocity of a particle for 20 seconds at an interval of 5 seconds.U. Nov.086 4 54.031 find dy/dx. Find the initial acceleration using the entire data./Dec. Find the error term using the Taylor series. Compute f ′(0) and f ″(4) from the data x y 0 1 1 2. d2y/dx2 at 1.2 9. Find the value of x for which f(x) is maximum in the range of x given.6 4.126 NUMERICAL METHODS EXERCISE 3.781 1.598 (A.1.403 1.U. April/May 2004) 2. Nov.4 4.3201 1.9530 1. Find also the maximum value of f(x). Nov./Dec.0 2.7183 1. 2006) . The first derivative at a point xk. For the given data x y 1.8 6. is approximated by f(xk) = [f(xk + h) – f(xk – h)]/(2h). From the following table x y 1.U.129 1.25 0 0 1 – 0. using the following table.U. (A. d2y/dx2 at x = 1.3 9.3891 2.2 8. 7.2 3. May 2000) 3.0 7.U.25 2 2 3 15.

7183 1. Find f ′(1) using the following data and the Newton’s forward difference formula. Obtain the value of f ′(0.1096 0.1825 Find the magnitude of the actual error.1047 0. x f(x) 1.04) using an approximate formula for the given data x y 0.6745 (A.1071 0. if the data represents the function ex + 1.0 0.1148 (A.5 2.U.3891 2.04 0.U. 2006) 13.1122 0. x f(x) 1.U.05 0.0 8.1023 0. Using the Newton’s forward difference formula. Given the following data. find y ′(6).01 0. .0 – 3. 2004) 10. 12.4817 2. x y 0 4 2 26 3 58 4 112 7 466 8 668 (A. Given the following data.5 – 2.5 11.875 2.02 0.0 3./Dec.5 1.U. find y ′(6)./Dec.5) from the following data. An approximation to f ″(x) is given by f ″(x) = 1 h2 [f(x + h) – 2f(x) + f(x – h)].5 5.NUMERICAL DIFFERENTIATION AND INTEGRATION 127 8.625 3. Find the value of sec 31° for the following data θ (deg) tan θ 31 0.06 0. 2003) 9. Nov. Nov. May/Jun.6008 32 0.6494 34 0.03 0.0 – 1.6249 33 0. Nov. find f ′(1. x y 0 4 2 26 3 58 4 112 7 466 9 922 (A./Dec.5 – 2.5 13. 2006) 14. y ′(5) and the maximum value of y.

x : 0.1, 0.2, 0.3, 0.4, 0.5
f(x) : 2.3214, 2.6918, 3.1221, 3.6255, 4.2183

Compute f″(0.3) using this formula with all possible step lengths for the given data. If the data represents the function f(x) = e2x + x + 1, what are the actual errors? Which step length has produced better result?

3.3 NUMERICAL INTEGRATION

3.3.1 Introduction

The problem of numerical integration is to find an approximate value of the integral

I = ∫ from a to b of w(x) f(x) dx   (3.25)

where w(x) > 0 in (a, b) is called the weight function. The function f(x) may be given explicitly or as a tabulated data. We assume that w(x) and w(x) f(x) are integrable on [a, b]. The limits of integration may be finite, semi-infinite or infinite.

The integral is approximated by a linear combination of the values of f(x) at the tabular points as

I = ∫ from a to b of w(x) f(x) dx = Σ (k = 0 to n) λk f(xk)

= λ0 f(x0) + λ1 f(x1) + λ2 f(x2) + ... + λn f(xn).   (3.26)

The tabulated points xk's are called abscissas, f(xk)'s are called the ordinates and λk's are called the weights of the integration rule or quadrature formula (3.26).

We define the error of approximation for a given method as

Rn(f) = ∫ from a to b of w(x) f(x) dx – Σ (k = 0 to n) λk f(xk).   (3.27)

Order of a method An integration method of the form (3.26) is said to be of order p, if it produces exact results, that is Rn = 0, for all polynomials of degree less than or equal to p. That is, it produces exact results for f(x) = 1, x, x², ..., x^p. This implies that

Rn(x^m) = ∫ from a to b of w(x) x^m dx – Σ (k = 0 to n) λk xk^m = 0, for m = 0, 1, 2, ..., p.

The error term is obtained for f(x) = x^(p+1). We define

c = ∫ from a to b of w(x) x^(p+1) dx – Σ (k = 0 to n) λk xk^(p+1)   (3.28)

where c is called the error constant. Then, the error term is given by

we obtain I= z b a f ( x) dx = z x1 x0 f ( x)dx = f ( x0 ) 1 1 ( x − x0 ) 2 h 2 = (x1 – x0) f(x0) + LM N OP Q x1 x0 z x1 x0 dx + 1 h LM N z x1 x0 ( x − x0 ) dx ∆f0 OP Q ∆f0 . ( p + 1) ! The bound for the error term is given by | Rn (f) | ≤ |c| max | f ( p+ 1) ( x) . x = b. is given by y Q z = λ0 f(x0) + λ1 f(x1) + λ2 f(x2) + . We shall now derive some Newton-Cotes formulas. 3. Using the Newton’s forward difference formula. interpolating at the points P(a. 3.2.26) are called Newton-Cotes integration rules. Let the curve y = f(x). P 1 (x – x0) ∆f(x0) f(x) = f(x0) + h O a b x (3. the linear polynomial approximation to f(x). the methods (3.NUMERICAL DIFFERENTIATION AND INTEGRATION 129 Rn(f) = = z b a w(x) f(x) dx – k=0 ∑ n λk f(xk) (3.2 Integration Rules Based on Uniform Mesh Spacing When w(x) = 1 and the nodes xk’s are prescribed and are equispaced with x0 = a. The weights λk’s are called Cotes numbers.32) Fig.29) c f (p+1) (ξ)..1 Trapezium Rule This rule is also called the trapezoidal rule. a ≤ x ≤ b. be approximated by the line joining the points P(a. then the error term is obtained for f(x) = x p+2. a≤ x≤b ( p + 1) ! (3. a < ξ < b. That is.31) We note that. f(a))..1.31). where h = (b – a)/n. b a f(x) dx defines the area under the curve y = f(x). f(b)) on the curve (see Fig. above the x-axis. Q(b. xn = b.30) If Rn(x p+1) also becomes zero. f(a)).1). f(b)). 3. Substituting in (3. Q(b.3. where x0 = a. 3. Trapezium rule. we derive formulas of the form I= z b a f(x) dx = k=0 ∑ n λk f(xk) (3. x1 = b and h = b – a.3. between the lines x = a. + λn f(xn).

= (x1 – x0) f(x0) + (1/2h) [f(x1) – f(x0)](x1 – x0)²

= h f(x0) + (h/2) [f(x1) – f(x0)] = (h/2) [f(x1) + f(x0)].

The trapezium rule is given by

I = ∫ from a to b of f(x) dx = (h/2) [f(x1) + f(x0)] = ((b – a)/2) [f(b) + f(a)].   (3.33)

Remark 6 Geometrically, the right hand side of the trapezium rule is the area of the trapezoid with width b – a, and ordinates f(a) and f(b), which is an approximation to the area under the curve y = f(x) above the x-axis and the ordinates x = a and x = b.

Error term in trapezium rule We show that the trapezium rule integrates exactly polynomials of degree ≤ 1. That is, using the definition of error given in (3.27), we show that R1(f, x) = 0 for f(x) = 1, x.

Substituting f(x) = 1, x in (3.27), we get

f(x) = 1: R1(f, x) = ∫ from a to b dx – ((b – a)/2)(2) = (b – a) – (b – a) = 0.

f(x) = x: R1(f, x) = ∫ from a to b of x dx – ((b – a)/2)(b + a) = (1/2)(b² – a²) – (1/2)(b² – a²) = 0.

Hence, the trapezium rule integrates exactly polynomials of degree ≤ 1, and the method is of order 1.

Let f(x) = x². From (3.28), we get

c = ∫ from a to b of x² dx – ((b – a)/2)(b² + a²) = (1/3)(b³ – a³) – (1/2)(b³ + a²b – ab² – a³)

= (1/6)(a³ – 3a²b + 3ab² – b³) = – (1/6)(b – a)³.

Using (3.29), the expression for the error is given by

R1(f, x) = (c/2!) f″(ξ) = – ((b – a)³/12) f″(ξ) = – (h³/12) f″(ξ)   (3.34)

where a ≤ ξ ≤ b. The bound for the error is given by

| R1(f, x) | ≤ ((b – a)³/12) M2 = (h³/12) M2, where M2 = max | f″(x) |, a ≤ x ≤ b.   (3.35)
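A quick numerical check of the rule (3.33) and the bound (3.35), as a sketch of our own (not from the text): for f(x) = x² on [0, 1] we have f″(x) = 2 everywhere, so M2 = 2 and the actual error attains the bound h³ M2/12 = 1/6 exactly, since f″ is constant.

```python
# Sketch of the trapezium rule (3.33) and its error bound (3.35); names are ours.

def trapezium(f, a, b):
    # I is approximated by (b - a)/2 * [f(b) + f(a)]   (3.33)
    return 0.5 * (b - a) * (f(a) + f(b))

a, b = 0.0, 1.0
approx = trapezium(lambda x: x * x, a, b)    # 0.5
exact = 1.0 / 3.0                            # integral of x^2 over [0, 1]
bound = (b - a) ** 3 * 2.0 / 12.0            # h^3 * M2 / 12 with M2 = 2
print(approx, abs(exact - approx), bound)    # actual error = bound = 1/6
```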

If the length of the interval [a, b] is large, then b – a is also large and the error expression given in (3.35) becomes meaningless. In this case, we subdivide [a, b] into a number of subintervals of equal length and apply the trapezium rule to evaluate each integral. The rule is then called the composite trapezium rule.

Composite trapezium rule Let the interval [a, b] be subdivided into N equal parts of length h. That is, h = (b – a)/N. The nodal points are given by a = x0, x1 = x0 + h, x2 = x0 + 2h, ..., xN = x0 + Nh = b. We write

∫ from a to b of f(x) dx = ∫ from x0 to xN of f(x) dx
= ∫ from x0 to x1 of f(x) dx + ∫ from x1 to x2 of f(x) dx + ... + ∫ from xN–1 to xN of f(x) dx.

There are N integrals. Using the trapezium rule to evaluate each integral, we get the composite trapezium rule as

∫ from a to b of f(x) dx = (h/2) [{f(x0) + f(x1)} + {f(x1) + f(x2)} + ... + {f(xN–1) + f(xN)}]

= (h/2) [f(x0) + 2{f(x1) + f(x2) + ... + f(xN–1)} + f(xN)].   (3.36)

The composite trapezium rule is also of order 1. The error expression (3.34) becomes

R1(f, x) = – (h³/12) [f″(ξ1) + f″(ξ2) + ... + f″(ξN)], x0 < ξ1 < x1, ..., xN–1 < ξN < xN.   (3.37)

This expression is a true representation of the error in the trapezium rule. As we increase the number of intervals, the error decreases.

The bound on the error is given by

| R1(f, x) | ≤ (h³/12) [| f″(ξ1) | + | f″(ξ2) | + ... + | f″(ξN) |]

≤ (Nh³/12) M2 = ((b – a)h²/12) M2 = ((b – a)³/(12N²)) M2   (3.38)

where M2 = max | f″(x) |, a ≤ x ≤ b, and Nh = b – a.

Remark 7 Geometrically, the right hand side of the composite trapezium rule is the sum of areas of the N trapezoids with width h, and ordinates f(xi–1) and f(xi), i = 1, 2, ..., N. This sum is an approximation to the area under the curve y = f(x) above the x-axis and the ordinates x = a and x = b.
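The composite rule (3.36) and its O(h²) bound (3.38) can be checked numerically. The sketch below is ours, not the book's; it integrates f(x) = 1/(1 + x) on [0, 1], whose exact value is ln 2, for N = 2, 4, 8.

```python
import math

def composite_trapezium(f, a, b, N):
    # (h/2)[f(x0) + 2*{f(x1) + ... + f(x_{N-1})} + f(xN)]   (3.36)
    h = (b - a) / N
    inner = sum(f(a + i * h) for i in range(1, N))
    return 0.5 * h * (f(a) + f(b) + 2.0 * inner)

exact = math.log(2.0)    # integral of 1/(1 + x) over [0, 1] = ln 2
for N in (2, 4, 8):
    approx = composite_trapezium(lambda x: 1.0 / (1.0 + x), 0.0, 1.0, N)
    print(N, round(approx, 6), round(abs(exact - approx), 6))
```

Each doubling of N (halving of h) reduces the error by a factor close to 4, as (3.38) predicts, and the printed values agree with those worked out in Example 3.12 below up to rounding.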

Remark 8 We have noted that the trapezium rule and the composite trapezium rule are of order 1. If f(x) is a polynomial of degree ≤ 1, then f″(x) = 0. This result implies that error is zero and the trapezium rule produces exact results for polynomials of degree ≤ 1. This can be verified from the error expressions given in (3.34) and (3.37).

Example 3.11 Derive the trapezium rule using the Lagrange linear interpolating polynomial.

Solution The points on the curve are P(a, f(a)), Q(b, f(b)) (see Fig. 3.1). Lagrange linear interpolation gives

f(x) = ((x – b)/(a – b)) f(a) + ((x – a)/(b – a)) f(b) = (1/(b – a)) [{f(b) – f(a)} x + {b f(a) – a f(b)}].

Substituting in the integral, we get

I = ∫[a,b] f(x) dx = (1/(b – a)) ∫[a,b] [{f(b) – f(a)} x + {b f(a) – a f(b)}] dx
= (1/(b – a)) [(1/2){f(b) – f(a)}(b² – a²) + {b f(a) – a f(b)}(b – a)]
= (1/2)(b + a)[f(b) – f(a)] + b f(a) – a f(b)
= ((b – a)/2)[f(a) + f(b)]

which is the required trapezium rule.

Example 3.12 Find the approximate value of I = ∫[0,1] dx/(1 + x), using the trapezium rule with 2, 4 and 8 equal subintervals. Using the exact solution, find the absolute errors.

Solution With N = 2, 4 and 8, we have the following step lengths and nodal points.

N = 2: h = (b – a)/N = 1/2. The nodes are 0, 0.5, 1.0.
N = 4: h = (b – a)/N = 1/4. The nodes are 0, 0.25, 0.5, 0.75, 1.0.
N = 8: h = (b – a)/N = 1/8. The nodes are 0, 0.125, 0.25, 0.375, 0.5, 0.625, 0.75, 0.875, 1.0.

We have the following tables of values.

N = 2:
x:    0,   0.5,      1.0
f(x): 1.0, 0.666667, 0.5

N = 4: We require the above values. The additional values required are the following:
x:    0.25, 0.75
f(x): 0.8,  0.571429

N = 8: We require the above values. The additional values required are the following:
x:    0.125,    0.375,    0.625,    0.875
f(x): 0.888889, 0.727273, 0.615385, 0.533333

Now, we compute the value of the integral.

N = 2: I1 = (h/2)[f(0) + 2 f(0.5) + f(1.0)] = 0.25 [1.0 + 2(0.666667) + 0.5] = 0.708334.

N = 4: I2 = (h/2)[f(0) + 2{f(0.25) + f(0.5) + f(0.75)} + f(1.0)]
= 0.125 [1.0 + 2{0.8 + 0.666667 + 0.571429} + 0.5] = 0.697024.

N = 8: I3 = (h/2)[f(0) + 2{f(0.125) + f(0.25) + f(0.375) + f(0.5) + f(0.625) + f(0.75) + f(0.875)} + f(1.0)]
= 0.0625 [1.0 + 2{0.888889 + 0.8 + 0.727273 + 0.666667 + 0.615385 + 0.571429 + 0.533333} + 0.5] = 0.694122.

The exact value of the integral is I = ln 2 = 0.693147. The errors in the solutions are the following:

| Exact – I1 | = | 0.693147 – 0.708334 | = 0.015187.
| Exact – I2 | = | 0.693147 – 0.697024 | = 0.003877.
| Exact – I3 | = | 0.693147 – 0.694122 | = 0.000975.

Example 3.13 Evaluate I = ∫[1,2] dx/(5 + 3x) with 4 and 8 subintervals using the trapezium rule. Compare with the exact solution and find the absolute errors in the solutions. Find the bound on the errors. Comment on the magnitudes of the errors obtained.

Solution With N = 4 and 8, we have the following step lengths and nodal points.

N = 4: h = (b – a)/N = 1/4. The nodes are 1, 1.25, 1.5, 1.75, 2.0.

N = 8: h = (b – a)/N = 1/8. The nodes are 1, 1.125, 1.25, 1.375, 1.5, 1.625, 1.75, 1.875, 2.0.

We have the following tables of values.

N = 4:
x:    1.0,   1.25,    1.5,     1.75,    2.0
f(x): 0.125, 0.11429, 0.10526, 0.09756, 0.09091

N = 8: We require the above values. The additional values required are the following:
x:    1.125,   1.375,   1.625,   1.875
f(x): 0.11940, 0.10959, 0.10127, 0.09412

Now, we compute the value of the integral.

N = 4: I1 = (h/2)[f(1) + 2{f(1.25) + f(1.5) + f(1.75)} + f(2.0)]
= 0.125 [0.125 + 2{0.11429 + 0.10526 + 0.09756} + 0.09091] = 0.10627.

N = 8: I2 = (h/2)[f(1) + 2{f(1.125) + f(1.25) + f(1.375) + f(1.5) + f(1.625) + f(1.75) + f(1.875)} + f(2.0)]
= 0.0625 [0.125 + 2{0.11940 + 0.11429 + 0.10959 + 0.10526 + 0.10127 + 0.09756 + 0.09412} + 0.09091] = 0.10618.

The exact value of the integral is

I = (1/3) [ln (5 + 3x)], evaluated from 1 to 2, = (1/3)[ln 11 – ln 8] = 0.10615.

The errors in the solutions are the following:

| Exact – I1 | = | 0.10615 – 0.10627 | = 0.00012.
| Exact – I2 | = | 0.10615 – 0.10618 | = 0.00003.

We find that | Error in I2 | ≈ (1/4) | Error in I1 |.

Bounds for the errors:

| Error | ≤ ((b – a) h²/12) M2,  where M2 = max | f″(x) |, 1 ≤ x ≤ 2.

We have f(x) = 1/(5 + 3x), f′(x) = – 3/(5 + 3x)², f″(x) = 18/(5 + 3x)³.

M2 = max over [1, 2] of [18/(5 + 3x)³] = 18/512 = 0.03516.

N = 4, h = 0.25: | Error | ≤ (0.25)²(0.03516)/12 = 0.00018.
N = 8, h = 0.125: | Error | ≤ (0.125)²(0.03516)/12 = 0.000046.

Actual errors are smaller than the bounds on the errors.

Example 3.14 Using the trapezium rule, evaluate the integral I = ∫[0,1] dx/(x² + 6x + 10), with 2 and 4 subintervals. Compare with the exact solution. Comment on the magnitudes of the errors obtained.

Solution With N = 2 and 4, we have the following step lengths and nodal points.

N = 2: h = 0.5. The nodes are 0, 0.5, 1.0.
N = 4: h = 0.25. The nodes are 0, 0.25, 0.5, 0.75, 1.0.

We have the following tables of values.

N = 2:
x:    0,   0.5,     1.0
f(x): 0.1, 0.07547, 0.05882

N = 4: We require the above values. The additional values required are the following:
x:    0.25,    0.75
f(x): 0.08649, 0.06639

Now, we compute the value of the integral.

N = 2: I1 = (h/2)[f(0) + 2 f(0.5) + f(1.0)] = 0.25 [0.1 + 2(0.07547) + 0.05882] = 0.07744.

N = 4: I2 = (h/2)[f(0) + 2{f(0.25) + f(0.5) + f(0.75)} + f(1.0)]
= 0.125 [0.1 + 2{0.08649 + 0.07547 + 0.06639} + 0.05882] = 0.07694.

The exact value of the integral is

I = ∫[0,1] dx/((x + 3)² + 1) = [tan⁻¹(x + 3)], evaluated from 0 to 1, = tan⁻¹(4) – tan⁻¹(3) = 0.07677.

The errors in the solutions are the following:

| Exact – I1 | = | 0.07677 – 0.07744 | = 0.00067.
| Exact – I2 | = | 0.07677 – 0.07694 | = 0.00017.
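The drop in error by roughly a factor of four when the step is halved, as expected of an O(h²) rule, can be checked numerically. This sketch recomputes the trapezium approximations of Example 3.14; the name `trap` is illustrative, not the book's.

```python
import math

def trap(f, a, b, N):
    # composite trapezium rule with N equal subintervals
    h = (b - a) / N
    return 0.5 * h * (f(a) + f(b) + 2.0 * sum(f(a + i * h) for i in range(1, N)))

f = lambda x: 1.0 / (x * x + 6.0 * x + 10.0)
exact = math.atan(4.0) - math.atan(3.0)      # since x^2 + 6x + 10 = (x + 3)^2 + 1

e1 = abs(exact - trap(f, 0.0, 1.0, 2))       # error with h = 0.5
e2 = abs(exact - trap(f, 0.0, 1.0, 4))       # error with h = 0.25
ratio = e2 / e1                              # roughly 1/4 for an O(h^2) rule
```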

We find that | Error in I2 | ≈ (1/4) | Error in I1 |.

Example 3.15 The velocity of a particle which starts from rest is given by the following table.

t (sec):    0,  2,  4,  6,  8, 10, 12, 14, 16, 18, 20
v (ft/sec): 0, 16, 29, 40, 46, 51, 32, 18,  8,  3,  0

Evaluate using the trapezium rule, the total distance travelled in 20 seconds.

Solution From the definition, we have

v = ds/dt,  or  s = ∫ v dt.

Starting from rest, the distance travelled in 20 seconds is

s = ∫[0,20] v dt.

The step length is h = 2. Using the trapezium rule, we obtain

s = (h/2)[f(0) + 2{f(2) + f(4) + f(6) + f(8) + f(10) + f(12) + f(14) + f(16) + f(18)} + f(20)]
= 0 + 2{16 + 29 + 40 + 46 + 51 + 32 + 18 + 8 + 3} + 0 = 486 feet.

3.3.2.2 Simpson’s 1/3 Rule

In the previous section, we have shown that the trapezium rule of integration integrates exactly polynomials of degree ≤ 1, that is, the order of the formula is 1. In many science and engineering applications, we require methods which produce more accurate results. One such method is the Simpson’s 1/3 rule. In this case, we approximate the given curve by a polynomial of degree 2.

Let the interval [a, b] be subdivided into two equal parts with step length h = (b – a)/2. We have three abscissas x0 = a, x1 = (a + b)/2, and x2 = b. Then, P(x0, f(x0)), Q(x1, f(x1)), R(x2, f(x2)) are three points on the curve y = f(x). We approximate the curve y = f(x), a ≤ x ≤ b, by the parabola joining the points P, Q, R, that is, we approximate f(x) by the quadratic polynomial interpolating at P, Q, R. Using the Newton’s forward difference formula, the quadratic polynomial approximation to f(x) is given by

f(x) = f(x0) + (1/h)(x – x0) Δf(x0) + (1/(2h²))(x – x0)(x – x1) Δ²f(x0).

Substituting in (3.31), we obtain

∫[a,b] f(x) dx = ∫[x0,x2] f(x) dx = ∫[x0,x2] [f(x0) + (1/h)(x – x0) Δf(x0) + (1/(2h²))(x – x0)(x – x1) Δ²f(x0)] dx

= (x2 – x0) f(x0) + (1/(2h)) [(x – x0)²], evaluated from x0 to x2, Δf(x0) + I1
= 2h f(x0) + 2h Δf(x0) + I1,

where

I1 = (1/(2h²)) ∫[x0,x2] (x – x0)(x – x1) Δ²f(x0) dx.

Evaluating I1, we obtain

I1 = (1/(2h²)) [x³/3 – (x0 + x1)(x²/2) + x0 x1 x], evaluated from x0 to x2, Δ²f(x0)
= (1/(12h²)) [2(x2³ – x0³) – 3(x0 + x1)(x2² – x0²) + 6 x0 x1 (x2 – x0)] Δ²f(x0)
= (1/(12h²)) (x2 – x0)[2(x2² + x0 x2 + x0²) – 3(x0 + x1)(x2 + x0) + 6 x0 x1] Δ²f(x0).

Substituting x2 = x0 + 2h, x1 = x0 + h, we obtain

I1 = (1/(6h)) [2(3x0² + 6h x0 + 4h²) – 3(4x0² + 6h x0 + 2h²) + 6x0² + 6h x0] Δ²f(x0)
= (1/(6h))(2h²) Δ²f(x0) = (h/3) Δ²f(x0).

Hence

∫[a,b] f(x) dx = ∫[x0,x2] f(x) dx = 2h f(x0) + 2h Δf(x0) + (h/3) Δ²f(x0)
= (h/3)[6 f(x0) + 6{f(x1) – f(x0)} + {f(x0) – 2 f(x1) + f(x2)}]
= (h/3)[f(x0) + 4 f(x1) + f(x2)].    (3.39)

In terms of the end points, we can also write the formula as

∫[a,b] f(x) dx = ((b – a)/6) [f(a) + 4 f((a + b)/2) + f(b)].    (3.40)

This formula is called the Simpson’s 1/3 rule. We can also evaluate the integral ∫[x0,x2] f(x) dx, as follows.

Let [(x – x0)/h] = s. We have dx = h ds. The limits of integration become: for x = x0, s = 0, and for x = x2, s = 2. We have

∫[x0,x2] f(x) dx = h ∫[0,2] [f(x0) + s Δf(x0) + (1/2) s(s – 1) Δ²f(x0)] ds
= h [s f(x0) + (s²/2) Δf(x0) + (1/2)(s³/3 – s²/2) Δ²f(x0)], evaluated from 0 to 2,
= h [2 f(x0) + 2 Δf(x0) + (1/3) Δ²f(x0)]
= (h/3)[6 f(x0) + 6{f(x1) – f(x0)} + {f(x0) – 2 f(x1) + f(x2)}]
= (h/3)[f(x0) + 4 f(x1) + f(x2)]

which is the same formula as derived earlier.

Error term in Simpson 1/3 rule We show that the Simpson’s rule integrates exactly polynomials of degree ≤ 3. That is, using the definition of error given in (3.27), we show that R2(f, x) = 0 for f(x) = 1, x, x², x³. Substituting f(x) = 1, x, x², x³ in (3.27), we get

f(x) = 1:  R2(f, x) = ∫[a,b] dx – ((b – a)/6)(6) = (b – a) – (b – a) = 0.

f(x) = x:  R2(f, x) = ∫[a,b] x dx – ((b – a)/6)[a + 4((a + b)/2) + b] = (1/2)(b² – a²) – (1/2)(b² – a²) = 0.

f(x) = x²:  R2(f, x) = ∫[a,b] x² dx – ((b – a)/6)[a² + 4((a + b)/2)² + b²]
= (1/3)(b³ – a³) – ((b – a)/3)[a² + ab + b²] = (1/3)(b³ – a³) – (1/3)(b³ – a³) = 0.

f(x) = x³:  R2(f, x) = ∫[a,b] x³ dx – ((b – a)/6)[a³ + 4((a + b)/2)³ + b³]

= (1/4)(b⁴ – a⁴) – ((b – a)/4)[a³ + a²b + ab² + b³] = (1/4)(b⁴ – a⁴) – (1/4)(b⁴ – a⁴) = 0.

Hence, the Simpson’s rule integrates exactly polynomials of degree ≤ 3. That is, the method is of order 3. It is interesting to note that the method is one order higher than expected, since we have approximated f(x) by a polynomial of degree 2 only.

Let f(x) = x⁴. From (3.28), we get

c = ∫[a,b] x⁴ dx – ((b – a)/6)[a⁴ + 4((a + b)/2)⁴ + b⁴]
= (1/5)(b⁵ – a⁵) – ((b – a)/24)(5a⁴ + 4a³b + 6a²b² + 4ab³ + 5b⁴)
= (1/120)[24(b⁵ – a⁵) – 5(b – a)(5a⁴ + 4a³b + 6a²b² + 4ab³ + 5b⁴)]
= – ((b – a)/120)[b⁴ – 4ab³ + 6a²b² – 4a³b + a⁴]
= – (b – a)⁵/120.

Using (3.29), the expression for the error is given by

R(f, x) = (c/4!) f⁽⁴⁾(ξ) = – ((b – a)⁵/2880) f⁽⁴⁾(ξ) = – (h⁵/90) f⁽⁴⁾(ξ),  a ≤ ξ ≤ b    (3.41)

since h = (b – a)/2. Since the method produces exact results, that is R2(f, x) = 0, when f(x) is a polynomial of degree ≤ 3, the method is of order 3.

The bound for the error is given by

| R(f, x) | ≤ ((b – a)⁵/2880) M4 = (h⁵/90) M4,  where M4 = max | f⁽⁴⁾(x) |, a ≤ x ≤ b.    (3.42)

Composite Simpson’s 1/3 rule We note that the Simpson’s rule derived earlier uses three nodal points. As in the case of the trapezium rule, if the length of the interval [a, b] is large, then b – a is also large and the error expression given in (3.41) becomes meaningless. In this case, we subdivide [a, b] into a number of subintervals of equal length and apply the Simpson’s 1/3 rule to evaluate each integral. The rule is then called the composite Simpson’s 1/3 rule. If we subdivide [a, b] into an even number of subintervals of equal length h, we obtain an odd number of nodal points. We take the even number of intervals as 2N. The step length is given by h = (b – a)/(2N). The nodal points are given by

a = x0, x1 = x0 + h, x2 = x0 + 2h, ..., x2N = x0 + 2N h = b.

The given interval is now written as

∫[a,b] f(x) dx = ∫[x0,x2N] f(x) dx = ∫[x0,x2] f(x) dx + ∫[x2,x4] f(x) dx + ... + ∫[x2N–2,x2N] f(x) dx.    (3.43)

Note that there are N integrals. The limits of each integral contain three nodal points. Using the Simpson’s 1/3 rule to evaluate each integral, we get the composite Simpson’s 1/3 rule as

∫[a,b] f(x) dx = (h/3)[{f(x0) + 4 f(x1) + f(x2)} + {f(x2) + 4 f(x3) + f(x4)} + ... + {f(x2N–2) + 4 f(x2N–1) + f(x2N)}]
= (h/3)[f(x0) + 4{f(x1) + f(x3) + ... + f(x2N–1)} + 2{f(x2) + f(x4) + ... + f(x2N–2)} + f(x2N)].    (3.44)

The composite Simpson’s 1/3 rule is also of order 3. The error expression (3.41) becomes

R(f, x) = – (h⁵/90)[f⁽⁴⁾(ξ1) + f⁽⁴⁾(ξ2) + ... + f⁽⁴⁾(ξN)],  x0 < ξ1 < x2, x2 < ξ2 < x4, ..., x2N–2 < ξN < x2N.    (3.45)

This expression is a true representation of the error in the Simpson’s 1/3 rule. We observe that as N increases, the error decreases. The bound on the error is given by

| R(f, x) | ≤ (h⁵/90)[| f⁽⁴⁾(ξ1) | + | f⁽⁴⁾(ξ2) | + ... + | f⁽⁴⁾(ξN) |]
≤ (Nh⁵/90) M4 = ((b – a)h⁴/180) M4 = ((b – a)⁵/(2880 N⁴)) M4

where M4 = max | f⁽⁴⁾(x) |, a ≤ x ≤ b, and Nh = (b – a)/2.

Remark 9 We have noted that the Simpson 1/3 rule and the composite Simpson’s 1/3 rule are of order 3. If f(x) is a polynomial of degree ≤ 3, then f⁽⁴⁾(x) = 0. This result implies that error is zero and the composite Simpson’s 1/3 rule produces exact results for polynomials of degree ≤ 3. This can be verified from the error expressions given in (3.41) and (3.45).

Remark 10 Note that the number of subintervals is 2N. We can also say that the number of subintervals is n = 2N and write h = (b – a)/n, where n is even.

Example 3.16 Find the approximate value of I = ∫[0,1] dx/(1 + x), using the Simpson’s 1/3 rule with 2, 4 and 8 equal subintervals. Using the exact solution, find the absolute errors.

Solution With n = 2N = 2, 4 and 8, or N = 1, 2, 4, we have the following step lengths and nodal points.

N = 1: h = (b – a)/(2N) = 1/2. The nodes are 0, 0.5, 1.0.
N = 2: h = (b – a)/(2N) = 1/4. The nodes are 0, 0.25, 0.5, 0.75, 1.0.
N = 4: h = (b – a)/(2N) = 1/8. The nodes are 0, 0.125, 0.25, 0.375, 0.5, 0.625, 0.75, 0.875, 1.0.

We have the following tables of values.

n = 2N = 2:
x:    0,   0.5,      1.0
f(x): 1.0, 0.666667, 0.5

n = 2N = 4: We require the above values. The additional values required are the following:
x:    0.25, 0.75
f(x): 0.8,  0.571429

n = 2N = 8: We require the above values. The additional values required are the following:
x:    0.125,    0.375,    0.625,    0.875
f(x): 0.888889, 0.727273, 0.615385, 0.533333

Now, we compute the value of the integral.

n = 2N = 2: I1 = (h/3)[f(0) + 4 f(0.5) + f(1.0)] = (1/6)[1.0 + 4(0.666667) + 0.5] = 0.694444.

n = 2N = 4: I2 = (h/3)[f(0) + 4{f(0.25) + f(0.75)} + 2 f(0.5) + f(1.0)]
= (1/12)[1.0 + 4{0.8 + 0.571429} + 2(0.666667) + 0.5] = 0.693254.

n = 2N = 8: I3 = (h/3)[f(0) + 4{f(0.125) + f(0.375) + f(0.625) + f(0.875)} + 2{f(0.25) + f(0.5) + f(0.75)} + f(1.0)]

= (1/24)[1.0 + 4{0.888889 + 0.727273 + 0.615385 + 0.533333} + 2{0.8 + 0.666667 + 0.571429} + 0.5] = 0.693155.

The exact value of the integral is I = ln 2 = 0.693147. The errors in the solutions are the following:

| Exact – I1 | = | 0.693147 – 0.694444 | = 0.001297.
| Exact – I2 | = | 0.693147 – 0.693254 | = 0.000107.
| Exact – I3 | = | 0.693147 – 0.693155 | = 0.000008.

Example 3.17 Evaluate I = ∫[1,2] dx/(5 + 3x) using the Simpson’s 1/3 rule with 4 and 8 subintervals. Compare with the exact solution and find the absolute errors in the solutions.

Solution With n = 2N = 4 and 8, or N = 2, 4, we have the following step lengths and nodal points.

N = 2: h = (b – a)/(2N) = 1/4. The nodes are 1, 1.25, 1.5, 1.75, 2.0.
N = 4: h = (b – a)/(2N) = 1/8. The nodes are 1, 1.125, 1.25, 1.375, 1.5, 1.625, 1.75, 1.875, 2.0.

We have the following tables of values.

n = 2N = 4:
x:    1.0,   1.25,    1.5,     1.75,    2.0
f(x): 0.125, 0.11429, 0.10526, 0.09756, 0.09091

n = 2N = 8: We require the above values. The additional values required are the following:
x:    1.125,   1.375,   1.625,   1.875
f(x): 0.11940, 0.10959, 0.10127, 0.09412

Now, we compute the value of the integral.

n = 2N = 4: I1 = (h/3)[f(1) + 4{f(1.25) + f(1.75)} + 2 f(1.5) + f(2.0)]
= (1/12)[0.125 + 4{0.11429 + 0.09756} + 2(0.10526) + 0.09091] = 0.10615.

n = 2N = 8: I2 = (h/3)[f(1) + 4{f(1.125) + f(1.375) + f(1.625) + f(1.875)} + 2{f(1.25) + f(1.5) + f(1.75)} + f(2.0)]

= (1/24)[0.125 + 4{0.11940 + 0.10959 + 0.10127 + 0.09412} + 2{0.11429 + 0.10526 + 0.09756} + 0.09091] = 0.10615.

The exact value of the integral is I = (1/3)[ln 11 – ln 8] = 0.10615. The results obtained with n = 2N = 4 and n = 2N = 8 are accurate to all the places.

Example 3.18 Using the Simpson’s 1/3 rule, evaluate the integral I = ∫[0,1] dx/(x² + 6x + 10) with 2 and 4 subintervals. Compare with the exact solution.

Solution With n = 2N = 2 and 4, or N = 1, 2, we have the following step lengths and nodal points.

N = 1: h = 0.5. The nodes are 0, 0.5, 1.0.
N = 2: h = 0.25. The nodes are 0, 0.25, 0.5, 0.75, 1.0.

We have the following values of the integrand.

n = 2N = 2:
x:    0,   0.5,     1.0
f(x): 0.1, 0.07547, 0.05882

n = 2N = 4: We require the above values. The additional values required are the following:
x:    0.25,    0.75
f(x): 0.08649, 0.06639

Now, we compute the value of the integral.

n = 2N = 2: I1 = (h/3)[f(0.0) + 4 f(0.5) + f(1.0)] = (0.5/3)[0.1 + 4(0.07547) + 0.05882] = 0.07678.

n = 2N = 4: I2 = (h/3)[f(0.0) + 4{f(0.25) + f(0.75)} + 2 f(0.5) + f(1.0)]
= (0.25/3)[0.1 + 4{0.08649 + 0.06639} + 2(0.07547) + 0.05882] = 0.07677.

The exact value of the integral is

I = ∫[0,1] dx/((x + 3)² + 1) = [tan⁻¹(x + 3)], evaluated from 0 to 1, = tan⁻¹(4) – tan⁻¹(3) = 0.07677.

The errors in the solutions are the following:

| Exact – I1 | = | 0.07677 – 0.07678 | = 0.00001.
| Exact – I2 | = | 0.07677 – 0.07677 | = 0.00000.

Example 3.19 The velocity of a particle which starts from rest is given by the following table.

t (sec):    0,  2,  4,  6,  8, 10, 12, 14, 16, 18, 20
v (ft/sec): 0, 16, 29, 40, 46, 51, 32, 18,  8,  3,  0

Evaluate using the Simpson’s 1/3 rule, the total distance travelled in 20 seconds.

Solution From the definition, we have

v = ds/dt,  or  s = ∫ v dt.

Starting from rest, the distance travelled in 20 seconds is

s = ∫[0,20] v dt.

The step length is h = 2. Using the Simpson’s rule, we obtain

s = (h/3)[f(0) + 4{f(2) + f(6) + f(10) + f(14) + f(18)} + 2{f(4) + f(8) + f(12) + f(16)} + f(20)]
= (2/3)[0 + 4{16 + 40 + 51 + 18 + 3} + 2{29 + 46 + 32 + 8} + 0] = 494.667 feet.

3.3.2.3 Simpson’s 3/8 Rule

To derive the Simpson’s 1/3 rule, we have approximated f(x) by a quadratic polynomial. To derive the Simpson’s 3/8 rule, we approximate f(x) by a cubic polynomial. For interpolating by a cubic polynomial, we require four nodal points. Hence, we subdivide the given interval [a, b] into 3 equal parts so that we obtain four nodal points. Let h = (b – a)/3. The nodal points are given by

x0 = a, x1 = x0 + h, x2 = x0 + 2h, x3 = x0 + 3h.

Using the Newton’s forward difference formula, the cubic polynomial approximation to f(x), interpolating at the points P(x0, f(x0)), Q(x1, f(x1)), R(x2, f(x2)), S(x3, f(x3)), is given by

f(x) = f(x0) + (1/h)(x – x0) Δf(x0) + (1/(2h²))(x – x0)(x – x1) Δ²f(x0) + (1/(6h³))(x – x0)(x – x1)(x – x2) Δ³f(x0).

Substituting in (3.31), and integrating, we obtain the Simpson’s 3/8 rule as

∫[a,b] f(x) dx = ∫[x0,x3] f(x) dx = (3h/8)[f(x0) + 3 f(x1) + 3 f(x2) + f(x3)].    (3.46)

The error expression is given by

R3(f, x) = – (3h⁵/80) f⁽⁴⁾(ξ) = – ((b – a)⁵/6480) f⁽⁴⁾(ξ),  x0 < ξ < x3.    (3.47)

Since the method produces exact results, that is R3(f, x) = 0, when f(x) is a polynomial of degree ≤ 3, the method is of order 3.

As in the case of the Simpson’s 1/3 rule, if the length of the interval [a, b] is large, then b – a is also large and the error expression given in (3.47) becomes meaningless. In this case, we subdivide [a, b] into a number of subintervals of equal length such that the number of subintervals is divisible by 3. The rule is then called the composite Simpson’s 3/8 rule. For example, if we divide [a, b] into 6 parts, then we get the seven nodal points as

x0 = a, x1 = x0 + h, x2 = x0 + 2h, ..., x6 = x0 + 6h.

The Simpson’s 3/8 rule becomes

∫[a,b] f(x) dx = ∫[x0,x3] f(x) dx + ∫[x3,x6] f(x) dx
= (3h/8)[{f(x0) + 3 f(x1) + 3 f(x2) + f(x3)} + {f(x3) + 3 f(x4) + 3 f(x5) + f(x6)}]
= (3h/8)[f(x0) + 3 f(x1) + 3 f(x2) + 2 f(x3) + 3 f(x4) + 3 f(x5) + f(x6)].

The error in this composite Simpson’s 3/8 rule becomes

R3(f, x) = – (3h⁵/80)[f⁽⁴⁾(ξ1) + f⁽⁴⁾(ξ2)],  x0 < ξ1 < x3, x3 < ξ2 < x6.    (3.48)

In the general case, the bound for the error expression is given by

| R(f, x) | ≤ C h⁴ M4,  where M4 = max | f⁽⁴⁾(x) |, a ≤ x ≤ b.
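The composite 3/8 rule above (end ordinates once, interior ordinates whose index is a multiple of 3 twice, all other interior ordinates three times) can be sketched as follows. The name `simpson38` is illustrative; n must be a multiple of 3.

```python
def simpson38(f, a, b, n):
    """Composite Simpson's 3/8 rule; n must be a multiple of 3."""
    if n % 3 != 0:
        raise ValueError("number of subintervals must be a multiple of 3")
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        # interior points with index divisible by 3 get weight 2, others weight 3
        s += (2.0 if i % 3 == 0 else 3.0) * f(a + i * h)
    return 3.0 * h * s / 8.0
```

Applied to f(x) = 1/(5 + 3x) on [1, 2] with n = 3 it reproduces the value 0.10616 of Example 3.20 below, and, being of order 3, it is exact for cubics.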

If f(x) is a polynomial of degree ≤ 3, then f⁽⁴⁾(x) = 0. This result implies that the error expression given in (3.47) or (3.48) is zero and the composite Simpson’s 3/8 rule produces exact results for polynomials of degree ≤ 3. Therefore, the formula is of order 3, which is same as the order of the Simpson’s 1/3 rule.

Remark 11 In Simpson’s 3/8th rule, the number of subintervals is n = 3N. Hence, we have

h = (b – a)/(3N),  or  h = (b – a)/n

where n is a multiple of 3.

Remark 12 Simpson’s 3/8 rule has some disadvantages. They are the following: (i) The number of subintervals must be divisible by 3. (ii) It is of the same order as the Simpson’s 1/3 rule, which only requires that the number of nodal points must be odd. (iii) The error constant c in the case of Simpson’s 3/8 rule is c = 3/80, which is much larger than the error constant c = 1/90, in the case of Simpson’s 1/3 rule. Therefore, the error in the case of the Simpson’s 3/8 rule is larger than the error in the case of the Simpson’s 1/3 rule. Due to these disadvantages, Simpson’s 3/8 rule is not used in practice.

Example 3.20 Using the Simpson’s 3/8 rule, evaluate I = ∫[1,2] dx/(5 + 3x) with 3 and 6 subintervals. Compare with the exact solution.

Solution With n = 3N = 3 and 6, we have the following step lengths and nodal points.

n = 3N = 3: h = (b – a)/(3N) = 1/3. The nodes are 1, 4/3, 5/3, 2.0.
n = 3N = 6: h = (b – a)/(3N) = 1/6. The nodes are 1, 7/6, 8/6, 9/6, 10/6, 11/6, 2.0.

We have the following tables of values.

n = 3N = 3:
x:    1.0,   4/3,     5/3,     2.0
f(x): 0.125, 0.11111, 0.10000, 0.09091

n = 3N = 6: We require the above values. The additional values required are the following:
x:    7/6,     9/6,     11/6
f(x): 0.11765, 0.10526, 0.09524

Now, we compute the value of the integral.

n = 3N = 3: I1 = (3h/8)[f(1) + 3 f(4/3) + 3 f(5/3) + f(2.0)]
= 0.125 [0.125 + 3{0.11111 + 0.10000} + 0.09091] = 0.10616.

n = 3N = 6: I2 = (3h/8)[f(1) + 3{f(7/6) + f(8/6) + f(10/6) + f(11/6)} + 2 f(9/6) + f(2.0)]
= (1/16)[0.125 + 3{0.11765 + 0.11111 + 0.10000 + 0.09524} + 2(0.10526) + 0.09091] = 0.10615.

The exact value of the integral is I = (1/3)[ln 11 – ln 8] = 0.10615. The magnitude of the error for n = 3 is 0.00001 and for n = 6 the result is correct to all places.

3.3.2.4 Romberg Method (Integration)

In order to obtain accurate results, we compute the integrals by trapezium or Simpson’s rules for a number of values of step lengths, each time reducing the step length. We stop the computation, when convergence is attained (usually, the magnitude of the difference in successive values of the integrals obtained by reducing values of the step lengths is less than a given accuracy). Convergence may be obtained after computing the value of the integral with a number of step lengths. While computing the value of the integral with a particular step length, the values of the integral obtained earlier by using larger step lengths were not used. Further, convergence may be slow.

Romberg method is a powerful tool which uses the method of extrapolation. We compute the value of the integral with a number of step lengths using the same method. Usually, we start with a coarse step length, then reduce the step lengths and recompute the value of the integral. The sequence of these values converges to the exact value of the integral. Romberg method uses these values of the integral obtained with various step lengths, to refine the solution such that the new values are of higher order. That is, as if the results are obtained using a higher order method than the order of the method used. The extrapolation method is derived by studying the error of the method that is being used.

Let us derive the Romberg method for the trapezium and Simpson’s rules.

Romberg method for the trapezium rule

Let the integral

I = ∫[a,b] f(x) dx

be computed by the composite trapezium rule. Let I denote the exact value of the integral and I_T denote the value obtained by the composite trapezium rule. The error, I – I_T, in the composite trapezium rule in computing the integral is given by

I – I_T = c1h² + c2h⁴ + c3h⁶ + ...,  or  I = I_T + c1h² + c2h⁴ + c3h⁶ + ...    (3.49)

where c1, c2, c3, ... are independent of h. To illustrate the extrapolation procedure, first consider two error terms:

I = I_T + c1h² + c2h⁴.    (3.50)
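The expansion (3.49) can be observed numerically: for a smooth integrand, the quantity (I – I_T)/h² settles towards the constant c1 as h is reduced. A small illustrative check (not from the text), using the integrand 1/(1 + x) of Example 3.12:

```python
import math

def trap(f, a, b, N):
    # composite trapezium rule with N equal subintervals
    h = (b - a) / N
    return 0.5 * h * (f(a) + f(b) + 2.0 * sum(f(a + i * h) for i in range(1, N)))

f = lambda x: 1.0 / (1.0 + x)
exact = math.log(2.0)

# (I - I_T)/h^2 should approach the constant c1 in (3.49)
c_estimates = []
for N in (2, 4, 8, 16):
    h = 1.0 / N
    c_estimates.append((exact - trap(f, 0.0, 1.0, N)) / (h * h))
```

For this integrand the estimates settle near –1/16, which is what makes the extrapolation of the following derivation so effective.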

Let I be evaluated using two step lengths h and qh, 0 < q < 1. Let these values be denoted by I_T(h) and I_T(qh). The error equations become

I = I_T(h) + c1h² + c2h⁴.    (3.51)
I = I_T(qh) + c1q²h² + c2q⁴h⁴.    (3.52)

From (3.51), we obtain

I – I_T(h) = c1h² + c2h⁴.    (3.53)

From (3.52), we obtain

I – I_T(qh) = c1q²h² + c2q⁴h⁴.    (3.54)

Multiply (3.53) by q² to obtain

q²[I – I_T(h)] = c1q²h² + c2q²h⁴.    (3.55)

Eliminating c1q²h² from (3.54) and (3.55), we obtain

(1 – q²)I – I_T(qh) + q²I_T(h) = c2q²h⁴(q² – 1).

Note that the error term on the right hand side is now of order O(h⁴). Solving for I, we obtain

I = [I_T(qh) – q²I_T(h)]/(1 – q²) – c2q²h⁴.

Neglecting the O(h⁴) error term, we obtain the new approximation to the value of the integral as

I ≈ I_T^(1)(h) = [I_T(qh) – q²I_T(h)]/(1 – q²).    (3.56)

We note that this value is obtained by suitably using the values of the integral obtained with step lengths h and qh, 0 < q < 1. This computed result is of order O(h⁴), which is higher than the order of the trapezium rule, which is of O(h²). For q = 1/2, that is, computations are done with step lengths h and h/2, the formula (3.56) simplifies to

I_T^(1)(h) ≈ [I_T(h/2) – (1/4)I_T(h)]/(1 – 1/4) = [4 I_T(h/2) – I_T(h)]/(4 – 1) = [4 I_T(h/2) – I_T(h)]/3.    (3.57)

In practical applications, we normally use the sequence of step lengths h, h/2, h/2², h/2³, ... Suppose, the integral is computed using the step lengths h, h/2, h/2². Using the results obtained with the step lengths h/2, h/2², we get

I_T^(1)(h/2) ≈ [I_T(h/4) – (1/4)I_T(h/2)]/(1 – 1/4) = [4 I_T(h/4) – I_T(h/2)]/(4 – 1) = [4 I_T(h/4) – I_T(h/2)]/3.    (3.58)

Both the results I_T^(1)(h), I_T^(1)(h/2) are of order O(h⁴). Now, we can eliminate the O(h⁴) terms of these two results to obtain a result of next higher order, O(h⁶). The multiplicative factor is now (1/2)⁴ = 1/16. The formula becomes

I_T^(2)(h) ≈ [16 I_T^(1)(h/2) – I_T^(1)(h)]/(16 – 1) = [16 I_T^(1)(h/2) – I_T^(1)(h)]/15.    (3.59)

Therefore, we obtain the Romberg extrapolation procedure for the composite trapezium rule as

I_T^(m)(h) ≈ [4^m I_T^(m–1)(h/2) – I_T^(m–1)(h)]/(4^m – 1),  m = 1, 2, ...    (3.60)

where I_T^(0)(h) = I_T(h). The computed result is of order O(h^(2m+2)).

The extrapolations using three step lengths h, h/2, h/2², are given in Table 3.1.

Table 3.1. Romberg method for trapezium rule.

Step length   Value of I, O(h²)   Value of I, O(h⁴)                    Value of I, O(h⁶)
h             I(h)
                                  I^(1)(h) = [4 I(h/2) – I(h)]/3
h/2           I(h/2)                                                   I^(2)(h) = [16 I^(1)(h/2) – I^(1)(h)]/15
                                  I^(1)(h/2) = [4 I(h/4) – I(h/2)]/3
h/4           I(h/4)
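The procedure (3.60) and the layout of Table 3.1 can be realised as a small routine that repeatedly halves the step and extrapolates. This is a sketch, not the book's notation: the name `romberg_trapezium` is illustrative, and the first column starts from N = 2 subintervals.

```python
def romberg_trapezium(f, a, b, levels):
    """Romberg extrapolation (3.60) built on the composite trapezium rule."""
    def trap(N):
        h = (b - a) / N
        return 0.5 * h * (f(a) + f(b) + 2.0 * sum(f(a + i * h) for i in range(1, N)))

    # first column: trapezium values with N = 2, 4, 8, ... subintervals
    table = [[trap(2 ** (k + 1))] for k in range(levels)]
    for m in range(1, levels):
        for k in range(m, levels):
            coarse, fine = table[k - 1][m - 1], table[k][m - 1]
            table[k].append((4 ** m * fine - coarse) / (4 ** m - 1))
    return table
```

For f(x) = 1/(1 + x) on [0, 1] with three levels, the table reproduces the values 0.708334, 0.693254, 0.693148 etc. worked out by hand in Example 3.21 below; the last entry of the last row is the most accurate value.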

Note that the most accurate values are the values at the end of each column.

Romberg method for the Simpson’s 1/3 rule

We can apply the same procedure as in the trapezium rule to obtain the Romberg extrapolation procedure for the Simpson’s 1/3 rule. Let I denote the exact value of the integral and I_S denote the value obtained by the composite Simpson’s 1/3 rule.

The error, I – I_S, in the composite Simpson’s 1/3 rule in computing the integral is given by

I – I_S = c1h⁴ + c2h⁶ + c3h⁸ + ...,  or  I = I_S + c1h⁴ + c2h⁶ + c3h⁸ + ...    (3.61)

As in the trapezium rule, to illustrate the extrapolation procedure, first consider two error terms:

I = I_S + c1h⁴ + c2h⁶.    (3.62)

Let I be evaluated using two step lengths h and qh, 0 < q < 1. Let these values be denoted by I_S(h) and I_S(qh). The error equations become

I = I_S(h) + c1h⁴ + c2h⁶.    (3.63)
I = I_S(qh) + c1q⁴h⁴ + c2q⁶h⁶.    (3.64)

From (3.63), we obtain

I – I_S(h) = c1h⁴ + c2h⁶.    (3.65)

From (3.64), we obtain

I – I_S(qh) = c1q⁴h⁴ + c2q⁶h⁶.    (3.66)

Multiply (3.65) by q⁴ to obtain

q⁴[I – I_S(h)] = c1q⁴h⁴ + c2q⁴h⁶.    (3.67)

Eliminating c1q⁴h⁴ from (3.66) and (3.67), we obtain

(1 – q⁴)I – I_S(qh) + q⁴I_S(h) = c2q⁴h⁶(q² – 1).

Note that the error term on the right hand side is now of order O(h⁶). Solving for I, we obtain

I = [I_S(qh) – q⁴I_S(h)]/(1 – q⁴) – c2q⁴h⁶/(1 + q²).

Neglecting the O(h⁶) error term, we obtain the new approximation to the value of the integral as

I ≈ I_S^(1)(h) = [I_S(qh) – q⁴I_S(h)]/(1 – q⁴).    (3.68)

Again, we note that this value is obtained by suitably using the values of the integral obtained with step lengths h and qh, 0 < q < 1. This computed result is of order O(h⁶), which is higher than the order of the Simpson’s 1/3 rule, which is of O(h⁴). For q = 1/2, that is, computations are done with step lengths h and h/2, the formula (3.68) simplifies to

I_S^(1)(h) ≈ [I_S(h/2) – (1/16)I_S(h)]/(1 – 1/16) = [16 I_S(h/2) – I_S(h)]/(16 – 1) = [16 I_S(h/2) – I_S(h)]/15.    (3.69)

In practical applications, we normally use the sequence of step lengths h, h/2, h/2², h/2³, ... Suppose, the integral is computed using the step lengths h, h/2, h/2². Using the results obtained with the step lengths h/2, h/2², we get

I_S^(1)(h/2) ≈ [I_S(h/4) – (1/16)I_S(h/2)]/(1 – 1/16) = [16 I_S(h/4) – I_S(h/2)]/(16 – 1) = [16 I_S(h/4) – I_S(h/2)]/15.    (3.70)

Both the results I_S^(1)(h), I_S^(1)(h/2) are of order O(h⁶). Now, we can eliminate the O(h⁶) terms of these two results to obtain a result of next higher order, O(h⁸). The multiplicative factor is now (1/2)⁶ = 1/64. The formula becomes

I_S^(2)(h) ≈ [64 I_S^(1)(h/2) – I_S^(1)(h)]/(64 – 1) = [64 I_S^(1)(h/2) – I_S^(1)(h)]/63.    (3.71)

Therefore, we obtain the Romberg extrapolation procedure for the composite Simpson’s 1/3 rule as

I_S^(m)(h) ≈ [4^(m+1) I_S^(m–1)(h/2) – I_S^(m–1)(h)]/(4^(m+1) – 1),  m = 1, 2, ...    (3.72)

where I_S^(0)(h) = I_S(h). The computed result is of order O(h^(2m+4)).

The extrapolations using three step lengths h, h/2, h/2², are given in Table 3.2.

Table 3.2. Romberg method for Simpson’s 1/3 rule.

Step length   Value of I, O(h⁴)   Value of I, O(h⁶)                       Value of I, O(h⁸)
h             I(h)
                                  I^(1)(h) = [16 I(h/2) – I(h)]/15
h/2           I(h/2)                                                      I^(2)(h) = [64 I^(1)(h/2) – I^(1)(h)]/63
                                  I^(1)(h/2) = [16 I(h/4) – I(h/2)]/15
h/4           I(h/4)
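The Simpson variant (3.72) differs from (3.60) only in the multiplicative factor 4^(m+1). A sketch that applies it to a list of precomputed composite Simpson values (the name `romberg_simpson` is illustrative, not from the text):

```python
def romberg_simpson(values):
    """Romberg extrapolation (3.72) on composite Simpson's 1/3 values.

    `values` holds I_S computed with steps h, h/2, h/4, ... (coarsest first).
    Returns the triangular table; entry [k][m] has error of order O(h^(2m+4)).
    """
    table = [[v] for v in values]
    for m in range(1, len(values)):
        factor = 4 ** (m + 1)
        for k in range(m, len(values)):
            table[k].append((factor * table[k][m - 1] - table[k - 1][m - 1])
                            / (factor - 1))
    return table
```

Feeding in the three Simpson values 0.694444, 0.693254, 0.693155 from Example 3.16 reproduces the hand-computed entries of Example 3.22 below.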

Note that the most accurate values are the values at the end of each column.

Example 3.21 The approximations to the values of the integrals in Examples 3.12 and 3.13 were obtained using the trapezium rule. Apply the Romberg’s method to improve the approximations to the values of the integrals.

Solution In Example 3.12, the given integral is

I = ∫[0,1] dx/(1 + x).

The approximations using the trapezium rule to the integral with various values of the step lengths were obtained as follows.

h = 1/2, N = 2: I = 0.708334;  h = 1/4, N = 4: I = 0.697024;  h = 1/8, N = 8: I = 0.694122.

We have

I^(1)(1/2) = [4 I(1/4) – I(1/2)]/3 = [4(0.697024) – 0.708334]/3 = 0.693254.
I^(1)(1/4) = [4 I(1/8) – I(1/4)]/3 = [4(0.694122) – 0.697024]/3 = 0.693155.
I^(2)(1/2) = [16 I^(1)(1/4) – I^(1)(1/2)]/15 = [16(0.693155) – 0.693254]/15 = 0.693148.

The results are tabulated in Table 3.3. Magnitude of the error is

| I – 0.693148 | = | 0.693147 – 0.693148 | = 0.000001.

Table 3.3. Romberg method. Example 3.21.

Step Length   Value of I O(h^2)   Value of I O(h^4)   Value of I O(h^6)

1/2           0.708334
                                  0.693254
1/4           0.697024                                0.693148
                                  0.693155
1/8           0.694122
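The arithmetic of Example 3.21 can be checked mechanically. A minimal sketch (Python) using only the tabulated trapezium values from the example:

```python
# Romberg extrapolation on the trapezium values of Example 3.21.
# Inputs are the example's approximations of the integral of 1/(1+x)
# over [0, 1] with h = 1/2, 1/4, 1/8.
I_h2, I_h4, I_h8 = 0.708334, 0.697024, 0.694122

# First extrapolation: O(h^2) -> O(h^4), multiplier 4.
I1_h2 = (4 * I_h4 - I_h2) / 3
I1_h4 = (4 * I_h8 - I_h4) / 3

# Second extrapolation: O(h^4) -> O(h^6), multiplier 16.
I2_h2 = (16 * I1_h4 - I1_h2) / 15
```

The three computed values reproduce the second and third columns of Table 3.3.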

In Example 3.13, the given integral is

I = ∫_1^2 dx/(5 + 3x).

The approximations using the trapezium rule to the integral with various values of the step lengths were obtained as follows.

h = 1/4, N = 4: I = 0.10627;  h = 1/8, N = 8: I = 0.10618.

NUMERICAL DIFFERENTIATION AND INTEGRATION


We have

I^(1)(1/4) = [4 I(1/8) − I(1/4)]/3 = [4(0.10618) − 0.10627]/3 = 0.10615.

Since the exact value is I = 0.10615, the result is correct to all places.

Example 3.22 The approximation to the value of the integral in Example 3.16 was obtained using the Simpson's 1/3 rule. Apply the Romberg's method to improve the approximation to the value of the integral.

Solution In Example 3.16, the given integral is

I = ∫_0^1 dx/(1 + x).

The approximations using the Simpson's 1/3 rule to the integral with various values of the step lengths were obtained as follows.

h = 1/2, n = 2N = 2: I = 0.694444;  h = 1/4, n = 2N = 4: I = 0.693254;  h = 1/8, n = 2N = 8: I = 0.693155.

We have

I^(1)(1/2) = [16 I(1/4) − I(1/2)]/15 = [16(0.693254) − 0.694444]/15 = 0.693175,

I^(1)(1/4) = [16 I(1/8) − I(1/4)]/15 = [16(0.693155) − 0.693254]/15 = 0.693148,

I^(2)(1/2) = [64 I^(1)(1/4) − I^(1)(1/2)]/63 = [64(0.693148) − 0.693175]/63 = 0.693148.

The results are tabulated in Table 3.4. Magnitude of the error is

| I − 0.693148 | = | 0.693147 − 0.693148 | = 0.000001.

Table 3.4. Romberg method. Example 3.22.

Step Length   Value of I O(h^4)   Value of I O(h^6)   Value of I O(h^8)

1/2           0.694444
                                  0.693175
1/4           0.693254                                0.693148
                                  0.693148
1/8           0.693155


REVIEW QUESTIONS

1. What is the order of the trapezium rule for integrating ∫_a^b f(x) dx? What is the expression for the error term?

Solution The order of the trapezium rule is 1. The expression for the error term is

Error = − [(b − a)^3/12] f″(ξ) = − (h^3/12) f″(ξ),  where a ≤ ξ ≤ b.

2. When does the trapezium rule for integrating ∫_a^b f(x) dx give exact results?

Solution Trapezium rule gives exact results when f(x) is a polynomial of degree ≤ 1.

3. What is the restriction in the number of nodal points, required for using the trapezium rule for integrating ∫_a^b f(x) dx?

Solution There is no restriction in the number of nodal points, required for using the trapezium rule.

4. What is the geometric representation of the trapezium rule for integrating ∫_a^b f(x) dx?

Solution Geometrically, the right hand side of the trapezium rule is the area of the trapezoid with width b − a, and ordinates f(a) and f(b), which is an approximation to the area under the curve y = f(x) above the x-axis and the ordinates x = a, and x = b.

5. State the composite trapezium rule for integrating ∫_a^b f(x) dx, and give the bound on the error.

Solution The composite trapezium rule is given by

∫_a^b f(x) dx = (h/2) [f(x0) + 2{f(x1) + f(x2) + ... + f(x_{n−1})} + f(x_n)]

where nh = (b − a). The bound on the error is given by

| Error | ≤ (nh^3/12) M2 = [(b − a) h^2/12] M2

where M2 = max_{a ≤ x ≤ b} | f″(x) | and nh = b − a.

6. What is the geometric representation of the composite trapezium rule for integrating ∫_a^b f(x) dx?

Solution Geometrically, the right hand side of the composite trapezium rule is the sum of areas of the n trapezoids with width h, and ordinates f(x_{i−1}) and f(x_i), i = 1, 2, ..., n. This sum is an approximation to the area under the curve y = f(x) above the x-axis and the ordinates x = a and x = b.

7. What is the restriction in the number of nodal points, required for using the Simpson's 1/3 rule for integrating ∫_a^b f(x) dx?

Solution The number of nodal points must be odd for using the Simpson's 1/3 rule, or the number of subintervals must be even.

8. When does the Simpson's 1/3 rule for integrating ∫_a^b f(x) dx give exact results?

Solution Simpson's 1/3 rule gives exact results when f(x) is a polynomial of degree ≤ 3.

9. State the composite Simpson's 1/3 rule for integrating ∫_a^b f(x) dx, and give the bound on the error.

Solution Let n = 2N be the number of subintervals. The composite Simpson's 1/3 rule is given by

∫_a^b f(x) dx = (h/3) [{f(x0) + 4f(x1) + f(x2)} + {f(x2) + 4f(x3) + f(x4)} + ... + {f(x_{2N−2}) + 4f(x_{2N−1}) + f(x_{2N})}]

= (h/3) [f(x0) + 4{f(x1) + f(x3) + ... + f(x_{2N−1})} + 2{f(x2) + f(x4) + ... + f(x_{2N−2})} + f(x_{2N})].

The bound on the error is given by

| R(f, x) | ≤ (h^5/90) [| f^(4)(ξ1) | + | f^(4)(ξ2) | + ... + | f^(4)(ξ_N) |] ≤ (Nh^5/90) M4 = [(b − a) h^4/180] M4

where M4 = max_{a ≤ x ≤ b} | f^(4)(x) | and Nh = (b − a)/2.

10. How can you deduce that the trapezium rule and the composite trapezium rule produce exact results for polynomials of degree less than or equal to 1?

Solution The expression for the error in the trapezium rule is given by

R1(f, x) = − (h^3/12) f″(ξ)

and the expression for the error in the composite trapezium rule is given by

R1(f, x) = − (h^3/12) [f″(ξ1) + f″(ξ2) + ... + f″(ξn)],  x_{n−1} < ξn < x_n.

If f(x) is a polynomial of degree ≤ 1, then f″(x) = 0. This result implies that error is zero and the trapezium rule produces exact results for polynomials of degree ≤ 1.

11. How can you deduce that the Simpson's 1/3 rule and the composite Simpson's 1/3 rule produce exact results for polynomials of degree less than or equal to 3?

Solution The expression for the error in the Simpson's 1/3 rule is given by

R(f, x) = − (c/4!) f^(4)(ξ) = − [(b − a)^5/2880] f^(4)(ξ) = − (h^5/90) f^(4)(ξ)

where h = (b − a)/2. The expression for the error in the composite Simpson's 1/3 rule is given by

R(f, x) = − (h^5/90) [f^(4)(ξ1) + f^(4)(ξ2) + ... + f^(4)(ξ_N)],  x0 < ξ1 < x2, x2 < ξ2 < x4, etc.

If f(x) is a polynomial of degree ≤ 3, then f^(4)(x) = 0. This result implies that error is zero and the Simpson's 1/3 rule produces exact results for polynomials of degree ≤ 3.

12. What is the restriction in the number of nodal points, required for using the Simpson's 3/8 rule for integrating ∫_a^b f(x) dx?

Solution The number of subintervals must be divisible by 3.

13. What are the disadvantages of the Simpson's 3/8 rule compared with the Simpson's 1/3 rule?

Solution The disadvantages are the following: (i) The number of subintervals must be divisible by 3, in contrast with the Simpson's 1/3 rule, which only requires that the number of nodal points must be odd. (ii) It is of the same order as the Simpson's 1/3 rule. (iii) The error constant c in the case of the Simpson's 3/8 rule is c = 3/80, which is much larger than the error constant c = 1/90 in the case of the Simpson's 1/3 rule. Therefore, the error in the case of the Simpson's 3/8 rule is larger than the error in the case of the Simpson's 1/3 rule.

14. Explain why we need the Romberg method.

Solution In order to obtain accurate results, we compute the integrals by trapezium or Simpson's rules for a number of values of step lengths, each time reducing the step length. We stop the computation, when convergence is attained (usually, the magnitude of the difference between successive values of the integrals obtained with the reducing values of the step lengths is less than a given accuracy). Convergence may be obtained after computing the value of the integral with a number of step lengths. While computing the value of the integral with a particular step length, the values of the integral obtained earlier by using larger step lengths were not used. Further, convergence may be slow. Romberg method is a powerful tool which uses the method of extrapolation. Romberg method uses these computed values of the integrals obtained with various step lengths, to refine the solution such that the new values are of higher order. That is, as if they are obtained using a higher order method than the order of the method used.
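The idea in the answer above — reuse the coarse-step values to cancel the leading error terms — can be sketched as follows. This is a minimal sketch (Python); the integrand 1/(1 + x) on [0, 1] and the helper names `trapezium` and `romberg_trapezium` are assumptions for illustration only.

```python
# Romberg table built on composite trapezium values with 2, 4, 8, ...
# subintervals, using I_T^(m)(h) = [4^m I_T^(m-1)(h/2) - I_T^(m-1)(h)] / (4^m - 1).
import math

def trapezium(f, a, b, n):
    """Composite trapezium rule with n subintervals."""
    h = (b - a) / n
    return h * (0.5 * f(a) + 0.5 * f(b) + sum(f(a + i * h) for i in range(1, n)))

def romberg_trapezium(f, a, b, levels):
    """Return the Romberg table; table[m] holds the O(h^(2m+2)) values."""
    column = [trapezium(f, a, b, 2 ** (i + 1)) for i in range(levels)]
    table = [column]
    for m in range(1, levels):
        factor = 4 ** m
        column = [(factor * column[i + 1] - column[i]) / (factor - 1)
                  for i in range(len(column) - 1)]
        table.append(column)
    return table

table = romberg_trapezium(lambda x: 1.0 / (1.0 + x), 0.0, 1.0, 3)
```

With three step lengths the first column reproduces the trapezium values of Example 3.21, and the last entry agrees with ln 2 to about six decimal places.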

15. An integral I is evaluated by the composite trapezium rule with step lengths h, h/2, h/2^2, ..., h/2^m, ... Write the Romberg method for improving the accuracy of the value of the integral.

Solution The required Romberg approximation is given by

I_T^(m)(h) ≈ [4^m I_T^(m−1)(h/2) − I_T^(m−1)(h)] / (4^m − 1),  m = 1, 2, ...,  where I_T^(0)(h) = I_T(h).

16. An integral I is evaluated by the trapezium rule with step lengths h and qh. Write the Romberg method for improving the accuracy of the value of the integral.

Solution Let I_T(h), I_T(qh) denote the values of the integral evaluated using the step lengths h and qh. The required Romberg approximation is given by

I ≈ I_T^(1)(h) = [I_T(qh) − q^2 I_T(h)] / (1 − q^2).

17. An integral I is evaluated by the composite Simpson's 1/3 rule with step lengths h, h/2, h/2^2, ..., h/2^m, ... Write the Romberg method for improving the accuracy of the value of the integral.

Solution The required Romberg approximation is given by

I_S^(m)(h) ≈ [4^(m+1) I_S^(m−1)(h/2) − I_S^(m−1)(h)] / [4^(m+1) − 1],  m = 1, 2, ...,  where I_S^(0)(h) = I_S(h).

18. An integral I is evaluated by the Simpson's 1/3 rule with step lengths h and qh. Write the Romberg method for improving the accuracy of the value of the integral.

Solution Let I_S(h), I_S(qh) denote the values of the integral evaluated using the step lengths h and qh. The required Romberg approximation is given by

I ≈ I_S^(1)(h) = [I_S(qh) − q^4 I_S(h)] / (1 − q^4).

EXERCISE 3.2

1. Using the trapezium rule, find ∫_0^6 f(x) dx, from the following set of values of x and f(x). (A.U. May/June 2006)

x      0     1     2     3     4     5     6
f(x)   1.56  3.64  4.62  5.12  7.05  9.22  10.44

2. Evaluate ∫_{1/2}^{1} dx/(1 + x^2) by trapezium rule, dividing the range into four equal parts. Also, check the result by actual integration.

3. The velocity of a particle which starts from rest is given by the following table. Evaluate, using the trapezium rule, the total distance travelled in 18 seconds.

t (sec)      0   2   4   6   8   10  12  14  16  18
v (ft/sec)   0   12  16  26  40  44  25  12  5   0

4. Using the trapezium rule, evaluate ∫_0^π sin x dx by dividing the range into 6 equal intervals. (A.U. Nov./Dec. 2004)

5. Evaluate ∫_0^1 x e^x dx taking four intervals. Compare with the exact solution.

6. Using the trapezium rule, evaluate ∫_{−1}^{1} dx/(1 + x^2) taking 8 intervals. (A.U. April/May 2003)

7. Using the Simpson's 1/3 rule, evaluate ∫_0^1 sin x dx with h = 0.5. (A.U. April/May 2004)

8. Evaluate ∫_0^2 e^x dx using the Simpson's rule with h = 1 and h = 1/2. Improve the results using Romberg integration.

9. For the given data

x      0.7      0.9      1.1      1.3      1.5      1.7      1.9      2.1
f(x)   0.64835  0.91360  1.16092  1.36178  1.49500  1.52882  1.44573  1.35007

use trapezium rule for the first interval and Simpson's 1/3 rule for the rest of the intervals to evaluate ∫_{0.7}^{2.1} f(x) dx.

10. For the data given in Problem 9, use Simpson's 1/3 rule for the first six intervals and trapezium rule for the last interval to evaluate ∫_{0.7}^{2.1} f(x) dx. Comment on the obtained results by comparing with the exact value of the integral, which is equal to 1.81759.

11. Compute I_p = ∫_0^1 x^p dx, for p = 0.5, using trapezium rule and Simpson's 1/3 rule with the number of points 3, 5 and 9. Comment on the obtained results.

12. Using Simpson's 3/8th rule, evaluate ∫_0^1 dx/(1 + x^2) by dividing the range into six equal parts. (A.U. April/May 2005)

13. Evaluate ∫_0^5 dx/(4x + 5) by Simpson's 1/3 rule (n = 10) and hence find the value of log_e 5. (A.U. Nov./Dec. 2004)

14. By dividing the range into ten equal parts, evaluate ∫_0^π sin x dx by trapezium rule and Simpson's rule. Verify your answer with integration. (A.U. May/June 2006)

3.3 Integration Rules Based on Non-uniform Mesh Spacing

We have defined the general integration rule as

I = ∫_a^b w(x) f(x) dx = Σ_{k=0}^{n} λk f(xk) = λ0 f(x0) + λ1 f(x1) + λ2 f(x2) + ... + λn f(xn).   (3.73)

When the abscissas are prescribed and are equispaced, that is, xi = x0 + ih, we have derived the trapezium and Simpson's rules (Newton-Cotes formulas). When the abscissas are not prescribed in advance and they are also to be determined, then the formulas using less number of abscissas can produce higher order methods compared to the Newton-Cotes formulas. Such formulas are called Gaussian integration rules or formulas.

Gaussian integration rules can be obtained when the limits are finite, or one of the limits is infinite, or both the limits are infinite. We have the following Gaussian integration rules depending on the limits of integration and on the expression for the weight function w(x).

1. Gauss-Legendre integration rules
Limits of integration = [− 1, 1]. Weight function = w(x) = 1. Abscissas = Zeros of the corresponding Legendre polynomial.

2. Gauss-Chebychev integration rules
Limits of integration = [− 1, 1]. Weight function = w(x) = 1/√(1 − x^2). Abscissas = Zeros of the corresponding Chebychev polynomial.

3. Gauss-Laguerre integration rules
Limits of integration = [0, ∞]. Weight function = w(x) = e^(−x). Abscissas = Zeros of the corresponding Laguerre polynomial.

4. Gauss-Hermite integration rules
Limits of integration = (− ∞, ∞). Weight function = w(x) = e^(−x^2). Abscissas = Zeros of the corresponding Hermite polynomial.

For our discussion and derivation, we shall consider only the Gauss-Legendre integration rules. As per the terminology used in the syllabus, we shall call these formulas as Gaussian formulas. We shall follow the approach of the method of undetermined coefficients to derive the formulas. Before deriving the methods, let us remember the definition of the order of a method and the expression for the error of the method.

3.3.1 Gauss-Legendre Integration Rules

Since the weight function is w(x) = 1, we shall write the integration rule as

I = ∫_a^b f(x) dx = λ0 f(x0) + λ1 f(x1) + λ2 f(x2) + ... + λn f(xn).   (3.74)

As mentioned earlier, the limits of integration for Gauss-Legendre integration rules are [− 1, 1]. Therefore, we transform the limits [a, b] to [− 1, 1], using a linear transformation. Let the transformation be x = pt + q. When x = a, we have t = − 1: a = − p + q. When x = b, we have t = 1: b = p + q. Solving, we get p = (b − a)/2, q = (b + a)/2. The required transformation is

x = [(b − a)t + (b + a)]/2.   (3.75)

Then, f(x) = f{[(b − a)t + (b + a)]/2} and dx = [(b − a)/2] dt. The integral becomes

I = ∫_a^b f(x) dx = ∫_{−1}^{1} g(t) dt   (3.76)

where g(t) = [(b − a)/2] f{[(b − a)t + (b + a)]/2}. Therefore, we shall derive formulas to evaluate ∫_{−1}^{1} g(t) dt. Without loss of generality, let us write this integral as ∫_{−1}^{1} f(x) dx. The required integration formula is of the form

∫_{−1}^{1} f(x) dx = λ0 f(x0) + λ1 f(x1) + λ2 f(x2) + ... + λn f(xn).   (3.77)

An integration method of the form (3.77) is said to be of order p, if it produces exact results, that is error Rn = 0, for all polynomials of degree less than or equal to p. That is, it produces exact results for f(x) = 1, x, x^2, ..., x^p. This implies that

Rn(x^m) = ∫_{−1}^{1} x^m dx − Σ_{k=0}^{n} λk xk^m = 0,  for m = 0, 1, 2, ..., p.

The error term is obtained for f(x) = x^(p+1). We define

c = ∫_{−1}^{1} x^(p+1) dx − Σ_{k=0}^{n} λk xk^(p+1)   (3.78)

where c is called the error constant. Then, the error term is given by

Rn(f) = ∫_{−1}^{1} f(x) dx − Σ_{k=0}^{n} λk f(xk) = [c/(p + 1)!] f^(p+1)(ξ),  a < ξ < b.   (3.79)

If Rn(x^(p+1)) also becomes zero, then the error term is obtained for f(x) = x^(p+2).

Gauss one point rule (Gauss-Legendre one point rule)

The one point rule is given by

∫_{−1}^{1} f(x) dx = λ0 f(x0)   (3.80)

where λ0 ≠ 0. The method has two unknowns λ0, x0. Making the formula exact for f(x) = 1, x, we get

f(x) = 1: ∫_{−1}^{1} dx = 2 = λ0,
f(x) = x: ∫_{−1}^{1} x dx = 0 = λ0 x0.

Since λ0 ≠ 0, we get x0 = 0. Therefore, the one point Gauss formula is given by

∫_{−1}^{1} f(x) dx = 2 f(0).   (3.81)

Error of approximation

The error term is obtained when f(x) = x^2. We obtain

c = ∫_{−1}^{1} x^2 dx − 0 = 2/3.

The error term is given by

R(f) = (c/2!) f″(ξ) = (1/3) f″(ξ),  − 1 < ξ < 1.   (3.82)

Remark 13 Since the error term contains f″(ξ), Gauss one point rule integrates exactly polynomials of degree less than or equal to 1. Therefore, the results obtained from this rule are comparable with the results obtained from the trapezium rule. However, we require two function evaluations in the trapezium rule whereas we need only one function evaluation in the Gauss one point rule. If better accuracy is required, then the original interval [a, b] can be subdivided and the limits of each subinterval can be transformed to [− 1, 1]. Gauss one point rule can then be applied to each of the integrals.

Gauss two point rule (Gauss-Legendre two point rule)

The two point rule is given by

∫_{−1}^{1} f(x) dx = λ0 f(x0) + λ1 f(x1)   (3.83)

where λ0 ≠ 0, λ1 ≠ 0 and x0 ≠ x1. The method has four unknowns λ0, x0, λ1, x1. Making the formula exact for f(x) = 1, x, x^2, x^3, we get

f(x) = 1: ∫_{−1}^{1} dx = 2 = λ0 + λ1,   (3.84)
f(x) = x: ∫_{−1}^{1} x dx = 0 = λ0 x0 + λ1 x1,   (3.85)
f(x) = x^2: ∫_{−1}^{1} x^2 dx = 2/3 = λ0 x0^2 + λ1 x1^2,   (3.86)
f(x) = x^3: ∫_{−1}^{1} x^3 dx = 0 = λ0 x0^3 + λ1 x1^3.   (3.87)

Eliminating λ0 from (3.85) and (3.87), we get

λ1 x1^3 − λ1 x1 x0^2 = 0,  or  λ1 x1 (x1 − x0)(x1 + x0) = 0.

Now, λ1 ≠ 0, and x0 ≠ x1. If x1 = 0, then (3.85) gives x0 = 0, which is not possible. Therefore, x1 = − x0. Substituting in (3.85), we get λ0 = λ1. Substituting in (3.84), we get λ0 = λ1 = 1. Substituting in (3.86), we get x0^2 = 1/3, or x0 = ± 1/√3 = − x1. Therefore, the two point Gauss rule (Gauss-Legendre rule) is given by

∫_{−1}^{1} f(x) dx = f(− 1/√3) + f(1/√3).   (3.88)

Error of approximation

The error term is obtained when f(x) = x^4. We obtain

c = ∫_{−1}^{1} x^4 dx − [1/9 + 1/9] = 2/5 − 2/9 = 8/45.

The error term is given by

R(f) = (c/4!) f^(4)(ξ) = (1/135) f^(4)(ξ),  − 1 < ξ < 1.   (3.89)

Remark 14 Since the error term contains f^(4)(ξ), Gauss two point rule integrates exactly polynomials of degree less than or equal to 3. Therefore, the results obtained from this rule are comparable with the results obtained from the Simpson's rule. However, we require three function evaluations in the Simpson's rule whereas we need only two function evaluations in the Gauss two point rule. If better accuracy is required, then the original interval [a, b] can be subdivided and the limits of each subinterval can be transformed to [− 1, 1]. Gauss two point rule can then be applied to each of the integrals.

Gauss three point rule (Gauss-Legendre three point rule)

The three point rule is given by

∫_{−1}^{1} f(x) dx = λ0 f(x0) + λ1 f(x1) + λ2 f(x2)   (3.90)

where λ0 ≠ 0, λ1 ≠ 0, λ2 ≠ 0, and x0 ≠ x1 ≠ x2. The method has six unknowns λ0, x0, λ1, x1, λ2, x2. Making the formula exact for f(x) = 1, x, x^2, x^3, x^4, x^5, we get

f(x) = 1: ∫_{−1}^{1} dx = 2 = λ0 + λ1 + λ2,   (3.91)
f(x) = x: ∫_{−1}^{1} x dx = 0 = λ0 x0 + λ1 x1 + λ2 x2,   (3.92)
f(x) = x^2: ∫_{−1}^{1} x^2 dx = 2/3 = λ0 x0^2 + λ1 x1^2 + λ2 x2^2,   (3.93)
f(x) = x^3: ∫_{−1}^{1} x^3 dx = 0 = λ0 x0^3 + λ1 x1^3 + λ2 x2^3,   (3.94)
f(x) = x^4: ∫_{−1}^{1} x^4 dx = 2/5 = λ0 x0^4 + λ1 x1^4 + λ2 x2^4,   (3.95)
f(x) = x^5: ∫_{−1}^{1} x^5 dx = 0 = λ0 x0^5 + λ1 x1^5 + λ2 x2^5.   (3.96)

Solving this system as in the two point rule, we obtain

x0 = − √(3/5),  x1 = 0,  x2 = √(3/5),  λ0 = λ2 = 5/9,  λ1 = 8/9.

Therefore, the three point Gauss rule (Gauss-Legendre rule) is given by

∫_{−1}^{1} f(x) dx = (1/9) [5 f(− √(3/5)) + 8 f(0) + 5 f(√(3/5))].   (3.97)

Error of approximation

The error term is obtained when f(x) = x^6. We obtain

c = ∫_{−1}^{1} x^6 dx − (1/9) [5 (3/5)^3 + 0 + 5 (3/5)^3] = 2/7 − 6/25 = 8/175.

The error term is given by

R(f) = (c/6!) f^(6)(ξ) = (1/15750) f^(6)(ξ),  − 1 < ξ < 1.   (3.98)

Remark 15 Since the error term contains f^(6)(ξ), Gauss three point rule integrates exactly polynomials of degree less than or equal to 5. Further, the error coefficient is very small (1/15750 ≈ 0.00006349). Therefore, the results obtained from this rule are very accurate. We have not derived any Newton-Cotes rule, which can be compared with the Gauss three point rule. If better accuracy is required, then the original interval [a, b] can be subdivided and the limits of each subinterval can be transformed to [− 1, 1]. Gauss three point rule can then be applied to each of the integrals.

Example 3.23 Evaluate the integral I = ∫_1^2 [2x/(1 + x^4)] dx, using Gauss one point, two point and three point rules. Compare with the exact solution I = tan^(−1)(4) − (π/4).

Solution We reduce the interval [1, 2] to [− 1, 1] to apply the Gauss rules. Writing x = at + b, we get 1 = − a + b, 2 = a + b. Solving, we get a = 1/2, b = 3/2. Therefore, x = (t + 3)/2, dx = dt/2. The integral becomes

I = ∫_{−1}^{1} 8(t + 3)/[16 + (t + 3)^4] dt = ∫_{−1}^{1} f(t) dt,  where f(t) = 8(t + 3)/[16 + (t + 3)^4].

Using the one point Gauss rule, we obtain

I = 2 f(0) = 2 [24/(16 + 81)] = 48/97 = 0.494845.

Using the two point Gauss rule, we obtain

I = f(− 1/√3) + f(1/√3) = f(− 0.577350) + f(0.577350) = 0.384183 + 0.159193 = 0.543376.

Using the three point Gauss rule, we obtain

I = (1/9) [5 f(− 0.774597) + 8 f(0) + 5 f(0.774597)]
  = (1/9) [5(0.439299) + 8(0.247423) + 5(0.137889)] = 0.540592.

The exact value is I = tan^(−1)(4) − (π/4) = 0.540420. The magnitudes of the errors in the one point, two point and three point rules are 0.045575, 0.002956, and 0.000172 respectively.

Example 3.24 Evaluate the integral I = ∫_0^1 dx/(1 + x) by Gauss three point formula. Compare with the exact solution. (A.U. April/May 2005)

Solution We reduce the interval [0, 1] to [− 1, 1] to apply the Gauss three point rule. Writing x = at + b, we get 0 = − a + b, 1 = a + b. Solving, we get a = 1/2, b = 1/2. Therefore, x = (t + 1)/2, dx = dt/2. The integral becomes

I = ∫_{−1}^{1} dt/(t + 3) = ∫_{−1}^{1} f(t) dt,  where f(t) = 1/(t + 3).

Using the three point Gauss rule, we obtain

I = (1/9) [5 f(− 0.774597) + 8 f(0) + 5 f(0.774597)]
  = (1/9) [5(0.449357) + 8(0.333333) + 5(0.264929)] = 0.693122.

The exact value is I = ln(2) = 0.693147. The absolute error in the three point Gauss rule is 0.000025.

Example 3.25 Evaluate the integral I = ∫_0^2 (x^2 + 2x + 1)/[1 + (x + 1)^4] dx, by Gauss three point formula.

Solution We reduce the interval [0, 2] to [− 1, 1] to apply the Gauss three point rule. Writing x = at + b, we get 0 = − a + b, 2 = a + b. Solving, we get a = 1, b = 1. Therefore, x = t + 1, dx = dt. The integral becomes

I = ∫_{−1}^{1} (t + 2)^2/[1 + (t + 2)^4] dt = ∫_{−1}^{1} f(t) dt,  where f(t) = (t + 2)^2/[1 + (t + 2)^4].

Using the three point Gauss rule, we obtain

I = (1/9) [5 f(− 0.774597) + 8 f(0) + 5 f(0.774597)]
  = (1/9) [5(0.461347) + 8(0.235294) + 5(0.127742)] = 0.536422.

Remark 16 We have derived the Gauss-Legendre rules using the method of undetermined parameters. However, all the Gaussian rules can be obtained using the orthogonal polynomials. Consider the integration rule

∫_a^b w(x) f(x) dx = λ0 f(x0) + λ1 f(x1) + λ2 f(x2) + ... + λn f(xn).   (3.99)

We state the following theorem which gives these rules.

Theorem 3.1 If the abscissas xk of the integration rule (3.99) are selected as zeros of an orthogonal polynomial, orthogonal with respect to the weight function w(x) over [a, b], then the formula (3.99) has precision 2n + 1 (or the formula is exact for polynomials of degree ≤ 2n + 1). Further, λk > 0. The weights are given by

λk = ∫_a^b w(x) lk(x) dx   (3.100)

where lk(x), k = 0, 1, ..., n, are the Lagrange fundamental polynomials.

For deriving the Gauss-Legendre formulas, we have w(x) = 1, [a, b] = [− 1, 1], and the orthogonal polynomials are the Legendre polynomials Pk(x).

Gauss-Legendre two point formula

The abscissas x0, x1 are the zeros of the Legendre polynomial P2(x). Setting

P2(x) = (1/2)(3x^2 − 1) = 0,  we obtain x = ± 1/√3.

Let x0 = − 1/√3 and x1 = 1/√3. We have

l0(x) = (x − x1)/(x0 − x1),  l1(x) = (x − x0)/(x1 − x0).

The weights are given by

λ0 = ∫_{−1}^{1} l0(x) dx = 1,  λ1 = ∫_{−1}^{1} l1(x) dx = 1.

The two point rule is as given in (3.88).

Gauss-Legendre three point formula

The abscissas x0, x1, x2 are the zeros of the Legendre polynomial P3(x). Setting

P3(x) = (1/2)(5x^3 − 3x) = 0,  we obtain x = 0, ± √(3/5).

Let x0 = − √(3/5), x1 = 0, and x2 = √(3/5). We have

l0(x) = (x − x1)(x − x2)/[(x0 − x1)(x0 − x2)],
l1(x) = (x − x0)(x − x2)/[(x1 − x0)(x1 − x2)],
l2(x) = (x − x0)(x − x1)/[(x2 − x0)(x2 − x1)].

The weights are given by

λ0 = ∫_{−1}^{1} l0(x) dx = 5/9,  λ1 = ∫_{−1}^{1} l1(x) dx = 8/9,  λ2 = ∫_{−1}^{1} l2(x) dx = 5/9.

The three point rule is as given in (3.97).

REVIEW QUESTIONS

1. Write the error term in the Gauss one point rule for evaluating the integral ∫_{−1}^{1} f(x) dx.

Solution The error term in the Gauss one point rule is given by

R(f) = (1/3) f″(ξ),  − 1 < ξ < 1.

2. Write the error term in the Gauss two point rule for evaluating the integral ∫_{−1}^{1} f(x) dx.

Solution The error term in the Gauss two point rule is given by

R(f) = (1/135) f^(4)(ξ),  − 1 < ξ < 1.

3. Write the error term in the Gauss three point rule for evaluating the integral ∫_{−1}^{1} f(x) dx.

Solution The error term in the Gauss three point rule is given by

R(f) = (1/15750) f^(6)(ξ),  − 1 < ξ < 1.

EXERCISE 3.3

1. Use the three point Gauss formula to evaluate ∫_1^2 dx/x. (A.U. Nov./Dec. 2005)

2. Apply the Gauss two point formula to evaluate ∫_{−1}^{1} dx/(1 + x^2). (A.U. Nov./Dec. 2003)

3. Evaluate ∫_{0.2}^{1.5} e^(−x^2) dx using the three point Gauss quadrature. (A.U. April/May 2004)

4. Using the three point Gauss quadrature, evaluate ∫_0^2 (x^2 + 2x + 1)/[1 + (x + 1)^4] dx. (A.U. April/May 2003)

5. Find the value of the integral I = ∫_2^3 [cos 2x/(1 + sin x)] dx, using two point and three point Gauss formulas.

6. Using the three point Gauss formula, evaluate ∫_0^1 dx/(1 + x^4).

7. Use two point and three point Gauss formula to evaluate I = ∫_0^2 dx/(x^2 + 2x + 10). Compare with the exact solution.

8. In Problem 7, write I = I1 + I2 = ∫_0^1 f(x) dx + ∫_1^2 f(x) dx. Then, evaluate each of the integrals by two point and three point Gauss formulas. Compare with the exact solution.

9. Use two point and three point Gauss formula to evaluate ∫_0^1 dx/(3 + 4x).

10. Using the three point Gauss formula, evaluate ∫_0^1 [x/(1 + x^2)] dx. (A.U. April/May 2005)
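The one, two and three point rules derived in this section, together with the linear transformation (3.75), can be collected into one routine. A minimal sketch (Python); the function name `gauss_legendre` is an assumption, and the test integrand 2x/(1 + x^4) on [1, 2] is the one from Example 3.23.

```python
# Gauss-Legendre 1, 2 and 3 point rules on [-1, 1], applied to [a, b]
# through the map x = [(b - a) t + (b + a)] / 2 of (3.75).
import math

GAUSS = {
    1: [(0.0, 2.0)],
    2: [(-1.0 / math.sqrt(3.0), 1.0), (1.0 / math.sqrt(3.0), 1.0)],
    3: [(-math.sqrt(3.0 / 5.0), 5.0 / 9.0), (0.0, 8.0 / 9.0),
        (math.sqrt(3.0 / 5.0), 5.0 / 9.0)],
}

def gauss_legendre(f, a, b, npts):
    """Approximate the integral of f over [a, b] with an npts-point rule."""
    p, q = (b - a) / 2.0, (b + a) / 2.0
    return p * sum(w * f(p * t + q) for t, w in GAUSS[npts])

# The two point rule is exact for cubics; exact value of x^3 + x^2 on [-1, 1] is 2/3.
exact_cubic = gauss_legendre(lambda x: x ** 3 + x ** 2, -1.0, 1.0, 2)

# Example 3.23: integral of 2x / (1 + x^4) over [1, 2].
f = lambda x: 2.0 * x / (1.0 + x ** 4)
exact = math.atan(4.0) - math.pi / 4.0
approx3 = gauss_legendre(f, 1.0, 2.0, 3)
```

The three point value agrees with the 0.540592 computed in Example 3.23, and the two point rule reproduces the cubic integral exactly, consistent with the error term (3.89).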

3.4 Evaluation of Double Integrals

We consider the evaluation of the double integral

I = ∫_c^d ( ∫_a^b f(x, y) dx ) dy   (3.101)

over a rectangle x = a, x = b, y = c, y = d (Fig. 3.2. Region of integration (a rectangle)).

3.4.1 Evaluation of Double Integrals Using Trapezium Rule

Evaluating the inner integral in (3.101) by the trapezium rule, we obtain

I = [(b − a)/2] ∫_c^d [f(a, y) + f(b, y)] dy.   (3.102)

Using the trapezium rule again to evaluate the integral in (3.102), we obtain

I = [(b − a)(d − c)/4] [f(a, c) + f(a, d) + f(b, c) + f(b, d)].   (3.103)

Notice that the points (a, c), (a, d), (b, c), (b, d) are the four corners of the rectangle (Fig. 3.2). If we denote h = b − a, k = d − c, we can write the formula as

I = (hk/4) [f(a, c) + f(a, d) + f(b, c) + f(b, d)].   (3.104)

The weights (coefficients of the ordinates f) in the trapezium rule are given in the computational molecule (Fig. 3.3. Weights in formula (3.104)).

Composite trapezium rule Divide the interval [a, b] into N equal subintervals each of length h, and the interval [c, d] into M equal subintervals each of length k. We have

h = (b − a)/N,  x0 = a, x1 = x0 + h, x2 = x0 + 2h, ..., xN = x0 + Nh = b,
k = (d − c)/M,  y0 = c, y1 = y0 + k, y2 = y0 + 2k, ..., yM = y0 + Mk = d.

The general grid point is given by (xi, yj). Denote fij = f(xi, yj). For example, f00 = f(x0, y0) = f(a, c), f10 = f(x1, y0), f01 = f(x0, y1), f11 = f(x1, y1), fN0 = f(xN, y0) = f(b, c), f0M = f(x0, yM) = f(a, d), etc. If we use the composite trapezium rule in both the directions, we obtain

I = (hk/4) [ f00 + 2(f01 + f02 + ... + f0,M−1) + f0M
    + 2 Σ_{i=1}^{N−1} { fi0 + 2(fi1 + fi2 + ... + fi,M−1) + fiM }
    + fN0 + 2(fN1 + fN2 + ... + fN,M−1) + fNM ].   (3.105)

If M = N = 1, we get the method given in (3.104). For M = N = 2, the weights in the formula are given in Fig. 3.4; for M = N = 4, the weights in the formula are given in Fig. 3.5 (Fig. 3.4, Fig. 3.5. Weights in the trapezium rule (3.105)).

Remark 17 The order of the composite trapezium rule is 1, that is, it integrates exactly polynomials of degree ≤ 1, in x and y. If we use the same mesh lengths along the x and y directions, that is, h = k, then Romberg extrapolation can also be used to improve the computed values of the integral. The Romberg formula is same as given in (3.60).

Example 3.26 Evaluate the integral

I = ∫_1^2 ∫_1^2 dx dy/(x + y)

using the trapezium rule with h = k = 0.5 and h = k = 0.25. Improve the estimate using Romberg integration. The exact value of the integral is I = ln(1024/729). Find the absolute errors in the solutions obtained by the trapezium rule and the Romberg value.

Solution We have a = 1, b = 2, c = 1, d = 2. When h = k = 0.5, we have the nodes at (1, 1), (1.5, 1), (2, 1), (1, 1.5), (1.5, 1.5), (2, 1.5), (1, 2), (1.5, 2), (2, 2) (Fig. 3.6. Nodal points. Example 3.26). The values of the integrand at the nodal points are obtained as the following.

f(1, 1) = 0.5;  f(1.5, 1) = f(1, 1.5) = 0.4;  f(2, 1) = f(1.5, 1.5) = f(1, 2) = 0.333333;
f(2, 1.5) = f(1.5, 2) = 0.285714;  f(2, 2) = 0.25.

Using the trapezium rule, we obtain

I = (hk/4) [{f(1, 1) + f(1, 2) + f(2, 1) + f(2, 2)} + 2{f(1.5, 1) + f(1, 1.5) + f(2, 1.5) + f(1.5, 2)} + 4 f(1.5, 1.5)]
  = 0.0625 [{0.5 + 0.333333 + 0.333333 + 0.25} + 2{0.4 + 0.4 + 0.285714 + 0.285714} + 4(0.333333)] = 0.343303.

With h = k = 0.25, we have the nodal points (xi, yj), xi, yj = 1, 1.25, 1.5, 1.75, 2 (Fig. 3.7. Nodal points. Example 3.26). The values at the nodal points are as the following.

f(1, 1) = 0.5;  f(1.25, 1) = f(1, 1.25) = 0.444444;  f(1.5, 1) = f(1.25, 1.25) = f(1, 1.5) = 0.4;
f(1.75, 1) = f(1.5, 1.25) = f(1.25, 1.5) = f(1, 1.75) = 0.363636;
f(2, 1) = f(1.75, 1.25) = f(1.5, 1.5) = f(1.25, 1.75) = f(1, 2) = 0.333333;
f(2, 1.25) = f(1.75, 1.5) = f(1.5, 1.75) = f(1.25, 2) = 0.307692;
f(2, 1.5) = f(1.75, 1.75) = f(1.5, 2) = 0.285714;
f(2, 1.75) = f(1.75, 2) = 0.266667;  f(2, 2) = 0.25.

Using the composite trapezium rule with hk/4 = 0.015625 (weights 1 at the four corners, 2 at the other boundary nodes, and 4 at the interior nodes), we obtain

I = 0.340668.

Romberg integration gives the improved value of the integral as

I = [4 I(0.25) − I(0.5)]/3 = [4(0.340668) − 0.343303]/3 = 0.339790.

Exact value: I = ln(1024/729) = 0.339798.

The magnitudes of the errors in the solutions are the following.

Trapezium rule with h = k = 0.5:   | 0.339798 − 0.343303 | = 0.003505.
Trapezium rule with h = k = 0.25:  | 0.339798 − 0.340668 | = 0.000870.
Romberg value:                     | 0.339798 − 0.339790 | = 0.000008.

Example 3.27 Evaluate ∫_0^2 ∫_0^2 f(x, y) dx dy by trapezium rule for the following data. (A.U. April/May 2005)

x \ y    0    1    2
0        2    3    4
0.5      3    4    6
1.0      4    6    8
1.5      5    9    11
2.0      5    11   14

Fig. 3.8. Nodal points.

Using the trapezium rule, we obtain

I = (hk/4)[{f(0, 0) + f(2, 0) + f(0, 2) + f(2, 2)} + 2{f(0.5, 0) + f(1, 0) + f(1.5, 0) + f(0, 1) + f(2, 1) + f(0.5, 2) + f(1, 2) + f(1.5, 2)} + 4{f(0.5, 1) + f(1, 1) + f(1.5, 1)}]
  = (0.125)[{2 + 5 + 4 + 14} + 2{3 + 4 + 5 + 3 + 11 + 6 + 8 + 11} + 4{4 + 6 + 9}]
  = 0.125[25 + 2(51) + 4(19)] = 25.375.

3.4.2 Evaluation of Double Integrals by Simpson's Rule

We consider the evaluation of the double integral

I = ∫_c^d ∫_a^b f(x, y) dx dy

over the rectangle x = a, x = b, y = c, y = d. To apply the Simpson's rule, let h = (b – a)/2 and k = (d – c)/2. Evaluating the inner integral by Simpson's rule, we obtain

I = (h/3) ∫_c^d [f(a, y) + 4f(a + h, y) + f(b, y)] dy   (3.106)

where a + h = (a + b)/2. Evaluating the integral again by Simpson's rule, we obtain

I = (hk/9)[{f(a, c) + 4f(a, c + k) + f(a, d)} + 4{f(a + h, c) + 4f(a + h, c + k) + f(a + h, d)} + {f(b, c) + 4f(b, c + k) + f(b, d)}]

  = (hk/9)[{f(a, c) + f(a, d) + f(b, c) + f(b, d)} + 4{f(a + h, c) + f(a, c + k) + f(b, c + k) + f(a + h, d)} + 16 f(a + h, c + k)].   (3.107)

The weights (coefficients of the ordinates f) in the Simpson's rule (3.107) are given in the computational molecule (Fig. 3.9) below.

Fig. 3.9. Weights in Simpson's rule (3.107).

  (hk/9) ×   1    4    1
             4    16   4
             1    4    1

Composite Simpson's rule Divide the interval [a, b] into 2N equal parts each of length h = (b – a)/(2N). Divide the interval [c, d] into 2M equal parts each of length k = (d – c)/(2M). We have odd number of points on each mesh line and the total number of points, (2N + 1)(2M + 1), is also odd. The general grid point is given by (xi, yj). Denote fij = f(xi, yj). For M = N = 2, the weights in the composite Simpson's rule are given in Fig. 3.10.

Fig. 3.10. Weights in the composite Simpson's rule.

  (hk/9) ×   1    4    2    4    1
             4    16   8    16   4
             2    8    4    8    2
             4    16   8    16   4
             1    4    2    4    1

Example 3.28 Evaluate the integral

I = ∫_{y=1}^{1.5} ∫_{x=1}^{2} dx dy/(x + y)

using the Simpson's rule with h = 0.5 along the x-axis and k = 0.25 along the y-axis. The exact value of the integral is I = 0.184401. Find the absolute error in the solution obtained by the Simpson's rule.
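Rule (3.107) is straightforward to put into code. The following Python sketch (the function name is our own) applies the single-panel rule to the integrand of Example 3.28, f(x, y) = 1/(x + y) over 1 ≤ x ≤ 2, 1 ≤ y ≤ 1.5:

```python
def simpson_2d(f, a, b, c, d):
    """Single-panel Simpson's rule (3.107) for a double integral over [a, b] x [c, d].

    Uses the 3 x 3 computational molecule with weights 1-4-1 in each direction.
    """
    h = (b - a) / 2.0
    k = (d - c) / 2.0
    w = [1.0, 4.0, 1.0]
    total = 0.0
    for i in range(3):
        for j in range(3):
            total += w[i] * w[j] * f(a + i * h, c + j * k)
    return (h * k / 9.0) * total

I = simpson_2d(lambda x, y: 1.0 / (x + y), 1.0, 2.0, 1.0, 1.5)
# I is about 0.184432, within 0.000031 of the stated exact value 0.184401
```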

Solution We have a = 1, b = 2, c = 1, d = 1.5, h = 0.5, k = 0.25. The nodal points are shown in Fig. 3.11 (nodal points). At the nodal points, we have the following values of the integrand f(x, y) = 1/(x + y).

f(1, 1) = 0.5,         f(1.5, 1) = 0.4,          f(2, 1) = 0.333333,
f(1, 1.25) = 0.444444, f(1.5, 1.25) = 0.363636,  f(2, 1.25) = 0.307692,
f(1, 1.5) = 0.4,       f(1.5, 1.5) = 0.333333,   f(2, 1.5) = 0.285714.

Simpson's rule gives the value of the integral as

I = (hk/9)[{f(a, c) + f(a, d) + f(b, c) + f(b, d)} + 4{f(a + h, c) + f(a, c + k) + f(b, c + k) + f(a + h, d)} + 16 f(a + h, c + k)]
  = (0.125/9)[{0.5 + 0.4 + 0.333333 + 0.285714} + 4{0.4 + 0.444444 + 0.307692 + 0.333333} + 16(0.363636)] = 0.184432.

The magnitude of the error in the solution is | 0.184401 – 0.184432 | = 0.000031.

Example 3.29 Evaluate ∫₀¹ ∫₀¹ e^(x+y) dx dy using Simpson and trapezium rules. (A.U. May/June 2006; A.U. Nov./Dec. 2006)

Solution Since the step lengths are not prescribed, we use the minimum number of intervals required to apply both the trapezium and Simpson's rules. Let h = k = 0.5. We have the nodal points shown in Fig. 3.12 (nodal points). The values at the nodal points are as follows.

f(0, 0) = e⁰ = 1,  f(0.5, 0) = f(0, 0.5) = e^0.5 = 1.648721,
f(1, 0) = f(0.5, 0.5) = f(0, 1) = e¹ = 2.71828,
f(1, 0.5) = f(0.5, 1) = e^1.5 = 4.48169,  f(1, 1) = e² = 7.389056.

Using the trapezium rule, we obtain

I = (hk/4)[{f(0, 0) + f(1, 0) + f(0, 1) + f(1, 1)} + 2{f(0.5, 0) + f(0, 0.5) + f(1, 0.5) + f(0.5, 1)} + 4 f(0.5, 0.5)]
  = 0.0625[{1.0 + 2(2.71828) + 7.389056} + 2{2(1.64872) + 2(4.48169)} + 4(2.71828)] = 3.07627.

Using the Simpson's rule, we obtain

I = (hk/9)[{f(0, 0) + f(1, 0) + f(0, 1) + f(1, 1)} + 4{f(0.5, 0) + f(0, 0.5) + f(1, 0.5) + f(0.5, 1)} + 16 f(0.5, 0.5)]
  = (0.25/9)[{1.0 + 2(2.71828) + 7.389056} + 4{2(1.64872) + 2(4.48169)} + 16(2.71828)] = 2.95448.

The exact solution is

I = [eˣ]₀¹ [eʸ]₀¹ = (e – 1)² = 2.95249.

The magnitudes of errors in the solutions are

Trapezium rule:  | 3.07627 – 2.95249 | = 0.12378.
Simpson's rule:  | 2.95448 – 2.95249 | = 0.00199.

EXERCISE 3.4

1. Evaluate ∫₀¹ ∫₀¹ dx dy/(1 + x + y) using trapezium rule, taking h = k = 0.25. (A.U. April/May 2004)
2. Evaluate ∫₁² ∫₁² [2xy/{(1 + x²)(1 + y²)}] dy dx by trapezium rule, taking h = k = 0.25. (A.U. Nov./Dec. 2005)
3. Using Simpson's 1/3 rule, evaluate ∫₀¹ ∫₀¹ dx dy/(1 + x + y), taking h = k = 0.25. (A.U. Nov./Dec. 2004)

4. Evaluate ∫₁² ∫₁² dx dy/(x² + y²) numerically with h = 0.2 along the x-direction and k = 0.25 along the y-direction. (A.U. Nov./Dec. 2003)
5. Evaluate ∫_{1.4}^{2.0} ∫_{1.0}^{1.5} ln (x + 2y) dy dx, choosing Δx = 0.15 and Δy = 0.25, in the trapezium rule.
6. Using the trapezium rule, evaluate ∫_{y=1}^{1.5} ∫_{x=1}^{2} dx dy/(x² + y²)^(1/2), taking four sub-intervals.
7. Evaluate the double integral ∫₀¹ ∫₀¹ dx dy/{(3 + x)(4 + y)} using the trapezium rule and Simpson's rule with two and four subintervals.

3.5 ANSWERS AND HINTS

Exercise 3.1

1. – (h²/6) f″′(xk) + ..., or as – (h²/6) f″′(ξ), xk – h < ξ < xk + h.
2. Magnitudes of the errors are 0.099 and 0.0245 respectively. Error is smaller for h = 0.1.
3. 0.2558, 0.4235, 0.5894.
4. 0.1363, 0.1548, 3.9485.
5. 0.0723, 0.0582, 0.2225.
6. Since (dy/dx) is always positive, y is an increasing function.
7. For θ = 31°, (dy/dθ) = sec² θ.
8. y is maximum at s = 2, y is minimum at s = – 2; y(maximum) = 0.2885, y(minimum) = – 0.2598.
9. y is maximum at s = 1.5, y is minimum at s = 1.
10. y(maximum) = 1339, y(minimum) = 1300.
11. The maximum value is obtained at x = 9.0546 or x = 10.0546. Maximum value = 922.
12. x = 0.3792 or x = 0.8792; x = 0.1208 or x = – 0.1208.
13. 52, 98, 135; 3.3202, 3.3211, 3.3875.
14. 0.170833. Magnitude of error = 0.023967.

Exercise 3.2

(For comparison, the exact solutions are given, wherever it is necessary.)

1. 0.697024, 0.693122.
2. 0.785267, 0.785396.
3. Exact: 2 tan⁻¹ 1 = 1.570796.
4. 1.411059. Exact: tan⁻¹ 6 = 1.405648.
5. 1.391210, 1.389242. Exact: 1.389056.
6. 1.427027, 1.420728. Exact: cos 1 – cos 6 = – 0.4199.
7. h = π/6: 1.954097; h = π/10: 1.983524. Exact: 2.
8. 1.610084. Exact: ln 5 = 1.609438.
9. 1.565588, 1.586423.
10. – 0.840541.
11. Trapezium rule (Romberg table), p = 0:
      h = 1/2:  0.097110
      h = 1/4:  0.097504   0.097635
      h = 1/8:  0.097604   0.097637   0.097637
    p = 1:
      h = 1/2:  0.097662
      h = 1/4:  0.097633   0.097604
      h = 1/8:  0.097634   0.097635   0.097634
12. Trapezium rule (Romberg table), p = 0:
      h = 1/2:  0.072419
      h = 1/4:  0.061305   0.057600
      h = 1/8:  0.056448   0.054322   0.051621
    Simpson's rule (Romberg table):
      h = 1/2:  0.064740
      h = 1/4:  0.055895   0.052283
      h = 1/8:  0.052005   0.051994   0.051944
13. Romberg value = 6.76351 (computed with h = 1 and h = 0.5).

Exercise 3.3

(For comparison, the exact solutions are given, wherever it is necessary.)

1. 2x = t + 1, f(t) = 2/[4 + (t + 1)²]. I(Two point) = 0.324390, I(Three point) = 0.324795. Exact: 0.324821.
2. 2x = t + 3, f(t) = 1/(t + 3).
3. x = t + 1, f(t) = 2/[16 + (t + 1)⁴].
4. f(t) = (t + 2)²/[1 + (t + 2)⁴].
5. 2x = t + 5; f(t) = 1/(4t + 7).
6. Two point formula: I1 = 0.211268, I2 = 0.112971, I = 0.324239. Three point formula: I1 = 0.211799, I2 = 0.112996, I = 0.324795. (I1: 2x = t + 1; I2: 2x = t + 3.) Exact: 0.324821.
7. I(Two point) = 0.141900, I(Three point) = 0.142157.
8. I(Two point) = 0.154548, I(Three point) = 0.154639. Results are more accurate than in Problem 7.
9. f(t) = 1/[2{3 + 2(t + 1)}], 9 points; f(t) = 1/[2{3 + 2(t + 3)}], 15 points.
10. f(t) = cos (t + 5)/[2(1 + sin {(t + 5)/2})]. 25 points, 30 points.
11. x = 0.65t + 0.85, f(t) = 0.65 e^(–(0.65t + 0.85)²). I(Three point) = 0.531953.
12. f(t) = 1/[9 + (t + 2)²].

Exercise 3.4

1. 0.524074.
2. 0.312330, 0.320610.
3. 0.232316.
4. 0.405428, 0.407017.
5. 0.428875.
6. 0.658602, 0.340668.
7. Two subintervals: IT = 0.064554, IS = 0.064199. Four subintervals: IT = 0.064285, IS = 0.064195. Exact: 0.064195.

Initial Value Problems for Ordinary Differential Equations

4.1 INTRODUCTION

The general form of an mth order ordinary differential equation is given by

φ(x, y, y′, y″, ..., y(m)) = 0.   (4.1)

The order of a differential equation is the order of its highest order derivative and the degree is the degree of the highest order derivative after the equation has been rationalized in derivatives. A linear differential equation of order m can be written as

a0(x) y(m)(x) + a1(x) y(m–1)(x) + ... + am–1(x) y′(x) + am(x) y(x) = r(x)   (4.2)

where a0(x), a1(x), ..., am(x) and r(x) are constants or continuous functions of x.

The general solution of the equations (4.1) or (4.2) contains m arbitrary constants. The solution may be obtained in an implicit form as

g(x, y, c1, c2, ..., cm) = 0,   (4.3)

or in an explicit form as

y = h(x, c1, c2, ..., cm).   (4.4)

The m arbitrary constants c1, c2, ..., cm can be determined by prescribing m conditions of the form

y(x0) = b0, y′(x0) = b1, y″(x0) = b2, ..., y(m–1)(x0) = bm–1.   (4.5)

The conditions are prescribed at one point x0. This point x0 is called the initial point and the conditions (4.5) are called initial conditions. The differential equation (4.1) or (4.2) together with the initial conditions (4.5) is called an initial value problem.

A first order initial value problem can be written as

y′ = f(x, y), y(x0) = b0.   (4.6)

Reduction of second order equation to a first order system

Let the second order initial value problem be given as

a0(x) y″(x) + a1(x) y′(x) + a2(x) y(x) = r(x), y(x0) = b0, y′(x0) = b1.   (4.7)

We can reduce this second order initial value problem to a system of two first order equations. Define u1 = y, u2 = y′. Then, we have the system

u1′ = y′ = u2,
u2′ = y″ = (1/a0(x))[r(x) – a1(x) y′(x) – a2(x) y(x)] = (1/a0(x))[r(x) – a1(x) u2 – a2(x) u1].

The system is given by

[u1, u2]′ = [u2, f2(x, u1, u2)]^T, [u1(x0), u2(x0)]^T = [b0, b1]^T   (4.8)

where f2(x, u1, u2) = (1/a0(x))[r(x) – a1(x) u2 – a2(x) u1]. In general, we may have a system as

[y1, y2]′ = [f1(x, y1, y2), f2(x, y1, y2)]^T, [y1(x0), y2(x0)]^T = [b0, b1]^T.   (4.9)

In vector notation, denote y = [y1, y2]^T, f = [f1, f2]^T, b = [b0, b1]^T. Then, we can write the system as

y′ = f(x, y), y(x0) = b.   (4.10)

Therefore, the methods derived for the solution of the first order initial value problem

dy/dx = f(x, y), y(x0) = y0   (4.11)

can be used to solve the system of equations (4.9) or (4.10), that is, the second order initial value problem (4.7), by writing the method in vector form.

Example 4.1 Reduce the following second order initial value problems into systems of first order equations:

(i) 2y″ – 5y′ + 6y = 3x, y(0) = 1, y′(0) = 2.
(ii) x²y″ + (2x + 1)y′ + 3y = 6, y(1) = 2, y′(1) = 0.5.
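The reduction is mechanical, and the resulting system is exactly what an ODE solver consumes. A minimal Python sketch for problem (i) (the function name is our own choice):

```python
def f_system(x, u):
    """Right hand side of the first order system for problem (i):
    2y'' - 5y' + 6y = 3x, rewritten with u = [u1, u2] = [y, y'].
    """
    u1, u2 = u
    # u1' = u2 and u2' = (1/2)[3x + 5u2 - 6u1], from solving for y''
    return [u2, (3.0 * x + 5.0 * u2 - 6.0 * u1) / 2.0]

u0 = [1.0, 2.0]             # initial conditions y(0) = 1, y'(0) = 2
slope0 = f_system(0.0, u0)  # derivative vector at the initial point: [2.0, 2.0]
```

Any single step method written in vector form, such as Euler's method ui+1 = ui + h f(xi, ui), can then march this system forward componentwise.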

u1.. u . [6 − (2 x + 1) y ′ − 3 y] = 1 x2 [6 − (2 x + 1)u2 − 3u1 ]. < xn = b.. and (ii) multi step methods. u1. u2) = u2 and f2(x. Then. They are (i) single step methods. NUMERICAL METHODS 1 1 [3 x + 5 y ′ − 6 y] = [3 x + 5u2 − 6u1 ]. u )Q Nu (0)Q N2Q 1 1 1 2 1 2 2 1 2 2 where f1(x. If the spacing is uniform.5Q 1 1 1 2 1 2 2 1 2 2 where f1(x.. 2. i = 1.5 . (4.. we shall consider the case of uniform mesh only. The points are called mesh points or grid points.. n. u2) = [6 – (2x + 1)u2 – 3u1]/x2.2 SINGLE STEP AND MULTI STEP METHODS y′ = f(x. u2) = [3x + 5u2 – 6u1]/2.12) 4. u2 (1) = 0. u1. u2) = u2 and f2(x.. u . u ) OP . The spacing between the points is given by hi = xi – xi–1. u .13) The methods for the solution of the initial value problem can be classified mainly into two types. The system may be written as LMu OP′ = LMf ( x . i = 1. into a finite number of subintervals by the points x0 < x1 < x2 < . u ) OP . then hi = h = constant. For our discussions. u )Q Nu (1)Q N0. . n.182 Solution (i) Let u1 = y. y(x0) = y0 (4. we have the system u1′ = u2 u2′ = u1(0) = 1. Numerical methods We divide the interval [x0. u1. u1′ = u2. . 2. b] on which the solution is desired.. LMu (1) OP = LM2 OP Nu Q Nf ( x . u2′ = 1 x 2 (ii) Let u1 = y. . we have the system u1(1) = 2. 2 2 u2 (0) = 2. LMu ( 0) OP = LM1 OP Nu Q Nf ( x . The system may be written as LMu OP′ = LMf ( x . We assume the existence and uniqueness of solutions of the problems that we are considering. Then. u . y)..

We denote the numerical solution and the exact solution at xi by yi and y(xi) respectively.

Single step methods In single step methods, the solution at any point xi+1 is obtained using the solution at only the previous point xi. Thus, a general single step method can be written as

yi+1 = yi + h φ(xi+1, xi, yi+1, yi, h)   (4.14)

where φ is a function of the arguments xi+1, xi, yi+1, yi, h and depends on the right hand side f(x, y) of the given differential equation. This function φ is called the increment function.

If yi+1 can be obtained simply by evaluating the right hand side of (4.14), then the method is called an explicit method. In this case, the method is of the form

yi+1 = yi + h φ(xi, yi, h).   (4.15)

That is, we compute successively

y1 = y0 + h φ(x0, y0, h), y2 = y1 + h φ(x1, y1, h), ...

If the right hand side of (4.14) depends on yi+1 also, then it is called an implicit method. In this case, we obtain a nonlinear algebraic equation for the solution of yi+1 (if the differential equation is nonlinear).

Local truncation error or discretization error The exact solution y(xi) satisfies the equation

y(xi+1) = y(xi) + h φ(xi+1, xi, y(xi+1), y(xi), h) + Ti+1   (4.16)

where Ti+1 is called the local truncation error or discretization error. Therefore, the truncation error (T.E.) is defined by

Ti+1 = y(xi+1) – y(xi) – h φ(xi+1, xi, y(xi+1), y(xi), h).   (4.17)

Order of a method The order of a method is the largest integer p for which

(1/h) | Ti+1 | = O(h^p).   (4.18)

Multi step methods In multi step methods, the solution at any point xi+1 is obtained using the solution at a number of previous points. Suppose that we use y(x) and y′(x) at k + 1 previous points xi+1, xi, xi–1, ..., xi–k+1. That is, the values yi+1, yi, yi–1, ..., yi–k+1, y′i+1, y′i, y′i–1, ..., y′i–k+1 are used to determine the approximation to y(x) at xi+1. We call the method a k-step multi step method.

For example, a two step method uses the values yi+1, yi, yi–1, y′i+1, y′i, y′i–1 and the method can be written as

yi+1 = yi + h φ(xi+1, xi, xi–1, yi+1, yi, yi–1, h)

or as

yi+1 = yi–1 + h φ(xi+1, xi, xi–1, yi+1, yi, yi–1, h).

We assume that the numerical solution is being obtained at xi+1 and the solution values at all the required previous points are known.
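As one classical instance of the two step form yi+1 = yi–1 + h φ(...) — the explicit mid-point ("leapfrog") rule, which is not derived in this section — the sketch below shows how a multi step method consumes two previous values, and how a single step method (here one Euler step) supplies the missing starting value:

```python
def leapfrog(f, x0, y0, h, n):
    """Two step (multi step) method: y_{i+1} = y_{i-1} + 2h f(x_i, y_i).

    A k-step method needs k starting values; the second one, y_1, is
    generated here with one Euler step, after which the two step
    recursion uses the two most recent values at every step.
    """
    ys = [y0, y0 + h * f(x0, y0)]      # y_0 given, y_1 from a single step method
    for i in range(1, n):
        ys.append(ys[i - 1] + 2.0 * h * f(x0 + i * h, ys[i]))
    return ys

# Demonstration on y' = y, y(0) = 1 (exact solution y = e^x), 10 steps of h = 0.1
ys = leapfrog(lambda x, y: y, 0.0, 1.0, 0.1, 10)
# ys[-1] is about 2.707956, an approximation to e = 2.718282
```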

.. b] and b is the point up to which the solution is required. tively. we obtain y(xi+1) = y(xi) + h y′(xi) + h2 h p ( p) h p+ 1 y′′( xi ) + .3 TAYLOR SERIES METHOD Taylor series method is the fundamental numerical method for the solution of the initial value problem given in (4. xi. ... h). xi. yi. then it is called an implicit method.19) 1 ( x − xi ) p+ 1 y( p+ 1) ( xi + θh) ( p + 1) ! where 0 < θ < 1. the two step method is of the form yi+1 = yi + hφ(xi..184 NUMERICAL METHODS where φ depends on the right hand side f(x. yi–1. xi–1. yi–1.. . yi+1. xi–1. yi. . We denote the numerical solution and the exact solution at xi by yi and y(xi) respec- Now. . The length of the interval is h = xi+1 – xi.. Expanding y(x) in Taylor series about any point der. x ∈ [x0. we obtain a nonlinear algebraic equation for the solution of yi+1 (if the differential equation is nonlinear).. yi. Substituting x = xi+1 in (4. + (x – xi) p y (p) (xi) p! 2! (4. This function φ is called the increment function. If the right hand side depends on yi+1 also. In this case.. h) and a general k-step implicit method can be written as yi+1 = yi + hφ(xi–k+1. yi–k+1. h) or as yi+1 = yi–1 + hφ(xi. xi+1]. A general k-step explicit method can be written as yi+1 = yi + hφ(xi–k+1. y) of the given differential equation... then the method is called an explicit method. h). + y ( xi ) + y ( p +1) ( xi + θh) . xi+1.13). yi... we obtain y(x) = y(xi) + (x – xi) y′(xi) + + xi . consider the interval [xi. yi–k+1. We now derive a few numerical methods. 4. yi–1. with the Lagrange form of remain- 1 1 (x – xi)2 y″(xi) + ... xi–1. that is. If yi+1 can be obtained simply by evaluating the right hand side..19). 2! ( p + 1) ! p! .

E. yi″ + . we obtain the first order Taylor series method as yi+1 = yi + hyi′ = yi + hf(xi. we obtain the Taylor series method as yi+1 = yi + hyi′ + h2 h p ( p) yi . The truncation error of the Euler’s method is T. = Since. ( p + 1) ! (4. increases as the order of the derivative of y increases.24) can be used to predict the error in the method at any point.21) Using the definition of the order given in (4.. h2 h3 y ′′ ( xi ) + y ′′′( xi ) + . y″ = y″′ = ∂ ∂x ∂f ∂f dy + = fx + f f y . y). ∂x ∂y dx FG ∂f + ∂f dy IJ + ∂ FG ∂f + ∂f dy IJ dy H ∂x ∂y dx K ∂y H ∂x ∂y dx K dx = fxx + 2f fxy + f 2 fyy + fy(fx + f fy).23) Sometimes..INITIAL VALUE PROBLEMS FOR ORDINARY DIFFERENTIAL EQUATIONS 185 Neglecting the error term.E..22) This method is also called the Euler method.20) is of order p.18). we say that the Taylor series method (4. h Euler method is a first order method. then (4. The higher order derivatives are obtained as y′ = f(x.E.) = O(h). we write this truncation error as T.20) Note that Taylor series method is an explicit single step method. If the higher order derivatives can be computed. Therefore. = h2 yi′′ ( xi + θh) . we find that computation of higher order derivatives is very difficult. For p = 1. 2! (4.17). The number of partial derivatives required. the truncation error of the method is given by Ti+1 = h p+ 1 y( p + 1) ( xi + θh) . + 2! p! (4. Hence. .24) 1 (T. Using the definition given in (4. 2! 3! (4.. Remark 1 Taylor series method cannot be applied to all problems as we need the higher order derivatives. yi). (4. etc. we need suitable methods which do not require the computation of higher order derivatives.

Euler’s method is of first order.27) 1 2 (h fxx + 2hk f xy + k2 f yy ) + .. we need the Taylor series expansion of function of two variables.25) ( p+ 1) ( x)|.. y) + . Denote the numerical values obtained with step length h by yE(h). we get the new approximation to y(x) as y(x) ≈ For q = 1/2.30) (4. y) + G h 2 ! H ∂x ∂y K H ∂x ∂y K = f(x.. y) + 1 FG h ∂ + k ∂ IJ f(x + h.E. We illustrate this procedure for Euler’s method..26) Remark 3 For performing the error analysis of the numerical methods. q−1 [(1/2) yE (h) − yE (h/2)] (1/2) − 1 [2 yE (h/2) − yE (h)] = [2yE(h/2) – yE(h)]. Mp+1 = max | y The bound on the truncation error of the Euler method (p = 1) is given by | T. that is. (4.. | ≤ h2 max | y ′′( x)|. the error expression is given by y(x) – yE(h) = c1h + c2h2 + c3h3 + . 2 x0 ≤ x ≤ b (4..28) Repeating the computations with step length qh. (4. The Taylor expansion is given by F ∂ + k ∂ IJ f (x.31) .. we obtain q[y(x) – yE(h)] – [y(x) – yE(qh)] = c2(q – q2)h2 + . 0 < q < 1. we get the error of approximation as y(x) – yE(qh) = c1(qh) + c2(qh)2 + c3(qh)3 + . Neglecting the O(h2) term. y) + (h fx + k fy) + 2 f ( x.29) [q yE (h) − yE (qh)] .... Eliminating c1. 2−1 (4.. y + k) = f(x.. 2! Remark 4 Extrapolation procedure (as described in numerical integration for Romberg integration) can also be used for the solution of the ordinary differential equations. we get y(x) ≈ = (4. or (q – 1)y(x) – [q yE(x) – yE(qh)] = c2(q – q2)h2 + .186 NUMERICAL METHODS Remark 2 The bound on the truncation error of the Taylor series method of order p is given by |Ti + 1 |= where h p+ 1 h p+ 1 y ( p+ 1) ( xi + θh) ≤ M p +1 ( p + 1) ! ( p + 1) ! x0 ≤ x ≤ b (4.

. y3 1.2(0. 1.34) Example 4.1.11692 + = 1. .0 y1 hxi . we get the first column of approximations as y(x) ≈ (0 (0 2 yE ) (h/2) − yE ) (h) ( 0) ( 0) = 2 y E ( h/2) − y E ( h ) .11692 .4) = 1. (h/22).2) = 1.2(0. y0 0.2) ≈ y1 = y0 + y(x2) = y(0.2 and h = 0. y(0) = 1. 2m − 1 (4.2 Solve the initial value problem yy′ = x. with h = 0. Compare the results with the exact solution at x = 0. we get the second column of approximations as y(x) ≈ (1 (1 2 2 yE ) (h/2) − yE ) (h) .33) For m = 2. we get y′ = f(x.2.2 x2 0. Let the step lengths be successively reduced by the factor 2.6) = 1. Extrapolate the result.2 x0 = 1.8. 22 − 1 (4. yi) = yi + x0 = 0.0. yi+1 = yi + 0.2 x1 0. yi 0. 0. we use the step lengths. Solution We have Euler method gives Initial condition gives When h = 0.04.32) For m = 1.04 0.4) ≈ y2 = y1 + y(x3) = y(0. h.2 xi .6) ≈ y3 = y2 + y(x4) = y(0. yi+1 = yi + h f (xi.11692 = 1.22436.2 x3 0.1.8.2(0. using the Euler method in 0 ≤ x ≤ 0. yi 0.INITIAL VALUE PROBLEMS FOR ORDINARY DIFFERENTIAL EQUATIONS 187 We can derive general formula for extrapolation. y(x1) = y(0.1 xi . y0 = 1. That is. we get yi+1 = yi + We have the following results. h/2.0 + = 1.8) ≈ y4 = y3 + When h = 0. The formula is given by (m y(x) ≈ yE ) (h) = (m (m 2 m yE − 1) (h/2) − yE −1) (h) .. y) = (x/y). 2−1 (4..04 + y2 1. yi We have the following results.

2) with h = 0.6) = 1.2).01 0. Using (4.4) ≈ y4 = y3 + y(x5) = y(0.14229.2) = 1. yi) = yi + 0. y) = x(y + 1). Compute y(0.22436 = 1. y6 1.25341) – 1.64 = 1.7) = 1.1[xi (yi + 1)].1(0. h = 0.1x2 0.25341 | = 0. and (iii) fourth order Taylor x /2 series method.0.1x1 0. we get . y3 1.3) = 1.1x4 0.00184.1x6 0.4) = 1.14229 + = 1.6) ≈ y6 = y5 + y(x7) = y(0.0 y1 NUMERICAL METHODS 0. find the estimate of the errors.05893 + = 1.1(0.25341.0967 0.188 y(x1) = y(0.1: | 1.02980. If the exact solution is y = –1 + 2 e . x0 = 0. y2 1.19482. The magnitudes of the errors in the solutions are the following: h = 0.2: | 1.1) ≈ y1 = y0 + y(x2) = y(0. 0.09670.28246 | = 0.1x5 0.31).2) ≈ y2 = y1 + y(x3) = y(0. y7 1.8) ≈ y8 = y7 + The exact solution is y = x2 +1 . yi+1 = yi + h f(xi.1 using (i) Euler method (ii) Taylor series method of order two. y(0) = 1. we get the extrapolated result as y(0.28062 – 1.02721.8) = 1.05893 y4 0.19482 + = 1.01. y0 = 1. the exact value is y(0.1(0. y5 1.0298 + = 1. 2 Solution We have (i) Euler’s method: f(x.05893.28062 – 1.8) = [2yE(h/2) – yE(h)] = 2(1. The magnitude of the error in the extrapolated result is given by | 1. 1.19482 At x = 0.0967 + = 1.1) = 1.1(0.22436 | = 0.5) = 1.3 Consider the initial value problem y′ = x(y + 1).0298 0.1x7 0.0 + 1. In the solutions obtained by the Euler method.28246. Example 4.28062 – 1.5) ≈ y5 = y4 + y(x6) = y(0. find the magnitudes of the actual errors for y(0.1(0.01 + = 1. = 1.1x3 0.05626.8.14229 0.1x0 = 1. y0 0.7) ≈ y7 = y6 + y(x8) = y(0.1(0.28062.3) ≈ y3 = y2 + y(x4) = y(0. y0 = 1. With x0 = 0.1(0.

0.1[x0 (y0 + 1)] = 1 + 0.010025 + 1 = 2.201003.1) ≈ y1 = y0 + 0.2) ≈ y2 = y1 + 0.201003) = 0.1)(2)] = 1.1 [x1(y1 + 1)] = 1. y1″ = x1 y1″ + y1 + 1 = (0.1. y″′ = xy″ + 2y′.0 + 0.INITIAL VALUE PROBLEMS FOR ORDINARY DIFFERENTIAL EQUATIONS 189 y(0. y1″′ = x1 y1″ + 2y1′ = 0.005 y0 = 1 + 0 + 0. 0.1 y0 + 0. With x1 = 0. 24 . we get y1′ = 0. 6 24 = xy′″ + 3y″.01 + 1) = 0.1[0] = 1.030125.1.005y1′ = 1.1y0′ + 0. y0″′ = x0y0″ + 2y0′ = 0.1)(0.010025. 2! i y″ = xy′ + y + 1.1) ≈ y1 = y0 + 0. y1(4) = x1 y1″ + 3y1″ = 0.04025.02.010025 + 1) = 0.005 [ 2] = 1. y1″ = x1y1′ + y1 + 1 = (0.0001 ( 4 ) yi ′″ + yi .0301)] = 1. y0″ = x0y0′ + y0 + 1 = 0 + 1 + 1 = 2.0001 (6) = 1.01 + 1 = 2.005yi″.605019. With x1 = 0.030125) = 6.1(0.1[(0. = yi + 0.01 + 0. (ii) Taylor series second order method.2) ≈ y2 = y1 + 0. ′ ′′ y(0. yi+1 = yi + hyi′ + h2 h3 h4 (4) y i′′+ yi ′″ + y 2! 3! 4! i 0.1 (0. With x0 = 0.005y0″ = 1 + 0 + 0. y1 = 1. y1 = 1.010025. we get y1′ = 0.1)(0. y(4) With x0 = 0. y(0.0.005(2.201) + 0.201) + 1.1 y1′ + 0. we get y0′ = 0.005 (2) + 0 + With x1 = 0. (iii) Taylor series method of fourth order.030125) + 2(0.150877.1.001 0.1(1.605019) + 3(2.1 yi′ + 0.01.1(2. y0 = 1. we get y0′ = 0. we get y(0. yi+1 = yi + h y′ + i We have h2 y″ = yi + 0. y0″ = 2.1) ≈ y1 = y0 + 0.201.01.1(1. y0 = 1. y0(4) = x0y0″′ + 3y0″ = 0 + 3(2) = 6.0301. y(0.1 yi′ + 0. y1 = 1.201003) + 1.005 yi″ + We have y″ = xy′ + y + 1.

y(0. 2! 0.2) = 1.040403 | = 0.010025.02.000001. then we may include the fourth term and the fifth term can be neglected.0001 (6. y0″ = 2.2) ≈ y2 = 1.04025 – 1. y0 = 0. y0′ = 2y0 + 3e0 = 3.02 – 1. 2006) y i +1 = yi + hyi′ + We have With x0 = 0.010025 + 0. we get h2 h3 h4 (4) yi yi′′′+ y + . y(4) = 2y0 + 3 = 2(21) + 3 = 45./Dec. + The magnitudes of the actual errors at x = 0. Solution The Taylor series method is given by (A.01.1(0.1] ≈ y1′ = x1 (y1 + 1) = 0.01 (2. ≈ We have h2 y ′′( xi ) . [Estimate of error in y(0.030125) + 0.000101 and 0.605019) 6 0. y(0) = 0.01 (2) = 0.000152. 24 The exact value is y(0.2] ≈ Example 4. ″′ ″ ″′ 0 .U. Nov.005(2.040403.2.2) at x = 0. ′′+ 2! 3! 4! i y′ = 2y + 3ex. | 1. This implies that if the result is required to be accurate for three decimal places only (the error is ≤ 0. y1″ = 1 + y1 + x1y1′ = 1 + 1 + 0. 2 y0′ = 0.1(0.02) = 0.1 and x = 0.001 (0.020403. given y′ – 2y = 3ex.201003) + 0.3 (iii).4 Find y at x = 0.040403 | = 0. 0. To estimate the errors in the Euler method.0005).E..2 correct to three decimal places. y0 = 2y0′ + 3 = 2(3) + 3 = 9.000026 respectively. y′″ = 2y″ + 3ex. 2 Remark 4 In Example 4.1) = 1.190 NUMERICAL METHODS y(0. Taylor series method of second order: Taylor series method of fourth order: | 1. [Estimate of error in y(0.2) = 2.040402. ″ y0 = 2y0 + 3 = 2(9) + 3 = 21.1) at x = 0. y″ = 2y′ + 3ex.2 are Euler method: | 1.0101.040402 – 1. we use the approximation T. notice that the contributions of the fourth and fifth terms on the right hand side are 0.150877) = 1.. y(4) = 2y″′ + 3ex.040403 | = 0.

1) ≈ y1 = y0 + 0.00023 < 0.005y0″ + = 0 + 0.0005.000021.105171) = 25.001 0.005(9) + 0.1.341287) + 0.1) – y1 | = | 0.2) = 0. y1″′ = 2y1″ + 3e0. it is sufficient to consider the five terms on the right hand side of the Taylor series. We obtain y(0. ( y14 ) = 2 y1 ′″ + 3e 0.998087) + (55.0035 + 0.0005. 4! 24 Therefore.000187 < 0.1 = 2(25. It is interesting to check whether we have obtained the three decimal place accuracy in the solutions.1 = 2(11.012887) + 3(1.0001 ( 4 ) y1 ′″ + y1 6 24 = 0.811266.348687) + 3(1.341287.311687) = 0. . The magnitudes of the errors are given by | y(0.348687 | = 0.1 = 2(4.998087) + 3(1. and y(0.000008.348687 + 0.0001 y0 = (45) = 0.001 0.1 y0′ + 0.348687.2) ≈ y2 = y1 + 0. We obtain y(0.000187 = 0.045 + 0. we get y1′ = 2y1 + 3e0.2) – y2 | = | 0. | y(0.811245. y(0.012887) + 0.001 0. y1 = 0.811245 | = 0.0001 ( 4 ) y 0 ′″ + y0 6 24 0.1 y1′ + 0.105171) = 11. it is sufficient to consider the five terms on the right hand side of the Taylor series.348695.1(4.105171) = 4. The exact solution is y(x) = 3(e2x – ex).998087.3 + 0.056706 + 0.00023 = 0. y1″ = 2y1′ + 3e0.0001 y0 = (55.005 (11.INITIAL VALUE PROBLEMS FOR ORDINARY DIFFERENTIAL EQUATIONS 191 The contribution of the fifth term on the right hand side of the Taylor series is h 4 (4) 0. The contribution of the fifth term on the right hand side of the Taylor series is h 4 (4) 0.401289 + 0.0001 (25.348695 – 0.004333 + 0.311687.348687.001 0.1(3) + 0.005 y1″ + 0.1) = 0.012887.341287) + 3(1.0001 (21) + (45) 6 24 = 0.311687) 6 24 = 0. With x1 = 0.811266 – 0. 4! 24 Therefore.1 = 2(0.105171) = 55.348687 + 0.

y(4)(0) = 2[y(0)y″′(0) + 3y′(0)y″(0)] = 0. However.1 Modified Euler and Heun’s Methods First. y) in the interval [xi. y) = x2 + y2. all these methods must compare with the Taylor series method when they are expanded about the point x = xi. Euler method. + 3 63 We have noted earlier that from application point of view. y″(0) = 0 + 2y(0)y′(0) = 0. we require methods which do not require the derivation of higher order derivatives. We now derive methods which are of order higher than the Euler method. y(5) = 2[yy(4) + 4y′y″′ + 3(y″)2]. y(7)(0) = 2[y(0) y(6)(0) + 6y′(0)y(5)(0) + 15y″(0)y(4)(0) + 10{y″′(0)}2] = 80. y(4) = 2[yy″′ + 3y′y″]. y(7) = 2[yy(6) + 6y′y(5) + 15y″y(4) + 10(y″′)2]. we discuss an approach which can be used to derive many methods and is the basis for Runge-Kutta methods. The Taylor series with first two non-zero terms is given by y(x) = x3 x7 . (4.35) . it is a first order method and the step length h has to be chosen small in order that the method gives accurate results and is numerically stable (we shall discuss this concept in a later section). y ) dx or y(xi+1) = y(xi) + z x i +1 xi f ( x . y ) dx . Integrating the differential equation y′ = f(x. the Taylor series method has the disadvantage that it requires expressions and evaluation of partial derivatives of higher orders. y″′(0) = 2 + 2 [y(0) y″(0) + {y′(0)}2] = 2. y″′ = 2 + 2[yy″ + (y′)2].5 Find the first two non-zero terms in the Taylor series method for the solution of the initial value problem y′ = x2 + y2. y(5)(0) = 2[y(0)y(4)(0) + 4y′(0)y″′(0) + 3{y″(0)}2] = 0. y(6)(0) = 2[y(0)y(5)(0) + 5y′(0)y(4)(0) + 10y″(0)y″′(0)] = 0. However. which is an explicit method. xi+1]. 4. y(0) = 0. y′(0) = 0 + [y(0)]2 = 0.3. y(6) = 2[yy(5) + 5y′y(4) + 10y″y″′]. we get z x i +1 xi dy dx = dx z x i +1 xi f ( x . y″ = 2x + 2yy′. which we shall derive in the next section. Solution We have f(x. We have y(0) = 0. can always be used. 
Since the derivation of higher order derivatives is difficult.192 NUMERICAL METHODS Example 4.

Applying the mean value theorem of integral calculus to the right hand side of (4.35), we obtain

y(xi+1) – y(xi) = (xi+1 – xi) f(xi + θh, y(xi + θh)), 0 < θ < 1.

Since xi+1 – xi = h, we have

y(xi+1) = y(xi) + h f(xi + θh, y(xi + θh)).   (4.36)

Any value of θ ∈ [0, 1] produces a numerical method. We note that y′, and hence f(x, y), is the slope of the solution curve. In (4.35), the integrand on the right hand side is the slope of the solution curve, which changes continuously in [xi, xi+1]. If we approximate the continuously varying slope in [xi, xi+1] by a fixed slope, or by a linear combination of slopes at several points in [xi, xi+1], we obtain different methods.

Case 1 Let θ = 0. In this case, we are approximating the continuously varying slope in [xi, xi+1] by the fixed slope at xi. We obtain the method

yi+1 = yi + h f(xi, yi),

which is the Euler method. The method is of first order.

Case 2 Let θ = 1. In this case, we are approximating the continuously varying slope in [xi, xi+1] by the fixed slope at xi+1. We obtain the method

yi+1 = yi + h f(xi+1, yi+1),   (4.37)

which is an implicit method, as the nonlinear term f(xi+1, yi+1) occurs on the right hand side. This method is called the backward Euler method. The method is of first order.

The method can be made explicit by writing the approximation yi+1 = yi + h f(xi, yi) = yi + h fi on the right hand side of (4.37). Then, we have the explicit method

yi+1 = yi + h f(xi+1, yi + h fi).   (4.38)

Case 3 Let θ = 1/2. In this case, we are approximating the continuously varying slope in [xi, xi+1] by the fixed slope at xi+1/2. We obtain the method

yi+1 = yi + h f(xi + h/2, y(xi + h/2)).

However, xi + (h/2) is not a nodal point. If we approximate y(xi + (h/2)) on the right hand side by the Euler method with spacing h/2, that is, y(xi + h/2) ≈ yi + (h/2) f(xi, yi), we get the method

yi+1 = yi + h f(xi + h/2, yi + (h/2) f(xi, yi)).   (4.39)

The method is called a modified Euler method or mid-point method. The slope at the mid-point is replaced by an approximation to this slope.

Error of approximation The truncation error in the method is given by

T.E. = y(xi+1) – y(xi) – h f(xi + h/2, yi + (h/2) f(xi, yi))

= [y + h y′ + (h²/2) y″ + ...] – y – h[f + (h/2) fx + (h/2) f fy + (three terms of h²) + ...],

where all the terms are evaluated at (xi, yi). Using the expressions for y′ and y″, we obtain

T.E. = (terms of h³) + ...

The truncation error is of order O(h³). Therefore, the method is of second order.

Case 4 Let the continuously varying slope in [xi, xi+1] be approximated by the mean of the slopes at the points xi and xi+1. We obtain the method

yi+1 = yi + (h/2)[f(xi, yi) + f(xi+1, yi+1)] = yi + (h/2)[fi + fi+1],   (4.40)

where f(xi, yi) = fi and f(xi+1, yi+1) = fi+1. The method is an implicit method. It is also called the trapezium method. The method can be made explicit by writing the approximation yi+1 = yi + h f(xi, yi) = yi + h fi on the right hand side of (4.40). Then, we have the explicit method

yi+1 = yi + (h/2)[f(xi, yi) + f(xi+1, yi + h fi)].   (4.41)

The method is called Heun's method or the Euler-Cauchy method.

Error of approximation The truncation error in the method is given by

T.E. = y(xi+1) – y(xi) – (h/2)[f(xi, yi) + f(xi + h, yi + h f(xi, yi))]

= [y + h y′ + (h²/2) y″ + ...] – y – (h/2)[2f + h fx + h f fy + (three terms of h²) + ...],

where all the terms are evaluated at (xi, yi). Using the expressions for y′ and y″, we obtain

T.E. = (terms of h³) + ...

The truncation error is of order O(h³). Therefore, the method is of second order.
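The four cases above can be written as one-step update functions directly. The following Python sketch (the function names and the test problem y′ = y + x are illustrative choices, not from the text) implements one step of the Euler method, of the explicit variant (4.38) of the backward Euler method, of the modified Euler method (4.39) and of the Heun method (4.41).

```python
# One step of each method derived above: Euler, the explicit form (4.38) of the
# backward Euler method, modified Euler / mid-point (4.39), and Heun (4.41).

def euler_step(f, x, y, h):
    # y_{i+1} = y_i + h f(x_i, y_i)
    return y + h * f(x, y)

def backward_euler_explicit_step(f, x, y, h):
    # y_{i+1} = y_i + h f(x_{i+1}, y_i + h f_i)
    return y + h * f(x + h, y + h * f(x, y))

def modified_euler_step(f, x, y, h):
    # y_{i+1} = y_i + h f(x_i + h/2, y_i + (h/2) f_i)
    return y + h * f(x + h / 2, y + (h / 2) * f(x, y))

def heun_step(f, x, y, h):
    # y_{i+1} = y_i + (h/2) [f_i + f(x_{i+1}, y_i + h f_i)]
    fi = f(x, y)
    return y + (h / 2) * (fi + f(x + h, y + h * fi))

f = lambda x, y: y + x          # illustrative problem: y' = y + x, y(0) = 1
print(euler_step(f, 0.0, 1.0, 0.1))           # first order
print(modified_euler_step(f, 0.0, 1.0, 0.1))  # second order
print(heun_step(f, 0.0, 1.0, 0.1))            # second order
```

For this problem both second order steps give the value 1.11, while the Euler step gives 1.1.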

Example 4.6 Solve the following initial value problem using the modified Euler method and Heun's method with h = 0.1 for x ∈ [0, 0.3]:

y′ = y + x, y(0) = 1.

Compare with the exact solution y(x) = 2e^x – x – 1.

Solution (i) Modified Euler method is given by

yi+1 = yi + h f(xi + h/2, yi + (h/2) f(xi, yi)) = yi + 0.1 f(xi + 0.05, yi + 0.05 f(xi, yi)).

We have x0 = 0, y0 = 1, x1 = 0.1, x2 = 0.2.

y0′ = f0 = y0 + x0 = 1.0,
y(0.1) ≈ y1 = y0 + 0.1 f(x0 + 0.05, y0 + 0.05 f0) = 1 + 0.1 f(0.05, 1.05) = 1 + 0.1(1.1) = 1.11.

y1′ = f1 = y1 + x1 = 1.11 + 0.1 = 1.21,
y(0.2) ≈ y2 = y1 + 0.1 f(x1 + 0.05, y1 + 0.05 f1) = 1.11 + 0.1 f(0.15, 1.1705) = 1.11 + 0.1(1.3205) = 1.24205.

y2′ = f2 = y2 + x2 = 1.24205 + 0.2 = 1.44205,
y(0.3) ≈ y3 = y2 + 0.1 f(x2 + 0.05, y2 + 0.05 f2) = 1.24205 + 0.1 f(0.25, 1.31415) = 1.24205 + 0.1(1.56415) = 1.39846.

The errors in the solution are given in Table 4.3.

Table 4.3. Errors in the modified Euler method. Example 4.6.

Point   Numerical solution   Exact solution   Magnitude of error
0.1     1.11                 1.11034          0.00034
0.2     1.24205              1.24281          0.00076
0.3     1.39846              1.39972          0.00126

(ii) The Heun's method is given by

yi+1 = yi + (h/2)[f(xi, yi) + f(xi+1, yi + h fi)] = yi + 0.05[f(xi, yi) + f(xi+1, yi + 0.1 fi)].

Denote yi* = yi + 0.1 fi. Then, we write the method as yi+1 = yi + 0.05[fi + f(xi+1, yi*)].

We have x0 = 0, y0 = 1, f0 = 1.0, y0* = y0 + 0.1 f0 = 1 + 0.1(1) = 1.1,
y(0.1) ≈ y1 = y0 + 0.05[f0 + f(x1, y0*)] = 1 + 0.05[1.0 + 1.2] = 1.11.

f1 = y1 + x1 = 1.21, y1* = y1 + 0.1 f1 = 1.11 + 0.1(1.21) = 1.231,
y(0.2) ≈ y2 = y1 + 0.05[f1 + f(x2, y1*)] = 1.11 + 0.05[1.21 + 1.431] = 1.24205.

f2 = y2 + x2 = 1.44205, y2* = y2 + 0.1 f2 = 1.24205 + 0.1(1.44205) = 1.38626,
y(0.3) ≈ y3 = y2 + 0.05[f2 + f(x3, y2*)] = 1.24205 + 0.05[1.44205 + 1.68626] = 1.39847.

Note that the modified Euler method and the Heun's method have produced almost the same results in this problem. The errors in the Heun's solution are given below.

Errors in the Heun's method. Example 4.6.

Point   Numerical solution   Exact solution   Magnitude of error
0.1     1.11                 1.11034          0.00034
0.2     1.24205              1.24281          0.00076
0.3     1.39847              1.39972          0.00125

Example 4.7 For the following initial value problem, obtain approximations to y(0.2) and y(0.4), using the modified Euler method and the Heun's method with h = 0.2:

y′ = – 2xy², y(0) = 1.

Compare the numerical solutions with the exact solution y(x) = 1/(1 + x²).

Solution (i) Modified Euler method is given by

yi+1 = yi + h f(xi + h/2, yi + (h/2) f(xi, yi)) = yi + 0.2 f(xi + 0.1, yi + 0.1 f(xi, yi)).

We have x0 = 0, y0 = 1, f(x, y) = – 2xy².

y0′ = f0 = 0,
y(0.2) ≈ y1 = y0 + 0.2 f(0.1, y0 + 0.1 f0) = 1 + 0.2 f(0.1, 1.0) = 1 + 0.2(– 0.2) = 0.96.

y1′ = f1 = – 2x1 y1² = – 2(0.2)(0.96)² = – 0.36864, y1 + 0.1 f1 = 0.96 – 0.036864 = 0.92314,
y(0.4) ≈ y2 = y1 + 0.2 f(0.3, 0.92314) = 0.96 + 0.2(– 0.51131) = 0.85774.

(ii) The Heun's method is given by

yi+1 = yi + (h/2)[f(xi, yi) + f(xi+1, yi + h fi)] = yi + 0.1[f(xi, yi) + f(xi+1, yi + 0.2 fi)].

Denote yi* = yi + 0.2 fi. Then, we write the method as yi+1 = yi + 0.1[fi + f(xi+1, yi*)].

We have x0 = 0, y0 = 1, f0 = 0, y0* = y0 + 0.2 f0 = 1,
y(0.2) ≈ y1 = y0 + 0.1[f0 + f(x1, y0*)] = 1 + 0.1[0.0 + f(0.2, 1.0)] = 1 + 0.1(– 0.4) = 0.96.

f1 = – 2(0.2)(0.96)² = – 0.36864, y1* = y1 + 0.2 f1 = 0.96 + 0.2(– 0.36864) = 0.88627,
y(0.4) ≈ y2 = y1 + 0.1[f1 + f(x2, y1*)] = 0.96 + 0.1[– 0.36864 – 0.62838] = 0.86030.

The actual errors are given in the following Table 4.4.

Table 4.4. Errors in modified Euler and Heun's methods. Example 4.7.

                       Modified Euler method        Heun's method
x     Exact solution   Num. solution   | Error |    Num. solution   | Error |
0.2   0.96154          0.96            0.00154      0.96            0.00154
0.4   0.86207          0.85774         0.00433      0.86030         0.00177
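The computations of Example 4.7 can be checked with a short script. The following sketch (function names are illustrative choices) repeats the two steps of the modified Euler and Heun methods for y′ = – 2xy², y(0) = 1 with h = 0.2; a difference in the last digit against the worked values can arise because the text rounds intermediate results.

```python
# Two steps of the modified Euler and Heun methods for y' = -2xy^2, y(0) = 1,
# h = 0.2 (the problem of Example 4.7); exact solution y = 1/(1 + x^2).

def modified_euler_step(f, x, y, h):
    return y + h * f(x + h / 2, y + (h / 2) * f(x, y))

def heun_step(f, x, y, h):
    fi = f(x, y)
    return y + (h / 2) * (fi + f(x + h, y + h * fi))

f = lambda x, y: -2.0 * x * y * y
y_me = y_he = 1.0
x = 0.0
for _ in range(2):
    y_me = modified_euler_step(f, x, y_me, 0.2)
    y_he = heun_step(f, x, y_he, 0.2)
    x += 0.2
exact = 1.0 / (1.0 + x * x)
print(y_me, abs(exact - y_me))  # about 0.85774 and 0.00433
print(y_he, abs(exact - y_he))  # about 0.86029 and 0.00178
```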

REVIEW QUESTIONS

1. Define the truncation error of a single step method for the solution of the initial value problem y′ = f(x, y), y(x0) = y0.
Solution A single step method for the solution of the given initial value problem is given by yi+1 = yi + h φ(xi+1, xi, yi+1, yi, h). The exact solution y(xi) satisfies the equation y(xi+1) = y(xi) + h φ(xi+1, xi, y(xi+1), y(xi), h) + Ti+1, where Ti+1 is called the local truncation error or discretization error. Therefore, the truncation error (T.E.) is defined by Ti+1 = y(xi+1) – y(xi) – h φ(xi+1, xi, y(xi+1), y(xi), h).

2. Define the order of a numerical method for the solution of the initial value problem y′ = f(x, y), y(x0) = y0.
Solution Let Ti+1 define the truncation error of the numerical method. The order of a method is the largest integer p for which (1/h) | Ti+1 | = O(h^p).

3. Write the truncation error of the Euler's method.
Solution The truncation error of the Euler's method is T.E. = (h²/2!) y″(xi + θh), 0 < θ < 1.

4. Write the bound on the truncation error of the Euler's method.
Solution The bound on the truncation error of the Euler method (p = 1) is given by | T.E. | ≤ (h²/2) max | y″(x) |, x0 ≤ x ≤ b.

5. What is the disadvantage of the Taylor series method?
Solution Taylor series method requires the computation of higher order derivatives. The higher order derivatives are given by

y′ = f(x, y),

y″ = ∂f/∂x + (∂f/∂y)(dy/dx) = fx + f fy,

y″′ = fxx + 2 f fxy + f² fyy + fy(fx + f fy).

The number of partial derivatives to be computed increases as the order of the derivative of y increases. Therefore, we find that the computation of higher order derivatives is very difficult.


8. Obtain the numerical solution correct to two decimals for the initial value problem y′ = 3x + 4y, y(0) = 1, x ∈ [0, 0.2], using the Taylor series method with h = 0.1.
9. Obtain the numerical solution correct to two decimals for the initial value problem y′ = 3x + y², y(1) = 1, x ∈ [1, 1.2], using the Taylor series method with h = 0.1.

In the following problems, obtain the solution by Taylor series method.

10. Find y at x = 0.1 if y′ = x²y – 1, y(0) = 1. (A.U. Nov./Dec. 2004)
11. Find y(0.1), given that y′ = x + y, y(0) = 1. (A.U. Nov./Dec. 2004)
12. Find y(0.1) if y′ = x² + y², y(0) = 1. (A.U. April/May 2005)
13. Get the value of y at x = h, given y′ = x + y + xy, y(0) = 1. (A.U. April/May 2003)

Using the modified Euler method, solve the following initial value problems.

14. Find y(0.1), y(0.2), given the initial value problem y′ = y – x², y(0) = 1. (A.U. Nov./Dec. 2006)
15. Find y(0.2), given the initial value problem y′ = x + y, y(0) = 0. (A.U. Nov./Dec. 2006)

4.4 RUNGE-KUTTA METHODS

Integrating the differential equation y′ = f(x, y) in the interval [xi, xi+1], we get

∫ (dy/dx) dx = ∫ f(x, y) dx, that is, y(xi+1) = y(xi) + ∫ f(x, y) dx,   (4.42)

where the integrals are taken from xi to xi+1. We have noted in the previous section that y′, and hence f(x, y), is the slope of the solution curve, and that the integrand on the right hand side of (4.42) is the slope of the solution curve, which changes continuously in [xi, xi+1]. By approximating the continuously varying slope in [xi, xi+1] by a fixed slope, we have obtained the Euler, Heun's and modified Euler methods. The basic idea of Runge-Kutta methods is to approximate the integral by a weighted average of slopes at a number of points in [xi, xi+1]. In all the Runge-Kutta methods, we include the slope at the initial point x = xi, that is, the slope f(xi, yi). If we do not include the slope at xi+1, we obtain explicit Runge-Kutta methods; if we also include the slope at xi+1, we obtain implicit Runge-Kutta methods. For our discussion, we shall consider explicit Runge-Kutta methods only. Runge-Kutta methods must compare with the Taylor series method when they are expanded about the point x = xi.

Runge-Kutta method of second order Consider a Runge-Kutta method with two slopes. Define

k1 = h f(xi, yi), k2 = h f(xi + c2h, yi + a21 k1),
yi+1 = yi + w1 k1 + w2 k2,   (4.43)

where the values of the parameters c2, a21, w1, w2 are chosen such that the method is of highest possible order.

Taylor series expansion about x = xi gives

y(xi+1) = y(xi) + h y′(xi) + (h²/2!) y″(xi) + (h³/3!) y″′(xi) + ...

= y(xi) + h f(xi, y(xi)) + (h²/2)(fx + f fy)xi + (h³/6)[fxx + 2 f fxy + f² fyy + fy(fx + f fy)]xi + ...   (4.44)

We also have

k1 = h fi,
k2 = h f(xi + c2h, yi + a21 h fi) = h[fi + h(c2 fx + a21 f fy)xi + (h²/2)(c2² fxx + 2c2 a21 f fxy + a21² f² fyy)xi + ...].

Substituting the values of k1 and k2 in (4.43), we obtain

yi+1 = yi + (w1 + w2) h fi + h²(w2 c2 fx + w2 a21 f fy)xi + (h³/2) w2(c2² fxx + 2c2 a21 f fxy + a21² f² fyy)xi + ...   (4.45)

Comparing the coefficients of h and h² in (4.44) and (4.45), we obtain

w1 + w2 = 1, c2 w2 = 1/2, a21 w2 = 1/2.

Solving these equations, we obtain

a21 = c2, w2 = 1/(2c2), w1 = 1 – 1/(2c2),

where c2 is arbitrary. It is not possible to compare the coefficients of h³, as there are five terms in (4.44) and three terms in (4.45). Therefore, the Runge-Kutta methods using two slopes (two evaluations of f) are given by

yi+1 = yi + [1 – 1/(2c2)] k1 + [1/(2c2)] k2,   (4.46)

where k1 = h f(xi, yi), k2 = h f(xi + c2h, yi + c2 k1).

We note that the method has one arbitrary parameter c2. We may choose any value for c2 such that 0 < c2 ≤ 1. Therefore, we have an infinite family of these methods.

If we choose c2 = 1, we obtain the method

yi+1 = yi + (1/2)(k1 + k2),   (4.47)
k1 = h f(xi, yi), k2 = h f(xi + h, yi + k1),

which is the Heun's method or Euler-Cauchy method. Therefore, Heun's method derived in the previous section can be written in the formulation of a Runge-Kutta method.

If we choose c2 = 1/2, we get w1 = 0. The method is given by

yi+1 = yi + k2,   (4.48)
k1 = h f(xi, yi), k2 = h f(xi + h/2, yi + k1/2),

which is the modified Euler method. Therefore, the modified Euler method can also be written in the formulation of a Runge-Kutta method.

Error of the Runge-Kutta method Subtracting (4.45) from (4.44), we get the truncation error in the method as

T.E. = y(xi+1) – yi+1 = h³[(1/6 – c2/4){fxx + 2 f fxy + f² fyy} + (1/6) fy(fx + f fy)]xi + ...   (4.49)

Since the truncation error is of order O(h³), the method is of second order for all values of c2. We may note that for c2 = 2/3, the first term inside the bracket in (4.49) vanishes and we get a method of minimum truncation error. The method is given by

yi+1 = yi + (1/4)(k1 + 3k2),   (4.50)
k1 = h f(xi, yi), k2 = h f(xi + (2/3)h, yi + (2/3)k1).

Therefore, the method (4.50) is a second order method with minimum truncation error.

Runge-Kutta method of fourth order The most commonly used Runge-Kutta method is a method which uses four slopes. The method is given by

yi+1 = yi + (1/6)(k1 + 2k2 + 2k3 + k4),   (4.51)
k1 = h f(xi, yi),
k2 = h f(xi + h/2, yi + k1/2),
k3 = h f(xi + h/2, yi + k2/2),
k4 = h f(xi + h, yi + k3).
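The one-parameter family (4.46) can be written as a single routine. In the sketch below (names are illustrative, not from the text), c2 = 1 reproduces Heun's method, c2 = 1/2 the modified Euler method and c2 = 2/3 the minimum truncation error method (4.50).

```python
# The two-slope Runge-Kutta family (4.46): a21 = c2, w2 = 1/(2 c2), w1 = 1 - w2.

def rk2_step(f, x, y, h, c2):
    k1 = h * f(x, y)
    k2 = h * f(x + c2 * h, y + c2 * k1)
    w2 = 1.0 / (2.0 * c2)
    return y + (1.0 - w2) * k1 + w2 * k2

f = lambda x, y: y + x              # illustrative problem: y' = y + x, y(0) = 1
for c2 in (1.0, 0.5, 2.0 / 3.0):    # Heun, modified Euler, minimum-error (4.50)
    print(c2, rk2_step(f, 0.0, 1.0, 0.1, c2))
```

All three print 1.11 here: the members of the family differ only through second partial derivatives of f, which vanish for this linear right hand side.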

Remark 5 We would like to know why the Runge-Kutta method (4.51) is the most commonly used method. Using two slopes in the method, we have obtained methods of second order, which we have called second order Runge-Kutta methods. The method has one arbitrary parameter, whose value is suitably chosen. The methods using four evaluations of slopes have two arbitrary parameters. The values of these parameters are chosen such that the method becomes simple for computations. One such choice gives the method (4.51). All these methods are of fourth order, that is, the truncation error is of order O(h⁵). The method (4.51) is called the classical Runge-Kutta method of fourth order. If we use five slopes, we do not get a fifth order method, but only a fourth order method. It is due to this reason that the classical fourth order Runge-Kutta method is preferred for computations.

Remark 6 All the single step methods (Taylor series, Runge-Kutta methods etc.) are self starting. They do not require values of y and/or the values of the derivatives of y beyond the previous point.

Example 4.8 Solve the initial value problem y′ = – 2xy², y(0) = 1 with h = 0.2 on the interval [0, 0.4]. Use (i) the Heun's method (second order Runge-Kutta method); (ii) the fourth order classical Runge-Kutta method. Compare with the exact solution y(x) = 1/(1 + x²).

Solution (i) The solution using Heun's method is given in Example 4.7. The solutions are y(0.2) ≈ 0.96, y(0.4) ≈ 0.86030.

(ii) For i = 0, we have x0 = 0, y0 = 1.

k1 = h f(x0, y0) = – 2(0.2)(0)(1)² = 0,
k2 = h f(x0 + h/2, y0 + k1/2) = – 2(0.2)(0.1)(1)² = – 0.04,
k3 = h f(x0 + h/2, y0 + k2/2) = – 2(0.2)(0.1)(0.98)² = – 0.038416,
k4 = h f(x0 + h, y0 + k3) = – 2(0.2)(0.2)(0.961584)² = – 0.0739715,

y(0.2) ≈ y1 = y0 + (1/6)(k1 + 2k2 + 2k3 + k4) = 1.0 + (1/6)[0.0 – 0.08 – 0.076832 – 0.0739715] = 0.9615328.

For i = 1, we have x1 = 0.2, y1 = 0.9615328.

k1 = h f(x1, y1) = – 2(0.2)(0.2)(0.9615328)² = – 0.0739636,
k2 = h f(x1 + h/2, y1 + k1/2) = – 2(0.2)(0.3)(0.9245510)² = – 0.1025753,
k3 = h f(x1 + h/2, y1 + k2/2) = – 2(0.2)(0.3)(0.9102451)² = – 0.0994255,
k4 = h f(x1 + h, y1 + k3) = – 2(0.2)(0.4)(0.8621073)² = – 0.1189166,

y(0.4) ≈ y2 = y1 + (1/6)(k1 + 2k2 + 2k3 + k4) = 0.9615328 + (1/6)[– 0.0739636 – 0.2051506 – 0.1988510 – 0.1189166] = 0.8620525.

The absolute errors in the numerical solutions are given in Table 4.4.

Table 4.4. Absolute errors in Heun's method and fourth order Runge-Kutta method. Example 4.8.

                       Heun's method                Runge-Kutta method
x     Exact solution   Num. solution   | Error |    Num. solution   | Error |
0.2   0.9615385        0.96            0.0015385    0.9615328       0.0000057
0.4   0.8620690        0.86030        0.0017690    0.8620525       0.0000165
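The comparison in the table above can be reproduced with the following sketch (illustrative code, not from the text); working in full double precision, the Heun value differs from the tabulated 0.86030 in the last digit because the text rounds intermediate results.

```python
# Example 4.8 in code: y' = -2xy^2, y(0) = 1, h = 0.2 on [0, 0.4], solved by
# Heun's method and by the classical fourth order Runge-Kutta method (4.51).

def heun_step(f, x, y, h):
    fi = f(x, y)
    return y + (h / 2) * (fi + f(x + h, y + h * fi))

def rk4_step(f, x, y, h):
    k1 = h * f(x, y)
    k2 = h * f(x + h / 2, y + k1 / 2)
    k3 = h * f(x + h / 2, y + k2 / 2)
    k4 = h * f(x + h, y + k3)
    return y + (k1 + 2 * k2 + 2 * k3 + k4) / 6

f = lambda x, y: -2.0 * x * y * y
x, h = 0.0, 0.2
y_he = y_rk = 1.0
for _ in range(2):
    y_he = heun_step(f, x, y_he, h)
    y_rk = rk4_step(f, x, y_rk, h)
    x += h
exact = 1.0 / (1.0 + x * x)
print(abs(exact - y_he))  # about 1.8e-3
print(abs(exact - y_rk))  # about 1.7e-5
```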

Example 4.9 Given y′ = x³ + y, y(0) = 2, compute y(0.2), y(0.4) and y(0.6) using the Runge-Kutta method of fourth order. (A.U. April/May 2004)

Solution We have f(x, y) = x³ + y, h = 0.2.

For i = 0, we have x0 = 0, y0 = 2.

k1 = h f(x0, y0) = 0.2 f(0, 2) = (0.2)(2) = 0.4,
k2 = h f(x0 + h/2, y0 + k1/2) = 0.2 f(0.1, 2.2) = (0.2)(2.201) = 0.4402,
k3 = h f(x0 + h/2, y0 + k2/2) = 0.2 f(0.1, 2.2201) = (0.2)(2.2211) = 0.44422,
k4 = h f(x0 + h, y0 + k3) = 0.2 f(0.2, 2.44422) = (0.2)(2.45222) = 0.490444,

y(0.2) ≈ y1 = y0 + (1/6)(k1 + 2k2 + 2k3 + k4) = 2.0 + (1/6)[0.4 + 2(0.4402) + 2(0.44422) + 0.490444] = 2.443214.

For i = 1, we have x1 = 0.2, y1 = 2.443214.

k1 = h f(x1, y1) = 0.2 f(0.2, 2.443214) = (0.2)(2.451214) = 0.490243,
k2 = h f(x1 + h/2, y1 + k1/2) = 0.2 f(0.3, 2.443214 + 0.245122) = (0.2)(2.715336) = 0.543067,
k3 = h f(x1 + h/2, y1 + k2/2) = 0.2 f(0.3, 2.443214 + 0.271534) = (0.2)(2.741748) = 0.548350,
k4 = h f(x1 + h, y1 + k3) = 0.2 f(0.4, 2.443214 + 0.548350) = (0.2)(3.055564) = 0.611113,

y(0.4) ≈ y2 = y1 + (1/6)(k1 + 2k2 + 2k3 + k4) = 2.443214 + (1/6)[0.490243 + 2(0.543067) + 2(0.548350) + 0.611113] = 2.990579.

For i = 2, we have x2 = 0.4, y2 = 2.990579.

k1 = h f(x2, y2) = 0.2 f(0.4, 2.990579) = (0.2)(3.054579) = 0.610916,
k2 = h f(x2 + h/2, y2 + k1/2) = 0.2 f(0.5, 2.990579 + 0.305458) = (0.2)(3.421037) = 0.684207,
k3 = h f(x2 + h/2, y2 + k2/2) = 0.2 f(0.5, 2.990579 + 0.342104) = (0.2)(3.457683) = 0.691537,
k4 = h f(x2 + h, y2 + k3) = 0.2 f(0.6, 2.990579 + 0.691537) = (0.2)(3.898116) = 0.779623,

y(0.6) ≈ y3 = y2 + (1/6)(k1 + 2k2 + 2k3 + k4) = 2.990579 + (1/6)[0.610916 + 2(0.684207) + 2(0.691537) + 0.779623] = 3.680917.
REVIEW QUESTIONS

1. Write the Heun's method for solving the first order initial value problems in the Runge-Kutta formulation.
Solution Heun's method can be written as follows:

yi+1 = yi + (1/2)(k1 + k2),
k1 = h f(xi, yi), k2 = h f(xi + h, yi + k1).

2. Write the modified Euler method for solving the first order initial value problems in the Runge-Kutta formulation.
Solution Modified Euler method can be written as follows:

yi+1 = yi + k2,
k1 = h f(xi, yi), k2 = h f(xi + h/2, yi + k1/2).

3. Why is the classical Runge-Kutta method of fourth order the most commonly used method for solving the first order initial value problems?
Solution Using two slopes in the method, we can obtain methods of second order, which are called the second order Runge-Kutta methods. Such a method has one arbitrary parameter, whose value is suitably chosen. The methods using four evaluations of slopes have two arbitrary parameters. All these methods are of fourth order, that is, the truncation error is of order O(h⁵). The values of these parameters are chosen such that the method becomes simple for computations. One such choice gives the classical Runge-Kutta method of fourth order. If we use five slopes, we do not get a fifth order method, but only a fourth order method. It is due to this reason that the classical fourth order Runge-Kutta method is preferred for computations.

EXERCISE 4.2

In the following problems, obtain the solution by the fourth order Runge-Kutta method.

1. Find y(0.4), y(0.6), given the initial value problem y′ = y – x² + 1, y(0) = 0.5. (A.U. April/May 2003)
2. Solve dy/dx = (y² – x²)/(y² + x²) with y(0) = 1 at x = 0.2. (A.U. April/May 2005, Nov./Dec. 2004)
3. Find y(0.1) and y(0.2) for the initial value problem y′ = x + y², y(0) = 1.

4. Find y(0.4) given that y′ = x + y², y(0) = 1.3456. Take h = 0.2.
5. Determine y(0.2) with h = 0.1, for the initial value problem y′ = x² + y², y(0) = 1.
6. Find an approximate value of y when x = 0.2 and x = 0.4 given that y′ = x + y, y(0) = 1, with h = 0.2. (A.U. May/June 2006, Nov./Dec. 2006)
7. Determine y(0.2), y(0.4) with h = 0.2, for the initial value problem y′ = x³ + 3y, y(0) = 1.
8. Solve dy/dx = (y – x²)/(y + x²), y(0) = 1 at x = 0.2 with h = 0.1.

4.5 SYSTEM OF FIRST ORDER INITIAL VALUE PROBLEMS

In section 4.1, we have discussed the reduction of a second order initial value problem to a system of first order initial value problems. For the sake of completeness, let us repeat this procedure.

Let the second order initial value problem be given as

a0(x) y″(x) + a1(x) y′(x) + a2(x) y(x) = r(x), y(x0) = b0, y′(x0) = b1.   (4.52)

Define u1 = y, u2 = y′. Then, we have the system

u1′ = y′ = u2, u1(x0) = b0,

u2′ = y″ = (1/a0(x))[r(x) – a1(x) y′(x) – a2(x) y(x)] = (1/a0(x))[r(x) – a1(x) u2 – a2(x) u1], u2(x0) = b1.

The system is given by

[u1, u2]ᵀ′ = [u2, f2(x, u1, u2)]ᵀ, [u1(x0), u2(x0)]ᵀ = [b0, b1]ᵀ,   (4.53)

where f2(x, u1, u2) = (1/a0(x))[r(x) – a1(x) u2 – a2(x) u1].

In general, we may have a system as

[y1, y2]ᵀ′ = [f1(x, y1, y2), f2(x, y1, y2)]ᵀ, [y1(x0), y2(x0)]ᵀ = [b0, b1]ᵀ.   (4.54)

In vector notation, denote y = [y1, y2]ᵀ, f = [f1, f2]ᵀ, b = [b0, b1]ᵀ.
208 Then, we can write the system as y′ = f(x, y), y(x0) = b.

NUMERICAL METHODS

(4.55)

Therefore, the methods derived for the solution of the first order initial value problem

dy = f(x, y), dx

y(x0) = y0

(4.56)

can be used to solve the system of equations (4.54) or (4.55), that is, the second order initial value problem (4.52), by writing the method in vector form. 4.5.1 Taylor Series Method In vector format, we write the Taylor series method (4.20) of order p as yi+1 = yi + hyi′ +

h2 h p ( p) y i ″ + ... + y p! i 2!

k− 1 k−1 k−1 k− 1

(4.57)

where

y ( k) i

Ld L OP = MM dx =M M N QP MM d N dx

( ) y1,ki (k y2, ) i

f1 ( xi , y1, i , y2, i ) f2 ( xi , y1, i , y2, i

OP PP . )P Q

(4.58)

**In component form, we obtain (y1)i+1 = (y1)i + h(y1′)i +
**

h2 h p ( p) ( y1″ ) i + ... + ( y1 ) i . p! 2 h2 h p ( p) ( y 2 ″ ) i + ... + ( y 2 )i . p! 2

(y2)i+1 = (y2)i + h(y2′)i +

(4.59)

Euler’s method for solving the system is given by ( y1)i+1 = ( y1)i + h( y1′)i = (y1)i + h f1(xi , (y1)i , (y2)i). ( y2)i+1 = ( y2)i + h( y2′)i = ( y2) i + h f2(xi , ( y1)i , (y2) i). 4.5.2 Runge-Kutta Fourth Order Method In vector format, we write the Runge-Kutta fourth order method (4.51) as yi+1 = yi + where (4.60) (4.61)

**k k k k k1 = k11 , k 2 = k12 , k 3 = k13 , k 4 = k14 21 22 23 24
**

kn1 = hfn (xi , (y1)i , (y2)i), n = 1, 2. kn2 = hfn xi +

L OP M Q N FG H

1 (k + 2k2 + 2k3 + k4), i = 0, 1, 2, … 6 1

(4.62) (4.63)

LM OP N Q

LM OP N Q

LM OP N Q

1 1 h , ( y1 ) i + k11 , ( y2 ) i + k21 , n = 1, 2. 2 2 2

IJ K

5. 2.5 – 0.2 f2 (0.2 (–3u0 + 2v0) = 0.3) = 0. (k4. k11 = hf1(x0.2 f1(0.1. Note that we have used the matrix notation for representing the column vectors k1. v0 = 0. kn4 = hfn (xi + h. (k3. (y2)i + k23).18. we write the method as (y1)i+1 = (y1)i + (y2)i+1 = (y2)i + 1 (k + 2k12 + 2k13 + k14). v0 + k21 2 2 2 IJ K IJ K = 0. l1).2 f1(0. then we can write the equations as ui+1 = ui + vi+1 = vi + 1 (k + 2k12 + 2k13 + k14).1. l4) for representing the column vectors k1. Some books use the notation (k1. u0 + k11 . 0.09) = 0. k22 = hf2 x0 + FG H FG H 1 1 h . v0) = 0. n = 1.146.5)) = –0.3)] = 0. 0.2(0 + 2(0. 0 + 0. 6 11 1 (k + 2k22 + 2k23 + k24).3)] = –0. u0 + k11 .1. v0 + k21 2 2 2 IJ K = 0. u0. 0. 0. 0.4.4].2 on the interval [0. k4. v0 + k22 2 2 2 = 0. k21 = hf2(x0. k2. k12 = hf1 x0 + FG H h 1 1 . k13 = hf1 x0 + 1 1 h . n = 1.2 (3u0 – 4v0) = 0. 2. 0. ( y2 ) i + k22 2 2 2 IJ K . v(0) = 0.2) = 0.06.03. 6 11 1 (k + 2k22 + 2k23 + k24). u0 = 0.41)] = 0.2. y2 = v. we have x0 = 0. k4.1) + 2(0.2 f1(0. u(0) = 0 v′ = 3u – 4v. u0 + k12 . Solution For i = 0.1.2[3(0.1) – 4(0. 6 21 If we denote y1 = u. v0) = 0. l2). l3).1.2(0 –4(0.5 – 0. k3. 0. .INITIAL VALUE PROBLEMS FOR ORDINARY DIFFERENTIAL EQUATIONS 209 kn3 = hfn xi + FG H h 1 1 . 0. (y1)i + k13.5. k3.2 f1(0.2[– 3(0.41) = 0.65) Example 4.10 Solve the initial value problem u′ = –3u + 2v.1.03.3) = 0. u0.64) (4.5)) = 0.1. In explicit form. 0. ( y1 ) i + k12 . (k2.2 [–3(0. 6 21 (4. k2. with h = 0.1.03) + 2(0. 0. using the Runge-Kutta fourth order method.

u1 + k12 .2217)] = 0.146) – 4(0.2.0437. u1 = 0. u1. 0.1001.03.1474.0116) = 0. v1) = 0. 0. v0 + k22 2 2 2 IJ K = 0.2.146. 6 1 (k + 2k22 + 2k23 + k24) 6 21 1 (– 0. v1 + k21 2 2 2 IJ K IJ K IJ K = 0.2593)] = – 0. u1 + k11 .0116. u0 + k13. k22 = hf2 x1 + 1 1 h . 0. k24 = hf2 (x0 + h.292 – 0.1001) – 4(0. v1 + k22 2 2 2 = 0.0664. 0.12 + 0.1856)] = – 0.36 – 0.2.0010.0283.1006) + 2(0.2 f1(0.3. u0 + k12 .2) ≈ v1 = v0 + = 0. k21 = hf2 (x1.1220.2593.31.3.31) = 0.19) = 0. k12 = hf1 x1 + FG H FG H FG H 1 1 h . k11 = hf1(x1.2.2593)] = 0. v1 + k21 2 2 2 = 0.2593 6 = 1.2 f1 (0. 0.2 (– 3u1 + 2v1) = 0. 0.1856) = 0. 0.2 f2 (0.62 – 0.2[3(0.2 f2 (0.2 f1(0.19)] = – 0. u0 + k13.146) + 2(0.1856) = 0.1220) + 2(0. 0.0644) = 0.5 + For i 1 (k + 2k12 + 2k13 + k14) 6 11 1 (0.1856)] = 0.0753.1220) – 4(0.2) ≈ u1 = u0 + =0+ v(0.41)] = – 0.4 – 0.2[– 3(0.2[3(0.41) = 0.2 + 0.2[– 3(0.19) = 0.2 (3u1 – 4v1) = 0. k13 = hf1 x1 + h 1 1 . 0. 0.1.2217) = 0.1001) + 2(0.1001.2 f1(0.03) – 4(0.2 [3(0.2 [–3(0. v1) = 0. 0.2 f2 (0.2 [– 3(0. we have x1 = 0. k14 = hf1 (x0 + h.146. 0. u(0.2 [3(0.5 – 0.19)] = – 0. v0 + k23) = 0. v0 + k23) = 0. u1.146. u1 + k11 . v0 = 0. .210 k23 = hf2 x0 + NUMERICAL METHODS FG H h 1 1 .1220. 0.1006.3. 0.

4) ≈ v2 = v1 + = 0. Solution Let y = u. (ii) Runge-Kutta method of fourth order.0016 ( 4 ) ui + ′′′ ui 6 24 = ui + 0.2 ui′ + 0. (i) Taylor series method of fourth order gives ui+1 = ui + hui′ + h2 h3 h 4 (4) ui′′ + ui u ′′′+ 2 6 24 i 0.0020 + 0. we obtain u′ = v. find the magnitudes of the errors. 6 Example 4. v(0) = 0.1138.1423)] = – 0. y(0) = 1.1423)] = – 0. 6 1 (k + 2k22 + 2k23 + k24) 6 21 1 (– 0. k24 = hf2 (x1 + h.1506 – 0. with step length h = 0.2 [3(0. 0. 0.2 f2(0.4).2593 + 1 (k + 2k12 + 2k13 + k14) 6 11 1 (0.1170.1284.1423) = 0. 0.11 Compute approximations to y(0. k14 = hf1(x1 + h. u(0) = 1.0201) = 0.2340 – 0. v1 + k22 2 2 2 IJ K = 0.1006) – 4(0.0437 + 0.1001 + v(0.4.1474 – 0.2[– 3(0.2217)] = – 0.4) and y′(0. v′ = cos t – 4y = cos t – 4u.2 f2 (0.1284) + 2(0.0368. v1 + k23) = 0. 0. u(0.1284.2 f1(0. v1 + k23) = 0.02 ui″ + vi+1 = vi + hvi + h2 h3 h4 (4) vi″ + vi + v ′′′ 2 6 24 i .0566 – 0.4.2217) = 0. 0.2. If exact solution is given by y(t) = (2 cos 2t + cos t)/3. Reducing the given second order equation to a system of first order equations.008 0.INITIAL VALUE PROBLEMS FOR ORDINARY DIFFERENTIAL EQUATIONS 211 k23 = hf2 x1 + FG H h 1 1 . 0.4) ≈ u2 = u1 + = 0. for the initial value problem y″ + 4y = cos t.1284) – 4(0.0201. u1 + k13.0368) = 0. u1 + k13.3.2[3(0. y′(0) = 0 using (i) Taylor series method of fourth order.1006.1423) = 0.1645. u1 + k12 .

0016 ( 4 ) v0 + ′″ v0 6 24 0.2 (– 3) + 0 + For i = 1: 0.782865. ″′ ″ ″′ ″ (4) (4) u0 = v0 = 11. u″ = v′.782865) + 0.02u0″ + = 1 + 0 + 0.940733.4) = v2 = v1 + 0.782865) = 10.2) – 4u1 = – 0. ″ ′ u″′ = v1″ = 2.2) = u1 = u0 + 0.2u1′ + 0. ′ ″ ″ u0′ = v0 = 0. ″ v1 = – sin (0.980067 – 4(– 2.2) – 4u1 = 0.782865.0016 ( 4 ) vi′″ + vi .008 0. ′″ u(0. 6 24 0.008 0.142663.2) – 4u1 = 0.0016 ( 4 ) u1 + ′′′ u1 6 24 = 0. ″ (4) v1 = sin (0. v0 = – 1 – 4u0 = – 1 + 12 = 11.0016 ( 4 ) u0 + ′″ u0 6 24 0.198669 – 4(2.212 = vi + 0.2u0′ + 0.0016 (11) = 0.142663) + (10.980067 – 4(0.151393.585333. u0 = v0 = 0. For i = 0: u0 = 1.142663. v1 = – 0.02(– 2.008 0. t0 = 0. ″′ ″ (4) u1 = v1 = 10.2 v1′ + 0.585333.940733) = – 2.2(– 0.940733 + 0. v′ = cos t – 4u. v0 = – 4u0 = 0. v′″ = – cos t – 4u″.008 (11) + 0 = – 0. u0 = v0′ = – 3. 6 u1 = 0.0016 (2.2) – 4u1 = – 0.2) = v1 = v0 + 0. v0 = 1 – 4u0 = 1 – 4 = – 3.771543.02u1″ + 0.142663) = – 8. v0 = 0.02 (– 3) + 0 + v(0.008 0. t1 = 0.2 vi′ + 0.008 0.371983.151393) = 0. u1′ = v1 = – 0.585333. ′″ ″ u(0.2.585333) = 2.0016 ( 4 ) v1 + ′″ v1 6 24 v(0.151393. u1 = v1′ = – 2.02vi″ + We have NUMERICAL METHODS 0. u″′ = v″.585333) + 0. v1 = – cos (0.198669 – 4(– 0.008 0.4) = u2 = u1 + 0. v(4) = sin t – 4u″′. v″ = – sin t – 4u′. u(4) = v″′.02 v1 + ″ .2v0′ + 0. v1′ = cos (0.940733 24 0.02v0″ + = 0 + 0. 6 24 u′ = v. v0 = – 4u0′ = 0.

we have t0 = 0.8) + cos (0.3004995) = 0. f2(u.03.0.06.2 (– 0.3004995) = – 0.4) + cos (0. v0 + k21 2 2 2 IJ K = 0. f1(u.3) = 0.2)] = – 0. u0 + k11. u0 + k11.2 (1 – 4) = – 0.000003.02(2.086076 | = 0. u0 = 1.142663) + The exact solutions are u(0.1) – 4] = – 0. 3 1 [4 sin (0.585333 | = 0.0.8) + sin (0.2 f1(0. u0.2 f1(0.3) = – 0. | v(0. 1. v0) = hf1(0. | u(0. 0) = 0. 6 24 1 [2 cos (0. v0 + k22 2 2 2 = 0.000115. 1. v0) = hf2(0. 1 + 0. 3 v(0. v0 = 0.2 f1(0. u0 + k12 . 0) = 0.2(– 2.940733 | = 0. v) = v.2) – v1 | = | – 0.151393) + (– 8.585448. 0.086281.1.0 – 0.2 (– 0.940730.086076.3) = 0.1. 1.4) – u2 | = | 0. k12 = hf1 t0 + F G H h 1 1 .771491.0016 (10. – 0.1.0 – 0.4)] = – 1. v0 + k21 2 2 2 IJ K IJ K = 0.3) = 0. k21 = hf2(t0.086281 + 1.2 f1(0. k13 = hf1 t0 + h 1 1 .2) = – u(0. 3 v(0.771491 – 0. u0.008 0.060100.0.4) = 1 [2 cos (0.2) – u1 | = | 0.0 – 0.2)] = 0.585448 + 0.940730 – 0. k11 = hf1(t0. v) = cos t – 4u. .2 [cos (0.1.600999.782865) + 0.000205.585333 + 0.2) = 0. | v(0.000052. 1.4) + sin (0.1.4) – v2 | = | – 1. 3 1 [4 sin (0.INITIAL VALUE PROBLEMS FOR ORDINARY DIFFERENTIAL EQUATIONS 213 = – 0.3004995) = 0. k22 = hf2 t0 + FG H FG H 1 1 h .771543 | = 0.97.4) = – The magnitudes of errors in the solutions are | u(0.4)] = 0. 1. (ii) For i = 0.6. – 0. 0.2 f2 (0. – 0. 0.371983) = – 1.

2 (– 0.2) ≈ u1 = u0 + = 1. k12 = hf1 t1 + FG H h 1 1 . – 0. – 0.3004995) = 0. u0 + k13.863604) = – 0. u1. – 0.576999) – 0.214 k23 = hf2 t0 + NUMERICAL METHODS FG H h 1 1 . u1.2.882202.939900.1) – 4(0.2. 0.2 f1(0.172721.0 + 2(– 0.576999) = 0.1.0 + 1 (k + 2k12 + 2k13 + k14) 6 11 1 [0. v1 + k21 2 2 2 IJ K = 0. v0 + k22 2 2 2 IJ K = 0. v1 = – 0.2 f1(0. u1 + k11 . u0 + k12 .2 [cos (0. 0. k14 = hf1(t0 + h.514694. v0 + k23) = 0. 1. u1 = 0.0 + = – 0.3) – 4(0.6 + 2(– 0.2 f1 (0.117063. – 0.940733)] = – 0.2 [cos (0. u0 + k13. – 0. k24 = hf2(t0 + h. v1 + k21 2 2 2 IJ K = 0.576999.576999) = – 0.940733.585317) = 0.939900. For i = 1. u(0.555907. u1 + k11.940733.2.2) ≈ v1 = v0 + = 0.2) – 4(0.2 f1 (0.556573. – 0.585317] = – 0.863604) = 0. 0. k11 = hf1 (t1.585317. v1) = 0.2.940733.2[– 0.060100. – 0. v1) = h f2(0.882202)] = – 0. v0 + k23) = 0.2 [cos (0. – 0.97)] = – 0.115400] = 0. k22 = h f2 t1 + FG H 1 1 h .576999) = 0.585317) = 0.939900)] = – 0.2) – 4(0.882202.2 [cos (0.3.0 – 0.06) + 2(– 0.585317.555907] 6 v(0.576999) = 0.2. 0. we have t1 = 0.600999) + 2(– 0.2 (– 0. .2. 0. k21 = hf2(t1. 0. 0.3.2 f2 (0.060100) – 0.2 f2 (0.2 f2(0.940733. 6 1 (k + 2k22 + 2k23 + k24) 6 21 1 [– 0.863604) = 0.115400.97.

k13 = h f1(t1 + h/2, u1 + k12/2, v1 + k22/2) = 0.2 f1(0.3, 0.854372, – 0.842664) = – 0.168533,
k23 = h f2(t1 + h/2, u1 + k12/2, v1 + k22/2) = 0.2 f2(0.3, 0.854372, – 0.842664) = 0.2[cos (0.3) – 4(0.854372)] = – 0.492430,
k14 = h f1(t1 + h, u1 + k13, v1 + k23) = 0.2 f1(0.4, 0.772200, – 1.077747) = – 0.215549,
k24 = h f2(t1 + h, u1 + k13, v1 + k23) = 0.2 f2(0.4, 0.772200, – 1.077747) = 0.2[cos (0.4) – 4(0.772200)] = – 0.433548.

u(0.4) ≈ u2 = u1 + (1/6)(k11 + 2k12 + 2k13 + k14)
       = 0.940733 + (1/6)[– 0.117063 + 2(– 0.172721) + 2(– 0.168533) + (– 0.215549)] = 0.771546,
v(0.4) ≈ v2 = v1 + (1/6)(k21 + 2k22 + 2k23 + k24)
       = – 0.585317 + (1/6)[– 0.556573 + 2(– 0.514694) + 2(– 0.492430) + (– 0.433548)] = – 1.086045.

The magnitudes of the errors in the solutions are

| u(0.2) – u1 | = | 0.940730 – 0.940733 | = 0.000003,   | v(0.2) – v1 | = | – 0.585448 + 0.585317 | = 0.000131,
| u(0.4) – u2 | = | 0.771491 – 0.771546 | = 0.000055,   | v(0.4) – v2 | = | – 1.086281 + 1.086045 | = 0.000236.

EXERCISE 4.3

Reduce the following second order initial value problems to systems of first order initial value problems:

1. y″ + 3y′ + 2y = e2t, with y(0) = 1 and y′(0) = 1.

2. y″ – 2y′ + 2y = e2t sin t, with y(0) = – 0.4 and y′(0) = – 0.6.

Solve the following second order initial value problems by Taylor series method:

3. y″ + 3y′ + 2y = et, with y(0) = 1 and y′(0) = 1. Find y(0.1) with h = 0.1.

4. y″ – 6y′ + 5y = et, with y(0) = 0 and y′(0) = – 1. Find y(0.1) with h = 0.1. (A.U. April/May 2003)

5. y″ – 2y′ + 2y = e2t sin t, with y(0) = – 0.4 and y′(0) = – 0.6. Find y(0.2) with h = 0.2. (A.U. April/May 2005)

In the following second order initial value problems, obtain the solution by the fourth order Runge-Kutta method:

6. y″ + 3y′ + 2y = et, with y(0) = 0 and y′(0) = 1. Find y(0.2) with h = 0.1.

7. y″ – 4y′ + 4y = e2t, with y(0) = 0.5 and y′(0) = 1. Find y(0.2) with h = 0.1.

8. y″ – 6y′ + 5y = sin 2t, with y(0) = 0.4 and y′(0) = – 0.2. Find y(0.2) with h = 0.1.

9. y″ – 2y′ + y = tet, with y(0) = – 0.5 and y′(0) = 0. Find y(0.2) with h = 0.1.

10. y″ + 2y′ + y = tet, with y(0) = 0 and y′(0) = 1. Find y(0.2) with h = 0.1.

11. Given y″ + y′ + y = 0, y(0) = 1, y′(0) = 0, find the value of y(0.1) with h = 0.1.

12. What are the values of k1 and l1 in the Runge-Kutta method of fourth order, to solve y″ + xy′ + y = 0, y(0) = 1, y′(0) = 0? (A.U. Nov./Dec. 2006)

13. Consider the second order initial value problem y″ – 2y′ + 2y = e2t sin t, with y(0) = – 0.4 and y′(0) = – 0.6. Find y(0.2) with h = 0.2, obtaining the solution by the fourth order Runge-Kutta method. (A.U. April/May 2003)

4.6 MULTI STEP METHODS AND PREDICTOR-CORRECTOR METHODS

In section 4.1, we have defined the explicit and implicit multi step methods for the solution of the initial value problem y′ = f(x, y), y(x0) = b0.   (4.66)

A general k-step explicit method can be written as

yi+1 = yi + hφ(xi–k+1, ..., xi–1, xi, yi–k+1, ..., yi–1, yi, h)   (4.67)

and a general k-step implicit method can be written as

yi+1 = yi + hφ(xi–k+1, ..., xi, xi+1, yi–k+1, ..., yi, yi+1, h).   (4.68)

Remark 7 Multi step methods are not self starting, since a k-step multi step method requires the k previous values yi, yi–1, ..., yi–k+1. The k values that are required for starting the application of the method are obtained by using some single step method like Euler's method, Taylor series method or Runge-Kutta method, which is of the same or lower order than the order of the multi step method.
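Each problem in Exercise 4.3 reduces to a first order system by the substitution u = y, v = y′. For instance, y″ – 2y′ + 2y = e2t sin t becomes u′ = v, v′ = e2t sin t + 2v – 2u. A sketch of the reduction (the helper name is illustrative):

```python
import math

def reduce_second_order(g):
    """Given y'' = g(t, y, y'), return the right hand sides of the
    equivalent first order system u' = v, v' = g(t, u, v)."""
    f1 = lambda t, u, v: v
    f2 = lambda t, u, v: g(t, u, v)
    return f1, f2

# y'' - 2y' + 2y = e^{2t} sin t  =>  y'' = e^{2t} sin t + 2y' - 2y
g = lambda t, y, yp: math.exp(2*t) * math.sin(t) + 2*yp - 2*y
f1, f2 = reduce_second_order(g)

# initial conditions y(0) = -0.4, y'(0) = -0.6 give u0 = -0.4, v0 = -0.6
u0, v0 = -0.4, -0.6
```

The resulting pair (f1, f2) can then be fed to any single step or multi step method for systems, such as the Runge-Kutta step of the preceding example.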

Let us construct a few multi step methods. Integrating the differential equation y′ = f(x, y) in the interval [xi, xi+1], we get

∫[xi, xi+1] (dy/dx) dx = ∫[xi, xi+1] f(x, y) dx,  or  y(xi+1) = y(xi) + ∫[xi, xi+1] f(x, y) dx.   (4.69)

In general, integrating in [xi–m, xi+1], we get y(xi+1) = y(xi–m) + ∫[xi–m, xi+1] f(x, y) dx. For m = 0, we get (4.69). To derive the methods, we approximate the integrand f(x, y) by a suitable interpolation polynomial.

4.6.1 Predictor Methods (Adams-Bashforth Methods)

All predictor methods are explicit methods. We have the k data values (xi, fi), (xi–1, fi–1), ..., (xi–k+1, fi–k+1). For this data, we fit the Newton's backward difference interpolating polynomial of degree k – 1 as (see equation (2.47) in chapter 2)

Pk–1(x) = f(xi + sh) = f(xi) + s∇f(xi) + [s(s + 1)/2!]∇²f(xi) + ... + [s(s + 1)(s + 2) ... (s + k – 2)/(k – 1)!]∇^(k–1) f(xi)   (4.70)

where s = [(x – xi)/h] < 0. The expression for the error is given by

T.E. = [s(s + 1)(s + 2) ... (s + k – 1)/k!] h^k f^(k)(ξ)   (4.71)

where ξ lies in some interval containing the points xi, xi–1, ..., xi–k+1 and x. We replace f(x, y) by Pk–1(x) in (4.69). The limits of integration in (4.69) become: for x = xi, s = 0, and for x = xi+1, s = 1. Also, dx = h ds. We get

yi+1 = yi + h ∫[0, 1] [fi + s∇fi + (1/2)s(s + 1)∇²fi + (1/6)s(s + 1)(s + 2)∇³fi + ...] ds.

Now, ∫[0, 1] s ds = 1/2, ∫[0, 1] s(s + 1) ds = 5/6, ∫[0, 1] s(s + 1)(s + 2) ds = 9/4, ∫[0, 1] s(s + 1)(s + 2)(s + 3) ds = 251/30.

Hence,

yi+1 = yi + h [fi + (1/2)∇fi + (5/12)∇²fi + (3/8)∇³fi + (251/720)∇⁴fi + ...].   (4.72)

These methods are called Adams-Bashforth methods.

Remark 8 From (4.71), we obtain the error term as

Tk = h^(k+1) ∫[0, 1] [s(s + 1)(s + 2) ... (s + k – 1)/k!] f^(k)(ξ) ds = h^(k+1) ∫[0, 1] g(s) f^(k)(ξ) ds   (4.73)

where g(s) = (1/k!)[s(s + 1) ... (s + k – 1)]. Since g(s) does not change sign in [0, 1], we get by the mean value theorem

Tk = h^(k+1) f^(k)(ξ1) ∫[0, 1] g(s) ds,  0 < ξ1 < 1.   (4.74)

Alternately, we write the truncation error as T.E. = y(xn+1) – yn+1; using Taylor series, we expand y(xn+1) and yn+1 about xn and simplify. The leading term gives the order of the truncation error. From (4.74), we obtain that the truncation error is of order O(h^(k+1)). Therefore, a k-step Adams-Bashforth method is of order k.

By choosing different values for k, we get different methods.

k = 1: We get the method yi+1 = yi + hfi, which is the Euler's method. Using (4.74), we obtain the error term as

T1 = (h²/2) f′(ξ1) = (h²/2) y″(ξ1).   (4.75)

Therefore, the method is of first order.

k = 2: We get the method

yi+1 = yi + h [fi + (1/2)∇fi] = yi + h [fi + (1/2)(fi – fi–1)] = yi + (h/2)[3fi – fi–1].   (4.76)

For using the method, we require the starting values yi and yi–1.
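The coefficients 1/2, 5/12, 3/8, 251/720 in (4.72) can be checked exactly with rational arithmetic; a small sketch (polynomial integration done term by term, function names illustrative):

```python
from fractions import Fraction
from math import factorial

def poly_mul(p, q):
    """Multiply polynomials given as coefficient lists (p[k] is the s^k coefficient)."""
    r = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def integrate_01(p):
    """Integrate a polynomial over [0, 1]."""
    return sum(a / (k + 1) for k, a in enumerate(p))

def ab_coefficient(j):
    """Coefficient of the j-th backward difference in (4.72):
    (1/j!) * integral over [0, 1] of s(s+1)...(s+j-1)."""
    p = [Fraction(1)]
    for m in range(j):
        p = poly_mul(p, [Fraction(m), Fraction(1)])   # factor (s + m)
    return integrate_01(p) / factorial(j)

coeffs = [ab_coefficient(j) for j in range(5)]
# -> [1, 1/2, 5/12, 3/8, 251/720]
```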

Using (4.74), we obtain the error term as

T2 = (5/12) h³ f″(ξ2) = (5/12) h³ y″′(ξ2).

Therefore, the method is of second order.

k = 3: We get the method

yi+1 = yi + h [fi + (1/2)∇fi + (5/12)∇²fi]
     = yi + h [fi + (1/2)(fi – fi–1) + (5/12)(fi – 2fi–1 + fi–2)]
     = yi + (h/12)[23fi – 16fi–1 + 5fi–2].   (4.77)

For using the method, we require the starting values yi, yi–1 and yi–2. Using (4.74), we obtain the error term as

T3 = (3/8) h⁴ f″′(ξ3) = (3/8) h⁴ y(4)(ξ3).

Therefore, the method is of third order.

k = 4: We get the method

yi+1 = yi + h [fi + (1/2)∇fi + (5/12)∇²fi + (3/8)∇³fi]
     = yi + h [fi + (1/2)(fi – fi–1) + (5/12)(fi – 2fi–1 + fi–2) + (3/8)(fi – 3fi–1 + 3fi–2 – fi–3)]
     = yi + (h/24)[55fi – 59fi–1 + 37fi–2 – 9fi–3].   (4.78)

For using the method, we require the starting values yi, yi–1, yi–2 and yi–3. Using (4.74), we obtain the error term as

T4 = (251/720) h⁵ f(4)(ξ4) = (251/720) h⁵ y(5)(ξ4).

Therefore, the method is of fourth order.

Remark 9 The required starting values for the application of the Adams-Bashforth methods are obtained by using any single step method like Euler's method, Taylor series method or Runge-Kutta method.
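Because the truncation error of (4.78) contains f(4), the fourth order formula must reproduce the integral of any cubic f exactly over one step; the same holds for the fourth order Adams-Moulton corrector derived later in this section. A quick rational-arithmetic check (the nodes xi = 1, h = 1/2 are chosen arbitrarily):

```python
from fractions import Fraction

h, xi = Fraction(1, 2), Fraction(1)
f = lambda x: x**3                      # y' = f(x), a cubic

# exact increment of y over [xi, xi + h]
exact = ((xi + h)**4 - xi**4) / 4

# Adams-Bashforth step of fourth order, (4.78)
ab4 = (h / 24) * (55*f(xi) - 59*f(xi - h) + 37*f(xi - 2*h) - 9*f(xi - 3*h))

# the same check for the fourth order Adams-Moulton corrector,
# yi+1 = yi + (h/24)[9 fi+1 + 19 fi - 5 fi-1 + fi-2]
am4 = (h / 24) * (9*f(xi + h) + 19*f(xi) - 5*f(xi - h) + f(xi - 2*h))
```

Both formulas give the exact value 65/64, confirming that their errors vanish for cubic integrands.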

Example 4.12 Find the approximate value of y(0.3) using the Adams-Bashforth method of third order for the initial value problem y′ = x² + y², y(0) = 1, with h = 0.1. Calculate the starting values using the corresponding Taylor series method with the same step length.

Solution We have f(x, y) = x² + y², x0 = 0, y0 = 1. The Adams-Bashforth method of third order is given by

yi+1 = yi + (h/12)[23fi – 16fi–1 + 5fi–2].

We need the starting values y0, y1, y2. The third order Taylor series method is given by

yi+1 = yi + h yi′ + (h²/2) yi″ + (h³/6) yi″′.

We have y′ = x² + y², y″ = 2x + 2yy′, y″′ = 2 + 2[yy″ + (y′)²]. The initial condition gives y0 = 1. We obtain the following starting values.

i = 0: x0 = 0, y0 = 1, y0′ = 1, y0″ = 2, y0″′ = 8.

y(0.1) ≈ y1 = y0 + 0.1 y0′ + 0.005 y0″ + (0.001/6) y0″′ = 1 + 0.1(1) + 0.005(2) + (0.001/6)(8) = 1.111333.

i = 1: x1 = 0.1, y1 = 1.111333, y1′ = 1.245061, y1″ = 2.967355, y1″′ = 11.695793.

y(0.2) ≈ y2 = y1 + 0.1 y1′ + 0.005 y1″ + (0.001/6) y1″′
            = 1.111333 + 0.1(1.245061) + 0.005(2.967355) + (0.001/6)(11.695793) = 1.252625.

Now, we apply the given Adams-Bashforth method. For i = 2, we have x2 = 0.2, y2 = 1.252625, y2′ = f2 = 1.609069.

y(0.3) ≈ y3 = y2 + (h/12)[23f2 – 16f1 + 5f0]
            = 1.252625 + (0.1/12)[23(1.609069) – 16(1.245061) + 5(1)] = 1.436688.
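The computation of Example 4.12 — third order Taylor starting values followed by one Adams-Bashforth step — can be scripted directly (function names are illustrative):

```python
def f(x, y):                     # y' = x^2 + y^2
    return x*x + y*y

def taylor3_step(x, y, h):
    """Third order Taylor step using y'' = 2x + 2yy' and
    y''' = 2 + 2[y y'' + (y')^2]."""
    d1 = f(x, y)
    d2 = 2*x + 2*y*d1
    d3 = 2 + 2*(y*d2 + d1*d1)
    return y + h*d1 + (h**2/2)*d2 + (h**3/6)*d3

h, xs, ys = 0.1, [0.0], [1.0]
for _ in range(2):               # starting values y1, y2
    ys.append(taylor3_step(xs[-1], ys[-1], h))
    xs.append(xs[-1] + h)

# Adams-Bashforth method of third order, (4.77)
f0, f1, f2 = (f(x, y) for x, y in zip(xs, ys))
y3 = ys[2] + (h/12) * (23*f2 - 16*f1 + 5*f0)
# y3 approximates y(0.3); the text obtains 1.436688
```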

Example 4.13 Find the approximate value of y(0.4) using the Adams-Bashforth method of fourth order for the initial value problem y′ = x + y², y(0) = 1, with h = 0.1. Calculate the starting values using the Euler's method with the same step length.

Solution We have f(x, y) = x + y², x0 = 0, y0 = 1. The Adams-Bashforth method of fourth order is given by

yi+1 = yi + (h/24)[55fi – 59fi–1 + 37fi–2 – 9fi–3].

We need the starting values y0, y1, y2, y3. Euler's method is given by yi+1 = yi + h yi′ = yi + h fi. The initial condition gives y0 = 1. We obtain the following starting values.

i = 0: x0 = 0, y0 = 1, y0′ = f0 = 1. y(0.1) ≈ y1 = y0 + 0.1 y0′ = 1 + 0.1(1) = 1.1.

i = 1: x1 = 0.1, y1 = 1.1, y1′ = f(0.1, 1.1) = 1.31. y(0.2) ≈ y2 = y1 + 0.1 y1′ = 1.1 + 0.1(1.31) = 1.231.

i = 2: x2 = 0.2, y2 = 1.231, y2′ = f(0.2, 1.231) = 1.715361. y(0.3) ≈ y3 = y2 + 0.1 y2′ = 1.231 + 0.1(1.715361) = 1.402536.

Now, we apply the given Adams-Bashforth method. For i = 3, we have x3 = 0.3, y3 = 1.402536, y3′ = f3 = 2.267107.

y(0.4) ≈ y4 = y3 + (h/24)[55f3 – 59f2 + 37f1 – 9f0]
            = 1.402536 + (0.1/24)[55(2.267107) – 59(1.715361) + 37(1.31) – 9(1)] = 1.664847.

4.6.2 Corrector Methods

All corrector methods are implicit methods.

4.6.2.1 Adams-Moulton Methods

Consider the k + 1 data values, (xi+1, fi+1), (xi, fi), (xi–1, fi–1), ..., (xi–k+1, fi–k+1), which include the current data point. For this data, we fit the Newton's backward difference interpolating polynomial of degree k as (see equation (2.47) in chapter 2)

Pk(x) = f(xi + sh) = f(xi+1) + (s – 1)∇f(xi+1) + [(s – 1)s/2!]∇²f(xi+1) + ... + [(s – 1)s(s + 1) ... (s + k – 2)/k!]∇^k f(xi+1)   (4.79)

where s = [(x – xi)/h]. The expression for the error is given by

T.E. = [(s – 1)s(s + 1)(s + 2) ... (s + k – 1)/(k + 1)!] h^(k+1) f^(k+1)(ξ)   (4.80)

where ξ lies in some interval containing the points xi+1, xi, ..., xn–k+1 and x. We replace f(x, y) by Pk(x) in (4.69). The limits of integration in (4.69) become: for x = xi, s = 0, and for x = xi+1, s = 1. Also, dx = h ds. We get

yi+1 = yi + h ∫[0, 1] [fi+1 + (s – 1)∇fi+1 + (1/2)(s – 1)s∇²fi+1 + (1/6)(s – 1)s(s + 1)∇³fi+1 + ...] ds.

Now, ∫[0, 1] (s – 1) ds = – 1/2, ∫[0, 1] (s – 1)s ds = – 1/6, ∫[0, 1] (s – 1)s(s + 1) ds = – 1/4, ∫[0, 1] (s – 1)s(s + 1)(s + 2) ds = – 19/30.

Hence,

yi+1 = yi + h [fi+1 – (1/2)∇fi+1 – (1/12)∇²fi+1 – (1/24)∇³fi+1 – (19/720)∇⁴fi+1 – ...].   (4.81)

These methods are called Adams-Moulton methods. Using (4.80), we obtain the error term as

Tk = h^(k+2) ∫[0, 1] [(s – 1)s(s + 1)(s + 2) ... (s + k – 1)/(k + 1)!] f^(k+1)(ξ) ds = h^(k+2) ∫[0, 1] g(s) f^(k+1)(ξ) ds   (4.82)

where g(s) = [(s – 1)s(s + 1) ... (s + k – 1)]/(k + 1)!. Since g(s) does not change sign in [0, 1], we get by the mean value theorem

Tk = h^(k+2) f^(k+1)(ξ1) ∫[0, 1] g(s) ds,  0 < ξ1 < 1.   (4.83)

Remark 10 From (4.83), we obtain that the truncation error is of order O(h^(k+2)). Therefore, a k-step Adams-Moulton method is of order k + 1.

By choosing different values for k, we get different methods.

k = 0: We get the method yi+1 = yi + h fi+1, which is the backward Euler's method.   (4.84)

Using (4.83), we obtain the error term as T1 = – (h²/2) f′(ξ1) = – (h²/2) y″(ξ1). Therefore, the method is of first order.

k = 1: We get the method

yi+1 = yi + h [fi+1 – (1/2)∇fi+1] = yi + h [fi+1 – (1/2)(fi+1 – fi)] = yi + (h/2)[fi+1 + fi].   (4.85)

This is also a single step method and we do not require any starting values. This method is also called the trapezium method. Using (4.83), we obtain the error term as T2 = – (1/12) h³ f″(ξ2) = – (1/12) h³ y″′(ξ2). Therefore, the method is of second order.

k = 2: We get the method

yi+1 = yi + h [fi+1 – (1/2)∇fi+1 – (1/12)∇²fi+1]
     = yi + h [fi+1 – (1/2)(fi+1 – fi) – (1/12)(fi+1 – 2fi + fi–1)]
     = yi + (h/12)[5fi+1 + 8fi – fi–1].   (4.86)

For using the method, we require the starting values yi and yi–1. Using (4.83), we obtain the error term as T3 = – (1/24) h⁴ f″′(ξ3) = – (1/24) h⁴ y(4)(ξ3). Therefore, the method is of third order.

k = 3: We get the method

yi+1 = yi + h [fi+1 – (1/2)∇fi+1 – (1/12)∇²fi+1 – (1/24)∇³fi+1]
     = yi + h [fi+1 – (1/2)(fi+1 – fi) – (1/12)(fi+1 – 2fi + fi–1) – (1/24)(fi+1 – 3fi + 3fi–1 – fi–2)]
     = yi + (h/24)[9fi+1 + 19fi – 5fi–1 + fi–2].   (4.87)

For using the method, we require the starting values yi, yi–1 and yi–2. Using (4.83), we obtain the error term as T4 = – (19/720) h⁵ f(4)(ξ4) = – (19/720) h⁵ y(5)(ξ4). Therefore, the method is of fourth order.

4.6.2.2 Milne-Simpson Methods

To derive the Milne's methods, we integrate the differential equation y′ = f(x, y) in the interval [xi–1, xi+1]. We get

∫[xi–1, xi+1] (dy/dx) dx = ∫[xi–1, xi+1] f(x, y) dx,  or  y(xi+1) = y(xi–1) + ∫[xi–1, xi+1] f(x, y) dx.   (4.88)

To derive the methods, we use the same approximation, and follow the same procedure and steps as in Adams-Moulton methods. The interval of integration for s is [– 1, 1]. We obtain

yi+1 = yi–1 + h ∫[–1, 1] [fi+1 + (s – 1)∇fi+1 + (1/2)(s – 1)s∇²fi+1 + (1/6)(s – 1)s(s + 1)∇³fi+1 + ...] ds.

Now, we have

∫[–1, 1] (s – 1) ds = – 2,  ∫[–1, 1] (s – 1)s ds = 2/3,  ∫[–1, 1] (s – 1)s(s + 1) ds = 0,  ∫[–1, 1] (s – 1)s(s + 1)(s + 2) ds = – 4/15.

Hence,

yi+1 = yi–1 + h [2fi+1 – 2∇fi+1 + (1/3)∇²fi+1 + (0)∇³fi+1 – (1/90)∇⁴fi+1 – ...].   (4.89)

These methods are called Milne's methods. The case k = 2, that is, retaining terms up to ∇²fi+1, is of interest for us. We obtain the method as

yi+1 = yi–1 + h [2fi+1 – 2(fi+1 – fi) + (1/3)(fi+1 – 2fi + fi–1)] = yi–1 + (h/3)[fi+1 + 4fi + fi–1].   (4.90)

This method is also called the Milne-Simpson's method. For using the method, we require the starting values yi and yi–1. The error term is given by

Error = – (1/90) h⁵ f(4)(ξ) = – (1/90) h⁵ y(5)(ξ).

Therefore, the method is of fourth order.

Remark 11 The methods derived in this section are all implicit methods. Therefore, we need to solve a nonlinear algebraic equation for obtaining the solution at each nodal point, which is computationally very expensive. Hence, these methods are not used as such but in combination with the explicit methods. This would give rise to the explicit-implicit methods or predictor-corrector methods, which we describe in the next section.

4.6.3 Predictor-Corrector Methods

In the previous sections, we have derived explicit single step methods (Euler's method, Taylor series methods and Runge-Kutta methods), explicit multi step methods (Adams-Bashforth methods) and implicit methods (Adams-Moulton methods, Milne-Simpson methods) for the solution of the initial value problem y′ = f(x, y), y(x0) = y0. If we perform analysis for numerical stability of these methods (we shall discuss briefly this concept in the next section), we find that all explicit methods require very small step lengths to be used for convergence. If the solution of the problem is required over a large interval, we may need to use the method thousands or even millions of steps, which is computationally very expensive. Most implicit methods have strong stability properties, that is, we can use sufficiently large step lengths for computations and we can obtain convergence. However, we need to solve a nonlinear algebraic equation for the solution at each nodal point. This procedure may also be computationally expensive as convergence is to be obtained for the solution of the nonlinear equation at each nodal point. Therefore, we combine the explicit methods (which have weak stability properties) and implicit methods (which have strong stability properties) to obtain new methods. Such methods are called predictor-corrector methods or P-C methods.

Now, we define the predictor-corrector methods. We denote P for predictor and C for corrector.

P: Predict an approximation to the solution yi+1 at the current point, using an explicit method. Denote this approximation as y(p)i+1.

C: Correct the approximation y(p)i+1, using a corrector, that is, an implicit method. Denote this corrected value as y(c)i+1. The corrector is used 1 or 2 or 3 times, depending on the orders of explicit and implicit methods used.

Remark 12 The order of the predictor should be less than or equal to the order of the corrector. If the orders of the predictor and corrector are same, then we may require only one or two corrector iterations at each nodal point. For example, if the predictor and corrector are both of fourth order, then the combination (P-C method) is also of fourth order and we may require one or two corrector iterations at each point. If the order of the predictor is less than the order of the corrector, then we require more iterations of the corrector. For example, if we use a first order predictor and a second order corrector, then one application of the combination gives a result of first order. If the corrector is iterated once more, then the truncation error of the combination reduces, that is, the result is now of second order. If we iterate a third time, we may get a better result. Further iterations may not change the results.

We give below a few examples of the predictor-corrector methods.

Example 1 Predictor P: Euler method:

y(0)n+1 = yn + h f(xn, yn).   (4.91)

Error term = (h²/2) f′(ξ1) = (h²/2) y″(ξ1).

Corrector C: Backward Euler method (4.84):

y(1)n+1 = yn + h f(xn+1, y(0)n+1),  y(2)n+1 = yn + h f(xn+1, y(1)n+1), etc.   (4.92)

Error term = – (h²/2) f′(ξ1) = – (h²/2) y″(ξ1).

Both the predictor and corrector methods are of first order.

Example 2 Predictor P: Euler method:

y(p)n+1 = yn + h f(xn, yn).   (4.93)

Corrector C: Trapezium method (4.85):

y(c)n+1 = yn + (h/2)[f(xn, yn) + f(xn+1, y(p)n+1)].   (4.94)

Error term = – (1/12) h³ f″(ξ2) = – (1/12) h³ y″′(ξ2).

The predictor is of first order and the corrector is of second order. We compute

y(0)n+1 = yn + h f(xn, yn),
y(1)n+1 = yn + (h/2)[f(xn, yn) + f(xn+1, y(0)n+1)],
y(2)n+1 = yn + (h/2)[f(xn, yn) + f(xn+1, y(1)n+1)], etc.

Example 3 Adams-Bashforth-Moulton predictor-corrector method of fourth order.

Predictor P: Adams-Bashforth method of fourth order:

y(p)i+1 = yi + (h/24)[55fi – 59fi–1 + 37fi–2 – 9fi–3].   (4.95)

Error term = (251/720) h⁵ f(4)(ξ4) = (251/720) h⁵ y(5)(ξ4).

The method requires the starting values yi, yi–1, yi–2 and yi–3.

Corrector C: Adams-Moulton method of fourth order:

y(c)i+1 = yi + (h/24)[9 f(xi+1, y(p)i+1) + 19fi – 5fi–1 + fi–2].   (4.96)

Error term = – (19/720) h⁵ f(4)(ξ4) = – (19/720) h⁵ y(5)(ξ4).

The method requires the starting values yi, yi–1 and yi–2.

Both the predictor and corrector methods are of fourth order. The combination requires the starting values yi, yi–1, yi–2 and yi–3. That is, we require the values y0, y1, y2, y3. The initial condition gives the value y0. In the syllabus, this method is also referred to as Adams-Bashforth predictor-corrector method.

Example 4 Milne's predictor-corrector method.
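The effect of repeated corrector iterations in Example 2 can be seen numerically on the test equation y′ = – y, y(0) = 1 with h = 0.1 (a standard illustration, not from the text):

```python
import math

def f(x, y):
    return -y

x0, y0, h = 0.0, 1.0, 0.1

# P: Euler predictor, (4.93)
y_pred = y0 + h * f(x0, y0)                      # 0.9

# C: trapezium corrector, (4.94), iterated until it settles
y_corr = y_pred
for _ in range(10):
    y_corr = y0 + (h/2) * (f(x0, y0) + f(x0 + h, y_corr))

exact = math.exp(-h)                             # 0.904837...
```

The corrected value converges to (1 – h/2)/(1 + h/2) = 0.904762, far closer to the exact solution than the first order prediction 0.9, illustrating Remark 12.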

Predictor P: Adams-Bashforth method of fourth order:

y(p)i+1 = yi–3 + (4h/3)[2fi – fi–1 + 2fi–2].   (4.97)

Error term = (14/45) h⁵ f(4)(ξ) = (14/45) h⁵ y(5)(ξ).

The method requires the starting values yi, yi–1, yi–2 and yi–3.

Corrector C: Milne-Simpson's method of fourth order:

y(c)i+1 = yi–1 + (h/3)[f(xi+1, y(p)i+1) + 4fi + fi–1].   (4.98)

Error term = – (1/90) h⁵ f(4)(ξ) = – (1/90) h⁵ y(5)(ξ).

The method requires the starting values yi and yi–1.

The combination requires the starting values yi, yi–1, yi–2 and yi–3. That is, we require the values y0, y1, y2, y3. The initial condition gives the value y0.

Remark 13 Method (4.97) is obtained in the same way as we have derived the Adams-Bashforth methods. Integrating the given differential equation y′ = f(x, y) on the interval (xi–3, xi+1), we obtain

y(xi+1) = y(xi–3) + ∫[xi–3, xi+1] f(x, y) dx.   (4.99)

Replace the integrand on the right hand side by the same backward difference polynomial (4.70) and derive the method in the same way as we have done in deriving the explicit Adams-Bashforth methods. We obtain the method as

yi+1 = yi–3 + h [4fi – 4∇fi + (8/3)∇²fi + (0)∇³fi + (14/45)∇⁴fi + ...].

Retaining terms up to ∇³fi, we obtain the method

yi+1 = yi–3 + h [4fi – 4(fi – fi–1) + (8/3)(fi – 2fi–1 + fi–2)] = yi–3 + (4h/3)[2fi – fi–1 + 2fi–2].

The error term is given by T.E. = (14/45) h⁵ f(4)(ξ) = (14/45) h⁵ y(5)(ξ). Therefore, the method is of fourth order.

Example 4.14 Using the Adams-Bashforth predictor-corrector equations, evaluate y(1.4), if y satisfies

dy/dx + y/x = 1/x²

and y(1) = 1, y(1.1) = 0.996, y(1.2) = 0.986, y(1.3) = 0.972. (A.U. Nov./Dec. 2006)

Solution The Adams-Bashforth predictor-corrector method is given by

Predictor P: Adams-Bashforth method of fourth order:

y(p)i+1 = yi + (h/24)[55fi – 59fi–1 + 37fi–2 – 9fi–3].

Corrector C: Adams-Moulton method of fourth order:

y(c)i+1 = yi + (h/24)[9 f(xi+1, y(p)i+1) + 19fi – 5fi–1 + fi–2].

We have h = 0.1 and f(x, y) = (1/x²) – (y/x).

Predictor application

For i = 3, we obtain

y(0)4 = y(p)4 = y3 + (h/24)[55f3 – 59f2 + 37f1 – 9f0].

We have f0 = f(1, 1) = 1 – 1 = 0, f1 = f(1.1, 0.996) = – 0.079008, f2 = f(1.2, 0.986) = – 0.127222, f3 = f(1.3, 0.972) = – 0.155976.

y(0)4 = 0.972 + (0.1/24)[55(– 0.155976) – 59(– 0.127222) + 37(– 0.079008) – 9(0)] = 0.955351.

Corrector application

First iteration

y(1)4 = y(c)4 = y3 + (h/24)[9 f(x4, y(0)4) + 19f3 – 5f2 + f1].

Now, f(x4, y(0)4) = f(1.4, 0.955351) = – 0.172189.

y(1)4 = 0.972 + (0.1/24)[9(– 0.172189) + 19(– 0.155976) – 5(– 0.127222) + (– 0.079008)] = 0.955512.

Second iteration

f(x4, y(1)4) = f(1.4, 0.955512) = – 0.172307.

y(2)4 = 0.972 + (0.1/24)[9(– 0.172307) + 19(– 0.155976) – 5(– 0.127222) + (– 0.079008)] = 0.955516.

Now, | y(2)4 – y(1)4 | = | 0.955516 – 0.955512 | = 0.000004. The result is correct to five decimal places. Therefore, y(1.4) ≈ 0.955516.

Example 4.15 Given y′ = x³ + y, y(0) = 2, the values y(0.2) = 2.073, y(0.4) = 2.452, y(0.6) = 3.023 are got by Runge-Kutta method of fourth order. Find y(0.8) by Milne's predictor-corrector method taking h = 0.2. (A.U. April/May 2004)

Solution Milne's predictor-corrector method is given by

Predictor P: Adams-Bashforth method of fourth order:

y(p)i+1 = yi–3 + (4h/3)[2fi – fi–1 + 2fi–2].

Corrector C: Milne-Simpson's method of fourth order:

y(c)i+1 = yi–1 + (h/3)[f(xi+1, y(p)i+1) + 4fi + fi–1].

We are given that f(x, y) = x³ + y, x0 = 0, y0 = 2, y1 = 2.073, y2 = 2.452, y3 = 3.023.
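The predictor-corrector loop of Example 4.14 can be written out as code; iterating the Adams-Moulton corrector until two successive iterates agree to about five decimals reproduces y(1.4) (names are illustrative):

```python
def f(x, y):                      # from dy/dx + y/x = 1/x^2
    return 1.0/(x*x) - y/x

h  = 0.1
xs = [1.0, 1.1, 1.2, 1.3]
ys = [1.0, 0.996, 0.986, 0.972]
fs = [f(x, y) for x, y in zip(xs, ys)]

# P: Adams-Bashforth method of fourth order, (4.95)
y_pred = ys[3] + (h/24) * (55*fs[3] - 59*fs[2] + 37*fs[1] - 9*fs[0])

# C: Adams-Moulton method of fourth order, (4.96), iterated until two
# successive iterates agree to about five decimal places
x4, y4 = 1.4, y_pred
while True:
    y_new = ys[3] + (h/24) * (9*f(x4, y4) + 19*fs[3] - 5*fs[2] + fs[1])
    done = abs(y_new - y4) < 5e-6
    y4 = y_new
    if done:
        break
```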

Predictor application

For i = 3, we obtain

y(0)4 = y(p)4 = y0 + (4(0.2)/3)[2f3 – f2 + 2f1].

We have f1 = f(0.2, 2.073) = 2.081, f2 = f(0.4, 2.452) = 2.516, f3 = f(0.6, 3.023) = 3.239.

y(0)4 = 2 + (0.8/3)[2(3.239) – 2.516 + 2(2.081)] = 2 + 2.1664 = 4.1664.

Corrector application

First iteration

y(1)4 = y2 + (0.2/3)[f(x4, y(0)4) + 4f3 + f2].

Now, f(x4, y(0)4) = f(0.8, 4.1664) = 0.512 + 4.1664 = 4.6784.

y(1)4 = 2.452 + (0.2/3)[4.6784 + 4(3.239) + 2.516] = 3.79536.

Second iteration

f(x4, y(1)4) = f(0.8, 3.79536) = 4.30736.

y(2)4 = 2.452 + (0.2/3)[4.30736 + 4(3.239) + 2.516] = 3.770624.

We have | y(2)4 – y(1)4 | = | 3.770624 – 3.79536 | = 0.024736. The result is accurate to one decimal place.

Third iteration

f(x4, y(2)4) = f(0.8, 3.770624) = 4.282624.

y(3)4 = 2.452 + (0.2/3)[4.282624 + 4(3.239) + 2.516] = 3.768975.

We have | y(3)4 – y(2)4 | = | 3.768975 – 3.770624 | = 0.001649. The result is accurate to two decimal places.

Fourth iteration

f(x4, y(3)4) = f(0.8, 3.768975) = 4.280975.

y(4)4 = 2.452 + (0.2/3)[4.280975 + 4(3.239) + 2.516] = 3.768865.

We have | y(4)4 – y(3)4 | = | 3.768865 – 3.768975 | = 0.000110. The result is accurate to three decimal places. The required result can be taken as y(0.8) ≈ 3.7689.

Example 4.16 Using Milne's predictor-corrector method, find y(0.4) for the initial value problem y′ = x² + y², y(0) = 1, with h = 0.1. Calculate all the required initial values by Euler's method. The result is to be accurate to three decimal places.

Solution Milne's predictor-corrector method is given by

Predictor P: Adams-Bashforth method of fourth order:

y(p)i+1 = yi–3 + (4h/3)[2fi – fi–1 + 2fi–2].

Corrector C: Milne-Simpson's method of fourth order:

y(c)i+1 = yi–1 + (h/3)[f(xi+1, y(p)i+1) + 4fi + fi–1].

We are given that f(x, y) = x² + y², x0 = 0, y0 = 1. Euler's method gives

yi+1 = yi + h f(xi, yi) = yi + 0.1(xi² + yi²).

With x0 = 0, y0 = 1, we get

y1 = y0 + 0.1(x0² + y0²) = 1.0 + 0.1(0 + 1.0) = 1.1,
y2 = y1 + 0.1(x1² + y1²) = 1.1 + 0.1[0.01 + 1.21] = 1.222,
y3 = y2 + 0.1(x2² + y2²) = 1.222 + 0.1[0.04 + (1.222)²] = 1.375328.

Predictor application

For i = 3, we obtain

y(0)4 = y(p)4 = y0 + (4(0.1)/3)[2f3 – f2 + 2f1].

We have f1 = f(0.1, 1.1) = 1.22, f2 = f(0.2, 1.222) = 1.533284, f3 = f(0.3, 1.375328) = 1.981527.

y(0)4 = 1.0 + (0.4/3)[2(1.981527) – 1.533284 + 2(1.22)] = 1.649303.

Corrector application

First iteration

y(1)4 = y2 + (0.1/3)[f(x4, y(0)4) + 4f3 + f2].

Now, f(x4, y(0)4) = f(0.4, 1.649303) = 2.880200.

y(1)4 = 1.222 + (0.1/3)[2.880200 + 4(1.981527) + 1.533284] = 1.633320.

Second iteration

f(x4, y(1)4) = f(0.4, 1.633320) = 2.827734.

y(2)4 = 1.222 + (0.1/3)[2.827734 + 4(1.981527) + 1.533284] = 1.631571.

We have | y(2)4 – y(1)4 | = | 1.631571 – 1.633320 | = 0.001749. The result is accurate to two decimal places.

Third iteration

f(x4, y(2)4) = f(0.4, 1.631571) = 2.822024.

y(3)4 = 1.222 + (0.1/3)[2.822024 + 4(1.981527) + 1.533284] = 1.631381.

We have | y(3)4 – y(2)4 | = | 1.631381 – 1.631571 | = 0.000190. The result is accurate to three decimal places. The required result can be taken as y(0.4) ≈ 1.6314.
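Both Milne computations (Examples 4.15 and 4.16) follow the same pattern and can be driven by one routine; a sketch (helper names are illustrative):

```python
def milne_pc(f, h, xs, ys, tol=5e-4):
    """One Milne predictor-corrector step from four starting values
    ys = [y0, y1, y2, y3] at xs = [x0, x1, x2, x3]; the corrector is
    iterated until two successive iterates agree within tol."""
    f1, f2, f3 = f(xs[1], ys[1]), f(xs[2], ys[2]), f(xs[3], ys[3])
    x4 = xs[3] + h
    y4 = ys[0] + (4*h/3) * (2*f3 - f2 + 2*f1)          # predictor (4.97)
    while True:                                        # corrector (4.98)
        y_new = ys[2] + (h/3) * (f(x4, y4) + 4*f3 + f2)
        if abs(y_new - y4) < tol:
            return y_new
        y4 = y_new

# Example 4.15: y' = x^3 + y, starting values from fourth order Runge-Kutta
y08 = milne_pc(lambda x, y: x**3 + y, 0.2,
               [0.0, 0.2, 0.4, 0.6], [2.0, 2.073, 2.452, 3.023])

# Example 4.16: y' = x^2 + y^2, starting values from Euler's method
y04 = milne_pc(lambda x, y: x*x + y*y, 0.1,
               [0.0, 0.1, 0.2, 0.3], [1.0, 1.1, 1.222, 1.375328])
```

The corrector iteration converges here because (h/3)|∂f/∂y| is small; for stiffer problems a smaller h would be needed.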

REVIEW QUESTIONS

1. Are the multi step methods self starting?

Solution Multi step methods are not self starting, since a k-step multi step method requires the k previous values yi, yi–1, ..., yi–k+1. The k values that are required for starting the application of the method are obtained by using some single step method like Euler's method, Taylor series method or Runge-Kutta method, which is of the same or lower order than the order of the multi step method.

2. Why do we require predictor-corrector methods for solving the initial value problem y′ = f(x, y), y(x0) = y0?

Solution If we perform analysis for numerical stability of single or multi step methods, we find that all explicit methods require very small step lengths to be used for convergence. If the solution of the problem is required over a large interval, we may need to use the method thousands or even millions of steps, which is computationally very expensive. Most implicit methods have strong stability properties, that is, we can use sufficiently large step lengths for computations and we can obtain convergence. However, we need to solve a nonlinear algebraic equation for the solution at each nodal point. This procedure may also be computationally expensive as convergence is to be obtained for the solution of the nonlinear equation at each nodal point. Therefore, we combine the explicit methods (which have weak stability properties) and implicit methods (which have strong stability properties) to obtain new methods. Such methods are called predictor-corrector methods.

3. What are predictor-corrector methods for solving the initial value problem y′ = f(x, y), y(x0) = y0? Comment on the order of the methods used as predictors and correctors.

Solution We combine the explicit methods (which have weak stability properties) and implicit methods (which have strong stability properties) to obtain new methods. Such methods are called predictor-corrector methods. We denote P for predictor and C for corrector.

P: Predict an approximation to the solution yi+1 at the current point, using an explicit method. Denote this approximation as y(p)i+1.

C: Correct the approximation y(p)i+1, using a corrector, that is, an implicit method. Denote this corrected value as y(c)i+1. The corrector is used 1 or 2 or 3 times, depending on the orders of explicit and implicit methods used.

The order of the predictor should be less than or equal to the order of the corrector. If the orders of the predictor and corrector are same, then we may require only one or two corrector iterations at each nodal point. For example, if the predictor and corrector are both of fourth order, then the combination (P-C method) is also of fourth order and we may require one or two corrector iterations at each point. If the order of the predictor is less than the order of the corrector, then we require more iterations of the corrector. For example, if we use a first order predictor and a second order corrector, then one

INITIAL VALUE PROBLEMS FOR ORDINARY DIFFERENTIAL EQUATIONS 235

application of the combination gives a result of first order. If the corrector is iterated once more, then the order of the combination increases by one, that is, the result is now of second order. If we iterate a third time, then the truncation error of the combination reduces, that is, we may get a better result. Further iterations may not change the results.

4. Write the Milne's predictor-corrector method for solving the initial value problem y′ = f(x, y), y(x0) = b0. Comment on the order and the required starting values.
Solution Milne's predictor-corrector method is given by
Predictor P: Milne's method of fourth order
y(p)i+1 = yi–3 + (4h/3)[2fi – fi–1 + 2fi–2],
Error term = (14/45) h^5 f^(4)(ξ) = (14/45) h^5 y^(5)(ξ).
Corrector C: Milne-Simpson's method of fourth order
y(c)i+1 = yi–1 + (h/3)[f(xi+1, y(p)i+1) + 4fi + fi–1],
Error term = – (1/90) h^5 f^(4)(ξ) = – (1/90) h^5 y^(5)(ξ).
The method requires the starting values yi, yi–1, yi–2 and yi–3. That is, we require the values y0, y1, y2, y3. Initial condition gives the value y0.

5. Write the Adams-Bashforth predictor-corrector method for solving the initial value problem y′ = f(x, y), y(x0) = b0. Comment on the order and the required starting values.
Solution The Adams-Bashforth predictor-corrector method is given by
Predictor P: Adams-Bashforth method of fourth order
y(p)i+1 = yi + (h/24)[55fi – 59fi–1 + 37fi–2 – 9fi–3],
Error term = (251/720) h^5 f^(4)(ξ4) = (251/720) h^5 y^(5)(ξ4).
Corrector C: Adams-Moulton method of fourth order
y(c)i+1 = yi + (h/24)[9f(xi+1, y(p)i+1) + 19fi – 5fi–1 + fi–2],
Error term = – (19/720) h^5 f^(4)(ξ4) = – (19/720) h^5 y^(5)(ξ4).
The combination requires the starting values yi, yi–1, yi–2 and yi–3. That is, we require the values y0, y1, y2, y3. Initial condition gives the value y0.
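To make the mechanics of the predictor-corrector iteration concrete, here is a small illustrative Python sketch of Milne's method (the function names and the test problem y′ = y are our own, not from the text). The starting values y1, y2, y3 are generated by classical Runge-Kutta steps, since the multi step method is not self starting.

```python
import math

def rk4_step(f, x, y, h):
    # One classical fourth order Runge-Kutta step, used only to
    # generate the starting values y1, y2, y3.
    k1 = h * f(x, y)
    k2 = h * f(x + h / 2, y + k1 / 2)
    k3 = h * f(x + h / 2, y + k2 / 2)
    k4 = h * f(x + h, y + k3)
    return y + (k1 + 2 * k2 + 2 * k3 + k4) / 6

def milne_pc(f, x0, y0, h, n, corrector_iters=2):
    """Milne predictor with Milne-Simpson corrector on n steps.
    Returns the list of solution values y0, y1, ..., yn."""
    xs = [x0 + i * h for i in range(n + 1)]
    ys = [y0]
    for i in range(3):                       # starting values
        ys.append(rk4_step(f, xs[i], ys[i], h))
    fs = [f(xs[i], ys[i]) for i in range(4)]
    for i in range(3, n):
        # Predictor: y(p)_{i+1} = y_{i-3} + (4h/3)(2f_i - f_{i-1} + 2f_{i-2})
        yc = ys[i - 3] + (4 * h / 3) * (2 * fs[i] - fs[i - 1] + 2 * fs[i - 2])
        # Corrector: y_{i+1} = y_{i-1} + (h/3)(f_{i+1} + 4f_i + f_{i-1}), iterated
        for _ in range(corrector_iters):
            yc = ys[i - 1] + (h / 3) * (f(xs[i + 1], yc) + 4 * fs[i] + fs[i - 1])
        ys.append(yc)
        fs.append(f(xs[i + 1], yc))
    return ys

ys = milne_pc(lambda x, y: y, 0.0, 1.0, 0.1, 10)
print(ys[-1])  # close to e = 2.71828..., since the exact solution is e^x
```

With h = 0.1 on y′ = y, y(0) = 1, the computed y(1) agrees with e to better than 10⁻⁴, consistent with a fourth order combination.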

6. What are the orders of the predictor and corrector in Milne's predictor-corrector method?
Solution Both the predictor and corrector are of fourth order, that is, the truncation error is of order O(h^5) in each case.

7. How many prior values are required to predict the next value in Milne's predictor-corrector method?
Solution The Milne's predictor-corrector method requires four starting values yi, yi–1, yi–2 and yi–3. That is, we require the values y0, y1, y2, y3. Initial condition gives the value y0.

8. What are the orders of the predictor and corrector in Adams-Bashforth-Moulton predictor-corrector method?
Solution Both the predictor and corrector are of fourth order, that is, the truncation error is of order O(h^5) in each case.

9. How many prior values are required to predict the next value in Adams-Bashforth-Moulton method?
Solution The Adams-Bashforth-Moulton predictor-corrector method requires four starting values yi, yi–1, yi–2 and yi–3. That is, we require the values y0, y1, y2, y3. Initial condition gives the value y0.

EXERCISE 4.4

1. Given that y″ + xy′ + y = 0, y(0) = 1, y′(0) = 0, obtain y for x = 0.1, 0.2 and 0.3 by Taylor series method and find the solution for y(0.4) by Milne's method. (A.U. Nov./Dec. 2003)
2. Solve y′ = x – y², 0 ≤ x ≤ 1, y(0) = 0, y(0.2) = 0.02, y(0.4) = 0.0795, y(0.6) = 0.1762 by Milne's method to find y(0.8). (A.U. Nov./Dec. 2003)
3. The differential equation y′ = y – x² is satisfied by y(0) = 1, y(0.2) = 1.12186, y(0.4) = 1.46820, y(0.6) = 1.7359. Compute the value of y(0.8) by Milne's predictor-corrector formula. (A.U. Nov./Dec. 2004)
4. Determine the value of y(0.4) using Milne's method given that y′ = xy + y², y(0) = 1. Use Taylor series method to get the values of y(0.1), y(0.2) and y(0.3). (A.U. April/May 2005)
5. Use Milne's method to find y(4.4) given that 5xy′ + y² – 2 = 0, y(4) = 1, y(4.1) = 1.0049, y(4.2) = 1.0097, y(4.3) = 1.0143. (A.U. Nov./Dec. 2006)
6. Using the Runge-Kutta method of order 4, find y for x = 0.1, 0.2 and 0.3 given that dy/dx = xy + y², y(0) = 1, and also find the solution at x = 0.4 using Milne's method. (A.U. Nov./Dec. 2003)

7. Given dy/dx = x²(1 + y), y(1) = 1, y(1.1) = 1.233, y(1.2) = 1.548, y(1.3) = 1.979, evaluate y(1.4) by Adams-Bashforth method. (A.U. April/May 2003)
8. Consider the initial value problem dy/dx = y – x² + 1, y(0) = 0.5. Using the Adams-Bashforth predictor-corrector method, find y(0.8). (A.U. May/June 2006)
9. Solve y′ = 1 – y with the initial condition x = 0, y = 0, using Euler's algorithm and tabulate the solutions at x = 0.1, 0.2, 0.3, 0.4. Using these results, find y(0.5) using the Adams-Bashforth predictor-corrector method. (A.U. Nov./Dec. 2005)
10. Solve the initial value problem y′ = (1 + x²)(y – 1), y(1) = 0, x ∈ [1, 1.4] with h = 0.1 using the Milne's predictor-corrector method. Perform two iterations of the corrector. Compute the starting values using the Taylor series method of the same order.
11. Compute the first 3 steps of the initial value problem y′ = (x – y)/2, y(0) = 1 by the Taylor series method and the next step by Milne's method with step length h = 0.1. (A.U. Nov./Dec. 2004)
12. Solve the initial value problem y′ = y – x², y(0) = 1, x ∈ [0, 0.4] with h = 0.1 using the Adams-Bashforth predictor-corrector method. Perform two iterations of the corrector. Compute the starting values using the Euler method with the same step size. Also, compare these solutions with the exact solution y(x) = x² + 2x + 2 – e^x.

4.7 STABILITY OF NUMERICAL METHODS

In any initial value problem, we require the solution for x > x0 and usually up to a point x = b. The step length h for application of any numerical method to the initial value problem must be properly chosen. The computations contain mainly two types of errors: truncation error and round-off error. Truncation error is in the hand of the user. It can be controlled by choosing higher order methods. Round-off errors are not in the hands of the user. They can grow and finally destroy the true solution. In such case, we say that the method is numerically unstable. This happens when the step length is chosen larger than the allowed limiting value.
The behaviour of the solution of the given initial value problem is studied by considering the linearized form of the differential equation y′ = f(x, y), y(x0) = y0. The linearized form of the initial value problem is given by
y′ = λy, λ < 0, y(x0) = y0.
The single step methods are applied to this differential equation to obtain the difference equation yi+1 = E(λh)yi, where E(λh) is called the amplification factor. All explicit methods have restrictions on the step length that can be used. Many implicit methods have no restriction on the step length that can be used; such methods are called unconditionally stable methods.
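The effect of the stability restriction can be demonstrated numerically. The following is an illustrative sketch (our own, not from the text), applying the Euler update yi+1 = (1 + λh)yi to the test problem y′ = λy with λ = – 100, once with a step length inside the stable range and once outside it.

```python
def euler_solve(lam, h, steps, y0=1.0):
    # Euler method applied to the test equation y' = lam * y:
    # y_{i+1} = (1 + lam*h) * y_i, so E(lam*h) = 1 + lam*h is the
    # amplification factor, and stability requires |E| < 1.
    y = y0
    for _ in range(steps):
        y = (1.0 + lam * h) * y
    return y

lam = -100.0  # stability requires -2 < lam*h < 0, i.e. h < 0.02
stable = euler_solve(lam, 0.015, 200)    # |1 - 1.5| = 0.5 < 1: decays
unstable = euler_solve(lam, 0.025, 200)  # |1 - 2.5| = 1.5 > 1: blows up
print(stable, unstable)
```

The true solution y = e^(–100x) decays to zero; with h = 0.015 the numerical solution also decays, while with h = 0.025 it grows without bound even though h was reduced only slightly past the limit h = 0.02.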

If | E(λh) | < 1, then all the errors (round-off and other errors) decay and the method gives convergent solutions. We say that the method is stable. This condition gives a bound on the step length h that can be used in the computations. We have the following conditions for stability of the single step methods that are considered in the previous sections.
1. Euler method: – 2 < λh < 0.
2. Backward Euler method: stable for all h, that is, – ∞ < λh < 0. (Unconditionally stable method.)
3. Runge-Kutta method of second order: – 2 < λh < 0.
4. Classical Runge-Kutta method of fourth order: – 2.78 < λh < 0.
The choice of the step length is very important and it is governed by the stability condition. For example, if we are solving the initial value problem y′ = – 100y, y(x0) = y0, by the Euler method, then the step length should satisfy the condition – 2 < λh < 0, or – 2 < – 100h < 0, that is, h < 0.02.
Similar stability analysis can be done for the multi step methods. We have the following stability intervals (condition on λh) for the multi step methods that are considered in the previous sections.

Order   Adams-Bashforth methods   Adams-Moulton methods
1       (– 2, 0)                  (– ∞, 0)
2       (– 1, 0)                  (– ∞, 0)
3       (– 6/11, 0)               (– 6, 0)
4       (– 3/10, 0)               (– 3, 0)

Thus, we conclude that a numerical method cannot be applied as we like to a given initial value problem. The predictor-corrector methods also have such strong stability conditions.

4.8 ANSWERS AND HINTS

Exercise 4.1
1. h = 0.2: 1.496; h = 0.1: 1.573481. Extrapolated value = 2(1.573481) – 1.496 = 1.650962.
2. h = 0.2: 3.137805; h = 0.1: 3.848948. Extrapolated value = 2(3.848948) – 3.137805 = 4.560091.
3.–6. 1.24205; 1.9095; 0.90975; 0.819; 0.833962; 0.834344; 4.22; 0.22 (h = 0.05); | Error | ≤ (h²/2) max | y″(x) | = (h/2) max | 1 – sin y | ≤ h, 0 ≤ x ≤ 0.2; | Error | ≤ h, 0 ≤ x ≤ 0.1.
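The multi step stability intervals listed above can also be checked experimentally. Here is a small illustrative sketch (our own, not from the text) using the second order Adams-Bashforth method, whose interval is (– 1, 0): the recurrence is applied to y′ = λy and the final magnitude is reported for a value of λh just inside and just outside the interval.

```python
def ab2_growth(q, steps=400):
    # Two-step Adams-Bashforth, y_{i+1} = y_i + (h/2)(3 f_i - f_{i-1}),
    # applied to the test equation y' = lam*y with q = lam*h.
    y_prev, y = 1.0, 1.0  # any bounded starting values will do here
    for _ in range(steps):
        y_prev, y = y, y + 0.5 * (3 * q * y - q * y_prev)
    return abs(y)

print(ab2_growth(-0.9))   # inside (-1, 0): the iterates decay
print(ab2_growth(-1.1))   # outside the interval: the iterates grow
```

At λh = – 0.9 both roots of the characteristic equation lie inside the unit circle and the solution decays; at λh = – 1.1 one root has magnitude greater than 1 and the numerical solution blows up.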

Use Taylor series method of fourth order 1. use Taylor series method of fourth order. y(0. Taylor series method: 1. 1. 0. 1.2) ≈ 1. 3. 6. 2.355.551614.110342. 2. 2. v′ = e2t – 2u – 3v. Set u = y. y(0. 13. y(0. l1 = – h.551614. u′ = v.0.46555.4) ≈ 1. u(0) = 1.623546. u′ = v. y(0.821846.013683.4) ≈ 3. u(0) = 1.129492. y(0.8) = 2.195440. y(0. 0. k1 = 0.525572. u′ = v. u′ = v. Taylor series method of fourth order was used. two iterations of the corrector in Milne’s method were performed. 6. y(0. 1. 9. 1. y(0.2) ≈ 1. 1. 11. 8. 9. y(0. u(0) = 0.5. u(0) = 0. u′ = v. u′ = v.305006.4. Use Taylor series method of fourth order 1.018737.2) ≈ – 0. u(0) = 1.5.304614.5.116492.504187. v(0) = 0.8.INITIAL VALUE PROBLEMS FOR ORDINARY DIFFERENTIAL EQUATIONS 239 7.2) ≈ 1. u(0) = 0.2) = 1. 2. Set u = y. y(0. v′ = e2t sin t – 2u + 2v. u(0) = – 0.133067.461735. .3 1. v′ = tet – u – 2v. y(0. Set u = y. y(0. Set u = y. 1. 4. 1. 1.6.855475.648922. 7. Set u = y. 3.0) = 0. v′ = e2t – 4u + 4v. y(0. u′ = v. y(0. y(0.508933. h = 0.3) = 1. Set u = y.855475. 4.325775. 14.8.1) ≈ 1.2) ≈ 1. u(0) = 0.4) = 1. u(0) = 1.2. u′ = v. u′ = v.66475. 7. Set u = y. 1. 1 + h + Exercise 4. v′ = et – 5u + 6v.133187.128321.196 3. 12.1) = 1.4. y(0. 5.4) = 1.900308.214076.775727. Set u = y. v(0) = 1. Exercise 4. 15.2428. 2.099383.2) ≈ 0.510921.6.1) ≈ – 0. v(0) = – 0. 0. v(0) = – 1.866204. v(0) = 1. y(0. y(4.1105. h = 0. Set u = y. y(1. y(0. u′ = v.8) = 0. 8.2 1. u′ = v. u(0) = 1. y(0.4) ≈ 3. 2. v′ = sin 2t – 5u + 6v. 3. v(0) = 1. v′ = et – 2u – 3v.1) ≈ – 0. v(0) = 0. y(0. v′ = et – 2u – 3v.355. v′ = – u – xv.110342. 1.4 In the following problems. Set u = y.1) ≈ 1. Heun’s method: 1. y(0. 3. v′ = tet – u – 2v.1: 1.1) ≈ 0. Exercise 4. v(0) = 0.253015. h2 + (h3/3) + (h4/12) 11.116887.839364.2) ≈ 1. u(0) = – 0.07267 5. v(0) = 0.2) ≈ 1.277391. In Problems 4–7. y(0. Set u = y.273563. 10. Two iterations of the corrector in AdamsBashforth predictor-corrector method were performed. 8. In Problems 10–13. u′ = v. y(0. v(0) = 1. 
Set u = y. u(0) = 0. v′ = e2t sin t – 2u + 2v.455551.2) ≈ 0. use Taylor series method of fourth order.2) ≈ 1. 10. 12. v(0) = – 0.995167.111463. v(0) = 1.510921. y(0. 2. v′ = – u – v.583636.242806. 4.

2) = 1. y(0. y(1.5) = 0.271. y(0.503843. y(0.4) = 1.214091. y(0.2. 7.1) = 0.19. 9.4) = 1.104829. y(0.882124.3) = 1.1) = 1.708222. y(1.802520.648947. y(0. y(0.468174.2) = 0.4) = 1.1) = – 0.953688.4) = 0. y(0.4652.4) = – 1. y(1.1) = 0.2) = 1.4) = 0.575142.3) = 0. y(0. y(1. y(0. y(0. 6. y(0.3) = 0.2) = 0. y(0.116838. y(0.2) = – 0. 12. y(0.839043. y(0.218596. y(1.1) = 1. 11.8) = 2. y(0.618784. y(0.240 NUMERICAL METHODS 5. y(0.3439.4) = 0.914512.2) = 0.1.2) = 0.900325. y(0.406293. 10. 8.34039.822709. y(0.3) = 0.6) = 1.483650.277276.4) = 2. y(0.1) = 0.127230. y(0.3) = – 0.3) = 1. .8293. y(0. y(0.856192.

5. Boundary Value Problems in Ordinary Differential Equations and Initial and Boundary Value Problems in Partial Differential Equations 241

5.1 INTRODUCTION

Boundary value problems are of great importance in science and engineering. In this chapter, we shall discuss the numerical solution of the following problems:
(a) Boundary value problems in ordinary differential equations.
(b) Boundary value problems governed by linear second order partial differential equations. We shall discuss the solution of the Laplace equation uxx + uyy = 0 and the Poisson equation uxx + uyy = G(x, y).
(c) Initial boundary value problems governed by linear second order partial differential equations. We shall discuss the solution of the heat equation ut = c²uxx and the wave equation utt = c²uxx under the given initial and boundary conditions.

5.2 BOUNDARY VALUE PROBLEMS GOVERNED BY SECOND ORDER ORDINARY DIFFERENTIAL EQUATIONS

A general second order ordinary differential equation is given by
y″ = f(x, y, y′), x ∈ [a, b].  (5.1)
Since the ordinary differential equation is of second order, we need to prescribe two suitable conditions to obtain a unique solution of the problem. If the conditions are prescribed at the end points x = a and x = b, then it is called a two-point boundary value problem. For our discussion in this chapter, we shall consider only the linear second order ordinary differential equation
a0(x) y″ + a1(x) y′ + a2(x) y = d(x), x ∈ [a, b],  (5.2)
or, in the form

y″ + p(x) y′ + q(x) y = r(x), x ∈ [a, b].  (5.3)
We shall consider the solution of Eq.(5.2) or Eq.(5.3). We shall assume that the solution of Eq.(5.3) exists and is unique. This implies that a0(x), a1(x), a2(x) and d(x), or p(x), q(x) and r(x), are continuous for all x ∈ [a, b].
The two conditions required to solve Eq.(5.2) or Eq.(5.3) can be prescribed in the following three ways:
(i) Boundary conditions of first kind. The dependent variable y(x) is prescribed at the end points x = a and x = b, that is,
y(a) = A, y(b) = B.  (5.4)
(ii) Boundary conditions of second kind. The normal derivative of y(x) (slope of the solution curve) is prescribed at the end points x = a and x = b, that is,
y′(a) = A, y′(b) = B.  (5.5)
(iii) Boundary conditions of third kind or mixed boundary conditions:
a0 y(a) – a1 y′(a) = A, b0 y(b) + b1 y′(b) = B,  (5.6)
where a0, a1, b0, b1, A and B are constants such that a0a1 ≥ 0, | a0 | + | a1 | ≠ 0, b0b1 ≥ 0, | b0 | + | b1 | ≠ 0, | a0 | + | b0 | ≠ 0.
We shall consider the solution of Eq.(5.2) or Eq.(5.3) under the boundary conditions of first kind only. That is, we shall consider the solution of the boundary value problem
y″ + p(x) y′ + q(x) y = r(x), x ∈ [a, b], y(a) = A, y(b) = B.  (5.7)
Finite difference method
Subdivide the interval [a, b] into n equal sub-intervals. The length of the sub-interval is called the step length. We denote the step length by ∆x or h. Therefore,
∆x = h = (b – a)/n, or b = a + nh.
The points a = x0, x1 = x0 + h, x2 = x0 + 2h, ..., xi = x0 + ih, ..., xn = x0 + nh = b, are called the nodes or nodal points or lattice points (Fig. 5.1).
Fig. 5.1 Nodes: a = x0, x1, x2, ..., xi–1, xi, xi+1, ..., xn = b, with uniform spacing h.
We denote the numerical solution at any point xi by yi and the exact solution by y(xi). In Chapter 3, we have derived the following approximations to the derivatives.
Approximation to y′(xi) at the point x = xi
(i) Forward difference approximation of first order or O(h) approximation:
y′(xi) ≈ [y(xi+1) – y(xi)]/h, or y′i = [yi+1 – yi]/h.  (5.8)

BOUNDARY VALUE PROBLEMS IN ORDINARY DIFFERENTIAL EQUATIONS 243

(ii) Backward difference approximation of first order or O(h) approximation:
y′(xi) ≈ [y(xi) – y(xi–1)]/h, or y′i = [yi – yi–1]/h.  (5.9)
(iii) Central difference approximation of second order or O(h²) approximation:
y′(xi) ≈ [y(xi+1) – y(xi–1)]/(2h), or y′i = [yi+1 – yi–1]/(2h).  (5.10)
Approximation to y″(xi) at the point x = xi
Central difference approximation of second order or O(h²) approximation:
y″(xi) ≈ [y(xi+1) – 2y(xi) + y(xi–1)]/h², or y″i = [yi+1 – 2yi + yi–1]/h².  (5.11)
Applying the differential equation (5.3) at the nodal point x = xi, we obtain
y″(xi) + p(xi) y′(xi) + q(xi) y(xi) = r(xi).  (5.12)
Since y(a) = y(x0) = A and y(b) = y(xn) = B are prescribed, we need to determine the numerical solutions at the n – 1 nodal points x1, x2, ..., xi, ..., xn–1.
Now, y′(xi) is approximated by one of the approximations given in Eqs. (5.8), (5.9), (5.10), and y″(xi) is approximated by the approximation given in Eq. (5.11). Since the approximations (5.10) and (5.11) are both of second order, the approximation to the differential equation is of second order. However, if y′(xi) is approximated by (5.8) or (5.9), which are of first order, then the approximation to the differential equation is only of first order. But, in many practical problems, particularly in Fluid Mechanics, the approximations (5.8), (5.9) give better results (nonoscillatory solutions) than the central difference approximation (5.10).
Using the approximations (5.10) and (5.11) in Eq. (5.12), we obtain
[yi+1 – 2yi + yi–1]/h² + p(xi)[yi+1 – yi–1]/(2h) + q(xi) yi = ri
or 2[yi+1 – 2yi + yi–1] + h p(xi)[yi+1 – yi–1] + 2h² q(xi) yi = 2h² ri.
Collecting the coefficients, we can write the equation as
ai yi–1 + bi yi + ci yi+1 = di, i = 1, 2, ..., n – 1,  (5.13)
where ai = 2 – h p(xi), bi = – 4 + 2h² q(xi), ci = 2 + h p(xi), di = 2h² r(xi).  (5.14)
Let us now apply the method at the nodal points. We have the following equations.
At x = x1, or i = 1:
a1 y0 + b1 y1 + c1 y2 = d1, or b1 y1 + c1 y2 = d1 – a1 A = d1*.  (5.15)
At x = xi, i = 2, 3, ..., n – 2:
ai yi–1 + bi yi + ci yi+1 = di.

At x = xn–1, or i = n – 1:
an–1 yn–2 + bn–1 yn–1 + cn–1 yn = dn–1, or an–1 yn–2 + bn–1 yn–1 = dn–1 – cn–1 B = d*n–1.  (5.16)
Eqs. (5.15), (5.13), (5.16) give rise to a system of (n – 1) × (n – 1) equations Ay = d for the unknowns y1, y2, ..., yi, ..., yn–1, where A is the coefficient matrix and
y = [y1, y2, ..., yi, ..., yn–1]T, d = [d1*, d2, ..., di, ..., dn–2, d*n–1]T.
It is interesting to study the structure of the coefficient matrix A. Consider the case when the interval [a, b] is subdivided into n = 10 parts. Then, we have 9 unknowns, y1, y2, ..., y9, and the coefficient matrix A is as given below.

      [ b1  c1   0   0   0   0   0   0   0 ]
      [ a2  b2  c2   0   0   0   0   0   0 ]
      [  0  a3  b3  c3   0   0   0   0   0 ]
      [  0   0  a4  b4  c4   0   0   0   0 ]
A =   [  0   0   0  a5  b5  c5   0   0   0 ]
      [  0   0   0   0  a6  b6  c6   0   0 ]
      [  0   0   0   0   0  a7  b7  c7   0 ]
      [  0   0   0   0   0   0  a8  b8  c8 ]
      [  0   0   0   0   0   0   0  a9  b9 ]

Remark 1 Do you recognize the structure of A? It is a tri-diagonal system of algebraic equations. Therefore, the numerical solution of Eq.(5.2) or Eq.(5.3) by finite differences gives rise to a tri-diagonal system of algebraic equations, whose solution can be obtained by using the Gauss elimination method or the Thomas algorithm. A tri-diagonal system of algebraic equations is the easiest to solve. In fact, even if the system is very large, its solution can be obtained in a few minutes on a modern desk top PC.
Remark 2 Does the system of equations (5.13) always converge? We have the following sufficient condition: If the system of equations (5.13) is diagonally dominant, then it always converges. Using the expressions for ai, bi, ci, we can try to find the bound for h for which this condition is satisfied.
Example 5.1 Derive the difference equations for the solution of the boundary value problem
y″ + p(x) y′ + q(x) y = r(x), x ∈ [a, b], y(a) = A, y(b) = B
using central difference approximation for y″ and forward difference approximation for y′.
Solution Using the approximations
y″i = [yi+1 – 2yi + yi–1]/h², y′i = [yi+1 – yi]/h
in the differential equation, we obtain
[yi+1 – 2yi + yi–1]/h² + p(xi)[yi+1 – yi]/h + q(xi) yi = r(xi)

or [yi+1 – 2yi + yi–1] + h p(xi)[yi+1 – yi] + h² q(xi) yi = h² ri,
or yi–1 + bi yi + ci yi+1 = di, i = 1, 2, ..., n – 1,
where bi = – 2 – h p(xi) + h² q(xi), ci = 1 + h p(xi), di = h² ri.
The system again produces a tri-diagonal system of equations.
Example 5.2 Derive the difference equations for the solution of the boundary value problem
y″ + p(x) y′ + q(x) y = r(x), x ∈ [a, b], y(a) = A, y(b) = B
using central difference approximation for y″ and backward difference approximation for y′.
Solution Using the approximations
y″i = [yi+1 – 2yi + yi–1]/h², y′i = [yi – yi–1]/h
in the differential equation, we obtain
[yi+1 – 2yi + yi–1]/h² + p(xi)[yi – yi–1]/h + q(xi) yi = r(xi)
or [yi+1 – 2yi + yi–1] + h p(xi)[yi – yi–1] + h² q(xi) yi = h² ri,
or ai yi–1 + bi yi + yi+1 = di, i = 1, 2, ..., n – 1,
where ai = 1 – h p(xi), bi = – 2 + h p(xi) + h² q(xi), di = h² ri.
The system again produces a tri-diagonal system of equations.
Example 5.3 Solve the boundary value problem x y″ + y = 0, y(1) = 1, y(2) = 2 by second order finite difference method with h = 0.25.
Solution We have h = 0.25 and n = (b – a)/h = (2 – 1)/0.25 = 4. We have five nodal points x0 = 1.0, x1 = 1.25, x2 = 1.5, x3 = 1.75, x4 = 2.0. We are given the data values y(x0) = y0 = y(1) = 1, y(x4) = y4 = y(2) = 2. We are to determine the approximations for y(1.25), y(1.5), y(1.75). Using the central difference approximation for y″, we get
xi [yi+1 – 2yi + yi–1]/h² + yi = 0, or 16xi yi–1 + (1 – 32xi) yi + 16xi yi+1 = 0.
We have the following difference equations.
For i = 1, x1 = 1.25, y0 = 1: 20y0 – 39y1 + 20y2 = 0, or – 39y1 + 20y2 = – 20.
For i = 2, x2 = 1.5: 24y1 – 47y2 + 24y3 = 0.
For i = 3, x3 = 1.75, y4 = 2: 28y2 – 55y3 + 28y4 = 0, or 28y2 – 55y3 = – 56.
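The tridiagonal system of Example 5.3 can be solved directly by the Thomas algorithm mentioned in Remark 1. The following is an illustrative Python sketch (our own; the function name and argument layout are assumptions, not from the text).

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system by forward elimination and back
    substitution. a: sub-diagonal (a[0] unused), b: main diagonal,
    c: super-diagonal (c[-1] unused), d: right hand side."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    y = [0.0] * n
    y[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        y[i] = dp[i] - cp[i] * y[i + 1]
    return y

# System of Example 5.3:
#   -39 y1 + 20 y2          = -20
#    24 y1 - 47 y2 + 24 y3  =   0
#            28 y2 - 55 y3  = -56
y = thomas([0, 24, 28], [-39, -47, -55], [20, 24, 0], [-20, 0, -56])
print(y)  # approximately [1.35126, 1.63495, 1.85052]
```

The computed values y1 = 1.35126, y2 = 1.63495, y3 = 1.85052 agree with the Gauss elimination solution of this example.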

y(0. LMM0 1 − 936/1353 480/1353OPP .25. x4 = 1. R – 24R . y 1 2 3 1 We can solve this system using Gauss elimination. 39 Example 5.25 or – 33y1 + 16y2 = 0. y(0. M 24 − 47 24 0P . x3 = 0.75) satisfying the differential equation y″ – y = x and subject to the conditions y(0) = 0. M24 − 47 24 0 P M 0 28 − 55 − 56P Q N N 0 28 − 55 − 56 Q 1 − 20/39 0 20 / 39 L1 − 20/39 0 20 / 39 OP R M0 − 1353/39 24 − 480/39P .25).4 Using the second order finite difference method. M0 − 936/1353 1 480/1353 P M0 0 − 48207/1353 − 89208/1353PQ N 2 1 2 3 2 From the last equation. y(1) = 2. We have the following difference equations.85052. 1353 1353 y1 = 20 (1 + 1.63495) = 1. y(x4) = y4 = y(1) = 2. . x1 = 0. 16y1 – 33 y2 + 16y3 = 0.0. (− 1353/39) . or 16yi–1 – 33yi + 16yi+1 = xi. x3 = 0. y0 = 0.25 x0 = 0.75. Using the central difference approximation for y″.85052) = 1.75 or 16y2 – 33y3 = – 31.63495. Solution We have h = 0. We obtain L− 39 20 0 − 20O R LM 1 − 20/39 0 20 / 39OP .5).25. M0 28 − 55 − 56 Q − 55 − 56 Q N N0 28 0 20 / 39 O L1 − 20/39 R – 28R . 48207 480 936 + (1.25.75). find y(0.5.0 : 16y0 – 33y1 + 16y2 = 0. 16y2 – 33y3 + 16y4 = 0. we get y3 = Back substitution gives y2 = 89208 = 1.25 and n = We have five nodal points b − a 1− 0 = = 4.35126. y(0.5). For i = 1. We are given the data values y(x0) = y0 = y(0) = 0. y(0.246 We have the following system of equations NUMERICAL METHODS L− 39 M 24 M 0 N 20 0 24 − 47 28 − 55 OP LM y OP LM− 20OP 0 PQ MN y PQ = MN− 56PQ . x1 = 0.25.75. x2 = 0. we get i 1 h2 [ yi +1 − 2 yi + yi − 1 ] − yi = xi . − 39 . h 0.0 : For i = 2. y4 = 2.5. x2 = 0.0.5 : For i = 3. We are to determine the approximations for y(0.25).

We have the following system of equations
[– 33   16    0 ] [y1]   [  0.25 ]
[  16 – 33   16 ] [y2] = [  0.5  ]
[   0   16 – 33 ] [y3]   [– 31.25]
We can solve this system using Gauss elimination. We obtain
R1/(– 33): [1  – 0.48485  0 | – 0.007576],
R2 – 16R1: [0  – 25.2424  16 | 0.621216],
R2/(– 25.2424): [0  1  – 0.63385 | – 0.02461],
R3 – 16R2: [0  0  – 22.8584 | – 30.85624].
From the last equation, we get y3 = 30.85624/22.8584 = 1.34989.
Back substitution gives
y2 = – 0.02461 + 0.63385(1.34989) = 0.83102, y1 = – 0.007576 + 0.48485(0.83102) = 0.39534.
Example 5.5 Solve the boundary value problem y″ + 5y′ + 4y = 1, y(0) = 0, y(1) = 0 by finite difference method. Use central difference approximations with h = 0.25. If the exact solution is y(x) = Ae^(–x) + Be^(–4x) + 0.25, where A = (e^(–3) – e)/(4(1 – e^(–3))), B = – 0.25 – A, find the magnitude of the error and the percentage relative error at x = 0.5.
Solution We have h = 0.25 and n = (1 – 0)/0.25 = 4. We have five nodal points x0 = 0, x1 = 0.25, x2 = 0.5, x3 = 0.75, x4 = 1.0. We are given the data values y(x0) = y0 = y(0) = 0, y(x4) = y4 = y(1) = 0. We are to determine the approximations for y(0.25), y(0.5), y(0.75). Using the central difference approximations, we get
[yi+1 – 2yi + yi–1]/h² + 5(yi+1 – yi–1)/(2h) + 4yi = 1
or 16[yi+1 – 2yi + yi–1] + 10(yi+1 – yi–1) + 4yi = 1, or 6yi–1 – 28yi + 26yi+1 = 1.

We have the following difference equations.
For i = 1, y0 = 0: 6y0 – 28y1 + 26y2 = 1, or – 28y1 + 26y2 = 1.
For i = 2: 6y1 – 28y2 + 26y3 = 1.
For i = 3, y4 = 0: 6y2 – 28y3 + 26y4 = 1, or 6y2 – 28y3 = 1.
We have the following system of equations
[– 28   26    0 ] [y1]   [1]
[   6 – 28   26 ] [y2] = [1]
[   0    6 – 28 ] [y3]   [1]
We can solve this system using Gauss elimination. We obtain
R1/(– 28): [1  – 0.92857  0 | – 0.03571],
R2 – 6R1: [0  – 22.42858  26 | 1.21426],
R2/(– 22.42858): [0  1  – 1.15924 | – 0.05414],
R3 – 6R2: [0  0  – 21.04456 | 1.32484].
From the last equation, we get y3 = 1.32484/(– 21.04456) = – 0.06295.
Back substitution gives
y2 = – 0.05414 + 1.15924(– 0.06295) = – 0.12711, y1 = – 0.03571 + 0.92857(– 0.12711) = – 0.15374.
Now, A = – 0.70208, B = 0.45208, and the exact solution gives
y(0.5) = Ae^(–0.5) + Be^(–2) + 0.25 = – 0.11465.
| error at x = 0.5 | = | y2 – y(0.5) | = | – 0.12711 + 0.11465 | = 0.01246.
Percentage relative error = (0.01246/0.11465)(100) = 10.8%.
Example 5.6 Solve the boundary value problem (1 + x²)y″ + 4xy′ + 2y = 2, y(0) = 0, y(1) = 1/2 by finite difference method. Use central difference approximations with h = 1/3.
Solution We have h = 1/3. The nodal points are x0 = 0, x1 = 1/3, x2 = 2/3, x3 = 1.

Using the central difference approximations, we obtain
(1 + xi²)[yi+1 – 2yi + yi–1]/h² + 4xi(yi+1 – yi–1)/(2h) + 2yi = 2
or [9(1 + xi²) – 6xi] yi–1 + [2 – 18(1 + xi²)] yi + [9(1 + xi²) + 6xi] yi+1 = 2.
We have the following difference equations.
For i = 1, x1 = 1/3, y0 = 0:
[9(1 + 1/9) – 2] y0 + [2 – 18(1 + 1/9)] y1 + [9(1 + 1/9) + 2] y2 = 2, or – 18y1 + 12y2 = 2, that is, – 9y1 + 6y2 = 1.
For i = 2, x2 = 2/3, y3 = 1/2:
[9(1 + 4/9) – 4] y1 + [2 – 18(1 + 4/9)] y2 + [9(1 + 4/9) + 4] y3 = 2, or 9y1 – 24y2 + 17/2 = 2, that is, 9y1 – 24y2 = – 6.5.
Solving the equations – 9y1 + 6y2 = 1, 9y1 – 24y2 = – 6.5, we obtain
y1 = 15/162 = 0.092592, y2 = 49.5/162 = 0.305556.

REVIEW QUESTIONS

1. Write the second order difference approximations for (i) y′(xi), (ii) y″(xi), based on central differences.
Solution (i) y′(xi) ≈ [yi+1 – yi–1]/(2h), (ii) y″(xi) ≈ [yi+1 – 2yi + yi–1]/h², where h is the step length.
2. Write the first order difference approximations for y′(xi) based on (i) forward differences, (ii) backward differences.
Solution (i) y′(xi) ≈ [yi+1 – yi]/h, (ii) y′(xi) ≈ [yi – yi–1]/h, where h is the step length.
3. Finite difference methods when applied to linear second order boundary value problems in ordinary differential equations produce a system of linear equations Ay = b. What is the structure of the coefficient matrix A?
Solution Tridiagonal matrix.
4. What types of methods are available for the solution of linear systems of algebraic equations?
Solution (i) Direct methods, (ii) Iterative methods.

5. When iterative methods are used to solve the linear system of algebraic equations, under what conditions is convergence to the exact solution guaranteed?
Solution A sufficient condition for convergence is that the coefficient matrix A should be diagonally dominant, that is, | aii | ≥ Σ (j = 1, j ≠ i) | aij |. In this case, convergence is guaranteed. Since it is a sufficient condition, it implies that the system may converge even if the system is not diagonally dominant.

EXERCISE 5.1

Solve the following boundary value problems using the finite difference method and central difference approximations.
1. y″ = y + 1, y(0) = 0, y(1) = e – 1 with h = 1/4.
2. y″ = y + 1, y(0) = 0, y(1) = e – 1 with h = 1/3. If the exact solution is y(x) = e^x – 1, find the absolute errors at the nodal points.
3. y″ = y′ + 1, y(0) = 1, y(1) = 2(e – 1) with h = 1/3. If the exact solution is y(x) = 2e^x – x – 1, find the absolute errors at the nodal points.
4. y″ + 3y′ + 2y = 1, y(0) = 1, y(1) = 1 with h = 0.25.
5. y″ – 3y′ + 2y = 0, y(0) = 1, y(1) = 2 with h = 0.25.
6. y″ – y = – 4xe^x, y(0) = 0, y(1) = 1 with h = 0.25.
7. y″ = xy, y(0) = 0, y(1) = 1 with h = 0.25.
8. x²y″ = 2y – x, y(2) = 0, y(3) = 0 with h = 1/3.
9. y″ = 2x⁻²y + x⁻¹, y(2) = 0, y(3) = 0 with h = 1/4.
10. Solve the boundary value problem y″ – 10y′ = 0, y(0) = 0, y(1) = 1 with h = 0.25, by using central difference approximation to y″ and (i) central difference approximation to y′, (ii) backward difference approximation to y′, (iii) forward difference approximation to y′. If the exact solution is y(x) = (e^(10x) – 1)/(e^(10) – 1), compare the magnitudes of errors at the nodal points in the three methods.

5.3 CLASSIFICATION OF LINEAR SECOND ORDER PARTIAL DIFFERENTIAL EQUATIONS

In this and later sections, we shall study the numerical solution of some second order linear partial differential equations. Most of the mathematical models of physical systems give rise to a system of linear or nonlinear partial differential equations. Since analytical methods are not always available for solving these equations, we attempt to solve them by numerical methods. The numerical methods can broadly be classified as finite element methods and finite difference methods. We shall be considering only the finite difference methods for solving some of these equations.
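Before moving on to partial differential equations, the whole finite difference procedure of Section 5.2 — build the central difference equations (5.13)–(5.14) and solve the resulting tridiagonal system — can be collected into a single routine. The following is an illustrative Python sketch (our own; the function name and signature are assumptions), checked against Example 5.5.

```python
def solve_linear_bvp(p, q, r, a, b, ya, yb, n):
    """Solve y'' + p(x) y' + q(x) y = r(x), y(a) = ya, y(b) = yb,
    by second order central differences on n sub-intervals.
    Returns the solution on the full grid, boundary values included."""
    h = (b - a) / n
    xs = [a + i * h for i in range(n + 1)]
    # Interior equations (5.13)-(5.14):
    # (2 - h p_i) y_{i-1} + (-4 + 2 h^2 q_i) y_i + (2 + h p_i) y_{i+1} = 2 h^2 r_i
    sub = [2 - h * p(xs[i]) for i in range(1, n)]
    dia = [-4 + 2 * h * h * q(xs[i]) for i in range(1, n)]
    sup = [2 + h * p(xs[i]) for i in range(1, n)]
    rhs = [2 * h * h * r(xs[i]) for i in range(1, n)]
    rhs[0] -= sub[0] * ya      # fold the known boundary values into d*
    rhs[-1] -= sup[-1] * yb
    m = n - 1                  # Thomas algorithm on the (n-1) x (n-1) system
    for i in range(1, m):
        f = sub[i] / dia[i - 1]
        dia[i] -= f * sup[i - 1]
        rhs[i] -= f * rhs[i - 1]
    y = [0.0] * m
    y[-1] = rhs[-1] / dia[-1]
    for i in range(m - 2, -1, -1):
        y[i] = (rhs[i] - sup[i] * y[i + 1]) / dia[i]
    return [ya] + y + [yb]

# Example 5.5: y'' + 5y' + 4y = 1, y(0) = 0, y(1) = 0, h = 0.25
y = solve_linear_bvp(lambda x: 5.0, lambda x: 4.0, lambda x: 1.0,
                     0.0, 1.0, 0.0, 0.0, 4)
print(y)  # interior values approximately -0.15375, -0.12711, -0.06295
```

For Example 5.5 this reproduces y1 ≈ – 0.15374, y2 ≈ – 0.12711, y3 ≈ – 0.06295, matching the worked Gauss elimination.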

BOUNDARY VALUE PROBLEMS IN ORDINARY DIFFERENTIAL EQUATIONS 251

First, we classify the linear second order partial differential equation

    A uxx + 2B uxy + C uyy + D ux + E uy + F u + G = 0        (5.17)

where A, B, C, D, E, F and G are real constants or functions of x, y. The partial differential equation is called an

    elliptic equation      if B² – AC < 0,
    parabolic equation     if B² – AC = 0,
    hyperbolic equation    if B² – AC > 0.                    (5.18)

Remark 3 Some books write the coefficient of uxy in Eq. (5.17) as B. Then, the condition in Eq. (5.18) changes to B² – 4AC. Note that the lower order terms do not contribute to the classification of the partial differential equation.

The simplest examples of the above equations are the following:

    Parabolic equation:   ut = c² uxx       (one dimensional heat equation),      (5.19)
    Hyperbolic equation:  utt = c² uxx      (one dimensional wave equation),      (5.20)
    Elliptic equation:    uxx + uyy = 0     (two dimensional Laplace equation).   (5.21)

We can verify that in Eq. (5.19), A = c², B = 0, C = 0 and B² – AC = 0; in Eq. (5.20), A = c², B = 0, C = – 1 and B² – AC = c² > 0; and in Eq. (5.21), A = 1, B = 0, C = 1 and B² – AC = – 1 < 0.

Remark 4 What is the importance of classification? Classification governs the number and type of conditions that should be prescribed in order that the problem has a unique solution. For example, for the solution of the one dimensional heat equation (Eq. (5.19)), we require an initial condition u(x, 0) = f(x) and the conditions along the boundary lines x = 0 and x = l, where l is the length of the rod (boundary conditions); here x varies from 0 to l and t > 0. In the case of the one dimensional wave equation (Eq. (5.20)), which represents the vibrations of an elastic string of length l, u(x, t) denoting the displacement of the string in the vertical plane, we require two initial conditions to be prescribed, the initial displacement u(x, 0) = f(x) and the initial velocity ut(x, 0) = g(x), together with the conditions along the boundary lines x = 0 and x = l; again x varies from 0 to l and t > 0. For the solution of the Laplace equation (Eq. (5.21)), we require the boundary conditions to be prescribed on the bounding curve of the domain, say a square, a rectangle or a circle, inside which the equation is to be solved.

Remark 5 An elliptic equation together with the boundary conditions is called an elliptic boundary value problem. The boundary value problem holds in a closed domain, or in an open domain which can be conformally mapped on to a closed domain. Both the hyperbolic and parabolic equations, together with their initial and boundary conditions, are called initial value problems; sometimes they are also called initial-boundary value problems. The initial value problem holds in either an open or a semi-open domain.
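The classification rule (5.18) is mechanical enough to express in code. A minimal sketch (the function name and the sample coefficients are our own choices; the book itself gives no programs):

```python
def classify(A, B, C):
    """Classify A u_xx + 2B u_xy + C u_yy + (lower order terms) = 0
    by the sign of the discriminant B^2 - AC, as in (5.18)."""
    disc = B * B - A * C
    if disc < 0:
        return "elliptic"
    if disc == 0:
        return "parabolic"
    return "hyperbolic"

c = 3.0  # any constant c in the heat/wave equations
print(classify(c * c, 0.0, 0.0))   # heat equation (5.19): parabolic
print(classify(c * c, 0.0, -1.0))  # wave equation (5.20): hyperbolic
print(classify(1.0, 0.0, 1.0))     # Laplace equation (5.21): elliptic
```

Only the coefficients of the three second order terms enter the function, echoing the remark that the lower order terms play no role.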

Example 5.7 Classify the following partial differential equations.

(a) uxx = 6ux + 3uy.
(b) 2uxx + 3uyy – ux + 2uy = 0.
(c) utt + 4utx + 4uxx + 2ux + ut = 0.
(d) uxx + 2xuxy + (1 – y²)uyy = 0.

Solution
(a) Write the given equation as uxx – 6ux – 3uy = 0. We have A = 1, B = 0, C = 0 and B² – AC = 0. Hence, the given partial differential equation is a parabolic equation.
(b) We have A = 2, B = 0, C = 3 and B² – AC = – 6 < 0. Hence, the given partial differential equation is an elliptic equation.
(c) We have A = 1, B = 2, C = 4 and B² – AC = 0. Hence, the given partial differential equation is a parabolic equation.
(d) We have A = 1, B = x, C = 1 – y² and B² – AC = x² – (1 – y²) = x² + y² – 1. If x² + y² – 1 < 0, that is, inside the unit circle x² + y² = 1, the given partial differential equation is an elliptic equation; if x² + y² – 1 = 0, that is, on the unit circle, it is a parabolic equation; and if x² + y² – 1 > 0, that is, outside the unit circle, it is a hyperbolic equation (see Fig. 5.2).

Fig. 5.2. Example 5.7(d): elliptic inside the unit circle x² + y² = 1, parabolic on it, hyperbolic outside it.

EXERCISE 5.2

Classify the following partial differential equations.
1. uxx + 4uyy = ux + 2uy.
2. uxx – uyy + 3ux + 4uy = 0.
3. uxx + 4xuxy + (1 – 4y²)uyy = 0.
4. utt + (5 + 2x²)uxt + (1 + x²)(4 + x²)uxx = 0.
5. uxx + 4uxy + (x² + 4y²)uyy = x² + y².

5.4 FINITE DIFFERENCE METHODS FOR LAPLACE AND POISSON EQUATIONS

In this section, we consider the solution of the following boundary value problems governed by the given partial differential equations along with suitable boundary conditions.

(a) Laplace's equation: uxx + uyy = ∇²u = 0, with u(x, y) = f(x, y) prescribed on the boundary.
(b) Poisson's equation: uxx + uyy = ∇²u = G(x, y), with u(x, y) = g(x, y) prescribed on the boundary.

In both the problems, the boundary conditions are called Dirichlet boundary conditions, and the boundary value problem is called a Dirichlet boundary value problem.

Finite difference method We have a two dimensional domain (x, y) ∈ R. We superimpose on this domain R a rectangular network or mesh of lines with step lengths h and k respectively,

parallel to the x- and y-axis. The mesh lines are defined by xi = ih, i = 0, 1, 2, ..., and yj = jk, j = 0, 1, 2, .... The points of intersection of the mesh lines are called nodes or grid points or mesh points (see Figs. 5.3a, b), and the mesh of lines is called a grid. If h = k, then we have a uniform mesh. Denote the numerical solution at (xi, yj) by u_{i,j}.

Fig. 5.3a. Nodes in a rectangle. Fig. 5.3b. Nodes in a square.

At the nodes, the partial derivatives in the differential equation are replaced by suitable difference approximations, that is, the partial differential equation is approximated by a difference equation at each nodal point. This procedure is called discretization of the partial differential equation. We use the following central difference approximations:

    (ux)_{i,j} = [u_{i+1,j} – u_{i–1,j}]/(2h),    (uy)_{i,j} = [u_{i,j+1} – u_{i,j–1}]/(2k),    (5.22)

    (uxx)_{i,j} = [u_{i+1,j} – 2u_{i,j} + u_{i–1,j}]/h²,    (uyy)_{i,j} = [u_{i,j+1} – 2u_{i,j} + u_{i,j–1}]/k².

Solution of Laplace's equation We apply the Laplace equation at the nodal point (i, j), that is, (uxx)_{i,j} + (uyy)_{i,j} = 0. Inserting the above approximations, we obtain

    [u_{i+1,j} – 2u_{i,j} + u_{i–1,j}]/h² + [u_{i,j+1} – 2u_{i,j} + u_{i,j–1}]/k² = 0    (5.23)

or    (u_{i+1,j} – 2u_{i,j} + u_{i–1,j}) + p²(u_{i,j+1} – 2u_{i,j} + u_{i,j–1}) = 0,  where p = h/k.    (5.24)

If h = k, that is, p = 1 (called uniform mesh spacing), we obtain the difference approximation

    u_{i+1,j} + u_{i–1,j} + u_{i,j+1} + u_{i,j–1} – 4u_{i,j} = 0.    (5.25)

This approximation is called the standard five point formula. We can write this formula as

    u_{i,j} = (1/4)[u_{i+1,j} + u_{i–1,j} + u_{i,j+1} + u_{i,j–1}],

so that u_{i,j} is obtained as the mean of the values at the four neighbouring points in the x and y directions (Fig. 5.4).

Fig. 5.4. Standard five point formula.

Remark 6 The nodes in the mesh are numbered in an orderly way. We number them from left to right, and from top to bottom or from bottom to top. A typical numbering is given in Figs. 5.5a, b.

Fig. 5.5a. Numbering of nodes (top row first):

    u1 u2 u3
    u4 u5 u6
    u7 u8 u9

Fig. 5.5b. Numbering of nodes (bottom row first):

    u7 u8 u9
    u4 u5 u6
    u1 u2 u3
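The mean-value character of (5.25) can be checked directly. A small sketch (the sample function and point are our own choices): for the harmonic polynomial u = x² – y² the formula is exact, since the truncation error contains only fourth derivatives.

```python
def five_point_mean(u_right, u_left, u_up, u_down):
    # Standard five point formula (5.25): u_ij is the mean of its
    # four neighbours on the axes.
    return 0.25 * (u_right + u_left + u_up + u_down)

u = lambda x, y: x * x - y * y   # a harmonic function: u_xx + u_yy = 0
h = 0.5
x0, y0 = 0.7, 0.3
approx = five_point_mean(u(x0 + h, y0), u(x0 - h, y0),
                         u(x0, y0 + h), u(x0, y0 - h))
print(approx, u(x0, y0))  # both equal 0.4 up to rounding
```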

System of equations governing the solution The difference approximation (5.23) or (5.24) to the Laplace equation uxx + uyy = ∇²u = 0 is applied at all the nodes, and the boundary conditions are used to simplify the equations. The resulting system is a linear system of algebraic equations Au = d. Let us write the system of equations that arises when we have nine nodes, numbered as in Fig. 5.5a. Since the boundary values are known, we obtain the following equations, where b1, b2, b3, b4, b6, b7, b8, b9 are the contributions from the boundary values (the central node 5 has no boundary neighbours).

At 1: u2 + u4 – 4u1 = b1,  or  – 4u1 + u2 + u4 = b1.
At 2: u1 + u3 + u5 – 4u2 = b2,  or  u1 – 4u2 + u3 + u5 = b2.
At 3: u2 + u6 – 4u3 = b3,  or  u2 – 4u3 + u6 = b3.
At 4: u1 + u5 + u7 – 4u4 = b4,  or  u1 – 4u4 + u5 + u7 = b4.
At 5: u2 + u4 + u6 + u8 – 4u5 = 0,  or  u2 + u4 – 4u5 + u6 + u8 = 0.
At 6: u3 + u5 + u9 – 4u6 = b6,  or  u3 + u5 – 4u6 + u9 = b6.
At 7: u4 + u8 – 4u7 = b7,  or  u4 – 4u7 + u8 = b7.
At 8: u5 + u7 + u9 – 4u8 = b8,  or  u5 + u7 – 4u8 + u9 = b8.
At 9: u6 + u8 – 4u9 = b9.
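The nine equations above can be assembled programmatically for any grid size. A sketch in pure Python (the helper name is our own; row-by-row numbering as in Fig. 5.5a):

```python
def laplace_matrix(n):
    """Coefficient matrix of the five point formula on an n x n grid of
    interior nodes, numbered row by row, giving the system Au = d."""
    size = n * n
    A = [[0] * size for _ in range(size)]
    for i in range(n):            # grid row of the node
        for j in range(n):        # grid column of the node
            r = i * n + j         # equation number of node (i, j)
            A[r][r] = -4
            if j > 0:
                A[r][r - 1] = 1   # left neighbour
            if j < n - 1:
                A[r][r + 1] = 1   # right neighbour
            if i > 0:
                A[r][r - n] = 1   # neighbour in the row above
            if i < n - 1:
                A[r][r + n] = 1   # neighbour in the row below
    return A

A = laplace_matrix(3)
print(A[4])  # equation at the central node 5: [0, 1, 0, 1, -4, 1, 0, 1, 0]
```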

Therefore, we have the linear algebraic system of equations Au = d:

    | – 4    1    0    1    0    0    0    0    0 |  | u1 |   | b1 |
    |   1  – 4    1    0    1    0    0    0    0 |  | u2 |   | b2 |
    |   0    1  – 4    0    0    1    0    0    0 |  | u3 |   | b3 |
    |   1    0    0  – 4    1    0    1    0    0 |  | u4 |   | b4 |
    |   0    1    0    1  – 4    1    0    1    0 |  | u5 | = |  0 |
    |   0    0    1    0    1  – 4    0    0    1 |  | u6 |   | b6 |
    |   0    0    0    1    0    0  – 4    1    0 |  | u7 |   | b7 |
    |   0    0    0    0    1    0    1  – 4    1 |  | u8 |   | b8 |
    |   0    0    0    0    0    1    0    1  – 4 |  | u9 |   | b9 |

Remark 7 Do you recognize the structure of the matrix? It is a band matrix: all the non-zero elements are located in a band about the leading diagonal. The half band width is the number of nodal points on each mesh line, here 3, and the total band width of the matrix is 3 + 3 + 1 = 7. In the general case, for a large n × n system (n unknowns on each mesh line), the half band width is n and the total band width is n + n + 1 = 2n + 1. All the elements on the leading diagonal are non-zero and are equal to – 4. Except in the equations corresponding to the nodal points near the boundaries, all the elements on the first super-diagonal and the first sub-diagonal are non-zero and are equal to 1; the remaining two non-zero elements (which equal 1) of each such equation are located elsewhere in the band. At the corner points (u1, u3, u7, u9 in the above example), the number of non-zero elements in a row is 3; at the other points near the boundaries (u2, u4, u6, u8 in the above example), it is 4; and at the interior nodes it is 5. The remaining elements in the matrix are all zero. This structure is obtained in all problems of solving Dirichlet boundary value problems for the Laplace equation. The software for the solution of such band matrix systems is available in all computers.

Remark 8 Let us derive the error or truncation error (T.E) in the approximation for the Laplace equation. Consider the case of uniform mesh, h = k. The truncation error of the left hand side of (5.25) is

    T.E = {u(x_{i+1}, y_j) – 2u(x_i, y_j) + u(x_{i–1}, y_j)} + {u(x_i, y_{j+1}) – 2u(x_i, y_j) + u(x_i, y_{j–1})}.

Using the Taylor series expansions

    u(x_{i±1}, y_j) = [u ± h ∂u/∂x + (h²/2) ∂²u/∂x² ± (h³/6) ∂³u/∂x³ + (h⁴/24) ∂⁴u/∂x⁴ ± ...]_{i,j},

and the corresponding expansions of u(x_i, y_{j±1}) in powers of h ∂/∂y, the odd order terms cancel, and we obtain

    T.E = h² [∂²u/∂x² + ∂²u/∂y²]_{i,j} + (h⁴/12) [∂⁴u/∂x⁴ + ∂⁴u/∂y⁴]_{i,j} + ...

Since ∂²u/∂x² + ∂²u/∂y² = 0, the truncation error of the method (5.23) or (5.25), when h = k, is given by

    T.E = (h⁴/12) [∂⁴u/∂x⁴ + ∂⁴u/∂y⁴]_{i,j} + ...
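The behaviour established in Remark 8 can be observed numerically: halving h should reduce the residual, scaled by 1/h², by a factor of about 4. A sketch (the test function and evaluation point are our own choices):

```python
import math

def scaled_residual(u, x, y, h):
    """(1/h^2)[u(x+h,y) + u(x-h,y) + u(x,y+h) + u(x,y-h) - 4u(x,y)];
    for a harmonic u this is the local error term, of order O(h^2)."""
    return (u(x + h, y) + u(x - h, y) + u(x, y + h) + u(x, y - h)
            - 4.0 * u(x, y)) / (h * h)

# u = sin(x) sinh(y) is harmonic: u_xx = -sin*sinh, u_yy = +sin*sinh.
u = lambda x, y: math.sin(x) * math.sinh(y)
r1 = scaled_residual(u, 1.0, 1.0, 0.1)
r2 = scaled_residual(u, 1.0, 1.0, 0.05)
print(r1 / r2)  # close to 4, as expected for a second order formula
```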

The order of the formula (5.23) is defined as Order = (1/h²)(T.E) = O(h²). We say that the method is of second order. What is the importance of the order of the finite difference formulas? When a method converges, it implies that the errors in the numerical solutions → 0 as h → 0. Suppose that a method is of order O(h²). If we reduce the step length h by a factor, say 2, and recompute the numerical solution using the step length h/2, then the error becomes O[(h/2)²] = [O(h²)]/4, that is, the errors in the numerical solutions are reduced by a factor of 4. This can easily be checked at the common points between the two meshes.

Another five point formula The standard five point formula (5.25) at (i, j) uses the four neighbours (i + 1, j), (i – 1, j), (i, j + 1), (i, j – 1) on the x and y axis. We can obtain another five point formula by using the four neighbours on the diagonals, (i + 1, j + 1), (i – 1, j + 1), (i + 1, j – 1), (i – 1, j – 1). The nodal points are given in Fig. 5.6.

Fig. 5.6. Diagonal five point formula.

The five point formula for solving the Laplace equation is then given by

    (uxx)_{i,j} + (uyy)_{i,j} = (1/(2h²))[u_{i+1,j+1} + u_{i–1,j+1} + u_{i+1,j–1} + u_{i–1,j–1} – 4u_{i,j}] = 0    (5.26)

or    u_{i+1,j+1} + u_{i–1,j+1} + u_{i+1,j–1} + u_{i–1,j–1} – 4u_{i,j} = 0    (5.27)

or    u_{i,j} = (1/4)[u_{i+1,j+1} + u_{i–1,j+1} + u_{i+1,j–1} + u_{i–1,j–1}].    (5.28)

This formula is called the diagonal five point formula. Note the factor 2 in the denominator of (5.26): the diagonal neighbours lie at a distance √2·h from the node (i, j). Using the Taylor series expansions as in Remark 8, we obtain the truncation error of the method as

    T.E = (h⁴/6) [∂⁴u/∂x⁴ + 6 ∂⁴u/∂x²∂y² + ∂⁴u/∂y⁴]_{i,j} + ...

Hence, the truncation error is again of order O(h⁴), and the orders of the standard and the diagonal five point formulas are the same; both are second order formulas.

i + 1.6. Dii + 1. j = h2Gi. When the order of the system is large. For the purpose of our study. j) + 1 k2 (ui.E) = O(h2).29) (5. which may i.31a) is of order O(h2). j – 2ui. j) + p2 (ui. j + 1 of the coefficient matrix and the right hand side vector into the memory of the computer. (5. we use the direct methods. y). where Gi. j not be possible if the system is large.. Solution of Poisson equation Consider the solution of the Poisson’s equation uxx + uyy = ∇2u = G(x. (5. j–1) = Gi. When h = k. j . we shall consider the solution of the system of equations Au = d by Gauss elimination (a direct method) and an iterative method called Liebmann iteration. Information of few equations only can be loaded at a time. j + ui–1. The formula (5. y) on the boundary. the diagonal five point formula for solving the Poisson’s equation can be written as ui+1. Solution of the system of equations The solution of the system of equations can be obtained by direct or iterative methods. j + ui. When the order of the system of equations is not large. j . j + ui–1. (5.BOUNDARY VALUE PROBLEMS IN ORDINARY DIFFERENTIAL EQUATIONS. loading of all the elements of the coefficient matrix and the right hand side vector. j + ui. j (5. yj) and p = h/k. j+1 + ui. j + (uyy)i.23)-(5. j+1 + ui+1. 257 The order of the formula is given by Order = 1 h2 (T. say about 50 equations. j + ui. in many problems. Eqs. j = G(xi. we obtain the difference approximation as ui+1. j = 2h2Gi.30) (ui+1. We also call it a second order formula. that is. j+1 – 2ui. y) = g(x.31b) We shall illustrate the application of the finite difference method through the following examples. 5. j . Therefore. we use the iterative methods. j – 1 In fact. j + ui–1. j–1 + ui–1. . which is the case in most practical problems. Iterative methods do not require the Fig. j–1) = h2Gi. j = or 1 h 2 (ui+1. If h = k. j–1 – 4ui.30) is of order O(h2 + k2) and formula (5. with u(x. 
j – 2ui.31a) This approximation is called the standard five point formula for Poisson’s equation. u(x. j+1 – 2ui. j + 1 rect methods require the loading of all the elements i – 1. y) prescribed on the boundary. j–1 – 4ui. that is. the orders of the standard and the diagonal five point formulas are the same. p = 1. which is the application of Gauss-Seidel iterative method to the present system. we encounter thousands of equations..25) become (uxx)i. j – 1 i – 1. Diagonal five point formula. j+1 + ui–1.

Example 5.8 Solve uxx + uyy = 0 numerically for the following mesh with uniform spacing and with boundary conditions as shown in Fig. 5.7.

Fig. 5.7. Boundary values (Example 5.8):

    D   3   6   C
    3   u1  u2  6
    6   u3  u4  3
    A   6   3   B

Solution We note that the partial differential equation and the boundary conditions are symmetric about the diagonals AC and BD. Hence, u1 = u4 and u2 = u3, and we need to solve for the two unknowns u1 and u2. We use the standard five point formula u_{i+1,j} + u_{i–1,j} + u_{i,j+1} + u_{i,j–1} – 4u_{i,j} = 0, and obtain the following difference equations.

At 1: u2 + 3 + 3 + u3 – 4u1 = 0,  or  – 4u1 + 2u2 = – 6,  that is,  – 2u1 + u2 = – 3.
At 2: 6 + 6 + u1 + u4 – 4u2 = 0,  or  2u1 – 4u2 = – 12.

Adding the equations – 2u1 + u2 = – 3 and 2u1 – 4u2 = – 12, we get – 3u2 = – 15, or u2 = 5. From the first equation, 2u1 = u2 + 3 = 5 + 3 = 8, or u1 = 4. Hence, u1 = u4 = 4 and u2 = u3 = 5.

Example 5.9 Solve uxx + uyy = 0 numerically for the following mesh with uniform spacing and with boundary conditions as shown in Fig. 5.8.

Fig. 5.8. Boundary values (Example 5.9):

    D   2   4   C
    2   u1  u2  6
    4   u3  u4  8
    A   6   8   B

Solution We note that the partial differential equation and the boundary conditions are symmetric about the diagonal BD. Hence, u2 = u3, and we need to determine u1, u2 and u4. The standard five point formula gives the following difference equations.

At 1: u2 + 2 + 2 + u3 – 4u1 = 0,  or  – 4u1 + 2u2 = – 4,  that is,  – 2u1 + u2 = – 2.
At 2: 6 + 4 + u1 + u4 – 4u2 = 0,  or  u1 – 4u2 + u4 = – 10.
At 4: 8 + u2 + u3 + 8 – 4u4 = 0,  or  2u2 – 4u4 = – 16,  that is,  u2 – 2u4 = – 8.

We solve the system of equations using the Gauss elimination method, with the augmented matrix [A|d]:

    | – 2    1    0 |  – 2 |
    |   1  – 4    1 | – 10 |
    |   0    1  – 2 |  – 8 |

R2 ← R2 + (1/2)R1:

    | – 2      1    0 |  – 2 |
    |   0  – 7/2    1 | – 11 |
    |   0      1  – 2 |  – 8 |

R3 ← R3 + (2/7)R2:

    | – 2      1       0 |    – 2 |
    |   0  – 7/2       1 |   – 11 |
    |   0      0  – 12/7 | – 78/7 |

Solving the last equation, we get u4 = 78/12 = 6.5. Substituting in the second equation, we get u2 = (2/7)(11 + u4) = (2/7)(22/2 + 13/2) = (2/7)(35/2) = 5. Substituting in the first equation, we get u1 = (1/2)(2 + u2) = (1/2)(2 + 5) = 3.5. Hence, u1 = 3.5, u2 = u3 = 5, u4 = 6.5.

Example 5.10 Solve uxx + uyy = 0 numerically for the following mesh with uniform spacing and with boundary conditions as shown in Fig. 5.9.

Fig. 5.9. Mesh for Example 5.10 (the boundary values enter the difference equations below).

Solution We note that the boundary conditions have no symmetry. Therefore, we need to find the values of the four unknowns u1, u2, u3 and u4. Using the standard five point formula u_{i+1,j} + u_{i–1,j} + u_{i,j+1} + u_{i,j–1} – 4u_{i,j} = 0, we obtain the following difference equations.

At 1: u2 + 2 + 0 + u3 – 4u1 = 0,  or  – 4u1 + u2 + u3 = – 2.
At 2: 1 + 3 + u1 + u4 – 4u2 = 0,  or  u1 – 4u2 + u4 = – 4.
At 3: u4 + u1 + 0 + 0 – 4u3 = 0,  or  u1 – 4u3 + u4 = 0.
At 4: 2 + u2 + u3 + 0 – 4u4 = 0,  or  u2 + u3 – 4u4 = – 2.

We solve the system of equations using the Gauss elimination method, with the augmented matrix

    | – 4    1    1    0 | – 2 |
    |   1  – 4    0    1 | – 4 |
    |   1    0  – 4    1 |   0 |
    |   0    1    1  – 4 | – 2 |

Successive eliminations reduce the system to upper triangular form.

The last equation gives u4 = 1. Back substitution then gives u3 = 0.5, u2 = 1.5 and u1 = 1.

Example 5.11 Solve uxx + uyy = 0 numerically under the boundary conditions

    u(x, 0) = 2x,   u(x, 1) = 2x – 1,   u(0, y) = – y,   u(1, y) = 2 – y,

with a square mesh of width h = 1/3.

Solution The mesh is given in Fig. 5.10; the interior nodes are u1, u2 (upper row) and u3, u4 (lower row), and the remaining nodes lie on the boundary.

Fig. 5.10. Mesh for Example 5.11:

         u5    u6
    u7   u1    u2    u8
    u9   u3    u4    u10
         u11   u12

Using the boundary conditions, we get the boundary values as

    u5 = u(1/3, 1) = 2/3 – 1 = – 1/3,    u6 = u(2/3, 1) = 4/3 – 1 = 1/3,
    u7 = u(0, 2/3) = – 2/3,              u8 = u(1, 2/3) = 2 – 2/3 = 4/3,
    u9 = u(0, 1/3) = – 1/3,              u10 = u(1, 1/3) = 2 – 1/3 = 5/3,
    u11 = u(1/3, 0) = 2/3,               u12 = u(2/3, 0) = 4/3.

We use the standard five point formula u_{i+1,j} + u_{i–1,j} + u_{i,j+1} + u_{i,j–1} – 4u_{i,j} = 0, and obtain the following difference equations.

At 1: u2 + u5 + u7 + u3 – 4u1 = 0,  or  – 4u1 + u2 + u3 = 1.
At 2: u8 + u6 + u1 + u4 – 4u2 = 0,  or  u1 – 4u2 + u4 = – 5/3.
At 3: u4 + u1 + u9 + u11 – 4u3 = 0,  or  u1 – 4u3 + u4 = – 1/3.
At 4: u10 + u2 + u3 + u12 – 4u4 = 0,  or  u2 + u3 – 4u4 = – 3.
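The eliminations carried out by hand in these examples can be delegated to a small routine. A sketch (our own generic Gauss elimination with partial pivoting, not a program from the book), applied to the four equations of Example 5.11:

```python
def gauss_solve(A, d):
    """Solve A u = d by Gauss elimination with partial pivoting,
    followed by back substitution (a direct method)."""
    n = len(A)
    M = [row[:] + [di] for row, di in zip(A, d)]   # augmented matrix [A|d]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))   # pivot row
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            m = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= m * M[c][k]
    u = [0.0] * n
    for r in range(n - 1, -1, -1):                 # back substitution
        s = sum(M[r][k] * u[k] for k in range(r + 1, n))
        u[r] = (M[r][n] - s) / M[r][r]
    return u

A = [[-4, 1, 1, 0], [1, -4, 0, 1], [1, 0, -4, 1], [0, 1, 1, -4]]
d = [1.0, -5.0 / 3.0, -1.0 / 3.0, -3.0]
print(gauss_solve(A, d))  # approximately [0, 2/3, 1/3, 1]
```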

We solve the system of equations using the Gauss elimination method, with the augmented matrix

    | – 4    1    1    0 |     1 |
    |   1  – 4    0    1 | – 5/3 |
    |   1    0  – 4    1 | – 1/3 |
    |   0    1    1  – 4 |   – 3 |

Elimination reduces the system to upper triangular form. The last equation gives u4 = 1. Substituting in the third equation, we get u3 = 0.33334. Substituting in the second equation, we get u2 = 0.66667. Substituting in the first equation, we get u1 = 0.25(u2 + u3 – 1) = 0.25(0.66667 + 0.33334 – 1) = 0.

Example 5.12 Solve the boundary value problem for the Poisson equation

    uxx + uyy = x² – 1,   | x | ≤ 1,  | y | ≤ 1,

with u = 0 on the boundary of the square, using the five point formula with square mesh of width h = 1/2.

Solution The mesh is given in Fig. 5.11. The partial differential equation and the boundary conditions are symmetric about the x- and y-axis. Hence, we need to find only the four unknowns u1 (at the origin), u2 (at (± 0.5, 0)), u3 (at (0, ± 0.5)) and u4 (at (± 0.5, ± 0.5)).

Fig. 5.11. Mesh for Example 5.12:

    0   0    0    0   0
    0   u4   u3   u4  0
    0   u2   u1   u2  0
    0   u4   u3   u4  0
    0   0    0    0   0

We use the standard five point formula u_{i+1,j} + u_{i–1,j} + u_{i,j+1} + u_{i,j–1} – 4u_{i,j} = h²G_{i,j} = 0.25(x_i² – 1), and obtain the following difference equations.

At 1 (0, 0): u2 + u2 + u3 + u3 – 4u1 = 0.25(0 – 1),  or  – 2u1 + u2 + u3 = – 0.125.
At 2 (0.5, 0): 0 + u1 + u4 + u4 – 4u2 = 0.25(0.25 – 1),  or  u1 – 4u2 + 2u4 = – 0.1875.
At 3 (0, 0.5): u4 + u4 + 0 + u1 – 4u3 = 0.25(0 – 1),  or  u1 – 4u3 + 2u4 = – 0.25.
At 4 (0.5, 0.5): 0 + u3 + 0 + u2 – 4u4 = 0.25(0.25 – 1),  or  u2 + u3 – 4u4 = – 0.1875.

We solve the system of equations using the Gauss elimination method, with the augmented matrix

    | – 2    1    1    0 | – 0.125  |
    |   1  – 4    0    2 | – 0.1875 |
    |   1    0  – 4    2 | – 0.25   |
    |   0    1    1  – 4 | – 0.1875 |

Elimination reduces the system to upper triangular form. The last equation gives u4 = 0.37500/2.66667 = 0.14062. Substituting in the third equation, we get u3 = 0.19531. Substituting in the second equation, we get u2 = 0.17969. Substituting in the first equation, we get u1 = 0.5(u2 + u3 + 0.125) = 0.5(0.17969 + 0.19531 + 0.125) = 0.25.

Iterative methods We mentioned earlier that when the order of the system of equations is large, which is the case in most practical problems, we use iterative methods; in many practical applications, we encounter thousands of equations. There are many powerful iterative methods available in the computer software, which are variants of the successive over relaxation (SOR) method, the conjugate gradient method, etc. We shall discuss here the implementation of the Gauss-Seidel method for the solution of the system of equations obtained in the application of the finite difference methods. Let us recall the properties of the Gauss-Seidel method.

(a) A sufficient condition for convergence is that the coefficient matrix A of the system of equations is diagonally dominant, which is the case in the present systems.
(b) The method requires an initial approximation to the solution vector u. If no suitable approximation is available, then u = 0 can be taken as the initial approximation.
(c) Using the initial approximations, we update the value of the first unknown u1. Using this updated value of u1 and the initial approximations to the remaining variables, we update the value of u2. We continue until all the values are updated. We repeat the procedure until the required accuracy is obtained.

Liebmann iteration We use the above procedure to compute the solution of the difference equations for the Laplace equation or the Poisson equation. The initial approximations are obtained by judiciously using the standard five point formula (5.25) or the diagonal five point formula (5.28), after setting the values of one or two variables as zero.
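As a sketch of this procedure, the Gauss-Seidel (Liebmann) iteration can be applied to the system of Example 5.12; the helper names, the zero starting vector and the tolerance are our own choices:

```python
def liebmann(update, u0, tol=1e-8, max_iter=500):
    """Gauss-Seidel (Liebmann) iteration: `update` overwrites the solution
    vector in place, always using the latest available values."""
    u = list(u0)
    for _ in range(max_iter):
        old = list(u)
        update(u)
        if max(abs(a - b) for a, b in zip(u, old)) <= tol:
            break
    return u

# Example 5.12: each difference equation solved for its diagonal unknown.
def update(u):
    u[0] = (u[1] + u[2] + 0.125) / 2.0          # -2u1 + u2 + u3 = -0.125
    u[1] = (u[0] + 2.0 * u[3] + 0.1875) / 4.0   # u1 - 4u2 + 2u4 = -0.1875
    u[2] = (u[0] + 2.0 * u[3] + 0.25) / 4.0     # u1 - 4u3 + 2u4 = -0.25
    u[3] = (u[1] + u[2] + 0.1875) / 4.0         # u2 + u3 - 4u4 = -0.1875

print(liebmann(update, [0.0, 0.0, 0.0, 0.0]))
# approaches u1 = 0.25, u2 = 0.17969, u3 = 0.19531, u4 = 0.14062
```

The diagonal dominance of the coefficient matrix guarantees the convergence of this iteration.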

For the Laplace equation and the mesh of Fig. 5.12, the Liebmann iteration using the standard five point formula is

    u_{i,j}^{(k+1)} = (1/4)[u_{i+1,j}^{(k)} + u_{i–1,j}^{(k+1)} + u_{i,j+1}^{(k)} + u_{i,j–1}^{(k+1)}],    (5.32)

where the values at the nodes (i – 1, j) and (i, j – 1) are already updated (when the numbering of the unknowns is from bottom to top). If the numbering is from top to bottom, then the iteration becomes

    u_{i,j}^{(k+1)} = (1/4)[u_{i+1,j}^{(k)} + u_{i–1,j}^{(k+1)} + u_{i,j+1}^{(k+1)} + u_{i,j–1}^{(k)}].

Fig. 5.12. Standard five point formula (nodes used in (5.32)).

If in some problems the five point formulas cannot be used to construct initial approximations (see Example 5.14), then we set the values of the required number of variables as zero.

Stopping criteria for iteration We stop the iterations when the following criterion is satisfied:

    | u_i^{(k+1)} – u_i^{(k)} | ≤ given error tolerance, for all i.    (5.33)

For example, if we want two decimal places accuracy for the solution, then we iterate until the condition

    | u_i^{(k+1)} – u_i^{(k)} | ≤ 0.005, for all i,    (5.34)

is satisfied. Similarly, if we want three decimal places accuracy, we iterate until | u_i^{(k+1)} – u_i^{(k)} | ≤ 0.0005 is satisfied for all i. We illustrate the method through the following problems.

Example 5.13 Solve uxx + uyy = 0 numerically, using the five point formula and Liebmann iteration, for the following mesh with uniform spacing and with boundary conditions as shown in Fig. 5.13. Obtain the results correct to two decimal places.

Fig. 5.13. Mesh for Example 5.13 (the boundary values enter the difference equations below).

Solution We note that the boundary conditions have no symmetry. Therefore, we need to find the values of the four unknowns u1, u2, u3 and u4. The standard five point formula gives the following difference equations.

At 1: u2 + 2 + 2 + u3 – 4u1 = 0,  or  u1 = 0.25(4 + u2 + u3).
At 2: 1 + 3 + u1 + u4 – 4u2 = 0,  or  u2 = 0.25(4 + u1 + u4).
At 3: u4 + u1 + 3 + 0 – 4u3 = 0,  or  u3 = 0.25(3 + u1 + u4).
At 4: 2 + u2 + u3 + 0 – 4u4 = 0,  or  u4 = 0.25(2 + u2 + u3).

Using these equations, we write the iteration procedure as

    u1^{(k+1)} = 0.25(4 + u2^{(k)} + u3^{(k)}),
    u2^{(k+1)} = 0.25(4 + u1^{(k+1)} + u4^{(k)}),
    u3^{(k+1)} = 0.25(3 + u1^{(k+1)} + u4^{(k)}),
    u4^{(k+1)} = 0.25(2 + u2^{(k+1)} + u3^{(k+1)}).

Initial approximations Set u4^{(0)} = 0. Using the diagonal five point formula at the first node, we obtain

    u1^{(0)} = 0.25(0 + 3 + 2 + 3) = 2.

Now we can use the standard five point formula to obtain initial approximations at the nodes 2, 3, 4:

    u2^{(0)} = 0.25(1 + 3 + u1^{(0)} + u4^{(0)}) = 0.25(1 + 3 + 2 + 0) = 1.5,
    u3^{(0)} = 0.25(3 + u1^{(0)} + u4^{(0)}) = 0.25(3 + 2 + 0) = 1.25,
    u4^{(0)} = 0.25(2 + u2^{(0)} + u3^{(0)}) = 0.25(2 + 1.5 + 1.25) = 1.1875.

First iteration
u1^{(1)} = 0.25(4 + u2^{(0)} + u3^{(0)}) = 0.25(4 + 1.5 + 1.25) = 1.6875.
u2^{(1)} = 0.25(4 + u1^{(1)} + u4^{(0)}) = 0.25(4 + 1.6875 + 1.1875) = 1.71875.
u3^{(1)} = 0.25(3 + u1^{(1)} + u4^{(0)}) = 0.25(3 + 1.6875 + 1.1875) = 1.46875.
u4^{(1)} = 0.25(2 + u2^{(1)} + u3^{(1)}) = 0.25(2 + 1.71875 + 1.46875) = 1.29688.
25(2 + u2 + u3). First iteration ( ( ( u11) = 0. Initial approximations Set ( u4 = u40) = 0.. u1 – 4u3 + u4 = – 3. u4 = 0. u3 = 0.5 + 1. ( ( ( u2k + 1) = 0. .25.25(1 + 3 + 2 + 0) = 1. Using the diagonal five point formula at the first node.6875. we can use the standard five point formula to obtain initial approximations at the nodes 2.5. we obtain ( u10) = 0.25 + 0) = 1.BOUNDARY VALUE PROBLEMS IN ORDINARY DIFFERENTIAL EQUATIONS.25 (u40) + u10) + 3 + 0) = 0. 2 2 3 At 3: u4 + u1 + 3 + 0 – 4u3 = 0.13. 265 We obtain the following difference equations.6875 + 1.25 (2 + u20) + u30) + 0) = 0.25(2 + 1. u2 + u3 – 4u4 = – 2.25 (2 + u2k+ 1) + u3k+ 1) ) . or Using these equations.. or At 4: 2 + u2 + u3 + 0 – 4u4 = 0. ( ( ( u21) = 0.25(4 + 1.25(0 + 3 + 2 + 3) = 2. At1: At 2 u2 + 2 + 2 + u3 – 4u1 = 0. 4.25(u4 + 3 + 2 + 3) = 0. ( ( ( u30) = 0.1875.25 (3 + u1k+ 1) + u4k) ) .5 + 1.25 (1 + 3 + u10) + u40) ) = 0.25 (4 + u2k) + u3k) ) .25) = 1. u2 = 0.25(4 + u1 + u4). or – 4u1 + u2 + u3 = – 4. ( ( ( u4k + 1) = 0.25(4 + 1. or u1 – 4u2 + u4 = – 4.25(4 + u2 + u3). 1 + 3 + u1 + u4 – 4u2 = 0.13. ( ( ( u40) = 0. Example 5.25 (4 + u11) + u40) ) = 0.1875) = 1. ( ( ( u3k + 1) = 0. 3.25 (4 + u20) + u30) ) = 0. and write the iteration procedure as ( ( ( u1k + 1) = 0. 2 u1 u2 1 3 u3 u4 2 0 0 Fig.25(3 + u1 + u4).25 (4 + u1k+ 1) + u4k) ) . we write the difference equations at the grid points as u1 = 0. We obtain ( ( ( u20) = 0. Now. 5.71875.25(0 + 2 + 3 + 0) = 1.

Second iteration
u1^{(2)} = 0.25(4 + 1.71875 + 1.46875) = 1.79688.
u2^{(2)} = 0.25(4 + 1.79688 + 1.29688) = 1.77344.
u3^{(2)} = 0.25(3 + 1.79688 + 1.29688) = 1.52344.
u4^{(2)} = 0.25(2 + 1.77344 + 1.52344) = 1.32422.

Third iteration
u1^{(3)} = 0.25(4 + 1.77344 + 1.52344) = 1.82422.
u2^{(3)} = 0.25(4 + 1.82422 + 1.32422) = 1.78711.
u3^{(3)} = 0.25(3 + 1.82422 + 1.32422) = 1.53711.
u4^{(3)} = 0.25(2 + 1.78711 + 1.53711) = 1.33106.

Fourth iteration
u1^{(4)} = 0.25(4 + 1.78711 + 1.53711) = 1.83106.
u2^{(4)} = 0.25(4 + 1.83106 + 1.33106) = 1.79053.
u3^{(4)} = 0.25(3 + 1.83106 + 1.33106) = 1.54053.
u4^{(4)} = 0.25(2 + 1.79053 + 1.54053) = 1.33277.

Fifth iteration
u1^{(5)} = 0.25(4 + 1.79053 + 1.54053) = 1.83277.
u2^{(5)} = 0.25(4 + 1.83277 + 1.33277) = 1.79139.
u3^{(5)} = 0.25(3 + 1.83277 + 1.33277) = 1.54139.
u4^{(5)} = 0.25(2 + 1.79139 + 1.54139) = 1.33320.

At this stage, the magnitudes of the errors in the successive iterations are

    | u1^{(5)} – u1^{(4)} | = | 1.83277 – 1.83106 | = 0.00171,
    | u2^{(5)} – u2^{(4)} | = | 1.79139 – 1.79053 | = 0.00086,
    | u3^{(5)} – u3^{(4)} | = | 1.54139 – 1.54053 | = 0.00086,
    | u4^{(5)} – u4^{(4)} | = | 1.33320 – 1.33277 | = 0.00043.

All the errors are < 0.005. Hence, the fifth iteration values are correct to two decimal places. We take these values as the required solutions.

Example 5.14 Solve the boundary value problem uxx + uyy = x + y + 1, 0 ≤ x ≤ 1, 0 ≤ y ≤ 1, with u = 0 on the boundary, numerically using the five point formula and Liebmann iteration, with mesh length h = 1/3. Obtain the results correct to three decimal places.

Solution The mesh is given in Fig. 5.14. We note that all the boundary values are zero.

Fig. 5.14. Mesh for Example 5.14:

    0   0   0   0
    0   u1  u2  0
    0   u3  u4  0
    0   0   0   0

with u1 at (1/3, 2/3), u2 at (2/3, 2/3), u3 at (1/3, 1/3) and u4 at (2/3, 1/3). There is symmetry with respect to the line y = x. Hence, u1 = u4, and we need to find the values of the three unknowns u1, u2 and u3. We use the standard five point formula

    u_{i+1,j} + u_{i–1,j} + u_{i,j+1} + u_{i,j–1} – 4u_{i,j} = h²G_{i,j} = (1/9)(x_i + y_j + 1),

and obtain the following difference equations.

At 1: u2 + 0 + 0 + u3 – 4u1 = (1/9)(1/3 + 2/3 + 1) = 2/9,  or  – 4u1 + u2 + u3 = 2/9.
At 2: 0 + 0 + u1 + u4 – 4u2 = (1/9)(2/3 + 2/3 + 1) = 7/27,  or  2u1 – 4u2 = 7/27.
At 3: u4 + u1 + 0 + 0 – 4u3 = (1/9)(1/3 + 1/3 + 1) = 5/27,  or  2u1 – 4u3 = 5/27.

Using these equations, we write the iteration procedure as

    u1^{(k+1)} = 0.25(u2^{(k)} + u3^{(k)} – 0.222222),
    u2^{(k+1)} = 0.25(2u1^{(k+1)} – 0.259259),
    u3^{(k+1)} = 0.25(2u1^{(k+1)} – 0.185185).

Initial approximations Since the boundary values are all zero, we cannot use the standard or diagonal five point formulas to obtain the initial approximations. Hence, we assume u2^{(0)} = u3^{(0)} = 0.

First iteration
u1^{(1)} = 0.25(u2^{(0)} + u3^{(0)} – 0.222222) = 0.25(– 0.222222) = – 0.055556.
u2^{(1)} = 0.25(2u1^{(1)} – 0.259259) = 0.25(– 0.111112 – 0.259259) = – 0.092593.
u3^{(1)} = 0.25(2u1^{(1)} – 0.185185) = 0.25(– 0.111112 – 0.185185) = – 0.074074.

Second iteration
u1^{(2)} = 0.25(– 0.092593 – 0.074074 – 0.222222) = – 0.097222.
u2^{(2)} = 0.25(– 0.194444 – 0.259259) = – 0.113426.
u3^{(2)} = 0.25(– 0.194444 – 0.185185) = – 0.094907.

Third iteration
u1^{(3)} = 0.25(– 0.113426 – 0.094907 – 0.222222) = – 0.107639.
u2^{(3)} = 0.25(– 0.215278 – 0.259259) = – 0.118634.
u3^{(3)} = 0.25(– 0.215278 – 0.185185) = – 0.100116.

Fourth iteration
u1^{(4)} = 0.25(– 0.118634 – 0.100116 – 0.222222) = – 0.110243.
u2^{(4)} = 0.25(– 0.220486 – 0.259259) = – 0.119936.
u3^{(4)} = 0.25(– 0.220486 – 0.185185) = – 0.101418.

Fifth iteration
u1^{(5)} = 0.25(– 0.119936 – 0.101418 – 0.222222) = – 0.110894.
u2^{(5)} = 0.25(– 0.221788 – 0.259259) = – 0.120262.
u3^{(5)} = 0.25(– 0.221788 – 0.185185) = – 0.101740.

Sixth iteration
u1^{(6)} = 0.25(– 0.120262 – 0.101740 – 0.222222) = – 0.111056.
u2^{(6)} = 0.25(– 0.222112 – 0.259259) = – 0.120343.
u3^{(6)} = 0.25(– 0.222112 – 0.185185) = – 0.101824.
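The limits these iterations approach can be confirmed exactly with rational arithmetic, by eliminating u2 and u3 from the three reduced equations (a sketch; the algebra follows the equations above):

```python
from fractions import Fraction as F

# -4u1 + u2 + u3 = 2/9,  2u1 - 4u2 = 7/27,  2u1 - 4u3 = 5/27.
# With u2 = (2u1 - 7/27)/4 and u3 = (2u1 - 5/27)/4, the first equation
# becomes -4u1 + u1 - 1/9 = 2/9, that is, -3u1 = 2/9 + 1/9.
u1 = -(F(2, 9) + F(1, 9)) / 3
u2 = (2 * u1 - F(7, 27)) / 4
u3 = (2 * u1 - F(5, 27)) / 4
print(u1, u2, u3)  # -1/9 -13/108 -11/108
print(float(u1), float(u2), float(u3))
```

The decimal values – 0.1111, – 0.1204 and – 0.1019 agree with the sixth iteration to three decimal places.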

At this stage, the magnitudes of the errors in the successive iterations are

|u1(6) - u1(5)| = |- 0.111056 + 0.110894| = 0.000162,
|u2(6) - u2(5)| = |- 0.120343 + 0.120262| = 0.000081,
|u3(6) - u3(5)| = |- 0.101824 + 0.101740| = 0.000084.

All the errors are < 0.0005. Hence, the sixth iteration values are correct to three decimal places. We take these values as the required solutions.

Example 5.15 Using the Liebmann method, solve the equation uxx + uyy = 0 for the following square mesh with boundary values as shown in the figure. Iterate until the maximum difference between successive values at any point is less than 0.001.

[Fig. 5.15: square ABCD; interior nodes u1, u2 (top row), u3, u4 (bottom row); boundary values 1, 2 along DC, 4, 5 along AB, 1, 2 down side DA and 4, 5 down side CB.]

Solution The mesh is given in Fig. 5.15. The partial differential equation and the boundary values are symmetric with respect to the line BD. Hence, u2 = u3, and we need to find the values of only the three unknowns u1, u2, u4. We use the standard five point formula ui+1,j + ui-1,j + ui,j+1 + ui,j-1 - 4ui,j = 0, and obtain the following difference equations.

At 1: u2 + 1 + 1 + u3 - 4u1 = 0, or - 2u1 + u2 = - 1, that is, u1 = 0.5(1 + u2).
At 2: 4 + 2 + u1 + u4 - 4u2 = 0, or u1 - 4u2 + u4 = - 6, that is, u2 = 0.25(6 + u1 + u4).
At 4: 5 + u2 + u3 + 5 - 4u4 = 0, or u2 - 2u4 = - 5, that is, u4 = 0.5(5 + u2).

Using these equations, we write the iteration procedure as

u1(k+1) = 0.5(1 + u2(k)),
u2(k+1) = 0.25(6 + u1(k+1) + u4(k)),
u4(k+1) = 0.5(5 + u2(k+1)).

Initial approximations Since the value at the corner point D is not given, we cannot use the diagonal five point formula; we need to use the standard five point difference formula. Setting u2 = 0 and u3 = 0, we obtain

u1(0) = 0.25(0 + 1 + 1 + 0) = 0.5, u4(0) = 0.25(5 + 0 + 0 + 5) = 2.5.
We can update u2 also and take the initial approximation as

u2(0) = 0.25(4 + 2 + u1(0) + u4(0)) = 0.25(6 + 0.5 + 2.5) = 2.25.

Otherwise, u1(1) becomes the same as u1(0).
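Before tracing the iterations by hand, the procedure can be sketched in code. The function below follows the three iteration equations above (with u3 = u2 eliminated by symmetry); the function name and the stopping test are illustrative additions, not from the text.

```python
# Liebmann iteration for Example 5.15: Laplace's equation on the square
# mesh, unknowns u1, u2, u4. The loop stops when the maximum change
# between successive iterates falls below the tolerance 0.001.
def liebmann_laplace(tol=0.001, max_iter=50):
    u1, u2, u4 = 0.5, 2.25, 2.5            # initial approximations above
    for _ in range(max_iter):
        u1_new = 0.5 * (1.0 + u2)
        u2_new = 0.25 * (6.0 + u1_new + u4)
        u4_new = 0.5 * (5.0 + u2_new)
        diff = max(abs(u1_new - u1), abs(u2_new - u2), abs(u4_new - u4))
        u1, u2, u4 = u1_new, u2_new, u4_new
        if diff < tol:
            break
    return u1, u2, u4
```

The iterates approach the exact solution of the linear system, u1 = 2, u2 = u3 = 3, u4 = 4.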

First iteration
u1(1) = 0.5(1 + u2(0)) = 0.5(1 + 2.25) = 1.625,
u2(1) = 0.25(6 + u1(1) + u4(0)) = 0.25(6 + 1.625 + 2.5) = 2.53125,
u4(1) = 0.5(5 + u2(1)) = 0.5(5 + 2.53125) = 3.76563.

Second iteration
u1(2) = 0.5(1 + 2.53125) = 1.76563,
u2(2) = 0.25(6 + 1.76563 + 3.76563) = 2.88282,
u4(2) = 0.5(5 + 2.88282) = 3.94141.

Third iteration
u1(3) = 0.5(1 + 2.88282) = 1.94141,
u2(3) = 0.25(6 + 1.94141 + 3.94141) = 2.97070,
u4(3) = 0.5(5 + 2.97070) = 3.98535.

Fourth iteration
u1(4) = 0.5(1 + 2.97070) = 1.98535,
u2(4) = 0.25(6 + 1.98535 + 3.98535) = 2.99268,
u4(4) = 0.5(5 + 2.99268) = 3.99634.

Fifth iteration
u1(5) = 0.5(1 + 2.99268) = 1.99634,
u2(5) = 0.25(6 + 1.99634 + 3.99634) = 2.99817,
u4(5) = 0.5(5 + 2.99817) = 3.99909.

Sixth iteration
u1(6) = 0.5(1 + 2.99817) = 1.99909,
u2(6) = 0.25(6 + 1.99909 + 3.99909) = 2.99955,
u4(6) = 0.5(5 + 2.99955) = 3.99977.

Seventh iteration
u1(7) = 0.5(1 + u2(6)) = 0.5(1 + 2.99955) = 1.99978,
u2(7) = 0.25(6 + u1(7) + u4(6)) = 0.25(6 + 1.99978 + 3.99977) = 2.99989,
u4(7) = 0.5(5 + u2(7)) = 0.5(5 + 2.99989) = 3.99994.

At this stage, the magnitudes of the errors in the successive iterations are

|u1(7) - u1(6)| = |1.99978 - 1.99909| = 0.00069,
|u2(7) - u2(6)| = |2.99989 - 2.99955| = 0.00034,
|u4(7) - u4(6)| = |3.99994 - 3.99977| = 0.00017.

All the errors are < 0.001. Hence, the seventh iteration values are taken as the required solutions: u1 ≈ 1.99978, u2 = u3 ≈ 2.99989, u4 ≈ 3.99994.

REVIEW QUESTIONS

1. Write the general linear second order partial differential equation in two variables.
Solution Auxx + 2Buxy + Cuyy + Dux + Euy + Fu + G = 0, where A, B, C, D, E, F, G are functions of x, y.

2. When is the linear second order partial differential equation Auxx + 2Buxy + Cuyy + Dux + Euy + Fu + G = 0 called an elliptic or hyperbolic or parabolic equation?
Solution The given linear second order partial differential equation is called (i) an elliptic equation when B2 - AC < 0, (ii) a hyperbolic equation when B2 - AC > 0, and (iii) a parabolic equation when B2 - AC = 0.

3. Write the Laplace equation in two dimensions.
Solution uxx + uyy = 0.

4. Write the Poisson equation in two dimensions.
Solution uxx + uyy = G(x, y).

5. Write the standard five point formula for the solution of (i) Laplace's equation uxx + uyy = 0, (ii) Poisson equation uxx + uyy = G(x, y), for uniform mesh spacing h.
Solution
(i) ui+1,j + ui-1,j + ui,j+1 + ui,j-1 - 4ui,j = 0,
(ii) ui+1,j + ui-1,j + ui,j+1 + ui,j-1 - 4ui,j = h2 Gi,j.

j = 0. j 7.. j+1 + ui+1. j + ui. (ii) ui+1. j+1 + ui–1. What is the order and truncation error of the standard five point formula for the solution of Laplace’s equation uxx + uyy = 0. j+1 + ui–1. j = 0. (ii) Iterative methods like Gauss-Jacobi method or Gauss-Seidel method can be used when the system of equations is large. j = 1 (u + ui–1. j = 1 (u + ui–1. j+1 . j–1 + ui–1. or O(h2). j . Solution (i) Direct methods like Gauss elimination method or Gauss-Jordan method can be used when the system of equations is small. Order = 2. j–1 – 4ui. The initial approximations are obtained by judiciously using the standard five point formula ui. When do we normally use the diagonal five point formula while finding the solution of Laplace or Poisson equation? Solution We use the diagonal five point formula to obtain initial approximations for the solutions to start an iterative procedure like Liebmann iteration. j–1 – 4ui. j–1 + ui–1.. i. j or the diagonal five point formula ui. j+1 + ui. j+1 + ui. j + ui. What is the order and truncation error of the diagonal five point formula for the solution of Laplace’s equation uxx + uyy = 0. T.. j+1 + ui+1. j+1 + ui+1. Finite difference methods when applied to Laplace equation or Poisson equation give rise to a system of algebraic equations Au = d. When do we use the Liebmann method? Solution We use the Liebmann method to compute the solution of the difference equations for the Laplace’s equation or the Poisson equation. j–1 + ui–1. with uniform mesh spacing ? Solution The method is Order = 2. Name the types of methods that are available for solving these systems. h4 6 F∂ u + 6 ∂ u G ∂x ∂x ∂y H 4 4 4 2 2 + ∂4u ∂y 4 I JK + . j+1 + ui+1. j–1) 4 i+1. j+1 + ui–1. j–1 + ui–1. j–1 – 4ui. or O(h2).E = ui+1.E = h4 ∂ 4 u ∂ 4 u + 12 ∂x 4 ∂y 4 F G H I JK + . NUMERICAL METHODS 6.272 Solution (i) ui+1. j = 2h2Gi. j + ui–1. i. j–1) 4 i+1.. with uniform mesh spacing? Solution The method is ui+1. T. 10. j 8. j–1 – 4ui. 9. j = 0.

11. What is the condition of convergence for the system of equations obtained, when we apply finite difference methods for Laplace's or Poisson equation?
Solution A sufficient condition for convergence of the system of equations is that the coefficient matrix A of the system of equations Au = d is diagonally dominant. Since the condition is sufficient and not necessary, convergence may be obtained even if A is not diagonally dominant.

12. What is the importance of the order of a finite difference method?
Solution When a method converges, it implies that the errors in the numerical solutions tend to 0 as h tends to 0. Suppose that a method is of order O(h2). Then, if we reduce the step length h by a factor, say 2, and re-compute the numerical solution using the step length h/2, the error becomes O[(h/2)2] = [O(h2)]/4. That is, the errors in the numerical solutions are reduced by a factor of 4. This can easily be checked at the common points between the two meshes.

EXERCISE 5.3

1.-4. Find the solution of the Laplace equation uxx + uyy = 0 in the given region R, subject to the boundary conditions shown in the figures, using the standard five point formula.
[Fig. 5.16. Problem 1; Fig. 5.17. Problem 2; Fig. 5.18. Problem 3; Fig. 5.19. Problem 4: square meshes with four interior nodes u1, u2, u3, u4 and the prescribed boundary values.]

5. In Problems 2, 3 and 4, solve the resulting system of equations using the Liebmann iteration. For the initial approximations, in Problem 2 take the value at the top left hand point as 0, in Problem 3 take it as - 2, and in Problem 4 take it as 300. Perform four iterations in each case.

Find the solution of the Poisson equation uxx + uyy = G(x, y) in the region R, subject to the given boundary conditions:

6. G(x, y) = 4, u(x, y) = 0 on the boundary, R : 0 <= x <= 1, 0 <= y <= 1, h = 1/3.
7. G(x, y) = 3x + 2y, u(x, y) = x - y on the boundary, R : 0 <= x <= 1, 0 <= y <= 1, h = 1/3.
8. G(x, y) = x2 + y2, u(x, y) = x2 + y2 on the boundary, where R is a square of side 1 unit; h = 1/3.
9. G(x, y) = x2 + y2, u(x, y) = x - y on the boundary, where R is a square of side 3 units; h = 1.
10. Solve the Laplace equation uxx + uyy = 0 in R : 0 <= x <= 3, 0 <= y <= 3, with u(x, 0) = x, u(x, 3) = 2x, u(0, y) = 0, u(3, y) = 3 + y. Assume the step length as h = 1.

5.5 FINITE DIFFERENCE METHOD FOR HEAT CONDUCTION EQUATION

In section 5.3, we have defined the linear second order partial differential equation Auxx + 2Buxy + Cuyy + Dux + Euy + Fu + G = 0 as a parabolic equation if B2 - AC = 0. A parabolic equation holds in an open domain or in a semi-open domain. A parabolic equation together with the associated conditions is called an initial value problem or an initial-boundary value problem. The simplest example of a parabolic equation is the following problem.

Consider a thin homogeneous, insulated bar or a wire of length l. Let the bar be located on the x-axis on the interval [0, l]. Let the rod have a source of heat; for example, the rod may be heated at one end or at the middle point. Let u(x, t) denote the temperature in the rod at any instant of time t. The problem is to study the flow of heat in the rod. The partial differential equation governing the flow of heat in the rod is given by the parabolic equation

ut = c2 uxx, 0 <= x <= l, t > 0,    (5.35)

where c2 is a constant which depends on the material properties of the rod. In order that the solution of the problem exists and is unique, we need to prescribe the following conditions.
(i) Initial condition At time t = 0, the temperature is prescribed, u(x, 0) = f(x), 0 <= x <= l.
(ii) Boundary conditions Since the bar is of length l, boundary conditions at x = 0 and at x = l are to be prescribed. These conditions are of the following types:

(a) The temperatures at the ends of the bar are prescribed,

u(0, t) = g(t), u(l, t) = h(t), t > 0.    (5.36)

(b) One end of the bar, say at x = 0, is insulated. This implies the condition that

du/dx = 0 at x = 0, for all time t.

At the other end, the temperature may be prescribed, u(l, t) = h(t), t > 0. Alternatively, we may have the condition that the end of the bar at x = l is insulated. Since both initial and boundary conditions are prescribed, the problem is also called an initial boundary value problem. For our discussion, we shall consider only the boundary conditions given in (5.36).

Mesh generation Superimpose on the domain 0 <= x <= l, t > 0, a rectangular network of mesh lines. Let the interval [0, l] be divided into M equal parts. Then, the mesh length along the x-axis is h = l/M, and the points along the x-axis are xi = ih, i = 0, 1, 2, ..., M. Let the mesh length along the t-axis be k, and define tj = jk. The mesh points are (xi, tj). We call tj the jth time level (see Fig. 5.20). [Fig. 5.20: rectangular mesh; the horizontal lines are the time levels j - 1, j, j + 1, with level 0 on the x-axis.] At any point (xi, tj), we denote the numerical solution by ui,j and the exact solution by u(xi, tj).

Remark 9 Finite difference methods are classified into two categories: explicit methods and implicit methods. In explicit methods, the solution at each nodal point on the current time level is obtained by simple computations (additions, subtractions, multiplications and divisions) using the solutions at the previous one or more levels. In implicit methods, we solve a linear system of algebraic equations for all the unknowns on any mesh line t = tj+1.

Explicit methods In Chapter 2, we have derived the relationships between the derivatives and forward differences. Denote Dt as the forward difference in the t-direction. Then, we can write

(du/dt)i,j = (1/k)[log(1 + Dt)] u = (1/k)[Dt - (1/2)Dt^2 + ...] u.    (5.37)

Now, use the approximation

(du/dt)i,j ~ (1/k) Dt ui,j = (1/k)[ui,j+1 - ui,j].    (5.38)

Let us derive a few methods.
When a method uses the nodal values on two time levels tj and tj+1, it is called a two level formula. When a method uses the nodal values on the three time levels tj-1, tj and tj+1, it is called a three level formula.

t j ) = Fig. j+1 = λui–1.. k h or or or ui.5.. Truncation error of the Schmidt method We have the method as ui. tj ) – 2u(xi . t j + k) − u( xi . j+1 at the node (xi. j + λui+1. j ] ui.21.. is 1 c2 [ ui . The nodes that are used in the computations are given in Fig... j + ui− 1... j + 1 Level j + 1 i – 1. j = λ [ u i +1.276 Using central differences. tj) – 2u(xi. The truncation error is given by T. tj) + u(xi–1. Note that the value ui. 2 2 ∂t ∂t ∂x 2 12 ∂x 4 OP Q LM N OP Q . we also have the approximation NUMERICAL METHODS F ∂ uI G ∂x J H K 2 2 ≈ i.40) where λ = kc2/h2.. we obtain the left hand and right hand sides as u( xi .. i. It is a two level method. j − 2ui. − u = k U V W OP L PQ MN ∂u k 2 ∂ 2 u + + .. Expanding in Taylor’s series.. j Level j λ [u(xi+1. j = λ [ui+ 1. (5. tj + k) – u(xi .21.U − 2u + Ru − h ∂u + h MS ∂x 2 ∂x 6 ∂x V S ∂x 2 MT W T N Lh ∂ u + h ∂ u + . This method is called the Schmidt method. tj) – λ[u(xi+1. tj )] = kc 2 h2 2 2 3 2 LRu + k ∂u + k MS ∂ t 2 MT N 3 3 2 ∂ 2u ∂t 2 + . j − 2ui .35) at the point (xi .. j + u i −1. j (5. j ] .E = u(xi . j+1 – ui. tj+1). j+1 = ui. j ] . j − 2u i . j = x 1 h2 [ui + 1. 6 ∂x 3 2 2 4 2 4 2 2 4 L M N ∂u k2 ∂ 2 u ∂ 2u h2 ∂ 4u + + . j ] = 2 [ u i +1... Schmidt method. − kc 2 + + . j 1 h 2 δ 2 ui. j + λ [ui+ 1..OP M ∂x 12 ∂x P N ∂x 12 ∂x Q N Q 2 2 4 4 2 ∂ 2u ∂x 2 − h3 ∂3u + .O = kc LM ∂ u + h ∂ u + . j − 2ui. tj ) + u(xi–1. j + (1 – 2λ)ui. j + ui− 1. j+1 – ui. j ] ui. tj+1) is being obtained explicitly using the values on the previous time level tj . 2 ∂t 2 ∂t OP Q UOP VP WQ = kc 2 h2 where all the terms on the right hand sides are evaluated at (xi . is called the mesh ratio parameter. tj). j i + 1. j + u i −1. j − 2ui. j +1 − u i .. an approximation to the heat conduction equation (5. tj)] = k LRu + h ∂u + h ∂ u + h ∂ u + . j i. j + ui − 1.39) Therefore. j ] . 5.

The order of the method is given by

order = (1/k)(T.E) = O(k + h2).

Remark 10 For a fixed value of lambda, the values of h and k are reduced such that the value of lambda is always the same, that is, lambda = kc2/h2 = fixed, so that k = lambda h2/c2, or k = O(h2). Hence, for a fixed value of lambda, the method is of order O(h2).

Now, using the differential equation,

d2u/dt2 = d/dt (c2 d2u/dx2) = c2 d2/dx2 (du/dt) = c4 d4u/dx4,

we obtain from (5.41)

T.E = (k2c4/2) d4u/dx4 - (kh2c2/12) d4u/dx4 + ... = (kh2c2/12)(6 lambda - 1) d4u/dx4 + ...    (5.42)

Remark 11 For lambda = 1/2, the Schmidt method simplifies to

ui,j+1 = (1/2)(ui-1,j + ui+1,j).    (5.43)

This method is also called the Bender-Schmidt method, and it is also of order O(h2) for a fixed lambda.

Remark 13 For the solution of a boundary value problem (for the Laplace or Poisson equations), convergence of the system of equations is important; we have noted that a sufficient condition for the convergence of the iteration methods is that the coefficient matrix is diagonally dominant. In the solution of an initial value problem, time t plays the important role: since t > 0, theoretically we are performing infinite cycles of computation, and it is stability of the numerical computations that plays the important role. Stability means that the cumulative effect of all errors (round-off and other numerical errors) tends to 0 as the computation progresses along the t-axis. Analysis of the Schmidt method gives that the method is stable if

lambda = kc2/h2 <= 1/2.    (5.44)
Remark 12 For lambda = 1/6, the leading term in the error expression given in (5.42) vanishes. Hence, for the fixed value lambda = 1/6, the truncation error is of the order O(k3 + kh4), and the method is of order O(k2 + h4), that is, of order O(h4) for this fixed lambda (see Remark 10). The higher order method is given by

ui,j+1 = (1/6)(ui-1,j + 4ui,j + ui+1,j).    (5.45)
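The stability bound (5.44) is easy to observe experimentally. The following sketch runs the explicit scheme with lambda just below and just above 1/2; the mesh size, step counts and function name are illustrative choices, not from the text.

```python
import math

# Explicit scheme u_{i,j+1} = lam*u_{i-1,j} + (1-2*lam)*u_{i,j} + lam*u_{i+1,j}
# for u_t = u_xx with zero boundary values and u(x, 0) = sin(pi x).
# For lam <= 1/2 the computed solution decays, as the exact solution does;
# for lam > 1/2 high-frequency round-off components are amplified each step.
def explicit_heat_max(lam, n, steps):
    u = [math.sin(math.pi * i / n) for i in range(n + 1)]
    for _ in range(steps):
        u = [0.0] + [lam * u[i - 1] + (1.0 - 2.0 * lam) * u[i]
                     + lam * u[i + 1] for i in range(1, n)] + [0.0]
    return max(abs(v) for v in u)

stable = explicit_heat_max(0.4, 10, 200)     # lam <= 1/2: stays small
unstable = explicit_heat_max(0.6, 10, 200)   # lam > 1/2: grows without bound
```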

u(0. t > 0 give the solutions at all the nodal points on the boundary lines x = 0 and x = l . j = 0.5. Compute with (i) λ = 1/2 for two time steps. is also stable. u3. u(l.433013. Alternately. (called boundary points). Example. upto t = 1/9. 0) = f(x) gives the solution at all the nodal points on the initial line (level 0).5(u0. 2. with u(x. then we have computed the solutions up to time tm = mk.278 NUMERICAL METHODS Note that the Bender-Schmidt method uses the value λ = 1/2. for all j. 1 . 0 2 . 0 ≤ x ≤ 1. t) = u(1. 0J = u H3 K 1. H 3K 2 F 2π I 3 = 0.44).22). j = 0.5(u1.16. 0 ≤ x ≤ 1. t) = 0 using the Schmidt method. Example 5. . Let us illustrate the method through some problems. t The initial condition gives the values u FG 1 . (ii) λ = 1/4 for four time steps. compare the solutions at time t = 1/9. i = 1.45).433013.16 Solve the heat conduction equation ut = uxx . 1 = 0. j u1. 0) = 0. If the exact solution is u(x.22. = sin G J = H3K 2 = sin 0 1/3 2/3 1 x The boundary conditions give the values u0. We choose a value for λ and h. for all time levels.866025) = 0. we get the method ui. we may choose the values for h and k. This gives the value of the time step length k. (i) We have λ = 1/2. t) = h(t).0 FG π IJ = 3 . For j = 0: i = 1: i = 2: Fig. t) = g(t).0 + u2. The solutions at all nodal points. j = 0.5(0. The computations are repeated for the required number of steps. If we perform m steps of computation. 5. Computational procedure The initial condition u(x. For λ = 1/2. From the condition (5. h = 1/3. that is.866025 + 0) = 0. which uses the value λ = 1/6. t) = exp(– π2 t) sin (π x). u2. The boundary conditions u(0. 0IJ = u H3 K F2 I u G .5(0 + 0. (iii) λ = 1/6 for six time steps. on level 1 are obtained using the explicit method. The computations are to be done for two time steps. j + (1 – 2λ)ui. Hence. 2 i–1. j + λui+1. (called interior points). 1 = 0. j ). k = λh 2 = 1/18. we find that the higher order method (5.0 + u3. 
We have to find the solution at the two interior points. 5. Assume h = 1/3. j We are given h = 1/3.866025. j+1 = We have the following values. 0) = sin (π x). j+1 = λui–1. Solution The Schmidt method is given by ui. 1 (u + ui+1. 0) = 0. we have four nodes on each mesh line (see Fig.

3 + 2u2. j = 0.2 = FG 1 . j). FG 1 . upto t = 1/9. that is. k = λh2 = 1/54. u1.1 = 0.365354.1 + 2u1. 4.4 = 0.2 + 2u1. 279 For j = 1: i = 1: i = 2: u1. j+1 = We have the following values. we get the method ui. Hence.25(u1. j ).721688) + 0] = 0.0 + 2u2. that is. 2. For j = 0: i = 1: i = 2: For j = 1: i = 1: i = 2: u1.721688)] = 0.649519.0) = 0.3 = 0.2) = 0. 1 + u2.601407. k = λh2 = 1/36.433013) = 0.1) = [0 + 5(0.649519. H 3 9K H 3 9K 1 (u + 4ui. 2.25[3(0. u2.5(0 + 0.25(u1.866025)] = 0.25(u0. h = 1/3.866025) + 0] = 0.5(0.487139) + 0] = 0.25(u0. 1IJ = u FG 2 .1 + u3.1 + u2. 3.25[0 + 3(0. u2. 1.0 + 2u1. u1. For λ = 1/6. 2 = 0. 3 .487139)] = 0.649519)] = 0.0 + u3.25[3(0.0 + u3.1) = 0.1) = [5(0. 2. 2 = 0. j = 0. i = 1.3 + u2. H 3 9K H 3 9K 1 (u + 2ui.721688.2 = 0.25(u1.25(u0. Hence. For j = 0: i = 1: i = 2: For j = 1: i = 1: i = 2: For j = 2: i = 1: i = 2: For j = 3: i = 1: i = 2: u1. 1IJ ≈ 0.5(u1.1 = u1.0 6 1 1 (u + 4u2. 1IJ ≈ 0. upto t = 1/9. j + ui+1.3) = 0. we get the method ui.866025) + 0] = 0. 6 i–1.216507.5(u0. i = 1. j After four steps t = 4k = 1/9. The computations are to be done for four time steps.1 + u3.0 + u2. The computations are to be done for six time steps.25[3(0. 2.487139.25(u1.1) = 0.25[0 + 3(0. j 1 1 (u + 4u1.216507. 4 i–1. 5 .2 + 2u2.0) = 0.1) = 0.2 + u3.3 = 0. 6 1.1) = 0. u1.365354)] = 0.433013 + 0) = 0.1 + u3.649519) + 0] = 0.25[3(0.1 + 4u1.1 + u2. 1.0) = [0 + 5(0. u (iii) We have λ = 1/6.2 + u2.1 + 2u2.866025)] = 0.25[0 + 3(0..3) = 0.0 + u2.216507. u2. After two steps t = 2k = 1/9.0) = [5(0.274016 .365354) + 0] = 0.365354.25(u0..721688.1 + 4u2.274016.601407. h = 1/3. u (ii) We have λ = 1/4. 6 6 .25[0 + 3(0. j + ui+1.1 = u2.4 = 0.3 + u3.487139. u2. 6 0.274016. For λ = 1/4. u2. j+1 = We have the following values.1 = 0. 6 6 1 1 (u1. 1IJ = u FG 2 .2 = u2.3 + 2u1.2) = 0.BOUNDARY VALUE PROBLEMS IN ORDINARY DIFFERENTIAL EQUATIONS.0 6 1 1 (u0.2 = 0.

For j = 2: u1,3 = (1/6)[0 + 5(0.601407)] = 0.501173 = u2,3.
For j = 3: u1,4 = (1/6)[0 + 5(0.501173)] = 0.417644 = u2,4.
For j = 4: u1,5 = (1/6)[0 + 5(0.417644)] = 0.348037 = u2,5.
For j = 5: u1,6 = (1/6)[0 + 5(0.348037)] = 0.290031 = u2,6.

After six steps, t = 6k = 1/9. Hence, u(1/3, 1/9) ~ 0.290031 ~ u(2/3, 1/9).

The exact solution at t = 1/9 is

u(1/3, 1/9) = exp(- pi^2/9) sin(pi/3) ~ 0.289250 = u(2/3, 1/9).

The magnitudes of the errors at x = 1/3 and at x = 2/3 are the same:

lambda = 1/2 : |0.216507 - 0.289250| = 0.072743,
lambda = 1/4 : |0.274016 - 0.289250| = 0.015234,
lambda = 1/6 : |0.290031 - 0.289250| = 0.000781.

We note that the higher order method (lambda = 1/6) produced much better results.

Example 5.17 Solve uxx = 32ut, 0 <= x <= 1, taking h = 0.25, with u(x, 0) = 0, 0 <= x <= 1, u(0, t) = 0 and u(1, t) = t, t > 0. Use an explicit method with lambda = 1/2. Compute for four time steps.

Solution The given partial differential equation is ut = (1/32) uxx, so that c2 = 1/32.

The step length is h = 0.25. We have

k = lambda h2/c2 = (1/2)(1/16)(32) = 1.

The mesh points along the t-axis are tj = jk = j, and we have five nodal points on each mesh line (see Fig. 5.23); we are to find the solutions at the three internal points. [Fig. 5.23: nodes at x = 0, 0.25, 0.5, 0.75, 1.]

The initial condition gives the values u0,0 = u1,0 = u2,0 = u3,0 = u4,0 = 0. The boundary conditions give the values u0,j = 0 and u4,j = tj = j, for all j. For lambda = 1/2, the method becomes

ui,j+1 = 0.5(ui-1,j + ui+1,j), i = 1, 2, 3.

We obtain the following solutions.

For j = 0: u1,1 = 0.5(u0,0 + u2,0) = 0, u2,1 = 0.5(u1,0 + u3,0) = 0, u3,1 = 0.5(u2,0 + u4,0) = 0.
For j = 1: u1,2 = 0.5(u0,1 + u2,1) = 0, u2,2 = 0.5(u1,1 + u3,1) = 0, u3,2 = 0.5(u2,1 + u4,1) = 0.5(0 + 1) = 0.5.
For j = 2: u1,3 = 0.5(u0,2 + u2,2) = 0, u2,3 = 0.5(u1,2 + u3,2) = 0.5(0 + 0.5) = 0.25, u3,3 = 0.5(u2,2 + u4,2) = 0.5(0 + 2) = 1.
For j = 3: u1,4 = 0.5(u0,3 + u2,3) = 0.5(0 + 0.25) = 0.125, u2,4 = 0.5(u1,3 + u3,3) = 0.5(0 + 1) = 0.5, u3,4 = 0.5(u2,3 + u4,3) = 0.5(0.25 + 3) = 1.625.

The approximate solutions at t = 4 are u(0.25, 4) ~ 0.125, u(0.5, 4) ~ 0.5, u(0.75, 4) ~ 1.625.

Implicit methods Explicit methods have the disadvantage that they require a stability condition on the mesh ratio parameter lambda. We have seen that the Schmidt method is stable only for lambda <= 0.5. This condition severely restricts the values that can be used for the step lengths h and k. In most practical problems, where the computation is to be done up to a large value of t, these methods are not useful because the time taken is too high. In such cases, we use the implicit methods. We shall discuss the most popular and useful method, called the Crank-Nicolson method.

There are a number of ways of deriving the Crank-Nicolson method; we describe one of the simple ways. Denote Nt as the backward difference in the time direction. From Eq. (2.32), we approximate

(du/dt)i,j+1 = - (1/k)[log(1 - Nt)] u = (1/k)[Nt + (1/2)Nt^2 + (1/3)Nt^3 + ...] u ~ (1/k)[Nt + (1/2)Nt^2] u.    (5.46)

If we expand the operator Nt [1 - (1/2)Nt]^(-1), we get

Nt [1 + (1/2)Nt + (1/4)Nt^2 + ...] = Nt + (1/2)Nt^2 + ...,    (5.47)

which agrees with the first two terms on the right hand side of (5.46). Hence, we use the approximation

(du/dt)i,j+1 ~ (1/k) { Nt/[1 - (1/2)Nt] } ui,j+1.

Applying the differential equation at the nodal point (i, j + 1), we write the relation

(du/dt)i,j+1 = c2 (d2u/dx2)i,j+1.    (5.48)

Using the approximation given in (5.47) on the left hand side and the central difference approximation (5.39) on the right hand side, we obtain

(1/k) { Nt/[1 - (1/2)Nt] } ui,j+1 = (c2/h2) dx2 ui,j+1,

or Nt ui,j+1 = lambda [1 - (1/2)Nt] dx2 ui,j+1, where lambda = kc2/h2,

or ui,j+1 - ui,j = lambda dx2 ui,j+1 - (lambda/2) dx2 (ui,j+1 - ui,j),

or ui,j+1 = ui,j + (lambda/2)(dx2 ui,j+1 + dx2 ui,j).    (5.49)

Expanding the central difference operators in (5.49), we get

ui,j+1 - (lambda/2)(ui+1,j+1 - 2ui,j+1 + ui-1,j+1) = ui,j + (lambda/2)(ui+1,j - 2ui,j + ui-1,j),

or

- (lambda/2) ui-1,j+1 + (1 + lambda) ui,j+1 - (lambda/2) ui+1,j+1 = (lambda/2) ui-1,j + (1 - lambda) ui,j + (lambda/2) ui+1,j,    (5.50)

where lambda = kc2/h2. This method is called the Crank-Nicolson method. It is a two level implicit method: it uses the three consecutive unknowns ui-1,j+1, ui,j+1 and ui+1,j+1 on the current time level. The nodal points that are used in the method are given in Fig. 5.24. [Fig. 5.24. Nodes in the Crank-Nicolson method: (i - 1, j), (i, j), (i + 1, j) on level j and (i - 1, j + 1), (i, j + 1), (i + 1, j + 1) on level j + 1.]

Remark 14 From the right hand side of Eq. (5.49), we note that it is the mean of the central difference approximations, (1/2)(dx2 ui,j + dx2 ui,j+1), to the right hand side of the differential equation on the levels j and j + 1.

Remark 15 The order of the Crank-Nicolson method is O(k2 + h2).

Remark 16 Implicit methods often have very strong stability properties. Stability analysis of the Crank-Nicolson method shows that the method is stable for all values of the mesh ratio parameter lambda. Such methods are called unconditionally stable methods. This implies that there is no restriction on the values of the mesh lengths h and k; depending on the particular problem that is being solved, we may use sufficiently large values of the step lengths. This is the advantage of the method.

Remark 17 Do you recognize the system of equations that is obtained if we apply the Crank-Nicolson method? Again, it is a tri-diagonal system of equations.

Computational procedure The initial condition u(x, 0) = f(x) gives the solution at all the nodal points on the initial line (level 0). The boundary conditions u(0, t) = g(t), u(l, t) = h(t), t > 0 give the solutions at all the nodal points on the lines x = 0 and x = l for all time levels. We choose a value for lambda and for h; this gives the value of the time step length k. Alternately, we may choose the values for h and k. The difference equations at all nodal points on the first time level are written; this system of equations is solved to obtain the values at all the nodal points on this time level. The computations are repeated for the required number of steps. If we perform m steps of computation, then we have computed the solutions up to time tm = mk. Let us illustrate the application of the method.
This concept of taking the mean of the central difference approximations to the right hand side of a given differential equation is often generalized to more complicated differential equations.
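Each Crank-Nicolson step requires solving the tri-diagonal system noted in Remark 17. A sketch using the Thomas algorithm is given below; the function names are illustrative additions, and the data in the check is that of Example 5.18 which follows (c2 = 1, h = 1/3, k = 1/36, lambda = 1/4, u(x, 0) = sin(pi x), zero boundary values).

```python
import math

# One Crank-Nicolson step for u_t = c^2 u_xx with boundary values held
# fixed over the step. The tri-diagonal system
#   -(lam/2) u_{i-1,j+1} + (1+lam) u_{i,j+1} - (lam/2) u_{i+1,j+1} = d_i
# is solved by the Thomas algorithm (forward elimination, back substitution).
def crank_nicolson_step(u, lam):
    n = len(u) - 2                                   # interior unknowns
    a = [-lam / 2.0] * n                             # sub-diagonal
    b = [1.0 + lam] * n                              # main diagonal
    c = [-lam / 2.0] * n                             # super-diagonal
    d = [lam / 2.0 * (u[i - 1] + u[i + 1]) + (1.0 - lam) * u[i]
         for i in range(1, n + 1)]
    d[0] += lam / 2.0 * u[0]                         # boundary contributions
    d[-1] += lam / 2.0 * u[-1]
    for i in range(1, n):                            # forward elimination
        m = a[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]
    x = [0.0] * n
    x[-1] = d[-1] / b[-1]                            # back substitution
    for i in range(n - 2, -1, -1):
        x[i] = (d[i] - c[i] * x[i + 1]) / b[i]
    return [u[0]] + x + [u[-1]]

u0 = [0.0, math.sin(math.pi / 3.0), math.sin(2.0 * math.pi / 3.0), 0.0]
u1 = crank_nicolson_step(u0, 0.25)   # one step with the data of Example 5.18
```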

Example 5.18 Solve the equation ut = uxx, subject to the conditions u(x, 0) = sin(pi x), 0 <= x <= 1, u(0, t) = u(1, t) = 0, using the Crank-Nicolson method with h = 1/3, k = 1/36. Do one time step. (A.U. Nov/Dec. 2006)

Solution We have c2 = 1, h = 1/3, k = 1/36, so that lambda = kc2/h2 = (1/36)(9) = 1/4. The Crank-Nicolson method is given by

- (lambda/2) ui-1,j+1 + (1 + lambda) ui,j+1 - (lambda/2) ui+1,j+1 = (lambda/2) ui-1,j + (1 - lambda) ui,j + (lambda/2) ui+1,j.

For lambda = 1/4, we have the method as

- (1/8) ui-1,j+1 + (5/4) ui,j+1 - (1/8) ui+1,j+1 = (1/8) ui-1,j + (3/4) ui,j + (1/8) ui+1,j,

or, multiplying by 8,

- ui-1,j+1 + 10 ui,j+1 - ui+1,j+1 = ui-1,j + 6 ui,j + ui+1,j, j = 0; i = 1, 2.

The initial condition gives the values u0,0 = 0, u1,0 = sin(pi/3) = 0.866025 = u2,0, u3,0 = 0. The boundary conditions give the values u0,j = 0 = u3,j, for all j. We have the following equations.

For j = 0:
i = 1: - u0,1 + 10u1,1 - u2,1 = u0,0 + 6u1,0 + u2,0 = 7(0.866025) = 6.06218, or 10u1,1 - u2,1 = 6.06218.
i = 2: - u1,1 + 10u2,1 - u3,1 = u1,0 + 6u2,0 + u3,0 = 6.06218, or - u1,1 + 10u2,1 = 6.06218.

Subtracting the two equations, we get 11u1,1 - 11u2,1 = 0, or u1,1 = u2,1. Hence, 9u1,1 = 6.06218, and the solution is given by

u1,1 = u2,1 = 6.06218/9 = 0.67358.

Example 5.19 Solve uxx = ut in 0 < x < 2, t > 0, u(0, t) = u(2, t) = 0, t > 0, and u(x, 0) = sin(pi x/2), 0 <= x <= 2, using dx = 0.5, dt = 0.25 for one time step by the Crank-Nicolson implicit finite difference method. (A.U. Apr/May 2003)

Solution We have c2 = 1, dx = 0.5, dt = 0.25, so that lambda = c2 dt/dx2 = 0.25/0.25 = 1. For lambda = 1, the Crank-Nicolson method

- (lambda/2) ui-1,j+1 + (1 + lambda) ui,j+1 - (lambda/2) ui+1,j+1 = (lambda/2) ui-1,j + (1 - lambda) ui,j + (lambda/2) ui+1,j

becomes

- ui-1,j+1 + 4 ui,j+1 - ui+1,j+1 = ui-1,j + ui+1,j, i = 1, 2, 3; j = 0.

The initial condition gives the values u0,0 = 0, u1,0 = sin(pi/4) = 0.70711, u2,0 = sin(pi/2) = 1, u3,0 = sin(3pi/4) = 0.70711, u4,0 = 0. The boundary conditions give the values u0,j = 0 = u4,j, for all j. We have the following equations.

For j = 0:
i = 1: - u0,1 + 4u1,1 - u2,1 = u0,0 + u2,0 = 1, or 4u1,1 - u2,1 = 1.
i = 2: - u1,1 + 4u2,1 - u3,1 = u1,0 + u3,0 = 1.41421.
i = 3: - u2,1 + 4u3,1 - u4,1 = u2,0 + u4,0 = 1, or - u2,1 + 4u3,1 = 1.

Subtracting the first and third equations, we get 4u1,1 - 4u3,1 = 0, or u1,1 = u3,1. Hence, the system reduces to

4u1,1 - u2,1 = 1, - 2u1,1 + 4u2,1 = 1.41421.

Using determinants, the solution is obtained as

u2,1 = [4(1.41421) + 2]/14 = 7.65684/14 = 0.54692, u1,1 = u3,1 = (1 + 0.54692)/4 = 0.38673.

Example 5.20 Solve by the Crank-Nicolson method the equation uxx = ut, subject to u(x, 0) = 0, u(0, t) = 0 and u(1, t) = t, for two time steps. (A.U. Nov/Dec. 2003; Nov/Dec. 2006)

Solution Since the values of the step lengths h and k are not given, let us assume h = 0.25 and λ = 1, so that k = λh² = 0.0625.

For λ = 1, the Crank-Nicolson implicit finite difference method

− (λ/2) u_{i−1,j+1} + (1 + λ) u_{i,j+1} − (λ/2) u_{i+1,j+1} = (λ/2) u_{i−1,j} + (1 − λ) u_{i,j} + (λ/2) u_{i+1,j}

becomes

− u_{i−1,j+1} + 4 u_{i,j+1} − u_{i+1,j+1} = u_{i−1,j} + u_{i+1,j}, j = 0, 1; i = 1, 2, 3.

The initial condition gives the values u_{i,0} = 0 for all i. The boundary conditions give the values u_{0,j} = 0 for all j, and u_{4,j} = t_j = jk = 0.0625j. (Fig. 5.27: nodes at x = 0, 0.25, 0.5, 0.75, 1.)

For j = 0, we have u_{4,1} = 0.0625, and the equations

i = 1 : − u_{0,1} + 4 u_{1,1} − u_{2,1} = u_{0,0} + u_{2,0} = 0, or 4 u_{1,1} − u_{2,1} = 0,
i = 2 : − u_{1,1} + 4 u_{2,1} − u_{3,1} = u_{1,0} + u_{3,0} = 0,
i = 3 : − u_{2,1} + 4 u_{3,1} − u_{4,1} = u_{2,0} + u_{4,0} = 0, or − u_{2,1} + 4 u_{3,1} = 0.0625.

The system of equations is given by

[  4  −1   0 ] [u_{1,1}]   [ 0      ]
[ −1   4  −1 ] [u_{2,1}] = [ 0      ]
[  0  −1   4 ] [u_{3,1}]   [ 0.0625 ]

We solve this system by Gauss elimination. Perform (1/4)R1, then R2 + R1:

[ 1  −1/4    0 | 0      ]
[ 0  15/4   −1 | 0      ]
[ 0  −1      4 | 0.0625 ]

Perform (4/15)R2, then R3 + R2:

[ 1  −1/4      0   | 0      ]
[ 0   1    −4/15   | 0      ]
[ 0   0    56/15   | 0.0625 ]

The last equation gives u_{3,1} = (15/56)(0.0625) = 0.01674. The second equation gives u_{2,1} = (4/15) u_{3,1} = (4/15)(0.01674) = 0.00446. The first equation gives u_{1,1} = (1/4) u_{2,1} = (1/4)(0.00446) = 0.00112.
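The j = 0 solve above is a small tridiagonal system, and such systems can be solved mechanically. The sketch below (the solver name is ours, not the text's) repeats the step with the Thomas algorithm and reproduces the values just computed.

```python
# Repeat the j = 0 Crank-Nicolson step above by solving the tridiagonal
# system with the Thomas algorithm.  The helper name is our own.

def solve_tridiagonal(sub, diag, sup, rhs):
    """Thomas algorithm; sub[0] and sup[-1] are unused."""
    n = len(diag)
    c = [0.0] * n           # modified superdiagonal
    d = [0.0] * n           # modified right hand side
    c[0] = sup[0] / diag[0]
    d[0] = rhs[0] / diag[0]
    for i in range(1, n):
        denom = diag[i] - sub[i] * c[i - 1]
        if i < n - 1:
            c[i] = sup[i] / denom
        d[i] = (rhs[i] - sub[i] * d[i - 1]) / denom
    x = [0.0] * n
    x[-1] = d[-1]
    for i in range(n - 2, -1, -1):
        x[i] = d[i] - c[i] * x[i + 1]
    return x

# System for j = 0: the boundary value u(1, t1) = 0.0625 enters only the
# last equation.
u1 = solve_tridiagonal([0, -1, -1], [4, 4, 4], [-1, -1, 0], [0, 0, 0.0625])
# u1 is approximately [0.00112, 0.00446, 0.01674]
```

The same call with the j = 1 right hand side reproduces the next time step.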

For j = 1, we have u_{4,2} = 2k = 0.125, and the equations

i = 1 : − u_{0,2} + 4 u_{1,2} − u_{2,2} = u_{0,1} + u_{2,1}, or 4 u_{1,2} − u_{2,2} = 0.00446,
i = 2 : − u_{1,2} + 4 u_{2,2} − u_{3,2} = u_{1,1} + u_{3,1} = 0.00112 + 0.01674 = 0.01786,
i = 3 : − u_{2,2} + 4 u_{3,2} = 0.00446 + 0.125 = 0.12946.

The system of equations is given by

[  4  −1   0 ] [u_{1,2}]   [ 0.00446 ]
[ −1   4  −1 ] [u_{2,2}] = [ 0.01786 ]
[  0  −1   4 ] [u_{3,2}]   [ 0.12946 ]

We solve this system by Gauss elimination. Perform (1/4)R1, then R2 + R1:

[ 1  −1/4    0 | 0.001115 ]
[ 0  15/4   −1 | 0.018975 ]
[ 0  −1      4 | 0.12946  ]

Perform (4/15)R2, then R3 + R2:

[ 1  −1/4      0   | 0.001115 ]
[ 0   1    −4/15   | 0.00506  ]
[ 0   0    56/15   | 0.13452  ]

The last equation gives u_{3,2} = (15/56)(0.13452) = 0.036032. The second equation gives u_{2,2} = 0.00506 + (4/15)(0.036032) = 0.014669. The first equation gives u_{1,2} = 0.001115 + (1/4)(0.014669) = 0.004782.

REVIEW QUESTIONS

1. Write the one dimensional heat conduction equation and the associated conditions.
Solution The heat conduction equation is given by ut = c²uxx, 0 ≤ x ≤ l, t > 0. The associated conditions are the following.
Initial condition At time t = 0, the temperature is prescribed, u(x, 0) = f(x), 0 ≤ x ≤ l.
Boundary conditions Since the bar is of length l, boundary conditions at x = 0 and at x = l are to be prescribed, u(0, t) = g(t), u(l, t) = h(t), t > 0.

2. What is an explicit method for solving the heat conduction equation?
Solution In explicit methods, the solution at each nodal point on the current time level is obtained by simple computations (additions, subtractions, multiplications and divisions) using the solutions at the previous one or more levels.

3. Write the Schmidt method for solving the one dimensional heat conduction equation.
Solution The Schmidt method for solving the heat conduction equation ut = c²uxx, 0 ≤ x ≤ l, t > 0, is given by

u_{i,j+1} = λ u_{i−1,j} + (1 − 2λ) u_{i,j} + λ u_{i+1,j}, j = 0, 1, ...; i = 1, 2, ...,

where λ = kc²/h² is the mesh ratio parameter, and h and k are the step lengths in the x and t directions respectively.

4. What is the order and truncation error of the Schmidt method?
Solution The order of the method is O(k + h²). The truncation error of the method is given by

T.E. = (kh²c²/12)(6λ − 1) ∂⁴u/∂x⁴ + ...

For a fixed value of λ, the method behaves like an O(h²) method.

5. What is the condition of stability for the Schmidt method?
Solution The Schmidt method is stable when the mesh ratio parameter λ satisfies the condition λ ≤ 1/2.

6. Write the Bender-Schmidt method for solving the one dimensional heat conduction equation.
Solution The Bender-Schmidt method for solving the heat conduction equation ut = c²uxx, 0 ≤ x ≤ l, t > 0, is given by

u_{i,j+1} = (1/2)(u_{i−1,j} + u_{i+1,j}).

This method is a particular case of the Schmidt method in which we use the value λ = 1/2.

7. Is the Bender-Schmidt method for solving the heat conduction equation stable?
Solution The Bender-Schmidt method is obtained from the Schmidt method by setting λ = 1/2. The Schmidt method is stable when the mesh ratio parameter λ satisfies the condition λ ≤ 1/2. Hence, the Bender-Schmidt method is also stable.

8. Write the particular case of the Schmidt method which is of order O(k² + h⁴).
Solution The higher order O(k² + h⁴) method is obtained by setting λ = 1/6 in the Schmidt method. The method is given by

u_{i,j+1} = (1/6)[u_{i−1,j} + 4 u_{i,j} + u_{i+1,j}].

For this value of λ, the method behaves like an O(h⁴) method.

9. When do we call a numerical method stable?
Solution A numerical method is said to be stable when the cumulative effect of all errors tends to zero as the computation progresses.
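The stability statements in Questions 5 to 7 can be observed numerically. In the sketch below (the spike initial data and the function names are our own, chosen only for illustration), the Schmidt update with λ = 1, which violates λ ≤ 1/2, grows without bound over 20 steps, while the Crank-Nicolson update reviewed below, with the same λ, decays.

```python
# A numerical look at Questions 5-7.  Five grid nodes, zero boundary
# values; the spike initial line and helper names are our own.
lam = 1.0   # mesh ratio; violates the Schmidt condition lam <= 1/2

def schmidt_step(u):
    # u[i,j+1] = lam*u[i-1,j] + (1 - 2*lam)*u[i,j] + lam*u[i+1,j]
    return [0.0] + [lam * u[i - 1] + (1 - 2 * lam) * u[i] + lam * u[i + 1]
                    for i in range(1, 4)] + [0.0]

def crank_nicolson_step(u):
    # Solve (1 + lam)v[i] - (lam/2)(v[i-1] + v[i+1])
    #     = (lam/2)(u[i-1] + u[i+1]) + (1 - lam)u[i]
    # by Gauss elimination on the 3x3 tridiagonal system.
    A = [[1 + lam, -lam / 2, 0.0],
         [-lam / 2, 1 + lam, -lam / 2],
         [0.0, -lam / 2, 1 + lam]]
    b = [(lam / 2) * (u[i - 1] + u[i + 1]) + (1 - lam) * u[i]
         for i in range(1, 4)]
    for p in range(2):                       # forward elimination
        m = A[p + 1][p] / A[p][p]
        for c in range(3):
            A[p + 1][c] -= m * A[p][c]
        b[p + 1] -= m * b[p]
    v = [0.0] * 3
    for p in (2, 1, 0):                      # back substitution
        s = sum(A[p][c] * v[c] for c in range(p + 1, 3))
        v[p] = (b[p] - s) / A[p][p]
    return [0.0] + v + [0.0]

u_explicit = [0.0, 0.0, 1.0, 0.0, 0.0]
u_implicit = u_explicit[:]
for _ in range(20):
    u_explicit = schmidt_step(u_explicit)
    u_implicit = crank_nicolson_step(u_implicit)
# max|u_explicit| is now enormous; max|u_implicit| has decayed below 1
```

With λ reduced to 1/2 or below, both iterations decay; the experiment only illustrates the stability bounds stated above.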

10. Define two level and three level methods.
Solution When a method uses the nodal values on two time levels tj and tj+1, it is called a two level formula. When a method uses the nodal values on three time levels tj−1, tj and tj+1, it is called a three level formula.

11. Define an implicit method for solving the heat conduction equation.
Solution In implicit methods, we solve a linear system of algebraic equations for all the unknowns on any mesh line t = tj+1.

12. Write the Crank-Nicolson method for solving the one dimensional heat conduction equation.
Solution The Crank-Nicolson method for solving the one dimensional heat conduction equation ut = c²uxx, 0 ≤ x ≤ l, t > 0, is given by

− (λ/2) u_{i−1,j+1} + (1 + λ) u_{i,j+1} − (λ/2) u_{i+1,j+1} = (λ/2) u_{i−1,j} + (1 − λ) u_{i,j} + (λ/2) u_{i+1,j},

or u_{i,j+1} − (λ/2) δ_x² u_{i,j+1} = u_{i,j} + (λ/2) δ_x² u_{i,j},

where λ = kc²/h² is the mesh ratio parameter, and h and k are the step lengths in the x and t directions respectively.

13. What is the order of the Crank-Nicolson method for solving the heat conduction equation?
Solution The order of the Crank-Nicolson method is O(k² + h²).

14. What is the condition of stability for the Crank-Nicolson method?
Solution The Crank-Nicolson method is stable for all values of the mesh ratio parameter λ. The method is also called an unconditionally stable method.

15. What type of system of equations do we get when we apply the Crank-Nicolson method to solve the one dimensional heat conduction equation?
Solution We obtain a linear tridiagonal system of algebraic equations.

EXERCISE 5.4

1. Solve the heat conduction equation ut = uxx, with u(x, 0) = sin (2πx), 0 ≤ x ≤ 1, and u(0, t) = u(1, t) = 0 for all t > 0, using the Schmidt method. Assume h = 0.25. Compute with (i) λ = 1/2 for two time steps, (ii) λ = 1/4 for four time steps, (iii) λ = 1/6 for six time steps.
2. Solve ut = uxx, with u(x, 0) = x(1 − x), 0 ≤ x ≤ 1, and u(0, t) = u(1, t) = 0 for all t > 0, using the Schmidt method with h = 0.25 and λ = 0.25. Compute for four time steps.
3. Solve uxx = 4ut, with u(x, 0) = x(1 − x), 0 ≤ x ≤ 1, and u(0, t) = u(1, t) = 0 for all t > 0, using the Schmidt method with h = 0.25 and λ = 0.5. Compute for four time steps.
4. Solve uxx = 16ut, with u(x, 0) = 2x for x ∈ [0, 1/2] and 2(1 − x) for x ∈ [1/2, 1], and u(0, t) = u(1, t) = 0 for all t > 0, using the explicit method with h = 0.25 and λ = 1/6. Compute for four time steps.

5. Solve the heat equation ut = uxx, 0 ≤ x ≤ 1, subject to the initial and boundary conditions u(x, 0) = sin (2πx), 0 ≤ x ≤ 1, u(0, t) = u(1, t) = 0, using the Crank-Nicolson method with h = 0.25, λ = 1/2. Integrate for one time step. Find the maximum absolute error if the exact solution is u(x, t) = exp(− 4π²t) sin (2πx).
6. Solve the heat equation ut = uxx, 0 ≤ x ≤ 1, subject to the initial and boundary conditions u(x, 0) = sin (πx), 0 ≤ x ≤ 1, u(0, t) = u(1, t) = 0, by the Crank-Nicolson method with h = 0.25, k = 1/32. Integrate for two time steps. If the exact solution of the problem is u(x, t) = exp(− π²t) sin (πx), find the magnitudes of the errors on the second time step. (A.U. Apr/May 2005)
7. Find the solution of the equation 4ut = uxx, 0 ≤ x ≤ 1, subject to the conditions u(x, 0) = 3x for x ∈ [0, 1/2], 3(1 − x) for x ∈ [1/2, 1], and u(0, t) = u(1, t) = 0, using the Crank-Nicolson method with h = 0.25, λ = 1/2. Integrate for two time steps.
8. Find the solution of the equation 16uxx = ut, 0 ≤ x ≤ 5, given that u(x, 0) = 20, u(0, t) = 0, u(5, t) = 100, by the Crank-Nicolson method. Compute u for one time step with h = 1.
9. Find the solution of the equation ut = uxx, 0 ≤ x ≤ 2, subject to the conditions u(x, 0) = 6x for x ∈ [0, 1], 6(2 − x) for x ∈ [1, 2], and u(0, t) = 0 = u(2, t), by the Crank-Nicolson method. Integrate for two time steps.
10. Solve ut = uxx, 0 ≤ x ≤ 1, subject to the conditions u(x, 0) = 1 − x, u(0, t) = 1 − t, u(1, t) = 0, using the Crank-Nicolson method with h = 1/3. Integrate for two time steps.

5.6 FINITE DIFFERENCE METHOD FOR WAVE EQUATION

We have defined the linear second order partial differential equation

Auxx + 2Buxy + Cuyy + Dux + Euy + Fu + G = 0

as an hyperbolic equation if B² − AC > 0. An hyperbolic equation holds in an open domain or in a semi-open domain. The simplest example of an hyperbolic equation is the one dimensional wave equation. Study of the behavior of waves is one of the important areas in engineering. All vibration problems are governed by wave equations.

Consider the problem of a vibrating elastic string of length l, located on the x-axis on the interval [0, l]. Let u(x, t) denote the displacement of the string in the vertical plane. Then, the vibrations of the elastic string are governed by the one dimensional wave equation

utt = c²uxx, 0 ≤ x ≤ l, t > 0,  (5.51)

where c² is a constant which depends on the material properties of the string, the tension T in the string and the mass per unit length of the string.

In order that the solution of the problem exists and is unique, we need to prescribe the following conditions.

(i) Initial conditions Displacement at time t = 0, or initial displacement, is given by

u(x, 0) = f(x), 0 ≤ x ≤ l.  (5.52a)

Initial velocity: ut(x, 0) = g(x), 0 ≤ x ≤ l.  (5.52b)

(ii) Boundary conditions We consider the case when the ends of the string are fixed. Since the ends are fixed, we have the boundary conditions

u(0, t) = 0, u(l, t) = 0, t > 0.  (5.53)

Since both the initial and boundary conditions are prescribed, the problem is called an initial boundary value problem.

Mesh generation The mesh is generated as in the case of the heat conduction equation. Superimpose on the region 0 ≤ x ≤ l, t > 0, a rectangular network of mesh lines. Let the interval [0, l] be divided into M parts. Then, the mesh length along the x-axis is h = l/M. The points along the x-axis are xi = ih, i = 0, 1, 2, ..., M. Let the mesh length along the t-axis be k, and define tj = jk. We call tj the jth time level. At any point (xi, tj), we denote the numerical solution by u_{i,j} and the exact solution by u(xi, tj).

As in the case of the heat conduction equation, we can derive explicit and implicit methods for the solution of the wave equation. Let us derive a few methods.

Explicit methods Using central differences, we write the approximations

(∂²u/∂x²)_{i,j} ≈ (1/h²) δ_x² u_{i,j} = (1/h²)[u_{i+1,j} − 2 u_{i,j} + u_{i−1,j}],  (5.54)

(∂²u/∂t²)_{i,j} ≈ (1/k²) δ_t² u_{i,j} = (1/k²)[u_{i,j+1} − 2 u_{i,j} + u_{i,j−1}].  (5.55)

Applying the differential equation (5.51) at the nodal point (xi, tj), and using the central difference approximations, we get

(1/k²)[u_{i,j+1} − 2 u_{i,j} + u_{i,j−1}] = (c²/h²)[u_{i+1,j} − 2 u_{i,j} + u_{i−1,j}]

or u_{i,j+1} − 2 u_{i,j} + u_{i,j−1} = r²[u_{i+1,j} − 2 u_{i,j} + u_{i−1,j}]

or u_{i,j+1} = 2(1 − r²) u_{i,j} + r²[u_{i+1,j} + u_{i−1,j}] − u_{i,j−1},  (5.56)

where r = kc/h is called the mesh ratio parameter.
The nodes that are used in the computations are given in Fig. 5.28 (levels j − 1, j, j + 1, with the nodes (i − 1, j), (i, j), (i + 1, j), (i, j − 1) and (i, j + 1)).

Remark 18 We note that the minimum number of levels required for any method (explicit or implicit) is three. Therefore, the method is always a three level method. The value u_{i,j+1} at the node (xi, tj+1) is being obtained by the formula in Eq. (5.56), explicitly using the values on the previous time levels tj and tj−1.

Truncation error of the explicit method We have the method as

u_{i,j+1} − 2 u_{i,j} + u_{i,j−1} = r²[u_{i+1,j} − 2 u_{i,j} + u_{i−1,j}].

Expanding each term in Taylor's series, we obtain

u(xi, tj + k) − 2 u(xi, tj) + u(xi, tj − k) = k² (∂²u/∂t²) + (k⁴/12)(∂⁴u/∂t⁴) + ...,

u(xi + h, tj) − 2 u(xi, tj) + u(xi − h, tj) = h² (∂²u/∂x²) + (h⁴/12)(∂⁴u/∂x⁴) + ...,

where all the terms on the right hand sides are evaluated at (xi, tj).

. (5. tj)] = k2 c2 2 LRu + k ∂u + k ∂ u + k ∂ u + k ∂ u + . Truncation error of the explicit method We have the method as ui.OPQ = k c MNM ∂xu + h ∂xu + .U MS ∂x 2 ∂x 6 ∂x 24 ∂x V h MT W N R ∂u + h ∂ u − h ∂ u + h ∂ u − .UOP − 2u + S u − k T ∂t 2 ∂t 6 ∂t 24 ∂t VQP W L ∂ u + k ∂ u + .28. j+1 at the node (xi .292 NUMERICAL METHODS i.. tj) + u(xi . j + ui. j i. explicitly using the values on the previous time levels tj and tj–1.PQP 12 h N ∂ 2 2 3 3 4 4 2 3 4 2 2 3 3 4 4 2 3 4 2 2 2 2 2 4 4 2 2 4 2 4 2 2 2 4 . Expanding in Taylor’s series.. Therefore. j + 1 Level j + 1 i – 1.U MS ∂t 2 ∂t 6 ∂t 24 ∂t V MT W N R ∂u + k ∂ u − k ∂ u + k ∂ u − .... tj + k) – 2u(xi.. tj) + u(xi–1 . we obtain u(xi .. j – 1 Fig. the method is always a three level method. j − 1 = r 2 [ui + 1.... tj). j − 2ui.56).. j + ui − 1. tj) – 2u(xi . Nodes in explicit method... The value ui.UOP − 2u + Su − h T ∂x 2 ∂x 6 ∂x 24 ∂x VPQ W L O ∂ k c L ∂ u h ∂ u = Mh ∂x + 12 ∂x + .OP = Mk N ∂t 12 ∂t Q 3 4 4 2 3 4 2 2 3 3 4 4 2 3 4 2 2 4 4 2 4 where all the terms on the right hand sides are evaluated at (xi . The truncation error is given by LRu + h ∂u + h ∂ u + h ∂ u + h ∂ u + . 5. tj – k) 2 2 3 = r2 [u(xi+1 . j i + 1. tj+1) is being obtained by the formula in Eq. j Level j Level j – 1 i. j ] . j + 1 − 2ui. Remark 18 We note that the minimum number of levels required for any method (explicit or implicit) is three.

j + ui− 1. (5. the truncation error of the method is of the order O(k6 + k2h4). 5. Therefore.57) (T. r = kc/h = fixed. j + ui− 1.. j − 2ui. The order of the method is O(k4 + h4).. the method is of order O(h4)..E = [u(xi .. 2 12 ∂t 4 12 ∂x 4 ∂x I JK OP Q LM N OP Q Now. j or ui.58) Remark 19 For a fixed value of r. j ] = ui+ 1. tj) + u(xi .. tj) + u(xi–1 .. The order of the method is given by order = 1 k2 F ∂ uI = c GH ∂x JK 2 2 4 ∂2 ∂x 2 F ∂ uI = c GH ∂x JK 2 2 4 ∂4u ∂x 4 k 4 c 4 ∂ 4 u k2 h 2 c 2 ∂ 4 u k2 h 2 c 2 ∂4u (r 2 − 1) 4 + . tj – k)] – r2 [u(xi+1 . using the differential equation ∂ 2u ∂t 2 = c2 ∂ 2u ∂x 2 and ∂ 4u ∂t 4 = c2 ∂2 ∂t 2 we obtain T.59) The nodes that are used in computations are given in Fig.. j − 1 = ui+ 1. The higher order method obtained when r = 1 is given by ui. = 4 4 12 ∂x 12 ∂x 12 ∂x F GH I JK (5. j – 1 Fig. for the fixed value of r = 1. = since k = (hr)/c. (5.. i. That is. 293 T. Hence..E) = O(h2 + k2).E. j i + 1. tj) – 2u(xi . . j − ui.29. tj)] = k2 L M N = k2 F∂ u − c GH ∂t 2 2 ∂ 2u k4 ∂ 4u ∂ 2u h2 ∂ 4u + .. j Level j Level j – 1 i. j + 1 − 2ui. we have k = rh/c or k = O(h).BOUNDARY VALUE PROBLEMS IN ORDINARY DIFFERENTIAL EQUATIONS. j − 2ui. the method is of order O(h2). j + 1 = 2ui. j + ui−1. the values of h and k are reduced such that the value of r is always same. that is. for a fixed value of r. j − 1 + [ui+ 1. j + 1 Level j + 1 i – 1. + + . − k2 c 2 h 2 2 + 12 ∂x 4 ∂t 2 12 ∂t 4 ∂x 2 k4 ∂ 4 u k2 h 2 c 2 ∂ 4 u ∂2u + − + . Hence. the leading term in the error expression given in (5.59. − + . j − ui. Nodes in explicit method for r = 1.. 5.. Remark 20 For r = 1. j + ui.57) vanishes. j − 1 . tj + k) – 2u(xi .

0) ≈ [ui.61).59) at the nodes on the level t = k.1.63 b) ∂u ( x. The boundary conditions u(0.294 NUMERICAL METHODS When the values of h and k are not prescribed in any particular problem. –1 from (5. stability of the numerical computations plays an important role. Remark 21 In the case of the wave equation also.0 + ui−1.0 + r2 [ui+ 1. This gives the values at all nodal points on the level t = k.63 a) The external points u i . and c = 1. (5. Hence. We get ui. 0) = f(x) gives the solution at all the nodal points on the initial line (level 0). This gives the value of r.63b) becomes (5. 0) = g( x) . t) = h(t).59) is of three levels. we need data on two time levels t = 0 and t = k. for j = 0. The formula (5. Alternately.56) or (5.1 = 2(1 − r 2 )ui. then we get from ∂t .0 ] − [ui.−1 .62) Now. to start computations. ui.1 − ui.–1 = ui. t) = g(t). Analysis of the method gives that the method is stable if r= kc ≤ 1. we obtain ∂u 1 ( x. The initial condition u(x. For r = 1. Computational procedure Since the explicit method (5.62). t > 0 give the solutions at all the nodal points on the lines x = 0 and x = l for all time levels. Solving for ui . the higher order method is also stable. –1 = ui. we use the method (5.0 ] + 2kg( xi ) .60) Note that the higher order method (5. (5. h (5. (5. if the initial condition is prescribed as (5.0 + ui− 1.0 ] − ui −1 . u(l.59) uses the value r = 1. we have h = k.1 – 2kg(xi). ∂t If we write the central difference approximation. For example.0 + r2 [ui+1. 0) = 0 .–1. we may choose these values such that r = 1.−1 ] = g ( xi ) . that are introduced in this equation are eliminated by using the relation in (5.1 − 2kg ( xi )] or 2ui. The values required on the level t = k is obtained by writing a suitable approximation to the initial condition ∂u ( x.1 = 2(1 – r2)ui.56) or (5.62). we may choose the values for h and r.0 + ui −1.0 + r 2 [ui +1.1 = 2(1 – r2)ui. ui. we get ui . 
We choose values for k and h; this gives the value of r. Alternately, we may choose values for h and r.

Solving (5.63b) for u_{i,1}, we obtain

u_{i,1} = (1 − r²) u_{i,0} + (r²/2)[u_{i+1,0} + u_{i−1,0}].  (5.64)

For r = 1, the method simplifies to

u_{i,1} = (1/2)[u_{i+1,0} + u_{i−1,0}].  (5.65)

Thus, the solutions at all nodal points on level 1 are obtained. For t > k, that is, for j ≥ 1, we use the method (5.56) or (5.59). The computations are repeated for the required number of steps. If we perform m steps of computation, then we have computed the solutions up to time tm = mk.

Let us illustrate the method through some problems.

Example 5.21 Solve the wave equation utt = uxx, 0 ≤ x ≤ 1, subject to the conditions u(x, 0) = sin (πx), ut(x, 0) = 0, 0 ≤ x ≤ 1, u(0, t) = u(1, t) = 0, t > 0, using the explicit method with h = 1/4 and (i) k = 1/8, (ii) k = 1/4. Compute for four time steps for (i), and two time steps for (ii). If the exact solution is u(x, t) = cos (πt) sin (πx), compare the solutions at times t = 1/4 and t = 1/2.

Solution The explicit method is given by

u_{i,j+1} = 2(1 − r²) u_{i,j} + r²[u_{i+1,j} + u_{i−1,j}] − u_{i,j−1}.

We are given c = 1 and h = 1/4. Hence, we have five nodes on each time level (Fig. 5.30: x = 0, 0.25, 0.5, 0.75, 1.0), and we have to find the solution at three interior points.

(i) When k = 1/8, we get r = kc/h = (1/8)(4) = 1/2. The method becomes

u_{i,j+1} = 1.5 u_{i,j} + 0.25[u_{i+1,j} + u_{i−1,j}] − u_{i,j−1},  (5.66)

j = 0, 1, 2, 3; i = 1, 2, 3. The computations are to be done for four time steps, that is, up to t = 1/2.

The initial conditions give the values

(a) u_{i,0} = sin (iπ/4), i = 0, 1, ..., 4, that is,
u_{0,0} = 0, u_{1,0} = sin (π/4) = 1/√2 = 0.70711, u_{2,0} = sin (π/2) = 1, u_{3,0} = sin (3π/4) = 0.70711, u_{4,0} = sin (π) = 0;

(b) ut(x, 0) = 0 gives u_{i,−1} = u_{i,1}.

The boundary conditions give the values u_{0,j} = 0, u_{4,j} = 0, for all j.

For j = 0, we use the formula (5.64):

u_{i,1} = 0.75 u_{i,0} + 0.125[u_{i+1,0} + u_{i−1,0}].

i = 1 : u_{1,1} = 0.75 u_{1,0} + 0.125(u_{2,0} + u_{0,0}) = 0.75(0.70711) + 0.125(1 + 0) = 0.65533.
i = 2 : u_{2,1} = 0.75 u_{2,0} + 0.125(u_{3,0} + u_{1,0}) = 0.75(1.0) + 0.125(2)(0.70711) = 0.92678.
i = 3 : u_{3,1} = 0.75 u_{3,0} + 0.125(u_{4,0} + u_{2,0}) = 0.75(0.70711) + 0.125(0 + 1) = 0.65533.

For j = 1: we use the formula (5.66).

i = 1 : u_{1,2} = 1.5 u_{1,1} + 0.25(u_{2,1} + u_{0,1}) − u_{1,0} = 1.5(0.65533) + 0.25(0.92678 + 0) − 0.70711 = 0.50758.
i = 2 : u_{2,2} = 1.5 u_{2,1} + 0.25(u_{3,1} + u_{1,1}) − u_{2,0} = 1.5(0.92678) + 0.25(2)(0.65533) − 1.0 = 0.71784.
i = 3 : u_{3,2} = 1.5 u_{3,1} + 0.25(u_{4,1} + u_{2,1}) − u_{3,0} = 1.5(0.65533) + 0.25(0 + 0.92678) − 0.70711 = 0.50758.

For j = 2:

i = 1 : u_{1,3} = 1.5(0.50758) + 0.25(0.71784 + 0) − 0.65533 = 0.28550.
i = 2 : u_{2,3} = 1.5(0.71784) + 0.25(2)(0.50758) − 0.92678 = 0.40377.
i = 3 : u_{3,3} = 1.5(0.50758) + 0.25(0 + 0.71784) − 0.65533 = 0.28550.

For j = 3:

i = 1 : u_{1,4} = 1.5(0.28550) + 0.25(0.40377 + 0) − 0.50758 = 0.02161.
i = 2 : u_{2,4} = 1.5(0.40377) + 0.25(2)(0.28550) − 0.71784 = 0.03056.
i = 3 : u_{3,4} = 1.5(0.28550) + 0.25(0 + 0.40377) − 0.50758 = 0.02161.
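These hand computations can be cross-checked with a short script. The sketch below (the array layout and names are ours) advances the same scheme four steps from the same initial data.

```python
import math

# Cross-check of case (i): r = 1/2, h = 1/4, k = 1/8.  Start-up step:
#   u[i,1] = (1 - r^2)u[i,0] + (r^2/2)(u[i+1,0] + u[i-1,0]);
# then u[i,j+1] = 2(1 - r^2)u[i,j] + r^2(u[i+1,j] + u[i-1,j]) - u[i,j-1].
r2 = 0.25                                               # r^2
prev = [math.sin(math.pi * i / 4) for i in range(5)]    # level t = 0
curr = [0.0] * 5                                        # level t = k
for i in range(1, 4):
    curr[i] = (1 - r2) * prev[i] + (r2 / 2) * (prev[i + 1] + prev[i - 1])
for j in range(3):                                      # levels up to t = 1/2
    nxt = [0.0] * 5
    for i in range(1, 4):
        nxt[i] = (2 * (1 - r2) * curr[i]
                  + r2 * (curr[i + 1] + curr[i - 1]) - prev[i])
    prev, curr = curr, nxt
# curr now holds level t = 1/2: about [0, 0.02161, 0.03056, 0.02161, 0]
```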

(ii) When k = 1/4, we get r = kc/h = (1/4)(4) = 1. For r = 1, we get the method

u_{i,j+1} = u_{i+1,j} + u_{i−1,j} − u_{i,j−1},  (5.67)

j = 0, 1; i = 1, 2, 3. The computations are to be done for two time steps, that is, up to t = 1/2.

For j = 0: u_{i,−1} = u_{i,1} simplifies the method to u_{i,1} = 0.5(u_{i−1,0} + u_{i+1,0}).

i = 1 : u_{1,1} = 0.5(u_{0,0} + u_{2,0}) = 0.5(0 + 1) = 0.5.
i = 2 : u_{2,1} = 0.5(u_{1,0} + u_{3,0}) = 0.5(2)(0.70711) = 0.70711.
i = 3 : u_{3,1} = 0.5(u_{2,0} + u_{4,0}) = 0.5(1 + 0) = 0.5.

For j = 1: we use the formula (5.67).

i = 1 : u_{1,2} = u_{0,1} + u_{2,1} − u_{1,0} = 0 + 0.70711 − 0.70711 = 0.
i = 2 : u_{2,2} = u_{1,1} + u_{3,1} − u_{2,0} = 0.5 + 0.5 − 1.0 = 0.
i = 3 : u_{3,2} = u_{2,1} + u_{4,1} − u_{3,0} = 0.70711 + 0 − 0.70711 = 0.

The exact solution and the magnitudes of the errors are as follows.

At t = 0.25: u(0.25, 0.25) = u(0.75, 0.25) = cos (π/4) sin (π/4) = 0.5, and u(0.5, 0.25) = cos (π/4) sin (π/2) = 0.70711.
For r = 1/2, the magnitudes of the errors are | u(0.25, 0.25) − u_{1,2} | = | 0.5 − 0.50758 | = 0.00758 = | u(0.75, 0.25) − u_{3,2} |, and | u(0.5, 0.25) − u_{2,2} | = | 0.70711 − 0.71784 | = 0.0107.
For r = 1, we obtain the exact solution.

At t = 0.5: u(x, 0.5) = cos (π/2) sin (πx) = 0 for all x.
For r = 1/2, the magnitudes of the errors are 0.02161, 0.03056 and 0.02161.
For r = 1, we obtain the exact solution.

Example 5.22 Solve utt = 4uxx, with the boundary conditions u(0, t) = 0 = u(4, t), t > 0, and the initial conditions ut(x, 0) = 0, u(x, 0) = x(4 − x). (A.U. Nov/Dec 2006)

Solution We have c² = 4. The values of the step lengths h and k are not prescribed, and the number of time steps up to which the computations are to be performed is not prescribed either. Therefore, let us assume that we use an explicit method with h = 1 and k = 0.5, and let the number of time steps be 4 (Fig. 5.31: x = 0, 1.0, 2.0, 3.0, 4.0).

Then, r = ck/h = 2(0.5)/1 = 1, and the explicit formula is given by (see (5.59))

u_{i,j+1} = u_{i+1,j} + u_{i−1,j} − u_{i,j−1}, j = 0, 1, 2, 3; i = 1, 2, 3.

The initial conditions give the following values:

u_{1,0} = u(1, 0) = 3, u_{2,0} = u(2, 0) = 4, u_{3,0} = u(3, 0) = 3,

and, since ut(x, 0) = 0, the central difference approximation gives u_{i,−1} = u_{i,1}, i = 1, 2, 3. The boundary conditions give the values u_{0,j} = 0 = u_{4,j} for all j.

For j = 0: since u_{i,−1} = u_{i,1}, the formula simplifies to u_{i,1} = 0.5(u_{i+1,0} + u_{i−1,0}).

i = 1 : u_{1,1} = 0.5(u_{2,0} + u_{0,0}) = 0.5(4 + 0) = 2.
i = 2 : u_{2,1} = 0.5(u_{3,0} + u_{1,0}) = 0.5(3 + 3) = 3.
i = 3 : u_{3,1} = 0.5(u_{4,0} + u_{2,0}) = 0.5(0 + 4) = 2.

These are the solutions at the interior points on the time level t = 0.5.

For j = 1: we use the formula u_{i,2} = u_{i+1,1} + u_{i−1,1} − u_{i,0}.

i = 1 : u_{1,2} = u_{2,1} + u_{0,1} − u_{1,0} = 3 + 0 − 3 = 0.
i = 2 : u_{2,2} = u_{3,1} + u_{1,1} − u_{2,0} = 2 + 2 − 4 = 0.
i = 3 : u_{3,2} = u_{4,1} + u_{2,1} − u_{3,0} = 0 + 3 − 3 = 0.

These are the solutions at the interior points on the time level t = 1.0.

For j = 2: we use the formula u_{i,3} = u_{i+1,2} + u_{i−1,2} − u_{i,1}.

i = 1 : u_{1,3} = u_{2,2} + u_{0,2} − u_{1,1} = 0 + 0 − 2 = − 2.
i = 2 : u_{2,3} = u_{3,2} + u_{1,2} − u_{2,1} = 0 + 0 − 3 = − 3.
i = 3 : u_{3,3} = u_{4,2} + u_{2,2} − u_{3,1} = 0 + 0 − 2 = − 2.

These are the solutions at the interior points on the time level t = 1.5.

For j = 3: we use the formula u_{i,4} = u_{i+1,3} + u_{i−1,3} − u_{i,2}.

i = 1 : u_{1,4} = u_{2,3} + u_{0,3} − u_{1,2} = − 3 + 0 − 0 = − 3.
i = 2 : u_{2,4} = u_{3,3} + u_{1,3} − u_{2,2} = − 2 − 2 − 0 = − 4.
i = 3 : u_{3,4} = u_{4,3} + u_{2,3} − u_{3,2} = 0 − 3 − 0 = − 3.

These are the solutions at the interior points on the required fourth time level t = 2.0.

Example 5.23 Solve uxx = utt, 0 < x < 1, t > 0, given u(x, 0) = 0, ut(x, 0) = 0, u(0, t) = 0 and u(1, t) = 100 sin (πt). Compute for four time steps with h = 0.25. (A.U. Nov/Dec 2003)

Solution We have c = 1 and h = 0.25. The value of the step length k is not prescribed. Since the method is not specified, we use an explicit method. We assume k = 0.25 so that r = 1. The method is given by

u_{i,j+1} = u_{i+1,j} + u_{i−1,j} − u_{i,j−1}, j = 0, 1, 2, 3; i = 1, 2, 3.

The initial conditions give the values u_{i,0} = 0 for all i, and, since ut(x, 0) = 0, u_{i,−1} = u_{i,1}. The boundary conditions give the values u_{0,j} = 0 for all j, and

u_{4,j} = 100 sin (πjk) = 100 sin (πj/4), that is,

u_{4,0} = 0, u_{4,1} = 100 sin (π/4) = 100/√2 = 50√2, u_{4,2} = 100 sin (π/2) = 100, u_{4,3} = 100 sin (3π/4) = 50√2, u_{4,4} = 100 sin (π) = 0.

For j = 0: the formula simplifies to u_{i,1} = 0.5(u_{i+1,0} + u_{i−1,0}).

i = 1 : u_{1,1} = 0.5(0 + 0) = 0. i = 2 : u_{2,1} = 0.5(0 + 0) = 0. i = 3 : u_{3,1} = 0.5(u_{4,0} + u_{2,0}) = 0.

These are the solutions at the interior points on the time level t = 0.25.

For j = 1: we use the formula u_{i,2} = u_{i+1,1} + u_{i−1,1} − u_{i,0}.

i = 1 : u_{1,2} = u_{2,1} + u_{0,1} − u_{1,0} = 0. i = 2 : u_{2,2} = u_{3,1} + u_{1,1} − u_{2,0} = 0.
i = 3 : u_{3,2} = u_{4,1} + u_{2,1} − u_{3,0} = 50√2 + 0 − 0 = 50√2.

These are the solutions at the interior points on the time level t = 0.5.

For j = 2: we use the formula u_{i,3} = u_{i+1,2} + u_{i−1,2} − u_{i,1}.

i = 1 : u_{1,3} = 0 + 0 − 0 = 0. i = 2 : u_{2,3} = u_{3,2} + u_{1,2} − u_{2,1} = 50√2 + 0 − 0 = 50√2.
i = 3 : u_{3,3} = u_{4,2} + u_{2,2} − u_{3,1} = 100 + 0 − 0 = 100.

These are the solutions at the interior points on the time level t = 0.75.

For j = 3: we use the formula u_{i,4} = u_{i+1,3} + u_{i−1,3} − u_{i,2}.

i = 1 : u_{1,4} = u_{2,3} + u_{0,3} − u_{1,2} = 50√2 + 0 − 0 = 50√2.
i = 2 : u_{2,4} = u_{3,3} + u_{1,3} − u_{2,2} = 100 + 0 − 0 = 100.
i = 3 : u_{3,4} = u_{4,3} + u_{2,3} − u_{3,2} = 50√2 + 50√2 − 50√2 = 50√2.

These are the solutions at the interior points on the required fourth time level t = 1.0.
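The four time steps above can be reproduced programmatically. The following sketch (the variable names are ours) advances the r = 1 scheme with the time-dependent boundary u(1, t) = 100 sin (πt) and recovers the level t = 1.

```python
import math

# Reproduce Example 5.23: r = 1 scheme u[i,j+1] = u[i+1,j] + u[i-1,j]
# - u[i,j-1], with h = k = 0.25 and boundary u(1, t) = 100 sin(pi t).
k = 0.25
g = lambda j: 100 * math.sin(math.pi * j * k)           # boundary value u[4,j]

prev = [0.0] * 5                                        # u(x, 0) = 0
# start-up step (u_t(x,0) = 0): u[i,1] = (u[i+1,0] + u[i-1,0]) / 2
curr = [0.0] + [(prev[i - 1] + prev[i + 1]) / 2 for i in range(1, 4)] + [g(1)]
for j in range(1, 4):                                   # levels t = 0.5 ... 1.0
    nxt = [0.0] * 5
    nxt[4] = g(j + 1)
    for i in range(1, 4):
        nxt[i] = curr[i + 1] + curr[i - 1] - prev[i]
    prev, curr = curr, nxt
# curr holds the level t = 1: interior values 50*sqrt(2), 100, 50*sqrt(2)
```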

Implicit methods Explicit methods have the disadvantage that they have a stability condition on the mesh ratio parameter r = (ck)/h. Explicit methods are stable for r ≤ 1. This condition restricts the values that can be used for the step lengths h and k. In most practical problems, where the computation is to be done up to a large value of t, these methods are not useful because the time consumed is too high. In such cases, we use the implicit methods. We derive the following two implicit methods.

(i) We write the following approximations at (xi, tj):

(∂²u/∂t²)_{i,j} = (1/k²) δ_t² u_{i,j},  (5.69)

(∂²u/∂x²)_{i,j} = (1/(2h²)) δ_x² [u_{i,j+1} + u_{i,j−1}].  (5.70)

Hence, the difference approximation to the wave equation at the node (xi, tj) is given by

(1/k²) δ_t² u_{i,j} = (c²/(2h²)) δ_x² [u_{i,j+1} + u_{i,j−1}],

or u_{i,j+1} − 2 u_{i,j} + u_{i,j−1} = (r²/2) δ_x² [u_{i,j+1} + u_{i,j−1}],  (5.71)

where r = (kc/h),

or u_{i,j+1} − (r²/2) δ_x² u_{i,j+1} = 2 u_{i,j} − u_{i,j−1} + (r²/2) δ_x² u_{i,j−1}.  (5.72)

We can expand the central differences and write

δ_x² u_{i,j+1} = u_{i+1,j+1} − 2 u_{i,j+1} + u_{i−1,j+1}, δ_x² u_{i,j−1} = u_{i+1,j−1} − 2 u_{i,j−1} + u_{i−1,j−1}.

We get

− (r²/2) u_{i−1,j+1} + (1 + r²) u_{i,j+1} − (r²/2) u_{i+1,j+1} = 2 u_{i,j} − (1 + r²) u_{i,j−1} + (r²/2)[u_{i+1,j−1} + u_{i−1,j−1}].

The nodal points that are used in the method are given in Fig. 5.32 (levels j − 1, j, j + 1, with the nodes (i ± 1, j + 1), (i, j + 1), (i, j), (i ± 1, j − 1) and (i, j − 1)).

Remark 22 Using Taylor series expansions, we can show that the truncation error of the method given in Eq. (5.72) is of order O(k⁴ + k²h²). Hence, the order of the method is O(k² + h²).

(ii) We use the approximation (5.69) for ∂²u/∂t², and the following approximation for ∂²u/∂x²:

(∂²u/∂x²)_{i,j} = (1/h²) δ_x² [u_{i,j+1} − u_{i,j} + u_{i,j−1}].  (5.73)

The difference approximation to the wave equation at the node (xi, tj) is given by

(1/k²) δ_t² u_{i,j} = (c²/h²) δ_x² [u_{i,j+1} − u_{i,j} + u_{i,j−1}]

or u_{i,j+1} − 2 u_{i,j} + u_{i,j−1} = r² δ_x² [u_{i,j+1} − u_{i,j} + u_{i,j−1}]

or u_{i,j+1} − r² δ_x² u_{i,j+1} = 2 u_{i,j} − u_{i,j−1} − r² δ_x² u_{i,j} + r² δ_x² u_{i,j−1}.  (5.74)

The nodal points that are used in the method are given in Fig. 5.33 (levels j − 1, j, j + 1, with the nodes (i ± 1, ·) and (i, ·) on all three levels).

Remark 23 Using Taylor series expansions, we can show that the truncation error of the method given in Eq. (5.74) is again of order O(k⁴ + k²h²). Hence, the order of the method is O(k² + h²).

Remark 24 Implicit methods often have very strong stability properties. Stability analysis of the above implicit methods (5.72) and (5.74) shows that the methods are stable for all values of the mesh ratio parameter r. Hence, the methods are unconditionally stable. This implies that there is no restriction on the values of the mesh lengths h and k. Depending on the particular problem that is being solved, we may use sufficiently large values of the step lengths.
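Remark 24 can be illustrated numerically. In the sketch below (the spike initial line and helper names are our own, chosen only for illustration), the explicit scheme (5.56) with r = 2 blows up, while the implicit scheme (5.72) with the same r remains bounded.

```python
# Illustration of Remark 24 with r = 2: explicit (5.56) blows up,
# implicit (5.72) stays bounded.  Zero boundary values, u_t(x,0) = 0,
# and a spike initial line on three interior nodes (our own data).
r2 = 4.0                                   # r^2 = 4, i.e. r = 2 > 1

def pad(u):                                # attach the zero boundary values
    return [0.0] + u + [0.0]

def tri_solve(diag, off, rhs):
    """Gauss elimination for a symmetric 3x3 tridiagonal system."""
    A = [[diag, off, 0.0], [off, diag, off], [0.0, off, diag]]
    b = rhs[:]
    for p in range(2):
        m = A[p + 1][p] / A[p][p]
        for c in range(3):
            A[p + 1][c] -= m * A[p][c]
        b[p + 1] -= m * b[p]
    x = [0.0] * 3
    for p in (2, 1, 0):
        s = sum(A[p][c] * x[c] for c in range(p + 1, 3))
        x[p] = (b[p] - s) / A[p][p]
    return x

spike = [0.0, 1.0, 0.0]

# Explicit scheme (5.56); start-up u[i,1] = (1-r^2)u[i,0] + (r^2/2)(nbrs).
prev = spike[:]
P = pad(prev)
curr = [(1 - r2) * prev[i] + (r2 / 2) * (P[i] + P[i + 2]) for i in range(3)]
for _ in range(9):
    P, C = pad(prev), pad(curr)
    nxt = [2 * (1 - r2) * C[i + 1] + r2 * (C[i] + C[i + 2]) - P[i + 1]
           for i in range(3)]
    prev, curr = curr, nxt
explicit_size = max(abs(v) for v in curr)

# Implicit scheme (5.72); the start-up system (5.76) with g = 0, divided
# by 2, reads -(r^2/2)u[i-1,1] + (1+r^2)u[i,1] - (r^2/2)u[i+1,1] = u[i,0].
prev = spike[:]
curr = tri_solve(1 + r2, -r2 / 2, prev)
for _ in range(9):
    P, C = pad(prev), pad(curr)
    rhs = [2 * C[i + 1] - (1 + r2) * P[i + 1] + (r2 / 2) * (P[i] + P[i + 2])
           for i in range(3)]
    prev, curr = curr, tri_solve(1 + r2, -r2 / 2, rhs)
implicit_size = max(abs(v) for v in curr)
# explicit_size is astronomically large; implicit_size remains of order one
```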

Computational procedure The initial condition u(x, 0) = f(x) gives the solution at all the nodal points on the initial line (level 0). The boundary conditions u(0, t) = g(t), u(l, t) = h(t), t > 0, give the solutions at all the nodal points on the lines x = 0 and x = l for all time levels. We choose the values for k and h. This gives the value of the mesh ratio parameter r. Alternately, we may choose the values for r and h.

On level 1, we use the same approximation as in the case of the explicit method, that is, we approximate

u_{i,−1} = u_{i,1} − 2k g_i, where g_i = g(xi).  (5.75)

For j = 0, we apply the finite difference method (5.72) and eliminate the external points u_{i,−1} using (5.75). We obtain

u_{i,1} − (r²/2) δ_x² u_{i,1} = 2 u_{i,0} − (u_{i,1} − 2k g_i) + (r²/2) δ_x² (u_{i,1} − 2k g_i)

or 2 u_{i,1} − r² δ_x² u_{i,1} = 2 u_{i,0} + 2k g_i − k r² (g_{i+1} − 2 g_i + g_{i−1}),

that is,

− r² u_{i−1,1} + 2(1 + r²) u_{i,1} − r² u_{i+1,1} = 2 u_{i,0} + 2k g_i − k r² (g_{i+1} − 2 g_i + g_{i−1}).  (5.76)

If the initial condition is ut(x, 0) = 0, then g_i = 0 for all i, and the right hand side of (5.76) is simply 2 u_{i,0}.
Since u_t(x, 0) = 0, we have g_i = 0 and hence u_{i,−1} = u_{i,1}. On level 1, we apply the finite difference method (5.76), which for r = 1 becomes

    − u_{i−1,1} + 4u_{i,1} − u_{i+1,1} = 2u_{i,0},  i = 1, 2, 3.

The boundary conditions give the values u_{0,j} = 0 = u_{4,j} for all j.

[Fig. 5.34. Mesh with h = 1/4: nodes at x = 0, 1/4, 2/4, 3/4, 1.0.]

The initial condition u(x, 0) = sin(πx) gives the values

    u_{0,0} = 0, u_{1,0} = sin(π/4) = 1/√2 = 0.70711, u_{2,0} = sin(π/2) = 1,
    u_{3,0} = sin(3π/4) = 1/√2 = 0.70711, u_{4,0} = 0.

For j = 0, we obtain the equations

    i = 1:  4u_{1,1} − u_{2,1} = 2u_{1,0} = 2/√2 = √2 = 1.41421,
    i = 2:  − u_{1,1} + 4u_{2,1} − u_{3,1} = 2u_{2,0} = 2,
    i = 3:  − u_{2,1} + 4u_{3,1} = 2u_{3,0} = 1.41421.

Subtracting the first and third equations, we get 4(u_{1,1} − u_{3,1}) = 0, that is, u_{1,1} = u_{3,1}. Therefore, the system reduces to

    4u_{1,1} − u_{2,1} = 1.41421,  − 2u_{1,1} + 4u_{2,1} = 2.

The solution is given by

    u_{1,1} = 7.65684/14 = 0.54692 = u_{3,1},  u_{2,1} = 0.77346.
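The 3 × 3 system on level 1 can be checked with a few lines of NumPy; this verification script is mine, not part of the book:

```python
import numpy as np

# Check of the level-1 hand computation in Example 5.24.
u0 = np.sin(np.pi * np.array([0.25, 0.5, 0.75]))     # u_{1,0}, u_{2,0}, u_{3,0}

# System (5.76) with r = 1: -u_{i-1,1} + 4 u_{i,1} - u_{i+1,1} = 2 u_{i,0}
A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
u_level1 = np.linalg.solve(A, 2.0 * u0)
# u_level1 is approximately (0.54692, 0.77346, 0.54692)
```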

For j > 0, we use the method (5.74). For r = 1, we obtain

    u_{i,j+1} − (1/2) δx² u_{i,j+1} = 2u_{i,j} − u_{i,j−1} + (1/2) δx² u_{i,j−1}

or

    − 0.5u_{i−1,j+1} + 2u_{i,j+1} − 0.5u_{i+1,j+1}
        = 2u_{i,j} + 0.5u_{i−1,j−1} − 2u_{i,j−1} + 0.5u_{i+1,j−1}.

For j = 1, we get the equations

    i = 1:  2u_{1,2} − 0.5u_{2,2} = 2u_{1,1} + 0.5u_{0,0} − 2u_{1,0} + 0.5u_{2,0}
                = 2(0.54692) + 0 − 2(0.70711) + 0.5(1) = 0.17962,
    i = 2:  − 0.5u_{1,2} + 2u_{2,2} − 0.5u_{3,2} = 2u_{2,1} + 0.5u_{1,0} − 2u_{2,0} + 0.5u_{3,0}
                = 2(0.77346) + 0.5(0.70711) − 2(1) + 0.5(0.70711) = 0.25403,
    i = 3:  − 0.5u_{2,2} + 2u_{3,2} = 2u_{3,1} + 0.5u_{2,0} − 2u_{3,0} + 0.5u_{4,0} = 0.17962.

Subtracting the first and third equations, we get u_{1,2} = u_{3,2}. Therefore, the system reduces to

    2u_{1,2} − 0.5u_{2,2} = 0.17962,  − u_{1,2} + 2u_{2,2} = 0.25403.

The solution is given by

    u_{1,2} = 0.13893 = u_{3,2},  u_{2,2} = 0.19648.

REVIEW QUESTIONS

1. Write the one dimensional wave equation governing the vibrations of an elastic string.
Solution The one dimensional wave equation governing the vibrations of an elastic string is given by u_tt = c²u_xx, 0 ≤ x ≤ l, t > 0, where c² depends on the material properties of the string, the tension T in the string and the mass per unit length of the string.

2. Write an explicit method for solving the one dimensional wave equation u_tt = c²u_xx, 0 ≤ x ≤ l, t > 0.
Solution An explicit method for solving the one dimensional wave equation is given by

    u_{i,j+1} = 2(1 − r²)u_{i,j} + r²[u_{i+1,j} + u_{i−1,j}] − u_{i,j−1},  j = 0, 1, ...,

where r = (kc)/h.
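The explicit method of Problem 2 is easy to march in code. The sketch below is my own illustration, not from the book; it uses the data of Example 5.24 with r = 1, for which the explicit scheme reproduces the exact solution sin(πx) cos(πt) at the nodes:

```python
import numpy as np

# Marching the explicit method u_{i,j+1} = u_{i+1,j} + u_{i-1,j} - u_{i,j-1}
# (the r = 1 case) for u_tt = u_xx, u(x,0) = sin(pi x), u_t(x,0) = 0,
# u(0,t) = u(1,t) = 0.
h = k = 0.25                                         # c = 1, so r = kc/h = 1
x = np.linspace(0.0, 1.0, 5)
u_prev = np.sin(np.pi * x)                           # level 0: u(x, 0)
u_curr = u_prev.copy()
u_curr[1:-1] = 0.5 * (u_prev[2:] + u_prev[:-2])      # level 1, since u_t(x, 0) = 0
for _ in range(3):                                   # march to t = 4k = 1
    u_next = np.zeros_like(u_curr)                   # boundaries stay zero
    u_next[1:-1] = u_curr[2:] + u_curr[:-2] - u_prev[1:-1]
    u_prev, u_curr = u_curr, u_next
# u_curr now equals sin(pi x) cos(pi), that is, -sin(pi x), at t = 1
```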

3. What is the order and truncation error of the method given in Problem 2?
Solution The order of the method is O(k² + h²). We write the Taylor series expansions of all the terms in the method and simplify. The truncation error is given by

    T.E. = (k²h²c²/12)(r² − 1) ∂⁴u/∂x⁴ + ...

4. Write an explicit method for solving the one dimensional wave equation u_tt = c²u_xx, 0 ≤ x ≤ l, t > 0, when r = [(kc)/h] = 1.
Solution The method is given by

    u_{i,j+1} = u_{i+1,j} + u_{i−1,j} − u_{i,j−1},  j = 0, 1, ...

5. Write an implicit method for solving the one dimensional wave equation u_tt = c²u_xx, 0 ≤ x ≤ l, t > 0.
Solution An implicit method is given by

    u_{i,j+1} − (r²/2) δx² u_{i,j+1} = 2u_{i,j} − u_{i,j−1} + (r²/2) δx² u_{i,j−1}

or

    − (r²/2)u_{i−1,j+1} + (1 + r²)u_{i,j+1} − (r²/2)u_{i+1,j+1}
        = 2u_{i,j} + (r²/2)u_{i−1,j−1} − (1 + r²)u_{i,j−1} + (r²/2)u_{i+1,j−1},  j = 0, 1, ...,

where r = (kc)/h, and h and k are step lengths in the x and t directions respectively.

6. What do you mean by error in error analysis? (A.U. Nov/Dec 2003)
Solution In error analysis, error means the truncation error of the method. We write the Taylor series expansions of all the terms in the method and simplify. The leading term of this series (the first non-vanishing term) is called the truncation error.

7. For what values of λ = (c∆t)/∆x is the explicit method for solving the hyperbolic equation ∂²u/∂x² = (1/c²)(∂²u/∂t²) stable? (A.U. Apr/May 2003)
Solution The explicit method is stable for λ ≤ 1.

8. For what values of r = [(kc)/h] is the explicit method for the one dimensional wave equation stable?
Solution The explicit method is stable for r ≤ 1.

9. For what values of r = [(kc)/h] is the implicit method for the one dimensional wave equation stable?
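The stability statement asked for in Problem 9 can be checked by a von Neumann calculation: substituting u_{i,j} = ξ^j e^{Iθi} into the implicit method of Problem 5 gives (1 + a)ξ² − 2ξ + (1 + a) = 0 with a = 2r² sin²(θ/2); the roots are complex conjugates whose product is 1, so |ξ| = 1 for every r. A small numerical check (my own script, not from the book):

```python
import numpy as np

# Numerical von Neumann check for the implicit wave method: the amplification
# factors xi solve (1 + a) xi^2 - 2 xi + (1 + a) = 0, a = 2 r^2 sin^2(theta/2),
# and their magnitude should be 1 for every mesh ratio r.
max_amp = 0.0
for r in [0.5, 1.0, 2.0, 5.0, 10.0]:                 # includes r > 1
    for theta in np.linspace(0.0, np.pi, 181):
        a = 2.0 * r ** 2 * np.sin(theta / 2.0) ** 2
        xi = np.roots([1.0 + a, -2.0, 1.0 + a])      # amplification factors
        max_amp = max(max_amp, float(np.max(np.abs(xi))))
# max_amp stays at 1 (to rounding), consistent with unconditional stability
```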

t) = 0 and u(x. subject to y (0.U. t > 0. j+1 and ui+1. and ∆t = 0. t > 0. and ut(x. 0 ≤ x ≤ 1. 2003) 7. t > 0. u(0. u(0. 0) = sin (π x). 0 < x < 1. 0) = 0. 10. Nov/Dec. ut(x. t ) = 0. 0) = sin (2π x). 0) = 10 + x(1 – x).U. t) = u(1. y t ( x.U. u(x. Nov/Dec. t ) = 0 and u(x. 0 ≤ x ≤ 1. Solve uxx = utt. and u(x. Solve ytt = yxx. and ∆t = 0. and ut (x. 0) = 0. (1/2) < x ≤ 1. (A.U. 1999) 6. t) = 0. Solve utt = uxx. t) = u(1. ut(x. u(x. t ) = u (1.j+1. t) = 100 sin (π t).0) = 0. t > 0. 0 ≤ x ≤ (1/2). 0) = 0. up to t = 0. subject to the following conditions u(0. 8. Approximate the solution of the wave equation utt = uxx. 0) = 0. Apr/May 2003.1. 0) = sin (2π x). EXERCISE 5. that is. t) = 0. taking h = 1/4. t ) = 0.25. 0) = 0. Solve utt = uxx. Solve utt = uxx. 0) = x – x2. (A. (A. by finite difference method for one time step with h = 0. using h = k = 0. 0 < x < 1. 0 < x < 1. (A. y (1. Compute u for four time steps.5 1. with ∆x = 0. and ut(x. 0 ≤ x ≤ 1. Apr/May 2006) 9.U. 2005) . and ut(x. 0) = – 1. t) = u(1. t > 0 u(0. 2004) 4. 2004) 3. t) = u(1. Nov/Dec. It uses the three consecutive unknowns ui–1. t) = 0 and u(x.25 for three time steps. u(0. ui. Solve the wave equation utt = uxx . (A. What type of system of equations do we get when we apply the implicit method to solve the one dimensional wave equation? Solution We obtain a linear tridiagonal system of algebraic equations. 0 < x < 1. t > 0. u(x.j+1 on the current time level j + 1. t > 0. Nov/Dec. t > 0 with u (0. Approximate the solution of the wave equation utt = uxx.U.25 for three time steps. 0) = 0.2 up to one half of the period of vibration by taking appropriate time step. with ∆x = 0. 0) = ut(x. Apr/May 2000) 5.25. t) = 0. given u(x. Nov/Dec.306 NUMERICAL METHODS Solution The implicit method is stable for all value of r. Approximate the solution to the equation uxx – utt = 0. taking h = 0. given u(x. (A.1 for three time steps. 0) = 0. 0 < x < 1. 0) = 1. 0) = sin3 (π x).U. t > 0. and ut(x. 
0) = 100(x – x2).5 with a spacing of 0. and y(x. 2. t) = u(1. u(0. (A.25. 0 < x < 1.25. the method is unconditionally stable. Compute u for four time steps with h = 0. t > 0. t > 0. t) = 0. t) = 0. and u(1. 0) = u(0. 0 < x < 1. 0 ≤ x ≤ 1. t) = u(1.

y2 = – 0. solve utt = uxx.00218. 8. x2 + y2 = 0.74).71661. y1 = 0. t > 0. Errors in magnitude: 2. y1 = 0. 1. 2.2 1.69231. y3 = 0. 9. with ∆x = 0. y2 = 2.10435. | ε1 | = 0. y). parabolic for 4.BOUNDARY VALUE PROBLEMS IN ORDINARY DIFFERENTIAL EQUATIONS.68562. 5.28408.25. y2 = 2. y3 = – 0.7 ANSWERS AND HINTS Exercise 5. 5.3 In all the problems. hyperbolic for x2 + 4y2 < 4. y3 = 0.25.54319.04400. 6. 0. Using the implicit method given in Eq. y3 = 0. | ε1 | = 0.11128. Errors in magnitude: 0. y3 = 1. u(0. Exercise 5.19332. 10. t) = 0. y2 = 0.00152. y3 = 1. 0. y1 = 2. 4.04217.72).01220.25. y2 = 0. 1. y2 = – 0. y2 = 0. Hyperbolic for all (x.00146.04217. y2 = 0.53334. Hyperbolic for all (x. given u(x.00203.06878. parabolic for x2 + 4y2 = 4. hyperbolic for x2 + y2 > 0. y1 = 0.94938. y2 = 0. Using the standard five point formula.25. | ε2 | = 0. y)..00551. 0) = 100(x – x2).39707. y1 = 1. 0. (iii) y1 = 2. y2 = 0.00378.00165. | ε2 | = 0.25. we obtain the mesh as given in Fig. 0 < x < 1. we obtain the system of equations as Au = b. 5. . y1 = – 0. where 2. y3 = 1. y1 = 0. 3. t > 0. (Oscillatory solutions). y).84397.45488.23159. y1 = 0. with k = 0.28092.61538.13319.01677. and ∆t = 0.07692. t) = u(1.25.35. Elliptic for x2 + y2 < 0.(5.04400. y1 = 0.07641. ut(x.01626. Errors in magnitude: 0.21767.5. 10. h = 0. Elliptic for all (x. (ii) y1 = 0. 3. Exercise 5.00301. 7. 0) = 0. 0.19888. the resulting equations are solved by the Gauss elimination procedure.1 In all problems. y2 = 0.16811.07547.45503.25 for two time steps. Elliptic for x2 + 4y2 > 4. 307 using the implicit method given in Eq.08650. (i) y1 = – 0. Compute for two time steps.(5..44811.22502.46681. 0. y2 = 0.

– 0. (By symmetry we can start by setting u2 = u3).6875.14063. – 3. u2 = 8/9.10417. 0. 7. – 0. 0. 3.58594.46674.58594. 3. u3 = – 25/216.3. 0. – 2. – 22/9.96778.29297.45023. – 1. b = [5. 0.26074. u3 = 2/9.50537.42188.84656. u1 = – 101/216. – 2. b = [1.50537.20341. 1. 5. 1. 3. (iii) 1.10894. 199. u3 = – 7/4.74219. u2 = – 13/4. 398. In Fig. – 0.69141.25. 5]T. 3.69337.10417.19321. u3 = 0. u1 = 300. b = [– 10/9. 0. 1. u4 = 4. – 0. 0. 0.375. – 0. – 1. – 1.98389. (If we use symmetry. – 2. i = 1. – 12. – 28/27]T. 0. – 4/3]T. – 7. 3. u1 = – 5/2. u1 = 5/9. – 0. b = [– 600. 3. – 2.70315. u4 = 300. 394. – 2. we have solved the systems by Gauss elimination method. 1. 0. b = [4/3.47071. 0. u1 = 1.41667.16161.25269. 2. 2. 288.99598. – 0. – 3]T.48438.93555.46419. 3.52149. u4 = 8/3.84375.14063. – 3.96778.99195.99195.19003. 376.13484. 2.28125. u3 = 0.25.38624. 297. 253.25.24024.69312. b = [– 3. – 2.19141. u4 = 1/3. 194.38624. we get the fourth iteration as – 2. 176. (By symmetry we can start by setting u1 = u4). – 2. 2.19066. u4 = 5/9. – 2.35.08854. u3 = 2. 0. 5.5625. 0. 2. u3 = 200. 0.92188.87110.03125. 0.11532. u2 = 10/3. 2/9.11404. u40) are obtained by the standard five point formula. 0. . 106.125. (v) Set all initial approximations as zeros. (iv) – 0. – 0. b = [43/27. 306. 297. – 10/9]T. b = [4/3.5625. 1. – 0. u1 = 2. 299. 2. – 1000.28125. u4 = 1/3.99598. 1. u1 = – 1/3.84375. u2 = – 35/216. 4. 10/27. u2 = 0.76075. – 0.48047. 1. u4 = – 5/2. u4 = 41/ 216. – 3. 253. that is.74219.47071.87110. ( 10.u= 0 −4 1 1 1 −4 OP PP Q LMu OP MMu PP u Nu Q 1 2 3 4 u1 u3 u2 u4 bi . 3. – 9. – 1.03125.125. 0. 3.3125. (i) 0. u4 = 2. u3 = 4/3. 299.45399.99799. (By symmetry we can start by setting u1 = u4). u2 = 0.0. 0. 0.38282. u2 = 4. 288. 0.01563.49268. – 4/3]T. 6.28125. 299. ( ( ( whereas u20) . 1.5. u1 = – 1/3.1875.15524.74024). 1. Problems 1 to 9.69312. u2 = 400. 1. – 1.6875.93555.375. – 0.48438.73537. 1. 1. unless stated other wise.23535. 3. – 200. – 11]T. 9. 0.70315. b = [– 2. 8. – 0. – 0. 
u1 = 5/3.98389. u30) .51563. – 1.16034. 4 are obtained from boundary conditions. We obtain the initial approximation u10) using the five point diagonal formula and u 4 = 0. (By symmetry we can start by setting u1 = u4). 198. Example 5. (ii) 225. – 6]T. 0. 0. u1 = u4. 3.8125. 8. 5/27. – 3. 2. u2 = 3.52149. – 0. – 0. – 3.75. u3 = 1.79297.85158. – 600]T. 399.308 NUMERICAL METHODS LM− 4 1 A=M 1 MN 0 and 1 1 0 0 1 −4 . 0. 3.38282.

10. (ii) 0. 0. 0. 9. choose k such that r = (k/h) = 1. 15.125.125. 0. 0. The given data is discontinuous.8186. 0). 0. – 1/81. Use explicit method. choose k such that r = (k/h) = 1. Since k is not prescribed. (iii) 0.5 1. 0.17857. 0. – 0.0625. Exercise 5.20. 9. 0.0927. 15.25. 0. 0.0957.04. 1.5. 18. 0. – 0. – 1. 1.13672.5. 0. 0.3588. 50 2 .. The effect of the singularity at x = 1/2 is propagated into the interior when we use finite difference methods.1775. – 1. 100.86734. 0.75. 0.7699. 41.1775. 8. 2.8069. 5. 0. – 1. 0. 0. Since k is not prescribed. 0. 0. | ε1 | = | ε3 | = 0. 0. 0.09668.BOUNDARY VALUE PROBLEMS IN ORDINARY DIFFERENTIAL EQUATIONS.70711. 0. 1. – 1 /( 2 2 ) . 20. 0. 0.. 8. choose k such that r = (k/h) = 1. Since k is not prescribed. 2. 10. – 1. 1/81. Since k is not prescribed. – 0.12.24.499943. 0. 0. 5. 0.716668. 0.7699. Assume h = 0.740683.5.749665. – 0.24.740683. – 1/9. (5.17053.12095.5491. Such problems require special techniques to deal with singularities.0 = 20. 0) is propagated into the interior when we use finite difference methods.0114. – 0. 0). 1.25. 5. 0. 0. – 1 /(2 2 ) . Use explicit method. Use explicit method. 0. u5. 0.332299.08. 0. 6. 8. 3.20. 0. – 0.5. if we take the initial conditions valid at (0. 0. 0. 0. 0. 0. Use explicit method. 0. 0.5. 0. 10. 0. 0.748856. we obtain the solutions as 15. 7. Since k is not prescribed. 0.12095.20. 1/9. 12.17857. choose k = h such that r = 1. – 0. The effect of the singularities at (0.08.499723.08. 0. 1. Such problems require special techniques to deal with singularities. – 1 /(2 2 ) .4450. 0. 4. – 0. | ε | = 0. 2.5.7328. 1. 7. | ε2 | = 0.5491.5. 0. 2. – 0. 1.53571.5. 0. 5. (i) 0.09668. 309 Exercise 5. 0.24999. choose k = 0.12. 0 = 20. 0. 6. – 1. 0. 0. 50 2 . – 0.00695.3397.12. 0. 8. – 0. 0.04.12. 0.0927. The given data is discontinuous. 0. Use explicit method. 1. 4.8186. 1 /( 2 2 ) . Use explicit method. .198160. – 1. Computations are to be done up to t = 1. 0. 7.24.20. 0.249941.5.5.08. 1. 0.08779. 
In the present problem. 100. 0. 0.04. 4. – 0.67346. 10. – 0. 5. 8. 3. 10. Period of vibration = [(2l /c)] = 2. 0.4 1.04. 0. Use explicit method.70711.2345. 0. 50 2 .16. – 0. 0. 0. 10. 25. – 0. 1. – 0. 20.8614. 0. that is.5.21.5.0625. 12.5.716668. 0. – 0. 9. 0) and (5. – 0. – 0.08779. – 0. 0. 2. u0. 0. – 0. 0.16. 0. 0. 0.6485. 5. 0. 50 2 .0239.5. 0. 3.2 such that r = (k/h) = 1.86734. 0. 0.



