
Numerical method

Chapter one
Error estimation
1.1 Introduction to Error
Numerical analysis is the branch of mathematics concerned with devising and analysing methods that give approximate solutions to problems which are difficult or impossible to solve exactly.
There are two kinds of numbers—exact and approximate numbers.
1. Exact number 𝑥ₑ: a number with which no uncertainty is associated and for which no approximation is taken.
2. Approximate number 𝑥ₐ: a number that differs only slightly from an exact number.
Example: The numbers 1, 2, 3, …, π, √2, e, etc., are all exact, whereas 1.41, 1.414, 1.4142, 1.41421, … are approximate values of √2. Similarly, 3.14, 3.141, 3.14159, … are approximate values of π.
Significant Digits/Figures
The digits that are used to express a number are called significant digits/figures.
A digit in a given number is significant if one of the following rules applies.
1. All non-zero digits are significant.
2. A zero digit, which lie between significant digits are significant.
3. Final (trailing) zeros in the decimal portion ONLY are significant digits.
Note: The significant figures of a number written in scientific notation (M × 10ⁿ) are all the digits explicitly shown in M. Significant figures are counted from left to right, starting with the left-most non-zero digit.
Number       Significant digits (figures)   No. of significant digits
3.78         3, 7, 8                        3
3080         3, 0, 8                        3
3.7080       3, 7, 0, 8, 0                  5
0.378        3, 7, 8                        3
0.0378       3, 7, 8                        3
3.00         3, 0, 0                        3
8.23 × 10ⁿ   8, 2, 3                        3
Decimal places:
The digits to the right of the decimal point are called decimal places; that is, the significant digits after the decimal point give the number of decimal places.

Errors
One of the most important aspects of numerical analysis is error analysis. Errors may occur at any stage of the process of solving a problem. By the error we mean the difference between the true value and the approximate value.

∴ Error = True value – Approximate value.


1.2. Sources of Errors


When a computational procedure is involved in solving a scientific-mathematical problem, errors
often will be involved in the process. A rough classification of the kinds of original errors that
might occur is as follows:
Modeling Errors: Mathematical modeling is the process of representing a physical system by mathematical equations. The simplifications made in this process introduce errors, called modeling errors.
Blunders and Mistakes: Blunders can occur at any stage of the mathematical modeling process and can contribute to all the other components of error. They can be avoided by a sound knowledge of fundamental principles and by taking proper care in the approach to and design of a solution. Mistakes are due to programming errors.
Machine Representation and Arithmetic Errors: These errors are inevitable when floating-point arithmetic is used on computers or calculators. Examples are rounding and chopping errors.
Mathematical Approximation Errors: This error is also known as a truncation error or
discretisation error. These errors arise when an approximate formulation is made to a problem
that otherwise cannot be solved exactly.

Accuracy and Precision: Accuracy refers to how closely a computed or measured value agrees
with the true value. Precision refers to how closely individual computed or measured values
agree with each other. Inaccuracy (also known as bias) is the systematic deviation from the truth.
Imprecision (uncertainty) refers to the magnitude of the scatter.
Accuracy is a qualitative term that describes how close the measurements are to the
actual (true) value. Precision describes the spread of these measurements when
repeated.


1.3. Types of Errors


Usually, we come across the following types of errors in numerical analysis.
1. Absolute and Relative Errors: If x is the exact (true) value of a quantity and xₐ is its approximate value, then
i. absolute error: Eₐ = |x − xₐ|
ii. relative error: Eᵣ = |x − xₐ| / |x|, provided x ≠ 0 and x is not too close to zero
iii. percentage relative error: Eₚ = 100Eᵣ = 100 |x − xₐ| / |x|
Example: 1. Let the exact or true value = 20/3 and the approximate value = 6.666.
Then, the absolute error is 0.000666... = 2/3000, and relative error is (2/3000)/ (20/3) = 1/10000.
Example 2: Three approximate values of the number 1/3 are given as 0.30, 0.33 and 0.34. Which of the three is the best approximation?
Solution: The value with the smallest absolute error is the best approximation, so we compute the absolute errors. Let x₁ = 0.30, x₂ = 0.33 and x₃ = 0.34. Then

E₁ = |1/3 − 0.30| = 1/30

E₂ = |1/3 − 0.33| = 1/300

E₃ = |1/3 − 0.34| = 1/150

Here, 1/300 < 1/150 < 1/30.

Therefore, the best approximation of the number is 0.33.
2. Rounding-off error: The process of dropping unwanted digits from a number is called rounding off, and the error created by this process is called the rounding-off error.

Example 1: Let x = 0.142862 × 10¹, which is to be stored using only 4 digits in the mantissa. Determine the error due to
1. rounding,
2. chopping.
Solution: Chopping: here the mantissa 0.142862 is cut back to 0.1428, so the stored number becomes xₐ = 0.1428 × 10¹ and the resulting error is
e = |x − xₐ| = |0.142862 × 10¹ − 0.1428 × 10¹| = 0.6200 × 10⁻³
Rounding: here the mantissa 0.142862 is rounded to 0.1429, so the stored number becomes xₐ = 0.1429 × 10¹ and the resulting error is
e = |x − xₐ| = |0.142862 × 10¹ − 0.1429 × 10¹| = 0.3800 × 10⁻³

Example 2: The number 22/7 (a common rational approximation of π) is to be represented using 5 decimal digits. Determine the relative error due to
a. rounding, expressed as a percent;
b. chopping, expressed as a percent.
Solution:
a. In the case of rounding, 22/7 = 3.142857… ≈ 3.1429 (the first discarded digit is 5 followed by non-zero digits, so the last retained digit is rounded up), and the relative error is

Eᵣ = |22/7 − 3.1429| / (22/7) = 0.0000429 / 3.142857 = 0.00001364,

and Eₚ = 100Eᵣ = 0.001364 %.

b. In the case of chopping, 22/7 ≈ 3.1428, and the relative error is

Eᵣ = |22/7 − 3.1428| / (22/7) = 0.0000571 / 3.142857 = 0.00001818,

and Eₚ = 100Eᵣ = 0.001818 %.


3. Inherent Errors: Most numerical computations are inexact either because of the given data or because of the limitations of the computing aids, such as mathematical tables, desk calculators or digital computers. Because of these limitations numbers have to be rounded, causing errors called inherent errors.
Inherent errors cannot be completely eliminated, but they can be minimized by selecting
• better initial data,
• a better mathematical model to represent the problem, and
• higher-precision computer arithmetic.
The inherent error arises either due to the simplified assumptions in the mathematical
formulation of the problem or due to the errors in the physical measurements of the parameters
of the problem.
4. Truncation Errors: Mathematical models may involve algebraic, transcendental or other types of equations. Often such equations cannot be solved analytically, so we use numerical methods or truncate an infinite process after a finite number of terms. The errors introduced in this way are called truncation errors.
Example: Approximating the Maclaurin series of eˣ by eˣ ≈ 1 + x + x²/2!, the truncation error is

eˣ − (1 + x + x²/2!) = x³/3! + x⁴/4! + x⁵/5! + ⋯
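The size of this truncation error is easy to check numerically. The short Python sketch below (the function name and the chosen value of x are ours, for illustration only) compares the truncated series with the exact exponential:

    import math

    def exp_truncated(x, terms=3):
        # partial sum 1 + x + x^2/2! + ... with the given number of terms
        return sum(x**k / math.factorial(k) for k in range(terms))

    x = 0.5
    approx = exp_truncated(x, terms=3)       # 1 + x + x^2/2!
    print(math.exp(x) - approx)              # ≈ x^3/3! + x^4/4! + ... ≈ 0.0237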


Chapter 2: Solution of Non Linear Equations


Introduction
In this chapter we shall discuss some numerical methods for solving algebraic and transcendental
equations. The equation 𝑓 (𝑥) = 0 is said to be algebraic if 𝑓(𝑥) is purely a polynomial in 𝑥. If
𝑓(𝑥) contains some other functions, namely, Trigonometric, Logarithmic, Exponential, etc., then
the equation 𝑓(𝑥) = 0 is called a Transcendental Equation.
One of the most common problems encountered in engineering analysis is: given a function 𝑓(𝑥), find the values of 𝑥 for which 𝑓(𝑥) = 0. The solutions (values of 𝑥) are known as the roots of the equation 𝑓(𝑥) = 0, or the zeros of the function 𝑓(𝑥).
Theorem: If a continuous function 𝑓(𝑥) assumes values of opposite sign at the end points of an interval [𝑎, 𝑏], i.e., 𝑓(𝑎)𝑓(𝑏) < 0, then the interval contains at least one root of the equation 𝑓(𝑥) = 0; in other words, there is at least one number 𝑐 ∈ (𝑎, 𝑏) such that 𝑓(𝑐) = 0.
There are two types of methods available to find the roots of algebraic and transcendental equations of the
form 𝑓(𝑥) = 0.
1. Direct Methods: These methods give the exact value of the roots (in the absence of round off errors)
in a finite number of steps and determine all the roots at the same time.
2. Iterative methods: These methods, also known as trial and error methods, are based on the
idea of successive approximations, i.e., starting with one or more initial approximations to
the value of the root, we obtain the sequence of approximations by repeating a fixed
sequence of steps over and over again till we get the solution with reasonable accuracy.
These methods generally give only one root at a time.
The indirect or iterative methods are further divided into two categories: bracketing and open methods.
The bracketing methods require the limits between which the root lies, whereas the open methods require
the initial estimation of the solution. Bisection and False position methods are two known examples of the
bracketing methods. Among the open methods, the Newton-Raphson and the method of successive
approximation are most commonly used. The most popular method for solving a non-linear equation is
the Newton-Raphson method and this method has a high rate of convergence to a solution.
Rate of Convergence
Definition: An iterative method is said to be of order 𝑝, or to have rate of convergence 𝑝, if 𝑝 is the largest positive number for which there exists a finite constant 𝑐 ≠ 0 such that |e_{k+1}| ≤ c|e_k|ᵖ,
where e_k = x_k − ξ is the error in the k-th iterate.
Criterion to terminate the iteration procedure: Since we cannot perform an infinite number of iterations, we need a criterion to stop. We select a tolerance ε > 0 and generate x₁, x₂, x₃, …, x_k, … until one of the following conditions is met:
1. The equation 𝑓(𝑥) = 0 is satisfied to the given accuracy, i.e., f(x_k) is bounded by the error tolerance ε: |f(x_k)| ≤ ε.
2. The magnitude of the difference between two successive iterates is smaller than the given accuracy or error bound ε: |x_{k+1} − x_k| ≤ ε.

3. The relative error is smaller than ε: |x_{k+1} − x_k| / |x_{k+1}| ≤ ε.
Generally, we use the second criterion.
In this chapter, we present the following indirect or iterative methods with illustrative examples:
1. Bisection Method
2. Iteration Method and
3. Newton-Raphson Method (Newton’s method)
A. Bisection Method (Interval halving method)
This is one of the simplest iterative methods and is strongly based on the property of intervals.
To find a root using this method, let the function 𝑓(𝑥) be continuous between 𝑎 and 𝑏 with 𝑓(𝑎)𝑓(𝑏) < 0. Then a root of 𝑓(𝑥) = 0 lies between 𝑎 and 𝑏. Let the first approximation be x₁ = (a + b)/2 (the midpoint of the interval).
If 𝑓(x₁) = 0, then x₁ is a root of 𝑓(𝑥) = 0. Otherwise, the root lies between 𝑎 and x₁ or between x₁ and 𝑏, according as 𝑓(x₁)𝑓(𝑎) < 0 or 𝑓(x₁)𝑓(𝑏) < 0 respectively.

We then bisect the chosen subinterval and continue the process until the root is found to the desired accuracy. If, for instance, 𝑓(x₁)𝑓(𝑎) < 0, the root lies between 𝑎 and x₁ and the second approximation is x₂ = (a + x₁)/2; if 𝑓(x₂) is negative, the root lies between x₂ and x₁ and the third approximation is x₃ = (x₁ + x₂)/2, and so on. The method is simple but converges slowly. It is also called the Bolzano method.
Procedure for the Bisection Method to Find the Root of the Equation 𝑓(𝑥) = 0
Step 1: Choose two initial guesses (approximations) 𝑎 and 𝑏 such that 𝑓(𝑎)𝑓(𝑏) < 0.
Step 2: Evaluate the midpoint x₁ of 𝑎 and 𝑏, given by x₁ = (a + b)/2, and also evaluate 𝑓(x₁).
Step 3: If 𝑓(𝑎)𝑓(x₁) < 0, set b = x₁; else set a = x₁. Then apply the formula of Step 2 again.

Step 4: Stop evaluation when the difference of two successive values of 𝑥 obtained from
step 2, is numerically less than the prescribed accuracy.
Order of Convergence of Bisection Method
As the error decreases with each step by a factor of (i.e. = ), then the convergence in the
bisection method is linear.
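The procedure above translates directly into a short routine. The following Python sketch (function and variable names are ours, not part of the original notes) applies it to f(x) = x³ − x − 1 used in Example 1 below:

    def bisection(f, a, b, tol=1e-3, max_iter=100):
        if f(a) * f(b) >= 0:
            raise ValueError("f(a) and f(b) must have opposite signs")
        for _ in range(max_iter):
            x = (a + b) / 2.0              # Step 2: midpoint
            if f(a) * f(x) < 0:            # Step 3: root lies in [a, x]
                b = x
            else:                          # otherwise root lies in [x, b]
                a = x
            if abs(b - a) < tol:           # Step 4: stopping criterion
                break
        return (a + b) / 2.0

    print(bisection(lambda x: x**3 - x - 1, 1, 2))   # ≈ 1.3247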
Example 1: Find the root of the equation x³ − x − 1 = 0 lying between 1 and 2, correct to two decimal places, by the bisection method.
Solution: Let f(x) = x³ − x − 1.
Since f(1) = 1 − 1 − 1 = −1 < 0 and f(2) = 8 − 2 − 1 = 5 > 0,
f(1) is negative and f(2) is positive, so at least one real root lies between 1 and 2.
𝟏ˢᵗ iteration: The first approximation is
x₁ = (1 + 2)/2 = 1.5 ⟹ f(x₁) = f(1.5) = 0.875 > 0 and f(x₁)f(a) < 0.
So the root lies between 1 and 1.5.
𝟐ⁿᵈ iteration: The second approximation is
x₂ = (1 + 1.5)/2 = 1.25 ⟹ f(x₂) = f(1.25) = −0.297 < 0.
Since f(1.5) is positive and f(1.25) is negative, the root lies between 1.25 and 1.5.
𝟑ʳᵈ iteration: The third approximation is
x₃ = (1.25 + 1.5)/2 = 1.375 ⟹ f(x₃) = f(1.375) = 0.2246 > 0,
so the root lies between 1.25 and 1.375.
𝟒ᵗʰ iteration: The fourth approximation is
x₄ = (1.375 + 1.25)/2 = 1.313 ⟹ f(x₄) = f(1.313) = −0.0494 < 0.
Since f(1.313) is negative and f(1.375) is positive, the root lies between 1.313 and 1.375.
𝟓ᵗʰ iteration: The fifth approximation is
x₅ = (1.313 + 1.375)/2 = 1.344 ⟹ f(x₅) = f(1.344) = 0.0837 > 0.
f(1.313) is negative and f(1.344) is positive, so the root lies between 1.313 and 1.344.
𝟔ᵗʰ iteration: The sixth approximation is
x₆ = (1.313 + 1.344)/2 = 1.3285, and f(1.3285) = 0.016186 > 0, so the root lies between 1.313 and 1.3285.
𝟕ᵗʰ iteration: The seventh approximation is
x₇ = (1.313 + 1.3285)/2 = 1.3208.
From the above iterations x₇ = 1.3208, so the root of f(x) = x³ − x − 1 = 0 correct to two decimal places is 1.32, which is the desired accuracy.

Example 2: Solve the equation x³ − 5x + 3 = 0 using the bisection method in the interval [0, 1] with error tolerance ε = 0.001.
Solution: Here f(x) = x³ − 5x + 3, a = 0 and b = 1.
f is continuous on [0, 1] and f(0) = 3 > 0, f(1) = −1 < 0. Hence there is a root of the equation in the given interval.

x₁ = (0 + 1)/2 = 0.5 and f(0.5) = 0.625 > 0.
The root lies in [0.5, 1] and e = |0.5 − 1| = 0.5 > 0.001.
x₂ = (0.5 + 1)/2 = 0.75 and f(0.75) = −0.328125 < 0.
The root lies in [0.5, 0.75] and e = |0.5 − 0.75| = 0.25 > 0.001.
x₃ = (0.5 + 0.75)/2 = 0.625 and f(0.625) = 0.11914 > 0.
The root lies in [0.625, 0.75] and e = |0.625 − 0.75| = 0.125 > 0.001.
x₄ = (0.625 + 0.75)/2 = 0.6875 and f(0.6875) = −0.112548 < 0.
The root lies in [0.625, 0.6875] and e = |0.625 − 0.6875| = 0.0625 > 0.001.
x₅ = (0.625 + 0.6875)/2 = 0.65625 and f(0.65625) = 0.001373 > 0.
The root lies in [0.65625, 0.6875] and e = |0.65625 − 0.6875| = 0.03125 > 0.001.
x₆ = (0.65625 + 0.6875)/2 = 0.671875 and f(0.671875) = −0.05607986 < 0.
The root lies in [0.65625, 0.671875] and e = |0.65625 − 0.671875| = 0.015625 > 0.001.
x₇ = (0.65625 + 0.671875)/2 = 0.6640625 and f(0.6640625) = −0.02747488 < 0.
The root lies in [0.65625, 0.6640625] and e = |0.65625 − 0.6640625| = 0.0078125 > 0.001.
x₈ = (0.65625 + 0.6640625)/2 = 0.66015625 and f(0.66015625) = −0.01308 < 0.
The root lies in [0.65625, 0.66015625] and e = |0.65625 − 0.66015625| = 0.00390625 > 0.001.
x₉ = (0.65625 + 0.66015625)/2 = 0.658203125 and f(0.658203125) = −0.00586 < 0.
The root lies in [0.65625, 0.658203125] and e = |0.65625 − 0.658203125| = 0.001953125 > 0.001.
x₁₀ = (0.65625 + 0.658203125)/2 = 0.657226562 and f(0.657226562) = −0.002246 < 0.
The root lies in [0.65625, 0.657226562] and e = |0.65625 − 0.657226562| = 0.000976562 < 0.001.

Since |0.65625 − 0.657226562| = 0.000976562 < 0.001 = ε, the process terminates and the approximate root of the equation is x = 0.657226562.

Example 3: Approximate √2 using the bisection method, correct to two decimal places.

Solution: Let x = √2 ⟺ x² − 2 = 0, so take f(x) = x² − 2.

Now f(1) = −1 < 0 and f(2) = 2 > 0.

Thus, by the intermediate value theorem, there is at least one root in [1, 2].

Now x₁ = (1 + 2)/2 = 1.5 ⟹ f(x₁) = f(1.5) = 0.25 > 0 and f(x₁)f(a) < 0,

x₂ = (1 + 1.5)/2 = 1.25 ⟹ f(x₂) = f(1.25) = −0.4375 < 0 and f(x₂)f(x₁) < 0,

x₃ = (1.25 + 1.5)/2 = 1.375 ⟹ f(x₃) = f(1.375) = −0.109375 < 0,

and f(x₃)f(x₁) < 0, which implies

x₄ = (1.375 + 1.5)/2 = 1.4375 ⟹ f(x₄) = f(1.4375) = 0.0664 > 0.

Similarly, x₅ = 1.40625, x₆ = 1.421875, x₇ = 1.4140625, x₈ = 1.41796875, …

Hence √2 ≈ 1.41 correct to two decimal places (the exact value is 1.41421…).

Exercises 1: Using the bisection method, solve the following equations.

1. x³ − 9x + 1 = 0 for the root lying between 2 and 3, correct to three significant figures.
2. Perform five iterations to obtain the smallest positive root of f(x) = x³ − 5x + 1 = 0.
3. 3x − eˣ = 0 for 1 ≤ x ≤ 2, accurate to within 10⁻⁵.

Exercises 2: Use the bisection method to find the drag coefficient c needed for a parachutist of mass m = 68.1 kg to have a velocity of v = 40 m/s after free-falling for time t = 10 s. Note: the acceleration due to gravity is g = 9.8 m/s², and f(c) = (gm/c)(1 − e^(−(c/m)t)) − v.

B. Iteration Method (Method of Successive Approximation)


This method is also known as the direct substitution method or method of fixed iterations. To
find the root of the equation 𝑓(𝑥) = 0 by successive approximations, we rewrite the given
equation in the form 𝑥 = 𝑔(𝑥).
Now, first we assume the approximate value of root (let 𝑥0 ), then substitute it in 𝑔(𝑥) to have a
first approximation 𝑥1 given by
𝑥1 = 𝑔(𝑥0 )
Similarly, the second approximation 𝑥2 is given by

𝑥2 = 𝑔(𝑥1 )
In general, x_{i+1} = g(x_i).
Procedure for the Iteration Method to Find the Root of the Equation 𝑓(𝑥) = 0
Step 1: Rewrite 𝑓(𝑥) = 0 in the form x = g(x), x = h(x) or x = p(x).
Step 2: Choose the form x = g(x) which satisfies |g′(x)| < 1 on an interval (a, b) containing the solution.
Step 3: Take an initial approximation x₀.
Step 4: Compute the successive approximations x_{i+1} = g(x_i), i = 0, 1, 2, 3, …
Step 5: Stop the evaluation when the relative error ≤ ε, where ε is the prescribed accuracy.
Note 1: The iteration x_{i+1} = g(x_i) converges if |g′(x)| < 1 near the root.
Note 2: When |g′(x)| > 1, i.e., g′(x) > 1 or g′(x) < −1, the iterative process diverges.
Note 3: The rate of convergence of the iteration method is 1; in other words, the iteration method converges linearly.
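A minimal Python sketch of this procedure is given below (the function and tolerance names are ours); it is applied to the rearrangement x = g(x) used in Example 1 that follows:

    import math

    def fixed_point(g, x0, tol=1e-6, max_iter=100):
        x = x0
        for _ in range(max_iter):
            x_new = g(x)                    # x_{i+1} = g(x_i)
            if abs(x_new - x) <= tol:       # stop when successive iterates agree
                return x_new
            x = x_new
        return x

    # 2x - log10(x) - 7 = 0 rewritten as x = (log10(x) + 7)/2
    root = fixed_point(lambda x: (math.log10(x) + 7) / 2, 3.6)
    print(round(root, 4))                   # ≈ 3.7893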
Example 1: Find a real root, correct to four decimal places, of the equation 2x − log₁₀x − 7 = 0 using the iteration method.
Solution: Here f(x) = 2x − log₁₀x − 7.
We find that f(3) = −1.477 < 0 and f(4) = 0.398 > 0.
Therefore at least one real root of f(x) = 0 lies between 3 and 4.
The given equation can be rewritten as
x = (log₁₀x + 7)/2 = g(x), say.
Now g′(x) = 1/(2x ln 10), from which we clearly see that |g′(x)| < 1 for all x ∈ (3, 4).
Again, since |f(4)| < |f(3)|, the root is nearer to x = 4.
Let the initial approximation be x₀ = 3.6. Then from the iterative formula x_{i+1} = g(x_i) we obtain
x₁ = g(x₀) = (log₁₀3.6 + 7)/2 = 3.77815
x₂ = g(3.77815) = 3.78863
x₃ = g(3.78863) = 3.78924
x₄ = g(3.78924) = 3.78927
Hence the root of the equation correct to four decimal places is 3.7893.

Example 2: Find the real root of the equation f(x) = x³ + x² − 1 = 0 using the iteration method.
Solution: Here f(0) = −1 and f(1) = 1, so a root lies between 0 and 1. The equation can be rewritten as x = 1/√(1 + x), so

g(x) = 1/√(1 + x) ⟹ g′(x) = −1/(2(1 + x)^(3/2)) ⟹ |g′(x)| < 1 on (0, 1).
Hence the iterative method can be applied.
Taking x₀ = 0.5, we get
x₁ = g(x₀) = 1/√(1 + 0.5) = 0.81649
x₂ = g(x₁) = 1/√1.81649 = 0.74196
x₃ = g(x₂) = 1/√1.74196 = 0.75767
x₄ = g(x₃) = 1/√1.75767 = 0.75428
Exercise: Find the cube root of 15 correct to four significant figures by iterative method.
C. Newton-Raphson Method (Newton’s method)
The Newton-Raphson method is the best-known method of finding roots of a function 𝑓 (𝑥). The
method is simple and fast. One drawback of this method is that it uses the derivative 𝑓′(𝑥) of the
function as well as the function f (x) itself.
This method can be derived from Taylor's series as follows.
Let f(x) = 0 be the equation to be solved, let x₀ be an initial approximation to a root, and let h be a small correction to x₀, so that f(x₀ + h) = 0.
Expanding by Taylor's series, we get
f(x₀ + h) = f(x₀) + h f′(x₀) + (h²/2!) f″(x₀) + ⋯ = 0.
Since h is small, we can neglect the second- and higher-degree terms in h, and therefore
f(x₀) + h f′(x₀) = 0,
from which h = −f(x₀)/f′(x₀), provided f′(x₀) ≠ 0.
Hence, if x₀ is the initial approximation, the next (first) approximation x₁ is given by
x₁ = x₀ + h = x₀ − f(x₀)/f′(x₀).
The second approximation x₂ is given by x₂ = x₁ − f(x₁)/f′(x₁).
In general, x_{n+1} = x_n − f(x_n)/f′(x_n).

This formula is well known as Newton-Raphson formula or Newton’s method.


Algorithm for the Newton-Raphson Method to Find the Root of the Equation 𝑓(𝑥) = 0
Step 1: Take a trial solution (initial approximation) x₀ and compute f(x₀) and f′(x₀).
Step 2: Compute the successive approximations using the formula
x_{n+1} = x_n − f(x_n)/f′(x_n), n = 0, 1, 2, 3, …
Step 3: Stop the process when |x_{n+1} − x_n| ≤ ε, where ε is the prescribed accuracy.

Note: The order of convergence of the Newton-Raphson method is 2, i.e., the method is quadratically convergent. The error at each step is proportional to the square of the previous error, and as such the convergence is quadratic.
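The algorithm can be sketched in a few lines of Python (the names are illustrative); it is used here on f(x) = x² − 5x + 2 from Example 2 below:

    def newton_raphson(f, df, x0, tol=1e-6, max_iter=50):
        x = x0
        for _ in range(max_iter):
            x_new = x - f(x) / df(x)        # x_{n+1} = x_n - f(x_n)/f'(x_n)
            if abs(x_new - x) <= tol:       # Step 3: stopping criterion
                return x_new
            x = x_new
        return x

    root = newton_raphson(lambda x: x**2 - 5*x + 2,
                          lambda x: 2*x - 5, 4.0)
    print(round(root, 4))                   # ≈ 4.5616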
Example 1: Find the real root of the equation ln x − cos x = 0, correct to three decimal places, by the Newton-Raphson method.
Solution: Let f(x) = ln x − cos x,
so that f(1) = −0.5403 and f(2) = 1.1092.
∴ The root lies between 1 and 2.
Also, f(1.1) = ln 1.1 − cos 1.1 = 0.0953 − 0.4535 = −0.3582
f(1.2) = ln 1.2 − cos 1.2 = −0.18
f(1.3) = ln 1.3 − cos 1.3 = −0.0051
f(1.4) = ln 1.4 − cos 1.4 = 0.1665
Thus the root lies between 1.3 and 1.4.
Since f(x) = ln x − cos x and f′(x) = 1/x + sin x, the Newton-Raphson method gives
x_{n+1} = x_n − f(x_n)/f′(x_n) = x_n − (ln x_n − cos x_n)/(1/x_n + sin x_n),

which simplifies to
x_{n+1} = (x_n + x_n² sin x_n − x_n ln x_n + x_n cos x_n) / (1 + x_n sin x_n), n = 0, 1, 2, …
Let us take the initial approximation x₀ = 1.3.
First approximation (n = 0):
x₁ = (x₀ + x₀² sin x₀ − x₀ ln x₀ + x₀ cos x₀) / (1 + x₀ sin x₀)
   = (1.3 + (1.3)² sin 1.3 − 1.3 ln 1.3 + 1.3 cos 1.3) / (1 + 1.3 sin 1.3) = 1.3029
Second approximation (n = 1):
x₂ = (x₁ + x₁² sin x₁ − x₁ ln x₁ + x₁ cos x₁) / (1 + x₁ sin x₁)
   = (1.3029 + (1.3029)² sin 1.3029 − 1.3029 ln 1.3029 + 1.3029 cos 1.3029) / (1 + 1.3029 sin 1.3029)
   = 2.9401 / 2.2564 = 1.303
Hence the required root is 1.303, correct to three decimal places.
Example 2: Find the real root of the equation x² − 5x + 2 = 0 between 4 and 5 by the Newton-Raphson method.
Solution: Let f(x) = x² − 5x + 2.
Now f(4) = −2 and f(5) = 2,
so the root lies between 4 and 5.
Since f′(x) = 2x − 5, the Newton-Raphson formula becomes
x_{n+1} = x_n − f(x_n)/f′(x_n) = x_n − (x_n² − 5x_n + 2)/(2x_n − 5) = (x_n² − 2)/(2x_n − 5), n = 0, 1, 2, …
Take x₀ = 4 as the initial approximation. Then:
First approximation (n = 0): x₁ = (x₀² − 2)/(2x₀ − 5) = (4² − 2)/(2(4) − 5) = 4.6667
Second approximation (n = 1): x₂ = (x₁² − 2)/(2x₁ − 5) = (4.6667² − 2)/(2(4.6667) − 5) = 4.5641
Third approximation (n = 2): x₃ = (x₂² − 2)/(2x₂ − 5) = (4.5641² − 2)/(2(4.5641) − 5) = 4.5616
Fourth approximation (n = 3): x₄ = (x₃² − 2)/(2x₃ − 5) = (4.5616² − 2)/(2(4.5616) − 5) = 4.5616

Since x₄ = x₃, the root of the equation, correct to four decimal places, is 4.5616.

Exercise 1: Evaluate √12 to four decimal places by Newton's iterative method.

Exercise 2: Find, by Newton's method, the real root of the equation

a. x⁴ − 11x + 8 = 0, correct to five decimal places (the root near 2).

b. x log₁₀x = 1.2, correct to five decimal places.
c. x = e⁻ˣ, correct to four decimal places.

Chapter 3: Solving System of Equations


3.1 System of Linear Equations
In this section we present the solution of 𝑛 linear simultaneous algebraic equations in 𝑛
unknowns. Linear systems of equations are associated with many problems in engineering and
science, as well as with applications of mathematics to the social sciences and quantitative study
of business and economic problems.
A system of n linear algebraic equations has the form
a₁₁x₁ + a₁₂x₂ + ⋯ + a₁ₙxₙ = b₁
a₂₁x₁ + a₂₂x₂ + ⋯ + a₂ₙxₙ = b₂
⋮
aₙ₁x₁ + aₙ₂x₂ + ⋯ + aₙₙxₙ = bₙ
where the coefficients aᵢⱼ and the constants bᵢ are known and the xᵢ represent the unknowns. In matrix notation the equations are written simply as Ax = b, where A = (aᵢⱼ) is the n × n coefficient matrix, x = (x₁, x₂, …, xₙ)ᵀ and b = (b₁, b₂, …, bₙ)ᵀ.
Note 1: A system of n linear equations in n unknowns has a unique solution provided the coefficient matrix is non-singular, i.e., its determinant is non-zero.
Note 2: If bᵢ = 0 for i = 1, 2, …, n, the above linear system is called homogeneous; otherwise it is non-homogeneous.
Methods of Solution
1. Direct methods: The common characteristic of direct methods is that they transform the original equations into equivalent equations (equations that have the same solution) that can be solved more easily. The solution does not contain truncation errors, but round-off errors are introduced by the floating-point operations.
LU decomposition method
It is possible to show that any square matrix 𝐴 can be expressed as a product of a lower
triangular matrix 𝐿 and an upper triangular matrix 𝑈.
𝐴 = 𝐿𝑈
For instance, for a 3 × 3 matrix, A = LU reads
[ a₁₁ a₁₂ a₁₃ ]   [ L₁₁  0    0   ] [ U₁₁ U₁₂ U₁₃ ]
[ a₂₁ a₂₂ a₂₃ ] = [ L₂₁  L₂₂  0   ] [  0  U₂₂ U₂₃ ]
[ a₃₁ a₃₂ a₃₃ ]   [ L₃₁  L₃₂  L₃₃ ] [  0   0  U₃₃ ]
The process of computing 𝐿 and 𝑈 for a given 𝐴 is known as LU-Decomposition or LU-
Factorization. LU-decomposition is not unique (the combinations of 𝐿and 𝑈for a prescribed
𝐴 are endless), unless certain constraints are placed on 𝐿or 𝑈. These constraints distinguish one
type of decomposition from another.
The three decomposition methods are given below:
1. Doolittle's decomposition: the constraints are Lᵢᵢ = 1,
2. Crout's decomposition: the constraints are Uᵢᵢ = 1, and
3. Cholesky's decomposition: the constraints are Lᵢᵢ = Uᵢᵢ, for i = 1, 2, 3, …, n.
After decomposing the matrix 𝐴, it is easier to solve the equations 𝐴𝑥 = 𝑏.We can rewrite the
equations as 𝐿𝑈𝑥 = 𝑏 or denoting 𝑈𝑥 = 𝑣, the above equation becomes
𝐿𝑣 = 𝑏
This equation 𝐿𝑣 = 𝑏 can be solved for v by forward substitution. Then 𝑈𝑥 = 𝑣 will yield 𝑥
by the backward substitution process. The advantage of LU decomposition method over the
Gauss elimination method is that once 𝐴 is decomposed, we can solve 𝐴𝑥 = 𝑏 for as many
constant vectors 𝑏 as we please. Also, the forward and backward substitutions operations are
much less time consuming than the decomposition process.
Doolittle's Method: This method is based on the fact that every square matrix A can be expressed as the product of a lower triangular matrix and an upper triangular matrix, provided all the leading principal minors of A are non-zero. Such a factorization, if it exists, is unique.
Method: decompose A into L and U with
        [ 1   0   0 ] [ u₁₁ u₁₂ u₁₃ ]
A = LU = [ ℓ₂₁ 1   0 ] [  0  u₂₂ u₂₃ ]
        [ ℓ₃₁ ℓ₃₂ 1 ] [  0   0  u₃₃ ]
• U is the same as the coefficient matrix at the end of the forward-elimination process.
• L is obtained from the multipliers that were used in the forward-elimination process.

Consider the system of linear equations:

a₁₁x₁ + a₁₂x₂ + a₁₃x₃ = b₁
a₂₁x₁ + a₂₂x₂ + a₂₃x₃ = b₂
a₃₁x₁ + a₃₂x₂ + a₃₃x₃ = b₃
Finding the 𝑼 matrix: Using the Forward Elimination Procedure of Gauss Elimination, we get

a₁₁x₁ + a₁₂x₂ + a₁₃x₃ = b₁
        a′₂₂x₂ + a′₂₃x₃ = b′₂
                a″₃₃x₃ = b″₃
Thus,
    [ a₁₁ a₁₂  a₁₃  ]
U = [  0  a′₂₂ a′₂₃ ]
    [  0   0   a″₃₃ ]
Finding the L matrix: the entries of L are the multipliers used during the forward-elimination procedure:
ℓ₂₁ = a₂₁/a₁₁ and ℓ₃₁ = a₃₁/a₁₁ (used to eliminate x₁ from rows 2 and 3), and ℓ₃₂ = a′₃₂/a′₂₂ (used to eliminate x₂ from row 3).
Thus,
    [ 1   0   0 ]
L = [ ℓ₂₁ 1   0 ]
    [ ℓ₃₁ ℓ₃₂ 1 ]
Alternatively, the elements of L and U can be obtained by equating A = LU entry by entry: first the first row of U, then the second row of L and U, and finally the third row of L and U. Multiplying out L and U gives
     [ u₁₁      u₁₂             u₁₃                  ]
LU = [ ℓ₂₁u₁₁   ℓ₂₁u₁₂ + u₂₂    ℓ₂₁u₁₃ + u₂₃         ]
     [ ℓ₃₁u₁₁   ℓ₃₁u₁₂ + ℓ₃₂u₂₂ ℓ₃₁u₁₃ + ℓ₃₂u₂₃ + u₃₃ ]
Now, equating the corresponding entries of the two matrices, we get the entries of L and U:
u₁₁ = a₁₁, u₁₂ = a₁₂, u₁₃ = a₁₃
ℓ₂₁ = a₂₁/u₁₁, u₂₂ = a₂₂ − ℓ₂₁u₁₂, u₂₃ = a₂₃ − ℓ₂₁u₁₃
ℓ₃₁ = a₃₁/u₁₁, ℓ₃₂ = (a₃₂ − ℓ₃₁u₁₂)/u₂₂, u₃₃ = a₃₃ − (ℓ₃₁u₁₃ + ℓ₃₂u₂₃)
Example: Solve the following equations by Doolittle’s method.
2𝑥 + 𝑦 + 4𝑧 = 12
8𝑥 – 3𝑦 + 2𝑧 = 20
4𝑥 + 11𝑦 – 𝑧 = 33
Solution:
             [ 2  1  4 ]
We have A =  [ 8 −3  2 ],  x = (x, y, z)ᵀ and b = (12, 20, 33)ᵀ.
             [ 4 11 −1 ]

         [ 1   0   0 ] [ u₁₁ u₁₂ u₁₃ ]
Let A =  [ ℓ₂₁ 1   0 ] [  0  u₂₂ u₂₃ ] = LU.
         [ ℓ₃₁ ℓ₃₂ 1 ] [  0   0  u₃₃ ]
Applying forward elimination to A (R₂ → R₂ − 4R₁, R₃ → R₃ − 2R₁, then R₃ → R₃ + (9/7)R₂):
[ 2  1   4 ]    [ 2  1   4 ]    [ 2  1    4 ]
[ 8 −3   2 ] →  [ 0 −7 −14 ] →  [ 0 −7  −14 ]
[ 4 11  −1 ]    [ 0  9  −9 ]    [ 0  0  −27 ]
⟹ ℓ₂₁ = 4, ℓ₃₁ = 2 and ℓ₃₂ = −9/7.
Thus,
    [ 1    0   0 ]          [ 2  1    4 ]
L = [ 4    1   0 ]  and U = [ 0 −7  −14 ]
    [ 2  −9/7  1 ]          [ 0  0  −27 ]
Writing Lv = b and solving by forward substitution:
v₁ = 12,
4v₁ + v₂ = 20 ⟹ v₂ = 20 − 4(12) = −28,
2v₁ − (9/7)v₂ + v₃ = 33 ⟹ v₃ = 33 − 2(12) + (9/7)(−28) = 33 − 24 − 36 = −27.

The solution of the original system is then obtained from Ux = v:

2x + y + 4z = 12
−7y − 14z = −28
−27z = −27
and back substitution gives z = 1, y = 2 and x = 3.
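The same computation can be carried out with a short Doolittle routine in Python (a sketch under the assumption that no pivoting is needed; the function names are ours):

    def doolittle_lu(A):
        # decompose A into unit lower triangular L and upper triangular U
        n = len(A)
        L = [[0.0] * n for _ in range(n)]
        U = [[0.0] * n for _ in range(n)]
        for i in range(n):
            L[i][i] = 1.0
            for j in range(i, n):            # row i of U
                U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
            for j in range(i + 1, n):        # column i of L
                L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
        return L, U

    def solve_lu(L, U, b):
        n = len(b)
        v = [0.0] * n                        # forward substitution: Lv = b
        for i in range(n):
            v[i] = b[i] - sum(L[i][k] * v[k] for k in range(i))
        x = [0.0] * n                        # backward substitution: Ux = v
        for i in range(n - 1, -1, -1):
            x[i] = (v[i] - sum(U[i][k] * x[k] for k in range(i + 1, n))) / U[i][i]
        return x

    L, U = doolittle_lu([[2, 1, 4], [8, -3, 2], [4, 11, -1]])
    print(solve_lu(L, U, [12, 20, 33]))      # [3.0, 2.0, 1.0]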
Exercise: Applying (a) Crout's method and (b) the triangularization (LU) method, solve the following equations.
   2x₁ + x₂ + x₃ = 7          3x + 2y + 7z = 4
a) x₁ + 2x₂ + x₃ = 8       b) 2x + 3y + z = 5
   x₁ + x₂ + 2x₃ = 9          3x + 4y + z = 7

10𝑥 + 𝑦 + 𝑧 = 12
c) 2𝑥 + 10𝑦 + 𝑧 = 13
2𝑥 + 2𝑦 + 10𝑧 = 14
2𝑥 – 6𝑦 + 8𝑧 = 24
d) 5𝑥 + 4𝑦 – 3𝑧 = 23
𝑥 + 𝑦 + 2𝑧 = 16

2. Iterative or indirect methods: An iterative technique for solving the n × n linear system Ax = b starts with an initial approximation x⁽⁰⁾ to the solution x and generates a sequence of vectors x⁽ᵏ⁾ that converges to x. Iterative techniques convert the system Ax = b into an equivalent system of the form x = Cx + d for some fixed matrix C and vector d. After the initial vector x⁽⁰⁾ is selected, the sequence of approximate solution vectors is generated by computing x⁽ᵏ⁾ = Cx⁽ᵏ⁻¹⁾ + d for each k = 1, 2, 3, …
• Convergence and diagonal dominance: the iteration converges if A is diagonally dominant, i.e., if in each row the absolute value of the diagonal element of the coefficient matrix A is at least as large as the sum of the absolute values of the other coefficients of that row,
Σ_{j≠i} |aᵢⱼ| / |aᵢᵢ| ≤ 1,
with strict inequality for at least one row. If A is not diagonally dominant, the rows or columns must be interchanged to obtain a diagonally dominant matrix.

(i) Gauss Seidel Method


This method makes three assumptions:
1. The system given by 𝐴𝑥 = 𝑏 has a unique solution
2. The coefficient matrix A has no zeros on its main diagonal, i.e., a₁₁, a₂₂, …, aₙₙ are all non-zero. If any diagonal entry is zero, rows or columns must be interchanged to obtain a coefficient matrix with non-zero entries on the main diagonal.
3. The coefficient matrix A is diagonally dominant. If it is not, we need to change into
diagonally dominant matrix.
This method is a modification of the Gauss-Jacobi iterative technique. In the Jacobi method, the values of the unknowns obtained in the k-th approximation remain unchanged until the entire (k + 1)-th approximation has been calculated. With the Gauss-Seidel method, on the other hand, we use each new value as soon as it is known: once x₁ has been determined from the first equation, its value is used in the second equation to obtain the new x₂; similarly, the new x₁ and x₂ are used in the third equation to obtain the new x₃, and so on. In formula form,
xᵢ⁽ᵏ⁺¹⁾ = (1/aᵢᵢ) [ bᵢ − Σ_{j<i} aᵢⱼ xⱼ⁽ᵏ⁺¹⁾ − Σ_{j>i} aᵢⱼ xⱼ⁽ᵏ⁾ ],  i = 1, 2, …, n,
whereas the Jacobi iteration uses only the values from iteration k on the right-hand side.
Example: Use the Gauss-Seidel iteration method to approximate the solution of the following system of equations, with initial guess (x₁, x₂, x₃) = (0, 0, 0), correct to 3 significant digits.
5x₁ − 2x₂ + 3x₃ = −1
−3x₁ + 9x₂ + x₃ = 2
2x₁ − x₂ − 7x₃ = 3

Solution: To begin, write the system in the form
x₁⁽ᵏ⁾ = −1/5 + (2/5)x₂⁽ᵏ⁻¹⁾ − (3/5)x₃⁽ᵏ⁻¹⁾
x₂⁽ᵏ⁾ = 2/9 + (3/9)x₁⁽ᵏ⁾ − (1/9)x₃⁽ᵏ⁻¹⁾
x₃⁽ᵏ⁾ = −3/7 + (2/7)x₁⁽ᵏ⁾ − (1/7)x₂⁽ᵏ⁾
𝟏ˢᵗ iteration (k = 1), with initial approximations (x₁, x₂, x₃) = (0, 0, 0):

x₁⁽¹⁾ = −1/5 + (2/5)(0) − (3/5)(0) = −0.200
Now that we have a new value of x₁, we use it to compute a new value of x₂:
x₂⁽¹⁾ = 2/9 + (3/9)(−0.200) − (1/9)(0) ≈ 0.156
Similarly, we use x₁⁽¹⁾ = −0.200 and x₂⁽¹⁾ = 0.156 to compute a new value of x₃:
x₃⁽¹⁾ = −3/7 + (2/7)(−0.200) − (1/7)(0.156) ≈ −0.508
So the first approximation is x₁⁽¹⁾ = −0.200, x₂⁽¹⁾ = 0.156 and x₃⁽¹⁾ = −0.508.
𝟐ⁿᵈ iteration (k = 2), using (x₁, x₂, x₃) = (−0.200, 0.156, −0.508):
x₁⁽²⁾ = −1/5 + (2/5)(0.156) − (3/5)(−0.508) = 0.167
x₂⁽²⁾ = 2/9 + (3/9)(0.167) − (1/9)(−0.508) ≈ 0.334
x₃⁽²⁾ = −3/7 + (2/7)(0.167) − (1/7)(0.334) ≈ −0.429
Continued iterations produce the sequence of approximations shown in the table below.
k      0        1        2        3        4        5
x₁   0.000   −0.200    0.167    0.191    0.186    0.186
x₂   0.000    0.156    0.334    0.333    0.331    0.331
x₃   0.000   −0.508   −0.429   −0.422   −0.423   −0.423
Because the last two columns in Table are identical, we can conclude that to three significant
digits the solution is 𝑥 = 0.186, 𝑥 = 0.331, 𝑎𝑛𝑑 𝑥 = −0.423.
Note 1: Iterative methods are generally less efficient than direct methods because of the large number of operations or iterations required.
Note 2: The initial guess affects only the number of iterations required for convergence.
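A compact Python sketch of the Gauss-Seidel iteration (the names are ours; it assumes a diagonally dominant matrix) reproduces the example above:

    def gauss_seidel(A, b, x0=None, tol=1e-3, max_iter=100):
        n = len(b)
        x = list(x0) if x0 is not None else [0.0] * n
        for _ in range(max_iter):
            x_old = list(x)
            for i in range(n):
                # the sum uses already-updated x[j] for j < i (Gauss-Seidel)
                s = sum(A[i][j] * x[j] for j in range(n) if j != i)
                x[i] = (b[i] - s) / A[i][i]
            if max(abs(x[i] - x_old[i]) for i in range(n)) <= tol:
                break
        return x

    A = [[5, -2, 3], [-3, 9, 1], [2, -1, -7]]
    b = [-1, 2, 3]
    print([round(v, 3) for v in gauss_seidel(A, b)])   # ≈ [0.186, 0.331, -0.423]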


Chapter four
Interpolation and curve fitting

4.1 Finite differences


Consider a function y = f(x) defined on (a, b), where x and y are the independent and dependent variables respectively. If the points x₀, x₁, …, xₙ are taken at equal spacing, that is, x₀ = a, x₁ = x₀ + h, x₂ = x₀ + 2h, …, xₙ = x₀ + nh = b, then the value of y when x = xᵢ is denoted by yᵢ, where yᵢ = f(xᵢ). The values of x are called arguments and the values of y are called entries. The interval h is called the difference interval (increment).
Forward or Leading Differences
If we subtract from each value of y (except y₀) the previous value of y, we obtain y₁ − y₀, y₂ − y₁, y₃ − y₂, …, yₙ − yₙ₋₁. These differences are called the first forward differences of y and are denoted by Δy, where Δ is the forward difference operator. That is,
Δy₀ = y₁ − y₀
Δy₁ = y₂ − y₁
Δy₂ = y₃ − y₂
⋮
Δyₙ₋₁ = yₙ − yₙ₋₁
Equivalently, Δf(x) = f(x + h) − f(x), where h is the difference interval.
The second-order forward differences are
Δ²y₀ = Δy₁ − Δy₀ = y₂ − 2y₁ + y₀
Δ²y₁ = Δy₂ − Δy₁ = y₃ − 2y₂ + y₁
⋮
Δ²yₙ₋₂ = Δyₙ₋₁ − Δyₙ₋₂ = yₙ − 2yₙ₋₁ + yₙ₋₂
The third-order forward differences are
Δ³y₀ = Δ²y₁ − Δ²y₀ = y₃ − 3y₂ + 3y₁ − y₀
Δ³y₁ = Δ²y₂ − Δ²y₁ = y₄ − 3y₃ + 3y₂ − y₁
⋮
In general, the n-th order forward differences are given by
Δⁿyᵣ = Δⁿ⁻¹yᵣ₊₁ − Δⁿ⁻¹yᵣ,  Δⁿf(x) = Δⁿ⁻¹f(x + h) − Δⁿ⁻¹f(x).
Note: Δ⁰ is the identity operator, that is, Δ⁰f(x) = f(x), and Δ¹ = Δ.

The forward difference table for the arguments x₀, x₁, …, x₅ is shown below.
x     y      Δ      Δ²      Δ³      Δ⁴      Δ⁵
x₀    y₀
             Δy₀
x₁    y₁            Δ²y₀
             Δy₁            Δ³y₀
x₂    y₂            Δ²y₁            Δ⁴y₀
             Δy₂            Δ³y₁            Δ⁵y₀
x₃    y₃            Δ²y₂            Δ⁴y₁
             Δy₃            Δ³y₂
x₄    y₄            Δ²y₃
             Δy₄
x₅    y₅
where x₁ = x₀ + h, x₂ = x₀ + 2h, …, x₅ = x₀ + 5h.
The first entry y₀ in the table is called the leading term, and the differences Δy₀, Δ²y₀, …, Δⁿy₀ are called the leading differences.

Example 1: Construct a forward difference table for the following values:


x 1 2 3 4 5
y = f(x) 4 6 9 12 17

Solution: The forward difference table for the given data is

x     y      Δy     Δ²y    Δ³y    Δ⁴y
1     4
             2
2     6             1
             3             −1
3     9             0              3
             3             2
4     12            2
             5
5     17

Example 2: If y = x³ + x² − 2x + 1, calculate the values of y for x = 0, 1, 2, 3, 4, 5 and construct the forward difference table.
Solution: For x = 0, 1, 2, 3, 4, 5 the values of y are 1, 1, 9, 31, 73, 141, and the difference table for these data is
x     y      Δy     Δ²y    Δ³y    Δ⁴y    Δ⁵y
0     1
             0
1     1             8
             8              6
2     9             14             0
             22             6              0
3     31            20             0
             42             6
4     73            26
             68
5     141
Example 3: Find the function whose first forward difference is eˣ.
Example 4: Find the missing term in the table below.
𝑥 0 1 2 3 4
𝑦 3 2 3 ? 11
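Difference tables such as these are easy to generate and check by computer. The Python sketch below (the helper name is ours) builds the successive difference rows; applied to the data of Example 1 it reproduces the table above:

    def forward_differences(y):
        # returns [y, Δy, Δ²y, ...] as successive (shrinking) rows
        table = [list(y)]
        while len(table[-1]) > 1:
            prev = table[-1]
            table.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])
        return table

    for row in forward_differences([4, 6, 9, 12, 17]):
        print(row)
    # [4, 6, 9, 12, 17]
    # [2, 3, 3, 5]
    # [1, 0, 2]
    # [-1, 2]
    # [3]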

4.2 Interpolation
Introduction

Interpolation is the technique of estimating the value of a function for an intermediate value of the independent variable x when a set of values of y = f(x) for certain values of x is known. That is, if (xᵢ, yᵢ), i = 0, 1, 2, …, n, are (n + 1) given data points of the function y = f(x), then the process of finding the value of y corresponding to a value of x lying between x₀ and xₙ is called interpolation. The process of computing the value of the function for a value of the independent variable outside the given range is called extrapolation.
If p(x) is a polynomial, then p(x) is called the interpolating polynomial and the process of computing intermediate values of y = f(x) is called polynomial interpolation.
To construct this polynomial we follow two different approaches: interpolation with equal intervals and interpolation with unequal intervals.
4.2.1. Interpolation with equal intervals
Newton forward interpolation formula
Let y = f(x) be a function which takes the values f(x₀), f(x₀ + h), f(x₀ + 2h), …, f(x₀ + nh) at the (n + 1) equally spaced values x₀, x₀ + h, x₀ + 2h, …, x₀ + nh of the independent variable x. Assume that f(x) is a polynomial of degree n, written as
f(x) = a₀ + a₁(x − x₀) + a₂(x − x₀)(x − x₁) + ⋯ + aₙ(x − x₀)(x − x₁)⋯(x − xₙ₋₁)   ……… (*)
with f(xᵢ) = yᵢ for i = 0, 1, …, n, where a₀, a₁, …, aₙ are to be determined.
To find the values of a₀, a₁, …, aₙ, put x = x₀, x₀ + h, x₀ + 2h, …, x₀ + nh in equation (*) successively:
for x = x₀:  a₀ = f(x₀);
for x = x₀ + h:  f(x₀ + h) = a₀ + a₁h ⟹ a₁ = [f(x₀ + h) − f(x₀)]/h = Δf(x₀)/h;
for x = x₀ + 2h:  a₂ = Δ²f(x₀)/(2! h²);
and similarly a₃ = Δ³f(x₀)/(3! h³), …, aₙ = Δⁿf(x₀)/(n! hⁿ).
Substituting the values of a₀, a₁, …, aₙ into equation (*), we get
f(x) = f(x₀) + (x − x₀) Δf(x₀)/h + (x − x₀)(x − x₁) Δ²f(x₀)/(2! h²) + ⋯ + (x − x₀)(x − x₁)⋯(x − xₙ₋₁) Δⁿf(x₀)/(n! hⁿ).
Again, putting x = x₀ + hu, i.e., u = (x − x₀)/h, we have
f(x) = f(x₀) + u Δf(x₀) + u(u − 1) Δ²f(x₀)/2! + ⋯ + u(u − 1)⋯(u − (n − 1)) Δⁿf(x₀)/n!,
which is the required Newton forward interpolation formula.

This formula is used to interpolate the values of 𝑓(𝑥) near the beginning of the set of values and
is applicable for equally spaced argument only.
Example:
1. Find the second degree polynomial passes through (1, −1), (2, −2), (3, −1) 𝑎𝑛𝑑 (4, 2).
2. Find cubic polynomial which takes the following data
X 0 1 2 3
Y 1 0 1 10
3. Find the number of students from the ff data who scored marks not more than 45.
Marks range 30-40 40-50 50-60 60-70 70-80
No. of students 35 48 70 40 22
4. Find e³ˣ for x = 0.05 using the following table.
x      0       0.1      0.2      0.3      0.4
e³ˣ    1       1.3499   1.8221   2.4596   3.3201
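The formula can be evaluated directly in Python. The sketch below (the function name is ours) accumulates the terms u(u − 1)⋯(u − k + 1)/k! · Δᵏy₀ and is applied to problem 4 above, whose tabulated values are those of e³ˣ:

    def newton_forward(x_nodes, y_nodes, x):
        # x_nodes must be equally spaced
        h = x_nodes[1] - x_nodes[0]
        u = (x - x_nodes[0]) / h
        diffs = list(y_nodes)
        result, term = diffs[0], 1.0
        for k in range(1, len(y_nodes)):
            diffs = [diffs[i + 1] - diffs[i] for i in range(len(diffs) - 1)]  # k-th differences
            term *= (u - (k - 1)) / k          # u(u-1)...(u-k+1)/k!
            result += term * diffs[0]          # term * Δ^k y0
        return result

    xs = [0, 0.1, 0.2, 0.3, 0.4]
    ys = [1, 1.3499, 1.8221, 2.4596, 3.3201]
    print(round(newton_forward(xs, ys, 0.05), 4))   # ≈ 1.1618 (e^0.15 ≈ 1.1618)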

4.2.2. Interpolation with unequal intervals


A. Lagrange’s Interpolation Formula
Let y = f(x) be a real-valued continuous function defined on an interval [a, b]. Let x₀, x₁, …, xₙ be (n + 1) distinct points, not necessarily equally spaced, with corresponding function values y₀, y₁, …, yₙ. Since (n + 1) values of the function are given corresponding to the (n + 1) values of the independent variable x, we can represent the function f(x) as a polynomial in x of degree n.
Let the polynomial be represented by
f(x) = a₀(x − x₁)(x − x₂)⋯(x − xₙ) + a₁(x − x₀)(x − x₂)⋯(x − xₙ) + a₂(x − x₀)(x − x₁)(x − x₃)⋯(x − xₙ) + ⋯ + aₙ(x − x₀)(x − x₁)⋯(x − xₙ₋₁)   ……… (*)
To find the values of a₀, a₁, …, aₙ, put x = x₀, x₁, …, xₙ respectively in the above formula. That is,
for x = x₀:  f(x₀) = a₀(x₀ − x₁)(x₀ − x₂)⋯(x₀ − xₙ) ⟹ a₀ = f(x₀) / [(x₀ − x₁)(x₀ − x₂)⋯(x₀ − xₙ)]
for x = x₁:  f(x₁) = a₁(x₁ − x₀)(x₁ − x₂)⋯(x₁ − xₙ) ⟹ a₁ = f(x₁) / [(x₁ − x₀)(x₁ − x₂)⋯(x₁ − xₙ)]
and similarly,
for x = xₙ:  f(xₙ) = aₙ(xₙ − x₀)(xₙ − x₁)⋯(xₙ − xₙ₋₁) ⟹ aₙ = f(xₙ) / [(xₙ − x₀)(xₙ − x₁)⋯(xₙ − xₙ₋₁)]
Substituting the values of a₀, a₁, …, aₙ into equation (*), we get
f(x) = [(x − x₁)(x − x₂)⋯(x − xₙ)] / [(x₀ − x₁)(x₀ − x₂)⋯(x₀ − xₙ)] · f(x₀) + [(x − x₀)(x − x₂)⋯(x − xₙ)] / [(x₁ − x₀)(x₁ − x₂)⋯(x₁ − xₙ)] · f(x₁) + ⋯ + [(x − x₀)(x − x₁)⋯(x − xₙ₋₁)] / [(xₙ − x₀)(xₙ − x₁)⋯(xₙ − xₙ₋₁)] · f(xₙ)

Or, y = f(x) = l₀(x)f(x₀) + l₁(x)f(x₁) + ⋯ + lₙ(x)f(xₙ), i.e., f(x) = Σᵢ₌₀ⁿ Lᵢ(x) fᵢ, where the Lagrange basis functions are defined by
Lᵢ(x) = Π_{j=0, j≠i}ⁿ (x − xⱼ)/(xᵢ − xⱼ),  i = 0, 1, 2, …, n.

This is called Lagrange’s interpolation formula.


Note: If each lᵢ(x) is an n-th degree polynomial, the interpolation is called n-th degree polynomial interpolation.
Example 1: Using the Lagrange interpolation formula, find the polynomial which passes through (0, −12), (1, 0), (3, 6), (4, 12), and approximate f(2).
Solution: The polynomial is given by
P₃(x) = L₀(x)f₀ + L₁(x)f₁ + L₂(x)f₂ + L₃(x)f₃ = −12L₀(x) + 0·L₁(x) + 6L₂(x) + 12L₃(x)
      = −12L₀(x) + 6L₂(x) + 12L₃(x), where
L₀(x) = (x − 1)(x − 3)(x − 4) / [(0 − 1)(0 − 3)(0 − 4)] = −(1/12)(x³ − 8x² + 19x − 12)
L₁(x) = x(x − 3)(x − 4) / [(1 − 0)(1 − 3)(1 − 4)] = (1/6)(x³ − 7x² + 12x)
L₂(x) = x(x − 1)(x − 4) / [(3 − 0)(3 − 1)(3 − 4)] = −(1/6)(x³ − 5x² + 4x)
L₃(x) = x(x − 1)(x − 3) / [(4 − 0)(4 − 1)(4 − 3)] = (1/12)(x³ − 4x² + 3x)

Therefore,

P₃(x) = −12·[−(1/12)(x³ − 8x² + 19x − 12)] + 6·[−(1/6)(x³ − 5x² + 4x)] + 12·[(1/12)(x³ − 4x² + 3x)]
      = (x³ − 8x² + 19x − 12) − (x³ − 5x² + 4x) + (x³ − 4x² + 3x)
      = x³ − 7x² + 18x − 12.
Hence the polynomial is P₃(x) = x³ − 7x² + 18x − 12 and
f(2) ≈ P₃(2) = (2)³ − 7(2)² + 18(2) − 12 = 4.
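The same result is obtained from a direct implementation of the Lagrange formula; the Python sketch below (the function name is ours) evaluates P₃(2) for the data of Example 1:

    def lagrange(x_nodes, y_nodes, x):
        total = 0.0
        n = len(x_nodes)
        for i in range(n):
            Li = 1.0
            for j in range(n):
                if j != i:                      # L_i(x) = Π (x - x_j)/(x_i - x_j)
                    Li *= (x - x_nodes[j]) / (x_nodes[i] - x_nodes[j])
            total += Li * y_nodes[i]
        return total

    print(lagrange([0, 1, 3, 4], [-12, 0, 6, 12], 2))   # 4.0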
Example 2: Let P₃(x) be the interpolating polynomial for the data (0, 0), (0.5, y), (1, 3) and (2, 2). Find y if the coefficient of x³ in P₃(x) is 6.
Solution: The polynomial is given by
P₃(x) = L₀(x)f₀ + L₁(x)f₁ + L₂(x)f₂ + L₃(x)f₃ = yL₁(x) + 3L₂(x) + 2L₃(x), where
L₁(x) = x(x − 1)(x − 2) / [(0.5 − 0)(0.5 − 1)(0.5 − 2)] = (8/3)(x³ − 3x² + 2x)
L₂(x) = x(x − 0.5)(x − 2) / [(1 − 0)(1 − 0.5)(1 − 2)] = −2x³ + 5x² − 2x
L₃(x) = x(x − 0.5)(x − 1) / [(2 − 0)(2 − 0.5)(2 − 1)] = (1/3)(x³ − 1.5x² + 0.5x)

Therefore,
P₃(x) = y(8/3)(x³ − 3x² + 2x) + 3(−2x³ + 5x² − 2x) + 2(1/3)(x³ − 1.5x² + 0.5x)
      = [(8y − 16)/3] x³ + (−8y + 14) x² + [(16y − 17)/3] x.

Now, the coefficient of x³ in the polynomial P₃(x) is (8y − 16)/3, so (8y − 16)/3 = 6 ⟹ 8y = 34 ⟹ y = 17/4.

Therefore the value of the unknown y is 17/4 = 4.25.

Exercise:
1. Using the Lagrange interpolation formula, find the interpolating polynomial for f(x) = 1/x with the nodes x₀ = 2, x₁ = 2.5 and x₂ = 4.

2. Apply Lagrange’s interpolation formula to find a polynomial which passes through the points
(0, – 20), (1, – 12), (3, – 20) 𝑎𝑛𝑑 (4, – 24).
3. Using Lagrange’s interpolation formula, find the value of 𝑦 corresponding to 𝑥 = 10 from the following data.
x 5 6 9 11
Y 380 -2 196 509

4.5.Least square regression


4.5.1 Discrete least-square approximations
Curve fitting is a procedure in which a mathematical formula (equation) is used to best fit a given set of data points. The basic idea of least-squares approximation is to fit a polynomial function P(x) to a set of data points (xᵢ, yᵢ) assumed to follow a theoretical relationship y = f(x).

The aim is to minimize the sum of the squares of the errors. Suppose the data points satisfying the theoretical relationship y = f(x) are (xᵢ, yᵢ), i = 1, 2, …, n.
The curve P(x) fitted to the observations y₁, y₂, …, yₙ is regarded as the best fit to f(x) if the differences between P(xᵢ) and f(xᵢ), i = 1, 2, …, n, are least; that is, the sum of the differences eᵢ = f(xᵢ) − P(xᵢ), i = 1, 2, …, n, should be a minimum. The differences eᵢ can be negative or positive, and when all the eᵢ are summed up the sum may add up to zero, which would not reflect the true error of the approximating polynomial. To estimate the error properly, the squares of these differences are therefore used.
Linear least square approximation
The general form of a linear equation is y = a + bx, which is fitted to the data points (xᵢ, yᵢ), i = 1, 2, …, n.
The sum of squared errors is S = Σᵢ₌₁ⁿ eᵢ² = Σᵢ₌₁ⁿ (yᵢ − (a + bxᵢ))².
By the principle of least squares, S is a minimum, therefore ∂S/∂a = 0 and ∂S/∂b = 0.
The normal equations then become

Σyᵢ = na + bΣxᵢ
Σxᵢyᵢ = aΣxᵢ + bΣxᵢ²
On solving these equations, we get the values of a and b. Putting the value of a and b in the
linear equation 𝑦 = 𝑎 + 𝑏𝑥, we get the equation of the line of best fit.
Example: By using the Least Squares Approximation, fit
a. a straight line
X 1 2 3 4 5 6
Y 120 90 60 70 35 11
Solution (a): To fit a line to the data points, we assume an equation of the form y = a + bx, where the unknowns a and b are to be determined.
The normal equations are
Σyᵢ = na + bΣxᵢ
Σxᵢyᵢ = aΣxᵢ + bΣxᵢ²
We therefore tabulate the columns xᵢ, yᵢ, xᵢyᵢ and xᵢ²:
x        y        xy        x²
1        120      120       1
2        90       180       4
3        60       180       9
4        70       280       16
5        35       175       25
6        11       66        36
Σx = 21  Σy = 386  Σxy = 1001  Σx² = 91
Substituting these sums gives 386 = 6a + 21b and 1001 = 21a + 91b; solving these, we get a = 134.33 and b = −20.
Therefore the straight line fitted to the given data is y = 134.33 − 20x.
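The normal equations can also be solved directly in code; the following Python sketch (the function name is ours) reproduces a and b for the data above:

    def fit_line(xs, ys):
        # solve  Σy = n·a + b·Σx  and  Σxy = a·Σx + b·Σx²
        n = len(xs)
        Sx, Sy = sum(xs), sum(ys)
        Sxy = sum(x * y for x, y in zip(xs, ys))
        Sxx = sum(x * x for x in xs)
        b = (n * Sxy - Sx * Sy) / (n * Sxx - Sx * Sx)
        a = (Sy - b * Sx) / n
        return a, b

    a, b = fit_line([1, 2, 3, 4, 5, 6], [120, 90, 60, 70, 35, 11])
    print(round(a, 2), round(b, 2))     # 134.33 -20.0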
4.5.2 Continuous-linear least square approximations
Let y = f(x) be continuous on [a, b] and let p(x) = a₀ + a₁x be its approximating polynomial of degree one. Then
S = ∫ₐᵇ [y − (a₀ + a₁x)]² dx.
The necessary conditions for a minimum of S are
∂S/∂a₀ = ∂S/∂a₁ = 0, which give
−2 ∫ₐᵇ [y − (a₀ + a₁x)] dx = 0
−2 ∫ₐᵇ [y − (a₀ + a₁x)] x dx = 0
Hence the normal equations are
∫ₐᵇ y dx = a₀ ∫ₐᵇ dx + a₁ ∫ₐᵇ x dx
∫ₐᵇ xy dx = a₀ ∫ₐᵇ x dx + a₁ ∫ₐᵇ x² dx
Example: Find a least square approximation to fit straight line
i. 𝑦 = √𝑥 for [0, 1] ii. 𝑦 = sin 𝜋𝑥 for [0, 1]
Solution: i.

For y = √x, we write the linear polynomial as p(x) = a₀ + a₁x, and the normal equations become
∫₀¹ √x dx = a₀ ∫₀¹ dx + a₁ ∫₀¹ x dx ⟹ 2/3 = a₀ + a₁/2
∫₀¹ x√x dx = a₀ ∫₀¹ x dx + a₁ ∫₀¹ x² dx ⟹ 2/5 = a₀/2 + a₁/3
i.e., 6a₀ + 3a₁ = 4 and 15a₀ + 10a₁ = 12.
Solving the simultaneous equations yields a₀ = 4/15 and a₁ = 4/5.
Therefore the linear least-squares approximation is p(x) = 4/15 + (4/5)x.
4.6. Application of interpolation
Introduction
Differentiation and integration are closely linked processes which are, in fact, inversely related. For example, if a given function y(t) represents an object's position as a function of time, its derivative gives its velocity,
v(t) = d/dt y(t).
On the other hand, if we are given the velocity v(t) as a function of time, its integration gives the position,
y(t) = ∫ v(t) dt.
There are so many methods available to find the derivative and definite integration of a
function. But when we have a complicated function or a function given in tabular form, then we
use numerical methods. In the present chapter, we shall be concerned with the problem of
numerical differentiation and integration.
4.6.1. Numerical Differentiation
The method of obtaining the derivatives of a function using a numerical technique is known as
numerical differentiation.
1. If the values of x are equally spaced and the derivative is required
i. near the beginning of the table, we employ Newton's forward formula;
ii. near the end of the table, we use Newton's backward formula.
2. If the values of x are not equally spaced, we use Newton's divided-difference interpolation formula or Lagrange's interpolation formula to obtain the required value of the derivative.
1. Derivatives using Newton's forward interpolation formula
Suppose the function y = f(x) is known at the (n + 1) equally spaced points x₀, x₁, …, xₙ, with values y₀, y₁, …, yₙ respectively, i.e., yᵢ = f(xᵢ), i = 0, 1, …, n. Let xᵢ = x₀ + ih and u = (x − x₀)/h, where h is the spacing.
Newton's forward interpolation formula is
y = y₀ + uΔy₀ + u(u − 1)Δ²y₀/2! + u(u − 1)(u − 2)Δ³y₀/3! + ⋯   ……… (*)
Differentiating equation (*) with respect to u gives
dy/du = Δy₀ + (2u − 1)Δ²y₀/2! + (3u² − 6u + 2)Δ³y₀/3! + ⋯, and since du/dx = 1/h,
dy/dx = (dy/du)(du/dx) = (1/h)[Δy₀ + (2u − 1)Δ²y₀/2! + (3u² − 6u + 2)Δ³y₀/3! + ⋯]   ……… (**)
Differentiating equation (**) again with respect to x, we get

d²y/dx² = d/du(dy/dx) · du/dx = (1/h²)[Δ²y₀ + (u − 1)Δ³y₀ + (6u² − 18u + 11)Δ⁴y₀/12 + ⋯]   ……… (***)
Equations (**) and (***) give the approximate derivatives of f(x) at an arbitrary point x = x₀ + uh.
When x = x₀, u = 0, and equations (**) and (***) become
dy/dx = (1/h)[Δy₀ − (1/2)Δ²y₀ + (1/3)Δ³y₀ − (1/4)Δ⁴y₀ + (1/5)Δ⁵y₀ − ⋯]
d²y/dx² = (1/h²)[Δ²y₀ − Δ³y₀ + (11/12)Δ⁴y₀ − (5/6)Δ⁵y₀ + ⋯]
Example: From the following table, find the values of dy/dx and d²y/dx² at the point x = 1.0.
x    1.0      1.1      1.2      1.3      1.4      1.5
y    5.4680   5.6665   5.9264   6.2551   6.6601   7.1488

Solution: The forward difference table is


𝑥 𝑦 ∆𝑦 ∆ 𝑦 ∆ 𝑦 ∆ 𝑦 ∆ 𝑦
1.0 5.4680
0.1985
1.1 5.6665 0.0614
0.2599 0.0074
1.2 5.9264 0.0688 0.0001
0.3287 0.0075 -0.0002
1.3 6.2551 0.0763 -0.0001
0.4050 0.0074
1.4 6.6601 0.0837
0.4887
1.5 7.1488
Here x₀ = 1.0 and h = 0.1, so u = 0 and hence
dy/dx = f′(x₀) = (1/h)[Δy₀ − (1/2)Δ²y₀ + (1/3)Δ³y₀ − (1/4)Δ⁴y₀ + (1/5)Δ⁵y₀ − ⋯]
f′(1.0) = (1/0.1)[0.1985 − (1/2)(0.0614) + (1/3)(0.0074) − (1/4)(0.0001) + (1/5)(−0.0002)] = 1.70202
d²y/dx² = f″(x₀) = (1/h²)[Δ²y₀ − Δ³y₀ + (11/12)Δ⁴y₀ − (5/6)Δ⁵y₀ + ⋯]
f″(1.0) = (1/(0.1)²)[0.0614 − 0.0074 + (11/12)(0.0001) − (5/6)(−0.0002)] = 5.4258
Example: Obtain the first and second derivatives of the function tabulated below at the points x = 1.1 and x = 1.2
X 1 1.2 1.4 1.6 1.8 2.0
Y 0 0.128 0.544 1.298 2.440 4.020
Solution: We first construct the forward difference table as shown below
𝑥 𝑦 ∆𝑦 ∆ 𝑦 ∆ 𝑦 ∆ 𝑦 ∆ 𝑦
1.0 0

0.128
1.2 0.128 0.288
0.416 0.05
1.4 0.544 0.338 0
0.754 0.05 0
1.6 1.298 0.388 0
1.142 0.05
1.8 2.440 0.438
1.580
2.0 4.020
Since x = 1.1 is a non-tabulated point near the beginning of the table, we take x₀ = 1.0 and compute
u = (x − x₀)/h = (1.1 − 1.0)/0.2 = 0.5.
Hence,
dy/dx = (1/h)[Δy₀ + (2u − 1)Δ²y₀/2! + (3u² − 6u + 2)Δ³y₀/3!]
      = (1/0.2)[0.128 + 0·(0.288) + (−1/24)(0.05)] = 0.62958

d²y/dx² = (1/h²)[Δ²y₀ + (u − 1)Δ³y₀] = (1/(0.2)²)[0.288 + (0.5 − 1)(0.05)] = 6.575

Now, x = 1.2 is a tabulated point near the beginning of the table.

For x = x₀ = 1.2, u = 0, and
dy/dx = (1/h)[Δy₀ − (1/2)Δ²y₀ + (1/3)Δ³y₀] = (1/0.2)[0.416 − (1/2)(0.338) + (1/3)(0.05)] = 1.31833
d²y/dx² = (1/h²)[Δ²y₀ − Δ³y₀] = (1/(0.2)²)[0.338 − 0.05] = 7.2
Example: Let f(1) = 1, f(3) = −1, f(5) = 4, f(7) = 10, f(9) = 13, f(11) = 18 and f(13) = 24. Find f′(1), f′(5) and f″(3).
Solution: First we find the forward differences using forward difference table as follows
𝑥 𝑓(𝑥) ∆ ∆ ∆ ∆ ∆ ∆
1 1 −2 7 −6 2 7 −22
3 −1 5 1 −4 9 −15
5 4 6 −3 5 −6
7 10 3 2 −1
9 13 5 1
11 18 6
13 24
Then, with h = 2,
f′(1) = (1/h)[Δy₀ − Δ²y₀/2 + Δ³y₀/3 − Δ⁴y₀/4 + Δ⁵y₀/5 − Δ⁶y₀/6]

= −2 − (7) + (−6) − (2) + (7) − (−22)

= (−2 − − 2 − + + )=−
∆ ∆ ∆
𝑓 (5) = ∆𝑦 − + − ,ℎ = 2

= 6 − (−3) + (5) − (−6) = (6 + + + ) =

𝑓 (3) = ∆ 𝑦 −∆ 𝑦 + ∆ 𝑦 − ∆ 𝑦 ,ℎ = 2

= 1 − (−4) + (9) − (−15) = 1+4+ + =

Exercise: From the following table of values, estimate 𝑦′(1.0) 𝑎𝑛𝑑 𝑦′′(1.0):
𝑥 1 2 3 4 5 6
𝑦 −4 3 22 59 120 211

4.6.2. Numerical Integration


The need often arises to evaluate the definite integral of a function that has no explicit antiderivative or whose antiderivative is not easy to obtain. The basic method of approximating ∫ₐᵇ f(x) dx is called numerical quadrature.
Given the data points (xᵢ, f(xᵢ)), i = 0, 1, …, n, of the function y = f(x), we are required to evaluate I = ∫ₐᵇ f(x) dx. Let the interval of integration [a, b] be divided into n equal subintervals of width h = (b − a)/n, so that a = x₀, x₁ = x₀ + h, …, xₙ = x₀ + nh = b.
∴ I = ∫ₐᵇ f(x) dx, with a = x₀ and b = x₀ + nh.   ……… (*)
Newton's forward interpolation formula gives
y = f(x) = y₀ + uΔy₀ + u(u − 1)Δ²y₀/2! + u(u − 1)(u − 2)Δ³y₀/3! + ⋯,
where u = (x − x₀)/h, so that du = dx/h, i.e., dx = h du.
∴ Equation (*) becomes
I = h ∫₀ⁿ [y₀ + uΔy₀ + u(u − 1)Δ²y₀/2! + u(u − 1)(u − 2)Δ³y₀/3! + ⋯] du
  = nh [y₀ + (n/2)Δy₀ + n(2n − 3)Δ²y₀/12 + n(n − 2)²Δ³y₀/24 + ⋯ up to (n + 1) terms]
∴ ∫ₐᵇ f(x) dx = nh [y₀ + (n/2)Δy₀ + n(2n − 3)Δ²y₀/12 + n(n − 2)²Δ³y₀/24 + ⋯]   ……… (**)
The formula given by equation (**) is known as the Newton-Cotes closed quadrature formula. From equation (**) we can derive different closed-type integration formulae by substituting n = 1, 2, 3, …

i. Trapezoidal Rule
Putting n = 1 in equation (**) and taking the curve y = f(x) through (x₀, y₀) and (x₁, y₁) as a polynomial of degree one, so that differences of order higher than one vanish, we get
∫[x₀, x₁] f(x) dx = h[y₀ + (1/2)Δy₀] = (h/2)[2y₀ + (y₁ − y₀)] = (h/2)(y₀ + y₁).

Similarly, for the next subinterval (x₁, x₂) and so on,

∫[x₁, x₂] f(x) dx = (h/2)(y₁ + y₂), …, ∫[xₙ₋₁, xₙ] f(x) dx = (h/2)(yₙ₋₁ + yₙ).

Adding the above integrals, we get

∫[x₀, xₙ] f(x) dx = (h/2)[(y₀ + yₙ) + 2(y₁ + y₂ + ⋯ + yₙ₋₁)].

This is known as the trapezoidal rule.

Note: The total error in the trapezoidal rule is of order h²: E = −((b − a)/12) h² y″(ξ) for some ξ ∈ [a, b].

ii. Simpson's 1/3rd Rule

Putting n = 2 in equation (**) and taking the curve through (x₀, y₀), (x₁, y₁) and (x₂, y₂) as a polynomial of degree two, so that differences of order higher than two vanish, we get
∫[x₀, x₂] f(x) dx = 2h[y₀ + Δy₀ + (1/6)Δ²y₀] = (2h/6)[6y₀ + 6(y₁ − y₀) + (y₂ − 2y₁ + y₀)] = (h/3)(y₀ + 4y₁ + y₂).

Similarly, ∫[x₂, x₄] f(x) dx = (h/3)(y₂ + 4y₃ + y₄), …, ∫[xₙ₋₂, xₙ] f(x) dx = (h/3)(yₙ₋₂ + 4yₙ₋₁ + yₙ).
Adding the above integrals, we get

∫[x₀, xₙ] f(x) dx = (h/3)[(y₀ + yₙ) + 2(y₂ + y₄ + ⋯ + yₙ₋₂) + 4(y₁ + y₃ + ⋯ + yₙ₋₁)],

which is known as Simpson's one-third rule.

Note: To use this formula, the given interval of integration must be divided into an even number of subintervals.
The error in Simpson's 1/3rd rule can be written as E = −((b − a)/180) h⁴ f⁽⁴⁾(ξ) = −(n h⁵/180) f⁽⁴⁾(ξ), where ξ ∈ [a, b] (for n subintervals of length h).
iii. Simpson's 3/8th Rule
Putting n = 3 in equation (**) and taking the curve through (x₀, y₀), (x₁, y₁), (x₂, y₂) and (x₃, y₃) as a polynomial of degree three, so that differences of order higher than three vanish, we get
∫[x₀, x₃] f(x) dx = 3h[y₀ + (3/2)Δy₀ + (3/4)Δ²y₀ + (1/8)Δ³y₀]
                 = (3h/8)[8y₀ + 12(y₁ − y₀) + 6(y₂ − 2y₁ + y₀) + (y₃ − 3y₂ + 3y₁ − y₀)]
                 = (3h/8)(y₀ + 3y₁ + 3y₂ + y₃).
Similarly, ∫[x₃, x₆] f(x) dx = (3h/8)(y₃ + 3y₄ + 3y₅ + y₆), …, ∫[xₙ₋₃, xₙ] f(x) dx = (3h/8)(yₙ₋₃ + 3yₙ₋₂ + 3yₙ₋₁ + yₙ).
Adding the above integrals, we get
∫[x₀, xₙ] f(x) dx = (3h/8)[(y₀ + yₙ) + 3(y₁ + y₂ + y₄ + y₅ + ⋯ + yₙ₋₁) + 2(y₃ + y₆ + ⋯ + yₙ₋₃)],
which is known as Simpson's three-eighth rule.
Note: To use this formula, the given interval must be divided into a number of subintervals n that is a multiple of 3.
The total error of Simpson's three-eighth rule is E = −((b − a)/80) h⁴ f⁽⁴⁾(ξ) = −(n h⁵/80) f⁽⁴⁾(ξ), where ξ ∈ [a, b] (for n subintervals of length h).
Simpson's three-eighth rule is not as accurate as Simpson's one-third rule.
Example: Evaluate the integral ∫₀^1.2 eˣ dx, correct to 3 significant digits, taking six subintervals and using
i. the trapezoidal rule, ii. Simpson's 1/3rd rule, iii. Simpson's 3/8th rule.
Solution: Here a = 0, b = 1.2 and n = 6, so
h = (b − a)/n = (1.2 − 0)/6 = 0.2
x              0       0.2     0.4     0.6     0.8     1.0     1.2
y = f(x) = eˣ  1       1.221   1.492   1.822   2.226   2.718   3.320
               y₀      y₁      y₂      y₃      y₄      y₅      y₆

The trapezoidal rule gives

I = (h/2)[(y₀ + y₆) + 2(y₁ + y₂ + y₃ + y₄ + y₅)]
  = (0.2/2)[(1 + 3.320) + 2(1.221 + 1.492 + 1.822 + 2.226 + 2.718)] = 2.3278 ≅ 2.328
Simpson's 1/3 rule gives
I = (h/3)[(y₀ + y₆) + 2(y₂ + y₄) + 4(y₁ + y₃ + y₅)]
  = (0.2/3)[(1 + 3.320) + 2(1.492 + 2.226) + 4(1.221 + 1.822 + 2.718)] = 2.320
Simpson's 3/8 rule gives
I = (3h/8)[(y₀ + y₆) + 3(y₁ + y₂ + y₄ + y₅) + 2(y₃)] = 2.320
The approximate value is therefore ∫₀^1.2 eˣ dx ≈ 2.320.
Exact value: ∫₀^1.2 eˣ dx = e^1.2 − e⁰ = 3.32011692 − 1 = 2.32011692.
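Both rules are easy to code; the Python sketch below (the function names are ours) reproduces the values obtained above for ∫₀^1.2 eˣ dx with n = 6:

    import math

    def trapezoidal(f, a, b, n):
        h = (b - a) / n
        s = (f(a) + f(b)) / 2 + sum(f(a + i*h) for i in range(1, n))
        return h * s

    def simpson_one_third(f, a, b, n):      # n must be even
        h = (b - a) / n
        s = f(a) + f(b)
        s += 4 * sum(f(a + i*h) for i in range(1, n, 2))   # odd ordinates
        s += 2 * sum(f(a + i*h) for i in range(2, n, 2))   # even interior ordinates
        return h * s / 3

    print(round(trapezoidal(math.exp, 0, 1.2, 6), 4))       # 2.3278
    print(round(simpson_one_third(math.exp, 0, 1.2, 6), 4)) # 2.3201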
Exercise
1. Evaluate ∫ 𝑑𝑥by using Simpson’s 3/8 rule and taking seven ordinates.

2. Evaluate a. ∫ 𝑒 𝑑𝑥 b. ∫ √𝑐𝑜𝑠𝜃 𝑑𝜃 Using


i. Trapezoidal rule, taking ℎ = and ℎ =
ii. Simpson’s 1/3rd rule, taking ℎ = and ℎ =
iii. Simpson’s 3/8th rule, taking ℎ = and ℎ = .
3. A rocket is launched from the ground. Its acceleration is registered during the first 80 seconds and is given in the
table below. Using Simpson’s one-third rule, find the velocity of the rocket at 𝑡 = 80 seconds.
t (s)        0      10     20     30     40     50     60     70     80
a (cm/s²)    30     31.63  33.34  35.47  37.75  40.33  43.25  46.69  50.67
