
Lecture Notes on Computational Mathematics

Dr. K. Manjunatha Prasad


Professor of Mathematics,
Department of Data Science, PSPH
Manipal Academy of Higher Education, Manipal, Karnataka-576 104
kmprasad63@gmail.com, km.prasad@manipal.edu

M.Sc Data Science/Biostatistics/Digital Epidemiology (I block), Batch 2022-2023

Contents

1 Numerical Methods
  1.1 Solution of Algebraic and Transcendental Equations
  1.2 Bisection Method
  1.3 Secant Method
  1.4 Newton-Raphson Method
  1.5 Solution of linear system equations
    1.5.1 Gauss Elimination (Direct method)
    1.5.2 Gauss-Seidel method (Iterative method)
  1.6 Rayleigh's Power method
  1.7 Numerical Integration
    1.7.1 Finite differences
  1.8 Newton-Cotes quadrature formula
    1.8.1 Trapezoidal Rule
    1.8.2 Simpson's one-third rule
    1.8.3 Simpson's three-eighth rule
  1.9 Numerical Solutions of First order ordinary differential equations
    1.9.1 Taylor series method
    1.9.2 Runge-Kutta method

1. Numerical Methods

1.1 Solution of Algebraic and Transcendental Equations


On many occasions we come across the problem of finding the roots of an equation of the form

f(x) = 0.

We have algebraic formulas to express the roots in terms of the coefficients when f(x) is a quadratic, cubic or bi-quadratic expression.
Algebraic functions of the form

f_n(x) = a_0 x^n + a_1 x^{n−1} + a_2 x^{n−2} + ... + a_n,    (1.1)

are called polynomials.

Definition 1.1. A non-algebraic function is called a transcendental function.

Example 1. f(x) = ln x^3 − 0.7

Example 2. φ(x) = e^{−0.5x} − 5x

Example 3. ψ(x) = sin^2 x − x^2 − 2

Let f(x) be a polynomial as given in Eq. (1.1). Then we have the following:

1. Every polynomial equation of the nth degree has n and only n roots.

2. If n is odd, the polynomial equation has at least one real root whose sign is opposite to that of the
last term.

3. If n is even and the constant term is negative, then the equation has at least one positive root and
at least one negative root.

4. If the polynomial equation has,

• real coefficients, then imaginary roots occur in conjugate pairs, and

• rational coefficients, then irrational roots occur in conjugate pairs.

5. Descartes’ Rule of Signs

• A polynomial equation f(x) = 0 cannot have more positive real roots than the number of
changes of sign in the coefficients of f(x).

• Similarly, f(x) = 0 cannot have more negative real roots than the number of changes of sign
in the coefficients of f(−x).

Theorem 1.1 (Intermediate Value Theorem). If f (x) is continuous in [a, b], and if f (a) and f (b) are of
opposite signs, then f (ξ) = 0 for at least one number ξ such that a < ξ < b.


1.2 Bisection Method


This method consists in locating a root of the equation between a and b, and it is based on the Intermediate
Value Theorem. It has the following computational steps:

1. Choose two real numbers a and b such that f (a) f (b) < 0

2. Set x_r = (a + b)/2.

3. Now,

• if f (a) f (xr ) < 0, then the root lies in the interval (a, xr ). Then, set b = xr and go to step 2
above.

• if f (a) f (xr ) > 0, then the root lies in the interval (xr , b). Then, set a = xr and go to step 2.

• if f (a) f (xr ) = 0, it means that xr is a root of the equation f (x) = 0 and the computation may
be terminated.

In practice, the root may not be obtained exactly, so the third condition in step 3 may never be satisfied. In such
a case, we bisect the interval as before and continue the process until the root is found to the desired
accuracy.
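
The steps above translate directly into a short program. The following is a minimal Python sketch (the function name, tolerance and iteration cap are illustrative choices, not part of the notes); applied to Exercise 1.1 below, it converges to approximately 1.3247.

```python
# Bisection sketch: assumes f is continuous on [a, b] with f(a)*f(b) < 0.
def bisection(f, a, b, tol=1e-6, max_iter=100):
    if f(a) * f(b) >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        xr = (a + b) / 2.0
        if f(a) * f(xr) < 0:      # root lies in (a, xr)
            b = xr
        elif f(a) * f(xr) > 0:    # root lies in (xr, b)
            a = xr
        else:                     # f(xr) = 0: xr is an exact root
            return xr
        if b - a < tol:
            break
    return (a + b) / 2.0

# Example (Exercise 1.1): real root of x^3 - x - 1 = 0 in [1, 2]
print(bisection(lambda x: x**3 - x - 1, 1.0, 2.0))   # ~1.3247
```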

Exercise 1.1. Find a real root of the equation f(x) = x^3 − x − 1 = 0.

Exercise 1.2. Find a real root of the equation x^3 − 2x − 5 = 0.

Exercise 1.3. Find a real root of the equation f(x) = x^3 + x^2 + x + 7 = 0 correct to three decimal places.

Exercise 1.4. Find the positive root between 0 and 1 of the equation x = e^{−x} correct to 3 decimal places.


Exercise 1.5. Find a root, correct to 3 decimal places and lying between 0 and 0.5, of the equation
4e^{−x} sin x − 1 = 0.

Exercise 1.6. Find a root, correct to 3 decimal places, of the equation x^3 − 4x − 9 = 0.

Exercise 1.7. Find a root, correct to 3 decimal places, of the equation 5x log_{10} x − 6 = 0.

Exercise 1.8. Find a root, correct to 3 decimal places, of the equation x^2 + x − cos x = 0.

1.3 Secant Method


If f(x) = 0 is a first degree equation in x, then it can be solved readily. We now study an iteration
process which gives the exact solution whenever f(x) = 0 is a first degree equation. Thus, if we
approximate f(x) by a first degree expression, we may write

f(x) = a_0 x + a_1 = 0,   (1.2)

the solution of which is given by

x = −a_1/a_0,   (1.3)

where a_0 ≠ 0 and a_1 are arbitrary parameters to be determined by prescribing two appropriate
conditions on f(x) and/or its derivatives.
If x_k and x_{k−1} are two approximations to the root, then a_0 and a_1 are found from the conditions

f(x_{k−1}) = a_0 x_{k−1} + a_1,

f(x_k) = a_0 x_k + a_1.

On solving, we get

a_0 = [f(x_k) − f(x_{k−1})] / (x_k − x_{k−1})

and

a_1 = [x_k f(x_{k−1}) − x_{k−1} f(x_k)] / (x_k − x_{k−1}).   (1.4)

From equations (1.3) and (1.4), the next approximation x_{k+1} is given by

x_{k+1} = [x_{k−1} f(x_k) − x_k f(x_{k−1})] / [f(x_k) − f(x_{k−1})],

which can also be written as

x_{k+1} = x_k − f(x_k) (x_k − x_{k−1}) / [f(x_k) − f(x_{k−1})].

This is called the secant or the chord method.
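
A minimal Python sketch of this update rule (the starting guesses, tolerance and iteration cap are illustrative choices, not prescribed by the notes):

```python
# Secant iteration: x_{k+1} = x_k - f(x_k)*(x_k - x_{k-1}) / (f(x_k) - f(x_{k-1}))
def secant(f, x_prev, x_curr, tol=1e-6, max_iter=100):
    for _ in range(max_iter):
        f_prev, f_curr = f(x_prev), f(x_curr)
        if f_curr == f_prev:               # avoid division by zero
            break
        x_next = x_curr - f_curr * (x_curr - x_prev) / (f_curr - f_prev)
        if abs(x_next - x_curr) < tol:
            return x_next
        x_prev, x_curr = x_curr, x_next
    return x_curr

# Example (Exercise 1.10): real root of x^3 - 2x - 5 = 0, starting from 2 and 3
print(secant(lambda x: x**3 - 2*x - 5, 2.0, 3.0))   # ~2.0946
```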

Exercise 1.9. Use the secant method to determine the root of the equation cos x − x e^x = 0.


Exercise 1.10. Use the secant method to determine a real root of the equation x^3 − 2x − 5 = 0.

Exercise 1.11. Use the secant method to determine a real root of the equation x e^x − 1 = 0.

Exercise 1.12. Use the secant method to determine the root, between 5 and 8, of the equation x^{2.2} = 69.

Exercise 1.13. Use the secant method to determine a real root of the equation x = e^{−x}.

Exercise 1.14. Use the secant method to determine a real root of the equation 3x + sin x − e^x = 0 to an
accuracy of 4 decimal places.

Exercise 1.15. Use the secant method to determine a real root of the equation x^4 − x − 10 = 0.

Exercise 1.16. Use the secant method to determine a real root of the equation x − e^{−x} = 0.

Exercise 1.17. Use the secant method to determine a real root of the equation e^{−x}(x^2 − 5x + 2) − 1 = 0.

Exercise 1.18. Use the secant method to determine a real root of the equation x − sin x − 12 = 0.

Exercise 1.19. Use the secant method to determine a real root of the equation e^{−x} = 3 log x.

1.4 Newton-Raphson Method


Let x_0 be an approximate root of f(x) = 0 and let x_1 = x_0 + h be the correct root, so that f(x_1) = 0.
Expanding f(x_0 + h) by Taylor's series, we obtain

f(x_0) + h f′(x_0) + (h^2/2!) f″(x_0) + ... = 0.

Neglecting the terms containing the second and higher powers of h, we have

f(x_0) + h f′(x_0) = 0,

which gives

h = − f(x_0)/f′(x_0).

Successive approximations are given by x_2, x_3, ..., x_{n+1}, where

x_{n+1} = x_n − f(x_n)/f′(x_n).

This is called the Newton-Raphson formula.
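
A minimal Python sketch of the iteration (the derivative is supplied explicitly; the starting guess, tolerance and iteration cap are illustrative choices):

```python
# Newton-Raphson iteration: x_{n+1} = x_n - f(x_n)/f'(x_n)
def newton_raphson(f, df, x0, tol=1e-8, max_iter=50):
    x = x0
    for _ in range(max_iter):
        fx, dfx = f(x), df(x)
        if dfx == 0:                       # derivative vanished: stop
            break
        x_new = x - fx / dfx
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example (Exercise 1.20): root of x^3 - 2x - 5 = 0 starting near x = 2
print(newton_raphson(lambda x: x**3 - 2*x - 5,
                     lambda x: 3*x**2 - 2, 2.0))   # ~2.0946
```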

Exercise 1.20. Use the Newton-Raphson method to determine a root of the equation x^3 − 2x − 5 = 0.

Exercise 1.21. Use the Newton-Raphson method to determine a root of the equation x sin x + cos x = 0.

Exercise 1.22. Use the Newton-Raphson method to determine a root of the equation x = e^{−x}.

Exercise 1.23. Use the Newton-Raphson method to determine a root of the equation sin x = x/2, correct to 3
decimal places, given that the root lies between π/2 and π.


Exercise 1.24. Use the Newton-Raphson method to determine a root of the equation 4e^{−x} sin x − 1 = 0 correct
to 3 decimal places, given that the root lies between 0 and 0.5.

Exercise 1.25. Use the Newton-Raphson method to compute a real root of the equation x^2 + 4 sin x = 0.

Exercise 1.26. Use the Newton-Raphson method to derive a formula for finding the k-th root of a positive
number N and hence compute the value of 25^{1/4}.

Exercise 1.27. Use the Newton-Raphson method to determine a root of the equation x sin^2 x − 4 = 0 correct to
3 decimal places.

Exercise 1.28. Use the Newton-Raphson method to determine a root of the equation e^x = 4x correct to 3
decimal places.

Exercise 1.29. Use the Newton-Raphson method to determine a root of the equation x^3 − 5x + 3 = 0 correct
to 3 decimal places.

Exercise 1.30. Use the Newton-Raphson method to determine a root of the equation x e^x = cos x correct to 3
decimal places.

Exercise 1.31. Use the Newton-Raphson method to determine a root of the equation x = (1 + cos x)/3 correct to 3
decimal places.

Exercise 1.32. Use the Newton-Raphson method to determine a root of the equation cot x = −x correct to 3
decimal places.

1.5 Solution of linear system equations


The solution of a linear system of equations can be accomplished by a numerical method which falls in
one of two categories: direct methods or iterative methods.

1.5.1. Gauss Elimination (Direct method)


This is an elementary elimination method: it reduces the system of equations to an equivalent upper-triangular
system, which can then be solved by back substitution.
Let the system of n linear equations in n unknowns be given by

a_{11} x_1 + a_{12} x_2 + · · · + a_{1n} x_n = b_1
a_{21} x_1 + a_{22} x_2 + · · · + a_{2n} x_n = b_2
· · ·                                                      (1.5)
a_{n1} x_1 + a_{n2} x_2 + · · · + a_{nn} x_n = b_n

Step 1: The unknowns are eliminated to obtain an upper-triangular system.


To eliminate x_1 from the second equation of (1.5), we multiply the first equation by −a_{21}/a_{11} and add it
to the second equation, obtaining

(a_{22} − (a_{21}/a_{11}) a_{12}) x_2 + (a_{23} − (a_{21}/a_{11}) a_{13}) x_3 + · · · + (a_{2n} − (a_{21}/a_{11}) a_{1n}) x_n = b_2 − (a_{21}/a_{11}) b_1,

which can be written as

a′_{22} x_2 + a′_{23} x_3 + · · · + a′_{2n} x_n = b′_2.

Similarly, we can multiply the first equation by −a_{31}/a_{11} and add it to the third equation of the system (1.5).
This eliminates x_1 from the third equation and we obtain

a′_{32} x_2 + a′_{33} x_3 + · · · + a′_{3n} x_n = b′_3.

In a similar fashion, we can eliminate x_1 from the remaining equations, and after eliminating x_1 from
the last equation of (1.5), we obtain the system

a_{11} x_1 + a_{12} x_2 + a_{13} x_3 + · · · + a_{1n} x_n = b_1
a′_{22} x_2 + a′_{23} x_3 + · · · + a′_{2n} x_n = b′_2
a′_{32} x_2 + a′_{33} x_3 + · · · + a′_{3n} x_n = b′_3
· · ·                                                      (1.6)
a′_{n2} x_2 + a′_{n3} x_3 + · · · + a′_{nn} x_n = b′_n

We next eliminate x_2 from the last (n − 2) equations of the system (1.6). To eliminate x_2 from
the third equation of (1.6), we multiply the second equation by −a′_{32}/a′_{22} and add it to the third equation.
Repeating this process with the remaining equations, we obtain the system

a_{11} x_1 + a_{12} x_2 + a_{13} x_3 + · · · + a_{1n} x_n = b_1
a′_{22} x_2 + a′_{23} x_3 + · · · + a′_{2n} x_n = b′_2
a″_{33} x_3 + · · · + a″_{3n} x_n = b″_3
· · ·                                                      (1.7)
a″_{n3} x_3 + · · · + a″_{nn} x_n = b″_n

In equation (1.7), the "double primes" indicate that the elements have changed twice. It is easily seen
that this procedure can be continued to eliminate x_3 from the fourth equation onwards, x_4 from the fifth
equation onwards, etc., till we finally obtain the upper-triangular form:

a_{11} x_1 + a_{12} x_2 + a_{13} x_3 + · · · + a_{1n} x_n = b_1
a′_{22} x_2 + a′_{23} x_3 + · · · + a′_{2n} x_n = b′_2
a″_{33} x_3 + · · · + a″_{3n} x_n = b″_3
· · ·                                                      (1.8)
a^{(n−1)}_{nn} x_n = b^{(n−1)}_n


Step 2: We now have to obtain the required solution from the system (1.8). From the last equation of
the system, we obtain

x_n = b^{(n−1)}_n / a^{(n−1)}_{nn}.

This is then substituted in the (n − 1)-th equation to obtain x_{n−1}, and the process is repeated to compute
the other unknowns by back substitution.
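
A compact Python sketch of these two steps, forward elimination followed by back substitution (it assumes the pivots a_kk stay non-zero, i.e., no row interchanges are needed; the routine and example are illustrative, not taken from the notes). The example solves the second system listed below.

```python
# Gauss elimination sketch: reduce to upper-triangular form, then back-substitute.
def gauss_eliminate(A, b):
    n = len(b)
    A = [row[:] for row in A]                # work on copies
    b = b[:]
    for k in range(n - 1):                   # eliminate x_k from the rows below
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):           # back substitution
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

# Example (second system below): 2x1 + x2 + x3 = 10, 3x1 + 2x2 + 3x3 = 18, x1 + 4x2 + 9x3 = 16
print(gauss_eliminate([[2, 1, 1], [3, 2, 3], [1, 4, 9]], [10, 18, 16]))   # [7.0, -9.0, 5.0]
```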
Use the Gauss elimination method to solve the following systems of equations:

1. x1 + 2x2 − x3 = 2
3x1 + 6x2 + x3 = 1
3x1 + 3x2 + 2x3 = 3

2. 2x1 + x2 + x3 = 10
3x1 + 2x2 + 3x3 = 18
x1 + 4x2 + 9x3 = 16

3. 10x − y + 2z = 4
x + 10y − z = 3
2x + 3y + 20z = 7

1.5.2. Gauss-Seidel method (Iterative method)


Now we consider an iterative method to find the solution of a system of linear equations. Such a method starts from
an approximation to the true solution and, if convergent, derives a sequence of closer approximations, the
cycle of computations being repeated till the required accuracy is obtained.
Consider the system

a_{11} x_1 + a_{12} x_2 + · · · + a_{1n} x_n = b_1
a_{21} x_1 + a_{22} x_2 + · · · + a_{2n} x_n = b_2
· · ·                                                      (1.9)
a_{n1} x_1 + a_{n2} x_2 + · · · + a_{nn} x_n = b_n

in which the diagonal elements a_{ii} do not vanish and are large compared with the other coefficients in each equation.
If this is not the case, the equations should be rearranged so that this condition is satisfied. We now
rewrite the system (1.9) as

x_1 = b_1/a_{11} − (a_{12}/a_{11}) x_2 − (a_{13}/a_{11}) x_3 − · · · − (a_{1n}/a_{11}) x_n
x_2 = b_2/a_{22} − (a_{21}/a_{22}) x_1 − (a_{23}/a_{22}) x_3 − · · · − (a_{2n}/a_{22}) x_n
· · ·                                                                                   (1.10)
x_n = b_n/a_{nn} − (a_{n1}/a_{nn}) x_1 − (a_{n2}/a_{nn}) x_2 − · · · − (a_{n(n−1)}/a_{nn}) x_{n−1}


Suppose x_1^{(0)} = x_2^{(0)} = x_3^{(0)} = · · · = x_n^{(0)} = 0 are the initial approximations to the unknowns x_1, x_2, . . . , x_n.
Then the first approximations are given by

x_1^{(1)} = b_1/a_{11}
x_2^{(1)} = b_2/a_{22} − (a_{21}/a_{22}) x_1^{(1)}
· · ·                                                                                   (1.11)
x_n^{(1)} = b_n/a_{nn} − (a_{n1}/a_{nn}) x_1^{(1)} − (a_{n2}/a_{nn}) x_2^{(1)} − · · · − (a_{n(n−1)}/a_{nn}) x_{n−1}^{(1)}

and the second approximations are given by

x_1^{(2)} = b_1/a_{11} − (a_{12}/a_{11}) x_2^{(1)} − (a_{13}/a_{11}) x_3^{(1)} − · · · − (a_{1n}/a_{11}) x_n^{(1)}
x_2^{(2)} = b_2/a_{22} − (a_{21}/a_{22}) x_1^{(2)} − (a_{23}/a_{22}) x_3^{(1)} − · · · − (a_{2n}/a_{22}) x_n^{(1)}
· · ·                                                                                   (1.12)
x_n^{(2)} = b_n/a_{nn} − (a_{n1}/a_{nn}) x_1^{(2)} − (a_{n2}/a_{nn}) x_2^{(2)} − · · · − (a_{n(n−1)}/a_{nn}) x_{n−1}^{(2)}

In general, the (k + 1)-th step values are given by

x_1^{(k+1)} = b_1/a_{11} − (a_{12}/a_{11}) x_2^{(k)} − (a_{13}/a_{11}) x_3^{(k)} − · · · − (a_{1n}/a_{11}) x_n^{(k)}
x_2^{(k+1)} = b_2/a_{22} − (a_{21}/a_{22}) x_1^{(k+1)} − (a_{23}/a_{22}) x_3^{(k)} − · · · − (a_{2n}/a_{22}) x_n^{(k)}
· · ·                                                                                   (1.13)
x_n^{(k+1)} = b_n/a_{nn} − (a_{n1}/a_{nn}) x_1^{(k+1)} − (a_{n2}/a_{nn}) x_2^{(k+1)} − · · · − (a_{n(n−1)}/a_{nn}) x_{n−1}^{(k+1)}
The entire process is repeated till the values of x_1, x_2, . . . , x_n are obtained to the required accuracy. Solve the
following systems of linear equations using the Gauss-Seidel iteration method (a short iteration sketch in Python follows the exercises).

1. 6x + y + z = 20
x + 4y − z = 6
x − y + 5z = 7

2. 2x − y = 7
− x + 2y − z = 1
− y + 2z = 1

3. 10x + 2y + z = 9
2x + 20y − 2z = −44
− 2x + 3y + 10z = 22
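
As noted above, here is a minimal Python sketch of the Gauss-Seidel iteration (1.13); it assumes the equations are already arranged so that each diagonal element dominates its row, and the tolerance and iteration cap are illustrative choices.

```python
# Gauss-Seidel sketch: newly computed components are used immediately.
def gauss_seidel(A, b, tol=1e-8, max_iter=100):
    n = len(b)
    x = [0.0] * n                            # initial approximation x^(0) = 0
    for _ in range(max_iter):
        max_change = 0.0
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x_new = (b[i] - s) / A[i][i]
            max_change = max(max_change, abs(x_new - x[i]))
            x[i] = x_new
        if max_change < tol:
            break
    return x

# Example (first system above): 6x + y + z = 20, x + 4y - z = 6, x - y + 5z = 7
print(gauss_seidel([[6, 1, 1], [1, 4, -1], [1, -1, 5]], [20, 6, 7]))   # ~[3.0, 1.0, 1.0]
```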

1.6 Rayleigh's Power method


Rayleigh’s power method is used to determine the largest eigenvalue (in magnitude) and the corresponding
eigenvector of a square matrix A. Let λ_1, λ_2, . . . , λ_n be the distinct eigenvalues such that
|λ_1| > |λ_2| > · · · > |λ_n|. To find |λ_1|, proceed as follows. Let v_0 be the initial vector and compute

A v_0 = y_1 = m_1 v_1, where m_1 is the element of y_1 that is largest in magnitude,
A v_1 = y_2 = m_2 v_2,
. . .
A v_k = y_{k+1} = m_{k+1} v_{k+1}.

As k → ∞, m_{k+1} → |λ_1| and v_{k+1} tends to the corresponding eigenvector. The process is repeated till the
numerically largest eigenvalue is obtained to the desired degree of accuracy. The initial vector is usually chosen as
v_0 = (1, 1, 1)^T or v_0 = (1, 0, 0)^T.
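
A minimal Python sketch of this normalise-by-largest-element iteration (the tolerance and iteration cap are illustrative; the example uses the third matrix listed below):

```python
# Power method sketch: repeatedly multiply by A and normalise by the component
# of largest magnitude; m converges to the dominant eigenvalue.
def power_method(A, v0, tol=1e-8, max_iter=200):
    v = v0[:]
    m = 0.0
    for _ in range(max_iter):
        y = [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]
        m_new = max(y, key=abs)              # element of largest magnitude
        v = [yi / m_new for yi in y]
        if abs(m_new - m) < tol:
            return m_new, v
        m = m_new
    return m, v

# Example (third matrix below): dominant eigenvalue of a tridiagonal matrix
A = [[2, -1, 0], [-1, 2, -1], [0, -1, 2]]
m, v = power_method(A, [1.0, 1.0, 1.0])
print(m, v)                                  # m ~ 3.4142 (= 2 + sqrt(2))
```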
Find the numerically largest eigenvalue and the corresponding eigenvector using Rayleigh's power
method for the following matrices:

1. [ −15   4   3 ]
   [  10 −12   6 ]
   [  20  −4   2 ]

2. [ 1  6  1 ]
   [ 1  2  0 ]
   [ 0  0  3 ]

3. [  2 −1  0 ]
   [ −1  2 −1 ]
   [  0 −1  2 ]

1.7 Numerical Integration


Given a set of data points (x_0, y_0), (x_1, y_1), ..., (x_n, y_n) of a function y = f(x), where f(x) is not known
explicitly, we wish to compute the value of the definite integral

I = ∫_a^b y dx.

Before visiting the Newton-Cotes quadrature formula, let us look into some basic notions.

1.7.1. Finite differences


Forward differences: If y_0, y_1, y_2, ..., y_n denote a set of values of y, then y_1 − y_0, y_2 − y_1, ..., y_n − y_{n−1} are
called the differences of y. Denoting these differences by ∆y_0, ∆y_1, ..., ∆y_{n−1} respectively, we have

∆y_0 = y_1 − y_0,  ∆y_1 = y_2 − y_1,  ...,  ∆y_{n−1} = y_n − y_{n−1},

where ∆ is called the forward difference operator and ∆y_0, ∆y_1, ... are called first forward differences.
The differences of the first forward differences are called second forward differences and are
denoted by ∆^2 y_0, ∆^2 y_1, .... Similarly, one can define third forward differences, fourth forward differences,
etc. Thus,

∆^2 y_0 = ∆y_1 − ∆y_0 = y_2 − y_1 − (y_1 − y_0) = y_2 − 2y_1 + y_0,

∆^3 y_0 = ∆^2 y_1 − ∆^2 y_0 = y_3 − 2y_2 + y_1 − (y_2 − 2y_1 + y_0) = y_3 − 3y_2 + 3y_1 − y_0,

∆^4 y_0 = ∆^3 y_1 − ∆^3 y_0 = y_4 − 3y_3 + 3y_2 − y_1 − (y_3 − 3y_2 + 3y_1 − y_0) = y_4 − 4y_3 + 6y_2 − 4y_1 + y_0.

Backward differences: The differences y_1 − y_0, y_2 − y_1, ..., y_n − y_{n−1} are called first backward differences
when they are denoted by ∇y_1, ∇y_2, ..., ∇y_n respectively, so that

∇y_1 = y_1 − y_0,  ∇y_2 = y_2 − y_1,  ...,  ∇y_n = y_n − y_{n−1},

where ∇ is called the backward difference operator. In a similar way, one can define backward
differences of higher orders. Thus we obtain

∇^2 y_2 = ∇y_2 − ∇y_1 = y_2 − y_1 − (y_1 − y_0) = y_2 − 2y_1 + y_0,

∇^3 y_3 = ∇^2 y_3 − ∇^2 y_2 = y_3 − 3y_2 + 3y_1 − y_0, etc.

Note: Given a set of n + 1 values (x_i, y_i), i = 0, 1, 2, ..., n of x and y, it is required to find y_n(x), a
polynomial of the n-th degree such that y and y_n(x) agree at the tabulated points. Let the values of x be
equidistant, i.e., let

x_i = x_0 + ih,  i = 0, 1, 2, ..., n.

Let x = x_0 + ph. Then Newton's forward difference interpolation formula is given by

y_n(x) = y_0 + p ∆y_0 + [p(p − 1)/2!] ∆^2 y_0 + [p(p − 1)(p − 2)/3!] ∆^3 y_0 + ... + [p(p − 1)...(p − n + 1)/n!] ∆^n y_0.

Similarly, Newton's backward difference interpolation formula is given by

y_n(x) = y_n + p ∇y_n + [p(p + 1)/2!] ∇^2 y_n + [p(p + 1)(p + 2)/3!] ∇^3 y_n + ... + [p(p + 1)...(p + n − 1)/n!] ∇^n y_n,

where p = (x − x_n)/h.
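
As a small illustration of these definitions (not part of the notes), the following Python sketch builds a forward difference table and evaluates Newton's forward interpolation formula at a point; the sample data are hypothetical.

```python
from math import factorial

# Forward difference table: table[k][i] holds Delta^k y_i.
def forward_differences(y):
    table = [list(y)]
    while len(table[-1]) > 1:
        prev = table[-1]
        table.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])
    return table

# Newton's forward interpolation at x, for equally spaced abscissae x0, x0 + h, ...
def newton_forward(x0, h, y, x):
    table = forward_differences(y)
    p = (x - x0) / h
    result, coeff = y[0], 1.0
    for k in range(1, len(y)):
        coeff *= (p - (k - 1))               # builds p(p-1)...(p-k+1)
        result += coeff / factorial(k) * table[k][0]
    return result

# Illustrative data: y = x^2 tabulated at x = 0, 1, 2, 3; interpolate at x = 1.5
print(newton_forward(0.0, 1.0, [0.0, 1.0, 4.0, 9.0], 1.5))   # 2.25
```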

1.8 Newton-Cotes quadrature formula


Let

I = ∫_a^b f(x) dx,

where f(x) takes the values y_0, y_1, ..., y_n for x = x_0, x_1, ..., x_n. Let us divide the interval (a, b) into n
subintervals of width h so that x_0 = a, x_1 = x_0 + h, x_2 = x_0 + 2h, ..., x_n = x_0 + nh = b. Then

∫_{x_0}^{x_0+nh} f(x) dx = h ∫_0^n f(x_0 + rh) dr.

Putting x = x_0 + rh, dx = h dr, and using Newton's forward interpolation formula, we eventually get

∫_{x_0}^{x_0+nh} f(x) dx = nh [ y_0 + (n/2) ∆y_0 + (n(2n − 3)/12) ∆^2 y_0 + (n(n − 2)^2/24) ∆^3 y_0
    + (n^4/5 − 3n^3/2 + 11n^2/3 − 3n) ∆^4 y_0/4!
    + (n^5/6 − 2n^4 + 35n^3/4 − 50n^2/3 + 12n) ∆^5 y_0/5! + ... ].

This is known as Newton-Cotes quadrature formula.


1.8.1. Trapezoidal Rule


Setting n = 1 in the Newton-Cotes quadrature formula, all differences higher than the first become zero
and we obtain

∫_{x_0}^{x_1} y dx = h (y_0 + (1/2) ∆y_0) = h [y_0 + (1/2)(y_1 − y_0)] = (h/2)(y_0 + y_1).

Similarly, for the next interval [x_1, x_2], we have

∫_{x_1}^{x_2} y dx = (h/2)(y_1 + y_2),

and so on. For the interval [x_{n−1}, x_n], we have

∫_{x_{n−1}}^{x_n} y dx = (h/2)(y_{n−1} + y_n).

Adding all these expressions, we get

∫_{x_0}^{x_n} y dx = (h/2) [y_0 + 2(y_1 + y_2 + ... + y_{n−1}) + y_n].

This is known as the Trapezoidal rule.
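
A minimal Python sketch of the composite trapezoidal rule (the routine and example are illustrative; the integrand is that of Exercise 1.33 below):

```python
# Composite trapezoidal rule: (h/2)[y0 + 2*(y1 + ... + y_{n-1}) + yn]
def trapezoidal(f, a, b, n):
    h = (b - a) / n
    y = [f(a + i * h) for i in range(n + 1)]
    return (h / 2) * (y[0] + 2 * sum(y[1:-1]) + y[-1])

# Example: integral of 1/(1 + x^2) from 0 to 6 with n = 6 subintervals
print(trapezoidal(lambda x: 1 / (1 + x**2), 0.0, 6.0, 6))   # ~1.4108 (exact: arctan 6 ~ 1.4056)
```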

1.8.2. Simpson's one-third rule


This rule is obtained by putting n = 2 in the Newton-Cotes quadrature formula:

∫_{x_0}^{x_2} y dx = 2h (y_0 + ∆y_0 + (1/6) ∆^2 y_0) = (h/3)(y_0 + 4y_1 + y_2).

Similarly, for the interval [x_2, x_4], we have

∫_{x_2}^{x_4} y dx = (h/3)(y_2 + 4y_3 + y_4),

and so on. Finally,

∫_{x_{n−2}}^{x_n} y dx = (h/3)(y_{n−2} + 4y_{n−1} + y_n).

Summing all these integrals, we obtain

∫_{x_0}^{x_n} y dx = (h/3) [y_0 + 4(y_1 + y_3 + y_5 + ... + y_{n−1}) + 2(y_2 + y_4 + y_6 + ... + y_{n−2}) + y_n].

This is known as Simpson's one-third rule.
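
A matching Python sketch of the composite Simpson's one-third rule (n must be even; the example reuses the integrand of Exercise 1.33 below):

```python
# Composite Simpson's 1/3 rule: (h/3)[y0 + 4*(odd ordinates) + 2*(even interior ordinates) + yn]
def simpson_one_third(f, a, b, n):
    if n % 2 != 0:
        raise ValueError("n must be even for Simpson's 1/3 rule")
    h = (b - a) / n
    y = [f(a + i * h) for i in range(n + 1)]
    return (h / 3) * (y[0] + 4 * sum(y[1:-1:2]) + 2 * sum(y[2:-1:2]) + y[-1])

# Example: integral of 1/(1 + x^2) from 0 to 6 with n = 6 subintervals
print(simpson_one_third(lambda x: 1 / (1 + x**2), 0.0, 6.0, 6))   # ~1.3662
```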

1.8.3. Simpson's three-eighth rule


Setting n = 3 in the Newton-Cotes quadrature formula,

∫_{x_0}^{x_3} y dx = 3h (y_0 + (3/2) ∆y_0 + (3/4) ∆^2 y_0 + (1/8) ∆^3 y_0) = (3h/8)(y_0 + 3y_1 + 3y_2 + y_3).

Similarly, for the interval [x_3, x_6], we have

∫_{x_3}^{x_6} y dx = (3h/8)(y_3 + 3y_4 + 3y_5 + y_6),

and so on. Summing up all these, we obtain

∫_{x_0}^{x_n} y dx = (3h/8) [y_0 + 3(y_1 + y_2) + 2y_3 + 3(y_4 + y_5) + 2y_6 + ... + 2y_{n−3} + 3(y_{n−2} + y_{n−1}) + y_n].

This is known as Simpson's three-eighth rule.
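
A corresponding Python sketch of the composite Simpson's three-eighth rule (n must be a multiple of 3; again the integrand of Exercise 1.33 below is used):

```python
# Composite Simpson's 3/8 rule: (3h/8)[y0 + 3*(ordinates not at multiples of 3)
#                                      + 2*(interior ordinates at multiples of 3) + yn]
def simpson_three_eighth(f, a, b, n):
    if n % 3 != 0:
        raise ValueError("n must be a multiple of 3 for Simpson's 3/8 rule")
    h = (b - a) / n
    y = [f(a + i * h) for i in range(n + 1)]
    total = y[0] + y[-1]
    for i in range(1, n):
        total += (2 if i % 3 == 0 else 3) * y[i]
    return (3 * h / 8) * total

# Example: integral of 1/(1 + x^2) from 0 to 6 with n = 6 subintervals
print(simpson_three_eighth(lambda x: 1 / (1 + x**2), 0.0, 6.0, 6))   # ~1.3571
```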

Exercise 1.33. Evaluate the integral

∫_0^6 dx/(1 + x^2)

by using
by using

1. Trapezoidal Rule

2. Simpson’s one-third rule

3. Simpson’s three-eighth rule


Exercise 1.34. Use the Trapezoidal rule to evaluate ∫_0^2 e^{x^2} dx taking 10 intervals.

Exercise 1.35. Use Simpson's one-third rule to evaluate ∫_0^{0.6} e^{−x^2} dx taking 7 ordinates.

Exercise 1.36. Use Simpson's three-eighth rule to evaluate ∫_{0.2}^{1.4} (sin x − log x + e^x) dx.

Exercise 1.37. Find, from the following table, the area bounded by the curve and the x-axis from x = 7.47
to x = 7.52.

x      7.47   7.48   7.49   7.50   7.51   7.52
f(x)   1.93   1.95   1.98   2.01   2.03   2.06

1.9 Numerical Solutions of First order ordinary differential equations


Consider differential equations of the form dy/dx = f(x, y), with the initial condition y(x_0) = y_0. We consider
the following methods to solve such differential equations.

1.9.1. Taylor series method


Consider

y′ = f(x, y),  y(x_0) = y_0.   (1.14)

If y(x) is the exact solution of equation (1.14), then by Taylor series expansion about x_0 we get

y(x) = y_0 + (x − x_0) y′_0 + [(x − x_0)^2/2!] y″_0 + [(x − x_0)^3/3!] y‴_0 + · · · ,

where
y_0 = y(x_0),
y′_0 = f(x_0, y_0),
y″_0 = f_x(x_0, y_0) + f_y(x_0, y_0) y′_0,
y‴_0 = f_xx(x_0, y_0) + f_xy(x_0, y_0) y′_0 + f_yx(x_0, y_0) y′_0 + f_yy(x_0, y_0) (y′_0)^2 + f_y(x_0, y_0) y″_0.
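
As a small worked illustration (an assumption-laden sketch, not a general routine from the notes), here is a third-order Taylor step in Python for Exercise 1.38 below, y′ = x − y^2, y(0) = 1, with y′, y″, y‴ coded by hand from the formulas above:

```python
# Third-order Taylor step for y' = x - y^2, y(0) = 1 (Exercise 1.38).
# The derivatives follow from differentiating y' = f(x, y) along the solution.
def taylor_step(x0, y0, h):
    y1 = x0 - y0**2                      # y'   = f(x, y)
    y2 = 1 - 2 * y0 * y1                 # y''  = f_x + f_y * y'
    y3 = -2 * (y1**2 + y0 * y2)          # y''' = -2*(y'^2 + y*y'')
    return y0 + h * y1 + h**2 / 2 * y2 + h**3 / 6 * y3

print(taylor_step(0.0, 1.0, 0.1))        # ~0.9137 with three terms; more terms refine this
```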


Exercise 1.38. From the Taylor series for y(x), find y(0.1) correct to four decimal places, if y′ = x − y^2, y(0) = 1.

Exercise 1.39. Find by the Taylor series method the value of y at x = 0.1 and x = 0.2 correct to 5 decimal
places, given dy/dx = x^2 y − 1, y(0) = 1.

Exercise 1.40. Find y at x = 0.2 by the Taylor series method, given y′ = log(xy), y(1) = 2.

1.9.2. Runge-Kutta method


The fourth-order Runge-Kutta method is the most commonly used method to solve
dy/dx = f(x, y), y(x_0) = y_0. The solution at x_1 = x_0 + h is y(x_1) = y_0 + k, where

k = (1/6)(k_1 + 2k_2 + 2k_3 + k_4),
k_1 = h f(x_0, y_0),
k_2 = h f(x_0 + h/2, y_0 + k_1/2),
k_3 = h f(x_0 + h/2, y_0 + k_2/2),
k_4 = h f(x_0 + h, y_0 + k_3).
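
A minimal Python sketch of one fourth-order Runge-Kutta step (the step size in the example is an illustrative choice):

```python
# One RK4 step: y(x0 + h) ~ y0 + (k1 + 2*k2 + 2*k3 + k4)/6
def rk4_step(f, x0, y0, h):
    k1 = h * f(x0, y0)
    k2 = h * f(x0 + h / 2, y0 + k1 / 2)
    k3 = h * f(x0 + h / 2, y0 + k2 / 2)
    k4 = h * f(x0 + h, y0 + k3)
    return y0 + (k1 + 2 * k2 + 2 * k3 + k4) / 6

# Example (Exercise 1.41): dy/dx = x + y, y(0) = 1; advance to x = 0.2 in a single step
f = lambda x, y: x + y
print(rk4_step(f, 0.0, 1.0, 0.2))        # ~1.2428 (exact solution: y = 2e^x - x - 1)
```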

Exercise 1.41. Apply the Runge-Kutta method to find an approximate value of y when x = 0.2, given that
dy/dx = x + y, y(0) = 1.

Exercise 1.42. Given that dy/dx = (y^2 − 2x)/(y^2 + x) and y = 1 at x = 0, find y for x = 0.1, 0.2 using the Runge-Kutta
method.

Exercise 1.43. Use the Runge-Kutta method to find y at x = 1.1, given dy/dx = (3x + y)/(x + 2y) and y(1) = 1.

****END****

