APPENDIX D

NUMERICAL TECHNIQUES

This appendix presents numerical techniques that can be employed to solve design problems that would be difficult to handle analytically. These methods can readily be incorporated into computer programs to obtain results for design problems.

SIMPSON’S RULE FOR AREA UNDER THE CURVE

Simpson’s rule is a numerical integration technique that is widely used to calculate the area under a curve. It is simple and more accurate than the trapezoidal rule. Simpson’s 1/3 rule is based on quadratic polynomial interpolation.


[Figure: a section of a curve with ordinates y1, y2, y3 erected at equal spacing h along the x-axis, bounding the area P′PQQ′.]

The diagram shows a section of a curve with three ordinates erected to it at equally spaced intervals along the x-axis. Simpson’s rule states that the area P′PQQ′ is given approximately by the formula

Area = (h/3)(y1 + 4y2 + y3)

If we reduce the step size h, the result becomes more accurate. The interval over which the integral is taken is divided into a larger number of equal subintervals, as shown.


[Figure: a curve subdivided at points P, R, S, T, Q with ordinates y1 … y9 erected at equal spacing h along the x-axis, from P′ to Q′.]

We will divide the total area into four sections, namely P′PRR′, R′RSS′, S′STT′ and T′TQQ′. We shall write down the expression for each area and sum them to obtain the total area P′PQQ′.

Area of P′PRR′ = (h/3)(y1 + 4y2 + y3)

Area of R′RSS′ = (h/3)(y3 + 4y4 + y5)

Area of S′STT′ = (h/3)(y5 + 4y6 + y7)

Area of T′TQQ′ = (h/3)(y7 + 4y8 + y9)

The total area is the sum of the areas P′PRR′, R′RSS′, S′STT′ and T′TQQ′:

Total area = (h/3){y1 + 4y2 + y3 + y3 + 4y4 + y5 + y5 + 4y6 + y7 + y7 + 4y8 + y9}

= (h/3){y1 + 4(y2 + y4 + y6 + y8) + 2(y3 + y5 + y7) + y9}

Simpson’s 1/3 rule, obtained by integrating a quadratic over each successive pair of ∆x intervals (panels) of uniform width, is

Area = I = ∫_a^b f(x) dx = (h/3)(y1 + 4y2 + 2y3 + 4y4 + 2y5 + ... + 2y_{n−1} + 4y_n + y_{n+1}) + E

= width × average height

Simpson’s 3/8 Rule

Simpson’s 3/8 rule is derived by integrating a third-order polynomial interpolation formula. For a domain (a, b) divided into intervals in groups of three, it is expressed as

Area = I = ∫_a^b f(x) dx = (3h/8)(y1 + 3y2 + 3y3 + 2y4 + 3y5 + 3y6 + ... + 2y_{n−2} + 3y_{n−1} + 3y_n + y_{n+1}) + E

= width × average height

A computer program, SIMPSON, has been developed to determine the area under the curve. Simpson’s rule is also easy to apply in a Microsoft Excel spreadsheet: enter the x-values in one column and the y-values in the next column, then add a further column containing the y-values multiplied by their appropriate constants (i.e., by 4 or 2, except the initial and final values). Summing this last column and multiplying the sum by ∆x/3 gives the value of the integral.
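The spreadsheet procedure above can be sketched in a few lines of code. The following Python functions (illustrative names, not the SIMPSON program itself) implement the composite 1/3 and 3/8 rules:

```python
def simpson_13(y, h):
    """Composite Simpson's 1/3 rule for ordinates y[0..n] at spacing h.
    The number of intervals n must be even."""
    n = len(y) - 1
    if n % 2 != 0:
        raise ValueError("1/3 rule needs an even number of intervals")
    # end ordinates weight 1, odd-position ordinates weight 4, the rest weight 2
    return (h / 3) * (y[0] + y[-1] + 4 * sum(y[1:-1:2]) + 2 * sum(y[2:-1:2]))

def simpson_38(y, h):
    """Composite Simpson's 3/8 rule; n must be a multiple of three."""
    n = len(y) - 1
    if n % 3 != 0:
        raise ValueError("3/8 rule needs a multiple of three intervals")
    s = y[0] + y[-1]
    for i in range(1, n):
        s += (2 if i % 3 == 0 else 3) * y[i]   # weight 2 at panel joins, else 3
    return (3 * h / 8) * s

# Example: integrate f(x) = x^2 over [0, 2] with h = 0.5 (exact value 8/3)
area = simpson_13([x * x for x in (0.0, 0.5, 1.0, 1.5, 2.0)], 0.5)
```

Since the 1/3 rule is exact for quadratics, this example reproduces the analytical value to rounding error.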

NONLINEAR EQUATIONS

The Newton-Raphson iterative method

The Newton-Raphson method is a process for determining a real root of an equation f(x) = 0, given just one point close to the desired root. If we let x0 represent the known approximate value of the root of f(x) = 0, and h be the difference between the true value α and the approximate value, we have

α = x0 + h

Expanding f in a Taylor series about x0,

f(x) = f(x0) + hf′(x0) + (h²/2!)f″(x0) + ... + (h^n/n!)f^(n)(x0)

and retaining a second-order remainder gives

f(α) = f(x0 + h) = f(x0) + hf′(x0) + (h²/2!)f″(ξ)

where ξ = x0 + θh, 0 < θ < 1, lies between α and x0. Ignoring the remainder term and writing f(α) = 0, we have

f(x0) + hf′(x0) ≈ 0

so that

h ≈ −f(x0)/f′(x0)

Therefore, a better estimate of the root than x0 is

x1 = x0 − f(x0)/f′(x0)

Better approximations may be obtained by repetition (iteration) of the process. We may write this as

x_{n+1} = x_n − f(x_n)/f′(x_n)

[Figure: Newton-Raphson method — the tangent to y = f(x) at x0 cuts the x-axis at x1, and successive tangents yield x2, x3, … converging on the root.]

Each iteration provides the point at which the tangent at the current point cuts the x-axis, as shown in the diagram above. The equation of the tangent at the point (x_n, f(x_n)) is

y − f(x_n) = f′(x_n)(x − x_n)

Therefore the point (x_{n+1}, 0) corresponds to

−f(x_n) = f′(x_n)(x_{n+1} − x_n)

which gives

x_{n+1} = x_n − f(x_n)/f′(x_n)

A computer program, PROG51.FOR, is used to solve a nonlinear equation by Newton-Raphson iteration in Chapter 5 of the book.
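The iteration x_{n+1} = x_n − f(x_n)/f′(x_n) is straightforward to implement. A minimal Python sketch (PROG51.FOR itself is not reproduced here; the function name and tolerances are illustrative):

```python
def newton_raphson(f, fprime, x0, tol=1e-10, max_iter=50):
    """Iterate x_{n+1} = x_n - f(x_n)/f'(x_n) until |f(x)| < tol."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        x -= fx / fprime(x)   # Newton-Raphson update
    return x

# Example: root of f(x) = x^2 - 2 starting near the root, x0 = 1.5
root = newton_raphson(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.5)
```

Starting close to the root, convergence is quadratic; a poor starting point may cause the iteration to diverge.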

SOLUTION OF NONLINEAR EQUATIONS

Consider a set of N nonlinear equations of the form

F(x, y, ...) = 0

G(x, y, ...) = 0    (D-1)

where x, y, ... are the roots of the N equations. These equations cannot, in general, be solved explicitly for the roots. If we consider some point (x1, y1) near the root, within the region of definition of the functions F and G, we can expand both functions in an N-dimensional Taylor series about the point (x1, y1) as

F(x, y, ...) = 0 = F(x1, y1, ...) + (∂F/∂x)0 (x − x1) + (∂F/∂y)0 (y − y1) + ...

G(x, y, ...) = 0 = G(x1, y1, ...) + (∂G/∂x)0 (x − x1) + (∂G/∂y)0 (y − y1) + ...    (D-2)

Truncating the series after the first-order derivatives and rewriting in matrix form yields


[ (∂F/∂x)0  (∂F/∂y)0  ... ] [ x − x1 ]   [ F − F0 ]
[ (∂G/∂x)0  (∂G/∂y)0  ... ] [ y − y1 ] = [ G − G0 ]    (D-3)
[     :         :          ] [    :    ]   [    :    ]
We can solve for the roots x, y, ..., which give F = G = ... = 0, as

[ x ]   [ x1 ]   [ (∂F/∂x)0  (∂F/∂y)0  ... ]⁻¹ [ F0 ]
[ y ] = [ y1 ] − [ (∂G/∂x)0  (∂G/∂y)0  ... ]    [ G0 ]    (D-4)
[ : ]   [ :  ]   [     :         :          ]    [ :  ]
For N = 1, equation (D-4) becomes

x = x1 − F0/(∂F/∂x)0    (D-5)

This is the Newton-Raphson method for finding the root of an equation.


When N = 2,

x = x1 − [F0 (∂G/∂y)0 − G0 (∂F/∂y)0] / [(∂F/∂x)0 (∂G/∂y)0 − (∂F/∂y)0 (∂G/∂x)0]

y = y1 − [G0 (∂F/∂x)0 − F0 (∂G/∂x)0] / [(∂F/∂x)0 (∂G/∂y)0 − (∂F/∂y)0 (∂G/∂x)0]    (D-6)

Equation (D-6) is a two-dimensional generalization of Newton’s method. This technique is often employed to solve large sets of nonlinear algebraic equations. Care must be taken to choose an initial guess (x1, y1) quite close to the final roots, as the algorithm may otherwise diverge. A computer program, PROG52.FOR, is used to solve nonlinear equations in Chapter 5 of the book.
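Equation (D-6) can be coded directly for N = 2. A Python sketch (the function name and the example system are illustrative, not taken from PROG52.FOR):

```python
def newton_2d(F, G, Fx, Fy, Gx, Gy, x, y, tol=1e-10, max_iter=50):
    """Two-dimensional Newton iteration per equation (D-6): the partial
    derivatives are supplied as functions and the 2x2 Jacobian
    determinant is formed explicitly."""
    for _ in range(max_iter):
        f0, g0 = F(x, y), G(x, y)
        if abs(f0) < tol and abs(g0) < tol:
            break
        det = Fx(x, y) * Gy(x, y) - Fy(x, y) * Gx(x, y)
        x, y = (x - (f0 * Gy(x, y) - g0 * Fy(x, y)) / det,
                y - (g0 * Fx(x, y) - f0 * Gx(x, y)) / det)
    return x, y

# Example: intersection of the circle x^2 + y^2 = 4 with the line y = x,
# i.e. F = x^2 + y^2 - 4 = 0 and G = x - y = 0, starting near (1, 1).
x, y = newton_2d(lambda x, y: x * x + y * y - 4.0,
                 lambda x, y: x - y,
                 lambda x, y: 2.0 * x, lambda x, y: 2.0 * y,
                 lambda x, y: 1.0, lambda x, y: -1.0,
                 1.0, 1.0)
```

The iteration converges to x = y = √2; as noted above, a starting point far from the root may cause divergence.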

SOLUTION OF SIMULTANEOUS FIRST-ORDER ORDINARY DIFFERENTIAL EQUATIONS

Analytical solutions to complex kinetic reactions in reactor systems are time consuming and often intractable. The designer must resort to numerical techniques, with the aid of a computer, for his or her solutions. The reactions taking place in batch and piston-flow reactors involve a set of simultaneous, first-order, ordinary differential equations. Several numerical methods have been used for solving such sets of equations, and the most popular is the fourth-order Runge-Kutta method. The Runge-Kutta method is a powerful integration technique that can easily be implemented on a personal computer. Its only drawback is instability if the step size is too large. In a set of equations, the Runge-Kutta algorithm uses the same step size for each member of the set. This causes practical problems if the set is stiff, that is, if some members of the set have characteristic times much smaller than others. An example is free-radical kinetics, where reaction rates may differ by three orders of magnitude.

In general, a system of n first-order equations will be of the form

dy_i/dx = f_i(x, y0, y1, y2, ..., y_{n−1}),    i = 0, 1, 2, ..., n−1

with n initial conditions y_i(x0) = A_i, i = 0, 1, 2, ..., n−1.

Consider the system of two equations:

dy/dx = f(x, y, z),    y(x0) = y0

dz/dx = g(x, y, z),    z(x0) = z0

We may advance the solution of y and z to new values at x1 = x0 + h using any of the one-step or Runge-Kutta methods. In general, our solutions will be advanced using expressions of the form

y(x1) = y(x0) + K

z(x1) = z(x0) + L

where the nature of K and L depends on the method being applied. For the fourth-order Runge-Kutta method,

K = (K1 + 2K2 + 2K3 + K4)/6

and

L = (L1 + 2L2 + 2L3 + L4)/6

where

K1 = h f(x0, y0, z0)
L1 = h g(x0, y0, z0)

K2 = h f(x0 + h/2, y0 + K1/2, z0 + L1/2)
L2 = h g(x0 + h/2, y0 + K1/2, z0 + L1/2)

K3 = h f(x0 + h/2, y0 + K2/2, z0 + L2/2)
L3 = h g(x0 + h/2, y0 + K2/2, z0 + L2/2)

K4 = h f(x0 + h, y0 + K3, z0 + L3)
L4 = h g(x0 + h, y0 + K3, z0 + L3)
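The K and L formulas above translate directly into code. A Python sketch of one fourth-order Runge-Kutta step for the two-equation system (function names are illustrative):

```python
import math

def rk4_step(f, g, x0, y0, z0, h):
    """One fourth-order Runge-Kutta step for dy/dx = f(x,y,z), dz/dx = g(x,y,z)."""
    K1 = h * f(x0, y0, z0)
    L1 = h * g(x0, y0, z0)
    K2 = h * f(x0 + h/2, y0 + K1/2, z0 + L1/2)
    L2 = h * g(x0 + h/2, y0 + K1/2, z0 + L1/2)
    K3 = h * f(x0 + h/2, y0 + K2/2, z0 + L2/2)
    L3 = h * g(x0 + h/2, y0 + K2/2, z0 + L2/2)
    K4 = h * f(x0 + h, y0 + K3, z0 + L3)
    L4 = h * g(x0 + h, y0 + K3, z0 + L3)
    # weighted averages K and L advance the solution one step
    return (y0 + (K1 + 2*K2 + 2*K3 + K4) / 6,
            z0 + (L1 + 2*L2 + 2*L3 + L4) / 6)

# Example: dy/dx = z, dz/dx = -y with y(0) = 0, z(0) = 1,
# whose exact solution is y = sin(x), z = cos(x).
x, y, z, h = 0.0, 0.0, 1.0, 0.01
while x < 1.0 - 1e-12:
    y, z = rk4_step(lambda x, y, z: z, lambda x, y, z: -y, x, y, z, h)
    x += h
```

After 100 steps the numerical solution agrees with sin(1) and cos(1) to well within the fourth-order error.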

EXTENSION OF RUNGE-KUTTA METHODS

RUNGE-KUTTA-GILL METHOD

The Runge-Kutta-Gill method is one of the most widely used single-step methods for solving ordinary differential equations. For the differential equation

dy/dx = f(x, y),    y(x_n) = y_n

y_{n+1} = y_n + (1/6)[k1 + 2(1 − 1/√2)k2 + 2(1 + 1/√2)k3 + k4] + O(h⁵)

where

k1 = h f(x_i, y_i)

k2 = h f(x_i + h/2, y_i + k1/2)

k3 = h f(x_i + h/2, y_i + (−1/2 + 1/√2)k1 + (1 − 1/√2)k2)

k4 = h f(x_i + h, y_i − (1/√2)k2 + (1 + 1/√2)k3)

RUNGE-KUTTA-MERSON METHOD

The Runge-Kutta-Merson method provides a process for adjusting the step size to achieve a predetermined accuracy. For this method, five functions are evaluated at every step. The algorithm is

k1 = h f(x_i, y_i)

k2 = h f(x_i + h/3, y_i + k1/3)

k3 = h f(x_i + h/3, y_i + k1/6 + k2/6)

k4 = h f(x_i + h/2, y_i + k1/8 + 3k3/8)

k5 = h f(x_i + h, y_i + k1/2 − 3k3/2 + 2k4)

y_{n+1} = y_n + (1/6)(k1 + 4k4 + k5) + O(h⁵)

We can estimate the local error from a weighted sum of the individual estimates:

E = (1/30)(2k1 − 9k3 + 8k4 − k5)
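A Python sketch of one Merson step that returns both the advanced solution and the local error estimate E (names are illustrative; a full variable-step driver would compare E against a tolerance and adjust h accordingly):

```python
import math

def merson_step(f, xi, yi, h):
    """One Runge-Kutta-Merson step; returns (y_next, local_error_estimate)."""
    k1 = h * f(xi, yi)
    k2 = h * f(xi + h/3, yi + k1/3)
    k3 = h * f(xi + h/3, yi + k1/6 + k2/6)
    k4 = h * f(xi + h/2, yi + k1/8 + 3*k3/8)
    k5 = h * f(xi + h, yi + k1/2 - 3*k3/2 + 2*k4)
    y_next = yi + (k1 + 4*k4 + k5) / 6
    err = (2*k1 - 9*k3 + 8*k4 - k5) / 30   # local error estimate E
    return y_next, err

# Example: dy/dx = -y with y(0) = 1; exact solution y = e^(-x).
x, y, h = 0.0, 1.0, 0.1
for _ in range(10):
    y, err = merson_step(lambda x, y: -y, x, y, h)
    x += h
```

For this smooth problem the error estimate stays far below the step size, confirming that h = 0.1 is adequate.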

PARTIAL DIFFERENTIAL EQUATIONS

If two or more independent variables are involved in a differential equation, the equation is a partial differential equation (PDE). We shall consider the second-order PDE of the form

A ∂²u/∂x² + B ∂²u/∂x∂y + C ∂²u/∂y² + D(x, y, u, ∂u/∂x, ∂u/∂y) = 0    (D-7)

where the coefficients A, B, C and D are functions of x, y and u; x and y are the independent variables and u is the dependent variable.


We can classify equation (D-7) with respect to the sign of the discriminant ∆ = B² − 4AC. The equation is elliptic if ∆ < 0, hyperbolic if ∆ > 0 and parabolic if ∆ = 0.

Elliptic equation

The equation

∂²u/∂x² + ∂²u/∂y² = −u    (D-8)

and the Laplace equation

∂²u/∂x² + ∂²u/∂y² = 0    (D-9)

have coefficients A = C = 1, B = 0, so that

B² − 4AC = −4

and are therefore examples of elliptic equations.

Hyperbolic equation

The wave equation

∂²u/∂t² = (Tg/w) ∂²u/∂x²    (D-10)

is of hyperbolic type; after scaling, the coefficients are A = 1, B = 0, C = −1 and

B² − 4AC = 4

Parabolic equation

The heat conduction equation

∂u/∂t = (k/cρ) ∂²u/∂x²    (D-11)

is of parabolic type, with coefficients A = 0, B = 0, C = 1 and

B² − 4AC = 0

A method for solving the above partial differential equations is to replace the derivatives by difference quotients, that is, to convert the equation to a difference equation. We can then write the difference equation corresponding to each point at the intersections (nodes) of a grid that subdivides the region of interest, at which the function values are to be determined. Solving these equations simultaneously gives values for the function at each node that approximate the true values.

[Figure: finite-difference grid with nodes at x_{i−2}, x_{i−1}, x_i, x_{i+1}, x_{i+2} and y_{i−2}, …, y_{i+2}, with spacings ∆x and ∆y.]

Let h = ∆x be the equal spacing of the grid in the x-direction, as shown in the figure.

From the Taylor series,

f(x_n + h) = f(x_n) + f′(x_n)h + f″(x_n)h²/2 + f‴(x_n)h³/6 + f^(iv)(ξ1)h⁴/24,    x_n < ξ1 < x_n + h    (D-12)

and

f(x_n − h) = f(x_n) − f′(x_n)h + f″(x_n)h²/2 − f‴(x_n)h³/6 + f^(iv)(ξ2)h⁴/24,    x_n − h < ξ2 < x_n    (D-13)

Adding the two expansions, it follows that

f(x_n + h) + f(x_n − h) = 2f(x_n) + f″(x_n)h² + (f^(iv)(ξ)/12)h⁴

or

f″(x_n) + (f^(iv)(ξ)/12)h² = [f(x_n + h) − 2f(x_n) + f(x_n − h)]/h²,    x_n − h < ξ < x_n + h    (D-14)

Using subscript notation, we have

f″(x_n) + O(h²) = (f_{n+1} − 2f_n + f_{n−1})/h²    (D-15)

where the subscripts on f indicate the x-values at which it is evaluated. The order relation O(h²) shows that the error approaches proportionality to h² as h → 0.

Similarly, the first derivative is approximated from

f(x_n + h) − f(x_n − h) = 2f′(x_n)h + O(h³)

or

f′(x_n) ≈ [f(x_n + h) − f(x_n − h)]/(2h) = (f_{n+1} − f_{n−1})/(2h)    (D-16)

The first derivative could also be approximated by the forward or backward

difference, but would have an error of O(h ) . The central difference

approximation gives the more accurate approximation.

When f is a function of both x and y, we can obtain the second partial derivative with respect to x, ∂²u/∂x², by holding y constant and evaluating the function at the three points x_n, x_n + h and x_n − h. Correspondingly, the partial derivative ∂²u/∂y² is determined by holding x constant.

Consider the Laplace equation on a region in the xy-plane. We subdivide the region with equispaced lines parallel to the x- and y-axes, and consider the region near (x_i, y_i):

∇²u = ∂²u/∂x² + ∂²u/∂y² = 0    (D-17)

Replacing the derivatives by difference quotients that approximate them at the point (x_i, y_i), we have

∇²u(x_i, y_i) = [u(x_{i+1}, y_i) − 2u(x_i, y_i) + u(x_{i−1}, y_i)]/(∆x)²
              + [u(x_i, y_{i+1}) − 2u(x_i, y_i) + u(x_i, y_{i−1})]/(∆y)² = 0    (D-18)

or

∇²u_{i,j} = (u_{i+1,j} − 2u_{i,j} + u_{i−1,j})/(∆x)² + (u_{i,j+1} − 2u_{i,j} + u_{i,j−1})/(∆y)² = 0    (D-19)

If we let ∆x = ∆y = h, the PDE becomes

∇²u_{i,j} = (1/h²){u_{i+1,j} + u_{i−1,j} + u_{i,j+1} + u_{i,j−1} − 4u_{i,j}} = 0    (D-20)

This is known as the standard five-point formula, as five points are involved in the relationship of equation (D-20): the central point (x_i, y_i) and the points to its right, left, above and below. We can write equation (D-20) as

u_{i,j} = (1/4){u_{i+1,j} + u_{i−1,j} + u_{i,j+1} + u_{i,j−1}}    (D-21)

Instead of equation (D-21), we may also use the diagonal five-point formula

u_{i,j} = (1/4){u_{i−1,j−1} + u_{i+1,j−1} + u_{i+1,j+1} + u_{i−1,j+1}}    (D-22)

Therefore, the general procedure is to approximate the PDE by a finite-difference transformation and then to obtain the solution at the mesh points using the finite-difference approximations. Other numerical methods of solution are the implicit Crank-Nicolson method and the alternating-direction implicit (ADI) scheme of Peaceman and Rachford. Details of these methods are given in numerical analysis texts.
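The iterative use of equation (D-21) can be sketched as follows: repeatedly sweeping the grid and replacing each interior node by the average of its four neighbours (Gauss-Seidel style) converges to the finite-difference solution of the Laplace equation. The function name and grid values below are illustrative:

```python
def solve_laplace(u, tol=1e-6, max_sweeps=10000):
    """Gauss-Seidel iteration of the five-point formula (D-21): each interior
    node is repeatedly replaced by the average of its four neighbours, with
    the boundary rows and columns of u held fixed."""
    n, m = len(u), len(u[0])
    for _ in range(max_sweeps):
        change = 0.0
        for i in range(1, n - 1):
            for j in range(1, m - 1):
                new = 0.25 * (u[i + 1][j] + u[i - 1][j] + u[i][j + 1] + u[i][j - 1])
                change = max(change, abs(new - u[i][j]))
                u[i][j] = new
        if change < tol:          # stop once a full sweep changes nothing appreciably
            break
    return u

# Example: 5x5 grid for the Laplace equation, top edge held at u = 100,
# the other three edges at u = 0.
n = 5
u = [[100.0 if i == 0 else 0.0 for _ in range(n)] for i in range(n)]
u = solve_laplace(u)
```

By symmetry the centre node converges to 25, one quarter of the hot-edge value; nodes nearer the hot edge take larger values.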
