NUMERICAL SOLUTION OF ORDINARY DIFFERENTIAL EQUATION


1. TRAPEZOIDAL RULE

Theoretical Discussion:
The trapezoidal rule is one of a family of formulas for numerical integration called
Newton-Cotes formulas, of which the midpoint rule is similar to the trapezoidal rule.
Simpson's rule is another member of the same family and in general converges
faster than the trapezoidal rule for functions that are twice continuously
differentiable, though not in every specific case. However, for various classes of
rougher functions (ones with weaker smoothness conditions), the trapezoidal rule
generally converges faster than Simpson's rule.

Moreover, the trapezoidal rule tends to become extremely accurate when periodic
functions are integrated over their periods, which can be analyzed in various ways.

For non-periodic functions, however, methods with unequally spaced points, such as
Gaussian quadrature and Clenshaw-Curtis quadrature, are generally far more
accurate; Clenshaw-Curtis quadrature can be viewed as a change of variables that
expresses an arbitrary integral in terms of a periodic integral, at which point the
trapezoidal rule can be applied accurately.

Sample Problems:
1. Numerically approximate the integral ∫₀² (2 + cos(2√x)) dx by using the
trapezoidal rule with m = 1, 2, 4, and 8 subintervals.

Let f(x) = 2 + cos(2√x), so f(0) = 3.

With m = 1 (h = 2):
T1 = (2/2)[f(0) + f(2)] = 5 + cos 2√2 ≈ 4.0486

With m = 2 (h = 1):
T2 = (1/2)[f(0) + 2f(1) + f(2)] = (1/2)[5 + 2(2 + cos 2) + cos 2√2] ≈ 3.6082

With m = 4 (h = 1/2):
T4 = (1/4)[f(0) + 2f(1/2) + 2f(1) + 2f(3/2) + f(2)]
   = (1/4)[5 + 2(2 + cos √2) + 2(2 + cos 2) + 2(2 + cos √6) + cos 2√2] ≈ 3.4971

With m = 8 (h = 1/4):
T8 = (1/8)[f(0) + 2f(1/4) + 2f(1/2) + 2f(3/4) + 2f(1) + 2f(5/4) + 2f(3/2) + 2f(7/4) + f(2)]
   = (1/8)[5 + 2(2 + cos 1) + 2(2 + cos √2) + 2(2 + cos √3) + 2(2 + cos 2)
     + 2(2 + cos √5) + 2(2 + cos √6) + 2(2 + cos √7) + cos 2√2] ≈ 3.4693
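The hand computations above can be checked with a short composite-trapezoid routine (a sketch; the helper name `trapezoid` is ours, not part of the original solution):

```python
import math

def trapezoid(f, a, b, m):
    """Composite trapezoidal rule with m equal subintervals."""
    h = (b - a) / m
    # Endpoints get weight 1/2, interior nodes weight 1.
    total = 0.5 * (f(a) + f(b))
    for i in range(1, m):
        total += f(a + i * h)
    return h * total

f = lambda x: 2.0 + math.cos(2.0 * math.sqrt(x))

for m in (1, 2, 4, 8):
    print(f"T{m} = {trapezoid(f, 0.0, 2.0, m):.4f}")
```

Running this reproduces the four approximations T1 ≈ 4.0486, T2 ≈ 3.6082, T4 ≈ 3.4971, T8 ≈ 3.4693 and shows the expected convergence as m doubles.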

2. Using n = 5, approximate the integral ∫₀¹ √(x² + 1) dx. Use the trapezoidal rule.
a = 0 and b = 1
h = (b - a)/n = (1 - 0)/5 = 0.2
x0 = a = 0:         f(x0) = √(0² + 1) = 1
x1 = a + h = 0.2:   f(x1) = √(0.2² + 1) = 1.0198039
x2 = a + 2h = 0.4:  f(x2) = √(0.4² + 1) = 1.0770330
x3 = a + 3h = 0.6:  f(x3) = √(0.6² + 1) = 1.1661904
x4 = a + 4h = 0.8:  f(x4) = √(0.8² + 1) = 1.2806248
x5 = a + 5h = 1.0:  f(x5) = √(1² + 1) = 1.4142136




Then,
Integral ≈ h [(1/2)f(x0) + f(x1) + f(x2) + f(x3) + f(x4) + (1/2)f(x5)]
         = 0.2 [(1/2)(1) + 1.0198039 + 1.0770330 + 1.1661904 + 1.2806248 + (1/2)(1.4142136)]
         = 1.150
Therefore:
∫₀¹ √(x² + 1) dx ≈ 1.150
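As a quick check, the tabulated values can be summed directly (a sketch; the variable names are ours):

```python
h = 0.2
# f(x) = sqrt(x^2 + 1) evaluated at x = 0, 0.2, 0.4, 0.6, 0.8, 1.0
fx = [1.0, 1.0198039, 1.0770330, 1.1661904, 1.2806248, 1.4142136]

# Trapezoidal rule: half weight on the two endpoints, full weight inside.
integral = h * (0.5 * fx[0] + sum(fx[1:5]) + 0.5 * fx[5])
print(round(integral, 4))  # 1.1502
```

Rounded to three decimals this is the 1.150 obtained above.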

Applications:
In applications such as remote sensing, electron microscopy, satellite imaging, and
biomedical image processing, filters are widely used to reduce noise. Adaptive
filters are those whose behavior changes based on the statistical characteristics of
the image inside the filter region defined by a rectangular window.


2. EULER'S METHOD

Theoretical Discussion:
In mathematics and computational science, the Euler method is a first-order
numerical procedure for solving ordinary differential equations (ODEs) with a given
initial value. It is the most basic explicit method for the numerical integration of
ordinary differential equations and is the simplest Runge-Kutta method. The Euler
method is named after Leonhard Euler, who treated it in his book Institutionum
calculi integralis (published 1768-70).
The Euler method is a first-order method, which means that the local error (error
per step) is proportional to the square of the step size, and the global error (error at
a given time) is proportional to the step size. It also suffers from stability problems.
For these reasons, the Euler method is not often used in practice. It serves as the
basis for constructing more complicated methods.

Sample Problems:
1. Solve the initial value problem, finding a value for the solution at x = 1, and
using steps of size h = 0.25.
y' = x + 2y
y(0) = 0
f(x, y) = x + 2y
x0 = 0 ; y0 = 0
n = 0; x1 = xo + h
x1 = 0 + 0.25
x1 = 0.25
n = 0; y1 = yo + h f(xo, yo)
y1 = yo + h (xo + 2yo)
y1 = 0 + 0.25 (0 + 2*0)
y1 = 0

x1 = 0.25 ; y1 = 0
n=1; x2 = x1 + h
x2 = 0.25 + 0.25
x2 = 0.5
n = 1; y2 = y1 + h f(x1, y1)
y2 = y1 + h (x1 + 2y1)
y2 = 0 + 0.25 (0.25 + 2*0)
y2 = 0.0625
x2 = 0.5 ; y2 = 0.0625
n = 2; x3 = x2 + h
x3 = 0.5 + 0.25
x3 = 0.75
n = 2; y3 = y2 + h f(x2, y2)
y3 = y2 + h (x2 + 2y2)
y3 = 0.0625 + 0.25 (0.5 + 2*0.0625)
y3 = 0.21875
x3 = 0.75 ; y3 = 0.21875
n = 3; x4 = x3 + h
x4 = 0.75 + 0.25
x4 = 1
n = 3; y4 = y3 + h f(x3, y3)
y4 = y3 + h (x3 + 2y3)
y4 = 0.21875 + 0.25 (0.75 + 2*0.21875)
y4 = 0.515625
x4 = 1 ; y4 = 0.515625





Tabular form:
n xn yn
0 0 0.00000
1 0.25 0.00000
2 0.50 0.062500
3 0.75 0.218750
4 1.00 0.515625
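The recurrence y(n+1) = y(n) + h f(x(n), y(n)) used above can be coded directly as a check (a sketch; the helper name `euler` is ours):

```python
def euler(f, x0, y0, h, n):
    """Explicit Euler: y_{k+1} = y_k + h * f(x_k, y_k)."""
    x, y = x0, y0
    for _ in range(n):
        y = y + h * f(x, y)
        x = x + h
    return x, y

# y' = x + 2y, y(0) = 0, step h = 0.25, four steps to reach x = 1
x4, y4 = euler(lambda x, y: x + 2.0 * y, 0.0, 0.0, 0.25, 4)
print(x4, y4)  # 1.0 0.515625
```

The output matches the last row of the table.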

The example below illustrates these calculations for the initial value problem
dy/dt = 2t + y, y(1) = 0 on the interval [1, 3] using n = 10 steps. First notice that
h = (3 - 1)/10 = 0.2.
-------------------------------------------------------------------
Step ti - 1 ti wi - 1 f(ti - 1, wi - 1) wi
-------------------------------------------------------------------
1 1.0 1.2 .0000 2.0000 .4000
2 1.2 1.4 .4000 2.8000 .9600
3 1.4 1.6 .9600 3.7600 1.7120
4 1.6 1.8 1.7120 4.9120 2.6944
5 1.8 2.0 2.6944 6.2944 3.9533
6 2.0 2.2 3.9533 7.9533 5.5439
7 2.2 2.4 5.5439 9.9439 7.5327
8 2.4 2.6 7.5327 12.3327 9.9993
9 2.6 2.8 9.9993 15.1993 13.0391
10 2.8 3.0 13.0391 18.6391 16.7669
-------------------------------------------------------------------
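The table can be regenerated with the same Euler loop (a sketch; `euler_table` is our own helper name):

```python
def euler_table(f, t0, w0, h, n):
    """Explicit Euler, returning the list of (t_i, w_i) pairs."""
    rows = [(t0, w0)]
    t, w = t0, w0
    for i in range(1, n + 1):
        w = w + h * f(t, w)
        t = t0 + i * h  # recompute t from i to avoid accumulated drift
        rows.append((t, w))
    return rows

rows = euler_table(lambda t, w: 2.0 * t + w, 1.0, 0.0, 0.2, 10)
t10, w10 = rows[-1]
print(round(w10, 4))  # 16.7669
```

The final value agrees with w10 = 16.7669 in the last row of the table.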


Applications:
In electronic engineering and other fields, signals that vary periodically over time
are often described as a combination of sine and cosine functions, and these are
more conveniently expressed as the real part of exponential functions
with imaginary exponents, using Euler's formula. Also, phasor analysis of circuits
can include Euler's formula to represent the impedance of a capacitor or an
inductor.


3. PICARD'S METHOD

Theoretical Discussion:
Using the Modified Picard Method of Parker and Sochacki [1, 2], we derive a hybrid
scheme using the analytical Picard method with approximations to the differential
operators. This new method, called the Discretized Picard's Method, uses
approximations in the space dimensions to compute derivatives but uses a
continuous approximation in the time dimension. We illustrate the method using
finite difference schemes on linear and nonlinear PDEs. We derive the stability
condition for several examples and show that the stability region increases up to the
CFL condition. Finally, we demonstrate results in one and two dimensions for this
method.

Sample Problems:
1. Find the Picard approximations y1, y2, y3 to the solution of the initial value
problem y' = y, y(0) = 2.
Use y3 to estimate the value of y(0.8) and compare it with the exact solution.
Solution: Let y0 = 2. The value of y1 is
y1 = 2 + ∫₀ˣ 2 dt = 2 + 2x
y2 = 2 + ∫₀ˣ (2 + 2t) dt = 2 + 2x + x²
y3 = 2 + ∫₀ˣ (2 + 2t + t²) dt = 2 + 2x + x² + (1/3)x³
At x = 0.8:
y3 = 2 + 2(0.8) + (0.8)² + (1/3)(0.8)³ = 4.41

The solution of the initial-value problem, found by separation of variables, is y = 2e^x.
At x = 0.8,
y = 2e^0.8 = 4.45
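Picard iteration on polynomials amounts to integrating term by term; that can be automated exactly with rational coefficients (a sketch; the coefficient-list representation and the names `picard_step` and `poly_eval` are ours):

```python
from fractions import Fraction

def picard_step(coeffs, y0):
    """One Picard iterate for y' = y:
    y_new(x) = y0 + integral_0^x y_old(t) dt.
    coeffs[i] is the coefficient of x**i."""
    integ = [Fraction(0)] + [c / (i + 1) for i, c in enumerate(coeffs)]
    integ[0] = Fraction(y0)
    return integ

def poly_eval(coeffs, x):
    return sum(float(c) * x**i for i, c in enumerate(coeffs))

y = [Fraction(2)]          # y0(x) = 2
for _ in range(3):         # produce y1, y2, y3
    y = picard_step(y, 2)

print([str(c) for c in y])          # ['2', '2', '1', '1/3']
print(round(poly_eval(y, 0.8), 4))  # 4.4107
```

The coefficients reproduce y3 = 2 + 2x + x² + (1/3)x³, and y3(0.8) ≈ 4.41 as above.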

2. Apply Picard's method to solve the following initial value problem up to the
third approximation: y' = 2y - 2x² - 3, given that y = 2 when x = 0.
Solution:
Take y0(x) = 2. Then
y1(x) = 2 + ∫₀ˣ (2y0 - 2t² - 3) dt = 2 + ∫₀ˣ (1 - 2t²) dt
      = 2 + x - (2/3)x³
y2(x) = 2 + ∫₀ˣ (2y1 - 2t² - 3) dt = 2 + ∫₀ˣ (1 + 2t - 2t² - (4/3)t³) dt
      = 2 + x + x² - (2/3)x³ - (1/3)x⁴
y3(x) = 2 + ∫₀ˣ (2y2 - 2t² - 3) dt = 2 + ∫₀ˣ (1 + 2t - (4/3)t³ - (2/3)t⁴) dt
      = 2 + x + x² - (1/3)x⁴ - (2/15)x⁵
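The same exact-coefficient Picard iteration works here; only the right-hand side changes (a sketch; the names `integrate` and `picard_step` are ours):

```python
from fractions import Fraction

def integrate(coeffs):
    """Antiderivative with zero constant term; coeffs[i] is the coeff of x**i."""
    return [Fraction(0)] + [c / (i + 1) for i, c in enumerate(coeffs)]

def picard_step(y, y0):
    # y' = 2y - 2x^2 - 3  =>  y_new = y0 + integral_0^x (2 y(t) - 2t^2 - 3) dt
    rhs = [2 * c for c in y]
    while len(rhs) < 3:
        rhs.append(Fraction(0))
    rhs[0] -= 3   # subtract the constant 3
    rhs[2] -= 2   # subtract 2x^2
    new = integrate(rhs)
    new[0] = Fraction(y0)
    return new

y = [Fraction(2)]
for _ in range(3):
    y = picard_step(y, 2)

# Coefficients of 1, x, x^2, x^3, x^4, x^5:
print([str(c) for c in y])  # ['2', '1', '1', '0', '-1/3', '-2/15']
```

This confirms y3(x) = 2 + x + x² - (1/3)x⁴ - (2/15)x⁵.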



Applications:
One application of Picard's iterative method is the solution of the one-phase Stefan
problem, which consists of determining the temperature distribution in a given
domain and the function describing the position of the moving interface (the
freezing front). The Stefan problem is a mathematical model of thermal processes in
which a change of phase takes place, connected with heat absorption or emission.
Examples of such processes are the solidification of pure metals, the melting of ice,
the freezing of water, and the deep freezing of foodstuffs.

4. TAYLOR SERIES METHOD

Theoretical Discussion:
In mathematics, a Taylor series is a representation of a function as an infinite sum of
terms that are calculated from the values of the function's derivatives at a single
point.

The concept of a Taylor series was formally introduced by the English
mathematician Brook Taylor in 1715. If the Taylor series is centered at zero, then
that series is also called a Maclaurin series, named after the Scottish mathematician
Colin Maclaurin, who made extensive use of this special case of Taylor series in the
18th century.

It is common practice to approximate a function by using a finite number of terms of
its Taylor series. Taylor's theorem gives quantitative estimates on the error in this
approximation. Any finite number of initial terms of the Taylor series of a function is
called a Taylor polynomial. The Taylor series of a function is the limit of that
function's Taylor polynomials, provided that the limit exists. A function may not be
equal to its Taylor series, even if its Taylor series converges at every point. A
function that is equal to its Taylor series in an open interval (or a disc in the
complex plane) is known as an analytic function.

Sample Problems:
1. Solve x' = cos t - sin x + t² ; x(-1) = 3.
Recall:
x(t + h) = x(t) + h x'(t) + (1/2)h² x''(t) + (1/6)h³ x'''(t) + (1/24)h⁴ x''''(t) + ...
Differentiating the equation repeatedly gives
x''   = -sin t - x' cos x + 2t
x'''  = -cos t - x'' cos x + (x')² sin x + 2
x'''' = sin t + (x')³ cos x + 3 x' x'' sin x - x''' cos x
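The derivative formulas above can be packaged into one Taylor step of order four (a sketch; the helper names `derivs` and `taylor_step` are ours, and the printed step below uses an arbitrary small h = 0.01 for illustration):

```python
import math

def derivs(t, x):
    """x', x'', x''', x'''' for x' = cos(t) - sin(x) + t**2."""
    x1 = math.cos(t) - math.sin(x) + t * t
    x2 = -math.sin(t) - x1 * math.cos(x) + 2.0 * t
    x3 = -math.cos(t) - x2 * math.cos(x) + x1**2 * math.sin(x) + 2.0
    x4 = (math.sin(t) + x1**3 * math.cos(x)
          + 3.0 * x1 * x2 * math.sin(x) - x3 * math.cos(x))
    return x1, x2, x3, x4

def taylor_step(t, x, h):
    """One fourth-order Taylor-series step from (t, x)."""
    x1, x2, x3, x4 = derivs(t, x)
    return x + h * x1 + h**2 / 2 * x2 + h**3 / 6 * x3 + h**4 / 24 * x4

print(round(taylor_step(-1.0, 3.0, 0.01), 6))
```

Each higher derivative is the total time derivative of the one before it along the solution, which is easy to verify numerically with a small finite difference.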

2. Take f(x) = sin(x). We all know the value of sin(π/2) = 1. We also know that
f'(x) = cos(x) and cos(π/2) = 0. Similarly f''(x) = -sin(x) and -sin(π/2) = -1. In a
way, we know the value of sin(x) and all its derivatives at x = π/2. We do not
need to use any calculators, just plain differential calculus and trigonometry
would do. Can you use the Taylor series and this information to find the value of
sin(2)?
Solution
x = π/2
x + h = 2
h = 2 - x = 2 - π/2 = 0.42920
So
f(x + h) = f(x) + f'(x) h + f''(x) h²/2! + f'''(x) h³/3! + f''''(x) h⁴/4! + ...
with
x = π/2
h = 0.42920
f(x) = sin(x),      f(π/2) = sin(π/2) = 1
f'(x) = cos(x),     f'(π/2) = 0
f''(x) = -sin(x),   f''(π/2) = -1
f'''(x) = -cos(x),  f'''(π/2) = 0
f''''(x) = sin(x),  f''''(π/2) = 1
Hence
sin(2) ≈ 1 + 0(0.42920) - (0.42920)²/2! + 0(0.42920)³/3! + (0.42920)⁴/4! + ...
       = 1 + 0 - 0.092106 + 0 + 0.00141393 + ...
       ≈ 0.90931
The value of sin(2) I get from my calculator is 0.90930, which is very close to the
value I just obtained. Now you can get a better value by using more terms of the
series. In addition, you can now use the value calculated for sin(2) coupled with the
value of cos(2) (which can be calculated by a Taylor series just like this example or by
using the identity sin²x + cos²x = 1) to find the value of sin(x) at some other point. In
this way, we can find the value of sin(x) for any value from x = 0 to 2π, and then can
use the periodicity of sin(x), that is sin(x) = sin(x + 2nπ), n = 1, 2, ..., to calculate the
value of sin(x) at any other point.
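The partial sum above is short enough to compute in a few lines (a sketch; the list of derivative values at π/2 is written out by hand):

```python
import math

x0 = math.pi / 2
h = 2.0 - x0  # about 0.42920
# Derivatives of sin at pi/2 cycle through 1, 0, -1, 0, 1, ...
deriv_values = [1.0, 0.0, -1.0, 0.0, 1.0]
approx = sum(d * h**k / math.factorial(k)
             for k, d in enumerate(deriv_values))
print(round(approx, 5))  # 0.90931
```

Adding more terms of the series drives the result toward sin(2) = 0.9092974...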

Applications:
A Taylor series is a numerical method of representing a given function. This method
has application in many engineering fields. In some cases, such as heat transfer,
differential analysis results in an equation that fits the form of a Taylor series. A
Taylor series can also represent an integral if the integral of that function doesn't
exist analytically. These representations are not exact values, but calculating more
terms in the series will make the approximation more accurate.

5. MIDPOINT METHOD

Theoretical Discussion:
The midpoint method is an explicit method for approximating the solution of the
initial value problem y' = f(x,y); y(x0) = y0 at x for a given step size h. For the
midpoint method the derivative of y(x) is approximated by the symmetric
difference
y'(x) = ( y(x+h) - y(x-h) ) / 2h + O(h²).

Then the differential equation becomes
y(x+h) = y(x-h) + 2h f(x,y) + O(h³).

The approximation yn for y(x0+nh) is then given recursively by
yn+1 = yn-1 + 2h f(xn,yn)

for n = 1, 2, ... . Locally the midpoint method is a third-order method and therefore
globally a second-order method.

The midpoint method is a stable and convergent method, but it is only weakly stable:
small perturbations in the initial conditions give rise to growing oscillations.

Sample Problems:
1. Suppose y(t) satisfies the ODE
y'(t) = y - t² + 1,   y(0) = 0.5.
Use the midpoint method with h = 0.2 to estimate y(0.6).
We set f(t, y) = y - t² + 1 and use the recurrences
t(i+1/2) = t(i) + 0.1,   y(i+1/2) = y(i) + 0.1 f(t(i), y(i))
t(i+1) = t(i) + 0.2,     y(i+1) = y(i) + 0.2 f(t(i+1/2), y(i+1/2))
As usual, we initialize with t0 = 0 and y0 = y(0) = 0.5. Then

t(1/2) = 0.1   y(1/2) = 0.5 + 0.1 (0.5 - 0² + 1) = 0.6500
t1 = 0.2       y1 = 0.5 + 0.2 (0.6500 - 0.1² + 1) = 0.8280
t(3/2) = 0.3   y(3/2) = 0.8280 + 0.1 (0.8280 - 0.2² + 1) = 1.0068
t2 = 0.4       y2 = 0.8280 + 0.2 (1.0068 - 0.3² + 1) = 1.2114
t(5/2) = 0.5   y(5/2) = 1.2114 + 0.1 (1.2114 - 0.4² + 1) = 1.4165
t3 = 0.6       y3 = 1.2114 + 0.2 (1.4165 - 0.5² + 1) = 1.6447

Hence our estimate is y(0.6) ≈ y3 = 1.6447. Compare this with the actual value
y(0.6) = 1.6489.
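The half-step/full-step recurrence is a few lines of code (a sketch; the helper name `midpoint_method` is ours; this is the second-order "midpoint" Runge-Kutta variant used above):

```python
def midpoint_method(f, t0, y0, h, n):
    """RK2 midpoint: Euler trial half-step, then a full step
    using the slope evaluated at the midpoint."""
    t, y = t0, y0
    for _ in range(n):
        y_mid = y + 0.5 * h * f(t, y)
        y = y + h * f(t + 0.5 * h, y_mid)
        t = t + h
    return t, y

f = lambda t, y: y - t * t + 1.0
t3, y3 = midpoint_method(f, 0.0, 0.5, 0.2, 3)
print(round(y3, 4))  # 1.6447
```

Three steps of size 0.2 reproduce the estimate y(0.6) ≈ 1.6447.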

2. The table gives values of a continuous function. Use the midpoint rule to
estimate the average value of f on [20, 50].

x     20  25  30  35  40  45  50
f(x)  42  38  31  29  35  48  60

We were asked to find the average value of the function over the interval, which is
the integral divided by the length of the interval. Take n = 3 subintervals of width

delta x = (b - a)/n = (50 - 20)/3 = 10

Their midpoints are x = 25, 35, 45, and the table supplies f at exactly those points, so

Midpoint sum = 10 (f(25) + f(35) + f(45)) = 10 (38 + 29 + 48) = 1150

Now that we have the midpoint sum, or in other words an estimate of the integral of
the function, divide it by the interval length of 30. The answer is 1150/30 = 38.3333.
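The arithmetic can be verified in a couple of lines (a sketch; variable names are ours):

```python
xs = [20, 25, 30, 35, 40, 45, 50]
fs = [42, 38, 31, 29, 35, 48, 60]

# n = 3 subintervals of width 10; their midpoints 25, 35, 45 are in the table.
width = 10
midpoint_sum = width * (fs[xs.index(25)] + fs[xs.index(35)] + fs[xs.index(45)])
average = midpoint_sum / (50 - 20)
print(midpoint_sum, round(average, 4))  # 1150 38.3333
```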



Applications:
The midpoint formula and its use in determining elasticities, including the price
elasticity of demand. The standard method for computing the price elasticity of
demand has one major drawback. It is based on the assumption that the company is
starting from a position of a specific price and quantity sold, and is considering
a change in price. But what about the case where the company is not really
"starting" from one particular point, but instead only wants to compare the
results of two different possible prices. In other words, it does not care
whether one price is considered to be the beginning price and the other is
considered to be the ending price; it just wants to compare the difference
between two possible prices.


6. RUNGE-KUTTA METHOD

Theoretical Discussion:
There are two main reasons why Euler's method is not generally used in scientific
computing. Firstly, the truncation error per step associated with this method is far
larger than those associated with other, more advanced, methods (for a given value
of the step size h). Secondly, Euler's method is too prone to numerical instabilities.
The methods most commonly employed by scientists to integrate o.d.e.s were first
developed by the German mathematicians C.D.T. Runge and M.W. Kutta in the latter
half of the nineteenth century. The basic reasoning behind so-called Runge-Kutta
methods is outlined in the following.
The main reason that Euler's method has such a large truncation error per step is
that in evolving the solution from xn to xn+1 the method only evaluates derivatives at
the beginning of the interval: i.e., at xn. The method is, therefore,
very asymmetric with respect to the beginning and the end of the interval. We can
construct a more symmetric integration method by making an Euler-like trial step to
the midpoint of the interval, and then using the values of both x and y at the
midpoint to make the real step across the interval. To be more exact,


k1 = h f(xn, yn),   (19)

k2 = h f(xn + h/2, yn + k1/2),   (20)

yn+1 = yn + k2 + O(h³).   (21)
As indicated in the error term, this symmetrization cancels out the first-order error,
making the method second-order. In fact, the above method is generally known as
a second-order Runge-Kutta method. Euler's method can be thought of as a first-
order Runge-Kutta method.
Of course, there is no need to stop at a second-order method. By using two trial
steps per interval, it is possible to cancel out both the first and second-order error

terms, and, thereby, construct a third-order Runge-Kutta method. Likewise, three
trial steps per interval yield a fourth-order method, and so on.
The general expression for the total error, ε, associated with integrating our o.d.e.
over an x-interval of order unity using an nth-order Runge-Kutta method is
approximately

ε ~ hⁿ.   (22)


Sample Problems:
1. Solve the initial value problem
du/dx = -2u + x + 4,   u(0) = 1,
to obtain u(0.2) using Δx = 0.2 (i.e., we will march forward by just one Δx).
The third-order Runge-Kutta formulas are
u(x+Δx) = u(x) + (1/6) (K1 + 4 K2 + K3) Δx ,
K1 = f(x, u(x)) ,
K2 = f(x+Δx/2, u(x)+K1Δx/2) ,
K3 = f(x+Δx, u(x) - K1Δx + 2 K2Δx) .

In our case, f(x, u) = -2u + x + 4. At x = 0 (the initial state), and using Δx = 0.2, we
have
K1 = f(0, u(0)) = f(0, 1) = -2*1+0+4 = 2
K2 = f(0.1, u(0)+2*0.2/2) = f(0.1, 1.2) = -2*1.2+0.1+4 = 1.7
K3 = f(0.2, u(0)-2*0.2+2*1.7*0.2) = f(0.2, 1.28) = -2*1.28+0.2+4 = 1.64
Thus,
u(0.2) = u(0) + (1/6)* (2 + 4*1.7+ 1.64)* 0.2 = 1.348 .
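The single third-order step can be checked directly (a sketch; the helper name `rk3_step` is ours):

```python
def rk3_step(f, x, u, dx):
    """The third-order Runge-Kutta step used in the worked example."""
    k1 = f(x, u)
    k2 = f(x + dx / 2, u + k1 * dx / 2)
    k3 = f(x + dx, u - k1 * dx + 2 * k2 * dx)
    return u + (k1 + 4 * k2 + k3) * dx / 6

f = lambda x, u: -2.0 * u + x + 4.0
u_02 = rk3_step(f, 0.0, 1.0, 0.2)
print(round(u_02, 3))  # 1.348
```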



2. Rewrite
dy/dx + 2y = 1.3 e^(-x),   y(0) = 5
in the form dy/dx = f(x, y), y(0) = y0.
Solution
dy/dx + 2y = 1.3 e^(-x),   y(0) = 5
dy/dx = 1.3 e^(-x) - 2y,   y(0) = 5
In this case
f(x, y) = 1.3 e^(-x) - 2y
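Once the equation is in the form dy/dx = f(x, y), any Runge-Kutta scheme can be applied. As an illustration only (the original problem stops at the rewriting step), here is the classical fourth-order method marching to x = 1, compared against the exact solution y = 1.3e^(-x) + 3.7e^(-2x):

```python
import math

def rk4(f, x0, y0, h, n):
    """Classical fourth-order Runge-Kutta."""
    x, y = x0, y0
    for _ in range(n):
        k1 = f(x, y)
        k2 = f(x + h / 2, y + h * k1 / 2)
        k3 = f(x + h / 2, y + h * k2 / 2)
        k4 = f(x + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        x += h
    return y

f = lambda x, y: 1.3 * math.exp(-x) - 2.0 * y
y1 = rk4(f, 0.0, 5.0, 0.1, 10)

# Exact solution of dy/dx = 1.3 e^{-x} - 2y, y(0) = 5
exact = 1.3 * math.exp(-1.0) + 3.7 * math.exp(-2.0)
print(round(y1, 4), round(exact, 4))
```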



Applications:
Application of Runge-Kutta Numerical Methods is solving the Schrodinger Equation
for Hydrogen and Positronium Atoms. The radial Schrodinger equation for central
coulomb potential using numerical Runge-Kutta has been solved. Energy
eigenvalues for hydrogen and positronium bound systems is derived 13.6056 and -
6.803 eV, respectively. Numerical results of ground state modes of wave functions
for hydrogen and positronium R (r) and the presence probability function
|rR(r)|2has been presented. These results are in good agreement with analytical
calculations of the hydrogen atom in modern physics and quantum mechanics.
Therefore, numerical methods can be very useful and effective in solving physical
problems.




7. NEWTON-RAPHSON METHOD

Theoretical Discussion:
Newton's method, also called the Newton-Raphson method, is a root-finding
algorithm that uses the first few terms of the Taylor series of a function in the
vicinity of a suspected root. Newton's method is sometimes also known as Newton's
iteration, although in this work the latter term is reserved to the application of
Newton's method for computing square roots.

For f(x) a polynomial, Newton's method is essentially the same as Horner's method.
The Taylor series of about the point is given by


(1)
Keeping terms only to first order,
(2)
Equation (2) is the equation of the tangent line to the curve at , so is
the place where that tangent line intersects the -axis. A graph can therefore give a
good intuitive idea of why Newton's method works at a well-chosen starting point
and why it might diverge with a poorly-chosen starting point.
This expression above can be used to estimate the amount of offset needed to land
closer to the root starting from an initial guess . Setting and solving
(2) for gives


(3)
which is the first-order adjustment to the root's position. By letting ,
calculating a new , and so on, the process can be repeated until it converges to
a fixed point (which is precisely a root) using

(4)

Unfortunately, this procedure can be unstable near a horizontal asymptote or a local
extremum. However, with a good initial choice of the root's position, the algorithm
can be applied iteratively to obtain

xn+1 = xn - f(xn) / f'(xn)   (5)

for n = 1, 2, 3, .... An initial point that provides safe convergence of Newton's
method is called an approximate zero.

Sample Problems:

1. Use Newton's method to find the only real root of the equation x³ - x - 1 = 0
correct to 9 decimal places.

f(x) = x³ - x - 1
f'(x) = 3x² - 1
f(1) = -1 ; f(2) = 5
where: x0 = 1.5

xn+1 = xn - (xn³ - xn - 1) / (3xn² - 1) = (2xn³ + 1) / (3xn² - 1)

x1 = (2(1.5)³ + 1) / (3(1.5)² - 1) = 1.3478260...

n    xn               f(xn)
0    1.5              0.875
1    1.347826087...   0.100682174...
2    1.325200399...   0.002058363...
3    1.324718174...   0.000000924...
4    1.324717957...   0.000000000...
5    1.324717957...

Therefore r = 1.324717957, correctly rounded to 9 decimal places.
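The iteration is easily automated (a sketch; the helper name `newton` and the tolerance are ours):

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton-Raphson iteration x_{n+1} = x_n - f(x_n)/f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("did not converge")

root = newton(lambda x: x**3 - x - 1, lambda x: 3 * x**2 - 1, 1.5)
print(round(root, 9))  # 1.324717957
```

Starting from x0 = 1.5 the method reaches the 9-decimal answer in a handful of iterations.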


The function has a root in the interval [1, 2] since f(1) = -1 and f(2) = 5 have
opposite signs; this sign change is what justifies the starting guess x0 = 1.5 used
above.

2. Using a scientific calculator, take the value of x1 and repeat the above
calculations using it as the initial guess. The resulting answer will be x2. Again
repeat the procedure until the 9th decimal place remains unchanged.

Let's use the Newton-Raphson method
xk+1 = xk - F(xk) / F'(xk),   k = 0, 1, 2, 3, ...
to solve the quadratic equation we used in the Introduction, namely:
x² - 4x - 7 = 0 from a starting guess of x0 = 4.
In this example, the function is F(x) = x² - 4x - 7, and the first thing we do is to
differentiate this:
dF/dx = F'(x) = 2x - 4.
To find x1, we first evaluate F(x) and F'(x) at the point x0, i.e. at x = 4. We find:
F(4) = 4² - 4·4 - 7 = -7 and F'(4) = 2·4 - 4 = 4
(Here we use the dot · to indicate multiplication.)
Now we insert these values in the Newton-Raphson formula for x1:
x1 = x0 - F(x0) / F'(x0) = 4 - F(4) / F'(4) = 4 - (-7)/4 = 4 + 7/4 = 4 + 1.75 = 5.75

So x1 = 5.75. Now we use this value in the iterative formula to find x2.
First we evaluate F(x) and F'(x) at the new point x1:
F(5.75) = (5.75)² - 4·(5.75) - 7 = 33.0625 - 23 - 7 = 3.0625
and F'(5.75) = 2·(5.75) - 4 = 11.5 - 4 = 7.5
Now we insert these values in the Newton-Raphson formula for x2:
x2 = x1 - F(x1) / F'(x1) = 5.75 - F(5.75) / F'(5.75) = 5.75 - (3.0625)/(7.5) =
5.34167

So x2 = 5.34167. We now evaluate F(x) and F'(x) at this point, to calculate x3,
and so on.

We don't need to write down all the calculations, but we should make a table with
columns for the values of xk, F(xk), F'(xk) and xk+1. At the end of each row, we
copy the value of xk+1 to the start of the next row to begin the next iteration. The
table will look like this:

k   xk        F(xk)    F'(xk)   xk+1 = xk - F(xk) / F'(xk)
0   4.0       -7.0     4.0      5.75
1   5.75      3.0625   7.5      5.34167
2   5.34167
3
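The table rows can be generated mechanically (a sketch; the helper name `newton_rows` is ours):

```python
def newton_rows(F, Fprime, x0, n):
    """Generate the (xk, F(xk), F'(xk), x_{k+1}) table rows."""
    rows = []
    x = x0
    for _ in range(n):
        fx, fpx = F(x), Fprime(x)
        x_next = x - fx / fpx
        rows.append((x, fx, fpx, x_next))
        x = x_next
    return rows

rows = newton_rows(lambda x: x * x - 4 * x - 7,
                   lambda x: 2 * x - 4, 4.0, 4)
for x, fx, fpx, xn in rows:
    print(round(x, 5), round(fx, 5), round(fpx, 5), round(xn, 5))
```

The first two rows reproduce the table above, and the later rows fill in the blank entries, converging toward the exact root 2 + √11 = 5.3166248...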








Applications:
One application of the Newton-Raphson method is to a finite barrier quantum well
(FBQW) system. Quantum wells are important in semiconductor lasers because they
allow some degree of freedom in the design of the emitted wavelength through
adjustment of the energy levels within the well by careful consideration of the well
width. Many realistic models in physics require numerical methods, since these
models cannot always be solved analytically, i.e. in closed form. In this paper, a
simple model of the energy levels in a quantum well was considered, with the
adoption of the Newton-Raphson method (due to its rapid convergence) for a special
case in which one of the parameters of the transcendental equations of the finite
barrier quantum well equals four. We have been careful with the choice of initial
estimate, and obtain results for the eigenstates of this system which compare
favourably (with only marginal error) with results obtained using a graphical
approach.


8. STIFF EQUATION

Theoretical Discussion:
In mathematics, a stiff equation is a differential equation for which
certain numerical methods for solving the equation are numerically unstable, unless
the step size is taken to be extremely small. It has proved difficult to formulate a
precise definition of stiffness, but the main idea is that the equation includes some
terms that can lead to rapid variation in the solution.
When integrating a differential equation numerically, one would expect the
requisite step size to be relatively small in a region where the solution curve
displays much variation and to be relatively large where the solution curve
straightens out to approach a line with slope nearly zero. For some problems this is
not the case. Sometimes the step size is forced down to an unacceptably small level
in a region where the solution curve is very smooth. The phenomenon being
exhibited here is known as stiffness. In some cases we may have two different
problems with the same solution, yet problem one is not stiff and problem
two is stiff. Clearly the phenomenon cannot be a property of the exact solution, since
this is the same for both problems, and must be a property of the differential system
itself. It is thus appropriate to speak of stiff systems.

Sample Problems:
1. Consider the initial value problem
y'(t) = -15 y(t),   y(0) = 1.
The exact solution is y(t) = e^(-15t), with y(t) → 0 as t → ∞. Euler's method with
too large a step size produces oscillations of growing magnitude around zero.
Applying an A-stable method such as the implicit trapezoidal method instead of
Euler's method gives a much better result. The numerical results decrease
monotonically to zero, just as the exact solution does.
One of the most prominent examples of stiff ODEs is the system that describes the
chemical reaction of Robertson (with the standard rate constants):
x' = -0.04 x + 10⁴ y z
y' = 0.04 x - 10⁴ y z - 3·10⁷ y²
z' = 3·10⁷ y²
If one treats this system on a short interval, there is no problem in numerical
integration. However, if the interval is very large (10¹¹, say), then many standard
codes fail to integrate it correctly.

2. Given the IVP y'(t) = 1 - t y(t) with y(0) = 1, approximate y(1) with one step of
the backward Euler method.
First, let t0 = 0, y0 = 1, and h = 1. Thus, we write down the equation
-y1 + y0 + h f(t1, y1) = 0
and, after substituting the appropriate values, we get
-y1 + 1 + 1·f(1, y1) = -2y1 + 2 = 0.
Solving this equation yields y1 = 1, and therefore we set y1 = 1. The absolute error is
0.33.
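Because f is linear in y, the implicit equation can be solved in closed form at each step (a sketch; the helper name `backward_euler_step` is ours):

```python
def backward_euler_step(t0, y0, h):
    """One backward-Euler step for y' = 1 - t*y.
    Solve y1 = y0 + h*(1 - t1*y1) for y1; the equation is linear in y1:
    y1*(1 + h*t1) = y0 + h."""
    t1 = t0 + h
    return (y0 + h) / (1.0 + h * t1)

y1 = backward_euler_step(0.0, 1.0, 1.0)
print(y1)  # 1.0
```

With the very large step h = 1 the implicit method still produces a bounded, reasonable value, which is the point of using it on stiff problems.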

Applications:
Problems involving rapidly decaying transient solutions occur naturally in a wide
variety of applications, including the study of spring and damping systems, the
analysis of control systems, and problems in chemical kinetics. These are all
examples of a class of problems called stiff (mathematical stiffness) systems of
differential equations, due to their application in analyzing the motion of spring and
mass systems having large spring constants (physical stiffness).
