Chapter

---

Numerical Differentiation and Ordinary Differential Equations (ODEs)

Aman W
Department of Applied Physics
University of Gondar
Contents

• Numerical differentiation
• Ordinary differential equations (ODE)
– Forward difference
– Backward difference
– Central difference
Introduction

• Estimate the derivatives (slope, curvature, etc.) of a function
by using the function values at only a set of discrete points
– Ordinary differential equation (ODE)
– Partial differential equation (PDE)
• We represent the function by Taylor polynomials or Lagrange
interpolation
• Evaluate the derivatives of the interpolation polynomial at
selected (unevenly distributed) nodal points
• For this lecture, we are going to look at
– Ordinary differential equation (ODE)
Ordinary differential equations (ODE)
• A general form of a first-order differential equation can be written as

    dy(t)/dt = f(t, y)

• In your undergraduate study you have taken classical mechanics courses; some of those equations, for the position and velocity, can be written as differential equations, i.e.

    dx(t)/dt = v(t)

    dv(t)/dt = a(t)

• These equations may or may not have analytical solutions, and
ODE algorithm

• Generally one must apply numerical methods in order to solve them; here we look at how to solve these kinds of equations.
 So let us take the first-order differential equation

    dy/dt = f(t, y)

 The classic way to solve a differential equation is to start with the known initial value of the dependent variable, y0 = y(t = 0).
• The problem is: what is the value of y at time t?

    y'(t) = f(t, y(t)),   y(t0) = y0
Ordinary differential equations
• In order to solve these equations, we need:
1. a numerical method that permits an approximate solution using only arithmetic, and
2. some initial conditions.
• A first approximation to the derivative is suggested by the definition from basic calculus; remember the elementary definition of the derivative of a function f at a point x:

    df(x)/dx = lim(Δx→0) [f(x + Δx) − f(x)] / Δx ≈ [f(x + Δx) − f(x)] / Δx

• This finite-difference approximation is the basis of the Euler method.
• Common definitions of the derivative of any f(x):

    df(x)/dx = lim(Δx→0) [f(x + Δx) − f(x)] / Δx          forward difference

    df(x)/dx = lim(Δx→0) [f(x) − f(x − Δx)] / Δx          backward difference

    df(x)/dx = lim(Δx→0) [f(x + Δx) − f(x − Δx)] / (2Δx)   centered difference

• These are all correct definitions in the limit Δx → 0, but we want Δx to remain finite.
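As an illustration (a sketch, not part of the slides; sin x is chosen only because its derivative cos x is known exactly), the three definitions can be compared at finite Δx:

```python
import math

def forward_diff(f, x, dx):
    # [f(x + dx) - f(x)] / dx
    return (f(x + dx) - f(x)) / dx

def backward_diff(f, x, dx):
    # [f(x) - f(x - dx)] / dx
    return (f(x) - f(x - dx)) / dx

def centered_diff(f, x, dx):
    # [f(x + dx) - f(x - dx)] / (2 dx)
    return (f(x + dx) - f(x - dx)) / (2 * dx)

x, dx = 1.0, 0.1
exact = math.cos(x)  # true derivative of sin at x
for name, approx in [("forward", forward_diff(math.sin, x, dx)),
                     ("backward", backward_diff(math.sin, x, dx)),
                     ("centered", centered_diff(math.sin, x, dx))]:
    print(f"{name}: {approx:.6f}  (error {abs(approx - exact):.2e})")
```

Even at this coarse Δx the centered result is noticeably closer to cos(1) than either one-sided difference, which the Taylor-series analysis quantifies.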
Forward difference

• Forward difference: the difference defined above is called a forward difference.
• The derivative is clearly defined in the direction of increasing x.
• Using x and x + h (Δx = h):

    df(x)/dx = lim(Δx→0) [f(x + Δx) − f(x)] / Δx
Backward difference

• We can equivalently define a backward difference.
• The derivative is defined in the direction of decreasing x.
• Using x and x − h (Δx = h):

    df(x)/dx = lim(Δx→0) [f(x) − f(x − Δx)] / Δx
Centered difference

• Now, rather than making a single step of h forward, we form a central difference by stepping forward half a step and backward half a step:

    df(x)/dx = lim(Δx→0) [f(x + Δx) − f(x − Δx)] / (2Δx)

• How good are the FD
approximations?
• This leads us to Taylor series
Taylor Series
• Taylor series are expansions of a function f(x) over some finite distance Δx to f(x ± Δx):

    f(x + Δx) = f(x) + Δx f'(x) + (Δx²/2!) f''(x) + (Δx³/3!) f'''(x) + (Δx⁴/4!) f''''(x) + …

• What happens if we use this expression in the forward/backward difference?

    df(x)/dx = lim(Δx→0) [f(x + Δx) − f(x)] / Δx ?
Taylor Series
For the forward difference this leads to:

    [f(x + Δx) − f(x)] / Δx = (1/Δx) [Δx f'(x) + (Δx²/2!) f''(x) + (Δx³/3!) f'''(x) + …]
                            = f'(x) + (Δx/2) f''(x) + …
                            = f'(x) + O(Δx)

• The error of the first derivative using the forward formulation is of order Δx; it is the same for the backward difference.
• It should be clear that both the forward and backward differences are O(Δx) = O(h).
• Is this the case for other formulations of the derivative?
• Let's check!
Taylor Series
... with the centered formulation we get:

    [f(x + Δx/2) − f(x − Δx/2)] / Δx = (1/Δx) [Δx f'(x) + (Δx³/24) f'''(x) + …]
                                     = f'(x) + O(Δx²)

• The error of the first derivative using the centered approximation is of order Δx².
• This is an important result: it DOES matter which formulation we use. The centered scheme is more accurate!
• Remember Δt is small, so that Δt² < Δt; hence O(Δt²) is more accurate than O(Δt).
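The two error orders can be checked numerically; a small sketch (not from the slides, using sin x as a test function with known derivative cos x):

```python
import math

f, dfdx, x = math.sin, math.cos, 1.0  # test function with known derivative

errors = {}
for dx in (0.1, 0.05):
    err_fwd = abs((f(x + dx) - f(x)) / dx - dfdx(x))             # O(dx)
    err_cen = abs((f(x + dx) - f(x - dx)) / (2 * dx) - dfdx(x))  # O(dx^2)
    errors[dx] = (err_fwd, err_cen)
    print(f"dx={dx}: forward error={err_fwd:.2e}, centered error={err_cen:.2e}")

# Halving dx roughly halves the forward error but quarters the centered one.
print(errors[0.1][0] / errors[0.05][0])  # ~2
print(errors[0.1][1] / errors[0.05][1])  # ~4
```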
Euler Method: Explicit Euler Method

• Consider the forward difference, where the differentiation is with respect to time:

    dy/dt = y'(t) ≈ [y(t + Δt) − y(t)] / Δt

    y(t + Δt) ≈ y(t) + Δt f(t, y(t))

• This is the explicit Euler method, which implies

    y(t + Δt) ≈ y(t) + Δt · y'(t)

• Split time t into n slices of equal length Δt:

    t0 = 0,   ti = i·Δt,   tn = t
Evenly distributed points along the t-axis (Δt = h):

    ti-3  ti-2  ti-1  ti  ti+1  ti+2  ti+3

The distance between two neighboring points is the same, i.e. h. The same holds for evenly distributed points along the x-axis (Δx = h):

    xi-3  xi-2  xi-1  xi  xi+1  xi+2  xi+3
Euler’s Rule

• Use the derivative function f(t, y) to find an approximate value for y at a small step Δt = h forward in time; that is,

    dy/dt = f(t, y)

    y(t + h) ≈ y(t) + h f(t, y)

• Once you can do that, you can solve the ODE for all t values by just continuing to step to larger times, one small h at a time.
• Simply put, we use integration to solve differential equations.
• Euler's rule is a simple algorithm for integrating the differential equation by one step, and it is just the forward-difference algorithm for the derivative.
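The stepping procedure above can be written as one small routine. This is a sketch (the names are illustrative, not from the lecture), tested on dy/dt = y, whose exact solution is eᵗ:

```python
import math

def euler(f, y0, t0, tmax, h):
    """Integrate dy/dt = f(t, y) with the explicit (forward) Euler rule."""
    t, y = t0, y0
    ts, ys = [t], [y]
    while t < tmax - 1e-12:      # keep stepping until the final time
        y = y + h * f(t, y)      # y(t + h) = y(t) + h f(t, y)
        t = t + h
        ts.append(t)
        ys.append(y)
    return ts, ys

# dy/dt = y, y(0) = 1 has the exact solution e^t
ts, ys = euler(lambda t, y: y, 1.0, 0.0, 1.0, 0.001)
print(ys[-1], math.e)  # the Euler value approaches e as h shrinks
```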
Example
• A simple differential equation is the one that gives rise to the exponential decay law,

    dN(t)/dt = −λN(t)

• where λ is a decay constant. Numerical approximation: the change in a time interval is proportional to N, and the proportionality constant is λ:

    [N(t + Δt) − N(t)] / Δt = −λN(t)

    N(t + Δt) = N(t) − λN(t)Δt
• Recipe to solve the exponential decay law:
  – define the initial condition at t = 0
  – repeat the following until some final value of t is reached
  – use the above equation to determine the solution at t + Δt

from pylab import *

t = 0.0
dt = 0.1
L = 0.5        # decay constant lambda
N = 10000.0    # number of atoms at t = 0
ta = [t]       # list for storing time
na = [N]       # and the instantaneous number of atoms

while t < 5:
    dn = -L * N * dt
    N = N + dn
    t = t + dt
    ta.append(t)
    na.append(N)

plot(ta, na)
show()
• The explicit (forward-difference) solution can be unstable for large step sizes.
• Compare anyway for the following step sizes:
    Δt = dt = 0.01
    dt = 0.05
    dt = 0.15
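The comparison can be done directly in code; a sketch (assuming the same λ = 0.5, N₀ = 10000 and final time t = 5 as above) that measures how far each step size strays from the analytic solution N₀e^(−λt):

```python
import math

L, N0, tmax = 0.5, 10000.0, 5.0   # decay constant, initial atoms, final time

results = {}
for dt in (0.01, 0.05, 0.15):
    t, N = 0.0, N0
    while t < tmax - 1e-12:
        N = N - L * N * dt        # N(t + dt) = N(t) - lambda*N(t)*dt
        t = t + dt
    exact = N0 * math.exp(-L * t) # analytic value at the time actually reached
    results[dt] = abs(N - exact) / exact
    print(f"dt={dt}: relative error {results[dt]:.2%}")
# the error grows with the step size, even though all three runs stay stable
```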
Example
• Let us take a simple ordinary differential equation (ODE)

    dy/dx = −6y

• The solution of this differential equation is

    y = e^(−6x)

• Let us use the Euler method to solve it and plot it alongside the exact result, to be able to judge the accuracy of the numerical method.
• To solve this equation using the Euler method, we rewrite the forward Euler formula above in a different form:

    dy/dx = [y(x + Δx) − y(x)] / Δx

    [y(x + Δx) − y(x)] / Δx = −6y(x),   or

    y(x + Δx) = y(x) − 6y(x)Δx
• When we are ready, we can start coding:

from pylab import *

dx = 0.1        # step size
N = 100         # number of steps
y = 1.0         # initial condition y(0) = 1
listx = [0.0]   # list for storing x
listy = [y]     # list for storing y

for i in range(N):
    y = y - dx * 6 * y         # y(x + dx) = y(x) - dx*(6*y(x))
    listy.append(y)
    listx.append((i + 1) * dx)

plot(listx, listy)
show()
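To judge the accuracy without plotting, one can track the largest deviation from the exact solution e^(−6x); a sketch using the same Δx = 0.1:

```python
import math

dx = 0.1      # step size, as above
N = 100       # number of steps
y = 1.0       # initial condition y(0) = 1

max_err = 0.0
for i in range(N):
    y = y - dx * 6 * y                   # forward Euler update for dy/dx = -6y
    x = (i + 1) * dx
    max_err = max(max_err, abs(y - math.exp(-6 * x)))

print(f"largest deviation from e^(-6x): {max_err:.3f}")
# with dx = 0.1 the per-step factor is 1 - 0.6 = 0.4 instead of e^(-0.6) ~ 0.549,
# so the numerical curve decays visibly too fast
```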
• The approximate gradient is calculated using values of f at arguments greater than or equal to x (namely x and x + h); this algorithm is known as the forward difference method.
• Here h is the separation between points.
• Note this calculates a series of derivatives at the points f(x).
• Just how good is this estimate?

    f'(x) = [f(x + h) − f(x)] / h − (h/2) f''(ξ),   x ≤ ξ ≤ x + h

• Taylor Series expansion uses an infinite sum of terms to
represent a function at some point accurately.
Error Analysis (Assessment)
• The approximation errors in numerical differentiation decrease with
decreasing step size h, while round-off errors increase with decreasing step
size (you have to take more steps and do more calculations).
• To obtain a rough estimate of the round-off error, we observe that
differentiation essentially subtracts the value of a function at argument x
from that of the same function at argument x + h and then divides by h:

    y' ≈ [y(x + h) − y(x)] / h

• As h is made continually smaller, we eventually reach the round-off error limit, where y(x + h) and y(x) differ by just the machine precision ϵm:

    ϵro ≈ ϵm / h

• Best h values:
    h_fd ~ 5.0×10⁻⁸
    h_cd ~ 3.0×10⁻⁵
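This trade-off between truncation and round-off is easy to reproduce; a sketch (not from the slides) using the forward difference on f(x) = eˣ at x = 1, where the exact derivative is e:

```python
import math

def fwd_err(h):
    # forward-difference error for f(x) = exp(x) at x = 1 (exact f' = e)
    x = 1.0
    return abs((math.exp(x + h) - math.exp(x)) / h - math.exp(x))

for h in (1e-2, 1e-5, 1e-8, 1e-11, 1e-14):
    print(f"h={h:.0e}: error={fwd_err(h):.2e}")
# the error first shrinks as h decreases (truncation, ~ h/2 * f'')
# and then grows again (round-off, ~ eps_m/h); the minimum sits near h ~ 1e-8
```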
Example: First Derivatives
• Use forward and backward difference approximations to estimate the first derivative of

    f(x) = −0.1x⁴ − 0.15x³ − 0.5x² − 0.25x + 1.2

at x = 0.5 with (Δx = h) = 0.5 and 0.25 (exact sol. = −0.9125)
• Forward Difference

    h = 0.5:  f'(0.5) ≈ [f(1) − f(0.5)] / (1 − 0.5) = (0.2 − 0.925) / 0.5 = −1.45,   εt = 58.9%

    h = 0.25: f'(0.5) ≈ [f(0.75) − f(0.5)] / (0.75 − 0.5) = (0.63632813 − 0.925) / 0.25 = −1.155,   εt = 26.5%

• Backward Difference

    h = 0.5:  f'(0.5) ≈ [f(0.5) − f(0)] / (0.5 − 0) = (0.925 − 1.2) / 0.5 = −0.55,   εt = 39.7%

    h = 0.25: f'(0.5) ≈ [f(0.5) − f(0.25)] / (0.5 − 0.25) = (0.925 − 1.10351563) / 0.25 = −0.714,   εt = 21.7%
Example: First Derivative
• Use the central difference approximation to estimate the first derivative of

    f(x) = −0.1x⁴ − 0.15x³ − 0.5x² − 0.25x + 1.2

at x = 0.5 with (Δx = h) = 0.5 and 0.25 (exact sol. = −0.9125)
• Central Difference

    h = 0.5:  f'(0.5) ≈ [f(1) − f(0)] / (1 − 0) = (0.2 − 1.2) / 1 = −1.0,   εt = 9.6%

    h = 0.25: f'(0.5) ≈ [f(0.75) − f(0.25)] / (0.75 − 0.25) = (0.63632813 − 1.10351563) / 0.5 = −0.934,   εt = 2.4%
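The numbers in the two worked examples can be reproduced in a few lines (a sketch, coding the same polynomial and the same points):

```python
def f(x):
    # polynomial from the worked example
    return -0.1*x**4 - 0.15*x**3 - 0.5*x**2 - 0.25*x + 1.2

x, exact = 0.5, -0.9125
for h in (0.5, 0.25):
    fwd = (f(x + h) - f(x)) / h
    bwd = (f(x) - f(x - h)) / h
    cen = (f(x + h) - f(x - h)) / (2 * h)
    print(f"h={h}: forward={fwd:.4f}  backward={bwd:.4f}  central={cen:.4f}")
# h=0.5 : forward=-1.4500  backward=-0.5500  central=-1.0000
# h=0.25: forward=-1.1547  backward=-0.7141  central=-0.9344
```

At both step sizes the central difference lands closest to the exact value −0.9125, matching the slides' error percentages.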
Further example

• Consider: y'(t) = −15y(t), t ≥ 0, y(0) = 1
• Exact solution: y(t) = e^(−15t), so y(t) → 0 as t → ∞
• If we examine the forward Euler method, strong oscillatory behavior forces us to take very small steps, even though the function looks quite smooth.
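Why the steps must be so small can be seen from the update itself: each forward Euler step multiplies y by (1 − 15Δt), so unless |1 − 15Δt| ≤ 1, i.e. Δt ≤ 2/15, the numerical solution oscillates and grows. A minimal sketch (not from the slides):

```python
def euler_y15(dt, nsteps):
    # forward Euler for y' = -15 y, y(0) = 1: each step does y *= (1 - 15*dt)
    y = 1.0
    for _ in range(nsteps):
        y = y + dt * (-15.0 * y)
    return y

print(euler_y15(0.05, 40))   # dt < 2/15: the solution decays toward 0
print(euler_y15(0.20, 40))   # dt > 2/15: the factor is -2, the solution explodes
```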
• Various problems arise in numerical calculations.
• Truncation errors: the approximation of the problem gives rise to these; the solution of the differential equation is subject to them.
• Stability: the solution can oscillate around, or diverge from, the true solution even though the truncation error might be small. This arises from exponentially growing solutions of the difference equations that we use.
• Rounding errors: the floating-point number system used in the computer is only an approximation to the real numbers. Real numbers have to be rounded to the nearest representable floating-point number, giving rise to rounding errors. Also, the range of representable numbers is restricted.
