
Aman W

Department of Applied Physics

University of Gondar

Contents

Introduction

Root finding of an equation

Guess and check

Iterative algorithms

Bisection method

Newton-Raphson

Secant method

Ordinary differential equations (ODE)

Numerical differentiation

Forward & backward difference

Central difference

Introduction

Knowledge can be divided into two categories.

Declarative knowledge

Statements of fact. For example: the square root of a number x is a number y such that

y*y = x

Imperative knowledge

How-to knowledge: methods or recipes, e.g. how to compute y, the square root of x.

Can you use the declarative statement alone to find the square root of a particular instance of x?

Introduction

How can a problem be solved on a computer?

Analyze the problem to find a suitable numerical method of solution.

Derive an algorithm (a recipe) to generate the result.

Write a computer program to embody the algorithm.

Get the program debugged and running.

Solve the problem!

Root finding of an equation

An equation in the form

y = f(x)

contains three elements: an input value x, an output value y, and the rule f for computing y.

Given a function f(x), we want to determine the values of x for which f(x) = 0.

The solutions (values of x) are known as the roots of the equation f(x) = 0, or the zeroes of the function f(x).

These roots can be found numerically using different techniques (algorithms):

guess & check algorithms

iterative algorithms

Root finding of an equation

Here is a recipe for deducing the square root of a number x, attributed to Heron of Alexandria in the first century.

Start with a guess, g.

If g*g is close enough to x, stop and say that g is the answer.

Otherwise make a new guess by averaging g and x/g.

Using this new guess, repeat the process until we get close enough.
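The recipe above can be sketched directly in Python (a minimal sketch; the function name `heron_sqrt` and the tolerance `eps` are illustrative choices):

```python
def heron_sqrt(x, eps=1e-6):
    """Heron's recipe: repeatedly replace the guess g by the average of g and x/g."""
    g = x / 2.0                      # any positive starting guess works
    while abs(g * g - x) >= eps:
        g = (g + x / g) / 2.0        # new guess: average of g and x/g
    return g

print(heron_sqrt(25.0))  # converges to 5.0 within eps
```

Averaging g and x/g is exactly the Newton-Raphson update for g**2 - x, which is why the recipe converges so quickly.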

Guess & check code

We need a good way to generate guesses. If we could guess possible values for the square root, g, we should use a method (check) to decide whether our guess is true/false. Since we cannot guarantee an exact answer, we look for one that is close enough.

Start with exhaustive enumeration: take small steps to generate guesses, and check each one to see if it is close enough.

x = 25
epsilon = 0.01
step = epsilon**2
numGuess = 0
ans = 0.0
while abs(ans**2 - x) >= epsilon and ans <= x:
    ans = ans + step
    numGuess = numGuess + 1
print('numGuess = ' + str(numGuess))
if abs(ans**2 - x) >= epsilon:
    print('failed')
else:
    print(str(ans) + ' is close to the square root of ' + str(x))

The step could be any small number. If it is too small, it takes a long time to find the square root; if it is too large, it may skip/jump over the answer without getting close enough.

Take another number x which is large, for instance x = 1250, with step = epsilon**2. In general, the loop runs about x/step times to find the solution, so we need a more efficient way to deal with the square root problem.

Iterative Algorithms

All methods of finding roots are iterative procedures that require a starting point, i.e., an estimate of the root.

This estimate can be crucial; a bad starting value may fail to converge, or it may converge to the wrong root (a root different from the one sought).

There is no universal recipe for estimating the value of a root. If the equation is associated with a physical problem, then the context of the problem (physical insight) might suggest the approximate location of the root.

Otherwise, a systematic numerical search for the roots can be carried out; one such search method is described above. The roots can also be located visually by plotting the function.

The method of bisection uses the same principle as incremental search: guess and check.

If there is a root in the interval (x1, x2), then f(x1) f(x2) < 0.

The method of bisection exploits this by successively halving the interval until it becomes sufficiently small. This technique is also known as the interval halving method.

Bisection is not the fastest method available for computing roots, but it is the most reliable: once a root has been bracketed, bisection will always close in on it.

Bisection method

The bisection algorithm: if there is a root in the interval (x1, x2), then f(x1) f(x2) < 0.

It relies on the fact that, for a continuous function f, if f(x0) < 0 and f(x1) > 0, then f(x) = 0 for some x in the interval (x0, x1).

Bisection method

x3 = (x1 + x2)/2

x3 is the midpoint of the interval. If f(x2) f(x3) < 0, then the root must be in (x3, x2), and we record this by replacing the original bound x1 by x3.

Otherwise, the root lies in (x1, x3), in which case x2 is replaced by x3.

In either case, the new interval (x1, x2) is half the size of the original interval.

The bisection is repeated until the interval has been reduced to a small value ε, so that |x2 − x1| ≤ ε.

Example

Finding the root of a polynomial, f(x) = x³ − x − 2.

First, two numbers a and b have to be found such that f(a) and f(b) have opposite signs. For this function, a = 1 and b = 2 satisfy the criterion:

f(1) = −2 and f(2) = 4

Because the function is continuous, there must be a root within the interval [1, 2]. The midpoint is

c1 = (a + b)/2 = 1.5

Because f(c1) is negative, a = 1 is replaced with a = 1.5 for the next iteration, to ensure that f(a) and f(b) have opposite signs.
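The iteration in the example can be checked with a short routine; this is a sketch assuming the polynomial is f(x) = x³ − x − 2 (consistent with f(1) = −2 and f(2) = 4), and the helper name `bisect` is an illustrative choice:

```python
def bisect(f, a, b, eps=1e-6):
    """Halve the bracketing interval [a, b] until it is smaller than eps."""
    if f(a) * f(b) >= 0:
        raise ValueError("root is not bracketed: f(a) and f(b) must differ in sign")
    while (b - a) > eps:
        c = (a + b) / 2.0
        if f(a) * f(c) < 0:   # root lies in (a, c)
            b = c
        else:                 # root lies in (c, b)
            a = c
    return (a + b) / 2.0

f = lambda x: x**3 - x - 2    # assumed example polynomial: f(1) = -2, f(2) = 4
root = bisect(f, 1.0, 2.0)
print(root)                   # about 1.5214
```

Each pass halves the bracket, so reaching a width eps from an initial width of 1 takes about log2(1/eps) ≈ 20 iterations.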

Bisection

This technique simply says that rather than exhaustively trying things starting at 0, we should pick a number in the middle of the range.

The basic idea of the algorithm is that we know the square root of x lies between 0 and x:

0 … g … x

We select the middle of the range, g = (0 + x)/2.

If by chance we got the answer, we succeed (most probably not).

If g is not close enough, it may be too big or too small.

If g**2 > x, we know g is too big, so we search again in the lower half of the range.

If the new g is too small, g**2 < x, we know g is too small, so we search again in the upper half.

Iterate these steps until the requested precision is reached.

Code

x = 25
epsilon = 0.01
numGuesses = 0
low = 0.0
high = x
ans = (high + low)/2.0
while abs(ans**2 - x) >= epsilon:
    print('low = ' + str(low) + ' high = ' + str(high) + ' ans = ' + str(ans))
    numGuesses += 1
    if ans**2 < x:
        low = ans
    else:
        high = ans
    ans = (high + low)/2.0
print('numGuesses = ' + str(numGuesses))
print(str(ans) + ' is close to square root of ' + str(x))

Here epsilon sets how close we must get, numGuesses counts the number of steps, low and high bound the range, and ans is the midpoint (half) of the range.

Some important points about this algorithm: it reduces computation time, and it works well on problems that are ordered.

Exercise

Use bisection to find the root of f(x) = x³ − 10x² + 5 = 0 that lies in the interval (0.6, 0.8).

Newton-Raphson

Newton-Raphson can be used to approximate the square root of a number via the root of a polynomial; it is a general approximation algorithm to find roots. For example, to find the square root of 24, find the root of

p(x) = x² − 24

Newton-Raphson showed that if g is an approximation/guess to the root, then g − p(g)/p'(g) is a better approximation.

Let the polynomial be x² − k; its derivative is 2x.

According to Newton-Raphson, for a given guess g for a root, the better guess is

g − (g² − k)/(2g)

Newton-Raphson is an iterative root-finding algorithm which uses the derivative of the function to provide more information about where to search for the root.

This method is based on Taylor expansion, using the first two terms of the Taylor series for a function f starting at our initial guess x1:

f(x1 + h) ≈ f(x1) + h f'(x1) + O(h²)

If h is small, we can drop the h² term. So we impose the condition f(x1 + h) = 0 and solve for h:

h = −f(x1)/f'(x1)

Newton-Raphson Method

The method consists of starting with a guess x_i, then following the tangent line to where it crosses the x-axis:

f(x_i) + f'(x_i)(x_{i+1} − x_i) = 0

x_{i+1} = x_i − f(x_i)/f'(x_i)

That is, we approximate f(x) by the straight line tangent to the curve at x_i at each iteration. Thus x_{i+1} is at the intersection of the x-axis and the tangent line.

The polynomial here is guess² − y with y = 24. In the while loop we check/test how close we are; if not close enough, we use the Newton-Raphson formula to produce a new guess.

# Newton-Raphson for square root
epsilon = 0.01
y = 24.0
guess = y/2.0
while abs(guess*guess - y) >= epsilon:
    guess = guess - (((guess**2) - y)/(2*guess))
    print(guess)
print('Square root of ' + str(y) + ' is about ' + str(guess))

Compute a root of f(x) = x³ − 10x² + 5 with the Newton-Raphson method.
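A sketch of this exercise in Python; the starting point x = 0.7 (inside the bracketing interval used earlier) and the helper name `newton_raphson` are illustrative choices:

```python
def newton_raphson(f, df, x, eps=1e-9, max_iter=50):
    """Iterate x <- x - f(x)/df(x) until |f(x)| falls below eps."""
    for _ in range(max_iter):
        if abs(f(x)) < eps:
            break
        x = x - f(x) / df(x)
    return x

f  = lambda x: x**3 - 10*x**2 + 5
df = lambda x: 3*x**2 - 20*x
root = newton_raphson(f, df, 0.7)
print(root)   # about 0.7346
```

Starting close to the root matters: for a poor starting guess the tangent can send the iterate toward one of the other roots of this cubic.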

In general, there are more iterative algorithms:

Secant method (based on the same principles as the Newton method, but it approximates the derivative numerically)

Brent's method

etc.

Both iterative and guess-and-check algorithms reuse the same code over and over again: they use a looping construct to generate and check guesses.

Secant Method

A potential problem with the Newton-Raphson method is the evaluation of the derivative: there are certain functions whose derivatives may be difficult or inconvenient to evaluate.

For these cases, the derivative can be approximated by a backward finite divided difference:

f'(x_i) ≈ (f(x_{i−1}) − f(x_i)) / (x_{i−1} − x_i)

Secant Method (cont)

Substituting this approximation into the Newton-Raphson equation gives

x_{i+1} = x_i − f(x_i)(x_{i−1} − x_i) / (f(x_{i−1}) − f(x_i))

This formula does not require an analytical expression for the derivative.
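The secant update can be sketched as follows; unlike Newton-Raphson it needs two starting points instead of a derivative (the starting pair 4.0, 5.0 for the square root of 24 is an illustrative choice):

```python
def secant(f, x0, x1, eps=1e-10, max_iter=50):
    """Replace f'(x) by the finite difference through the last two iterates."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        if abs(f1) < eps:
            break
        # x_{i+1} = x_i - f(x_i) * (x_{i-1} - x_i) / (f(x_{i-1}) - f(x_i))
        x0, x1 = x1, x1 - f1 * (x0 - x1) / (f0 - f1)
    return x1

root = secant(lambda x: x**2 - 24.0, 4.0, 5.0)
print(root)   # about 4.898979, the square root of 24
```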

Ordinary differential equations (ODE)

In your undergraduate study you have taken classical mechanics courses; here we look at how to solve the resulting equations numerically.

A general form for a first-order differential equation can be written as

dy/dt = f(t, y)

Generally one must apply numerical methods in order to solve such equations. The equations of classical mechanics, for example, can be written as differential equations.

ODE algorithm

So let us take the first-order differential equation dy/dt = f(t, y) together with the known initial value of the dependent variable, y0:

y(t0) = y0

Problem: what is the value of y at time t?

Ordinary differential equations

In order to solve these equations, we need:

1. A numerical method that permits an approximate solution using only arithmetic.

2. Some initial conditions.

For the first, an approximation to the derivative is suggested by the definition from basic calculus. Remember the elementary definition of the derivative of a function f at a point x:

Ordinary differential equations

As you see in the figure, for the function f at x,

df(x)/dx = lim_{h→0} [f(x + h) − f(x)] / h

In our first equation we replace this limit by a finite step Δt. How good is this as an approximation? Let us use the Taylor expansion: the approximation is of order Δt, i.e. O(Δt). Take this to be the error term, which we write as E(Δt). In this case the lowest power that appears is Δt, so E(Δt) ~ Δt, or in big-O notation E(Δt) is O(Δt).

So the forward difference method is first order: the error is proportional to Δt.

Remember Δt is small, so that Δt² < Δt; hence an O(Δt²) method is more accurate than an O(Δt) one.

Discretizing the derivatives

The velocity can now be written as v(t) ≈ Δx/Δt, and the position can be advanced recursively.

Euler's Rule

The task is to advance the solution y by a small step Δt = h forward in time; that is, to find y(t + h) = y1. Once you can do that, you can solve the ODE for all t values by just continuing to step to larger times, one small h at a time.

Simply put, we use integration to solve differential equations. Euler's rule is a simple algorithm for integrating the differential equation by one step, and is just the forward-difference algorithm for the derivative.

Velocity from position

We can turn this into an approximation rule for y'(t) = f(t, y) by replacing the limit as h approaches 0 with a small but finite h (take the trajectory to be that of a projectile with air resistance):

y'(t) = dy/dt ≈ [y(t + h) − y(t)] / h

(Figure: forward-difference approximation, the slanted dashed line, for the numerical first derivative at time t.)

Euler Method

Explicit Euler Method. Consider the forward difference

y'(t) ≈ [y(t + Δt) − y(t)] / Δt

which, with y'(t) = f(t, y(t)), implies

y(t + Δt) ≈ y(t) + Δt f(t, y(t))

Euler Method

Discretize time as t0 = 0, ti = i Δt, with Δt = t/n. The explicit Euler method formula is then

y_{i+1} = y_i + Δt f(t_i, y_i)
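The stepping formula can be sketched as a reusable function (the name `euler_explicit` and the test problem y' = −y are illustrative choices):

```python
def euler_explicit(f, y0, t0, t_end, n):
    """Advance y' = f(t, y) from t0 to t_end in n explicit Euler steps."""
    dt = (t_end - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y = y + dt * f(t, y)   # y(t + dt) ~ y(t) + dt * f(t, y(t))
        t = t + dt
    return y

# exponential decay y' = -y with y(0) = 1; the exact value at t = 1 is exp(-1) ~ 0.3679
print(euler_explicit(lambda t, y: -y, 1.0, 0.0, 1.0, 1000))
```

Because the method is first order, doubling n roughly halves the error at the final time.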

Take a function related to that parabola and differentiate it exactly for comparison. The numerical scheme is not an especially good algorithm, but it is useful.

Because the approximate gradient is calculated using values of t between t and t + h, this algorithm is known as the forward difference method, where h is the separation between points. Note that this calculates a series of derivatives at the sample points of f(x).

Just how good is this estimate? Recall that a Taylor series can represent a function at some point accurately.

Example

The A simple differential equation is the equation that gives

rise to the exponential decay law,

Numerical analysis: approximation.

Change in a time interval is proportional to y

Recipe to solve exponential # initial conditions

decay law: y=100.0

Define the initial condition at lamb=1.0

t=0

Repeat the following until deltaT=0.02

some final value for t is # use that equation () for some

reached steps

Use above equation () to >>> for i in

determine the solution at range(3*lamb/deltaT):

t + t print i*deltaT, y

y=y-lamb*y*deltaT

The explicit (forward difference) solution can be unstable for large step sizes. Compare for dt = 0.01, dt = 0.05, dt = 0.15.

Euler Method

Implicit Euler Method. Consider the backward difference

y'(t) ≈ [y(t) − y(t − Δt)] / Δt

which implies

y(t + Δt) ≈ y(t) + Δt f(t + Δt, y(t + Δt))

Euler Method

The implicit formula must be solved for the value of y(t_{i+1}), which means extra computation. This is sometimes worthwhile because the implicit method is more stable.

Example

Consider: y'(t) = −15 y(t), t ≥ 0, y(0) = 1.

Exact solution: y(t) = e^{−15t}, so y(t) → 0 as t → ∞.

If we examine the forward Euler method, strong oscillatory behaviour forces us to take very small steps even though the function looks quite smooth.
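The contrast can be checked numerically; a sketch comparing forward and backward Euler on y' = −15y with a deliberately large step (the step Δt = 0.25 and the count n = 8 are illustrative choices):

```python
lam, dt, n = 15.0, 0.25, 8   # dt is far above forward Euler's stability limit 2/15

y_fwd = 1.0
y_bwd = 1.0
for _ in range(n):
    y_fwd = (1.0 - dt * lam) * y_fwd   # multiplies by -2.75 each step: sign flips, |y| grows
    y_bwd = y_bwd / (1.0 + dt * lam)   # multiplies by 1/4.75 each step: decays monotonically

print(y_fwd)  # blows up
print(y_bwd)  # small and positive, qualitatively like exp(-15t)
```

Forward Euler is stable here only when |1 − Δt·λ| < 1, i.e. Δt < 2/15; backward Euler's factor 1/(1 + Δt·λ) is below 1 for every positive step.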

Forwards vs Backwards difference

The difference defined above is called a forward difference: the derivative is defined in the direction of increasing t, using t and t + h.

We can equivalently define a backwards difference: the derivative is defined in the direction of decreasing t, using t and t − h.

f(x − h) = f(x) − h f'(x) + (h²/2!) f''(x) − (h³/3!) f'''(x) + ...

f'_b(x) = [f(x) − f(x − h)] / h = f'(x) − (h/2) f''(x) + (h²/6) f'''(x) − ...

Improving the error

It should be clear that both the forward and backward differences are O(h). However, by averaging them we can eliminate the term that is O(h), thereby defining the centered difference:

f'_c(x) = ½ [f'_f(x) + f'_b(x)]
        = ½ { [f(x + h) − f(x)]/h + [f(x) − f(x − h)]/h }
        = ½ { [f'(x) + (h/2) f''(x) + ...] + [f'(x) − (h/2) f''(x) + ...] }
        = f'(x) + (h²/6) f'''(x) + ...

so that

f'_c(x) = [f(x + h) − f(x − h)] / (2h) = f'(x) + O(h²)

Note that all odd-ordered terms cancel too. Further, the central difference is symmetric about x; the forward and backward differences clearly are not.
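The orders of accuracy can be checked numerically, for instance on f(x) = sin x at x = 1, whose exact derivative is cos 1 (a sketch; the step h = 1e-4 is an illustrative choice):

```python
import math

f, x, h = math.sin, 1.0, 1e-4
exact = math.cos(x)   # known derivative of sin

err_fwd = abs((f(x + h) - f(x)) / h - exact)            # O(h)
err_bwd = abs((f(x) - f(x - h)) / h - exact)            # O(h)
err_ctr = abs((f(x + h) - f(x - h)) / (2 * h) - exact)  # O(h^2)

print(err_fwd, err_bwd, err_ctr)  # the central error is far smaller
```

With h = 1e-4 the one-sided errors are around h/2 · |f''| ≈ 4e-5, while the central error is around h²/6 · |f'''| ≈ 1e-9.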

Central difference

Rather than making a single step of h forward, we can form a central difference by stepping forward half a step and backward half a step:

y'(t) ≈ [y(t + h/2) − y(t − h/2)] / h

We obtain the error estimate for this central-difference algorithm by substituting the Taylor series for y(t ± h/2): the leading terms reproduce the exact derivative independent of h, and the remaining error is O(h²).


Error Analysis (Assessment)

The approximation errors in numerical differentiation decrease with decreasing step size h, while round-off errors increase with decreasing step size (you have to take more steps and do more calculations).

To obtain a rough estimate of the round-off error, observe that differentiation essentially subtracts the value of a function at argument x from that of the same function at argument x + h and then divides by h. In the limit where y(t + h) and y(t) differ by just the machine precision εm, the relative round-off error is about εm/h; the best h balances this against the approximation error.
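The trade-off can be seen directly by shrinking h past its optimum; a sketch using the forward difference of eˣ at x = 0, whose exact derivative is 1 (the particular h values are illustrative choices):

```python
import math

errors = {}
for h in [1e-1, 1e-4, 1e-8, 1e-12]:
    approx = (math.exp(0.0 + h) - math.exp(0.0)) / h
    errors[h] = abs(approx - 1.0)
    print(h, errors[h])
# the error first shrinks with h (truncation), then grows again (round-off)
```

Balancing the truncation error ~h/2 against the round-off error ~εm/h suggests an optimum near h ~ sqrt(εm), about 1e-8 in double precision.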

Consider again the simple differential equation that gives rise to the exponential decay law, now solved with the implicit Euler method (backward difference).

Note: this implicit solution is stable for large step sizes. Compare for dt = 0.01, dt = 0.05, dt = 0.15.

y0 = 1.0    # initial y
dt = 0.1    # time step
lamb = 1.0  # decay constant
Nmax = 101  # no. of steps
y = y0
for i in range(Nmax):
    print(i*dt, y)
    y = y/(1.0 + dt*lamb)  # implicit update; the explicit form would be (1.0 - dt*lamb)*y
