Unit IV Lectures
20CHC15
Solution Techniques: Bisection Method, Newton-Raphson Method, and Numerical Solution of ODEs
Intermediate Value Theorem
Let f(x) be defined on the interval [a,b].
Intermediate Value Theorem: if a function is continuous on [a,b] and f(a) and f(b) have different signs, then the function has at least one zero in the interval [a,b].
(Figure: a continuous curve crossing the x-axis between a and b, with f(a) and f(b) on opposite sides of the axis.)
Examples
If f(a) and f(b) have the same sign, the function may have an even number of real zeros or no real zeros in the interval [a,b].
(Figure 1: a curve with four real zeros between a and b.)
(Figure 2: a curve with no real zeros between a and b.)
Two More Examples
If f(a) and f(b) have different signs, the function has an odd number of real zeros in the interval [a,b].
(Figures: curves crossing the x-axis an odd number of times between a and b.)
Bisection Method
If the function is continuous on [a,b] and f(a) and f(b) have different signs, the bisection method produces a new interval that is half the size of the current one, with the function again taking different signs at its endpoints. This allows us to repeat the bisection step to further reduce the size of the interval.
Bisection Method
Assumptions:
Given an interval [a,b]
f(x) is continuous on [a,b]
f(a) and f(b) have opposite signs.
These assumptions ensure the existence of at least one zero in the interval
[a,b] and the bisection method can be used to obtain a smaller interval
that contains the zero.
Bisection Algorithm
Assumptions:
f(x) is continuous on [a,b]
f(a) f(b) < 0
Algorithm:
Loop
1. Compute the midpoint c = (a+b)/2
2. Evaluate f(c)
3. If f(a) f(c) < 0, the new interval is [a, c]
   If f(a) f(c) > 0, the new interval is [c, b]
End loop
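The looped algorithm can be sketched as follows. Python is used here for illustration (the notes' own program further below is in MATLAB); the function and interval are taken from the x = cos(x) worked example later in these notes.

```python
import math

def bisect(f, a, b, n_iter):
    """Bisection sketch following the algorithm above: halve [a, b]
    n_iter times, keeping the half on which f changes sign."""
    assert f(a) * f(b) < 0, "f(a) and f(b) must have different signs"
    c = (a + b) / 2
    for _ in range(n_iter):
        c = (a + b) / 2
        if f(a) * f(c) < 0:
            b = c          # zero lies in [a, c]
        else:
            a = c          # zero lies in [c, b]
    return a, b, c

# Root of x = cos(x), i.e. f(x) = x - cos(x), on [0.5, 0.9],
# the interval used in the worked example later in these notes:
a, b, c = bisect(lambda x: x - math.cos(x), 0.5, 0.9, 5)
```

The fifth midpoint, 0.7375, matches the "best estimate" reported in the summary slide below.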
Flow Chart of Bisection Method
Start: given a, b and tolerance ε
  u = f(a); v = f(b)
Repeat:
  c = (a+b)/2; w = f(c)
  if (b−a)/2 < ε: stop
  if u·w < 0: set b = c, v = w
  else:       set a = c, u = w
Example
Answer:
f(x) is continuous on [0,2],
and f(0) · f(2) = (1)(3) = 3 > 0.
The assumptions are not satisfied, so the bisection method cannot be used on [0,2].

f(x) is continuous on [0,1],
and f(0) · f(1) = (1)(−1) = −1 < 0.
The assumptions are satisfied, so the bisection method can be used on [0,1].
Best Estimate and Error Level
The best estimate of the zero of the function f(x) after the first iteration of the bisection method is the midpoint of the initial interval:

  Estimate of the zero:  r = (a + b)/2
  Error:                 |Error| ≤ (b − a)/2
Stopping Criteria
After n iterations, the error in taking the midpoint c_n as the estimate of the zero r satisfies:

  error = |r − c_n| ≤ (b − a)/2^n,  which tends to 0 as n → ∞
Convergence Analysis
To guarantee an error of at most ε, the number of iterations n must satisfy:

  n ≥ ( log(b − a) − log(ε) ) / log(2)
Convergence Analysis – Alternative Form

  n ≥ log2( (b − a) / ε )
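The bound above can be evaluated directly, since the number of iterations is known a priori. A minimal Python sketch, applied to the numbers from the next example ([0.5, 0.9] with ε = 0.02):

```python
import math

def bisection_iterations(a, b, eps):
    """Smallest n with (b - a) / 2**n <= eps, i.e. n >= log2((b - a) / eps)."""
    return math.ceil(math.log2((b - a) / eps))

# For the x = cos(x) example below: interval [0.5, 0.9], eps = 0.02
n = bisection_iterations(0.5, 0.9, 0.02)
```

This gives n = 5, consistent with the summary below: after 5 iterations the error bound is 0.4 / 2^5 = 0.0125 < 0.02.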
Example
Use Bisection method to find a root of the equation x = cos (x) with
absolute error <0.02
(assume the initial interval [0.5, 0.9])
Bisection Method – Iterations
(Figures: the initial interval [0.5, 0.9] and the successively halved intervals.)
Summary
After 5 iterations:
Interval containing the root: [0.725, 0.75]
Best estimate of the root is 0.7375
| Error | < 0.0125
A Matlab Program of Bisection Method

a=.5; b=.9;
u=a-cos(a);
v=b-cos(b);
for i=1:5
    c=(a+b)/2
    fc=c-cos(c)
    if u*fc<0
        b=c; v=fc;
    else
        a=c; u=fc;
    end
end

Output (first four iterations shown):
c = 0.7000, fc = -0.0648
c = 0.8000, fc =  0.1033
c = 0.7500, fc =  0.0183
c = 0.7250, fc = -0.0235
Example
* f(x) is continuous
* f(0) = 1, f(1) = −1, so f(a) · f(b) < 0
The bisection method can be used to find the root.
Example

Iteration | a | b | c = (a+b)/2 | f(c) | (b−a)/2
Bisection Method
Advantages
Simple and easy to implement
One function evaluation per iteration
The size of the interval containing the zero is reduced by 50% after each iteration
The number of iterations can be determined a priori
No knowledge of the derivative is needed
The function does not have to be differentiable
Disadvantage
Slow to converge
Good intermediate approximations may be discarded
Newton-Raphson Method
Given an initial guess of the root x0, the Newton-Raphson method uses information about the function and its derivative at that point to find a better guess of the root.
Assumptions:
f(x) is continuous and the first derivative is known
An initial guess x0 such that f′(x0) ≠ 0 is given
Derivation of Newton’s Method
Given: x_i, an initial guess of the root of f(x) = 0.
Question: How do we obtain a better estimate x_{i+1}?
____________________________________
Taylor’s theorem:  f(x + h) ≈ f(x) + f′(x) h
Find h such that f(x + h) = 0:

  h = − f(x) / f′(x)        (Newton-Raphson formula)

A new guess of the root:

  x_{i+1} = x_i − f(x_i) / f′(x_i)
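The iteration x_{i+1} = x_i − f(x_i)/f′(x_i) can be sketched in a few lines of Python. For illustration it is applied to the same equation x = cos(x) used in the bisection examples (the function choice here is an assumption for demonstration):

```python
import math

def newton(f, fprime, x0, n_iter):
    """Newton-Raphson sketch: repeatedly apply x <- x - f(x)/f'(x)."""
    x = x0
    for _ in range(n_iter):
        x = x - f(x) / fprime(x)
    return x

# f(x) = x - cos(x), f'(x) = 1 + sin(x), starting near the root:
root = newton(lambda x: x - math.cos(x),
              lambda x: 1 + math.sin(x), 0.7, 5)
```

Five iterations already reach machine precision, illustrating the quadratic convergence discussed below.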
Newton’s Method
(Figures: geometric interpretation – the tangent line at x_i intersects the x-axis at x_{i+1}.)
Example

Iteration | x_i | f(x_i) | f′(x_i) | x_{i+1} | |x_{i+1} − x_i|
0 | 4 | 33 | 33 | 3      | 1
1 | 3 |  9 | 16 | 2.4375 | 0.5625
Practice Problems based on NR Method:
(ii). Using the initial estimate 0.9000
2.
Theorem:
Let f(x), f′(x) and f″(x) be continuous at x = r, where f(r) = 0. If f′(r) ≠ 0, then there exists δ > 0 such that if |x0 − r| ≤ δ, then

  |x_{k+1} − r| ≤ C |x_k − r|²

where

  C = (1/2) · max |f″(x)| / min |f′(x)|,

the maximum and minimum being taken over |x − r| ≤ |x0 − r|.
Convergence Analysis
Remarks
When the initial guess is close enough to a simple root of the function, Newton’s method is guaranteed to converge quadratically.
Problems with Newton’s Method
(Figures: cases where Newton’s method can fail – zero slope at an iterate, divergence, oscillation.)

False Position Method
The false-position estimate of the root is:

  x_r = ( x_l f_u − x_u f_l ) / ( f_u − f_l )

(Figure: a clearer view of the iterations.)
Example
Problem
A root of f(x) = 3x − 2e^(0.5x) is known to exist between x = 1 and x = 2. Calculate the guessed location of the root using the false position method.
Answer:
Given x_l = 1 and x_u = 2, therefore:

1st Iteration
f(x_l) = 3(1) − 2e^(0.5(1)) = −0.2974 and f(x_u) = 3(2) − 2e^(0.5(2)) = 0.5634

  x_r = x_u − f(x_u)(x_u − x_l) / ( f(x_u) − f(x_l) ) = 1.3455,  and f(x_r) = 0.1173

Since f(x_l) · f(x_r) < 0, the new interval is x_l = 1 and x_u = 1.3455.

2nd Iteration
From the 1st iteration, x_l = 1 and x_u = 1.3455, with f(x_l) = −0.2974 and f(x_u) = 0.1173, therefore:

  x_r = x_u − f(x_u)(x_u − x_l) / ( f(x_u) − f(x_l) ) = 1.2478,  and f(x_r) = 0.0110

Since f(x_l) · f(x_r) < 0, the new interval is x_l = 1 and x_u = 1.2478.

3rd Iteration
From the 2nd iteration, x_l = 1 and x_u = 1.2478, with f(x_l) = −0.2974 and f(x_u) = 0.0110, therefore:

  x_r = x_u − f(x_u)(x_u − x_l) / ( f(x_u) − f(x_l) ) = 1.2390,  and f(x_r) = 9.4635 × 10⁻⁴
(Table and plot: the complete results for 10 iterations – iteration number, f(x_l), f(x_u), x_root, f(x_root) – and the iteration progress of the function value.)
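The three iterations of the example can be reproduced with a short Python sketch (Python used for illustration; the update rule is exactly the false-position formula from the example):

```python
import math

def false_position(f, xl, xu, n_iter):
    """False-position sketch: take the root of the secant line through
    (xl, f(xl)) and (xu, f(xu)), then keep the sub-interval with the
    sign change."""
    for _ in range(n_iter):
        fl, fu = f(xl), f(xu)
        xr = xu - fu * (xu - xl) / (fu - fl)   # secant-line root
        if fl * f(xr) < 0:
            xu = xr        # root lies in [xl, xr]
        else:
            xl = xr        # root lies in [xr, xu]
    return xr

f = lambda x: 3 * x - 2 * math.exp(0.5 * x)
xr = false_position(f, 1.0, 2.0, 3)   # three iterations, as in the example
```

After three iterations the estimate agrees with the 1.2390 obtained above.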
Derivatives

Ordinary derivatives, e.g. dv/dt:
  v is a function of one independent variable.

Partial derivatives, e.g. ∂u/∂y:
  u is a function of more than one independent variable.
Differential Equations

Ordinary differential equations involve one or more ordinary derivatives of unknown functions, e.g.:

  d²v/dt² + 6tv + 1 = 0

Partial differential equations involve one or more partial derivatives of unknown functions, e.g.:

  ∂²u/∂x² + ∂²u/∂y² = 0
Ordinary Differential Equations
Examples:

  dv(t)/dt + v(t) = e^t

  d²x(t)/dt² + 5 dx(t)/dt + 2x(t) = cos(t)

x(t), v(t): unknown functions
t: independent variable
Example of ODE:
Model of a Falling Parachutist
The velocity of a falling parachutist is given by:

  dv/dt = 9.8 − (c/M) v

M: mass
c: drag coefficient
v: velocity
Definitions

  dv/dt = 9.8 − (c/M) v    (ordinary differential equation)

v: the dependent variable – the unknown function to be determined
t: the independent variable – the variable with respect to which other variables are differentiated
Order of a Differential Equation
The order of an ordinary differential equation is the order of the highest order derivative.
Examples:

  dx(t)/dt + x(t) = e^t                                First order ODE

  d²x(t)/dt² + 5 dx(t)/dt + 2x(t) = cos(t)             Second order ODE

  (d²x(t)/dt²)³ + 2 (dx(t)/dt)² + x⁴(t) = 1            Second order ODE
Solution of a Differential Equation
A solution to a differential equation is a function that satisfies the equation.

Example:
  dx(t)/dt + x(t) = 0

Solution: x(t) = e^(−t)
Proof:
  dx(t)/dt = −e^(−t)
  dx(t)/dt + x(t) = −e^(−t) + e^(−t) = 0
Linear ODE
An ODE is linear if:
  The unknown function and its derivatives appear to power one
  There is no product of the unknown function and/or its derivatives
Examples:

  dx(t)/dt + x(t) = e^t                                Linear ODE

  d²x(t)/dt² + 5 dx(t)/dt + 2t² x(t) = cos(t)          Linear ODE

  (d²x(t)/dt²)³ + 2 (dx(t)/dt) x(t) = 1                Non-linear ODE
Nonlinear ODE
An ODE is nonlinear if:
  The unknown function or its derivatives appear to a power other than one, or
  There is a product of the unknown function and/or its derivatives, or
  The unknown function appears inside a nonlinear function (e.g. cos)
Examples of nonlinear ODEs:

  dx(t)/dt + cos(x(t)) = 1

  d²x(t)/dt² + 5 (dx(t)/dt) x(t) = 2

  d²x(t)/dt² + (dx(t)/dt) x(t) = 1
Solutions of Ordinary Differential Equations

  x(t) = cos(2t)

is a solution to the ODE

  d²x(t)/dt² + 4x(t) = 0

Is it unique?
All functions of the form x(t) = cos(2t + c) (where c is a real constant) are solutions.
Uniqueness of a Solution

  d²x(t)/dt² + 4x(t) = 0        Second order ODE
  x(0) = a
  x′(0) = b

Two conditions are needed to uniquely specify the solution.
Auxiliary Conditions

Initial conditions: all conditions are at one point of the independent variable.
Boundary conditions: the conditions are not all at one point of the independent variable.

Boundary-Value and Initial-Value Problems

Initial-value problems: the auxiliary conditions are at one point of the independent variable.
Boundary-value problems: the auxiliary conditions are not at one point of the independent variable; these are more difficult to solve than initial-value problems.

Analytical solutions to ODEs are available for linear ODEs and special classes of nonlinear differential equations.
Numerical Solutions
Numerical Solution of Ordinary Differential Equations
A first order initial value problem of an ODE may be written in the form:

  y′(t) = f(y, t),  y(0) = y0

Examples:
  y′(t) = 3y + 5,  y(0) = 1
  y′(t) = ty + 1,  y(0) = 0
Euler Methods
Forward Euler Methods
Backward Euler Method
Modified Euler Method
Runge-Kutta Methods
Second Order
Third Order
Fourth Order
Forward Euler Method
Consider the forward difference approximation for the first derivative:

  y_n′ ≈ (y_{n+1} − y_n) / h,   h = t_{n+1} − t_n

Rewriting the above equation we have:

  y_{n+1} = y_n + h y_n′,   y_n′ = f(y_n, t_n)

So, y_n is recursively calculated as:

  y_1 = y_0 + h y_0′ = y_0 + h f(y_0, t_0)
  y_2 = y_1 + h f(y_1, t_1)
  ⋮
  y_n = y_{n−1} + h f(y_{n−1}, t_{n−1})
Example: solve
  y′ = ty + 1,  y_0 = y(0) = 1,  0 ≤ t ≤ 1,  h = 0.25
Solution:
  for t_0 = 0:     y_0 = y(0) = 1
  for t_1 = 0.25:  y_1 = y_0 + h y_0′
                       = y_0 + h(t_0 y_0 + 1)
                       = 1 + 0.25(0 · 1 + 1) = 1.25
  for t_2 = 0.5:   y_2 = y_1 + h y_1′
                       = y_1 + h(t_1 y_1 + 1)
                       = 1.25 + 0.25(0.25 · 1.25 + 1) = 1.5781
  etc.
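The recursion above can be sketched in Python (used here for illustration), applied to the same problem y′ = ty + 1, y(0) = 1, h = 0.25:

```python
def forward_euler(f, y0, t0, h, n_steps):
    """Forward Euler sketch: y_{n+1} = y_n + h*f(y_n, t_n)."""
    ys, y, t = [y0], y0, t0
    for _ in range(n_steps):
        y = y + h * f(y, t)
        t = t + h
        ys.append(y)
    return ys

# The worked example above: y' = t*y + 1, y(0) = 1, h = 0.25
ys = forward_euler(lambda y, t: t * y + 1, 1.0, 0.0, 0.25, 4)
```

The first two computed values, 1.25 and 1.5781, match the hand calculation above.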
Graph the solution
(Figure: forward Euler approximation of y(t) on [0, 1].)
Backward Euler Method
Consider the backward difference approximation for the first derivative:

  y_n′ ≈ (y_n − y_{n−1}) / h,   h = t_n − t_{n−1}

Rewriting the above equation we have:

  y_n = y_{n−1} + h y_n′,   y_n′ = f(y_n, t_n)

So, y_n is recursively calculated as:

  y_1 = y_0 + h y_1′ = y_0 + h f(y_1, t_1)
  y_2 = y_1 + h f(y_2, t_2)
  ⋮
  y_n = y_{n−1} + h f(y_n, t_n)

Note that the unknown y_n appears on both sides, so each step requires solving an equation (the method is implicit).
Example: solve
  y′ = ty + 1,  y_0 = y(0) = 1,  0 ≤ t ≤ 1,  h = 0.25
Solution:
Solving y_n = y_{n−1} + h(t_n y_n + 1) for y_n yields y_n = (y_{n−1} + h) / (1 − h t_n):

  for t_1 = 0.25:  y_1 = (y_0 + h)/(1 − h t_1) = (1 + 0.25)/(1 − 0.25 · 0.25) = 1.3333
  for t_2 = 0.5:   y_2 = (y_1 + h)/(1 − h t_2) = (1.3333 + 0.25)/(1 − 0.25 · 0.5) = 1.8091
  for t_3 = 0.75:  y_3 = (y_2 + h)/(1 − h t_3) = (1.8091 + 0.25)/(1 − 0.25 · 0.75) = 2.5343
  for t_4 = 1:     y_4 = (y_3 + h)/(1 − h t_4) = (2.5343 + 0.25)/(1 − 0.25 · 1) = 3.7142
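Because f is linear in y for this example, the implicit step has the closed form used above, which can be sketched directly in Python (for illustration):

```python
def backward_euler_linear(y0, h, n_steps):
    """Backward Euler for the example ODE y' = t*y + 1: solving
    y_n = y_{n-1} + h*(t_n*y_n + 1) for y_n gives
    y_n = (y_{n-1} + h) / (1 - h*t_n)."""
    ys, y = [y0], y0
    for n in range(1, n_steps + 1):
        t_n = n * h
        y = (y + h) / (1 - h * t_n)
        ys.append(y)
    return ys

ys = backward_euler_linear(1.0, 0.25, 4)
```

The computed values agree with the table above (small differences in the last digit come from the rounding used in the hand calculation).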
Graph the solution
(Figure: backward Euler approximation of y(t) on [0, 1].)
Modified Euler Method
The modified Euler method is derived by applying the trapezoidal rule to integrating y_n′ = f(y, t). So, we have:

  y_{n+1} = y_n + (h/2)(y_{n+1}′ + y_n′),   y_n′ = f(y_n, t_n)
Example: solve
  y′ = ty + 1,  y_0 = y(0) = 1,  0 ≤ t ≤ 1,  h = 0.25
Solution:
f is linear in y, so the modified Euler step can be solved for y_n:

  y_n = y_{n−1} + (h/2)(y′_{n−1} + y′_n)
      = y_{n−1} + (h/2)(t_{n−1} y_{n−1} + 1 + t_n y_n + 1)

so

  y_n (1 − (h/2) t_n) = y_{n−1} (1 + (h/2) t_{n−1}) + h

  y_n = [ y_{n−1} (1 + (h/2) t_{n−1}) + h ] / (1 − (h/2) t_n)
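The closed-form step derived above can be iterated in a short Python sketch (for illustration; the numerical values below are not given in the notes and come from running this formula):

```python
def modified_euler_linear(y0, h, n_steps):
    """Modified (trapezoidal) Euler for y' = t*y + 1, using the
    closed-form step derived above:
    y_n = (y_{n-1}*(1 + (h/2)*t_{n-1}) + h) / (1 - (h/2)*t_n)."""
    ys, y = [y0], y0
    for n in range(1, n_steps + 1):
        t_prev, t_n = (n - 1) * h, n * h
        y = (y * (1 + 0.5 * h * t_prev) + h) / (1 - 0.5 * h * t_n)
        ys.append(y)
    return ys

ys = modified_euler_linear(1.0, 0.25, 4)
```

The first step gives y_1 = 1.25 / 0.96875 ≈ 1.2903, between the forward Euler value (1.25) and the backward Euler value (1.3333), as expected for an averaged scheme.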
Graph the solution
(Figure: modified Euler approximation of y(t) on [0, 1].)
Second Order Runge-Kutta Method
The second order Runge-Kutta (RK-2) method is derived by applying the trapezoidal rule to integrating y′ = f(y, t) over the interval [t_n, t_{n+1}]. So, we have:

  y_{n+1} = y_n + ∫ from t_n to t_{n+1} of f(y, t) dt
          ≈ y_n + (h/2) [ f(y_n, t_n) + f(y_{n+1}, t_{n+1}) ]

We estimate y_{n+1} by the forward Euler method.
So, we have:

  y_{n+1} = y_n + (h/2) [ f(y_n, t_n) + f(y_n + h f(y_n, t_n), t_{n+1}) ]
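One RK-2 step, exactly as in the formula above, can be sketched in Python (the test problem y′ = y is an assumption chosen because its exact solution e^t makes the check easy):

```python
def rk2_step(f, y, t, h):
    """One RK-2 (trapezoidal/Heun) step: predict y_{n+1} with forward
    Euler, then average the slopes at both ends of the interval."""
    y_pred = y + h * f(y, t)                      # forward Euler predictor
    return y + 0.5 * h * (f(y, t) + f(y_pred, t + h))

# Sanity check on y' = y, y(0) = 1: one step of size h gives
# 1 + h + h^2/2, the Taylor series of e^h through second order.
y1 = rk2_step(lambda y, t: y, 1.0, 0.0, 0.1)
```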
Third Order Runge-Kutta Method
The third order Runge-Kutta (RK-3) method is derived by applying Simpson’s 1/3 rule to integrating y′ = f(y, t) over the interval [t_n, t_{n+1}]. So, we have:

  y_{n+1} = y_n + ∫ from t_n to t_{n+1} of f(y, t) dt
          ≈ y_n + (h/6) [ f(y_n, t_n) + 4 f(y_{n+1/2}, t_{n+1/2}) + f(y_{n+1}, t_{n+1}) ]

We estimate y_{n+1/2} by the forward Euler method.
The estimate y_{n+1} may be obtained by the forward difference method, the central difference method with step h/2, or a linear combination of both. One RK-3 scheme is written as:

  y_{n+1} = y_n + (1/6)(k_1 + 4k_2 + k_3)

where
  k_1 = h f(y_n, t_n)
  k_2 = h f(y_n + (1/2) k_1, t_n + h/2)
  k_3 = h f(y_n − k_1 + 2k_2, t_{n+1})
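The scheme above (Kutta's classical third-order method) can be sketched in Python; again y′ = y is used as an assumed test problem because one step should reproduce the Taylor series of e^h through third order:

```python
def rk3_step(f, y, t, h):
    """One step of the RK-3 scheme above."""
    k1 = h * f(y, t)
    k2 = h * f(y + 0.5 * k1, t + 0.5 * h)
    k3 = h * f(y - k1 + 2 * k2, t + h)
    return y + (k1 + 4 * k2 + k3) / 6

# Sanity check on y' = y, y(0) = 1: one step of size h gives
# 1 + h + h^2/2 + h^3/6.
y1 = rk3_step(lambda y, t: y, 1.0, 0.0, 0.1)
```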
Fourth Order Runge-Kutta Method
The fourth order Runge-Kutta (RK-4) method is derived by applying Simpson’s 1/3 or Simpson’s 3/8 rule to integrating y′ = f(y, t) over the interval [t_n, t_{n+1}]. The formula of RK-4 based on Simpson’s 1/3 rule is written as:

  y_{n+1} = y_n + (1/6)(k_1 + 2k_2 + 2k_3 + k_4)

where
  k_1 = h f(y_n, t_n)
  k_2 = h f(y_n + (1/2) k_1, t_n + h/2)
  k_3 = h f(y_n + (1/2) k_2, t_n + h/2)
  k_4 = h f(y_n + k_3, t_n + h)
The fourth order Runge-Kutta (RK-4) method based on Simpson’s 3/8 rule is written as:

  y_{n+1} = y_n + (1/8)(k_1 + 3k_2 + 3k_3 + k_4)

where
  k_1 = h f(y_n, t_n)
  k_2 = h f(y_n + (1/3) k_1, t_n + h/3)
  k_3 = h f(y_n − (1/3) k_1 + k_2, t_n + 2h/3)
  k_4 = h f(y_n + k_1 − k_2 + k_3, t_n + h)
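The classical RK-4 formula (the Simpson's 1/3 based version above) in Python, with y′ = y again assumed as the test problem; one step of size h = 0.1 should agree with the exact value e^0.1 to roughly 1e-7:

```python
import math

def rk4_step(f, y, t, h):
    """One classical RK-4 step."""
    k1 = h * f(y, t)
    k2 = h * f(y + 0.5 * k1, t + 0.5 * h)
    k3 = h * f(y + 0.5 * k2, t + 0.5 * h)
    k4 = h * f(y + k3, t + h)
    return y + (k1 + 2 * k2 + 2 * k3 + k4) / 6

# Sanity check on y' = y, y(0) = 1:
y1 = rk4_step(lambda y, t: y, 1.0, 0.0, 0.1)
```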
C. Curve Fitting: Method of Least Squares
Given a bivariate dataset (x1, y1), …, (xn, yn), where x1, …, xn are nonrandom and Yi = α + βxi + Ui are random variables for i = 1, 2, …, n. The random variables U1, U2, …, Un have zero expectation and variance σ².
The least squares estimates of α and β minimize the sum of squared deviations

  S(α, β) = Σ from i = 1 to n of (yi − α − βxi)²

To find the least squares estimates, we differentiate S(α, β) with respect to α and β, and we set the derivatives equal to 0, which yields:

  β̂ = Σ(xi − x̄)(yi − ȳ) / Σ(xi − x̄)²    (slope)
  α̂ = ȳ − β̂ x̄                            (intercept)

Regression
The observed value yi corresponds to xi, and the value α + βxi lies on the regression line y = α + βx.
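The least squares estimates can be computed directly from the minimization of S(α, β). A minimal Python sketch, using hypothetical data generated exactly on the line y = 2x + 1 (the data points are an assumption for illustration):

```python
def least_squares(xs, ys):
    """Least squares sketch: slope and intercept minimizing
    S(a, b) = sum((y_i - a - b*x_i)^2)."""
    n = len(xs)
    x_bar = sum(xs) / n
    y_bar = sum(ys) / n
    beta = (sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
            / sum((x - x_bar) ** 2 for x in xs))
    alpha = y_bar - beta * x_bar
    return alpha, beta

# Hypothetical points on y = 2x + 1:
alpha, beta = least_squares([0, 1, 2, 3], [1, 3, 5, 7])
```

For exact-line data the fit recovers the slope and intercept exactly.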
Problem 1
Ans-3a.
Ans-3b.
Ans-3c.
Regression line: y = 0.25x − 2.35 for the given points.
Residuals:
A way to explore whether the linear regression model is appropriate for a given bivariate dataset is to inspect a scatter plot of the so-called residuals ri against the xi. The ith residual ri is defined as the vertical distance between the ith point and the estimated regression line:

  ri = yi − α̂ − β̂ xi

We always have Σ ri = 0.

Dr Raj Verma (CBIT Hyderabad)