
Sec:1.3 Algorithms and Convergence

Algorithm
An algorithm is a procedure that describes, in an unambiguous
manner, a finite sequence of steps to be performed in a specified
order. The object of the algorithm is to implement a procedure to
solve a problem or approximate a solution to the problem.

We use pseudocode to describe the algorithms. This pseudocode specifies the form of the input to be supplied and the form of the desired output.

Looping techniques

Counter-controlled:

x = 1:5;
vsum = 0;
for i = 1:5
    vsum = vsum + x(i);
end
vsum

Condition-controlled:

x = 1:5;
vsum = 0;
i = 1;
while i < 3
    vsum = vsum + x(i);
    i = i + 1;
end
vsum

Conditional execution:

x = 1:5;
vsum = 0;
for i = 1:5
    vsum = vsum + x(i);
    if vsum > 5
        break;
    end
end
vsum

Indentation makes the structure of each loop easy to follow.
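For non-MATLAB readers, the same three looping patterns can be sketched in Python (my translation, not part of the original slides):

```python
# Counter-controlled loop: sum all five entries.
x = [1, 2, 3, 4, 5]
vsum = 0
for xi in x:
    vsum += xi
print(vsum)  # 15

# Condition-controlled loop: MATLAB's "while i < 3" with 1-based
# indexing adds only x(1) and x(2); here i is 0-based.
vsum, i = 0, 0
while i < 2:
    vsum += x[i]
    i += 1
print(vsum)  # 3

# Conditional execution: stop as soon as the partial sum exceeds 5.
vsum = 0
for xi in x:
    vsum += xi
    if vsum > 5:
        break
print(vsum)  # 6
```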

 
Calculate the Taylor polynomial of ln x about x = 1, evaluated at x = 1.5:

P_N(x) = sum_{i=1}^{N} (−1)^(i+1) (x − 1)^i / i,  with N = 9.

Two equivalent implementations:

clear; clc
n = 9;
x = 1.5;
pn = 0;
for i = 1:n
    term = (-1)^(i+1)*(x-1)^i/i;
    pn = pn + term;
end
pn

clear; clc
n = 9;
x = 1.5;
s = +1; pw = x - 1; pn = s*pw;
for i = 2:n
    s = -s; pw = pw*(x-1);
    term = s*pw/i;
    pn = pn + term;
end
pn

The second version updates the sign s and the power pw incrementally instead of recomputing (−1)^(i+1) and (x − 1)^i on every pass.
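A quick cross-check, in Python rather than the slides' MATLAB: P_9(1.5) should agree with ln 1.5 ≈ 0.405465 to roughly the size of the first omitted term.

```python
import math

# Partial sum P_N(x) of the Taylor series of ln(x) about x = 1.
n, x = 9, 1.5
pn = sum((-1) ** (i + 1) * (x - 1) ** i / i for i in range(1, n + 1))

print(pn, math.log(x))  # the two values agree to about 4 decimal places
```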

Construct an algorithm to determine the minimal value of N required for

|ln 1.5 − P_N(1.5)| < 10^(−5).

From calculus (the alternating series estimation theorem) we know that

|ln 1.5 − P_N(1.5)| ≤ |a_(N+1)|,

where a_i = (−1)^(i+1) (0.5)^i / i is the i-th term of the series, so the loop can stop as soon as the magnitude of the current term drops below the tolerance:

clear; clc
n = 13;
x = 1.5;
pn = 0;
for i = 1:n
    term = (-1)^(i+1)*(x-1)^i/i;
    pn = pn + term;
    if abs(term) < 1e-5; N = i; break; end
end
pn
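The stopping rule can be validated independently. Here is a Python sketch (my addition) that finds the minimal N directly from the alternating-series bound |ln 1.5 − P_N(1.5)| ≤ (0.5)^(N+1)/(N+1):

```python
import math

# Find the minimal N whose alternating-series error bound is below 1e-5.
tol = 1e-5
N = 1
while 0.5 ** (N + 1) / (N + 1) >= tol:
    N += 1
print(N)  # 12: the bound first drops below 1e-5 at N = 12

# Direct check: P_N(1.5) really is within 1e-5 of ln(1.5).
pn = sum((-1) ** (i + 1) * 0.5 ** i / i for i in range(1, N + 1))
assert abs(pn - math.log(1.5)) < tol
```

The MATLAB loop, which instead watches the current term, stops at i = 13; both answers guarantee the required accuracy, since the error of P_N is bounded by the first omitted term.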

An algorithm is stable if small changes in the initial data produce correspondingly small changes in the final results; otherwise it is unstable.

Some algorithms are stable only for certain choices of initial data; these are called conditionally stable.

Example: How small is |π − 3.1415|?

Compare the roots x_1, x_2 of

π x^2 + 100 x − 22 = 0

with the roots x̌_1, x̌_2 of the perturbed equation

(3.1415) x^2 + 100 x − 22 = 0,

by examining |x̌_1 − x_1| and |x̌_2 − x_2|. Both differences are tiny: small changes in the initial data produce small changes in the final results.
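Taking the example to compare the roots of π x^2 + 100 x − 22 = 0 with those of 3.1415 x^2 + 100 x − 22 = 0 (my reading of the slide), the effect of the perturbation can be measured in Python:

```python
import math

def quad_roots(a, b, c):
    """Roots of a*x^2 + b*x + c = 0 via the quadratic formula."""
    d = math.sqrt(b * b - 4 * a * c)
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

x1, x2 = quad_roots(math.pi, 100, -22)   # exact leading coefficient pi
p1, p2 = quad_roots(3.1415, 100, -22)    # perturbed leading coefficient

print(abs(math.pi - 3.1415))             # the perturbation, about 9.3e-5
print(abs(p1 - x1), abs(p2 - x2))        # both root shifts are tiny
```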

Example: Rates of Convergence

Consider the following two sequences, both converging to 0:

α_n = (n + 1)/n^2,  γ_n = (n + 3)/n^3

n    α_n       γ_n
1    2.00000   4.00000
2    0.75000   0.62500
3    0.44444   0.22222
4    0.31250   0.10938
5    0.24000   0.064000
6    0.19444   0.041667
7    0.16327   0.029155

Which one is faster?

If {α_n} → α, {β_n} → 0, and a positive constant K exists with

|α_n − α| ≤ K β_n  for large n,

then we say that {α_n} converges to α with rate (order) of convergence O(β_n) ("big oh of β_n"), and we write α_n = α + O(β_n).

Remark: compare the comparison test and the limit comparison test for series.

Example: Rates of Convergence (continued)

Since n + 1 ≤ 2n for n ≥ 1,

α_n = (n + 1)/n^2 ≤ 2n/n^2 = 2 (1/n),  so p = 1.

Since n + 3 ≤ 4n for n ≥ 1,

γ_n = (n + 3)/n^3 ≤ 4n/n^3 = 4 (1/n^2),  so p = 2.

Hence α_n = 0 + O(1/n) and γ_n = 0 + O(1/n^2): {γ_n} converges to 0 faster than {α_n}.

Rates of Convergence
Suppose {βn} is a sequence known to converge to zero, and {αn} converges to
a number α. If a positive constant K exists with

|αn − α| ≤ K|βn|, for large n,


then we say that {αn} converges to α with rate (order) of convergence O(βn).

(This expression is read “big oh of βn”.)

Rates of Convergence

Two sequences: {α_n} → α and {β_n} = {1/n^p} → 0, with

|α_n − α| ≤ K (1/n^p)  for large n,

so that α_n = α + O(1/n^p). We are generally interested in the largest value of p for which this holds.
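The exponents p = 1 for {α_n} and p = 2 for {γ_n} can be checked numerically: n·α_n and n²·γ_n should stay bounded by the constants 2 and 4. A short Python sketch (my addition):

```python
# Verify the rate-of-convergence bounds for the two example sequences.
for n in range(1, 101):
    alpha = (n + 1) / n**2   # claimed O(1/n)
    gamma = (n + 3) / n**3   # claimed O(1/n^2)
    assert n * alpha <= 2        # |alpha_n - 0| <= 2 * (1/n)
    assert n**2 * gamma <= 4     # |gamma_n - 0| <= 4 * (1/n^2)
print("bounds hold for n = 1..100")
```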
Root-finding problem

The root-finding problem is the process of finding a root, or solution, of an equation of the form

f(x) = 0

for a given function f. A root of this equation is also called a zero of the function f.

Graphically, a root (or zero) of a function is an x-intercept of its graph.

Three numerical methods for root-finding:

Sec(2.1): The Bisection Method
Sec(2.2): Fixed-Point Iteration
Sec(2.3): The Newton-Raphson Method
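As a preview of the first method in the list, here is a minimal Python illustration of bisection (my own sketch, not from the slides), applied to f(x) = e^(−x) − x on [0, 1], where f changes sign:

```python
import math

def bisect(f, a, b, tol=1e-8):
    """Bisection: repeatedly halve [a, b], keeping the sign change."""
    fa = f(a)
    while (b - a) / 2 > tol:
        m = (a + b) / 2
        fm = f(m)
        if fa * fm <= 0:
            b = m            # root lies in [a, m]
        else:
            a, fa = m, fm    # root lies in [m, b]
    return (a + b) / 2

root = bisect(lambda x: math.exp(-x) - x, 0.0, 1.0)
print(root)  # approximately 0.56714329
```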
Newton’s Method

THE NEWTON-RAPHSON METHOD is a method for finding successively better approximations to the roots (or zeroes) of a function.

Algorithm

To approximate a root of f(x) = 0, start from an initial guess x_1 and iterate

x_(n+1) = x_n − f(x_n) / f'(x_n).

Example

Use the Newton-Raphson method to estimate the root of f(x) = e^(−x) − x, employing an initial guess of x_1 = 0.

Here f'(x) = −e^(−x) − 1, so f(0) = 1 and f'(0) = −2:

x_2 = x_1 − f(x_1)/f'(x_1) = 0 − 1/(−2) = 0.5
x_3 = x_2 − f(x_2)/f'(x_2) = 0.5 − f(0.5)/f'(0.5) = 0.566311003197218

n    x_n
1    0.000000000000000
2    0.500000000000000
3    0.566311003197218
4    0.567143165034862
5    0.567143290409781

The true value of the root is 0.56714329; thus the approach rapidly converges on the true root.
Newton’s Method


Example

Use the Newton-Raphson method to estimate the root of f(x) = e^(−x) − x, employing an initial guess of x_1 = 0.

clear
f = @(t) exp(-t) - t;
df = @(t) - exp(-t) - 1;
x(1) = 0;
for i = 1:4
    x(i+1) = x(i) - f( x(i) )/df( x(i) );
end
x'

n    x_n
1    0.000000000000000
2    0.500000000000000
3    0.566311003197218
4    0.567143165034862
5    0.567143290409781
Newton’s Method

Example

Approximate a root of f(x) = cos x − x using Newton's Method, employing an initial guess of x_1 = π/4.

clear
f = @(t) cos(t) - t;
df = @(t) - sin(t) - 1;
x(1) = pi/4;
for i = 1:7
    x(i+1) = x(i) - f( x(i) )/df( x(i) );
end
x'

n    x_n (Newton)
1    0.785398163397448
2    0.739536133515238
3    0.739085178106010
4    0.739085133215161
5    0.739085133215161
6    0.739085133215161
7    0.739085133215161
8    0.739085133215161
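The table's iterates, which settle on 0.739085133215161 (the root of cos x = x), can be reproduced in Python (my addition):

```python
import math

# Newton iteration for f(x) = cos(x) - x, starting from pi/4.
f = lambda t: math.cos(t) - t
df = lambda t: -math.sin(t) - 1

x = math.pi / 4
for _ in range(7):
    x = x - f(x) / df(x)

print(x)  # about 0.739085133215161, the root of cos(x) = x
```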
