Numerical Methods – Midterm Work-Sheet

Roots of Equations
Bisection Method
Initial step: a^(0) = a, b^(0) = b, I^(0) = (a^(0), b^(0)), x^(0) = (a^(0) + b^(0))/2.
At each step k ≥ 1 we select the subinterval I^(k) = (a^(k), b^(k)) of the interval I^(k−1) = (a^(k−1), b^(k−1)) as follows:
- x^(k−1) = (a^(k−1) + b^(k−1))/2;
- if f(x^(k−1)) = 0, then α = x^(k−1) and the iteration stops;
- if f(a^(k−1)) f(x^(k−1)) < 0, set a^(k) = a^(k−1), b^(k) = x^(k−1);
- if f(x^(k−1)) f(b^(k−1)) < 0, set a^(k) = x^(k−1), b^(k) = b^(k−1).
- x^(k) = (a^(k) + b^(k))/2; increase k by 1.

Error of approximation: err = |x^(k) − α| ≤ (b − a) / 2^(k+1)
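A minimal Python sketch of this iteration (the function name bisection and the parameters tol and max_iter are illustrative assumptions, not part of the worksheet):

def bisection(f, a, b, tol=1e-8, max_iter=100):
    """Bisection method: f must change sign on [a, b]."""
    if f(a) * f(b) >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for k in range(max_iter):
        x = (a + b) / 2.0
        if f(x) == 0 or (b - a) / 2.0 < tol:   # err = (b - a)/2 at this step
            return x
        if f(a) * f(x) < 0:                    # root lies in (a, x)
            b = x
        else:                                  # root lies in (x, b)
            a = x
    return (a + b) / 2.0

# Example: root of x^2 - 2 on [1, 2] is sqrt(2)
print(bisection(lambda x: x**2 - 2, 1.0, 2.0))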

Newton Method
x_{k+1} = x_k − f(x_k) / f′(x_k)

Error of approximation: err = |f(x_{k+1})|
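A corresponding sketch for Newton's method, assuming the derivative df is available in closed form (names are illustrative):

def newton(f, df, x0, tol=1e-8, max_iter=50):
    """Newton's method: x_{k+1} = x_k - f(x_k)/f'(x_k)."""
    x = x0
    for k in range(max_iter):
        x = x - f(x) / df(x)
        if abs(f(x)) < tol:          # err = |f(x_{k+1})|
            return x
    return x

# Example: root of x^2 - 2 starting from x0 = 1.5
print(newton(lambda x: x**2 - 2, lambda x: 2*x, 1.5))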

Secant Method
x_{k+1} = x_k − f(x_k) (x_k − x_{k−1}) / ( f(x_k) − f(x_{k−1}) )

Error of approximation: err = |f(x_{k+1})|
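The same loop with the derivative replaced by a difference quotient; a sketch with illustrative names:

def secant(f, x0, x1, tol=1e-8, max_iter=50):
    """Secant method: f' is replaced by a difference quotient of two iterates."""
    for k in range(max_iter):
        x2 = x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0))
        if abs(f(x2)) < tol:         # err = |f(x_{k+1})|
            return x2
        x0, x1 = x1, x2
    return x1

# Example: root of x^2 - 2 from the starting pair (1, 2)
print(secant(lambda x: x**2 - 2, 1.0, 2.0))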

Newton–Horner Method


Horner Division
Let P_n(x) = a_n x^n + ⋯ + a_1 x + a_0 be a polynomial of degree n.
Let Q_{n−1}(x) = b_{n−1} x^{n−1} + ⋯ + b_1 x + b_0 be the quotient of the division P_n(x) / (x − α). Then:
b_{n−1} = a_n
b_{k−1} = a_k + α b_k ,  k = n − 1, n − 2, ..., 1
r = a_0 + α b_0

Roots of Polynomials:
p_n(x) = (x − x_k) p_{n−1}(x) + r
Since p_n′(x_k) = p_{n−1}(x_k), the Newton step needs only Horner divisions:
x_{k+1} = x_k − p_n(x_k) / p_{n−1}(x_k)
Error of approximation: err = |p_n(x_{k+1})|
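A sketch combining the two ideas: one Horner division gives p_n(x_k) as the remainder r, a second gives p_{n−1}(x_k) = p_n′(x_k). The names horner and newton_horner are illustrative:

def horner(coeffs, alpha):
    """Divide p (coefficients a_n, ..., a_0) by (x - alpha).
    Returns quotient coefficients b_{n-1}, ..., b_0 and remainder r = p(alpha)."""
    b = [coeffs[0]]
    for a in coeffs[1:]:
        b.append(a + alpha * b[-1])
    return b[:-1], b[-1]

def newton_horner(coeffs, x0, tol=1e-8, max_iter=50):
    """Newton-Horner: p_n(x_k) = r and p_n'(x_k) = p_{n-1}(x_k)."""
    x = x0
    for k in range(max_iter):
        q, r = horner(coeffs, x)     # r  = p_n(x)
        _, dp = horner(q, x)         # dp = p_{n-1}(x) = p_n'(x)
        x = x - r / dp
        if abs(horner(coeffs, x)[1]) < tol:   # err = |p_n(x_{k+1})|
            return x
    return x

# Example: p(x) = x^3 - x - 2, root near 1.5
print(newton_horner([1, 0, -1, -2], 1.5))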
Linear Systems of Equations
Naive Gauss Method
Forward Elimination:

Step 1

For Equation 2, divide Equation 1 by a_11 and multiply by a_21:

a_21 x_1 + (a_21/a_11) a_12 x_2 + ⋯ + (a_21/a_11) a_1n x_n = (a_21/a_11) b_1

Subtract the result from Equation 2:

   a_21 x_1 + a_22 x_2 + a_23 x_3 + ⋯ + a_2n x_n = b_2
−  [ a_21 x_1 + (a_21/a_11) a_12 x_2 + ⋯ + (a_21/a_11) a_1n x_n = (a_21/a_11) b_1 ]
__________________________________________________________________

( a_22 − (a_21/a_11) a_12 ) x_2 + ⋯ + ( a_2n − (a_21/a_11) a_1n ) x_n = b_2 − (a_21/a_11) b_1

Repeat this procedure for the remaining equations.

Step 2

Repeat the same procedure to eliminate x_2 from Equations 3 through n, using the new Equation 2 as the pivot.

At the end of (n − 1) Forward Elimination steps, the system of equations will be upper triangular.

Back Substitution:

x_n = b_n^(n−1) / a_nn^(n−1)

x_i = ( b_i^(i−1) − a_{i,i+1}^(i−1) x_{i+1} − a_{i,i+2}^(i−1) x_{i+2} − ⋯ − a_{i,n}^(i−1) x_n ) / a_ii^(i−1) ,  for i = n − 1, ..., 1

Gauss Elimination with Partial Pivoting


At the beginning of the k-th step of forward elimination, find the maximum of
|a_kk| , |a_{k+1,k}| , ... , |a_nk| .

If the maximum of these values is |a_pk| in the p-th row, k ≤ p ≤ n, then switch rows p and k.
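The same elimination with the row swap added at the start of each step; a sketch (names are illustrative, and the example system has a zero pivot at a_11, which the naive method cannot handle):

import numpy as np

def gauss_partial_pivoting(A, b):
    """Gauss elimination with partial pivoting: swap in the largest pivot first."""
    A = A.astype(float)
    b = b.astype(float)
    n = len(b)
    for k in range(n - 1):
        p = k + np.argmax(np.abs(A[k:, k]))   # row of max |a_pk|, k <= p (0-based)
        if p != k:
            A[[k, p]] = A[[p, k]]
            b[[k, p]] = b[[p, k]]
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x

A = np.array([[0.0, 2.0, 1.0], [1.0, 1.0, 1.0], [2.0, 1.0, 3.0]])
b = np.array([3.0, 3.0, 6.0])
print(gauss_partial_pivoting(A, b))   # solution [1, 1, 1]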
LU Decomposition
         [ 1     0     0 ]   [ u_11  u_12  u_13 ]
A = LU = [ l_21  1     0 ] · [ 0     u_22  u_23 ]
         [ l_31  l_32  1 ]   [ 0     0     u_33 ]

Perform the Forward Elimination steps of the Naive Gauss Method.

The matrix U is the matrix of the final triangular system.

The coefficients of L are the multipliers l_ij = a_ij^(j−1) / a_jj^(j−1) used to multiply the pivot equation during the Elimination steps.
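A sketch of the factorization obtained by recording the elimination multipliers (lu_decompose is an illustrative name):

import numpy as np

def lu_decompose(A):
    """LU decomposition via the forward-elimination multipliers (unit-diagonal L)."""
    A = A.astype(float)
    n = A.shape[0]
    L = np.eye(n)
    U = A.copy()
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]   # multiplier l_ik
            U[i, k:] -= L[i, k] * U[k, k:]
    return L, U

A = np.array([[4.0, -2.0, 1.0], [3.0, 6.0, -4.0], [2.0, 1.0, 8.0]])
L, U = lu_decompose(A)
print(np.allclose(L @ U, A))   # True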

Gauss-Seidel

x_i = ( c_i − Σ_{j=1, j≠i}^{n} a_ij x_j ) / a_ii ,   i = 1, 2, …, n

Error of approximation: ε_{a,i} = | (x_i^new − x_i^old) / x_i^new | · 100
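A sketch of the iteration with the percentage stopping test above; new components are used as soon as they are computed (names and the diagonally dominant example are illustrative):

import numpy as np

def gauss_seidel(A, c, x0=None, tol=1e-6, max_iter=200):
    """Gauss-Seidel iteration; stops when the relative error (in %) is below tol."""
    n = len(c)
    x = np.zeros(n) if x0 is None else x0.astype(float)
    for it in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i+1:] @ x_old[i+1:]   # sum over j != i
            x[i] = (c[i] - s) / A[i, i]
        if np.max(np.abs((x - x_old) / x)) * 100 < tol:       # eps_a,i in percent
            return x
    return x

# Diagonally dominant example, so the iteration converges
A = np.array([[4.0, 1.0, 1.0], [1.0, 5.0, 2.0], [1.0, 2.0, 6.0]])
c = np.array([6.0, 8.0, 9.0])
print(gauss_seidel(A, c))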

Polynomial Interpolation
Newton First Order Interpolating Polynomials
y1  y 0
P1 x   y 0   x  x0 
x1  x0

Newton Second Order Interpolating Polynomials


P_2(x) = b_0 + b_1 (x − x_0) + b_2 (x − x_0)(x − x_1)

b_0 = y_0

b_1 = (y_1 − y_0) / (x_1 − x_0)

b_2 = [ (y_2 − y_1)/(x_2 − x_1) − (y_1 − y_0)/(x_1 − x_0) ] / (x_2 − x_0)

Newton Interpolating Polynomials of Order n


Pn  x   b0  b1  x  x0     bn  x  x0  x  x1   x  x n1 

𝑏 =𝑦
𝑦 −𝑦
𝑏 = 𝑓[𝑥 , 𝑥 ] =
𝑥 −𝑥
𝑓[𝑥 , 𝑥 ] − 𝑓[𝑥 , 𝑥 ]
𝑏 = 𝑓[𝑥 , 𝑥 , 𝑥 ] =
𝑥 −𝑥

𝑓[𝑥 , ⋯ , 𝑥 ] − 𝑓[𝑥 ,⋯,𝑥 ]
𝑏 = 𝑓[𝑥 , ⋯ , 𝑥 , 𝑥 ] =
𝑥 −𝑥

The divided-difference table (shown for four points):

x_0   f(x_0) = b_0
                    f[x_1, x_0] = b_1
x_1   f(x_1)                          f[x_2, x_1, x_0] = b_2
                    f[x_2, x_1]                               f[x_3, x_2, x_1, x_0] = b_3
x_2   f(x_2)                          f[x_3, x_2, x_1]
                    f[x_3, x_2]
x_3   f(x_3)
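A sketch that builds the table column by column and evaluates P_n(x) by nested multiplication (function names are illustrative):

def newton_coefficients(x, y):
    """Divided-difference coefficients b_0, ..., b_n (top edge of the table)."""
    n = len(x)
    coef = list(y)
    for j in range(1, n):                       # column j of the table
        for i in range(n - 1, j - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (x[i] - x[i - j])
    return coef

def newton_eval(x_nodes, coef, x):
    """Evaluate P_n(x) = b_0 + b_1(x - x_0) + ... with nested multiplication."""
    p = coef[-1]
    for k in range(len(coef) - 2, -1, -1):
        p = p * (x - x_nodes[k]) + coef[k]
    return p

xs = [1.0, 2.0, 4.0, 5.0]
ys = [1.0, 4.0, 16.0, 25.0]      # samples of x^2, so P_3(3) = 9
b = newton_coefficients(xs, ys)
print(newton_eval(xs, b, 3.0))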

Lagrange Interpolating Polynomials


The Lagrange polynomial interpolation for n points is:
P_{n−1}(x) = Σ_{i=1}^{n} L_i(x) f(x_i)

L_i(x) = Π_{j=1, j≠i}^{n} (x − x_j) / (x_i − x_j)

Multidimensional Polynomial Interpolation


xi  x 2 y i  y 2 x  x1 y i  y 2
f xi , y i   f  x1 , y1   i f  x 2 , y1  
x1  x 2 y1  y 2 x 2  x1 y1  y 2
xi  x 2 y i  y1 x  x1 y i  y1
 f  x1 , y 2   i f x2 , y 2 
x1  x 2 y 2  y1 x 2  x1 y 2  y1
Spline Interpolation
Linear Splines
f_i(x) = a_i x + b_i ,  i = 1, …, n
Interpolation conditions:
f_i(x_{i−1}) = y_{i−1}
f_i(x_i) = y_i ,  i = 1, …, n
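A sketch that locates the interval containing x and evaluates f_i(x) = a_i x + b_i there (linear_spline is an illustrative name):

import bisect

def linear_spline(xs, ys, x):
    """Piecewise-linear interpolation: f_i(x) = a_i x + b_i on [x_{i-1}, x_i]."""
    i = max(1, min(len(xs) - 1, bisect.bisect_left(xs, x)))
    a = (ys[i] - ys[i - 1]) / (xs[i] - xs[i - 1])   # slope a_i
    b = ys[i - 1] - a * xs[i - 1]                   # intercept b_i
    return a * x + b

print(linear_spline([0.0, 1.0, 3.0], [0.0, 2.0, 1.0], 2.0))   # 1.5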

Quadratic Splines
f_i(x) = a_i x^2 + b_i x + c_i ,  i = 1, …, n
Interpolation conditions:
f_i(x_{i−1}) = y_{i−1}
f_i(x_i) = y_i ,  i = 1, …, n
Continuity of the derivatives:

f_i′(x_i) = f_{i+1}′(x_i) ,  i = 1, …, n − 1

Forced condition:

a_1 = 0
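A sketch that assembles the 3n × 3n linear system from the three groups of conditions above (2n interpolation, n − 1 derivative continuity, 1 forced) and solves it with NumPy; names are illustrative:

import numpy as np

def quadratic_spline_coeffs(xs, ys):
    """Solve for (a_i, b_i, c_i) of each quadratic piece: 3n unknowns, 3n equations."""
    n = len(xs) - 1                        # number of intervals
    M = np.zeros((3 * n, 3 * n))
    d = np.zeros(3 * n)
    row = 0
    for i in range(n):                     # interpolation at both endpoints
        for x, y in ((xs[i], ys[i]), (xs[i + 1], ys[i + 1])):
            M[row, 3*i:3*i+3] = [x**2, x, 1.0]
            d[row] = y
            row += 1
    for i in range(n - 1):                 # derivative continuity at interior knots
        x = xs[i + 1]
        M[row, 3*i:3*i+3] = [2*x, 1.0, 0.0]
        M[row, 3*(i+1):3*(i+1)+3] = [-2*x, -1.0, 0.0]
        row += 1
    M[row, 0] = 1.0                        # forced condition a_1 = 0
    coeffs = np.linalg.solve(M, d)
    return coeffs.reshape(n, 3)            # row i -> (a_i, b_i, c_i)

print(quadratic_spline_coeffs([0.0, 1.0, 2.0], [0.0, 1.0, 0.0]))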

Regression
Least-squares regression - minimize the sum of the squares of the estimate residuals:

S_r = Σ_{i=1}^{n} err_i^2 = Σ_{i=1}^{n} ( y_i − f(x_i) )^2

The coefficient of determination is: r^2 = 1 − S_r / S_t , where

S_r = Σ_i ( y_i − f(x_i) )^2 ,  S_t = Σ_i ( y_i − ȳ )^2

Linear Regression
𝑓(𝑥) = 𝑎𝑥 + 𝑏
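A sketch of the normal-equation solution for a and b (linear_regression is an illustrative name; the data are made up):

import numpy as np

def linear_regression(xs, ys):
    """Least-squares line f(x) = a x + b from the normal equations."""
    xs, ys = np.asarray(xs, float), np.asarray(ys, float)
    n = len(xs)
    a = (n * (xs @ ys) - xs.sum() * ys.sum()) / (n * (xs @ xs) - xs.sum()**2)
    b = ys.mean() - a * xs.mean()
    return a, b

a, b = linear_regression([1, 2, 3, 4], [2.1, 3.9, 6.2, 7.8])
print(a, b)   # close to a = 2, b = 0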

Polynomial Regression
f_m(x) = a_m x^m + ⋯ + a_1 x + a_0
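A sketch via the Vandermonde normal equations Vᵀ V a = Vᵀ y (poly_regression and the sample data are illustrative):

import numpy as np

def poly_regression(xs, ys, m):
    """Least-squares polynomial of degree m via the normal equations."""
    V = np.vander(np.asarray(xs, float), m + 1)    # columns x^m, ..., x, 1
    a = np.linalg.solve(V.T @ V, V.T @ np.asarray(ys, float))
    return a                                        # coefficients a_m, ..., a_1, a_0

print(poly_regression([0, 1, 2, 3], [1.1, 1.9, 5.1, 10.2], 2))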

Exponential Regression
f(x) = α e^(βx)  (fitted by linearization: ln f(x) = ln α + βx)

Multiple Linear Regression


f(x_1, … , x_m) = a_0 + a_1 x_1 + ⋯ + a_m x_m
