ESO 208A: Computational Methods in Engineering
Richa Ojha
Department of Civil Engineering
IIT Kanpur
What are Computational Methods or Numerical Methods in Engineering?
Objectives of the course
• Expose you to the analysis of these algorithms so that, if needed, you can
modify an existing algorithm or develop your own algorithm for the
problem at hand.
Scope of the course
(Flowchart: Engineering problems are described by physical laws/rules/relationships together with data and initial/boundary conditions; these give mathematical models, which are solved using computational methods/algorithms, programming and computing resources to produce results.)
Example
Example
Based on the example, we need to answer several questions
• Do we need numerical methods to solve the problem?
– Shall we solve the ODE directly using a numerical technique or go for
analytical solutions and then utilise a numerical technique
• How many significant digits do we have? How many should we take? This is
important as a computer has finite storage
• What algorithm should we take (here BA)?
Number representation in Computer
Richa Ojha
Department of Civil Engineering
IIT Kanpur
2
Errors and Error
Analysis
3
Significant digits
Significant digits of a number are those that can be used with
confidence
5
Accuracy vs Precision
7
Source: Chapra and Canale
Define Error: e = a − x, where a is the true value and x the approximation
Absolute Error: |e| = |a − x|
Relative Error: e_r = (a − x)/a, usually reported as |a − x| / |a|
• For an iterative process, the true value ‘a’ is replaced with the
current iteration value and a prefix ‘approximate’ is added. This is
used for testing convergence of the iterative process.
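A minimal Python sketch of these error measures; the true value and the iterates below are illustrative assumptions, not values from the slides:

```python
# Hedged sketch: true/absolute/relative error and the "approximate relative
# error" used as a convergence test in an iterative process.
def errors(true_value, approx):
    error = true_value - approx                 # error
    abs_error = abs(error)                      # absolute error
    rel_error = abs_error / abs(true_value)     # relative error
    return error, abs_error, rel_error

def approx_rel_error(current, previous):
    # For an iterative process the unknown true value is replaced by the
    # current iterate; stop when this falls below a chosen tolerance.
    return abs(current - previous) / abs(current)

print(errors(3.141592653589793, 3.14))          # illustrative values
print(approx_rel_error(1.4142, 1.4167))
```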
Example
We will never have the true value, but would like to have an
idea about the error of the algorithm
– How to get an error bound?
– Error bound should be a tight bound
11
Sources of Error in computation?
12
Truncation error
Source: Chapra and Canale
Truncation error - Error bound
Data error
Summary
ESO 208A: Computational Methods in
Engineering
Richa Ojha
Department of Civil Engineering
IIT Kanpur
2
Recap
• Computational methods cannot be studied in isolation from the problem
• Significant digits/figures are the numbers that one can use with confidence
3
Recap
• Types of error
- Model error
- Data error
- Truncation error
- Round-off error (computers are finite)
4
Round-off error
• Round-off error originates from the fact that computers retain only
a fixed number of significant figures during a calculation
Round-off error
• Floating point
Round-off error
To store a floating point number, a computer word is divided into three parts: the sign, the exponent and the mantissa
Round-off error
What we did so far was for binary, in case of decimal the maximum decimal
power can be
14
Summary
• Integers
• Fixed Point
• Floating Point
15
ESO208A: Computational Methods in
Engineering
Richa Ojha
Department of Civil Engineering
IIT Kanpur
2
Round-off error
Mantissa
Floating point number representation
Overflow: magnitudes above the largest representable number cannot be stored; magnitudes below the smallest normalized number underflow towards 0.
Round-off error
Chopping: |Δx / x| ≤ b^(1−t)
Rounding: |Δx / x| ≤ u = 0.5 b^(1−t), where b is the base, t the number of mantissa digits and u the machine precision (unit round-off)
Real numbers in mathematics and in a computer are not the same.
Round-off errors can be reduced by avoiding the subtraction of nearly equal numbers.
Round-off error
Another example: mathematically a + 1 − a = 1, but in floating point the computed result can differ when a is large.
Round-off error
In floating point subtraction, the numbers are first aligned so that they carry the same (highest) power:
0.246 × 10³ − 0.245 × 10³ = 0.001 × 10³
Normalized mantissa = 0.100 × 10¹ (3 significant digits shown)
But we actually have only 1 significant digit. This is called loss of significance.
Forward error analysis
An error Δx in the input x propagates to an error Δf(x) in the output f(x).
Condition number of the problem:
Cp = (Relative error in f(x)) / (Relative error in x) = [Δf(x) / f(x)] / [Δx / x] ≈ x f′(x) / f(x)
Cp of order 1: well-conditioned problem
Cp ≫ 1: ill-conditioned problem
Assuming the error to be small, the 2nd and higher order terms are
neglected. (a first order approximation!)
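As a quick numerical check of this first-order estimate, the sketch below compares a perturbation-based Cp with the analytical value; the function f(x) = √x is an assumed example, not one from the slides:

```python
# Hedged sketch: condition number Cp = x f'(x) / f(x), estimated with a small
# perturbation and compared with the analytical value for f(x) = sqrt(x).
import math

def cp_numerical(f, x, dx=1e-6):
    rel_change_f = (f(x + dx) - f(x)) / f(x)
    rel_change_x = dx / x
    return rel_change_f / rel_change_x

x = 2.0
print(cp_numerical(math.sqrt, x))                    # ~0.5 -> well-conditioned
print(x * (0.5 / math.sqrt(x)) / math.sqrt(x))       # analytical Cp = 1/2
```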
Condition Number of the Problem (Cp):
Cp = [Δf(x) / f(x)] / [Δx / x]; as Δx → 0, Cp → x f′(x) / f(x).
Problem 1: (worked example)
Examples of Forward Error Analysis and Cp:
16
Backward error analysis
The algorithm applied to the input x produces fA(x); this computed result is interpreted as the exact result for a perturbed input xA, i.e. fA(x) = f(xA).
|x − xA| / |x| ≤ CA u, where u is the machine precision.
CA is a characteristic of the numerical stability of the algorithm.
Backward error analysis- Example
20
ESO208A: Computational Methods in
Engineering
Richa Ojha
Department of Civil Engineering
IIT Kanpur
2
Solution of non-linear
equations
3
Mathematical Preliminaries
4
Mathematical Preliminaries
5
Mathematical Preliminaries
6
Non-linear equation
7
Non-linear equation
8
Graphical Method
1. Bisection Method
10
Bisection Method
11
Bisection Method
• Principle: Choose an initial interval based on intermediate value
theorem and halve the interval at each iteration step to generate the
nested intervals.
• Initialize: Choose a0 and b0 such that, f(a0)f(b0) < 0. This is done by trial
and error.
• Iteration step k:
– Compute mid-point mk+1 = (ak + bk)/2 and functional value f(mk+1)
– If f(mk+1) = 0, mk+1 is the root. (It’s your lucky day!)
– If f(ak)f(mk+1) < 0: ak+1 = ak and bk+1 = mk+1; else, ak+1 = mk+1 and bk+1 =
bk
– After n iterations: size of the interval dn = (bn − an) = 2^(−n) (b0 − a0); stop if dn ≤ ε
– Estimate the root (x = α, say!) as: α = mn+1 ± 2^(−(n+1)) (b0 − a0)
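A minimal Python sketch of the bisection steps above; the test function, interval and tolerance are illustrative assumptions:

```python
# Hedged sketch of the bisection algorithm described above.
def bisection(f, a, b, eps=1e-8, max_iter=100):
    fa, fb = f(a), f(b)
    if fa * fb >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        m = (a + b) / 2.0
        fm = f(m)
        if fm == 0 or (b - a) <= eps:     # lucky hit or interval small enough
            return m
        if fa * fm < 0:                   # root lies in [a, m]
            b, fb = m, fm
        else:                             # root lies in [m, b]
            a, fa = m, fm
    return (a + b) / 2.0

# Illustrative use: root of x^3 - x - 2 in [1, 2]
print(bisection(lambda x: x**3 - x - 2, 1.0, 2.0))
```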
Bisection Method
Maximum error at the 0th step
13
Bisection Method
14
Bisection Method
15
Bracketing Methods
(Figure: the curve y = f(x) with the root bracketed between ak and bk, the ordinates f(ak), f(bk), and the mid-point mk+1.)
Regula-Falsi or Method of False Position
• Principle: In place of the mid point, the function is assumed to be linear within
the interval and the root of the linear function is chosen.
• Initialize: Choose a0 and b0 such that, f(a0)f(b0) < 0. This is done by trial and
error.
• Iteration step k:
• A straight line passing through the two points (ak, f(ak)) and (bk, f(bk)) is given by:
y − f(bk) = [(f(bk) − f(ak)) / (bk − ak)] (x − bk), whose root is mk+1 = bk − f(bk)(bk − ak) / (f(bk) − f(ak))
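A small sketch of the corresponding regula-falsi iteration, with the same sign test as bisection to retain the bracket; the test function and tolerances are assumptions:

```python
# Hedged sketch of the method of false position (regula falsi).
def false_position(f, a, b, eps=1e-10, max_iter=200):
    fa, fb = f(a), f(b)
    if fa * fb >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    m = a
    for _ in range(max_iter):
        m_old = m
        m = b - fb * (b - a) / (fb - fa)   # root of the chord through (a, fa), (b, fb)
        fm = f(m)
        if abs(fm) < eps or abs(m - m_old) < eps:
            return m
        if fa * fm < 0:
            b, fb = m, fm
        else:
            a, fa = m, fm
    return m

print(false_position(lambda x: x**3 - x - 2, 1.0, 2.0))  # illustrative
```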
Bracketing method
20
ESO 208A: Computational Methods in
Engineering
Richa Ojha
Department of Civil Engineering
IIT Kanpur
2
Non-linear equation
Distinguishing features:
• Only one starting value
• Convergence is not always guaranteed
• If the algorithm converges, the rate of convergence may be faster
Open Methods
1. Fixed Point
(Figures: fixed-point iteration on the curves y = x and y = g(x); the iterates x0, x1, x2, x3 approach the root α when the iteration converges.)
Fixed Point Method
• Stopping criteria:
6
Fixed Point Method
Convergence of the fixed point iteration: x_{k+1} = g(x_k) converges in a neighbourhood of the root if |g′(x)| < 1 there.
Open Methods
(Figure: Newton-Raphson on y = f(x); the tangent at (x0, f(x0)) gives x1, and the tangent at (x1, f(x1)) gives x2.)
Newton-Raphson Method
Iteration formula: x_{k+1} = x_k − f(x_k) / f′(x_k)
Convergence
Newton-Raphson Method
Advantages:
• Faster convergence (quadratic)
Disadvantages:
• Need to calculate the derivative
• The Newton-Raphson method may get stuck!
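A minimal sketch of the Newton-Raphson iteration x_{k+1} = x_k − f(x_k)/f′(x_k); the test function, its derivative and the starting point are illustrative assumptions:

```python
# Hedged sketch of the Newton-Raphson iteration.
def newton_raphson(f, dfdx, x0, eps=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        fx, dfx = f(x), dfdx(x)
        if dfx == 0:
            raise ZeroDivisionError("zero derivative - iteration stuck")
        x_new = x - fx / dfx
        if abs(x_new - x) <= eps * max(1.0, abs(x_new)):
            return x_new
        x = x_new
    return x

# Illustrative use: root of x^3 - x - 2 starting from x0 = 1.5
print(newton_raphson(lambda x: x**3 - x - 2, lambda x: 3*x**2 - 1, 1.5))
```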
Newton-Raphson Method
a) Inflection point
(Figure: near an inflection point of y = f(x), the Newton-Raphson iterates x0, x1, ... can oscillate or diverge.)
Newton-Raphson Method
17
Newton-Raphson Method
c) Multiple Solutions
18
Open Methods
3. Secant Method
• Principle: Use a difference approximation for the slope or derivative in
the Newton-Raphson method. This is equivalent to approximating the
tangent with a secant.
• Problem: f(x) = 0, find a root x = α such that f(α) = 0
(Figure: the secant through (x0, f(x0)) and (x1, f(x1)) intersects the x-axis at x2; the next secant through (x1, f(x1)) and (x2, f(x2)) gives x3.)
Secant Method
• Initialize: choose two points x0 and x1 and evaluate f(x0) and f(x1)
• Iteration Formula: x_{k+1} = x_k − f(x_k) (x_k − x_{k−1}) / (f(x_k) − f(x_{k−1}))
• Stopping criteria:
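A minimal sketch of the secant iteration, with the derivative replaced by a difference quotient; the test function and starting points are assumptions:

```python
# Hedged sketch of the secant iteration (derivative replaced by a difference).
def secant(f, x0, x1, eps=1e-12, max_iter=100):
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if f1 == f0:
            break                           # flat secant - cannot proceed
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) <= eps * max(1.0, abs(x2)):
            return x2
        x0, f0, x1, f1 = x1, f1, x2, f(x2)
    return x1

print(secant(lambda x: x**3 - x - 2, 1.0, 2.0))   # illustrative
```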
Secant Method
Advantages:
Richa Ojha
Department of Civil Engineering
IIT Kanpur
2
Non-linear equation
4
Hybrid Method
Combined Approach
• Bracketing method (when starting)
• Open method (when close to the solution)
5
Multiple Roots
What to do when your function has multiple roots?
6
Multiple Roots
What to do when your function has multiple roots?
Problems with multiple roots
1) Bracketing methods cannot be used when the multiplicity m is even
2) Newton-Raphson may not work well since f′(x) = 0 at the root
3) Large interval of uncertainty for the solution of f(x) = 0
b) Second modification
8
Multiple Roots
2. Division of polynomials
Polynomials
3. Deflation of Polynomials
Polynomials
Secant Müller
f2(x) = a (x − x2)² + b (x − x2) + c
2. The parabola should intersect the three points [x0, f(x0)], [x1, f(x1)], [x2, f(x2)]:
f(x0) = a (x0 − x2)² + b (x0 − x2) + c
f(x1) = a (x1 − x2)² + b (x1 − x2) + c
f(x2) = a (x2 − x2)² + b (x2 − x2) + c = c
Müller Method
Define
h0 = x1 − x0, h1 = x2 − x1
δ0 = [f(x1) − f(x0)] / (x1 − x0), δ1 = [f(x2) − f(x1)] / (x2 − x1)
then,
a = (δ1 − δ0) / (h1 + h0), b = a h1 + δ1, c = f(x2)
Müller Method
x3 = x2 + (−2c) / (b ± √(b² − 4ac))
5. The ± term yields two roots; the sign is chosen to agree with the sign of b. This results in the larger denominator and gives the root estimate
that is closest to x2.
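A hedged sketch of one Müller update per iteration, using the a, b and c defined above; the cubic test polynomial and the starting points are assumptions made for illustration:

```python
# Hedged sketch of Muller's method (complex arithmetic to allow complex roots).
import cmath

def muller(f, x0, x1, x2, eps=1e-10, max_iter=100):
    for _ in range(max_iter):
        h0, h1 = x1 - x0, x2 - x1
        d0 = (f(x1) - f(x0)) / h0
        d1 = (f(x2) - f(x1)) / h1
        a = (d1 - d0) / (h1 + h0)
        b = a * h1 + d1
        c = f(x2)
        disc = cmath.sqrt(b * b - 4 * a * c)
        denom = b + disc if abs(b + disc) > abs(b - disc) else b - disc
        x3 = x2 + (-2 * c) / denom          # sign chosen to maximize |denominator|
        if abs(x3 - x2) < eps:
            return x3
        x0, x1, x2 = x1, x2, x3
    return x2

# Illustrative use: a real root of x^3 - 13x - 12 (roots -3, -1, 4); the result
# may print with a negligible imaginary part.
print(muller(lambda x: x**3 - 13*x - 12, 4.5, 5.5, 5.0))
```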
Müller Method
• Characteristics of a polynomial
Richa Ojha
Department of Civil Engineering
IIT Kanpur
2
Non-linear equation
f_n(x) = a0 + a1 x + a2 x² + … + an xⁿ
f_n(x) = (x² − r x − s) f_{n−2}(x) + R
where
f_{n−2}(x) = b2 + b3 x + … + b_{n−1} x^(n−3) + b_n x^(n−2)
R = b1 (x − r) + b0
Bairstow's Method
8
Polynomial Methods: Single Root
For a given polynomial, the coefficients {a0, a1, …, an} are known. For a choice of r, one can determine {b0, b1, …, bn} from the n+1 equations above, which have n+1 unknowns.
Polynomial Methods: Single Root
or
→ b0’(r) = b1 →
Let us divide by a factor (x2 – rx – s). If the factor is exact, the resulting polynomial will
be of order (n – 2). Two roots of the polynomial can be estimated simultaneously as the
roots of the quadratic factor. For the complex roots, they will be the complex
conjugates.
If the factor (x2 – rx – s) is not exact, there will be two remainder terms, one function
of x and another constant.
Let us express the remainder term as b1(x − r) + b0. This form, instead of the standard b1x + b0, is chosen to devise a convenient iteration formula!
11
Polynomial Methods: Bairstow's
For a given polynomial, the coefficients {a0, a1, …, an} are known. For a choice of r and s, one can determine {b0, b1, …, bn} from the n+1 equations above, which have n+1 unknowns.
Polynomial Methods: Bairstow's
We require both remainder coefficients to vanish: b1 = 0 and b0 = 0.
Expanding b1 and b0 to first order in the corrections Δr and Δs, with the sensitivities obtained from a second synthetic division giving coefficients {ci} (say), leads to the linear system (in the Chapra and Canale convention):
c2 Δr + c3 Δs = −b1; c1 Δr + c2 Δs = −b0
For any given polynomial, we know {a0, a1, … an}. Assume r and s. Compute
{b0, b1, … bn} and {c0, c1, … cn}. Compute Δr and Δs.
16
Polynomial Methods: Bairstow's
Step 1: input a0, a1, … an and initialize r and s.
Step 2: compute b0, b1, … bn
•
Step 7: Stop if all convergence checks are satisfied. Else, set r = rnew , s =
snew and go to step 2.
17
Bairstow's Method
Step 8. The roots of the quadratic factor x² − rx − s are obtained as
x = [r ± √(r² + 4s)] / 2
Step 9. At this point three possibilities exist:
1. The quotient is a third-order polynomial or greater. The previous values
of r and s serve as initial guesses and Bairstow’s method is applied
to the quotient to evaluate new r and s values.
2. The quotient is quadratic. The remaining two roots are evaluated
directly, using the above eqn.
3. The quotient is a 1st-order polynomial. The remaining single root can
be evaluated simply as x = −s/r.
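A hedged sketch of Bairstow's method assembled from the recurrences and the correction equations above (Chapra and Canale convention), including the quadratic-root and deflation logic of Steps 8-9; the example polynomial is an assumption:

```python
# Hedged sketch of Bairstow's method for a0 + a1 x + ... + an x^n,
# repeatedly extracting quadratic factors x^2 - r x - s.
import cmath

def bairstow(a, r=0.5, s=0.5, eps=1e-12, max_iter=200):
    a = list(a)
    roots = []
    while len(a) - 1 >= 3:
        n = len(a) - 1
        for _ in range(max_iter):
            b = [0.0] * (n + 1)
            c = [0.0] * (n + 1)
            b[n] = a[n]
            b[n-1] = a[n-1] + r * b[n]
            for i in range(n - 2, -1, -1):          # synthetic division -> {b}
                b[i] = a[i] + r * b[i+1] + s * b[i+2]
            c[n] = b[n]
            c[n-1] = b[n-1] + r * c[n]
            for i in range(n - 2, 0, -1):           # second division -> {c}
                c[i] = b[i] + r * c[i+1] + s * c[i+2]
            det = c[2] * c[2] - c[3] * c[1]
            dr = (-b[1] * c[2] + b[0] * c[3]) / det
            ds = (-b[0] * c[2] + b[1] * c[1]) / det
            r, s = r + dr, s + ds
            if abs(dr) < eps and abs(ds) < eps:
                break
        disc = cmath.sqrt(r * r + 4 * s)
        roots += [(r + disc) / 2, (r - disc) / 2]   # roots of x^2 - r x - s
        a = b[2:]                                   # deflated polynomial
    if len(a) - 1 == 2:                             # remaining quadratic
        disc = cmath.sqrt(a[1] * a[1] - 4 * a[2] * a[0])
        roots += [(-a[1] + disc) / (2 * a[2]), (-a[1] - disc) / (2 * a[2])]
    elif len(a) - 1 == 1:                           # remaining linear factor
        roots.append(-a[0] / a[1])
    return roots

# Illustrative use: roots of x^3 - 4x^2 + 5x - 2 = (x-1)^2 (x-2)
print(bairstow([-2.0, 5.0, -4.0, 1.0]))
```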
Summary
• Bairstow method
Richa Ojha
Department of Civil Engineering
IIT Kanpur
2
Example Problem
• (A degree-5 polynomial; the coefficients shown on the slide are not reproduced here.)
Soln:
Step 1: Input a0, a1, …, an and initialize r and s. Here, n = 5.
Step 2: compute b0, b1, …, bn using the recursive relations. Here, n = 5.
Example Problem
Step 3: compute c0, c1, … cn using recursive relations derived
Here, n =5
6
Example Problem
Step 4: compute Δr and Δs from
Here,
Step 7: Stop if all convergence checks are satisfied. Else, set r = rnew
, s = snew and go to step 2.
7
Revision of Solution of Non-linear Equations
Hybrid Methods
Combination:
- Bracketing method at the beginning
- Open method near convergence
1. Dekker method
2. Brent method
Multiple roots
1. Bracketing method – Only for odd number of roots
2. Newton-Raphson - Linear convergence
3. Modified Newton Raphson – Quadratic convergence
a. Known multiplicity
b. Derivative function
Revision of Solution of Non-linear Equations
Roots of polynomials
1. Evaluation of polynomials
2. Division of polynomials
3. Deflation of polynomials
4. Effective degree of polynomials
Method of finding roots
1. Müller method Real and complex roots
2. Bairstow method
Revision of Solution of Non-linear Equations
Richa Ojha
Department of Civil Engineering
IIT Kanpur
2
System of linear
equations
System of linear equations
Preliminaries
2. Diagonal Matrix
5
Preliminaries
3. Identity Matrix
Band Width = a + b - 1
7
Preliminaries
7. Sparse Matrix
Most of the elements are zero
8. Dense Matrix
Most of the elements are non-zero
8
Solution of system of linear equations
• Direct Methods:
• One obtains the exact solution (ignoring the round-off errors) in a
finite number of steps.
• This group of methods is more efficient for dense and banded
matrices.
• Gauss Elimination, Gauss-Jordan Elimination, LU Decomposition,
Thomas Algorithm (for tri-diagonal banded matrix)
• Iterative Methods:
• Solution is obtained through successive approximation.
• The number of computations is a function of the desired
accuracy/precision of the solution and is not known a priori.
• More efficient for sparse matrices.
• Jacobi Iterations, Gauss-Seidel Iterations with Successive
Over/Under Relaxation
9
Solution of system of linear equations
Graphical Interpretation
10
Solution of system of linear equations
11
Solution of system of linear equations
Direct Methods
12
Solution of system of linear equations
Direct Methods
13
Solution of system of linear equations
Direct Methods
All these methods belong to the family of Gauss Elimination, one of the most ubiquitous algorithms.
E: ax + by + cz = d
If we multiply or divide both sides by the same non-zero quantity, or add or subtract the same quantity from both sides, nothing is going to change!
14
Direct Methods: Gauss Elimination
Gauss Elimination for the matrix equation Ax = b:
Indices:
• i: Row index
• j: Column index
• k: Step index
Gauss Elimination
Gauss Elimination Algorithm
Forward Elimination:
For k = 1, 2, …, (n − 1)
Define multiplication factors: m_ik = a_ik / a_kk for i = k+1, k+2, …, n
Compute: a_ij = a_ij − m_ik a_kj and b_i = b_i − m_ik b_k for
i = k+1, k+2, …, n and j = k+1, k+2, …, n
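A minimal sketch of naive Gauss elimination (forward elimination followed by back substitution), without pivoting or scaling; the 3 × 3 system is an illustrative assumption:

```python
# Hedged sketch of naive Gauss elimination for A x = b (no pivoting).
import numpy as np

def gauss_eliminate(A, b):
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    for k in range(n - 1):                 # forward elimination, step k
        for i in range(k + 1, n):
            m_ik = A[i, k] / A[k, k]       # multiplication factor
            A[i, k:] -= m_ik * A[k, k:]
            b[i] -= m_ik * b[k]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):         # back substitution
        x[i] = (b[i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x

A = np.array([[3.0, -0.1, -0.2],
              [0.1,  7.0, -0.3],
              [0.3, -0.2, 10.0]])
b = np.array([7.85, -19.3, 71.4])
print(gauss_eliminate(A, b))               # approx [3, -2.5, 7]
```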
Richa Ojha
Department of Civil Engineering
IIT Kanpur
2
Recap
Today’s lecture
• Situations under which Gauss Elimination method will not work
• Gauss Jordan Method
• How to find algorithm complexity?
• LU decomposition method
Gauss Elimination Algorithm
Forward Elimination:
For k = 1, 2, …, (n − 1)
Define multiplication factors: m_ik = a_ik / a_kk for i = k+1, k+2, …, n
Compute: a_ij = a_ij − m_ik a_kj and b_i = b_i − m_ik b_k for
i = k+1, k+2, …, n and j = k+1, k+2, …, n
Difficult Cases
Difficult Cases
b) Ill-conditioned
6
Gauss Elimination
Difficult Cases
c) Round-off Error
What if you are using a computer that has four significant digits
7
Gauss Elimination
Difficult Cases
c) Round-off Error
8
Gauss Elimination
a) Ill-Conditioned
9
Gauss Elimination
a) Ill-Conditioned
Can we use determinant as a measure of ill-conditioning?
10
Gauss Elimination
11
Gauss Elimination
Example:
12
Gauss Elimination
Example:
We did row
pivoting
13
Gauss Elimination
Example:
We did total
pivoting
15
Gauss Elimination
16
Gauss Elimination
Scaling
17
Gauss Elimination
Scaling
18
Gauss Elimination
Scaling
19
Gauss Elimination
20
Direct Methods: Gauss Jordan
Example
Summary
Richa Ojha
Department of Civil Engineering
IIT Kanpur
2
Comparison
Computing Time
• Speed of computer
• Programming language
• Input Data
• Algorithm
Comparison
Operations
• Sum =0 (Assignment operation)
• Within the for loop (n assignments, n summations)
Operations
• Sum =0, product =0 (Assignment operation=2)
• Within the for loop (2n assignments, n summations, n
products= 2n)
Two things:
1) Worst Case Scenario
Find a number x0 in the vector X
Two things:
2) Asymptotic Analysis
• Any algorithm is sufficiently efficient for small input.
• When comparing algorithms for computational time one is
interested in very large inputs
• As a proxy for “very large”, asymptotic analysis considers the size of
the input data tending to infinity
• “Big O” gives an upper bound on the asymptotic growth of the
algorithm
• If the complexity of the function/algorithm is O(n²), it means that in
the worst case O(n²) steps are needed to evaluate the function
when n is very large
Comparison: Algorithm Complexity
Two things:
2) Asymptotic Analysis
• If the computation time is the sum of multiple terms, keep the
term with the largest growth rate and drop the others.
• So, if the number of basic steps is n² + n + c,
• then as n → ∞, n² is what we are worried about.
Comparison: Algorithm Complexity
Richa Ojha
Department of Civil Engineering
IIT Kanpur
2
LU decomposition
A=
LU decomposition Theorem
If A is a square matrix of size n × n and all its leading principal minors are non-zero, then there exists
a lower triangular matrix (L) and an upper triangular matrix (U) such
that A = LU.
Further, if the diagonal elements of either L or U are unity, i.e. lii = 1 or uii = 1
for i = 1, 2, …, n, then both L and U are unique.
LU decomposition
How to get elements of both L and U
1. Gauss Elimination gives both L and U (lii = 1)
2. Doolittle Method (lii = 1)
3. Crout Method (uii = 1)
4. Thomas Algorithm - tri-diagonal matrix
5. Cholesky Algorithm - positive definite matrix
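A minimal sketch of the Doolittle variant (lii = 1) together with the forward/back substitution used to solve A x = b; the example matrix and right-hand side are assumptions:

```python
# Hedged sketch: Doolittle LU decomposition (unit diagonal in L) and solve.
import numpy as np

def doolittle_lu(A):
    n = A.shape[0]
    L = np.eye(n)
    U = np.zeros_like(A, dtype=float)
    for i in range(n):
        for j in range(i, n):              # row i of U
            U[i, j] = A[i, j] - L[i, :i] @ U[:i, j]
        for j in range(i + 1, n):          # column i of L
            L[j, i] = (A[j, i] - L[j, :i] @ U[:i, i]) / U[i, i]
    return L, U

def lu_solve(L, U, b):
    n = len(b)
    y = np.zeros(n)
    for i in range(n):                     # forward substitution: L y = b
        y[i] = b[i] - L[i, :i] @ y[:i]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):         # back substitution: U x = y
        x[i] = (y[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x

A = np.array([[4.0, 3.0, 2.0], [2.0, 5.0, 1.0], [1.0, 2.0, 6.0]])
b = np.array([9.0, 8.0, 9.0])
L, U = doolittle_lu(A)
print(np.allclose(L @ U, A), lu_solve(L, U, b))   # True, ~[1, 1, 1]
```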
LU decomposition
1. Gauss Elimination Method for L and U
LU decomposition
Gauss Elimination Method for L and U
LU decomposition
Gauss Elimination Method for L and U
LU decomposition
Comparison of GE and LU
LU decomposition
Comparison of GE and LU
LU decomposition
2. Crout’s Method
LU decomposition
2. Crout’s Method
LU decomposition
3. Doolittle Method
Summary
• What is LU decomposition
• Crout’s method
• Doolittle method
ESO 208A: Computational Methods in
Engineering
Richa Ojha
Department of Civil Engineering
IIT Kanpur
2
Recap
Today’s lecture
• Thomas Algorithm
• Cholesky Decomposition
• Forward Error Analysis
• Indirect Methods: Gauss-Seidel and Jacobi iterative methods
LU decomposition
Example of a 3 × 3 matrix:
LU decomposition
This implies,
• Thomas Algorithm
• Cholesky Decomposition
ESO208A: Computational Methods in
Engineering
Richa Ojha
Department of Civil Engineering
IIT Kanpur
2
Recap
• Direct Methods:
• Gauss Elimination,
• Gauss-Jordon Elimination,
• LU-Decomposition,
• Thomas Algorithm (for tri-diagonal banded matrix)
• Cholesky Decomposition
3
LU decomposition
Error Analysis
LU decomposition
For Error Analysis, we need to first understand vector and matrix
norms
Vector Norm
A vector norm is a measure (in some sense) of the size or “length” of a vector
• Properties of Vector Norm:
•
•
•
• Lp-Norm of a vector x: ||x||p = (Σi |xi|^p)^(1/p)
• Example Norms:
• p = 1: sum of the absolute values
• p = 2: Euclidean norm
• p → ∞: maximum absolute value,
LU decomposition
Matrix Norm: A matrix norm is a measure of the size of a matrix
• Properties of Matrix norm:
•
•
•
•
• for consistent matrix and vector norms
• Lp Norm of a matrix A:
LU decomposition
Matrix Norm:
• Column-sum norm: ||A||1 = max over columns j of Σi |aij|
• Row-sum norm: ||A||∞ = max over rows i of Σj |aij|
• Frobenius norm: ||A||F = (Σi Σj aij²)^(1/2)
Matrix Norm
• Spectral Radius: largest absolute eigenvalue of matrix A denoted by ρ(A)
Richa Ojha
Department of Civil Engineering
IIT Kanpur
2
Recap
• Direct Methods:
• Gauss Elimination,
• Gauss-Jordon Elimination,
• LU-Decomposition,
• Thomas Algorithm (for tri-diagonal banded matrix)
• Cholesky Decomposition
3
Indirect Methods
All these methods are versions of fixed-point iteration for a linear system of equations
Fixed-Point Method
Fixed-Point Method
Jacobi and Gauss-Seidel
• Jacobi Iteration
• Gauss-Seidel
• Example
Comments
• Useful when dealing with large sparse systems
• To save computation time, divide each equation by its diagonal element.
It saves computation, but can introduce round-off error.
• Convergence is not guaranteed [as with fixed-point methods]. If you get
convergence, it is linear convergence.
Jacobi and Gauss-Seidel
Comments
• Convergence Criteria
Jacobi and Gauss-Seidel
• Convergence Criteria
• The magnitude of the diagonal element should be greater than the sum
of the absolute values of all off-diagonal elements in that row. Such systems are
called diagonally dominant systems.
• The criterion for convergence is sufficient but not necessary, i.e. the
method may converge even if the criterion is not met.
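A minimal sketch of the Jacobi and Gauss-Seidel iterations for a diagonally dominant system; the matrix, initial guess and tolerance are illustrative assumptions:

```python
# Hedged sketch of Jacobi and Gauss-Seidel iterations for A x = b.
import numpy as np

def jacobi(A, b, x0, tol=1e-10, max_iter=500):
    D = np.diag(A)
    R = A - np.diagflat(D)
    x = x0.copy()
    for _ in range(max_iter):
        x_new = (b - R @ x) / D            # all components use the old iterate
        if np.max(np.abs(x_new - x)) < tol:
            return x_new
        x = x_new
    return x

def gauss_seidel(A, b, x0, tol=1e-10, max_iter=500):
    n = len(b)
    x = x0.copy()
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):                 # new values used as soon as available
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
        if np.max(np.abs(x - x_old)) < tol:
            return x
    return x

A = np.array([[10.0, -1.0, 2.0], [-1.0, 11.0, -1.0], [2.0, -1.0, 10.0]])
b = np.array([6.0, 25.0, -11.0])
print(jacobi(A, b, np.zeros(3)))
print(gauss_seidel(A, b, np.zeros(3)))
```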
Relaxation Techniques
Relaxation Techniques
Relaxation Techniques
• Example
Relaxation Techniques
• Example
• Gauss-Seidel
• Jacobi Iteration
• Successive Over Relaxation Technique
ESO 208A: Computational Methods in
Engineering
Richa Ojha
Department of Civil Engineering
IIT Kanpur
2
GAPS
Example
Eigen Values and Eigen Vectors
Example
Unit vectors
Characteristics of Eigen Vectors
The vectors are linearly independent iff the only solution of c1 v1 + c2 v2 + … + cn vn = 0 is c1 = c2 = … = cn = 0; else they are linearly dependent.
Characteristics of Eigen Vectors
4. Eigen Values
If A is an n × n matrix of real numbers with λ1, λ2, …, λn as eigenvalues, then:
Product of eigenvalues = det(A)
Sum of eigenvalues = trace(A)
Summary
Richa Ojha
Department of Civil Engineering
IIT Kanpur
2
Estimation of Eigen Values
Direct Power Method: Used to find the largest [in terms of abs value] eigen value and
corresponding eigen vector
2. Multiply: y(k+1) = A x(k) and scale x(k+1) = y(k+1) / s(k+1), where s(k+1) is the component of y(k+1) of largest magnitude.
• Some books suggest that you pick one fixed component of the vector and keep making it 1 after
every iteration. It is also a correct way, but may result in division by 0.
• If the algorithm is converging, the element corresponding to the maximum value will
not change
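A minimal sketch of the direct power method as described above; the 2 × 2 test matrix (eigenvalues 5 and 2) and the initial guess are assumptions:

```python
# Hedged sketch of the direct power method: multiply by A and scale by the
# largest-magnitude component; the scaling factor tends to the dominant eigenvalue.
import numpy as np

def power_method(A, x0, tol=1e-10, max_iter=1000):
    x = x0 / np.max(np.abs(x0))
    lam = 0.0
    for _ in range(max_iter):
        y = A @ x
        lam_new = y[np.argmax(np.abs(y))]   # scaling factor (signed)
        x = y / lam_new
        if abs(lam_new - lam) < tol:
            return lam_new, x
        lam = lam_new
    return lam, x

A = np.array([[4.0, 1.0], [2.0, 3.0]])       # eigenvalues 5 and 2 (illustrative)
lam, v = power_method(A, np.array([1.0, 1.0]))
print(lam, v)                                 # ~5 and eigenvector ~[1, 1]
```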
Why Direct Power Method Works?
Why Direct Power Method Works?
When the algorithm converges, the scaling coefficient becomes the eigenvalue. Consequently, scaling a
particular component of the vector Y at each iteration essentially factors out. So, since this equation (1)
attains a finite value as k tends to infinity, the scaling factor S approaches the largest eigenvalue.
Why Direct Power Method Works?
Remark:
• Method will work only if the largest eigen value is distinct (non repeated)
• The eigen vectors should be independent. If all the eigen values are distinct, the eigen
vectors will be independent. Otherwise, it is still possible that vectors are independent,
but not guaranteed.
• The initial guess should contain a component of the eigenvector corresponding to the largest eigenvalue, i.e. it must not be orthogonal to it
• The convergence rate is proportional to |λ2/λ1|, where λ1 is the largest eigenvalue and λ2 is the second largest
Inverse Power Method: Used to find the smallest [in terms of abs value] eigen value
and corresponding eigen vector
Power Method
Power Method
Shifted Power Method can be used to find the extreme eigen values
when matrix inversion is to be avoided
Estimation of Intermediate Eigen Values
• Power Method
Richa Ojha
Department of Civil Engineering
IIT Kanpur
2
Estimation of Eigen Values
Remarks:
1) The eigenvalues of a diagonal matrix are its diagonal elements
• Q is an orthogonal matrix, i.e. a square matrix whose columns are orthonormal vectors
To be done using the Gram-Schmidt process
QR Method
Gram Schmidt Process
• Like the power method, the QR method works best for non-defective matrices, i.e. matrices
with a complete basis of eigenvectors
• We have seen the basic QR method; many modifications are available to improve its
convergence
Basic QR iteration: A0 = A; Ak = Qk Rk; Ak+1 = Rk Qk
Shifted QR iteration: A0 = A; Ak − sI = Qk Rk; Ak+1 = Rk Qk + sI
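A minimal sketch of the basic (unshifted) QR iteration with the QR factorization computed by classical Gram-Schmidt; the symmetric test matrix is an assumption:

```python
# Hedged sketch: QR by classical Gram-Schmidt, then A_{k+1} = R_k Q_k.
import numpy as np

def gram_schmidt_qr(A):
    n = A.shape[1]
    Q = np.zeros_like(A, dtype=float)
    R = np.zeros((n, n))
    for j in range(n):
        v = A[:, j].astype(float).copy()
        for i in range(j):
            R[i, j] = Q[:, i] @ A[:, j]
            v -= R[i, j] * Q[:, i]
        R[j, j] = np.linalg.norm(v)
        Q[:, j] = v / R[j, j]
    return Q, R

def qr_eigenvalues(A, iters=200):
    Ak = A.astype(float).copy()
    for _ in range(iters):
        Q, R = gram_schmidt_qr(Ak)
        Ak = R @ Q                      # similar to A, tends toward triangular form
    return np.sort(np.diag(Ak))

A = np.array([[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]])  # symmetric
print(qr_eigenvalues(A))               # compare with the reference below
print(np.sort(np.linalg.eigvalsh(A)))
```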
QR Method
GAPS
Ax = b
• Direct methods
o Gauss elimination [Partial pivoting & scaling]
o Gauss Jordan
o LU decomposition
o Gauss Elimination
o Compact methods [Doolittle & Crout]
o Tridiagonal [Thomas Algorithm]
o Symmetric Positive Definite [Cholesky]
• Indirect methods [Sparse Matrices]
o Jacobi iterations
o Gauss-Seidel
o SOR
Summary on System of Linear Equations and Eigenvalue
Av = λv
• Characteristic equations
• Power methods
o Direct power method
o Inverse power method
o Shifted power method
• QR method
o Gram-Schmidt process
o Improvement in QR by pre-processing and shifting
ESO 208A: Computational Methods in
Engineering
Richa Ojha
Department of Civil Engineering
IIT Kanpur
2
Approximation of
Functions [Curve fitting]
Approximation of Functions
Regression
Approximation of Functions
• Easy to determine
• Uniform approximation (Weierstrass Approximation Theorem)
For every continuous real-valued function f(x) on [a, b] and every ε > 0, there exists a
polynomial p(x) such that |f(x) − p(x)| < ε for all x in [a, b].
Which is better?
The problem defines this
Regression [Polynomial]
Linear function
• Linear Function
ESO208A: Computational Methods in
Engineering
Richa Ojha
Department of Civil Engineering
IIT Kanpur
2
Regression [Polynomial]
Polynomial +…+
For any function
…+ (eq. 1)
For polynomial,
Richa Ojha
Department of Civil Engineering
IIT Kanpur
2
Recap
• Legendre Polynomials
Legendre Polynomials
P0(x) = 1, P1(x) = x, P_{n+1}(x) = [(2n+1)/(n+1)] x Pn(x) − [n/(n+1)] P_{n−1}(x)
⟨Pn, Pj⟩ = 0 if n ≠ j; ⟨Pn, Pn⟩ = 2/(2n+1)
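A small sketch that evaluates Legendre polynomials with the recurrence above and checks the orthogonality relation numerically; the quadrature grid is an assumption:

```python
# Hedged sketch: Legendre polynomials from the three-term recurrence and a
# numerical check of the orthogonality relation on [-1, 1].
import numpy as np

def legendre(n, x):
    p_prev, p = np.ones_like(x), x          # P0 = 1, P1 = x
    if n == 0:
        return p_prev
    for k in range(1, n):
        p_prev, p = p, ((2*k + 1) * x * p - k * p_prev) / (k + 1)
    return p

def integrate(y, x):
    # simple trapezoidal integration on a uniform grid
    return np.sum((y[1:] + y[:-1]) / 2 * np.diff(x))

x = np.linspace(-1.0, 1.0, 20001)
P2, P3 = legendre(2, x), legendre(3, x)
print(integrate(P2 * P3, x))                  # ~0        (n != j)
print(integrate(P3 * P3, x), 2.0 / (2*3 + 1)) # ~2/7      (n == j)
```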
Legendre Polynomials
Orthogonal Polynomials
Regression [Basis Function]
Least Square Regression with Orthogonal Basis Function
• Discrete Case
Regression [Basis Function]
Least Square Regression with Orthogonal Basis Function
• Discrete Case-Example
Regression [Basis Function]
Least Square Regression with Orthogonal Basis Function
• Discrete Case-Example
Regression [Basis Function]
Least Square Regression with Orthogonal Basis Function
• Continuous Case
Regression [Basis Function]
Least Square Regression with Orthogonal Basis Function
1. Least Squares
If you have outliers in the data, the least-squares error becomes very large, so outliers strongly influence the fit
5. Multiple Regression
6. Non-linear Regression
Regression
Remarks:
6. Non-linear Regression
7. Basis Function
8. Interpolation vs Regression
Depends on the problem and domain
Summary
Richa Ojha
Department of Civil Engineering
IIT Kanpur
2
Approximation of
Functions [Curve fitting]
Approximation of Functions
y* = 0.5234;
1.07 % error
Direct Fit-Example
Lagrange’s Polynomial
Lagrange’s Polynomial
Lagrange’s Polynomial
Lagrange’s Polynomial-Example
Newton’s Divided Difference Formula
Example
Newton’s Divided Difference Formula
Example
• Direct Fit
• Lagrange Polynomials
• Newton’s Divided Difference Formula
ESO208A: Computational Methods in
Engineering
Richa Ojha
Department of Civil Engineering
IIT Kanpur
2
Recap
• Direct Fit
• Lagrange Polynomials
• Newton’s Divided Difference Formula
Errors in Interpolation
Error in Interpolation
Errors in Interpolation
• The error remains the same for an nth-order polynomial fitted using any form.
• This error can be easily estimated if Newton’s divided difference formula is used for interpolation
Errors in Interpolation
Interpolation Errors
– Let us compare Taylor series approximation with Newton’s Divided
Difference Polynomial
Errors in Interpolation
Comparison of remainder term in Taylor Series and Newton’s DD
Formulae
Errors in Interpolation
y* = 0.5234;
1.07 % error
Errors in Interpolation
b) The errors are larger for the x’s that are near the edges
Errors in Interpolation
Properties of the error
b) The errors are larger for the x’s that are near the edges
Errors in Interpolation
• Tchebycheff Polynomials
Errors in Interpolation
• Interpolation Error
• Properties of Error
• Methods to reduce error
ESO208A: Computational Methods in
Engineering
Richa Ojha
Department of Civil Engineering
IIT Kanpur
2
Errors in Interpolation
(Figure: spline segments qi+1(x) and qi+2(x) joining the data points (xi, fi), (xi+1, fi+1), (xi+2, fi+2).)
Natural Splines-Example
Splines
The boundary conditions can be imposed in different ways, giving the different kinds of splines.
Interpolation
Remarks
Interpolation
Remarks
Summary
Richa Ojha
Department of Civil Engineering
IIT Kanpur
2
Numerical Integration
Numerical Integration
Integration
Numerical Integration
(Figure: the integrand sampled at the nodes, with ordinates f0, f1, …, fi−1, fi, fi+1/2, fi+1, fi+2, …, fn.)
n = 2m, m integer
Numerical Integration: Simpson’s Rule
Polynomial p(x) is a piecewise cubic function:
f(x) ≈ p(x)
= [(x − x_{i+1})(x − x_{i+2})(x − x_{i+3})] / [(x_i − x_{i+1})(x_i − x_{i+2})(x_i − x_{i+3})] f_i
+ [(x − x_i)(x − x_{i+2})(x − x_{i+3})] / [(x_{i+1} − x_i)(x_{i+1} − x_{i+2})(x_{i+1} − x_{i+3})] f_{i+1}
+ [(x − x_i)(x − x_{i+1})(x − x_{i+3})] / [(x_{i+2} − x_i)(x_{i+2} − x_{i+1})(x_{i+2} − x_{i+3})] f_{i+2}
+ [(x − x_i)(x − x_{i+1})(x − x_{i+2})] / [(x_{i+3} − x_i)(x_{i+3} − x_{i+1})(x_{i+3} − x_{i+2})] f_{i+3}
Assume x_{i+1} − x_i = x_{i+2} − x_{i+1} = x_{i+3} − x_{i+2} = h and substitute z = (x − x_i):
∫ f(x) dx ≈ ∫ p(x) dx over [x_i, x_{i+3}]
= −(f_i / 6h³) ∫₀^{3h} (z − 3h)(z − 2h)(z − h) dz + (f_{i+1} / 2h³) ∫₀^{3h} (z − 3h)(z − 2h) z dz
− (f_{i+2} / 2h³) ∫₀^{3h} (z − 3h)(z − h) z dz + (f_{i+3} / 6h³) ∫₀^{3h} (z − 2h)(z − h) z dz
Numerical Integration: Simpson's Rule
Carrying out the integrations:
= −(f_i / 6h³) [ (3h)⁴/4 − 6h(3h)³/3 + 11h²(3h)²/2 − 6h³(3h) ]
+ (f_{i+1} / 2h³) [ (3h)⁴/4 − 5h(3h)³/3 + 6h²(3h)²/2 ]
− (f_{i+2} / 2h³) [ (3h)⁴/4 − 4h(3h)³/3 + 3h²(3h)²/2 ]
+ (f_{i+3} / 6h³) [ (3h)⁴/4 − 3h(3h)³/3 + 2h²(3h)²/2 ]
= (3h/8) (f_i + 3 f_{i+1} + 3 f_{i+2} + f_{i+3})
This is known as Simpson's 3/8th Rule
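A minimal sketch of the composite Simpson 1/3 rule (n = 2m intervals) and 3/8 rule (n = 3m intervals); the test integral ∫₀^π sin x dx = 2 is an illustrative assumption:

```python
# Hedged sketch of composite Simpson's 1/3 and 3/8 rules.
import numpy as np

def simpson_13(f, a, b, n):
    assert n % 2 == 0, "Simpson's 1/3 rule needs an even number of intervals"
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    return h / 3 * (y[0] + 4 * y[1:-1:2].sum() + 2 * y[2:-2:2].sum() + y[-1])

def simpson_38(f, a, b, n):
    assert n % 3 == 0, "Simpson's 3/8 rule needs n divisible by 3"
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    inner = y[1:-1]
    w = np.where(np.arange(1, n) % 3 == 0, 2.0, 3.0)   # weight 2 at panel joints
    return 3 * h / 8 * (y[0] + (w * inner).sum() + y[-1])

print(simpson_13(np.sin, 0.0, np.pi, 10),
      simpson_38(np.sin, 0.0, np.pi, 9), 2.0)
```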
Numerical Integration: Simpson’s Rule
(Figure: the integrand sampled at nodes with ordinates f0, f1, …, fn, grouped in sets of three intervals xi, xi+1, xi+2, xi+3, … for the composite 3/8 rule.)
n = 3m, m integer
Summary
• Rectangular Rule
• Trapezoidal Rule
• Simpson’s Rule
ESO208A: Computational Methods in
Engineering
Richa Ojha
Department of Civil Engineering
IIT Kanpur
2
Recap
• Rectangular Rule
• Trapezoidal Rule
• Simpson’s Rule
Numerical Integration
0
Numerical Integration: Rectangular Rule
0
Numerical Integration: Rectangular Rule
We will derive Global Truncation Error later. First, let us derive Local
Truncation Errors for Trapezoidal and Simpson’s 1/3rd Rule!
Local Truncation Error: Trapezoidal Rule
Local Truncation Error: Trapezoidal Rule
(1)
We earlier showed that,
(2)
Putting from eq(1) in eq(2) and combining terms of the same order of h,
Weighted sum with weights of 2/3 to expression from rectangular rule and 1/3 to
trapezoidal rule!
Therefore, the Simpson’s 1/3rd Rule is O(h5) accurate in a single interval or the Local
Truncation Error of Simpson’s 1/3rd Rule is O(h5)
Global Truncation Error: Trapezoidal Rule
• Error Analysis
• Local Truncation Error
• Global Truncation Error
ESO208A: Computational Methods in
Engineering
Richa Ojha
Department of Civil Engineering
IIT Kanpur
2
Numerical Integration
• One-point integration:
• Two-points integration:
Gauss-Legendre Quadrature: Example
• Three-points integration:
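A small sketch of n-point Gauss-Legendre quadrature on a general interval, using numpy's standard nodes and weights on [−1, 1]; the integrand below is an assumption:

```python
# Hedged sketch: n-point Gauss-Legendre quadrature with a linear change of variable.
import numpy as np

def gauss_legendre(f, a, b, n):
    t, w = np.polynomial.legendre.leggauss(n)     # nodes/weights on [-1, 1]
    x = 0.5 * (b - a) * t + 0.5 * (b + a)         # map to [a, b]
    return 0.5 * (b - a) * np.sum(w * f(x))

f = lambda x: np.exp(-x**2)
for n in (1, 2, 3, 5):
    print(n, gauss_legendre(f, 0.0, 1.0, n))      # converges to ~0.746824
```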
Gauss Legendre Quadrature
a) Case 1 ab>0
Improper Integral
b) Case 2 ab<0
The function may be singular at one limit, so use open formulas (Gauss-Legendre, open Newton-Cotes formulas with multiple application)
Summary
• Romberg Integration
• Method of Undetermined Coefficients
• Improper Integrals
ESO 208A: Computational Methods in
Engineering
Richa Ojha
Department of Civil Engineering
IIT Kanpur
2
Numerical
Differentiation
Numerical Differentiation
Numerical Differentiation
Types of Methods
• Graphical
• Taylor Series
• Lagrange Polynomial
• Method of undetermined coefficients
Numerical Differentiation-Taylor Series
Numerical Differentiation-Lagrange Polynomial
Forward Difference: f′(x_i) ≈ (f_{i+1} − f_i) / h
Backward Difference: f′(x_i) ≈ (f_i − f_{i−1}) / h
Numerical Differentiation-Lagrange Polynomial
Approximate the function between three points x_{i−1}, x_i, x_{i+1}:
f(x) = [(x − x_i)(x − x_{i+1})] / [(x_{i−1} − x_i)(x_{i−1} − x_{i+1})] f_{i−1} + [(x − x_{i−1})(x − x_{i+1})] / [(x_i − x_{i−1})(x_i − x_{i+1})] f_i + [(x − x_{i−1})(x − x_i)] / [(x_{i+1} − x_{i−1})(x_{i+1} − x_i)] f_{i+1}
For equal spacing h:
f(x) = (f_{i−1} / 2h²)(x − x_i)(x − x_{i+1}) − (f_i / h²)(x − x_{i−1})(x − x_{i+1}) + (f_{i+1} / 2h²)(x − x_{i−1})(x − x_i)
d²f/dx² ≈ (f_{i−1} − 2 f_i + f_{i+1}) / h²
Numerical Differentiation: Finite Difference
This is
a=nb+nf+1
Forward difference
• Taylor Series
• Lagrange Interpolation Formula
• Method of Undetermined Coefficients
ESO208A: Computational Methods in
Engineering
Richa Ojha
Department of Civil Engineering
IIT Kanpur
2
Recap
f_{i+1} = f_i + h f_i' + (h²/2!) f_i'' + (h³/3!) f_i''' + (h⁴/4!) f_i'''' + (h⁵/5!) f_i^(5) + (h⁶/6!) f_i^(6) + …
(f_{i+1} − f_i)/h = f_i' + (h/2!) f_i'' + (h²/3!) f_i''' + (h³/4!) f_i'''' + (h⁴/5!) f_i^(5) + …
f_i' = (f_{i+1} − f_i)/h − (h/2!) f_i'' − (h²/3!) f_i''' − (h³/4!) f_i'''' − (h⁴/5!) f_i^(5) − …
Truncation error for this forward difference scheme for the 1st
Derivative is: O(h)
f_{i−1} = f_i − h f_i' + (h²/2!) f_i'' − (h³/3!) f_i''' + (h⁴/4!) f_i'''' − (h⁵/5!) f_i^(5) + (h⁶/6!) f_i^(6) − …
f_i' = (f_i − f_{i−1})/h + (h/2!) f_i'' − (h²/3!) f_i''' + (h³/4!) f_i'''' − (h⁴/5!) f_i^(5) + …
Truncation error for this backward difference scheme for the 1st
Derivative is: O(h)
Numerical Differentiation: Truncation Error Analysis
Truncation error for this central difference scheme for the 1st
Derivative is: O(h2)
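A small sketch comparing the forward (O(h)) and central (O(h²)) difference errors as h is halved; the test function f = sin is an assumption:

```python
# Hedged sketch: forward vs central differences for f'(x). Halving h roughly
# halves the forward-difference error and quarters the central-difference error.
import numpy as np

def forward_diff(f, x, h):
    return (f(x + h) - f(x)) / h

def central_diff(f, x, h):
    return (f(x + h) - f(x - h)) / (2 * h)

f, dfdx, x = np.sin, np.cos, 1.0
for h in (0.1, 0.05, 0.025):
    print(h,
          abs(forward_diff(f, x, h) - dfdx(x)),
          abs(central_diff(f, x, h) - dfdx(x)))
```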
Numerical Differentiation: Truncation Error Analysis
Truncation error for this central difference scheme for the 2nd Derivative is:
O(h2)
Numerical Differentiation: Truncation Error Analysis
Truncation error for this central difference scheme for the 1st derivative
is O(h) for a non-uniform grid and O(h²) for a uniform grid
Numerical Differentiation: Finite Difference
1. Unequal Intervals
Integration
• Apply trapezoidal rule individually
• Apply Lagrange interpolation formula to get higher order estimates
Differentiation
2. Multiple Variables
Numerical Differentiation-Remarks
2. Multiple Variables
Numerical Differentiation-Remarks
3. Data Uncertainty
• Differentiation will be more sensitive to errors
• Higher order derivatives will be more sensitive
Numerical Differentiation
Improve Accuracy
• Richardson Extrapolation
• Reduce Step Size (trade off between truncation error and round-
off error)
• Higher order Polynomials
Summary
• Error Analysis
• General Remarks
ESO208A: Computational Methods in
Engineering
Richa Ojha
Department of Civil Engineering
IIT Kanpur
2
Ordinary Differential
Equation
Ordinary Differential Equation
Richa Ojha
Department of Civil Engineering
IIT Kanpur
2
Recap
RK 5th order
Estimation of Errors
Ordinary Differential Equation
Adams-Bashforth
Ordinary Differential Equation
Adams-Moulton
Ordinary Differential Equation
Ordinary Differential Equation
Ordinary Differential Equation
Explicit & Implicit Euler Scheme for alpha = 1
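A minimal sketch of the explicit and implicit Euler schemes, assuming the usual linear test equation dy/dt = −αy with α = 1; the step size, interval and initial condition are also assumptions:

```python
# Hedged sketch: explicit and implicit Euler for dy/dt = -alpha*y, y(0) = 1,
# compared with the exact solution exp(-alpha*t).
import numpy as np

def explicit_euler(alpha, y0, h, n):
    y = [y0]
    for _ in range(n):
        y.append(y[-1] + h * (-alpha * y[-1]))     # y_{k+1} = y_k + h f(y_k)
    return np.array(y)

def implicit_euler(alpha, y0, h, n):
    y = [y0]
    for _ in range(n):
        y.append(y[-1] / (1 + alpha * h))          # solve y_{k+1} = y_k + h f(y_{k+1})
    return np.array(y)

alpha, h, n = 1.0, 0.1, 10
t_end = n * h
print(explicit_euler(alpha, 1.0, h, n)[-1],
      implicit_euler(alpha, 1.0, h, n)[-1],
      np.exp(-alpha * t_end))
```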
System of Simultaneous Equations (ODE)
Summary
Richa Ojha
Department of Civil Engineering
IIT Kanpur
2
Recap
2.785
Summary
• Consistency
• Stability
• Convergence
ESO208A: Computational Methods in
Engineering
Richa Ojha
Department of Civil Engineering
IIT Kanpur
2
Recap
• Consistency
• Stability
• Convergence
Stiff Problems
Stiff Problems
• Stiff Problems
• Boundary Value Problems
ESO 208A: Computational Methods in
Engineering
Richa Ojha
Department of Civil Engineering
IIT Kanpur
2
Partial Differential
Equation
Partial Differential Equation (PDE)
Richa Ojha
Department of Civil Engineering
IIT Kanpur
2
Recap
Boundary Conditions
• Dirichlet condition: the value of φ is specified on the boundary
• Neumann condition: the normal derivative ∂φ/∂n is specified on the boundary
• Robin condition: a linear combination of φ and ∂φ/∂n is specified on the boundary
Partial Differential Equation (PDE)
Elliptic PDE
Laplace Equation: 1st Type BC
∂²φ/∂x² + ∂²φ/∂y² = 0,  x ∈ (0, Lx) and y ∈ (0, Ly)
Boundary conditions: φ(0, y) = d, φ(Lx, y) = b, φ(x, 0) = c, φ(x, Ly) = a
(Figure: a rectangular domain ABCD discretized with spacings Δx, Δy; the 16 interior nodes are numbered 1-16 row-wise, with φ = a on the top edge AB, φ = b on the right edge, φ = c on the bottom edge DC, and φ = d on the left edge.)
Laplace Equation
Finite difference approximations at interior node (i, j):
∂²φ/∂x² |i,j = (φi+1,j − 2φi,j + φi−1,j) / Δx²
∂²φ/∂y² |i,j = (φi,j+1 − 2φi,j + φi,j−1) / Δy²
Substituting into the Laplace equation gives the five-point stencil:
(1/Δy²) φi,j−1 + (1/Δx²) φi−1,j + (−2/Δx² − 2/Δy²) φi,j + (1/Δx²) φi+1,j + (1/Δy²) φi,j+1 = 0
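A minimal sketch of solving this five-point stencil by Jacobi iteration on a uniform grid with Δx = Δy and first-type boundary values a, b, c, d; the grid size and boundary values are assumptions:

```python
# Hedged sketch: Jacobi iteration for the 5-point Laplace stencil (dx = dy).
import numpy as np

def laplace_jacobi(nx, ny, a, b, c, d, tol=1e-6, max_iter=10000):
    phi = np.zeros((ny + 2, nx + 2))       # includes boundary rows/columns
    phi[0, :], phi[-1, :] = a, c           # top: phi = a, bottom: phi = c
    phi[:, 0], phi[:, -1] = d, b           # left: phi = d, right: phi = b
    for _ in range(max_iter):
        new = phi.copy()
        new[1:-1, 1:-1] = 0.25 * (phi[2:, 1:-1] + phi[:-2, 1:-1] +
                                  phi[1:-1, 2:] + phi[1:-1, :-2])
        if np.max(np.abs(new - phi)) < tol:
            return new
        phi = new
    return phi

phi = laplace_jacobi(nx=4, ny=4, a=100.0, b=50.0, c=0.0, d=75.0)
print(np.round(phi[1:-1, 1:-1], 2))        # interior 4x4 values
```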
Laplace Equation
Laplace Equation- Example
Laplace Equation- Example
Handling Neumann and Robin BC
Three options for implementation:
1. Backward difference approximation with increased size of the matrix
– asymmetric backward difference approximation
– size of the matrix is increased
– solutions at the boundary nodes are obtained together
2. Ghost node
– symmetric central difference approximation
– size of the matrix is increased
– solutions at the boundary nodes are obtained together
3. Backward difference approximation without increasing the size of the matrix
– asymmetric backward difference approximation
– size of the matrix remains unaltered
– unknowns at the boundary nodes are computed separately, using the approximation of the BC, after the solution has been computed for the interior nodes
Application of Backward Difference …approach 1
Number of equations is now 24 and the size of the matrix A is 24×24
For Node 5, the 5th equation is:
(φ3 − 4φ4 + 3φ5) / (2Δx) = b, or (1/2Δx) φ3 + (−2/Δx) φ4 + (3/2Δx) φ5 = b
a53 = 1/(2Δx), a54 = −2/Δx, a55 = 3/(2Δx), and b5 = b
(Figure: the 5 × 5 grid of nodes 1-25 with φ = a on the top edge AB, ∂φ/∂x = b on the right edge, ∂φ/∂y = c on the bottom edge, and φ = d on the left edge; spacings Δx, Δy.)
For Node 21, the 21st equation is:
(φ11 − 4φ16 + 3φ21) / (2Δy) = c, or (1/2Δy) φ11 + (−2/Δy) φ16 + (3/2Δy) φ21 = c
a21,11 = 1/(2Δy), a21,16 = −2/Δy, a21,21 = 3/(2Δy), and b21 = c
Application of Backward Difference … approach 2
(Figure: the 4 × 4 interior grid of nodes 1-16 with extra boundary nodes 4′, 8′, 12′, 16′ on the right edge where ∂φ/∂x = b and 13′, 14′, 15′, 16″ on the bottom edge where ∂φ/∂y = c; φ = a on the top and φ = d on the left.)
Number of equations will remain at 16 and the size of the matrix A is 16×16.
For Node 16, the 16th equation is:
(1/Δy²) φ12 + (1/Δx²) φ15 + (−2/Δx² − 2/Δy²) φ16 + (1/Δx²) φ16′ + (1/Δy²) φ16″ = 0
(φ15 − 4φ16 + 3φ16′) / (2Δx) = b, or (1/2Δx) φ15 + (−2/Δx) φ16 + (3/2Δx) φ16′ = b
(φ12 − 4φ16 + 3φ16″) / (2Δy) = c, or (1/2Δy) φ12 + (−2/Δy) φ16 + (3/2Δy) φ16″ = c
Application of Backward Difference … approach 2
Number of equations will remain at 16. Substituting the BC approximations for φ16′ and φ16″ into the stencil eliminates the extra unknowns.
Recall, for Node 16, the 16th equation for the 1st type BC was:
(1/Δy²) φ12 + (1/Δx²) φ15 + (−2/Δx² − 2/Δy²) φ16 = −b/Δx² − c/Δy²
Application of Backward Difference … approach 2
Number of equations will remain at 16 and the size of the matrix A is 16×16.
For Node 8, the 8th equation is:
(1/Δy²) φ4 + (1/Δx²) φ7 + (−2/Δx² − 2/Δy²) φ8 + (1/Δx²) φ8′ + (1/Δy²) φ12 = 0
(φ7 − 4φ8 + 3φ8′) / (2Δx) = b, or (1/2Δx) φ7 + (−2/Δx) φ8 + (3/2Δx) φ8′ = b
After obtaining the solutions for the 16 interior nodes, the values of φ at the boundary nodes are to be computed from the BC equations used for substitution!
Ghost Node
Number of equations is now 25 and the size of the matrix A is 25×25
For Node 5:
(1/Δy²) a + (1/Δx²) φ4 + (−2/Δx² − 2/Δy²) φ5 + (1/Δx²) φ5′ + (1/Δy²) φ10 = 0
Ghost-node BC approximations (central differences):
(φ5′ − φ4) / (2Δx) = b and, for the bottom boundary, (φ23′ − φ18) / (2Δy) = c
(Figure: the 5 × 5 grid of nodes 1-25 with ghost nodes (e.g. 5′, 25′) outside the right boundary where ∂φ/∂x = b and (e.g. 23′, 25″) below the bottom boundary where ∂φ/∂y = c; φ = a on the top and φ = d on the left.)
Ghost Node
Number of equations is now 25 and the size of the matrix A is 25×25
For Node 25:
(1/Δy²) φ20 + (1/Δx²) φ24 + (−2/Δx² − 2/Δy²) φ25 + (1/Δx²) φ25′ + (1/Δy²) φ25″ = 0
with the ghost-node BC approximations
(φ25′ − φ24) / (2Δx) = b and (φ25″ − φ20) / (2Δy) = c
Non-Rectangular Boundary
Richa Ojha
Department of Civil Engineering
IIT Kanpur
2
Recap