
ESO208A: Computational Methods in Engineering

Richa Ojha
Department of Civil Engineering
IIT Kanpur

Acknowledgements: Profs. Abhas Singh and Shivam Tripathi (CE)


1
• Copyright:
The instructor of this course owns the copyright of all the course materials.
This lecture material was distributed only to the students attending the
course ESO208A: Computational methods in Engineering of IIT Kanpur and
should not be distributed in print or through electronic media without the
consent of the instructor. Students can make their own copies of the course
materials for their use.

2
What are Computational Methods or Numerical Methods in Engineering?

Formulation of mathematical problems in such a way that numerical answers can be computed using arithmetic operations in a computer.

A computer can only perform: +, −, ×, /

All kinds of problems can be solved: linear and non-linear systems of equations; approximation and interpolation of data; differentiation and integration; ODEs; PDEs.

3
Objectives of the course

• Introduce you to computational methods and algorithms for the solution of engineering problems.
• Familiarize you with the algorithms behind software packages so that you don't use them as black boxes.
• Expose you to the analysis of these algorithms so that, if needed, you can modify an existing algorithm or develop your own algorithm for the problem at hand.

4
Scope of the course

[Flowchart] Engineering problems, combined with physical laws/rules/relationships, data, and initial/boundary conditions, lead to mathematical models. The models may be treated experimentally, analytically, or numerically; the numerical route requires computational methods/algorithms, programming, and computational resources, and produces the results.

5
Example: an object falling from a building (developed over the following slides)

6
Example
Based on the example, we need to answer several questions:
• Do we need numerical methods to solve the problem?
– Shall we solve the ODE directly using a numerical technique, or go for an analytical solution and then utilise a numerical technique?
• How many significant digits do we have? How many should we take? This is important as a computer has finite space.
• What algorithm should we take (here BA)?
• We have selected an algorithm. Is this algorithm going to converge? (Numerical Analysis)
– What is the convergence rate (how fast does it converge)?

7
Example
• What is the error in the results? If we do experiments, will we get the same result?
• Mainly because we have model error (air resistance is neglected, gravity is not constant)
• Data error: the data themselves may be wrong, e.g. the height may not be correct, g may have error
• Uncertainty propagates
• Round-off error, because of the finite nature of the computer: a computer has an upper and a lower limit on the numbers it can represent
• Truncation error, from approximating an infinite process by finitely many steps
The condition number, or stability, helps us decide which algorithm we should select.

8
Number representation in Computer

I will leave this for you as a reading assignment


9
Summary
• What are computational methods? How are they used for solving engineering
problems?

• The choice of computational method depends on the problem and intended


use of the results.

• Example: Object falling from a building


‒ Formulation of mathematical model
‒ Choice of methods, convergence, convergence rate, errors [model, data,
round-off and truncation], propagation of errors, stability & condition
number.
• Number representation in computers
‒ Binary and decimal representation
10
Errors and Error Analysis

3
Significant digits
Significant digits of a number are those that can be used with
confidence

Fig: A speedometer (Source: Chapra and Canale)


4
Significant digits

Number         Significant digits   Rule
228.18         5                    All non-zero digits are significant.
10.08          4                    Zeros between non-zero digits are significant.
0034.5         3                    Leading zeros are not significant.
34.500         5                    In a decimal number, trailing zeros are significant.
34500          3, 4, or 5           In a non-decimal number, trailing zeros may or may not be significant.
3.450 × 10^4   4                    No ambiguity in scientific notation.

5
Accuracy vs Precision

• Accuracy: how closely a measured/computed value agrees with the true value
– Opposite sense: inaccuracy (or bias), a systematic deviation from the true value

• Precision (or reproducibility): how closely individual computed/measured values agree with each other
– Opposite sense: imprecision (or uncertainty), the magnitude of scatter
6
Errors and Error Analysis

7
Source: Chapra and Canale
Define Error:

True Value = Approximate Value + Error (e)

Absolute error: e = True Value − Approximate Value
Relative error: e_r = e / True Value

• Relative error is often expressed in percent by multiplying e_r by 100.
• Absolute error can be reported with its sign or as a magnitude |e|.
• If the error is computed with respect to the true value (if known), the prefix 'true' is added.
8
Define Error:

• For an iterative process, the true value is replaced with the current iteration value, and the prefix 'approximate' is added:
ε_a = |(x_k − x_{k−1}) / x_k| × 100%
This is used for testing convergence of the iterative process.

9
Example

10
We will never have the true value, but we would like to have an idea about the error of the algorithm:
– How to get an error bound?
– The error bound should be a tight bound.

11
Sources of Error in computation?

• Model Error: physical processes are too complex, or some of the processes cannot be characterized
• Data Error: initial and boundary conditions, measured values of the parameters and constants in the model
• Round-off Error: irrational numbers, products and divisions of two numbers; limited by the machine capability
• Truncation Error: truncation of an infinite series; often arises in the design of the numerical method through approximation of the mathematical problem.

12
Truncation error

13
Truncation error

A series may be carried to, say, fourth order, but the full series is infinite; if we can compute only up to a certain order, the neglected terms constitute the error. This error is the truncation error!

14
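For reference, the standard Taylor-series statement behind this discussion (h = x_{i+1} − x_i denotes the step; this is the expansion the slide's series refers to):

f(x_{i+1}) = f(x_i) + f'(x_i) h + [f''(x_i)/2!] h^2 + … + [f^(n)(x_i)/n!] h^n + R_n

with remainder (the truncation error after n + 1 terms)

R_n = [f^(n+1)(ξ)/(n+1)!] h^(n+1), for some ξ between x_i and x_{i+1}.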
Truncation error

15
Source: Chapra and Canale
Truncation error-Error bound

16
Truncation error-Error bound

• E becomes closer to the true error as the number of terms increases.
• We try to use e and E to make decisions: prefer a formulation with minimum e and minimum E.
17
Data error

Suppose that, for some reason, we cannot get the true values of x. If there is an error in x, what will be the error in y = f(x)?

18
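The first-order answer, developed formally in the forward error analysis later in this set of slides: for y = f(x),

Δy ≈ |f'(x)| Δx

so the data error Δx is amplified or attenuated by the local derivative.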
Summary

• What are significant digits?

• What are the sources of error in the computation?

• What is Truncation Error?

• What is Data Error?

24
ESO 208A: Computational Methods in
Engineering

Richa Ojha
Department of Civil Engineering
IIT Kanpur

Acknowledgements: Profs. Abhas Singh and Shivam Tripathi (CE)


1
• Copyright:
The instructor of this course owns the copyright of all the course materials.
This lecture material was distributed only to the students attending the
course ESO208A: Computational methods in Engineering of IIT Kanpur and
should not be distributed in print or through electronic media without the
consent of the instructor. Students can make their own copies of the course
materials for their use.

2
Recap
• Computational methods cannot be studied in isolation of the problem

“The purpose of computing is insight, not numbers”, Hamming

• Significant digits/figures are the digits that one can use with confidence

• True error = True value – Measured/Computed value


‒ approximate error
‒ error bound

True error is never known

3
Recap

• Types of error
- Model error
- Data error
- Truncation error
Computers are finite
- Round-off error

4
Round-off error

• Round-off error originates from the fact that computers retain only
a fixed number of significant figures during a calculation

• In addition, because computers use a base-2 representation, they cannot precisely represent certain base-10 numbers.

5
Round-off error

6
Round-off error

Number representation in computers


• Integer
• Fixed point
• Floating point

7
Round-off error

Integers: unsigned (0, 1, 2, …) and signed (…, −2, −1, 0, 1, 2, …)

For a 4-bit machine, how many integers can one store?

1 bit: one binary digit (one storage cell in a binary computer)
4 bits: nibble
8 bits: one byte
32 bits: word
64 bits: double word

8
Round-off error

For a 4-bit machine, how many integers can one store?

You can store 2^4 = 16 numbers.

• 0 to 15 in the case of unsigned numbers.
• In the case of signed numbers, the first bit holds the sign. The remaining 3 bits can hold binary numbers from 000 to 111, i.e. decimal numbers from 0 to 7.
• One would expect the range to be −7 to 7, but (with two's complement representation) the range is −8 to 7.
• In generalized form, the positive limit is 2^(n−1) − 1, where n is the number of bits. 9
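A quick check of these ranges (a small sketch; the helper function is ours, not part of the course material):

def int_ranges(n):
    # Unsigned and two's-complement signed ranges for an n-bit integer
    unsigned = (0, 2**n - 1)
    signed = (-2**(n - 1), 2**(n - 1) - 1)
    return unsigned, signed

print(int_ranges(4))   # ((0, 15), (-8, 7))
print(int_ranges(8))   # ((0, 255), (-128, 127))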
Round-off error

• Fixed point, e.g. 0.71
– The location of the decimal point is fixed (here, two places after the decimal)
– Useful for hand calculators

• Floating point

10
Round-off error

11
Round-off error

To store a floating point number, a computer word is divided into three parts: a sign, a signed exponent, and a mantissa.

12
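As an illustrative sketch, the three parts can be inspected directly for IEEE 754 double precision (the format most computers use: 1 sign bit, 11 exponent bits, 52 mantissa bits; the helper below is ours, not part of the course material):

import struct

def float_fields(x):
    # Reinterpret the 64 bits of an IEEE 754 double
    bits = struct.unpack(">Q", struct.pack(">d", x))[0]
    sign = bits >> 63
    exponent = ((bits >> 52) & 0x7FF) - 1023   # remove the bias
    fraction = bits & ((1 << 52) - 1)          # mantissa without the hidden 1
    return sign, exponent, fraction

print(float_fields(0.1))   # 0.1 has no exact base-2 representation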
Round-off error

13
Round-off error
What we did so far was for binary; in the case of decimal, the maximum decimal power can be obtained in the same way.

14
Summary

Number representation in Computers

• Integers

• Fixed Point

• Floating Point

15
ESO208A: Computational Methods in
Engineering

Richa Ojha
Department of Civil Engineering
IIT Kanpur

Acknowledgements: Profs. Abhas Singh and Shivam Tripathi (CE)


1
• Copyright:
The instructor of this course owns the copyright of all the course materials.
This lecture material was distributed only to the students attending the
course ESO208A: Computational methods in Engineering of IIT Kanpur and
should not be distributed in print or through electronic media without the
consent of the instructor. Students can make their own copies of the course
materials for their use.

2
Round-off error

Number representation in computers


• Integer
• Fixed point
• Floating point

3
Round-off error

Mantissa

• The mantissa is usually normalized if it has leading zero digits.
For example, 1/34 = 0.0294117 (in a base-10 system).
• If this has to be stored in a computer that allows 4 decimal places, 1/34 would be stored as 0.0294 × 10^0.
• The number is normalized to remove the leading zero: 0.2941 × 10^(−1).
• The consequence of normalization is that the absolute value of the mantissa m is limited: 1/b ≤ |m| < 1, where b is the base.
4
Round-off error

5
Floating point number representation

[Figure: the floating-point number line, showing the overflow regions beyond the largest representable magnitudes, the underflow regions, and the "hole near zero" around 0.]

Consider a hypothetical system

6
Round-off error

7
Floating point number representation

[Figure: overflow, underflow, and the "hole near zero" on the floating-point number line.]

Round-off error Δx in representing a real number x:
• Chopping: |Δx / x| ≤ u = b^(1−t)
• Rounding: |Δx / x| ≤ u = 0.5 b^(1−t)
where b is the base and t is the number of digits in the mantissa; u is the machine precision.

Real numbers in mathematics and in a computer are not the same.
Large round-off errors can be avoided by avoiding the subtraction of nearly equal numbers. 8
Round-off error

Why is round-off error important?

Let us say you want to add two numbers: 208.00 + 0.25 = 208.25.

In the computer, the numbers would be represented as:
0.208 × 10^3
0.25 × 10^0

In floating point, we rewrite the smaller number so that both carry the highest power:
0.208 × 10^3 + 0.00025 × 10^3 = 0.20825 × 10^3

A computer carrying 3 mantissa digits will round off and return 0.208 × 10^3.
9
Round-off error

Why is round-off error important?

Another example: a + 1 − a = 1.

Let us take a = 10^20.

Output from the computer: 0, because 1 is smaller than the gap between representable numbers near 10^20, so a + 1 rounds back to a.

If instead I write a − a + 1 (evaluated left to right), the output from the computer is 1.
10
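This is easy to reproduce (standard IEEE double precision, e.g. in Python):

a = 1e20
print(a + 1 - a)   # 0.0: the 1 is absorbed when added to the huge a first
print(a - a + 1)   # 1.0: cancelling a first preserves the 1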
Round-off error

Why is round-off error important?

The most important effect is in subtractions.

Subtraction of two nearly equal numbers:
0.246 × 10^3
0.245 × 10^3

Aligning to the highest power and subtracting:
0.246 × 10^3 − 0.245 × 10^3 = 0.001 × 10^3

Normalizing the mantissa: 0.100 × 10^1 (displayed with 3 significant digits).
But we actually have only 1 significant digit. This is called loss of significance. 11
Forward error analysis

x → f(x)
Δx → Δf(x)

Condition number of the problem:

C_p = |relative error in f(x)| / |relative error in x| = |Δf(x)/f(x)| / |Δx/x| ≈ |x f'(x) / f(x)|

C_p ≤ 1: well-conditioned problem
C_p >> 1: ill-conditioned problem

C_p is a characteristic of the problem.


12
Forward Error Analysis:
Single variable function: y = f(x). If an error is introduced in x, what is the error in y?

y + Δy = f(x + Δx) = f(x) + f'(x) Δx + [f''(x)/2!] Δx^2 + …

Assuming the error to be small, the 2nd and higher order terms are neglected (a first order approximation!), giving Δy ≈ f'(x) Δx.

13
Condition Number of the Problem (Cp):

C_p = |Δf(x)/f(x)| / |Δx/x|

Also: Δf(x)/f(x) ≈ [x f'(x)/f(x)] (Δx/x)

As Δx → 0, C_p → |x f'(x) / f(x)|

Cp < 1: problem is well-conditioned, error is attenuated
Cp > 1: problem is ill-conditioned, error is amplified
Cp = 1: neutral, error is translated
14
Examples of Forward Error Analysis and Cp:

Problem 1: f(x) = e^x

C_p = |x f'(x) / f(x)| = |x|, so Δf/f ≈ x (Δx/x).

The problem is well-conditioned for 0 ≤ |x| < 1, neutral at |x| = 1, and ill-conditioned for |x| > 1.
15
Examples of Forward Error Analysis and Cp:

Problem 2: Solve the following system of equations:

x + αy = 1; αx + y = 0

Solving: x = 1/(1 − α^2), y = −α/(1 − α^2)

Well-conditioned for |α| << 1 and ill-conditioned for α ≈ 1 (the denominator 1 − α^2 approaches zero).
16
Backward error analysis

x → f_A(x) (the computed solution)
x_A: the input for which the exact function value equals the computed one, f(x_A) = f_A(x)

Condition number of the algorithm:

|x − x_A| / |x| ≈ C_A u, where u is the machine precision

C_A characterizes the numerical stability of the algorithm:

small C_A: stable algorithm
large C_A: unstable algorithm
17
Backward error analysis- Example

18
Backward error analysis- Example

Condition Number of an algorithm can be changed


19
Summary

• How is the mantissa represented in computers?

• What are chopping and rounding?

• What is Condition Number of Problem and Algorithm?

20
Solution of non-linear equations

3
Mathematical Preliminaries
Non-linear equation

• A quadratic equation has an analytical solution.
• Not all equations have an analytical solution, so we may have to use a computer.
7
Non-linear equation

On a computer, we have five approaches:


• Graphical method
• Bracketing methods: Bisection, Regula-Falsi
• Open methods: Fixed point, Newton-Raphson, Secant, Muller
• Special methods for polynomials: Bairstow’s
• Hybrid methods: Brent’s

8
Graphical Method

One of the best methods to get insight.

The final value depends on how much you zoom in.
9
Bracketing Methods

1. Bisection Method

10
Bisection Method

11
Bisection Method
• Principle: Choose an initial interval based on the intermediate value theorem and halve the interval at each iteration step to generate nested intervals.
• Initialize: Choose a0 and b0 such that f(a0)f(b0) < 0. This is done by trial and error.
• Iteration step k:
– Compute the mid-point m_{k+1} = (a_k + b_k)/2 and the functional value f(m_{k+1})
– If f(m_{k+1}) = 0, m_{k+1} is the root. (It's your lucky day!)
– If f(a_k)f(m_{k+1}) < 0: a_{k+1} = a_k and b_{k+1} = m_{k+1}; else, a_{k+1} = m_{k+1} and b_{k+1} = b_k
– After n iterations: size of the interval d_n = (b_n − a_n) = 2^(−n) (b_0 − a_0); stop if d_n ≤ ε
– Estimate the root (x = α, say!) as: α = m_{n+1} ± 2^(−(n+1)) (b_0 − a_0)
12
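A minimal Python sketch of the steps above (f, the initial bracket, and the tolerance eps are the caller's choices):

import math

def bisect(f, a, b, eps=1e-10, max_iter=100):
    if f(a) * f(b) >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        m = (a + b) / 2.0                  # mid-point
        if f(m) == 0 or (b - a) / 2.0 <= eps:
            return m
        if f(a) * f(m) < 0:
            b = m                          # root lies in [a, m]
        else:
            a = m                          # root lies in [m, b]
    return (a + b) / 2.0

# Example: the root of cos(x) - x = 0 (about 0.7391)
print(bisect(lambda x: math.cos(x) - x, 0.0, 1.0))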
Bisection Method
Maximum error
at 0th step

13
Bisection Method

14
Bisection Method

15
Bracketing Methods

2. The method of false position

[Figure: the chord through (a_k, f(a_k)) and (b_k, f(b_k)) on y = f(x) crosses the x-axis at m_{k+1}.]
16
Regula-Falsi or Method of False Position
• Principle: In place of the mid-point, the function is assumed to be linear within the interval, and the root of the linear function is chosen.
• Initialize: Choose a0 and b0 such that f(a0)f(b0) < 0. This is done by trial and error.
• Iteration step k:
• A straight line passing through the two points (a_k, f(a_k)) and (b_k, f(b_k)) is given by:
y = f(a_k) + [(f(b_k) − f(a_k)) / (b_k − a_k)] (x − a_k)
• The root of this line at y = 0 is:
m_{k+1} = b_k − f(b_k)(b_k − a_k) / (f(b_k) − f(a_k))
• If f(m_{k+1}) = 0, m_{k+1} is the root. (It's your lucky day!)
• If f(a_k)f(m_{k+1}) < 0: a_{k+1} = a_k and b_{k+1} = m_{k+1}; else, a_{k+1} = m_{k+1} and b_{k+1} = b_k
• After n iterations: size of the interval d_n = (b_n − a_n); stop if d_n ≤ ε
• Estimate the root (x = α, say!) as α = m_{n+1}

17
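A minimal sketch of the false-position update (the same bracketing loop as bisection, with the chord-root formula above):

import math

def false_position(f, a, b, eps=1e-10, max_iter=200):
    if f(a) * f(b) >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    m = a
    for _ in range(max_iter):
        m_new = b - f(b) * (b - a) / (f(b) - f(a))   # chord root
        if f(m_new) == 0 or abs(m_new - m) <= eps:
            return m_new
        if f(a) * f(m_new) < 0:
            b = m_new
        else:
            a = m_new
        m = m_new
    return m

print(false_position(lambda x: math.cos(x) - x, 0.0, 1.0))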
Bracketing method

• The false position method also has linear convergence; the constant may be different from 1/2.
• The false position method usually works faster than the bisection method.
• No one algorithm can be claimed to be universally superior to another. (No free lunch theorem!)
– If the equation has more than one solution, the bisection method will find only one of them. If you want to find multiple roots, use separate bounds for the different roots.

Look for Modified False Position Method!


18
Bracketing Methods

• Convergence to a root is guaranteed (though you may not get all the roots!)
• Simple to program
• Computation of the derivative is not needed
Disadvantages
• Slow convergence
• For more than one root, this approach may find only one solution.
19
Summary

• What is bisection method?


• What is false-position method?

20
Non-linear equation

On a computer, we have five approaches:


• Graphical method
• Bracketing methods: Bisection, Regula-Falsi
• Open methods: Fixed point, Newton-Raphson, Secant
• Special methods for polynomials: Muller, Bairstow’s
• Hybrid methods: Brent’s
Open Methods

Distinguishing feature:
• Only one starting value
• Convergence is not always guaranteed
• If the algorithm converges, the rate of convergence may be faster
Open Methods

1. Fixed Point

[Figure: two cobweb diagrams of y = x and y = g(x); the successive iterates x0, x1, x2, x3 approach the root α.]
Fixed Point Method

• Problem: f(x) = 0, find a root x = α such that f(α) = 0
• Re-arrange the function f(x) = 0 to x = g(x)
• Iteration: x_{k+1} = g(x_k)
• Stopping criterion: |(x_{k+1} − x_k) / x_{k+1}| ≤ ε
6
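A minimal fixed-point sketch (g, the starting value x0, and the relative tolerance eps are the caller's choices):

import math

def fixed_point(g, x0, eps=1e-10, max_iter=200):
    x = x0
    for _ in range(max_iter):
        x_new = g(x)                        # x_{k+1} = g(x_k)
        if abs(x_new - x) <= eps * abs(x_new):
            return x_new
        x = x_new
    raise RuntimeError("did not converge")

# Example: solve x = cos(x)
print(fixed_point(math.cos, 1.0))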
Fixed Point Method

7
Fixed Point Method
Convergence of fixed point: the iteration converges locally when |g'(x)| < 1 in a neighbourhood of the root, and the convergence is linear.
Open Methods

2. Newton Raphson Method

• Problem: f(x) = 0, find a root x = α such that f(α) = 0
• Principle: Approximate the function as a straight line having the same slope as the original function at the point of iteration.

[Figure: tangents to y = f(x) at x0 and x1 generate the successive iterates x1 and x2.]
Newton-Raphson Method

Iteration formula, from the tangent-line (first-order Taylor) approximation:

x_{k+1} = x_k − f(x_k) / f'(x_k)

Convergence: quadratic near a simple root.
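A minimal Newton-Raphson sketch (f, its derivative dfdx, and x0 are the caller's choices):

import math

def newton(f, dfdx, x0, eps=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        d = dfdx(x)
        if d == 0:
            raise ZeroDivisionError("zero derivative: iteration stuck")
        x_new = x - f(x) / d               # tangent-line step
        if abs(x_new - x) <= eps * abs(x_new):
            return x_new
        x = x_new
    raise RuntimeError("did not converge")

print(newton(lambda x: math.cos(x) - x,
             lambda x: -math.sin(x) - 1, 1.0))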
Newton-Raphson Method

Advantages:
• Faster convergence (quadratic)

Disadvantages:
• Need to calculate the derivative

[Figure: from a poor starting point the tangent step overshoots; the Newton-Raphson method may get stuck!]
15
Newton-Raphson Method

Places where Newton-Raphson may not work:

a) Inflection point

The second derivative is 0 near the root; the solution can diverge.

[Figure: tangent steps near an inflection point carrying the iterate away from the root.]
16
Newton-Raphson Method

b) If you have a local minimum, it will trap your solution: f'(x) → 0 there and the step shoots far away

17
Newton-Raphson Method

c) Multiple Solutions

• Which root you converge to depends on the function
• The guess should be close to the desired solution

"No substitute for understanding the problem"

18
Open Methods
3. Secant Method
• Principle: Use a difference approximation for the slope or derivative in the Newton-Raphson method. This is equivalent to approximating the tangent with a secant.
• Problem: f(x) = 0, find a root x = α such that f(α) = 0

[Figure: the secant through (x0, f(x0)) and (x1, f(x1)) gives x2; the next secant gives x3.]
Secant Method

• Problem: f(x) = 0, find a root x = α such that f(α) = 0

• Initialize: choose two points x0 and x1 and evaluate f(x0) and f(x1)

• Approximation: f'(x_k) ≈ (f(x_k) − f(x_{k−1})) / (x_k − x_{k−1}); replace this in Newton-Raphson

• Iteration formula: x_{k+1} = x_k − f(x_k)(x_k − x_{k−1}) / (f(x_k) − f(x_{k−1}))

• Stopping criterion: |(x_{k+1} − x_k) / x_{k+1}| ≤ ε
20
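A minimal secant sketch (two starting points x0, x1; no derivative needed):

import math

def secant(f, x0, x1, eps=1e-12, max_iter=50):
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if f1 == f0:
            raise ZeroDivisionError("flat secant: iteration stuck")
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)   # secant step
        if abs(x2 - x1) <= eps * abs(x2):
            return x2
        x0, f0, x1, f1 = x1, f1, x2, f(x2)
    raise RuntimeError("did not converge")

print(secant(lambda x: math.cos(x) - x, 0.0, 1.0))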
Secant Method

Advantages:
• Fast convergence (slightly less than quadratic)
• Overcomes the disadvantage of having to calculate the derivative

[Figure: like Newton-Raphson, the secant method may also get stuck!]

Look for the Modified Secant Method!
21
Summary

What are the fixed-point, Newton-Raphson, and Secant methods?
Under what conditions will these methods not work?
What is the convergence rate of these methods?
Non-linear equation

On a computer, we have five approaches:


• Graphical method
• Bracketing methods: Bisection, Regula-Falsi
• Open methods: Fixed point, Newton-Raphson, Secant
• Special methods for polynomials: Muller, Bairstow’s
• Hybrid methods: Brent’s
Open Methods

4
Hybrid Method

Combined approach:
• Bracketing method (when starting)
• Open method (when close to the solution)

Two popular methods:

• Dekker method: combines the bisection method and the secant method
• Brent algorithm: combines the bisection method with an open method (inverse quadratic interpolation)

In MATLAB, the fzero function uses Brent's algorithm.
5
Multiple Roots
What to do when your function has multiple roots?

6
Multiple Roots
What to do when your function has multiple roots?
Problems with multiple roots:
1) A bracketing method cannot be used when the multiplicity m is even
2) Newton-Raphson may not work, as f'(x) = 0 at the root
3) Large interval of uncertainty for the solution of f(x) = 0

Option: change or reformulate f(x) = 0 to u(x) = 0, such that u(x) has a simple root at the same location. 7
Multiple Roots
Two modifications of the Newton-Raphson method:
a) First modification (multiplicity m known): x_{k+1} = x_k − m f(x_k) / f'(x_k)

b) Second modification (apply Newton-Raphson to u(x) = f(x)/f'(x)):
x_{k+1} = x_k − f(x_k) f'(x_k) / ([f'(x_k)]^2 − f(x_k) f''(x_k))
8
Multiple Roots

• Need to evaluate f(x_i), f'(x_i) and f''(x_i) at every iteration
• Each iteration is more expensive, even though the method converges rapidly
9
Polynomials

Consider an nth order polynomial:

f_n(x) = a_0 + a_1 x + a_2 x^2 + … + a_n x^n = 0

If the a's are real:

• This polynomial will have n roots (real or complex)
• If n is odd, at least one root will be real
• Complex roots occur in conjugate pairs

We are interested in finding the roots of polynomials.
Polynomials

Certain characteristics of polynomials:


1. Evaluation of polynomials by a computer
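This bullet presumably refers to nested (Horner) evaluation, the standard way a computer evaluates a polynomial: it needs only n multiplications instead of roughly n^2/2. A minimal sketch:

def horner(coeffs, x):
    # coeffs = [a_n, ..., a_1, a_0], highest degree first
    result = 0.0
    for a in coeffs:
        result = result * x + a
    return result

# f(x) = 2x^3 - 6x^2 + 2x - 1 at x = 3
print(horner([2, -6, 2, -1], 3))   # 5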
Polynomials

2. Division of polynomials
Polynomials

3. Deflation of Polynomials
Polynomials

4. Effective degree of Polynomials

A polynomial may be of 12th order in x but only cubic in z = x^4.

So, try to reduce a polynomial to a lesser degree.

Roots of Polynomials

Two methods that can be used to find roots of polynomials


a) Muller method
b) Bairstow method
Müller Method

Müller's method obtains a root estimate by projecting, through three function values, a parabola to the x axis.

[Figure 7.3 of C&C: the secant method uses a straight line through two points; the Müller method uses a parabola through three points.]

Müller Method

1. Write the equation of a parabola in a convenient form:

f_2(x) = a(x − x_2)^2 + b(x − x_2) + c

2. The parabola should intersect the three points [x0, f(x0)], [x1, f(x1)], [x2, f(x2)]:

f(x_0) = a(x_0 − x_2)^2 + b(x_0 − x_2) + c
f(x_1) = a(x_1 − x_2)^2 + b(x_1 − x_2) + c
f(x_2) = a(x_2 − x_2)^2 + b(x_2 − x_2) + c = c
Müller Method

3. The three equations can be solved to estimate a, b, and c.

Define:
h_0 = x_1 − x_0,  h_1 = x_2 − x_1
δ_0 = (f(x_1) − f(x_0)) / (x_1 − x_0),  δ_1 = (f(x_2) − f(x_1)) / (x_2 − x_1)

then:
a = (δ_1 − δ_0) / (h_1 + h_0),  b = a h_1 + δ_1,  c = f(x_2)
Müller Method

4. The root can be found by applying the quadratic formula:

x_3 = x_2 − 2c / (b ± sqrt(b^2 − 4ac))

5. The ± term yields two roots; the sign is chosen to agree with b. This will result in a large denominator and will give the root estimate that is closest to x_2.
Müller Method

6. Once x_3 is determined, the process is repeated by employing a sequential approach just like in the secant method: x_1, x_2, and x_3 replace x_0, x_1, and x_2.
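A minimal sketch of steps 1-6 (cmath is used so complex roots are handled; the test polynomial x^3 − 13x − 12 and the starting points follow C&C's worked example and are assumptions here):

import cmath

def muller(f, x0, x1, x2, eps=1e-12, max_iter=50):
    for _ in range(max_iter):
        h0, h1 = x1 - x0, x2 - x1
        d0 = (f(x1) - f(x0)) / h0
        d1 = (f(x2) - f(x1)) / h1
        a = (d1 - d0) / (h1 + h0)
        b = a * h1 + d1
        c = f(x2)
        disc = cmath.sqrt(b * b - 4 * a * c)
        # pick the sign that agrees with b (larger |denominator|)
        denom = b + disc if abs(b + disc) >= abs(b - disc) else b - disc
        x3 = x2 - 2 * c / denom
        if abs(x3 - x2) <= eps * abs(x3):
            return x3
        x0, x1, x2 = x1, x2, x3
    raise RuntimeError("did not converge")

# roots of x^3 - 13x - 12 are -3, -1 and 4
print(muller(lambda x: x**3 - 13 * x - 12, 4.5, 5.5, 5.0))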
Summary

• We looked at open methods for solving non-linear equations

• How to modify Newton-Raphson in case of multiple roots?

• Characteristics of a polynomial

• Muller method for solving a polynomial


Non-linear equation

On a computer, we have five approaches:


• Graphical method
• Bracketing methods: Bisection, Regula-Falsi
• Open methods: Fixed point, Newton-Raphson, Secant
• Special methods for polynomials: Muller, Bairstow’s
• Hybrid methods: Brent’s
Bairstow's Method

1. Bairstow's method is an iterative approach loosely related to both the Müller and Newton-Raphson methods.

2. It is based on dividing the given polynomial by a quadratic polynomial x^2 − rx − s:

f_n(x) = a_0 + a_1 x + a_2 x^2 + … + a_n x^n
       = (x^2 − rx − s) f_{n−2}(x) + R

where
f_{n−2}(x) = b_2 + b_3 x + … + b_{n−1} x^{n−3} + b_n x^{n−2}
R = b_1 (x − r) + b_0
Bairstow's Method

3. The coefficients b are obtained very easily by using the recursive relation:

b_n = a_n
b_{n−1} = a_{n−1} + r b_n
b_i = a_i + r b_{i+1} + s b_{i+2},  i = n − 2 down to 0

4. Using the Newton-Raphson approach, r and s are adjusted so as to make both b_0 and b_1 approach zero:

b_1 = a_1 + r b_2 + s b_3 = u(r, s)
b_0 = a_0 + r b_1 + s b_2 = v(r, s)
Bairstow's Method

5. Obtain corrections in r and s by the Newton-Raphson method.

The changes Δr and Δs needed to improve the guesses are estimated from:

(∂b_1/∂r) Δr + (∂b_1/∂s) Δs = −b_1
(∂b_0/∂r) Δr + (∂b_0/∂s) Δs = −b_0
Bairstow's Method

6. Bairstow (1920) showed that the partial derivatives of b_0 and b_1 are obtained by the recursive relation:

c_n = b_n
c_{n−1} = b_{n−1} + r c_n
c_i = b_i + r c_{i+1} + s c_{i+2},  i = n − 2 down to 1

where
∂b_0/∂r = c_1,  ∂b_0/∂s = ∂b_1/∂r = c_2,  ∂b_1/∂s = c_3

7. Iterate the steps until (Δr/r) and (Δs/s) drop below a specified threshold.
Polynomial Methods: Single Root

If we divide f_n(x) by a factor (x − r) such that r = α is a root of the polynomial, we will get an exact polynomial of order (n − 1), say f_{n−1}(x).

If r ≠ α, dividing by the factor (x − r) leaves a remainder b_0.

8
Polynomial Methods: Single Root

For a given polynomial, the coefficients {a_0, …, a_n} are known. For a choice of r, one can determine {b_0, …, b_n} from the n + 1 equations above in n + 1 unknowns. 9
Polynomial Methods: Single Root

The remainder b_0 is a function of r: b_0(r); at r = α, b_0(r) = 0.

Problem: f(x) = 0, find a root x = α such that f(α) = 0
Problem: b_0(r) = 0, find a root r = α such that b_0(α) = 0

Apply Newton-Raphson. Iteration formula for step k:

r_{k+1} = r_k − b_0(r_k) / b_0'(r_k)

where b_0'(r) is evaluated from the b's by a second application of the same synthetic-division recursion (analogous to the c's in Bairstow's method below).

Assume a value of r, estimate b_0 and b_0'(r), and compute the new r. Continue until b_0 becomes zero (within an acceptable relative error).
10
Polynomial Methods: Bairstow's

Let us divide f_n(x) by a factor (x^2 − rx − s). If the factor is exact, the resulting polynomial will be of order (n − 2). Two roots of the polynomial can be estimated simultaneously as the roots of the quadratic factor. Complex roots will appear as complex conjugates.

If the factor (x^2 − rx − s) is not exact, there will be two remainder terms, one a function of x and the other a constant.
Let us express the remainder term as b_1(x − r) + b_0. This form, instead of the standard b_1 x + b_0, is chosen to devise a convenient iteration formula!
11
Polynomial Methods: Bairstow's

For a given polynomial, {a_0, …, a_n} are known. For a choice of r and s, one can determine {b_0, …, b_n} from the n + 1 equations above in n + 1 unknowns. 12
Polynomial Methods: Bairstow's

b_0 and b_1 are functions of r and s: b_0(r, s) and b_1(r, s).

Expand in a Taylor series and apply the 2-D Newton-Raphson method.

Need to evaluate: ∂b_0/∂r, ∂b_0/∂s, ∂b_1/∂r, and ∂b_1/∂s
13
Polynomial Methods: Bairstow's

Partial derivatives with respect to r: differentiate b_i = a_i + r b_{i+1} + s b_{i+2}, noting ∂b_n/∂r = 0:

∂b_i/∂r = b_{i+1} + r ∂b_{i+1}/∂r + s ∂b_{i+2}/∂r
14
Polynomial Methods: Bairstow's

Partial derivatives with respect to s, noting ∂b_n/∂s = 0 and ∂b_{n−1}/∂s = 0:

∂b_i/∂s = b_{i+2} + r ∂b_{i+1}/∂s + s ∂b_{i+2}/∂s = c_{i+2} (say)

15
Polynomial Methods: Bairstow's

∂b_0/∂r = c_1;  ∂b_0/∂s = ∂b_1/∂r = c_2;  and ∂b_1/∂s = c_3

For any given polynomial, we know {a_0, a_1, … a_n}. Assume r and s. Compute {b_0, b_1, … b_n} and {c_1, … c_n}. Compute Δr and Δs.

16
Polynomial Methods: Bairstow's
Step 1: input a_0, a_1, … a_n and initialize r and s.
Step 2: compute b_0, b_1, … b_n
Step 3: compute c_1, … c_n
Step 4: compute Δr and Δs from
c_2 Δr + c_3 Δs = −b_1
c_1 Δr + c_2 Δs = −b_0
Step 5: compute r_new = r + Δr, s_new = s + Δs
Step 6: check for convergence: |Δr/r_new|, |Δs/s_new| ≤ ε and |b_0|, |b_1| ≤ ε′
Step 7: stop if all convergence checks are satisfied. Else, set r = r_new, s = s_new and go to step 2.

17
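A minimal sketch of steps 1-7 (a = [a0, ..., an] and the starting r, s are the caller's choices; the 2×2 system of step 4 is solved by Cramer's rule):

def bairstow_factor(a, r, s, eps=1e-10, max_iter=100):
    n = len(a) - 1
    for _ in range(max_iter):
        b = [0.0] * (n + 1)
        c = [0.0] * (n + 1)
        b[n] = a[n]
        b[n - 1] = a[n - 1] + r * b[n]
        for i in range(n - 2, -1, -1):
            b[i] = a[i] + r * b[i + 1] + s * b[i + 2]
        c[n] = b[n]
        c[n - 1] = b[n - 1] + r * c[n]
        for i in range(n - 2, 0, -1):
            c[i] = b[i] + r * c[i + 1] + s * c[i + 2]
        # solve c2*dr + c3*ds = -b1 and c1*dr + c2*ds = -b0
        det = c[2] * c[2] - c[3] * c[1]
        dr = (-b[1] * c[2] + b[0] * c[3]) / det
        ds = (-b[0] * c[2] + b[1] * c[1]) / det
        r, s = r + dr, s + ds
        if abs(dr) <= eps * abs(r) and abs(ds) <= eps * abs(s):
            return r, s, b[2:]   # factor x^2 - r x - s, deflated coefficients
    raise RuntimeError("did not converge")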
Bairstow's Method
Step 8. The roots of the quadratic polynomial x^2 − rx − s are obtained as:

x = (r ± sqrt(r^2 + 4s)) / 2

Step 9. At this point three possibilities exist:
1. The quotient is a third-order polynomial or greater. The previous values of r and s serve as initial guesses, and Bairstow's method is applied to the quotient to evaluate new r and s values.
2. The quotient is quadratic. The remaining two roots are evaluated directly, using the above equation.
3. The quotient is a 1st order polynomial. The remaining single root can be evaluated simply as x = −s/r.
Summary

• Bairstow method

• Derivation of Bairstow method


ESO208A: Computational Methods in
Engineering

Richa Ojha
Department of Civil Engineering
IIT Kanpur

Acknowledgements: Profs. Abhas Singh and Shivam Tripathi (CE)


1
• Copyright:
The instructor of this course owns the copyright of all the course materials.
This lecture material was distributed only to the students attending the
course ESO208A: Computational methods in Engineering of IIT Kanpur and
should not be distributed in print or through electronic media without the
consent of the instructor. Students can make their own copies of the course
materials for their use.

2
Example Problem

f_5(x) = x^5 − 3.5x^4 + 2.75x^3 + 2.125x^2 − 3.875x + 1.25

Use initial guesses of r = s = −1 and iterate to ε_a ≤ 0.1%

3
Example Problem

f_5(x) = x^5 − 3.5x^4 + 2.75x^3 + 2.125x^2 − 3.875x + 1.25

Use initial guesses of r = s = −1 and iterate to ε_a ≤ 0.1%

Soln:
Step 1: Input a_0, a_1, … a_n and initialize r and s.

Here n = 5;

a_0 = 1.25; a_1 = −3.875; a_2 = 2.125; a_3 = 2.75; a_4 = −3.5; a_5 = 1
r = s = −1
4
Example Problem
Step 2: compute b_0, b_1, … b_n using the recursive relations derived.

Here, n = 5:
b_5 = a_5 = 1
b_4 = a_4 + r b_5 = −3.5 − 1 = −4.5
b_3 = a_3 + r b_4 + s b_5 = 2.75 + 4.5 − 1 = 6.25
b_2 = a_2 + r b_3 + s b_4 = 2.125 − 6.25 + 4.5 = 0.375
b_1 = a_1 + r b_2 + s b_3 = −3.875 − 0.375 − 6.25 = −10.5
b_0 = a_0 + r b_1 + s b_2 = 1.25 + 10.5 − 0.375 = 11.375
5
Example Problem
Step 3: compute c_1, … c_n using the recursive relations derived.

Here, n = 5:
c_5 = b_5 = 1
c_4 = b_4 + r c_5 = −4.5 − 1 = −5.5
c_3 = b_3 + r c_4 + s c_5 = 6.25 + 5.5 − 1 = 10.75
c_2 = b_2 + r c_3 + s c_4 = 0.375 − 10.75 + 5.5 = −4.875
c_1 = b_1 + r c_2 + s c_3 = −10.5 + 4.875 − 10.75 = −16.375
6
Example Problem
Step 4: compute Δr and Δs from

c_2 Δr + c_3 Δs = −b_1  →  −4.875 Δr + 10.75 Δs = 10.5
c_1 Δr + c_2 Δs = −b_0  →  −16.375 Δr − 4.875 Δs = −11.375

Solving: Δr = 0.3558 and Δs = 1.1381

Step 5: compute r_new = r + Δr, s_new = s + Δs

r_new = −1 + 0.3558 = −0.6442, s_new = −1 + 1.1381 = 0.1381

Step 6: check for convergence: |Δr/r_new|, |Δs/s_new| ≤ ε; |b_0|, |b_1| ≤ ε′

ε_{a,r} = |0.3558 / (−0.6442)| × 100% = 55.23%
ε_{a,s} = |1.1381 / 0.1381| × 100% = 824.1%

Step 7: Stop if all convergence checks are satisfied. Else, set r = r_new, s = s_new and go to step 2.
7
Revision of Solution of Non-linear Equations

1. Graphical method: provides insight but is tedious/subjective

2. Bracketing methods (guaranteed convergence; linear or better convergence)
   1. Bisection method
   2. False position method
   3. Modified false position method

3. Open methods (may diverge)
   1. Fixed-point iteration: linear convergence
   2. Newton-Raphson: quadratic convergence; problems near zero gradient
   3. Secant & modified secant: convergence between linear and quadratic
Revision of Solution of Non-linear Equations

Hybrid methods (combination: bracketing method at the beginning, open method near convergence)
1. Dekker method
2. Brent method

Multiple roots
1. Bracketing methods: only for roots of odd multiplicity
2. Newton-Raphson: linear convergence
3. Modified Newton-Raphson: quadratic convergence
   a. Known multiplicity
   b. Derivative function u(x) = f(x)/f'(x)
Revision of Solution of Non-linear Equations

Roots of polynomials
1. Evaluation of polynomials
2. Division of polynomials
3. Deflation of polynomials
4. Effective degree of polynomials
Methods for finding roots (real and complex):
1. Müller method
2. Bairstow method
Revision of Solution of Non-linear Equations

1. Except for rare cases, computers will provide approximate solutions.
2. No method is "universally" better than the others.
3. Domain knowledge should guide the selection of the algorithm and the guess value(s).
Comparison of different algorithms
System of linear equations
Preliminaries

Important square matrices:


1. Symmetric Matrix

2. Diagonal Matrix

5
Preliminaries

3. Identity Matrix

4. Upper Triangular Matrix

All elements below the main diagonal are zero


6
Preliminaries

5. Lower Triangular Matrix

All elements above the main diagonal are zero


6. Banded Matrix

All elements are zero except for a band centered on the main diagonal.

Band Width = a + b - 1
7
Preliminaries

7. Sparse Matrix
Most of the elements are zero

8. Dense Matrix
Most of the elements are non-zero

9. Positive Definite Matrix


A symmetric matrix, such that xTAx is positive for every non-zero column vector x of n real
number

8
Solution of system of linear equations

• Direct Methods:
• One obtains the exact solution (ignoring round-off errors) in a finite number of steps.
• These methods are more efficient for dense and banded matrices.
• Gauss Elimination; Gauss-Jordan Elimination; LU-Decomposition; Thomas Algorithm (for tri-diagonal banded matrices)
• Iterative Methods:
• The solution is obtained through successive approximation.
• The number of computations is a function of the desired accuracy/precision of the solution and is not known a priori.
• More efficient for sparse matrices.
• Jacobi iteration; Gauss-Seidel iteration with successive over/under relaxation
9
Solution of system of linear equations

Graphical Interpretation

Let us take two linear equations with two unknowns


a1x1+b1x2=c1
a2x1+b2x2=c2
a) Unique Solution

10
Solution of system of linear equations

b) Unique Well Conditioned Solution c) No Solution (Singular)

d) Infinite Solution (Singular) e) Unique Ill conditioned

11
Solution of system of linear equations

Direct Methods

1) If A is I (the identity matrix): x = b.  2) If A is a diagonal matrix: x_i = b_i / a_ii.
12
Solution of system of linear equations

Direct Methods

3) If A is an upper triangular matrix: solve by back substitution.  4) If A is a lower triangular matrix: solve by forward substitution.

13
Solution of system of linear equations

Direct Methods

If the coefficient matrix A is "full", one can use:

• Gauss Elimination
• Gauss-Jordan Elimination
• LU-Decomposition
• Thomas Algorithm (for tri-diagonal banded matrices)

All these methods belong to the family of Gauss elimination; Gauss elimination is one of the most ubiquitous algorithms.

E: ax + by + cz = d
If we multiply or divide both sides by a constant, or add or subtract equations, nothing is going to change!
14
Direct Methods: Gauss Elimination
Gauss Elimination for the matrix equation Ax = b:

Approach in two steps:


a) Operating on rows of matrix A and vector b, transform the matrix A to an upper
triangular matrix.
b) Solve the system using Back substitution algorithm.

Indices:
• i: Row index
• j: Column index
• k: Step index
Gauss Elimination
Gauss Elimination Algorithm

Forward Elimination:
For k = 1, 2, …, (n − 1):
Define the multiplication factors: l_ik = a_ik^(k) / a_kk^(k)
Compute: a_ij^(k+1) = a_ij^(k) − l_ik a_kj^(k);  b_i^(k+1) = b_i^(k) − l_ik b_k^(k)
for i = k+1, k+2, …, n and j = k+1, k+2, …, n

The resulting system of equations is upper triangular. Solve it using the Back-Substitution algorithm:
x_n = b_n / a_nn;  x_i = (b_i − Σ_{j=i+1..n} a_ij x_j) / a_ii, for i = n−1, …, 1
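A minimal sketch of naive Gauss elimination with back substitution (no pivoting; the 3×3 test system follows C&C's worked example and is an assumption here):

import numpy as np

def gauss_eliminate(A, b):
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    for k in range(n - 1):                 # forward elimination
        for i in range(k + 1, n):
            l = A[i, k] / A[k, k]          # multiplication factor l_ik
            A[i, k:] -= l * A[k, k:]
            b[i] -= l * b[k]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):         # back substitution
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = np.array([[3.0, -0.1, -0.2], [0.1, 7.0, -0.3], [0.3, -0.2, 10.0]])
b = np.array([7.85, -19.3, 71.4])
print(gauss_eliminate(A, b))   # approx. [3, -2.5, 7]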
Summary

• Different types of matrices


• Graphical interpretation of solution of system of linear
equations
• Gauss-Elimination Method
Recap

• What is a system of linear equations?


• Different kind of matrices
• Direct method-Gauss Elimination Method

Today’s lecture
• Situations under which Gauss Elimination method will not work
• Gauss Jordan Method
• How to find algorithm complexity?
• LU decomposition method
Gauss Elimination Algorithm

Forward Elimination:
For k = 1, 2, …, (n − 1):
Define the multiplication factors: l_ik = a_ik^(k) / a_kk^(k)
Compute: a_ij^(k+1) = a_ij^(k) − l_ik a_kj^(k);  b_i^(k+1) = b_i^(k) − l_ik b_k^(k)
for i = k+1, k+2, …, n and j = k+1, k+2, …, n

The resulting system of equations is upper triangular. Solve it using the Back-Substitution algorithm:
x_n = b_n / a_nn;  x_i = (b_i − Σ_{j=i+1..n} a_ij x_j) / a_ii, for i = n−1, …, 1
Gauss Elimination

Difficult Cases

a) Division by zero
• If the pivot element is zero, l_21 cannot be calculated; exchange the rows. Which one to pick?
• Does it matter? Yes, in terms of round-off error.
• When we switch rows, it is called row pivoting or partial pivoting.
• When we switch columns, it is column pivoting. In this case, we need to reorder the unknowns.
• When we switch both, it is total pivoting.
5
Gauss Elimination

Difficult Cases

b) Ill-conditioned

x1 + 2x2 = 10
1.1x1 + 2x2 = 10.4
Solution: x1 = 4, x2 = 3

Now if I slightly change the coefficients:
x1 + 2x2 = 10
1.05x1 + 2x2 = 10.4
Solution: x1 = 8, x2 = 1

• By just changing a coefficient slightly, the solution changes significantly. This is very costly.
• Can we, without solving, find whether the system is ill-conditioned?

6
Gauss Elimination

Difficult Cases

c) Round-off Error

True Solution: x1=10, x2=1

What if you are using a computer that has four significant digits

7
Gauss Elimination

Difficult Cases

c) Round-off Error

• The computed solution is very different from the actual solution.
• Before solving the problem, can we know whether our system will have a round-off problem?

8
Gauss Elimination

Difficult Cases: Options for handling

a) Ill-Conditioned

• If the determinant is close to zero: ill-conditioned.
• If the determinant is zero: singular.

9
Gauss Elimination

Difficult Cases: Options for handling

a) Ill-Conditioned
Can we use the determinant as a measure of ill-conditioning?

Suppose in the example we multiply the equations by 10. Now the determinant is significantly different from 0, yet the system is just as ill-conditioned. On its own, D is not a good measure of ill-conditioning.

10
Gauss Elimination

Difficult Cases: Options for handling

The three issues mentioned earlier can be avoided by:
• Use of more significant digits
• Pivoting: row or partial pivoting, i.e. exchanging rows of the augmented matrix
– Exchange rows so that the pivot element has the largest possible magnitude

11
Gauss Elimination

Difficult Cases: Options for handling

Example:

12
Gauss Elimination

Difficult Cases: Options for handling

Example:

We did row
pivoting

13
Gauss Elimination

Difficult Cases: Options for handling

Example:

We did total
pivoting

x1=10, and x2=1


14
Gauss Elimination

Difficult Cases: Options for handling

Why has pivoting worked?

15
Gauss Elimination

Difficult Cases: Options for handling

Why has pivoting worked?

• Even after making the pivot large, we can still get round-off error.
• It is not the magnitude of the pivot element but the relative magnitude of the elements that leads to round-off error.
• The scaling of the elements of A governs the round-off errors.

16
Gauss Elimination

Difficult Cases: Options for handling

Scaling

17
Gauss Elimination

Difficult Cases: Options for handling

Scaling

Perform pivoting by using the scaled coefficients, but perform the computations (GE) using the original coefficients.

18
Gauss Elimination

Difficult Cases: Options for handling

Scaling

Perform pivoting by using the scaled coefficients, but perform the computations (GE) using the original coefficients.

19
Gauss Elimination

Difficult Cases: Options for handling

Most common implementations of GE:
• Use scaled values of the coefficients as the criterion to decide pivoting
• Retain the original coefficients for the actual elimination and substitution
• "There is no general pivoting strategy that will work for all linear systems"
– Example: if the coefficient matrix is positive definite, the BEST strategy is no interchange
• If you know any special characteristics of the system, use them to decide the pivoting strategy.

20
Direct Methods: Gauss Jordan

In this method, the coefficient matrix is reduced to an identity matrix.

• Requires a minor modification of the GE algorithm:
– At each step, the pivot element is first made unity by dividing the pivot equation by the pivot element
– In addition to the sub-diagonal elements, the above-diagonal elements are also made 0.
Direct Methods: Gauss Jordan

Example
Summary

• Under what situations will Gauss elimination not work?

• Gauss Jordan Method


Comparison

Which of these two algorithms is better? Criteria:

1. Minimum round-off errors (condition number is small)
2. Minimum storage requirement
3. Minimum computational time
4. Programming ease (subjective)

Computing time depends on:
• Speed of the computer
• Programming language
• Input data
• Algorithm
Comparison

Computational or Algorithm Complexity

• Instead of measuring time in micro-seconds, we measure time in terms of the number of basic steps executed by the algorithm.

• Basic steps: (+, −, ×, /, assignment, comparison)

• Instead of representing algorithm complexity as a single number, we represent it in terms of the size of the data.
Comparison: Algorithm Complexity

Example 1: Sum of n numbers, X = [x1, x2, x3, …, xn]

Operations:
• sum = 0 (1 assignment)
• Within the for loop: n assignments, n summations

Total number of operations = 2n + 1, i.e. it grows linearly with n

Comparison: Algorithm Complexity

Example 2: Sum and product of n numbers, X = [x1, x2, x3, …, xn]

Operations:
• sum = 0, product = 0 (2 assignments)
• Within the for loop: 2n assignments, n summations, n products

Total number of operations = 4n + 2, still linear in n


Comparison: Algorithm Complexity

Example 3: Sum of all possible pairs, X = [x1, x2, x3, …, xn]

Total number of operations ∝ n^2
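A small sketch contrasting the two growth rates (the loop structures, not the exact constants, are what matter):

def sum_all(x):
    total = 0.0
    for xi in x:                  # n additions: O(n)
        total += xi
    return total

def sum_of_pairs(x):
    total = 0.0
    for xi in x:
        for xj in x:              # n * n additions: O(n^2)
            total += xi + xj
    return total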


Comparison: Algorithm Complexity

Two things:
1) Worst-Case Scenario
Find a number x0 in the vector X.

The number of basic steps depends on the location of x0; complexity is quoted for the worst case.
Comparison: Algorithm Complexity

Two things:
2) Asymptotic Analysis
• Any algorithm is sufficiently efficient for small inputs.
• When comparing algorithms for computational time, one is interested in very large inputs.
• As a proxy for "very large", asymptotic analysis considers the size of the input data tending to infinity.
• "Big O" gives an upper bound on the asymptotic growth of the algorithm.
• If the complexity of the function/algorithm is O(n^2), it means that for the worst case, on the order of n^2 steps are needed to evaluate the function when n is very large.
Comparison: Algorithm Complexity

Two things:
2) Asymptotic Analysis
• If the computation time is the sum of multiple terms, keep the term which has the largest growth rate and drop the others.
• So, if the number of basic steps is n^2 + n + c,
• as n → ∞, n^2 is what we are worried about.
Comparison: Algorithm Complexity

Common Complexity Classes


Comparison: Algorithm Complexity

Computational Complexity of GE and GJ


Comparison: Algorithm Complexity
Gauss Elimination

Pseudo code for Gauss elimination


(Source: Chapra and Canale)
Comparison: Algorithm Complexity
Gauss Elimination
On the first pass, k = 1:

• The limits of the middle loop are 2 to n
• The number of iterations in the middle loop is n − 1
• For every iteration of the middle loop:
• The number of multiplication/division operations is n + 1 (one division for the factor, then multiplications across the row and the right-hand side)
• The number of subtractions is n
• Total multiplications/divisions for the first pass = (n − 1)(n + 1)
• Total subtraction operations for the first pass = (n − 1)(n)


Comparison: Algorithm Complexity
Gauss Elimination
Comparison: Algorithm Complexity
Gauss Elimination
The total addition/subtraction operations can be computed as

Σ_{k=1..n−1} (n − k)(n − k + 1)

Applying some of the relationships mentioned earlier, this sums to n^3/3 + O(n).

Doing a similar analysis for multiplication and division gives n^3/3 + O(n^2).

Total number of floating point operations: 2n^3/3 + O(n^2), so the effort grows as n^3.


Comparison: Algorithm Complexity
Gauss Elimination
Summary

• How to determine algorithm complexity?


LU decomposition

Consider the system


Ax = b
• In most engineering problems, the matrix A remains constant while the
vector b changes with time.
• The matrix A describes the system and the vector b describes the
external forcing. e.g., all network problems (pipes, electrical, canal,
road, reactors, etc.); structural frames; many financial analyses.
• If all the b's are available together, one can solve the system with an augmented matrix; but in practice, they are not!
LU decomposition
For the system,
Ax = b
• Perform a decomposition of the form A = LU, where L is a lower-
triangular and U is an upper-triangular matrix!
• For any given b, solve Ax = LUx = b
• This is equivalent to solving two triangular systems:
• Solve Ly = b using forward substitution to obtain y
• Solve Ux = y using back substitution to obtain x
• Most frequently used method for engineering applications!
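A minimal sketch of the two triangular solves (L and U come from any of the decompositions below; the helper name is ours):

import numpy as np

def solve_lu(L, U, b):
    n = len(b)
    y = np.zeros(n)
    for i in range(n):                     # forward substitution: L y = b
        y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):         # back substitution: U x = y
        x[i] = (y[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

For each new right-hand side b, only these O(n^2) substitutions are repeated; the O(n^3) factorization A = LU is done once.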
LU decomposition

A = LU (for a 3 × 3 matrix: L has 6 unknown entries, U has 6)

12 unknowns and 9 equations! 3 free entries!

In general, n^2 equations and n^2 + n unknowns! n free entries!
LU decomposition

In general, n^2 equations and n^2 + n unknowns, n free entries!

It means we cannot have a unique solution for l_ij and u_ij. However, if we fix n entries, we will get a unique solution.

LU decomposition theorem:
If A is a square matrix of size n × n and all its leading principal sub-matrices are non-singular, then there exist a lower triangular matrix L and an upper triangular matrix U such that A = LU.
Further, if the diagonal elements of either L or U are unity, i.e. l_ii = 1 or u_ii = 1 for i = 1, 2, …, n, then both L and U are unique.
LU decomposition
How to get the elements of both L and U:
1. Gauss elimination (gives both L and U, with l_ii = 1)
2. Doolittle method (l_ii = 1)
3. Crout method (u_ii = 1)
4. Thomas algorithm: tri-diagonal matrices
5. Cholesky algorithm: positive definite matrices
LU decomposition
1. Gauss Elimination Method for L and U
LU decomposition
Gauss Elimination Method for L and U
LU decomposition
Gauss Elimination Method for L and U
LU decomposition
Comparison of GE and LU
LU decomposition
Comparison of GE and LU
LU decomposition
2. Crout’s Method
LU decomposition
2. Crout’s Method
LU decomposition
3. Doolittle Method
Summary

• What is LU decomposition?
• Crout's method
• Doolittle method
Recap

• Pitfalls of the Gauss elimination method
• Gauss-Jordan method
• LU decomposition: Gauss elimination, Doolittle, Crout

Today's lecture
• Thomas algorithm
• Cholesky decomposition
• Forward error analysis
• Indirect methods: Gauss-Seidel and Jacobi iterative methods
LU decomposition

Thomas Algorithm (Tri-diagonal Matrix)


LU decomposition

Thomas Algorithm (Tri-diagonal Matrix)
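The algorithm itself was worked on the board; a minimal sketch of the standard tridiagonal (Thomas) solve, with our assumed array layout (a = sub-diagonal with a[0] unused, b = diagonal, c = super-diagonal with c[-1] unused, d = right-hand side):

import numpy as np

def thomas(a, b, c, d):
    n = len(d)
    b = b.astype(float).copy()
    d = d.astype(float).copy()
    for i in range(1, n):                  # forward elimination, O(n)
        m = a[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]
    x = np.zeros(n)
    x[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):         # back substitution
        x[i] = (d[i] - c[i] * x[i + 1]) / b[i]
    return x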


LU decomposition

Cholesky Decomposition (for +ve definite matrix)


Diagonalization (LDU theorem):
Let A be a n × n invertible matrix then there exists a decomposition of
the form A = LDU where, L is a n × n lower triangular matrix with
diagonal elements as 1, U is a n × n upper triangular matrix with diagonal
elements as 1, and D is a n × n diagonal matrix.

Example of a 3 × 3 matrix:
LU decomposition

Cholesky Decomposition (for +ve definite matrix)


For a symmetric matrix: A = A^T.

This implies LDU = (LDU)^T = U^T D L^T.

For a symmetric matrix: U = L^T and A = LDL^T.


Note that the entries of the diagonal matrix D are the pivots!
LU decomposition
Cholesky Decomposition (for +ve definite matrix)
• For positive definite matrices, pivots are positive!
• Therefore, a diagonal matrix D containing the pivots can be factorized as: D = D1/2D1/2
• Example of a 3 × 3 matrix

• For positive definite matrices: A = LDLT = L D1/2D1/2 LT


• However, D1/2LT = (LD1/2)T. Denote: L D1/2 = L1
• Therefore, A = L1 L1T . This is also a LU-Decomposition where one needs to evaluate only
one triangular matrix L1.
LU decomposition
Cholesky Decomposition (for +ve definite matrix)
LU decomposition
Cholesky Decomposition (for +ve definite matrix)
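A minimal sketch computing L1 directly from A = L1 L1^T (the loop is the standard Cholesky recursion; the 2×2 test matrix is our assumption):

import numpy as np

def cholesky(A):
    n = A.shape[0]
    L = np.zeros_like(A, dtype=float)
    for i in range(n):
        for j in range(i + 1):
            s = A[i, j] - L[i, :j] @ L[j, :j]
            L[i, j] = np.sqrt(s) if i == j else s / L[j, j]
    return L

A = np.array([[4.0, 2.0], [2.0, 3.0]])    # a small positive definite matrix
L = cholesky(A)
print(L @ L.T)                             # reproduces A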
Summary

• Thomas algorithm

• Cholesky decomposition
Recap

• Direct Methods:
• Gauss Elimination,
• Gauss-Jordan Elimination,
• LU-Decomposition,
• Thomas Algorithm (for tri-diagonal banded matrix)
• Cholesky Decomposition

3
LU decomposition
Error Analysis
LU decomposition
For Error Analysis, we need to first understand vector and matrix
norms
Vector Norm
A vector norm is a measure (in some sense) of the size or "length" of a vector.
• Properties of a vector norm: ‖x‖ ≥ 0, with ‖x‖ = 0 iff x = 0; ‖αx‖ = |α| ‖x‖; ‖x + y‖ ≤ ‖x‖ + ‖y‖

• Lp-norm of a vector x: ‖x‖_p = (Σ_i |x_i|^p)^(1/p)

• Example norms:
• p = 1: sum of the absolute values
• p = 2: Euclidean norm
• p → ∞: maximum absolute value
LU decomposition
Matrix Norm: A matrix norm is a measure of the size of a matrix.
• Properties of a matrix norm: ‖A‖ ≥ 0, with ‖A‖ = 0 iff A = 0; ‖αA‖ = |α| ‖A‖; ‖A + B‖ ≤ ‖A‖ + ‖B‖; ‖AB‖ ≤ ‖A‖ ‖B‖
• ‖Ax‖ ≤ ‖A‖ ‖x‖ for consistent matrix and vector norms
• Lp norm of a matrix A: ‖A‖_p = max over x ≠ 0 of ‖Ax‖_p / ‖x‖_p
LU decomposition
Matrix Norms:

• Column-sum norm: ‖A‖_1 = max_j Σ_i |a_ij|

• Spectral norm: ‖A‖_2 = sqrt(max_j λ_j), where λ_j are the eigenvalues of the square symmetric matrix A^T A

• Row-sum norm: ‖A‖_∞ = max_i Σ_j |a_ij|

• Frobenius norm: ‖A‖_F = sqrt(Σ_i Σ_j a_ij^2) = sqrt(trace(A^T A))

The trace of a matrix is the sum of the elements on its main diagonal.

LU decomposition

Matrix Norm
• Spectral radius: the largest absolute eigenvalue of matrix A, denoted ρ(A)

• If there are m distinct eigenvalues of A: ρ(A) = max over 1 ≤ j ≤ m of |λ_j|

• ρ(A) is a lower bound for all matrix norms: for any norm of matrix A, ‖A‖ ≥ ρ(A)

LU decomposition
Matrix Norm:
LU decomposition
Condition Number

C(A) = ‖A‖ ‖A^(−1)‖ ≥ 1

Relative-error amplification: ‖Δx‖/‖x‖ ≤ C(A) ‖ΔA‖/‖A‖ (and similarly for perturbations of b).
If we change 1.1 to 1.05, what would be the corresponding change in x
Condition Number
Condition Number
Recall: the determinant is not a good measure of the ill- or well-conditioning of the matrix.

The measure C(A) is independent of scaling, which is a good thing.

Condition Number
Question: It is always recommended that after estimating x, you substitute it into the equation and check whether the equation is satisfied. Is the residual r = b − Ax̂ a good measure of the error e? (Not necessarily: ‖e‖/‖x‖ can be as large as C(A) ‖r‖/‖b‖.)
Condition Number
Question: It is always recommended that after estimating X, substitute it in
the equation and see whether the equation is satisfied or not.
Iterative Refinement or Improvement
Summary

• Forward error analysis


• Vector norm and matrix norm
• Condition number of a matrix
Recap

• Direct Methods:
• Gauss Elimination,
• Gauss-Jordan Elimination,
• LU-Decomposition,
• Thomas Algorithm (for tri-diagonal banded matrix)
• Cholesky Decomposition

• Forward Error Analysis

3
Indirect Methods

Indirect or Iterative Methods

• Jacobi iteration
• Gauss-Seidel
• Relaxation technique

All these methods are versions of fixed-point iteration for linear systems of equations.
Fixed-Point Method
Fixed-Point Method
Jacobi and Gauss-Seidel

• Jacobi iteration: x_i^(k+1) = (b_i − Σ_{j≠i} a_ij x_j^(k)) / a_ii

• Gauss-Seidel: x_i^(k+1) = (b_i − Σ_{j<i} a_ij x_j^(k+1) − Σ_{j>i} a_ij x_j^(k)) / a_ii, i.e. the newest values are used as soon as they are available

• The Gauss-Seidel method is usually faster than the Jacobi method

• Relaxation method: a way to improve convergence
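Minimal sketches of both iterations (a fixed iteration count is used for brevity; a convergence test like the ε_a check would normally replace it, and the small test system is our assumption):

import numpy as np

def jacobi(A, b, x0, n_iter=50):
    x = x0.astype(float).copy()
    D = np.diag(A)
    for _ in range(n_iter):
        x = (b - (A @ x - D * x)) / D      # all updates use old values
    return x

def gauss_seidel(A, b, x0, n_iter=50):
    x = x0.astype(float).copy()
    n = len(b)
    for _ in range(n_iter):
        for i in range(n):                 # new values used immediately
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x

A = np.array([[4.0, 1.0], [2.0, 5.0]])     # diagonally dominant
b = np.array([9.0, 19.0])
print(gauss_seidel(A, b, np.zeros(2)))     # approx. [1.444, 3.222]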
Jacobi and Gauss-Seidel

• Example

• Jacobi Iteration
Jacobi and Gauss-Seidel

• Gauss-Seidel
Jacobi and Gauss-Seidel

Comments
• Useful when dealing with large sparse systems.
• To save computation time, divide each equation by its diagonal element once. It saves computation but can introduce round-off error.
• Convergence is not guaranteed (as with fixed-point methods). If you get convergence, it is linear.
Jacobi and Gauss-Seidel

Comments
• Convergence Criteria
Jacobi and Gauss-Seidel

• Convergence criterion:
• The magnitude of the diagonal element should be greater than the sum of the absolute values of all the off-diagonal elements in its row. Such systems are called diagonally dominant.
• This criterion for convergence is sufficient but not necessary, i.e. the method may converge even if the criterion is not met.
Relaxation Techniques

After each Gauss-Seidel update, blend the new and old values:
x_i^new = λ x_i^GS + (1 − λ) x_i^old, with 0 < λ < 2
(λ < 1: under-relaxation; λ > 1: over-relaxation)
Relaxation Techniques

• Example
Relaxation Techniques

• Example

ε_a is the maximum of the relative errors in x, y and z.
Relaxation Techniques
• Example: suppose it is given that ε_a < 0.1% is required. If any of the variables exceeds the threshold, go to the next iteration.

Step 1: …

After 7 iterations, the solution converges. Try it!
Relaxation Techniques

How do we get the optimal value of λ?

• Problem specific
• The usual procedure is to do an empirical evaluation
– Useful when the system has to be solved a number of times
• One can then use this λ when solving for x for different values of b

GAPS

• Why is GS faster than Jacobi?
• Why is the convergence criterion sufficient (but not necessary)?
• Why do the relaxation techniques work?
• Why does the range 0 < λ < 2 work?

To answer these, we need to study Eigen Values and Eigen Vectors.

Summary

• Gauss-Seidel
• Jacobi Iteration
• Successive Over Relaxation Technique
GAPS

• Why is GS faster than Jacobi?
• Why is the convergence criterion sufficient (but not necessary)?
• Why do the relaxation techniques work?
• Why does the range 0 < λ < 2 work?

To answer these, we need to study Eigen Values and Eigen Vectors.
Eigen Values and Eigen Vectors

For a square matrix A, a non-zero vector v and a scalar λ satisfying A v = λ v are called an eigenvector and the corresponding eigenvalue of A.

Example
Eigen Values and Eigen Vectors

Example

Unit vectors
Characteristics of Eigen Vectors

1. The vectors v_1, v_2, …, v_n are called linearly independent iff

c_1 v_1 + c_2 v_2 + … + c_n v_n = 0 implies c_1 = c_2 = … = c_n = 0;

else they are linearly dependent.
Characteristics of Eigen Vectors

2. Any n linearly independent vectors form a “basis” for the space, i.e. any


vector in the space can be expressed uniquely as a linear combination of the basis
vectors:

w = c1v1 + c2v2 + … + cnvn [the ci's are unique: the components of w w.r.t. the basis]

3. A matrix is non-singular if its columns [v1, v2, …, vn] are linearly


independent.
Characteristics of Eigen Vectors

4. Eigen Values
If A is an n × n matrix of real numbers with λ1, λ2, …, λn as eigen values:
Product of eigen values = det(A)
Sum of eigen values = trace(A)
Summary

• Eigen Values and Vectors

• Characteristics of Eigen Vectors


Estimation of Eigen Values

• Largest eigenvalue: Power Method


• Smallest eigenvalue: Inverse Power Method
• All Eigen values:
– Inverse power method with shift
– Faddeev-Leverrier method
– QR Decomposition
Power Method

Direct Power Method: Used to find the largest [in terms of abs value] eigen value and
corresponding eigen vector

Keep doing the iterations till convergence


Power Method
Algorithm

Algorithm of Direct Power Method


1. For an n × n matrix A, start with a guess vector x of size n × 1

2. Multiply: y = Ax

3. Find the scaling factor S (the component of y with the largest absolute value)

4. Divide each component of y by S to get the new x

5. Repeat 2 to 4, unless the change in S is negligible.

• Some books suggest that you pick one component of x and keep making it 1 after
every iteration. It is a correct way, but may result in division by 0.
• If the algorithm is converging, the element corresponding to the maximum value will
not change
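A minimal Python sketch of steps 1 to 5; the 2×2 test matrix and the tolerance are illustrative assumptions:

import numpy as np

def power_method(A, x0, tol=1e-10, max_iter=1000):
    # Direct power method: dominant eigenvalue estimate and eigenvector
    x = x0.astype(float)
    s_old = 0.0
    for _ in range(max_iter):
        y = A @ x                          # step 2: multiply
        s = y[np.argmax(np.abs(y))]        # step 3: scaling factor (signed)
        x = y / s                          # step 4: normalize by S
        if abs(s - s_old) < tol:           # step 5: change in S negligible?
            break
        s_old = s
    return s, x

A = np.array([[2.0, 1.0], [1.0, 3.0]])
lam, v = power_method(A, np.ones(2))
print(lam)   # approaches the dominant eigenvalue (~3.618 here)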
Why Direct Power Method Works?

When the algorithm converges, the scaling coefficient becomes the eigen value. Consequently, scaling by a
particular component of the vector Y at each iteration essentially factors it out. So, this equation (1)
attains a finite value as k tends to infinity, and the scaling factor S approaches the dominant eigen value λ1.
Why Direct Power Method Works?

Remark:

• The method will work only if the largest eigen value is distinct (non-repeated)
• The eigen vectors should be independent. If all the eigen values are distinct, the eigen
vectors will be independent. Otherwise, it is still possible that the vectors are independent,
but it is not guaranteed.
• The initial guess vector should contain a component of the dominant eigen vector v1, i.e. c1 ≠ 0

• The convergence rate is proportional to |λ2|/|λ1|, where λ1 is the largest eigen value and

λ2 is the second largest, in terms of absolute values


Power Method

Inverse Power Method: Used to find the smallest [in terms of abs value] eigen value
and corresponding eigen vector
Power Method
Power Method

Shifted Power Method:


Power Method

Shifted Power Method can be used to find the extreme eigen values
when matrix inversion is to be avoided
Estimation of Intermediate Eigen Values

Gershgorin's Disc Theorem: The theorem can be used in conjunction with the
inverse power method to improve the convergence rate.
Summary

• Power Method

• Inverse Power Method

• Shifted Power Method


Estimation of Intermediate Eigen Values
QR method: Finds all the intermediate eigen values

Remarks:
1) The eigen values of a diagonal matrix are its diagonal entries

2) The same holds for an upper triangular matrix
Estimation of Intermediate Eigen Values
QR method: It is analogous to the Gauss Elimination method

In Gauss Elimination: the transformation should not change the solution

In the QR method: the transformation should not change the eigen values
Estimation of Intermediate Eigen Values
Similarity Transformation:
• Two n × n matrices A and B are similar if there exists another n × n invertible
matrix M such that A = MBM-1 or B = M-1AM
• The process of obtaining the similar matrix B from matrix A using the relation B =
M-1AM is called similarity transformation!
• Similar matrices have the same eigenvalues!
• Eigen vectors, however, are different
Estimation of Intermediate Eigen Values
How do we get this M? What M will transform A into a diagonal or upper
triangular matrix?
The QR method gives that answer.

Q
• Q is an orthogonal matrix. It is a square matrix whose columns are
orthonormal vectors

• Orthogonal vectors: The vectors are perpendicular to each other

• Normal vectors: Vectors of unit length


• Orthonormal vectors: Vectors of unit length and are perpendicular to
each other
• We denote orthonormal vectors by q1, q2, …
Estimation of Intermediate Eigen Values
QR Method
Algorithm

To be done using the Gram-Schmidt Process
QR Method
Gram-Schmidt Process

Given two independent vectors a1 and a2, the Gram-Schmidt process


transforms these vectors such that
i) they are perpendicular to one another
ii) they have unit length
QR Method
Gram-Schmidt Process
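Since the slides leave the algebra to the lecture, here is a hedged Python sketch of classical Gram-Schmidt and the basic QR iteration built on it; the test matrix and iteration count are assumptions for illustration:

import numpy as np

def gram_schmidt_qr(A):
    # Classical Gram-Schmidt: columns of Q orthonormal, R upper triangular
    n = A.shape[1]
    Q = np.zeros_like(A, dtype=float)
    R = np.zeros((n, n))
    for j in range(n):
        v = A[:, j].copy()
        for i in range(j):
            R[i, j] = Q[:, i] @ A[:, j]   # projection coefficient
            v -= R[i, j] * Q[:, i]        # remove component along q_i
        R[j, j] = np.linalg.norm(v)
        Q[:, j] = v / R[j, j]             # normalize to unit length
    return Q, R

def qr_eigenvalues(A, n_iter=200):
    # Basic QR iteration: A_k tends toward (quasi-)upper triangular form
    Ak = A.astype(float)
    for _ in range(n_iter):
        Q, R = gram_schmidt_qr(Ak)
        Ak = R @ Q                        # similarity transform: eigenvalues preserved
    return np.diag(Ak)                    # diagonal holds eigenvalue estimates

A = np.array([[4.0, 1.0], [2.0, 3.0]])
print(qr_eigenvalues(A))                  # ~[5., 2.]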
Comments on QR method

• Like the power method, the QR method works best for non-defective matrices, i.e. matrices
with a complete basis of eigenvectors

• A matrix with distinct eigenvalues is always non-defective

Most matrices in real-world problems are non-defective

• The QR method yields eigenvalues, the corresponding eigenvectors can be obtained by


shifted-power method

• We have seen the basic QR method, many modifications are available to increase its
convergence

─ Pre-processing the matrix into a more nearly triangular form

─ Shifting the matrix


Modification of QR method - I

• A similarity transform (like the Householder transform) is applied to convert a

─ symmetric matrix into a tridiagonal matrix.

─ non-symmetric matrix into a quasi upper triangular (Hessenberg) matrix

• QR method is then applied on the transformed matrix

• Significant saving in computation cost.


Modification of QR method - II
• Similar to power method, shift the matrix by a scalar

Basic QR method          Shifted QR method

A0 = A                   A0 = A
Ak = Qk Rk               Ak − sI = Qk Rk
Ak+1 = Rk Qk             Ak+1 = Rk Qk + sI
QR Method
GAPS

• Why is GS faster than Jacobi?

• The convergence criterion is sufficient (not necessary)
• Why do the relaxation techniques work?
• Why does this range of ω work?

To answer these, we need to study Eigen Values and Eigen Vectors


Convergence of Jacobi/Gauss-Seidel
Summary on System of Linear Equations and Eigenvalue

Ax = b
• Direct methods
o Gauss elimination [Partial pivoting & scaling]
o Gauss-Jordan
o LU decomposition
o Gauss Elimination
o Compact methods [Doolittle & Crout]
o Tridiagonal [Thomas Algorithm]
o Symmetric Positive Definite [Cholesky]
• Indirect methods [Sparse Matrices]
o Jacobi iterations
o Gauss-Seidel
o SOR
Summary on System of Linear Equations and Eigenvalue

• Computational Complexity [Big O notation]


o Gauss elimination: O((2/3)n3)
o There were theorems suggesting that a general system of n linear
equations cannot be solved in fewer than O(n3) operations
o Methods now exist with O(Cn2.8); the fastest reported is O(Cn2.373)
o These methods are seldom used because C is very large, i.e. break-even
happens when n is very large. In addition, the programming is very
awkward
o Present focus is on improving efficiency in multi-core/parallel machines
Summary on System of Linear Equations and Eigenvalue

• Condition number of the matrices


o Norm of vectors and matrices: L1, L2 and L∞ norms
o Iterative refinement

• Convergence of iterative method


o Sufficient and necessary conditions
Summary on System of Linear Equations and Eigenvalue

Av = λv
• Characteristic equations
• Power methods
o Direct power method
o Inverse power method
o Shifted power method
• QR method
o Gram-Schmidt process
o Improvement in QR by pre-processing and shifting
Approximation of Functions [Curve fitting]
Approximation of Functions

Two kinds of problems

a) Data exhibit a significant degree of scatter

• The objective is to derive a curve that represents the general


trend of the data
• We are looking for approximate fit (Regression)
Temperature in Northern Hemisphere (Hockey-stick Curve)

Regression
Approximation of Functions

Two kinds of problems

b) Data are precise

• Pass the curve or series of curves through each data point


• Exact fit (interpolation)
Pipe Roughness (Moody’s Diagram)
Approximation of Functions

Whether it is a regression or an interpolation problem, what function


shall we fit?
It depends on what we want to do:
• Interpolation/Extrapolation
• Integrate
• Differentiate

We should select a function that is easy to a) determine, b)


evaluate, c) integrate and d) differentiate

There are many choices; traditionally, polynomials are used. Alternatives include:


• Trigonometric functions
• Exponential functions
• Other functions depending upon the application
Approximation of Functions
Qualities of Polynomial Function:

• Easy to determine
• Uniform approximation (Weierstrass Approximation Theorem)
For every continuous and real valued function f(x) in [a, b] and ε > 0, there exists a
polynomial p(x) such that |f(x) − p(x)| < ε for all x in [a, b]

When not to use polynomial basis?


If the functional form or the model is known
Sharp front
Periodic function
• Uniqueness:
– A polynomial of degree ‘n’ passing exactly through ‘n+1’ discrete points is
unique.
– The polynomial may be written in different forms, but all forms represent the same unique polynomial
Approximate Fit (Regression)

We will use the Principle of Least Squares

The Principle of Least Squares
Approximate Fit (Regression)

The Principle of Least Squares
Approximate Fit (Regression)

Can we have other choices?

a) Minimize absolute error

b) Minimize the maximum deviation (min max)

Which is better?
The problem defines this
Regression [Polynomial]

Linear function

Best choice for fitting a linear function

a) Minimize the error (Trivial)


Regression [Polynomial]

Best choice for fitting a linear function

b) Minimize the absolute error

For any line in between the blue

lines, the sum of absolute errors is the same. This
leads to multiple solutions

c) Minimize the maximum error


Regression [Polynomial]

Best choice for fitting a linear function

d) Minimize sum of squared error


Summary

• Why do we need approximation of function?

• Principle of least squares

• Linear Function
Regression [Polynomial]

Fitting of a linear function


Regression [Polynomial]

Fitting of a linear function-Example


Regression [Polynomial]

Fitting of a linear function-How good is the fit?

• The smaller the value of ‘S’, the better the fit

• As of now, S depends on the unit of y (it could be in cm or km). We want a measure
independent of the unit
• Normalization will address this issue
Regression [Polynomial]

Fitting of a linear function-How good is the fit?


Regression [Polynomial]

Fitting of a linear function-How good is the fit?


Regression [Polynomial]

Fitting of higher order polynomials


Regression [Polynomial]

Fitting of higher order polynomials


Regression [Polynomial]

Extending Ordinary Least Squares to a General Basis Function


Regression [Polynomial]

Extending Ordinary Least Squares to a General Basis Function

Polynomial: y = a0 + a1x + … + anxn
For any function:
y = a0 φ0(x) + a1 φ1(x) + … + an φn(x)   (eq. 1)

For a polynomial, φj(x) = xj

• The φj's are the basis functions. If the φj's are trigonometric, then it is a Fourier Transform

• Sometimes you have a basis function associated with y as well
Regression [Polynomial]

Extending Ordinary Least Squares to a General Basis Function


For any function:
y = a0 φ0(x) + … + an φn(x)   (eq. 1)

Equation (1) in general matrix form: y = Z a, where Z is the design matrix with entries Zij = φj(xi)

Regression [Polynomial]

Extending Ordinary Least Squares to a General Basis Function


Design matrix for quadratic case
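As an illustration of the design matrix idea, a small Python sketch for the quadratic case; the data values and the name Z are assumptions for this example. Each row of Z is [1, xi, xi²]:

import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 9.2, 19.1, 33.0])

Z = np.vander(x, 3, increasing=True)        # columns: 1, x, x^2

# Normal equations (Z^T Z) a = Z^T y: fine for low order...
a_normal = np.linalg.solve(Z.T @ Z, Z.T @ y)

# ...but an SVD-based solver is preferred when Z^T Z is ill-conditioned
a_lstsq, *_ = np.linalg.lstsq(Z, y, rcond=None)
print(a_normal, a_lstsq)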
Regression [Polynomial]
Extending Ordinary Least Squares to a General Basis Function

Problem: ZTZ has a large condition number

Fitting a 2nd or 3rd order polynomial is OK, but as we increase the order the condition number


becomes very large.

a) A better way to solve the normal equations: use singular value


decomposition
b) Select the basis functions φj so that the condition number of ZTZ can be
reduced
c) One option is to take basis functions which are orthogonal, so that
ZTZ will be a diagonal matrix
Regression [Polynomial]
Extending Ordinary Least Squares to a General Basis Function
Orthogonal basis function
Regression [Basis Function]
Extending Ordinary Least Squares to a General Basis Function
Orthogonal basis function
Summary

• Fitting of linear function


• Fitting of higher order polynomial
• General Basis Functions
ESO208A: Computational Methods in
Engineering

Richa Ojha
Department of Civil Engineering
IIT Kanpur

Acknowledgements: Profs. Abhas Singh and Shivam Tripathi (CE)


1
• Copyright:
The instructor of this course owns the copyright of all the course materials.
This lecture material was distributed only to the students attending the
course ESO208A: Computational methods in Engineering of IIT Kanpur and
should not be distributed in print or through electronic media without the
consent of the instructor. Students can make their own copies of the course
materials for their use.

2
Recap

• Fitting of linear function


• Fitting of higher order polynomial
• General Basis Functions
Regression [Basis Function]
General Basis Function

Gram-Schmidt Process to make the basis functions orthonormal: Given a


set of independent functions, a set of orthogonal functions
[in the same sub-space] is obtained as:
Regression [Basis Function]
Orthogonal Function

This is 5/8 instead of 5/2
Regression [Basis Function]
Orthogonal Function
Regression [Basis Function]
Orthogonal Function

• The functions are called the Legendre Polynomials.

• In their typical representation, the Legendre polynomials are only proportional to


the φj's, because by convention they are normalized to a length other than 1.
Regression [Basis Function]
Orthogonal Function

• Legendre Polynomials
Legendre Polynomials

P0(x) = 1,  P1(x) = x,
Pn+1(x) = [(2n+1)/(n+1)] x Pn(x) − [n/(n+1)] Pn−1(x)
⟨Pn, Pj⟩ = 0 if n ≠ j;  ⟨Pn, Pn⟩ = 2/(2n+1)
Legendre Polynomials
Orthogonal Polynomials
Regression [Basis Function]
Least Square Regression with Orthogonal Basis Function

• Discrete Case
Regression [Basis Function]
Least Square Regression with Orthogonal Basis Function

• Discrete Case-Example
Regression [Basis Function]
Least Square Regression with Orthogonal Basis Function

• Discrete Case-Example
Regression [Basis Function]
Least Square Regression with Orthogonal Basis Function

• Continuous Case
Regression [Basis Function]
Least Square Regression with Orthogonal Basis Function

• Continuous Case- Example


Regression [Basis Function]
Least Square Regression with Orthogonal Basis Function

• Continuous Case- Example


Example: Function Approximation
Regression
Remarks:

1. Least Squares
If you have outliers in the data, the squared errors become very large

2. We are trying to minimize the errors. When doing so, we are


assuming that the errors are only in y
Regression
Remarks:

2. In many situations, both x and y have errors, e.g. temperature vs density. In


such cases we need a model which minimizes the errors in both x and y

3. Weighted least squares


Sometimes one has more confidence in certain data points; in that case,
do a weighted minimization
Regression
Remarks:

4. Connection with statistics

• One gets the same solution by using the method of maximum


likelihood under the assumption that errors are normally distributed
• This leads to the confusion that least squares is only applicable when the error
is normally distributed
• The assumption of normality is not essential to apply least squares
• If the error follows a Normal Distribution:
– Confidence intervals for the estimates
– Determine the optimum no. of basis functions
Regression
Remarks:

5. Multiple Regression

6. Non-linear Regression
Regression
Remarks:

6. Non-linear Regression

• These two approaches will give different results.


• Linearization is easy but not applicable to all the problems
• The second approach is universal but it is computationally expensive
Regression
Remarks:

7. Basis Function

Polynomial basis functions [1, x, x2, x3]


This kind of basis function is called a Global Basis Function

If I add one more data point,


the entire line will change

The alternative is to use local basis functions (splines, wavelets, radial basis functions)

8. Interpolation vs Regression
Depends on the problem and domain
Summary

• Gram-Schmidt Process


• Least square regression using orthogonal basis function
• General Remarks about least square regression
Interpolation

b) Data are precise

Given (n+1) data points (xi, fi), i = 0, 1, …, n


Objective: Fit an nth order polynomial
Though the polynomial is unique, there are a variety of forms in which
the polynomial can be expressed
• Direct Fit (standard format)
• Lagrange Polynomials
• Newton’s Divided Difference Polynomial
All the methods are applicable if the data has arbitrary spacing
Direct Fit

Vandermonde matrix has large condition number


Direct Fit-Example

Whenever you get data,


arrange it in such a way
that the first point is nearest
to the point where you
want to interpolate

y* = 0.5234;

1.07 % error
Direct Fit-Example
Lagrange’s Polynomial
Lagrange’s Polynomial
Lagrange’s Polynomial
Lagrange’s Polynomial-Example
Newton’s Divided Difference Formula

The divided difference is defined as the ratio of the difference between


function values at two points to the difference between the two points.

Example
Newton’s Divided Difference Formula

Example

First Divided Difference

Second Divided Difference

Nth Divided Difference


Newton’s Divided Difference Formula
Newton’s Divided Difference Formula
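A short Python sketch of the divided-difference table and nested evaluation; the data are chosen only for illustration:

import numpy as np

def divided_differences(x, f):
    # Returns Newton coefficients f[x0], f[x0,x1], f[x0,x1,x2], ...
    n = len(x)
    coef = np.array(f, dtype=float)
    for j in range(1, n):
        # j-th column: f[x_i,...,x_{i+j}] = (higher - lower) / (x_{i+j} - x_i)
        coef[j:] = (coef[j:] - coef[j-1:-1]) / (x[j:] - x[:-j])
    return coef

def newton_eval(x, coef, xq):
    # Evaluate the Newton-form polynomial at xq by nested multiplication
    p = coef[-1]
    for k in range(len(coef) - 2, -1, -1):
        p = p * (xq - x[k]) + coef[k]
    return p

x = np.array([1.0, 2.0, 4.0])
f = np.log(x)
c = divided_differences(x, f)
print(newton_eval(x, c, 3.0))   # ~1.155; true ln(3) = 1.0986, the gap is interpolation error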
Summary

• Direct Fit
• Lagrange Polynomials
• Newton’s Divided Difference Formula
Recap

• Direct Fit
• Lagrange Polynomials
• Newton’s Divided Difference Formula
Errors in Interpolation

Error in Interpolation

• Characteristics of the errors


• Methods to reduce errors
Errors in Interpolation

Errors in Interpolation

Given (xi, fi), i = 0, 1, …, n, and that the underlying function f(x) is


continuous and infinitely differentiable, the truncation error in
interpolation by an nth order polynomial is given by:

e(x) = f(x) − pn(x) = [f(n+1)(ξ)/(n+1)!] (x − x0)(x − x1)…(x − xn),  ξ ∈ (x0, xn)
Errors in Interpolation

The truncation error can be approximated by replacing the (n+1)th derivative term with the next divided difference

• The error remains the same for an nth order polynomial fitted using any form.

• This error can be easily estimated if Newton's DD Formula is used for interpolation
Errors in Interpolation

Interpolation Errors
– Let us compare Taylor series approximation with Newton’s Divided
Difference Polynomial
Errors in Interpolation
Comparison of remainder term in Taylor Series and Newton’s DD
Formulae
Errors in Interpolation

Residual term in Newton’s DD Formulae:


Errors in Interpolation

Residual term in Newton’s DD Formulae- Example

y* = 0.5234;

1.07 % error
Errors in Interpolation

Residual term in Newton’s DD Formulae:

Properties of the error


a) What will be the error at the data points? Zero

b) The errors are larger for the x’s that are near the edges
Errors in Interpolation
Properties of the error

b) The errors are larger for the x’s that are near the edges
Errors in Interpolation

Properties of the error

c) The errors are extremely large outside the data range


Errors in Interpolation

Methods to reduce interpolation errors in polynomials


• Selection of data points
• Piecewise fitting of polynomials
Errors in Interpolation

Methods to reduce interpolation errors in polynomials


• Selection of data points: The solution of the optimization problem is
given by Tchebycheff points

• Tchebycheff Polynomials
Errors in Interpolation

Methods to reduce interpolation errors in polynomials


• Selection of data points:
• The roots of the Tchebycheff Polynomial are Tchebycheff points
(nodes)
Errors in Interpolation

Methods to reduce interpolation errors in polynomials


• Tchebycheff Polynomials-Example
Summary

• Interpolation Error
• Properties of Error
• Methods to reduce error
Errors in Interpolation

Methods to reduce interpolation errors in polynomials


• Piecewise fitting of Polynomial (Splines)
– The concept originated from the drafting technique

Courtesy: Prof. Carl de Boor


Errors in Interpolation

Methods to reduce interpolation errors in polynomials


• Piecewise fitting of Polynomial (Splines)
– The concept originated from the drafting technique

– Given ordered data pairs (xi, fi), i = 0, 1, …, n, the spline is


obtained by fitting a polynomial of an appropriate order between
two adjacent points
Spline Interpolation
• Given: (n + 1) observations or data pairs [(x0, f0), (x1, f1), (x2, f2)
… (xn, fn)]
• This gives a mesh of nodes on the independent
variable and the corresponding function values at those nodes.
• Goal: fit an independent polynomial in each interval (between two
points) with certain continuity requirements at the nodes.
• Linear spline: continuity in function values, C0 continuity
• Quadratic spline: continuity in function values and 1st derivatives, C1
continuity
• Cubic spline: continuity in function values, 1st and 2nd derivatives, C2
continuity
• Denote for node i or at xi: functional value fi, first derivative ui,
second derivative vi
Spline Interpolation: Linear

[Figure: piecewise polynomials qi(x) joining the data points (xi−1, fi−1) … (xi+2, fi+2), with continuity conditions qi(xi) = fi and qi−1(xi) = qi(xi)]

• A straight line in each interval: (n+1) points, n straight lines, 2n unknowns


• Available conditions: (n+1) function values, (n-1) function continuity
conditions
• Straight lines can be uniquely estimated!
Splines
Spline Interpolation: Quadratic

[Figure: piecewise quadratics qi(x), qi+1(x), qi+2(x) through the data points]

• A quadratic polynomial in each interval: (n+1) points, n quadratic


polynomials, 3n unknowns
• Available conditions: (n + 1) function values, (n - 1) function continuity
and (n - 1) 1st derivative continuity conditions, total 3n-1 conditions.
• 1 free condition to be chosen by the user!
Spline Interpolation: Cubic

[Figure: piecewise cubics qi(x), qi+1(x), qi+2(x) through the data points]

• A cubic polynomial in each interval: (n+1) points, n cubic polynomials,


4n unknowns
• Available conditions: (n + 1) function values, (n - 1) function continuity,
(n - 1) 1st derivative continuity conditions and (n - 1) 2nd derivative
continuity conditions, total 4n - 2 conditions.
• 2 free conditions to be chosen by the user!
Splines
The boundary conditions can be specified using different kinds of
splines

Natural Splines - Example
Interpolation
Remarks
Summary

• Different types of splines


• Boundary conditions for cubic splines
Numerical Integration
Numerical Integration

Numerical Integration (Quadrature)


• Sometimes the function can be evaluated only at discrete points.
• Sometimes the function is so ‘complex’ that an analytical expression
for the integral does not exist
Numerical Integration

Numerical Integration (Quadrature)


• General approach: approximate f(x) with one or a piece-wise
continuous set of polynomials p(x) and evaluate:
Numerical Integration

Newton-Cotes Formulae for Integration


• Applicable when function values are available at equal intervals

Closed Newton-Cotes Formulas


– Rectangular rule
– Trapezoidal rule
– Simpson's rule
Numerical Integration: Rectangular Rule

[Figure: uniform mesh a = x0, x1, …, b = xn with function values f0 … fn; fi+1/2 is the midpoint value]

Polynomial p(x) is a piecewise constant function: pi(x) = fi+1/2


Numerical Integration: Trapezoidal Rule

[Figure: uniform mesh a = x0, x1, …, b = xn with function values f0 … fn]

Polynomial p(x) is a piecewise linear function:


Numerical Integration: Trapezoidal Rule

If the mesh is uniform, xi+1 − xi = h for all i:

∫ab f(x) dx ≈ (h/2)[f0 + 2(f1 + f2 + … + fn−1) + fn]
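A minimal Python sketch of the composite trapezoidal rule on a uniform mesh; the integrand and n are illustrative:

import numpy as np

def trapezoid(f, a, b, n):
    # Composite trapezoidal rule on a uniform mesh of n intervals
    x = np.linspace(a, b, n + 1)
    fx = f(x)
    h = (b - a) / n
    # (h/2) * [f0 + 2*(f1 + ... + f_{n-1}) + fn]
    return (h / 2) * (fx[0] + 2 * fx[1:-1].sum() + fx[-1])

print(trapezoid(np.sin, 0.0, np.pi, 100))   # ~1.99984 (exact integral is 2)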
Numerical Integration: Simpson’s Rule

fi+2
fi fn
fi+1
f1 fi-1
f0

a = x0 x1 xi-1 xi xi+1 xi+2 b = xn


Polynomial p(x) is piecewise quadratic function:
𝑓 𝑥 ≈𝑝 𝑥
𝑥−𝑥 𝑥−𝑥 𝑥−𝑥 𝑥−𝑥 𝑥−𝑥 𝑥−𝑥
= 𝑓+ 𝑓 + 𝑓
𝑥 −𝑥 𝑥 −𝑥 𝑥 −𝑥 𝑥 −𝑥 𝑥 −𝑥 𝑥 −𝑥
Numerical Integration: Simpson's Rule
Polynomial p(x) is a piecewise quadratic function.

Assume xi+1 − xi = xi+2 − xi+1 = h and substitute z = (x − xi):

∫ from xi to xi+2 of f(x) dx ≈ ∫ from 0 to 2h of p(z) dz = (h/3)(fi + 4fi+1 + fi+2)

This is known as Simpson's 1/3rd Rule
Numerical Integration: Simpson's Rule

[Figure: uniform mesh a = x0, x1, …, b = xn with function values f0 … fn]

If the mesh is uniform, xi+1 − xi = h for all i:

∫ab f(x) dx ≈ (h/3)[f0 + 4(f1 + f3 + …) + 2(f2 + f4 + …) + fn]

n = 2m, m integer
Numerical Integration: Simpson's Rule
Polynomial p(x) is a piecewise cubic function:

f(x) ≈ p(x)
= [(x − xi+1)(x − xi+2)(x − xi+3)] / [(xi − xi+1)(xi − xi+2)(xi − xi+3)] fi
+ [(x − xi)(x − xi+2)(x − xi+3)] / [(xi+1 − xi)(xi+1 − xi+2)(xi+1 − xi+3)] fi+1
+ [(x − xi)(x − xi+1)(x − xi+3)] / [(xi+2 − xi)(xi+2 − xi+1)(xi+2 − xi+3)] fi+2
+ [(x − xi)(x − xi+1)(x − xi+2)] / [(xi+3 − xi)(xi+3 − xi+1)(xi+3 − xi+2)] fi+3

Assume xi+1 − xi = xi+2 − xi+1 = xi+3 − xi+2 = h and substitute z = (x − xi):

∫ from xi to xi+3 of f(x) dx ≈ ∫ from 0 to 3h of p(z) dz
= −(fi/6h³) ∫ (z − 3h)(z − 2h)(z − h) dz + (fi+1/2h³) ∫ (z − 3h)(z − 2h) z dz
− (fi+2/2h³) ∫ (z − 3h)(z − h) z dz + (fi+3/6h³) ∫ (z − 2h)(z − h) z dz

Evaluating the integrals:

= (3h/8)(fi + 3fi+1 + 3fi+2 + fi+3)

This is known as Simpson's 3/8th Rule
Numerical Integration: Simpson's Rule

[Figure: uniform mesh a = x0, x1, …, b = xn with function values f0 … fn]

If the mesh is uniform, xi+1 − xi = h for all i:

∫ab f(x) dx ≈ (3h/8)[f0 + 3f1 + 3f2 + 2f3 + 3f4 + 3f5 + 2f6 + … + fn]

n = 3m, m integer
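A matching Python sketch of the composite Simpson's 1/3 rule; the integrand and n are illustrative, and n must be even:

import numpy as np

def simpson_13(f, a, b, n):
    # Composite Simpson's 1/3 rule; requires n = 2m intervals
    if n % 2:
        raise ValueError("n must be even for Simpson's 1/3 rule")
    x = np.linspace(a, b, n + 1)
    fx = f(x)
    h = (b - a) / n
    # (h/3) * [f0 + 4*(odd-index f) + 2*(even-index interior f) + fn]
    return (h / 3) * (fx[0] + 4 * fx[1:-1:2].sum() + 2 * fx[2:-1:2].sum() + fx[-1])

print(simpson_13(np.sin, 0.0, np.pi, 10))   # ~2.00011, already very accurate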
Summary

• Rectangular Rule
• Trapezoidal Rule
• Simpson’s Rule
ESO208A: Computational Methods in
Engineering

Richa Ojha
Department of Civil Engineering
IIT Kanpur

Acknowledgements: Profs. Abhas Singh and Shivam Tripathi (CE)


1
• Copyright:
The instructor of this course owns the copyright of all the course materials.
This lecture material was distributed only to the students attending the
course ESO208A: Computational methods in Engineering of IIT Kanpur and
should not be distributed in print or through electronic media without the
consent of the instructor. Students can make their own copies of the course
materials for their use.

2
Recap

• Rectangular Rule
• Trapezoidal Rule
• Simpson’s Rule
Numerical Integration

• Accuracy: How accurate are the numerical integration


schemes with respect to the TRUE integral?
• Truncation Error analysis: local and global
• Recall: True Value (a) = Approximate Value + Error (ε)
• Is it possible to improve the accuracy?
• Romberg Integration
• Quadrature Methods
Numerical Integration: Rectangular Rule

Expand f(x) in Taylor’s series around xi+1/2:


Let us denote yi = xi+1/2
Numerical Integration: Rectangular Rule

(the odd-derivative terms integrate to zero about the midpoint)
Numerical Integration: Rectangular Rule

Rectangular rule is O(h3) accurate in a single interval. This is also known


as Local Truncation Error.

We will derive Global Truncation Error later. First, let us derive Local
Truncation Errors for Trapezoidal and Simpson’s 1/3rd Rule!
Local Truncation Error: Trapezoidal Rule
Local Truncation Error: Trapezoidal Rule

(1)
We earlier showed that,

(2)

Substituting eq. (1) into eq. (2) and combining terms of the same order of h,

Therefore, the Trapezoidal Rule is O(h3) accurate in a single interval.


The Local Truncation Error of both, Rectangular Rule and Trapezoidal Rule is 3rd order.
Let us apply these two integration techniques over an interval 2hi or {xi, xi+2}
In this case: yi = xi+1
Local Truncation Error: Simpson’s 1/3rd Rule

Weighted sum with weights of 2/3 to expression from rectangular rule and 1/3 to
trapezoidal rule!

Therefore, the Simpson’s 1/3rd Rule is O(h5) accurate in a single interval or the Local
Truncation Error of Simpson’s 1/3rd Rule is O(h5)
Global Truncation Error: Trapezoidal Rule

Recall, if the mesh is uniform, xi+1 − xi = h for all i:

Apply the first mean value theorem of integrals!


Global Truncation Error: Trapezoidal Rule
∫ab f(x) dx = Σi ∫ from xi to xi+1 of f(x) dx = (h/2)[f(a) + f(b) + 2Σ fi] − (h³/12) Σi f″(yi) − (h⁵/480) Σi f⁗(yi) + ⋯

Applying the first mean value theorem for integrals:

Global Truncation Error of the Trapezoidal Rule is O(h2)


Similarly, for all the methods, we can derive the GTE to be one order less than the LTE!
The order of a method is referred to by its GTE!
Numerical Integration
Open Newton-Cotes
Summary

• Error Analysis
• Local Truncation Error
• Global Truncation Error
Numerical Integration

For improvement of numerical integration results:


a) The simplest way is to increase the no. of intervals
b) Increase the order of the polynomial (risk of wiggling at the ends)
c) Use error information: Romberg Integration
d) Optimally select the points for function evaluation: adaptive
quadrature rules (Gauss-Legendre Quadrature)
Romberg Integration
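Since the tableau itself is left to the lecture, here is a hedged Python sketch of Romberg integration: halve h, refine the trapezoid estimate, then apply Richardson extrapolation column by column. The level count is an assumption:

import numpy as np

def romberg(f, a, b, levels=5):
    # R[k, 0]: trapezoid with 2^k intervals; R[k, j]: j extrapolations applied
    R = np.zeros((levels, levels))
    h = b - a
    R[0, 0] = h * (f(a) + f(b)) / 2
    for k in range(1, levels):
        h /= 2
        # refine the trapezoid estimate using only the new midpoints
        mids = a + h * np.arange(1, 2**k, 2)
        R[k, 0] = R[k-1, 0] / 2 + h * f(mids).sum()
        for j in range(1, k + 1):
            # Richardson step: eliminate the O(h^{2j}) error term
            R[k, j] = (4**j * R[k, j-1] - R[k-1, j-1]) / (4**j - 1)
    return R[levels-1, levels-1]

print(romberg(np.sin, 0.0, np.pi))   # ~2.0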
Adaptive Quadrature
Gauss Quadrature

(a) Graphical depiction of


Trapezoidal Rule

(b) Improved integral estimate by


taking the area under the straight
line passing through two
intermediate points. By positioning
these points wisely, the positive and
negative errors are balanced, and an
improved integral estimate results

Source: Chapra and Canale, pg 641 (2012)


Gauss Quadrature

Integral using the Trapezoidal rule
Gauss Quadrature
It turns out that for a given n+1 points, the optimal y's are the roots of the (n+1)th order
Legendre Polynomial
Gauss-Legendre Quadrature: Example

• One-point integration:

• Two-points integration:
Gauss-Legendre Quadrature: Example
• Three-points integration:
Gauss Legendre Quadrature

How do we perform integration over an arbitrary range [a, b]?

Map [a, b] onto [−1, 1] using x = (a + b)/2 + [(b − a)/2] t, so that dx = [(b − a)/2] dt and

∫ab f(x) dx = [(b − a)/2] ∫ from −1 to 1 of f(x(t)) dt
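A minimal Python sketch of this mapping using NumPy's built-in Legendre nodes and weights; the integrand and n are illustrative:

import numpy as np

def gauss_legendre(f, a, b, n):
    # n-point Gauss-Legendre quadrature on [a, b] via mapping to [-1, 1]
    t, w = np.polynomial.legendre.leggauss(n)   # roots of P_n and weights
    x = 0.5 * (a + b) + 0.5 * (b - a) * t       # map t in [-1, 1] to x in [a, b]
    return 0.5 * (b - a) * np.sum(w * f(x))     # dx = (b - a)/2 dt

print(gauss_legendre(np.sin, 0.0, np.pi, 3))    # ~2.0019 with only 3 evaluations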
Improper Integral

• If either a or b or both are infinity


• If the function to be integrated is undefined/discontinuous at
any point in the interval
Meaningful only if the integral converges

a) Case 1: ab > 0
Improper Integral

b) Case 2: ab < 0

The function may be singular at one limit, so use open formulas (Gauss-
Legendre, Open Newton-Cotes formulas with multiple application)
Summary

• Romberg Integration
• Method of Undetermined Coefficients
• Improper Integrals
Numerical Differentiation
Numerical Differentiation
Numerical Differentiation

Types of Methods
• Graphical
• Taylor Series
• Lagrange Polynomial
• Method of undetermined coefficients
Numerical Differentiation-Taylor Series
Numerical Differentiation-Lagrange Polynomial

Let us compute dy/dx or df/dx at node i

Denote the difference operators:


Numerical Differentiation-Lagrange Polynomial

Approximate the function between xi and xi+1 as:

Forward Difference:

Approximate the function between xi−1 and xi as:

Backward Difference:
Numerical Differentiation-Lagrange Polynomial
Approximate the function between three points xi−1, xi, xi+1 (uniform spacing h):

f(x) = [(x − xi)(x − xi+1)/(2h²)] fi−1 − [(x − xi−1)(x − xi+1)/h²] fi + [(x − xi−1)(x − xi)/(2h²)] fi+1

Now, evaluate the central difference approximations of df/dx and d²f/dx² at x = xi:

df/dx = [(x − xi) + (x − xi+1)]/(2h²) fi−1 − [(x − xi−1) + (x − xi+1)]/h² fi + [(x − xi−1) + (x − xi)]/(2h²) fi+1

df/dx |i = (fi+1 − fi−1)/(2h)

d²f/dx² |i = (fi+1 − 2fi + fi−1)/h²
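A small Python check of these two central-difference formulas; the test function exp and h = 0.1 are illustrative:

import numpy as np

def central_first(f, x, h):
    # Central difference for f'(x): O(h^2) accurate
    return (f(x + h) - f(x - h)) / (2 * h)

def central_second(f, x, h):
    # Central difference for f''(x): O(h^2) accurate
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

x0, h = 1.0, 0.1
print(central_first(np.exp, x0, h), np.exp(x0))   # ~2.7228 vs e = 2.7183
print(central_second(np.exp, x0, h))              # ~2.7205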
Numerical Differentiation: Finite Difference

Similarly, one can approximate the function between three points


and obtain the forward difference expressions of the first
and second derivatives at x = xi as follows:

This is left for homework practice!


Numerical Differentiation: Finite Difference

Similarly, one can approximate the function between three points


and obtain the backward difference expressions of the first
and second derivatives at x = xi as follows:

This is left for homework practice!


General Technique for Construction of Finite
Difference Scheme of Arbitrary Order

Method of Undetermined Coefficients


General Technique for Construction of Finite
Difference Scheme of Arbitrary Order
General Technique for Construction of Finite
Difference Scheme of Arbitrary Order

This is a = nb + nf + 1 (nb backward points, nf forward points, plus the node itself)

Forward difference

Source: Chapra & Canale


Backward difference

Source: Chapra & Canale


Central difference

Source: Chapra & Canale


Summary

• Taylor Series
• Lagrange Interpolation Formula
• Method of Undetermined Coefficients
Recap

• Taylor Series Approximation


• Lagrange Interpolation Formula
• Method of Undetermined Coefficient
Numerical Differentiation: Finite Difference

• Accuracy: How accurate is the numerical differentiation scheme with


respect to the TRUE differentiation?
• Truncation Error analysis
• Recall: True Value (a) = Approximate Value + Error (ε)
• Consistency: A numerical expression for differentiation or a
numerical differentiation scheme is consistent if it converges to the
TRUE differentiation as h → 0.
Numerical Differentiation: Truncation Error Analysis

fi+1 = fi + h fi′ + (h²/2!) fi″ + (h³/3!) fi‴ + (h⁴/4!) fi⁗ + ⋯

(fi+1 − fi)/h = fi′ + (h/2!) fi″ + (h²/3!) fi‴ + ⋯

fi′ = (fi+1 − fi)/h − (h/2!) fi″ − (h²/3!) fi‴ − ⋯

Truncation error for this forward difference scheme for the 1st
Derivative is: O(h)

fi−1 = fi − h fi′ + (h²/2!) fi″ − (h³/3!) fi‴ + (h⁴/4!) fi⁗ − ⋯

fi′ = (fi − fi−1)/h + (h/2!) fi″ − (h²/3!) fi‴ + ⋯

Truncation error for this backward difference scheme for the 1st
Derivative is: O(h)
Numerical Differentiation: Truncation Error Analysis

Truncation error for this central difference scheme for the 1st
Derivative is: O(h2)
Numerical Differentiation: Truncation Error Analysis

Truncation error for this central difference scheme for the 2nd Derivative is:
O(h2)
Numerical Differentiation: Truncation Error Analysis

Truncation error analysis for these forward and backward


difference schemes are left as homework!
Numerical Differentiation: Truncation Error Analysis
What will be the error term in the case of non-uniform grid points?

For a regular or uniform grid:

The truncation error for this central difference scheme for the 1st derivative
is O(h) for a non-uniform grid and O(h2) for a uniform grid
Numerical Differentiation: Finite Difference

Consistency: A numerical expression for the derivative is consistent if


the leading order term in the Truncation Error (TE) satisfies the
following:

If the leading order term in the truncation error is:


TE = Khp or O(hp), where K does not depend on h,
the numerical differentiation scheme is consistent if
p ≥ 1
Numerical Differentiation and Integration-Remarks

1. Unequal Intervals
Integration
• Apply trapezoidal rule individually
• Apply Lagrange interpolation formula to get higher order estimates
Differentiation

2. Multiple Variables
Numerical Differentiation-Remarks

2. Multiple Variables
Numerical Differentiation-Remarks

3. Data Uncertainty
• Differentiation will be more sensitive to errors
• Higher order derivatives will be more sensitive
Numerical Differentiation

Improve Accuracy
• Richardson Extrapolation
• Reduce Step Size (trade off between truncation error and round-
off error)
• Higher order Polynomials
Summary

• Error Analysis
• General Remarks
Ordinary Differential Equation
Ordinary Differential Equation

Any mathematical equation relating a function and its derivatives

When we are solving: v(t) = unknown function

v = Dependent variable
t = Independent variable

Auxiliary condition: v(t=0) = 0

It is difficult to always get an analytical solution, so we solve it numerically


Ordinary Differential Equation
Ordinary Differential Equation

Initial Value Problems (IVP)


• Euler’s method
• RK family (Single step)
• Adam’s family (Multi-step)
• Adam Bashforth (Explicit)
• Adam Moulton (Implicit)
• Backward difference formulae (Gear’s Method)

Boundary Value Problem (BVP)


• Shooting method
• Finite difference method
Ordinary Differential Equation

Comments on explicit and implicit schemes
Ordinary Differential Equation
Can we do something to improve the estimates?
An obvious way is to use higher order terms.
Including higher order terms makes it complicated.
There are two families of methods which give good estimates
without including higher order terms:
1) Runge-Kutta Method
2) Adam's Method
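For reference, a minimal Python sketch of the explicit Euler scheme discussed above; the test ODE and step size are illustrative:

import numpy as np

def euler(f, t0, y0, h, n_steps):
    # Explicit (forward) Euler: y_{i+1} = y_i + h * f(t_i, y_i)
    t, y = t0, y0
    for _ in range(n_steps):
        y = y + h * f(t, y)     # follow the slope at the start of the step
        t = t + h
    return y

# dy/dt = -2y, y(0) = 1; exact solution y(t) = exp(-2t)
f = lambda t, y: -2.0 * y
print(euler(f, 0.0, 1.0, 0.01, 100), np.exp(-2.0))   # ~0.1326 vs 0.1353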
Summary

• Ordinary Differential Equations


• Classification of Differential Equations
• Euler’s method
Recap

• Ordinary Differential Equations


• Classification of Differential Equations
• Euler’s method
Ordinary Differential Equation
Including higher order terms makes it complicated.
There are two families of methods which give good estimates without
including higher order terms:
1) Runge-Kutta Method (single point method)
2) Adam's Method (multi-point method)

Instead of partial derivatives, we evaluate the slope f(x,y) at multiple points


Ordinary Differential Equation
Estimation of Errors
Runge-Kutta Fehlberg (ode45 in Matlab)
RK 4th order

RK 5th order
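A hedged Python sketch of a single classical RK4 step, applied to the same illustrative test ODE as before; this is not the Fehlberg pair itself, which adds an embedded 5th-order estimate for error control:

def rk4_step(f, t, y, h):
    # Classical 4th-order Runge-Kutta: weighted average of 4 slope evaluations
    k1 = f(t, y)
    k2 = f(t + h/2, y + h/2 * k1)
    k3 = f(t + h/2, y + h/2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + (h / 6) * (k1 + 2*k2 + 2*k3 + k4)

# dy/dt = -2y, y(0) = 1; exact value at t = 1 is exp(-2) = 0.135335...
f = lambda t, y: -2.0 * y
t, y, h = 0.0, 1.0, 0.1
for _ in range(10):
    y = rk4_step(f, t, y, h)
    t += h
print(y)   # ~0.135338, far more accurate than Euler at the same h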
Estimation of Errors
Ordinary Differential Equation
Adams-Bashforth
Ordinary Differential Equation
Adams-Moulton
Ordinary Differential Equation
Explicit & Implicit Euler Scheme for alpha = 1
System of Simultaneous Equations (ODE)
Summary

• Runge Kutta Method


• Adam’s Family
• Estimation of Error
Recap

• Runge Kutta Method


• Adam’s Family
• Estimation of Error
Consistency, Stability and Convergence
Stability of the 4th order RK scheme

For the model problem dy/dt = −λy, the scheme is stable for λh ≤ 2.785
Summary

• Consistency
• Stability
• Convergence
Recap

• Consistency
• Stability
• Convergence
Stiff Problems

The step sizes obtained are very small
Stiff Problems
Explicit methods are not good for stiff systems

How about implicit methods?


• Implicit Euler?
• We need an implicit method which has a lenient stability requirement
and also provides an easy option to adjust the step size

The Backward Difference Formula (BDF) is a family of implicit methods


William Gear [Gear's Method]
Stiff Problems-BDF
Stiff Problems-BDF
Backward Difference Formulae (Gear Methods)
Stiff Problems
Just like RK45, we can devise a strategy to apply BDF:
• Start with a small step size h

• First order BDF


• Increase h and switch to BDF2 after a few time steps
• Increase h and switch to BDF3
• Proceed like this all the way to BDF6
• The BDF formulas are derived for uniform step size; however, when adjusting ‘h’,
the grid may be non-uniform
Class Example for Gear’s method
Stiff Problems-BDF
Stiff Problems-BDF
Boundary Value Problem
Summary

• Stiff Problems
• Boundary Value Problems
Partial Differential Equation
Partial Differential Equation (PDE)

A PDE is an equation that relates a function of two or more


independent variables and its partial derivatives w.r.t. the independent
variables.

• Independent variables: x, y, z, t


• Dependent variables: T, u, h
Partial Differential Equation (PDE)
Summary

• What are PDEs?


• Classification of PDEs
Recap

• What are PDEs?


• Classification of PDEs
Partial Differential Equation (PDE)

Numerical methods for PDE:


• Finite Difference Method (FDM)
• Finite Element Method (FEM)
• Control Volume
• Mesh Free Methods

There are also specialized algorithms for specific problems


Partial Differential Equation (PDE)
Partial Differential Equation (PDE)

Boundary Conditions

• Dirichlet Condition: the value of f is specified on the boundary

• Neumann Condition: the normal derivative g = ∂f/∂n is specified

• Mixed Condition: f and g specified at different boundaries

• Robin Condition: a linear combination of f and its normal derivative is specified
Partial Differential Equation (PDE)

Elliptic PDE
Laplace Equation: 1st Type BC

∂²φ/∂x² + ∂²φ/∂y² = 0,  x ∈ (0, Lx) and y ∈ (0, Ly)

φ(0, y) = d,  φ(Lx, y) = b,  φ(x, 0) = c,  φ(x, Ly) = a

[Figure: rectangular domain ABCD discretized with spacings Δx, Δy; interior nodes numbered 1–16; boundary values φ = a (side AB), φ = b (side BC), φ = c (side DC), φ = d (side AD)]

Laplace Equation

∂²φ/∂x² |i,j = (φi+1,j − 2φi,j + φi−1,j)/Δx²

∂²φ/∂y² |i,j = (φi,j+1 − 2φi,j + φi,j−1)/Δy²

• Create a node number vs. co-ordinate look-up table
• Initialize a null matrix (A) of size N×N and a vector (b) of size N
• N is the total number of unknown nodes

Transform the equation to the form: A φ = b

Substituting the two approximations into the PDE at node (i, j):

(∂²φ/∂x² + ∂²φ/∂y²)|i,j = (φi+1,j − 2φi,j + φi−1,j)/Δx² + (φi,j+1 − 2φi,j + φi,j−1)/Δy² = 0

(1/Δy²) φi,j−1 + (1/Δx²) φi−1,j + (−2/Δx² − 2/Δy²) φi,j + (1/Δx²) φi+1,j + (1/Δy²) φi,j+1 = 0
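A minimal Python sketch that assembles and solves Aφ = b for this 5-point stencil with 1st-type (Dirichlet) BCs; the unit grid spacing, row-major node numbering, and example boundary values are assumptions:

import numpy as np

def solve_laplace(nx, ny, a, b, c, d):
    # Dirichlet boundary values: top = a, right = b, bottom = c, left = d
    N = nx * ny                             # number of unknown interior nodes
    A = np.zeros((N, N))
    rhs = np.zeros(N)
    idx = lambda i, j: j * nx + i           # node-number lookup, row-major
    for j in range(ny):
        for i in range(nx):
            k = idx(i, j)
            A[k, k] = -4.0                  # (-2/dx^2 - 2/dy^2) with dx = dy = 1
            for di, dj, bc in [(-1, 0, d), (1, 0, b), (0, -1, c), (0, 1, a)]:
                ii, jj = i + di, j + dj
                if 0 <= ii < nx and 0 <= jj < ny:
                    A[k, idx(ii, jj)] = 1.0  # interior neighbour
                else:
                    rhs[k] -= bc             # known boundary value moves to RHS
    return np.linalg.solve(A, rhs).reshape(ny, nx)

print(solve_laplace(4, 4, a=100.0, b=50.0, c=0.0, d=75.0))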
Laplace Equation - Example
Handling Neumann and Robin BC
Three options for implementation:
• Backward difference approximation with increased size of the matrix
– asymmetric backward difference approximation
– size of the matrix is increased
– solutions at the boundary nodes are obtained together
• Ghost node
– symmetric central difference approximation
– size of the matrix is increased
– solutions at the boundary nodes are obtained together
• Backward difference approximation without increasing the size
of the matrix
– asymmetric backward difference approximation
– size of the matrix remains unaltered
– unknowns at the boundary nodes are computed separately using
the approximation of the BC after the solution has been computed
for the interior nodes
Application of Backward Difference …approach 1

The number of equations is now 24 and the size of the matrix A is 24×24.

For Node 5, the 5th equation is:

(φ3 − 4φ4 + 3φ5)/(2Δx) = b, or (1/2Δx) φ3 + (−2/Δx) φ4 + (3/2Δx) φ5 = b

a53 = 1/(2Δx),  a54 = −2/Δx,  a55 = 3/(2Δx),  and b5 = b

[Figure: 5×5 grid of nodes 1–25; φ = a on AB, ∂φ/∂x = b on BC, ∂φ/∂y = c on DC, φ = d on AD]

For Node 21, the 21st equation is:

(φ11 − 4φ16 + 3φ21)/(2Δy) = c, or (1/2Δy) φ11 + (−2/Δy) φ16 + (3/2Δy) φ21 = c

a21,11 = 1/(2Δy),  a21,16 = −2/Δy,  a21,21 = 3/(2Δy),  and b21 = c
Application of Backward Difference …approach 2

The number of equations will remain at 16 and the size of the matrix A is 16×16.

[Figure: 4×4 grid of nodes 1–16 with extra boundary nodes 4′, 8′, 12′, 16′ along BC (∂φ/∂x = b) and 13′, 14′, 15′, 16″ along DC (∂φ/∂y = c)]

For Node 16, the 16th equation is:

(1/Δy²) φ12 + (1/Δx²) φ15 + (−2/Δx² − 2/Δy²) φ16 + (1/Δx²) φ16′ + (1/Δy²) φ16″ = 0

(φ15 − 4φ16 + 3φ16′)/(2Δx) = b, or (1/2Δx) φ15 + (−2/Δx) φ16 + (3/2Δx) φ16′ = b

(φ12 − 4φ16 + 3φ16″)/(2Δy) = c, or (1/2Δy) φ12 + (−2/Δy) φ16 + (3/2Δy) φ16″ = c
Application of Backward Difference …approach 2 (continued)

Substituting for φ16′ and φ16″ from the BC approximations, the 16th equation becomes:

(2/3Δy²) φ12 + (2/3Δx²) φ15 + (−2/3Δx² − 2/3Δy²) φ16 = −2b/(3Δx) − 2c/(3Δy)

Recall, for Node 16, the 16th equation for the 1st type BC was:

(1/Δy²) φ12 + (1/Δx²) φ15 + (−2/Δx² − 2/Δy²) φ16 = −b/Δx² − c/Δy²
Application of Backward Difference …approach 2 (continued)

For Node 8, the 8th equation is:

(1/Δy²) φ4 + (1/Δx²) φ7 + (−2/Δx² − 2/Δy²) φ8 + (1/Δx²) φ8′ + (1/Δy²) φ12 = 0

(φ7 − 4φ8 + 3φ8′)/(2Δx) = b, or (1/2Δx) φ7 + (−2/Δx) φ8 + (3/2Δx) φ8′ = b

After obtaining the solutions for the 16 interior nodes, the values of
φ at the boundary nodes are to be computed from the BC equations
used for substitution!
Ghost Node

The number of equations is now 25 and the size of the matrix A is 25×25.

For Node 5:

(1/Δy²) a + (1/Δx²) φ4 + (−2/Δx² − 2/Δy²) φ5 + (1/Δx²) φ5′ + (1/Δy²) φ10 = 0

For Node 23:

(1/Δy²) φ18 + (1/Δx²) φ22 + (−2/Δx² − 2/Δy²) φ23 + (1/Δx²) φ24 + (1/Δy²) φ23′ = 0

The ghost-node values are related to the BCs by central differences:

(φ5′ − φ4)/(2Δx) = b

(φ23′ − φ18)/(2Δy) = c

[Figure: 5×5 grid of nodes 1–25 with ghost nodes 5′, …, 25′ beyond boundary BC (∂φ/∂x = b) and 23′, 25″ beyond boundary DC (∂φ/∂y = c)]
Ghost Node

For Node 25:

(1/Δy²) φ20 + (1/Δx²) φ24 + (−2/Δx² − 2/Δy²) φ25 + (1/Δx²) φ25′ + (1/Δy²) φ25″ = 0

(φ25′ − φ24)/(2Δx) = b

(φ25″ − φ20)/(2Δy) = c
Non-Rectangular Boundary

Handling Neumann Boundary Condition


Recap

• Discretization of Elliptic Equation


• Handling Different Boundary Conditions
Recap

• Solution of Elliptic PDE


Partial Differential Equation (PDE)

Parabolic PDE

Theoretical Analysis of Numerical Algorithms
Forward Time Central Space (FTCS)
Richardson Scheme
DuFort-Frankel Scheme
Crank-Nicolson Scheme
Partial Differential Equation (PDE)

• Hyperbolic PDEs [Wave Propagation]


o Semi-discretization
o Upwind scheme
