# Approximation and Errors

Significant Figures
• 4 significant figures
– 1.845
– 0.01845
– 0.0001845
• 43,500: unclear how much confidence to place in the trailing zeros
• Scientific notation removes the ambiguity:
– 4.35 x 10^4 → 3 significant figures
– 4.350 x 10^4 → 4 significant figures
– 4.3500 x 10^4 → 5 significant figures
Accuracy and Precision
• Accuracy - how closely a computed or
measured value agrees with the true value
• Precision - how closely individual
computed or measured values agree with
each other
– number of significant figures
– spread in repeated measurements or
computations
(figure: target diagram illustrating the difference; horizontal axis "increasing accuracy", vertical axis "increasing precision")
Error Definitions
• Numerical error - use of approximations to
represent exact mathematical operations and
quantities
• true value = approximation + error
– true (absolute) error: E_t = true value - approximation
– the subscript t represents the true error
– shortcoming: it gives no sense of the magnitude of the error relative to the value itself
– normalize by the true value to get the true relative error
Error definitions cont.
• True relative percent error:

  ε_t = (true error / true value) x 100%
      = (true value - estimate) / (true value) x 100%

• But we may not know the true answer a priori
Error definitions cont.
• May not know the true answer a priori
• Approximate relative percent error:

  ε_a = (approximate error / approximation) x 100%

• This leads us to develop an iterative approach of
numerical methods:

  ε_a = (present approx. - previous approx.) / (present approx.) x 100%
Error definitions cont.
• Usually not concerned with sign, but with
tolerance
• Want to assure a result is correct to n
significant figures
  ε_s = (0.5 x 10^(2-n)) %

Iterate until |ε_a| < ε_s
Example
Consider a series expansion to estimate trigonometric
functions
  sin x = x - x^3/3! + x^5/5! - x^7/7! + .....     (-∞ < x < ∞)

Estimate sin(π/2) to three significant figures
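The stopping criterion from the previous slide can drive this series directly. A minimal Python sketch (function and variable names are illustrative, not from the slides); the tolerance for three significant figures follows ε_s = (0.5 x 10^(2-3)) %:

```python
import math

def sin_series(x, sig_figs=3):
    """Sum the Maclaurin series for sin x until the approximate
    relative error eps_a drops below the tolerance eps_s."""
    eps_s = 0.5 * 10 ** (2 - sig_figs)   # tolerance in percent
    total = term = x
    n = 1
    while True:
        # next odd-power term: multiply by -x^2 / ((2n)(2n+1))
        term *= -x * x / ((2 * n) * (2 * n + 1))
        previous, total = total, total + term
        n += 1
        eps_a = abs((total - previous) / total) * 100
        if eps_a < eps_s:
            return total, n

approx, terms = sin_series(math.pi / 2)
print(approx, terms)   # approx is close to sin(pi/2) = 1
```

For x = π/2 the series settles near 1.0 after only a handful of terms, which is why truncated expansions of this kind are practical.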

Error Definitions cont.
• Round off error - originates from the fact
that computers retain only a fixed number
of significant figures
• Truncation errors - errors that result from
using an approximation in place of an exact
mathematical procedure
Computer Storage of Numbers
• Computers use a binary base system for logic
and storage of information
• This is convenient due to on/off circuitry
• The basic unit of computer operation is the bit
with value either 0 (off) or 1 (on)
• Numbers can be represented as a string of bits
• 110_2 = 0*2^0 + 1*2^1 + 1*2^2 = 0 + 2 + 4 = 6_10
• A string of 8 bits is called a byte
Computer Storage of Integers
• It is convenient to think of an integer as being
represented by a sign bit and m number bits
• This is not quite correct
• Perhaps you remember that the range of a 4 byte
(32 bit) integer is -2,147,483,648 to +2,147,483,647
• With a simple sign bit the range would instead be
± (2^(32-1) - 1) = ±2,147,483,647
• Because +0 and -0 are redundant, a method of
storage called 2’s complement is used
2’s Complement
• The easiest way to understand 2’s
complement is in terms of a VCR counter
• When you rewind a tape and the counter goes
past zero, does it register negative values?
2’s Complement & -1
• Use a 1 byte (8 bit) integer as an example
• Use all 1's (our highest digit in base 2) for -1,
i.e. 11111111 represents -1
• We furthermore represent 0 as all zeros
• What happens when we add -1 and 1 using these?

    1 1 1 1 1 1 1 1
  + 0 0 0 0 0 0 0 1
  1 0 0 0 0 0 0 0 0     (carried bit is lost, leaving 0)
8 Bit Integer Example, Cont.
• With this system we don't need separate subtraction;
we simply add the negative of a number
• So -1 plus -1 is

    1 1 1 1 1 1 1 1
  + 1 1 1 1 1 1 1 1
  1 1 1 1 1 1 1 1 0     (carry lost: 11111110 = -2)

• And 1 less than -2 is

    1 1 1 1 1 1 1 0
  + 1 1 1 1 1 1 1 1
  1 1 1 1 1 1 1 0 1     (carry lost: 11111101 = -3)
8 Bit Integer Example, Cont.
• It is not hard to see from the trend that the
most negative number is 10000000_2 = -128 (-2^7)
and the most positive is 01111111_2 = 127 (2^7 - 1)
• The algorithm for negation is to reverse all bits
and add 1, hence "two's complement".
So -01111111_2 = 10000001_2 = -127
• This even works for 0: reversing 00000000 gives

    1 1 1 1 1 1 1 1
  + 0 0 0 0 0 0 0 1
  1 0 0 0 0 0 0 0 0     (carry lost, so -0 = 0)
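The reverse-and-add-one rule is easy to check in code. A small Python sketch for 8-bit values (the helper names are illustrative; masking with 0xFF plays the role of the lost carry bit):

```python
def twos_complement_negate(value, bits=8):
    """Negate by inverting all bits and adding 1, keeping only
    the lowest `bits` bits (the carried bit is lost)."""
    mask = (1 << bits) - 1           # 0xFF for 8 bits
    return ((value ^ mask) + 1) & mask

def to_signed(pattern, bits=8):
    """Interpret a bit pattern as a signed two's-complement value."""
    if pattern & (1 << (bits - 1)):  # sign bit set
        return pattern - (1 << bits)
    return pattern

print(bin(twos_complement_negate(1)))          # 0b11111111, i.e. -1
print(to_signed(twos_complement_negate(127)))  # -127
print(to_signed(twos_complement_negate(0)))    # 0: negating zero gives zero
```

Note how negating 0 reproduces the carry-lost addition shown above.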
8 Bit Real Variable Example
• Real numbers must be represented by a
mantissa and an exponent (each with a sign)
• Computers use 0.XXX x 2^(±ZZ) rather than the
X.XX x 10^(±YY) of scientific notation
• Divide the 8 bits up into 3 bits for the
exponent and 5 bits for the mantissa, with
each using a sign bit (not really, but nearly)
So we have ± Z Z ± X X X X
Exponent Mantissa
8 Bit Real, Machine ε
• Machine ε is the error relative to the real-number
representation of 1:

    ±0 1 | ±1 0 0 0     (exponent | mantissa)

• Addition of two reals requires that their exponents
be the same
• So the smallest number which can be added to real 1
without losing the mantissa bit is

    ±0 1 | ±0 0 0 1     (exponent | mantissa)

i.e. (0x2^-1 + 0x2^-2 + 0x2^-3 + 1x2^-4) x 2^1 = 0.125

Machine ε here is 0.125
32 Bit Real Representations
• IEEE standard real numbers have 8 bit
exponents and 24 bit mantissas
• Machine ε for this system is then

  0.00000000000000000000001_2 = 2^-23 = 1.192093 x 10^-7

• Overflow for this system is then

  0.11111111111111111111111_2 x 2^(1111111_2)
  = (1 - ε_machine) x 2^(127+1) = 3.40282 x 10^+38

(note that the first mantissa bit after the point is
assumed to be 1, thus 127+1 factors of 2)
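The single-precision machine ε = 2^-23 can be verified without any numerical library: Python's standard struct module can round a value to IEEE 32-bit storage and back. A small sketch (the helper name is illustrative):

```python
import struct

def to_float32(x):
    """Round a Python float to IEEE single precision and back."""
    return struct.unpack('f', struct.pack('f', x))[0]

# Find the smallest power of two that still changes 1.0
# when stored in 32 bits
eps = 1.0
while to_float32(1.0 + eps / 2) > 1.0:
    eps /= 2

print(eps)            # 1.1920928955078125e-07
print(2.0 ** -23)     # the same value
```

The loop halves the candidate until adding it to 1.0 no longer survives the rounding to 24 mantissa bits, which is exactly the definition of machine ε used above.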
TAYLOR SERIES
• Provides a means to predict a function value
at one point in terms of the function value
and its derivative at another point
• Zero order approximation
  f(x_{i+1}) ≈ f(x_i)

This is good if the function is a constant.
Taylor Series Expansion
• First order approximation
  f(x_{i+1}) ≈ f(x_i) + f'(x_i)(x_{i+1} - x_i)
                        (slope multiplied by distance)

Still a straight line, but capable of predicting
an increase or decrease - LINEAR
Taylor Series Expansion
• Second order approximation - captures
some of the curvature
  f(x_{i+1}) ≈ f(x_i) + f'(x_i)(x_{i+1} - x_i)
               + f''(x_i)/2! (x_{i+1} - x_i)^2
Taylor Series Expansion
  f(x_{i+1}) = f(x_i + h) ≈ f(x_i) + f'(x_i) h + f''(x_i)/2! h^2
               + f'''(x_i)/3! h^3 + ...... + f^(n)(x_i)/n! h^n + R_n

where h = step size = x_{i+1} - x_i

and the remainder

  R_n = f^(n+1)(ξ)/(n+1)! h^(n+1),    x_i < ξ < x_{i+1}
Example
Use zero through fourth order Taylor series expansion
to approximate f(1) from x = 0 (i.e. h = 1.0)
  f(x) = -0.1 x^4 - 0.15 x^3 - 0.5 x^2 - 0.25 x + 1.2

(figure: f(x) plotted on 0 ≤ x ≤ 1.5; note f(1) = 0.2.
The Taylor approximations for n = 0, 1, 2, 3 are shown
converging toward the actual value.)
Functions with infinite number of
derivatives
• f(x) = cos x
• f `(x) = -sin x
• f ``(x) = -cos x
• f ```(x) = sin x
• Evaluate the system where x_i = π/4 and x_{i+1} = π/3
• h = π/3 - π/4 = π/12
Functions with infinite number of
derivatives
• Zero order
  f(π/3) ≈ cos(π/4) = 0.707                       ε_t = 41.4%
• First order
  f(π/3) ≈ cos(π/4) - sin(π/4)(π/12) = 0.5220     ε_t = 4.4%
• Second order
  f(π/3) ≈ 0.4978                                 ε_t = 0.45%
• By n = 6, ε_t = 2.4 x 10^-6 %
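These error percentages are easy to reproduce. A minimal Python sketch of the expansion of cos about x_i = π/4 (names are illustrative, not from the slides):

```python
import math

def cos_taylor(x, about, order):
    """Taylor expansion of cos about `about`, truncated at `order`.
    The k-th derivative of cos cycles: cos, -sin, -cos, sin."""
    derivs = [math.cos, lambda t: -math.sin(t),
              lambda t: -math.cos(t), math.sin]
    h = x - about
    return sum(derivs[k % 4](about) * h ** k / math.factorial(k)
               for k in range(order + 1))

true = math.cos(math.pi / 3)          # 0.5
for n in (0, 1, 2, 6):
    approx = cos_taylor(math.pi / 3, math.pi / 4, n)
    eps_t = abs((true - approx) / true) * 100
    print(n, round(approx, 4), round(eps_t, 2))
```

Because every derivative of cos is bounded by 1, the remainder term shrinks like h^(n+1)/(n+1)! and the error falls off rapidly with the order, as in the slide's numbers.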
  f(x) = a x^2 + b x + c = 0

  x = ( -b ± sqrt(b^2 - 4ac) ) / 2a

This equation gives us the roots of the algebraic function f(x),
i.e. the values of x that make f(x) = 0.

But how can we solve for f(x) = e^-x - x ?
Roots of Equations
• Plot the function and determine where it
crosses the x-axis
• Lacks precision
• Trial and error
(figure: plot of f(x) = e^-x - x on -5 ≤ x ≤ 10; the curve
crosses the x-axis near x = 0.57)
Overview of Methods
• Bracketing methods
– Graphing method
– Bisection method
– False position
• Open methods
– One point iteration
– Newton-Raphson
– Secant method
Bracketing Methods
• Graphical
• Bisection method
• False position method
Graphical
(limited practical value)
(figures: f(x) sketched between a lower and an upper bound;
if f has the same sign at both bounds, there are no roots or
an even number of roots in between; if the signs are opposite,
there is an odd number of roots)
Bisection Method
• Takes advantage of sign changing
• Takes advantage of sign changing
• f(x_l) f(x_u) < 0, where the subscripts refer to the
lower and upper bounds
• There is then at least one real root between x_l and x_u

(figure: plot of f(x) = e^-x - x with the bracket
x_l = -1, x_u = 1)

• f(x) = e^-x - x
• x_l = -1
• x_u = 1
PROBLEM STATEMENT

Use the bisection method
to determine the root
SOLUTION
• f(x) = e^-x - x
• x_l = -1    x_u = 1
– check if f(x_l) f(x_u) < 0
– f(-1) f(1) = (3.72)(-0.632) < 0
• x_r = (x_l + x_u) / 2 = 0
• f(0) = 1 > 0, so exchange so that x_l = 0
• x_l = 0    x_u = 1
Solution cont.
• x_l = 0    x_u = 1    SWITCH LOWER LIMIT
– check if f(x_l) f(x_u) < 0
– f(0) f(1) = (1)(-0.632) < 0
• x_r = (x_l + x_u) / 2 = (0 + 1)/2 = 0.5
• f(0.5) = 0.1065
• x_l = 0.5    x_u = 1    SWITCH LOWER LIMIT
• x_r = (x_l + x_u) / 2 = (0.5 + 1)/2 = 0.75
• f(0.75) = -0.2776
Solution cont.
• x_l = 0.5    x_u = 0.75    SWITCH UPPER LIMIT
• x_r = (x_l + x_u) / 2 = (0.5 + 0.75)/2 = 0.625
• f(0.625) = -0.090
• x_l = 0.5    x_u = 0.625    SWITCH UPPER LIMIT
• x_r = (0.5 + 0.625) / 2 = 0.5625
• f(0.5625) = 0.007
Solution cont.
• Here we consider an error that is not
contingent on foreknowledge of the root
• ε_a is a function of the present and previous approximations:

  ε_a = |(present x_r - previous x_r) / present x_r| x 100%

(figure: plot of f(x) = e^-x - x)
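The halving loop above can be sketched in a few lines of Python; the stopping test uses the approximate relative error ε_a just described (the tolerance value here is chosen for illustration):

```python
import math

def bisect(f, xl, xu, eps_s=0.05, max_iter=100):
    """Bisection: repeatedly halve [xl, xu], keeping the half
    where f changes sign, until eps_a (in percent) < eps_s."""
    if f(xl) * f(xu) > 0:
        raise ValueError("root is not bracketed")
    xr_old = xl
    for _ in range(max_iter):
        xr = (xl + xu) / 2
        if xr != 0:
            eps_a = abs((xr - xr_old) / xr) * 100
            if eps_a < eps_s:
                return xr
        if f(xl) * f(xr) < 0:
            xu = xr          # root lies in the lower half
        else:
            xl = xr          # root lies in the upper half
        xr_old = xr
    return xr

root = bisect(lambda x: math.exp(-x) - x, -1.0, 1.0)
print(root)   # converges toward 0.5671...
```

Each pass halves the bracket, so the error bound shrinks by a factor of two per iteration regardless of the shape of f.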
False Position Method
• “Brute Force” of bisection method is
inefficient
• Join points by a straight line
• Improves the estimate
• Estimating the curve by a straight line gives
the “false position”
(figure: the points (x_l, f(x_l)) and (x_u, f(x_u)) are joined
by a straight line; its x-intercept is the next estimate, which
lies near, but not at, the real root)

DEVELOP METHOD BASED ON SIMILAR TRIANGLES

  f(x_l) / (x_r - x_l) = f(x_u) / (x_r - x_u)

Solving for the next estimate x_r:

  x_r = x_u - f(x_u)(x_l - x_u) / ( f(x_l) - f(x_u) )
Example
• f(x) = x^3 - 98
• true root: x = 98^(1/3) = 4.61
• x_l = 4.55    f(x_l) = -3.804
• x_u = 4.65    f(x_u) = 2.545
• x_r = 4.65 - (2.545)(4.55 - 4.65)/(-3.804 - 2.545)
• x_r = 4.6099    f(x_r) = -0.03419
– if f(x_l) f(x_r) > 0, set x_l = x_r
– if f(x_l) f(x_r) < 0, set x_u = x_r
Example (continued)
• x_l = 4.61    f(x_l) = -0.034
• x_u = 4.65    f(x_u) = 2.545
• x_r = 4.65 - (2.545)(4.61 - 4.65)/(-0.034 - 2.545)
• x_r = 4.6104    f(x_r) = -0.0004
• ε_a = 0.011%
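A minimal Python sketch of the false-position update applied to the same cubic (names are illustrative):

```python
def false_position(f, xl, xu, eps_s=0.01, max_iter=50):
    """False position: the x-intercept of the chord through
    (xl, f(xl)) and (xu, f(xu)) replaces whichever bound keeps
    the root bracketed."""
    xr_old = xl
    for _ in range(max_iter):
        xr = xu - f(xu) * (xl - xu) / (f(xl) - f(xu))
        eps_a = abs((xr - xr_old) / xr) * 100
        if eps_a < eps_s:
            return xr
        if f(xl) * f(xr) < 0:
            xu = xr          # root is between xl and xr
        else:
            xl = xr          # root is between xr and xu
        xr_old = xr
    return xr

root = false_position(lambda x: x ** 3 - 98, 4.55, 4.65)
print(root)   # close to 98 ** (1/3) = 4.6104...
```

Because the chord uses the function values, not just the signs, it usually lands much closer to the root per step than bisection does.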
Pitfalls of False Position Method
  f(x) = x^10 - 1

(figure: plot on 0 ≤ x ≤ 1.5; the curve stays close to the
x-axis and then rises steeply, so the chord's intercept creeps
toward the root at x = 1 very slowly)
Open Methods
• Fixed point iteration
• Newton-Raphson method
• Secant method
• Multiple roots
• In the previous bracketing methods, the root
is located within an interval prescribed by
an upper and lower boundary
Open Methods cont.
• Such methods are said to be convergent
– solution moves closer to the root as the
computation progresses
• Open method
– single starting value
– two starting values that do not necessarily
bracket the root
• These solutions may diverge
– solution moves farther from the root as the
computation progresses
(figures: Newton-Raphson illustrated step by step)
• Pick an initial estimate x_i and evaluate f(x_i)
• Draw a tangent to the curve at f(x_i)
• At the tangent's intersection with the x-axis we get
x_{i+1} and f(x_{i+1})
• The tangent gives the next estimate
• The solution can also "overshoot" the root and potentially
diverge: successive estimates x_0, x_1, x_2 move farther away
Fixed point iteration
• Open methods employ a formula to predict
the root
• In simple fixed point iteration, rearrange the
function f(x) so that x is on the left hand
side of the equation
  i.e. for f(x) = x^2 - 2x + 3 = 0,
  x = (x^2 + 3) / 2
Simple fixed point iteration
• In simple fixed point iteration, rearrange the
function f(x) so that x is on the left hand
side of the equation
  i.e. for f(x) = sin x = 0,
  x = sin x + x
• Let x = g(x)
• New estimate based on x_{i+1} = g(x_i)
Example
• Consider f(x) = e^-x - 3x
• g(x) = e^-x / 3
• Initial guess x = 0
(figure: plot of f(x) = e^-x - 3x)

Iteration table (initial guess 0.000):

  g(x)     f(x)      ε_a (%)
  0.333   -0.283
  0.239    0.071    39.561
  0.263   -0.018     9.016
  0.256    0.005     2.395
  0.258   -0.001     0.612
  0.258    0.000     0.158
  0.258    0.000     0.041
Newton Raphson
most widely used
(figure: successive tangents; the new estimates converge,
i.e. x_{i+2} is closer to the root, where f(x) = 0)
Newton Raphson
The tangent at x_i has slope dy/dx = f'(x_i), so

  f'(x_i) = ( f(x_i) - 0 ) / ( x_i - x_{i+1} )

Rearrange:

  x_{i+1} = x_i - f(x_i) / f'(x_i)
Newton Raphson Pitfalls

(figures: near an inflection point or a flat stretch of f(x),
the tangent can throw the next estimate far from the root;
the solution diverges)
Example
• f(x) = x^2 - 11
• f'(x) = 2x
• initial guess x_i = 3
• f(3) = -2
• f'(3) = 6
• x_{i+1} = 3 - (-2)/6 = 3.333

(figure: plot of f(x) = x^2 - 11 on 0 ≤ x ≤ 10)
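Carrying that first step through to convergence takes only a few iterations. A minimal Python sketch:

```python
def newton_raphson(f, df, x0, eps_s=1e-6, max_iter=50):
    """Newton-Raphson: x_{i+1} = x_i - f(x_i)/f'(x_i),
    stopping when eps_a (in percent) < eps_s."""
    x = x0
    for _ in range(max_iter):
        x_new = x - f(x) / df(x)
        if x_new != 0 and abs((x_new - x) / x_new) * 100 < eps_s:
            return x_new
        x = x_new
    return x

root = newton_raphson(lambda x: x ** 2 - 11, lambda x: 2 * x, 3.0)
print(root)   # sqrt(11) = 3.3166...
```

The quadratic convergence is visible in practice: the number of correct digits roughly doubles on every iteration.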
Secant method
Approximate the derivative using a finite divided difference:

  f'(x_i) ≈ ( f(x_{i-1}) - f(x_i) ) / ( x_{i-1} - x_i )

What is this?  HINT: dy/dx = lim Δy/Δx

Substitute this into the formula for Newton-Raphson.
Secant method
Substituting the finite-difference approximation for the
first derivative into the Newton-Raphson equation

  x_{i+1} = x_i - f(x_i) / f'(x_i)

gives

  x_{i+1} = x_i - f(x_i)(x_{i-1} - x_i) / ( f(x_{i-1}) - f(x_i) )
Secant method
• Requires two initial estimates
• f(x) is not required to change signs, therefore this
is not a bracketing method
  x_{i+1} = x_i - f(x_i)(x_{i-1} - x_i) / ( f(x_{i-1}) - f(x_i) )

(figure: the two initial estimates define a chord; the slope
between the two estimates stands in for the derivative, and
the chord's x-intercept is the new estimate)
Example
• Let's consider f(x) = e^-x - x

(figure: plot of f(x) = e^-x - x on -5 ≤ x ≤ 10)
Example cont.
• Choose two starting points
  x_0 = 0      f(x_0) = 1
  x_1 = 1.0    f(x_1) = -0.632
• Calculate x_2
  x_2 = 1 - (-0.632)(0 - 1)/(1 + 0.632) = 0.6127

(figures: in FALSE POSITION the new estimate replaces the
bound that keeps the root bracketed; in the SECANT METHOD
the new estimate is simply selected from the intersection
of the latest chord with the x-axis, regardless of sign)
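The secant update is the same line-intercept formula, but always advanced on the two most recent points. A minimal Python sketch:

```python
import math

def secant(f, x0, x1, eps_s=1e-4, max_iter=50):
    """Secant method: Newton-Raphson with the derivative replaced
    by the slope through the two most recent estimates."""
    x2 = x1
    for _ in range(max_iter):
        x2 = x1 - f(x1) * (x0 - x1) / (f(x0) - f(x1))
        if x2 != 0 and abs((x2 - x1) / x2) * 100 < eps_s:
            return x2
        x0, x1 = x1, x2      # slide the window of estimates forward
    return x2

root = secant(lambda x: math.exp(-x) - x, 0.0, 1.0)
print(root)   # 0.5671...
```

Unlike false position, nothing forces the pair (x0, x1) to bracket the root, which is exactly why the secant method is classified as an open method.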
Multiple Roots
• Corresponds to a point where
a function is tangential to the
x-axis
• i.e. double root
  f(x) = x^3 - 5x^2 + 7x - 3 = (x-3)(x-1)(x-1)
• i.e. triple root
  f(x) = (x-3)(x-1)^3

(figure: plot of f(x) = (x-3)(x-1)^2 showing the double
root at x = 1, where the curve touches the x-axis)
Difficulties
• Bracketing methods won't work: f does not change sign
at an even-multiplicity root
• Limited to methods that may diverge
• f(x) = 0 at the root
• f'(x) = 0 at the root
• Hence, zero in the denominator for the Newton-Raphson
and Secant methods

(figures: f(x) = (x-3)(x-1)^2 with its double root, and
f(x) = (x-3)(x-1)^3 with its triple root at x = 1)
Multiple Roots
A modified Newton-Raphson handles multiple roots:

  x_{i+1} = x_i - f(x_i) f'(x_i) / ( [f'(x_i)]^2 - f(x_i) f''(x_i) )

(figure: plot of f(x) = (x-3)(x-1)^4 with its multiple
root at x = 1)
Systems of Non-Linear Equations
• We will later consider systems of linear equations
  f(x) = a_1 x_1 + a_2 x_2 + ...... + a_n x_n - C = 0
  where a_1, a_2 .... a_n and C are constants
• Consider the following equations
  y = -x^2 + x + 0.5
  y + 5xy = x^3
• Solve for x and y
Systems of Non-Linear Equations cont.
• Set the equations equal to zero
  y = -x^2 + x + 0.5
  y + 5xy = x^3
• u(x,y) = -x^2 + x + 0.5 - y = 0
• v(x,y) = y + 5xy - x^3 = 0
• The solution would be the values of x and y that make
the functions u and v equal to zero
Recall the Taylor Series
  f(x_{i+1}) ≈ f(x_i) + f'(x_i) h + f''(x_i)/2! h^2
               + f'''(x_i)/3! h^3 + ...... + f^(n)(x_i)/n! h^n + R_n

where h = step size = x_{i+1} - x_i
Write 2 Taylor series, one for u and one for v
(all partials evaluated at (x_i, y_i); HOT = higher order terms):

  u_{i+1} = u_i + ∂u/∂x (x_{i+1} - x_i) + ∂u/∂y (y_{i+1} - y_i) + HOT
  v_{i+1} = v_i + ∂v/∂x (x_{i+1} - x_i) + ∂v/∂y (y_{i+1} - y_i) + HOT

The root estimate corresponds to the point where
u_{i+1} = v_{i+1} = 0
First Order Taylor Series Approximation

Setting u_{i+1} = v_{i+1} = 0 and dropping the HOT:

  ∂u/∂x (x_{i+1} - x_i) + ∂u/∂y (y_{i+1} - y_i) = -u_i
  ∂v/∂x (x_{i+1} - x_i) + ∂v/∂y (y_{i+1} - y_i) = -v_i

Defining Δx = x_{i+1} - x_i and Δy = y_{i+1} - y_i,
the equations in matrix form are

  [ ∂u/∂x  ∂u/∂y ] { Δx }   { -u_i }
  [ ∂v/∂x  ∂v/∂y ] { Δy } = { -v_i }

This can be solved for x_{i+1} and y_{i+1}.
This is a 2 equation version of Newton-Raphson.
Solving the system gives

  x_{i+1} = x_i - ( u_i ∂v/∂y - v_i ∂u/∂y ) / ( ∂u/∂x ∂v/∂y - ∂u/∂y ∂v/∂x )

  y_{i+1} = y_i - ( v_i ∂u/∂x - u_i ∂v/∂x ) / ( ∂u/∂x ∂v/∂y - ∂u/∂y ∂v/∂x )

THE DENOMINATOR OF EACH OF THESE EQUATIONS IS FORMALLY
REFERRED TO AS THE DETERMINANT OF THE JACOBIAN
Example
• Determine the roots of the following nonlinear
simultaneous equations
  y = -x^2 + x + 0.5
  y + 5xy = x^3
• u(x,y) = -x^2 + x + 0.5 - y = 0
• v(x,y) = y + 5xy - x^3 = 0
• Use an initial estimate of x = 0, y = 1
Or alternately, in Δ form:

  Δx = x_{i+1} - x_i = -( u_i ∂v/∂y - v_i ∂u/∂y ) / ( ∂u/∂x ∂v/∂y - ∂u/∂y ∂v/∂x )
  Δy = y_{i+1} - y_i = -( v_i ∂u/∂x - u_i ∂v/∂x ) / ( ∂u/∂x ∂v/∂y - ∂u/∂y ∂v/∂x )

with THE SAME DETERMINANT OF THE JACOBIAN in each denominator.
Example cont.

  ∂u/∂x = -2x + 1        ∂u/∂y = -1
  ∂v/∂x = 5y - 3x^2      ∂v/∂y = 1 + 5x

First iteration (x = 0, y = 1):  det[J] = (1)(1) - (-1)(5) = 6
  x = -0.08333
  y = 0.41667
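The two-equation iteration can be sketched directly from the Δ-form formulas (the function names and the fixed iteration count are illustrative choices, not from the slides):

```python
def newton_2d(u, v, dudx, dudy, dvdx, dvdy, x, y, iters=10):
    """Two-equation Newton-Raphson using the determinant of the
    Jacobian as derived above."""
    for _ in range(iters):
        J = dudx(x, y) * dvdy(x, y) - dudy(x, y) * dvdx(x, y)
        x_new = x - (u(x, y) * dvdy(x, y) - v(x, y) * dudy(x, y)) / J
        y_new = y - (v(x, y) * dudx(x, y) - u(x, y) * dvdx(x, y)) / J
        x, y = x_new, y_new
    return x, y

u = lambda x, y: -x ** 2 + x + 0.5 - y
v = lambda x, y: y + 5 * x * y - x ** 3
x, y = newton_2d(u, v,
                 lambda x, y: -2 * x + 1, lambda x, y: -1.0,
                 lambda x, y: 5 * y - 3 * x ** 2, lambda x, y: 1 + 5 * x,
                 0.0, 1.0)
print(x, y)   # a simultaneous root of u and v
```

The first pass reproduces the slide's hand calculation (x = -0.08333, y = 0.41667); further passes drive both u and v toward zero.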
System of Linear Equations
• We have focused our last lectures on
finding a value of x that satisfied a single
equation
 f(x) = 0
• Now we will deal with the case of
determining the values of x_1, x_2, ..... x_n that
simultaneously satisfy a set of equations
System of Linear Equations
• Simultaneous equations
  f_1(x_1, x_2, ..... x_n) = 0
  f_2(x_1, x_2, ..... x_n) = 0
  .............
  f_n(x_1, x_2, ..... x_n) = 0
• Methods will be for linear equations
  a_11 x_1 + a_12 x_2 + ...... + a_1n x_n = c_1
  a_21 x_1 + a_22 x_2 + ...... + a_2n x_n = c_2
  ..............
  a_n1 x_1 + a_n2 x_2 + ...... + a_nn x_n = c_n
Mathematical Background
Matrix Notation
• a horizontal set of elements is called a row
• a vertical set is called a column
• first subscript refers to the row number
• second subscript refers to column number
        [ a_11  a_12  a_13 ...  a_1n ]
  [A] = [ a_21  a_22  a_23 ...  a_2n ]
        [  .     .     .   ...   .   ]
        [ a_m1  a_m2  a_m3 ...  a_mn ]

This matrix has m rows and n columns.
It has the dimensions m by n (m x n).

Note the consistent subscript scheme denoting row, column:
e.g. a_23 sits in row 2, column 3.
Row vector (m = 1):

  [B] = [ b_1  b_2  .......  b_n ]

Column vector (n = 1):

        [ c_1 ]
  [C] = [ c_2 ]
        [  .  ]
        [ c_m ]

Square matrix (m = n):

        [ a_11  a_12  a_13 ]
  [A] = [ a_21  a_22  a_23 ]
        [ a_31  a_32  a_33 ]

The diagonal consists of the elements a_11, a_22, a_33.
• Symmetric matrix
• Diagonal matrix
• Identity matrix
• Upper triangular matrix
• Lower triangular matrix
• Banded matrix
Symmetric Matrix

a_ij = a_ji for all i's and j's

        [ 5  1  2 ]
  [A] = [ 1  3  7 ]
        [ 2  7  8 ]

Does a_23 = a_32?
Yes. Check the other elements.
Diagonal Matrix

A square matrix where all elements off
the main diagonal are zero
        [ a_11   0     0     0   ]
  [A] = [  0    a_22   0     0   ]
        [  0     0    a_33   0   ]
        [  0     0     0    a_44 ]
Identity Matrix

A diagonal matrix where all elements on
the main diagonal are equal to 1
        [ 1  0  0  0 ]
  [A] = [ 0  1  0  0 ]
        [ 0  0  1  0 ]
        [ 0  0  0  1 ]

The symbol [I] is used to denote the identity matrix.
Upper Triangular Matrix

Elements below the main diagonal are
zero
        [ a_11  a_12  a_13 ]
  [A] = [  0    a_22  a_23 ]
        [  0     0    a_33 ]
Lower Triangular Matrix

All elements above the main diagonal
are zero

        [ 5  0  0 ]
  [A] = [ 1  3  0 ]
        [ 2  7  8 ]
Banded Matrix

All elements are zero with the exception
of a band centered on the main diagonal

        [ a_11  a_12   0     0   ]
  [A] = [ a_21  a_22  a_23   0   ]
        [  0    a_32  a_33  a_34 ]
        [  0     0    a_43  a_44 ]
Matrix Operating Rules
• Addition is element by element: c_ij = a_ij + b_ij
• [A] + [B] = [B] + [A]
• [A] + ([B] + [C]) = ([A] + [B]) + [C]
Matrix Operating Rules
• Multiplication of a matrix [A] by a scalar g
is obtained by multiplying every element of
[A] by g
               [ g a_11  g a_12 ...  g a_1n ]
  [B] = g[A] = [ g a_21  g a_22 ...  g a_2n ]
               [   .       .    ...    .    ]
               [ g a_m1  g a_m2 ...  g a_mn ]
Matrix Operating Rules
• The product of two matrices is represented as
[C] = [A][B]

 n = column dimensions of [A]
 n = row dimensions of [B]
  c_ij = Σ (k = 1 to n) a_ik b_kj

Simple way to check whether matrix multiplication is possible:

  [A] (m x n)  [B] (n x k)  =  [C] (m x k)

interior dimensions must be equal; exterior dimensions conform
to the dimensions of the resulting matrix
Matrix multiplication
• If the dimensions are suitable, matrix
multiplication is associative
 ([A][B])[C] = [A]([B][C])
• If the dimensions are suitable, matrix
multiplication is distributive
 ([A] + [B])[C] = [A][C] + [B][C]
• Multiplication is generally not commutative
 [A][B] is not equal to [B][A]
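The c_ij = Σ a_ik b_kj rule translates directly into code. A minimal Python sketch (plain nested lists, no library assumed):

```python
def matmul(A, B):
    """c_ij = sum over k of a_ik * b_kj.
    Interior dimensions must agree: columns of A = rows of B."""
    n = len(B)                      # rows of B
    assert all(len(row) == n for row in A), "interior dimensions differ"
    k = len(B[0])                   # columns of B
    return [[sum(A[i][p] * B[p][j] for p in range(n))
             for j in range(k)]
            for i in range(len(A))]

# (2 x 3)(3 x 2) -> (2 x 2): exterior dimensions give the result shape
C = matmul([[1, 2, 3], [4, 5, 6]], [[7, 8], [9, 10], [11, 12]])
print(C)   # [[58, 64], [139, 154]]
```

Swapping the operands here would also demonstrate the non-commutativity noted above: the (3 x 2)(2 x 3) product is a different, 3 x 3, matrix.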
Inverse of [A]
Transpose of [A]
Inverse:     [A][A]^-1 = [A]^-1[A] = [I]

Transpose:   the element in row i, column j of [A]^t is a_ji:

          [ a_11  a_21 ...  a_m1 ]
  [A]^t = [ a_12  a_22 ...  a_m2 ]
          [  .     .   ...   .   ]
          [ a_1n  a_2n ...  a_mn ]
Determinants
Denoted as det A or |A|.

For a 2 x 2 matrix  [ a  b ]
                    [ c  d ]

  det A = a d - b c
Determinants cont.
There are different schemes used to compute the determinant.

Consider cofactor expansion
- uses minor and cofactors of the matrix

Minor: the minor of an entry a_ij is the determinant of the
submatrix obtained by deleting the ith row and the jth column.

Cofactor: the cofactor of an entry a_ij of an n x n matrix A
is the product of (-1)^(i+j) and the minor of a_ij.
        [ a_11  a_12  a_13 ]
For A = [ a_21  a_22  a_23 ],  the minor of a_32 is the
        [ a_31  a_32  a_33 ]

determinant of the 2 x 2 submatrix left after deleting
row 3 and column 2.
Example: the minor of a_31 for a 3 x 3 matrix is

  | a_12  a_13 |
  | a_22  a_23 |  =  a_12 a_23 - a_13 a_22

i.e. to calculate the cofactor A_31, first calculate the
minor of a_31, then multiply by (-1)^(3+1) = +1.
Minors and cofactors are used to calculate the determinant
of a matrix.

Expanding an n x n matrix around the ith row
(for any one value of i):

  |A| = a_i1 A_i1 + a_i2 A_i2 + ...... + a_in A_in

Expanding around the jth column (for any one value of j):

  |A| = a_1j A_1j + a_2j A_2j + ...... + a_nj A_nj

For a 3 x 3 matrix expanded around the first row:

  D = a_11 | a_22 a_23 | - a_12 | a_21 a_23 | + a_13 | a_21 a_22 |
           | a_32 a_33 |        | a_31 a_33 |        | a_31 a_32 |

  det = (-1)^(1+1) a_11 (a_22 a_33 - a_23 a_32)
      + (-1)^(1+2) a_12 (a_21 a_33 - a_23 a_31)
      + (-1)^(1+3) a_13 (a_21 a_32 - a_22 a_31)
Example:
Calculate the determinant of the following 3x3 matrix.
First, calculate it using the 1st row (the way you
probably have done it all along).
Then try it using the 2nd row.

  [ 1  7  9 ]
  [ 4  3  2 ]
  [ 6  1  5 ]
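Cofactor expansion is naturally recursive, since each minor is itself a determinant. A minimal Python sketch expanding along the first row (function name is illustrative):

```python
def det(M):
    """Determinant by cofactor expansion along the first row."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        # minor: delete row 0 and column j
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

print(det([[1, 7, 9], [4, 3, 2], [6, 1, 5]]))   # -169
```

Expanding along any other row or column gives the same value, which is exactly the exercise posed above; for large n this O(n!) scheme is only pedagogical, and elimination methods are used instead.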
Properties of Determinants
• det A = det A^T
• If all entries of any row or column are zero,
then det A = 0
• If two rows or two columns are identical,
then det A = 0
How to represent a system of
linear equations as a matrix
[A]{X} = {C}

where {X} and {C} are both column vectors
Example: the system

  0.3 x_1 + 0.52 x_2 +     x_3 = -0.01
  0.5 x_1 +      x_2 + 1.9 x_3 =  0.67
  0.1 x_1 + 0.3  x_2 + 0.5 x_3 = -0.44

becomes [A]{X} = {C}:

  [ 0.3  0.52  1   ] { x_1 }   { -0.01 }
  [ 0.5  1     1.9 ] { x_2 } = {  0.67 }
  [ 0.1  0.3   0.5 ] { x_3 }   { -0.44 }
Graphical Method
2 equations, 2 unknowns:

  a_11 x_1 + a_12 x_2 = c_1
  a_21 x_1 + a_22 x_2 = c_2

Solve each for x_2:

  x_2 = -(a_11/a_12) x_1 + c_1/a_12
  x_2 = -(a_21/a_22) x_1 + c_2/a_22

and plot both lines: the intersection (x_1, x_2) is the solution.

Example:

   3 x_1 + 2 x_2 = 18    →   x_2 = -(3/2) x_1 + 9
  -  x_1 + 2 x_2 = 2     →   x_2 = (1/2) x_1 + 1

The lines intersect at (4, 3).
Check: 3(4) + 2(3) = 12 + 6 = 18
Special Cases
• No solution
• Infinite solution
• Ill-conditioned
(figures:
a) No solution: the two lines have the same slope and
   never intersect
b) Infinite solutions: the two lines coincide, e.g.
   -1/2 x_1 + x_2 = 1 and -x_1 + 2 x_2 = 2
c) Ill-conditioned: the slopes are so close that the point
   of intersection is difficult to detect visually)
Let's consider how we know if a system is ill-conditioned.
Start by considering systems where the slopes are identical.
• If the determinant is zero, the slopes are identical:

  a_11 x_1 + a_12 x_2 = c_1
  a_21 x_1 + a_22 x_2 = c_2

Rearrange these equations into straight-line form,
x_2 = (slope) x_1 + intercept:

  x_2 = -(a_11/a_12) x_1 + c_1/a_12
  x_2 = -(a_21/a_22) x_1 + c_2/a_22

If the slopes are nearly equal (ill-conditioned):

  a_11/a_12 ≈ a_21/a_22
  a_11 a_22 ≈ a_21 a_12
  a_11 a_22 - a_21 a_12 ≈ 0

Isn't this the determinant?

  | a_11  a_12 |
  | a_21  a_22 |  =  det A

If the determinant is zero the slopes are equal. This can mean:
- no solution
- infinite number of solutions

If the determinant is close to zero, the system is ill-conditioned.

So it seems that we should check the determinant of a system
before any further calculations are done. Let's try an example.

Example
Determine whether the following system is ill-conditioned.

  [ 37.2  4.7 ] { x_1 }   { 22 }
  [ 19.2  2.5 ] { x_2 } = { 12 }

Solution:

  det = (37.2)(2.5) - (4.7)(19.2) = 2.76

What does this tell us? Is this close to zero? Hard to say.

If we scale the matrix first, i.e. divide each row by its
largest element, we get a better sense of things:

  | 1  0.126 |
  | 1  0.130 |  =  0.004

This is further justified when we consider a graph of the
two functions: clearly the slopes are nearly equal.
Cramer’s Rule
• Not efficient for solving large numbers of
linear equations
• Useful for explaining some inherent
problems associated with solving linear
equations.
  [ a_11  a_12  a_13 ] { x_1 }   { b_1 }
  [ a_21  a_22  a_23 ] { x_2 } = { b_2 }     i.e. [A]{x} = {b}
  [ a_31  a_32  a_33 ] { x_3 }   { b_3 }
Cramer's Rule: to solve for x_i, place {b} in the ith column
of [A] and divide the resulting determinant by |A|:

        | b_1  a_12  a_13 |
  x_1 = | b_2  a_22  a_23 | / |A|
        | b_3  a_32  a_33 |

        | a_11  b_1  a_13 |
  x_2 = | a_21  b_2  a_23 | / |A|
        | a_31  b_3  a_33 |

        | a_11  a_12  b_1 |
  x_3 = | a_21  a_22  b_2 | / |A|
        | a_31  a_32  b_3 |
Example: Use of Cramer's Rule

  2 x_1 + 3 x_2 = 5
    x_1 -   x_2 = 5

  [ 2   3 ] { x_1 }   { 5 }
  [ 1  -1 ] { x_2 } = { 5 }

  |A| = (2)(-1) - (3)(1) = -5

Note the substitution of {b} into [A]:

  x_1 = (1/|A|) | 5   3 |  =  [ (5)(-1) - (3)(5) ] / (-5)  =  -20/-5  =  4
                | 5  -1 |

  x_2 = (1/|A|) | 2  5 |  =  [ (2)(5) - (5)(1) ] / (-5)  =  5/-5  =  -1
                | 1  5 |
Elimination of Unknowns (algebraic approach)

  a_11 x_1 + a_12 x_2 = c_1      (multiply by a_21)
  a_21 x_1 + a_22 x_2 = c_2      (multiply by a_11)

  a_21 a_11 x_1 + a_21 a_12 x_2 = a_21 c_1
  a_21 a_11 x_1 + a_11 a_22 x_2 = a_11 c_2      SUBTRACT:

  ( a_11 a_22 - a_21 a_12 ) x_2 = a_11 c_2 - a_21 c_1

  x_2 = ( a_11 c_2 - a_21 c_1 ) / ( a_11 a_22 - a_21 a_12 )

  x_1 = ( a_22 c_1 - a_12 c_2 ) / ( a_11 a_22 - a_21 a_12 )

NOTE: same result as Cramer's Rule
Gauss Elimination
• One of the earliest methods developed for
solving simultaneous equations
• Important algorithm in use today
• Involves combining equations in order to
eliminate unknowns
Blind (Naive) Gauss Elimination
• Technique for larger matrices
• Same principles of elimination
– manipulate equations to eliminate an unknown
from an equation
– Solve directly then back-substitute into one of
the original equations
Two Phases of Gauss Elimination
Forward elimination reduces the augmented system to
upper triangular form:

  [ a_11  a_12  a_13 | c_1 ]       [ a_11  a_12   a_13   | c_1   ]
  [ a_21  a_22  a_23 | c_2 ]   →   [  0    a'_22  a'_23  | c'_2  ]
  [ a_31  a_32  a_33 | c_3 ]       [  0     0     a''_33 | c''_3 ]

Note: the prime indicates the number of times the element has
changed from its original value. Also, the extra column for
the c's makes this matrix form what is called augmented.
Two Phases of Gauss Elimination
  [ a_11  a_12   a_13   | c_1   ]
  [  0    a'_22  a'_23  | c'_2  ]
  [  0     0     a''_33 | c''_3 ]

Back substitution:

  x_3 = c''_3 / a''_33
  x_2 = ( c'_2 - a'_23 x_3 ) / a'_22
  x_1 = ( c_1 - a_12 x_2 - a_13 x_3 ) / a_11
Example
  2 x_1 +   x_2 + 3 x_3 = 1
  4 x_1 + 4 x_2 + 7 x_3 = 1
  2 x_1 + 5 x_2 + 9 x_3 = 3

a_21 / a_11 = 4/2 = 2 (called the pivot factor for row 2).
Multiply equation 1 by 2 and subtract the (temporary) revised
first equation from the second equation:

  (4-4) x_1 + (4-2) x_2 + (7-6) x_3 = 1 - 2
  0 x_1 + 2 x_2 + x_3 = -1

Multiply equation 1 by a_31 / a_11 = 2/2 = 1
and subtract it from equation 3:

  (2-2) x_1 + (5-1) x_2 + (9-3) x_3 = 3 - 1
  0 x_1 + 4 x_2 + 6 x_3 = 2

The equivalent matrix algorithm for calculating the revised
entries is

  a'_ij = a_ij - (a_ik / a_kk) a_kj

where i is the row being modified (3 in this case),
j is the column in row i (1, 2, 3 and 4 in this case), and
k is the index of the x eliminated (1 in this case);
note that a_31 can simply be assumed 0.

We now replace equations 2 and 3 in the revised system
(note that the 1st equation is still the original):

  2 x_1 + x_2 + 3 x_3 = 1
          2 x_2 +  x_3 = -1
          4 x_2 + 6 x_3 = 2
Continue the computation by multiplying the second equation
by a'_32 / a'_22 = 4/2 = 2 and subtracting it from the third:

  2 x_1 + x_2 + 3 x_3 = 1
          2 x_2 +  x_3 = -1
                 4 x_3 = 4

THIS DERIVATION OF AN UPPER TRIANGULAR MATRIX IS CALLED
THE FORWARD ELIMINATION PROCESS

From the system we immediately calculate:

  x_3 = 4/4 = 1

Continue to back substitute:

  x_2 = ( -1 - (1)(1) ) / 2 = -1

  x_1 = ( 1 - 3(1) - (-1) ) / 2 = -1/2

THIS SERIES OF STEPS IS THE BACK SUBSTITUTION
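Both phases translate directly into code. A minimal Python sketch of naive Gauss elimination, run on the example system (no pivoting, so a zero pivot would fail, as the next slides discuss):

```python
def gauss_naive(A, c):
    """Naive Gauss elimination: forward elimination to upper
    triangular form, then back substitution."""
    n = len(A)
    A = [row[:] for row in A]              # work on copies
    c = c[:]
    for k in range(n - 1):                 # forward elimination
        for i in range(k + 1, n):
            factor = A[i][k] / A[k][k]     # pivot factor a_ik / a_kk
            for j in range(k, n):
                A[i][j] -= factor * A[k][j]
            c[i] -= factor * c[k]
    x = [0.0] * n                          # back substitution
    for i in range(n - 1, -1, -1):
        x[i] = (c[i] - sum(A[i][j] * x[j]
                           for j in range(i + 1, n))) / A[i][i]
    return x

x = gauss_naive([[2, 1, 3], [4, 4, 7], [2, 5, 9]], [1, 1, 3])
print(x)   # [-0.5, -1.0, 1.0]
```

The result matches the hand calculation: x_1 = -1/2, x_2 = -1, x_3 = 1.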
Pitfalls of the Elimination
Method
• Division by zero
• Round off errors
– magnitude of the pivot element is small
compared to other elements
• Ill conditioned systems
Division by Zero
• When we normalize, i.e. form ratios such as a_12/a_11,
we need to make sure we are not dividing by zero
• This may also happen if the coefficient is very
close to zero
          2 x_2 + 3 x_3 = 8        (a_11 = 0!)
  4 x_1 + 6 x_2 + 7 x_3 = -3
  2 x_1 +   x_2 + 6 x_3 = 5
Techniques for Improving the
Solution
• Use of more significant figures
• Pivoting
• Scaling
Use of more significant figures
• Simplest remedy for ill conditioning
• Extend precision
Pivoting
• Problems occur when the pivot element is
zero - division by zero
• Problems also occur when the pivot element
is smaller in magnitude compared to other
elements (i.e. round-off errors)
• Prior to normalizing, determine the largest
available coefficient
Pivoting
• Partial pivoting
– rows are switched so that the largest element is
the pivot element
• Complete pivoting
– columns as well as rows are searched for the
largest element and switched
– rarely used because switching columns changes
the order of the x’s, adding unjustified
complexity to the computer program
Division by Zero - Solution
Pivoting has been developed
to partially avoid these problems
Swap rows so that a nonzero (and large) coefficient is the pivot:

          2 x_2 + 3 x_3 = 8              4 x_1 + 6 x_2 + 7 x_3 = -3
  4 x_1 + 6 x_2 + 7 x_3 = -3    →                2 x_2 + 3 x_3 = 8
  2 x_1 +   x_2 + 6 x_3 = 5              2 x_1 +   x_2 + 6 x_3 = 5
Scaling
• Minimizes round-off errors for cases where some
of the equations in a system have much larger
coefficients than others
• In engineering practice, this is often due to the
widely different units used in the development of
the simultaneous equations
• As long as each equation is consistent, the system
will be technically correct and solvable
Scaling
  2 x_1 + 100,000 x_2 = 100,000      scaled:   0.00002 x_1 + x_2 = 1
      x_1 +       x_2 = 2                              x_1 + x_2 = 2

Eliminating x_1 from the unscaled system gives

  -49,999 x_2 = -49,998

and, carrying limited significant figures, x_2 = 1.00 but
x_1 = 0.00 (wrong). With the scaled (pivoted) system:
x_1 = 1.00, x_2 = 1.00.
Pivot rows to put the greatest value on the diagonal.
EXAMPLE

Use Gauss Elimination to solve the following set
of linear equations
          3 x_2 - 13 x_3 = -50
  2 x_1 - 6 x_2 +    x_3 =  45
  4 x_1 +          8 x_3 =   4
SOLUTION
First write the system in the augmented shorthand
presented in class:

  [ 0   3  -13 | -50 ]
  [ 2  -6    1 |  45 ]
  [ 4   0    8 |   4 ]

With a_11 = 0 we will clearly run into problems of
division by zero. Use partial pivoting: swap rows 1 and 3.

  [ 4   0    8 |   4 ]
  [ 2  -6    1 |  45 ]
  [ 0   3  -13 | -50 ]

Begin developing the upper triangular matrix.
Row 2 - (2/4) row 1:

  [ 4   0    8 |   4 ]
  [ 0  -6   -3 |  43 ]
  [ 0   3  -13 | -50 ]
( )
( )
( ) ( ) okay 50 966 . 1 13 149 . 8 3
CHECK
931 . 2
4
966 . 1 8 4
x
149 . 8
6
966 . 1 3 43
x 966 . 1
5 . 14
5 . 28
x
5 . 28 5 . 14 0 0
43 3 6 0
4 8 0 4
50 13 3 0
43 3 6 0
4 8 0 4
1
2 3
÷ = ÷ ÷
÷ =
÷
=
÷ =
÷
+
= =
÷
÷
=
(
(
(
¸
(

¸

÷ ÷
÷ ÷
(
(
(
¸
(

¸

÷ ÷
÷ ÷

...end of
problem
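The forward-elimination / back-substitution procedure worked above can be sketched in a short Python routine. This is an illustrative sketch rather than code from the course materials; the name `gauss_solve` and its list-of-lists interface are choices made here.

```python
def gauss_solve(A, c):
    """Solve A x = c by Gauss elimination with partial pivoting."""
    n = len(A)
    # Build the augmented matrix without modifying the caller's data.
    aug = [list(map(float, A[i])) + [float(c[i])] for i in range(n)]
    for k in range(n - 1):
        # Partial pivoting: put the largest available coefficient on the diagonal.
        p = max(range(k, n), key=lambda i: abs(aug[i][k]))
        aug[k], aug[p] = aug[p], aug[k]
        for i in range(k + 1, n):
            f = aug[i][k] / aug[k][k]
            for j in range(k, n + 1):
                aug[i][j] -= f * aug[k][j]
    # Back substitution.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(aug[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (aug[i][n] - s) / aug[i][i]
    return x

# The example system: the zero in the a11 position forces the row swap.
A = [[0, 3, -13], [2, -6, 1], [4, 0, 8]]
c = [-50, 45, 4]
x = gauss_solve(A, c)
print([round(v, 3) for v in x])  # [-2.931, -8.149, 1.966]
```

Run on the example system, the row swap triggered by the zero pivot reproduces the hand-worked solution above.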
Matrix Inversion with Gauss-Jordan Method
• Gauss-Jordan (like Gauss elimination, a direct solution)
  – the primary motive for introducing this method is that it provides a simple and convenient means of computing the matrix inverse
• Gauss-Seidel (Ch. 11)
  – fundamentally different from Gauss elimination
  – an approximate, iterative method
  – particularly good for a large number of equations
Gauss-Jordan
• Variation of Gauss elimination
• When an unknown is eliminated, it is eliminated from all other equations, rather than just the subsequent ones
• All rows are normalized by dividing them by their pivot elements
• The elimination step results in an identity matrix rather than an upper triangular (UT) matrix
Gauss elimination produces an upper triangular matrix:

  [A] → [ a11 a12 a13 ]
        [  0  a22 a23 ]
        [  0   0  a33 ]

Gauss-Jordan produces an identity matrix:

  [A] → [ 1 0 0 0 ]
        [ 0 1 0 0 ]
        [ 0 0 1 0 ]
        [ 0 0 0 1 ]

Graphical depiction of Gauss-Jordan:

  [ a11 a12 a13 | c1 ]        [ 1 0 0 | c1(n) ]
  [ a21 a22 a23 | c2 ]   →    [ 0 1 0 | c2(n) ]
  [ a31 a32 a33 | c3 ]        [ 0 0 1 | c3(n) ]

so that x1 = c1(n), x2 = c2(n), x3 = c3(n); no back substitution is required.
Matrix Inversion
• [A][A]^-1 = [A]^-1[A] = I
• One application of the inverse is to solve several systems differing only by {c}
  – [A]{x} = {c}
  – [A]^-1[A]{x} = [A]^-1{c}
  – [I]{x} = {x} = [A]^-1{c}
• One quick method to compute the inverse is to augment [A] with [I] instead of {c}
Graphical Depiction of the Gauss-Jordan Method with Matrix Inversion

  [ a11 a12 a13 | 1 0 0 ]        [ 1 0 0 | a11^-1 a12^-1 a13^-1 ]
  [ a21 a22 a23 | 0 1 0 ]   →    [ 0 1 0 | a21^-1 a22^-1 a23^-1 ]
  [ a31 a32 a33 | 0 0 1 ]        [ 0 0 1 | a31^-1 a32^-1 a33^-1 ]

         [A | I]           →            [I | A^-1]

Note: the superscript "-1" denotes that the original values have been converted to the matrix inverse, not 1/aij.
EXAMPLE

Use the Gauss-Jordan method to solve the following problem. Note: you can also use this solution to practice the Gauss elimination method.

  4x1 + 5x2 - 6x3 = 28
  2x1 - 7x3 = 29
  -5x1 - 5x2 = -65
SOLUTION
First, write the coefficients and the right-hand-side vector as an augmented matrix:

  [  4   5  -6 |  28 ]
  [  2   0  -7 |  29 ]
  [ -5  -5   0 | -65 ]

Normalize the first row by dividing by the pivot element 4 (example: -6/4 = -1.5):

  [  1  1.25  -1.5 |   7 ]
  [  2   0    -7   |  29 ]
  [ -5  -5     0   | -65 ]

The x1 term in the second row can be eliminated by subtracting 2 times the first row from the second row. In other words, we want that term to be zero, and all values in the row change: 2 - (2)(1) = 0, then 0 - (2)(1.25) = -2.5, -7 - (2)(-1.5) = -4, and 29 - (2)(7) = 15. The x1 term in the third row is eliminated the same way, subtracting -5 times the first row from the third row:

  [ 1  1.25  -1.5 |   7 ]
  [ 0 -2.5   -4   |  15 ]
  [ 0  1.25  -7.5 | -30 ]

Now we need to remove the x2 term from the 1st and 3rd equations. Normalize the second row by a22 = -2.5 (e.g. -2.5/(-2.5) = 1 and 15/(-2.5) = -6):

  [ 1  1.25  -1.5 |   7 ]
  [ 0  1      1.6 |  -6 ]
  [ 0  1.25  -7.5 | -30 ]

Before going any further, calculate the new coefficient for a13: it is -1.5 - (1.25)(1.6) = -3.5, where 1.25 is the element needed to establish a zero at a12. Following the same scheme for c1 and for the third row:

  [ 1  0  -3.5 |  14.5 ]
  [ 0  1   1.6 |  -6   ]
  [ 0  0  -9.5 | -22.5 ]

Finally, normalizing the third row by -9.5 and removing the x3 terms from the 1st and 2nd equations completes the identity matrix:

  [ 1  0  0 |  22.79 ]
  [ 0  1  0 |  -9.79 ]
  [ 0  0  1 |   2.37 ]

so x1 = 22.79, x2 = -9.79, x3 = 2.37.

......end of problem
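The eliminate-from-all-rows pattern of Gauss-Jordan is compact in code. A minimal sketch (the function name is chosen here; there is no pivoting, so it assumes nonzero pivots, which holds for this example):

```python
def gauss_jordan(aug):
    """Reduce an augmented matrix [A | c] to [I | x] and return x."""
    n = len(aug)
    for k in range(n):
        piv = aug[k][k]
        aug[k] = [v / piv for v in aug[k]]          # normalize the pivot row
        for i in range(n):
            if i != k:                              # eliminate from ALL other rows
                f = aug[i][k]
                aug[i] = [a - f * b for a, b in zip(aug[i], aug[k])]
    return [row[-1] for row in aug]                 # no back substitution needed

aug = [[4, 5, -6, 28], [2, 0, -7, 29], [-5, -5, 0, -65]]
x = gauss_jordan(aug)
print([round(v, 2) for v in x])  # [22.79, -9.79, 2.37]
```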
LU Decomposition Methods
Chapter 10
• Two fundamentally different approaches for solving
systems of linear algebraic equations
• Elimination methods
– Gauss elimination
– Gauss Jordan
– LU Decomposition Methods
• Iterative methods
– Gauss Seidel
– Jacobi

Naive LU Decomposition
• [A]{x}={c}
• Suppose this can be rearranged as an upper
triangular matrix with 1’s on the diagonal
• [U]{x}={d}
• [A]{x}-{c}=0 [U]{x}-{d}=0
• Assume that a lower triangular matrix exists
that has the property
 [L]{[U]{x}-{d}}= [A]{x}-{c}
Naive LU Decomposition
• [L]{[U]{x}-{d}}= [A]{x}-{c}
• Then from the rules of matrix multiplication
• [L][U]=[A]
• [L]{d}={c}
• [L][U]=[A] is referred to as the LU
decomposition of [A]. After it is
accomplished, solutions can be obtained
very efficiently by a two-step substitution
procedure
Consider how Gauss elimination can be formulated as an LU decomposition.

[U] is a direct product of the forward-elimination step if each row is scaled by its diagonal element:

  [U] = [ 1  a12  a13 ]
        [ 0   1   a23 ]
        [ 0   0    1  ]
Although not as apparent, the matrix [L] is also
produced during the step. This can be readily
illustrated for a three-equation system
  [ a11 a12 a13 ] { x1 }   { c1 }
  [ a21 a22 a23 ] { x2 } = { c2 }
  [ a31 a32 a33 ] { x3 }   { c3 }

The first step is to multiply row 1 by the factor

  f21 = a21 / a11

Subtracting the result from the second row eliminates a21.

Similarly, row 1 is multiplied by

  f31 = a31 / a11

and the result is subtracted from the third row to eliminate a31.

The final step for a 3 x 3 system is to multiply the modified second row by

  f32 = a'32 / a'22

and subtract the result from the third row to eliminate a'32.
The values f21, f31, f32 are in fact the elements of an [L] matrix:

  [L] = [  1    0   0 ]
        [ f21   1   0 ]
        [ f31  f32  1 ]
CONSIDER HOW THIS RELATES TO THE LU DECOMPOSITION METHOD TO SOLVE FOR {x}

  [A]{x} = {c}   --decompose-->             [L][U]
  [L]{d} = {c}   --forward substitution-->  {d}
  [U]{x} = {d}   --back substitution-->     {x}
Crout Decomposition
• Gauss elimination method involves two
major steps
– forward elimination
– back substitution
• Efforts in improvement focused on
development of improved elimination
methods
• One such method is Crout decomposition
Crout Decomposition
Represents an efficient algorithm for decomposing [A] into [L] and [U]
  [ l11   0    0  ] [ 1  u12  u13 ]   [ a11 a12 a13 ]
  [ l21  l22   0  ] [ 0   1   u23 ] = [ a21 a22 a23 ]
  [ l31  l32  l33 ] [ 0   0    1  ]   [ a31 a32 a33 ]

Recall the rules of matrix multiplication.

The first step is to multiply the rows of [L] by the first column of [U]:

  a11 = (l11)(1) + (u12)(0) + (u13)(0) = l11
  a21 = l21
  a31 = l31

Thus the first column of [A] is the first column of [L].

Next we multiply the first row of [L] by the columns of [U] to get

  l11 = a11
  l11 u12 = a12   →   u12 = a12 / a11
  l11 u13 = a13   →   u13 = a13 / a11

Once the first row of [U] is established, the operation can be represented concisely:

  u1j = a1j / a11    for j = 2, 3, ..., n

Schematic depicting Crout decomposition:

  li1 = ai1                                          for i = 1, 2, ..., n
  u1j = a1j / a11                                    for j = 2, 3, ..., n
  For j = 2, 3, ..., n-1:
    lij = aij - Σ(k=1..j-1) lik ukj                  for i = j, j+1, ..., n
    ujk = ( ajk - Σ(i=1..j-1) lji uik ) / ljj        for k = j+1, j+2, ..., n
  lnn = ann - Σ(k=1..n-1) lnk ukn
The Substitution Step
• [L]{[U]{x}-{d}}= [A]{x}-{c}
• [L][U]=[A]
• [L]{d}={c}
• [U]{x}={d}
• Recall our earlier graphical depiction of the
LU decomposition method
  [A]{x} = {c}   --decompose-->             [L][U]
  [L]{d} = {c}   --forward substitution-->  {d}
  [U]{x} = {d}   --back substitution-->     {x}
Forward substitution on [L]{d} = {c}:

  d1 = c1 / l11
  di = ( ci - Σ(j=1..i-1) lij dj ) / lii    for i = 2, 3, ..., n

Back substitution (recall [U]{x} = {d}, with 1's on the diagonal of [U]):

  xn = dn
  xi = di - Σ(j=i+1..n) uij xj    for i = n-1, n-2, ..., 1
Example
Use LU decomposition to solve the following system:

  [ 1 3 2 ] { x1 }   { 15 }
  [ 2 4 3 ] { x2 } = { 22 }
  [ 3 4 7 ] { x3 }   { 39 }

First column of [L] (li1 = ai1 for i = 1, 2, ..., n):

  l11 = 1    l21 = 2    l31 = 3

First row of [U] (u1j = a1j / a11 for j = 2, 3, ..., n):

  u12 = 3/1 = 3    u13 = 2/1 = 2

Remaining elements, from the general Crout formulas:

  l22 = a22 - l21 u12 = 4 - (2)(3) = -2
  l32 = a32 - l31 u12 = 4 - (3)(3) = -5
  u23 = ( a23 - l21 u13 ) / l22 = ( 3 - (2)(2) ) / (-2) = 0.5
  l33 = a33 - ( l31 u13 + l32 u23 ) = 7 - (3)(2) - (-5)(0.5) = 3.5

Therefore the L and U matrices are

  [L] = [ 1   0   0  ]      [U] = [ 1  3   2  ]
        [ 2  -2   0  ]            [ 0  1  0.5 ]
        [ 3  -5  3.5 ]            [ 0  0   1  ]

Recall the original column matrix {c} = {15, 22, 39}. Solving [L]{d} = {c} by forward substitution:

  d1 = c1 / l11 = 15/1 = 15
  d2 = ( c2 - l21 d1 ) / l22 = ( 22 - 2(15) ) / (-2) = 4
  d3 = ( c3 - l31 d1 - l32 d2 ) / l33 = ( 39 - 3(15) + 5(4) ) / 3.5 = 4

Back substitution:

  x3 = d3 = 4
  x2 = d2 - u23 x3 = 4 - 0.5(4) = 2
  x1 = d1 - u12 x2 - u13 x3 = 15 - 3(2) - 2(4) = 1

...end of example
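The general Crout formulas and the two-step substitution can be sketched as follows. The names `crout` and `lu_solve` are illustrative choices, and the code assumes no pivoting is needed (true for this example):

```python
def crout(A):
    """Crout decomposition: A = L U, with 1's on the diagonal of U."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[float(i == j) for j in range(n)] for i in range(n)]
    for j in range(n):
        for i in range(j, n):       # column j of L
            L[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(j))
        for k in range(j + 1, n):   # row j of U
            U[j][k] = (A[j][k] - sum(L[j][i] * U[i][k] for i in range(j))) / L[j][j]
    return L, U

def lu_solve(L, U, c):
    """Forward substitution [L]{d} = {c}, then back substitution [U]{x} = {d}."""
    n = len(c)
    d = [0.0] * n
    for i in range(n):
        d[i] = (c[i] - sum(L[i][j] * d[j] for j in range(i))) / L[i][i]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = d[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))
    return x

A = [[1, 3, 2], [2, 4, 3], [3, 4, 7]]
L, U = crout(A)
x = lu_solve(L, U, [15, 22, 39])
print(x)  # [1.0, 2.0, 4.0]
```

Once [L] and [U] are stored, additional right-hand sides cost only the two substitution sweeps, which is the efficiency argument made above.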
Matrix Inversion (recall)
• [A][A]^-1 = [A]^-1[A] = I
• One application of the inverse is to solve several systems differing only by {c}: since [A]^-1[A]{x} = [A]^-1{c}, we get {x} = [A]^-1{c}
• One quick method to compute the inverse is to augment [A] with [I] instead of {c}:

  [ a11 a12 a13 | 1 0 0 ]        [ 1 0 0 | a11^-1 a12^-1 a13^-1 ]
  [ a21 a22 a23 | 0 1 0 ]   →    [ 0 1 0 | a21^-1 a22^-1 a23^-1 ]
  [ a31 a32 a33 | 0 0 1 ]        [ 0 0 1 | a31^-1 a32^-1 a33^-1 ]

Note: the superscript "-1" denotes that the original values have been converted to the matrix inverse, not 1/aij.
Matrix Inversion with LU Decomposition Method

Use the columns of [I] as successive right-hand sides:

  [ l11   0    0  ] { d1(1) }   { 1 }
  [ l21  l22   0  ] { d2(1) } = { 0 }
  [ l31  l32  l33 ] { d3(1) }   { 0 }

  [ l11   0    0  ] { d1(2) }   { 0 }
  [ l21  l22   0  ] { d2(2) } = { 1 }
  [ l31  l32  l33 ] { d3(2) }   { 0 }

  [ l11   0    0  ] { d1(3) }   { 0 }
  [ l21  l22   0  ] { d2(3) } = { 0 }
  [ l31  l32  l33 ] { d3(3) }   { 1 }

Solving for the unknowns using {d}(1) gives the first column of [A]^-1, and so forth for {d}(2) through {d}(n).
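The column-by-column idea can be sketched in Python. As a simplification (an assumption of this sketch, not the slide's exact procedure), each unit-vector right-hand side is solved with a small Gauss-elimination routine rather than with stored [L] and [U] factors; the structure — one column of [I] in, one column of [A]^-1 out — is the same.

```python
def solve(A, c):
    """Solve A x = c by Gauss elimination with partial pivoting."""
    n = len(A)
    aug = [list(map(float, A[i])) + [float(c[i])] for i in range(n)]
    for k in range(n - 1):
        p = max(range(k, n), key=lambda i: abs(aug[i][k]))
        aug[k], aug[p] = aug[p], aug[k]
        for i in range(k + 1, n):
            f = aug[i][k] / aug[k][k]
            for j in range(k, n + 1):
                aug[i][j] -= f * aug[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(aug[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (aug[i][n] - s) / aug[i][i]
    return x

def inverse(A):
    """Build A^-1 column by column: the j-th column solves A x = e_j."""
    n = len(A)
    cols = [solve(A, [1.0 if i == j else 0.0 for i in range(n)]) for j in range(n)]
    return [[cols[j][i] for j in range(n)] for i in range(n)]

A = [[1, 3, 2], [2, 4, 3], [3, 4, 7]]
Ainv = inverse(A)
```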

Chapter 11: Iterative Methods
• Solution by Gauss-Seidel Iteration
• Solution by Jacobi Iteration

Gauss Seidel Method
• An iterative approach
• Continue until we converge within some
pre-specified tolerance of error
• Round off is no longer an issue, since you
control the level of error that is acceptable
Gauss-Seidel Method
• If the diagonal elements are all nonzero, the first equation can be solved for x1
• Solve the second equation for x2, etc.

  x1 = ( c1 - a12 x2 - a13 x3 - ... - a1n xn ) / a11
To assure that you understand this, write the equation for x2.

  x1 = ( c1 - a12 x2 - a13 x3 - ... - a1n xn ) / a11
  x2 = ( c2 - a21 x1 - a23 x3 - ... - a2n xn ) / a22
  x3 = ( c3 - a31 x1 - a32 x2 - ... - a3n xn ) / a33
  ...
  xn = ( cn - an1 x1 - an2 x2 - ... - an,n-1 xn-1 ) / ann
Gauss-Seidel Method
• Start the solution process by guessing
values of x
• A simple way to obtain initial guesses is to
assume that they are all zero
• Calculate new values of xi, starting with x1 = c1/a11
• Progressively substitute through the equations
• Repeat until the tolerance is reached
Gauss-Seidel Method for a 3-equation system:

  x1 = ( c1 - a12 x2 - a13 x3 ) / a11
  x2 = ( c2 - a21 x1 - a23 x3 ) / a22
  x3 = ( c3 - a31 x1 - a32 x2 ) / a33

First pass, with zeros as the initial guesses:

  x1' = ( c1 - a12(0) - a13(0) ) / a11 = c1 / a11
  x2' = ( c2 - a21 x1' - a23(0) ) / a22
  x3' = ( c3 - a31 x1' - a32 x2' ) / a33
Example
Given the following augmented matrix, complete one iteration of the Gauss-Seidel method.

  [ 2  3  -1 |  2 ]
  [ 4  1   2 | -2 ]
  [ 3  2   1 |  1 ]

  x1 = ( c1 - a12(0) - a13(0) ) / a11 = 2/2 = 1
  x2 = ( c2 - a21 x1' - a23(0) ) / a22 = ( -2 - 4(1) - 2(0) ) / 1 = -6
  x3 = ( c3 - a31 x1' - a32 x2' ) / a33 = ( 1 - 3(1) - 2(-6) ) / 1 = 10

GAUSS SEIDEL
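One Gauss-Seidel sweep, using each new value immediately, can be sketched as below. The augmented matrix is the one from the example above; note that it is not diagonally dominant, so repeated sweeps would diverge — only the single requested iteration is shown.

```python
def gauss_seidel_sweep(A, c, x):
    """One Gauss-Seidel sweep: each new x[i] is used immediately after it is computed."""
    n = len(A)
    for i in range(n):
        s = sum(A[i][j] * x[j] for j in range(n) if j != i)
        x[i] = (c[i] - s) / A[i][i]
    return x

A = [[2, 3, -1], [4, 1, 2], [3, 2, 1]]
c = [2, -2, 1]
x = gauss_seidel_sweep(A, c, [0.0, 0.0, 0.0])
print(x)  # [1.0, -6.0, 10.0] after the first iteration
```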
Gauss-Seidel Method
convergence criterion:

  εa,i = | ( xi^j - xi^(j-1) ) / xi^j | · 100% < εs
as in previous iterative procedures in finding the roots,
we consider the present and previous estimates.

As with the open methods we studied previously with one
point iterations

1. The method can diverge
2. May converge very slowly
Convergence criteria for two linear equations

  u(x1, x2) = c1/a11 - (a12/a11) x2
  v(x1, x2) = c2/a22 - (a21/a22) x1

Consider the partial derivatives of u and v:

  ∂u/∂x1 = 0            ∂u/∂x2 = -a12/a11
  ∂v/∂x1 = -a21/a22     ∂v/∂x2 = 0

Class question: where do these formulas come from?
Convergence criteria for two linear equations cont.

  |∂u/∂x| + |∂v/∂x| < 1
  |∂u/∂y| + |∂v/∂y| < 1

Criteria for convergence in the class text material for nonlinear equations.

Noting that x = x1 and y = x2, substitute the previous equations:
Convergence criteria for two linear equations cont.

  | a21 / a22 | < 1      | a12 / a11 | < 1

This states that the absolute values of the slopes must be less than unity to ensure convergence.

Extended to n equations:

  | aii | > Σ | aij |    where j = 1, n, excluding j = i
Convergence criteria for two linear equations cont.

  | aii | > Σ | aij |    where j = 1, n, excluding j = i

This condition is sufficient but not necessary for convergence.

When met, the matrix is said to be diagonally dominant.
Review the concepts of divergence and convergence by graphically illustrating Gauss-Seidel for two linear equations:

  u: 11x1 + 13x2 = 286
  v: 11x1 - 9x2 = 99

Note: the initial guess is x1 = x2 = 0. Alternately solve the 1st equation (u) for x1 and the 2nd equation (v) for x2. Plotted in the (x1, x2) plane, each step moves from one line to the other, and the path spirals inward: we are converging on the solution.

Now change the order of the equations, i.e. change the direction of the initial estimates:

  v: 11x1 - 9x2 = 99
  u: 11x1 + 13x2 = 286

Solving the 2nd equation for x2 and the 1st equation for x1 as before, the same graphical construction now steps farther from the intersection at every pass: this solution is diverging!
Improvement of Convergence Using Relaxation

This is a modification that will enhance slow convergence. After each new value of x is computed, calculate a new value based on a weighted average of the present and previous iterations:

  xi_new = λ xi_new + (1 - λ) xi_old

• if λ = 1, unmodified
• if 0 < λ < 1, underrelaxation
  – nonconvergent systems may converge
  – may hasten convergence by damping out oscillations
• if 1 < λ < 2, overrelaxation
  – extra weight is placed on the present value
  – assumes the new value is moving toward the correct solution, but too slowly
Jacobi Iteration
• Iterative, like Gauss-Seidel
• Gauss-Seidel immediately uses each new value of xi in the equations that follow within the same sweep
• Jacobi computes all new xi values from the previous iteration's complete set before any are updated
Graphical depiction of the difference between Gauss-Seidel and Jacobi:

Gauss-Seidel

  FIRST ITERATION
    x1 = ( c1 - a12 x2 - a13 x3 ) / a11
    x2 = ( c2 - a21 x1 - a23 x3 ) / a22    (uses the new x1)
    x3 = ( c3 - a31 x1 - a32 x2 ) / a33    (uses the new x1 and x2)
  SECOND ITERATION: same equations, always substituting the latest available values.

Jacobi

  FIRST ITERATION
    x1 = ( c1 - a12 x2 - a13 x3 ) / a11
    x2 = ( c2 - a21 x1 - a23 x3 ) / a22    (uses the old x1)
    x3 = ( c3 - a31 x1 - a32 x2 ) / a33    (uses the old x1 and x2)
  SECOND ITERATION: same equations, substituting only values from the completed previous iteration.
RECALL GAUSS SEIDEL

  [ 2  3  -1 |  2 ]
  [ 4  1   2 | -2 ]
  [ 3  2   1 |  1 ]

  x1 = ( c1 - a12(0) - a13(0) ) / a11 = 2/2 = 1
  x2 = ( c2 - a21 x1' - a23(0) ) / a22 = ( -2 - 4(1) - 2(0) ) / 1 = -6
  x3 = ( c3 - a31 x1' - a32 x2' ) / a33 = ( 1 - 3(1) - 2(-6) ) / 1 = 10
JACOBI

  [ 2  3  -1 |  2 ]
  [ 4  1   2 | -2 ]
  [ 3  2   1 |  1 ]

For Jacobi, every unknown on the right-hand side keeps its previous (here zero) value:

  x1 = ( c1 - a12(0) - a13(0) ) / a11 = 2/2 = 1
  x2 = ( c2 - a21(0) - a23(0) ) / a22 = -2/1 = -2
  x3 = ( c3 - a31(0) - a32(0) ) / a33 = 1/1 = 1

... end of problem
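The contrast with Gauss-Seidel amounts to one line of code: a Jacobi sweep reads only the previous vector. A sketch on the same example matrix as above:

```python
def jacobi_sweep(A, c, x_old):
    """One Jacobi sweep: every right-hand-side value comes from the previous vector."""
    n = len(A)
    return [(c[i] - sum(A[i][j] * x_old[j] for j in range(n) if j != i)) / A[i][i]
            for i in range(n)]

A = [[2, 3, -1], [4, 1, 2], [3, 2, 1]]
c = [2, -2, 1]
x = jacobi_sweep(A, c, [0.0, 0.0, 0.0])
print(x)  # [1.0, -2.0, 1.0] -- compare Gauss-Seidel's first sweep [1.0, -6.0, 10.0]
```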
Optimization Methods
Overview
• Unconstrained Searches
▀ 1-D (one independent variable)
▲Golden Search
▲Newton’s Method
▀ 2-D (two or more independent variables)
▲Steepest Ascent (Descent)
▲Newton and Quasi-Newton Methods
• Constrained Optimization
▀ Linear Programming

Optimization Uses in Engineering
● Maximizing Strength/Weight
● Minimizing Cost or Maximizing Profit
● Minimizing Wait Time
● Maximizing Efficiency
● Minimizing Virtual Work
1-D Searches
• Objective is to maximize a function → the Objective Function
• 1-D searches have objective functions of one independent variable → f(x)
• Analogy to root finding:
  ► Bracketed Method → Interval Search or Limited Search
  ► Open Method → Unlimited or Free Search
Golden Section Search
• Define the search interval as [xl, xu]
• Define the search interval length as ℓ0 = |xu - xl|
• Divide ℓ0 into ℓ1 and ℓ2 such that ℓ0 = ℓ1 + ℓ2
• Also make sure that the ratio of the first sub-length to the original length is the same as the ratio of the second sub-length to the first:

  ℓ1/ℓ0 = ℓ2/ℓ1    or    ℓ1/(ℓ1 + ℓ2) = ℓ2/ℓ1

• Defining ℓ2/ℓ1 as R and taking the reciprocal of the above equation:

  1 + R = 1/R   →   R² + R = 1   →   R = (√5 - 1)/2 ≈ 0.618
Golden Section Search, Continued
• Once subdivided, we now evaluate values of f(x)
• Since the designations ℓ1 and ℓ2 are arbitrary, test f(xl + ℓ1) and f(xu - ℓ1) to determine the new search interval
• So let x1 = (xl + d) and x2 = (xu - d), where d = ℓ1
• Then if f(x1) > f(x2) → the maximum must lie in the interval [xu - d, xu] → call xu - d (= x2) the new xl (Case 1)
  otherwise → call xl + d (= x1) the new xu (Case 2)
• With a little thinking you can convince yourself that for Case 1, x2 for the new search interval is x1 from the previous one, because ℓ0/ℓ1 = ℓ1/ℓ2 (hint: d = R ℓ0). And for Case 2, the old x2 becomes the new x1.

(Figure: two plots of f(x) on [xl, xu] — the first step, with x2 = xu - d and x1 = xl + d, and the second step, where the old interior points carry over into the shrunken interval with the new distance d'.)
Text Example 13.1
● Use the golden-section search to find the max of f(x) = 2 sin x - x²/10 on [0, 4]
● First step: calculate d = R ℓ0 = 0.61803 × 4 = 2.4721
● Second step: calculate x2 = 4 - 2.4721 = 1.5279 and x1 = 0 + 2.4721 = 2.4721
● Third step: calculate f(x2) = 1.7647 and f(x1) = 0.6300
● Now, because f(x1) is not > f(x2), set xu to 2.4721 and set the new x1 = 1.5279
Graphically: Text Example 13.1
(Figure: plot of f(x) on [0, 4] with ℓ0 = 4, marking x2 and x1 and the values f = 1.7647 and f = 0.6300.)
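Iterating the update rule to convergence can be sketched as below. For clarity this sketch re-evaluates f at both interior points each pass, instead of reusing the old x1 or x2 as described above; `golden_max` is a name chosen here.

```python
import math

def golden_max(f, xl, xu, tol=1e-6):
    """Golden-section search for a maximum of f on [xl, xu]."""
    R = (math.sqrt(5) - 1) / 2              # the golden ratio, ≈ 0.61803
    while xu - xl > tol:
        d = R * (xu - xl)
        x1, x2 = xl + d, xu - d
        if f(x1) > f(x2):
            xl = x2                         # maximum lies in [x2, xu]
        else:
            xu = x1                         # maximum lies in [xl, x1]
    return (xl + xu) / 2

f = lambda x: 2 * math.sin(x) - x**2 / 10
xmax = golden_max(f, 0.0, 4.0)
print(xmax, f(xmax))  # ≈ 1.4276 and ≈ 1.7757
```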
Newton’s Method for Optimization in 1-D
• Basis: Maximum (or minimum) of f(x) will occur where
f ’(x) = 0
• If we call f ’(x) by the name g(x) and apply Newton-
Raphson to find where g(x) = 0
  x_{i+1} = x_i - g(x_i) / g'(x_i)
      or
  x_{i+1} = x_i - f'(x_i) / f''(x_i)
• Method converges to nearest max or min (local max/min)
• Method can “blow up” near points where f ”(x) = 0
if f ’(x) is not going to 0 as fast (or at all) as f ”(x)
(Figure: f(x) and its derivative g(x) = f'(x) on [0, 2], showing that the maximum of f coincides with the zero crossing of g.)
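Applying Newton-Raphson to f'(x), as described above, can be sketched as follows. The name `newton_opt` and the starting point x = 2.5 are choices made here; the objective is the same f(x) = 2 sin x - x²/10 used for the golden-section example.

```python
import math

def newton_opt(df, d2f, x, tol=1e-8, max_it=50):
    """Newton's 1-D optimization: x_{i+1} = x_i - f'(x_i)/f''(x_i)."""
    for _ in range(max_it):
        step = df(x) / d2f(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# f(x) = 2 sin x - x^2/10  ->  f'(x) = 2 cos x - x/5,  f''(x) = -2 sin x - 0.2
df = lambda x: 2 * math.cos(x) - x / 5
d2f = lambda x: -2 * math.sin(x) - 0.2
xmax = newton_opt(df, d2f, 2.5)
print(xmax)  # ≈ 1.4276, the same maximum located by the golden-section search
```

Note the caveat above in action: if f''(x) passes near zero along the way, the step can "blow up", which is why the iteration cap is kept.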
2-D Search Methods
• The multi-D equivalent of f' is the gradient:

  ∇f = [ ∂f/∂x1 i, ∂f/∂x2 j, ∂f/∂x3 k, ... ]^T

• All 2-D methods use ∇f
• We can conceptualize the gradient as the topographic elevation slope in x-y-z space:
  x1 → x,  x2 → y,  f → z
3D Elevation Plot
(Figure: 3-D elevation surface of f over the (x1, x2) plane.)
3-D 'Elevations' → Contour Plot
(Figure: contour graph of f(x, y); ∇f points uphill, normal to the contour lines.)
Steepest Ascent/Descent
• Follow the ∇f = g direction (-g for descent)
• How "far" should we go in this direction?
• Call the magnitude of the step h
• Now maximize f along the g direction as a function of h → called a line search
• Step:  x1^{i+1} = x1^i + g1 h    and    x2^{i+1} = x2^i + g2 h
Text Example 14.4
• Search for the max of f = 2 x1 x2 + 2 x1 - x1² - 2 x2² from the point x1 = -1 and x2 = 1
• Calculate g (= 6 i - 6 j), because
  g1 = 2 x2 + 2 - 2 x1 = 2(1) + 2 - 2(-1) = 6
  g2 = 2 x1 - 4 x2 = 2(-1) - 4(1) = -6
Text Example 14.4, Continued
• Now search for the max of f(x1¹, x2¹) subject to
  x1¹ = x1⁰ + h g1 = -1 + 6h    and    x2¹ = x2⁰ + h g2 = 1 - 6h
• f(x1¹, x2¹) = -180 h² + 72 h - 7 = g(h)
• Max {g(h)} → g'(h) = 0 → -360 h + 72 = 0 → h = 0.2
• Now stepping along g (= 6 i - 6 j) gives
  x1^{i+1} = x1^i + g1 h = -1 + (6)(0.2) = 0.2
  x2^{i+1} = x2^i + g2 h = 1 + (-6)(0.2) = -0.2

Graphical Depiction of Steepest Ascent
Steps in Example 14.4
Contour Graph of f (x,y)
X
-2 -1 0 1 2 3 4 5
Y
-1
0
1
2
3
-5
-5
-5
-10
-10
-15
-20
0
0
0
0
0
0
0
-5
-5
-5
-5
-10
-10
-10
-15
-15
-20
-20
-25
-30
-35
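The gradient step plus line search of Example 14.4 can be sketched numerically. As an assumption of this sketch, the line search maximizes along the gradient over a fine grid of h values, rather than solving g'(h) = 0 analytically as the text does:

```python
def f(x1, x2):
    """Objective of text example 14.4."""
    return 2 * x1 * x2 + 2 * x1 - x1**2 - 2 * x2**2

def grad(x1, x2):
    """Analytic gradient of f."""
    return (2 * x2 + 2 - 2 * x1, 2 * x1 - 4 * x2)

x1, x2 = -1.0, 1.0
g1, g2 = grad(x1, x2)                   # (6.0, -6.0), as computed above

# Grid line search along g; the text instead maximizes
# g(h) = -180 h^2 + 72 h - 7 analytically, giving h = 0.2.
hs = [i / 10000 for i in range(1, 5001)]
h = max(hs, key=lambda t: f(x1 + t * g1, x2 + t * g2))

x1, x2 = x1 + h * g1, x2 + h * g2
print(h, round(x1, 3), round(x2, 3))  # 0.2 0.2 -0.2
```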
2-D Newton Method
• The 2-D version of Newton maximization: gradient → 0, or ∇f = 0 → g1 = g2 = 0
• Applying multi-variate Newton-Raphson:

  [ ∂g1/∂x1  ∂g1/∂x2 ] { Δx1 }      { g1 }
  [ ∂g2/∂x1  ∂g2/∂x2 ] { Δx2 }  = - { g2 }

  and

  { x1 }          { x1 }      { Δx1 }
  { x2 }_{i+1}  = { x2 }_i  + { Δx2 }
• The first matrix is the Jacobian of the gradient, AKA the Hessian:

  [H] = [ ∂g1/∂x1  ∂g1/∂x2 ]  =  [ ∂²f/∂x1²     ∂²f/∂x1∂x2 ]
        [ ∂g2/∂x1  ∂g2/∂x2 ]     [ ∂²f/∂x2∂x1   ∂²f/∂x2²   ]
2-D Newton Method
• The text describes Newton's equation as

  [ H11 H12 ] { Δx1 }     { g1 }
  [ H21 H22 ] { Δx2 }  +  { g2 }  =  0

• The system is then solved by matrix inversion:

  { x1 }          { x1 }      [ H11 H12 ]^-1 { g1 }
  { x2 }_{i+1}  = { x2 }_i  - [ H21 H22 ]    { g2 }
Newton Method for Text Example 14.4
• Search for the max of f(x1, x2) = 2 x1 x2 + 2 x1 - x1² - 2 x2² from the point x1 = -1 and x2 = 1
• g1 = 2 x2 + 2 - 2 x1 = 2(1) + 2 - 2(-1) = 6
• g2 = 2 x1 - 4 x2 = 2(-1) - 4(1) = -6
• H11 = -2, H12 = 2, H21 = 2, H22 = -4
• Solving

  [ -2   2 ] { Δx1 }      {  6 }             { Δx1 }   { 3 }
  [  2  -4 ] { Δx2 }  = - { -6 }   we get    { Δx2 } = { 0 }

• So x1 = 2, x2 = 1 → f = 2 → Maximum (Why?)
Graphical Depiction of Newton Step
(Figure: contour graph of f(x, y) with the single Newton step from (-1, 1) directly to the maximum at (2, 1).)
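The single Newton step of this example can be sketched directly. Solving the 2×2 system [H]{Δx} = -{g} by Cramer's rule is an implementation choice made here; the text phrases the same step as matrix inversion.

```python
def newton_2d_step(x1, x2):
    """One 2-D Newton step for f = 2*x1*x2 + 2*x1 - x1**2 - 2*x2**2."""
    g1 = 2 * x2 + 2 - 2 * x1                    # gradient components
    g2 = 2 * x1 - 4 * x2
    h11, h12, h21, h22 = -2.0, 2.0, 2.0, -4.0   # Hessian (constant for this quadratic)
    det = h11 * h22 - h12 * h21                 # = 4
    dx1 = (-g1 * h22 + g2 * h12) / det          # Cramer's rule on [H]{dx} = -{g}
    dx2 = (-g2 * h11 + g1 * h21) / det
    return x1 + dx1, x2 + dx2

x1, x2 = newton_2d_step(-1.0, 1.0)
print(x1, x2)  # 2.0 1.0 -- the maximum, reached in a single step
```

Because f is quadratic, its Hessian is constant and one Newton step lands exactly on the optimum, which is why the figure shows a single jump.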
Constrained Optimization
• Search for maximum is limited by constraints
• Problem related constraints (resource limits)
+Plant operates no more than 80 hours per week
+Raw materials can not be purchased for less than $30/ton
• Feasibility constraints
+Efficiency can not be greater than unity
+Cost can not be negative
+Mass can not be negative
Linear Programming
• The multi-D objective function Z(x1, x2, x3, ...) is a linear function of x1, x2, x3, ...
• Constraints must be formulated as linear inequality statements in terms of x1, x2, x3, ...
• Multi-D problem statement:

  Z = c1 x1 + c2 x2 + ... + cn xn
  ai1 x1 + ai2 x2 + ... + ain xn ≤ bi    (i = 1, m)
Text Example
• Maximize profit (Z) in the production of two products (x1, x2)
• 2-D problem:

  Z = 150 x1 + 175 x2

  7 x1 + 11 x2 ≤ 77    (1)
  10 x1 + 8 x2 ≤ 80    (2)
  x1 ≤ 9               (3)
  x2 ≤ 6               (4)  (m = 4)
  x1 ≥ 0               (5)  (Note that these are
  x2 ≥ 0               (6)   not ≤ constraints)
Graphical Depiction of Constraints
(Figure: constraint lines (1)-(6) bounding the feasible region in the (x1, x2) plane.)

Graphical Depiction of Z(x1, x2) Subject to Constraints
(Figure: Z contours at Z = 0, 600, and 1400 overlaid on the feasible region; each contour is the line

  x2 = -(150/175) x1 + Z/175 )
Things to Note from Graphical Solution
• Region of possible solutions which meet all constraints (feasible solutions) is a polygon
• Brute force approach could find max Z by evaluating Z at all vertices
• Vertices are the simultaneous solution of 2 constraints (inequalities) converted to 2 equations (equalities) ⇒ x1 = x1* and x2 = x2*
• Z evaluation is then Z(x1*, x2*)
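The brute-force idea above can be sketched directly: intersect every pair of constraint equalities, keep the feasible intersections, and evaluate Z at each vertex. A minimal sketch for the text example (names are mine):

```python
from itertools import combinations

# Constraints written as a*x1 + b*x2 <= c, including the non-negativity
# constraints -x1 <= 0 and -x2 <= 0 from the text example.
cons = [(7, 11, 77), (10, 8, 80), (1, 0, 9), (0, 1, 6), (-1, 0, 0), (0, -1, 0)]

best = None
for (a1, b1, c1), (a2, b2, c2) in combinations(cons, 2):
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        continue                       # parallel lines: no vertex
    x1 = (c1 * b2 - c2 * b1) / det     # simultaneous solution of the
    x2 = (a1 * c2 - a2 * c1) / det     # two equalities (Cramer's rule)
    if all(a * x1 + b * x2 <= c + 1e-9 for a, b, c in cons):
        z = 150 * x1 + 175 * x2        # objective from the text example
        if best is None or z > best[0]:
            best = (z, x1, x2)

z_max, x1_star, x2_star = best
```

This confirms the optimum at the intersection of constraints (1) and (2): x1* ≈ 4.89, x2* ≈ 3.89, Z ≈ 1414. Enumerating all Comb(n+m, m) intersections only scales to small problems, which is the simplex method's motivation.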
Simplex Method
• Converts relations to m equations in m + n unknowns
• Constraint inequalities are converted to equalities with new variables called slack variables Si
• ai1 x1 + ai2 x2 + … + ain xn + Si = bi   (i = 1, m)
• Inequality i will then be at its limit when Si = 0
• The independent variables (x's) are called structural variables
Simplex Method, Continued
• Slack variables will have positive values if the
values of structural variables meet the inequality
with room to spare
• There will be Comb(n+m,m) different possible
constraint intersections (including non-negativity
constraints)
• Simplex method uses Gauss-Jordan elimination to check only vertices on the feasible polygon, starting with all x's = 0
Simplex Tableau for Text Example
Basic   Z     x1     x2    S1   S2   S3   S4     bi
Z       1   -150   -175     0    0    0    0      0
S1      0      7     11     1    0    0    0     77
S2      0     10      8     0    1    0    0     80
S3      0      1      0     0    0    1    0      9
S4      0      0      1     0    0    0    1      6

• Note that we never do Gauss-Jordan elimination on the Z row
• 1st row keeps track of the current estimate of Z in the b column
• Basic variables start out as S's
• Non-basic variables start out as x's and are assumed zero
Simplex Elimination
Basic   Z     x1     x2    S1   S2   S3   S4     bi
Z       1   -150   -175     0    0    0    0      0
S1      0      7     11     1    0    0    0     77
S2      0     10      8     0    1    0    0     80
S3      0      1      0     0    0    1    0      9
S4      0      0      1     0    0    0    1      6

• Arbitrarily choose x1 for Gauss-Jordan elimination (not standard)
• This makes it a basic variable ('enters' the basic list: the entering variable)
• Pivot on the S2 row, because 80/10 is the smallest bi / ai1; any other choice results in negative S's after elimination
• This choice makes S2 non-basic (it is the leaving variable)
First Simplex Step - Eliminate x1

Basic   Z    x1     x2    S1     S2   S3   S4     bi
Z       1     0    -55     0     15    0    0   1200
S1      0     0    5.4     1   -0.7    0    0     21
x1      0     1    0.8     0    0.1    0    0      8
S3      0     0   -0.8     0   -0.1    1    0      1
S4      0     0      1     0      0    0    1      6

• The S1 row can be read as S1 = 21 (reduced from 77), because x2 and S2 are non-basic variables (their values can be taken as zero)
• Similarly, x1 = 8
• Equivalent to moving from [0, 0] to [8, 0] for the location of the max
• This moves our estimate of max Z from 0 to 1200 (see Z eq.)
Next Elimination Choice

Basic   Z    x1     x2    S1     S2   S3   S4     bi
Z       1     0    -55     0     15    0    0   1200
S1      0     0    5.4     1   -0.7    0    0     21
x1      0     1    0.8     0    0.1    0    0      8
S3      0     0   -0.8     0   -0.1    1    0      1
S4      0     0      1     0      0    0    1      6

• Now choose x2 as the entering variable (most negative Z eq. coefficient)
• The S1 row is chosen as the pivot row (note: 1/(-0.8) is not allowed; why?)
• So now S1 is our leaving variable
Next Simplex Step (Elimination)

Basic   Z    x1    x2      S1      S2   S3   S4     bi
Z       1     0     0   10.18    7.87    0    0   1414
x2      0     0     1    0.18   -0.13    0    0   3.89
x1      0     1     0   -0.15    0.20    0    0   4.89
S3      0     0     0    0.15   -0.20    1    0   4.11
S4      0     0     0   -0.18    0.13    0    1   2.11

• We know we are done, because all coefficients in the Z eq. are positive
• The non-basic variables are S1 and S2, so the vertex is at the intersection of the equations from constraint (1) and constraint (2)
• The optimal Z is then 1414
Graphical Depiction of Simplex Steps
[Figure: the actual simplex path on the x1-x2 plane, moving along the vertices (0, 0) → (8, 0) → (4.89, 3.89)]
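The tableau steps above can be collected into a small routine. This is a generic sketch (all names mine, not the text's program); it always enters the most negative Z-row coefficient, so its pivot order differs from the slides' arbitrary first choice of x1, but it reaches the same optimal vertex.

```python
def simplex_maximize(c, A, b):
    """Tableau simplex for: max c.x  subject to  A x <= b, x >= 0.
    A sketch for small, well-behaved problems: no degeneracy or
    unboundedness handling."""
    m, n = len(A), len(c)
    # Constraint rows with slack-variable identity columns, then the Z row
    # holding the negated objective coefficients (as in the tableau above).
    T = [[float(v) for v in A[i]] +
         [1.0 if j == i else 0.0 for j in range(m)] + [float(b[i])]
         for i in range(m)]
    T.append([-float(ci) for ci in c] + [0.0] * (m + 1))
    basis = list(range(n, n + m))              # all slacks start out basic
    while True:
        piv_col = min(range(n + m), key=lambda j: T[-1][j])
        if T[-1][piv_col] >= -1e-12:           # no negative Z coefficients
            break                              # -> optimal
        # Leaving variable: smallest ratio b_i / a_i over positive entries.
        _, piv_row = min((T[i][-1] / T[i][piv_col], i)
                         for i in range(m) if T[i][piv_col] > 1e-12)
        basis[piv_row] = piv_col
        p = T[piv_row][piv_col]                # Gauss-Jordan pivot
        T[piv_row] = [v / p for v in T[piv_row]]
        for r in range(m + 1):
            if r != piv_row:
                f = T[r][piv_col]
                T[r] = [v - f * w for v, w in zip(T[r], T[piv_row])]
    x = [0.0] * n
    for i, bv in enumerate(basis):
        if bv < n:
            x[bv] = T[i][-1]
    return T[-1][-1], x

# Text example: max Z = 150 x1 + 175 x2 under constraints (1)-(4)
z, x = simplex_maximize([150, 175],
                        [[7, 11], [10, 8], [1, 0], [0, 1]],
                        [77, 80, 9, 6])
```

Running this recovers the tableau's answer: Z ≈ 1414 at x1 ≈ 4.89, x2 ≈ 3.89.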
Curve Fitting

We want to find the best "fit" of a curve through the data. Here we see:
a) Least squares fit
b) Linear interpolation
Can you suggest another?

[Figure: f(x) vs. x with scattered data points and the two candidate curves]
Mathematical Background
• The prerequisite mathematical background
for interpolation is found in the material on
the Taylor series expansion and finite
divided differences
• Simple statistics
– average
– standard deviation
– normal distribution
Normal Distribution

[Figure: a histogram used to depict the distribution]

x̄ ± σ contains about 68% of the data; x̄ ± 2σ contains about 95%.
Material to be Covered in Curve
Fitting
• Linear Regression
– Polynomial Regression
– Multiple Regression
– General Linear Least Squares
– Nonlinear Regression
• Interpolation
– Newton’s Polynomial
– Lagrange Polynomial
– Coefficients of Polynomials
Least Squares Regression
• Simplest is fitting a straight line to a set of
paired observations
– (x1, y1), (x2, y2), … , (xn, yn)
• The resulting mathematical expression is
– y = a0 + a1 x + e
• We will consider the error introduced at
each data point to develop a strategy for
determining the “best fit” equations
[Figure: data points and fitted line, showing the residual e at each point]

\[
S_r = \sum_{i=1}^{n} e_i^2 = \sum_{i=1}^{n} \left( y_i - a_0 - a_1 x_i \right)^2
\]

where the residual is e_i = y_i - a_0 - a_1 x_i.

To determine the values for a0 and a1, differentiate with respect to each coefficient:

\[
\frac{\partial S_r}{\partial a_0} = -2 \sum \left( y_i - a_0 - a_1 x_i \right)
\qquad
\frac{\partial S_r}{\partial a_1} = -2 \sum \left[ \left( y_i - a_0 - a_1 x_i \right) x_i \right]
\]

Note: we have simplified the summation symbols.

What mathematics technique will minimize Sr?
\[
S_r = \sum_{i=1}^{n} e_i^2 = \sum_{i=1}^{n} \left( y_i - a_0 - a_1 x_i \right)^2
\]

\[
\frac{\partial S_r}{\partial a_0} = -2 \sum \left( y_i - a_0 - a_1 x_i \right)
\qquad
\frac{\partial S_r}{\partial a_1} = -2 \sum \left[ \left( y_i - a_0 - a_1 x_i \right) x_i \right]
\]

Setting the derivatives equal to zero will minimize Sr. If this is done, the equations can be expressed as:

\[
0 = \sum y_i - \sum a_0 - \sum a_1 x_i
\qquad
0 = \sum x_i y_i - \sum a_0 x_i - \sum a_1 x_i^2
\]

If you recognize that Σ a0 = n a0, you have two simultaneous equations with two unknowns, a0 and a1.

What are these equations? (hint: only place terms with a0 and a1 on the LHS of the equations)

What are the final equations for a0 and a1?

\[
n a_0 + \left( \sum x_i \right) a_1 = \sum y_i
\qquad
\left( \sum x_i \right) a_0 + \left( \sum x_i^2 \right) a_1 = \sum x_i y_i
\]

These first two equations are called the normal equations. In matrix form:

\[
\begin{bmatrix} n & \sum x_i \\ \sum x_i & \sum x_i^2 \end{bmatrix}
\begin{Bmatrix} a_0 \\ a_1 \end{Bmatrix}
= \begin{Bmatrix} \sum y_i \\ \sum x_i y_i \end{Bmatrix}
\]

These second two result from solving the normal equations:

\[
a_1 = \frac{n \sum x_i y_i - \sum x_i \sum y_i}{n \sum x_i^2 - \left( \sum x_i \right)^2}
\qquad
a_0 = \bar{y} - a_1 \bar{x}
\]

Also note that \( \bar{x} = \sum x_i / n \) and \( \bar{y} = \sum y_i / n \).
Error

Recall:

\[
S_r = \sum_{i=1}^{n} e_i^2 = \sum_{i=1}^{n} \left( y_i - a_0 - a_1 x_i \right)^2
\]

[Figure: scatter of the data about the mean vs. scatter about the regression line]

The most common measure of the "spread" of a sample is the standard deviation:

\[
S_t = \sum \left( y_i - \bar{y} \right)^2, \qquad s_y = \sqrt{\frac{S_t}{n-1}}
\]

Introduce a term to measure the standard error of the estimate:

\[
s_{y/x} = \sqrt{\frac{S_r}{n-2}}
\]

Coefficient of determination r²:

\[
r^2 = \frac{S_t - S_r}{S_t}
\]

r is the correlation coefficient.

The following signifies that the line explains 100 percent of the variability of the data: S_r = 0 and r = r² = 1.

If r = r² = 0, then S_r = S_t and the fit is invalid.
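The normal equations and the r² definition above fit in a few lines of code. A minimal sketch (function name mine), run here on the first of the four data sets tabulated next:

```python
def linreg(x, y):
    """Least-squares line y = a0 + a1*x with r^2, from the normal
    equations above. A sketch; no guards for degenerate data."""
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(xi * xi for xi in x)
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    a1 = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a0 = sy / n - a1 * sx / n                     # a0 = ybar - a1*xbar
    st = sum((yi - sy / n) ** 2 for yi in y)      # spread about the mean
    sr = sum((yi - a0 - a1 * xi) ** 2 for xi, yi in zip(x, y))
    return a0, a1, (st - sr) / st                 # r^2 = (St - Sr) / St

# Data Set 1 from the table of four data sets:
x = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]
y = [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68]
a0, a1, r2 = linreg(x, y)
```

The fit comes out to roughly y = 0.5 x + 3.0 with r² ≈ 0.67, matching the regression summary given later for all four data sets.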

Consider the following four sets of data:

Data 1          Data 2          Data 3          Data 4
x      y        x      y        x      y        x      y
10     8.04     10     9.14     10     7.46      8     6.58
 8     6.95      8     8.14      8     6.77      8     5.76
13     7.58     13     8.74     13    12.74      8     7.71
 9     8.81      9     8.77      9     7.11      8     8.84
11     8.33     11     9.26     11     7.81      8     8.47
14     9.96     14     8.10     14     8.84      8     7.04
 6     7.24      6     6.13      6     6.08      8     5.25
 4     4.26      4     3.10      4     5.39     19    12.50
12    10.84     12     9.13     12     8.15      8     5.56
 7     4.82      7     7.26      7     6.42      8     7.91
 5     5.68      5     4.74      5     5.73      8     6.89
GRAPHS OF FOUR DATA SETS
[Figure: scatter plots of f(x) vs. x for Data Sets 1 through 4]
Linear regression results for the four data sets (slope, intercept; their standard errors; r² and s_y/x; F statistic and degrees of freedom):

Data Set   slope   intercept   se(slope)   se(intercept)   r²       s_y/x    F       df
1          0.50    3.00        0.1179      1.1247          0.6665   1.2366   17.99   9
2          0.50    3.00        0.1180      1.1253          0.6662   1.2372   17.97   9
3          0.50    3.00        0.1179      1.1245          0.6663   1.2363   17.97   9
4          0.50    3.00        0.1178      1.1239          0.6667   1.2357   18.00   9

All equations are y = 0.5x + 3 with R² = 0.67, even though the graphs of the four data sets look completely different.
Linearization of non-linear
relationships

Some data is simply ill-suited for
linear least squares regression....

or so it appears.
[Figure: P vs. t curve for P = P0 e^(rt), and the linearized plot of ln P vs. t with slope = r and intercept = ln P0]

EXPONENTIAL EQUATIONS

\[
P = P_0 e^{rt}
\]

Linearize. Why?

\[
\ln P = \ln \left( P_0 e^{rt} \right) = \ln P_0 + \ln e^{rt} = \ln P_0 + r t
\]

slope = r, intercept = ln P0

Can you see the similarity with the equation for a line, y = b + mx, where b is the y-intercept and m is the slope?

After taking the natural log of the y-data, perform linear regression. From this regression:
• The value of b will give us ln(P0). Hence, P0 = e^b
• The value of m will give us r directly.
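The recipe above, regressing ln P on t and back-transforming, can be sketched end to end. The data here are synthetic, generated with illustrative values P0 = 2 and r = 0.5 (not from the text):

```python
from math import exp, log

# Synthetic exponential data: P = P0 * exp(r*t) with P0 = 2, r = 0.5.
t = [0.0, 1.0, 2.0, 3.0, 4.0]
P = [2.0 * exp(0.5 * ti) for ti in t]

# Linearize: regress ln(P) on t using the normal equations.
lnP = [log(p) for p in P]
n = len(t)
st, sl = sum(t), sum(lnP)
m = (n * sum(ti * li for ti, li in zip(t, lnP)) - st * sl) / \
    (n * sum(ti * ti for ti in t) - st * st)   # slope  -> r
b = sl / n - m * st / n                        # intercept -> ln(P0)
r, P0 = m, exp(b)
```

Because the synthetic data are exactly exponential, the regression recovers r = 0.5 and P0 = 2 to machine precision; with noisy data the recovered values would only be estimates.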
POWER EQUATIONS

(Flow over a weir.) Here we linearize the equation by taking the log of the H and Q data. What is the resulting intercept and slope?

\[
Q = c H^a
\quad\Rightarrow\quad
\log Q = \log \left( c H^a \right) = \log c + a \log H
\]

slope = a, intercept = log c

[Figure: Q vs. H curve, and the linearized plot of log Q vs. log H]
SATURATION-GROWTH RATE EQUATION

\[
\mu = \mu_{max} \frac{S}{K_s + S}
\]

Linearized as 1/μ vs. 1/S: slope = K_s / μ_max, intercept = 1/μ_max

Here, μ is the growth rate of a microbial population, μ_max is the maximum growth rate, S is the substrate or food concentration, and K_s is the substrate concentration at a value of μ = μ_max / 2.

[Figure: μ vs. S saturation curve, and the linearized plot of 1/μ vs. 1/S]
Regression
• You should be cognizant of the fact that
there are theoretical aspects of regression
that are of practical importance but are
beyond the scope of this book
• Statistical assumptions are inherent in the
linear least squares procedure
Regression
• x has a fixed value; it is not random and is
measured without error
• The y values are independent random variables and all have the same variance
• The y values for a given x must be normally
distributed
Regression
• The regression of y versus x is not the same
as x versus y
• The error of y versus x is not the same as x
versus y
[Figure: regression minimizing errors in the y-direction vs. in the x-direction]
Polynomial Regression
• One of the reasons you were presented with
the theory behind linear regression was to
allow you the insight into similar
procedures for higher order polynomials
• y = a0 + a1 x
• mth-degree polynomial:
  y = a0 + a1 x + a2 x² + … + am x^m + e

Based on the sum of the squares of the residuals:

\[
S_r = \sum \left( y_i - a_0 - a_1 x_i - a_2 x_i^2 - \cdots - a_m x_i^m \right)^2
\]

1. Take the derivative of the above equation with respect to each of the unknown coefficients, e.g. the partial with respect to a2:

\[
\frac{\partial S_r}{\partial a_2} = -2 \sum x_i^2 \left( y_i - a_0 - a_1 x_i - a_2 x_i^2 - \cdots - a_m x_i^m \right)
\]

2. These equations are set to zero to minimize Sr, i.e. minimize the error.

3. Put all unknown values on the LHS of the equation. Again, using the partial of Sr with respect to a2:

\[
a_0 \sum x_i^2 + a_1 \sum x_i^3 + a_2 \sum x_i^4 + \cdots + a_m \sum x_i^{m+2} = \sum x_i^2 y_i
\]

4. This set of normal equations results in m + 1 simultaneous equations which can be solved using matrix methods to determine a0, a1, a2, …, am.
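The m + 1 normal equations above can be assembled and solved directly. A minimal sketch (names mine) that builds the matrix of power sums and solves it with Gaussian elimination; the check data are synthetic, generated exactly from y = 1 + 2x + 3x²:

```python
def polyfit_normal(x, y, m):
    """Fit y = a0 + a1*x + ... + am*x^m by forming the (m+1)x(m+1)
    normal equations and solving with Gaussian elimination.
    A sketch: fine for small m, ill-conditioned for large m."""
    n = m + 1
    A = [[sum(xi ** (i + j) for xi in x) for j in range(n)] for i in range(n)]
    rhs = [sum(yi * xi ** i for xi, yi in zip(x, y)) for i in range(n)]
    for k in range(n):                      # forward elimination w/ pivoting
        p = max(range(k, n), key=lambda r: abs(A[r][k]))
        A[k], A[p] = A[p], A[k]
        rhs[k], rhs[p] = rhs[p], rhs[k]
        for r in range(k + 1, n):
            f = A[r][k] / A[k][k]
            for c in range(k, n):
                A[r][c] -= f * A[k][c]
            rhs[r] -= f * rhs[k]
    a = [0.0] * n                           # back substitution
    for k in range(n - 1, -1, -1):
        a[k] = (rhs[k] - sum(A[k][c] * a[c] for c in range(k + 1, n))) / A[k][k]
    return a

# Data generated exactly from y = 1 + 2x + 3x^2 (illustrative check):
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1 + 2 * xi + 3 * xi * xi for xi in xs]
a = polyfit_normal(xs, ys, 2)
```

Since the data lie exactly on a quadratic, the fit recovers the coefficients [1, 2, 3].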
Multiple Linear Regression
• A useful extension of linear regression is the case where y is a linear function of two or more variables:
  y = a0 + a1 x1 + a2 x2
• We follow the same procedure:
  y = a0 + a1 x1 + a2 x2 + e

Multiple Linear Regression

For two variables, we would solve a 3 x 3 matrix in the following form:

\[
\begin{bmatrix}
n & \sum x_{1i} & \sum x_{2i} \\
\sum x_{1i} & \sum x_{1i}^2 & \sum x_{1i} x_{2i} \\
\sum x_{2i} & \sum x_{1i} x_{2i} & \sum x_{2i}^2
\end{bmatrix}
\begin{Bmatrix} a_0 \\ a_1 \\ a_2 \end{Bmatrix}
=
\begin{Bmatrix} \sum y_i \\ \sum x_{1i} y_i \\ \sum x_{2i} y_i \end{Bmatrix}
\]

[A] and {c} are clearly based on the data given for x1, x2, and y; solve for the unknowns in {x}.

Interpolation
• General formula for an mth-order polynomial:
  y = a0 + a1 x + a2 x² + … + am x^m
• For m + 1 data points, there is one, and only one, polynomial of order m or less that passes through all points
• Example: y = a0 + a1 x
  fits between 2 points
  1st order
Interpolation
• We will explore two mathematical methods
well suited for computer implementation
• Newton’s Divided Difference Interpolating
Polynomials
• Lagrange Interpolating Polynomial
Newton’s Divided Difference
Interpolating Polynomials
• Linear Interpolation
• General Form
• Errors
Linear Interpolation
Temperature, °C    Density, kg/m³
 0                   999.9
 5                  1000.0
10                   999.7
15                   999.1
20                   998.2

How would you approach estimating the density at 17 °C?

[Figure: ρ vs. T between 15 and 20 °C; the estimate must lie between the bracketing values]

999.1 > ρ > 998.2

\[
f_1(x) = f(x_0) + \frac{f(x_1) - f(x_0)}{x_1 - x_0} \left( x - x_0 \right)
\]

Note: The notation f1(x) designates that this is a first-order interpolating polynomial.

\[
\frac{\rho - 999.1}{17 - 15} = \frac{998.2 - 999.1}{20 - 15}
\]
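The first-order formula above is a one-liner in code. A minimal sketch (function name mine) applied to the density table:

```python
def lerp(x, x0, f0, x1, f1):
    # First-order interpolating polynomial f1(x) from the slide:
    # f1(x) = f(x0) + [f(x1) - f(x0)] / (x1 - x0) * (x - x0)
    return f0 + (f1 - f0) / (x1 - x0) * (x - x0)

# Density at 17 C from the bracketing points (15, 999.1) and (20, 998.2):
rho = lerp(17.0, 15.0, 999.1, 20.0, 998.2)
```

This gives ρ ≈ 998.74 kg/m³, which indeed falls between the bracketing values 999.1 and 998.2.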
[Figure: true solution vs. linear interpolation; smaller intervals provide a better estimate]

An alternative approach would be to include a third point and estimate f(x) from a 2nd order polynomial:

\[
f_2(x) = b_0 + b_1 \left( x - x_0 \right) + b_2 \left( x - x_0 \right)\left( x - x_1 \right)
\]

Prove that this is a 2nd order polynomial of the form:

\[
f(x) = a_0 + a_1 x + a_2 x^2
\]

First, multiply the terms:

\[
f_2(x) = b_0 + b_1 x - b_1 x_0 + b_2 x^2 + b_2 x_0 x_1 - b_2 x_0 x - b_2 x_1 x
\]

Collect terms and recognize that:

\[
a_0 = b_0 - b_1 x_0 + b_2 x_0 x_1, \qquad
a_1 = b_1 - b_2 x_0 - b_2 x_1, \qquad
a_2 = b_2
\]

Procedure for Interpolation

[Figure: the points (x0, f(x0)), (x1, f(x1)), (x2, f(x2)) and the interpolated point (x, f(x))]

\[
b_0 = f(x_0)
\]
\[
b_1 = \frac{f(x_1) - f(x_0)}{x_1 - x_0}
\]
\[
b_2 = \frac{\dfrac{f(x_2) - f(x_1)}{x_2 - x_1} - \dfrac{f(x_1) - f(x_0)}{x_1 - x_0}}{x_2 - x_0}
\]

\[
f_2(x) = b_0 + b_1 \left( x - x_0 \right) + b_2 \left( x - x_0 \right)\left( x - x_1 \right)
\]
Example

[Figure: density vs. temperature, 0 to 30 °C]

Temperature, °C    Density, kg/m³
 0                   999.9
 5                  1000.0
10                   999.7
15                   999.1
20                   998.2

Include the point at 10 degrees in estimating the density at 17 degrees:

\[
f_2(x) = b_0 + b_1 \left( x - x_0 \right) + b_2 \left( x - x_0 \right)\left( x - x_1 \right)
\]

with

\[
b_0 = f(x_0), \qquad
b_1 = \frac{f(x_1) - f(x_0)}{x_1 - x_0}, \qquad
b_2 = \frac{\dfrac{f(x_2) - f(x_1)}{x_2 - x_1} - \dfrac{f(x_1) - f(x_0)}{x_1 - x_0}}{x_2 - x_0}
\]
General Form of Newton’s
Interpolating Polynomials
for the nth-order polynomial
for the nth-order polynomial:

\[
f_n(x) = b_0 + b_1 \left( x - x_0 \right) + \cdots + b_n \left( x - x_0 \right)\left( x - x_1 \right) \cdots \left( x - x_{n-1} \right)
\]

To establish a methodical approach to a solution, define the first finite divided difference as:

\[
f[x_i, x_j] = \frac{f(x_i) - f(x_j)}{x_i - x_j}
\]

If we let i = 1 and j = 0, then this is b1:

\[
b_1 = \frac{f(x_1) - f(x_0)}{x_1 - x_0}
\]

Similarly, we can define the second finite divided difference, which expresses both b2 and the difference of the first two divided differences:

\[
f[x_i, x_j, x_k] = \frac{f[x_i, x_j] - f[x_j, x_k]}{x_i - x_k}, \qquad
b_2 = \frac{\dfrac{f(x_2) - f(x_1)}{x_2 - x_1} - \dfrac{f(x_1) - f(x_0)}{x_1 - x_0}}{x_2 - x_0}
\]

Following the same scheme, the third divided difference is the difference of two second finite divided differences.

i   xi    f(xi)    first        second           third
0   x0    f(x0)    f[x1, x0]    f[x2, x1, x0]    f[x3, x2, x1, x0]
1   x1    f(x1)    f[x2, x1]    f[x3, x2, x1]
2   x2    f(x2)    f[x3, x2]
3   x3    f(x3)

These differences can be used to evaluate the b-coefficients. The result is the following interpolation polynomial, called Newton's Divided Difference Interpolating Polynomial:

\[
f_n(x) = f(x_0) + f[x_1, x_0]\left( x - x_0 \right) + \cdots + f[x_n, x_{n-1}, \ldots, x_0]\left( x - x_0 \right)\left( x - x_1 \right) \cdots \left( x - x_{n-1} \right)
\]

To determine the error we need an extra point. The error follows a relationship analogous to the error in the Taylor series.
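The divided-difference table above maps directly onto code: each column is built from the one before it, and the b-coefficients are the top entries of the columns. A minimal sketch (names mine), checked against the density example:

```python
def newton_interp(xs, ys, x):
    """Newton's divided-difference interpolating polynomial evaluated
    at x. Builds the table column by column as on the slide.
    A sketch; assumes the xs are distinct."""
    n = len(xs)
    dd = [list(ys)]                     # column 0: f(x_i)
    for k in range(1, n):               # column k: k-th divided differences
        prev = dd[-1]
        dd.append([(prev[i + 1] - prev[i]) / (xs[i + k] - xs[i])
                   for i in range(n - k)])
    b = [col[0] for col in dd]          # b_k = f[x_k, ..., x_0]
    # Evaluate b0 + b1*(x-x0) + b2*(x-x0)(x-x1) + ...
    result, prod = b[0], 1.0
    for k in range(1, n):
        prod *= (x - xs[k - 1])
        result += b[k] * prod
    return result

# Quadratic through (10, 999.7), (15, 999.1), (20, 998.2) at 17 C:
rho = newton_interp([10.0, 15.0, 20.0], [999.7, 999.1, 998.2], 17.0)
```

Here b0 = 999.7, b1 = -0.12, b2 = -0.006, and the estimate at 17 °C is 998.776 kg/m³, matching the worked example.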
Lagrange Interpolating Polynomial

\[
f_n(x) = \sum_{i=0}^{n} L_i(x)\, f(x_i), \qquad
L_i(x) = \prod_{\substack{j=0 \\ j \ne i}}^{n} \frac{x - x_j}{x_i - x_j}
\]

where Π designates the "product of".

The linear version of this expression is at n = 1:

\[
f_1(x) = \frac{x - x_1}{x_0 - x_1} f(x_0) + \frac{x - x_0}{x_1 - x_0} f(x_1)
\]

Your text shows you how to do n = 2 (second order). What would third order be?
\[
f_3(x) = \frac{\left( x - x_1 \right)\left( x - x_2 \right)\left( x - x_3 \right)}{\left( x_0 - x_1 \right)\left( x_0 - x_2 \right)\left( x_0 - x_3 \right)} f(x_0)
+ \frac{\left( x - x_0 \right)\left( x - x_2 \right)\left( x - x_3 \right)}{\left( x_1 - x_0 \right)\left( x_1 - x_2 \right)\left( x_1 - x_3 \right)} f(x_1)
+ \frac{\left( x - x_0 \right)\left( x - x_1 \right)\left( x - x_3 \right)}{\left( x_2 - x_0 \right)\left( x_2 - x_1 \right)\left( x_2 - x_3 \right)} f(x_2)
+ \frac{\left( x - x_0 \right)\left( x - x_1 \right)\left( x - x_2 \right)}{\left( x_3 - x_0 \right)\left( x_3 - x_1 \right)\left( x_3 - x_2 \right)} f(x_3)
\]

Note: in the first term, (x - x0) does not appear in the numerator and (x0 - x0) does not appear in the denominator; the j = 0 factor is skipped when i = 0. Likewise, in the last term, (x - x3) and (x3 - x3) do not appear; the j = 3 factor is skipped when i = 3.
Example

[Figure: density vs. temperature data]

Temperature, °C    Density, kg/m³
 0                   999.9
 5                  1000.0
10                   999.7
15                   999.1
20                   998.2

Determine the density at 17 degrees.

\[
f_2(17) = -119.964 + 839.244 + 279.496 = 998.776
\qquad
f_1(17) = 998.74
\]

In fact, you can derive Lagrange directly from Newton's Interpolating Polynomial.
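The Lagrange form above is also a direct translation into code, with the j = i factor skipped exactly as the notes describe. A minimal sketch (names mine):

```python
def lagrange(xs, ys, x):
    """Lagrange interpolating polynomial f_n(x) = sum L_i(x) * f(x_i),
    with L_i the product terms defined above. Assumes distinct xs."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        L = 1.0
        for j, xj in enumerate(xs):
            if j != i:                  # skip the j = i factor
                L *= (x - xj) / (xi - xj)
        total += L * yi
    return total

# Second-order fit through (10, 999.7), (15, 999.1), (20, 998.2):
rho = lagrange([10.0, 15.0, 20.0], [999.7, 999.1, 998.2], 17.0)
```

The three weighted terms are -119.964, 839.244, and 279.496, summing to the same 998.776 kg/m³ the Newton form gives.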
Coefficients of an Interpolating Polynomial

\[
f_n(x) = b_0 + b_1 \left( x - x_0 \right) + \cdots + b_n \left( x - x_0 \right)\left( x - x_1 \right) \cdots \left( x - x_{n-1} \right)
\]

y = a0 + a1 x + a2 x² + … + am x^m

HOW CAN WE BE MORE STRAIGHTFORWARD IN GETTING VALUES?

\[
f(x_0) = a_0 + a_1 x_0 + a_2 x_0^2
\]
\[
f(x_1) = a_0 + a_1 x_1 + a_2 x_1^2
\]
\[
f(x_2) = a_0 + a_1 x_2 + a_2 x_2^2
\]

This is a 2nd order polynomial. We need three data points. Plug the values of xi and f(xi) directly into the equations. This gives three simultaneous equations to solve for a0, a1, and a2.
Example

[Figure: density vs. temperature data]

Temperature, °C    Density, kg/m³
 0                   999.9
 5                  1000.0
10                   999.7
15                   999.1
20                   998.2

Determine the density at 17 degrees.

\[
\begin{bmatrix} 1 & 10 & 10^2 \\ 1 & 15 & 15^2 \\ 1 & 20 & 20^2 \end{bmatrix}
\begin{Bmatrix} a_0 \\ a_1 \\ a_2 \end{Bmatrix}
= \begin{Bmatrix} 999.7 \\ 999.1 \\ 998.2 \end{Bmatrix}
\]

\[
\begin{Bmatrix} a_0 \\ a_1 \\ a_2 \end{Bmatrix}
= \begin{Bmatrix} 1000 \\ 0.03 \\ -0.006 \end{Bmatrix}
\qquad
\rho_{17} = 1000 + 0.03\left( 17 \right) - 0.006\left( 17 \right)^2 = 998.78
\]
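Solving the three simultaneous equations above takes only a short Gaussian elimination. A minimal sketch for the three-point case (names mine); note this direct approach becomes ill-conditioned as the number of points grows.

```python
def poly_coeffs(xs, ys):
    """Solve f(x_i) = a0 + a1*x_i + a2*x_i^2 for three data points by
    Gaussian elimination, as in the example above. A sketch: no
    pivoting, three points only."""
    A = [[1.0, xi, xi * xi] for xi in xs]
    b = list(ys)
    for k in range(3):                       # forward elimination
        for r in range(k + 1, 3):
            f = A[r][k] / A[k][k]
            for c in range(k, 3):
                A[r][c] -= f * A[k][c]
            b[r] -= f * b[k]
    a = [0.0, 0.0, 0.0]                      # back substitution
    for k in (2, 1, 0):
        a[k] = (b[k] - sum(A[k][c] * a[c] for c in range(k + 1, 3))) / A[k][k]
    return a

a0, a1, a2 = poly_coeffs([10.0, 15.0, 20.0], [999.7, 999.1, 998.2])
rho = a0 + a1 * 17 + a2 * 17 ** 2
```

This reproduces the example's coefficients (1000, 0.03, -0.006) and density estimate 998.78 kg/m³.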
Numerical Differentiation and
Integration
• Calculus is the mathematics of change.
• Engineers must continuously deal with
systems and processes that change, making
calculus an essential tool of our profession.
• At the heart of calculus are the related
mathematical concepts of differentiation
and integration.
Differentiation
• Dictionary definition of differentiate - “to
mark off by differences, distinguish; ..to
perceive the difference in or between”
• Mathematical definition of derivative - rate
of change of a dependent variable with
respect to an independent variable
\[
\frac{\Delta y}{\Delta x} = \frac{f(x_i + \Delta x) - f(x_i)}{\Delta x}
\]

[Figure: secant line through f(x_i) and f(x_i + Δx), showing Δy and Δx]
Integration
• The inverse process of differentiation
• Dictionary definition of integrate - “to bring
together, as parts, into a whole; to unite; to
indicate the total amount”
• Mathematically, it is the total value or summation
of f(x)dx over a range of x. In fact the
integration symbol is actually a stylized capital S
intended to signify the connection between
integration and summation.
[Figure: area under f(x) from a to b]

\[
I = \int_a^b f(x)\,dx
\]
Mathematical Background

\[
\int u^n\,du = ? \qquad \int u\,dv = ? \qquad \int a^{bx}\,dx = ? \qquad \int \frac{dx}{x} = ? \qquad \int e^{ax}\,dx = ?
\]

\[
\frac{d}{dx}\sin x = ? \qquad \frac{d}{dx}\cos x = ? \qquad \frac{d}{dx}\tan x = ? \qquad \frac{d}{dx} e^{x} = ? \qquad \frac{d}{dx} a^{x} = ? \qquad \frac{d}{dx} x^{n} = ? \qquad \frac{d}{dx} \ln x = ?
\]

If u and v are functions of x: \( \frac{d}{dx} u^n = ? \qquad \frac{d}{dx}(uv) = ? \)
Newton-Cotes Integration
• Common numerical integration scheme
• Based on the strategy of replacing a complicated function or tabulated data with some approximating function that is easy to integrate

\[
I = \int_a^b f(x)\,dx \cong \int_a^b f_n(x)\,dx,
\qquad
f_n(x) = a_0 + a_1 x + \cdots + a_n x^n
\]

f_n(x) is an nth order polynomial.
The approximation of an integral by the area under
- a first order polynomial
- a second order polynomial
We can also approximate the integral by using a series of polynomials applied piecewise.
An approximation of an integral by the area under
five straight line segments.
Newton-Cotes Formulas
• Closed form - data is at the beginning and
end of the limits of integration
• Open form - integration limits extend
beyond the range of data.
Trapezoidal Rule
• First of the Newton-Cotes closed integration
formulas
• Corresponds to the case where the
polynomial is a first order
\[
I = \int_a^b f(x)\,dx \cong \int_a^b f_1(x)\,dx,
\qquad
f_1(x) = a_0 + a_1 x
\]

A straight line can be represented as:

\[
f_1(x) = f(a) + \frac{f(b) - f(a)}{b - a}\left( x - a \right)
\]

Integrate this equation. The result is the trapezoidal rule:

\[
I \cong \int_a^b \left[ f(a) + \frac{f(b) - f(a)}{b - a}\left( x - a \right) \right] dx
= \left( b - a \right) \frac{f(a) + f(b)}{2}
\]
Recall the formula for computing the area of a trapezoid: height × (average of the bases).

[Figure: a trapezoid with its two bases and height; for the trapezoidal rule the trapezoid is on its side, so the "height" is the interval width and the "bases" are f(a) and f(b)]

The concept is the same, but the trapezoid is on its side:

\[
I \cong \left( b - a \right) \frac{f(a) + f(b)}{2}
\]
Error of the Trapezoidal Rule

\[
E_t = -\frac{1}{12} f''(\xi) \left( b - a \right)^3, \qquad a < \xi < b
\]

This indicates that if the function being integrated is linear, the trapezoidal rule will be exact.

Otherwise, for sections with second and higher order derivatives (that is, with curvature) error can occur.

A reasonable estimate of ξ is the average value of b and a.
Multiple Application of the
Trapezoidal Rule
• Improve the accuracy by dividing the
integration interval into a number of smaller
segments
• Apply the method to each segment
• Resulting equations are called multiple-
application or composite integration
formulas
\[
I = \int_{x_0}^{x_1} f(x)\,dx + \int_{x_1}^{x_2} f(x)\,dx + \cdots + \int_{x_{n-1}}^{x_n} f(x)\,dx
\]
\[
I \cong h\,\frac{f(x_0) + f(x_1)}{2} + h\,\frac{f(x_1) + f(x_2)}{2} + \cdots + h\,\frac{f(x_{n-1}) + f(x_n)}{2},
\qquad
h = \frac{x_n - x_0}{n}
\]
Multiple Application of the Trapezoidal Rule

where there are n + 1 equally spaced base points.

We can group terms to express a general form:

\[
I \cong \left( b - a \right) \frac{f(x_0) + 2 \sum_{i=1}^{n-1} f(x_i) + f(x_n)}{2n}
\qquad \text{(width × average height)}
\]

The average height represents a weighted average of the function values. Note that the interior points are given twice the weight of the two end points.

\[
E_a = -\frac{\left( b - a \right)^3}{12\, n^2}\, \bar{f}''
\]
Example

Evaluate the following integral using the multiple-application trapezoidal rule with h = 0.1:

\[
I = \int_1^{1.6} e^{x^2} dx,
\qquad
I \cong \left( b - a \right) \frac{f(x_0) + 2 \sum_{i=1}^{n-1} f(x_i) + f(x_n)}{2n},
\qquad
h = \frac{b - a}{n}
\]

[Figure: f(x) = e^{x²} plotted from x = 0 to 2.5]
Solution

\[
I = \frac{b - a}{2n}\left[ f(1) + 2 f(1.1) + 2 f(1.2) + 2 f(1.3) + 2 f(1.4) + 2 f(1.5) + f(1.6) \right]
\]

x      f(x)
1      2.718
1.1    3.353
1.2    4.221
1.3    5.419
1.4    7.099
1.5    9.488
1.6    12.936

\[
I = \int_1^{1.6} e^{x^2} dx \cong \frac{1.6 - 1}{2\left( 6 \right)}\left( 74.816 \right) = 3.741
\]

.....end of example
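The tabulated sum above can be automated. A minimal sketch of the multiple-application trapezoidal rule (names mine), run on the same integral:

```python
from math import exp

def trapezoid(f, a, b, n):
    """Multiple-application trapezoidal rule with n equal segments:
    I ~ (b - a) * [f(x0) + 2*sum(interior f) + f(xn)] / (2n)."""
    h = (b - a) / n
    s = f(a) + f(b) + 2 * sum(f(a + i * h) for i in range(1, n))
    return (b - a) * s / (2 * n)

I = trapezoid(lambda x: exp(x * x), 1.0, 1.6, 6)   # h = 0.1
```

This reproduces the example's I ≈ 3.741.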
Simpson's 1/3 Rule
• Corresponds to the case where the function is a second order polynomial

\[
I = \int_a^b f(x)\,dx \cong \int_a^b f_2(x)\,dx,
\qquad
f_2(x) = a_0 + a_1 x + a_2 x^2
\]

• Designate a and b as x0 and x2, and estimate f2(x) as a second order Lagrange polynomial

\[
I \cong \int_{x_0}^{x_2} \left[ \frac{\left( x - x_1 \right)\left( x - x_2 \right)}{\left( x_0 - x_1 \right)\left( x_0 - x_2 \right)} f(x_0) + \cdots \right] dx
\]

• After integration and algebraic manipulation, we get the following equations:

\[
I \cong \frac{h}{3}\left[ f(x_0) + 4 f(x_1) + f(x_2) \right], \qquad h = \frac{x_2 - x_0}{2}
\]
\[
I \cong \left( b - a \right) \frac{f(x_0) + 4 f(x_1) + f(x_2)}{6}
\qquad \text{(width × average height)}
\]

Error

Single application of the Trapezoidal Rule:

\[
E_t = -\frac{1}{12} f''(\xi) \left( b - a \right)^3, \qquad a < \xi < b
\]

Single application of Simpson's 1/3 Rule:

\[
E_t = -\frac{1}{2880} f^{(4)}(\xi) \left( b - a \right)^5
\]
Multiple Application of Simpson's 1/3 Rule

\[
I = \int_{x_0}^{x_2} f(x)\,dx + \int_{x_2}^{x_4} f(x)\,dx + \cdots + \int_{x_{n-2}}^{x_n} f(x)\,dx
\]
\[
I \cong \left( b - a \right) \frac{f(x_0) + 4 \sum_{i=1,3,5\ldots}^{n-1} f(x_i) + 2 \sum_{j=2,4,6\ldots}^{n-2} f(x_j) + f(x_n)}{3n}
\]
\[
E_a = -\frac{\left( b - a \right)^5}{180\, n^4}\, \bar{f}^{(4)}
\]

The odd points represent the middle term for each application, hence they carry the weight 4. The even points are common to adjacent applications and are counted twice.
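The weighting pattern above (4 on odd interior points, 2 on even) translates directly into code. A minimal sketch (names mine); the check exploits the error term's dependence on f⁗, which makes the rule exact for cubics:

```python
def simpson13(f, a, b, n):
    """Multiple-application Simpson's 1/3 rule; n must be even.
    Odd interior points weighted 4, even interior points weighted 2."""
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))
    return (b - a) * s / (3 * n)

# Exact for cubics since the error involves the fourth derivative:
I = simpson13(lambda x: x ** 3, 0.0, 1.0, 2)   # exact value is 0.25
```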
Simpson's 3/8 Rule
• Corresponds to the case where the function is a third order polynomial

\[
I = \int_a^b f(x)\,dx \cong \int_a^b f_3(x)\,dx,
\qquad
f_3(x) = a_0 + a_1 x + a_2 x^2 + a_3 x^3
\]
\[
I \cong \frac{3h}{8}\left[ f(x_0) + 3 f(x_1) + 3 f(x_2) + f(x_3) \right],
\qquad
h = \frac{x_3 - x_0}{3}
\]
Integration of Unequal Segments
• Experimental and field study data is often unevenly spaced
• In previous equations we grouped the term (i.e. h) which represented segment width:

\[
I \cong \left( b - a \right) \frac{f(x_0) + 2 \sum_{i=1}^{n-1} f(x_i) + f(x_n)}{2n}
\quad\longrightarrow\quad
I \cong h_1 \frac{f(x_0) + f(x_1)}{2} + h_2 \frac{f(x_1) + f(x_2)}{2} + \cdots + h_n \frac{f(x_{n-1}) + f(x_n)}{2}
\]

• We should also consider alternately using higher order equations if we can find data in consecutively even segments

[Figure: a mixed strategy applying the trapezoidal rule, the 1/3 rule, the 3/8 rule, and the trapezoidal rule over successive groups of segments]
Example

Integrate the following using the trapezoidal rule, Simpson's 1/3 Rule, and a multiple application of the trapezoidal rule with n = 2. Compare results with the analytical solution.

\[
\int_0^4 x e^{2x}\,dx
\]

[Figure: f(x) = x e^{2x} plotted from 0 to 4.5]
Solution

First, calculate the analytical solution for this problem:

\[
\int_0^4 x e^{2x}\,dx = \left[ \frac{e^{2x}}{4}\left( 2x - 1 \right) \right]_0^4 = 5216.927
\]

Consider a single application of the trapezoidal rule, with f(0) = 0 and f(4) = 11923.83:

\[
I \cong \left( b - a \right) \frac{f(a) + f(b)}{2} = \left( 4 - 0 \right) \frac{0 + 11923.83}{2} = 23847.66,
\qquad
\varepsilon_t = \left| \frac{5216.93 - 23847.66}{5216.93} \right| \times 100\% = 357\%
\]

Multiple Application of the Trapezoidal Rule (n = 2):

\[
I \cong \left( b - a \right) \frac{f(x_0) + 2 f(x_1) + f(x_2)}{2n}
= \left( 4 - 0 \right) \frac{0 + 2\left( 109.196 \right) + 11923.83}{2\left( 2 \right)} = 12142.22,
\qquad
\varepsilon_t = \left| \frac{5216.93 - 12142.22}{5216.93} \right| \times 100\% = 133\%
\]

We are obviously not doing very well on our estimates. Let's consider a scheme where we "weight" the estimates.

....end of example

[Figure: the one- and two-segment trapezoids grossly overestimating the area under x e^{2x}]
Integration of Equations
• Integration of analytical as opposed to
tabular functions
• Romberg Integration
– Richardson’s Extrapolation
– Romberg Integration Algorithm
• Improper Integrals
Richardson’s Extrapolation
• Use two estimates of an integral to compute a third more
accurate approximation
• The estimate and error associated with a multiple
application trapezoidal rule can be represented generally
as:
 I = I(h) + E(h)
 where I is the exact value of the integral
 I(h) is the approximation from an n-segment application
 E(h) is the truncation error
 h is the step size (b-a)/n
Make two separate estimates using step sizes of h1 and h2:

I(h1) + E(h1) = I(h2) + E(h2)

Recall the error of the multiple-application trapezoidal rule:

\[
E \cong -\frac{b - a}{12} h^2 \bar{f}''
\]

Assume that \( \bar{f}'' \) is constant regardless of the step size. Then:

\[
\frac{E(h_1)}{E(h_2)} \cong \frac{h_1^2}{h_2^2}
\qquad\Rightarrow\qquad
E(h_1) \cong E(h_2) \left( \frac{h_1}{h_2} \right)^2
\]

Substitute into the previous equation, I(h1) + E(h1) = I(h2) + E(h2), and solve:

\[
E(h_2) \cong \frac{I(h_1) - I(h_2)}{1 - \left( h_1 / h_2 \right)^2}
\]

Thus we have developed an estimate of the truncation error in terms of the integral estimates and their step sizes. This estimate can then be substituted into I = I(h2) + E(h2) to yield an improved estimate of the integral:

\[
I \cong I(h_2) + \frac{1}{\left( h_1 / h_2 \right)^2 - 1} \left[ I(h_2) - I(h_1) \right]
\]

What is the equation for the special case where the interval is halved, i.e. h2 = h1 / 2?

\[
I \cong I(h_2) + \frac{1}{2^2 - 1} \left[ I(h_2) - I(h_1) \right]
\]

Collecting terms:

\[
I \cong \frac{4}{3} I(h_2) - \frac{1}{3} I(h_1)
\]
Example

Use Richardson's extrapolation to evaluate:

\[
\int_0^4 x e^{2x}\,dx
\]

Solution

Recall our previous example, where we calculated the integral using a single application of the trapezoidal rule and then a multiple application of the trapezoidal rule by dividing the interval in half:

Single application of trapezoidal rule: I = 23847.66, εt = 357%
Multiple application of trapezoidal rule (n = 2): I = 12142.22, εt = 133%
Richardson's extrapolation: I ≅ (4/3)(12142.22) - (1/3)(23847.66) = 8240.41, εt = 57.96%

....end of example
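Repeating this halving-and-extrapolating process is exactly Romberg integration, described next. A minimal sketch (names mine; a fixed number of levels rather than a stopping tolerance), run on the same integral:

```python
from math import exp

def romberg(f, a, b, levels):
    """Romberg integration: successively halved trapezoidal estimates
    refined column by column with Richardson extrapolation,
    I_{j,k} = (4^(k-1) I_{j+1,k-1} - I_{j,k-1}) / (4^(k-1) - 1)."""
    # First column: trapezoidal rule with n = 1, 2, 4, ... segments.
    R = []
    n = 1
    for j in range(levels):
        h = (b - a) / n
        s = (f(a) + f(b)) / 2 + sum(f(a + i * h) for i in range(1, n))
        R.append([h * s])
        n *= 2
    # Extrapolate across columns (column 1 is Simpson, then Boole, ...).
    for k in range(1, levels):
        for j in range(levels - k):
            num = 4 ** k * R[j + 1][k - 1] - R[j][k - 1]
            R[j].append(num / (4 ** k - 1))
    return R[0][-1]

I = romberg(lambda x: x * exp(2 * x), 0.0, 4.0, 10)   # analytic: 5216.927
```

The coarsest trapezoid estimate is the 23847.66 from the example; ten levels of extrapolation drive the result to the analytic value 5216.927 with far fewer function evaluations than a comparably accurate plain trapezoidal rule would need.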
We can continue to improve the estimate by successive halving of the step size to yield a general formula:

\[
I \cong \frac{4}{3} I_m - \frac{1}{3} I_l \;\; (k = 2), \qquad
I \cong \frac{16}{15} I_m - \frac{1}{15} I_l \;\; (k = 3), \qquad
I \cong \frac{64}{63} I_m - \frac{1}{63} I_l \;\; (k = 4)
\]

\[
I_{j,k} \cong \frac{4^{k-1}\, I_{j+1,\,k-1} - I_{j,\,k-1}}{4^{k-1} - 1}
\]

ROMBERG INTEGRATION

Note: the subscripts m and l refer to the more and less accurate estimates.

Following a similar pattern to Newton divided differences, Romberg's table can be produced (error orders shown for each column):

j        O(h²)    O(h⁴)    O(h⁶)    O(h⁸)
1  h     I1,1     I1,2     I1,3     I1,4
2  h/2   I2,1     I2,2     I2,3
3  h/4   I3,1     I3,2
4  h/8   I4,1

         Trapezoidal    Simpson's 1/3    (third column)    Boole's
         Rule           Rule                               Rule
Error orders for j values
i = 1 i = 2 i = 3 i = 4
j O(h
2
) O(h
4
) O(h
6
) O(h
8
)

1 h I
1,1
I
1,2
I
1,3
I
1,4

2 h/2 I
2,1
I
2,2
I
2,3

3 h/4 I
3,1
I
2,3

4 h/8 I
4,1

Trapezoidal Simpson’s 1/3 Boole’s

Rule Rule Rule

Following a similar pattern to Newton divided differences,
Romberg’s Table can be produced
( ) ( )

h I
3
1
h I
3
4
I
m
÷ ~
Error orders for j values
i = 1 i = 2 i = 3 i = 4
j O(h
2
) O(h
4
) O(h
6
) O(h
8
)

1 h I
1,1
I
1,2
I
1,3
I
1,4

2 h/2 I
2,1
I
2,2
I
2,3

3 h/4 I
3,1
I
3,2

4 h/8 I
4,1

Trapezoidal Simpson’s 1/3 Boole’s

Rule Rule Rule

Following a similar pattern to Newton divided differences,
Romberg’s Table can be produced
( ) ( )

h I
15
1
h I
15
16
I
m
÷ ~
Error orders for j values
i = 1 i = 2 i = 3 i = 4
j O(h
2
) O(h
4
) O(h
6
) O(h
8
)

1 h I
1,1
I
1,2
I
1,3
I
1,4

2 h/2 I
2,1
I
2,2
I
2,3

3 h/4 I
3,1
I
3,2

4 h/8 I
4,1

Trapezoidal Simpson’s 1/3 Boole’s

Rule Rule Rule

Following a similar pattern to Newton divided differences,
Romberg’s Table can be produced
( ) ( )

h I
3
1
h I
3
4
I
m
÷ ~
Error orders for j values
i = 1 i = 2 i = 3 i = 4
j O(h
2
) O(h
4
) O(h
6
) O(h
8
)

1 h I
1,1
I
1,2
I
1,3
I
1,4

2 h/2 I
2,1
I
2,2
I
2,3

3 h/4 I
3,1
I
3,2

4 h/8 I
4,1

Trapezoidal Simpson’s 1/3 Boole’s

Rule Rule Rule

Following a similar pattern to Newton divided differences,
Romberg’s Table can be produced
( ) ( )

h I
15
1
h I
15
16
I
m
÷ ~
Error orders for j values
i = 1 i = 2 i = 3 i = 4
j O(h
2
) O(h
4
) O(h
6
) O(h
8
)

1 h I
1,1
I
1,2
I
1,3
I
1,4

2 h/2 I
2,1
I
2,2
I
2,3

3 h/4 I
3,1
I
3,2

4 h/8 I
4,1

Trapezoidal Simpson’s 1/3 Boole’s

Rule Rule Rule

Following a similar pattern to Newton divided differences,
Romberg’s Table can be produced
( ) ( )

h I
63
1
h I
63
64
I
m
÷ ~
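The table-building recurrence can be sketched directly in code; this is a minimal implementation of Romberg integration (the k index below is zero-based, so 4**k plays the role of 4^(k−1) in the slide's formula), applied to the running example ∫₀⁴ x e^(2x) dx.

```python
import math

def romberg(f, a, b, kmax=4):
    """Romberg's table: column 0 holds composite trapezoid estimates with
    h, h/2, h/4, ...; later columns apply
    I[j][k] = (4**k * I[j+1][k-1] - I[j][k-1]) / (4**k - 1)."""
    I = [[0.0] * kmax for _ in range(kmax)]
    for j in range(kmax):
        n = 2 ** j
        h = (b - a) / n
        I[j][0] = h * (f(a) + f(b) + 2 * sum(f(a + i * h) for i in range(1, n))) / 2
    for k in range(1, kmax):
        for j in range(kmax - k):
            c = 4 ** k
            I[j][k] = (c * I[j + 1][k - 1] - I[j][k - 1]) / (c - 1)
    return I

table = romberg(lambda x: x * math.exp(2 * x), 0.0, 4.0)
print(table[0][3])   # top-right entry: the O(h^8) estimate
```

With only eight trapezoid segments at the finest level, the O(h^8) entry is already within a fraction of a percent of the true value 5216.93.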
[Figures: f(x) approximated by a straight line; extend the area under the straight line]
Method of Undetermined
Coefficients
Recall the trapezoidal rule
I ≅ (b − a) · [f(a) + f(b)] / 2

This can also be expressed as

I ≅ c0 f(a) + c1 f(b)
where the c's are constants
Before analyzing this method, consider a question.
What are two
functions that
should be evaluated
exactly
by the trapezoidal
rule?
The two cases that should be evaluated exactly
by the trapezoidal rule: 1) y = constant
2) a straight line
[Figure: y = 1 on the interval from −(b−a)/2 to (b−a)/2]
[Figure: y = x on the interval from −(b−a)/2 to (b−a)/2]

Thus, the following equalities should hold for I ≅ c0 f(a) + c1 f(b).

For y = 1 (since f(a) = f(b) = 1):

c0 + c1 = ∫ from −(b−a)/2 to (b−a)/2 of 1 dx

For y = x (since f(a) = x = −(b−a)/2 and f(b) = x = (b−a)/2):

−c0 (b−a)/2 + c1 (b−a)/2 = ∫ from −(b−a)/2 to (b−a)/2 of x dx

Evaluating both integrals:

For y = 1:   c0 + c1 = b − a
For y = x:   −c0 (b−a)/2 + c1 (b−a)/2 = 0
Now we have two equations and two unknowns, c
0
and c
1
.

Solving simultaneously, we get :

c
0
= c
1
= (b-a)/2

Substitute this back into:

I ≅ c0 f(a) + c1 f(b) = (b − a) · [f(a) + f(b)] / 2
We get the equivalent of the trapezoidal rule.
DERIVATION OF THE TWO-POINT
GAUSS-LEGENDRE FORMULA

I ≅ c0 f(x0) + c1 f(x1)

Let's raise the level of sophistication by:
- considering two points between -1 and 1
- i.e. "open integration"

[Figure: f(x) with interior points −1 < x0 < x1 < 1]
Previously, we assumed that the equation fit the integrals of a constant and a linear function.

Extend the reasoning by assuming that it also fits the integrals of a parabolic and a cubic function.
c0 f(x0) + c1 f(x1) = ∫ from −1 to 1 of 1 dx = 2
c0 f(x0) + c1 f(x1) = ∫ from −1 to 1 of x dx = 0
c0 f(x0) + c1 f(x1) = ∫ from −1 to 1 of x^2 dx = 2/3
c0 f(x0) + c1 f(x1) = ∫ from −1 to 1 of x^3 dx = 0

We now have four equations and four unknowns: c0, c1, x0 and x1.

What equations are you solving?
c0 (1) + c1 (1) = 2
c0 x0 + c1 x1 = 0
c0 x0^2 + c1 x1^2 = 2/3
c0 x0^3 + c1 x1^3 = 0
Solve these equations simultaneously
f(x_i) is either 1, x_i, x_i^2, or x_i^3.
This results in the following:

c0 = c1 = 1
x0 = −1/√3    x1 = 1/√3

I ≅ f(−1/√3) + f(1/√3)
The interesting result is that the integral can be estimated by the simple addition of the function values at −1/√3 and 1/√3.
However, we have set the limit of integration at -1 and 1.

This was done to simplify the mathematics. A simple
change in variables can be used to translate other limits.

Assume that the new variable x_d is related to the original variable x in a linear fashion:

x = a0 + a1 x_d

Let the lower limit x = a correspond to x_d = −1 and the upper limit x = b correspond to x_d = 1:

a = a0 + a1(−1)        b = a0 + a1(1)

SOLVE THESE EQUATIONS SIMULTANEOUSLY:

a0 = (b + a)/2        a1 = (b − a)/2
x = a0 + a1 x_d = (b + a)/2 + [(b − a)/2] x_d

substitute:

dx = [(b − a)/2] dx_d
These equations are substituted for x and dx respectively.

Let’s do an example to appreciate the theory
behind this numerical method.
Example
Estimate the following using two-point Gauss Quadrature:
I = ∫ from 0 to 4 of x e^(2x) dx

[Figure: plot of f(x) = x e^(2x) for 0 ≤ x ≤ 4]
With a = 0 and b = 4, the change of variables gives x = 2 + 2x_d and dx = 2 dx_d, so

I = ∫ from −1 to 1 of (2 + 2x_d) e^(2(2 + 2x_d)) · 2 dx_d

Now evaluate the integral:

I ≅ f(−1/√3) + f(1/√3)
  = [2 + 2(−1/√3)] e^(2[2 + 2(−1/√3)]) · 2 + [2 + 2(1/√3)] e^(2[2 + 2(1/√3)]) · 2
  = 9.17 + 3468.38 = 3477.55
....end of problem
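The two-point evaluation above can be verified numerically; this sketch applies the transformed integrand at the Gauss points ±1/√3 with unit weights.

```python
import math

def g(xd):
    """Transformed integrand for I = ∫0..4 x e^(2x) dx,
    using x = 2 + 2*xd and dx = 2 dxd."""
    x = 2 + 2 * xd
    return x * math.exp(2 * x) * 2

x0, x1 = -1 / math.sqrt(3), 1 / math.sqrt(3)   # two-point Gauss-Legendre points
I = g(x0) + g(x1)                               # c0 = c1 = 1
print(I)   # -> 3477.55 (vs true value 5216.93, about 33% error)
```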
SINGLE APPLICATION OF TRAPEZOIDAL RULE — 357%
MULTIPLE APPLICATION OF TRAPEZOIDAL RULE (n=2) — 133%
RICHARDSON'S EXTRAPOLATION — 58%
2-POINT GAUSS QUADRATURE — 33%
Higher-Point Gauss Formulas
I ≅ c0 f(x0) + c1 f(x1) + ··· + c_{n−1} f(x_{n−1})

For two point, we determined that c0 = c1 = 1

For three point:

c0 = 0.556 (5/9)    x0 = −0.775 = −(3/5)^(1/2)
c1 = 0.889 (8/9)    x1 = 0.0
c2 = 0.556 (5/9)    x2 = 0.775 = (3/5)^(1/2)

Higher-Point Gauss Formulas
For four point:

c0 = [18 − (30)^(1/2)]/36    x0 = −[525 + 70(30)^(1/2)]^(1/2)/35
c1 = [18 + (30)^(1/2)]/36    x1 = −[525 − 70(30)^(1/2)]^(1/2)/35
c2 = [18 + (30)^(1/2)]/36    x2 = +[525 − 70(30)^(1/2)]^(1/2)/35
c3 = [18 − (30)^(1/2)]/36    x3 = +[525 + 70(30)^(1/2)]^(1/2)/35

Weighting factors (c_i's) and function arguments (x_i's) are tabulated in Table 22.1, p. 626.
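A generic n-point routine makes the pattern concrete; this sketch hard-codes the two- and three-point weights from the slides and applies the linear change of variables to the running example.

```python
import math

def gauss_legendre(f, a, b, pts):
    """Gauss-Legendre quadrature of ∫a..b f(x) dx via the change of
    variables x = (b+a)/2 + (b-a)/2 * xd, dx = (b-a)/2 dxd.
    'pts' is a list of (node, weight) pairs on [-1, 1]."""
    half = (b - a) / 2
    mid = (b + a) / 2
    return half * sum(c * f(mid + half * xd) for xd, c in pts)

two_pt = [(-1 / math.sqrt(3), 1.0), (1 / math.sqrt(3), 1.0)]
three_pt = [(-math.sqrt(3 / 5), 5 / 9), (0.0, 8 / 9), (math.sqrt(3 / 5), 5 / 9)]

f = lambda x: x * math.exp(2 * x)
true_I = (7 * math.exp(8) + 1) / 4
for pts in (two_pt, three_pt):
    est = gauss_legendre(f, 0, 4, pts)
    print(len(pts), "points:", est, abs(true_I - est) / true_I * 100, "%")
```

The two-point result matches the 3477.55 worked above; adding one point drops the error from about 33% to under 5%.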
Numerical Differentiation
• Forward finite divided difference
• Backward finite divided difference
• Center finite divided difference
• All based on the Taylor Series
f(x_{i+1}) = f(x_i) + f'(x_i) h + [f''(x_i)/2!] h^2 + ···
Forward Finite Difference
f(x_{i+1}) = f(x_i) + f'(x_i) h + [f''(x_i)/2!] h^2 + [f'''(x_i)/3!] h^3 + ···

f'(x_i) = [f(x_{i+1}) − f(x_i)] / h + O(h)

f'(x_i) = [f(x_{i+1}) − f(x_i)] / h − [f''(x_i)/2] h + O(h^2)
Forward Divided Difference
[Figure: secant line through (x_i, y_i) and (x_{i+1}, y_{i+1})]

f'(x_i) = [f(x_{i+1}) − f(x_i)] / (x_{i+1} − x_i) + O(x_{i+1} − x_i) = Δf_i / h + O(h)
first forward divided difference
Error is proportional to
the step size
O(h^2): error is proportional to the square of the step size

O(h^3): error is proportional to the cube of the step size
f'(x_i) = Δf_i / h + O(h)
[Figure: secant line through (x_{i−1}, y_{i−1}) and (x_i, y_i)]

f(x_{i+1}) = f(x_i) + f'(x_i) h + [f''(x_i)/2!] h^2 + ···
f(x_{i−1}) = f(x_i) − f'(x_i) h + [f''(x_i)/2!] h^2 − ···

f'(x_i) = [f(x_i) − f(x_{i−1})] / h = ∇f_i / h
Backward Difference Approximation of the
First Derivative

Expand the Taylor series backwards
The error is still O(h)
Centered Difference Approximation of the
First Derivative and Second Derivative

Subtract and add backward Taylor expansion
from and to the forward Taylor series expansion
Forward:  f(x_{i+1}) = f(x_i) + f'(x_i) h + [f''(x_i)/2] h^2 + [f'''(x_i)/6] h^3 + ···
Backward: f(x_{i−1}) = f(x_i) − f'(x_i) h + [f''(x_i)/2] h^2 − [f'''(x_i)/6] h^3 + ···

Subtracting:

f'(x_i) = [f(x_{i+1}) − f(x_{i−1})] / (2h) + O(h^2)

Adding:

f''(x_i) = [f(x_{i+1}) − 2 f(x_i) + f(x_{i−1})] / h^2 + O(h^2)
[Figure: the centered difference uses the three points (x_{i−1}, y_{i−1}), (x_i, y_i), (x_{i+1}, y_{i+1})]

f'(x_i) = [f(x_{i+1}) − f(x_{i−1})] / (2h) + O(h^2)
Numerical Differentiation
• Forward finite divided differences Fig. 23.1
• Backward finite divided differences Fig. 23.2
• Centered finite divided differences Fig. 23.3
• First - Fourth derivative
• Error: O(h), O(h
2
), O(h
4
) for centered only
Derivatives with Richardson
Extrapolation
• Two ways to improve derivative estimates
¤ decrease step size
¤ use a higher order formula that employs more
points
• Third approach, based on Richardson
extrapolation, uses two derivatives
estimates to compute a third, more accurate
approximation
Richardson Extrapolation
I ≅ I(h2) + [1 / ((h1/h2)^2 − 1)] · [I(h2) − I(h1)]

Special case where h2 = h1/2:

I ≅ (4/3) I(h2) − (1/3) I(h1)

In a similar fashion, for derivative estimates:

D ≅ (4/3) D(h2) − (1/3) D(h1)

For a centered difference approximation with O(h^2), the application of this formula will yield a new derivative estimate of O(h^4).
Example
Given the function:

f(x) = −0.1x^4 − 0.15x^3 − 0.5x^2 − 0.25x + 1.2

Use centered finite divided difference to estimate
the derivative at 0.5.

f(0) = 1.2
f(0.25) =1.1035
f(0.75) = 0.636
f(1) = 0.2

Example Problem
[Figure: plot of f(x) for 0 ≤ x ≤ 1]
Using centered finite divided difference for h = 0.5

f(0) = 1.2
f(1) = 0.2

D = (0.2 − 1.2)/1 = −1.0    εt = 9.6%

Using centered finite divided difference for h = 0.25:

f(0.25) = 1.1035
f(0.75) = 0.636

D = (0.636 − 1.1035)/0.5 = −0.934    εt = 2.4%
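The two centered estimates can be combined with Richardson extrapolation from the previous slides; this sketch reproduces both estimates and shows that the extrapolated value recovers the true derivative f'(0.5) = −0.9125 exactly (the fifth derivative of this quartic is zero, so the remaining error term vanishes).

```python
def fprime_centered(f, x, h):
    """Centered finite divided difference, O(h^2)."""
    return (f(x + h) - f(x - h)) / (2 * h)

f = lambda x: -0.1 * x**4 - 0.15 * x**3 - 0.5 * x**2 - 0.25 * x + 1.2
true_d = -0.4 * 0.5**3 - 0.45 * 0.5**2 - 1.0 * 0.5 - 0.25   # f'(0.5) = -0.9125

D_h1 = fprime_centered(f, 0.5, 0.5)     # -> -1.0      (et ≈ 9.6%)
D_h2 = fprime_centered(f, 0.5, 0.25)    # -> -0.934375 (et ≈ 2.4%)
D_rich = 4 / 3 * D_h2 - 1 / 3 * D_h1    # Richardson: O(h^4)
print(D_h1, D_h2, D_rich)
```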

Derivatives of Unequally Spaced
Data
• Common in data from experiments or field studies
• Fit a second order Lagrange interpolating polynomial to
each set of three adjacent points, since this polynomial
does not require that the points be uniformly spaced
• Differentiate analytically
f'(x) = f(x_{i−1}) (2x − x_i − x_{i+1}) / [(x_{i−1} − x_i)(x_{i−1} − x_{i+1})]
      + f(x_i) (2x − x_{i−1} − x_{i+1}) / [(x_i − x_{i−1})(x_i − x_{i+1})]
      + f(x_{i+1}) (2x − x_{i−1} − x_i) / [(x_{i+1} − x_{i−1})(x_{i+1} − x_i)]
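The differentiated Lagrange polynomial translates directly to code; this sketch implements the formula and checks it against f(x) = x^2 (an illustrative test function, not from the slides), for which the second-order polynomial is exact even with uneven spacing.

```python
def fprime_lagrange(x, xm, xi, xp, fm, fi, fp):
    """Derivative at x from the 2nd-order Lagrange polynomial through
    (xm, fm), (xi, fi), (xp, fp); the points need not be evenly spaced."""
    return (fm * (2 * x - xi - xp) / ((xm - xi) * (xm - xp))
          + fi * (2 * x - xm - xp) / ((xi - xm) * (xi - xp))
          + fp * (2 * x - xm - xi) / ((xp - xm) * (xp - xi)))

# unevenly spaced samples of f(x) = x^2, whose derivative is 2x
f = lambda x: x * x
d = fprime_lagrange(1.1, 0.5, 1.0, 1.7, f(0.5), f(1.0), f(1.7))
print(d)   # -> 2.2, exact for a quadratic
```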
Derivative and Integral Estimates
for Data with Errors
• In addition to unequal spacing, the other problem related to
differentiating empirical data is measurement error
• Differentiation amplifies error
• Integration tends to be more forgiving
• Primary approach for determining derivatives of imprecise
data is to use least squares regression to fit a smooth,
differentiable function to the data
• In absence of other information, a lower order polynomial
regression is a good first choice
[Figures: y(t) data with and without random measurement error, and the corresponding dy/dt estimates — small errors in the data are greatly amplified by numerical differentiation]
Ordinary Differential Equations
• A differential equation defines a
relationship between an unknown function
and one or more of its derivatives
• Physical problems using differential
equations
– electrical circuits
– heat transfer
– motion
Ordinary Differential Equations
• The derivatives are of the dependent
variable with respect to the independent
variable
• First order differential equation with y as
the dependent variable and x as the
independent variable would be:
 dy/dx = f(x,y)
Ordinary Differential Equations
• A second order differential equation would
have the form:
d^2y/dx^2 = f(x, y, dy/dx)

(f does not necessarily have to include all of these variables)
Ordinary Differential Equations
• An ordinary differential equation is one
with a single independent variable.
• Thus, the previous two equations are
ordinary differential equations
• The following is not:
dy/dx1 = f(x1, x2, y)
Ordinary Differential Equations
• The analytical solution of ordinary
differential equation as well as partial
differential equations is called the “closed
form solution”
• This solution requires that the constants of
integration be evaluated using prescribed
values of the independent variable(s).
Ordinary Differential Equations
• An ordinary differential equation of order n
requires that n conditions be specified.
• Boundary conditions
• Initial conditions
consider this beam where the
deflection is zero at the boundaries
x= 0 and x = L
These are boundary conditions
consider this beam where the
deflection is zero at the boundaries
x= 0 and x = L
These are boundary conditions
a
y
o
P
In some cases, the specific behavior of a system(s)
is known at a particular time. Consider how the deflection of a
beam at x = a is shown at time t =0 to be equal to y
o
.
Being interested in the response for t > 0, this is called the
initial condition.
Ordinary Differential Equations
• At best, only a few differential equations
can be solved analytically in a closed form.
• Solutions of most practical engineering
problems involving differential equations
require the use of numerical methods.
Review of Analytical Solution
dy/dx = 4x^2

dy = 4x^2 dx

∫ dy = ∫ 4x^2 dx

y = (4/3) x^3 + C
At this point let's consider initial conditions:

y(0) = 1 and y(0) = 2
y = (4/3) x^3 + C

for y(0) = 1:   1 = (4/3)(0)^3 + C, then C = 1
for y(0) = 2:   2 = (4/3)(0)^3 + C, and C = 2
What we see are different
values of C for the two
different initial conditions.

The resulting equations
are:

y = (4/3) x^3 + 1

y = (4/3) x^3 + 2
[Figure: the family of curves y = (4/3)x^3 + C for initial conditions y(0) = 1, 2, 3, 4]
One Step Methods
• Focus is on solving ODE in the form
[Figure: slope φ carries y_i to y_{i+1} over step size h]

This is the same as saying:
new value = old value + slope × step size

dy/dx = f(x, y)

y_{i+1} = y_i + φ h
Euler’s Method
• The first derivative provides a direct
estimate of the slope at x
i
• The equation is applied iteratively, or one
step at a time, over small distance in order
to reduce the error
• Hence this is often referred to as Euler’s
One-Step Method
Example
dy/dx = 4x^2
I.C. y = 1 at x = 1
step size = 0.1

Analytical solution:

∫ dy = ∫ 4x^2 dx
y(1.1) = 1 + (4/3) x^3 evaluated from 1 to 1.1 = 1 + 0.4413
y = 1.4413

Euler's method:

y_{i+1} = y_i + φ h
y(1.1) = y(1) + [4(1)^2](0.1) = 1.4
          I.C.    slope   step size
Recall the analytical solution was 1.4413. If we instead reduce the step size to 0.05 and apply Euler's method twice:
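The one- and two-step cases can be sketched in a few lines; halving the step gives y(1.1) ≈ 1.4205, closer to the analytical 1.4413.

```python
def euler(f, x0, y0, x_end, h):
    """Euler's one-step method: y_{i+1} = y_i + f(x_i, y_i) * h."""
    x, y = x0, y0
    while x < x_end - 1e-12:
        y += f(x, y) * h
        x += h
    return y

f = lambda x, y: 4 * x**2
y_one = euler(f, 1.0, 1.0, 1.1, 0.1)    # one step  -> 1.4
y_two = euler(f, 1.0, 1.0, 1.1, 0.05)   # two steps -> 1.4205
print(y_one, y_two)
# analytical: y(1.1) = 1 + 4/3 * (1.1**3 - 1) = 1.4413...
```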
Error Analysis of Euler’s Method
• Truncation error - caused by the nature of
the techniques employed to approximate
values of y
– local truncation error (from Taylor Series)
– propagated truncation error
– sum of the two = global truncation error
• Round off error - caused by the limited
number of significant digits that can be
retained by a computer or calculator
Example
[Figure: analytical solution vs. the numerical (Euler) solution, which drifts away from the true curve as x increases]
....end of example
Higher Order Taylor Series
Methods
• This is simple enough to implement with
polynomials
• Not so trivial with more complicated ODE
• In particular, ODE that are functions of both
dependent and independent variables require
chain-rule differentiation
• Alternative one-step methods are needed
y_{i+1} = y_i + f(x_i, y_i) h + [f'(x_i, y_i)/2] h^2
+
Modification of Euler’s Methods
• A fundamental error in Euler’s method is
that the derivative at the beginning of the
interval is assumed to apply across the
entire interval
• Two simple modifications will be
demonstrated
• These modification actually belong to a
larger class of solution techniques called
Runge-Kutta which we will explore later.
Heun’s Method
• Consider our Taylor expansion
y_{i+1} = y_i + f(x_i, y_i) h + [f'(x_i, y_i)/2] h^2
• Approximate f’ as a simple forward difference
f'(x_i, y_i) ≅ [f(x_{i+1}, y_{i+1}) − f(x_i, y_i)] / h
• Substituting into the expansion
y_{i+1} = y_i + f_i h + [(f_{i+1} − f_i)/h] (h^2/2) = y_i + [(f_i + f_{i+1})/2] h
Heun’s Method Algorithm
• Determine the derivatives for the interval @
– the initial point
– end point (based on Euler step from initial point)
• Use the average to obtain an improved
estimate of the slope for the entire interval
• We can think of the Euler step as a “test” step
[Figures: slope at x_i; slope at x_{i+1} from an Euler "test" step; the average of these two slopes applied over the interval]

y_{i+1} = y_i + {[f(x_i, y_i) + f(x_{i+1}, y_{i+1})]/2} h
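One Heun step on the earlier example dy/dx = 4x^2, y(1) = 1, can be sketched as:

```python
def heun_step(f, x, y, h):
    """Heun's method: average the slope at x (k1) with the slope at the
    end of an Euler 'test' step (k2)."""
    k1 = f(x, y)
    k2 = f(x + h, y + k1 * h)
    return y + (k1 + k2) / 2 * h

f = lambda x, y: 4 * x**2
y = heun_step(f, 1.0, 1.0, 0.1)
print(y)   # -> 1.442, vs Euler's 1.4 and the analytical 1.4413
```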
Improved Polygon Method
• Another modification of Euler’s Method
(sometimes called the Midpoint Method)
• Uses Euler’s to predict a value of y at the
midpoint of the interval

• This predicted value is used to estimate the
slope at the midpoint
y_{i+1/2} = y_i + f(x_i, y_i) h/2

y'_{i+1/2} = f(x_{i+1/2}, y_{i+1/2})
Improved Polygon Method
• We then assume that this slope represents a valid
approximation of the average slope for the entire
interval

y_{i+1} = y_i + f(x_{i+1/2}, y_{i+1/2}) h
• Use this slope to extrapolate linearly from x
i
to
x
i+1
using Euler’s algorithm

• We could also get this algorithm from
substituting a forward difference in f to i+1/2 into
the Taylor expansion for f’, i.e.

y_{i+1} = y_i + f_i h + [(f_{i+1/2} − f_i)/(h/2)] (h^2/2) = y_i + f_{i+1/2} h

[Figures: Euler half-step to predict f(x_{i+1/2}); slope evaluated at the midpoint; linear extrapolation from x_i to x_{i+1} to get f(x_{i+1})]
Runge-Kutta Methods
• RK methods achieve the accuracy of a Taylor
series approach without requiring the calculation
of a higher derivative
• Many variations exist but all can be cast in the
generalized form:
y_{i+1} = y_i + φ(x_i, y_i, h) h

φ is called the incremental function
φ, the Incremental Function, can be interpreted as a representative slope over the interval:

φ = a1 k1 + a2 k2 + ··· + an kn

where the a's are constants and the k's are:

k1 = f(x_i, y_i)
k2 = f(x_i + p1 h, y_i + q11 k1 h)
k3 = f(x_i + p2 h, y_i + q21 k1 h + q22 k2 h)
⋮
kn = f(x_i + p_{n−1} h, y_i + q_{n−1,1} k1 h + q_{n−1,2} k2 h + ··· + q_{n−1,n−1} k_{n−1} h)

NOTE: the k's are recurrence relationships; that is, k1 appears in the equation for k2, and each appears in the equation for k3, and so on. This recurrence makes RK methods efficient for computer calculations.

Second Order RK Methods
y_{i+1} = y_i + (a1 k1 + a2 k2) h

where

k1 = f(x_i, y_i)
k2 = f(x_i + p1 h, y_i + q11 k1 h)

Second Order RK Methods
• We have to determine values for the constants a1, a2, p1 and q11
• To do this consider the Taylor series in terms of y_{i+1} and f(x_i, y_i)
y_{i+1} = y_i + (a1 k1 + a2 k2) h

y_{i+1} = y_i + f(x_i, y_i) h + [f'(x_i, y_i)/2] h^2
f'(x_i, y_i) = ∂f/∂x + (∂f/∂y)(dy/dx)

substitute into the expansion:

y_{i+1} = y_i + f(x_i, y_i) h + [∂f/∂x + (∂f/∂y)(dy/dx)] (h^2/2)
Now, f'(x_i, y_i) must be determined by the chain rule for differentiation.

The basic strategy underlying Runge-Kutta methods is to use algebraic manipulations to solve for values of a1, a2, p1 and q11.
y_{i+1} = y_i + (a1 k1 + a2 k2) h

y_{i+1} = y_i + f(x_i, y_i) h + [∂f/∂x(x_i, y_i) + ∂f/∂y(x_i, y_i) · dy/dx(x_i, y_i)] (h^2/2)
By setting these two equations equal to each other and
recalling:
k1 = f(x_i, y_i)
k2 = f(x_i + p1 h, y_i + q11 k1 h)
we derive three equations to evaluate the four unknown
constants
a1 + a2 = 1
a2 p1 = 1/2
a2 q11 = 1/2
Because we have three equations with four unknowns, we must assume a value of one of the unknowns.

Suppose we specify a value for a2.

What would the equations be?
a1 = 1 − a2
p1 = q11 = 1/(2 a2)
÷ =
Because we can choose an infinite number of values
for a
2
there are an infinite number of second order
RK methods.

Every solution would yield exactly the same result
if the solution to the ODE were quadratic, linear or a
constant.

Lets review three of the most commonly used and
preferred versions.
y_{i+1} = y_i + (a1 k1 + a2 k2) h

where

k1 = f(x_i, y_i)
k2 = f(x_i + p1 h, y_i + q11 k1 h)

and

a1 + a2 = 1    a2 p1 = 1/2    a2 q11 = 1/2
Consider the following:

Case 1: a2 = 1/2
Case 2: a2 = 1

These two methods have been previously studied. What are they?
a1 = 1 − 1/2 = 1/2    p1 = q11 = 1/(2 a2) = 1

y_{i+1} = y_i + (1/2 k1 + 1/2 k2) h

where

k1 = f(x_i, y_i)
k2 = f(x_i + h, y_i + k1 h)
Case 1: a2 = 1/2

This is Heun's Method with a single corrector.

Note that k1 is the slope at the beginning of the interval and k2 is the slope at the end of the interval.
a1 = 1 − 1 = 0    p1 = q11 = 1/(2 a2) = 1/2

y_{i+1} = y_i + k2 h

where

k1 = f(x_i, y_i)
k2 = f(x_i + h/2, y_i + k1 h/2)
Case 2: a2 = 1

This is the Improved Polygon Method.
Ralston’s Method

Ralston (1962) and Ralston and Rabinowitiz (1978)
determined that choosing a
2
= 2/3 provides a minimum
bound on the truncation error for the second order RK
algorithms.

This results in a1 = 1/3 and p1 = q11 = 3/4.

y_{i+1} = y_i + (1/3 k1 + 2/3 k2) h

where

k1 = f(x_i, y_i)
k2 = f(x_i + 3/4 h, y_i + 3/4 k1 h)
Example
dy/dx = 4x^2
I.C.: y = 1 at x = 1, i.e. y(1) = 1
step size h = 0.1
As a class problem, let's consider two steps.

Some of you folks do the analytical solution; others do either:

• Ralston's
• Heun's
• Improved Polygon
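The three variants differ only in the constants, so one generic step covers them all; this sketch assumes the class problem's ODE is dy/dx = 4x^2 with y(1) = 1 (as in the earlier Euler example) and takes a single step of h = 0.1.

```python
def rk2_step(f, x, y, h, a2):
    """Generic 2nd-order RK step: a1 = 1 - a2, p1 = q11 = 1/(2*a2).
    a2 = 1/2 -> Heun, a2 = 1 -> improved polygon, a2 = 2/3 -> Ralston."""
    a1 = 1 - a2
    p = 1 / (2 * a2)
    k1 = f(x, y)
    k2 = f(x + p * h, y + p * k1 * h)
    return y + (a1 * k1 + a2 * k2) * h

f = lambda x, y: 4 * x**2          # assumed class-problem ODE, y(1) = 1
exact = 1 + 4 / 3 * (1.1**3 - 1)   # analytical y(1.1) = 1.4413...
estimates = {}
for name, a2 in [("Heun", 0.5), ("improved polygon", 1.0), ("Ralston", 2 / 3)]:
    estimates[name] = rk2_step(f, 1.0, 1.0, 0.1, a2)
    print(f"{name}: {estimates[name]:.6f} (exact {exact:.6f})")
```

All three land within about 0.001 of the analytical value after one step, versus 0.04 for plain Euler.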
Third Order Runge-Kutta Methods
• Derivation is similar to the one for the second-order
• Results in six equations and eight unknowns.
• One common version results in the following
y_{i+1} = y_i + (1/6)(k1 + 4 k2 + k3) h

where

k1 = f(x_i, y_i)
k2 = f(x_i + 1/2 h, y_i + 1/2 k1 h)
k3 = f(x_i + h, y_i − k1 h + 2 k2 h)
Note the third term
NOTE: if the derivative is a function of x only, this reduces to Simpson’s 1/3 Rule
Fourth Order Runge Kutta
• The most popular
• The following is sometimes called the classical
fourth-order RK method
y_{i+1} = y_i + (1/6)(k1 + 2 k2 + 2 k3 + k4) h

where

k1 = f(x_i, y_i)
k2 = f(x_i + 1/2 h, y_i + 1/2 k1 h)
k3 = f(x_i + 1/2 h, y_i + 1/2 k2 h)
k4 = f(x_i + h, y_i + k3 h)
• Note that for ODEs that are a function of x alone, this is also the equivalent of Simpson's 1/3 Rule
Example
Use 4th Order RK to solve the following differential equation:
dy/dx = xy / (1 + x^2)    I.C. y(1) = 1
using an interval of h = 0.1
Solution
We will determine different estimates of the slope, i.e. k1, k2, k3 and k4.
y_{i+1} = 1 + (1/6)[0.5 + 2(0.51189) + 2(0.51219) + 0.52323](0.1) = 1.05119
.....end of problem
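The slide's hand computation can be checked with a classical RK4 step; the four slope estimates 0.5, 0.51189, 0.51219, 0.52323 and the final 1.05119 all fall out of the code below.

```python
def rk4_step(f, x, y, h):
    """Classical 4th-order Runge-Kutta step."""
    k1 = f(x, y)
    k2 = f(x + h / 2, y + k1 * h / 2)
    k3 = f(x + h / 2, y + k2 * h / 2)
    k4 = f(x + h, y + k3 * h)
    return y + (k1 + 2 * k2 + 2 * k3 + k4) / 6 * h

f = lambda x, y: x * y / (1 + x**2)
y = rk4_step(f, 1.0, 1.0, 0.1)
print(round(y, 5))   # -> 1.05119
```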
Higher Order RK Methods
• When more accurate results are required, Butcher's (1964) fifth-order RK method is recommended
• There is a similarity to Boole’s Rule
• The gain in accuracy is offset by added
computational effort and complexity
Systems of Equations
• Many practical problems in engineering and
science require the solution of a system of
simultaneous differential equations
dy1/dx = f1(x, y1, y2, …, yn)
dy2/dx = f2(x, y1, y2, …, yn)
⋮
dyn/dx = fn(x, y1, y2, …, yn)
• Solution requires n initial conditions
• All the methods for single equations can be used
• The procedure involves applying the one-step
technique for every equation at each step before
proceeding to the next step
• Note that higher order ODEs can be reformulated
as simultaneous first order ODEs
d^2y/dx^2 = g(x, y, dy/dx)

can be transformed by defining y1 = y and y2 = dy/dx, so

dy1/dx = y2
dy2/dx = g(x, y1, y2)
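The "apply the one-step technique to every equation at each step" procedure can be sketched for a vector RK4; the test ODE y'' = −y (i.e. y1' = y2, y2' = −y1, whose solution is cos x for y(0) = 1, y'(0) = 0) is an illustrative choice, not from the slides.

```python
def rk4_system(f, x, y, h):
    """One classical RK4 step for a system y' = f(x, y), with y a list."""
    def add(u, v, s):  # componentwise u + s*v
        return [a + s * b for a, b in zip(u, v)]
    k1 = f(x, y)
    k2 = f(x + h / 2, add(y, k1, h / 2))
    k3 = f(x + h / 2, add(y, k2, h / 2))
    k4 = f(x + h, add(y, k3, h))
    return [yi + (a + 2 * b + 2 * c + d) / 6 * h
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

# d2y/dx2 = -y reformulated as y1' = y2, y2' = -y1
f = lambda x, y: [y[1], -y[0]]
x, y = 0.0, [1.0, 0.0]
for _ in range(10):          # integrate to x = 1 with h = 0.1
    y = rk4_system(f, x, y, 0.1)
    x += 0.1
print(y[0])   # ≈ cos(1) = 0.5403
```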
Partial Differential Equations
• An equation involving partial derivatives of an
unknown function of two or more independent
variables
• The following are examples. Note: u depends on
both x and y
∂^2u/∂x^2 + 2xy ∂^2u/∂y^2 + u = 1

∂^3u/∂x^3 + x ∂^2u/∂y^2 + 8u = 5y

(∂^2u/∂x^2)^3 + 6 ∂^3u/(∂x ∂y^2) = x

∂^2u/∂x^2 + xu ∂u/∂y = x
Partial Differential Equations
• Because of their widespread application in
engineering, our study of PDE will focus on linear,
second-order equations
• The following general form will be evaluated for B^2 − 4AC (note that the text does not list the E, F, G terms)
A ∂^2u/∂x^2 + B ∂^2u/(∂x ∂y) + C ∂^2u/∂y^2 + E ∂u/∂x + F ∂u/∂y + Gu + D = 0
B^2 − 4AC    Category      Example

< 0          Elliptic      Laplace equation (steady state with two spatial dimensions):
                           ∂^2T/∂x^2 + ∂^2T/∂y^2 = 0

= 0          Parabolic     Heat conduction equation (time variable with one spatial dimension):
                           k ∂^2T/∂x^2 = ∂T/∂t

> 0          Hyperbolic    Wave equation (time variable with one spatial dimension):
                           ∂^2y/∂x^2 = (1/c^2) ∂^2y/∂t^2
[Figure: grid in x and y (or t); the dependent variable is estimated at the centers or intersections of the grid]
• Typically used to characterize steady-state
boundary value problems
• Before solving, the Laplace equation will be
solved from a physical problem
A ∂^2u/∂x^2 + B ∂^2u/(∂x ∂y) + C ∂^2u/∂y^2 + D = 0   →   ∂^2u/∂x^2 + ∂^2u/∂y^2 = 0
Finite Difference: Elliptic Equations

B^2 − 4AC < 0

The Laplace Equation
• Models a variety of problems involving the
potential of an unknown variable
• We will consider cases involving
thermodynamics, fluid flow, and flow through
porous media
∂^2u/∂x^2 + ∂^2u/∂y^2 = 0
The Laplace equation
• Let’s consider the case of a plate heated from the
boundaries
• How is this equation derived from basic concepts
of continuity?
• How does it relate to flow fields?
∂^2T/∂x^2 + ∂^2T/∂y^2 = 0
Consider the plate below, with thickness Δz. The temperatures are known at the boundaries. What is the temperature throughout the plate?

[Figure: plate with T = 400 on one boundary and T = 200 on the other three]

Divide into a grid, with increments of Δx and Δy.

What is the temperature here, if using a block-centered scheme?

What is the temperature here, if using a grid-centered scheme?
Consider the element shown below on the face of a plate Δz in thickness. The plate is insulated everywhere except at its edges, where the temperature can be set.
[Figure: element of size Δx by Δy with fluxes q(x) and q(y) entering and q(x + Δx) and q(y + Δy) leaving]

By continuity, the flow of heat in must equal the flow of heat out:

q(x) Δy Δz Δt + q(y) Δx Δz Δt = q(x + Δx) Δy Δz Δt + q(y + Δy) Δx Δz Δt
Divide by Δz and Δt and collect terms; this equation reduces to:

∂q/∂x + ∂q/∂y = 0

Again, this is our continuity equation.
The link between flux and temperature is provided by Fourier’s
Law of heat conduction
q_i = −k ρC ∂T/∂i

where q_i is the heat flux in the direction i.
Substitute B into A to get the Laplace equation.

Equation A:   ∂q/∂x + ∂q/∂y = 0
Equation B:   q_i = −k ρC ∂T/∂i

∂/∂x (−k ρC ∂T/∂x) + ∂/∂y (−k ρC ∂T/∂y) = 0   →   ∂^2T/∂x^2 + ∂^2T/∂y^2 = 0
Consider Fluid Flow
In fluid flow, where the fluid is a liquid or a gas, the
continuity equation is:
∂Vx/∂x + ∂Vy/∂y = 0
The link here can be either of the following sets of equations:
The potential function:   Vx = ∂φ/∂x    Vy = ∂φ/∂y

The stream function:   Vx = ∂ψ/∂y    Vy = −∂ψ/∂x

∂^2φ/∂x^2 + ∂^2φ/∂y^2 = 0   or   ∂^2ψ/∂x^2 + ∂^2ψ/∂y^2 = 0
The Laplace equation then follows from substituting either set of relations into the continuity equation:

∂Vx/∂x + ∂Vy/∂y = 0
Flow in Porous Media
∂q/∂x + ∂q/∂y = 0

The link is provided by Darcy's Law:

q_i = −K ∂h/∂i

which yields

∂^2h/∂x^2 + ∂^2h/∂y^2 = 0
∂^2u/∂x^2 + ∂^2u/∂y^2 = f(x, y)
For a case with sources and sinks within the 2-D
domain, as represented by f(x,y), we have the
Poisson equation.
Now let’s consider solution techniques.
Evaluate these equations based on the grid and
central difference equations
[Figure: five-point stencil (i,j), (i+1,j), (i−1,j), (i,j+1), (i,j−1)]

∂^2u/∂x^2 = (u_{i+1,j} − 2u_{i,j} + u_{i−1,j}) / Δx^2

∂^2u/∂y^2 = (u_{i,j+1} − 2u_{i,j} + u_{i,j−1}) / Δy^2

(u_{i+1,j} − 2u_{i,j} + u_{i−1,j}) / Δx^2 + (u_{i,j+1} − 2u_{i,j} + u_{i,j−1}) / Δy^2 = 0
If Δx = Δy we can collect the terms to get:

u_{i+1,j} + u_{i−1,j} + u_{i,j+1} + u_{i,j−1} − 4u_{i,j} = 0
This equation is referred
to as the Laplacian difference
equation.

It can be applied to all
interior points.

We must now consider
what to do with the
boundary nodes.
Boundary Conditions
• Dirichlet boundary conditions: u is specified at the
boundary
 Temperature
• Neumann boundary condition: the derivative is specified at the boundary, e.g. a flux q_i, or

∂T/∂x  or  ∂h/∂x

• Combination of both u and its derivative (mixed BC)
The simplest case is where the boundaries are
specified as fixed values.

This case is known as the Dirichlet boundary
conditions.
[Figure: grid with specified boundary values u1, u2, u3, u4 along the edges]

Consider how we can deal with the lower corner node u_{1,1}:

−4u_{1,1} + u_{1,2} + u_{2,1} + u1 + u4 = 0

Note: this grid would result in nine simultaneous equations.
Let’s consider how to model the Neumann boundary condition
∂u/∂x ≅ (u_{i+1,j} − u_{i−1,j}) / (2Δx)

centered finite divided difference approximation
[Figure: domain governed by ∂^2h/∂x^2 + ∂^2h/∂y^2 = 0, with no-flow boundaries ∂h/∂x = 0 on the sides and ∂h/∂y = 0 on the bottom]
Suppose we wanted to consider the corner grid point (1,1), with neighbors (1,2) and (2,1), where ∂h/∂x = 0 and ∂h/∂y = 0.

The two boundaries are considered to be symmetry lines, because in finite difference form the boundary conditions translate to:

h_{i+1,j} = h_{i−1,j}   and   h_{i,j+1} = h_{i,j−1}

so that

h_{1,1} = (2h_{1,2} + 2h_{2,1}) / 4

h_{1,2} = (h_{1,1} + h_{1,3} + 2h_{2,2}) / 4
Example
The grid on the next slide is designed to solve the Laplace equation

∂^2u/∂x^2 + ∂^2u/∂y^2 = 0

Write the finite difference equations for the nodes (1,1), (1,2), and (2,1). Note that the lower boundary is a Dirichlet boundary condition, the left boundary is a Neumann boundary condition, and Δx = Δy.
Solution
[Figure: left boundary ∂u/∂x = 0 (Neumann); lower boundary u = 20 (Dirichlet); nodes (1,1), (2,1), (1,2)]

u_{1,1} = (2u_{2,1} + u_{1,2} + 20) / 4

u_{2,1} = (u_{1,1} + u_{3,1} + u_{2,2} + 20) / 4

u_{1,2} = (u_{1,1} + 2u_{2,2} + u_{1,3}) / 4
The Liebmann Method
• Most numerical solutions of the Laplace
equation involve systems that are much
larger than the general system we just
evaluated
• Note that there are a maximum of five
unknown terms per line
• This results in a significant number of terms
with zero’s
The Liebmann Method
• In addition to the fact that they are prone to
round-off errors, using elimination methods
on such sparse systems wastes a great
amount of computer memory storing zeros
• Therefore, we commonly employ
approaches such as Gauss-Seidel, which
when applied to PDEs is also referred to as
Liebmann’s method.
The Liebmann Method
• The system of Laplacian difference equations is diagonally dominant.
• Therefore the procedure will converge to a stable
solution.
• Over relaxation is often employed to accelerate the
rate of convergence
( )
old
j , i
new
j , i
new
j , i
j , i 1 j , i 1 j , i j , 1 i j , 1 i
u 1 u u
0 u 4 u u u u
ì ÷ + ì =
= ÷ + + +
÷ + ÷ +
( )
old
j , i
new
j , i
new
j , i
j , i 1 j , i 1 j , i j , 1 i j , 1 i
u 1 u u
0 u 4 u u u u
ì ÷ + ì =
= ÷ + + +
÷ + ÷ +
As with the conventional Gauss-Seidel method, the iterations are repeated until the error at each point falls below a pre-specified tolerance ε_s:

  |ε_a| = |(u_{i,j}^new - u_{i,j}^old) / u_{i,j}^new| × 100% < ε_s
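The Liebmann update with overrelaxation can be sketched in code. Python is an assumption here (the slides themselves use a spreadsheet), and the function name `liebmann`, the 5 x 5 test grid, and λ = 1.5 are illustrative choices rather than values from the slides:

```python
# Liebmann's method: Gauss-Seidel on the Laplacian difference equation,
# with overrelaxation u_new = lam*u_new + (1 - lam)*u_old.
def liebmann(u, lam=1.5, tol=0.1, max_it=200):
    rows, cols = len(u), len(u[0])
    for _ in range(max_it):
        max_err = 0.0
        for i in range(1, rows - 1):
            for j in range(1, cols - 1):
                old = u[i][j]
                new = (u[i+1][j] + u[i-1][j] + u[i][j+1] + u[i][j-1]) / 4.0
                u[i][j] = lam * new + (1.0 - lam) * old
                if u[i][j] != 0.0:  # percent relative change at this node
                    max_err = max(max_err, abs((u[i][j] - old) / u[i][j]) * 100.0)
        if max_err < tol:           # every node is below the tolerance
            break
    return u

# Heated-plate style test: top edge held at 100, other edges at 0
grid = [[0.0] * 5 for _ in range(5)]
grid[0] = [100.0] * 5
result = liebmann(grid)
```

Boundary rows and columns are simply never updated, which is how the Dirichlet values stay fixed.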
Groundwater Flow Example
  ∂²h/∂x² + ∂²h/∂y² = 0

with no-flow boundaries ∂h/∂x = 0 on the vertical edges and ∂h/∂y = 0 along the bottom.

Modeling 1/2 of the system shown, we can develop the following schematic, where Δx = Δy = 20 m.

The finite difference equations can be solved using a spreadsheet; the cell formulas below implement the Liebmann iteration:
100 =A1+0.05*20 =B1+0.05*20 =C1+0.05*20 =D1+0.05*20 =E1+0.05*20 =F1+0.05*20 =G1+0.05*20 =H1+0.05*20 =I1+0.05*20 =J1+0.05*20
=(A1+2*B2+A3)/4 =(B1+C2+B3+A2)/4 =(C1+D2+C3+B2)/4 =(D1+E2+D3+C2)/4 =(E1+F2+E3+D2)/4 =(F1+G2+F3+E2)/4 =(G1+H2+G3+F2)/4 =(H1+I2+H3+G2)/4 =(I1+J2+I3+H2)/4 =(J1+K2+J3+I2)/4 =(K1+K3+2*J2)/4
=(A2+2*B3+A4)/4 =(B2+C3+B4+A3)/4 =(C2+D3+C4+B3)/4 =(D2+E3+D4+C3)/4 =(E2+F3+E4+D3)/4 =(F2+G3+F4+E3)/4 =(G2+H3+G4+F3)/4 =(H2+I3+H4+G3)/4 =(I2+J3+I4+H3)/4 =(J2+K3+J4+I3)/4 =(K2+K4+2*J3)/4
=(A3+2*B4+A5)/4 =(B3+C4+B5+A4)/4 =(C3+D4+C5+B4)/4 =(D3+E4+D5+C4)/4 =(E3+F4+E5+D4)/4 =(F3+G4+F5+E4)/4 =(G3+H4+G5+F4)/4 =(H3+I4+H5+G4)/4 =(I3+J4+I5+H4)/4 =(J3+K4+J5+I4)/4 =(K3+K5+2*J4)/4
=(A4+2*B5+A6)/4 =(B4+C5+B6+A5)/4 =(C4+D5+C6+B5)/4 =(D4+E5+D6+C5)/4 =(E4+F5+E6+D5)/4 =(F4+G5+F6+E5)/4 =(G4+H5+G6+F5)/4 =(H4+I5+H6+G5)/4 =(I4+J5+I6+H5)/4 =(J4+K5+J6+I5)/4 =(K4+K6+2*J5)/4
=(2*A5+2*B6)/4 =(2*B5+C6+A6)/4 =(2*C5+D6+B6)/4 =(2*D5+E6+C6)/4 =(2*E5+F6+D6)/4 =(2*F5+G6+E6)/4 =(2*G5+H6+F6)/4 =(2*H5+I6+G6)/4 =(2*I5+J6+H6)/4 =(2*J5+K6+I6)/4 =(2*K5+2*J6)/4
You will get an error message in Excel stating that it cannot resolve a circular reference; iterative calculation must be enabled before the formulas will converge.
After selecting the appropriate command, Excel will perform the Liebmann method for you. In fact, you will be able to watch the iterations.
Table 2: Results of finite difference model.
A B C D E F G H I J K
1 100 101 102 103 104 105 106 107 108 109 110
2 101.6 102 102.6 103.4 104.2 105 105.8 106.6 107.4 108 108.4
3 102.5 102.7 103.1 103.7 104.3 105 105.7 106.3 106.9 107.3 107.5
4 103 103.1 103.4 103.9 104.4 105 105.6 106.1 106.6 106.9 107
5 103.3 103.3 103.6 104 104.5 105 105.5 106 106.4 106.7 106.7
6 103.3 103.4 103.7 104 104.5 105 105.5 106 106.3 106.6 106.7
...end of problem.
Secondary Variables
• Because its distribution is described by the Laplace equation, temperature is considered to be the primary variable in the heated plate problem
• A secondary variable may also be of interest
• In this case, the secondary variable is the rate of heat flux across the plate surface, given by Fourier's law:

  q_i = -k ρC ∂T/∂i

• Finite difference approximations based on the computed temperatures give the flux components:

  q_x = -k' (T_{i+1,j} - T_{i-1,j}) / (2Δx)
  q_y = -k' (T_{i,j+1} - T_{i,j-1}) / (2Δy)

• The resulting flux is a vector with magnitude and direction:

  q_n = (q_x² + q_y²)^(1/2)        θ = tan⁻¹(q_y / q_x)
Finite Difference: Parabolic Equations
For parabolic equations, B² - 4AC = 0 in the general second-order PDE

  A ∂²u/∂x² + B ∂²u/∂x∂y + C ∂²u/∂y² + D = 0

These equations are used to characterize transient problems.

We will first study this in one spatial direction (1-D). Consider the heat-conduction equation

  k ∂²T/∂x² = ∂T/∂t

As with the elliptic PDEs, parabolic equations can be solved by substituting finite difference equations for the partial derivatives. However, we must now consider changes in time as well as space: each node carries a spatial subscript i and a temporal superscript l.

Using a centered finite divided difference for the spatial derivative and a forward finite divided difference for the time derivative:

  k (T_{i+1}^l - 2T_i^l + T_{i-1}^l) / (Δx)² = (T_i^{l+1} - T_i^l) / Δt

We can further reduce the equation:

  T_i^{l+1} = T_i^l + λ (T_{i+1}^l - 2T_i^l + T_{i-1}^l)    where λ = k Δt / (Δx)²

NOTE: the temperature at a node is now estimated as a function of the temperature at that node and its surrounding nodes, but at the previous time step.
Example
Consider a thin insulated rod 10 cm long with k = 0.835 cm²/s. Let Δx = 2 cm and Δt = 0.1 s. At t = 0 the temperature of the rod is zero.

Now subject the two ends to temperatures of 100 (hot) and 50 (cold) degrees, and set the nodes with these boundary and initial conditions at t = 0:

  i =    0    1    2    3    4    5
  x =    0    2    4    6    8   10
  T =  100    0    0    0    0   50

Consider the temperature at node i = 1 at time t + Δt:

  T_i^{l+1} = T_i^l + λ (T_{i+1}^l - 2T_i^l + T_{i-1}^l)

  T_1^{t+Δt} = 0 + λ (0 - 2(0) + 100)

with λ = k Δt / (Δx)² = 0.835(0.1)/(2)² = 0.020875

The scheme is easily implemented in a spreadsheet, with λ stored in cell B1 and the t = 0 column in B4:B9:

  x \ t     0      0.1                       0.2
   0      100    100                       100
   2        0    =B5+$B$1*(B6-2*B5+B4)     =C5+$B$1*(C6-2*C5+C4)
   4        0    =B6+$B$1*(B7-2*B6+B5)     =C6+$B$1*(C7-2*C6+C5)
   6        0    =B7+$B$1*(B8-2*B7+B6)     =C7+$B$1*(C8-2*C7+C6)
   8        0    =B8+$B$1*(B9-2*B8+B7)     =C8+$B$1*(C9-2*C8+C7)
  10       50     50                        50

[Figure: temperature profiles T(x) along the rod at t = 5, 10, and 15 s.]
...end of example
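The explicit scheme is also easy to code directly. The sketch below (Python, an assumption; the slides use Excel) reruns the rod example and marches to t = 10 s:

```python
# Explicit finite-difference solution of k*d2T/dx2 = dT/dt for the rod:
# k = 0.835 cm^2/s, dx = 2 cm, dt = 0.1 s, ends held at 100 and 50.
k, dx, dt = 0.835, 2.0, 0.1
lam = k * dt / dx**2                     # 0.020875, well under the 1/2 limit

T = [100.0, 0.0, 0.0, 0.0, 0.0, 50.0]    # nodal temperatures at t = 0
for _ in range(int(10.0 / dt)):          # march to t = 10 s
    Tnew = T[:]                          # boundary values stay fixed
    for i in range(1, len(T) - 1):
        Tnew(i) if False else None       # (no-op: kept simple below)
        Tnew[i] = T[i] + lam * (T[i+1] - 2.0 * T[i] + T[i-1])
    T = Tnew
```

The interior nodes nearest the hot end warm fastest, as in Table-style results above.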
Convergence and Stability
• Convergence means that as Δx and Δt approach zero, the results of the numerical technique approach the true solution
• Stability means that the errors at any stage of the computation are attenuated, not amplified, as the computation progresses
• The explicit method is both stable and convergent if λ ≤ 1/2
Derivative Boundary Conditions
In our previous example T_0 and T_L were constant values. However, we may also have derivative boundary conditions. Writing the explicit equation at the boundary node i = 0,

  T_0^{l+1} = T_0^l + λ (T_1^l - 2T_0^l + T_{-1}^l)

Thus we introduce an imaginary point at i = -1. This point provides the vehicle for incorporating the derivative boundary condition into the analysis.

Derivative Boundary Conditions
For the case of q_0 = 0, T_{-1} = T_1, so the balance at node 0 becomes:

  T_0^{l+1} = T_0^l + λ (2T_1^l - 2T_0^l)
Derivative Boundary Conditions
q
0
=

1
0

T
L

For the case of q
o
= 10, we need to know k’ [= k/(µC)].
Assuming k’ =1, then 10 = - (1) dT/dx,
or dT/dx = -10
( )
|
|
.
|

\
|
÷
A
÷ ì + =
'
A
÷ =
A
÷
'
÷ =
+
÷
÷
l l l l
l l
l l
0 1 0
1
0
1 1
1 1
T 2
1
x
20 T 2 T T
k
x
20 T T
x 2
T T
k 10
Implicit Method
• Explicit methods have problems relating to stability
• Implicit methods overcome this, but at the expense of introducing a more complicated algorithm
• In this algorithm, we develop simultaneous equations

Where the explicit method writes the spatial difference at the current time level, the implicit method writes it at the new time level. As a result, we develop a set of simultaneous equations at each step in time.
The implicit method evaluates the spatial difference at the new time level l+1,

  (T_{i+1}^{l+1} - 2T_i^{l+1} + T_{i-1}^{l+1}) / (Δx)²

rather than at the current level l as in the explicit method,

  (T_{i+1}^l - 2T_i^l + T_{i-1}^l) / (Δx)²

Substituting the implicit form into the heat-conduction equation gives

  k (T_{i+1}^{l+1} - 2T_i^{l+1} + T_{i-1}^{l+1}) / (Δx)² = (T_i^{l+1} - T_i^l) / Δt

which can be expressed as:

  -λ T_{i-1}^{l+1} + (1 + 2λ) T_i^{l+1} - λ T_{i+1}^{l+1} = T_i^l

For the case where the temperature is given at the end by a function f_0, i.e. at x = 0,

  T_0^{l+1} = f_0(t^{l+1})

Substituting this boundary value into the equation for the first interior node gives

  (1 + 2λ) T_1^{l+1} - λ T_2^{l+1} = T_1^l + λ f_0(t^{l+1})

In the previous example problem, we get a 4 x 4 matrix to solve for the four interior nodes at each time step.
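Each implicit step therefore requires solving a tridiagonal system. The sketch below (Python, an assumption) solves one step with the Thomas algorithm; the function name `implicit_step` and the single-step test values are illustrative, and constant boundary temperatures are assumed:

```python
# One implicit time step: solve -lam*T[i-1] + (1+2*lam)*T[i] - lam*T[i+1]
# = T_old[i] for the interior nodes, via the Thomas algorithm.
def implicit_step(T, lam, T_left, T_right):
    n = len(T)                       # number of interior nodes
    a = [-lam] * n                   # sub-diagonal (a[0] unused)
    b = [1.0 + 2.0 * lam] * n        # main diagonal
    c = [-lam] * n                   # super-diagonal (c[-1] unused)
    d = T[:]                         # right-hand side: old temperatures
    d[0] += lam * T_left             # fold boundary values into the RHS
    d[-1] += lam * T_right
    for i in range(1, n):            # forward elimination
        m = a[i] / b[i-1]
        b[i] -= m * c[i-1]
        d[i] -= m * d[i-1]
    x = [0.0] * n                    # back substitution
    x[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (d[i] - c[i] * x[i+1]) / b[i]
    return x

lam = 0.020875
interior = implicit_step([0.0, 0.0, 0.0, 0.0], lam, 100.0, 50.0)
```

Unlike the explicit method, this remains stable for any λ.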

Computer Storage of Numbers
• Computers use a binary base system for logic and storage of information
• This is convenient due to on/off circuitry
• The basic unit of computer operation is the bit, with value either 0 (off) or 1 (on)
• Numbers can be represented as a string of bits
• 110₂ = 0·2⁰ + 1·2¹ + 1·2² = 0 + 2 + 4 = 6₁₀
• A string of 8 bits is called a byte

Computer Storage of Integers
• It is convenient to think of an integer as being represented by a sign bit and m number bits
• This is not quite correct
• Perhaps you remember that the limits for a 4 byte (32 bit) integer are -2,147,483,648 and +2,147,483,647
• With a simple sign bit the range would be ±(2^(32-1) - 1) = ±2,147,483,647
• Because +0 and -0 are redundant, a method of storage called 2's complement is used

2's Complement
• The easiest way to understand 2's complement is in terms of a VCR counter
• When you rewind a tape and the counter goes past zero, it wraps around: 002, 001, 000, 999, 998, ...; the counter has no separate way to register negative values, so 999 plays the role of -1

2's Complement & -1
• Use a 1 byte (8 bit) integer as an example
• Use all 1's (our highest digit in base 2) for -1; thus we use 11111111 to represent -1
• We furthermore represent 0 as all zeros
• What happens when we add -1 and 1 using these?

    11111111
  + 00000001
   100000000    (the carried bit is lost, leaving 00000000)

8 Bit Integer Example, cont.
• With our system we don't need separate algorithms for addition and subtraction; we simply add the negative of a number
• So -1 plus -1 is 11111111 + 11111111 = 111111110 → 11111110 (-2)
• And 1 less than -2 is 11111110 + 11111111 = 111111101 → 11111101 (-3)

8 Bit Integer Example, cont.
• It is not hard to see from the trend that the most negative number is 10000000₂ = -128 (-2⁷) and the most positive is 01111111₂ = 127 (2⁷ - 1)
• The algorithm for negation is to reverse all bits and add 1, thus "two's complement"
• So -(01111111₂) = 10000001₂ = -127
• This even works for 0: 11111111 + 00000001 = 100000000 → 00000000

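The 8-bit arithmetic above can be checked directly. In the sketch below (Python, an assumption), every result is masked to 8 bits so the carried bit is lost exactly as on the slides:

```python
# Two's-complement negation on 8 bits: invert all bits, add 1, drop carry.
def neg8(x):
    return ((x ^ 0xFF) + 1) & 0xFF

minus_one = neg8(0b00000001)                          # 11111111
assert minus_one == 0b11111111
assert (minus_one + 0b00000001) & 0xFF == 0           # -1 + 1 = 0, carry lost
assert (minus_one + minus_one) & 0xFF == 0b11111110   # -1 + -1 = -2
assert neg8(0b01111111) == 0b10000001                 # -(127) = -127
assert neg8(0) == 0                                   # negating 0 works too
```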
8 Bit Real Variable Example
• Real numbers must be represented by a mantissa and an exponent (each with a sign)
• Computers use 0.XXX × 2^(±ZZ) rather than the X.XX × 10^(±YY) of scientific notation
• Divide the 8 bits up into 3 bits for the exponent and 5 bits for the mantissa, with each using a sign bit (not really, but nearly):

  ±ZZ | ±XXXX
  Exponent | Mantissa

8 Bit Real, Machine ε
• Machine ε is the error relative to the real-number representation of 1:

  0 1 | 1 0 0 0    (Exponent | Mantissa)

• Addition of two reals requires that their exponents be the same
• So the smallest number which can be added to real 1 without losing the mantissa bit is

  0 1 | 0 0 0 1    (Exponent | Mantissa)

  (0×2⁻¹ + 0×2⁻² + 0×2⁻³ + 1×2⁻⁴) × 2¹ = 0.125  ← machine ε here

32 Bit Real Representations
• IEEE standard real numbers have 8 bit exponents and 24 bit mantissas
• Machine ε for this system is then 0.00000000000000000000001₂ = 2⁻²³ = 1.192093 × 10⁻⁷
• Overflow for this system is then (2 - machine ε) × 2¹²⁷ = 3.40282 × 10⁺³⁸ (note that the first mantissa bit after the point is assumed to be 1, thus 127 + 1 factors of 2)

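Machine ε can also be estimated numerically by repeated halving, a minimal sketch (Python floats are IEEE double precision, so the value differs from the 8-bit toy system above):

```python
# Estimate machine epsilon by halving until adding it to 1.0 no longer
# changes the result.
eps = 1.0
while 1.0 + eps / 2.0 > 1.0:
    eps /= 2.0
# eps ends up as 2**-52 for doubles; the 8-bit toy system above gives
# 0.125, and 32-bit singles would give about 1.19e-7 (2**-23).
```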
TAYLOR SERIES
• Provides a means to predict a function value at one point in terms of the function value and its derivatives at another point
• Zero order approximation:

  f(x_{i+1}) ≈ f(x_i)

  This is good if the function is a constant.

Taylor Series Expansion
• First order approximation, slope multiplied by distance {LINEAR}:

  f(x_{i+1}) ≈ f(x_i) + f'(x_i)(x_{i+1} - x_i)

  Still a straight line, but capable of predicting an increase or decrease.

• Second order approximation, which captures some of the curvature:

  f(x_{i+1}) ≈ f(x_i) + f'(x_i)(x_{i+1} - x_i) + f''(x_i)/2! (x_{i+1} - x_i)²

• The complete series, with remainder:

  f(x_{i+1}) = f(x_i) + f'(x_i) h + f''(x_i)/2! h² + f'''(x_i)/3! h³ + ... + f⁽ⁿ⁾(x_i)/n! hⁿ + R_n

  where h = step size = x_{i+1} - x_i and

  R_n = f⁽ⁿ⁺¹⁾(ξ)/(n+1)! h^(n+1),   x_i ≤ ξ ≤ x_{i+1}

Example
Use zero through fourth order Taylor series expansions to approximate f(1) from x = 0 (i.e., h = 1.0) for

  f(x) = -0.1x⁴ - 0.15x³ - 0.5x² - 0.25x + 1.2

Note: f(1) = 0.2

[Figure: the n = 0 through n = 3 approximations plotted against the actual curve.]

Functions with infinite number of derivatives
• f(x) = cos x, f'(x) = -sin x, f''(x) = -cos x, f'''(x) = sin x
• Evaluate the system where x_i = π/4 and x_{i+1} = π/3
• h = π/3 - π/4 = π/12

• Zero order: f(π/3) ≈ cos(π/4) = 0.707, ε_t = 41.4%
• First order: f(π/3) ≈ cos(π/4) - sin(π/4)(π/12) = 0.522, ε_t = 4.4%
• Second order: f(π/3) ≈ 0.4978, ε_t = 0.45%
• By n = 6, ε_t = 2.4 × 10⁻⁶ %

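The cos(x) example can be reproduced with a short script (Python, an assumption), exploiting the fact that the derivatives of cos repeat with period 4:

```python
# Taylor expansions of cos about x_i = pi/4, evaluated at x_{i+1} = pi/3.
import math

xi, h = math.pi / 4.0, math.pi / 12.0
true = math.cos(xi + h)                        # cos(pi/3) = 0.5
derivs = [math.cos(xi), -math.sin(xi),         # derivatives of cos
          -math.cos(xi), math.sin(xi)]         # repeat with period 4

def taylor(order):
    return sum(derivs[n % 4] * h**n / math.factorial(n)
               for n in range(order + 1))

errs = [abs((true - taylor(n)) / true) * 100.0 for n in range(3)]
# errs is roughly [41.4, 4.4, 0.45] percent, matching the slide
```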
Roots (Quadratic Formula)
For f(x) = ax² + bx + c = 0,

  x = (-b ± √(b² - 4ac)) / (2a)

This equation gives us the roots of the algebraic function f(x), i.e. the values of x that make f(x) = 0. But how can we solve for f(x) = e⁻ˣ - x?

Roots of Equations
• Plot the function and determine where it crosses the x-axis; this lacks precision
• Trial and error

Overview of Methods
• Bracketing methods
  – Graphical method
  – Bisection method
  – False position
• Open methods
  – One point iteration
  – Newton-Raphson
  – Secant method

Graphical (limited practical value)
Consider a lower and an upper bound. If f(x) has the same sign at both, there are no roots or an even number of roots between them; if the signs are opposite, there is an odd number of roots.

Bisection Method
• Takes advantage of sign changing
• If f(x_l) f(x_u) < 0, where the subscripts refer to the lower and upper bounds, there is at least one real root between x_l and x_u

PROBLEM STATEMENT
Use the bisection method to determine the root of f(x) = e⁻ˣ - x, starting with x_l = -1 and x_u = 1.

SOLUTION
• x_l = -1, x_u = 1; check that f(x_l) f(x_u) < 0: f(-1) f(1) = (3.72)(-0.632) < 0
• x_r = (x_l + x_u)/2 = 0; f(0) = 1, so exchange the lower limit: x_l = 0, x_u = 1

Solution cont.
• x_l = 0, x_u = 1: f(0) f(1) = (1)(-0.632) < 0
• x_r = (0 + 1)/2 = 0.5; f(0.5) = 0.1065, switch the lower limit
• x_l = 0.5, x_u = 1: x_r = (0.5 + 1)/2 = 0.75; f(0.75) = -0.2776, switch the upper limit
• x_l = 0.5, x_u = 0.75: x_r = (0.5 + 0.75)/2 = 0.625; f(0.625) = -0.090, switch the upper limit
• x_l = 0.5, x_u = 0.625: x_r = 0.5625; f(0.5625) = 0.007

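The iterations above are mechanical enough to automate. A minimal bisection sketch (Python, an assumption) for the same function and bracket:

```python
# Bisection for f(x) = exp(-x) - x on the bracket [-1, 1] used above.
import math

def f(x):
    return math.exp(-x) - x

xl, xu = -1.0, 1.0
assert f(xl) * f(xu) < 0.0          # sign change: at least one root inside
for _ in range(40):
    xr = (xl + xu) / 2.0
    if f(xl) * f(xr) < 0.0:
        xu = xr                     # root is in the lower subinterval
    else:
        xl = xr                     # root is in the upper subinterval
root = (xl + xu) / 2.0              # about 0.5671
```

Each pass halves the bracket, so 40 passes shrink the original width of 2 by a factor of 2⁴⁰.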
Solution cont.
• Here we consider an error estimate that is not contingent on foreknowledge of the root
• ε_a is computed from the present and previous approximations:

  ε_a = |(x_r^new - x_r^old) / x_r^new| × 100%

False Position Method
• The "brute force" of the bisection method is inefficient
• Join the bracketing points by a straight line; its x-intercept improves the estimate
• Estimating the curve by a straight line gives the "false position" of the root

DEVELOP METHOD BASED ON SIMILAR TRIANGLES
Based on similar triangles formed by (x_l, f(x_l)), (x_u, f(x_u)), and the x-axis crossing x_r:

  f(x_l) / (x_r - x_l) = f(x_u) / (x_r - x_u)

  x_r = x_u - f(x_u)(x_l - x_u) / (f(x_l) - f(x_u))

Example
• f(x) = x³ - 98; the true root is x = 98^(1/3) = 4.61
• x_l = 4.55, f(x_l) = -3.804; x_u = 4.65, f(x_u) = 2.545
• x_r = 4.65 - (2.545)(4.55 - 4.65)/(-3.804 - 2.545) = 4.6099; f(x_r) = -0.03419
  – if f(x_l) f(x_r) > 0, set x_l = x_r
  – if f(x_l) f(x_r) < 0, set x_u = x_r

Example (continued)
• x_l = 4.6099, f(x_l) = -0.034; x_u = 4.65, f(x_u) = 2.545
• x_r = 4.6104, f(x_r) = -0.0004, ε_a = 0.011%

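The same example in code (Python, an assumption), looping the chord-intercept formula:

```python
# False position for f(x) = x**3 - 98 on the bracket [4.55, 4.65].
def f(x):
    return x**3 - 98.0

xl, xu = 4.55, 4.65
for _ in range(10):
    # x-intercept of the chord through (xl, f(xl)) and (xu, f(xu))
    xr = xu - f(xu) * (xl - xu) / (f(xl) - f(xu))
    if f(xl) * f(xr) < 0.0:
        xu = xr
    else:
        xl = xr
# xr converges to the true root 98**(1/3) = 4.6104...
```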
Pitfalls of False Position Method
For a function with strong curvature such as f(x) = x¹⁰ - 1, one end of the bracket stays fixed and convergence can be very slow.

Open Methods
• Fixed point iteration
• Newton-Raphson method
• Secant method
• Multiple roots

In the previous bracketing methods, the root is located within an interval prescribed by an upper and lower boundary. Such methods are said to be convergent: the solution moves closer to the root as the computation progresses.

Open Methods cont.
• An open method uses a single starting value, or two starting values that do not necessarily bracket the root
• These solutions may diverge: the solution can move farther from the root as the computation progresses

Graphically, the tangent idea works like this: pick an initial estimate x_i; draw a tangent to the curve at f(x_i); at its intersection with the x-axis we get x_{i+1} and f(x_{i+1}); the tangent gives the next estimate. The solution can also "overshoot" the root and potentially diverge.

Fixed point iteration
• Open methods employ a formula to predict the root
• In simple fixed point iteration, rearrange the function f(x) = 0 so that x is on the left hand side of the equation
  – e.g., for f(x) = x² - 2x + 3 = 0, x = (x² + 3)/2
  – e.g., for f(x) = sin x = 0, x = sin x + x
• Let x = g(x); the new estimate is based on x_{i+1} = g(x_i)

Example
• Consider f(x) = e⁻ˣ - 3x
• Rearranged: g(x) = e⁻ˣ / 3
• Initial guess x = 0

  x        f(x)      ε_a (%)
  0.000
  0.333   -0.283
  0.239    0.071    39.561
  0.263   -0.018     9.016
  0.256    0.005     2.395
  0.258   -0.001     0.612
  0.258    0.000     0.158
  0.258    0.000     0.041

[Figure: f(x) = e⁻ˣ - 3x crossing zero near x = 0.258.]

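The table above can be regenerated with a few lines (Python, an assumption):

```python
# Simple fixed-point iteration x = g(x) = exp(-x)/3 for f(x) = exp(-x) - 3x.
import math

x = 0.0
for _ in range(20):
    x_new = math.exp(-x) / 3.0
    ea = abs((x_new - x) / x_new) * 100.0   # approximate percent error
    x = x_new
# x converges to about 0.2576, where exp(-x) = 3x
```

Convergence is linear, with the error shrinking by roughly |g'(root)| ≈ 0.26 per pass, consistent with the ε_a column above.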
Newton Raphson
• The most widely used root-finding method
• Note how the new estimates converge, i.e. x_{i+2} is closer to the root f(x) = 0

The tangent at x_i has slope f'(x_i):

  f'(x_i) = (f(x_i) - 0) / (x_i - x_{i+1})

which rearranges to:

  x_{i+1} = x_i - f(x_i) / f'(x_i)

Newton Raphson Pitfalls
[Figures: near a flat spot or inflection the tangent can send the iterate far away, and the solution diverges.]

Example
• f(x) = x² - 11, f'(x) = 2x
• Initial guess x_i = 3: f(3) = -2, f'(3) = 6

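Carrying the example through (Python, an assumption):

```python
# Newton-Raphson for f(x) = x**2 - 11, starting from x = 3.
x = 3.0
for _ in range(6):
    f_val, f_prime = x**2 - 11.0, 2.0 * x
    x = x - f_val / f_prime
# x converges quadratically to sqrt(11) = 3.3166...
```

The first step is 3 - (-2)/6 = 3.3333, and the number of correct digits roughly doubles each iteration after that.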
Secant method
Approximate the derivative using a finite divided difference:

  f'(x_i) ≈ (f(x_{i-1}) - f(x_i)) / (x_{i-1} - x_i)

(HINT: dy/dx = lim Δy/Δx.) Substitute this approximation for the first derivative into the Newton-Raphson formula

  x_{i+1} = x_i - f(x_i) / f'(x_i)

to obtain:

  x_{i+1} = x_i - f(x_i)(x_{i-1} - x_i) / (f(x_{i-1}) - f(x_i))

Secant method
• Requires two initial estimates
• f(x) is not required to change signs between the estimates; therefore this is not a bracketing method
• Graphically, the new estimate lies where the secant line through the two current estimates crosses the x-axis

Example
• Let's consider f(x) = e⁻ˣ - x
• Choose two starting points:
  – x₀ = 0, f(x₀) = 1
  – x₁ = 1.0, f(x₁) = -0.632
• Calculate x₂:
  – x₂ = 1 - (-0.632)(0 - 1)/(1 + 0.632) = 0.6127

FALSE POSITION vs SECANT METHOD
In both methods the new estimate is selected from the intersection of a straight line with the x-axis. The difference is that false position always keeps a sign-changing bracket, while the secant method always uses the two most recent points.

Multiple Roots
• Corresponds to a point where a function is tangential to the x-axis
• e.g. double root: f(x) = (x-3)(x-1)(x-1) = (x-3)(x-1)² = x³ - 5x² + 7x - 3
• e.g. triple root: f(x) = (x-3)(x-1)³

Difficulties
• Bracketing methods won't work, since f(x) does not change sign at an even multiple root
• We are limited to open methods, which may diverge
• f(x) = 0 at the root and f'(x) = 0 at the root
• Hence, a zero appears in the denominator for the Newton-Raphson and secant methods

Multiple Roots
A modified Newton-Raphson formula avoids this difficulty:

  x_{i+1} = x_i - f(x_i) f'(x_i) / ([f'(x_i)]² - f(x_i) f''(x_i))

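A sketch of the modified formula applied to the double-root example above (Python, an assumption); only a few iterations are needed because the modified form retains quadratic convergence at a multiple root:

```python
# Modified Newton-Raphson for the double root of
# f(x) = (x - 3)(x - 1)**2 = x**3 - 5x**2 + 7x - 3.
def f(x):   return x**3 - 5.0 * x**2 + 7.0 * x - 3.0
def df(x):  return 3.0 * x**2 - 10.0 * x + 7.0
def d2f(x): return 6.0 * x - 10.0

x = 0.0
for _ in range(4):
    x = x - f(x) * df(x) / (df(x)**2 - f(x) * d2f(x))
# x converges quadratically to the double root at 1, where plain
# Newton-Raphson would slow to linear convergence
```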
Systems of Non-Linear Equations
• We will later consider systems of linear equations
  – f(x) = a₁x₁ + a₂x₂ + ... + aₙxₙ - C = 0, where a₁ ... aₙ and C are constants
• Consider the following nonlinear equations:
  – y = -x² + x + 0.5
  – y + 5xy = x³
• Solve for x and y

Systems of Non-Linear Equations cont.
• Set the equations equal to zero:
  – u(x, y) = -x² + x + 0.5 - y = 0
  – v(x, y) = y + 5xy - x³ = 0
• The solution would be the values of x and y that make the functions u and v equal to zero

Recall the Taylor Series

  f(x_{i+1}) = f(x_i) + f'(x_i) h + f''(x_i)/2! h² + f'''(x_i)/3! h³ + ... + f⁽ⁿ⁾(x_i)/n! hⁿ + R_n,   where h = step size = x_{i+1} - x_i

Write 2 Taylor series with respect to u and v:

  u_{i+1} = u_i + (∂u_i/∂x)(x_{i+1} - x_i) + (∂u_i/∂y)(y_{i+1} - y_i) + HOT
  v_{i+1} = v_i + (∂v_i/∂x)(x_{i+1} - x_i) + (∂v_i/∂y)(y_{i+1} - y_i) + HOT

The root estimate corresponds to the point where u_{i+1} = v_{i+1} = 0.

First Order Taylor Series Approximation
Setting u_{i+1} = v_{i+1} = 0 and defining Δx = x_{i+1} - x_i and Δy = y_{i+1} - y_i, the equations in matrix form are

  [ ∂u_i/∂x  ∂u_i/∂y ] {Δx}   {-u_i}
  [ ∂v_i/∂x  ∂v_i/∂y ] {Δy} = {-v_i}

This can be solved for x_{i+1} and y_{i+1}:

  x_{i+1} = x_i - (u_i ∂v_i/∂y - v_i ∂u_i/∂y) / J
  y_{i+1} = y_i - (v_i ∂u_i/∂x - u_i ∂v_i/∂x) / J

where J = (∂u_i/∂x)(∂v_i/∂y) - (∂u_i/∂y)(∂v_i/∂x). THE DENOMINATOR OF EACH OF THESE EQUATIONS IS FORMALLY REFERRED TO AS THE DETERMINANT OF THE JACOBIAN. This is a 2 equation version of Newton-Raphson; or, alternately, the same equations in Δ form are x_{i+1} = x_i + Δx and y_{i+1} = y_i + Δy.

Example
• Determine the roots of the nonlinear simultaneous equations y = -x² + x + 0.5 and y + 5xy = x³
• u(x, y) = -x² + x + 0.5 - y = 0 and v(x, y) = y + 5xy - x³ = 0
• Use an initial estimate of x = 0, y = 1

Example cont.
The partial derivatives are

  ∂u/∂x = -2x + 1     ∂u/∂y = -1
  ∂v/∂x = 5y - 3x²    ∂v/∂y = 1 + 5x

First iteration (x = 0, y = 1): J = (1)(1) - (-1)(5) = 6, u = -0.5, v = 1, giving

  x = 0 - [(-0.5)(1) - (1)(-1)] / 6 = -0.08333
  y = 1 - [(1)(1) - (-0.5)(5)] / 6 = 0.41667

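The two-equation iteration above can be sketched as follows (Python, an assumption); the first pass reproduces x = -0.08333, y = 0.41667:

```python
# Two-equation Newton-Raphson for u(x,y) = -x^2 + x + 0.5 - y and
# v(x,y) = y + 5xy - x^3, from the initial estimate x = 0, y = 1.
def u(x, y):
    return -x**2 + x + 0.5 - y

def v(x, y):
    return y + 5.0 * x * y - x**3

x, y = 0.0, 1.0
for _ in range(20):
    ux, uy = -2.0 * x + 1.0, -1.0                   # partials of u
    vx, vy = 5.0 * y - 3.0 * x**2, 1.0 + 5.0 * x    # partials of v
    J = ux * vy - uy * vx                           # determinant of the Jacobian
    x, y = (x - (u(x, y) * vy - v(x, y) * uy) / J,  # both updates use the
            y - (v(x, y) * ux - u(x, y) * vx) / J)  # old (x, y) values
# the iterates settle on a root where u and v are both zero
```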
System of Linear Equations
• We have focused our last lectures on finding the value of x that satisfies a single equation, f(x) = 0
• Now we will deal with the case of determining the values of x₁, x₂, ..., xₙ that simultaneously satisfy a set of equations

System of Linear Equations
• Simultaneous equations
  – f₁(x₁, x₂, ..., xₙ) = 0
  – f₂(x₁, x₂, ..., xₙ) = 0
  – ...
  – fₙ(x₁, x₂, ..., xₙ) = 0
• Methods will be for linear equations
  – a₁₁x₁ + a₁₂x₂ + ... + a₁ₙxₙ = c₁
  – a₂₁x₁ + a₂₂x₂ + ... + a₂ₙxₙ = c₂
  – ...
  – aₙ₁x₁ + aₙ₂x₂ + ... + aₙₙxₙ = cₙ

Mathematical Background: Matrix Notation

        [ a₁₁ a₁₂ a₁₃ ... a₁ₙ ]
  [A] = [ a₂₁ a₂₂ a₂₃ ... a₂ₙ ]
        [  .   .   .       .  ]
        [ aₘ₁ aₘ₂ aₘ₃ ... aₘₙ ]

• A horizontal set of elements is called a row; a vertical set is called a column
• The first subscript refers to the row number, the second subscript to the column number
• This matrix has m rows and n columns; it has the dimensions m by n (m × n)
• Note the consistent scheme, with subscripts denoting row, column

Row vector (m = 1):   [B] = [b₁ b₂ ... bₙ]

Column vector (n = 1):   {C} = {c₁, c₂, ..., cₘ} written vertically

Square matrix (m = n):

        [ a₁₁ a₁₂ a₁₃ ]
  [A] = [ a₂₁ a₂₂ a₂₃ ]
        [ a₃₁ a₃₂ a₃₃ ]

The diagonal consists of the elements a₁₁, a₂₂, a₃₃.

Special types of square matrices:
• Symmetric matrix
• Diagonal matrix
• Identity matrix
• Upper triangular matrix
• Lower triangular matrix
• Banded matrix

Symmetric Matrix
aᵢⱼ = aⱼᵢ for all i's and j's

        [ 5 1 2 ]
  [A] = [ 1 3 7 ]
        [ 2 7 8 ]

Does a₂₃ = a₃₂? Yes. Check the other elements on your own.

Diagonal Matrix
A square matrix where all elements off the main diagonal are zero

        [ a₁₁  0   0   0  ]
  [A] = [  0  a₂₂  0   0  ]
        [  0   0  a₃₃  0  ]
        [  0   0   0  a₄₄ ]

Identity Matrix
A diagonal matrix where all elements on the main diagonal are equal to 1; the symbol [I] is used to denote the identity matrix

        [ 1 0 0 0 ]
  [I] = [ 0 1 0 0 ]
        [ 0 0 1 0 ]
        [ 0 0 0 1 ]

Upper Triangular Matrix
Elements below the main diagonal are zero

        [ a₁₁ a₁₂ a₁₃ ]
  [A] = [  0  a₂₂ a₂₃ ]
        [  0   0  a₃₃ ]

Lower Triangular Matrix
All elements above the main diagonal are zero

        [ 5 0 0 ]
  [A] = [ 1 3 0 ]
        [ 2 7 8 ]

Banded Matrix
All elements are zero with the exception of a band centered on the main diagonal

        [ a₁₁ a₁₂  0   0  ]
  [A] = [ a₂₁ a₂₂ a₂₃  0  ]
        [  0  a₃₂ a₃₃ a₃₄ ]
        [  0   0  a₄₃ a₄₄ ]

Matrix Operating Rules
• Addition/subtraction: add/subtract corresponding terms, aᵢⱼ ± bᵢⱼ = cᵢⱼ
• Addition/subtraction are commutative: [A] + [B] = [B] + [A]
• Addition/subtraction are associative: [A] + ([B] + [C]) = ([A] + [B]) + [C]
• Multiplication of a matrix [A] by a scalar g is obtained by multiplying every element of [A] by g
• The product of two matrices is represented as [C] = [A][B], where

  cᵢⱼ = Σ (k = 1 to n) aᵢₖ bₖⱼ

  with n = column dimension of [A] = row dimension of [B]

A simple way to check whether matrix multiplication is possible:

  [A]      [B]      [C]
  m × n    n × k  = m × k

The interior dimensions must be equal; the exterior dimensions give the dimension of the resulting matrix.

Matrix multiplication
• If the dimensions are suitable, matrix multiplication is associative: ([A][B])[C] = [A]([B][C])
• If the dimensions are suitable, matrix multiplication is distributive: ([A] + [B])[C] = [A][C] + [B][C]
• Multiplication is generally not commutative: [A][B] is not equal to [B][A]

Inverse of [A]:   [A][A]⁻¹ = [A]⁻¹[A] = [I]

Transpose of [A]: [A]ᵀ, obtained by interchanging rows and columns (element aᵢⱼ becomes aⱼᵢ)

Determinants
Denoted as det A or |A|. For a 2 × 2 matrix:

  | a b |
  | c d | = ad - bc

Determinants cont.
There are different schemes used to compute the determinant. Consider cofactor expansion, which uses minors and cofactors of the matrix:
• Minor: the minor of an entry aᵢⱼ is the determinant of the submatrix obtained by deleting the ith row and the jth column
• Cofactor: the cofactor Aᵢⱼ of an entry aᵢⱼ of an n × n matrix [A] is the product of (-1)^(i+j) and the minor of aᵢⱼ

Example: the minor of a₃₂ for a 3 × 3 matrix is the 2 × 2 determinant left after deleting row 3 and column 2:

  minor of a₃₂ = | a₁₁ a₁₃ |
                 | a₂₁ a₂₃ |

To calculate A₃₁ for a 3 × 3 matrix, first calculate the minor of a₃₁ (delete row 3 and column 1):

  minor of a₃₁ = | a₁₂ a₁₃ |
                 | a₂₂ a₂₃ | = a₁₂a₂₃ - a₂₂a₁₃

then A₃₁ = (-1)^(3+1) (a₁₂a₂₃ - a₂₂a₁₃).

Minors and cofactors are used to calculate the determinant of a matrix. Expanding around the ith row:

  |A| = aᵢ₁Aᵢ₁ + aᵢ₂Aᵢ₂ + ... + aᵢₙAᵢₙ   (for any one value of i)

Expanding around the jth column:

  |A| = a₁ⱼA₁ⱼ + a₂ⱼA₂ⱼ + ... + aₙⱼAₙⱼ   (for any one value of j)

For a 3 × 3 matrix, expanding along the first row:

  D = a₁₁(a₂₂a₃₃ - a₂₃a₃₂) - a₁₂(a₂₁a₃₃ - a₂₃a₃₁) + a₁₃(a₂₁a₃₂ - a₂₂a₃₁)

Example: calculate the determinant of the following 3 × 3 matrix, first using the 1st row (the way you probably have done it all along), then using the 2nd row:

  [ 1 7 9 ]
  [ 4 3 2 ]
  [ 6 1 5 ]

Properties of Determinants
• det A = det Aᵀ
• If all entries of any row or column are zero, then det A = 0
• If two rows or two columns are identical, then det A = 0

How to represent a system of linear equations as a matrix [A]{X} = {C} where {X} and {C} are both column vectors .

67 0.3 0.5x1  x 2  1.01  0 .01 0.9   2      0.3 0.44 A {X}  {C} 0.9 x 3  0.67  1 1.5x 3  0.5  x    0.3x 2  0.1x1  0.5  x 3   0.1 0.3x1  0.0.44      .52 1   x1    0.52 x 2  x 3  0.

x2 ) x1 .Graphical Method 2 equations. 2 unknowns a 11x1  a 12x 2  c1 a 21x1  a 22x 2  c 2  a 11  c1 x 2    a  x1  a  12  12   a 21  c2 x 2    a  x1  a  22  22  x2 ( x1.

3) 1 2 1 x1 Check: 3(4) + 2(3) = 12 + 6 = 18 .x2 3x1  2 x 2  18  x1  2 x 2  2 3 x 2    x1  9 2  1  x 2    x1  1  2  9 3 2 (4.

Special Cases
• No solution
• Infinite solutions
• Ill-conditioned

a) No solution: the lines have the same slope but different intercepts
b) Infinite solutions: the lines coincide (e.g. -x1/2 + x2 = 1 and -x1 + 2 x2 = 2 are the same line)
c) Ill-conditioned: the slopes are so close that the point of intersection is difficult to detect visually

Let's consider how we know if a system is ill-conditioned. Start by considering systems where the slopes are identical; if the determinant is zero, the slopes are identical:
  a11 x1 + a12 x2 = c1
  a21 x1 + a22 x2 = c2
Rearrange these equations so that we have an alternative version in the form of a straight line, i.e. x2 = (slope) x1 + intercept.

  x2 = -(a11/a12) x1 + c1/a12
  x2 = -(a21/a22) x1 + c2/a22
If the slopes are nearly equal (ill-conditioned):
  a11/a12 ≈ a21/a22
  a11 a22 ≈ a21 a12
  a11 a22 - a21 a12 ≈ 0
Isn't this the determinant?
  | a11 a12 | = det A
  | a21 a22 |

If the determinant is zero, the slopes are equal. This can mean:
  - no solution
  - an infinite number of solutions
If the determinant is close to zero, the system is ill-conditioned. So it seems that we should check the determinant of a system before any further calculations are done. Let's try an example.

Example
Determine whether the following system is ill-conditioned:
  37.2 x1 + 4.7 x2 = 22
  19.2 x1 + 2.5 x2 = 12

Solution
  det = 37.2(2.5) - 4.7(19.2) = 2.76
What does this tell us? Is this close to zero? Hard to say. If we scale the matrix first, i.e. divide each row by its largest a value, we can get a better sense of things.

 0. Clearly the slopes are nearly equal 1 0126 .x 0 0 -20 -40 -60 -80 y 5 10 15 This is further justified when we consider a graph of the two functions.004 1 0130 . .

Cramer’s Rule
• Not efficient for solving large numbers of linear equations
• Useful for explaining some inherent problems associated with solving linear equations
  | a11 a12 a13 | | x1 |   | b1 |
  | a21 a22 a23 | | x2 | = | b2 |     [A]{x} = {b}
  | a31 a32 a33 | | x3 |   | b3 |

Cramer’s Rule: to solve for xi, place {b} in the ith column and divide by the determinant of [A]:
       | b1 a12 a13 |
  x1 = | b2 a22 a23 | / |A|
       | b3 a32 a33 |


       | b1 a12 a13 |             | a11 b1 a13 |             | a11 a12 b1 |
  x1 = | b2 a22 a23 | / |A|  x2 = | a21 b2 a23 | / |A|  x3 = | a21 a22 b2 | / |A|
       | b3 a32 a33 |             | a31 b3 a33 |             | a31 a32 b3 |
To solve for xi, place {b} in the ith column.

Example: Use of Cramer’s Rule
  2 x1 + 3 x2 = 5
    x1 -   x2 = 5

  | 2  3 | | x1 |   | 5 |
  | 1 -1 | | x2 | = | 5 |

2 1   3  x1  5  x   5 1  2    Note the substitution of {c} in [A] A  2 1   31  2  3  5 15 x1  55 1 2 x2  51 3 1 20   51   35  4 1 5 5 5 1 5   2 5  51   1 5 5 5 .

Elimination of Unknowns (algebraic approach)
  a11 x1 + a12 x2 = c1
  a21 x1 + a22 x2 = c2
Multiply the first equation by a21 and the second by a11:
  a21 a11 x1 + a21 a12 x2 = a21 c1
  a21 a11 x1 + a11 a22 x2 = a11 c2    SUBTRACT

Elimination of Unknowns (algebraic approach)
  a21 a11 x1 + a21 a12 x2 = a21 c1
  a21 a11 x1 + a11 a22 x2 = a11 c2    SUBTRACT
  a12 a21 x2 - a22 a11 x2 = c1 a21 - c2 a11

  x2 = (a21 c1 - a11 c2) / (a12 a21 - a22 a11)
  x1 = (a22 c1 - a12 c2) / (a11 a22 - a12 a21)
NOTE: same result as Cramer’s Rule

Gauss Elimination
• One of the earliest methods developed for solving simultaneous equations
• An important algorithm in use today
• Involves combining equations in order to eliminate unknowns

Blind (Naive) Gauss Elimination
• Technique for larger matrices
• Same principles of elimination
  – manipulate equations to eliminate an unknown from an equation
  – solve directly, then back-substitute into one of the original equations

Two Phases of Gauss Elimination
Forward elimination reduces the augmented matrix to upper triangular form:
  | a11 a12 a13 | c1 |        | a11 a12  a13  | c1  |
  | a21 a22 a23 | c2 |   →    | 0   a22' a23' | c2' |
  | a31 a32 a33 | c3 |        | 0   0    a33''| c3''|
Note: the prime indicates the number of times the element has changed from its original value. Also, the extra column for the c's makes this matrix form what is called augmented.

Two Phases of Gauss Elimination
Back substitution then solves from the bottom row up:
  x3 = c3'' / a33''
  x2 = (c2' - a23' x3) / a22'
  x1 = (c1 - a12 x2 - a13 x3) / a11

Example
  2 x1 +   x2 + 3 x3 = 1
  4 x1 + 4 x2 + 7 x3 = 1
  2 x1 + 5 x2 + 9 x3 = 3

Multiply equation 1 by a21 / a11 = 4/2 = 2 (called the pivot factor for row 2):

  2 x1 +   x2 + 3 x3 = 1          4 x1 + 2 x2 + 6 x3 = 2
  4 x1 + 4 x2 + 7 x3 = 1    →     4 x1 + 4 x2 + 7 x3 = 1
  2 x1 + 5 x2 + 9 x3 = 3          2 x1 + 5 x2 + 9 x3 = 3

Subtract the (temporary) revised first equation from the second equation

4  4 x1  4  2 x 2  7  6 x 3  1  2
0x1  2x 2  x 3  1

Multiply equation 1 by a31/a11 = 2/2 = 1 and subtract it from equation 3

2  2 x1  5  1 x 2  9  3 x 3  3  1
0 x1  4 x 2  6 x 3  2
The equivalent matrix algorithm for calculating the revised entries in row 3 (a31, a32, a33, and a34 of the augmented matrix) is given below; note that a31 can simply be assumed to be 0.

  a'ij = aij - akj (aik / akk)

where i is the row being modified (3 in this case), j is the column in row i (1, 2, 3, and 4 in this case), and k is the index of the x being eliminated (1 in this case).

We now replace equations 2 and 3 in the revised matrix (Note that 1st eq. is still original)

  2 x1 +   x2 + 3 x3 = 1          2 x1 + x2 + 3 x3 = 1
  4 x1 + 4 x2 + 7 x3 = 1    →          2 x2 +   x3 = -1
  2 x1 + 5 x2 + 9 x3 = 3               4 x2 + 6 x3 = 2
Continue the computation by multiplying the second equation by a32'/a22' = 4/2 = 2 and subtracting it from the third equation:

  2 x1 + x2 + 3 x3 = 1          2 x1 + x2 + 3 x3 = 1
       2 x2 +   x3 = -1   →          2 x2 +   x3 = -1
       4 x2 + 6 x3 = 2                      4 x3 = 4
THIS DERIVATION OF AN UPPER TRIANGULAR MATRIX IS CALLED THE FORWARD ELIMINATION PROCESS

From the system we immediately calculate:

  2 x1 + x2 + 3 x3 = 1
       2 x2 +   x3 = -1
              4 x3 = 4
THIS SERIES OF STEPS IS THE BACK SUBSTITUTION

  x3 = 4/4 = 1
Continue to back substitute

  x2 = (-1 - 1(1)) / 2 = -1
  x1 = (1 - (-1) - 3(1)) / 2 = -1/2
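The forward-elimination and back-substitution phases just worked by hand translate into a short routine. A Python sketch of naive Gauss elimination (variable names are mine; no pivoting, so a zero pivot would fail):

```python
def naive_gauss(A, c):
    """Naive Gauss elimination: forward elimination, then back substitution.
    No pivoting, so it fails if a zero pivot is encountered."""
    n = len(A)
    A = [row[:] for row in A]          # work on copies
    c = c[:]
    # forward elimination: build the upper triangular system
    for k in range(n - 1):
        for i in range(k + 1, n):
            f = A[i][k] / A[k][k]      # pivot factor
            for j in range(k, n):
                A[i][j] -= f * A[k][j]
            c[i] -= f * c[k]
    # back substitution, bottom row up
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (c[i] - s) / A[i][i]
    return x

# the slide example
print(naive_gauss([[2, 1, 3], [4, 4, 7], [2, 5, 9]], [1, 1, 3]))  # [-0.5, -1.0, 1.0]
```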

Pitfalls of the Elimination Method
• Division by zero
• Round-off errors
  – magnitude of the pivot element is small compared to the other elements
• Ill-conditioned systems

Division by Zero
• When we normalize, i.e. form a21/a11, we need to make sure we are not dividing by zero
• Problems may also arise if the coefficient is very close to zero

  2 x2 + 3 x3 = 8
  4 x1 + 6 x2 + 7 x3 = -3
  2 x1 + x2 + 6 x3 = 5

Techniques for Improving the Solution
• Use of more significant figures
• Pivoting
• Scaling

  | a11 a12 a13 | | x1 |   | b1 |
  | a21 a22 a23 | | x2 | = | b2 |     [A]{x} = {b}
  | a31 a32 a33 | | x3 |   | b3 |

Use of More Significant Figures
• Simplest remedy for ill-conditioning
• Extend precision

Pivoting
• Problems occur when the pivot element is zero - division by zero
• Problems also occur when the pivot element is smaller in magnitude than the other elements (i.e. round-off errors)
• Prior to normalizing, determine the largest available coefficient

Pivoting
• Partial pivoting
  – rows are switched so that the largest element is the pivot element
• Complete pivoting
  – columns as well as rows are searched for the largest element and switched
  – rarely used, because switching columns changes the order of the x's, adding unjustified complexity to the computer program

Division by Zero - Solution
Pivoting has been developed to partially avoid these problems. Switch rows so the largest coefficient of x1 is the pivot:
  2 x2 + 3 x3 = 8               4 x1 + 6 x2 + 7 x3 = -3
  4 x1 + 6 x2 + 7 x3 = -3   →   2 x2 + 3 x3 = 8
  2 x1 + x2 + 6 x3 = 5          2 x1 + x2 + 6 x3 = 5

Scaling
• Minimizes round-off errors for cases where some of the equations in a system have much larger coefficients than others
• In engineering practice, this is often due to the widely different units used in developing the simultaneous equations
• As long as each equation is consistent, the system will be technically correct and solvable

Scaling
Pivot rows to put the greatest value on the diagonal:
  2 x1 + 100,000 x2 = 100,000
    x1 +         x2 = 2
Scaling the first equation by its largest coefficient (divide by 100,000) reveals how small its x1 coefficient really is:
  0.00002 x1 + x2 = 1
          x1 + x2 = 2
The exact solution is x1 = 1.00002, x2 = 0.99998 (to five significant figures); scaling plus pivoting lets elimination recover it instead of losing x1 to round-off.

EXAMPLE
Use Gauss elimination to solve the following set of linear equations:
          3 x2 - 13 x3 = -50
  2 x1 - 6 x2 +    x3 = 45
  4 x1        +  8 x3 = 4

SOLUTION
First write in matrix form, employing the shorthand presented in class:
  | 0  3 -13 | -50 |
  | 2 -6   1 |  45 |
  | 4  0   8 |   4 |
We will clearly run into problems of division by zero. Use partial pivoting.

 0  2   4   4  2   0   4  0   0  3 6 0 0 6 3 0 6 3  13 1 8 8 1  13 8 3  13           50 45   4   4  45    50  4  43    50  Begin developing upper triangular matrix .

 4  0   0       4 0 0 0 6 3 0 6 0 8 3  13    4  43    50     4  43    28.end of problem .966  x3   1.966  x1   2.5  28..966 x 2   8.931 4 CHECK 3 8.5 43  31.966   50 okay .149  14.5  8 3  14.5 6 4  81.149   131..

Matrix Inversion with the Gauss-Jordan Method
• Gauss-Jordan (like Gauss elimination - a direct solution)
  – the primary motive for introducing this method is that it provides a simple and convenient means of computing the matrix inverse
• Gauss-Seidel (Ch. 11)
  – fundamentally different from Gauss elimination - an approximate, iterative method
  – particularly good for large numbers of equations

Gauss-Jordan
• Variation of Gauss elimination
• When an unknown is eliminated, it is eliminated from all other equations, rather than just the subsequent ones
• All rows are normalized by dividing them by their pivot elements
• The elimination step results in an identity matrix rather than an upper triangular matrix:
  | a11 a12 a13 |        | 1 0 0 |
  | 0   a22 a23 |  vs.   | 0 1 0 |
  | 0   0   a33 |        | 0 0 1 |

Graphical depiction of Gauss-Jordan:
  | a11 a12 a13 | c1 |        | 1 0 0 | c1(n) |        x1 = c1(n)
  | a21 a22 a23 | c2 |   →    | 0 1 0 | c2(n) |   →    x2 = c2(n)
  | a31 a32 a33 | c3 |        | 0 0 1 | c3(n) |        x3 = c3(n)

Matrix Inversion
• [A][A]^-1 = [A]^-1[A] = [I]
• One application of the inverse is to solve several systems differing only by {c}
  – [A]{x} = {c}
  – [A]^-1[A]{x} = [A]^-1{c}
  – [I]{x} = {x} = [A]^-1{c}
• One quick method to compute the inverse is to augment [A] with [I] instead of {c}

Graphical Depiction of the Gauss-Jordan Method with Matrix Inversion:
  | a11 a12 a13 | 1 0 0 |        | 1 0 0 | a11^-1 a12^-1 a13^-1 |
  | a21 a22 a23 | 0 1 0 |   →    | 0 1 0 | a21^-1 a22^-1 a23^-1 |
  | a31 a32 a33 | 0 0 1 |        | 0 0 1 | a31^-1 a32^-1 a33^-1 |
        [A | I]                         [I | A^-1]
Note: the superscript "-1" denotes that the original values have been converted to the matrix inverse, not 1/aij.

EXAMPLE
Use the Gauss-Jordan method to solve the following problem. Note: you can also use this system to practice the Gauss elimination method.
   4 x1 + 5 x2 - 6 x3 = 28
   2 x1        - 7 x3 = 29
  -5 x1 - 5 x2        = -65

SOLUTION
First, write the coefficients and the right-hand vector as an augmented matrix:
  |  4  5 -6 |  28 |
  |  2  0 -7 |  29 |
  | -5 -5  0 | -65 |

Normalize the first row by dividing by the pivot element, 4 (example: -6/4 = -1.5):
  |  1  1.25 -1.5 |   7 |
  |  2  0    -7   |  29 |
  | -5 -5     0   | -65 |

The x1 term in the second row can be eliminated by subtracting 2 times the first row from the second row; in other words, we want that term to be zero. All values in the row change, following the same algorithm:
  a21:  2 - (2)(1)     = 0
  a22:  0 - (2)(1.25)  = -2.5
  a23: -7 - (2)(-1.5)  = -4
  c2:  29 - (2)(7)     = 15

  |  1  1.25 -1.5 |   7 |
  |  0 -2.5  -4   |  15 |
  | -5 -5     0   | -65 |

 1  0   5   1  0   0  1.25  7.25  1.5  2.5  2.5  4 1.25  1.5       7  15    65   7  15    30   The x1 term in the third row can be eliminated by subtracting -5 times the first row from the second row .5  4 5 0 1.

Now we need to remove the x2 term from the 1st and 3rd equations. Normalize the second row by a22 = -2.5 (e.g. -4/(-2.5) = 1.6 and 15/(-2.5) = -6):
  | 1  1.25  -1.5 |   7 |
  | 0  1      1.6 |  -6 |
  | 0  1.25  -7.5 | -30 |

The x2 term in the first row is eliminated by subtracting 1.25 times the second row from the first row. Recall that 1.25 is exactly the factor needed to establish the zero at a12:
  a12:  1.25 - (1.25)(1)   = 0
  a13: -1.5  - (1.25)(1.6) = -3.5
  c1:   7    - (1.25)(-6)  = 14.5
Follow the same scheme for the third row, subtracting 1.25 times the second row:
  a33: -7.5 - (1.25)(1.6) = -9.5
  c3:  -30  - (1.25)(-6)  = -22.5

  | 1  0  -3.5 |  14.5 |
  | 0  1   1.6 |  -6   |
  | 0  0  -9.5 | -22.5 |

 1  0   0   1  0   0  0 1 0 0 1 0 35 . 16 ..79  9.5  6   22.5 0 0 1       14. 9..5  22. we will complete the identity matrix .end of problem .37   Now we need to reduce x3 from the 1st and 2nd equation In addition.79  2....

LU Decomposition Methods (Chapter 10)
Two fundamentally different approaches for solving systems of linear algebraic equations:
• Elimination methods
  – Gauss elimination
  – Gauss-Jordan
  – LU decomposition methods
• Iterative methods
  – Gauss-Seidel
  – Jacobi

Naive LU Decomposition
• [A]{x} = {c}
• Suppose this can be rearranged as an upper triangular system with 1's on the diagonal: [U]{x} = {d}
• [A]{x} - {c} = 0 and [U]{x} - {d} = 0
• Assume that a lower triangular matrix [L] exists with the property
  [L]{[U]{x} - {d}} = [A]{x} - {c}

Naive LU Decomposition
• [L]{[U]{x} - {d}} = [A]{x} - {c}
• Then, from the rules of matrix multiplication:
  [L][U] = [A]
  [L]{d} = {c}
• [L][U] = [A] is referred to as the LU decomposition of [A]. After it is accomplished, solutions can be obtained very efficiently by a two-step substitution procedure

Consider how Gauss elimination can be formulated as an LU decomposition. [U] is a direct product of the forward elimination step, if each row is scaled by its diagonal element:
        | 1  u12 u13 |
  [U] = | 0  1   u23 |
        | 0  0   1   |

Although not as apparent, the matrix [L] is also produced during the step. This can be readily illustrated for a three-equation system:
  | a11 a12 a13 | | x1 |   | c1 |
  | a21 a22 a23 | | x2 | = | c2 |
  | a31 a32 a33 | | x3 |   | c3 |
The first step is to multiply row 1 by the factor
  f21 = a21 / a11
Subtracting the result from the second row eliminates a21.

row 1 is multiplied by f 31 a 31  a 11 The result is subtracted from the third row to eliminate a31 In the final step for a 3 x 3 system is to multiply the modified row by a '32 f 32  a '22 Subtract the results from the third row to eliminate a32 . a11 a  21 a 31  a12 a 22 a 32 a13   x1   c1   x   c  a 23   2   2  a 33   x 3  c3      Similarly.

The values f21, f31, f32 are in fact the elements of an [L] matrix:
        | 1   0   0 |
  [L] = | f21 1   0 |
        | f31 f32 1 |
CONSIDER HOW THIS RELATES TO THE LU DECOMPOSITION METHOD TO SOLVE FOR {x}

  [A]{x} = {c}   →   decompose [A] into [L] and [U]
  [L]{d} = {c}   →   forward substitution for {d}
  [U]{x} = {d}   →   back substitution for {x}

Crout Decomposition
• The Gauss elimination method involves two major steps
  – forward elimination
  – back substitution
• Efforts at improvement focused on the development of improved elimination methods
• One such method is Crout decomposition

Crout Decomposition
Represents an efficient algorithm for decomposing [A] into [L] and [U]:
  | l11 0   0   | | 1 u12 u13 |   | a11 a12 a13 |
  | l21 l22 0   | | 0 1   u23 | = | a21 a22 a23 |
  | l31 l32 l33 | | 0 0   1   |   | a31 a32 a33 |

Recall the rules of matrix multiplication. The first step is to multiply the rows of [L] by the first column of [U]:
  a11 = l11(1) + 0(0) + 0(0) = l11
  a21 = l21
  a31 = l31
Thus the first column of [A] is the first column of [L].

Next we multiply the first row of [L] by the columns of [U] to get
  l11 = a11
  l11 u12 = a12
  l11 u13 = a13

  l11 = a11
  u12 = a12 / l11
  u13 = a13 / l11
Once the first row of [U] is established, the operation can be represented concisely:
  u1j = a1j / l11   for j = 2, 3, ..., n

  u1j = a1j / l11   for j = 2, 3, ..., n

Schematic depicting Crout decomposition: the algorithm sweeps out the columns of [L] and the rows of [U] in alternation.


  li1 = ai1                                        for i = 1, 2, ..., n
  u1j = a1j / l11                                  for j = 2, 3, ..., n

For j = 2, 3, ..., n - 1:
  lij = aij - Σ(k=1 to j-1) lik ukj                for i = j, j+1, ..., n
  ujk = ( ajk - Σ(i=1 to j-1) lji uik ) / ljj      for k = j+1, j+2, ..., n

  lnn = ann - Σ(k=1 to n-1) lnk ukn

The Substitution Step
• [L]{[U]{x} - {d}} = [A]{x} - {c}
• [L][U] = [A]
• [L]{d} = {c}
• [U]{x} = {d}
• Recall our earlier graphical depiction of the LU decomposition method

  [A]{x} = {c}   →   decompose [A] into [L] and [U]
  [L]{d} = {c}   →   forward substitution for {d}
  [U]{x} = {d}   →   back substitution for {x}

Forward substitution:
  d1 = c1 / l11
  di = ( ci - Σ(j=1 to i-1) lij dj ) / lii   for i = 2, 3, ..., n

Back substitution (recall [U]{x} = {d}):
  xn = dn
  xi = di - Σ(j=i+1 to n) uij xj   for i = n-1, n-2, ..., 1

Example
Use LU decomposition to solve the following system:
  | 1 3 2 | | x1 |   | 15 |
  | 2 4 3 | | x2 | = | 22 |
  | 3 4 7 | | x3 |   | 39 |

  li1 = ai1 for i = 1, 2, ..., n:
  l11 = a11 = 1
  l21 = a21 = 2
  l31 = a31 = 3

  u1j = a1j / l11 for j = 2, 3, ..., n:
  u12 = 3/1 = 3
  u13 = 2/1 = 2

  lij = aij - Σ lik ukj for i = j, ..., n:
  l22 = a22 - l21 u12 = 4 - 2(3) = -2
  l32 = a32 - l31 u12 = 4 - 3(3) = -5

.5 k 1 31 . j  2.n 3  22    0.u jk  a jk    ji u ik i 1 j1  jj a 23    2i u i 3 i 1 2 1 for k  j  1.5  3...5 2 n 1 k 1 u 23   22  nn  a nn    nk u kn  33  a 33    3k u k 3  7  32   50.

Therefore the L and U matrices are
        | 1  0   0   |         | 1 3 2   |
  [L] = | 2 -2   0   |   [U] = | 0 1 0.5 |
        | 3 -5   3.5 |         | 0 0 1   |
Recall the original column matrix {c} = {15, 22, 39} and solve [L]{d} = {c}:
  d1 = c1 / l11 = 15/1 = 15

Forward substitution, di = ( ci - Σ lij dj ) / lii for i = 2, 3, ..., n:
  d2 = ( 22 - 2(15) ) / (-2) = 4
  d3 = ( 39 - 3(15) - (-5)(4) ) / 3.5 = 4

Back substitution:
  x3 = d3 = 4
  x2 = d2 - u23 x3 = 4 - 0.5(4) = 2
  x1 = d1 - u12 x2 - u13 x3 = 15 - 3(2) - 2(4) = 1
...end of example
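The Crout recurrences and the two-step substitution, as a Python sketch (loop indices follow the formulas above but 0-based; names are mine):

```python
def crout(A):
    """Crout decomposition: [L][U] = [A] with 1's on the diagonal of [U]."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[float(i == j) for j in range(n)] for i in range(n)]
    for j in range(n):
        for i in range(j, n):          # column j of [L]
            L[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(j))
        for k in range(j + 1, n):      # row j of [U]
            U[j][k] = (A[j][k] - sum(L[j][i] * U[i][k] for i in range(j))) / L[j][j]
    return L, U

def lu_solve(L, U, c):
    """Forward substitution [L]{d} = {c}, then back substitution [U]{x} = {d}."""
    n = len(c)
    d = [0.0] * n
    for i in range(n):
        d[i] = (c[i] - sum(L[i][j] * d[j] for j in range(i))) / L[i][i]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):     # unit diagonal of [U]: no division
        x[i] = d[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))
    return x

# the slide example
L, U = crout([[1, 3, 2], [2, 4, 3], [3, 4, 7]])
print(L)  # lower triangle matches the slides: columns (1,2,3), (-2,-5), (3.5)
print(lu_solve(L, U, [15, 22, 39]))  # [1.0, 2.0, 4.0]
```

Once [L] and [U] are stored, `lu_solve` can be reused cheaply for any new right-hand side {c}.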

Matrix Inversion
• [A][A]^-1 = [A]^-1[A] = [I]
• One application of the inverse is to solve several systems differing only by {c}
  – [A]{x} = {c}
  – [A]^-1[A]{x} = [A]^-1{c}
  – [I]{x} = {x} = [A]^-1{c}
• One quick method to compute the inverse is to augment [A] with [I] instead of {c}

Graphical Depiction of the Gauss-Jordan Method with Matrix Inversion:
  | a11 a12 a13 | 1 0 0 |        | 1 0 0 | a11^-1 a12^-1 a13^-1 |
  | a21 a22 a23 | 0 1 0 |   →    | 0 1 0 | a21^-1 a22^-1 a23^-1 |
  | a31 a32 a33 | 0 0 1 |        | 0 0 1 | a31^-1 a32^-1 a33^-1 |
        [A | I]                         [I | A^-1]
Note: the superscript "-1" denotes that the original values have been converted to the matrix inverse, not 1/aij.

Matrix Inversion with the LU Decomposition Method
Decompose [A] once, then solve [L]{d}(j) = {e}(j) for each column {e}(j) of the identity matrix:
  | l11 0   0   | | d1(1) |   | 1 |    | l11 0   0   | | d1(2) |   | 0 |    | l11 0   0   | | d1(3) |   | 0 |
  | l21 l22 0   | | d2(1) | = | 0 |    | l21 l22 0   | | d2(2) | = | 1 |    | l21 l22 0   | | d2(3) | = | 0 |
  | l31 l32 l33 | | d3(1) |   | 0 |    | l31 l32 l33 | | d3(2) |   | 0 |    | l31 l32 l33 | | d3(3) |   | 1 |
Back substituting [U]{x} = {d}(1) then gives the first column of [A]^-1, and so forth for {d}(2) through {d}(n).

Chapter 11: Iterative Methods
• Solution by Gauss-Seidel iteration
• Solution by Jacobi iteration

Gauss-Seidel Method
• An iterative approach
• Continue until we converge within some pre-specified tolerance of error
• Round-off is no longer an issue, since you control the level of error that is acceptable

Gauss-Seidel Method
• If the diagonal elements are all nonzero, the first equation can be solved for x1:
  x1 = ( c1 - a12 x2 - a13 x3 - ... - a1n xn ) / a11
• Solve the second equation for x2, etc. To assure that you understand this, write the equation for x2.

  x1 = ( c1 - a12 x2 - a13 x3 - ... - a1n xn ) / a11
  x2 = ( c2 - a21 x1 - a23 x3 - ... - a2n xn ) / a22
  x3 = ( c3 - a31 x1 - a32 x2 - ... - a3n xn ) / a33
  ...
  xn = ( cn - an1 x1 - an2 x2 - ... - an,n-1 xn-1 ) / ann

Gauss-Seidel Method
• Start the solution process by guessing values of x
• A simple way to obtain initial guesses is to assume that they are all zero
• Calculate new values of xi, starting with x1 = c1/a11
• Progressively substitute through the equations
• Repeat until the tolerance is reached

Gauss-Seidel Method for a 3-equation system:
  x1 = ( c1 - a12 x2 - a13 x3 ) / a11
  x2 = ( c2 - a21 x1 - a23 x3 ) / a22
  x3 = ( c3 - a31 x1 - a32 x2 ) / a33

First sweep, starting from x2 = x3 = 0:
  x1 = ( c1 - a12(0) - a13(0) ) / a11 = c1/a11 = x1'
  x2 = ( c2 - a21 x1' - a23(0) ) / a22 = x2'
  x3 = ( c3 - a31 x1' - a32 x2' ) / a33 = x3'

2 4  3  3 1 2 1 2 1    2 2  1  .Example Given the following augmented matrix. complete one iteration of the Gauss Seidel method.

2 4  3  3 1 2 1 2 1    2 2  1  GAUSS SEIDEL x 1  c1  a 1 2 0  a 1 3 0  / a 1 1 c1 2   30    0  1 2 x1   1 2 2 x 2  c 2  a 2 1x '1 a 2 3 0  / a 2 2  x '2 a1 1  x '1  2  4    2 0  1 24 x2    6 1 1 x 3  c 3  a 3 1x '1 a 3 2 x '2  / a 3 3  x '3 x3 1  3   2  6  1 1  3  12    10 1 1 .

Gauss-Seidel Method convergence criterion:
  εa,i = | ( xi^j - xi^(j-1) ) / xi^j | × 100%  <  εs
As in previous iterative procedures for finding roots, we compare the present and previous estimates. As with the open methods we studied previously with one-point iteration:
1. The method can diverge
2. It may converge very slowly

Convergence criteria for two linear equations:
  u(x1, x2) = c1/a11 - (a12/a11) x2
  v(x1, x2) = c2/a22 - (a21/a22) x1
Class question: where do these formulas come from? Consider the partial derivatives of u and v:
  ∂u/∂x1 = 0               ∂u/∂x2 = -a12/a11
  ∂v/∂x1 = -a21/a22        ∂v/∂x2 = 0

u v  1 x x u v  1 y y Criteria for convergence in class text material for nonlinear equations.Convergence criteria for two linear equations cont. Noting that x = x1 and y = x2 Substituting the previous equation: .

  |a21/a22| < 1        |a12/a11| < 1
This is stating that the absolute values of the slopes must be less than unity to ensure convergence. Extended to n equations:
  |aii| > Σ |aij|   for j = 1, ..., n, excluding j = i

Convergence criteria for two linear equations, cont.
  |aii| > Σ |aij|   for j = 1, ..., n, excluding j = i
This condition is sufficient but not necessary for convergence. When it is met, the matrix is said to be diagonally dominant.

Review the concepts of divergence and convergence by graphically illustrating Gauss-Seidel for two linear equations:
  u: 11 x1 + 13 x2 = 286
  v: 11 x1 - 9 x2 = 99
With the initial guess x1 = x2 = 0, alternately solve the 2nd equation for x2 and the 1st equation for x1. On the (x1, x2) plane each step moves between the lines u and v, and the path spirals inward: we are converging on the solution.

Now change the order of the equations, i.e. change the direction of the initial estimates:
  u: 11 x1 + 13 x2 = 286
  v: 11 x1 - 9 x2 = 99
Alternately solving the 2nd equation for x2 and the 1st for x1 in the swapped order, each step now moves farther from the intersection: this solution is diverging!

Improvement of Convergence Using Relaxation
This is a modification that will enhance slow convergence. After each new value of x is computed, calculate a new value based on a weighted average of the present and previous iterations:
  xi_new = λ xi_new + (1 - λ) xi_old

Improvement of Convergence Using Relaxation
  xi_new = λ xi_new + (1 - λ) xi_old
• if λ = 1 → unmodified
• if 0 < λ < 1 → underrelaxation
  – nonconvergent systems may converge
  – may hasten convergence by damping out oscillations
• if 1 < λ < 2 → overrelaxation
  – extra weight is placed on the present value
  – assumes the new value is moving toward the correct solution but too slowly
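The relaxation update drops into the Gauss-Seidel sweep as a one-line weighted average. A Python sketch (the stopping tolerance and the diagonally dominant test system below are my own choices; the slides give only the update formula):

```python
def gauss_seidel_relaxed(A, c, lam=1.0, tol=1e-8, max_iter=200):
    """Gauss-Seidel with relaxation: x_new = lam*x_new + (1 - lam)*x_old.
    lam = 1 is plain Gauss-Seidel; 0 < lam < 1 underrelaxes, 1 < lam < 2 overrelaxes."""
    n = len(A)
    x = [0.0] * n
    for it in range(max_iter):
        worst = 0.0
        for i in range(n):
            old = x[i]
            new = (c[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
            x[i] = lam * new + (1 - lam) * old      # weighted average
            if x[i] != 0:
                worst = max(worst, abs((x[i] - old) / x[i]))
        if worst < tol:                             # relative-change stopping test
            return x, it + 1
    return x, max_iter

# a diagonally dominant system (|aii| > sum of |aij|), so convergence is assured
A = [[4, 1, 1], [1, 5, 2], [1, 2, 6]]
c = [6, 8, 9]
x, sweeps = gauss_seidel_relaxed(A, c, lam=1.0)
print(x, sweeps)
```

Trying other values of lam on the same system shows how under- and overrelaxation change the sweep count.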

Jacobi Iteration
• Iterative, like Gauss-Seidel
• Gauss-Seidel immediately uses each new value of xi in the equations that follow it
• Jacobi calculates all of the new xi values from the previous iteration's complete set before updating any of them

Graphical depiction of the difference between Gauss-Seidel and Jacobi. Both use the same formulas on every iteration:
  x1 = ( c1 - a12 x2 - a13 x3 ) / a11
  x2 = ( c2 - a21 x1 - a23 x3 ) / a22
  x3 = ( c3 - a31 x1 - a32 x2 ) / a33
FIRST AND SECOND ITERATIONS: Gauss-Seidel substitutes each freshly computed xi into the formulas below it within the same iteration, while Jacobi evaluates all three formulas with the previous iteration's complete set of x's before updating any of them.

2 4  3  3 1 2 1 2 1    2 2  1  RECALL GAUSS SEIDEL x1  c1  a 12 0  a 13 0  / a 11 c1 a 11  x '1 2   30   10  2 x1   1 2 2 x 2  c 2  a 21x '1 a 23 0  / a 22  x '2  2  4 1  2 0   2  4 x2    6 1 1 x 3  c 3  a 31x '1 a 32x '2  / a 33  x '3 1  31  2  6  1  3  12 x3    10 1 1 .

.. end of problem 1  30   2 0  1 x3   1 1 1 .2 4  3  3 1 2 1 2 1    2 2  1  JACOBI x1  c1  a 12 0  a 13 0  / a 11 c1  x '1 2   30   10  2 x1   1 2 2 x 2  c 2  a 210  a 23 0  / a 22  x '2 a 11  2  4 0   2 0   2 x2    2 1 1 x 3  c 3  a 310  a 32 0  / a 33  x '3 .
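As a minimal sketch, the two update schemes can be written side by side in Python. The 3x3 system below is the one reconstructed from the garbled worked example above, so treat the specific numbers as an assumption:

```python
def jacobi_step(A, c, x):
    # All updates use values from the previous iterate x.
    n = len(A)
    return [(c[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
            for i in range(n)]

def gauss_seidel_step(A, c, x):
    # Each update immediately reuses the freshest values.
    n = len(A)
    x = list(x)
    for i in range(n):
        x[i] = (c[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
    return x

# System assumed from the worked example on the slide
A = [[2.0, 3.0, 1.0],
     [2.0, 1.0, 2.0],
     [3.0, 2.0, 1.0]]
c = [2.0, -4.0, 1.0]

print(gauss_seidel_step(A, c, [0.0, 0.0, 0.0]))  # first Gauss-Seidel sweep
print(jacobi_step(A, c, [0.0, 0.0, 0.0]))        # first Jacobi sweep
```

One sweep from zeros already shows the difference: Gauss-Seidel gives (1, -6, 10) while Jacobi gives (1, -4, 1).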

Optimization Methods Overview
• Unconstrained Searches
▀ 1-D (one independent variable)
▲ Golden Search
▲ Quadratic Interpolation
▲ Newton's Method
▀ 2-D (two or more independent variables)
▲ Steepest Ascent (Descent)
▲ Conjugate Gradients
▲ Newton and Quasi-Newton Methods
• Constrained Optimization
▀ Linear Programming

Optimization Uses in Engineering
● Maximizing Strength/Weight
● Minimizing Cost or Maximizing Profit
● Minimizing Wait Time
● Maximizing Efficiency
● Minimizing Virtual Work

1-D Searches
• The objective is to maximize a function, called the Objective Function
• 1-D searches have objective functions of one independent variable: f(x)
• Analogy to root finding:
► Bracketed Method → Interval Search or Limited Search
► Open Method → Unlimited or Free Search

Golden Section Search
• Define the search interval as [xl, xu]
• Define the search interval length as ℓ0 = |xu - xl|
• Divide ℓ0 up into ℓ1 and ℓ2 such that ℓ0 = ℓ1 + ℓ2
• Also require that the ratio of the first sub-length to the original length equals the ratio of the second sub-length to the first:
ℓ1/ℓ0 = ℓ2/ℓ1, or ℓ1/(ℓ1 + ℓ2) = ℓ2/ℓ1
• Defining ℓ2/ℓ1 as R and taking the reciprocal of the above equation:
1 + R = 1/R  →  R^2 + R - 1 = 0  →  R = (√5 - 1)/2 ≈ 0.618

Golden Section Search, Continued
• Once subdivided, we evaluate values of f(x)
• Since the designations ℓ1 and ℓ2 are arbitrary, test f(xl + d) and f(xu - d) to determine the new search interval
• So let x1 = (xl + d) and x2 = (xu - d), where d = R ℓ0
• Then if f(x1) > f(x2), the maximum must lie in the interval [x2, xu]: call xu - d (= x2) the new xl (Case 1); otherwise call xl + d (= x1) the new xu (Case 2)
• With a little thinking you can convince yourself that for (Case 1), x2 for the new search interval is x1 from the previous one, because ℓ0/ℓ1 = ℓ1/ℓ2 (Hint: d' = R ℓ0'). And for (Case 2), the old x2 becomes the new x1. Either way, only one new function evaluation is needed per step.

[Figure: golden-section search. First step: f(x) evaluated at interior points x1 and x2, a distance d from each end of [xl, xu]. Second step: the interval shrinks and the old x1 is reused as the new x2, with the new distance d'.]

Text Example 13.1
● Use Golden Search to find the max of (2 sin x - x^2/10) on [0, 4]
● First Step: Calculate d = R ℓ0 = 0.61803 × 4 = 2.4721
● Second Step: Calculate x2 = 4 - 2.4721 = 1.5279 and x1 = 0 + 2.4721 = 2.4721
● Third Step: Calculate f(x2) = 1.7647 and f(x1) = 0.6300
● Now because f(x1) is not > f(x2), set xu to 2.4721 & set the new x1 = 1.5279
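The full iteration can be sketched in a few lines of Python, using the function and bracket of Text Example 13.1 (the tolerance is an illustrative choice):

```python
import math

R = (math.sqrt(5) - 1) / 2  # golden ratio, approximately 0.61803

def golden_max(f, xl, xu, tol=1e-6):
    # Interior points sit a distance d = R*(xu - xl) from each end;
    # every step discards one sub-interval and reuses one evaluation.
    d = R * (xu - xl)
    x1, x2 = xl + d, xu - d
    f1, f2 = f(x1), f(x2)
    while (xu - xl) > tol:
        if f1 > f2:              # Case 1: maximum lies in [x2, xu]
            xl, x2, f2 = x2, x1, f1
            x1 = xl + R * (xu - xl)
            f1 = f(x1)
        else:                    # Case 2: maximum lies in [xl, x1]
            xu, x1, f1 = x1, x2, f2
            x2 = xu - R * (xu - xl)
            f2 = f(x2)
    return (xl + xu) / 2

f = lambda x: 2 * math.sin(x) - x**2 / 10
xmax = golden_max(f, 0.0, 4.0)
print(round(xmax, 4), round(f(xmax), 4))  # about 1.4276 and 1.7757
```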

[Figure: Text Example 13.1 graphically. f(x) plotted on the full bracket ℓ0 = 4, with f(x2) = 1.7647 and f(x1) = 0.6300 marked.]

Newton's Method for Optimization in 1-D
• Basis: a maximum (or minimum) of f(x) will occur where f'(x) = 0
• If we call f'(x) by the name g(x) and apply Newton-Raphson to find where g(x) = 0:
xi+1 = xi - g(xi)/g'(xi),  or  xi+1 = xi - f'(xi)/f''(xi)
• The method converges to the nearest max or min (a local max/min)
• The method can "blow up" near points where f''(x) = 0, if f'(x) is not going to 0 as fast as f''(x) (or at all)
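A minimal sketch of this method applied to the same test function f(x) = 2 sin x - x^2/10; the derivatives are written out by hand, and the starting point 2.5 is an assumption for illustration:

```python
import math

def newton_opt(fp, fpp, x, tol=1e-10, maxit=50):
    # Newton-Raphson applied to f'(x) = 0: x_{i+1} = x_i - f'(x_i)/f''(x_i)
    for _ in range(maxit):
        step = fp(x) / fpp(x)
        x -= step
        if abs(step) < tol:
            break
    return x

fp  = lambda x: 2 * math.cos(x) - x / 5       # f'(x)
fpp = lambda x: -2 * math.sin(x) - 1 / 5      # f''(x)
xstar = newton_opt(fp, fpp, 2.5)
print(round(xstar, 4))  # converges to about 1.4276
```

Note that f''(xstar) < 0 here, which confirms the stationary point is a maximum rather than a minimum.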

[Figure: f(x) and its derivative g(x) = f'(x) for the example, plotted on [0, 2]; the maximum of f coincides with the zero crossing of g.]

f/x3 k..2-D Search Methods • The Multi-D Equivalent of f ’ is the Gradient  f or [f/x1 i.. f/x2 j.]T • All 2-D Methods Use  f • We can Conceptualize the Gradient as the Topographic Elevation Slope in x-y-z space  x1  x  x2  y f z .

[Figure: 3-D 'elevations' of ∇f. A 3-D elevation plot of f(x, y) shown alongside the corresponding contour plot of the same surface.]

Steepest Ascent/Descent
• Follow the ∇f = g direction (-g for descent)
• How "far" should we go in this direction?
• Call the magnitude of the step h
• Now maximize f along the g direction as a function of h, called a line search
• Step: x1,i+1 = x1,i + g1 h  &  x2,i+1 = x2,i + g2 h

Text Example 14.4
• Search for the max of f = 2 x1 x2 + 2 x1 - x1^2 - 2 x2^2 from the point x1 = -1 and x2 = 1
• Calculate g (= 6 i - 6 j), because
g1 = ∂f/∂x1 = 2 x2 + 2 - 2 x1 = 2(1) + 2 - 2(-1) = 6
g2 = ∂f/∂x2 = 2 x1 - 4 x2 = 2(-1) - 4(1) = -6

Text Example 14.4, Continued
• Now stepping along g (= 6 i - 6 j) gives
x1,1 = x1,0 + h g1 = -1 + 6h  &  x2,1 = x2,0 + h g2 = 1 - 6h
• Now search for the max of f(x1,1, x2,1) subject to those steps:
f(x1,1, x2,1) = -180 h^2 + 72 h - 7 = g(h)
• Max{g(h)}: g'(h) = 0 → -360 h + 72 = 0 → h = 0.2
• So x1,i+1 = x1,i + g1 h = -1 + (6)(0.2) = 0.2 and x2,i+1 = x2,i + g2 h = 1 + (-6)(0.2) = -0.2
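The two-part recipe above (follow g, then line-search for h) can be sketched as follows. The golden-section line search, its bracket [0, 2], and the iteration count are illustrative choices and not part of the text example:

```python
def f(x1, x2):
    # Objective from Text Example 14.4
    return 2*x1*x2 + 2*x1 - x1**2 - 2*x2**2

def grad(x1, x2):
    # Analytic partial derivatives used on the slide
    return 2*x2 + 2 - 2*x1, 2*x1 - 4*x2

def line_max(phi, a=0.0, b=2.0, it=80):
    # Golden-section line search: maximize phi(h) over the step size h
    R = 0.6180339887498949
    c, d = a + R*(b - a), b - R*(b - a)
    fc, fd = phi(c), phi(d)
    for _ in range(it):
        if fc > fd:
            a, d, fd = d, c, fc
            c = a + R*(b - a)
            fc = phi(c)
        else:
            b, c, fc = c, d, fd
            d = b - R*(b - a)
            fd = phi(d)
    return (a + b) / 2

x1, x2 = -1.0, 1.0
for _ in range(40):
    g1, g2 = grad(x1, x2)
    h = line_max(lambda h: f(x1 + h*g1, x2 + h*g2))
    x1, x2 = x1 + h*g1, x2 + h*g2

print(round(x1, 3), round(x2, 3), round(f(x1, x2), 3))
```

The very first pass reproduces h = 0.2 and the step to (0.2, -0.2); repeated steps zig-zag toward the maximum at (2, 1), where f = 2.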

[Figure: contour graph of f(x, y) showing the steepest-ascent steps of Example 14.4.]

2-D Newton Method
• The 2-D version of Newton maximization: Gradient = 0, or ∇f = 0 → g1 = g2 = 0
• Applying multi-variate Newton-Raphson:
[∂g1/∂x1  ∂g1/∂x2] [Δx1]   [-g1]
[∂g2/∂x1  ∂g2/∂x2] [Δx2] = [-g2]
and
[x1, x2]^T at i+1 = [x1, x2]^T at i + [Δx1, Δx2]^T
• The first matrix is the Jacobian of the gradient, AKA the Hessian of f:
H = [∂²f/∂x1²       ∂²f/∂x1∂x2]
    [∂²f/∂x1∂x2    ∂²f/∂x2²  ]

2-D Newton Method
• The text describes Newton's equation as
[H11  H12] [Δx1]   [g1]   [0]
[H21  H22] [Δx2] + [g2] = [0]
• The system is then solved by matrix inversion:
[x1]          [x1]      [H11  H12]^-1 [g1]
[x2] at i+1 = [x2] at i - [H21  H22]    [g2]

Newton Method for Text Example 14.4
• Search for the max of f(x1, x2) = 2 x1 x2 + 2 x1 - x1^2 - 2 x2^2 from the point x1 = -1 and x2 = 1
• g1 = 2 x2 + 2 - 2 x1 = 2(1) + 2 - 2(-1) = 6
• g2 = 2 x1 - 4 x2 = 2(-1) - 4(1) = -6
• H11 = -2, H12 = 2, H21 = 2, H22 = -4
• Solving
[-2   2] [Δx1]   [-6]
[ 2  -4] [Δx2] = [ 6]
we get Δx1 = 3, Δx2 = 0
• So x1 = 2, x2 = 1 → f = 2 → Maximum (Why?)
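Because the Hessian here is 2x2 and constant, the Newton step can be written out directly with Cramer's rule. This is a sketch for the example function only, not a general optimizer:

```python
def newton2d_step(x1, x2):
    # Gradient and (constant) Hessian of f = 2*x1*x2 + 2*x1 - x1**2 - 2*x2**2
    g1 = 2*x2 + 2 - 2*x1
    g2 = 2*x1 - 4*x2
    h11, h12, h21, h22 = -2.0, 2.0, 2.0, -4.0
    det = h11*h22 - h12*h21          # determinant of H (= 4 here)
    # Solve H @ dx = -g by Cramer's rule, then step x_{i+1} = x_i + dx
    dx1 = (-g1*h22 + g2*h12) / det
    dx2 = (-g2*h11 + g1*h21) / det
    return x1 + dx1, x2 + dx2

print(newton2d_step(-1.0, 1.0))  # -> (2.0, 1.0) in a single step
```

Since f is exactly quadratic, one Newton step lands on the maximum, unlike the zig-zagging steepest-ascent path.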

[Figure: contour graph of f(x, y) showing the single Newton step from (-1, 1) to the maximum at (2, 1).]

Constrained Optimization
• The search for the maximum is limited by constraints
• Problem-related constraints (resource limits):
– The plant operates no more than 80 hours per week
– Raw materials can not be purchased for less than \$30/ton
• Feasibility constraints:
– Efficiency can not be greater than unity
– Cost can not be negative
– Mass can not be negative

Linear Programming
• The Multi-D objective function Z(x1, x2, x3, ...) is a linear function of x1, x2, x3, ...
• Constraints must be formulated as linear inequality statements in terms of x1, x2, x3, ...
• Multi-D problem statement:
Z = c1 x1 + c2 x2 + ... + cn xn
ai1 x1 + ai2 x2 + ... + ain xn ≤ bi   (i = 1, ..., m)

Text Example
• Maximize profit (Z) in the production of two products (x1, x2)
• 2-D Problem:
Z = 150 x1 + 175 x2
7 x1 + 11 x2 ≤ 77   (1)
10 x1 + 8 x2 ≤ 80   (2)
x1 ≤ 9              (3)
x2 ≤ 6              (4)
x1 ≥ 0              (5)
x2 ≥ 0              (6)
(m = 4; note that (5) and (6) are not ≤ constraints)

[Figure: graphical depiction of the constraints in the (x1, x2) plane.]

Graphical Depiction of Z(x1, x2) Subject to Constraints
• Lines of constant profit: x2 = -(150/175) x1 + Z/175
• Shown for Z = 0, Z = 600, and Z = 1400

Things to Note from the Graphical Solution
• The region of possible solutions which meet all constraints (feasible solutions) is a polygon
• A brute-force approach could find max Z by evaluating Z at all vertices
• Vertices are simultaneous solutions of 2 constraints (inequalities) converted to 2 equations (equalities): x1 = x1* & x2 = x2*
• The Z evaluation is then of Z(x1*, x2*)
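The brute-force vertex idea above can be sketched directly: enumerate every pair of constraints, solve the 2x2 system, keep only the intersections that satisfy all constraints, and evaluate Z at each:

```python
from itertools import combinations

# Each constraint written as a*x1 + b*x2 <= c, including the bounds
# x1 <= 9, x2 <= 6 and the non-negativity constraints -x1 <= 0, -x2 <= 0
cons = [(7, 11, 77), (10, 8, 80), (1, 0, 9), (0, 1, 6), (-1, 0, 0), (0, -1, 0)]
Z = lambda x1, x2: 150*x1 + 175*x2

best = None
for (a1, b1, c1), (a2, b2, c2) in combinations(cons, 2):
    det = a1*b2 - a2*b1
    if det == 0:
        continue                      # parallel constraint lines: no vertex
    # Cramer's rule for the 2x2 system of equalities
    x1 = (c1*b2 - c2*b1) / det
    x2 = (a1*c2 - a2*c1) / det
    # Keep the vertex only if it is feasible (small slack for round-off)
    if all(a*x1 + b*x2 <= c + 1e-9 for a, b, c in cons):
        if best is None or Z(x1, x2) > best[0]:
            best = (Z(x1, x2), x1, x2)

print(best)  # best profit occurs at the intersection of constraints (1) and (2)
```

This scales as Comb(n+m, 2) vertex candidates; the simplex method described next avoids checking them all.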

Simplex Method
• Converts the m inequality relations to m equations in m + n unknowns
• Constraint inequalities are converted to equalities with new variables called slack variables, Si:
ai1 x1 + ai2 x2 + ... + ain xn + Si = bi   (i = 1, ..., m)
• Inequality i will then be at its limit when Si = 0
• The independent variables (the x's) are called structural variables

Simplex Method, Continued
• Slack variables will have positive values if the values of the structural variables meet the inequality with room to spare
• There will be Comb(n+m, m) different possible constraint intersections (including the non-negativity constraints)
• The Simplex method uses Gauss-Jordan elimination to check only vertices on the feasible polygon, starting with all x's = 0

Simplex Tableau for Text Example

Basic |  Z |   x1 |   x2 | S1 | S2 | S3 | S4 | bi
Z     |  1 | -150 | -175 |  0 |  0 |  0 |  0 |  0
S1    |  0 |    7 |   11 |  1 |  0 |  0 |  0 | 77
S2    |  0 |   10 |    8 |  0 |  1 |  0 |  0 | 80
S3    |  0 |    1 |    0 |  0 |  0 |  1 |  0 |  9
S4    |  0 |    0 |    1 |  0 |  0 |  0 |  1 |  6

• Note that we never do Gauss-Jordan elimination on the Z row
• The 1st row keeps track of the current estimate of Z in the b column
• Basic variables start out as the S's
• Non-basic variables start out as the x's and are assumed zero

Simplex Elimination

Basic |  Z |   x1 |   x2 | S1 | S2 | S3 | S4 | bi
Z     |  1 | -150 | -175 |  0 |  0 |  0 |  0 |  0
S1    |  0 |    7 |   11 |  1 |  0 |  0 |  0 | 77
S2    |  0 |   10 |    8 |  0 |  1 |  0 |  0 | 80
S3    |  0 |    1 |    0 |  0 |  0 |  1 |  0 |  9
S4    |  0 |    0 |    1 |  0 |  0 |  0 |  1 |  6

• Arbitrarily choose x1 for Gauss-Jordan elimination (not standard)
• This makes it a basic variable (it 'enters' the basic list: the entering variable)
• Pivot about the S2 row, because 80/10 is the smallest bi/ai1; any other choice results in negative S's after elimination
• This choice makes S2 non-basic (it is the leaving variable)

First Simplex Step: Eliminate x1

Basic |  Z | x1 |   x2 | S1 |   S2 | S3 | S4 |   bi
Z     |  1 |  0 |  -55 |  0 |   15 |  0 |  0 | 1200
S1    |  0 |  0 |  5.4 |  1 | -0.7 |  0 |  0 |   21
x1    |  0 |  1 |  0.8 |  0 |  0.1 |  0 |  0 |    8
S3    |  0 |  0 | -0.8 |  0 | -0.1 |  1 |  0 |    1
S4    |  0 |  0 |    1 |  0 |    0 |  0 |  1 |    6

• The S1 row can be read as S1 = 21 (reduced from 77), because x2 and S2 are non-basic variables (their values can be taken as zero)
• Similarly, the x1 row is read as x1 = 8
• This is equivalent to moving from [0, 0] to [8, 0] for the location of the max
• This moves our estimate of max Z from 0 to 1200 (see the Z eq.)

Next Elimination Choice

Basic |  Z | x1 |   x2 | S1 |   S2 | S3 | S4 |   bi
Z     |  1 |  0 |  -55 |  0 |   15 |  0 |  0 | 1200
S1    |  0 |  0 |  5.4 |  1 | -0.7 |  0 |  0 |   21
x1    |  0 |  1 |  0.8 |  0 |  0.1 |  0 |  0 |    8
S3    |  0 |  0 | -0.8 |  0 | -0.1 |  1 |  0 |    1
S4    |  0 |  0 |    1 |  0 |    0 |  0 |  1 |    6

• Now choose x2 as the entering variable (most negative Z-eq. coefficient)
• The S1 row is chosen as the pivot row, because 21/5.4 is the smallest positive bi/ai2 (note: 1/(-0.8) is not allowed; why?)
• So now S1 is our leaving variable

Next Simplex Step (Elimination)

Basic |  Z | x1 | x2 |     S1 |     S2 | S3 | S4 |   bi
Z     |  1 |  0 |  0 |  10.19 |   7.87 |  0 |  0 | 1414
x2    |  0 |  0 |  1 |  0.185 | -0.130 |  0 |  0 | 3.89
x1    |  0 |  1 |  0 | -0.148 |  0.204 |  0 |  0 | 4.89
S3    |  0 |  0 |  0 |  0.148 | -0.204 |  1 |  0 | 4.11
S4    |  0 |  0 |  0 | -0.185 |  0.130 |  0 |  1 | 2.11

• We know we are done, because all coefficients in the Z eq. are positive
• The non-basic variables are S1 & S2, so the vertex is at the intersection of the equations from constraint (1) and constraint (2)
• The optimal Z is then 1414

[Figure: graphical depiction of the simplex steps; the actual simplex path runs along vertices of the feasible polygon in the (x1, x2) plane.]

Curve Fitting
We want to find the best "fit" of a curve through the data. Here we see:
a) Least squares fit
b) Linear interpolation
Can you suggest another?

Mathematical Background
• The prerequisite mathematical background for interpolation is found in the material on the Taylor series expansion and finite divided differences
• Simple statistics:
– average
– standard deviation
– normal distribution

Normal Distribution
A histogram used to depict the distribution of exam grades.

[Figure: normal distribution about the mean x̄. The interval x̄ ± σ contains about 68% of the data; x̄ ± 2σ contains about 95%.]

Material to be Covered in Curve Fitting
• Linear Regression
– Polynomial Regression
– Multiple Regression
– General Linear Least Squares
– Nonlinear Regression
• Interpolation
– Newton's Polynomial
– Lagrange Polynomial
– Coefficients of Polynomials

Least Squares Regression
• The simplest case is fitting a straight line to a set of paired observations:
– (x1, y1), (x2, y2), ..., (xn, yn)
• The resulting mathematical expression is
– y = a0 + a1 x + e
• We will consider the error e introduced at each data point to develop a strategy for determining the "best fit" equations

[Figure: data points (xi, yi) with the residual yi - a0 - a1 xi shown relative to the fitted line.]
Sr = Σ(i=1..n) ei^2 = Σ(i=1..n) (yi - a0 - a1 xi)^2

Sr = Σ ei^2 = Σ (yi - a0 - a1 xi)^2
To determine the values for a0 and a1, differentiate with respect to each coefficient:
∂Sr/∂a0 = -2 Σ (yi - a0 - a1 xi)
∂Sr/∂a1 = -2 Σ [(yi - a0 - a1 xi) xi]
Note: we have simplified the summation symbols.
What mathematics technique will minimize Sr?

∂Sr/∂a0 = -2 Σ (yi - a0 - a1 xi)
∂Sr/∂a1 = -2 Σ [(yi - a0 - a1 xi) xi]
Setting each derivative equal to zero will minimize Sr. If this is done, the equations can be expressed as:
0 = Σ yi - Σ a0 - a1 Σ xi
0 = Σ xi yi - a0 Σ xi - a1 Σ xi^2

0 = Σ yi - Σ a0 - a1 Σ xi
0 = Σ xi yi - a0 Σ xi - a1 Σ xi^2
If you recognize that Σ a0 = n a0, you have two simultaneous equations with two unknowns, a0 and a1.
What are these equations? (hint: only place terms with a0 and a1 on the LHS of the equations)
What are the final equations for a0 and a1?

n a0 + a1 Σ xi = Σ yi
a0 Σ xi + a1 Σ xi^2 = Σ xi yi
These first two equations are called the normal equations. Solving them gives:
a1 = (n Σ xi yi - Σ xi Σ yi) / (n Σ xi^2 - (Σ xi)^2)
a0 = ȳ - a1 x̄
In matrix form:
[ n      Σ xi   ] [a0]   [ Σ yi    ]
[ Σ xi   Σ xi^2 ] [a1] = [ Σ xi yi ]
Also note that x̄ = Σ xi / n and ȳ = Σ yi / n.

Error
Recall: Sr = Σ(i=1..n) ei^2 = Σ(i=1..n) (yi - a0 - a1 xi)^2
The most common measure of the "spread" of a sample is the standard deviation about the mean:
St = Σ (yi - ȳ)^2
sy = sqrt( St / (n - 1) )

Introduce a term to measure the standard error of the estimate:
sy/x = sqrt( Sr / (n - 2) )
Coefficient of determination r^2:
r^2 = (St - Sr) / St
r is the correlation coefficient.

r^2 = (St - Sr) / St
The following signifies that the line explains 100 percent of the variability of the data:
Sr = 0  →  r = r^2 = 1
If r = r^2 = 0, then Sr = St and the fit represents no improvement over simply using the mean of the data.

Consider the following four sets of data

Data 1          Data 2          Data 3          Data 4
x     y         x     y         x     y         x     y
10    8.04      10    9.14      10    7.46       8    6.58
 8    6.95       8    8.14       8    6.77       8    5.76
13    7.58      13    8.74      13   12.74       8    7.71
 9    8.81       9    8.77       9    7.11       8    8.84
11    8.33      11    9.26      11    7.81       8    8.47
14    9.96      14    8.10      14    8.84       8    7.04
 6    7.24       6    6.13       6    6.08       8    5.25
 4    4.26       4    3.10       4    5.39      19   12.50
12   10.84      12    9.13      12    8.15       8    5.56
 7    4.82       7    7.26       7    6.42       8    7.91
 5    5.68       5    4.74       5    5.73       8    6.89

[Figure: graphs of the four data sets, each plotted as f(x) versus x.]

Linear Regression of the Four Data Sets
• For every one of the four data sets, least-squares regression gives the same line:
y = 0.5x + 3, with R^2 = 0.67
[Figure: the four scatter plots with the identical regression line overlaid.]
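A short sketch that applies the normal equations to data set 1 above, and also computes r^2 as defined earlier:

```python
def linreg(x, y):
    # Normal equations for y = a0 + a1*x
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(xi * xi for xi in x)
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    a1 = (n * sxy - sx * sy) / (n * sxx - sx ** 2)
    a0 = sy / n - a1 * sx / n
    return a0, a1

# Data set 1 from the table above
x = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]
y = [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68]
a0, a1 = linreg(x, y)

# Coefficient of determination: r^2 = (St - Sr) / St
ybar = sum(y) / len(y)
St = sum((yi - ybar) ** 2 for yi in y)
Sr = sum((yi - (a0 + a1 * xi)) ** 2 for xi, yi in zip(x, y))
r2 = (St - Sr) / St
print(round(a0, 2), round(a1, 2), round(r2, 2))  # about 3.0, 0.5, 0.67
```

Running the same code on the other three sets returns essentially the same a0, a1 and r^2, even though the scatter plots look completely different; identical summary statistics are not a substitute for plotting the data.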

Linearization of Non-linear Relationships
Some data is simply ill-suited for linear least squares regression... or so it appears.
[Figure: curved f(x) data that does not follow a straight line.]

EXPONENTIAL EQUATIONS
P = P0 e^(rt)
Linearize: plot ln P versus t
intercept = ln P0, slope = r. Why?

P  P0 e rt ln P  ln P0 e rt  ln P0   rt lnP  ln P0   ln e rt     Can you see the similarity with the equation for a line: y = b + mx where b is the y-intercept and m is the slope? intercept = ln Po slope = r t .

P  P0 e r t ln P  ln P0 e r t  ln P0   r t ln P0  ln P0   ln e r t     After taking the natural log of the y-data. perform linear regression. From this regression: The value of b will give us ln (P0). Hence. P0 = eb The value of m will give us r directly. intercept = ln P0 slope = r t .

POWER EQUATIONS
Q = c H^a   (flow over a weir)
Here we linearize the equation by taking the log of the H and Q data, and plot log Q versus log H.
What is the resulting intercept and slope?

Q  cH a log Q  log c H  a a   log c  log H  log c  a log H log Q slope = a log H intercept = log c .

SATURATION-GROWTH RATE EQUATION
μ = μmax S / (Ks + S)
Linearize by plotting 1/μ versus 1/S: slope = Ks/μmax, intercept = 1/μmax
Here, μ is the growth rate of a microbial population, μmax is the maximum growth rate, S is the substrate or food concentration, and Ks is the substrate concentration at a value of μ = μmax/2.

General Comments of Linear Regression • You should be cognizant of the fact that there are theoretical aspects of regression that are of practical importance but are beyond the scope of this book • Statistical assumptions are inherent in the linear least squares procedure .

General Comments of Linear Regression
• x has a fixed value; it is not random and is measured without error
• The y values are independent random variables and all have the same variance
• The y values for a given x must be normally distributed

General Comments of Linear Regression
• The regression of y versus x is not the same as x versus y
• The error of y versus x is not the same as x versus y
[Figure: the y-direction and x-direction residuals differ for the same fitted line.]

Polynomial Regression
• One of the reasons you were presented with the theory behind linear regression was to give you insight into similar procedures for higher-order polynomials
• y = a0 + a1 x
• mth-degree polynomial: y = a0 + a1 x + a2 x^2 + ... + am x^m + e

Based on the sum of the squares of the residuals:
Sr = Σ (yi - a0 - a1 xi - a2 xi^2 - ... - am xi^m)^2
1. Take the derivative of the above equation with respect to each of the unknown coefficients, e.g. the partial with respect to a2:
∂Sr/∂a2 = -2 Σ xi^2 (yi - a0 - a1 xi - a2 xi^2 - ... - am xi^m)

2. These equations are set to zero to minimize Sr, i.e., minimize the error.
3. Set all unknown values on the LHS of the equation. For the partial of Sr wrt a2:
a0 Σ xi^2 + a1 Σ xi^3 + a2 Σ xi^4 + ... + am Σ xi^(m+2) = Σ xi^2 yi
4. This set of normal equations results in m+1 simultaneous equations, which can be solved using matrix methods to determine a0, a1, a2, ..., am.

Multiple Linear Regression
• A useful extension of linear regression is the case where y is a linear function of two or more variables:
y = a0 + a1 x1 + a2 x2
• We follow the same procedure:
y = a0 + a1 x1 + a2 x2 + e

Multiple Linear Regression
For two variables, we would solve a 3 x 3 matrix in the following form:
[ n         Σ x1i        Σ x2i      ] [a0]   [ Σ yi     ]
[ Σ x1i     Σ x1i^2      Σ x1i x2i  ] [a1] = [ Σ x1i yi ]
[ Σ x2i     Σ x1i x2i    Σ x2i^2    ] [a2]   [ Σ x2i yi ]
[A] and {c} are clearly based on the data given for x1, x2 and y, used to solve for the unknowns in {x}.

Interpolation
• General formula for an nth-order polynomial:
y = a0 + a1 x + a2 x^2 + ... + am x^m
• For m+1 data points, there is one and only one polynomial of order m or less that passes through all points
• Example: y = a0 + a1 x fits between 2 points → 1st order

Interpolation • We will explore two mathematical methods well suited for computer implementation • Newton’s Divided Difference Interpolating Polynomials • Lagrange Interpolating Polynomial .

Newton's Divided Difference Interpolating Polynomials
• Linear Interpolation
• Quadratic Interpolation
• General Form
• Errors

Linear Interpolation

Temperature, C    Density, kg/m^3
 0                 999.9
 5                1000.0
10                 999.7
15                 999.1
20                 998.2

How would you approach estimating the density at 17 C?

Temperature, C    Density, kg/m^3
 0                 999.9
 5                1000.0
10                 999.7
15                 999.1
20                 998.2

At T = 17 (between T = 15 and T = 20): 999.1 > ρ > 998.2

f1(x) = f(x0) + [f(x1) - f(x0)] / (x1 - x0) · (x - x0)
ρ ≈ 999.1 + (998.2 - 999.1) / (20 - 15) · (17 - 15) = 998.74
Note: the notation f1(x) designates that this is a first-order interpolating polynomial.

[Figure: the true solution versus two linear estimates; smaller intervals provide a better estimate.]

[Figure: the true solution.] An alternative approach would be to include a third point and estimate f(x) from a 2nd-order polynomial.

Quadratic Interpolation
f2(x) = b0 + b1 (x - x0) + b2 (x - x0)(x - x1)
Prove that this is a 2nd-order polynomial of the form:
f(x) = a0 + a1 x + a2 x^2

First, multiply the terms:
f2(x) = b0 + b1 (x - x0) + b2 (x - x0)(x - x1)
f2(x) = b0 + b1 x - b1 x0 + b2 x^2 + b2 x0 x1 - b2 x x0 - b2 x x1
Collect terms and recognize that:
a0 = b0 - b1 x0 + b2 x0 x1
a1 = b1 - b2 x0 - b2 x1
a2 = b2
f(x) = a0 + a1 x + a2 x^2

Procedure for Quadratic Interpolation
Given the points (x0, f(x0)), (x1, f(x1)), (x2, f(x2)):
b0 = f(x0)
b1 = [f(x1) - f(x0)] / (x1 - x0)
b2 = { [f(x2) - f(x1)]/(x2 - x1) - [f(x1) - f(x0)]/(x1 - x0) } / (x2 - x0)

Procedure for Quadratic Interpolation
b0 = f(x0)
b1 = [f(x1) - f(x0)] / (x1 - x0)
b2 = { [f(x2) - f(x1)]/(x2 - x1) - [f(x1) - f(x0)]/(x1 - x0) } / (x2 - x0)
f2(x) = b0 + b1 (x - x0) + b2 (x - x0)(x - x1)

Example

Temperature, C    Density, kg/m^3
 0                 999.9
 5                1000.0
10                 999.7
15                 999.1
20                 998.2

Include 10 degrees in your calculation of the density at 17 degrees.

Example
Include 10 degrees in your calculation of the density at 17 degrees. With x0 = 10, x1 = 15, x2 = 20:
b0 = f(x0) = 999.7
b1 = [f(x1) - f(x0)] / (x1 - x0) = (999.1 - 999.7)/5 = -0.12
b2 = { [f(x2) - f(x1)]/(x2 - x1) - b1 } / (x2 - x0) = (-0.18 + 0.12)/10 = -0.006
f2(17) = 999.7 - 0.12 (17 - 10) - 0.006 (17 - 10)(17 - 15) = 998.776

General Form of Newton's Interpolating Polynomials
For the nth-order polynomial:
fn(x) = b0 + b1 (x - x0) + ... + bn (x - x0)(x - x1)···(x - x_{n-1})
To establish a methodical approach to a solution, define the first finite divided difference as:
f[xi, xj] = [f(xi) - f(xj)] / (xi - xj)

x j    f x i   f x j  xi  x j if we let i=1 and j=0 then this is b1 f x 1   f x 0  b1  x1  x 0 Similarly. we can define the second finite divided difference.f xi . which expresses both b2 and the difference of the first two divided difference .

The second finite divided difference, which expresses both b2 and the difference of the first two divided differences:
b2 = { [f(x2) - f(x1)]/(x2 - x1) - [f(x1) - f(x0)]/(x1 - x0) } / (x2 - x0)
In general:
f[xi, xj, xk] = ( f[xi, xj] - f[xj, xk] ) / (xi - xk)
Following the same scheme, the third divided difference is the difference of two second finite divided differences.

This leads to a scheme that can easily lead to the use of spreadsheets:

i   xi   f(xi)    first        second           third
0   x0   f(x0)    f[x1,x0]     f[x2,x1,x0]      f[x3,x2,x1,x0]
1   x1   f(x1)    f[x2,x1]     f[x3,x2,x1]
2   x2   f(x2)    f[x3,x2]
3   x3   f(x3)

 bn x  x 0 x  x1 x  x n 1  These difference can be used to evaluate the b-coefficient s. x n 1 ......f n x   b0  b1 x  x 0   .. The error would follow a relationship analogous to the error in the Taylor Series.. x 0 x  x 0   . x 0 x  x 0 x  x1 x  x n 1  To determine the error we need an extra point.  f x n . . The result is the following interpolation polynomial called the Newton’s Divided Difference Interpolating Polynomial f n x   f x 0   f x1 .

Lagrange Interpolating Polynomial
fn(x) = Σ(i=0..n) Li(x) f(xi)
Li(x) = Π(j=0..n, j≠i) (x - xj) / (xi - xj)
where Π designates the "product of".
The linear version of this expression is at n = 1.

Linear version: n = 1
f1(x) = [(x - x1)/(x0 - x1)] f(x0) + [(x - x0)/(x1 - x0)] f(x1)
Your text shows you how to do n = 2 (second order). What would third order be?

f n x    L i x  f x i  i 0 n n L i x    j 0 j i x  xj xi  x j x  x1 x  x 2 x  x 3  f x  f3 (x)  x 0  x1 x 0  x 2 x 0  x 3  0 ..... ...

.....f n x    L i x  f x i  i 0 n n L i x    j 0 j i x  xj xi  x j Note: x0 is not being subtracted from the constant term x x  x1 x  x 2 x  x 3  f x  f3 (x)  x 0  x1 x 0  x 2 x 0  x 3  0 ...

... ...f n x    L i x  f x i  i 0 n n L i x    j 0 j i x  xj xi  x j Note: x0 is not being subtracted from the constant term x or xi = x0 in the numerator or the denominator j= 0 x  x1 x  x 2 x  x 3  f x  f3 (x)  x 0  x1 x 0  x 2 x 0  x 3  0 ..

f n x    L i x f x i  i 0 n n L i x    j 0 j i x  xj xi  x j x  x1 x  x 2 x  x 3  f x  f3 (x)  x 0  x1 x 0  x 2 x 0  x 3  0 x  x 0 x  x 2 x  x 3  f x   x1  x 0 x1  x 2 x1  x 3  1 x  x 0 x  x1 x  x 3  f x   x 2  x 0 x 2  x1 x 2  x 3  2 x  x 0 x  x1 x  x 2  f x   x 3  x 0 x 3  x1 x 3  x 2  3 Note: x3 is not being subtracted from the constant term x or xi = x3 in the numerator or the denominator j= 3 .

Example
Determine the density at 17 degrees.

Temperature, C    Density, kg/m^3
 0                 999.9
 5                1000.0
10                 999.7
15                 999.1
20                 998.2

Using the points at 10, 15, and 20 degrees:
f2(17) = -119.964 + 839.244 + 279.496 = 998.776
f1(17) = 998.74
These are the same values found using Newton's Interpolating Polynomial. In fact, you can derive Lagrange directly from Newton's Interpolating Polynomial.

Coefficients of an Interpolating Polynomial
fn(x) = b0 + b1 (x - x0) + ... + bn (x - x0)(x - x1)···(x - x_{n-1})
y = a0 + a1 x + a2 x^2 + ... + am x^m
HOW CAN WE BE MORE STRAIGHTFORWARD IN GETTING THE VALUES?

This is a 2nd-order polynomial, so we need three data points. Plug the values of xi and f(xi) directly into the equations:
f(x0) = a0 + a1 x0 + a2 x0^2
f(x1) = a0 + a1 x1 + a2 x1^2
f(x2) = a0 + a1 x2 + a2 x2^2
This gives three simultaneous equations to solve for a0, a1, and a2.

Example
Determine the density at 17 degrees.

Temperature, C    Density, kg/m^3
 0                 999.9
 5                1000.0
10                 999.7
15                 999.1
20                 998.2

Using the points at 10, 15, and 20 degrees:
[ 1  10  10^2 ] [a0]   [ 999.7 ]
[ 1  15  15^2 ] [a1] = [ 999.1 ]
[ 1  20  20^2 ] [a2]   [ 998.2 ]

Solving:
[a0]   [ 1000   ]
[a1] = [  0.03  ]
[a2]   [ -0.006 ]
ρ17 = 1000 + 0.03(17) - 0.006(17)^2 = 998.776
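The three simultaneous equations can be solved with a small Gaussian-elimination routine; this is a generic sketch, not tied to any library:

```python
def solve3(A, b):
    # Naive Gaussian elimination with partial pivoting for a small system
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]   # augmented matrix
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]                    # pivot for stability
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= f * M[k][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):                 # back substitution
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# Vandermonde system for the density example: a0 + a1*T + a2*T^2 = rho
T   = [10.0, 15.0, 20.0]
rho = [999.7, 999.1, 998.2]
A = [[1.0, Ti, Ti**2] for Ti in T]
a0, a1, a2 = solve3(A, rho)
print(round(a0, 3), round(a1, 3), round(a2, 4))
print(round(a0 + a1*17 + a2*17**2, 3))
```

This recovers a0 = 1000, a1 = 0.03, a2 = -0.006 and the same density estimate, 998.776, as the Newton and Lagrange forms.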

Numerical Differentiation and Integration
• Calculus is the mathematics of change.
• Engineers must continuously deal with systems and processes that change, making calculus an essential tool of our profession.
• At the heart of calculus are the related mathematical concepts of differentiation and integration.

.rate of change of a dependent variable with respect to an independent variable .“to mark off by differences. distinguish.to perceive the difference in or between” • Mathematical definition of derivative . .x Differentiation y f x i  x   f x i   x • Dictionary definition of differentiate .

f(x) f x i  x  y f x i  x x y f x i  x   f x i   x x .

Integration
• The inverse process of differentiation
• Dictionary definition of integrate: "to bring together, as parts, into a whole; to unite; to indicate the total amount"
• Mathematically, it is the total value or summation of f(x)dx over a range of x. In fact, the integration symbol is actually a stylized capital S intended to signify the connection between integration and summation.

[Figure: the integral I = ∫(a to b) f(x) dx as the area under f(x).]

∫ u dv = ?        ∫ du/u = ?
∫ a^(bx) dx = ?   ∫ x^n dx = ?
∫ e^(ax) dx = ?

Mathematical Background
d/dx (sin x) = ?    d/dx (e^x) = ?
d/dx (cos x) = ?    d/dx (a^x) = ?
d/dx (tan x) = ?    d/dx (x^n) = ?
d/dx (ln x) = ?
If u and v are functions of x:
d/dx (u^n) = ?      d/dx (uv) = ?

Newton-Cotes Integration
• Common numerical integration scheme
• Based on the strategy of replacing a complicated function or tabulated data with some approximating function that is easy to integrate:
I = ∫(a to b) f(x) dx ≈ ∫(a to b) fn(x) dx
fn(x) = a0 + a1 x + ... + an x^n
where fn(x) is an nth-order polynomial.

The approximation of an integral by the area under:
– a first-order polynomial
– a second-order polynomial
We can also approximate the integral by using a series of polynomials applied piecewise.

An approximation of an integral by the area under five straight line segments. .

Newton-Cotes Formulas
• Closed form: data is at the beginning and end of the limits of integration
• Open form: integration limits extend beyond the range of data

Trapezoidal Rule
• First of the Newton-Cotes closed integration formulas
• Corresponds to the case where the polynomial is first order:
I = ∫(a to b) f(x) dx ≈ ∫(a to b) f1(x) dx
f1(x) = a0 + a1 x

I = ∫(a to b) f(x) dx ≈ ∫(a to b) f1(x) dx
A straight line can be represented as:
f1(x) = f(a) + [f(b) - f(a)] / (b - a) · (x - a)

I ≈ ∫(a to b) f1(x) dx = ∫(a to b) { f(a) + [f(b) - f(a)]/(b - a) · (x - a) } dx
Integrating this equation results in the trapezoidal rule:
I = (b - a) · [f(a) + f(b)] / 2

f a   f b  I  b  a  2 Recall the formula for computing the area of a trapezoid: height x (average of the bases) base height base .

I = (b - a) · [f(a) + f(b)] / 2
The concept is the same, but the trapezoid is on its side: the "height" is the width (b - a), and the "bases" are the function values f(a) and f(b).

Error of the Trapezoidal Rule
Et = -(1/12) f''(ξ) (b - a)^3,   where a ≤ ξ ≤ b
This indicates that if the function being integrated is linear, the trapezoidal rule will be exact. Otherwise, for sections with second and higher order derivatives (that is, with curvature), error can occur. A reasonable estimate of ξ is the average value of b and a.

Multiple Application of the Trapezoidal Rule
• Improve the accuracy by dividing the integration interval into a number of smaller segments
• Apply the method to each segment
• The resulting equations are called multiple-application or composite integration formulas

Multiple Application of the Trapezoidal Rule
I = ∫(x0 to x1) f(x)dx + ∫(x1 to x2) f(x)dx + ... + ∫(x_{n-1} to xn) f(x)dx
I = h [f(x0) + f(x1)]/2 + h [f(x1) + f(x2)]/2 + ... + h [f(x_{n-1}) + f(xn)]/2
h = (xn - x0)/n, where there are n+1 equally spaced base points.

We can group terms to express a general form:
I = (b - a) · [ f(x0) + 2 Σ(i=1..n-1) f(xi) + f(xn) ] / (2n)
  = width × average height

I  b  a  width f  x 0   2 f  x i   f  x n  i 1 n 1 2n The average height represents a weighted average of the function values Note that the interior points are given twice the weight of the two end points } } average height Ea b  a   12n 2 3 f '' .

Example
Evaluate the following integral using the trapezoidal rule and h = 0.1 (n = 6 segments):
  I = ∫_1^{1.6} e^{x²} dx
[Plot of f(x) = e^{x²} on 1 ≤ x ≤ 1.6]

Solution
  I ≈ [(b − a)/(2n)] [f(1) + 2f(1.1) + 2f(1.2) + 2f(1.3) + 2f(1.4) + 2f(1.5) + f(1.6)]

  x      f(x)
  1.0    2.718
  1.1    3.353
  1.2    4.221
  1.3    5.419
  1.4    7.099
  1.5    9.488
  1.6   12.936

  I ≈ [(1.6 − 1)/12] (74.816) = 3.741
...end of example
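This worked example can be reproduced with a short composite-trapezoid sketch (function and variable names are my own):

```python
import math

def trap_composite(f, a, b, n):
    """Multiple-application trapezoidal rule with n equal segments."""
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += 2.0 * f(a + i * h)   # interior points get twice the weight
    return (b - a) * total / (2.0 * n)

# The worked example: integral of exp(x^2) from 1 to 1.6 with h = 0.1 (n = 6)
I = trap_composite(lambda x: math.exp(x * x), 1.0, 1.6, 6)   # about 3.741
```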

Simpson’s 1/3 Rule
• Corresponds to the case where the function is a second-order polynomial:
  I = ∫_a^b f(x) dx ≈ ∫_a^b f_2(x) dx      where f_2(x) = a_0 + a_1 x + a_2 x²

Simpson’s 1/3 Rule
• Designate a and b as x_0 and x_2, and estimate f_2(x) as a second-order Lagrange polynomial:
  I ≈ ∫_{x_0}^{x_2} { [(x − x_1)(x − x_2)] / [(x_0 − x_1)(x_0 − x_2)] f(x_0) + … } dx

Simpson’s 1/3 Rule
• After integration and algebraic manipulation, we get the following equations:
  I ≈ (h/3) [f(x_0) + 4 f(x_1) + f(x_2)]      where h = (x_2 − x_0)/2
  I ≈ (b − a) [f(x_0) + 4 f(x_1) + f(x_2)] / 6
    =  width × average height
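A sketch of the single-application rule (my naming); although it is derived from a quadratic, it turns out to be exact through cubics as well:

```python
def simpson13(f, a, b):
    """Single-application Simpson's 1/3 rule: width times weighted average height."""
    h = (b - a) / 2.0
    x0, x1, x2 = a, a + h, b
    return (b - a) * (f(x0) + 4.0 * f(x1) + f(x2)) / 6.0

# Exact for a cubic: integral of x^3 over [0, 1] is exactly 0.25
I_cubic = simpson13(lambda x: x ** 3, 0.0, 1.0)
```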

Error
Single application of the trapezoidal rule:
  E_t = −(1/12) f''(ξ) (b − a)³      where a ≤ ξ ≤ b
Single application of Simpson’s 1/3 rule:
  E_t = −(1/2880) f⁽⁴⁾(ξ) (b − a)⁵

Multiple Application of Simpson’s 1/3 Rule
  I = ∫_{x_0}^{x_1} f(x) dx + ∫_{x_1}^{x_2} f(x) dx + … + ∫_{x_{n−1}}^{x_n} f(x) dx
  I ≈ (b − a) [f(x_0) + 4 Σ_{i=1,3,5,…}^{n−1} f(x_i) + 2 Σ_{j=2,4,6,…}^{n−2} f(x_j) + f(x_n)] / (3n)
  E_a = −[(b − a)⁵ / (180 n⁴)] f̄⁽⁴⁾

  I ≈ (b − a) [f(x_0) + 4 Σ_{i=1,3,5,…}^{n−1} f(x_i) + 2 Σ_{j=2,4,6,…}^{n−2} f(x_j) + f(x_n)] / (3n)
• The odd points represent the middle term for each application, hence they carry the weight 4
• The even points are common to adjacent applications and are counted twice, hence the weight 2
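The odd/even weighting can be sketched directly (names mine); n must be even, and the rule remains exact for cubics:

```python
def simpson13_composite(f, a, b, n):
    """Multiple-application Simpson's 1/3 rule; n must be even."""
    if n % 2 != 0:
        raise ValueError("n must be even")
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        weight = 4.0 if i % 2 == 1 else 2.0   # odd points weight 4, even points weight 2
        total += weight * f(a + i * h)
    return (b - a) * total / (3.0 * n)

# Exact for a cubic, regardless of n: integral of x^3 over [0, 2] is exactly 4
I = simpson13_composite(lambda x: x ** 3, 0.0, 2.0, 4)
```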

Simpson’s 3/8 Rule
• Corresponds to the case where the function is a third-order polynomial:
  I = ∫_a^b f(x) dx ≈ ∫_a^b f_3(x) dx      where f_3(x) = a_0 + a_1 x + a_2 x² + a_3 x³
  I ≈ (3h/8) [f(x_0) + 3 f(x_1) + 3 f(x_2) + f(x_3)]      where h = (x_3 − x_0)/3

Integration of Unequal Segments
• Experimental and field-study data are often unevenly spaced
• In the previous equations we grouped the term h (i.e., h_i), which represented a constant segment width; with unequal segments the trapezoidal rule is applied segment by segment:
  I ≈ h_1 [f(x_0) + f(x_1)]/2 + h_2 [f(x_1) + f(x_2)]/2 + … + h_n [f(x_{n−1}) + f(x_n)]/2

Integration of Unequal Segments
• We should also consider alternately using higher-order equations if we can find data in consecutive segments of equal width, e.g.:
  trapezoidal rule | Simpson’s 1/3 rule | trapezoidal rule | Simpson’s 3/8 rule

Example
Integrate the following using the trapezoidal rule, Simpson’s 1/3 rule, and a multiple application of the trapezoidal rule with n = 2. Compare results with the analytical solution.
  I = ∫_0^4 x e^{2x} dx
[Plot of f(x) = x e^{2x} on 0 ≤ x ≤ 4]

Solution
First, calculate the analytical solution for this problem:
  ∫_0^4 x e^{2x} dx = [ (e^{2x}/4)(2x − 1) ]_0^4 = 5216.927

Consider a single application of the trapezoidal rule:
  f(0) = 0      f(4) = 11923.83
  I ≈ (4 − 0) (0 + 11923.83)/2 = 23847.66
  ε_t = |23847.66 − 5216.93| / 5216.93 × 100% = 357%

Multiple application of the trapezoidal rule (n = 2):
  I ≈ (b − a) [f(x_0) + 2 Σ f(x_i) + f(x_n)] / (2n)
    = (4 − 0) [0 + 2(109.196) + 11923.83] / 4 = 12142.22
  ε_t = |12142.22 − 5216.93| / 5216.93 × 100% = 133%
We are obviously not doing very well on our estimates. Let’s consider a scheme where we “weight” the estimates.
...end of example
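Both error figures in this example can be checked numerically (a sketch with my own names; the exact value comes from the antiderivative computed in the solution):

```python
import math

f = lambda x: x * math.exp(2.0 * x)
true_I = (math.exp(8.0) / 4.0) * 7.0 + 0.25   # analytical value, about 5216.93

def trap_composite(f, a, b, n):
    h = (b - a) / n
    total = f(a) + f(b) + sum(2.0 * f(a + i * h) for i in range(1, n))
    return (b - a) * total / (2.0 * n)

I1 = trap_composite(f, 0.0, 4.0, 1)         # single application: 23847.66
I2 = trap_composite(f, 0.0, 4.0, 2)         # n = 2: 12142.22
eps1 = abs(I1 - true_I) / true_I * 100.0    # about 357%
eps2 = abs(I2 - true_I) / true_I * 100.0    # about 133%
```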

Integration of Equations
• Integration of analytical (as opposed to tabular) functions
• Romberg Integration
  – Richardson’s Extrapolation
  – Romberg Integration Algorithm
• Gauss Quadrature
• Improper Integrals

Richardson’s Extrapolation
• Use two estimates of an integral to compute a third, more accurate approximation
• The estimate and error associated with a multiple-application trapezoidal rule can be represented generally as:
  I = I(h) + E(h)
where I is the exact value of the integral, I(h) is the approximation from an n-segment application, E(h) is the truncation error, and h is the step size (b − a)/n.

Make two separate estimates using step sizes h_1 and h_2:
  I(h_1) + E(h_1) = I(h_2) + E(h_2)
Recall the error of the multiple-application trapezoidal rule:
  E ≈ −[(b − a)/12] h² f̄''
Assume that f̄'' is constant regardless of the step size, so that:
  E(h_1)/E(h_2) = h_1²/h_2²

  E(h_1) ≈ E(h_2) (h_1/h_2)²
Substitute into the previous equation, I(h_1) + E(h_1) = I(h_2) + E(h_2):
  E(h_2) ≈ [I(h_1) − I(h_2)] / [1 − (h_1/h_2)²]

E h 2   Ih1   Ih 2   h1 1  h  2     2 Thus we have developed an estimate of the truncation error in terms of the integral estimates and their step sizes. This estimate can then be substituted into: I = I(h2) + E(h2) to yield an improved estimate of the integral:     1 Ih 2   Ih1  I  Ih 2    2   h1   1  h2      .

  I ≈ I(h_2) + [I(h_2) − I(h_1)] / [(h_1/h_2)² − 1]
What is the equation for the special case where the interval is halved, i.e. h_2 = h_1/2?

With h_1 = 2h_2, (h_1/h_2)² = 2²:
  I ≈ I(h_2) + [I(h_2) − I(h_1)] / (2² − 1)
Collecting terms:
  I ≈ (4/3) I(h_2) − (1/3) I(h_1)

Example
Use Richardson’s extrapolation to evaluate:
  I = ∫_0^4 x e^{2x} dx
[Plot of f(x) = x e^{2x} on 0 ≤ x ≤ 4]

Solution
Recall our previous example, where we calculated the integral using a single application of the trapezoidal rule, and then a multiple application of the trapezoidal rule by dividing the interval in half.

  Single application of trapezoidal rule:            357%
  Multiple application of trapezoidal rule (n = 2):  133%
  Richardson’s extrapolation:                        57.96%
...end of example
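Combining the two trapezoidal estimates as the halved-interval formula prescribes (a sketch, my naming) reproduces the 57.96% figure:

```python
import math

f = lambda x: x * math.exp(2.0 * x)
true_I = (math.exp(8.0) / 4.0) * 7.0 + 0.25            # analytical value

I_h1 = (4.0 - 0.0) * (f(0.0) + f(4.0)) / 2.0           # trapezoid, 1 segment -> 23847.66
I_h2 = (f(0.0) + 2.0 * f(2.0) + f(4.0)) * 4.0 / 4.0    # trapezoid, 2 segments -> 12142.22
I_rich = (4.0 / 3.0) * I_h2 - (1.0 / 3.0) * I_h1       # Richardson -> 8240.41
eps = abs(I_rich - true_I) / true_I * 100.0            # about 57.96%
```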

ROMBERG INTEGRATION
We can continue to improve the estimate by successive halving of the step size:
  I ≈ (4/3) I(h_m) − (1/3) I(h_l)        (k = 2, j = 1)
  I ≈ (16/15) I(h_m) − (1/15) I(h_l)     (k = 3)
  I ≈ (64/63) I(h_m) − (1/63) I(h_l)     (k = 4)
which yield the general formula:
  I_{j,k} ≈ [4^{k−1} I_{j+1,k−1} − I_{j,k−1}] / (4^{k−1} − 1)
Note: the subscripts m and l refer to the more and less accurate estimates.

Following a pattern similar to Newton divided differences, Romberg’s table can be produced:

  j   h      k = 1 O(h²)   k = 2 O(h⁴)   k = 3 O(h⁶)   k = 4 O(h⁸)
  1   h      I_{1,1}       I_{1,2}       I_{1,3}       I_{1,4}
  2   h/2    I_{2,1}       I_{2,2}       I_{2,3}
  3   h/4    I_{3,1}       I_{3,2}
  4   h/8    I_{4,1}

The first column holds trapezoidal-rule estimates; the k = 2 column, I ≈ (4/3)I(h_m) − (1/3)I(h_l), is equivalent to Simpson’s 1/3 rule; the k = 3 column, I ≈ (16/15)I(h_m) − (1/15)I(h_l), to Boole’s rule; and the k = 4 column uses I ≈ (64/63)I(h_m) − (1/63)I(h_l).
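The table-building recursion can be sketched as follows (my naming; the first column is the composite trapezoidal rule with successively halved steps):

```python
import math

def romberg(f, a, b, levels):
    """Romberg integration: build the first (trapezoidal) column, then apply
    I[j][k] = (4**(k-1) * I[j+1][k-1] - I[j][k-1]) / (4**(k-1) - 1)."""
    col = []
    for j in range(levels):                      # n = 1, 2, 4, ... segments
        n = 2 ** j
        h = (b - a) / n
        s = f(a) + f(b) + sum(2.0 * f(a + i * h) for i in range(1, n))
        col.append((b - a) * s / (2.0 * n))
    for k in range(2, levels + 1):               # successive extrapolation columns
        factor = 4.0 ** (k - 1)
        col = [(factor * col[j + 1] - col[j]) / (factor - 1.0)
               for j in range(len(col) - 1)]
    return col[0]

I = romberg(lambda x: x * math.exp(2.0 * x), 0.0, 4.0, 7)
true_I = (math.exp(8.0) / 4.0) * 7.0 + 0.25      # about 5216.93
```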


Gauss Quadrature
[Figure: with the trapezoidal rule the straight line is pinned to the end points; Gauss quadrature instead positions the line to extend the area under the straight line, balancing positive and negative errors against f(x)]

Method of Undetermined Coefficients
Recall the trapezoidal rule:
  I ≈ (b − a) [f(a) + f(b)] / 2
This can also be expressed as:
  I ≈ c_0 f(a) + c_1 f(b)
where the c’s are constants. Before analyzing this method, answer this question: what are two functions that should be evaluated exactly by the trapezoidal rule?

The two cases that should be evaluated exactly by the trapezoidal rule:
  1) y = constant (e.g. y = 1)
  2) a straight line (e.g. y = x)
with the limits of integration taken as −(b − a)/2 and (b − a)/2.

Thus, the following equalities should hold.
For y = 1 (since f(a) = f(b) = 1):
  c_0 + c_1 = ∫_{−(b−a)/2}^{(b−a)/2} 1 dx
For y = x (since f(a) = −(b − a)/2 and f(b) = (b − a)/2):
  −c_0 (b − a)/2 + c_1 (b − a)/2 = ∫_{−(b−a)/2}^{(b−a)/2} x dx

Evaluating both integrals:
  c_0 + c_1 = b − a                        (for y = 1)
  −c_0 (b − a)/2 + c_1 (b − a)/2 = 0       (for y = x)
Now we have two equations and two unknowns, c_0 and c_1. Solving simultaneously, we get:
  c_0 = c_1 = (b − a)/2
Substitute this back into:
  I ≈ c_0 f(a) + c_1 f(b)

f a   f b  I  b  a  2 DERIVATION OF THE TWO-POINT GAUSS-LEGENDRE FORMULA I  c0f x 0   c1f x1  Lets raise the level of sophistication by: .i. “open integration” .considering two points between -1 and 1 .e.We get the equivalent of the trapezoidal rule.

Previously we assumed that the equation fit the integrals of a constant and a linear function. Extend the reasoning by assuming that it also fits the integrals of a parabolic and a cubic function.
[Figure: f(x) on −1 ≤ x ≤ 1 with interior points x_0 and x_1]

  c_0 f(x_0) + c_1 f(x_1) = ∫_{−1}^{1} 1 dx = 2
  c_0 f(x_0) + c_1 f(x_1) = ∫_{−1}^{1} x dx = 0
  c_0 f(x_0) + c_1 f(x_1) = ∫_{−1}^{1} x² dx = 2/3
  c_0 f(x_0) + c_1 f(x_1) = ∫_{−1}^{1} x³ dx = 0
We now have four equations and four unknowns: c_0, c_1, x_0 and x_1. What equations are you solving?

f(x_i) is either 1, x_i, x_i² or x_i³, so the four conditions become:
  c_0 + c_1 = 2
  c_0 x_0 + c_1 x_1 = 0
  c_0 x_0² + c_1 x_1² = 2/3
  c_0 x_0³ + c_1 x_1³ = 0
Solve these equations simultaneously.

This results in the following:
  c_0 = c_1 = 1
  x_0 = −1/√3      x_1 = 1/√3
  I ≈ f(−1/√3) + f(1/√3)
The interesting result is that the simple addition of the function values at −1/√3 and 1/√3 yields a third-order-accurate integral estimate.

However, we have set the limits of integration at −1 and 1. This was done to simplify the mathematics. A simple change of variables can be used to translate other limits. Assume that the new variable x_d is related to the original variable x in a linear fashion:
  x = a_0 + a_1 x_d
Let the lower limit x = a correspond to x_d = −1 and the upper limit x = b correspond to x_d = 1:
  a = a_0 + a_1(−1)
  b = a_0 + a_1(1)

  a = a_0 + a_1(−1)
  b = a_0 + a_1(1)
SOLVE THESE EQUATIONS SIMULTANEOUSLY:
  a_0 = (b + a)/2      a_1 = (b − a)/2
Substitute into x = a_0 + a_1 x_d:
  x = (b + a)/2 + [(b − a)/2] x_d

  x = (b + a)/2 + [(b − a)/2] x_d
  dx = [(b − a)/2] dx_d
These equations are substituted for x and dx, respectively. Let’s do an example to appreciate the theory behind this numerical method.

Example
Estimate the following using two-point Gauss quadrature:
  I = ∫_0^4 x e^{2x} dx
[Plot of f(x) = x e^{2x} on 0 ≤ x ≤ 4]

Solution
With a = 0 and b = 4: x = 2 + 2x_d and dx = 2 dx_d, so
  I = ∫_{−1}^{1} (2 + 2x_d) e^{2(2 + 2x_d)} · 2 dx_d
Now evaluate the integrand at x_d = ∓1/√3:
  I ≈ (2 − 2/√3) e^{2(2 − 2/√3)} · 2 + (2 + 2/√3) e^{2(2 + 2/√3)} · 2
    = 9.17 + 3468.38 = 3477.55
...end of problem
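The change of variable and the two-point formula together make a compact sketch (my naming):

```python
import math

def gauss2(f, a, b):
    """Two-point Gauss-Legendre quadrature after the linear change of
    variable x = (b + a)/2 + (b - a)/2 * xd, dx = (b - a)/2 * dxd."""
    xd = 1.0 / math.sqrt(3.0)
    half = (b - a) / 2.0
    mid = (b + a) / 2.0
    return half * (f(mid - half * xd) + f(mid + half * xd))

I = gauss2(lambda x: x * math.exp(2.0 * x), 0.0, 4.0)   # about 3477.5
```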

  Single application of trapezoidal rule:            357%
  Multiple application of trapezoidal rule (n = 2):  133%
  Richardson’s extrapolation:                         58%
  Two-point Gauss quadrature:                         33%

Higher-Point Gauss Formulas
  I ≈ c_0 f(x_0) + c_1 f(x_1) + … + c_{n−1} f(x_{n−1})
For two points, we determined that c_0 = c_1 = 1.
For three points:
  c_0 = 0.556 (5/9)    x_0 = −0.775 = −(3/5)^{1/2}
  c_1 = 0.889 (8/9)    x_1 = 0.0
  c_2 = 0.556 (5/9)    x_2 = 0.775 = (3/5)^{1/2}

Higher-Point Gauss Formulas
For four points:
  c_0 = [18 − (30)^{1/2}]/36    x_0 = −{[525 + 70(30)^{1/2}]}^{1/2}/35
  c_1 = [18 + (30)^{1/2}]/36    x_1 = −{[525 − 70(30)^{1/2}]}^{1/2}/35
  c_2 = [18 + (30)^{1/2}]/36    x_2 = +{[525 − 70(30)^{1/2}]}^{1/2}/35
  c_3 = [18 − (30)^{1/2}]/36    x_3 = +{[525 + 70(30)^{1/2}]}^{1/2}/35
Your text goes on to provide additional weighting factors (c_i’s) and function arguments (x_i’s) in Table 22.1, p. 626.

Numerical Differentiation
• Forward finite divided difference
• Backward finite divided difference
• Centered finite divided difference
• All based on the Taylor series:
  f(x_{i+1}) = f(x_i) + f'(x_i) h + [f''(x_i)/2!] h² + …

Forward Finite Difference
  f(x_{i+1}) = f(x_i) + f'(x_i) h + [f''(x_i)/2!] h² + [f'''(x_i)/3!] h³ + …
Solving for f'(x_i):
  f'(x_i) = [f(x_{i+1}) − f(x_i)]/h + O(h)
  f'(x_i) = [f(x_{i+1}) − f(x_i)]/h − [f''(x_i)/2] h + O(h²)

Forward Divided Difference
  f'(x_i) = [f(x_{i+1}) − f(x_i)] / (x_{i+1} − x_i) + O(x_{i+1} − x_i) = Δf_i/h + O(h)
[Figure: f(x) with points (x_i, y_i) and (x_{i+1}, y_{i+1}) joined by the forward-difference slope]

  f'(x_i) = Δf_i/h + O(h)      (first forward divided difference)
• O(h): the error is proportional to the step size
• O(h²): the error is proportional to the square of the step size
• O(h³): the error is proportional to the cube of the step size

yi-1) x .f(x) (xi.yi) (xi-1.

Backward Difference Approximation of the First Derivative
Expand the Taylor series backwards:
  f(x_{i−1}) = f(x_i) − f'(x_i) h + [f''(x_i)/2!] h² − …
Solving for f'(x_i):
  f'(x_i) ≈ [f(x_i) − f(x_{i−1})]/h = ∇f_i/h
The error is still O(h).

Centered Difference Approximation of the First Derivative and Second Derivative
Subtract and add the backward Taylor expansion from and to the forward Taylor series expansion:
  Forward:   f(x_{i+1}) = f(x_i) + f'(x_i) h + [f''(x_i)/2] h² + [f'''(x_i)/6] h³ + …
  Backward:  f(x_{i−1}) = f(x_i) − f'(x_i) h + [f''(x_i)/2] h² − [f'''(x_i)/6] h³ + …
Subtracting:
  f'(x_i) = [f(x_{i+1}) − f(x_{i−1})] / (2h) + O(h²)
Adding:
  f''(x_i) = [f(x_{i+1}) − 2 f(x_i) + f(x_{i−1})] / h² + O(h²)
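Both centered formulas can be exercised in a few lines (names mine). On a cubic, the first-derivative error is exactly the h²f'''/6 term, and the second-derivative formula is exact because its error term involves the vanishing fourth derivative:

```python
def d1_centered(f, x, h):
    """First derivative, centered difference, O(h^2)."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

def d2_centered(f, x, h):
    """Second derivative, centered difference, O(h^2)."""
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / (h * h)

cube = lambda x: x ** 3            # f' = 3x^2, f'' = 6x
d1 = d1_centered(cube, 2.0, 0.1)   # 12.01 (true 12 plus h^2 * f''' / 6 = 0.01)
d2 = d2_centered(cube, 2.0, 0.1)   # 12.0 to machine precision (f'''' = 0)
```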

  f'(x_i) = [f(x_{i+1}) − f(x_{i−1})] / (2h) + O(h²)
[Figure: f(x) with points (x_{i−1}, y_{i−1}), (x_i, y_i), (x_{i+1}, y_{i+1}) and the centered-difference slope]

Numerical Differentiation
• Forward finite divided differences: Fig. 23.1
• Backward finite divided differences: Fig. 23.2
• Centered finite divided differences: Fig. 23.3
• First through fourth derivatives
• Error: O(h) and O(h²); O(h⁴) versions for centered only

Derivatives with Richardson Extrapolation
• Two ways to improve derivative estimates:
  – decrease the step size
  – use a higher-order formula that employs more points
• A third approach, based on Richardson extrapolation, uses two derivative estimates to compute a third, more accurate approximation

Richardson Extrapolation
  I ≈ I(h_2) + [I(h_2) − I(h_1)] / [(h_1/h_2)² − 1]
For the special case where h_2 = h_1/2:
  I ≈ (4/3) I(h_2) − (1/3) I(h_1)
In a similar fashion, for derivatives:
  D ≈ (4/3) D(h_2) − (1/3) D(h_1)
For a centered difference approximation with O(h²), the application of this formula will yield a new derivative estimate of O(h⁴).

Example
Given the function:
  f(x) = −0.1x⁴ − 0.15x³ − 0.5x² − 0.25x + 1.2
use centered finite divided differences to estimate the derivative at x = 0.5.
  f(0) = 1.2      f(0.25) = 1.1035      f(0.75) = 0.636      f(1) = 0.2

[Plot of f(x) on 0 ≤ x ≤ 1]

Using a centered finite divided difference with h = 0.5:
  D = [f(1) − f(0)]/1 = (0.2 − 1.2)/1 = −1.0      ε_t = 9.6%
Using a centered finite divided difference with h = 0.25:
  D = [f(0.75) − f(0.25)]/0.5 = (0.636 − 1.1035)/0.5 = −0.934      ε_t = 2.4%
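Applying the extrapolation formula D ≈ (4/3)D(h_2) − (1/3)D(h_1) to these two estimates (a sketch, my naming): at full precision the result is exactly the true derivative −0.9125, because the quartic's O(h⁴) error term vanishes.

```python
f = lambda x: -0.1 * x**4 - 0.15 * x**3 - 0.5 * x**2 - 0.25 * x + 1.2

def d_centered(x, h):
    return (f(x + h) - f(x - h)) / (2.0 * h)

D_h1 = d_centered(0.5, 0.5)     # -1.0       (eps_t about 9.6%)
D_h2 = d_centered(0.5, 0.25)    # -0.934375  (eps_t about 2.4%)
D = (4.0 / 3.0) * D_h2 - (1.0 / 3.0) * D_h1   # Richardson -> -0.9125 exactly
```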

Derivatives of Unequally Spaced Data
• Common in data from experiments or field studies
• Fit a second-order Lagrange interpolating polynomial to each set of three adjacent points, since this polynomial does not require that the points be uniformly spaced
• Differentiate analytically:
  f'(x) = f(x_{i−1}) (2x − x_i − x_{i+1}) / [(x_{i−1} − x_i)(x_{i−1} − x_{i+1})]
        + f(x_i) (2x − x_{i−1} − x_{i+1}) / [(x_i − x_{i−1})(x_i − x_{i+1})]
        + f(x_{i+1}) (2x − x_{i−1} − x_i) / [(x_{i+1} − x_{i−1})(x_{i+1} − x_i)]

Derivative and Integral Estimates for Data with Errors
• In addition to unequal spacing, the other problem related to differentiating empirical data is measurement error
• Differentiation amplifies error
• Integration tends to be more forgiving
• The primary approach for determining derivatives of imprecise data is to use least-squares regression to fit a smooth, differentiable function to the data
• In the absence of other information, a lower-order polynomial regression is a good first choice

[Figure: y vs. t and dy/dt vs. t for raw and smoothed data, showing how differentiation amplifies the scatter]

Ordinary Differential Equations
• A differential equation defines a relationship between an unknown function and one or more of its derivatives
• Physical problems using differential equations:
  – electrical circuits
  – heat transfer
  – motion

Ordinary Differential Equations
• The derivatives are of the dependent variable with respect to the independent variable
• A first-order differential equation with y as the dependent variable and x as the independent variable would be:
  dy/dx = f(x, y)

y.Ordinary Differential Equations • A second order differential equation would have the form: d2y dy    f  x.  2 dx dx   } does not necessarily have to include all of these variables .

Ordinary Differential Equations
• An ordinary differential equation is one with a single independent variable; thus, the previous two equations are ordinary differential equations
• The following is not:
  ∂y/∂x_1 = f(x_1, x_2, y)

Ordinary Differential Equations
• The analytical solution of an ordinary differential equation, as well as of a partial differential equation, is called the “closed-form solution”
• This solution requires that the constants of integration be evaluated using prescribed values of the independent variable(s)

Ordinary Differential Equations
• An ordinary differential equation of order n requires that n conditions be specified:
  – boundary conditions
  – initial conditions
Consider a beam where the deflection is zero at the boundaries x = 0 and x = L. These are boundary conditions.

In some cases, the specific behavior of a system is known at a particular time. Consider a beam under a load P whose deflection at x = a is shown at time t = 0 to be equal to y_0. Being interested in the response for t > 0, this is called the initial condition.

Ordinary Differential Equations
• At best, only a few differential equations can be solved analytically in closed form
• Solutions of most practical engineering problems involving differential equations require the use of numerical methods

Review of the Analytical Solution
  dy/dx = 4x²
  ∫ dy = ∫ 4x² dx
  y = 4x³/3 + C
At this point let’s consider the initial conditions y(0) = 1 and y(0) = 2.

  y = 4x³/3 + C
For y(0) = 1:  1 = 4(0)/3 + C, then C = 1
For y(0) = 2:  2 = 4(0)/3 + C, and C = 2
What we see are different values of C for the two different initial conditions. The resulting equations are:
  y = 4x³/3 + 1
  y = 4x³/3 + 2

[Plot of y = 4x³/3 + C for y(0) = 1, 2, 3, 4 on 0 ≤ x ≤ 2.5]

One-Step Methods
• Focus is on solving ODEs in the form:
  dy/dx = f(x, y)
• which leads to the iteration:
  y_{i+1} = y_i + φ h
where φ is a slope estimate. This is the same as saying:
  new value = old value + slope × step size

Euler’s Method
• The first derivative provides a direct estimate of the slope at x_i
• The equation is applied iteratively, or one step at a time, over a small distance in order to reduce the error
• Hence this is often referred to as Euler’s one-step method

Example
  dy/dx = 4x²
  I.C.: y = 1 at x = 1; step size 0.1
Analytical solution:
  y(1.1) = 1 + ∫_1^{1.1} 4x² dx = 1 + (4/3)(1.1³ − 1³) = 1.4413

Euler’s method:
  y_{i+1} = y_i + f h
  y(1.1) = 1 + f(1) (0.1) = 1 + [4(1)²] (0.1) = 1.4
               (slope)  (step size)
Recall the analytical solution was 1.4413. If we instead reduce the step size to 0.05 and apply Euler’s method twice...
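The one-step and two-step cases can be compared in a small sketch (names mine):

```python
def euler(f, x0, y0, h, steps):
    """Euler's one-step method: new value = old value + slope * step size."""
    x, y = x0, y0
    for _ in range(steps):
        y += f(x, y) * h
        x += h
    return y

f = lambda x, y: 4.0 * x**2
y_h10 = euler(f, 1.0, 1.0, 0.1, 1)    # 1.4    (one step,  h = 0.1)
y_h05 = euler(f, 1.0, 1.0, 0.05, 2)   # 1.4205 (two steps, h = 0.05)
# Analytical value at x = 1.1 is 1.4413: halving h reduces the error.
```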

Error Analysis of Euler’s Method
• Truncation error - caused by the nature of the techniques employed to approximate values of y:
  – local truncation error (from the Taylor series)
  – propagated truncation error
  – the sum of the two = global truncation error
• Round-off error - caused by the limited number of significant digits that can be retained by a computer or calculator

[Figure: analytical vs. numerical solution on 0 ≤ x ≤ 3]
...end of example

Higher-Order Taylor Series Methods
  y_{i+1} = y_i + f(x_i, y_i) h + [f'(x_i, y_i)/2] h²
• This is simple enough to implement with polynomials
• Not so trivial with more complicated ODEs
• In particular, ODEs that are functions of both dependent and independent variables require chain-rule differentiation
• Alternative one-step methods are needed

Modification of Euler’s Method
• A fundamental error in Euler’s method is that the derivative at the beginning of the interval is assumed to apply across the entire interval
• Two simple modifications will be demonstrated
• These modifications actually belong to a larger class of solution techniques called Runge-Kutta, which we will explore later

Heun’s Method
• Consider our Taylor expansion:
  y_{i+1} ≈ y_i + f(x_i, y_i) h + [f'(x_i, y_i)/2] h²
• Approximate f' as a simple forward difference:
  f'(x_i, y_i) ≈ [f(x_{i+1}, y_{i+1}) − f(x_i, y_i)] / h
• Substituting into the expansion:
  y_{i+1} = y_i + f_i h + [(f_{i+1} − f_i)/h] (h²/2) = y_i + [(f_i + f_{i+1})/2] h

Heun’s Method Algorithm
• Determine the derivatives for the interval at:
  – the initial point
  – the end point (based on an Euler step from the initial point)
• Use the average to obtain an improved estimate of the slope for the entire interval
• We can think of the Euler step as a “test” step

[Figure: Euler “test” step of size h from x_i to x_{i+1}]

[Figure: slopes at x_i and x_{i+1}; take the average of these two slopes]

  y_{i+1} = y_i + { [f(x_i, y_i) + f(x_{i+1}, y_{i+1})] / 2 } h
where the value y_{i+1} inside f is the Euler predictor.
[Figure: predictor slope at x_i, corrector using the averaged slope over the interval]

yi 1/ 2  . y i  2 • This predicted value is used to estimate the slope at the midpoint y'i 1/ 2  f x i 1/ 2 .Improved Polygon Method • Another modification of Euler’s Method (sometimes called the Midpoint Method) • Uses Euler’s to predict a value of y at the midpoint of the interval yi 1/ 2 h  y i  f x i .

y i 1 / 2  h • We could also get this algorithm from substituting a forward difference in f to i+1/2 into the Taylor expansion for f’.e. y i 1 f i 1 / 2  f i  h 2   yi  fi h    y i  f i 1 / 2 h   h/2  2 .Improved Polygon Method • We then assume that this slope represents a valid approximation of the average slope for the entire interval • Use this slope to extrapolate linearly from xi to xi+1 using Euler’s algorithm y i 1  y i  f  x i 1 / 2 . i.

[Figure: Euler prediction of f(x_{i+1/2}) at the midpoint x_{i+1/2}]

[Figure: slope f'(x_{i+1/2}) evaluated at the midpoint]

[Figure: extend the midpoint slope over the full step h from x_i to x_{i+1} to get f(x_{i+1})]

Runge-Kutta Methods
• RK methods achieve the accuracy of a Taylor series approach without requiring the calculation of higher derivatives
• Many variations exist, but all can be cast in the generalized form:
  y_{i+1} = y_i + φ(x_i, y_i, h) h
where φ is called the incremental function.

The incremental function can be interpreted as a representative slope over the interval:
  φ = a_1 k_1 + a_2 k_2 + … + a_n k_n
where the a’s are constants and the k’s are:
  k_1 = f(x_i, y_i)
  k_2 = f(x_i + p_1 h, y_i + q_{11} k_1 h)
  k_3 = f(x_i + p_2 h, y_i + q_{21} k_1 h + q_{22} k_2 h)
  …
  k_n = f(x_i + p_{n−1} h, y_i + q_{n−1,1} k_1 h + q_{n−1,2} k_2 h + … + q_{n−1,n−1} k_{n−1} h)

NOTE: the k’s are recurrence relationships; that is, k_1 appears in the equation for k_2, and both appear in the equation for k_3, and so on. This recurrence makes RK methods efficient for computer calculations.

Second-Order RK Methods
  y_{i+1} = y_i + (a_1 k_1 + a_2 k_2) h
where
  k_1 = f(x_i, y_i)
  k_2 = f(x_i + p_1 h, y_i + q_{11} k_1 h)

Second-Order RK Methods
• We have to determine values for the constants a_1, a_2, p_1 and q_{11}
• To do this, consider the Taylor series in terms of y_{i+1} and f(x_i, y_i):
  y_{i+1} = y_i + f(x_i, y_i) h + [f'(x_i, y_i)/2] h²

Now, f'(x_i, y_i) must be determined by the chain rule for differentiation:
  f'(x_i, y_i) = ∂f/∂x + (∂f/∂y)(dy/dx)
Substituting into the expansion:
  y_{i+1} = y_i + f(x_i, y_i) h + [∂f/∂x + (∂f/∂y)(dy/dx)] h²/2
The basic strategy underlying Runge-Kutta methods is to use algebraic manipulations to solve for values of a_1, a_2, p_1 and q_{11}.

yi   x i . yi  x i .yi1  yi  a1k1  a 2 k 2 h  f  h2 f dy yi1  yi  f x i . yi h   x i . yi   x 2 y dx   By setting these two equations equal to each other and recalling: k 1  f x i . yi  q11k1h  we derive three equations to evaluate the four unknown constants . y i  k 2  f x i  p1h.

  a_1 + a_2 = 1
  a_2 p_1 = 1/2
  a_2 q_{11} = 1/2
Because we have three equations with four unknowns, we must assume a value for one of the unknowns. Suppose we specify a value for a_2. What would the equations be?

  a_1 = 1 − a_2
  p_1 = q_{11} = 1/(2a_2)
Because we can choose an infinite number of values for a_2, there are an infinite number of second-order RK methods. Every solution would yield exactly the same result if the solution to the ODE were quadratic, linear or a constant. Let’s review three of the most commonly used and preferred versions.

y i  q11k1h  .y i 1  y i  a1k1  a 2 k 2 h where k 1  f x i . y i  a1  a 2  1 1 a 2 p1  2 1 a 2 q11  2 Consider the following: Case 1: a2 = 1/2 Case 2: a2 = 1 These two methods have been previously studied. What are they? k 2  f x i  p1h.

Case 1: a_2 = 1/2
  a_1 = 1 − 1/2 = 1/2      p_1 = q_{11} = 1/(2a_2) = 1
  y_{i+1} = y_i + (k_1/2 + k_2/2) h
where
  k_1 = f(x_i, y_i)
  k_2 = f(x_i + h, y_i + k_1 h)
This is Heun’s method with a single corrector. Note that k_1 is the slope at the beginning of the interval and k_2 is the slope at the end of the interval.

Case 2: a_2 = 1
  a_1 = 1 − 1 = 0      p_1 = q_{11} = 1/(2a_2) = 1/2
  y_{i+1} = y_i + k_2 h
where
  k_1 = f(x_i, y_i)
  k_2 = f(x_i + h/2, y_i + k_1 h/2)
This is the improved polygon method.

Ralston’s Method
Ralston (1962) and Ralston and Rabinowitz (1978) determined that choosing a_2 = 2/3 provides a minimum bound on the truncation error for second-order RK algorithms. This results in a_1 = 1/3 and p_1 = q_{11} = 3/4:
  y_{i+1} = y_i + (k_1/3 + 2k_2/3) h
where
  k_1 = f(x_i, y_i)
  k_2 = f(x_i + 3h/4, y_i + 3 k_1 h/4)

Example
As a class problem, let’s consider two steps of:
  dy/dx = 4x² y
  I.C.: y = 1 at x = 1, i.e. y(1) = 1; step size h = 0.1
Some of you do the analytical solution; others do one of:
• Ralston’s
• Heun’s
• Improved polygon

Third-Order Runge-Kutta Methods
• Derivation is similar to the one for the second order
• Results in six equations and eight unknowns
• One common version results in the following:
  y_{i+1} = y_i + (1/6)(k_1 + 4k_2 + k_3) h
where
  k_1 = f(x_i, y_i)
  k_2 = f(x_i + h/2, y_i + k_1 h/2)
  k_3 = f(x_i + h, y_i − h k_1 + 2h k_2)      ← note the third term
NOTE: if the derivative is a function of x only, this reduces to Simpson’s 1/3 rule.

Fourth-Order Runge-Kutta
• The most popular
• The following is sometimes called the classical fourth-order RK method:
  y_{i+1} = y_i + (1/6)(k_1 + 2k_2 + 2k_3 + k_4) h
where
  k_1 = f(x_i, y_i)
  k_2 = f(x_i + h/2, y_i + k_1 h/2)
  k_3 = f(x_i + h/2, y_i + k_2 h/2)
  k_4 = f(x_i + h, y_i + k_3 h)

• Note that for ODEs that are a function of x alone, this is also the equivalent of Simpson’s 1/3 rule.

Example
Use fourth-order RK to solve the following differential equation:
  dy/dx = x y / (1 + x²)
  I.C.: y(1) = 1, using an interval of h = 0.1

Solution
  y_{i+1} = y_i + (1/6)(k_1 + 2k_2 + 2k_3 + k_4) h
We will determine the four different estimates of the slope, i.e. k_1, k_2, k_3 and k_4:
  k_1 = f(x_i, y_i)
  k_2 = f(x_i + h/2, y_i + k_1 h/2)
  k_3 = f(x_i + h/2, y_i + k_2 h/2)
  k_4 = f(x_i + h, y_i + k_3 h)

  k_1 = f(1, 1) = 0.5
  k_2 = 0.51189      k_3 = 0.51219      k_4 = 0.52323
  y_{i+1} = 1 + (1/6)[0.5 + 2(0.51189) + 2(0.51219) + 0.52323] (0.1) = 1.05119
...end of problem
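The four slope evaluations and the final weighted average can be reproduced in one function (a sketch, my naming):

```python
def rk4_step(f, x, y, h):
    """One step of the classical fourth-order Runge-Kutta method."""
    k1 = f(x, y)
    k2 = f(x + h / 2.0, y + k1 * h / 2.0)
    k3 = f(x + h / 2.0, y + k2 * h / 2.0)
    k4 = f(x + h, y + k3 * h)
    return y + (k1 + 2.0 * k2 + 2.0 * k3 + k4) * h / 6.0

f = lambda x, y: x * y / (1.0 + x * x)
y1 = rk4_step(f, 1.0, 1.0, 0.1)   # 1.05119, matching the worked example
```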

Higher-Order RK Methods
• When more accurate results are required, Butcher’s (1964) fifth-order RK method is recommended
• There is a similarity to Boole’s rule
• The gain in accuracy is offset by added computational effort and complexity

 . y1 . y n  dx . y 2 .  . y1 .  . y 2 . y n  dx dy 2  f 2 x .Systems of Equations • Many practical problems in engineering and science require the solution of a system of simultaneous differential equations dy1  f1 x . y1 . y n  dx  dy n  f n x . y 2 .

• Solution requires n initial conditions
• All the methods for single equations can be used
• The procedure involves applying the one-step technique to every equation at each step before proceeding to the next step

• Note that higher-order ODEs can be reformulated as simultaneous first-order ODEs. The equation
  d²y/dx² = g(x, y, dy/dx)
can be transformed by defining y_1 = y and y_2 = dy/dx, so that:
  dy_1/dx = y_2
  dy_2/dx = g(x, y_1, y_2)
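This transformation can be exercised with a small vector RK4 sketch (names mine) on y'' = −y with y(0) = 1, y'(0) = 0, whose solution is y = cos(x):

```python
import math

def rk4_system(f, x, y, h, steps):
    """Classical RK4 applied componentwise to a system y' = f(x, y),
    where y is a list of dependent variables."""
    for _ in range(steps):
        k1 = f(x, y)
        k2 = f(x + h/2, [yi + h/2 * k for yi, k in zip(y, k1)])
        k3 = f(x + h/2, [yi + h/2 * k for yi, k in zip(y, k2)])
        k4 = f(x + h, [yi + h * k for yi, k in zip(y, k3)])
        y = [yi + h/6 * (a + 2*b + 2*c + d)
             for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]
        x += h
    return y

# y'' = -y rewritten with y1 = y, y2 = dy/dx:
g = lambda x, y: [y[1], -y[0]]
y = rk4_system(g, 0.0, [1.0, 0.0], 0.1, 10)   # integrate to x = 1
err = abs(y[0] - math.cos(1.0))               # should be tiny
```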

Partial Differential Equations
• An equation involving partial derivatives of an unknown function of two or more independent variables
• The following are examples. Note: u depends on both x and y.
  ∂²u/∂x² + 2xy ∂²u/∂y² + u = 1
  ∂³u/∂x²∂y + x ∂²u/∂y² + 8u = 5y
  (∂²u/∂x²)³ + 6 ∂³u/∂x∂y² = x
  ∂²u/∂x² + x u ∂u/∂y = x

Partial Differential Equations
• Because of their widespread application in engineering, our study of PDEs will focus on linear, second-order equations
• The following general form will be evaluated for B² − 4AC (note that the text does not list the E, F, G terms):
  A ∂²u/∂x² + B ∂²u/∂x∂y + C ∂²u/∂y² + D = 0
where D collects the lower-order terms E ∂u/∂x + F ∂u/∂y + G u and any function of x and y.

  B² − 4AC   Category     Example
  < 0        Elliptic     Laplace equation (steady state with two spatial dimensions):
                            ∂²T/∂x² + ∂²T/∂y² = 0
  = 0        Parabolic    Heat conduction equation (time variable with one spatial dimension):
                            k ∂²T/∂x² = ∂T/∂t
  > 0        Hyperbolic   Wave equation (time variable with one spatial dimension):
                            ∂²y/∂x² = (1/c²) ∂²y/∂t²

For problems in x and y (or t): set up a grid and estimate the dependent variable at the centers or intersections of the grid.

Finite Difference: Elliptic Equations (B² − 4AC < 0)
• Typically used to characterize steady-state boundary-value problems
• Before solving, the Laplace equation will be derived from a physical problem:
  A ∂²u/∂x² + B ∂²u/∂x∂y + C ∂²u/∂y² + D = 0    reduces to
  ∂²u/∂x² + ∂²u/∂y² = 0

The Laplace Equation
  ∂²u/∂x² + ∂²u/∂y² = 0
• Models a variety of problems involving the potential of an unknown variable
• We will consider cases involving thermodynamics, fluid flow, and flow through porous media

The Laplace Equation
• Let’s consider the case of a plate heated from the boundaries
• How is this equation derived from basic concepts of continuity?
• How does it relate to flow fields?

∂²T/∂x² + ∂²T/∂y² = 0

Consider the plate below, with thickness Δz. The temperatures are known at the boundaries. What is the temperature throughout the plate?

(Figure: square plate with boundary temperatures T = 400 on the top edge and T = 200 on the left, right, and bottom edges)

Divide the plate into a grid, with increments Δx and Δy.

(Figure: the same plate overlaid with a grid; T = 400 on top, T = 200 on the other edges)

What is the temperature here, if using a block-centered scheme?

(Figure: the gridded plate with the unknown marked at the center of a grid block)

What is the temperature here, if using a grid-centered scheme?

(Figure: the gridded plate with the unknown marked at a grid intersection)

Consider the element shown below on the face of a plate Δz in thickness. The plate is insulated everywhere but at its edges or boundaries, where the temperature can be set.

(Figure: an elemental volume of dimensions Δx by Δy on the plate face)

qx yzt  qy xzt  qx  x yzt  qy  y xzt .q(y +  y) q(x) q(x +  x) Consider the heat flux q in and out of the elemental volume. the flow of heat in must equal the flow of heat out. q(y) By continuity.

q(x) Δy Δz Δt + q(y) Δx Δz Δt = q(x + Δx) Δy Δz Δt + q(y + Δy) Δx Δz Δt

Divide by Δz and Δt and collect terms. Taking the limit, this equation reduces to:

∂q_x/∂x + ∂q_y/∂y = 0

Again, this is our continuity equation.

q q  0 x y Equation A The link between flux and temperature is provided by Fourier’s Law of heat conduction T q i  krC i Equation B Where qi is the heat flux in the direction i. Substitute B into A to get the Laplace equation .

q q  0 x y T q i  krC i Equation A Equation B q q   T    T      krC     krC  x y x  x  y  y     2T  2T  2 0 2 x  y .

Consider Fluid Flow
In fluid flow, where the fluid is a liquid or a gas, the continuity equation is:

∂V_x/∂x + ∂V_y/∂y = 0

The link here can be either of the following sets of equations:

The potential function φ:   V_x = ∂φ/∂x,   V_y = ∂φ/∂y
The stream function ψ:      V_x = ∂ψ/∂y,   V_y = −∂ψ/∂x

∂V_x/∂x + ∂V_y/∂y = 0

V_x = ∂φ/∂x,  V_y = ∂φ/∂y        V_x = ∂ψ/∂y,  V_y = −∂ψ/∂x

The Laplace equation is then

∂²φ/∂x² + ∂²φ/∂y² = 0     or     ∂²ψ/∂x² + ∂²ψ/∂y² = 0

q q  0 x y H q i  K i Flow in Porous Media Darcy’s Law The link between flux and the pressure head is provided by Darcy’s Law  2h  2h  0 2 2 x y .

For a case with sources and sinks within the 2-D domain, as represented by f(x, y), we have the Poisson equation:

∂²u/∂x² + ∂²u/∂y² = f(x, y)

Now let’s consider solution techniques.

Evaluate these equations based on the grid and the central difference equations:

∂²u/∂x² ≈ (u_{i+1,j} − 2u_{i,j} + u_{i−1,j}) / Δx²
∂²u/∂y² ≈ (u_{i,j+1} − 2u_{i,j} + u_{i,j−1}) / Δy²

(five-point stencil: (i−1,j), (i+1,j), (i,j), (i,j+1), (i,j−1))

(u_{i+1,j} − 2u_{i,j} + u_{i−1,j}) / Δx² + (u_{i,j+1} − 2u_{i,j} + u_{i,j−1}) / Δy² = 0

If Δx = Δy, we can collect the terms to get:

u_{i+1,j} + u_{i−1,j} + u_{i,j+1} + u_{i,j−1} − 4u_{i,j} = 0

This equation is referred to as the Laplacian difference equation. It can be applied to all interior points. We must now consider what to do with the boundary nodes.

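As a quick sanity check (an illustration, not from the text), the Laplacian difference equation is satisfied exactly by a discrete harmonic function such as u = x² − y² on a unit grid:

```python
# Residual of the Laplacian difference equation at interior node (i, j).
def laplacian_residual(u, i, j):
    return u[i+1][j] + u[i-1][j] + u[i][j+1] + u[i][j-1] - 4*u[i][j]

n = 5
# u = x^2 - y^2 is harmonic, so the discrete residual vanishes identically
u = [[i**2 - j**2 for j in range(n)] for i in range(n)]
res = laplacian_residual(u, 2, 2)   # residual at an interior node
```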
Boundary Conditions
• Dirichlet boundary condition: u is specified at the boundary
  – temperature
  – head
• Neumann boundary condition: the derivative is specified, e.g.
  q_i = −k′ ∂T/∂x_i   or   ∂h/∂x_i
• Combination of both u and its derivative (mixed BC)

The simplest case is where the boundaries are specified as fixed values. This case is known as the Dirichlet boundary condition.

(Figure: a grid whose four edges are held at the fixed values u_1, u_2, u_3, u_4)

Consider how we can deal with the lower-left node (1,1), whose boundary neighbors hold the fixed values u_1 (left) and u_4 (bottom):

−4u_{1,1} + u_{1,2} + u_{2,1} + u_1 + u_4 = 0

Note: this grid would result in nine simultaneous equations.

Let’s consider how to model the Neumann boundary condition for ∂²h/∂x² + ∂²h/∂y² = 0, for example ∂h/∂x = 0 along an edge. Suppose we wanted to consider an end grid point: the centered finite divided difference approximation is

∂u/∂x ≈ (u_{i+1,j} − u_{i−1,j}) / (2Δx)

The two boundaries are considered to be symmetry lines, due to the fact that the boundary conditions ∂h/∂x = 0 and ∂h/∂y = 0 translate in finite difference form to:

h_{i+1,j} = h_{i−1,j}   and   h_{i,j+1} = h_{i,j−1}

With ∂h/∂x = 0 on the left boundary and ∂h/∂y = 0 on the lower boundary, the node equations become, for example:

h_{1,2} = (h_{1,1} + h_{1,3} + 2h_{2,2}) / 4
h_{1,1} = (2h_{1,2} + 2h_{2,1}) / 4

Example
The grid on the next slide is designed to solve the Laplace equation

∂²φ/∂x² + ∂²φ/∂y² = 0

Write the finite difference equations for the nodes (1,1), (1,2), and (2,1). Note that the lower boundary is a Dirichlet boundary condition, the left boundary is a Neumann boundary condition, and Δx = Δy.

Solution
With ∂φ/∂x = 0 on the left boundary (so that the exterior node mirrors, φ_{0,j} = φ_{2,j}) and the known Dirichlet values along the lower boundary (taken here as 20) moved to the right-hand side, the Laplacian difference equation at the three nodes gives:

(1,1):  4φ_{1,1} − 2φ_{2,1} − φ_{1,2} = 20
(2,1):  4φ_{2,1} − φ_{1,1} − φ_{3,1} − φ_{2,2} = 20
(1,2):  4φ_{1,2} − 2φ_{2,2} − φ_{1,1} − φ_{1,3} = 0

The Liebmann Method
• Most numerical solutions of the Laplace equation involve systems that are much larger than the general system we just evaluated
• Note that there are a maximum of five unknown terms per row of the matrix
• This results in a significant number of terms that are zero

The Liebmann Method
• In addition to being prone to round-off errors, elimination methods used on such sparse systems waste a great amount of computer memory storing zeros
• Therefore, we commonly employ approaches such as Gauss–Seidel, which when applied to PDEs is also referred to as Liebmann’s method

The Liebmann Method
• In addition, the equations will lead to a matrix that is diagonally dominant
• Therefore the procedure will converge to a stable solution
• Overrelaxation is often employed to accelerate the rate of convergence:

u_{i+1,j} + u_{i−1,j} + u_{i,j+1} + u_{i,j−1} − 4u_{i,j} = 0

u_{i,j}^new = λ u_{i,j}^new + (1 − λ) u_{i,j}^old

As with the conventional Gauss–Seidel method, the iterations are repeated until each point falls below a pre-specified tolerance:

ε_a = |(u_{i,j}^new − u_{i,j}^old) / u_{i,j}^new| × 100%  <  ε_s

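A minimal sketch of Liebmann's method for the heated plate, assuming the boundary temperatures from the earlier figure (400 on top, 200 on the other three sides); the grid size, relaxation factor λ = 1.5, and tolerance are illustrative choices:

```python
n = 10                            # interior nodes per side
T = [[0.0] * (n + 2) for _ in range(n + 2)]
for j in range(n + 2):
    T[0][j] = 400.0               # top boundary
    T[n + 1][j] = 200.0           # bottom boundary
for i in range(1, n + 1):
    T[i][0] = 200.0               # left boundary
    T[i][n + 1] = 200.0           # right boundary

lam, tol = 1.5, 1e-4              # overrelaxation factor, percent tolerance
converged = False
while not converged:
    converged = True
    for i in range(1, n + 1):     # Gauss-Seidel sweep, updating in place
        for j in range(1, n + 1):
            old = T[i][j]
            new = (T[i+1][j] + T[i-1][j] + T[i][j+1] + T[i][j-1]) / 4.0
            T[i][j] = lam * new + (1.0 - lam) * old     # overrelaxation
            if T[i][j] != 0 and abs((T[i][j] - old) / T[i][j]) * 100 > tol:
                converged = False
```

Because the updated values T[i-1][j] and T[i][j-1] are used as soon as they are available, no second copy of the grid is needed.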
Groundwater Flow Example

Modeling one-half of the system shown, we can develop the following schematic, where Δx = Δy = 20 m, with no-flow symmetry boundaries (∂h/∂x = 0, ∂h/∂y = 0), for

∂²h/∂x² + ∂²h/∂y² = 0

The finite difference equations can be solved using a spreadsheet.

CAN USE EXCEL DEMONSTRATION. The fixed-head top row starts at A1 = 100 and rises by 0.05 × 20 = 1 m per cell:

A1: 100     B1: =A1+0.05*20     C1: =B1+0.05*20     …     (through column K)

Each interior cell averages its four neighbors, e.g.

B2: =(B1+C2+B3+A2)/4

Cells on the symmetry boundaries double the mirrored neighbor, e.g.

A2: =(A1+2*B2+A3)/4     (left boundary)
B6: =(2*B5+C6+A6)/4     (bottom boundary)
K2: =(K1+K3+2*J2)/4     (right boundary)
A6: =(2*A5+2*B6)/4      (corner)

You will get an error message in Excel that states that it will not resolve a circular reference.

After selecting the appropriate command (enabling iterative calculation), you will be able to watch the iterations. In fact, Excel will perform the Liebmann method for you.

Table 2: Results of the finite difference model. The fixed-head top row runs from 100 (column A) to 110 (column K) in 1-m increments; the computed interior heads fall smoothly between roughly 101 and 108.5.

…end of problem.

Secondary Variables
• Because its distribution is described by the Laplace equation, temperature is considered to be the primary variable in the heated plate problem
• A secondary variable may also be of interest
• In this case, the secondary variable is the rate of heat flux across the plate’s surface:

q_i = −kρC ∂T/∂i

Finite difference approximations based on the results:

q_x = −k′ (T_{i+1,j} − T_{i−1,j}) / (2Δx)
q_y = −k′ (T_{i,j+1} − T_{i,j−1}) / (2Δy)

The resulting flux is a vector with magnitude and direction:

q_n = √(q_x² + q_y²)
θ = tan⁻¹(q_y / q_x)

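Recovering the secondary variable from a computed temperature field can be sketched as follows (the value of k′ and the linear test field are illustrative assumptions; atan2 is used so the direction is correct in every quadrant):

```python
import math

def flux_at(T, i, j, dx, dy, k_prime):
    """Heat flux magnitude and direction at node (i, j) by centered differences."""
    qx = -k_prime * (T[i+1][j] - T[i-1][j]) / (2 * dx)
    qy = -k_prime * (T[i][j+1] - T[i][j-1]) / (2 * dy)
    qn = math.hypot(qx, qy)          # magnitude sqrt(qx^2 + qy^2)
    theta = math.atan2(qy, qx)       # direction in radians
    return qn, theta

# Linear field T = 100 - 10x: flux should point in +x with magnitude 10*k'
T = [[100 - 10*i for j in range(3)] for i in range(3)]
qn, theta = flux_at(T, 1, 1, 1.0, 1.0, 0.49)
```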
Finite Difference: Parabolic Equations (B² − 4AC = 0)

A ∂²u/∂x² + B ∂²u/∂x∂y + C ∂²u/∂y² + D = 0

These equations are used to characterize transient problems. We will first study this in one spatial dimension (1-D).

Consider the heat-conduction equation:

k ∂²T/∂x² = ∂T/∂t

As with the elliptic PDEs, parabolic equations can be solved by substituting finite difference equations for the partial derivatives. However, we must now consider changes in time as well as space.

The grid now carries a spatial index i and a temporal index l: the value u_i^l sits at position x = iΔx (spatial, subscript) and time t = lΔt (temporal, superscript).

 T T k 2 x t Til1  2Til  Til1 k 2 x  2 Til1  Til  T Forward finite divided difference Centered finite divided difference .

We can further reduce the equation:

k (T_{i+1}^l − 2T_i^l + T_{i−1}^l) / Δx² = (T_i^{l+1} − T_i^l) / Δt

T_i^{l+1} = T_i^l + λ (T_{i+1}^l − 2T_i^l + T_{i−1}^l)     where λ = kΔt/Δx²

NOTE: Now the temperature at a node is estimated as a function of the temperature at that node and the surrounding nodes, but at a previous time.

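One explicit update follows directly from the formula above. The numbers used here (k = 0.835 cm²/s, Δx = 2 cm, Δt = 0.1 s, ends at 100 and 50) anticipate the rod example on the next slide:

```python
# One explicit time step: interior nodes get T + lam*(T_right - 2T + T_left);
# the two boundary nodes are held at their prescribed values.
def explicit_step(T, lam):
    T_new = T[:]                       # copy; boundaries stay fixed
    for i in range(1, len(T) - 1):
        T_new[i] = T[i] + lam * (T[i+1] - 2*T[i] + T[i-1])
    return T_new

lam = 0.835 * 0.1 / 2**2               # lam = k*dt/dx^2 = 0.020875
T = explicit_step([100.0, 0.0, 0.0, 0.0, 0.0, 50.0], lam)
```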
Example
Consider a thin insulated rod 10 cm long with k = 0.835 cm²/s. Let Δx = 2 cm and Δt = 0.1 s. At t = 0 the temperature of the rod is zero; a hot end and a cold end will then be imposed.

Now subject the two ends to temperatures of 100 and 50 degrees, and set the nodes with these boundary and initial conditions:

i =    0     1     2     3     4     5
x =    0     2     4     6     8    10
T =  100     0     0     0     0    50

This is what we consider as the conditions at t = 0.

Consider the temperature at node 1 (x = 2, t = 0) at time t + Δt:

T_i^{l+1} = T_i^l + λ (T_{i+1}^l − 2T_i^l + T_{i−1}^l)

T_1^{t+Δt} = 0 + λ (0 − 2(0) + 100)

With λ = 0.835(0.1)/2² = 0.020875, this gives T_1 = 2.0875.

A spreadsheet implementation: cell B1 holds λ = kΔt/Δx² = 0.020875; column B holds the t = 0 profile (100, 0, 0, 0, 0, 50 for x = 0, 2, …, 10); each later column advances one time step, e.g. for the node at x = 2:

C5: =B5+$B$1*(B6-2*B5+B4)
D5: =C5+$B$1*(C6-2*C5+C4)

with the end rows fixed at 100 and 50. Each interior cell applies

T_i^{l+1} = T_i^l + λ (T_{i+1}^l − 2T_i^l + T_{i−1}^l)

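The same marching scheme can be written as a loop (an illustrative sketch; 120 steps of 0.1 s carry the solution to t = 12 s):

```python
# March the explicit scheme forward in time from the t = 0 profile.
lam = 0.835 * 0.1 / 2**2             # 0.020875, well under the 1/2 limit
T = [100.0, 0.0, 0.0, 0.0, 0.0, 50.0]
for _ in range(120):                 # 120 steps of dt = 0.1 s -> t = 12 s
    T = [T[0]] + [T[i] + lam * (T[i+1] - 2*T[i] + T[i-1])
                  for i in range(1, 5)] + [T[5]]
```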
(Figure: computed temperature profiles T versus x at t = 5, 10, and 15 s.)

…end of example

Convergence and Stability
• Convergence means that as Δx and Δt approach zero, the results of the numerical technique approach the true solution
• Stability means that the errors at any stage of the computation are attenuated, not amplified, as the computation progresses
• The explicit method is both stable and convergent if λ ≤ 1/2

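The stability bound translates directly into a largest allowable time step, Δt ≤ Δx²/(2k). A quick check with the rod example's values (an illustration):

```python
# Largest stable explicit time step from lam = k*dt/dx**2 <= 1/2.
k, dx = 0.835, 2.0
dt_max = 0.5 * dx**2 / k         # maximum stable dt, in seconds
lam_used = k * 0.1 / dx**2       # the example's dt = 0.1 s is comfortably stable
```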
Derivative Boundary Conditions
In our previous example, T_0 and T_L were constant values. However, we may also have derivative boundary conditions:

T_0^{l+1} = T_0^l + λ (T_1^l − 2T_0^l + T_{−1}^l)

Thus we introduce an imaginary point at i = −1. This point provides the vehicle for incorporating the derivative BC.

Derivative Boundary Conditions
For the case of q_0 = 0, T_{−1} = T_1, so the balance at node 0 is:

T_0^{l+1} = T_0^l + λ (2T_1^l − 2T_0^l)

Derivative Boundary Conditions
For the case of q_0 = 10, we need to know k′ [= k/(ρC)]. Assuming k′ = 1, then 10 = −(1) dT/dx, or dT/dx = −10. Applying the centered difference at node 0:

(T_1^l − T_{−1}^l) / (2Δx) = −10     so     T_{−1}^l = T_1^l + 20Δx

T_0^{l+1} = T_0^l + λ (2T_1^l + 20Δx − 2T_0^l)

Implicit Method
• Explicit methods have problems relating to stability
• Implicit methods overcome this, but at the expense of introducing a more complicated algorithm
• In this algorithm, we develop simultaneous equations

Explicit:   (T_{i+1}^l − 2T_i^l + T_{i−1}^l) / Δx²     (the space difference uses grid points at the known time level)
Implicit:   (T_{i+1}^{l+1} − 2T_i^{l+1} + T_{i−1}^{l+1}) / Δx²     (the space difference uses grid points at the new time level)

With the implicit method, we develop a set of simultaneous equations at each step in time.

  Til11  2Til 1  Til11 Til1  Til k  2 t x  which can be expressed as:    Til11  1  2 Til 1  Til11  Til For the case where the temperature level is given at the end by a function f0 i. x = 0 l 1 0 T  f0 t   l 1 .e.

Substituting into the equation for the first interior node gives:

(1 + 2λ) T_1^{l+1} − λT_2^{l+1} = T_1^l + λ f_0(t^{l+1})

In the previous example problem, we get a 4 × 4 matrix to solve for the four interior nodes at each time step.
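One implicit step for the rod example can be sketched by assembling the 4 × 4 tridiagonal system and solving it with the Thomas algorithm (an illustrative implementation, not from the slides):

```python
lam = 0.835 * 0.1 / 2**2                     # lam = k*dt/dx^2
T = [100.0, 0.0, 0.0, 0.0, 0.0, 50.0]        # t = 0 profile, fixed ends

n = 4                                        # interior nodes 1..4
a = [-lam] * n                               # sub-diagonal (a[0] unused)
b = [1 + 2*lam] * n                          # main diagonal
c = [-lam] * n                               # super-diagonal (c[-1] unused)
d = [T[i] for i in range(1, 5)]              # right-hand side T_i^l
d[0] += lam * T[0]                           # known boundary value at x = 0
d[-1] += lam * T[5]                          # known boundary value at x = 10

# Thomas algorithm: forward elimination, then back substitution
for i in range(1, n):
    m = a[i] / b[i-1]
    b[i] -= m * c[i-1]
    d[i] -= m * d[i-1]
x = [0.0] * n
x[-1] = d[-1] / b[-1]
for i in range(n - 2, -1, -1):
    x[i] = (d[i] - c[i] * x[i+1]) / b[i]

T = [T[0]] + x + [T[5]]                      # temperatures after one implicit step
```

Because the system is tridiagonal, the cost per step is O(n), so the larger step sizes the unconditional stability allows come essentially for free.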