
# LINEAR ALGEBRAIC EQUATIONS

• DIRECT METHODS
  - MATRIX INVERSION
  - GAUSSIAN ELIMINATION
  - LU DECOMPOSITION
• ITERATIVE METHODS
  - SUCCESSIVE UNDER-/OVER-RELAXATION

MATRIX INVERSION

5 x1 + 2 x2 + x3 = 8 (1)

x1 + 2 x2 + x3 = 4 (2)

x1 + x2 + 3x3 = 5 (3)

5 2 1  x1  8 
1 2 1  x  = 4
  2    [ A]( x ) = ( r ) ⇒ ( x ) = [ A] −1 ( r )
1 1 3  x3  5
COFACTOR METHOD
COFACTORS

A11 = 2×3 - 1×1 = 5;   A12 = -(1×3 - 1×1) = -2;   A13 = 1×1 - 1×2 = -1;

A21 = -(2×3 - 1×1) = -5;   A22 = 5×3 - 1×1 = 14;   A23 = -(5×1 - 1×2) = -3;

A31 = 2×1 - 2×1 = 0;   A32 = -(5×1 - 1×1) = -4;   A33 = 5×2 - 1×2 = 8.

DETERMINANT

D = a11·A11 + a12·A12 + a13·A13 = 5×5 + 2×(-2) + 1×(-1) = 20.
MATRIX INVERSE
 5 −5 0 
1  
[ A] −1 = − 2 14 − 4
20  
 − 1 − 3 8 

 x1  1
   
( x ) = [ A] ( r ) =  x 2  = 1
−1

 x   1
 3  
GAUSSIAN ELIMINATION

• OBTAIN UPPER TRIANGULAR COEFFICIENT
MATRIX BY ROW OPERATIONS
• OBTAIN SOLUTION BY BACKWARD
SUBSTITUTION
UPPER TRIANGULATION (GAUSSIAN
ELIMINATION)
5 x1 + 2 x2 + x3 = 8 (1)

x1 + 2 x2 + x3 = 4 (2)

x1 + x2 + 3x3 = 5 (3)

5 × Eq.(2) - Eq.(1) ⇒ 8 x2 + 4 x3 = 12 ⇒ 2 x2 + x3 = 3 → New Eq.(2)

5 × Eq.(3) - Eq.(1) ⇒ 3 x2 + 14 x3 = 17 → New Eq.(3)

5 x1 + 2 x2 + x3 = 8    (1)

2 x2 + x3 = 3           (2)

3 x2 + 14 x3 = 17       (3)

2 × Eq.(3) - 3 × Eq.(2) → New Eq.(3), i.e. 25 x3 = 25 ⇒ x3 = 1
BACKWARD SUBSTITUTION
(GAUSSIAN ELIMINATION)

New Eq.(3), i.e. 25 x3 = 25 ⇒ x3 = 1

Substitute for x3 in Eq.(2) ⇒ 2 x2 = 2 ⇒ x2 = 1

Substitute for x3 and x2 in Eq.(1) ⇒ 5 x1 = 5 ⇒ x1 = 1
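The elimination and backward-substitution steps can be sketched in code (an illustrative Python version using NumPy, not the slides' own program):

```python
import numpy as np

def gauss_eliminate(A, r):
    """Solve A x = r by forward elimination followed by backward substitution."""
    A = np.array(A, dtype=float)
    r = np.array(r, dtype=float)
    n = len(r)
    # Forward elimination: zero out the entries below the diagonal.
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            r[i] -= m * r[k]
    # Backward substitution on the resulting upper-triangular system.
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (r[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

# The 3x3 example from the slides; its solution is x1 = x2 = x3 = 1.
x = gauss_eliminate([[5, 2, 1], [1, 2, 1], [1, 1, 3]], [8, 4, 5])
```

This sketch omits pivoting, which a production solver would include.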
LU DECOMPOSITION
[ a11 a12 a13 ]   [ b11  0   0  ] [ c11 c12 c13 ]
[ a21 a22 a23 ] = [ b21 b22  0  ] [  0  c22 c23 ]
[ a31 a32 a33 ]   [ b31 b32 b33 ] [  0   0  c33 ]

a11 = b11·c11 ⇒ Let b11 = a11; hence, c11 = 1.

a12 = b11·c12 ⇒ c12 = a12/b11;

a13 = b11·c13 ⇒ c13 = a13/b11;
LU DECOMPOSITION
a21 = b21·c11 ⇒ b21 = a21/c11;

a22 = b21·c12 + b22·c22 ⇒ Let b22 = a22; hence, c22 = (a22 - b21·c12)/b22;

a23 = b21·c13 + b22·c23 ⇒ c23 = (a23 - b21·c13)/b22;

a31 = b31·c11 ⇒ b31 = a31/c11;

a32 = b31·c12 + b32·c22 ⇒ b32 = (a32 - b31·c12)/c22;

a33 = b31·c13 + b32·c23 + b33·c33 ⇒ Let b33 = a33;

hence, c33 = (a33 - b31·c13 - b32·c23)/b33.
LU DECOMPOSITION
5 2 1 5 0 0 1 0.4 0. 2 
1 2 1 = 1 2 0   0 0. 8 0 . 4 
    
1 1 3 1 0.75 3 0 0 0.8333

Let [ A]( x ) = [ B][ C ]( x ) = ( r ) and [ C ]( x ) = ( y )

Then [ B]( y ) = ( r )
LU DECOMPOSITION
Applying forward substitution procedure, we can solve for the vector (y).

For example, 5 y1 = 8.0; y1 + 2 y2 = 4.0; y1 + 0.75 y2 + 3 y3 = 5.0.

By forward substitution, y1 = 1.6; y2 = 1.2 ; y3 = 0.8333.

Now consider [ C ]( x ) = ( y )
This system can be solved by backward substitution. For example,

0.8333 x3 = 0.8333; 0.8 x2 + 0.4 x3 = 1.2; x1 + 0.4 x2 + 0.2 x3 = 1.6.

By backward substitution procedure, we get: x1 = 1.0; x2 = 1.0; x3 = 1.0.
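The decomposition rules above (with the convention b_ii = a_ii) translate directly into a short routine; a minimal Python sketch, not part of the original slides:

```python
import numpy as np

def lu_decompose(A):
    """LU factorisation with the slides' convention b_ii = a_ii
    (B lower triangular, C upper triangular, A = B C)."""
    n = A.shape[0]
    B = np.zeros((n, n))
    C = np.zeros((n, n))
    for i in range(n):
        B[i, i] = A[i, i]
        for j in range(i):        # entries of B below the diagonal
            B[i, j] = (A[i, j] - B[i, :j] @ C[:j, j]) / C[j, j]
        for j in range(i, n):     # entries of C on/above the diagonal
            C[i, j] = (A[i, j] - B[i, :i] @ C[:i, j]) / B[i, i]
    return B, C

def lu_solve(B, C, r):
    """Solve B y = r by forward substitution, then C x = y backward."""
    n = len(r)
    y = np.zeros(n)
    for i in range(n):
        y[i] = (r[i] - B[i, :i] @ y[:i]) / B[i, i]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - C[i, i + 1:] @ x[i + 1:]) / C[i, i]
    return x

A = np.array([[5.0, 2.0, 1.0], [1.0, 2.0, 1.0], [1.0, 1.0, 3.0]])
B, C = lu_decompose(A)
x = lu_solve(B, C, np.array([8.0, 4.0, 5.0]))
```

For the slides' matrix this reproduces B and C shown earlier (e.g. b32 = 0.75, c22 = 0.8) and the solution x = (1, 1, 1).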
TRIDIAGONAL SOLVER
Consider the tridiagonal system

b1 T1 + c1 T2 = d1

a2 T1 + b2 T2 + c2 T3 = d2

…………………………

ai Ti-1 + bi Ti + ci Ti+1 = di

…………………………

an Tn-1 + bn Tn = dn
TRIDIAGONAL SOLVER
2 T1 - T2 = 40        (1)

-T1 + 2 T2 - T3 = 0   (2)  ⇒  3 T2 - 2 T3 = 40

-T2 + 2 T3 - T4 = 0   (3)  ⇒  4 T3 - 3 T4 = 40

-T3 + 2 T4 - T5 = 0   (4)  ⇒  5 T4 - 4 T5 = 40

-T4 + 2 T5 - T6 = 0   (5)  ⇒  6 T5 - 5 T6 = 40

-T5 + 2 T6 = 110      (6)  ⇒  7 T6 = 700

By backward substitution, we get T6 = 100, T5 = 90, T4 = 80, T3 = 70,
T2 = 60 and T1 = 50.
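The sweep-and-back-substitute pattern for a tridiagonal system is known as the Thomas algorithm; a minimal sketch (an illustration, not the slides' own code), applied to the 6-node example above:

```python
def thomas(a, b, c, d):
    """Thomas algorithm for a tridiagonal system.
    a: sub-diagonal (a[0] unused), b: diagonal,
    c: super-diagonal (c[-1] unused), d: right-hand side."""
    n = len(d)
    b, d = list(b), list(d)
    for i in range(1, n):              # forward sweep: eliminate the sub-diagonal
        m = a[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]
    T = [0.0] * n
    T[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):     # backward substitution
        T[i] = (d[i] - c[i] * T[i + 1]) / b[i]
    return T

# The 6-node example: solution T = 50, 60, 70, 80, 90, 100
T = thomas([0, -1, -1, -1, -1, -1],
           [2, 2, 2, 2, 2, 2],
           [-1, -1, -1, -1, -1, 0],
           [40, 0, 0, 0, 0, 110])
```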
ITERATIVE SOLUTION
Consider 5 x = 5. If this equation is rewritten as 2 x(k+1) = 5 – 3 x(k) ,
we can investigate the iterative solution.
Iter x(k)
0 0
1 2.5
2 –1.25
3 4.38
4 –4.07
5 8.61
6 –10.42
7 18.12
8 –24.68
9 39.52
10 –56.78
ITERATIVE SOLUTION
Now consider the system 3 x(k+1) = 5 – 2 x(k) ; using the same initial
guess, we get
Iter (k) x(k)
0 0
1 1.67
2 0.56
3 1.29
4 0.81
5 1.13
6 0.91
7 1.06
8 0.96
9 1.027
10 0.982
UNDER RELAXATION
Consider 2 x(k+1) = 5 - 3 x(k). We can iterate as follows.

x_cal = 2.5 - 1.5 x(k)

x(k+1) = w·x_cal + (1-w)·x(k) = w·{2.5 - 1.5 x(k)} + (1-w)·x(k)

Let w = 0.6; here, x(k+1) = 1.5 - 0.5 x(k). The iterative
solution can be shown to converge in this case.
UNDER RELAXATION (w=0.6)
Iter (k) x(k)
0 0
1 1.5
2 0.75
3 1.125
4 0.938
5 1.031
6 0.984
7 1.008
8 0.996
9 1.002
10 0.999
UNDER RELAXATION (w=0.2)
Let w = 0.2; here, x(k+1) = 0.5 + 0.5 x(k). Hence
Iter (k) x(k)
0 0
1 0.5
2 0.75
3 0.875
4 0.938
5 0.969
6 0.984
7 0.992
8 0.996
9 0.998
10 0.999
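The two relaxed iterations above can be reproduced with a short loop (an illustrative sketch; the iteration count is an assumption):

```python
def relaxed_iteration(w, n_iter=60, x=0.0):
    """Iterate 2x = 5 - 3x with relaxation factor w."""
    for _ in range(n_iter):
        x_cal = 2.5 - 1.5 * x           # unrelaxed update
        x = w * x_cal + (1.0 - w) * x   # blend with the previous value
    return x

x_w06 = relaxed_iteration(0.6)   # w = 0.6: converges to x = 1 with oscillation
x_w02 = relaxed_iteration(0.2)   # w = 0.2: converges monotonically to x = 1
x_w10 = relaxed_iteration(1.0)   # w = 1 (no relaxation): diverges
```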
ITERATIVE SOLUTION OF A
MATRIX SYSTEM
Now consider the set of equations
2 x + 3 y = 4.0
3 x + 2 y = 3.5
The solution for the above system is x = 0.5 and y = 1.0.
The above system can be written for iterative solution as
2 x = 4.0 – 3 y
2 y = 3.5 – 3 x
Or,
[ x^(k+1) ]   [ 2.0  ]   [   0    -1.5 ] [ x^k ]
[ y^(k+1) ] = [ 1.75 ] + [ -1.5     0  ] [ y^k ]
ITERATIVE SOLUTION
Iter x(k) y(k)
0 0 0
1 2.0 1.75
2 -0.63 -1.25
3 3.88 2.70
4 -2.05 -4.07
5 8.10 4.83
6 -5.25 -10.4
7 17.6 9.63
8 -12.45 -24.65

The above system diverges because the eigenvalues of the
iterative coefficient matrix are 1.5 and -1.5.
REFORMULATED SOLUTION
Let us reformulate the problem as
3 x = 3.5 – 2 y
3 y = 4.0 – 2 x

Or, x = 1.1667 – 0.6667 y
y = 1.3333 – 0.6667 x

In matrix form, we have
 x k +1  1.1667   0 0.6667 x k 
 k +1  =   k
 y  1.3333  − 0.6667 0  y 
       

Here, the eigenvalues of the iterative coefficient
matrix are 0.6667 and -0.6667.
ITERATIVE SOLUTION
Iter x(k) y(k)
0 0 0
1 1.1667 1.3333
2 0.2778 0.5555
3 0.7963 1.1481
4 0.4013 0.8024
5 0.6317 1.0658
6 0.4561 0.9121
7 0.5586 1.0292
8 0.4805 0.9609
9 0.5261 1.0130
10 0.4913 0.9825
OVER/ UNDER RELAXATION

In general, one can apply successive over relaxation (SOR) and
successive under relaxation (SUR), to increase the rate of
convergence or to increase the stability by slowing down the
rate of change respectively.

xk+1 = w.xcal + (1-w).xk
yk+1 = w.ycal + (1-w).yk

For 0<w<1, the method is known as successive under relaxation.
For 1<w<2, the method is known as successive over relaxation.
GAUSS-SEIDEL & JACOBI
ITERATION
Consider the iterative system

3 x = 3.5 – 2 y
3 y = 4.0 – 2 x

In Jacobi iteration, on the right-hand side, the values of both
x and y are taken from the previous (kth) iteration. In Gauss-
Seidel iteration, the latest available values are used. For
example, while solving for y^(k+1), the latest available value of
x (i.e. x^(k+1)) is used.
JACOBI ITERATION
Consider the example
6 x1 – 2 x2 + x3 = 11
x1 + 2 x2 – 5 x3 = -1
- 2 x1 + 7 x2 + 2 x3 = 5
Formulate the iterative solution as

x1^(k+1) = 1.8333 + 0.3333 x2^k - 0.1667 x3^k
x2^(k+1) = -0.5000 - 0.5000 x1^k + 2.5000 x3^k
x3^(k+1) = 2.5000 + 1.0000 x1^k - 3.5000 x2^k
JACOBI ITERATION
iter x1 x2 x3
1 0.0 0.0 0.0
2 1.833 -0.500 2.500
3 1.245 4.834 -4.417
4 4.181 -12.165 -13.174
5 -0.025 -35.526 49.259
6 -18.219 122.66 126.816
MAXIMAL PIVOTING
6 x1 – 2 x2 + x3 = 11
- 2 x1 + 7 x2 + 2 x3 = 5
x1 + 2 x2 – 5 x3 = -1

x1^(k+1) = 1.8333 + 0.3333 x2^k - 0.1667 x3^k
x2^(k+1) = 0.7143 + 0.2857 x1^k - 0.2857 x3^k
x3^(k+1) = 0.2000 + 0.2000 x1^k + 0.4000 x2^k
JACOBI ITERATION WITH
PIVOTING
iter x1 x2 x3
1 0.0 0.0 0.0
2 1.833 0.714 0.200
3 2.038 1.181 0.852
4 2.085 1.053 1.080
5 2.004 1.001 1.038
6 1.994 0.990 1.001

Finally, x1 = 2.000, x2 = 1.000, x3 = 1.000.
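The pivoted Jacobi iteration above can be written compactly (an illustrative NumPy sketch, not the slides' own code; the iteration count is an assumption):

```python
import numpy as np

def jacobi(A, r, n_iter=60):
    """Jacobi iteration: every new value uses only the previous iterate."""
    D = np.diag(A)              # diagonal entries
    R = A - np.diagflat(D)      # off-diagonal part
    x = np.zeros(len(r))
    for _ in range(n_iter):
        x = (r - R @ x) / D     # x_i^{k+1} = (r_i - sum_{j!=i} a_ij x_j^k) / a_ii
    return x

# Rows ordered so the matrix is diagonally dominant (maximal pivoting)
A = np.array([[6.0, -2.0, 1.0],
              [-2.0, 7.0, 2.0],
              [1.0, 2.0, -5.0]])
r = np.array([11.0, 5.0, -1.0])
x = jacobi(A, r)                # converges towards (2, 1, 1)
```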
GAUSS-SEIDEL ITERATION
(WITH PIVOTING)
x1^(k+1) = 1.8333 + 0.3333 x2^k - 0.1667 x3^k
x2^(k+1) = 0.7143 + 0.2857 x1^(k+1) - 0.2857 x3^k
x3^(k+1) = 0.2000 + 0.2000 x1^(k+1) + 0.4000 x2^(k+1)

iter x1 x2 x3
1 0.0 0.0 0.0
2 1.833 1.238 1.062
3 2.069 1.002 1.015
4 1.998 0.995 0.998
5 1.999 1.000 1.000
6 2.000 1.000 1.000
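Gauss-Seidel differs from Jacobi only in using each new value immediately; a minimal sketch on the same pivoted system (illustrative, not from the slides):

```python
import numpy as np

def gauss_seidel(A, r, n_iter=30):
    """Gauss-Seidel: like Jacobi, but the latest values are used at once."""
    n = len(r)
    x = np.zeros(n)
    for _ in range(n_iter):
        for i in range(n):
            s = A[i] @ x - A[i, i] * x[i]   # off-diagonal contribution
            x[i] = (r[i] - s) / A[i, i]     # overwrite in place
    return x

A = np.array([[6.0, -2.0, 1.0],
              [-2.0, 7.0, 2.0],
              [1.0, 2.0, -5.0]])
x = gauss_seidel(A, np.array([11.0, 5.0, -1.0]))   # converges towards (2, 1, 1)
```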
ERROR PROPAGATION

(i) Discretisation Error
(ii) Round-off Error (this occurs because the computer can
handle only a fixed number of digits, e.g. 1/3 = 0.3333)
ROUND-OFF ERROR
x + 2.333 y = 5.166     ⇒   x + 2.33 y = 5.17
3x + 7.000 y = 15.5         3x + 7.00 y = 15.5

Actual solution is x = 0.5, y = 2.0.
After round-off, the solution is y = -1.0, x = 7.5.

The eigenvalues of the above system are: λ1 = 7.999875, λ2 = 0.000125.

If λ_max/λ_min >> 1, then the system may become
sensitive to round-off error.
ILL-CONDITIONED SYSTEMS
System1 :
[ 1.01 0.99 ] [ x ]   [ 2.00 ]
[ 0.99 1.01 ] [ y ] = [ 2.00 ]
The solution is x = 1.00, y=1.00

System 2:
[ 1.01 0.99 ] [ x ]   [ 2.02 ]
[ 0.99 1.01 ] [ y ] = [ 1.98 ]

The solution is x = 2.00, y=0.00
ILL-CONDITIONED SYSTEMS
System 3:
[ 1.01 0.99 ] [ x ]   [ 1.98 ]
[ 0.99 1.01 ] [ y ] = [ 2.02 ]

The solution is x =0.00, y=2.00

The reason for such sensitivity in the solution is the
large ratio of the eigenvalues. Here we have λ1 = 2.0,
λ2 = 0.02. Such systems are called ill-conditioned
systems. In general, any system with a very small
pivot element in any row is likely to be
ill-conditioned.
SUMMARY OF LINEAR ALGEBRAIC
SOLUTION METHODS

For the system [A]{x} = {r}, a direct solution
method such as matrix inversion, Gaussian
elimination or LU decomposition can be used.
For iterative solution, it is possible to rewrite it in
the form {x}^(k+1) = {r} + [B]{x}^k. For
convergence, the iterative coefficient matrix
should have eigenvalues with magnitudes less
than 1.
SUMMARY
If all the eigenvalues are positive and |λi| < 1,
the iterative method will exhibit monotonic
convergence. If any of the eigenvalues are
negative, the solution will oscillate. If any of
the eigenvalues is very close to one, then
convergence of the iterative procedure will
become very slow. When any of the eigenvalues
has a magnitude more than one, under relaxation
can be applied for obtaining convergence.
SUMMARY

For the system [A]{x} = {r}, the maximal pivoting
strategy can be applied, by exchange of rows or
columns. If the diagonal coefficient has the
maximum value in every row, with the property
|a_ii| ≥ Σ_(j≠i) |a_ij|, such a system will satisfy
the condition |λi| < 1 and hence the iterative
procedure will converge. Before applying the
Jacobi or Gauss-Seidel iterative methods, the
maximal pivoting strategy needs to be applied.
SUMMARY

For the system [A]{x} = {r}, when the ratio of the
maximum to minimum eigenvalues becomes very
large (i.e. λ_max/λ_min >> 1), the solution
becomes very sensitive to round-off error. Such a
system is said to be ill-conditioned. In this case,
more digits need to be retained for obtaining an
accurate solution.
SOLUTION OF NON-LINEAR
ALGEBRAIC EQUATION

• Bi-section method
• Regula- falsi & Secant methods
• Newton- Raphson method
BI-SECTION METHOD
Let f(x) = 0 be a non-linear equation. If
two guesses are taken such that f(α) > 0
and f(β) < 0, the next guess can be taken
as (α + β)/2. Depending on the sign of
f(0.5α + 0.5β), the new guess replaces
α or β.
BI-SECTION METHOD
Consider f(x) = x2 + 3x – 10. For this, x = 2.0 is a solution.

Iter   α        f(α)      β        f(β)     0.5α+0.5β   f(0.5α+0.5β)
1 1.5 -3.25 3.0 8.0 2.25 1.8125
2 1.5 -3.25 2.25 1.81 1.875 -0.8594
3 1.875 -0.8594 2.25 1.81 2.0625 0.4414
4 1.875 -0.8594 2.0625 0.4414 1.9688 -0.2174
5 1.9688 -0.2174 2.0625 0.4414 2.0157 0.1101
6 1.9688 -0.2174 2.0157 0.1101 1.9923 -0.0538
7 1.9923 -0.0538 2.0157 0.1101 2.004 0.0280
8 1.9923 -0.0538 2.004 0.0280 1.9982 -0.0126
9 1.9982 -0.0126 2.004 0.0280 2.0011 0.0077
10 1.9982 -0.0126 2.0011 0.0077 1.9997 -0.0021
11 1.9997 -0.0021 2.0011 0.0077 2.0004 0.0028
12 1.9997 -0.0021 2.0004 0.0028 2.00005 0.00035
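The bisection loop in the table can be sketched in a few lines (an illustrative Python version; tolerance and bracket are assumptions):

```python
def bisect(f, a, b, tol=1e-8):
    """Bisection: repeatedly halve [a, b] while keeping a sign change inside.
    Assumes f(a) and f(b) have opposite signs."""
    fa = f(a)
    while (b - a) > tol:
        m = 0.5 * (a + b)
        if fa * f(m) <= 0.0:       # root lies in [a, m]
            b = m
        else:                      # root lies in [m, b]
            a, fa = m, f(m)
    return 0.5 * (a + b)

# The slides' example f(x) = x^2 + 3x - 10, bracketed by [1.5, 3.0]; root x = 2
root = bisect(lambda x: x * x + 3.0 * x - 10.0, 1.5, 3.0)
```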
REGULA FALSI (METHOD OF
FALSE POSITION)

This method is similar to the bisection method,
except that the new position is not taken as
the average; it is linearly interpolated.
Therefore, we set

x2 = x0 - f(x0) · (x0 - x1) / (f(x0) - f(x1))

[Figure: the chord through (x0, f(x0)) and (x1, f(x1))
crosses the x-axis at x2.]

If f(x2) has the same sign as f(x1), x2 replaces x1;
otherwise, x2 replaces x0.
SECANT METHOD
x_(i+1) = x_(i-1) - f(x_(i-1)) · (x_(i-1) - x_i) / (f(x_(i-1)) - f(x_i))

This is similar to regula falsi, but the
two most recent guesses are taken to get
the next guess. Convergence is faster for
the secant method than for regula falsi,
but in some cases the method may fail to
converge.
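The secant update can be coded directly (a minimal sketch, not from the slides; the starting pair and iteration count are assumptions):

```python
import math

def secant(f, x0, x1, n_iter=20):
    """Secant method: next guess from the line through the two latest points."""
    for _ in range(n_iter):
        f0, f1 = f(x0), f(x1)
        if f1 == f0:               # converged (or flat); avoid division by zero
            break
        x0, x1 = x1, x1 - f1 * (x1 - x0) / (f1 - f0)
    return x1

# The later example f(x) = 3x + sin x - e^x has a root near 0.3604217
f = lambda x: 3.0 * x + math.sin(x) - math.exp(x)
root = secant(f, 0.0, 1.0)
```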
NEWTON- RAPHSON
METHOD

The next guess is obtained by proceeding along the
local tangent to the curve:

x_(i+1) = x_i - f(x_i) / f'(x_i)

[Figure: the tangent at (x_i, f(x_i)) crosses the
x-axis at x_(i+1).]
EXAMPLE OF NON-LINEAR
EQUATION SOLUTION
f ( x) = 3 x + sin x − e x = 0

iter   Bi-section          Regula Falsi        Secant Method
       x        f(x)       x        f(x)       x        f(x)
1 0.5 0.330704 0.470990 0.265160 0.470990 0.265160
2 0.25 -0.286621 0.372277 0.029533 0.372277 0.029533
3 0.375 0.036281 0.361598 2.94x10-3 0.359904 –1.29x10-3
4 0.3125 –0.121899 0.360538 2.90x10-4 0.360424 5.53x10-6
5 0.34375 -0.041956 0.360433 2.93x10-5 0.360422 2.13x10-7
EXAMPLE OF NON-LINEAR
EQUATION SOLUTION
Newton- Raphson Method

f ( x) = 3 x + sin x − e x
f ' ( x) = 3 + cos x − e x

iter x f(x)
0 0.0 -1.0
1 0.33333 -0.068418
2 0.36017 -6.279x10-4
3 0.3604217 ~10-8
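The Newton-Raphson table above can be reproduced with a short loop (illustrative only; the iteration count is an assumption):

```python
import math

def newton(f, fprime, x, n_iter=10):
    """Newton-Raphson: march along the local tangent, x <- x - f(x)/f'(x)."""
    for _ in range(n_iter):
        x = x - f(x) / fprime(x)
    return x

f = lambda x: 3.0 * x + math.sin(x) - math.exp(x)
fp = lambda x: 3.0 + math.cos(x) - math.exp(x)
root = newton(f, fp, 0.0)    # first iterate is 0.33333, as in the table
```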
NEWTON-RAPHSON METHOD FOR
NON-LINEAR SYSTEM
Consider the non-linear algebraic system

f1(x, y) = x^2 + 3xy - 10 = 0
f2(x, y) = xy + y^2 - 3 = 0

        [ ∂f1/∂x  ∂f1/∂y ]   [ 2x + 3y    3x   ]
[ J ] = [                 ] = [                 ]
        [ ∂f2/∂x  ∂f2/∂y ]   [    y     x + 2y ]

[ x ]^(k+1)   [ x ]^k            [ f1 ]^k
[ y ]       = [ y ]   - [ J ]^-1 [ f2 ]
NEWTON-RAPHSON METHOD
FOR NON-LINEAR SYSTEM
iter x y f1 f2
0 1.0 0.0 - 9.0 - 3.0
1 1.0 3.0 0.0 9.0
2 1.3971 1.5442 - 1.576 1.542
3 1.9013 1.0269 - 0.5277 6.97x10-3
4 1.9971 0.9993 - 0.02449 - 5.697x10-3
5 2.003091 0.999974 0.02149 2.987x10-3

The correct solution is x = 2.0 and y = 1.0
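The Jacobian update above can be coded with a linear solve in place of the explicit inverse (an illustrative NumPy sketch; the iteration count is an assumption):

```python
import numpy as np

def newton_system(x, y, n_iter=20):
    """Newton-Raphson for f1 = x^2 + 3xy - 10 = 0, f2 = xy + y^2 - 3 = 0."""
    for _ in range(n_iter):
        F = np.array([x * x + 3.0 * x * y - 10.0,
                      x * y + y * y - 3.0])
        J = np.array([[2.0 * x + 3.0 * y, 3.0 * x],
                      [y, x + 2.0 * y]])
        dx, dy = np.linalg.solve(J, F)   # solve J (delta) = F
        x, y = x - dx, y - dy
    return x, y

xr, yr = newton_system(1.0, 0.0)   # same starting guess as the table
```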
SUMMARY
• The secant method converges faster than the
bisection and the regula falsi methods. However, the
regula falsi is more stable than the secant method.
The bisection method is the slowest among these
three methods.
• The Newton-Raphson method is faster in
convergence than all the above three methods
(quadratic convergence). It can also be used for
systems of non-linear algebraic equations.
• In case there is convergence difficulty, the under-
relaxation technique can be used for non-linear
systems also.
SOLUTION OF ORDINARY
DIFFERENTIAL EQUATIONS
Consider the ODE dy/dx = f(x, y)

y^(n+1) = y^n + (dy/dx)·Δx = y^n + f(x, y)·Δx

[Figure: y versus x, with the solution advanced over one step Δx.]

Explicit method: use the initial slope.
Implicit method: use the final slope.
FIRST ORDER METHODS
Explicit scheme (uses the initial slope):

y^(n+1) = y^n + (dy/dx)^n · Δx = y^n + f(x^n, y^n) · Δx

Implicit scheme (uses the final slope):

y^(n+1) = y^n + (dy/dx)^(n+1) · Δx = y^n + f(x^(n+1), y^(n+1)) · Δx
EXPLICIT/ IMPLICIT MARCHING
Consider the differential equation dy/dx = -2x - y with the
initial condition y(0) = -1. The exact solution is
y(x) = -3e^(-x) - 2x + 2. Using h = Δx = 0.1, we get by the
explicit scheme:
xn yn (y’)n h(y’)n yanalytical
0.0 -1.00000 1.00000 0.10000 -1.00000
0.1 -0.90000 0.70000 0.07000 -0.91451
0.2 -0.83000 0.43000 0.04300 -0.85619
0.3 -0.78700 0.18700 0.01870 -0.82245
0.4 -0.76830 -0.03170 -0.00317 -0.81096

Final error at x = 0.4 is given as –0.04266 .
EXPLICIT/ IMPLICIT MARCHING
The solution of the previous example is accurate only up to the
first decimal. The integration gives

y^(n+1) = y^n + h(y')^n + O(h^2)

But the above discretisation error is only a local error. For N
steps, the cumulative error will be O(h), if N·h ~ 1. The
implicit solution is obtained as:

xn     yn          (y')^(n+1)   h(y')^(n+1)   yanalytical
0.0    -1.00000
0.1    -0.92727    0.72727      0.07273       -0.91451
0.2    -0.87934    0.47934      0.04793       -0.85619

Here, the error at x = 0.4 is 0.03809.
MODIFIED EULER METHOD
y^(n+1) = y^n + h · [ (y')^n + (y')^(n+1,p) ] / 2
        = y^n + h { f(x^n, y^n) + f(x^(n+1), y^(n+1,p)) } / 2

[Figure: over the interval x^n to x^(n+1), the predictor gives
y^(n+1,p) and the corrector gives y^(n+1,c).]

This uses the average of the initial slope and the
predicted final slope.
MODIFIED EULER METHOD
xn     yn        h(y')^n   y^(n+1,p)   h(y')^(n+1)   h(y')av   y^(n+1,c)
.0     -1.0000   0.1000    -0.9000     0.0700        0.0850    -0.9150
.1     -0.9150   0.0715    -0.8435     0.0444        0.0579    -0.8571
.2     -0.8571   0.0457    -0.8114     0.0211        0.0334    -0.8237
.3     -0.8237   0.0224    -0.8013     0.0001        0.0112    -0.8124
.4     -0.8124   0.0012    -0.8112     -0.0189       -0.0088   -0.8212

The average of the initial and predicted final slopes (semi-implicit)
gives a better solution. The final corrected solution is more
accurate than the predicted solution.
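The predictor-corrector sequence in the table can be sketched as follows (an illustrative Python version of the modified Euler scheme, not the slides' own code):

```python
import math

def modified_euler(f, x, y, h, n_steps):
    """Modified Euler: average the initial slope and the slope at the
    predicted end point of each interval."""
    for _ in range(n_steps):
        k1 = f(x, y)                    # initial slope
        y_pred = y + h * k1             # predictor (explicit Euler)
        k2 = f(x + h, y_pred)           # predicted final slope
        y = y + 0.5 * h * (k1 + k2)     # corrector: average slope
        x = x + h
    return y

f = lambda x, y: -2.0 * x - y
y04 = modified_euler(f, 0.0, -1.0, 0.1, 4)    # numerical y(0.4)
exact = -3.0 * math.exp(-0.4) - 0.8 + 2.0     # exact y(0.4)
```

With h = 0.1 this reproduces the table's corrected value near -0.8124 at x = 0.4, an error of about 1.4e-3 as quoted later in the comparison slide.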
RUNGE- KUTTA METHODS
• These involve multiple slope evaluations over one
interval
• The intermediate or final slope at the end of the
interval are obtained using predicted y locations
• The average slope of the interval is evaluated as a
weighted average of the initial, intermediate and
final slopes in an interval
• Solution is updated (or corrected) using the
weighted average slope
II ORDER R-K METHOD
• This is somewhat similar to the modified Euler
method discussed above.
• This evaluates slopes twice in an interval (usually at
the beginning and at the end of the interval).
• The slope at the end of the interval is predicted on the
basis of the initial slope
• The arithmetic average of the initial and final slopes is
taken as the average slope over the interval
• The method gives second order accuracy for the
integrated solution
II ORDER R-K METHOD
Let y^(n+1) = y^n + a·h·f(x^n, y^n) + b·h·f( x^n + αh, y^n + βh·f(x^n, y^n) )

Here, a and b are weights which add up to 1, so that the second
and third terms on the right hand side can be thought of as giving
rise to some weighted average slope over the interval. The
parameters α and β define the location at which the second
slope is evaluated. The y coordinate of the second slope location
itself is derived in terms of the initial slope of the interval. Many
second order methods will arise for specific values of a,b,α ,β ;
for example, one takes a = 2/3, b = 1/3, α = 3/2 and β = 3/2 in a
specific variant of the II order R-K method. More popularly, one
may set a = b = ½ and α = β = 1, and in this case the II order
R-K method is the same as the modified Euler method.
II ORDER R-K METHOD
y^(n+1) = y^n + h(a·k1 + b·k2)

where k1 and k2 are slopes evaluated within the interval:

k1 = f(x^n, y^n)
k2 = f(x^n + αh, y^n + βh·k1)

Thus the second slope k2 is evaluated using the first slope k1.
ACCURACY OF R-K II METHOD
y^(n+1) = y^n + h·(dy/dx)^n + (h^2/2!)·(d^2y/dx^2)^n + (h^3/3!)·(d^3y/dx^3)^n + O(h^4)

With a = b = ½ and α = β = 1, we have

y^(n+1) = y^n + h(k1 + k2)/2
k1 = f(x^n, y^n)        k2 = f(x^n + h, y^n + h·k1)

Applying Taylor expansion to k2, we get

k2 ≈ (dy/dx)^(n+1) = (dy/dx)^n + h·(d^2y/dx^2)^n + O(h^2)

Substituting for k2, the final expression becomes

y^(n+1) = y^n + h·(dy/dx)^n + (h^2/2!)·(d^2y/dx^2)^n + O(h^3)

Thus the R-K II method has local error of O(h^3) and cumulative
error of O(h^2).
R-K IV ORDER METHOD
y^(n+1) = y^n + (h/6)(k1 + 2k2 + 2k3 + k4) + O(h^5)

k1 = f(x^n, y^n)                      k2 = f(x^n + 0.5h, y^n + 0.5h·k1)
k3 = f(x^n + 0.5h, y^n + 0.5h·k2)     k4 = f(x^n + h, y^n + h·k3)

[Figure: the four slopes k1 ... k4 evaluated across the interval from
x^n to x^n + h.]
R-K IV METHOD
For the equation dy/dx = -2x - y with the b.c. y(0) = -1.0,
the application of the R-K IV method gives:

xn     yn       hk1      hk2      hk3      hk4      hkav
.0     -1.0000  0.1000   0.0850   0.0858   0.0714   0.0855
.1     -0.9145  0.0715   0.0579   0.0586   0.0456   0.0583
.2     -0.8562  0.0456   0.0333   0.0340   0.0222   0.0337
.3     -0.8225  0.0222   0.0111   0.0117   0.0011   0.0115

The exact solution gives y(0.5) = -0.81959.
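The four-slope update can be sketched directly (an illustrative Python version of classical RK-4, not the slides' own code):

```python
import math

def rk4(f, x, y, h, n_steps):
    """Classical fourth-order Runge-Kutta marching."""
    for _ in range(n_steps):
        k1 = f(x, y)
        k2 = f(x + 0.5 * h, y + 0.5 * h * k1)
        k3 = f(x + 0.5 * h, y + 0.5 * h * k2)
        k4 = f(x + h, y + h * k3)
        y += (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
        x += h
    return y

f = lambda x, y: -2.0 * x - y
y05 = rk4(f, 0.0, -1.0, 0.1, 5)               # numerical y(0.5)
exact = -3.0 * math.exp(-0.5) - 1.0 + 2.0     # exact y(0.5) ≈ -0.81959
```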
COMPARISON OF MARCHING METHODS
Method           No. of Slopes   Local Error   Global Error
Euler            1               O(h^2)        O(h)
Modified Euler   2               O(h^3)        O(h^2)
(R-K II)
R-K IV           4               O(h^5)        O(h^4)

Error at x = 0.4 for the example discussed earlier:

h        Euler      Mod. Euler (RK-II)   RK-IV
0.4      2.11e-01   2.90e-02             2.40e-04
0.2      9.10e-02   6.42e-03             1.27e-05
0.1      4.27e-02   1.44e-03             7.29e-07
0.05     2.07e-02   3.48e-04             4.37e-08
0.025    1.02e-02   8.54e-05             2.76e-09
ODE SYSTEM
Consider the system

dy1/dx = 2 y1 - y2
dy2/dx = y1 + y2

with b.c.: y1(0) = 1.0, y2(0) = 0.5.
RK-IV FOR ODE SYSTEM
yi^(n+1) = yi^n + (h/6)(k1i + 2·k2i + 2·k3i + k4i) + O(h^5)

k1i = fi(x^n, y1^n, y2^n, ..., ym^n)
k2i = fi(x^n + 0.5h, yi^n + 0.5h·k1i)
k3i = fi(x^n + 0.5h, yi^n + 0.5h·k2i)
k4i = fi(x^n + h, yi^n + h·k3i)

Here, we need to evaluate all the k1i first, then all the k2i,
etc., for the whole range of i = 1, 2, ..., m, where m is the
number of differential equations (the number of dependent
variables).
INITIAL/ BOUNDARY VALUE PROBLEMS
Initial value problem:

d^2y/dx^2 + 5 dy/dx + 3y = 0,    y(0) = 0.0, y'(0) = 0.2.

Boundary value problem:

d^2y/dx^2 + 5 dy/dx + 3y = 0,    y(0) = 0.0, y(1.0) = 0.5.

Let y = y1 and dy/dx = y2. Then

dy1/dx = y2
dy2/dx = -5 y2 - 3 y1

y1(0) = 0.0; y2(0) = α (guess). α is to be updated until the
boundary condition y1(1.0) = 0.5 is satisfied.
MARCHING METHOD FOR
BOUNDARY VALUE PROBLEM
[Figure: integrating from y(0) with guessed initial slope α gives
y_num(L), while the desired end value is y_exact(L); define
g(α) = y_exact - y_num.]

We need to find α such that g(α) = 0. By applying the
Newton-Raphson method, we get:

α^(k+1) = α^k - g(α^k) / g'(α^k)
MARCHING METHOD FOR
BOUNDARY VALUE PROBLEM
Initially a guess value of the slope α is assumed. By integrating
the ODE system with this guess value of α, the predicted end
boundary condition value is obtained. The difference between
the correct end b.c. and the predicted end b.c. value is taken as a
function g(α), whose root we seek. In other words, we look for
the correct value of slope α which will result in the predicted
end b.c. matching the exact end b.c.
In order to evaluate the derivative g'(α), we slightly perturb the
current value of α, find the change Δg in g(α) due to this
small perturbation, and numerically compute g'(α) as Δg/Δα.
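This shooting procedure can be sketched end-to-end for the boundary value problem above (an illustrative Python version; step size, step count and the perturbation size are assumptions):

```python
def shoot(alpha, h=0.01, n_steps=100):
    """Integrate y'' + 5y' + 3y = 0 from x = 0 to 1 with y(0) = 0 and
    y'(0) = alpha, using RK-4 on the first-order system; returns y(1)."""
    def f(x, s):
        y1, y2 = s
        return (y2, -5.0 * y2 - 3.0 * y1)

    x, s = 0.0, (0.0, alpha)
    for _ in range(n_steps):
        k1 = f(x, s)
        k2 = f(x + 0.5 * h, (s[0] + 0.5 * h * k1[0], s[1] + 0.5 * h * k1[1]))
        k3 = f(x + 0.5 * h, (s[0] + 0.5 * h * k2[0], s[1] + 0.5 * h * k2[1]))
        k4 = f(x + h, (s[0] + h * k3[0], s[1] + h * k3[1]))
        s = (s[0] + h / 6.0 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
             s[1] + h / 6.0 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))
        x += h
    return s[0]

# Newton iteration on g(alpha) = y_num(1) - 0.5, with g'(alpha) from a
# small finite-difference perturbation of alpha.
alpha = 1.0
for _ in range(10):
    g = shoot(alpha) - 0.5
    dg = (shoot(alpha + 1e-6) - 0.5 - g) / 1e-6
    alpha -= g / dg
```

Since this ODE is linear, g(α) is linear in α and the Newton update converges essentially in one step.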
EXAMPLE OF INITIAL
VALUE PROBLEM
Consider the set of reactions

C + ½ O2 →(k1) CO ;    CO + ½ O2 →(k2) CO2

Let C, O2, CO and CO2 be denoted as Y1, Y2, Y3 and Y4 respectively.
Also let the initial concentrations of these species be 0.5, 0.5, 0.0 and
0.0 respectively. The ODE system can be written as follows:

dY1/dt = -k1·Y1^1.0·Y2^0.5
dY2/dt = -(k1/2)·Y1^1.0·Y2^0.5 - (k2/2)·Y3^1.0·Y2^0.5
dY3/dt = k1·Y1^1.0·Y2^0.5 - k2·Y3^1.0·Y2^0.5
dY4/dt = k2·Y3^1.0·Y2^0.5

with initial conditions Y1(0) = 0.5, Y2(0) = 0.5, Y3(0) = 0.0 and Y4(0) = 0.0.
Boundary value problem
(boundary layer flow)
Using the similarity transform method, we get: f''' + 0.5 f f'' = 0

[Figure: boundary layer profile; at y = 0, f = f' = 0; as y → ∞, f' → 1.]

Let f = Y1, f' = Y2 and f'' = Y3. The equations become:

dY1/dη = Y2,   dY2/dη = Y3   and   dY3/dη = -0.5·Y1·Y3

The boundary conditions are: Y1(0) = 0.0, Y2(0) = 0.0 and Y2(η → ∞) =
1.0. Here, Y3(0) is not available and it can be guessed as α. The value of
α can be found iteratively, until the end b.c. is satisfied. Also,
integration need not be carried out up to an infinite value of η; a
sufficiently large finite value of η is adequate.
Predictor- Corrector Methods
• These are multi-step methods. With a small number
of additional slope evaluations, the ODE can be
integrated to a high level of accuracy.
• After obtaining the solution for some steps (by R-K
methods), the slopes at previous points can be used
in predictor- corrector methods.
• The predictor step extrapolates the solution to the
next point; in corrector step, this solution at the new
point is corrected by an implicit evaluation of slope.
Thus, an alternate use of the predictor & corrector
steps provides an accurate solution.
y^(n+1) = y^n + ∫ [x^n to x^(n+1)] f(x, y) dx

A quadratic fit through the three most recent slopes gives

f(x, y) ≈ f^n + (3f^n - 4f^(n-1) + f^(n-2))/(2h) · (x - x^n)
        + (f^n - 2f^(n-1) + f^(n-2))/(2h^2) · (x - x^n)^2

y^(n+1) = y^n + (h/12)[ 23 f^n - 16 f^(n-1) + 5 f^(n-2) ] + O(h^4)

Here, with one additional slope for f^n, a fourth order accurate
solution can be obtained.
Method (Predictor & Corrector)
Predictor Step:

y^(n+1) = y^n + ∫ [x^n to x^(n+1)] f(x, y) dx
        = y^n + (h/12)(23 f^n - 16 f^(n-1) + 5 f^(n-2)) + O(h^4)

Corrector Step:

y^(n+1) = y^n + ∫ [x^n to x^(n+1)] f(x, y) dx
        = y^n + (h/12)(5 f^(n+1) + 8 f^n - f^(n-1)) + O(h^4)

Quadratic fits are used for the slope in both the predictor &
corrector steps. The predictor involves extrapolation, while the
corrector involves interpolation (implicit evaluation).
Milne’s Method
Predictor Step:

y^(n+1) = y^(n-3) + ∫ [x^(n-3) to x^(n+1)] f(x, y) dx
        = y^(n-3) + (4h/3)(2f^n - f^(n-1) + 2f^(n-2)) + (28/90) h^5 y^(v)(ξ1)

Corrector Step:

y^(n+1) = y^(n-1) + ∫ [x^(n-1) to x^(n+1)] f(x, y) dx
        = y^(n-1) + (h/3)(f^(n+1) + 4f^n + f^(n-1)) - (1/90) h^5 y^(v)(ξ2)
The predictor step uses quadratic fit between three points for the
slope and extrapolates the solution; the corrector step also involves
a quadratic fit for the slope which is implicit. But this gives a more
accurate final expression. However, Milne’s method suffers from an
instability which renders the method inaccurate for a certain class of
problems.
Predictor Step:

y^(n+1) = y^n + ∫ [x^n to x^(n+1)] f(x, y) dx
        = y^n + (h/24)(55f^n - 59f^(n-1) + 37f^(n-2) - 9f^(n-3)) + (251/720) h^5 y^(v)(ξ1)

Corrector Step:

y^(n+1) = y^n + ∫ [x^n to x^(n+1)] f(x, y) dx
        = y^n + (h/24)(9f^(n+1) + 19f^n - 5f^(n-1) + f^(n-2)) - (19/720) h^5 y^(v)(ξ2)

In both the predictor and corrector steps, integration is carried
out for the interval xn to xn+1 , using a cubic fit for the slope
function. In the predictor step, the fit involves extrapolation,
while it involves interpolation (implicit) in the corrector step.
Hence accuracy improves considerably in the corrector step.
Features of predictor/ corrector
methods
• These methods involve the extrapolation using
previous slopes in predictor and interpolation of
slope in an implicit manner in the corrector. High
order accuracy can be obtained.
• Due to explicitness in the predictor step, in some
cases, these methods may display instability.
Hence, the stability characteristics are not very
good.
• In problems where stability problems may arise,
fully implicit schemes are preferred.
Example for predictor/ corrector methods
Consider dy/dx = -2x - y with y(0) = -1.0. Using the
RK-4 method for 3 steps, we obtain

x      y             f(x, y)
0.0    -1.0000000    1.0000000
0.1    -0.9145122    0.7145123
0.2    -0.8561923    0.4561923
0.3    -0.8224547    0.2224547

0.4    (-0.8109678)  (-0.8109687)  predicted    (exact: -0.8109601)
       (-0.8109596)  (-0.8109652)  corrected

0.5    (-0.8195969)  (-0.8195978)  predicted    (exact: -0.8195920)
       (-0.8195906)  (-0.8195905)  corrected
Error Estimation
• The error can be obtained, if two different estimates
of the solution at a point are available e.g. the
solution at the predictor and corrector steps.
• In case of single step methods, the error can be
estimated by obtaining the solution for two different
grid sizes, say ‘h’ and ‘2h’.
• If a good estimate is available for the local error,
this can be used to vary the step size so as to meet
tolerance (error) criteria for each step.
Automatic error-based step-size selection
For the Adams method, error ≈ [19/(251 + 19)] × (predicted sol. - corrected sol.)
                            = (19/270) × (predicted sol. - corrected sol.)

For Milne's method, error ≈ (1/29) × (predicted sol. - corrected sol.)

Since the error in these methods is O(h^5), until the above error estimate is
less than the prescribed tolerance value, the step size h has to be reduced.
For example, if the estimated error for Milne's method at a point is 2 ×
10^-6 and we need to reduce this to 10^-8, the step size can be decreased to
h/3. (Note: (h/3)^5 = h^5/243.)

For the RK-4 method, by finding the difference in solution for a step size of
2h and for h at the same location, we can get an estimate of the error.
The step size can then be reduced suitably to meet the tolerance
criterion.

For RK-4 method, by finding the difference in solution for a step size of
2h and for h at the same location, we can get an estimate of the error.
The step size can then be reduced suitably to meet the tolerance
criterion.
Stability of the solution technique

Consider dy/dx = -10y with the b.c. y(0) = 1.0. The solution for this
case is y(x) = e^(-10x). The discretised form is obtained as:

y^(n+1) = y^n + (dy/dx)·h = y^n - 10y^n·h = y^n(1 - 10h).

The above expression is of the form y^(n+1) = λ y^n, which will diverge
if |λ| > 1. Here λ is called the amplification factor.
The discretised equation will diverge if 10h > 2, i.e. if h > 0.2. In
order to avoid this, one can go for an implicit solution procedure:

y^(n+1) = y^n + (dy/dx)^(n+1)·h = y^n - 10y^(n+1)·h  ⇒  y^(n+1)(1 + 10h) = y^n.

The above scheme is stable for any step size h. In general, for a system
of the form dy/dx = -cy, the explicit procedure will diverge if h > 2/c.
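The stability bound can be demonstrated numerically (an illustrative sketch; the step sizes and step counts are assumptions):

```python
def explicit_march(h, n_steps):
    """Explicit Euler for dy/dx = -10y: y_{n+1} = (1 - 10h) y_n."""
    y = 1.0
    for _ in range(n_steps):
        y *= (1.0 - 10.0 * h)
    return y

def implicit_march(h, n_steps):
    """Implicit (backward) Euler: y_{n+1} = y_n / (1 + 10h); stable for any h."""
    y = 1.0
    for _ in range(n_steps):
        y /= (1.0 + 10.0 * h)
    return y

y_exp_bad = explicit_march(0.3, 20)   # h > 0.2: |amplification factor| = 2, diverges
y_exp_ok = explicit_march(0.05, 20)   # h < 0.2: decays, as the exact solution does
y_imp = implicit_march(0.3, 20)       # stable even for the large step
```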
Stiff ODE systems
Consider the ODE

d^2u/dt^2 + 100 du/dt + 0.01 u = 0

The above equation is equivalent to the ODE system

dy1/dt = y2    and    dy2/dt = -100 y2 - 0.01 y1

where u = y1 and du/dt = y2.

The above system can also be written in matrix form: {dyi/dt} = [A]{yi}
Stiff ODE systems
The above system has a general solution of the form

u = C1e −99.9999t + C 2 e −0.0001t

Because of the first term (which hardly makes any
contribution to the solution), the time step has to be
very small when an explicit method is employed. In order
to avoid this difficulty, one may employ a completely
implicit technique.
PDE systems

Consider the linear PDE system

A ∂^2φ/∂x^2 + B ∂^2φ/∂x∂y + C ∂^2φ/∂y^2 = 0

This system is said to be elliptic for the case B^2 - 4AC < 0.

It is parabolic if B^2 - 4AC = 0.

It is hyperbolic when B^2 - 4AC > 0.
Elliptic PDE
Consider steady two-dimensional heat conduction governed by the
equation

k (∂^2T/∂x^2 + ∂^2T/∂y^2) + Q = 0

Here, A = C = k and B = 0. Hence B^2 - 4AC = -4k^2 < 0. Therefore, the
system is elliptic. For an elliptic PDE, the boundary conditions need to
be given on a closed boundary. In other words, the boundary conditions
all around influence the solution at a point.

[Figure: boundary conditions for elliptic systems, specified on all
sides of a closed boundary surrounding the interior point P.]
Boundary conditions for elliptic systems
Parabolic PDE
The transient heat conduction problem, which follows the governing
equation

∂T/∂t = (k/ρc) ∂^2T/∂x^2 = α ∂^2T/∂x^2

is a parabolic system. Here, A = α, B = 0 and C = 0.
Hence, B^2 - 4AC = 0. For a parabolic system, the conditions need to
be specified as shown below.

[Figure: boundary conditions at x = 0 and x = L, with an initial
condition at t = 0.]
Hyperbolic PDE
The wave equation

∂^2y/∂t^2 = c^2 ∂^2y/∂x^2

is a hyperbolic system, with c denoting the acoustic speed. Here, B = 0,
A = 1 and C = -c^2. Hence, B^2 - 4AC = 0 - 4·1·(-c^2) = 4c^2 > 0. For a
hyperbolic system, there are characteristic variables which determine the
number of boundary conditions to be given. In the above case, the two
characteristics (x + ct) and (x - ct) represent the solutions
corresponding to the backward- and forward-propagating waves.
Boundary conditions for
hyperbolic PDE
[Figure: characteristic directions u + c, u and u - c at the boundary,
for subsonic and supersonic flow.]

A compressible flow has three characteristic velocities, i.e.
u + c, u and u - c. Depending on the number of characteristics
crossing into the domain at the boundary, the b.c. are decided.
First order PDE system
Consider the system

∂v/∂t + c ∂u/∂x = 0        ∂u/∂t + c ∂v/∂x = 0

This can be reduced to the form

∂^2u/∂t^2 - c^2 ∂^2u/∂x^2 = 0

A similar hyperbolic PDE is obtained for v also. Thus, a system of two
first order PDEs is equivalent to a second order hyperbolic PDE.
COMPUTATIONAL
MODELING
• COMPUTERS AND NUMERICAL
TECHNIQUES HAVE WITNESSED
PHENOMENAL GROWTH
• COMPARED TO EXPERIMENTAL
ANALYSIS WHICH MAY BE
EXPENSIVE, COMPUTATIONAL
SIMULATION HAS BECOME AN
ECONOMIC ALTERNATIVE
APPLICATIONS OF
COMPUTATIONAL MODELING

• AEROSPACE VEHICLES
• AUTOMOBILES
• POWER GENERATION
• MANUFACTURING TECHNOLOGY
• STRUCTURAL DESIGNS
• ENVIRONMENTAL POLLUTION
STEPS INVOLVED IN MODELING

• CREATION OF THE GEOMETRY
• DIVISION OF THE GEOMETRY INTO A
COMPUTATIONAL MESH
• APPLICATION OF MASS BALANCE, FORCE
BALANCE AND ENERGY BALANCE PRINCIPLES
TO SMALL COMPUTATIONAL CELLS
• SOLUTION OF VARIABLES SUCH AS VELOCITY,
PRESSURE, DENSITY, TEMPERATURE, STRESSES,
DISPLACEMENTS ETC. AT VARIOUS POINTS IN
THE GEOMETRY
STEPS INVOLVED IN MODELING

• PRE-PROCESSING (CREATION OF GEOMETRY
& MESH AND APPLICATION OF
BOUNDARY CONDITIONS)
• ANALYSIS (SOLUTION OF GOVERNING
EQUATIONS)
• POST-PROCESSING (ESTIMATION OF
DESIRED QUANTITIES- SAY, HEAT
TRANSFER FROM PREDICTION OF
TEMPERATURE, ETC.)
AIRCRAFT MODEL
LAUNCH VEHICLE
Compressible & Incompressible flows
• Incompressible flow occurs when the flow velocity
<< velocity of sound, i.e. the Mach number of the flow
is less than 0.3. Under such circumstances, Δρ <
0.1 ρ_av.
• The pressure variation is very strong in
compressible flow. Incompressible flow has small
∆ p values.
• The flow equations may be solved independently
in incompressible flow. Flow & energy equations
are always coupled in compressible flows.
Governing equations
(incompressible flow)
Continuity eq.:  ∂u/∂x + ∂v/∂y + ∂w/∂z = 0

x-mom.:  ρ(∂u/∂t + u ∂u/∂x + v ∂u/∂y + w ∂u/∂z) = -∂p/∂x + μ∇^2u + ρg_x

y-mom.:  ρ(∂v/∂t + u ∂v/∂x + v ∂v/∂y + w ∂v/∂z) = -∂p/∂y + μ∇^2v + ρg_y

z-mom.:  ρ(∂w/∂t + u ∂w/∂x + v ∂w/∂y + w ∂w/∂z) = -∂p/∂z + μ∇^2w + ρg_z

Heat balance:  ρC_p(∂T/∂t + u ∂T/∂x + v ∂T/∂y + w ∂T/∂z) = k∇^2T + Q
Governing equations (compressible flow)
Mass balance: ∂ρ/∂t + ∂(ρu)/∂x + ∂(ρv)/∂y + ∂(ρw)/∂z = 0

Momentum balance (x,y,z) eqs.:
∂(ρu)/∂t + ∂(ρu²)/∂x + ∂(ρuv)/∂y + ∂(ρuw)/∂z = ∂σ_xx/∂x + ∂τ_xy/∂y + ∂τ_xz/∂z + ρg_x
∂(ρv)/∂t + ∂(ρuv)/∂x + ∂(ρv²)/∂y + ∂(ρvw)/∂z = ∂τ_yx/∂x + ∂σ_yy/∂y + ∂τ_yz/∂z + ρg_y
∂(ρw)/∂t + ∂(ρuw)/∂x + ∂(ρvw)/∂y + ∂(ρw²)/∂z = ∂τ_zx/∂x + ∂τ_zy/∂y + ∂σ_zz/∂z + ρg_z

σ_xx = −p + 2μ ∂u/∂x − (2μ/3)(∇·V);   σ_yy = −p + 2μ ∂v/∂y − (2μ/3)(∇·V);   σ_zz = −p + 2μ ∂w/∂z − (2μ/3)(∇·V)
τ_xy = τ_yx = μ(∂u/∂y + ∂v/∂x);   τ_yz = τ_zy = μ(∂w/∂y + ∂v/∂z);   τ_xz = τ_zx = μ(∂u/∂z + ∂w/∂x)

Energy eq.: ∂(ρe)/∂t + ∂(ρuH)/∂x + ∂(ρvH)/∂y + ∂(ρwH)/∂z = ∇·(k∇T) + μΦ + Q
Typical flow boundary
conditions

No-slip b.c. on walls: u = 0, v = 0, w = 0
Far-stream b.c.: u = U∞, v = 0, w = 0, p = p∞
Exit b.c.: extrapolation
Symmetry b.c.: v = 0, y-derivatives = 0
Inlet b.c.: values specified
Typical thermal b.c.

Temperature specified: T = Tw
Prescribed heat flux: −k(dT/dn) = q  (q = 0 for an insulated surface)
Convective b.c., ambient fluid at Tf: −k(dT/dn) = h(T − Tf)
Radiative b.c., ambient at T∞: −k(dT/dn) = σε(T⁴ − T∞⁴)
MODELING METHODS
• FINITE DIFFERENCE METHOD
• FINITE VOLUME METHOD
• FINITE ELEMENT METHOD
• BOUNDARY ELEMENT METHOD
• SPECTRAL METHOD
FINITE DIFFERENCE METHOD

• IN THIS METHOD, DIFFERENTIAL
EQUATIONS ARE CONVERTED INTO
DIFFERENCE EXPRESSIONS

dT/dx ≈ (T_i − T_{i−1})/Δx   or   (T_{i+1} − T_i)/Δx

(nodes i−1, i, i+1 spaced Δx apart)
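As a minimal sketch of these difference expressions (not from the slides; the test function sin x and the point x = 1 are arbitrary illustrative choices), the two one-sided formulas can be evaluated directly:

```python
import math

def forward_diff(f, x, dx):
    # (T(i+1) - T(i)) / dx form of the first derivative
    return (f(x + dx) - f(x)) / dx

def backward_diff(f, x, dx):
    # (T(i) - T(i-1)) / dx form of the first derivative
    return (f(x) - f(x - dx)) / dx

# Derivative of sin(x) at x = 1; the exact value is cos(1)
dx = 1e-4
approx_f = forward_diff(math.sin, 1.0, dx)
approx_b = backward_diff(math.sin, 1.0, dx)
```

Both approximations agree with cos(1) to within an error proportional to Δx.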
FINITE VOLUME METHOD
• FLUX BALANCE IS APPLIED FOR EACH CELL
• HEAT FLUX IN – HEAT FLUX OUT = RATE OF
THERMAL STORAGE
• FLUXES ARE APPROXIMATED USING
NEIGHBOURING NODES
FINITE ELEMENT METHOD
• WHILE FDM & FVM WERE APPLIED FOR
FLOW/THERMAL PROBLEMS, FEM WAS
INITIALLY DEVELOPED FOR STRUCTURAL
PROBLEMS
• IN THIS METHOD, A LARGE STRUCTURE IS
DIVIDED INTO SMALL ELEMENTS AND
CHARACTERISTIC OF EACH ELEMENT IS
WRITTEN AS A MATRIX CONTRIBUTION
• BY ADDING CONTRIBUTIONS OF ALL
ELEMENTS, WE GET THE MATRIX
EQUATION FOR THE WHOLE GEOMETRY
STRUCTURAL ANALYSIS

F = K x for a spring

Consider the structure as a
collection of many connected
springs. In that case we get
Fi = Kij xj
APPLICATIONS OF FINITE
DIFFERENCE METHOD
ESTIMATION OF ERROR

ε_i^k = T(x_i, t_k) − T*(x_i, t_k)

ε_i^k ∝ Δx²  and  ε_i^k ∝ Δt

ε = O(Δx², Δt)
TAYLOR SERIES EXPANSIONS
T_{i−1} = T_i − (dT/dx)_i Δx + (d²T/dx²)_i Δx²/2! − (d³T/dx³)_i Δx³/3! + … + (dⁿT/dxⁿ)_i (−Δx)ⁿ/n! + O(Δx^{n+1})

T_{i+1} = T_i + (dT/dx)_i Δx + (d²T/dx²)_i Δx²/2! + (d³T/dx³)_i Δx³/3! + … + (dⁿT/dxⁿ)_i Δxⁿ/n! + O(Δx^{n+1})

T_{i+2} = T_i + (dT/dx)_i (2Δx) + (d²T/dx²)_i (2Δx)²/2! + (d³T/dx³)_i (2Δx)³/3! + … + (dⁿT/dxⁿ)_i (2Δx)ⁿ/n! + O(Δx^{n+1})

T_{i−2} = T_i − (dT/dx)_i (2Δx) + (d²T/dx²)_i (2Δx)²/2! − (d³T/dx³)_i (2Δx)³/3! + … + (dⁿT/dxⁿ)_i (−2Δx)ⁿ/n! + O(Δx^{n+1})
DERIVATIVE APPROXIMATIONS

(dT/dx)_i = (T_{i+1} − T_i)/Δx − (d²T/dx²)_i Δx/2! − (d³T/dx³)_i Δx²/3! − …
          = (T_{i+1} − T_i)/Δx + O(Δx)

(dT/dx)_i = (T_i − T_{i−1})/Δx + (d²T/dx²)_i Δx/2! − (d³T/dx³)_i Δx²/3! + …
          = (T_i − T_{i−1})/Δx + O(Δx)

(dT/dx)_i = (T_{i+1} − T_{i−1})/(2Δx) + O(Δx²)
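These truncation orders can be checked numerically; a quick sketch (test function sin x at x = 1 is an arbitrary choice): halving Δx should roughly halve the one-sided error but quarter the central error.

```python
import math

def err_forward(dx):
    # error of the one-sided difference for d/dx sin(x) at x = 1, O(dx)
    return abs((math.sin(1 + dx) - math.sin(1)) / dx - math.cos(1))

def err_central(dx):
    # error of the central difference, O(dx^2)
    return abs((math.sin(1 + dx) - math.sin(1 - dx)) / (2 * dx) - math.cos(1))

# Error ratios when dx is halved: ~2 for first order, ~4 for second order
r_fwd = err_forward(0.1) / err_forward(0.05)
r_cen = err_central(0.1) / err_central(0.05)
```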
DERIVATIVE APPROXIMATIONS

4T_{i+1} − T_{i+2} = 3T_i + 2(dT/dx)_i Δx + O(Δx³)

(dT/dx)_i = (4T_{i+1} − T_{i+2} − 3T_i)/(2Δx) + O(Δx²)

T_{i+1} + T_{i−1} = 2T_i + 2(d²T/dx²)_i Δx²/2! + 2(d⁴T/dx⁴)_i Δx⁴/4! + …

(d²T/dx²)_i = (T_{i+1} + T_{i−1} − 2T_i)/Δx² + O(Δx²)
FDM FOR HEAT CONDUCTION
k d²T/dx² + Q = 0

k (d²T/dx²)_i + Q = k (T_{i+1} + T_{i−1} − 2T_i)/Δx² + Q + O(Δx²)

T_{i+1} + T_{i−1} − 2T_i = −QΔx²/k

B.C.: AT X = 0, T = T₀;  AT X = L, T = T_N
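The discrete system above is tridiagonal, so it can be solved directly; a compact sketch on a uniform mesh with both ends at fixed temperatures (the Thomas algorithm used here is a standard tridiagonal solver, not one prescribed by the slides):

```python
def solve_conduction_dirichlet(n, L, k, Q, T0, TN):
    # Solve k d2T/dx2 + Q = 0 with T(0)=T0, T(L)=TN on n interior nodes,
    # using the Thomas algorithm on the tridiagonal system
    #   T[i-1] - 2*T[i] + T[i+1] = -Q*dx^2/k
    dx = L / (n + 1)
    a = [1.0] * n          # sub-diagonal
    b = [-2.0] * n         # diagonal
    c = [1.0] * n          # super-diagonal
    d = [-Q * dx * dx / k] * n
    d[0] -= T0             # fold boundary temperatures into the RHS
    d[-1] -= TN
    for i in range(1, n):  # forward elimination
        m = a[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]
    T = [0.0] * n          # backward substitution
    T[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        T[i] = (d[i] - c[i] * T[i + 1]) / b[i]
    return T

# With Q = 0 the exact profile is linear between the boundary values
T = solve_conduction_dirichlet(9, 1.0, 1.0, 0.0, 100.0, 0.0)
```

With zero heat generation the computed nodal values fall on the straight line between 100 and 0, which is a simple correctness check.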
FLUX TYPE BOUNDARY CONDITION

dT/dx = 0 at x = L  (with T = T₀ prescribed at x = 0)

(dT/dx)_{i=N+1} = (T_{N+2} − T_N)/(2Δx) = 0

where T_{N+2} is an image point outside the boundary.

k (T_{N+2} + T_N − 2T_{N+1})/Δx² + Q = 0

2k (T_N − T_{N+1})/Δx² + Q = 0
IMAGE POINT METHOD
FLUX TYPE BC

APPLYING TAYLOR’S SERIES EXPANSION
AT BOUNDARY POINT

T_{i+1} = T_i + (dT/dx)_i Δx + (d²T/dx²)_i Δx²/2! + … + O(Δx^{n+1})

At the insulated boundary, dT/dx = 0 and d²T/dx² = −Q/k, and the higher
order terms are zero. Hence

T_{N+1} = T_N + QΔx²/(2k)
POLYNOMIAL EXPANSION

IT IS POSSIBLE TO USE LOCAL POLYNOMIAL
EXPANSIONS OF THE FORM
T = a x2 + b x + c
AND USE THREE NODES TO FIT A
VARIABLE. FROM SUCH AN EXPANSION THE
REQUIRED DERIVATIVES AT BOUNDARY
CAN BE EVALUATED FOR IMPLEMENTING
THE FLUX TYPE BC
MATRIX FORM FOR FLUX
TYPE BC
1 0 0 0 0 0   T1   Tb 
 1 −2 1 0 0 0   T   −Q∆x 2 / k 
   2  
 0 1 − 2 1 0 0  T
 3  − Q∆ x 2
/ k 
 0 0 1 −2 1 0  =
 T   −Q∆x / k  2
4
     
 0 0 0 1 − 2 1  T
 5  − Q∆ x 2
/ k 
 0 0 0 0 1 −1  T6   −Q∆x / 2 k 
2
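The 6-node system above can be assembled and solved directly. A sketch with naive Gaussian elimination; the parameter values (Tb = 0, Q = k = 1, Δx = 0.2, i.e. a rod of length 1) are illustrative assumptions:

```python
def gauss_solve(A, b):
    # Gaussian elimination with partial pivoting (in place)
    n = len(b)
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            m = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= m * A[col][c]
            b[r] -= m * b[col]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):        # back substitution
        s = sum(A[r][c] * x[c] for c in range(r + 1, n))
        x[r] = (b[r] - s) / A[r][r]
    return x

# Assemble the 6-node system (Tb = 0, Q = k = 1, dx = 0.2)
dx, Q, k, Tb = 0.2, 1.0, 1.0, 0.0
A = [[0.0] * 6 for _ in range(6)]
b = [0.0] * 6
A[0][0] = 1.0; b[0] = Tb                     # fixed temperature at x = 0
for i in range(1, 5):                        # interior nodes
    A[i][i - 1], A[i][i], A[i][i + 1] = 1.0, -2.0, 1.0
    b[i] = -Q * dx * dx / k
A[5][4], A[5][5] = 1.0, -1.0                 # insulated end (image point row)
b[5] = -Q * dx * dx / (2 * k)
T = gauss_solve(A, b)
```

The exact solution T(x) = Tb + (Q/k)(Lx − x²/2) is quadratic, which this stencil reproduces exactly, so the tip temperature equals QL²/(2k) = 0.5.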
VARIABLE HEAT
GENERATION

k d²T/dx² + Q(T) = 0,  with Q(T) = aT² + b

k (T_{i+1} + T_{i−1} − 2T_i)/Δx² + aT_i² + b = 0

T₁ = T_b at x = 0

dT/dx = 0 at x = L
QUASI-LINEARISATION OF
SOURCE TERM

Q(T_i) = aT_i² + b ≈ (aT_i^k) T_i^{k+1} + b

k (T_{i+1}^{k+1} + T_{i−1}^{k+1} − 2T_i^{k+1})/Δx² + (aT_i^k) T_i^{k+1} + b = 0

T_{i+1}^{k+1} + T_{i−1}^{k+1} − (2 − aT_i^k Δx²/k) T_i^{k+1} = −bΔx²/k
QUASI-LINEARISATION OF
SOURCE TERM
• DUE TO SOURCE TERM, DIAGONAL
DOMINANCE ( |aii | > Σ |aij | ) MAY BE
AFFECTED
• FOR THIS PURPOSE, QUASI-LINEARISATION
IS DONE IN THE FORM Q(Ti) = −A·Ti^{k+1} + B
• IF IT IS NOT POSSIBLE TO EXPRESS Q IN
THE ABOVE FORM WITH A NEGATIVE
COEFFICIENT, THEN TAKE A = 0 AND SET
Q = a(Ti^k)² + b = B
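The role of the negative source coefficient can be seen in a small sketch. Here the simpler linear source Q(T) = aT + b with a < 0 is assumed (an illustrative choice, not the slides' quadratic source): the source term only strengthens the diagonal, so plain Gauss-Seidel sweeps converge.

```python
# Iterative solution of k*T'' + a*T + b = 0, T(0) = T(1) = 0, with a < 0
# so the discrete diagonal (2 - dx^2*a/k) stays dominant.
k, a, b = 1.0, -1.0, 1.0          # illustrative values
n, L = 19, 1.0
dx = L / (n + 1)
T = [0.0] * (n + 2)               # includes the two boundary nodes
for sweep in range(2000):         # Gauss-Seidel sweeps
    for i in range(1, n + 1):
        # from T[i-1] + T[i+1] - 2*T[i] + dx^2*(a*T[i] + b)/k = 0
        T[i] = (T[i - 1] + T[i + 1] + dx * dx * b / k) / (2 - dx * dx * a / k)
```

For these values the exact solution is T(x) = 1 − cosh(x − 1/2)/cosh(1/2), whose midpoint value is about 0.1132; the iterated discrete solution lands close to it.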
TWO-DIM. HEAT CONDUCTION
k (∂²T/∂x² + ∂²T/∂y²) + Q = 0

(∂²T/∂x²)_{i,j} = (T_{i+1,j} + T_{i−1,j} − 2T_{i,j})/Δx² + O(Δx²)

(∂²T/∂y²)_{i,j} = (T_{i,j+1} + T_{i,j−1} − 2T_{i,j})/Δy² + O(Δy²)

k (T_{i+1,j} + T_{i−1,j} − 2T_{i,j})/Δx² + k (T_{i,j+1} + T_{i,j−1} − 2T_{i,j})/Δy² + Q = 0

(five-point stencil: node (i,j) with neighbours (i±1,j) and (i,j±1))
IMPLEMENTATION OF BC
T_{i+1,j} + T_{i−1,j} + β²(T_{i,j+1} + T_{i,j−1}) − 2(1+β²)T_{i,j} = −QΔx²/k

where the grid aspect ratio β = Δx/Δy. Consider the boundary condition at i = imax:

−k (∂T/∂x)_{i=imax} = h(T − T_f)

T_{i−1,j} = T_{i,j} − (∂T/∂x)_{i,j} Δx + (∂²T/∂x²)_{i,j} Δx²/2! + O(Δx³)

T_{i−1,j} = T_{i,j} + {h(T_{i,j} − T_f)/k}Δx − QΔx²/(2k) − β²(T_{i,j+1} + T_{i,j−1} − 2T_{i,j})/2

THE SAME EXPRESSION IS OBTAINED BY THE IMAGE POINT METHOD
Convective boundary condition

At i = imax, −k ∂T/∂x = h(T − T_f)

∂T/∂x = (T_{imax+1,j} − T_{imax−1,j})/(2Δx) = −h(T_{imax,j} − T_f)/k

where (imax+1, j) is an image point, so that

T_{imax+1,j} = T_{imax−1,j} − 2hΔx(T_{imax,j} − T_f)/k

Applying heat balance at node imax, we have

T_{imax+1,j} + T_{imax−1,j} + β²(T_{imax,j+1} + T_{imax,j−1}) − 2(1+β²)T_{imax,j} = −QΔx²/k

Substituting for the image point temperature, we get:

2T_{imax−1,j} − 2hΔx(T_{imax,j} − T_f)/k + β²(T_{imax,j+1} + T_{imax,j−1}) − 2(1+β²)T_{imax,j} = −QΔx²/k
IMAGE POINT METHOD

USING AN IMAGE POINT OUTSIDE THE DOMAIN, DISCRETISE THE BOUNDARY
CONDITION AND SUBSTITUTE IN THE GOVERNING EQUATION

FOR CORNER POINTS WITH TWO FLUX TYPE BC, TWO IMAGE POINTS ARE USED
SOLUTION METHODS

POINT-BY-POINT METHOD
2(1+β²) T_{i,j} = T*_{i+1,j} + T*_{i−1,j} + β²(T*_{i,j+1} + T*_{i,j−1}) + QΔx²/k

LINE-BY-LINE METHOD
T_{i+1,j} + T_{i−1,j} − 2(1+β²)T_{i,j} = −QΔx²/k − β²(T*_{i,j+1} + T*_{i,j−1})

β²(T_{i,j+1} + T_{i,j−1}) − 2(1+β²)T_{i,j} = −QΔx²/k − T*_{i+1,j} − T*_{i−1,j}
UNDER-RELAXATION/ OVER-RELAXATION
Ti ,kj+1 = W × Ti , j + (1 − W )Ti ,kj
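A sketch of the point-by-point method with over-relaxation for the Laplace limit (Q = 0, β = 1) on a square: the top wall is held at 1, the other walls at 0. The grid size and W = 1.7 are illustrative assumptions, not values from the slides.

```python
# Point-by-point Gauss-Seidel with over-relaxation (SOR) for Laplace's eq.
n, w = 21, 1.7                       # grid size and relaxation factor W
T = [[0.0] * n for _ in range(n)]
for j in range(n):
    T[n - 1][j] = 1.0                # hot top wall; other walls stay at 0
for sweep in range(500):
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            gs = 0.25 * (T[i + 1][j] + T[i - 1][j] + T[i][j + 1] + T[i][j - 1])
            T[i][j] = w * gs + (1 - w) * T[i][j]   # over-relaxed update
```

By symmetry (superposing the four one-hot-wall problems gives a uniform field of 1), the centre value of the converged discrete solution is exactly 0.25, a handy check on the iteration.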
HEAT CONDUCTION IN
CYLINDRICAL GEOMETRY
∂²T/∂r² + (1/r)∂T/∂r + (1/r²)∂²T/∂θ² = 0

T|_{r=r₀} = a + b sinθ

(T_{i,j+1} + T_{i,j−1} − 2T_{i,j})/Δr² + (1/r_j)(T_{i,j+1} − T_{i,j−1})/(2Δr) + (1/r_j²)(T_{i+1,j} + T_{i−1,j} − 2T_{i,j})/Δθ² = 0

T_{i,j+1}(1 + Δr/2r_j) + T_{i,j−1}(1 − Δr/2r_j) + (β²/r_j²)(T_{i−1,j} + T_{i+1,j}) − 2(1 + β²/r_j²)T_{i,j} = 0

HERE β = Δr/Δθ  (i: angular index, j: radial index)
MESH FOR CYLINDRICAL GEOMETRY
BOUNDARY CONDITIONS IN
ANGULAR DIRECTION
AT i = imax,

T_{imax,j+1}(1 + Δr/2r_j) + T_{imax,j−1}(1 − Δr/2r_j) + (β²/r_j²)(T_{imax−1,j} + T_{1,j}) − 2(1 + β²/r_j²)T_{imax,j} = 0

AT i = 1,

T_{1,j+1}(1 + Δr/2r_j) + T_{1,j−1}(1 − Δr/2r_j) + (β²/r_j²)(T_{imax,j} + T_{2,j}) − 2(1 + β²/r_j²)T_{1,j} = 0
BOUNDARY CONDITIONS
AT i=1 AND i = imax, THE REENTRANT BOUNDARY
CONDITION IS USED. THAT IS i = 1 AND i = imax ARE
TREATED AS NEIGHBOURS

AT CENTRE OF CYLINDER, WE HAVE:

∂²T/∂x² + ∂²T/∂y² = (T_E + T_W − 2T_P)/Δr² + (T_N + T_S − 2T_P)/Δr² = 0

T_E + T_W + T_N + T_S − 4T_P = 0
TRANSIENT HEAT
CONDUCTION

ρc_p ∂T/∂t = k ∂²T/∂x² + Q

BOUNDARY CONDITIONS:

T = T_b at x = 0 and x = L

INITIAL CONDITION:

T = T_i for all 0 < x < L
METHODS FOR TRANSIENT
MARCHING

• EXPLICIT METHOD
• IMPLICIT METHOD
• SEMI-IMPLICIT METHOD (CRANK-
NICOLSON TECHNIQUE)
EXPLICIT METHOD

(∂T/∂t)_i^n = α (∂²T/∂x²)_i^n

T_i^{n+1} = T_i^n + Δt (∂T/∂t)_i^n

T_i^{n+1} = T_i^n + (αΔt/Δx²)(T_{i+1}^n + T_{i−1}^n − 2T_i^n)
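The explicit update is pointwise, so a sketch needs only a loop; grid and step values below are illustrative, with αΔt/Δx² = 0.4 kept inside the stability limit of 0.5. An initial hot spot between two cold ends must decay to zero.

```python
# Explicit (FTCS) marching for dT/dt = alpha*d2T/dx2, ends held at 0
alpha, n, L = 1.0, 9, 1.0
dx = L / (n + 1)
dt = 0.4 * dx * dx / alpha            # alpha*dt/dx^2 = 0.4 < 0.5: stable
T = [0.0] * (n + 2)
T[5] = 1.0                            # initial hot spot
for step in range(2000):
    Tn = T[:]                         # values at time level n
    for i in range(1, n + 1):
        T[i] = Tn[i] + (alpha * dt / dx**2) * (Tn[i + 1] + Tn[i - 1] - 2 * Tn[i])
```

After marching well past the diffusion time scale, the field has relaxed to the zero steady state.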
IMPLICIT METHOD
T_i^{n+1} = T_i^n + Δt (∂T/∂t)_i^{n+1}

(∂T/∂t)_i^{n+1} = α (∂²T/∂x²)_i^{n+1}

T_i^{n+1} − (αΔt/Δx²)(T_{i+1}^{n+1} + T_{i−1}^{n+1} − 2T_i^{n+1}) = T_i^n
SEMI-IMPLICIT METHOD
T_i^{n+1} = T_i^n + (Δt/2){(∂T/∂t)_i^n + (∂T/∂t)_i^{n+1}}

T_i^{n+1} − (αΔt/2Δx²)(T_{i+1}^{n+1} + T_{i−1}^{n+1} − 2T_i^{n+1}) = T_i^n + (αΔt/2Δx²)(T_{i+1}^n + T_{i−1}^n − 2T_i^n)
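Each implicit step requires a tridiagonal solve, but in exchange large time steps stay stable. A sketch of the fully implicit step (Thomas algorithm; grid and step values are illustrative, chosen so αΔt/Δx² = 5, ten times the explicit limit):

```python
# One fully implicit step for dT/dt = alpha*d2T/dx2, ends fixed at 0:
# (1 + 2r)*T[i] - r*T[i+1] - r*T[i-1] = Told[i],  r = alpha*dt/dx^2
def implicit_step(T, r):
    n = len(T) - 2
    a = [-r] * n; bdiag = [1 + 2 * r] * n; c = [-r] * n
    d = T[1:-1]                      # right-hand side (copy of old values)
    for i in range(1, n):            # Thomas forward sweep
        m = a[i] / bdiag[i - 1]
        bdiag[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]
    x = [0.0] * n
    x[-1] = d[-1] / bdiag[-1]
    for i in range(n - 2, -1, -1):   # back substitution
        x[i] = (d[i] - c[i] * x[i + 1]) / bdiag[i]
    return [0.0] + x + [0.0]

alpha, dx, dt = 1.0, 0.1, 0.05       # r = 5, far beyond the explicit limit
r = alpha * dt / dx**2
T = [0.0] + [1.0] * 9 + [0.0]        # initially hot interior, cold ends
for step in range(200):
    T = implicit_step(T, r)
```

Despite the oversized time step the solution decays smoothly to the zero steady state instead of blowing up.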
COMPARISON OF IMPLICIT/
EXPLICIT METHODS
• EXPLICIT METHOD INVOLVES POINTWISE UPDATING &
REQUIRES NO MATRIX INVERSION. IMPLICIT SCHEME
NEEDS MATRIX INVERSION
• COMPUTATIONAL TIME PER TIME STEP IS MORE FOR
IMPLICIT METHOD THAN THE EXPLICIT.
• FROM STABILITY CONSIDERATIONS, EXPLICIT SCHEME
MAY REQUIRE VERY SMALL TIME STEPS AND HENCE
SEVERAL THOUSAND STEPS TO OBTAIN STEADY STATE
SOLUTION. LARGE TIME STEPS CAN BE USED IN IMPLICIT
SCHEME
• BOTH EXPLICIT & IMPLICIT METHODS ARE O(Δt) WHILE THE
SEMI-IMPLICIT SCHEME IS SECOND ORDER ACCURATE IN TIME
ALTERNATING DIRECTION
IMPLICIT METHOD
ρC_p ∂T/∂t = k(∂²T/∂x² + ∂²T/∂y²) + Q

X-DIR. IMPLICIT: ρC_p ∂T/∂t = k{(∂²T/∂x²)^{n+1} + (∂²T/∂y²)^n} + Q

Y-DIR. IMPLICIT: ρC_p ∂T/∂t = k{(∂²T/∂x²)^{n+1} + (∂²T/∂y²)^{n+2}} + Q
CONVECTIVE DIFFUSION
u dT/dx = α d²T/dx²   (nodes i−1, i, i+1; velocity u)

u (T_{i+1} − T_{i−1})/(2Δx) = α (T_{i+1} + T_{i−1} − 2T_i)/Δx²

2T_i − (1 − Pe_c/2)T_{i+1} − (1 + Pe_c/2)T_{i−1} = 0

Pe_c = uΔx/α

Cell Pe_c < 2 for spatial stability, when central difference is used
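The Pe_c < 2 limit can be read off the recurrence directly: substituting T_i ~ q^i into the central-difference equation (a standard analysis step added here, not spelled out on the slide) gives the roots q = 1 and q = (1 + Pe_c/2)/(1 − Pe_c/2). The second root turns negative once Pe_c > 2, producing node-to-node sign flips:

```python
def growth_factor(pe_c):
    # non-trivial root q of 2*T[i] = (1 - pe/2)*T[i+1] + (1 + pe/2)*T[i-1]
    # for solutions of the form T[i] = q**i
    return (1 + pe_c / 2) / (1 - pe_c / 2)

q_smooth = growth_factor(1.0)   # Pe_c < 2: positive ratio, monotone profile
q_wiggly = growth_factor(4.0)   # Pe_c > 2: negative ratio, oscillatory profile
```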
UPWIND DIFFERENCING

DIFFUSION: central difference; CONVECTION: upwind difference (for u > 0):

u (T_i − T_{i−1})/Δx = α (T_{i+1} + T_{i−1} − 2T_i)/Δx²

FOR u > 0:
(2 + Pe_c)T_i − T_{i+1} − (1 + Pe_c)T_{i−1} = 0

FOR u < 0:
(2 + |Pe_c|)T_i − (1 + |Pe_c|)T_{i+1} − T_{i−1} = 0

(the coefficients sum to zero, so a uniform temperature field satisfies the equation)
ARTIFICIAL DIFFUSION

CENTRAL DIFFERENCE:
2T_i − (1 − Pe_c/2)T_{i+1} − (1 + Pe_c/2)T_{i−1} = 0

UPWIND DIFFERENCE:
(2 + Pe_c)T_i − T_{i+1} − (1 + Pe_c)T_{i−1} = 0

DIFFERENCE = (Pe_c/2)(T_{i+1} + T_{i−1} − 2T_i)
ARTIFICIAL DIFFUSION
u dT/dx = α d²T/dx² + α_a d²T/dx²

The last term on the right is the artificial diffusion term.

(uΔx/2α)(T_{i+1} − T_{i−1}) = (1 + α_a/α)(T_{i+1} + T_{i−1} − 2T_i)

By setting (α_a/α) = Pe_c/2, one can get the upwind
form from the central difference form
UPWINDING &ARTIFICIAL
DIFFUSION
• UPWINDING CAN BE DONE WITH HIGHER ORDER
ACCURACY
• FOR NODE i, WE CAN CONSIDER THE NODES (i-2),
(i-1) AND (i) TO GET SECOND ORDER ACCURATE
EXPRESSION FOR CONVECTIVE TERM. EVEN
NODES (i-2), (i-1), (i) AND (i+1) CAN BE TAKEN FOR
THIRD ORDER ACCURACY
• FOR ARTIFICIAL DIFFUSION II ORDER, IV ORDER
OR VI ORDER EXPRESSIONS ETC. CAN BE USED
HIGHER ORDER ARTIFICIAL
DIFFUSION
u dT/dx = α d²T/dx² + α_a^II d²T/dx² + α_a^IV d⁴T/dx⁴ + α_a^VI d⁶T/dx⁶

d²T/dx² = (T_{i+1} + T_{i−1} − 2T_i)/Δx²

d⁴T/dx⁴ = (T_{i+2} − 4T_{i+1} + 6T_i − 4T_{i−1} + T_{i−2})/Δx⁴
ARTIFICIAL DIFFUSION
• CAN BE USED IN FLOW DIRECTION
FOR HIGH SPEED FLOWS TO AVOID
NUMERICAL OSCILLATIONS; NEED
NOT BE USED IN CROSS- FLOW
DIRECTION
• CAN BE USED TO SMOOTHEN THE
SOLUTION AT SHOCKS & HIGH-GRADIENT REGIONS
INCOMPRESSIBLE FLOW
FORMULATIONS
• PRIMITIVE VARIABLES- VELOCITY
COMPONENTS & PRESSURE
• STREAM FUNCTION/ VORTICITY
• STREAM FUNCTION
• VELOCITY/ VORTICITY
INCOMPRESSIBLE FLOW
EQUATIONS
∂u/∂x + ∂v/∂y = 0

u ∂u/∂x + v ∂u/∂y = −(1/ρ)∂p/∂x + (μ/ρ){∂²u/∂x² + ∂²u/∂y²}

u ∂v/∂x + v ∂v/∂y = −(1/ρ)∂p/∂y + (μ/ρ){∂²v/∂x² + ∂²v/∂y²}
VORTICITY STREAM
FUNCTION FORMULATION
CROSS-DIFFERENTIATING THE X- AND Y-
MOMENTUM EQUATIONS, AND SUBTRACTING
ONE FROM THE OTHER, WE GET THE
VORTICITY TRANSPORT EQUATION:

u ∂ζ/∂x + v ∂ζ/∂y = (μ/ρ)(∂²ζ/∂x² + ∂²ζ/∂y²)

WHERE VORTICITY IS DEFINED AS  ζ = ∇×V = k̂(∂v/∂x − ∂u/∂y)
VORTICITY STREAM
FUNCTION FORMULATION
FOR 2-D INCOMPRESSIBLE FLOW, WE CAN DEFINE
THE STREAM FUNCTION ψ AS

u = ∂ψ/∂y;  v = −∂ψ/∂x

IN TERMS OF ψ, THE EQUATIONS FOR VORTICITY
DEFINITION AND VORTICITY TRANSPORT BECOME

ζ = −(∂²ψ/∂x² + ∂²ψ/∂y²)

(∂ψ/∂y)(∂ζ/∂x) − (∂ψ/∂x)(∂ζ/∂y) = (μ/ρ)(∂²ζ/∂x² + ∂²ζ/∂y²)
LID DRIVEN CAVITY FLOW
(lid: u = u₀, v = 0; the other three walls: u = 0, v = 0)

FOR INTERIOR POINTS:

(ψ_{i+1,j} + ψ_{i−1,j} − 2ψ_{i,j})/Δx² + (ψ_{i,j+1} + ψ_{i,j−1} − 2ψ_{i,j})/Δy² = −ζ_{i,j}

ψ_{i+1,j} + ψ_{i−1,j} + β²(ψ_{i,j+1} + ψ_{i,j−1}) − 2(1+β²)ψ_{i,j} = −Δx²·ζ_{i,j}

{(ψ_{i,j+1} − ψ_{i,j−1})/2Δy}·{(ζ_{i,j} − ζ_{i∓1,j})/Δx} − {(ψ_{i+1,j} − ψ_{i−1,j})/2Δx}·{(ζ_{i,j} − ζ_{i,j∓1})/Δy}
 = (μ/ρ){(ζ_{i+1,j} + ζ_{i−1,j} − 2ζ_{i,j})/Δx² + (ζ_{i,j+1} + ζ_{i,j−1} − 2ζ_{i,j})/Δy²}

(the convective terms are upwinded: the sign in i∓1, j∓1 follows the local velocity direction)
BOUNDARY CONDITIONS

FOR STREAM FUNCTION:

ANY SURFACE HAVING NORMAL VELOCITY = 0
MUST BE A ψ = CONST. SURFACE. HENCE, ALL THE
BOUNDARIES OF THE CAVITY HAVE SAME ψ
VALUE, WHICH CAN BE CONVENIENTLY SET AS
ZERO
ψ_{1,j} = 0;  ψ_{imax,j} = 0;  ψ_{i,1} = 0;  ψ_{i,jmax} = 0
BOUNDARY CONDITIONS
FOR VORTICITY: CONSIDER

ψ_{2,j} = ψ_{1,j} + (∂ψ/∂x)_{1,j}Δx + (∂²ψ/∂x²)_{1,j}Δx²/2! + (∂³ψ/∂x³)_{1,j}Δx³/3! + …

Since ψ = 0 and v = −∂ψ/∂x = 0 on the wall, and ζ = −∂²ψ/∂x² there,

ψ_{2,j} = −ζ_{1,j}·Δx²/2! + O(Δx³)

SIMILARLY

ψ_{imax−1,j} = −ζ_{imax,j}·Δx²/2! + O(Δx³)
BOUNDARY CONDITIONS
IN THE Y-DIRECTION, WE HAVE

ψ_{i,2} = ψ_{i,1} + (∂ψ/∂y)_{i,1}Δy + (∂²ψ/∂y²)_{i,1}Δy²/2! + (∂³ψ/∂y³)_{i,1}Δy³/3! + …

ψ_{i,2} = −ζ_{i,1}·Δy²/2! + O(Δy³)

FOR THE TOP SURFACE

ψ_{i,jmax−1} = −u₀Δy − ζ_{i,jmax}·Δy²/2! + O(Δy³)
VELOCITY-PRESSURE
FORMULATION
CONTINUITY EQUATION
∂u ∂v
+ =0
∂x ∂y
X-MOMENTUM EQ. (FOR UPDATING U VELOCITY):

∂u/∂t + u ∂u/∂x + v ∂u/∂y = −(1/ρ)∂p/∂x + (μ/ρ){∂²u/∂x² + ∂²u/∂y²}

Y-MOMENTUM EQ. (FOR UPDATING V VELOCITY):

∂v/∂t + u ∂v/∂x + v ∂v/∂y = −(1/ρ)∂p/∂y + (μ/ρ){∂²v/∂x² + ∂²v/∂y²}
SIMPLE METHOD
Semi-Implicit Method for Pressure-Linked Equations -- SIMPLE

X-mom.: ∂u/∂t = −(u ∂u/∂x + v ∂u/∂y)^n − (1/ρ)(∂p/∂x)^{n+1} + (μ/ρ){∂²u/∂x² + ∂²u/∂y²}^n

u^{n+1} = u^n − Δt·(u ∂u/∂x + v ∂u/∂y)^n − Δt·(1/ρ)(∂p/∂x)^{n+1} + Δt·(μ/ρ){∂²u/∂x² + ∂²u/∂y²}^n

Y-mom.: ∂v/∂t = −(u ∂v/∂x + v ∂v/∂y)^n − (1/ρ)(∂p/∂y)^{n+1} + (μ/ρ){∂²v/∂x² + ∂²v/∂y²}^n

v^{n+1} = v^n − Δt·(u ∂v/∂x + v ∂v/∂y)^n − Δt·(1/ρ)(∂p/∂y)^{n+1} + Δt·(μ/ρ){∂²v/∂x² + ∂²v/∂y²}^n
VELOCITY CORRECTION
EQUATION
u^{n+1} = u^n − Δt·(u ∂u/∂x + v ∂u/∂y)^n − Δt·(1/ρ)(∂p/∂x)^{n+1} + Δt·(μ/ρ){∂²u/∂x² + ∂²u/∂y²}^n

u* = u^n − Δt·(u ∂u/∂x + v ∂u/∂y)^n − Δt·(1/ρ)(∂p/∂x)* + Δt·(μ/ρ){∂²u/∂x² + ∂²u/∂y²}^n

u^{n+1} − u* = −Δt·(1/ρ)(∂p/∂x)^{n+1} + Δt·(1/ρ)(∂p/∂x)*
VELOCITY CORRECTION
EQUATION
v^{n+1} = v^n − Δt·(u ∂v/∂x + v ∂v/∂y)^n − Δt·(1/ρ)(∂p/∂y)^{n+1} + Δt·(μ/ρ){∂²v/∂x² + ∂²v/∂y²}^n

v* = v^n − Δt·(u ∂v/∂x + v ∂v/∂y)^n − Δt·(1/ρ)(∂p/∂y)* + Δt·(μ/ρ){∂²v/∂x² + ∂²v/∂y²}^n

v^{n+1} − v* = −Δt·(1/ρ)(∂p/∂y)^{n+1} + Δt·(1/ρ)(∂p/∂y)*
PRESSURE CORRECTIONS
DEFINE

u′ = u^{n+1} − u*;  v′ = v^{n+1} − v*;  p′ = p^{n+1} − p*

IT CAN BE SHOWN THAT

u′ = −(Δt/ρ)(∂p′/∂x);  v′ = −(Δt/ρ)(∂p′/∂y)

SUBSTITUTING FOR VELOCITY & PRESSURE CORRECTIONS,
WE GET

∂²p′/∂x² + ∂²p′/∂y² = −(ρ/Δt)(∂u′/∂x + ∂v′/∂y) = (ρ/Δt)(∂u*/∂x + ∂v*/∂y)
STEPS INVOLVED IN SIMPLE
• AT THE START OF A TIME STEP, ASSUME A GUESS
PRESSURE FIELD P*
• SOLVE MOMENTUM EQUATIONS TO GET GUESS
VELOCITIES u* AND v* AT EACH NODE
• USING u* AND v* CALCULATE CONTINUITY
RESIDUE AT EACH POINT
• FROM CONTINUITY EQUATION RESIDUE, SOLVE
FOR PRESSURE CORRECTION P’ AT EACH NODE
• USING P’ SOLVE FOR VELOCITY CORRECTIONS
• UPDATE VARIABLES AS p^{n+1} = p* + p′, u^{n+1} = u* + u′,
v^{n+1} = v* + v′ AND GO TO THE NEXT TIME STEP
STAGGERED MESH
PROCEDURE
• PRESSURE NODES ARE TAKEN AS THE
MAIN NODES
• X-VELOCITY (u) NODES ARE SHIFTED BY
ΔX/2 WITH REFERENCE TO PRESSURE
NODES AND Y-VELOCITY (v) NODES ARE
SHIFTED BY ΔY/2 WITH REFERENCE TO
PRESSURE NODES
• SUCH A STAGGERED MESH AVOIDS ODD-
EVEN DECOUPLING (CHEQUER-BOARD
CONFIGURATION) BETWEEN VELOCITIES
& PRESSURES
STAGGERED MESH

V-VELOCITY

U-VELOCITY

PRESSURE
COMPRESSIBLE FLOW
SIMULATION
• FOR COMPRESSIBLE FLOW THE MASS,
MOMENTUM & ENERGY BALANCE
EQUATIONS ARE INTEGRATED WITH
RESPECT TO TIME TO OBTAIN ρ, ρu, ρv
AND ρe VALUES AT THE NEW TIME LEVEL
• FROM THESE VARIABLES, THE VELOCITY
AND TEMPERATURE ARE DERIVED
• NEW PRESSURE IS OBTAINED USING THE
STATE EQUATION
EQUATIONS OF
COMPRESSIBLE FLOW
Mass bal.: ∂ρ/∂t + ∂(ρu)/∂x + ∂(ρv)/∂y = 0

X Mom. bal.: ∂(ρu)/∂t + ∂(ρu²)/∂x + ∂(ρuv)/∂y = −∂p/∂x + {∂τ_xx/∂x + ∂τ_yx/∂y}

Y Mom. bal.: ∂(ρv)/∂t + ∂(ρuv)/∂x + ∂(ρv²)/∂y = −∂p/∂y + {∂τ_xy/∂x + ∂τ_yy/∂y}

Total energy eq.:
∂(ρe)/∂t + ∂{ρu(e + p/ρ)}/∂x + ∂{ρv(e + p/ρ)}/∂y = ∂/∂x(k ∂T/∂x) + ∂/∂y(k ∂T/∂y) + μΦ
EULER EQUATIONS
FOR NON-CONDUCTING INVISCID FLOW
∂ρ ∂ρu ∂ρv
+ + =0
∂t ∂x ∂y

∂ρu ∂ ( ρu 2 + p ) ∂ρuv
+ + =0
∂t ∂x ∂y
∂ρv ∂ρuv ∂ ( ρv 2 + p )
+ + =0
∂t ∂x ∂y
∂ρe ∂ρu (e + p / ρ ) ∂ρv(e + p / ρ )
+ + =0
∂t ∂x ∂y

Here, p=ρ RT, and e= cvT +(u2+v2)/2
SOLUTION OF EULER
EQUATIONS
LET US DEFINE

Q = [ρ, ρu, ρv, ρe]ᵀ
E = [ρu, ρu² + p, ρuv, ρu(e + p/ρ)]ᵀ
F = [ρv, ρuv, ρv² + p, ρv(e + p/ρ)]ᵀ

∂Q/∂t + ∂E/∂x + ∂F/∂y = 0
SOLUTION OF EULER
EQUATIONS
∂Q/∂t + ∂E/∂x + ∂F/∂y = 0

Q^{n+1} = Q^n − Δt(∂E/∂x + ∂F/∂y)^n   or   Q^{n+1} = Q^n − Δt(∂E/∂x + ∂F/∂y)^{n+1}

u^{n+1} = (ρu)^{n+1}/ρ^{n+1};  v^{n+1} = (ρv)^{n+1}/ρ^{n+1};  e^{n+1} = (ρe)^{n+1}/ρ^{n+1}

e^{n+1} = c_vT^{n+1} + {(u^{n+1})² + (v^{n+1})²}/2

p^{n+1} = ρ^{n+1}RT^{n+1}
SOLUTION OF EULER
EQUATIONS
• WHILE EVALUATING THE FLUX TERMS, ONE
CAN INTRODUCE ARTIFICIAL DIFFUSION
TERMS IN THE FLOW DIRECTION ONLY (SAY,
X) TO SMOOTHEN THE OSCILLATIONS
• TIME MARCHING CAN BE DONE WITH
HIGHER ORDER SCHEMES SUCH AS RUNGE-
KUTTA
• STABILITY OF SOLUTIONS MAY DEMAND
RESTRICTIONS ON STEP SIZE OR TIME STEP
(uΔt/Δx < 1, νΔt/Δx² < 0.5, etc.)
MODELING OF
COMPRESSIBLE FLOWS
SOLUTION METHODS

• MacCormack predictor- corrector scheme
• Jameson’s scheme with shock switches
• Flux splitting schemes
• Fully Implicit schemes
MacCormack Method (Scheme I)
∂u/∂t + u ∂u/∂x = 0   i.e.   ∂u/∂t = −u ∂u/∂x

FTFS (predictor): ū_i = u_i^n − u_i^n (Δt/Δx)(u_{i+1}^n − u_i^n)

FTBS (corrector): ũ_i = u_i^n − ū_i (Δt/Δx)(ū_i − ū_{i−1})

u_i^{n+1} = (ū_i + ũ_i)/2
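A sketch of Scheme I for the simpler linear advection equation ∂u/∂t + a ∂u/∂x = 0 on a periodic domain (an illustrative simplification of the slides' nonlinear equation; grid size and Courant number 0.5 are assumed). A pulse advected for exactly one period should return close to its starting shape:

```python
import math

def maccormack_step(u, c):
    # Scheme I for du/dt + a*du/dx = 0: FTFS predictor, FTBS corrector,
    # periodic boundaries; c is the Courant number a*dt/dx
    n = len(u)
    up = [u[i] - c * (u[(i + 1) % n] - u[i]) for i in range(n)]   # predictor
    return [0.5 * (u[i] + up[i] - c * (up[i] - up[i - 1]))        # corrector
            for i in range(n)]

n, c = 100, 0.5
u = [math.exp(-100 * (i / n - 0.5) ** 2) for i in range(n)]       # Gaussian pulse
u0 = u[:]
for step in range(2 * n):        # n/c steps = one full traverse of the domain
    u = maccormack_step(u, c)
err = max(abs(u[i] - u0[i]) for i in range(n))
```

The scheme is second-order accurate, so after one period the pulse shows only mild dispersion and the peak is largely preserved.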
MacCormack Method (Scheme II)
FTBS (predictor): ū_i = u_i^n − u_i^n (Δt/Δx)(u_i^n − u_{i−1}^n)

FTFS (corrector): ũ_i = u_i^n − ū_i (Δt/Δx)(ū_{i+1} − ū_i)

u_i^{n+1} = (ū_i + ũ_i)/2
MacCormack Method
• Scheme I can capture the left running
characteristics better
• Scheme II can capture the right- running
characteristics better
• To avoid favouring the left- running or
right- running characteristics, both schemes
are applied one after another, in subsequent
time steps.
MacCormack Method
• The predictor/ corrector scheme is stable only
for Courant number less than 1 (uΔt/Δx < 1)
• For avoiding spatial oscillations (particularly
in the regions of high gradients), suitable
artificial diffusion terms may be added
Jameson’s Method
∂u/∂t + u ∂u/∂x = 0   →   ∂u/∂t = −u ∂u/∂x + α^II ∂²u/∂x² + α^IV ∂⁴u/∂x⁴

The time marching can be done using the fourth order Runge-Kutta
scheme. The second order and fourth order artificial
viscosities are calculated using switches in the shock region
Jameson’s scheme
• This has a good temporal accuracy for
prediction
• The time step can be suitably adjusted
depending on the first slope (k1)
• The shock switches are obtained from the
variation of any variable (say, velocity,
pressure etc.)

|u_{i+1} + u_{i−1} − 2u_i| / (|u_{i+1}| + |2u_i| + |u_{i−1}|) = O(1) in a shock region

|u_{i+1} + u_{i−1} − 2u_i| / (|u_{i+1}| + |2u_i| + |u_{i−1}|) = O(Δx²) in a smooth region
FINITE VOLUME METHOD
• IN FINITE VOLUME METHOD, THE GOVERNING
EQUATION IS BROUGHT TO FLUX BALANCE FORM,
IF NECESSARY BY PERFORMING INTEGRATION
OVER ONE CELL
• THE FLUXES AT THE BOUNDARIES OF THE CONTROL
VOLUME ARE APPROXIMATED TO OBTAIN THE
DISCRETISED EQUATION FOR THE CELL
• IF FLUX TYPE BOUNDARY CONDITIONS ARE
IMPOSED AT THE BOUNDARY, THESE ARE
INCORPORATED IN THE CELL FLUX BALANCE IN A
SIMPLE MANNER
HEAT TRANSFER FROM FIN
kA d²T/dx² − hP(T − T_f) = 0;   B.C.: T = T₀ at x = 0; T = T_L at x = L

kA (dT/dx)|_w = kA (T_P − T_W)/Δx_w

kA (dT/dx)|_e = kA (T_E − T_P)/Δx_e

kA(T_E − T_P)/Δx − kA(T_P − T_W)/Δx − hPΔx(T_P − T_∞) = 0

T_E − 2T_P + T_W − (hP/kA)(T_P − T_∞)Δx² = 0

(T_f = T_∞ is the ambient fluid temperature; P is the fin perimeter)
HEAT TRANSFER FROM FIN

CONSIDER INSULATED B.C. AT X = L

dT/dx = 0  AT X = L

FLUX BALANCE FOR THE LAST CELL BECOMES

−kA(dT/dx)_w + kA(dT/dx)_e − hPΔx(T_P − T_∞) = −kA(T_P − T_W)/Δx − hPΔx(T_P − T_∞) = 0
TRANSIENT 1-D CONDUCTION

Cells W–P–E:

q_w − q_e = ρc_p(AΔx)(dT/dt)_P

q_w = −(kA)_w (dT/dx)_w;   q_e = −(kA)_e (dT/dx)_e

For a steady 2-D cell with neighbours N, S, E, W (face spacings Δx_W, Δx_E, Δy_N, Δy_S):

q_n − q_s + q_e − q_w = 0

−(kA)_n (dT/dy)_n + (kA)_s (dT/dy)_s − (kA)_e (dT/dx)_e + (kA)_w (dT/dx)_w = 0

−k_nA_n (T_N − T_P)/Δy_n + k_sA_s (T_P − T_S)/Δy_s − k_eA_e (T_E − T_P)/Δx_e + k_wA_w (T_P − T_W)/Δx_w = 0
VARIABLE PROPERTIES
For smooth variation

k_w = {k(T_P) + k(T_W)}/2 = (k_P + k_W)/2
k_e = {k(T_E) + k(T_P)}/2 = (k_E + k_P)/2

For sudden variation (harmonic mean)

2/k_e = 1/k_E + 1/k_P
Modeling of Convective
Diffusion by FVM
Heat balance for a cell (faces w and e; convection and conduction fluxes):

(ρuC_pT)_w Δy − k(∂T/∂x)_w Δy − (ρuC_pT)_e Δy + k(∂T/∂x)_e Δy = ρC_p ΔxΔy (∂T/∂t)
Discretisation of Cell Balance

(uT)_w − (uT)_e − α(∂T/∂x)_w + α(∂T/∂x)_e = Δx (∂T/∂t)

u{(T_W + T_P)/2 − (T_P + T_E)/2} − (α/Δx)(T_P − T_W) + (α/Δx)(T_E − T_P) = Δx (∂T/∂t)

(uΔt/2Δx)(T_W^n − T_E^n) − (αΔt/Δx²)(2T_P^n − T_W^n − T_E^n) + T_P^n = T_P^{n+1}
Stability Considerations
For temporal stability:  αΔt/Δx² < 1/2

For spatial stability:  uΔx/α < 2
UPWINDING SCHEMES
In the upwind scheme, T_w = T_W and T_e = T_P for u > 0.

For u < 0, T_w = T_P and T_e = T_E.

In the QUICK scheme (for u > 0),

T_{i−1/2} = (3/8)T_i + (3/4)T_{i−1} − (1/8)T_{i−2}

T_{i+1/2} = (3/8)T_{i+1} + (3/4)T_i − (1/8)T_{i−1}

The temperature at the east or west face is evaluated in
the convective term with upwind bias and quadratic
accuracy
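The QUICK face value is just a quadratic fit through two upstream nodes and one downstream node; a one-line sketch (function name is illustrative):

```python
def quick_face(t_up2, t_up, t_dn):
    # QUICK face value for flow from the "up" side: quadratic through
    # far-upstream (t_up2), upstream (t_up) and downstream (t_dn) nodes
    return 0.75 * t_up + 0.375 * t_dn - 0.125 * t_up2

# Nodes at x = 0, 1, 2 give the face at x = 1.5; a quadratic field such as
# T = x or T = x^2 is reproduced exactly at the face midpoint
val_linear = quick_face(0.0, 1.0, 2.0)   # T = x   -> 1.5
val_quad = quick_face(0.0, 1.0, 4.0)     # T = x^2 -> 2.25
```

Reproducing quadratics exactly is what gives the scheme its third-order face accuracy.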
SUMMARY OF FVM
• FVM USES A PHYSICAL APPROACH TO WRITE THE
FLUX BALANCES FOR ANY CELL
• A MESH WITH VARIABLE STEP SIZE CAN BE
EASILY HANDLED
• FLUX TYPE BOUNDARY CONDITIONS CAN BE
HANDLED BY TAKING A FULL CELL; WHEREVER
VALUE IS PRESCRIBED FOR THE VARIABLE, TAKE
HALF A CELL ON THAT BOUNDARY
• PROPERTY VARIATIONS CAN BE TAKEN INTO
ACCOUNT BY CONSIDERING APPROPRIATE
PROPERTIES AT CELL FACES
FINITE ELEMENT MODELING
OF THERMAL PROBLEMS

BY
PROF. T.SUNDARARAJAN
DEPT. OF MECHANICAL ENGINEERING
FINITE ELEMENT METHOD
• WHILE FDM & FVM WERE APPLIED FOR
FLOW/THERMAL PROBLEMS, FEM WAS
INITIALLY DEVELOPED FOR STRUCTURAL
PROBLEMS
• IN THIS METHOD, A LARGE STRUCTURE IS
DIVIDED INTO SMALL ELEMENTS, AND THE
RESPONSE OF EACH ELEMENT IS
WRITTEN AS A MATRIX CONTRIBUTION
• BY ADDING MATRIX CONTRIBUTIONS OF ALL
ELEMENTS, THE MATRIX EQUATION FOR THE
WHOLE GEOMETRY IS OBTAINED
FEM DISCRETISATION
METHODOLOGY
• DIRECT METHOD
• VARIATIONAL METHOD
• WEIGHTED RESIDUAL (GALERKIN)
METHOD
ONE DIMENSIONAL HEAT
CONDUCTION (DIRECT METHOD)
FROM FOURIER'S LAW OF
CONDUCTION, WE GET:

q_{1−2} = kA(T₁ − T₂)/Δx

q_{1−2} = (T₁ − T₂)/R_{1−2},  where R_{1−2} = Δx/(kA)

q_{2−1} = (T₂ − T₁)/R_{1−2}
ONE DIMENSIONAL HEAT
CONDUCTION (DIRECT METHOD)
Considering heat flow from node 1 to node 2, or from
node 2 to node 1, we get:

[q_{1−2}]   kA [  1  −1 ] [T₁]
[q_{2−1}] = Δx [ −1   1 ] [T₂]

Thus, one element gives a matrix contribution of

[K]^(e) = (kA/Δx) [  1  −1 ]
                  [ −1   1 ]
ELEMENT ASSEMBLY

Heat balance for node 2 gives
(chain of nodes 1–2–3–4, with nodal sources Q₂, Q₃):

q_{2−3} + q_{2−1} = Q₂

(kA/Δx)(T₂ − T₃) + (kA/Δx)(T₂ − T₁) = Q₂

(kA/Δx)(2T₂ − T₁ − T₃) = Q₂
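The assembly step above can be sketched directly: scattering each 2×2 element matrix into the global matrix reproduces, row by row, the familiar tridiagonal conduction stencil. Element count and kA/Δx = 1 are illustrative choices.

```python
# Assemble the global conduction matrix for a 4-node rod of 3 equal elements
nel, kA_dx = 3, 1.0                  # elements and kA/dx (illustrative)
n = nel + 1
K = [[0.0] * n for _ in range(n)]
for e in range(nel):                 # scatter the (kA/dx)*[[1,-1],[-1,1]]
    K[e][e] += kA_dx                 # element matrix into global rows e, e+1
    K[e][e + 1] -= kA_dx
    K[e + 1][e] -= kA_dx
    K[e + 1][e + 1] += kA_dx
# Interior row for node 1 now reads: -T1 + 2*T2 - T3 = (heat input row),
# i.e. the same stencil as the nodal heat balance above
```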
MULTI-DIMENSIONAL
CONDUCTION
SUM OF ALL FLUXES FROM NODE 1
= NET HEAT GENERATION AT THE
POINT (node 1 connected to neighbouring nodes 2–7)

(1/R₁₂)(T₁ − T₂) + (1/R₁₃)(T₁ − T₃) + (1/R₁₄)(T₁ − T₄)
+ (1/R₁₅)(T₁ − T₅) + (1/R₁₆)(T₁ − T₆) + (1/R₁₇)(T₁ − T₇) = Q₁
APPLICATION OF GALERKIN
WEIGHTED RESIDUAL METHOD

k d²T/dx² + Q = 0

AT X = 0, T = T₀
AT X = L, T = T_N

k d²T*/dx² + Q = R(x)

where T* is the approximate solution and R(x) is the residue of the
heat balance equation
GALERKIN’S METHOD
∫ N_i (k d²T/dx² + Q) dx = 0;  i = 1, 2, …

d/dx(kN_i dT/dx) − k (dN_i/dx)(dT/dx) = N_i k d²T/dx²

∫ k (dN_i/dx)(dT/dx) dx = ∫ N_i Q dx + [kN_i dT/dx] evaluated from x = 0 to x = L

T^(e) = Σ_{i=1,2} N_i(x) T_i

[K_ij][T_j] = [Q_i]
ELEMENTAL MATRICES
[K]^(e) = (kA/Δx) [  1  −1 ]      ELEMENTAL CONDUCTION MATRIX
                  [ −1   1 ]

[G]^(e) = [ q‴AΔx/2 ]             ELEMENTAL GENERATION VECTOR
          [ q‴AΔx/2 ]
TWO-DIM. HEAT CONDUCTION

k(∂²T/∂x² + ∂²T/∂y²) + Q = 0

−k ∂T/∂n = q on S_q

−k ∂T/∂n = h(T − T_f) on S_h

T = T_b on S_T
2-D HEAT CONDUCTION
∫ N_i {k ∂²T/∂x² + k ∂²T/∂y² + Q} dxdy = 0;  i = 1, 2, …

∂/∂x(kN_i ∂T/∂x) + ∂/∂y(kN_i ∂T/∂y) − k(∂N_i/∂x)(∂T/∂x) − k(∂N_i/∂y)(∂T/∂y) = N_i k(∂²T/∂x² + ∂²T/∂y²)

∫ k{(∂N_i/∂x)(∂T/∂x) + (∂N_i/∂y)(∂T/∂y)} dxdy = ∫ N_i Q dxdy + ∮ N_i k(∂T/∂n) dl

∫ k{(∂N_i/∂x)(∂T/∂x) + (∂N_i/∂y)(∂T/∂y)} dxdy + ∫_{Sh} N_i hT dl = ∫ N_i Q dxdy + ∫_{Sh} N_i hT_f dl − ∫_{Sq} N_i q dl
2-D HEAT CONDUCTION

[K_ij]{T_j} = {G_i}

where K_ij = ∫ k{(∂N_i/∂x)(∂N_j/∂x) + (∂N_i/∂y)(∂N_j/∂y)} dxdy + ∫_{Sh} h N_i N_j dl

G_i = ∫ N_i Q dxdy + ∫_{Sh} N_i hT_f dl − ∫_{Sq} N_i q dl
SHAPE FUNCTIONS
T* = N₁(x,y)T₁ + N₂(x,y)T₂ + N₃(x,y)T₃

(triangular element with vertices (x₁,y₁), (x₂,y₂), (x₃,y₃) carrying T₁, T₂, T₃)

N₁(x,y) = {x(y₂ − y₃) + x₂(y₃ − y) + x₃(y − y₂)} / {x₁(y₂ − y₃) + x₂(y₃ − y₁) + x₃(y₁ − y₂)}

N₂(x,y) = {x₁(y − y₃) + x(y₃ − y₁) + x₃(y₁ − y)} / {x₁(y₂ − y₃) + x₂(y₃ − y₁) + x₃(y₁ − y₂)}

N₃(x,y) = {x₁(y₂ − y) + x₂(y − y₁) + x(y₁ − y₂)} / {x₁(y₂ − y₃) + x₂(y₃ − y₁) + x₃(y₁ − y₂)}
ELEMENTAL CONTRIBUTIONS
e 1 / 3 1 / 6 e 0.5
∫e i j
hN N dl = h∆l 1 / 6 1 / 3
  ∫e f
NhT dl = hT f ∆l  
0.5

1 / 3
e  0.5
∫e i ∫e N i qdl = q∆l 0.5
e
N Qdxdy = QA 1 / 3
1 / 3
 

 x 232 + y 232 x 23 x31 + y 23 y31 x 23 x12 + y 23 y12 
∂N i ∂N j ∂N i ∂N j
∫ k( + )dxdy = k e  x 23 x31 + y 23 y31 x312 + y312 x31 x12 + y 31 y12 

∂x ∂x ∂y ∂y 4A
e  x 23 x12 + y 23 y12 x31 x12 + y31 y12 x 2
+ y 2 
 12 12 
CONCLUSIONS
• FINITE ELEMENT METHOD IS A
VERSATILE METHOD WHICH CAN BE
USED FOR THE MODELING OF HEAT
TRANSFER EQUIPMENT
• IT CAN BE USED FOR PERFORMANCE
ANALYSIS OF A THERMAL SYSTEM
OR FOR THE OPTIMISATION OF THE
DESIGN/ PERFORMANCE