
ME 543 (CFD)

Homework Assignment 1, Marks 8


Submit by 2.00 pm, 30 August 2019

Note:
(i) You are required to enclose a printout of the code(s) (FORTRAN or C) you have
developed.
(ii) Please write down the date of submission conspicuously on the cover page of your
report.

It is required to obtain the steady-state temperature distribution on a 2D rectangular plate of length L (along x) and height H (along y), with the boundary conditions given below.

Dirichlet B.C.: T = T1 at y = 0; T = T2 at x = 0; T = T3 at y = H; T = T4 at x = L
(T1 = 100 °C, T2 = T3 = T4 = 0 °C)
Governing equation for steady 2D heat conduction (which, if there is no heat generation, is the Laplace equation):

\nabla^2 T = \frac{\partial^2 T}{\partial x^2} + \frac{\partial^2 T}{\partial y^2} = 0
It is intended to solve this elliptic PDE subject to the imposed boundary conditions. Select constant spatial step sizes ∆x = ∆y = 0.05 m, in which case imax = 21 and jmax = 41. The temperature distribution is to be computed for a total of (imax − 2) × (jmax − 2) = 741 interior grid points. The convergence condition to be used is: if ERROR < ERRORmax, the solution has converged, where

\mathrm{ERROR} = \sum_{i=2}^{i_{\max}-1} \sum_{j=2}^{j_{\max}-1} \left| T_{i,j}^{\,n+1} - T_{i,j}^{\,n} \right|

\mathrm{ERROR}_{\max} = 0.01

Use an initial guess T = 0.0 at all interior points.


(a) Use the point Gauss-Seidel method to compute the temperature distribution and write down the number of iterations required. Present the results in the format given by TABLE 1.
(b) Do the same using the time-marching method.
(c) Do the same using PSOR with a relaxation parameter given by the optimal relation:

\omega_{\mathrm{opt}} = \frac{2 - 2\sqrt{1-\alpha}}{\alpha}, \quad \text{where} \quad \alpha = \left[ \frac{\cos\!\left(\dfrac{\pi}{i_{\max}-1}\right) + \beta^{2}\cos\!\left(\dfrac{\pi}{j_{\max}-1}\right)}{1+\beta^{2}} \right]^{2}, \quad \beta = \frac{\Delta x}{\Delta y}
(d) Perform numerical experiments with ω taking values from 0.8 to 2.0 at intervals of 0.1 and find the number of iterations required for convergence for each value of ω. Give a plot of ω (vertical axis) vs. the number of iterations for convergence (horizontal axis).
(e) Find the temperature distribution from the analytical solution given below at the grid nodes and present the result in the format given in TABLE 1:

T = T_1 \left[ 2 \sum_{n=1}^{\infty} \frac{1-(-1)^{n}}{n\pi} \, \frac{\sinh\!\left(\dfrac{n\pi (H-y)}{L}\right)}{\sinh\!\left(\dfrac{n\pi H}{L}\right)} \sin\!\left(\frac{n\pi x}{L}\right) \right]

Carry out the summation for n = 1, ..., 110 and use `long double`.

TABLE 1

 i     j     T (Gauss-Seidel)   T (Time-marching)   T (PSOR)   T (Analytical)
 11    1     -                  -                   -          -
 11    2     -                  -                   -          -
 ...   ...   ...                ...                 ...        ...
 11    41    -                  -                   -          -

(f) Draw contours of constant temperature for the rectangular plate from the results obtained by the Gauss-Seidel method. You may use TECPLOT (preferred) or gnuplot.

APPENDIX

Background information for Homework Assignment 1

The Laplace equation that describes steady 2D heat conduction in a solid is given by

\nabla^2 u = \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = 0

The second-order-accurate finite-difference expressions for the derivatives are:


\left.\frac{\partial^2 u}{\partial x^2}\right|_{i,j} = \frac{u_{i+1,j} - 2u_{i,j} + u_{i-1,j}}{(\Delta x)^2} + O\!\left[(\Delta x)^2\right]

\left.\frac{\partial^2 u}{\partial y^2}\right|_{i,j} = \frac{u_{i,j+1} - 2u_{i,j} + u_{i,j-1}}{(\Delta y)^2} + O\!\left[(\Delta y)^2\right]
The discretized Laplace equation (second-order-accurate in space) is thus given by
\frac{u_{i+1,j} - 2u_{i,j} + u_{i-1,j}}{(\Delta x)^2} + \frac{u_{i,j+1} - 2u_{i,j} + u_{i,j-1}}{(\Delta y)^2} = 0

Point Gauss-Seidel iteration for the Laplace equation

The last equation can be rewritten (with \beta = \Delta x / \Delta y, the grid-aspect ratio) as

u_{i+1,j} + u_{i-1,j} - 2(1+\beta^2)\,u_{i,j} + \beta^2\left(u_{i,j+1} + u_{i,j-1}\right) = 0

2(1+\beta^2)\,u_{i,j} = u_{i+1,j} + u_{i-1,j} + \beta^2\left(u_{i,j+1} + u_{i,j-1}\right)

u_{i,j} = \frac{u_{i+1,j} + u_{i-1,j} + \beta^2\left(u_{i,j+1} + u_{i,j-1}\right)}{2(1+\beta^2)}
The point Gauss-Seidel iteration is given by

u_{i,j}^{k+1} = \frac{u_{i+1,j}^{k} + u_{i-1,j}^{k+1} + \beta^{2}\left(u_{i,j+1}^{k} + u_{i,j-1}^{k+1}\right)}{2\left(1+\beta^{2}\right)} \qquad \left(\text{we may call this } u_{GS}^{k+1}\right)

Note that in the Gauss-Seidel iteration, on the right-hand side, we use the most recent values of the variables (unlike point Jacobi, where the old values are used until u has been updated once at all the points). Since, sweeping in order of increasing i and j, u has already been computed at (i-1, j) and (i, j-1) before it is computed at (i, j), the same iteration level k+1 is assigned to u at those locations. When convergence takes place, Gauss-Seidel iterations are always faster than point Jacobi. An additional advantage of Gauss-Seidel is that only one value of u needs to be stored at each grid node, whereas in point Jacobi two values of u (new and old) must necessarily be stored at each node.

Point Successive Over-Relaxation (PSOR)

Obviously, the faster the computational algorithm, the better. Gauss-Seidel is therefore better than point Jacobi (which nowadays is rarely used). Can there be anything, not more complex, yet better than Gauss-Seidel? The answer is `yes`. PSOR is one such algorithm.

The development of PSOR:

In Gauss-Seidel we start with a value of u (denoted by u^k) at an iteration level k and, based on this, obtain u_{GS}^{k+1} at the next iteration level. It is not difficult to visualize that, with the progress of the iterations, more and more change to the initial u^k takes place. At a stage in the computational process when the change in u brought about by an iteration becomes a very small fraction of u, we say convergence has taken place. As u_{GS}^{k+1} is computed from u^k, a Gauss-Seidel iterate can be viewed as

u_{GS}^{k+1} = u^{k} + \left(u_{GS}^{k+1} - u^{k}\right)
The quantity in the parentheses may be viewed as the change in u brought about by an iteration. To obtain faster convergence, can we make the change bigger by multiplying it with a larger-than-one number ω (called the over-relaxation factor)? Not always, but many times we can, provided ω lies between 1 and 2, i.e., 1 < ω < 2. Loosely speaking, it can be thought of as an extrapolation in the direction leading to the solution. A PSOR iteration can thus be written as
u_{PSOR}^{k+1} = u^{k} + \omega\left(u_{GS}^{k+1} - u^{k}\right)

or

u_{PSOR}^{k+1} = (1-\omega)\,u^{k} + \omega\, u_{GS}^{k+1}

Now let us come back to the Laplace equation and write a PSOR iteration for its computation. To do this we just substitute into the above equation the expression for u_{GS}^{k+1} developed earlier.
PSOR Iteration for the Laplace Equation:

u_{i,j}^{k+1} = (1-\omega)\,u_{i,j}^{k} + \omega \left[ \frac{u_{i+1,j}^{k} + u_{i-1,j}^{k+1} + \beta^{2}\left(u_{i,j+1}^{k} + u_{i,j-1}^{k+1}\right)}{2\left(1+\beta^{2}\right)} \right]

An aside:
We have mentioned that Point Gauss-Seidel method is faster than Point Jacobi and PSOR
is faster than Point Gauss-Seidel. Is there anything that is faster even than PSOR? Yes,
there is, especially when the grid size is large (say, 200x200 or more). One such
technique is known as Multigrid, which is one of the most efficient general iterative methods known today (general in the sense that it works in conjunction with all of finite difference, finite volume and finite element). Other than the relaxation techniques like Jacobi and Gauss-Seidel there are also methods like Conjugate Gradient (CG) [for symmetric and positive-definite (this means x^T A x > 0) matrices, such as those arising from the Laplace
equation] and other Krylov-subspace-based methods like BiCG, BiCGStab, GMRES etc.,
which are very efficient, and useful even in those situations where Jacobi and/or Gauss-
Seidel fail to converge. By now you must have realized that there are several methods for
solving a PDE numerically using various algebraic-equation solvers. But it is more
important to realize that it is not enough just to obtain the results; it should also be done
using the most efficient algorithm possible, as in large computations it may make a huge
difference in computational time (several days or even months).

Prove that PSOR can converge only if 0 < ω < 2.

P.S. In most situations, the optimum value of the over-relaxation factor ω is not known a
priori. In these situations a series of numerical experiments can be carried out to
approximately determine the optimum value. However, for a uniform mesh in a
rectangular domain with Dirichlet boundary conditions, there is a closed form expression
for ωoptimum , which is given in the Homework Assignment 1.

Time-marching to steady state


In CFD many times steady-state solutions are obtained by integrating time-dependent
equations with time (using appropriate boundary conditions that do not change with
time). The initial assumed solution is generally arbitrary; however, to ensure convergence
of the iterations or to reduce time for convergence, a good initial guess is always helpful.
As one marches with time, the time-dependence of the solution gradually decreases and
after a sufficiently long time the solution ceases to be time-dependent. This is the so-
called steady state. This strategy of computing the steady solution, starting with a time-
dependent equation, is many times more convenient and sometimes even necessary. We
will now show how the steady-state temperature distribution in a rectangular domain can
be obtained starting with the unsteady heat equation given below:
\frac{\partial u}{\partial t} = \alpha \nabla^2 u = \alpha \left( \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} \right)
where α = k/(ρC_p) denotes the thermal diffusivity. Unlike the time-independent Laplace equation (which is mathematically elliptic), the above time-dependent equation is mathematically parabolic, which allows marching with time. The time derivative can be approximated by Forward-Euler time-stepping:

\frac{\partial u}{\partial t} = \frac{u_{i,j}^{n+1} - u_{i,j}^{n}}{\Delta t} + O(\Delta t)
The second-order-accurate finite-difference expressions for the space derivatives at time-
level n are:

\left.\frac{\partial^2 u}{\partial x^2}\right|_{i,j}^{n} = \frac{u_{i+1,j}^{n} - 2u_{i,j}^{n} + u_{i-1,j}^{n}}{(\Delta x)^2} + O\!\left[(\Delta x)^2\right]

\left.\frac{\partial^2 u}{\partial y^2}\right|_{i,j}^{n} = \frac{u_{i,j+1}^{n} - 2u_{i,j}^{n} + u_{i,j-1}^{n}}{(\Delta y)^2} + O\!\left[(\Delta y)^2\right]

The discretized 2D heat equation (first-order-accurate in time and second-order-accurate in space) is now given by

\frac{u_{i,j}^{n+1} - u_{i,j}^{n}}{\Delta t} = \alpha \left[ \frac{u_{i+1,j}^{n} - 2u_{i,j}^{n} + u_{i-1,j}^{n}}{(\Delta x)^2} + \frac{u_{i,j+1}^{n} - 2u_{i,j}^{n} + u_{i,j-1}^{n}}{(\Delta y)^2} \right] + O\!\left[\Delta t, (\Delta x)^2, (\Delta y)^2\right]
This is an explicit discretization, as the space derivatives are evaluated at time level n, resulting in only one unknown, namely u_{i,j}^{n+1}, in the whole discretized equation. Time marching with an explicit formulation generally requires the time step to be limited by a stability criterion. Carrying out a von Neumann stability analysis of the last equation, it can be shown that the stability condition is
\frac{\alpha\,\Delta t}{(\Delta x)^{2}} + \frac{\alpha\,\Delta t}{(\Delta y)^{2}} \le \frac{1}{2}

If ∆x = ∆y = ∆L, then we may write

0 \le r = \frac{\alpha\,\Delta t}{(\Delta L)^{2}} \le \frac{1}{4}
The discretized equation can now be written in a form useful for computer simulation:
α ∆t α ∆t
uin, +j 1 = uin, j + rx (uin+1, j − 2uin, j + uin−1, j ) + ry (uin, j +1 − 2uin, j + uin, j −1 ) where rx = 2
and ry =
( ∆x ) ( ∆y ) 2
α ∆t 1
[Note that if ∆x = ∆y = ∆L and we choose r = 2
= then we get the well-known 4-point
( ∆L ) 4
uin+1, j + uin−1, j + uin, j +1 + uin, j −1
formula for the Laplace equation: uin, +j 1 = ]
4

When the spacings in the x- and y-directions are not equal, one can choose any positive value less than or equal to 1/4 for r_x and r_y and iterate till there is no significant change in, say, the L1-norm of the change, \sum_{i,j} \left| u_{i,j}^{n+1} - u_{i,j}^{n} \right|.
