Note:
(i) You are required to enclose a printout of the code(s) (FORTRAN or C) you have
developed.
(ii) Please write down the date of submission conspicuously on the cover page of your
report.
$$\mathrm{ERROR} = \sum_{i=2}^{i_{\max}-1} \; \sum_{j=2}^{j_{\max}-1} \left| T_{i,j}^{\,n+1} - T_{i,j}^{\,n} \right|, \qquad \mathrm{ERROR}_{\max} = 0.01$$
$$T = 2\,T_1 \sum_{n=1}^{\infty} \frac{1-(-1)^n}{n\pi}\; \frac{\sinh\!\left(\dfrac{n\pi (H-y)}{L}\right)}{\sinh\!\left(\dfrac{n\pi H}{L}\right)}\; \sin\!\left(\frac{n\pi x}{L}\right)$$
Carry out the summation for n = 1, ..., 110 and use `long double` precision.
TABLE 1
11    1    -    -    -    -
11    2    -    -    -    -
...
11   41    -    -    -    -
(f) Draw contours of constant temperature for the rectangular plate from the results obtained
by the Gauss-Seidel method. You may use TECPLOT (preferred) or gnuplot.
APPENDIX
$$\nabla^2 u = \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = 0$$
$$u_{i,j}^{k+1} = \frac{u_{i+1,j}^{k} + u_{i-1,j}^{k+1} + \beta^2\!\left(u_{i,j+1}^{k} + u_{i,j-1}^{k+1}\right)}{2\,(1+\beta^2)} \qquad \left(\text{we may call this } u_{GS}^{k+1}\right)$$
Note that in Gauss-Seidel iteration, on the right-hand side we use the most recent values
of the variables (unlike point Jacobi, where the old values are used until u has been updated
once at all the points). Since, before u is computed at (i, j), it has already been computed
at (i-1, j) and (i, j-1), the same iteration level k+1 is assigned to u at all these locations.
If convergence takes place, Gauss-Seidel iterations are always faster than point Jacobi. An
additional advantage of Gauss-Seidel is that only one value of u needs to be stored at
each grid node, whereas in point Jacobi two values of u (new and old) must necessarily be
stored at each node.
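The single-array property described above can be seen directly in code. This is a minimal sketch in C (the assignment allows FORTRAN or C): the grid size NX x NY is a placeholder, β = Δx/Δy is assumed (the conventional definition of the grid aspect ratio), and Dirichlet boundary values are assumed to be stored in u before the first sweep.

```c
#include <math.h>

#define NX 21   /* hypothetical grid size */
#define NY 21

/* One Gauss-Seidel sweep for the discretized Laplace equation.
   Because u is updated in place, u[i-1][j] and u[i][j-1] already hold
   their new (k+1) values when u[i][j] is computed -- this is exactly what
   distinguishes Gauss-Seidel from point Jacobi, and why only one array
   is needed.  Returns the L1 change over the sweep, which can serve as
   the convergence monitor. */
double gauss_seidel_sweep(double u[NX][NY], double beta)
{
    double b2 = beta * beta, change = 0.0;
    for (int i = 1; i < NX - 1; i++)
        for (int j = 1; j < NY - 1; j++) {
            double unew = (u[i+1][j] + u[i-1][j]
                           + b2 * (u[i][j+1] + u[i][j-1]))
                          / (2.0 * (1.0 + b2));
            change += fabs(unew - u[i][j]);
            u[i][j] = unew;   /* in-place update: no "old" array kept */
        }
    return change;
}
```

The caller repeats the sweep until the returned change falls below the chosen tolerance.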
Obviously, the faster the computational algorithm the better. Gauss-Seidel is therefore
better than point Jacobi (which nowadays is rarely used). Can there be anything, not
more complex, yet better than Gauss-Seidel? The answer is yes: PSOR is one such
algorithm.
In Gauss-Seidel we start with a value of u (denoted by $u^k$) at an iteration level k and,
based on this, obtain $u_{GS}^{k+1}$ at the next iteration level. It is not difficult to visualize that,
as the iterations progress, more and more change to the initial $u^k$ takes place. At a
stage in the computational process when the change in u brought about by an iteration
becomes a very small fraction of u, we say convergence has taken place. As $u_{GS}^{k+1}$ is
computed from $u^k$, a Gauss-Seidel iterate can be viewed as
$$u_{GS}^{k+1} = u^{k} + \left(u_{GS}^{k+1} - u^{k}\right)$$
The quantity in the parentheses may be viewed as the change in u brought about by an
iteration. To obtain faster convergence, can we make the change bigger by multiplying it
by a larger-than-one number ω (called the over-relaxation factor)? Not always, but
many times we can, provided ω lies between 1 and 2, i.e., $1 < \omega < 2$. Loosely speaking, it
can be thought of as an extrapolation in the direction leading to the solution. A PSOR
iteration can thus be written as
$$u_{PSOR}^{k+1} = u^{k} + \omega\left(u_{GS}^{k+1} - u^{k}\right)$$
or
$$u_{PSOR}^{k+1} = (1-\omega)\,u^{k} + \omega\, u_{GS}^{k+1}$$
Now let us come back to the Laplace equation and write a PSOR iteration for its
computation. To do this we just substitute into the above equation the expression for
$u_{GS}^{k+1}$ developed earlier.
PSOR Iteration for the Laplace Equation:
$$u_{i,j}^{k+1} = (1-\omega)\,u_{i,j}^{k} + \omega\,\frac{u_{i+1,j}^{k} + u_{i-1,j}^{k+1} + \beta^2\!\left(u_{i,j+1}^{k} + u_{i,j-1}^{k+1}\right)}{2\,(1+\beta^2)}$$
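The PSOR iteration differs from a Gauss-Seidel sweep by only one line: the Gauss-Seidel value is blended with the old value using ω. A minimal C sketch, under the same assumptions as before (placeholder grid size, β = Δx/Δy, boundary values preset in u):

```c
#include <math.h>

#define NX 21   /* hypothetical grid size */
#define NY 21

/* One PSOR sweep for the discretized Laplace equation.  With omega = 1
   this reduces exactly to Gauss-Seidel; 1 < omega < 2 over-relaxes.
   Returns the L1 change over the sweep as a convergence monitor. */
double psor_sweep(double u[NX][NY], double beta, double omega)
{
    double b2 = beta * beta, change = 0.0;
    for (int i = 1; i < NX - 1; i++)
        for (int j = 1; j < NY - 1; j++) {
            /* the Gauss-Seidel value at (i, j) ... */
            double ugs = (u[i+1][j] + u[i-1][j]
                          + b2 * (u[i][j+1] + u[i][j-1]))
                         / (2.0 * (1.0 + b2));
            /* ... over-relaxed toward the solution */
            double unew = (1.0 - omega) * u[i][j] + omega * ugs;
            change += fabs(unew - u[i][j]);
            u[i][j] = unew;
        }
    return change;
}
```

With a well-chosen ω the number of sweeps needed for a given tolerance drops substantially compared to ω = 1.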
An aside:
We have mentioned that the point Gauss-Seidel method is faster than point Jacobi and PSOR
is faster than point Gauss-Seidel. Is there anything that is faster even than PSOR? Yes,
there is, especially when the grid size is large (say, 200 x 200 or more). One such
technique is known as Multigrid, which is one of the most efficient general iterative
methods known today (general in the sense that it works in conjunction with all of finite
difference, finite volume and finite element). Other than relaxation techniques like
Jacobi and Gauss-Seidel, there are also methods like Conjugate Gradient (CG) [for
symmetric and positive-definite (this means $x^T A x > 0$) matrices, such as the one given by the Laplace
equation] and other Krylov-subspace-based methods like BiCG, BiCGStab, GMRES, etc.,
which are very efficient, and useful even in those situations where Jacobi and/or Gauss-
Seidel fail to converge. By now you must have realized that there are several methods for
solving a PDE numerically using various algebraic-equation solvers. But it is more
important to realize that it is not enough just to obtain the results; it should also be done
using the most efficient algorithm possible, as in large computations it may make a huge
difference in computational time (several days or even months).
P.S. In most situations, the optimum value of the over-relaxation factor ω is not known a
priori. In such situations a series of numerical experiments can be carried out to
determine the optimum value approximately. However, for a uniform mesh in a
rectangular domain with Dirichlet boundary conditions, there is a closed-form expression
for $\omega_{optimum}$, which is given in Homework Assignment 1.
$$\left.\frac{\partial^2 u}{\partial x^2}\right|_{i,j}^{n} = \frac{u_{i+1,j}^{n} - 2u_{i,j}^{n} + u_{i-1,j}^{n}}{(\Delta x)^2} + O[(\Delta x)^2]$$

$$\left.\frac{\partial^2 u}{\partial y^2}\right|_{i,j}^{n} = \frac{u_{i,j+1}^{n} - 2u_{i,j}^{n} + u_{i,j-1}^{n}}{(\Delta y)^2} + O[(\Delta y)^2]$$
If $\Delta x = \Delta y = \Delta L$, then we may write $0 \le r = \dfrac{\alpha\,\Delta t}{(\Delta L)^2} \le \dfrac{1}{4}$.
The discretized equation can now be written in a form useful for computer simulation:
$$u_{i,j}^{n+1} = u_{i,j}^{n} + r_x\left(u_{i+1,j}^{n} - 2u_{i,j}^{n} + u_{i-1,j}^{n}\right) + r_y\left(u_{i,j+1}^{n} - 2u_{i,j}^{n} + u_{i,j-1}^{n}\right)$$
where $r_x = \dfrac{\alpha\,\Delta t}{(\Delta x)^2}$ and $r_y = \dfrac{\alpha\,\Delta t}{(\Delta y)^2}$.
[Note that if $\Delta x = \Delta y = \Delta L$ and we choose $r = \dfrac{\alpha\,\Delta t}{(\Delta L)^2} = \dfrac{1}{4}$, then we get the well-known 4-point formula for the Laplace equation: $u_{i,j}^{n+1} = \dfrac{u_{i+1,j}^{n} + u_{i-1,j}^{n} + u_{i,j+1}^{n} + u_{i,j-1}^{n}}{4}$]
When the spacings in the x- and y-directions are not equal, one can choose any positive value
less than or equal to 1/4 for $r_x$ and $r_y$ and iterate till there is no significant change in, say,
the $L_1$-norm of the error given by $\sum_{i,j} \left| u_{i,j}^{n+1} - u_{i,j}^{n} \right|$.
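The discretized time-marching formula above can be sketched as a single explicit step in C. The grid size is a placeholder; the caller supplies $r_x$ and $r_y$ (each chosen at most 1/4, as the text recommends) and copies the updated interior back before the next step:

```c
#include <math.h>

#define NX 21   /* hypothetical grid size */
#define NY 21

/* One explicit pseudo-time step of the discretized equation
   u^{n+1} = u^n + rx*(d2u/dx2 terms) + ry*(d2u/dy2 terms).
   Boundary values are left untouched (Dirichlet conditions assumed).
   Returns the L1-norm of the change, the convergence monitor suggested
   in the text. */
double explicit_step(double uold[NX][NY], double unew[NX][NY],
                     double rx, double ry)
{
    double l1 = 0.0;
    for (int i = 1; i < NX - 1; i++)
        for (int j = 1; j < NY - 1; j++) {
            unew[i][j] = uold[i][j]
                + rx * (uold[i+1][j] - 2.0*uold[i][j] + uold[i-1][j])
                + ry * (uold[i][j+1] - 2.0*uold[i][j] + uold[i][j-1]);
            l1 += fabs(unew[i][j] - uold[i][j]);
        }
    return l1;
}
```

With $r_x = r_y = 1/4$ this step is exactly the 4-point point-Jacobi formula noted above, which is why marching to a steady state recovers the Laplace solution.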