
Numerical Solution of the Poisson Equation

MAE 237 Computational Fluid Dynamics

Homework 1

Roy Culver Department of Mechanical and Aerospace Engineering University of California, Irvine Irvine, CA 92612

January 24, 2005

Table of Contents

1 Introduction
2 Analytical Solution
3 Numerical Methods
  3.1 Central Difference Discretization
  3.2 Point Iterative Methods
    3.2.1 Jacobi Iteration
    3.2.2 Gauss-Seidel Method
    3.2.3 S.O.R. (Successive Over Relaxation)
  3.3 Line Iterative Methods
    3.3.1 S.L.O.R. (Successive Line Over Relaxation)
    3.3.2 A.D.I. (Alternating Direction Implicit)
  3.4 Direct Methods
4 Results
  4.1 Numerical Convergence
  4.2 Solution Accuracy

List of Figures

1 Duct geometry considered
2 Plot of the Analytical Solution
3 Grid nodes used in our discretization scheme
4 Residual convergence history for different grids
5 Time of convergence versus grid size
6 Error history between analytical solution and numerical solution
7 R.M.S. error versus Δx
8 Comparison of Analytical and Numerical Solutions to Poisson Equation

1 Introduction

This paper presents a solution to the Poisson Equation representing steady, laminar flow through a square duct with a no-slip boundary condition at the walls. For this type of flow, the governing equation is the axial momentum equation, which can be simplified as shown in Equation 1 for the geometry of Figure 1.

\[ \frac{\partial p}{\partial z} = \mu \left( \frac{\partial^2 w}{\partial x^2} + \frac{\partial^2 w}{\partial y^2} \right) \tag{1} \]

where w(x, y) may be solved for given the pressure gradient, ∂p/∂z, and viscosity, µ. For constant pressure gradient and viscosity, we can nondimensionalize Equation 1 to arrive at

\[ -\mathrm{Re} = \frac{\partial^2 w}{\partial x^2} + \frac{\partial^2 w}{\partial y^2} \tag{2} \]

where Re is the corresponding Reynolds number. The “no-slip” boundary condition translates to

\[ w = 0 \quad \text{at} \quad x = \pm 1,\ y = \pm 1. \]

[Figure 1 shows the square duct cross-section spanning −1 ≤ x ≤ 1 and −1 ≤ y ≤ 1, with the flow direction along the z axis.]

Figure 1: Duct geometry considered

In this paper, solutions for the velocity distribution, w(x, y), for Re = 1000 will be obtained analytically and numerically. The numerical methods considered here include:

1. Jacobi Iteration

2. Gauss-Seidel Method

3. S.O.R. (Successive Over Relaxation)

4. S.L.O.R. (Successive Line Over Relaxation)

5. A.D.I. (Alternating Direction Implicit)

Convergence histories for each numerical scheme are presented, and comparison to the analytical solution is made. Calculations are conducted on numerical grids of sizes 33 × 33, 65 × 65, 129 × 129, and 257 × 257. Details of the numerical schemes used are given.

2 Analytical Solution

The governing equation for laminar, viscous, steady flow through a square duct is a linear P.D.E. in two dimensions. Analogous to solving an O.D.E., one method of solving this P.D.E. is to first find a particular solution which satisfies the full P.D.E. and the boundary conditions at x = ±1, then solve the P.D.E. in homogeneous form and combine the two solutions. There are several options for a particular solution, but one of the simpler forms is

\[ w_p(x, y) = \frac{\mathrm{Re}}{2}\left(1 - x^2\right). \tag{3} \]

By simple examination we see that Equation 3 satisfies Equation 2 (∂²wₚ/∂x² = −Re and ∂²wₚ/∂y² = 0) along with the boundary conditions at x = ±1. Next we must solve the homogeneous form of the original P.D.E., which is exactly the Laplace equation in two dimensions. There are several ways to do this; here we will use separation of variables. This method assumes that the solution is separable, that is, of the form

\[ w_h(x, y) = X(x)Y(y). \]

If we assume this form of solution, we may substitute into the homogeneous form of Equation 2 to arrive at

\[ X''(x)Y(y) + X(x)Y''(y) = 0. \tag{4} \]

Rearranging Equation 4, we arrive at

\[ \frac{X''(x)}{X(x)} = -\frac{Y''(y)}{Y(y)} = -\lambda^2 \tag{5} \]

where λ is called a “separation” constant. We know that the left hand side is a function of x only and the middle is a function of y only; if they are equal to each other then they must both be equal to a constant. The choice of the constant as −λ² is simply for convenience. Rearranging once again, we arrive at two “separate” O.D.E.s, which is the goal of the method. Those O.D.E.s are

\[ X''(x) + \lambda^2 X(x) = 0 \tag{6} \]

and

\[ Y''(y) - \lambda^2 Y(y) = 0 \tag{7} \]

For Equation 6, we see that for non-trivial solutions (i.e. X(x) ≠ 0) we get solutions of the form

\[ X(x) = a\sin(\lambda x) + b\cos(\lambda x). \]

If we apply the boundary conditions X(−1) = 0 and X(1) = 0, and recognize that sin and cos are odd and even functions respectively, we arrive at the two relations

\[ a\sin(\lambda) = 0, \qquad b\cos(\lambda) = 0. \]

Since the only constant values of a and b which satisfy both relations for arbitrary λ are zero, we are forced to find a solution in terms of a series expansion: we make use of the periodicity of sin and cos to choose values of λ which satisfy the relations. For constant λ we can only satisfy one of these relations, so for this solution we arbitrarily choose b cos(λ) = 0 and arrive at λ as

\[ \lambda_n = \frac{(2n+1)\pi}{2} \quad \text{for} \quad n = 0, 1, 2, \ldots \tag{8} \]

Similar to the O.D.E. for X(x), for Equation 7 we see that for non-trivial solutions (i.e. Y(y) ≠ 0) we get solutions of the form

\[ Y(y) = d\sinh(\lambda y) + e\cosh(\lambda y). \]

Again, we must choose by which function we are to expand the solution; for this solution we will use cosh, since the solution must be symmetric about y = 0. Combining our solutions, we now have a general solution for the homogeneous equation as

\[ w_h(x, y) = \sum_{n=0}^{\infty} C_n \cosh(\lambda_n y)\cos(\lambda_n x) \]

where C_n is a distinct constant for each n such that the boundary conditions are satisfied, and λ_n is given by Equation 8. If we now combine the homogeneous and particular solutions, we can apply the boundary conditions and solve for the constants C_n.

\[ w(x, y) = \frac{\mathrm{Re}}{2}\left(1 - x^2\right) + \sum_{n=0}^{\infty} C_n \cosh(\lambda_n y)\cos(\lambda_n x) \tag{9} \]

Applying the boundary condition w(x, 1) = 0, we can rearrange to get

\[ \frac{\mathrm{Re}}{2}\left(1 - x^2\right) = -\sum_{n=0}^{\infty} C_n \cosh(\lambda_n)\cos(\lambda_n x) \]

Here we make use of the orthogonality identity

\[ \int_{-l}^{l} \cos\frac{m\pi x}{l}\,\cos\frac{n\pi x}{l}\,dx = \begin{cases} l, & m = n \\ 0, & m \neq n \end{cases} \]

to solve for the C n coefficients. First we multiply by cos(λ m x) and integrate over the range 0 to 1.

\[ \frac{\mathrm{Re}}{2}\int_0^1 \left(1 - x^2\right)\cos(\lambda_m x)\,dx = -\sum_{n=0}^{\infty} C_n \cosh(\lambda_n) \int_0^1 \cos(\lambda_n x)\cos(\lambda_m x)\,dx \tag{10} \]

We then notice that the integral on the right hand side is equal to zero for all values of m ≠ n; thus the summation disappears and we are left with

\[ \frac{\mathrm{Re}}{2}\int_0^1 \left(1 - x^2\right)\cos(\lambda_n x)\,dx = -\frac{C_n \cosh(\lambda_n)}{2} \tag{11} \]

where we have changed the subscripts to n for simplicity. Evaluating the integral on the left by integration by parts, we get

\[ \int_0^1 \left(1 - x^2\right)\cos(\lambda_n x)\,dx = \left[ \frac{\sin(\lambda_n x)}{\lambda_n} - \frac{x^2 \sin(\lambda_n x)}{\lambda_n} - \frac{2x\cos(\lambda_n x)}{\lambda_n^2} + \frac{2\sin(\lambda_n x)}{\lambda_n^3} \right]_0^1 \]

which simplifies to

\[ \int_0^1 \left(1 - x^2\right)\cos(\lambda_n x)\,dx = \frac{2(-1)^n}{\lambda_n^3}. \]

Thus, each C_n can be evaluated as

\[ C_n = -\frac{2\,\mathrm{Re}\,(-1)^n}{\lambda_n^3 \cosh(\lambda_n)}. \]

Substituting this back into Equation 9, we are arrive at a final solution of

∞ 180 (−1) n cosh(λ n x) w(x, y) = Re (1 − x 2
180
(−1) n
cosh(λ n x)
w(x, y) =
Re (1 − x 2 ) − 2Re
cos(λ n x)
100
120
2
λ
3
cosh(λ n )
200
n
220
n=0
260
240
Figure 2 shows a plot of the solution over the domain
60
40
20
80
140
1
80
60
20
160
40
140
100
160
40
20
240
220
0.5
120
120
180
0
200
140
-0.5
100
160
200
80
160
120
120
60
180
140
40
60
20
100
80
20
-1
-1
-0.5
0
0.5
1
40
x
180
260
60
40
20
20
180
220
80
80
100
280
280
60
140
100
220
160
120
20
40
y
240
240
120
60
200
80
160
260
140
180
100
80
40
60
20

(12)

Figure 2: Plot of the Analytical Solution
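The truncated series is straightforward to evaluate numerically. Below is a minimal sketch (not from the original report; the function name and defaults are illustrative) that sums the first n_terms Fourier modes of Equation 12; Section 4 notes that 10 terms were used for the comparisons made there.

```python
import numpy as np

def w_analytical(x, y, Re=1000.0, n_terms=10):
    """Evaluate Equation 12 by summing the first n_terms Fourier modes."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    w = 0.5 * Re * (1.0 - x**2)              # particular solution, Equation 3
    for n in range(n_terms):
        lam = 0.5 * (2 * n + 1) * np.pi      # separation constants, Equation 8
        w -= (2.0 * Re * (-1.0)**n * np.cosh(lam * y) * np.cos(lam * x)
              / (lam**3 * np.cosh(lam)))
    return w

# Quick checks: w vanishes (to rounding) on x = +/-1 since cos(lam) = 0 there,
# and the centerline value is roughly 0.295 * Re for a square duct.
xx, yy = np.meshgrid(np.linspace(-1, 1, 65), np.linspace(-1, 1, 65))
print(w_analytical(1.0, 0.0), w_analytical(xx, yy).max())
```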

3 Numerical Methods

3.1 Central Difference Discretization

[Figure 3 shows the five-point stencil used: the node (i, j) and its four neighbors (i−1, j), (i+1, j), (i, j−1), and (i, j+1) in the x-y plane.]

Figure 3: Grid nodes used in our discretization scheme

There are many options for discretizing P.D.E.s to make them convenient to solve numerically. For our numerical solution to the Poisson Equation, we have chosen a second order central difference scheme to discretize the original governing equation. The resulting discretized form of Equation 2 is then

\[ \frac{w_{i-1,j} - 2w_{i,j} + w_{i+1,j}}{\Delta x^2} + \frac{w_{i,j-1} - 2w_{i,j} + w_{i,j+1}}{\Delta y^2} = -q_{i,j} \tag{13} \]

for all (i, j) within the domain of our solution. Here q_{i,j} represents the source term, which is exactly Re for our case. We note that this discretized equation approximates the original P.D.E. but does not replicate it. As we solve Equation 13 numerically, we will see that the solution approaches the analytical solution as the grid resolution is refined. That is,

\[ \lim_{\Delta x \to 0} w_{\mathrm{numerical}} = w_{\mathrm{analytical}} \]

Five different methods were used to solve Equation 13 and they will be described in the following section.

3.2 Point Iterative Methods

Point iterative methods start from an initial guess for the solution of the equation and evaluate the discretized equation for each w(i, j) in the domain. At each iteration a residual, defined by

\[ R_{i,j} = w_{i-1,j} - 2w_{i,j} + w_{i+1,j} + w_{i,j-1} - 2w_{i,j} + w_{i,j+1} + q_{i,j}\,h^2, \tag{14} \]

is calculated at every point and used as a measure of the convergence of the solution. Here we have replaced Δx and Δy with h to signify that only uniform grids are used for this solution (i.e. Δx = Δy = h).

As you can see by comparing Equations 13 and 14, as the residual approaches zero we approach the solution of the central difference equation. From here we take the newly calculated values of w(i, j) as the new guess and begin the process again. The methods presented here vary slightly in the way they choose to evaluate the neighboring points, as well as in the way they replace old values with new ones.
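Before detailing the individual schemes, it is worth making Equation 14 concrete. The sketch below (an illustration added in this write-up, not code from the original report) evaluates the residual and its r.m.s. value on a uniform grid:

```python
import numpy as np

def residual(w, q, h):
    """R(i,j) from Equation 14 on interior points of a uniform grid (dx = dy = h)."""
    R = np.zeros_like(w)
    R[1:-1, 1:-1] = (w[:-2, 1:-1] - 2 * w[1:-1, 1:-1] + w[2:, 1:-1]
                     + w[1:-1, :-2] - 2 * w[1:-1, 1:-1] + w[1:-1, 2:]
                     + q * h**2)
    return R

def rms(R):
    """Root-mean-square residual over interior points, the convergence measure."""
    return np.sqrt(np.mean(R[1:-1, 1:-1] ** 2))
```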

3.2.1 Jacobi Iteration

The Jacobi iteration method is the most straightforward of the iterations discussed here. As discussed above, the method starts from an initial guess, timestep (n), and uses the central difference equation to calculate new values, timestep (n + 1), of the solution as

\[ w^{(n+1)}_{i,j} = \frac{1}{4}\left( w^{(n)}_{i-1,j} + w^{(n)}_{i+1,j} + w^{(n)}_{i,j-1} + w^{(n)}_{i,j+1} \right) + \frac{1}{4} q_{i,j} h^2 \tag{15} \]

for the entire domain. Next, a residual is calculated at each point in the domain; timestep (n + 1) then becomes timestep (n), and the process is repeated until the r.m.s. residual for the domain reaches the convergence criterion. For double precision calculations this criterion is generally taken as

\[ R_{\mathrm{criteria}} = 1 \times 10^{-10} \]
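A minimal Jacobi solver built around Equations 14 and 15 might look as follows. This is a sketch under the assumptions stated in the text (uniform grid on [−1, 1] × [−1, 1], q = Re, homogeneous walls); the iteration cap is chosen arbitrarily:

```python
import numpy as np

def jacobi(N=33, Re=1000.0, tol=1e-10, max_iter=500_000):
    """Solve Equation 13 on an N x N uniform grid with Jacobi iteration."""
    h = 2.0 / (N - 1)                 # domain [-1, 1] x [-1, 1]
    w = np.zeros((N, N))              # no-slip walls double as the initial guess
    for it in range(1, max_iter + 1):
        w_new = w.copy()
        # Equation 15: average the old neighbor values, add the source term.
        w_new[1:-1, 1:-1] = 0.25 * (w[:-2, 1:-1] + w[2:, 1:-1]
                                    + w[1:-1, :-2] + w[1:-1, 2:]
                                    + Re * h**2)
        w = w_new
        # r.m.s. of the residual in Equation 14 over the interior.
        R = (w[:-2, 1:-1] + w[2:, 1:-1] + w[1:-1, :-2] + w[1:-1, 2:]
             - 4.0 * w[1:-1, 1:-1] + Re * h**2)
        if np.sqrt(np.mean(R**2)) < tol:
            break
    return w, it
```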

3.2.2 Gauss-Seidel Method

The Gauss-Seidel method is very similar to Jacobi iteration. The only difference is the way in which the neighboring values are collected. Whereas the Jacobi method uses only neighboring values from the previous timestep to calculate w^{(n+1)}_{i,j}, the Gauss-Seidel method includes neighboring values which have already been updated in the computation of each new value. The governing formula now looks like

\[ w^{(n+1)}_{i,j} = \frac{1}{4}\left( w^{(n+1)}_{i-1,j} + w^{(n)}_{i+1,j} + w^{(n+1)}_{i,j-1} + w^{(n)}_{i,j+1} \right) + \frac{1}{4} q_{i,j} h^2, \tag{16} \]

where w^{(n+1)}_{i-1,j} and w^{(n+1)}_{i,j-1} are the neighbors which have already been updated during the current sweep and may be used as updated values in computing w^{(n+1)}_{i,j}. Again, residuals are calculated after all w_{i,j} have been computed, and the process is repeated until the residual reaches the convergence criterion. Because of this faster updating of the neighboring values, the Gauss-Seidel method converges faster than Jacobi iteration.
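A sketch of the corresponding sweep (again illustrative, not the report's code) shows the one real difference from Jacobi: the update is done in place, so the (i−1, j) and (i, j−1) values read inside the loop are already the new ones.

```python
import numpy as np

def gauss_seidel_sweep(w, Re, h):
    """One in-place Gauss-Seidel sweep of Equation 16 (increasing i, then j).

    Values at (i-1, j) and (i, j-1) were already updated this sweep, so the
    new information propagates immediately through the domain.
    """
    N = w.shape[0]
    for j in range(1, N - 1):
        for i in range(1, N - 1):
            w[i, j] = 0.25 * (w[i - 1, j] + w[i + 1, j]
                              + w[i, j - 1] + w[i, j + 1]
                              + Re * h**2)
    return w
```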

3.2.3 S.O.R. (Successive Over Relaxation)

The S.O.R. method applies the concept of over-relaxation to point iteration. It may be incorporated with either Jacobi or Gauss-Seidel iteration. For the solution computed here, we have used the Gauss-Seidel method as the base of the S.O.R. method to increase the speed of iteration. The method is based on the assumption that for certain “well behaved” functions, we may be able to overestimate our solution at each iteration and thus increase the speed of convergence. This assumption proves to be invalid for many nonlinear sets of equations, but for linear elliptic equations such as we have here, it is a good one. The method is implemented by changing the way in which we choose to update each value after a new value is computed. Instead of a newly computed value becoming the value for the next timestep, it becomes an intermediate value, w^{(*)}, and we use the equation

\[ w^{(n+1)}_{i,j} = (1 - \alpha)\, w^{(n)}_{i,j} + \alpha\, w^{(*)}_{i,j} \]

where α is called a relaxation parameter, to update the values at timestep (n + 1). For our equations, α may take a value between 0 and 2. There is an optimal value, but since it is often difficult to compute, we have tested several values and chosen α = 1.7, as it seems to work well for most of the computations performed here.
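As a sketch, the only change from the Gauss-Seidel sweep is the blending step (α = 1.7 as chosen above; everything else is assumed as before):

```python
import numpy as np

def sor_sweep(w, Re, h, alpha=1.7):
    """One S.O.R. sweep: the Gauss-Seidel provisional value w* is blended
    with the old value as w_new = (1 - alpha) * w_old + alpha * w*."""
    N = w.shape[0]
    for j in range(1, N - 1):
        for i in range(1, N - 1):
            w_star = 0.25 * (w[i - 1, j] + w[i + 1, j]
                             + w[i, j - 1] + w[i, j + 1]
                             + Re * h**2)
            w[i, j] = (1.0 - alpha) * w[i, j] + alpha * w_star
    return w
```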

3.3 Line Iterative Methods

Line iterative methods differ from point iterative methods in that they iterate on a whole “line” of values at once, as opposed to a single point. For the Poisson Equation this makes the solution more implicit and requires the solution of a tridiagonal matrix for each line. The resulting formula is

\[ w^{(n+1)}_{i-1,j} - 2w^{(n+1)}_{i,j} + w^{(n+1)}_{i+1,j} = -w^{(n+1)}_{i,j-1} + 2w^{(n)}_{i,j} - w^{(n)}_{i,j+1} - q_{i,j} h^2 \tag{17} \]

where the line is taken in the i direction and the neighbors in the j direction are treated as boundary conditions. In general, the matrix resulting from Equation 17 can be solved directly, but to save computational effort and exploit the orderly structure of a tridiagonal matrix, the Thomas algorithm is used here; a sketch of it is given below. While direct solution of matrices can be very time consuming for large grid sizes, the Thomas algorithm greatly reduces this time, and convergence may be greatly accelerated because many fewer iterations are needed to reach a converged solution than for point iterative methods. The concepts of Jacobi iteration, Gauss-Seidel iteration, and over-relaxation can easily be implemented with line iterative methods. To speed up convergence, Gauss-Seidel ordering was used as the base iteration scheme for both line methods presented here, and over-relaxation was also used.
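The Thomas algorithm is standard forward elimination and back substitution specialized to a tridiagonal system; the routine below is a minimal sketch (an illustration, not the report's code):

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system in O(N): a is the sub-diagonal, b the main
    diagonal, c the super-diagonal, d the right-hand side; a[0] and c[-1]
    are unused. b and d are copied so the inputs are left untouched."""
    n = len(d)
    b = np.asarray(b, dtype=float).copy()
    d = np.asarray(d, dtype=float).copy()
    for k in range(1, n):                  # forward elimination
        m = a[k] / b[k - 1]
        b[k] -= m * c[k - 1]
        d[k] -= m * d[k - 1]
    x = np.empty(n)
    x[-1] = d[-1] / b[-1]
    for k in range(n - 2, -1, -1):         # back substitution
        x[k] = (d[k] - c[k] * x[k + 1]) / b[k]
    return x
```

For Equation 17 the diagonals are constant: ones on the sub- and super-diagonals and −2 on the main diagonal.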

3.3.1 S.L.O.R. (Successive Line Over Relaxation)

The S.L.O.R. method as implemented here uses the Gauss-Seidel scheme as the base iteration and incorporates the concept of over-relaxation when replacing successive lines: after each tridiagonal system is solved, over-relaxed values replace the line of values. Once a sweep has covered the whole domain, residuals are calculated and the process is repeated until the residual reaches the convergence criterion; a sketch of one sweep follows. Without implementing further convergence acceleration techniques, S.L.O.R. actually takes longer to converge than S.O.R., and thus it should not be used on its own. Several acceleration techniques are discussed in the references.
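A sketch of one S.L.O.R. sweep, reusing the thomas() routine above (illustrative; the wall-value handling assumes the homogeneous boundary conditions of this problem):

```python
import numpy as np

def slor_row_sweep(w, Re, h, alpha=1.7):
    """One S.L.O.R. sweep: solve Equation 17 along each line j with the Thomas
    algorithm, then over-relax the whole line before moving to the next one.
    Relies on the thomas() routine sketched above."""
    N = w.shape[0]
    ones = np.ones(N - 2)
    for j in range(1, N - 1):
        # RHS of Equation 17: line j-1 is already updated, lines j and j+1 are old.
        d = -w[1:-1, j - 1] + 2.0 * w[1:-1, j] - w[1:-1, j + 1] - Re * h**2
        d[0] -= w[0, j]                 # fold wall values into the RHS
        d[-1] -= w[-1, j]               # (zero here, kept for generality)
        w_star = thomas(ones, -2.0 * ones, ones, d)
        w[1:-1, j] = (1.0 - alpha) * w[1:-1, j] + alpha * w_star
    return w
```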

3.3.2 A.D.I. (Alternating Direction Implicit)

The A.D.I. method is a slight modification of the S.L.O.R. method in that two sweeps of the domain are conducted for each iteration. The first sweep is conducted in the i direction over each row, and all the row values are replaced. Next, a sweep is conducted in the j direction over each column, and the column values are replaced.

This method is significantly faster than the S.L.O.R. method and may be used to reduce convergence time relative to S.O.R. without the need for further convergence acceleration techniques.
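Because the five-point stencil is symmetric in x and y, a sketch of one A.D.I. iteration can reuse the row sweep above on the transposed array (an illustration, assuming slor_row_sweep() from the previous sketch):

```python
import numpy as np

def adi_iteration(w, Re, h, alpha=1.7):
    """One A.D.I. iteration: an implicit sweep along the i direction, then the
    same sweep along the j direction. The Poisson stencil is symmetric in x
    and y, so the same line solver works on the transposed array in place."""
    w = slor_row_sweep(w, Re, h, alpha)         # i-direction sweep
    w = slor_row_sweep(w.T, Re, h, alpha).T     # j-direction sweep via transpose
    return w
```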

3.4 Direct Methods

Direct methods for solving Equation 13 require solving a pentadiagonal matrix. Methods such as Gaussian elimination and specialized pentadiagonal matrix solvers are common approaches. While this can in principle be done, it is often very time consuming and less useful than iterative methods, so for the sake of brevity these will not be discussed here; please refer to the references.
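For completeness, a sketch of a direct solution (an assumption of this write-up; it uses SciPy's sparse factorization rather than the pentadiagonal eliminations the report alludes to) assembles the interior system from Kronecker products:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def solve_direct(N=33, Re=1000.0):
    """Solve Equation 13 directly: assemble the pentadiagonal matrix for the
    (N-2)^2 interior unknowns and solve it with a sparse factorization."""
    h = 2.0 / (N - 1)
    m = N - 2
    # 1-D second-difference matrix with Dirichlet (w = 0) walls.
    T = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(m, m), format="csr")
    I = sp.identity(m, format="csr")
    A = sp.kron(I, T) + sp.kron(T, I)        # 2-D Laplacian stencil (times h^2)
    rhs = -Re * h**2 * np.ones(m * m)        # Equation 13: Laplacian of w = -q
    w = np.zeros((N, N))
    w[1:-1, 1:-1] = spla.spsolve(A.tocsc(), rhs).reshape(m, m)
    return w
```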

4 Results

4.1 Numerical Convergence

Solutions were computed using all five numerical schemes discussed in Section 3. Convergence histories for four different grid resolutions are shown in Figure 4. Here we can see that the order of convergence stays constant for all cases. As we expect, increasing the implicitness and incorporating acceleration techniques makes the solution take fewer and fewer iterations to converge.

[Figure 4 plots r.m.s. residual versus iteration number for the Jacobi, Gauss-Seidel, S.O.R., S.L.O.R., and A.D.I. schemes, with the convergence criterion marked: (a) grid size 33 × 33, (b) grid size 65 × 65, (c) grid size 129 × 129, (d) grid size 257 × 257.]

Figure 4: Residual convergence history for different grids

The time of computation is perhaps more important than the total number of iterations, as a few very long iterations can be as inefficient as many very short ones. Figure 5 plots the time of convergence versus grid size for each method used. We notice that the fastest method on the largest grid is actually one of the point methods (S.O.R.).

[Figure 5 plots time to convergence in seconds against grid size Nx for the Jacobi, Gauss-Seidel, S.O.R., S.L.O.R., and A.D.I. methods.]

Figure 5: Time of convergence versus grid size

4.2 Solution Accuracy

The overall accuracy of the solution is in the end limited by the order of accuracy of the discretization scheme. Since we have used a second order discretization scheme, we expect the error to scale as Δx². In Figure 6 we see the error convergence histories at four different grid resolutions for each method used. We note that the final asymptotic error value decreases as we increase the resolution of the grid. Figure 7 shows the converged root mean squared error values as a function of Δx. Here we see that the R.M.S. error actually seems to decrease as O(Δx³). Perhaps this is due to the number of Fourier terms included in the analytical solution: the analytical solution used for comparison here retained 10 Fourier terms, and taking more terms we should expect a more accurate analytical solution, which may explain the unexpected trend in error. Finally, Figure 8 shows both the analytical solution and a converged numerical solution on the same grid using the A.D.I. method.

[Figure 6 plots the error between the numerical and analytical solutions versus iteration number for the Jacobi, Gauss-Seidel, S.O.R., S.L.O.R., and A.D.I. schemes: (a) grid size 33 × 33, (b) grid size 65 × 65, (c) grid size 129 × 129, (d) grid size 257 × 257.]

Figure 6: Error history between analytical solution and numerical solution

References

1. Stanley J. Farlow. Partial Differential Equations for Scientists and Engineers. Dover Publications Incorporated, 1993.

2. C.A.J. Fletcher. Computational Techniques for Fluid Dynamics, Vol. 1 and 2. Springer-Verlag, 1991.

3. C. Hirsch. Numerical Computation of Internal and External Flows, Vol. 1 and 2. John Wiley, 1990.

4. Feng Liu. Computational Fluid Dynamics. In Course Notes for MAE 237. University of California, Irvine, Department of Mechanical and Aerospace Engineering, 2005.


[Figure 7 plots R.M.S. error versus Δx on logarithmic axes; the power-law fit is Y = 22.1386(ΔX)^3.06647.]

Figure 7: R.M.S. error versus Δx

[Figure 8 shows contour plots of w(x, y) over the square domain: (a) the analytical solution given by Equation 12, and (b) the numerical solution to Equation 13 using A.D.I.]

Figure 8: Comparison of Analytical and Numerical Solutions to Poisson Equation
