
Homework 5: Laplace and Random Numbers

Johnson Liu
14 May 2016

1 Electric Potential And Electric Field: Square Inside Square


The electric potential (V) at each point along a surface with boundaries can be determined numerically by
using the Laplace equation and the Gauss-Seidel and Over-Relaxation methods. V(i, j, k) at each point in
a region must satisfy:

∂²V/∂x² + ∂²V/∂y² + ∂²V/∂z² = 0 .    (1)
In order to work with a discrete number of points in a region of space, the following approximation can be
derived from the Laplace equation:

∆V/∆x ≈ [V(i+1, j, k) − V(i, j, k)]/∆x ,

∆V/∆x ≈ [V(i, j, k) − V(i−1, j, k)]/∆x ,

∆²V/∆x² ≈ (1/∆x){[V(i+1, j, k) − V(i, j, k)]/∆x − [V(i, j, k) − V(i−1, j, k)]/∆x} ,

∆²V/∆x² ≈ [V(i+1, j, k) + V(i−1, j, k) − 2V(i, j, k)]/(∆x)² ,    (2)

∆²V/∆y² ≈ [V(i, j+1, k) + V(i, j−1, k) − 2V(i, j, k)]/(∆y)² ,    (3)

∆²V/∆z² ≈ [V(i, j, k+1) + V(i, j, k−1) − 2V(i, j, k)]/(∆z)² .    (4)
Substituting Equations (2), (3), and (4) into Equation (1) and solving for V(i, j, k) gives

V(i, j, k) = (1/6)[V(i+1, j, k) + V(i−1, j, k) + V(i, j+1, k) + V(i, j−1, k) + V(i, j, k+1) + V(i, j, k−1)] .    (5)

When working in only two dimensions, a similar derivation leads to the following:

V(i, j) = (1/4)[V(i+1, j) + V(i−1, j) + V(i, j+1) + V(i, j−1)] .    (6)
Equation (6) shows that the electric potential at a given point depends on the electric potential of
neighboring points. Given the condition of the boundaries of a region and an initial guess for the electric
potential at each point in the region, Equation (6) can be used iteratively to produce new guesses for V(i, j)
during each iteration from the previous generation of V(i, j) values.
One method for producing new guesses for V(i, j) at each point in a region during each iteration is the
use of the Gauss-Seidel algorithm:

V_new(i, j) = (1/4)[V_old(i+1, j) + V_old(i−1, j) + V_old(i, j+1) + V_old(i, j−1)]
V_old(i, j) = V_new(i, j) .    (7)

During each iterative step, the Gauss-Seidel algorithm moves through each point and calculates the new value
of V(i, j) at a particular point using the old values of V(i, j) from the neighboring points. The algorithm
then immediately overwrites the V(i, j) value at the current point.
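As a sketch of this update in code, the following Python fragment performs in-place Gauss-Seidel sweeps on a small square grid; the grid size and the uniform 1V boundary are illustrative choices, not the exact parameters used for the figures in this report.

```python
# In-place Gauss-Seidel sweeps for the 2D Laplace equation.
# Illustrative setup: an 11x11 grid with every boundary point held at 1V.
N = 11
V = [[0.0] * N for _ in range(N)]
for k in range(N):
    V[0][k] = V[N - 1][k] = V[k][0] = V[k][N - 1] = 1.0

def gauss_seidel_sweep(V):
    """One pass over the interior points. Each point is overwritten
    immediately, so later updates in the same sweep see the new values."""
    change = 0.0
    for i in range(1, N - 1):
        for j in range(1, N - 1):
            new = 0.25 * (V[i + 1][j] + V[i - 1][j] + V[i][j + 1] + V[i][j - 1])
            change += abs(new - V[i][j])
            V[i][j] = new
    return change

while gauss_seidel_sweep(V) > 1e-4:   # sweep until the total change is tiny
    pass
```

With a uniform 1V boundary the interior relaxes toward 1V everywhere, which makes the convergence easy to check by hand.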
Another method for generating new values for V(i, j) is the use of the Over-Relaxation algorithm:

α = 2/(1 + π/N) ,
V_new(i, j) = (1/4)[V_old(i+1, j) + V_old(i−1, j) + V_old(i, j+1) + V_old(i, j−1)]
V_over(i, j) = V_old(i, j) + α[V_new(i, j) − V_old(i, j)]
V_old(i, j) = V_over(i, j) ,    (8)

where α = 2/(1 + π/N) is the most efficient value of α for a square of side length N.
After a certain number of iterations, the values of V(i, j) begin to change by smaller and smaller amounts
with each successive iteration, approaching their respective limiting values. These limiting values are the
values of V(i, j) that satisfy the Laplace equation at each point. The
electric potentials over a region can be determined to a certain accuracy by choosing how small the change in
potentials (∆V(i, j)) need to be after successive iterations before the algorithm decides to end the iterative
process. The following condition can be used to determine whether or not the values of V(i, j) over a surface
during a particular iteration are close enough to some converging value at each point:

ε ≥ Σ_{i,j} |∆V(i, j)|

ε ≥ Σ_{i,j} |V_new(i, j) − V_old(i, j)| ,    (9)

where ε can be chosen to determine how accurate the final values of V(i, j) are. The smaller the value of ε,
the closer the final values of V(i, j) are to their limiting values.
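Combining the over-relaxed update of Equation (8) with the stopping rule of Equation (9) gives a loop like the following sketch; again the grid size and uniform 1V boundary are illustrative choices rather than the parameters of the cases below.

```python
import math

N = 11
alpha = 2.0 / (1.0 + math.pi / N)   # the alpha quoted as optimal for side length N
V = [[0.0] * N for _ in range(N)]
for k in range(N):
    V[0][k] = V[N - 1][k] = V[k][0] = V[k][N - 1] = 1.0

def over_relax_sweep(V):
    """One over-relaxed pass; returns the sum of |delta V| for Equation (9)."""
    change = 0.0
    for i in range(1, N - 1):
        for j in range(1, N - 1):
            new = 0.25 * (V[i + 1][j] + V[i - 1][j] + V[i][j + 1] + V[i][j - 1])
            over = V[i][j] + alpha * (new - V[i][j])
            change += abs(over - V[i][j])
            V[i][j] = over
    return change

iterations = 0
while over_relax_sweep(V) > 1e-4:   # epsilon = 10^-4, as in the figures
    iterations += 1
```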
The algorithms discussed above can be used to solve for the electric potential across a surface in a case
with specific boundary conditions. The case being explored consists of a larger box with V = 1V on each
of its sides. In the middle of the larger box exists a smaller box with side lengths equal to 0.2 times the side
lengths of the larger box. The edges of the smaller box have V = 2V.
Figures 1 and 2 show the color maps of the electric potential across a box within a box using the Gauss-
Seidel and Over-Relaxation algorithms, respectively. In these cases, the distance (∆x) between points on
the surface in the x and y directions is 0.04 m and the value of ε is 10⁻⁴. Each point other than the boundary
points is given an initial guess of 0V. Figures 3 and 4 show the color maps of the electric potential across
a box within a box with ∆x = 0.02 m and ε = 10⁻⁴. The color at each point along the xy plane indicates
the value of the electric potential at that point. Red areas represent places in which the electric potential is
positive while blue areas show regions in which the potential is negative. The darker the color, the greater
the magnitude of the potential.
The final solutions for V(i, j) at each point across the surface of the region are very similar between the
Gauss-Seidel and Over-Relaxation algorithms. The electric potential at every point within the inner box
is 2V. There is a smooth gradient between the edges of the inner box and the edges of the outer box

Figure 1: Map of electric potential of a box within a larger box. Gauss-Seidel, ∆x = 0.04 m, ε = 10⁻⁴, initial V(i, j) guess = 0V.

Figure 2: Map of electric potential of a box within a larger box. Over-Relaxation, ∆x = 0.04 m, ε = 10⁻⁴, initial V(i, j) guess = 0V.

Table 1: Number of iterations for V(i, j) to converge for the box within a box case. ε = 10⁻⁴. Initial V(i, j) guess = 0V.

Algorithm          ∆x = 0.04 m    ∆x = 0.02 m
Gauss-Seidel            299           1204
Over-Relaxation          71            146

Figure 3: Map of electric potential of a box within a larger box. Gauss-Seidel, ∆x = 0.02 m, ε = 10⁻⁴, initial V(i, j) guess = 0V.

Figure 4: Map of electric potential of a box within a larger box. Over-Relaxation, ∆x = 0.02 m, ε = 10⁻⁴, initial V(i, j) guess = 0V.

where the potential gradually decreases from 2V to 1V. The gradient is smoother when ∆x = 0.02 m
because more points are plotted.
The Gauss-Seidel and the Over-Relaxation algorithms produce the same end result for the solution to
the Laplace equation, but Over-Relaxation converges faster than Gauss-Seidel. Table 1 shows the number
of iterations it takes the Gauss-Seidel and the Over-Relaxation algorithms to produce converged values of
V(i, j) for the box within a box with ∆x = 0.04 m and ∆x = 0.02 m. For both algorithms, a smaller ∆x
requires a larger number of iterations for the size of ∆V(i, j) to reach the chosen threshold. This is due
to the larger number of points that are iterated over for smaller values of ∆x. Since the initial electric
potentials for the points on the surface are chosen to be 0V and since the new value of V(i, j) is determined
by averaging the current potentials at neighboring points, the overall rate of change of the potential of the
entire surface is slower for surfaces that have a greater number of points to iterate over. In both the ∆x =
0.04 m and ∆x = 0.02 m cases, the box within a box reaches convergence after a smaller number of iterations
when the Over-Relaxation algorithm is used. The factor of α in the Over-Relaxation algorithm increases the
speed of convergence by amplifying the change suggested by the Gauss-Seidel algorithm during each iteration
by a factor greater than 1.
The x and y components of the electric field at each point on the surface of the box within a box can be
found by the following:

E_x(i, j) ≈ −[V(i+1, j) − V(i−1, j)]/(2∆x) ,    (10)

E_y(i, j) ≈ −[V(i, j+1) − V(i, j−1)]/(2∆x) .    (11)
The equations above show that the electric field points in the direction going from an area of higher electric
potential to an area of lower electric potential. Figures 5, 6, 7, and 8 show the electric field of the box
within a box using the Gauss-Seidel and Over-Relaxation algorithms. The direction of the arrows indicate
the direction of the electric field at each point while the length of the arrows show the relative magnitude of
the electric field. The results are qualitatively the same for Gauss-Seidel and Over-Relaxation for both ∆x
= 0.04 m and ∆x = 0.02 m. The magnitude of the electric field is 0 V/m within the inner box because
the potential of each point within the inner box is the same at 2V. When moving beyond the edges of the
inner box, the electric field is directed from the edges of the inner box to the edges of the outer box because
the electric potential is greater at the inner box than at the outer box. When following a path from the
inner box to the outer box, the magnitude of the electric field starts out relatively small, then increases,
then decreases when approaching the edge of the outer box. This indicates that when moving from the inner
box to the outer box, the difference in electric potential between neighboring points starts out small, then
increases, then decreases again. The symmetry of the electric field in its direction and magnitude when going
from the center of the inner box to the edges of the outer box is due to the symmetric geometry of the inner
and outer boxes and to the symmetry of the boundary potentials.
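Equations (10) and (11) translate directly into code; the sketch below checks the central differences against a linear test potential V = x + 2y, whose field is exactly (−1, −2). The test potential is an illustrative assumption, not a case from this report.

```python
def field(V, dx):
    """E_x and E_y on the interior points of a square grid V by
    central differences, with equal spacing dx in the x and y directions."""
    n = len(V)
    Ex = [[-(V[i + 1][j] - V[i - 1][j]) / (2 * dx) for j in range(1, n - 1)]
          for i in range(1, n - 1)]
    Ey = [[-(V[i][j + 1] - V[i][j - 1]) / (2 * dx) for j in range(1, n - 1)]
          for i in range(1, n - 1)]
    return Ex, Ey

# check against V = x + 2y (index i along x, index j along y)
dx = 0.1
V = [[i * dx + 2 * j * dx for j in range(5)] for i in range(5)]
Ex, Ey = field(V, dx)
```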

2 Parallel Plate Capacitor: Two Plates Inside Rectangle


The electric potential and electric field of a 2D area around a parallel plate capacitor can be modeled in
the same fashion as the treatment of the box within a box case in the previous section. Here, two plates are
placed within a rectangle of length L and width W = 3L. The length of each plate is (1/2)L, and the separation
between the two plates is (2/3)W in the first case and (1/3)W in the second case. The electric potential of the
left plate is 1V while the potential of the right plate is -1V. The points on the edges of the rectangle have
potentials of 0V. Over-Relaxation is used to converge the potentials, and the value of α is arbitrarily chosen
to be 2/(1 + π/W). The value of α can be chosen to be any value between 1 and 2. When α = 1, the algorithm
becomes the same as Gauss-Seidel. When α > 2, the system never converges. α can be changed to see what
value leads to the fastest convergence for the case of a parallel plate capacitor. ε is chosen to be 10⁻⁴ and
the initial guesses for the values of V(i, j) are 0V for points other than those on the plates and at the edges
of the rectangle.

Figure 5: Map of electric field of a box within a larger box. Gauss-Seidel, ∆x = 0.04 m, ε = 10⁻⁴, initial V(i, j) guess = 0V.

Figure 6: Map of electric field of a box within a larger box. Over-Relaxation, ∆x = 0.04 m, ε = 10⁻⁴, initial V(i, j) guess = 0V.

Figure 7: Map of electric field of a box within a larger box. Gauss-Seidel, ∆x = 0.02 m, ε = 10⁻⁴, initial V(i, j) guess = 0V.

Figure 8: Map of electric field of a box within a larger box. Over-Relaxation, ∆x = 0.02 m, ε = 10⁻⁴, initial V(i, j) guess = 0V.

Figure 9: Map of electric potential of a parallel plate capacitor. Over-Relaxation, L = 1 m, W = 3 m, plate separation = (2/3)W, ∆x = 0.025 m, ε = 10⁻⁴, initial V(i, j) guess = 0V, convergence time = 317 iterations.

Figure 10: Map of electric potential of a parallel plate capacitor. Over-Relaxation, L = 1 m, W = 3 m, plate separation = (1/3)W, ∆x = 0.025 m, ε = 10⁻⁴, initial V(i, j) guess = 0V, convergence time = 307 iterations.

Figure 9 shows the map of the electric potential across a rectangle that contains a parallel plate capacitor.
The plates are separated by a distance of (2/3)W. In Figure 10, the plates are separated by a distance of
(1/3)W. Both cases show the same behavior for the electric potential across the rectangle. The potential gradually decreases
cases show the same behavior for the electric potential across the rectangle. The potential gradually decreases
from the plate with a potential of 1V to the plate with a potential of -1V. It takes 317 iterations for the first
case to converge while the second case takes 307 iterations.
Figures 11 and 12 show the electric fields of the parallel plates separated by (2/3)W and by (1/3)W, respectively.
The plots show that the electric field moves away from areas of higher electric potential and towards areas of
lower electric potential. The electric field is greatest in magnitude around the plates which have potentials
of 1V and -1V. The closer the two plates are to each other, the greater the strength of the electric field in
the area between the two plates. The electric field is symmetric in that the pattern in which the electric
field points away from the plate with 1V is the opposite of the way in which the electric field points towards
the plate with -1V.

3 Random Number Generator: Distribution


"Random" numbers can be generated using different methods, one of which is shown by the following:

x_{n+1} = (a·x_n + c) % m ,    (12)

where x_n is the random number in the current iteration, x_{n+1} is the newly generated random number, a and
c are constants, and % is the modulo operation that finds the remainder when dividing the quantity a·x_n + c
by m. a is chosen to be 7⁵, c is chosen to be 0, and m is chosen to be 2³¹ − 1. Starting with some seed, x₀,
the generator shown in Equation (12) can be used to generate a series of pseudo-random numbers uniformly
in the range [1, m). If the range [1, m) is divided into bins of equal size and if the numbers generated by the
generator are placed into those bins, each bin will be filled by about the same amount if enough numbers
are generated. Figure 13 shows the assortment of random numbers generated by the generator in Equation
(12). The histogram divides the range of random numbers into smaller, equally sized ranges, or bins, and
assorts the generated numbers into those bins. The figure shows that the numbers generated by the random
number generator are evenly distributed over the range [1, m).
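A sketch of the generator and the binning check in Python (the seed and bin count follow Figure 13; the iteration count here is reduced for speed):

```python
a, c, m = 7**5, 0, 2**31 - 1

def lcg(x):
    # Equation (12): x_{n+1} = (a*x_n + c) % m
    return (a * x + c) % m

bins = [0] * 20
x = 54938                      # seed x_0, as in Figure 13
trials = 100000
for _ in range(trials):
    x = lcg(x)
    bins[x * 20 // m] += 1     # place x into one of 20 equal bins

# a uniform generator should fill every bin with roughly trials/20 numbers
```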
Another generator is shown below:

x_{n+1} = a(x_n % (m/a)) − (m % a)(a·x_n/m) ,    (13)

where the divisions are understood as integer divisions. This random number generator should produce the same "randomness" as the generator shown in Equation (12).
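Because the printed form of Equation (13) is ambiguous after typesetting, the sketch below uses the standard Schrage decomposition m = a·q + r with q = m // a, which evaluates a·x mod m without ever forming the full product; it reproduces Equation (12) with c = 0 exactly. The seed is illustrative.

```python
a, m = 7**5, 2**31 - 1
q, r = m // a, m % a           # Schrage decomposition: m = a*q + r, with r < q

def schrage(x):
    """a*x mod m evaluated without the (possibly overflowing) product a*x."""
    x_new = a * (x % q) - r * (x // q)
    if x_new < 0:              # the decomposition can go negative; wrap it back
        x_new += m
    return x_new

x = 12345                      # illustrative seed
seq = []
for _ in range(1000):
    x = schrage(x)
    seq.append(x)
```

In Python the plain product never overflows, so the two generators can be compared term by term; on fixed-width integers the Schrage form is the one that stays in range.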
Random numbers can be used to produce probability distributions. The generator shown in Equation
(13) will be the generator used in the Accept/Reject method and the Metropolis method for selective picking
of generated random numbers. The algorithm for the Accept/Reject method is as shown:

Figure 11: Map of electric field of a parallel plate capacitor. Over-Relaxation, L = 1 m, W = 3 m, plate separation = (2/3)W, ∆x = 0.025 m, ε = 10⁻⁴, initial V(i, j) guess = 0V, convergence time = 317 iterations.

Figure 12: Map of electric field of a parallel plate capacitor. Over-Relaxation, L = 1 m, W = 3 m, plate separation = (1/3)W, ∆x = 0.025 m, ε = 10⁻⁴, initial V(i, j) guess = 0V, convergence time = 307 iterations.

x1_new = a(x1_old % (m/a)) − (m % a)(a·x1_old/m)

if x1_new < 0 :
    x1_new = x1_new + m

r1 = x1_new·u/m

x2_new = a(x2_old % (m/a)) − (m % a)(a·x2_old/m)

if x2_new < 0 :
    x2_new = x2_new + m

r2 = x2_new·u/m

if r2 ≤ P(r1) :
    accept r2 ,

where P(r) represents a chosen probability distribution, x1 and x2 are two random numbers generated
independently from each other using different seeds, u is the upper limit of the desired range of random
numbers, r2 is a candidate random number, and r1 is a random number used to decide whether to accept
or reject r2 . During each iteration, the Accept/Reject method generates two random numbers, x1 and x2 ,
separately from each other. Accept/Reject then produces two other numbers, r1 and r2 , from x1 and x2 ,
respectively, that are within the desired range of the probability distribution. If r2 is less than P(r1 ), then
the algorithm puts r2 into a list of accepted numbers. When bins are created to sort the accepted random
numbers, each bin has a width ∆r and is centered at a particular r value. When the accepted numbers are
sorted into bins of equal size, the fraction of accepted numbers within each bin should follow P(r) at the
r value on which the bin is centered. In other words, the height of each bin should match the value of P(r)
at each bin.
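A minimal sketch of this sampling loop for P(r) = 1/(1+r⁴) on [0, 10]: for simplicity it draws both numbers from a single generator stream rather than the two independently seeded streams used in the report, and the comparison follows the standard form of the method (accept the candidate when a second uniform number falls below P evaluated at the candidate). Seed and trial count are illustrative.

```python
a, c, m = 7**5, 0, 2**31 - 1
u = 10.0                       # upper limit of the desired range

def P(r):
    return 1.0 / (1.0 + r**4)  # target distribution, maximum value 1

x = 54938
accepted = []
for _ in range(20000):
    x = (a * x + c) % m
    r = u * x / m              # candidate, uniform in [0, u)
    x = (a * x + c) % m
    s = x / m                  # height, uniform in [0, 1); valid since max P = 1
    if s <= P(r):              # keep the candidate with probability P(r)
        accepted.append(r)
```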
Figure 14 shows the probability distribution 1/(1+r⁴) as approximated by the Accept/Reject method. Since
the sum of the probabilities within a probability distribution should be equal to 1, the function 1/(1+r⁴) should
be multiplied by some constant so that its integral from 0 to 10 will be equal to 1. It is difficult to integrate
1/(1+r⁴), so the curve has been adjusted by eye by a factor of 0.925 in order to better observe how well the
distribution generated by the Accept/Reject method approximates the distribution of P(r). The heights of
the bins do not quite reach the values of P(r) for every bin, but the distribution generally matches that
expected for a P(r) function of 1/(1+r⁴) multiplied by a constant. A chi-squared test can be used to see how
much the approximated distribution deviates from the ideal distribution of P(r).
Figure 15 shows the probability distribution of 1/(1+r) produced by the Accept/Reject method. The curve
shown in the figure is that of the function P(r) = (ln 51)⁻¹/(1+r). The integral of (ln 51)⁻¹/(1+r) over 0 to 50 is equal to
1. As with the previous case in which P(r) = 1/(1+r⁴), the heights of the bins do not quite match the expected
Figure 13: Distribution of numbers generated using the generator x_{n+1} = (a·x_n + c) % m. a = 7⁵, c = 0, m = 2³¹ − 1, x₀ = 54938, 20 bins, time = 10⁶ iterations.

Figure 14: Probability distribution, P(r) = 1/(1+r⁴), in range [0, 10] using Accept/Reject. a = 7⁵, c = 0, m = 2³¹ − 1, x₁ initial = 222, x₂ initial = 111, time = 2×10⁵ iterations.
value of P(r) at each bin, but the general distribution appears to match the expected distribution.
Again, a chi-squared test can be used to see how much the generated distribution deviates from the expected
distribution. If there is a large difference between the distribution produced by the Accept/Reject method
and the expected distribution, then the initial conditions set for the generation of random numbers might
not be adequate in approximating probability distributions or the Accept/Reject method itself might not be
sufficient.
The Metropolis algorithm is shown below:

x1_new = a(x1_old % (m/a)) − (m % a)(a·x1_old/m)

if x1_new < 0 :
    x1_new = x1_new + m

s1 = x1_new/m

x2_new = a(x2_old % (m/a)) − (m % a)(a·x2_old/m)

if x2_new < 0 :
    x2_new = x2_new + m

s2 = x2_new/m

r2 = r1 + d(2s1 − 1)

if r2 > 0 AND r2 < u AND s2 < P(r2)/P(r1) :
    accept r2
    r1 = r2 .
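The walk above can be sketched in Python for P(r) = 1/(1+r⁴) on [0, 10]; as in the earlier sketch, a single generator stream supplies both uniform numbers, and the step size, seed, and iteration count are illustrative.

```python
a, m = 7**5, 2**31 - 1
u, d = 10.0, 1.5               # range limit and trial step size

def P(r):
    return 1.0 / (1.0 + r**4)

x = 54938
r1 = 1.0                       # starting point of the walk
accepted = []
for _ in range(20000):
    x = (a * x) % m
    s1 = x / m                 # drives the trial step
    x = (a * x) % m
    s2 = x / m                 # drives the accept/reject decision
    r2 = r1 + d * (2.0 * s1 - 1.0)          # trial step in [r1 - d, r1 + d]
    if 0 < r2 < u and s2 < P(r2) / P(r1):   # Metropolis ratio test
        accepted.append(r2)
        r1 = r2
```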

The Metropolis method is similar to the Accept/Reject method, but with a few differences. The most
important difference is that the Metropolis method produces candidates to be accepted using the previous
accepted random number. In a way, the Metropolis method tries to stay in a general area of previously
accepted numbers when searching for random numbers that fulfill the acceptance criteria. Figure 16 shows
the approximation of a probability function P(r) = 1/(1+r⁴) multiplied by a constant. In this case, values of r2
that are negative are rejected. The result is that the bins nearest to r = 0 contain too few accepted random
numbers. A better approximation to the probability distribution can be made by accepting r2 values that
are negative, but are greater than -d. Figure 17 shows a generated distribution that accepts negative values
of r2 that are greater than -d using the same value of d as in Figure 16. Allowing r2 values to be negative
solves the issue of there not being enough values within the left-most bins, but the distribution still does
not seem to match the expected distribution. Some of the bins seem to contain too many accepted random
numbers while others seem to contain too few. The value of d is increased in Figure 18. This change
produces a distribution that more closely matches the expected distribution shape.
Changing the value of d changes the acceptance rate of positive random numbers. Increasing the value
of d decreases the acceptance rate, while decreasing the value of d increases the acceptance rate.
Figure 19 shows the generated distribution for the function P(r) = (ln 51)⁻¹/(1+r) using the Metropolis method.
For d = 0.955, there seem to be some bins that contain too many random numbers while other bins seem to
contain too few. Figures 20 and 21 show the generated distributions using different values for d. Changing
d does not seem to resolve the issue.
Qualitatively, both Accept/Reject and Metropolis seem to do a decent job of generating a distribution for
P(r) = 1/(1+r⁴), but neither does quite as well at producing a distribution for P(r) = 1/(1+r). Finding
the chi-square for the two probability distributions approximated by the Accept/Reject and Metropolis
methods can show quantitatively how well the two methods can generate the distributions using random
numbers.

4 Random Number Generator: Integration


Random numbers can be used to find approximate solutions to integrals. The following is an example of an
integral that is difficult to solve by hand and requires a numerical approach:
∫₀¹⁰ e^(−x⁴) dx .    (14)

Two methods can be used to approximate the solution to the integral shown in Equation (14). One
method is to use random numbers generated uniformly in the range [0, 10]. The other method is to use
random numbers in the range [0, 10] selected by the Accept/Reject method using the probability distribution
P(r) = 1/(1+r²).

The following is the algorithm used to approximate the integral using uniformly generated random num-
bers in the range [0, 10], with values initialized to 0:

x_f = (a·x_n + c) % m
r = a_lim + (x_f/m)(b_lim − a_lim)
f = e^(−r⁴)    (15)
values = values + f
x_n = x_f ,

where a_lim is the lower limit of integration, b_lim is the upper limit, and "values" is the sum of the function
e^(−r⁴) evaluated at each random number r. After the algorithm runs its course, the calculated value for the
integral is:
I = (b_lim − a_lim)(values/i) ,    (16)
where i is the number of random numbers used in the evaluation of the integral.
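Putting Equations (15) and (16) together gives the following sketch; the seed and the sample count are illustrative, and the LCG of Equation (12) supplies the uniform numbers.

```python
import math

a, c, m = 7**5, 0, 2**31 - 1
a_lim, b_lim = 0.0, 10.0       # limits of integration

x = 54938                      # illustrative seed
values = 0.0
n = 100000
for _ in range(n):
    x = (a * x + c) % m
    r = a_lim + (x / m) * (b_lim - a_lim)   # uniform point in [0, 10)
    values += math.exp(-r**4)               # accumulate f(r), Equation (15)

I = (b_lim - a_lim) * values / n            # Equation (16)
```

The estimate lands near 0.90, consistent with Table 2.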
The second method for approximating the integral uses a different process than the first. The following
shows how the integral can be solved by using random numbers selected through the Accept/Reject process:

Figure 15: Probability distribution, P(r) = 1/(1+r), in range [0, 50] using Accept/Reject. a = 7⁵, c = 0, m = 2³¹ − 1, x₁ initial = 222, x₂ initial = 111, time = 2×10⁵ iterations.

Figure 16: Probability distribution, P(r) = 1/(1+r⁴), in range [0, 10] using Metropolis. a = 7⁵, c = 0, m = 2³¹ − 1, x₁ initial = 222, x₂ initial = 111, d = 0.76, time = 2×10⁵ iterations.

Figure 17: Probability distribution, P(r) = 1/(1+r⁴), in range [0, 10] using Metropolis. a = 7⁵, c = 0, m = 2³¹ − 1, x₁ initial = 222, x₂ initial = 111, d = 0.76, time = 2×10⁵ iterations.

Figure 18: Probability distribution, P(r) = 1/(1+r⁴), in range [0, 10] using Metropolis. a = 7⁵, c = 0, m = 2³¹ − 1, x₁ initial = 222, x₂ initial = 111, d = 1.5, time = 2×10⁵ iterations.

Figure 19: Probability distribution, P(r) = 1/(1+r), in range [0, 10] using Metropolis. a = 7⁵, c = 0, m = 2³¹ − 1, x₁ initial = 222, x₂ initial = 111, d = 0.955, time = 3×10⁵ iterations.

Figure 20: Probability distribution, P(r) = 1/(1+r), in range [0, 10] using Metropolis. a = 7⁵, c = 0, m = 2³¹ − 1, x₁ initial = 222, x₂ initial = 111, d = 1.5, time = 3×10⁵ iterations.

Table 2: Average approximations for the integral ∫₀¹⁰ e^(−x⁴) dx and standard deviations for the method of using uniform random numbers and the method of using selective random numbers.

Method             Average Integral    Standard Deviation
Uniform random     0.909369665183      +/- 0.00611005052852
Accept/Reject      0.897014320339      +/- 0.00811643810896

I = ∫ f(x) dx

I = ∫ [f(x)/P(x)] P(x) dx

I = ∫ g(x) P(x) dx ,

⟨g⟩ = ∫ g(x)P(x) dx / ∫ P(x) dx

⟨g⟩ ∫ P(x) dx = ∫ g(x)P(x) dx ,

⟨g⟩ = (1/N_accept) Σ_{i=1}^{N_accept} g(x_i) ,

(1/N_accept) Σ_{i=1}^{N_accept} [f(x_i)/P(x_i)] ∫ P(x) dx = ∫ f(x) dx

(1/N_accept) Σ_{i=1}^{N_accept} e^(−r_i⁴) [1/(1+r_i²)]⁻¹ ∫₀¹⁰ 1/(1+x²) dx = ∫₀¹⁰ e^(−x⁴) dx

(1/N_accept) Σ_{i=1}^{N_accept} e^(−r_i⁴) [1/(1+r_i²)]⁻¹ arctan(10) = ∫₀¹⁰ e^(−x⁴) dx .    (17)
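The final line of Equation (17) can be sketched as follows; accepted r values are generated by accept/reject against P(r) = 1/(1+r²) from a single LCG stream (an illustrative simplification of the two-seed setup), and each accepted value contributes f(r)/P(r) to the average.

```python
import math

a, m = 7**5, 2**31 - 1
u = 10.0

def P(r):
    return 1.0 / (1.0 + r**2)  # importance distribution, maximum value 1

x = 54938
total, n_accept = 0.0, 0
for _ in range(200000):
    x = (a * x) % m
    r = u * x / m              # candidate in [0, 10)
    x = (a * x) % m
    if x / m <= P(r):          # accept/reject against P
        total += math.exp(-r**4) / P(r)   # weight f by 1/P
        n_accept += 1

# arctan(10) is the normalization of P over [0, 10]
I = math.atan(10.0) * total / n_accept
```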

The following shows the calculation for the standard deviations associated with the average integral found
by the two methods discussed above:

⟨I⟩ = (1/N_accepted) Σ_{i=1}^{N_accepted} I_i ,    (18)

⟨I²⟩ = (1/N_accepted) Σ_{i=1}^{N_accepted} I_i² ,    (19)

σ = [(⟨I²⟩ − ⟨I⟩²)/(N_accepted − 1)]^(1/2) .    (20)

Table 2 shows the average approximations to the integral being examined found by using the method of
uniform random numbers and the method of using random numbers selected by the Accept/Reject method.
The table also shows the standard deviations associated with each average. The actual value of the integral
found by each method could be somewhere within the ranges determined by the standard deviations. From
the averages and standard deviations, it would seem as if the two methods produce the same result for the
integral ∫₀¹⁰ e^(−x⁴) dx. The integral ∫₀¹⁰ e^(−x⁴) dx should thus be expected to be somewhere around 0.90.

Figure 21: Probability distribution, P(r) = 1/(1+r), in range [0, 10] using Metropolis. a = 7⁵, c = 0, m = 2³¹ − 1, x₁ initial = 222, x₂ initial = 111, d = 0.6, time = 3×10⁵ iterations.
