UECM1693
MATHEMATICS FOR PHYSICS II
 
Contents

1 Preliminaries
  1.1 Introduction
  1.2 Error Analysis
2 Numerical Differentiation
3 Numerical Integration
  3.1 The Trapezoidal Rule
  3.2 Simpson's Rule
4 Roots Of Equations
  4.1 Introduction
  4.2 Bracketing Methods
    4.2.1 The Bisection Method
    4.2.2 The Method of False Position
  4.3 Open Methods
    4.3.1 Fixed-Point Method
    4.3.2 Newton-Raphson Method
    4.3.3 Finite Difference Method
5 Some Topics In Linear Algebra
  5.1 Iterative Methods for Linear Systems
  5.2 A Review On Eigenvalues
  5.3 Approximation of Eigenvalues
    5.3.1 The Power Method
    5.3.2 The Power Method with Scaling
    5.3.3 The Inverse Power Method with Scaling
    5.3.4 The Shifted Inverse Power Method with Scaling
6 Optimization
  6.1 Direct Search Methods
    6.1.1 Golden Section Search Method
    6.1.2 Gradient Method
7 Numerical Methods For Ordinary Differential Equations
  7.1 Initial-Value Problems
    7.1.1 Euler's Method
    7.1.2 Heun's Method
    7.1.3 Taylor Series Methods
    7.1.4 Runge-Kutta Methods
    7.1.5 Multistep Methods
    7.1.6 Adams-Bashforth/Adams-Moulton Method
    7.1.7 Summary of Truncation Errors
  7.2 Higher-Order Equations and Boundary Value Problems
    7.2.1 The Shooting Method
    7.2.2 The Finite Difference Method
8 Numerical Methods For Partial Differential Equations
    8.4.2 Crank-Nicolson Method
Chapter 1
Preliminaries
1.1 Introduction
Numerical methods are methods for solving problems on a computer or a pocket calculator. Such methods are needed for many real-life problems that do not have analytic solutions, or whose analytic solutions are practically useless.
1.2 Error Analysis
Definition 1.2.1.
(a) The error in a computed quantity is defined as
    error = true value - approximate value.
Round-off errors are those that result from using a finite number of significant digits in computations.
Example 1. Let f(x) = (x^2 - 1/9)/(x - 1/3), so that f(x) = x + 1/3 for x != 1/3.
(i) If a 4-digit calculator is used to find f(0.3334), we will obtain
    (0.1112 - 0.1111)/(0.3334 - 0.3333) = 1,
since 0.3334^2 = 0.11115556 rounds to 0.1112, while 1/9 and 1/3 round to 0.1111 and 0.3333.
(ii) The true value is f(0.3334) = 0.3334 + 1/3 ≈ 0.6667; the large round-off error comes from subtracting nearly equal numbers.
(b) Truncation errors are those that result from using an approximation in place of an exact mathematical procedure.
Example 2. Recall that for all x,
    e^x = 1 + x + x^2/2! + x^3/3! + ....
If we use 1 + 1 + 1/2! + 1/3! + ... + 1/10! to approximate e, then the truncation error is
    1/11! + 1/12! + ....
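The size of this truncation error is easy to check numerically; a minimal Python sketch (the variable names are ours):

```python
import math

# Partial sum 1 + 1 + 1/2! + ... + 1/10! approximating e = e^1.
approx_e = sum(1 / math.factorial(k) for k in range(11))

# The truncation error is the neglected tail 1/11! + 1/12! + ...,
# i.e. the difference between e and the partial sum.
truncation_error = math.e - approx_e

print(approx_e, truncation_error)  # the tail is of order 1/11! ~ 2.5e-8
```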
Remark. (Truncation Errors)
Many numerical schemes are derived from the Taylor series
    y(x + h) = y(x) + h y'(x) + (h^2/2!) y''(x) + ....
If the truncated Taylor series used to approximate y(x + h) is the order-n Taylor polynomial
    P_n(x) = y(x) + h y'(x) + (h^2/2!) y''(x) + ... + (h^n/n!) y^(n)(x),
then the approximation is called an nth order method, since it is accurate to the terms of order h^n. The neglected remainder term is
    (h^(n+1)/(n+1)!) y^(n+1)(x) + (h^(n+2)/(n+2)!) y^(n+2)(x) + ... = O(h^(n+1)).
Chapter 2
Numerical Differentiation
We replace the derivatives of a function by difference quotients obtained from the Taylor series:
    f(x + h) = f(x) + h f'(x) + (h^2/2!) f''(x) + (h^3/3!) f'''(x) + ...
    f(x - h) = f(x) - h f'(x) + (h^2/2!) f''(x) - (h^3/3!) f'''(x) + ...
These expansions give
(i) f'(x) = (f(x + h) - f(x))/h + O(h)   (the forward difference formula for f'),
(ii) f'(x) = (f(x) - f(x - h))/h + O(h)   (the backward difference formula for f'),
(iii) f'(x) = (f(x + h) - f(x - h))/(2h) + O(h^2)   (the central difference formula for f').
Example 3. Given the following table of data:

    x    | 1.00 | 1.01 | 1.02 | 1.03
    f(x) | 5.00 | 6.01 | 7.04 | 8.09

(a) Use forward and backward difference approximations of O(h) and a central difference approximation of O(h^2) to estimate f'(1.02) using a step size h = 0.01.
(b) Calculate the percentage errors for the approximations in part (a) if the actual value is f'(1.02) = 104.
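Part (a) can be checked with a few lines of Python; the tabulated values below are copied from the table above:

```python
h = 0.01
f = {1.00: 5.00, 1.01: 6.01, 1.02: 7.04, 1.03: 8.09}  # tabulated values

forward = (f[1.03] - f[1.02]) / h         # O(h) forward difference
backward = (f[1.02] - f[1.01]) / h        # O(h) backward difference
central = (f[1.03] - f[1.01]) / (2 * h)   # O(h^2) central difference

print(round(forward, 6), round(backward, 6), round(central, 6))  # 105.0 103.0 104.0
```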
Reading Assignment 2.0.1. Given the following table of data:

    x    | 1.00 | 1.25        | 1.50     | 1.75         | 2.00
    f(x) | -8   | -9.93359375 | -11.4375 | -11.99609375 | -11.0000

(a) Use forward and backward difference approximations of O(h) and a central difference approximation of O(h^2) to estimate the first derivative of f(x) at x = 1.5 using a step size h = 0.25.
(b) Calculate the percentage errors for the approximations in part (a) if the actual value is f'(1.5) = -4.5.
Answer.
(a) For h = 0.25:
(i) Using the forward difference formula,
    f'(1.5) ≈ (f(1.5 + h) - f(1.5))/h = (f(1.75) - f(1.5))/0.25 = (-11.99609375 - (-11.4375))/0.25 = -2.234375.
(ii) Using the backward difference formula,
    f'(1.5) ≈ (f(1.5) - f(1.5 - h))/h = (f(1.5) - f(1.25))/0.25 = (-11.4375 - (-9.93359375))/0.25 = -6.015625.
(iii) Using the central difference formula,
    f'(1.5) ≈ (f(1.5 + h) - f(1.5 - h))/(2h) = (f(1.75) - f(1.25))/0.5 = (-11.99609375 - (-9.93359375))/0.5 = -4.125.
(b) With percentage error = (actual value - approximation)/(actual value) × 100%:
    forward:  (-4.5 - (-2.234375))/(-4.5) × 100% ≈ 50.35%
    backward: (-4.5 - (-6.015625))/(-4.5) × 100% ≈ -33.68%
    central:  (-4.5 - (-4.125))/(-4.5) × 100% ≈ 8.33%
The central difference approximation is markedly more accurate, as its error is O(h^2).
Chapter 3
Numerical Integration
We wish to approximate the definite integral
    I = ∫_a^b f(x) dx,
where a and b are constants and f is a function given analytically or empirically by a table of values. Geometrically, if f(x) >= 0 for all x in [a, b], then I is equal to the area under the curve of f between a and b.

3.1 The Trapezoidal Rule
Partition [a, b] into n subintervals of equal width Δx = (b - a)/n, with endpoints x_i = a + i Δx. Approximating the area over each subinterval by a trapezoid gives
    T = Σ_{i=1}^{n} (Δx/2)(f(x_{i-1}) + f(x_i)) = (Δx/2)[f(x_0) + 2f(x_1) + ... + 2f(x_{n-1}) + f(x_n)].
Remark. The absolute error incurred by the Trapezoidal approximation is
    E_T = |∫_a^b f(x) dx - T|.
This error will decrease as the step size Δx decreases, because the trapezoids fit the curve better as their number increases.
Theorem 3.1.1 (Error Bound for the Trapezoidal Rule). If f'' is continuous and |f''(x)| <= M for all x in [a, b], then
    |E_T| <= (b - a)^3 / (12 n^2) × M.
Example 4. Estimate ∫_0^1 x^2 dx using the Trapezoidal Rule with n = 2, and bound the error.
Answer. f(x) = x^2, Δx = 1/2, so
    T = (1/4)[f(0) + 2f(1/2) + f(1)] = (1/4)[0 + 1/2 + 1] = 3/8.
Since |f''(x)| = 2 for all x, we may take M = 2, and
    |E_T| <= 1/(12 × 2^2) × 2 = 1/24,
consistent with the actual error |1/3 - 3/8| = 1/24.
Example 5. How many subdivisions should be used in the Trapezoidal Rule to approximate ∫_1^2 (1/x) dx with an error of at most 10^-4?
Answer. Since |f''(x)| = |2/x^3| <= 2 for x in [1, 2], we have from Theorem 3.1.1
    |E_T| <= (2 - 1)^3/(12 n^2) × 2 = 1/(6 n^2).
We choose n so that 1/(6 n^2) < 10^-4:
    n^2 > 10^4/6 ⟹ n > 40.82.
In particular, n = 41 subdivisions will guarantee the required accuracy.
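The conclusion of Example 5 can be verified directly; a sketch of the composite Trapezoidal Rule (the helper `trapezoid` is ours, not from the text):

```python
import math

def trapezoid(f, a, b, n):
    """Composite Trapezoidal Rule with n subintervals."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return h * s

# n = 41 guarantees an error below 1/(6*41^2) < 1e-4 for f(x) = 1/x on [1, 2].
T = trapezoid(lambda x: 1 / x, 1, 2, 41)
print(T - math.log(2))  # the error: positive (f is convex) and below 1e-4
```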
3.2 Simpson's Rule
Simpson's Rule approximates the curve over each pair of subintervals by a parabola. First note that
    ∫_{-h}^{h} (Ax^2 + Bx + C) dx = (h/3)(2Ah^2 + 6C) = A_P.
To write A_P in terms of y_0, y_1, y_2: since the parabola passes through (-h, y_0), (0, y_1), and (h, y_2), we get
    y_0 = Ah^2 - Bh + C    (3.1)
    y_1 = C                (3.2)
    y_2 = Ah^2 + Bh + C    (3.3)
Solving these simultaneous equations, we obtain C = y_1, 2Ah^2 = y_0 + y_2 - 2y_1, and A_P = (h/3)(y_0 + 4y_1 + y_2).
An approximation for the area under the curve y = f(x) from x = a to x = b is therefore
    S = (h/3)[(f(x_0) + 4f(x_1) + f(x_2)) + (f(x_2) + 4f(x_3) + f(x_4)) + ... + (f(x_{n-2}) + 4f(x_{n-1}) + f(x_n))],
that is,
    ∫_a^b f(x) dx ≈ (h/3)[f(x_0) + 4f(x_1) + 2f(x_2) + 4f(x_3) + ... + 2f(x_{n-2}) + 4f(x_{n-1}) + f(x_n)],
where h = (b - a)/n and n is even.
Theorem 3.2.1 (Error Bound for Simpson's Rule). If f^(4) is continuous and |f^(4)(x)| <= M for all x in [a, b], then the Simpson's rule error satisfies
    |E_S| <= (b - a)^5 / (180 n^4) × M.
Example 6. Approximate ∫_0^1 e^(x^2) dx using Simpson's Rule with n = 10, and bound the error.
Answer. f(x) = e^(x^2), h = 1/10, x_i = ih = i/10.
(i) S = (0.1/3)[f(0) + 4f(0.1) + 2f(0.2) + ... + 2f(0.8) + 4f(0.9) + f(1)]
      = (0.1/3)[e^0 + 4e^0.01 + 2e^0.04 + 4e^0.09 + 2e^0.16 + 4e^0.25 + 2e^0.36 + 4e^0.49 + 2e^0.64 + 4e^0.81 + e^1]
      ≈ 1.462681.
(ii) For 0 <= x <= 1,
    f^(4)(x) = (16x^4 + 48x^2 + 12) e^(x^2) <= 76e,
so
    |E_S| <= (1 - 0)^5 / (180 × 10^4) × 76e ≈ 1.15 × 10^-4.
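Example 6 can be reproduced with a short script; `simpson` below is our own helper implementing the composite rule above:

```python
import math

def simpson(f, a, b, n):
    """Composite Simpson's Rule; n must be even."""
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    s = f(a) + f(b)
    s += sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return h / 3 * s

# Approximate the integral of exp(x^2) over [0, 1] with n = 10.
S = simpson(lambda x: math.exp(x * x), 0.0, 1.0, 10)
print(round(S, 6))  # 1.462681
```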
Exercise. Estimate ∫_3^5 y dx using Simpson's Rule and the following table of values:

    x | 3.0 | 3.25 | 3.5 | 3.75 | 4.0  | 4.25 | 4.5  | 4.75 | 5.0
    y | 6.7 | 7.4  | 8.2 | 9.2  | 10.4 | 11.6 | 12.5 | 13.3 | 14.0

Answer. 20.7.
Chapter 4
Roots Of Equations
4.1 Introduction
Definition 4.1.1. Any number r for which f(r) = 0 is called a solution or a root of that equation, or a zero of f.
Example 8. The root of the linear equation ax + b = 0 is x = -b/a, a != 0.
Example 9. The roots of the quadratic equation ax^2 + bx + c = 0, a != 0, are
    x = (-b ± sqrt(b^2 - 4ac)) / (2a).
4.2 Bracketing Methods
These methods exploit the Intermediate Value Theorem: if f is a continuous function on [a, b] that has values of opposite signs at a and b, then f has at least one root in the interval (a, b).
They are called bracketing methods because two initial guesses "bracketing" the root are required to start the procedure. The solution is found by systematically reducing the width of the bracket.
Two examples of bracketing methods are:
(a) The Bisection Method
(b) The Method of False Position
Example 11. Show that the equation x^3 - 4x = 0 has at least one root in the interval (1, 3).
Answer. f(x) = x^3 - 4x is continuous, f(1) = -3 < 0 and f(3) = 15 > 0, so by the Intermediate Value Theorem f has a root in (1, 3).

4.2.1 The Bisection Method
The method calls for a repeated halving of subintervals of [a, b] and, at each step, picking the half where f changes sign.
The basic algorithm is as follows:
Step 1. Choose lower x_l and upper x_u guesses for the root such that f(x_l) f(x_u) < 0.
Step 2. An estimate of the root is determined by
    x_r = (x_l + x_u)/2.
Step 3.
(a) If f(x_l) f(x_r) < 0, the root lies in (x_l, x_r); set x_u = x_r and return to Step 2.
(b) If f(x_l) f(x_r) > 0, the root lies in (x_r, x_u); set x_l = x_r and return to Step 2.
(c) If f(x_l) f(x_r) = 0, then the root equals x_r. Stop.
The iterations are stopped when the estimated percentage error
    ε_a = |(x_r^new - x_r^old)/x_r^new| × 100%
falls below a prescribed stopping criterion ε_s.
Example 12. Use bisection to find the root of f(x) = x^10 - 1. Employ initial guesses of x_l = 0 and x_u = 1.3 and iterate until the estimated percentage error ε_a falls below a stopping criterion of ε_s = 8%.
Answer.
(a) f(0) f(1.3) < 0, so the root lies in (0, 1.3): x_r = (0 + 1.3)/2 = 0.65.
(b) f(0) f(0.65) > 0, so the root lies in (0.65, 1.3): x_l = 0.65, x_u = 1.3. Hence
    x_r = (0.65 + 1.3)/2 = 0.975, ε_a = |(0.975 - 0.65)/0.975| × 100% ≈ 33.33%.
(c) f(0.65) f(0.975) > 0, so the root lies in (0.975, 1.3): x_l = 0.975, x_u = 1.3. Hence
    x_r = (0.975 + 1.3)/2 = 1.1375, ε_a = |(1.1375 - 0.975)/1.1375| × 100% ≈ 14.29%.
(d) f(0.975) f(1.1375) < 0, so the root lies in (0.975, 1.1375): x_l = 0.975, x_u = 1.1375. Hence
    x_r = (0.975 + 1.1375)/2 = 1.05625, ε_a = |(1.05625 - 1.1375)/1.05625| × 100% ≈ 7.69%.
After 4 iterations, the estimated percentage error is reduced to less than 8%.
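The iterations of Example 12 can be automated; a minimal sketch of the bisection algorithm with the percentage-error stopping rule:

```python
def bisect(f, xl, xu, es=8.0, max_iter=50):
    """Bisection with a stopping criterion on the estimated percentage error es."""
    assert f(xl) * f(xu) < 0, "initial guesses must bracket a root"
    xr_old = None
    for _ in range(max_iter):
        xr = (xl + xu) / 2
        if xr_old is not None:
            ea = abs((xr - xr_old) / xr) * 100
            if ea < es:
                return xr
        xr_old = xr
        test = f(xl) * f(xr)
        if test < 0:
            xu = xr          # root in (xl, xr)
        elif test > 0:
            xl = xr          # root in (xr, xu)
        else:
            return xr        # exact root found
    return xr

root = bisect(lambda x: x**10 - 1, 0.0, 1.3)
print(root)  # 1.05625, as in Example 12
```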
Remark.
(a) Advantages
(i) Once a bracketing interval is found, the method is guaranteed to converge.
(ii) The error bound is guaranteed to decrease by one half with each iteration.
(b) Disadvantages
(i) It generally converges more slowly than most other methods.
(ii) It requires two initial estimates at which f has opposite signs.
(iii) If f is not continuous on the bracket, the method may converge to a point that is not a root; one should check the value of f(x) at the final estimate.
(iv) If f does not change sign over any interval, then the method will not work.
4.2.2 The Method of False Position
The idea is that "if f(c) is closer to zero than f(d), then c is closer to the root than d" (which is not true in general). Joining the points (x_l, f(x_l)) and (x_u, f(x_u)) by a straight line and taking its intersection with the x-axis gives the estimate
    x_r = x_u - f(x_u)(x_l - x_u)/(f(x_l) - f(x_u)).
By replacing this formula in Step 2 of the Bisection method, we obtain the algorithm for the method of false position:
Step 1. Choose lower x_l and upper x_u guesses for the root such that f(x_l) f(x_u) < 0.
Step 2. An estimate of the root is determined by
    x_r = x_u - f(x_u)(x_l - x_u)/(f(x_l) - f(x_u)).
Step 3.
(a) If f(x_l) f(x_r) < 0, the root lies in (x_l, x_r); set x_u = x_r and return to Step 2.
(b) If f(x_l) f(x_r) > 0, the root lies in (x_r, x_u); set x_l = x_r and return to Step 2.
(c) If f(x_l) f(x_r) = 0, then the root equals x_r. Stop.
Example 13. Use the Method of False Position to find the zero of f(x) = x - e^(-x). Use initial guesses of 0 and 1.
Answer.
(i) First iteration:
    x_l = 0, f(x_l) = -1
    x_u = 1, f(x_u) = 0.63212
    x_r = 1 - 0.63212(0 - 1)/(-1 - 0.63212) = 0.61270, f(x_r) = 0.07081.
Since f(x_l) f(x_r) < 0, the root lies in (0, 0.61270); set x_u = 0.61270.
(ii) Second iteration:
    x_l = 0, f(x_l) = -1
    x_u = 0.61270, f(x_u) = 0.07081
    x_r = 0.61270 - 0.07081(0 - 0.61270)/(-1 - 0.07081) = 0.57218, f(x_r) = 0.00789.
Continuing in the same way gives the following table (f(x_l) = -1.00000 throughout):

    r | x_l | x_u     | x_r     | f(x_l)   | f(x_r)
    1 | 0   | 1.00000 | 0.61270 | -1.00000 | 0.07081
    2 | 0   | 0.61270 | 0.57218 | -1.00000 | 0.00789
    3 | 0   | 0.57218 | 0.56770 | -1.00000 | 0.00087
    4 | 0   | 0.56770 | 0.56721 | -1.00000 | 0.00010
    5 | 0   | 0.56721 | 0.56715 | -1.00000 | 0.00001
    6 | 0   | 0.56715 | 0.56714 | -1.00000 | 0.00000

The approximate root is 0.56714.
Reading Assignment 4.2.2. Use the Method of FalsePosition to find the zero of f (x) = x  e  ~.
Use initial guesses of 0 and 1. Iterate until two successive approximations differ by less than 0.01.
Answer . Let ea = Ix~ew _ X~ld I.
(i) First iteration,
Xl ~ 0, f(X I) ~  1
x. ~ 1, f (x.) ~ 0.63212
X,
~ 1
 1~
 1  0.07081
e. ~ 10.57218  0.612701 ~ 0.04052
(iii ) T hird iteration, Xl ~ 0, f( XI) ~  1
x. ~ 0.57218, f(x.) ~ 0.0.07081
Xl  X.
0  0.57218
X, ~ X.  f(xl ) _ f(x./(x,) X, ~ 0.57218   1 _ 000789 (0.00789) ~ 0.56770
10.56770  0.572181 ~ 0.00448 < om (tolerance satisfied).
Hence, the approximate root is 0.56770.
e.
14
4.3
Open Methods
In contrast to the bracketing methods, the open methods are based on formulas that require a single
starting value or two starting values that do not necessarily bracket the root. Hence, t hey sometimes
diverge from the true root . However , when they converge, they tend to converge much faster than the
bracketing methods.
Examples of open methods:
(a) Fixed Point Method
(b) NewtonIlaphson Method
(c) Secant Method
4.3.1
FixedPoint Method
To solve f(x)
= 0, rearrange
f(x)
n = 0, 1,2, ...
Xn+l = g(x n ),
Remark. This method is also called the successive substitution method , or onepoint iteration.
Example 14. The function f( x) = x 2  3x + eX  2 is known to have two roots , one negative and one
positive. Find the smaller root by using t he fixedpoint method .
x2+ex _ 2
Xn+l
= g(xn) =
+ e n3
, n = 0, 1, 2, ...
x,
 0.5(initial guess)
 0.381156446
 0.390549582
0.390262048
 0.390272019
 0.390271674
 0.390271686
 0.390271686
1
2
3
4
5
6
7
Note 1:
(a) There are many ways to change the equation f( x)
0 to the form x
convergence of the corresponding iterative sequences {Xn } may differ accordingly. For instance,
if we use the arrangement x = J3x e'Z + 2 = g(x) in the above example, the sequence might
not converge at all .
(b) It is simple to implement but in this case slow to converge.
(c) Even in the case where convergence is possible, divergence can occur if the initial guess is not
sufficiently close to the root.
Xn+ 1 
n
0 0.5
0
1 0.4791667 1
2 0.4816638 2
Xn
Xn
1.5
0
 0.0625
1
0.5000407 2
IO =
3  xn3
6
Xn
Xo =
5.5.
Xn
2.5
0 5.5
2. 1041667 1 27.2291667
2.0527057
2 3365.242242
0.4791616
6351813600
0.4271122072 x 10"
0.4813757 3
0.4814091
0.4814052
0.4814057
0.4814056
0.4814056 8
0.4814057
0.4814056
10 0.4814056
G
E
S
!!
13 0.4814013
14 0.4814055
16 0.4814056
17 0.4814056
Note that when the initial guess Xo is close enough to the fixed point
but if it is too far away from 1', the method will diverge.
T,
in (a, b) .
the fixed point is unique and the method converges for any choice of initial point Xo in (a , b).
16
4.3.2
NewtonRaphson Method
This is a method used to approximate a root of an equation f(x) = 0 assuming that f has a continuous
derivatives /'. It consists of the following steps:
Step 1. Guess a first approximation Xo to the root.
(A graph may be helpfuL)
Step 2. Use the first approximation to get the second, t he second to get the third, and so on, using
the formula
f(x n )
xn+J = Xn  f'(Xn) , n = 0, 1,2,3, ...
where
Xn
Xn+l  Xn
Xn+l
I x 100% <
i,
Note 2: The underlying idea is that we approximate the graph of f by suitable tangents.
If you are writing a program for this method, don't forget also to include an upper limit to the number
of iterations in the procedure.
Example 16. Use NewtonRaphson Method to approximate the root of f(x) = x  e = 0 that lies
between 0 and 2. Continue the iterations until two successive approximations differ less than 10 8 .
IS
A nswer. Th e
Iteration
gIVen by Xn +l =
Xn 
e%n+l
= 0.4 x
10' < 10 ', we stop the process and take x, '" 0.56714329 as
17
+ x'  x + 1.
(a) First, use the intermediate Value Theorem to local the root.
x, =
X3
2~~!:t:oil
Xo
f(xn}
Xn  f'(x )
2x~
=
+ x!  Xn + 1
6'
xn + 2Xn  1
Xn 
= [(2) + (1)]/2 = 
1.5, we obtain
1.236967446, lx,
= 
 xd > 0.0001
1.233763552, IX3  xd > 0.0001
x, = 1.23375 1929
Since
IX4 
X4 =
Answer. 2.165737
Advantages
18
 1.233751929.
4.3.3
Y"
Suppose a = Xo <
y,
= y(x,) , Po =
+ P (x)y' + Q(x)y =
< ... < Xn  ] < Xn = b with Xi  Xi_I = h for all i = 1, 2, ... , n . Let
P(x,) , Q, = Q(x,) and J; = f(x ,). Then by replacing y' and y" with their central
Xl
Yi+1  2Yi
h2
+ Yi l
+ Pi
Yi+l  Yi  l
2h
+ QiYi = h, t =
1, 2, ... , n  1.
The last equation, known as a finite difference equation , is an approximation to the differential
equation. It enables us to approximate the solution at X I , . . ,Xn_ l ..
y"  4y
= 0,
y(O)
= 0,
y( l ) = 5.
That is,
Y2  2.25YI + Yo Y3  2.25Y2 + YI
y,  2.25Y3+Y2 
0
0
0
+ YI
0
0
5
= 2.9479.
Notes : We can improve the accuracy by using smaller h. But for that we have to pay a price , i. e.
we have to solve a larger system of equations .
19
Chapter 5
Some Topics In Linear Alge bra
5.1
The iterative methods start with an initial approximation to a solut ion and then generate a succession of better and better approximations that (may) tend toward an exact solution. We shall study
the following two iterative methods:
(8.) Jacobi iteration: The order in which t he equations are examined is irrelevant, since the Jacohi
method treats them independently. For this reason, the Jacohi method is also known as the
me thod of simultaneous corrections, since the updates could in principle be done simultaneously.
(b) GaussSeidel iteration: this is a method of successive corrections. It is very similar to the
Jacohi technique except it replaces approximations by corresponding new ones as soon as the
latter are available.
Definition 5.1.1. An n x n matrix A = [Oij] is strictly diagonally dominant if
n
..
V k ~ 1. 2.. . n .
That is, A is strictly diagonally dominant if the absolute value of each diagonal entry is greater than
the sum of the absolute values of the remaining entries in the same row .
20
[~
3
74]
1
Answer . Since in t he first row, lalll = 2 :f la121 + la131 = 7 + 4 = 11 and in t he second row ,
la,,1 = 11 la,,1 + lad = 8 + 6 = 14, matrix A is not strictly diagonally dominant.
[ ~ ~7:] is strictly
3
= 7,
= 6,
= 8.
Theorem 5.1.1 ( Convergence of the Iterative Methods). If the square matrix A is strictly
diagonally dominant, then the GaussSeidel and Jacobi approximations to the solution of the linear
system Ax = b both converge to the exact solution for all choice'> of the initial approximation.
lE
lOx2
+ X3
+ X2  X3
X l + X2 + 10x3
20Xl
13
17
18
< 0.0002, for all i. Prepare all the computations in 5 decimal places.
Answer .
(i) To ensure the convergence of this method , we rearrange the equations to obtain a strictly diagonally dominant system :
+ X2  X3
Xl  lOx2 + X3
Xl + X2 + lOX3
20XI
21
17
13
18
Xl
1
20(17 x,+x3)
x,
1
10 (13 + Xl +X3)
X3
1
10 (18 + XI  x,)
(*)
O' 3
xlol= O.
e.g. Xlol_
I  O,xlol=
2
(iv) Substitute this initial approximation into the RHS of (*) , and calcula.te the new approximation
where
xf]
xrl + x!rl)
XI(P+ I]
1
20(17 
X2(P+l]
~(13 +x~1
+ x~l)
10
I
3
X3{P+I]
11 (18 + xfl 0
x~l)
Xi
Xliii _
X,III
x3[11 
x~l) =
0.850
1.8
(v) To improve the approximation, we can repeat the substitution process. T he next approximation
is
2~(17 
1
10 ( 13 + 0.85 + 1.8) =  1.035
1
10(18 + 0.85  ( 1.3)) = 2.015
(vi) As Ix,I'1  x,lslj < 0.0002 for all i, we stop the computation.
Th e resu1ts obtained are summarized in the followinp; table:
m 0
1
2
4
5
6
3
XI,ffl, 0 0.850 1.005 1.0025
1.0001 0.99997 1.00000
X"ffl, 0
1.3 1.035 0.9980 0.99935 0.99999 1.00000
X3,ffl, 0
2.004 2.00005 1.99995 2.00000
1.8 2.015
22
20XI
+ X2 
Xl
Xj: IP] I
+ X3
Xl 
13
X3
17
+ X2 + lOx3
18
< 0.0002. for all i. Prepare all the computations in 5 decimal places.
Answer.
(i) Make sure the matrix is diagonally dominant (rearrange it if necessary).
Rearranging the equations leads to
20XI +X2  X3
17
Xl 
lOX2 +X3
+ X2 + lOx 3
13
18
 Xl
20{17 
XI
X,
X,
1O {13 +
XI
X3)
(.)
X3)
10 (18 +
X3
XI 
X3)
0, x~oJ = O.
(iv) Substitute this initial approximation into the RHS of (*" and calculate the new approximation
210{17  Xrl+X~I)

IP II
~(10 13 + X I + + xIPI)
3
:0
xr'
where
is the pth iteration of the approximation to
That is, the new approximation is
1
20 (17 
XliII
x,iIl
x3 iII 
Xi .
x~1 + xil) =
0.850
(v) To improve the approximation, we can repeat the substitution process. The next approximation
IS
XI!'!
2~(17 
x,!'! 
110 ( 13 + 1.0111
X3!'!
( 1.215)
+ 2.0065) =
1.01108
+ 2.0065) =
 0.99824
=
2.00093
' d 'mt
h ieoowm
ll
table;
we summarize t he resu ts 0 b
tame
m
XI,m,
x,lmJ
X3 JmJ
0
0
0
0
2
4
I
3
0.850 1.01108 0.99996 1.00000
1.215 0.99824 0.99991 1.00000
2.0065 2.00093 1.99999 2.00000
Note 3:
(a) The GaussSeidel technique is not appropriate for use on vector computer, as the sct of equations
must be solved in series.
(b) The GaussSeidel technique requires less storage than the Jacobi technique and leads to a convergent solution almost twice as fast .
24
5.2
A Review On Eigenvalues
Definition 5.2.1 . Let A be a.n n x n matrix. A scalar ..\ is called an eigenvalue of A if there exists
a nonzero vector x E !Rn such that
Ax ~ AX.
In this case, the nonzero vector x is called an e igenvector of A corresponding to A.
[!]
eigenvalue.
Answer.
AVt =
[~ ~] [~]
= ... = 5 Vl
=>
[!]
A ~ 5.
Note 4:
IA  MI ~ O
for A.
(b) To find the eigenvectors corresponding to A, we solve the linear system
(A  M )x
for nonzero x.
Answer.
[~
;].
Definition 5 .2.2. A square matrix A is called diagonalizable if there is an invertible matrix P such
that p  l AP is a diagonal matrix; the matrix P is said to diagonalize A.
Exercise 5 . Let A
Answer. p  I AP
[~ ~l]
and P
[~
~ [~I ~] [~ ~I] [~
!].
;J ~ [~I
Theorem 5.2.1.
Let A be an n x n matrix.
(a) A is diagonalizable if and only if it has n linearly independent eigenvectors.
(b) These n linearly independent eigenvectors form a basis of !Rn.
25
5.3
Approximation of Eigenvalues
Example 22.
(a) If 4 x 4 matrix A has eigenvalues
for all i
11.
A,
= 2,
A,
= 5,
A3
= 5
5.3.1
Theorem 5.3.1 (The Power Method (or Direct Iteration Method)) . Let A be a diagonalizable
n x n matrix with eigenvalues
Assume that VI" " , Vn are unit eigenvectors of A associated with Al ..\2 ... An respectively. Let Xo
be a nonzero vector in !Rn that is an initial guess for the dominant eigenvector v._ Then the vector
Xk = Ak:xo is a good approximation to a dominant vector of A VI when the exponent k is sufficiently
large,
Note 5:
A2
An
AI "'" AI'
The smaller the ratios, the faster the rate of convergence.
26
[~ =~l . Use the power method to approximate the dominant eigenvalue and
[~].
Prepare 6 iterations.
Answer.
We compute
XI
= AXo =
X, =
AXI
X,=
Ax, =
X m +1
1
= 10 [0 9]
"' 22
[09~45] ,
X, = A", =
[09~83] ,
[~~] "' 94 [09~94] ,
Xo=Ax, =
So
Xl)
[~~~]
(3823811. [190]
189
Remark.
(a) From the above calculations, it is clear that the vectors Km are getting closer and closer to scalar
multiples of
[n
is less than the prespecified error criterion l. Unfortunately, the actual value A is usually unknown.
So instead, we will stop the computation at the ith step if the estimated relative error
A(i)A(i 1)\
< ,.
A(i)
The val ue obtained by multiplying estimated relative error by 100% is called the estimated
percentage error.
27
5.3.2
Theorem 5.3. 2. The power method often generates a sequence of vectors {A'~Xo} that have inconveniently large entries. This can be avoided by scaling the iterative vector at each step. That is, we
[~,] by
multiply AXo =
Xn
{I 111 1
max Xl
AXI
X2
1 1
Xn
X 2,
and so on.
1
2. Set
Xk
max{ly.!} Yk
AXk'Xk
Example 24. Repeat the iterations of Example 23 using the power method with scaling.
Answer.
(i) y,
= AXo =
[1]
[0~5] '
1 1
[1]
A"". ""
Al '"
\
[2.0106 2.00531'
[
X . ""
[0.9~947]
1
[10.99471' 0.99947
28
[i].
5.3.3
>:I will be the dominant eigenvalue of A I, We could apply the power method on A I to find
n
The algorithm for the Inverse Power Method with Scaling as follows:
1. Compute B = A I
. 2. Compute Yk = BXk _ 1
1
3. Set ~
(I Ij Yk
x,
max Yk
.
al ue 0 f matnx
' AI
4. D omlllaut
clgenv
 = Bx,x, = J1.
5. Smallest eigenvalue of A
xk' Xk
~ .!.
jJ.
>"1
and
).2
be the eigenvalues of A =
[~ ~]
such that
Al
>
.A2'
(a) Use an appropriate power method with scaling to approximate an eigenvector corresponding to
"\2. Start with t he initial approximation Xo =
Answer.
(a) We use the inverse power method with scaling to find A2 , t he smallest eigenvalue of A.
B
A I
[ 0.1 0.3]
0.4  0.2
Iteration I YI
. 2:
I teratIon
0.3771]
= B Xl = [0.5083
Y2
,X2
/ 05083
Y2
[ 0.7419]
1
is an approximation to the required
eigenvector.
X2' X2
BX2' X2
XI' XI
BXI"xj
~  1.9952
A,(2)  A,( I ) I
A,(2)
x 100% ~ 0.3397%
29
=  2.0020
5.3.4
Al l '"
This method can be used to find any eigenvalue and eigcnvector of A. Let a be any number and let Ak
be the eigenvalue of A closest to a. The inverse power iteration with A  aI will converge to IAk  al  1
and a multiple of Vk .
The algorithm for the Shifted Inverse Power Method with Scaling as follows:
1. Compute C = (A  aI)1
2. Compute Yk = CXk _ l
1
3. Set Xk = max{IYkllYk
. .
 1
eXit; Xk
= {J
X It; XIt;
.
1
5. Elgenvalue closest to a =
p+ a
Example 26. Apply the shifted inverse power method with scaling (2 steps) to A =
the eigenvalue nearest to a = 6. Start with the initial guess Xo =
Answer. A  al = A  61 =
C = (A  aI) 1 =
Iteration L
YI
Cx. =
m,
XI
[~
[~ =~]
~YI =
[~ =~]
to fi nd
[~] .
=;].
[04;86] '
Iteration 2:
6.1428]
1
[1] .
. . .
d
112 = C X l = [2.5714' X2 = 6.1428Y2 = 0.4186 IS an apprOXimatIOn to an elgenvector correspon ing to the required eigenvalue,say, A.
Let {3 be the dominant eigenvalue of C.
ex, x,
7.2433
.
[6.1628]
T hen {3 '" x,. x, = 1.1752 = 6.1634 smce Cx, = 2.5814 .
1
1
Hence, A = a + fj = 6 + 6.1634 = 6.1622.
30
Remark.
(a) The Power Method converges rather slowly. Shifting can improve the rate of convergence.
(b) If we have some knowledge of what the eigenvalues of A are, then t his method can be used. to find
any eigenvalue and eigenvector of A.
(c) We can estimate the eigenvalues of A by using the Gerschgorin's Theorem.
lA  a,,1 $ T,
That is, the eigenvalues of A lie in the union of the n discs with radius Ti centered at llii.
Furthermore, if a union of k of these n discs form a connected. region that is disjoint from all the
remaining n  k discs, then there are precisely k eigenvalues of A in this region.
Example 27. Draw the Gerschgorin discs corresponding to
A=
[!
~8 =~]
4  1 8
31
Chapter 6
Optimization
6.1
Direct search methods apply primarily to strictly unimodal Ivariable functions. The idea of these
methods is to identify the interval of uncertainty that contains the optimal solution point. The
procedure locates the optimum by iteratively narrowing the interval of ullcertainty to any desired level
of accuracy. We will discuss only t he golden section method. T his method is used to find the maximum
value of a unimodal fun ction f(x) over a given interval [a, b] .
Definition 6 .1.1. A fun ction f( x) is unimodal on [a, bj if it has exactly one maximum (or minimum )
on [a, b] .
6.1.1
vis 
= Xn
r(xR  xd ,
where r =
2 ' the golden ratio. (Clearly,
h is determined in the following way:
XL
X2
<
= XL + r(xn 
Xl
<
X2
hl 
XL)
(i) If f(xl) > f(x,), then XI. < x' < x,.'Set XR = X, and I, = [X L,X,].
(ii) If f(xd < f(x,) , then Xl < X XR Set XL = Xl and h = [XI. XR].
(iii) If f(xd = f(x,), then X l < x' < X,. Set XL = XI,XR = X" and I, = [Xl ,X,].
Remark. (a) The choice of XI and X2 ensures that h e l k _ I '
(b) Let L, = 11 /, 11, the length of I ,. Then the algorithm terminates at iteration k if L, < " the user
specified level of accuracy.
(c) It can be seen that Lk = rL k_1 and Lk = rk(b  a), Thus, the algorithm will terminate at k
iterations where Lk = rk(b  a) < E.
32
Example 28. Find the maximum value of f(x) = 3 + 6x  4x' on the interval [0, 1[ using the golden
section search method with the final interval of uncertainty having a length less than 0.25.
Answer. Solving
r'(b  a) < 0.25
for the number k of iterations that must be performed , we obtain
> 2.88.
=> XI = XR 
0, Xn
1
XL =
vis2
(XR 
xd =
0.381966,
vis  1
X, = XL +
2 (XR  xd = 0.618034
f(xl) = 4.708204 < f (x,) = 5. 180340 => take XL
Iteration 2 :
=> XI = XR 
XL
= 0.381966, XR = 1
vis 2
(Xll 
xd =
0.618034,
vis  1
X, = XL +
2 (XR  xd = 0.763932
f(xl) = 5.180340 < f(x,) = 5.249224 => take XL
Iteration 3 :
=>
Xl
x, =
XR 
XL
1
(XR 
xd =
=1.
0.763932,
(XR  xd = 0.854102
f(xd = 5.249224 > f(x,) = 5.206651 => take XL remains the same with the same
Le., XL = 0.618034, XR = 0.854102
XL
i.e., XR
= 0.618034, XR = 1
vis 
vis 
33
XR
= X, = 0.854102
6.1.2
Gradient Method
 'Il I(x),
is the direction of maximum decrease (or the direction of steepest descent)
Theorem 6.1.1 (Method of Steepest Ascent / Gradient Method) . An algorithm for finding
the nearest local maximum of a twice continuous differentiable function !(x",) , which presupposes that
the gradient of the function can be computed . The method of steepest ascent, also called the gradient
method, starts at a point Xo and , as many times as needed , moves from X l; to X k+ 1 by maximizing
along the line extending from Xl; in the direction of 'iJ !(XIc) , the local uphill gradient. That is, we
determine the value of t and the corresponding point
X k+}
Xk.
Remark : This method has the severe drawback of requiring a great many iterations (hence a slow
convergence) for functions which have long, narrow valley structures.
Example 29. Use the method of steepest ascent to determine a maximum of f(x) = _ x2 _
= Xo
+ t'll I(Xo)
g(t)
Solving 9'(t)
= 0, we obtain 4(1 
2t ) + 4(1  2t)
2t , 1  2t)
=
(1  2t)'  (1  2t)'.
= 0 => t = 1/ 2.
1
Hence XI = (1, 1) + 2(2,  2) = (0,0)
Now 'Il l (xl ) = 'Il 1 (0,0) = (0, 0) , and we terminate the algorithm.
XI
y2
starting
Reading Assignment 6.1.2. Starting at the point (0, 0) , use one iteration of the steepestdescent
algorithm to approximate the minimum value of the function
(0,0),
Xl
= (0.382,  0.255)
and f(XI)
35
Chapter 7
Numerical Methods For Ordinary
Differential Equations
7.1
7.1.1
y(xo) = Yo
EuleT method with step size h consists in using the iterative formula
Yn+ ! = Yn
+ hf(xno Yn)
Xn
+h =
IO
+ (n + l)h,
n = 0, 1, 2, ....
36
y' = x + y, y(O) = 1.
(a) Use Euler's method to obtain a fivedecimal approximation to y(O.5) using the step size h = 0.1.
(b)
(i) Estimate the truncation error in your approximation to y(O.I) using the next two terms in
the corresponding Taylor series.
(ii) The exact value for y(O.I) is 1.11034 (to 5 decimal places) . Calculate the error between
the actual value y(O.I) and your approximation Yt. How does this error compare with the
truncation error you obtained in (i)?
(c) The exact value for y(0.5) is 1.79744 (to 5 decimal places) . Calculate the absolute error between
the actual value y(O.5) and your approximation Ys.
Answer. Here f (x , y)
x + y, Xo
Yn+l
(a)
Yn
+ O.l(xn + Yn)
O.lxn
+ 1.1Yn.
(c) 0.07642
37
Example 31. Given that the exact solution of the initialvalue problem
y' ~ x
is y(x)
+ y, y(O) ~
(a) the approximate values obtained by using Euler's method with step size h = 0.1
(b) the approximate values with h = 0.05 , and the actual values of the solution at the points
x = 0.1, 0.2, 0.3, 0.4, 0.5 :
Yn
Yn
Xn with h ~ 0.1 with h ~ 0.05
0.0
1.00000
1.00000
0.1
1.10000
1.10500
1.23101
0.2
1.22000
0.3
1.36200
1.38019
1.52820
1.55491
0.4
0.5
1.72102
1.75779
exact
value
1.00000
1.11034
1.24281
1.39972
1.58365
1.79744
absolute error
with h ~ 0.1
0.00000
0.01034
0.02281
0.03772
0.05545
0.07642
absolute error
with h ~ 0.05
0.00000
000534
0.01180
0.01953
0.02874
0.03965
7.1.2
(1 )
Note 7:
(a) This is an example of a predictorcorrector methodit uses (1) to predict a value of Y( Xn +l) and
then uses (2 ) to correct this value.
(b) The local TE for Heun 's method is 0(h 3 ).
(c) The global TE for Heun's method is 0(h2).
38
Example 32. Use Improved Euler's method with step size h = 0.1 to approximate the solution of the
IVP
y' =X+ y, y(O) = I
on the interval [0, 1]. The exact solution is y(x) = 2e%  x  1. Make a table showing the approximate
values, the actual values together with the absolute errors.
(c)
y,
Remark.
(a) Considerable improvement in accuracy over Euler's method.
(b) For the case h = 0.1, the TE ill approximation to y(O.I ) is '" ":y'~' = (O~)'(2) = 0.00033, very
close to the actual error 0.00034.
39
7.1.3
_
Yn+l  Un
+ hYn +
h2" h3 (3)
2! Un + 31 Un
...
hP
(P)
+ pI Yn
Example 33.
Consider the IVP
y' = 2xy , y(l) = I.
Use Taylor series method of order 2 to approximate y( 1.2) using the step size It
2xy, Xo
0.1.
1, and Yo = 1.
=
2
' Yn+ l
The met hd
0 IS
Xn
= Yn
h "=
+ hi
Yn + 2fYn
Yn
+ 01'
. Yn + 000'"
. vUn)
h
n= 0
) 1
, ,2
'" '
WIt
Xn+l = Xn
+h
+ 0.1.
(a) When n = 0 : Xo = I
(i) y' = 2xy "'" y~ = 2xoYo
(b)
= 2(1)( 1) = 2
(ii) y" = 2y + 2xy' "'" y~ = 2yo + 2xov. = 2(1) + 2(1)(2) = 6
:. y, = Yo + 0.1v. + 0. 005y'~ = I + 0.1(2) + 0.005(6) = 1.23
When n = I : x, = Xo + 0.1 = 1.1
(i) y' = 2xy "'" V, = 2x,y, = 2(1.1 )(1.23) = 2.706
(ii) y" = 2y + 2xy' "'" y'{ = 2y, + 2x,v, = 2(1.23) + 2(1.1)(2.706) = 8.4 132
:. y, = y, + 0.1v, + 0.005y'{ = 1.23 + 0.1(2.706) + 0.005(8.4132) = 1.542666 '" y(x,) = y(1.2)
7.1.4
where
Cl.j =
1.
Yn+' = Yn + 6(k,
where
k, = hf(xn, Yn)
k, = hf (xn + i h , Yn
kJ = hf(xn , h, Yn
+ ik.)
+ ,k,)
k, = hf(xn + h, Yn + kJ)
This method is a 4th order method, so its TE is O(h').
40
Example 34. Consider y' = x + y, y(O) = 1. Use RungeKutta method of order 4 to obtain an
approximation to y(O .2) using the step size h = 0.1.
O.l(xn
+ Yn)
41
7.1.5
Multistep Methods
The methods of Euler, Heull , and RK are called onestep methods because the approximation for
the mesh point Xn+ l involves information from only the previous mesh point , Xn. That is, only the
initial point (xo ,Yo ) is used to compute (xl ,yJ) and in general , Yn is needed to compute Yn+ )'
Methods which usc more than one previous mesh I)oints to fiud the next approximation arc c:alll.'(l
multistep methods.
7.1.6
y~+l =
y.. +
2~ (55/n 
59/n_'
+ 37/n_'  9/n ,) ,n ~ 3
Yn+ ' = y. +
where f~+ l
~~ (9/~+, + 19/n 
5/n l + In ' )
Example 35. Use the AdamsBashforth/ AdamsMoulton Method with h = 0.2 to obtain an approximation to y(0.8) for the solution of
y' = x
Answer. With h
0, Yo = 1 to obtain
0.2 , y(0.8)
::::::l
+ y , y(O) = 1.
(a) y, = 1.242800000
(b) y, = 1.583635920
(c) y, = 2.044212913
(d) In = l (xn, Yn) = Xn +Yn
=> 10 = Xo + Yo = 0 + 1 = 1
=> " = X, + Y, = 0.2 + 1.242800000 = 1.442800000
=> /, = 0.4 + 1.583635920 = 1.983635920
=> /, = 0.6 + 2.044212913 = 2.644212913
 9/0) =
2.044212913 +
~~(55(2.644212913)59(1.983635920)+
y,
(g) y, = y, +
~~ (9/; + 19/, 
7.1.7
order
Method
Euler
1
2
Heun
SecondOrder Taylor Series
2
ThirdOrder Taylor Series
3
FburthOrder RungeKutta
4
AdamsBashforthj AdamsMoulton
4
Local TE Global TE
O(h')
O(h)
O(h3)
O(h')
O(h')
O(h')
O(h )
O(h')
O(h")
O(h)
O(h')
O(h')
43
7.2 Higher-Order Initial Value Problems

The methods used to solve first-order IVPs can be applied to higher-order IVPs because a higher-order
IVP can be replaced by a system of first-order IVPs. For example, the second-order initial value problem

y'' = f(x, y, y'), y(x0) = y0, y'(x0) = u0

can be decomposed into a system of two first-order initial value problems by using the substitution y' = u:

y' = u,           y(x0) = y0
u' = f(x, y, u),  u(x0) = u0

Each equation can be solved by the numerical techniques presented earlier. For example, the Euler
method for this system would be

y_{n+1} = y_n + h u_n
u_{n+1} = u_n + h f(x_n, y_n, u_n)
Remark. The Euler method for a general system of two first-order differential equations,

y' = g(x, y, u),  y(x0) = y0
u' = f(x, y, u),  u(x0) = u0

is given as follows:

y_{n+1} = y_n + h g(x_n, y_n, u_n)
u_{n+1} = u_n + h f(x_n, y_n, u_n)
Reading Assignment 7.2.1. Use the Euler method to approximate y(0.2) and y'(0.2) in two steps,
where y(x) is the solution of the IVP

y'' + xy' + y = 0, y(0) = 1, y'(0) = 2.

Answer. Let y' = u. Then the equation is equivalent to the system

y' = u
u' = -xu - y

With h = 0.1, the Euler method gives

y_{n+1} = y_n + 0.1u_n
u_{n+1} = u_n + 0.1(-x_n u_n - y_n)

With x0 = 0, y0 = 1, u0 = 2, we get

(a) y1 = y0 + 0.1u0 = 1 + 0.1(2) = 1.2
    u1 = u0 + 0.1(-x0 u0 - y0) = 2 + 0.1[-(0)(2) - 1] = 1.9
(b) y2 = y1 + 0.1u1 = 1.2 + 0.1(1.9) = 1.39
    u2 = u1 + 0.1(-x1 u1 - y1) = 1.9 + 0.1[-(0.1)(1.9) - 1.2] = 1.761

That is, y(0.2) ≈ y2 = 1.39 and y'(0.2) ≈ u2 = 1.761.
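The two Euler steps for this system can be checked with a few lines of code; a minimal sketch (our own, not part of the notes):

```python
# Euler's method for y'' + x y' + y = 0, y(0) = 1, y'(0) = 2,
# rewritten as the system y' = u, u' = -x u - y.
h = 0.1
x, y, u = 0.0, 1.0, 2.0
for _ in range(2):                       # two steps: reach x = 0.2
    y, u = y + h*u, u + h*(-x*u - y)     # update y and u simultaneously
    x += h

print(y, u)                              # approximations to y(0.2), y'(0.2)
```

Note the simultaneous tuple assignment: updating y before computing u's update would silently change the scheme.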
7.2.1 The Shooting Method

Consider the linear boundary value problem

y'' = p(x)y' + q(x)y + r(x), a ≤ x ≤ b, y(a) = α, y(b) = β.

(a) Solve the IVP

(i) y'' = p(x)y' + q(x)y + r(x), y(a) = α, y'(a) = 0.

Replace this second-order IVP by a system of two first-order IVPs. Then solve each of the two
first-order IVPs using, say, the fourth-order Runge-Kutta method to obtain y1(x).

(b) Similarly, solve the homogeneous IVP

(ii) y'' = p(x)y' + q(x)y, y(a) = 0, y'(a) = 1

to obtain y2(x).

(c) By linearity, any combination y1(x) + c y2(x) satisfies the DE in (i) together with the condition
y(a) = α. Choosing c = (β - y1(b))/y2(b) makes it satisfy y(b) = β as well.

(d) Therefore

y(x) = y1(x) + [(β - y1(b))/y2(b)] y2(x)

is the solution to the BVP, provided y2(b) ≠ 0.
Example 36. Use the shooting method (together with the fourth-order Runge-Kutta method with
h = 1/3) to solve the BVP

y'' = 4y, 0 ≤ x ≤ 1, y(0) = 0, y(1) = 2.

Answer.

(a) IVP (i) is y'' = 4y, y(0) = 0, y'(0) = 0. Since the equation is homogeneous with zero initial
conditions, its solution is y1(x) ≡ 0, so in particular y1(1) = 0.

(b) IVP (ii) is equivalent to the system

y' = u, y(0) = 0   [3]
u' = 4y, u(0) = 1  [4]

(i) Use RK4 to solve [3] and [4]: we obtain

y2(1/3) = 0.35802469, y2(2/3) = 0.88106488 and y2(1) = 1.80973178

(c) Therefore

y(x) = y1(x) + [(2 - y1(1))/y2(1)] y2(x) = (2/1.80973178) y2(x)
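The RK4 integration of the auxiliary system [3]-[4] can be sketched as below. The helper `rk4_system` is our own name; it applies one classical RK4 step to a first-order system.

```python
# Linear shooting for y'' = 4y, y(0) = 0, y(1) = 2, with RK4 and h = 1/3.
# The homogeneous problem with y(0) = y'(0) = 0 gives y1 == 0, so only the
# auxiliary IVP y2' = u, u' = 4*y2, y2(0) = 0, u(0) = 1 must be integrated.

def rk4_system(f, x, v, h):
    """One RK4 step for a first-order system v' = f(x, v)."""
    k1 = f(x, v)
    k2 = f(x + h/2, [vi + h/2*ki for vi, ki in zip(v, k1)])
    k3 = f(x + h/2, [vi + h/2*ki for vi, ki in zip(v, k2)])
    k4 = f(x + h,   [vi + h*ki   for vi, ki in zip(v, k3)])
    return [vi + h/6*(a + 2*b + 2*c + d)
            for vi, a, b, c, d in zip(v, k1, k2, k3, k4)]

f = lambda x, v: [v[1], 4*v[0]]     # v = [y2, u]

h, x, v = 1/3, 0.0, [0.0, 1.0]
vals = []
for _ in range(3):
    v = rk4_system(f, x, v, h)
    x += h
    vals.append(v[0])               # y2 at x = 1/3, 2/3, 1

c = 2 / vals[-1]                    # (beta - y1(1)) / y2(1) with y1(1) = 0
print(vals, c)
```

The three values of y2 reproduce those quoted in the example, and the BVP solution is y(x) = c·y2(x).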
7.2.2 Finite Difference Method

Consider the BVP

y'' + P(x)y' + Q(x)y = f(x), a ≤ x ≤ b, with y(a) and y(b) prescribed.

Partition [a, b] as a = x0 < x1 < ... < x_{n-1} < x_n = b with x_i - x_{i-1} = h for all i = 1, 2, ..., n. Let
y_i ≈ y(x_i), P_i = P(x_i), Q_i = Q(x_i), and f_i = f(x_i). Then by replacing y' and y'' with their central
difference approximations, we obtain

(y_{i+1} - 2y_i + y_{i-1})/h² + P_i (y_{i+1} - y_{i-1})/(2h) + Q_i y_i = f_i, i = 1, 2, ..., n-1.

Multiplying through by h² and collecting terms gives

(1 + (h/2)P_i) y_{i+1} + (h²Q_i - 2) y_i + (1 - (h/2)P_i) y_{i-1} = h² f_i, i = 1, 2, ..., n-1.

The last equation, known as a finite difference equation, is an approximation to the DE. It enables
us to approximate the solution at x1, ..., x_{n-1}.

Example 37. Use the finite difference method with h = 1 to approximate the solution of the BVP

y'' - (1 - x/5)y = x, 1 ≤ x ≤ 3, y(1) = 2, y(3) = -1.

Answer. Here P_i = 0, Q_i = -(1 - x_i/5), f_i = x_i, and h = 1, so the finite difference equation becomes

y_{i+1} + (Q_i - 2)y_i + y_{i-1} = f_i, i = 1.

That is,

y_2 - 2.6y_1 + y_0 = 2.

With x_1 = 2 and the boundary conditions y_0 = 2 and y_2 = -1, solving the above equation gives

y_1 = -0.3846.

Notes: We can improve the accuracy by using a smaller h. But for that we have to pay a price, i.e.
we have to solve a larger system of equations.
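For a smaller h the finite difference equations form a tridiagonal system, which the Thomas algorithm solves in linear time. Below is a minimal sketch (our own code and names) that reproduces the h = 1 result; shrinking h only enlarges the system.

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system (a: sub-, b: main, c: super-diagonal)."""
    b, d = b[:], d[:]                    # avoid mutating the caller's lists
    n = len(b)
    for i in range(1, n):                # forward elimination
        m = a[i] / b[i-1]
        b[i] -= m * c[i-1]
        d[i] -= m * d[i-1]
    x = [0.0] * n
    x[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):       # back substitution
        x[i] = (d[i] - c[i] * x[i+1]) / b[i]
    return x

# BVP: y'' - (1 - x/5) y = x, y(1) = 2, y(3) = -1
P = lambda x: 0.0
Q = lambda x: -(1 - x/5)
f = lambda x: x

h, x0, xn, y0, yn = 1.0, 1.0, 3.0, 2.0, -1.0
n = round((xn - x0) / h)                 # number of subintervals
xs = [x0 + i*h for i in range(1, n)]     # interior nodes

sub  = [1 - h/2*P(x) for x in xs]        # coefficient of y_{i-1}
main = [h*h*Q(x) - 2 for x in xs]        # coefficient of y_i
sup  = [1 + h/2*P(x) for x in xs]        # coefficient of y_{i+1}
rhs  = [h*h*f(x) for x in xs]
rhs[0]  -= (1 - h/2*P(xs[0]))  * y0      # fold boundary values into the RHS
rhs[-1] -= (1 + h/2*P(xs[-1])) * yn

y = thomas(sub, main, sup, rhs)
print([round(v, 4) for v in y])          # interior approximations
```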
Chapter 8
Numerical Methods For Partial
Differential Equations
8.1 Introduction

Definition 8.1.1 (Three Basic Types of Second-Order Linear Equations). The linear PDE

A u_xx + B u_xy + C u_yy + D u_x + E u_y + F u = G

is said to be

(a) parabolic if B² - 4AC = 0. Parabolic equations often describe heat flow and diffusion processes.

(b) hyperbolic if B² - 4AC > 0. Hyperbolic equations often describe wave motion and vibrating
phenomena, such as violin strings and drum heads.

(c) elliptic if B² - 4AC < 0. Elliptic equations are often used to describe steady-state phenomena
and thus do not depend on time. Elliptic equations are important in the study of electricity and
magnetism.
8.2 Difference Quotients

We replace derivatives by their corresponding difference quotients based on the Taylor series:

(i) u(x + h) = u(x) + hu'(x) + (h²/2!)u''(x) + (h³/3!)u'''(x) + ...

(ii) u(x - h) = u(x) - hu'(x) + (h²/2!)u''(x) - (h³/3!)u'''(x) + ...

(iii) (i) ⇒ u'(x) = [u(x + h) - u(x)]/h - (h/2!)u''(x) - ...,

i.e. u'(x) = [u(x + h) - u(x)]/h + O(h) (the forward difference formula for u')

(iv) (ii) ⇒ u'(x) = [u(x) - u(x - h)]/h + O(h) (the backward difference formula for u')

(v) (i) - (ii) ⇒ u'(x) = [u(x + h) - u(x - h)]/(2h) + O(h²) (the central difference formula for u')

(vi) (i) + (ii) ⇒ u''(x) = [u(x + h) - 2u(x) + u(x - h)]/h² + O(h²)

(the central difference formula for u'')
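The stated accuracy orders can be checked numerically: halving h should roughly halve the forward difference error, O(h), and quarter the central difference error, O(h²). A small sketch (our own, using e^x as a convenient test function):

```python
import math

f, fp = math.exp, math.exp            # test function and its exact derivative
x = 0.5

def forward(h):
    return (f(x + h) - f(x)) / h      # O(h) approximation of f'(x)

def central(h):
    return (f(x + h) - f(x - h)) / (2 * h)   # O(h^2) approximation

e_fwd = [abs(forward(h) - fp(x)) for h in (0.1, 0.05)]
e_cen = [abs(central(h) - fp(x)) for h in (0.1, 0.05)]

print(e_fwd[0] / e_fwd[1])   # ratio near 2: error halves with h
print(e_cen[0] / e_cen[1])   # ratio near 4: error quarters with h
```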
8.3 Finite Difference Approximations of Partial Derivatives

Partition the domain with a rectangular grid of mesh sizes Δx = h and Δy = k. At a typical point
P(ih, jk), write u_{i,j} = u(ih, jk). Then

∂u/∂x |_P = (u_{i+1,j} - u_{i,j})/h + O(h)            (the FD for u_x at P)

∂u/∂y |_P = (u_{i,j+1} - u_{i,j})/k + O(k)            (the FD for u_y at P)

∂u/∂x |_P = (u_{i+1,j} - u_{i-1,j})/(2h) + O(h²)      (the CD for u_x at P)

∂²u/∂x² |_P = (u_{i+1,j} - 2u_{i,j} + u_{i-1,j})/h² + O(h²)   (the CD for u_xx at P)
8.4 Parabolic Equations

8.4.1 FTCS Explicit Method

Example 38 (FTCS Explicit Scheme for the Heat Equation). Consider the 1D heat equation

∂u/∂t = ∂²u/∂x², 0 < x < a, t > 0.

Answer. Let u_{i,j} = u(ih, jk) = u(x_i, t_j). Then the finite difference approximation is

(u_{i,j+1} - u_{i,j})/k = (u_{i+1,j} - 2u_{i,j} + u_{i-1,j})/h²

i.e.

u_{i,j+1} = r u_{i+1,j} + (1 - 2r)u_{i,j} + r u_{i-1,j}, i = 1, ..., M-1, j = 0, ..., N

where r = k/h².

Note 10: We use the above formula to estimate the values of u at time level j + 1 from the values at
time level j. This is known as the FTCS (forward time, central space) explicit finite difference method.
Example 39. Consider the BVP

∂u/∂t = ∂²u/∂x², 0 ≤ x ≤ 1, t ≥ 0
u(0, t) = u(1, t) = 0, t ≥ 0
u(x, 0) = sin(πx), 0 ≤ x ≤ 1.

Using an x-step of h = 0.1 and a t-step of k = 0.0005, use the FTCS scheme to estimate u(0.1, 0.001).

Answer.

r = k/h² = 0.0005/(0.1)² = 0.05

so the scheme is u_{i,j+1} = 0.05u_{i+1,j} + 0.9u_{i,j} + 0.05u_{i-1,j}. The initial values include

u_{1,0} = sin 0.1π = 0.3090
u_{2,0} = sin 0.2π = 0.5878
u_{3,0} = sin 0.3π = 0.8090

Two time steps are needed to reach t = 0.001:

u_{1,1} = 0.05(0.5878) + 0.9(0.3090) + 0.05(0) = 0.3075
u_{2,1} = 0.05(0.8090) + 0.9(0.5878) + 0.05(0.3090) = 0.5849
u_{1,2} = 0.05(0.5849) + 0.9(0.3075) + 0.05(0) = 0.3060

Therefore u(0.1, 0.001) ≈ u_{1,2} = 0.3060.

Note 11: The FTCS scheme is stable for a given space step h provided the time step k is restricted
by the condition r = k/h² ≤ 1/2, i.e. k ≤ h²/2.

Note 12: Suppose there is an initial condition u(x, 0) = f(x). Then the scheme starts with (i, j) =
(0, 0), the left-hand corner of the 2D grid. Along the horizontal line j = 0, where t = 0, we have
u_{i,0} = u(x_i, 0) = f(x_i).
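The FTCS time-stepping for Example 39 takes only a few lines. A minimal sketch (our own code, not part of the notes):

```python
import math

# FTCS scheme for u_t = u_xx with u(0,t) = u(1,t) = 0, u(x,0) = sin(pi x).
h, k = 0.1, 0.0005
r = k / h**2                     # 0.05, well inside the stability limit 1/2
M = round(1 / h)                 # number of space intervals

u = [math.sin(math.pi * i * h) for i in range(M + 1)]   # time level j = 0
for _ in range(2):               # two time steps reach t = 0.001
    u = ([0.0]
         + [r*u[i+1] + (1 - 2*r)*u[i] + r*u[i-1] for i in range(1, M)]
         + [0.0])                # boundary values stay zero

print(round(u[1], 4))            # estimate of u(0.1, 0.001)
```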
Note 13: Suppose the boundary conditions are u(0, t) = u(a, t) = 0. Then we have u_{0,j} = u_{M,j} = 0
for all j > 0.

8.4.2 Crank-Nicolson Method
Example 41 (Crank-Nicolson Implicit Scheme for the Heat Equation). This CNI scheme
replaces u_xx by an average of two CD quotients, one at the time level j and another at j + 1:

(u_{i,j+1} - u_{i,j})/k = (1/2)[(u_{i+1,j+1} - 2u_{i,j+1} + u_{i-1,j+1})/h² + O(h²)
                                + (u_{i+1,j} - 2u_{i,j} + u_{i-1,j})/h² + O(h²)]

After simplifying, we obtain

-r u_{i-1,j+1} + 2(1 + r)u_{i,j+1} - r u_{i+1,j+1} = r u_{i-1,j} + 2(1 - r)u_{i,j} + r u_{i+1,j} + k O(k, h²)

where r = k/h².

Note 14: For each time level j, we obtain an (M - 1) × (M - 1) tridiagonal system which can be
solved using iterative methods.

Note 15: The CNI scheme has no stability restriction. It is more accurate than the explicit scheme.
Example 42. Consider the BVP

∂u/∂t = ∂²u/∂x², 0 ≤ x ≤ 1, t ≥ 0
u(0, t) = u(1, t) = 0, t ≥ 0
u(x, 0) = sin(πx), 0 ≤ x ≤ 1.

Using an x-step of h = 0.2 and a t-step of k = 0.001, use the CNI scheme to estimate u(0.4, 0.001).

Answer. Since the initial temperature distribution is symmetric with respect to x = 0.5, we only need to
consider the grid points over 0 ≤ x ≤ 0.5.

r = k/h² = 0.001/(0.2)² = 0.025

(a) u(0, t) = u(1, t) = 0 ⇒ u_{0,j} = u_{5,j} = 0
(b) u(x, 0) = sin(πx) ⇒ u_{1,0} = sin 0.2π = 0.5878, u_{2,0} = sin 0.4π = 0.9511, and by symmetry
    u_{3,j} = u_{2,j}, u_{4,j} = u_{1,j}
(c) -r u_{i-1,j+1} + 2(1 + r)u_{i,j+1} - r u_{i+1,j+1} = r u_{i-1,j} + 2(1 - r)u_{i,j} + r u_{i+1,j}
    ⇒ -0.025u_{i-1,j+1} + 2.05u_{i,j+1} - 0.025u_{i+1,j+1} = 0.025u_{i-1,j} + 1.95u_{i,j} + 0.025u_{i+1,j}

(i) i = 1, j = 0: 2.05u_{1,1} - 0.025u_{2,1} = 1.95(0.5878) + 0.025(0.9511) = 1.16996   (A)
(ii) i = 2, j = 0 (using u_{3,1} = u_{2,1} and u_{3,0} = u_{2,0}):
     -0.025u_{1,1} + 2.025u_{2,1} = 0.025(0.5878) + 1.975(0.9511) = 1.89303   (B)
(iii) Solving (A) and (B) by Gauss elimination or Cramer's rule or the Gauss-Seidel method, we
obtain u_{1,1} = 0.58219907, u_{2,1} = 0.94201789.

Therefore u(0.4, 0.001) ≈ u_{2,1} = 0.94201789.
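One CNI step can also be carried out on the full grid (without exploiting symmetry) by solving the tridiagonal system directly with the Thomas algorithm. This is our own illustrative code:

```python
import math

# One Crank-Nicolson step for u_t = u_xx with h = 0.2, k = 0.001 (r = 0.025),
# u(0,t) = u(1,t) = 0, u(x,0) = sin(pi x).
h, k = 0.2, 0.001
r = k / h**2
M = round(1 / h)                         # 5 intervals, unknowns i = 1..4
u0 = [math.sin(math.pi * i * h) for i in range(M + 1)]

# Tridiagonal system: -r u[i-1] + 2(1+r) u[i] - r u[i+1] = RHS_i
a = [-r] * (M - 1)                       # sub-diagonal
b = [2 * (1 + r)] * (M - 1)              # main diagonal
c = [-r] * (M - 1)                       # super-diagonal
d = [r*u0[i-1] + 2*(1 - r)*u0[i] + r*u0[i+1] for i in range(1, M)]

# Thomas algorithm: forward elimination, then back substitution
for i in range(1, M - 1):
    m = a[i] / b[i-1]
    b[i] -= m * c[i-1]
    d[i] -= m * d[i-1]
x = [0.0] * (M - 1)
x[-1] = d[-1] / b[-1]
for i in range(M - 3, -1, -1):
    x[i] = (d[i] - c[i] * x[i+1]) / b[i]

print(round(x[0], 8), round(x[1], 8))    # u_{1,1}, u_{2,1}
```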
Example 43. The exact solution of the 1D heat equation

u_t = u_xx, 0 < x < 1, t > 0
u(0, t) = u(1, t) = 0, t > 0
u(x, 0) = f(x) = sin(πx)

is u(x, t) = e^{-π²t} sin(πx).

(a) By using the FTCS explicit scheme with h = 0.2 and k = 0.008, we obtain the following table:

  t             x = 0    x = 0.2   x = 0.4
0.000   Exact   0.000    0.5878    0.9511
        FTCS    0.000    0.5878    0.9511
0.008   Exact   0.000    0.5432    0.8789
        FTCS    0.000    0.5429    0.8784
0.016   Exact   0.000    0.5019    0.8121
        FTCS    0.000    0.5014    0.8113
0.024   Exact   0.000    0.4638    0.7505
        FTCS    0.000    0.4631    0.7493
0.032   Exact   0.000    0.4286    0.6935
        FTCS    0.000    0.4277    0.6921
0.040   Exact   0.000    0.3961    0.6408
        FTCS    0.000    0.3951    0.6392

(b) By using the CNI scheme and the FTCS scheme with h = 0.2 and k = 0.04 (so that r = 1,
violating the FTCS stability condition r ≤ 1/2), we obtain:

  t             x = 0    x = 0.2   x = 0.4
0.00    Exact   0.000    0.5878    0.9511
        CNI     0.000    0.5878    0.9511
        FTCS    0.000    0.5878    0.9511
0.04    Exact   0.000    0.3961    0.6408
        CNI     0.000    0.3993    0.6460
        FTCS    0.000    0.3633    0.5878
0.08    Exact   0.000    0.2669    0.4318
        CNI     0.000    0.2712    0.4388
        FTCS    0.000    0.2245    0.3633
0.12    Exact   0.000    0.1798    0.2910
        CNI     0.000    0.1842    0.2981
        FTCS    0.000    0.1388    0.2245
0.16    Exact   0.000    0.1212    0.1961
        CNI     0.000    0.1251    0.2025
        FTCS    0.000    0.0858    0.1388
0.20    Exact   0.000    0.0817    0.1321
        CNI     0.000    0.0850    0.1376
        FTCS    0.000    0.0530    0.0858

The CNI values stay close to the exact solution, while the FTCS values drift away as t increases.
8.5 Elliptic Equations

Example 44 (Difference Equation for the Laplace Equation). Using CD for both u_xx and
u_yy, the finite difference approximation for the 2D Laplace equation u_xx + u_yy = 0 is

(u_{i+1,j} - 2u_{i,j} + u_{i-1,j})/h² + (u_{i,j+1} - 2u_{i,j} + u_{i,j-1})/k² = 0.

If h = k, this simplifies to

u_{i,j} = (1/4)(u_{i+1,j} + u_{i-1,j} + u_{i,j+1} + u_{i,j-1}),

i.e. u at each interior mesh point is the average of its four neighbours.

Example 45. The four sides of a square plate of side 12 cm made of uniform material are kept at
constant temperature such that

LHS = RHS = Bottom = 100°C and Top = 0°C.

Using a mesh size of 4 cm, calculate the temperature u(x, y) at the internal mesh points A(4, 4), B(8, 4),
C(8, 8) and D(4, 8) by using the Gauss-Seidel method if u(x, y) satisfies the Laplace equation u_xx +
u_yy = 0. Start with the initial guess u_A = u_B = 80, u_C = u_D = 50, and continue iterating until
|u_i^(p+1) - u_i^(p)| < 10⁻³ for all i, where u_i^(p) are the pth iterates for u_i, i = A, B, C, D.

Answer. Applying the equation

u_{i,j} = (1/4)[u_E + u_N + u_W + u_S]

at each internal point gives

u_A = (1/4)(u_B + u_D + 200)
u_B = (1/4)(u_A + u_C + 200)
u_C = (1/4)(u_B + u_D + 100)
u_D = (1/4)(u_A + u_C + 100)

This system is strictly diagonally dominant, so we can proceed with the Gauss-Seidel iteration starting
with the initial guess u_A = u_B = 80, u_C = u_D = 50. Continuing until all |u_i^(p+1) - u_i^(p)| < 10⁻³,
we obtain u_A = u_B = 87.5, u_C = u_D = 62.5.
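The Gauss-Seidel iteration for these four unknowns is easy to script; each update already uses the newest available values. A minimal sketch (our own code):

```python
# Gauss-Seidel iteration for the four interior points of the square plate.
uA, uB, uC, uD = 80.0, 80.0, 50.0, 50.0
while True:
    nA = (uB + uD + 200) / 4          # uses previous uB, uD
    nB = (nA + uC + 200) / 4          # uses the freshly updated uA
    nC = (nB + uD + 100) / 4
    nD = (nA + nC + 100) / 4
    done = max(abs(nA - uA), abs(nB - uB),
               abs(nC - uC), abs(nD - uD)) < 1e-3
    uA, uB, uC, uD = nA, nB, nC, nD
    if done:
        break

print(uA, uB, uC, uD)                 # converges toward 87.5, 87.5, 62.5, 62.5
```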
8.6 Hyperbolic Equations

Example 46 (CTCS Scheme for Hyperbolic Equations). Consider the BVP involving the 1D
wave equation

∂²u/∂t² = ∂²u/∂x², 0 ≤ x ≤ 1, t > 0
u(0, t) = u(1, t) = 0
u(x, 0) = f(x), 0 ≤ x ≤ 1
u_t(x, 0) = g(x), 0 ≤ x ≤ 1

Replacing both second derivatives by central differences (CTCS: central time, central space) gives

(u_{i,j+1} - 2u_{i,j} + u_{i,j-1})/k² = (u_{i+1,j} - 2u_{i,j} + u_{i-1,j})/h²

i.e.

u_{i,j+1} = p u_{i+1,j} + 2(1 - p)u_{i,j} + p u_{i-1,j} - u_{i,j-1}

where p = (k/h)².

Note 16:

(a) This scheme is stable for 0 ≤ p ≤ 1, i.e. k ≤ h.

(b) The initial velocity condition u_t(x, 0) = g(x) is approximated by the central difference

(u_{i,1} - u_{i,-1})/(2k) = g_i ⇒ u_{i,-1} = u_{i,1} - 2k g_i.

(c) To calculate u_{i,1}, set j = 0 in the scheme,

u_{i,1} = p u_{i+1,0} + 2(1 - p)u_{i,0} + p u_{i-1,0} - u_{i,-1},

and eliminate u_{i,-1} using (b):

u_{i,1} = (1/2)[p u_{i+1,0} + 2(1 - p)u_{i,0} + p u_{i-1,0}] + k g_i.
Example 47. A string (with fixed ends at x = 0 and x = 1) governed by the hyperbolic partial
differential equation

u_tt = u_xx, 0 ≤ x ≤ 1, t ≥ 0

with boundary conditions

u(0, t) = u(1, t) = 0

starts from its equilibrium position (initial displacement u(x, 0) = 0) with initial velocity

g(x) = sin(πx).

What is its displacement u at time t = 0.4 and x = 0.2, 0.4, 0.6, 0.8? (Use the CTCS explicit scheme
with Δx = Δt = 0.2.)

Answer.

(a) u(0, t) = u(1, t) = 0 ⇒ u_{0,j} = u_{5,j} = 0.

(b) u(x, 0) = 0 ⇒ u_{i,0} = 0 for all i.

(c) With h = Δx = Δt = k = 0.2, p = (k/h)² = 1, we have the following CTCS scheme:

u_{i,j+1} = u_{i+1,j} + u_{i-1,j} - u_{i,j-1}

with (since u_{i,0} = 0)

u_{i,1} = (1/2)(u_{i+1,0} + u_{i-1,0}) + 0.2g_i = 0.2 sin(0.2iπ).

(d) Time level j = 1 (t = 0.2):

u_{1,1} = 0.2 sin(0.2π) = 0.1176
u_{2,1} = 0.2 sin(0.4π) = 0.1902
u_{3,1} = 0.1902, u_{4,1} = 0.1176 (by symmetry)

(e) Time level j = 2 (t = 0.4):

(i) i = 1, j = 1: u_{1,2} = u_{2,1} + u_{0,1} - u_{1,0} = 0.1902 + 0 - 0 = 0.1902
(ii) i = 2, j = 1: u_{2,2} = u_{3,1} + u_{1,1} - u_{2,0} = 0.1902 + 0.1176 - 0 = 0.3078
(iii) i = 3, j = 1: u_{3,2} = u_{4,1} + u_{2,1} - u_{3,0} = 0.1176 + 0.1902 - 0 = 0.3078
(iv) i = 4, j = 1: u_{4,2} = u_{5,1} + u_{3,1} - u_{4,0} = 0 + 0.1902 - 0 = 0.1902

That is, u(0.2, 0.4) ≈ u(0.8, 0.4) ≈ 0.1902 and u(0.4, 0.4) ≈ u(0.6, 0.4) ≈ 0.3078.
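The two CTCS time levels for this string can be computed with a short script (our own sketch, not part of the notes):

```python
import math

# CTCS scheme for u_tt = u_xx, u(x,0) = 0, u_t(x,0) = sin(pi x),
# fixed ends at x = 0 and x = 1, with h = k = 0.2 (so p = 1).
h = k = 0.2
M = round(1 / h)
g = [math.sin(math.pi * i * h) for i in range(M + 1)]

u_prev = [0.0] * (M + 1)                   # level j = 0: u(x, 0) = 0
# starting formula u_{i,1} = (u_{i+1,0} + u_{i-1,0})/2 + k g_i = k g_i here
u_curr = [0.0] + [k * g[i] for i in range(1, M)] + [0.0]

# advance to j = 2 (t = 0.4): u_{i,j+1} = u_{i+1,j} + u_{i-1,j} - u_{i,j-1}
u_next = ([0.0]
          + [u_curr[i+1] + u_curr[i-1] - u_prev[i] for i in range(1, M)]
          + [0.0])

print([round(v, 4) for v in u_next[1:M]])  # u at x = 0.2, 0.4, 0.6, 0.8
```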
Bibliography
[1] Koay Hang Leen, Lecture Note for Numerical Methods and Statistics, 2007.
[2] Peter V. O'Neil, Advanced Engineering Mathematics.
[3] Glyn James, Advanced Modern Engineering Mathematics.
[4] Anthony Croft, Robert Davison, Martin Hargreaves, Engineering Mathematics.