Objective: Learn a few basic numerical techniques (Finite Difference) to solve simple
ODEs/PDEs/algebraic equations derived from a variety of heat and mass transfer related
conservation equations.
• Analytical methods yield exact solutions, often as infinite series of mathematical functions.
• Numerical methods yield approximate solutions obtained by arithmetic computations; the results depend on the accuracy of the method and are limited by the machine round-off error of computers.
Course focuses on
• Algebra (Matrix Operation)
• Methods (Finite Difference)
Requires knowledge of
• Computer programming (because a large number of iterations is required)
(there are engineering software packages, e.g., Matlab, NAG, IMSL, Polymath)
• No formal numerical analysis in this course!
Example: An N-tray separation column. Vapor V_(N+1) enters at the bottom (tray N), liquid L_N leaves at the bottom, and the more volatile component moves up toward tray 1.

L_i = x_i L,   V_i = y_i V,   y_i = k_i x_i  (equilibrium)

Therefore, V_i = (y_i / x_i)(V / L) L_i, or V_i = A'_i L_i.

Mass balance around the i-th plate (vapor streams V_i, V_(i+1); liquid streams l_(i-1), l_i with l_i = V_i / A'_i):

V_i + V_i / A'_i - V_(i-1) / A'_(i-1) - V_(i+1) = 0,   i = 1, N

Thus, there are 'N' equations and (N+2) variables (V_0 ... V_(N+1)).
For example, for i = 2:  A_1 V_1 + (1 - A_2) V_2 - V_3 = 0.

In matrix form, A X̄ = b̄, where A is the N x N coefficient matrix (N rows, N columns):

A = [ (1 - A_1)    -1         0        ...   0
       A_1      (1 - A_2)    -1        ...   0
        0          A_2    (1 - A_3)    -1    0
        ...                                  ...
        0          ...       0     A_(N-1)  (1 - A_N) ]  (N x N)

X̄ = [V_1, V_2, ..., V_N]^T (column vector of unknowns),  b̄ = [L_0, 0, ..., 0, V_(N+1)]^T (known column vector)

Alternatively,

[ (1 - A_1)   -1   ...        0      ] [V_1]        [  L_0    ]
[  A_1   (1 - A_2) -1  ...    0      ] [V_2]        [   0     ]
[  ...                        ...    ] [ ...]    =  [  ...    ]
[  0   ...  A_(N-1)  (1 - A_N)] (NxN)  [V_N] (Nx1)  [V_(N+1)] (Nx1)
o Most of the finite difference methods lead to the discretization of ODEs/PDEs and a set
of algebraic equations.
o You have to solve the set of algebraic equations using Gauss Elimination, Gauss-Jordan, etc.
(Figure: a particle (d_p, ρ_p, v_p, A_p, m_p) released in a liquid/gas column of height H with fluid properties (ρ_f, v_f); find the time t = ? to settle.)
Momentum balance on the particle:

dv/dt = A - B C_D(v) v^2   or   dv/dt = A - B Φ(v)

with the drag coefficient C_D = 24/Re_p (Stokes regime) or C_D ≈ 0.44 (Newton regime), and the initial condition t = 0, v_p = 0.

The particle approaches its terminal velocity v_t. Depending on the drag law, the ODE is homogeneous or non-homogeneous, or becomes a set of ODEs.
Example: A hot particle (radius r_p) at initial temperature T = T_o > T_f is quenched in a fluid at constant temperature T_f, with heat transfer coefficient h.

Energy balance (lumped: no temperature gradient in the particle):

m C_p dT/dt = -h A_s (T - T_f),   A_s = π d_p^2

where h follows from a Nusselt-number correlation:

Nu = h d_p / k_f = f(Re_p, Pr)  or  h = h(v_p);   Re = v_p d_p ρ_f / μ_f,   Pr = (C_p μ / k)_f
Time-dependence of velocity (see the previous example) and the energy balance can be written together as

dv_p/dt = A - B Φ(v_p)
dT/dt = C D(v_p) - E          with t = 0: v_p = 0, T = T_o

i.e., in general form,

dy_1/dt = f(y_1)
dy_2/dt = Φ(y_1, y_2)         (two coupled ODEs)
Example: The same particle, now with an internal temperature gradient. Find T(t, r) for 0 ≤ r ≤ r_p, with fluid temperature T_f.

Assume ∇T(r) ≠ 0, i.e., there is a temperature gradient inside the particle:

ρ C_p ∂T/∂t = (k/r^2) ∂/∂r (r^2 ∂T/∂r)
Example 5: Mass transfer in a tubular liquid flow (steady- state)
Water flows through a tube of radius R and length L with the laminar velocity profile

v_z(r) = 2 v_o (1 - r^2/R^2)

The solute enters at concentration C_A = C_in. Find c(t, z, r).

Species balance over a control volume (no reaction, r = 0):

∂C_A/∂t + v_z ∂C_A/∂z = D ∇^2 C_A

Initial and boundary conditions:
t = 0:   C_A = 0 everywhere
t > 0, z = 0:  C_A = C_in
z = L:   ∂C_A/∂z = 0   (long-tube approximation)
r = 0:   ∂C_A/∂r = 0   (symmetry)
r = R:   -∂C_A/∂r = 0  (non-reactive wall)
Lecture #02
f_1(X_1, X_2, ... X_n) = 0
f_2(X_1, X_2, ... X_n) = 0      (system of n linear algebraic equations)
...
f_n(X_1, X_2, ... X_n) = 0

or A X̄ = b̄ (A has n rows and n columns), where

X̄ = column vector = [X_1, X_2, ..., X_n]^T,   b̄ = column vector = [b_1, b_2, ..., b_n]^T

or, treating them as n x 1 matrices,  X = [X_1 ... X_n]^T (n x 1),  b = [b_1 ... b_n]^T (n x 1)
Note: In general, a matrix has n x m elements a_ij, i = 1 to n, j = 1 to m.

If n = m, there may be a unique solution, and the rank of the matrix = m. In general, rank >, =, < m (3 cases). (For more, read a book on matrices.)

In this course, we will be interested in solving n equations with as many variables, e.g.,

3x + 2y = 5          3x + 2y + z = 5
 x -  y = 6    or     x +  y - z = 2      etc.
                     2x + 2y - 2z = 4
(In a tridiagonal matrix, the first and last rows have only two elements, and all other rows have 3 elements: one on the diagonal and one on each side of the diagonal element.)
(2) Operation:
Summation: [𝐴]𝑚×𝑛 = [𝐵]𝑚×𝑛 + [𝐶]𝑚×𝑛
Multiplication (rules):

[A]_(m x n) x [B]_(n x l) = [C]_(m x l)   — the inner dimensions (n) must be the same. For example, the products (3x2)(2x2), (3x2)(2x3), and (1x3)(3x1) are all defined.

example:

[3 1]              [(3x5 + 1x7)  (3x6 + 1x2)]     [22 20]
[8 6] x [5 6]   =  [(8x5 + 6x7)  (8x6 + 6x2)]  =  [82 60]
[0 4]   [7 2]      [(0x5 + 4x7)  (0x6 + 4x2)]     [28  8]
(3x2)   (2x2)                                      (3x2)
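The multiplication rule above can be checked with a short script (Python is used here purely for illustration; the triple loop is just the definition of the rule):

```python
def matmul(A, B):
    """Multiply an (m x n) matrix A by an (n x l) matrix B: C is (m x l)."""
    m, n, l = len(A), len(B), len(B[0])
    assert all(len(row) == n for row in A), "inner dimensions must match"
    C = [[0.0] * l for _ in range(m)]
    for i in range(m):
        for j in range(l):
            for k in range(n):          # C[i][j] = sum over k of A[i][k] B[k][j]
                C[i][j] += A[i][k] * B[k][j]
    return C

A = [[3, 1], [8, 6], [0, 4]]            # 3 x 2
B = [[5, 6], [7, 2]]                    # 2 x 2
print(matmul(A, B))                     # [[22.0, 20.0], [82.0, 60.0], [28.0, 8.0]]
```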
Inverse of a matrix:

[A]^-1 is defined by [A]_(n x n) x [A]^-1_(n x n) = [I] ≡ [1 0 0; 0 1 0; 0 0 1]  (identity matrix)

For a 2 x 2 matrix:

[A]^-1 = (1/|A|) [ a22  -a12 ; -a21  a11 ],   where A = [ a11 a12 ; a21 a22 ]
Transpose: example: [A] = [a_1; a_2; ...; a_n] (n x 1)  ⇒  [A]^T = [a_1 a_2 ... a_n] (1 x n)

[A]{X} = {b}
or [A][X] = [b],  with b^T = [b_1, b_2, ... b_n], X^T = [X_1, X_2, ... X_n]
or A X̄ = b̄

X = A^-1 b
Therefore, if A = [3 5 7; 2 0 8; -1 2 5] and b = [7; 3; 2], Cramer's rule gives (|A| ≠ 0):

X_2 = det[3 7 7; 2 3 8; -1 2 5] / |A|   (column 2 replaced by b)

X_3 = det[3 5 7; 2 0 3; -1 2 2] / |A|,  etc.
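Cramer's rule can be sketched for the 3 x 3 example above (an illustration only; for larger systems Gauss Elimination is far cheaper than evaluating determinants):

```python
def det3(M):
    """Determinant of a 3x3 matrix by cofactor expansion along the first row."""
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def cramer3(A, b):
    """Solve a 3x3 system by Cramer's rule: x_j = det(A with column j
    replaced by b) / det(A). Requires det(A) != 0."""
    D = det3(A)
    assert D != 0, "singular matrix"
    x = []
    for j in range(3):
        Aj = [row[:] for row in A]      # copy A, then replace column j by b
        for i in range(3):
            Aj[i][j] = b[i]
        x.append(det3(Aj) / D)
    return x

A = [[3, 5, 7], [2, 0, 8], [-1, 2, 5]]
b = [7, 3, 2]
x = cramer3(A, b)                       # satisfies A x = b
```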
Let us briefly discuss unique solution, singularity, and rank(r) of a (𝑚 × 𝑛) matrix. (Read the
textbook for details)
Therefore, there is no unique solution: the coefficient matrix is singular, and the rank r of such a matrix = 1 < 2, or m (# of equations) < n (# of variables).

Geometrically, two independent equations represent two straight lines intersecting at a unique point; three independent equations represent three planes whose intersection yields a point (X_1, X_2, X_3), the unique solution to the equations.

Note: Here there are three linearly independent equations representing three non-parallel planes. In this case m = n, r = 3 (= n).
Lecture #03
Gauss Elimination
It is the most common method to solve a set of linear algebraic equations. The method reduces the original matrix to an upper triangular matrix (UTM), which can also be used to determine the determinant of the matrix. Interestingly, one can also inspect the intermediate steps/solutions to determine singularity, rank, and the number of independent equations.

R_1, the first row, is called the pivot row, and the first non-zero element (a_11) is called the pivotal element. Subsequent operations are performed around the pivot row and pivotal element.
𝑎11 ≠ 0
Similarly,

a_32' = a_32 - (a_31/a_11) a_12
a_33' = a_33 - (a_31/a_11) a_13        (3)
b_3'  = b_3  - (a_31/a_11) b_1

(Briefly, the first equation is divided by a_11, multiplied by a_31, and subtracted from the 3rd equation to yield the modified equation (3).)

or, in the simplest form:  R_2' = R_2 - R_1 (a_21/a_11),   R_3' = R_3 - R_1 (a_31/a_11)
Geometrically, two planes (non-parallel) will intersect to yield a straight line. Therefore, step1
has yielded two straight lines by the intersections of plane/(eq1) with plane2 (eq2) and plane3
(eq3).
Step 2: Follow a similar procedure. Now R_2' (the 2nd row) becomes the pivot row and a_22' becomes the pivotal element:

[a_11  a_12   a_13 ] [x_1]   [b_1 ]
[ 0    a_22'  a_23'] [x_2] = [b_2']
[ 0     0     a_33''] [x_3]  [b_3'']

where
a_33'' = a_33' - (a_32'/a_22') a_23'
b_3''  = b_3'  - (a_32'/a_22') b_2'

or, in the simplest form, R_3'' = R_3' - R_2' (a_32'/a_22')
Note: At the end of step (2) an upper triangular matrix is obtained. The three modified
algebraic equations are as follows:
This is a new set of three linearly independent algebraic equations to solve! Here ‘new’ means
‘modified’.
Reverse (back) substitution (to determine x̄):

x_3 = b_3'' / a_33''                          (from the last row)
x_2 = (b_2' - a_23' x_3) / a_22'              (from the 2nd-last row)
x_1 = (b_1 - a_12 x_2 - a_13 x_3) / a_11      (from the 1st row)
Evaluation of determinant:
Apply GE method
A = [a_11 a_12; a_21 a_22]  →(GE)→  A' = [a_11 a_12; 0 a_22'],   a_22' = a_22 - (a_21/a_11) a_12

determinant of the modified matrix A' = a_11 a_22' = (a_11 a_22 - a_21 a_12) = same as det(A)
“Value of determinant is not changed by the forward elimination step of GE”. This must be
true because forward step only modifies the equations.
Take a 3 x 3 matrix:

[a_11 a_12 a_13]          [a_11  a_12   a_13 ]
[a_21 a_22 a_23]  →(GE)→  [ 0    a_22'  a_23']
[a_31 a_32 a_33]          [ 0     0     a_33'']
      [A]                        [A']

Therefore, det(A) = a_11 x a_22' x a_33''  (the product of the diagonal elements of the UTM).

Note that swapping rows of a matrix does not alter the magnitude of the determinant; however, the sign changes as (-1)^k, where k is the number of times rows are swapped.
Re-visit
Alternatively,
If a_33'' = 0, det[A] = 0 ⇒ the matrix is singular; the rank of the matrix = 2 (< 3); only the first two equations are linearly independent; the problem is under-determined and one more equation is required to solve for (X_1, X_2, X_3). Geometrically, the 3rd plane is parallel to one of the other two planes.

[Note: Y = 1/X,  dY/dX = -1/X^2]
In such case, the problem is said to be ill-conditioned. To invert such a matrix, large machine
precision may be required, and one has to be careful with round-off errors.
Recall, the determinant of a matrix is the product of the diagonal elements of the UTM. Therefore, the pivotal elements should not only be non-zero, but also large; in other words, the matrix should be "diagonally strong".
0.0001 x_1 + x_2 = 1     (A)
x_1 + x_2 = 2

Working to the 4th decimal, applying GE with 0.0001 as the pivotal element gives X^T = [0.0000 1.0000], which is wrong. Now, swap the equations as

x_1 + x_2 = 2
0.0001 x_1 + x_2 = 1

This is called pivoting: swap the rows so that the pivotal element is relatively larger. Also, scaling (multiplying equation (A) by 10000) can be done before applying GE.
[1 -1 2; 2 -2 3; 1 1 1][x_1; x_2; x_3] = [-8; -20; -2]

Step 1:

[1 -1 2; 0 0 -1; 0 2 -1][x_1; x_2; x_3] = [-8; -4; +6]

Row pivoting is required (swap the 2nd equation with the 3rd so that the pivotal element is non-zero):

[1 -1 2; 0 2 -1; 0 0 -1][x_1; x_2; x_3] = [-8; +6; -4]

We now have a UTM. Reverse substitution will yield

x^T = [-11 5 4]
Alternatively, apply column pivoting. Column 2 is swapped with column 3 to make the pivotal element ≠ 0:

[1 2 -1; 0 -1 0; 0 -1 2][x_1; x_3; x_2] = [-8; -4; +6]

However, note that the variables x_2 and x_3 must also be swapped in the column vector x̄, else you would be solving a different set of equations!

(left) the system read with the original variable order (a different system!) / (right) the correctly swapped equations:

x_1 + 2x_2 - x_3 = -8          x_1 + 2x_3 - x_2 = -8
           -x_2 = -4                      -x_3 = -4
    -x_2 + 2x_3 = +6               2x_2 -  x_3 = +6

Proceed as follows (eliminate with R_3' = R_3 - R_2):

[1 2 -1; 0 -1 0; 0 0 2][x_1; x_3; x_2] = [-8; -4; 10]

Reverse substitution will yield x^T = [-11, 5, 4]
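The whole procedure — forward elimination with row pivoting, then reverse substitution — can be sketched as follows (Python is used only for illustration; the worked example above is reproduced):

```python
def gauss_solve(A, b):
    """Gauss elimination with partial (row) pivoting, then back substitution.
    A is n x n (list of lists), b has length n; both are copied, not modified."""
    n = len(A)
    A = [row[:] for row in A]
    b = b[:]
    for k in range(n - 1):
        # pivoting: bring up the row with the largest |entry| in column k
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]          # multiplier l_ik = a_ik / a_kk
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    # reverse (back) substitution on the upper triangular system
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

print(gauss_solve([[1, -1, 2], [2, -2, 3], [1, 1, 1]], [-8, -20, -2]))
# -> [-11.0, 5.0, 4.0]
```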
Lecture #04
Gauss-Jordan
1. The procedure is similar to that of GE.
2. Normalize each pivot row by dividing it by the pivotal element.
3. Eliminate the unknown from all rows, below and above the pivot row.
𝐴𝑋� = 𝑏�
𝑎11 𝑎12 𝑎13 𝑥1 𝑏1
�𝑎21 𝑎22 𝑎23 � �𝑥2 � = �𝑏2 �
𝑎31 𝑎32 𝑎33 3×3 𝑥3 𝑏3
Step 1:  a_1j' = a_1j / a_11,  j = 1,3   (normalize a_11 (pivotal element) to 1.0)
b_1' = b_1 / a_11

[1  a_12'  a_13']         [b_1']
[0  a_22'  a_23'] [X̄]  =  [b_2']
[0  a_32'  a_33']         [b_3']

Step 2:  a_2j'' = a_2j' / a_22',  j = 2,3   (normalize the pivotal element of the 2nd row to 1.0)
b_2'' = b_2' / a_22'

a_1j'' = a_1j' - a_2j'' x a_12'   subtract (a multiple of) the pivotal row from the row ABOVE.
b_1''  = b_1'  - b_2''  x a_12'   (Note: this is the extra step compared to GE.)

a_3j'' = a_3j' - a_2j'' x a_32'   subtract (a multiple of) the pivotal row from the row BELOW.
b_3''  = b_3'  - b_2''  x a_32'   (This is the same step as in GE.)
[1 0 a_13''; 0 1 a_23''; 0 0 a_33''] X̄ = [b_1''; b_2''; b_3'']

and after the third (final) step:

[1 0 0; 0 1 0; 0 0 1][x_1; x_2; x_3] = [b_1'''; b_2'''; b_3''']
Make a note that G-J method can also be used to determine inverse of the matrix, if
working on the augmented matrix. Check text books for more details.
example: [1 -1 2; 1 1 1; 2 -2 3] X̄ = [-8; -2; -20]

⇒ [1 -1 2; 0 2 -1; 0 0 -1][x_1; x_2; x_3] = [-8; 6; -4]   (the pivotal element is already '1' ⇒ no need to normalize!)

⇒ [1 -1 2; 0 1 -0.5; 0 0 -1][x_1; x_2; x_3] = [-8; 3; -4]   (divide the pivotal row by 2 to make the pivotal element 1)

⇒ [1 0 1.5; 0 1 -0.5; 0 0 -1][x_1; x_2; x_3] = [-5; 3; -4]   (subtract multiples of the pivotal row from the rows above and below)

⇒ [1 0 1.5; 0 1 -0.5; 0 0 1][x_1; x_2; x_3] = [-5; 3; 4]   (make the pivotal element 1)

⇒ [1 0 0; 0 1 0; 0 0 1][x_1; x_2; x_3] = [-11; 5; 4]   (subtract multiples of the pivotal row from rows 1 & 2)

X^T = [-11 5 4]
You should write the programming code for G-J by modifying the code for GE (changing the loop indices i and j), instead of writing it fresh.
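As a sketch (Python for illustration), the G-J loop below differs from GE only in normalizing the pivot row and sweeping the rows above as well as below; no back substitution is needed:

```python
def gauss_jordan(A, b):
    """Gauss-Jordan: normalize each pivot row, then eliminate the unknown
    from the rows BELOW and ABOVE the pivot row (the extra step vs. GE).
    Returns x directly from the right-hand column."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]   # augmented matrix [A | b]
    for k in range(n):
        piv = M[k][k]                    # assumes no zero pivot (else: swap rows)
        M[k] = [v / piv for v in M[k]]   # normalize the pivotal element to 1
        for i in range(n):
            if i != k:
                m = M[i][k]
                M[i] = [M[i][j] - m * M[k][j] for j in range(n + 1)]
    return [M[i][n] for i in range(n)]

print(gauss_jordan([[1, -1, 2], [1, 1, 1], [2, -2, 3]], [-8, -2, -20]))
# -> [-11.0, 5.0, 4.0]   (the worked example above)
```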
LU Decomposition: Two similar methods — Doolittle's and Crout's.

A X̄ = b̄

(This method is considered the best if only the right-hand side b̄ changes from problem/case to problem/case.)

1. A = L U  (decompose)
2. L(U X̄) = b̄ ⇒ L d̄ = b̄ with d̄ = U X̄; solve for d̄ first
3. U X̄ = d̄ ⇒ solve for X̄
example: [1 -1 2; 1 1 1; 2 -2 3] X̄ = [-8; -2; -20]
A = [1 0 0; l_21 1 0; l_31 l_32 1] (3x3) x [U_11 U_12 U_13; 0 U_22 U_23; 0 0 U_33] (3x3)

Match the elements (row 1 gives U_11 = 1, U_12 = -1, U_13 = 2):

l_21 U_11 = 1,   l_21 U_12 + U_22 = 1,   l_21 U_13 + U_23 = 1
l_31 U_11 = 2,   l_31 U_12 + l_32 U_22 = -2,   l_31 U_13 + l_32 U_23 + U_33 = 3

Therefore, there are 9 unknowns & 9 equations to solve, and the problem is well defined. In practice, however, simply store the multiplication coefficients of GE as the coefficients l_21, l_31, and l_32 of the lower triangular matrix:

l_ik = a_ik / a_kk ;   k = 1 to n-1,  i = k+1 to n
It is clear that the programming code used for GE to reduce the matrix A to UTM (U) is the
same as that for determining U of LU decomposition method. In addition, the coefficients of L
are also determined automatically by defining an extra coefficient 𝑙𝑖𝑘 (see above) in the same
programming code.
Example:  A = [1 -1 2; 1 1 1; 2 -2 3]
Step 1:

A = [1 0 0; l_21=1 1 0; l_31=2 l_32=0 1] x [1 -1 2; 0 2 -1; 0 0 -1]

(Note l_21 is the coefficient required to make the element below U_11 zero; similarly, l_31 and l_32 are determined.)

Therefore,

[1 -1 2; 1 1 1; 2 -2 3] = [1 0 0; 1 1 0; 2 0 1] x [1 -1 2; 0 2 -1; 0 0 -1]
          A                         L                        U
Step 2:  L d̄ = b̄

[1 0 0; 1 1 0; 2 0 1][d_1; d_2; d_3] = [-8; -2; -20]

d_1 = -8;  d_2 = -2 + 8 = 6;  d_3 = -20 + 16 = -4
d^T = [-8 6 -4]

(Note: you have done forward substitution, or solved an LTM, which is just the opposite of solving a UTM in the reverse/backward substitution step of GE.)

Step 3:  U X̄ = d̄

[1 -1 2; 0 2 -1; 0 0 -1][X_1; X_2; X_3] = [-8; 6; -4]
(Note this is the reverse/backward substitution of GE)
You should not write a fresh code for the LU decomposition method. Rather, simply modify the code you wrote earlier for the 1st part (forward elimination) of GE, i.e., the code that reduces A to a UTM. Then, write a code for the forward substitution on the LTM, and reuse the previously written code for the back (reverse) substitution of GE. In other words, the code for LU decomposition has three sub-parts.
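The three sub-parts can be sketched as follows (Python for illustration; the worked example above is reproduced, including the L matrix with l_21 = 1, l_31 = 2, l_32 = 0):

```python
def lu_doolittle(A):
    """Doolittle LU factorization (no pivoting): A = L U with unit-diagonal L.
    U is the UTM from GE's forward step; L stores the multipliers
    l_ik = a_ik / a_kk. Assumes non-zero pivots."""
    n = len(A)
    U = [row[:] for row in A]
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i][k] = U[i][k] / U[k][k]
            for j in range(k, n):
                U[i][j] -= L[i][k] * U[k][j]
    return L, U

def lu_solve(L, U, b):
    """Solve L d = b (forward substitution), then U x = d (back substitution)."""
    n = len(b)
    d = [0.0] * n
    for i in range(n):
        d[i] = b[i] - sum(L[i][j] * d[j] for j in range(i))
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (d[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x

L, U = lu_doolittle([[1, -1, 2], [1, 1, 1], [2, -2, 3]])
print(L)                               # l_21 = 1, l_31 = 2, l_32 = 0, as above
print(lu_solve(L, U, [-8, -2, -20]))   # -> [-11.0, 5.0, 4.0]
```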
Lecture #05
Matrix Inverse:  A A^-1 = I  (definition of A^-1)

I = [1 0 0; 0 1 0; 0 0 1]
If A = [a_11 a_12 a_13; a_21 a_22 a_23; a_31 a_32 a_33], then each column b̄_j of A^-1 satisfies A b̄_j = ē_j, where ē_j is the j-th column of I. With A = LU, first solve L d̄_j = ē_j (e.g., L d̄_2 = [0 1 0]^T ⇒ determine d̄_2), then U b̄_j = d̄_j.

Example:  A = [25 5 1; 64 8 1; 144 12 1]. Applying GE,

U = [25 5 1; 0 -4.8 -1.56; 0 0 0.7]   (this is a UTM)

The coefficient used to make the 2nd element of the last row zero is 16.8/4.8 = 3.5.
Therefore,

L = [1 0 0; 2.56 1 0; 5.76 3.5 1]

or,

[25 5 1; 64 8 1; 144 12 1] = [1 0 0; 2.56 1 0; 5.76 3.5 1] x [25 5 1; 0 -4.8 -1.56; 0 0 0.7]
            A                            L                              U
1.  L d̄_1 = ē_1:

[1 0 0; 2.56 1 0; 5.76 3.5 1][d_1; d_2; d_3] = [1; 0; 0]

d_1 = 1,  d_2 = 0 - 2.56 = -2.56,  d_3 = 0 - 5.76 + 3.50 x 2.56 = 3.2

2.  U b̄_1 = d̄_1:

[25 5 1; 0 -4.8 -1.56; 0 0 0.7][b_11; b_21; b_31] = [d_1; d_2; d_3]

3.  L d̄_2 = ē_2:

[1 0 0; 2.56 1 0; 5.76 3.5 1][d_1; d_2; d_3] = [0; 1; 0]  ⇒  d_1 = 0, d_2 = 1, d_3 = -3.5

then U b̄_2 = d̄_2:

[25 5 1; 0 -4.8 -1.56; 0 0 0.7][b_12; b_22; b_32] = [0; 1; -3.5]

b̄_2^T = [-0.0833 1.417 -5.0]     (b_32 = -3.5/0.7 = -5.0)

Similarly,

b̄_3^T = [0.03571 -0.4643 1.429]
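The column-by-column inversion can be sketched as follows (Python for illustration; the notes' 25/5/1 matrix is used, with the third row taken as (144, 12, 1), which is consistent with l_31 = 5.76 and l_32 = 3.5 above):

```python
def lu(A):
    """Doolittle factorization A = L U (unit-diagonal L, no pivoting)."""
    n = len(A)
    U = [r[:] for r in A]
    L = [[float(i == j) for j in range(n)] for i in range(n)]
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i][k] = U[i][k] / U[k][k]
            for j in range(k, n):
                U[i][j] -= L[i][k] * U[k][j]
    return L, U

def inverse(A):
    """Invert A column by column: factor once, then for each unit vector e_j
    solve L d = e_j (forward) and U b = d (backward); b is column j of A^-1."""
    n = len(A)
    L, U = lu(A)
    inv = [[0.0] * n for _ in range(n)]
    for col in range(n):
        e = [float(i == col) for i in range(n)]
        d = [0.0] * n
        for i in range(n):                       # forward substitution on LTM
            d[i] = e[i] - sum(L[i][j] * d[j] for j in range(i))
        x = [0.0] * n
        for i in range(n - 1, -1, -1):           # back substitution on UTM
            x[i] = (d[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
        for i in range(n):
            inv[i][col] = x[i]
    return inv

A = [[25, 5, 1], [64, 8, 1], [144, 12, 1]]
Ainv = inverse(A)        # column 2 is approximately [-0.0833, 1.417, -5.0]
```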
(a) GE: The code has two distinct parts — forward elimination and reverse substitution — and it is recommended that one clearly separates the two steps as "subroutines". You will later see that the other methods also often require one or both of these steps/codes.
(b) GJ: The programming code for GJ is similar to that of the forward elimination of GE, with some modification. The second part (code) of GE, i.e., reverse substitution, is not required. The modification, or extra step, is simple. First, pivotal elements should be normalized to 1; in addition to subtracting the modified pivotal row from the rows below the i-th row as in GE, the pivotal row is also subtracted from the rows above. Therefore, one extra line is included in the code for GJ.
(c) The code to determine the determinant of the matrix A is the same as that for converting A to a UTM, i.e., the first (forward elimination) step of GE. Further, add a line to compute the product of the diagonal elements of the UTM, prod(a_ii, i = 1..n). One can put in a flag to check if any of the a_ii is zero, in which case the matrix is singular and cannot be inverted. One can also write a simple code to count the number (#) of zeros among the diagonal elements and, therefore, determine the rank of the matrix as (n - #).
(d) LU method: The programming code is similar to that of the forward step of GE, which determines the UTM. An extra line in the programming loop is required to store the coefficients l_ik = (a_ik / a_kk), used to make the elements of the k-th column below the pivot zero, as the elements of the empty (lower) part of the LTM:

A = [LTM] x [UTM]
(e) Inverse of the matrix: First, the programming code of LU is used. The forward substitution on the LTM will yield d̄ (column matrix), and the backward substitution on the UTM will yield b̄ (column matrix) (already available). Therefore, one new code/subroutine is required for the forward substitution on the LTM, which is not very different from that for the reverse substitution on the UTM:

• A = LU  (code # d)
• Forward substitution → d̄  (similar to code # a)
• Backward substitution → b̄  (code # a)
Lecture #06
Thomas Algorithm (Tridiagonal system/matrix)
𝐴𝑋� = 𝑏�
[a_11 a_12  0    0   ...  0      0      ] [X_1]   [b_1]
[a_21 a_22 a_23  0   ...  0      0      ] [X_2]   [b_2]
[ 0   a_32 a_33 a_34 ...  0      0      ] [X_3] = [b_3]
[ 0    0   a_43 a_44 a_45 ...    0      ] [...]   [...]
[ 0    0    0   ...    a_n,n-1  a_nn    ] [X_n]   [b_n]
A tridiagonal matrix has 3 non-zero elements in each of its rows, except the 1st and last rows, with one element on each side (left and right) of the diagonal element. The 1st and last rows have one off-diagonal element, on the right and left of the diagonal element, respectively. It is a banded matrix around its diagonal.
Therefore, a tridiagonal matrix can be represented using single subscripted indices for its
elements:
[b_1 c_1  0   0  ...  0 ]
[a_2 b_2 c_2  0  ...  0 ]
[ 0  a_3 b_3 c_3 ...  0 ]        (zeros outside the band)
[     a_4 b_4 c_4       ]
[          ...          ]
[          a_n b_n      ]
Note that the elements in the i-th row are represented as (a_i, b_i, c_i). All b's are on the diagonal.
Therefore, 𝐴𝑋� = 𝑑̅, where 𝐴 is a tridiagonal matrix and it represents the following set of linear
algebraic equations:
b_1 x_1 + c_1 x_2 = d_1
a_2 x_1 + b_2 x_2 + c_2 x_3 = d_2
a_3 x_2 + b_3 x_3 + c_3 x_4 = d_3
a_4 x_3 + b_4 x_4 + c_4 x_5 = d_4
...
a_n x_(n-1) + b_n x_n = d_n
⇒ You must have noted that for the i-th row, b_i is the diagonal element multiplied with the variable x_i in the same row, whereas a_i is multiplied with x_(i-1) (the variable of the row above) and c_i is multiplied with x_(i+1) (the variable of the row below). Naturally, the first and last rows have only two variables.
⇒(Tridiagonal system is quite common when using Finite Difference 2nd order method to solve
boundary value problems or partial differential 𝑒𝑞 𝑛𝑠 .)
Step 1:

x_1 is eliminated:  (b_2 - (c_1/b_1) a_2) x_2 + c_2 x_3 = d_2 - (d_1/b_1) a_2
(by dividing the 1st row by b_1, multiplying by a_2, and subtracting it from row 2)

x_2 is eliminated:  (b_3 - (c_2/b_2) a_3) x_3 + c_3 x_4 = d_3 - (d_2/b_2) a_3
...
x_(k-1) is eliminated:  (b_k - (c_(k-1)/b_(k-1)) a_k) x_k + c_k x_(k+1) = d_k - (d_(k-1)/b_(k-1)) a_k
It is clear that, in general:  b_k ← b_k - (c_(k-1)/b_(k-1)) a_k  and  d_k ← d_k - (d_(k-1)/b_(k-1)) a_k.
(It is important to note that 𝑘 𝑡ℎ row uses the latest modified values of 𝑏𝑘 and 𝑑𝑘 from the
previous (k-1) step. Therefore, there is no need to store the previous values of (a, b, d), while
writing the code)
Back-substitution:

X_n = d_n / b_n                                (the n-th row has only b_n and d_n)
X_(n-1) = (d_(n-1) - c_(n-1) X_n) / b_(n-1)    (the (n-1)-th row has b_(n-1), c_(n-1), and d_(n-1))
Tridiagonal (N, a, b, c, d, X):

do i = 1, N
    set a(i), b(i), c(i), d(i)
end do

do i = 2, N                                  ! forward elimination
    b(i) = b(i) - a(i) * c(i-1)/b(i-1)
    d(i) = d(i) - a(i) * d(i-1)/b(i-1)
end do

X(N) = d(N)/b(N)

do i = N-1, 1, -1                            ! reverse substitution
    X(i) = (d(i) - c(i) * X(i+1)) / b(i)
end do

(It is important to note that the i-th row uses the latest modified values of b and d from the previous row; therefore, there is no need to store the previous values of (a, b, d) while writing the code.)
Indirect Methods
GE, GJ, LU decomposition are the direct methods to solve 𝐴𝑋� = 𝑏�. Simple iterations
can also be done to solve a set of algebraic equations, Jacobi and Gauss-Seidel being the two
commonly used indirect methods.
Ex.  [2 1 0; 1 2 1; 0 1 1][X_1; X_2; X_3] = [1; 2; 4]

Make a guess X_1^(1), X_2^(1), X_3^(1).

            Jacobi                                Gauss-Seidel
X_1^(2) = (1 - X_2^(1)) / 2               X_1^(2) = (1 - X_2^(1)) / 2
X_2^(2) = (2 - X_1^(1) - X_3^(1)) / 2     X_2^(2) = (2 - X_1^(2) - X_3^(1)) / 2
X_3^(2) = (4 - X_2^(1)) / 1               X_3^(2) = (4 - X_2^(2)) / 1

Jacobi takes all Xs from the previous iteration; Gauss-Seidel uses the latest iterated values of the Xs.

In general, for A X̄ = b̄:

Jacobi:  X_i^(k+1) = (b_i - sum_j a_ij X_j^(k)) / a_ii ,   j ≠ i,  a_ii ≠ 0 (diagonal elements)
At any i-th row, the Gauss-Seidel sweep uses:

columns 1 ... i-1 → updated values (k+1);   column i → being computed;   columns i+1 ... N → old values (k)

Schematically, row i of A multiplies the vector [X_1 ... X_(i-1) (updated), X_i, X_(i+1) ... X_n (old)].
G-S (modified with a relaxation factor λ); λ = 1 recovers the unmodified G-S iteration.
Look at it differently! : the L + D + U splitting

[a_11 a_12 ... a_1n]
[        ...       ]   =
[a_n1  ...    a_nn] (n x n)

[ 0              ]   [a_11          ]   [0 a_12 ... a_1n   ]
[a_21  0         ] + [     ...      ] + [       ...        ]
[  ...           ]   [              ]   [         a_(n-1,n)]
[a_n1 ... a_n,n-1 0] [         a_nn ]   [              0   ]

strictly lower triangular matrix (L)  +  D (diagonal elements only)  +  strictly upper triangular matrix (U)
𝐴=𝐿+𝐷+𝑈
A X̄ = b̄  ⇒  D X̄ = b̄ - L X̄ - U X̄,  or  X̄ = D^-1 (b̄ - L X̄ - U X̄)

Jacobi:  X̄^(k+1) = D^-1 (b̄ - (L + U) X̄^(k))

or  X_i^(k+1) = (b_i - sum_j X_j^(k) a_ij) / a_ii ;   j ≠ i  (j = i would represent the diagonal element)
𝐆 − 𝐒 ∶ (𝐿 + 𝐷)𝑋� = 𝑏� − 𝑈𝑋�
Both methods will lead to the same solution, with a different number of iterations.
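Both sweeps can be sketched as follows (Python for illustration). The example system above is symmetric positive definite, so Gauss-Seidel converges; Jacobi also happens to converge here, since its iteration matrix has spectral radius below 1:

```python
def gauss_seidel(A, b, x0=None, tol=1e-10, max_iter=500):
    """Gauss-Seidel: sweep the rows, always using the LATEST updated values."""
    n = len(b)
    x = list(x0) if x0 is not None else [0.0] * n
    for _ in range(max_iter):
        delta = 0.0
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            new = (b[i] - s) / A[i][i]
            delta = max(delta, abs(new - x[i]))
            x[i] = new
        if delta < tol:
            break
    return x

def jacobi(A, b, x0=None, tol=1e-10, max_iter=500):
    """Jacobi: every update uses only the PREVIOUS iteration's values."""
    n = len(b)
    x = list(x0) if x0 is not None else [0.0] * n
    for _ in range(max_iter):
        new = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
               for i in range(n)]
        if max(abs(new[i] - x[i]) for i in range(n)) < tol:
            return new
        x = new
    return x

A = [[2, 1, 0], [1, 2, 1], [0, 1, 1]]     # the example system above
b = [1, 2, 4]
x = gauss_seidel(A, b)                    # converges to [3, -5, 9]
```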
Lecture #07
Homogeneous linear algebraic equations
Naturally, a trivial solution is 𝑥̅ = 0 . In the simplest geometrical term, two straight lines or
three planes intersect at the origin:
𝑥1 − 𝑥2 = 0
�⇒ 𝑥2
𝑥2 − 0.3𝑥2 = 0
𝑥1
Compare this situation to the earlier discussed set of non-homogeneous linear algebraic
equation 𝐴𝑋� = 𝑏� , where we sought a unique solution, when 𝑑𝑒𝑡|𝐴| ≠ 0. Comparatively,
𝐴𝑋� = 𝑏� 𝐴𝑋� = 0
𝑟=𝑚 𝑟<𝑚
𝐸𝑖𝑔𝑒𝑛𝑣𝑎𝑙𝑢𝑒 𝑝𝑟𝑜𝑏𝑙𝑒𝑚
Let us look at

A X̄ = λ X̄     (1)     where A is the coefficient matrix and λ is a non-zero number.

With the identity matrix I = [1 0 0; 0 1 0; 0 0 1], this can also be written as

(A - λI) X̄ = 0     (2)

Equation (1) or (2) represents an eigenvalue problem, and such equations commonly occur in several dynamic studies of distillation, adsorption, and CSTRs, viz. in analyzing the dynamics dȳ/dt around the steady state dȳ/dt = 0.
or
Most vectors X̄ will not satisfy such an equation. A general vector (iX_1 + jX_2) will change direction and magnitude to (ib_1 + jb_2) on transformation by A. Only certain special vectors X̄, called eigenvectors, corresponding to special numbers λ (either +ve or -ve), called eigenvalues, will satisfy the above equation. In such a case, the eigenvector X̄ does not change its direction (does not rotate) when transformed by the coefficient matrix A, but is only scaled by the eigenvalue λ. See the geometrical representation below:
(Figure: left — a general vector X̄ is rotated and stretched to b̄ by A; right — an eigenvector X̄ keeps its direction and is only scaled to λX̄.)
It turns out that an (n x n) matrix A will give 'n' λs (eigenvalues), and each eigenvalue will give one eigenvector; the n eigenvectors are linearly independent. As an example,
Both vectors {𝑋}1 , & {𝑋}2 will satisfy 𝐴𝑋� = 𝜆𝑋�. They will also be linearly independent so that
they can be used as a basis for the space description. They may be orthogonal or non-
orthogonal.
(Figure: the two eigenvector directions in the (X_1, X_2) plane; the dotted and solid lines represent the two eigenvectors scaled by λ_1 and λ_2, respectively.)
Similarly, a (3 x 3) matrix A will give 3 λs (λ_1, λ_2, λ_3), each with its own eigenvector; the three independent vectors {X}_1, {X}_2, {X}_3 satisfy A X̄ = λ X̄.
or (𝐴 − 𝜆𝐼)𝑋� = 0
Linear algebra tells us that for 𝑋� to have a non-trivial solution 𝑑𝑒𝑡|(𝐴 − 𝜆𝐼)| = 0.
Example:  A = [2 1; 1 2];   A X̄ = λ X̄  or  (A - λI) X̄ = 0

det[2-λ 1; 1 2-λ] = 0  ⇒  (2 - λ)^2 - 1 = 0  ⇒  λ = 1, 3

λ_1 = 1:  [2 1; 1 2][X_1; X_2] = 1 [X_1; X_2]  ⇒  X_2 = -X_1, so X̄(λ_1 = 1): (i - j) ≡ [1; -1]

λ_2 = 3:  [2 1; 1 2][X_1; X_2] = 3 [X_1; X_2]  ⇒  X_2 = X_1, so X̄(λ_2 = 3): (i + j) ≡ [1; 1]
Any vector B̄ in the plane can be written in the basis of the two eigenvectors:  B̄ = c_1 (i + j) + c_2 (i - j).
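The 2 x 2 example can be checked numerically (a sketch; the quadratic formula applied to the characteristic polynomial λ^2 - (trace)λ + det = 0):

```python
import math

def eig2(a11, a12, a21, a22):
    """Eigenvalues of a 2x2 matrix from its characteristic polynomial
    lambda^2 - (trace) lambda + det = 0 (real roots assumed here)."""
    tr, det = a11 + a22, a11 * a22 - a12 * a21
    disc = math.sqrt(tr * tr - 4 * det)
    return (tr - disc) / 2, (tr + disc) / 2

print(eig2(2, 1, 1, 2))      # (1.0, 3.0)
# eigenvector check: A(1,-1) = 1*(1,-1) and A(1,1) = 3*(1,1)
assert (2 * 1 + 1 * (-1), 1 * 1 + 2 * (-1)) == (1, -1)
assert (2 * 1 + 1 * 1, 1 * 1 + 2 * 1) == (3, 3)
```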
System of ODEs
It may represent dynamics, i.e. how a system behaves due to a perturbation around steady state
value. Such situation is common in kinetics and heat and mass transfer related problems of
chemical engineering.
dX_1/dt = f_1(X_1 ... X_n)
dX_2/dt = f_2(X_1 ... X_n)       ⇒ dynamics
...
dX_n/dt = f_n(X_1 ... X_n)

or  dX̄/dt = f̄(X̄),   [X̄]^T ≡ [X_1, ..., X_n]
At steady state (SS):

f_1(X_1 ... X_n) = 0
f_2(X_1 ... X_n) = 0       (these sets of algebraic equations may represent energy or species balances under SS)
...
f_n(X_1 ... X_n) = 0

X̄(t) = ?  Define the perturbation ξ̄ = X̄(t) - X̄_s:

dξ̄/dt = dX̄(t)/dt = f̄(X̄),   or   dξ_i/dt = f_i(X_1 ... X_n),  (i = 1 ... n)
𝜉1 = (𝑋1 − 𝑋1𝑠 ), 𝜉2 = (𝑋2 − 𝑋2𝑠 ) … … . , 𝜉𝑛 = (𝑋𝑛 − 𝑋𝑛𝑠 )
(Taylor series expansion for multivariables: retain 1st order terms only, i.e. small departure from SS)
[dξ_1/dt]     [∂f_1/∂X_1  ∂f_1/∂X_2 ... ∂f_1/∂X_n] [ξ_1]
[  ...  ]  =  [∂f_2/∂X_1  ∂f_2/∂X_2 ... ∂f_2/∂X_n] [ξ_2]
[dξ_n/dt]     [    ...                    ...     ] [...]
              [∂f_n/∂X_1  ∂f_n/∂X_2 ... ∂f_n/∂X_n] [ξ_n]

i.e.,  dξ̄/dt = A ξ̄  or  ξ̄' = A ξ̄,

where ξ̄ = (X̄ - X̄_s) and A = [∂f_i/∂X_j] (all derivatives evaluated at the SS) is called the Jacobian matrix of f̄.
Lecture #08 (continue..)
Seeking a solution to dξ̄/dt = A ξ̄ or ξ̄' = A ξ̄.

Substitute ξ̄ = z̄ e^(λt):   λ z̄ e^(λt) = A z̄ e^(λt),   or   (A - λI) z̄ = 0.

Thus, we have a characteristic equation which describes the characteristics of the system dynamics, AND this equation represents an eigenvalue problem. In summary, we started from

dX̄/dt = f̄(X̄)  and got  dξ̄/dt = A ξ̄   (where A = Jacobian matrix of f̄ and ξ̄ = z̄ e^(λt))
Now, we have (A - λI) z̄ = 0
or  det[a_11 - λ   a_12  ...  a_1n;  ... ;  a_n1  ...  a_nn - λ] = 0
Solve for 𝑛 𝜆𝑠 ′ (eigenvalues) from the 𝑛𝑡ℎ degree polynomial on ′𝜆′ after you have expanded the
(𝑛 × 𝑛) |𝐴 − 𝜆𝐼|matrix to determine its determinant. Recall, every 𝜆𝑖 (𝑖 = 1 ⋯ ⋯ 𝑛) will give
one 𝑧̅(𝑖) eigenvector so that 𝐴𝑧̅(𝑖) = 𝜆𝑖 𝑧̅ (𝑖) . In other words,
λ_1 → z̄^(1) e^(λ_1 t)
λ_2 → z̄^(2) e^(λ_2 t)       (and all these eigenvectors are linearly independent)
...
λ_n → z̄^(n) e^(λ_n t)
(𝑁𝑜𝑡𝑒 𝑋� = 𝑋�𝑠 + 𝜉 ̅)
The numerical values of λ decide how the solution will behave, i.e., whether the error ξ (perturbation, or deviation from SS) will grow or decay. If the real part of λ is -ve, y → y_s, or the solution will decay; if +ve, the solution will grow ⇒ (unstable).
Example:  dX̄/dt = A X̄,   A = [-3 2; 1 -2]

Determine the λs:  det[-3-λ 2; 1 -2-λ] = 0  ⇒  λ^2 + 5λ + 4 = 0   (characteristic polynomial)
𝜆1 = −1, 𝜆2 = −4
Determine the corresponding eigenvectors:

λ_1 = -1:  [-3 2; 1 -2][X_1; X_2] = -1 [X_1; X_2]

or  -3X_1 + 2X_2 = -X_1  and  X_1 - 2X_2 = -X_2  (r = 1 < m)  ⇒  X_1 = X_2  ⇒  X̄^(1) = [1; 1]

λ_2 = -4:  X_1 = -2X_2  ⇒  X̄^(2) = [-2; 1]

Check that X̄^(1) and X̄^(2) are linearly independent.

⇒ Note: both λs are -ve. Therefore, the solutions will decay to the SS values: the perturbation will die/decay, the solutions will not grow, and the system is stable.
General solution:

X̄ = C_1 X̄^(1) e^(λ_1 t) + C_2 X̄^(2) e^(λ_2 t)

or  [X_1; X_2] = C_1 e^(-t) [1; 1] + C_2 e^(-4t) [-2; 1]

or  X_1 = C_1 e^(-t) - 2 C_2 e^(-4t),   X_2 = C_1 e^(-t) + C_2 e^(-4t)
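The predicted decay can be checked by direct time integration (a sketch using the explicit Euler method, which is a rough illustration; time integrators and their accuracy/stability are discussed later):

```python
def euler_system(A, x0, dt, steps):
    """Integrate dX/dt = A X with the explicit Euler method:
    X(t + dt) = X(t) + dt * A X(t)."""
    x = list(x0)
    n = len(x)
    for _ in range(steps):
        dx = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        x = [x[i] + dt * dx[i] for i in range(n)]
    return x

A = [[-3, 2], [1, -2]]        # eigenvalues -1 and -4: both negative => stable
x = euler_system(A, [1.0, 1.0], dt=0.01, steps=2000)   # integrate to t = 20
print(x)                      # both components have decayed essentially to zero
```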
(Note: For repeat or complex 𝜆, check the books for the methods to determine 𝑋�)
Ex.  [A] = [0 2 3; -10 -1 2; -2 4 7];  determine all λs and X̄s.

From det[A - λI] = 0  ⇒  λ_1 = 1, λ_2 = 2, λ_3 = 3

λ_1 = 1:  [A - λI] X̄ = 0  ⇒  [-1 2 3; -10 -2 2; -2 4 6][X_1; X_2; X_3] = 0

Apply Gauss Elimination:

[-1 2 3; 0 -22 -28; 0 0 0][X_1; X_2; X_3] = 0
Choose X_3 = 1:

X_1 = 2X_2 + 3X_3
-22X_2 - 28X_3 = 0    ⇒  X_2 = -14/11,  X_1 = 5/11

Therefore,

X̄^(1) = [5/11; -14/11; 1]
Similarly, for

λ_2 = 2:  [X̄^(2)]^T = [1/2  -1  1]
λ_3 = 3:  [X̄^(3)]^T = [1/2  -3/4  1]

Note: These are all linearly independent (X̄^(1) ≠ C_1 X̄^(2) + C_2 X̄^(3)). They may not be orthogonal.
(Figure: two masses m_1 and m_2 connected by springs of stiffness K to each other and to the walls, displaced by X_1 and X_2; spring forces K X_1 and K(X_1 - X_2).)

Force balance on masses m_1 and m_2, when the two blocks are displaced by X_1 & X_2 to the right of their original positions:
m_1 d^2X_1/dt^2 = -K X_1 + K(X_2 - X_1)
m_2 d^2X_2/dt^2 = -K(X_2 - X_1) - K X_2          X̄ = [X_1; X_2]

Assume a solution X_i = A_i sin(ωt)   (or X̄ = Ā sin(ωt))
Therefore, X_i'' = -A_i ω^2 sin(ωt). Substituting,

(2K/m_1 - ω^2) A_1 - (K/m_1) A_2 = 0
-(K/m_2) A_1 + (2K/m_2 - ω^2) A_2 = 0

or

[ (2K/m_1 - ω^2)      -K/m_1      ] [A_1]
[    -K/m_2        (2K/m_2 - ω^2) ] [A_2]  =  0
Therefore, the characteristic equation describes an eigenvalue problem (or spring dynamics is
an eigenvalue problem). It has assumed the form of
(𝐴 − 𝜔2 𝐼)𝑋� = 0
(A - ω^2 I) X̄ = 0,   where ω^2 = λ,  A = [2K/m_1  -K/m_1; -K/m_2  2K/m_2]  and  X̄ = [X_1; X_2]

With m_1 = m_2 = 40 kg and K = 200 N/m:  A = [10 -5; -5 10]
det(A - ω^2 I) = 0:   (10 - ω^2)^2 - 25 = 0  ⇒  λ = ω^2 = 15, 5

λ_1 = 15:  A_1 = -A_2,  or  A^(1) = [1; -1]
λ_2 = 5:   A_1 = A_2,   or  A^(2) = [1; 1]
General solution:

[X_1; X_2] = C_1 [1; -1] sin(ω_1 t) + C_2 [1; 1] sin(ω_2 t),   ω_1 = sqrt(15), ω_2 = sqrt(5)

or  X_1 = C_1 sin(ω_1 t) + C_2 sin(ω_2 t)  and  X_2 = -C_1 sin(ω_1 t) + C_2 sin(ω_2 t)

(Figure: the two normal modes — for λ_1 = 15 the masses move out of phase (A_1 = -A_2); for λ_2 = 5 they move in phase (A_1 = A_2).)
Lecture #09
One more example of an eigenvalue, or characteristic, type of problem:

A X̄ = b̄  ⇒  (L + D + U) X̄ = b̄  ⇒  D X̄ = b̄ - (L + U) X̄

or  X̄ = S X̄ + C̄,  with S = -D^-1 (L + U) and C̄ = D^-1 b̄.

S is called the stationary (iteration) matrix because very often the matrix A, i.e. (L, D, U), does not change and is fixed. It is b̄ (the right-hand side) that varies from one problem to the other, as a forcing function.
Define the error at iteration #k as the difference between the approximate value at iteration #k and the true value:

ē^(k) = X̄^(k) - X̄,   and similarly  ē^(k+1) = X̄^(k+1) - X̄
Let {ν̄_j}, j = 1, n be the eigenvectors corresponding to the eigenvalues λ_j of the stationary matrix S (n x n). Recall that eigenvectors are linearly independent, so they can form the basis of the n-dimensional space; alternatively, any vector can be represented as a linear combination of the eigenvectors. Therefore,

ē^(0) = sum_j c_j ν̄_j ,   where S ν̄_j = λ_j ν̄_j   (e.g., S ν̄_1 = λ_1 ν̄_1, etc.)
The Gershgorin theorem (check the book by Strang) can be used to bound these eigenvalues. Note that zero (no) iterations are required to solve an identity matrix! Also, a diagonally strong matrix will require a relatively small number of iterations, whereas a singular matrix cannot be inverted. Similarly, a matrix with smaller values on its diagonal relative to the other elements in the same row will require a relatively larger number of iterations to converge.
Power Method
The previous examples have shown that the 'dynamics' of a 1st-order system can be inspected or characterized by determining λ_max and λ_min, instead of all λs. An n x n matrix has n λs; the overall response of the system is least sensitive to λ_min and most sensitive to λ_max, with the responses corresponding to the other λs intermediate. The power method is commonly used to determine λ_max (and also λ_min).
If the [A]_(n x n) matrix has n λs (eigenvalues), then one can order them as |λ_1| > |λ_2| > ... > |λ_n|, with λ_max = λ_1 and λ_min = λ_n, and A X̄ = λ_max X̄ for the corresponding eigenvector.

Iterate:  X̄^(1) = A X̄^(0) / ||A X̄^(0)||,  and in general

X̄^(k) = A X̄^(k-1) / ||A X̄^(k-1)|| ,   k = 1, 2, 3 ... (# of iterations)

In other words, after a sufficient # of iterations one obtains the maximum eigenvalue: the vector X̄^(k-1), when transformed by the matrix A, does not rotate and is just scaled, i.e. A X̄^(k) ≈ λ_max X̄^(k).
Example: Determine λ_max of A = [0 2 3; -10 -1 2; -2 4 7]

ν^(1) = [1; 0; 0] ≡ [1 0 0]^T   (1st guess)

ν^(2) = A ν^(1) / ||A ν^(1)|| = [0; -10; -2] / sqrt(100 + 4) = [0.0000; -0.9806; -0.1961]

λ^(2) = 5.9063
After 20 iterations,

X^(20) = [-0.3714; 0.55698; 0.7428]   with λ^(20) = 3.0014

You will see a gradual convergence from 10.198 to 3.0014. A X^(19) ≈ λ_max X^(20), and X^(19) and X^(20) will have approximately the same direction.
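The iteration above can be sketched as follows (Python for illustration; the Rayleigh-quotient-style estimate x·(Ax)/(x·x) is used for λ, which converges to the dominant eigenvalue 3 for this matrix):

```python
def power_method(A, x0, iters=60):
    """Power method: repeatedly apply A and normalize; the iterate aligns
    with the eigenvector of the dominant eigenvalue lambda_max."""
    x = list(x0)
    n = len(x)
    lam = 0.0
    for _ in range(iters):
        y = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        norm = sum(v * v for v in y) ** 0.5
        x = [v / norm for v in y]
        # eigenvalue estimate: x . (A x) with x normalized
        lam = sum(x[i] * sum(A[i][j] * x[j] for j in range(n)) for i in range(n))
    return lam, x

A = [[0, 2, 3], [-10, -1, 2], [-2, 4, 7]]   # eigenvalues 1, 2, 3 (see above)
lam, x = power_method(A, [1.0, 0.0, 0.0])
print(lam)                                  # converges toward 3
```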
It can be shown that λ_min of A = 1/λ'_max, where λ'_max is the largest eigenvalue of the inverse matrix A^-1:

A X̄ = λ X̄
or  X̄ = A^-1 (λ X̄) = λ (A^-1 X̄)
or  A^-1 X̄ = (1/λ) X̄

so applying the power method to B ≡ A^-1 (B X̄ = λ'_max X̄) gives λ'_max = 1/λ_min.
The λ_max to λ_min ratio is known as the stiffness ratio and indicates the stiffness of the system (the maximum rate of change in the functional value, corresponding to λ_max, relative to the minimum rate, corresponding to λ_min). A ratio > 10 indicates that the system is stiff, requiring a fine step size for calculations (to be discussed later).
(4) A matrix is singular if and only if it has a zero λ. A non-singular matrix has all non-zero λs.
(7) The 𝜆𝑠 of an upper or lower triangular matrix are the elements on its main diagonal.
Quiz 1
Lecture #10
Nonlinear Algebraic Equations
Solving

F_1(X_1 ... X_n) = 0
F_2(X_1 ... X_n) = 0        or  F̄(X̄) = 0
...
F_n(X_1 ... X_n) = 0

or  F̄(X̄) = [F_1(X̄); ...; F_n(X̄)] = 0

e.g.,  X_1 - sin X_2 = 0  and  X_1 = e^(X_2)
Closed Methods

(Figure: a function f(X) crossing zero at f(α) = 0, with the root bracketed between X_l and X_U; successive midpoints X_m1, X_m2 close in on the root.)

f(X_l) f(X_U) < 0: all it means is that one root is cornered or bracketed. Some functions may have multiple roots (be careful, and corner all three roots separately by plotting the function qualitatively but accurately).
Step 1: Choose X_l and X_U bracketing the root.

Step 2: X_m^(1) = (X_U + X_l)/2 = X_old

If f(X_l) f(X_m) < 0, the root lies in [X_l, X_m]: replace X_U with X_m. (This way, you are cornering the root, or coming closer to it.) (Note: the other check, f(X_m) f(X_U) < 0, will also work: if -ve, X_l = X_m; else X_U = X_m.)

Step 3: X_m^(2) = (X_U + X_l)/2 = X_new   (X_U & X_l are the new values)
It is clear that this equation is a polynomial in V with n = 3. Let us assume that the other variables and constants are known and the simplified equation takes the form f(V) = 0, with X_l = 0 and X_U = 0.11 found by plotting.

1. Check f(X_l) f(X_U) < 0, so that the root is bracketed (cornered).

2. X_m^(1) = (0 + 0.11)/2 = 0.055   (3 digits accurate after the decimal)

3. X_m^(2) = (X_l + X_U)/2 = (0.055 + 0.11)/2 = 0.0825 = X_new

4. Check ∈_a = |(0.0825 - 0.055)/0.0825| x 100 = 33.33% > 0.2% (∈_s)

Therefore, X_m = (0.055 + 0.0825)/2 = 0.06875

Do 10 iterations to obtain X_m = 0.0625 with ∈_a = 0.1721% < 0.2%
3
Ans. V (root) = 0.0625.
You should also prepare a table while solving, with one row per iteration (1 to 10) and columns:

#   X_l   X_U   X_m   ∈_a   sign of f(X_m)·f(X_l) [or f(X_m)·f(X_U)]   your decision (replace X_l or X_U)

The sign (> 0 or < 0) decides which bracket end is replaced at each iteration.
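The iteration table above can be generated programmatically. A minimal sketch follows; note that the cubic f(V) is an assumption — it is not written out in this part of the notes, but this form is consistent with the derivative f′(X) = 3X² − 0.33X used in the Newton-Raphson example and reproduces the quoted numbers (10 iterations, ∈_a = 0.1721%).

```python
def bisection(f, xl, xu, eps_s=0.2, max_iter=100):
    """Bisection: halve the bracket [xl, xu] (with f(xl)*f(xu) < 0) until
    the approximate relative error eps_a (%) falls below eps_s (%)."""
    assert f(xl) * f(xu) < 0, "root is not bracketed"
    xm_old = xl
    for i in range(1, max_iter + 1):
        xm = 0.5 * (xl + xu)
        eps_a = abs((xm - xm_old) / xm) * 100 if xm != 0 else float("inf")
        if f(xl) * f(xm) < 0:   # root lies in the left half
            xu = xm
        else:                   # root lies in the right half
            xl = xm
        if i > 1 and eps_a < eps_s:
            return xm, i, eps_a
        xm_old = xm
    return xm, max_iter, eps_a

# assumed cubic, consistent with f'(X) = 3X^2 - 0.33X used later
f = lambda V: V**3 - 0.165 * V**2 + 3.993e-4
root, n_iter, eps_a = bisection(f, 0.0, 0.11)
```

Run with the bracket (0, 0.11) and ∈_s = 0.2%, this reproduces the lecture's count of 10 iterations.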
(This method — Regula Falsi, or False Position — is similar to the previous one, except that a straight line is drawn connecting (X_l, f(X_l)) and (X_U, f(X_U)), and its intersection with the X-axis is taken as the approximate value of the root.)

1. Check f(X_l)·f(X_U) < 0, so that the root is bracketed.

2. X_r^(1) = X_U − f(X_U)(X_l − X_U)/(f(X_l) − f(X_U))

(from similar triangles: f(X_U)/(X_U − X_r) = (0 − f(X_l))/(X_r − X_l))

Depending on the sign check, set X_U = X_r or X_l = X_r, and then

X_r^(2) = [X_U f(X_l) − X_l f(X_U)]/[f(X_l) − f(X_U)] = X_new

3. Check ∈_a < ∈_s; else go to step 2 and iterate till convergence. Note that it is difficult to choose between the two methods. In general, this method is faster, but there are always exceptions — try f(X) = X¹⁰ − 1!
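The false-position update can be coded almost identically to bisection, swapping the midpoint for the chord intersection. A sketch (the cubic f is the same assumed polynomial as in the bisection example; it is not stated explicitly in the notes):

```python
def false_position(f, xl, xu, eps_s=0.2, max_iter=100):
    """Regula falsi: the new estimate is where the chord through
    (xl, f(xl)) and (xu, f(xu)) crosses the X-axis."""
    assert f(xl) * f(xu) < 0, "root is not bracketed"
    xr_old = xl
    for i in range(1, max_iter + 1):
        xr = (xu * f(xl) - xl * f(xu)) / (f(xl) - f(xu))
        eps_a = abs((xr - xr_old) / xr) * 100
        if f(xl) * f(xr) < 0:
            xu = xr
        else:
            xl = xr
        if i > 1 and eps_a < eps_s:
            return xr, i
        xr_old = xr
    return xr, max_iter

f = lambda V: V**3 - 0.165 * V**2 + 3.993e-4   # assumed cubic
root, n = false_position(f, 0.0, 0.11)
```

On this well-behaved cubic the method needs far fewer iterations than bisection; on f(X) = X¹⁰ − 1 it is much slower — the exception mentioned above.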
Open Methods
1. Fixed-Point Iteration (Successive Substitution)

Rearrange f(X) = 0 into the form X = g(X). Sometimes this is not straightforward, and there may be more than one possible combination (rearrangement). The working formula is the sequence

X_{n+1} = g(X_n),

i.e. the root is the intersection of the two curves Y = X and Y = g(X). Sometimes one has to start from the left of the root; sometimes there is divergence or spiral convergence, depending on the type of function.

[Figure: Y = X and Y = g(X) plotted together, with successive iterates X₀, X₁, X₂ showing fast convergence, slow (spiral) convergence, and divergence (because of a wrong starting value).]

Criterion for convergence: |g′(X)| < 1 (the slope of Y = X) at X = X_n.
Note: Closed methods often generate at least one root. Open methods may sometimes result in divergence, depending upon the type of function and the starting guess X₀. However, if they converge, they do so more quickly than the closed methods.
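The convergence criterion |g′(X)| < 1 can be seen in a tiny sketch. The rearrangement below (x = cos x for f(x) = x − cos x = 0) is an illustrative choice, not from the lecture; near its root |g′| = |sin x| ≈ 0.67 < 1, so the iteration converges.

```python
import math

def fixed_point(g, x0, tol=1e-8, max_iter=200):
    """Iterate x_{n+1} = g(x_n); converges when |g'(x)| < 1 near the root."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("fixed-point iteration did not converge")

root = fixed_point(math.cos, 0.5)   # root of x - cos x = 0
```

With a rearrangement for which |g′| > 1 near the root, the same loop would diverge — the failure mode sketched in the figure above.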
2. Newton-Raphson Method
(Most popular method to solve a non-linear algebraic equation)
1. Choose a starting guess X_r close to the root (plot f(X) first).
2. X₁ = X_r − f(X_r)/f′(X_r)   (Note: f′(X_r) ≠ 0)
3. If ∈_a = |(X₁ − X_r)/X₁| × 100 < ∈_s, stop: α = X₁ (root);
   else set X_r = X₁ and repeat the steps.
Sequence: X_{i+1} = X_i − f(X_i)/f′(X_i);  f′(X_i) ≠ 0

and ∈_{i+1} = |(X_{i+1} − X_i)/X_{i+1}| × 100
Notes:
• f′(X) ≠ 0 is required — the major drawback of the method.
• Simple to code, and convergence is fast (see later).
• There are cases of divergence and 'missing' roots, depending on the function and the initial guess.

[Figure: two failure sketches — iterates X₀, X₁, X₂, X₃ diverging away from the root, and a tangent step jumping past ('missing') a root.]

In all such cases, the best strategy is to plot and locate all roots graphically (qualitatively but accurately), and also to start close to the root.
The NR method can be derived from the Taylor series expansion showing the order of error
in the method.
f(X_{i+1}) = f(X_i) + f′(X_i)(X_{i+1} − X_i) + [f″(X_i)/2!](X_{i+1} − X_i)² + ⋯

= f(X_i) + f′(X_i)(X_{i+1} − X_i) + O(h²)   (error term)

If f(X_{i+1}) = 0, the root is located:

X_{i+1} = X_i − f(X_i)/f′(X_i)
Example: f(X) = X³ − 0.165X² + 3.993 × 10⁻⁴ (the cubic consistent with the numbers below), so f′(X) = 3X² − 0.33X.

Step 1. X₀ = 0.05: plot f(X) and choose X₀ close to the intersection of f(X) with the X-axis.
Step 2. X₁ = X₀ − f(X₀)/f′(X₀) = 0.06242.
Step 3. ∈_a = |(0.06242 − 0.0500)/0.06242| × 100 = 19.89%.
Step 4. X₂ = X₁ − f(X₁)/f′(X₁) = 0.06238.
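The two hand iterations above can be checked with a short NR routine — a sketch using the same cubic and stopping criterion:

```python
def newton_raphson(f, fprime, x0, eps_s=0.2, max_iter=50):
    """NR sequence x_{i+1} = x_i - f(x_i)/f'(x_i); stop when the
    approximate relative error (%) falls below eps_s."""
    x = x0
    for i in range(1, max_iter + 1):
        fp = fprime(x)
        if fp == 0.0:
            raise ZeroDivisionError("f'(x) = 0: NR breaks down")
        x_new = x - f(x) / fp
        eps_a = abs((x_new - x) / x_new) * 100
        x = x_new
        if eps_a < eps_s:
            return x, i
    return x, max_iter

f = lambda x: x**3 - 0.165 * x**2 + 3.993e-4
fp = lambda x: 3 * x**2 - 0.33 * x
root, n = newton_raphson(f, fp, 0.05)
```

From X₀ = 0.05 the method meets the 0.2% criterion in just two iterations — compare with the 10 iterations of bisection on the same problem.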
Lecture #11
Newton-Raphson (Multi-variables)
F₁(X₁, X₂, ⋯, X_n) = 0
F₂(X₁, X₂, ⋯, X_n) = 0
⋮
F_n(X₁, X₂, ⋯, X_n) = 0

⇒ F̄(X̄) = 0, or F_i(X̄) = 0, i = 1 ⋯ n
Taylor expansion of the multi-variable functions about the current iterate X̄^(k), setting F_i(X̄^(k) + ΔX̄) ≈ 0 (i = 1 ⋯ n):

[0; 0; ⋮; 0] = [F₁; F₂; ⋮; F_n]^(k) + [∂F₁/∂X₁  ∂F₁/∂X₂ ⋯ ∂F₁/∂X_n;
                                        ∂F₂/∂X₁       ⋯
                                        ⋮
                                        ∂F_n/∂X₁ ∂F_n/∂X₂ ⋯ ∂F_n/∂X_n]^(k) [ΔX₁; ΔX₂; ⋮; ΔX_n]

or J̄ ΔX̄^(k) = −F̄^(k)

or X̄^(k+1) = X̄^(k) − J̄⁻¹ F̄^(k) ⇒ multi-variable NR method (similar in form to the single-variable NR method), where J̄ is the Jacobian matrix.
For two variables:

[Y₁; Y₂]^(k+1) = [Y₁; Y₂]^(k) − [∂F₁/∂Y₁  ∂F₁/∂Y₂; ∂F₂/∂Y₁  ∂F₂/∂Y₂]⁻¹ [F₁; F₂]^(k)

Example (evaluated at the starting guess (0.5, 0.5)):

[Y₁; Y₂]^(1) = [0.5; 0.5] − [(−8 − 6Y₁²)  4; −4  (3 + 2Y₂)]⁻¹ [F₁; F₂] |_(0.5, 0.5)

e.g. Y₂^(1) = 0.5 − (1/22)(4 × 1.75 − 9.5 × 0.75) = 0.50568
- Continue the iterations till convergence. One can prepare a table like this:

#   Y₁   Y₂   F₁   F₂   ∂F₁/∂Y₁   ∂F₁/∂Y₂   ∂F₂/∂Y₁   ∂F₂/∂Y₂   ∈_a
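The multi-variable NR update can be sketched generically: solve the linear system J ΔX̄ = −F̄ each iteration instead of inverting J. The example system below is the pair X₁ − sin X₂ = 0, X₁ = e^(X₂) quoted in the previous lecture; the starting guess (0.04, −3.18) is an assumption chosen near a root.

```python
import numpy as np

def newton_system(F, J, x0, tol=1e-10, max_iter=50):
    """Multivariable NR: solve J(x^k) dx = -F(x^k), then x^{k+1} = x^k + dx."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        dx = np.linalg.solve(J(x), -F(x))
        x = x + dx
        if np.linalg.norm(dx) < tol:
            return x
    raise RuntimeError("did not converge")

# X1 - sin X2 = 0 and X1 - e^{X2} = 0
F = lambda x: np.array([x[0] - np.sin(x[1]), x[0] - np.exp(x[1])])
J = lambda x: np.array([[1.0, -np.cos(x[1])],
                        [1.0, -np.exp(x[1])]])
root = newton_system(F, J, [0.04, -3.18])
```

Using `np.linalg.solve` rather than an explicit inverse is the standard design choice: it is cheaper and better conditioned, but mathematically identical to X̄^(k+1) = X̄^(k) − J̄⁻¹F̄^(k).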
Secant Method
(Another common open method; it does not require computing the derivative. It requires two adjacent starting guess values instead.)
Step 1: Choose X_{i−1} and X_i, adjacent guess values near the root α, and draw the straight line (chord) through them:

X_{i+1} = X_i − f(X_i) × (X_i − X_{i−1})/(f(X_i) − f(X_{i−1}))

or X_{i+1} = X_i − f(X_i) / {[f(X_i) − f(X_{i−1})]/(X_i − X_{i−1})} : the sequence
Check if f(X_{i+1}) → 0 (you have hit the root); else repeat with X_i and X_{i+1}, following the above sequence.

• The method converges faster than the Bisection method, but slower than NR. It is preferred over NR to avoid evaluating f′.
• If X_i → X_{i−1}, note that X_{i+1} = X_i − f(X_i)/f′(X_i) (same as NR!), since the divided difference approaches the derivative.
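A minimal secant sketch, applied to the same assumed cubic used in the NR example:

```python
def secant(f, x0, x1, tol=1e-10, max_iter=100):
    """Secant sequence: x_{i+1} = x_i - f(x_i)(x_i - x_{i-1})/(f(x_i) - f(x_{i-1})).
    No derivative needed; two adjacent starting guesses instead."""
    for _ in range(max_iter):
        x2 = x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0))
        if abs(x2 - x1) < tol:
            return x2
        x0, x1 = x1, x2
    return x2

f = lambda x: x**3 - 0.165 * x**2 + 3.993e-4   # assumed cubic
root = secant(f, 0.05, 0.06)
```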
At this stage, let us look at Taylor's series. Any function f(X) can be expanded (exactly) as:

f(X_{i+1}) = f(X_i) + f′(X_i)(X_{i+1} − X_i) + [f″(X_i)/2!](X_{i+1} − X_i)² + ⋯ + R_{n+1}

Truncating the series gives an approximation; retaining the remainder R_{n+1} keeps it exact,

where R_{n+1} = [f^(n+1)(ξ)/(n+1)!](X_{i+1} − X_i)^(n+1)  (ξ between X_i and X_{i+1}; can be exactly determined)

Therefore, truncating after the first term (exactly),

f(X_{i+1}) = f(X_i) + f′(ξ)(X_{i+1} − X_i)/1!   (1st Mean Value Theorem)

In other words, f′(ξ) = [f(X_{i+1}) − f(X_i)]/(X_{i+1} − X_i).

All it means is that there is a ξ somewhere between X_i and X_{i+1} where the tangent equals the divided difference between the two end points X_i and X_{i+1}.
Note: Taylor series is used to determine the error or accuracy of a numerical method, or to
determine the convergence of the method.
Convergence Analysis

For the fixed-point sequence X^(n+1) = g(X^(n)), with root α = g(α) of f(X) = X − g(X):

e^(n+1) = |α − X^(n+1)| = |g(α) − g(X^(n))| = |g′(X̃) e^n|  (Mean Value Theorem)

e^(n+1)/e^n = |g′(X̃)| < 1 for the error to decrease as n → ∞, where X̃ lies between α and X^n.
NR Convergence

X^(k+1) = X^k − f_k/f_k′;  f_k′ ≠ 0

e^(k+1) = |α − X^(k+1)| = |α − X^k + f_k/f_k′|

Taylor-expand f about X^k and evaluate at the root α:

0 = f(α) = f(X^k) + f_k′(α − X^k) + f″(X̃)(α − X^k)²/2!

Dividing by f_k′ and combining with the expression for e^(k+1):

e^(k+1) = |[f″(X̃)/(2f_k′)](α − X^k)²| = |f″(X̃)/(2f_k′)| (e^k)²

e^(k+1)/(e^k)² = |f″(X̃)/(2f′(α))|;  f′(α) ≠ 0
As k → ∞, e^(k+1) decreases as the square of the previous step's error. In other words, the method has quadratic convergence (fast convergence), although the method is only 1st order accurate. This is the special feature of the NR method.
Lecture #12
Function Approximation: Interpolation
Fit a curve passing through the data points to predict the functional value at an intermediate point. The curve must pass through the data. The method is used to interpolate, for example, certain properties such as enthalpy and vapour pressure at an intermediate point where the functional values or properties are not reported or available. Therefore, the data provided for the interpolation must be reliable or accurate.

[Figure: data points (X_i, f_i) with a smooth interpolating curve passing through every point; the interpolated value is read at an intermediate X.]
- Mathematically, an interpolating function must be smooth (differentiable), continuous, and pass through all data points.
- Compare it to 'regression' (best fit to the data), where the line need not pass through each data point. Regression is used for prediction when the data are experimentally measured or approximately calculated, and may therefore contain errors.

[Figure: scattered experimental data (X_i, Y_i) with a best-fit regression line that does not pass through every point.]

(No regression topic is covered in this course; no extrapolation as well!)
[y₁; y₂; ⋮; y_n] = [1  x₁  x₁² ⋯ x₁^(n−1);
                    1  x₂  x₂² ⋯ x₂^(n−1);
                    ⋮
                    1  x_n  x_n² ⋯ x_n^(n−1)] [a₀; ⋮; a_(n−1)]

or ȳ = A_(n×n) ā.

Although the problem is well defined, it is tedious to solve for (a₀ ⋯ a_(n−1)) from a large n × n matrix.
The common methods to fit 𝑃𝑛−1 polynomial passing through ‘n’ data points, or 𝑃𝑛 through
(n + 1) data points:
(1) Newton’s divided-difference scheme
Consider 2 data points. Let a line pass through (X₀, Y₀) and (X₁, Y₁):

(Y − Y₀)/(X − X₀) = (Y₁ − Y₀)/(X₁ − X₀)

or Y = Y₀ + [(Y₁ − Y₀)/(X₁ − X₀)](X − X₀)

or Y = a₀ + a₁(X − X₀)

where a₀ = Y₀ and

a₁ = (Y₁ − Y₀)/(X₁ − X₀) ≡ Y[X₁, X₀], defined as the Newton First Divided Difference (N1DD).

[Figure: the straight line through (X₀, Y₀) and (X₁, Y₁).]
Similarly, define
Y[X₂, X₁, X₀] ≡ Newton 2nd Divided Difference (N2DD) = (Y[X₂, X₁] − Y[X₁, X₀])/(X₂ − X₀)

and, in general, from the table of data (i, x_i, y_i), i = 0, 1, 2, ⋯, n:

a_n = (Y[X_n, ⋯, X₁] − Y[X_(n−1), ⋯, X₀])/(X_n − X₀)
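The divided-difference table and the nested evaluation of the Newton polynomial can be sketched compactly. The three-point data set (−1, 10.1), (0, 11.3), (1, 11.9) used here appears in the lecture's worked example:

```python
def divided_differences(x, y):
    """Return NDD coefficients [a0, a1, ..., a_{n-1}];
    a_k is the k-th divided difference Y[X_k, ..., X_0]."""
    n = len(x)
    coef = list(y)
    for k in range(1, n):
        # overwrite in place: coef[i] becomes Y[x_i, ..., x_{i-k}]
        for i in range(n - 1, k - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (x[i] - x[i - k])
    return coef

def newton_poly(coef, x_data, x):
    """Evaluate P(x) = a0 + a1(x-x0) + a2(x-x0)(x-x1) + ... in nested form."""
    p = coef[-1]
    for k in range(len(coef) - 2, -1, -1):
        p = p * (x - x_data[k]) + coef[k]
    return p

xs = [-1.0, 0.0, 1.0]
ys = [10.1, 11.3, 11.9]
a = divided_differences(xs, ys)    # expect [10.1, 1.2, -0.3]
```

Adding an extra data point only appends one new coefficient — the earlier a_k are reused, which is the NDD advantage noted below.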
2. Lagrangian Interpolation

For (n + 1) data points,

Y_n(x) = Σ_{k=0}^{n} l_k(x) Y_k ≡ P_n

where

l_j(x) = Π_{i=0, i≠j}^{n} (x − x_i)/(x_j − x_i)

For two points (X₀, Y₀) and (X₁, Y₁):

Y₁(x) = l₀(x)Y₀ + l₁(x)Y₁;  l₀ = (x − x₁)/(x₀ − x₁),  l₁ = (x − x₀)/(x₁ − x₀)

or Y₁(x) = [(x − x₁)/(x₀ − x₁)]Y₀ + [(x − x₀)/(x₁ − x₀)]Y₁

(Since Y₁(x) is unique, it cannot be different from the Newton divided-difference formula; only the forms are different.)
For three points:

Y(x) = [(x − x₁)(x − x₂)/((x₀ − x₁)(x₀ − x₂))] Y₀ + [(x − x₀)(x − x₂)/((x₁ − x₀)(x₁ − x₂))] Y₁ + [(x − x₀)(x − x₁)/((x₂ − x₀)(x₂ − x₁))] Y₂

≡ same as the NDD formula, or aX² + bX + c.

Ex. Fit Y₂(x) = C₀ + C₁x + C₂x² (unique) through the data:

i   X_i   Y_i    1st divided difference            2nd divided difference
0   −1    10.1   (11.3 − 10.1)/(0 + 1) = 1.2       (0.6 − 1.2)/(1 − (−1)) = −0.3
1    0    11.3   (11.9 − 11.3)/(1 − 0) = 0.6
2    1    11.9

(2) Lagrange: Y₂(x) = Σ_{k=0}^{2} l_k(x)Y_k;  l_j(x) = Π_{i=0, i≠j}^{2} (x − x_i)/(x_j − x_i)
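A direct Lagrange evaluation, using the same three data points, confirms the uniqueness claim — it returns exactly the same interpolated values as the Newton divided-difference form:

```python
def lagrange_interp(xs, ys, x):
    """Y_n(x) = sum_k l_k(x) y_k, with l_k(x) = prod_{i != k} (x - x_i)/(x_k - x_i)."""
    total = 0.0
    for k, (xk, yk) in enumerate(zip(xs, ys)):
        lk = 1.0
        for i, xi in enumerate(xs):
            if i != k:
                lk *= (x - xi) / (xk - xi)
        total += lk * yk
    return total

xs = [-1.0, 0.0, 1.0]
ys = [10.1, 11.3, 11.9]
```

At x = 0.5 both forms give 11.675: the interpolating polynomial through a given data set is unique.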
Note: ⇒ If n is large (> 4), the polynomial shows oscillation. In practice, one uses 'piece-wise' interpolation for a large number of data points.

⇒ NDD reuses the previous calculations if an extra point is to be included in finding a new polynomial, whereas in Lagrangian interpolation (LI) each coefficient has to be recalculated. However, it is easier to program LI.

Error analysis: For (n+1) data points, P_n (order n) is the unique polynomial passing through those (n+1) points. This polynomial can be used to predict P_n(X) at a new point X. How much error can be expected in this prediction/interpolation? From an engineering point of view, how much confidence does a user have in the interpolated functional value? In such a case, one should use an additional data point (ξ, Y(ξ)) over the range (X₀ ⋯ X_n) and check the convergence of the re-interpolated value P_{n+1}(X). Note that you now have (n+2) data points, including ξ.

Theoretically, it can be shown using Taylor's series that the error in the interpolation using P_n(X) for (n+1) data points is

[y^(n+1)(ξ)/(n+1)!] (x − x₀)(x − x₁) ⋯ (x − x_n)

Therefore, the error is nothing but the remainder R_{n+1} of the Taylor's series. For more on this, read the book by Ferziger.
Lecture #13
Similar to Newton’s Divided Difference and Lagrangian interpolation schemes, there is a
Newton’s interpolation formula that makes use of ′Δ′ operator (Forward Difference
Operator)
E y(x) = y(x + Δx) ≡ (1 + Δ) y(x), or E = 1 + Δ : the shift (E) and forward-difference (Δ) operators

or E^α = (1 + Δ)^α = 1 + αΔ + [α(α − 1)/2!]Δ² + ⋯ + [α(α − 1)⋯(α − n + 1)/n!]Δⁿ + ⋯

or E^α y(x₀) = y(x₀ + αΔx) = (1 + Δ)^α y(x₀)

= y₀ + αΔy₀ + [α(α − 1)/2!]Δ²y₀ + ⋯ + [α(α − 1)⋯(α − n + 1)/n!]Δⁿy₀ + ⋯

If α = (x − x₀)/Δx:

y(x) = y(x₀ + αΔx) = y₀ + α(Δy₀) + [α(α − 1)/2!]Δ²y₀ + ⋯ + [α(α − 1)⋯(α − n + 1)/n!]Δⁿy₀ + R

R = [α(α − 1)⋯(α − n)/(n + 1)!] Δ^(n+1) y(ξ), where X₀ < ξ < X_n
Splines
We have earlier noted that a high order polynomial (𝑛 ≥ 4) usually shows oscillation.
Therefore, a piece-wise polynomial (𝑛 ≤ 3) is usually preferred for most engineering
applications.
Therefore, for n data points (knots) there are (n − 1) cubic spline segments S_i(x), i.e. 4(n − 1) unknown coefficients to be determined.

Equations:
S_i(x_i) = y_i and S_i(x_{i+1}) = y_{i+1} (each segment matches its two end knots): 2(n − 1)
S′ continuous at interior knots (S′⁺ = S′⁻): (n − 2)
S″ continuous at interior knots (S″⁺ = S″⁻): (n − 2)
Total: 4n − 6

The remaining two conditions are the BCs specified at the two end data points (knots), closing the system at 4(n − 1) equations.
Take S_i(x) cubic on each segment. Note that S_i″(x) is then piece-wise linear and continuous (i.e. for S_i = a_i x³ + b_i x² + c_i x + d_i, S_i″(x) = 6a_i x + 2b_i).

[Figure: S″(x) varying linearly between the nodal values over (x_i, x_{i+1}), while S_i(x) itself is cubic.]

Writing S_i″(x) as the straight line between the nodal values S″(x_i) and S″(x_{i+1}) over [x_i, x_{i+1}] (with Δ_i = x_{i+1} − x_i) and integrating twice,

S_i(x) = [S″(x_i)/(6Δ_i)](x_{i+1} − x)³ + [S″(x_{i+1})/(6Δ_i)](x − x_i)³ + c₁x + c₂

Applying the interpolation and S′-continuity conditions gives, at the interior nodes,

(Δ_{i−1}/6) S″(x_{i−1}) + [(Δ_{i−1} + Δ_i)/3] S″(x_i) + (Δ_i/6) S″(x_{i+1})
      = (y_{i+1} − y_i)/Δ_i − (y_i − y_{i−1})/Δ_{i−1},   i = 2, ⋯, n − 1 (interior nodes)   **

a tridiagonal system for the nodal second derivatives S″(x_i).
Thus, S_i(x) is determined from the boxed equation (**) above. Worked example (numbers as in the lecture, with end conditions S″(x₁) = S″(x₃) = 0 and data values 0, 1.67, 0): applying ** at the interior node,

(1/6) S″(x₁) + (2/3) S″(x₂) + (1/6) S″(x₃) = 2.52
Numerical Differentiation

[Figure: f(x) approximated by an interpolating polynomial P₂ through x_{i−1}, x_i, x_{i+1}; derivatives of f are taken as derivatives of P_n.]

f(x) ≈ P_n(x). Differentiating the interpolating polynomial gives finite-difference formulas, e.g. the 4-point (fourth-order) central formula:

f′(x_i) = [f(x_{i−2}) − 8f(x_{i−1}) + 8f(x_{i+1}) − f(x_{i+2})]/(12h) + O(h⁴)

Note the notations: f(x_i) = y_i(x_i); f′ and f″ ≈ y′ and y″; f(x) ≃ P_n(x).
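The order of accuracy of the central formula can be verified numerically: halving h should reduce the error by roughly 2⁴ = 16. A sketch using a function with a known derivative:

```python
import math

def d1_central_o4(f, x, h):
    """Fourth-order central formula for f'(x):
    [f(x-2h) - 8 f(x-h) + 8 f(x+h) - f(x+2h)] / (12 h)."""
    return (f(x - 2*h) - 8*f(x - h) + 8*f(x + h) - f(x + 2*h)) / (12.0 * h)

# check against d/dx sin(x) = cos(x)
err_h  = abs(d1_central_o4(math.sin, 1.0, 0.1)  - math.cos(1.0))
err_h2 = abs(d1_central_o4(math.sin, 1.0, 0.05) - math.cos(1.0))
```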
Lecture #14-15
Numerical Integration
Similar to numerical differentiation, integration of f(x), i.e. ∫f(x)dx, may be performed by approximating f(x) ≈ P_n(x), usually with n ≤ 3. Therefore, first calculate y_i(x_i) = f(x_i), then fit a polynomial through the data points and determine ∫P_n dx, or the area under the approximated curve. Considering the oscillations that usually set in the fitted polynomial for n ≥ 4, piece-wise integration is performed. The detailed method and analysis are as follows.
[Figure: f(x) and the interpolating polynomial P_n(x) through (x₀, ⋯, x_n).]

P_n(x) is the interpolating function between (x₀, ⋯, x_n). It can be shown/proposed that

∫_a^b f(x)dx = Σ_{k=0}^{n} w_k f(x_k);  f(x_k) = y(x_k)

= (b − a) Σ_{k=0}^{n} C_k^n y_k;  C_k^n = [1/(b − a)] ∫_a^b l_k(x)dx   (Cotes numbers)
Newton-Cotes Coefficients (end points included)

n   factor   coefficients C_k^n       error term
1   1/2      1  1                     0.0183 Δ³f″
2   1/6      1  4  1                  0.0035 Δ⁵f⁗
3   1/8      1  3  3  1               0.00016 Δ⁵f⁗
4   1/90     7  32  12  32  7         ⋯
5   1/288    19  75  50  50  75  19   ⋯
These are called closed N-C formulas because they also use the function values at the end points (a, b). Note that n — the number of segments, or equivalently the degree of the polynomial — is usually 1, 2 or 3. Readers may have recognized that n = 1 corresponds to the Trapezoidal rule, with coefficients (1/2, 1/2) of y_k; n = 2 corresponds to Simpson's 1/3rd rule, with coefficients (1/6, 2/3, 1/6), etc. For more details, you can refer to the book by Ferziger or SKG.
The error in the numerical integration can be calculated from the remainder term of the polynomial:

I_error = ∫(error)dx, where error = [y^(n+1)(ξ)/(n + 1)!](x − x₀)(x − x₁) ⋯ (x − x_n) for (n+1) data points.

Equivalently, in the forward-difference form,

y(x) = y(x₀ + αΔx) = y₀ + α(Δy₀) + [α(α − 1)/2!]Δ²y₀ + ⋯ + [α(α − 1)⋯(α − n)/(n + 1)!]Δ^(n+1)y₀   (the last term ≡ R_n)

To obtain the different methods for integration, viz. the Trapezoidal rule, Simpson's 1/3rd rule, etc., the two approaches must give identical results, because the polynomial of the highest degree that can fit (n + 1) data points is unique.
Take n = 1:

∫_a^b f(x)dx = (b − a) Σ_{k=0}^{n} C_k^n f(x_k)

= (b − a)[(1/2)y_a + (1/2)y_b]

= [(b − a)/2](y_a + y_b)   [or = ((b − a)/2)(f(a) + f(b))]

(Trapezoidal rule)

[Figure: f(x) and the chord P₁(x) between (a, f(a)) and (b, f(b)); the trapezoid approximates the area.]
n = 2:

∫_a^b f(x)dx = [(b − a)/6][f(a) + 4f((a + b)/2) + f(b)]

= (ΔX/3)[f(a) + 4f((a + b)/2) + f(b)],   ΔX = (b − a)/2

— Simpson's 1/3rd formula (higher-order accurate).

(Note: P₂ ≡ a + bx + cx² ≡ parabola through the three points a, (a + b)/2, b.)

[Figure: f(x) and the parabola P₂(x) through a, (a + b)/2, b, with two segments of width ΔX.]
n = 3:

∫_a^b f(x)dx = [(b − a)/8][f(a) + 3f(c) + 3f(d) + f(b)]

= (3Δx/8)[− ditto −],   Δx = (b − a)/3, with c and d the two interior nodes

— Simpson's 3/8th formula.

In actual practice, applying a single Cotes formula over the entire range is rarely done. Considering the large error for large Δx, the interval is broken into sub-intervals; the piece-wise quadrature formula is then applied and summed over.
∫_{X_{i−1}}^{X_i} f(x)dx = (h/2)[f(X_{i−1}) + f(X_i)]

h = (b − a)/n   (n + 1 data points, or n segments; X₀ = a, X_n = b)

[Figure: the interval (a, b) divided into n segments, with one generic segment (X_{i−1}, X_i).]

Therefore,

∫_a^b f(x)dx = Σ_{i=1}^{n} (h/2)[f(x_{i−1}) + f(x_i)]

= (h/2)[f(x₀) + 2 Σ_{k=1}^{n−1} f(x_k) + f(x_n)]

= [(b − a)/n][Σ_{k=0}^{n} f(x_k) − (1/2)(f(a) + f(b))]   *

* Composite Trapezoidal rule
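The composite rule above is a few lines of code, and its O(h²) behaviour is easy to verify: doubling n should cut the error by about 4. A sketch using ∫₀¹ eˣ dx = e − 1 as an illustrative test integral:

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n segments (n + 1 points):
    h/2 [f(x0) + 2 sum f(x_k) + f(xn)], h = (b - a)/n."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b))
    for k in range(1, n):
        s += f(a + k * h)
    return h * s

exact = math.e - 1.0
err_100 = abs(trapezoid(math.exp, 0.0, 1.0, 100) - exact)
err_200 = abs(trapezoid(math.exp, 0.0, 1.0, 200) - exact)
```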
Simpson's 1/3rd rule (the number of segments must be even, or the number of data points must be odd):

For a single application (n = 2), h = (b − a)/2 and

∫_a^b f(x)dx = [(b − a)/6](f(a) + 4f(c) + f(b)),   c = (a + b)/2

[Figure: pairs of segments (X_{i−1}, X_i, X_{i+1}) of width h covering (a, b); h = (b − a)/n (n segments, or n + 1 data points, X₀ = a, X_n = b).]

Composite form:

∫_a^b f(x)dx = (h/3)[f(X₀) + 4 Σ_{k=1,3,5,⋯}^{n−1} f(X_k) + 2 Σ_{k=2,4,6,⋯}^{n−2} f(X_k) + f(X_n)]

(Note: all even interior nodes X_k are counted twice in calculating the total area, because each is shared by two adjacent Simpson panels.)

What do you do if n is odd (i.e. the number of data points is even)? Apply the method with one segment less, so that n is even, and add the area corresponding to the left-over segment, calculated using the Trapezoidal rule!
One open N-C method is popular: the Mid-point rule. The method is frequently applied when the function is discontinuous (unbounded) at the end points, a or b or both:

∫_a^b f(x)dx = (b − a) f((a + b)/2)

ex. ∫₀¹ dx/(1 − x²)   (Note: f(1) → ∞, so the closed formulas cannot be used.)

[Figure: rectangle of height f((a + b)/2) over (a, b).]
Example:

[Figure: laminar tube flow — radius R, wall temperature T_w, velocity profile v(r), temperature T(r), axial coordinate z, radial step Δr.]

θ = (T − T_w)/T_w = −4z − r² + r⁴/4 + 7/24
Consider the classical heat transfer/transport problem of SS heat transfer at constant wall temperature in the fully developed region of tubular laminar (parabolic) flow, often discussed in transport-phenomena lectures. The dimensionless SS temperature θ(z, r) in the fluid is derived analytically; see the solution above. Determine the 'mixed-cup' temperature at z = 0.5.

Soln: In a flow system, the mixed-cup temperature is the radially averaged temperature, taking into consideration the total heat (cal) carried per unit time by the flowing fluid across the cross-section of the tube.
T̄ = [∫₀^R ρC_p v(r′) T(r′) 2πr′ dr′] / [∫₀^R ρC_p v(r′) 2πr′ dr′]

(Note: dQ = v·2πr dr; the denominator contains Q = ∫₀^R v 2πr dr, evaluated at a reference temperature. Units: numerator in cal/s, denominator in cal/s-K.)

In dimensionless quantities,

θ̄(z) = [∫₀¹ θ(r, z)(1 − r²) r dr] / [∫₀¹ (1 − r²) r dr]
Simpson's 1/3 rule can be applied by taking an odd number (5) of data points over r ∈ (0, 1). Therefore, the number of segments = 4, and the step size h = (1 − 0)/4 = 0.25, Δr ≡ h = 0.25, r_i = iΔr.

Show that

θ̄_mixed-cup(0.5) ≡ −1.9976 = (T̄ − T_w)/T_w

What decides the number of data points n, or the step size h? The level of desired accuracy. Repeat the entire calculation with 9 data points and check whether the improved θ̄_mixed-cup differs appreciably from the previous value −1.9976. Sometimes you may even decrease n (increase h) if the achieved accuracy exceeds the requirement!
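The mixed-cup calculation above can be reproduced with a short composite-Simpson routine applied to the numerator and denominator separately — a sketch using the θ(r, z) profile given in this example:

```python
def simpson(f, a, b, n):
    """Composite Simpson 1/3 rule; n must be even (odd number of points)."""
    assert n % 2 == 0, "number of segments must be even"
    h = (b - a) / n
    s = f(a) + f(b)
    for k in range(1, n):
        s += (4 if k % 2 == 1 else 2) * f(a + k * h)
    return h * s / 3.0

theta = lambda r: -4 * 0.5 - r**2 + r**4 / 4 + 7.0 / 24.0   # theta(r, z = 0.5)
num = simpson(lambda r: theta(r) * (1 - r**2) * r, 0.0, 1.0, 4)
den = simpson(lambda r: (1 - r**2) * r, 0.0, 1.0, 4)
theta_mix = num / den
```

With 5 points (4 segments) this gives θ̄ ≈ −1.9976, matching the quoted value; the denominator integrand is cubic in r, so Simpson's rule evaluates it exactly (1/4).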
Error Analysis

Let us take the simplest Trapezoidal rule, applied to one segment over (a, b):

f(x) = f(x₀) + αΔf(x₀) + [α(α − 1)/2!]Δ²f(x₀) + ⋯ + R_n

R_n = [α(α − 1)⋯(α − n)/(n + 1)!] Δ^(n+1) f(ξ), where x₀ < ξ < x

α = (x − x₀)/h, or dx = h dα;  x₀ = a, x₁ = b

I = ∫_a^b f(x)dx = h ∫₀¹ {f(x₀) + αΔf(x₀) + [α(α − 1)/2!]Δ²f(ξ)} dα   (P₁ = ax + b, plus the remainder)

= h[f(a) + (1/2)(f(b) − f(a)) + (α³/6 − α²/4)|₀¹ h²f″(ξ)]

= h[(f(a) + f(b))/2] − (1/12)h³f″(ξ)   (using f″(ξ) = Δ²f(ξ)/h²)

Summed over n segments, the error is Σ_i [−h³f″(ξ_i)/12] = −[(b − a)/12] h² f″(ξ̄).

Note: f″(ξ̄) = Σ f″(ξ_i)/n = h Σ f″(ξ_i)/(b − a).
Similarly, for the error estimation of Simpson's 1/3 rule, start with P₂ = ax² + bx + c, or

R_n = α(α − 1)(α − 2)Δ³f(ξ)/3!   over one panel (3 data points),

to show that the error ≡ O(h⁵)f⁗(ξ) and, summed over n segments, error = O(h⁴)f⁗(ξ̄).

A common question is asked: show that Simpson's 1/3 rule gives the error corresponding to a cubic interpolating function, although it is based on a parabola (a quadratic polynomial). See Chapra and Canale's book for the details.
Mid-Term
Lecture #15-16
1st order ODE
A set of ODEs takes the following form:
dy₁/dt = f₁(y₁, y₂, ⋯, y_n)
dy₂/dt = f₂(y₁, y₂, ⋯, y_n)
⋮
dy_n/dt = f_n(y₁, y₂, ⋯, y_n)

or dȳ/dt = f̄(ȳ)
⇒ We explore numerical solution to ODEs starting with Euler’s forward and backward
methods. These methods are basics, and also set the tone for a stability analysis. Let us begin
with one ODE.
Euler’s Method
dy/dx = f(x, y);  y(x₀) = y₀

Discretize the derivative at the n-th grid point (recall the lecture on numerical differentiation):

(y_{n+1} − y_n)/h = f(x_n, y_n) + O(h)

or y_{n+1} = y_n + h f(x_n, y_n) + O(h²)

[Grid: x₀, x₁, ⋯, x_n, x_{n+1} carrying y₀, y₁, ⋯, y_n, y_{n+1}.]

Note: the discretization is 1st order accurate, i.e. it has a 2nd order error per step. Starting from y(x = 0) = y₀ → y₁ → y₂ ⋯ etc., you march forward, and the method is called Euler's forward explicit method.
Graphically, it is all about calculating the true derivatives (gradients) at x_n as you march forward:

y₀′ = f(x₀, y₀)
y₁′ = f(x₁, y₁)
⋮
y_n′ = f(x_n, y_n)

[Figure: the numerically calculated points y₀, y₁, y₂ drifting away from the theoretical curve y(x).]
It is clear that the smaller the step size h, the smaller the error (the higher the accuracy): O(h²) per step. The price is long CPU time. Let us look at the stability aspect of the method.
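The forward march is a three-line loop, and the 1st order behaviour shows up directly: halving h roughly halves the global error. A sketch on dy/dx = −y, y(0) = 1 (exact solution e^(−x)):

```python
import math

def euler_forward(f, x0, y0, h, n_steps):
    """March y_{n+1} = y_n + h f(x_n, y_n) from (x0, y0)."""
    x, y = x0, y0
    for _ in range(n_steps):
        y = y + h * f(x, y)
        x = x + h
    return y

f = lambda x, y: -y
y_h  = euler_forward(f, 0.0, 1.0, 0.1,  10)   # y(1) with h = 0.1
y_h2 = euler_forward(f, 0.0, 1.0, 0.05, 20)   # y(1) with h = 0.05
err_h  = abs(y_h  - math.exp(-1.0))
err_h2 = abs(y_h2 - math.exp(-1.0))
```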
dy/dx = f(x, y) = f(x₀ + Δx, y₀ + Δy)

= f(x₀, y₀) + (y − y₀)(∂f/∂y)|_{x₀,y₀} + (x − x₀)(∂f/∂x)|_{x₀,y₀} + O(Δx², Δy²)

= αy + βx + ν

(the αy term gives the exponential part of the solution; βx gives the linear part; ν is a constant)

The model equation for the stability analysis is therefore dy/dx = αy. If ỹ is the exact solution and y the perturbed one,

dξ/dx = α(y − ỹ) = αξ   (ξ = y − ỹ)

We have the same inference as before: the error satisfies the homogeneous part of the equation. The equation describes how ξ can grow because of a small perturbation.
Explicit (forward) Euler applied to the model equation:

y_{n+1} − y_n = h f(x_n, y_n) = h y_n′ = h(α y_n)

y_{n+1} = y_n(1 + αh)

y₁ = y₀(1 + αh), y₂ = y₀(1 + αh)², ⋯, y_n = y₀(1 + αh)ⁿ

For a bounded solution, |1 + αh| < 1; for real α < 0 this requires h < 2/|α|.

example: dy/dx = −y (x = 0, y = 1) ⇒ y = e^(−x), α = −1, or h < +2.

Implicit (backward) Euler:

y_{n+1} = y_n + h(α y_{n+1})  ⇒  y_n = y₀(1 − αh)^(−n)

For complex α = α_r + iα_i:  y_n = y₀(1 − α_r h − iα_i h)^(−n), which is bounded whenever |1 − αh| > 1 — satisfied for every h > 0 when α_r < 0.

It is clear that the stability region of the implicit Euler method is much larger than that of the explicit Euler method.
Notes: The error decreases with decreasing step size h — O(hⁿ) — however, at the expense of CPU time. This is not surprising. The 'issue' is with increasing step sizes: there is a tendency to speed up the computation by taking a large step size. An explicit method like forward Euler may not permit you to do so, even if the level of accuracy achieved may still be acceptable — instability may set in at large h. On the other hand, backward Euler, or an implicit method, permits you to use a relatively larger h without instability, yielding a bounded solution.
⇒ 'Error' must not be mixed up with 'instability'. A method may be of relatively higher-order accuracy and yet show instability. In general, implicit methods are more stable than explicit methods, but require iterations to compute y_{n+1} when f is non-linear, for example dy/dx = f(x, y) = αy².

⇒ "A method is stable if it produces a bounded solution when it is supposed to, and is unstable if it does not." Therefore, while solving dy/dx = −y, if the numerical method produces a bounded solution, it is stable, else unstable. On the other hand, there is no instability 'issue' with dy/dx = +y, because the exact solution itself is unbounded; both forward (explicit) and backward (implicit) Euler will produce an unbounded solution!
Get back to dy/dx = −y (α = −1): the implicit Euler solution is y_n = (1 + h)^(−n) (implicit method).

Trapezoidal (semi-implicit) rule — average the two end-point slopes:

y_{n+1} − y_n = (αh/2)[y_n + y_{n+1}]

y_{n+1} = y_n (1 + αh/2)/(1 − αh/2)

or y_n = y₀ [(1 + αh/2)/(1 − αh/2)]ⁿ

For stability, |z| < 1: |(1 + αh/2)/(1 − αh/2)| < 1, which holds for any h > 0 when α < 0.
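The contrast between the explicit and implicit amplification factors is easy to demonstrate numerically. A sketch on dy/dx = −y with h = 2.5, deliberately above the explicit limit h < 2:

```python
def euler_explicit_step(y, a, h):   # y_{n+1} = y_n (1 + a h)
    return y * (1 + a * h)

def euler_implicit_step(y, a, h):   # y_{n+1} = y_n / (1 - a h)
    return y / (1 - a * h)

a, h = -1.0, 2.5       # alpha = -1: explicit Euler requires h < 2
ye = yi = 1.0
for _ in range(40):
    ye = euler_explicit_step(ye, a, h)   # |1 + a h| = 1.5 > 1: grows
    yi = euler_implicit_step(yi, a, h)   # |1/(1 - a h)| = 1/3.5 < 1: decays
```

The explicit iterate oscillates and blows up, while the implicit one decays like the exact solution — the bounded/unbounded distinction used in the stability definition above.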
Lecture #17
Single Step Methods
Forward or Backward Euler is the simplest single step method to solve ODEs. The methods
are 1st order accurate. A general higher order method can be derived considering that it is all
about choosing a suitable gradient at (𝑥𝑛 , 𝑦𝑛 ) 𝑜𝑟 (𝑡𝑛 , 𝑦𝑛 ) to calculate 𝑦𝑛+1 (𝑥𝑛+1 ):
dy/dt = f(t, y);  y(0) = y₀

Propose the model equation

y_{n+1} = y_n + h(a k₁ + b k₂),  with k₁ = f(t_n, y_n), k₂ = f(t_n + ph, y_n + qh k₁) — (1)

Here, k₁ and k₂ are the slopes, with weights a and b respectively. (Forward Euler may be considered to assume a = 1, b = 0, k₁ = y′|_{t_n, y_n}, whereas Backward Euler assumes a = 0, b = 1, k₂ = y′|_{t_{n+1}, y_{n+1}}.)

Expanding k₂ by a Taylor series and substituting in (1):

y_{n+1} = y_n + h(a + b)f + h²(bp f_t + bq f_y f) + O(h³) — (2)

Also, restart from dy/dt = f(t, y(t)) as a total derivative in t:

y_{n+1} = y_n + hf + (h²/2!) f′ + O(h³)   (single-variable (t) Taylor series)

where f′ = df/dt(t_n, y_n) = [∂f/∂t + (∂f/∂y)(∂y/∂t)] = [f_t + f_y f]_{t_n, y_n}

On substitution,

y_{n+1} = y_n + hf + (h²/2) f_t(t_n, y_n) + (h²/2) f_y f + O(h³) — (3)

Thus, the model equation (2) can be equated with the fundamental equation (3) derived for y_{n+1}:

a + b = 1,  bp = 1/2,  bq = 1/2

Case 1: a = b = 1/2, p = q = 1:

y_{n+1} = y_n + (h/2)[f(t_n, y_n) + f(t_n + h, y_n + h y_n′)] + O(h³)
                   (k₁)              (k₂)
Written differently,

y*_{n+1} = y_n + h f(t_n, y_n)   — predictor, full step (slope k₁)

y_{n+1} = y_n + (h/2)[f(t_n, y_n) + f(t_n + h, y*_{n+1})]   — corrector, full step (slopes k₁ and k₂)

Graphically:

[Figure: the slope k₁ at (t_n, y_n) gives the predictor y*_{n+1}; the slope k₂ there is averaged with k₁, k̄ = (k₁ + k₂)/2, to give the corrected y_{n+1}.]

Therefore, first predict y*_{n+1} and then correct it as y_{n+1} by taking the average of the slopes k₁ and k₂: y_{n+1} = y_n + h k̄.

- This method is called Heun's improved method, or the modified Euler method, or the Runge-Kutta (RK-2) method.
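The predict-then-correct step can be sketched directly; on the model problem dy/dt = −y it already beats plain Euler by more than an order of magnitude at the same step size:

```python
import math

def heun_step(f, t, y, h):
    """Predictor y* = y + h f(t, y); corrector averages the two slopes."""
    k1 = f(t, y)
    y_star = y + h * k1            # Euler predictor, full step
    k2 = f(t + h, y_star)
    return y + 0.5 * h * (k1 + k2)

f = lambda t, y: -y
t, y, h = 0.0, 1.0, 0.1
for _ in range(10):                # march to t = 1
    y = heun_step(f, t, y, h)
    t += h
```

For this linear f, each step multiplies y by (1 − h + h²/2) = 0.905 — the first three terms of e^(−h), consistent with the O(h³) per-step error derived above.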
Case 2: a = 0, b = 1, p = q = 1/2:

y_{n+1} = y_n + h f(t_n + h/2, y_n + (h/2) y_n′)

Written differently,

y*_{n+1/2} = y_n + (h/2) y_n′   (Euler predictor, 1/2 step)

y_{n+1} = y_n + h f(t_{n+1/2}, y*_{n+1/2})   (full step with the mid-point slope)

Graphically:

[Figure: (1) start with slope 1 and reach the half step t_{n+1/2}; (2) calculate slope 2 there; (3) restart from (t_n, y_n) with slope 2 to reach the full step t_{n+1} = t_n + h.]
3. RK-4

y_{n+1} = y_n + (h/6)[k₁ + 2k₂ + 2k₃ + k₄] + O(h⁵)

k₁ = f(x_n, y_n)

Written differently, stage by stage:

y*_{n+1/2} = y_n + (h/2) f(x_n, y_n)   : Euler predictor, 1/2 step (slope k₁)

y**_{n+1/2} = y_n + (h/2) f(x_{n+1/2}, y*_{n+1/2})   : Backward-Euler 1/2-step correction (slope k₂)

y***_{n+1} = y_n + h f(x_{n+1/2}, y**_{n+1/2})   : Mid-point predictor, full step (slope k₃)

y_{n+1} = y_n + (h/6)[f(x_n, y_n) + 2f(x_{n+1/2}, y*_{n+1/2}) + 2f(x_{n+1/2}, y**_{n+1/2}) + f(x_{n+1}, y***_{n+1})]

: Simpson full-step corrector (k₁ + 2k₂ + 2k₃ + k₄).
Graphically:

[Figure: the four RK-4 slopes — k₁ at x_n, k₂ and k₃ at x_{n+1/2} (via Y*_{n+1/2} and Y**_{n+1/2}), k₄ at x_{n+1} (via Y***_{n+1}) — and their weighted average k̄ giving y_{n+1}.]
(2) The method is explicit (does not require iterations to calculate 𝑦𝑛+1 ). You can march ahead
from 𝑥𝑛 𝑡𝑜 𝑥𝑛+1 . Yet, considering that it makes use of the value 𝑦𝑛+1 at 𝑥𝑛+1 ahead of 𝑥𝑛 via a
predictor step, the method possesses some ‘characteristics’ of an implicit method. Therefore, the
method also produces a stable solution at a relatively larger step size.
(4) Even if two or more simultaneous ODEs are coupled, the method is applied explicitly, w/o requiring iterations.

Written alternatively, solve dȳ/dt = f̄(t, ȳ); ȳ(0) = ȳ₀:

[y₁′; y₂′] = [−100  0; 2  −1][y₁; y₂];   [y₁; y₂]₀ = [2; 1]

or

[y₁′; y₂′] = [−100y₁; 2y₁ − y₂]
Take h = 0.02 (so t_{n+1/2} = 0.01 and t_{n+1} = 0.02):

(1) k̄₁ = f̄(t_n, ȳ_n) = f̄(0, [2; 1]) = [−200; 3]

(2) k̄₂ = f̄(t_{n+1/2}, ȳ_n + (1/2)k̄₁h) = f̄(0.01, [2 + (−200)(0.01); 1 + (3)(0.01)])
        = f̄(0.01, [0; 1.03]) = [0; −1.03]

(3) k̄₃ = f̄(t_{n+1/2}, ȳ_n + (1/2)k̄₂h) = f̄(0.01, [2 − 0 × 0.01; 1 − 1.03 × 0.01])
        = f̄(0.01, [2; 0.9897]) = [−100 × 2; 2 × 2 − 0.9897] = [−200; 3.0103]

(4) k̄₄ = f̄(t_{n+1}, ȳ_n + hk̄₃) = f̄(0.02, [2 − 0.02 × 200; 1 + 0.02 × 3.0103])
        = f̄(0.02, [−2; 1.060206]) = [−100 × (−2); 2 × (−2) − 1.060206] = [200; −5.060206]

ȳ_{n+1} = ȳ_n + (h/6)(k̄₁ + 2k̄₂ + 2k̄₃ + k̄₄) = [0.66667; 1.00633]   Ans.
⇒ How to choose h? Choose a reasonable h depending on the physics of the problem. Solve, re-do the calculation for h/2, and check the convergence. Sometimes one may have to increase the step size if the accuracy requirement can be relaxed.

⇒ There are RK-Merson and RK-Fehlberg automatic step-size control methods, where h is adjusted (either h → 2h or h → h/2) depending upon the error in the RK-4 solution relative to that from the higher-order method proposed by Merson or Fehlberg.
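The hand-worked RK-4 step for the coupled pair above can be reproduced with a short vector routine — a sketch using NumPy arrays for ȳ and the k̄'s:

```python
import numpy as np

def rk4_step(f, t, y, h):
    """One classical RK-4 step for dy/dt = f(t, y); y may be a vector."""
    k1 = f(t, y)
    k2 = f(t + h/2, y + h/2 * k1)
    k3 = f(t + h/2, y + h/2 * k2)
    k4 = f(t + h,   y + h * k3)
    return y + h/6 * (k1 + 2*k2 + 2*k3 + k4)

# the coupled pair: y1' = -100 y1, y2' = 2 y1 - y2, y(0) = (2, 1)
f = lambda t, y: np.array([-100.0 * y[0], 2.0 * y[0] - y[1]])
y_next = rk4_step(f, 0.0, np.array([2.0, 1.0]), 0.02)
```

One step with h = 0.02 reproduces the hand calculation, ȳ(0.02) ≈ (0.66667, 1.00633).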
Lecture #18
Example 1 (RK-4)
Consider a spherical pellet (diameter = 0.1 m) made of steel (k = 40 W/m-K, ρ = 8000 kg/m³, C_p = 400 J/kg-K). The initial temperature of the pellet is 300 K. It is immersed in a large oil tank at 400 K. The convective heat transfer coefficient h at the sphere surface is 3000 W/m²-K. Assume that there is no radial temperature gradient inside the pellet. Use the RK-4 numerical technique to solve the governing first-order energy balance equation and predict the temperature of the sphere at t = 10 min after it was immersed in the oil tank. Use Δt = 10 s; repeat the calculations for Δt = 5 s. Draw a graphical depiction of the slopes comprising all stages of the RK method. Do calculations up to 5 digits after the decimal. Calculate the rate of temperature change at t = 0, 0.5 and 1.0 min.
[Figure: sphere of radius R = 0.05 m, T(0) = 300 K, immersed in oil at T_o = 400 K with surface coefficient h.]

Lumped energy balance, with m = ρV_s and A_s/V_s = 3/R:

8000 × 400 × dT/dt = 3000 × (3/0.05)(400 − T)

dT/dt = +0.05625(400 − T) = f(t, T),   T(0) = 300 K

Δt ≡ h = 10 or 5 s.
March forward to the next time steps t = 20, 30, 40, 50, 60 s, ⋯, each time using the calculated value from the previous time step (viz. 342.9795 K after the first 10 s step, etc.). Repeat the entire calculation for Δt = 5 s.

⇒ In a 20-25 min quiz or examination, such a problem can be solved for one time step using a calculator. One has to write a code to solve such a problem over a long time, or over several steps.
dT/dt|₀ = 0.05625(400 − 300) = 0.05625 × 100 = 5.625 K/s

Plot the graph as done in the previous lecture, and approximately draw the slopes k₁, k₂, k₃, k₄, k̄ and the functional values (T_n, T*_{n+1/2}, T**_{n+1/2}, T***_{n+1}, T_{n+1}) on the y-axis.
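The full 10-minute march with Δt = 10 s is a short loop around a scalar RK-4 step — a sketch of the calculation requested above:

```python
def rk4_step(f, t, y, h):
    """One classical RK-4 step for dy/dt = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h/2, y + h/2 * k1)
    k3 = f(t + h/2, y + h/2 * k2)
    k4 = f(t + h,   y + h * k3)
    return y + h/6 * (k1 + 2*k2 + 2*k3 + k4)

f = lambda t, T: 0.05625 * (400.0 - T)   # lumped energy balance
T, t, h = 300.0, 0.0, 10.0
history = [T]
while t < 600.0 - 1e-9:                  # march to t = 10 min
    T = rk4_step(f, t, T, h)
    t += h
    history.append(T)
```

The first step gives T(10 s) ≈ 342.979 K, matching the hand value quoted above, and by t = 10 min the pellet has essentially reached the oil temperature of 400 K.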
⇒ From a chemical engineering point of view, the temperature gradient in the steel was neglected considering the large thermal conductivity of the material, k ≫ hR (small Biot number). Therefore, the lumped-body approach was used to determine the sphere temperature, and the resulting energy balance equation is a 1st order ODE; otherwise a PDE in (r, t) would have to be solved.

Another example: a tubular reactor.

[Figure: tubular reactor with inlet temperature T_o and inlet concentration C_Ao, average velocity V̄, first-order reaction r = kC_A, length x = L; find C_A(x).]

A species balance can be derived as follows, neglecting radial concentration profiles (Pe_r ≫ 1):
∂C_A/∂t + V̄ ∂C_A/∂x = D_x ∂²C_A/∂x² − kC_A

or, at steady state,

V̄ dC_A/dx = D_x d²C_A/dx² − kC_A — (1)

Note that the reaction is exothermic and the rate constant is temperature dependent. Write down the energy balance (the reaction source term kC_A(−ΔH) has units of cal/s-m³; k_t is the thermal conductivity):

ρC_p(∂T/∂t + V̄ ∂T/∂x) = k_t ∂²T/∂x² + kC_A(−ΔH)

At steady state (SS),

V̄ dT/dx = α d²T/dx² + (k/ρC_p) C_A(−ΔH) — (2)

Neglecting axial conduction,

V̄ dT/dx = [k₀ exp(−E/RT)/(ρC_p)] (−ΔH) C_A
The two 1st order equations are coupled, and the right-hand-side terms have a non-linear dependence on y₁ (= C_A) and y₂ (= T). RK-4 can be applied without any problem, with the conditions x = 0: y₁ = y₁₀ and y₂ = y₂₀. You can choose a step size of Δx = L/10.

Note: if the axial diffusion terms are not neglected, the governing equations assume the form of 2nd order ODEs, which will be discussed in a later lecture as Boundary Value Problems.
Multi-step methods use a polynomial fitted to the derivatives of the function, passing through a single point or multiple (preceding) points.
Adams-Bashforth (Explicit)
dy/dt = f(y, t) ≡ f(y(t), t);  t = 0: y(0) = y₀. Solve for y(t) = ?

Grid: t₀, t₁, ⋯, t_n, t_{n+1}, carrying y₀, y₁, ⋯, y_n, y_{n+1} (to solve) and gradients y₀′, y₁′, ⋯, y_n′, y′_{n+1}.

We have

y_{n+1} = y_n + ∫_{t_n}^{t_{n+1}} y′ dt

Define α = (t − t_n)/h:  ⇒ y_{n+1} = y_n + h ∫₀¹ y′(α) dα   (0 ≤ α ≤ 1) — (1)

Fit a j-th order polynomial of y_n′ through X_n and X_{n−j} ((j + 1) points up to and including X_n), using the backward difference formula (∇y_i = y_i − y_{i−1}):

y′(α) = [1 + α∇ + α(α + 1)/2! ∇² + ⋯ + α(α + 1)⋯(α + j − 1)/j! ∇^j] y_n′ + R(ξ)

(Thus, y′(α) is the polynomial through {X_n, X_{n−1}, X_{n−2}, ⋯, X_{n−j}}.) Substituting in (1):

y_{n+1} = y_n + h[1 + (1/2)∇ + (5/12)∇² + (3/8)∇³ + ⋯ + C_j∇^j] y_n′ + h∫₀¹ R(ξ) dα
Truncating after the ∇ term (j = 1):

y_{n+1} = y_n + (h/2)[3f(y_n) − f(y_{n−1})] — 2nd order (Adams-Bashforth)

Coefficients β_{jm} of the gradients y′_{n+1−m} (m = 1 is the current point, m = 2 the previous, ⋯):

           m = 1    2      3
β₁m         1                   (1st order)
2β₂m        3      −1           (2nd order)
12β₃m      23     −16     5     (3rd order)
Graphically:

[Figure: the gradient y′ at t_{n−2}, t_{n−1}, t_n extrapolated over (t_n, t_{n+1}) by a constant (j = 0), linear (j = 1), or quadratic (j = 2) polynomial.]
⇒ From the above description, it is clear that the method has a 'starting' problem, because only one condition is known: y(0) = y₀. In practice, one starts with Euler's forward/explicit approach to calculate y₁, then uses the 2nd order method to predict y₂, and then uses (y₀, y₁, y₂) to calculate y₃. Use of the preceding 3 data points gives a 3rd-order accurate method.
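The 2nd order Adams-Bashforth march, with the Euler start described above, can be sketched in a few lines and checked on dy/dt = −y:

```python
import math

def adams_bashforth2(f, t0, y0, h, n_steps):
    """AB-2: y_{n+1} = y_n + h/2 [3 f(t_n, y_n) - f(t_{n-1}, y_{n-1})].
    The 'starting problem' is handled with one forward-Euler step."""
    t, y_prev = t0, y0
    y = y0 + h * f(t0, y0)            # Euler start for y1
    t = t0 + h
    for _ in range(n_steps - 1):
        y_new = y + h / 2.0 * (3.0 * f(t, y) - f(t - h, y_prev))
        y_prev, y = y, y_new
        t += h
    return y

f = lambda t, y: -y
y_end = adams_bashforth2(f, 0.0, 1.0, 0.05, 20)   # approximate y(1) = e^{-1}
```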
Lecture #19
Instability and stiffness for a system of ODEs
In the previous lectures, we have shown that a system of ODEs (IVPs) y_i′ = f_i(y₁ ⋯ y_m), i = 1, 2, ⋯, m, with y_i(0) = C_{i0}, can be written (after linearization) as the coupled ODEs ȳ′ = Aȳ, where A is the Jacobian matrix.
It is also clear that for the m × m matrix A, Aφ_k = λ_kφ_k, where φ_k is the eigenvector corresponding to the eigenvalue λ_k. Now make use of a linear transformation. One can construct a matrix S whose columns are the eigenvectors of A, [φ₁ ⋯ φ_m], having the property S⁻¹AS = Λ, where Λ is the diagonal matrix whose elements are the λ_k of A, diag(λ₁, ⋯, λ_m). Then, with z̄ = S⁻¹ȳ, one can write

z̄′ = Λz̄, i.e. z_k′ = λ_k z_k,

which are uncoupled ODEs. Therefore, all (stability) characteristics of a single ODE will also be shown by the system of ODEs.
We have also seen earlier that a general solution to the system of ODEs can be written as

ȳ = C₁e^(λ₁t){φ₁} + C₂e^(λ₂t){φ₂} + ⋯,

where the terms in the braces are the eigenvectors corresponding to the eigenvalues. In such a case, the growth or decay of the function depends on λ_min ⋯ λ_max. A large numerical value of λ_max will cause the corresponding component to grow or decay at a much faster rate than that corresponding to λ_min.

[Figure: y vs t, showing a fast λ_max component and a slow λ_min component.]

In other words, a small change in t will cause a large change/variation in y(λ_max). Solving such a system of ODEs numerically is not trivial, because a fine grid or step size Δt (or h) is required to accurately predict the change in y(λ_max), whereas a coarse grid can be used to predict y(λ_min).
(2) A combination of series and parallel chemical reactions with low and high rate constants:

ex: dȳ/dt = [−100  0; 2  −1] ȳ;   ȳ(0) = [2  1]^T

λ₁ = −100:  [−100  0; 2  −1][Y₁; Y₂] = −100[Y₁; Y₂]   (Note: SR = stiffness ratio = 100/1 = 100)

2Y₁ = −99Y₂  ⇒  Ȳ^(1) = [1; −2/99]

λ₂ = −1:  [−100  0; 2  −1][Y₁; Y₂] = −[Y₁; Y₂]

−100Y₁ = −Y₁ ⇒ Y₁ = 0, and 2Y₁ − Y₂ = −Y₂  ⇒  Ȳ^(2) = [0; 1]

GS:  Ȳ = C₁e^(−100t)[1; −2/99] + C₂e^(−t)[0; 1]

or  Y₁ = C₁e^(−100t),  Y₂ = −(2/99)C₁e^(−100t) + C₂e^(−t)

Apply the ICs:

Y₁ = 2e^(−100t),  Y₂ = (103/99)e^(−t) − (4/99)e^(−100t)   — analytical solution.
[Figure: y₁ = 2e^(−100t) decays rapidly near t = 0; y₂ decays slowly over t ~ 1-2.]

To solve numerically, a fine grid size Δt will initially be required to accurately predict the fast decay in the functional value of y₁; it can be relaxed, or the step size increased, during the later calculations. In other words, the time change in y₂ is gradual (slow); one could have used a coarse step size for y₂ alone, but the rapid change in y₁ during the initial time of the calculations requires fine grid sizes.
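The practical consequence can be demonstrated with explicit Euler on this very system: the fast mode forces h < 2/100 = 0.02 even though y₂ alone would tolerate h near 2. A sketch (the step sizes 0.021 and 0.005 are illustrative choices):

```python
import math

def euler_system(h, n_steps):
    """Explicit Euler on the stiff pair y1' = -100 y1, y2' = 2 y1 - y2,
    y(0) = (2, 1)."""
    y1, y2 = 2.0, 1.0
    for _ in range(n_steps):
        y1, y2 = y1 + h * (-100.0 * y1), y2 + h * (2.0 * y1 - y2)
    return y1, y2

y_bad = euler_system(0.021, 200)   # just above the fast-mode limit: blows up
y_ok  = euler_system(0.005, 200)   # reaches t = 1 stably

exact_y2 = (103.0 / 99.0) * math.exp(-1.0) - (4.0 / 99.0) * math.exp(-100.0)
```

Even though y₁ has long since decayed to nothing, the unstable fast mode destroys the whole solution at h = 0.021 — the signature of stiffness.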
ex:  dy₃/dt = 0.161(y₁ − y₃)
In this case, stiffness changes with time, requiring adaptive mesh sizes: coarse when the function values change slowly, and fine when they change rapidly.
[Plot: y vs t spanning roughly 10⁻² to 10⁶; a fine mesh/step-size is required in the regions of rapid change.]

If you use one fixed fine step-size throughout the calculations, it wastes CPU time.
Gear's Technique: used to solve stiff ODEs; based on multiple steps and a predictor–corrector approach:

Predictor (explicit):  y_{n+1} = α₁y_n + α₂y_{n−1} + ⋯ + α_k y_{n−(k−1)} + hβ₀y′_n
Corrector (implicit):  y_{n+1} = α₁y_n + α₂y_{n−1} + ⋯ + α_k y_{n−(k−1)} + hβ₀y′_{n+1}
You should note that this method also has a "starting" problem, similar to the previously discussed
multi-step methods.
Predictor Table
k   α₁      α₂     α₃      ⋯   α₆   β₀
1   1       0      0       ⋯   0    1
2   0       1      0       ⋯   0    2
3   −3/2    6/2    −1/2    ⋯   0    6/2
⋮
6

Corrector Table
k   α₁      α₂      α₃      ⋯   α₆   β₀
1   1       0       0       ⋯   0    1
2   4/3     −1/3    0       ⋯   0    2/3
3   18/11   −9/11   2/11    ⋯   0    6/11
⋮
6
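The k = 1 pair from the tables above (explicit Euler predictor, backward Euler corrector) can be sketched as follows. This is a minimal illustration, not the full variable-order Gear algorithm; the corrector is iterated here by simple fixed-point substitution, which works only for mildly stiff steps (a production stiff solver would use Newton iteration instead):

```python
import numpy as np

def gear1_step(f, t, y_n, h, n_corr=5):
    """One step of the k = 1 pair from the tables:
    predictor:  y_{n+1} = y_n + h*y'_n       (alpha1 = 1, beta0 = 1, explicit)
    corrector:  y_{n+1} = y_n + h*y'_{n+1}   (alpha1 = 1, beta0 = 1, implicit)
    The implicit corrector is solved by fixed-point iteration (converges only
    when |h * df/dy| < 1; stiff problems need Newton iteration)."""
    y = y_n + h * f(t, y_n)                 # predict
    for _ in range(n_corr):                 # correct
        y = y_n + h * f(t + h, y)
    return y

f = lambda t, y: -y                         # test ODE: y' = -y, y(0) = 1
h, y, t = 0.01, 1.0, 0.0
for _ in range(100):                        # integrate to t = 1
    y = gear1_step(f, t, y, h)
    t += h
print(y)   # close to exp(-1) ~ 0.368
```

Higher k rows of the tables extend the same structure with more history points, which is why the method has the "starting" problem noted above.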
Quiz II
Lecture #20
BVP (Boundary Value Problems)/2nd order ODEs
In this lecture, we will learn how to apply finite difference (direct) methods to solve BVPs. Such
problems are common in 1 D steady-state heat and mass transport in solid or fluid. The general
form of the BVP assumes a 2nd order ODE:
d²y/dx² = f(x, y, y′), with two boundary conditions, or more generally
A(x) d²y/dx² + B(x) dy/dx + C(x)y + D(x) + E = 0 ;  (A(x) ≠ 0)
The required boundary conditions at the two ends of the domain can assume any form: Dirichlet, Neumann, or mixed (Robin/Danckwerts). That is, either the functional value, the gradient, or a mixed condition can be specified:
y = c, or y′ = c, or ay′ + by = c.
A(x) d²y/dx² + B(x) dy/dx + C(x)y + D(x) + E = 0
Step 1: divide the domain into N equal steps/grids. Therefore, there are (N+1) nodes.
Step 2: discretize the equation, or each term of the equation, at the ith grid. In this course, we will discretize the terms using a 2nd-order method (recall numerical differentiation).
i = 1:     a₁y₀ + b₁y₁ + c₁y₂ = d₁
i = 2:     a₂y₁ + b₂y₂ + c₂y₃ = d₂
⋮
i = N−2:  a_{N−2}y_{N−3} + b_{N−2}y_{N−2} + c_{N−2}y_{N−1} = d_{N−2}
i = N−1:  a_{N−1}y_{N−2} + b_{N−1}y_{N−1} + c_{N−1}y_N = d_{N−1}

Step 4: apply the BCs and bring the terms containing y₀ and y_N to the RHS:
b₁y₁ + c₁y₂ = (d₁ − a₁y₀) = d₁′
a₂y₁ + b₂y₂ + c₂y₃ = d₂
⋮
a_{N−2}y_{N−3} + b_{N−2}y_{N−2} + c_{N−2}y_{N−1} = d_{N−2}
a_{N−1}y_{N−2} + b_{N−1}y_{N−1} = (d_{N−1} − c_{N−1}y_N) = d′_{N−1}
There are exactly (N−1) unknowns and (N−1) equations. Notably, the set of equations can be written as Ay̅ = b̅, where

    [ b₁      c₁                           ]
    [ a₂      b₂      c₂                   ]
A = [     ⋱       ⋱        ⋱               ]
    [         a_{N−2}  b_{N−2}  c_{N−2}    ]
    [                  a_{N−1}  b_{N−1}    ]  (N−1)×(N−1)

is a tridiagonal matrix. Call: Tridiagonal(N−1, a, b, c, d, Y), where N−1 is the number of equations, (a, b, c, d) are the coefficients of matrix A and the RHS vector, and Y is the output (i = 1, ⋯, N−1).
You should clearly recognize that, while preparing to call the tridiagonal solver, the following coefficients were changed:
d₁′ = d₁ − a₁y₀ : 1st row (i = 1) ;  d′_{N−1} = d_{N−1} − c_{N−1}y_N : last row (i = N−1),
where, from the 2nd-order discretization,
a_i = A(x_i)/h² − B(x_i)/(2h) ;  b_i = C(x_i) − 2A(x_i)/h² ;  c_i = A(x_i)/h² + B(x_i)/(2h) ;  d_i = −D(x_i) − E.
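The 'Tridiagonal' subroutine invoked throughout these notes can be written as the Thomas algorithm. A sketch (the array names a, b, c, d follow the notes' convention of sub-diagonal, diagonal, super-diagonal coefficients and the RHS):

```python
import numpy as np

def tridiagonal(n, a, b, c, d):
    """Thomas algorithm: solve an n-by-n tridiagonal system with
    sub-diagonal a, diagonal b, super-diagonal c and RHS d
    (a[0] and c[n-1] are unused). Forward elimination followed by
    back-substitution; O(n) work."""
    cp = np.empty(n); dp = np.empty(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    y = np.empty(n)
    y[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        y[i] = dp[i] - cp[i] * y[i + 1]
    return y

# quick check against a dense solver on a diagonally dominant system
n = 6
a = np.full(n, 1.0); b = np.full(n, -4.0); c = np.full(n, 1.0)
d = np.arange(1.0, n + 1)
A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
print(np.allclose(tridiagonal(n, a, b, c, d), np.linalg.solve(A, d)))  # True
```

The later code sketches use a dense `np.linalg.solve` for brevity, but this O(n) routine is what one would call in practice.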
⇒ Let us consider another example where the BVP equation is the same. At one boundary, y(0) = y₀ (also the same as before). However, at the other boundary the gradient is specified instead of the functional value: y′_N = 0.
The boundary condition y′_N = 0 can be discretized as (y_{N+1} − y_{N−1})/(2h) = 0 (O(h²)), or y_{N+1} = y_{N−1}.
Considering that y_N is NOT known in this case, the coefficients (a_i, b_i, c_i, d_i) are extended to the rows i = 1, ⋯, N, i.e., the matrix A will have N rows, and the 1st and last rows are modified as
d₁ = d₁ − a₁y₀  (same as before)
and a_N = a_N + c_N ; why?
⇒ It is recommended that you keep the order of the methods used to discretize the equation and the boundary conditions consistent. Note also that a 2nd-order discretization is reasonably accurate for engineering calculations. If you use a higher-order method, the coefficient matrix A will no longer be tridiagonal, which is easy and fast to invert.
Ex: Consider steady-state one-dimensional heat conduction in a fin (length L, cross-section A, perimeter P), whose one end is at a constant temperature T_f and whose other end is insulated. The ambient temperature is T_a and the convective heat transfer coefficient at the fin surface is h.

[Schematic: fin of length L, cross-section A, perimeter P; base at T_f, x measured along the fin; find T(x).]
An energy balance over the 'AΔx' control volume results in the following differential equation:
ρC_p(∂T/∂t + V·∇T) = k∇²T + S  [S in cal/(s·m³)]
At steady state in a solid (V = 0), with the surface loss distributed over the control volume,
S = −h(T − T_a)(PΔx)/(AΔx).
Note that there is no source or sink of heat as such in the fin. However, considering 1D axial heat transfer, the heat dissipated from the fin surface is homogeneously distributed within the CV: S = −h(T − T_a)a_s, where a_s is the surface area per unit volume of the fin.
⇒ k d²T/dx² − h(T − T_a)P/A = 0
or d²T/dx² + BT + C = 0, where B = −hP/(kA) ; C = (hP/(kA))T_a ; choose Δh = L/10.
BC:  T(x = 0) = T_f ;  −k∇T(x = L) = 0, or dT/dx = 0 (insulation).
Discretize the equation using the 2nd order method.
Soln:  (d²T/dx² + BT + C = 0)
Grid: nodes i = 0, 1, ⋯, N.
(T_{i−1} − 2T_i + T_{i+1})/h² + BT_i + C = 0
(1/h²)T_{i−1} + (B − 2/h²)T_i + (1/h²)T_{i+1} = −C ,  i = 1, N
Therefore, you have a tridiagonal coefficient matrix A. Call the subroutine: Tridiagonal(N, a, b, c, d, T).
As an example, take L = 1 cm, width of the fin = 0.1 cm, T_f = 100 °C, T_a = 30 °C, h = 100 W/m²·K, k = 20 W/m·K. Take Δh = 0.1 cm, write a programming code to obtain the solution and plot the temperature profile.

[Plot: T vs x, decreasing from 100 °C at the base (x = 0) toward the insulated tip at L = 1 cm; ambient at 30 °C.]
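A sketch of the fin calculation with the numbers above. The square 0.1 cm × 0.1 cm cross-section is an assumption (the notes give only a "width"); the dense solve stands in for the tridiagonal routine, and the tip temperature is checked against the classical insulated-tip fin solution:

```python
import numpy as np

# fin parameters (square 0.1 cm x 0.1 cm cross-section is an assumption)
L, w = 0.01, 0.001                 # m
k, h, Tf, Ta = 20.0, 100.0, 100.0, 30.0
A, P = w * w, 4 * w
B = -h * P / (k * A)               # = -20000 1/m^2
C = (h * P / (k * A)) * Ta

N = 100                            # nodes i = 0..N, dh = L/N
dh = L / N
# unknown rows i = 1..N (T0 = Tf known; insulated tip: T_{N+1} = T_{N-1})
lo = np.full(N, 1.0 / dh**2)       # sub-diagonal
di = np.full(N, B - 2.0 / dh**2)   # diagonal
up = np.full(N, 1.0 / dh**2)       # super-diagonal
d = np.full(N, -C)
d[0] -= Tf / dh**2                 # move the known T0 to the RHS
lo[-1] = 2.0 / dh**2               # tip row: a_N = a_N + c_N

M = np.diag(di) + np.diag(lo[1:], -1) + np.diag(up[:-1], 1)
T = np.linalg.solve(M, d)          # T[i-1] = temperature at node i

# independent check: classical insulated-tip fin solution at x = L
m = np.sqrt(h * P / (k * A))
T_tip_exact = Ta + (Tf - Ta) / np.cosh(m * L)
print(T[-1], T_tip_exact)          # both near 62 C
```

With Δh = 0.1 cm (N = 10) the agreement is already good; refining the grid confirms the 2nd-order convergence claimed above.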
Ex: Consider the catalytic oxidation of SO₂ into SO₃ under constant temperature and pressure conditions, over a spherical (diameter d_p = 1 mm) Pt-dispersed porous carbon catalyst. SO₂ in air (1%) diffuses (D = 1 × 10⁻⁹ m²/s) into the pores of the catalyst and is oxidized to SO₃ by a pseudo-zeroth-order reaction, r = 0.01 mole/(s·m³). The convective film mass transfer coefficient is k_m = 0.1 m/s.
Determine the steady-state concentration profiles of 𝑆𝑂2 within the pellet.
Soln:

[Schematic: spherical pellet of radius r_p at constant P, T; bulk SO₂ concentration C_b = 1%; film coefficient k_m; find C(r).]

∂C_A/∂t + V·∇C_A = D_pore∇²C_A − k
At SS in the solid (V = 0):
(D_pore/r²) d/dr(r² dC_A/dr) − k = 0  (0 ≤ r ≤ r_p)
Lecture #21
(continued from the previous lecture)
or D_p d²C_A/dr² + (2D_p/r) dC_A/dr − k = 0
or d²y/dr² + (2/r) dy/dr + A = 0 ;  A = (−k/D_p)

Grid: nodes i = 0, 1, ⋯, N over 0 ≤ r ≤ r_p, with h = r_p/N.

BCs:  r = 0:  dC_A/dr = 0  (symmetric)
r = r_p:  −D_p dC_A/dr = k_m(C_A − C_b)
Discretized eqn:
(y_{i+1} − 2y_i + y_{i−1})/h² + (2/r_i)(y_{i+1} − y_{i−1})/(2h) + A = 0  — (1)
(r_i = ih)
Discretized BCs:
r = 0 (i = 0):  (y₁ − y₋₁)/(2h) = 0 ⇒ y₋₁ = y₁
r = r_p (i = N):  −D_p(y_{N+1} − y_{N−1})/(2h) = k_m(y_N − C_b) ⇒ y_{N+1} = y_{N−1} − (2hk_m/D_p)(y_N − C_b)
Before you re-arrange the terms, you should realize that there is a clear discontinuity (singularity) at r = 0 (the 1st node). Therefore, you cannot proceed without removing it. One way of doing this is to approximate (1/r)(dy/dr)|_{i=0} by (d²y/dr²)|_{i=0}, because ∇y = 0 at r = 0. The equation at r = 0 then becomes
3 d²y/dr² + A = 0
or 3(y₁ − 2y₀ + y₋₁)/h² + A = 0  at i = 0.
Eq. (1) is applied over the remaining nodes, i.e., i = 1, ⋯, N, without any discontinuity.
Arrange the terms now:
i = 0 (r = 0):  y₋₁ − 2y₀ + y₁ = −Ah²/3
i = 1:  (1/h² − 1/(r₁h))y₀ − (2/h²)y₁ + (1/h² + 1/(r₁h))y₂ = −A
i = 2:  (1/h² − 1/(r₂h))y₁ − (2/h²)y₂ + (1/h² + 1/(r₂h))y₃ = −A
⋮
i = N:  (1/h² − 1/(r_N h))y_{N−1} − (2/h²)y_N + (1/h² + 1/(r_N h))y_{N+1} = −A

Substituting the discretized BCs:
1st row:  −2y₀ + 2y₁ = −Ah²/3  (using y₋₁ = y₁)
Last row:  (2/h²)y_{N−1} − [2/h² + (2hk_m/D_p)(1/h² + 1/(r_N h))]y_N = −A − (2hk_m/D_p)(1/h² + 1/(r_N h))C_b
Therefore,
a₀ = 1, b₀ = −2, c₀ = 1, d₀ = −Ah²/3
and
a_i = 1/h² − 1/(r_i h),  b_i = −2/h²,  c_i = 1/h² + 1/(r_i h) ;  d_i = −A ;  i = 1, ⋯, N
Apply the BCs:
c₀ = c₀ + a₀  (1st row @ i = 0)
a_N = a_N + c_N,  b_N = b_N − (2hk_m/D_p)c_N,  d_N = d_N − (2hk_m/D_p)c_N·C_b  (last row)

call Tridiagonal(N+1, a, b, c, d, y) ,  i = 0, ⋯, N
Exercise: Write a programming code to solve the SS profiles of C_A(r) for different r_p = 0.25, 0.5 and 1 mm ; k_m = 0.01, 0.1, 0.5 m/s ; and D_p = 10⁻⁷, 10⁻⁹, 10⁻¹¹ m²/s.
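A sketch of one case of this exercise. The bulk concentration C_b = 0.4 mol/m³ (1% of ~40 mol/m³ air) and the choices r_p = 0.5 mm, D_p = 10⁻⁷ m²/s are assumed values from the listed ranges; the zeroth-order problem has a quadratic analytical profile, which makes a convenient correctness check:

```python
import numpy as np

# assumed values: Cb = 0.4 mol/m^3, rp = 0.5 mm, Dp = 1e-7 m^2/s
k, Dp, km, rp, Cb = 0.01, 1e-7, 0.1, 0.5e-3, 0.4
A = -k / Dp
N = 200
h = rp / N
r = np.arange(N + 1) * h

# coefficients exactly as arranged in the notes
a = np.empty(N + 1); b = np.full(N + 1, -2 / h**2)
c = np.empty(N + 1); d = np.full(N + 1, -A)
a[1:] = 1 / h**2 - 1 / (r[1:] * h)
c[1:] = 1 / h**2 + 1 / (r[1:] * h)
a[0], b[0], c[0], d[0] = 1.0, -2.0, 2.0, -A * h**2 / 3   # center row
g = 2 * h * km / Dp                                      # surface (film) BC
a[N] = a[N] + c[N]
b[N] = b[N] - g * c[N]
d[N] = d[N] - g * c[N] * Cb

M = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
C = np.linalg.solve(M, d)

# analytical solution of the zeroth-order problem:
# C(r) = Cs + (k/(6 Dp))(r^2 - rp^2), with surface value Cs = Cb - k*rp/(3*km)
Cs = Cb - k * rp / (3 * km)
C_exact = Cs + (k / (6 * Dp)) * (r**2 - rp**2)
print(np.abs(C - C_exact).max())
```

Because the exact profile is quadratic, the central differences reproduce it essentially exactly; repeating the run over the listed (r_p, k_m, D_p) values is then only a matter of looping over the parameters.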
Ex. Consider the steady-state 1D reactive flow (Re > 5000) of a gaseous species A in a long quartz tubular reactor (ID = D, length = L) irradiated by UV light. The velocity in the tube is V̄. Species A is converted into B by a 1st-order homogeneous reaction (r = kC_A) as it flows through the tube. The inlet concentration of A is C_Ao. Determine the axial concentration profile C_A(X). The reaction takes place under isothermal conditions and ΔH ≈ 0.

[Schematic: tube of length L; plug velocity V̄; inlet C_Ao; homogeneous reaction r = kC_A; find C_A(X).]

Soln:  ∂C_A/∂t + V·∇C_A = D∇²C_A + (r_A)
SS:  V̄ dC_A/dX = D d²C_A/dX² − kC_A
BC:  X = 0:  C_A = C_Ao ;  X = L:  ∇C_A = 0, or dC_A/dX = 0 (long-tube approximation)
or B d²C_A/dX² − dC_A/dX − C·C_A = 0, where B = D/V̄ ; C = k/V̄ ; Δh = L/N.
Discretize the BCs:  y₀ = C_Ao ;  (y_{N+1} − y_{N−1})/(2h) = 0 ⇒ y_{N+1} = y_{N−1}.
On substitution of the BCs, solve the resulting tridiagonal system for y_i, i = 1, ⋯, N.
- Parabolic equations
1D:  A ∂y/∂t + B ∂²y/∂X² (or (B/r)∂/∂r(r ∂y/∂r) or (B/r²)∂/∂r(r² ∂y/∂r)) + C ∂y/∂X + Dy + E = 0
Such time-dependent 1D (spatial) equations are common in heat/mass/momentum transport. The PDE representing conservation of the transport variable usually contains a convective term (V·∇T or V·∇C or V·∇V⃗) and a diffusion term (α∇²T or D∇²C or ν∇²V⃗), besides the unsteady-state or transient term (ρC_p ∂T/∂t or ∂C/∂t or ρ ∂V⃗/∂t). Therefore,
∂C_A/∂t + V_X ∂C_A/∂X = D ∂²C_A/∂X² − kC_A
or ρC_p(∂T/∂t + V_X ∂T/∂X) = k ∂²T/∂X² + (kC_A)(−ΔH)
or ρ(∂V_X/∂t + V_X ∂V_X/∂X) = μ ∂²V_X/∂X² − (dp/dX) + ρg
or ρC_p ∂T/∂t = (k/r²) ∂/∂r(r² ∂T/∂r)
or ∂C_A/∂t = (D/r) ∂/∂r(r ∂C_A/∂r) − kC_A
— all 1D parabolic PDEs —
are all examples of the time-dependent 1D parabolic PDE. You may refer to the book by Ferziger for the exact mathematical definitions of the different types of PDE, including parabolic, elliptic, and hyperbolic.
⇒ Note: if you drop the time-dependent (unsteady-state or transient) term, what do you get? A boundary value problem (BVP), which we covered in the previous lectures.
⇒ The message is clear: solving a 1D parabolic PDE is not very different from solving a BVP. The procedure for discretization in space remains the same as before; it is simply repeated at every time step as you march, refreshing the numerical values from the previous step. In fact, you may like to copy the programming code for the BVP and nest it within a time loop. The graphical representation of the numerical calculation is as follows.
[Schematic: space nodes i−1, i, i+1 (0 ≤ X ≤ 1) at successive time levels t = 0, 1, 2; march on time.]
Take a general case:
A ∂y/∂t + B ∂y/∂X = C ∂²y/∂X² + D  (or A ∂y/∂t = f(y_X, y_XX, D))
Steps
(1) Discretize the X-terms as before. In time, apply the trapezoidal rule for integration. (Note: D is a constant, so (D^t + D^{t+1})/2 = D!) This method is better known as Crank–Nicolson. Also note that you have then used a 2nd-order accurate scheme in both time and X.
(2) The discretized equation becomes
(−C/(2h²) − B/(4h))y_{i−1}^{t+1} + (A/Δt + C/h²)y_i^{t+1} + (−C/(2h²) + B/(4h))y_{i+1}^{t+1}
= (C/(2h²) + B/(4h))y_{i−1}^t + (A/Δt − C/h²)y_i^t + (C/(2h²) − B/(4h))y_{i+1}^t + D ,  i = 1, N
Note: All terms on RHS are known/calculated from the previous time step.
Lecture #22
(continued)
(3) Apply the BCs on both sides, (t+1) and (t):
−(C/h²)y_{N−1}^{t+1} + (A/Δt + C/h²)y_N^{t+1} = (C/h²)y_{N−1}^t + (A/Δt − C/h²)y_N^t + D  (i = N)
The other rows (2, ⋯, N−1) remain the same. Again, the coefficient matrix is tridiagonal:
Ay^{(t+1)} = d^{(t)}
On application of the BCs:
d₁ = d₁ − a₁y₀^t  (1st row, i = 1). Note that y₀ is substituted for y_{i−1} before applying the BC.
a_N = a_N + c_N  (last row, i = N)
d_N = (C/h²)y_{N−1}^t + (A/Δt − C/h²)y_N^t + D
(4) Start the calculations: the RHS is known from t = 0 (initial condition):
Ay^{(1)} = b^{(0)}
Ay^{(2)} = b^{(1)}
⋮  (march on time, calling the tridiagonal solver at every step)
Ay^{(t+1)} = b^{(t)}
Convince yourself that the SS values will be the same as those obtained by solving the corresponding BVP, i.e., by dropping the transient or unsteady-state term.

[Plot: y vs t; the parabolic (transient) solution approaches the BVP (steady-state) solution.]
Ex:
Consider the SS flow of a liquid through a long tube. The Reynolds number is 5000, so a radially flat velocity profile may be assumed. At a certain time a tracer is injected into the liquid at the inlet. The dispersion coefficient of the tracer in the liquid may be assumed to be D cm²/s. The radial Péclet number is large. Determine the unsteady-state (transient) axial concentration profiles, C_A(t, X) = ?

Soln:  [Schematic: tube of length L; plug velocity V̄; tracer at concentration C_Ao injected at the inlet at t = 0.]
∂C_A/∂t + V·∇C_A = D∇²C_A + r  (no reaction)
∂C_A/∂t + V_X ∂C_A/∂X = D ∂²C_A/∂X²  (neglecting the radial term) ;  V_X = V̄
BC:  X = L:  ∇C_A = 0 (long-tube approximation), or ∂C_A/∂X = 0

Crank–Nicolson discretization:
(C_{A,i}^{t+1} − C_{A,i}^t)/Δt = ½[ D((C_{A,i+1} − 2C_{A,i} + C_{A,i−1})/h²)^{t+1} + D((C_{A,i+1} − 2C_{A,i} + C_{A,i−1})/h²)^t − V̄((C_{A,i+1} − C_{A,i−1})/(2h))^{t+1} − V̄((C_{A,i+1} − C_{A,i−1})/(2h))^t ]

Re-arranged:
(D/(2h²) + V̄/(4h))C_{A,i−1}^{t+1} − (1/Δt + D/h²)C_{A,i}^{t+1} + (D/(2h²) − V̄/(4h))C_{A,i+1}^{t+1}
= −(D/(2h²) + V̄/(4h))C_{A,i−1}^t − (1/Δt − D/h²)C_{A,i}^t − (D/(2h²) − V̄/(4h))C_{A,i+1}^t ,  i = 1, N
i = 0:  C_{A,0}^t = C_{A,0}^{t+1} = C_{A,in}  (inlet BC)
i = N:  C_{A,N+1}^{t+1} = C_{A,N−1}^{t+1} and C_{A,N+1}^t = C_{A,N−1}^t
Apply the BCs:
a_N = a_N + c_N
d_N = −(1/Δt − D/h²)C_{A,N}^t − (D/h²)C_{A,N−1}^t
Call Tridiagonal(N, a, b, c, d, y).

[Plot: C_A vs x at successive times; the front advances toward x = L with a flat gradient (∂C/∂X = 0) at the exit.]
Note that
• the tridiagonal solver is called at every time-step;
• the b̅ vector is updated with the most recent C_A values;
• the gradient at the exit must be flat (consistent with the BC);
• there are two step-sizes (h and Δt). How do you choose them? No one knows! In general, Δt < ΔX/V̄ and Δt < ΔX²/D. Why?
One should choose a fixed ΔX, and then refine Δt till there is convergence in the solution. Then use ΔX = ΔX/2, adjust Δt till the solution converges, and so forth!
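A sketch of the tracer calculation (the numerical values L = 1, V̄ = 1, D = 0.01 in consistent units are assumptions; the dense solve stands in for the tridiagonal routine):

```python
import numpy as np

# Crank-Nicolson marching for dC/dt + V dC/dx = D d2C/dx2
L, V, D = 1.0, 1.0, 0.01           # assumed illustrative values
N = 100
h = L / N
dt = 0.005                          # < h/V and h^2/D, per the rule of thumb
C_in = 1.0
C = np.zeros(N + 1)                 # C(x, 0) = 0
C[0] = C_in                         # inlet BC, held for all t

ap = D / (2 * h**2) + V / (4 * h)   # sub-diagonal
cp = D / (2 * h**2) - V / (4 * h)   # super-diagonal
bp = -(1.0 / dt + D / h**2)         # diagonal

for _ in range(100):                # march to t = 0.5
    a = np.full(N, ap); b = np.full(N, bp); c = np.full(N, cp)
    d = (-ap * C[0:N] - (1.0 / dt - D / h**2) * C[1:N + 1]
         - cp * np.append(C[2:], C[N - 1]))   # uses C_{N+1}^t = C_{N-1}^t
    d[0] -= ap * C_in               # known C_0^{t+1} moved to the RHS
    a[-1] = ap + cp                 # exit row: a_N = a_N + c_N
    M = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
    C[1:] = np.linalg.solve(M, d)

print(C[1], C[-1])                  # front has advanced to x ~ V*t = 0.5
```

Halving h and re-adjusting Δt until the profile stops changing implements the convergence procedure recommended above.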
Ex. Repeat the previous problem, assuming that the solute is irreversibly adsorbed at the tube wall at the rate kC_A.

[Schematic: tube of length L; plug velocity V̄; inlet C_{A,in}; wall reaction r = −kC_A; find C_A(t, X).]

∂C_A/∂t + V̄ ∂C_A/∂X = D ∂²C_A/∂X² − k′C_A
IC:  t = 0:  C_A = C_{A,i}
BC (t = 0⁺):  X = 0:  C_A = C_{A,in} ;  X = L:  ∇C_A = 0 ⇒ ∂C_A/∂X = 0

Discretized equation (grid i = 0, ⋯, N; time levels t, t+1):
(C_{A,i}^{t+1} − C_{A,i}^t)/Δt = ½[ D((C_{A,i+1} − 2C_{A,i} + C_{A,i−1})/h²)^{t+1} + D((C_{A,i+1} − 2C_{A,i} + C_{A,i−1})/h²)^t − V̄((C_{A,i+1} − C_{A,i−1})/(2h))^{t+1} − V̄((C_{A,i+1} − C_{A,i−1})/(2h))^t − (k′C_{A,i}^{t+1} + k′C_{A,i}^t) ]
Note: the k′ term is the extra term relative to the previous example.
Re-arrange,  i = 1, N.
BCs:  i = 0:  C_{A,0}^t = C_{A,0}^{t+1} = C_{A,in}  (for all time steps)
i = N:  C_{A,N+1}^{t+1} = C_{A,N−1}^{t+1} and C_{A,N+1}^t = C_{A,N−1}^t
Apply the BCs:
d₁ = −(D/h² + V̄/(2h))C_{A,in} − (1/Δt − D/h² − k′/2)C_{A,1}^t − (D/(2h²) − V̄/(4h))C_{A,2}^t :  i = 1 (1st row)
a_N = a_N + c_N
d_N = −(D/h²)C_{A,N−1}^t − (1/Δt − D/h² − k′/2)C_{A,N}^t :  i = N (last row)
Proceed as before with the initial condition C_A(i = 1, N) = C_{A,i}. Evaluate d^{(t)}, solve for y^{(t+1)}; evaluate d^{(t+1)}, solve for y^{(t+2)}, etc. The subroutine Tridiagonal(N, a, b, c, d, y) is called at every time step. Plot qualitatively for the same conditions as before, using k = 0.1 s⁻¹.
The axial concentration profiles for the different t's will be similar to before; however, the increase in concentration with t will be smaller because of the reactive/adsorptive wall.
[Plot: C_A vs x at times t₁ < t₂; the reactive-wall profiles lie below the corresponding non-reactive-wall profiles.]
Lecture #23
Example: Consider a spherical (𝑑𝑝 = 0.1 cm) pellet made of steel (𝑘 = 40 W⁄m − K , ρ =
8000 kg⁄m3 , Cp = 400 J⁄kg − K. The initial temperature of the pellet is 300 𝐾. It is
immersed in a large oil tank at 400 K. The corrective heat transfer coefficient, ℎ𝑓 at the sphere
surface is 3000 𝑊 ⁄𝑚2 𝐾. Solve for unsteady- state temperature profiles in the sphere. Choose
Δ𝑡 = 1 𝑠. Determine the average temperature in the sphere and the rate of which the surface
temperature decreases at 𝑡 = 20 𝑠.
Note: the gradient ∇T = 0 at r = 0; therefore, (1/r)∂T/∂r ≈ ∂²T/∂r² at the center (used below).

[Schematic: sphere of radius r_p, initially at 300 K, immersed in oil at 400 K (T_f); film coefficient h_f; radial step Δr.]

ρC_p ∂T/∂t = k∇²T + S  (α = k/(ρC_p))
∂T/∂t = (α/r²) ∂/∂r(r² ∂T/∂r) ,  r_p > r ≥ 0
BC (t = 0⁺):  r = 0:  ∇T = 0, or ∂T/∂r = 0
r = r_p:  −k ∂T/∂r = h(T − T_f), or k ∂T/∂r = h(T_f − T)
The equation is expanded as ∂T/∂t = α ∂²T/∂r² + (2α/r) ∂T/∂r.
Grid: nodes i = 0, 1, ⋯, N (r_i = ih), at time levels t and t+1.
(T_i^{t+1} − T_i^t)/Δt = ½[ α((T_{i+1} − 2T_i + T_{i−1})/h²)^{t+1} + α((T_{i+1} − 2T_i + T_{i−1})/h²)^t + (2α/r_i)((T_{i+1} − T_{i−1})/(2h))^{t+1} + (2α/r_i)((T_{i+1} − T_{i−1})/(2h))^t ]
Arrange:
(α/(2h²) − α/(2r_i h))T_{i−1}^{t+1} − (1/Δt + α/h²)T_i^{t+1} + (α/(2h²) + α/(2r_i h))T_{i+1}^{t+1}
= (α/(2r_i h) − α/(2h²))T_{i−1}^t + (α/h² − 1/Δt)T_i^t − (α/(2h²) + α/(2r_i h))T_{i+1}^t ,  i = 1, N
Note that i = 0 requires a modified equation (why?):  ∂T/∂t = 3α ∂²T/∂r²
(3α/(2h²))T_{−1}^{t+1} − (1/Δt + 3α/h²)T_0^{t+1} + (3α/(2h²))T_1^{t+1} = −(3α/(2h²))T_{−1}^t + (3α/h² − 1/Δt)T_0^t − (3α/(2h²))T_1^t

a₀ = 3α/(2h²),  b₀ = −(3α/h² + 1/Δt),  c₀ = 3α/(2h²)
d₀ = −(3α/(2h²))T_{−1}^t + (3α/h² − 1/Δt)T_0^t − (3α/(2h²))T_1^t    (i = 0)

a_i = α/(2h²) − α/(2r_i h),  b_i = −(α/h² + 1/Δt),  c_i = α/(2h²) + α/(2r_i h)
d_i = (α/(2r_i h) − α/(2h²))T_{i−1}^t + (α/h² − 1/Δt)T_i^t − (α/(2h²) + α/(2r_i h))T_{i+1}^t ,  i = 1, N
Discretize the BCs:
(1)  r = 0:  ∂T/∂r = 0 ⇒ (i = 0):  T_{−1} = T_{+1}
(2)  r = r_p:  k ∂T/∂r = h_f(T_f − T) ⇒ (i = N):  (T_{N+1} − T_{N−1})/(2h) = (h_f/k)(T_f − T_N),
or  T_{N+1} = T_{N−1} + (2h_f h/k)(T_f − T_N)
Apply the BCs:
i = 0:  c₀ = c₀ + a₀
d₀ = (3α/h² − 1/Δt)T₀^t − (3α/h²)T₁^t
i = N:  a_N = a_N + c_N,  b_N = b_N − (2h_f h/k)c_N
d_N = −(α/h²)T_{N−1}^t + (α/h² − 1/Δt + (2h_f h/k)c_N)T_N^t − 2(2h_f h/k)c_N T_f ,
where c_N = α/(2h²) + α/(2r_N h). (Substituting the ghost node at both time levels produces the combined T_f term.)
Example: Consider the catalytic oxidation of SO₂ into SO₃ over a spherical Pt-dispersed porous carbon catalyst (radius = r_p). The atmospheric concentration of SO₂ is 1%. The rate of reaction is r = k(= 0.1)C_A mole/(s·m³). The film mass transfer coefficient is k_m (m/s). Determine the time-development of the concentration profiles within the catalyst. The pore diffusion coefficient of SO₂ is estimated to be D m²/s.
[Schematic: spherical pellet of radius r_p; bulk concentration C_b = 0.01; film coefficient k_m; find C(t, r).]

∂C_A/∂t = (D/r²) ∂/∂r(r² ∂C_A/∂r) − kC_A  — (1)
BC (t = 0⁺):  ∇C_A = 0 @ r = 0
−D ∂C_A/∂r = k_m(C_A − C_b) @ r = r_p
(1) The steady-state solution of eq. (1) is the same as that of the equation solved in the previous lectures as a BVP. Therefore, one way of checking the present code for this unsteady-state 1D parabolic equation is to compare its SS solution with that from the BVP code. In later lectures you will see that it is often better to artificially introduce the transient term (viz. ∂T/∂t, ∂C/∂t, ∂V/∂t) into a BVP and solve the entire time-profile, even when one is asked for the SS solution only. This strategy is often followed for solving elliptic PDEs.
(2) You should also compare this mass transport example with the previous example on heat transfer. Except for the non-homogeneous term (kC_A), the two equations are identical, including the IC and BCs.
Recall:
∂T/∂t = (α/r²) ∂/∂r(r² ∂T/∂r)        ∂C_A/∂t = (D/r²) ∂/∂r(r² ∂C_A/∂r)
IC:  t = 0:  T = T_o ;  C_A = C_Ai  for all r_p > r > 0
BC 1:  r = 0:  ∂T/∂r = 0 ;  ∂C_A/∂r = 0
BC 2:  r = r_p:  −k ∂T/∂r = h_f(T − T_f) ;  −D ∂C_A/∂r = k_m(C_A − C_b)
All it means is that their non-dimensionalized forms are identically the same:
∂θ/∂τ = (1/ξ²) ∂/∂ξ(ξ² ∂θ/∂ξ)
where
θ = (T − T_o)/(T_f − T_o)  or  (C_A − C_Ai)/(C_{A,b} − C_Ai)
ξ = r/r_p
τ = t/t_c ;  t_c = r_p²/α  or  r_p²/D
IC:  τ = 0:  θ = 0
BC 1:  ξ = 0:  ∂θ/∂ξ = 0
BC 2:  ξ = 1:  −∂θ/∂ξ = A(θ − 1) ,  where A = Nu or Sh
Therefore, it is clear that their (non-dimensional) solutions will also be the same. It follows that, as a good practice, one should non-dimensionalize the transport equation before solving it. Apart from what one learns about transport phenomena from the analogy between heat, mass, and momentum, whenever the non-dimensional forms of the conservation equations and BCs are the same, the non-dimensional solutions will also be the same. In such cases the programming code written for one problem will serve the other, requiring only small modifications.
Let us continue with the non-dimensionalized form of the present mass transport problem:
∂θ/∂τ = (1/ξ²) ∂/∂ξ(ξ² ∂θ/∂ξ) − Bθ − C ,  where B = kt_c ,  C = kt_c·C_Ai/(C_{A,b} − C_Ai)

Grid: nodes i = 0, ⋯, N at time levels τ and τ+1.

Discretize:
(θ_i^{τ+1} − θ_i^τ)/Δτ = ½[ ((θ_{i+1} − 2θ_i + θ_{i−1})/Δξ²)^{τ+1} + ((θ_{i+1} − 2θ_i + θ_{i−1})/Δξ²)^τ + (2/ξ_i)((θ_{i+1} − θ_{i−1})/(2Δξ))^{τ+1} + (2/ξ_i)((θ_{i+1} − θ_{i−1})/(2Δξ))^τ − B(θ_i^{τ+1} + θ_i^τ) ] − C ,  i = 0, N

Apply the BCs:
i = 0:  θ_{−1} = θ₁
i = N:  −(θ_{N+1} − θ_{N−1})/(2Δξ) = A(θ_N − 1), or θ_{N+1} = θ_{N−1} − 2AΔξ(θ_N − 1)
i = 0:  (1/Δτ + 3/Δξ² + B/2)θ₀^{τ+1} − (3/Δξ²)θ₁^{τ+1} = (1/Δτ − 3/Δξ² − B/2)θ₀^τ + (3/Δξ²)θ₁^τ − C

i = 1, N−1:  (1/(2Δξ²) − 1/(2ξ_iΔξ))θ_{i−1}^{τ+1} − (1/Δτ + 1/Δξ² + B/2)θ_i^{τ+1} + (1/(2Δξ²) + 1/(2ξ_iΔξ))θ_{i+1}^{τ+1}
= −(1/(2Δξ²) − 1/(2ξ_iΔξ))θ_{i−1}^τ − (1/Δτ − 1/Δξ² − B/2)θ_i^τ − (1/(2Δξ²) + 1/(2ξ_iΔξ))θ_{i+1}^τ + C

i = N (with θ_{N+1} = θ_{N−1} − 2AΔξ(θ_N − 1) substituted at both time levels):
(1/Δξ²)θ_{N−1}^{τ+1} − (1/Δτ + 1/Δξ² + B/2 + A(1 + 1/Δξ))θ_N^{τ+1}
= −(1/Δξ²)θ_{N−1}^τ − (1/Δτ − 1/Δξ² − B/2 − A(1 + 1/Δξ))θ_N^τ − 2A(1 + 1/Δξ) + C

Solve Aθ^{τ+1} = d^τ, i = 0, N, with the IC θ = 0 @ τ = 0 (same as before, A ≡ tridiagonal matrix).
Here, we skipped the steps of preparing the tridiagonal matrix and directly substituted the discretized BCs into the discretized equations! As an exercise, prepare the tridiagonal matrix and see whether you get the same discretized equations after substituting the BCs.
Quiz III
Lecture #24-25
Elliptic PDE (Method of Lines)
The model equations may be recognized from the following examples:
(1)  k(∂²T/∂X² + ∂²T/∂y²) + S = 0 ;  S ≡ I²R/∀  cal/(s·m³)
(2)  k(∂²T/∂X² + (1/r)∂/∂r(r ∂T/∂r)) + S = 0
— SS 2D temperature profiles in the uniform heating of a rectangular plate (1) and a cylindrical wire (2).
(3)  V_X ∂C/∂X = D ∂²C/∂X² + (D/r)∂/∂r(r ∂C/∂r) − kC
(4)  V_X ∂T/∂X = k ∂²T/∂X² + k ∂²T/∂y²
— SS 2D temperature distribution in flow through a rectangular channel.
(5)  ρV_X ∂V_X/∂X = μ(∂²V_X/∂X² + (1/r)∂/∂r(r ∂V_X/∂r)) − dp/dx
By now, you must have realized that we are referring to an elliptic PDE, which describes a SS 2D (no time-dependent term) heat/mass/momentum transport problem. Why is the equation called elliptic here? Because one has to solve over the entire 2D space at once. Given the SS condition, there is no 'marching' in time and no updating of the solution from a previous time-step, as we discussed earlier for parabolic PDEs; the solution must be sought in one go under SS conditions. You will see later that it is more computationally intensive, requiring iterations.
- The best way of understanding the numerical technique for solving an elliptic PDE is to
directly take the example (1) above.
Ex: A rectangular plate (L × w × t_h) fabricated from stainless steel (k = 40 W/m·K, ρ = 8000 kg/m³, C_p = 400 J/kg·K) is uniformly heated using an electric power source (100 W). The top and bottom ends are insulated, whereas the side surfaces are exposed to the atmosphere (T_a = 30 °C, h = 100 W/m²·K). Determine the SS temperature profiles in the plate.

[Schematic: plate with uniform heat source S (W/m³) = 100/(L·w·t_h); insulated edges at y = 0 and y = w (−k dT/dy = 0); side surfaces at x = 0 and x = L exposed to T_a = 30 °C with film coefficient h_f; interior node (i, j) with neighbours (i±1, j), (i, j±1); grid spacings Δx, Δy.]
At SS in the solid:
k(∂²T/∂x² + ∂²T/∂y²) + S = 0
BCs:  x = 0 for w > y > 0:  −k ∂T/∂x = −h_f(T − T_a)
x = L for w > y > 0:  −k ∂T/∂x = h_f(T − T_a)
y = 0 and w for L > x > 0:  −k ∂T/∂y = 0  (insulation)
Computational molecule: node (i, j) with neighbours (i−1, j), (i+1, j), (i, j−1), (i, j+1);
Δx = L/N, Δy = W/M ;  i = 0, ⋯, N ;  j = 0, ⋯, M.
Discretize the equation over (i, j). Note that it is a 2D problem; therefore, one has to discretize in both directions (x, y) using steps Δx and Δy, respectively. They need not be equal.
k[(∂²T/∂X²)_{i,j} + (∂²T/∂y²)_{i,j}] + S_{i,j} = 0
In Ay̅ = b̅ form:
(1)  (1/Δx²)(T_{i−1,j} − 2T_{i,j} + T_{i+1,j}) = −(1/Δy²)(T_{i,j−1} − 2T_{i,j} + T_{i,j+1}) − S_{i,j}/k
(2)  (1/Δy²)(T_{i,j−1} − 2T_{i,j} + T_{i,j+1}) = −(1/Δx²)(T_{i−1,j} − 2T_{i,j} + T_{i+1,j}) − S_{i,j}/k
Notes: 1. Although the linear algebraic equations have taken the form Ay̅ = b̅ in both cases, b̅ (the RHS terms) is not known. Therefore, in principle, y̅ (or T̅) cannot be solved by inverting the matrix formed on the LHS, the way we did in the previous examples. In other words, there is no way one can march in the x or y direction solving the unknown variables, because marching in either direction (i or j) creates unknown variables in the other direction (j or i). In fact, one has to solve the entire space at one time (without marching). This is the difficulty in solving an elliptic PDE!
(2) The only way to solve an elliptic PDE, or Ay̅ = b̅ where A is the tridiagonal matrix, is by making a guess for all variables (T_{i,j}) to start with, so that b̅ (the RHS term) is known. Then y̅ can be solved. Next, compare the newly calculated values with the guess values, and iterate till there is convergence:
Ay^{k+1} = b^k ;  b⁰ = initial guess for {T_{i,j}} ;  k = number of iterations.
(3) Either of the two discretized schemes, (1) and (2), can be used to solve T_{i,j}. In general, one should sweep the discretized set of equations in the direction in which the expected solution (functional value) is less stiff than in the other direction. In the present example, w ≪ L and the y-ends are insulated; one should use scheme (1). The convergence will be relatively faster.
(1/Δx²)(T_{i−1,j} − 2T_{i,j} + T_{i+1,j}) = −(1/Δy²)(T^g_{i,j−1} − 2T^g_{i,j} + T^g_{i,j+1}) − S_{i,j}/k  — (1)
a(i, j) = 1/Δx² ;  b(i, j) = −2/Δx² ;  c(i, j) = 1/Δx²
d(i, j) = −(1/Δy²)(T^g_{i,j−1} − 2T^g_{i,j} + T^g_{i,j+1}) − S_{i,j}/k ;  i = 0, N ;  j = 0, M
The superscript "g" stands for the guess values. Once the initial guess is made, one can march along the j direction, solving T_{i,j} at every jth step as Ay̅_j = b̅, where A is the tridiagonal matrix containing the discretized terms along the i direction for fixed j, and b̅ is known from the guess.
Discretize the BCs:
(i = 0), j:  −k(T_{1,j} − T_{−1,j})/(2Δx) = −h_f(T_{0,j} − T_a),
or T_{−1,j} = T_{1,j} − (2h_fΔx/k)(T_{0,j} − T_a)
(i = N), j:  T_{N+1,j} = T_{N−1,j} − (2h_fΔx/k)(T_{N,j} − T_a)

Apply the BCs:
j = 0 (insulated: T_{i,−1} = T_{i,1}):
d(i, 0) = (2/Δy²)(T^g_{i,0} − T^g_{i,1}) − S_{i,0}/k ;  i = 1, N−1
Tridiag(N+1, a, b, c, d, T_{i,0})  (i = 0, N)
j = 1, M−1:
⇒ No coefficients will change for the rows i = 1, N−1, because all of these correspond to interior nodes.
a(N, j) = a(N, j) + c(N, j)
b(N, j) = b(N, j) − (2h_fΔx/k)c(N, j)
d(N, j) = d(N, j) − (2h_fΔx/k)T_a·c(N, j)    (i = N)
Tridiag(N+1, a, b, c, d, T_{i,j})  (i = 0, N)
j = M (insulated: T_{i,M+1} = T_{i,M−1}):
d(i, M) = (2/Δy²)(T^g_{i,M} − T^g_{i,M−1}) − S_{i,M}/k ;  i = 1, N−1
Tridiag(N+1, a, b, c, d, T_{i,M})  (i = 0, N)
You have therefore called the Tridiagonal subroutine (M+1) times as you marched along the j direction. At the end, the entire x–y grid has been solved for T_{i,j} (i = 0, N; j = 0, M). Compare the solved values with the guess values and keep iterating till there is convergence.
⇒ There is another twist in solving an elliptic PDE. Do you update your guess values of b̅ (the RHS terms) at j with the ones solved at j−1 as you march in the y-direction, or do you wait till you have solved up to M (the last boundary)? This question should remind you of the Gauss–Seidel and Jacobi iterations. The choice is yours; fast convergence is the criterion. Also, a relaxation factor w can be used to update the functional values in either case:
T_{i,j} ← wT_{i,j}^{new} + (1 − w)T_{i,j}^{old}
Lecture #25-26
Ex 2: Consider the fully developed SS flow (Re = 200) of an incompressible NF in a long
horizontal tube (L, D). The inlet temperature of the liquid is 𝑇𝑜 . The heat is supplied to the
flowing liquid at constant flux 𝑞𝑤 (𝑊 ⁄𝑚2 ) through the tube walls. Determine the 2D (𝑟, 𝑥) SS
temperature profiles in the tube.
[Schematic: tube of radius R, length L; inlet temperature T_o; constant wall flux q_w (W/m²); find T(r, X).]

u(r) = U_max(1 − r²/R²)  (Re = 200) ;  ρ, C_p, k ≡ constant

Energy balance over the '2πrΔrΔX' CV:
V(r) ∂T/∂X = α(∂²T/∂X² + (1/r)∂/∂r(r ∂T/∂r)) ;  α = k/(ρC_p)
BC:  X = 0:  T(r) = T_o ,  R ≥ r ≥ 0
X = L:  ∂T/∂X = 0  (long-tube approximation)
r = 0:  ∂T/∂r = 0  (symmetry BC)
r = R:  −k ∂T/∂r = −q_W (W/m²), or k ∂T/∂r = q_W ,  L ≥ z ≥ 0
Let us non-dimensionalize the equation and BCs:
θ = T/T_o ,  z = X/L_c ,  ξ = r/R ,  where L_c = R·Pe and Pe (radial) = U_max R/α
(U_max(1 − ξ²)/L_c) ∂θ/∂z = α((1/L_c²) ∂²θ/∂z² + (1/R²) ∂²θ/∂ξ² + (1/R²)(1/ξ) ∂θ/∂ξ)
or
(1 − ξ²) ∂θ/∂z = (1/Pe²) ∂²θ/∂z² + ∂²θ/∂ξ² + (1/ξ) ∂θ/∂ξ
BCs:
z = 0:  θ = 1
z = L/L_c:  ∂θ/∂z = 0 ,  1 > ξ > 0
ξ = 0:  ∂θ/∂ξ = 0
ξ = 1:  ∂θ/∂ξ = (q_W R)/(kT_o) = H (constant), for L > z > 0
Ex. Consider the fully developed SS flow (Re = 200) of an incompressible NF in a long horizontal tube (L, D). The inlet concentration of species A in the liquid is C_Ao. The species is catalytically destroyed at the tube walls by a zeroth-order chemical reaction (k mole/(s·m³)). Determine the 2D (r, x) SS concentration profiles in the tube.

[Schematic: tube of radius R; inlet C_Ao; wall reaction r = −k; u(r) = U_max(1 − r²/R²); find C(r, X).]
V_X ∂C_A/∂X = D(∂²C_A/∂X² + (1/r)∂/∂r(r ∂C_A/∂r))  [mole/(s·m³)]
U_max(1 − r²/R²) ∂C_A/∂X = D(∂²C_A/∂X² + (1/r)∂/∂r(r ∂C_A/∂r))
 = D(∂²C_A/∂X² + ∂²C_A/∂r² + (1/r)∂C_A/∂r)
BCs:  X = 0:  C_A = C_Ao ;  X = L:  ∂C_A/∂x = 0  (R > r > 0)
r = 0:  ∂C_A/∂r = 0 ;  r = R:  −D ∂C_A/∂r = k  (L > z > 0)
θ = C_A/C_Ao ,  z = x/L_c ,  ξ = r/R ;  L_c = R·Pe ,  where Pe (radial) = U_max R/D
(1 − ξ²) ∂θ/∂z = (1/Pe²) ∂²θ/∂z² + ∂²θ/∂ξ² + (1/ξ) ∂θ/∂ξ
BC:  z = 0:  θ = 1 ;  z = L/L_c:  ∂θ/∂z = 0
ξ = 0:  ∂θ/∂ξ = 0 ;  ξ = 1:  −∂θ/∂ξ = kR/(DC_Ao) = M (constant)
⇒ It is clear that the non-dimensionalized equations and BCs of this (mass transport)
and previous (heat transport) examples are the same. Therefore, you need to solve only one of
the two. The dimensionless solutions will be the same. This situation should remind you of the
recommendation that one should non-dimensionalize the equation before solving it. Several
such analogous heat, mass, and momentum transport scenarios exist in chemical engineering
applications.
⇒ In most cases, the radial Pe (mass or heat) number in the laminar flow regime is of the order of 10 or higher. In such cases, the axial diffusion term can be neglected. In other words,
α ∂²T/∂X² ≪ V_X ∂T/∂X  or  D ∂²C_A/∂X² ≪ V_X ∂C_A/∂X.  (See the BSL book.)
If you neglect the axial diffusion term, the 2D elliptic PDE is modified/simplified to a 1D parabolic PDE:
V_X ∂T/∂X = (α/r) ∂/∂r(r ∂T/∂r)  or  V_X ∂C_A/∂X = (D/r) ∂/∂r(r ∂C_A/∂r)
This simplified PDE can be solved using the Crank–Nicolson technique described in the preceding lecture. In other words, one can numerically march in the X direction and solve in the r direction using the Thomas algorithm for the tridiagonal matrix built from the discretized r terms. For now, let us revert to the original (full) non-dimensionalized elliptic PDE:
(1 − ξ²) ∂θ/∂z = (1/Pe²) ∂²θ/∂z² + ∂²θ/∂ξ² + (1/ξ) ∂θ/∂ξ
with, at the line of symmetry (ξ = 0),
(1 − ξ²) ∂θ/∂z = (1/Pe²) ∂²θ/∂z² + 2 ∂²θ/∂ξ²
Discretize the main conservation equation:
(1 − ξ_j²)(θ_{i+1,j} − θ_{i−1,j})/(2Δz) = (1/Pe²)(θ_{i+1,j} − 2θ_{i,j} + θ_{i−1,j})/Δz² + (θ_{i,j+1} − 2θ_{i,j} + θ_{i,j−1})/Δξ² + (1/ξ_j)(θ_{i,j+1} − θ_{i,j−1})/(2Δξ)  — (1)
i = 1, N ;  j = 1, M
Discretize the approximated conservation equation at the line of symmetry:  i = 1, N ;  j = 0.
Δz = (L/L_c)/N ,  Δξ = 1/M
Note that the RHS terms of all rows contain guess values of θ_{i,j} to begin with.
Discretized BCs:  i = 0 (z = 0):  θ_{0,j} = 1 ;  j = 0 (ξ = 0):  θ_{i,−1} = θ_{i,1}.
j = 0, middle rows, i = 2, N−1:
((1 − ξ₀²)/(2Δz) + 1/(Pe²Δz²))θ_{i−1,0} − (2/(Pe²Δz²))θ_{i,0} − ((1 − ξ₀²)/(2Δz) − 1/(Pe²Δz²))θ_{i+1,0} = −(4/Δξ²)θ^g_{i,1} + (4/Δξ²)θ^g_{i,0}
last row, i = N (using θ_{N+1,0} = θ_{N−1,0}):
(2/(Pe²Δz²))θ_{N−1,0} − (2/(Pe²Δz²))θ_{N,0} = −(4/Δξ²)θ^g_{N,1} + (4/Δξ²)θ^g_{N,0}
You have a tridiagonal matrix A in (Aθ^{k+1}_{i,0} = b^k_{i,0}) to invert.
j = 1, M−1:
1st row, i = 1 (θ_{0,j} = 1 moved to the RHS):
−(2/(Pe²Δz²))θ_{1,j} − ((1 − ξ_j²)/(2Δz) − 1/(Pe²Δz²))θ_{2,j}
= −((1 − ξ_j²)/(2Δz) + 1/(Pe²Δz²))θ_{0,j} + (1/(2Δξξ_j) − 1/Δξ²)θ^g_{1,j−1} + (2/Δξ²)θ^g_{1,j} − (1/(2Δξξ_j) + 1/Δξ²)θ^g_{1,j+1}
middle rows: as in the general arranged equation.
last row, i = N:
(2/(Pe²Δz²))θ_{N−1,j} − (2/(Pe²Δz²))θ_{N,j} = (1/(2Δξξ_j) − 1/Δξ²)θ^g_{N,j−1} + (2/Δξ²)θ^g_{N,j} − (1/(2Δξξ_j) + 1/Δξ²)θ^g_{N,j+1}
Again, Aθ^{k+1}_{i,j} = b^k_{i,j} ;  i = 1, N ;  j = 1, M−1.
j = M (using θ_{i,M+1} = θ_{i,M−1} − 2MΔξ from the wall BC):
1st row, i = 1:
−(2/(Pe²Δz²))θ_{1,M} − ((1 − ξ_M²)/(2Δz) − 1/(Pe²Δz²))θ_{2,M}
= −((1 − ξ_M²)/(2Δz) + 1/(Pe²Δz²))θ_{0,M} − (2/Δξ²)θ^g_{1,M−1} + (2/Δξ²)θ^g_{1,M} + 2MΔξ(1/(2Δξξ_M) + 1/Δξ²)
last row, i = N: analogous, with θ_{N+1,M} = θ_{N−1,M} additionally substituted.
Again, Aθ^{k+1}_{i,M} = b^k_{i,M}.
You have called the Thomas algorithm (M+1) times to invert the tridiagonal A matrix and solve θ_{i,j} (i = 1, N; j = 0, M), using the guess values on the RHS.
You must have noted that we did not prepare the tridiagonal matrix, and instead directly substituted the BCs into the discretized equations! As an exercise, prepare the tridiagonal matrix by defining the individual elements of the rows for every j, viz. a(i, j), b(i, j), ⋯, substitute the BCs, and see whether you get the same equations for the different j's.
Now is the time to compare the solved θ_{i,j} values with the guess values made at the beginning of the iteration, before starting the second iteration.
⇒ From a programming point of view, you should be able to choose reasonable values of all variables, including D, V_max, M and Pe (mass transport), or H and Pe (heat transport).
⇒ As noted earlier, solving an elliptic PDE is computationally intensive, requiring guess values, iterations, and convergence. Very often, one artificially inserts a transient term (∂C_A/∂t or ρC_p ∂T/∂t or ρ ∂V⃗/∂t) and seeks the SS solution; this is the focus of the last two lectures.
Lecture #26-27
𝑇𝑖𝑚𝑒 − 𝑑𝑒𝑝𝑒𝑛𝑑𝑒𝑛𝑡 2𝐷 𝑝𝑎𝑟𝑎𝑏𝑜𝑙𝑖𝑐 𝑃𝐷𝐸 (𝐴𝐷𝐼) 𝑀𝑒𝑡ℎ𝑜𝑑
∂C_A/∂t + V·∇C_A = D∇²C_A + (−r_A);  C_A(t, r, x)
or
ρC_p (∂T/∂t + V·∇T) = k∇²T + (−r_A)(ΔH);  T(t, r, x)
Note: the SS solution must be the same as the converged solution of the analogous 2D elliptic PDE (∂C_A/∂t = ∂T/∂t = 0) discussed in the previous lectures.

∂φ/∂t = φ_XX + φ_X + φ_YY + φ_Y;  φ(t, x, y)
with necessary IC and BCs.
((φ^{t+1} − φ^t)/Δt)_{i,j}
= ½ [ ((φ_{i+1,j} − 2φ_{i,j} + φ_{i−1,j})/ΔX²)^{t+1} + ((φ_{i+1,j} − φ_{i−1,j})/(2ΔX))^{t+1}
  + ((φ_{i,j+1} − 2φ_{i,j} + φ_{i,j−1})/ΔY²)^t + ((φ_{i,j+1} − φ_{i,j−1})/(2ΔY))^t ]

Arranging:

(1/(2ΔX²) − 1/(4ΔX)) φ^{t+1}_{i−1,j} − (1/Δt + 1/ΔX²) φ^{t+1}_{i,j} + (1/(4ΔX) + 1/(2ΔX²)) φ^{t+1}_{i+1,j}
= (1/(4ΔY) − 1/(2ΔY²)) φ^t_{i,j−1} − (1/Δt − 1/ΔY²) φ^t_{i,j} − (1/(4ΔY) + 1/(2ΔY²)) φ^t_{i,j+1}
- This way you are 'marching' in the 'j' direction and 'sweeping' in the 'i' direction.
In the second scheme, if you 'march' in the 'i' direction and 'sweep' in the 'j' direction, you will get the following equation:
(1/(4ΔY) − 1/(2ΔY²)) φ^{t+1}_{i,j−1} + (1/Δt + 1/ΔY²) φ^{t+1}_{i,j} − (1/(4ΔY) + 1/(2ΔY²)) φ^{t+1}_{i,j+1}
= (1/(2ΔX²) − 1/(4ΔX)) φ^t_{i−1,j} + (1/Δt − 1/ΔX²) φ^t_{i,j} + (1/(2ΔX²) + 1/(4ΔX)) φ^t_{i+1,j}
Both schemes will work, and the Crank-Nicolson treatment produces 2nd-order accuracy.
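As a sanity check on that accuracy claim, the 1D analogue of the Crank-Nicolson step can be verified against an exact solution. A minimal sketch (the model problem u_t = u_xx with exact solution u = exp(−π²t)·sin(πx), and the grid and step sizes, are assumptions for illustration):

```python
import numpy as np

# Crank-Nicolson for u_t = u_xx on (0,1), u(0)=u(1)=0, u(x,0) = sin(pi*x)
nx, dt, tend = 51, 1.0e-3, 0.1
x = np.linspace(0.0, 1.0, nx)
dx = x[1] - x[0]
r = dt / (2.0 * dx * dx)
n = nx - 2                               # interior unknowns only (BCs are zero)

# (I - r*delta2) u^{t+1} = (I + r*delta2) u^t, built as dense matrices for clarity
A = np.zeros((n, n))
B = np.zeros((n, n))
for k in range(n):
    A[k, k] = 1.0 + 2.0 * r
    B[k, k] = 1.0 - 2.0 * r
    if k > 0:
        A[k, k - 1] = -r
        B[k, k - 1] = r
    if k < n - 1:
        A[k, k + 1] = -r
        B[k, k + 1] = r

u = np.sin(np.pi * x[1:-1])
for _ in range(round(tend / dt)):
    u = np.linalg.solve(A, B @ u)        # one CN step

# compare with the exact solution at t = tend
err = np.max(np.abs(u - np.exp(-np.pi**2 * tend) * np.sin(np.pi * x[1:-1])))
```

Halving both Δt and Δx should reduce err by roughly a factor of four, which is the 2nd-order behaviour claimed above.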
First discretize the φ_X and φ_XX terms implicitly and the φ_Y and φ_YY terms explicitly to solve for φ over a half step (Δt/2), and then discretize the φ_Y and φ_YY terms implicitly and the φ_X and φ_XX terms explicitly:
Step 1:
((φ^{t+½} − φ^t)/(Δt/2))_{i,j} = ½ [(φ_XX + φ_X)^{t+½} + (φ_YY + φ_Y)^t]
                                      (implicit)           (explicit)
Step 2:
((φ^{t+1} − φ^{t+½})/(Δt/2))_{i,j} = ½ [(φ_XX + φ_X)^{t+½} + (φ_YY + φ_Y)^{t+1}]
                                         (explicit)           (implicit)
or  A φ_y^{t+½} = φ_x^t  and  B φ_x^{t+1} = φ_y^{t+½}
(Stencil diagrams: step 1 sweeps 'i' and marches 'j', using (i−1, j), (i, j), (i+1, j) at t + ½ and (i, j±1) at t; step 2 sweeps 'j' and marches 'i', using (i, j±1) at t + 1 and (i±1, j) at t + ½.)
Ex: Consider 2D diffusion in a rectangular solid porous carbon block. The block is initially soaked with moisture, say at concentration C_o (moles/m³). At t = 0+, all four sides of the block are exposed to dry air (moisture concentration = C_i ≪ C_o). We are interested in calculating the unsteady-state concentration profiles of moisture within the block, i.e., C(t, x, y) = ? The pore diffusion coefficient for moisture in the solid is D_pore.
Ans:
(Figure: rectangular block, length L in x and width w in y, grid spacings ΔX and Δy; the surroundings are at C_i moles/m³; C(t, x, y) = ?)
A species balance over the 'ΔXΔy·1' CV will yield the following conservation equation:
∂C_A/∂t + V·∇C_A = D_pore ∇²C_A + (−r_A)
With no flow and no reaction inside the block, this reduces to
∂C_A/∂t = D_pore (∂²C_A/∂X² + ∂²C_A/∂y²)
t = 0:  C = C_o for 0 ≤ x ≤ L; 0 ≤ y ≤ w
t > 0:  C = C_i* on all four sides
(C_i* is the solid-phase moisture concentration at the surface of the block in equilibrium with C_i in the atmosphere.)
Step 1: discretize 'i' implicitly and 'j' explicitly over (t → t + ½):

(C^{t+½}_{i,j} − C^t_{i,j})/(Δt/2) = D_pore [((C_{i+1,j} − 2C_{i,j} + C_{i−1,j})/ΔX²)^{t+½} + ((C_{i,j+1} − 2C_{i,j} + C_{i,j−1})/Δy²)^t]
(Grid: i = 0, 1, ⋯, N in x and j = 0, 1, ⋯, M in y; the 5-point stencil couples (i−1, j), (i, j), (i+1, j), (i, j+1) and (i, j−1).)
Arrange:  i = 1, N − 1;  j = 1, M − 1
Step 2: substitute the BCs, C_{i,0} = C_{i,M} = C* (i = 0, N), into the arranged equations for i = 1, N − 1; j = 1, M − 1.
d(1, j) = d(1, j) − a(1, j) C*
d(N − 1, j) = d(N − 1, j) − c(N − 1, j) C*      (j = 1, M − 1)
Convince yourself that no other coefficients will change. This was actually an easy problem, because all four boundary conditions were simple, i.e., the functional values were prescribed. Problems become complicated when you have 'flux'/gradient or mixed boundary conditions, for example,
−D ∂C/∂X = k_m (C − C_atm)  or  = flux (known)
and/or  −D ∂C/∂y = kC  or  k_m (C − C_atm)
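Such a mixed (Robin) condition is typically handled by eliminating a ghost node, in the same spirit as the a(N, 0) = a(N, 0) + c(N, 0) trick used below. A minimal 1D steady-diffusion sketch (all parameter values are assumed for illustration):

```python
import numpy as np

# Steady 1-D diffusion d2C/dx2 = 0 on [0, L], C(0) = C0 (known), and at x = L
# the mixed condition -D dC/dx = km*(C - Catm). The ghost node
# C[N+1] = C[N-1] - (2*dx*km/D)*(C[N] - Catm) is eliminated from the last row.
D, km, L, C0, Catm = 1.0e-9, 2.0e-7, 0.01, 1.0, 0.0   # assumed values
N = 50
dx = L / N

A = np.zeros((N, N))          # unknowns are C at nodes 1 .. N
rhs = np.zeros(N)
for k in range(N - 1):        # rows for interior nodes 1 .. N-1
    if k > 0:
        A[k, k - 1] = 1.0
    A[k, k] = -2.0
    A[k, k + 1] = 1.0
rhs[0] = -C0                  # known boundary value C(0) moves to the RHS

# last row (node N) after ghost elimination:
#   2*C[N-1] - (2 + beta)*C[N] = -beta*Catm,   beta = 2*dx*km/D
beta = 2.0 * dx * km / D
A[N - 1, N - 2] = 2.0
A[N - 1, N - 1] = -(2.0 + beta)
rhs[N - 1] = -beta * Catm

C = np.linalg.solve(A, rhs)

# the analytic solution is linear, with surface value
CL_exact = (C0 * D + km * L * Catm) / (D + km * L)
```

Because the central-difference ghost elimination is exact for a linear profile, the computed surface concentration matches CL_exact to machine precision here.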
You will be calling the subroutine 'M − 1' times as you 'march' along the 'j' direction. Now you have the values of C^{t+½}_{i,j} for i = 1, N − 1; j = 1, M − 1.
Step 3: now march along the 'i' direction and 'sweep' along the 'j' direction to solve for C^{t+1}_{i,j} from C^{t+½}_{i,j}:

(C^{t+1}_{i,j} − C^{t+½}_{i,j})/(Δt/2) = D_pore [((C_{i−1,j} − 2C_{i,j} + C_{i+1,j})/ΔX²)^{t+½} (explicit) + ((C_{i,j−1} − 2C_{i,j} + C_{i,j+1})/Δy²)^{t+1} (implicit)]
Arrange:  j = 1, M − 1;  i = 1, N − 1
Step 4: substitute the discretized BCs for i = 1, N − 1; j = 1, M − 1; as in Step 2, only the boundary-adjacent d coefficients will be modified.
You will be calling the subroutine 'N − 1' times as you 'march' along the 'i' direction. Thus, you have solved for C^{t+1}_{i,j}, i = 1, N − 1; j = 1, M − 1.
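The whole two-half-step procedure for the drying block can be sketched compactly. A minimal Python/NumPy implementation of the scheme above on a dimensionless grid (the grid size, the λ values, and C_o = 1, C* = 0 are assumptions; physically λ_x = D_pore·Δt/(2ΔX²) and λ_y = D_pore·Δt/(2Δy²)):

```python
import numpy as np

def thomas(a, b, c, d):
    # Thomas algorithm for a*x[k-1] + b*x[k] + c*x[k+1] = d (a[0], c[-1] unused)
    n = len(d)
    cp, dp = np.zeros(n), np.zeros(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for k in range(1, n):
        m = b[k] - a[k] * cp[k - 1]
        cp[k] = c[k] / m
        dp[k] = (d[k] - a[k] * dp[k - 1]) / m
    x = np.zeros(n)
    x[-1] = dp[-1]
    for k in range(n - 2, -1, -1):
        x[k] = dp[k] - cp[k] * x[k + 1]
    return x

def adi_step(C, lam_x, lam_y, Cstar):
    """One full ADI step for dC/dt = Dp*(Cxx + Cyy), all sides held at Cstar.

    lam_x = Dp*dt/(2*dx^2), lam_y = Dp*dt/(2*dy^2).
    """
    N, M = C.shape
    # half-step 1: implicit in x (sweep 'i'), explicit in y (march 'j')
    Ch = C.copy()                               # C at t + 1/2
    for j in range(1, M - 1):
        n = N - 2
        a = np.full(n, -lam_x); b = np.full(n, 1 + 2 * lam_x); c = np.full(n, -lam_x)
        d = C[1:-1, j] + lam_y * (C[1:-1, j + 1] - 2 * C[1:-1, j] + C[1:-1, j - 1])
        d[0] += lam_x * Cstar                   # known boundary values fold into d,
        d[-1] += lam_x * Cstar                  # cf. d(1,j) = d(1,j) - a(1,j)*C*
        Ch[1:-1, j] = thomas(a, b, c, d)
    # half-step 2: implicit in y (sweep 'j'), explicit in x (march 'i')
    Cn = C.copy()
    for i in range(1, N - 1):
        m = M - 2
        a = np.full(m, -lam_y); b = np.full(m, 1 + 2 * lam_y); c = np.full(m, -lam_y)
        d = Ch[i, 1:-1] + lam_x * (Ch[i + 1, 1:-1] - 2 * Ch[i, 1:-1] + Ch[i - 1, 1:-1])
        d[0] += lam_y * Cstar
        d[-1] += lam_y * Cstar
        Cn[i, 1:-1] = thomas(a, b, c, d)
    return Cn

# drying of the block: C = Co = 1 inside, C* = 0 on all four sides
N = M = 21
C = np.full((N, M), 1.0)
C[0, :] = C[-1, :] = C[:, 0] = C[:, -1] = 0.0
for _ in range(50):
    C = adi_step(C, lam_x=0.25, lam_y=0.25, Cstar=0.0)
```

Note that each half step costs only O(N·M) via the Thomas algorithm, which is the whole point of ADI over a direct 2D solve.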
Lecture #28
Example: Consider the SS flow of a liquid through a long tube. The Reynolds number is 180 (laminar). At time t = 0, a tracer is injected into the liquid at the inlet to the tube. The diffusion coefficient of the tracer in the liquid is D cm²/s. Determine the time-dependent tracer concentration profiles in the tube, i.e., C(t, r, x) = ?
Soln.
(Figure: tube of radius R and length L; inlet tracer concentration C_o; grid spacings ΔX in x and Δr in r; C(t, r, x) = ?)
V(r) = U_max (1 − r²/R²)
A species balance over the '2πrΔrΔx' CV yields the following equation:
∂C/∂t + V·∇C = D∇²C + (−r_A)
or
∂C/∂t + V(r) ∂C/∂X = D [∂²C/∂X² + (1/r) ∂/∂r (r ∂C/∂r)];  0 < X < L, 0 < r < R
or
∂C/∂t = [−V(r) ∂C/∂X + D ∂²C/∂X²] + D [(1/r) ∂C/∂r + ∂²C/∂r²]

t = 0+:  C = C_o at x = 0, for all 0 < r < R
∂C/∂X = 0 @ X = L
Note: you will encounter a singularity at r = 0 in the radial diffusion terms while discretizing. Considering ∂C/∂r = 0 at r = 0 (symmetry), you will solve the following approximated equation on the axis instead:
∂C/∂t = [−V(r) ∂C/∂X + D ∂²C/∂X²] + 2D ∂²C/∂r²  at r = 0, for all 0 < X < L
(C^{t+½}_{i,j} − C^t_{i,j})/(Δt/2)
= ½ { [−V_j (C_{i+1,j} − C_{i−1,j})/(2ΔX) + D (C_{i+1,j} − 2C_{i,j} + C_{i−1,j})/ΔX²]^{t+½}  (implicit)
  + [2D (C_{i,j+1} − 2C_{i,j} + C_{i,j−1})/Δr²]^t  (explicit) } ;  j = 0; i = 1, N
(at j = 0, C_{i,−1} = C_{i,1} by symmetry)

Arrange:
Similarly, discretize the main equation:

(C^{t+½}_{i,j} − C^t_{i,j})/(Δt/2)
= ½ { [−V_j (C_{i+1,j} − C_{i−1,j})/(2ΔX) + D (C_{i+1,j} − 2C_{i,j} + C_{i−1,j})/ΔX²]^{t+½}  (implicit)
  + D [(1/r_j)(C_{i,j+1} − C_{i,j−1})/(2Δr) + (C_{i,j+1} − 2C_{i,j} + C_{i,j−1})/Δr²]^t  (explicit) } ;  j = 1, M; i = 1, N

Arrange,
Note: V_j = U_max (1 − r_j²/R²);  r_j = jΔr
t = 0+:  C(0, j) = C_o
(C(N+1, j) − C(N−1, j))/(2ΔX) = 0, or C(N+1, j) = C(N−1, j)      (0 < j < M)
j = 0:
a(i, 0) = (V_0/(4ΔX) + D/(2ΔX²));  b(i, 0) = −(2/Δt + D/ΔX²);  c(i, 0) = −(V_0/(4ΔX) − D/(2ΔX²));
d(i, 0) = −(D/Δr²) C^t_{i,−1} − (2/Δt + 2D/Δr²) C^t_{i,0} − (D/Δr²) C^t_{i,1};  i = 1, N
Substitute BCs:
d(1, 0) = −(2D/Δr²) C^t_{1,1} − (2/Δt + 2D/Δr²) C^t_{1,0} − a(1, 0) C_o
a(N, 0) = a(N, 0) + c(N, 0)
d(N, 0) = −(2D/Δr²) C^t_{N,1} − (2/Δt + 2D/Δr²) C^t_{N,0}
Call Tridiagonal(N, a, b, c, d, y^{t+½}(i, 0));  i = 1, N
a(i, j) = (V_j/(4ΔX) + D/(2ΔX²));  b(i, j) = −(2/Δt + D/ΔX²);  c(i, j) = −(V_j/(4ΔX) − D/(2ΔX²));
d(i, j) = (D/(2r_jΔr) − D/(2Δr²)) C^t_{i,j−1} − (2/Δt + D/Δr²) C^t_{i,j} − (D/(2r_jΔr) + D/(2Δr²)) C^t_{i,j+1};  j = 1, M; i = 1, N
Substitute the BCs (j = 1, M − 1).
Call Tridiagonal(N, a, b, c, d, y^{t+½}(i, j));  i = 1, N; j = 1, M − 1
You have called the tridiagonal subroutine 'M − 1' times for the interior nodes.
Substitute the BC @ j = M:
d(1, M) = −(D/Δr²) C^t_{1,M−1} − (2/Δt + D/Δr²) C^t_{1,M} − a(1, M) C_o
Call Tridiagonal(N, a, b, c, d, y^{t+½}(i, M));  i = 1, N
(Therefore, you have called the Thomas algorithm (M + 1) times while sweeping the j = 0, M rows.)
For the last time in this course, let us write down the complete set of equations:
A y^{t+½} = d^t  for j = 0;  i = 1, N
Written out row by row (the matrix A is tridiagonal, N × N; y = y^{t+½}(i, 0)):
row 1:  −(2/Δt + D/ΔX²) y_1 − (V_0/(4ΔX) − D/(2ΔX²)) y_2 = d(1, 0)
rows i = 2, N − 1:  (V_0/(4ΔX) + D/(2ΔX²)) y_{i−1} − (2/Δt + D/ΔX²) y_i − (V_0/(4ΔX) − D/(2ΔX²)) y_{i+1} = d(i, 0)
row N:  (D/ΔX²) y_{N−1} − (2/Δt + D/ΔX²) y_N = d(N, 0)
(the row-N coefficient D/ΔX² = a(N, 0) + c(N, 0), from the exit BC)
Similarly, you can write a set of equations for j = 1, M − 1 and for j = M, using the coefficients above.

j = 1, M − 1 (with V_j in the coefficients; y = y^{t+½}(i, j)):
row 1:  b(1, j) y_1 + c(1, j) y_2 = d(1, j)
rows i = 2, N − 1:  a(i, j) y_{i−1} + b(i, j) y_i + c(i, j) y_{i+1} = d(i, j)
row N:  (D/ΔX²) y_{N−1} − (2/Δt + D/ΔX²) y_N = d(N, j)
j = M (with V_M in the coefficients; y = y^{t+½}(i, M)):
row 1:  b(1, M) y_1 + c(1, M) y_2 = d(1, M)
rows i = 2, N − 1:  a(i, M) y_{i−1} + b(i, M) y_i + c(i, M) y_{i+1} = d(i, M)
row N:  (D/ΔX²) y_{N−1} − (2/Δt + D/ΔX²) y_N = d(N, M)
Now, sweep in the 'j' direction and march in the 'i' direction for (t + 1), based on the y(i, j) values you determined at the (t + ½) step above:
A y^{t+1}(i, j) = d^{t+½};  i = 1, N; j = 0, M
Discretize for j = 0:
(C^{t+1}_{i,j} − C^{t+½}_{i,j})/(Δt/2)
= ½ { [−V_j (C_{i+1,j} − C_{i−1,j})/(2ΔX) + D (C_{i+1,j} − 2C_{i,j} + C_{i−1,j})/ΔX²]^{t+½}  (explicit)
  + [2D (C_{i,j+1} − 2C_{i,j} + C_{i,j−1})/Δr²]^{t+1}  (implicit) } ;  (i = 1, N)

and for j = 1, M:
(C^{t+1}_{i,j} − C^{t+½}_{i,j})/(Δt/2)
= ½ { [−V_j (C_{i+1,j} − C_{i−1,j})/(2ΔX) + D (C_{i+1,j} − 2C_{i,j} + C_{i−1,j})/ΔX²]^{t+½}  (explicit)
  + D [(1/r_j)(C_{i,j+1} − C_{i,j−1})/(2Δr) + (C_{i,j+1} − 2C_{i,j} + C_{i,j−1})/Δr²]^{t+1}  (implicit) } ;  (i = 1, N)
In this last lecture of the course, I leave the remaining part (arranging the terms and applying the BCs for the i = 1, (2 ⋯ N − 1), N rows to invert the matrix) as an exercise for you, as you sweep in the 'j' direction. This topic on ADI stops here.
For
∂C/∂t + V·∇C = D∇²C + (−r_A),
C(t, x, r) as t → ∞ must be the same as the converged solution of the corresponding SS (elliptic) problem.
(2) A question arises: when asked to solve the elliptic (2D) PDE, can we not artificially insert the transient term ∂C/∂t and seek the SS solution of the corresponding time-dependent 2D parabolic equation? Very often, yes. Recall that solving an elliptic PDE requires guess values and iterations, and there is always a convergence issue (how many iterations?). The parabolic equation, on the other hand, does not require iterations: you march along the 't' axis, solving y(x, r) at every time step. Therefore, more often than not, the ADI method is preferred over the 'method of lines' for solving an elliptic (2D) PDE: insert the transient term and solve till you reach the SS solution.
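The pseudo-transient idea in one dimension: march φ_t = φ_XX until successive time steps agree, and the result is the SS (Laplace) solution. A minimal sketch (the grid size, tolerance, and explicit FTCS scheme are assumptions for illustration):

```python
import numpy as np

def march_to_steady(n=41, tol=1.0e-12, max_steps=200000):
    """March phi_t = phi_XX (explicit FTCS) with phi(0)=0, phi(1)=1 to SS."""
    x = np.linspace(0.0, 1.0, n)
    h = x[1] - x[0]
    dt = 0.4 * h * h                 # explicit stability limit: dt <= 0.5*h^2
    phi = np.zeros(n)
    phi[-1] = 1.0                    # Dirichlet BCs: phi(0) = 0, phi(1) = 1
    for _ in range(max_steps):
        new = phi.copy()
        new[1:-1] = phi[1:-1] + dt * (phi[2:] - 2.0 * phi[1:-1] + phi[:-2]) / h**2
        if np.max(np.abs(new - phi)) < tol:   # transient term ~ 0: SS reached
            return new
        phi = new
    return phi

phi = march_to_steady()
```

No outer iterations or guess fields are needed; the SS result here is the exact linear Laplace solution φ = x, which is the point of the remark above.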
(3) Before closing this chapter, let us answer how we address non-linearity in a differential term, for example, V ∂V/∂x in the NS equation. The answer is simple: by iteration! Guess the velocity field (V_o), discretize the derivative term as before, solve for the velocity field as before, and iterate till convergence.
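A minimal sketch of such a Picard (lagged-coefficient) iteration, for the steady 1D model ν u″ = u u′ with an NS-like convection term (ν, the grid, and the tolerance are assumed values):

```python
import numpy as np

def steady_burgers(nu=0.5, n=51, tol=1.0e-10, max_iter=200):
    """Picard iteration for nu*u'' = u*u' on [0,1], u(0)=1, u(1)=0.

    The nonlinear convection coefficient is frozen at the previous iterate,
    so each pass solves only a LINEAR (tridiagonal) system.
    """
    x = np.linspace(0.0, 1.0, n)
    h = x[1] - x[0]
    u = 1.0 - x                          # initial guess for the velocity field
    for it in range(1, max_iter + 1):
        A = np.zeros((n, n))
        rhs = np.zeros(n)
        A[0, 0] = 1.0;   rhs[0] = 1.0    # u(0) = 1
        A[-1, -1] = 1.0; rhs[-1] = 0.0   # u(1) = 0
        for k in range(1, n - 1):
            # nu*(u[k+1]-2u[k]+u[k-1])/h^2 - u_old[k]*(u[k+1]-u[k-1])/(2h) = 0
            A[k, k - 1] = nu / h**2 + u[k] / (2.0 * h)
            A[k, k] = -2.0 * nu / h**2
            A[k, k + 1] = nu / h**2 - u[k] / (2.0 * h)
        u_new = np.linalg.solve(A, rhs)
        if np.max(np.abs(u_new - u)) < tol:
            return u_new, it             # converged
        u = u_new
    return u, max_iter

u, iters = steady_burgers()
```

Each outer pass reuses the linear machinery already developed; convergence is checked exactly as in the elliptic iterations earlier.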
End – Semester Exam
These lectures are usually covered in approximately 42 one-hour lectures or 28 lectures of 1 hr 15 min each.
All comments including suggestions and corrections may be sent to vermanishith@gmail.com .