
INDIAN INSTITUTE OF TECHNOLOGY KHARAGPUR

Dual-Mode End-Test for Spring Semester 2021-22

Date of Examination: 12.4.22 Time: 8-10 AM Duration: 2 hrs.


Subject No: MA20204 Subject: Applied Computational Methods FM: 50M
Department/Center/School: Department of Mathematics
Specific charts, graph paper, log book etc., required: NO
Special Instructions (if any): Upload the answer as a single PDF file named roll-name.
Caution: Multiple submissions are not allowed. Email submission is not allowed.
Only a one-time upload of a single PDF via the Google Form.
Attempt all questions.
Each question carries 10 marks. [Part-question marking shown]
1. Consider a Linear System of Equations (LSE): $Ax = b$, where $A$ is an $n \times n$ nonsingular matrix, and $x$, $b$ are the unknown and given vectors, respectively, of size $n$. Then answer the following. [3+7=10M]
a) Starting from an appropriate splitting of the matrix $A$, derive the Gauss-Seidel (GS) iteration formula $x_{m+1} = R_{GS}\,x_m + C_{GS}$ for the approximate solution of the LSE, where $m$ is the iteration number. Show, by writing the element-wise formula, that the GS scheme uses the most current values for updating. [No credit without explicit expressions for the iteration matrix $R_{GS}$ and the constant vector $C_{GS}$.]
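For illustration only (not a substitute for the requested derivation), the element-wise GS update can be sketched as follows; the splitting $A = D + L + U$ and the diagonally dominant test matrix are assumptions of the sketch:

```python
import numpy as np

def gauss_seidel(A, b, x0, iters=100, tol=1e-10):
    """Gauss-Seidel sweep: each component is updated with the freshest values.

    Element-wise update (splitting A = D + L + U):
        x_i^{(m+1)} = (b_i - sum_{j<i} a_ij x_j^{(m+1)}
                           - sum_{j>i} a_ij x_j^{(m)}) / a_ii
    """
    x = x0.astype(float).copy()
    n = len(b)
    for _ in range(iters):
        x_old = x.copy()
        for i in range(n):
            # x[:i] already holds (m+1)-th values: the "most current" updating
            s1 = A[i, :i] @ x[:i]
            s2 = A[i, i+1:] @ x[i+1:]
            x[i] = (b[i] - s1 - s2) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            break
    return x

# Illustrative diagonally dominant system, so GS is guaranteed to converge.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = gauss_seidel(A, b, np.zeros(2))
```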
b) Let $\{x_i\}_{i=0}^{n}$ be a sequence generated by some iterative method $x_{i+1} = R x_i + C$, where $x_n$ is the approximation to the exact solution $x$ at the $n$-th step. Assume $\tilde{x}_n = \sum_{i=0}^{n} \alpha_{i,n}\, x_i$ is a better approximation to $x$, where $\sum_{i=0}^{n} \alpha_{i,n} = 1$. [1+3+3]
i. If the error $e := \tilde{x}_n - x = P_n(R)(x_0 - x)$, then determine the matrix polynomial $P_n(R)$ in terms of the classical Chebyshev polynomials $\{T_n(x)\}$, assuming that the spectral radius $\rho\big(P_n(R)\big)$ is as small as possible.
[Hint: $T_0(x) = 1$, $T_1(x) = x$, $T_{n+1}(x) = 2x\,T_n(x) - T_{n-1}(x)$]
ii. Establish the following recurrence formulae, writing $\rho := \rho(R)$ and $\mu_n := 1/T_n(1/\rho)$:
• Recurrence for $\mu_n$: $\quad \dfrac{1}{\mu_n} = \dfrac{2}{\rho\,\mu_{n-1}} - \dfrac{1}{\mu_{n-2}}$
• Expression for $\tilde{x}_n$: $\quad \tilde{x}_n = \dfrac{2\mu_n}{\rho\,\mu_{n-1}}\, R\,\tilde{x}_{n-1} - \dfrac{\mu_n}{\mu_{n-2}}\,\tilde{x}_{n-2} + \dfrac{2\mu_n}{\rho\,\mu_{n-1}}\, C$
iii. Hence, write down the Chebyshev acceleration algorithm.
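For illustration, the recurrences of part b(ii) can be turned into a short acceleration loop. The sketch below tracks $t_n := T_n(1/\rho)$ directly (the reciprocal of $\mu_n$) to avoid dividing by growing quantities; the matrices $R$, $C$ and the value of $\rho$ are illustrative assumptions, with $R$ symmetric and its eigenvalues in $[-\rho, \rho]$, $\rho < 1$:

```python
import numpy as np

def chebyshev_accel(R, C, x0, rho, n_steps):
    """Chebyshev acceleration of the stationary iteration x_{i+1} = R x_i + C.

    With mu_n := 1/T_n(1/rho), the accelerated iterate satisfies
        x~_n = (2 mu_n / (rho mu_{n-1})) (R x~_{n-1} + C)
             - (mu_n / mu_{n-2}) x~_{n-2}.
    We track t_n := T_n(1/rho) via the Chebyshev three-term recurrence.
    """
    t_prev, t_curr = 1.0, 1.0 / rho        # T_0(1/rho), T_1(1/rho)
    x_prev = np.asarray(x0, dtype=float)
    x_curr = R @ x_prev + C                # x~_1 coincides with x_1
    for _ in range(2, n_steps + 1):
        t_next = (2.0 / rho) * t_curr - t_prev
        x_next = (2.0 * t_curr / (rho * t_next)) * (R @ x_curr + C) \
                 - (t_prev / t_next) * x_prev
        x_prev, x_curr = x_curr, x_next
        t_prev, t_curr = t_curr, t_next
    return x_curr

R = np.array([[0.5, 0.2], [0.2, 0.5]])     # symmetric, rho(R) = 0.7
C = np.array([1.0, 2.0])
x = chebyshev_accel(R, C, np.zeros(2), rho=0.7, n_steps=40)
```

The coefficients in the update sum to one (as required by $\sum_i \alpha_{i,n} = 1$), so the fixed point $x = Rx + C$ is preserved.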

2. $A$ is an $n \times n$ real matrix and $b$ is a vector of size $n$. Then answer the following. [2+8=10M]
a) Define the Krylov subspace $K_r(A, b)$, for an integer $r > 1$.
b) Assume that $A$ is symmetric. Consider the decomposition $Q^T A Q = H$, where $Q = (q_1, q_2, \dots, q_n)$ is an orthogonal matrix and $H = (h_{i,j})_{n \times n}$ is an upper Hessenberg matrix. Answer the following. [2+2+4]
i. Establish the formulae:
$$q_m^T A q_j = h_{m,j}, \quad m = 1, 2, \dots, j; \qquad h_{j+1,j} = \Big\| A q_j - \sum_{i=1}^{j} h_{i,j}\, q_i \Big\|_2$$

ii. Using the above formulae, write down the Arnoldi algorithm to compute the Arnoldi vectors $q_j$, $j = 1, \dots, r$.
iii. Given that $\alpha_1, \dots, \alpha_n$ are the diagonal (main) entries of the matrix $H$ and $\beta_1, \dots, \beta_{n-1}$ are the entries of both the subdiagonal and the superdiagonal:
• Prove: $\quad q_j^T A q_j = \alpha_j, \qquad q_{j+1} = \dfrac{1}{\beta_j}\big( A q_j - \beta_{j-1} q_{j-1} - \alpha_j q_j \big)$
• Using the above formulae, write down the Lanczos algorithm (without reorthogonalization) to compute the Lanczos vectors $q_j$, $j = 1, \dots, r$.
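For illustration, the three-term recurrence of part b(iii) gives the following minimal Lanczos sketch (without reorthogonalization); the test matrix and starting vector are illustrative assumptions:

```python
import numpy as np

def lanczos(A, b, r):
    """Lanczos iteration for symmetric A: returns Q, alpha, beta with
        alpha_j = q_j^T A q_j,
        q_{j+1} = (A q_j - beta_{j-1} q_{j-1} - alpha_j q_j) / beta_j,
    where beta[j] = h_{j+1,j} is the norm of the residual vector."""
    n = len(b)
    Q = np.zeros((n, r))
    alpha = np.zeros(r)
    beta = np.zeros(r)
    Q[:, 0] = b / np.linalg.norm(b)
    for j in range(r):
        w = A @ Q[:, j]
        alpha[j] = Q[:, j] @ w
        w -= alpha[j] * Q[:, j]
        if j > 0:
            w -= beta[j-1] * Q[:, j-1]
        beta[j] = np.linalg.norm(w)
        if j + 1 < r:
            if beta[j] < 1e-14:      # happy breakdown: Krylov space exhausted
                return Q[:, :j+1], alpha[:j+1], beta[:j]
            Q[:, j+1] = w / beta[j]
    return Q, alpha, beta

A = np.array([[2.0, 1.0, 0.0], [1.0, 2.0, 1.0], [0.0, 1.0, 2.0]])
b = np.array([1.0, 0.0, 0.0])
Q, alpha, beta = lanczos(A, b, 3)
```

In exact arithmetic the columns of $Q$ are orthonormal and $Q^T A Q$ is the symmetric tridiagonal matrix with diagonal $\alpha$ and off-diagonals $\beta$; without reorthogonalization this degrades in floating point for large $r$.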

3. Consider the Finite Difference (FD) approximation of the partial derivatives of the dependent variable $u(x, t)$. Answer the following. [2+8=10M]
a) Derive the first-order (in time) forward FD approximation for $u_t$, and the second-order (in space) central FD approximation for $u_x$.


b) Let $L_x(\Delta t) := 1 - (v\Delta t)\,\delta_x + \dfrac{(v\Delta t)^2}{2}\,\delta_{xx}$ be a difference marching operator, where $\delta_x$, $\delta_{xx}$ are some FD approximations of the differential operators $\partial_x$, $\partial_{xx}$. [5+3]
i. For the 1D linear advection equation $u_t + v u_x = 0$, derive the general (second-order in time) time-marching scheme $u_i^{n+1} = L_x(\Delta t)\, u_i^n$. Hence, establish the Forward Time Centered Space (FTCS) scheme by choosing appropriate discretizations $\delta_x$, $\delta_{xx}$.
ii. Show that FTCS is consistent with the PDE. Also determine the order of the scheme with respect to both time and space.
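For illustration, the FTCS update from part (i) can be sketched as below; periodic boundaries and the sine initial profile are assumptions of the sketch, not of the question:

```python
import numpy as np

def ftcs_step(u, v, dt, dx):
    """One FTCS step for u_t + v u_x = 0 with periodic boundaries.

    Forward difference in time, centered difference in space:
        u_i^{n+1} = u_i^n - (v*dt / (2*dx)) * (u_{i+1}^n - u_{i-1}^n)
    (FTCS for pure advection is consistent but unconditionally unstable,
    which is why upwinding is preferred in practice.)
    """
    return u - (v * dt / (2.0 * dx)) * (np.roll(u, -1) - np.roll(u, 1))

x = np.linspace(0.0, 1.0, 100, endpoint=False)
u0 = np.sin(2 * np.pi * x)
u1 = ftcs_step(u0, v=1.0, dt=1e-3, dx=x[1] - x[0])
```

With periodic boundaries the centered differences telescope, so a single FTCS step conserves the discrete sum of $u$.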

4. Consider the 1D Shallow Water Equations (SWE), neglecting the source term:
$$u_t + [f(u)]_x = 0, \qquad u := \begin{pmatrix} u_1 \\ u_2 \end{pmatrix} = \begin{pmatrix} h \\ hv \end{pmatrix}, \qquad f(u) := \begin{pmatrix} f_1 \\ f_2 \end{pmatrix} = \begin{pmatrix} hv \\ hv^2 + \tfrac{1}{2} g h^2 \end{pmatrix},$$
where the vector $u$ is the dependent variable (solution) and $f(u)$ is called the flux vector. Answer the following. [3+4+3=10M]
a) Show that the given SWE without source term can be expressed as $u_t + J u_x = 0$, where $J := \partial f / \partial u$ is the Jacobian matrix. [No credit without determining $J(h, v)$]
b) Show that the given SWE, which is a system of two nonlinear coupled PDEs, can be decoupled, under an appropriate change of dependent variable $u \to w$, into a system of two uncoupled 1D linear advection equations.
[Hint: Use the eigenvalue decomposition of the Jacobian matrix: $q^{-1} J q = D$]
c) Establish the First Order Upwinding (FOU) scheme for the given SWE without source term, and state the restriction on the maximum allowable time step $\Delta t$ for ensuring stability of the scheme.
[Hint: The FOU scheme for the 1D linear advection equation is stable if $c \le 1$, where $c$ is the Courant number.]
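As a building block for part (c), a scalar FOU step for the hint's 1D linear advection equation might be sketched as follows; applying it componentwise to the characteristic variables of part (b) is the subject of the question. Periodic boundaries are an assumption of the sketch:

```python
import numpy as np

def fou_step(u, v, dt, dx):
    """First Order Upwinding for u_t + v u_x = 0 (periodic boundaries).

    Stable for Courant number c = |v| dt / dx <= 1; the difference is
    taken on the side the wind blows from."""
    c = v * dt / dx
    assert abs(c) <= 1.0, "CFL condition violated: reduce dt"
    if v >= 0:                      # wind from the left: backward difference
        return u - c * (u - np.roll(u, 1))
    return u - c * (np.roll(u, -1) - u)

x = np.linspace(0.0, 1.0, 100, endpoint=False)
u0 = np.sin(2 * np.pi * x)
dx = x[1] - x[0]
u1 = fou_step(u0, v=1.0, dt=dx, dx=dx)   # Courant number exactly 1
```

At $c = 1$ the scheme transports the profile exactly one cell per step, which the test below checks.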

5. Given a system of nonlinear algebraic equations $f(x) = 0$, where $x = (x_1, x_2, \dots, x_n)$ is the unknown vector, each scalar function $f_i$ of the vector function $f = (f_1, f_2, \dots, f_n)$ depends on the unknown vector, and $n > 1$. Let $\alpha = (\alpha_1, \alpha_2, \dots, \alpha_n)$ be the exact root, whose approximation is to be obtained. Then answer the following. [7+3=10M]
a) Describe Newton's method and write down the algorithm to find an approximate root vector $x^{(k+1)}$ from an appropriate iteration formula, to be determined by you. Here $k$ is the iteration step.
b) Prove that Newton's method is quadratically convergent.
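For illustration, a minimal sketch of the iteration asked for in part (a); the test system (a circle intersected with a line) and its Jacobian are illustrative assumptions:

```python
import numpy as np

def newton_system(f, J, x0, tol=1e-12, max_iter=50):
    """Newton's method for f(x) = 0 in R^n.

    Each step solves the linear system J(x^(k)) d = -f(x^(k)),
    then updates x^(k+1) = x^(k) + d.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        fx = f(x)
        if np.linalg.norm(fx) < tol:
            break
        d = np.linalg.solve(J(x), -fx)
        x = x + d
    return x

# Illustrative system: x1^2 + x2^2 = 2 intersected with x1 = x2.
f = lambda x: np.array([x[0]**2 + x[1]**2 - 2.0, x[0] - x[1]])
J = lambda x: np.array([[2 * x[0], 2 * x[1]], [1.0, -1.0]])
root = newton_system(f, J, [2.0, 1.0])
```

Starting close enough to a root where the Jacobian is nonsingular, the error roughly squares at each step, which is the quadratic convergence to be proved in part (b).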
******************************END**********************************