
1.

"Subtraction of two nearly equal numbers may result in loss of significant digits on a finite precision
machine."

Given f(x) = √(x² + 1) − 1 and x = 10⁻³, analyse the possible loss of significance using
5-significant-digit arithmetic. Redo the computation to avoid the subtraction of two nearly equal
numbers by modifying the algorithm and report the values of f(x) from the two algorithms. (10)
Solution and Marking Scheme:

Using 5-significant-digit arithmetic for x = 10⁻³, we have √(x² + 1) = √(1.0000 + 0.1 × 10⁻⁵) = 1.0000, and hence f(x) = 1.0000 − 1.0000 = 0.


To modify the algorithm, rationalizing the numerator (as done in a few methods for root finding),
we have

f(x) = √(x² + 1) − 1 = [√(x² + 1) − 1][√(x² + 1) + 1] / [√(x² + 1) + 1] = x² / [√(x² + 1) + 1]

Numerator = x² = 0.1 × 10⁻⁵
Denominator = √(x² + 1) + 1 = 0.2 × 10¹
f(x) = 0.5 × 10⁻⁶

(The modified algorithm could be different, e.g. using the Taylor series, f(x) ≈ x²/2, and the
function value comes out to be the same.)
Grading Policy
03/10 for showing initially f(x) = 0
04/10 for the new algorithm, f(x) = x² / [√(x² + 1) + 1]
03/10 for showing f(x) = 0.5 × 10⁻⁶ with the new algorithm.
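As a cross-check (not part of the original solution), the two algorithms can be run under a simulated 5-significant-digit machine; `round_sig` below is an assumed helper that rounds every stored intermediate to 5 significant digits:

```python
from math import floor, log10, sqrt

def round_sig(v, n=5):
    """Round v to n significant digits (a simple model of 5-digit storage)."""
    if v == 0.0:
        return 0.0
    return round(v, n - 1 - floor(log10(abs(v))))

def f_naive(x):
    """Direct evaluation of sqrt(x^2 + 1) - 1, rounding each intermediate."""
    s = round_sig(round_sig(x * x) + 1.0)       # 1 + 1e-6 is stored as 1.0000
    return round_sig(round_sig(sqrt(s)) - 1.0)  # 1.0000 - 1.0000 = 0

def f_rationalized(x):
    """Rationalized form x^2 / (sqrt(x^2 + 1) + 1): no near-equal subtraction."""
    num = round_sig(x * x)                                        # 0.1e-5
    den = round_sig(round_sig(sqrt(round_sig(num + 1.0))) + 1.0)  # 0.2e1
    return round_sig(num / den)                                   # 0.5e-6

print(f_naive(1e-3), f_rationalized(1e-3))  # 0.0 5e-07
```

The direct form returns exactly 0, while the rationalized form recovers 0.5 × 10⁻⁶, matching the hand computation above.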
2. A four-bar linkage is commonly used for moving platforms, with the length of bars given by r1, r2, r3, and
r4, and the two bar-angles as α and φ. The relationship between these parameters is given by

F(φ, α) = r1 cos(α)/r2 − r1 cos(φ)/r4 + (r1² + r2² − r3² + r4²)/(2 r2 r4) − cos(α − φ) = 0


For a particular requirement, r1 = 20, r2 = 12, r3 = 16, and r4 = 8 (all the lengths are in same units).
Calculate the value of φ for α = 40° by (note that both the angles are in degrees):

a. Regula Falsi method: use the initial bracket as φa = 30° and φb = 40°. Perform 2 iterations. (4)

b. Muller’s method: use φ(0) = 27°, φ(−1) = 25° , φ(−2) = 23°. Use stopping criteria as |φk+1 − φk | < 0.00001° or
|F (φ, 40°)| < 0.000001; whichever is achieved first. (7)
Graphically explain the concept of each method. (4)

Solution and Marking Scheme:

Regula Falsi iterations:

φa          f(φa)        φb    f(φb)        |φa,new − φa|   |f(φa,new)|   Marks
30          -0.0397972   40    0.19496296                   0.039797      1
31.695228   -0.0065769   40    0.19496296   1.695228        0.006577      1.5
31.966239   -0.0010123   40    0.19496296   0.271011        0.001012      1.5

Muller's method iterations:

φ1    f(φ1)        φ2        f(φ2)        φ3         f(φ3)        a          b          c            |φnew − φ|   |f(φnew)|    Marks
23    -0.1474928   25        -0.12162122  27         -0.0918123   0.000492   0.015889   -0.0918123                0.0918123    +1 for calculation of F(φ) and errors: 3
25    -0.1216212   27        -0.0918123   32.00308   -0.0002505   0.000485   0.020763   -0.0002505   5.00308      0.0002505    +2 for a, b, c: 4
27    -0.0918123   32.0031   -0.0002505   32.015141  -0.00000082                                     0.012061     0.00000082

Regula-Falsi graphical explanation 2


Muller graphical explanation 2
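Both iterations above can be reproduced with a short script. The sketch below (function names `regula_falsi` and `muller` are ours, not from the paper) keeps all angles in degrees, as in the problem:

```python
from math import cos, radians
from cmath import sqrt as csqrt

r1, r2, r3, r4 = 20.0, 12.0, 16.0, 8.0
alpha = 40.0  # degrees

def F(phi):
    """Four-bar linkage closure equation F(phi, alpha); both angles in degrees."""
    a, p = radians(alpha), radians(phi)
    return (r1 * cos(a) / r2 - r1 * cos(p) / r4
            + (r1**2 + r2**2 - r3**2 + r4**2) / (2 * r2 * r4)
            - cos(a - p))

def regula_falsi(f, a, b, iters=2):
    """False position: replace one bracket end with the chord's x-intercept each step."""
    fa, fb = f(a), f(b)
    for _ in range(iters):
        c = a - fa * (b - a) / (fb - fa)
        fc = f(c)
        if fa * fc < 0.0:
            b, fb = c, fc
        else:
            a, fa = c, fc
    return c

def muller(f, x0, x1, x2, tol_x=1e-5, tol_f=1e-6, max_iter=20):
    """Muller: step to the root of the parabola through the last three iterates."""
    for _ in range(max_iter):
        h1, h2 = x1 - x0, x2 - x1
        d1, d2 = (f(x1) - f(x0)) / h1, (f(x2) - f(x1)) / h2
        a = (d2 - d1) / (h2 + h1)
        b = a * h2 + d2
        c = f(x2)
        disc = csqrt(b * b - 4 * a * c)
        den = b + disc if abs(b + disc) >= abs(b - disc) else b - disc
        x3 = (x2 - 2 * c / den).real   # roots are real for this problem
        if abs(x3 - x2) < tol_x or abs(f(x3)) < tol_f:
            return x3
        x0, x1, x2 = x1, x2, x3
    return x3

print(regula_falsi(F, 30.0, 40.0))   # ≈ 31.9662 after 2 iterations
print(muller(F, 23.0, 25.0, 27.0))   # ≈ 32.0151
```

Two regula falsi steps give φ ≈ 31.9662, and Muller's method stops at φ ≈ 32.0151 once |F| < 10⁻⁶, matching the tables.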
3. (a) Use Gauss elimination method with four-digit floating point arithmetic with rounding to solve: (4)
0.003 x + 59.14 y = 59.17
5.291 x − 6.130 y = 46.78
b) Solve the same problem using Gauss elimination with partial pivoting and four-digit rounding
arithmetic, and comment on which one is better and why. (6)

Marking scheme:

part a ……………………. 4 marks (-1 for wrong floating point considered)

part b ………………………. 4 marks (-1 for wrong floating point considered)

Comment …………………… 2 marks
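The two eliminations can be simulated in code. The sketch below is an assumed illustration: `fl` models four-significant-digit rounded storage, and `gauss_2x2` runs the elimination with or without partial pivoting:

```python
from math import floor, log10

def fl(v, n=4):
    """Round v to n significant digits (four-digit rounding arithmetic)."""
    if v == 0.0:
        return 0.0
    return round(v, n - 1 - floor(log10(abs(v))))

def gauss_2x2(a11, a12, b1, a21, a22, b2, pivot=False):
    """2x2 Gauss elimination, every arithmetic result rounded to 4 significant digits."""
    if pivot and abs(a21) > abs(a11):
        (a11, a12, b1), (a21, a22, b2) = (a21, a22, b2), (a11, a12, b1)
    m = fl(a21 / a11)                 # multiplier
    a22 = fl(a22 - fl(m * a12))       # eliminate x from row 2
    b2 = fl(b2 - fl(m * b1))
    y = fl(b2 / a22)                  # back substitution
    x = fl(fl(b1 - fl(a12 * y)) / a11)
    return x, y

print(gauss_2x2(0.003, 59.14, 59.17, 5.291, -6.130, 46.78, pivot=False))  # (-10.0, 1.001)
print(gauss_2x2(0.003, 59.14, 59.17, 5.291, -6.130, 46.78, pivot=True))   # (10.0, 1.0)
```

Without pivoting, the tiny pivot 0.003 amplifies the rounding error and x comes out near −10 instead of the true x = 10; with partial pivoting the four-digit result is x = 10.00, y = 1.000, which is why pivoting is preferred.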


4. Solve the following system of linear equations using the Gauss-Seidel method. Also solve the same with an
under-relaxation factor of 0.9. Use the initial guess as x1 =2, x2 =0, and x3 =3 in both cases. Perform 5
iterations and compare the errors with respect to the true values (1.89189, −0.54054, 3.24324) in both
cases.
3 x1 − 2 x2 + x3 = 10
x1 − 3 x2 + 2 x3 =10
2 x2 − x1 + 4 x3 = 10
(15)

Solution and Marking Scheme:

     [  3  -2   1 ]        [ 10 ]
A =  [  1  -3   2 ]    C = [ 10 ]        Under-relaxation ω = 0.9
     [ -1   2   4 ]        [ 10 ]

Gauss-Seidel (ω = 1):

Iteration   x1         x2         x3         Marks
0           2          0          3
1           2.333333   -0.55556   3.361111   1.5
2           1.842593   -0.4784    3.199846   1.5
3           1.947788   -0.55084   3.262367   1.5
4           1.878651   -0.5322    3.235765   1.5
5           1.899942   -0.54284   3.246407   1.5

True Value  1.89189    -0.54054   3.24324
Error %     0.425586   0.425993   0.097641   1.5
                                             Total 9

With under-relaxation (ω = 0.9):

Iteration   x1         x2         x3         Marks
0           2          0          3
1           2.3        -0.51      3.297      1
2           1.9349     -0.49233   3.236601   1
3           1.927112   -0.52914   3.245373   1
4           1.901616   -0.53521   3.243243   1
5           1.896065   -0.53875   3.243379   1

True Value  1.89189    -0.54054   3.24324
Error %     0.220697   0.330235   0.004278   1
                                             Total 6
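Both iteration tables can be reproduced with a few lines; a sketch, with the three update formulas read directly off the equations (ω = 1 recovers plain Gauss-Seidel):

```python
def gauss_seidel(omega=1.0, iters=5, x=(2.0, 0.0, 3.0)):
    """Gauss-Seidel with (under-)relaxation factor omega for the given 3x3 system."""
    x1, x2, x3 = x
    for _ in range(iters):
        # Each update uses the newest available values, blended with the old
        # value through the relaxation factor omega.
        x1 = (1 - omega) * x1 + omega * (10 + 2 * x2 - x3) / 3
        x2 = (1 - omega) * x2 + omega * (x1 + 2 * x3 - 10) / 3
        x3 = (1 - omega) * x3 + omega * (10 + x1 - 2 * x2) / 4
    return x1, x2, x3

print(gauss_seidel(omega=1.0))   # ≈ (1.899942, -0.54284, 3.246407)
print(gauss_seidel(omega=0.9))   # ≈ (1.896065, -0.53875, 3.243379)
```

After 5 iterations the under-relaxed iterates are closer to the true values, as the error row of the table shows.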
5. Consider the matrix

 3 0 0
A = −1 1 0
 0 2 8

(a) Find the eigenvalue closest to 7 using the Inverse Power method with shift. Take the starting guess
vector as [1, 1, 1]T. Also mention the percentage relative error in computation of the eigenvalue for
each iteration. Perform 4 iterations, consider 4 digits after decimal place for calculation, and use L2
norm for normalization. (8)
(b) Obtain the characteristic polynomial for matrix A using the Faddeev Le Verrier method. Find the other
two eigenvalues of matrix A by solving the characteristic equation. (8)

 4 2 -2 2
  1 
(c) The matrix P = -5 3 2  has eigenvalues of 1, 2, 5 with the corresponding eigenvectors as  ,
-2 4 1   4 

1  0  -0.5 -1 1 
1  1  -1  
  and   . The inverse is given by P =  0.1 0 0.2  . What is the smallest magnitude
 2  1  -1.4 -2 2.2

 4 -5 -2 
−1  
eigenvalue of (i) P and (ii) S =  2 3 4  (4)
 -2 2 1 

Solution and marking scheme:


(a)

 −4 0 0 
 
θ = 7, A - θI =  −1 −6 0 
 0 2 1 

 −0.25 0 0
( A - θI ) =  0.0417 −0.1667 0 (1)

−1

 −0.0834 0.3334 1 

Now, finding the largest magnitude eigenvalue for (A- θI)-1 using the Power method.
(4 iterations: 4 * 1.5 = 6)
First iteration:

1  −0.25 0 0 −0.25 


z = 1 R = ( A - θI ) =  0.0417 −0.1667 0 z R = −0.125
   
0 −1 0

1 −0.0834 0.3334 1 1.25 


0
 −0.1952 
z1 = 0 =  −0.0976 
z R
z R
 0.9759 
λ = 1.2808.
Second iteration:

−0.1952 0.0488 0.0508 


z1 R 
z = −0.0976 z R = 0.0081 z = 1 = 0.0084  λ = 0.9608, εr = -33.3%
1   1   2

zR
 0.9759  0.9596 0.9987 

Third iteration:

0.0508 −0.0127  −0.0127 


z2R 
z = 0.0084 z R =  0.0007  z = 2 =  0.0007  λ = 0.9973, εr = 3.6%
2   2   3

z R
0.9987  0.9972   0.9999 

Fourth iteration:

−0.0127  0.0032   0.0032 


z3R 
z =  0.0007  z R = −0.0006 z = 3 =  −0.0006  λ = 1.0012, εr = 0.39 %
3   3   4

z R
 0.9999   1.0012   1 

Therefore, λmax = 1 for ( A - θI ) .


−1

λmin = 1/1 = 1 for A-θI .

Eigen value closest to θ (= 7) is (λmin + θ) = 1+7 = 8. (Final answer: 1 mark)


Not calculated % relative error at each step: -2
Not calculated eigenvalue at each step: -2
Not maintained precision of 4 digits after decimal place: -2
Used L∞ norm instead of L2 norm: -1
Method is correct but some minor calculation mistake: -2
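A compact sketch of the same computation with numpy (an illustration at full precision, so the printed estimates differ slightly from the 4-decimal hand values):

```python
import numpy as np

A = np.array([[3.0, 0.0, 0.0],
              [-1.0, 1.0, 0.0],
              [0.0, 2.0, 8.0]])
theta = 7.0

# Shifted inverse power method: the power method applied to (A - theta*I)^(-1)
# converges to the eigenvalue of A closest to the shift theta.
B = np.linalg.inv(A - theta * np.eye(3))
z = np.array([1.0, 1.0, 1.0])
lam = None
for k in range(4):
    zR = B @ z
    lam_new = np.linalg.norm(zR)   # L2-norm eigenvalue estimate
    z = zR / lam_new               # normalize with the L2 norm
    if lam is not None:
        err = abs(lam_new - lam) / lam_new * 100
        print(f"iteration {k + 1}: lambda = {lam_new:.4f}, rel. error = {err:.1f}%")
    lam = lam_new

print(1.0 / lam + theta)   # eigenvalue of A closest to 7, approaching 8
```

The iterates approach λ = 1 for (A − θI)⁻¹, so the eigenvalue of A closest to 7 is 1/λ + 7 ≈ 8.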
(b) Faddeev-LeVerrier Method
Characteristic equation: (-1)(λ³ − a2 λ² − a1 λ − a0) = 0
A2 = A
a2 = trace(A2) = 12
Ai = A(Ai+1 − ai+1 I) and ai = trace(Ai)/(n − i), where i = 1, 0 (n = 3, I is the 3x3 identity matrix)
2 iterations (2.5 * 2 = 5)

For i = 1:

            [ -9   0   0 ]                            [ -27   0   0 ]
A2 − 12I =  [ -1 -11   0 ]  (1),   A1 = A(A2 − 12I) = [   8 -11   0 ]  (1)
            [  0   2  -4 ]                            [  -2  -6 -32 ]

a1 = trace(A1)/(n − 1) = -70/2 = -35   (0.5)

For i = 0:

            [  8   0  0 ]                             [ 24  0  0 ]
A1 + 35I =  [  8  24  0 ],   A0 = A(A1 + 35I) =       [  0 24  0 ]
            [ -2  -6  3 ]                             [  0  0 24 ]

a0 = trace(A0)/(n − 0) = 72/3 = 24
Characteristic Equation: λ3 – 12 λ2 + 35 λ – 24 = 0 (1)
(λ - 8) (λ2 - 4 λ + 3) = 0
(λ - 8) (λ - 1) (λ - 3) = 0

Eigenvalues are 8, 1, 3. (2)
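The recursion above is an instance of the general Faddeev-LeVerrier algorithm. The sketch below returns the coefficients of λ³ + c1 λ² + c2 λ + c3 directly (the same polynomial, written without the (−1)(λ³ − a2 λ² − a1 λ − a0) sign convention used above):

```python
import numpy as np

def faddeev_leverrier(A):
    """Coefficients [1, c1, ..., cn] of det(lambda*I - A) = lambda^n + c1 lambda^(n-1) + ... + cn."""
    n = A.shape[0]
    M = np.zeros_like(A)
    coeffs = [1.0]
    for k in range(1, n + 1):
        M = A @ M + coeffs[-1] * np.eye(n)   # M_k = A M_{k-1} + c_{k-1} I
        coeffs.append(-np.trace(A @ M) / k)  # c_k = -trace(A M_k) / k
    return coeffs

A = np.array([[3.0, 0.0, 0.0],
              [-1.0, 1.0, 0.0],
              [0.0, 2.0, 8.0]])
coeffs = faddeev_leverrier(A)
print(coeffs)                         # coefficients 1, -12, 35, -24
print(sorted(np.roots(coeffs).real))  # roots ≈ 1, 3, 8
```

This reproduces λ³ − 12λ² + 35λ − 24 = 0 and the eigenvalues 1, 3, 8.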

(c) Smallest magnitude eigenvalue of P^-1 = 1/5 = 0.2. (2 marks each) (2 * 2 = 4)

Smallest magnitude eigenvalue of S (= P^T, which has the same eigenvalues as P) = 1.


6. (a) What is the difference between interpolation and regression? (2)
(b) The value of a function was reported at 5 points, as shown below:
i 0 1 2 3 4
xi 1 2 4 6 8
f(xi) 1.2 3.5 6.6 7.6 8.1

Estimate the value of the function at x = 3, using 3rd degree Newton’s interpolating polynomial with
appropriate number of points so that the error in the estimated value is likely to be minimized. (5)

(c) It is known that this function can be represented by the equation y = αx²/(β + x²), where α and β are the
equation parameters. Transform the equation in a form suitable for linear regression and determine the
values of α and β. (8)
Note: Perform calculations more precisely but round-off to 4 significant digits when storing a result, and
use this rounded-off value for further calculations.

Solution and Marking Scheme


(a) Interpolation: Interpolation consists of determining the unique nth-order polynomial that fits
n + 1 data points. The interpolating polynomial passes through all the n + 1 data points. This
polynomial then provides a formula to compute intermediate values. (Marks: 01)

Regression: In regression, on the other hand, we derive an approximating function that fits the
shape or general trend of the data without necessarily matching the individual points, by
minimizing the variance between the data points and the function. This model is best suited to
predicting future values. (Marks: 01)

(b)

Newton's Divided Difference

The four points nearest to x = 3 (i.e. x = 1, 2, 4, 6) are used, so that the error in the estimate
is likely to be minimized.

x    f(x)    f[xi,xj]   f[xi,xj,xk]   f[xi,xj,xk,xl]   Marks
1    1.2
             2.30                                       01
2    3.5                -0.25                           01
             1.55                     -0.0025           01
4    6.6                -0.2625
             0.50
6    7.6

Interpolating polynomial:
f(x) = 1.2 + 2.3(x-1) - 0.25(x-1)(x-2) - 0.0025(x-1)(x-2)(x-4)   (Marks: 01)
Therefore, f(3) = 5.3050   (Marks: 01)
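The table and the estimate can be verified with a short divided-difference routine (a sketch; the function names are ours):

```python
def newton_divided_diff(xs, ys):
    """Return the divided-difference coefficients f[x0], f[x0,x1], ... in place."""
    n = len(xs)
    coef = list(ys)
    for j in range(1, n):
        # Work backwards so lower-order differences are still available.
        for i in range(n - 1, j - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - j])
    return coef

def newton_eval(xs, coef, x):
    """Evaluate the Newton-form polynomial at x by nested multiplication."""
    result = coef[-1]
    for i in range(len(coef) - 2, -1, -1):
        result = result * (x - xs[i]) + coef[i]
    return result

xs, ys = [1, 2, 4, 6], [1.2, 3.5, 6.6, 7.6]   # the 4 points nearest x = 3
coef = newton_divided_diff(xs, ys)            # ≈ [1.2, 2.3, -0.25, -0.0025]
print(newton_eval(xs, coef, 3))               # ≈ 5.305
```

The coefficients reproduce the top diagonal of the table, and evaluating at x = 3 gives 5.3050.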

(c)
(c)

Given y = αx²/(β + x²), taking reciprocals:

1/y = (β + x²)/(αx²) = (1/α) + (β/α)(1/x²)

i.e. Y = (1/α) + (β/α) X, where Y = 1/y and X = 1/x² (linearized form).   (Marks: 02)

x    y     X = 1/x²   Y = 1/y   X²       X·Y
1    1.2   1.0000     0.8333    1.0000   0.8333
2    3.5   0.2500     0.2857    0.0625   0.0714
4    6.6   0.0625     0.1515    0.0039   0.0095
6    7.6   0.0278     0.1316    0.0008   0.0037
8    8.1   0.0156     0.1235    0.0002   0.0019
SUM =      1.3559     1.5256    1.0674   0.9198
Marks      01         01        01       01

Normal equations:

[ 5       1.3559 ] [ 1/α ]   [ 1.5256 ]
[ 1.3559  1.0674 ] [ β/α ] = [ 0.9198 ]

Solving: 1/α = 0.1090, so α = 9.1760; β/α = 0.7233, so β = 6.6369.   (Marks: 02)

Therefore, the approximate function is given by:

y = 9.1760 x² / (6.6369 + x²)
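The linearized fit can be checked at full precision (the hand computation stores 4-significant-digit intermediates, so its sums differ slightly in the last digit):

```python
# Linearized fit of y = alpha*x^2/(beta + x^2): with Y = 1/y and X = 1/x^2,
# Y = (1/alpha) + (beta/alpha)*X is a straight line in (X, Y).
xs = [1, 2, 4, 6, 8]
ys = [1.2, 3.5, 6.6, 7.6, 8.1]

X = [1 / x**2 for x in xs]
Y = [1 / y for y in ys]
n = len(xs)
sx, sy = sum(X), sum(Y)
sxx = sum(v * v for v in X)
sxy = sum(u * v for u, v in zip(X, Y))

# Normal equations for intercept a = 1/alpha and slope b = beta/alpha
det = n * sxx - sx * sx
a = (sy * sxx - sx * sxy) / det
b = (n * sxy - sx * sy) / det

alpha = 1 / a
beta = b * alpha
print(alpha, beta)   # ≈ 9.176, 6.637
```

Full precision gives α ≈ 9.176 and β ≈ 6.637, agreeing with the rounded hand values 9.1760 and 6.6369.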
6.6369
7. (a) Approximate the function sin(t)/t over the interval [1, 6], using a fourth degree polynomial
in t, by fitting a quadratic polynomial to sin t and another quadratic polynomial to 1/t. Given
∫ t sin t dt = sin t − t cos t and ∫ t² sin t dt = 2t sin t − t² cos t + 2 cos t.   (9)

(b) Obtain the second degree Tchebycheff polynomial fit for the function 1/t over the interval
[1, 6], if the linear Tchebycheff fit is given by 0.888 − 0.137t, and

∫₋₁¹ 2(2x² − 1) / [(5x + 7)√(1 − x²)] dx = 0.226.   (3)

(c) We have obtained a quadratic polynomial fit of the same function, 1/t, using two different
methods in (a) and (b). Which of these would you prefer and why?   (3)
Note: Use three-digit floating point arithmetic with round off for all calculations.

Solution and Marking Scheme


a) The function sin(t)/t can be rewritten as sin t × (1/t), where

f(t) = sin t ≈ c0 + c1 t + c2 t²
g(t) = 1/t ≈ d0 + d1 t + d2 t²

Approximating f(t) by a second degree polynomial (continuous least squares over [1, 6]):

[ ∫ dt     ∫ t dt    ∫ t² dt ] [ c0 ]   [ ∫ sin t dt    ]
[ ∫ t dt   ∫ t² dt   ∫ t³ dt ] [ c1 ] = [ ∫ t sin t dt  ]
[ ∫ t² dt  ∫ t³ dt   ∫ t⁴ dt ] [ c2 ]   [ ∫ t² sin t dt ]

(all integrals from 1 to 6)

[ 5       17.500    71.667  ] [ c0 ]   [  -0.420 ]
[ 17.500  71.667    323.750 ] [ c1 ] = [  -6.342 ]
[ 71.667  323.750   1555    ] [ c2 ]   [ -38.222 ]

(2 marks) (-0.5 for not rounding off up to three decimal places)

c0 = 2.664, c1 = -1.233, c2 = 0.109, giving f(t) = 2.664 − 1.233t + 0.109t²

(2 marks) (-0.5 for not rounding off up to three decimal places in final form)

Approximating g(t) by a second degree polynomial, the coefficient matrix is the same and the
right-hand side becomes [∫ (1/t) dt, ∫ dt, ∫ t dt]^T:

[ 5       17.500    71.667  ] [ d0 ]   [  1.792 ]
[ 17.500  71.667    323.750 ] [ d1 ] = [  5     ]
[ 71.667  323.750   1555    ] [ d2 ]   [ 17.500 ]

(2 marks) (-0.5 for not rounding off up to three decimal places)

d0 = 1.206, d1 = -0.411, d2 = 0.041, giving g(t) = 1.206 − 0.411t + 0.041t²

(2 marks) (-0.5 for not rounding off up to three decimal places in final form)

Thus the fourth degree polynomial approximation is

f(t) g(t) = (2.664 − 1.233t + 0.109t²)(1.206 − 0.411t + 0.041t²)

(1 mark) (no need to carry out the multiplication)

The integrals of sin t above may also be evaluated using
• degrees instead of radians
• Legendre polynomials
The marking scheme remains the same, provided all the coefficients are evaluated correctly.
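As a check, the normal equations for the continuous least-squares fit of sin t can be assembled from the moment integrals and the antiderivatives given in the problem, and solved at full precision (a numpy sketch; the hand solution's three-digit rounding shifts the coefficients slightly):

```python
import numpy as np
from math import sin, cos

def moment(k, a=1.0, b=6.0):
    """∫ t^k dt over [a, b]."""
    return (b ** (k + 1) - a ** (k + 1)) / (k + 1)

def int_sin(a=1.0, b=6.0):
    """∫ sin t dt = -cos t."""
    return cos(a) - cos(b)

def int_t_sin(a=1.0, b=6.0):
    """∫ t sin t dt = sin t - t cos t (given)."""
    F = lambda t: sin(t) - t * cos(t)
    return F(b) - F(a)

def int_t2_sin(a=1.0, b=6.0):
    """∫ t^2 sin t dt = 2t sin t - t^2 cos t + 2 cos t (given)."""
    F = lambda t: 2 * t * sin(t) - t * t * cos(t) + 2 * cos(t)
    return F(b) - F(a)

M = np.array([[moment(i + j) for j in range(3)] for i in range(3)])
rhs = np.array([int_sin(), int_t_sin(), int_t2_sin()])
c = np.linalg.solve(M, rhs)
print(rhs)   # ≈ [-0.420, -6.342, -38.222]
print(c)     # ≈ [2.67, -1.23, 0.110]
```

The full-precision coefficients (≈ 2.667, −1.234, 0.110) agree with the three-digit hand values 2.664, −1.233, 0.109 to within the rounding.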

b) The fit for 1/t can be written as

h(t) = 1/t ≈ e0 + e1 t + e2 t²

The linear Tchebycheff fit is already given as 0.888 − 0.137t.
The conversion between t in [1, 6] and x in [-1, 1] is x = (2t − 7)/5, or t = (5x + 7)/2.

(1 mark)

For the 2nd degree coefficient,

b2 = [ ∫₋₁¹ 2(2x² − 1) / ((5x + 7)√(1 − x²)) dx ] / (π/2) = 0.226/1.571 = 0.144

(1 mark)

The two degree Tchebycheff polynomial fit for 1/t can then be written as

h(t) = 0.888 − 0.137t + 0.144(2x² − 1)
     = 0.046t² − 0.460t + 1.308   (after substituting x = (2t − 7)/5)

(1 mark) (-0.5 for not rounding off up to three decimal places in final form)
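The coefficient b2 and the collected quadratic can be verified numerically. The sketch below evaluates the Chebyshev-weighted integral by Gauss-Chebyshev quadrature instead of using the closed-form value 0.226 given in the problem:

```python
from math import pi, cos

# b2 = (2/pi) * ∫_{-1}^{1} f(t(x)) T2(x) / sqrt(1 - x^2) dx with f(t) = 1/t,
# t = (5x + 7)/2 and T2(x) = 2x^2 - 1, computed by n-point Gauss-Chebyshev
# quadrature: ∫ g(x)/sqrt(1 - x^2) dx ≈ (pi/n) * sum g(x_k) at Chebyshev nodes.
n = 200
nodes = [cos((2 * k - 1) * pi / (2 * n)) for k in range(1, n + 1)]
integral = (pi / n) * sum((2 * x * x - 1) * 2 / (5 * x + 7) for x in nodes)
b2 = integral / (pi / 2)
print(round(integral, 3), round(b2, 3))   # ≈ 0.226 0.144

# Substitute x = (2t - 7)/5 into 0.888 - 0.137 t + b2*(2x^2 - 1), using
# 2x^2 - 1 = 0.32 t^2 - 2.24 t + 2.92, and collect powers of t:
c2 = b2 * 0.32
c1 = -0.137 - b2 * 2.24
c0 = 0.888 + b2 * 2.92
print(c2, c1, c0)   # ≈ 0.046 t^2 - 0.460 t + 1.308
```

The quadrature reproduces the given integral 0.226, b2 = 0.144, and the final quadratic 0.046t² − 0.460t + 1.308 to within rounding.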

c) The relative true error (RTE, %) of each fit to 1/t within the given interval:

x   f(x) = 1/x   Method a   Method b   RTE a (%)   RTE b (%)
1   1            0.836      0.894       16.4        10.6
2   0.5          0.548      0.572       -9.6       -14.4
3   0.333333     0.342      0.342       -2.6        -2.6
4   0.25         0.218      0.204       12.8        18.4
5   0.2          0.176      0.158       12          21
6   0.166667     0.216      0.204      -29.6       -22.4

For calculation purposes, Method b is better, as only the extra Tchebycheff term needs to be
evaluated instead of solving the whole matrix system.
(1 mark)
However, for approximation purposes:
Method a is suitable for in-between points.
Method b is suitable for end points, and also reduces the maximum error, as seen from the table.
(2 marks)
