
Birla Institute of Technology and Science, Pilani

Work Integrated Learning Programmes Division


I Semester 2019-2020
Mathematical Foundations for Data Science
DSE CLZC416
Mid-Sem Solutions

1 A) They are not necessarily linearly independent. (1/2)


Counter example: V = ℝ², S = {(0, 0), (0, 1), (1, 0)}.
S spans V, but the elements of S are not linearly independent. (1/2)
(Other examples are also possible)
No marks without counter example.
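A quick numerical check of this counterexample (an illustrative numpy sketch, not part of the original solution): the three vectors of S span ℝ² because the matrix they form has rank 2, yet they cannot be linearly independent since there are three of them.

import numpy as np

# Columns are the vectors of S = {(0,0), (0,1), (1,0)}
S = np.array([[0, 0, 1],
              [0, 1, 0]])

rank = np.linalg.matrix_rank(S)
print("spans R^2:", rank == 2)                      # True
print("linearly independent:", rank == S.shape[1])  # False: rank 2 < 3 vectors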
1 B) Consider an upper triangular matrix A of size n × n with entries a_ij. If the constant α_i is associated with
column vector i of A, then in equation form, the last row of Aα = 0 yields (1/2)
α_n a_nn = 0, which gives α_n = 0 since a_nn ≠ 0.
Similar arguments, applied recursively, yield α_i = 0 for all i = 1, 2, . . . , n, thus proving
linear independence of the column vectors. (1/2)
If a_kk = 0, then α_k need not be zero, and hence we may conclude that the column vectors of
A are linearly dependent. (1)
Counter examples are possible and right ones should be awarded marks
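A small numerical illustration of 1 B) (a sketch assuming numpy; the matrix entries are arbitrary): an upper triangular matrix with non-zero diagonal has full column rank, while zeroing one diagonal entry makes the columns dependent.

import numpy as np

A = np.array([[2.0, 1.0, 4.0],
              [0.0, 3.0, 5.0],
              [0.0, 0.0, 6.0]])      # upper triangular, all a_ii != 0
print(np.linalg.matrix_rank(A))      # 3 -> columns linearly independent

B = A.copy()
B[1, 1] = 0.0                        # set a_22 = 0
print(np.linalg.matrix_rank(B))      # 2 -> columns linearly dependent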
1 C) i) V as ℝ, V′ as ℂ, F as ℝ (the field). (1)
ℝ over ℝ (1 dimension)
ℂ over ℝ (2 dimensions)
ii) ℝ over ℝ (1 dimension)
ℂ over ℂ (1 dimension) (1)
In both cases above, other examples are equally possible.
2 A) A is an n × m matrix and T(X) = AX.
T(B + C) = A(B + C) = AB + AC
T(B) = AB
T(C) = AC
Hence, T(B + C) = T(B) + T(C) (1/2)
T(kB) = A(kB) = k(AB) = kT(B) (1/2)
Thus, T is a linear transformation.
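A quick sanity check of the two linearity properties (an illustrative sketch; the shapes and random seed are arbitrary choices, not part of the question):

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))            # the fixed n x m matrix
B = rng.standard_normal((4, 2))
C = rng.standard_normal((4, 2))
k = 2.5

T = lambda X: A @ X                        # T(X) = AX
print(np.allclose(T(B + C), T(B) + T(C)))  # True: additivity
print(np.allclose(T(k * B), k * T(B)))     # True: homogeneity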

2 B) Write (8, 3, 2) as a linear combination of (1, 0, 1) and (2, 1, 0):
(8, 3, 2) = 2(1, 0, 1) + 3(2, 1, 0) (1)
Hence, T (8, 3, 2) = 2T (1, 0, 1) + 3T (2, 1, 0) = 2(1, −1, 3) + 3(0, 2, 1) = (2, 4, 9) (1)
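The coefficients 2 and 3 can also be recovered numerically (a sketch assuming numpy); solving the small system below and applying the given images reproduces T(8, 3, 2) = (2, 4, 9).

import numpy as np

# Express (8,3,2) in terms of (1,0,1) and (2,1,0): solve M c = b
M = np.array([[1, 2],
              [0, 1],
              [1, 0]], dtype=float)
b = np.array([8, 3, 2], dtype=float)
c, *_ = np.linalg.lstsq(M, b, rcond=None)
print(c)                                       # [2. 3.]

images = np.array([[1, -1, 3],                 # T(1, 0, 1)
                   [0,  2, 1]], dtype=float)   # T(2, 1, 0)
print(c @ images)                              # [2. 4. 9.]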
2 C) Verify Rank Nullity Theorem
Let T (x1 , x2 , x3 ) = (0, 0, 0)
=⇒ (−x1 + x2 + x3 , 2x1 − x2 , x1 + x2 + 3x3 ) = (0, 0, 0)
Solving, x2 = 2x1 and x3 = −x1. Therefore, (x1, x2, x3) = (−t, −2t, t) = t(−1, −2, 1) (1/2)
Ker(T ) = Span{(−1, −2, 1)}
Nullity(T ) = 1 (1/2)
T (x1 , x2 , x3 ) = x1 (−1, 2, 1) + x2 (1, −1, 1) + x3 (1, 0, 3)
Since (1, 0, 3) = (−1, 2, 1)+2(1, −1, 1), we conclude that Range(T ) is spanned by the elements
of {(−1, 2, 1), (1, −1, 1)}. (1/2)
Also, since the elements of {(−1, 2, 1), (1, −1, 1)} are linearly independent, we conclude that
dim(Range(T)) = 2, i.e., Rank(T) = 2. (1/2)
Thus Rank(T) + Nullity(T) = 2 + 1 = 3 = dim(ℝ³), which verifies the Rank-Nullity Theorem.
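The rank and nullity can be verified numerically (a sketch assuming numpy and scipy; scipy.linalg.null_space is used for the kernel):

import numpy as np
from scipy.linalg import null_space

# Matrix of T(x1,x2,x3) = (-x1+x2+x3, 2x1-x2, x1+x2+3x3)
A = np.array([[-1.0,  1.0, 1.0],
              [ 2.0, -1.0, 0.0],
              [ 1.0,  1.0, 3.0]])

rank = np.linalg.matrix_rank(A)
nullity = null_space(A).shape[1]
print(rank, nullity, rank + nullity)   # 2 1 3 = dim(R^3)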
3 A) Given: β is not an eigenvalue of A.
So, Ax − βx ≠ 0 for any x ≠ 0, i.e.,
(A − βI)x ≠ 0 for any x ≠ 0
⟹ A − βI is invertible (1)

Let V be an eigenvector of A with eigenvalue λk. Then
V = (A − βI)^{−1}(A − βI)V = (A − βI)^{−1}(AV − βV)
  = (A − βI)^{−1}(λk V − βV) = (λk − β)(A − βI)^{−1}V
This implies that (A − βI)^{−1}V = (1/(λk − β)) V (1)
So, 1/(λk − β) is the required eigenvalue.
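A numerical illustration of this result (a sketch assuming numpy; the matrix A and the scalar beta below are arbitrary choices): the eigenvalues of (A − βI)^{−1} are exactly 1/(λk − β).

import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])                   # eigenvalues 5 and 2
beta = 1.5                                   # not an eigenvalue of A

lam = np.linalg.eigvals(A)
res_eigs = np.linalg.eigvals(np.linalg.inv(A - beta * np.eye(2)))
print(np.sort(res_eigs))                     # [0.2857..., 2.0]
print(np.sort(1.0 / (lam - beta)))           # the same values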
3 B) Given AB = BA.
Let Ax = λx.
Since all eigenvalues of A are distinct,
dim(N(A − λ_i I)) = 1 for all i —————- (1) (1/2)
Since Ax = λx,
BAx = λ(Bx)
⟹ A(Bx) = λ(Bx) (using AB = BA),
so Bx is an eigenvector of A ———————-(2) (1/2)
From (1), Bx has to be a scaled version of x,
i.e., Bx = αx
⟹ x is an eigenvector of B. (1)
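A concrete check of 3 B) (a sketch assuming numpy): take A with distinct eigenvalues and B = A² + 2A + I, which commutes with A; each eigenvector of A is then also an eigenvector of B.

import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 5.0]])                 # distinct eigenvalues 2 and 5
B = A @ A + 2 * A + np.eye(2)              # a polynomial in A, so AB = BA
print(np.allclose(A @ B, B @ A))           # True

vals, vecs = np.linalg.eig(A)
for i in range(2):
    x = vecs[:, i]
    y = B @ x
    alpha = (x @ y) / (x @ x)              # candidate eigenvalue of B
    print(np.allclose(y, alpha * x))       # True: x is an eigenvector of B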
3 C) Given,
|a_ii| > Σ_{j=1, j≠i}^{n} |a_ij|   ∀ i = 1, 2, ..., n —————— (1)

Let us assume A has a zero eigenvalue, say λ = 0.
By Gershgorin's theorem, λ must lie in at least one of the Gershgorin discs, so for some i,
|λ − a_ii| ≤ Σ_{j=1, j≠i}^{n} |a_ij|. (1/2)
With λ = 0, |λ − a_ii| = |a_ii| —————(2)
and hence |a_ii| ≤ Σ_{j=1, j≠i}^{n} |a_ij| for some i —————— (3)
This contradicts (1). Hence A cannot have any zero eigenvalue. (1/2)
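A quick numerical illustration of 3 C) (a sketch assuming numpy; the matrix is an arbitrary strictly diagonally dominant example): every Gershgorin disc excludes the origin, and no computed eigenvalue is zero.

import numpy as np

A = np.array([[10.0, 2.0, 3.0],
              [ 1.0, 8.0, 2.0],
              [ 2.0, 2.0, 9.0]])                   # strictly diagonally dominant

radii = np.sum(np.abs(A), axis=1) - np.abs(np.diag(A))
print(np.all(np.abs(np.diag(A)) > radii))          # True: condition (1) holds
print(np.abs(np.linalg.eigvals(A)).min() > 0)      # True: no zero eigenvalue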

4 A) 0.003 x1 + 59.14 x2 = 59.17 ———————– (1)


5.291 x1 - 6.130 x2 = 46.78 ————————(2)
Without partial pivoting: Choosing the first equation as the pivot equation, the multiplier is
m = 5.291/0.003 ≈ 1764. (1/2)
Multiply equation (1) by m and subtract from equation (2)
We get −104300x2 = −104400 and thus x2 = 1.001. (1/2)
By using the value of x2 in equation (1) we get 0.003x1 = −0.03 and thus x1 = −10.00. (1/2)
With partial pivoting: We choose equation (2) as pivot equation
5.291 x1 - 6.130 x2 = 46.78 ————————(3)
0.003000 x1 + 59.14 x2 = 59.17 —————— (4)
The multiplier is m = 0.003/5.291 = 0.000567. (1/2)
Now multiplying equation (3) by m and subtracting from equation (4), we get 59.14 x2 = 59.14.
Hence x2 = 1.000. (1/2)
Using this value of x2 in equation (3) we get x1 = 10.00. (1/2)
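The effect of rounding can be reproduced with a short script (a sketch; fl() below is a hypothetical helper that rounds every intermediate result to four significant digits, mimicking the arithmetic used above):

import math

def fl(x, t=4):
    # round x to t significant digits
    if x == 0:
        return 0.0
    d = math.floor(math.log10(abs(x)))
    return round(x, t - 1 - d)

# Without partial pivoting
m  = fl(5.291 / 0.003)                            # 1764
a  = fl(-6.130 - fl(m * 59.14))                   # -104300
b  = fl(46.78 - fl(m * 59.17))                    # -104400
x2 = fl(b / a)                                    # 1.001
x1 = fl(fl(59.17 - fl(59.14 * x2)) / 0.003)       # -10.00
print(x1, x2)

# With partial pivoting (equation (2) as pivot)
m  = fl(0.003 / 5.291)                            # 0.000567
a  = fl(59.14 - fl(m * (-6.130)))                 # 59.14
b  = fl(59.17 - fl(m * 46.78))                    # 59.14
x2 = fl(b / a)                                    # 1.000
x1 = fl(fl(46.78 + fl(6.130 * x2)) / 5.291)       # 10.00
print(x1, x2)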
4 B) The crux is in reducing A to an upper triangular form. The factors mij used for reduction
are precisely the elements of the matrix L. Hence the number of additions, multiplications
and divisions are exactly the same as in Gauss elimination. (1/2)
No. of additions = Σ_{k=1}^{n−1} (n − k)(n − k) = (n − 1)(n)(2n − 1)/6. (1/2)
No. of multiplications = Σ_{k=1}^{n−1} (n − k)(n − k) = (n − 1)(n)(2n − 1)/6. (1/2)
No. of divisions = Σ_{k=1}^{n−1} (n − k) = n(n − 1)/2. (1/2)
Students writing n³/3 and proving the same should also be given 2 marks, provided the justification is correct.
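These counts can be cross-checked by instrumenting a plain elimination loop that stores the multipliers m_ij (a sketch; the counters simply mirror the sums above):

def lu_op_counts(n):
    # operations needed to reduce an n x n matrix to upper triangular form
    adds = mults = divs = 0
    for k in range(1, n):                 # elimination step k = 1, ..., n-1
        for i in range(k + 1, n + 1):     # rows below the pivot
            divs += 1                     # m_ik = a_ik / a_kk
            for j in range(k + 1, n + 1):
                mults += 1                # m_ik * a_kj
                adds += 1                 # a_ij - m_ik * a_kj
    return adds, mults, divs

n = 6
adds, mults, divs = lu_op_counts(n)
print(adds, (n - 1) * n * (2 * n - 1) // 6)   # 55 55
print(mults, (n - 1) * n * (2 * n - 1) // 6)  # 55 55
print(divs, n * (n - 1) // 2)                 # 15 15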

5 A) A is an n × n matrix.
No. of additions = Σ_{k=1}^{n−1} (n − k)(n − k + 1) + Σ_{i=1}^{n−1} i = (2n³ + 3n² − 5n)/6 (1)

Similarly, no. of multiplications = (2n³ + 3n² − 5n)/6. (1)
No. of divisions = Σ_{k=1}^{n−1} (n − k) + Σ_{j=1}^{n} j = n(n − 1)/2 + n(n + 1)/2 = n². (1)
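As a sanity check (a sketch), the sums written above can be evaluated directly and compared with the closed-form expressions:

def gauss_solve_counts(n):
    adds = sum((n - k) * (n - k + 1) for k in range(1, n)) + sum(range(1, n))
    divs = sum(n - k for k in range(1, n)) + sum(range(1, n + 1))
    return adds, divs          # multiplications equal the additions count

n = 6
adds, divs = gauss_solve_counts(n)
print(adds, (2 * n**3 + 3 * n**2 - 5 * n) // 6)   # 85 85
print(divs, n * n)                                # 36 36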

5 B) 2x + y + z = 4
x + 2y + z = 4
x + y + 2z = 4
Decomposition: Dividing each equation by its diagonal coefficient (i.e., forming D^{−1}A, where D = diag(A)) gives

[  1   1/2  1/2 ]
[ 1/2   1   1/2 ]  =  I + L + U,
[ 1/2  1/2   1  ]

where

L = [  0    0   0 ]          U = [ 0  1/2  1/2 ]
    [ 1/2   0   0 ]              [ 0   0   1/2 ]
    [ 1/2  1/2  0 ]              [ 0   0    0  ]

For the Gauss-Jacobi method, the iteration matrix is C = −I^{−1}(U + L): (1)

C = [   0   −1/2  −1/2 ]
    [ −1/2    0   −1/2 ]
    [ −1/2  −1/2    0  ]

||C|| (Frobenius norm) = sqrt((−1/2)² + (−1/2)² + (−1/2)² + (−1/2)² + (−1/2)² + (−1/2)²) = sqrt(1.5) = 1.22 > 1. Also, it
could be observed that the row-sum and column-sum norms are 1. Hence the Gauss-Jacobi
iteration may not converge. (1)
OR

Spectral method:

C = I − D^{−1}A = [   0   −1/2  −1/2 ]
                  [ −1/2    0   −1/2 ]
                  [ −1/2  −1/2    0  ]

Eigenvalues are {−1, 1/2, 1/2}.

Spectral radius ρ(C) = maximum of the eigenvalues in absolute value = 1. (1)
Hence the iteration may not converge, as ρ(C) is not less than 1. (1)
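Both checks can be reproduced numerically (a sketch assuming numpy): build the Jacobi iteration matrix C = I − D^{−1}A and inspect its norms and spectral radius.

import numpy as np

A = np.array([[2.0, 1.0, 1.0],
              [1.0, 2.0, 1.0],
              [1.0, 1.0, 2.0]])

C = np.eye(3) - np.diag(1.0 / np.diag(A)) @ A   # Jacobi iteration matrix
print(np.linalg.norm(C, 'fro'))                 # sqrt(1.5) ~ 1.22
print(np.linalg.norm(C, np.inf))                # 1.0 (row-sum norm)
print(np.linalg.norm(C, 1))                     # 1.0 (column-sum norm)
print(max(abs(np.linalg.eigvals(C))))           # spectral radius = 1.0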

6 A) x1 = Number of part 1 produced/day


x2 = Number of part 2 produced/day
Load on each milling machine (in minutes) = (20x1 + 15x2)/5 = 4x1 + 3x2
Load on the drill press (in minutes) = 3x1 + 5x2
Time restriction on each milling machine is
4x1 + 3x2 ≤ 8(60) = 480
4x1 + 3x2 ≤ 480 (1)
For drill press 3x1 + 5x2 ≤ 480 (1)

Machine balance constraint
|(4x1 + 3x2 ) − (3x1 + 5x2 )| ≤ 30
|x1 − 2x2 | ≤ 30
x1 − 2x2 ≤ 30
−x1 + 2x2 ≤ 30 (1/2)
Let y represent the number of completed assemblies. Then the objective is to
Maximize Z = min(x1, x2).
Setting y = min(x1, x2), (1/2)
the problem becomes: Maximize Z = y
Subject to
4x1 + 3x2 ≤ 480
3x1 + 5x2 ≤ 480
x1 − 2x2 ≤ 30
−x1 + 2x2 ≤ 30
x1 − y ≥ 0
x2 − y ≥ 0
x1 ≥ 0, x2 ≥ 0, y ≥ 0 (1)
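The formulation can be solved directly (a sketch assuming scipy.optimize.linprog, which minimizes, so we minimize −y over the variables (x1, x2, y)):

from scipy.optimize import linprog

c = [0, 0, -1]                  # maximize y  <=>  minimize -y
A_ub = [[ 4,  3,  0],           # 4x1 + 3x2 <= 480
        [ 3,  5,  0],           # 3x1 + 5x2 <= 480
        [ 1, -2,  0],           # x1 - 2x2  <= 30
        [-1,  2,  0],           # -x1 + 2x2 <= 30
        [-1,  0,  1],           # y <= x1
        [ 0, -1,  1]]           # y <= x2
b_ub = [480, 480, 30, 30, 0, 0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3, method="highs")
print(res.x, -res.fun)          # optimal (x1, x2, y) and number of assemblies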
6 B) Shadow price of R1 = Z(3, 0) − Z(8/3, 0) = 135 − 120 = 15 (1/2)
Since there would be no change in the value of Z for a unit increase in the availability of R2 ,
the shadow price is 0. (1/2)
A graph is a must. If no graph is drawn, only 1/2 mark is to be given, and only if the answer is correct.

