
1. Consider a random column vector $\bar{\mathbf{x}}$ that has the multivariate Gaussian distribution with mean $\bar{\boldsymbol{\mu}}$. Its covariance matrix is defined as
$$E\{(\bar{\mathbf{x}} - \bar{\boldsymbol{\mu}})(\bar{\mathbf{x}} - \bar{\boldsymbol{\mu}})^T\}$$
Ans c
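As a quick numerical illustration (not part of the original question), the definition can be checked against the sample average of outer products of draws from a Gaussian; the mean and covariance below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
mu = np.array([1.0, -2.0])                  # assumed mean (illustrative only)
Sigma = np.array([[2.0, 0.5], [0.5, 1.0]])  # assumed covariance (illustrative only)

# Draw many samples and estimate E{(x - mu)(x - mu)^T} by averaging outer products.
X = rng.multivariate_normal(mu, Sigma, size=200_000)
D = X - mu
Sigma_hat = D.T @ D / X.shape[0]

print(np.round(Sigma_hat, 2))   # close to Sigma, roughly [[2.0, 0.5], [0.5, 1.0]]
```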
2. The picture shown corresponds to a Classifier
Ans b
3. A set of orthonormal basis vectors for the same subspace is given as
$$\bar{\mathbf{v}}_1 = \frac{1}{2}\begin{bmatrix}1\\1\\1\\1\end{bmatrix}, \qquad \bar{\mathbf{v}}_2 = \frac{1}{2\sqrt{5}}\begin{bmatrix}-3\\-1\\1\\3\end{bmatrix}$$
Ans d
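A short check (added here, not in the original solution) that the two stated vectors are unit length and mutually orthogonal:

```python
import numpy as np

v1 = np.array([1, 1, 1, 1]) / 2
v2 = np.array([-3, -1, 1, 3]) / (2 * np.sqrt(5))

print(np.linalg.norm(v1), np.linalg.norm(v2))   # both 1.0 -> unit length
print(v1 @ v2)                                  # 0.0 -> orthogonal
```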
4. The variance of the quantity $\bar{\mathbf{a}}^T\bar{\mathbf{x}}$ is
$$\sigma^2\|\bar{\mathbf{a}}\|^2$$
Ans c
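The stated answer implicitly assumes the components of $\bar{\mathbf{x}}$ are uncorrelated with common variance $\sigma^2$, i.e. $\mathrm{Cov}(\bar{\mathbf{x}}) = \sigma^2\mathbf{I}$, so that $\mathrm{Var}(\bar{\mathbf{a}}^T\bar{\mathbf{x}}) = \bar{\mathbf{a}}^T(\sigma^2\mathbf{I})\bar{\mathbf{a}} = \sigma^2\|\bar{\mathbf{a}}\|^2$. A Monte Carlo sketch under that assumption (the particular $\bar{\mathbf{a}}$ and $\sigma$ below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
a = np.array([2.0, -1.0, 3.0])    # arbitrary fixed vector
sigma = 0.7                       # assumed common component standard deviation

# x has covariance sigma^2 * I, so Var(a^T x) should equal sigma^2 * ||a||^2.
X = rng.normal(0.0, sigma, size=(500_000, 3))
y = X @ a

print(y.var(), sigma**2 * np.dot(a, a))   # both approximately 6.86
```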
5. The vector $\mathbf{A}\bar{\mathbf{x}} + \bar{\mathbf{b}}$ is Gaussian with mean and covariance
$$\mathbf{A}\bar{\boldsymbol{\mu}} + \bar{\mathbf{b}}, \qquad \mathbf{A}\boldsymbol{\Sigma}\mathbf{A}^T$$
Ans d
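A numerical sanity check of this rule (an addition to the answer key; the particular $\mathbf{A}$, $\bar{\mathbf{b}}$, $\bar{\boldsymbol{\mu}}$, $\boldsymbol{\Sigma}$ are arbitrary): the sample mean and covariance of $\mathbf{A}\bar{\mathbf{x}} + \bar{\mathbf{b}}$ should match $\mathbf{A}\bar{\boldsymbol{\mu}} + \bar{\mathbf{b}}$ and $\mathbf{A}\boldsymbol{\Sigma}\mathbf{A}^T$.

```python
import numpy as np

rng = np.random.default_rng(2)
mu = np.array([1.0, 2.0])
Sigma = np.array([[1.0, 0.3], [0.3, 2.0]])
A = np.array([[2.0, 0.0], [1.0, -1.0]])
b = np.array([5.0, -1.0])

# Transform samples of x ~ N(mu, Sigma) and compare empirical moments.
X = rng.multivariate_normal(mu, Sigma, size=300_000)
Y = X @ A.T + b

print(np.round(Y.mean(axis=0), 2))           # ~ A @ mu + b = [7., -2.]
print(np.round(np.cov(Y, rowvar=False), 2))  # ~ A @ Sigma @ A.T
```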
6. The Gaussian classification problem with the two classes $C_1, C_2$ distributed as
$$C_1 \sim N\!\left(\begin{bmatrix}-1\\2\end{bmatrix}, \begin{bmatrix}\tfrac{1}{2} & 0\\ 0 & \tfrac{1}{4}\end{bmatrix}\right), \qquad C_2 \sim N\!\left(\begin{bmatrix}2\\1\end{bmatrix}, \begin{bmatrix}\tfrac{1}{2} & 0\\ 0 & \tfrac{1}{4}\end{bmatrix}\right)$$
The classifier chooses $C_1$ if
$$(\bar{\boldsymbol{\mu}}_1 - \bar{\boldsymbol{\mu}}_2)^T \boldsymbol{\Sigma}^{-1}\!\left(\bar{\mathbf{x}} - \frac{\bar{\boldsymbol{\mu}}_1 + \bar{\boldsymbol{\mu}}_2}{2}\right) \ge 0$$
$$\Rightarrow \begin{bmatrix}-3\\1\end{bmatrix}^T \begin{bmatrix}2 & 0\\ 0 & 4\end{bmatrix}\left(\bar{\mathbf{x}} - \frac{1}{2}\begin{bmatrix}1\\3\end{bmatrix}\right) \ge 0$$
$$\Rightarrow \begin{bmatrix}-6 & 4\end{bmatrix}\left(\bar{\mathbf{x}} - \frac{1}{2}\begin{bmatrix}1\\3\end{bmatrix}\right) \ge 0$$
$$\Rightarrow -6x_1 + 4x_2 \ge 3 \;\Rightarrow\; 6x_1 - 4x_2 \le -3$$
Ans a
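The arithmetic above can be double-checked numerically (this check is an addition, not part of the original working):

```python
import numpy as np

mu1, mu2 = np.array([-1.0, 2.0]), np.array([2.0, 1.0])
Sigma_inv = np.array([[2.0, 0.0], [0.0, 4.0]])   # inverse of diag(1/2, 1/4)

w = (mu1 - mu2) @ Sigma_inv          # [-6., 4.]
x0 = (mu1 + mu2) / 2                 # midpoint [0.5, 1.5]
print(w, w @ x0)                     # threshold w @ x0 = 3.0

# Decision rule: choose C1 when w @ x - 3 >= 0, i.e. -6*x1 + 4*x2 >= 3.
x = np.array([0.0, 1.0])             # example point
print("C1" if w @ x - w @ x0 >= 0 else "C2")
```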
7. The probability density function (PDF) $f_X(x)$ is given as
$$f_X(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}$$
Ans c
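For reference, this density can be compared against scipy's implementation (an added example; the $\mu$ and $\sigma$ values are arbitrary):

```python
import numpy as np
from scipy.stats import norm

mu, sigma = 1.0, 2.0                 # arbitrary parameters for the check
x = np.linspace(-5, 7, 5)

# Evaluate the formula directly and compare with scipy.stats.norm.pdf.
manual = np.exp(-(x - mu)**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)
print(np.allclose(manual, norm.pdf(x, loc=mu, scale=sigma)))   # True
```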
8. The classifier for this problem, to classify a new vector $\bar{\mathbf{x}}$, can be formulated as
Choose $C_1$ if $(\bar{\boldsymbol{\mu}}_1 - \bar{\boldsymbol{\mu}}_2)^T \boldsymbol{\Sigma}^{-1}\!\left(\bar{\mathbf{x}} - \frac{\bar{\boldsymbol{\mu}}_1 + \bar{\boldsymbol{\mu}}_2}{2}\right) \ge 0$, and $C_2$ otherwise
Ans b
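The rule can be wrapped in a small helper function (a sketch with my own naming; it assumes both classes share the covariance $\boldsymbol{\Sigma}$, as in this problem):

```python
import numpy as np

def choose_class(x, mu1, mu2, Sigma):
    """Return 'C1' if (mu1 - mu2)^T Sigma^{-1} (x - (mu1 + mu2)/2) >= 0, else 'C2'."""
    w = np.linalg.solve(Sigma, mu1 - mu2)        # Sigma^{-1} (mu1 - mu2)
    return "C1" if w @ (x - (mu1 + mu2) / 2) >= 0 else "C2"

# Usage with the numbers from question 6:
Sigma = np.diag([0.5, 0.25])
print(choose_class(np.array([0.0, 1.0]),
                   np.array([-1.0, 2.0]),
                   np.array([2.0, 1.0]),
                   Sigma))                        # C1
```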
9. The eigenvectors of the matrix are
$$\begin{bmatrix}3\\2\end{bmatrix}, \qquad \begin{bmatrix}1\\-1\end{bmatrix}$$
Ans c
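The matrix itself is not reproduced here, so as an illustration only, the snippet below constructs a hypothetical matrix that has exactly these eigenvector directions and verifies the defining property $\mathbf{M}\mathbf{v} = \lambda\mathbf{v}$:

```python
import numpy as np

# Hypothetical stand-in: any M = P @ D @ P^{-1} has the columns of P as eigenvectors.
P = np.array([[3.0, 1.0], [2.0, -1.0]])      # columns are the candidate eigenvectors
D = np.diag([4.0, 1.0])                      # arbitrary distinct eigenvalues
M = P @ D @ np.linalg.inv(P)

# Check the eigenvector property directly: M v equals lambda * v for each column.
for v, lam in zip(P.T, np.diag(D)):
    print(np.allclose(M @ v, lam * v))       # True, True
```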
10. Given
$$\bar{\mathbf{x}}_1 = \begin{bmatrix}1\\1\\1\\1\end{bmatrix}, \qquad \bar{\mathbf{x}}_2 = \begin{bmatrix}-2\\1\\2\\-3\end{bmatrix}$$
The orthonormal vector basis can be found as follows:
$$\bar{\mathbf{v}}_1 = \frac{\bar{\mathbf{x}}_1}{\|\bar{\mathbf{x}}_1\|} = \frac{1}{2}\begin{bmatrix}1\\1\\1\\1\end{bmatrix}$$
$$\tilde{\mathbf{v}}_2 = \bar{\mathbf{x}}_2 - \bar{\mathbf{v}}_1\bar{\mathbf{v}}_1^T\bar{\mathbf{x}}_2 = \begin{bmatrix}-2\\1\\2\\-3\end{bmatrix} - \frac{1}{4}\begin{bmatrix}1\\1\\1\\1\end{bmatrix}(-2) = \begin{bmatrix}-2\\1\\2\\-3\end{bmatrix} + \frac{1}{2}\begin{bmatrix}1\\1\\1\\1\end{bmatrix} = \frac{1}{2}\begin{bmatrix}-3\\3\\5\\-5\end{bmatrix}$$
Since $\|\tilde{\mathbf{v}}_2\| = \sqrt{17}$, normalizing gives
$$\bar{\mathbf{v}}_2 = \frac{\tilde{\mathbf{v}}_2}{\|\tilde{\mathbf{v}}_2\|} = \frac{1}{2\sqrt{17}}\begin{bmatrix}-3\\3\\5\\-5\end{bmatrix}$$
Ans d
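The Gram-Schmidt steps above can be reproduced numerically (this check is an addition to the original working):

```python
import numpy as np

x1 = np.array([1.0, 1.0, 1.0, 1.0])
x2 = np.array([-2.0, 1.0, 2.0, -3.0])

v1 = x1 / np.linalg.norm(x1)                 # (1/2)[1, 1, 1, 1]
v2_tilde = x2 - v1 * (v1 @ x2)               # remove the component of x2 along v1
v2 = v2_tilde / np.linalg.norm(v2_tilde)     # (1/(2*sqrt(17)))[-3, 3, 5, -5]

print(np.round(v2 * 2 * np.sqrt(17)))            # [-3.  3.  5. -5.]
print(np.round([v1 @ v1, v2 @ v2, v1 @ v2], 6))  # [1. 1. 0.] -> orthonormal
```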
