Diagonalization of Matrices
In engineering applications we usually want to work with diagonal matrices, so it is important to be able to diagonalize a matrix.
$$D = \begin{bmatrix} d_1 & 0 & \cdots & 0 \\ 0 & d_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & d_n \end{bmatrix}_{n \times n}$$ is an $n \times n$ diagonal matrix.
Notes:
i) $|D| = d_1 d_2 \cdots d_n$
ii) $D$ is nonsingular if and only if every main-diagonal element is different from $0$.
iii) $D^{-1} = \begin{bmatrix} 1/d_1 & 0 & \cdots & 0 \\ 0 & 1/d_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1/d_n \end{bmatrix}_{n \times n}$
iv) For two diagonal matrices $D = \mathrm{diag}(d_1, d_2, \ldots, d_n)$ and $W = \mathrm{diag}(w_1, w_2, \ldots, w_n)$:
$$WD = DW = \begin{bmatrix} d_1 w_1 & 0 & \cdots & 0 \\ 0 & d_2 w_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & d_n w_n \end{bmatrix}_{n \times n}$$
$$P^{-1}AP = D \;\Rightarrow\; \underbrace{PP^{-1}}_{I}A\underbrace{PP^{-1}}_{I} = PDP^{-1} \;\Rightarrow\; A = PDP^{-1}$$
where $D = \begin{bmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n \end{bmatrix}_{n \times n}$
ex: $A = \begin{bmatrix} -3 & 0 \\ 1 & -2 \end{bmatrix}_{2 \times 2}$
$$|A - \lambda I| = 0 = \begin{vmatrix} -3-\lambda & 0 \\ 1 & -2-\lambda \end{vmatrix} = (-3-\lambda)(-2-\lambda), \quad \lambda_1 = -2, \; \lambda_2 = -3$$
for $\lambda_1 = -2$: $[A - \lambda I] = \begin{bmatrix} -3+2 & 0 \\ 1 & -2+2 \end{bmatrix} = \begin{bmatrix} -1 & 0 \\ 1 & 0 \end{bmatrix} \sim \begin{bmatrix} -1 & 0 \\ 0 & 0 \end{bmatrix} \Rightarrow \vec{x}_1 = \begin{bmatrix} x_1 = 0 \\ x_2 = \text{parametric} \end{bmatrix} \Rightarrow \begin{bmatrix} 0 \\ 1 \end{bmatrix}_{x_2 = 1}$
for $\lambda_2 = -3$: $[A - \lambda I] = \begin{bmatrix} -3+3 & 0 \\ 1 & -2+3 \end{bmatrix} = \begin{bmatrix} 0 & 0 \\ 1 & 1 \end{bmatrix} \sim \begin{bmatrix} 1 & 1 \\ 0 & 0 \end{bmatrix} \Rightarrow \vec{x}_2 = \begin{bmatrix} x_1 = -x_2 \\ x_2 = \text{parametric} \end{bmatrix} \Rightarrow \begin{bmatrix} -1 \\ 1 \end{bmatrix}_{x_2 = 1}$
$P$ = column matrix of the eigenvectors; $P = [\vec{x}_1, \vec{x}_2]$ OR $[\vec{x}_2, \vec{x}_1] = \begin{bmatrix} -1 & 0 \\ 1 & 1 \end{bmatrix}$
$$\Rightarrow P^{-1}: \left[\begin{array}{cc|cc} -1 & 0 & 1 & 0 \\ 1 & 1 & 0 & 1 \end{array}\right] \sim \left[\begin{array}{cc|cc} 1 & 0 & -1 & 0 \\ 0 & 1 & 1 & 1 \end{array}\right] \Rightarrow P^{-1} = \begin{bmatrix} -1 & 0 \\ 1 & 1 \end{bmatrix}$$
$$D = P^{-1}AP \Rightarrow \begin{bmatrix} -1 & 0 \\ 1 & 1 \end{bmatrix}\begin{bmatrix} -3 & 0 \\ 1 & -2 \end{bmatrix}\begin{bmatrix} -1 & 0 \\ 1 & 1 \end{bmatrix} = \begin{bmatrix} -3 & 0 \\ 0 & -2 \end{bmatrix};$$
a diagonal matrix whose diagonal elements are the eigenvalues.
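This similarity transformation can be checked numerically; a minimal sketch with NumPy (the matrices are those of the example above):

```python
import numpy as np

# A and P from the worked example: columns of P are the eigenvectors
# [-1, 1] (for lambda = -3) and [0, 1] (for lambda = -2).
A = np.array([[-3.0,  0.0],
              [ 1.0, -2.0]])
P = np.array([[-1.0, 0.0],
              [ 1.0, 1.0]])

D = np.linalg.inv(P) @ A @ P  # similarity transformation
print(np.round(D))            # diag(-3, -2)
```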
Notes: if the eigenvalues of a matrix $A_{n \times n}$ are not distinct, $A_{n \times n}$ may or may not be diagonalizable; $A$ may or may not be similar to $D$. So not all matrices can be diagonalized by a similarity transformation.
a) $A_{n \times n}$ with $\lambda_1, \lambda_2, \lambda_3$ distinct (different) ⇒ the eigenvectors will be linearly independent ⇒ the matrix can be diagonalized.
b) $A_{n \times n}$ with $\lambda_1 = \lambda_2$, $\lambda_3$ ⇒ NOT distinct (different) ⇒ the eigenvectors may or may not be linearly independent:
i) The matrix can be diagonalized by a similarity transformation if the eigenvectors are linearly independent.
ii) The matrix cannot be diagonalized by a similarity transformation ($D = P^{-1}AP$) if the eigenvectors are not linearly independent.
Linearly Dependent and Independent Sets of Vectors in $R^n$
$R^n$: n-dimensional vector space
Let $\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_m$ be vectors in $R^n$, and let $A_{m \times n}$ be the matrix whose rows are these vectors. The vectors are linearly independent if $A_{m \times n}$ has rank $m$:
$$r[A]_{m \times n} = m \Rightarrow \vec{v}_1, \vec{v}_2, \ldots, \vec{v}_m \text{ are a linearly independent set of vectors in n-dimensional space}$$
ex: Given $\vec{i} = (1, 0, 0)$, $\vec{j} = (0, 1, 0)$, $\vec{k} = (0, 0, 1)$, the unit vectors of $R^3$, the 3-dimensional vector space $(x, y, z)$:
$m = 3$, $n = 3$
$$A_{3 \times 3} = \begin{bmatrix} \vec{i} \\ \vec{j} \\ \vec{k} \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix},$$
which is already in RREF; $r[A]_{m \times n} = m = 3$, so they are linearly independent.
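The rank test for linear independence translates directly into code; a small sketch (assuming NumPy is available) using `np.linalg.matrix_rank`:

```python
import numpy as np

# Stack the m vectors as rows of an m x n matrix; they are linearly
# independent iff the rank equals m.
vectors = np.array([[1, 0, 0],   # i
                    [0, 1, 0],   # j
                    [0, 0, 1]])  # k
m = vectors.shape[0]
print(np.linalg.matrix_rank(vectors) == m)  # True
```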
ex: let $A_{3 \times 3} = \begin{bmatrix} 3 & 0 & 0 \\ 1 & -2 & -8 \\ 0 & -5 & 1 \end{bmatrix}$
$$|A - \lambda I| = \begin{vmatrix} 3-\lambda & 0 & 0 \\ 1 & -2-\lambda & -8 \\ 0 & -5 & 1-\lambda \end{vmatrix} = (3-\lambda)\left[(-2-\lambda)(1-\lambda) - (-8)(-5)\right] = 0$$
$$\Rightarrow -\lambda^3 + 2\lambda^2 + 45\lambda - 126 = 0$$
$\lambda_1 = 6$, $\lambda_2 = 3$, $\lambda_3 = -7$ ⇒ distinct eigenvalues
Corresponding eigenvectors:
$$\vec{x}_1 = \begin{bmatrix} 0 \\ 1 \\ -1 \end{bmatrix}, \quad \vec{x}_2 = \begin{bmatrix} 30 \\ -2 \\ 5 \end{bmatrix}, \quad \vec{x}_3 = \begin{bmatrix} 0 \\ 8 \\ 5 \end{bmatrix}$$
$$\begin{bmatrix} \vec{x}_1 \\ \vec{x}_2 \\ \vec{x}_3 \end{bmatrix} = \begin{bmatrix} 0 & 1 & -1 \\ 30 & -2 & 5 \\ 0 & 8 & 5 \end{bmatrix} \xrightarrow{R_1 \leftrightarrow R_2} \begin{bmatrix} 30 & -2 & 5 \\ 0 & 1 & -1 \\ 0 & 8 & 5 \end{bmatrix} \xrightarrow{R_1/30 \to R_1} \begin{bmatrix} 1 & -1/15 & 1/6 \\ 0 & 1 & -1 \\ 0 & 8 & 5 \end{bmatrix}$$
$$\xrightarrow[\tfrac{1}{15}R_2 + R_1 \to R_1]{-8R_2 + R_3 \to R_3} \begin{bmatrix} 1 & 0 & 1/10 \\ 0 & 1 & -1 \\ 0 & 0 & 13 \end{bmatrix} \xrightarrow{R_3/13 \to R_3} \begin{bmatrix} 1 & 0 & 1/10 \\ 0 & 1 & -1 \\ 0 & 0 & 1 \end{bmatrix} \xrightarrow[-\tfrac{1}{10}R_3 + R_1 \to R_1]{R_3 + R_2 \to R_2} \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$$
$\Rightarrow r[A] = 3$, so they are linearly independent.
$$P = \underbrace{[\vec{x}_1, \vec{x}_2, \vec{x}_3]}_{\text{column vectors}} = \begin{bmatrix} 0 & 30 & 0 \\ 1 & -2 & 8 \\ -1 & 5 & 5 \end{bmatrix}$$
$$P^{-1}: \left[\begin{array}{ccc|ccc} 0 & 30 & 0 & 1 & 0 & 0 \\ 1 & -2 & 8 & 0 & 1 & 0 \\ -1 & 5 & 5 & 0 & 0 & 1 \end{array}\right] \xrightarrow[R_2 + R_3 \to R_3]{R_1 \leftrightarrow R_2} \left[\begin{array}{ccc|ccc} 1 & -2 & 8 & 0 & 1 & 0 \\ 0 & 30 & 0 & 1 & 0 & 0 \\ 0 & 3 & 13 & 0 & 1 & 1 \end{array}\right] \xrightarrow{R_2/30 \to R_2}$$
$$\left[\begin{array}{ccc|ccc} 1 & -2 & 8 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1/30 & 0 & 0 \\ 0 & 3 & 13 & 0 & 1 & 1 \end{array}\right] \xrightarrow[-3R_2 + R_3 \to R_3]{2R_2 + R_1 \to R_1} \left[\begin{array}{ccc|ccc} 1 & 0 & 8 & 1/15 & 1 & 0 \\ 0 & 1 & 0 & 1/30 & 0 & 0 \\ 0 & 0 & 13 & -1/10 & 1 & 1 \end{array}\right]$$
$$\xrightarrow[-8R_3 + R_1 \to R_1]{R_3/13 \to R_3} \left[\begin{array}{ccc|ccc} 1 & 0 & 0 & 5/39 & 5/13 & -8/13 \\ 0 & 1 & 0 & 1/30 & 0 & 0 \\ 0 & 0 & 1 & -1/130 & 1/13 & 1/13 \end{array}\right] \Rightarrow P^{-1} = \begin{bmatrix} 5/39 & 5/13 & -8/13 \\ 1/30 & 0 & 0 \\ -1/130 & 1/13 & 1/13 \end{bmatrix}$$
$$P^{-1}AP = \begin{bmatrix} 5/39 & 5/13 & -8/13 \\ 1/30 & 0 & 0 \\ -1/130 & 1/13 & 1/13 \end{bmatrix}\begin{bmatrix} 3 & 0 & 0 \\ 1 & -2 & -8 \\ 0 & -5 & 1 \end{bmatrix}\begin{bmatrix} 0 & 30 & 0 \\ 1 & -2 & 8 \\ -1 & 5 & 5 \end{bmatrix} = \begin{bmatrix} 6 & 0 & 0 \\ 0 & 3 & 0 \\ 0 & 0 & -7 \end{bmatrix} = D$$
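Rather than solving the characteristic polynomial and row-reducing by hand, `np.linalg.eig` returns the eigenvalues together with a matrix of (normalized) eigenvectors that serves as $P$; a quick check against the example above:

```python
import numpy as np

A = np.array([[3.0,  0.0,  0.0],
              [1.0, -2.0, -8.0],
              [0.0, -5.0,  1.0]])

eigvals, P = np.linalg.eig(A)      # columns of P are eigenvectors of A
D = np.linalg.inv(P) @ A @ P       # similarity transformation
print(np.round(np.sort(eigvals)))  # [-7.  3.  6.]
print(np.round(D, 6))              # diagonal, eigenvalues on the diagonal
```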
ex: let $A_{3 \times 3} = \begin{bmatrix} 3 & 0 & 0 \\ 0 & 3 & 2 \\ 0 & 0 & 4 \end{bmatrix}$
$$|A - \lambda I| = \begin{vmatrix} 3-\lambda & 0 & 0 \\ 0 & 3-\lambda & 2 \\ 0 & 0 & 4-\lambda \end{vmatrix} = 0 = (3-\lambda)^2(4-\lambda)$$
$\lambda_1 = \lambda_2 = 3$, $\lambda_3 = 4$ ⇒ repeated, not distinct eigenvalues
for $\lambda_1 = \lambda_2 = 3$: $[A - \lambda I] = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 2 \\ 0 & 0 & 1 \end{bmatrix} \sim \begin{bmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix} \Rightarrow \vec{x} = \begin{bmatrix} x_1 = \text{parametric} \\ x_2 = \text{parametric} \\ x_3 = 0 \end{bmatrix}$
$$\vec{x}_1 = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}_{x_1 = 1,\, x_2 = 0}, \quad \vec{x}_2 = \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}_{x_1 = 0,\, x_2 = 1}$$
for $\lambda_3 = 4$: $[A - \lambda I] = \begin{bmatrix} -1 & 0 & 0 \\ 0 & -1 & 2 \\ 0 & 0 & 0 \end{bmatrix} \sim \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & -2 \\ 0 & 0 & 0 \end{bmatrix} \Rightarrow \vec{x} = \begin{bmatrix} x_1 = 0 \\ x_2 = 2x_3 \\ x_3 = \text{parametric} \end{bmatrix}$
$$\vec{x}_3 = \begin{bmatrix} 0 \\ 2 \\ 1 \end{bmatrix}_{x_3 = 1}$$
$$\begin{bmatrix} \vec{x}_1 \\ \vec{x}_2 \\ \vec{x}_3 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 2 & 1 \end{bmatrix} \sim \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \Rightarrow r[A] = 3,$$
so they are linearly independent.
$$P = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 2 \\ 0 & 0 & 1 \end{bmatrix}, \quad P^{-1} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & -2 \\ 0 & 0 & 1 \end{bmatrix}$$
$$P^{-1}AP = \begin{bmatrix} 3 & 0 & 0 \\ 0 & 3 & 0 \\ 0 & 0 & 4 \end{bmatrix} = D$$
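Even with the repeated eigenvalue, the $P$ built from these three independent eigenvectors diagonalizes $A$; a short NumPy check of the result above:

```python
import numpy as np

A = np.array([[3.0, 0.0, 0.0],
              [0.0, 3.0, 2.0],
              [0.0, 0.0, 4.0]])
P = np.array([[1.0, 0.0, 0.0],   # eigenvectors for lambda = 3, 3, 4
              [0.0, 1.0, 2.0],
              [0.0, 0.0, 1.0]])

D = np.linalg.inv(P) @ A @ P
print(np.round(D))  # diag(3, 3, 4)
```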
$$P^{-1}AP = D \;\Rightarrow\; \underbrace{PP^{-1}}_{I}A\underbrace{PP^{-1}}_{I} = PDP^{-1} \;\Rightarrow\; A = PDP^{-1}$$
$$A^2 = AA = PD\underbrace{P^{-1}P}_{I}DP^{-1} = PD^2P^{-1}$$
$$A^3 = AAA = PD\underbrace{P^{-1}P}_{I}D\underbrace{P^{-1}P}_{I}DP^{-1} = PD^3P^{-1}$$
$$D^k = \begin{bmatrix} d_1^k & 0 & \cdots & 0 \\ 0 & d_2^k & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & d_n^k \end{bmatrix}_{n \times n}$$
ex: let $A = \begin{bmatrix} 5 & 3 \\ 1 & 3 \end{bmatrix}$
$$|A - \lambda I| = 0 = \begin{vmatrix} 5-\lambda & 3 \\ 1 & 3-\lambda \end{vmatrix} = (5-\lambda)(3-\lambda) - 3, \quad \lambda_1 = 2, \; \lambda_2 = 6$$
for $\lambda_1 = 2$: $[A - \lambda I] = \begin{bmatrix} 3 & 3 \\ 1 & 1 \end{bmatrix} \sim \begin{bmatrix} 1 & 1 \\ 0 & 0 \end{bmatrix} \Rightarrow \vec{x}_1 = \begin{bmatrix} x_1 = -x_2 \\ x_2 = \text{parametric} \end{bmatrix} \Rightarrow \begin{bmatrix} -1 \\ 1 \end{bmatrix}_{x_2 = 1}$
for $\lambda_2 = 6$: $[A - \lambda I] = \begin{bmatrix} -1 & 3 \\ 1 & -3 \end{bmatrix} \sim \begin{bmatrix} 1 & -3 \\ 0 & 0 \end{bmatrix} \Rightarrow \vec{x}_2 = \begin{bmatrix} x_1 = 3x_2 \\ x_2 = \text{parametric} \end{bmatrix} \Rightarrow \begin{bmatrix} 3 \\ 1 \end{bmatrix}_{x_2 = 1}$
$$\begin{bmatrix} \vec{x}_1 \\ \vec{x}_2 \end{bmatrix} = \begin{bmatrix} -1 & 1 \\ 3 & 1 \end{bmatrix} \sim \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \Rightarrow r[A] = 2,$$
so they are linearly independent.
$P$ = column matrix of the eigenvectors; $P = [\vec{x}_1, \vec{x}_2] = \begin{bmatrix} -1 & 3 \\ 1 & 1 \end{bmatrix}$
$$\Rightarrow P^{-1}: \left[\begin{array}{cc|cc} -1 & 3 & 1 & 0 \\ 1 & 1 & 0 & 1 \end{array}\right] \sim \left[\begin{array}{cc|cc} 1 & 0 & -1/4 & 3/4 \\ 0 & 1 & 1/4 & 1/4 \end{array}\right] \Rightarrow P^{-1} = \begin{bmatrix} -1/4 & 3/4 \\ 1/4 & 1/4 \end{bmatrix}$$
$$D = P^{-1}AP = \begin{bmatrix} 2 & 0 \\ 0 & 6 \end{bmatrix};$$
$$\Rightarrow A^k = PD^kP^{-1} = \begin{bmatrix} -1 & 3 \\ 1 & 1 \end{bmatrix}\begin{bmatrix} 2^k & 0 \\ 0 & 6^k \end{bmatrix}\begin{bmatrix} -1/4 & 3/4 \\ 1/4 & 1/4 \end{bmatrix} = \frac{1}{4}\begin{bmatrix} 2^k + 3 \cdot 6^k & -3 \cdot 2^k + 3 \cdot 6^k \\ -2^k + 6^k & 3 \cdot 2^k + 6^k \end{bmatrix}$$
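This is the practical payoff: $A^k$ costs one diagonal power plus two matrix products. A sketch comparing against repeated multiplication (via `np.linalg.matrix_power`):

```python
import numpy as np

A = np.array([[5.0, 3.0],
              [1.0, 3.0]])
P = np.array([[-1.0, 3.0],
              [ 1.0, 1.0]])
Pinv = np.array([[-0.25, 0.75],
                 [ 0.25, 0.25]])

k = 5
Dk = np.diag([2.0**k, 6.0**k])   # D^k = diag(2^k, 6^k)
Ak = P @ Dk @ Pinv               # A^k = P D^k P^{-1}
print(np.allclose(Ak, np.linalg.matrix_power(A, k)))  # True
```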
Real Symmetric Matrices (RSM)
If $A = A^T$, then $A$ is symmetric.
$$A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{12} & a_{22} & \cdots & \vdots \\ \vdots & \vdots & \ddots & \vdots \\ a_{1n} & \cdots & \cdots & a_{nn} \end{bmatrix}_{n \times n}:$$
the off-diagonal elements are mirrored, $a_{ij} = a_{ji}$ for $i \neq j$.
ex: $A = \begin{bmatrix} 4 & 7 & -2 \\ 7 & 1 & 5 \\ -2 & 5 & 3 \end{bmatrix}$
Notes:
i) The eigenvalues of a RSM are real.
ii) Every $n \times n$ RSM has a set of orthonormal eigenvectors (orthonormal = orthogonal and unit vectors).
iii) Every $n \times n$ RSM has $n$ linearly independent eigenvectors, so it can always be diagonalized by a similarity transformation: $D = P^{-1}AP$.
iv) To transform a RSM into a diagonal matrix, we use the modal matrix $M$.
Orthogonality Condition:
Let $\hat{x}_i \cdot \hat{x}_j = \delta_{ij}$ (Kronecker delta), where
$$\delta_{ij} = \begin{cases} 0, & \text{if } i \neq j \\ 1, & \text{if } i = j \end{cases} \quad \text{(orthonormality condition)}$$
An $n \times n$ matrix $A$ is orthogonally diagonalizable if and only if $A$ is a symmetric matrix.
Orthonormal Matrix: all of its column vectors are unit vectors and mutually orthogonal.
ex: let $M = [\hat{x}_1, \hat{x}_2, \hat{x}_3] = \begin{bmatrix} 1/9 & 8/9 & 4/9 \\ 4/9 & -4/9 & 7/9 \\ 8/9 & 1/9 & -4/9 \end{bmatrix}$
If the vectors are unit vectors: $\hat{x}_i \cdot \hat{x}_j = 1$ if $i = j$.
Naturally, $\hat{x}_1 \cdot \hat{x}_1 = |\hat{x}_1||\hat{x}_1|\cos\theta = 1 \cdot 1 \cdot \cos(0) = 1$.
Or, directly applying the dot product:
$$\hat{x}_1 \cdot \hat{x}_1 = \left(\tfrac{1}{9}, \tfrac{4}{9}, \tfrac{8}{9}\right) \cdot \left(\tfrac{1}{9}, \tfrac{4}{9}, \tfrac{8}{9}\right) = \tfrac{1}{81} + \tfrac{16}{81} + \tfrac{64}{81} = 1$$
Similarly, $\hat{x}_2 \cdot \hat{x}_2 = \hat{x}_3 \cdot \hat{x}_3 = 1$.
$\hat{x}_i \cdot \hat{x}_j = 0$ if $i \neq j$:
$$\hat{x}_1 \cdot \hat{x}_2 = \left(\tfrac{1}{9}, \tfrac{4}{9}, \tfrac{8}{9}\right) \cdot \left(\tfrac{8}{9}, -\tfrac{4}{9}, \tfrac{1}{9}\right) = \tfrac{8}{81} - \tfrac{16}{81} + \tfrac{8}{81} = 0$$
Similarly, $\hat{x}_1 \cdot \hat{x}_3 = \hat{x}_2 \cdot \hat{x}_3 = 0$.
Modal Matrix: an $n \times n$ matrix whose columns are orthonormal eigenvectors of an $n \times n$ matrix $A$ is called a modal matrix of $A$:
$$M = [\hat{x}_1, \hat{x}_2, \ldots, \hat{x}_n]$$
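Because the columns of a modal matrix are orthonormal, $M^TM = I$, i.e. $M^{-1} = M^T$ and no matrix inversion is needed. Checking this for the orthonormal matrix $M$ from the example above:

```python
import numpy as np

# M from the orthonormal-matrix example; each column is a unit vector
# and the columns are mutually orthogonal.
M = np.array([[1.0,  8.0,  4.0],
              [4.0, -4.0,  7.0],
              [8.0,  1.0, -4.0]]) / 9.0

print(np.allclose(M.T @ M, np.eye(3)))  # True: M^{-1} = M^T
```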
orthogonality conditions:
$\vec{x}_i \cdot \vec{x}_j = 0$ if $i \neq j$; or, $\vec{x}_1^T \cdot \vec{x}_2 = 0$. Similarly, $\vec{x}_1 \cdot \vec{x}_3 = \vec{x}_2 \cdot \vec{x}_3 = 0$.
The vectors are orthogonal; however, they are not unit vectors, therefore, they should be normalized!
normalization:
$$\hat{x}_1 = \frac{\vec{x}_1}{|\vec{x}_1|} = \frac{1}{\sqrt{2}}\begin{bmatrix} 0 \\ -1 \\ 1 \end{bmatrix} = \begin{bmatrix} 0 \\ -1/\sqrt{2} \\ 1/\sqrt{2} \end{bmatrix}, \quad \hat{x}_2 = \frac{\vec{x}_2}{|\vec{x}_2|} = \begin{bmatrix} 1/\sqrt{3} \\ 1/\sqrt{3} \\ 1/\sqrt{3} \end{bmatrix}, \quad \hat{x}_3 = \frac{\vec{x}_3}{|\vec{x}_3|} = \begin{bmatrix} -2/\sqrt{6} \\ 1/\sqrt{6} \\ 1/\sqrt{6} \end{bmatrix}$$
$$M = [\hat{x}_1, \hat{x}_2, \hat{x}_3] = \begin{bmatrix} 0 & 1/\sqrt{3} & -2/\sqrt{6} \\ -1/\sqrt{2} & 1/\sqrt{3} & 1/\sqrt{6} \\ 1/\sqrt{2} & 1/\sqrt{3} & 1/\sqrt{6} \end{bmatrix}$$
to check: $M^{-1}AM = M^TAM = D = \begin{bmatrix} 2 & 0 & 0 \\ 0 & 3 & 0 \\ 0 & 0 & 6 \end{bmatrix}$
for $\lambda_3 = -4$: $\vec{x}_3 = \begin{bmatrix} -1 \\ 1 \\ 1 \end{bmatrix}_{x_3 = 1}$
$$\begin{bmatrix} \vec{x}_1 \\ \vec{x}_2 \\ \vec{x}_3 \end{bmatrix} = \begin{bmatrix} 1 & 1 & 0 \\ 1 & 0 & 1 \\ -1 & 1 & 1 \end{bmatrix} \Rightarrow r[A] = 3,$$
so they are linearly independent.
orthogonality conditions:
$\vec{x}_i \cdot \vec{x}_j = 0$ if $i \neq j$. Here $\vec{x}_1 \cdot \vec{x}_2 = 1 \neq 0$, so $\vec{x}_2$ is replaced by the Gram-Schmidt vector $\vec{w}_2 = \vec{x}_2 - \frac{\vec{x}_2 \cdot \vec{x}_1}{\vec{x}_1 \cdot \vec{x}_1}\vec{x}_1 = \left(\tfrac{1}{2}, -\tfrac{1}{2}, 1\right)$.
Check orthogonality: $\vec{x}_1 \cdot \vec{w}_2 = \vec{x}_3 \cdot \vec{w}_2 = 0$.
normalization:
$$\hat{x}_1 = \frac{\vec{x}_1}{|\vec{x}_1|} = \frac{1}{\sqrt{2}}\begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix} = \begin{bmatrix} 1/\sqrt{2} \\ 1/\sqrt{2} \\ 0 \end{bmatrix}, \quad \hat{w}_2 = \frac{\vec{w}_2}{|\vec{w}_2|} = \begin{bmatrix} 1/\sqrt{6} \\ -1/\sqrt{6} \\ 2/\sqrt{6} \end{bmatrix}, \quad \hat{x}_3 = \frac{\vec{x}_3}{|\vec{x}_3|} = \begin{bmatrix} -1/\sqrt{3} \\ 1/\sqrt{3} \\ 1/\sqrt{3} \end{bmatrix}$$
Quadratic Forms
So far we have focused on linear systems of equations. Quadratic forms also occur frequently in engineering science applications such as design optimization. A quadratic function is any function that involves products of two different variables ($x_1 x_2$) or squares of single variables ($x_1^2, x_2^2, \ldots$).
A quadratic form on $R^n$ is a function $Q$, defined on $R^n$, whose value at a vector $\vec{x}$ in $R^n$ can be computed by an expression of the form
$$Q(\vec{x}) = \vec{x}^TA\vec{x},$$
where $A$ is an $n \times n$ symmetric matrix. $A$ is called the matrix of the quadratic form.
Ex: if $\vec{x} = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}$, compute $\vec{x}^TA\vec{x}$ for the matrix $A$ given by i) $A = \begin{bmatrix} 4 & 0 \\ 0 & 3 \end{bmatrix}$, ii) $A = \begin{bmatrix} 3 & -2 \\ -2 & 7 \end{bmatrix}$.
i) Quadratic form:
$$\vec{x}^TA\vec{x} = \begin{bmatrix} x_1 & x_2 \end{bmatrix}\begin{bmatrix} 4 & 0 \\ 0 & 3 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} x_1 & x_2 \end{bmatrix}\begin{bmatrix} 4x_1 \\ 3x_2 \end{bmatrix} = 4x_1^2 + 3x_2^2$$
Since there is no $x_1x_2$ term, the symmetric off-diagonal elements are $0$.
ii)
$$\vec{x}^TA\vec{x} = \begin{bmatrix} x_1 & x_2 \end{bmatrix}\begin{bmatrix} 3 & -2 \\ -2 & 7 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} x_1 & x_2 \end{bmatrix}\begin{bmatrix} 3x_1 - 2x_2 \\ -2x_1 + 7x_2 \end{bmatrix} = 3x_1^2 - 4x_1x_2 + 7x_2^2$$
Ex: For $\vec{x}$ in $R^3$, let $Q(\vec{x}) = 5x_1^2 + 3x_2^2 + 2x_3^2 - x_1x_2 + 8x_2x_3$; write the statement in the quadratic form $\vec{x}^TA\vec{x}$:
$$Q(\vec{x}) = \begin{bmatrix} x_1 & x_2 & x_3 \end{bmatrix}\begin{bmatrix} 5 & -1/2 & 0 \\ -1/2 & 3 & 4 \\ 0 & 4 & 2 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}$$
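Building the matrix of a quadratic form is mechanical: squared-term coefficients go on the diagonal, and each cross-term coefficient is split in half between the two symmetric off-diagonal slots. A sketch checking this for the example above:

```python
import numpy as np

# Matrix of Q(x) = 5x1^2 + 3x2^2 + 2x3^2 - x1*x2 + 8*x2*x3
A = np.array([[ 5.0, -0.5, 0.0],
              [-0.5,  3.0, 4.0],
              [ 0.0,  4.0, 2.0]])

def Q(x):
    return x @ A @ x   # x^T A x

x = np.array([1.0, 2.0, 3.0])
direct = 5*x[0]**2 + 3*x[1]**2 + 2*x[2]**2 - x[0]*x[1] + 8*x[1]*x[2]
print(np.isclose(Q(x), direct))  # True
```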
Change of variable: $\vec{x} = P\vec{y}$, or $P^{-1}\vec{x} = P^{-1}P\vec{y} = I\vec{y} \Rightarrow \vec{y} = P^{-1}\vec{x}$, where $P$ is an invertible matrix and $\vec{y}$ is the new variable vector in $R^n$. With $P$ chosen as the orthogonal matrix of eigenvectors of $A$, the cross terms disappear:
$$Q(\vec{y}) = \lambda_1 y_1^2 + \lambda_2 y_2^2 + \cdots + \lambda_n y_n^2$$
Ex: $Q(\vec{x}) = x_1^2 - 8x_1x_2 - 5x_2^2$
$$Q(\vec{x}) = \vec{x}^TA\vec{x} = \begin{bmatrix} x_1 & x_2 \end{bmatrix}\begin{bmatrix} 1 & -4 \\ -4 & -5 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}$$
$$\lambda_1 = 3 \rightarrow \vec{x}_1 = \begin{bmatrix} 2/\sqrt{5} \\ -1/\sqrt{5} \end{bmatrix}, \quad \lambda_2 = -7 \rightarrow \vec{x}_2 = \begin{bmatrix} 1/\sqrt{5} \\ 2/\sqrt{5} \end{bmatrix}$$
So, $P = \begin{bmatrix} 2/\sqrt{5} & 1/\sqrt{5} \\ -1/\sqrt{5} & 2/\sqrt{5} \end{bmatrix}$. Thus, $P^T = P^{-1} = \begin{bmatrix} 2/\sqrt{5} & -1/\sqrt{5} \\ 1/\sqrt{5} & 2/\sqrt{5} \end{bmatrix}$.
As A is RSM, P is orthogonal
$$P^TAP = D = \begin{bmatrix} 3 & 0 \\ 0 & -7 \end{bmatrix}$$
As $P^{-1}\vec{x} = P^T\vec{x} = \vec{y}$:
$$\vec{x}^TA\vec{x} \rightarrow (P\vec{y})^TA(P\vec{y}) = \vec{y}^TP^TAP\vec{y} = \vec{y}^TD\vec{y}$$
$$Q(\vec{y}) = \begin{bmatrix} y_1 & y_2 \end{bmatrix}\begin{bmatrix} 3 & 0 \\ 0 & -7 \end{bmatrix}\begin{bmatrix} y_1 \\ y_2 \end{bmatrix} = 3y_1^2 - 7y_2^2$$
Let's compute $Q(\vec{x})$ at $\vec{x} = \begin{bmatrix} 2 \\ -2 \end{bmatrix}$ using the new quadratic form: with $\vec{x} = P\vec{y}$,
$$\vec{y} = P^{-1}\vec{x} = P^T\vec{x} = \begin{bmatrix} 6/\sqrt{5} \\ -2/\sqrt{5} \end{bmatrix}, \quad Q(\vec{y}) = 3 \cdot \tfrac{36}{5} - 7 \cdot \tfrac{4}{5} = 16,$$
which matches the direct evaluation $Q(\vec{x}) = 2^2 - 8(2)(-2) - 5(-2)^2 = 16$.
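The same evaluation in code, assuming NumPy: both the original form and the diagonalized form give 16 at $\vec{x} = (2, -2)$:

```python
import numpy as np

A = np.array([[ 1.0, -4.0],
              [-4.0, -5.0]])
s = 1.0 / np.sqrt(5.0)
P = np.array([[ 2*s, 1*s],   # columns: eigenvectors for lambda = 3, -7
              [-1*s, 2*s]])

x = np.array([2.0, -2.0])
y = P.T @ x                    # P is orthogonal, so P^{-1} = P^T
q_old = x @ A @ x              # x1^2 - 8 x1 x2 - 5 x2^2
q_new = 3*y[0]**2 - 7*y[1]**2  # diagonalized form, no cross term
print(np.isclose(q_old, 16), np.isclose(q_new, 16))  # True True
```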
For the geometric meaning of this equality of quadratic forms: the new quadratic form $\vec{y}^TD\vec{y}$ has no cross term, since $D$ is a diagonal matrix. With $D = \begin{bmatrix} 1/a & 0 \\ 0 & 1/b \end{bmatrix}$ and $a > b > 0$:
$$\rightarrow Q(\vec{x}) = \frac{x_1^2}{a} + \frac{x_2^2}{b} = 1,$$
which is an ellipse in 2-D (compare the general conic $ax^2 + bxy + cy^2 = d$).
We can find a new coordinate system in which the figure is in standard position.
Ex: For the ellipse $5x_1^2 - 4x_1x_2 + 5x_2^2 = 48$, find a change of variable that removes the cross-product term from the equation.
$$A = \begin{bmatrix} 5 & -2 \\ -2 & 5 \end{bmatrix}$$
$$\lambda_1 = 3 \rightarrow \vec{x}_1 = \begin{bmatrix} 1/\sqrt{2} \\ 1/\sqrt{2} \end{bmatrix}, \quad \lambda_2 = 7 \rightarrow \vec{x}_2 = \begin{bmatrix} -1/\sqrt{2} \\ 1/\sqrt{2} \end{bmatrix}$$
$\vec{x} = P\vec{y}$
$$Q(\vec{x}) = \vec{x}^TA\vec{x} = (P\vec{y})^TA(P\vec{y}) = \vec{y}^TP^TAP\vec{y} = \vec{y}^TD\vec{y} = Q(\vec{y})$$
$$D = \begin{bmatrix} 3 & 0 \\ 0 & 7 \end{bmatrix} \rightarrow Q(\vec{y}) = 3y_1^2 + 7y_2^2 = 48$$
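A numerical check of this change of variable, assuming NumPy: for any $\vec{y}$, the point $\vec{x} = P\vec{y}$ gives the same value in both forms:

```python
import numpy as np

A = np.array([[ 5.0, -2.0],
              [-2.0,  5.0]])
s = 1.0 / np.sqrt(2.0)
P = np.array([[s, -s],   # columns: eigenvectors for lambda = 3, 7
              [s,  s]])

rng = np.random.default_rng(0)
y = rng.standard_normal(2)
x = P @ y
lhs = 5*x[0]**2 - 4*x[0]*x[1] + 5*x[1]**2  # original form, with cross term
rhs = 3*y[0]**2 + 7*y[1]**2                # principal-axis form
print(np.isclose(lhs, rhs))  # True
```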
Note: let $A$ be a RSM with eigenvalues $\lambda_1, \ldots, \lambda_n$, and let $P$ be an orthogonal matrix that diagonalizes $A$. Then the change of coordinates $\vec{x} = P\vec{y}$ transforms the statement $\sum_{j=1}^{n}\sum_{k=1}^{n} a_{jk}x_jx_k$ into $\lambda_1 y_1^2 + \lambda_2 y_2^2 + \cdots + \lambda_n y_n^2$, which is called the canonical form.