det(I) = 1
4. If two rows are equal, then det(A) = 0. [Proved by exchanging the two equal rows: the determinant must change sign yet remain the same, so it is zero.]
5. Subtracting a multiple of one row from another row leaves the determinant unchanged:
\[
\begin{vmatrix} a + lc & b + ld \\ c & d \end{vmatrix}
= \begin{vmatrix} a & b \\ c & d \end{vmatrix}
+ l \begin{vmatrix} c & d \\ c & d \end{vmatrix}
= \begin{vmatrix} a & b \\ c & d \end{vmatrix},
\qquad \text{since } \begin{vmatrix} c & d \\ c & d \end{vmatrix} = 0.
\]
Ax = λx
where x is called the eigen vector and λ the corresponding eigen value.
• To determine the eigen values and eigen vectors:
Av̄ = λv̄
⇒ (A − λI)v̄ = 0
⇒ for a non-zero v̄ to exist, the nullspace of (A − λI) must contain more than just the zero vector. Hence, (A − λI) is singular, i.e., det(A − λI) = 0.
(A − λᵢI)vᵢ = 0
Example:
\[
A = \begin{bmatrix} 1 & 4 \\ 2 & 3 \end{bmatrix}
\;\rightarrow\;
\begin{vmatrix} 1-\lambda & 4 \\ 2 & 3-\lambda \end{vmatrix} = 0
\]
⇒ (1 − λ)(3 − λ) − 8 = 0
⇒ 3 − 4λ + λ² − 8 = 0
⇒ λ² − 4λ − 5 = 0
⇒ (λ − 5)(λ + 1) = 0
⇒ λ = 5, −1
\[
\lambda_1 = 5 \;\Rightarrow\;
\begin{bmatrix} -4 & 4 \\ 2 & -2 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}
\;\Rightarrow\; v_1 = \begin{bmatrix} 1 \\ 1 \end{bmatrix}
\]
\[
\lambda_2 = -1 \;\Rightarrow\;
\begin{bmatrix} 2 & 4 \\ 2 & 4 \end{bmatrix} v_2 = \begin{bmatrix} 0 \\ 0 \end{bmatrix}
\;\Rightarrow\; v_2 = \begin{bmatrix} -2 \\ 1 \end{bmatrix}
\]
The eigen vectors are v₁ = [1, 1]ᵀ and v₂ = [−2, 1]ᵀ.
– The trace of A equals the sum of the eigen values and det(A) equals their product (here 1 + 3 = 5 + (−1) = 4 and det(A) = −5 = 5 × (−1)). These two features can be used to check the calculated eigen values.
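A quick numerical check of this example (an illustrative numpy sketch, not part of the original notes):

```python
import numpy as np

# Matrix from the worked example above.
A = np.array([[1.0, 4.0],
              [2.0, 3.0]])

# Columns of eigvecs are unit eigenvectors; the order is not guaranteed.
eigvals, eigvecs = np.linalg.eig(A)
print(np.sort(eigvals))  # approximately [-1.  5.]

# trace = sum of eigen values, det = product of eigen values.
assert np.isclose(np.trace(A), eigvals.sum())
assert np.isclose(np.linalg.det(A), eigvals.prod())
```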
– the eigen values of a triangular matrix U are simply its diagonal (pivot) entries. However, these are not the eigen values of A: elimination does not preserve the eigen values.
• The λ's of an orthogonal matrix (AAᵀ = I) are complex in general, with absolute value 1, i.e., |λ| = 1.
• The λ's of AB are not, in general, the products of the individual eigen values of A and B, because A and B do not share the same eigen vectors.
• Similarly, the eigen values of (A + B) are not, in general, the sums of the individual eigen values of A and B.
Diagonalization:
• Avᵢ = λᵢvᵢ → λᵢ is the eigen value corresponding to the eigen vector vᵢ.
– If a matrix V is formed whose columns are the eigen vectors of A, then V is called the eigen vector matrix of A.
\[
AV = A \begin{bmatrix} | & | & & | \\ v_1 & v_2 & \cdots & v_n \\ | & | & & | \end{bmatrix}
= \begin{bmatrix} | & | & & | \\ \lambda_1 v_1 & \lambda_2 v_2 & \cdots & \lambda_n v_n \\ | & | & & | \end{bmatrix}
\]
\[
= \begin{bmatrix} | & | & & | \\ v_1 & v_2 & \cdots & v_n \\ | & | & & | \end{bmatrix}
\begin{bmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n \end{bmatrix}
= V\Lambda
\]
where Λ is the diagonal matrix with the eigen values as diagonal entries.
A = VΛV⁻¹
or, Λ = V⁻¹AV
• Thus, the eigen vector matrix (V), containing the linearly independent eigen vectors (vᵢ) as columns, diagonalizes the matrix A. The diagonal matrix Λ contains the eigen values as entries.
Λ = V⁻¹AV
• Now, since A = VΛV⁻¹:
⇒ A² = (VΛV⁻¹)(VΛV⁻¹) = VΛ²V⁻¹
Thus, Aᵏ = VΛᵏV⁻¹, where
\[
\Lambda^k = \begin{bmatrix} \lambda_1^k & 0 & \cdots & 0 \\ 0 & \lambda_2^k & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n^k \end{bmatrix}
\]
• Thus, if λ is an eigen value of A, then λᵏ is an eigen value of Aᵏ, whereas the eigen vectors remain the same.
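The power formula can be checked numerically; a numpy sketch (illustrative, not part of the original notes):

```python
import numpy as np

A = np.array([[1.0, 4.0],
              [2.0, 3.0]])

w, V = np.linalg.eig(A)  # A V = V diag(w)
Lam = np.diag(w)

# A^3 = V Lam^3 V^{-1}
A_cubed = V @ np.linalg.matrix_power(Lam, 3) @ np.linalg.inv(V)
assert np.allclose(A_cubed, A @ A @ A)

# Eigen values of A^3 are the cubes of the eigen values of A.
assert np.allclose(np.sort(np.linalg.eigvals(A @ A @ A)), np.sort(w ** 3))
```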
• Now, if the eigen values of A are all distinct, i.e., different from each other, then the eigen vectors are linearly independent and the eigen vector matrix is invertible. Hence, A is diagonalizable.
• A matrix with repeated eigen values may not have enough independent eigen vectors, and in that case it is not diagonalizable.
Example:
\[
A = \begin{bmatrix} 3 & 2 \\ 1 & 4 \end{bmatrix}
\;\rightarrow\; \lambda_1 + \lambda_2 = 7,\quad \lambda_1 \lambda_2 = 10
\]
Characteristic polynomial: (3 − λ)(4 − λ) − 2 = 0
⇒ 12 − 7λ + λ² − 2 = 0
⇒ λ² − 7λ + 10 = 0
⇒ (λ − 5)(λ − 2) = 0 ⇒ λ = 2, 5
\[
\lambda_1 = 2 \;\Rightarrow\;
\begin{bmatrix} 1 & 2 \\ 1 & 2 \end{bmatrix} v_1 = \begin{bmatrix} 0 \\ 0 \end{bmatrix}
\;\Rightarrow\; v_1 = \begin{bmatrix} -2 \\ 1 \end{bmatrix}
\]
\[
\lambda_2 = 5 \;\Rightarrow\;
\begin{bmatrix} -2 & 2 \\ 1 & -1 \end{bmatrix} v_2 = \begin{bmatrix} 0 \\ 0 \end{bmatrix}
\;\Rightarrow\; v_2 = \begin{bmatrix} 1 \\ 1 \end{bmatrix}
\]
\[
V = \begin{bmatrix} -2 & 1 \\ 1 & 1 \end{bmatrix},
\qquad
V^{-1} = \frac{1}{-3}\begin{bmatrix} 1 & -1 \\ -1 & -2 \end{bmatrix}
= \frac{1}{3}\begin{bmatrix} -1 & 1 \\ 1 & 2 \end{bmatrix}
\]
Let's calculate:
\[
V^{-1} A V
= \frac{1}{3}\begin{bmatrix} -1 & 1 \\ 1 & 2 \end{bmatrix}
\begin{bmatrix} 3 & 2 \\ 1 & 4 \end{bmatrix}
\begin{bmatrix} -2 & 1 \\ 1 & 1 \end{bmatrix}
= \frac{1}{3}\begin{bmatrix} -1 & 1 \\ 1 & 2 \end{bmatrix}
\begin{bmatrix} -4 & 5 \\ 2 & 5 \end{bmatrix}
= \frac{1}{3}\begin{bmatrix} 6 & 0 \\ 0 & 15 \end{bmatrix}
= \begin{bmatrix} 2 & 0 \\ 0 & 5 \end{bmatrix}
= \Lambda
\]
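The hand calculation can be verified numerically (a numpy sketch, not part of the original notes):

```python
import numpy as np

A = np.array([[3.0, 2.0],
              [1.0, 4.0]])

# Eigen vector matrix from the hand calculation above.
V = np.array([[-2.0, 1.0],
              [ 1.0, 1.0]])

Lam = np.linalg.inv(V) @ A @ V
print(np.round(Lam, 10))
assert np.allclose(Lam, np.diag([2.0, 5.0]))  # Lam = diag(2, 5)
```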
Example:
\[
A = \begin{bmatrix} 1 & -1 \\ 1 & -1 \end{bmatrix}
\;\Rightarrow\; (1-\lambda)(-1-\lambda) + 1 = 0
\]
⇒ −1 + λ² + 1 = 0 ⇒ λ = 0, 0
\[
Av = 0 \;\Rightarrow\;
\begin{bmatrix} 1 & -1 \\ 1 & -1 \end{bmatrix} v = 0
\;\Rightarrow\; v = \begin{bmatrix} 1 \\ 1 \end{bmatrix}
\leftarrow \text{is in the nullspace of } A.
\]
There is only one independent eigen vector, v = [1, 1]ᵀ; there is no other. Hence, the matrix cannot be diagonalized.
The same is true for
\[
\begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}.
\]
NOTE: There is no connection between invertibility and diagonalizability.
⇒ If the eigen vectors are not linearly independent (too few eigen vectors), then the matrix is not diagonalizable.
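The shortage of eigen vectors can be confirmed exactly with sympy (an illustrative sketch; sympy is an assumed tool, not part of the notes):

```python
from sympy import Matrix

A = Matrix([[1, -1],
            [1, -1]])

# eigenvects() returns (eigenvalue, algebraic multiplicity, eigenspace basis).
(lam, alg_mult, basis), = A.eigenvects()
print(lam, alg_mult, len(basis))  # eigenvalue 0 with AM = 2 but only 1 eigenvector

assert lam == 0 and alg_mult == 2 and len(basis) == 1
assert not A.is_diagonalizable()
```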
Proof that eigen vectors belonging to distinct eigen values are independent: suppose
\[
c_1 v_1 + c_2 v_2 + \cdots + c_n v_n = 0 \tag{1}
\]
Now, multiply by A →
\[
c_1 \lambda_1 v_1 + c_2 \lambda_2 v_2 + \cdots + c_n \lambda_n v_n = 0 \tag{2}
\]
Eqn.(2) − Eqn.(1) × λₙ → (the vₙ term cancels)
\[
c_1(\lambda_1 - \lambda_n)v_1 + c_2(\lambda_2 - \lambda_n)v_2 + \cdots + c_{n-1}(\lambda_{n-1} - \lambda_n)v_{n-1} = 0
\]
\[
\Rightarrow \sum_{i=1}^{n-1} c_i (\lambda_i - \lambda_n) v_i = 0 \tag{3}
\]
Multiplying Eqn.(3) by A →
\[
\sum_{i=1}^{n-1} c_i (\lambda_i - \lambda_n) A v_i = 0
\;\Rightarrow\; \sum_{i=1}^{n-1} c_i (\lambda_i - \lambda_n) \lambda_i v_i = 0 \tag{4}
\]
Eqn.(4) − Eqn.(3) × λₙ₋₁ →
\[
\sum_{i=1}^{n-1} c_i (\lambda_i - \lambda_n)(\lambda_i - \lambda_{n-1}) v_i = 0
\]
Repeating this process eliminates one vector at a time, until only c₁(λ₁ − λₙ)(λ₁ − λₙ₋₁)⋯(λ₁ − λ₂)v₁ = 0 remains. Since the λᵢ's are distinct, this forces c₁ = 0; similarly every cᵢ = 0. Thus, the eigen vectors are independent.
Multiplicity
• It represents the repetition of the eigen values.
• Geometric multiplicity (GM): the number of independent eigen vectors, equal to the dimension of the nullspace of (A − λI).
• Algebraic multiplicity (AM): the number of repetitions of an eigen value, i.e., in the solutions of det(A − λI) = 0.
\[
\text{Example: } A = \begin{bmatrix} 1 & -1 \\ 1 & -1 \end{bmatrix}
\]
has both eigen values equal to 0. Hence AM = 2 but GM = 1, since it has only one eigen vector, [1, 1]ᵀ.
Example: Suppose λ = 2, 2, 2 ⟹ then AM = 3 but GM can be 1, 2 or 3.
? When GM < AM → there is a shortage of eigen vectors and hence A is not diagonalizable.
Explanation (AM & GM): Suppose A has three identical eigen values out of n. While substituting this repeated eigen value, say λᵢ, to find the corresponding eigen vectors →
(A − λᵢI)v̄ᵢ = 0 ⟹ we are actually looking for the nullspace of (A − λᵢI). If (A − λᵢI) has three free columns (after elimination), then (A − λᵢI) has three independent vectors spanning its nullspace, i.e., three independent solutions of (A − λᵢI)v̄ᵢ = 0. Hence GM = 3 and A is diagonalizable.
Ex1:
\[
A = \begin{bmatrix} 1 & 1 & 0 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{bmatrix}
\;\Longrightarrow\; |A - \lambda I| = 0 \;\Longrightarrow\; (1 - \lambda)^3 = 0
\;\Longrightarrow\; \lambda = 1, 1, 1
\]
Thus AM = 3.
\[
A' = (A - \lambda I) = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{bmatrix},
\qquad
A'v = 0 \;\Longrightarrow\;
\begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{bmatrix}
\begin{bmatrix} v_1 \\ v_2 \\ v_3 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}
\]
The second and third columns are pivot columns; only the first column is free.
\[
N(A') = c \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix},
\qquad
GM = \dim(N(A')) = 1
\]
Ex2:
\[
B = \begin{bmatrix} 1 & 1 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}
\;\Longrightarrow\; |B - \lambda I| = 0 \;\Longrightarrow\; \lambda = 1, 1, 1 \;\Longrightarrow\; AM = 3
\]
\[
B' = B - I = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}
\;\Longrightarrow\; B'\bar v = 0
\;\Longrightarrow\;
\bar v_1 = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix},\;
\bar v_2 = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}
\]
GM = dim(N(B′)) = 2
Ex3:
\[
C = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}
\;\Longrightarrow\; \lambda = 1, 1, 1 \;\Longrightarrow\; AM = 3
\]
\[
C' = C - I = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}
\;\Longrightarrow\;
N(C') = c_1 \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}
+ c_2 \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}
+ c_3 \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}
\]
GM = dim(N(C′)) = 3
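For all three examples, GM can be computed as n − rank(M − λI); a numpy sketch (illustrative, not part of the original notes, with Ex3's C taken as the identity matrix):

```python
import numpy as np

n = 3
I = np.eye(n)

# Matrices from Ex1-Ex3 above; all have lambda = 1 with AM = 3.
A = np.array([[1, 1, 0], [0, 1, 1], [0, 0, 1]], dtype=float)
B = np.array([[1, 1, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
C = np.eye(n)

# GM = dim N(M - I) = n - rank(M - I)
for name, M in [("A", A), ("B", B), ("C", C)]:
    print(name, "GM =", n - np.linalg.matrix_rank(M - I))

assert n - np.linalg.matrix_rank(A - I) == 1
assert n - np.linalg.matrix_rank(B - I) == 2
assert n - np.linalg.matrix_rank(C - I) == 3
```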
• For each eigen value, find the eigen vector from (A − λᵢI)x̄ᵢ = 0.
• The set of all vectors x̄ᵢ that satisfy (A − λᵢI)x̄ᵢ = 0 forms a vector space E(λᵢ, A), called the eigen space associated with λᵢ.
? Thus the eigen space of A corresponding to the eigen value λ is simply the solution space of (A − λI)x̄ = 0, i.e., the nullspace of (A − λI).
\[
A = \begin{bmatrix} 0 & 0 & -2 \\ 1 & 2 & 1 \\ 1 & 0 & 3 \end{bmatrix}
\]
The eigen values are λ = 1, 2, 2.
\[
\lambda_1 = 1,\quad (A - I)\bar x_1 = 0 \;\Longrightarrow\;
\begin{bmatrix} -1 & 0 & -2 \\ 1 & 1 & 1 \\ 1 & 0 & 2 \end{bmatrix} \bar x_1 = 0
\]
x̄₁ lies in the nullspace of (A − I). Elimination gives
\[
\begin{bmatrix} -1 & 0 & -2 \\ 1 & 1 & 1 \\ 1 & 0 & 2 \end{bmatrix}
\;\rightarrow\;
\begin{bmatrix} -1 & 0 & -2 \\ 0 & 1 & -1 \\ 0 & 0 & 0 \end{bmatrix}
\]
with one free column (the third), so
\[
\bar x_1 = \begin{bmatrix} -2 \\ 1 \\ 1 \end{bmatrix},
\qquad
N(A - I) = c \begin{bmatrix} -2 \\ 1 \\ 1 \end{bmatrix},
\qquad
\dim(N(A - I)) = 1
\]
The eigen space of A corresponding to λ = 1 is one-dimensional: E(1, A) = c [−2, 1, 1]ᵀ.
λ₂,₃ = 2 ⟹ (A − 2I)x̄ = 0
\[
(A - 2I) = \begin{bmatrix} -2 & 0 & -2 \\ 1 & 0 & 1 \\ 1 & 0 & 1 \end{bmatrix}
\;\rightarrow\;
\begin{bmatrix} -2 & 0 & -2 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}
\]
with two free columns (x₂ and x₃), so
\[
N(A - 2I) = c_1 \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}
+ c_2 \begin{bmatrix} -1 \\ 0 \\ 1 \end{bmatrix}
\;\Longrightarrow\; \dim(N(A - 2I)) = 2
\]
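The eigen space can be computed exactly as a nullspace with sympy (an illustrative sketch; sympy is an assumed tool, not part of the notes):

```python
from sympy import Matrix, eye

A = Matrix([[0, 0, -2],
            [1, 2,  1],
            [1, 0,  3]])

# Eigen space for lambda = 2 is the nullspace of (A - 2I).
basis = (A - 2 * eye(3)).nullspace()
print(len(basis))  # 2 -> E(2, A) is two-dimensional

for v in basis:
    # Each basis vector really is an eigen vector for lambda = 2.
    assert A * v == 2 * v
assert len(basis) == 2
```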
Similar Matrices
A matrix B is said to be similar to A if there exists an invertible matrix P such that
B = P⁻¹AP
⟹ A = PBP⁻¹, and writing Q = P⁻¹, A = Q⁻¹BQ, so A is similar to B as well.
• Similar matrices have the same trace, characteristic polynomial and eigen values (with the same multiplicities), and their eigen spaces for a given eigen value have the same dimension.
• Diagonalization is the special case B = P⁻¹AP where B is diagonal.
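These shared invariants are easy to check numerically (a numpy sketch with an arbitrary choice of P, not part of the original notes):

```python
import numpy as np

A = np.array([[3.0, 2.0],
              [1.0, 4.0]])
# Any invertible P defines a similar matrix; this P is an arbitrary choice.
P = np.array([[1.0, 1.0],
              [0.0, 1.0]])

B = np.linalg.inv(P) @ A @ P  # B is similar to A

# Shared invariants: trace, determinant, eigen values.
assert np.isclose(np.trace(B), np.trace(A))
assert np.isclose(np.linalg.det(B), np.linalg.det(A))
assert np.allclose(np.sort(np.linalg.eigvals(B)),
                   np.sort(np.linalg.eigvals(A)))
```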
Symmetric matrix: S = Sᵀ
• Numbers to be stored = n diagonal entries + the triangular part above the diagonal
\[
= n + \big[(n-1) + (n-2) + \cdots + 1 + 0\big] = \frac{n(n+1)}{2}
\]
(the 1st row has n − 1 entries above the diagonal, the 2nd row n − 2, …, the nth row 0).
• The eigen values and eigen vectors of a real symmetric matrix are real.
• The eigen vectors of ‘S’ are orthogonal to each other, which can be used to diagonalize the
matrix ‘S’.
Proof:
Say S is a real symmetric matrix with eigen value λ and eigen vector v, possibly complex.
Thus, Sv = λv  (5)
Since S is real, taking the complex conjugate of Eqn.(5) gives →
S v̄ = λ̄ v̄  (6)
where λ̄ is the complex conjugate of λ and v̄ is the complex conjugate of v.
Taking the transpose of Eqn.(6), using Sᵀ = S →
v̄ᵀS = λ̄ v̄ᵀ  (7)
Multiplying Eqn.(7) on the right by v →
v̄ᵀSv = λ̄ v̄ᵀv  (8)
Taking the dot product of Eqn.(5) with v̄ →
v̄ᵀSv = λ v̄ᵀv  (9)
The left sides of Eqn.(8) and Eqn.(9) are the same, and v̄ᵀv = length squared (real, non-zero).
Hence λ = λ̄, which is only possible when the imaginary part is zero:
a + ib = a − ib ⟹ b = 0
S = QΛQ⁻¹ = QΛQᵀ
• A symmetric matrix S can be factorized as QΛQᵀ, where Λ contains the eigen values as diagonal entries and Q contains the orthonormal eigen vectors of S as columns.
Example:
\[
S = \begin{bmatrix} 1 & 2 \\ 2 & 4 \end{bmatrix}
\;\Longrightarrow\; \lambda_1 + \lambda_2 = 5 \;\text{ and }\; \lambda_1 \lambda_2 = 0
\]
λ² − 5λ + 0 = 0 ⟹ λ = 0, 5
\[
\lambda_1 = 0,\; v_1 = \begin{bmatrix} -2 \\ 1 \end{bmatrix}
\quad\text{and}\quad
\lambda_2 = 5,\; v_2 = \begin{bmatrix} 1 \\ 2 \end{bmatrix}
\]
\[
q_1 = \frac{1}{\sqrt 5}\begin{bmatrix} -2 \\ 1 \end{bmatrix},\quad
q_2 = \frac{1}{\sqrt 5}\begin{bmatrix} 1 \\ 2 \end{bmatrix},\quad
Q = \frac{1}{\sqrt 5}\begin{bmatrix} -2 & 1 \\ 1 & 2 \end{bmatrix}
\]
\[
Q\Lambda Q^T
= \frac{1}{5}\begin{bmatrix} -2 & 1 \\ 1 & 2 \end{bmatrix}
\begin{bmatrix} 0 & 0 \\ 0 & 5 \end{bmatrix}
\begin{bmatrix} -2 & 1 \\ 1 & 2 \end{bmatrix}
= \frac{1}{5}\begin{bmatrix} -2 & 1 \\ 1 & 2 \end{bmatrix}
\begin{bmatrix} 0 & 0 \\ 5 & 10 \end{bmatrix}
= \frac{1}{5}\begin{bmatrix} 5 & 10 \\ 10 & 20 \end{bmatrix}
= \begin{bmatrix} 1 & 2 \\ 2 & 4 \end{bmatrix}
= S
\]
Spectral Theorem
Every symmetric matrix has the factorization S = QΛQᵀ, with real eigen values in Λ and orthonormal eigen vectors in the columns of Q.
• Every symmetric matrix can be written as S = λ₁q₁q₁ᵀ + λ₂q₂q₂ᵀ + ⋯ + λₙqₙqₙᵀ (spectral theorem).
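The spectral sum can be reproduced numerically with the example matrix from above (a numpy sketch, not part of the original notes):

```python
import numpy as np

S = np.array([[1.0, 2.0],
              [2.0, 4.0]])

# eigh is the symmetric eigensolver: real eigenvalues, orthonormal eigenvectors.
w, Q = np.linalg.eigh(S)
assert np.allclose(Q.T @ Q, np.eye(2))  # columns of Q are orthonormal

# Spectral theorem: S = sum_i lambda_i * q_i q_i^T
S_rebuilt = sum(w[i] * np.outer(Q[:, i], Q[:, i]) for i in range(len(w)))
assert np.allclose(S_rebuilt, S)
```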
• For real non-symmetric matrices A, complex eigen values and eigen vectors appear in conjugate pairs.
Example:
\[
A = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}
\]
(cos θ − λ)(cos θ − λ) + sin²θ = 0
⇒ λ² − 2λ cos θ + 1 = 0
\[
\lambda = \frac{2\cos\theta \pm \sqrt{4\cos^2\theta - 4}}{2} = \cos\theta \pm i\sin\theta
\]
⇒ λ₁ = cos θ + i sin θ, λ₂ = cos θ − i sin θ
\[
\begin{bmatrix} -i\sin\theta & -\sin\theta \\ \sin\theta & -i\sin\theta \end{bmatrix} v_1 = 0
\;\Longrightarrow\; v_1 = \begin{bmatrix} 1 \\ -i \end{bmatrix}
\]
\[
\begin{bmatrix} i\sin\theta & -\sin\theta \\ \sin\theta & i\sin\theta \end{bmatrix} v_2 = 0
\;\Longrightarrow\; v_2 = \begin{bmatrix} 1 \\ i \end{bmatrix}
\]
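A numerical check of the conjugate pair for an arbitrary angle (a numpy sketch, not part of the original notes):

```python
import numpy as np

theta = 0.7  # arbitrary rotation angle (radians); any non-degenerate angle works
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

w, V = np.linalg.eig(A)

# Eigenvalues are the conjugate pair cos(theta) +/- i sin(theta), each |lambda| = 1.
assert np.allclose(np.abs(w), 1.0)
expected = [np.cos(theta) + 1j * np.sin(theta),
            np.cos(theta) - 1j * np.sin(theta)]
assert np.allclose(np.sort_complex(w), np.sort_complex(expected))
```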
• As discussed earlier, elimination does not preserve the eigen values. The pivots of U are the eigen values of U, but they are not the eigen values of A.
– The product of the pivots equals the determinant of U as well as the determinant of A.
• In the case of a symmetric matrix S, the pivots and the eigen values have the same signs.
\[
S = \begin{bmatrix} 1 & 3 \\ 3 & 1 \end{bmatrix}
\;\Longrightarrow\; \lambda^2 - 2\lambda - 8 = 0
\;\Longrightarrow\; (\lambda - 4)(\lambda + 2) = 0
\;\Longrightarrow\; \lambda = 4, -2
\]
In general, for a 2 × 2 matrix:
\[
\begin{vmatrix} a - \lambda & b \\ c & d - \lambda \end{vmatrix} = 0
\;\Longrightarrow\; (\lambda - a)(\lambda - d) - bc = 0
\;\Longrightarrow\; \lambda^2 - (a + d)\lambda + (ad - bc) = 0
\;\Longrightarrow\; \lambda^2 - \mathrm{tr}(A)\lambda + \det(A) = 0
\]
Elimination gives
\[
U = \begin{bmatrix} 1 & 3 \\ 0 & -8 \end{bmatrix}
\;\Longrightarrow\; \text{pivots } 1, -8
\]
S has one positive and one negative eigen value, and one positive and one negative pivot.
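The sign agreement between pivots and eigen values can be checked directly (a numpy sketch, not part of the original notes):

```python
import numpy as np

S = np.array([[1.0, 3.0],
              [3.0, 1.0]])

# One elimination step: row2 <- row2 - 3 * row1, giving pivots 1 and -8.
U = S.copy()
U[1] = U[1] - (U[1, 0] / U[0, 0]) * U[0]
pivots = np.diag(U)
print(pivots)  # [ 1. -8.]

w = np.linalg.eigvalsh(S)  # eigenvalues -2 and 4

# For a symmetric matrix, the signs of the pivots match the signs of the eigenvalues.
assert np.sum(pivots > 0) == np.sum(w > 0) == 1
assert np.sum(pivots < 0) == np.sum(w < 0) == 1
```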
– Repeated eigen values in A sometimes yield a shortage of independent eigen vectors.
– But this is not the case with symmetric matrices (S = Sᵀ): there are always enough eigen vectors to diagonalize.
• If S has repeated eigen values, one can still always find n orthogonal eigen vectors, which diagonalize S.
S = QΛQᵀ or, Λ = QᵀSQ
Proof:
To prove that S can always be diagonalized, we take the help of Schur's theorem. Schur's theorem says that any square matrix A can be 'triangularized' as →
A = QTQ⁻¹, where T is upper triangular and Q is orthogonal (unitary in the complex case).
Positive Definite Matrices
A symmetric matrix S is positive definite when all of its eigen values are positive:
λᵢ > 0
? To check whether all the eigen values are positive, one can simply look at the signs of the pivots (applicable only for symmetric matrices) →
\[
S = \begin{bmatrix} 2 & 2 & 1 \\ 2 & 3 & 1 \\ 1 & 1 & 2 \end{bmatrix}
\;\longrightarrow\;
U = \begin{bmatrix} 2 & 2 & 1 \\ 0 & 1 & 0 \\ 0 & 0 & 3/2 \end{bmatrix}
\]
? All pivots are positive ⇒ hence, the eigen values are also positive.
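A numerical check for this matrix (a numpy sketch, not part of the original notes; the Cholesky test is a standard alternative to computing pivots by hand):

```python
import numpy as np

S = np.array([[2.0, 2.0, 1.0],
              [2.0, 3.0, 1.0],
              [1.0, 1.0, 2.0]])

# Positive definite <=> all eigenvalues positive.
w = np.linalg.eigvalsh(S)
assert np.all(w > 0)

# Cholesky succeeds exactly when a symmetric matrix is positive definite,
# so it doubles as a convenient numerical test.
L = np.linalg.cholesky(S)
assert np.allclose(L @ L.T, S)
```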
? In many applications, the requirement of positive energy gives rise to the other definition of a positive definite matrix: xᵀSx > 0 for every non-zero x.
Example:
½ xᵀKx = strain energy stored in a deformed body, which is always positive for any non-zero displacement. Hence K is positive definite if x represents deformation and does not incorporate rigid body motion; otherwise it is positive semi-definite.
Similarly, kinetic energy = ½ vᵀMv > 0 for non-zero velocity v. Hence, M is always positive definite.
? If S and T are symmetric and positive definite, then (S + T) is also symmetric positive definite.
– xᵀ(S + T)x = xᵀSx + xᵀTx > 0
– S = AᵀA is symmetric, since (AᵀA)ᵀ = AᵀA.
– xᵀ(AᵀA)x = (Ax)ᵀ(Ax) = ‖Ax‖² ≥ 0, with equality only when Ax = 0; so AᵀA is positive definite whenever the columns of A are independent.
? xᵀSx = 1 is the equation of an ellipse whose major and minor axes are along the eigen vectors of S.
– In the rotated system, the principal axes of the ellipse are oriented along the coordinate directions.
Example:
\[
S = \begin{bmatrix} 5 & 4 \\ 4 & 5 \end{bmatrix}
\;\Rightarrow\; x^T S x = 1
\;\Rightarrow\;
\begin{bmatrix} x & y \end{bmatrix}
\begin{bmatrix} 5x + 4y \\ 4x + 5y \end{bmatrix} = 1
\;\Rightarrow\; 5x^2 + 5y^2 + 8xy = 1
\]
λ₁ + λ₂ = 10, λ₁λ₂ = 9 ⟹ λ = 1, 9
\[
\lambda_1 = 1 \;\longrightarrow\; v_1 = \begin{bmatrix} 1 \\ -1 \end{bmatrix}
\;\longrightarrow\; q_1 = \frac{1}{\sqrt 2}\begin{bmatrix} 1 \\ -1 \end{bmatrix}
\]
\[
\lambda_2 = 9 \;\longrightarrow\; v_2 = \begin{bmatrix} 1 \\ 1 \end{bmatrix}
\;\longrightarrow\; q_2 = \frac{1}{\sqrt 2}\begin{bmatrix} 1 \\ 1 \end{bmatrix}
\]
\[
S = Q\Lambda Q^T
= \frac{1}{2}\begin{bmatrix} 1 & 1 \\ -1 & 1 \end{bmatrix}
\begin{bmatrix} 1 & 0 \\ 0 & 9 \end{bmatrix}
\begin{bmatrix} 1 & -1 \\ 1 & 1 \end{bmatrix}
= \frac{1}{2}\begin{bmatrix} 1 & 1 \\ -1 & 1 \end{bmatrix}
\begin{bmatrix} 1 & -1 \\ 9 & 9 \end{bmatrix}
= \frac{1}{2}\begin{bmatrix} 10 & 8 \\ 8 & 10 \end{bmatrix}
= \begin{bmatrix} 5 & 4 \\ 4 & 5 \end{bmatrix}
\]
Now,
\[
Q^T x = \frac{1}{\sqrt 2}\begin{bmatrix} 1 & -1 \\ 1 & 1 \end{bmatrix}
\begin{bmatrix} x \\ y \end{bmatrix}
= \frac{1}{\sqrt 2}\begin{bmatrix} x - y \\ x + y \end{bmatrix}
\]
\[
(Q^T x)^T \Lambda (Q^T x)
= \frac{1}{2}\begin{bmatrix} x - y & x + y \end{bmatrix}
\begin{bmatrix} 1 & 0 \\ 0 & 9 \end{bmatrix}
\begin{bmatrix} x - y \\ x + y \end{bmatrix}
= \frac{1}{2}\left\{(x - y)^2 + 9(x + y)^2\right\}
\]
\[
= \left(\frac{x - y}{\sqrt 2}\right)^2 + 9\left(\frac{x + y}{\sqrt 2}\right)^2
= X^2 + 9Y^2
\]
so in the rotated coordinates X = (x − y)/√2, Y = (x + y)/√2, the ellipse becomes X² + 9Y² = 1.
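The change of variables can be verified at random sample points (a numpy sketch, not part of the original notes):

```python
import numpy as np

# Check that 5x^2 + 5y^2 + 8xy equals X^2 + 9Y^2 in the rotated coordinates.
rng = np.random.default_rng(0)
for x, y in rng.standard_normal((100, 2)):
    quad = 5 * x**2 + 5 * y**2 + 8 * x * y  # x^T S x
    X = (x - y) / np.sqrt(2)                # coordinate along q1
    Y = (x + y) / np.sqrt(2)                # coordinate along q2
    assert np.isclose(quad, X**2 + 9 * Y**2)
```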