MODULE-5
Determinants, Eigenvalues and Eigenvectors
Faculty Handling:
1. Mrs. Anjani, Asst. Professor, Dept. of ECE, MITE, Moodabidri
2. Mrs. Laxmi G, Senior Asst. Professor, Dept. of ECE, MITE, Moodabidri
Course Name: Engineering Statistics and Linear Algebra (ESLA)
OUTLINE
• The Determinant of a Matrix
• Evaluation of a Determinant using Elementary Row Operations
• Properties of Determinants
• Application of Determinants: Cramer’s Rule
• Class Exercise
• Eigenvalues and Eigenvectors
• Diagonalization
• Symmetric Matrices and Orthogonal Diagonalization
• Application of Eigenvalues and Eigenvectors
• Principal Component Analysis
[As per Choice Based Credit System (CBCS) scheme]
SEMESTER – IV
COURSE OUTCOMES (CO)
CO.1 Identify and associate random variables and distribution functions associated with communication events.
CO.2 Understand two-variable expectations, transformations, and joint probabilities for multiple random variables, with application exercises on Chi-square, Student-t, Cauchy, and Rayleigh random variables.
CO.3 Interpret random processes and correlation functions, and analyze the effect of noise.
CO.4 Understand the concepts of vector spaces, linear independence, basis and dimension, and orthogonality, and apply these concepts to various vector spaces and subspaces.
※ The determinant is NOT a matrix operation
※ The determinant is a kind of information extracted from a square matrix to reflect some characteristics of that square matrix
※ For example, this chapter will show that matrices with a zero determinant have very different characteristics from those with nonzero determinants
※ The motive for calculating determinants is to identify the characteristics of matrices and thus facilitate comparison between matrices, since it is impractical to investigate or compare matrices entry by entry
※ A similar idea is to compare groups of numbers through the calculation of averages and standard deviations
※ Not only the determinant but also the eigenvalues and eigenvectors are information that can be used to identify the characteristics of square matrices
The Determinant of a Matrix
• The determinant of a 2 × 2 matrix:
  A = [a11 a12; a21 a22]
  det(A) = |A| = a11a22 – a21a12
Note:
1. For every SQUARE matrix, there is a real number associated with the matrix, called its determinant
2. It is common practice to omit the matrix brackets and write
  |a11 a12; a21 a22| for det([a11 a12; a21 a22])
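A quick numerical check of the 2 × 2 formula (a sketch using NumPy; the matrix entries are arbitrary illustrative values, not from the slides):

```python
import numpy as np

# det of a 2x2 matrix: a11*a22 - a21*a12
A = np.array([[3.0, 1.0],
              [4.0, 2.0]])
by_formula = A[0, 0] * A[1, 1] - A[1, 0] * A[0, 1]   # 3*2 - 4*1 = 2
by_numpy = np.linalg.det(A)
print(by_formula, round(by_numpy, 10))
```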
• Historically speaking, the use of determinants arose from the recognition of
special patterns that occur in the solutions of linear systems:
  a11x1 + a12x2 = b1
  a21x1 + a22x2 = b2
  x1 = (b1a22 – b2a12) / (a11a22 – a21a12) and x2 = (b2a11 – b1a21) / (a11a22 – a21a12)
Note:
1. x1 and x2 have the same denominator, and this quantity is called the determinant
of the coefficient matrix A
2. There is a unique solution if a11a22 – a21a12 = |A| ≠ 0
• Ex. 1: The determinant of a matrix of order 2
  |2 –3; 1 2| = 2(2) – 1(–3) = 4 + 3 = 7
  |2 1; 4 2| = 2(2) – 4(1) = 4 – 4 = 0
  |0 3/2; 2 4| = 0(4) – 2(3/2) = 0 – 3 = –3
• Minor of the entry aij: the determinant Mij of the matrix obtained by deleting the i-th row and j-th column of A
• Cofactor of aij:
  Cij = (–1)^(i+j) Mij   ※ Cij is also a real number
• Ex:
  A = [a11 a12 a13; a21 a22 a23; a31 a32 a33]
  M21 = |a12 a13; a32 a33|,  M22 = |a11 a13; a31 a33|
Notes: Sign pattern for cofactors. Odd positions (where i+j is
odd) have negative signs, and even positions (where i+j is even)
have positive signs. (Positive and negative signs appear
alternately at neighboring positions.)
• Theorem 3.1: Expansion by cofactors
Let A be a square matrix of order n. Then the determinant of A is given by
(a) det(A) = |A| = Σ_{j=1}^{n} aij Cij = ai1Ci1 + ai2Ci2 + … + ainCin (expansion along the i-th row)
or
(b) det(A) = |A| = Σ_{i=1}^{n} aij Cij = a1jC1j + a2jC2j + … + anjCnj (expansion along the j-th column)
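Theorem 3.1 translates directly into a recursive procedure. A minimal sketch (expanding always along the first row; the 3 × 3 test matrix is an arbitrary example):

```python
import numpy as np

def det_cofactor(A):
    """Determinant by cofactor expansion along the first row (Theorem 3.1)."""
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for j in range(n):
        # minor M_1j: delete row 1 and column j+1
        minor = np.delete(np.delete(A, 0, axis=0), j, axis=1)
        total += (-1) ** j * A[0, j] * det_cofactor(minor)   # a_1j * C_1j
    return total

A = np.array([[0.0, 2, 1], [3, -1, 2], [4, 0, 1]])
print(det_cofactor(A))
```

This costs O(n!) operations, which motivates the row-reduction method of Sec. 3.2.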
• Ex 3: The determinant of a square matrix of order 3
  A = [0 2 1; 3 –1 2; 4 0 1], det(A) = ?
Sol:
  C11 = (–1)^(1+1) |–1 2; 0 1| = –1,  C12 = (–1)^(1+2) |3 2; 4 1| = (–1)(–5) = 5,
  C13 = (–1)^(1+3) |3 –1; 4 0| = 4
  det(A) = a11C11 + a12C12 + a13C13 = (0)(–1) + (2)(5) + (1)(4) = 14
• Alternative way to calculate the determinant of a square matrix of order 3: copy the first two columns to the right of A; add the products of the three downward diagonals and subtract the products of the three upward diagonals
• Ex: Recalculate the determinant of the square matrix A in Ex 3
  0  2  1 | 0  2
  3 –1  2 | 3 –1
  4  0  1 | 4  0
  Downward products: 0, 16, 0; upward products: –4, 0, 6
  det(A) = |A| = 0 + 16 + 0 – (–4) – 0 – 6 = 14
• Ex 4: The determinant of a square matrix of order 4
  A = [1 –2 3 0; –1 1 0 2; 0 2 0 3; 3 4 0 –2], det(A) = ?
Sol: Expand along the third column, which has the most zeros:
  det(A) = (3)(C13) + (0)(C23) + (0)(C33) + (0)(C43) = 3C13
  = 3(–1)^(1+3) |–1 1 2; 0 2 3; 3 4 –2|
  = 3[(0)(–1)^(2+1) |1 2; 4 –2| + (2)(–1)^(2+2) |–1 2; 3 –2| + (3)(–1)^(2+3) |–1 1; 3 4|]
  = 3[0 + (2)(1)(–4) + (3)(–1)(–7)]
  = (3)(13)
  = 39
※ By comparing Ex 4 with Ex 3, it is apparent that the computational effort for the determinant of 4×4 matrices is much higher than that of 3×3 matrices. In the next section, we will learn a more efficient way to calculate the determinant
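A numerical cross-check of Ex 4 (a sketch; the minus signs of the matrix entries are reconstructed from the cofactor arithmetic in the example, so treat them as an assumption):

```python
import numpy as np

# The 4x4 matrix of Ex 4; expanding along its column of zeros gave det = 39
A = np.array([[1.0, -2, 3, 0],
              [-1, 1, 0, 2],
              [0, 2, 0, 3],
              [3, 4, 0, -2]])
print(round(np.linalg.det(A)))
```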
• Upper triangular matrix: all entries below the main diagonal are zeros
• Lower triangular matrix: all entries above the main diagonal are zeros
• Diagonal matrix: all entries above and below the main diagonal are zeros
• Theorem 3.2: The determinant of a triangular matrix A of order n is the product of its main diagonal entries, i.e., det(A) = |A| = a11a22a33…ann
Pf: by mathematical induction. Suppose the theorem is true for any upper triangular matrix U of order n–1, i.e.,
  |U| = a11a22a33…a(n–1)(n–1)
Then consider the determinant of an upper triangular matrix A of order n by the cofactor expansion along the n-th row: only ann can be nonzero, so
  |A| = annCnn = ann(–1)^(n+n)|U| = a11a22…a(n–1)(n–1)ann
Ex 6: Find the determinants of the following triangular matrices
  (a) A = [2 0 0 0; 4 –2 0 0; –5 6 1 0; 1 5 3 3]
  (b) B = [–1 0 0 0 0; 0 3 0 0 0; 0 0 2 0 0; 0 0 0 4 0; 0 0 0 0 –2]
Sol:
  (a) |A| = (2)(–2)(1)(3) = –12
  (b) |B| = (–1)(3)(2)(4)(–2) = 48
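A sketch checking Theorem 3.2 numerically on a lower triangular example (entries as reconstructed in (a); treat the signs as an assumption):

```python
import numpy as np

# For a triangular matrix the determinant is the product of the diagonal entries
A = np.array([[2.0, 0, 0, 0],
              [4, -2, 0, 0],
              [-5, 6, 1, 0],
              [1, 5, 3, 3]])
print(np.prod(np.diag(A)), round(np.linalg.det(A)))
```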
Keywords :
determinant
minor
cofactor
expansion by cofactors
upper triangular matrix
lower triangular matrix
diagonal matrix
3.2 Evaluation of a Determinant Using Elementary Row Operations
The computational effort to calculate the determinant of a square matrix grows very quickly with its order n. In this section, we show how to reduce the computational effort by using elementary row operations
• Theorem 3.3: Elementary row operations and determinants. Let A be a square matrix:
(a) B = I_{i,j}(A) (interchange rows i and j) ⟹ det(B) = –det(A)
(b) B = M_i^{(k)}(A) (multiply row i by k ≠ 0) ⟹ det(B) = k det(A)
(c) B = A_{i,j}^{(k)}(A) (add k times row i to row j) ⟹ det(B) = det(A) (by combining Thm. 3.4 and 3.5)
Notes: The above three properties remain valid if elementary column operations are performed to derive column-equivalent matrices (These results will be used in Ex 5 on Slide 3.25)
Ex:
  A = [1 2 3; 0 1 4; 1 2 1], det(A) = –2
  A1 = M_1^{(4)}(A) = [4 8 12; 0 1 4; 1 2 1], det(A1) = 4 det(A) = (4)(–2) = –8
  A2 = I_{1,2}(A) = [0 1 4; 1 2 3; 1 2 1], det(A2) = –det(A) = –(–2) = 2
  A3 = A_{1,2}^{(–2)}(A) = [1 2 3; –2 –3 –2; 1 2 1], det(A3) = det(A) = –2
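The three effects of Theorem 3.3 can be verified numerically; a minimal sketch on the matrix of the example above:

```python
import numpy as np

A = np.array([[1.0, 2, 3], [0, 1, 4], [1, 2, 1]])
det_A = np.linalg.det(A)                      # -2

A1 = A.copy(); A1[0] *= 4                     # multiply row 1 by 4
A2 = A.copy(); A2[[0, 1]] = A2[[1, 0]]        # interchange rows 1 and 2
A3 = A.copy(); A3[1] += -2 * A3[0]            # add -2 times row 1 to row 2

print(round(det_A), round(np.linalg.det(A1)),
      round(np.linalg.det(A2)), round(np.linalg.det(A3)))
```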
Row reduction method to evaluate the determinant
1. A row-echelon form of a square matrix is either an upper triangular matrix or a matrix
with zero rows
2. It is easy to calculate the determinant of an upper triangular matrix (by Theorem 3.2) or a
matrix with zero rows (det = 0)
Ex: Evaluating a determinant using elementary row operations
  A = [2 –3 10; 1 2 –2; 0 1 –3], det(A) = ?
Sol:
  det(A) = |2 –3 10; 1 2 –2; 0 1 –3|
  = –|1 2 –2; 2 –3 10; 0 1 –3|  (I_{1,2}: interchanging rows 1 and 2 changes the sign)
  = –|1 2 –2; 0 –7 14; 0 1 –3|  (A_{1,2}^{(–2)}: adding –2 times row 1 to row 2 leaves det unchanged)
  = –(–7)|1 2 –2; 0 1 –2; 0 1 –3|  (factoring –7 out of row 2, since [0 –7 14] = –7[0 1 –2])
  = 7|1 2 –2; 0 1 –2; 0 0 –1|  (A_{2,3}^{(–1)}: adding –1 times row 2 to row 3)
  = 7(1)(1)(–1) = –7  (by Theorem 3.2, since the matrix is now upper triangular)
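A one-line numerical cross-check of the row-reduction example:

```python
import numpy as np

# det of the 3x3 matrix reduced to triangular form above
A = np.array([[2.0, -3, 10], [1, 2, -2], [0, 1, -3]])
print(round(np.linalg.det(A)))
```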
Comparison between the number of required operations for the two methods to calculate the determinant:

Order n | Cofactor expansion (additions, multiplications) | Row reduction (additions, multiplications)
   3    |                 5, 9                            |                 5, 10
   5    |               119, 205                         |                30, 45
• Ex: Evaluating a determinant using a column operation
  A = [–3 5 2; 2 –4 –1; –3 0 6]
Sol:
  det(A) = |–3 5 2; 2 –4 –1; –3 0 6|
  = |–3 5 –4; 2 –4 3; –3 0 0|  (AC_{1,3}^{(2)}: add 2 times column 1 to column 3)
  (expand along the third row)
  = (–3)(–1)^(3+1) |5 –4; –4 3| = (–3)(15 – 16) = 3
※ AC_{i,j}^{(k)} is the counterpart column operation to the row operation A_{i,j}^{(k)}
• Ex 6: Evaluating a determinant using both row and column reductions and cofactor expansion
  A = [2 0 1 3 –2; –2 1 3 2 –1; 1 0 –1 2 3; 3 –1 2 4 –3; 1 1 3 2 0]
Sol: First create zeros in the second column with A_{2,4}^{(1)} (add row 2 to row 4) and A_{2,5}^{(–1)} (add –1 times row 2 to row 5):
  det(A) = |2 0 1 3 –2; –2 1 3 2 –1; 1 0 –1 2 3; 1 0 5 6 –4; 3 0 0 0 1|
  (expand along the second column)
  = (1)(–1)^(2+2) |2 1 3 –2; 1 –1 2 3; 1 5 6 –4; 3 0 0 1|
Next add –3 times column 4 to column 1 (AC_{4,1}^{(–3)}) so the last row has a single nonzero entry:
  = |8 1 3 –2; –8 –1 2 3; 13 5 6 –4; 0 0 0 1|
  (expand along the fourth row)
  = (1)(–1)^(4+4) |8 1 3; –8 –1 2; 13 5 6|
Adding row 1 to row 2 (A_{1,2}^{(1)}) gives
  = |8 1 3; 0 0 5; 13 5 6|
  (expand along the second row)
  = (5)(–1)^(2+3) |8 1; 13 5| = (–5)(27) = –135
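A numerical cross-check of Ex 6 (a sketch; the minus signs of the entries are reconstructed from the cofactor chain, so treat them as an assumption):

```python
import numpy as np

# The 5x5 matrix of Ex 6
A = np.array([[2.0, 0, 1, 3, -2],
              [-2, 1, 3, 2, -1],
              [1, 0, -1, 2, 3],
              [3, -1, 2, 4, -3],
              [1, 1, 3, 2, 0]])
print(round(np.linalg.det(A)))
```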
• Theorem 3.4: Conditions that yield a zero determinant
If A is a square matrix and any of the following conditions is true, then det(A) = 0
(a) An entire row (or an entire column) consists of zeros
(b) Two rows (or two columns) are equal
(c) One row (or column) is a multiple of another row (or column)
Notes: For conditions (b) or (c), you can also use elementary row or column operations to create an entire row or column of zeros and obtain the result by Theorem 3.3
※ Thus, we can conclude that a square matrix has a determinant of zero if and only if it is row- (or column-) equivalent to a matrix that has at least one row (or column) consisting entirely of zeros
Ex:
  |1 2 3; 0 0 0; 4 5 6| = 0 (zero row)   |1 4 0; 2 5 0; 3 6 0| = 0 (zero column)   |1 1 1; 2 2 2; 4 5 6| = 0 (row 2 is twice row 1)
  |1 4 2; 1 5 2; 1 6 2| = 0 (column 3 is twice column 1)   |1 2 3; 4 5 6; 2 4 6| = 0 (row 3 is twice row 1)   |1 8 4; 2 10 5; 3 12 6| = 0 (column 2 is twice column 3)
3.3 Properties of Determinants
Theorem 3.5: Determinant of a matrix product
  det(AB) = det(A) det(B)
• Ex:
  A = [1 –2 2; 0 3 2; 1 0 1], B = [2 0 1; 0 –1 –2; 3 1 –2]
Sol:
  |A| = |1 –2 2; 0 3 2; 1 0 1| = –7,  |B| = |2 0 1; 0 –1 –2; 3 1 –2| = 11
  AB = [8 4 1; 6 –1 –10; 5 1 –1]
  |AB| = |8 4 1; 6 –1 –10; 5 1 –1| = –77
Check: |AB| = |A||B| = (–7)(11) = –77
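The product rule |AB| = |A||B| can be confirmed on the 3 × 3 pair above:

```python
import numpy as np

A = np.array([[1.0, -2, 2], [0, 3, 2], [1, 0, 1]])
B = np.array([[2.0, 0, 1], [0, -1, -2], [3, 1, -2]])
print(round(np.linalg.det(A)), round(np.linalg.det(B)),
      round(np.linalg.det(A @ B)))
```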
Ex: Is |A| = |B| + |C|? Here B and C differ from A only in the second row, and row 2 of A is the sum of row 2 of B and row 2 of C:
  A = [1 –2 2; 0 3 2; 1 0 1], B = [1 –2 2; 1 1 2; 1 0 1], C = [1 –2 2; –1 2 0; 1 0 1]
Pf: expanding each determinant along the second row,
  |A| = 0(–1)^(2+1)|–2 2; 0 1| + 3(–1)^(2+2)|1 2; 1 1| + 2(–1)^(2+3)|1 –2; 1 0| = –7
  |B| = 1(–1)^(2+1)|–2 2; 0 1| + 1(–1)^(2+2)|1 2; 1 1| + 2(–1)^(2+3)|1 –2; 1 0| = –3
  |C| = (–1)(–1)^(2+1)|–2 2; 0 1| + 2(–1)^(2+2)|1 2; 1 1| + 0(–1)^(2+3)|1 –2; 1 0| = –4
  So |A| = |B| + |C| = (–3) + (–4) = –7
Theorem 3.6: Determinant of a scalar multiple of a matrix
  det(cA) = c^n det(A)
  (can be proven by repeatedly using the fact that if B = M_i^{(k)}(A), then |B| = k|A|)
• Ex 2:
  A = [10 –20 40; 30 0 50; –20 –30 10]; if |1 –2 4; 3 0 5; –2 –3 1| = 5, find |A|
Sol:
  A = 10 [1 –2 4; 3 0 5; –2 –3 1]
  ⟹ |A| = 10³ |1 –2 4; 3 0 5; –2 –3 1| = (1000)(5) = 5000
Theorem 3.7: (Determinant of an invertible matrix)
  A square matrix A is invertible (nonsingular) if and only if det(A) ≠ 0
Pf:
(⟹) If A is invertible, then AA⁻¹ = I. By Theorem 3.5, |A||A⁻¹| = |I|. Since |I| = 1, neither |A| nor |A⁻¹| is zero
(⟸) Suppose |A| is nonzero. We aim to prove A is invertible. By Gauss-Jordan elimination, we can always find a matrix B, in reduced row-echelon form, that is row-equivalent to A
1. If B has at least one row of all zeros, then |B| = 0 and thus |A| = 0 since |Ek|…|E2||E1||A| = |B|, which contradicts |A| ≠ 0
2. Otherwise B = I, so A is row-equivalent to I, and by Theorem 2.15 (Slide 2.59) it can be concluded that A is invertible
• Ex 3: Classifying square matrices as singular or nonsingular
  A = [0 2 –1; 3 –2 1; 3 2 –1]   B = [0 2 –1; 3 –2 1; 3 2 1]
Sol:
  |A| = 0 ⟹ A is singular (has no inverse)
  |B| = –12 ≠ 0 ⟹ B is nonsingular (has an inverse)
Theorem 3.8: Determinant of an inverse matrix
  If A is invertible, then det(A⁻¹) = 1 / det(A)
  (Since AA⁻¹ = I, |A||A⁻¹| = 1)
Theorem 3.9: Determinant of a transpose
  det(Aᵀ) = det(A)
Equivalent conditions for a nonsingular matrix:
(1) A is invertible
(2) Ax = b has a unique solution for every n × 1 matrix b (Thm. 2.11)
(3) Ax = 0 has only the trivial solution (Thm. 2.11)
(4) A is row-equivalent to In (Thm. 2.14)
• Ex: Which of the following systems has a unique solution?
(a) 2x2 – x3 = –1
    3x1 – 2x2 + x3 = 4
    3x1 + 2x2 – x3 = –4
(b) 2x2 – x3 = –1
    3x1 – 2x2 + x3 = 4
    3x1 + 2x2 + x3 = –4
Sol:
(a) Ax = b (the coefficient matrix is the matrix A in Ex 3)
  |A| = 0 (from Ex 3) ⟹ this system does not have a unique solution
(b) Bx = b (the coefficient matrix is the matrix B in Ex 3)
  |B| = –12 ≠ 0 (from Ex 3) ⟹ this system has a unique solution
3.4 Applications of Determinants
• Matrix of cofactors of A: the n×n matrix [Cij] whose (i, j)-entry is the cofactor Cij of A
• Adjoint matrix of A: adj(A) = [Cij]ᵀ, i.e.,
  adj(A) = [C11 C21 … Cn1; C12 C22 … Cn2; …; C1n C2n … Cnn]
• Theorem 3.10: The inverse of a matrix expressed by its adjoint matrix
  A⁻¹ = (1 / det(A)) adj(A)
Pf: If A is an n×n invertible matrix, then A[adj(A)] = det(A) I: each (i, i)-entry of A[adj(A)] is the cofactor expansion ai1Ci1 + … + ainCin = det(A), and each off-diagonal entry is the cofactor expansion of a matrix with two identical rows, which is 0. Hence A[(1/det(A)) adj(A)] = I, so A⁻¹ = (1/det(A)) adj(A)
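Theorem 3.10 can be sketched in code: build the cofactor matrix entry by entry, transpose it, and divide by the determinant (the 3 × 3 test matrix is the one used in Ex 2 below):

```python
import numpy as np

def adjugate(A):
    """adj(A): transpose of the matrix of cofactors."""
    n = A.shape[0]
    C = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)   # cofactor C_ij
    return C.T

A = np.array([[-1.0, 3, 2], [0, -2, 1], [1, 0, -2]])
inv = adjugate(A) / np.linalg.det(A)          # Theorem 3.10
print(np.allclose(inv, np.linalg.inv(A)))     # True
```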
Ex: For any 2×2 matrix, its inverse can be calculated as follows
  A = [a b; c d], det(A) = ad – bc
  adj(A) = [C11 C21; C12 C22] = [d –b; –c a]
  A⁻¹ = (1/det(A)) adj(A) = 1/(ad – bc) [d –b; –c a]
• Ex 2:
  A = [–1 3 2; 0 –2 1; 1 0 –2]
  (a) Find the adjoint matrix of A
  (b) Use the adjoint matrix of A to find A⁻¹
Sol:
  Cij = (–1)^(i+j) Mij
  C11 = +|–2 1; 0 –2| = 4,  C12 = –|0 1; 1 –2| = 1,  C13 = +|0 –2; 1 0| = 2,
  C21 = –|3 2; 0 –2| = 6,  C22 = +|–1 2; 1 –2| = 0,  C23 = –|–1 3; 1 0| = 3,
  C31 = +|3 2; –2 1| = 7,  C32 = –|–1 2; 0 1| = 1,  C33 = +|–1 3; 0 –2| = 2
  cofactor matrix of A: [Cij] = [4 1 2; 6 0 3; 7 1 2]
  adjoint matrix of A: adj(A) = [Cij]ᵀ = [4 6 7; 1 0 1; 2 3 2]
(b) det(A) = a11C11 + a12C12 + a13C13 = (–1)(4) + (3)(1) + (2)(2) = 3
  A⁻¹ = (1/det(A)) adj(A) = (1/3)[4 6 7; 1 0 1; 2 3 2]
• Cramer's rule
Consider the system Ax = b of n linear equations in n unknowns:
  a11x1 + a12x2 + … + a1nxn = b1
  …
  an1x1 + an2x2 + … + annxn = bn
where A = [aij] is the coefficient matrix, x = [x1 x2 … xn]ᵀ, and b = [b1 b2 … bn]ᵀ. Suppose this system has a unique solution, i.e.,
  det(A) ≠ 0
Then
  xj = det(Aj) / det(A), j = 1, 2, …, n
where Aj is the matrix obtained from A by replacing its j-th column with b
• Pf:
  Ax = b (det(A) ≠ 0)
  x = A⁻¹b = (1/det(A)) adj(A) b  (according to Thm. 3.10)
  = (1/det(A)) [C11 C21 … Cn1; C12 C22 … Cn2; …; C1n C2n … Cnn][b1; b2; …; bn]
Therefore
  xj = (b1C1j + b2C2j + … + bnCnj) / det(A) = det(Aj) / det(A), j = 1, 2, …, n
(the cofactor expansion of det(Aj) along its j-th column is exactly b1C1j + b2C2j + … + bnCnj)
• Ex 4: Use Cramer's rule to solve the system of linear equations
  –x + 2y – 3z = 1
  2x + z = 0
  3x – 4y + 4z = 2
Sol:
  det(A) = |–1 2 –3; 2 0 1; 3 –4 4| = 10,  det(A1) = |1 2 –3; 0 0 1; 2 –4 4| = 8
  det(A2) = |–1 1 –3; 2 0 1; 3 2 4| = –15,  det(A3) = |–1 2 1; 2 0 0; 3 –4 2| = –16
  x = det(A1)/det(A) = 4/5,  y = det(A2)/det(A) = –3/2,  z = det(A3)/det(A) = –8/5
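Cramer's rule is a few lines of code: swap b into each column in turn. A minimal sketch, applied to the system of Ex 4:

```python
import numpy as np

def cramer(A, b):
    """Solve Ax = b by Cramer's rule: x_j = det(A_j) / det(A)."""
    d = np.linalg.det(A)
    x = np.empty(len(b))
    for j in range(len(b)):
        Aj = A.copy()
        Aj[:, j] = b                 # replace column j with b
        x[j] = np.linalg.det(Aj) / d
    return x

A = np.array([[-1.0, 2, -3], [2, 0, 1], [3, -4, 4]])
b = np.array([1.0, 0, 2])
print(cramer(A, b))                  # same answer as np.linalg.solve(A, b)
```

In practice Cramer's rule is used for small systems or symbolic work; Gaussian elimination (`np.linalg.solve`) is far cheaper for large n.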
Keywords
• matrix of cofactors
• adjoint matrix
• Cramer's rule
Class Exercise
Determinants
If A = [a11] is a square matrix of order 1, then |A| = a11
If A = [a11 a12; a21 a22] is a square matrix of order 2, then
  |A| = |a11 a12; a21 a22| = a11a22 – a21a12
a21 a22
Example
4 -3
Evaluate the determinant :
2 5
4 -3
Solution : = 4 × 5 - 2 × -3 = 20 + 6 = 26
2 5
Solution
If A = [a11 a12 a13; a21 a22 a23; a31 a32 a33] is a square matrix of order 3, then (expanding along the first row)
  |A| = a11(a22a33 – a32a23) – a12(a21a33 – a31a23) + a13(a21a32 – a31a22)
      = a11a22a33 + a12a31a23 + a13a21a32 – a11a23a32 – a12a21a33 – a13a31a22
Example
Evaluate the determinant: |2 3 –5; 7 1 –2; –3 4 1|
Solution:
  |2 3 –5; 7 1 –2; –3 4 1| = 2|1 –2; 4 1| – 3|7 –2; –3 1| + (–5)|7 1; –3 4|
  = 2(1 + 8) – 3(7 – 6) – 5(28 + 3)
  = 18 – 3 – 155
  = –140
Minors
If A = [–1 4; 2 3], then
  M11 = Minor of a11 = 3, M12 = Minor of a12 = 2, M21 = Minor of a21 = 4, M22 = Minor of a22 = –1
For A = [4 7 8; –9 0 0; 2 3 4]:
  M23 = Minor of a23 = |4 7; 2 3| = 12 – 14 = –2
  M32 = Minor of a32 = |4 8; –9 0| = 0 + 72 = 72, etc.
Cofactors
  Cij = Cofactor of aij in A = (–1)^(i+j) Mij
For the same matrix A = [4 7 8; –9 0 0; 2 3 4]:
  C11 = (–1)^(1+1) M11 = |0 0; 3 4| = 0
  C23 = (–1)^(2+3) M23 = –|4 7; 2 3| = 2
  C32 = (–1)^(3+2) M32 = –|4 8; –9 0| = –72, etc.
Value of Determinant in Terms of Minors and Cofactors
If A = [a11 a12 a13; a21 a22 a23; a31 a32 a33], then
  |A| = Σ_{j=1}^{3} (–1)^(1+j) a1j M1j = Σ_{j=1}^{3} a1j C1j
Properties of Determinants
1. The value of a determinant remains unchanged if its rows and columns are interchanged:
  |a1 b1 c1; a2 b2 c2; a3 b3 c3| = |a1 a2 a3; b1 b2 b3; c1 c2 c3|, i.e. |A| = |A′|
2. If any two rows (or columns) of a determinant are interchanged, its sign changes:
  |a2 b2 c2; a1 b1 c1; a3 b3 c3| = –|a1 b1 c1; a2 b2 c2; a3 b3 c3|  (Applying R1 ↔ R2)
Properties (Con.)
3. If all the elements of a row (or column) is multiplied by a
non-zero number k, then the value of the new determinant
is k times the value of the original determinant.
4. If each element of a row (or column) is expressed as a sum of two terms, the determinant splits into the sum of two determinants:
  |a1+x b1 c1; a2+y b2 c2; a3+z b3 c3| = |a1 b1 c1; a2 b2 c2; a3 b3 c3| + |x b1 c1; y b2 c2; z b3 c3|
5. The value of a determinant is unchanged if a multiple of one row (or column) is added to another:
  |a1 b1 c1; a2 b2 c2; a3 b3 c3| = |a1+mb1–nc1 b1 c1; a2+mb2–nc2 b2 c2; a3+mb3–nc3 b3 c3|  (Applying C1 → C1 + mC2 – nC3)
Properties (Con.)
6. If any two rows (or columns) of a determinant are
identical, then its value is zero.
  |a1 b1 c1; a2 b2 c2; a1 b1 c1| = 0
7. If all the elements of a row (or column) are zero, then the value of the determinant is zero:
  |0 0 0; a2 b2 c2; a3 b3 c3| = 0
Properties (Con.)
8. Let A = [a 0 0; 0 b 0; 0 0 c] be a diagonal matrix. Then
  |A| = |a 0 0; 0 b 0; 0 0 c| = abc
Row (Column) Operations
Following are the notations used to evaluate a determinant:
• Ri ↔ Rj : interchange of the i-th and j-th rows
• Ri → kRi : multiplication of the i-th row by k
• Ri → Ri + kRj : addition of k times the j-th row to the i-th row
(Ci denotes the corresponding column operations)
For example, if Δ = |x 5 2; x² 9 4; x³ 16 8|, then at x = 2 the first and third columns are identical, so Δ = 0
Sign pattern of cofactors for determinants of order 2 and 3:
  + –      + – +
  – +      – + –
           + – +
Example-1
Find the value of the following determinants:
(i) |42 1 6; 28 7 4; 14 3 2|   (ii) |6 –3 2; 2 –1 2; –10 5 2|
Solution:
(i) |42 1 6; 28 7 4; 14 3 2| = |6×7 1 6; 4×7 7 4; 2×7 3 2|
  = 7 |6 1 6; 4 7 4; 2 3 2|  (Taking out 7 common from C1)
  = 7 × 0 = 0  (C1 and C3 are identical)
(ii) |6 –3 2; 2 –1 2; –10 5 2| = (–2) |–3 –3 2; –1 –1 2; 5 5 2|  (Taking out –2 common from C1)
  = (–2)(0) = 0  (C1 and C2 are identical)
Example-2
Evaluate the determinant |1 a b+c; 1 b c+a; 1 c a+b|
Solution:
  |1 a b+c; 1 b c+a; 1 c a+b| = |1 a a+b+c; 1 b a+b+c; 1 c a+b+c|  (Applying C3 → C2 + C3)
  = (a+b+c) |1 a 1; 1 b 1; 1 c 1|  (Taking a+b+c common from C3)
  = (a+b+c)(0) = 0  (C1 and C3 are identical)
We have |a b c; a² b² c²; bc ca ab|
  = |(a–b) (b–c) c; (a–b)(a+b) (b–c)(b+c) c²; –c(a–b) –a(b–c) ab|  (Applying C1 → C1 – C2 and C2 → C2 – C3)
  = (a–b)(b–c) |1 1 c; a+b b+c c²; –c –a ab|  (Taking a–b and b–c common from C1 and C2 respectively)
Solution Cont.
  = (a–b)(b–c) |0 1 c; –(c–a) b+c c²; –(c–a) –a ab|  (Applying C1 → C1 – C2)
  = –(a–b)(b–c)(c–a) |0 1 c; 1 b+c c²; 1 –a ab|
  = –(a–b)(b–c)(c–a) |0 1 c; 0 a+b+c c²–ab; 1 –a ab|  (Applying R2 → R2 – R3)
Expanding along the first column:
  = –(a–b)(b–c)(c–a) [c² – ab – c(a+b+c)]
  = –(a–b)(b–c)(c–a) [–(ab+bc+ca)] = (a–b)(b–c)(c–a)(ab+bc+ca)
  = x³ |3 2 1; 4 3 3; 5 4 6| + x²y |1 2 1; 3 3 3; 6 4 6|
  = x³ |3 2 1; 4 3 3; 5 4 6| + x²y × 0  (C1 and C3 are identical in the second determinant)
Solution Cont.
  = x³ |3 2 1; 4 3 3; 5 4 6|
  = x³ |1 2 1; 1 3 3; 1 4 6|  (Applying C1 → C1 – C2)
  = x³ |1 2 1; 0 1 2; 0 1 3|  (Applying R2 → R2 – R1 and R3 → R3 – R2)
  = x³ (3 – 2) = x³
L.H.S. = |1 ω³ ω⁵; ω³ 1 ω⁴; ω⁵ ω⁵ 1| = |1 ω³ ω³·ω²; ω³ 1 ω³·ω; ω³·ω² ω³·ω² 1|
Using ω³ = 1:
  = |1 1 ω²; 1 1 ω; ω² ω² 1|
  = 0  (C1 and C2 are identical)
L.H.S. = |x+a b c; a x+b c; a b x+c| = |x+a+b+c b c; x+a+b+c x+b c; x+a+b+c b x+c|  (Applying C1 → C1 + C2 + C3)
  = (x+a+b+c) |1 b c; 1 x+b c; 1 b x+c|  (Taking x+a+b+c common from C1)
Solution cont.
  = (x+a+b+c) |1 b c; 0 x 0; 0 0 x|  (Applying R2 → R2 – R1 and R3 → R3 – R1)
  = (x+a+b+c)(x²) = x²(x+a+b+c)
Solution:
L.H.S. = |b+c c+a a+b; c+a a+b b+c; a+b b+c c+a|
  = 2(a+b+c) |1 1 1; c+a a+b b+c; a+b b+c c+a|  (Applying R1 → R1 + R2 + R3 and taking 2(a+b+c) common from R1)
Solution Cont.
  = 2(a+b+c) |0 0 1; (c–b) (a–c) b+c; (a–c) (b–a) c+a|  (Applying C1 → C1 – C2 and C2 → C2 – C3)
  = 2(a+b+c)[(c–b)(b–a) – (a–c)²]  (expanding along R1)
Prove that |x+4 2x 2x; 2x x+4 2x; 2x 2x x+4| = (5x+4)(4–x)²
Solution:
L.H.S. = |x+4 2x 2x; 2x x+4 2x; 2x 2x x+4| = |5x+4 2x 2x; 5x+4 x+4 2x; 5x+4 2x x+4|  (Applying C1 → C1 + C2 + C3)
  = (5x+4) |1 2x 2x; 1 x+4 2x; 1 2x x+4|
Solution Cont.
  = (5x+4) |1 2x 2x; 0 –(x–4) 0; 0 x–4 –(x–4)|  (Applying R2 → R2 – R1 and R3 → R3 – R2)
  = (5x+4)(x–4)² = (5x+4)(4–x)²
  = R.H.S.
Example-9
Using properties of determinants, prove that
  |x+9 x x; x x+9 x; x x x+9| = 243(x+3)
Solution:
L.H.S. = |x+9 x x; x x+9 x; x x x+9|
  = |3x+9 x x; 3x+9 x+9 x; 3x+9 x x+9|  (Applying C1 → C1 + C2 + C3)
Solution Cont.
  = (3x+9) |1 x x; 1 x+9 x; 1 x x+9|
  = 3(x+3) |1 x x; 0 9 0; 0 –9 9|  (Applying R2 → R2 – R1 and R3 → R3 – R2)
  = 3(x+3)(81) = 243(x+3) = R.H.S.
Solution:
L.H.S. = |(b+c)² a² bc; (c+a)² b² ca; (a+b)² c² ab|
  = |b²+c² a² bc; c²+a² b² ca; a²+b² c² ab|  (Applying C1 → C1 – 2C3)
  = |a²+b²+c² a² bc; a²+b²+c² b² ca; a²+b²+c² c² ab|  (Applying C1 → C1 + C2)
  = (a²+b²+c²) |1 a² bc; 1 b² ca; 1 c² ab|
Solution Cont.
  = (a²+b²+c²) |1 a² bc; 0 (b–a)(b+a) c(a–b); 0 (c–b)(c+b) a(b–c)|  (Applying R2 → R2 – R1 and R3 → R3 – R2)
  = (a²+b²+c²)(a–b)(b–c) |1 a² bc; 0 –(b+a) c; 0 –(b+c) a|
  = (a²+b²+c²)(a–b)(b–c)[–a(a+b) + c(b+c)]
  = (a²+b²+c²)(a–b)(b–c)(c–a)(a+b+c)
Area of a Triangle
The area of a triangle with vertices (x1, y1), (x2, y2), (x3, y3) is
  Δ = (1/2) |x1 y1 1; x2 y2 1; x3 y3 1|
    = (1/2)[x1(y2 – y3) + x2(y3 – y1) + x3(y1 – y2)]
Example
Find the area of a triangle whose
vertices are (-1, 8), (-2, -3) and (3, 2).
Solution:
  Area of triangle = (1/2) |x1 y1 1; x2 y2 1; x3 y3 1| = (1/2) |–1 8 1; –2 –3 1; 3 2 1|
  = (1/2) [–1(–3 – 2) – 8(–2 – 3) + 1(–4 + 9)]
  = (1/2) [5 + 40 + 5] = 25 sq. units
Condition of Collinearity of Three Points
If A(x1, y1), B(x2, y2) and C(x3, y3) are three points, then A, B, C are collinear
⟺ Area of triangle ABC = 0
⟺ (1/2) |x1 y1 1; x2 y2 1; x3 y3 1| = 0 ⟺ |x1 y1 1; x2 y2 1; x3 y3 1| = 0
Example
If the points (x, -2) , (5, 2), (8, 8) are collinear,
find x , using determinants.
Solution: Since the points are collinear,
  |x –2 1; 5 2 1; 8 8 1| = 0
  ⟹ x(2 – 8) + 2(5 – 8) + 1(40 – 16) = 0
  ⟹ –6x – 6 + 24 = 0 ⟹ 6x = 18 ⟹ x = 3
Solution of System of 2 Linear Equations (Cramer's Rule)
Let the system of linear equations be
  a1x + b1y = c1
  a2x + b2y = c2
Then x = D1/D, y = D2/D, provided D ≠ 0,
where D = |a1 b1; a2 b2|, D1 = |c1 b1; c2 b2| and D2 = |a1 c1; a2 c2|
Cramer's Rule (Con.)
Note:
1. If D ≠ 0, the system is consistent and has a unique solution
2. If D = 0 and D1 = D2 = 0, the system has infinitely many solutions
Solution: The system is 2x – 3y = 7, 3x + y = 5
  D = |2 –3; 3 1| = 2 + 9 = 11 ≠ 0
  D1 = |7 –3; 5 1| = 7 + 15 = 22
  D2 = |2 7; 3 5| = 10 – 21 = –11
By Cramer's rule, x = D1/D = 22/11 = 2 and y = D2/D = –11/11 = –1
Solution of System of 3 Linear Equations (Cramer's Rule)
Let the system of linear equations be
  a1x + b1y + c1z = d1 … (i)
  a2x + b2y + c2z = d2 … (ii)
  a3x + b3y + c3z = d3 … (iii)
Then x = D1/D, y = D2/D, z = D3/D, provided D ≠ 0, where
  D = |a1 b1 c1; a2 b2 c2; a3 b3 c3|, D1 = |d1 b1 c1; d2 b2 c2; d3 b3 c3|,
  D2 = |a1 d1 c1; a2 d2 c2; a3 d3 c3| and D3 = |a1 b1 d1; a2 b2 d2; a3 b3 d3|
Cramer's Rule (Con.)
Note: If D ≠ 0, the system is consistent and has a unique solution
Solution: The system is 5x – y + 4z = 5, 2x + 3y + 5z = 2, 5x – 2y + 6z = –1
  D = |5 –1 4; 2 3 5; 5 –2 6| = 5(18 + 10) + 1(12 – 25) + 4(–4 – 15) = 140 – 13 – 76 = 51 ≠ 0
  D1 = |5 –1 4; 2 3 5; –1 –2 6| = 5(18 + 10) + 1(12 + 5) + 4(–4 + 3) = 140 + 17 – 4 = 153
Solution Cont.
  D2 = |5 5 4; 2 2 5; 5 –1 6| = 5(12 + 5) – 5(12 – 25) + 4(–2 – 10) = 85 + 65 – 48 = 102
  D3 = |5 –1 5; 2 3 2; 5 –2 –1| = 5(–3 + 4) + 1(–2 – 10) + 5(–4 – 15) = 5 – 12 – 95 = –102
By Cramer's rule, x = D1/D = 153/51 = 3, y = D2/D = 102/51 = 2
and z = D3/D = –102/51 = –2
Example
Solve the following system of homogeneous linear equations:
  x + y – z = 0, x – 2y + z = 0, 3x + 6y – 5z = 0
Solution:
  We have D = |1 1 –1; 1 –2 1; 3 6 –5| = 1(10 – 6) – 1(–5 – 3) – 1(6 + 6)
  = 4 + 8 – 12 = 0
so the system has infinitely many solutions. Putting z = k, the first two equations become
  x + y = k, x – 2y = –k
Solution (Con.)
By Cramer's rule,
  x = D1/D = |k 1; –k –2| / |1 1; 1 –2| = (–2k + k)/(–2 – 1) = k/3
  y = D2/D = |1 k; 1 –k| / |1 1; 1 –2| = (–k – k)/(–2 – 1) = 2k/3
Hence x = k/3, y = 2k/3, z = k, where k ∈ R
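The parametric family of solutions can be checked numerically; a small sketch (any value of k works — k = 3 is chosen only to keep the components integral):

```python
import numpy as np

# Homogeneous system: x + y - z = 0, x - 2y + z = 0, 3x + 6y - 5z = 0
A = np.array([[1.0, 1, -1], [1, -2, 1], [3, 6, -5]])
k = 3.0
v = np.array([k / 3, 2 * k / 3, k])   # claimed solution (k/3, 2k/3, k)
print(round(np.linalg.det(A), 10), A @ v)
```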
Eigen values and Eigen vectors
Eigenvalues and Eigenvectors
Diagonalization
Symmetric Matrices and Orthogonal Diagonalization
Application of Eigenvalues and Eigenvectors
Principal Component Analysis
Eigenvalues and Eigenvectors
• Eigenvalue problem (one of the most important problems in linear algebra):
If A is an n×n matrix, do there exist nonzero vectors x in Rⁿ such that Ax is a scalar multiple of x?
(The term eigenvalue is from the German word Eigenwert, meaning "proper value")
• Eigenvalue and eigenvector:
  A: an n×n matrix
  λ: a scalar (could be zero)
  x: a nonzero vector in Rⁿ
  Ax = λx, where λ is the eigenvalue and x the eigenvector
※ Geometric interpretation: Ax is parallel to x, i.e., multiplication by A stretches (or flips) an eigenvector by the factor λ without rotating it
• Ex 1: Verifying eigenvalues and eigenvectors
  A = [2 0; 0 –1], x1 = [1; 0], x2 = [0; 1]
  Ax1 = [2 0; 0 –1][1; 0] = [2; 0] = 2[1; 0] = 2x1  (eigenvalue 2, eigenvector x1)
  Ax2 = [2 0; 0 –1][0; 1] = [0; –1] = (–1)[0; 1] = (–1)x2  (eigenvalue –1, eigenvector x2)
※ In fact, for each eigenvalue, there are infinitely many eigenvectors. For λ = 2, [3 0]ᵀ or [5 0]ᵀ are both corresponding eigenvectors. Moreover, ([3 0] + [5 0])ᵀ is still an eigenvector. The proof is in Thm. 7.1.
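Ex 1 can be reproduced with NumPy's eigen-solver (note that `np.linalg.eig` returns eigenvectors as columns of the second output):

```python
import numpy as np

# Verify Ax = lambda * x for the diagonal example of Ex 1
A = np.array([[2.0, 0], [0, -1]])
vals, vecs = np.linalg.eig(A)
print(sorted(vals))                       # eigenvalues -1 and 2
for lam, v in zip(vals, vecs.T):
    print(np.allclose(A @ v, lam * v))    # True for each pair
```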
• Thm. 7.1: The eigenspace corresponding to l of matrix A
If A is an nn matrix with an eigenvalue l, then the set of all eigenvectors of l
together with the zero vector is a subspace of Rn. This subspace is called the
eigenspace of l
Pf:
Let x1 and x2 be eigenvectors corresponding to λ, i.e., Ax1 = λx1 and Ax2 = λx2
(1) A(x1 + x2) = Ax1 + Ax2 = λx1 + λx2 = λ(x1 + x2) (i.e., x1 + x2 is also an eigenvector corresponding to λ)
(2) A(cx1) = c(Ax1) = c(λx1) = λ(cx1) (i.e., cx1 is also an eigenvector corresponding to λ)
Since this set is closed under vector addition and scalar multiplication, it is a subspace of Rⁿ according to Theorem 4.5
• Ex 3: Examples of eigenspaces on the xy-plane
For the matrix A as follows, the corresponding eigenvalues are λ1 = –1 and λ2 = 1:
  A = [–1 0; 0 1]
Sol:
For the eigenvalue λ1 = –1, the corresponding eigenvectors are the nonzero vectors on the x-axis:
  A[x; 0] = [–x; 0] = –1[x; 0]
For the eigenvalue λ2 = 1, the corresponding eigenvectors are the nonzero vectors on the y-axis:
  A[0; y] = [0; y] = 1[0; y]
• Thm. 7.2: Finding eigenvalues and eigenvectors of a matrix A ∈ Mn×n
Let A be an n×n matrix. The scalar λ is an eigenvalue of A if and only if
  det(λI – A) = 0
det(λI – A) = 0 is the characteristic equation of A, and det(λI – A) is the characteristic polynomial of A ∈ Mn×n
• Ex 4: A = [2 –12; 1 –5]
  det(λI – A) = |λ–2 12; –1 λ+5| = λ² + 3λ + 2 = (λ + 1)(λ + 2) = 0
  ⟹ λ = –1, –2
  Eigenvalues: λ1 = –1, λ2 = –2
(1) λ1 = –1: (λ1I – A)x = [–3 12; –1 4][x1; x2] = [0; 0]
  G.-J. E.: [–3 12; –1 4] → [1 –4; 0 0]
  ⟹ [x1; x2] = [4t; t] = t[4; 1], t ≠ 0
(2) λ2 = –2: (λ2I – A)x = [–4 12; –1 3][x1; x2] = [0; 0]
  G.-J. E.: [–4 12; –1 3] → [1 –3; 0 0]
  ⟹ [x1; x2] = [3s; s] = s[3; 1], s ≠ 0
• Ex 5: Finding eigenvalues and eigenvectors
Find the eigenvalues and corresponding
eigenvectors for the matrix A. What is the
dimension of the eigenspace of each eigenvalue?
  A = [2 1 0; 0 2 0; 0 0 2]
Sol: Characteristic equation:
  |λI – A| = |λ–2 –1 0; 0 λ–2 0; 0 0 λ–2| = (λ – 2)³ = 0
  Eigenvalue: λ = 2 (multiplicity 3)
The eigenspace of λ = 2:
  (λI – A)x = [0 –1 0; 0 0 0; 0 0 0][x1; x2; x3] = [0; 0; 0]
  ⟹ [x1; x2; x3] = [s; 0; t] = s[1; 0; 0] + t[0; 0; 1]
  {s[1; 0; 0] + t[0; 0; 1] : s, t ∈ R} is the eigenspace of A corresponding to λ = 2
  Thus, the dimension of this eigenspace is 2
• Ex 7: Finding eigenvalues and the dimensions of the corresponding eigenspaces for
  A = [1 0 0 0; 0 1 5 –10; 1 0 2 0; 1 0 0 3]
(the eigenvalues are λ1 = 1, λ2 = 2, λ3 = 3)
(2) λ2 = 2:
  G.-J. E. ⟹ [x1; x2; x3; x4] = t[0; 5; 1; 0], t ≠ 0
  {[0 5 1 0]ᵀ} is a basis for the eigenspace corresponding to λ2 = 2
  ※ The dimension of the eigenspace of λ2 = 2 is 1
(3) λ3 = 3: (λ3I – A)x = [2 0 0 0; 0 2 –5 10; –1 0 1 0; –1 0 0 0][x1; x2; x3; x4] = [0; 0; 0; 0]
  G.-J. E. ⟹ [x1; x2; x3; x4] = t[0; –5; 0; 1], t ≠ 0
  {[0 –5 0 1]ᵀ} is a basis for the eigenspace corresponding to λ3 = 3
  ※ The dimension of the eigenspace of λ3 = 3 is 1
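A cross-check of the 4 × 4 example (a sketch; the matrix entries are reconstructed from the linear system shown for λ3 = 3, so treat them as an assumption — its eigenvalues come out as 1 twice, 2, and 3):

```python
import numpy as np

A = np.array([[1.0, 0, 0, 0],
              [0, 1, 5, -10],
              [1, 0, 2, 0],
              [1, 0, 0, 3]])
print(sorted(np.round(np.linalg.eigvals(A).real, 6)))
print(np.allclose(A @ [0, 5, 1, 0], 2 * np.array([0.0, 5, 1, 0])))
```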
• Thm. 7.3: Eigenvalues for triangular matrices
If A is an nn triangular matrix, then its eigenvalues are the entries on its
main diagonal
• Ex: Consider the linear transformation T: R³ → R³ defined by
  T(x1, x2, x3) = (x1 + 3x2, 3x1 + x2, –2x3)
• Theorem: Standard matrix for a linear transformation
  T(e1) = T(1, 0, 0) = (1, 3, 0), T(e2) = T(0, 1, 0) = (3, 1, 0), T(e3) = T(0, 0, 1) = (0, 0, –2)
Thus, the above linear transformation T has the following corresponding standard matrix A such that T(x) = Ax:
  A = [1 3 0; 3 1 0; 0 0 –2],  Ax = [1 3 0; 3 1 0; 0 0 –2][x1; x2; x3] = [x1 + 3x2; 3x1 + x2; –2x3]
※ The statement on Slide 7.18 is valid because for any linear transformation T: V → V, there is a corresponding square matrix A such that T(x) = Ax. Consequently, the eigenvalues and eigenvectors of a linear transformation T are in essence the eigenvalues and eigenvectors of the corresponding square matrix A
Ex 8: Finding eigenvalues and eigenvectors for standard matrices
  |λI – A| = |λ–1 –3 0; –3 λ–1 0; 0 0 λ+2| = (λ + 2)²(λ – 4) = 0
  ⟹ eigenvalues λ1 = 4, λ2 = –2
  [T(x)]B′ = A′[x]B′, where A′ = [[T(v1)]B′ [T(v2)]B′ … [T(vn)]B′]
  is the transformation matrix for T relative to the basis B′
※ On the next two slides, an example is provided to verify numerically that this extension is valid
• Ex: Consider an arbitrary nonstandard basis B′ = {v1, v2, v3} = {(1, 1, 0), (1, –1, 0), (0, 0, 1)}, and find the transformation matrix A′ such that [T(x)]B′ = A′[x]B′, corresponding to the same linear transformation T(x1, x2, x3) = (x1 + 3x2, 3x1 + x2, –2x3)
  [T(v1)]B′ = [T(1, 1, 0)]B′ = [(4, 4, 0)]B′ = [4; 0; 0],  [T(v2)]B′ = [T(1, –1, 0)]B′ = [(–2, 2, 0)]B′ = [0; –2; 0],
  [T(v3)]B′ = [T(0, 0, 1)]B′ = [(0, 0, –2)]B′ = [0; 0; –2]
  A′ = [4 0 0; 0 –2 0; 0 0 –2]
• Consider x = (5, –1, 4), and check that [T(x)]B′ = A′[x]B′:
  A′[x]B′ = [4 0 0; 0 –2 0; 0 0 –2][2; 3; 4] = [8; –6; –8] = [T(x)]B′
For a special basis B′ = {v1, v2, …, vn}, where the vi's are eigenvectors of the standard matrix A, A′ is obtained immediately to be diagonal due to
  T(vi) = Avi = λi vi
and
  [λi vi]B′ = [0v1 + 0v2 + … + λi vi + … + 0vn]B′ = [0 … 0 λi 0 … 0]ᵀ
Then A′, the transformation matrix for T relative to the basis B′, defined as [[T(v1)]B′ [T(v2)]B′ [T(v3)]B′] (see Slide 7.22), is diagonal, and the main diagonal entries are the corresponding eigenvalues (see Slide 7.23)
  For λ1 = 4: eigenvector (1, 1, 0); for λ2 = –2: eigenvectors (1, –1, 0) and (0, 0, 1)
  B′ = {(1, 1, 0), (1, –1, 0), (0, 0, 1)} (eigenvectors of A) ⟹ A′ = [4 0 0; 0 –2 0; 0 0 –2] (eigenvalues of A on the diagonal)
Keywords
• eigenvalue problem
• eigenvalue
• eigenvector
• characteristic equation
• characteristic polynomial
• eigenspace
• multiplicity
• linear transformation
• diagonalization
7.2 Diagonalization
• Diagonalization problem
For a square matrix A, does there exist an invertible matrix P such that P–1AP is
diagonal?
Diagonalizable matrix
※ In Sec. 6.4, two square matrices A and B are similar if there exists an invertible
matrix P such that B = P–1AP.
Notes:
This section shows that the eigenvalue and eigenvector problem is closely related to
the diagonalization problem
• Thm. 7.4: Similar matrices have the same eigenvalues
If A and B are similar n×n matrices, then they have the same eigenvalues
Pf:
  A and B are similar ⟹ B = P⁻¹AP
  (For any diagonal matrix in the form of D = λI, P⁻¹DP = D)
Consider the characteristic equation of B:
  |λI – B| = |λI – P⁻¹AP| = |P⁻¹λIP – P⁻¹AP| = |P⁻¹(λI – A)P|
  = |P⁻¹||λI – A||P| = |P⁻¹||P||λI – A| = |P⁻¹P||λI – A| = |λI – A|
Since A and B have the same characteristic equation, they have the same eigenvalues
※ If there are n linearly independent eigenvectors, it does not imply that there are n
distinct eigenvalues. In an extreme case, it is possible to have only one eigenvalue with
the multiplicity n, and there are n linearly independent eigenvectors for this eigenvalue
※ On the other hand, if there are n distinct eigenvalues, then there are n linearly
independent eigenvectors (see Thm. 7.6), and thus A must be diagonalizable
• Thm. 7.5: An n×n matrix A is diagonalizable if and only if it has n linearly independent eigenvectors
Pf: (⟹)
Since A is diagonalizable, there exists an invertible P s.t. D = P⁻¹AP is diagonal. Let P = [p1 p2 … pn] and D = diag(λ1, λ2, …, λn). Then
  PD = [p1 p2 … pn] diag(λ1, λ2, …, λn) = [λ1p1 λ2p2 … λnpn]
  AP = PD  (since D = P⁻¹AP)
  ⟹ [Ap1 Ap2 … Apn] = [λ1p1 λ2p2 … λnpn]
  ⟹ Api = λipi, i = 1, 2, …, n
(The above equations imply that the column vectors pi of P are eigenvectors of A, and the diagonal entries λi in D are eigenvalues of A)
Because A is diagonalizable, P is invertible, so the columns of P, i.e., p1, p2, …, pn, are linearly independent (see Slide 4.101 in the lecture note)
Thus, A has n linearly independent eigenvectors
(⟸)
Since A has n linearly independent eigenvectors p1, p2, …, pn with corresponding eigenvalues λ1, λ2, …, λn (could be the same), Api = λipi for i = 1, 2, …, n
Let P = [p1 p2 … pn]. Then
  AP = A[p1 p2 … pn] = [Ap1 Ap2 … Apn] = [λ1p1 λ2p2 … λnpn]
  = [p1 p2 … pn] diag(λ1, λ2, …, λn) = PD
Since p1, p2, …, pn are linearly independent, P is invertible (see Slide 4.101 in the lecture note)
  AP = PD ⟹ P⁻¹AP = D ⟹ A is diagonalizable (according to the definition of the diagonalizable matrix on Slide 7.27)
※ Note that the pi's are linearly independent eigenvectors and the diagonal entries λi in D are eigenvalues of A
• Ex: A matrix that is not diagonalizable: A = [1 2; 0 1], with the single eigenvalue λ1 = 1
  λ1I – A = I – A = [0 –2; 0 0] ⟹ eigenvector p1 = [1; 0]
Since A does not have two linearly independent eigenvectors, A is not diagonalizable
• Steps for diagonalizing an n×n square matrix:
Step 1: Find n linearly independent eigenvectors p1, p2, …, pn for A with corresponding eigenvalues λ1, λ2, …, λn
Step 2: Let P = [p1 p2 … pn]
Step 3: P⁻¹AP = D = diag(λ1, λ2, …, λn), where Api = λipi, i = 1, 2, …, n
• Ex 5: Diagonalizing a matrix
  A = [1 –1 –1; 1 3 1; –3 1 –1]
Find a matrix P such that P⁻¹AP is diagonal.
Sol: Characteristic equation:
  |λI – A| = |λ–1 1 1; –1 λ–3 –1; 3 –1 λ+1| = (λ – 2)(λ + 2)(λ – 3) = 0
  The eigenvalues: λ1 = 2, λ2 = –2, λ3 = 3
λ1 = 2: λ1I – A = [1 1 1; –1 –1 –1; 3 –1 3] → G.-J. E. → [1 0 1; 0 1 0; 0 0 0]
  ⟹ [x1; x2; x3] = t[–1; 0; 1], eigenvector p1 = [–1; 0; 1]
λ2 = –2: λ2I – A = [–3 1 1; –1 –5 –1; 3 –1 –1] → G.-J. E. → [1 0 –1/4; 0 1 1/4; 0 0 0]
  ⟹ [x1; x2; x3] = t[1/4; –1/4; 1], eigenvector p2 = [1; –1; 4]
λ3 = 3: λ3I – A = [2 1 1; –1 0 –1; 3 –1 4] → G.-J. E. → [1 0 1; 0 1 –1; 0 0 0]
  ⟹ [x1; x2; x3] = t[–1; 1; 1], eigenvector p3 = [–1; 1; 1]
  P = [p1 p2 p3] = [–1 1 –1; 0 –1 1; 1 4 1] and it follows that
  P⁻¹AP = [2 0 0; 0 –2 0; 0 0 3]
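A numerical check of Ex 5 (a sketch; entries follow the reconstruction above):

```python
import numpy as np

# Diagonalization check: P^{-1} A P should be diag(2, -2, 3)
A = np.array([[1.0, -1, -1], [1, 3, 1], [-3, 1, -1]])
P = np.array([[-1.0, 1, -1], [0, -1, 1], [1, 4, 1]])
D = np.linalg.inv(P) @ A @ P
print(np.round(D))
```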
Note: a quick way to calculate Aᵏ based on the diagonalization technique
(1) D = diag(λ1, λ2, …, λn) ⟹ Dᵏ = diag(λ1ᵏ, λ2ᵏ, …, λnᵏ)
(2) D = P⁻¹AP ⟹ Dᵏ = (P⁻¹AP)(P⁻¹AP)…(P⁻¹AP) = P⁻¹AᵏP (repeated k times)
  ⟹ Aᵏ = PDᵏP⁻¹, where Dᵏ = diag(λ1ᵏ, λ2ᵏ, …, λnᵏ)
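The Aᵏ = PDᵏP⁻¹ shortcut can be sketched in code, reusing the diagonalization of Ex 5 (entries as reconstructed there):

```python
import numpy as np

A = np.array([[1.0, -1, -1], [1, 3, 1], [-3, 1, -1]])
P = np.array([[-1.0, 1, -1], [0, -1, 1], [1, 4, 1]])
D = np.diag([2.0, -2.0, 3.0])
k = 5
# A^k = P D^k P^{-1}; powering D only powers its diagonal entries
A_k = P @ np.linalg.matrix_power(D, k) @ np.linalg.inv(P)
print(np.allclose(A_k, np.linalg.matrix_power(A, k)))   # True
```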
• Thm. 7.6: Sufficient conditions for diagonalization
If an nn matrix A has n distinct eigenvalues, then the corresponding
eigenvectors are linearly independent and thus A is diagonalizable according
to Thm. 7.5.
Pf:
Let λ1, λ2, …, λn be distinct eigenvalues and corresponding eigenvectors be x1,
x2, …, xn. In addition, consider that the first m eigenvectors are linearly
independent, but the first m+1 eigenvectors are linearly dependent, i.e.,
• Ex: The triangular matrix
  A = [1 2 1; 0 0 1; 0 0 –3]
has three distinct eigenvalues (its main diagonal entries) λ1 = 1, λ2 = 0, λ3 = –3, so by Thm. 7.6 A is diagonalizable
• Ex: A = [1 –1 –1; 1 3 1; –3 1 –1]
From Ex 5 you know that λ1 = 2, λ2 = –2, λ3 = 3, and thus A is diagonalizable. So, similar to the result on Slide 7.25, the three linearly independent eigenvectors found in Ex 5 can be used to form the basis B′. That is,
  B′ = {v1, v2, v3} = {(–1, 0, 1), (1, –1, 4), (–1, 1, 1)}
※ Note that it is not necessary to calculate A′ through the above equation. According to the result on Slide 7.25, we already know that A′ is a diagonal matrix and its main diagonal entries are the corresponding eigenvalues of A
Symmetric Matrices and Orthogonal Diagonalization
• Symmetric matrix
A square matrix A is symmetric if it is equal to its transpose: A = Aᵀ
Ex 1: Symmetric matrices and nonsymmetric matrices
  A = [0 1 –2; 1 3 0; –2 0 5] (symmetric)
  B = [4 3; 3 1] (symmetric)
  C = [3 2 1; 1 –4 0; 1 0 5] (nonsymmetric)
• Thm 7.7: Eigenvalues of symmetric matrices
If A is an nn “symmetric” matrix, then the following properties are true
(1) A is diagonalizable (symmetric matrices (except the matrices in the form of A =
aI, in which case A is already diagonal) are guaranteed to have n linearly
independent eigenvectors and thus be diagonalizable)
(2) All eigenvalues of A are real numbers
(3) If l is an eigenvalue of A with the multiplicity to be k, then l has k linearly
independent eigenvectors. That is, the eigenspace of l has dimension k
※ The above theorem is called the Real Spectral Theorem, and the set of
eigenvalues of A is called the spectrum of A
• Ex 2:
Prove that a 2 × 2 symmetric matrix is diagonalizable
  A = [a c; c b]
Pf: Characteristic equation:
  |λI – A| = |λ–a –c; –c λ–b| = λ² – (a + b)λ + ab – c² = 0
As a function of λ, this quadratic polynomial has a nonnegative discriminant:
  (a + b)² – 4(1)(ab – c²) = a² + 2ab + b² – 4ab + 4c²
  = a² – 2ab + b² + 4c²
  = (a – b)² + 4c² ≥ 0 ⟹ real-number solutions
(1) (a – b)² + 4c² = 0
  ⟹ a = b, c = 0
  A = [a c; c b] = [a 0; 0 a] ⟹ A itself is a diagonal matrix
  ※ Note that in this case, A has one eigenvalue, a, whose multiplicity is 2, and the two eigenvectors are linearly independent
(2) (a – b)² + 4c² > 0 ⟹ the characteristic polynomial has two distinct real roots, so A has two distinct eigenvalues, two linearly independent eigenvectors, and is therefore diagonalizable
• Ex: For an orthogonal matrix P with columns p1, p2, p3, direct computation produces
  p1·p2 = p1·p3 = p2·p3 = 0 and p1·p1 = p2·p2 = p3·p3 = 1
i.e., the column vectors form an orthonormal set
• Thm 7.9: Eigenvectors of symmetric matrices
  If A is symmetric, then eigenvectors x1 and x2 corresponding to distinct
  eigenvalues λ1 and λ2 are orthogonal
  Pf:
    λ1(x1·x2) = (λ1x1)·x2 = (Ax1)·x2 = (Ax1)ᵀx2 = (x1ᵀAᵀ)x2
              = (x1ᵀA)x2                  (because A is symmetric)
              = x1ᵀ(Ax2) = x1ᵀ(λ2x2) = x1·(λ2x2) = λ2(x1·x2)
  The above equation implies (λ1 − λ2)(x1·x2) = 0, and because λ1 ≠ λ2, it
  follows that x1·x2 = 0. So, x1 and x2 are orthogonal
  ※ For distinct eigenvalues of a symmetric matrix, the corresponding
    eigenvectors are orthogonal and thus linearly independent of each other
    (Theorem 5.10 states that orthogonality implies linear independence)
  ※ Note that there may be multiple x1's and x2's corresponding to λ1 and λ2
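The theorem is easy to illustrate numerically; a sketch with a small symmetric matrix chosen for illustration:

```python
import numpy as np

# A symmetric matrix with distinct eigenvalues 0 and 5
A = np.array([[1., 2.],
              [2., 4.]])

eigvals, eigvecs = np.linalg.eigh(A)
x1, x2 = eigvecs[:, 0], eigvecs[:, 1]

assert abs(eigvals[0] - eigvals[1]) > 1e-9   # distinct eigenvalues
assert abs(x1 @ x2) < 1e-10                  # eigenvectors are orthogonal
```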
Orthogonal diagonalization
• Thm 7.10: A matrix A is orthogonally diagonalizable (there exists an
  orthogonal matrix P such that P⁻¹AP = D is diagonal) if and only if A is
  symmetric
  Pf:
  (⇒)
    A is orthogonally diagonalizable
    ⇒ D = P⁻¹AP is diagonal, and P is an orthogonal matrix s.t. P⁻¹ = Pᵀ
    ⇒ A = PDP⁻¹ = PDPᵀ ⇒ Aᵀ = (PDPᵀ)ᵀ = (Pᵀ)ᵀDᵀPᵀ = PDPᵀ = A
    ⇒ A is symmetric
  (⇐)
    See the next two slides
Orthogonal diagonalization of a symmetric matrix:
• Ex 9: Find an orthogonal matrix P that diagonalizes

        [ 2   2  −2]
    A = [ 2  −1   4]
        [−2   4  −1]

  The eigenvalues are λ1 = −6, with unit eigenvector u1 = (1/3, −2/3, 2/3),
  and λ2 = λ3 = 3, with eigenvectors v2 = (2, 1, 0) and v3 = (−2, 4, 5)
※ If v2 and v3 are not orthogonal, the Gram-Schmidt Process should be
  performed. Here v2·v3 = 0, so we simply normalize v2 and v3 to find the
  corresponding unit vectors:

    u2 = v2/‖v2‖ = (2/√5, 1/√5, 0)
    u3 = v3/‖v3‖ = (−2/(3√5), 4/(3√5), 5/(3√5))

        [ 1/3   2/√5   −2/(3√5)]              [−6  0  0]
    P = [−2/3   1/√5    4/(3√5)]     P⁻¹AP =  [ 0  3  0]
        [ 2/3    0      5/(3√5)]              [ 0  0  3]
          u1     u2       u3

※ Note that there are some calculation errors in the solution of Ex. 9 in
  the textbook
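The orthogonal diagonalization can be reproduced with `np.linalg.eigh`, which returns orthonormal eigenvectors for a symmetric input (the matrix A below is the one assumed for the Ex. 9 computation):

```python
import numpy as np

# Symmetric matrix assumed for the orthogonal-diagonalization example
A = np.array([[ 2.,  2., -2.],
              [ 2., -1.,  4.],
              [-2.,  4., -1.]])
assert np.allclose(A, A.T)

eigvals, P = np.linalg.eigh(A)               # eigenvalues in ascending order
assert np.allclose(eigvals, [-6., 3., 3.])   # spectrum of A

assert np.allclose(P.T @ P, np.eye(3))       # P is orthogonal
D = P.T @ A @ P                              # orthogonal diagonalization
assert np.allclose(D, np.diag(eigvals), atol=1e-9)
```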
Keywords
• symmetric matrix
• orthogonal matrix
• orthonormal set
• orthogonal diagonalization
Applications of Eigenvalues and Eigenvectors
• The rotation of the quadratic equation ax² + bxy + cy² + dx + ey + f = 0
  (a) In standard form, we can obtain x²/3² + y²/2² = 1
• Matrix of the quadratic form:

    A = [ a    b/2]        ※ Note that A is a symmetric matrix
        [b/2    c ]

• If we define X = [x y]ᵀ, then XᵀAX = ax² + bxy + cy². In fact, the
  quadratic equation can be expressed in terms of X as follows:

    XᵀAX + [d e]X + f = 0
Principal Axes Theorem
• For a conic whose equation is ax² + bxy + cy² + dx + ey + f = 0, the
  rotation to eliminate the xy-term is achieved by X = PX′, where P is an
  orthogonal matrix that diagonalizes A. That is,

    P⁻¹AP = PᵀAP = D = [λ1   0]
                       [ 0  λ2]

  where λ1 and λ2 are eigenvalues of A. The equation for the rotated conic
  is given by

    λ1(x′)² + λ2(y′)² + [d e]PX′ + f = 0

  Pf:
  According to Thm. 7.10, since A is symmetric, we can conclude that there
  exists an orthogonal matrix P such that P⁻¹AP = PᵀAP = D is diagonal.
  Replacing X with PX′, the quadratic form becomes

    XᵀAX = (PX′)ᵀA(PX′) = (X′)ᵀPᵀAPX′ = (X′)ᵀDX′ = λ1(x′)² + λ2(y′)²

• Ex: Perform a rotation of axes to eliminate the xy-term in the quadratic
  equation

    13x² − 10xy + 13y² − 72 = 0
Sol:
The matrix of the quadratic form associated with this equation is

    A = [13  −5]
        [−5  13]

The eigenvalues are λ1 = 8 and λ2 = 18, and the corresponding eigenvectors
are

    x1 = [1]   and   x2 = [−1]
         [1]              [ 1]

After normalizing each eigenvector, we can obtain the orthogonal matrix P
as follows:

    P = [1/√2  −1/√2] = [cos 45°  −sin 45°]
        [1/√2   1/√2]   [sin 45°   cos 45°]

The rotated conic is λ1(x′)² + λ2(y′)² − 72 = 8(x′)² + 18(y′)² − 72 = 0,
i.e., (x′)²/3² + (y′)²/2² = 1
※ According to the results on p. 268 in Ch 4, X = PX′ is equivalent to
  rotating the xy-coordinates by 45 degrees to form the new
  x′y′-coordinates, which is also illustrated in the figure on Slide 7.62
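The rotation in this example can be verified numerically; a short sketch:

```python
import numpy as np

# Matrix of the quadratic form 13x^2 - 10xy + 13y^2
A = np.array([[13., -5.],
              [-5., 13.]])

eigvals, P = np.linalg.eigh(A)       # orthonormal eigenvectors of A
assert np.allclose(eigvals, [8., 18.])

# P^T A P is diagonal, so the xy cross-term is eliminated:
# the rotated conic is 8(x')^2 + 18(y')^2 - 72 = 0
D = P.T @ A @ P
assert np.allclose(D, np.diag(eigvals), atol=1e-9)
```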
• For a quadric surface ax² + by² + cz² + dxy + exz + fyz + gx + hy + iz + j = 0,
  the matrix of the quadratic form is

        [ a   d/2  e/2]
    A = [d/2   b   f/2]        ※ Note that A is a symmetric matrix
        [e/2  f/2   c ]

  If we define X = [x y z]ᵀ, then XᵀAX = ax² + by² + cz² + dxy + exz + fyz,
  and the quadric surface equation can be expressed as

    XᵀAX + [g h i]X + j = 0
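Building the symmetric matrix of a quadratic form can be sketched as follows (the coefficient values are illustrative):

```python
import numpy as np

# Illustrative coefficients of ax^2 + by^2 + cz^2 + dxy + exz + fyz
a, b, c, d, e, f = 1., 2., 3., 4., 5., 6.

# Symmetric matrix of the quadratic form
A = np.array([[a,   d/2, e/2],
              [d/2, b,   f/2],
              [e/2, f/2, c  ]])

# Check X^T A X against the scalar quadratic form at a sample point
x, y, z = 1.0, -2.0, 0.5
X = np.array([x, y, z])
quad = a*x**2 + b*y**2 + c*z**2 + d*x*y + e*x*z + f*y*z
assert np.isclose(X @ A @ X, quad)
```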
Keywords
• quadratic form
• principal axes theorem
Principal Component Analysis
Principal component analysis
• It is a way of identifying the underlying patterns in data
• It can extract the information in a large data set with many variables
  and approximate this data set with fewer factors
• In other words, it can reduce the number of variables to a more
  manageable set
Step 3: Calculate the covariance matrix of the mean-adjusted data

    var(X) = E(XXᵀ), where X = [x]
                               [y]

           = [ var(x)   cov(x,y)] = [0.616556  0.615444] ≡ A
             [cov(x,y)   var(y) ]   [0.615444  0.716556]

Step 4: Calculate the eigenvectors and eigenvalues of the covariance
matrix A

    λ1 = 1.284028, v1 = [−0.67787]    λ2 = 0.049083, v2 = [−0.73518]
                        [−0.73518]                        [ 0.67787]
Step 5: Derive the transformed data (X′)ᵀ, using both principal components
(left) and using only the first principal component (right):

       x′        y′                  x′       y′
    -0.82797  -0.17512            -0.82797    0
     1.77758   0.14286             1.77758    0
    -0.99220   0.38437            -0.99220    0
    -0.27421   0.13042            -0.27421    0
    -1.67580  -0.20950            -1.67580    0
    -0.91295   0.17528            -0.91295    0
     0.09911  -0.34982             0.09911    0
     1.14457   0.04642             1.14457    0
     0.43805   0.01776             0.43805    0
     1.22382  -0.16268             1.22382    0

    var((X′)ᵀ) = [1.284028     0    ]    var((X′)ᵀ) = [1.284028  0]
                 [   0      0.049083]                 [   0      0]
Step 6: Getting the original data back:
※ We can derive the original data set exactly if we take both v1 and v2,
  and thus both x′ and y′, into account when deriving the transformed data
※ Although only v1, and thus only x′, is considered when deriving the
  transformed data, the data we get back is still similar to the original
  data. That means x′ can be a common factor almost able to explain both
  series x and y

• Factor loadings: the loading of factor Fi on variable xj is

    l_ij = corr(Fi, xj) = v_ij·√λi / s.d.(xj)

    l11 = corr(x′, x) = 0.67787 × √1.284028 / 0.785211 = 0.97824
    l21 = corr(y′, x) = 0.73518 × √0.049083 / 0.785211 = 0.20744

  (here s.d.(x) = √var(x) = √0.616556 = 0.785211)
  Factor loadings are used to identify and interpret the unobservable
  principal components
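Steps 3–4 and the factor loadings can be checked from the covariance matrix alone; a sketch using the values given above (absolute values are used because eigenvector signs are arbitrary):

```python
import numpy as np

# Covariance matrix from Step 3
A = np.array([[0.616556, 0.615444],
              [0.615444, 0.716556]])

# Step 4: eigenvalues/eigenvectors (eigh returns them in ascending order)
eigvals, eigvecs = np.linalg.eigh(A)
lam2, lam1 = eigvals
assert np.isclose(lam1, 1.284028, atol=1e-4)
assert np.isclose(lam2, 0.049083, atol=1e-4)

# Factor loading of the first principal component on x:
# l_11 = v_11 * sqrt(lambda_1) / s.d.(x)
v1 = eigvecs[:, 1]
sd_x = np.sqrt(A[0, 0])
l11 = abs(v1[0]) * np.sqrt(lam1) / sd_x
assert np.isclose(l11, 0.97824, atol=1e-4)
```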
• Ex: Find the eigenvalues and eigenvectors of

        [ 7   2   14]            [λ−7   −2   −14 ]
    A = [−5  −4   −7]    λI−A =  [ 5   λ+4    7  ]
        [−5  −1  −10]            [ 5    1   λ+10 ]

                    [λ−7   −2   −14 ]
    det(λI−A) = det [ 5   λ+4    7  ]
                    [ 5    1   λ+10 ]

              = λ³ + 14λ² + 33λ
                   −  7λ² − 98λ − 231
                          + 10λ +  30
                          + 70λ + 210

              = λ³ + 7λ² + 15λ + 9        (characteristic polynomial)

Setting the characteristic polynomial to zero:

    λ³ + 7λ² + 15λ + 9 = 0
    potential rational roots: 1, −1, 3, −3, 9, −9
    synthetic division:
Trying the candidate root 1:

    1 |  1    7   15    9
      |       1    8   23
      |  1    8   23   32

The remainder 32 is not zero, so 1 is not a root.
Trying the candidate root −3:

   −3 |  1    7   15    9
      |      −3  −12   −9
      |  1    4    3    0

The remainder is zero, so −3 is a root, and

    λ³ + 7λ² + 15λ + 9 = (λ + 3)(λ² + 4λ + 3)
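The synthetic-division steps above can be captured in a small helper (the function name is hypothetical):

```python
# Synthetic division of a polynomial (coefficients listed from the highest
# degree) by (x - r); the function name is hypothetical
def synthetic_division(coeffs, r):
    row = [coeffs[0]]                 # bring down the leading coefficient
    for c in coeffs[1:]:
        row.append(c + r * row[-1])   # multiply by r, add the next column
    return row[:-1], row[-1]          # quotient coefficients, remainder

# lambda^3 + 7 lambda^2 + 15 lambda + 9 divided by (lambda + 3)
q, rem = synthetic_division([1, 7, 15, 9], -3)
assert rem == 0                       # -3 is a root
assert q == [1, 4, 3]                 # quotient lambda^2 + 4 lambda + 3
```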
Factoring the quadratic quotient:

    λ³ + 7λ² + 15λ + 9 = (λ + 3)(λ² + 4λ + 3) = (λ + 3)(λ + 3)(λ + 1)

The eigenvalues are: −3, −3, −1
For λ = −3:

    −3I − A = [−10  −2  −14]               [1  0.2  1.4]
              [  5   1    7]   reduces to  [0   0    0 ]
              [  5   1    7]               [0   0    0 ]

    so x1 = −0.2x2 − 1.4x3, and a basis for the eigenspace of λ = −3 is
    {(−1, 5, 0), (−7, 0, 5)}

For λ = −1:

    −1I − A = [−8  −2  −14]                [1  0   2]
              [ 5   3    7]   reduces to   [0  1  −1]
              [ 5   1    9]                [0  0   0]

    so x1 = −2x3 and x2 = x3, and a basis for the eigenspace of λ = −1 is
    {(−2, 1, 1)}
Using these eigenvectors as the columns of P gives the diagonalization:

    [−3   0   0]   [−1  −7  −2]⁻¹ [ 7   2   14] [−1  −7  −2]
    [ 0  −3   0] = [ 5   0   1]   [−5  −4   −7] [ 5   0   1]
    [ 0   0  −1]   [ 0   5   1]   [−5  −1  −10] [ 0   5   1]

    diagonal matrix      P⁻¹            A            P
    that is similar
    to A
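A quick numerical check of the eigenvalues and the diagonalization in this example (entries of A and P as used above):

```python
import numpy as np

# Matrix A and eigenvector matrix P from the worked example
A = np.array([[ 7.,  2.,  14.],
              [-5., -4.,  -7.],
              [-5., -1., -10.]])
P = np.array([[-1., -7., -2.],
              [ 5.,  0.,  1.],
              [ 0.,  5.,  1.]])

# Eigenvalues of A are -3, -3, -1
eigvals = np.sort(np.linalg.eigvals(A).real)
assert np.allclose(eigvals, [-3., -3., -1.], atol=1e-8)

# P^{-1} A P is the diagonal matrix similar to A
D = np.linalg.inv(P) @ A @ P
assert np.allclose(D, np.diag([-3., -3., -1.]), atol=1e-9)
```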
For more details
• Prof. Gilbert Strang's course videos:
  http://ocw.mit.edu/OcwWeb/Mathematics/18-06Spring-2005/VideoLectures/index.htm
• http://www.cliffsnotes.com/study_guide/Determining-the-Eigenvectors-of-a-Matrix.topicArticleId-20807,articleId-20804.html
Software
List of numerical analysis software
• Several programming languages and environments are designed to implement
  numerical linear algebra algorithms efficiently; these include MATLAB,
  Analytica, Maple, and Mathematica.
• Libraries include LAPACK; Python has the NumPy library, and Perl has the
  Perl Data Language.