
MA 106 : Linear Algebra

Lecture 14

J. K. Verma
Department of Mathematics
Indian Institute of Technology Bombay

Characterization of self-adjoint operators
1 Theorem. Let V be a finite dimensional inner product space over F and let
T : V → V be a linear operator. Then T is self-adjoint iff MBB (T ) is
self-adjoint for every ordered orthonormal basis B of V .
2 Proof. Let B = (v1 , . . . , vn ) be an ordered orthonormal basis of V .
3 Suppose that T is self-adjoint and A = (aij ) = MBB (T ).
4 Then T(v_j) = Σ_{k=1}^n a_{kj} v_k. So ⟨T(v_j), v_i⟩ = ⟨Σ_{k=1}^n a_{kj} v_k, v_i⟩ = a_{ij}.
5 Therefore a_{ij} = ⟨T(v_j), v_i⟩ = ⟨v_j, T(v_i)⟩ = ⟨v_j, Σ_{k=1}^n a_{ki} v_k⟩ = ā_{ji}, i.e. A is self-adjoint.
6 Conversely, suppose that A = (a_{ij}) = MBB(T) is self-adjoint for an orthonormal basis B
of V . Then a_{ij} = ā_{ji}, and hence ⟨T(v_j), v_i⟩ = ⟨v_j, T(v_i)⟩ for all i, j.
7 Let x = Σ_{j=1}^n a_j v_j and y = Σ_{i=1}^n b_i v_i. Then

      ⟨x, T(y)⟩ = ⟨Σ_j a_j v_j, Σ_i b_i T(v_i)⟩ = Σ_{j,i} a_j b̄_i ⟨v_j, T(v_i)⟩,
      ⟨T(x), y⟩ = ⟨Σ_j a_j T(v_j), Σ_i b_i v_i⟩ = Σ_{j,i} a_j b̄_i ⟨T(v_j), v_i⟩,

and the two right-hand sides agree term by term.
8 Therefore T is self-adjoint.
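A quick numerical illustration of the formula a_{ij} = ⟨T(v_j), v_i⟩ (a Python/NumPy sketch, not part of the lecture; the Hermitian matrix H and the orthonormal basis Q below are made-up test data): compute the matrix of a self-adjoint operator in a random orthonormal basis and check that it is again self-adjoint.

```python
import numpy as np

rng = np.random.default_rng(0)

# A self-adjoint operator on C^3, given by T(x) = Hx with H Hermitian (made-up test data).
M = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
H = (M + M.conj().T) / 2

# A random ordered orthonormal basis B = (v_1, v_2, v_3): the columns of Q.
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3)))

# a_ij = <T(v_j), v_i>, with the inner product linear in the first slot as in the lecture.
A = np.array([[np.vdot(Q[:, i], H @ Q[:, j]) for j in range(3)] for i in range(3)])

print(np.allclose(A, A.conj().T))   # the matrix of T in an orthonormal basis is self-adjoint
```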
Spectral Theorem for self-adjoint operators
1 Theorem (Spectral Theorem for Self-Adjoint Operators). Let V be a
finite dimensional inner product space over F and let T : V → V be a
self-adjoint linear operator. Then there exists an orthonormal basis of V
consisting of eigenvectors of T .
2 Proof. Apply induction on d = dim V . If d = 1 then there is a nonzero
vector v of unit length such that V = {av | a ∈ F}.
3 Hence T (v ) = bv for some b ∈ F. Clearly v is an eigenvector of T .
4 Now let d ≥ 2. The matrix of T in an orthonormal basis is Hermitian, and Hermitian matrices
have only real eigenvalues, so there exist λ ∈ R and a unit vector v ∈ V with T(v) = λv.
Put W = L({v})^⊥.
5 Claim. w ∈ W =⇒ T(w) ∈ W, and T : W → W is self-adjoint.
6 Proof. ⟨T(w), v⟩ = ⟨w, T(v)⟩ = ⟨w, λv⟩ = λ⟨w, v⟩ = 0, since w ∈ W and λ ∈ R.
Therefore T(w) ∈ W, and the restriction of T to the invariant subspace W is self-adjoint
for the induced inner product.
7 Since dim W = d − 1, by induction on dimension, there is an orthonormal
basis B of W consisting of eigenvectors of T : W → W .
8 Now {v } ∪ B is the required orthonormal basis of V .
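Numerically, such an orthonormal eigenbasis is exactly what numpy.linalg.eigh computes. A minimal check on a made-up Hermitian matrix (a sketch, not part of the lecture):

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
H = (M + M.conj().T) / 2                 # a self-adjoint operator T(x) = Hx on C^4

w, V = np.linalg.eigh(H)                 # eigenvalues (returned as real numbers) and eigenvectors

print(np.allclose(V.conj().T @ V, np.eye(4)))   # the columns of V form an orthonormal basis of C^4
print(np.allclose(H @ V, V @ np.diag(w)))       # ... consisting of eigenvectors of T
```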
Eigenspaces of self-adjoint matrices are mutually ⊥

1 Proposition. Let T be a self-adjoint operator on a finite-dimensional inner product space V .
Let u, v be eigenvectors of T with distinct eigenvalues λ and µ respectively. Then u ⊥ v .
2 Proof. As T is self-adjoint, λ, µ ∈ R. Therefore

      (λ − µ)⟨u, v⟩ = ⟨λu, v⟩ − ⟨u, µv⟩ = ⟨Tu, v⟩ − ⟨u, Tv⟩ = ⟨u, Tv⟩ − ⟨u, Tv⟩ = 0.

3 Since λ ≠ µ, u and v are mutually perpendicular.


4 Recall that V = W1 ⊕ W2 if V = W1 + W2 and W1 ∩ W2 = {0}.
5 Theorem. Let T be a self-adjoint linear operator on a finite dimensional inner product
space V . Let λ_1, . . . , λ_r be the distinct eigenvalues of T . Then

      V = V_{λ_1} ⊕ V_{λ_2} ⊕ · · · ⊕ V_{λ_r}    and    dim V = Σ_{i=1}^r dim V_{λ_i}.
6 Proof. Use the fact that T is diagonalizable: by the spectral theorem V has an orthonormal
basis of eigenvectors of T , and grouping these basis vectors by eigenvalue gives the
decomposition and the dimension count.
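Both statements are easy to confirm on a small made-up symmetric matrix with a repeated eigenvalue (a NumPy sketch, not from the lecture):

```python
import numpy as np

A = np.array([[2., 1., 1.],
              [1., 2., 1.],
              [1., 1., 2.]])          # symmetric; eigenvalues 4, 1, 1 (made-up example)

u = np.array([1., 1., 1.])            # A u = 4 u
v = np.array([1., -1., 0.])           # A v = 1 v
print(np.allclose(A @ u, 4 * u), np.allclose(A @ v, v))
print(np.isclose(u @ v, 0))           # eigenvectors for the distinct eigenvalues 4 and 1 are orthogonal

# dim V = sum of the eigenspace dimensions: here dim V_4 + dim V_1 = 1 + 2 = 3.
w = np.linalg.eigvalsh(A)
dims = [int(np.sum(np.isclose(w, mu))) for mu in np.unique(np.round(w, 8))]
print(dims, sum(dims) == A.shape[0])
```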
Spectral Theorem for real symmetric matrices

1 Theorem (Spectral Theorem for Real Symmetric Matrices). Let A be an n × n real
symmetric matrix with (real) eigenvalues λ_1, . . . , λ_n. Set D = diag(λ_1, . . . , λ_n). Then there
exists an n × n real orthogonal matrix S such that S^t AS = D.
2 Theorem. Let A be an n × n Hermitian matrix with eigenvalues
λ1 , . . . , λn ∈ R. Set D = diag(λ1 , . . . , λn ). Then there exists an n × n unitary
matrix U such that U ∗ AU = D.
3 Proof. Recall that for a given n × n real symmetric (resp. Hermitian) matrix A as in the two
theorems above, the operator T_A(x) = Ax is self-adjoint on R^n (resp. on C^n). The spectral
theorem for self-adjoint operators therefore gives an orthonormal basis B of R^n (resp. C^n)
consisting of eigenvectors of T_A, so that

      MBB(T_A) = diag(λ_1, λ_2, . . . , λ_n).
4 If E is the standard basis of R^n (resp. C^n), then MEE(T_A) = A.
5 Let P = MEB, the matrix whose columns are the vectors of B written in standard
coordinates. Hence MBB(T_A) = MBE MEE(T_A) MEB = P^{-1}AP.
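In NumPy the orthonormal eigenbasis, and with it the matrix S (resp. U), comes from numpy.linalg.eigh; a short check of S^t AS = D on a made-up real symmetric matrix (a sketch, not part of the lecture):

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.standard_normal((4, 4))
A = (M + M.T) / 2                       # a real symmetric matrix (made-up data)

lam, S = np.linalg.eigh(A)              # columns of S: orthonormal eigenvectors of A
D = np.diag(lam)

print(np.allclose(S.T @ S, np.eye(4)))  # S is orthogonal
print(np.allclose(S.T @ A @ S, D))      # S^t A S = D, as in the theorem
```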

An algorithm for diagonalizing a self-adjoint matrix
1 Notice that P is orthogonal (resp. unitary) in the real (resp. complex) case, and hence
P^t AP (resp. P^*AP) = diag(λ_1, λ_2, . . . , λ_n).
2 Find the distinct eigenvalues µ1 , . . . , µk of A. These are all real.
3 For each µi construct a basis of Vµi using Gauss elimination.
4 Convert it to an orthonormal basis B(µi ), using Gram-Schmidt.
5 Suppose this basis has di vectors. Then d1 + · · · + dk = n.
6 Form an n × n matrix as follows: the first d_1 columns are the vectors in B(µ_1), the next d_2
columns are the vectors in B(µ_2), and so on.
7 This is the matrix U in the complex case (or S in the real case), and
D = diag(µ_1, . . . , µ_1, . . . , µ_k, . . . , µ_k), where each µ_i is repeated d_i times
(a NumPy sketch of these steps follows the example below).
8 Example. Consider the real symmetric matrix

      A = [  1  2 −2 ]
          [  2  1  2 ]
          [ −2  2  1 ].
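The algorithm above translates almost step by step into NumPy/SciPy. A sketch (the function name diagonalize_symmetric is ours, not from the lecture; scipy.linalg.null_space, which is SVD-based, stands in for Gauss elimination, and numpy.linalg.qr mirrors Gram-Schmidt):

```python
import numpy as np
from scipy.linalg import null_space

def diagonalize_symmetric(A, rcond=1e-9):
    """Return (S, D) with S orthogonal and S^t A S = D, for a real symmetric A."""
    n = A.shape[0]
    eigvals = np.linalg.eigvalsh(A)                    # step 2: the (real) eigenvalues
    distinct = np.unique(np.round(eigvals, 8))         # the distinct eigenvalues mu_1 < ... < mu_k
    cols, diag = [], []
    for mu in distinct:
        basis = null_space(A - mu * np.eye(n), rcond)  # step 3: a basis of V_mu (SVD instead of GEM)
        Q, _ = np.linalg.qr(basis)                     # step 4: Gram-Schmidt (null_space already
                                                       # returns an orthonormal basis; QR mirrors the slide)
        cols.append(Q)
        diag += [mu] * Q.shape[1]                      # step 5: mu repeated d_i times
    S = np.hstack(cols)                                # step 6: columns grouped eigenspace by eigenspace
    return S, np.diag(diag)

# The example matrix A of the previous item:
A = np.array([[ 1.,  2., -2.],
              [ 2.,  1.,  2.],
              [-2.,  2.,  1.]])
S, D = diagonalize_symmetric(A)
print(np.round(np.diag(D), 6))          # [-3.  3.  3.] (distinct eigenvalues in increasing order)
print(np.allclose(S.T @ A @ S, D))      # True: S^t A S = D
```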
Diagonalization of a real symmetric matrix

1 Solve det(λI − A) = 0. Check that the eigenvalues of A are 3, 3, −3.


2 The eigenvectors for λ = 3 are the nonzero vectors of the null space N(A − 3I ).
3 They are the nonzero solutions of

      [ −2  2 −2 ] [ x ]   [ 0 ]
      [  2 −2  2 ] [ y ] = [ 0 ]
      [ −2  2 −2 ] [ z ]   [ 0 ].
4 Using Gaussian elimination, we obtain the single equation x − y + z = 0. Hence
5 V_3 = {(y − z, y, z) | y, z ∈ R} = L({(0, 1, 1)^t, (−1, 1, 2)^t}).
6 Apply the Gram-Schmidt process to get an orthonormal basis of V_3:

      v_1 = (0, 1/√2, 1/√2)^t    and    v_2 = (−√(2/3), −1/√6, 1/√6)^t.

7 Check: {v_3 = (1/√3)(1, −1, 1)^t} is an orthonormal basis of V_{−3}.
8 Set S = [v_1, v_2, v_3] and D = diag(3, 3, −3). Then S^t AS = D.
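A quick numerical check of this hand computation (assuming NumPy; v_1, v_2, v_3 are exactly the vectors found above):

```python
import numpy as np

A = np.array([[ 1.,  2., -2.],
              [ 2.,  1.,  2.],
              [-2.,  2.,  1.]])
v1 = np.array([0., 1., 1.]) / np.sqrt(2)
v2 = np.array([-2., -1., 1.]) / np.sqrt(6)   # the same vector as (-sqrt(2/3), -1/sqrt(6), 1/sqrt(6))
v3 = np.array([1., -1., 1.]) / np.sqrt(3)
S = np.column_stack([v1, v2, v3])

print(np.allclose(S.T @ S, np.eye(3)))                    # S is orthogonal
print(np.allclose(S.T @ A @ S, np.diag([3., 3., -3.])))   # S^t A S = diag(3, 3, -3)
```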

Diagonalization of commuting self-adjoint operators

1 Definition. Two operators A, B : V → V are called commuting operators if AB = BA.
2 Theorem. Let V be an n-dimensional complex inner product space. Let A, B
be two commuting self-adjoint operators on V . Then there exists an
orthonormal basis (v1 , . . . , vn ) of V such that each vi is an eigenvector of
both A and B.
3 Proof. Let V_1, . . . , V_r be the eigenspaces of A for the distinct eigenvalues µ_1, . . . , µ_r.
Let v ∈ V_i. Then we claim that B(v) ∈ V_i. For,

      A(B(v)) = (AB)(v) = (BA)(v) = B(A(v)) = B(µ_i v) = µ_i B(v).
4 Therefore, for all i, B : V_i → V_i is a self-adjoint operator.
5 Hence each V_i has an orthonormal basis consisting of eigenvectors of B, and all of these
vectors are already eigenvectors of A. Since V = V_1 ⊕ · · · ⊕ V_r and the V_i are mutually
orthogonal, the union of these bases is the required orthonormal basis of V .
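The proof is constructive, and the construction runs in a few lines of NumPy (a sketch on made-up commuting symmetric matrices A and B, built so that they are diagonal in the same hidden orthonormal basis):

```python
import numpy as np

rng = np.random.default_rng(3)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))

# Commuting symmetric matrices: both are diagonal in the same (hidden) orthonormal basis Q.
A = Q @ np.diag([1., 1., 2.]) @ Q.T
B = Q @ np.diag([3., 4., 5.]) @ Q.T
assert np.allclose(A @ B, B @ A)

# Follow the proof: B maps each eigenspace V_i of A into itself and is self-adjoint there,
# so diagonalize B restricted to each V_i and collect the resulting vectors.
wA, VA = np.linalg.eigh(A)
blocks = []
for mu in np.unique(np.round(wA, 8)):
    P = VA[:, np.isclose(wA, mu)]          # orthonormal basis of the eigenspace V_mu of A
    _, W = np.linalg.eigh(P.T @ B @ P)     # eigenvectors of B restricted to V_mu
    blocks.append(P @ W)
V = np.hstack(blocks)                      # candidate common orthonormal eigenbasis

DA, DB = V.T @ A @ V, V.T @ B @ V
print(np.allclose(V.T @ V, np.eye(3)))         # orthonormal
print(np.allclose(DA, np.diag(np.diag(DA))))   # every column is an eigenvector of A
print(np.allclose(DB, np.diag(np.diag(DB))))   # ... and of B
```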

Diagonalization of normal matrices
1 Theorem (Spectral Theorem for Normal Matrices). Let N be an n × n complex normal
matrix, i.e. NN* = N*N. Then C^n has an orthonormal basis consisting of eigenvectors of N.
2 Proof. Let N be a normal matrix. Write N = (N + N*)/2 + (N − N*)/2.
3 Put A = (N + N ∗ )/2 and B = (N − N ∗ )/2.
4 Check that A = A∗ and B ∗ = −B and AB = BA.
5 C = iB is Hermitian as C ∗ = −iB ∗ = iB = C , and AC = CA.
6 Therefore, by the theorem on commuting self-adjoint operators, A and C have a common
orthonormal eigenbasis of C^n.
7 As B = −iC and N = A + B, every vector of this basis is also an eigenvector of B, and hence
of N. This gives the required orthonormal eigenbasis of N.
8 Proposition. Let U be an n × n unitary matrix. Then U is unitarily
diagonalizable and every eigenvalue λ of U satisfies |λ| = 1.
9 Proof. Since U*U = UU* = I, U is normal, so C^n has an orthonormal basis of eigenvectors
of U; in particular U is unitarily diagonalizable.
10 If x, y ∈ Cn then Ux · Uy = (Ux)∗ Uy = x ∗ U ∗ Uy = x ∗ y = x · y .
11 So ‖Ux‖ = ‖x‖. If x ≠ 0 and Ux = λx then
‖x‖ = ‖Ux‖ = ‖λx‖ = |λ|‖x‖ =⇒ |λ| = 1.
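A small numerical check of the proposition (a NumPy sketch on a made-up 2 × 2 rotation matrix; note that numpy.linalg.eig returns an orthonormal eigenbasis here only because the eigenvalues are distinct):

```python
import numpy as np

theta = 0.7
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # a rotation: real orthogonal, hence unitary

lam, V = np.linalg.eig(U)                         # eigenvalues e^{+i*theta}, e^{-i*theta}

print(np.allclose(np.abs(lam), 1))                # every eigenvalue of a unitary matrix has |lambda| = 1
print(np.allclose(V.conj().T @ V, np.eye(2)))     # distinct eigenvalues of a normal matrix give
                                                  # orthogonal (here orthonormal) eigenvectors
print(np.allclose(U @ V, V @ np.diag(lam)))       # U is unitarily diagonalized by V
```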
Applications of spectral theorem to geometry
1 Definition. Let A = (aij ) be an n × n real symmetric matrix. The quadratic
form associated with A is the map Q : Rn → R defined as follows. For
X = [x_1, x_2, . . . , x_n]^t ∈ R^n,

      Q(X) = X^t AX = Σ_{i=1}^n Σ_{j=1}^n a_{ij} x_i x_j = Σ_{i=1}^n a_{ii} x_i^2 + Σ_{1≤i<j≤n} 2a_{ij} x_i x_j.

2 If A = diag(λ_1, λ_2, . . . , λ_n), then Q(X) = λ_1 x_1^2 + λ_2 x_2^2 + · · · + λ_n x_n^2 is called
a diagonal form.
3 Example. Let

      A = [ 1  2 ]      and      X = [ x_1 ]
          [ 2  5 ]                   [ x_2 ].

Then Q(X) = X^t AX = x_1^2 + 4x_1 x_2 + 5x_2^2.
4 Theorem. Let A be real symmetric, let U be orthogonal with U^t AU = D = diag(λ_1, λ_2, . . . , λ_n),
and put X = UY = U(y_1, y_2, . . . , y_n)^t. Then

      Q(X) = X^t AX = Y^t U^t AUY = Y^t DY = λ_1 y_1^2 + λ_2 y_2^2 + · · · + λ_n y_n^2.
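The substitution X = UY is easy to test numerically (a NumPy sketch; the symmetric matrix A and the sample point X are made-up data):

```python
import numpy as np

rng = np.random.default_rng(4)
M = rng.standard_normal((3, 3))
A = (M + M.T) / 2                      # a real symmetric matrix (made-up data)

lam, U = np.linalg.eigh(A)             # U orthogonal with U^t A U = diag(lam)

X = rng.standard_normal(3)             # a sample point in R^3
Y = U.T @ X                            # X = U Y  <=>  Y = U^t X

print(np.isclose(X @ A @ X,            # Q(X) = X^t A X ...
                 np.sum(lam * Y**2)))  # ... equals lambda_1 y_1^2 + ... + lambda_n y_n^2
```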
Diagonalization of quadratic forms

1 Example. Let us determine the orthogonal matrix U which reduces the quadratic form
Q(X) = 2x_1^2 + 4x_1 x_2 + 5x_2^2 to a diagonal form.
   
2 We write Q(X) = X^t AX, where

      A = [ 2  2 ]
          [ 2  5 ].
 
3 The eigenvalues of A are λ_1 = 1 and λ_2 = 6.
4 An orthonormal set of eigenvectors for λ_1 and λ_2 is

      u_1 = (1/√5)(2, −1)^t    and    u_2 = (1/√5)(1, 2)^t.

 
5 Hence

      U = [u_1, u_2] = (1/√5) [  2  1 ]
                              [ −1  2 ]

and U^t AU = diag(1, 6).
 
6 Now use X = UY . The diagonal form is

      Y^t diag(1, 6) Y = y_1^2 + 6y_2^2.
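A short numerical confirmation of this example (assuming NumPy):

```python
import numpy as np

A = np.array([[2., 2.],
              [2., 5.]])
U = np.array([[ 2., 1.],
              [-1., 2.]]) / np.sqrt(5)

print(np.allclose(U.T @ U, np.eye(2)))               # U is orthogonal
print(np.allclose(U.T @ A @ U, np.diag([1., 6.])))   # U^t A U = diag(1, 6)

Y = np.array([0.3, -1.2])                            # any Y; put X = U Y
X = U @ Y
print(np.isclose(X @ A @ X, Y[0]**2 + 6 * Y[1]**2))  # Q(X) = y_1^2 + 6 y_2^2
```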

