Symmetric matrices (Optional)
Math 419
for some λ ∈ R.
(We do not prove this lemma here, but a proof can be found in
good books on vector calculus.)
We can now turn to the more abstract version of the spectral theorem. Write
(1) w = α1 u1 + · · · + αn un ,
and consider the problem of maximizing w · Aw over such w, subject to the constraint w · w = 1 (equivalently, w · w − 1 = 0). Let
F(α1, ..., αn) = w · Aw,
G(α1, ..., αn) = w · w − 1.
The method of Lagrange multipliers implies that at the maximum (αi∗), for each i,
(2) ∂F/∂αi (α1∗, ..., αn∗) = λ ∂G/∂αi (α1∗, ..., αn∗)
for some λ ∈ R. But,
∂F/∂αi (α1, ..., αn) = ∂/∂αi (w · Aw)
= (∂w/∂αi) · Aw + w · ∂(Aw)/∂αi    (product rule)
= (∂w/∂αi) · Aw + w · A(∂w/∂αi)    (A is linear)
= ui · Aw + w · Aui    (compute based on (1))
= 2ui · Aw    (since A is symmetric, w · Aui = Aw · ui = ui · Aw).
By a similar process,
∂G/∂αi (α1, ..., αn) = ∂/∂αi (w · w − 1) = 2ui · w.
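As a quick numerical sanity check (my own sketch, not part of the notes; it assumes NumPy and takes u1, ..., un to be the standard basis), the formulas ∂F/∂αi = 2ui · Aw and ∂G/∂αi = 2ui · w can be compared against finite-difference approximations:

```python
import numpy as np

# Sketch (assumption: NumPy, u_1, ..., u_n = standard basis): verify that
#   dF/d(alpha_i) = 2 u_i . (A w)   and   dG/d(alpha_i) = 2 u_i . w
# by comparing with central finite differences.

rng = np.random.default_rng(0)
n = 5
B = rng.standard_normal((n, n))
A = (B + B.T) / 2                      # a random symmetric matrix
U = np.eye(n)                          # u_1, ..., u_n: here the standard basis

F = lambda a: (U @ a) @ A @ (U @ a)    # F(alpha) = w . Aw with w = sum alpha_i u_i
G = lambda a: (U @ a) @ (U @ a) - 1.0  # G(alpha) = w . w - 1

alpha = rng.standard_normal(n)
w = U @ alpha
h = 1e-6
for i in range(n):
    e = np.zeros(n); e[i] = h
    dF_num = (F(alpha + e) - F(alpha - e)) / (2 * h)
    dG_num = (G(alpha + e) - G(alpha - e)) / (2 * h)
    assert np.isclose(dF_num, 2 * U[:, i] @ (A @ w), atol=1e-4)
    assert np.isclose(dG_num, 2 * U[:, i] @ w, atol=1e-4)
print("finite differences agree with 2 u_i . Aw and 2 u_i . w")
```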
Hence if
w∗ = α1∗ u1 + · · · + αn∗ un
is where the maximum occurs for our optimization problem, then (2)
implies
(3) ui · Aw∗ = λui · w∗ .
If we write
Aw∗ = β1∗ u1 + · · · + βn∗ un ,
then from Lemma 2, (3) implies
βi∗ = λαi∗ for all i,
and hence
Aw∗ = λw∗.
That is to say, the value w∗ that maximizes our optimization problem must be an eigenvector of A, with (obviously real) eigenvalue λ.
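To illustrate this conclusion numerically, here is a small sketch (again my own, assuming NumPy and a random symmetric A): it maximizes w · Aw over unit vectors by gradient ascent with renormalization, using the gradient 2Aw computed above, and checks that the resulting w∗ satisfies Aw∗ ≈ λw∗ with λ equal to the largest eigenvalue reported by np.linalg.eigh.

```python
import numpy as np

# Illustration (assumption: NumPy, random symmetric A): maximize w . Aw over
# unit vectors by gradient ascent followed by renormalization, then check
# that the maximizer w* satisfies A w* ≈ lambda w* with lambda = max eigenvalue.

rng = np.random.default_rng(1)
n = 6
B = rng.standard_normal((n, n))
A = (B + B.T) / 2

w = rng.standard_normal(n)
w /= np.linalg.norm(w)
eta = 0.1 / np.linalg.norm(A, 2)       # small step size
for _ in range(5000):
    w = w + eta * 2 * (A @ w)          # ascend along the gradient of w . Aw
    w /= np.linalg.norm(w)             # project back onto the unit sphere

lam = w @ A @ w                        # value of the maximized quadratic form
print("residual |Aw - lam*w| =", np.linalg.norm(A @ w - lam * w))
print("lam vs largest eigenvalue:", lam, np.linalg.eigh(A)[0][-1])
```

With a small step size the iteration behaves like a shifted power method, so generically it converges to an eigenvector for the largest eigenvalue; it is only meant to illustrate the optimization viewpoint, not to serve as a robust eigenvalue solver.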
For w∗ achieving the maximum, let u1 = w∗ and λ1 = λ. Then consider the (n − 1)-dimensional subspace
W′ = span(u1)⊥.
I claim that A : W′ → W′; that is, for any w′ ∈ W′, we have A(w′) ∈ W′. This is the case for the following reason: if
0 = w′ · u1 ,
then
0 = w′ · λu1 = w′ · Au1 = Aw′ · u1    (the last equality because A is symmetric),
and this establishes A(w′) ∈ W′.
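A numerical check of this invariance claim (a sketch assuming NumPy, not from the notes): take u1 to be an eigenvector of a random symmetric A, project a random vector onto span(u1)⊥, and confirm that its image under A is still orthogonal to u1.

```python
import numpy as np

# Sketch (assumption: NumPy): if u1 is an eigenvector of symmetric A and
# w' is orthogonal to u1, then A w' is also orthogonal to u1.

rng = np.random.default_rng(2)
n = 6
B = rng.standard_normal((n, n))
A = (B + B.T) / 2

vals, vecs = np.linalg.eigh(A)
u1 = vecs[:, -1]                        # an eigenvector of A (largest eigenvalue)

v = rng.standard_normal(n)
w_prime = v - (v @ u1) * u1             # project v onto span(u1)^perp
print("w' . u1      =", w_prime @ u1)          # ~ 0 by construction
print("(A w') . u1  =", (A @ w_prime) @ u1)    # ~ 0: W' is invariant under A
```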
By induction, the (n − 1)-dimensional space W′ has an orthonormal basis {u2, ..., un} of eigenvectors of A, each with a real eigenvalue, and u1 is a unit vector orthogonal to each of u2, ..., un, so {u1, ..., un} is an orthonormal basis of Rⁿ consisting of eigenvectors of A, and this proves the theorem.
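The resulting statement, that a real symmetric matrix admits an orthonormal basis of eigenvectors with real eigenvalues, can also be checked directly with np.linalg.eigh; the sketch below (assuming NumPy) verifies that the matrix Q of eigenvectors is orthogonal and that A = Q diag(λ) Qᵀ.

```python
import numpy as np

# Sketch (assumption: NumPy): np.linalg.eigh returns real eigenvalues and an
# orthonormal basis of eigenvectors for a real symmetric matrix, i.e.
# Q^T Q = I and A = Q diag(lambda) Q^T.

rng = np.random.default_rng(3)
n = 6
B = rng.standard_normal((n, n))
A = (B + B.T) / 2

vals, Q = np.linalg.eigh(A)
print("Q^T Q ≈ I:", np.allclose(Q.T @ Q, np.eye(n)))
print("A ≈ Q diag(vals) Q^T:", np.allclose(A, Q @ np.diag(vals) @ Q.T))
```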