6 Orthogonality and Least Squares
6.1 INNER PRODUCT, LENGTH, AND ORTHOGONALITY

INNER PRODUCT
If u and v are vectors in R^n, then we regard u and v as n x 1 matrices.
The transpose u^T is a 1 x n matrix, and the matrix product u^T v is a 1 x 1 matrix, which we write as a single real number (a scalar) without brackets.
The number u^T v is called the inner product of u and v, and it is written as u · v.
This inner product is also referred to as a dot product.

If u = (u1, u2, ..., un) and v = (v1, v2, ..., vn), then the inner product of u and v is
u · v = u^T v = u1 v1 + u2 v2 + ... + un vn.

Theorem 1: Let u, v, and w be vectors in R^n, and let c be a scalar. Then
a. u · v = v · u
b. (u + v) · w = u · w + v · w
c. (c u) · v = c (u · v) = u · (c v)
d. u · u >= 0, and u · u = 0 if and only if u = 0.
Properties (b) and (c) can be combined several times to produce the following useful rule:
(c1 u1 + ... + cp up) · w = c1 (u1 · w) + ... + cp (up · w).
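The definition above translates directly into code. Here is a minimal sketch (the slides name no software, so NumPy is an assumption) that computes u · v both as the matrix product u^T v and as the componentwise sum, and spot-checks properties (a) and (c) of Theorem 1.

    import numpy as np

    u = np.array([2.0, -5.0, -1.0])
    v = np.array([3.0, 2.0, -3.0])

    # Inner product u . v as the matrix product u^T v ...
    print(u @ v)          # -1.0
    # ... and as the componentwise sum u1*v1 + ... + un*vn
    print(np.sum(u * v))  # -1.0, the same value

    # Theorem 1(a): u . v = v . u
    print(np.isclose(u @ v, v @ u))              # True
    # Theorem 1(c): (cu) . v = c (u . v)
    print(np.isclose((4 * u) @ v, 4 * (u @ v)))  # True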
THE LENGTH OF A VECTOR
If v is in R^n, with entries v1, ..., vn, then the square root of v · v is defined because v · v is nonnegative.
Definition: The length (or norm) of v is the nonnegative scalar ||v|| defined by
||v|| = sqrt(v · v) = sqrt(v1^2 + v2^2 + ... + vn^2), and ||v||^2 = v · v.
Suppose v is in R^2, say, v = (a, b). If we identify v with a geometric point in the plane, as usual, then ||v|| coincides with the standard notion of the length of the line segment from the origin to v.
This follows from the Pythagorean Theorem applied to a triangle such as the one shown in the following figure.
For any scalar c, the length of cv is |c| times the length of v. That is,
||cv|| = |c| ||v||.
(To check this, it suffices to show that ||cv||^2 = c^2 ||v||^2.)

A vector whose length is 1 is called a unit vector.
If we divide a nonzero vector v by its length (that is, multiply by 1/||v||), we obtain a unit vector u, because the length of u is (1/||v||) ||v|| = 1.
The process of creating u from v is sometimes called normalizing v, and we say that u is in the same direction as v.

Example 2: Let v = (1, -2, 2, 0). Find a unit vector u in the same direction as v.
Solution: First, compute the length of v:
||v||^2 = v · v = 1^2 + (-2)^2 + 2^2 + 0^2 = 9, so ||v|| = 3.
Then, multiply v by 1/||v|| to obtain
u = (1/||v||) v = (1/3) v = (1/3, -2/3, 2/3, 0).
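A small sketch of Example 2 (again assuming NumPy, which the slides do not prescribe): the length is the square root of v · v, and dividing by it normalizes the vector.

    import numpy as np

    v = np.array([1.0, -2.0, 2.0, 0.0])   # vector from Example 2

    length = np.sqrt(v @ v)                # ||v|| = sqrt(v . v)
    print(length)                          # 3.0, same as np.linalg.norm(v)

    u = v / length                         # unit vector in the same direction as v
    print(u)                               # [ 0.333... -0.666...  0.666...  0. ]
    print(np.linalg.norm(u))               # 1.0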
DISTANCE IN R^n
Definition: For u and v in R^n, the distance between u and v, written as dist(u, v), is the length of the vector u - v. That is,
dist(u, v) = ||u - v||.

Example 4: Compute the distance between the vectors u = (7, 1) and v = (3, 2).
Solution: Calculate
u - v = (7, 1) - (3, 2) = (4, -1),
||u - v|| = sqrt(4^2 + (-1)^2) = sqrt(17).
The vectors u, v, and u - v are shown in the following figure. When the vector u - v is added to v, the result is u.
Notice that the parallelogram in that figure shows that the distance from u to v is the same as the distance from u - v to 0.

ORTHOGONAL VECTORS
Consider R^2 or R^3 and two lines through the origin determined by vectors u and v.
See the figure below. The two lines shown in the figure are geometrically perpendicular if and only if the distance from u to v is the same as the distance from u to -v.
This is the same as requiring the squares of the distances to be the same.
Now
[dist(u, -v)]^2 = ||u - (-v)||^2 = ||u + v||^2
               = (u + v) · (u + v)
               = u · (u + v) + v · (u + v)        [Theorem 1(b)]
               = u · u + u · v + v · u + v · v    [Theorem 1(a), (b)]
               = ||u||^2 + ||v||^2 + 2 u · v      [Theorem 1(a)]
The same calculations with v and -v interchanged show that
[dist(u, v)]^2 = ||u||^2 + ||v||^2 - 2 u · v.
The two squared distances are equal if and only if 2 u · v = -2 u · v, which happens if and only if u · v = 0.
This calculation shows that when vectors u and v are identified with geometric points, the corresponding lines through the points and the origin are perpendicular if and only if u · v = 0.

Definition: Two vectors u and v in R^n are orthogonal (to each other) if u · v = 0.
The zero vector is orthogonal to every vector in R^n because 0^T v = 0 for all v.

THE PYTHAGOREAN THEOREM
Theorem 2: Two vectors u and v are orthogonal if and only if ||u + v||^2 = ||u||^2 + ||v||^2.

ORTHOGONAL COMPLEMENTS
If a vector z is orthogonal to every vector in a subspace W of R^n, then z is said to be orthogonal to W.
The set of all vectors z that are orthogonal to W is called the orthogonal complement of W and is denoted by W⊥ (read as "W perpendicular" or simply "W perp").
1. A vector x is in W⊥ if and only if x is orthogonal to every vector in a set that spans W.
2. W⊥ is a subspace of R^n.

Theorem 3: Let A be an m x n matrix. The orthogonal complement of the row space of A is the null space of A, and the orthogonal complement of the column space of A is the null space of A^T:
(Row A)⊥ = Nul A and (Col A)⊥ = Nul A^T.

Proof: The row-column rule for computing Ax shows that if x is in Nul A, then x is orthogonal to each row of A (with the rows treated as vectors in R^n).
Since the rows of A span the row space, x is orthogonal to Row A.
Conversely, if x is orthogonal to Row A, then x is certainly orthogonal to each row of A, and hence Ax = 0.
This proves the first statement of the theorem.
Since this statement is true for any matrix, it is true for A^T.
That is, the orthogonal complement of the row space of A^T is the null space of A^T.
This proves the second statement, because Row A^T = Col A.

ANGLES IN R^2 AND R^3 (OPTIONAL)
If u and v are nonzero vectors in either R^2 or R^3, then there is a nice connection between their inner product and the angle θ between the two line segments from the origin to the points identified with u and v.
The formula is
u · v = ||u|| ||v|| cos θ    (2)
To verify this formula for vectors in R^2, consider the triangle shown in the following figure, with sides of lengths ||u||, ||v||, and ||u - v||.
By the law of cosines,
||u - v||^2 = ||u||^2 + ||v||^2 - 2 ||u|| ||v|| cos θ,
which can be rearranged to produce
||u|| ||v|| cos θ = (1/2) [ ||u||^2 + ||v||^2 - ||u - v||^2 ]
                  = (1/2) [ u1^2 + u2^2 + v1^2 + v2^2 - (u1 - v1)^2 - (u2 - v2)^2 ]
                  = u1 v1 + u2 v2
                  = u · v.
The verification for R^3 is similar.
When n > 3, formula (2) may be used to define the angle between two vectors in R^n.
In statistics, the value of cos θ defined by (2) for suitable vectors u and v is called a correlation coefficient.
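Formula (2) gives a direct way to compute the angle between two vectors. A brief sketch (NumPy assumed; the vectors below are hypothetical, chosen only for illustration):

    import numpy as np

    u = np.array([1.0, 2.0, 3.0])
    v = np.array([2.0, 1.0, 0.0])

    # cos(theta) = (u . v) / (||u|| ||v||), formula (2)
    cos_theta = (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))
    theta = np.arccos(cos_theta)          # angle in radians
    print(cos_theta, np.degrees(theta))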
6 Orthogonality and Least Squares
6.2 ORTHOGONAL SETS

ORTHOGONAL SETS
A set of vectors {u1, ..., up} in R^n is said to be an orthogonal set if each pair of distinct vectors from the set is orthogonal, that is, if ui · uj = 0 whenever i ≠ j.

Theorem 4: If S = {u1, ..., up} is an orthogonal set of nonzero vectors in R^n, then S is linearly independent and hence is a basis for the subspace spanned by S.
Proof: If 0 = c1 u1 + c2 u2 + ... + cp up for some scalars c1, ..., cp, then
0 = 0 · u1 = (c1 u1 + c2 u2 + ... + cp up) · u1 = c1 (u1 · u1) + c2 (u2 · u1) + ... + cp (up · u1) = c1 (u1 · u1)
because u1 is orthogonal to u2, ..., up.
Since u1 is nonzero, u1 · u1 is not zero and so c1 = 0.
Similarly, c2, ..., cp must be zero.
Thus S is linearly independent.

Definition: An orthogonal basis for a subspace W of R^n is a basis for W that is also an orthogonal set.

Theorem 5: Let {u1, ..., up} be an orthogonal basis for a subspace W of R^n. For each y in W, the weights in the linear combination
y = c1 u1 + ... + cp up
are given by
cj = (y · uj) / (uj · uj)    (j = 1, ..., p).
Proof: The orthogonality of {u1, ..., up} shows that
y · u1 = (c1 u1 + c2 u2 + ... + cp up) · u1 = c1 (u1 · u1).
Since u1 · u1 is not zero, the equation above can be solved for c1.
To find cj for j = 2, ..., p, compute y · uj and solve for cj.
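Theorem 5 says that coordinates relative to an orthogonal basis need no equation solving, only inner products. A sketch (NumPy assumed) using the orthogonal set (3, 1, 1), (-1, 2, 1), (-1, -4, 7) (the unnormalized version of the orthonormal basis in Example 2 later in this section) and a hypothetical vector y:

    import numpy as np

    u1 = np.array([3.0, 1.0, 1.0])
    u2 = np.array([-1.0, 2.0, 1.0])
    u3 = np.array([-1.0, -4.0, 7.0])
    y  = np.array([6.0, 1.0, -8.0])

    # Theorem 5: c_j = (y . u_j) / (u_j . u_j)
    c = [(y @ u) / (u @ u) for u in (u1, u2, u3)]
    print(c)                                              # the weights c1, c2, c3

    # Reconstruct y from the weights
    print(np.allclose(c[0]*u1 + c[1]*u2 + c[2]*u3, y))    # True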

AN ORTHOGONAL PROJECTION
Given a nonzero vector u in R^n, consider the problem of decomposing a vector y in R^n into the sum of two vectors, one a multiple of u and the other orthogonal to u.
We wish to write
y = ŷ + z    (1)
where ŷ = αu for some scalar α and z is some vector orthogonal to u. See the following figure.
The vector ŷ is called the orthogonal projection of y onto u, and the vector z is called the component of y orthogonal to u.
Given any scalar α, let z = y - αu, so that (1) is satisfied.
Then y - ŷ is orthogonal to u if and only if
0 = (y - αu) · u = y · u - α (u · u).
That is, (1) is satisfied with z orthogonal to u if and only if
α = (y · u)/(u · u) and ŷ = ((y · u)/(u · u)) u.

If c is any nonzero scalar and if u is replaced by cu in the definition of ŷ, then the orthogonal projection of y onto cu is exactly the same as the orthogonal projection of y onto u.
Hence this projection is determined by the subspace L spanned by u (the line through u and 0).
Sometimes ŷ is denoted by proj_L y and is called the orthogonal projection of y onto L. That is,
ŷ = proj_L y = ((y · u)/(u · u)) u    (2)

Example 3: Let y = (7, 6) and u = (4, 2). Find the orthogonal projection of y onto u. Then write y as the sum of two orthogonal vectors, one in Span{u} and one orthogonal to u.
Solution: Compute
y · u = 7(4) + 6(2) = 40, u · u = 4^2 + 2^2 = 20.
The orthogonal projection of y onto u is
ŷ = ((y · u)/(u · u)) u = (40/20) u = 2 (4, 2) = (8, 4),
and the component of y orthogonal to u is
y - ŷ = (7, 6) - (8, 4) = (-1, 2).
The sum of these two vectors is y. That is,
(7, 6) = (8, 4) + (-1, 2).
The decomposition of y is illustrated in the following figure.
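A sketch of Example 3 in code (NumPy assumed), using formula (2) for the projection and checking that the remaining component is orthogonal to u:

    import numpy as np

    y = np.array([7.0, 6.0])
    u = np.array([4.0, 2.0])

    y_hat = (y @ u) / (u @ u) * u    # orthogonal projection of y onto u, formula (2)
    z = y - y_hat                    # component of y orthogonal to u

    print(y_hat)                     # [8. 4.]
    print(z)                         # [-1.  2.]
    print(z @ u)                     # 0.0 -- z is orthogonal to u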
Note: If the calculations above are correct, then {ŷ, y - ŷ} will be an orthogonal set.
As a check, compute
ŷ · (y - ŷ) = (8, 4) · (-1, 2) = -8 + 8 = 0.
Since the line segment in the figure between y and ŷ is perpendicular to L, by construction of ŷ, the point identified with ŷ is the closest point of L to y.

ORTHONORMAL SETS
A set {u1, ..., up} is an orthonormal set if it is an orthogonal set of unit vectors.
If W is the subspace spanned by such a set, then {u1, ..., up} is an orthonormal basis for W, since the set is automatically linearly independent, by Theorem 4.
The simplest example of an orthonormal set is the standard basis {e1, ..., en} for R^n.
Any nonempty subset of {e1, ..., en} is orthonormal, too.

Example 2: Show that {v1, v2, v3} is an orthonormal basis of R^3, where
v1 = (3/sqrt(11), 1/sqrt(11), 1/sqrt(11)), v2 = (-1/sqrt(6), 2/sqrt(6), 1/sqrt(6)), v3 = (-1/sqrt(66), -4/sqrt(66), 7/sqrt(66)).
Solution: Compute
v1 · v2 = -3/sqrt(66) + 2/sqrt(66) + 1/sqrt(66) = 0,
v1 · v3 = -3/sqrt(726) - 4/sqrt(726) + 7/sqrt(726) = 0,
v2 · v3 = 1/sqrt(396) - 8/sqrt(396) + 7/sqrt(396) = 0.
Thus {v1, v2, v3} is an orthogonal set.
Also,
v1 · v1 = 9/11 + 1/11 + 1/11 = 1, v2 · v2 = 1/6 + 4/6 + 1/6 = 1, v3 · v3 = 1/66 + 16/66 + 49/66 = 1,
which shows that v1, v2, and v3 are unit vectors.
Thus {v1, v2, v3} is an orthonormal set.
Since the set is linearly independent, its three vectors form a basis for R^3. See the following figure.
When the vectors in an orthogonal set of nonzero vectors are normalized to have unit length, the new vectors will still be orthogonal, and hence the new set will be an orthonormal set.

Theorem 6: An m x n matrix U has orthonormal columns if and only if U^T U = I.
Proof: To simplify notation, we suppose that U has only three columns, each a vector in R^m.
Let U = [u1 u2 u3] and compute
U^T U = [u1^T; u2^T; u3^T] [u1 u2 u3] = [u1^T u1  u1^T u2  u1^T u3; u2^T u1  u2^T u2  u2^T u3; u3^T u1  u3^T u2  u3^T u3]    (4)
The entries in the matrix at the right are inner products, using transpose notation.
The columns of U are orthogonal if and only if
u1^T u2 = u2^T u1 = 0, u1^T u3 = u3^T u1 = 0, u2^T u3 = u3^T u2 = 0.    (5)
The columns of U all have unit length if and only if
u1^T u1 = 1, u2^T u2 = 1, u3^T u3 = 1.    (6)
The theorem follows immediately from (4)-(6).

Theorem 7: Let U be an m x n matrix with orthonormal columns, and let x and y be in R^n. Then
a. ||Ux|| = ||x||
b. (Ux) · (Uy) = x · y
c. (Ux) · (Uy) = 0 if and only if x · y = 0.
Properties (a) and (c) say that the linear mapping x ↦ Ux preserves lengths and orthogonality.
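Theorems 6 and 7 are easy to check numerically. A sketch (NumPy assumed) with two orthonormal columns in R^3:

    import numpy as np

    # Columns: (1/sqrt(2), 1/sqrt(2), 0) and (2/3, -2/3, 1/3), an orthonormal pair.
    U = np.array([[1/np.sqrt(2),  2/3],
                  [1/np.sqrt(2), -2/3],
                  [0.0,           1/3]])

    print(np.allclose(U.T @ U, np.eye(2)))            # True: Theorem 6

    x = np.array([np.sqrt(2), 3.0])
    print(np.linalg.norm(U @ x), np.linalg.norm(x))   # equal: Theorem 7(a)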
6 Orthogonality and Least Squares
6.3 ORTHOGONAL PROJECTIONS

ORTHOGONAL PROJECTIONS
The orthogonal projection of a point in R^2 onto a line through the origin has an important analogue in R^n.
Given a vector y and a subspace W in R^n, there is a vector ŷ in W such that (1) ŷ is the unique vector in W for which y - ŷ is orthogonal to W, and (2) ŷ is the unique vector in W closest to y. See the following figure.

THE ORTHOGONAL DECOMPOSITION THEOREM
These two properties of ŷ provide the key to finding the least-squares solutions of linear systems.
Theorem 8 (The Orthogonal Decomposition Theorem): Let W be a subspace of R^n. Then each y in R^n can be written uniquely in the form
y = ŷ + z    (1)
where ŷ is in W and z is in W⊥.
In fact, if {u1, ..., up} is any orthogonal basis of W, then
ŷ = ((y · u1)/(u1 · u1)) u1 + ... + ((y · up)/(up · up)) up    (2)
and z = y - ŷ.
The vector ŷ in (1) is called the orthogonal projection of y onto W and often is written as proj_W y. See the following figure.

Proof: Let {u1, ..., up} be any orthogonal basis for W, and define ŷ by (2).
Then ŷ is in W because ŷ is a linear combination of the basis u1, ..., up.
Let z = y - ŷ.
Since u1 is orthogonal to u2, ..., up, it follows from (2) that
z · u1 = (y - ŷ) · u1 = y · u1 - ((y · u1)/(u1 · u1)) (u1 · u1) - 0 - ... - 0 = y · u1 - y · u1 = 0.
Thus z is orthogonal to u1.
Similarly, z is orthogonal to each uj in the basis for W.
Hence z is orthogonal to every vector in W.
That is, z is in W⊥.
To show that the decomposition in (1) is unique, suppose y can also be written as y = ŷ1 + z1, with ŷ1 in W and z1 in W⊥.
Then ŷ + z = ŷ1 + z1 (since both sides equal y), and so
ŷ - ŷ1 = z1 - z.
This equality shows that the vector v = ŷ - ŷ1 is in W and in W⊥ (because z1 and z are both in W⊥, and W⊥ is a subspace).
Hence v · v = 0, which shows that v = 0.
This proves that ŷ = ŷ1 and also z1 = z.

The uniqueness of the decomposition (1) shows that the orthogonal projection ŷ depends only on W and not on the particular basis used in (2).

Example 1: Let u1 = (2, 5, -1), u2 = (-2, 1, 1), and y = (1, 2, 3).
Observe that {u1, u2} is an orthogonal basis for W = Span{u1, u2}. Write y as the sum of a vector in W and a vector orthogonal to W.
Solution: The orthogonal projection of y onto W is
ŷ = ((y · u1)/(u1 · u1)) u1 + ((y · u2)/(u2 · u2)) u2
  = (9/30)(2, 5, -1) + (3/6)(-2, 1, 1) = (9/30)(2, 5, -1) + (15/30)(-2, 1, 1) = (-2/5, 2, 1/5).
Also
y - ŷ = (1, 2, 3) - (-2/5, 2, 1/5) = (7/5, 0, 14/5).
Theorem 8 ensures that y - ŷ is in W⊥.
To check the calculations, verify that y - ŷ is orthogonal to both u1 and u2 and hence to all of W.
The desired decomposition of y is
y = (1, 2, 3) = (-2/5, 2, 1/5) + (7/5, 0, 14/5).

PROPERTIES OF ORTHOGONAL PROJECTIONS
If {u1, ..., up} is an orthogonal basis for W and if y happens to be in W, then the formula for proj_W y is exactly the same as the representation of y given in Theorem 5 in Section 6.2.
In this case, proj_W y = y: if y is in W = Span{u1, ..., up}, then proj_W y = y.
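A sketch of Example 1 in code (NumPy assumed): formula (2) sums the projections onto each orthogonal basis vector of W, and the remainder lies in W⊥.

    import numpy as np

    u1 = np.array([2.0, 5.0, -1.0])
    u2 = np.array([-2.0, 1.0, 1.0])
    y  = np.array([1.0, 2.0, 3.0])

    # Formula (2): projection of y onto W = Span{u1, u2}
    y_hat = (y @ u1)/(u1 @ u1) * u1 + (y @ u2)/(u2 @ u2) * u2
    z = y - y_hat

    print(y_hat)              # [-0.4  2.   0.2]
    print(z)                  # [ 1.4  0.   2.8]
    print(z @ u1, z @ u2)     # both 0: z is orthogonal to W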

THE BEST APPROXIMATION THEOREM
Theorem 9: Let W be a subspace of R^n, let y be any vector in R^n, and let ŷ be the orthogonal projection of y onto W. Then ŷ is the closest point in W to y, in the sense that
||y - ŷ|| < ||y - v||    (3)
for all v in W distinct from ŷ.
The vector ŷ in Theorem 9 is called the best approximation to y by elements of W.
The distance from y to v, given by ||y - v||, can be regarded as the "error" of using v in place of y.
Theorem 9 says that this error is minimized when v = ŷ.
Inequality (3) leads to a new proof that ŷ does not depend on the particular orthogonal basis used to compute it.
If a different orthogonal basis for W were used to construct an orthogonal projection of y, then this projection would also be the closest point in W to y, namely, ŷ.

Proof: Take v in W distinct from ŷ. See the following figure.
Then ŷ - v is in W.
By the Orthogonal Decomposition Theorem, y - ŷ is orthogonal to W.
In particular, y - ŷ is orthogonal to ŷ - v (which is in W).
Since
y - v = (y - ŷ) + (ŷ - v),
the Pythagorean Theorem gives
||y - v||^2 = ||y - ŷ||^2 + ||ŷ - v||^2.
(See the colored right triangle in the figure; the length of each side is labeled.)
Now ||ŷ - v||^2 > 0 because ŷ ≠ v, and so inequality (3) follows immediately.

PROPERTIES OF ORTHOGONAL PROJECTIONS
Example 4: The distance from a point y in R^n to a subspace W is defined as the distance from y to the nearest point in W. Find the distance from y to W = Span{u1, u2}, where
y = (-1, -5, 10), u1 = (5, -2, 1), u2 = (1, 2, -1).
Solution: By the Best Approximation Theorem, the distance from y to W is ||y - ŷ||, where ŷ = proj_W y.
Since {u1, u2} is an orthogonal basis for W,
ŷ = (15/30) u1 + (-21/6) u2 = (1/2)(5, -2, 1) - (7/2)(1, 2, -1) = (-1, -8, 4).
y - ŷ = (-1, -5, 10) - (-1, -8, 4) = (0, 3, 6),
||y - ŷ||^2 = 3^2 + 6^2 = 45.
The distance from y to W is sqrt(45) = 3 sqrt(5).

Theorem 10: If {u1, ..., up} is an orthonormal basis for a subspace W of R^n, then
proj_W y = (y · u1) u1 + (y · u2) u2 + ... + (y · up) up    (4)
If U = [u1 u2 ... up], then
proj_W y = U U^T y for all y in R^n    (5)
Proof: Formula (4) follows immediately from (2) in Theorem 8.
Also, (4) shows that proj_W y is a linear combination of the columns of U using the weights y · u1, y · u2, ..., y · up.
The weights can be written as u1^T y, u2^T y, ..., up^T y, showing that they are the entries in U^T y and justifying (5).

6 Orthogonality and Least Squares
6.4 THE GRAM-SCHMIDT PROCESS

THE GRAM-SCHMIDT PROCESS
Theorem 11 (The Gram-Schmidt Process): Given a basis {x1, ..., xp} for a nonzero subspace W of R^n, define
v1 = x1
v2 = x2 - ((x2 · v1)/(v1 · v1)) v1
v3 = x3 - ((x3 · v1)/(v1 · v1)) v1 - ((x3 · v2)/(v2 · v2)) v2
...
vp = xp - ((xp · v1)/(v1 · v1)) v1 - ... - ((xp · v(p-1))/(v(p-1) · v(p-1))) v(p-1)
Then {v1, ..., vp} is an orthogonal basis for W. In addition,
Span{v1, ..., vk} = Span{x1, ..., xk} for 1 <= k <= p.    (1)

Proof: For 1 <= k <= p, let Wk = Span{x1, ..., xk}. Set v1 = x1, so that Span{v1} = Span{x1}.
Suppose, for some k < p, we have constructed v1, ..., vk so that {v1, ..., vk} is an orthogonal basis for Wk. Define
v(k+1) = x(k+1) - proj_Wk x(k+1)    (2)
By the Orthogonal Decomposition Theorem, v(k+1) is orthogonal to Wk. Furthermore, v(k+1) ≠ 0 because x(k+1) is not in Wk = Span{x1, ..., xk}.
Hence {v1, ..., v(k+1)} is an orthogonal set of nonzero vectors in the (k+1)-dimensional space W(k+1). By the Basis Theorem in Section 4.5, this set is an orthogonal basis for W(k+1). Hence W(k+1) = Span{v1, ..., v(k+1)}. When k + 1 = p, the process stops.

ORTHONORMAL BASES
Example 3: Example 1 constructed the orthogonal basis
v1 = (3, 6, 0), v2 = (0, 0, 2).
An orthonormal basis is
u1 = (1/||v1||) v1 = (1/sqrt(45)) (3, 6, 0) = (1/sqrt(5), 2/sqrt(5), 0),
u2 = (1/||v2||) v2 = (0, 0, 1).
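The Gram-Schmidt recipe of Theorem 11 can be transcribed almost literally. A sketch (NumPy assumed; the columns of X are assumed linearly independent), checked on the basis from Example 1:

    import numpy as np

    def gram_schmidt(X):
        """Return an orthogonal basis for the column space of X (Theorem 11)."""
        V = []
        for x in X.T:                       # process x1, x2, ... in order
            v = x.astype(float).copy()
            for w in V:                     # subtract projection onto each earlier v
                v -= (x @ w) / (w @ w) * w
            V.append(v)
        return np.column_stack(V)

    X = np.column_stack([[3.0, 6.0, 0.0], [1.0, 2.0, 2.0]])   # x1, x2 from Example 1
    print(gram_schmidt(X))
    # columns (3, 6, 0) and (0, 0, 2): the orthogonal basis found above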

QR FACTORIZATION OF MATRICES
Theorem 12 (The QR Factorization): If A is an m x n matrix with linearly independent columns, then A can be factored as A = QR, where Q is an m x n matrix whose columns form an orthonormal basis for Col A and R is an n x n upper triangular invertible matrix with positive entries on its diagonal.
Proof: The columns of A form a basis {x1, ..., xn} for Col A. Construct an orthonormal basis {u1, ..., un} for W = Col A with property (1) in Theorem 11. This basis may be constructed by the Gram-Schmidt process or some other means.
Let
Q = [u1 u2 ... un]
For k = 1, ..., n, xk is in Span{x1, ..., xk} = Span{u1, ..., uk}. So there are constants r1k, ..., rkk such that
xk = r1k u1 + ... + rkk uk + 0 · u(k+1) + ... + 0 · un.
We may assume that rkk >= 0 (if rkk < 0, multiply both rkk and uk by -1). This shows that xk is a linear combination of the columns of Q using as weights the entries in the vector
rk = (r1k, ..., rkk, 0, ..., 0).
That is, xk = Q rk for k = 1, ..., n. Let R = [r1 ... rn]. Then
A = [x1 ... xn] = [Q r1 ... Q rn] = QR.
The fact that R is invertible follows easily from the fact that the columns of A are linearly independent. Since R is clearly upper triangular, its nonnegative diagonal entries must be positive.

Example 4: Find a QR factorization of A = [1 0 0; 1 1 0; 1 1 1; 1 1 1].
Solution: The columns of A are the vectors x1, x2, and x3 in Example 2. An orthogonal basis for Col A = Span{x1, x2, x3} was found in that example:
v1 = (1, 1, 1, 1), v2 = (-3, 1, 1, 1), v3 = (0, -2/3, 1/3, 1/3).
To simplify the arithmetic that follows, scale v3 by letting v3' = 3 v3 = (0, -2, 1, 1). Then normalize the three vectors to obtain u1, u2, and u3, and use these vectors as the columns of Q:
Q = [1/2 -3/sqrt(12) 0; 1/2 1/sqrt(12) -2/sqrt(6); 1/2 1/sqrt(12) 1/sqrt(6); 1/2 1/sqrt(12) 1/sqrt(6)]
By construction, the first k columns of Q are an orthonormal basis of Span{x1, ..., xk}.
From the proof of Theorem 12, A = QR for some R. To find R, observe that Q^T Q = I, because the columns of Q are orthonormal. Hence
Q^T A = Q^T (QR) = IR = R
and
R = [2 3/2 1; 0 3/sqrt(12) 2/sqrt(12); 0 0 2/sqrt(6)].
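NumPy (assumed here) has a built-in QR routine; its sign conventions can differ from the hand computation above (a column of Q and the corresponding row of R may both be negated), so the check is that Q R reproduces A:

    import numpy as np

    A = np.array([[1.0, 0.0, 0.0],
                  [1.0, 1.0, 0.0],
                  [1.0, 1.0, 1.0],
                  [1.0, 1.0, 1.0]])          # matrix from Example 4

    Q, R = np.linalg.qr(A)
    print(np.allclose(Q @ R, A))             # True
    print(np.allclose(Q.T @ Q, np.eye(3)))   # True: orthonormal columns
    print(R)                                 # upper triangular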
6 Orthogonality and Least Squares
6.5 LEAST-SQUARES PROBLEMS

LEAST-SQUARES PROBLEMS
Definition: If A is m x n and b is in R^m, a least-squares solution of Ax = b is an x̂ in R^n such that
||b - A x̂|| <= ||b - Ax||
for all x in R^n.
The most important aspect of the least-squares problem is that no matter what x we select, the vector Ax will necessarily be in the column space, Col A.
So we seek an x that makes Ax the closest point in Col A to b. See the following figure.

SOLUTION OF THE GENERAL LEAST-SQUARES PROBLEM
Given A and b, apply the Best Approximation Theorem to the subspace Col A.
Let
b̂ = proj_Col A b.
Because b̂ is in the column space of A, the equation Ax = b̂ is consistent, and there is an x̂ in R^n such that
A x̂ = b̂    (1)
Since b̂ is the closest point in Col A to b, a vector x̂ is a least-squares solution of Ax = b if and only if x̂ satisfies (1).
Such an x̂ in R^n is a list of weights that will build b̂ out of the columns of A. See the following figure.
Suppose x̂ satisfies A x̂ = b̂.
By the Orthogonal Decomposition Theorem, the projection b̂ has the property that b - b̂ is orthogonal to Col A, so b - A x̂ is orthogonal to each column of A.
If aj is any column of A, then aj · (b - A x̂) = 0, and aj^T (b - A x̂) = 0.
Since each aj^T is a row of A^T,
A^T (b - A x̂) = 0    (2)
Thus
A^T b - A^T A x̂ = 0
A^T A x̂ = A^T b
These calculations show that each least-squares solution of Ax = b satisfies the equation
A^T A x = A^T b    (3)
The matrix equation (3) represents a system of equations called the normal equations for Ax = b.
A solution of (3) is often denoted by x̂.

Theorem 13: The set of least-squares solutions of Ax = b coincides with the nonempty set of solutions of the normal equations A^T A x = A^T b.
Proof: The set of least-squares solutions is nonempty, and each least-squares solution x̂ satisfies the normal equations.
Conversely, suppose x̂ satisfies A^T A x̂ = A^T b.
Then x̂ satisfies (2), which shows that b - A x̂ is orthogonal to the rows of A^T and hence is orthogonal to the columns of A.
Since the columns of A span Col A, the vector b - A x̂ is orthogonal to all of Col A.
Hence the equation
b = A x̂ + (b - A x̂)
is a decomposition of b into the sum of a vector in Col A and a vector orthogonal to Col A.
By the uniqueness of the orthogonal decomposition, A x̂ must be the orthogonal projection of b onto Col A.
That is, A x̂ = b̂, and x̂ is a least-squares solution.

Example 1: Find a least-squares solution of the inconsistent system Ax = b for
A = [4 0; 0 2; 1 1], b = (2, 0, 11).
Solution: To use the normal equations (3), compute:
A^T A = [4 0 1; 0 2 1] [4 0; 0 2; 1 1] = [17 1; 1 5]
A^T b = [4 0 1; 0 2 1] (2, 0, 11) = (19, 11).
Then the equation A^T A x = A^T b becomes
[17 1; 1 5] (x1, x2) = (19, 11).
Row operations can be used to solve this system, but since A^T A is invertible and 2 x 2, it is probably faster to compute
(A^T A)^(-1) = (1/84) [5 -1; -1 17]
and then solve A^T A x = A^T b as
x̂ = (A^T A)^(-1) A^T b = (1/84) [5 -1; -1 17] (19, 11) = (1/84) (84, 168) = (1, 2).

Theorem 14: Let A be an m x n matrix. The following statements are logically equivalent:
a. The equation Ax = b has a unique least-squares solution for each b in R^m.
b. The columns of A are linearly independent.
c. The matrix A^T A is invertible.
When these statements are true, the least-squares solution is given by
x̂ = (A^T A)^(-1) A^T b    (4)
When a least-squares solution x̂ is used to produce A x̂ as an approximation to b, the distance from b to A x̂ is called the least-squares error of this approximation.
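A sketch of Example 1 in code (NumPy assumed): form and solve the normal equations (3), compare with NumPy's built-in least-squares solver, and report the least-squares error.

    import numpy as np

    A = np.array([[4.0, 0.0],
                  [0.0, 2.0],
                  [1.0, 1.0]])
    b = np.array([2.0, 0.0, 11.0])

    # Normal equations: A^T A x = A^T b
    x_hat = np.linalg.solve(A.T @ A, A.T @ b)
    print(x_hat)                                    # [1. 2.]

    # Same answer from the library least-squares routine
    print(np.linalg.lstsq(A, b, rcond=None)[0])     # [1. 2.]

    # Least-squares error ||b - A x_hat||
    print(np.linalg.norm(b - A @ x_hat))            # sqrt(84)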
ALTERNATIVE CALCULATIONS OF LEAST-SQUARES SOLUTIONS
Example 4: Find a least-squares solution of Ax = b for
A = [1 -6; 1 -2; 1 1; 1 7], b = (-1, 2, 1, 6).
Solution: Because the columns a1 and a2 of A are orthogonal, the orthogonal projection of b onto Col A is given by
b̂ = ((b · a1)/(a1 · a1)) a1 + ((b · a2)/(a2 · a2)) a2 = (8/4) a1 + (45/90) a2    (5)
  = (2, 2, 2, 2) + (-3, -1, 1/2, 7/2) = (-1, 1, 5/2, 11/2).
Now that b̂ is known, we can solve A x̂ = b̂.
But this is trivial, since we already know what weights to place on the columns of A to produce b̂.
It is clear from (5) that
x̂ = (8/4, 45/90) = (2, 1/2).

Theorem 15: Given an m x n matrix A with linearly independent columns, let A = QR be a QR factorization of A as in Theorem 12. Then, for each b in R^m, the equation Ax = b has a unique least-squares solution, given by
x̂ = R^(-1) Q^T b    (6)
Proof: Let x̂ = R^(-1) Q^T b.
Then
A x̂ = QR x̂ = QR R^(-1) Q^T b = Q Q^T b.
The columns of Q form an orthonormal basis for Col A.
Hence, by Theorem 10, Q Q^T b is the orthogonal projection b̂ of b onto Col A.
Then A x̂ = b̂, which shows that x̂ is a least-squares solution of Ax = b.
The uniqueness of x̂ follows from Theorem 14.
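Theorem 15 in code (NumPy assumed), applied to the matrix of Example 4; solving R x = Q^T b is preferred to forming R^(-1) explicitly:

    import numpy as np

    A = np.array([[1.0, -6.0],
                  [1.0, -2.0],
                  [1.0,  1.0],
                  [1.0,  7.0]])
    b = np.array([-1.0, 2.0, 1.0, 6.0])

    Q, R = np.linalg.qr(A)
    x_hat = np.linalg.solve(R, Q.T @ b)   # formula (6): x_hat = R^{-1} Q^T b
    print(x_hat)                          # [2.  0.5], matching Example 4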
5 Eigenvalues and Eigenvectors
5.1 EIGENVECTORS AND EIGENVALUES

EIGENVECTORS AND EIGENVALUES
Definition: An eigenvector of an n x n matrix A is a nonzero vector x such that Ax = λx for some scalar λ. A scalar λ is called an eigenvalue of A if there is a nontrivial solution x of Ax = λx; such an x is called an eigenvector corresponding to λ.
λ is an eigenvalue of an n x n matrix A if and only if the equation
(A - λI) x = 0    (3)
has a nontrivial solution.
The set of all solutions of (3) is just the null space of the matrix A - λI.
So this set is a subspace of R^n and is called the eigenspace of A corresponding to λ.
The eigenspace consists of the zero vector and all the eigenvectors corresponding to λ.

Example 3: Show that 7 is an eigenvalue of the matrix A = [1 6; 5 2], and find the corresponding eigenvectors.
Solution: The scalar 7 is an eigenvalue of A if and only if the equation
Ax = 7x    (1)
has a nontrivial solution.
But (1) is equivalent to Ax - 7x = 0, or
(A - 7I) x = 0    (2)
To solve this homogeneous equation, form the matrix
A - 7I = [1 6; 5 2] - [7 0; 0 7] = [-6 6; 5 -5].
The columns of A - 7I are obviously linearly dependent, so (2) has nontrivial solutions.
To find the corresponding eigenvectors, use row operations:
[-6 6 0; 5 -5 0] ~ [1 -1 0; 0 0 0]
The general solution has the form x2 (1, 1). Each vector of this form with x2 ≠ 0 is an eigenvector corresponding to λ = 7.

Example 4: Let A = [4 -1 6; 2 1 6; 2 -1 8]. An eigenvalue of A is 2. Find a basis for the corresponding eigenspace.
Solution: Form
A - 2I = [4 -1 6; 2 1 6; 2 -1 8] - [2 0 0; 0 2 0; 0 0 2] = [2 -1 6; 2 -1 6; 2 -1 6]
and row reduce the augmented matrix for (A - 2I) x = 0:
[2 -1 6 0; 2 -1 6 0; 2 -1 6 0] ~ [2 -1 6 0; 0 0 0 0; 0 0 0 0]
At this point, it is clear that 2 is indeed an eigenvalue of A because the equation (A - 2I) x = 0 has free variables.
The general solution is
(x1, x2, x3) = x2 (1/2, 1, 0) + x3 (-3, 0, 1), with x2 and x3 free.
The eigenspace, shown in the following figure, is a two-dimensional subspace of R^3. A basis is
{ (1, 2, 0), (-3, 0, 1) }.
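A sketch (NumPy assumed) that confirms Example 4 numerically: 2 is an eigenvalue of A, and the two basis vectors found above satisfy Av = 2v.

    import numpy as np

    A = np.array([[4.0, -1.0, 6.0],
                  [2.0,  1.0, 6.0],
                  [2.0, -1.0, 8.0]])       # matrix from Example 4

    eigvals, eigvecs = np.linalg.eig(A)
    print(np.round(eigvals, 6))            # 2 appears twice, plus one more eigenvalue

    # The basis vectors of the eigenspace for lambda = 2:
    for v in (np.array([1.0, 2.0, 0.0]), np.array([-3.0, 0.0, 1.0])):
        print(np.allclose(A @ v, 2 * v))   # True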
Theorem 1: The eigenvalues of a triangular matrix are the entries on its main diagonal.
Proof: For simplicity, consider the 3 x 3 case.
If A is upper triangular, then A - λI has the form
A - λI = [a11 a12 a13; 0 a22 a23; 0 0 a33] - [λ 0 0; 0 λ 0; 0 0 λ] = [a11-λ a12 a13; 0 a22-λ a23; 0 0 a33-λ].
The scalar λ is an eigenvalue of A if and only if the equation (A - λI) x = 0 has a nontrivial solution, that is, if and only if the equation has a free variable.
Because of the zero entries in A - λI, it is easy to see that (A - λI) x = 0 has a free variable if and only if at least one of the entries on the diagonal of A - λI is zero.
This happens if and only if λ equals one of the entries a11, a22, a33 in A.

Theorem 2: If v1, ..., vr are eigenvectors that correspond to distinct eigenvalues λ1, ..., λr of an n x n matrix A, then the set {v1, ..., vr} is linearly independent.
Proof: Suppose {v1, ..., vr} is linearly dependent.
Since v1 is nonzero, Theorem 7 in Section 1.7 says that one of the vectors in the set is a linear combination of the preceding vectors.
Let p be the least index such that v(p+1) is a linear combination of the preceding (linearly independent) vectors.
Then there exist scalars c1, ..., cp such that
c1 v1 + ... + cp vp = v(p+1)    (5)
Multiplying both sides of (5) by A and using the fact that A vk = λk vk for each k, we obtain
c1 λ1 v1 + ... + cp λp vp = λ(p+1) v(p+1)    (6)
Multiplying both sides of (5) by λ(p+1) and subtracting the result from (6), we have
c1 (λ1 - λ(p+1)) v1 + ... + cp (λp - λ(p+1)) vp = 0    (7)
Since {v1, ..., vp} is linearly independent, the weights in (7) are all zero.
But none of the factors λi - λ(p+1) are zero, because the eigenvalues are distinct.
Hence ci = 0 for i = 1, ..., p.
But then (5) says that v(p+1) = 0, which is impossible.
Hence {v1, ..., vr} cannot be linearly dependent and therefore must be linearly independent.

EIGENVECTORS AND DIFFERENCE EQUATIONS
If A is an n x n matrix, then (8) is a recursive description of a sequence {xk} in R^n:
x(k+1) = A xk    (k = 0, 1, 2, ...)    (8)
A solution of (8) is an explicit description of {xk} whose formula for each xk does not depend directly on A or on the preceding terms in the sequence other than the initial term x0.
The simplest way to build a solution of (8) is to take an eigenvector x0 and its corresponding eigenvalue λ and let
xk = λ^k x0    (k = 1, 2, ...)    (9)
This sequence is a solution because
A xk = A (λ^k x0) = λ^k (A x0) = λ^k (λ x0) = λ^(k+1) x0 = x(k+1).
5 Eigenvalues and Eigenvectors
5.2 THE CHARACTERISTIC EQUATION
DETERMINANTS
Let A be an n x n matrix, let U be any echelon form obtained from A by row replacements and row interchanges (without scaling), and let r be the number of such row interchanges.
Then the determinant of A, written as det A, is (-1)^r times the product of the diagonal entries u11, ..., unn in U.
If A is invertible, then u11, ..., unn are all pivots (because A ~ I_n and the uii have not been scaled to 1's).
Otherwise, at least unn is zero, and the product u11 ... unn is zero.
Thus
det A = (-1)^r (product of pivots in U) when A is invertible,
det A = 0 when A is not invertible.

Example 1: Compute det A for A = [1 5 0; 2 4 -1; 0 -2 0].
Solution: The following row reduction uses one row interchange:
A ~ [1 5 0; 0 -6 -1; 0 -2 0] ~ [1 5 0; 0 -2 0; 0 -6 -1] ~ [1 5 0; 0 -2 0; 0 0 -1] = U1
So det A equals (-1)^1 (1)(-2)(-1) = -2.
The following alternative row reduction avoids the row interchange and produces a different echelon form.
The last step adds -1/3 times row 2 to row 3:
A ~ [1 5 0; 0 -6 -1; 0 -2 0] ~ [1 5 0; 0 -6 -1; 0 0 1/3] = U2
This time det A is (1)(-6)(1/3) = -2, the same as before.

THE INVERTIBLE MATRIX THEOREM (CONTINUED)
Theorem: Let A be an n x n matrix. Then A is invertible if and only if:
s. The number 0 is not an eigenvalue of A.
t. The determinant of A is not zero.

PROPERTIES OF DETERMINANTS
Theorem 3 (Properties of Determinants): Let A and B be n x n matrices.
a. A is invertible if and only if det A ≠ 0.
b. det AB = (det A)(det B).
c. det A^T = det A.
d. If A is triangular, then det A is the product of the entries on the main diagonal of A.
e. A row replacement operation on A does not change the determinant. A row interchange changes the sign of the determinant. A row scaling also scales the determinant by the same scalar factor.

THE CHARACTERISTIC EQUATION
Theorem 3(a) shows how to determine when a matrix of the form A - λI is not invertible.
The scalar equation det(A - λI) = 0 is called the characteristic equation of A.
A scalar λ is an eigenvalue of an n x n matrix A if and only if λ satisfies the characteristic equation
det(A - λI) = 0.

Example 3: Find the characteristic equation of
A = [5 -2 6 -1; 0 3 -8 0; 0 0 5 4; 0 0 0 1].
Solution: Form A - λI, and use Theorem 3(d):
det(A - λI) = det [5-λ -2 6 -1; 0 3-λ -8 0; 0 0 5-λ 4; 0 0 0 1-λ] = (5 - λ)^2 (3 - λ)(1 - λ).
The characteristic equation is
(5 - λ)^2 (3 - λ)(1 - λ) = 0,
or
(λ - 5)^2 (λ - 3)(λ - 1) = 0.
Expanding the product, we can also write
λ^4 - 14 λ^3 + 68 λ^2 - 130 λ + 75 = 0.
If A is an n x n matrix, then det(A - λI) is a polynomial of degree n called the characteristic polynomial of A.
The eigenvalue 5 in Example 3 is said to have multiplicity 2 because (λ - 5) occurs two times as a factor of the characteristic polynomial.
In general, the (algebraic) multiplicity of an eigenvalue λ is its multiplicity as a root of the characteristic equation.
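For a numerical check of Example 3 (NumPy assumed): np.poly returns the coefficients of the characteristic polynomial of a square matrix, highest power first, computed from its eigenvalues.

    import numpy as np

    A = np.array([[5.0, -2.0,  6.0, -1.0],
                  [0.0,  3.0, -8.0,  0.0],
                  [0.0,  0.0,  5.0,  4.0],
                  [0.0,  0.0,  0.0,  1.0]])   # matrix from Example 3

    print(np.poly(A))             # approximately [1, -14, 68, -130, 75]
    print(np.linalg.eigvals(A))   # 5, 3, 5, 1 in some order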

SIMILARITY
If A and B are n x n matrices, then A is similar to B if there is an invertible matrix P such that P^(-1) A P = B, or, equivalently, A = P B P^(-1).
Writing Q for P^(-1), we have Q^(-1) B Q = A.
So B is also similar to A, and we say simply that A and B are similar.
Changing A into P^(-1) A P is called a similarity transformation.

Theorem 4: If n x n matrices A and B are similar, then they have the same characteristic polynomial and hence the same eigenvalues (with the same multiplicities).
Proof: If B = P^(-1) A P, then
B - λI = P^(-1) A P - λ P^(-1) P = P^(-1) (A P - λ P) = P^(-1) (A - λI) P.    (1)
Using the multiplicative property (b) in Theorem 3, we compute
det(B - λI) = det[ P^(-1) (A - λI) P ] = det(P^(-1)) · det(A - λI) · det(P).    (2)
Since det(P^(-1)) · det(P) = det(P^(-1) P) = det I = 1, we see from equation (2) that det(B - λI) = det(A - λI).

Warnings:
1. The matrices
[2 1; 0 2] and [2 0; 0 2]
are not similar even though they have the same eigenvalues.
2. Similarity is not the same as row equivalence. (If A is row equivalent to B, then B = EA for some invertible matrix E.) Row operations on a matrix usually change its eigenvalues.

5 Eigenvalues and Eigenvectors
5.3 DIAGONALIZATION

DIAGONALIZATION
Example 2: Let A = [7 2; -4 1]. Find a formula for A^k, given that A = P D P^(-1), where
P = [1 1; -1 -2] and D = [5 0; 0 3].
Solution: The standard formula for the inverse of a 2 x 2 matrix yields
P^(-1) = [2 1; -1 -1].
Then, by associativity of matrix multiplication,
A^2 = (P D P^(-1))(P D P^(-1)) = P D (P^(-1) P) D P^(-1) = P D D P^(-1) = P D^2 P^(-1)
    = [1 1; -1 -2] [5^2 0; 0 3^2] [2 1; -1 -1].
Again,
A^3 = (P D P^(-1)) A^2 = (P D P^(-1)) P D^2 P^(-1) = P D D^2 P^(-1) = P D^3 P^(-1).
In general, for k >= 1,
A^k = P D^k P^(-1) = [1 1; -1 -2] [5^k 0; 0 3^k] [2 1; -1 -1] = [2(5^k) - 3^k   5^k - 3^k; 2(3^k) - 2(5^k)   2(3^k) - 5^k].
A square matrix A is said to be diagonalizable if A is similar to a diagonal matrix, that is, if A = P D P^(-1) for some invertible matrix P and some diagonal matrix D.

THE DIAGONALIZATION THEOREM
Theorem 5: An n x n matrix A is diagonalizable if and only if A has n linearly independent eigenvectors.
In fact, A = P D P^(-1), with D a diagonal matrix, if and only if the columns of P are n linearly independent eigenvectors of A. In this case, the diagonal entries of D are eigenvalues of A that correspond, respectively, to the eigenvectors in P.
In other words, A is diagonalizable if and only if there are enough eigenvectors to form a basis of R^n. We call such a basis an eigenvector basis of R^n.
Proof: First, observe that if P is any matrix with columns v1, ..., vn, and if D is any diagonal matrix with diagonal entries λ1, ..., λn, then
AP = A [v1 v2 ... vn] = [A v1  A v2  ...  A vn]    (1)
while
PD = P [λ1 0 ... 0; 0 λ2 ... 0; ...; 0 0 ... λn] = [λ1 v1  λ2 v2  ...  λn vn]    (2)
Now suppose A is diagonalizable and A = P D P^(-1). Then right-multiplying this relation by P, we have AP = PD.
In this case, equations (1) and (2) imply that
[A v1  A v2  ...  A vn] = [λ1 v1  λ2 v2  ...  λn vn]    (3)
Equating columns, we find that
A v1 = λ1 v1, A v2 = λ2 v2, ..., A vn = λn vn    (4)
Since P is invertible, its columns v1, ..., vn must be linearly independent.
Also, since these columns are nonzero, the equations in (4) show that λ1, ..., λn are eigenvalues and v1, ..., vn are corresponding eigenvectors.
This argument proves the "only if" parts of the first and second statements, along with the third statement, of the theorem.
Finally, given any n eigenvectors v1, ..., vn, use them to construct the columns of P and use corresponding eigenvalues λ1, ..., λn to construct D.
By equations (1)-(3), AP = PD.
This is true without any condition on the eigenvectors.
If, in fact, the eigenvectors are linearly independent, then P is invertible (by the Invertible Matrix Theorem), and AP = PD implies that A = P D P^(-1).

DIAGONALIZING MATRICES
Example 3: Diagonalize the following matrix, if possible.
A = [1 3 3; -3 -5 -3; 3 3 1]
That is, find an invertible matrix P and a diagonal matrix D such that A = P D P^(-1).
Solution: There are four steps to implement the description in Theorem 5.
Step 1. Find the eigenvalues of A.
Here, the characteristic equation turns out to involve a cubic polynomial that can be factored:
0 = det(A - λI) = -λ^3 - 3λ^2 + 4 = -(λ - 1)(λ + 2)^2.
The eigenvalues are λ = 1 and λ = -2.
Step 2. Find three linearly independent eigenvectors of A.
Three vectors are needed because A is a 3 x 3 matrix.
This is a critical step. If it fails, then Theorem 5 says that A cannot be diagonalized.
Basis for λ = 1: v1 = (1, -1, 1)
Basis for λ = -2: v2 = (-1, 1, 0) and v3 = (-1, 0, 1)
You can check that {v1, v2, v3} is a linearly independent set.
Step 3. Construct P from the vectors in step 2.
The order of the vectors is unimportant.
Using the order chosen in step 2, form
P = [v1 v2 v3] = [1 -1 -1; -1 1 0; 1 0 1]
Step 4. Construct D from the corresponding eigenvalues.
In this step, it is essential that the order of the eigenvalues matches the order chosen for the columns of P.
Use the eigenvalue λ = 1 once and the eigenvalue λ = -2 twice, once for each of the eigenvectors corresponding to λ = -2:
D = [1 0 0; 0 -2 0; 0 0 -2]
To avoid computing P^(-1), simply verify that AP = PD.
Compute
AP = [1 3 3; -3 -5 -3; 3 3 1] [1 -1 -1; -1 1 0; 1 0 1] = [1 2 2; -1 -2 0; 1 0 -2]
PD = [1 -1 -1; -1 1 0; 1 0 1] [1 0 0; 0 -2 0; 0 0 -2] = [1 2 2; -1 -2 0; 1 0 -2]

Theorem 6: An n x n matrix with n distinct eigenvalues is diagonalizable.
Proof: Let v1, ..., vn be eigenvectors corresponding to the n distinct eigenvalues of a matrix A.
Then {v1, ..., vn} is linearly independent, by Theorem 2 in Section 5.1.
Hence A is diagonalizable, by Theorem 5.

MATRICES WHOSE EIGENVALUES ARE NOT DISTINCT
It is not necessary for an n x n matrix to have n distinct eigenvalues in order to be diagonalizable.
The 3 x 3 matrix in Example 3 is diagonalizable even though it has only two distinct eigenvalues.
If an n x n matrix A has n distinct eigenvalues, with corresponding eigenvectors v1, ..., vn, and if P = [v1 ... vn], then P is automatically invertible because its columns are linearly independent, by Theorem 2.
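A sketch of Example 3 in code (NumPy assumed): verify AP = PD and A = P D P^(-1), and use the factorization to compute a power of A.

    import numpy as np

    A = np.array([[ 1.0,  3.0,  3.0],
                  [-3.0, -5.0, -3.0],
                  [ 3.0,  3.0,  1.0]])
    P = np.array([[ 1.0, -1.0, -1.0],
                  [-1.0,  1.0,  0.0],
                  [ 1.0,  0.0,  1.0]])      # eigenvectors v1, v2, v3 as columns
    D = np.diag([1.0, -2.0, -2.0])           # eigenvalues in matching order

    print(np.allclose(A @ P, P @ D))                      # AP = PD
    print(np.allclose(P @ D @ np.linalg.inv(P), A))       # A = P D P^{-1}

    # Powers via the factorization: A^k = P D^k P^{-1}
    k = 5
    Ak = P @ np.diag(np.diag(D) ** k) @ np.linalg.inv(P)
    print(np.allclose(Ak, np.linalg.matrix_power(A, k)))  # True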

When A is diagonalizable but has fewer than n distinct eigenvalues, it is still possible to build P in a way that makes P automatically invertible, as the next theorem shows.
Theorem 7: Let A be an n x n matrix whose distinct eigenvalues are λ1, ..., λp.
a. For 1 <= k <= p, the dimension of the eigenspace for λk is less than or equal to the multiplicity of the eigenvalue λk.
b. The matrix A is diagonalizable if and only if the sum of the dimensions of the eigenspaces equals n, and this happens if and only if (i) the characteristic polynomial factors completely into linear factors and (ii) the dimension of the eigenspace for each λk equals the multiplicity of λk.
c. If A is diagonalizable and Bk is a basis for the eigenspace corresponding to λk for each k, then the total collection of vectors in the sets B1, ..., Bp forms an eigenvector basis for R^n.
5 Eigenvalues and Eigenvectors
5.5 COMPLEX EIGENVALUES

COMPLEX EIGENVALUES
The matrix eigenvalue-eigenvector theory already developed for R^n applies equally well to C^n.
So a complex scalar λ satisfies det(A - λI) = 0 if and only if there is a nonzero vector x in C^n such that Ax = λx.
We call λ a (complex) eigenvalue and x a (complex) eigenvector corresponding to λ.

Example 1: If A = [0 -1; 1 0], then the linear transformation x ↦ Ax on R^2 rotates the plane counterclockwise through a quarter-turn.
The action of A is periodic, since after four quarter-turns, a vector is back where it started.
Obviously, no nonzero vector is mapped into a multiple of itself, so A has no eigenvectors in R^2 and hence no real eigenvalues.
In fact, the characteristic equation of A is
λ^2 + 1 = 0.
The only roots are complex: λ = i and λ = -i. However, if we permit A to act on C^2, then
A (1, -i) = (i, 1) = i (1, -i)
A (1, i) = (-i, 1) = -i (1, i)
Thus i and -i are eigenvalues, with (1, -i) and (1, i) as corresponding eigenvectors.

REAL AND IMAGINARY PARTS OF VECTORS
The complex conjugate of a complex vector x in C^n is the vector x̄ in C^n whose entries are the complex conjugates of the entries in x.
The real and imaginary parts of a complex vector x are the vectors Re x and Im x in R^n formed from the real and imaginary parts of the entries of x.

Example 4: If x = (3 - i, i, 2 + 5i), then
Re x = (3, 0, 2), Im x = (-1, 1, 5), and x̄ = (3 + i, -i, 2 - 5i).

EIGENVALUES AND EIGENVECTORS OF A REAL MATRIX THAT ACTS ON C^n
Theorem 9: Let A be a real 2 x 2 matrix with a complex eigenvalue λ = a - bi (b ≠ 0) and an associated eigenvector v in C^2. Then
A = P C P^(-1), where P = [Re v  Im v] and C = [a -b; b a].
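A sketch of Example 1 in code (NumPy assumed): NumPy returns the complex eigenvalues of the rotation matrix, and the eigenvector (1, -i) can be checked directly.

    import numpy as np

    A = np.array([[0.0, -1.0],
                  [1.0,  0.0]])          # quarter-turn rotation from Example 1

    eigvals, eigvecs = np.linalg.eig(A)
    print(eigvals)                       # [0.+1.j  0.-1.j], i.e. i and -i

    v = np.array([1.0, -1.0j])           # claimed eigenvector for lambda = i
    print(np.allclose(A @ v, 1.0j * v))  # True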

7 Symmetric Matrices and Quadratic Forms
7.1 DIAGONALIZATION OF SYMMETRIC MATRICES
SYMMETRIC MATRICES
A symmetric matrix is a matrix A such that A^T = A.
Such a matrix is necessarily square.
Its main diagonal entries are arbitrary, but its other entries occur in pairs, on opposite sides of the main diagonal.

Theorem 1: If A is symmetric, then any two eigenvectors from different eigenspaces are orthogonal.
Proof: Let v1 and v2 be eigenvectors that correspond to distinct eigenvalues, say, λ1 and λ2.
To show that v1 · v2 = 0, compute
λ1 v1 · v2 = (λ1 v1)^T v2 = (A v1)^T v2    [since v1 is an eigenvector]
           = (v1^T A^T) v2 = v1^T (A v2)   [since A^T = A]
           = v1^T (λ2 v2)                  [since v2 is an eigenvector]
           = λ2 v1^T v2 = λ2 v1 · v2.
Hence (λ1 - λ2) v1 · v2 = 0. But λ1 - λ2 ≠ 0, so v1 · v2 = 0.

An n x n matrix A is said to be orthogonally diagonalizable if there are an orthogonal matrix P (with P^(-1) = P^T) and a diagonal matrix D such that
A = P D P^T = P D P^(-1)    (1)
Such a diagonalization requires n linearly independent and orthonormal eigenvectors.
When is this possible? If A is orthogonally diagonalizable as in (1), then
A^T = (P D P^T)^T = (P^T)^T D^T P^T = P D P^T = A.
Thus A is symmetric!

Theorem 2: An n x n matrix A is orthogonally diagonalizable if and only if A is a symmetric matrix.

Example 3: Orthogonally diagonalize the matrix
A = [3 -2 4; -2 6 2; 4 2 3], whose characteristic equation is
0 = -λ^3 + 12λ^2 - 21λ - 98 = -(λ - 7)^2 (λ + 2).
Solution: The usual calculations produce bases for the eigenspaces:
λ = 7: v1 = (1, 0, 1), v2 = (-1/2, 1, 0);    λ = -2: v3 = (-1, -1/2, 1).
Although v1 and v2 are linearly independent, they are not orthogonal.
The projection of v2 onto v1 is ((v2 · v1)/(v1 · v1)) v1, and the component of v2 orthogonal to v1 is
z2 = v2 - ((v2 · v1)/(v1 · v1)) v1 = (-1/2, 1, 0) - ((-1/2)/2)(1, 0, 1) = (-1/4, 1, 1/4).
Then {v1, z2} is an orthogonal set in the eigenspace for λ = 7.
(Note that z2 is a linear combination of the eigenvectors v1 and v2, so z2 is in the eigenspace.)
Since the eigenspace for λ = 7 is two-dimensional (with basis v1, v2), the orthogonal set {v1, z2} is an orthogonal basis for the eigenspace, by the Basis Theorem.
Normalize v1 and z2 to obtain the following orthonormal basis for the eigenspace for λ = 7:
u1 = (1/sqrt(2), 0, 1/sqrt(2)), u2 = (-1/sqrt(18), 4/sqrt(18), 1/sqrt(18)).
An orthonormal basis for the eigenspace for λ = -2 is
u3 = (1/||v3||) v3 = (2/3)(-1, -1/2, 1) = (-2/3, -1/3, 2/3).
By Theorem 1, u3 is orthogonal to the other eigenvectors u1 and u2.
Hence {u1, u2, u3} is an orthonormal set.
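NumPy (assumed) has a routine specialized for symmetric matrices, eigh, which returns real eigenvalues and orthonormal eigenvectors, giving the orthogonal diagonalization of Example 3 directly:

    import numpy as np

    A = np.array([[ 3.0, -2.0, 4.0],
                  [-2.0,  6.0, 2.0],
                  [ 4.0,  2.0, 3.0]])     # symmetric matrix from Example 3

    eigvals, P = np.linalg.eigh(A)
    print(np.round(eigvals, 6))                        # [-2. 7. 7.]
    print(np.allclose(P.T @ P, np.eye(3)))             # True: P is an orthogonal matrix
    print(np.allclose(P @ np.diag(eigvals) @ P.T, A))  # True: A = P D P^T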
Let
P = [u1 u2 u3] = [1/sqrt(2) -1/sqrt(18) -2/3; 0 4/sqrt(18) -1/3; 1/sqrt(2) 1/sqrt(18) 2/3], D = [7 0 0; 0 7 0; 0 0 -2].
Then P orthogonally diagonalizes A, and A = P D P^(-1).

THE SPECTRAL THEOREM
The set of eigenvalues of a matrix A is sometimes called the spectrum of A, and the following description of the eigenvalues is called a spectral theorem.
Theorem 3 (The Spectral Theorem for Symmetric Matrices): An n x n symmetric matrix A has the following properties:
a. A has n real eigenvalues, counting multiplicities.
b. The dimension of the eigenspace for each eigenvalue λ equals the multiplicity of λ as a root of the characteristic equation.
c. The eigenspaces are mutually orthogonal, in the sense that eigenvectors corresponding to different eigenvalues are orthogonal.
d. A is orthogonally diagonalizable.

SPECTRAL DECOMPOSITION
Suppose A = P D P^(-1), where the columns of P are orthonormal eigenvectors u1, ..., un of A and the corresponding eigenvalues λ1, ..., λn are in the diagonal matrix D.
Then, since P^(-1) = P^T,
A = P D P^T = [u1 ... un] [λ1 ... 0; ...; 0 ... λn] [u1^T; ...; un^T] = [λ1 u1  ...  λn un] [u1^T; ...; un^T].
Using the column-row expansion of a product, we can write
A = λ1 u1 u1^T + λ2 u2 u2^T + ... + λn un un^T    (2)
This representation of A is called a spectral decomposition of A because it breaks up A into pieces determined by the spectrum (eigenvalues) of A.
Each term in (2) is an n x n matrix of rank 1.
For example, every column of λ1 u1 u1^T is a multiple of u1.
Each matrix uj uj^T is a projection matrix in the sense that for each x in R^n, the vector (uj uj^T) x is the orthogonal projection of x onto the subspace spanned by uj.

Example 4: Construct a spectral decomposition of the matrix A that has the orthogonal diagonalization
A = [7 2; 2 4] = [2/sqrt(5) -1/sqrt(5); 1/sqrt(5) 2/sqrt(5)] [8 0; 0 3] [2/sqrt(5) 1/sqrt(5); -1/sqrt(5) 2/sqrt(5)].
Solution: Denote the columns of P by u1 and u2. Then
A = 8 u1 u1^T + 3 u2 u2^T.
To verify the decomposition of A, compute
u1 u1^T = [2/sqrt(5); 1/sqrt(5)] [2/sqrt(5) 1/sqrt(5)] = [4/5 2/5; 2/5 1/5]
u2 u2^T = [-1/sqrt(5); 2/sqrt(5)] [-1/sqrt(5) 2/sqrt(5)] = [1/5 -2/5; -2/5 4/5]
and
8 u1 u1^T + 3 u2 u2^T = [32/5 16/5; 16/5 8/5] + [3/5 -6/5; -6/5 12/5] = [7 2; 2 4] = A.
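A sketch of Example 4 in code (NumPy assumed): the outer products uj uj^T are formed with np.outer, and each one acts as an orthogonal projection onto the line spanned by uj.

    import numpy as np

    A  = np.array([[7.0, 2.0],
                   [2.0, 4.0]])                 # matrix from Example 4
    u1 = np.array([2.0, 1.0]) / np.sqrt(5)
    u2 = np.array([-1.0, 2.0]) / np.sqrt(5)

    # Spectral decomposition (2): A = 8 u1 u1^T + 3 u2 u2^T
    A_rebuilt = 8 * np.outer(u1, u1) + 3 * np.outer(u2, u2)
    print(np.allclose(A_rebuilt, A))            # True

    x = np.array([5.0, -1.0])                   # hypothetical test vector
    print(np.outer(u1, u1) @ x)                 # orthogonal projection of x onto Span{u1}
    print((x @ u1) * u1)                        # same vector, via formula (2) of Section 6.2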
