© 2016 Pearson Education, Inc.
DISTANCE IN ℝⁿ

For u and v in ℝⁿ, the distance between u and v, written dist(u, v), is the length of the vector u − v; that is, dist(u, v) = ‖u − v‖.

To check that ‖cv‖ = |c|‖v‖, it suffices to show that ‖cv‖² = c²‖v‖².

Example 4: Compute the distance between the vectors u = (7, 1) and v = (3, 2).

Solution: Calculate
u − v = (7, 1) − (3, 2) = (4, −1)
dist(u, v) = ‖u − v‖ = √(4² + (−1)²) = √17.
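The computation in Example 4 can be checked numerically; a minimal sketch, using u = (7, 1) and v = (3, 2) as in the example:

```python
import numpy as np

# Vectors of Example 4.
u = np.array([7.0, 1.0])
v = np.array([3.0, 2.0])

diff = u - v                  # (4, -1)
dist = np.linalg.norm(diff)   # sqrt(4^2 + (-1)^2) = sqrt(17)
```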
ORTHOGONAL COMPLEMENTS

Proof: The row-column rule for computing Ax shows that if x is in Nul A, then x is orthogonal to each row of A (with the rows treated as vectors in ℝⁿ). Since the rows of A span the row space, x is orthogonal to Row A. Conversely, if x is orthogonal to Row A, then x is certainly orthogonal to each row of A, and hence Ax = 0.

Since this statement is true for any matrix, it is true for Aᵀ. That is, the orthogonal complement of the row space of Aᵀ is the null space of Aᵀ. This proves the second statement, because Row Aᵀ = Col A.
The formula is
u · v = ‖u‖ ‖v‖ cos θ,   (2)
where θ is the angle between u and v; it follows from the law of cosines, ‖u − v‖² = ‖u‖² + ‖v‖² − 2‖u‖‖v‖ cos θ.

Proof: The orthogonality of {u1, …, up} shows that dotting both sides of 0 = c1u1 + ⋯ + cpup with each uj leaves only cj(uj · uj), forcing every weight cj to be zero. Thus S is linearly independent.

Definition: An orthogonal basis for a subspace W of ℝⁿ is a basis for W that is also an orthogonal set.
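The proof idea (dotting a combination with each uj isolates its weight) also gives the coordinate formula cj = (y · uj)/(uj · uj) relative to an orthogonal basis. A sketch, using an illustrative orthogonal set in ℝ³ (these particular vectors are assumptions, not recoverable from the slides):

```python
import numpy as np

# An orthogonal set in R^3 (illustrative choice).
u1 = np.array([3.0, 1.0, 1.0])
u2 = np.array([-1.0, 2.0, 1.0])
u3 = np.array([-0.5, -2.0, 3.5])
assert u1 @ u2 == 0 and u1 @ u3 == 0 and u2 @ u3 == 0

# Dotting y = c1 u1 + c2 u2 + c3 u3 with u_j kills every term
# except c_j (u_j . u_j), so each weight can be read off directly.
y = np.array([6.0, 1.0, -8.0])
c = [(y @ u) / (u @ u) for u in (u1, u2, u3)]
recon = c[0] * u1 + c[1] * u2 + c[2] * u3
```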
AN ORTHOGONAL PROJECTION

Note: If the calculations above are correct, then {ŷ, z} will be an orthogonal set. Since the line segment in the figure on the previous slide between y and ŷ is perpendicular to L, by construction of ŷ, the point identified with ŷ is the closest point of L to y.

ORTHONORMAL SETS

A set {u1, …, up} is an orthonormal set if it is an orthogonal set of unit vectors. The simplest example of an orthonormal set is the standard basis {e1, …, en} for ℝⁿ. Any nonempty subset of {e1, …, en} is orthonormal, too.
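The projection onto a line L = Span{u} can be sketched in a few lines; y and u below are illustrative choices:

```python
import numpy as np

def project_onto_line(y, u):
    """Orthogonal projection y_hat = (y.u / u.u) u of y onto Span{u}."""
    return (y @ u) / (u @ u) * u

y = np.array([7.0, 6.0])
u = np.array([4.0, 2.0])
y_hat = project_onto_line(y, u)
z = y - y_hat                 # component of y orthogonal to the line
```

Here {y_hat, z} is an orthogonal set, and y_hat is the point of the line closest to y.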
6 Orthogonality and Least Squares

6.3 ORTHOGONAL PROJECTIONS

The orthogonal projection of a point in ℝ² onto a line through the origin has an important analogue in ℝⁿ. Given a vector y and a subspace W in ℝⁿ, there is a vector ŷ in W such that (1) ŷ is the unique vector in W for which y − ŷ is orthogonal to W, and (2) ŷ is the unique vector in W closest to y. See the following figure.
If y is in W = Span{u1, …, up}, then proj_W y = y.
Theorem 10: If {u1, …, up} is an orthonormal basis for a subspace W of ℝⁿ, then
proj_W y = (y · u1)u1 + (y · u2)u2 + ⋯ + (y · up)up.   (4)
If U = [u1 u2 ⋯ up], then
proj_W y = UUᵀy for all y in ℝⁿ.   (5)

Also, (4) shows that proj_W y is a linear combination of the columns of U using the weights y · u1, …, y · up. The weights can be written as u1ᵀy, …, upᵀy, showing that they are the entries in Uᵀy and justifying (5).
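Formula (5) in code: with orthonormal columns stacked in U, the matrix UUᵀ projects onto W. The subspace and vector below are illustrative assumptions:

```python
import numpy as np

# Two orthonormal vectors spanning a plane W in R^3.
u1 = np.array([3.0, 1.0, 1.0]) / np.sqrt(11.0)
u2 = np.array([-1.0, 2.0, 1.0]) / np.sqrt(6.0)
U = np.column_stack([u1, u2])

y = np.array([6.0, 1.0, -8.0])
proj = U @ (U.T @ y)          # proj_W y, per (5)
# This equals the sum in (4): (y.u1) u1 + (y.u2) u2.
```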
THE GRAM-SCHMIDT PROCESS
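The process can be sketched in a few lines: each new vector keeps only its component orthogonal to the span of the vectors already produced. The basis X below is an illustrative choice, not one taken from the slides:

```python
import numpy as np

def gram_schmidt(X):
    """Columns of X: a basis x1..xp. Returns orthogonal columns v1..vp
    with Span{v1..vk} = Span{x1..xk} for every k."""
    V = []
    for x in X.T:
        v = x.astype(float)
        for u in V:
            v = v - (x @ u) / (u @ u) * u   # subtract projection onto u
        V.append(v)
    return np.column_stack(V)

X = np.array([[1.0, 0.0, 0.0],
              [1.0, 1.0, 0.0],
              [1.0, 1.0, 1.0],
              [1.0, 1.0, 1.0]])
V = gram_schmidt(X)
```

Normalizing the columns of V afterwards yields an orthonormal basis.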
and the factorization is A = QR, where the columns of Q are the normalized Gram-Schmidt vectors and R is upper triangular with positive diagonal entries.
6 Orthogonality and Least Squares

6.5 LEAST-SQUARES PROBLEMS

Definition: If A is m×n and b is in ℝᵐ, a least-squares solution of Ax = b is an x̂ in ℝⁿ such that
‖b − Ax̂‖ ≤ ‖b − Ax‖
for all x in ℝⁿ.

Solution of the General Least-Squares Problem

Given A and b, apply the Best Approximation Theorem to the subspace Col A. Let
b̂ = proj_{Col A} b.
Since b̂ is the closest point in Col A to b, a vector x̂ is a least-squares solution of Ax = b if and only if x̂ satisfies Ax̂ = b̂. (1) Such an x̂ in ℝⁿ is a list of weights that will build b̂ out of the columns of A. See the figure on the next slide.
SOLUTION OF THE GENERAL LEAST-SQUARES PROBLEM

Suppose x̂ satisfies Ax̂ = b̂. By the Orthogonal Decomposition Theorem, the projection b̂ has the property that b − b̂ is orthogonal to Col A, so b − Ax̂ is orthogonal to each column of A. If aj is any column of A, then aj · (b − Ax̂) = 0, and ajᵀ(b − Ax̂) = 0. Since each ajᵀ is a row of Aᵀ,
Aᵀ(b − Ax̂) = 0.   (2)
Thus
Aᵀb − AᵀAx̂ = 0, that is, AᵀAx̂ = Aᵀb.
These calculations show that each least-squares solution of Ax = b satisfies the equation
AᵀAx = Aᵀb.   (3)
The matrix equation (3) represents a system of equations called the normal equations for Ax = b. A solution of (3) is often denoted by x̂.
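Equation (3) turns the least-squares problem into an ordinary square linear system. A sketch with illustrative A and b:

```python
import numpy as np

A = np.array([[4.0, 0.0],
              [0.0, 2.0],
              [1.0, 1.0]])
b = np.array([2.0, 0.0, 11.0])

# Solve the normal equations A^T A x = A^T b.
x_hat = np.linalg.solve(A.T @ A, A.T @ b)

# The residual b - A x_hat is orthogonal to every column of A.
residual = b - A @ x_hat
```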
Theorem 15: Given an m×n matrix A with linearly independent columns, let A = QR be a QR factorization of A. Then, for each b in ℝᵐ, the equation Ax = b has a unique least-squares solution, given by
x̂ = R⁻¹Qᵀb.   (6)

Proof: Let x̂ = R⁻¹Qᵀb. Then
Ax̂ = QRx̂ = QRR⁻¹Qᵀb = QQᵀb.
The columns of Q form an orthonormal basis for Col A. Hence, by Theorem 10, QQᵀb is the orthogonal projection b̂ of b onto Col A. Then Ax̂ = b̂, which shows that x̂ is a least-squares solution of Ax = b. The uniqueness of x̂ follows from Theorem 14.
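Formula (6) in code: factor A = QR, then solve the triangular system Rx = Qᵀb rather than forming R⁻¹ explicitly (A and b are illustrative choices):

```python
import numpy as np

A = np.array([[4.0, 0.0],
              [0.0, 2.0],
              [1.0, 1.0]])
b = np.array([2.0, 0.0, 11.0])

Q, R = np.linalg.qr(A)                # reduced QR: Q is 3x2, R is 2x2
x_hat = np.linalg.solve(R, Q.T @ b)   # x_hat = R^{-1} Q^T b
```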
5 Eigenvalues and Eigenvectors

5.1 EIGENVECTORS AND EIGENVALUES

Definition: An eigenvector of an n×n matrix A is a nonzero vector x such that Ax = λx for some scalar λ. A scalar λ is called an eigenvalue of A if there is a nontrivial solution x of Ax = λx; such an x is called an eigenvector corresponding to λ.

λ is an eigenvalue of an n×n matrix A if and only if the equation
(A − λI)x = 0   (3)
has a nontrivial solution. The set of all solutions of (3) is just the null space of the matrix A − λI.
EIGENVECTORS AND EIGENVALUES

The columns of A − λI are obviously linearly dependent, so (2) has nontrivial solutions. To find the corresponding eigenvectors, use row operations on the augmented matrix [A − λI  0]. The general solution has one free variable x2, and each vector of this form with x2 ≠ 0 is an eigenvector corresponding to λ.

Example 4: Let
A = [ 4 −1 6 ]
    [ 2  1 6 ]
    [ 2 −1 8 ].
An eigenvalue of A is 2. Find a basis for the corresponding eigenspace.

Solution: Form
A − 2I = [ 4 −1 6 ]   [ 2 0 0 ]   [ 2 −1 6 ]
         [ 2  1 6 ] − [ 0 2 0 ] = [ 2 −1 6 ]
         [ 2 −1 8 ]   [ 0 0 2 ]   [ 2 −1 6 ]
and row reduce the augmented matrix for (A − 2I)x = 0.
Row reduction gives x1 = (1/2)x2 − 3x3, with x2 and x3 free. The general solution is
x = x2 [ 1/2 ] + x3 [ −3 ]
       [  1  ]      [  0 ]
       [  0  ]      [  1 ],
so {(1/2, 1, 0), (−3, 0, 1)} is a basis for the eigenspace.
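The result of Example 4 can be verified directly: both basis vectors should satisfy Av = 2v, and A − 2I should have rank 1, so the eigenspace (the null space of A − 2I) is two-dimensional. The entries of A match the worked example as reconstructed here, so treat them as an assumption:

```python
import numpy as np

A = np.array([[4.0, -1.0, 6.0],
              [2.0,  1.0, 6.0],
              [2.0, -1.0, 8.0]])

v1 = np.array([1.0, 2.0, 0.0])    # 2 * (1/2, 1, 0), same eigenspace
v2 = np.array([-3.0, 0.0, 1.0])
```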
EIGENVECTORS AND EIGENVALUES

Theorem 1: The eigenvalues of a triangular matrix are the entries on its main diagonal.

Proof: For simplicity, consider the 3×3 case. If A is upper triangular, then A − λI has the form
A − λI = [ a11 − λ   a12       a13     ]
         [ 0         a22 − λ   a23     ]
         [ 0         0         a33 − λ ].
The scalar λ is an eigenvalue of A if and only if the equation (A − λI)x = 0 has a nontrivial solution, that is, if and only if the equation has a free variable. Because of the zero entries in A − λI, it is easy to see that (A − λI)x = 0 has a free variable if and only if at least one of the entries on the diagonal of A − λI is zero.
Hence ci = 0 for i = 1, …, p. But then (5) says that vp+1 = 0, which is impossible.

A solution of (8) is an explicit description of {xk} whose formula for each xk does not depend directly on A or on the preceding terms in the sequence other than the initial term x0.
DETERMINANTS

Let A be an n×n matrix, let U be any echelon form obtained from A by row replacements and row interchanges (without scaling), and let r be the number of such row interchanges. Then the determinant of A, written as det A, is (−1)ʳ times the product of the diagonal entries u11, …, unn in U.

If A is invertible, then u11, …, unn are all pivots (because A ~ In and the uii have not been scaled to 1's). Otherwise, at least unn is zero, and the product u11 ⋯ unn is zero. Thus

det A = (−1)ʳ · (product of pivots in U),  when A is invertible
det A = 0,                                 when A is not invertible.
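This definition translates directly into code: eliminate with row replacements, swap rows only when a pivot must be brought up, count the swaps, and multiply the diagonal. A sketch, checked against the matrix of Example 1 as reconstructed here:

```python
import numpy as np

def det_by_elimination(A):
    """det A = (-1)^r * u11 * ... * unn, via replacements and r interchanges."""
    U = A.astype(float)
    n, r = U.shape[0], 0
    for j in range(n):
        # Find a nonzero entry at or below the diagonal in column j.
        p = next((i for i in range(j, n) if U[i, j] != 0.0), None)
        if p is None:
            return 0.0              # no pivot: A is not invertible, det A = 0
        if p != j:
            U[[j, p]] = U[[p, j]]   # row interchange
            r += 1
        for i in range(j + 1, n):   # row replacements: determinant unchanged
            U[i] -= U[i, j] / U[j, j] * U[j]
    return (-1.0) ** r * float(np.prod(np.diag(U)))

A = np.array([[1.0, 5.0, 0.0],
              [2.0, 4.0, -1.0],
              [0.0, -2.0, 0.0]])
d = det_by_elimination(A)           # -2
```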
DETERMINANTS

Example 1: Compute det A for
A = [ 1  5  0 ]
    [ 2  4 −1 ]
    [ 0 −2  0 ].

Solution: The following row reduction uses one row interchange:
A ~ [ 1  5  0 ]   [ 1  5  0 ]   [ 1  5  0 ]
    [ 0 −6 −1 ] ~ [ 0 −2  0 ] ~ [ 0 −2  0 ] = U1
    [ 0 −2  0 ]   [ 0 −6 −1 ]   [ 0  0 −1 ]
So det A equals (−1)¹(1)(−2)(−1) = −2.

The following alternative row reduction avoids the row interchange and produces a different echelon form. The last step adds −1/3 times row 2 to row 3:
A ~ [ 1  5  0 ]   [ 1  5  0  ]
    [ 0 −6 −1 ] ~ [ 0 −6 −1  ] = U2
    [ 0 −2  0 ]   [ 0  0  1/3 ]
This time det A is (−1)⁰(1)(−6)(1/3) = −2, the same as before.
THE INVERTIBLE MATRIX THEOREM (CONTINUED)

Theorem: Let A be an n×n matrix. Then A is invertible if and only if:
s. The number 0 is not an eigenvalue of A.
t. The determinant of A is not zero.

PROPERTIES OF DETERMINANTS

Theorem 3: Properties of Determinants
Let A and B be n×n matrices.
a. A is invertible if and only if det A ≠ 0.
b. det AB = (det A)(det B).
c. det Aᵀ = det A.
d. If A is triangular, then det A is the product of the entries on the main diagonal of A.
e. A row replacement operation on A does not change the determinant. A row interchange changes the sign of the determinant. A row scaling also scales the determinant by the same scalar factor.
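Properties (b), (c), and (e) are easy to spot-check numerically; the matrices below are illustrative choices:

```python
import numpy as np

A = np.array([[1.0, 5.0, 0.0],
              [2.0, 4.0, -1.0],
              [0.0, -2.0, 0.0]])
B = np.array([[2.0, 0.0, 1.0],
              [0.0, 1.0, 0.0],
              [3.0, 0.0, 1.0]])
det = np.linalg.det

prod_ok = np.isclose(det(A @ B), det(A) * det(B))   # property (b)
transpose_ok = np.isclose(det(A.T), det(A))         # property (c)
swap_ok = np.isclose(det(A[[1, 0, 2]]), -det(A))    # property (e): interchange
```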
Theorem 3(a) shows how to determine when a matrix of the form A − λI is not invertible.

Example 3: Find the characteristic equation of
A = [ 5 −2  6 −1 ]
    [ 0  3 −8  0 ]
    [ 0  0  5  4 ]
    [ 0  0  0  1 ].
THE CHARACTERISTIC EQUATION

If A is an n×n matrix, then det(A − λI) is a polynomial of degree n called the characteristic polynomial of A. The characteristic equation is
det(A − λI) = 0.

For the matrix in Example 3, the characteristic equation is
(5 − λ)²(3 − λ)(1 − λ) = 0,
or, expanding the product, we can also write
λ⁴ − 14λ³ + 68λ² − 130λ + 75 = 0.

The eigenvalue 5 in Example 3 is said to have multiplicity 2 because (λ − 5) occurs two times as a factor of the characteristic polynomial. In general, the (algebraic) multiplicity of an eigenvalue is its multiplicity as a root of the characteristic equation.
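For Example 3 (entries of A as reconstructed here, so treat them as an assumption), the factored and expanded forms can be checked with np.poly, which returns the coefficients of the characteristic polynomial of a square matrix:

```python
import numpy as np

A = np.array([[5.0, -2.0,  6.0, -1.0],
              [0.0,  3.0, -8.0,  0.0],
              [0.0,  0.0,  5.0,  4.0],
              [0.0,  0.0,  0.0,  1.0]])

# Coefficients of det(lambda*I - A):
# lambda^4 - 14 lambda^3 + 68 lambda^2 - 130 lambda + 75.
coeffs = np.poly(A)
eigs = np.sort(np.linalg.eigvals(A).real)   # 1, 3, 5, 5 (5 has multiplicity 2)
```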
SIMILARITY

If A and B are n×n matrices, then A is similar to B if there is an invertible matrix P such that P⁻¹AP = B, or, equivalently, A = PBP⁻¹. Writing Q for P⁻¹, we have Q⁻¹BQ = A. So B is also similar to A, and we say simply that A and B are similar.

Theorem 4: If n×n matrices A and B are similar, then they have the same characteristic polynomial and hence the same eigenvalues (with the same multiplicities).

Proof: If B = P⁻¹AP, then
B − λI = P⁻¹AP − λP⁻¹P = P⁻¹(AP − λP) = P⁻¹(A − λI)P.
Using the multiplicative property (b) in Theorem 3, we compute
det(B − λI) = det[P⁻¹(A − λI)P] = det(P⁻¹) · det(A − λI) · det(P).   (1)
Since det(P⁻¹) · det(P) = det(P⁻¹P) = det I = 1, we see from equation (1) that det(B − λI) = det(A − λI).

Warnings:
1. The matrices
[ 2 1 ]     [ 2 0 ]
[ 0 2 ] and [ 0 2 ]
are not similar even though they have the same eigenvalues.
2. Similarity is not the same as row equivalence. (If A is row equivalent to B, then B = EA for some invertible matrix E.) Row operations on a matrix usually change its eigenvalues.
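Theorem 4 is easy to check numerically: conjugating A by any invertible P leaves the characteristic polynomial unchanged. A and P below are illustrative:

```python
import numpy as np

A = np.array([[7.0, 2.0],
              [-4.0, 1.0]])
P = np.array([[1.0, 1.0],
              [-1.0, -2.0]])

B = np.linalg.inv(P) @ A @ P      # B is similar to A
```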
Example: Let A = PDP⁻¹, where
P = [  1  1 ]   and   D = [ 5 0 ]
    [ −1 −2 ]             [ 0 3 ].
Find a formula for Aᵏ.

Solution: The standard formula for the inverse of a 2×2 matrix yields
P⁻¹ = [  2  1 ]
      [ −1 −1 ].
DIAGONALIZATION

Then
Aᵏ = PDᵏP⁻¹ = [  1  1 ] [ 5ᵏ 0  ] [  2  1 ]
              [ −1 −2 ] [ 0  3ᵏ ] [ −1 −1 ]
            = [ 2·5ᵏ − 3ᵏ      5ᵏ − 3ᵏ   ]
              [ 2·3ᵏ − 2·5ᵏ    2·3ᵏ − 5ᵏ ].

In other words, A is diagonalizable if and only if there are enough eigenvectors to form a basis of ℝⁿ: A = PDP⁻¹ with D diagonal if and only if the columns of P are n linearly independent eigenvectors v1, …, vn of A with Av1 = λ1v1, Av2 = λ2v2, …, Avn = λnvn, where λ1, …, λn are the diagonal entries of D. We call such a basis an eigenvector basis for ℝⁿ.
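The Aᵏ formula can be sanity-checked against repeated multiplication; the factorization below matches the worked example as reconstructed here, so treat the specific entries as assumptions:

```python
import numpy as np

A = np.array([[7.0, 2.0],
              [-4.0, 1.0]])
P = np.array([[1.0, 1.0],
              [-1.0, -2.0]])
D = np.diag([5.0, 3.0])

k = 4
# A^k = P D^k P^{-1}; D^k just raises the diagonal entries to the k-th power.
Ak = P @ np.diag(np.diag(D) ** k) @ np.linalg.inv(P)
```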
MATRICES WHOSE EIGENVALUES ARE NOT DISTINCT

When A is diagonalizable but has fewer than n distinct eigenvalues, it is still possible to build P in a way that makes P automatically invertible, as the next theorem shows.

Theorem 7: Let A be an n×n matrix whose distinct eigenvalues are λ1, …, λp.
a. For 1 ≤ k ≤ p, the dimension of the eigenspace for λk is less than or equal to the multiplicity of the eigenvalue λk.
b. The matrix A is diagonalizable if and only if the sum of the dimensions of the eigenspaces equals n, and this happens if and only if (i) the characteristic polynomial factors completely into linear factors and (ii) the dimension of the eigenspace for each λk equals the multiplicity of λk.
c. If A is diagonalizable and Bk is a basis for the eigenspace corresponding to λk for each k, then the total collection of vectors in the sets B1, …, Bp forms an eigenvector basis for ℝⁿ.
5 Eigenvalues and Eigenvectors

5.5 COMPLEX EIGENVALUES
Example 1: If
A = [ 0 −1 ]
    [ 1  0 ],
then the linear transformation x ↦ Ax on ℝ² rotates the plane counterclockwise through a quarter-turn. The action of A is periodic, since after four quarter-turns, a vector is back where it started. Obviously, no nonzero vector is mapped into a multiple of itself, so A has no eigenvectors in ℝ² and hence no real eigenvalues. In fact, the characteristic equation of A is
λ² + 1 = 0.
The only roots are complex: λ = i and λ = −i. However, if we permit A to act on ℂ², then
A [ 1 ] = [ i ] = i [ 1 ]   and   A [ 1 ] = [ −i ] = −i [ 1 ]
  [ −i ]  [ 1 ]     [ −i ]          [ i ]   [  1 ]      [ i ].
Thus i and −i are eigenvalues, with (1, −i) and (1, i) as corresponding eigenvectors.
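The quarter-turn example can be confirmed numerically once complex arithmetic is allowed:

```python
import numpy as np

A = np.array([[0.0, -1.0],
              [1.0,  0.0]])     # counterclockwise quarter-turn

eigs = np.linalg.eigvals(A)     # complex: i and -i, no real eigenvalues

v = np.array([1.0, -1j])        # claimed eigenvector for lambda = i
Av = A @ v
```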
REAL AND IMAGINARY PARTS OF VECTORS
SYMMETRIC MATRIX

A symmetric matrix is a matrix A such that Aᵀ = A. Such a matrix is necessarily square. Its main diagonal entries are arbitrary, but its other entries occur in pairs on opposite sides of the main diagonal.

Theorem 1: If A is symmetric, then any two eigenvectors from different eigenspaces are orthogonal.

Proof: Let v1 and v2 be eigenvectors that correspond to distinct eigenvalues, say, λ1 and λ2. To show that v1 · v2 = 0, compute
λ1v1 · v2 = (λ1v1)ᵀv2 = (Av1)ᵀv2      since v1 is an eigenvector
          = (v1ᵀAᵀ)v2 = v1ᵀ(Av2)      since Aᵀ = A
          = v1ᵀ(λ2v2)                 since v2 is an eigenvector
          = λ2v1ᵀv2 = λ2v1 · v2.
Hence (λ1 − λ2)(v1 · v2) = 0, and since λ1 − λ2 ≠ 0, it follows that v1 · v2 = 0.
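Theorem 1 in action: numpy's eigh, which is designed for symmetric matrices, returns eigenvectors that are orthonormal. The matrix below is an illustrative symmetric choice:

```python
import numpy as np

A = np.array([[7.0, 2.0],
              [2.0, 4.0]])      # A^T = A, so A is symmetric

eigvals, V = np.linalg.eigh(A)  # columns of V: orthonormal eigenvectors
```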
SYMMETRIC MATRIX

Solution: The usual calculations produce bases for the eigenspaces: {v1, v2} for one eigenvalue and {v3} for the other.

Although v1 and v2 are linearly independent, they are not orthogonal. The projection of v2 onto v1 is (v2 · v1 / v1 · v1)v1, and the component of v2 orthogonal to v1 is
z2 = v2 − (v2 · v1 / v1 · v1)v1.
Then {v1, z2} is an orthogonal set in the eigenspace for the repeated eigenvalue. (Note that z2 is a linear combination of the eigenvectors v1 and v2, so z2 is in the eigenspace.)
Since the eigenspace is two-dimensional (with basis v1, v2), the orthogonal set {v1, z2} is an orthogonal basis for the eigenspace, by the Basis Theorem. An orthonormal basis for the eigenspace is obtained by normalizing v1 and z2.
SPECTRAL DECOMPOSITION

Using the column-row expansion of a product, we can write
A = λ1u1u1ᵀ + λ2u2u2ᵀ + ⋯ + λnunuᵀn.   (2)
This representation of A is called a spectral decomposition of A because it breaks up A into pieces determined by the spectrum (eigenvalues) of A. Each term in (2) is an n×n matrix of rank 1. For example, every column of λ1u1u1ᵀ is a multiple of u1. Each matrix ujujᵀ is a projection matrix in the sense that for each x in ℝⁿ, the vector (ujujᵀ)x is the orthogonal projection of x onto the subspace spanned by uj.

Example 4: Construct a spectral decomposition of the matrix A that has the orthogonal diagonalization
A = [ 7 2 ] = [ 2/√5 −1/√5 ] [ 8 0 ] [  2/√5  1/√5 ]
    [ 2 4 ]   [ 1/√5  2/√5 ] [ 0 3 ] [ −1/√5  2/√5 ].

Solution: Denote the columns of P by u1 and u2. Then
A = 8u1u1ᵀ + 3u2u2ᵀ.
SPECTRAL DECOMPOSITION

To verify the decomposition of A, compute
u1u1ᵀ = [ 2/√5 ] [ 2/√5  1/√5 ] = [ 4/5  2/5 ]
        [ 1/√5 ]                  [ 2/5  1/5 ]
u2u2ᵀ = [ −1/√5 ] [ −1/√5  2/√5 ] = [  1/5 −2/5 ]
        [  2/√5 ]                   [ −2/5  4/5 ]
and
8u1u1ᵀ + 3u2u2ᵀ = [ 32/5 16/5 ] + [  3/5 −6/5 ] = [ 7 2 ] = A.
                  [ 16/5  8/5 ]   [ −6/5 12/5 ]   [ 2 4 ]
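Example 4's decomposition can be rebuilt and verified term by term; each np.outer(u, u) is the rank-1 projection matrix uuᵀ:

```python
import numpy as np

s5 = np.sqrt(5.0)
u1 = np.array([2.0, 1.0]) / s5    # columns of P from the diagonalization
u2 = np.array([-1.0, 2.0]) / s5

A_rebuilt = 8.0 * np.outer(u1, u1) + 3.0 * np.outer(u2, u2)
```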