Matrix Analysis
for Scientists & Engineers
Matrix Analysis
for Scientists & Engineers
Alan J. Laub
University of California
Davis, California
siam
Copyright © 2005 by the Society for Industrial and Applied Mathematics.
1 0 9 8 7 6 5 4 3 2 1
All rights reserved. Printed in the United States of America. No part of this book
may be reproduced, stored, or transmitted in any manner without the written permission
of the publisher. For information, write to the Society for Industrial and Applied
Mathematics, 3600 University City Science Center, Philadelphia, PA 19104-2688.
MATLAB® is a registered trademark of The MathWorks, Inc. For MATLAB product information,
please contact The MathWorks, Inc., 3 Apple Hill Drive, Natick, MA 01760-2098 USA,
508-647-7000, Fax: 508-647-7101, info@mathworks.com, www.mathworks.com
Mathematica is a registered trademark of Wolfram Research, Inc.
Mathcad is a registered trademark of Mathsoft Engineering & Education, Inc.
Library of Congress Cataloging-in-Publication Data
Laub, Alan J., 1948-
Matrix analysis for scientists and engineers / Alan J. Laub.
p. cm.
Includes bibliographical references and index.
ISBN 0-89871-576-8 (pbk.)
1. Matrices. 2. Mathematical analysis. I. Title.
QA188.L38 2005
512.9'434—dc22
2004059962
About the cover: The original artwork featured on the cover was created by freelance
artist Aaron Tallon of Philadelphia, PA. Used by permission.
siam is a registered trademark.
To my wife, Beverley
(who captivated me in the UBC math library
nearly forty years ago)
Contents
Preface xi
1 Introduction and Review 1
1.1 Some Notation and Terminology 1
1.2 Matrix Arithmetic 3
1.3 Inner Products and Orthogonality 4
1.4 Determinants 4
2 Vector Spaces 7
2.1 Definitions and Examples 7
2.2 Subspaces 9
2.3 Linear Independence 10
2.4 Sums and Intersections of Subspaces 13
3 Linear Transformations 17
3.1 Definition and Examples 17
3.2 Matrix Representation of Linear Transformations 18
3.3 Composition of Transformations 19
3.4 Structure of Linear Transformations 20
3.5 Four Fundamental Subspaces 22
4 Introduction to the Moore-Penrose Pseudoinverse 29
4.1 Definitions and Characterizations 29
4.2 Examples 30
4.3 Properties and Applications 31
5 Introduction to the Singular Value Decomposition 35
5.1 The Fundamental Theorem 35
5.2 Some Basic Properties 38
5.3 Row and Column Compressions 40
6 Linear Equations 43
6.1 Vector Linear Equations 43
6.2 Matrix Linear Equations 44
6.3 A More General Matrix Linear Equation 47
6.4 Some Useful and Interesting Inverses 47
7 Projections, Inner Product Spaces, and Norms 51
7.1 Projections 51
7.1.1 The four fundamental orthogonal projections 52
7.2 Inner Product Spaces 54
7.3 Vector Norms 57
7.4 Matrix Norms 59
8 Linear Least Squares Problems 65
8.1 The Linear Least Squares Problem 65
8.2 Geometric Solution 67
8.3 Linear Regression and Other Linear Least Squares Problems 67
8.3.1 Example: Linear regression 67
8.3.2 Other least squares problems 69
8.4 Least Squares and Singular Value Decomposition 70
8.5 Least Squares and QR Factorization 71
9 Eigenvalues and Eigenvectors 75
9.1 Fundamental Definitions and Properties 75
9.2 Jordan Canonical Form 82
9.3 Determination of the JCF 85
9.3.1 Theoretical computation 86
9.3.2 On the +1's in JCF blocks 88
9.4 Geometric Aspects of the JCF 89
9.5 The Matrix Sign Function 91
10 Canonical Forms 95
10.1 Some Basic Canonical Forms 95
10.2 Definite Matrices 99
10.3 Equivalence Transformations and Congruence 102
10.3.1 Block matrices and definiteness 104
10.4 Rational Canonical Form 104
11 Linear Differential and Difference Equations 109
11.1 Differential Equations 109
11.1.1 Properties of the matrix exponential 109
11.1.2 Homogeneous linear differential equations 112
11.1.3 Inhomogeneous linear differential equations 112
11.1.4 Linear matrix differential equations 113
11.1.5 Modal decompositions 114
11.1.6 Computation of the matrix exponential 114
11.2 Difference Equations 118
11.2.1 Homogeneous linear difference equations 118
11.2.2 Inhomogeneous linear difference equations 118
11.2.3 Computation of matrix powers 119
11.3 Higher-Order Equations 120
12 Generalized Eigenvalue Problems 125
12.1 The Generalized Eigenvalue/Eigenvector Problem 125
12.2 Canonical Forms 127
12.3 Application to the Computation of System Zeros 130
12.4 Symmetric Generalized Eigenvalue Problems 131
12.5 Simultaneous Diagonalization 133
12.5.1 Simultaneous diagonalization via SVD 133
12.6 Higher-Order Eigenvalue Problems 135
12.6.1 Conversion to first-order form 135
13 Kronecker Products 139
13.1 Definition and Examples 139
13.2 Properties of the Kronecker Product 140
13.3 Application to Sylvester and Lyapunov Equations 144
Bibliography 151
Index 153
Preface
This book is intended to be used as a text for beginning graduate-level (or even senior-level)
students in engineering, the sciences, mathematics, computer science, or computational
science who wish to be familiar with enough matrix analysis that they are prepared to use its
tools and ideas comfortably in a variety of applications. By matrix analysis I mean linear
algebra and matrix theory together with their intrinsic interaction with and application to
linear dynamical systems (systems of linear differential or difference equations). The text
can be used in a one-quarter or one-semester course to provide a compact overview of
much of the important and useful mathematics that, in many cases, students meant to learn
thoroughly as undergraduates, but somehow didn't quite manage to do. Certain topics
that may have been treated cursorily in undergraduate courses are treated in more depth
and more advanced material is introduced. I have tried throughout to emphasize only the
more important and "useful" tools, methods, and mathematical structures. Instructors are
encouraged to supplement the book with specific application examples from their own
particular subject area.
The choice of topics covered in linear algebra and matrix theory is motivated both by
applications and by computational utility and relevance. The concept of matrix factorization
is emphasized throughout to provide a foundation for a later course in numerical linear
algebra. Matrices are stressed more than abstract vector spaces, although Chapters 2 and 3
do cover some geometric (i.e., basis-free or subspace) aspects of many of the fundamental
notions. The books by Meyer [18], Noble and Daniel [20], Ortega [21], and Strang [24]
are excellent companion texts for this book. Upon completion of a course based on this
text, the student is then well-equipped to pursue, either via formal courses or through self-study, follow-on topics on the computational side (at the level of [7], [11], [23], or [25], for
example) or on the theoretical side (at the level of [12], [13], or [16], for example).
Prerequisites for using this text are quite modest: essentially just an understanding
of calculus and definitely some previous exposure to matrices and linear algebra. Basic
concepts such as determinants, singularity of matrices, eigenvalues and eigenvectors, and
positive definite matrices should have been covered at least once, even though their recollection may occasionally be "hazy." However, requiring such material as prerequisite permits the early (but "out-of-order" by conventional standards) introduction of topics such as pseudoinverses and the singular value decomposition (SVD). These powerful and versatile tools can then be exploited to provide a unifying foundation upon which to base subsequent topics. Because tools such as the SVD are not generally amenable to "hand computation," this
approach necessarily presupposes the availability of appropriate mathematical software on
a digital computer. For this, I highly recommend MATLAB® although other software such as
Mathematica® or Mathcad® is also excellent. Since this text is not intended for a course in
numerical linear algebra per se, the details of most of the numerical aspects of linear algebra
are deferred to such a course.
The presentation of the material in this book is strongly influenced by computational issues for two principal reasons. First, "real-life" problems seldom yield to simple closed-form formulas or solutions. They must generally be solved computationally and
it is important to know which types of algorithms can be relied upon and which cannot.
Some of the key algorithms of numerical linear algebra, in particular, form the foundation
upon which rests virtually all of modern scientific and engineering computation. A second
motivation for a computational emphasis is that it provides many of the essential tools for
what I call "qualitative mathematics." For example, in an elementary linear algebra course,
a set of vectors is either linearly independent or it is not. This is an absolutely fundamental
concept. But in most engineering or scientific contexts we want to know more than that.
If a set of vectors is linearly independent, how "nearly dependent" are the vectors? If they
are linearly dependent, are there "best" linearly independent subsets? These turn out to
be much more difficult problems and frequently involve researchlevel questions when set
in the context of the finite-precision, finite-range floating-point arithmetic environment of
most modern computing platforms.
Some of the applications of matrix analysis mentioned briefly in this book derive
from the modern state-space approach to dynamical systems. State-space methods are
now standard in much of modern engineering where, for example, control systems with
large numbers of interacting inputs, outputs, and states often give rise to models of very
high order that must be analyzed, simulated, and evaluated. The "language" in which such
models are conveniently described involves vectors and matrices. It is thus crucial to acquire
a working knowledge of the vocabulary and grammar of this language. The tools of matrix
analysis are also applied on a daily basis to problems in biology, chemistry, econometrics,
physics, statistics, and a wide variety of other fields, and thus the text can serve a rather
diverse audience. Mastery of the material in this text should enable the student to read and
understand the modern language of matrices used throughout mathematics, science, and
engineering.
While prerequisites for this text are modest, and while most material is developed from
basic ideas in the book, the student does require a certain amount of what is conventionally
referred to as "mathematical maturity." Proofs are given for many theorems. When they are
not given explicitly, they are either obvious or easily found in the literature. This is ideal
material from which to learn a bit about mathematical proofs and the mathematical maturity
and insight gained thereby. It is my firm conviction that such maturity is neither encouraged
nor nurtured by relegating the mathematical aspects of applications (for example, linear
algebra for elementary state-space theory) to an appendix or introducing it "on-the-fly" when
necessary. Rather, one must lay a firm foundation upon which subsequent applications and
perspectives can be built in a logical, consistent, and coherent fashion.
I have taught this material for many years, many times at UCSB and twice at UC
Davis, and the course has proven to be remarkably successful at enabling students from
disparate backgrounds to acquire a quite acceptable level of mathematical maturity and
rigor for subsequent graduate studies in a variety of disciplines. Indeed, many students who
completed the course, especially the first few times it was offered, remarked afterward that
if only they had had this course before they took linear systems, or signal processing,
or estimation theory, etc., they would have been able to concentrate on the new ideas
they wanted to learn, rather than having to spend time making up for deficiencies in their
background in matrices and linear algebra. My fellow instructors, too, realized that by
requiring this course as a prerequisite, they no longer had to provide as much time for
"review" and could focus instead on the subject at hand. The concept seems to work.
— AJL, June 2004
Chapter 1
Introduction and Review
1.1 Some Notation and Terminology
We begin with a brief introduction to some standard notation and terminology to be used
throughout the text. This is followed by a review of some basic notions in matrix analysis
and linear algebra.
The following sets appear frequently throughout subsequent chapters:
1. R^n = the set of n-tuples of real numbers represented as column vectors. Thus, x ∈ R^n means

   x = [x_1, ..., x_n]^T,

   where x_i ∈ R for i ∈ n̲. Henceforth, the notation n̲ denotes the set {1, ..., n}.
Note: Vectors are always column vectors. A row vector is denoted by y^T, where y ∈ R^n and the superscript T is the transpose operation. That a vector is always a column vector rather than a row vector is entirely arbitrary, but this convention makes it easy to recognize immediately throughout the text that, e.g., x^T y is a scalar while x y^T is an n × n matrix.
2. C^n = the set of n-tuples of complex numbers represented as column vectors.
3. R^{m×n} = the set of real (or real-valued) m × n matrices.

4. R_r^{m×n} = the set of real m × n matrices of rank r. Thus, R_n^{n×n} denotes the set of real nonsingular n × n matrices.

5. C^{m×n} = the set of complex (or complex-valued) m × n matrices.

6. C_r^{m×n} = the set of complex m × n matrices of rank r.
We now classify some of the more familiar "shaped" matrices. A matrix A ∈ R^{n×n} (or A ∈ C^{n×n}) is

• diagonal if a_ij = 0 for i ≠ j.
• upper triangular if a_ij = 0 for i > j.
• lower triangular if a_ij = 0 for i < j.
• tridiagonal if a_ij = 0 for |i − j| > 1.
• pentadiagonal if a_ij = 0 for |i − j| > 2.
• upper Hessenberg if a_ij = 0 for i − j > 1.
• lower Hessenberg if a_ij = 0 for j − i > 1.

Each of the above also has a "block" analogue obtained by replacing scalar components in the respective definitions by block submatrices. For example, if A ∈ R^{n×n}, B ∈ R^{n×m}, and C ∈ R^{m×m}, then the (m + n) × (m + n) matrix [A B; 0 C] is block upper triangular.

The transpose of a matrix A is denoted by A^T and is the matrix whose (i, j)th entry is the (j, i)th entry of A, that is, (A^T)_ij = a_ji. Note that if A ∈ R^{m×n}, then A^T ∈ R^{n×m}. If A ∈ C^{m×n}, then its Hermitian transpose (or conjugate transpose) is denoted by A^H (or sometimes A*) and its (i, j)th entry is (A^H)_ij = ā_ji, where the bar indicates complex conjugation; i.e., if z = α + jβ (j = i = √−1), then z̄ = α − jβ. A matrix A is symmetric if A = A^T and Hermitian if A = A^H. We henceforth adopt the convention that, unless otherwise noted, an equation like A = A^T implies that A is real-valued while a statement like A = A^H implies that A is complex-valued.

Remark 1.1. While √−1 is most commonly denoted by i in mathematics texts, j is the more common notation in electrical engineering and system theory. There is some advantage to being conversant with both notations. The notation j is used throughout the text but reminders are placed at strategic locations.

Example 1.2.

1. A = [ · ] is symmetric (and Hermitian).

2. A = [5 7+j; 7+j 2] is complex-valued symmetric but not Hermitian.

3. A = [5 7+j; 7−j 2] is Hermitian (but not symmetric).

Transposes of block matrices can be defined in an obvious way. For example, it is easy to see that if A_ij are appropriately dimensioned subblocks, then

   [A_11 A_12; A_21 A_22]^T = [A_11^T A_21^T; A_12^T A_22^T].
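The distinction between symmetric and Hermitian is easy to check numerically. The following is a minimal sketch in Python/NumPy (an alternative to the MATLAB environment recommended in the Preface), using the matrices of items 2 and 3 of Example 1.2; the helper function names are illustrative and not part of the text.

```python
import numpy as np

# Matrices from Example 1.2, items 2 and 3.
A2 = np.array([[5, 7 + 1j],
               [7 + 1j, 2]])      # complex-valued symmetric, not Hermitian
A3 = np.array([[5, 7 + 1j],
               [7 - 1j, 2]])      # Hermitian, not symmetric

def is_symmetric(A):
    return np.array_equal(A, A.T)            # tests A = A^T

def is_hermitian(A):
    return np.array_equal(A, A.conj().T)     # tests A = A^H

print(is_symmetric(A2), is_hermitian(A2))    # True  False
print(is_symmetric(A3), is_hermitian(A3))    # False True
```

For real matrices the two tests coincide, which is why the convention adopted above (A = A^T for real-valued, A = A^H for complex-valued) carries information about the underlying field.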
1.2 Matrix Arithmetic

It is assumed that the reader is familiar with the fundamental notions of matrix addition, multiplication of a matrix by a scalar, and multiplication of matrices.

A special case of matrix multiplication occurs when the second matrix is a column vector x, i.e., the matrix-vector product Ax. A very important way to view this product is to interpret it as a weighted sum (linear combination) of the columns of A. That is, suppose

   A = [a_1, ..., a_n] ∈ R^{m×n} with a_i ∈ R^m and x = [x_1, ..., x_n]^T.

Then

   Ax = x_1 a_1 + ... + x_n a_n ∈ R^m.

The importance of this interpretation cannot be overemphasized. As a numerical example, take A = [9 8 7; 6 5 4] and x = [3, 2, 1]^T. Then we can quickly calculate dot products of the rows of A with the column x to find Ax = [50, 32]^T, but this matrix-vector product can also be computed via

   Ax = 3 [9; 6] + 2 [8; 5] + 1 [7; 4].

For large arrays of numbers, there can be important computer-architecture-related advantages to preferring the latter calculation method.
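The two viewpoints can be compared directly on the numerical example above. The following short sketch uses Python/NumPy (an assumption; the Preface recommends MATLAB) and computes Ax both by row-wise dot products and as a linear combination of the columns of A.

```python
import numpy as np

A = np.array([[9, 8, 7],
              [6, 5, 4]])
x = np.array([3, 2, 1])

# Row-oriented view: each entry of Ax is a dot product of a row of A with x.
row_view = np.array([A[i, :] @ x for i in range(A.shape[0])])

# Column-oriented view: Ax is a weighted sum of the columns of A.
col_view = sum(x[j] * A[:, j] for j in range(A.shape[1]))

print(row_view)   # [50 32]
print(col_view)   # [50 32]
print(A @ x)      # [50 32], NumPy's built-in product agrees
```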
For matrix multiplication, suppose A ∈ R^{m×n} and B = [b_1, ..., b_p] ∈ R^{n×p} with b_i ∈ R^n. Then the matrix product AB can be thought of as above, applied p times:

   AB = [Ab_1, ..., Ab_p] ∈ R^{m×p}.

There is also an alternative, but equivalent, formulation of matrix multiplication that appears frequently in the text and is presented below as a theorem. Again, its importance cannot be overemphasized. It is deceptively simple and its full understanding is well rewarded.

Theorem 1.3. Let U = [u_1, ..., u_n] ∈ R^{m×n} with u_i ∈ R^m and V = [v_1, ..., v_n] ∈ R^{p×n} with v_i ∈ R^p. Then

   U V^T = Σ_{i=1}^n u_i v_i^T ∈ R^{m×p}.

If matrices C and D are compatible for multiplication, recall that (CD)^T = D^T C^T (or (CD)^H = D^H C^H). This gives a dual to the matrix-vector result above. Namely, if C ∈ R^{m×n} has row vectors c_j^T ∈ R^{1×n}, and is premultiplied by a row vector y^T ∈ R^{1×m}, then the product can be written as a weighted linear sum of the rows of C as follows:

   y^T C = y_1 c_1^T + ... + y_m c_m^T ∈ R^{1×n}.

Theorem 1.3 can then also be generalized to its "row dual." The details are left to the reader.
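Theorem 1.3 is also easy to confirm numerically. The sketch below uses randomly generated test matrices (my own illustration, not from the text) and checks that U V^T equals the sum of the outer products u_i v_i^T.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, p = 4, 3, 5
U = rng.standard_normal((m, n))   # columns u_1, ..., u_n in R^m
V = rng.standard_normal((p, n))   # columns v_1, ..., v_n in R^p

# Sum of outer products u_i v_i^T, as in Theorem 1.3.
outer_sum = sum(np.outer(U[:, i], V[:, i]) for i in range(n))

print(np.allclose(U @ V.T, outer_sum))   # True: U V^T = sum_i u_i v_i^T
```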
1.3 Inner Products and Orthogonality

For vectors x, y ∈ R^n, the Euclidean inner product (or inner product, for short) of x and y is given by

   (x, y) := x^T y = Σ_{i=1}^n x_i y_i.

Note that the inner product is a scalar.

If x, y ∈ C^n, we define their complex Euclidean inner product (or inner product, for short) by

   (x, y)_c := x^H y = Σ_{i=1}^n x̄_i y_i.

Note that (x, y)_c = \overline{(y, x)_c}, i.e., the order in which x and y appear in the complex inner product is important. The more conventional definition of the complex inner product is (x, y)_c = y^H x = Σ_{i=1}^n x_i ȳ_i, but throughout the text we prefer the symmetry with the real case.

Example 1.4. Let x = [1; j] and y = [1; 2]. Then

   (x, y)_c = x^H y = [1  −j] [1; 2] = 1 − 2j,

while

   (y, x)_c = y^H x = [1  2] [1; j] = 1 + 2j,

and we see that, indeed, (x, y)_c = \overline{(y, x)_c}.

Note that x^T x = 0 if and only if x = 0 when x ∈ R^n but that this is not true if x ∈ C^n. What is true in the complex case is that x^H x = 0 if and only if x = 0. To illustrate, consider the nonzero vector x above. Then x^T x = 0 but x^H x = 2.
Two nonzero vectors x, y ∈ R^n are said to be orthogonal if their inner product is zero, i.e., x^T y = 0. Nonzero complex vectors are orthogonal if x^H y = 0. If x and y are orthogonal and x^T x = 1 and y^T y = 1, then we say that x and y are orthonormal. A matrix A ∈ R^{n×n} is an orthogonal matrix if A^T A = A A^T = I, where I is the n × n identity matrix. The notation I_n is sometimes used to denote the identity matrix in R^{n×n} (or C^{n×n}). Similarly, a matrix A ∈ C^{n×n} is said to be unitary if A^H A = A A^H = I. Clearly an orthogonal or unitary matrix has orthonormal rows and orthonormal columns. There is no special name attached to a nonsquare matrix A ∈ R^{m×n} (or ∈ C^{m×n}) with orthonormal rows or columns.
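As a small illustration of the definition, a 2 × 2 rotation matrix (my own example, not from the text) is orthogonal, so its columns and rows are orthonormal.

```python
import numpy as np

theta = 0.3
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # a 2 x 2 rotation

# Q is orthogonal: Q^T Q = Q Q^T = I.
print(np.allclose(Q.T @ Q, np.eye(2)))   # True
print(np.allclose(Q @ Q.T, np.eye(2)))   # True
```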
1.4 Determinants

It is assumed that the reader is familiar with the basic theory of determinants. For A ∈ R^{n×n} (or A ∈ C^{n×n}) we use the notation det A for the determinant of A. We list below some of
the more useful properties of determinants. Note that this is not a minimal set, i.e., several
properties are consequences of one or more of the others.
1. If A has a zero row or if any two rows of A are equal, then det A = 0.
2. If A has a zero column or if any two columns of A are equal, then det A = 0.
3. Interchanging two rows of A changes only the sign of the determinant.
4. Interchanging two columns of A changes only the sign of the determinant.
5. Multiplying a row of A by a scalar α results in a new matrix whose determinant is α det A.

6. Multiplying a column of A by a scalar α results in a new matrix whose determinant is α det A.
7. Multiplying a row of A by a scalar and then adding it to another row does not change
the determinant.
8. Multiplying a column of A by a scalar and then adding it to another column does not
change the determinant.
9. det A^T = det A (det A^H = \overline{det A} if A ∈ C^{n×n}).

10. If A is diagonal, then det A = a_11 a_22 ... a_nn, i.e., det A is the product of its diagonal elements.

11. If A is upper triangular, then det A = a_11 a_22 ... a_nn.

12. If A is lower triangular, then det A = a_11 a_22 ... a_nn.

13. If A is block diagonal (or block upper triangular or block lower triangular), with square diagonal blocks A_11, A_22, ..., A_nn (of possibly different sizes), then det A = det A_11 det A_22 ... det A_nn.
14. If A, B ∈ R^{n×n}, then det(AB) = det A det B.

15. If A ∈ R_n^{n×n}, then det(A^{−1}) = 1/det A.
16. If A ∈ R_n^{n×n} and D ∈ R^{m×m}, then

   det [A B; C D] = det A det(D − C A^{−1} B).

   Proof: This follows easily from the block LU factorization

   [A B; C D] = [I 0; C A^{−1} I] [A B; 0 D − C A^{−1} B].

17. If A ∈ R^{n×n} and D ∈ R_m^{m×m}, then

   det [A B; C D] = det D det(A − B D^{−1} C).

   Proof: This follows easily from the block UL factorization

   [A B; C D] = [I B D^{−1}; 0 I] [A − B D^{−1} C 0; C D].
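Properties 16 and 17 lend themselves to a quick numerical check. The sketch below uses random test blocks (shifted, as an assumption, so that A and D are safely nonsingular) and compares the determinant of the block matrix with the two Schur-complement formulas.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 3, 2
A = rng.standard_normal((n, n)) + n * np.eye(n)   # keep A nonsingular
B = rng.standard_normal((n, m))
C = rng.standard_normal((m, n))
D = rng.standard_normal((m, m)) + m * np.eye(m)   # keep D nonsingular

M = np.block([[A, B],
              [C, D]])

det_via_A = np.linalg.det(A) * np.linalg.det(D - C @ np.linalg.inv(A) @ B)  # property 16
det_via_D = np.linalg.det(D) * np.linalg.det(A - B @ np.linalg.inv(D) @ C)  # property 17

print(np.isclose(np.linalg.det(M), det_via_A))   # True
print(np.isclose(np.linalg.det(M), det_via_D))   # True
```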
Remark 1.5. The factorization of a matrix A into the product of a unit lower triangular
matrix L (i.e., lower triangular with all 1's on the diagonal) and an upper triangular matrix
U is called an LU factorization; see, for example, [24]. Another such factorization is UL
where U is unit upper triangular and L is lower triangular. The factorizations used above
are block analogues of these.
Remark 1.6. The matrix D − C A^{−1} B is called the Schur complement of A in [A B; C D]. Similarly, A − B D^{−1} C is the Schur complement of D in [A B; C D].
EXERCISES
1. If A ∈ R^{n×n} and α is a scalar, what is det(αA)? What is det(−A)?

2. If A is orthogonal, what is det A? If A is unitary, what is det A?

3. Let x, y ∈ R^n. Show that det(I − x y^T) = 1 − y^T x.

4. Let U_1, U_2, ..., U_k ∈ R^{n×n} be orthogonal matrices. Show that the product U = U_1 U_2 ... U_k is an orthogonal matrix.

5. Let A ∈ R^{n×n}. The trace of A, denoted Tr A, is defined as the sum of its diagonal elements, i.e., Tr A = Σ_{i=1}^n a_ii.

   (a) Show that the trace is a linear function; i.e., if A, B ∈ R^{n×n} and α, β ∈ R, then Tr(αA + βB) = α Tr A + β Tr B.

   (b) Show that Tr(AB) = Tr(BA), even though in general AB ≠ BA.

   (c) Let S ∈ R^{n×n} be skew-symmetric, i.e., S^T = −S. Show that Tr S = 0. Then either prove the converse or provide a counterexample.
6. A matrix A ∈ R^{n×n} is said to be idempotent if A^2 = A.

   (a) Show that the matrix

       A = (1/2) [2 cos^2 θ   sin 2θ; sin 2θ   2 sin^2 θ]

       is idempotent for all θ.

   (b) Suppose A ∈ R^{n×n} is idempotent and A ≠ I. Show that A must be singular.
Chapter 2
Vector Spaces
In this chapter we give a brief review of some of the basic concepts of vector spaces. The
emphasis is on finite-dimensional vector spaces, including spaces formed by special classes of matrices, but some infinite-dimensional examples are also cited. An excellent reference
for this and the next chapter is [10], where some of the proofs that are not given here may
be found.
2.1 Definitions and Examples
Definition 2.1. A field is a set F together with two operations +, · : F × F → F such that

(A1) α + (β + γ) = (α + β) + γ for all α, β, γ ∈ F.

(A2) there exists an element 0 ∈ F such that α + 0 = α for all α ∈ F.

(A3) for all α ∈ F, there exists an element (−α) ∈ F such that α + (−α) = 0.

(A4) α + β = β + α for all α, β ∈ F.

(M1) α · (β · γ) = (α · β) · γ for all α, β, γ ∈ F.

(M2) there exists an element 1 ∈ F such that α · 1 = α for all α ∈ F.

(M3) for all α ∈ F, α ≠ 0, there exists an element α^{−1} ∈ F such that α · α^{−1} = 1.

(M4) α · β = β · α for all α, β ∈ F.

(D) α · (β + γ) = α · β + α · γ for all α, β, γ ∈ F.

Axioms (A1)-(A3) state that (F, +) is a group and an abelian group if (A4) also holds. Axioms (M1)-(M4) state that (F \ {0}, ·) is an abelian group.

Generally speaking, when no confusion can arise, the multiplication operator "·" is not written explicitly.
Example 2.2.

1. R with ordinary addition and multiplication is a field.

2. C with ordinary complex addition and multiplication is a field.

3. Ra[x] = the field of rational functions in the indeterminate x

   = { (α_0 + α_1 x + ... + α_p x^p) / (β_0 + β_1 x + ... + β_q x^q) : α_i, β_i ∈ R; p, q ∈ Z+ },

   where Z+ = {0, 1, 2, ...}, is a field.

4. R_r^{m×n} = {m × n matrices of rank r with real coefficients} is clearly not a field since, for example, (M1) does not hold unless m = n. Moreover, R_n^{n×n} is not a field either since (M4) does not hold in general (although the other 8 axioms hold).

Definition 2.3. A vector space over a field F is a set V together with two operations + : V × V → V and · : F × V → V such that

(V1) (V, +) is an abelian group.

(V2) (α · β) · v = α · (β · v) for all α, β ∈ F and for all v ∈ V.

(V3) (α + β) · v = α · v + β · v for all α, β ∈ F and for all v ∈ V.

(V4) α · (v + w) = α · v + α · w for all α ∈ F and for all v, w ∈ V.

(V5) 1 · v = v for all v ∈ V (1 ∈ F).

A vector space is denoted by (V, F) or, when there is no possibility of confusion as to the underlying field, simply by V.

Remark 2.4. Note that + and · in Definition 2.3 are different from the + and · in Definition 2.1 in the sense of operating on different objects in different sets. In practice, this causes no confusion and the · operator is usually not even written explicitly.

Example 2.5.

1. (R^n, R) with addition defined by

   x + y = [x_1 + y_1, ..., x_n + y_n]^T

   and scalar multiplication defined by

   α x = [α x_1, ..., α x_n]^T

   is a vector space. Similar definitions hold for (C^n, C).
2. (R^{m×n}, R) is a vector space with addition defined by

   A + B = [a_ij + β_ij]  (i.e., entrywise)

   and scalar multiplication defined by

   γ A = [γ a_ij].

3. Let (V, F) be an arbitrary vector space and D be an arbitrary set. Let φ(D, V) be the set of functions f mapping D to V. Then φ(D, V) is a vector space with addition defined by

   (f + g)(d) = f(d) + g(d) for all d ∈ D and for all f, g ∈ φ

   and scalar multiplication defined by

   (αf)(d) = α f(d) for all α ∈ F, for all d ∈ D, and for all f ∈ φ.

   Special Cases:

   (a) D = [t_0, t_1], (V, F) = (R^n, R), and the functions are piecewise continuous =: (PC[t_0, t_1])^n or continuous =: (C[t_0, t_1])^n.

   (b) D = [t_0, +∞), (V, F) = (R^n, R), etc.

4. Let A ∈ R^{n×n}. Then {x(t) : ẋ(t) = A x(t)} is a vector space (of dimension n).

2.2 Subspaces

Definition 2.6. Let (V, F) be a vector space and let W ⊆ V, W ≠ ∅. Then (W, F) is a subspace of (V, F) if and only if (W, F) is itself a vector space or, equivalently, if and only if (α w_1 + β w_2) ∈ W for all α, β ∈ F and for all w_1, w_2 ∈ W.

Remark 2.7. The latter characterization of a subspace is often the easiest way to check or prove that something is indeed a subspace (or vector space); i.e., verify that the set in question is closed under addition and scalar multiplication. Note, too, that since 0 ∈ F, this implies that the zero vector must be in any subspace.

Notation: When the underlying field is understood, we write W ⊆ V, and the symbol ⊆, when used with vector spaces, is henceforth understood to mean "is a subspace of." The less restrictive meaning "is a subset of" is specifically flagged as such.
Example 2.8.

1. Consider (V, F) = (R^{n×n}, R) and let W = {A ∈ R^{n×n} : A is symmetric}. Then W ⊆ V.

   Proof: Suppose A_1, A_2 are symmetric. Then it is easily shown that α A_1 + β A_2 is symmetric for all α, β ∈ R.

2. Let W = {A ∈ R^{n×n} : A is orthogonal}. Then W is not a subspace of R^{n×n}. (Both items are illustrated numerically in the sketch following this example.)

3. Consider (V, F) = (R^2, R) and for each v ∈ R^2 of the form v = [v_1; v_2] identify v_1 with the x-coordinate in the plane and v_2 with the y-coordinate. For α, β ∈ R, define

   W_{α,β} = {v : v = [c; αc + β]; c ∈ R}.

   Then W_{α,β} is a subspace of V if and only if β = 0. As an interesting exercise, sketch W_{2,1}, W_{2,0}, W_{1/2,1}, and W_{1/2,0}. Note, too, that the vertical line through the origin (i.e., α = ∞) is also a subspace. All lines through the origin are subspaces. Shifted subspaces W_{α,β} with β ≠ 0 are called linear varieties.
Henceforth, we drop the explicit dependence of a vector space on an underlying field.
Thus, V usually denotes a vector space with the underlying field generally being JR. unless
explicitly stated otherwise.
Definition 2.9. ffR and S are vector spaces (or subspaces), then R = S if and only if
R R.
Note: To prove two vector spaces are equal, one usually proves the two inclusions separately:
An arbitrary r E R is shown to be an element of S and then an arbitrary s E S is shown to
be an element of R.
2.3 Linear Independence
Let X = {VI, V2, •.• } be a nonempty collection of vectors Vi in some vector space V.
Definition 2.10. X is a linearly dependent set of vectors if and only if there exist k distinct
elements VI, ... , Vk E X and scalars aI, ..• , (Xk not all zero such that
X is a linearly independent set of vectors if and only if for any collection of k distinct
elements VI, ... , Vk of X and for any scalars aI, ••• , ak,
al VI + ... + (XkVk = 0 implies al = 0, ... , ak = O.
Example 2.11.
1. Let V = R^3. Then {v1, v2} is a linearly independent set. Why? However, {v1, v2, v3} is a linearly dependent set (since 2v1 − v2 + v3 = 0).
2. Let A ∈ R^{n×n} and B ∈ R^{n×m}. Then consider the rows of e^{tA} B as vectors in (C[t0, t1])^m (recall that e^{tA} denotes the matrix exponential, which is discussed in more detail in Chapter 11). Independence of these vectors turns out to be equivalent to a concept called controllability, to be studied further in what follows.
Let vi ∈ R^n, i ∈ k, and consider the matrix V = [v1, ..., vk] ∈ R^{n×k}. The linear dependence of this set of vectors is equivalent to the existence of a nonzero vector a ∈ R^k such that Va = 0. An equivalent condition for linear dependence is that the k × k matrix V^T V is singular. If the set of vectors is independent, and there exists a ∈ R^k such that Va = 0, then a = 0. An equivalent condition for linear independence is that the matrix V^T V is nonsingular.
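A quick numerical check of this criterion is easy to set up; the following minimal Python (NumPy) sketch uses arbitrarily chosen vectors and simply tests whether V^T V is singular.

    import numpy as np

    # Columns of V are the vectors v1, v2, v3 (arbitrary illustrative data).
    V = np.array([[1., 0., 1.],
                  [0., 1., 1.],
                  [0., 0., 0.]])        # v3 = v1 + v2, so the set is linearly dependent

    G = V.T @ V                           # the k x k matrix V^T V
    print(np.linalg.matrix_rank(G))       # 2 < 3: V^T V is singular, so the set is dependent

    W = np.eye(3)[:, :2]                  # {e1, e2}: an independent set
    print(np.linalg.matrix_rank(W.T @ W)) # 2 = k: V^T V nonsingular, so the set is independent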
Definition 2.12. Let X = {v1, v2, ...} be a collection of vectors vi ∈ V. Then the span of X is defined as
       Sp(X) = Sp{v1, v2, ...}
             = {v : v = α1 v1 + ... + αk vk ; αi ∈ F, vi ∈ X, k ∈ N},
where N = {1, 2, ...}.
Example 2.13. Let V = R^n and define
       e1 = [1 0 ... 0]^T,  e2 = [0 1 0 ... 0]^T,  ...,  en = [0 ... 0 1]^T.
Then Sp{e1, e2, ..., en} = R^n.
Definition 2.14. A set of vectors X is a basis for V if and only if
1. X is a linearly independent set (of basis vectors), and
2. Sp(X) = V.
Example 2.15. {e1, ..., en} is a basis for R^n (sometimes called the natural basis).
Now let b1, ..., bn be a basis (with a specific order associated with the basis vectors) for V. Then for all v ∈ V there exists a unique n-tuple {ξ1, ..., ξn} such that
       v = ξ1 b1 + ... + ξn bn = Bx,
where
       B = [b1, ..., bn],   x = [ξ1 ... ξn]^T.
Definition 2.16. The scalars {ξi} are called the components (or sometimes the coordinates) of v with respect to the basis {b1, ..., bn} and are unique. We say that the vector x of components represents the vector v with respect to the basis B.
Example 2.17. In R^n,
       v = [v1 v2 ... vn]^T = v1 e1 + v2 e2 + ... + vn en.
We can also determine components of v with respect to another basis. For example, while
       [1  2]^T = 1 · e1 + 2 · e2
with respect to the natural basis for R^2, with respect to a different basis {b1, b2} the same vector has components x1 and x2. To see this, write
       [1  2]^T = x1 · b1 + x2 · b2 = [b1  b2] [x1  x2]^T = Bx.
Then
       x = B^{-1} [1  2]^T.
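Numerically, the components x of a vector v with respect to a basis {b1, b2} are found by solving Bx = v, where B = [b1 b2]. A minimal Python (NumPy) sketch, using the vectors of Exercise 4 at the end of this chapter:

    import numpy as np

    B = np.array([[2., 3.],
                  [1., 1.]])      # columns are b1 = [2 1]^T and b2 = [3 1]^T
    v = np.array([4., 1.])

    x = np.linalg.solve(B, v)     # components of v with respect to {b1, b2}
    print(x)                      # [-1.  2.]
    print(B @ x)                  # [4. 1.]  reproduces v = x1*b1 + x2*b2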
Theorem 2.18. The number of elements in a basis of a vector space is independent of the
particular basis considered.
Definition 2.19. If a basis X for a vector space V (≠ 0) has n elements, V is said to be n-dimensional or have dimension n and we write dim(V) = n or dim V = n. For
consistency, and because the 0 vector is in any vector space, we define dim(0) = 0. A vector space V is finite-dimensional if there exists a basis X with n < +∞ elements; otherwise, V is infinite-dimensional.
Thus, Theorem 2.18 says that dim(V) = the number of elements in a basis.
Example 2.20.
1. dim(R^n) = n.
2. dim(R^{m×n}) = mn.
   Note: Check that a basis for R^{m×n} is given by the mn matrices Eij; i ∈ m, j ∈ n, where Eij is a matrix all of whose elements are 0 except for a 1 in the (i, j)th location. The collection of Eij matrices can be called the "natural basis matrices."
3. dim(C[t0, t1]) = +∞.
4. dim{A ∈ R^{n×n} : A = A^T} = n(n + 1)/2. (To see why, determine n(n + 1)/2 symmetric basis matrices.)
5. dim{A ∈ R^{n×n} : A is upper (lower) triangular} = n(n + 1)/2.
2.4 Sums and Intersections of Subspaces
Definition 2.21. Let (V, F) be a vector space and let R, S ⊆ V. The sum and intersection of R and S are defined respectively by:
1. R + S = {r + s : r ∈ R, s ∈ S}.
2. R ∩ S = {v : v ∈ R and v ∈ S}.
Theorem 2.22.
1. R + S ⊆ V (in general, R1 + ... + Rk =: Σ_{i=1}^{k} Ri ⊆ V, for finite k).
2. R ∩ S ⊆ V (in general, ∩_{α∈A} Rα ⊆ V for an arbitrary index set A).
Remark 2.23. The union of two subspaces, R ∪ S, is not necessarily a subspace.
Definition 2.24. T = R ⊕ S is the direct sum of R and S if
1. R ∩ S = 0, and
2. R + S = T (in general, Ri ∩ (Σ_{j≠i} Rj) = 0 and Σ_i Ri = T).
The subspaces R and S are said to be complements of each other in T.
Remark 2.25. The complement of R (or S) is not unique. For example, consider V = R^2 and let R be any line through the origin. Then any other distinct line through the origin is a complement of R. Among all the complements there is a unique one orthogonal to R. We discuss more about orthogonal complements elsewhere in the text.
Theorem 2.26. Suppose T = R ⊕ S. Then
1. every t ∈ T can be written uniquely in the form t = r + s with r ∈ R and s ∈ S.
2. dim(T) = dim(R) + dim(S).
Proof: To prove the first part, suppose an arbitrary vector t ∈ T can be written in two ways as t = r1 + s1 = r2 + s2, where r1, r2 ∈ R and s1, s2 ∈ S. Then r1 − r2 = s2 − s1. But r1 − r2 ∈ R and s2 − s1 ∈ S. Since R ∩ S = 0, we must have r1 = r2 and s1 = s2, from which uniqueness follows.
The statement of the second part is a special case of the next theorem.
Theorem 2.27. For arbitrary subspaces R, S of a vector space V,
       dim(R + S) = dim(R) + dim(S) − dim(R ∩ S).
Example 2.28. Let U be the subspace of upper triangular matrices in R^{n×n} and let L be the subspace of lower triangular matrices in R^{n×n}. Then it may be checked that U + L = R^{n×n} while U ∩ L is the set of diagonal matrices in R^{n×n}. Using the fact that dim{diagonal matrices} = n, together with Examples 2.20.2 and 2.20.5, one can easily verify the validity of the formula given in Theorem 2.27.
Example 2.29. Let (V, F) = (R^{n×n}, R), let R be the set of skew-symmetric matrices in R^{n×n}, and let S be the set of symmetric matrices in R^{n×n}. Then V = R ⊕ S.
Proof: This follows easily from the fact that any A ∈ R^{n×n} can be written in the form
       A = (1/2)(A + A^T) + (1/2)(A − A^T).
The first matrix on the right-hand side above is in S while the second is in R.
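The decomposition used in this proof is easy to confirm numerically; a minimal Python (NumPy) sketch with an arbitrary matrix:

    import numpy as np

    A = np.array([[1., 2., 3.],
                  [4., 5., 6.],
                  [7., 8., 10.]])          # arbitrary 3 x 3 matrix

    S = 0.5 * (A + A.T)                    # symmetric part
    R = 0.5 * (A - A.T)                    # skew-symmetric part

    print(np.allclose(A, S + R))                       # True: A = S + R
    print(np.allclose(S, S.T), np.allclose(R, -R.T))   # True True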
EXERCISES
1. Suppose {v1, ..., vk} is a linearly dependent set. Then show that one of the vectors must be a linear combination of the others.
2. Let x1, x2, ..., xk ∈ R^n be nonzero mutually orthogonal vectors. Show that {x1, ..., xk} must be a linearly independent set.
3. Let v1, ..., vn be orthonormal vectors in R^n. Show that Av1, ..., Avn are also orthonormal if and only if A ∈ R^{n×n} is orthogonal.
4. Consider the vectors v1 = [2 1]^T and v2 = [3 1]^T. Prove that v1 and v2 form a basis for R^2. Find the components of the vector v = [4 1]^T with respect to this basis.
5. Let P denote the set of polynomials of degree less than or equal to two of the form p0 + p1 x + p2 x^2, where p0, p1, p2 ∈ R. Show that P is a vector space over R. Show that the polynomials 1, x, and 2x^2 − 1 are a basis for P. Find the components of the polynomial 2 + 3x + 4x^2 with respect to this basis.
6. Prove Theorem 2.22 (for the case of two subspaces R and S only).
7. Let Pn denote the vector space of polynomials of degree less than or equal to n, and of the form p(x) = p0 + p1 x + ... + pn x^n, where the coefficients pi are all real. Let PE denote the subspace of all even polynomials in Pn, i.e., those that satisfy the property p(−x) = p(x). Similarly, let PO denote the subspace of all odd polynomials, i.e., those satisfying p(−x) = −p(x). Show that Pn = PE ⊕ PO.
8. Repeat Example 2.28 using instead the two subspaces T of tridiagonal matrices and U of upper triangular matrices.
Chapter 3
Linear Transformations
3.1 Definition and Examples
We begin with the basic definition of a linear transformation (or linear map, linear function, or linear operator) between two vector spaces.
Definition 3.1. Let (V, F) and (W, F) be vector spaces. Then L : V → W is a linear transformation if and only if
       L(α v1 + β v2) = α L v1 + β L v2   for all α, β ∈ F and for all v1, v2 ∈ V.
The vector space V is called the domain of the transformation L while W, the space into which it maps, is called the codomain.
Example 3.2.
1. Let F = R and take V = W = PC[t0, +∞). Define L : PC[t0, +∞) → PC[t0, +∞) by
       v(t) ↦ w(t) = (Lv)(t) = ∫_{t0}^{t} e^{(t−τ)} v(τ) dτ.
2. Let F = R and take V = W = R^{m×n}. Fix M ∈ R^{m×m}. Define L : R^{m×n} → R^{m×n} by
       X ↦ Y = LX = MX.
3. Let F = R and take V = P^n = {p(x) = a0 + a1 x + ... + an x^n : ai ∈ R} and W = P^{n−1}. Define L : V → W by Lp = p', where ' denotes differentiation with respect to x.
3.2 Matrix Representation of Linear Transformations
Linear transformations between vector spaces with specific bases can be represented conveniently in matrix form. Specifically, suppose L : (V, F) → (W, F) is linear and further suppose that {vi, i ∈ n} and {wj, j ∈ m} are bases for V and W, respectively. Then the ith column of A = Mat L (the matrix representation of L with respect to the given bases for V and W) is the representation of L vi with respect to {wj, j ∈ m}. In other words,
       A = [ a11 ... a1n ]
           [  :        :  ]
           [ am1 ... amn ]  ∈ R^{m×n}
represents L since
       L vi = a1i w1 + ... + ami wm = W ai,
where W = [w1, ..., wm] and
       ai = [a1i ... ami]^T
is the ith column of A. Note that A = Mat L depends on the particular bases for V and W. This could be reflected by subscripts, say, in the notation, but this is usually not done.
The action of L on an arbitrary vector v ∈ V is uniquely determined (by linearity) by its action on a basis. Thus, if v = ξ1 v1 + ... + ξn vn = Vx (where v, and hence x, is arbitrary), then
       L Vx = L v = ξ1 L v1 + ... + ξn L vn = WAx.
Thus, LV = WA since x was arbitrary.
When V = R^n, W = R^m and {vi, i ∈ n}, {wj, j ∈ m} are the usual (natural) bases, the equation LV = WA becomes simply L = A. We thus commonly identify A as a linear transformation with its matrix representation, i.e., we write A : R^n → R^m. Thinking of A both as a matrix and as a linear transformation from R^n to R^m usually causes no confusion. Change of basis then corresponds naturally to appropriate matrix multiplication.
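As a concrete instance of Mat L, the sketch below (Python/NumPy, with the monomial bases {1, x, x^2, x^3} for V and {1, x, x^2} for W as one illustrative choice) builds the matrix of the differentiation operator of Example 3.2.3 column by column.

    import numpy as np

    n = 3                              # V = polynomials of degree <= 3, W of degree <= 2
    A = np.zeros((n, n + 1))
    for i in range(1, n + 1):
        A[i - 1, i] = i                # L(x^i) = i*x^(i-1), so column i has a single entry i

    print(A)
    # [[0. 1. 0. 0.]
    #  [0. 0. 2. 0.]
    #  [0. 0. 0. 3.]]

    p = np.array([2., 3., 4., 5.])     # coefficients of p(x) = 2 + 3x + 4x^2 + 5x^3
    print(A @ p)                       # [ 3.  8. 15.] = coefficients of p'(x)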
3.3 Composition of Transformations
Consider three vector spaces U, V, and W and transformations B from U to V and A from V to W. Then we can define a new transformation C as follows:
       U --B--> V --A--> W,   C = AB : U --> W.
The above diagram illustrates the composition of transformations C = AB. Note that in most texts, the arrows above are reversed as follows:
       W <--A-- V <--B-- U,   C = AB : U --> W.
However, it might be useful to prefer the former since the transformations A and B appear in the same order in both the diagram and the equation. If dim U = p, dim V = n, and dim W = m, and if we associate matrices with the transformations in the usual way, then composition of transformations corresponds to standard matrix multiplication. That is, we have C_{m×p} = A_{m×n} B_{n×p}. The above is sometimes expressed componentwise by the formula
       cij = Σ_{k=1}^{n} aik bkj.
Two Special Cases:
Inner Product: Let x, y ∈ R^n. Then their inner product is the scalar
       x^T y = Σ_{i=1}^{n} xi yi.
Outer Product: Let x ∈ R^m, y ∈ R^n. Then their outer product is the m × n matrix
       x y^T = [xi yj].
Note that any rank-one matrix A ∈ R^{m×n} can be written in the form A = x y^T above (or x y^H if A ∈ C^{m×n}). A rank-one symmetric matrix can be written in the form x x^T (or x x^H).
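These two special cases, and the rank-one observation, are easy to experiment with; a minimal Python (NumPy) sketch with arbitrary vectors:

    import numpy as np

    x = np.array([1., 2., 3.])
    y = np.array([4., 5., 6.])

    print(x @ y)                       # inner product x^T y = 32
    A = np.outer(x, y)                 # outer product x y^T, a 3 x 3 matrix
    print(np.linalg.matrix_rank(A))    # 1: a nonzero outer product always has rank one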
3.4 Structure of Linear Transformations
Let A : V → W be a linear transformation.
Definition 3.3. The range of A, denoted R(A), is the set {w ∈ W : w = Av for some v ∈ V}. Equivalently, R(A) = {Av : v ∈ V}. The range of A is also known as the image of A and denoted Im(A).
The nullspace of A, denoted N(A), is the set {v ∈ V : Av = 0}. The nullspace of A is also known as the kernel of A and denoted Ker(A).
Theorem 3.4. Let A : V → W be a linear transformation. Then
1. R(A) ⊆ W.
2. N(A) ⊆ V.
Note that N(A) and R(A) are, in general, subspaces of different spaces.
Theorem 3.5. Let A ∈ R^{m×n}. If A is written in terms of its columns as A = [a1, ..., an], then
       R(A) = Sp{a1, ..., an}.
Proof: The proof of this theorem is easy, essentially following immediately from the definition.
Remark 3.6. Note that in Theorem 3.5 and throughout the text, the same symbol (A) is used to denote both a linear transformation and its matrix representation with respect to the usual (natural) bases. See also the last paragraph of Section 3.2.
Definition 3.7. Let {v1, ..., vk} be a set of nonzero vectors vi ∈ R^n. The set is said to be orthogonal if vi^T vj = 0 for i ≠ j and orthonormal if vi^T vj = δij, where δij is the Kronecker delta defined by
       δij = 1 if i = j,  0 if i ≠ j.
Example 3.8.
1. {[1 1]^T, [1 −1]^T} is an orthogonal set.
2. {[1/√2 1/√2]^T, [1/√2 −1/√2]^T} is an orthonormal set.
3. If {v1, ..., vk} with vi ∈ R^n is an orthogonal set, then {v1/(v1^T v1)^{1/2}, ..., vk/(vk^T vk)^{1/2}} is an orthonormal set.
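Item 3 is how orthonormal sets are usually produced from orthogonal ones; a minimal Python (NumPy) check with arbitrarily chosen orthogonal vectors:

    import numpy as np

    V = np.array([[1.,  1.],
                  [1., -1.],
                  [0.,  0.]])                # two orthogonal (not unit-length) columns

    Q = V / np.linalg.norm(V, axis=0)        # divide each vi by (vi^T vi)^(1/2)
    print(np.allclose(Q.T @ Q, np.eye(2)))   # True: the scaled set is orthonormal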
Definition 3.9. Let S ⊆ R^n. Then the orthogonal complement of S is defined as the set
       S⊥ = {v ∈ R^n : v^T s = 0 for all s ∈ S}.
Example 3.10. Let S = Sp{ [3 5 7]^T, [4 1 1]^T } ⊆ R^3. Then it can be shown that S⊥ is the one-dimensional subspace spanned by any nontrivial solution of the system below. Working from the definition, the computation involved is simply to find all nontrivial (i.e., nonzero) solutions of the system of equations
       3x1 + 5x2 + 7x3 = 0,
       4x1 +  x2 +  x3 = 0.
Note that there is nothing special about the two vectors in the basis defining S being orthogonal. Any set of vectors will do, including dependent spanning vectors (which would, of course, then give rise to redundant equations).
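Computationally, S⊥ is just the null space of the matrix whose rows are the spanning vectors of S. A minimal Python (NumPy) sketch, using the spanning vectors as they appear in Example 3.10 above (an SVD is used to extract the null space):

    import numpy as np

    M = np.array([[3., 5., 7.],
                  [4., 1., 1.]])         # rows span S; v is in S-perp exactly when Mv = 0

    U, s, Vt = np.linalg.svd(M)
    rank = int(np.sum(s > 1e-12))
    basis = Vt[rank:].T                  # columns span S-perp (here a single column)
    print(np.allclose(M @ basis, 0))     # True
    print(M.shape[1] - rank)             # 1 = dim(S-perp) = n - dim(S)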
Theorem 3.11. Let R, S ⊆ R^n. Then
1. S⊥ ⊆ R^n.
2. S ⊕ S⊥ = R^n.
3. (S⊥)⊥ = S.
4. R ⊆ S if and only if S⊥ ⊆ R⊥.
5. (R + S)⊥ = R⊥ ∩ S⊥.
6. (R ∩ S)⊥ = R⊥ + S⊥.
Proof: We prove and discuss only item 2 here. The proofs of the other results are left as exercises. Let {v1, ..., vk} be an orthonormal basis for S and let x ∈ R^n be an arbitrary vector. Set
       x1 = Σ_{i=1}^{k} (x^T vi) vi,
       x2 = x − x1.
Then x1 ∈ S and, since
       x2^T vj = x^T vj − x1^T vj = x^T vj − x^T vj = 0,
we see that x2 is orthogonal to v1, ..., vk and hence to any linear combination of these vectors. In other words, x2 is orthogonal to any vector in S. We have thus shown that S + S⊥ = R^n. We also have that S ∩ S⊥ = 0 since the only vector s ∈ S orthogonal to everything in S (i.e., including itself) is 0.
It is also easy to see directly that, when we have such direct sum decompositions, we can write vectors in a unique way with respect to the corresponding subspaces. Suppose, for example, that x = x1 + x2 = x1' + x2', where x1, x1' ∈ S and x2, x2' ∈ S⊥. Then (x1' − x1)^T (x2' − x2) = 0 by definition of S⊥. But then (x1' − x1)^T (x1' − x1) = 0 since x2' − x2 = −(x1' − x1) (which follows by rearranging the equation x1 + x2 = x1' + x2'). Thus, x1 = x1' and x2 = x2'.
Theorem 3.12. Let A : R^n → R^m. Then
1. N(A)⊥ = R(A^T). (Note: This holds only for finite-dimensional vector spaces.)
2. R(A)⊥ = N(A^T). (Note: This also holds for infinite-dimensional vector spaces.)
Proof: To prove the first part, take an arbitrary x ∈ N(A). Then Ax = 0 and this is equivalent to y^T Ax = 0 for all y. But y^T Ax = (A^T y)^T x. Thus, Ax = 0 if and only if x is orthogonal to all vectors of the form A^T y, i.e., x ∈ R(A^T)⊥. Since x was arbitrary, we have established that N(A) = R(A^T)⊥ and hence, taking orthogonal complements and using Theorem 3.11.3, that N(A)⊥ = R(A^T).
The proof of the second part is similar and is left as an exercise.
Definition 3.13. Let A : R^n → R^m. Then {v ∈ R^n : Av = 0} is sometimes called the right nullspace of A. Similarly, {w ∈ R^m : w^T A = 0} is called the left nullspace of A. Clearly, the right nullspace is N(A) while the left nullspace is N(A^T).
Theorem 3.12 and part 2 of Theorem 3.11 can be combined to give two very fundamental and useful decompositions of vectors in the domain and codomain of a linear transformation A. See also Theorem 2.26.
Theorem 3.14 (Decomposition Theorem). Let A : R^n → R^m. Then
1. every vector v in the domain space R^n can be written in a unique way as v = x + y, where x ∈ N(A) and y ∈ N(A)⊥ = R(A^T) (i.e., R^n = N(A) ⊕ R(A^T)).
2. every vector w in the codomain space R^m can be written in a unique way as w = x + y, where x ∈ R(A) and y ∈ R(A)⊥ = N(A^T) (i.e., R^m = R(A) ⊕ N(A^T)).
This key theorem becomes very easy to remember by carefully studying and understanding Figure 3.1 in the next section.
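The decomposition in part 1 can be computed with an orthogonal projection onto R(A^T); a minimal Python (NumPy) sketch with arbitrary data (np.linalg.pinv is the Moore-Penrose pseudoinverse introduced in the next chapter):

    import numpy as np

    A = np.array([[1., 2., 3.],
                  [2., 4., 6.]])         # rank one, so N(A) is two-dimensional
    v = np.array([1., 1., 1.])

    P = np.linalg.pinv(A) @ A            # orthogonal projector onto R(A^T)
    y = P @ v                            # component of v in R(A^T)
    x = v - y                            # component of v in N(A)

    print(np.allclose(A @ x, 0))         # True: x is in N(A)
    print(np.allclose(x + y, v))         # True: v = x + y
    print(np.isclose(x @ y, 0))          # True: the two pieces are orthogonal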
3.5 Four Fundamental Subspaces
Consider a general matrix A ∈ R_r^{m×n}. When thought of as a linear transformation from R^n to R^m, many properties of A can be developed in terms of the four fundamental subspaces
Figure 3.1. Four fundamental subspaces.
R(A), R(A)⊥, N(A), and N(A)⊥. Figure 3.1 makes many key properties seem almost obvious and we return to this figure frequently both in the context of linear transformations and in illustrating concepts such as controllability and observability.
Definition 3.15. Let V and W be vector spaces and let A : V → W be a linear transformation.
1. A is onto (also called epic or surjective) if R(A) = W.
2. A is one-to-one or 1-1 (also called monic or injective) if N(A) = 0. Two equivalent characterizations of A being 1-1 that are often easier to verify in practice are the following:
   (a) A v1 = A v2 implies v1 = v2.
   (b) v1 ≠ v2 implies A v1 ≠ A v2.
Definition 3.16. Let A : R^n → R^m. Then rank(A) = dim R(A). This is sometimes called the column rank of A (maximum number of independent columns). The row rank of A is
dim R(A^T) (maximum number of independent rows). The dual notion to rank is the nullity of A, sometimes denoted nullity(A) or corank(A), and is defined as dim N(A).
Theorem 3.17. Let A : R^n → R^m. Then dim R(A) = dim N(A)⊥. (Note: Since N(A)⊥ = R(A^T), this theorem is sometimes colloquially stated "row rank of A = column rank of A.")
Proof: Define a linear transformation T : N(A)⊥ → R(A) by
       Tv = Av for all v ∈ N(A)⊥.
Clearly T is 1-1 (since N(T) = 0). To see that T is also onto, take any w ∈ R(A). Then by definition there is a vector x ∈ R^n such that Ax = w. Write x = x1 + x2, where x1 ∈ N(A)⊥ and x2 ∈ N(A). Then Ax1 = w = Tx1 since x1 ∈ N(A)⊥. The last equality shows that T is onto. We thus have that dim R(A) = dim N(A)⊥ since it is easily shown that if {v1, ..., vr} is a basis for N(A)⊥, then {Tv1, ..., Tvr} is a basis for R(A). Finally, if we apply this and several previous results, the following string of equalities follows easily: "column rank of A" = rank(A) = dim R(A) = dim N(A)⊥ = dim R(A^T) = rank(A^T) = "row rank of A."
The following corollary is immediate. Like the theorem, it is a statement about equality of dimensions; the subspaces themselves are not necessarily in the same vector space.
Corollary 3.18. Let A : R^n → R^m. Then dim N(A) + dim R(A) = n, where n is the dimension of the domain of A.
Proof: From Theorems 3.11 and 3.17 we see immediately that
       n = dim N(A) + dim N(A)⊥
         = dim N(A) + dim R(A).
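Corollary 3.18 is easy to confirm numerically; a minimal Python (NumPy) sketch with an arbitrary matrix:

    import numpy as np

    A = np.array([[1., 2., 3., 4.],
                  [2., 4., 6., 8.],
                  [1., 0., 1., 0.]])     # an arbitrary 3 x 4 matrix (rank 2)

    rank = np.linalg.matrix_rank(A)
    s = np.linalg.svd(A, compute_uv=False)
    nullity = A.shape[1] - int(np.sum(s > 1e-12))
    print(rank, nullity, rank + nullity) # 2 2 4: rank + nullity = n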
For completeness, we include here a few miscellaneous results about ranks of sums
and products of matrices.
Theorem 3.19. Let A, B ∈ R^{n×n}. Then
1. 0 ≤ rank(A + B) ≤ rank(A) + rank(B).
2. rank(A) + rank(B) − n ≤ rank(AB) ≤ min{rank(A), rank(B)}.
3. nullity(B) ≤ nullity(AB) ≤ nullity(A) + nullity(B).
4. if B is nonsingular, rank(AB) = rank(BA) = rank(A) and N(BA) = N(A).
Part 4 of Theorem 3.19 suggests looking at the general problem of the four fundamental
subspaces of matrix products. The basic results are contained in the following easily proved
theorem.
Theorem 3.20. Let A ∈ R^{m×n}, B ∈ R^{n×p}. Then
1. R(AB) ⊆ R(A).
2. N(AB) ⊇ N(B).
3. R((AB)^T) ⊆ R(B^T).
4. N((AB)^T) ⊇ N(A^T).
The next theorem is closely related to Theorem 3.20 and is also easily proved. It
is extremely useful in text that follows, especially when dealing with pseudoinverses and
linear least squares problems.
Theorem 3.21. Let A ∈ R^{m×n}. Then
1. R(A) = R(AA^T).
2. R(A^T) = R(A^T A).
3. N(A) = N(A^T A).
4. N(A^T) = N(AA^T).
We now characterize 1-1 and onto transformations and provide characterizations in terms of rank and invertibility.
Theorem 3.22. Let A : R^n → R^m. Then
1. A is onto if and only if rank(A) = m (A has linearly independent rows or is said to have full row rank; equivalently, AA^T is nonsingular).
2. A is 1-1 if and only if rank(A) = n (A has linearly independent columns or is said to have full column rank; equivalently, A^T A is nonsingular).
Proof: Proof of part 1: If A is onto, dim R(A) = m = rank(A). Conversely, let y ∈ R^m be arbitrary. Let x = A^T (AA^T)^{-1} y ∈ R^n. Then y = Ax, i.e., y ∈ R(A), so A is onto.
Proof of part 2: If A is 1-1, then N(A) = 0, which implies that dim N(A)⊥ = n = dim R(A^T), and hence dim R(A) = n by Theorem 3.17. Conversely, suppose Ax1 = Ax2. Then A^T Ax1 = A^T Ax2, which implies x1 = x2 since A^T A is invertible. Thus, A is 1-1.
Definition 3.23. A : V → W is invertible (or bijective) if and only if it is 1-1 and onto.
Note that if A is invertible, then dim V = dim W. Also, A : R^n → R^n is invertible or nonsingular if and only if rank(A) = n.
Note that in the special case when A ∈ R_n^{n×n}, the transformations A, A^T, and A^{-1} are all 1-1 and onto between the two spaces N(A)⊥ and R(A). The transformations A^T and A^{-1} have the same domain and range but are in general different maps unless A is orthogonal. Similar remarks apply to A and A^{-T}.
If a linear transformation is not invertible, it may still be right or left invertible. Definitions of these concepts are followed by a theorem characterizing left and right invertible transformations.
Definition 3.24. Let A : V → W. Then
1. A is said to be right invertible if there exists a right inverse transformation A^{-R} : W → V such that A A^{-R} = I_W, where I_W denotes the identity transformation on W.
2. A is said to be left invertible if there exists a left inverse transformation A^{-L} : W → V such that A^{-L} A = I_V, where I_V denotes the identity transformation on V.
Theorem 3.25. Let A : V → W. Then
1. A is right invertible if and only if it is onto.
2. A is left invertible if and only if it is 1-1.
Moreover, A is invertible if and only if it is both right and left invertible, i.e., both 1-1 and onto, in which case A^{-1} = A^{-R} = A^{-L}.
Note: From Theorem 3.22 we see that if A : R^n → R^m is onto, then a right inverse is given by A^{-R} = A^T (AA^T)^{-1}. Similarly, if A is 1-1, then a left inverse is given by A^{-L} = (A^T A)^{-1} A^T.
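Both formulas in the Note are easy to check numerically; a minimal Python (NumPy) sketch with arbitrary full-rank matrices:

    import numpy as np

    A = np.array([[1., 2., 0.],
                  [0., 1., 1.]])                  # full row rank, hence onto
    A_R = A.T @ np.linalg.inv(A @ A.T)            # right inverse A^T (A A^T)^(-1)
    print(np.allclose(A @ A_R, np.eye(2)))        # True

    B = A.T                                       # full column rank, hence 1-1
    B_L = np.linalg.inv(B.T @ B) @ B.T            # left inverse (B^T B)^(-1) B^T
    print(np.allclose(B_L @ B, np.eye(2)))        # True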
Theorem 3.26. Let A : V → V.
1. If there exists a unique right inverse A^{-R} such that A A^{-R} = I, then A is invertible.
2. If there exists a unique left inverse A^{-L} such that A^{-L} A = I, then A is invertible.
Proof: We prove the first part and leave the proof of the second to the reader. Notice the following:
       A (A^{-R} + A^{-R} A − I) = A A^{-R} + A A^{-R} A − A
                                 = I + IA − A   since A A^{-R} = I
                                 = I.
Thus, (A^{-R} + A^{-R} A − I) must be a right inverse and, therefore, by uniqueness it must be the case that A^{-R} + A^{-R} A − I = A^{-R}. But this implies that A^{-R} A = I, i.e., that A^{-R} is a left inverse. It then follows from Theorem 3.25 that A is invertible.
Example 3.27.
1. Let A = [1 2] : R^2 → R^1. Then A is onto. (Proof: Take any α ∈ R^1; then one can always find v ∈ R^2 such that [1 2][v1 v2]^T = α.) Obviously A has full row rank (= 1) and A^{-R} = [−1 1]^T is a right inverse. Also, it is clear that there are infinitely many right inverses for A. In Chapter 6 we characterize all right inverses of a matrix by characterizing all solutions of the linear matrix equation AR = I.
2. Let A = [1 2]^T : R^1 → R^2. Then A is 1-1. (Proof: The only solution to 0 = Av = [1 2]^T v is v = 0, whence N(A) = 0, so A is 1-1.) It is now obvious that A has full column rank (= 1) and A^{-L} = [3 −1] is a left inverse. Again, it is clear that there are infinitely many left inverses for A. In Chapter 6 we characterize all left inverses of a matrix by characterizing all solutions of the linear matrix equation LA = I.
3. The matrix
       A = [ 1  1 ]
           [ 2  1 ]
           [ 3  1 ],
   when considered as a linear transformation on R^3, is neither 1-1 nor onto. We give below bases for its four fundamental subspaces.
EXERCISES
1. Let A ∈ R^{2×3} and consider A as a linear transformation mapping R^3 to R^2. Find the matrix representation of A with respect to the given bases {b1, b2, b3} of R^3 and {c1, c2} of R^2.
2. Consider the vector space R^{n×n} over R, let S denote the subspace of symmetric matrices, and let R denote the subspace of skew-symmetric matrices. For matrices X, Y ∈ R^{n×n} define their inner product by (X, Y) = Tr(X^T Y). Show that, with respect to this inner product, R = S⊥.
3. Consider the differentiation operator L defined in Example 3.2.3. Is L 1-1? Is L onto?
4. Prove Theorem 3.4.
5. Prove Theorem 3.11.4.
6. Prove Theorem 3.12.2.
7. Determine bases for the four fundamental subspaces of the matrix
       2 5 5 3
8. Suppose A ∈ R^{m×n} has a left inverse. Show that A^T has a right inverse.
9. Let A = [0 1; 0 0]. Determine N(A) and R(A). Are they equal? Is this true in general? If this is true in general, prove it; if not, provide a counterexample.
10. Suppose A ∈ R_9^{9×48} (i.e., A is 9 × 48 with rank 9). How many linearly independent solutions can be found to the homogeneous linear system Ax = 0?
11. Modify Figure 3.1 to illustrate the four fundamental subspaces associated with A^T ∈ R^{n×m} thought of as a transformation from R^m to R^n.
Chapter 4
Introduction to the Moore-Penrose Pseudoinverse
In this chapter we give a brief introduction to the Moore-Penrose pseudoinverse, a generalization of the inverse of a matrix. The Moore-Penrose pseudoinverse is defined for any matrix and, as is shown in the following text, brings great notational and conceptual clarity to the study of solutions to arbitrary systems of linear equations and linear least squares problems.
4.1 Definitions and Characterizations
Consider a linear transformation A : X → Y, where X and Y are arbitrary finite-dimensional vector spaces. Define a transformation T : N(A)⊥ → R(A) by
       Tx = Ax for all x ∈ N(A)⊥.
Then, as noted in the proof of Theorem 3.17, T is bijective (1-1 and onto), and hence we can define a unique inverse transformation T^{-1} : R(A) → N(A)⊥. This transformation can be used to give our first definition of A^+, the Moore-Penrose pseudoinverse of A. Unfortunately, the definition neither provides nor suggests a good computational strategy for determining A^+.
Definition 4.1. With A and T as defined above, define a transformation A^+ : Y → X by
       A^+ y = T^{-1} y1,
where y = y1 + y2 with y1 ∈ R(A) and y2 ∈ R(A)⊥. Then A^+ is the Moore-Penrose pseudoinverse of A.
Although X and Y were arbitrary vector spaces above, let us henceforth consider the case X = R^n and Y = R^m. We have thus defined A^+ for all A ∈ R_r^{m×n}. A purely algebraic characterization of A^+ is given in the next theorem, which was proved by Penrose in 1955; see [22].
Theorem 4.2. Let A ∈ R_r^{m×n}. Then G = A^+ if and only if
(P1) AGA = A.
(P2) GAG = G.
(P3) (AG)^T = AG.
(P4) (GA)^T = GA.
Furthermore, A^+ always exists and is unique.
Note that the inverse of a nonsingular matrix satisfies all four Penrose properties. Also, a right or left inverse satisfies no fewer than three of the four properties. Unfortunately, as with Definition 4.1, neither the statement of Theorem 4.2 nor its proof suggests a computational algorithm. However, the Penrose properties do offer the great virtue of providing a checkable criterion in the following sense. Given a matrix G that is a candidate for being the pseudoinverse of A, one need simply verify the four Penrose conditions (P1)-(P4). If G satisfies all four, then by uniqueness, it must be A^+. Such a verification is often relatively straightforward.
Example 4.3. Consider A = [1 2]^T. Verify directly that A^+ = [1/5 2/5] satisfies (P1)-(P4). Note that other left inverses (for example, A^{-L} = [3 −1]) satisfy properties (P1), (P2), and (P4) but not (P3).
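Checking the four Penrose conditions is mechanical; a minimal Python (NumPy) sketch for the A of Example 4.3, with np.linalg.pinv supplying one candidate G and the left inverse [3 −1] supplying another:

    import numpy as np

    A = np.array([[1.],
                  [2.]])

    def penrose(A, G):
        return (np.allclose(A @ G @ A, A),        # (P1)
                np.allclose(G @ A @ G, G),        # (P2)
                np.allclose((A @ G).T, A @ G),    # (P3)
                np.allclose((G @ A).T, G @ A))    # (P4)

    G = np.linalg.pinv(A)                         # [[0.2, 0.4]], i.e., [1/5 2/5]
    print(penrose(A, G))                          # (True, True, True, True)
    print(penrose(A, np.array([[3., -1.]])))      # (True, True, False, True)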
Still another characterization of A^+ is given in the following theorem, whose proof can be found in [1, p. 19]. While not generally suitable for computer implementation, this characterization can be useful for hand calculation of small examples.
Theorem 4.4. Let A ∈ R_r^{m×n}. Then
       A^+ = lim_{δ→0} (A^T A + δ^2 I)^{-1} A^T        (4.1)
           = lim_{δ→0} A^T (A A^T + δ^2 I)^{-1}.       (4.2)
4.2 Examples
Each of the following can be derived or verified by using the above definitions or characterizations.
Example 4.5. A^+ = A^T (AA^T)^{-1} if A is onto (independent rows) (A is right invertible).
Example 4.6. A^+ = (A^T A)^{-1} A^T if A is 1-1 (independent columns) (A is left invertible).
Example 4.7. For any scalar a,
       a^+ = 1/a if a ≠ 0,  and  a^+ = 0 if a = 0.
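The limit characterization of Theorem 4.4 can also be observed numerically by taking δ small; a minimal Python (NumPy) sketch with an arbitrary rank-deficient matrix:

    import numpy as np

    A = np.array([[1., 1.],
                  [1., 1.]])                       # rank one, so A has no ordinary inverse
    delta = 1e-3

    approx = np.linalg.inv(A.T @ A + delta**2 * np.eye(2)) @ A.T   # formula (4.1)
    print(approx)                                  # close to [[0.25, 0.25], [0.25, 0.25]]
    print(np.allclose(approx, np.linalg.pinv(A)))  # True (to the default tolerance)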
Example 4.8. For any vector v ∈ R^n,

    v^+ = (v^T v)^+ v^T = v^T / (v^T v)  if v ≠ 0,
    v^+ = 0^T                            if v = 0.

Example 4.9.

    [1  0; 0  0]^+ = [1  0; 0  0].

Example 4.10.

    [1  1; 1  1]^+ = [1/4  1/4; 1/4  1/4].

4.3 Properties and Applications

This section presents some miscellaneous useful results on pseudoinverses. Many of these
are used in the text that follows.

Theorem 4.11. Let A ∈ R^{m×n} and suppose U ∈ R^{m×m}, V ∈ R^{n×n} are orthogonal (M is
orthogonal if M^T = M^{-1}). Then

    (U A V)^+ = V^T A^+ U^T.

Proof: For the proof, simply verify that the expression above does indeed satisfy each of
the four Penrose conditions. □

Theorem 4.12. Let S ∈ R^{n×n} be symmetric with U^T S U = D, where U is orthogonal and
D is diagonal. Then S^+ = U D^+ U^T, where D^+ is again a diagonal matrix whose diagonal
elements are determined according to Example 4.7.

Theorem 4.13. For all A ∈ R^{m×n},

1. A^+ = (A^T A)^+ A^T = A^T (A A^T)^+.

2. (A^T)^+ = (A^+)^T.

Proof: Both results can be proved using the limit characterization of Theorem 4.4. The
proof of the first result is not particularly easy and does not even have the virtue of being
especially illuminating. The interested reader can consult the proof in [1, p. 27]. The
proof of the second result (which can also be proved easily by verifying the four Penrose
conditions) is as follows:

    (A^T)^+ = lim_{δ→0} (A A^T + δ^2 I)^{-1} A
            = lim_{δ→0} [A^T (A A^T + δ^2 I)^{-1}]^T
            = [lim_{δ→0} A^T (A A^T + δ^2 I)^{-1}]^T
            = (A^+)^T.  □
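
As a numerical illustration (a NumPy sketch, not from the text, under the assumption that a
small but nonzero δ is an adequate stand-in for the limit), the regularized formula of
Theorem 4.4 can be compared with the full-column-rank formula of Example 4.6:

    import numpy as np

    def pinv_limit(A, delta=1e-6):
        """Approximate A^+ via (A^T A + delta^2 I)^{-1} A^T (Theorem 4.4, small delta)."""
        n = A.shape[1]
        return np.linalg.solve(A.T @ A + delta**2 * np.eye(n), A.T)

    A = np.array([[1.0, 0.0],
                  [1.0, 1.0],
                  [0.0, 2.0]])             # full column rank, so A^+ = (A^T A)^{-1} A^T

    exact = np.linalg.solve(A.T @ A, A.T)  # Example 4.6
    approx = pinv_limit(A)
    print(np.max(np.abs(exact - approx)))  # small: the limit formula is accurate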
Note that by combining Theorems 4.12 and 4.13 we can, in theory at least, compute
the Moore-Penrose pseudoinverse of any matrix (since A A^T and A^T A are symmetric). This
turns out to be a poor approach in finite-precision arithmetic, however (see, e.g., [7], [11],
[23]), and better methods are suggested in text that follows.

Theorem 4.11 is suggestive of a "reverse-order" property for pseudoinverses of products
of matrices such as exists for inverses of products. Unfortunately, in general,

    (A B)^+ ≠ B^+ A^+.

As an example consider A = [0  1] and B = [1; 1]. Then

    (A B)^+ = 1^+ = 1

while

    B^+ A^+ = [1/2  1/2] [0; 1] = 1/2.

However, necessary and sufficient conditions under which the reverse-order property does
hold are known and we quote a couple of moderately useful results for reference.

Theorem 4.14. (A B)^+ = B^+ A^+ if and only if

1. R(B B^T A^T) ⊆ R(A^T)

and

2. R(A^T A B) ⊆ R(B).

Proof: For the proof, see [9]. □

Theorem 4.15. (A B)^+ = B_1^+ A_1^+, where B_1 = A^+ A B and A_1 = A B_1 B_1^+.

Proof: For the proof, see [5]. □

Theorem 4.16. If A ∈ R_r^{n×r}, B ∈ R_r^{r×m}, then (A B)^+ = B^+ A^+.

Proof: Since A ∈ R_r^{n×r}, then A^+ = (A^T A)^{-1} A^T, whence A^+ A = I_r. Similarly, since
B ∈ R_r^{r×m}, we have B^+ = B^T (B B^T)^{-1}, whence B B^+ = I_r. The result then follows by
taking B_1 = B, A_1 = A in Theorem 4.15. □

The following theorem gives some additional useful properties of pseudoinverses.

Theorem 4.17. For all A ∈ R^{m×n},

1. (A^+)^+ = A.

2. (A^T A)^+ = A^+ (A^T)^+, (A A^T)^+ = (A^T)^+ A^+.

3. R(A^+) = R(A^T) = R(A^+ A) = R(A^T A).

4. N(A^+) = N(A A^+) = N((A A^T)^+) = N(A A^T) = N(A^T).

5. If A is normal, then A^k A^+ = A^+ A^k and (A^k)^+ = (A^+)^k for all integers k > 0.
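
Several of these facts are easy to confirm numerically. The sketch below (NumPy; not part
of the original text) revisits the reverse-order counterexample, the full-rank case of
Theorem 4.16, and property 1 of Theorem 4.17:

    import numpy as np
    from numpy.linalg import pinv

    # The counterexample above: A = [0  1], B = [1; 1]
    A = np.array([[0.0, 1.0]])
    B = np.array([[1.0], [1.0]])
    print(pinv(A @ B), pinv(B) @ pinv(A))       # 1 versus 1/2: (AB)^+ != B^+ A^+

    # Theorem 4.16: A with full column rank, B with full row rank
    A = np.array([[1.0, 0.0],
                  [2.0, 1.0],
                  [0.0, 3.0]])
    B = np.array([[1.0, 2.0, 0.0],
                  [0.0, 1.0, 1.0]])
    print(np.allclose(pinv(A @ B), pinv(B) @ pinv(A)))    # True

    # Theorem 4.17, property 1: (A^+)^+ = A
    print(np.allclose(pinv(pinv(A)), A))                  # True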
Note: Recall that A ∈ R^{n×n} is normal if A A^T = A^T A. For example, if A is symmetric,
skew-symmetric, or orthogonal, then it is normal. However, a matrix can be none of the
preceding but still be normal, such as

    A = [ a  b; -b  a ]

for scalars a, b ∈ R.

The next theorem is fundamental to facilitating a compact and unifying approach
to studying the existence of solutions of (matrix) linear equations and linear least squares
problems.

Theorem 4.18. Suppose A ∈ R^{n×p}, B ∈ R^{n×m}. Then R(B) ⊆ R(A) if and only if
A A^+ B = B.

Proof: Suppose R(B) ⊆ R(A) and take arbitrary x ∈ R^m. Then Bx ∈ R(B) ⊆ R(A), so
there exists a vector y ∈ R^p such that Ay = Bx. Then we have

    B x = A y = A A^+ A y = A A^+ B x,

where one of the Penrose properties is used above. Since x was arbitrary, we have shown
that B = A A^+ B.

To prove the converse, assume that A A^+ B = B and take arbitrary y ∈ R(B). Then
there exists a vector x ∈ R^m such that Bx = y, whereupon

    y = B x = A A^+ B x ∈ R(A).  □
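
The criterion of Theorem 4.18 is easy to test numerically. A small sketch (not from the text;
it uses NumPy's pinv) that checks whether R(B) ⊆ R(A) for two illustrative right-hand sides:

    import numpy as np
    from numpy.linalg import pinv

    def range_contained(B, A, tol=1e-10):
        """Return True if R(B) is contained in R(A), via the test A A^+ B = B."""
        return np.allclose(A @ pinv(A) @ B, B, atol=tol)

    A = np.array([[1.0, 0.0],
                  [0.0, 0.0]])
    b_in  = np.array([[3.0], [0.0]])   # lies in R(A)
    b_out = np.array([[0.0], [1.0]])   # does not
    print(range_contained(b_in, A))    # True
    print(range_contained(b_out, A))   # False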
EXERCISES

1. Use Theorem 4.4 to compute the pseudoinverse of [1  1; 2  2].

2. If x, y ∈ R^n, show that (x y^T)^+ = (x^T x)^+ (y^T y)^+ y x^T.

3. For A ∈ R^{m×n}, prove that R(A) = R(A A^T) using only definitions and elementary
properties of the Moore-Penrose pseudoinverse.

4. For A ∈ R^{m×n}, prove that R(A^+) = R(A^T).

5. For A ∈ R^{p×n} and B ∈ R^{m×n}, show that N(A) ⊆ N(B) if and only if B A^+ A = B.

6. Let A ∈ R^{n×n}, B ∈ R^{n×m}, and D ∈ R^{m×m} and suppose further that D is nonsingular.

(a) Prove or disprove that

    [ A  AB; 0  D ]^+ = [ A^+  -A^+ A B D^{-1}; 0  D^{-1} ].

(b) Prove or disprove that

    [ A  B; 0  D ]^+ = [ A^+  -A^+ B D^{-1}; 0  D^{-1} ].
Chapter 5

Introduction to the Singular
Value Decomposition

In this chapter we give a brief introduction to the singular value decomposition (SVD). We
show that every matrix has an SVD and describe some useful properties and applications
of this important matrix factorization. The SVD plays a key conceptual and computational
role throughout (numerical) linear algebra and its applications.

5.1 The Fundamental Theorem

Theorem 5.1. Let A ∈ R_r^{m×n}. Then there exist orthogonal matrices U ∈ R^{m×m} and
V ∈ R^{n×n} such that

    A = U Σ V^T,                                                 (5.1)

where Σ = [ S  0; 0  0 ], S = diag(σ_1, ..., σ_r) ∈ R^{r×r}, and σ_1 ≥ ... ≥ σ_r > 0. More
specifically, we have

    A = [ U_1  U_2 ] [ S  0; 0  0 ] [ V_1^T; V_2^T ]             (5.2)

      = U_1 S V_1^T.                                             (5.3)

The submatrix sizes are all determined by r (which must be ≤ min{m, n}), i.e., U_1 ∈ R^{m×r},
U_2 ∈ R^{m×(m-r)}, V_1 ∈ R^{n×r}, V_2 ∈ R^{n×(n-r)}, and the 0-subblocks in Σ are compatibly
dimensioned.

Proof: Since A^T A ≥ 0 (A^T A is symmetric and nonnegative definite; recall, for example,
[24, Ch. 6]), its eigenvalues are all real and nonnegative. (Note: The rest of the proof follows
analogously if we start with the observation that A A^T ≥ 0 and the details are left to the reader
as an exercise.) Denote the set of eigenvalues of A^T A by {σ_i^2, i ∈ n} with σ_1 ≥ ... ≥ σ_r >
0 = σ_{r+1} = ... = σ_n. Let {v_i, i ∈ n} be a set of corresponding orthonormal eigenvectors
and let V_1 = [v_1, ..., v_r], V_2 = [v_{r+1}, ..., v_n]. Letting S = diag(σ_1, ..., σ_r), we can
write A^T A V_1 = V_1 S^2. Premultiplying by V_1^T gives V_1^T A^T A V_1 = V_1^T V_1 S^2 = S^2, the latter
equality following from the orthonormality of the v_i vectors. Pre- and postmultiplying by
S^{-1} gives the equation

    S^{-1} V_1^T A^T A V_1 S^{-1} = I.                           (5.4)
Turning now to the eigenvalue equations corresponding to the eigenvalues σ_{r+1}, ..., σ_n we
have that A^T A V_2 = V_2 · 0 = 0, whence V_2^T A^T A V_2 = 0. Thus, A V_2 = 0. Now define the
matrix U_1 ∈ R^{m×r} by U_1 = A V_1 S^{-1}. Then from (5.4) we see that U_1^T U_1 = I; i.e., the
columns of U_1 are orthonormal. Choose any matrix U_2 ∈ R^{m×(m-r)} such that [U_1  U_2] is
orthogonal. Then

    U^T A V = [ U_1^T A V_1   U_1^T A V_2; U_2^T A V_1   U_2^T A V_2 ]
            = [ U_1^T A V_1   0; U_2^T A V_1   0 ]

since A V_2 = 0. Referring to the equation U_1 = A V_1 S^{-1} defining U_1, we see that U_1^T A V_1 =
S and U_2^T A V_1 = U_2^T U_1 S = 0. The latter equality follows from the orthogonality of the
columns of U_1 and U_2. Thus, we see that, in fact, U^T A V = [ S  0; 0  0 ], and defining this
matrix to be Σ completes the proof. □

Definition 5.2. Let A = U Σ V^T be an SVD of A as in Theorem 5.1.

1. The set {σ_1, ..., σ_r} is called the set of (nonzero) singular values of the matrix A and
is denoted Σ(A). From the proof of Theorem 5.1 we see that σ_i(A) = λ_i^{1/2}(A^T A) =
λ_i^{1/2}(A A^T). Note that there are also min{m, n} - r zero singular values.

2. The columns of U are called the left singular vectors of A (and are the orthonormal
eigenvectors of A A^T).

3. The columns of V are called the right singular vectors of A (and are the orthonormal
eigenvectors of A^T A).

Remark 5.3. The analogous complex case in which A ∈ C^{m×n} is quite straightforward.
The decomposition is A = U Σ V^H, where U and V are unitary and the proof is essentially
identical, except for Hermitian transposes replacing transposes.

Remark 5.4. Note that U and V can be interpreted as changes of basis in both the domain
and co-domain spaces with respect to which A then has a diagonal matrix representation.
Specifically, let L denote A thought of as a linear transformation mapping R^n to R^m. Then
rewriting A = U Σ V^T as A V = U Σ we see that Mat L is Σ with respect to the bases
{v_1, ..., v_n} for R^n and {u_1, ..., u_m} for R^m (see the discussion in Section 3.2). See also
Remark 5.16.

Remark 5.5. The singular value decomposition is not unique. For example, an examination
of the proof of Theorem 5.1 reveals that

• any orthonormal basis for N(A) can be used for V_2.

• there may be nonuniqueness associated with the columns of V_1 (and hence U_1)
corresponding to multiple σ_i's.
• any U_2 can be used so long as [U_1  U_2] is orthogonal.

• columns of U and V can be changed (in tandem) by sign (or multiplier of the form
e^{jθ} in the complex case).

What is unique, however, is the matrix Σ and the span of the columns of U_1, U_2, V_1, and
V_2 (see Theorem 5.11). Note, too, that a "full SVD" (5.2) can always be constructed from
a "compact SVD" (5.3).

Remark 5.6. Computing an SVD by working directly with the eigenproblem for A^T A or
A A^T is numerically poor in finite-precision arithmetic. Better algorithms exist that work
directly on A via a sequence of orthogonal transformations; see, e.g., [7], [11], [25].

Example 5.7.

    A = [ 1  0; 0  1 ] = U I U^T,

where U is an arbitrary 2 × 2 orthogonal matrix, is an SVD.

Example 5.8.

    A = [ 1  0; 0  -1 ]
      = [ cos θ   sin θ; -sin θ   cos θ ] [ 1  0; 0  1 ] [ cos θ   sin θ; sin θ   -cos θ ],

where θ is arbitrary, is an SVD.

Example 5.9.

    A = [ 1  1; 2  2; 2  2 ]
      = [ 1/3  -2√5/5  -2√5/15; 2/3  √5/5  -4√5/15; 2/3  0  √5/3 ] [ 3√2  0; 0  0; 0  0 ] [ √2/2  √2/2; √2/2  -√2/2 ]
      = [ 1/3; 2/3; 2/3 ] (3√2) [ √2/2  √2/2 ]

is an SVD.

Example 5.10. Let A ∈ R^{n×n} be symmetric and positive definite. Let V be an orthogonal
matrix of eigenvectors that diagonalizes A, i.e., V^T A V = Λ > 0. Then A = V Λ V^T is an
SVD of A.

A factorization U Σ V^T of an m × n matrix A qualifies as an SVD if U and V are
orthogonal and Σ is an m × n "diagonal" matrix whose diagonal elements in the upper
left corner are positive (and ordered). For example, if A = U Σ V^T is an SVD of A, then
V Σ^T U^T is an SVD of A^T.
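
The factorization in Example 5.9 can be reproduced numerically. The sketch below (not from
the text; NumPy assumed) computes an SVD and checks the reconstruction; the signs of
individual singular vectors may differ from those displayed above, which is consistent with
Remark 5.5:

    import numpy as np

    A = np.array([[1.0, 1.0],
                  [2.0, 2.0],
                  [2.0, 2.0]])          # the matrix of Example 5.9

    U, s, Vt = np.linalg.svd(A)         # full SVD: U is 3x3, Vt is 2x2
    print(s)                            # [4.2426..., 0] = [3*sqrt(2), 0]

    Sigma = np.zeros_like(A)
    Sigma[:len(s), :len(s)] = np.diag(s)
    print(np.allclose(U @ Sigma @ Vt, A))                                   # A = U Sigma V^T
    print(np.allclose(U.T @ U, np.eye(3)), np.allclose(Vt @ Vt.T, np.eye(2)))  # orthogonality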
5.2 Some Basic Properties

Theorem 5.11. Let A ∈ R^{m×n} have a singular value decomposition A = U Σ V^T. Using
the notation of Theorem 5.1, the following properties hold:

1. rank(A) = r = the number of nonzero singular values of A.

2. Let U = [u_1, ..., u_m] and V = [v_1, ..., v_n]. Then A has the dyadic (or outer
product) expansion

    A = Σ_{i=1}^{r} σ_i u_i v_i^T.                               (5.5)

3. The singular vectors satisfy the relations

    A v_i = σ_i u_i,                                             (5.6)
    A^T u_i = σ_i v_i                                            (5.7)

for i ∈ r.

4. Let U_1 = [u_1, ..., u_r], U_2 = [u_{r+1}, ..., u_m], V_1 = [v_1, ..., v_r], and V_2 = [v_{r+1}, ..., v_n].
Then

(a) R(U_1) = R(A) = N(A^T)^⊥.

(b) R(U_2) = R(A)^⊥ = N(A^T).

(c) R(V_1) = N(A)^⊥ = R(A^T).

(d) R(V_2) = N(A) = R(A^T)^⊥.

Remark 5.12. Part 4 of the above theorem provides a numerically superior method for
finding (orthonormal) bases for the four fundamental subspaces compared to methods based
on, for example, reduction to row or column echelon form. Note that each subspace requires
knowledge of the rank r. The relationship to the four fundamental subspaces is summarized
nicely in Figure 5.1.

Remark 5.13. The elegance of the dyadic decomposition (5.5) as a sum of outer products
and the key vector relations (5.6) and (5.7) explain why it is conventional to write the SVD
as A = U Σ V^T rather than, say, A = U Σ V.
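
A brief numerical illustration of properties 1-3 (a NumPy sketch, not part of the text): the
rank equals the number of nonzero singular values, the dyadic expansion rebuilds A, and the
vector relations (5.6)-(5.7) hold for the computed singular vectors:

    import numpy as np

    A = np.array([[1.0, 1.0, 0.0],
                  [2.0, 2.0, 0.0]])                 # rank 1
    U, s, Vt = np.linalg.svd(A)
    r = np.sum(s > 1e-12)
    print(r)                                        # 1 = rank(A)

    # Dyadic (outer product) expansion (5.5): A = sum_i sigma_i u_i v_i^T
    A_rebuilt = sum(s[i] * np.outer(U[:, i], Vt[i, :]) for i in range(r))
    print(np.allclose(A_rebuilt, A))                # True

    # Key relations (5.6)-(5.7): A v_i = sigma_i u_i and A^T u_i = sigma_i v_i
    print(np.allclose(A @ Vt[0, :], s[0] * U[:, 0]))
    print(np.allclose(A.T @ U[:, 0], s[0] * Vt[0, :]))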
Theorem 5.14. Let A ∈ R^{m×n} have a singular value decomposition A = U Σ V^T as in
Theorem 5.1. Then

    A^+ = V Σ^+ U^T,                                             (5.8)

where
    Σ^+ = [ S^{-1}  0; 0  0 ] ∈ R^{n×m},                         (5.9)

with the 0-subblocks appropriately sized. Furthermore, if we let the columns of U and V
be as defined in Theorem 5.11, then

    A^+ = Σ_{i=1}^{r} (1/σ_i) v_i u_i^T.                         (5.10)

Proof: The proof follows easily by verifying the four Penrose conditions. □

Figure 5.1. SVD and the four fundamental subspaces.

Remark 5.15. Note that none of the expressions above quite qualifies as an SVD of A^+
if we insist that the singular values be ordered from largest to smallest. However, a simple
reordering accomplishes the task:

    A^+ = Σ_{i=1}^{r} (1/σ_{r+1-i}) v_{r+1-i} u_{r+1-i}^T.       (5.11)

This can also be written in matrix terms by using the so-called reverse-order identity matrix
(or exchange matrix) P = [e_r, e_{r-1}, ..., e_2, e_1], which is clearly orthogonal and symmetric.
Then

    A^+ = (V_1 P)(P S^{-1} P)(P U_1^T)

is the matrix version of (5.11). A "full SVD" can be similarly constructed.

Remark 5.16. Recall the linear transformation T used in the proof of Theorem 3.17 and
in Definition 4.1. Since T is determined by its action on a basis, and since {v_1, ..., v_r} is a
basis for N(A)^⊥, then T can be defined by T v_i = σ_i u_i, i ∈ r. Similarly, since {u_1, ..., u_r}
is a basis for R(A), then T^{-1} can be defined by T^{-1} u_i = (1/σ_i) v_i, i ∈ r. From Section 3.2,
the matrix representation for T with respect to the bases {v_1, ..., v_r} and {u_1, ..., u_r} is
clearly S, while the matrix representation for the inverse linear transformation T^{-1} with
respect to the same bases is S^{-1}.
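
Theorem 5.14 and (5.10) translate directly into a short computation. The sketch below (not
from the text; NumPy assumed) forms A^+ from the SVD and compares it with numpy.linalg.pinv:

    import numpy as np

    A = np.array([[1.0, 1.0, 0.0],
                  [2.0, 2.0, 0.0],
                  [0.0, 0.0, 0.0]])
    U, s, Vt = np.linalg.svd(A)
    r = np.sum(s > 1e-12)

    # A^+ = sum_{i=1}^{r} (1/sigma_i) v_i u_i^T, as in (5.10)
    A_pinv = sum((1.0 / s[i]) * np.outer(Vt[i, :], U[:, i]) for i in range(r))
    print(np.allclose(A_pinv, np.linalg.pinv(A)))    # True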
5.3 Row and Column Compressions

Row compression

Let A ∈ R^{m×n} have an SVD given by (5.1). Then

    U^T A = Σ V^T
          = [ S  0; 0  0 ] [ V_1^T; V_2^T ]
          = [ S V_1^T; 0 ] ∈ R^{m×n}.

Notice that N(A) = N(U^T A) = N(S V_1^T) and the matrix S V_1^T ∈ R^{r×n} has full row
rank. In other words, premultiplication of A by U^T is an orthogonal transformation that
"compresses" A by row transformations. Such a row compression can also be accomplished
by orthogonal row transformations performed directly on A to reduce it to the form [ R; 0 ],
where R is upper triangular. Both compressions are analogous to the so-called row-reduced
echelon form which, when derived by a Gaussian elimination algorithm implemented in
finite-precision arithmetic, is not generally as reliable a procedure.

Column compression

Again, let A ∈ R^{m×n} have an SVD given by (5.1). Then

    A V = U Σ
        = [ U_1  U_2 ] [ S  0; 0  0 ]
        = [ U_1 S   0 ] ∈ R^{m×n}.

This time, notice that R(A) = R(A V) = R(U_1 S) and the matrix U_1 S ∈ R^{m×r} has full
column rank. In other words, postmultiplication of A by V is an orthogonal transformation
that "compresses" A by column transformations. Such a compression is analogous to the
so-called column-reduced echelon form, which is not generally a reliable procedure when
performed by Gauss transformations in finite-precision arithmetic. For details, see, for
example, [7], [11], [23], [25].
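
The compressions can be seen numerically. In the sketch below (not from the text; NumPy
assumed), premultiplying by U^T leaves only r nonzero rows and postmultiplying by V leaves
only r nonzero columns:

    import numpy as np

    A = np.array([[1.0, 2.0, 3.0],
                  [2.0, 4.0, 6.0],
                  [1.0, 0.0, 1.0]])                  # rank 2
    U, s, Vt = np.linalg.svd(A)
    r = np.sum(s > 1e-12)

    row_compressed = U.T @ A       # = Sigma V^T: rows r+1,...,m are (numerically) zero
    col_compressed = A @ Vt.T      # = U Sigma: columns r+1,...,n are (numerically) zero
    print(np.allclose(row_compressed[r:, :], 0))     # True
    print(np.allclose(col_compressed[:, r:], 0))     # True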
EXERCISES

1. Let X ∈ R^{m×n}. If X^T X = 0, show that X = 0.

2. Prove Theorem 5.1 starting from the observation that A A^T ≥ 0.

3. Let A ∈ R^{n×n} be symmetric but indefinite. Determine an SVD of A.

4. Let x ∈ R^m, y ∈ R^n be nonzero vectors. Determine an SVD of the matrix A ∈ R_1^{m×n}
defined by A = x y^T.

5. Determine SVDs of the matrices

(a)

(b)

6. Let A ∈ R^{m×n} and suppose W ∈ R^{m×m} and Y ∈ R^{n×n} are orthogonal.

(a) Show that A and W A Y have the same singular values (and hence the same rank).

(b) Suppose that W and Y are nonsingular but not necessarily orthogonal. Do A
and W A Y have the same singular values? Do they have the same rank?

7. Let A ∈ R_n^{n×n}. Use the SVD to determine a polar factorization of A, i.e., A = Q P
where Q is orthogonal and P = P^T > 0. Note: this is analogous to the polar form
z = r e^{jθ} of a complex scalar z (where i = j = √(-1)).
Chapter 6

Linear Equations

In this chapter we examine existence and uniqueness of solutions of systems of linear
equations. General linear systems of the form

    A X = B;   A ∈ R^{m×n}, B ∈ R^{m×k},                         (6.1)

are studied and include, as a special case, the familiar vector system

    A x = b;   A ∈ R^{n×n}, b ∈ R^n.                             (6.2)

6.1 Vector Linear Equations

We begin with a review of some of the principal results associated with vector linear systems.

Theorem 6.1. Consider the system of linear equations

    A x = b;   A ∈ R^{m×n}, b ∈ R^m.                             (6.3)

1. There exists a solution to (6.3) if and only if b ∈ R(A).

2. There exists a solution to (6.3) for all b ∈ R^m if and only if R(A) = R^m, i.e., A is
onto; equivalently, there exists a solution if and only if rank([A, b]) = rank(A), and
this is possible only if m ≤ n (since m = dim R(A) = rank(A) ≤ min{m, n}).

3. A solution to (6.3) is unique if and only if N(A) = 0, i.e., A is 1-1.

4. There exists a unique solution to (6.3) for all b ∈ R^m if and only if A is nonsingular;
equivalently, A ∈ R^{m×m} and A has neither a 0 singular value nor a 0 eigenvalue.

5. There exists at most one solution to (6.3) for all b ∈ R^m if and only if the columns of
A are linearly independent, i.e., N(A) = 0, and this is possible only if m ≥ n.

6. There exists a nontrivial solution to the homogeneous system A x = 0 if and only if
rank(A) < n.
Proof: The proofs are straightforward and can be consulted in standard texts on linear
algebra. Note that some parts of the theorem follow directly from others. For example, to
prove part 6, note that x = 0 is always a solution to the homogeneous system. Therefore, we
must have the case of a nonunique solution, i.e., A is not 1-1, which implies rank(A) < n
by part 3. □

6.2 Matrix Linear Equations

In this section we present some of the principal results concerning existence and uniqueness
of solutions to the general matrix linear system (6.1). Note that the results of Theorem
6.1 follow from those below for the special case k = 1, while results for (6.2) follow by
specializing even further to the case m = n.

Theorem 6.2 (Existence). The matrix linear equation

    A X = B;   A ∈ R^{m×n}, B ∈ R^{m×k},                         (6.4)

has a solution if and only if R(B) ⊆ R(A); equivalently, a solution exists if and only if
A A^+ B = B.

Proof: The subspace inclusion criterion follows essentially from the definition of the range
of a matrix. The matrix criterion is Theorem 4.18. □

Theorem 6.3. Let A ∈ R^{m×n}, B ∈ R^{m×k} and suppose that A A^+ B = B. Then any matrix
of the form

    X = A^+ B + (I - A^+ A) Y,  where Y ∈ R^{n×k} is arbitrary,  (6.5)

is a solution of

    A X = B.                                                     (6.6)

Furthermore, all solutions of (6.6) are of this form.

Proof: To verify that (6.5) is a solution, premultiply by A:

    A X = A A^+ B + A (I - A^+ A) Y
        = B + (A - A A^+ A) Y     by hypothesis
        = B                       since A A^+ A = A by the first Penrose condition.

That all solutions are of this form can be seen as follows. Let Z be an arbitrary solution of
(6.6), i.e., A Z = B. Then we can write

    Z = A^+ A Z + (I - A^+ A) Z
      = A^+ B + (I - A^+ A) Z,

and this is clearly of the form (6.5). □
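
The solution formula (6.5) is easy to exercise numerically. A NumPy sketch (not from the
text; the particular Y below is an arbitrary choice):

    import numpy as np
    from numpy.linalg import pinv

    A = np.array([[1.0, 1.0, 0.0],
                  [0.0, 1.0, 1.0]])        # onto, so A A^+ B = B for every B
    B = np.array([[1.0, 2.0],
                  [3.0, 4.0]])

    Ap = pinv(A)
    Y = np.arange(6.0).reshape(3, 2)       # arbitrary Y in R^{3x2}
    X = Ap @ B + (np.eye(3) - Ap @ A) @ Y  # the general solution (6.5)
    print(np.allclose(A @ X, B))           # True: X solves AX = B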
Remark 6.4. When A is square and nonsingular, A^+ = A^{-1} and so (I - A^+ A) = 0. Thus,
there is no "arbitrary" component, leaving only the unique solution X = A^{-1} B.

Remark 6.5. It can be shown that the particular solution X = A^+ B is the solution of (6.6)
that minimizes Tr X^T X. (Tr(·) denotes the trace of a matrix; recall that Tr X^T X = Σ_{i,j} x_{ij}^2.)

Theorem 6.6 (Uniqueness). A solution of the matrix linear equation

    A X = B;   A ∈ R^{m×n}, B ∈ R^{m×k},                         (6.7)

is unique if and only if A^+ A = I; equivalently, (6.7) has a unique solution if and only if
N(A) = 0.

Proof: The first equivalence is immediate from Theorem 6.3. The second follows by noting
that A^+ A = I can occur only if r = n, where r = rank(A) (recall r ≤ n). But rank(A) = n
if and only if A is 1-1 or N(A) = 0. □

Example 6.7. Suppose A ∈ R^{n×n}. Find all solutions of the homogeneous system A x = 0.

Solution:

    x = A^+ 0 + (I - A^+ A) y
      = (I - A^+ A) y,

where y ∈ R^n is arbitrary. Hence, there exists a nonzero solution if and only if A^+ A ≠ I.
This is equivalent to either rank(A) = r < n or A being singular. Clearly, if there exists a
nonzero solution, it is not unique.

Computation: Since y is arbitrary, it is easy to see that all solutions are generated
from a basis for R(I - A^+ A). But if A has an SVD given by A = U Σ V^T, then it is easily
checked that I - A^+ A = V_2 V_2^T and R(V_2 V_2^T) = R(V_2) = N(A).
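
The computational remark can be illustrated directly (a NumPy sketch, not from the text):
the columns of V corresponding to zero singular values span N(A), and I - A^+ A = V_2 V_2^T:

    import numpy as np
    from numpy.linalg import pinv, svd

    A = np.array([[1.0, 1.0, 0.0],
                  [0.0, 0.0, 0.0],
                  [1.0, 1.0, 0.0]])
    U, s, Vt = svd(A)
    r = np.sum(s > 1e-12)
    V2 = Vt[r:, :].T                        # orthonormal basis for N(A)

    print(np.allclose(A @ V2, 0))                          # columns of V2 lie in N(A)
    print(np.allclose(np.eye(3) - pinv(A) @ A, V2 @ V2.T)) # I - A^+ A = V2 V2^T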
Example 6.8. Characterize all right inverses of a matrix A ∈ R^{m×n}; equivalently, find all
solutions R of the equation A R = I_m. Here, we write I_m to emphasize the m × m identity
matrix.

Solution: There exists a right inverse if and only if R(I_m) ⊆ R(A) and this is
equivalent to A A^+ I_m = I_m. Clearly, this can occur if and only if rank(A) = r = m (since
r ≤ m) and this is equivalent to A being onto (A^+ is then a right inverse). All right inverses
of A are then of the form

    R = A^+ I_m + (I_n - A^+ A) Y
      = A^+ + (I - A^+ A) Y,

where Y ∈ R^{n×m} is arbitrary. There is a unique right inverse if and only if A^+ A = I
(N(A) = 0), in which case A must be invertible and R = A^{-1}.

Example 6.9. Consider the system of linear first-order difference equations

    x_{k+1} = A x_k + B u_k                                      (6.8)
with A ∈ R^{n×n} and B ∈ R^{n×m} (n ≥ 1, m ≥ 1). The vector x_k in linear system theory is
known as the state vector at time k while u_k is the input (control) vector. The general
solution of (6.8) is given by

    x_k = A^k x_0 + Σ_{j=0}^{k-1} A^{k-1-j} B u_j                         (6.9)

        = A^k x_0 + [B, AB, ..., A^{k-1} B] [u_{k-1}; u_{k-2}; ...; u_0]  (6.10)

for k ≥ 1. We might now ask the question: Given x_0 = 0, does there exist an input sequence
{u_j}_{j=0}^{k-1} such that x_k takes an arbitrary value in R^n? In linear system theory, this is
a question of reachability. Since m ≥ 1, from the fundamental Existence Theorem, Theorem 6.2,
we see that (6.8) is reachable if and only if

    R([B, AB, ..., A^{n-1} B]) = R^n

or, equivalently, if and only if

    rank [B, AB, ..., A^{n-1} B] = n.

A related question is the following: Given an arbitrary initial vector x_0, does there exist
an input sequence {u_j}_{j=0}^{n-1} such that x_n = 0? In linear system theory, this is called
controllability. Again from Theorem 6.2, we see that (6.8) is controllable if and only if

    R(A^n) ⊆ R([B, AB, ..., A^{n-1} B]).

Clearly, reachability always implies controllability and, if A is nonsingular, controllability
and reachability are equivalent. The matrices A = [0  1; 0  1] and B = [1; 1] provide an
example of a system that is controllable but not reachable.

The above are standard conditions with analogues for continuous-time models (i.e.,
linear differential equations). There are many other algebraically equivalent conditions.

Example 6.10. We now introduce an output vector y_k to the system (6.8) of Example 6.9
by appending the equation

    y_k = C x_k + D u_k                                          (6.11)

with C ∈ R^{p×n} and D ∈ R^{p×m} (p ≥ 1). We can then pose some new questions about the
overall system that are dual in the system-theoretic sense to reachability and controllability.
The answers are cast in terms that are dual in the linear algebra sense as well. The condition
dual to reachability is called observability: When does knowledge of {u_j}_{j=0}^{n-1} and
{y_j}_{j=0}^{n-1} suffice to determine (uniquely) x_0? As a dual to controllability, we have the
notion of reconstructibility: When does knowledge of {u_j}_{j=0}^{n-1} and {y_j}_{j=0}^{n-1}
suffice to determine (uniquely) x_n? The fundamental duality result from linear system theory
is the following:

    (A, B) is reachable [controllable] if and only if (A^T, B^T) is observable [reconstructible].
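
A small sketch (not from the text; NumPy assumed) that forms the matrix [B, AB, ..., A^{n-1}B]
and tests the two rank conditions, using the controllable-but-not-reachable pair quoted above:

    import numpy as np

    def reach_matrix(A, B):
        """Stack [B, AB, ..., A^{n-1}B] for the n-dimensional system (6.8)."""
        n = A.shape[0]
        blocks, AkB = [], B.copy()
        for _ in range(n):
            blocks.append(AkB)
            AkB = A @ AkB
        return np.hstack(blocks)

    A = np.array([[0.0, 1.0],
                  [0.0, 1.0]])
    B = np.array([[1.0],
                  [1.0]])
    R = reach_matrix(A, B)
    n = A.shape[0]
    reachable = np.linalg.matrix_rank(R) == n                      # rank [B, AB] = n?
    An = np.linalg.matrix_power(A, n)
    controllable = np.allclose(R @ np.linalg.pinv(R) @ An, An)     # R(A^n) in R([B, AB])?
    print(reachable, controllable)                                 # False True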
To derive a condition for observability, notice that

    y_k = C A^k x_0 + Σ_{j=0}^{k-1} C A^{k-1-j} B u_j + D u_k.   (6.12)

Thus,

    [ y_0 - D u_0;
      y_1 - C B u_0 - D u_1;
      ...;
      y_{n-1} - Σ_{j=0}^{n-2} C A^{n-2-j} B u_j - D u_{n-1} ]  =  [ C; C A; ...; C A^{n-1} ] x_0.   (6.13)

Let v denote the (known) vector on the left-hand side of (6.13) and let R denote the matrix on
the right-hand side. Then, by definition, v ∈ R(R), so a solution exists. By the fundamental
Uniqueness Theorem, Theorem 6.6, the solution is then unique if and only if N(R) = 0,
or, equivalently, if and only if

    N([ C; C A; ...; C A^{n-1} ]) = 0.

6.3 A More General Matrix Linear Equation

Theorem 6.11. Let A ∈ R^{m×n}, B ∈ R^{m×q}, and C ∈ R^{p×q}. Then the equation

    A X C = B                                                    (6.14)

has a solution if and only if A A^+ B C^+ C = B, in which case the general solution is of the
form

    X = A^+ B C^+ + Y - A^+ A Y C C^+,                           (6.15)

where Y ∈ R^{n×p} is arbitrary.

A compact matrix criterion for uniqueness of solutions to (6.14) requires the notion
of the Kronecker product of matrices for its statement. Such a criterion (C C^+ ⊗ A^+ A = I)
is stated and proved in Theorem 13.27.

6.4 Some Useful and Interesting Inverses

In many applications, the coefficient matrices of interest are square and nonsingular. Listed
below is a small collection of useful matrix identities, particularly for block matrices,
associated with matrix inverses. In these identities, A ∈ R^{n×n}, B ∈ R^{n×m}, C ∈ R^{m×n},
and D ∈ R^{m×m}. Invertibility is assumed for any component or subblock whose inverse is
indicated. Verification of each identity is recommended as an exercise for the reader.
1. (A + B D C)^{-1} = A^{-1} - A^{-1} B (D^{-1} + C A^{-1} B)^{-1} C A^{-1}.

This result is known as the Sherman-Morrison-Woodbury formula. It has many
applications (and is frequently "rediscovered") including, for example, formulas for
the inverse of a sum of matrices such as (A + D)^{-1} or (A^{-1} + D^{-1})^{-1}. It also
yields very efficient "updating" or "downdating" formulas in expressions such as
(A + x x^T)^{-1} (with symmetric A ∈ R^{n×n} and x ∈ R^n) that arise in optimization
theory.

2. [ I  0; C  -I ]^{-1} = [ I  0; C  -I ].

3. [ -I  B; 0  I ]^{-1} = [ -I  B; 0  I ].

Both of these matrices satisfy the matrix equation X^2 = I, from which it is obvious
that X^{-1} = X. Note that the positions of the I and -I blocks may be exchanged.

4. [ A  B; 0  D ]^{-1} = [ A^{-1}  -A^{-1} B D^{-1}; 0  D^{-1} ].

5. [ A  0; C  D ]^{-1} = [ A^{-1}  0; -D^{-1} C A^{-1}  D^{-1} ].

6. (I + B C)^{-1} = I - B (I + C B)^{-1} C.

7. [ A  B; C  D ]^{-1} = [ A^{-1} + A^{-1} B E C A^{-1}   -A^{-1} B E; -E C A^{-1}   E ],

where E = (D - C A^{-1} B)^{-1} (E is the inverse of the Schur complement of A). This
result follows easily from the block LU factorization in property 16 of Section 1.4.

8. [ A  B; C  D ]^{-1} = [ F   -F B D^{-1}; -D^{-1} C F   D^{-1} + D^{-1} C F B D^{-1} ],

where F = (A - B D^{-1} C)^{-1}. This result follows easily from the block UL
factorization in property 17 of Section 1.4.
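
A quick numerical confirmation of identity 1 (a NumPy sketch, not part of the text; the
matrices below are arbitrary, mildly scaled random choices so that all indicated inverses exist):

    import numpy as np
    from numpy.linalg import inv

    rng = np.random.default_rng(0)
    n, m = 4, 2
    A = np.eye(n) + 0.1 * rng.standard_normal((n, n))     # comfortably invertible
    B = 0.3 * rng.standard_normal((n, m))
    C = 0.3 * rng.standard_normal((m, n))
    D = np.eye(m) + 0.1 * rng.standard_normal((m, m))

    lhs = inv(A + B @ D @ C)
    rhs = inv(A) - inv(A) @ B @ inv(inv(D) + C @ inv(A) @ B) @ C @ inv(A)
    print(np.allclose(lhs, rhs))       # True: Sherman-Morrison-Woodbury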
EXERCISES

1. As in Example 6.8, characterize all left inverses of a matrix A ∈ R^{m×n}.

2. Let A ∈ R^{m×n}, B ∈ R^{m×k} and suppose A has an SVD as in Theorem 5.1. Assuming
R(B) ⊆ R(A), characterize all solutions of the matrix linear equation

    A X = B

in terms of the SVD of A.
3. Let x, y ∈ R^n and suppose further that x^T y ≠ 1. Show that

    (I - x y^T)^{-1} = I - (1/(x^T y - 1)) x y^T.

4. Let x, y ∈ R^n and suppose further that x^T y ≠ 1. Show that

    [ I  x; y^T  1 ]^{-1} = [ I + c x y^T   -c x; -c y^T   c ],

where c = 1/(1 - x^T y).

5. Let A ∈ R^{n×n} be nonsingular and let A^{-1} have columns c_1, ..., c_n and individual
elements γ_{ij}. Assume that γ_{ji} ≠ 0 for some i and j. Show that the matrix
B = A - (1/γ_{ji}) e_i e_j^T (i.e., A with 1/γ_{ji} subtracted from its (ij)th element) is singular.

Hint: Show that c_i ∈ N(B).

6. As in Example 6.10, check directly that the condition for reconstructibility takes the
form

    N([ C; C A; ...; C A^{n-1} ]) ⊆ N(A^n).
Chapter 7

Projections, Inner Product
Spaces, and Norms

7.1 Projections

Definition 7.1. Let V be a vector space with V = X ⊕ Y. By Theorem 2.26, every v ∈ V
has a unique decomposition v = x + y with x ∈ X and y ∈ Y. Define P_{X,Y} : V → X ⊆ V
by

    P_{X,Y} v = x  for all v ∈ V.

P_{X,Y} is called the (oblique) projection on X along Y.

Figure 7.1 displays the projection of v on both X and Y in the case V = R^2.

Figure 7.1. Oblique projections.

Theorem 7.2. P_{X,Y} is linear and P_{X,Y}^2 = P_{X,Y}.

Theorem 7.3. A linear transformation P is a projection if and only if it is idempotent, i.e.,
P^2 = P. Also, P is a projection if and only if I - P is a projection. In fact, P_{Y,X} = I - P_{X,Y}.

Proof: Suppose P is a projection, say on X along Y (using the notation of Definition 7.1).
Let v ∈ V be arbitrary. Then P v = P(x + y) = P x = x. Moreover, P^2 v = P P v =
P x = x = P v. Thus, P^2 = P. Conversely, suppose P^2 = P. Let X = {v ∈ V : P v = v}
and Y = {v ∈ V : P v = 0}. It is easy to check that X and Y are subspaces. We now prove
that V = X ⊕ Y. First note that if v ∈ X, then P v = v. If v ∈ Y, then P v = 0. Hence
if v ∈ X ∩ Y, then v = 0. Now let v ∈ V be arbitrary. Then v = P v + (I - P) v. Let
x = P v, y = (I - P) v. Then P x = P^2 v = P v = x so x ∈ X, while P y = P(I - P) v =
P v - P^2 v = 0 so y ∈ Y. Thus, V = X ⊕ Y and the projection on X along Y is P.
Essentially the same argument shows that I - P is the projection on Y along X. □

Definition 7.4. In the special case where Y = X^⊥, P_{X,X^⊥} is called an orthogonal
projection and we then use the notation P_X = P_{X,X^⊥}.

Theorem 7.5. P ∈ R^{n×n} is the matrix of an orthogonal projection (onto R(P)) if and only
if P^2 = P = P^T.

Proof: Let P be an orthogonal projection (on X, say, along X^⊥) and let x, y ∈ R^n be
arbitrary. Note that (I - P) x = (I - P_{X,X^⊥}) x = P_{X^⊥,X} x by Theorem 7.3. Thus,
(I - P) x ∈ X^⊥. Since P y ∈ X, we have (P y)^T (I - P) x = y^T P^T (I - P) x = 0.
Since x and y were arbitrary, we must have P^T (I - P) = 0. Hence P^T = P^T P = P,
with the second equality following since P^T P is symmetric. Conversely, suppose P is a
symmetric projection matrix and let x be arbitrary. Write x = P x + (I - P) x. Then
x^T P^T (I - P) x = x^T P (I - P) x = 0. Thus, since P x ∈ R(P), then (I - P) x ∈ R(P)^⊥
and P must be an orthogonal projection. □

7.1.1 The four fundamental orthogonal projections

Using the notation of Theorems 5.1 and 5.11, let A ∈ R^{m×n} with SVD A = U Σ V^T =
U_1 S V_1^T. Then

    P_{R(A)}    = A A^+     = U_1 U_1^T = Σ_{i=1}^{r} u_i u_i^T,
    P_{R(A)^⊥}  = I - A A^+ = U_2 U_2^T = Σ_{i=r+1}^{m} u_i u_i^T,
    P_{N(A)}    = I - A^+ A = V_2 V_2^T = Σ_{i=r+1}^{n} v_i v_i^T,
    P_{N(A)^⊥}  = A^+ A     = V_1 V_1^T = Σ_{i=1}^{r} v_i v_i^T

are easily checked to be (unique) orthogonal projections onto the respective four
fundamental subspaces.
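
A NumPy sketch (not from the text) constructing the four projections from an SVD and
checking that each satisfies P^2 = P = P^T:

    import numpy as np
    from numpy.linalg import pinv, svd

    A = np.array([[1.0, 2.0, 0.0],
                  [2.0, 4.0, 0.0]])            # rank 1
    U, s, Vt = svd(A)
    r = np.sum(s > 1e-12)
    U1, U2 = U[:, :r], U[:, r:]
    V1, V2 = Vt[:r, :].T, Vt[r:, :].T

    P_range      = A @ pinv(A)                 # = U1 U1^T, projection onto R(A)
    P_range_perp = np.eye(2) - A @ pinv(A)     # = U2 U2^T
    P_null       = np.eye(3) - pinv(A) @ A     # = V2 V2^T, projection onto N(A)
    P_null_perp  = pinv(A) @ A                 # = V1 V1^T

    for P, Q in [(P_range, U1 @ U1.T), (P_range_perp, U2 @ U2.T),
                 (P_null, V2 @ V2.T), (P_null_perp, V1 @ V1.T)]:
        assert np.allclose(P, Q)
        assert np.allclose(P @ P, P) and np.allclose(P, P.T)   # orthogonal projection
    print("all four projections check out")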
Example 7.6. Determine the orthogonal projection of a vector v ∈ R^n on another nonzero
vector w ∈ R^n.

Solution: Think of the vector w as an element of the one-dimensional subspace R(w).
Then the desired projection is simply

    P_{R(w)} v = w w^+ v
               = (w w^T v)/(w^T w)         (using Example 4.8)
               = ((w^T v)/(w^T w)) w.

Moreover, the vector z that is orthogonal to w and such that v = P v + z is given by
z = P_{R(w)^⊥} v = (I - P_{R(w)}) v = v - ((w^T v)/(w^T w)) w. See Figure 7.2. A direct
calculation shows that z and w are, in fact, orthogonal:

    w^T z = w^T v - ((w^T v)/(w^T w)) w^T w = w^T v - w^T v = 0.

Figure 7.2. Orthogonal projection on a "line."
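
A short numerical version of Example 7.6 (a NumPy sketch, not from the text; the particular
v and w are arbitrary illustrative choices):

    import numpy as np

    v = np.array([3.0, 1.0, 2.0])
    w = np.array([1.0, 1.0, 0.0])

    Pv = (w @ v) / (w @ w) * w         # orthogonal projection of v on R(w)
    z = v - Pv                         # the component orthogonal to w
    print(Pv, z, w @ z)                # w^T z = 0 (up to roundoff)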
Example 7.7. Recall the proof of Theorem 3.11. There, {v_1, ..., v_k} was an orthonormal
basis for a subset S of R^n. An arbitrary vector x ∈ R^n was chosen and a formula for x_1
appeared rather mysteriously. The expression for x_1 is simply the orthogonal projection of
x on S. Specifically,

    P_S x = (v_1 v_1^T + ... + v_k v_k^T) x.

Example 7.8. Recall the diagram of the four fundamental subspaces. The indicated direct
sum decompositions of the domain R^n and codomain R^m are given easily as follows.
Let x ∈ R^n be an arbitrary vector. Then

    x = P_{N(A)^⊥} x + P_{N(A)} x
      = A^+ A x + (I - A^+ A) x
      = V_1 V_1^T x + V_2 V_2^T x        (recall V V^T = I).
Similarly, let y ∈ R^m be an arbitrary vector. Then

    y = P_{R(A)} y + P_{R(A)^⊥} y
      = A A^+ y + (I - A A^+) y
      = U_1 U_1^T y + U_2 U_2^T y        (recall U U^T = I).

Example 7.9. Let

    A = [ 1  1  0; 1  1  0 ].

Then

    A^+ = [ 1/4  1/4; 1/4  1/4; 0  0 ]

and we can decompose the vector [2  3  4]^T uniquely into the sum of a vector in N(A)^⊥
and a vector in N(A), respectively, as follows:

    [ 2; 3; 4 ] = A^+ A x + (I - A^+ A) x
                = [ 1/2  1/2  0; 1/2  1/2  0; 0  0  0 ] [ 2; 3; 4 ]
                  + [ 1/2  -1/2  0; -1/2  1/2  0; 0  0  1 ] [ 2; 3; 4 ]
                = [ 5/2; 5/2; 0 ] + [ -1/2; 1/2; 4 ].
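
The decomposition in Example 7.9 can be checked numerically (a NumPy sketch, not from
the text):

    import numpy as np
    from numpy.linalg import pinv

    A = np.array([[1.0, 1.0, 0.0],
                  [1.0, 1.0, 0.0]])
    x = np.array([2.0, 3.0, 4.0])

    x1 = pinv(A) @ A @ x               # component in N(A)-perp: [5/2, 5/2, 0]
    x2 = (np.eye(3) - pinv(A) @ A) @ x # component in N(A):      [-1/2, 1/2, 4]
    print(x1, x2, np.allclose(x1 + x2, x), A @ x2)   # A x2 = 0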
7.2 Inner Product Spaces

Definition 7.10. Let V be a vector space over R. Then <·,·> : V × V → R is a real inner
product if

1. <x, x> ≥ 0 for all x ∈ V and <x, x> = 0 if and only if x = 0.

2. <x, y> = <y, x> for all x, y ∈ V.

3. <x, αy_1 + βy_2> = α<x, y_1> + β<x, y_2> for all x, y_1, y_2 ∈ V and for all α, β ∈ R.

Example 7.11. Let V = R^n. Then <x, y> = x^T y is the "usual" Euclidean inner product or
dot product.

Example 7.12. Let V = R^n. Then <x, y>_Q = x^T Q y, where Q = Q^T > 0 is an arbitrary
n × n positive definite matrix, defines a "weighted" inner product.

Definition 7.13. If A ∈ R^{m×n}, then A^T ∈ R^{n×m} is the unique linear transformation or
map such that <x, A y> = <A^T x, y> for all x ∈ R^m and for all y ∈ R^n.
54 Chapter 7. Projections, Inner Product Spaces, and Norms
Similarly, let y ∈ R^m be an arbitrary vector. Then

    y = P_{R(A)}y + P_{R(A)^⊥}y
      = AA^+y + (I − AA^+)y
      = U_1U_1^Ty + U_2U_2^Ty    (recall UU^T = I).
Example 7.9. Let

    A = [1 1 0; 1 1 0].

Then

    A^+ = [1/4 1/4; 1/4 1/4; 0 0]    and    A^+A = [1/2 1/2 0; 1/2 1/2 0; 0 0 0],

and we can decompose the vector [2 3 4]^T uniquely into the sum of a vector in N(A)^⊥ and a vector in N(A), respectively, as follows:

    [2 3 4]^T = A^+Ax + (I − A^+A)x
              = [5/2 5/2 0]^T + [−1/2 1/2 4]^T.
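A quick numerical check of this decomposition (a sketch only; it takes A to be the 2 × 3 matrix in the example above):

    import numpy as np

    A = np.array([[1., 1., 0.],
                  [1., 1., 0.]])
    x = np.array([2., 3., 4.])
    Ap = np.linalg.pinv(A)                 # equals [[1/4, 1/4], [1/4, 1/4], [0, 0]]

    x_row  = Ap @ A @ x                    # component in N(A)-perp
    x_null = x - x_row                     # component in N(A)
    print(x_row, x_null)                   # [2.5 2.5 0. ]  [-0.5  0.5  4. ]
    assert np.allclose(A @ x_null, 0)      # x_null really lies in N(A)
    assert np.isclose(x_row @ x_null, 0)   # the two pieces are orthogonal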
7.2 Inner Product Spaces
Definition 7.10. Let V be a vector space over R. Then ⟨·,·⟩ : V × V → R is a real inner product if

1. ⟨x, x⟩ ≥ 0 for all x ∈ V and ⟨x, x⟩ = 0 if and only if x = 0.

2. ⟨x, y⟩ = ⟨y, x⟩ for all x, y ∈ V.

3. ⟨x, αy_1 + βy_2⟩ = α⟨x, y_1⟩ + β⟨x, y_2⟩ for all x, y_1, y_2 ∈ V and for all α, β ∈ R.
Example 7.11. Let V = R^n. Then ⟨x, y⟩ = x^Ty is the "usual" Euclidean inner product or dot product.

Example 7.12. Let V = R^n. Then ⟨x, y⟩_Q = x^TQy, where Q = Q^T > 0 is an arbitrary n × n positive definite matrix, defines a "weighted" inner product.
Definition 7.13. If A ∈ R^{m×n}, then A^T ∈ R^{n×m} is the unique linear transformation or map such that ⟨x, Ay⟩ = ⟨A^Tx, y⟩ for all x ∈ R^m and for all y ∈ R^n.
It is easy to check that, with this more "abstract" definition of transpose, and if the (i, j)th element of A is a_{ij}, then the (i, j)th element of A^T is a_{ji}. It can also be checked that all the usual properties of the transpose hold, such as (AB)^T = B^TA^T. However, the definition above allows us to extend the concept of transpose to the case of weighted inner products in the following way. Suppose A ∈ R^{m×n} and let ⟨·,·⟩_Q and ⟨·,·⟩_R, with Q and R positive definite, be weighted inner products on R^m and R^n, respectively. Then we can define the "weighted transpose" A^# as the unique map that satisfies

    ⟨x, Ay⟩_Q = ⟨A^#x, y⟩_R    for all x ∈ R^m and for all y ∈ R^n.

By Example 7.12 above, we must then have x^TQAy = x^T(A^#)^TRy for all x, y. Hence we must have QA = (A^#)^TR. Taking transposes (of the usual variety) gives A^TQ = RA^#. Since R is nonsingular, we find

    A^# = R^{−1}A^TQ.
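The weighted transpose is easy to experiment with numerically. In the sketch below (an illustration only; the matrices A, Q, and R are randomly generated, not taken from the text), A^# = R^{−1}A^TQ is formed and the defining property ⟨x, Ay⟩_Q = ⟨A^#x, y⟩_R is verified.

    import numpy as np

    rng = np.random.default_rng(0)
    m, n = 4, 3
    A = rng.standard_normal((m, n))

    def random_spd(k):
        # Return a random symmetric positive definite k x k matrix.
        M = rng.standard_normal((k, k))
        return M @ M.T + k * np.eye(k)

    Q, R = random_spd(m), random_spd(n)       # weights on R^m and R^n
    A_sharp = np.linalg.solve(R, A.T @ Q)     # A# = R^{-1} A^T Q

    x, y = rng.standard_normal(m), rng.standard_normal(n)
    lhs = x @ Q @ (A @ y)                     # <x, Ay>_Q
    rhs = (A_sharp @ x) @ R @ y               # <A# x, y>_R
    assert np.isclose(lhs, rhs)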
We can also generalize the notion of orthogonality (x^Ty = 0) to Q-orthogonality (Q is a positive definite matrix). Two vectors x, y ∈ R^n are Q-orthogonal (or conjugate with respect to Q) if ⟨x, y⟩_Q = x^TQy = 0. Q-orthogonality is an important tool used in studying conjugate direction methods in optimization theory.
Definition 7.14. Let V be a vector space over C. Then ⟨·,·⟩ : V × V → C is a complex inner product if

1. ⟨x, x⟩ ≥ 0 for all x ∈ V and ⟨x, x⟩ = 0 if and only if x = 0.

2. ⟨x, y⟩ = \overline{⟨y, x⟩} for all x, y ∈ V.

3. ⟨x, αy_1 + βy_2⟩ = α⟨x, y_1⟩ + β⟨x, y_2⟩ for all x, y_1, y_2 ∈ V and for all α, β ∈ C.
Remark 7.15. We could use the notation ⟨·,·⟩_C to denote a complex inner product, but if the vectors involved are complex-valued, the complex inner product is to be understood. Note, too, from part 2 of the definition, that ⟨x, x⟩ must be real for all x.

Remark 7.16. Note from parts 2 and 3 of Definition 7.14 that we have

    ⟨αx_1 + βx_2, y⟩ = \overline{α}⟨x_1, y⟩ + \overline{β}⟨x_2, y⟩.
Remark 7.17. The Euclidean inner product of x, y ∈ C^n is given by

    ⟨x, y⟩ = Σ_{i=1}^{n} \overline{x_i}y_i = x^Hy.

The conventional definition of the complex Euclidean inner product is ⟨x, y⟩ = y^Hx but we use its complex conjugate x^Hy here for symmetry with the real case.
Remark 7.18. A weighted inner product can be defined as in the real case by ⟨x, y⟩_Q = x^HQy, for arbitrary Q = Q^H > 0. The notion of Q-orthogonality can be similarly generalized to the complex case.
Definition 7.19. A vector space (V, F) endowed with a specific inner product is called an inner product space. If F = C, we call V a complex inner product space. If F = R, we call V a real inner product space.
Example 7.20.

1. Check that V = R^{n×n} with the inner product ⟨A, B⟩ = Tr A^TB is a real inner product space. Note that other choices are possible since by properties of the trace function, Tr A^TB = Tr B^TA = Tr AB^T = Tr BA^T.

2. Check that V = C^{n×n} with the inner product ⟨A, B⟩ = Tr A^HB is a complex inner product space. Again, other choices are possible.
Definition 7.21. Let V be an inner product space. For v ∈ V, we define the norm (or length) of v by ‖v‖ = √⟨v, v⟩. This is called the norm induced by ⟨·,·⟩.

Example 7.22.

1. If V = R^n with the usual inner product, the induced norm is given by ‖v‖ = (Σ_{i=1}^{n} v_i^2)^{1/2}.

2. If V = C^n with the usual inner product, the induced norm is given by ‖v‖ = (Σ_{i=1}^{n} |v_i|^2)^{1/2}.
Theorem 7.23. Let P be an orthogonal projection on an inner product space V. Then ‖Pv‖ ≤ ‖v‖ for all v ∈ V.

Proof: Since P is an orthogonal projection, P^2 = P = P^#. (Here, the notation P^# denotes the unique linear transformation that satisfies ⟨Pu, v⟩ = ⟨u, P^#v⟩ for all u, v ∈ V. If this seems a little too abstract, consider V = R^n (or C^n), where P^# is simply the usual P^T (or P^H).) Hence ⟨Pv, v⟩ = ⟨P^2v, v⟩ = ⟨Pv, P^#v⟩ = ⟨Pv, Pv⟩ = ‖Pv‖^2 ≥ 0. Now I − P is also a projection, so the above result applies and we get

    0 ≤ ⟨(I − P)v, v⟩ = ⟨v, v⟩ − ⟨Pv, v⟩ = ‖v‖^2 − ‖Pv‖^2,

from which the theorem follows. □
Definition 7.24. The norm induced on an inner product space by the "usual" inner product is called the natural norm.

In case V = C^n or V = R^n, the natural norm is also called the Euclidean norm. In the next section, other norms on these vector spaces are defined. A converse to the above procedure is also available. That is, given a norm defined by ‖x‖ = √⟨x, x⟩, an inner product can be defined via the following.
Theorem 7.25 (Polarization Identity).

1. For x, y ∈ R^n, an inner product is defined by

    ⟨x, y⟩ = x^Ty = (‖x + y‖^2 − ‖x‖^2 − ‖y‖^2)/2.

2. For x, y ∈ C^n, an inner product is defined by

    ⟨x, y⟩ = x^Hy = (1/4)(‖x + y‖^2 − ‖x − y‖^2) + (j/4)(‖x − jy‖^2 − ‖x + jy‖^2),

where j = i = √−1.

7.3 Vector Norms
Definition 7.26. Let (V, F) be a vector space. Then ‖·‖ : V → R is a vector norm if it satisfies the following three properties:

1. ‖x‖ ≥ 0 for all x ∈ V and ‖x‖ = 0 if and only if x = 0.

2. ‖αx‖ = |α| ‖x‖ for all x ∈ V and for all α ∈ F.

3. ‖x + y‖ ≤ ‖x‖ + ‖y‖ for all x, y ∈ V.
(This is called the triangle inequality, as seen readily from the usual diagram illustrating the sum of two vectors in R^2.)

Remark 7.27. It is convenient in the remainder of this section to state results for complex-valued vectors. The specialization to the real case is obvious.

Definition 7.28. A vector space (V, F) is said to be a normed linear space if and only if there exists a vector norm ‖·‖ : V → R satisfying the three conditions of Definition 7.26.
Example 7.29.

1. For x ∈ C^n, the Hölder norms, or p-norms, are defined by

    ‖x‖_p = (Σ_{i=1}^{n} |x_i|^p)^{1/p},    1 ≤ p < +∞.

Special cases:

(a) ‖x‖_1 = Σ_{i=1}^{n} |x_i| (the "Manhattan" norm).

(b) ‖x‖_2 = (Σ_{i=1}^{n} |x_i|^2)^{1/2} = (x^Hx)^{1/2} (the Euclidean norm).

(c) ‖x‖_∞ = max_{1≤i≤n} |x_i| = lim_{p→+∞} ‖x‖_p.

(The second equality is a theorem that requires proof.)
2. Some weighted p-norms:

(a) ‖x‖_{1,D} = Σ_{i=1}^{n} d_i|x_i|, where d_i > 0.

(b) ‖x‖_{2,Q} = (x^HQx)^{1/2}, where Q = Q^H > 0 (this norm is more commonly denoted ‖·‖_Q).
3. On the vector space (C[t_0, t_1], R), define the vector norm

    ‖f‖ = max_{t_0 ≤ t ≤ t_1} |f(t)|.

On the vector space ((C[t_0, t_1])^n, R), define the vector norm

    ‖f‖_∞ = max_{t_0 ≤ t ≤ t_1} ‖f(t)‖_∞.
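The Hölder norms of Example 7.29 are available directly in NumPy; the sketch below (the vector is an arbitrary illustration, not from the text) evaluates the three special cases of part 1 and shows ‖x‖_p approaching ‖x‖_∞ as p grows.

    import numpy as np

    x = np.array([3.0, -4.0, 1.0, 0.5])

    one_norm = np.linalg.norm(x, 1)        # sum of absolute values ("Manhattan")
    two_norm = np.linalg.norm(x, 2)        # Euclidean norm, (x^H x)^{1/2}
    inf_norm = np.linalg.norm(x, np.inf)   # max_i |x_i|

    print(one_norm, two_norm, inf_norm)    # 8.5  5.123...  4.0
    for p in (2, 5, 20, 100):
        print(p, np.linalg.norm(x, p))     # tends to the infinity-norm as p grows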
Theorem 7.30 (Hölder Inequality). Let x, y ∈ C^n. Then

    |x^Hy| ≤ ‖x‖_p ‖y‖_q,    1/p + 1/q = 1.

A particular case of the Hölder inequality is of special interest.
Theorem 7.31 (Cauchy-Bunyakovsky-Schwarz Inequality). Let x, y ∈ C^n. Then

    |x^Hy| ≤ ‖x‖_2 ‖y‖_2,

with equality if and only if x and y are linearly dependent.

Proof: Consider the matrix [x y] ∈ C^{n×2}. Since

    [x y]^H[x y] = [x^Hx  x^Hy]
                   [y^Hx  y^Hy]

is a nonnegative definite matrix, its determinant must be nonnegative. In other words, 0 ≤ (x^Hx)(y^Hy) − (x^Hy)(y^Hx). Since y^Hx = \overline{x^Hy}, we see immediately that |x^Hy| ≤ ‖x‖_2 ‖y‖_2. □

Note: This is not the classical algebraic proof of the Cauchy-Bunyakovsky-Schwarz (CBS) inequality (see, e.g., [20, p. 217]). However, it is particularly easy to remember.
Remark 7.32. The angle θ between two nonzero vectors x, y ∈ C^n may be defined by cos θ = |x^Hy|/(‖x‖_2‖y‖_2), 0 ≤ θ ≤ π/2. The CBS inequality is thus equivalent to the statement 0 ≤ cos θ ≤ 1.
Remark 7.33. Theorem 7.31 and Remark 7.32 are true for general inner product spaces.
Remark 7.34. The norm ‖·‖_2 is unitarily invariant, i.e., if U ∈ C^{n×n} is unitary, then ‖Ux‖_2 = ‖x‖_2 (Proof: ‖Ux‖_2^2 = x^HU^HUx = x^Hx = ‖x‖_2^2). However, ‖·‖_1 and ‖·‖_∞
are not unitarily invariant. Similar remarks apply to the unitary invariance of norms of real
vectors under orthogonal transformation.
Remark 7.35. If x, y ∈ C^n are orthogonal, then we have the Pythagorean Identity

    ‖x ± y‖_2^2 = ‖x‖_2^2 + ‖y‖_2^2,

the proof of which follows easily from ‖z‖_2^2 = z^Hz.

Theorem 7.36. All norms on C^n are equivalent; i.e., there exist constants c_1, c_2 (possibly depending on n) such that

    c_1‖x‖_α ≤ ‖x‖_β ≤ c_2‖x‖_α    for all x ∈ C^n.
Example 7.37. For x ∈ C^n, the following inequalities are all tight bounds; i.e., there exist vectors x for which equality holds:

    ‖x‖_1 ≤ √n ‖x‖_2,     ‖x‖_1 ≤ n ‖x‖_∞,
    ‖x‖_2 ≤ ‖x‖_1,        ‖x‖_2 ≤ √n ‖x‖_∞,
    ‖x‖_∞ ≤ ‖x‖_1,        ‖x‖_∞ ≤ ‖x‖_2.
Finally, we conclude this section with a theorem about convergence of vectors. Convergence of a sequence of vectors to some limit vector can be converted into a statement about convergence of real numbers, i.e., convergence in terms of vector norms.

Theorem 7.38. Let ‖·‖ be a vector norm and suppose v, v^{(1)}, v^{(2)}, ... ∈ C^n. Then

    lim_{k→+∞} v^{(k)} = v    if and only if    lim_{k→+∞} ‖v^{(k)} − v‖ = 0.
7.4 Matrix Norms

In this section we introduce the concept of matrix norm. As with vectors, the motivation for using matrix norms is to have a notion of either the size of or the nearness of matrices. The former notion is useful for perturbation analysis, while the latter is needed to make sense of "convergence" of matrices. Attention is confined to the vector space (R^{m×n}, R) since that is what arises in the majority of applications. Extension to the complex case is straightforward and essentially obvious.
Definition 7.39. ‖·‖ : R^{m×n} → R is a matrix norm if it satisfies the following three properties:

1. ‖A‖ ≥ 0 for all A ∈ R^{m×n} and ‖A‖ = 0 if and only if A = 0.

2. ‖αA‖ = |α| ‖A‖ for all A ∈ R^{m×n} and for all α ∈ R.

3. ‖A + B‖ ≤ ‖A‖ + ‖B‖ for all A, B ∈ R^{m×n}.

(As with vectors, this is called the triangle inequality.)
Example 7.40. Let A ∈ R^{m×n}. Then the Frobenius norm (or matrix Euclidean norm) is defined by

    ‖A‖_F = (Σ_{i=1}^{m} Σ_{j=1}^{n} a_{ij}^2)^{1/2} = (Σ_{i=1}^{r} σ_i^2(A))^{1/2} = (Tr(A^TA))^{1/2} = (Tr(AA^T))^{1/2}

(where r = rank(A)).
Example 7.41. Let A ∈ R^{m×n}. Then the matrix p-norms are defined by

    ‖A‖_p = max_{x≠0} ‖Ax‖_p/‖x‖_p = max_{‖x‖_p=1} ‖Ax‖_p.

The following three special cases are important because they are "computable." Each is a theorem and requires a proof.

1. The "maximum column sum" norm is

    ‖A‖_1 = max_{1≤j≤n} Σ_{i=1}^{m} |a_{ij}|.

2. The "maximum row sum" norm is

    ‖A‖_∞ = max_{1≤i≤m} Σ_{j=1}^{n} |a_{ij}|.

3. The spectral norm is

    ‖A‖_2 = λ_max^{1/2}(A^TA) = λ_max^{1/2}(AA^T) = σ_1(A).

Note: ‖A^+‖_2 = 1/σ_r(A), where r = rank(A).
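All three computable matrix norms, together with the Frobenius norm of Example 7.40, can be evaluated in NumPy; the sketch below (the matrix is an arbitrary illustration, not from the text) also confirms the column-sum, row-sum, and largest-singular-value characterizations.

    import numpy as np

    A = np.array([[1., -2.,  3.],
                  [0.,  4., -1.],
                  [2.,  1.,  0.]])

    norm_1   = np.linalg.norm(A, 1)        # maximum column sum
    norm_inf = np.linalg.norm(A, np.inf)   # maximum row sum
    norm_2   = np.linalg.norm(A, 2)        # spectral norm = largest singular value
    norm_F   = np.linalg.norm(A, 'fro')    # Frobenius norm

    assert np.isclose(norm_1, np.abs(A).sum(axis=0).max())
    assert np.isclose(norm_inf, np.abs(A).sum(axis=1).max())
    assert np.isclose(norm_2, np.linalg.svd(A, compute_uv=False)[0])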
Example 7.42. Let A ∈ R^{m×n}. The Schatten p-norms are defined by

    ‖A‖_{S,p} = (σ_1^p + ··· + σ_r^p)^{1/p}.

Some special cases of Schatten p-norms are equal to norms defined previously. For example, ‖·‖_{S,2} = ‖·‖_F and ‖·‖_{S,∞} = ‖·‖_2. The norm ‖·‖_{S,1} is often called the trace norm.
Example 7.43. Let A ∈ R^{m×n}. Then "mixed" norms can also be defined by

    ‖A‖_{p,q} = max_{x≠0} ‖Ax‖_p/‖x‖_q.

Example 7.44. The "matrix analogue of the vector 1-norm," ‖A‖_s = Σ_{i,j} |a_{ij}|, is a norm.
The concept of a matrix norm alone is not altogether useful since it does not allow us
to estimate the size of a matrix product AB in terms of the sizes of A and B individually.
Notice that this difficulty did not arise for vectors, although there are analogues for, e.g.,
inner products or outer products of vectors. We thus need the following definition.
Definition 7.45. Let A ∈ R^{m×n}, B ∈ R^{n×k}. Then the norms ‖·‖_α, ‖·‖_β, and ‖·‖_γ are mutually consistent if ‖AB‖_α ≤ ‖A‖_β ‖B‖_γ. A matrix norm ‖·‖ is said to be consistent if ‖AB‖ ≤ ‖A‖ ‖B‖ whenever the matrix product is defined.
Example 7.46.

1. ‖·‖_F and ‖·‖_p for all p are consistent matrix norms.

2. The "mixed" norm

    ‖A‖_{∞,1} = max_{x≠0} ‖Ax‖_∞/‖x‖_1 = max_{i,j} |a_{ij}|

is a matrix norm but it is not consistent. For example, take A = B = [1 1; 1 1]. Then ‖AB‖_{∞,1} = 2 while ‖A‖_{∞,1} ‖B‖_{∞,1} = 1.
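The failure of consistency in part 2 is easy to see numerically; the following sketch (an illustration only) evaluates the max-entry norm for A, B, and AB.

    import numpy as np

    def max_entry_norm(M):
        # the mixed norm of Example 7.46(2): the largest entry in absolute value
        return np.abs(M).max()

    A = B = np.ones((2, 2))
    print(max_entry_norm(A @ B))                   # 2.0
    print(max_entry_norm(A) * max_entry_norm(B))   # 1.0 -- consistency fails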
The p-norms are examples of matrix norms that are subordinate to (or induced by) a vector norm, i.e.,

    ‖A‖ = max_{x≠0} ‖Ax‖/‖x‖ = max_{‖x‖=1} ‖Ax‖

(or, more generally, ‖A‖_{p,q} = max_{x≠0} ‖Ax‖_p/‖x‖_q). For such subordinate norms, also called operator norms, we clearly have ‖Ax‖ ≤ ‖A‖ ‖x‖. Since ‖ABx‖ ≤ ‖A‖ ‖Bx‖ ≤ ‖A‖ ‖B‖ ‖x‖, it follows that all subordinate norms are consistent.
Theorem 7.47. There exists a vector x* such that ‖Ax*‖ = ‖A‖ ‖x*‖ if the matrix norm is subordinate to the vector norm.

Theorem 7.48. If ‖·‖_m is a consistent matrix norm, there exists a vector norm ‖·‖_v consistent with it, i.e., ‖Ax‖_v ≤ ‖A‖_m ‖x‖_v.
Not every consistent matrix norm is subordinate to a vector norm. For example, consider ‖·‖_F. Then ‖Ax‖_2 ≤ ‖A‖_F ‖x‖_2, so ‖·‖_2 is consistent with ‖·‖_F, but there does not exist a vector norm ‖·‖ such that ‖A‖_F is given by max_{x≠0} ‖Ax‖/‖x‖.
Useful Results
The following miscellaneous results about matrix norms are collected for future reference.
The interested reader is invited to prove each of them as an exercise.
1. ‖I_n‖_p = 1 for all p, while ‖I_n‖_F = √n.

2. For A ∈ R^{n×n}, the following inequalities are all tight, i.e., there exist matrices A for which equality holds:

    ‖A‖_1 ≤ √n ‖A‖_2,     ‖A‖_1 ≤ n ‖A‖_∞,      ‖A‖_1 ≤ √n ‖A‖_F,
    ‖A‖_2 ≤ √n ‖A‖_1,     ‖A‖_2 ≤ √n ‖A‖_∞,     ‖A‖_2 ≤ ‖A‖_F,
    ‖A‖_∞ ≤ n ‖A‖_1,      ‖A‖_∞ ≤ √n ‖A‖_2,     ‖A‖_∞ ≤ √n ‖A‖_F,
    ‖A‖_F ≤ √n ‖A‖_1,     ‖A‖_F ≤ √n ‖A‖_2,     ‖A‖_F ≤ √n ‖A‖_∞.
3. For A ∈ R^{m×n},

    max_{i,j} |a_{ij}| ≤ ‖A‖_2 ≤ √(mn) max_{i,j} |a_{ij}|.
4. The norms ‖·‖_F and ‖·‖_2 (as well as all the Schatten p-norms, but not necessarily other p-norms) are unitarily invariant; i.e., for all A ∈ R^{m×n} and for all orthogonal matrices Q ∈ R^{m×m} and Z ∈ R^{n×n}, ‖QAZ‖_α = ‖A‖_α for α = 2 or F.
Convergence
The following theorem uses matrix norms to convert a statement about convergence of a
sequence of matrices into a statement about the convergence of an associated sequence of
scalars.
Theorem 7.49. Let ‖·‖ be a matrix norm and suppose A, A^{(1)}, A^{(2)}, ... ∈ R^{m×n}. Then

    lim_{k→+∞} A^{(k)} = A    if and only if    lim_{k→+∞} ‖A^{(k)} − A‖ = 0.
EXERCISES
1. If P is an orthogonal projection, prove that P^+ = P.

2. Suppose P and Q are orthogonal projections and P + Q = I. Prove that P − Q must be an orthogonal matrix.

3. Prove that I − A^+A is an orthogonal projection. Also, prove directly that V_2V_2^T is an orthogonal projection, where V_2 is defined as in Theorem 5.1.
4. Suppose that a matrix A ∈ R^{m×n} has linearly independent columns. Prove that the orthogonal projection onto the space spanned by these column vectors is given by the matrix P = A(A^TA)^{−1}A^T.

5. Find the (orthogonal) projection of the vector [2 3 4]^T onto the subspace of R^3 spanned by the plane 3x − y + 2z = 0.

6. Prove that R^{n×n} with the inner product ⟨A, B⟩ = Tr A^TB is a real inner product space.
7. Show that the matrix norms ‖·‖_2 and ‖·‖_F are unitarily invariant.

8. Definition: Let A ∈ R^{n×n} and denote its set of eigenvalues (not necessarily distinct) by {λ_1, ..., λ_n}. The spectral radius of A is the scalar

    ρ(A) = max_i |λ_i|.
Let

    A = [ ~  0  ~ ]
        [ 14 12 5 ].

Determine ‖A‖_F, ‖A‖_1, ‖A‖_2, ‖A‖_∞, and ρ(A).
9. Let

    A = [8 1 6]
        [3 5 7]
        [4 9 2].

Determine ‖A‖_F, ‖A‖_1, ‖A‖_2, ‖A‖_∞, and ρ(A). (An n × n matrix, all of whose columns and rows as well as main diagonal and antidiagonal sum to s = n(n^2 + 1)/2, is called a "magic square" matrix. If M is a magic square matrix, it can be proved that ‖M‖_p = s for all p.)
10. Let A = xy^T, where both x, y ∈ R^n are nonzero. Determine ‖A‖_F, ‖A‖_1, ‖A‖_2, and ‖A‖_∞ in terms of ‖x‖_α and/or ‖y‖_β, where α and β take the value 1, 2, or ∞ as appropriate.
Chapter 8
Linear Least Squares
Problems
8.1 The Linear Least Squares Problem

Problem: Suppose A ∈ R^{m×n} with m ≥ n and b ∈ R^m is a given vector. The linear least squares problem consists of finding an element of the set

    X = {x ∈ R^n : ρ(x) = ‖Ax − b‖_2 is minimized}.
Solution: The set X has a number of easily verified properties:

1. A vector x ∈ X if and only if A^Tr = 0, where r = b − Ax is the residual associated with x. The equations A^Tr = 0 can be rewritten in the form A^TAx = A^Tb and the latter form is commonly known as the normal equations, i.e., x ∈ X if and only if x is a solution of the normal equations. For further details, see Section 8.2.

2. A vector x ∈ X if and only if x is of the form

    x = A^+b + (I − A^+A)y, where y ∈ R^n is arbitrary.    (8.1)
To see why this must be so, write the residual r in the form

    r = (b − P_{R(A)}b) + (P_{R(A)}b − Ax).

Now, (P_{R(A)}b − Ax) is clearly in R(A), while

    (b − P_{R(A)}b) = (I − P_{R(A)})b = P_{R(A)^⊥}b ∈ R(A)^⊥,

so these two vectors are orthogonal. Hence,

    ‖r‖_2^2 = ‖b − Ax‖_2^2 = ‖b − P_{R(A)}b‖_2^2 + ‖P_{R(A)}b − Ax‖_2^2

from the Pythagorean identity (Remark 7.35). Thus, ‖Ax − b‖_2^2 (and hence ρ(x) = ‖Ax − b‖_2) assumes its minimum value if and only if

    Ax = P_{R(A)}b    (8.2)
and this equation always has a solution since AA^+b ∈ R(A). By Theorem 6.3, all solutions of (8.2) are of the form

    x = A^+AA^+b + (I − A^+A)y
      = A^+b + (I − A^+A)y,

where y ∈ R^n is arbitrary. The minimum value of ρ(x) is then clearly equal to

    ‖b − P_{R(A)}b‖_2 = ‖(I − AA^+)b‖_2 ≤ ‖b‖_2,

the last inequality following by Theorem 7.23.
3. X is convex. To see why, consider two arbitrary vectors x_1 = A^+b + (I − A^+A)y and x_2 = A^+b + (I − A^+A)z in X. Let θ ∈ [0, 1]. Then the convex combination θx_1 + (1 − θ)x_2 = A^+b + (I − A^+A)(θy + (1 − θ)z) is clearly in X.

4. X has a unique element x* of minimal 2-norm. In fact, x* = A^+b is the unique vector that solves this "double minimization" problem, i.e., x* minimizes the residual ρ(x) and is the vector of minimum 2-norm that does so. This follows immediately from convexity or directly from the fact that all x ∈ X are of the form (8.1) and

    ‖x‖_2^2 = ‖A^+b‖_2^2 + ‖(I − A^+A)y‖_2^2 ≥ ‖A^+b‖_2^2,

which follows since the two vectors are orthogonal.
5. There is a unique solution to the least squares problem, i.e., X = {x*} = {A^+b}, if and only if A^+A = I or, equivalently, if and only if rank(A) = n.
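These properties are easy to observe numerically. The sketch below (the matrix and vector are randomly generated, not from the text) builds members of X from the pseudoinverse and checks that A^+b is the minimum-norm minimizer.

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.standard_normal((6, 4))
    A[:, 3] = A[:, 0] + A[:, 1]            # make A rank deficient so X is not a singleton
    b = rng.standard_normal(6)

    Ap = np.linalg.pinv(A)
    x_star = Ap @ b                        # minimum 2-norm least squares solution

    # any member of X: x = A+ b + (I - A+ A) y
    y = rng.standard_normal(4)
    x_other = x_star + (np.eye(4) - Ap @ A) @ y

    # both give the same (minimal) residual, but x_star has the smaller norm
    r1 = np.linalg.norm(A @ x_star - b)
    r2 = np.linalg.norm(A @ x_other - b)
    assert np.isclose(r1, r2)
    assert np.linalg.norm(x_star) <= np.linalg.norm(x_other) + 1e-12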
Just as for the solution of linear equations, we can generalize the linear least squares
problem to the matrix case.
Theorem 8.1. Let A ∈ R^{m×n} and B ∈ R^{m×k}. The general solution to

    min_{X∈R^{n×k}} ‖AX − B‖_2

is of the form

    X = A^+B + (I − A^+A)Y,

where Y ∈ R^{n×k} is arbitrary. The unique solution of minimum 2-norm or F-norm is X = A^+B.
Remark 8.2. Notice that solutions of the linear least squares problem look exactly the same as solutions of the linear system AX = B. The only difference is that in the case of linear least squares solutions, there is no "existence condition" such as R(B) ⊆ R(A). If the existence condition happens to be satisfied, then equality holds and the least squares
residual is 0. Of all solutions that give a residual of 0, the unique solution X = A^+B has minimum 2-norm or F-norm.
Remark 8.3. If we take B = I_m in Theorem 8.1, then X = A^+ can be interpreted as saying that the Moore-Penrose pseudoinverse of A is the best (in the matrix 2-norm sense) matrix such that AX approximates the identity.
Remark 8.4. Many other interesting and useful approximation results are available for the matrix 2-norm (and F-norm). One such is the following. Let A ∈ R^{m×n} with SVD

    A = UΣV^T = Σ_{i=1}^{r} σ_iu_iv_i^T.

Then a best rank k approximation to A for 1 ≤ k ≤ r, i.e., a solution to

    min_{M∈R^{m×n}_k} ‖A − M‖_2,

is given by

    M_k = Σ_{i=1}^{k} σ_iu_iv_i^T.

The special case in which m = n and k = n − 1 gives a nearest singular matrix to A ∈ R^{n×n}.
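A truncated SVD produces this best approximation directly; the sketch below (an arbitrary random matrix, not from the text) forms M_k and checks that the attained error ‖A − M_k‖_2 equals σ_{k+1}(A).

    import numpy as np

    rng = np.random.default_rng(2)
    A = rng.standard_normal((5, 4))
    U, s, Vt = np.linalg.svd(A, full_matrices=False)

    k = 2
    Mk = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]   # best rank-k approximation in the 2-norm

    # the minimum achievable error is the (k+1)st singular value
    assert np.isclose(np.linalg.norm(A - Mk, 2), s[k])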
8.2 Geometric Solution

Looking at the schematic provided in Figure 8.1, it is apparent that minimizing ‖Ax − b‖_2 is equivalent to finding the vector x ∈ R^n for which p = Ax is closest to b (in the Euclidean norm sense). Clearly, r = b − Ax must be orthogonal to R(A). Thus, if Ay is an arbitrary vector in R(A) (i.e., y is arbitrary), we must have

    0 = (Ay)^T(b − Ax)
      = y^TA^T(b − Ax)
      = y^T(A^Tb − A^TAx).

Since y is arbitrary, we must have A^Tb − A^TAx = 0 or A^TAx = A^Tb.

Special case: If A is full (column) rank, then x = (A^TA)^{−1}A^Tb.
8.3 Linear Regression and Other Linear Least Squares Problems

8.3.1 Example: Linear regression

Suppose we have m measurements (t_1, y_1), ..., (t_m, y_m) for which we hypothesize a linear (affine) relationship

    y = αt + β    (8.3)
Figure 8.1. Projection of b on R(A).
for certain constants α and β. One way to solve this problem is to find the line that best fits the data in the least squares sense; i.e., with the model (8.3), we have

    y_1 = αt_1 + β + δ_1,
    y_2 = αt_2 + β + δ_2,
    ...
    y_m = αt_m + β + δ_m,

where δ_1, ..., δ_m are "errors" and we wish to minimize δ_1^2 + ··· + δ_m^2. Geometrically, we are trying to find the best line that minimizes the (sum of squares of the) distances from the given data points. See, for example, Figure 8.2.
Figure 8.2. Simple linear regression.
Note that distances are measured in the vertical sense from the points to the line (as indicated, for example, for the point (t_1, y_1)). However, other criteria are possible. For example, one could measure the distances in the horizontal sense, or the perpendicular distance from the points to the line could be used. The latter is called total least squares. Instead of 2-norms, one could also use 1-norms or ∞-norms. The latter two are computationally
much more difficult to handle, and thus we present only the more tractable 2-norm case in the text that follows.

The m "error equations" can be written in matrix form as

    y = Ax + δ,

where

    y = [y_1 ... y_m]^T,    A = [t_1 1; t_2 1; ... ; t_m 1],    x = [α β]^T,    δ = [δ_1 ... δ_m]^T.
We then want to solve the problem

    min_x δ^Tδ = min_x (Ax − y)^T(Ax − y)    (8.4)

or, equivalently,

    min_x ‖δ‖_2 = min_x ‖Ax − y‖_2.

Solution: x = [α β]^T is a solution of the normal equations A^TAx = A^Ty where, for the special form of the matrices above, we have

    A^TA = [Σ_i t_i^2   Σ_i t_i]
           [Σ_i t_i     m      ]

and

    A^Ty = [Σ_i t_i y_i]
           [Σ_i y_i    ].

The solution for the parameters α and β can then be written

    α = (m Σ_i t_i y_i − (Σ_i t_i)(Σ_i y_i)) / Δ,
    β = ((Σ_i t_i^2)(Σ_i y_i) − (Σ_i t_i)(Σ_i t_i y_i)) / Δ,    where Δ = m Σ_i t_i^2 − (Σ_i t_i)^2.
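In practice one forms A = [t 1] and solves the least squares problem directly rather than assembling (A^TA)^{−1} by hand; the sketch below (the data points are made up for illustration) recovers α and β with numpy.linalg.lstsq and checks them against the closed-form expressions above.

    import numpy as np

    t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
    y = np.array([1.1, 2.9, 5.2, 7.1, 8.8])          # roughly y = 2t + 1

    A = np.column_stack([t, np.ones_like(t)])         # rows [t_i, 1]
    (alpha, beta), *_ = np.linalg.lstsq(A, y, rcond=None)

    # closed-form solution of the 2x2 normal equations
    m = len(t)
    denom = m * (t @ t) - t.sum() ** 2
    alpha_cf = (m * (t @ y) - t.sum() * y.sum()) / denom
    beta_cf = ((t @ t) * y.sum() - t.sum() * (t @ y)) / denom
    assert np.allclose([alpha, beta], [alpha_cf, beta_cf])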
8.3.2 Other least squares problems
Suppose the hypothesized model is not the linear equation (8.3) but rather is of the form

    y = f(t) = c_1φ_1(t) + ··· + c_nφ_n(t).    (8.5)

In (8.5) the φ_i(t) are given (basis) functions and the c_i are constants to be determined to minimize the least squares error. The matrix problem is still (8.4), where we now have

    A = [φ_1(t_1)  φ_2(t_1)  ...  φ_n(t_1)]
        [  ...       ...     ...    ...   ]
        [φ_1(t_m)  φ_2(t_m)  ...  φ_n(t_m)].

An important special case of (8.5) is least squares polynomial approximation, which corresponds to choosing φ_i(t) = t^{i−1}, i = 1, ..., n, although this choice can lead to computational
difficulties because of numerical ill conditioning for large n. Numerically better approaches are based on orthogonal polynomials, piecewise polynomial functions, splines, etc.

The key feature in (8.5) is that the coefficients c_i appear linearly. The basis functions φ_i can be arbitrarily nonlinear. Sometimes a problem in which the c_i's appear nonlinearly can be converted into a linear problem. For example, if the fitting function is of the form y = f(t) = c_1e^{c_2t}, then taking logarithms yields the equation log y = log c_1 + c_2t. Then defining ŷ = log y, ĉ_1 = log c_1, and ĉ_2 = c_2 results in a standard linear least squares problem.
8.4 Least Squares and Singular Value Decomposition

In the numerical linear algebra literature (e.g., [4], [7], [11], [23]), it is shown that solution of linear least squares problems via the normal equations can be a very poor numerical method in finite-precision arithmetic. Since the standard Kalman filter essentially amounts to sequential updating of normal equations, it can be expected to exhibit such poor numerical behavior in practice (and it does). Better numerical methods are based on algorithms that work directly and solely on A itself rather than A^TA. Two basic classes of algorithms are based on SVD and QR (orthogonal-upper triangular) factorization, respectively. The former is much more expensive but is generally more reliable and offers considerable theoretical insight.
In this section we investigate solution of the linear least squares problem

    min_x ‖Ax − b‖_2,    A ∈ R^{m×n}, b ∈ R^m,    (8.6)

via the SVD. Specifically, we assume that A has an SVD given by A = UΣV^T = U_1SV_1^T as in Theorem 5.1. We now note that

    ‖Ax − b‖_2^2 = ‖UΣV^Tx − b‖_2^2
                 = ‖ΣV^Tx − U^Tb‖_2^2    since ‖·‖_2 is unitarily invariant
                 = ‖Σz − c‖_2^2    where z = V^Tx, c = U^Tb
                 = ‖ [S 0; 0 0][z_1; z_2] − [c_1; c_2] ‖_2^2
                 = ‖Sz_1 − c_1‖_2^2 + ‖c_2‖_2^2.
The last equality follows from the fact that if v = [v_1; v_2], then ‖v‖_2^2 = ‖v_1‖_2^2 + ‖v_2‖_2^2 (note that orthogonality is not what is used here; the subvectors can have different lengths). This explains why it is convenient to work above with the square of the norm rather than the norm. As far as the minimization is concerned, the two are equivalent. In fact, the last quantity above is clearly minimized by taking z_1 = S^{−1}c_1. The subvector z_2 is arbitrary, while the minimum value of ‖Ax − b‖_2^2 is ‖c_2‖_2^2.
Now transform back to the original coordinates:

    x = Vz
      = [V_1 V_2][z_1; z_2]
      = V_1z_1 + V_2z_2
      = V_1S^{−1}c_1 + V_2z_2
      = V_1S^{−1}U_1^Tb + V_2z_2.

The last equality follows from

    c = U^Tb = [U_1^Tb; U_2^Tb] = [c_1; c_2].
Note that since z_2 is arbitrary, V_2z_2 is an arbitrary vector in R(V_2) = N(A). Thus, x has been written in the form x = A^+b + (I − A^+A)y, where y ∈ R^n is arbitrary. This agrees, of course, with (8.1).
The minimum value of the least squares residual is

    ‖c_2‖_2 = ‖U_2^Tb‖_2,

and we clearly have that

    minimum least squares residual is 0  ⟺  b is orthogonal to all vectors in U_2
                                         ⟺  b is orthogonal to all vectors in R(A)^⊥
                                         ⟺  b ∈ R(A).
Another expression for the minimum residual is ‖(I − AA^+)b‖_2. This follows easily since

    ‖(I − AA^+)b‖_2^2 = ‖U_2U_2^Tb‖_2^2 = b^TU_2U_2^TU_2U_2^Tb = b^TU_2U_2^Tb = ‖U_2^Tb‖_2^2.
Finally, an important special case of the linear least squares problem is the so-called full-rank problem, i.e., A ∈ R^{m×n}_n. In this case the SVD of A is given by A = UΣV^T = [U_1 U_2][S; 0]V_1^T, and there is thus "no V_2 part" to the solution.
8.5 Least Squares and QR Factorization

In this section, we again look at the solution of the linear least squares problem (8.6) but this time in terms of the QR factorization. This matrix factorization is much cheaper to compute than an SVD and, with appropriate numerical enhancements, can be quite reliable.

To simplify the exposition, we add the simplifying assumption that A has full column rank, i.e., A ∈ R^{m×n}_n. It is then possible, via a sequence of so-called Householder or Givens transformations, to reduce A in the following way. A finite sequence of simple orthogonal row transformations (of Householder or Givens type) can be performed on A to reduce it to triangular form. If we label the product of such orthogonal row transformations as the orthogonal matrix Q^T ∈ R^{m×m}, we have

    Q^TA = [R; 0],    (8.7)
where R ∈ R^{n×n} is upper triangular. Now write Q = [Q_1 Q_2], where Q_1 ∈ R^{m×n} and Q_2 ∈ R^{m×(m−n)}. Both Q_1 and Q_2 have orthonormal columns. Multiplying through by Q in (8.7), we see that

    A = Q[R; 0]    (8.8)
      = [Q_1 Q_2][R; 0]
      = Q_1R.    (8.9)
Any of (8.7), (8.8), or (8.9) are variously referred to as QR factorizations of A. Note that (8.9) is essentially what is accomplished by the Gram-Schmidt process, i.e., by writing AR^{−1} = Q_1 we see that a "triangular" linear combination (given by the coefficients of R^{−1}) of the columns of A yields the orthonormal columns of Q_1.
Now note that

    ‖Ax − b‖_2^2 = ‖Q^TAx − Q^Tb‖_2^2    since ‖·‖_2 is unitarily invariant
                 = ‖ [R; 0]x − [c_1; c_2] ‖_2^2    where [c_1; c_2] = Q^Tb
                 = ‖Rx − c_1‖_2^2 + ‖c_2‖_2^2.
The last quantity above is clearly minimized by taking x = R^{−1}c_1 and the minimum residual is ‖c_2‖_2. Equivalently, we have x = R^{−1}Q_1^Tb = A^+b and the minimum residual is ‖Q_2^Tb‖_2.
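A corresponding sketch using the QR factorization (again an arbitrary full-column-rank example, not from the text):

    import numpy as np

    rng = np.random.default_rng(4)
    A = rng.standard_normal((7, 3))             # full column rank (almost surely)
    b = rng.standard_normal(7)

    Q1, R = np.linalg.qr(A)                     # "economy" QR: A = Q1 R, Q1 is 7 x 3
    c1 = Q1.T @ b
    x = np.linalg.solve(R, c1)                  # x = R^{-1} Q1^T b  (R is upper triangular,
                                                #  so back substitution would also do)
    assert np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0])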
EXERCISES
1. For A ∈ R^{m×n}, b ∈ R^m, and any y ∈ R^n, check directly that (I − A^+A)y and A^+b are orthogonal vectors.

2. Consider the following set of measurements (x_i, y_i):

    (1, 2), (2, 1), (3, 3).
(a) Find the best (in the 2-norm sense) line of the form y = αx + β that fits this data.

(b) Find the best (in the 2-norm sense) line of the form x = αy + β that fits this data.

3. Suppose q_1 and q_2 are two orthonormal vectors and b is a fixed vector, all in R^n.

(a) Find the optimal linear combination αq_1 + βq_2 that is closest to b (in the 2-norm sense).

(b) Let r denote the "error vector" b − αq_1 − βq_2. Show that r is orthogonal to both q_1 and q_2.
4. Find all solutions of the linear least squares problem

    min_x ‖Ax − b‖_2
when A = [
5. Consider the problem of finding the minimum 2-norm solution of the linear least squares problem

    min_x ‖Ax − b‖_2
when A = [ ] and b = [ ]. The solution is
(a) Consider a perturbation E_1 = [ ] of A, where δ is a small positive number. Solve the perturbed version of the above problem,

    min_y ‖A_1y − b‖_2,

where A_1 = A + E_1. What happens to ‖x* − y‖_2 as δ approaches 0?

(b) Now consider the perturbation E_2 = [ ] of A, where again δ is a small positive number. Solve the perturbed problem

    min_z ‖A_2z − b‖_2,

where A_2 = A + E_2. What happens to ‖x* − z‖_2 as δ approaches 0?
6. Use the four Penrose conditions and the fact that Q_1 has orthonormal columns to verify that if A ∈ R^{m×n}_n can be factored in the form (8.9), then A^+ = R^{−1}Q_1^T.

7. Let A ∈ R^{n×n}, not necessarily nonsingular, and suppose A = QR, where Q is orthogonal. Prove that A^+ = R^+Q^T.
Chapter 9

Eigenvalues and Eigenvectors

9.1 Fundamental Definitions and Properties

Definition 9.1. A nonzero vector x ∈ C^n is a right eigenvector of A ∈ C^{n×n} if there exists a scalar λ ∈ C, called an eigenvalue, such that

    Ax = λx.                                                      (9.1)

Similarly, a nonzero vector y ∈ C^n is a left eigenvector corresponding to an eigenvalue μ if

    y^H A = μ y^H.                                                (9.2)

By taking Hermitian transposes in (9.1), we see immediately that x^H is a left eigenvector of A^H associated with λ̄. Note that if x [y] is a right [left] eigenvector of A, then so is αx [αy] for any nonzero scalar α ∈ C. One often-used scaling for an eigenvector is α = 1/‖x‖ so that the scaled eigenvector has norm 1. The 2-norm is the most common norm used for such scaling.

Definition 9.2. The polynomial π(λ) = det(A − λI) is called the characteristic polynomial of A. (Note that the characteristic polynomial can also be defined as det(λI − A). This results in at most a change of sign and, as a matter of convenience, we use both forms throughout the text.)

The following classical theorem can be very useful in hand calculation. It can be proved easily from the Jordan canonical form to be discussed in the text to follow (see, for example, [21]) or directly using elementary properties of inverses and determinants (see, for example, [3]).

Theorem 9.3 (Cayley-Hamilton). For any A ∈ C^{n×n}, π(A) = 0.

Example 9.4. Let A = [...]. Then π(λ) = λ² + 2λ − 3. It is an easy exercise to verify that π(A) = A² + 2A − 3I = 0.
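Theorem 9.3 and Example 9.4 are easy to check in floating point. Since the 2 × 2 matrix of Example 9.4 did not survive the scan, the sketch below uses an assumed stand-in with the same characteristic polynomial π(λ) = λ² + 2λ − 3 (its companion matrix); the check itself does not depend on which such matrix is used.

    import numpy as np

    # Assumed stand-in matrix with characteristic polynomial lambda^2 + 2*lambda - 3.
    A = np.array([[0.0,  1.0],
                  [3.0, -2.0]])

    print(np.poly(A))                               # [ 1.  2. -3.]  (coefficients of pi)
    pi_of_A = A @ A + 2.0 * A - 3.0 * np.eye(2)     # Cayley-Hamilton: pi(A)
    print(np.allclose(pi_of_A, 0.0))                # True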
It can be proved from elementary properties of determinants that if A ∈ C^{n×n}, then π(λ) is a polynomial of degree n. Thus, the Fundamental Theorem of Algebra says that
π(λ) has n roots, possibly repeated. These roots, as solutions of the determinant equation

    π(λ) = det(A − λI) = 0,                                       (9.3)

are the eigenvalues of A and imply the singularity of the matrix A − λI, and hence further guarantee the existence of corresponding nonzero eigenvectors.

Definition 9.5. The spectrum of A ∈ C^{n×n} is the set of all eigenvalues of A, i.e., the set of all roots of its characteristic polynomial π(λ). The spectrum of A is denoted Λ(A).

Let the eigenvalues of A ∈ C^{n×n} be denoted λ1, ..., λn. Then if we write (9.3) in the form

    π(λ) = det(A − λI) = (λ1 − λ) ··· (λn − λ)                    (9.4)

and set λ = 0 in this identity, we get the interesting fact that det(A) = λ1 · λ2 ··· λn (see also Theorem 9.25).

If A ∈ R^{n×n}, then π(λ) has real coefficients. Hence the roots of π(λ), i.e., the eigenvalues of A, must occur in complex conjugate pairs.

Example 9.6. Let α, β ∈ R and let A = [α β; −β α]. Then π(λ) = λ² − 2αλ + α² + β² and A has eigenvalues α ± βj (where j = i = √−1).

If A ∈ R^{n×n}, then there is an easily checked relationship between the left and right eigenvectors of A and A^T (take Hermitian transposes of both sides of (9.2)). Specifically, if y is a left eigenvector of A corresponding to λ ∈ Λ(A), then y is a right eigenvector of A^T corresponding to λ̄ ∈ Λ(A). Note, too, that by elementary properties of the determinant, we always have Λ(A) = Λ(A^T), but that Λ(A) = Λ(A^H) only if A ∈ R^{n×n}.

Definition 9.7. If λ is a root of multiplicity m of π(λ), we say that λ is an eigenvalue of A of algebraic multiplicity m. The geometric multiplicity of λ is the number of associated independent eigenvectors = n − rank(A − λI) = dim N(A − λI).

If λ ∈ Λ(A) has algebraic multiplicity m, then 1 ≤ dim N(A − λI) ≤ m. Thus, if we denote the geometric multiplicity of λ by g, then we must have 1 ≤ g ≤ m.

Definition 9.8. A matrix A ∈ R^{n×n} is said to be defective if it has an eigenvalue whose geometric multiplicity is not equal to (i.e., less than) its algebraic multiplicity. Equivalently, A is said to be defective if it does not have n linearly independent (right or left) eigenvectors.

From the Cayley-Hamilton Theorem, we know that π(A) = 0. However, it is possible for A to satisfy a lower-order polynomial. For example, if A = [1 0; 0 1], then A satisfies (λ − 1)² = 0 (i.e., (A − I)² = 0). But it also clearly satisfies the smaller degree polynomial equation (λ − 1) = 0.

Definition 9.9. The minimal polynomial of A ∈ R^{n×n} is the polynomial α(λ) of least degree such that α(A) = 0.

It can be shown that α(λ) is essentially unique (unique if we force the coefficient of the highest power of λ to be +1, say; such a polynomial is said to be monic and we generally write α(λ) as a monic polynomial throughout the text). Moreover, it can also be
shown that α(λ) divides every nonzero polynomial β(λ) for which β(A) = 0. In particular, α(λ) divides π(λ).

There is an algorithm to determine α(λ) directly (without knowing eigenvalues and associated eigenvector structure). Unfortunately, this algorithm, called the Bezout algorithm, is numerically unstable.

Example 9.10. The above definitions are illustrated below for a series of matrices, each of which has an eigenvalue 2 of algebraic multiplicity 4, i.e., π(λ) = (λ − 2)⁴. We denote the geometric multiplicity by g.

    A = [ 2 1 0 0 ]
        [ 0 2 1 0 ]   has α(λ) = (λ − 2)⁴ and g = 1.
        [ 0 0 2 1 ]
        [ 0 0 0 2 ]

    A = [ 2 1 0 0 ]
        [ 0 2 1 0 ]   has α(λ) = (λ − 2)³ and g = 2.
        [ 0 0 2 0 ]
        [ 0 0 0 2 ]

    A = [ 2 1 0 0 ]
        [ 0 2 0 0 ]   has α(λ) = (λ − 2)² and g = 3.
        [ 0 0 2 0 ]
        [ 0 0 0 2 ]

    A = [ 2 0 0 0 ]
        [ 0 2 0 0 ]   has α(λ) = (λ − 2) and g = 4.
        [ 0 0 2 0 ]
        [ 0 0 0 2 ]

At this point, one might speculate that g plus the degree of α must always be five. Unfortunately, such is not the case. The matrix

    A = [ 2 1 0 0 ]
        [ 0 2 0 0 ]
        [ 0 0 2 1 ]
        [ 0 0 0 2 ]

has α(λ) = (λ − 2)² and g = 2.
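The quantities in Example 9.10 can be confirmed numerically: the geometric multiplicity is g = 4 − rank(A − 2I), and the degree of α(λ) is the smallest k with (A − 2I)^k = 0, i.e., the size of the largest Jordan block. The sketch below (my own, using NumPy) rebuilds the five matrices from their block sizes as reconstructed above.

    import numpy as np

    def jordan_blocks(lmbda, sizes):
        # Block-diagonal matrix of Jordan blocks of the given sizes for eigenvalue lmbda.
        n = sum(sizes)
        J = np.zeros((n, n))
        p = 0
        for k in sizes:
            J[p:p + k, p:p + k] = lmbda * np.eye(k) + np.diag(np.ones(k - 1), 1)
            p += k
        return J

    # Block sizes consistent with the values of alpha(lambda) and g listed above.
    for sizes in [(4,), (3, 1), (2, 1, 1), (1, 1, 1, 1), (2, 2)]:
        A = jordan_blocks(2.0, sizes)
        N = A - 2.0 * np.eye(4)
        g = 4 - np.linalg.matrix_rank(N)
        deg_alpha = next(k for k in range(1, 5)
                         if np.allclose(np.linalg.matrix_power(N, k), 0.0))
        print(sizes, "g =", g, " deg(alpha) =", deg_alpha)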
Theorem 9.11. Let A ∈ C^{n×n} and let λi be an eigenvalue of A with corresponding right eigenvector xi. Furthermore, let yj be a left eigenvector corresponding to any λj ∈ Λ(A) such that λj ≠ λi. Then yj^H xi = 0.

Proof: Since Axi = λi xi,

    yj^H A xi = λi yj^H xi.                                       (9.5)
Similarly, since yj^H A = λj yj^H,

    yj^H A xi = λj yj^H xi.                                       (9.6)

Subtracting (9.6) from (9.5), we find 0 = (λi − λj) yj^H xi. Since λi − λj ≠ 0, we must have yj^H xi = 0.  □

The proof of Theorem 9.11 is very similar to two other fundamental and important results.

Theorem 9.12. Let A ∈ C^{n×n} be Hermitian, i.e., A = A^H. Then all eigenvalues of A must be real.

Proof: Suppose (λ, x) is an arbitrary eigenvalue/eigenvector pair such that Ax = λx. Then

    x^H A x = λ x^H x.                                            (9.7)

Taking Hermitian transposes in (9.7) yields

    x^H A^H x = λ̄ x^H x.

Using the fact that A is Hermitian, we have that λ̄ x^H x = λ x^H x. However, since x is an eigenvector, we have x^H x ≠ 0, from which we conclude λ̄ = λ, i.e., λ is real.  □

Theorem 9.13. Let A ∈ C^{n×n} be Hermitian and suppose λ and μ are distinct eigenvalues of A with corresponding right eigenvectors x and z, respectively. Then x and z must be orthogonal.

Proof: Premultiply the equation Ax = λx by z^H to get z^H A x = λ z^H x. Take the Hermitian transpose of this equation and use the facts that A is Hermitian and λ is real to get x^H A z = λ x^H z. Premultiply the equation Az = μz by x^H to get x^H A z = μ x^H z = λ x^H z. Since λ ≠ μ, we must have that x^H z = 0, i.e., the two vectors must be orthogonal.  □
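Theorems 9.12 and 9.13 are easy to observe numerically. The sketch below (my own, using NumPy) builds a random Hermitian matrix; its computed eigenvalues have negligible imaginary parts, and eigenvectors belonging to distinct eigenvalues are orthogonal.

    import numpy as np

    rng = np.random.default_rng(1)
    B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
    A = (B + B.conj().T) / 2.0                        # Hermitian by construction

    lam, V = np.linalg.eig(A)                         # general solver, no symmetry assumed
    print(np.allclose(lam.imag, 0.0, atol=1e-12))     # eigenvalues are (numerically) real
    print(abs(np.vdot(V[:, 0], V[:, 1])) < 1e-12)     # eigenvectors of distinct eigenvalues are orthogonal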
Let us now return to the general case.

Theorem 9.14. Let A ∈ C^{n×n} have distinct eigenvalues λ1, ..., λn with corresponding right eigenvectors x1, ..., xn. Then {x1, ..., xn} is a linearly independent set. The same result holds for the corresponding left eigenvectors.

Proof: For the proof see, for example, [21, p. 118].  □

If A ∈ C^{n×n} has distinct eigenvalues, and if λi ∈ Λ(A), then by Theorem 9.11, xi is orthogonal to all yj's for which j ≠ i. However, it cannot be the case that yi^H xi = 0 as well, or else xi would be orthogonal to n linearly independent vectors (by Theorem 9.14) and would thus have to be 0, contradicting the fact that it is an eigenvector. Since yi^H xi ≠ 0 for each i, we can choose the normalization of the xi's, or the yi's, or both, so that yi^H xi = 1 for i ∈ n.
Theorem 9.15. Let A ∈ C^{n×n} have distinct eigenvalues λ1, ..., λn and let the corresponding right eigenvectors form a matrix X = [x1, ..., xn]. Similarly, let Y = [y1, ..., yn] be the matrix of corresponding left eigenvectors. Furthermore, suppose that the left and right eigenvectors have been normalized so that yi^H xi = 1, i ∈ n. Finally, let Λ = diag(λ1, ..., λn) ∈ C^{n×n}. Then Axi = λi xi, i ∈ n, can be written in matrix form as

    AX = XΛ                                                       (9.8)

while yi^H xj = δij, i ∈ n, j ∈ n, is expressed by the equation

    Y^H X = I.                                                    (9.9)

These matrix equations can be combined to yield the following matrix factorizations:

    X⁻¹AX = Λ = Y^H AX                                            (9.10)

and

    A = XΛX⁻¹ = XΛY^H = Σ_{i=1}^{n} λi xi yi^H.                   (9.11)

Example 9.16. Let

    A = [...].

Then π(λ) = det(A − λI) = −(λ³ + 4λ² + 9λ + 10) = −(λ + 2)(λ² + 2λ + 5), from which we find Λ(A) = {−2, −1 ± 2j}. We can now find the right and left eigenvectors corresponding to these eigenvalues.

For λ1 = −2, solve the 3 × 3 linear system (A − (−2)I)x1 = 0 to get

    x1 = [...].

Note that one component of x1 can be set arbitrarily, and this then determines the other two (since dim N(A − (−2)I) = 1). To get the corresponding left eigenvector y1, solve the linear system y1^H (A + 2I) = 0 to get

    y1 = [...].

This time we have chosen the arbitrary scale factor for y1 so that y1^H x1 = 1.

For λ2 = −1 + 2j, solve the linear system (A − (−1 + 2j)I)x2 = 0 to get

    x2 = [3 + j, 3 − j, 2]^T.
Solve the linear system y2^H (A − (−1 + 2j)I) = 0 and normalize y2 so that y2^H x2 = 1 to get

    y2 = [...].

For λ3 = −1 − 2j, we could proceed to solve linear systems as for λ2. However, we can also note that x3 = x̄2 and y3 = ȳ2. To see this, use the fact that λ3 = λ̄2 and simply conjugate the equation Ax2 = λ2 x2 to get Ax̄2 = λ̄2 x̄2. A similar argument yields the result for left eigenvectors.

Now define the matrix X of right eigenvectors:

    X = [x1  x2  x3].

It is then easy to verify that X⁻¹ = Y^H. Other results in Theorem 9.15 can also be verified. For example,

    X⁻¹AX = Λ = diag(−2, −1 + 2j, −1 − 2j).

Finally, note that we could have solved directly only for x1 and x2 (and x3 = x̄2). Then, instead of determining the yi's directly, we could have found them instead by computing X⁻¹ and reading off its rows.
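The factorizations (9.10) and (9.11) can be checked numerically for any matrix with distinct eigenvalues. Since the 3 × 3 matrix of Example 9.16 did not survive the scan, the sketch below (my own, using NumPy) uses a random matrix and follows the remark above: the rows of X⁻¹ serve as the (normalized) left eigenvectors yi^H.

    import numpy as np

    rng = np.random.default_rng(2)
    A = rng.standard_normal((3, 3))                    # generic matrix, distinct eigenvalues

    lam, X = np.linalg.eig(A)                          # columns of X: right eigenvectors
    YH = np.linalg.inv(X)                              # rows of X^{-1}: the y_i^H

    print(np.allclose(YH @ A @ X, np.diag(lam)))       # (9.10): X^{-1} A X = Lambda
    A_dyadic = sum(lam[i] * np.outer(X[:, i], YH[i, :]) for i in range(3))
    print(np.allclose(A_dyadic, A))                    # (9.11): A = sum_i lambda_i x_i y_i^H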
Example 9.17. Let

    A = [...].

Then π(λ) = det(A − λI) = −(λ³ + 8λ² + 19λ + 12) = −(λ + 1)(λ + 3)(λ + 4), from which we find Λ(A) = {−1, −3, −4}. Proceeding as in the previous example, it is straightforward to compute

    X = [...]   and   X⁻¹ = Y^H = [...].
We also have X⁻¹AX = Λ = diag(−1, −3, −4), which is equivalent to the dyadic expansion

    A = Σ_{i=1}^{3} λi xi yi^H = (−1) x1 y1^H + (−3) x2 y2^H + (−4) x3 y3^H.
Theorem 9.18. Eigenvalues (but not eigenvectors) are invariant under a similarity transformation T.

Proof: Suppose (λ, x) is an eigenvalue/eigenvector pair such that Ax = λx. Then, since T is nonsingular, we have the equivalent statement (T⁻¹AT)(T⁻¹x) = λ(T⁻¹x), from which the theorem statement follows. For left eigenvectors we have a similar statement, namely y^H A = λ y^H if and only if (T^H y)^H (T⁻¹AT) = λ (T^H y)^H.  □

Remark 9.19. If f is an analytic function (e.g., f(x) is a polynomial, or e^x, or sin x, or, in general, representable by a power series Σ_{n=0}^{∞} an x^n), then it is easy to show that the eigenvalues of f(A) (defined as Σ_{n=0}^{∞} an A^n) are f(λ), but f(A) does not necessarily have all the same eigenvectors (unless, say, A is diagonalizable). For example, A = [0 1; 0 0] has only one right eigenvector corresponding to the eigenvalue 0, but A² = [0 0; 0 0] has two independent right eigenvectors associated with the eigenvalue 0. What is true is that the eigenvalue/eigenvector pair (λ, x) maps to (f(λ), x) but not conversely.
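The 2 × 2 example in Remark 9.19 can be checked directly: counting eigenvectors for the eigenvalue 0 amounts to computing n − rank. A minimal NumPy sketch:

    import numpy as np

    A = np.array([[0.0, 1.0],
                  [0.0, 0.0]])

    print(2 - np.linalg.matrix_rank(A))        # 1 eigenvector for the eigenvalue 0 of A
    print(2 - np.linalg.matrix_rank(A @ A))    # 2 eigenvectors for the eigenvalue 0 of A^2 = 0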
The following theorem is useful when solving systems of linear differential equations. Details of how the matrix exponential e^{tA} is used to solve the system ẋ = Ax are the subject of Chapter 11.

Theorem 9.20. Let A ∈ R^{n×n} and suppose X⁻¹AX = Λ, where Λ is diagonal. Then

    e^{tA} = Σ_{i=1}^{n} e^{λi t} xi yi^H.
Proof: Starting from the definition, we have

    e^{tA} = Σ_{k=0}^{∞} (tA)^k / k! = Σ_{k=0}^{∞} X (tΛ)^k X⁻¹ / k!   (since A^k = XΛ^k X⁻¹)
           = X e^{tΛ} X⁻¹ = X e^{tΛ} Y^H = Σ_{i=1}^{n} e^{λi t} xi yi^H.  □

The following corollary is immediate from the theorem upon setting t = 1.

Corollary 9.21. If A ∈ R^{n×n} is diagonalizable with eigenvalues λi, i ∈ n, and right eigenvectors xi, i ∈ n, then e^A has eigenvalues e^{λi}, i ∈ n, and the same eigenvectors.

There are extensions to Theorem 9.20 and Corollary 9.21 for any function that is analytic on the spectrum of A, i.e., f(A) = X f(Λ) X⁻¹ = X diag(f(λ1), ..., f(λn)) X⁻¹.
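For a diagonalizable A, the formula of Theorem 9.20 can be compared against a library matrix exponential. The sketch below is my own illustration and assumes SciPy's expm as the reference; a random real matrix is diagonalizable with probability one.

    import numpy as np
    from scipy.linalg import expm

    rng = np.random.default_rng(3)
    A = rng.standard_normal((4, 4))
    t = 0.7

    lam, X = np.linalg.eig(A)
    etA = X @ np.diag(np.exp(t * lam)) @ np.linalg.inv(X)   # X e^{t Lambda} X^{-1}
    print(np.allclose(etA, expm(t * A)))                    # True, up to rounding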
It is desirable, of course, to have a version of Theorem 9.20 and its corollary in which A is not necessarily diagonalizable. It is necessary first to consider the notion of Jordan canonical form, from which such a result is then available and presented later in this chapter.

9.2 Jordan Canonical Form

Theorem 9.22.

1. Jordan Canonical Form (JCF): For all A ∈ C^{n×n} with eigenvalues λ1, ..., λn ∈ C (not necessarily distinct), there exists X ∈ C_n^{n×n} such that

       X⁻¹AX = J = diag(J1, ..., Jq),                             (9.12)

   where each of the Jordan block matrices J1, ..., Jq is of the form

       Ji = the ki × ki upper bidiagonal matrix with λi on the diagonal and 1's on the superdiagonal,      (9.13)
   and Σ_{i=1}^{q} ki = n.

2. Real Jordan Canonical Form: For all A ∈ R^{n×n} with eigenvalues λ1, ..., λn (not necessarily distinct), there exists X ∈ R_n^{n×n} such that

       X⁻¹AX = J = diag(J1, ..., Jq),                             (9.14)

   where each of the Jordan block matrices J1, ..., Jq is of the form (9.13) in the case of real eigenvalues λi ∈ Λ(A), and is the corresponding block bidiagonal matrix with Mi on the (block) diagonal and I2 on the (block) superdiagonal, where

       Mi = [ αi  βi ]   and   I2 = [ 1 0 ]
            [ −βi αi ]              [ 0 1 ],

   in the case of complex conjugate eigenvalues αi ± jβi ∈ Λ(A).

Proof: For the proof see, for example, [21, pp. 120-124].  □

Transformations like T = [...] allow us to go back and forth between a real JCF and its complex counterpart:

    T⁻¹ [ α + jβ      0   ] T = [  α  β ] = M.
        [    0     α − jβ ]     [ −β  α ]
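The 2 × 2 transformation T did not survive the scan, but the identity above is easy to verify with any valid choice. The sketch below (my own, using NumPy) uses T = [1 −j; 1 j], which is one such choice, together with sample values of α and β.

    import numpy as np

    alpha, beta = 1.5, 0.4                          # sample values (my choice)
    D = np.diag([alpha + 1j * beta, alpha - 1j * beta])
    T = np.array([[1.0, -1.0j],
                  [1.0,  1.0j]])                    # one valid choice of T

    M = np.linalg.inv(T) @ D @ T
    print(np.allclose(M, [[alpha, beta], [-beta, alpha]]))   # True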
For nontrivial Jordan blocks, the situation is only a bit more complicated. With a 4 × 4 matrix T built analogously from 1's and ±j's,
it is easily checked that

    T⁻¹ [ α+jβ    1      0      0    ] T = [ M   I2 ]
        [   0   α+jβ     0      0    ]     [ 0   M  ].
        [   0     0    α−jβ     1    ]
        [   0     0      0    α−jβ   ]

Definition 9.23. The characteristic polynomials of the Jordan blocks defined in Theorem 9.22 are called the elementary divisors or invariant factors of A.

Theorem 9.24. The characteristic polynomial of a matrix is the product of its elementary divisors. The minimal polynomial of a matrix is the product of the elementary divisors of highest degree corresponding to distinct eigenvalues.

Theorem 9.25. Let A ∈ C^{n×n} with eigenvalues λ1, ..., λn. Then

1. det(A) = Π_{i=1}^{n} λi.

2. Tr(A) = Σ_{i=1}^{n} λi.

Proof:

1. From Theorem 9.22 we have that A = XJX⁻¹. Thus,

       det(A) = det(XJX⁻¹) = det(J) = Π_{i=1}^{n} λi.

2. Again, from Theorem 9.22 we have that A = XJX⁻¹. Thus,

       Tr(A) = Tr(XJX⁻¹) = Tr(JX⁻¹X) = Tr(J) = Σ_{i=1}^{n} λi.  □
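Theorem 9.25 is easy to observe numerically (my own sketch, using NumPy):

    import numpy as np

    rng = np.random.default_rng(4)
    A = rng.standard_normal((5, 5))
    lam = np.linalg.eigvals(A)

    print(np.isclose(np.prod(lam), np.linalg.det(A)))    # det(A) = product of eigenvalues
    print(np.isclose(np.sum(lam), np.trace(A)))          # Tr(A)  = sum of eigenvalues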
Example 9.26. Suppose A ∈ R^{7×7} is known to have π(λ) = (λ − 1)⁴(λ − 2)³ and α(λ) = (λ − 1)²(λ − 2)². Then A has two possible JCFs (not counting reorderings of the diagonal blocks):

    J⁽¹⁾ = diag(J2(1), J1(1), J1(1), J2(2), J1(2))   and   J⁽²⁾ = diag(J2(1), J2(1), J2(2), J1(2)),

where Jk(λ) denotes the k × k Jordan block with eigenvalue λ. Note that J⁽¹⁾ has elementary divisors (λ − 1)², (λ − 1), (λ − 1), (λ − 2)², and (λ − 2), while J⁽²⁾ has elementary divisors (λ − 1)², (λ − 1)², (λ − 2)², and (λ − 2).
9.3. Determination of the JCF &5
Example 9.27. Knowing T T (A.), a ( A ) , and rank (A — A,,7) for distinct A., is not sufficient to
determine the JCF of A uniquely. T he matrices
both have 7r( A . ) = (A. — a) , a( A . ) = (A. — a) , and rank( A — al) = 4, i.e., three eigen
vectors.
9.3 Determination of the JCF
T he first critical item of information in determining the JCF of a matrix A e W
lxn
is its
number of eigenvectors. For each distinct eigenvalue A , , , the associated number of linearly
independent right (or left) eigenvectors is given by dim A^(A — A.,7) = n — rank( A — A.
(
7).
T he straightforward case is, of course, when X, is simple, i.e., of algebraic multiplicity 1; it
then has precisely one eigenvector. T he more interesting (and difficult) case occurs when
A, is of algebraic multiplicity greater than one. For example, suppose
T hen
has rank 1, so the eigenvalue 3 has two eigenvectors associated with it. If we let [^i £2 &]
T
denote a solution to the linear system (A — 3/) £ = 0, we find that 2£
2
+ £3=0. T hus, both
are eigenvectors (and are independent). T o get a third vector JC3 such that X = [x\ KJ_ XT,]
reduces A to JCF, we need the notion of principal vector.
Definition 9.28. Let A e C"
xn
(or R"
x
") . Then x is a right principal vector of degree k
associated with X e A (A) if and only if (A  XI)
k
x = 0 and (A  U}
k
~
l
x ^ 0.
Remark 9.29.
1. An analogous definition holds for a left principal vector of degree k.
Example 9.27. Knowing π(λ), α(λ), and rank(A − λiI) for distinct λi is not sufficient to determine the JCF of A uniquely. The matrices

    A1 = diag(J3(a), J3(a), J1(a))   and   A2 = diag(J3(a), J2(a), J2(a))

both have π(λ) = (λ − a)⁷, α(λ) = (λ − a)³, and rank(A − aI) = 4, i.e., three eigenvectors.

9.3 Determination of the JCF

The first critical item of information in determining the JCF of a matrix A ∈ R^{n×n} is its number of eigenvectors. For each distinct eigenvalue λi, the associated number of linearly independent right (or left) eigenvectors is given by dim N(A − λiI) = n − rank(A − λiI). The straightforward case is, of course, when λi is simple, i.e., of algebraic multiplicity 1; it then has precisely one eigenvector. The more interesting (and difficult) case occurs when λi is of algebraic multiplicity greater than one. For example, suppose

    A = [ 3 2 1 ]
        [ 0 3 0 ]
        [ 0 0 3 ].

Then

    A − 3I = [ 0 2 1 ]
             [ 0 0 0 ]
             [ 0 0 0 ]

has rank 1, so the eigenvalue 3 has two eigenvectors associated with it. If we let [ξ1 ξ2 ξ3]^T denote a solution to the linear system (A − 3I)ξ = 0, we find that 2ξ2 + ξ3 = 0. Thus, both

    x1 = [...]   and   x2 = [...]

are eigenvectors (and are independent). To get a third vector x3 such that X = [x1 x2 x3] reduces A to JCF, we need the notion of principal vector.
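The eigenvector count for this example can be verified numerically. The sketch below (my own, using NumPy) uses the matrix as reconstructed above; any vector in the computed null space basis satisfies 2ξ2 + ξ3 = 0, as noted in the text.

    import numpy as np

    A = np.array([[3.0, 2.0, 1.0],
                  [0.0, 3.0, 0.0],
                  [0.0, 0.0, 3.0]])              # entries as reconstructed above

    N = A - 3.0 * np.eye(3)
    print(np.linalg.matrix_rank(N))              # 1, so the eigenvalue 3 has 3 - 1 = 2 eigenvectors

    # Orthonormal basis for N(A - 3I), i.e., the eigenvectors of A for lambda = 3:
    _, s, Vt = np.linalg.svd(N)
    print(Vt[np.isclose(s, 0.0, atol=1e-12), :])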
Definition 9.28. Let A ∈ C^{n×n} (or R^{n×n}). Then x is a right principal vector of degree k associated with λ ∈ Λ(A) if and only if (A − λI)^k x = 0 and (A − λI)^{k−1} x ≠ 0.

Remark 9.29.

1. An analogous definition holds for a left principal vector of degree k.
2. The phrase "of grade k" is often used synonymously with "of degree k."

3. Principal vectors are sometimes also called generalized eigenvectors, but the latter term will be assigned a much different meaning in Chapter 12.

4. The case k = 1 corresponds to the "usual" eigenvector.

5. A right (or left) principal vector of degree k is associated with a Jordan block Ji of dimension k or larger.

9.3.1 Theoretical computation

To motivate the development of a procedure for determining principal vectors, consider a 2 × 2 Jordan block [λ 1; 0 λ]. Denote by x^(1) and x^(2) the two columns of a matrix X ∈ R_2^{2×2} that reduces a matrix A to this JCF. Then the equation AX = XJ can be written

    A [x^(1)  x^(2)] = [x^(1)  x^(2)] [ λ 1 ]
                                      [ 0 λ ].

The first column yields the equation Ax^(1) = λx^(1), which simply says that x^(1) is a right eigenvector. The second column yields the following equation for x^(2), the principal vector of degree 2:

    (A − λI)x^(2) = x^(1).                                        (9.17)

If we premultiply (9.17) by (A − λI), we find (A − λI)² x^(2) = (A − λI)x^(1) = 0. Thus, the definition of principal vector is satisfied.

This suggests a "general" procedure. First, determine all eigenvalues of A ∈ R^{n×n} (or C^{n×n}). Then for each distinct λ ∈ Λ(A) perform the following:

1. Solve

       (A − λI)x^(1) = 0.

   This step finds all the eigenvectors (i.e., principal vectors of degree 1) associated with λ. The number of eigenvectors depends on the rank of A − λI. For example, if rank(A − λI) = n − 1, there is only one eigenvector. If the algebraic multiplicity of λ is greater than its geometric multiplicity, principal vectors still need to be computed from succeeding steps.

2. For each independent x^(1), solve

       (A − λI)x^(2) = x^(1).

   The number of linearly independent solutions at this step depends on the rank of (A − λI)². If, for example, this rank is n − 2, there are two linearly independent solutions to the homogeneous equation (A − λI)² x^(2) = 0. One of these solutions is, of course, x^(1) (≠ 0), since (A − λI)² x^(1) = (A − λI)0 = 0. The other solution is the desired principal vector of degree 2. (It may be necessary to take a linear combination of x^(1) vectors to get a right-hand side that is in R(A − λI). See, for example, Exercise 7.)
3. For each independent x^(2) from step 2, solve

       (A − λI)x^(3) = x^(2).

4. Continue in this way until the total number of independent eigenvectors and principal vectors is equal to the algebraic multiplicity of λ.

Unfortunately, this natural-looking procedure can fail to find all Jordan vectors. For more extensive treatments, see, for example, [20] and [21]. Determination of eigenvectors and principal vectors is obviously very tedious for anything beyond simple problems (n = 2 or 3, say). Attempts to do such calculations in finite-precision floating-point arithmetic generally prove unreliable. There are significant numerical difficulties inherent in attempting to compute a JCF, and the interested student is strongly urged to consult the classical and very readable [8] to learn why. Notice that high-quality mathematical software such as MATLAB does not offer a jcf command, although a jordan command is available in MATLAB's Symbolic Toolbox.
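In Python, SymPy plays a role similar to the Symbolic Toolbox here: it computes a JCF in exact arithmetic. A small sketch with an assumed test matrix of my own choosing; I believe recent SymPy versions return (P, J) with A = P J P⁻¹, but treat the exact call signature as an assumption to check against your installed version.

    from sympy import Matrix

    A = Matrix([[2, 1, 1],
                [0, 2, 1],
                [0, 0, 3]])          # assumed test matrix: defective eigenvalue 2, simple eigenvalue 3

    P, J = A.jordan_form()           # J is the JCF; P collects eigenvectors and principal vectors
    print(J)                         # a 2 x 2 Jordan block for 2 and a 1 x 1 block for 3 (block order may vary)
    print(A == P * J * P.inv())      # True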
Theorem 9.30. Suppose A ∈ C^{k×k} has an eigenvalue λ of algebraic multiplicity k and suppose further that rank(A − λI) = k − 1. Let X = [x^(1), ..., x^(k)], where the chain of vectors x^(i) is constructed as above. Then X⁻¹AX is the k × k Jordan block (9.13) with eigenvalue λ.

Theorem 9.31. {x^(1), ..., x^(k)} is a linearly independent set.

Theorem 9.32. Principal vectors associated with different Jordan blocks are linearly independent.

Example 9.33. Let

    A = [...].

The eigenvalues of A are λ1 = 1, λ2 = 1, and λ3 = 2. First, find the eigenvectors associated with the distinct eigenvalues 1 and 2.

(A − 2I)x3^(1) = 0 yields

    x3^(1) = [...].
(A − 1I)x1^(1) = 0 yields

    x1^(1) = [...].

To find a principal vector of degree 2 associated with the multiple eigenvalue 1, solve (A − 1I)x1^(2) = x1^(1) to get

    x1^(2) = [...].

Now let

    X = [x1^(1)  x1^(2)  x3^(1)] = [...].

Then it is easy to check that

    X⁻¹ = [...]   and   X⁻¹AX = [ 1 1 0 ]
                                [ 0 1 0 ]
                                [ 0 0 2 ].

9.3.2 On the +1's in JCF blocks

In this subsection we show that the nonzero superdiagonal elements of a JCF need not be 1's but can be arbitrary, so long as they are nonzero. For the sake of definiteness, we consider below the case of a single Jordan block, but the result clearly holds for any JCF. Suppose A ∈ R^{n×n} and X⁻¹AX = J, a single n × n Jordan block with eigenvalue λ. Let D = diag(d1, ..., dn) be a nonsingular "scaling" matrix. Then

    D⁻¹(X⁻¹AX)D = D⁻¹JD = Ĵ,

where Ĵ again has λ on the diagonal but has superdiagonal entries d2/d1, d3/d2, ..., dn/dn−1.
)...
9.4. Geometric Aspects of the JCF 89
Appropriate choice of the di 's then yields any desired nonzero superdiagonal elements.
This result can also be interpreted in terms of the matrix X = [x\,..., x
n
] of eigenvectors
and principal vectors that reduces A to its JCF. Specifically, J is obtained from A via the
similarity transformation XD = \d\x\,..., d
n
x
n
}.
In a similar fashion, the reverseorder identity matrix (or exchange matrix)
9.4 Geometric Aspects of the JCF
Note that di mM( A — A.,/ )
w
= «,.
Definition 9.35. Let V be a vector space over F and suppose A : V —>• V is a linear
transformation. A subspace S c V is Ainvariant if AS c S, where AS is defined as the
set {As : s e S}.
can be used to put the superdiagonal elements in the subdiagonal instead if that is desired:
The matrix X that reduces a matrix A e IR"
X
" (or C
nxn
) to a JCF provides a change of basis
with respect to which the matrix is diagonal or block diagonal. It is thus natural to expect an
associated direct sum decomposition of R. Such a decomposition is given in the following
theorem.
Theorem 9.34. Suppose A e R"
x
" has characteristic polynomial
and minimal polynomial
with A i , . . . , A.
m
distinct. Then
Appropriate choice of the di's then yields any desired nonzero superdiagonal elements. This result can also be interpreted in terms of the matrix X = [x1, ..., xn] of eigenvectors and principal vectors that reduces A to its JCF. Specifically, Ĵ is obtained from A via the similarity transformation XD = [d1x1, ..., dnxn].
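A small numerical confirmation of the scaling argument (my own sketch, using NumPy): conjugating a single Jordan block by D = diag(d1, ..., dn) replaces the unit superdiagonal by d2/d1, d3/d2, and so on.

    import numpy as np

    lam, n = 2.0, 4
    J = lam * np.eye(n) + np.diag(np.ones(n - 1), 1)     # single Jordan block
    d = np.array([1.0, 2.0, 6.0, 24.0])                  # any nonzero scalings (my choice)
    D = np.diag(d)

    Jhat = np.linalg.inv(D) @ J @ D
    print(np.diag(Jhat, 1))                              # [2. 3. 4.] = d2/d1, d3/d2, d4/d3
    print(np.allclose(np.diag(Jhat), lam))               # diagonal unchanged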
In a similar fashion, the reverse-order identity matrix (or exchange matrix)

    P = P^T = P⁻¹ = [ 0 ··· 0 1 ]
                    [ 0 ··· 1 0 ]
                    [ ⋮        ⋮ ]                                (9.18)
                    [ 1 0 ··· 0 ]

can be used to put the superdiagonal elements in the subdiagonal instead if that is desired: for a Jordan block J, the matrix PJP has λ on the diagonal and 1's on the subdiagonal rather than the superdiagonal.

9.4 Geometric Aspects of the JCF

The matrix X that reduces a matrix A ∈ R^{n×n} (or C^{n×n}) to a JCF provides a change of basis with respect to which the matrix is diagonal or block diagonal. It is thus natural to expect an associated direct sum decomposition of R^n. Such a decomposition is given in the following theorem.

Theorem 9.34. Suppose A ∈ R^{n×n} has characteristic polynomial

    π(λ) = (λ − λ1)^{n1} ··· (λ − λm)^{nm}

and minimal polynomial

    α(λ) = (λ − λ1)^{ν1} ··· (λ − λm)^{νm}

with λ1, ..., λm distinct. Then

    R^n = N(A − λ1I)^{n1} ⊕ ··· ⊕ N(A − λmI)^{nm}
        = N(A − λ1I)^{ν1} ⊕ ··· ⊕ N(A − λmI)^{νm}.

Note that dim N(A − λiI)^{νi} = ni.

Definition 9.35. Let V be a vector space over F and suppose A : V → V is a linear transformation. A subspace S ⊆ V is A-invariant if AS ⊆ S, where AS is defined as the set {As : s ∈ S}.
If V is taken to be R^n over R, and S ∈ R^{n×k} is a matrix whose columns s1, ..., sk span a k-dimensional subspace S, i.e., R(S) = S, then S is A-invariant if and only if there exists M ∈ R^{k×k} such that

    AS = SM.                                                      (9.19)

This follows easily by comparing the ith columns of each side of (9.19): Asi = Smi ∈ S, where mi denotes the ith column of M.

Example 9.36. The equation Ax = λx = xλ defining a right eigenvector x of an eigenvalue λ says that x spans an A-invariant subspace (of dimension one).

Example 9.37. Suppose X block diagonalizes A, i.e.,

    X⁻¹AX = [ J1  0  ]
            [ 0   J2 ].

Rewriting in the form

    A [X1  X2] = [X1  X2] [ J1  0  ]
                          [ 0   J2 ],

we have that AXi = XiJi, i = 1, 2, so the columns of Xi span an A-invariant subspace.

Theorem 9.38. Suppose A ∈ R^{n×n}.

1. Let p(A) = α0 I + α1 A + ··· + αq A^q be a polynomial in A. Then N(p(A)) and R(p(A)) are A-invariant.

2. S is A-invariant if and only if S^⊥ is A^T-invariant.

Theorem 9.39. If V is a vector space over F such that V = N1 ⊕ ··· ⊕ Nm, where each Ni is A-invariant, then a basis for V can be chosen with respect to which A has a block diagonal representation.

The Jordan canonical form is a special case of the above theorem. If A has distinct eigenvalues λi as in Theorem 9.34, we could choose bases for N(A − λiI)^{ni} by SVD, for example (note that the power ni could be replaced by νi). We would then get a block diagonal representation for A with full blocks rather than the highly structured Jordan blocks. Other such "canonical" forms are discussed in text that follows.

Suppose X = [X1, ..., Xm] ∈ R_n^{n×n} is such that X⁻¹AX = diag(J1, ..., Jm), where each Ji = diag(Ji1, ..., Jiki) and each Jik is a Jordan block corresponding to λi ∈ Λ(A). We could also use other block diagonal decompositions (e.g., via SVD), but we restrict our attention here to only the Jordan block case. Note that AXi = XiJi, so by (9.19) the columns of Xi (i.e., the eigenvectors and principal vectors associated with λi) span an A-invariant subspace of R^n.

Finally, we return to the problem of developing a formula for e^{tA} in the case that A is not necessarily diagonalizable. Let Yi ∈ C^{n×ni} be a Jordan basis for N(A^T − λiI)^{ni}. Equivalently, partition X⁻¹ = Y^H = [Y1, ..., Ym]^H
compatibly. Then

    A = XJX⁻¹ = XJY^H = [X1, ..., Xm] diag(J1, ..., Jm) [Y1, ..., Ym]^H = Σ_{i=1}^{m} Xi Ji Yi^H.

In a similar fashion we can compute

    e^{tA} = Σ_{i=1}^{m} Xi e^{tJi} Yi^H,

which is a useful formula when used in conjunction with the result that, for a k × k Jordan block Ji associated with an eigenvalue λ = λi, exp(tJi) is the upper triangular matrix with e^{λt} on the diagonal, t e^{λt} on the first superdiagonal, (t²/2!) e^{λt} on the second superdiagonal, and, in general, (t^j/j!) e^{λt} on the jth superdiagonal.

9.5 The Matrix Sign Function

In this section we give a very brief introduction to an interesting and useful matrix function called the matrix sign function. It is a generalization of the sign (or signum) of a scalar. A survey of the matrix sign function and some of its applications can be found in [15].

Definition 9.40. Let z ∈ C with Re(z) ≠ 0. Then the sign of z is defined by

    sgn(z) = Re(z)/|Re(z)| = { +1  if Re(z) > 0,
                               −1  if Re(z) < 0.

Definition 9.41. Suppose A ∈ C^{n×n} has no eigenvalues on the imaginary axis, and let

    X⁻¹AX = [ N  0 ]
            [ 0  P ]

be a Jordan canonical form for A, with N containing all Jordan blocks corresponding to the eigenvalues of A in the left half-plane and P containing all Jordan blocks corresponding to eigenvalues in the right half-plane. Then the sign of A, denoted sgn(A), is given by

    sgn(A) = X [ −I  0 ] X⁻¹,
               [  0  I ]
where the negative and positive identity matrices are of the same dimensions as N and P, respectively.

There are other equivalent definitions of the matrix sign function, but the one given here is especially useful in deriving many of its key properties. The JCF definition of the matrix sign function does not generally lend itself to reliable computation on a finite-word-length digital computer. In fact, its reliable numerical calculation is an interesting topic in its own right.
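One classical alternative for computing sgn(A) numerically is the Newton iteration S ← (S + S⁻¹)/2 started from S = A, which is discussed in the matrix sign function literature (e.g., the survey cited above); the sketch below is my own illustration of that iteration, not a method taken from this text, applied to a matrix built to have no imaginary-axis eigenvalues.

    import numpy as np

    rng = np.random.default_rng(5)
    lam = np.array([-3.0, -1.0, 2.0, 4.0])               # two eigenvalues in each half-plane
    X = rng.standard_normal((4, 4))
    A = X @ np.diag(lam) @ np.linalg.inv(X)

    S = A.copy()
    for _ in range(50):                                  # Newton iteration for sgn(A)
        S = 0.5 * (S + np.linalg.inv(S))

    print(np.allclose(S @ S, np.eye(4)))                 # S^2 = I
    print(np.allclose(A @ S, S @ A))                     # AS = SA
    print(np.round(np.linalg.eigvals(0.5 * (np.eye(4) - S)).real, 6))
    # (I - S)/2 has eigenvalues 1, 1, 0, 0: a projection onto the negative invariant subspace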
We state some of the more useful properties of the matrix sign function as theorems. Their straightforward proofs are left to the exercises.

Theorem 9.42. Suppose A ∈ C^{n×n} has no eigenvalues on the imaginary axis, and let S = sgn(A). Then the following hold:

1. S is diagonalizable with eigenvalues equal to ±1.

2. S² = I.

3. AS = SA.

4. sgn(A^H) = (sgn(A))^H.

5. sgn(T⁻¹AT) = T⁻¹ sgn(A) T for all nonsingular T ∈ C^{n×n}.

6. sgn(cA) = sgn(c) sgn(A) for all nonzero real scalars c.

Theorem 9.43. Suppose A ∈ C^{n×n} has no eigenvalues on the imaginary axis, and let S = sgn(A). Then the following hold:

1. R(S − I) is an A-invariant subspace corresponding to the left half-plane eigenvalues of A (the negative invariant subspace).

2. R(S + I) is an A-invariant subspace corresponding to the right half-plane eigenvalues of A (the positive invariant subspace).

3. negA = (I − S)/2 is a projection onto the negative invariant subspace of A.

4. posA = (I + S)/2 is a projection onto the positive invariant subspace of A.

EXERCISES

1. Let A ∈ C^{n×n} have distinct eigenvalues λ1, ..., λn with corresponding right eigenvectors x1, ..., xn and left eigenvectors y1, ..., yn, respectively. Let v ∈ C^n be an arbitrary vector. Show that v can be expressed (uniquely) as a linear combination of the right eigenvectors. Find the appropriate expression for v as a linear combination of the left eigenvectors as well.
2. Suppose A ∈ C^{n×n} is skew-Hermitian, i.e., A^H = −A. Prove that all eigenvalues of a skew-Hermitian matrix must be pure imaginary.

3. Suppose A ∈ C^{n×n} is Hermitian. Let λ be an eigenvalue of A with corresponding right eigenvector x. Show that x is also a left eigenvector for λ. Prove the same result if A is skew-Hermitian.

4. Suppose a matrix A ∈ R^{5×5} has eigenvalues {2, 2, 2, 2, 3}. Determine all possible JCFs for A.

5. Determine the eigenvalues, right eigenvectors and right principal vectors if necessary, and (real) JCFs of the following matrices:

   (a) [ 2 1 ]        (b) [...].
       [ 1 0 ],

6. Determine the JCFs of the following matrices: [...].

7. Let

       A = [...].

   Find a nonsingular matrix X such that X⁻¹AX = J, where J is the JCF

       J = [ 1 1 0 ]
           [ 0 1 0 ]
           [ 0 0 1 ].

   Hint: Use [−1 1 −1]^T as an eigenvector. The vectors [0 1 −1]^T and [1 0 0]^T are both eigenvectors, but then the equation (A − I)x^(2) = x^(1) can't be solved.

8. Show that all right eigenvectors of the Jordan block matrix in Theorem 9.30 must be multiples of e1 ∈ R^k. Characterize all left eigenvectors.

9. Let A ∈ R^{n×n} be of the form A = xy^T, where x, y ∈ R^n are nonzero vectors with x^T y = 0. Determine the JCF of A.

10. Let A ∈ R^{n×n} be of the form A = I + xy^T, where x, y ∈ R^n are nonzero vectors with x^T y = 0. Determine the JCF of A.

11. Suppose a matrix A ∈ R^{16×16} has 16 eigenvalues at 0 and its JCF consists of a single Jordan block of the form specified in Theorem 9.22. Suppose the small number 10⁻¹⁶ is added to the (16,1) element of J. What are the eigenvalues of this slightly perturbed matrix?
Chapter 10

Canonical Forms

10.1 Some Basic Canonical Forms

Problem: Let $\mathcal{V}$ and $\mathcal{W}$ be vector spaces and suppose $\mathcal{A} : \mathcal{V} \to \mathcal{W}$ is a linear transformation. Find bases in $\mathcal{V}$ and $\mathcal{W}$ with respect to which Mat $\mathcal{A}$ has a "simple form" or "canonical form." In matrix terms, if $A \in \mathbb{R}^{m \times n}$, find nonsingular $P \in \mathbb{R}^{m \times m}$ and $Q \in \mathbb{R}^{n \times n}$ such that $PAQ$ has a "canonical form." The transformation $A \mapsto PAQ$ is called an equivalence; it is called an orthogonal equivalence if $P$ and $Q$ are orthogonal matrices.

Remark 10.1. We can also consider the case $A \in \mathbb{C}^{m \times n}$ and unitary equivalence if $P$ and $Q$ are unitary.

Two special cases are of interest:

1. If $\mathcal{W} = \mathcal{V}$ and $Q = P^{-1}$, the transformation $A \mapsto PAP^{-1}$ is called a similarity.

2. If $\mathcal{W} = \mathcal{V}$ and if $Q = P^T$ is orthogonal, the transformation $A \mapsto PAP^T$ is called an orthogonal similarity (or unitary similarity in the complex case).

The following results are typical of what can be achieved under a unitary similarity. If $A = A^H \in \mathbb{C}^{n \times n}$ has eigenvalues $\lambda_1, \ldots, \lambda_n$, then there exists a unitary matrix $U$ such that $U^HAU = D$, where $D = \mathrm{diag}(\lambda_1, \ldots, \lambda_n)$. This is proved in Theorem 10.2. What other matrices are "diagonalizable" under unitary similarity? The answer is given in Theorem 10.9, where it is proved that a general matrix $A \in \mathbb{C}^{n \times n}$ is unitarily similar to a diagonal matrix if and only if it is normal (i.e., $AA^H = A^HA$). Normal matrices include Hermitian, skew-Hermitian, and unitary matrices (and their "real" counterparts: symmetric, skew-symmetric, and orthogonal, respectively), as well as other matrices that merely satisfy the definition, such as $A = \begin{bmatrix} a & b \\ -b & a \end{bmatrix}$ for real scalars $a$ and $b$. If a matrix $A$ is not normal, the most "diagonal" we can get is the JCF described in Chapter 9.

Theorem 10.2. Let $A = A^H \in \mathbb{C}^{n \times n}$ have (real) eigenvalues $\lambda_1, \ldots, \lambda_n$. Then there exists a unitary matrix $X$ such that $X^HAX = D = \mathrm{diag}(\lambda_1, \ldots, \lambda_n)$ (the columns of $X$ are orthonormal eigenvectors for $A$).
Proof: Let $x_1$ be a right eigenvector corresponding to $\lambda_1$, and normalize it such that $x_1^Hx_1 = 1$. Then there exist $n - 1$ additional vectors $x_2, \ldots, x_n$ such that $X = [x_1, \ldots, x_n] = [x_1 \;\; X_2]$ is unitary. Now
$$X^HAX = \begin{bmatrix} x_1^H \\ X_2^H \end{bmatrix} A \, [x_1 \;\; X_2] = \begin{bmatrix} x_1^HAx_1 & x_1^HAX_2 \\ X_2^HAx_1 & X_2^HAX_2 \end{bmatrix}$$
$$= \begin{bmatrix} \lambda_1 & x_1^HAX_2 \\ 0 & X_2^HAX_2 \end{bmatrix} \tag{10.1}$$
$$= \begin{bmatrix} \lambda_1 & 0 \\ 0 & X_2^HAX_2 \end{bmatrix}. \tag{10.2}$$
In (10.1) we have used the fact that $Ax_1 = \lambda_1x_1$. When combined with the fact that $x_1^Hx_1 = 1$, we get $\lambda_1$ remaining in the (1,1)-block. We also get 0 in the (2,1)-block by noting that $x_1$ is orthogonal to all vectors in $X_2$. In (10.2), we get 0 in the (1,2)-block by noting that $X^HAX$ is Hermitian. The proof is completed easily by induction upon noting that the (2,2)-block must have eigenvalues $\lambda_2, \ldots, \lambda_n$. $\square$

Given a unit vector $x_1 \in \mathbb{R}^n$, the construction of $X_2 \in \mathbb{R}^{n \times (n-1)}$ such that $X = [x_1 \;\; X_2]$ is orthogonal is frequently required. The construction can actually be performed quite easily by means of Householder (or Givens) transformations as in the proof of the following general result.

Theorem 10.3. Let $X_1 \in \mathbb{C}^{n \times k}$ have orthonormal columns and suppose $U$ is a unitary matrix such that $UX_1 = \begin{bmatrix} R \\ 0 \end{bmatrix}$, where $R \in \mathbb{C}^{k \times k}$ is upper triangular. Write $U^H = [U_1 \;\; U_2]$ with $U_1 \in \mathbb{C}^{n \times k}$. Then $[X_1 \;\; U_2]$ is unitary.

Proof: Let $X_1 = [x_1, \ldots, x_k]$. Construct a sequence of Householder matrices (also known as elementary reflectors) $H_1, \ldots, H_k$ in the usual way (see below) such that
$$H_k \cdots H_1[x_1, \ldots, x_k] = \begin{bmatrix} R \\ 0 \end{bmatrix},$$
where $R$ is upper triangular (and nonsingular since $x_1, \ldots, x_k$ are orthonormal). Let $U = H_k \cdots H_1$. Then $U^H = H_1 \cdots H_k$ and
$$UX_1 = \begin{bmatrix} U_1^H \\ U_2^H \end{bmatrix} X_1 = \begin{bmatrix} R \\ 0 \end{bmatrix}.$$
Then $x_i^HU_2 = 0$ ($i \in \underline{k}$) means that $x_i$ is orthogonal to each of the $n - k$ columns of $U_2$. But the latter are orthonormal since they are the last $n - k$ rows of the unitary matrix $U$. Thus, $[X_1 \;\; U_2]$ is unitary. $\square$

The construction called for in Theorem 10.2 is then a special case of Theorem 10.3 for $k = 1$. We illustrate the construction of the necessary Householder matrix for $k = 1$. For simplicity, we consider the real case. Let the unit vector $x_1$ be denoted by $[\xi_1, \ldots, \xi_n]^T$.
Then the necessary Householder matrix needed for the construction of $X_2$ is given by $U = I - 2uu^+ = I - \frac{2}{u^Tu}uu^T$, where $u = [\xi_1 \pm 1, \xi_2, \ldots, \xi_n]^T$. It can easily be checked that $U$ is symmetric and $U^TU = U^2 = I$, so $U$ is orthogonal. To see that $U$ effects the necessary compression of $x_1$, it is easily verified that $u^Tu = 2 \pm 2\xi_1$ and $u^Tx_1 = 1 \pm \xi_1$. Thus,
$$Ux_1 = x_1 - \frac{2(u^Tx_1)}{u^Tu}\,u = x_1 - u = [\mp 1, 0, \ldots, 0]^T.$$
Further details on Householder matrices, including the choice of sign and the complex case, can be consulted in standard numerical linear algebra texts such as [7], [11], [23], [25].
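As a numerical illustration (the example vector and the NumPy-based sketch below are assumptions, not part of the text), the Householder matrix just described can be formed and checked as follows.

```python
# Minimal sketch: form U = I - (2/u^T u) u u^T for a unit vector x1 and verify U x1 = -/+ e1.
import numpy as np

x1 = np.array([0.5, 0.5, 0.5, 0.5])                   # an assumed unit vector
u = x1.copy()
u[0] += 1.0 if x1[0] >= 0 else -1.0                   # u = [xi_1 +/- 1, xi_2, ..., xi_n]^T (sign chosen to avoid cancellation)
U = np.eye(len(x1)) - 2.0 * np.outer(u, u) / (u @ u)

print(np.allclose(U, U.T), np.allclose(U @ U, np.eye(len(x1))))   # U is symmetric and orthogonal
print(np.round(U @ x1, 12))                           # equals -/+ e1; the remaining columns of U serve as X2
```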
The real version of Theorem 10.2 is worth stating separately since it is applied frequently in applications.

Theorem 10.4. Let $A = A^T \in \mathbb{R}^{n \times n}$ have eigenvalues $\lambda_1, \ldots, \lambda_n$. Then there exists an orthogonal matrix $X \in \mathbb{R}^{n \times n}$ (whose columns are orthonormal eigenvectors of $A$) such that $X^TAX = D = \mathrm{diag}(\lambda_1, \ldots, \lambda_n)$.

Note that Theorem 10.4 implies that a symmetric matrix $A$ (with the obvious analogue from Theorem 10.2 for Hermitian matrices) can be written
$$A = XDX^T = \sum_{i=1}^n \lambda_i x_i x_i^T, \tag{10.3}$$
which is often called the spectral representation of $A$. In fact, $A$ in (10.3) is actually a weighted sum of orthogonal projections $P_i$ (onto the one-dimensional eigenspaces corresponding to the $\lambda_i$'s), i.e.,
$$A = \sum_{i=1}^n \lambda_i P_i,$$
where $P_i = P_{\mathcal{R}(x_i)} = x_ix_i^+ = x_ix_i^T$ since $x_i^Tx_i = 1$.
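A short NumPy check of the spectral representation (10.3), on an assumed symmetric example, might look as follows.

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])                        # assumed symmetric example
lam, X = np.linalg.eigh(A)                             # orthonormal eigenvectors in the columns of X

A_rebuilt = sum(lam[i] * np.outer(X[:, i], X[:, i]) for i in range(len(lam)))
print(np.allclose(X.T @ A @ X, np.diag(lam)))          # Theorem 10.4
print(np.allclose(A_rebuilt, A))                       # A = sum_i lambda_i x_i x_i^T, the spectral representation
```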
The following pair of theorems form the theoretical foundation of the double Francis QR algorithm used to compute matrix eigenvalues in a numerically stable and reliable way.
Theorem 10.5 (Schur). Let $A \in \mathbb{C}^{n \times n}$. Then there exists a unitary matrix $U$ such that $U^HAU = T$, where $T$ is upper triangular.

Proof: The proof of this theorem is essentially the same as that of Theorem 10.2 except that in this case (using the notation $U$ rather than $X$) the (1,2)-block $u_1^HAU_2$ is not 0. $\square$

In the case of $A \in \mathbb{R}^{n \times n}$, it is thus unitarily similar to an upper triangular matrix, but if $A$ has a complex conjugate pair of eigenvalues, then complex arithmetic is clearly needed to place such eigenvalues on the diagonal of $T$. However, the next theorem shows that every $A \in \mathbb{R}^{n \times n}$ is also orthogonally similar (i.e., real arithmetic) to a quasi-upper-triangular matrix. A quasi-upper-triangular matrix is block upper triangular with $1 \times 1$ diagonal blocks corresponding to its real eigenvalues and $2 \times 2$ diagonal blocks corresponding to its complex conjugate pairs of eigenvalues.

Theorem 10.6 (Murnaghan-Wintner). Let $A \in \mathbb{R}^{n \times n}$. Then there exists an orthogonal matrix $U$ such that $U^TAU = S$, where $S$ is quasi-upper-triangular.

Definition 10.7. The triangular matrix $T$ in Theorem 10.5 is called a Schur canonical form or Schur form. The quasi-upper-triangular matrix $S$ in Theorem 10.6 is called a real Schur canonical form or real Schur form (RSF). The columns of a unitary [orthogonal] matrix $U$ that reduces a matrix to [real] Schur form are called Schur vectors.
Example 10.8. A $3 \times 3$ matrix in RSF might consist of a $2 \times 2$ diagonal block, corresponding to a complex conjugate pair of eigenvalues, followed by a $1 \times 1$ diagonal block for a real eigenvalue; its real JCF has the same block structure, with the $2 \times 2$ block replaced by the corresponding real Jordan block.
Note that only the first Schur vector (and then only if the corresponding first eigenvalue
is real if U is orthogonal) is an eigenvector. However, what is true, and sufficient for virtually
all applications (see, for example, [17]), is that the first k Schur vectors span the same A
invariant subspace as the eigenvectors corresponding to the first k eigenvalues along the
diagonal of T (or S).
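For readers who want to compute these forms, the following hedged sketch uses SciPy's `schur` routine on an assumed example with one complex conjugate pair and one real eigenvalue.

```python
import numpy as np
from scipy.linalg import schur

A = np.array([[0.0, 2.0, 1.0],
              [-2.0, 0.0, 3.0],
              [0.0, 0.0, 4.0]])                  # assumed example: eigenvalues +/- 2j and 4

T, U = schur(A, output='complex')                # A = U T U^H, T upper triangular (Theorem 10.5)
S, Q = schur(A, output='real')                   # A = Q S Q^T, S quasi-upper-triangular (Theorem 10.6)

print(np.allclose(U @ T @ U.conj().T, A))        # True
print(np.allclose(Q @ S @ Q.T, A))               # True
print(np.round(S, 3))                            # note the 2 x 2 diagonal block for the complex pair
```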
While every matrix can be reduced to Schur form (or RSF), it is of interest to know when we can go further and reduce a matrix via unitary similarity to diagonal form. The following theorem answers this question.

Theorem 10.9. A matrix $A \in \mathbb{C}^{n \times n}$ is unitarily similar to a diagonal matrix if and only if $A$ is normal (i.e., $A^HA = AA^H$).

Proof: Suppose $U$ is a unitary matrix such that $U^HAU = D$, where $D$ is diagonal. Then
$$AA^H = UDU^HUD^HU^H = UDD^HU^H = UD^HDU^H = UD^HU^HUDU^H = A^HA,$$
so $A$ is normal.
Conversely, suppose $A$ is normal and let $U$ be a unitary matrix such that $U^HAU = T$, where $T$ is an upper triangular matrix (Theorem 10.5). Then
$$T^HT = U^HA^HAU = U^HAA^HU = TT^H.$$
It is then a routine exercise to show that $T$ must, in fact, be diagonal. $\square$

10.2 Definite Matrices

Definition 10.10. A symmetric matrix $A \in \mathbb{R}^{n \times n}$ is

1. positive definite if and only if $x^TAx > 0$ for all nonzero $x \in \mathbb{R}^n$. We write $A > 0$.

2. nonnegative definite (or positive semidefinite) if and only if $x^TAx \ge 0$ for all nonzero $x \in \mathbb{R}^n$. We write $A \ge 0$.

3. negative definite if $-A$ is positive definite. We write $A < 0$.

4. nonpositive definite (or negative semidefinite) if $-A$ is nonnegative definite. We write $A \le 0$.

Also, if $A$ and $B$ are symmetric matrices, we write $A > B$ if and only if $A - B > 0$ or $B - A < 0$. Similarly, we write $A \ge B$ if and only if $A - B \ge 0$ or $B - A \le 0$.

Remark 10.11. If $A \in \mathbb{C}^{n \times n}$ is Hermitian, all the above definitions hold except that superscript $H$'s replace $T$'s. Indeed, this is generally true for all results in the remainder of this section that may be stated in the real case for simplicity.

Remark 10.12. If a matrix is neither definite nor semidefinite, it is said to be indefinite.

Theorem 10.13. Let $A = A^H \in \mathbb{C}^{n \times n}$ with eigenvalues $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_n$. Then for all $x \in \mathbb{C}^n$,
$$\lambda_nx^Hx \le x^HAx \le \lambda_1x^Hx.$$

Proof: Let $U$ be a unitary matrix that diagonalizes $A$ as in Theorem 10.2. Furthermore, let $y = U^Hx$, where $x$ is an arbitrary vector in $\mathbb{C}^n$, and denote the components of $y$ by $\eta_i$, $i \in \underline{n}$. Then
$$x^HAx = (U^Hx)^HU^HAU(U^Hx) = y^HDy = \sum_{i=1}^n \lambda_i|\eta_i|^2.$$
But clearly
$$\sum_{i=1}^n \lambda_i|\eta_i|^2 \le \lambda_1y^Hy = \lambda_1x^Hx$$
and
$$\sum_{i=1}^n \lambda_i|\eta_i|^2 \ge \lambda_ny^Hy = \lambda_nx^Hx,$$
from which the theorem follows. $\square$

Remark 10.14. The ratio $\frac{x^HAx}{x^Hx}$ for $A = A^H \in \mathbb{C}^{n \times n}$ and nonzero $x \in \mathbb{C}^n$ is called the Rayleigh quotient of $x$. Theorem 10.13 provides upper ($\lambda_1$) and lower ($\lambda_n$) bounds for the Rayleigh quotient. If $A = A^H \in \mathbb{C}^{n \times n}$ is positive definite, $x^HAx > 0$ for all nonzero $x \in \mathbb{C}^n$, so $0 < \lambda_n \le \cdots \le \lambda_1$.

Corollary 10.15. Let $A \in \mathbb{C}^{n \times n}$. Then $\|A\|_2 = \lambda_{\max}^{1/2}(A^HA)$.

Proof: For all $x \in \mathbb{C}^n$ we have
$$\|Ax\|_2^2 = x^HA^HAx \le \lambda_{\max}(A^HA)\,x^Hx = \lambda_{\max}(A^HA)\,\|x\|_2^2.$$
Let $x$ be an eigenvector corresponding to $\lambda_{\max}(A^HA)$. Then $\frac{\|Ax\|_2^2}{\|x\|_2^2} = \lambda_{\max}(A^HA)$, whence
$$\|A\|_2 = \max_{x \ne 0}\frac{\|Ax\|_2}{\|x\|_2} = \lambda_{\max}^{1/2}(A^HA). \qquad \square$$
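Corollary 10.15 is easy to check numerically; the sketch below, with an assumed random example, compares $\lambda_{\max}^{1/2}(A^HA)$ with the 2-norm computed directly.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))   # assumed example

lam_max = np.linalg.eigvalsh(A.conj().T @ A).max()   # largest eigenvalue of A^H A (real and nonnegative)
print(np.isclose(np.sqrt(lam_max), np.linalg.norm(A, 2)))            # True
```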
Definition 10.16. A principal submatrix of an $n \times n$ matrix $A$ is the $(n-k) \times (n-k)$ matrix that remains by deleting $k$ rows and the corresponding $k$ columns. A leading principal submatrix of order $n - k$ is obtained by deleting the last $k$ rows and columns.

Theorem 10.17. A symmetric matrix $A \in \mathbb{R}^{n \times n}$ is positive definite if and only if any of the following three equivalent conditions hold:

1. The determinants of all leading principal submatrices of $A$ are positive.

2. All eigenvalues of $A$ are positive.

3. $A$ can be written in the form $M^TM$, where $M \in \mathbb{R}^{n \times n}$ is nonsingular.

Theorem 10.18. A symmetric matrix $A \in \mathbb{R}^{n \times n}$ is nonnegative definite if and only if any of the following three equivalent conditions hold:

1. The determinants of all principal submatrices of $A$ are nonnegative.

2. All eigenvalues of $A$ are nonnegative.

3. $A$ can be written in the form $M^TM$, where $M \in \mathbb{R}^{k \times n}$ and $k \ge \mathrm{rank}(A) = \mathrm{rank}(M)$.

Remark 10.19. Note that the determinants of all principal submatrices must be nonnegative in Theorem 10.18.1, not just those of the leading principal submatrices. For example, consider the matrix $A = \begin{bmatrix} 0 & 0 \\ 0 & -1 \end{bmatrix}$. The determinant of the $1 \times 1$ leading submatrix is 0 and the determinant of the $2 \times 2$ leading submatrix is also 0 (cf. Theorem 10.17). However, the
principal submatrix consisting of the (2,2) element is, in fact, negative and A is nonpositive
definite.
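The characterizations in Theorems 10.17 and 10.18, and the counterexample just discussed, can be explored numerically; the following sketch (assumed examples) checks leading principal minors and eigenvalues.

```python
import numpy as np

def leading_principal_minors(A):
    return [np.linalg.det(A[:k, :k]) for k in range(1, A.shape[0] + 1)]

A_pd = np.array([[2.0, -1.0], [-1.0, 2.0]])    # positive definite
A_bad = np.array([[0.0, 0.0], [0.0, -1.0]])    # Remark 10.19: leading minors 0, 0, yet not nonnegative definite

print(leading_principal_minors(A_pd), np.linalg.eigvalsh(A_pd))     # all positive (Theorem 10.17)
print(leading_principal_minors(A_bad), np.linalg.eigvalsh(A_bad))   # minors [0, 0] but one eigenvalue is negative
```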
Remark 10.20. The factor $M$ in Theorem 10.18.3 is not unique. For example, if
$$A = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix},$$
then $M$ can be
$$\begin{bmatrix} 1 & 0 \end{bmatrix}, \quad \begin{bmatrix} \tfrac{1}{\sqrt{2}} & 0 \\ \tfrac{1}{\sqrt{2}} & 0 \end{bmatrix}, \quad \begin{bmatrix} \tfrac{1}{\sqrt{3}} & 0 \\ \tfrac{1}{\sqrt{3}} & 0 \\ \tfrac{1}{\sqrt{3}} & 0 \end{bmatrix}, \quad \ldots$$

Recall that $A \ge B$ if the matrix $A - B$ is nonnegative definite. The following theorem is useful in "comparing" symmetric matrices. Its proof is straightforward from basic definitions.

Theorem 10.21. Let $A, B \in \mathbb{R}^{n \times n}$ be symmetric.

1. If $A \ge B$ and $M \in \mathbb{R}^{n \times m}$, then $M^TAM \ge M^TBM$.

2. If $A > B$ and $M \in \mathbb{R}_m^{n \times m}$, then $M^TAM > M^TBM$.

The following standard theorem is stated without proof (see, for example, [16, p. 181]). It concerns the notion of the "square root" of a matrix. That is, if $A \in \mathbb{R}^{n \times n}$, we say that $S \in \mathbb{R}^{n \times n}$ is a square root of $A$ if $S^2 = A$. In general, matrices (both symmetric and nonsymmetric) have infinitely many square roots. For example, if $A = I_2$, any matrix $S$ of the form $\begin{bmatrix} \cos\theta & \sin\theta \\ \sin\theta & -\cos\theta \end{bmatrix}$ is a square root.

Theorem 10.22. Let $A \in \mathbb{R}^{n \times n}$ be nonnegative definite. Then $A$ has a unique nonnegative definite square root $S$. Moreover, $SA = AS$ and $\mathrm{rank}\,S = \mathrm{rank}\,A$ (and hence $S$ is positive definite if $A$ is positive definite).

A stronger form of the third characterization in Theorem 10.17 is available and is known as the Cholesky factorization. It is stated and proved below for the more general Hermitian case.

Theorem 10.23. Let $A \in \mathbb{C}^{n \times n}$ be Hermitian and positive definite. Then there exists a unique nonsingular lower triangular matrix $L$ with positive diagonal elements such that $A = LL^H$.

Proof: The proof is by induction. The case $n = 1$ is trivially true. Write the matrix $A$ in the form
$$A = \begin{bmatrix} B & b \\ b^H & a_{nn} \end{bmatrix}.$$
By our induction hypothesis, assume the result is true for matrices of order $n - 1$ so that $B$ may be written as $B = L_1L_1^H$, where $L_1 \in \mathbb{C}^{(n-1) \times (n-1)}$ is nonsingular and lower triangular
with positive diagonal elements. It remains to prove that we can write the $n \times n$ matrix $A$ in the form
$$\begin{bmatrix} B & b \\ b^H & a_{nn} \end{bmatrix} = \begin{bmatrix} L_1 & 0 \\ c^H & \alpha \end{bmatrix}\begin{bmatrix} L_1^H & c \\ 0 & \alpha \end{bmatrix},$$
where $\alpha$ is positive. Performing the indicated matrix multiplication and equating the corresponding submatrices, we see that we must have $L_1c = b$ and $a_{nn} = c^Hc + \alpha^2$. Clearly $c$ is given simply by $c = L_1^{-1}b$. Substituting in the expression involving $\alpha$, we find $\alpha^2 = a_{nn} - b^HL_1^{-H}L_1^{-1}b = a_{nn} - b^HB^{-1}b$ (= the Schur complement of $B$ in $A$). But we know that
$$0 < \det(A) = \det\begin{bmatrix} B & b \\ b^H & a_{nn} \end{bmatrix} = \det(B)\det(a_{nn} - b^HB^{-1}b).$$
Since $\det(B) > 0$, we must have $a_{nn} - b^HB^{-1}b > 0$. Choosing $\alpha$ to be the positive square root of $a_{nn} - b^HB^{-1}b$ completes the proof. $\square$
10.3 Equivalence Transformations and Congruence

Theorem 10.24. Let $A \in \mathbb{C}_r^{m \times n}$. Then there exist matrices $P \in \mathbb{C}_m^{m \times m}$ and $Q \in \mathbb{C}_n^{n \times n}$ such that
$$PAQ = \begin{bmatrix} I_r & 0 \\ 0 & 0 \end{bmatrix}. \tag{10.4}$$

Proof: A classical proof can be consulted in, for example, [21, p. 131]. Alternatively, suppose $A$ has an SVD of the form (5.2) in its complex version. Then
$$\begin{bmatrix} S^{-1} & 0 \\ 0 & I \end{bmatrix}\begin{bmatrix} U_1^H \\ U_2^H \end{bmatrix}AV = \begin{bmatrix} I_r & 0 \\ 0 & 0 \end{bmatrix}.$$
Take $P = \begin{bmatrix} S^{-1} & 0 \\ 0 & I \end{bmatrix}\begin{bmatrix} U_1^H \\ U_2^H \end{bmatrix}$ and $Q = V$ to complete the proof. $\square$
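The constructive half of the proof translates directly into a few lines of NumPy; the example matrix below is an assumption used only to exercise the construction.

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],
              [1.0, 0.0, 1.0]])                     # assumed example of rank 2
m, n = A.shape
U, s, Vh = np.linalg.svd(A)
r = int(np.sum(s > 1e-12))

P = np.block([[np.diag(1.0 / s[:r]), np.zeros((r, m - r))],
              [np.zeros((m - r, r)),  np.eye(m - r)]]) @ U.conj().T
Q = Vh.conj().T
print(np.round(P @ A @ Q, 12))                      # [[I_r, 0], [0, 0]], the canonical form (10.4)
```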
Note that the greater freedom afforded by the equivalence transformation of Theorem 10.24, as opposed to the more restrictive situation of a similarity transformation, yields a far "simpler" canonical form (10.4). However, numerical procedures for computing such an equivalence directly via, say, Gaussian or elementary row and column operations, are generally unreliable. The numerically preferred equivalence is, of course, the unitary equivalence known as the SVD. However, the SVD is relatively expensive to compute and other canonical forms exist that are intermediate between (10.4) and the SVD; see, for example, [7, Ch. 5], [4, Ch. 2]. Two such forms are stated here. They are more stably computable than (10.4) and more efficiently computable than a full SVD. Many similar results are also available.
Theorem 10.25 (Complete Orthogonal Decomposition). Let $A \in \mathbb{C}_r^{m \times n}$. Then there exist unitary matrices $U \in \mathbb{C}^{m \times m}$ and $V \in \mathbb{C}^{n \times n}$ such that
$$U^HAV = \begin{bmatrix} R & 0 \\ 0 & 0 \end{bmatrix}, \tag{10.5}$$
where $R \in \mathbb{C}_r^{r \times r}$ is upper (or lower) triangular with positive diagonal elements.

Proof: For the proof, see [4]. $\square$

Theorem 10.26. Let $A \in \mathbb{C}_r^{m \times n}$. Then there exists a unitary matrix $Q \in \mathbb{C}^{m \times m}$ and a permutation matrix $\Pi \in \mathbb{C}^{n \times n}$ such that
$$QA\Pi = \begin{bmatrix} R & S \\ 0 & 0 \end{bmatrix}, \tag{10.6}$$
where $R \in \mathbb{C}_r^{r \times r}$ is upper triangular and $S \in \mathbb{C}^{r \times (n-r)}$ is arbitrary but in general nonzero.

Proof: For the proof, see [4]. $\square$

Remark 10.27. When $A$ has full column rank but is "near" a rank-deficient matrix, various rank-revealing QR decompositions are available that can sometimes detect such phenomena at a cost considerably less than a full SVD. Again, see [4] for details.
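In the spirit of Theorem 10.26, SciPy's QR with column pivoting produces a factorization whose trailing diagonal entries of $R$ reveal near rank deficiency; a small assumed example:

```python
import numpy as np
from scipy.linalg import qr

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])            # assumed example: third column = first + second, so rank 2

Q, R, piv = qr(A, pivoting=True)           # A[:, piv] = Q R
print(piv, np.round(np.abs(np.diag(R)), 12))   # the last diagonal entry of R is (numerically) zero
print(np.allclose(Q @ R, A[:, piv]))       # True
```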
Definition 10.28. Let $A \in \mathbb{C}^{n \times n}$ and $X \in \mathbb{C}_n^{n \times n}$. The transformation $A \mapsto X^HAX$ is called a congruence. Note that a congruence is a similarity if and only if $X$ is unitary.

Note that congruence preserves the property of being Hermitian; i.e., if $A$ is Hermitian, then $X^HAX$ is also Hermitian. It is of interest to ask what other properties of a matrix are preserved under congruence. It turns out that the principal property so preserved is the sign of each eigenvalue.

Definition 10.29. Let $A = A^H \in \mathbb{C}^{n \times n}$ and let $\pi$, $\nu$, and $\zeta$ denote the numbers of positive, negative, and zero eigenvalues, respectively, of $A$. Then the inertia of $A$ is the triple of numbers $\mathrm{In}(A) = (\pi, \nu, \zeta)$. The signature of $A$ is given by $\mathrm{sig}(A) = \pi - \nu$.

Example 10.30.

1. $\mathrm{In}\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix} = (2, 1, 1)$.

2. If $A = A^H \in \mathbb{C}^{n \times n}$, then $A > 0$ if and only if $\mathrm{In}(A) = (n, 0, 0)$.

3. If $\mathrm{In}(A) = (\pi, \nu, \zeta)$, then $\mathrm{rank}(A) = \pi + \nu$.

Theorem 10.31 (Sylvester's Law of Inertia). Let $A = A^H \in \mathbb{C}^{n \times n}$ and $X \in \mathbb{C}_n^{n \times n}$. Then $\mathrm{In}(A) = \mathrm{In}(X^HAX)$.

Proof: For the proof, see, for example, [21, p. 134]. $\square$
Theorem 10.31 guarantees that rank and signature of a matrix are preserved under
congruence. We then have the following.
Theorem 10.32. Let $A = A^H \in \mathbb{C}^{n \times n}$ with $\mathrm{In}(A) = (\pi, \nu, \zeta)$. Then there exists a matrix $X \in \mathbb{C}_n^{n \times n}$ such that $X^HAX = \mathrm{diag}(1, \ldots, 1, -1, \ldots, -1, 0, \ldots, 0)$, where the number of 1's is $\pi$, the number of $-1$'s is $\nu$, and the number of 0's is $\zeta$.

Proof: Let $\lambda_1, \ldots, \lambda_n$ denote the eigenvalues of $A$ and order them such that the first $\pi$ are positive, the next $\nu$ are negative, and the final $\zeta$ are 0. By Theorem 10.2 there exists a unitary matrix $U$ such that $U^HAU = \mathrm{diag}(\lambda_1, \ldots, \lambda_n)$. Define the $n \times n$ matrix
$$W = \mathrm{diag}\!\left(1/\sqrt{\lambda_1}, \ldots, 1/\sqrt{\lambda_\pi},\; 1/\sqrt{-\lambda_{\pi+1}}, \ldots, 1/\sqrt{-\lambda_{\pi+\nu}},\; 1, \ldots, 1\right).$$
Then it is easy to check that $X = UW$ yields the desired result. $\square$
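Sylvester's law of inertia is easy to observe numerically; the sketch below (with an assumed helper that counts signs of eigenvalues) applies an arbitrary nonsingular congruence and recovers the same inertia.

```python
import numpy as np

def inertia(A, tol=1e-12):
    lam = np.linalg.eigvalsh(A)
    return (int((lam > tol).sum()), int((lam < -tol).sum()), int((np.abs(lam) <= tol).sum()))

A = np.diag([3.0, 1.0, -2.0, 0.0])          # assumed Hermitian example with inertia (2, 1, 1)
rng = np.random.default_rng(1)
X = rng.standard_normal((4, 4))             # almost surely nonsingular

print(inertia(A))                           # (2, 1, 1)
print(inertia(X.T @ A @ X))                 # the same triple: congruence preserves inertia (Theorem 10.31)
```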
10.3.1 Block matrices and definiteness
Theorem 10.33. Suppose $A = A^T$ and $D = D^T$. Then
$$\begin{bmatrix} A & B \\ B^T & D \end{bmatrix} > 0$$
if and only if either $A > 0$ and $D - B^TA^{-1}B > 0$, or $D > 0$ and $A - BD^{-1}B^T > 0$.

Proof: The proof follows by considering, for example, the congruence
$$\begin{bmatrix} A & B \\ B^T & D \end{bmatrix} \mapsto \begin{bmatrix} I & -A^{-1}B \\ 0 & I \end{bmatrix}^T\begin{bmatrix} A & B \\ B^T & D \end{bmatrix}\begin{bmatrix} I & -A^{-1}B \\ 0 & I \end{bmatrix}.$$
The details are straightforward and are left to the reader. $\square$

Remark 10.34. Note the symmetric Schur complements of $A$ (or $D$) in the theorem.

Theorem 10.35. Suppose $A = A^T$ and $D = D^T$. Then
$$\begin{bmatrix} A & B \\ B^T & D \end{bmatrix} \ge 0$$
if and only if $A \ge 0$, $AA^+B = B$, and $D - B^TA^+B \ge 0$.

Proof: Consider the congruence with
$$\begin{bmatrix} I & -A^+B \\ 0 & I \end{bmatrix}$$
and proceed as in the proof of Theorem 10.33. $\square$
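Theorem 10.33 can be verified on a small assumed example by testing the (1,1) block and its Schur complement.

```python
import numpy as np

A = np.array([[4.0, 1.0], [1.0, 3.0]])
B = np.array([[1.0, 0.0], [2.0, 1.0]])
D = np.array([[5.0, 1.0], [1.0, 2.0]])      # assumed example blocks
M = np.block([[A, B], [B.T, D]])

S = D - B.T @ np.linalg.inv(A) @ B          # Schur complement of A
print(np.all(np.linalg.eigvalsh(A) > 0))    # A > 0
print(np.all(np.linalg.eigvalsh(S) > 0))    # D - B^T A^{-1} B > 0
print(np.all(np.linalg.eigvalsh(M) > 0))    # hence the full block matrix is > 0 (Theorem 10.33)
```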
10.4 Rational Canonical Form
One final canonical form to be mentioned is the rational canonical form.
Definition 10.36. A matrix $A \in \mathbb{R}^{n \times n}$ is said to be nonderogatory if its minimal polynomial and characteristic polynomial are the same or, equivalently, if its Jordan canonical form has only one block associated with each distinct eigenvalue.

Suppose $A \in \mathbb{R}^{n \times n}$ is a nonderogatory matrix and suppose its characteristic polynomial is $\pi(\lambda) = \lambda^n - (a_0 + a_1\lambda + \cdots + a_{n-1}\lambda^{n-1})$. Then it can be shown (see [12]) that $A$ is similar to a matrix of the form
$$\begin{bmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & & & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \\ a_0 & a_1 & a_2 & \cdots & a_{n-1} \end{bmatrix}. \tag{10.7}$$

Definition 10.37. A matrix $A \in \mathbb{R}^{n \times n}$ of the form (10.7) is called a companion matrix or is said to be in companion form.

Companion matrices also appear in the literature in several equivalent forms. To illustrate, consider the companion matrix
$$A = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ a_0 & a_1 & a_2 & a_3 \end{bmatrix}. \tag{10.8}$$
This matrix is a special case of a matrix in lower Hessenberg form. Using the reverse-order identity similarity $P$ given by (9.18), $A$ is easily seen to be similar to the following matrix in upper Hessenberg form:
$$\begin{bmatrix} a_3 & a_2 & a_1 & a_0 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}. \tag{10.9}$$
Moreover, since a matrix is similar to its transpose (see exercise 13 in Chapter 9), the following are also companion matrices similar to the above:
$$\begin{bmatrix} a_3 & 1 & 0 & 0 \\ a_2 & 0 & 1 & 0 \\ a_1 & 0 & 0 & 1 \\ a_0 & 0 & 0 & 0 \end{bmatrix}, \qquad \begin{bmatrix} 0 & 0 & 0 & a_0 \\ 1 & 0 & 0 & a_1 \\ 0 & 1 & 0 & a_2 \\ 0 & 0 & 1 & a_3 \end{bmatrix}. \tag{10.10}$$
Notice that in all cases a companion matrix is nonsingular if and only if $a_0 \ne 0$. In fact, the inverse of a nonsingular companion matrix is again in companion form. For example,
$$\begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ a_0 & a_1 & a_2 & a_3 \end{bmatrix}^{-1} = \begin{bmatrix} -\tfrac{a_1}{a_0} & -\tfrac{a_2}{a_0} & -\tfrac{a_3}{a_0} & \tfrac{1}{a_0} \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}, \tag{10.11}$$
with a similar result for companion matrices of the form (10.10).

If a companion matrix of the form (10.7) is singular, i.e., if $a_0 = 0$, then its pseudoinverse can still be computed. Let $a \in \mathbb{R}^{n-1}$ denote the vector $[a_1, a_2, \ldots, a_{n-1}]^T$ and let $c = \frac{1}{1 + a^Ta}$. Then it is easily verified that, in block form,
$$\begin{bmatrix} 0 & I_{n-1} \\ 0 & a^T \end{bmatrix}^+ = \begin{bmatrix} 0 & 0 \\ I_{n-1} - caa^T & ca \end{bmatrix}.$$
Note that $I - caa^T = (I + aa^T)^{-1}$, and hence the pseudoinverse of a singular companion matrix is not a companion matrix unless $a = 0$.

Companion matrices have many other interesting properties, among which, and perhaps surprisingly, is the fact that their singular values can be found in closed form; see [14].

Theorem 10.38. Let $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_n$ be the singular values of the companion matrix $A$ in (10.7). Let $\alpha = a_1^2 + a_2^2 + \cdots + a_{n-1}^2$ and $\gamma = 1 + a_0^2 + \alpha$. Then
$$\sigma_1^2 = \tfrac{1}{2}\left(\gamma + \sqrt{\gamma^2 - 4a_0^2}\right),$$
$$\sigma_i^2 = 1 \quad \text{for } i = 2, 3, \ldots, n-1,$$
$$\sigma_n^2 = \tfrac{1}{2}\left(\gamma - \sqrt{\gamma^2 - 4a_0^2}\right).$$
If $a_0 \ne 0$, the largest and smallest singular values can also be written in the equivalent form
$$\sigma_1^2 = \frac{2a_0^2}{\gamma - \sqrt{\gamma^2 - 4a_0^2}}, \qquad \sigma_n^2 = \frac{2a_0^2}{\gamma + \sqrt{\gamma^2 - 4a_0^2}}.$$

Remark 10.39. Explicit formulas for all the associated right and left singular vectors can also be derived easily.
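Both facts above, that the eigenvalues of a companion matrix are the roots of its characteristic polynomial and that its singular values are given by Theorem 10.38, are easy to check numerically on an assumed coefficient vector:

```python
import numpy as np

a = np.array([-6.0, 11.0, -6.0, 0.5])            # assumed coefficients a_0, ..., a_{n-1}
n = len(a)
C = np.zeros((n, n))
C[:-1, 1:] = np.eye(n - 1)                       # ones on the superdiagonal
C[-1, :] = a                                     # last row carries the coefficients, as in (10.7)

coeffs = np.concatenate(([1.0], -a[::-1]))       # lambda^n - a_{n-1} lambda^{n-1} - ... - a_0
print(np.sort_complex(np.linalg.eigvals(C)))     # eigenvalues of C ...
print(np.sort_complex(np.roots(coeffs)))         # ... agree with the polynomial roots

alpha = np.sum(a[1:] ** 2)
gamma = 1.0 + a[0] ** 2 + alpha
s1 = np.sqrt((gamma + np.sqrt(gamma ** 2 - 4 * a[0] ** 2)) / 2)
sn = np.sqrt((gamma - np.sqrt(gamma ** 2 - 4 * a[0] ** 2)) / 2)
print(np.linalg.svd(C, compute_uv=False), s1, sn)   # interior singular values equal 1 (Theorem 10.38)
```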
If $A \in \mathbb{R}^{n \times n}$ is derogatory, i.e., has more than one Jordan block associated with at least one eigenvalue, then it is not similar to a companion matrix of the form (10.7). However, it can be shown that a derogatory matrix is similar to a block diagonal matrix, each of whose diagonal blocks is a companion matrix. Such matrices are said to be in rational canonical form (or Frobenius canonical form). For details, see, for example, [12].

Companion matrices appear frequently in the control and signal processing literature but unfortunately they are often very difficult to work with numerically. Algorithms to reduce an arbitrary matrix to companion form are numerically unstable. Moreover, companion matrices are known to possess many undesirable numerical properties. For example, in general and especially as $n$ increases, their eigenstructure is extremely ill conditioned, nonsingular ones are nearly singular, stable ones are nearly unstable, and so forth [14].
Companion matrices and rational canonical forms are generally to be avoided in floating-point computation.

Remark 10.40. Theorem 10.38 yields some understanding of why difficult numerical behavior might be expected for companion matrices. For example, when solving linear systems of equations of the form (6.2), one measure of numerical sensitivity is $\kappa_p(A) = \|A\|_p\|A^{-1}\|_p$, the so-called condition number of $A$ with respect to inversion and with respect to the matrix $p$-norm. If this number is large, say $\mathcal{O}(10^k)$, one may lose up to $k$ digits of precision. In the 2-norm, this condition number is the ratio of largest to smallest singular values, which, by the theorem, can be determined explicitly as
$$\kappa_2(A) = \frac{\sigma_1}{\sigma_n} = \frac{\gamma + \sqrt{\gamma^2 - 4a_0^2}}{2|a_0|}.$$
It is easy to show that $\frac{\gamma}{2|a_0|} \le \kappa_2(A) \le \frac{\gamma}{|a_0|}$, and when $a_0$ is small or $\gamma$ is large (or both), then $\kappa_2(A) \approx \frac{\gamma}{|a_0|}$. It is not unusual for $\gamma$ to be large for large $n$. Note that explicit formulas for $\kappa_1(A)$ and $\kappa_\infty(A)$ can also be determined easily by using (10.11).
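A quick numerical check of the $\kappa_2$ formula (assumed coefficients, with a small $a_0$ to make the conditioning poor):

```python
import numpy as np

a = np.array([1e-3, 2.0, -1.0, 3.0])             # assumed coefficients; a_0 is small
n = len(a)
C = np.zeros((n, n)); C[:-1, 1:] = np.eye(n - 1); C[-1, :] = a

gamma = 1.0 + a[0] ** 2 + np.sum(a[1:] ** 2)
kappa_formula = (gamma + np.sqrt(gamma ** 2 - 4 * a[0] ** 2)) / (2 * abs(a[0]))
print(kappa_formula, np.linalg.cond(C, 2))       # the two values agree and are roughly gamma / |a_0|
```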
EXERCISES
1. Show that if a triangular matrix is normal, then it must be diagonal.
2. Prove that if $A \in \mathbb{R}^{n \times n}$ is normal, then $\mathcal{N}(A) = \mathcal{N}(A^T)$.

3. Let $A \in \mathbb{C}^{n \times n}$ and define $\rho(A) = \max_{\lambda \in \Lambda(A)}|\lambda|$. Then $\rho(A)$ is called the spectral radius of $A$. Show that if $A$ is normal, then $\rho(A) = \|A\|_2$. Show that the converse is true if $n = 2$.

4. Let $A \in \mathbb{C}^{n \times n}$ be normal with eigenvalues $\lambda_1, \ldots, \lambda_n$ and singular values $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_n \ge 0$. Show that $\sigma_i(A) = |\lambda_i(A)|$ for $i \in \underline{n}$.

5. Use the reverse-order identity matrix $P$ introduced in (9.18) and the matrix $U$ in Theorem 10.5 to find a unitary matrix $Q$ that reduces $A \in \mathbb{C}^{n \times n}$ to lower triangular form.
6. Let $A \in \mathbb{C}^{2 \times 2}$. Find a unitary matrix $U$ such that $U^HAU$ is upper triangular.

7. If $A \in \mathbb{R}^{n \times n}$ is positive definite, show that $A^{-1}$ must also be positive definite.

8. Suppose $A \in \mathbb{R}^{n \times n}$ is positive definite. Is $\begin{bmatrix} A & I \\ I & A^{-1} \end{bmatrix} \ge 0$?

9. Let $R, S \in \mathbb{R}^{n \times n}$ be symmetric. Show that $\begin{bmatrix} R & I \\ I & S \end{bmatrix} > 0$ if and only if $S > 0$ and $R > S^{-1}$.
10. Find the inertia of the following matrices:
$$\begin{bmatrix} 2 & 1+j \\ 1-j & 2 \end{bmatrix}, \qquad \begin{bmatrix} 1 & 1+j \\ 1-j & 1 \end{bmatrix}.$$
Chapter 11

Linear Differential and Difference Equations

11.1 Differential Equations

In this section we study solutions of the linear homogeneous system of differential equations
$$\dot{x}(t) = Ax(t); \quad x(t_0) = x_0 \in \mathbb{R}^n \tag{11.1}$$
for $t \ge t_0$. This is known as an initial-value problem. We restrict our attention in this chapter only to the so-called time-invariant case, where the matrix $A \in \mathbb{R}^{n \times n}$ is constant and does not depend on $t$. The solution of (11.1) is then known always to exist and be unique. It can be described conveniently in terms of the matrix exponential.

Definition 11.1. For all $A \in \mathbb{R}^{n \times n}$, the matrix exponential $e^A \in \mathbb{R}^{n \times n}$ is defined by the power series
$$e^A = \sum_{k=0}^{+\infty}\frac{1}{k!}A^k. \tag{11.2}$$

The series (11.2) can be shown to converge for all $A$ (has radius of convergence equal to $+\infty$). The solution of (11.1) involves the matrix
$$e^{tA} = \sum_{k=0}^{+\infty}\frac{t^k}{k!}A^k, \tag{11.3}$$
which thus also converges for all $A$ and uniformly in $t$.
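As a numerical aside (the matrix below and the use of SciPy are assumptions, not part of the text), the defining series can be compared with a library routine for the matrix exponential.

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-2.0, -3.0]])     # assumed example
t = 0.7

partial_sum = np.zeros_like(A)
term = np.eye(2)                             # the k = 0 term of (11.3)
for k in range(1, 30):
    partial_sum = partial_sum + term
    term = term @ (t * A) / k                # next term (tA)^k / k!
print(np.allclose(partial_sum, expm(t * A))) # True: the series (11.3) converges for every A
```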
11.1.1 Properties of the matrix exponential

1. $e^0 = I$.

Proof: This follows immediately from Definition 11.1 by setting $A = 0$.

2. For all $A \in \mathbb{R}^{n \times n}$, $(e^A)^T = e^{A^T}$.

Proof: This follows immediately from Definition 11.1 and linearity of the transpose.
3. For all $A \in \mathbb{R}^{n \times n}$ and for all $t, \tau \in \mathbb{R}$, $e^{(t+\tau)A} = e^{tA}e^{\tau A} = e^{\tau A}e^{tA}$.

Proof: Note that
$$e^{(t+\tau)A} = I + (t+\tau)A + \frac{(t+\tau)^2}{2!}A^2 + \cdots$$
and
$$e^{tA}e^{\tau A} = \left(I + tA + \frac{t^2}{2!}A^2 + \cdots\right)\left(I + \tau A + \frac{\tau^2}{2!}A^2 + \cdots\right).$$
Compare like powers of $A$ in the above two equations and use the binomial theorem on $(t+\tau)^k$.

4. For all $A, B \in \mathbb{R}^{n \times n}$ and for all $t \in \mathbb{R}$, $e^{t(A+B)} = e^{tA}e^{tB} = e^{tB}e^{tA}$ if and only if $A$ and $B$ commute, i.e., $AB = BA$.

Proof: Note that
$$e^{t(A+B)} = I + t(A+B) + \frac{t^2}{2!}(A+B)^2 + \cdots$$
and
$$e^{tA}e^{tB} = \left(I + tA + \frac{t^2}{2!}A^2 + \cdots\right)\left(I + tB + \frac{t^2}{2!}B^2 + \cdots\right),$$
while
$$e^{tB}e^{tA} = \left(I + tB + \frac{t^2}{2!}B^2 + \cdots\right)\left(I + tA + \frac{t^2}{2!}A^2 + \cdots\right).$$
Compare like powers of $t$ in the first equation and the second or third and use the binomial theorem on $(A+B)^k$ and the commutativity of $A$ and $B$. (A numerical illustration of this commutativity caveat is sketched after property 7 below.)

5. For all $A \in \mathbb{R}^{n \times n}$ and for all $t \in \mathbb{R}$, $(e^{tA})^{-1} = e^{-tA}$.

Proof: Simply take $\tau = -t$ in property 3.

6. Let $\mathcal{L}$ denote the Laplace transform and $\mathcal{L}^{-1}$ the inverse Laplace transform. Then for all $A \in \mathbb{R}^{n \times n}$ and for all $t \in \mathbb{R}$,

(a) $\mathcal{L}\{e^{tA}\} = (sI - A)^{-1}$.

(b) $\mathcal{L}^{-1}\{(sI - A)^{-1}\} = e^{tA}$.

Proof: We prove only (a). Part (b) follows similarly.
$$\mathcal{L}\{e^{tA}\} = \int_0^{+\infty}e^{-st}e^{tA}\,dt = \int_0^{+\infty}e^{t(A-sI)}\,dt \quad \text{since $A$ and $sI$ commute}$$
$$= \int_0^{+\infty}\sum_{i=1}^ne^{(\lambda_i - s)t}x_iy_i^H\,dt \quad \text{assuming $A$ is diagonalizable}$$
$$= \sum_{i=1}^n\left[\int_0^{+\infty}e^{(\lambda_i - s)t}\,dt\right]x_iy_i^H$$
$$= \sum_{i=1}^n\frac{1}{s - \lambda_i}x_iy_i^H \quad \text{assuming } \mathrm{Re}\,s > \mathrm{Re}\,\lambda_i \text{ for } i \in \underline{n}$$
$$= (sI - A)^{-1}.$$

The matrix $(sI - A)^{-1}$ is called the resolvent of $A$ and is defined for all $s$ not in $\Lambda(A)$. Notice in the proof that we have assumed, for convenience, that $A$ is diagonalizable. If this is not the case, the scalar dyadic decomposition can be replaced by
$$e^{t(A - sI)} = \sum_{i=1}^mX_ie^{t(J_i - sI)}Y_i^H$$
using the JCF. All succeeding steps in the proof then follow in a straightforward way.

7. For all $A \in \mathbb{R}^{n \times n}$ and for all $t \in \mathbb{R}$, $\frac{d}{dt}(e^{tA}) = Ae^{tA} = e^{tA}A$.

Proof: Since the series (11.3) is uniformly convergent, it can be differentiated term-by-term, from which the result follows immediately. Alternatively, the formal definition
$$\frac{d}{dt}(e^{tA}) = \lim_{\Delta t \to 0}\frac{e^{(t+\Delta t)A} - e^{tA}}{\Delta t}$$
can be employed as follows. For any consistent matrix norm,
$$\left\|\frac{e^{(t+\Delta t)A} - e^{tA}}{\Delta t} - Ae^{tA}\right\| = \left\|\frac{1}{\Delta t}\left(e^{\Delta tA} - I\right)e^{tA} - Ae^{tA}\right\|$$
$$= \left\|\frac{1}{\Delta t}\left(\Delta tA + \frac{(\Delta t)^2}{2!}A^2 + \cdots\right)e^{tA} - Ae^{tA}\right\|$$
$$= \left\|\left(\frac{\Delta t}{2!}A^2 + \frac{(\Delta t)^2}{3!}A^3 + \cdots\right)e^{tA}\right\|$$
$$\le \Delta t\,\|A^2\|\,\|e^{tA}\|\left(\frac{1}{2!} + \frac{\Delta t}{3!}\|A\| + \frac{(\Delta t)^2}{4!}\|A\|^2 + \cdots\right)$$
$$\le \Delta t\,\|A^2\|\,\|e^{tA}\|\left(1 + \Delta t\|A\| + \frac{(\Delta t)^2}{2!}\|A\|^2 + \cdots\right) = \Delta t\,\|A^2\|\,\|e^{tA}\|\,e^{\Delta t\|A\|}.$$
For fixed $t$, the right-hand side above clearly goes to 0 as $\Delta t$ goes to 0. Thus, the limit exists and equals $Ae^{tA}$. A similar proof yields the limit $e^{tA}A$, or one can use the fact that $A$ commutes with any polynomial of $A$ of finite degree and hence with $e^{tA}$.
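The following hedged sketch (assumed example matrices) illustrates properties 3-5 above, including the commutativity caveat in property 4.

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[1.0, 2.0], [0.0, 3.0]])
B = 2.0 * A + np.eye(2)                      # B commutes with A
C = np.array([[0.0, 1.0], [1.0, 0.0]])       # C does not commute with A
t, tau = 0.4, -1.1

print(np.allclose(expm((t + tau) * A), expm(t * A) @ expm(tau * A)))   # property 3
print(np.allclose(np.linalg.inv(expm(t * A)), expm(-t * A)))           # property 5
print(np.allclose(expm(t * (A + B)), expm(t * A) @ expm(t * B)))       # True, since AB = BA
print(np.allclose(expm(t * (A + C)), expm(t * A) @ expm(t * C)))       # False, since AC != CA
```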
11.1.2 Homogeneous linear differential equations
Theorem 11.2. Let $A \in \mathbb{R}^{n \times n}$. The solution of the linear homogeneous initial-value problem
$$\dot{x}(t) = Ax(t); \quad x(t_0) = x_0 \in \mathbb{R}^n \tag{11.4}$$
for $t \ge t_0$ is given by
$$x(t) = e^{(t-t_0)A}x_0. \tag{11.5}$$

Proof: Differentiate (11.5) and use property 7 of the matrix exponential to get $\dot{x}(t) = Ae^{(t-t_0)A}x_0 = Ax(t)$. Also, $x(t_0) = e^{(t_0-t_0)A}x_0 = x_0$ so, by the fundamental existence and uniqueness theorem for ordinary differential equations, (11.5) is the solution of (11.4). $\square$
11.1.3 Inhomogeneous linear differential equations
Theorem 11.3. Let A E IR
nxn
, B E IR
nxm
and let the vectorvalued function u be given
and, say, continuous. Then the solution of the linear inhomogeneous initialvalue problem
x(t) = Ax(t) + Bu(t); x(to) = Xo E IR
n
for t ::: to is given by the variation of parameters formula
x(t) = e(tto)A
xo
+ t e(ts)A Bu(s) ds.
l t o
(11.6)
(11.7)
Proof: Differentiate (11.7) and again use property 7 of the matrix exponential. The general
formula
d l
q
(t) l
q
(t) af(x t) dq(t) dp(t)
 f(x, t) dx = ' dx + f(q(t), t)  f(p(t), t)
dt pet) pet) at dt dt
is used to get x(t) = Ae(tto)A Xo + Ir: Ae(ts)A Bu(s) ds + Bu(t) = Ax(t) + Bu(t). Also,
x(t
o
} = e(totolA Xo + 0 = Xo so, by the fundilm()ntill nnd uniqu()Oc:s:s theorem for
ordinary differential equations, (11.7) is the solution of (1l.6). 0
Remark 11.4. The proof above simply verifies the variation of parameters formula by
direct differentiation. The formula can be derived by means of an integrating factor "trick"
as follows. Premultiply the equation x  Ax = Bu by e
tA
to get
(11.8)
Now integrate (11.8) over the interval $[t_0, t]$:
$$\int_{t_0}^{t} \frac{d}{ds}\left(e^{-sA} x(s)\right) ds = \int_{t_0}^{t} e^{-sA} B u(s)\, ds.$$
Thus,
$$e^{-tA} x(t) - e^{-t_0 A} x(t_0) = \int_{t_0}^{t} e^{-sA} B u(s)\, ds$$
and hence
$$x(t) = e^{(t - t_0)A} x_0 + \int_{t_0}^{t} e^{(t - s)A} B u(s)\, ds.$$
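A minimal numerical sketch of the variation of parameters formula (11.7), using NumPy/SciPy with arbitrary illustrative data (the matrices, input, and initial condition below are not taken from any example in the text):

```python
# Evaluate x(t) = e^{(t-t0)A} x0 + int_{t0}^{t} e^{(t-s)A} B u(s) ds by quadrature
# and compare with a general-purpose ODE solver.
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
u = lambda s: np.array([np.sin(s)])            # scalar input (m = 1)
x0 = np.array([1.0, 0.0])
t0, t = 0.0, 2.0

s = np.linspace(t0, t, 2001)
vals = np.array([expm((t - si) * A) @ B @ u(si) for si in s])
ds = s[1] - s[0]
integral = ds * (0.5 * vals[0] + vals[1:-1].sum(axis=0) + 0.5 * vals[-1])  # trapezoidal rule
x_vp = expm((t - t0) * A) @ x0 + integral

sol = solve_ivp(lambda s, x: A @ x + B @ u(s), (t0, t), x0, rtol=1e-10, atol=1e-12)
print(x_vp, sol.y[:, -1])                      # the two results agree to several digits
```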
11.1.4 Linear matrix differential equations

Matrix-valued initial-value problems also occur frequently. The first is an obvious generalization of Theorem 11.2, and the proof is essentially the same.

Theorem 11.5. Let $A \in \mathbb{R}^{n \times n}$. The solution of the matrix linear homogeneous initial-value problem
$$\dot{X}(t) = AX(t); \qquad X(t_0) = C \in \mathbb{R}^{n \times n} \qquad (11.9)$$
for $t \ge t_0$ is given by
$$X(t) = e^{(t - t_0)A} C. \qquad (11.10)$$

In the matrix case, we can have coefficient matrices on both the right and left. For convenience, the following theorem is stated with initial time $t_0 = 0$.

Theorem 11.6. Let $A \in \mathbb{R}^{n \times n}$, $B \in \mathbb{R}^{m \times m}$, and $C \in \mathbb{R}^{n \times m}$. Then the matrix initial-value problem
$$\dot{X}(t) = AX(t) + X(t)B; \qquad X(0) = C \qquad (11.11)$$
has the solution $X(t) = e^{tA} C e^{tB}$.

Proof: Differentiate $e^{tA} C e^{tB}$ with respect to $t$ and use property 7 of the matrix exponential. The fact that $X(t)$ satisfies the initial condition is trivial. $\Box$

Corollary 11.7. Let $A, C \in \mathbb{R}^{n \times n}$. Then the matrix initial-value problem
$$\dot{X}(t) = AX(t) + X(t)A^T; \qquad X(0) = C \qquad (11.12)$$
has the solution $X(t) = e^{tA} C e^{tA^T}$.

When $C$ is symmetric in (11.12), $X(t)$ is symmetric and (11.12) is known as a Lyapunov differential equation. The initial-value problem (11.11) is known as a Sylvester differential equation.
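A small numerical check of Theorem 11.6, again with arbitrary illustrative matrices, confirms that $X(t) = e^{tA} C e^{tB}$ satisfies the Sylvester differential equation:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[-1.0, 2.0], [0.0, -3.0]])
B = np.array([[0.0, 1.0], [-1.0, 0.0]])
C = np.array([[1.0, 0.0], [2.0, 1.0]])

X = lambda t: expm(t * A) @ C @ expm(t * B)    # claimed solution of Xdot = AX + XB, X(0) = C

t, h = 0.7, 1e-6
Xdot = (X(t + h) - X(t - h)) / (2 * h)         # central-difference derivative
print(np.linalg.norm(Xdot - (A @ X(t) + X(t) @ B)))   # small (O(h^2) plus roundoff)
print(np.allclose(X(0.0), C))                  # initial condition holds
```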
11.1.5 Modal decompositions

Let $A \in \mathbb{R}^{n \times n}$ and suppose, for convenience, that it is diagonalizable (if $A$ is not diagonalizable, the rest of this subsection is easily generalized by using the JCF and the decomposition $A = \sum_{i=1}^{m} X_i J_i Y_i^H$ as discussed in Chapter 9). Then the solution $x(t)$ of (11.4) can be written
$$x(t) = e^{(t - t_0)A} x_0 = \left(\sum_{i=1}^{n} e^{\lambda_i (t - t_0)} x_i y_i^H\right) x_0 = \sum_{i=1}^{n} \left(y_i^H x_0\, e^{\lambda_i (t - t_0)}\right) x_i.$$
The $\lambda_i$'s are called the modal velocities and the right eigenvectors $x_i$ are called the modal directions. The decomposition above expresses the solution $x(t)$ as a weighted sum of its modal velocities and directions.

This modal decomposition can be expressed in a different looking but identical form if we write the initial condition $x_0$ as a weighted sum of the right eigenvectors $x_0 = \sum_{i=1}^{n} \alpha_i x_i$. Then
$$x(t) = \sum_{i=1}^{n} \left(\alpha_i e^{\lambda_i (t - t_0)}\right) x_i.$$
In the last equality we have used the fact that $y_i^H x_j = \delta_{ij}$.

Similarly, in the inhomogeneous case we can write
$$\int_{t_0}^{t} e^{(t - s)A} B u(s)\, ds = \sum_{i=1}^{n} \left(\int_{t_0}^{t} e^{\lambda_i (t - s)} y_i^H B u(s)\, ds\right) x_i.$$

11.1.6 Computation of the matrix exponential

JCF method

Let $A \in \mathbb{R}^{n \times n}$ and suppose $X \in \mathbb{R}^{n \times n}_n$ is such that $X^{-1} A X = J$, where $J$ is a JCF for $A$. Then
$$e^{tA} = e^{t X J X^{-1}} = X e^{tJ} X^{-1} = \begin{cases} \displaystyle\sum_{i=1}^{n} e^{\lambda_i t} x_i y_i^H & \text{if $A$ is diagonalizable,} \\[1ex] \displaystyle\sum_{i=1}^{m} X_i e^{t J_i} Y_i^H & \text{in general.} \end{cases}$$
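The modal decomposition is easy to reproduce numerically when $A$ is diagonalizable; the following minimal sketch (with an arbitrary $A$ and $x_0$) builds the right eigenvectors $x_i$ and the normalized left eigenvectors $y_i^H$ from an eigendecomposition:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-2.0, -3.0]])       # eigenvalues -1 and -2
x0 = np.array([1.0, 1.0])
t0, t = 0.0, 1.5

lam, X = np.linalg.eig(A)                      # columns of X are right eigenvectors x_i
YH = np.linalg.inv(X)                          # rows are y_i^H, normalized so y_i^H x_j = delta_ij

x_modal = sum((YH[i] @ x0) * np.exp(lam[i] * (t - t0)) * X[:, i] for i in range(len(lam)))
print(np.allclose(x_modal, expm((t - t0) * A) @ x0))   # True
```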
If $A$ is diagonalizable, it is then easy to compute $e^{tA}$ via the formula $e^{tA} = X e^{tJ} X^{-1}$ since $e^{tJ}$ is simply a diagonal matrix.

In the more general case, the problem clearly reduces simply to the computation of the exponential of a Jordan block. To be specific, let $J_i \in \mathbb{C}^{k \times k}$ be a Jordan block of the form
$$J_i = \begin{bmatrix} \lambda & 1 & & 0 \\ & \lambda & \ddots & \\ & & \ddots & 1 \\ 0 & & & \lambda \end{bmatrix} = \lambda I + N.$$
Clearly $\lambda I$ and $N$ commute. Thus, $e^{t J_i} = e^{t \lambda I} e^{t N}$ by property 4 of the matrix exponential. The diagonal part is easy: $e^{t \lambda I} = \operatorname{diag}(e^{\lambda t}, \ldots, e^{\lambda t})$. But $e^{tN}$ is almost as easy since $N$ is nilpotent of degree $k$.

Definition 11.8. A matrix $M \in \mathbb{R}^{n \times n}$ is nilpotent of degree (or index, or grade) $p$ if $M^p = 0$, while $M^{p-1} \ne 0$.

For the matrix $N$ defined above, it is easy to check that while $N$ has 1's along only its first superdiagonal (and 0's elsewhere), $N^2$ has 1's along only its second superdiagonal, and so forth. Finally, $N^{k-1}$ has a 1 in its $(1, k)$ element and has 0's everywhere else, and $N^k = 0$. Thus, the series expansion of $e^{tN}$ is finite, i.e.,
$$e^{tN} = I + tN + \frac{t^2}{2!}N^2 + \cdots + \frac{t^{k-1}}{(k-1)!}N^{k-1} = \begin{bmatrix} 1 & t & \frac{t^2}{2!} & \cdots & \frac{t^{k-1}}{(k-1)!} \\ & 1 & t & \ddots & \vdots \\ & & \ddots & \ddots & \frac{t^2}{2!} \\ & & & 1 & t \\ 0 & & & & 1 \end{bmatrix}.$$
Thus,
$$e^{t J_i} = \begin{bmatrix} e^{\lambda t} & t e^{\lambda t} & \frac{t^2}{2!} e^{\lambda t} & \cdots & \frac{t^{k-1}}{(k-1)!} e^{\lambda t} \\ & e^{\lambda t} & t e^{\lambda t} & \ddots & \vdots \\ & & \ddots & \ddots & \frac{t^2}{2!} e^{\lambda t} \\ & & & e^{\lambda t} & t e^{\lambda t} \\ 0 & & & & e^{\lambda t} \end{bmatrix}.$$
In the case when $\lambda$ is complex, a real version of the above can be worked out.
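For a single Jordan block the finite series above is easy to verify numerically; the following sketch (with arbitrary $\lambda$, $k$, and $t$) compares it with SciPy's general-purpose expm:

```python
import numpy as np
from math import factorial
from scipy.linalg import expm

lam, k, t = -1.0, 4, 0.3
N = np.diag(np.ones(k - 1), 1)                 # nilpotent part: 1's on the first superdiagonal
J = lam * np.eye(k) + N                        # k x k Jordan block

EtN = sum((t ** q / factorial(q)) * np.linalg.matrix_power(N, q) for q in range(k))
EtJ = np.exp(lam * t) * EtN                    # e^{tJ} = e^{lambda t} e^{tN}
print(np.allclose(EtJ, expm(t * J)))           # True
```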
Example 11.9. Let $A = \begin{bmatrix} -4 & 4 \\ -1 & 0 \end{bmatrix}$. Then $\Lambda(A) = \{-2, -2\}$ and
$$e^{tA} = X e^{tJ} X^{-1} = \begin{bmatrix} 2 & 1 \\ 1 & 1 \end{bmatrix} \exp\left(t \begin{bmatrix} -2 & 1 \\ 0 & -2 \end{bmatrix}\right) \begin{bmatrix} 1 & -1 \\ -1 & 2 \end{bmatrix} = \begin{bmatrix} 2 & 1 \\ 1 & 1 \end{bmatrix} \begin{bmatrix} e^{-2t} & t e^{-2t} \\ 0 & e^{-2t} \end{bmatrix} \begin{bmatrix} 1 & -1 \\ -1 & 2 \end{bmatrix}.$$
Interpolation method

This method is numerically unstable in finite-precision arithmetic but is quite effective for hand calculation in small-order problems. The method is stated and illustrated for the exponential function but applies equally well to other functions.

Given $A \in \mathbb{R}^{n \times n}$ and $f(\lambda) = e^{t\lambda}$, compute $f(A) = e^{tA}$, where $t$ is a fixed scalar. Suppose the characteristic polynomial of $A$ can be written as $\pi(\lambda) = \prod_{i=1}^{m} (\lambda - \lambda_i)^{n_i}$, where the $\lambda_i$'s are distinct. Define
$$g(\lambda) = \alpha_0 + \alpha_1 \lambda + \cdots + \alpha_{n-1} \lambda^{n-1},$$
where $\alpha_0, \ldots, \alpha_{n-1}$ are $n$ constants that are to be determined. They are, in fact, the unique solution of the $n$ equations:
$$g^{(k)}(\lambda_i) = f^{(k)}(\lambda_i); \qquad k = 0, 1, \ldots, n_i - 1, \quad i \in \underline{m}.$$
Here, the superscript $(k)$ denotes the $k$th derivative with respect to $\lambda$. With the $\alpha_i$'s then known, the function $g$ is known and $f(A) = g(A)$. The motivation for this method is the Cayley-Hamilton Theorem, Theorem 9.3, which says that all powers of $A$ greater than $n - 1$ can be expressed as linear combinations of $A^k$ for $k = 0, 1, \ldots, n - 1$. Thus, all the terms of order greater than $n - 1$ in the power series for $e^{tA}$ can be written in terms of these lower-order powers as well. The polynomial $g$ gives the appropriate linear combination.
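In the simplest situation, where the $n$ eigenvalues are distinct, the defining conditions reduce to $g(\lambda_i) = e^{t\lambda_i}$ and the $\alpha_i$'s solve a Vandermonde system. A minimal sketch of this special case (with an arbitrary $A$ and $t$; repeated eigenvalues would require the derivative conditions above):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-2.0, -3.0]])       # distinct eigenvalues -1 and -2
t = 0.5
lam = np.linalg.eigvals(A)

V = np.vander(lam, increasing=True)            # row i: [1, lambda_i, lambda_i^2, ...]
alpha = np.linalg.solve(V, np.exp(t * lam))    # only the conditions g(lambda_i) = f(lambda_i)

gA = sum(alpha[j] * np.linalg.matrix_power(A, j) for j in range(len(alpha)))
print(np.allclose(gA, expm(t * A)))            # True
```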
Example 11.10. Let
$$A = \begin{bmatrix} -1 & 1 & 0 \\ 0 & -1 & 1 \\ 0 & 0 & -1 \end{bmatrix}$$
and $f(\lambda) = e^{t\lambda}$. Then $\pi(\lambda) = (\lambda + 1)^3$, so $m = 1$ and $n_1 = 3$.

Let $g(\lambda) = \alpha_0 + \alpha_1 \lambda + \alpha_2 \lambda^2$. Then the three equations for the $\alpha_i$'s are given by
$$g(-1) = f(-1) \implies \alpha_0 - \alpha_1 + \alpha_2 = e^{-t},$$
$$g'(-1) = f'(-1) \implies \alpha_1 - 2\alpha_2 = t e^{-t},$$
$$g''(-1) = f''(-1) \implies 2\alpha_2 = t^2 e^{-t}.$$
Solving for the $\alpha_i$'s, we find
$$\alpha_0 = e^{-t} + t e^{-t} + \tfrac{t^2}{2} e^{-t}, \qquad \alpha_1 = t e^{-t} + t^2 e^{-t}, \qquad \alpha_2 = \tfrac{t^2}{2} e^{-t}.$$
Thus,
$$f(A) = e^{tA} = g(A) = \alpha_0 I + \alpha_1 A + \alpha_2 A^2.$$

Example 11.11. Let $A = \begin{bmatrix} -4 & 4 \\ -1 & 0 \end{bmatrix}$ and $f(\lambda) = e^{t\lambda}$. Then $\pi(\lambda) = (\lambda + 2)^2$ so $m = 1$ and $n_1 = 2$.

Let $g(\lambda) = \alpha_0 + \alpha_1 \lambda$. Then the defining equations for the $\alpha_i$'s are given by
$$g(-2) = f(-2) \implies \alpha_0 - 2\alpha_1 = e^{-2t},$$
$$g'(-2) = f'(-2) \implies \alpha_1 = t e^{-2t}.$$
Solving for the $\alpha_i$'s, we find
$$\alpha_0 = e^{-2t} + 2t e^{-2t}, \qquad \alpha_1 = t e^{-2t}.$$
Thus,
$$f(A) = e^{tA} = g(A) = \alpha_0 I + \alpha_1 A = \left(e^{-2t} + 2t e^{-2t}\right) \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} + t e^{-2t} \begin{bmatrix} -4 & 4 \\ -1 & 0 \end{bmatrix} = \begin{bmatrix} e^{-2t} - 2t e^{-2t} & 4t e^{-2t} \\ -t e^{-2t} & e^{-2t} + 2t e^{-2t} \end{bmatrix}.$$

Other methods
1. Use $e^{tA} = \mathcal{L}^{-1}\{(sI - A)^{-1}\}$ and techniques for inverse Laplace transforms. This is quite effective for small-order problems, but general nonsymbolic computational techniques are numerically unstable since the problem is theoretically equivalent to knowing precisely a JCF.

2. Use Padé approximation. There is an extensive literature on approximating certain nonlinear functions by rational functions. The matrix analogue yields $e^A \approx$
$D^{-1}(A) N(A)$, where $D(A) = \delta_0 I + \delta_1 A + \cdots + \delta_p A^p$ and $N(A) = \nu_0 I + \nu_1 A + \cdots + \nu_q A^q$. Explicit formulas are known for the coefficients of the numerator and denominator polynomials of various orders. Unfortunately, a Padé approximation for the exponential is accurate only in a neighborhood of the origin; in the matrix case this means when $\|A\|$ is sufficiently small. This can be arranged by scaling $A$, say, by multiplying it by $1/2^k$ for sufficiently large $k$ and using the fact that $e^A = \left(e^{(1/2^k)A}\right)^{2^k}$ (a small numerical sketch of this scaling and squaring procedure follows this list). Numerical loss of accuracy can occur in this procedure from the successive squarings.

3. Reduce $A$ to (real) Schur form $S$ via the unitary similarity $U$ and use $e^A = U e^S U^H$ and successive recursions up the superdiagonals of the (quasi) upper triangular matrix $e^S$.

4. Many methods are outlined in, for example, [19]. Reliable and efficient computation of matrix functions such as $e^A$ and $\log(A)$ remains a fertile area for research.
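A minimal sketch of the scaling and squaring idea mentioned in item 2 above; a short Taylor series stands in for the Padé approximant purely for illustration, and the test matrix is an arbitrary choice:

```python
import numpy as np
from math import factorial
from scipy.linalg import expm

def expm_scale_and_square(A, terms=12):
    # choose k so that ||A / 2^k||_1 <= 1
    k = max(0, int(np.ceil(np.log2(max(np.linalg.norm(A, 1), 1.0)))))
    As = A / 2 ** k
    E = sum(np.linalg.matrix_power(As, j) / factorial(j) for j in range(terms))
    for _ in range(k):                          # e^A = (e^{A/2^k})^{2^k}
        E = E @ E
    return E

A = np.array([[1.0, 20.0], [0.0, -2.0]])
print(np.allclose(expm_scale_and_square(A), expm(A)))   # True
```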
11.2 Difference Equations

In this section we outline solutions of discrete-time analogues of the linear differential equations of the previous section. Linear discrete-time systems, modeled by systems of difference equations, exhibit many parallels to the continuous-time differential equation case, and this observation is exploited frequently.

11.2.1 Homogeneous linear difference equations

Theorem 11.12. Let $A \in \mathbb{R}^{n \times n}$. The solution of the linear homogeneous system of difference equations
$$x_{k+1} = A x_k; \qquad x_0 \in \mathbb{R}^n \qquad (11.13)$$
for $k \ge 0$ is given by
$$x_k = A^k x_0. \qquad (11.14)$$

Proof: The proof is almost immediate upon substitution of (11.14) into (11.13). $\Box$

Remark 11.13. Again, we restrict our attention only to the so-called time-invariant case, where the matrix $A$ in (11.13) is constant and does not depend on $k$. We could also consider an arbitrary "initial time" $k_0$, but since the system is time-invariant, and since we want to keep the formulas "clean" (i.e., no double subscripts), we have chosen $k_0 = 0$ for convenience.

11.2.2 Inhomogeneous linear difference equations

Theorem 11.14. Let $A \in \mathbb{R}^{n \times n}$, $B \in \mathbb{R}^{n \times m}$ and suppose $\{u_k\}_{k=0}^{+\infty}$ is a given sequence of $m$-vectors. Then the solution of the inhomogeneous initial-value problem
$$x_{k+1} = A x_k + B u_k; \qquad x_0 \in \mathbb{R}^n \qquad (11.15)$$
is given by
$$x_k = A^k x_0 + \sum_{j=0}^{k-1} A^{k-j-1} B u_j, \qquad k \ge 0. \qquad (11.16)$$

Proof: The proof is again almost immediate upon substitution of (11.16) into (11.15). $\Box$

11.2.3 Computation of matrix powers

It is clear that solution of linear systems of difference equations involves computation of $A^k$. One solution method, which is numerically unstable but sometimes useful for hand calculation, is to use z-transforms, by analogy with the use of Laplace transforms to compute a matrix exponential. One definition of the z-transform of a sequence $\{g_k\}$ is
$$Z(\{g_k\}) = \sum_{k=0}^{+\infty} g_k z^{-k}.$$
Assuming $|z| > \max_{\lambda \in \Lambda(A)} |\lambda|$, the z-transform of the sequence $\{A^k\}$ is then given by
$$Z(\{A^k\}) = \sum_{k=0}^{+\infty} z^{-k} A^k = I + \frac{1}{z}A + \frac{1}{z^2}A^2 + \cdots = \left(I - \frac{1}{z}A\right)^{-1} = z(zI - A)^{-1}.$$

Methods based on the JCF are sometimes useful, again mostly for small-order problems. Assume that $A \in \mathbb{R}^{n \times n}$ and let $X \in \mathbb{R}^{n \times n}_n$ be such that $X^{-1} A X = J$, where $J$ is a JCF for $A$. Then
$$A^k = (X J X^{-1})^k = X J^k X^{-1} = \begin{cases} \displaystyle\sum_{i=1}^{n} \lambda_i^k x_i y_i^H & \text{if $A$ is diagonalizable,} \\[1ex] \displaystyle\sum_{i=1}^{m} X_i J_i^k Y_i^H & \text{in general.} \end{cases}$$
If $A$ is diagonalizable, it is then easy to compute $A^k$ via the formula $A^k = X J^k X^{-1}$ since $J^k$ is simply a diagonal matrix.
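The solution formula (11.16) is easily checked against direct iteration of the recursion; in the following minimal sketch the matrices, inputs, and initial condition are arbitrary illustrative choices:

```python
import numpy as np

A = np.array([[0.5, 1.0], [0.0, -0.3]])
B = np.array([[0.0], [1.0]])
x0 = np.array([1.0, 2.0])
u = [np.array([np.cos(0.2 * j)]) for j in range(10)]   # u_0, ..., u_9
K = 10

# closed form (11.16)
x_formula = np.linalg.matrix_power(A, K) @ x0 + sum(
    np.linalg.matrix_power(A, K - j - 1) @ B @ u[j] for j in range(K))

# direct recursion x_{k+1} = A x_k + B u_k
x = x0.copy()
for j in range(K):
    x = A @ x + B @ u[j]

print(np.allclose(x_formula, x))    # True
```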
In the general case, the problem again reduces to the computation of the power of a Jordan block. To be specific, let $J_i \in \mathbb{C}^{p \times p}$ be a Jordan block of the form
$$J_i = \begin{bmatrix} \lambda & 1 & & 0 \\ & \lambda & \ddots & \\ & & \ddots & 1 \\ 0 & & & \lambda \end{bmatrix}.$$
Writing $J_i = \lambda I + N$ and noting that $\lambda I$ and the nilpotent matrix $N$ commute, it is then straightforward to apply the binomial theorem to $(\lambda I + N)^k$ and verify that
$$J_i^k = \begin{bmatrix} \lambda^k & k\lambda^{k-1} & \binom{k}{2}\lambda^{k-2} & \cdots & \binom{k}{p-1}\lambda^{k-p+1} \\ & \lambda^k & k\lambda^{k-1} & \ddots & \vdots \\ & & \ddots & \ddots & \binom{k}{2}\lambda^{k-2} \\ & & & \lambda^k & k\lambda^{k-1} \\ 0 & & & & \lambda^k \end{bmatrix}.$$
The symbol $\binom{k}{q}$ has the usual definition of $\frac{k!}{q!(k-q)!}$ and is to be interpreted as 0 if $k < q$.

In the case when $\lambda$ is complex, a real version of the above can be worked out.

Example 11.15. Let $A = \begin{bmatrix} -4 & 4 \\ -1 & 0 \end{bmatrix}$. Then
$$A^k = X J^k X^{-1} = \begin{bmatrix} 2 & 1 \\ 1 & 1 \end{bmatrix} \begin{bmatrix} (-2)^k & k(-2)^{k-1} \\ 0 & (-2)^k \end{bmatrix} \begin{bmatrix} 1 & -1 \\ -1 & 2 \end{bmatrix} = \begin{bmatrix} (-2)^{k-1}(-2 - 2k) & k(-2)^{k+1} \\ -k(-2)^{k-1} & (-2)^{k-1}(2k - 2) \end{bmatrix}.$$
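The closed form in Example 11.15 can be checked numerically against direct matrix powers; a quick sketch:

```python
import numpy as np

A = np.array([[-4.0, 4.0], [-1.0, 0.0]])

def Ak(k):
    return np.array([[(-2.0) ** (k - 1) * (-2 - 2 * k), k * (-2.0) ** (k + 1)],
                     [-k * (-2.0) ** (k - 1), (-2.0) ** (k - 1) * (2 * k - 2)]])

print(all(np.allclose(Ak(k), np.linalg.matrix_power(A, k)) for k in range(1, 8)))   # True
```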
Basic analogues of other methods such as those mentioned in Section 11.1.6 can also be derived for the computation of matrix powers, but again no universally "best" method exists. For an erudite discussion of the state of the art, see [11, Ch. 18].

11.3 Higher-Order Equations

It is well known that a higher-order (scalar) linear differential equation can be converted to a first-order linear system. Consider, for example, the initial-value problem
$$y^{(n)}(t) + a_{n-1} y^{(n-1)}(t) + \cdots + a_1 \dot{y}(t) + a_0 y(t) = \phi(t) \qquad (11.17)$$
with $\phi(t)$ a given function and $n$ initial conditions
$$y(0) = c_0, \quad \dot{y}(0) = c_1, \quad \ldots, \quad y^{(n-1)}(0) = c_{n-1}. \qquad (11.18)$$
Here, $y^{(m)}$ denotes the $m$th derivative of $y$ with respect to $t$. Define a vector $x(t) \in \mathbb{R}^n$ with components $x_1(t) = y(t)$, $x_2(t) = \dot{y}(t)$, $\ldots$, $x_n(t) = y^{(n-1)}(t)$. Then
$$\dot{x}_1(t) = x_2(t) = \dot{y}(t),$$
$$\dot{x}_2(t) = x_3(t) = \ddot{y}(t),$$
$$\vdots$$
$$\dot{x}_{n-1}(t) = x_n(t) = y^{(n-1)}(t),$$
$$\dot{x}_n(t) = y^{(n)}(t) = -a_0 y(t) - a_1 \dot{y}(t) - \cdots - a_{n-1} y^{(n-1)}(t) + \phi(t) = -a_0 x_1(t) - a_1 x_2(t) - \cdots - a_{n-1} x_n(t) + \phi(t).$$
These equations can then be rewritten as the first-order linear system
$$\dot{x}(t) = \begin{bmatrix} 0 & 1 & & 0 \\ & 0 & \ddots & \\ & & \ddots & 1 \\ -a_0 & -a_1 & \cdots & -a_{n-1} \end{bmatrix} x(t) + \begin{bmatrix} 0 \\ \vdots \\ 0 \\ 1 \end{bmatrix} \phi(t). \qquad (11.19)$$
The initial conditions take the form $x(0) = c = [c_0, c_1, \ldots, c_{n-1}]^T$.

Note that $\det(\lambda I - A) = \lambda^n + a_{n-1}\lambda^{n-1} + \cdots + a_1 \lambda + a_0$. However, the companion matrix $A$ in (11.19) possesses many nasty numerical properties for even moderately sized $n$ and, as mentioned before, is often well worth avoiding, at least for computational purposes.

A similar procedure holds for the conversion of a higher-order difference equation, with $n$ initial conditions, into a linear first-order difference equation with (vector) initial condition.
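Forming the companion matrix in (11.19) and solving the resulting first-order system numerically is straightforward; the sketch below uses arbitrary coefficients $a_i$, forcing $\phi$, and initial data:

```python
import numpy as np
from scipy.integrate import solve_ivp

a = np.array([2.0, 3.0, 1.0])              # a_0, a_1, a_2, so n = 3
phi = lambda t: np.sin(t)
c = np.array([1.0, 0.0, 0.0])              # y(0), ydot(0), yddot(0)
n = len(a)

Acomp = np.zeros((n, n))
Acomp[:-1, 1:] = np.eye(n - 1)             # 1's on the superdiagonal
Acomp[-1, :] = -a                          # last row: -a_0, ..., -a_{n-1}
b = np.zeros(n); b[-1] = 1.0               # phi enters only the last equation

sol = solve_ivp(lambda t, x: Acomp @ x + b * phi(t), (0.0, 5.0), c, rtol=1e-8)
y = sol.y[0]                               # x_1(t) = y(t)
print(y[-1])                               # approximate value of y(5)
```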
EXERCISES

1. Let $P \in \mathbb{R}^{n \times n}$ be a projection. Show that $e^P \approx I + 1.718\,P$.

2. Suppose $x, y \in \mathbb{R}^n$ and let $A = x y^T$. Further, let $\alpha = x^T y$. Show that $e^{tA} = I + g(t, \alpha)\, x y^T$, where
$$g(t, \alpha) = \begin{cases} \frac{1}{\alpha}\left(e^{\alpha t} - 1\right) & \text{if } \alpha \ne 0, \\ t & \text{if } \alpha = 0. \end{cases}$$

3. Let
$$A = \begin{bmatrix} I & X \\ 0 & -I \end{bmatrix},$$
where $X \in \mathbb{R}^{m \times n}$ is arbitrary. Show that
$$e^A = \begin{bmatrix} eI & (\sinh 1)\,X \\ 0 & e^{-1}I \end{bmatrix}.$$

4. Let $K$ denote the skew-symmetric matrix
$$K = \begin{bmatrix} 0 & I_n \\ -I_n & 0 \end{bmatrix},$$
where $I_n$ denotes the $n \times n$ identity matrix. A matrix $A \in \mathbb{R}^{2n \times 2n}$ is said to be Hamiltonian if $K^{-1} A^T K = -A$ and to be symplectic if $K^{-1} A^T K = A^{-1}$.

(a) Suppose $H$ is Hamiltonian and let $\lambda$ be an eigenvalue of $H$. Show that $-\lambda$ must also be an eigenvalue of $H$.

(b) Suppose $S$ is symplectic and let $\lambda$ be an eigenvalue of $S$. Show that $1/\lambda$ must also be an eigenvalue of $S$.

(c) Suppose that $H$ is Hamiltonian and $S$ is symplectic. Show that $S^{-1} H S$ must be Hamiltonian.

(d) Suppose $H$ is Hamiltonian. Show that $e^H$ must be symplectic.

5. Let $\alpha, \beta \in \mathbb{R}$ and
$$A = \begin{bmatrix} \alpha & \beta \\ -\beta & \alpha \end{bmatrix}.$$
Then show that
$$e^{tA} = \begin{bmatrix} e^{\alpha t} \cos \beta t & e^{\alpha t} \sin \beta t \\ -e^{\alpha t} \sin \beta t & e^{\alpha t} \cos \beta t \end{bmatrix}.$$

6. Find a general expression for

7. Find $e^{tA}$ when $A =$

8. Let

(a) Solve the differential equation
$$\dot{x} = Ax; \qquad x(0) = x_0.$$
(b) Solve the differential equation
$$\dot{x} = Ax + b; \qquad x(0) = x_0.$$

9. Consider the initial-value problem
$$\dot{x}(t) = Ax(t); \qquad x(0) = x_0$$
for $t \ge 0$. Suppose that $A \in \mathbb{R}^{n \times n}$ is skew-symmetric and let $\alpha = \|x_0\|_2$. Show that $\|x(t)\|_2 = \alpha$ for all $t > 0$.

10. Consider the $n \times n$ matrix initial-value problem
$$\dot{X}(t) = AX(t) - X(t)A; \qquad X(0) = C.$$
Show that the eigenvalues of the solution $X(t)$ of this problem are the same as those of $C$ for all $t$.

11. The year is 2004 and there are three large "free trade zones" in the world: Asia (A), Europe (E), and the Americas (R). Suppose certain multinational companies have total assets of $40 trillion of which $20 trillion is in E and $20 trillion is in R. Each year half of the Americas' money stays home, a quarter goes to Europe, and a quarter goes to Asia. For Europe and Asia, half stays home and half goes to the Americas.

(a) Find the matrix $M$ that gives
$$\begin{bmatrix} A \\ E \\ R \end{bmatrix}_{\text{year } k+1} = M \begin{bmatrix} A \\ E \\ R \end{bmatrix}_{\text{year } k}.$$

(b) Find the eigenvalues and right eigenvectors of $M$.

(c) Find the distribution of the companies' assets at year $k$.

(d) Find the limiting distribution of the $40 trillion as the universe ends, i.e., as $k \to +\infty$ (i.e., around the time the Cubs win a World Series).

(Exercise adapted from Problem 5.3.11 in [24].)

12. (a) Find the solution of the initial-value problem
$$\ddot{y}(t) + 2\dot{y}(t) + y(t) = 0; \qquad y(0) = 1, \quad \dot{y}(0) = 0.$$

(b) Consider the difference equation
$$z_{k+2} + 2z_{k+1} + z_k = 0.$$
If $z_0 = 1$ and $z_1 = 2$, what is the value of $z_{1000}$? What is the value of $z_k$ in general?
Chapter 12

Generalized Eigenvalue Problems

12.1 The Generalized Eigenvalue/Eigenvector Problem

In this chapter we consider the generalized eigenvalue problem
$$Ax = \lambda Bx,$$
where $A, B \in \mathbb{C}^{n \times n}$. The standard eigenvalue problem considered in Chapter 9 obviously corresponds to the special case that $B = I$.

Definition 12.1. A nonzero vector $x \in \mathbb{C}^n$ is a right generalized eigenvector of the pair $(A, B)$ with $A, B \in \mathbb{C}^{n \times n}$ if there exists a scalar $\lambda \in \mathbb{C}$, called a generalized eigenvalue, such that
$$Ax = \lambda Bx. \qquad (12.1)$$
Similarly, a nonzero vector $y \in \mathbb{C}^n$ is a left generalized eigenvector corresponding to an eigenvalue $\lambda$ if
$$y^H A = \lambda\, y^H B. \qquad (12.2)$$

When the context is such that no confusion can arise, the adjective "generalized" is usually dropped. As with the standard eigenvalue problem, if $x$ [$y$] is a right [left] eigenvector, then so is $\alpha x$ [$\alpha y$] for any nonzero scalar $\alpha \in \mathbb{C}$.

Definition 12.2. The matrix $A - \lambda B$ is called a matrix pencil (or pencil of the matrices $A$ and $B$).

As with the standard eigenvalue problem, eigenvalues for the generalized eigenvalue problem occur where the matrix pencil $A - \lambda B$ is singular.

Definition 12.3. The polynomial $\pi(\lambda) = \det(A - \lambda B)$ is called the characteristic polynomial of the matrix pair $(A, B)$. The roots of $\pi(\lambda)$ are the eigenvalues of the associated generalized eigenvalue problem.

Remark 12.4. When $A, B \in \mathbb{R}^{n \times n}$, the characteristic polynomial is obviously real, and hence nonreal eigenvalues must occur in complex conjugate pairs.
Remark 12.5. If $B = I$ (or in general when $B$ is nonsingular), then $\pi(\lambda)$ is a polynomial of degree $n$, and hence there are $n$ eigenvalues associated with the pencil $A - \lambda B$. However, when $B \ne I$, in particular, when $B$ is singular, there may be $0$, $k \in \underline{n}$, or infinitely many eigenvalues associated with the pencil $A - \lambda B$. For example, suppose
$$A = \begin{bmatrix} 1 & 0 \\ 0 & \alpha \end{bmatrix}, \qquad B = \begin{bmatrix} 1 & 0 \\ 0 & \beta \end{bmatrix}, \qquad (12.3)$$
where $\alpha$ and $\beta$ are scalars. Then the characteristic polynomial is
$$\det(A - \lambda B) = (1 - \lambda)(\alpha - \beta\lambda)$$
and there are several cases to consider.

Case 1: $\alpha \ne 0$, $\beta \ne 0$. There are two eigenvalues, $1$ and $\frac{\alpha}{\beta}$.
Case 2: $\alpha = 0$, $\beta \ne 0$. There are two eigenvalues, $1$ and $0$.
Case 3: $\alpha \ne 0$, $\beta = 0$. There is only one eigenvalue, $1$ (of multiplicity 1).
Case 4: $\alpha = 0$, $\beta = 0$. All $\lambda \in \mathbb{C}$ are eigenvalues since $\det(A - \lambda B) \equiv 0$.

Definition 12.6. If $\det(A - \lambda B)$ is not identically zero, the pencil $A - \lambda B$ is said to be regular; otherwise, it is said to be singular.

Note that if $\mathcal{N}(A) \cap \mathcal{N}(B) \ne 0$, the associated matrix pencil is singular (as in Case 4 above).

Associated with any matrix pencil $A - \lambda B$ is a reciprocal pencil $B - \mu A$ and corresponding generalized eigenvalue problem. Clearly the reciprocal pencil has eigenvalues $\mu = \frac{1}{\lambda}$. It is instructive to consider the reciprocal pencil associated with the example in Remark 12.5. With $A$ and $B$ as in (12.3), the characteristic polynomial is
$$\det(B - \mu A) = (1 - \mu)(\beta - \alpha\mu)$$
and there are again four cases to consider.

Case 1: $\alpha \ne 0$, $\beta \ne 0$. There are two eigenvalues, $1$ and $\frac{\beta}{\alpha}$.
Case 2: $\alpha = 0$, $\beta \ne 0$. There is only one eigenvalue, $1$ (of multiplicity 1).
Case 3: $\alpha \ne 0$, $\beta = 0$. There are two eigenvalues, $1$ and $0$.
Case 4: $\alpha = 0$, $\beta = 0$. All $\lambda \in \mathbb{C}$ are eigenvalues since $\det(B - \mu A) \equiv 0$.

At least for the case of regular pencils, it is apparent where the "missing" eigenvalues have gone in Cases 2 and 3. That is to say, there is a second eigenvalue "at infinity" for Case 3 of $A - \lambda B$, with its reciprocal eigenvalue being $0$ in Case 3 of the reciprocal pencil $B - \mu A$. A similar reciprocal symmetry holds for Case 2.

While there are applications in system theory and control where singular pencils appear, only the case of regular pencils is considered in the remainder of this chapter. Note that $A$ and/or $B$ may still be singular. If $B$ is singular, the pencil $A - \lambda B$ always has
fewer than $n$ eigenvalues. If $B$ is nonsingular, the pencil $A - \lambda B$ always has precisely $n$ eigenvalues, since the generalized eigenvalue problem is then easily seen to be equivalent to the standard eigenvalue problem $B^{-1}Ax = \lambda x$ (or $AB^{-1}w = \lambda w$). However, this turns out to be a very poor numerical procedure for handling the generalized eigenvalue problem if $B$ is even moderately ill conditioned with respect to inversion. Numerical methods that work directly on $A$ and $B$ are discussed in standard textbooks on numerical linear algebra; see, for example, [7, Sec. 7.7] or [25, Sec. 6.7].

12.2 Canonical Forms

Just as for the standard eigenvalue problem, canonical forms are available for the generalized eigenvalue problem. Since the latter involves a pair of matrices, we now deal with equivalencies rather than similarities, and the first theorem deals with what happens to eigenvalues and eigenvectors under equivalence.

Theorem 12.7. Let $A, B, Q, Z \in \mathbb{C}^{n \times n}$ with $Q$ and $Z$ nonsingular. Then

1. the eigenvalues of the problems $A - \lambda B$ and $QAZ - \lambda QBZ$ are the same (the two problems are said to be equivalent).
2. if $x$ is a right eigenvector of $A - \lambda B$, then $Z^{-1}x$ is a right eigenvector of $QAZ - \lambda QBZ$.
3. if $y$ is a left eigenvector of $A - \lambda B$, then $Q^{-H}y$ is a left eigenvector of $QAZ - \lambda QBZ$.

Proof:

1. $\det(QAZ - \lambda QBZ) = \det[Q(A - \lambda B)Z] = \det Q \det Z \det(A - \lambda B)$. Since $\det Q$ and $\det Z$ are nonzero, the result follows.
2. The result follows by noting that $(A - \lambda B)x = 0$ if and only if $Q(A - \lambda B)Z(Z^{-1}x) = 0$.
3. Again, the result follows easily by noting that $y^H(A - \lambda B) = 0$ if and only if $(Q^{-H}y)^H Q(A - \lambda B)Z = 0$. $\Box$

The first canonical form is an analogue of Schur's Theorem and forms, in fact, the theoretical foundation for the QZ algorithm, which is the generally preferred method for solving the generalized eigenvalue problem; see, for example, [7, Sec. 7.7] or [25, Sec. 6.7].

Theorem 12.8. Let $A, B \in \mathbb{C}^{n \times n}$. Then there exist unitary matrices $Q, Z \in \mathbb{C}^{n \times n}$ such that
$$QAZ = T_\alpha, \qquad QBZ = T_\beta,$$
where $T_\alpha$ and $T_\beta$ are upper triangular.

By Theorem 12.7, the eigenvalues of the pencil $A - \lambda B$ are then the ratios of the diagonal elements of $T_\alpha$ to the corresponding diagonal elements of $T_\beta$, with the understanding that a zero diagonal element of $T_\beta$ corresponds to an infinite generalized eigenvalue.

There is also an analogue of the Murnaghan-Wintner Theorem for real matrices.
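In floating point, Theorem 12.8 corresponds to the QZ factorization; a minimal sketch with SciPy (the small pencil below is an arbitrary illustration with a singular $B$, so that one eigenvalue is infinite):

```python
import numpy as np
from scipy.linalg import qz, eig

A = np.array([[1.0, 0.0], [0.0, 3.0]])
B = np.array([[1.0, 0.0], [0.0, 0.0]])     # singular B: one infinite eigenvalue

Ta, Tb, Q, Z = qz(A, B, output='complex')  # A = Q Ta Z^H, B = Q Tb Z^H, Ta and Tb triangular
print(np.diag(Ta), np.diag(Tb))            # eigenvalues are the elementwise ratios;
                                           # a zero diagonal entry of Tb signals an infinite eigenvalue

print(eig(A, B, right=False))              # generalized eigenvalues directly (infinite ones appear as inf)
```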
Theorem 12.9. Let $A, B \in \mathbb{R}^{n \times n}$. Then there exist orthogonal matrices $Q, Z \in \mathbb{R}^{n \times n}$ such that
$$QAZ = S, \qquad QBZ = T,$$
where $T$ is upper triangular and $S$ is quasi-upper-triangular.

When $S$ has a $2 \times 2$ diagonal block, the $2 \times 2$ subpencil formed with the corresponding $2 \times 2$ diagonal subblock of $T$ has a pair of complex conjugate eigenvalues. Otherwise, real eigenvalues are given as above by the ratios of diagonal elements of $S$ to corresponding elements of $T$.

There is also an analogue of the Jordan canonical form called the Kronecker canonical form (KCF). A full description of the KCF, including analogues of principal vectors and so forth, is beyond the scope of this book. In this chapter, we present only statements of the basic theorems and some examples. The first theorem pertains only to "square" regular pencils, while the full KCF in all its generality applies also to "rectangular" and singular pencils.

Theorem 12.10. Let $A, B \in \mathbb{C}^{n \times n}$ and suppose the pencil $A - \lambda B$ is regular. Then there exist nonsingular matrices $P, Q \in \mathbb{C}^{n \times n}$ such that
$$P(A - \lambda B)Q = \begin{bmatrix} J & 0 \\ 0 & I \end{bmatrix} - \lambda \begin{bmatrix} I & 0 \\ 0 & N \end{bmatrix},$$
where $J$ is a Jordan canonical form corresponding to the finite eigenvalues of $A - \lambda B$ and $N$ is a nilpotent matrix of Jordan blocks associated with $0$ and corresponding to the infinite eigenvalues of $A - \lambda B$.

Example 12.11. The matrix pencil
$$\begin{bmatrix} 2 & 1 & 0 & 0 & 0 \\ 0 & 2 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 \end{bmatrix} - \lambda \begin{bmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix}$$
with characteristic polynomial $(\lambda - 2)^2$ has a finite eigenvalue $2$ of multiplicity $2$ and three infinite eigenvalues.

Theorem 12.12 (Kronecker Canonical Form). Let $A, B \in \mathbb{C}^{m \times n}$. Then there exist nonsingular matrices $P \in \mathbb{C}^{m \times m}$ and $Q \in \mathbb{C}^{n \times n}$ such that
$$P(A - \lambda B)Q = \operatorname{diag}\left(L_{l_1}, \ldots, L_{l_s}, L_{r_1}^T, \ldots, L_{r_t}^T, J - \lambda I, I - \lambda N\right),$$
where $N$ is nilpotent, both $N$ and $J$ are in Jordan canonical form, and $L_k$ is the $(k + 1) \times k$ bidiagonal pencil
$$L_k = \begin{bmatrix} \lambda & & & 0 \\ 1 & \lambda & & \\ & 1 & \ddots & \\ & & \ddots & \lambda \\ 0 & & & 1 \end{bmatrix}.$$
The $l_i$ are called the left minimal indices while the $r_i$ are called the right minimal indices. Left or right minimal indices can take the value $0$.

Example 12.13. Consider a $13 \times 12$ block diagonal matrix in KCF whose diagonal blocks are as follows. The first block of zeros actually corresponds to $L_0$, $L_0$, $L_0$, $L_0^T$, $L_0^T$, where each $L_0$ has "zero columns" and one row, while each $L_0^T$ has "zero rows" and one column. The second block is $L_1$ while the third block is $L_1^T$. The next two blocks correspond to a block $J - \lambda I$, in which $J$ is in Jordan canonical form with the finite eigenvalue $2$, and a block $I - \lambda N$, in which the matrix $N$ is nilpotent.

Just as sets of eigenvectors span $A$-invariant subspaces in the case of the standard eigenproblem (recall Definition 9.35), there is an analogous geometric concept for the generalized eigenproblem.

Definition 12.14. Let $A, B \in \mathbb{R}^{n \times n}$ and suppose the pencil $A - \lambda B$ is regular. Then $\mathcal{V}$ is a deflating subspace if
$$\dim(A\mathcal{V} + B\mathcal{V}) = \dim \mathcal{V}. \qquad (12.4)$$

Just as in the standard eigenvalue case, there is a matrix characterization of deflating subspace. Specifically, suppose $S \in \mathbb{R}^{n \times k}_k$ is a matrix whose columns span a $k$-dimensional subspace $\mathcal{S}$ of $\mathbb{R}^n$, i.e., $\mathcal{R}(S) = \mathcal{S}$. Then $\mathcal{S}$ is a deflating subspace for the pencil $A - \lambda B$ if and only if there exists $M \in \mathbb{R}^{k \times k}$ such that
$$AS = BSM. \qquad (12.5)$$
If $B = I$, then (12.4) becomes $\dim(A\mathcal{V} + \mathcal{V}) = \dim \mathcal{V}$, which is clearly equivalent to $A\mathcal{V} \subseteq \mathcal{V}$. Similarly, (12.5) becomes $AS = SM$ as before. If the pencil is not regular, there is a concept analogous to deflating subspace called a reducing subspace.

12.3 Application to the Computation of System Zeros

Consider the linear system
$$\dot{x} = Ax + Bu,$$
$$y = Cx + Du$$
with $A \in \mathbb{R}^{n \times n}$, $B \in \mathbb{R}^{n \times m}$, $C \in \mathbb{R}^{p \times n}$, and $D \in \mathbb{R}^{p \times m}$. This linear time-invariant state-space model is often used in multivariable control theory, where $x (= x(t))$ is called the state vector, $u$ is the vector of inputs or controls, and $y$ is the vector of outputs or observables. For details, see, for example, [26].

In general, the (finite) zeros of this system are given by the (finite) complex numbers $z$, where the "system pencil"
$$\begin{bmatrix} A - zI & B \\ C & D \end{bmatrix} \qquad (12.6)$$
drops rank. In the special case $p = m$, these values are the generalized eigenvalues of the $(n + m) \times (n + m)$ pencil.

Example 12.15. Let $A \in \mathbb{R}^{2 \times 2}$, $B \in \mathbb{R}^{2 \times 1}$, $C = [1 \;\; 2]$, and $D = 0$ be such that the transfer matrix (see [26]) of this system is
$$g(s) = C(sI - A)^{-1}B + D = \frac{5s + 14}{s^2 + 3s + 2},$$
which clearly has a zero at $-2.8$. Checking the finite eigenvalues of the pencil (12.6), we find the characteristic polynomial to be
$$\det \begin{bmatrix} A - \lambda I & B \\ C & D \end{bmatrix} = 5\lambda + 14,$$
which has a root at $-2.8$.
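Computationally, the finite zeros can be obtained directly as the finite generalized eigenvalues of the pencil (12.6); the sketch below uses hypothetical single-input, single-output data (not the matrices of Example 12.15), chosen so that the single finite zero is $-3$:

```python
import numpy as np
from scipy.linalg import eig

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
b = np.array([[0.0], [1.0]])
c = np.array([[3.0, 1.0]])
d = np.array([[0.0]])
n = A.shape[0]

M = np.block([[A, b], [c, d]])                          # constant part of the system pencil
E = np.block([[np.eye(n), np.zeros((n, 1))],
              [np.zeros((1, n)), np.zeros((1, 1))]])    # M - z*E is the pencil (12.6)

zeros = eig(M, E, right=False)
print(zeros[np.isfinite(zeros)])                        # finite generalized eigenvalues; approximately -3
```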
The method of finding system zeros via a generalized eigenvalue problem also works well for general multi-input, multi-output systems. Numerically, however, one must be careful first to "deflate out" the infinite zeros (infinite eigenvalues of (12.6)). This is accomplished by computing a certain unitary equivalence on the system pencil that then yields a smaller generalized eigenvalue problem with only finite generalized eigenvalues (the finite zeros).

The connection between system zeros and the corresponding system pencil is nontrivial. However, we offer some insight below into the special case of a single-input,
single-output system. Specifically, let $B = b \in \mathbb{R}^n$, $C = c^T \in \mathbb{R}^{1 \times n}$, and $D = d \in \mathbb{R}$. Furthermore, let $g(s) = c^T(sI - A)^{-1}b + d$ denote the system transfer function (matrix), and assume that $g(s)$ can be written in the form
$$g(s) = \frac{v(s)}{\pi(s)},$$
where $\pi(s)$ is the characteristic polynomial of $A$, and $v(s)$ and $\pi(s)$ are relatively prime (i.e., there are no "pole/zero cancellations").

Suppose $z \in \mathbb{C}$ is such that
$$\begin{bmatrix} A - zI & b \\ c^T & d \end{bmatrix}$$
is singular. Then there exists a nonzero solution to
$$\begin{bmatrix} A - zI & b \\ c^T & d \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = 0,$$
or
$$(A - zI)x + by = 0, \qquad (12.7)$$
$$c^T x + dy = 0. \qquad (12.8)$$
Assuming $z$ is not an eigenvalue of $A$ (i.e., no pole/zero cancellations), then from (12.7) we get
$$x = -(A - zI)^{-1} b y. \qquad (12.9)$$
Substituting this in (12.8), we have
$$-c^T(A - zI)^{-1} b y + dy = 0,$$
or $g(z)\,y = 0$ by the definition of $g$. Now $y \ne 0$ (else $x = 0$ from (12.9)). Hence $g(z) = 0$, i.e., $z$ is a zero of $g$.

12.4 Symmetric Generalized Eigenvalue Problems

A very important special case of the generalized eigenvalue problem
$$Ax = \lambda Bx \qquad (12.10)$$
for $A, B \in \mathbb{R}^{n \times n}$ arises when $A = A^T$ and $B = B^T > 0$. For example, the second-order system of differential equations
$$M\ddot{x} + Kx = 0,$$
where $M$ is a symmetric positive definite "mass matrix" and $K$ is a symmetric "stiffness matrix," is a frequently employed model of structures or vibrating systems and yields a generalized eigenvalue problem of the form (12.10).

Since $B$ is positive definite it is nonsingular. Thus, the problem (12.10) is equivalent to the standard eigenvalue problem $B^{-1}Ax = \lambda x$. However, $B^{-1}A$ is not necessarily symmetric.
Example 12.16. Let $A = \begin{bmatrix} 1 & 3 \\ 3 & 2 \end{bmatrix}$, $B = \begin{bmatrix} 2 & 1 \\ 1 & 1 \end{bmatrix}$. Then $B^{-1}A = \begin{bmatrix} -2 & 1 \\ 5 & 1 \end{bmatrix}$.

Nevertheless, the eigenvalues of $B^{-1}A$ are always real (and are approximately $2.1926$ and $-3.1926$ in Example 12.16).

Theorem 12.17. Let $A, B \in \mathbb{R}^{n \times n}$ with $A = A^T$ and $B = B^T > 0$. Then the generalized eigenvalue problem
$$Ax = \lambda Bx$$
has $n$ real eigenvalues, and the $n$ corresponding right eigenvectors can be chosen to be orthogonal with respect to the inner product $\langle x, y \rangle_B = x^T B y$. Moreover, if $A > 0$, then the eigenvalues are also all positive.

Proof: Since $B > 0$, it has a Cholesky factorization $B = LL^T$, where $L$ is nonsingular (Theorem 10.23). Then the eigenvalue problem
$$Ax = \lambda Bx = \lambda LL^T x$$
can be rewritten as the equivalent problem
$$L^{-1}AL^{-T}\left(L^T x\right) = \lambda\, L^T x. \qquad (12.11)$$
Letting $C = L^{-1}AL^{-T}$ and $z = L^T x$, (12.11) can then be rewritten as
$$Cz = \lambda z. \qquad (12.12)$$
Since $C = C^T$, the eigenproblem (12.12) has $n$ real eigenvalues, with corresponding eigenvectors $z_1, \ldots, z_n$ satisfying
$$z_i^T z_j = \delta_{ij}.$$
Then $x_i = L^{-T} z_i$, $i \in \underline{n}$, are eigenvectors of the original generalized eigenvalue problem and satisfy
$$\langle x_i, x_j \rangle_B = x_i^T B x_j = \left(z_i^T L^{-1}\right)\left(LL^T\right)\left(L^{-T} z_j\right) = z_i^T z_j = \delta_{ij}.$$
Finally, if $A = A^T > 0$, then $C = C^T > 0$, so the eigenvalues are positive. $\Box$

Example 12.18. The Cholesky factor for the matrix $B$ in Example 12.16 is
$$L = \begin{bmatrix} \sqrt{2} & 0 \\ \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \end{bmatrix}.$$
Then it is easily checked that
$$C = L^{-1}AL^{-T} = \begin{bmatrix} 0.5 & 2.5 \\ 2.5 & -1.5 \end{bmatrix},$$
whose eigenvalues are approximately $2.1926$ and $-3.1926$ as expected.

The material of this section can, of course, be generalized easily to the case where $A$ and $B$ are Hermitian, but since real-valued matrices are commonly used in most applications, we have restricted our attention to that case only.
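Numerically, the symmetric-definite problem of Theorem 12.17 is handled directly by standard routines; a minimal sketch using SciPy's eigh on the pair of Example 12.16, together with the explicit Cholesky reduction used in the proof:

```python
import numpy as np
from scipy.linalg import eigh, cholesky, solve_triangular

A = np.array([[1.0, 3.0], [3.0, 2.0]])
B = np.array([[2.0, 1.0], [1.0, 1.0]])      # symmetric positive definite

w, V = eigh(A, B)                           # real eigenvalues; eigenvectors are B-orthogonal
print(w)                                    # approximately [-3.1926, 2.1926]
print(np.allclose(V.T @ B @ V, np.eye(2)))  # True: <x_i, x_j>_B = delta_ij

L = cholesky(B, lower=True)                 # B = L L^T
Linv = solve_triangular(L, np.eye(2), lower=True)
C = Linv @ A @ Linv.T                       # the symmetric matrix C = L^{-1} A L^{-T}
print(np.sort(np.linalg.eigvalsh(C)))       # same eigenvalues as B^{-1} A
```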
12.5. Simultaneous Diagonalization 133
12.5 Simultaneous Diagonalization
Recall that many matrices can be diagonalized by a similarity. In particular, normal ma
trices can be diagonalized by a unitary similarity. It turns out that in some cases a pair of
matrices (A, B) can be simultaneously diagonalized by the same matrix. There are many
such results and we present only a representative (but important and useful) theorem here.
Again, we restrict our attention only to the real case, with the complex case following in a
straightforward way.
Theorem 12.19 (Simultaneous Reduction to Diagonal Form). Let A, B E ] [ ~ n x n with
A = AT and B = BT > O. Then there exists a nonsingular matrix Q such that
where D is diagonal. Infact, the diagonal elements of D are the eigenvalues of B
1
A.
Proof: Let B = LLT be the Cholesky factorization of B and set C = L I AL T. Since
C is symmetric, there exists an orthogonal matrix P such that pTe p = D, where D is
diagonal. Let Q = L  T P. Then
and
QT BQ = pT L I(LLT)L T P = pT P = [.
Finally, since QDQI = QQT AQQI = L T P pT L I A = L T L I A
B
1
A, we
have A(D) = A(B
1
A). 0
Note that Q is not in general orthogonal, so it does not preserve eigenvalues of A and B
individually. However, it does preserve the eigenvalues of A  'AB. This can be seen directly.
Let A = QT AQ and B = QT BQ. Then B
1
A = Q1 B
1
QT QT AQ = Q1 B
1
AQ.
Theorem 12.19 is very useful for reducing many statements about pairs of symmetric
matrices to "the diagonal case." The following is typical.
Theorem 12.20. Let A, B E lR
nxn
be positive definite. Then A 2: B if and only if B
1
2:
AI.
Proof: By Theorem 12.19, there exists Q E l R ~ x n such that QT AQ = D and QT BQ = [,
where D is diagonal. Now D > 0 by Theorem 10.31. Also, since A 2: B, by Theorem
10.21 we have that QT AQ 2: QT BQ, i.e., D 2: [. But then D
I
:::: [(this is trivially true
since the two matrices are diagonal). Thus, Q D
I
QT :::: Q QT, i.e., A I :::: B
1
. 0
12.5.1 Simultaneous diagonalization via SVD
There are situations in which forming C = L I AL T as in the proof of Theorem 12.19 is
numerically problematic, e.g., when L is highly iII conditioned with respect to inversion. In
such cases, simultaneous reduction can also be accomplished via an SVD. To illustrate. let
The problem (12.15) is called a generalized singular value problem and algorithms exist to
solve it (and hence equivalently (12.13)) via arithmetic operations performed only on LA
and LB separately, i.e., without forming the products L
A
L
T
A
or L
B
L
T
B
explicitly; see, for
example, [7, Sec. 8.7.3]. This is analogous to finding the singular values of a matrix M by
operations performed directly on M rather than by forming the matrix M
T
M and solving
the eigenproblem M
T
MX = Xx.
Remark 12.22. Various generalizations of the results in Remark 12.21 are possible, for
example, when A = A
T
> 0. The case when A is symmetric but indefinite is not so
straightforward, at least in real arithmetic. For example, A can be written as A = PDP
T
,
~ ~ ~ ~ T
where D is diagonal and P is orthogonal, but in writing A — PDDP = PD(PD) with
D diagonal, D may have pure imaginary elements.
134 Chapter 12. Generalized Eigenvalue Problems
us assume that both A and B are positive definite. Further, let A = L
A
L
T
A
and B — LsL
T
B
be Cholesky factorizations of A and B, respectively. Compute the SVD
where E e R£
x
" is diagonal. Then the matrix Q = L
B
T
U performs the simultaneous
diagonalization. To check this, note that
while
Remark 12.21. The SVD in (12.13) can be computed without explicitly forming the
indicated matrix product or the inverse by using the socalled generalized singular value
decomposition (GSVD). Note that
and thus the singular values of L
B
L
A
can be found from the eigenvalue problem
Letting x = L
B
z we see that (12.14) can be rewritten in the form L
A
L
A
x = XL
B
z =
ALgL^Lg
7
z, which is thus equivalent to the generalized eigenvalue problem
134 Chapter 12. Generalized Eigenvalue Problems
us assume that both A and B are positive definite. Further, let A = and B =
be Cholesky factorizations of A and B, respectively. Compute the SVD
(12.13)
where L E xn is diagonal. Then the matrix Q = L i/ u performs the simultaneous
diagonalization. To check this, note that
while
QT AQ = U
T
= UTULVTVLTUTU
= L2
QT BQ = U
T
= UTU
= I.
Remark 12.21. The SVD in (12.13) can be computed without explicitly forming the
indicated matrix product or the inverse by using the socalled generalized singular value
decomposition (GSVD). Note that
and thus the singular values of L B 1 L A can be found from the eigenvalue problem
02.14)
Letting x = LBT Z we see that 02.14) can be rewritten in the form = ALBz =
z, which is thus equivalent to the generalized eigenvalue problem
02.15)
The problem (12.15) is called a generalized singular value problem and algorithms exist to
solve it (and hence equivalently (12.13» via arithmetic operations performed only on LA
and L B separately, i.e., without forming the products LA L or L B L explicitly; see, for
example, [7, Sec. 8.7.3]. This is analogous to finding the singular values of a matrix M by
operations performed directly on M rather than by forming the matrix MT M and solving
the eigenproblem MT M x = AX.
Remark 12.22. Various generalizations of the results in Remark 12.21 are possible, for
example, when A = AT::: O. The case when A is symmetric but indefinite is not so
straightforward, at least in real arithmetic. For example, A can be written as A = PDP T,
where Disdiagonaland P is orthogonal,butin writing A = PDDp
T
= PD(PD{ with
D diagonal, b may have pure imaginary elements.
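The SVD-based reduction of Section 12.5.1 is straightforward to try numerically. The sketch below is not from the text; it assumes A and B are both symmetric positive definite, and the particular matrices are illustrative only.

    import numpy as np
    from scipy.linalg import cholesky, svd

    A = np.array([[3.0, 1.0], [1.0, 2.0]])   # illustrative A = A^T > 0
    B = np.array([[2.0, 0.5], [0.5, 1.0]])   # illustrative B = B^T > 0

    LA = cholesky(A, lower=True)             # A = LA @ LA.T
    LB = cholesky(B, lower=True)             # B = LB @ LB.T
    U, s, Vt = svd(np.linalg.solve(LB, LA))  # LB^{-1} LA = U diag(s) V^T   (12.13)
    Q = np.linalg.solve(LB.T, U)             # Q = LB^{-T} U

    print(Q.T @ A @ Q)                       # ~ diag(s**2)
    print(Q.T @ B @ Q)                       # ~ identity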
12.6  Higher-Order Eigenvalue Problems

Consider the second-order system of differential equations

    M q̈ + C q̇ + K q = 0,        (12.16)

where q(t) ∈ R^n and M, C, K ∈ R^{n×n}. Assume for simplicity that M is nonsingular. Suppose, by analogy with the first-order case, that we try to find a solution of (12.16) of the form q(t) = e^{λt} p, where the n-vector p and scalar λ are to be determined. Substituting in (12.16) we get

    λ² e^{λt} M p + λ e^{λt} C p + e^{λt} K p = 0

or, since e^{λt} ≠ 0,

    (λ²M + λC + K) p = 0.

To get a nonzero solution p, we thus seek values of λ for which the matrix λ²M + λC + K is singular. Since the determinantal equation

    0 = det(λ²M + λC + K) = λ^{2n} + · · ·

yields a polynomial of degree 2n, there are 2n eigenvalues for the second-order (or quadratic) eigenvalue problem λ²M + λC + K.

A special case of (12.16) arises frequently in applications: M = I, C = 0, and K = K^T. Suppose K has eigenvalues

    μ_1 ≥ · · · ≥ μ_r ≥ 0 > μ_{r+1} ≥ · · · ≥ μ_n.

Let ω_k = |μ_k|^{1/2}. Then the 2n eigenvalues of the second-order eigenvalue problem λ²I + K are

    ± j ω_k,   k = 1, ..., r,
    ± ω_k,     k = r + 1, ..., n.

If r = n (i.e., K = K^T ≥ 0), then all solutions of q̈ + Kq = 0 are oscillatory.

12.6.1  Conversion to first-order form

Let x_1 = q and x_2 = q̇. Then (12.16) can be written as a first-order system (with block companion matrix)

    ẋ = [      0           I      ] x,
        [ −M^{-1}K    −M^{-1}C    ]

where x(t) ∈ R^{2n}. If M is singular, or if it is desired to avoid the calculation of M^{-1} because M is too ill conditioned with respect to inversion, the second-order problem (12.16) can still be converted to the first-order generalized linear system

    [ I  0 ]       [  0    I ]
    [ 0  M ] ẋ  =  [ −K   −C ] x.
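As a quick numerical illustration (not part of the text; the matrices below are arbitrary choices), the quadratic eigenvalue problem can be solved through the 2n × 2n first-order pencil just displayed, which avoids forming M^{-1}:

    import numpy as np
    from scipy.linalg import eig

    n = 2
    M = np.eye(n)
    C = np.zeros((n, n))
    K = np.array([[2.0, -1.0], [-1.0, 2.0]])    # K = K^T > 0, eigenvalues 1 and 3

    Z, I = np.zeros((n, n)), np.eye(n)
    lhs = np.block([[I, Z], [Z, M]])            # [I 0; 0 M]
    rhs = np.block([[Z, I], [-K, -C]])          # [0 I; -K -C]

    lam, _ = eig(rhs, lhs)                      # 2n eigenvalues of the pencil
    print(np.sort_complex(lam))                 # +/- j and +/- j*sqrt(3): all oscillatory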
Many other first-order realizations are possible. Some can be useful when M, C, and/or K have special symmetry or skew-symmetry properties that can be exploited.

Higher-order analogues of (12.16) involving, say, the kth derivative of q, lead naturally to higher-order eigenvalue problems that can be converted to first-order form using a kn × kn block companion matrix analogue of (11.19). Similar procedures hold for the general kth-order difference equation, which can be converted to various first-order systems of dimension kn.

EXERCISES

1. Suppose A ∈ R^{n×n} and D ∈ R^{m×m} with D nonsingular. Show that the finite generalized eigenvalues of the pencil

       [ A  B ]       [ I  0 ]
       [ C  D ]  − λ  [ 0  0 ]

   are the eigenvalues of the matrix A − BD^{-1}C.

2. Let F, G ∈ C^{n×n}. Show that the nonzero eigenvalues of FG and GF are the same.
   Hint: An easy "trick proof" is to verify that the matrices

       [ FG  0 ]          [ 0   0  ]
       [ G   0 ]   and    [ G  GF ]

   are similar via the similarity transformation

       [ I  F ]
       [ 0  I ].

3. Let F ∈ C^{n×m}, G ∈ C^{m×n}. Are the nonzero singular values of FG and GF the same?

4. Suppose A ∈ R^{n×n}, B ∈ R^{n×m}, and C ∈ R^{m×n}. Show that the generalized eigenvalues of the pencils

       [ A  B ]       [ I  0 ]
       [ C  0 ]  − λ  [ 0  0 ]

   and

       [ A + BF + GC   B ]       [ I  0 ]
       [      C        0 ]  − λ  [ 0  0 ]

   are identical for all F ∈ R^{m×n} and all G ∈ R^{n×m}.
   Hint: Consider the equivalence

       [ I  G ] [ A − λI  B ] [ I  0 ]
       [ 0  I ] [   C     0 ] [ F  I ].

   (A similar result is also true for "nonsquare" pencils. In the parlance of control theory, such results show that zeros are invariant under state feedback or output injection.)
5. Another family of simultaneous diagonalization problems arises when it is desired that the simultaneous diagonalizing transformation Q operates on matrices A, B ∈ R^{n×n} in such a way that Q^{-1}AQ^{-T} and Q^T BQ are simultaneously diagonal. Such a transformation is called contragredient. Consider the case where both A and B are positive definite with Cholesky factorizations A = L_A L_A^T and B = L_B L_B^T, respectively, and let UΣV^T be an SVD of L_B^T L_A.

   (a) Show that Q = L_A V Σ^{-1/2} is a contragredient transformation that reduces both A and B to the same diagonal matrix.

   (b) Show that Q^{-1} = Σ^{-1/2} U^T L_B^T.

   (c) Show that the eigenvalues of AB are the same as those of Σ² and hence are positive.
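A quick numerical check of the contragredient construction in Exercise 5 (this sketch is not part of the text and is not a proof; the positive definite matrices are illustrative only):

    import numpy as np
    from scipy.linalg import cholesky, svd

    A = np.array([[2.0, 0.5], [0.5, 1.0]])
    B = np.array([[3.0, 1.0], [1.0, 2.0]])

    LA = cholesky(A, lower=True)
    LB = cholesky(B, lower=True)
    U, s, Vt = svd(LB.T @ LA)                    # LB^T LA = U diag(s) V^T
    Q = LA @ Vt.T @ np.diag(s**-0.5)             # Q = LA V Sigma^{-1/2}
    Qinv = np.diag(s**-0.5) @ U.T @ LB.T         # claimed inverse from part (b)

    print(Qinv @ Q)                                       # ~ identity
    print(np.linalg.solve(Q, np.linalg.solve(Q, A).T))    # Q^{-1} A Q^{-T} ~ diag(s)
    print(Q.T @ B @ Q)                                    # Q^T B Q ~ diag(s)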
Chapter 13

Kronecker Products

13.1  Definition and Examples

Definition 13.1. Let A ∈ R^{m×n}, B ∈ R^{p×q}. Then the Kronecker product (or tensor product) of A and B is defined as the matrix

    A ⊗ B = [ a_11 B  ...  a_1n B ]
            [   ...         ...   ]   ∈ R^{mp×nq}.        (13.1)
            [ a_m1 B  ...  a_mn B ]

Obviously, the same definition holds if A and B are complex-valued matrices. We restrict our attention in this chapter primarily to real-valued matrices, pointing out the extension to the complex case only where it is not obvious.

Example 13.2.

1. For the 2×2 matrices A and B of this example, A ⊗ B = [ a_11 B  a_12 B ; a_21 B  a_22 B ] is easily written out entry by entry. Note that B ⊗ A ≠ A ⊗ B.

2. For any B ∈ R^{p×q},

       I_2 ⊗ B = [ B  0 ]
                 [ 0  B ].

   Replacing I_2 by I_n yields a block diagonal matrix with n copies of B along the diagonal.

3. Let B be an arbitrary 2×2 matrix. Then

       B ⊗ I_2 = [ b_11   0    b_12   0   ]
                 [  0    b_11   0    b_12 ]
                 [ b_21   0    b_22   0   ]
                 [  0    b_21   0    b_22 ].
The extension to arbitrary B and I_n is obvious.

4. Let x ∈ R^m, y ∈ R^n. Then

       x ⊗ y = [x_1 y^T, ..., x_m y^T]^T = [x_1 y_1, ..., x_1 y_n, x_2 y_1, ..., x_m y_n]^T ∈ R^{mn}.

5. Let x ∈ R^m, y ∈ R^n. Then

       x ⊗ y^T = x y^T.

13.2  Properties of the Kronecker Product

Theorem 13.3. Let A ∈ R^{m×n}, B ∈ R^{r×s}, C ∈ R^{n×p}, and D ∈ R^{s×t}. Then

    (A ⊗ B)(C ⊗ D) = AC ⊗ BD  (∈ R^{mr×pt}).        (13.2)

Proof: Simply verify that the (i, k) block of the product (A ⊗ B)(C ⊗ D) is Σ_j a_ij c_jk BD, which is precisely the (i, k) block of AC ⊗ BD.  □

Theorem 13.4. For all A and B, (A ⊗ B)^T = A^T ⊗ B^T.

Proof: For the proof, simply verify using the definitions of transpose and Kronecker product.  □

Corollary 13.5. If A ∈ R^{n×n} and B ∈ R^{m×m} are symmetric, then A ⊗ B is symmetric.

Theorem 13.6. If A and B are nonsingular, (A ⊗ B)^{-1} = A^{-1} ⊗ B^{-1}.

Proof: Using Theorem 13.3, simply note that (A ⊗ B)(A^{-1} ⊗ B^{-1}) = I ⊗ I = I.  □
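These algebraic properties are easy to confirm numerically with NumPy's built-in Kronecker product. The brief check below is not part of the text; the matrices are arbitrary random examples.

    import numpy as np

    rng = np.random.default_rng(0)
    A, B = rng.standard_normal((2, 3)), rng.standard_normal((4, 2))
    C, D = rng.standard_normal((3, 5)), rng.standard_normal((2, 3))

    # (A (x) B)(C (x) D) = AC (x) BD                        (Theorem 13.3)
    print(np.allclose(np.kron(A, B) @ np.kron(C, D), np.kron(A @ C, B @ D)))

    # (A (x) B)^T = A^T (x) B^T                             (Theorem 13.4)
    print(np.allclose(np.kron(A, B).T, np.kron(A.T, B.T)))

    # (A (x) B)^{-1} = A^{-1} (x) B^{-1} for nonsingular A, B   (Theorem 13.6)
    A2, B2 = rng.standard_normal((3, 3)), rng.standard_normal((2, 2))
    print(np.allclose(np.linalg.inv(np.kron(A2, B2)),
                      np.kron(np.linalg.inv(A2), np.linalg.inv(B2))))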
Theorem 13.7. If A ∈ R^{n×n} and B ∈ R^{m×m} are normal, then A ⊗ B is normal.

Proof:

    (A ⊗ B)^T (A ⊗ B) = (A^T ⊗ B^T)(A ⊗ B)      by Theorem 13.4
                       = A^T A ⊗ B^T B           by Theorem 13.3
                       = A A^T ⊗ B B^T           since A and B are normal
                       = (A ⊗ B)(A ⊗ B)^T        by Theorem 13.3.  □

Corollary 13.8. If A ∈ R^{n×n} is orthogonal and B ∈ R^{m×m} is orthogonal, then A ⊗ B is orthogonal.

Example 13.9. Let

    A = [  cos θ  sin θ ]          B = [  cos φ  sin φ ]
        [ −sin θ  cos θ ]   and        [ −sin φ  cos φ ].

Then it is easily seen that A is orthogonal with eigenvalues e^{±jθ} and B is orthogonal with eigenvalues e^{±jφ}. The 4×4 matrix A ⊗ B is then also orthogonal with eigenvalues e^{±j(θ+φ)} and e^{±j(θ−φ)}.

Theorem 13.10. Let A ∈ R^{m×n} have a singular value decomposition U_A Σ_A V_A^T and let B ∈ R^{p×q} have a singular value decomposition U_B Σ_B V_B^T. Then

    (U_A ⊗ U_B)(Σ_A ⊗ Σ_B)(V_A ⊗ V_B)^T

yields a singular value decomposition of A ⊗ B (after a simple reordering of the diagonal elements of Σ_A ⊗ Σ_B and the corresponding right and left singular vectors).

Corollary 13.11. Let A ∈ R_r^{m×n} have singular values σ_1 ≥ · · · ≥ σ_r > 0 and let B ∈ R_s^{p×q} have singular values τ_1 ≥ · · · ≥ τ_s > 0. Then A ⊗ B (or B ⊗ A) has rs singular values σ_1τ_1 ≥ · · · ≥ σ_rτ_s > 0 and

    rank(A ⊗ B) = (rank A)(rank B) = rank(B ⊗ A).

Theorem 13.12. Let A ∈ R^{n×n} have eigenvalues λ_i, i = 1, ..., n, and let B ∈ R^{m×m} have eigenvalues μ_j, j = 1, ..., m. Then the mn eigenvalues of A ⊗ B are

    λ_1μ_1, ..., λ_1μ_m, λ_2μ_1, ..., λ_2μ_m, ..., λ_nμ_m.

Moreover, if x_1, ..., x_p are linearly independent right eigenvectors of A corresponding to λ_1, ..., λ_p (p ≤ n), and z_1, ..., z_q are linearly independent right eigenvectors of B corresponding to μ_1, ..., μ_q (q ≤ m), then x_i ⊗ z_j ∈ R^{mn} are linearly independent right eigenvectors of A ⊗ B corresponding to λ_iμ_j, i = 1, ..., p, j = 1, ..., q.

Proof: The basic idea of the proof is as follows:

    (A ⊗ B)(x ⊗ z) = Ax ⊗ Bz
                   = λx ⊗ μz
                   = λμ (x ⊗ z).  □
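A small numerical check of Theorem 13.12 (not from the text; the triangular matrices are arbitrary illustrative choices whose eigenvalues can be read off the diagonal):

    import numpy as np

    A = np.array([[2.0, 1.0], [0.0, 3.0]])          # eigenvalues 2, 3
    B = np.array([[1.0, 0.0], [4.0, 5.0]])          # eigenvalues 1, 5

    eig_kron = np.sort(np.linalg.eigvals(np.kron(A, B)).real)
    eig_prod = np.sort(np.multiply.outer(np.linalg.eigvals(A),
                                         np.linalg.eigvals(B)).ravel().real)
    print(eig_kron)   # [ 2.  3. 10. 15.]
    print(eig_prod)   # [ 2.  3. 10. 15.]: all pairwise products, as predicted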
If A and B are diagonalizable in Theorem 13.12, we can take p = n and q = m and thus get the complete eigenstructure of A ⊗ B. In general, if A and B have Jordan form decompositions given by P^{-1}AP = J_A and Q^{-1}BQ = J_B, respectively, then we get the following Jordan-like structure:

    (P ⊗ Q)^{-1}(A ⊗ B)(P ⊗ Q) = (P^{-1} ⊗ Q^{-1})(A ⊗ B)(P ⊗ Q)
                               = (P^{-1}AP) ⊗ (Q^{-1}BQ)
                               = J_A ⊗ J_B.

Note that J_A ⊗ J_B, while upper triangular, is generally not quite in Jordan form and needs further reduction (to an ultimate Jordan form that also depends on whether or not certain eigenvalues are zero or nonzero).

A Schur form for A ⊗ B can be derived similarly. For example, suppose P and Q are unitary matrices that reduce A and B, respectively, to Schur (triangular) form, i.e., P^H AP = T_A and Q^H BQ = T_B (and similarly if P and Q are orthogonal similarities reducing A and B to real Schur form). Then

    (P ⊗ Q)^H (A ⊗ B)(P ⊗ Q) = (P^H ⊗ Q^H)(A ⊗ B)(P ⊗ Q)
                             = (P^H AP) ⊗ (Q^H BQ)
                             = T_A ⊗ T_B.

Corollary 13.13. Let A ∈ R^{n×n} and B ∈ R^{m×m}. Then

1. Tr(A ⊗ B) = (Tr A)(Tr B) = Tr(B ⊗ A).

2. det(A ⊗ B) = (det A)^m (det B)^n = det(B ⊗ A).

Definition 13.14. Let A ∈ R^{n×n} and B ∈ R^{m×m}. Then the Kronecker sum (or tensor sum) of A and B, denoted A ⊕ B, is the mn × mn matrix (I_m ⊗ A) + (B ⊗ I_n). Note that, in general, A ⊕ B ≠ B ⊕ A.

Example 13.15.

1. Let

       A = [ 1  2  3 ]              B = [ 2  1 ]
           [ 3  2  1 ]     and          [ 2  3 ].
           [ 1  1  4 ]

   Then

       A ⊕ B = (I_2 ⊗ A) + (B ⊗ I_3) = [ A  0 ]  +  [ 2I_3   I_3  ]
                                       [ 0  A ]     [ 2I_3  3I_3  ].

   The reader is invited to compute B ⊕ A = (I_3 ⊗ B) + (A ⊗ I_2) and note the difference with A ⊕ B.
2. Recall the real JCF

       J = [ M  I_2   0   ...   0  ]
           [ 0   M   I_2  ...   0  ]
           [          ...          ]   ∈ R^{2k×2k},
           [ 0   ...       M   I_2 ]
           [ 0   ...       0    M  ]

   where M = [ α  β ; −β  α ]. Define E_k to be the k × k matrix with ones on its superdiagonal and zeros elsewhere. Then J can be written in the very compact form J = (I_k ⊗ M) + (E_k ⊗ I_2) = M ⊕ E_k.

Theorem 13.16. Let A ∈ R^{n×n} have eigenvalues λ_i, i = 1, ..., n, and let B ∈ R^{m×m} have eigenvalues μ_j, j = 1, ..., m. Then the Kronecker sum A ⊕ B = (I_m ⊗ A) + (B ⊗ I_n) has mn eigenvalues

    λ_1 + μ_1, ..., λ_1 + μ_m, λ_2 + μ_1, ..., λ_2 + μ_m, ..., λ_n + μ_m.

Moreover, if x_1, ..., x_p are linearly independent right eigenvectors of A corresponding to λ_1, ..., λ_p (p ≤ n), and z_1, ..., z_q are linearly independent right eigenvectors of B corresponding to μ_1, ..., μ_q (q ≤ m), then z_j ⊗ x_i ∈ R^{mn} are linearly independent right eigenvectors of A ⊕ B corresponding to λ_i + μ_j, i = 1, ..., p, j = 1, ..., q.

Proof: The basic idea of the proof is as follows:

    [(I_m ⊗ A) + (B ⊗ I_n)](z ⊗ x) = (z ⊗ Ax) + (Bz ⊗ x)
                                   = (z ⊗ λx) + (μz ⊗ x)
                                   = (λ + μ)(z ⊗ x).  □

If A and B are diagonalizable in Theorem 13.16, we can take p = n and q = m and thus get the complete eigenstructure of A ⊕ B. In general, if A and B have Jordan form decompositions given by P^{-1}AP = J_A and Q^{-1}BQ = J_B, respectively, then

    [(Q ⊗ I_n)(I_m ⊗ P)]^{-1}[(I_m ⊗ A) + (B ⊗ I_n)][(Q ⊗ I_n)(I_m ⊗ P)]
        = [(I_m ⊗ P)^{-1}(Q ⊗ I_n)^{-1}][(I_m ⊗ A) + (B ⊗ I_n)][(Q ⊗ I_n)(I_m ⊗ P)]
        = [(I_m ⊗ P^{-1})(Q^{-1} ⊗ I_n)][(I_m ⊗ A) + (B ⊗ I_n)][(Q ⊗ I_n)(I_m ⊗ P)]
        = (I_m ⊗ J_A) + (J_B ⊗ I_n)

is a Jordan-like structure for A ⊕ B.
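A short sketch (not part of the text) of Definition 13.14 and of the eigenvalue statement of Theorem 13.16, using arbitrary illustrative matrices:

    import numpy as np

    A = np.array([[2.0, 1.0], [0.0, 3.0]])      # n = 2, eigenvalues 2, 3
    B = np.array([[-1.0, 0.0], [5.0, 4.0]])     # m = 2, eigenvalues -1, 4

    n, m = A.shape[0], B.shape[0]
    A_sum_B = np.kron(np.eye(m), A) + np.kron(B, np.eye(n))   # A (+) B
    B_sum_A = np.kron(np.eye(n), B) + np.kron(A, np.eye(m))   # B (+) A

    print(np.allclose(A_sum_B, B_sum_A))        # False: the Kronecker sum is not commutative

    eig_sum = np.sort(np.linalg.eigvals(A_sum_B).real)
    pairwise = np.sort(np.add.outer(np.linalg.eigvals(A),
                                    np.linalg.eigvals(B)).ravel().real)
    print(eig_sum, pairwise)                    # both [1. 2. 6. 7.]: all pairwise sums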
A Schur form for A ⊕ B can be derived similarly. Again, suppose P and Q are unitary matrices that reduce A and B, respectively, to Schur (triangular) form, i.e., P^H AP = T_A and Q^H BQ = T_B (and similarly if P and Q are orthogonal similarities reducing A and B to real Schur form). Then

    [(Q ⊗ I_n)(I_m ⊗ P)]^H [(I_m ⊗ A) + (B ⊗ I_n)][(Q ⊗ I_n)(I_m ⊗ P)] = (I_m ⊗ T_A) + (T_B ⊗ I_n),

where [(Q ⊗ I_n)(I_m ⊗ P)] = (Q ⊗ P) is unitary by Theorem 13.3 and Corollary 13.8.

13.3  Application to Sylvester and Lyapunov Equations

In this section we study the linear matrix equation

    AX + XB = C,        (13.3)

where A ∈ R^{n×n}, B ∈ R^{m×m}, and C ∈ R^{n×m}. This equation is now often called a Sylvester equation in honor of J.J. Sylvester who studied general linear matrix equations of the form

    Σ_{i=1}^{k} A_i X B_i = C.

A special case of (13.3) is the symmetric equation

    AX + XA^T = C        (13.4)

obtained by taking B = A^T. When C is symmetric, the solution X ∈ R^{n×n} is easily shown also to be symmetric and (13.4) is known as a Lyapunov equation. Lyapunov equations arise naturally in stability theory.

The first important question to ask regarding (13.3) is, When does a solution exist? By writing the matrices in (13.3) in terms of their columns, it is easily seen by equating the ith columns that

    A x_i + X b_i = c_i = A x_i + Σ_{j=1}^{m} b_{ji} x_j.

These equations can then be rewritten as the mn × mn linear system

    [ A + b_11 I    b_21 I      ...     b_m1 I   ] [ x_1 ]   [ c_1 ]
    [   b_12 I    A + b_22 I    ...     b_m2 I   ] [ x_2 ]   [ c_2 ]
    [     ...          ...               ...     ] [ ... ] = [ ... ]        (13.5)
    [   b_1m I      b_2m I      ...   A + b_mm I ] [ x_m ]   [ c_m ]

The coefficient matrix in (13.5) clearly can be written as the Kronecker sum (I_m ⊗ A) + (B^T ⊗ I_n). The following definition is very helpful in completing the writing of (13.5) as an "ordinary" linear system.
Definition 13.17. Let c_i ∈ R^n denote the columns of C ∈ R^{n×m} so that C = [c_1, ..., c_m]. Then vec(C) is defined to be the mn-vector formed by stacking the columns of C on top of one another, i.e.,

    vec(C) = [ c_1 ]
             [ ... ]   ∈ R^{mn}.
             [ c_m ]

Using Definition 13.17, the linear system (13.5) can be rewritten in the form

    [(I_m ⊗ A) + (B^T ⊗ I_n)] vec(X) = vec(C).        (13.6)

There exists a unique solution to (13.6) if and only if [(I_m ⊗ A) + (B^T ⊗ I_n)] is nonsingular. But [(I_m ⊗ A) + (B^T ⊗ I_n)] is nonsingular if and only if it has no zero eigenvalues. From Theorem 13.16, the eigenvalues of [(I_m ⊗ A) + (B^T ⊗ I_n)] are λ_i + μ_j, where λ_i ∈ Λ(A), i = 1, ..., n, and μ_j ∈ Λ(B), j = 1, ..., m. We thus have the following theorem.

Theorem 13.18. Let A ∈ R^{n×n}, B ∈ R^{m×m}, and C ∈ R^{n×m}. Then the Sylvester equation

    AX + XB = C        (13.7)

has a unique solution if and only if A and −B have no eigenvalues in common.

Sylvester equations of the form (13.3) (or symmetric Lyapunov equations of the form (13.4)) are generally not solved using the mn × mn "vec" formulation (13.6). The most commonly preferred numerical algorithm is described in [2]. First A and B are reduced to (real) Schur form. An equivalent linear system is then solved in which the triangular form of the reduced A and B can be exploited to solve successively for the columns of a suitably transformed solution matrix X. Assuming that, say, n ≥ m, this algorithm takes only O(n³) operations rather than the O(n⁶) that would be required by solving (13.6) directly with Gaussian elimination. A further enhancement to this algorithm is available in [6] whereby the larger of A or B is initially reduced only to upper Hessenberg rather than triangular Schur form.
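The following sketch is not part of the text; it illustrates the "vec" formulation (13.6) on small illustrative matrices and compares the result with SciPy's solve_sylvester, which implements the Schur-based approach of [2]:

    import numpy as np
    from scipy.linalg import solve_sylvester

    A = np.array([[-3.0, 1.0], [0.0, -2.0]])
    B = np.array([[-1.0, 0.0], [2.0, -4.0]])
    C = np.array([[1.0, 2.0], [3.0, 4.0]])

    n, m = A.shape[0], B.shape[0]
    K = np.kron(np.eye(m), A) + np.kron(B.T, np.eye(n))     # (I_m (x) A) + (B^T (x) I_n)
    x = np.linalg.solve(K, C.reshape(-1, order='F'))        # vec(C): stack columns
    X_vec = x.reshape((n, m), order='F')

    X_schur = solve_sylvester(A, B, C)                      # Bartels-Stewart algorithm
    print(np.allclose(X_vec, X_schur))                      # True
    print(np.allclose(A @ X_vec + X_vec @ B, C))            # True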
The next few theorems are classical. They culminate in Theorem 13.24, one of many elegant connections between matrix theory and stability theory for differential equations.

Theorem 13.19. Let A ∈ R^{n×n}, B ∈ R^{m×m}, and C ∈ R^{n×m}. Suppose further that A and B are asymptotically stable (a matrix is asymptotically stable if all its eigenvalues have real parts in the open left half-plane). Then the (unique) solution of the Sylvester equation

    AX + XB = C        (13.8)

can be written as

    X = − ∫_0^{+∞} e^{tA} C e^{tB} dt.        (13.9)

Proof: Since A and B are stable, λ_i(A) + λ_j(B) ≠ 0 for all i, j, so there exists a unique solution to (13.8) by Theorem 13.18. Now integrate the differential equation Ẋ = AX + XB (with X(0) = C) on [0, +∞):

    lim_{t→+∞} X(t) − X(0) = A ∫_0^{+∞} X(t) dt + ( ∫_0^{+∞} X(t) dt ) B.        (13.10)
Using the results of Section 11.1.6, it can be shown easily that lim_{t→+∞} e^{tA} = lim_{t→+∞} e^{tB} = 0. Hence, using the solution X(t) = e^{tA} C e^{tB} from Theorem 11.6, we have that lim_{t→+∞} X(t) = 0. Substituting in (13.10) we have

    −C = A ( ∫_0^{+∞} e^{tA} C e^{tB} dt ) + ( ∫_0^{+∞} e^{tA} C e^{tB} dt ) B

and so X = − ∫_0^{+∞} e^{tA} C e^{tB} dt satisfies (13.8).  □

Remark 13.20. An equivalent condition for the existence of a unique solution to AX + XB = C is that

    [ A   C ]                       [ A   0 ]
    [ 0  −B ]    be similar to      [ 0  −B ]

(via a block upper triangular similarity of the form [ I  X ; 0  I ]).

Theorem 13.21. Let A, C ∈ R^{n×n}. Then the Lyapunov equation

    AX + XA^T = C        (13.11)

has a unique solution if and only if A and −A^T have no eigenvalues in common. If C is symmetric and (13.11) has a unique solution, then that solution is symmetric.

Remark 13.22. If the matrix A ∈ R^{n×n} has eigenvalues λ_1, ..., λ_n, then −A^T has eigenvalues −λ_1, ..., −λ_n. Thus, a sufficient condition that guarantees that A and −A^T have no common eigenvalues is that A be asymptotically stable. Many useful results exist concerning the relationship between stability and Lyapunov equations. Two basic results due to Lyapunov are the following, the first of which follows immediately from Theorem 13.19.

Theorem 13.23. Let A, C ∈ R^{n×n} and suppose further that A is asymptotically stable. Then the (unique) solution of the Lyapunov equation

    AX + XA^T = C

can be written as

    X = − ∫_0^{+∞} e^{tA} C e^{tA^T} dt.        (13.12)

Theorem 13.24. A matrix A ∈ R^{n×n} is asymptotically stable if and only if there exists a positive definite solution to the Lyapunov equation

    AX + XA^T = C,        (13.13)

where C = C^T < 0.

Proof: Suppose A is asymptotically stable. By Theorems 13.21 and 13.23 a solution to (13.13) exists and takes the form (13.12). Now let v be an arbitrary nonzero vector in R^n. Then

    v^T X v = − ∫_0^{+∞} v^T e^{tA} C e^{tA^T} v dt.

Since −C > 0 and e^{tA} is nonsingular for all t, the integrand above is positive. Hence v^T X v > 0 and thus X is positive definite.

Conversely, suppose X = X^T > 0 and let λ ∈ Λ(A) with corresponding left eigenvector y. Then

    0 > y^H C y = y^H A X y + y^H X A^T y = (λ + λ̄) y^H X y.

Since y^H X y > 0, we must have λ + λ̄ = 2 Re λ < 0. Since λ was arbitrary, A must be asymptotically stable.  □
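A brief numerical illustration (not part of the text) of Theorems 13.23 and 13.24: for an asymptotically stable A and C = C^T < 0, the Lyapunov solution is positive definite. SciPy's solver handles AX + XA^T = C directly; the crude trapezoidal sum below is only a rough check of the integral representation (13.12), with illustrative matrices.

    import numpy as np
    from scipy.linalg import solve_continuous_lyapunov, expm

    A = np.array([[-1.0, 2.0], [0.0, -3.0]])        # eigenvalues -1, -3: stable
    C = -np.eye(2)                                  # C = C^T < 0

    X = solve_continuous_lyapunov(A, C)             # solves A X + X A^T = C
    print(np.linalg.eigvalsh(X))                    # all positive: X > 0

    # rough trapezoidal check of X = -int_0^inf e^{tA} C e^{tA^T} dt
    ts = np.linspace(0.0, 30.0, 3001)
    vals = np.array([expm(t * A) @ C @ expm(t * A.T) for t in ts])
    dt = ts[1] - ts[0]
    X_quad = -0.5 * dt * (vals[:-1] + vals[1:]).sum(axis=0)
    print(np.allclose(X, X_quad, atol=1e-3))        # True up to quadrature error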
Remark 13.25. The Lyapunov equation AX + XA^T = C can also be written using the vec notation in the equivalent form

    [(I ⊗ A) + (A ⊗ I)] vec(X) = vec(C).

A subtle point arises when dealing with the "dual" Lyapunov equation A^T X + XA = C. The equivalent "vec form" of this equation is

    [(I ⊗ A^T) + (A^T ⊗ I)] vec(X) = vec(C).

However, the complex-valued equation A^H X + XA = C is equivalent to

    [(I ⊗ A^H) + (A^T ⊗ I)] vec(X) = vec(C).

The vec operator has many useful properties, most of which derive from one key result.

Theorem 13.26. For any three matrices A, B, and C for which the matrix product ABC is defined,

    vec(ABC) = (C^T ⊗ A) vec(B).

Proof: The proof follows in a fairly straightforward fashion either directly from the definitions or from the fact that vec(xy^T) = y ⊗ x.  □
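A one-line numerical check of Theorem 13.26 (not from the text; random illustrative matrices; note that vec corresponds to column-major, i.e., "F"-order, flattening in NumPy):

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.standard_normal((3, 4))
    B = rng.standard_normal((4, 2))
    C = rng.standard_normal((2, 5))

    lhs = (A @ B @ C).reshape(-1, order='F')          # vec(ABC)
    rhs = np.kron(C.T, A) @ B.reshape(-1, order='F')  # (C^T (x) A) vec(B)
    print(np.allclose(lhs, rhs))                      # True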
An immediate application is to the derivation of existence and uniqueness conditions for the solution of the simple Sylvester-like equation introduced in Theorem 6.11.

Theorem 13.27. Let A ∈ R^{m×n}, B ∈ R^{p×q}, and C ∈ R^{m×q}. Then the equation

    AXB = C        (13.14)

has a solution X ∈ R^{n×p} if and only if AA^+ C B^+ B = C, in which case the general solution is of the form

    X = A^+ C B^+ + Y − A^+ A Y B B^+,        (13.15)

where Y ∈ R^{n×p} is arbitrary. The solution of (13.14) is unique if BB^+ ⊗ A^+ A = I.

Proof: Write (13.14) as

    (B^T ⊗ A) vec(X) = vec(C)        (13.16)
by Theorem 13.26. This "vector equation" has a solution if and only if

    (B^T ⊗ A)(B^T ⊗ A)^+ vec(C) = vec(C).

It is a straightforward exercise to show that (M ⊗ N)^+ = M^+ ⊗ N^+. Thus, (13.16) has a solution if and only if

    vec(C) = (B^T ⊗ A)((B^+)^T ⊗ A^+) vec(C)
           = [(B^+ B)^T ⊗ A A^+] vec(C)
           = vec(A A^+ C B^+ B)

and hence if and only if AA^+ C B^+ B = C.

The general solution of (13.16) is then given by

    vec(X) = (B^T ⊗ A)^+ vec(C) + [I − (B^T ⊗ A)^+ (B^T ⊗ A)] vec(Y),

where Y is arbitrary. This equation can then be rewritten in the form

    vec(X) = ((B^+)^T ⊗ A^+) vec(C) + [I − (BB^+)^T ⊗ A^+ A] vec(Y)

or, using Theorem 13.26,

    X = A^+ C B^+ + (Y − A^+ A Y B B^+).

The solution is clearly unique if BB^+ ⊗ A^+ A = I.  □
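A short check of Theorem 13.27 using the Moore-Penrose pseudoinverse (not part of the text; the matrices are random illustrative choices, and C is constructed so that a solution exists):

    import numpy as np

    rng = np.random.default_rng(2)
    A = rng.standard_normal((4, 3))
    B = rng.standard_normal((2, 5))
    X_true = rng.standard_normal((3, 2))
    C = A @ X_true @ B                                  # guarantees solvability

    Ap, Bp = np.linalg.pinv(A), np.linalg.pinv(B)
    print(np.allclose(A @ Ap @ C @ Bp @ B, C))          # solvability condition holds

    Y = rng.standard_normal((3, 2))
    X = Ap @ C @ Bp + Y - Ap @ A @ Y @ B @ Bp           # general solution (13.15)
    print(np.allclose(A @ X @ B, C))                    # True for any Y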
EXERCISES

1. For any two matrices A and B for which the indicated matrix product is defined, show that (vec(A))^T(vec(B)) = Tr(A^T B). In particular, if B ∈ R^{n×n}, then Tr(B) = vec(I_n)^T vec(B).

2. Prove that for all matrices A and B, (A ⊗ B)^+ = A^+ ⊗ B^+.

3. Show that the equation AXB = C has a solution for all C if A has full row rank and B has full column rank. Also, show that a solution, if it exists, is unique if A has full column rank and B has full row rank. What is the solution in this case?

4. Show that the general linear equation

       Σ_{i=1}^{k} A_i X B_i = C

   can be written in the form

       [B_1^T ⊗ A_1 + · · · + B_k^T ⊗ A_k] vec(X) = vec(C).
5. Let x ∈ R^m and y ∈ R^n. Show that x^T ⊗ y = y x^T.

6. Let A ∈ R^{n×n} and B ∈ R^{m×m}.

   (a) Show that ||A ⊗ B||_2 = ||A||_2 ||B||_2.

   (b) What is ||A ⊗ B||_F in terms of the Frobenius norms of A and B? Justify your answer carefully.

   (c) What is the spectral radius of A ⊗ B in terms of the spectral radii of A and B? Justify your answer carefully.

7. Let A, B ∈ R^{n×n}.

   (a) Show that (I ⊗ A)^k = I ⊗ A^k and (B ⊗ I)^k = B^k ⊗ I for all integers k.

   (b) Show that e^{I ⊗ A} = I ⊗ e^A and e^{B ⊗ I} = e^B ⊗ I.

   (c) Show that the matrices I ⊗ A and B ⊗ I commute.

   (d) Show that

           e^{A ⊕ B} = e^{(I ⊗ A) + (B ⊗ I)} = e^B ⊗ e^A.

       (Note: This result would look a little "nicer" had we defined our Kronecker sum the other way around. However, Definition 13.14 is conventional in the literature.)

8. Consider the Lyapunov matrix equation (13.11) with

       A = [ 1   0 ]
           [ 0  −1 ]

   and C the symmetric matrix

       [ 2   0 ]
       [ 0  −2 ].

   Clearly

       X_s = [ 1  0 ]
             [ 0  1 ]

   is a symmetric solution of the equation. Verify that

       X_ns = [  1  1 ]
              [ −1  1 ]

   is also a solution and is nonsymmetric. Explain in light of Theorem 13.21.

9. Block Triangularization: Let

       S = [ A  B ]
           [ C  D ],

   where A ∈ R^{n×n} and D ∈ R^{m×m}. It is desired to find a similarity transformation of the form

       T = [ I  0 ]
           [ X  I ]

   such that T^{-1}ST is block upper triangular.
   (a) Show that S is similar to

           [ A + BX      B    ]
           [    0     D − XB  ]

       if X satisfies the so-called matrix Riccati equation

           C − XA + DX − XBX = 0.

   (b) Formulate a similar result for block lower triangularization of S.

10. Block Diagonalization: Let

        S = [ A  B ]
            [ 0  D ],

    where A ∈ R^{n×n} and D ∈ R^{m×m}. It is desired to find a similarity transformation of the form

        T = [ I  Y ]
            [ 0  I ]

    such that T^{-1}ST is block diagonal.

    (a) Show that S is similar to

            [ A  0 ]
            [ 0  D ]

        if Y satisfies the Sylvester equation

            AY − YD = −B.

    (b) Formulate a similar result for block diagonalization of

            S = [ A  0 ]
                [ C  D ].
Bibliography

[1] Albert, A., Regression and the Moore-Penrose Pseudoinverse, Academic Press, New York, NY, 1972.

[2] Bartels, R.H., and G.W. Stewart, "Algorithm 432. Solution of the Matrix Equation AX + XB = C," Comm. ACM, 15(1972), 820–826.

[3] Bellman, R., Introduction to Matrix Analysis, Second Edition, McGraw-Hill, New York, NY, 1970.

[4] Bjorck, A., Numerical Methods for Least Squares Problems, SIAM, Philadelphia, PA, 1996.

[5] Cline, R.E., "Note on the Generalized Inverse of the Product of Matrices," SIAM Rev., 6(1964), 57–58.

[6] Golub, G.H., S. Nash, and C. Van Loan, "A Hessenberg-Schur Method for the Problem AX + XB = C," IEEE Trans. Autom. Control, AC-24(1979), 909–913.

[7] Golub, G.H., and C.F. Van Loan, Matrix Computations, Third Edition, Johns Hopkins Univ. Press, Baltimore, MD, 1996.

[8] Golub, G.H., and J.H. Wilkinson, "Ill-Conditioned Eigensystems and the Computation of the Jordan Canonical Form," SIAM Rev., 18(1976), 578–619.

[9] Greville, T.N.E., "Note on the Generalized Inverse of a Matrix Product," SIAM Rev., 8(1966), 518–521 [Erratum, SIAM Rev., 9(1967), 249].

[10] Halmos, P.R., Finite-Dimensional Vector Spaces, Second Edition, Van Nostrand, Princeton, NJ, 1958.

[11] Higham, N.J., Accuracy and Stability of Numerical Algorithms, Second Edition, SIAM, Philadelphia, PA, 2002.

[12] Horn, R.A., and C.R. Johnson, Matrix Analysis, Cambridge Univ. Press, Cambridge, UK, 1985.

[13] Horn, R.A., and C.R. Johnson, Topics in Matrix Analysis, Cambridge Univ. Press, Cambridge, UK, 1991.

[14] Kenney, C., and A.J. Laub, "Controllability and Stability Radii for Companion Form Systems," Math. of Control, Signals, and Systems, 1(1988), 361–390.

[15] Kenney, C.S., and A.J. Laub, "The Matrix Sign Function," IEEE Trans. Autom. Control, 40(1995), 1330–1348.

[16] Lancaster, P., and M. Tismenetsky, The Theory of Matrices, Second Edition with Applications, Academic Press, Orlando, FL, 1985.

[17] Laub, A.J., "A Schur Method for Solving Algebraic Riccati Equations," IEEE Trans. Autom. Control, AC-24(1979), 913–921.

[18] Meyer, C.D., Matrix Analysis and Applied Linear Algebra, SIAM, Philadelphia, PA, 2000.

[19] Moler, C.B., and C.F. Van Loan, "Nineteen Dubious Ways to Compute the Exponential of a Matrix," SIAM Rev., 20(1978), 801–836.

[20] Noble, B., and J.W. Daniel, Applied Linear Algebra, Third Edition, Prentice-Hall, Englewood Cliffs, NJ, 1988.

[21] Ortega, J., Matrix Theory. A Second Course, Plenum, New York, NY, 1987.

[22] Penrose, R., "A Generalized Inverse for Matrices," Proc. Cambridge Philos. Soc., 51(1955), 406–413.

[23] Stewart, G.W., Introduction to Matrix Computations, Academic Press, New York, NY, 1973.

[24] Strang, G., Linear Algebra and Its Applications, Third Edition, Harcourt Brace Jovanovich, San Diego, CA, 1988.

[25] Watkins, D.S., Fundamentals of Matrix Computations, Second Edition, Wiley-Interscience, New York, 2002.

[26] Wonham, W.M., Linear Multivariable Control. A Geometric Approach, Third Edition, Springer-Verlag, New York, NY, 1985.
Index

A-invariant subspace, 89
    matrix characterization of, 90
algebraic multiplicity, 76
angle between vectors, 58
basis, 11
    natural, 12
block matrix, 2
    definiteness of, 104
    diagonalization, 150
    inverse of, 48
    LU factorization, 5
    triangularization, 149
C^n, 1
C^{m×n}, 1
C_r^{m×n}, 1
Cauchy-Bunyakovsky-Schwarz Inequality, 58
Cayley-Hamilton Theorem, 75
chain
    of eigenvectors, 87
characteristic polynomial
    of a matrix, 75
    of a matrix pencil, 125
Cholesky factorization, 101
co-domain, 17
column
    rank, 23
    vector, 1
companion matrix
    inverse of, 105
    pseudoinverse of, 106
    singular values of, 106
    singular vectors of, 106
complement
    of a subspace, 13
    orthogonal, 21
congruence, 103
conjugate transpose, 2
contragredient transformation, 137
controllability, 46
defective, 76
degree
    of a principal vector, 85
determinant, 4
    of a block matrix, 5
    properties of, 4–6
dimension, 12
direct sum
    of subspaces, 13
domain, 17
eigenvalue, 75
    invariance under similarity transformation, 81
elementary divisors, 84
equivalence transformation, 95
    orthogonal, 95
    unitary, 95
equivalent generalized eigenvalue problems, 127
equivalent matrix pencils, 127
exchange matrix, 39, 89
exponential of a Jordan block, 91, 115
exponential of a matrix, 81, 109
    computation of, 114–118
    inverse of, 110
    properties of, 109–112
field, 7
four fundamental subspaces, 23
function of a matrix, 81
generalized eigenvalue, 125
generalized real Schur form, 128
154 Index
generalized Schur form, 127
generalized singular value decomposition,
134
geometric multiplicity, 76
Holder Inequality, 58
Hermitian transpose, 2
higher–order difference equations
conversion to first–order form, 121
higher–order differential equations
conversion to first–order form, 120
higher–order eigenvalue problems
conversion to first–order form, 136
i, 2
idempotent, 6, 51
identity matrix, 4
inertia, 103
initial–value problem, 109
for higher–order equations, 120
for homogeneous linear difference
equations, 118
for homogeneous linear differential
equations, 112
for inhomogeneous linear difference
equations, 119
for inhomogeneous linear differen
tial equations, 112
inner product
complex, 55
complex Euclidean, 4
Euclidean, 4, 54
real, 54
usual, 54
weighted, 54
invariant factors, 84
inverses
of block matrices, 47
7, 2
Jordan block, 82
Jordan canonical form (JCF), 82
Kronecker canonical form (KCF), 129
Kronecker delta, 20
Kronecker product, 139
determinant of, 142
eigenvalues of, 141
eigenvectors of, 141
products of, 140
pseudoinverse of, 148
singular values of, 141
trace of, 142
transpose of, 140
Kronecker sum, 142
eigenvalues of, 143
eigenvectors of, 143
exponential of, 149
leading principal submatrix, 100
left eigenvector, 75
left generalized eigenvector, 125
left invertible, 26
left nullspace, 22
left principal vector, 85
linear dependence, 10
linear equations
characterization of all solutions, 44
existence of solutions, 44
uniqueness of solutions, 45
linear independence, 10
linear least squares problem, 65
general solution of, 66
geometric solution of, 67
residual of, 65
solution via QR factorization, 71
solution via singular value decomposition, 70
statement of, 65
uniqueness of solution, 66
linear regression, 67
linear transformation, 17
co–domain of, 17
composition of, 19
domain of, 17
invertible, 25
left invertible, 26
matrix representation of, 18
nonsingular, 25
nullspace of, 20
range of, 20
right invertible, 26
LU factorization, 6
block, 5
Lyapunov differential equation, 113
Lyapunov equation, 144
and asymptotic stability, 146
integral form of solution, 146
symmetry of solution, 146
uniqueness of solution, 146
matrix
asymptotically stable, 145
best rank k approximation to, 67
companion, 105
defective, 76
definite, 99
derogatory, 106
diagonal, 2
exponential, 109
Hamiltonian, 122
Hermitian, 2
Householder, 97
indefinite, 99
lower Hessenberg, 2
lower triangular, 2
nearest singular matrix to, 67
nilpotent, 115
nonderogatory, 105
normal, 33, 95
orthogonal, 4
pentadiagonal, 2
quasi–upper–triangular, 98
sign of a, 91
square root of a, 101
symmetric, 2
symplectic, 122
tridiagonal, 2
unitary, 4
upper Hessenberg, 2
upper triangular, 2
matrix exponential, 81, 91, 109
matrix norm, 59
1–, 60
2–, 60
∞–, 60
p–, 60
consistent, 61
Frobenius, 60
induced by a vector norm, 61
mixed, 60
mutually consistent, 61
relations among, 61
Schatten, 60
spectral, 60
subordinate to a vector norm, 61
unitarily invariant, 62
matrix pencil, 125
equivalent, 127
reciprocal, 126
regular, 126
singular, 126
matrix sign function, 91
minimal polynomial, 76
monic polynomial, 76
Moore–Penrose pseudoinverse, 29
multiplication
matrix–matrix, 3
matrix–vector, 3
Murnaghan–Wintner Theorem, 98
negative definite, 99
negative invariant subspace, 92
nonnegative definite, 99
criteria for, 100
nonpositive definite, 99
norm
induced, 56
natural, 56
normal equations, 65
normed linear space, 57
nullity, 24
nullspace, 20
left, 22
right, 22
observability, 46
one–to–one (1–1), 23
conditions for, 25
onto, 23
conditions for, 25
orthogonal
complement, 21
matrix, 4
projection, 52
subspaces, 14
vectors, 4, 20
orthonormal
vectors, 4, 20
outer product, 19
and Kronecker product, 140
exponential of, 121
pseudoinverse of, 33
singular value decomposition of, 41
various matrix norms of, 63
pencil
equivalent, 127
of matrices, 125
reciprocal, 126
regular, 126
singular, 126
Penrose theorem, 30
polar factorization, 41
polarization identity, 57
positive definite, 99
criteria for, 100
positive invariant subspace, 92
power (kth) of a Jordan block, 120
powers of a matrix
computation of, 119–120
principal submatrix, 100
projection
oblique, 51
on four fundamental subspaces, 52
orthogonal, 52
pseudoinverse, 29
four Penrose conditions for, 30
of a full–column–rank matrix, 30
of a full–row–rank matrix, 30
of a matrix product, 32
of a scalar, 31
of a vector, 31
uniqueness, 30
via singular value decomposition, 38
Pythagorean Identity, 59
Q-orthogonality, 55
QR factorization, 72
R^n, 1
R^{m x n}, 1
R^{m x n}_r, 1
R^{n x n}_n, 1
range, 20
range inclusion
characterized by pseudoinverses, 33
rank, 23
column, 23
row, 23
rank–one matrix, 19
rational canonical form, 104
Rayleigh quotient, 100
reachability, 46
real Schur canonical form, 98
real Schur form, 98
reciprocal matrix pencil, 126
reconstructibility, 46
regular matrix pencil, 126
residual, 65
resolvent, 111
reverse–order identity matrix, 39, 89
right eigenvector, 75
right generalized eigenvector, 125
right invertible, 26
right nullspace, 22
right principal vector, 85
row
rank, 23
vector, 1
Schur canonical form, 98
generalized, 127
Schur complement, 6, 48, 102, 104
Schur Theorem, 98
Schur vectors, 98
second–order eigenvalue problem, 135
conversion to first–order form, 135
Sherman–Morrison–Woodbury formula, 48
signature, 103
similarity transformation, 95
and invariance of eigenvalues, 81
orthogonal, 95
unitary, 95
simple eigenvalue, 85
simultaneous diagonalization, 133
via singular value decomposition, 134
singular matrix pencil, 126
singular value decomposition (SVD), 35
and bases for four fundamental subspaces, 38
and pseudoinverse, 38
and rank, 38
characterization of a matrix factorization as, 37
dyadic expansion, 38
examples, 37
full vs. compact, 37
fundamental theorem, 35
nonuniqueness, 36
singular values, 36
singular vectors
left, 36
right, 36
span, 11
spectral radius, 62, 107
spectral representation, 97
spectrum, 76
subordinate norm, 61
subspace, 9
A–invariant, 89
deflating, 129
reducing, 130
subspaces
complements of, 13
direct sum of, 13
equality of, 10
four fundamental, 23
intersection of, 13
orthogonal, 14
sum of, 13
Sylvester differential equation, 113
Sylvester equation, 144
integral form of solution, 145
uniqueness of solution, 145
Sylvester's Law of Inertia, 103
symmetric generalized eigenvalue problem, 131
total least squares, 68
trace, 6
transpose, 2
characterization by inner product, 54
of a block matrix, 2
triangle inequality
for matrix norms, 59
for vector norms, 57
unitarily invariant
matrix norm, 62
vector norm, 58
variation of parameters, 112
vec
of a matrix, 145
of a matrix product, 147
vector norm, 57
1–, 57
2–, 57
∞–, 57
p–, 57
equivalent, 59
Euclidean, 57
Manhattan, 57
relations among, 59
unitarily invariant, 58
weighted, 58
weighted p–, 58
vector space, 8
dimension of, 12
vectors, 1
column, 1
linearly dependent, 10
linearly independent, 10
orthogonal, 4, 20
orthonormal, 4, 20
row, 1
span of a set of, 11
zeros
of a linear dynamical system, 130
. the notation n denotes the set {1. x E Rn I.n xn Cmxn = the set of complex m x n matrices of rank r. IR~ xn denotes the set of real = set of real of rank Thus. nonsingular n x n matrices. 1 . x T y is a scalar while xyT is an n x n matrix. en 4.. where Xi E R for ii E !!. . but this convention makes it easy to recognize immediately throughout the text that. and linear algebra.. . mxn = the set of complex (or complexvalued) x n matrices. R mxn = the set of real (or realvalued) m x n matrices. That a vector is always a column vector rather than a row vector is entirely arbitrary. This is followed by a review of some basic notions in matrix analysis throughout the text. where y G Rn and the superscript T is the transpose operation.. the set of ntuples of complex numbers represented as column vectors. n }. IR n = the set of ntuples of real numbers represented as column vectors. Cn = the set of ntuples of complex numbers represented as column vectors. the notation!! denotes the set {I. Rnxnn denotes the set of real nonsingular n x n matrices.g.. Thus. 1R. A row vector is denoted by y~. Henceforth. 5. n}. Rn = the set of ntuples of real numbers represented as column vectors.Chapter 1 Chapter 1 Introduction and Review Introduction and Review 1. IR rn xn = the set of real (or realvalued) m x n matrices. 2.. XTy is a scalar while it easy to recognize immediately throughout the text that. That a vector is always a y E IR n and the superscript T is the transpose operation. Thus. xyT is an n x n matrix. 2.. x e IR n means means where xi e IR for e n. = the set of complex m x n matrices of rank r. but this convention makes column vector rather than a row vector is entirely arbitrary. Crnxn = the set of complex (or complexvalued) m x n matrices. The following sets appear frequently throughout subsequent chapters: The following sets appear frequently throughout subsequent chapters: 1. 3. Henceforth. A row vector is denoted by yT where Note: Vectors are always column vectors.g.n xn Rmxnr = the set of real m x n matrices of rank r. . This is followed by a review of some basic notions in matrix analysis and linear algebra.1 1. Thus. Note: Vectors are always column vectors. e. e.1 Some Notation and Terminology Some Notation and Terminology We begin with a brief introduction to some standard notation and terminology to be used We begin with a brief introduction to some standard notation and terminology to be used throughout the text. e. e 6. 5.
i. • tridiagonal if aij = 0 for Ii . Introduction and Review We now classify some of the more familiar "shaped" matrices. where the bar indicates complex sometimes A*) and its = IX jfJ (j = = v^T). • lower Hessenberg if aij = 0 for } . A if A = AT and Hermitian if A = AH. There is some the more common notation in electrical engineering and system theory. then easy to see that if A. • upper Hessenberg if afj = 0 for — > 1. (7. an equation like A = A T implies that A is realvalued while a statement otherwise noted. If A E em xn. then A7" e jRnxm. is complexvalued symmetric but not Hermitian. Example 1. that is. There is some advantage to being conversant with both notations. if z = a + jf$ (j = ii = R). 2 2.. an equation like A = A T implies that A is realvalued while a statement like A = AH implies that A is complexvalued.2. 7 + j ] is complexvalued symmetric but not Hermitian.JI > 1. A is conjugation. For example. (AT)ij = aji. The notation j is used throughout the text but reminders are placed at strategic locations. A = AH A complexvalued. We henceforth that.. 1. 7 = («77). For example.1. • upper triangular if aij. = 0 for i > j.jj > 1. then the (m + n) x (m + n) matrix [~ Bc] is block upper triangular. a.2. if A E IRnxn. if e Rnxn e Rmxn C e Rmxm then the (m n) x (m n) matrix [A0 ~] is block upper triangular. • lower triangular if aij7 = 0 for i/ < }. then z = IX — jfi...e. A = [ . and definitions block submatrices. For example.. it is easy to see that if Aij are appropriately dimensioned subblocks. unless if A = A T Hermitian A = A H. A matrix A E IRn xn e (or A E enxn ) is A eC" x ")is • diagonal if a. z Remark 1.j  7+} ] is Hermitian (but not symmetric).. • lower triangular if a. Oth (A 7 ). it is Transposes of block matrices can be defined in an obvious way. C E jRmxm. text but reminders are placed at strategic locations. = a jfJ.. For example. j)th entry of a matrix A is denoted by AT and is the matrix whose j)th entry A. = 0 for i ^ j. } is While R is most commonly denoted by i in mathematics texts. = 0 for i > }. • tridiagonal if a(y = 0 for z — j\ > 1. Introduction and Review Chapter 1. While \/—\ is most commonly denoted by i in mathematics texts. Each of the above also has a "block" analogue obtained by replacing scalar components in the respective definitions by block submatrices. j)\h entry is (AH)ij = (aji). . = 0 for < j. A = [ 7+} 5 3· A . i)th entry of A. then its Hermitian transpose (or conjugate transpose) is denoted by AH (or H If A e C mx ".. • lower Hessenberg if a.e. The The transpose of a matrix A is denoted by AT and is the matrix whose (i. = 0 for j — > 1. AT E E" xm is the (j. are appropriately dimensioned subblocks. Note that if A E R mx ". • upper triangular if a. We henceforth adopt the convention that..ii > 1. B E IR nxm . ~ 5 is symmetric (and Hermitian). Hermitian conjugate sometimes A*) and its (i. then r = [ .2 2 Chapter 1. where the bar indicates complex j)th entry is (A H ). j is Remark the more common notation in electrical engineering and system theory. • upper Hessenberg if aij = 0 for ii . = 0 for / — j\ > 2. • diagonal if aij7 = 0 for i i= }. A matrix A is symmetric i.[ 7 .J I > 2. otherwise noted.. • pentadiagonal if ai. • pentadiagonal if aij = 0 for Ii . Example 1. is Hermitian (but not symmetric). 2 Transposes of block matrices can be defined in an obvious way. A e jRmxn. ] is symmetric (and Hermitian).
[ ~ J+l.. vector x. . vn Rpxn p with Vit e R .. Then v E jRP. recall that (CD)T = DT C T (C D)T = DT T If H H H (or (CD} = DHC H ).. + Xnan E jRm. the matrixvector product Ax. its importance cannot be overemphasized.3..2 Matrix Arithmetic 1.[ ~ J+2. suppose A E jRmxn and [hI. matrixvector product with the column x to find Ax = [50 32]' but this matrixvector product can also be computed computed via v1a 3. A very important way to view this product is interpret weighted to interpret it as a weighted sum (linear combination) of the columns of A. multiplication of a matrix by a scalar. .•. A special case of matrix multiplication occurs when the second matrix is a column multiplication second i.. Namely. Matrix Arithmetic 3 1. Theorem 1. un Rmxn with u Rm and V = [v .~]..3 can then also be generalized to its "row dual. Let U = [MI. Vn] ]Ee lR Pxn U [Uj. . . un]]Ee jRmxn with Ui t Ee jRm and V = [VI.... n UV T = LUiVr E jRmxp.. eRmxn has row cj e E l x "... matrixvector if C E jRmxn has row vectors cJ E jRlxn.. AB bi E W1. and is premultiplied by a row yT E R l x m then the product can be written as a weighted linear sum of the rows of C as follows: follows: yTC=YICf +"'+Ymc~ EjRlxn. and multiplication of matrices. i. .e. That is. suppose (linear combination) suppose A = la' . The importance of this interpretation cannot be overemphasized. Again.• a"1 E m JR " with a.. formulation of matrix multiplication that appears frequently in the text and is presented below as a theorem. Ax.2 Arithmetic It is assumed that the reader is familiar with the fundamental notions of matrix addition. Theorem reader. i=I If matrices C and D are compatible for multiplication.. As a numerical example. there can be important computerarchitecturerelated advancomputerarchitecturerelated tages to preferring the latter calculation method. importance interpretation take A = [96 85 74]x = take A = [~ ~].2. Then we can quickly calculate dot products of the rows of A column Ax = [. E JRm and x = l I. It Theorem 1.'" p] E jRnxp For matrix multiplication. Theorem 1. . Then the matrix product A B can be thought of as above....bhp ] e Rnxp with hi e jRn. x = ! 2 Then we can quickly calculate dot products of the rows of A [~]..3. and is premultiplied by a row vector yTe jRlxm.. applied p times: There is also an alternative. It is deceptively simple and its full understanding is well rewarded. {.e.1. multiplication..xn~ ] Then Ax = Xjal + . This gives a dual to the matrixvector result above. if (C D)H — D C )." The details are left to the readei "row left . but equivalent.[ ~ l For large arrays of numbers. suppose A e Rmxn and B = [bi.
Introduction and Review Chapter 1. where I is the n x n matrix A e IRnxn is an orthogonal matrix if ATA = AAT = /. Two nonzero vectors x.4. for short) by for short) by n (x'Y}c :=xHy = Lx. for short) of x and For vectors y e IRn. Then x T x = 0 but x H X = 2. (or A E Cnxn) we use the notation det A for the determinant of A. xTyy = 0. i. There is an orthogonal or unitary matrix has orthonormal rows and orthonormal columns. A EC = (orC" xn). y E <en. (x. the order in which x and y appear in the complex inner (x.. a matrix A e en xn is said to be unitary if A H A = AA H = I. rows or columns. i. Y}c = {y. Similarly. consider What is true in the complex case is that x H 0 if and only if O. x and y are orthogonal and x Tx = 1 and yT = 1.2j while while and we see that. There is no special name attached to a nonsquare matrix A E R mxn (or E Cmxn with orthonormal no special name attached to a nonsquare matrix A e ]Rrn"n (or € e mxn ))with orthonormal rows or columns. Nonzero complex vectors are orthogonal if x H y = 0. If x. The more conventional definition of the complex inner product is H ( x .y. y ) c = yHxx = L:7=1 x. y e IRn are said to be orthogonal if their inner product is zero. In sometimes used denote identity matrix. i.e..=1 y appear in Note that (x. Then (x. We list below some of (or A 6 en xn) we use the notation det A for the determinant of A.e. . x T = O.3 1. Note that x T x = 0 if and only if x = 0 when x E IRn but that this is not true if x E en.. Then Example 1. If e C". 1. case. we define their complex Euclidean inner product (or inner product. Similarly. indeed. For A E R nnxn A e IR xn It assumed of determinants.4 4 Chapter 1. x}c. consider the nonzero vector x above. Note that x Tx = 0 if and only if x = 0 when x e Rn but that this is not true if x e Cn. If x and y are zero. where / is the n x n identity matrix.3 Inner Products and Orthogonality Inner Products and Orthogonality For vectors x.. What is true in the complex case is that XH x = 0 if and only if x = 0. the Euclidean inner product inner for short) y is given by y is given by n T (x. indeed. the Euclidean inner product (or inner product. the nonzero vector x above.y. y)c = (y. A nxn matrix E R is an orthogonal matrix if AT A = AAT = I.. x)c. . To illustrate.=1 Note that the inner product is a scalar.y. we define their complex Euclidean inner product (or inner product. Introduction and Review 1. E R are said to be orthogonal if their inner product is Two nonzero vectors x. Let x = [1j and y = [1/2]. To illustrate. (x. Nonzero complex vectors are orthogonal if XHy = O. Y}c = [ } JH [ ~ ] = [I . Let x = [} ]] and y = [~].. y)c = (y. Clearly said = an orthogonal or unitary matrix has orthonormal rows and orthonormal columns. A orthogonal and XTX = 1 and yTyy = 1. Note that the inner product is a scalar. y)c = (y. y E R".e. order in which x product is important. We list below some of . then we say that x and y are orthonormal. x)c and we see that.e. The notation /„ is sometimes used to denote the identity matrix in IR nxn in Rnx" x nxn H H (or en "). The more conventional definition of the complex inner product is product is important. then we say that x and y are orthonormal. i. y) := x y = Lx. but throughout the text we prefer the symmetry with the real (x.4. Then XTX = 0 but XHX = 2. Example 1.j] [ ~ ] = 1 . y)c = y = Eni=1 xiyi but throughout the text we prefer the symmetry with the real case.4 Determinants Determinants It is assumed that the reader is familiar with the basic theory of determinants. x)c'.4 1.
• det Ann. Multiplying a column of A by a scalar ex results in a new matrix whose determinant scalar a determinant is ex det A. Interchanging two columns of A changes only the sign of the determinant.1. 8. 11. 9. 2. 14. 4. then det A = 0.. Multiplying a column of A by a scalar and then adding it to another column does not a column of scalar column does change the determinant. Multiplying a row of A by a scalar and then adding it to another row does not change 7. 7.e. 8. Proof" Proof: This follows easily from the block UL factorization BD. are consequences of one or more of the others. properties 1..B). Multiplying A 6. det AT = det A (detA H = detA if A e C nxn ). i. Multiplying a row of A by a scalar ex results in a new matrix whose determinant is a det A. detAT = detA (det A H = det A A E C"X"). 15. Multiplying a row of A by a scalar and then adding it to another row does not change the determinant. If elements. then det(AB) = det A det B.e. Interchanging two rows of A changes only the sign of the determinant. If A is upper triangular.4. then det [~ BD] = det D det(A – B D – 11C ) . 11.. Interchanging two rows of A changes only the sign of the determinant. 3. If A has a zero column or if any two columns of A are equal.. Determinants 1. change the determinant. 3. Proof: This follows easily from the block LU factorization Proof" This follows easily from the block LU factorization [~ ~J=[ ~ ][ ~ 17. then det(A1) = 1detA . Determinants 5 properties the more useful properties of determinants.. If A is lower triangular. = alla22 • • ann i.. 16. B eR n x n ..• a"n. If A e IRnxn and D E lR~xm. 16. If A A A = o.CA..4.1 I ][ . 17. then det A = alla22 • • ann 12. the determinant. If A e R n x n and D e R m x m .• ann. then det A = 0. det A is the product of its diagonal 10..thendet(AB) = det A det 5. If A is block diagonal (or block upper triangular or block lower triangular). of 5. If A E R n x n and D e RMmxm. B E IRnxn . If A is lower triangUlar. If A has a zero row or if any two rows of A are equal. then det [~ BD] = del A det(D – CA– l 1 B).. Ann (of possibly different sizes). If A has a zero column or if any two columns of A are equal. 10... Multiplying a row of A by a scalar a results in a new matrix whose determinant is 5. If A E lR~xn and DE IR mxm det [Ac ~] detA det(D . If det = a11a22 • • ann 12. If A € lR~xn. If A E Rnxn. • • An" (of A = square diagonal blocks A11. then det A = a11a22 .. several more is a properties are consequences of one or more of the others. A 11. det A is the product of its diagonal diagonal... 15..A22.. with A block diagonal (or block 13. then det A = different det A11 det A22 . then det A = O.•.C). If A is diagonal. exdetA.. If A. . If A.1 ) = de: A. i. then det(A. then det A = a11a22 . Note that this is not a minimal set.• ann. det A 11 det A22 • • det Ann 14. is a det A.. then det A = all a22 .e. then det [Ac ~] det D det(A B D. 13. A 22 .
• . see. (c) Let S € Rnxn be skewsymmetric. Show that the product V = VI. denoted TrA.. Let A E jRNxn. The factorization of a matrix A into the product of a unit lower triangular matrix L (i.. Tr(aA + f3B) = aTrA + fiTrB.. 2f) 2 _ sm 2^ sin 0 sin sin 20 1 .. is defined as the sum of its diagonal A e Rnxn. aII o. Show that det(I – xyT) = 1. i. AB ^ BA.y E jRn. what is det A? If A 3. Introduction and Review Chapter 1. TrA = L~=I au· elements. i. _. The factorizations used above U triangular. .. Show that A must be singular.. Tr(Afl) = Tr(£A). Another such factorization is UL where V is unit upper triangular and L is lower triangular. 6.. of Din [AC ~ l EXERCISES EXERCISES nxn 1.Vk € jRn xn be orthogonal matrices.6.B D – l C is the Schur complement of D in [~ BD ]. ST = S. 2 0 IS I dempotent . then Tr(aA fiB)= aTrA + f3TrB. i.• V k is an orthogonal matrix. lower triangular with all l's on the diagonal) and an upper triangular matrix L 1's an V is called an LV factorization.. for example. [24]. lor z r 2sm2rt # J. ft e R. 4.. Showthatdet(lxyT) 1 – yTx. y e Rn. A =.. ! [ 2cos2<9 I T 2cos2 0 (a) Show that the matrix A = _. are block analogues of these. [~ ~ ].e. elements. U1 U2 • • Uk is an 5. i.. .. B e JRn xn and a. TrS O. Uk E Rnxn U = VI V2 . Show that A must be singular. see. TrA = Eni=1 aii. example.e. The trace of A. ST = So Show that TrS = 0. If A is orthogonal. Let U1. . The matrix D . A E jRnxn A2 / x™ . nxn linear E R f3 E JR. Remark 1. A matrix A e Wx" is said to be idempotent if A2 = A. Let x. what is det(aA)? What is det(–A)? E R a det(A)? A? If A unitary.yTx. Remark — C I B – BDIe Similarly. Letx. V2. If A e jRnxn and or is a scalar. The factorization of a matrix A into the product of a unit lower triangular Remark 1.6 6 Chapter 1. [24].. what is det A? If A is unitary.e.e A – 1 B is called the Schur complement of A in[ACBD]. A . Another such factorization is VL U is an LU factorization. U2 . Introduction and Review Remark 1. of denoted Tr A. . .. even though in general AB i= B A.5. either prove the converse or provide a counterexample. Then E jRnxn skewsymmetric. 2sin20 J is idempotent for all #.5. A? 2. (a) Show that the trace is a linear function. (b) Show that Tr(AB) = Tr(BA). (b) Suppose A e IR" X "is idempotent and A i= I. Suppose A E jRn xn is idempotent and A ^ I. II _ . .. if A.e.e.
(D) (D) a· p a . (Ml) a· p .8 = P • a for all a. Axioms (A1)(A3) state that (F.) ( a .y for alia. when no confusion can arise. (A3) for all a E IF. An excellent reference for this and the next chapter is [10]. An excellent reference of matrices. including spaces formed by special classes emphasis is on finitedimensional vector spaces. a"1 € IF • a~l = 1.1. (M3) e IF. y Elf..8 . (A2) there exists an element 0 e IF such that a 0 = a.. a f. where some of the proofs that are not given here may for this and the next chapter is [10]. The emphasis is on finitedimensional vector spaces.((.1 Definitions and Examples Definition 2. there exists an element (—a) E IF such that a (—a) O. y)=cip+a. y e F. (M4) a·. (M3) for all a E ¥. I = a for all a E F. not written explicitly. p.8 e F.8)· yyf for all a. . .((. (M2) there exists an element I E F such that a . ft. A field is a set IF together with two operations +. The In this chapter we give a brief review of some of the basic concepts of vector spaces.8) + y ffor all a. Axioms (Al)(A3) state that (IF.8. ft Elf. . 0. +) is a group and an abelian group if (A4) also holds. y Elf. (M2) 1 e IF • I = for a e IF. .8. 7 . •) is an abelian group. Generally speaking. but some infinitedimensional examples are also cited. (Al) a + (.Chapter 2 Vector Spaces Vector Spaces In this chapter we give a brief review of some of the basic concepts of vector spaces.1. there exists an element aI E F such that a . : IF x F —> IF such that Definition 2. Axioms (MI)(M4) state that (IF \ {0}. (A4) a + . there exists an element (a) e F such that a + (a) = 0." is not written explicitly. (A3) for all a e F. be found. aI = 1. the multiplication operator ".8 + y) = (a +. ^ 0. including spaces formed by special classes of matrices. Axioms (M1)(M4) state that (F \ to). y € F. where some of the proofs that are not given here may be found. . y Elf. for all a e IF. the multiplication operator "•" is Generally speaking. but some infinitedimensional examples are also cited.p ) . (A2) there exists an element 0 E F such that a + 0 = a for all a E F. 2.8 +a· y for all a. • F x IF ~ F such that (Al) a (P y ) = (a + p ) y o r all a. A field is a set F together with two operations +. p. +) is a group and an abelian group if (A4) also holds.8 + y) = a·. when no confusion can arise.8.8 E IF. afar all a.8 a for all a.) is an abelian group. (Ml) a . p e F. yy) = (a·.o r all a. (A4) a + p = . (M4) a • p =.8.ye¥.8 = ft + afar all a.
Similar definitions hold for (en. lR~xn is not a field either since (M4) does not hold in general (although the other 8 axioms hold). R with ordinary addition and multiplication is a field.1.4. RMrmxn= {m x n matrices of rank r with real coefficients} is clearly not a field since. (V5) 1 v = v for all v e V (1 Elf). w for all a e F and for all v. +) is an abelian group.4. . (VI) (V. (V5) I·• v = v for all v E V (1 e F)... (V2) (a·. (IRn. for example. Example 2. In practice. +) is an abelian group. F) or.}. v + a. + f3qXq :aj. this causes no confusion and the·• operator is usually not even written explicitly. when there is no possibility of confusion as to the A vector space is denoted by (V. w E V. Similar definitions hold for (C". simply by V. is a field. Remark 2. in Definition Remark 2. (V3) (a (V4) a(v w)=av a w for all a ElF andfor all v.P.. e with ordinary complex addition and multiplication is a field. Example 2. where Z+ = {0. R"x" is not a field either for example. + apxP + .5. Example 2. IR~ xn = m x n matrices of rank r with real coefficients) is clearly not a field since. Ra[x] = the field of rational functions in the indeterminate x 3.. 3. Vector Spaces Example 2.3.1 in the sense of operating on different objects in different sets.( (f3' V v) o r all a.. no confusion and the operator is usually not even written explicitly. IR with ordinary addition and multiplication is a field. p € F and for all v e V.• v = a·• v + p • v for all a. C). (R". 4.3 are different from the + and • in Definition 2. 2. A vector space over a field IF is a set V together with two operations + ::V x V + V and· :: IF x V + V such that V x V ^V and. IF) or. simply by V. this causes 2. f3 Elf andforall v E V. (Ml) does not hold unless m = n. ) f for all a.2. v = a .. In practice. IR) with addition defined by and scalar multiplication defined by and scalar multiplication defined by is a vector space. when there is no possibility of confusion as to the underlying fie Id. Definition 2. is a field. C with ordinary complex addition and multiplication is a field.8 Chapter 2. . Note that + and· in Definition 2.l.p ) .. Moreover. is a vector space. where Z+ = {O. 4. 1. .3.2. since (M4) does not hold in general (although the other 8 axioms hold). 2. Moreover. e). 1. Note that + and • in Definition 2.F xV »• V such that (VI) (V. }.2. Raf.f3i EIR . R) with addition defined by I..qEZ +} . f3 e F and for all v E V.5. A vector space over a field F is a set V together with two operations Definition 2..3 are different from the + and . v for all a. (V4) a· (v + w) = a . w e V. p E IF andfor all v e V. underlying field. I. (V3) (a + f3).r] = the field of rational functions in the indeterminate x = {ao + f30 + atX f3t X + . ft) v = a v + f3.2. (MI) does not hold unless m = n. A vector space is denoted by (V. (V2) ( a f3) v = a P .1 in the sense of operating on different objects in different sets.
Note. i. if and only if(aw1 ßW2) E if(awl + fJw2) e W for all a. and the functions are piecewise continuous (a) '0 = [to. and the functions are piecewise continuous =: (PC[to.. w2 e Remark 2. E) is a vector space with addition defined by 9 9 A+B= [ . Then (W.7.. F) be an arbitrary vector space and V be an arbitrary set. h])n (b) '0 = [to. td. Note. verify that the set in question is closed under addition and scalar multiplication. W2 E W. g E cf> and scalar multiplication defined by and scalar multiplication defined by (af)(d) = af(d) for all a E IF. Let (V. ß E ¥ and for all w1. Then (x(t) : x ( t ) = Ax(t)} is a vector space (of dimension n). (V. + fJmn l yaml yamn 3. F) be a vector space and let W ~ V. Subspaces 2. Let A € R"x". Then cf>('O. y a l2 y a 22 yam 2 ya. W = 0. F) if and only if (W. and for all f E cf>. 4. that since 0 e F. t\]. Let cf>('O. Notation: When the underlying field is understood. IF) be a vector space and let W c V. verify that the set in or prove that something is indeed a subspace (or vector space). F) is a Definition 2." + fJ2I a21 + P" . IF) is a subspace of (V.2 Subspaces Subspaces Definition 2. when used with vector spaces. if and only subspace of (V. V) be the set of functions / mapping D to V. i. td)n or continuous =: (C[?0. we write W c V. Let (V. 2. W f= 0. too.. IF) = (JR n .2. . is henceforth understood to mean "is a subspace of..2. Then (W. Let O(X>. td)n. this question is closed under addition and scalar multiplication. too." ya2n . for all d ED. etc. V) be the 3. Notation: When the underlying field is understood. F) = (IR". =: (PC[f0. The latter characterization of a subspace is often the easiest way to check Remark 2. amI al2 a22 + fJI2 + fJ22 aln + fJln a2n + fJ2n a mn + fJml am2 + fJm2 and scalar multiplication defined by and scalar multiplication defined by [ ya" y a 21 yA = . (V. is henceforth understood to mean "is a subspace of.e. fJ e IF andforall WI. Let (V. Let (V. IF) be an arbitrary vector space and '0 be an arbitrary set. and the symbol ~. this implies that the zero vector must be in any subspace. l . V) is a vector space with addition set of functions f mapping '0 to V. F) is itself a vector space or." The less restrictive meaning "is a subset of" is specifically flagged as such.6. +00).2 2. JR).2.6. E). IF) if and only if (W. Then O(D. t\])n continuous =: (C[to. Special Cases: Special Cases: (a) V = [to. (E mxn JR) is a vector space with addition defined by 2. implies that the zero vector must be in any subspace. Let A E JR(nxn. we write W ~ V. foral! a. JR). that since 0 E IF. IF) is itself a vector space or. IF) = (JRn. The latter characterization of a subspace is often the easiest way to check or prove that something is indeed a subspace (or vector space). and the symbol c." The when used with vector spaces. (JRmxn. V) is a vector space with addition defined by defined by (f + g)(d) = fed) + g(d) for all d E '0 and for all f.7.e. 4. (V. Subspaces 2. equivalently. less restrictive meaning "is a subset of' is specifically flagged as such. equivalently. Then {x(t) : x(t) = Ax(t}} is a vector space (of dimension n).
Example 2.8.
1. Consider (V, F) = (R^{n x n}, R) and let W = {A in R^{n x n} : A is symmetric}. Then W is a subspace of V.
   Proof: Suppose A_1, A_2 are symmetric. Then it is easily shown that a A_1 + b A_2 is symmetric for all a, b in R.
2. Let W = {A in R^{n x n} : A is orthogonal}. Then W is not a subspace of R^{n x n}.
3. Consider (V, F) = (R^2, R) and for each v in R^2 of the form v = [v_1; v_2] identify v_1 with the x-coordinate in the plane and v_2 with the y-coordinate. For a, b in R, define
   W_{a,b} = { v : v = [c; a c + b], c in R }.
   Then W_{a,b} is a subspace of V if and only if b = 0. As an interesting exercise, sketch W_{2,1}, W_{2,0}, W_{1/2,1}, and W_{1/2,0}. Note, too, that the vertical line through the origin (i.e., a = infinity) is also a subspace.

All lines through the origin are subspaces. Shifted subspaces W_{a,b} with b not equal to 0 are called linear varieties.

Henceforth, we drop the explicit dependence of a vector space on an underlying field. Thus, V usually denotes a vector space, with the underlying field generally being R unless explicitly stated otherwise.

Definition 2.9. If R and S are vector spaces (or subspaces), then R = S if and only if R is a subspace of S and S is a subspace of R.

Note: To prove two vector spaces are equal, one usually proves the two inclusions separately: an arbitrary r in R is shown to be an element of S, and then an arbitrary s in S is shown to be an element of R.

2.3 Linear Independence

Let X = {v_1, v_2, ...} be a nonempty collection of vectors v_i in some vector space V.

Definition 2.10. X is a linearly dependent set of vectors if and only if there exist k distinct elements v_1, ..., v_k in X and scalars a_1, ..., a_k not all zero such that
a_1 v_1 + ... + a_k v_k = 0.
X is a linearly independent set of vectors if and only if for any collection of k distinct elements v_1, ..., v_k of X and for any scalars a_1, ..., a_k,
a_1 v_1 + ... + a_k v_k = 0 implies a_1 = 0, ..., a_k = 0.
Example 2.11.
1. Let V = R^3. One set of three vectors may be checked directly from Definition 2.10 to be linearly independent (why?), while another set {v_1, v_2, v_3} satisfying a relation such as 2v_1 - v_2 + v_3 = 0 is a linearly dependent set.
2. Let A in R^{n x n} and B in R^{n x m}. Then consider the rows of e^{tA} B as vectors in C^m[t_0, t_1] (recall that e^{tA} denotes the matrix exponential, which is discussed in more detail in Chapter 11). Independence of these vectors turns out to be equivalent to a concept called controllability, to be studied further in what follows.

Let v_i in R^n, i in k, and consider the matrix V = [v_1, ..., v_k] in R^{n x k}. The linear dependence of this set of vectors is equivalent to the existence of a nonzero vector a in R^k such that Va = 0. An equivalent condition for linear dependence is that the k x k matrix V^T V is singular. If the set of vectors is independent, and there exists a in R^k such that Va = 0, then a = 0. An equivalent condition for linear independence is that the matrix V^T V is nonsingular.

Definition 2.12. Let X = {v_1, v_2, ...} be a collection of vectors v_i in V. Then the span of X is defined as
Sp(X) = Sp{v_1, v_2, ...} = { v : v = a_1 v_1 + ... + a_k v_k ; a_i in F, v_i in X, k in N },
where N = {1, 2, ...}.

Example 2.13. Let V = R^n and define
e_1 = [1; 0; 0; ...; 0], e_2 = [0; 1; 0; ...; 0], ..., e_n = [0; 0; ...; 0; 1].
Then Sp{e_1, e_2, ..., e_n} = R^n.

Definition 2.14. A set of vectors X is a basis for V if and only if
1. X is a linearly independent set (of basis vectors), and
2. Sp(X) = V.
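A minimal NumPy sketch of the V^T V test above, using a small set of hypothetical vectors stacked as the columns of V:

```python
import numpy as np

# Hypothetical vectors v1, v2, v3 stacked as columns of V.
V = np.array([[1., 1., 2.],
              [0., 1., 1.],
              [0., 0., 0.]])

gram = V.T @ V                      # the k x k matrix V^T V
print(np.linalg.matrix_rank(gram))  # rank < k, so V^T V is singular: dependent set
print(np.linalg.matrix_rank(V))     # equivalently, rank of V itself is 2

# A nonzero a with Va = 0 certifies dependence; here v3 = v1 + v2.
a = np.array([1., 1., -1.])
print(np.allclose(V @ a, 0))        # True
```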
Example 2.15. {e_1, ..., e_n} is a basis for R^n (sometimes called the natural basis).

Now let b_1, ..., b_n be a basis (with a specific order associated with the basis vectors) for V. Then for all v in V there exists a unique n-tuple {x_1, ..., x_n} such that
v = x_1 b_1 + ... + x_n b_n = Bx,
where B = [b_1, ..., b_n] and x = [x_1; ...; x_n].

Definition 2.16. The scalars {x_i} are called the components (or sometimes the coordinates) of v with respect to the basis {b_1, ..., b_n} and are unique. We say that the vector x of components represents the vector v with respect to the basis B.

Example 2.17. In R^n,
v = [v_1; ...; v_n] = v_1 e_1 + v_2 e_2 + ... + v_n e_n.
We can also determine components of v with respect to another basis. For example, while [1; 2] = 1 e_1 + 2 e_2, with respect to another basis {b_1, b_2} one might have [1; 2] = 3 b_1 + 4 b_2. To see this, write
[1; 2] = x_1 b_1 + x_2 b_2 = [b_1 b_2] [x_1; x_2]
and solve the 2 x 2 system for [x_1; x_2] = [b_1 b_2]^{-1} [1; 2].

Theorem 2.18. The number of elements in a basis of a vector space is independent of the particular basis considered.

Definition 2.19. If a basis X for a vector space V (not equal to 0) has n elements, V is said to be n-dimensional or have dimension n, and we write dim(V) = n or dim V = n. For consistency, and because the 0 vector is in any vector space, we define dim(0) = 0. A vector space V is finite-dimensional if there exists a basis X with n < +infinity elements; otherwise, V is infinite-dimensional.
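A minimal NumPy sketch of computing components: for a hypothetical basis {b_1, b_2} of R^2 stacked as the columns of B, the components x of v satisfy Bx = v.

```python
import numpy as np

# Hypothetical basis vectors b1 = [1; 1] and b2 = [1; -1] as columns of B.
B = np.array([[1., 1.],
              [1., -1.]])
v = np.array([1., 2.])

x = np.linalg.solve(B, v)        # components of v with respect to {b1, b2}
print(x)                         # [ 1.5 -0.5]
print(np.allclose(B @ x, v))     # v = x1*b1 + x2*b2 reconstructs v
```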
Thus, Theorem 2.18 says that dim(V) = the number of elements in a basis.

Example 2.20.
1. dim(R^n) = n.
2. dim(R^{m x n}) = mn.
   Note: Check that a basis for R^{m x n} is given by the mn matrices E_ij, i in m, j in n, where E_ij is a matrix all of whose elements are 0 except for a 1 in the (i, j)th location. The collection of E_ij matrices can be called the "natural basis matrices."
3. dim(C[t_0, t_1]) = +infinity.
4. dim{A in R^{n x n} : A = A^T} = (1/2)n(n + 1). (To see why, determine (1/2)n(n + 1) symmetric basis matrices.)
5. dim{A in R^{n x n} : A is upper (lower) triangular} = (1/2)n(n + 1).

2.4 Sums and Intersections of Subspaces

Definition 2.21. Let (V, F) be a vector space and let R, S be subspaces of V. The sum and intersection of R and S are defined respectively by:
1. R + S = { r + s : r in R, s in S }.
2. R intersect S = { v : v in R and v in S }.

Theorem 2.22.
1. R + S is a subspace of V (in general, R_1 + ... + R_k =: sum of the R_i is a subspace of V, for finite k).
2. R intersect S is a subspace of V (in general, the intersection of the R_a over an arbitrary index set A is a subspace of V).

Remark 2.23. The union of two subspaces, R union S, is not necessarily a subspace.

Definition 2.24. T = R direct-sum S is the direct sum of R and S if
1. R intersect S = 0, and
2. R + S = T (in general, R_i intersect (sum over j not equal to i of R_j) = 0 and the sum of the R_i equals T).
The subspaces R and S are said to be complements of each other in T.
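A minimal NumPy sketch of Definition 2.21 for two hypothetical subspaces of R^3 spanned by the columns of Rb and Sb: dim(R + S) is the rank of the stacked generators, and dim(R intersect S) then follows from the dimension formula proved in Theorem 2.27 below.

```python
import numpy as np

# Hypothetical subspaces of R^3: R = xy-plane, S = yz-plane.
Rb = np.array([[1., 0.],
               [0., 1.],
               [0., 0.]])
Sb = np.array([[0., 0.],
               [1., 0.],
               [0., 1.]])

rank = np.linalg.matrix_rank
dim_R, dim_S = rank(Rb), rank(Sb)
dim_sum = rank(np.hstack([Rb, Sb]))        # dim(R + S): span of all the generators
dim_cap = dim_R + dim_S - dim_sum          # dimension formula of Theorem 2.27

print(dim_R, dim_S, dim_sum, dim_cap)      # 2 2 3 1  (the intersection is the y-axis)
```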
Remark 2.25. The complement of R (or S) is not unique. For example, consider V = R^2 and let R be any line through the origin. Then any other distinct line through the origin is a complement of R. Among all the complements there is a unique one orthogonal to R. We discuss more about orthogonal complements elsewhere in the text.

Theorem 2.26. Suppose T = R direct-sum S. Then
1. every t in T can be written uniquely in the form t = r + s with r in R and s in S.
2. dim(T) = dim(R) + dim(S).

Proof: To prove the first part, suppose an arbitrary vector t in T can be written in two ways as t = r_1 + s_1 = r_2 + s_2, where r_1, r_2 in R and s_1, s_2 in S. Then r_1 - r_2 = s_2 - s_1. But r_1 - r_2 in R and s_2 - s_1 in S. Since R intersect S = 0, we must have r_1 = r_2 and s_1 = s_2, from which uniqueness follows. The statement of the second part is a special case of the next theorem.

Theorem 2.27. For arbitrary subspaces R, S of a vector space V,
dim(R + S) = dim(R) + dim(S) - dim(R intersect S).

Example 2.28. Let U be the subspace of upper triangular matrices in R^{n x n} and let L be the subspace of lower triangular matrices in R^{n x n}. Then it may be checked that U + L = R^{n x n}, while U intersect L is the set of diagonal matrices in R^{n x n}. Using the fact that dim{diagonal matrices} = n, together with Examples 2.20.2 and 2.20.5, one can easily verify the validity of the formula given in Theorem 2.27.

Example 2.29. Let (V, F) = (R^{n x n}, R), let R be the set of skew-symmetric matrices in R^{n x n}, and let S be the set of symmetric matrices in R^{n x n}. Then V = R direct-sum S.
Proof: This follows easily from the fact that any A in R^{n x n} can be written in the form
A = (1/2)(A + A^T) + (1/2)(A - A^T).
The first matrix on the right-hand side above is in S while the second is in R.

EXERCISES

1. Suppose {v_1, ..., v_k} is a linearly dependent set. Then show that one of the vectors must be a linear combination of the others.
2. Let x_1, x_2, ..., x_k in R^n be nonzero mutually orthogonal vectors. Show that {x_1, ..., x_k} must be a linearly independent set.
3. Let v_1, ..., v_n be orthonormal vectors in R^n. Show that Av_1, ..., Av_n are also orthonormal if and only if A in R^{n x n} is orthogonal.
4. Consider the vectors v_1 = [2 1]^T and v_2 = [3 1]^T. Prove that v_1 and v_2 form a basis for R^2. Find the components of the vector v = [4 1]^T with respect to this basis.
5. Let P denote the set of polynomials of degree less than or equal to two of the form p_0 + p_1 x + p_2 x^2, where p_0, p_1, p_2 in R. Show that P is a vector space over R. Show that the polynomials 1, x, and 2x^2 - 1 are a basis for P. Find the components of the polynomial 2 + 3x + 4x^2 with respect to this basis.
6. Prove Theorem 2.22 (for the case of two subspaces R and S only).
7. Let P_n denote the vector space of polynomials of degree less than or equal to n, and of the form p(x) = p_0 + p_1 x + ... + p_n x^n, where the coefficients p_i are all real. Let P_E denote the subspace of all even polynomials in P_n, i.e., those that satisfy the property p(-x) = p(x). Similarly, let P_O denote the subspace of all odd polynomials, i.e., those satisfying p(-x) = -p(x). Show that P_n = P_E direct-sum P_O.
This page intentionally left blank This page intentionally left blank
Chapter 3 Chapter 3
Linear Transformations Linear Transformations
3.1 3.1
Definition and Examples Definition and Examples
definition of a linear (or function, We begin with the basic definition of a linear transformation (or linear map, linear function, or linear operator) between two vector spaces. or linear operator) between two vector spaces.
Let IF) and (W, IF) be vector spaces. Then I: : > a Definition 3.1. Let (V, F) and (W, F) be vector spaces. Then C : V + W is a linear transformation if and only if transformation if and only if I:(avi £(avi + {3V2) = aCv\ + {3I:V2 far all a, {3 e F andfor all v},v2e V. pv2) = al:vi fi£v2 for all a, £ ElF and far all VI, V2 E V. The vector space V is called the domain of the transformation C while VV, the space into called the of the transformation I: while W, the space into The vector space which it maps, is called the which it maps, is called the codomain.
Example 3.2. Example 3.2.
1. Let F = R and take V = W = PC[f0, +00). 1. Let IF JR and take V W PC[to, +00). Define I: : PC[to, +00) > PC[to, +00) by Define £ : PC[t0, +00) + PC[t0, +00) by
vet)
f+
wet) = (I:v)(t) =
11
to
e(tr)v(r) dr.
2. Let F = R and take V = W = R^{m x n}. Fix M in R^{m x m}. Define L : R^{m x n} -> R^{m x n} by
   X |-> Y = LX = MX.
3. Let F = R and take V = P^n = {p(x) = a_0 + a_1 x + ... + a_n x^n : a_i in R} and W = P^{n-1}. Define L : V -> W by Lp = p', where ' denotes differentiation with respect to x.
3.2 Matrix Representation of Linear Transformations

Linear transformations between vector spaces with specific bases can be represented conveniently in matrix form. Specifically, suppose L : (V, F) -> (W, F) is linear and further suppose that {v_i, i in n} and {w_j, j in m} are bases for V and W, respectively. Then the ith column of A = Mat L (the matrix representation of L with respect to the given bases for V and W) is the representation of L v_i with respect to {w_j, j in m}. In other words,
A = [a_1 ... a_n] in R^{m x n}
represents L since
L v_i = a_{1i} w_1 + ... + a_{mi} w_m = W a_i,
where W = [w_1, ..., w_m] and a_i is the ith column of A. The action of L on an arbitrary vector v in V is uniquely determined (by linearity) by its action on a basis. Thus, if v = x_1 v_1 + ... + x_n v_n = Vx (where v, and hence x, is arbitrary), then
Lv = LVx = x_1 L v_1 + ... + x_n L v_n = x_1 W a_1 + ... + x_n W a_n = WAx.
Thus, LV = WA since x was arbitrary.

When V = R^n, W = R^m, and {v_i, i in n}, {w_j, j in m} are the usual (natural) bases, the equation LV = WA becomes simply L = A. We thus commonly identify A as a linear transformation with its matrix representation, i.e., Lv = Av. Thinking of A both as a matrix and as a linear transformation from R^n to R^m usually causes no confusion. Change of basis then corresponds naturally to appropriate matrix multiplication.

Note that A = Mat L depends on the particular bases for V and W. This could be reflected by subscripts, say, in the notation, but this is usually not done.
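A minimal NumPy sketch of this column-by-column construction for the differentiation operator of Example 3.2.3, using the monomial bases {1, x, ..., x^n} and {1, x, ..., x^{n-1}} (an assumed, but standard, choice of bases): column i of A holds the components of L v_i.

```python
import numpy as np

n = 3  # work with P^3 -> P^2 as an illustration

# Column i holds the components of d/dx (x^i) = i*x^(i-1)
# with respect to the basis {1, x, ..., x^(n-1)} of P^(n-1).
A = np.zeros((n, n + 1))
for i in range(1, n + 1):
    A[i - 1, i] = i

# p(x) = 2 + 3x + 4x^2 + 5x^3, stored as ascending coefficients.
p = np.array([2., 3., 4., 5.])
print(A @ p)                                  # [ 3.  8. 15.] = coefficients of p'(x)
print(np.polynomial.polynomial.polyder(p))    # NumPy's derivative agrees
```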
3.3 Composition of Transformations

Consider three vector spaces U, V, and W and transformations B from U to V and A from V to W. Then we can define a new transformation C as follows:

[Diagram: U maps to V under B, V maps to W under A, and C = AB maps U directly to W.]

The above diagram illustrates the composition of transformations C = AB. Note that in most texts, the arrows above are reversed. However, it might be useful to prefer the former since the transformations A and B appear in the same order in both the diagram and the equation. If dim U = p, dim V = n, and dim W = m, and if we associate matrices with the transformations in the usual way, then composition of transformations corresponds to standard matrix multiplication. That is, the m x p matrix C is the product of the m x n matrix A and the n x p matrix B. The above is sometimes expressed componentwise by the formula
c_ij = sum from k = 1 to n of a_ik b_kj.

Two Special Cases:

Inner Product: Let x, y in R^n. Then their inner product is the scalar
x^T y = sum from i = 1 to n of x_i y_i.

Outer Product: Let x in R^m, y in R^n. Then their outer product is the m x n matrix x y^T.

Note that any rank-one matrix A in R^{m x n} can be written in the form A = x y^T above (or x y^H if A in C^{m x n}). A rank-one symmetric matrix can be written in the form x x^T (or x x^H).
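A minimal NumPy sketch of these facts, with hypothetical random matrices standing in for the transformations A and B:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))   # A : V -> W with dim V = 3, dim W = 4
B = rng.standard_normal((3, 2))   # B : U -> V with dim U = 2

C = A @ B                         # the composition C = AB : U -> W is 4 x 2
i, j = 1, 0
print(np.isclose(C[i, j], sum(A[i, k] * B[k, j] for k in range(3))))  # c_ij formula

x, y = rng.standard_normal(5), rng.standard_normal(5)
print(x @ y)                                   # inner product: a scalar
print(np.linalg.matrix_rank(np.outer(x, y)))   # outer product x y^T: rank one
```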
3.4 Structure of Linear Transformations

Definition 3.3. Let {v_1, ..., v_k} be a set of nonzero vectors v_i in R^n. The set is said to be orthogonal if v_i^T v_j = 0 for i not equal to j, and orthonormal if v_i^T v_j = delta_ij, where delta_ij is the Kronecker delta defined by
delta_ij = 1 if i = j, 0 if i not equal to j.

Example 3.4.
1. {[1; 1], [1; -1]} is an orthogonal set.
2. {[1/sqrt(2); 1/sqrt(2)], [1/sqrt(2); -1/sqrt(2)]} is an orthonormal set.
3. If {v_1, ..., v_k} with v_i in R^n is an orthogonal set, then {v_1/sqrt(v_1^T v_1), ..., v_k/sqrt(v_k^T v_k)} is an orthonormal set.

Definition 3.5. Let A : V -> W be a linear transformation. Then
1. The range of A, denoted R(A), is the set {w in W : w = Av for some v in V}. Equivalently, R(A) = {Av : v in V}. The range of A is also known as the image of A and denoted Im(A).
2. The nullspace of A, denoted N(A), is the set {v in V : Av = 0}. The nullspace of A is also known as the kernel of A and denoted Ker(A).

Theorem 3.6. Let A : V -> W be a linear transformation. Then
1. R(A) is a subspace of W.
2. N(A) is a subspace of V.
Note that N(A) and R(A) are, in general, subspaces of different spaces.

Theorem 3.7. Let A in R^{m x n}. If A is written in terms of its columns as A = [a_1, ..., a_n], then
R(A) = Sp{a_1, ..., a_n}.
Proof: The proof of this theorem is easy, essentially following immediately from the definition.

Remark 3.8. Note that in Theorem 3.7 and throughout the text, the same symbol (A) is used to denote both a linear transformation and its matrix representation with respect to the usual (natural) bases. See also the last paragraph of Section 3.2.
Definition 3.9. Let S be a subset of R^n. Then the orthogonal complement of S is defined as the set
S-perp = { v in R^n : v^T s = 0 for all s in S }.

Example 3.10. Let S = Sp{[3; 5; 7], [4; 1; 1]}. Working from the definition, the computation involved is simply to find all nontrivial (i.e., nonzero) solutions of the system of equations
3 x_1 + 5 x_2 + 7 x_3 = 0,
4 x_1 + x_2 + x_3 = 0.
Then it can be shown that S-perp is the line spanned by any nonzero solution of these equations. Note that there is nothing special about the two vectors in the basis defining S being orthogonal (or not). Any set of vectors will do, including dependent spanning vectors (which would, of course, then give rise to redundant equations).

Theorem 3.11. Let R, S be subspaces of R^n. Then
1. S-perp is a subspace of R^n.
2. S direct-sum S-perp = R^n.
3. (S-perp)-perp = S.
4. (R intersect S)-perp = R-perp + S-perp.
5. (R + S)-perp = R-perp intersect S-perp.
6. R is a subspace of S if and only if S-perp is a subspace of R-perp.

Proof: We prove and discuss only item 2 here. The proofs of the other results are left as exercises. Let {v_1, ..., v_k} be an orthonormal basis for S and let x in R^n be an arbitrary vector. Set
x_1 = sum from i = 1 to k of (x^T v_i) v_i,   x_2 = x - x_1.
Then x_1 in S and, since
x_2^T v_j = x^T v_j - x_1^T v_j = x^T v_j - x^T v_j = 0,
we see that x_2 is orthogonal to v_1, ..., v_k and hence to any linear combination of these vectors. In other words, x_2 is orthogonal to any vector in S. We have thus shown that S + S-perp = R^n. We also have that S intersect S-perp = 0, since the only vector s in S orthogonal to everything in S (i.e., including itself) is 0.

It is also easy to see directly that, when we have such direct sum decompositions, we can write vectors in a unique way with respect to the corresponding subspaces. Suppose, for example, that x = x_1 + x_2 = x_1' + x_2', where x_1, x_1' in S and x_2, x_2' in S-perp. Then (x_1' - x_1)^T (x_2' - x_2) = 0 by definition of S-perp. But x_1' - x_1 = x_2 - x_2' (which follows by rearranging the equation x_1 + x_2 = x_1' + x_2'). Thus (x_1' - x_1)^T (x_1' - x_1) = -(x_1' - x_1)^T (x_2' - x_2) = 0, so x_1 = x_1' and x_2 = x_2'.

Theorem 3.12. Let A : R^n -> R^m. Then
1. N(A)-perp = R(A^T). (Note: This holds only for finite-dimensional vector spaces.)
2. R(A)-perp = N(A^T). (Note: This also holds for infinite-dimensional vector spaces.)

Proof: To prove the first part, take an arbitrary x in N(A). Then Ax = 0 and this is equivalent to y^T Ax = 0 for all y. But y^T Ax = (A^T y)^T x. Thus, Ax = 0 if and only if x is orthogonal to all vectors of the form A^T y, i.e., x in R(A^T)-perp. Since x was arbitrary, we have established that N(A) = R(A^T)-perp and hence, by Theorem 3.11, that N(A)-perp = R(A^T). The proof of the second part is similar and is left as an exercise.

Definition 3.13. Let A : R^n -> R^m. Then {v in R^n : Av = 0} is sometimes called the right nullspace of A. Similarly, {w in R^m : w^T A = 0} is called the left nullspace of A. Clearly, the right nullspace is N(A) while the left nullspace is N(A^T).

Theorem 3.12 and part 2 of Theorem 3.11 can be combined to give two very fundamental and useful decompositions of vectors in the domain and codomain of a linear transformation A. See also Theorem 2.26.

Theorem 3.14 (Decomposition Theorem). Let A : R^n -> R^m. Then
1. every vector v in the domain space R^n can be written in a unique way as v = x + y, where x in N(A) and y in N(A)-perp = R(A^T) (i.e., R^n = N(A) direct-sum R(A^T)).
2. every vector w in the codomain space R^m can be written in a unique way as w = x + y, where x in R(A) and y in R(A)-perp = N(A^T) (i.e., R^m = R(A) direct-sum N(A^T)).

This key theorem becomes very easy to remember by carefully studying and understanding Figure 3.1 in the next section.
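A minimal NumPy sketch of Theorem 3.14, using the pseudoinverse (treated in Chapter 4) only as a convenient way to form the orthogonal projectors onto R(A^T) and N(A) for a hypothetical matrix A:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 5))          # a generic 3 x 5 matrix

P_rowspace = np.linalg.pinv(A) @ A       # orthogonal projector onto R(A^T)
P_null     = np.eye(5) - P_rowspace      # orthogonal projector onto N(A)

v = rng.standard_normal(5)
y, x = P_rowspace @ v, P_null @ v        # v = x + y with x in N(A), y in R(A^T)

print(np.allclose(A @ x, 0))             # x really lies in the nullspace
print(np.isclose(x @ y, 0))              # the two pieces are orthogonal
print(np.allclose(x + y, v))             # and they reconstruct v
```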
be a linear transforDefinition 3.(A)^. Four Fundamental Subspaces 3.16. 1. A is onto (also called epic or surjective) ifR.15. rank(A) dimftCA). A is onetoone or 11 (also called monic or injective) if N(A) = O. R(A)1. fundamental subspaces. Four fundamental subspaces.3. Figure 3. 1. Let V and W be vector spaces and let A : V + W be a linear transforDefinition 3. and in illustrating concepts such as controllability and observability.1. Let A : E" + Rm. Four Fundamental Subspaces 23 23 A r N(A)1 r EB {OJ X {O}Gl nr m r Figure 3. and N(A)1. Definition 3. t= V2 ===} AVI t= AV2 . This is sometimes called 3. Then rank(A) = dim R(A).16. 3. 2. Let and W be vector spaces and let A : motion. properties 7£(A). Figure 3. A f ( A ) .1 makes many key properties seem almost N(A)T.5. IR n > IRm.(A) = W. The row rank of A is column rank of of independent row rank of .1. R(A). Two equivalent 2. mation. A is onto (also called epic or surjective) ifR(A) = W. A is onetoone or 11 (also called monic or infective) ifJ\f(A) = 0. Two equivalent characterizations of A being 11 that are often easier to verify in practice are the characterizations of A being 11 that are often easier to verify in practice are the following: following: (a) AVI = AV2 (b) VI ===} VI = V2 .15. 'R.5. N(A). the column rank of A (maximum number of independent columns).1 obvious and we return to this figure frequently both in the context of linear transformations obvious and we return to this figure frequently both in the context of linear transformations and in illustrating concepts such as controllability and observability.
dim R(A^T) (maximum number of independent rows). The dual notion to rank is the nullity of A, sometimes denoted nullity(A) or corank(A), and is defined as dim N(A).

Theorem 3.17. Let A : R^n -> R^m. Then dim R(A) = dim N(A)-perp. (Note: Since N(A)-perp = R(A^T), this theorem is sometimes colloquially stated "row rank of A = column rank of A.")

Proof: Define a linear transformation T : N(A)-perp -> R(A) by
Tv = Av for all v in N(A)-perp.
Clearly T is 1-1 (since N(T) = 0). To see that T is also onto, take any w in R(A). Then by definition there is a vector x in R^n such that Ax = w. Write x = x_1 + x_2, where x_1 in N(A)-perp and x_2 in N(A). Then Ax_1 = w = Tx_1 since x_1 in N(A)-perp. The last equality shows that T is onto. We thus have that dim R(A) = dim N(A)-perp, since it is easily shown that if {v_1, ..., v_r} is a basis for N(A)-perp, then {Tv_1, ..., Tv_r} is a basis for R(A). Finally, if we apply this and several previous results, the following string of equalities follows easily:
"column rank of A" = rank(A) = dim R(A) = dim N(A)-perp = dim R(A^T) = rank(A^T) = "row rank of A."

The following corollary is immediate. Like the theorem, it is a statement about equality of dimensions; the subspaces themselves are not necessarily in the same vector space.

Corollary 3.18. Let A : R^n -> R^m. Then dim N(A) + dim R(A) = n, where n is the dimension of the domain of A.

Proof: From Theorems 3.11 and 3.17 we see immediately that
n = dim N(A) + dim N(A)-perp = dim N(A) + dim R(A).

For completeness, we include here a few miscellaneous results about ranks of sums and products of matrices.

Theorem 3.19. Let A, B in R^{n x n}. Then
1. 0 <= rank(A + B) <= rank(A) + rank(B).
2. rank(A) + rank(B) - n <= rank(AB) <= min{rank(A), rank(B)}.
3. nullity(B) <= nullity(AB) <= nullity(A) + nullity(B).
4. if B is nonsingular, rank(AB) = rank(BA) = rank(A) and N(BA) = N(A).

Part 4 of Theorem 3.19 suggests looking at the general problem of the four fundamental subspaces of matrix products. The basic results are contained in the following easily proved theorem.
Theorem 3.20. Let A in R^{m x n}, B in R^{n x p}. Then
1. R(AB) is a subspace of R(A).
2. N(AB) contains N(B).
3. R((AB)^T) is a subspace of R(B^T).
4. N((AB)^T) contains N(A^T).

The next theorem is closely related to Theorem 3.20 and is also easily proved. It is extremely useful in text that follows, especially when dealing with pseudoinverses and linear least squares problems.

Theorem 3.21. Let A in R^{m x n}. Then
1. R(A) = R(AA^T).
2. R(A^T) = R(A^T A).
3. N(A) = N(A^T A).
4. N(A^T) = N(AA^T).

We now characterize 1-1 and onto transformations and provide characterizations in terms of rank and invertibility.

Theorem 3.22. Let A : R^n -> R^m. Then
1. A is onto if and only if rank(A) = m (A has linearly independent rows or is said to have full row rank; equivalently, AA^T is nonsingular).
2. A is 1-1 if and only if rank(A) = n (A has linearly independent columns or is said to have full column rank; equivalently, A^T A is nonsingular).

Proof: Proof of part 1: If A is onto, dim R(A) = m = rank(A). Conversely, let y in R^m be arbitrary. Let x = A^T (AA^T)^{-1} y in R^n. Then y = Ax, i.e., y in R(A), so A is onto.
Proof of part 2: If A is 1-1, then N(A) = 0, which implies that dim N(A)-perp = n = dim R(A^T), and hence dim R(A) = n by Theorem 3.17. Conversely, suppose Ax_1 = Ax_2. Then A^T Ax_1 = A^T Ax_2, which implies x_1 = x_2 since A^T A is invertible. Thus, A is 1-1.

Definition 3.23. A : V -> W is invertible (or bijective) if and only if it is 1-1 and onto. Note that if A is invertible, then dim V = dim W. Also, A : R^n -> R^n is invertible or nonsingular if and only if rank(A) = n.

Note that in the special case when A in R_n^{n x n}, the transformations A, A^T, and A^{-1} are all 1-1 and onto between the two spaces N(A)-perp and R(A). The transformations A^T and A^{-1} have the same domain and range but are in general different maps unless A is orthogonal. Similar remarks apply to A and A^{-T}.
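A minimal NumPy sketch of Theorems 3.21 and 3.22 for a hypothetical full-row-rank matrix, including the vector x = A^T (AA^T)^{-1} y constructed in the proof of part 1:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 7))              # full row rank with probability one
rank = np.linalg.matrix_rank

# Theorem 3.21: R(A) = R(AA^T), reflected here in equal ranks.
print(rank(A), rank(A @ A.T))                # both 3

# Theorem 3.22, part 1: full row rank, so AA^T is nonsingular and A is onto.
y = rng.standard_normal(3)
x = A.T @ np.linalg.inv(A @ A.T) @ y          # the x used in the proof
print(np.allclose(A @ x, y))                  # Ax = y, so y lies in R(A)
```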
If a linear transformation is not invertible, it may still be right or left invertible. Definitions of these concepts are followed by a theorem characterizing left and right invertible transformations.

Definition 3.24. Let A : V -> W. Then
1. A is said to be right invertible if there exists a right inverse transformation A^{-R} : W -> V such that A A^{-R} = I_W, where I_W denotes the identity transformation on W.
2. A is said to be left invertible if there exists a left inverse transformation A^{-L} : W -> V such that A^{-L} A = I_V, where I_V denotes the identity transformation on V.

Theorem 3.25. Let A : V -> W. Then
1. A is right invertible if and only if it is onto.
2. A is left invertible if and only if it is 1-1.
Moreover, A is invertible if and only if it is both right and left invertible, i.e., both onto and 1-1, in which case A^{-1} = A^{-R} = A^{-L}.

Note: From Theorem 3.22 we see that if A : R^n -> R^m is onto, then a right inverse is given by A^{-R} = A^T (AA^T)^{-1}. Similarly, if A is 1-1, then a left inverse is given by A^{-L} = (A^T A)^{-1} A^T.

Theorem 3.26. Let A : V -> V.
1. If there exists a unique right inverse A^{-R} such that A A^{-R} = I, then A is invertible.
2. If there exists a unique left inverse A^{-L} such that A^{-L} A = I, then A is invertible.

Proof: We prove the first part and leave the proof of the second to the reader. Notice the following:
A (A^{-R} + A^{-R} A - I) = A A^{-R} + A A^{-R} A - A = I + IA - A (since A A^{-R} = I) = I.
Thus, (A^{-R} + A^{-R} A - I) must be a right inverse and, therefore, by uniqueness it must be the case that A^{-R} + A^{-R} A - I = A^{-R}. But this implies that A^{-R} A = I, i.e., that A^{-R} is a left inverse. It then follows from Theorem 3.25 that A is invertible.

Example 3.27.
1. Let A = [1 2] : R^2 -> R^1. Then A is onto. (Proof: Take any a in R^1; one can always find v in R^2 such that [1 2][v_1; v_2] = a.) Obviously A has full row rank (= 1) and A^{-R} = [-1; 1] is a right inverse. Also, it is clear that there are infinitely many right inverses for A. In Chapter 6 we characterize all right inverses of a matrix by characterizing all solutions of the linear matrix equation AR = I.
2. Let A = [1; 2] : R^1 -> R^2. Then A is 1-1. (Proof: The only solution to 0 = Av = [1; 2]v is v = 0, whence N(A) = 0, so A is 1-1.) It is now obvious that A has full column rank (= 1) and A^{-L} = [3 -1] is a left inverse. Again, it is clear that there are infinitely many left inverses for A. In Chapter 6 we characterize all left inverses of a matrix by characterizing all solutions of the linear matrix equation LA = I.
3. A matrix such as
   A = [1 1 2; 2 1 3; 3 1 4],
   whose third column is the sum of the first two, is neither 1-1 nor onto when considered as a linear transformation on R^3. We give below bases for its four fundamental subspaces:
   R(A) = Sp{[1; 2; 3], [1; 1; 1]},  N(A^T) = Sp{[1; -2; 1]},
   R(A^T) = Sp{[1; 1; 2], [2; 1; 3]},  N(A) = Sp{[1; 1; -1]}.

EXERCISES

1. Let A be a given 2 x 3 matrix and consider A as a linear transformation mapping R^3 to R^2. Find the matrix representation of A with respect to a given basis {b_1, b_2, b_3} of R^3 and a given basis {c_1, c_2} of R^2.
2. Consider the vector space R^{n x n} over R, let S denote the subspace of symmetric matrices, and let R denote the subspace of skew-symmetric matrices. For matrices X, Y in R^{n x n} define their inner product by (X, Y) = Tr(X^T Y). Show that, with respect to this inner product, R = S-perp.
3. Consider the differentiation operator L defined in Example 3.2. Is L 1-1? Is L onto?
4. Prove Theorem 3.12, part 2.
5. Prove the parts of Theorem 3.11 whose proofs were left as exercises in the text.
6. Let A = [0 1; 0 0]. Determine N(A) and R(A). Are they equal? Is this true in general? If this is true in general, prove it; if not, provide a counterexample.
7. Modify Figure 3.1 to illustrate the four fundamental subspaces associated with A^T in R^{n x m} thought of as a transformation from R^m to R^n.
8. Determine bases for the four fundamental subspaces of the matrix A = [2 5 5; 3 8 9].
9. Suppose A in R^{m x n} has a left inverse. Show that A^T has a right inverse.
10. Suppose A in R_r^{9 x 48}. How many linearly independent solutions can be found to the homogeneous linear system Ax = 0?
Chapter 4

Introduction to the Moore-Penrose Pseudoinverse

In this chapter we give a brief introduction to the Moore-Penrose pseudoinverse, a generalization of the inverse of a matrix. The Moore-Penrose pseudoinverse is defined for any matrix and, as is shown in the following text, brings great notational and conceptual clarity to the study of solutions to arbitrary systems of linear equations and linear least squares problems.

4.1 Definitions and Characterizations

Consider a linear transformation A : X -> Y, where X and Y are arbitrary finite-dimensional vector spaces. Define a transformation T : N(A)-perp -> R(A) by
Tx = Ax for all x in N(A)-perp.
Then, as noted in the proof of Theorem 3.17, T is bijective (1-1 and onto), and hence we can define a unique inverse transformation T^{-1} : R(A) -> N(A)-perp. This transformation can be used to give our first definition of A^+, the Moore-Penrose pseudoinverse of A. Unfortunately, the definition neither provides nor suggests a good computational strategy for determining A^+.

Definition 4.1. With A and T as defined above, define a transformation A^+ : Y -> X by
A^+ y = T^{-1} y_1,
where y = y_1 + y_2 with y_1 in R(A) and y_2 in R(A)-perp. Then A^+ is the Moore-Penrose pseudoinverse of A.

Although X and Y were arbitrary vector spaces above, let us henceforth consider the case X = R^n and Y = R^m. We have thus defined A^+ for all A in R_r^{m x n}. A purely algebraic characterization of A^+ is given in the next theorem, which was proved by Penrose in 1955; see [22].
Theorem 4.2. Let A in R_r^{m x n}. Then G = A^+ if and only if
(P1) AGA = A.
(P2) GAG = G.
(P3) (AG)^T = AG.
(P4) (GA)^T = GA.
Furthermore, A^+ always exists and is unique.

Note that the inverse of a nonsingular matrix satisfies all four Penrose properties. Also, a right or left inverse satisfies no fewer than three of the four properties. Unfortunately, as with Definition 4.1, neither the statement of Theorem 4.2 nor its proof suggests a computational algorithm. However, the Penrose properties do offer the great virtue of providing a checkable criterion in the following sense. Given a matrix G that is a candidate for being the pseudoinverse of A, one need simply verify the four Penrose conditions (P1)-(P4). If G satisfies all four, then by uniqueness, it must be A^+. Such a verification is often relatively straightforward.

Example 4.3. Consider A = [1; 2]. Verify directly that A^+ = [1/5 2/5] satisfies (P1)-(P4). Note that other left inverses (for example, A^{-L} = [3 -1]) satisfy properties (P1), (P2), and (P4) but not (P3).

Still another characterization of A^+ is given in the following theorem, whose proof can be found in [1, p. 19]. While not generally suitable for computer implementation, this characterization can be useful for hand calculation of small examples.

Theorem 4.4. Let A in R_r^{m x n}. Then
A^+ = lim as delta -> 0 of (A^T A + delta^2 I)^{-1} A^T    (4.1)
    = lim as delta -> 0 of A^T (A A^T + delta^2 I)^{-1}.   (4.2)

4.2 Examples

Each of the following can be derived or verified by using the above definitions or characterizations.

Example 4.5. A^+ = A^T (A A^T)^{-1} if A is onto (independent rows) (A is right invertible).

Example 4.6. A^+ = (A^T A)^{-1} A^T if A is 1-1 (independent columns) (A is left invertible).

Example 4.7. For any scalar a, a^+ = a^{-1} if a is not 0, and a^+ = 0 if a = 0.
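A minimal NumPy sketch of the checkable criterion of Theorem 4.2 and of the limit formula (4.1), applied to a hypothetical random matrix with NumPy's pinv as the candidate G:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 2))
G = np.linalg.pinv(A)                 # candidate pseudoinverse

# The four Penrose conditions (P1)-(P4) of Theorem 4.2.
print(np.allclose(A @ G @ A, A))      # (P1)
print(np.allclose(G @ A @ G, G))      # (P2)
print(np.allclose((A @ G).T, A @ G))  # (P3)
print(np.allclose((G @ A).T, G @ A))  # (P4)

# The limit characterization (4.1) for a small delta.
d = 1e-8
G_limit = np.linalg.inv(A.T @ A + d**2 * np.eye(2)) @ A.T
print(np.allclose(G_limit, G))
```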
Example 4.8. For any vector v in R^n,
v^+ = (v^T v)^+ v^T = v^T / (v^T v) if v is not 0, and v^+ = 0^T if v = 0.

Example 4.9. [1 0; 0 0]^+ = [1 0; 0 0].

Example 4.10. [1 1; 1 1]^+ = [1/4 1/4; 1/4 1/4].

4.3 Properties and Applications

This section presents some miscellaneous useful results on pseudoinverses. Many of these are used in the text that follows.

Theorem 4.11. Let A in R^{m x n} and suppose U in R^{m x m}, V in R^{n x n} are orthogonal (M is orthogonal if M^T = M^{-1}). Then
(U A V)^+ = V^T A^+ U^T.
Proof: For the proof, simply verify that the expression above does indeed satisfy each of the four Penrose conditions.

Theorem 4.12. Let S in R^{n x n} be symmetric with U^T S U = D, where U is orthogonal and D is diagonal. Then S^+ = U D^+ U^T, where D^+ is again a diagonal matrix whose diagonal elements are determined according to Example 4.7.

Theorem 4.13. For all A in R^{m x n},
1. A^+ = (A^T A)^+ A^T = A^T (A A^T)^+.
2. (A^T)^+ = (A^+)^T.
Proof: Both results can be proved using the limit characterization of Theorem 4.4. The proof of the first result is not particularly easy and does not even have the virtue of being especially illuminating. The interested reader can consult the proof in [1, p. 27]. The proof of the second result (which can also be proved easily by verifying the four Penrose conditions) is as follows:
(A^T)^+ = lim as delta -> 0 of (A A^T + delta^2 I)^{-1} A
        = lim as delta -> 0 of [A^T (A A^T + delta^2 I)^{-1}]^T
        = [lim as delta -> 0 of A^T (A A^T + delta^2 I)^{-1}]^T
        = (A^+)^T.
Note that by combining Theorems 4.12 and 4.13 we can, in theory at least, compute the Moore-Penrose pseudoinverse of any matrix (since A A^T and A^T A are symmetric). This turns out to be a poor approach in finite-precision arithmetic, however (see, e.g., [7], [11], [23]), and better methods are suggested in text that follows.

Theorem 4.11 is suggestive of a "reverse-order" property for pseudoinverses of products of matrices such as exists for inverses of products. Unfortunately, in general,
(AB)^+ is not equal to B^+ A^+.
As an example consider A = [0 1] and B = [1; 1]. Then
(AB)^+ = 1^+ = 1
while
B^+ A^+ = [1/2 1/2] [0; 1] = 1/2.
However, necessary and sufficient conditions under which the reverse-order property does hold are known, and we quote a couple of moderately useful results for reference.

Theorem 4.14. (AB)^+ = B^+ A^+ if and only if
1. R(B B^T A^T) is a subspace of R(A^T) and
2. R(A^T A B) is a subspace of R(B).
Proof: For the proof, see [9].

Theorem 4.15. (AB)^+ = B_1^+ A_1^+, where B_1 = A^+ A B and A_1 = A B_1 B_1^+.
Proof: For the proof, see [5].

Theorem 4.16. If A in R_r^{n x r}, B in R_r^{r x m}, then (AB)^+ = B^+ A^+.
Proof: Since A in R_r^{n x r}, then A^+ = (A^T A)^{-1} A^T, whence A^+ A = I_r. Similarly, since B in R_r^{r x m}, we have B^+ = B^T (B B^T)^{-1}, whence B B^+ = I_r. The result then follows by taking B_1 = B, A_1 = A in Theorem 4.15.

The following theorem gives some additional useful properties of pseudoinverses.

Theorem 4.17. For all A in R^{m x n},
1. (A^+)^+ = A.
2. (A A^T)^+ = (A^T)^+ A^+, (A^T A)^+ = A^+ (A^T)^+.
3. R(A^+) = R(A^T) = R(A^+ A) = R(A^T A).
4. N(A^+) = N(A A^+) = N((A A^T)^+) = N(A A^T) = N(A^T).
5. If A is normal, then A^k A^+ = A^+ A^k and (A^k)^+ = (A^+)^k for all integers k > 0.
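A minimal NumPy sketch of the failure of the reverse-order property, using the counterexample above, together with a random instance of the full-rank case covered by Theorem 4.16:

```python
import numpy as np
pinv = np.linalg.pinv

# The counterexample from the text: the reverse-order rule fails in general.
A = np.array([[0., 1.]])
B = np.array([[1.], [1.]])
print(pinv(A @ B))            # [[1.]]
print(pinv(B) @ pinv(A))      # [[0.5]]

# Theorem 4.16: with A of full column rank r and B of full row rank r it does hold.
rng = np.random.default_rng(4)
A2, B2 = rng.standard_normal((5, 3)), rng.standard_normal((3, 6))
print(np.allclose(pinv(A2 @ B2), pinv(B2) @ pinv(A2)))   # True
```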
Note: Recall that A in R^{n x n} is normal if A A^T = A^T A. For example, if A is symmetric, skew-symmetric, or orthogonal, then it is normal. However, a matrix can be none of the preceding but still be normal, such as
A = [a b; -b a]
for scalars a, b in R.

The next theorem is fundamental to facilitating a compact and unifying approach to studying the existence of solutions of (matrix) linear equations and linear least squares problems.

Theorem 4.18. Suppose A in R^{n x p}, B in R^{n x m}. Then R(B) is a subspace of R(A) if and only if A A^+ B = B.

Proof: Suppose R(B) is a subspace of R(A) and take arbitrary x in R^m. Then Bx in R(B), which is contained in R(A), so there exists a vector y in R^p such that Ay = Bx. Then we have
Bx = Ay = A A^+ A y = A A^+ B x,
where one of the Penrose properties is used above. Since x was arbitrary, we have shown that B = A A^+ B. To prove the converse, assume that A A^+ B = B and take arbitrary y in R(B). Then there exists a vector x in R^m such that Bx = y, whereupon
y = Bx = A A^+ B x in R(A).

EXERCISES

1. Use Theorem 4.4 to compute the pseudoinverse of [1 2; 2 1].
2. For x, y in R^n, show that (x y^T)^+ = (x^T x)^+ (y^T y)^+ y x^T.
3. For A in R^{m x n}, prove that R(A) = R(A A^T) using only definitions and elementary properties of the Moore-Penrose pseudoinverse.
4. For A in R^{m x n}, prove that R(A^+) = R(A^T).
5. For A in R^{p x n} and B in R^{m x n}, show that N(A) is a subspace of N(B) if and only if B A^+ A = B.
6. Let A in R^{n x n}, B in R^{n x m}, and D in R^{m x m}, and suppose further that D is nonsingular.
   (a) Prove or disprove that
       [A AB; 0 D]^+ = [A^+ -A^+ A B D^{-1}; 0 D^{-1}].
   (b) Prove or disprove that
       [A B; 0 D]^+ = [A^+ -A^+ B D^{-1}; 0 D^{-1}].
Chapter 5

Introduction to the Singular Value Decomposition

In this chapter we give a brief introduction to the singular value decomposition (SVD). We show that every matrix has an SVD and describe some useful properties and applications of this important matrix factorization. The SVD plays a key conceptual and computational role throughout (numerical) linear algebra and its applications.

5.1 The Fundamental Theorem

Theorem 5.1. Let A be in R_r^{m x n}. Then there exist orthogonal matrices U in R^{m x m} and V in R^{n x n} such that

    A = U Sigma V^T,                                          (5.1)

where Sigma = [S 0; 0 0], S = diag(sigma_1, ..., sigma_r) in R^{r x r}, and sigma_1 >= ... >= sigma_r > 0. More specifically, we have

    A = [U_1  U_2] [ S  0 ] [ V_1^T ]                         (5.2)
                   [ 0  0 ] [ V_2^T ]
      = U_1 S V_1^T.                                          (5.3)

The submatrix sizes are all determined by r (which must be <= min{m, n}); i.e., U_1 in R^{m x r}, U_2 in R^{m x (m-r)}, V_1 in R^{n x r}, V_2 in R^{n x (n-r)}, and the 0-subblocks in Sigma are compatibly dimensioned.

Proof: Since A^T A >= 0 (A^T A is symmetric and nonnegative definite; recall, for example, [24, Ch. 6]), its eigenvalues are all real and nonnegative. (Note: The rest of the proof follows analogously if we start with the observation that A A^T >= 0; the details are left to the reader as an exercise.) Denote the set of eigenvalues of A^T A by {sigma_i^2, i = 1, ..., n} with sigma_1 >= ... >= sigma_r > 0 = sigma_{r+1} = ... = sigma_n. Let {v_i, i = 1, ..., n} be a set of corresponding orthonormal eigenvectors and let V_1 = [v_1, ..., v_r], V_2 = [v_{r+1}, ..., v_n]. Letting S = diag(sigma_1, ..., sigma_r), we can write A^T A V_1 = V_1 S^2. Premultiplying by V_1^T gives V_1^T A^T A V_1 = V_1^T V_1 S^2 = S^2, the latter equality following from the orthonormality of the v_i vectors. Pre- and postmultiplying by S^{-1} gives the equation

    S^{-1} V_1^T A^T A V_1 S^{-1} = I.                        (5.4)

Turning now to the eigenvalue equations corresponding to the eigenvalues sigma_{r+1}, ..., sigma_n, we have that A^T A V_2 = V_2 * 0 = 0, whence V_2^T A^T A V_2 = 0 and thus A V_2 = 0. Now define the matrix U_1 in R^{m x r} by U_1 = A V_1 S^{-1}. Then from (5.4) we see that U_1^T U_1 = I; i.e., the columns of U_1 are orthonormal. Choose any matrix U_2 in R^{m x (m-r)} such that [U_1  U_2] is orthogonal. Then

    U^T A V = [ U_1^T A V_1   U_1^T A V_2 ]   [ U_1^T A V_1   0 ]
              [ U_2^T A V_1   U_2^T A V_2 ] = [ U_2^T A V_1   0 ]

since A V_2 = 0. Referring to the equation U_1 = A V_1 S^{-1} defining U_1, we see that U_1^T A V_1 = S and U_2^T A V_1 = U_2^T U_1 S = 0, the latter equality following from the orthogonality of the columns of U_1 and U_2. Thus, we see that, in fact, U^T A V = [S 0; 0 0], and defining this matrix to be Sigma completes the proof.

Definition 5.2. Let A = U Sigma V^T be an SVD of A as in Theorem 5.1.
1. The set {sigma_1, ..., sigma_r} is called the set of (nonzero) singular values of the matrix A and is denoted Sigma(A). From the proof of Theorem 5.1 we see that sigma_i(A) = lambda_i^{1/2}(A^T A) = lambda_i^{1/2}(A A^T). Note that there are also min{m, n} - r zero singular values.
2. The columns of U are called the left singular vectors of A (and are the orthonormal eigenvectors of A A^T).
3. The columns of V are called the right singular vectors of A (and are the orthonormal eigenvectors of A^T A).

Remark 5.3. The analogous complex case in which A is in C_r^{m x n} is quite straightforward. The decomposition is A = U Sigma V^H, where U and V are unitary and the proof is essentially identical, except for Hermitian transposes replacing transposes.

Remark 5.4. Note that U and V can be interpreted as changes of basis in both the domain and codomain spaces with respect to which A then has a diagonal matrix representation. Specifically, let C denote A thought of as a linear transformation mapping R^n to R^m. Then rewriting A = U Sigma V^T as A V = U Sigma, we see that Mat C is Sigma with respect to the bases {v_1, ..., v_n} for R^n and {u_1, ..., u_m} for R^m (see the discussion in Section 3.2). See also Remark 5.16.

Remark 5.5. The singular value decomposition is not unique. For example, an examination of the proof of Theorem 5.1 reveals that
• any orthonormal basis for N(A) can be used for V_2;
• there may be nonuniqueness associated with the columns of V_1 (and hence U_1) corresponding to multiple sigma_i's;
• any U_2 can be used so long as [U_1  U_2] is orthogonal;
• columns of U and V can be changed (in tandem) by sign (or by a multiplier of the form e^{j theta} in the complex case).
What is unique, however, is the matrix Sigma and the span of the columns of U_1, U_2, V_1, V_2 (see Theorem 5.11). Note, too, that a "full SVD" (5.2) can always be constructed from a "compact SVD" (5.3).
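Before turning to further properties, it may help to see the factorization of Theorem 5.1 checked numerically. The NumPy sketch below is an illustration added here (not part of the original text); it verifies A = U Sigma V^T, the eigenvalue relation of Definition 5.2, and, as a preview of Theorem 5.14 below, the compact-SVD formula for the pseudoinverse.

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.standard_normal((4, 2)) @ rng.standard_normal((2, 3))   # a rank-2 matrix in R^{4x3}

    U, s, Vt = np.linalg.svd(A)              # full SVD; s holds sigma_1 >= sigma_2 >= sigma_3
    Sigma = np.zeros((4, 3))
    Sigma[:3, :3] = np.diag(s)
    print(np.allclose(A, U @ Sigma @ Vt))    # A = U Sigma V^T
    print(np.allclose(U.T @ U, np.eye(4)), np.allclose(Vt @ Vt.T, np.eye(3)))

    # sigma_i(A) = lambda_i^{1/2}(A^T A)  (Definition 5.2)
    evals = np.sort(np.linalg.eigvalsh(A.T @ A))[::-1]
    print(np.allclose(s, np.sqrt(np.maximum(evals, 0.0))))

    # Compact SVD and A+ = V_1 S^{-1} U_1^T (see Theorem 5.14 below)
    r = np.sum(s > 1e-10)
    U1, V1, S = U[:, :r], Vt[:r, :].T, np.diag(s[:r])
    print(np.allclose(V1 @ np.linalg.inv(S) @ U1.T, np.linalg.pinv(A)))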
Remark 5.6. Computing an SVD by working directly with the eigenproblem for A^T A or A A^T is numerically poor in finite-precision arithmetic. Better algorithms exist that work directly on A via a sequence of orthogonal transformations; see, e.g., [7], [11], [25].

A factorization U Sigma V^T of an m x n matrix A qualifies as an SVD if U and V are orthogonal and Sigma is an m x n "diagonal" matrix whose diagonal elements in the upper left corner are positive (and ordered). For example, if A = U Sigma V^T is an SVD of A, then V Sigma^T U^T is an SVD of A^T.

Example 5.7.

    [ 1  0 ]   [ cos(theta)  -sin(theta) ] [ 1  0 ] [ cos(theta)   sin(theta) ]
    [ 0  1 ] = [ sin(theta)   cos(theta) ] [ 0  1 ] [ -sin(theta)  cos(theta) ]

is an SVD, where theta is arbitrary.

Example 5.8.

    A = U [ 1  0 ] U^T,
          [ 0  1 ]

where U is an arbitrary 2 x 2 orthogonal matrix, is an SVD of A = I.

Example 5.9.

    [ 2  2 ]   [ 2/3  -sqrt(5)/5    4 sqrt(5)/15 ] [ 3 sqrt(2)  0 ] [ sqrt(2)/2   sqrt(2)/2 ]
    [ 2  2 ] = [ 2/3      0          -sqrt(5)/3  ] [     0      0 ] [ sqrt(2)/2  -sqrt(2)/2 ]
    [ 1  1 ]   [ 1/3  2 sqrt(5)/5   2 sqrt(5)/15 ] [     0      0 ]

is an SVD.

Example 5.10. Let A in R^{n x n} be symmetric and positive definite. Let V be an orthogonal matrix of eigenvectors that diagonalizes A, i.e., V^T A V = Lambda > 0. Then A = V Lambda V^T is an SVD of A.

5.2 Some Basic Properties

Theorem 5.11. Let A in R^{m x n} have a singular value decomposition A = U Sigma V^T. Using the notation of Theorem 5.1, the following properties hold:

1. rank(A) = r = the number of nonzero singular values of A.

2. Let U = [u_1, ..., u_m] and V = [v_1, ..., v_n]. Then A has the dyadic (or outer product) expansion

    A = sum_{i=1}^{r} sigma_i u_i v_i^T.                      (5.5)

3. The singular vectors satisfy the relations

    A v_i = sigma_i u_i,                                      (5.6)
    A^T u_i = sigma_i v_i                                     (5.7)

for i = 1, ..., r.

4. Let U_1 = [u_1, ..., u_r], U_2 = [u_{r+1}, ..., u_m], V_1 = [v_1, ..., v_r], and V_2 = [v_{r+1}, ..., v_n]. Then
(a) R(U_1) = R(A) = N(A^T)-perp;
(b) R(U_2) = R(A)-perp = N(A^T);
(c) R(V_1) = N(A)-perp = R(A^T);
(d) R(V_2) = N(A) = R(A^T)-perp.

Remark 5.12. Part 4 of the above theorem provides a numerically superior method for finding (orthonormal) bases for the four fundamental subspaces compared to methods based on, for example, reduction to row or column echelon form. Note that each subspace requires knowledge of the rank r. The relationship to the four fundamental subspaces is summarized nicely in Figure 5.1.

Figure 5.1. SVD and the four fundamental subspaces.

Remark 5.13. The elegance of the dyadic decomposition (5.5) as a sum of outer products and the key vector relations (5.6) and (5.7) explain why it is conventional to write the SVD as A = U Sigma V^T rather than, say, A = U Sigma V.

Theorem 5.14. Let A in R^{m x n} have a singular value decomposition A = U Sigma V^T as in Theorem 5.1. Then

    A^+ = V Sigma^+ U^T,                                      (5.8)

where

    Sigma^+ = [ S^{-1}  0 ]  in R^{n x m},                    (5.9)
              [   0     0 ]

with the 0-subblocks appropriately sized. Furthermore, if we let the columns of U and V be as defined in Theorem 5.11, then

    A^+ = sum_{i=1}^{r} (1/sigma_i) v_i u_i^T.                (5.10)

Proof: The proof follows easily by verifying the four Penrose conditions.

Remark 5.15. Note that none of the expressions above quite qualifies as an SVD of A^+ if we insist that the singular values be ordered from largest to smallest. However, a simple reordering accomplishes the task. Using the so-called reverse-order identity matrix (or exchange matrix) P = [e_r, e_{r-1}, ..., e_2, e_1], which is clearly orthogonal and symmetric,

    A^+ = (V_1 P)(P S^{-1} P)(P U_1^T)                        (5.11)

is the matrix version of (5.10) with the singular values reordered, and hence is an SVD of A^+.

Remark 5.16. Recall the linear transformation T used in the proof of Theorem 3.17 and in Definition 4.1. Since T is determined by its action on a basis, and since {v_1, ..., v_r} is a basis for N(A)-perp, T can be defined by T v_i = sigma_i u_i, i = 1, ..., r. Similarly, since {u_1, ..., u_r} is a basis for R(A), T^{-1} can be defined by T^{-1} u_i = (1/sigma_i) v_i, i = 1, ..., r. From Section 3.2, the matrix representation for T with respect to the bases {v_1, ..., v_r} and {u_1, ..., u_r} is clearly S, while the matrix representation for the inverse linear transformation T^{-1} with respect to the same bases is S^{-1}. A "full SVD" can be similarly constructed.

5.3 Row and Column Compressions

Row compression

Let A in R^{m x n} have an SVD given by (5.1). Then

    U^T A = Sigma V^T = [ S  0 ] [ V_1^T ]   [ S V_1^T ]
                        [ 0  0 ] [ V_2^T ] = [    0    ]  in R^{m x n}.

Notice that N(A) = N(U^T A) = N(S V_1^T) and the matrix S V_1^T in R^{r x n} has full row rank. In other words, premultiplication of A by U^T is an orthogonal transformation that "compresses" A by row transformations. Such a row compression can also be accomplished by orthogonal row transformations performed directly on A to reduce it to the form [R; 0], where R is upper triangular. Both compressions are analogous to the so-called row-reduced echelon form which, when derived by a Gaussian elimination algorithm implemented in finite-precision arithmetic, is not generally as reliable a procedure.

Column compression

Again, let A in R^{m x n} have an SVD given by (5.1). Then

    A V = U Sigma = [U_1  U_2] [ S  0 ] = [ U_1 S   0 ]  in R^{m x n}.
                               [ 0  0 ]

This time, notice that R(A) = R(A V) = R(U_1 S) and the matrix U_1 S in R^{m x r} has full column rank. In other words, postmultiplication of A by V is an orthogonal transformation that "compresses" A by column transformations. Such a compression is analogous to the so-called column-reduced echelon form, which is not generally a reliable procedure when performed by Gauss transformations in finite-precision arithmetic. For details, see, for example, [7], [11], [23], [25].

EXERCISES

1. Let X in R^{m x n}. If X^T X = 0, show that X = 0.

2. Prove Theorem 5.1 starting from the observation that A A^T >= 0.

3. Let A in R^{n x n} be symmetric but indefinite. Determine an SVD of A.

4. Let x in R^m, y in R^n be nonzero vectors. Determine an SVD of the matrix A in R_1^{m x n} defined by A = x y^T.

5. Determine SVDs of the matrices

    (a) [ -1  0 ]        (b) [ 1 ]
        [  0  1 ]            [ 0 ]
                             [ 1 ]

6. Let A in R^{m x n} and suppose W in R^{m x m} and Y in R^{n x n} are orthogonal.
(a) Show that A and W A Y have the same singular values (and hence the same rank).
(b) Suppose that W and Y are nonsingular but not necessarily orthogonal. Do A and W A Y have the same singular values? Do they have the same rank?

7. Let A be in R_n^{n x n}. Use the SVD to determine a polar factorization of A, i.e., A = Q P, where Q is orthogonal and P = P^T > 0. Note: this is analogous to the polar form z = r e^{i theta} of a complex scalar z (where i = j = sqrt(-1)).
Chapter 6

Linear Equations

In this chapter we examine existence and uniqueness of solutions of systems of linear equations. General linear systems of the form

    A X = B,   A in R^{m x n}, B in R^{m x k},                (6.1)

are studied and include, as a special case, the familiar vector system

    A x = b,   A in R^{m x n}, b in R^m.                      (6.2)

6.1 Vector Linear Equations

We begin with a review of some of the principal results associated with vector linear systems.

Theorem 6.1. Consider the system of linear equations

    A x = b,   A in R^{m x n}, b in R^m.                      (6.3)

1. There exists a solution to (6.3) if and only if b is in R(A); equivalently, there exists a solution if and only if rank([A, b]) = rank(A).

2. There exists a solution to (6.3) for all b in R^m if and only if R(A) = R^m, i.e., A is onto, and this is possible only if m <= n (since m = dim R(A) = rank(A) <= min{m, n}).

3. A solution to (6.3) is unique if and only if N(A) = 0, i.e., A is 1-1.

4. There exists a unique solution to (6.3) for all b in R^m if and only if A is nonsingular; equivalently, A in R^{m x m} and A has neither a 0 singular value nor a 0 eigenvalue.

5. There exists at most one solution to (6.3) for all b in R^m if and only if the columns of A are linearly independent, i.e., N(A) = 0, and this is possible only if m >= n.

6. There exists a nontrivial solution to the homogeneous system Ax = 0 if and only if rank(A) < n.

Proof: The proofs are straightforward and can be consulted in standard texts on linear algebra. Note that some parts of the theorem follow directly from others. For example, to prove part 6, note that x = 0 is always a solution to the homogeneous system. Therefore, we must have the case of a nonunique solution, i.e., A is not 1-1, which implies rank(A) < n by part 3.

6.2 Matrix Linear Equations

In this section we present some of the principal results concerning existence and uniqueness of solutions to the general matrix linear system (6.1). Note that the results of Theorem 6.1 follow from those below for the special case k = 1, while results for (6.2) follow by specializing even further to the case m = n.

Theorem 6.2 (Existence). The matrix linear equation

    A X = B,   A in R^{m x n}, B in R^{m x k},                (6.4)

has a solution if and only if R(B) is contained in R(A); equivalently, a solution exists if and only if A A^+ B = B.

Proof: The subspace inclusion criterion follows essentially from the definition of the range of a matrix. The matrix criterion is Theorem 4.18.

Theorem 6.3. Let A in R^{m x n}, B in R^{m x k} and suppose that A A^+ B = B. Then any matrix of the form

    X = A^+ B + (I - A^+ A) Y,  where Y in R^{n x k} is arbitrary,   (6.5)

is a solution of

    A X = B.                                                  (6.6)

Furthermore, all solutions of (6.6) are of this form.

Proof: To verify that (6.5) is a solution, premultiply by A:

    A X = A A^+ B + A (I - A^+ A) Y
        = B + (A - A A^+ A) Y        by hypothesis
        = B                           since A A^+ A = A by the first Penrose condition.

That all solutions are of this form can be seen as follows. Let Z be an arbitrary solution of (6.6), i.e., A Z = B. Then we can write

    Z = A^+ A Z + (I - A^+ A) Z
      = A^+ B + (I - A^+ A) Z,

and this is clearly of the form (6.5).

Remark 6.4. When A is square and nonsingular, A^+ = A^{-1} and so (I - A^+ A) = 0. Thus, there is no "arbitrary" component, leaving only the unique solution X = A^{-1} B.

Remark 6.5. It can be shown that the particular solution X = A^+ B is the solution of (6.6) that minimizes Tr X^T X. (Tr(.) denotes the trace of a matrix; recall that Tr X^T X = sum_{i,j} x_{ij}^2.)

Theorem 6.6 (Uniqueness). A solution of the matrix linear equation

    A X = B,   A in R^{m x n}, B in R^{m x k},                (6.7)

is unique if and only if A^+ A = I; equivalently, (6.7) has a unique solution if and only if N(A) = 0.

Proof: The first equivalence is immediate from Theorem 6.3. The second follows by noting that A^+ A = I can occur only if r = n, where r = rank(A) (recall r <= n). But rank(A) = n if and only if A is 1-1 or N(A) = 0.

Example 6.7. Suppose A in R^{n x n}. Find all solutions of the homogeneous system Ax = 0.
Solution:

    x = A^+ 0 + (I - A^+ A) y
      = (I - A^+ A) y,

where y in R^n is arbitrary. Hence, there exists a nonzero solution if and only if A^+ A != I. This is equivalent to either rank(A) = r < n or A being singular. Clearly, if there exists a nonzero solution, it is not unique.
Computation: Since y is arbitrary, it is easy to see that all solutions are generated from a basis for R(I - A^+ A). But if A has an SVD given by A = U Sigma V^T, then it is easily checked that I - A^+ A = V_2 V_2^T and R(V_2 V_2^T) = R(V_2) = N(A).

Example 6.8. Characterize all right inverses of a matrix A in R^{m x n}; equivalently, find all solutions R of the equation A R = I_m. Here, we write I_m to emphasize the m x m identity matrix.
Solution: There exists a right inverse if and only if R(I_m) is contained in R(A), and this is equivalent to A A^+ I_m = I_m. Clearly, this can occur if and only if rank(A) = r = m (since r <= m), and this is equivalent to A being onto (A^+ is then a right inverse). All right inverses of A are then of the form

    R = A^+ I_m + (I_n - A^+ A) Y = A^+ + (I - A^+ A) Y,

where Y in R^{n x m} is arbitrary. There is a unique right inverse if and only if A^+ A = I (N(A) = 0), in which case A must be invertible and R = A^{-1}.

Example 6.9. Consider the system of linear first-order difference equations

    x_{k+1} = A x_k + B u_k                                   (6.8)
with A in R^{n x n} and B in R^{n x m} (n >= 1, m >= 1). The vector x_k in linear system theory is known as the state vector at time k, while u_k is the input (control) vector. The general solution of (6.8) is given by

    x_k = A^k x_0 + sum_{j=0}^{k-1} A^{k-1-j} B u_j           (6.9)

        = A^k x_0 + [B, AB, ..., A^{k-1}B] [u_{k-1}; u_{k-2}; ...; u_0]   (6.10)

for k >= 1. We might now ask the question: Given x_0 = 0, does there exist an input sequence {u_j}_{j=0}^{k-1} such that x_k takes an arbitrary value in R^n? In linear system theory, this is a question of reachability. Since m >= 1, from the fundamental Existence Theorem, we see that (6.8) is reachable if and only if

    R([B, AB, ..., A^{n-1}B]) = R^n

or, equivalently, if and only if

    rank [B, AB, ..., A^{n-1}B] = n.

A related question is the following: Given an arbitrary initial vector x_0, does there exist an input sequence {u_j}_{j=0}^{n-1} such that x_n = 0? In linear system theory, this is called controllability. Again from Theorem 6.2, we see that (6.8) is controllable if and only if

    R(A^n) is contained in R([B, AB, ..., A^{n-1}B]).

Clearly, reachability always implies controllability and, if A is nonsingular, controllability and reachability are equivalent. The matrices

    A = [ 0  1 ]   and   B = [ 1 ]
        [ 0  0 ]             [ 0 ]

provide an example of a system that is controllable but not reachable.

The above are standard conditions with analogues for continuous-time models (i.e., linear differential equations). There are many other algebraically equivalent conditions.

Example 6.10. We now introduce an output vector y_k to the system (6.8) of Example 6.9 by appending the equation

    y_k = C x_k + D u_k                                       (6.11)

with C in R^{p x n} and D in R^{p x m} (p >= 1). We can then pose some new questions about the overall system that are dual in the system-theoretic sense to reachability and controllability. The condition dual to reachability is called observability: When does knowledge of {u_j}_{j=0}^{n-1} and {y_j}_{j=0}^{n-1} suffice to determine (uniquely) x_0? As a dual to controllability, we have the notion of reconstructibility: When does knowledge of {u_j}_{j=0}^{n-1} and {y_j}_{j=0}^{n-1} suffice to determine (uniquely) x_n? The answers are cast in terms that are dual in the linear algebra sense as well. The fundamental duality result from linear system theory is the following:

    (A, B) is reachable [controllable] if and only if (A^T, B^T) is observable [reconstructible].
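The reachability rank test is straightforward to try out numerically. The NumPy sketch below is an illustration added here (not part of the original text); the pair (A, B) is of the controllable-but-not-reachable kind discussed above.

    import numpy as np

    def reachability_matrix(A, B):
        # [B, AB, ..., A^{n-1}B] for x_{k+1} = A x_k + B u_k
        n = A.shape[0]
        blocks, M = [], B
        for _ in range(n):
            blocks.append(M)
            M = A @ M
        return np.hstack(blocks)

    A = np.array([[0.0, 1.0],
                  [0.0, 0.0]])
    B = np.array([[1.0],
                  [0.0]])
    R = reachability_matrix(A, B)
    print(np.linalg.matrix_rank(R))            # 1 < n = 2, so the system is not reachable
    An = np.linalg.matrix_power(A, 2)          # A^2 = 0, so R(A^n) is trivially contained in R(R)
    print(np.allclose(R @ np.linalg.pinv(R) @ An, An))   # True: controllable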
To derive a condition for observability, notice that

    y_k = C A^k x_0 + sum_{j=0}^{k-1} C A^{k-1-j} B u_j + D u_k.          (6.12)

Thus,

    [ y_0 - D u_0                                            ]   [ C        ]
    [ y_1 - C B u_0 - D u_1                                  ]   [ C A      ]
    [ ...                                                    ] = [ ...      ] x_0.   (6.13)
    [ y_{n-1} - sum_{j=0}^{n-2} C A^{n-2-j} B u_j - D u_{n-1}]   [ C A^{n-1}]

Let v denote the (known) vector on the left-hand side of (6.13) and let R denote the matrix on the right-hand side. Then, by definition, v is in R(R), so a solution exists. By the fundamental Uniqueness Theorem, the solution is then unique if and only if N(R) = 0, or, equivalently, if and only if

    N([C; CA; ...; CA^{n-1}]) = 0.

6.3 A More General Matrix Linear Equation

Theorem 6.11. Let A in R^{m x n}, B in R^{m x q}, and C in R^{p x q}. Then the equation

    A X C = B                                                 (6.14)

has a solution if and only if A A^+ B C^+ C = B, in which case the general solution is of the form

    X = A^+ B C^+ + Y - A^+ A Y C C^+,                        (6.15)

where Y in R^{n x p} is arbitrary.

A compact matrix criterion for uniqueness of solutions to (6.14) requires the notion of the Kronecker product of matrices for its statement. Such a criterion (C C^+ tensor A^+ A = I) is stated and proved in Theorem 13.27.

6.4 Some Useful and Interesting Inverses

In many applications, the coefficient matrices of interest are square and nonsingular. Listed below is a small collection of useful matrix identities, particularly for block matrices, associated with matrix inverses. In these identities, A in R^{n x n}, B in R^{n x m}, C in R^{m x n}, and D in R^{m x m}. Invertibility is assumed for any component or subblock whose inverse is indicated. Verification of each identity is recommended as an exercise for the reader.
1. (A + B D C)^{-1} = A^{-1} - A^{-1} B (D^{-1} + C A^{-1} B)^{-1} C A^{-1}.
   This result is known as the Sherman-Morrison-Woodbury formula. It has many applications (and is frequently "rediscovered"), including, for example, formulas for the inverse of a sum of matrices such as (A + D)^{-1} or (A^{-1} + D^{-1})^{-1}. It also yields very efficient "updating" or "downdating" formulas in expressions such as (A + x x^T)^{-1} (with symmetric A in R^{n x n} and x in R^n) that arise in optimization theory.

2. [ I   0 ]^{-1}   [ I   0 ]
   [ C  -I ]      = [ C  -I ].

3. [ I   B ]^{-1}   [ I   B ]
   [ 0  -I ]      = [ 0  -I ].

   Both of these matrices satisfy the matrix equation X^2 = I, from which it is obvious that X^{-1} = X. Note that the positions of the I and -I blocks may be exchanged.

4. [ A  B ]^{-1}   [ A^{-1}  -A^{-1} B D^{-1} ]
   [ 0  D ]      = [   0           D^{-1}     ].

5. [ A  0 ]^{-1}   [      A^{-1}          0     ]
   [ C  D ]      = [ -D^{-1} C A^{-1}   D^{-1}  ].

6. [ A  B ]^{-1}   [ A^{-1} + A^{-1} B E C A^{-1}   -A^{-1} B E ]
   [ C  D ]      = [        -E C A^{-1}                  E      ],

   where E = (D - C A^{-1} B)^{-1} (E is the inverse of the Schur complement of A). This result follows easily from the block LU factorization in property 16 of Section 1.4.

7. [ A  B ]^{-1}   [       F                    -F B D^{-1}            ]
   [ C  D ]      = [ -D^{-1} C F    D^{-1} + D^{-1} C F B D^{-1}       ],

   where F = (A - B D^{-1} C)^{-1}. This result follows easily from the block UL factorization in property 17 of Section 1.4.

EXERCISES

1. As in Example 6.8, characterize all left inverses of a matrix A in R^{m x n}.

2. Let A in R^{m x n}, B in R^{m x k} and suppose A has an SVD as in Theorem 5.1. Assuming R(B) is contained in R(A), characterize all solutions of the matrix linear equation A X = B in terms of the SVD of A.
3. Let x, y in R^n and suppose further that x^T y != 1. Show that

    (I - x y^T)^{-1} = I + (1 / (1 - x^T y)) x y^T.

4. Let x, y in R^n and suppose further that x^T y != 1. Show that

    [ I    x ]^{-1}   [ I + c x y^T   -c x ]
    [ y^T  1 ]      = [   -c y^T        c  ],

   where c = 1 / (1 - x^T y).

5. Let A be in R_n^{n x n} and let A^{-1} have columns c_1, ..., c_n and individual elements gamma_ij. Assume that gamma_ji != 0 for some i and j. Show that the matrix B = A - (1/gamma_ji) e_i e_j^T (i.e., A with 1/gamma_ji subtracted from its (i, j)th element) is singular.
   Hint: Show that c_i is in N(B).

6. As in Example 6.10, check directly that the condition for reconstructibility takes the form

    N([C; CA; ...; CA^{n-1}]) is contained in N(A^n).
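Identity 1 of Section 6.4 (the Sherman-Morrison-Woodbury formula) is easy to spot-check numerically. The following short NumPy sketch is an illustration added here (not part of the original text); the matrices are random and assumed invertible where required.

    import numpy as np

    rng = np.random.default_rng(3)
    n, m = 5, 2
    A = rng.standard_normal((n, n)) + n * np.eye(n)   # comfortably invertible
    B = rng.standard_normal((n, m))
    C = rng.standard_normal((m, n))
    D = np.eye(m)

    Ai = np.linalg.inv(A)
    lhs = np.linalg.inv(A + B @ D @ C)
    rhs = Ai - Ai @ B @ np.linalg.inv(np.linalg.inv(D) + C @ Ai @ B) @ C @ Ai
    print(np.allclose(lhs, rhs))   # True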
Chapter 7

Projections, Inner Product Spaces, and Norms

7.1 Projections

Definition 7.1. Let V be a vector space with V = X (+) Y. By Theorem 2.26, every v in V has a unique decomposition v = x + y with x in X and y in Y. Define P_{X,Y} : V -> X, a subspace of V, by

    P_{X,Y} v = x  for all v in V.

P_{X,Y} is called the (oblique) projection on X along Y.

Figure 7.1 displays the projection of v on both X and Y in the case V = R^2.

Figure 7.1. Oblique projections.

Theorem 7.2. P_{X,Y} is linear and P_{X,Y}^2 = P_{X,Y}.

Theorem 7.3. A linear transformation P is a projection if and only if it is idempotent, i.e., P^2 = P. Also, P is a projection if and only if I - P is a projection. In fact, P_{Y,X} = I - P_{X,Y}.

Proof: Suppose P is a projection, say on X along Y (using the notation of Definition 7.1). Let v in V be arbitrary. Then Pv = P(x + y) = Px = x. Thus, P^2 v = P P v = P x = x = P v. Hence P^2 = P. Conversely, suppose P^2 = P. Let X = {v in V : Pv = v} and Y = {v in V : Pv = 0}. It is easy to check that X and Y are subspaces. We now prove that V = X (+) Y. First note that if v is in X, then Pv = v; if v is in Y, then Pv = 0. Hence if v is in both X and Y, then v = 0. Now let v in V be arbitrary and write v = Pv + (I - P)v. Then x = Pv is in X since Px = P^2 v = Pv = x, while y = (I - P)v is in Y since Py = Pv - P^2 v = 0. Thus, V = X (+) Y and the projection on X along Y is P. Essentially the same argument shows that I - P is the projection on Y along X.

Definition 7.4. In the special case where Y = X-perp, P_{X,X-perp} is called an orthogonal projection and we then use the notation P_X = P_{X,X-perp}.

Theorem 7.5. P in R^{n x n} is the matrix of an orthogonal projection (onto R(P)) if and only if P^2 = P = P^T.

Proof: Let P be an orthogonal projection (on X, say, along X-perp) and let x, y in R^n be arbitrary. Note that (I - P)x = (I - P_{X,X-perp})x = P_{X-perp,X} x by Theorem 7.3. Thus, (I - P)x is in X-perp. Since Py is in X, we have (Py)^T (I - P)x = y^T P^T (I - P)x = 0. Since x and y were arbitrary, we must have P^T (I - P) = 0. Hence P^T = P^T P = P, with the second equality following since P^T P is symmetric. Conversely, suppose P is a symmetric projection matrix and let x be arbitrary. Write x = Px + (I - P)x. Then x^T P^T (I - P)x = x^T P (I - P)x = 0. Thus, since Px is in R(P), then (I - P)x is in R(P)-perp and P must be an orthogonal projection.

7.1.1 The four fundamental orthogonal projections

Using the notation of Theorems 5.1 and 5.11, let A in R^{m x n} with SVD A = U Sigma V^T = U_1 S V_1^T. Then

    P_{R(A)}       = A A^+     = U_1 U_1^T = sum_{i=1}^{r} u_i u_i^T,
    P_{R(A)-perp}  = I - A A^+ = U_2 U_2^T = sum_{i=r+1}^{m} u_i u_i^T,
    P_{N(A)}       = I - A^+ A = V_2 V_2^T = sum_{i=r+1}^{n} v_i v_i^T,
    P_{N(A)-perp}  = A^+ A     = V_1 V_1^T = sum_{i=1}^{r} v_i v_i^T

are easily checked to be (unique) orthogonal projections onto the respective four fundamental subspaces.
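As a quick numerical illustration (added here, not part of the original text), the NumPy sketch below confirms that A A^+ and A^+ A are symmetric idempotents and that A A^+ leaves vectors already in R(A) unchanged.

    import numpy as np

    rng = np.random.default_rng(4)
    A = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 3))   # rank 2
    Ap = np.linalg.pinv(A)

    P_RA = A @ Ap        # projection onto R(A)
    P_NAp = Ap @ A       # projection onto N(A)-perp = R(A^T)
    for P in (P_RA, P_NAp):
        print(np.allclose(P, P.T), np.allclose(P @ P, P))   # symmetric, idempotent

    x = A @ rng.standard_normal(3)            # a vector already in R(A)
    print(np.allclose(P_RA @ x, x))           # left unchanged by the projection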
Example 7.6. Determine the orthogonal projection of a vector v in R^n on another nonzero vector w in R^n.
Solution: Think of the vector w as an element of the one-dimensional subspace R(w). Then the desired projection is simply

    P_{R(w)} v = w w^+ v
               = (w w^T v) / (w^T w)     (using Example 4.8)
               = ((w^T v) / (w^T w)) w.

Moreover, the vector z that is orthogonal to w and such that v = Pv + z is given by

    z = P_{R(w)-perp} v = (I - P_{R(w)}) v = v - ((w^T v) / (w^T w)) w.

See Figure 7.2. A direct calculation shows that z and w are, in fact, orthogonal:

    z^T w = v^T w - ((w^T v) / (w^T w)) w^T w = 0.

Figure 7.2. Orthogonal projection on a "line."

Example 7.7. Recall the proof of Theorem 3.11. There, {v_1, ..., v_k} was an orthonormal basis for a subset S of R^n. An arbitrary vector x in R^n was chosen and a formula for x_1 appeared rather mysteriously. The expression for x_1 is simply the orthogonal projection of x on S.

Example 7.8. Recall the diagram of the four fundamental subspaces. The indicated direct sum decompositions of the domain R^n and codomain R^m are given easily as follows. Let x in R^n be an arbitrary vector. Then

    x = P_{N(A)-perp} x + P_{N(A)} x
      = A^+ A x + (I - A^+ A) x
      = V_1 V_1^T x + V_2 V_2^T x     (recall V V^T = I).
Similarly, let y in R^m be an arbitrary vector. Then

    y = P_{R(A)} y + P_{R(A)-perp} y
      = A A^+ y + (I - A A^+) y
      = U_1 U_1^T y + U_2 U_2^T y     (recall U U^T = I).

Example 7.9. Let

    A = [ 1  1  0 ]
        [ 1  1  0 ].

Then

    A^+ = [ 1/4  1/4 ]
          [ 1/4  1/4 ]
          [  0    0  ]

and we can decompose the vector [2 3 4]^T uniquely into the sum of a vector in N(A)-perp and a vector in N(A), as follows:

    [ 2 ]                              [ 1/2  1/2  0 ] [ 2 ]   [  1/2  -1/2  0 ] [ 2 ]
    [ 3 ] = A^+ A x + (I - A^+ A) x =  [ 1/2  1/2  0 ] [ 3 ] + [ -1/2   1/2  0 ] [ 3 ]
    [ 4 ]                              [  0    0   0 ] [ 4 ]   [   0     0   1 ] [ 4 ]

          = [ 5/2 ]   [ -1/2 ]
            [ 5/2 ] + [  1/2 ].
            [  0  ]   [   4  ]

7.2 Inner Product Spaces

Definition 7.10. Let V be a vector space over R. Then <., .> : V x V -> R is a real inner product if

1. <x, x> >= 0 for all x in V and <x, x> = 0 if and only if x = 0.
2. <x, y> = <y, x> for all x, y in V.
3. <x, alpha y_1 + beta y_2> = alpha <x, y_1> + beta <x, y_2> for all x, y_1, y_2 in V and for all alpha, beta in R.

Example 7.11. Let V = R^n. Then <x, y> = x^T y is the "usual" Euclidean inner product or dot product.

Example 7.12. Let V = R^n. Then <x, y>_Q = x^T Q y, where Q = Q^T > 0 is an arbitrary n x n positive definite matrix, defines a "weighted" inner product.

Example 7.13. If A is in R^{m x n}, then A^T in R^{n x m} is the unique linear transformation or map such that

    <x, Ay> = <A^T x, y>  for all x in R^m and for all y in R^n.
It is easy to check that, with this more "abstract" definition of transpose, if the (i, j)th element of A is a_ij, then the (i, j)th element of A^T is a_ji. It can also be checked that all the usual properties of the transpose hold, such as (AB)^T = B^T A^T. However, the definition above allows us to extend the concept of transpose to the case of weighted inner products in the following way. Suppose A is in R^{m x n} and let <., .>_Q and <., .>_R, with Q and R positive definite, be weighted inner products on R^m and R^n, respectively. Then we can define the "weighted transpose" A^# as the unique map that satisfies

    <x, Ay>_Q = <A^# x, y>_R  for all x in R^m and for all y in R^n.

By Example 7.12 above, we must then have x^T Q A y = x^T (A^#)^T R y for all x, y. Hence we must have Q A = (A^#)^T R. Taking transposes (of the usual variety) gives A^T Q = R A^#. Since R is nonsingular, we find

    A^# = R^{-1} A^T Q.
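The defining property <x, Ay>_Q = <A^# x, y>_R with A^# = R^{-1} A^T Q can be verified directly. The NumPy sketch below is an illustration added here (not part of the original text); the helper spd() simply manufactures positive definite weight matrices.

    import numpy as np

    rng = np.random.default_rng(5)
    m, n = 4, 3
    A = rng.standard_normal((m, n))

    def spd(k):
        M = rng.standard_normal((k, k))
        return M @ M.T + k * np.eye(k)        # symmetric positive definite

    Q, R = spd(m), spd(n)                     # weights on R^m and R^n
    Ahash = np.linalg.inv(R) @ A.T @ Q        # the weighted transpose A#

    x, y = rng.standard_normal(m), rng.standard_normal(n)
    print(np.isclose(x @ Q @ (A @ y), (Ahash @ x) @ R @ y))   # <x, Ay>_Q == <A# x, y>_R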
We can also generalize the notion of orthogonality (x^T y = 0) to Q-orthogonality (Q a positive definite matrix). Two vectors x, y in R^n are Q-orthogonal (or conjugate with respect to Q) if <x, y>_Q = x^T Q y = 0. Q-orthogonality is an important tool used in studying conjugate direction methods in optimization theory.

Definition 7.14. Let V be a vector space over C. Then <., .> : V x V -> C is a complex inner product if
1. <x, x> >= 0 for all x in V and <x, x> = 0 if and only if x = 0.
2. <x, y> = \bar{<y, x>} for all x, y in V (the bar denoting complex conjugation).
3. <x, alpha y_1 + beta y_2> = alpha <x, y_1> + beta <x, y_2> for all x, y_1, y_2 in V and for all alpha, beta in C.
Remark 7.16. Note from parts 2 and 3 of Definition 7.14 that we have Remark 7.16. Note from parts 2 and 3 of Definition 7.14 that we have
(ax\ + fix2, y) = a(x\, y) + P(x2, y}.
Remark 7.17. The Euclidean inner product of x, y in C^n is given by

    <x, y> = sum_{i=1}^{n} \bar{x}_i y_i = x^H y.

The conventional definition of the complex Euclidean inner product is <x, y> = y^H x, but we use its complex conjugate x^H y here for symmetry with the real case.
Remark 7.18. A weighted inner product can be defined as in the real case by <x, y>_Q = x^H Q y, for arbitrary Q = Q^H > 0. The notion of Q-orthogonality can be similarly generalized to the complex case.
Definition 7.19. A vector space (V, F) endowed with a specific inner product is called an inner product space. If F = C, we call V a complex inner product space. If F = R, we call V a real inner product space.
Example 7.20.
1. Check that V = R^{n x n} with the inner product <A, B> = Tr A^T B is a real inner product space. Note that other choices are possible since by properties of the trace function, Tr A^T B = Tr B^T A = Tr A B^T = Tr B A^T.
2. Check that V = C^{n x n} with the inner product <A, B> = Tr A^H B is a complex inner product space. Again, other choices are possible.

Definition 7.21. Let V be an inner product space. For v in V, we define the norm (or length) of v by ||v|| = sqrt(<v, v>). This is called the norm induced by <., .>.

Example 7.22.
1. If V = R^n with the usual inner product, the induced norm is given by ||v|| = (sum_{i=1}^{n} v_i^2)^{1/2}.
2. If V = C^n with the usual inner product, the induced norm is given by ||v|| = (sum_{i=1}^{n} |v_i|^2)^{1/2}.

Theorem 7.23. Let P be an orthogonal projection on an inner product space V. Then ||Pv|| <= ||v|| for all v in V.
Proof: Since P is an orthogonal projection, P^2 = P = P^#. (Here, the notation P^# denotes the unique linear transformation that satisfies <Pu, v> = <u, P^# v> for all u, v in V. If this seems a little too abstract, consider V = R^n (or C^n), where P^# is simply the usual P^T (or P^H).) Hence <Pv, v> = <P^2 v, v> = <Pv, P^# v> = <Pv, Pv> = ||Pv||^2 >= 0. Now I - P is also a projection, so the above result applies and we get

    0 <= <(I - P)v, v> = <v, v> - <Pv, v> = ||v||^2 - ||Pv||^2,

from which the theorem follows.
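Theorem 7.23 says an orthogonal projection can never lengthen a vector; numerically (a sketch added here, not part of the original text):

    import numpy as np

    rng = np.random.default_rng(8)
    A = rng.standard_normal((6, 2))
    P = A @ np.linalg.pinv(A)                 # orthogonal projection onto R(A)
    v = rng.standard_normal(6)
    print(np.linalg.norm(P @ v) <= np.linalg.norm(v))   # True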
Definition 7.24. The norm induced on an inner product space by the "usual" inner product is called the natural norm.

In case V = C^n or V = R^n, the natural norm is also called the Euclidean norm. In the next section, other norms on these vector spaces are defined. A converse to the above procedure is also available. That is, given a norm defined by ||x|| = sqrt(<x, x>), an inner product can be defined via the following.
Theorem 7.25 (Polarization Identity).

1. For x, y in R^n, an inner product is defined by

    <x, y> = x^T y = (||x + y||^2 - ||x - y||^2) / 4 = (||x + y||^2 - ||x||^2 - ||y||^2) / 2.

2. For x, y in C^n, an inner product is defined by

    <x, y> = x^H y = (1/4) [ (||x + y||^2 - ||x - y||^2) + j (||x - jy||^2 - ||x + jy||^2) ],

where j = i = sqrt(-1).
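Both forms of the real polarization identity are immediate to check with the Euclidean norm; a two-line NumPy sketch (added here as an illustration, not part of the original text):

    import numpy as np

    rng = np.random.default_rng(6)
    x, y = rng.standard_normal(4), rng.standard_normal(4)
    print(np.isclose(x @ y, (np.linalg.norm(x + y)**2 - np.linalg.norm(x - y)**2) / 4))
    print(np.isclose(x @ y, (np.linalg.norm(x + y)**2 - np.linalg.norm(x)**2 - np.linalg.norm(y)**2) / 2))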
7.3 Vector Norms
Definition 7.26. Let (V, F) be a vector space. Then ||.|| : V -> R is a vector norm if it satisfies the following three properties:

1. ||x|| >= 0 for all x in V and ||x|| = 0 if and only if x = 0.
2. ||alpha x|| = |alpha| ||x|| for all x in V and for all alpha in F.
3. ||x + y|| <= ||x|| + ||y|| for all x, y in V.
(This is called the triangle inequality, as seen readily from the usual diagram illustrating the sum of two vectors in R^2.)

Remark 7.27. It is convenient in the remainder of this section to state results for complex-valued vectors. The specialization to the real case is obvious.

Definition 7.28. A vector space (V, F) is said to be a normed linear space if and only if there exists a vector norm ||.|| : V -> R satisfying the three conditions of Definition 7.26.

Example 7.29.
1. For x in C^n, the Holder norms, or p-norms, are defined by

    ||x||_p = (sum_{i=1}^{n} |x_i|^p)^{1/p},   1 <= p <= +infinity.

Special cases:
(a) ||x||_1 = sum_{i=1}^{n} |x_i| (the "Manhattan" norm).
(b) ||x||_2 = (sum_{i=1}^{n} |x_i|^2)^{1/2} = (x^H x)^{1/2} (the Euclidean norm).
(c) ||x||_infinity = max_{i} |x_i| = lim_{p -> +infinity} ||x||_p.
(The second equality is a theorem that requires proof.)

2. Some weighted p-norms:
(a) ||x||_{1,D} = sum_{i=1}^{n} d_i |x_i|, where d_i > 0.
(b) ||x||_{2,Q} = (x^T Q x)^{1/2}, where Q = Q^T > 0 (this norm is more commonly denoted ||.||_Q).

3. On the vector space (C[t_0, t_1], R), define the vector norm

    ||f|| = max_{t_0 <= t <= t_1} |f(t)|.

On the vector space ((C[t_0, t_1])^n, R), define the vector norm

    ||f||_infinity = max_{t_0 <= t <= t_1} ||f(t)||_infinity.

Theorem 7.30 (Holder Inequality). Let x, y in C^n. Then

    |x^H y| <= ||x||_p ||y||_q,   1/p + 1/q = 1.

A particular case of the Holder inequality is of special interest.

Theorem 7.31 (Cauchy-Bunyakovsky-Schwarz Inequality). Let x, y in C^n. Then

    |x^H y| <= ||x||_2 ||y||_2,

with equality if and only if x and y are linearly dependent.

Proof: Consider the matrix [x y] in C^{n x 2}. Since

    [x y]^H [x y] = [ x^H x   x^H y ]
                    [ y^H x   y^H y ]

is a nonnegative definite matrix, its determinant must be nonnegative. In other words, 0 <= (x^H x)(y^H y) - (x^H y)(y^H x). Since y^H x = \bar{x^H y}, we see immediately that |x^H y| <= ||x||_2 ||y||_2.

Note: This is not the classical algebraic proof of the Cauchy-Bunyakovsky-Schwarz (CBS) inequality (see, e.g., [20, p. 217]). However, it is particularly easy to remember.

Remark 7.32. The angle theta between two nonzero vectors x, y in C^n may be defined by cos(theta) = |x^H y| / (||x||_2 ||y||_2), 0 <= theta <= pi/2. The CBS inequality is thus equivalent to the statement |cos(theta)| <= 1.

Remark 7.33. Theorem 7.31 and Remark 7.32 are true for general inner product spaces.

Remark 7.34. The norm ||.||_2 is unitarily invariant, i.e., if U in C^{n x n} is unitary, then ||Ux||_2 = ||x||_2 (Proof: ||Ux||_2^2 = x^H U^H U x = x^H x = ||x||_2^2). However, ||.||_1 and ||.||_infinity are not unitarily invariant. Similar remarks apply to the unitary invariance of norms of real vectors under orthogonal transformation.

Remark 7.35. If x, y in C^n are orthogonal, then we have the Pythagorean Identity

    ||x +/- y||_2^2 = ||x||_2^2 + ||y||_2^2,

the proof of which follows easily from ||z||_2^2 = z^H z.

Theorem 7.36. All norms on C^n are equivalent; i.e., there exist constants c_1, c_2 (possibly depending on n) such that

    c_1 ||x||_alpha <= ||x||_beta <= c_2 ||x||_alpha  for all x in C^n.

Example 7.37. For x in C^n, the following inequalities are all tight bounds; i.e., there exist vectors x for which equality holds:

    ||x||_1 <= sqrt(n) ||x||_2,        ||x||_1 <= n ||x||_infinity,
    ||x||_2 <= ||x||_1,                ||x||_2 <= sqrt(n) ||x||_infinity,
    ||x||_infinity <= ||x||_1,         ||x||_infinity <= ||x||_2.

Finally, we conclude this section with a theorem about convergence of vectors. Convergence of a sequence of vectors to some limit vector can be converted into a statement about convergence of real numbers, i.e., convergence in terms of vector norms.

Theorem 7.38. Let ||.|| be a vector norm and suppose v, v^(1), v^(2), ... are in C^n. Then

    lim_{k -> +infinity} v^(k) = v  if and only if  lim_{k -> +infinity} ||v^(k) - v|| = 0.

7.4 Matrix Norms

In this section we introduce the concept of matrix norm. As with vectors, the motivation for using matrix norms is to have a notion of either the size of or the nearness of matrices. The former notion is useful for perturbation analysis, while the latter is needed to make sense of "convergence" of matrices. Attention is confined to the vector space (R^{m x n}, R) since that is what arises in the majority of applications. Extension to the complex case is straightforward and essentially obvious.

Definition 7.39. ||.|| : R^{m x n} -> R is a matrix norm if it satisfies the following three properties:

1. ||A|| >= 0 for all A in R^{m x n} and ||A|| = 0 if and only if A = 0.
2. ||alpha A|| = |alpha| ||A|| for all A in R^{m x n} and for all alpha in R.
3. ||A + B|| <= ||A|| + ||B|| for all A, B in R^{m x n}.
(As with vectors, this is called the triangle inequality.)
The following three special cases are important because they are "computable. Inner Product Spaces.42. to estimate the size of a matrix product A B in terms of the sizes of A and B individually.60 max _P IIAxll = max Ilxli p IIxllp=1 IIAxll p . Example 7. e R mx ". (AA ')). Example 7. Example 7.44.  . Let A E lR.60 Chapter 7. and Norms Example 7..44. where r mxn = rank(A).40. defined by IIAIIF ~ (t. For example. 112' The norm II • 115.p = (at' + . is a norm.mxn IIAII p. tTL T Note: IIA+llz = l/ar(A).jj laij.."  A\\ = ^ \ai} . I. Inner Product Spaces.._ Then "mixed" norms can also be defined by e lR. IIAII P t altA)) 1 ~ (T. The concept of a matrix norm alone is not altogether useful since it does not allow us to estimate the size of a matrix product AB in terms of the sizes of A and B individually. and Norms Chapter 7. Schatten/7norms IIAlls.mxn. (t laUI). J=1 3. is a norm. I Some special cases of Schatten /?norms are equal to norms defined previously. 5>1 is often called the trace norm. Let A E lR.mxn. . The "maximum column sum" norm is 2. ^wncic = rank(A)).  5 2 = II IIF and  • 5i00 = II . The spectral norm is 3.mxn. IIAlioo = max rE!!l. pnorms previously. The "maximum row sum" norm is 2.2 =  .<110#0 IIxllq Example 7. Projections." Each is a "computable. + a!)"".43.00 =  • 2. IIAII2 = Amax(A A) = A~ax(AA ) = a1(A)." IIAliss = Li. ai. Let A E K m x ". The Schattenpnorms are defined by E lR. Then the matrix pnorms are defined by A e Rmxn. Example 7.41. Projections.42. (A' A)) 1 ~ (T..) I ~ (t. The norm  . Example 7. 1.40. \\F and 11'115. I.q = max IIAxil p 11.43. matrix = Ilxllp. The "matrix analogue of the vector 1norm.1 is often called the trace norm. Then the Frobenius norm (or matrix Euclidean norm) is 7. Let A E R . (where r = laiiK^/i." theorem and requires a proof. Example 7. 11·115. The "matrix analogue of the vector Inorm.
. also caUedoperator norms. B e Rnxk. For such subordinate norms.. Theorem 7. i.jii IIAII2. e. Since IIABxl1 ::s Afljc ::s IIAIIIIBllllxll.60 \^ • Useful Results The following miscellaneous results about matrix norms are collected for future reference. \\v consistent with it. II F' ThenA^ < A II Filx 112.e. For example. we clearly have Ajc ::s A1jt. Definition 7.7.47. II such that AF is given by max^o ".jii IIAIIF.j is a matrix norm but it is not consistent.jii IIAII I .46. IIAIIF ::s .but there does  is consistent with F. atornorms. e. IIAIII ::s . but there does consider II . Then the norms II • \\a II· Ilfl' and .ooIlBIII.60 IIx II Ilxll=1 IIAxll p . II".. there exists a vector norm \\ . 1.e. HAjcJI^ ::s \\A\\m Ilxli v. IIAII2 ::s .oo J1. Notice that this difficulty did not arise for vectors. \\m is a consistent matrix norm. )).. IIABIII.60 Ilx i.g. Matrix Norms 61 61 Notice that this difficulty did not arise for vectors. i. Then the norms \\ . II· II F and 1. reader The interested reader is invited to prove each of them as an exercise.1100 = max laijl x. •1122is consistent with II .47. II A 1100 ::s n IIAII I . We thus need the following definition.4. \\ are Definition 7. wec1earlyhave IIAxll < IIAllllxll· Since Afijc < IIAlIllBxll < Afljt.jii IIAII I. Let A e Rmxn.46. IIAxll1 = max .e. Then :]. also called oper(or. 2. not exist a vector norm II •  such that IIAIIF is given by max x . more generally..e. take A = B = \ \ Afl li00 = 2whileA li00 B 1>00 = 1.jii IIAlioo' . e R" x ".~~i'.jii. Then The p norms are examples of matrix norms that are subordinate to (or induced by) The pnorms are examples of matrix norms that are subordinate to (or induced by) a vector norm. i. IIAIIF ::s. the IIIn II F = . Example 7. •II p for all p are consistent matrix norms.e. inner products or outer products of vectors. The following miscellaneous results about matrix norms are collected for future reference. For A following inequalities are all tight. consider  • \\F. For example. 2. more generally. a vector norm. For example. if II A B II < II A 1111 fi whenever the matrix product is defined.. •II F.jii II A IIF. Let A E ]Rmxn.jii IIAlb IIAIIF ::s .45. . Not every consistent matrix norm is subordinate to a vector norm. IIAII2 ::s.jii IIAlloo. there exists a vector norm II • IIv Theorem 7. IIAlioo ::s . although there are analogues for.. \\ • \\p. exercise. IIAxl1 IIAII = max .. If \\ • 11m is a consistent matrix norm..45. take A = B = [: is a matrix norm but it is not consistent. subordinate to the vector norm. IIAxliv < IIAlim \\x\\v' Not every consistent matrix norm is subordinate to a vector norm. If II . 1. there exist matrices A for i.so II .e. Example 7. IIAllp. q For such subordmate norms. inner products or outer products of vectors.oo 2 while IIAIII. consistent with it. There exists a vector x* such that IIAx*11 = IIAllllx*11 if the matrix norm is subordinate to the vector norm. IIAlioo ::s .60 . A A 2.jii IIAlb IIAIII ::s n IIAlloo.. so not exist a vector norm  . l.= max IIAxl1 x. 2. For example. it follows that all subordinate norms are consistent. which equality holds: which equality holds: IIAIII ::s . Theorem 7.::S \\A\\p\\B\\y A matrix norm \\ • is said to be consistent if \\AB\\ ::s  A   B II whenever the matrix product is defined.48.q = maxx. A matrix norm 11·11\\is said to be consistent mutuallyconsistentifIlABII. IIAII2::S IIAIIF.Then II Ax 1122 ::s II AFjc2.. The "mixed" norm "mixed" norm II· 11 100 ..and II \\ •lIy y are mutually consistent if \\ A B \\ a < IIAllfllIBlly.  • /7and II . 
We thus need the following definition. although there are analogues for. Matrix Norms 7..48. i. p for all p are consistent matrix norms.. II In II p = 1 for all p. i.g. There exists a vector x* such that Ajt* = A jc* if the matrix norm is Theorem 7. while E ]Rnxn. A = max^o IIxll. B E ]Rnxk. it follows that all subordinate norms are consistent.4. 11^4^11 P (or.
. prove directly that V22Vl is an I — +A V V/ is an orthogonal projection. Then k~+oo lim A (k) = A if and only if k~+oo lim IIA (k)  A II = o. Prove that E"xn with the inner product (A.y + 2z = O. [2 3 4]r R3 spanned by the plane 3. Prove that the A e Wnxn orthogonal projection onto the space spanned by these column vectors is given by the P matrix P = A(ATTA)~}AT. orthogonal projection. Projections. but not necessarily The norms  • \\F and  • 2 (as well as all the Schatten pnorms..An}.... Suppose that a matrix A E IR mxn has linearly independent columns. Chapter 7. (MZa or F. > . IIQAZlia = A fora = 2 or F. Prove that / . Find the (orthogonal) projection of the vector [2 3 4f onto the subspace of 1R3 5.. If P projection. Definition: Let A e Rnxn and denote its set of eigenvalues (not necessarily distinct) 8.Q — Q must be an orthogonal matrix. where V2 is defined as in Theorem 5. . Theorem 7. The norms II .] 4. B) = Tr ATB is a real inner product IR n x" AT B (A. A(A A) 1 AT 5. but not necessarily other pnorms) are unitarily invariant. „ } The spectral radius of A is the scalar p(A) = max IA.62 62 3. p+ = P. 112 (as well as all the Schatten /?norms. spanned by the plane 3x . and Norms max laijl :::: IIAII2 :::: ~ max laijl.  • 2 and  • \\F 8.I. Inner Product Spaces. IIAllaa fora matrices Q E IR Convergence Convergence The following theorem uses matrix norms to convert a statement about convergence of a sequence of matrices into a statement about the convergence of an associated sequence of of scalars.l. where ¥2 is defined as in Theorem 5. B) = space. Show that the matrix norms II .e. Then 7. 3. for all A E IRmxn and for all orthogonal unitarily invariant. Definition: Let A E IRnxn and denote its set of eigenvalues (not necessarily distinct) by P. A (1) .1. Prove that P . scalars.1. i . IIF are unitarily invariant. 3. EXERCISES EXERCISES 1.e..49. .A+A is an orthogonal projection.. \\ \\bea Rmx". The spectral radius of A is the scalar by {Ai . A(I). Suppose P and Q are orthogonal projections and P + Q = I.c — v + = 0. Inner Product Spaces. l.. must be an orthogonal matrix. EeIRmxn.49. and Norms Chapter 7.. For A eRmxa . For A E IR mxn . space. Let II ·11 be a matrix norm and suppose A. i. 112 and II . . prove that P+ = P.. A(2). . 7.. Projections.. matrices Q zR and e M" ". 6. Also.. A (2) . IIF and II .] l. 1. i. e Rmx" mxm x mxm and Z E IRnxn . 2. If P is an orthogonal projection. 4.
y e R" are nonzero. IIAlb IIAlloo.) that  M Up = for all/?. A2. . H A I I A2. appropriate. Aj. Determine IIAIIF' IIAII Ilt.Exercises Exercises 63 63 Let Let A=[~ 14 0 12 5 ~]. Determine IIAIIF' IIAIII> IIAlb and Aoo in terms of \\x\\a and/or \\y\\p. Let A = xy . (An n x n matrix. Let 9. or (Xl as appropriate. where ex and {3 take the value 1. H A H ^ and peA). columns and rows as well as main diagonal and antidiagonal sum to s = n (n 2 l)/2.2. is called a "magic square" matrix. and peA). Let A = xyT. 2. HA^. Determine AF. and p(A). (An n x n matrix. y E IR n are nonzero. If M is a magic square matrix. Determine AF. \\A\\ A2. IIAlb IIAlloo. or oo as and II A 1100 in terms of IIxlla and/or IlylljJ. it can be proved that IIMllp = ss for all p.. Determine IIAIIF' IIAII d . all of whose Determine AF.) T 10.. all of whose columns and rows as well as main diagonal and antidiagonal sum to s = n(n2 + 1) /2. 10. Let A=[~4 9 2 ~ ~]. where both x. 9. where a and ft take the value 1. where both x. and p(A).
This page intentionally left blank This page intentionally left blank .
1 The Linear Least Squares Problem The Linear Least Squares Problem Problem: Suppose A E Rmx" with m 2: nand b E jRm is aagiven vector. (8.b 112) assumes its minimum value if and only if II Ax —b\\2) assumes its minimum value if and only if (8. IIrll~ = lib .. so these two vectors are orthogonal. see Section 8.Ax) is clearly in 'R(A).PR(A))b = PR(A).35). A vector x E X if and only if x is of the form 2.2. Hence.1 8.PR(A)b) + (PR(A)b  Ax). AT — A T Ax = AT b latter form is commonly known as the normal equations. x e X if and only if latter form is commonly known as the normal equations.Chapter 8 Chapter 8 Linear Least Squares Linear Least Squares Problems Problems 8. whereyEjRnisarbitrary. A vector x X if and onlv if x is of the x=A+b+(IA+A)y. i... The linear least Problem: Suppose A e jRmxn with > n and b <= Rm is given vector.bll~ (and hence p(x) = \\Ax . For further details. while (b . IIAx .1) To see why this must be so.e.bll 2 is minimized}.PR(A)bll~ + IIPR(A)b  Axll~ from the Pythagorean identity (Remark 7. write the residual r in the form To see why this must be so.e. Solution: The set X has a number of easily verified properties: The set X has a number of easily verified properties: 1. A vector x E X if and only if ATrr = 0.2) 65 . 2. Thus.2. (PR(A)b . while Now. Thus. where r = b . The equations ATrr = 0 can be rewritten in the form A TAx = ATb and the x. see Section 8.Axll~ = lib . i.PR(A)b) = (I . write the residual in the form r = (b .35). (Pn(A)b — AJC) is clearly in 7£(A). Hence. Now.x — b\\\ (and hence p ( x ) = from the Pythagorean identity (Remark 7. x E X if and only if x is a solution of the normal equations.Ax is the residual associated 1. is a solution of the normal equations. For further details. vector x e X if and only if AT where b — Ax is the residual associated with x. A.b E 'R(A)L so these two vectors are orthogonal. The linear least squares problem consists of finding an element of the set squares problem consists of finding an element of the set x = {x E jRn : p(x) = IIAx .
Let 8 e [0. If the existence condition happens to be satisfied. The minimum value of p ((x) is then clearly equal to where y E ]R.A+ A)z in X. AA+)bI1 2 the last inequality following by Theorem 7.2) are of the form x = A+ AA+b + (I  A+ A)y =A+b+(IA+A)y. i. Notice that solutions of the linear least squares problem look exactly the same as solutions of the linear system AX = B. where Y E R" xfc is arbitrary.2. By Theorem 6.8)xz2 = A+b ++ (I A+ A)(8y ++ (1 8)z) is clearly in X.23. which follows since the two vectors are orthogonal. The only difference is that in the case of linear least squares solutions. BE ]R. x* minimizes the residual p ( x ) and is the vector of minimum 2norm that does so. The unique solution of minimum 2norm or Fnorm is X = A+B. This follows immediately from convexity or directly from the fact that all x e X are of the form (8. x* minimizes the residual p(x) that solves this "double minimization" problem. In fact.e. There is a unique solution to the least squares problem. if and only if A + A lor. The minimum value of p x ) is then clearly equal to lib .mxk.3. Remark 8. Let 6 E [0. there is no "existence condition" such as R(B) S. X = A+B. To see why. the last inequality following by Theorem 7. Linear Least Squares Problems and this equation always has a solution since AA+b e 7£(A). of linear least squares solutions. Then the convex combination and Xz = A+b (I . Let A E E mx " and B € Rmxk.A+A)(Oy (1 . and only if A+A = I or. i. all and this equation always has a solution since AA+b E R(A). To see why.A+A)y and *2 = A+b + (I — A+A)z in X. Then the convex combination 8x. X is convex. 3. we can generalize the linear least squares Just as for the solution of linear equations. then equality holds and the least squares If the existence condition happens to be satisfied. has a unique element x" of minimal2norm. This follows immediately from and is the vector of minimum 2norm that does so. where y e W is arbitrary.3.66 Chapter 8. all solutions of (8. The general solution to e ]R. Just as for the solution of linear equations.1) and which follows since the two vectors are orthogonal. consider two arbitrary vectors jci = A+b + (I — A + A) y (I .nxk is arbitrary.PR(A)bll z = ~ 11(1 Ilbll z.e. Linear Least Squares Problems Chapter 8. 5.mxn XElR Plxk min IIAX  Bib is of the form is of the form X=A+B+(IA+A)Y. 7£(A).e. there is no "existence condition" such as K(B) c R(A). X = {x"} = {A+b}. In fact.e. problem to the matrix case.. then equality holds and the least squares . 0*i (1 #)* = A+b (I . i.2) are of the form solutions of (8. if 5. i. if and only if rank(A) n. X is convex. x* = A+b is the unique vector that solves this "double minimization" problem.. The unique solution of minimum 2norm or Fnorm is where Y € ]R. By Theorem 6. consider two arbitrary vectors Xl = A + b 3.1]. + (1 . X. equivalently. Notice that solutions of the linear least squares problem look exactly the Remark 8..2..23.1) and convexity or directly from the fact that all x E X are of the form (8.. X has a unique element x* of minimal 2norm.. The only difference is that in the case same as solutions of the linear system AX = B. we can generalize the linear least squares problem to the matrix case.n is arbitrary. x" = + b is the unique vector 4. The Theorem 8. There is a unique solution to the least squares problem.1. X = {x*} = {A+b}. if and only if rank (A) = n. 1]. equivalently.0)z) is clearly in 4.
8.3 Linear Regression and Other Linear Least Squares Problems 8.3 Linear Regression and Other Linear Least Squares Problems
67
O. X = +B residual is 0. Of all solutions that give a residual of 0, the unique solution X = A+B has minimum 2norm or F norm. Fnorm. Remark 8.3. If we take B = 1m in Theorem 8.1, then X = A+ can be interpreted as Im in Theorem 8.1, then Remark 8.3. If we take B A+ can be interpreted as saying that the MoorePenrose pseudoinverse of A is the best (in the matrix 2norm sense) A AX matrix such that AX approximates the identity. Remark 8.4. Many other interesting and useful approximation results are available for the F norm). matrix 2norm (and Fnorm). One such is the following. Let A E M™ x " with SVD following. e lR~xn
A
= U~VT = LOiUiV!.
i=l
Then a best rank k approximation to A for 1< f c < r r,i . e . , a solution to A k l :s k :s , i.e., a
MEJRZ'xn
min IIA  MIi2,
is given by is given by
k
Mk =
LOiUiV!.
i=1
The special case in which m = n and k = n  1 gives a nearest singular matrix to A E A e = nand = —
lR~ xn .
8.2 8.2
Geometric Solution Geometric Solution
Looking at the schematic provided in Figure 8.1, it is apparent that minimizing IIAx —bll 2 2  Ax b\\ x e W1 p — Ax is equivalent to finding the vector x E lRn for which p = Ax is closest to b (in the Euclidean b Ay norm sense). Clearly, r = b  Ax must be orthogonal to R(A). Thus, if Ay is an arbitrary r b — Ax 7£(A). R(A) vector in 7£(A) (i.e., y is arbitrary), we must have y
0= (Ay)T (b  Ax) =yTAT(bAx) = yT (ATb _ AT Ax).
Since y is arbitrary, we must have ATb — ATAx = 0 or A r A;c = AT b. AT b  AT Ax AT Ax = ATb. T Special case: If A is full (column) rank, then x = (AT A) ATb. A = (A A)l ATb.
8.3 8.3
8.3.1 8.3.1
Linear Regression and Other Linear Least Squares Linear Regression and Other Linear Least Squares Problems Problems
Example: Linear regression
Suppose we have m measurements (ll, YI), ... ,, (trn,,ym) for which we hypothesize a linear (t\,y\), . . . (tm Ym) (affine) relationship (8.3) y = at + f3
68
Chapter 8. Linear Least Squares Problems Chapter 8. Linear Least Squares Problems
b
r
p=Ax
Ay E R(A)
Figure S.l. Projection of b on K(A). Figure 8.1. Projection of b on R(A).
for certain constants a. and {3. One way to solve this problem is to find the line that best fits for certain constants a and ft. One way to solve this problem is to find the line that best fits the data in the least squares sense; i.e., with the model (8.3), we have the data in the least squares sense; i.e., with the model (8.3), we have
YI
Y2
= all + {3 + 81 ,
= al2 + {3 + 82
where &\,..., 8m are "errors" and we wish to minimize 8\ + • • 8;. Geometrically, we where 81 , ... , 8m are "errors" and we wish to minimize 8? + ...• + 8^ Geometrically, we are trying to find the best line that minimizes the (sum of squares of the) distances from the are trying to find the best line that minimizes the (sum of squares of the) distances from the given data points. See, for example, Figure 8.2. given data points. See, for example, Figure 8.2.
y
Figure 8.2. Simple linear regression. Figure 8.2. Simple linear regression.
Note that distances are measured in the vertical sense from the points to [he line (as Note that distances are measured in the venical sense from the point!; to the line (a!; indicated, for example, for the point (t\, y\}}. However, other criteria arc possible. For exindicated. for example. for the point (tl. YIn. However. other criteria nrc po~~iblc. For cxample, one could measure the distances in the horizontal sense, or the perpendicular distance ample, one could measure the distances in the horizontal sense, or the perpendiculnr distance from the points to the line could be used. The latter is called from the points to the line could be used. The latter is called total least squares. Instead squares. Instead of 2norms, one could also use 1norms or oonorms. The latter two are computationally of 2norms, one could also use Inorms or oonorms. The latter two are computationally
8.3. Linear Regression and Other Linear Least Squares Problems 8.3. Linear Regression and Other Linear Least Squares Problems
69
much more difficult to handle, and thus we present only the more tractable 2norm case in difficult text that follows. follows. The m "error equations" can be written in matrix form as ra
Y = Ax +0,
where
We then want to solve the problem
minoT 0 = min (Ax  y)T (Ax  y)
x
or, equivalently, min lIoll~ = min II Ax  YII~.
x
(8.4)
AT Solution: x = [~] is a solution of the normal equations AT Ax Solution: x — [^1 is a solution of the normal equations ATAx = ATyy where, for the special form of the matrices above, we have special form of the matrices above, we have
and and
AT Y = [ Li ti Yi
LiYi
J.
The solution for the parameters a and f3 can then be written ft
8.3.2
Other least squares problems
y = f(t) =
Cl0!(0
(8.3) of the form Suppose the hypothesized model is not the linear equation (S.3) but rather is of the form + • • • 4 cn<t>n(t). (8.5) (8.5)
In (8.5) the ¢i(t) are given (basis) functions and the Ci; are constants to be determined to </>,(0 functions c minimize the least squares error. The matrix problem is still (S.4), where we now have minimize the least squares error. The matrix problem is still (8.4), where we now have
An important special case of (8.5) is least squares polynomial approximation, which corresponds to choosing ¢i (t) = t t'~1,, i i;Ee!!, although this choice can lead to computational 0,• (?) = i  l n, although this choice can lead to computational
we assume that A has an SVD given by A U\SVf via the SVD.4 8. The basis functions coefficients c. respectively. For example. quantity above is clearly minimized by taking z\ = S'c. Z2 while the minimum value of \\Ax — b II ~ is l^llr while the minimum value of II Ax . the subvectors can have different lengths). c. Better numerical methods are based on algorithms that AT work directly and solely on A itself rather than AT A. Linear Least Squares Problems Chapter 8. etc. . we assume that A has an SVD given by A = UT.b\\^ is II czll ~. it is shown that solution [4]. [7]. Better numerical methods are based on algorithms that behavior in practice (and it does). piecewise polynomial functions. of linear least squares problems via the normal equations can be a very poor numerical method in finiteprecision arithmetic. it can be expected to exhibit such poor numerical behavior in practice (and it does). Then c. e C2 / then taking logarithms yields the equation log y = log c.6) via the SVD. the subvectors can have different lengths). The key feature in (8. VT = U. (8. = log c" and C2 = cj_ results in a standard linear least squares y — log y. Specifically. Ib is unitarily invariant =11~zcll~ wherez=VTx. z. splines.70 70 Chapter 8. The former based on SVD and QR (orthogonalupper triangular) factorization.. if the fitting function is of the form can be converted into a linear problem. In this section we investigate solution of the linear least squares problem min II Ax x b11 2 . The former is much more expensive but is generally more reliable and offers considerable theoretical offers insight. A E IRmxn . c\ logci. 8. since II . etc.can be arbitrarily nonlinear. as in Theorem 5. We now note that IIAx  bll~ = IIU~VT x =  bll~ II ~ VT X  U T bll. then II v II ~ = II viii ~ + II v211 ~ (note that orthogonality is not what is used here. Specifically. + c2f. This that orthogonality is not what is used here.c=UTb = II [~ ~] [ ~~ ] . Linear Least Squares Problems difficulties because of numerical ill conditioning for large n.. then u^ = i>i \\\ \\vi\\\ (note The last equality follows from the fact that if v = [£ ]. Since the standard Kalman filter essentially amounts method in finiteprecision arithmetic.1.g. fact. This explains why it is convenient to work above with the square of the norm rather than the concerned.5) is that the coefficients Ci appear linearly. bE IR m . insight. Two basic classes of algorithms are A itself S VD and QR (orthogonalupper triangular) factorization. S~lc\.. then taking logarithms yields the equation logy = logci + cjt. [11]. the two are equivalent. if the fitting function is of the form y t) Y = ff( (t) = c\eC2i. c. As far as the minimization is concerned. [4]. In fact. Then GI defining y = logy. problem. C2 problem. Since the standard Kalman filter essentially amounts to sequential updating of normal equations. respectively.4 Least Squares and Singular Value Decomposition Least Squares and Singular Value Decomposition In the numerical linear algebra literature (e.1. appear functions </>. are based on orthogonal polynomials. [23]). Numerically better approaches ill difficulties n. the last equivalent. ] II: = II [ The last equality follows from the fact that if v [~~]. The subvector z2 is arbitrary.[ ~~ ] II: sz~~ c. norm. For example.SVr U~VT Theorem 5. Sometimes a problem in which the Ci'S appear nonlinearly nonlinearly can be converted into a linear problem. arbitrary. [7]. 's ¢i.
Thus..5. to reduce A in the following way. an important special case of the linear least squares problem is the socalled fullrank problem.e. A finite sequence of simple orthogonal row transformations (of Householder or Givens type) can be performed on A to reduce it row transformations (of Householder or Givens type) can be performed on A to reduce it to triangular form. and there is thus "no V2 part" to the solution..e.1). Least Squares and QR Factorization Now transform back to the original coordinates: Now transform back to the original coordinates: x = Vz 71 71 = [VI V2 1[ ~~ ] = VIZ I + V2Z2 = = + V2Z2 vlsIufb + V2 2. The minimum value of the least squares residual is The minimum value of the least squares residual is and we clearly have that and we clearly have that minimum least squares residual is 0 4=> b is orthogonal to all vectors in U2 minimum least squares residual is 0 {::=:} b is orthogonal to all vectors in U2 {::=:} •<=^ {::=:} b is orthogonal to all vectors in R(A)l. where y e ffi. V2z is an arbitrary vector in R(V2 = N(A).A + A)_y. with appropriate numerical enhancements. Thus. we again look at the solution of the linear least squares problem (8..e. It is then possible. where y E Rm is arbitrary. via a sequence of socalled Householder or Givens rank. This follows easily since Another expression for the minimum residual is II (I . we add the simplifying assumption that A has full column rank. via a sequence of socalled Householder or Givens transformations. i. In this case the SVD of A is given by A A = V:EVTT = [VI{ Vzl[g]Vr. (8. It is then possible. This agrees.~xn. with (8.AA+)bll~ . This follows easily since (7 . x has been written in the form x = A+b + (I . A E ffi. This matrix factorization is much cheaper to compute time in terms of the QR factorization. A E ffi.m is arbitrary. can be quite reliable. we have QT € ffi.6) but this In this section. = +b + (/ — A + A) y. Finally. we add the simplifying assumption that A has full column To simplify the exposition.~xn. i. i. In this case the SVD of A is given by socalled fullrank problem. This matrix factorization is much cheaper to compute than an SVD and. Z VISici The last equality follows from The last equality follows from c = UTb = [ ~ f: ]= [ ~~ l Note that since Z2 is arbitrary.5 Least Squares and QR Factorization Least Squares and QR Factorization In this section.11U2U!b"~ = bTUZV!V UJb = bTVZV!b = IIV!bll~. UZV = [U t/2][o]^i r > and there is thus "no V2 part" to the solution. If we label the product of such orthogonal row transformations as the to triangular form. V2 Z 2 is an arbitrary vector in 7Z(V2)) = A/"(A). 11(1.AA+)b\\22 = \\U2Ufb\\l = bTU2U^U22V!b = bTU2U*b = \\U?b\\22. Another expression for the minimum residual is  (/ — AA + )b 2 .S. A e R™ X M . we again look at the solution of the linear least squares problem (8. To simplify the exposition. than an SVD and. i. can be quite reliable. is orthogonal to all vectors in 7l(A}L b E R(A). x has Note that since 12 is arbitrary. an important special case of the linear least squares problem is the Finally.5 8. This agrees. with appropriate numerical enhancements.e. A e 1R™ X ". A finite sequence of simple orthogonal transformations. of course.AA+)bllz.8. 8. Least Squares and QR Factorization B. of course. If we label the product of such orthogonal row transformations as the orthogonal matrix QT E R m x m .. with (8.1).6) but this time in terms of the QR factorization.mxm. to reduce A in the following way.7) .
i. Now note that Now note that IIAx  bll~ = IIQ T Ax = II [ QTbll~ since II . 2. (8.. 112 is unitarily invariant ~ ] x . and qz are two orthonormal vectors and b is a fixed vector. we see that A=Q[~J = [QI = QIR. data. (a) Find the best (in the 2norm sense) line of the form y = ax + fJ that fits this (a) Find the best (in the 2norm sense) line of the form y = ax + ft that fits this data.Cl and the minimum residual The last quantity above is clearly minimized by taking x = R lIc\ and the minimum residual is Ilczllz.8).. where QI E ffi.m IX(m ~" ) .Show that r is orthogonal to both^i and q2. Consider the following set of measurements (Xi... i.1Q\b = A+b and the minimum residual is II Qr bllz' is \\C2\\2. Both Q\ and <2 have orthonormal columns. (3. Equivalently. or (8. we see that in (8. (8.+ A)y and A+b are orthogonal vectors. Suppose q. Multiplying through by Q in (8. both ql and q2 . Note that (8. Linear Least Squares Problems Chapter 8.mxn and where R e M£ x " is upper triangular.8) ~ ] (8.e. Suppose qi and q2 are two orthonormal vectors and b is a fixed vector..e.7).2). Consider the following set of measurements (*. 3.flq2 Show that r is orthogonal to (b) Let r denote the "error vector" b — ctq\ — {3qz.A+A)y and A +b 1. Now write Q = [Q\ Qz].3). data.8). yt): (1. we have x = R. Both Q I and Qz2 have orthonormal columns. b E Em.~xn is upper triangular.7). sense). n • (a) Find the optimal linear combination aq^ + (3q2 that is closest to b (in the 2norm (a) Find the optimallinear combination aql + fiq2 that is closest to b (in the 2norm sense).7). 3. and any y E R". (b) Let r denote the "error vector" b . n . Now write Q = [QI Q2].I of the columns of yields the orthonormal columns of Q\. we have = R~l Qf b = +b and the minimum residual is IIC?^!^ EXERCISES EXERCISES 1.9) Any of (8.Equivalently.1).aql . xn.72 Chapter 8. m and any e ffi. all in R". by writing (8. or (8.[ ~~ ] If:.9) is essentially what is accomplished by the GramSchmidt process. are orthogonal vectors.9) are variously referred to as QR factorizations of A. For € ffi. Multiplying through by Q Q2 E ffi. Yi): 2.9) is essentially what is accomplished by the GramSchmidt process. all in ffi. Note that Any of (8..7). where Q\ e R mx " and Qz € K" x(mn). check directly that (I . check directly that (I . Qz] [ (8. For A E Wmxn . Linear Least Squares Problems where E ffi. R~l) ) of the columns of A yields the orthonormal columns of QI. (b) Find the best (in the 2norm sense) line of the form jc = ay + (3 that fits this (b) Find the best (in the 2norm sense) line of the form x = ay fJ that fits this data. by writing AR~l1 = Q\ we see that a "triangular" linear combination (given by the coefficients of ARQI we see that a "triangular" linear combination (given by the coefficients of R. (2.9) are variously referred to as QR factorizations of A. b e ffi. The last quantity above is clearly minimized by taking x = R.
(a) Consider a perturbation E\ = [0 ~] of A. where Q is orthogonal.Exercises Exercises 73 4. Use the four Penrose conditions and the fact that Q\ has orthonormal columns to 6. Find all solutions of the linear least squares problem min II Ax .yII2 as 8 approaches O? (b) Now consider the perturbation E2 = [~ (b) Now consider the perturbation EI = \0 s~\ of A.xn can be factored in the form (8. verify that if A E ~. not necessarily nonsingular. where 8 is a small positive number.bl1 2 when A = [~ ~ ] and b = [ !1 x The solution is (a) Consider a perturbation EI = [~ pi of A.IlQf. where 8 is a small positive number. and suppose A = QR. What happens to jt* . A+ R+QT . Let A e R"x". Solve the perturbed problem positive number. Let A E ~nxn.9).bib z n where A2 = A + E2. Find all solutions of the linear least squares problem 4. then A+ == R. then A+ R~ Q\.• What happens to IIx* . where again 8 is a small positive number. 7. Consider the problem of finding the minimum 2norm solution of the linear least 5.bll 2 x when A = [ ~ 5. where AI = A + E I . Solve the perturbed version of the above problem. not necessarily nonsingular. What happens to IIx* — y 2 as 8 approaches 0? where AI = A + E\.z2 as 8 approaches O? where A2 — A E 2 What happens to \\x* — zll2 as 8 approaches 0? 6. Prove that A+ = R+ QT. Solve the perturbed version of the above problem. where again 8 is a small of A. Solve the perturbed problem min II A 2 z .9). and suppose A where is 1. Use the four Penrose conditions and the fact that QI has orthonormal columns to verify that if A e R™ x "can be factored in the form (8.:. of 2norm solution of least «rmarp« problem squares nrr»h1<=>m min II Ax .
This page intentionally left blank This page intentionally left blank .
we see immediately that XH is a left eigenvector of A H associated with A./) is called the characteristic polynomial of A. Theorem 9. the Fundamental Theorem of Algebra says that 7t (X) is a polynomial of degree n.4. Let A = [~g ~g]. for example. One oftenused scaling for an eigenvector is One oftenused scaling for an eigenvector is so is ax [ay] for any nonzero scalar a E a = 1/ IIx II so that the scaled eigenvector has nonn 1. example. a nonzero vector y e C" is a left eigenvector corresponding to an eigenvalue a if Mif (9. then n(A) is a polynomial of degree n.2) By taking Hennitian transposes in (9. then It can be proved from elementary properties of detenninants that if A e enxn .1) Similarly. called an eigenvalue.— 3. verify that n(A) = A2 2A .3 (CayleyHamilton). This results in at most a change of sign and. such that Ax = AX. (Note that the characteristic polynomial can also be defined as det(Al . as a matter of convenience. n(A) = 0.1).4. The polynomial n (A) = det(A—A. Note that if x [y] is a right [left] eigenvector of A. as a matter of convenience.1 9. Then n(k) = X2 + 2A. It is an easy exercise to 2 verify that n(A) = A + 2A . the Fundamental Theorem of Algebra says that x 75 .) = det (A . It can be The following classical theorem can be very useful in hand calculation. (Note that the characteristic polynomial can also be defined as det(A.Chapter 9 Chapter 9 Eigenvalues and Eigenvalues and Eigenvectors Eigenvectors 9.1./ — A).) throughout the text. we use both forms throughout the text.} The following classical theorem can be very useful in hand calculation. norm used for such scaling.2. It can be proved from elementary properties of determinants that if A E C" ". such that a scalar A E e. Thus. Let A [~ ~].Al) is called the characteristic polynomial Definition 9. A nonzero vector x e C" is a right eigenvector of A e Cnxn if there exists a scalar A. The 2nonn is the most common a — \j'.1. called an eigenvalue.1). for example. n(A) = O. then vector of AH associated with I. Theorem 9. The polynomialn (A. For any A E e nxn . Thus.A). e C.3 (CayleyHamilton). A nonzero vector x E en is a right eigenvector of A E e nxn if there exists Definition 9. [3]).t so that the scaled eigenvector has norm 1.. The 2norm is the most common nonn used for such scaling. Definition 9. [21D or directly using elementary properties of inverses and determinants (see. for proved easily from the Jordan canonical fonn to be discussed in the text to follow (see. [21]) or directly using elementary properties of inverses and determinants (see.31 O. Then n(A) A2 + 2A 3.31 = 0. we see immediately that x H is a left eigenBy taking Hermitian transposes in (9. [3]). we use both forms results in at most a change of sign and. This of A.1 Fundamental Definitions and Properties Fundamental Definitions and Properties Definition 9.2. then so is ax [ay] for any nonzero scalar a E C. (9. a nonzero vector y E en is a left eigenvector corresponding to an eigenvalue Similarly. For any A e Cnxn . It is an easy exercise to Example 9. It can be proved easily from the Jordan canonical form to be discussed in the text to follow (see. Example 9. for example. Note that if x [y] is a right [left] eigenvector of A.
A is said to be defective if it does not have n linearly independent (right or left) eigenvectors. such a polynomial is said to be monic and we of the highest power of A to be +1. But it also clearly satisfies the smaller degree polynomial equation isfies (1 . Let a. sible for A to satisfy a lowerorder polynomial.e.3) in the n(A) = det(A . i •>/—!)• If A E 1Ftnxn.8. must occur in complex conjugate pairs.6. too.e. we know that n(A) = 0..AI) = (A] . it can also be generally write a(A) as a monic polynomial throughout the text).AI) :::: m. Specifically.XI. say. guarantee the existence of corresponding nonzero eigenvectors.7. we say that A is an eigenvalue of A Definition 9.. if A = [~ ~]. Equivalently.:. the set of Definition 9. Definition 9. then A satsible for A to satisfy a lowerorder polynomial. we get the interesting fact that det(A) = AI . it can also be . E A(A).5. neftnhion ~. and hence further are the eigenvalues of A and imply the singularity of the matrix A — AI..A) (9. a . A is said to be defective if it does not have n linearly independent (right or left) eigenvectors. c form form A e C" " A]. we denote the geometric multiplicity of A by g. Let a. all roots of its characteristic polynomialn(A). (An . For example. geometric multiplicity is not equal to (i. A matrix A E Wnxn is said to be defective if it has an eigenvalue whose Definition 9.7. The spectrum of A E C"x" is the set of all eigenvalues of A. The geometric multiplicity ofX is the number of associated of algebraic multiplicity m. But it also clearly satisfies the smaller degree polynomial equation (it.3) are the eigenvalues of A and imply the singularity of the matrix A .4) and set A = 0 in this identity. say.2aA + aa2+ f322 and A has eigenvalues a f3j (where A has eigenvalues a ± fij (where j = i = R). such a polynomial is said to be monic and we generally write et (A) as a monic polynomial throughout the text).. An. if If A € A(A) has algebraic multiplicity m. . It can be shown that or(l) is essentially unique (unique if we force the coefficient It can be shown that a(Je) is essentially unique (unique if we force the coefficient of the highest power of A to be + 1. less than) its algebraic multiplicity. if we denote the geometric multiplicity of A by g.. but that A(A) = A(A) only if A e R"x". then there is an easily checked relationship between the left and right If A € R"x". Moreover. i. Eigenvalues and Eigenvectors Chapter 9. Then n(A) = A22.76 Chapter 9.. then A satisfies (Je — I)2 = 0. IfXA is a root of multiplicity m ofjr(X). The minimal polynomial Of A l::: K""" ix (hI' polynomilll a(A) of least degree such that a(A) ~ O.5. it is posn(A) = O.2»...e. Xn.8. These roots. Example 9. if eigenvectors of A and AT (take Hermitian transposes of both sides of (9. (9..1) . 2aA + 2 + ft and Example 9.AI). possibly repeated.A) . These roots. then n(X) has real coefficients. The spectrum of A is denoted A (A). The spectrum of A e nxn is the set of all eigenvalues of A. From the CayleyHamilton Theorem. Let the eigenvalues of A E en xxn be denoted X\. we get the interesting fact that del (A) = A] • A2 • • An (see also Theorem 9. The geometric multiplicity of A is the number of associated independent eigenvectors = n — rank(A — A/) = dim J\f(A — XI)..rank(A . Thll minimal polynomial of A G l!if. less than) its algebraic multiplicity. i. we say that X is an eigenvalue of A of algebraic multiplicity m. ft E 1Ft and let A = [ _^ !]. and hence further guarantee the existence of corresponding nonzero eigenvectors. 
we always have A(A) = A(AT).. if left of A A E A(A). However. The spectrum of A is denoted A(A). If e Wxn. must occur in complex conjugate pairs. Definition 9. Then jr(A. but that A(A) = A(A) only if A E 1Ftnxn. eigenvalues of A.e.1)2 = O. then y is a right eigenvector of AT corresponding to I € A (A). A.5.) A.. the n(A) coefficients. A.2)). independent eigenvectors = n .~. Thus.e. Theorem If A E 1Ftnxn.2. if A = \1Q ®]. Specifically.. Definition 9. Note. checked eigenvectors of A and AT (take Hermitian transposes of both sides of (9. then I < dimA/"(A — A/) < m. n(A). Then if we write (9.6. the set of all roots of its characteristic polynomial n(X). that by elementary properties of the determinant. then we must have 1 :::: g :::: m. eigenvalues of A. then we must have I < g < m.. as solutions of the determinant equation n(A) = det(A  AI) = 0. Moreover. Hence the roots of 7r(A). If is a root of multiplicity m of n(A).. If A E A(A) has algebraic multiplicity m. . as solutions of the determinant equation 7r(A) has n roots. i. f3 e R and let A = [~f3 £ ]. y of AT y is a left eigenvector of A corresponding to A e A(A). • AM(see and set X = 0 in this identity.n =0o. possibly repeated. then 1 :::: dimN(A . Thus..25). However. For example. A matrix A e 1Ft x" is said to be defective if it has an eigenvalue whose geometric multiplicity is not equal to (i.AI) = dimN(A . Eigenvalues and Eigenvectors n(A) has n roots. degree such that a (A) =0.nxn is the polynomial o/(X) oJ IPll... Equivalently.ft Definition 5. of we always have A(A) = A(A r )..
be an eigenvalue of A with corresponding right Theorem 9.2)' ""d g ~ ~ ~ 1. n(A) = (A — 2)4. A[~  2 0 I 2 0 0 0 0 0 0 !] ~ ~ ~ ha. . 0 0 0 2 0 0 0 2 ] h'" a(A) (A .. Furthermore. such is not the case... left Aj E A (A) such that Xj 1= A. Fundamental Definitions and Properties 77 77 a(A) f3(A) O. Then Xi = 0. a(X) n(X).11. one might speculate that g plus the degree of a must always be five. 0 g At this point. Bezout algorithm. each 4..9."(A) ~ ~ ~ ~ (A . Let A E cc nxn and let Ai be an eigenvalue of A with corresponding right eigenvector jc. The above definitions are illustrated below for a series of matrices.10. Example 9.. such that Aj =£ Ai. Example 9.1. In particular.2)2 ""d g 3. The matrix A~U has a(A) I 2 0 0 2 0 0 0 !] (9. a(A) directly (without knowing eigenvalues and asThere is an algorithm to determine or (A. YY Proof: Since Axt = A. the geometric multiplicity by g. A~[~ A~U 2 0 0 I 2 2 ] ha< a(A) (A .2)4. e l\(A) yj Xi.5) = (A  2)2 and g = 2. Fundamental Definitions and Properties 9.2)' ""d g 2. shown that a (A.. a(A) divides n(A).) directly (without knowing eigenvalues and as Unfortunately. Unfortunately. algorithm.*.10.e. every nonzero polynomial f3(A) particular.11. let Yj be a left eigenvector corresponding to any A. this algorithm.2) andg ~ 4.) divides every nonzero polynomial fi(k} for which ft (A) = 0. We denote 7r(A) (A . i. sociated eigenvector structure). Unfortunately. Then yfx{ = O. called the Bezout algorithm. Let A e C« x " ana [et A.1. Theorem 9. eigenvector is numerically unstable. g. 0 0 0 2 A~U 0 0 ] ha<a(A) (A . Proof' Since AXi AiXi. of which has an eigenvalue 2 of algebraic multiplicity 4.
we have xHX =1= 0. . we have that XxH = AXH x. 0 Let us now return to the general case. p.13.12. 1 ?. ^ /z. since YY A = AjXjyf.— A y )j^jc.. = A.11. Proof: Proof: For the proof see. An with corresponding Theorem 9..=1= 0.. A. Theorem 9.12.13. we must have yfxt = O. i.7. Then x and z must be of A with corresponding right eigenvectors and respectively.. e A(A). we can choose the normalization of the *.. for i € !1. YyXi =0.. However. Eigenvalues and Eigenvectors Chapter 9. Then and z must be orthogonal. then by Theorem 9. it cannot be the case that YiH Xi = 0 as well. Then [x\.e.— A.. A is real. is If A e C nxn has distinct eigenvalues. . since y" A = Similarly. Since Xi ^ 0 and would thus have to be 0. so that YitH x. we find 0 = (A. Chapter 9. it cannot be the case that yf*xt = 0 as orthogonal to all yj's for which j ^ i. we must have that x H = 0. A = AH.. or the Yi 's. results.e.. then by Theorem 9. Theorem 9.. we must have Subtracting (9. 's. the two vectors must be orthogonal. be real.. A. Let A E C"x" be Hermitian. Since equation Az i^z XH get X H Az = iJXH A.JC by ZH to get ZH Ax = X z Hx . = 1 for/ E n. 0 If A E cnx " has distinct eigenvalues.14) well. we have that IXHxx = XxHx. i. Since XxHz. 118]..14. the two vectors must be orthogonal. Then Proof: Suppose (A.7) yields Taking Hermitian transposes in (9. i. Xi is orthogonal to all y/s for which j =1= i. since x is an Using the fact that A is Hermitian.6) Subtracting (9. However. D A =1= iJ. i. Since yf*Xi =1= 0 for each i. contradicting the fact that it is an eigenvector. x .Aj)YY xi. Then (9. eigenvector.7) yields Using the fact that A is Hermitian. we must have that XHzz = 0... . or both.. c Proof: Premultiply the equation Ax = A. . However.78 Similarly. However. Premultiply the equation Az = iJZ by x H to get x HAz = /^XHZZ = XxHz. or both. or else xf would be orthogonal to n linearly independent vectors (by Theorem 9.6) from (9. Since Ai . we find 0 = (Ai .7) Taking Hermitian transposes in (9.. XH x /= 0. or else Xi would be orthogonal to n linearly independent vectors (by Theorem 9. and if Ai E A(A)..xnn • Then {XI.6) from (9. (9. orthogonal. . from which we conclude I = A. x) is an arbitrary eigenvalue/eigenvector pair such that Ax = AX. jc. yr .e. p.e.5). i.. Since A. 0 The proof of Theorem 9. Then all eigenvalues of A must Theorem 9.. Let us now return to the general case. ..e. 's. AXH z. A = AH. we can choose the normalization of the Xi'S. Take the Hermitian A transpose of this equation and use the facts that A is Hermitian and A is real to get xXHAz == of equation facts A. c Proof: Suppose (A. from which conclude A..5). and if A. 118]. Let A E cnxn have distinct eigenvalues A . The same right eigenvectors XI. Let A e nxn be Hermitian.n with corresponding right eigenvectors x\. so that y H Xi = 1 for each i..11 is very similar to two other fundamental and important The proof of Theorem 9. 0 D Theorem 9.JC. is real to get H Az AxH z.11 is very similar to two other fundamental and important results.. or the y. respectively. the proof see. ..is real. x) is an arbitrary eigenvalue/eigenvector pair such that Ax = A.14..e. [21.. contradicting the fact that it is an eigenvector. Let A e C"x" be Hermitian and suppose A and /JL are distinct eigenvalues Theorem 9.. Let A €. result holds for the corresponding left eigenvectors. Take the Hermitian Proof: Premultiply the equation Ax = AX by ZH to get ZH Ax = AZ H x.14) and would thus have to be 0.11. i. for example. xn}} is a linearly independent set. 
The same result holds for the corresponding left eigenvectors. for [21. x is a linearly independent set... Let A E nxn be Hermitian and suppose X and iJ. Then all eigenvalues of A must be real. Eigenvalues and Eigenvectors yy.Aj ^ 0. Cnxn have distinct eigenvalues AI. since x is an eigenvector.are distinct eigenvalues of A with corresponding right eigenvectors x and z.
. An and let the corresponding right eigenvectors form a matrix X [x\.15. let Y — [y\. . ... Furthermore. solve the linear system (A — (—1 + 2j)I)x2 = 0 to get yi X2 =[ 3+ j ] 3 ~/ . solve the linear system (A . 2A.. . . can be written in matrix form as AX=XA (9.. ./) = (A 3 + 4A.9) =A = XAyH yRAX n (9. Then AJC. .. — 1 2 j } . This time we have chosen the arbitrary scale factor for y\ so that \ = 1. 1 ± 2j}. Fundamental Definitions and 79 Theorem 9. .10) = XAX. A. To get the corresponding left eigenvector YI. Furthermore. Let Example 9. suppose that the left and right eigenvectors have been normalized so that yf1 Xi = 1. from Then n(X) det(A ..A..11) Example 9. .*/. . xn].(1 + 2j) I)x2 = 0 to get For A2 — 1 + 2j.(2)l)xI = 0 to get Note that one component of . Fundamental Definitions and Properties 9. solve the 3 x 3 linear system (A . corresponding to these eigenvalues. j e n. let A = right eigenvectors have been normalized so that YiH Xi = 1. .9. Similarly. 2)(A. i E!!. let A = diag(AJ... .8) while y^Xj = 5.3 4A2 9 A. / en. can be written in matrixform as diag(A.AI) (A. Yn] ing right eigenvectors form a matrix X = [XI. Let A e en xn have distinct eigenvalues AI. Let 2 5 3 3 2 4 ~ ] ... solve the (since dimN(A . Finally."" yn] be the matrix of corresponding left eigenvectors.ci can be set arbitrarily.j. let Y = [YI. To get the corresponding left eigenvector y\.nand let the correspondTheorem 9. i E !!:: Finally.16.1. We can now find the right and left eigenvectors corresponding to these eigenvalues.... 10) (A. / en. . and this then determines the other two (since dimA/XA — (—2)7) = 1). / e n. We can now find the right and left eigenvectors which we find A (A) = {2. These matrix equations can be combined to yield the following matrix factorizations: These matrix equations can be combined to yield the following matrix factorizations: XlAX and and A (9... xn].16. = A.22 + 2)" + 5).1.I . Similarly.2 + 9)" + 10) = ()" + 2)().I = = LAixiyr i=1 (9. + 5). Then AXi = AiXi.. suppose that the left and be the matrix of corresponding left eigenvectors. Then rr(A) = det(A .15. y' E !!. For A2 = 1 + 2j. . For Al = —2. from which we find A(A) = {—2. solve the linear system (A 21) = 0 to get linear system y\(A + 21) = 0 to get yi This time we have chosen the arbitrary scale factor for YJ so that y f xXI = 1. solve the 3 x 3 linear system (A — (—2}I)x\ = 0 to get For Ai = 2. is expressed by the equation while YiH Xj = oij. is expressed by the equation yHX = I. i E !!. An) e ]Rnxn. and this then determines the other two Note that one component of XI can be set arbitrarily... Xn) E Wtxn.(2)1) = 1). Let A E C"x" have distinct eigenvalues A.
o 3 Then 7r(A. + 3)(A.. Let A = [~ ~ ~] . Eigenvalues and Eigenvectors Chapter 9.=. Let Example 9. Eigenvalues and Eigenvectors Solve the linear system yf (A — (1 + 27')/) = 0 and normalize y> so that yf 2 1 to get Solve the linear system y" (A . However. we could have found them instead by computing instead of detennining the Yi'S directly.L 4 !.2 However. 3. A.A similar argument yields the result conjugate the equation AX2 — A2X2 to get AX2 A2X2.AI) = (A33 + 8A 22+ 19A + 12) = (A + I)(A + 3)(A + 4). X~l Example 9. note that we could have solved directly only for XI and X2 (and X3 = X2). det(A . Finally.) 19X + 12) = (A.!. Proceeding as in the previous example. similar argument yields the result for left eigenvectors.'s directly.17. Then. Then. Other results in Theorem 9.15 can also be verified. = I .~q 1 3 2 2 0 2 3 ] 2 ~ y' .( I + 2 j) I) = 0 and nonnalize Y22 so that y"xX2 = 1 to get For A3 = — 1 — 2j.L Other results in Theorem 9. XIAX=A= [ 2 0 0 1+2j o 0 Finally. itit is gtruightforw!U"d to compute straightforward to comput~ X~[~ and and I 0 I i ] 1 x. we could have found them instead by computing XI and reading off its rows. For example.j ] 3+j .2j. To see this.!. For example. To see this. Now define the matrix X of right eigenvectors: Now define the matrix of right eigenvectors: 3+j 3j 3. note that we could have solved directly only for *i and x2 (and XT. we could proceed to solve linear systems as for A2. 4}.=. + 4). instead of determining the j.2 and simply can also note that x$ = X2 and Y3 = Y2. we can also note that X3 =x2' and yi jj./) _(A + 8A from which we find A(A) = {—1. we For XT. use the fact that A. + 1)(A. is from which we find A (A) = {I.±1 4 4 4 l+j . we could proceed to solve linear systems as for A. = x2). 2 It is then easy to verify that It is then easy to verify that 2 . Proceeding as in the previous example. use the fact that A33 = A2 and simply conjugate the equation A. —3.80 Chapter 9.c2 = ^. Then Jl"(A) = det(A .A. —4}.2*2 to get Ax^ = ^2X2. for left eigenvectors.17.15 can also be verified.
or eX. of Chapter 11. Fundamental Definitions and Properties 9. For left eigenvectors we have a similar statement. x) but not conversely.I = A(T. The following theorem is useful when solving systems of linear differential equations. Then suppose XI AX = A. Let A E R" xn and suppose X~~1AX — A. namely yH A AyH if and only if Hy)H(T~1AT) = A(T Hy)H. but /(A) does not necessarily the eigenvalues of /(A) (defined as X^o^A") are /(A).3 0 ~l +(4) [ .20. eigenvalue/eigenvector pair (A. —3.txiYiH.9. What is true is that the independent right eigenvectors associated with the eigenvalue 0. or.but f(A) does not necessarily have all the same eigenvectors (unless. 4).19. A is diagonalizable).18. since T is nonsingular. Then. What is true is that the eigenvalue/eigenvector pair (A. A = T0 6 2 has only one right eigenvector corresponding to the eigenvalue 0.18.. i=1 .g. Eigenvalues (but not eigenvectors) are invariant under a similarity transTheorem 9. 2 3 I (. namely the theorem statement follows. ] [~ ~ (I) [ I (. x) maps to (/(A). from the theorem statement follows. ( x ) is a polynomial. A is diagonalizable). For example. etA Ax are Details of how the matrix exponential e'A is used to solve the system x = Ax are the subject solve system i of Chapter 11. formation T. The following theorem is useful when solving systems of linear differential equations. where A is diagonal. jc) is an eigenvalue/eigenvector pair such that Ax = Xx. then easy to show that the eigenvalues of f(A) (defined as L~:OanAn) are f(A).19.g. Then. we have the equivalent statement (T. x) but not conversely. A Theorem 9. Theorem 9. or sin*. e jRnxn n = LeA. but A = [~ has two independent right eigenvectors associated with the eigenvalue o.1. Remark 9. or sinx.) . Eigenvalues (but not eigenvectors) are invariant under a similarity transformation T. X) is an eigenvalue/eigenvector pair such that Ax = AX. x) maps to (f(A). —4). ff(x) is a polynomial. which is equivalent to the dyadic expanWe also have X~l AX = A = diag( 1.. 3. For left eigenvectors we have a similar statement. J+ (3) [ 2 0 2 I I I 2 I ]+ 3 3 I I 3 I I I 3 3 I (4) [ 3 3 I I 0 3 3 l Theorem 9.20. in general. representable by a power series X^^o fln*n)> then it is easy to show that representable L~:O anxn). Proof: Suppose (A. which is equivalent to the dyadic expansion sion 3 A = LAixiyr i=1 ~(I)[ ~ W~l+(3)[ j ][~ ~ 1 . A = [~ Oj ] have all the same eigenvectors (unless. I I ~J I 2 0 0 0 3 3 3 I I (.1. Remark 9. or ex.lIAT)(T~lx)x) = X ( T ~ lIxx). For example. I 3 I (. but A2 = f0 0~1]has two has only one right eigenvector corresponding to the eigenvalue 0. say. since T Proof: Suppose (A. If f is an analytic function (e. from which equivalent statement (T~ AT)(T.1 AT) =X(THyf. Fundamental Definitions and Properties 81 81 We also have XI AX = A = diag(—1. If / is an analytic function (e. say. D D yHA = XyH ifandon\yif(T(T Hy)H (T.
9. to have a version of Theorem 9. 1q is of the form where each of the Jordan block matrices / 1 ••• Jq is of the form Ai 0 Ai Ai o 0 (9.22. i E ~.2 Jordan Canonical Form Jordan Canonical Form Theorem 9.21 for any function that isis There are extensions to Theorem 9... /' en... . € n_. i=1 0 The following corollary is immediate from the theorem upon setting t = I. from which such a result is then available and presented later in this chapter. lordan Canonical Form (JCF): For all A E C"x" with eigenvalues X\.12) where each of the lordan block matrices 1i . . then e A has eigenvalues e A There are extensions to Theorem 9. i E ~. we have Proof' Starting from the definition. ... and right Corollary 9.2 9. kn E C (not necessarily distinct). to have a version of Theorem 9. Theorem 9.. . Eigenvalues and Eigenvectors n = LeA.= Xdiag(/(A. canonical form. It is necessary first to consider the notion of Jordan A is not necessarily diagonalizable. i. from which such a result is then available and presented later in this chapter. The following corollary is immediate from the theorem upon setting t = I. .. 0 (9. ( A ) = Xf(A)X.. ii E ~..20 and its corollary in which A is not necessarily diagonalizable.IXiYiH... of course.20 and Corollary 9. If A E Rnx" is diagonalizable with eigenvalues A.21. f ( X t t ) ) X ~ It is desirable.82 Proof: Starting from the definition.20 and Corollary 9.. It is necessary first to consider the notion of Jordan canonical form.22.20 and its corollary in which It is desirable. .13) 1i = o o Ai o Ai . f(An))X. then eA has eigenvalues e X"i . Jordan Canonical Form (/CF): For all A e c nxn with eigenvalues AI.e. If A e ]Rn xn is diagonalizable with eigenvalues Ai. 1q). of course. . there exists X E C^x" such that XI AX = 1 = diag(ll.Il . ff(A) = X f ( A ) X ~ l I = Xdiag(J(AI).. and the same eigenvectors. and the same eigenvectors.. eigenvectors Xi.e. Corollary 9.21 for any function that analytic on the spectrum of A. An e C 1. there exists X € c~xn such that (not necessarily distinct). i. analytic on the spectrum of A.... . and right eigenvectors xt•. we have Chapter 9..21.i). Eigenvalues and Eigenvectors Chapter 9. I.... . / € n_.
For nontrivial Jordan blocks. Real Jordan Canonical Form: For all A E R n x " with eigenvalues AI. ~: ] and I = \0 A in the case of complex conjugate eigenvalues a ± jfJi E A(A). X (not necessarily X E lR.2. Jq is of the form of in the case of real eigenvalues A...2. With 1 j o j o 1 o o o j ~ ~] 0 1 ' ...=1 ki = n. . ] T (X . 120124]. . Jordan Canonical Form 9. and where M..~xn necessarily distinct). . (Xii±jpieA(A>)... 1q is of form where each of the Jordan block matrices 11. Proof: proof D 0 Transformations like T = [ _~ "•{"]allow us to go back and forth between aareal JCF Transformations like T = I" _... Proof: For the proof see.9. 83 83 Form: 2. Jordan Canonical Form and L. .JfJ =[ (X fJ fJ ] (X = M. there exists X € R" xn such that (9.. . complicated. for example. e A (A). .An n (not € jRnxn Xi. = [ _»' ^ 1 and h2 = [6 ~] in the case of complex conjugate eigenvalues where Mi = [ _~. { ] allow us to go back and forth between real JCF and its complex counterpart: TI [ (X + jfJ o O.14) J\.. the situation is only a bit more complicated. [21. pp.
22 are called the elementary divisors or invariant factors of A.1)z. .26. Tr(A) = Tr(XJX~ ) = TrC/X"1 X) = Tr(/) = £"=1 A.. I) .2).25.7x7 is known to have 7r(A) = (A . An.2). 2)2. 1 det(A) = det(X J XI) det(J) = n7=1 Ai. 2 . .25. highest degree corresponding to distinct eigenvalues. Then Theorem 9. 1)2(A .1*) = Tr(J) = L7=1 Ai. det(A) = det(XJX.. .) = (A. " Xn. Let A e C" " with eigenvalues AI. — 2) . and (A (A .— I) (A. and (A . The characteristic polynomials of the Jordan blocks defined in Theorem Definition 9. .. .2)2. 1.) = det(7) = ]~["=l A..2). Eigenvalues and Eigenvectors T. X XI. from Theorem 9. Then AAhas two possible JCFs (not counting reorderings of the a (A.2)2. The minimal polynomial of a matrix is the product of the elementary divisors of divisors. .24. i=1 l Proof: Proof: 1.22 we have that A = X JJX ~ l .22 we have that A = X J X ~ . 2. and 2).)i.2)3 3and is known to have :rr(A) Example 9. J(2) has elementary divisors (A while /( 2) haselementary divisors (A . From Theorem 9.. i=1 n 2. Thus.(A 1).22 are called the elementary divisors or invariant factors of A.23.23. Then c n 1.2)2..1)2. Let A E nxn with eigenvalues AI. Suppose A E E (A.84 it is easily checked that it is easily checked that Chapter 9. Suppose A e lR. Tr(A) = 2. I) .. x Theorem 9.1)2. Again. and (A .26.I)(A (A2)2. The characteristic polynomial of a matrix is the product of its elementary Theorem 9.. The characteristic polynomials of the Jordan blocks defined in Theorem 9.jf3 0 ]T~[~ l h M Definition 9.24. from Theorem 9.1)4(A . 1)4(A 2) and 2 2 et(A) = (A . D 0 Example 9. Eigenvalues and Eigenvectors Chapter 9. 1). .. Thus.jf3 0 0 et .(A.. (A1). 1). 9.2(A(A . The characteristic polynomial of a matrix is the product of its elementary divisors.22 we have that A X J XI.(A. Theorem 9. 2(A(A. . From Theorem 9. The minimal polynomial of a matrix is the product of the elementary divisors of highest degree corresponding to distinct eigenvalues. Thus..I [ "+ jfi 0 0 0 et + jf3 0 0 0 0 et . Then has two possible JCFs (not counting reorderings of the diagonal blocks): diagonal blocks): 1 J(l) = 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 2 0 0 0 0 0 0 0 0 0 0 0 1 2 0 0 0 0 1 0 0 0 and f2) 0 0 0 0 0 1 0 0 0 0 0 2 = 0 0 0 0 0 0 I 1 0 0 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2 0 0 0 0 0 1 0 2 0 J(l) has elementary divisors (A Note that 7(1) haselementary divisors (A ..22 l Tr(A) = Tr(X J XI) Tr(JX. det(A) = nAi.
e.— a) . suppose suppose A = [3 2 0 o Then Then 3 0 A3I= U2 I] o o 0 0 n has rank 1.. the associated number of linearly A.9.3/)£ = 0. of algebraic multiplicity 1. is not sufficient to Example 9. For each distinct eigenvalue Ai. Determination of the JCF 85 &5 Example 9. Thus. three eigen7r(A. determine a 0 0 0 0 0 0 0 a 0 0 0 0 0 a 0 0 0 0 Al= 0 0 0 a 0 0 0 0 0 0 a 0 0 0 0 0 0 0 a 0 0 0 0 0 0 1 a A2 = a 0 0 0 0 0 0 0 a 0 0 0 0 0 a 0 0 0 0 0 0 0 a 0 0 0 0 0 0 a 0 0 0 0 0 0 a 0 0 0 0 0 0 0 a 4. A e nxn ]R.7) for distinct A..3. we find that 2£2 + ~3 = O.— al) == 4. we need the notion of principal vector..l) independent right — — A. 1.. X e A(A) if and only if (A XI)kx = 0 and (A U}k~l x ^ 0. i. i. a (A). For example. X principal Definition 9. left k. when Ai is simple.nxn number of eigenvectors.nxn).A. it The straightforward case is.. and rank(A —Ai l) for distinct Ai is not sufficient to rr(A).. both are eigenvectors (and are independent). eigenvectors dimN(A — A.) = (A. a(A.e. To get a third vector X3 such that X = [Xl KJ_ X3] reduces A to JCF.29.28. and rank (A A. Determination of the JCF 9.ulx = 0 and (A . is of algebraic multiplicity greater than one. of course.— a) and rank(A . it then has precisely one eigenvector. Knowing TT (A. a(A).) = (A. i..is simple.l).A. i. determine the JCF of A uniquely.28. To get a third vector JC3 such that X [x\ X2 XT.29.e. when X. c . we find that 2~2 + £ 3= 0 .e. The straightforward case is. Thus.] are eigenvectors (and are independent).3..a(A) = (A . of algebraic multiplicity 1. a)7..AI)klx i= o.(7). Definition 9. The matrices A uniquely.27. Remark 9. of course.rank(A . Remark 9.7) = n . The more interesting (and difficult) case occurs when Ai multiplicity A. three eigenboth have rr(A) = (A .3 Determination of the JCF Determination of the JCF The first critical item of information in determining the JCF of a matrix A E Wlxn is its A e ]R. If we let [~l ~2 ~3]T associated If [^i £2 &]T denote a solution to the linear system (A — 3l)~ = 0.. both denote a solution to the linear system (A . a)\ . An analogous definition holds for a left principal vector of degree k. 9.27. and rank(A al) vectors. Let A E C"xn (or R"x"). Then x is a right principal vector of degree k degree associated with A E A (A) ifand only if(A . so the eigenvalue 3 has two eigenvectors associated with it. associated independent right (or left) eigenvectors is given by dim A^(A .).
consider a determining 2 x Jordan [~ i]. Thus. wefind(A.XI)2x^ = 0. k = eigenvector.1. If. For example. we find (A If we premultiply XI) x = (A XI)x = 0.2. (9. for example. determine all eigenvalues of A e R" x " nxn ). (It may be necessary to take a linear of x(l) R(A . but the latter generalized eigenvectors. Then the equation AX = XJ can be written that reduces a matrix A to this JCF. x(l). Principal vectors are sometimes also called generalized eigenvectors. Eigenvalues and Eigenvectors synonymously "of 2. different term will be assigned a much different meaning in Chapter 12.A1)X(l) = O. which simply says that x(!) is a right Ax(1) = hx(1) x (1) (2) x(2). principal vectors still need to be computed from succeeding steps. solve (A . solutions solutions to the homogeneous equation (A . since (A .) . See. if X. A right (or left) principal vector of degree k is associated with a Jordan block J.A1)X(l) = O. of course. 9. For each independent jc (1) .AI). there is only one eigenvector. principal vectors of degree 1) associated with A.XI). Exercise 7. Denote by x(1) and x(2) the two columns of a matrix X e R2. (A — uf. of The number of linearly independent solutions at this step depends on the rank of 2 (A .A1)2 x(2) = (A . 4. Then for each distinct X e A (A) perform the following: z (2) w c 1. is.X I ) . the principal vector second of degree 2: of degree (A .1 Theoretical computation Theoretical computation To motivate the development of a procedure for determining principal vectors.X I ) ( l ) = (A AI)O o. The case k = 1 corresponds to the "usual" eigenvector.'A1)22xx(l) = (A . Thus. there are two linearly independent n — o. the definition of principal vector is satisfied. this rank is n .3." "of often 3.e.XI. First. ji of dimension k or larger. Then the equation AX = X J can be written A [x(l) x(2)] = [x(l) X(2)] [~ ~ J. The number of eigenvectors depends on the rank of A .A1)x(2) = x(l). eigenvector..17) by (A . combination of jc(1) vectors to get a righthand side that is in 7£(A — XI). Eigenvalues and Eigenvectors Chapter 9.AI).86 Chapter 9.17) The first column yields the equation Ax(!) = AX(!). of — AI. S. I) associated This step finds all the eigenvectors (i. E lR nxn This suggests a "general" procedure. Theother solution (A .A/)x(2) = x(l). A E A(A) following: (or C ). The phrase "of grade k" is often used synonymously with "of degree k. The other solution necessary is the desired principal vector of degree 2. of k 5. x(l) (^ 0). for get righthand example. The second column yields the following equation for x .3. See.1 9.A/) = — multiplicity of rank(A — XI) = n . One of these solutions (A — AI)2 x (2) x(l) (1= 0). if of . by (A . If the algebraic multiplicity of If A principal need X is greater than its geometric multiplicity. Denote by x(l) and x(2) the two columns of a matrix X E lR~X2 2x2 2 Jordan block{h0 h1.XI)0 = 0. 2. If we premultiply (9.x2 A JCF. Solve (A .
33. solve (A AI)x(3) 87 = x(2)..(1) (A . {x(l). Attempts to do such calculations in finiteprecision floatingpoint arithmetic or 3. x(k)]. where the chain of vectors x(i) is constructed as above.. Theorem 9. this naturallooking procedure can fail to find all Jordan vectors. Let Example 9. 0 The eigenvalues of A are AI = 1. Determination of eigenvectors more extensive treatments. Attempts to do such calculations in finiteprecision floatingpoint arithmetic generally prove unreliable.2/)x3(1)= 0 yields (A . . Principal vectors associated with different Jordan blocks are linearly indeTheorem 9.32. of algebraic multiplicity and Theorem 9. Then vectors x(i) is constructed as above.31. Theorem 9. Determination of the JCF 3. Notice that highquality mathematical software such as MATLAB readable [8] to learn why. Determination of eigenvectors and principal vectors is obviously very tedious for anything beyond simple problems (n = 2 and principal vectors is obviously very tedious for anything beyond simple problems (n = 2 or 3. . say). For each independent X(2) from step 2.. Let X = [[x(l). For more extensive treatments. see. Let A=[~ 0 2 ] . . x (k) } is a linearly independent set. find the eigenvectors associated The eigenvalues of A are A1 = I. pendent.3.AI) = k . [20] and [21]. find the eigenvectors associated with the distinct eigenvalues 1 and 2. and A3 = 2. First. for example.33. There are significant numerical difficulties inherent in attempting to compute a JCF.30. For Unfortunately. Symbolic Toolbox. .30. where the chain of suppose further that rank(A . (x (1) .1. . . Then Theorem 9.. . 1 . Unfortunately. Determination of the JCF 9.. X(k)} is a linearly independent set. vectors is equal to the algebraic multiplicity of A.2I)x~1) = 0 yields . Suppose A E C kxk has an eigenvalue A of algebraic multiplicity kkand suppose further that rank(A — AI) = k — 1. with the distinct eigenvalues 1 and 2. and the interested student is strongly urged to consult the classical and very readable [8] to learn why. although a jordan command is available in MATLAB's Symbolic Toolbox. Continue in this way until the total number of independent eigenvectors and principal vectors is equal to the algebraic multiplicity of A. and the interested student is strongly urged to consult the classical and very to compute a JCF. . for example.32. Notice that highquality mathematical software such as MATLAB does not offer a j cf command.. and h3 = 2. X(k)]. say). Theorem 9. Principal vectors associated with different Jordan blocks are linearly independent. First.9. Let X = x ( l ) . solve 3. see. Suppose A e Ckxk has an eigenvalue A. this naturallooking procedure can fail to find all Jordan vectors. h2 = 1. Continue in this way until the total number of independent eigenvectors and principal 4. 4. .3. There are significant numerical difficulties inherent in attempting generally prove unreliable. [20] and [21].. although a j ardan command is available in MATLAB'S does not offer a jcf command. . For each independent x(2) from step 2. Example 9.31. A2 = 1.
.88 (A . . we 1 's but can be arbitrary — so long as they are nonzero. For the sake of defmiteness.11)x?J = 0 yields (A. Now let Now let X (2) =[ 0 ] ~ .(2) = x. but the result clearly holds for any JCF... d. . 0 1 = [xiI) 0 xl" xl"] ~ [ ~ 5 ] and XlAX 5 3 0 Then it is easy to check that Then it is easy to check that l 1 X'~U i 1 =[ ~ I 0 0 n 9. 0 0 D'(X' AX)D = D' J D = j ). For the sake of definiteness.3.. but the result clearly holds for any JCF. Then A 4l. consider below the case of a single Jordan block. solve 2 (A . 0 = 0 A dn dn I 2 0 dn dn I A 0 ). 0 !b.so long as they are nonzero.. Eigenvalues and Eigenvectors To find a principal vector of degree 2 associated with the multiple eigenvalue 1. we consider below the case of a single Jordan block.2 9. . dn be a nonsingular "scaling" matrix. (1) toeet x. d. Suppose A € Rnxn and SupposedA E jRnxn and Let D diag(d1.. Then Let D = diag(d" . . =0 yields (1) Chapter 9..1I)xl ) = xiI) to get (A – l/)x. d n)) be a nonsingular "scaling" matrix..l/)x.3. solve To find a principal vector of degree 2 associated with the multiple eigenvalue 1.2 On the +1 's in JCF blocks 's JCF In this subsection we show that the nonzero superdiagonal elements of a JCF need not be In this subsection we show that the nonzero superdiagonal elements of a JCF need not be 1's but can be arbitrary .
.9.Am)Vm with Ai.n = N(A = N (A .A1I) v) E6 . dnxn}.. .. Specifically.. .A[)n) ...4. It is thus natural to expect an with respect to which the matrix is diagonal or block diagonal. dimN(A — AJ)Vi = ni. interpreted This result can also be interpreted in terms of the matrix X = [x\..4 Geometric Aspects of the JCF Geometric Aspects of the JCF The matrix X that reduces a matrix A E IR"X"(or C nxn)) to aalCF provides aachange of basis X e jH.35. . J is obtained from A via the similarity transformation XD = \d\x\. Theorem 9.. set {As s E S}. .. Let V be a vector space over F and suppose A : V —>• V is a linear Definition 9..Am I) Vm .. x n eigenvectors and principal vectors that reduces A to its lCF.... .18) 0 I 1 0 0 can be used to put the superdiagonal elements in the subdiagonal instead if that is desired: to superdiagonal elements in instead desired: A I 0 0 A 0 A 0 A 0 0 A 0 p[ A p= 0 1 0 0 A 0 I A A 0 0 0 A 9. the reverseorder identity matrix (or exchange matrix) In a similar fashion. Specifically. Suppose A E R"x" has characteristic polynomial 9. Geometric Aspects of the JCF 89 di's Appropriate choice of the di 's then yields any desired nonzero superdiagonal elements.Amtm c and minimal polynomial a(A) = (A . A subspace S ~ V is Ainvariant if AS ~ S.AlIt) E6 .35. Am distinct.34. A subspace S c V is A invariant if AS c S. the reverseorder identity matrix (or exchange matrix) 0 p = pT = p[ = 0 I 0 (9. . It is thus natural to expect an associated direct sum decomposition of R.A.. Such a decomposition is given in the following theorem.xn]] of eigenvectors = [x[.4 9. where AS is defined as the transformation. Let IF and suppose + transformation. where AS is defined as the set {As:: s e S}.. Note that dimM(A .nxn (or nxn to JCF provides change of basis with respect to which the matrix is diagonal or block diagonal.. . similarity transformation XD [d[x[..34.. A. dnxn]. E6 N(A ./) w = «..n.. Suppose e jH. E6 N (A ...nxn n(A) = (A . In a similar fashion. Then jH...AmItm . Such a decomposition is given in the following associated direct sum decomposition of jH.A[)V) '" (A .. Geometric Aspects of the JCF 9. Then AI. Definition 9. j is obtained from A via the and principal vectors that reduces A to its JCF. mdistinct.4. (A .
diagonal representation. Finally. so by (9..A.. Equivalently. A invariant if only ifS1 1.Ai/)n..36. Rewriting in the form ~ J..is A T invariant. We could also use other block diagonal decompositions (e."" Jik.). .37.e. Let p(A) = CloI + ClIA + '"• •+ ClqAqq be a polynomial in A.e.19) the columns attention here to only the Jordan block case. Eigenvalues and Eigenvectors If V is taken to be ]Rn over Rand S E ]Rn x* is a matrix whose columns SI. then S <S is Ainvariant if and only if there exists M E ]Rkxk such that eRkxk (9. we have that A A.34.. If F = NI ® • • 0 m A// is Ainvariant... Other representation for A with full blocks rather than the highly structured Jordan blocks. R(S) == S..38.lt.19) the columns of Xi (i. Other such "canonical" forms are discussed in text that follows.. . 7. such "canonical" forms are discussed in text that follows... Then N(p(A)) and R(p(A)) 7£(p(A)) are Ainvariant.Ji . Note that AXi = A*... is not necessarily diagonalizable. K(S) <S. so the columns of A. eigenvalues Ai 9. the eigenvectors and principal vectors associated with A. . for N(A . The equation Ax = A* = x A defining a right eigenvector x of an eigenvalue AX = x A defining a right eigenvector x of an eigenvalue A x X says that * spans an Ainvariant subspace (of dimension one)...e.39.36. we return to the problem of developing a formula for e l A in the case that A A formula e' A is not necessarily diagonalizable.) span an Ainvariant of A"...90 Chapter 9. S is Ainvariant if and only if S . XI AX = [~ J 2 ]..39./)"' by SVD. example (note that the power n. //*.37. = Xi. each Ji = diag(JiI. via SVD). i. e A(A). Let peA) = «o/ + o?i A + • + <xqA be a polynomial in A.Xm] ] Ee]R~xnxnisis such that X^AX ==diag(J1. i.. The equation Ax Example 9.g. 2. i /= 1. s/t span a /^dimensional subspace <S. Suppose A"== [Xl . The Jordan canonical form is a special case of the above theorem. 9. then a basis for V can be chosen with respect to which A has a block diagonal representation. Suppose X block diagonalizes A. Eigenvalues and Eigenvectors Chapter 9. . where each Theorem 9. so by (9. (i. Ainvariant. then a basis for V can be chosen with respect to which A has a block N. . We would then get a block diagonal representation for A with full blocks rather than the highly structured Jordan blocks..• EB Nm. We would then get a block diagonal example (note that the power ni could be replaced by Vi). where each Ji = diag(/. . partition .19) AS = SM. € C"x"' be a Jordan basis for N(AT — A. Note that A A". where X [ X i . If A has distinct eigenvalues A. partition Equivalently./)"'.. If V is a vector space over IF such that V = N\ EB ..span an Ainvariant subspace. the eigenvectors and principal vectors associated with Ai) span an Ainvariant subspace of]Rn. be a Jordan basis for N (AT .* is a Jordan block corresponding to Ai E A(A).. we could choose bases for N(A — A.. /. Example 9.. Let 7.38. . AT Theorem 9.) and each /. Jm). If A has distinct The Jordan canonical form is a special case of the above theorem. of W. = X.e.e. Xm R"n such that XI AX diag(7i.19): /th Example 9. e E"x". .li.. Suppose A E ]Rnxn. = 1. . could be replaced by v...34.. A".i.. 9..2.2. Jm). and S e R" xk s\. Let Yi E <enxn . /. then is Ainvariant if and only if there span a kdimensional subspace S.. so the columns of Xi span an Amvanant subspace... Sk If R" R.. but we restrict our attention here to only the Jordan block case. we have that AXi Theorem 9. is Ainvariant.) and each Jik is a Jordan block corresponding to A. i. 
Then N(p(A)) and 1.as in Theorem 9.. This follows easily by comparing the ith columns of each side of (9.
and let e cnxn be a Jordan canonical form for A. Jm) [YI . Then the sign of z is defined by Re(z) {+1 sgn(z) = IRe(z) I = 1 ifRe(z) > 0.YiH. Then compatibly. with N containing all Jordan blocks corresponding to the be a Jordan canonical form for with N containing all Jordan blocks corresponding to the eigenvalues of in the left halfplane and P containing all Jordan blocks corresponding to eigenvalues of A in the left halfplane and P containing all Jordan blocks corresponding to eigenvalues in the right halfplane. for a k x k Jordan block 7.. Definition 9. i=1 H In a similar fashion we can compute m etA = LXietJ. . A called the matrix sign function. It is a generalization of the sign (or signum) of a scalar. The Matrix Sign Function 9. denoted sgn(A)... Ym]H = LX. E f= 0. Then A = XJX. It is a generalization of the sign (or signum) of a scalar. The Matrix Sign Function 91 91 compatibly.= Ai. of defined Definition 9.41. m ••• .. .41. i=1 which is a useful formula when used in conjunction with the result which is a useful formula when used in conjunction with the result A 0 A A 0 eAt teAt eAt . Then the sign of A. is given by sgn(A) = X [ / 0] 0 / X I . is given by eigenvalues in the right halfplane.40. . denoted sgn(A). Definition 9.I = XJy H = [XI.5 The Matrix Sign Function The Matrix Sign Function section brief interesting useful In this section we give a very brief introduction to an interesting and useful matrix function function called the matrix sign function.9.40.. associated with an eigenvalue A.5 9. Definition 9. Let z E C with Re(z) ^ O. ifRe(z) < O. . Suppose A E C"x" has no eigenvalues on the imaginary axis.lt 2 e At 2! 0 exp t 0 0 0 1 A teAt eAt 0 0 0 0 0 block Ji associated A = A. Then the sign of A.5. A survey of the matrix sign function and some of its applications can be found in [15]. 9. Xm] diag(JI.S.. .JiYi .
. positive of P. yn.. Their left exercises. In fact. Xn and left eigenvectors y\. ± 1. Then the following hold: following 1. Then the following hold: following e 1. sgn(AH) = (sgn(A))". Show that v can be expressed (uniquely) as a linear combination e of the right eigenvectors. 2. Let e C" be an arbitrary vector. S is diagonalizable with eigenvalues equal to del.. and let = sgn(A). Yn. ••• . Let A E Cnxn have distinct eigenvalues AI. EXERCISES EXERCISES 1.. 6. We state some of the more useful properties of the matrix sign function as theorems. . ). . We state some of the more useful properties of the matrix sign function as theorems. R(S — /) Ainvariant of (the negative invariant subspace). AS = SA. Let v E en be an vectors Xl. 2. Eigenvalues and Eigenvectors where the negative and positive identity matrices are of the same dimensions as N and p.1> . . Suppose A E C"x" has no eigenvalues on the imaginary axis. The JCF definition of the matrix sign function does not generally lend itself to reliable computation on a finitewordgenerally itself length digital computer. positive = (/ + of A. sgn(cA) = sgn(c) sgn(A)/or c.43.. Suppose A E enxn has no eigenvalues on the imaginary axis. e nxn Theorem 9. . .. 3. sgn(TlAT) = T1sgn(A)T foralinonsingularT E C"x". sgn(A") = (sgn(A»H. e C"x" Theorem 9. 4. 2. S = sgn(A). Theorem 9. of A (the negative invariant subspace). AS = SA. Show that v can be expressed (uniquely) as a linear combination arbitrary vector. There are other equivalent definitions of the matrix sign function. Their straightforward proofs are left to the exercises. and let — sgn(A). The JCF definition of the here is especially useful in deriving many of its key properties..42. but the one given There are other equivalent definitions of the matrix sign function. S2 = I.xn and left eigenvectors Yl.92 92 Chapter 9.43.. 7l(S l) is an Ainvariant subspace corresponding to the left halfplane eigenvalues left halfplane I. sgn(T1AT) Tlsgn(A)TforallnonsingularT e enxn 6. ••. R(S+/) is an Ainvariant subspace corresponding to the right halfplane eigenvalues R(S + l) A invariant halfplane of (the positive invariant of A (the positive invariant subspace). 3. Xn with corresponding right eigenA e nxn ). posA == (l + S)/2 is a projection onto the positive invariant subspace of A. projection subspace of 4. respectively.n 1. Find the appropriate expression for v as a linear combination expression of the left eigenvectors as well.. 5. 4. 3. negA == (l . Theorem 9. . S = sgn(A). negA = (/ — S)/2 3. S2 = I.S) /2 is a projection onto the negative invariant subspace of A.... respectively. 5. sgn(cA) = sgn(c) sgn(A) for all nonzero real scalars c. distinct right eigenvectors Xi. but the one given here is especially useful in deriving many of its key properties. Eigenvalues and Eigenvectors Chapter 9. respectively..42. its reliable numerical calculation is an interesting topic in calculation its own right..
10. i. Prove that all eigenvalues of a skewHermitian matrix must be pure imaginary.22. Show that all right eigenvectors of the Jordan block matrix in Theorem 9. eigenvectors and if and (real) JCFs of the following matrices: (a) 2 1 ] 0 ' [ 1 6. 11. where x. = O. where J is the JCF Find a nonsingular matrix X such that X AX = J.5x5 has eigenvalues {2.l]r as an eigenvector. 5. Suppose A e rc nxn is Hermitian. y e R" are nonzero vectors with A E lR. AH = —A. where x. Suppose a matrix A E R 16x 16 has 16 eigenvalues at 0 and its JCF consists of a single A e lR. Prove that all eigenvalues of 2. What are the eigenvalues of this slightly perturbed matrix? matrix? . n are nonzero vectors with with xTTyy = 0. Determine the JCF of A. What are the eigenvalues of this slightly perturbed is added to the (16. 3. 2. Let A e R"x" be of the form A = xyT.e. y e R" are nonzero vectors 10. where J is the JCF 1 J=[~ 0 1~].. nxn be of the form A = / + xyT. 16x 16 has eigenvalues at 0 its JCF consists of single Jordan block of the form specified in Theorem 9. Determine all possible € R 5x5 {2. but then the equation (A . right eigenvectors and right principal vectors if necessary. Suppose A € rc nxn is skewHermitian. (A — I)x(2) x(1) 8. Let A e R" xn be of the form A = 1+ xyT. Suppose the small number 10. if A is skewHermitian. Determine the eigenvalues. 3}. Determine the JCF of A. Let A be an eigenvalue of A with corresponding 3. 2.e. Let A = [H 1]· 2 2" Find a nonsingular matrix X such that XI AX = J. Suppose A E C"x" is skewHermitian. Suppose a matrix A E lR. Suppose A E C"x" is Hermitian. Determine the JCF of A. x. x O.n T T x yy = 0. Show that x is also a left eigenvector for A. 9. 4. where x. JCFs for A./)jc = x can't be solved. AH = A. 2.22.16 Jordan form specified 9. Show that all right eigenvectors of the Jordan block matrix in Theorem 9.30 must be multiples of el e R*. Determine the JCFs of the following matrices: 6. Let A E lR. JCFs for A. Prove the same result right eigenvector x. y E lR. ~ 0 Hint: Use[1 1 — I]T an Hint: Use[— 1 1 . eigenvalues. Characterize all left eigenvectors. Let 7. Characterize all left eigenvectors. y E lR.. a skewHermitian matrix must be pure imaginary.Exercises 93 93 2. 5. Determine the JCFs of the following matrices: <a) Uj n 2 1 2 =n 7.1) element of J.30 must be 8.nxn A = xyT. Let A be an eigenvalue of A with corresponding right eigenvector x. Show that x is also a left eigenvector for A.1) element of J. Prove the same result if A is skewHermitian. The vectors [0 1 Ifand[l 0 of [0 — l] r and[1 0]r (2) (1) are both eigenvectors. 3}. multiples of e\ E lR. i. Determine the JCF of A. Suppose 10~16 is added to the (16. k .
it suffices to prove the result for the required symmetric factorization of A. Prove that every matrix A E W x" is similar to its transpose and determine a similarity 13. in terms of AU and A 22. Find a matrix equation that X must satisfy for this to be possible. and S2 are real symmetric matrices and one of them.43. Thus. Prove that 17. Prove Theorem 9.18) is useful. where SI and £2 are real symmetric matrices and one of them. Then A = (XS i X T ) ( X ~ T T S2XI) would be the the "symmetric factorization" of J. If n = 2 and k = 1. If n = 2 and k = 1. where Si 12. Thus.18) is useful. it suffices to prove the result for the JCF.43. Consider the block upper triangular matrix 14. xn has all its eigenvalues in the left halfplane. about when the equation for is solvable? solvable? 15. Show that every matrix A E R"x" can be factored in the form A = SIS2. Consider the block upper triangular matrix A _ [ All  0 Al2 ] A22 ' where A E M"xn and An E Rkxk with 1 ::s: k < n. 15. 16.e. 14. Suppose Al2 ^ and want to block diagonalize A via the similarity transformation want to block diagonalize A via the similarity transformation where X E IRkx(nk). Prove that every matrix e jRn xn is similar to its transpose and determine a similarity transformation explicitly. what can you say further. TIAT = [A011 A22 0 ] . JCF. Prove Theorem 9. say S1. is nonsingular. about when the equation for X is what can you say further. Eigenvalues and Eigenvectors Chapter 9. Eigenvalues and Eigenvectors 12. Hint: Use the factorization in the previous exercise. Hint: Use the factorization in the previous exercise. Suppose A E C"xn has all its eigenvalues in the left halfplane. X e R*x <«*). i. say Si. en . The transformation P in (9. transformation explicitly. Suppose A e sgn(A) = 1.94 Chapter 9. Prove Theorem 9. in terms of All and A22. Prove that 17. Show that every matrix A e jRnxn can be factored in the form A Si$2..S2X~l) would required symmetric factorization of A. Prove Theorem 9. is nonsingular. 16. sgn(A) = /. 13. Hint: Suppose A = Xl XI is a reduction of A to JCF and suppose we can construct Hint: Suppose A = X J X ~ l is a reduction of A to JCF and suppose we can construct the "symmetric factorization" of 1. Find a matrix equation that X must satisfy for this to be possible.42. The transformation P in (9. Then = ( X SIXT)(X. Suppose Au =1= 0 and that we we e jRnxn and All e jRkxk 1 < ::s: n.42.
" The transformation A M» PAQ is called an equivalence. We can also consider the case A e Cm xn and unitary equivalence if P and Remark 10. !] Theorem 10." In matrix terms.j. Normal matrices include Hermitian. it is called an orthogonal equivalence if P and Q are orthogonal matrices. This is proved in Theorem 10. most "diagonal" we can get is the JCF described in Chapter 9. if A E IR mxn find E R™ and Q E lR~xn such that P AQ has a form. and orthogonal.2. then there exists a unitary matrix U such that UH AU — D. 95 95 .2. where it is proved that a general matrix A E C"x" is unitarily similar to a diagonal 10. and unitary matrices (and their "real" counterparts: symmetric.1 Some Basic Canonical Forms Some Basic Canonical Forms Problem: Let V and W be vector spaces and suppose A : V + W is a linear transformation. where D = diag(A. an orthogonal similarity (or unitary similarity in the complex case). If The following results are typical of what can be achieved under a unitary similarity. = V and if pT is orthogonal.An. An. skewsymmetric. What other U HAU = D.. . If P"1 . . Xn Theorem 10. respectively). Q are unitary. . Find bases in V and W with respect to which Mat A has a "simple form" or "canonical Find bases in V and W with respect to which Mat A has a "simple form" or "canonical xm form. . such as A = [_~ most "diagonal" we can get is the JCF described in Chapter 9. such as A = [ _ab ^1 for real scalars a and b. The following results are typical of what can be achieved under a unitary similarity. n )... AAHH = AH A)." The transformation A f+ P AQ is called an equivalence. . Two special cases are of interest: Two special cases are of interest: 1. What other matrices are "diagonalizable" under unitary similarity? The answer is given in Theorem matrices are "diagonalizable" under unitary similarity? The answer is given in Theorem 10....Chapter 10 Chapter 10 Canonical Forms Canonical Forms 10. and unitary matrices (and their "real" counterparts: symmetric. Xn) (the columns ofX are orthonormal eigenvectors for A). .:xm and Q e Rnnxn such that PAQ has a "canonical form. An. If W = V and <2== p. An). find P e lR. A. skewskewHermitian.1. Problem: Let V and W be vector spaces and suppose A : V —>• W is a linear transformation. V and Q 1. where it is proved that a general matrix A e enxn is unitarily similar to a diagonal matrix if and only if it is normal (i. . . if A e Rmxn . .. Remark 10. as well as other matrices that merely satisfy the definition.. Then there AI.. respectively).2.. where D = diag(AJ. . ... If a matrix A is not normal..2. Let A = AH e C"x" have (real) eigenvalues A.1. the for real scalars a and h. . Normal matrices include Hermitian. orthonormal eigenvectors for A). the transformation A H> PAP" 1 is called similarity. the definition.. matrix if and only if it is normal (i. We can also consider the case A E emxn and unitary equivalence if P and <2 are unitary.1.j. .. the transformation A f+ P ApT is called 2. If a matrix A is not normal.. skewHermitian. If A = A H 6 C" " has eigenvalues AI.. = H E en xn exists a unitary matrix X such that X H AX = D = diag(Al." In matrix terms. This is proved in Theorem 10. orthogonal equivalence if P and are orthogonal matrices..I.. the transformation A f+ PAPI is called aasimilarity.9.9. AA = AHA). the transformation A i» PAPT is called If an orthogonal similarity (or unitary similarity in the complex case).e. .1 10.e. then there exists a unitary matrix £7 such that A = AH E en xxn has eigenvalues AI. 
as well as other matrices that merely satisfy the symmetric. If W = V and if Q = PT is orthogonal. and orthogonal. . An) (the columns of X are exists a unitary matrix X such that XHAX = D = diag(A.. it is called an "canonical form. . .
.. . we consider the real case.) [XI X2]] is orthogonal is frequently required..2)block X2 .Hv. Canonical Forms Proof: eigenvector corresponding AI. ... k 1. . xn] = [x\ ] [XI X22] is unitary. simplicity. Then [XI V 2] is unitary.. 10.2)block noting that x\ is orthogonal to all vectors in X2.. .. Thus. (/ € k) U2 X i U2 = Xi .. Now XHAX =[ xH I XH ] A [XI 2 X 2] =[ =[ =[ x~Axl XfAxl X~AX2 XfAX 2 ] (10.. Then there exist n . We illustrate the construction of the necessary Householder matrix for k — 1. We also get 0 in the (2. [Xi f/2] unitary..I)block... Hk as elementary reflectors) H\. ... .. we get Ai remaining in the (l. n .. Write UH = [U\ U ] [VI Vz] 0 2 with Ui E Cnxk . X = Given a unit vector x\ E JRn.. . A. I)block x"xi = 1. .2)block must have eigenvalues A2. Let X\ e Cnxk have orthonormal columns and suppose U is a unitary Theorem 10. The proof is completed easily by induction upon noting proof that the (2. HdxI... . When combined with the fact that x~ XI = 1.96 96 Chapter 10.3. Canonical Forms Chapter 10. Construct a sequence of Householder matrices (also known Proof: Let X [XI. ..• • Hk and Hk'" HI. xn] = 1. Hk in the usual way (see below) such that Hk . .k But the latter are orthonormal since they are the last n . [£i.. where R € Ckxk is upper triangular.. .1) we have used the fact that Ax\ = AIXI. VI € Cnxk [Xi U ] Proof: Let X\I = [x\.. orthogonal (l. X 1 XI e E"... Let the unit vector x\ be denoted by [~I...3 called Theorem 10. following general result. XH AX induction noting that XH AX is Hermitian. . Xk H Hk.. . [XI U2] is unitary.. . In (10.. Let V = XI. . (l. %n] XI .. ..T.xd. Then VH = / / . we get 0 in the (l.l)block by Al (2...3 for k = 1. k = For simplicity.. xf*x\ = Proof' Let x\ be a right eigenvector corresponding to X\. Xn such that [x\. Xk are orthonormal).. Let XI E Cnxk have orthonormal columns and suppose V is a unitary matrix such that UX\ = \ 1. .2). .2 is then a special case of Theorem 10.1 additional vectors x2. D The construction called for in Theorem 10. and normalize it such that x~ XI = XI 1. An.1) we have used the fact that AXI = k\x\. Then there exist n — 1 additional vectors X2. xn such that X = (XI.. . xd = [ ~ l U = where R is upper triangular (and nonsingular since x\. .2)block by XI Xz. —k U. Xk].. .3.k rows of the unitary matrix U. When combined with the fact that In (l0. ~nf.1) (10. where R E kxk is upper triangular. The construction can actually be performed orthogonal frequently [x\ 2 quite easily by means of Householder (or Givens) transformations as in the proof of the Householder transformations proof following general result. 0 Thus.2) Al X~AX2 XfAX 2 0 Al ] 0 XfAX z 0 l In (10.. the construction of X2 E JRnx(nl) such that X — z e ]R" (". D 0 (2. ... Then U = HI'" Hk and H Then x^U2 = 0 (i E ~) means that xf is orthogonal to each of the n — k columns of V2.l)block. . Construct a sequence of Householder matrices (also known HI. Write V H matrix such that V X I = [ ~].
i. A Note that Theorem 10. where u = ['. . consulted standard numerical linear algebra can be consulted in standard numerical linear algebra texts such as [7].1...1 and UT X\ = 1 ± £1. . [11]. .•» '. where u ^UU [t\ 1.. To see that U effects the U symmetric U U = U = I. Some Basic Canonical Forms 97 Then the necessary Householder matrix needed for the construction of X 2 is given by Then the necessary Householder matrix needed for the construction of X^ is given by U = I . Theorem 10.1 ± 1. A in (10.2 is worth stating separately since it is applied fre10.10. '.. .. [25]. X n ). U effects necessary compression of jci. i=1 (10. 's).. In fact. .. Then there exists an AT E jRnxn have eigenvalues AI. [7].4. [11]. Thus. Let A = AT e E nxn have eigenvalues k\. • • . it is easily verified that u T u = ± 2£i and u T Xl = 1 ± '.2 is worth stating separately since it is applied frequently in applications.. Let A E jRn xn (whose orthogonal matrix X e Wlxn (whose columns are orthonormal eigenvectors of A) such that of XT AX = D = diag(Al... . where Pi = PR(x. [23]. necessary compression of Xl. . The real version of Theorem 10.. . £2. Some Basic Canonical Forms 10. [23].+uu T . [25]. including the choice of sign and the complex case.nf.2uu+ = I . quently in applications. it is easily verified that UT U = 2 ± 2'.) — xiXt = i j since xT Xi = 1. .e.3) is actually a often weighted sum of orthogonal projections P. (onto the onedimensional eigenspaces correPi onedimensional eigenspaces sponding to the A. U orthogonal. Then there exists an 10. = PUM = xixf = xxixT since xj xi — 1.4 implies that a symmetric matrix A (with the obvious analogue from Theorem 10.3) spectral which is often called the spectral representation of A. . sponding to the Ai'S). . so U is orthogonal. i=l theoretical The following pair of theorems form the theoretical foundation of the doubleFrancisdoubleFrancisQR algorithm used to compute matrix eigenvalues in a numerically stable and reliable way.e. Further details on Householder matrices.2.1. An).i.4. £«] r It can checked T 2 that U is symmetric and U TU = U 2 = I.2 for Hermitian matrices) can be written n A = XDX T = LAiXiXT.Xn. An. n A = LAiPi.It can easily be checked — 2uu+ — u u T ..2 for Hermitian matrices) can be written from Theorem 10.1. x where P. XTAX = D = diag(Xi.
5 is called a Schur canonical form or Schur form. where T is upper triangular.2)block AU2 is not O. matrix U such that U AU = S. The matrix 10. and sufficient for virtually all applications (see. real arithmetic) to a quasiuppertriangular A e Wnxn is also orthogonally similar (i. for example.e.e. The matrix s~ [ 2 0 2 5 4 0 is in RSF.. However.2 except that Proof: The proof of this theorem is essentially the same as that of Theorem 10.5 (Schur).. [17]). D ur In the case of A e IRn ". Theorem 10. where S is quasiuppertriangular. A quasiuppertriangular matrix is block upper triangular with 1 x 1 diagonal matrix. Canonical Forms Chapter 10. but if A has a complex conjugate pair of eigenvalues. The quasiuppertriangular matrix S in Theorem 10. it is thus unitarily similar to an upper triangular matrix. A matrix A E C"x" is unitarily similar to a diagonal matrix if and only if Theorem 10. A is normal (i. real arithmetic) to a quasiuppertriangular matrix.8. Its real JCF is h[ 1 1 1 0 0 n n Note that only the first Schur vector (and then only if the corresponding first eigenvalue Note that only the first Schur vector (and then only if the corresponding first eigenvalue is real if U is orthogonal) is an eigenvector.6 T T matrix U such that U AU = S. where T is upper triangular. Then Proof: Suppose U is a unitary matrix such that U H AU = D. Then there exists a unitary matrix U such that U H AU = T.5 is called a Schur canonical Definition 10.7. AH A = AAH ). it is of interest to know While every matrix can be reduced to Schur form (or RSF). While every matrix can be reduced to Schur form (or RSF). where S is quasiuppertriangular. what is true. then complex arithmetic is clearly needed to place such eigenValues on the diagonal of T. Then AAH = U VUHU VHU H = U DDHU H == U DH DU H == AH A so A is normal. The triangular matrix T in Theorem 10. . The columns of a unitary [orthogonal] matrix U that reduces a matrix to [real} Schur fonn are called Schur vectors. where D is diagonal.6 is called a real form or Schur fonn. Then there exists an orthogonal Let A e IR n ". Proof: Suppose U is a unitary matrix such that U H AU = D. Let A E R"xxn. 0 in this case (using the notation U rather than X) the (l. . [17]).9. Canonical Forms Theorem 10.6 is called a real Schur canonical form or real Schur form (RSF). A matrix A e c nxn is unitarily similar to a diagonal matrix if and only if A is normal (i.6 (MurnaghanWintner). the next theorem shows that every A E IR xn is also orthogonally similar (i. so A is normal. The following theorem answers this question.2)block wf AU2 is not 0. However. However. The when we can go further and reduce a matrix via unitary similarity to diagonal form. it is thus unitarily similar to an upper triangular matrix. Example 10. following theorem answers this question. where D is diagonal. Then there exists an orthogonal 10.e.5 (Schur). then complex arithmetic is clearly needed if A has a complex conjugate pair of eigenvalues. Theorem 10. UH AU = T. Proof: The proof of this theorem is essentially the same as that of Theorem lO.8. The quasiuppertriangular matrix S in Theorem 10. Its real JCF is is in RSF. Let A e C"x". The columns of a unitary [orthogonal} Schur canonical form or real Schur fonn (RSF). the next theorem shows that every to place such eigenvalues on the diagonal of T.e.9. complex conjugate pairs of eigenvalues. matrix U that reduces a matrix to [real] Schur form are called Schur vectors.2 except that in this case (using the notation U rather than X) the (l. 
A quasiuppertriangular matrix is block upper triangular with 1 x 1 diagonal blocks corresponding to its real eigenvalues and 2x2 2 diagonal blocks corresponding to its blocks corresponding to its real eigenvalues and 2 x diagonal blocks corresponding to its complex conjugate pairs of eigenvalues. it is of interest to know when we can go further and reduce a matrix via unitary similarity to diagonal form. AHA = AA H). diagonal of T (or S).7. The triangular matrix T in Theorem 10. However.. Definition 10. but In the case of A E R"xxn . and sufficient for virtually is real if U is orthogonal) is an eigenvector. for example. what is true. is that the first Schur vectors span the same all applications (see.. is that the first k Schur vectors span the same Ainvariant subspace as the eigenvectors corresponding to the first eigenvalues along the invariant subspace as the eigenvectors corresponding to the first k eigenvalues along the diagonal of T (or S).98 98 Chapter 10. Let A E cnxn Then there exists a unitary matrix U such that Theorem 10.
A U U HA U T.12. we write A :::: B if and only ifA — B>QorB — A < 0. if A and B are symmetric matrices. We write A < 0.2...10.. we write A > B if and only if A . Thenfor all Let A = AH E Cnxn with eigenvalues AI > A2 > • • > An.2. 111. Then T (Theorem It is then a routine exercise to show that T must.12.B :::: 0 or B . indefinite. if A and B are symmetric matrices. B — A < 0. If a matrix is neither definite nor semidefinite. suppose A is normal and let U be a unitary matrix such that U H AU = T. this section that may be stated in the real case for simplicity. . Then n x HAx = (U HX)H U H AU(U Hx) = yH Dy = LA. superscript H s replace T s. We write A ~ 0.=1 But clearly n LA. Remark 10. it is said to be indefinite.2 Definite Matrices Definite Matrices Definition 10. It T 0 D 10. We write A > 0. in fact. A symmetric matrix A e Wxn 1.n • We write A :::: 0..A < O.11.A is positive definite. and denote the components of y by v UHx. Remark 10. be diagonal. Then 11.10. i € n. 11'/.n.11. Definite Matrices 10. CM j]i.13.12 ~ AlyH Y = AIX HX . Proof: Proof: Let U be a unitary matrix that diagonalizes A as in Theorem 10. x eC". nonnegative definite (or positive semidefinite) if and only if XT Ax :::: 0 for all (or positive if and only if x T Ax > for all nonzero x e W.10. 3. negative definite if . Similarly.5). positive definite if and only ifx Ax > Qfor all nonzero x E W1 We write A > O. we write A > B if and only if A — B > B . U diagonalizes A 10. this is generally true for all results in the remainder of of superscript //s Ts. Furthermore. 2. We write A > O..2. positive definite if and only if xTT Ax > 0 for all nonzero x G lR. We write A < O. where x is an arbitrary vector in en. Let A = AH e enxn with eigenvalues X{ :::: A2 :::: . where T is an upper triangular matrix (Theorem 10.A ~ O. Furthermore. nonpositive definite (or negative semidefinite) if A is nonnegative definite. i En. let y = U H x.nxn is Definition 10. e Theorem 10. we write A > B if and only if A . Similarly. Indeed. Also. If neither semidefinite. if—A 4. If A E C"x" is Hermitian. write A < O. negative positive definite. Definite Matrices 99 Conversely. A symmetric matrix A E lR.B > 0 or or Also. We (or negative if— A nonnegative definite. nonzero x E lR. all the above definitions hold except that A e nxn Remark 10.2 10.2.• :::: An. Then for all E en.=1 .
Let A e C"x". The determinants of all principal submatrices of A are nonnegative. The ratio ^^ x for A = AH <=enxn and nonzero x jc een isis calledthe = AH E Cnxn and nonzero E C" called the x of x.1. not just those of the leading principal submatrices. Not@th!ltthl!dl!termin!lntl:ofnllprincip!ll eubmatrioes muet bQ nonnogativo in Theorem 10. Theorem 10. Theorem 10.I. For example.19. Canonical Forms Chapter 10. Remark 10. Theorem 1O. where M 6 R ix " and k > rank(A) "" rank(M)...= Amax{A A). A can be written in the form MT M. 3. determinant the determinant of the 2x2 2 leading submatrix is also 0 (cf. Then IIAII2 = ^m(AH A}.19. If A = AH e C"x" is positive definite. Let A E enxn Then \\A\\2 = Ar1ax(AH A). A symmetric matrix A € R"x" is nonnegative definite if and only if any of following equivalent of the following three equivalent conditions hold: 1. ::::: AI. All eigenvalues of A are nonnegative. The determinant of the 1x1 1 leading submatrix is 0 and 1.14.17). Then ^pjp2 = ^^(A" HA). 2. A principal submatrix of an nxn n matrix A is the (n — k)x(n(n — k) matrix that remains by deleting k rows and the corresponding k columns. whence IIAxll2 ! H IIAliz = max . from which the theorem follows. Corollary Corollary 10. form MT E ~n xn E ~n xn definite if and only if Theorem 10.18.l3 provides upper (AO and lower (An) bounds for (A. The determinants of all leading principal submatrices of A are positive. consider the matrix A — [0 _l~]. I Proof: E C" Proof: For all x € en we have Let x be an eigenvector corresponding to Amax (A HA). A can be written in the form MT M. A symmetric matrix A e E" x" is positive definite if and only if any of the following equivalent following three equivalent conditions hold: determinants of principal 1. 0 D Remark XHHAx Remark 10. All eigenvalues of A are positive. XHAx > 0 for all nonzero = AH E enxn E en.. A leading principal submatrix of order n — k is obtained by deleting the last k rows and columns. The determinant of the I x leading submatrix is 0 and consider the matrix A = [~ 2x 0 (cf. 3. All eigenvalues of A are positive. of all principal submatrices of 2. Then 111~~1~2 Let jc be an eigenvector corresponding to Xmax(AHA).16. so 0 An ::::: .17. x E C". However.100 100 Chapter 10.17). . A can be wrirren in [he/orm MT M.15. whence Ar1ax (A A). of obtained and E ~nxn positive definite if and only if any of the Theorem 10. Theorem 10.18.@mllrk 10. Theorem 10.w) x HAx > the Rayleigh quotient. where M e R"x" is nonsingular. where M E IRb<n and k ~ ranlc(A) — ranlc(M). All eigenvalues of A are nonnegaTive.1. 2. 3. Canonical Forms and and n LAillJilZ::: i=l AnyHy = An xHx . the . of positive. xfO IIxll2 I 0 Definition submatrixofan n x k) x k) Definition 10.13 provides (A 1) Rayleigh quotient of jc. Note that the determinants of all principal "ubm!ltriC[!!l mu"t bB nonnBgmivB R.soO < X n < ••• < A.18.
A e R"x be nonnegative definite.2.21. = LLH. nxm 2. The following standard theorem is stated without proof (see. definite if A is positive definite). It is stated and proved below for the more general Hermitian case. matrices (both symmetric and nonsymmetric) have infinitely many square roots.3 is not unique. if then M can be then M can be [1 0].17 is available and is known as the Cholesky factorization. E jRnxn MT AM > M BM.18. concerns the notion of the "square root" of a matrix. Theorem 10. In general. Let A. for example. Its proof is straightforward from basic definitions. if Remark 10. j proof (see. then MT AM > MT TBM. If A > B and M e Rm . . Remark 10. The factor M in Theorem 10. if A E lR. 2.20. Then there exists a positive definite..17 is available and is A stronger form of the third characterization in Theorem 10. if = /2. In general. p. The case = is trivially true. Its proof is straightforward from theorem is useful in "comparing" symmetric matrices. SA = AS and rankS = rank A (and hence S is positive = AS S S. B e Rnxn be symmetric. The following theorem is useful in "comparing" symmetric matrices. standard theorem stated 181]). if € E" xn we say that e jRn x that S E R nxn"isisa asquare root of AA ifS2 2 =— A. It is stated and proved below for the more general known as the Cholesky factorization. Ll E C1""1^""^ and . [16.22. and positive definite.. if A = lz. rankS = rankA definite definite if positive definite). [16. matrices (both symmetric and square root of if S A.23.nxn . Let A E lR. For example. Write the matrix A in Proof: The proof is by induction. For example. [ fz ti o o l [~ 0] ~ 0 v'3 .B is nonnegative definite. If >BandMe jRnxm. If A> Band E jR~xm.20. For example. Write the matrix A in the form the form By our induction hypothesis. Then A has aaunique nonnegative definite square root S. Moreover. BM.2.3 is not unique..18. A stronger form of the third characterization in Theorem 10. in fact. with positive diagonal elements such that positive Proof: The proof is by induction.10. Then A has unique nonnegative Theorem 10. 0 Recall that A :::: B if the matrix A . basic definitions. negative and is nonpositive definite. assume the result is true for matrices of order — 1 so that B may be written as B = L\L^. p. any matrix S of c e s 9 the " °* ™ the form [ ssinOe _ ccosOe ] IS a square root.2) element is. then MT AM :::: MTTBM.we say 181]). Let A e c nxn be Hermitian unique nonsingular lower triangular matrix L nonsingular A = LLH. E <C Theorem 10.2) element is. negative and A is nonpositive principal submatrix consisting of the (2. It concerns the notion of the "square root" of a matrix. For example. The factor M in Theorem 10. nxn Theorem 10. assume the result is true for matrices of order n .1 so that B By our induction hypothesis. 10rm [COSO _ Sino] .nxn"be nonnegative definite. for example. Definite Matrices 101 101 principal submatrix consisting of the (2. 1.22. MT AM> M. 1f A :::: Band M E Rnxm. The following Recall that A > B if the matrix A — B is nonnegative definite. Definite Matrices 10. Theorem 10. That is.is a square root. any matrix of nonsymmetric) have infinitely many square roots. Hermitian case.23. in fact. That is. The case n = 1 is trivially true. where L\ e c(nl)x(nl) is nonsingular and lower triangular as = L1Lf.
2]. Two such forms are stated here. . numerical procedures for computing such procedures an equivalence directly via. 0 Note that the greater freedom afforded by the equivalence transformation of Theorem afforded 10. ann Since det(B) > 0.3 10.24. Alternatively. we find by L^b. But we = ann — b LIH L\lb = ann — bH B~lb B A). we must have ann —bHB lb > 0.b H B1b (= the Schur complement of B in A). But know that o < det(A) = det [ ~ b ] = det(B) det(a nn _ b H B1b). Ch.2) in its complex version. Performing the indicated matrix multiplication and equating the corresponding submatrices.b B1b completes D 10. Canonical Forms Chapter 10. It remains to prove that we can write the n x n matrix A It in the form in the form ann b ] = [LJ c a 0 ] [Lf 0 c a J. the unitary equivunitary alence known as the SVD. They are more stably computable than (lOA) and more efficiently computable than a full SVD. Alternatively. yields a far "simpler" canonical form (10. for example. [21. The numerically preferred equivalence is.4) Proof: proof Proof: A classical proof can be consulted in. see. as opposed to the more restrictive situation of a similarity transformation. Many similar results are also (10.24.4) and the SVD. [21.4) efficiently available. say.p. we see that we must have L\c = b and ann = CHc + a 2.102 102 Chapter 10.lb.• Clearly we see we L I C = b and ann = c HC a 2 c is given simply by c = C.4) [7. the SVD is relatively expensive to compute and other canonical forms exist that are intermediate between (l0. Gaussian or elementary row and column operations. Choosing a to be the positive square ann . However. p.b HL\H L11b = ann . Substituting in the expression involving a. of course. multiplication where a is positive.131]. 131]. Let A € C™*71.b H B1b > O. However. Ch. suppose A has an SVD of the form (5. Substituting in the involving we find 2 a2 = ann .3 Equivalence Transformations and Congruence Equivalence Transformations and Congruence Theorem 10.xn. of ann — b 0 root of «„„ . Then E c~xn such exist e C™ x m that that PAQ=[~ ~l (l0. Take P =[ S~ 'f [I ] and Q = V to complete the proof. 5]. Then there exist matrices P E C: xm and Q e C"nx" such E c.4). [4. Canonical Forms with positive diagonal elements.. Then [ Sl o 0 ] [ I Uf U H ] AV = [I 0 0 ] 0 . for example (10. are generally unreliable. Choosing a be det(fi) > HB~lb completes the proof. available.
Proof: For the proof. (TT. When A has full column rank but is "near" a rank deficient matrix. various rank revealing QR decompositions are available that can sometimes detect such various rank revealing QR decompositions are available that can sometimes detect such phenomena at a cost considerably less than a full SVD. v.v. if A is Hermitian. D Proof: For the proof. for example. then rank(A) rr v. see. of A. Note that congruence preserves the property of being Hermitian.xr E erx(nr) arbitrary general nonzero. then XH AX is also Hermitian. Example 10. n The signature of A is given by sig(A) = n . for example.25 (Complete Orthogonal Decomposition). Again.25 (Complete Orthogonal Decomposition). £). and eigenvalues. Note that a congruence is a similarity if and only ifX is unitary. Note that congruence preserves the property of being Hermitian. If A = A" E C nnxn. v. see [4]. respectively. [21.. It is of interest to ask what other properties of a matrix are preserved under congruence. 0 2. HE C xn E e~ xn. Let A e C™ ". D 0 Remark 10. Theorem 10.28. Remark 10. Again. Then there exists a unitary matrix Q e e mxm and a Theorem 10. £). p. D Theorem 10.30. We then have the following. negative.27. . and zero eigenvalues.In[! 1o o o 0 0 00] 10 =(2.31 guarantees that rank and signature of a a matrixare preserved under Theorem 10.xr is upper (or lower) triangular with positive diagonal elements.30.rrxr is upper (or lower) triangular with positive diagonal elements. We then have the following. Let A e Cnxn and X e Cnnxn. sig(A) = rr — v. Let A = AH E e nxn and let rr. then rank(A) = n + v. 0. Definition 10. 134].29. v. It is of interest to ask what other properties of a matrix are then X H AX is also Hermitian. . It turns out that the principal property so preserved is the sign preserved under congruence. upper Proof: For the proof. v. 2. see. n. a congruence.10. In(A) = ln(X Proof: For the proof. Then the inertia of A is the triple of inertia of of negative. of each eigenvalue. see [4].e. l. respectively.t h e n A > 0 if and only if In (A) = (n. Let A E e~xn. Definition 10. When A has full column rank but is "near" a rank deficient matrix. In(A) 3. numbers In(A) (n. Then there exist unitary matrices U e Cmxm and V E Cnxn such that unitary matrices U E e mxm and V e e nxn such that (10. Then there exist Theorem 10. see [4]. The signature of is Example 10.31 guarantees that rank and signature of matrix are preserved under congruence.3. 0 D x Theorem 10.XH AX Definition 10. In(A) = In(X AX).28. 134]. of A. Let A e e~xn. Let A = A He ennxn and X e Cnnxn.31 (Sylvester's Law of Inertia). where R e €. Then H HAX). Theorem 10.26. If In(A) = (rr. and ~ denote the numbers of positive. The H. If A AH e e x " then A> 0 if and only if In(A) = (n. see [4] for details. i. [21.1. Equivalence Transformations and Congruence 10..0. Let A E C™ x ". see [4] for details. Definition 10. Let A = AH e C"x" and let 7t. Then is the numbers In(A) = (rr.31 (Sylvester's Law of Inertia). i. Proof: For the proof. Equivalence Transformations and Congruence 103 103 Theorem 10. where R E Crrxr is upper triangular and S e C rx( " r) is arbitrary but in general nonzero.0).3.e. see [4]. Proof: For the proof. if A is Hermitian.5) where R E e. 0).27.1). congruence. v.6) E e. Note that a congruence is a similarity if and only if X is unitary.26. Then there exists a unitary matrix Q E Cmxm and a permutation permutation matrix IT e en xn" such that Fl E C"x QAIT = [~ ~ l (10. phenomena at a cost considerably less than a full SVD. 
The transformation A i> XH AX is called a congruence. v. It turns out that the principal property so preserved is the sign of each eigenvalue. nxn E e X E e~xn.29. p. and £ denote the numbers of positive.
and D .1 10. ifand only ifeither A > 0 and D .33.0. Then there exists a matrix AH C"xn In(A) = (Jr.34... Define the x n matrix vv = diag(I/~. . .. the congruence B ] [I D ~ 0 _AI B I ° JT [ A BT ~ ][ ~ 0 D The details are straightforward and are left to the reader.. if ifA>0. .. ..fArr+I' . . ...BT A+B > 0. . .32. A w ). An of Jr Proof: Let AI . An).1 Block matrices and definiteness Theorem 10.4 Rational Canonical Form Rational Canonical Form rational One final canonical form to be mentioned is the rational canonical form. Suppose A = AT and D = DT. if and if either A> and D . AA+B = B.3. .BT A+B:::: o. 1. the next v are negative.. v.. . and D .. B D ] >  ° if and only if A:::: 0. O. Theorem 10. By Theorem 10. Note the symmetric Schur complements of A (or D) in the theorem. . Then = AT D = DT. . . 0 D 10. AA+B = B. 0). for example. I.2 there exists a unitary matrix V such that VHAU = diag(AI. I. or D > 0 and A . 1.BT A~l B > 0. 0. . £).1). Let A = AHeE cnxn with In(A) = (jt.104 104 Chapter 10. 0 D Then it is easy to check that X = V VV yields the desired result. Define the nn x n matrix U UH AV = diag(Ai. I/. Canonical Forms Chapter 10. v. Then Remark Remark 10.. the number of Il's is v. D > and . . Theorem positive. Xw denote the eigenvalues of A and order them such that the first TTare ~ O. 1.. Canonical Forms Theorem 10... 1... X UW desired 10. . . .0).. Proof: proof Proof: The proof follows by considering...BD. ..35.. X e C"nxn such that XHAX = diag(l. . . 1/.. and the final £ are 0.BT AI > 0. .BD^BT > 0.4 10.. . Suppose A = AT and D = DT. . I/~. left AT D DT. .33. and the numberofO's is~. . I.I BT > O. Proof: AI. Proof: Consider the congruence with Proof: Consider proof Theorem and proceed as in the proof of Theorem 10.. where the number of X 1's is Jr. where the number of E c~xn XH AX = diag(1.3. the number 0/0 's is (.fArr+v. . the number of — 's is v. 's is 7i..
18). A is easily seen to be similar to the following matrix identity similarity P given by (9.8) This matrix is a special case of a matrix in lower Hessenberg form. Rational Canonical Form 10.(ao + «A + . Then it can be shown (see [12]) that A is similar to a matrix of the form is similar to a matrix of the form o o o o 0 o o o (10. A is easily seen to be similar to the following matrix in upper Hessenberg form: in upper Hessenberg form: a2 al o 0 0 1 o 1 6] ao o . the following are also companion matrices similar to the above: following are also companion matrices similar to the above: Notice that in all cases a companion matrix is nonsingular if and only if ao /= 0. In fact.37.4. A matrix A E lRnxn of the form (10. has only one block associated with each distinct eigenvalue. since a matrix is similar to its transpose (see exercise 13 in Chapter 9). + an_IAnI). the Moreover.7) Definition 10. Using the reverseorder identity similarity P given by (9. For £*Yamr\1j=» example. To Companion matrices also appear in the literature in several equivalent forms. : ~ ! ~01]. A matrix A e E nx " of the form (10.10) o 1 o 1 o o o o o o (10.Then it can be shown (see [12]) that A mial is 7r(A) = A" . since a matrix is similar to its transpose (see exercise 13 in Chapter 9). is said to be in cornpanion form. Rational Canonical Form 105 105 Definition A matrix A e M"x" is said to be Definition 10. equivalently. Notice that in all cases a companion matrix is nonsingular if and only if aO i= O.4.7) is called a cornpanion rnatrix or Definition 10. Companion matrices also appear in the literature in several equivalent forms. the inverse of a nonsingular companion matrix is again in companion form. To illustrate. equivalently.9) Moreover. Suppose A E lRnxn is a nonderogatory matrix and suppose its characteristic polynoSuppose A E Wxn is a nonderogatory matrix and suppose its characteristic polynon(A) An — (a0 + alA + a n _iA n ~').18). if its Jordan canonical form has only one block associated with each distinct eigenvalue.36.. For In fact. l 0 0 ~ ao ~ ao _!!l (10. the inverse of a nonsingular companion matrix is again in companion form. if its Jordan canonical form and characteristic polynomial are the same or. consider the companion matrix illustrate. A matrix A E lRn Xn is said to be nonderogatory ifits minimal polynomial if its minimal polynomial and characteristic polynomial are the same or. consider the companion matrix (l0.10.. o (10.37.11) . Using the reverseorder This matrix is a special case of a matrix in lower Hessenberg form.7) is called a companion matrix or is said to be in companion forrn.
stable ones are nearly unstable. 3. Canonical Forms with a similar result for companion matrices of the form (10.• > an be the singular values of the companion matrix A in (10. = ~ (y .4ao ' 1 2)  a? = 1 for i = 2. If a companion matrix of the form (10. see haps surprisingly.10). Let a = a\ + a\ + • • • + a%_{ and y = 1 + «.. Then + ai + ._1 and y = 1 + + a. 02. Canonical Forms Chapter 10. companion an arbitrary matrix to companion form are numerically unstable. . . For example. Moreover. then it is not similar to a companion matrix of the form (10.7). i. if ao = 0.10). and hence the pseudoinverse of a singular companion + matrix is not a companion matrix unless a = 0. Then it is easily verified that c = l+ ara' Then it is easily verified that o o o + o o o o o o 1. has more than one Jordan block associated with If A € JRnxn derogatory. then it is not similar to a companion matrix of the form (10. is the fact that their singular values can be found in closed form. Explicit formulas for all the associated right and left singular vectors can Remark 10. [12].106 Chapter 10. at least one eigenvalue. . Let al ~ GI ~ • • ~ an be the singular values of the companion matrix Theorem 10. Let a\ > a2 > .. among which. Such matrices are said to be in rational canonical form Frobenius rational canonical form (or Frobenius canonical form). n . stable ones are nearly unstable.e. is the fact that their singular values can be found in closed form.. Such matrices are said to be in each of whose diagonal blocks is a companion matrix. it can be shown that a derogatory matrix is similar to a block diagonal matrix. in matrices are known to possess many undesirable numerical properties. Explicit formulas for all the associated right and left singular vectors can also be derived easily.. Moreover. However.7). their eigenstructure is extremely ill conditioned. For example. a n i] and l c I+~T a.Q + a. Companion matrices appear frequently in the control and signal processing literature Companion matrices appear frequently in the control and signal processing literature but unfortunately they are often very difficult to work with numerically. and perCompanion matrices have many other interesting properties. associated at least one eigenvalue. and so forth [14]. Then A in (10. Algorithms to reduce an arbitrary matrix to companion form are numerically unstable. i. form).caa T ca o J. Algorithms to reduce but unfortunately they are often very difficult to work with numerically.... a. nonsingular ones are nearly singular.. see. especially nonsingular ones are nearly singular. the largest and smallest singular values can also be written in the equivalent form Remark 10. I — T = T) Note that / . for example. If A E R nx " is derogatory. + a. Ifao ^ 0. Companion matrices have many other interesting properties. Theorem 10. .. if ao = 1 inverse can still be computed.caa T = (I + aaT) I . among which. and so forth [14]. also be derived easily.39. and perhaps surprisingly..7). a2. then its pseudoIf singular. in n general and especially as n increases. companion matrices are known to possess many undesirable numerical properties.4aJ) .Jy2 . the largest and smallest singular values can also be written in the equivalent form If ao =1= 0. each of whose diagonal blocks is a companion matrix.e.7). see [14]... . Leta = ar aJ al 2_ 2 ( y + Jy 2. Let a E JRn1 denote the vector [ai..38. with a similar result for companion matrices of the form (10.1. anIf and let e M"" \a\. 
matrix is not a companion matrix unless a = O.39..38.7) is singular.. For details.
If this number is large.38 yields some understanding of why difficult numerical behavior might be expected for companion matrices. Theorem 10. Find a unitary matrix U such that [~ M CC x 2 Find a unitary matrix U such that 6. Show that if A is normal. If A E Wxn is positive definite..18) and the matrix U in identity in (9. this condition number is the ratio of largest to smallest singular values which. A [ must also be positive 7.40.11). and when GO is small or y is large (or both). then peA) = A2.. Let R. . when solving linear equations numerical sensitivity Kp(A) = systems of equations of the form (6. one measure of numerical sensitivity is KP(A) = A A ] > the socalled condition number of A with respect to inversion and with respect II ^ IIpp II A~l IIpp'me socalled condition number of A with respect to inversion and with respect to the matrix pnorm.. one may lose up to k digits of precision. Show that the converse radius of A.18) U A E cc nxn Theorem 10. If this number is large.4a5 21 a ol It is easy to show that 21~01 :::: k2(A) :::: 1:01' and when ao is small or y is large (or both). (A) = IA.. Remark 10.11). EXERCISES EXERCISES 1. R> S [1 A~I] ~ O? /i 1 > 0? ~] > 0 if and only if > 0 and J 1 > 0 if and only if S > 0 and . then it must be diagonal. An and singular 0'1 > 0'2 ~ 4. For example. 2. Show that a.EA(A) I'MpeA) 3. 3. can be determined explicitly as determined explicitly y+J y 2 . Show that if A is normal. is true if n = 2. Let A = I J : ]eEC 22x2. Let A 7. Then p(A) is called the spectral radius of A.. Suppose A e E"x" is positive definite. this condition number is the ratio of largest to smallest singular precision. 1. A E jRnxn N(A) = A/"(A ). S 6 E nxn be symmetric.(A) for e n. Let A G Cnx" and define p(A) = maxx€A(A) IAI. Use the reverseorder identity matrix P introduced in (9. then Af(A) = N(A Tr ). A E en x n eigenvalues A]. Show that [ * }. (A) A. Show that if a triangular matrix is normal. Show that a. In the 2norm. It is easy to show that y/2/ao < K2(A) < £.Exercises Exercises 107 Companion matrices and rational canonical forms are generally to be avoided in floatingCompanion matrices and rational canonical forms are generally to be avoided in fioatingpoint computation. If A e jRn xn 8.• ~ an ~ O. say O(lO k ). 5. Note that explicit formulas then K2(A) ~ I~I' It is not unusualfor y to be large forlarge Note that explicit formulas Koo(A) for K] (A) and Koo(A) can also be determined easily by using (l0. then p(A) = IIAII2' Show that the converse is true if n = 2.. say 0(10*). E jRnxn be symmetric. For example. show that AI must also be positive definite. It is not unusual for y to be large for large n.. Let R. . In the 2norm. Prove that if A e M"x" is normal.2). Theorem 10. then K2(A) ^ T~I.. Show that [~ R > SI. yn and singular values a\ ~ a2 > . • • > on > 0. .40.. when solving linear behavior might be expected for companion matrices.. by the theorem. 9. A E cc nxn peA) = max). K\ (A) (10.5 to find a unitary matrix Q that reduces A e C"x" to lower triangular form. one may lose up to k digits of to the matrix Pnorm. 6.38 yields some understanding of why difficult numerical Remark 10. Let A € C n xn be normal with eigenvalues y1 .. Is [ ^ A E jRnxn is definite.(A)I for ii E!l.
.1 1.108 108 10. Canonical Forms [~ ~ l (b) [ 2 1. Find the inertia of the following matrices: following 10. Canonical Forms Chapter 10.j 1+ j ] 1 .j 1+ j ] 2 ' (d) [ . (a) Chapter 10.
which thus also converges for all A and uniformly in t. where the matrix A e JR.1) involves the matrix (11.nxn is defined by Definition 11. It can be described conveniently in terms of the matrix exponential. (11.1.2) can be shown to converge for all A (has radius of convergence equal The series (11.1 11. T T 109 109 .3) which thus also converges for all A and uniformly in t. The solution of (11. A) • 2.1 by setting AA =O. It can be described conveniently in terms of the matrix exponential.nxn. 11. For all A JR.1 and linearity of the transpose. The solution of (11.nxn is constant and does not depend on t. For all A E JR.nxn. eO = I.1 Differential Equations Differential Equations = Ax(t).1. Definition 11.2) k=O The series (11.1 Properties of the matrix exponential Properties of the matrix exponential 1. This is known as an initialvalue problem. We restrict our attention in this chapter only to the socalled timeinvariant case.n (11. Forall A EG R" XM . This is known as an initialvalue problem. where the matrix A E Rnxn is constant chapter only to the socalled timeinvariant case.1. the matrix exponential e A e JR.1) for t 2: to. Proof: This follows immediately from Definition 11.Ak. unique. The solution of (11.Chapter 11 Chapter 11 Linear Differential and Linear Differential and Difference Equations Difference Equations 11. (e(eAf = e A e^.1. k. the matrix exponential e A E Rnxn is defined by the power series power series e = A L +00 1 . Proof This follows immediately from Definition 11. Proof This follows immediately from Definition 11. Proof: This follows immediately from Definition 11. = Xo In this section we study solutions of the linear homogeneous system of differential equations In this section we study solutions of the linear homogeneous system of differential equations x(t) x(to) E JR.1) is then known always to exist and be unique.1 and linearity of the transpose.1) involves the matrix to +(0).2) can be shown to converge for all A (has radius of convergence equal to +00). e° = I. We restrict our attention in this for t > IQ.1 by setting = 0. For all A e Rnxn. The solution of (11.1 11.1) is then known always to exist and be and does not depend on t.
. (e'A)~l e~'A.1 {(j/A). (b) £. (a) C{etA = (sIArl. T E JR.. et(A+B) =^e'Ae'B = e'Be'A and and B commute..110 110 Chapter 11. {+oo = io et(sl)e tA dt since A and (sf) commute =io (+oo ef(Asl) dt . and B commute.. ForaH A E R" x " and for all t e JR. Proof' Note that Proof: Note that et(A+B) = I t + teA + B) + (A + B)2 + .lI{(sl.A)I} = «M.. Let denote the Laplace transform and £~! the inverse Laplace transform. For all e JRnxn and for all E R. 6. all A € R"x" and for all t € lR.l{e tA}} = (sI . on (t + T)*. B E R" xn and for all t E JR. Linear Differential and Difference Equations Chapter 11. Then for 6. (b) . i.tA .e. Part (b) follows similarly. Compare like powers of A in the above two equations and use the binomial theorem Compare like powers of A in the above two equations and use the binomial theorem on(t+T)k. ) . binomial theorem on (A B) and the commutativity of A and B... Compare like powers of t in the first equation and the second or third and use the Compare like powers of t in the first equation and the second or third and use the k binomial theorem on (A + B/ and the commutativity of A and B. Proof" Simply take T = — t in property 3. et(A+B) =etAe tB = etBe tA if and only if A all e JRnxn and all e R. For all e R"x" and for all t. (a) . AB = BA. Linear Differential and Difference Equations e(t+r)A e(t+T)A 3.. 2 2! and and while while e e tB tA = ( 1+ tB t2 2 + 2iB 2 +... 5. r e R. ) .. 2! and and e e tA rA 2 = ( I + t A + t2! A 2 +. Proof" Note that Proof: Note that e(t+r)A = etA erA = erAe tA . Proof" We prove only (a). ) ( I + T A + T2!2 A 2 +.1 } = erA.. ) ( 1+ tA + t2!A 2 +.e. Part (b) follows similarly. = e'A erA = elAe'A .. For all A. = I + (t + T)A + (t + T)2 A 2 + . Proof: Simply take T = t in property 3.A)I.. Then for E JRnxn t E R. AB = B A. i. (etA)1 = e. Let £ denote the Laplace transform and £1 the inverse Laplace transform. For all A E JRnxn and for all t. Proof: We prove only (a). 4.
that A is diagonalizable.AetAil Ae tA I ~t (e~tAetA I (M A I ~t (e~tA . Alternatively.1.11..1.y. Notice in the proof that we have assumed. For all A e R"x" and for all t e R. using the JCF. Differential Equations 111 111 = {+oo 10 n t 1 e(AiS)t x.etA ..H using the JCF. ) etA I < MIIA21111e  L'lt (L'lt)2 + IIAII + IIAI12 + . for convenience..H L.All succeeding steps in the proof then follow in aastraightforward way.H = '"' assuming Re s > Re Ai for i E !! = (sI .Ae II = I L'lt (M)2 + ~ A 2 +. Differential Equations 11.1 The matrix (s I — A) I is called the resolvent of A and is defined for all s not in A (A).A) ~' is called the resolvent of A and is defined for all s not in A (A). it can be differentiated termbyProof: Since the series (11. All succeeding steps in the proof then follow in straightforward way. s . For all A E JRnxn and for all E JR. the scalar dyadic decomposition can be replaced by If this is not the case. For any consistent matrix norm..u . If this is not the case. that A is diagonalizable. ) = L'lt IIA 21111e tA IIe~tIIAII.3) is uniformly convergent.Ae tA I = III (etAe~tA L'lt = = /A) . for convenience...X i y..3) is uniformly convergent. the scalar dyadic decomposition can be replaced by et(Asl) =L . e'A Proof: Since the series (11.Ae tA tA tA I I e tA . Notice in the proof that we have assumed.. it can be differentiated termbyterm from which the result follows immediately.=1 = ~[fo+oo e(AiS)t dt]x.. ) 3 II I ( ~. A2 + (~~)2 A tA II tA Il 1 (_ 2! + ... ) = I ( Ae + = tA ~. . employed I e(t+~t)AAt. The matrix (s I .... .l)e .y.. ) 3! 4! L'ltiIAIl < L'lt1lA21111e (1 + + (~t IIAII2 + .. £(e'A) 7. 1h(e tA ) = AetA = etA A..A)I. the formal definition d dt _(/A) = lim ~t+O e(t+M)A _ etA L'lt can be employed as follows.=1 m Xiet(Jisl)y.H dt assuming A is diagonalizable . . A 2etA + .A"I i=1 ..Ae tA etA) . = (sl A)...
the righthand side above clearly goes to 0 as At goes to 0. Then the solution of the linear inhomogeneous initialvalue problem and.. The formula can be derived by means of an integrating factor "trick" as follows.i~t()Oc() nnd uniqu()Oc:s:s theorem for *('o)} = <?(f°~fo)/1. or one can use the limit exists and equals Ae t A A similar proof yields the limit et A A. Let A E Rnxn .7) and again use property 7 of the matrix exponential.7) and again use property 7 of the matrix exponential.f(p(t). continuous.3 Inhomogeneous linear differential equations Inhomogeneous equations Theorem 11.1. D 11.2 Homogeneous linear differential equations Homogeneous equations x(t) Theorem 11. The formula can be derived by means of an integrating factor "trick" direct differentiation. (11.4) for t ::: to is given by (11. Premultiply the equation x — Ax = Bu by e~ to get (11. the limit exists and equals Ae'A •. (11. x(to) = e(toto)A Xo = XQ so. by the fundamental existence and uniqueness theorem for ordinary differential equations. t) dx = l af(x t) ' dx pet) at (t) q + dq(t) dp(t) f(q(t). Thus. B e IR xm and let the vectorvalued function u be given Theorem and.2.4).7) Proof: Differentiate (11. by the fundamental existence and x(t0) — e(fo~t°')AXQ — Xo uniqueness theorem for ordinary differential equations.7) is the solution of (1l. D Ir: Remark 11.dt dt is used to get x ( t ) = Ae(tto)Ax0 + f'o Ae('s)ABu(s) ds + Bu(t) = Ax(t) + Bu(t).1.¥o + 0 = XQ so.t goes to O.4. (11. Linear Differential and Difference Equations For fixed t.8) .Ax = Bu by e. Also. continuous. The proof above simply verifies the variation of parameters formula by Remark 11.3. x(to) = Xo E IR n (11. or one can use the fact that A commutes with any polynomial of A of finite degree and hence with e'A.112 112 Chapter 11. 11..4.5) is the solution of (11. 0 uniqueness theorem for ordinary differential equations. The general Proof: Differentiate (11. Linear Differential and Difference Equations Chapter 11. Then the solution of the linear inhomogeneous initialvalue problem x(t) = Ax(t) + Bu(t).5) Proof: Differentiate (11. say. Let A E IR n xn. Thus. The general formula formula d dt l q (t) pet) f(x. The proof above simply verifies the variation of parameters formula by direct differentiation.6) for t ::: to is given by the variation of parameters formula for t > IQ is given by the variation of parameters formula x(t) = e(tto)A xo + t e(ts)A Bu(s) ds. B E Wnxm and let the vectorvalued function u be given Let A e IR nxn .6). 0 ordinary differential equations.5) and use property 7 of the matrix exponential to get x t ) = Ae(tto)A xo fundamental Ae(t~to)Axo = Ax(t). The solution of the linear homogeneous initialvalue problem = Ax(l). A similar proof yields the limit e'A A.4). The solution ofthe linear homogeneous initialvalue problem Let A e Rnxn. t ) . Premultiply the equation x .5) is the solution of (11. lo t (11. x(to) = Xo E IRn (11.tA to get as follows. (11. Ae(ts)A Bu(s) to get x(t) = Ae{'to)A Xo + Bu(t) = Ax(t) = x(to e(totolA Xo + = Xo fundilm()ntill ()lI. t ) .6). say.5) and use property 7 of the matrix exponential to get x ((t) = Proof: Differentiate (11. the For fixed t. the righthand side above clearly goes to 0 as t:.7) is the solution of (11. fact that A commutes with any polynomial of A of finite degree and hence with etA. Also.
6. t exponential.12). differential equation. For convenience. Theorem 11.nxn.2.. and the proof is essentially the same. The fact that X((t) satisfies the initial condition is trivial.5.4 Linear matrix differential equations Linear matrix differential equations Matrixvalued initialvalue problems also occur frequently.12) X(t) = etACetAT has the solution X(t} = etACetAT. and C e Rnxm.7. Let A E Wlxn.4 11. the Theorem 11. and hence t d esAx(s) ds = to ds 1t to eSABu(s) ds. X t) X 0 D Corollary 11. The solution of the matrix linear homogeneous initialvalue e jRnxn. (11. X((t) is symmetric and (11. X(O) = C (11. t]: 113 1 Thus.12) is known as a LyaX t) punov differential equation. the following theorem is stated with initial time to = 0. following to = O. C e IR" ".11) is known as a Sylvester Sylvester differential equation. we can have coefficient matrices on both the right and left.2. The first is an obvious generalization of Theorem 11. Differential Equations [to.11) X(t) = etACe = e ratB has the solution X ( t ) — atACe tB . E ]R. etAx(t) . punov differential equation. X(to) =C E jRnxn (11.1. and the proof is essentially the same. Differential Equations 11. .9) for t ::: to is given by for t > to is given by X(t) = e(tto)Ac. B e R m x m . Theorem 11.1.7.1.etoAx(to) = lto t e. The of nrohlcm problem X(t) = AX(t). t]: Now integrate (11.10) coefficient In the matrix case. Theorem 11. e jRnxn. Let A.11.6. the When C is symmetric in (11. Corollary 11.8) over the interval [to. E ]R.sA Bu(s) ds x(t) = e(ttolA xo + lto t e(ts)A Bu(s) ds.1.nxm. Let A E Rnxn. X(O) =C (11. The initialvalue problem (11. 11. Then the matrix initialvalue E jRmxm. Then the matrix initialvalue problem X(t) = AX(t) + X(t)AT. the Proof: Differentiate etACe tB property Proof: Differentiate etACetB with respect to t and use property 7 of the matrix exponential. problem problem X(t) = AX(t) + X(t)B.
H .5 Modal decompositions Let A and suppose. In the last equality we have used the fact that YiHXj = flij.1. Similarly.1 . Then Then i=1 n = L(aieAiUtO»Xi. The decomposition above expresses the solution x (t) as a weighted sum of its directions. the rest of this subsection is easily generalized by using the JCF and the decomposition able.li y t as discussed in Chapter 9). ~ 11. where J is a JCF for A. This modal decomposition can be expressed in a different looking but identical form This modal decomposition can be expressed in a different looking but identical form n if we write the initial condition Xo as a weighted sum of the right eigenvectors if we write the initial condition XQ as a weighted sum of the right eigenvectors Xo = L ai Xi.5 11. for convenience.4) can be written A = L X. Then Then etA = etXJX1 = XetJX. that it is diagonalizable (if A is not diagonalizable. Let A E jRnxn and suppose X e jR~xn is such that XI AX = J. The decomposition above expresses the solution x(t) as a weighted sum of its modal velocities and directions. for convenience. Linear Differential and Difference Equations 11.1 n Le A• X'YiH .iUtO)Xiyr) Xo 1=1 n = L(YiHxoeAi(ttO»Xi. ~ 1=1 I t. i=1 The ki s are called the modal velocities and the right eigenvectors Xi are called the modal The Ai s are called the modal velocities and the right eigenvectors *.4) can be written x(t) = e(tto)A Xo E jRnxn E Wxn = (ti. i=1 In the last equality we have used the fact that yf*Xj = Sfj.e'J.y.1. in the inhomogeneous case we can write t e(ts)A Bu(s) ds i~ = t i=1 (it eAiUS)YiH Bu(s) dS) Xi. that it is diagonalizable (if A is not diagonalizLet A and suppose. modal velocities and directions. Then the solution x(t) of (11. if A is diagonalizable in geneml. are called the modal directions. the rest of this subsection is easily generalized by using the JCF and the decomposition H A — ^ Xf Ji YiH as discussed in Chapter 9).114 114 Chapter 11. where J is a JCF for A. Linear Differential and Difference Equations Chapter 11.6 Computation of the matrix exponential Computation exponential JCF method JCF method Let A e R"x" and suppose X E Rnxn is such that X"1 AX = J. Then the solution x(t) of (11.x. in the inhomogeneous case we can write Similarly.
N has 1's along only its second superdiagonal.eAt). of In the more general case. o A o o A Clearly A/ and N commute.I e IN =I+tN+N 2 + .I)! I o t 1 o Thus. A matrix M E M nx " is nilpotent of degree (or index. degree k. e'u e l N tu x lH = diag(e At . Mp~l ^ O. it is then easy to compute etA via the formula etA = XetJ XI' Xe tl X If is etA etA tj since et I is simply a diagonal matrix. A. e ttJi = eO. + N k2! (k . and N kforth. while MPI t=. or grade) MP = 0. k) O's k k N = 0. To be specific.e. ext}. Nk~lI has a 1 in its (1. or grade) p if if matrix M e jRnxn is nilpotent of degree (or index.11.. Finally.0. its first superdiagonal (and O's elsewhere). aareal version of the above can be worked out. nilpotent Definition 11.. teAl eAt = 0 0 0 2I e 12 At teAl 0 eAt In the case when A is complex.. the problem clearly reduces simply to the computation of problem clearly reduces the exponential of a Jordan block. (1. But e tN is almost as easy since N The diagonal part is easy: e e = diag(e '. Mp = 0. and so forth. . Thus.. Thus.1. is complex. k) element and has O's everywhere else.. Differential Equations 115 If A is diagonalizable.1. it is easy to check that while N has 1's along only its first superdiagonal (and O's elsewhere). i. N22 has l's along only its second superdiagonal. AI e I.!etN by property 4 of the matrix exponential. O. e lN finite. ••• ..8. the series expansion of e'N is finite. let . elN is is nilpotent of degree k. eAt teAt eAt o 2I e 12 At IkI At e (kI)! 0 ell. real version of the above can be worked out. l's For the matrix N defined above. Differential Equations 11. t2 t k..7.EeCkxk be aaJordan block of the form Ji <Ckxk be Jordan block of the form A Ji = 1 o o o =U+N.
Here.3. (A.. t fixed Given A € E.9.116 Chapter 11. ni ..Ai t'.1. . ==> 2a2 = t 2 e. + I) 3 . Then A(A) = {2.. Suppose the characteristic polynomial of A can be written as n(A)) = Yi?=i (A . . the superscript (k) denotes the fcth derivative with respect to A. ani solution of the n equations: g(k)(Ai) = f(k)(Ai). so m = 1 and nl Let g(X) = UQ + alA + a2A2. Given A E jRnxn and f(A) = etA. The motivation for this method is known. lowerorder g Example 11. I. I... so m = 1 and n{ = 3. g(I) = f(1) ==> ao . — 1.t . 2} and Example 11.s known. a.) = (A + 1)3.a l +a2 = e==> at . and /(A) = etA. . Let A = [ ~_\ J].. compute f(A) = etA.10. Theorem 9. the function g is known and /(A) = g(A). the function g is known and f(A) = g(A). where t is a fixed scalar. The motivation for this method is the CayleyHamilton Theorem. . in fact. functions. . Let A Then A (A) = {2. .9.I. With the aiS then kth superscript (&) X. . k = 0. Then the three equations for the a. which says that all powers of A greater than A n .2. Thus. f(A) n(A) etK. .s are distinct. ..1 in the power series for et A can be written in terms of these greater n— e' A lowerorder powers as well. the unique OTQ.2t e. They are. all the Ak — expressed k 1. Linear Differential and Difference Equations Chapter 11. anl are n constants that are to be determined.1 can be expressed as linear combinations of Ak for k = 0. .s are given by g(A) — ao aiS a\X o^A. n . i Em.. 2} and etA Xe tJ =[=i a = xI =[ =[ 2 1 ] exp t ] [ [ 2 0 ~ ] [ 1 1 2 1 2 ] 2 1 e~2t te.. . Define the Ai nr=1 n where ao. characteristic of n(X (^ ~~ ^i)"'» where the A.2a2 = te. Let Example 11. Linear Differential and Difference Equations Example 11.nxn and /(A) = etx.2t ][ 1 ] Interpolation method Interpolation method This method is numerically unstable in finiteprecision arithmetic but is quite effective for effective hand calculation in smallorder problems. compute f(A) = e'A.. The polynomial g gives the appropriate linear combination. terms of order greater than n . The method is stated and illustrated for the hand calculation in smallorder problems. Then jr(A.t • g'(1) = f'(1) g"(I) = 1"(1) .10. . The method is stated and illustrated for the exponential function but applies equally well to other functions. Let A = [~ o ~01~ ] t .
Then the defining equations for the a. we find Solving for the aiS. 2.11.1.s are given by Let g(A) ao + aLA.2t ) 2te. Differential Equations 11 .2t + 2te. Then 7r(X) = (A+ 2)22 so m = 11and [::::~ 4i and f(A) = ea. The matrix analogue yields e A ~ functions rational eA = . There is an extensive literature on approximating certain nonlinear functions by rational functions.2t te. Differential Equations Solving for the ai s. but general nonsymbolic computational effective smallorder techniques numerically problem equivalent techniques are numerically unstable since the problem is theoretically equivalent to knowing precisely a JCE JCF.2t . Let g(A.11.) = «o + ofiA. 2.s. This etA = .cI{(sI — A)I} is quite effective for smallorder problems.2t aL = + 2te. f(A) = etA = g(A) = aoI + al A = (e.2t I [4 4] I 0 _ [  e. 1. Use Pade approximation. Then the defining equations for the aiS are given by 6] g(2) = f(2) ==> ao ==> al 2al = e. g'(2) = f'(2) = te Solving for the a.11.. Let A _* Example 11. Use etA = £~l{(sl . Thus. Then rr(A) = f\ + o\2 so m = and (A i 2) «i nL = 2. Let A = [ ~4 J] and /(A) = eO.A)^ 1 } and techniques for inverse Laplace transforms. 2t .2t _ Other methods Other methods 1. we find 117 Thus. we find ao = e. t ff>\ tk TU^^ _/"i\ Example 11.2t . te. s. we find Solving for the a.2t [ ~ o ] + te.1.2t .
. by 22' 2* )A A multiplying it by 1/2k for sufficiently large k and using the fact that A = / { ]I //2')A )\ * .. Again.1 11. The solution ofthe linear homogeneous system of difference equations equations (11. Reliable and efficient computation of matrix functions such as e A and log(A) remains a fertile area for research. 11. Again. modeled by systems of difference equations. exhibit many parallels to the continuoustime differential equation difference equations. no double subscripts). Numerical loss of accuracy can occur in this procedure from the successive squarings. 11.118 118 l Chapter 11.2 11.. modeled by systems of equations of the previous section. We could also case. we restrict our attention only to the socalled timeinvariant Remark 11.• + Vq A q. e (e( 3. we restrict our attention only to the socalled timeinvariant case. Linear Differential and Difference Equations DI(A)N(A). case.15) . [19]. Proof: The proof is almost immediate upon substitution of (11.2 Inhomogeneous linear difference equations Inhomogeneous linear difference equations E jRnxn. no double subscripts). and since we want to keep the formulas "clean" (i. we have chosen ko = 0 for want to keep the formulas "clean" (i. Unfortunately. where D(A) 80I Si A H h SPA and N(A) v0I + vlA + q Explicit formulas are known for the coefficients of the numerator and Explicit formulas are known for the coefficients of the numerator and denominator polynomials of various orders. and since we consider an arbitrary "initial time" ko. Reduce A to (real) Schur form S via the unitary similarity U and use e A 3. by this means when IIAII is sufficiently small. say. Then the solution of the inhomogeneous initialvalue problem (11.1 Homogeneous linear difference equations Homogeneous linear difference equations Theorem 11..e. This can be arranged by scaling A. where the matrix A in (11. We could also consider an arbitrary "initial time" ko.2.13.. where D(A) = 001 + olA + . Then the solution of the inhomogeneous initialvalue problem mvectors. Let A E Rnxn. The solution of the linear homogeneous system ofdifference Let A e jRn xn. Numerical loss of accuracy can occur in this procedure from the successive squarings.. This can be arranged by scaling A.2 Difference Equations Difference Equations In this section we outline solutions of discretetime analogues of the linear differential In this section we outline solutions of discretetime analogues of the linear differential equations of the previous section. exhibit many parallels to the continuoustime differential equation case. 0 D Remark 11. 11. [19]. Linear discretetime systems. say. Reduce A to (real) Schur form S via the unitary similarity U and use eA = U e SsUH Ue U H and successive recursions up the superdiagonals of the (quasi) upper triangular matrix and successive recursions up the superdiagonals of the (quasi) upper triangular matrix e s. where the matrix A in (11. eS . for example. 4.13.14. Reliable and efficient computation 4. •• vq A .2.12.e.13). = P = multiplying it by 1/2* for sufficiently large k and using the fact that e = ( e j . Let A e Rnxn. Linear discretetime systems. convenience. Many methods are outlined in. in the matrix case this means when  A is sufficiently small. of matrix functions such as e A and 10g(A) remains a fertile area for research.13) is constant and does not depend on k. Linear Differential and Difference Equations Chapter 11. + opAP and N(A) = vol + vIA + D~ (A)N(A). for example. Many methods are outlined in. E jRnxm {udt~ is of Theorem 11. 
and this observation is exploited frequently. B e Rnxm and suppose {«*}£§ « a given sequence of mvectors. in the matrix case the exponential is accurate only in a neighborhood of the origin.13) for k > 0 is given by for k 2:: 0 is given by Proof: The proof is almost immediate upon substitution of (11. and this observation is exploited frequently. a Fade approximation for polynomials of various orders.13) is constant and does not depend on k.2.14) into (11. but since the system is timeinvariant.14) into (11. a Pad6 approximation for denominator the exponential is accurate only in a neighborhood of the origin.13). we have chosen ko = 0 for convenience. but since the system is timeinvariant. Unfortunately.
Difference Equations 119 119 is given by kI xk=AkXO+LAkjIBUj. in general. k=O Assuming Izl > max A. X~1 AX JCF for A.2. One definition of the ztransform of a sequence is +00 Z({gk}t~) = LgkZk. +00 k=O z z = (lzIA)I = z(zI . since /* is simply a diagonal matrix. the ztransform of the sequence {Ak is then given by Assuming z > max IAI. Then JCF for A. One definition of the ztransform of a sequence {gk} is a matrix exponential. 0 D 11.15).2.16) Proof: The proof is again almost immediate Proof: The proof is again almost immediate upon substitution of (11. Assume that A e M" xn and let X e jR~xn be such that XI AX = /.11. One solution method. Difference Equations 11..y. by analogy with the use of Laplace transforms to compute ztransforms.. the ztransform of the sequence {Ak}} is then given by AEA(A) X€A(A) k "'kk 1 12 Z({A})=L. Jk . substitution of (11.2..A)I. is to use ztransforms.3 Computation of matrix powers Computation of matrix powers It is clear that solution of linear systems of difference equations involves computation of It is clear that solution of linear systems of difference equations involves computation of k. Then Ak = (XJXI)k = XJkX.16) into (11. j=O (11. where J is a E jRnxn and X E R^n J.1 _I tA~X.O.2. based Methods based on the JCF are sometimes useful.zA =I+A+"2 A + . k:::. it is then easy to compute Ak via the formula Ak = XJkXXI Ak Ak — X Jk If diagonalizable. which is numerically unstable but sometimes useful for hand calculation.15).16) into (11.=1 H l If A is diagonalizable.3 11. sometimes useful Ak.H m if A is diagonalizable. a matrix exponential. LXi Jtyi .. again mostly for smallorder probsmallorder lems.
.2k) k( _2)k1 ] k( 2l+ (2l.. ) Ak.1 Ak ( . Linear Differential and Difference Equations In the general case. To be specific. .is complex. 1 1 1 2 1 ] Basic analogues of other methods such as those mentioned in Section 11.1 Ak The symbol (: ) has the usual definition of q!(kk~q)! and is to be interpreted as 0 if k < q. [11.6 can also methods 11.. it is then straightforward to apply the binomial theorem to (AI + N)k and verify that straightforward N)k (XI verify Ak kA kI Ak k 2 (. inI)(O) = CnI' (1l. aareal version of the above can be worked out. but again no universally "best" method be derived for the computation of matrix powers. the problem again reduces to the computation of the power of a To Ji E Cpxp Jordan block. . for example.)A  ( k ) AkP+I pl 0 J/ = kA k.. Example 11. the problem again reduces to the computation of the power of a In the general case.17) with ¢J(t) a given function and n initial conditions 4>(t} y(O) = Co. Consider. see [11.(^ .. y(O) = CI. it is commute. let 7. A is complex.1.• = AI and noting that AI and the nilpotent matrix Writing Ji = XI + N and noting that XI and the nilpotent matrix N commute. Let A = [_J Example 11.2) .l8) . real version of the above can be worked out. Then Then 1 ] [(_2)k 1 0 k(2)kk(2) 1 ] [ _ [ (_2/. e Cpxp be a Jordan block of the form o . and is to be interpreted as 0 if k < q.15.3 HigherOrder Equations HigherOrder Equations differential It is well known that a higherorder (scalar) linear differential equation can be converted to higherorder a firstorder linear system. Ch.2 0 0 0 0 kA k .120 Chapter 11.15.1 (2 . but again no universally "best" method exists. In the case when A.1.6 be derived for the computation of matrix powers. For an erudite discussion of the state of the art. 18]. Let A Ak = XJkX1 = [=i 4 a [2 1 J]. 11. the initialvalue problem initialvalue (11. 0 A Writing /.3 11.1(2k . The symbol ( ) has the usual definition of . Linear Differential and Difference Equations Chapter 11.
.. . c\. Show that etA 2.."+ an_1A n~ + .. (11. Then Xl (I) X2(t) = X2(t) = y(t). as mentioned before. 3.19) possesses many nasty numerical properties for even moderately sized n matrix A in (11. Let . where I + get. y E lRn and let A = xyT. y € R" and let A = xyT. and. Let 3. .. the companion Note that det(A! . at least for computational purposes. A similar procedure holds for the conversion of a higherorder difference equation A similar procedure holds for the conversion of a higherorder difference equation with n initial conditions. Show that e P ~ ! + 1.. Then components Xl (t) yet)..an_llnl)(t) Xnl (t) Xn(t) = y(n)(t) = aoy(t)  + ¢(t) = aOx\ (t) . Show that e % / + 1. .a\X2(t) . into a linear firstorder difference equation with (vector) initial condition.. into a linear firstorder difference equation with (vector) initial with n initial conditions. . Note that det(X7 — A) = An + an\Xn 1l H alA + ao.. These equations can then be rewritten as the firstorder linear system These equations can then be rewritten as the firstorder linear system 0 0 x(t) = 0 0 1 0 0 0 ao a\ x(t)+ [ 0 1 a n\ n ~(t) r.. However. = X3(t) = yet). is often well worth avoiding. xn(t) = Inl)(t).19) possesses many nasty numerical properties for even moderately sized n and. y(m) denotes the mth derivative of y with respect to t. Cn \ . be a projection. Further. 2. is often well worth avoiding. .Exercises 121 121 Here. Define a vector x (?) e R" with components *i(0 = y ( t ) . Let P E lR nxn be a projection. = Xn(t) = y(nl)(t). as mentioned before. where !(eat . •. a)xyT. x2(t) = y ( t ) . Define a vector x (t) E ]Rn with Here.I) g(t. . condition.A) = A. However. let a = xTy. Cl. Suppose x. EXERCISES EXERCISES 1. .718P.anlXn(t) + ¢(t). aly(t) . +h a\X+ ao. a)xyT. Let P € R 1. at least for computational purposes.. v (m) denotes the mth derivative of y with respect to t. = O. the companion matrix A in (11. Xn(t) y { n ~ l ) ( t ) . ..718P... X2(t) yet).. Suppose x.. let a = XT y. C M _I] The initial conditions take the form X (0) = C = [co. Show that e'A 1+ g ( t .a)= { a t nxn p if a if a 1= 0. Further.19) The initial conditions take the form ^(0) = c [CQ.
Show that E jRmxn e = [eoI A sinh 1 X ] ~I .. be an eigenvalue of S. Linear Differential and Difference where X e M'nx" is arbitrary.. H (d) Suppose H is Hamiltonian. A matrix A e R 2nx2n is said to be K I AT K = . also eigenvalue of (c) Suppose that H is Hamiltonian and S is symplectic. . (a) Suppose H is Hamiltonian and let). Show that eH must be symplectic. also be an eigenvalue of H. f3 E R and Then show that Then show that ectt _eut cos f3t sin f3t ectctrt e sin ~t cos/A J. Hamiltonian if K~1ATK = A and to be symplectic if K I ATK = A I. be an eigenvalue of H. 6. 4. must (b) Suppose S is symplectic and let). Find eM when A = Find etA = 8.be an eigenvalue of H. Show S~1 H S Hamiltonian. must also be an eigenValue of S. Let 5.be an eigenvalue of S.122 122 Chapter 11.A 1 . ft € lR and Let a. x(O) =[ ~ J. must (a) Suppose E is Hamiltonian and let A.. Let (a) Solve the differential equation (a) Solve the differential equation i = Ax . Find a general expression for Find a general expression for 7. (b) Suppose S is symplectic and let A. Let K denote the skewsymmetric matrix 0 [ In In ] 0 ' In A E jR2nx2n where /„ denotes the n x n identity matrix.A and to be symplectic K~l AT K .. Let denote the skewsymmetric matrix 4. Let a. Linear Differential and Difference Equations Chapter 11. Show that ). must also be an eigenvalue of H. Show that 1/). Hamiltonian. Show that —A. (d) Suppose 5. Show that 1 /A. Show that SI HS must be Suppose and symplectic.
For Europe and Asia. half stays home and half goes to the Americas. Each total assets of $40 trillion of which $20 trillion is in E and $20 trillion is in R. (d) Find the limiting distribution of the $40 trillion as the universe ends. around the time the Cubs win a World Series). The year is 2004 and there are three large "free trade zones" in the world: Asia (A). For Europe and Asia. . what is the value of ZIQOO? What is the value of Zk in general? general? . If £0 = 1 and z\ If Zo = 1 and ZI = 2. and a quarter year half of the Americas' money stays home..e.e. a quarter goes to Europe. k * +00 (i. as k —»• +00 (i. X(O) = c. 11. Suppose that e E"x" is skewsymmetric and let a = \\XQ\\2. (c) Find the distribution of the companies' assets at year k. 10. Consider the n x n matrix initialvalue problem X(t) = AX(t) . half stays home and half goes to the Americas. Each year half of the Americas' money stays home. goes to Asia. a quarter goes to Europe.3. Show that for t > 0. Suppose that A E ~nxn is skewsymmetric and let ex = Ilxol12.e. (b) Find the eigenvalues and right eigenvectors of M. I/X(t)1/2 = ex for all t > 0.X(t)A.Exercises Exercises (b) Solve the differential equation (b) Solve the differential equation i 123 = Ax + b. (a) Find the solution of the initialvalue problem (a) Find the solution of the initialvalue problem .YeO) = O. x(O) =[ ~ l 9. (a) Find the matrix M that gives (a) Find the matrix M that gives [ A] E R =M year k+1 [A] E R year k (b) Find the eigenvalues and right eigenvectors of M. Suppose certain multinational companies have total assets of $40 trillion of which $20 trillion is in E and $20 trillion is in R. (Exercise adapted from Problem 5. 11. (b) Consider the difference equation (b) Consider the difference equation Zk+2 + 2Zk+1 + Zk = O. and a quarter goes to Asia..) (Exercise adapted from Problem 5.Yet) + 2y(t) + yet) = 0. what is the value of ZIOOO? What is the value of Zk in 2. of Cf or all t. i. around the time the Cubs win a World Series). The year is 2004 and there are three large "free trade zones" in the world: Asia (A).) 12.. i. and the Americas (R). Show that the eigenvalues of the solution X t ) of this problem are the same as those Show that the eigenvalues of the solution X ((t) of this problem are the same as those of C for all?. Consider the initialvalue problem i(t) = Ax(t). yeO) = 1. Show that *(OII2 = aforallf > O. Suppose certain multinational companies have Europe (E). Europe (E).3. Consider the n x n matrix initialvalue problem 10. 12. x(O) = Xo for t ~ O.11 in [24]. Consider the initialvalue problem 9. and the Americas (R).e. (c) Find the distribution of the companies' assets at year k..11 in [24]. as (d) Find the limiting distribution of the $40 trillion as the universe ends.
This page intentionally left blank This page intentionally left blank .
Definition 12.2) When the context is such that no confusion can arise.) are the eigenvalues of the associated generalized eigenvalue problem.XB is called a matrix pencil (or pencil of the matrices A and B). B e C" xn The standard eigenvalue problem considered in Chapter 9 obviously corresponds to the special case that B = I. called a generalized eigenvalue.'AB) is called the characteristic polyDefinition 12. eigenvalues for the generalized eigenvalue problem occur pencil — XB problem occur where the matrix pencil A .2. B e enxn" if there exists a scalar 'A.) = det(A .1 The Generalized Eigenvalue/Eigenvector Problem The Generalized Eigenvalue/Eigenvector Problem Ax = 'ABx. Definition 12. A nonzero vector x e C" is a right generalized eigenvector of the pair generalized eigenvector of (A. generalized eigenvalue problem. The polynomial n('A) = det(A — A. In this chapter we consider the generalized eigenvalue problem In we the generalized eigenvalue problem where A. (A. .5) is called the characteristic polynomial of the matrix pair (A. As with the standard eigenvalue problem. 125 125 . eigenvector.3.2. The roots ofn(X. a nonzero vector y e C" is a left generalized eigenvector corresponding to an E en generalized eigenvector eigenvalue 'X if eigenvalue A if (12.4. B). B) with A. B e jRnxn. When A. The matrix A . if x [y] is a right [left] ax [ay] for any eigenvector. e e. A E en Definition 12. As with the standard eigenvalue problem. then so is ax [ay] for any nonzero scalar a E <C.1.3. B) with A. B E C MX if there exists a scalar A E C. called a generalized eigenvalue. and A. e C. corresponds to the special case that B = I. The polynomial 7r(A.1 12.Chapter 12 Chapter 12 Generalized Eigenvalue Generalized Eigenvalue Problems 12. B E E" xn . such that that (12. Similarly.1. Definition 12. characteristic hence nonreal eigenvalues must occur in complex conjugate pairs.'AB is singular. B E enxn. The standard eigenvalue problem considered in Chapter 9 obviously where A. The matrix A — 'AB is called a matrix pencil (or pencil of the matrices A Definition 12.4. hence nonreal eigenvalues must occur in complex conjugate pairs. Remark 12. the characteristic polynomial is obviously real. a. the adjective "generalized" "generalized" standard eigenvalue [y] is usually dropped. B). The roots ofn('A) are the eigenvalues of the associated nomial of the matrix pair (A.1) Ax = 'ABx. and Remark 12. B).
There are two eigenvalues. =I. when B =I. only the case of regular pencils is considered in the remainder of this chapter.0. If det(A — AB) not regular. Note appear. only the case of regular pencils is considered in the remainder of this chapter. {3 = 0. regular. otherwise.126 126 Chapter 12. (12./. All A e C are eigenvalues since det(A — AB) =0. f3 = 0.B) == O. reciprocal Case of reciprocal . Case = ft ^ 0. zero. Generalized Eigenvalue Problems Remark 12.KB always has pencil — AB . At least for the case of regular pencils. There are two eigenvalues. All A E C are eigenvalues since det(A . Case 1: a =I.0.{3 = 0. the pencil A . {3 =I. (3 = O.AB Definition 12. or infinitely many B = I.L)({3 .B. There is only one eigenvalue.3).A. {3 =I. For example. eigenvalues associated with the pencil A .X B is a reciprocal pencil B — n. There are two eigenvalues. That is to say.AHa . I). is singular.5. suppose associated — AB. A similar reciprocal symmetry holds for Case 2. If del (A . I1 and . Case 1: a =I. Note that if AA(A) n J\f(B) ^ 0.L) and there are again four cases to consider. I multiplicity 1). and hence there are n eigenvalues associated with the pencil A .LA) == 0. Case 2: a = 0. There are two eigenvalues. pencil .6. Note that A and/or B may still be singular.(3A) ±.I. it is apparent where the "missing" eigenvalues have "missing" gone in Cases 2 and 3. Case 1: a ^ 0. Associated with any matrix pencil A . 1 and ~. If B = I (or in general when B is nonsingular). in particular.O. 1 Case 3: a =I.6.L = (JL = £. I and ^. Case 4: a = 0./. eigenvalues — AB. While While there are applications in system theory and control where singular pencils appear. If B is singular. f3 = O.a/.O.LA and corresponding generalized eigenvalue problem.3) where a and (3 are scalars.0. Clearly the reciprocal pencil has eigenvalues responding generalized /. there is a second eigenvalue "at infinity" for Case 3 of of . There are two eigenvalues. {3 =I.0. All A 6 C are eigenvalues since det(B — uA) = O. there may be 0. 1 and 0. There are two eigenvalues.AB) and there are several cases to consider. However. It is instructive to consider the reciprocal pencil associated with the example in It reciprocal Remark 12. Case 3: Case 4: = 0. ft ^ O. Case 1: ^ 0. ^ 0. {3 ^ 0. There is only one eigenvalue. k E !!. then rr(A) is a polynomial nonsingular). Generalized Eigenvalue Problems Chapter 12. If = of degree n. Case 4: a = 0. f3 / 0. 1 (of multiplicity 1). with its reciprocal eigenvalue being 0 in Case 3 of the reciprocal pencil B — /.XB. and ~. A — A.XB) is not identically zero. n(X) Remark 12. There are two eigenvalues.O. Case 4: a = 0.A and corAssociated with any matrix pencil — AB is a reciprocal pencil .LA) = (1 ./. There are two eigenvalues. it is said to be singular. Case 2: = 0. A similar reciprocal symmetry holds for Case 2.nA. (3 = 0. Case 2: a = 0./. when B is singular. Then the characteristic polynomial is ft det(A . det(B . However.AB. 1 and O. the pencil A — XB is said to be 12. (3 = O. All A E C are eigenvalues since det(B . ft =I.XB.LA.5. the associated matrix pencil is singular (as in Case N(A) n N(B) =Isingular 4 above). I (of multiplicity 1). f3 = O.5. With A and B as in (12. 1 and 0. Case Case 3: a = 0. the characteristic polynomial is = (I . B k e n. I and O.
canonical forms are available for the generalized Just as for the standard eigenvalue problem. Canonical Forms 127 B is nonsingular. ifx isa right eigenvector of A—XB.7] or [25. Theorem 12. the result follows easily by noting that yH(A — XB) — 0 if and only if yH (A . [7. . Numerical methods that work directly on A and are discussed in standard textbooks on numerical linear algebra. D The first canonical form is an analogue of Schur's Theorem and forms.7. Sec. see. where Ta and TfJ are upper triangular. which is the generally preferred method for solving the generalized eigenvalue problem. since the generalized eigenvalue problem is then easily seen to be equivalent to the standard eigenvalue problem B. the eigenvalues of the problems A — XB and QAZ — XQBZ are the same (the two 1.7]. in fact.AQBZ) = det[Q(A . the eigenvalues ofthe pencil A — XB are then the ratios of the diagonal elements of Ta to the corresponding diagonal elements of TfJ . Since det 0 and det Z are nonzero. the The first canonical form is an analogue of Schur's Theorem and forms. det(QAZ . with the understanding that a zero diagonal element of Tp corresponds to an infinite generalized eigenvalue.AB). 6.l W AW). Q~H y isa lefteigenvectorofQAZ — XQBZ. see.. which is the generally preferred method for theoretical foundation for the QZ algorithm.2 12. canonical forms are available for the generalized eigenvalue problem. Theorem 12.XB)Z] = detQ det Z det(A . ify isa left eigenvector of A —KB. Sec. Since the latter involves a pair of matrices.AB)Z] = det gdet Zdet(A 1. the eigenvalues of the problems A . [7. Then 1. to ifx is a Zl x is a righteigenvectorofQAZAQB Z. Then there exist unitary matrices Q.2. There is also an analogue of the MurnaghanWintner Theorem for real matrices. o.7]. in fact. and the first theorem deals with what happens to eigenvalues lencies rather than similarities. [7. the pencil A fewer than eigenvalues. QBZ = TfJ . However. work directly on A and B are discussed in standard textbooks on numerical linear algebra. and eigenvectors under equivalence. then QHy isa left eigenvector ofQAZ AQBZ. 6. 7. 0 ( Q ~ H y)H Q(A X AB)Z = O. this turns out to be a very poor numerical procedure for handling the generalized eigenvalue problem out to be a very poor numerical procedure for handling the generalized eigenvalue problem if is even moderately ill conditioned with respect to inversion. fewer than n eigenvalues. the result follows. B E Cnxn Then there exist unitary matrices Q. Sec. There is also an analogue of the MurnaghanWintner Theorem for real matrices. the eigenvalues of the pencil A . Let A. [7. E nxn with Q and nonsingular. see. Z e Cnxn with Q and Z nonsingular.Oif andonly if Q(AXB)Z(Z~lx) = 0. c 3.AB are then the ratios of the diagBy Theorem 12. where Ta and Tp are upper triangular. that a zero diagonal element of TfJ corresponds to an infinite generalized eigenvalue. of AAB. 7. Let A. this turns to the standard eigenvalue problem B~1Ax = Xx (or AB~1w = Xw). Since det Q XB). Again. Let A. the theoretical foundation for the QZ algorithm.7] or [25. lencies rather than similarities.7]. 6. Q. the pencil A AAB always has precisely n . for example.7.12.2.7. 2. By Theorem 12. If B is nonsingular.l Ax Ax (or AB. Proof: Proof: 1. 7.2 Canonical Forms Canonical Forms Just as for the standard eigenvalue problem. Let A. 2. The result follows by noting that (A AB)x = 0 if and only if Q(A AB)Z(Zl x) = The result follows by noting that (A –yB)x . Sec. f i always has precisely eigenvalues. and det Z are nonzero. 
then Z~lx isa right eigenvector of QAZ—XQ B Z.7].AB and QAZ . 3.AB) o if and only if (QH y ) H Q ( A –_ B ) Z = Q.7] [25. B e cnxn .AQBZ are the same (the two problems problems are said to be equivalent). However. Numerical methods that if B is even moderately ill conditioned with respect to inversion. since the generalized eigenvalue problem is then easily seen to be equivalent eigenvalues. B. 6. Z e Cnxn such that 12. with the understanding onal elements of Ta to the corresponding diagonal elements of Tp.8.7. fl. the result follows. for example. for example. Q. for example. Sec. Sec. Canonical Forms 12. we now deal with equivaa matrices. E c nxn such that QAZ = Ta . and the first theorem deals with what happens to eigenvalues and eigenvectors under equivalence. Then 12.7] or [25. 7. det(QAZXQBZ) = det[0(A . see. solving the generalized eigenvalue problem. ify is a left of AB. 12.8. Sec.
Then there exist orthogonal matrices Q.9. of eigenvalues are given as above by the ratios of diagonal elements of S to corresponding elements of T. the 2 x 2 subpencil formed with the corresponding fonned 2 x diagonal subblock 2x2 2 diagonal subblock of T has a pair of complex conjugate eigenvalues. Let A. Let A.. B e c mxn .12 mxm nxn mxm nxn E C nonsingular nonsingular matrices P e c and Q e c QE C such that peA .. When S has a 2 x 2 diagonal block. quasiuppertriangular.XB is regular.2)2 with characteristic polynomial (A — 2)2 has a finite eigenvalue 2 of multiplicty 2 and three 2 2 infinite eigenvalues.A [~ ~ l of . [2o I o o o 0 0 0 0 0 2 0 0 1 0 0 1 0 0 ~ ]> [~ 0 I 0 0 0 0 0 0 0 0 o o 0 I 0] 0 0 0 0 (X . mxn E C • Theorem 12. Let A.fi and canonical form nilpotent matrix of associated and N is a nilpotent matrix of Jordan blocks associated with 0 and corresponding to the infinite infinite eigenvalues of A . E jRnxn 12. . In this chapter. including analogues of principal vectors and description of of so forth. B e Cnxn and suppose the pencil A . form (KCF).11.'.AB)Q = [~ ~ ] . I . Then there exist 12.128 Chapter 12.AB where J is a Jordan canonical form corresponding to the finite eigenvalues of A A.. QBZ = T.A.• L.)"N). .AB. The first theorem pertains only to "square" regular pencils. Example 12. we present only statements of the basic theorems and some examples. Generalized Eigenvalue Problems Theorem 12.I.AB)Q = diag(LII' . Generalized Eigenvalue Problems Chapter 12. The matrix pencil 12. real eigenvalues.12 (Kronecker Canonical Form). while the full KeF in all its generality applies also to "rectangular" and singular KCF "rectangular" pencils. where T is upper triangular and S is quasiuppertriangular. B E c nxn pencil — AB Theorem 12. L l" L~. There is also an analogue of the Jordan canonical form called the Kronecker canonical fonn Kronecker form (KeF). T.. Otherwise. Q € c nxn"such that nonsingular E C" such that peA . KCF. A full description of the KeF. .11. is beyond the scope of this book.10.9. J . Then there x exist nonsingular matrices P. of — XB. Z e R"xn such B E jRnxn. B e Rnxn. thnt that QAZ = S.
. n(S)) = S. Lo. Lo L6 one column.13. Then is deflating subspace for the pencil A AB if and only if there exists M E Rkxk such that e ~kxk AS = BSM. Example 12.2. Canonical Forms 129 where N is nilpotent. suppose S e Rn* xk is a matrix whose columns span a kdimensional E ~nxk ^dimensional subspace S of ~n. Such a matrix is in KCF. (12. next two correspond to correspond J = 21 0 2 [ o 0 while the nilpotent matrix N in this example is N [ ~6~]. both Nand J are in Jordan canonical form. Left Left or right minimal indices can take the value O. both N and J are in Jordan canonical form. 000 Just as sets of eigenvectors span Ainvariant subspaces in the case of the standard eigenvectors eigenproblem (recall Definition 9.e.e. LQ. Then SS is aadeflating subspace for the pencil A . Specifically. 0. there is a matrix characterization of deflating subspace. LQ. Let A. Canonical Forms 12. Definition 12. L6. B e Wlxn and suppose the pencil A .14.The next two blocks second block L\ one the block is L\.. there is an analogous geometric concept for the eigenproblem generalized eigenproblem. are called the right minimal indices. where each LQ has "zero columns" and one row.XB is regular. i.35).4) eigenvalue characterization Just as in the standard eigenvalue case. i. and L^ is the (k + I) x k where N is nilpotent. The first block of zeros actually corresponds to LQ. Lo.5) . corresponds LQ. while each LQ has "zero rows" and L6. LQ .2.12. The /( are called the left minimal indices while the r. generalized eigenproblem. (12. Consider a 13 x 12 block diagonal matrix whose diagonal blocks are A 0] I o A I .— XBif S Rn. R ( S <S. Then V is a E ~nxn suppose pencil — AB deflating subspace if deflating subspace if dim(AV + BV) = dimV. Lo. The second block is L\ while the third block is LI. and Lk is the (k + 1) x k bidiagonal pencil bidiagonal pencil A 0 0 A Lk = 0 0 0 0 A I The Ii are called the left minimal indices while the ri are called the right minimal indices.
and E jRPxm.4) becomes dim (A V + V) = dim V. Generalized Eigenvalue Problems If B = /. D=O. However. we offer some insight below into the special case of a singleinput. for example. one must be well for general mUltiinput.15. there AV ~ V. Numerically. However. Example 12. Numerically. the (finite) zeros of this system are given by the (finite) complex numbers In general. = Cx + Du E jRnxn. In the special case p = m. [26]. multioutput systems.6). u is the vector of inputs or controls. which has a root at —2.6)). lEthe pencil is not regular. then (12. where x(= x(t)) is called the state space model is often used in multivariable control theory.8. For details. [26]. 12.3 Application to the Computation of System Zeros Application to the Computation of System Zeros i y Consider the linear system Consider the linear svstem = Ax + Bu. there is a concept analogous to deflating subspace called a reducing subspace. is a concept analogous to deflating subspace called a reducing subspace. for example.4) becomes dim(AV + V) = dimV. and y is the vector of outputs or observables. Checking the finite eigenvalues of the pencil (12. In the special case p = m. This linear with A € M n x n .3 12.8.5) becomes AS = SM as before. E jRPxn. Let Example 12. Then the transfer matrix (see [26]) of this system is Then the transfer matrix (see [26)) of this system is g(5)=C(sIA)'B+D= 5 55 2 + 14 ' + 3s + 2 which clearly has a zero at —2. where the "system pencil" (12. and y is the vector of outputs or observables. and D € Rpxm. we find the characteristic polynomial to be find the characteristic polynomial to be det [ which has a root at 2. Let A=[ 4 2 C = [I 2].6) drops rank. The method of finding system zeros via a generalized eigenvalue problem also works The method of finding system zeros via a generalized eigenvalue problem also works well for general multiinput. This linear timeinvariant statespace model is often used in multivariable control theory. where x(= x(t)) is called the state vector.6».8.5) becomes AS = SM as before. B] .130 Chapter 12. The connection between system zeros and the corresponding system pencil is nonThe connection between system zeros and the corresponding system pencil is nontrivial. This is accomcareful first to "deflate out" the infinite zeros (infinite eigenvalues of (12. (n + m) x (n + m) pencil. we which clearly has a zero at 2. E jRnxm.15. the (finite) zeros of this system are given by the (finite) complex numbers where the "system pencil" z. these values are the generalized eigenvalues of the drops rank. we offer some insight below into the special case of a singleinput. however.6). (12. then (12. Similarly. however. For details. (12. see. zeros). This is accomplished by computing a certain unitary equivalence on the system pencil that then yields a plished by computing a certain unitary equivalence on the system pencil that then yields a smaller generalized eigenvalue problem with only finite generalized eigenvalues (the finite smaller generalized eigenvalue problem with only finite generalized eigenvalues (the finite zeros). If the pencil is not regular. Checking the finite eigenvalues of the pencil (12. see. one must be careful first to "deflate out" the infinite zeros (infinite eigenvalues of (12. B € R" xm . these values are the generalized eigenvalues of the (n + m) x (n m) pencil. multioutput systems. In general. Similarly. which is clearly equivalent to If B = I. vector. C e Rpxn. Ac M D "'" 5A + 14. which is clearly equivalent to AV c V.8. trivial. 
u is the vector of inputs or controls.
9) Substituting this in (12.n. C = c T E R l x n . there are no "pole/zero cancellations").8).A to the standard eigenvalue problem Bl1Ax = AJC. the problem (12. we have Substituting this in (12. g.nxn A AT and B the B1 0.10) for A. B E Rnxn arises when A = A and B = BT > O. Symmetric Generalized Eigenvalue Problems 12.9». z is a zero of g.4 12. Symmetric Generalized Eigenvalue Problems 131 131 1 singleoutput system.9)).4 Symmetric Generalized Eigenvalue Problems Symmetric Generalized Eigenvalue Problems Ax = ABx A very important special case of the generalized eigenvalue problem (12.10) is equivalent B. (12. Hence g(z) = 0. of the Since B is positive definite it is nonsingular. B~11A is not necessarily B~ Ax = AX. . then from (12. the problem (12.l xn.8) c T x +dy = O. e ffi.e. let g(. or g(z)y 0 by the definition of g." is a frequently employed model of structures or vibrating systems and yields a frequently generalized eigenvalue problem ofthe form (12.e.4.zI cT b ] d is singular. we have _c T (A .. the secondorder A. system of differential equations differential Mx+Kx=O.10) is equivalent Since B is positive definite it is nonsingular.zl)x + by = 0.s) = c (s I — A) 1 b + d c function and assume that g(s) can be written in the form and assume that g ( s ) can be written in the form v(s) g(s) = n(s)' polynomial A. B e ffi. b e ffi.10). Specifically. 12. "pole/zero cancellations"). or g ( z ) y = 0 by the definition of g. Thus.. A pole/zero Assuming z is not an eigenvalue of A (i.e. and D e R r T(s7 . relatively where n(s) is the characteristic polynomial of A.4.. symmetric. For example.7) we get get x = (A . Now y ^ 0 (else x z i. However. M K where M is a symmetric positive definite "mass matrix" and K is a symmetric "stiffness definite "stiffness matrix. no pole/zero cancellations). (12.7) (12.12. Thus. let B = b E Rn.A)~ ! Z? + d denote the system transfer function (matrix). Suppose z € is such that Suppose Z E C is such that [ A . Now _y 1= 0 (else x = 0 from (12. Hence g(z) 0. Then there exists a nonzero solution to or or (A . g(s) Furthermore.zl)lby. 0 from (12.8).zl)lby + dy = 0. and v(s) and n(s) are relatively prime TT(S) v(s) TT(S) (i. and D = d E R.
Moreover. Let A = [~ . so the eigenvalues are positive. (12. the eigenproblem (12. Xj)B T T = xr BXj = (zi L ~l)(LLT)(L ~T Zj) = Dij. then = C T > 0.18.. Then the eigenvalue problem (Theorem 10. Finally. the eigenvalues of B l A are always real (and are approximately 2. zi Then x. The Cholesky factor for the matrix B in Example 12.. The material of this section can. B e jRnxn A AT and B BT > O. Theorem 12. be generalized easily to the case where A material of can. the eigenvalues are also all positive. it has a Cholesky factorization B = LL T.132 132 Chapter 12. but since realvalued matrices are commonly used in most applications.5 ' 3. Zn Zj = Dij.17. B E Rnxn with A = AT and B = BT > 0. are eigenvectors of the original generalized eigenvalue problem and satisfy and satisfy (Xi. if A = A > 0. if A > 0..23). the eigenvalue problem eigenvalue problem Ax = ABx has n real eigenvalues.12) Since C = C T the eigenproblem (12.16. Let A Example 12. then product y) x T By.. we have restricted our attention to that case only. but since realvalued matrices are commonly used in most applications. so the eigenvalues are positive.fi Then it is easily checked that Then it is easily checked thai c = L~lAL~T = [ 0. then C = C T > 0... Generalized Eigenvalue Problems Chapter 12. .12) has n real eigenvalues.16 is Example 12. .11) can then be rewritten as AL J and Z = LT x. positive.. and are Hermitian. (12. where L is nonsingular Proof: Since B > 0.18. y)BB = XT By. with corresponding eigenvectors zi. Then the eigenvalue problem Ax = ABx = ALL Tx (12.1926 and —3.11) can be rewritten as the equivalent problem 1 Letting C = L ~I AL ~T and z = L1 x. (12. Example 12. = L ~Tzi. ii € n.5 2.11) can then be rewritten as = Cz = AZ. Moreover.. Finally.12) has n real eigenvalues.1926 whose eigenvalues are approximately 2. . and the n corresponding right eigenvectors can be chosen to be orthogonal with respect to the inner product (x.1926 as expected. of course. Let A.5 2. l = [i ~ J B ThenB~ A Then A B~Il = [~ ~ J B~I A approximately Nevertheless. zn satisfying vectors Z I. Then the generalized A. E !!.1926 in Example 12. Generalized Eigenvalue Problems Example 12..fi 1] .. with corresponding eigenSince C = C T.16). The Cholesky factor for the matrix B in Example 12.16.23).1926 and 3.. if A = AT> 0. we have restricted our attention to that case only. it has a Cholesky factorization B = LLT. Proof: Since B > 0. •. are eigenvectors of the original generalized eigenvalue problem Xi Zi. generalized case A and B are Hermitian.16 is D 0 L=[~ .5 ] 1. where L is nonsingular (Theorem 10. if orthogonal > 0.
Proof: Let B = LLT be the Cholesky factorization of B and set C = L~1AL T. LetA QT AQ and B QT Then/HA Q~ B.e.19 is very useful for reducing many statements about pairs of symmetric Theorem 12. i. since QDQ~l have A(D) = A(B~1A).5. when L is highly iII conditioned with respect to inversion.19 (Simultaneous Reduction to Diagonal Form). Then there exists a nonsingular matrix Q such that A = AT and B = BT > 0. the diagonal elements of D are the eigenvalues of B 1A. To illustrate. haveA(D) = A(B.e.. But then D"1I :::: [(this is trivially true 10. Simultaneous Diagonalization 133 12.< / (this is trivially true 0 since the two matrices are diagonal). In particular. D 2: [. e.. QD~ QT < QQT.19.19 e ][~nxn A AT and B BT > O. Proof: By Theorem 12. e. B) can be simultaneously diagonalized by the same matrix.5. let such cases. There are many matrices (A.19 is numerically problematic. = QQT AQQ~l = LTPPTL~IA = L~TL~1A L T P pT L 1 A L T L I A QQT AQQI 0 D = B1A.. It turns out that in some cases a pair of trices can be diagonalized by a unitary similarity. To illustrate. Then there exists a nonsingular matrix Q such that where D is diagonal." The following is typical. Let Q = L~T P. Since LLT be the Cholesky factorization of and setC L I AL~T. D > I. with the complex case following in a straightforward way.'AB. where D is diagonal. D since the two matrices are diagonal). In such cases. This can be seen directly. it does preserve the eigenvalues of A . Thus. In particular. simultaneous reduction can also be accomplished via an SVD. Let A. B E lRnxn be positive definite. Then A 2: B if and only if B~ 2: AI. It turns out that in some cases a pair of matrices (A.5 12.20. This can be seen directly. In fact.1A.lI QT :::: Q QT. since A 2: B. \ 2. we restrict our attention only to the real case. let . so it does not preserve eigenvalues of and B Note that Q is not in general orthogonal.31. Theorem 12.20. there exists Q E lR~xn such that QT AQ = D and QT BQ = [. B E E"x" with 12.g. normal matrices can be diagonalized by a unitary similarity." The following is typical. with the complex case following in a Again. Infact. Now D > 0 by Theorem 10. In numerically problematic. simultaneous reduction can also be accomplished via an SVD. straightforward way. we B~ 1 A. matrices to "the diagonal case. Then B. since QDQI Finally.5 Simultaneous Diagonalization Simultaneous Diagonalization Recall that many matrices can be diagonalized by a similarity.l Q~T QT Q~ B~ AQ.. Again. when L is highly ill conditioned with respect to inversion. A I < B~ . i. Also.1AQ. where D is C is symmetric. There are many such results and we present only a representative (but important and useful) theorem here. it does preserve the eigenvalues of A — XB.5. A1.12. there exists Q e E"x" such that QT AQ = D and QT BQ = I. there exists an orthogonal matrix P such that pTe p = D.19 is There are situations in which forming C = L~1AL~T as in the proof of Theorem 12. i.e. = pT L I(LLT)L T P = pT P = [. individually.21 we have that QT AQ > QT BQ.1A). Let A = QT AQandB = QT BQ. so it does not preserve eigenvalues of A and B individually.T P. Then A > B if and only if Bl1 > Theorem 12. Since Proof: Let T C is symmetric. where D is diagonal. Now D > 0 by Theorem 10. Then diagonal. the diagonal elements of D are the eigenvalues of B. Let A. B) can be simultaneously diagonalized by the same matrix.31. 
we Note that Q is not in general orthogonal.5.1 Simultaneous diagonalization via SVD Simultaneous diagonalization via SVD There are situations in which forming C L I AL T as in the proof of Theorem 12. i. Q D. A~l :::: Bl1.1A = Q1l B~1QT QT AQ = Q11B. Simultaneous Diagonalization 12. Let A. there exists an orthogonal matrix P such that P CP = D. However. Proof: By Theorem 12. Let Q = L . B e M" xn be positive definite.1 12. such results and we present only a representative (but important and useful) theorem here. since A > B.19. Thus. we restrict our attention only to the real case. where D is diagonal.e.g. by Theorem where D is diagonal. Then and and QT BQ Finally.. However. by Theorem 10. Theorem 12.. But then D.19 is very useful for reducing many statements about pairs of symmetric matrices to "the diagonal case. Theorem 12. Also.21 we have that QT AQ 2: QT BQ. normal maRecall that many matrices can be diagonalized by a similarity.
For example. [7. for LB i. respectively.13)) and LB separately. 8. A can be written as A = PDP T.3].134 134 Chapter 12. Then the matrix Q U performs the simultaneous L e 1R~ xn diagonalization.e. This is analogous to finding the singular values of a matrix M by Sec.. A straightforward. see. for generalizations results 12.15) is called a generalized singular value problem and algorithms exist to problem generalized solve it (and hence equivalently (12.15) The problem (12. eigenproblem MT M x Xx. Various generalizations of the results in Remark 12. Generalized Eigenvalue Problems us assume that both A and B are positive definite. example. (12. The case when A is symmetric but indefinite is not so A = AT::: O. which is thus equivalent to the generalized eigenvalue problem ALBL~LBT z. Sec. which is thus to the generalized eigenvalue problem 02.21. Remark 12. but in writing = PDDp D diagonal. let A = LAL~ and B = LsLTB be Cholesky factorizations of A and B. Compute the SVD Cholesky factorizations A B. D may have pure imaginary elements. Remark 12.e. Generalized Eigenvalue Problems Chapter 12..21 are possible. when A = AT > 0.butin writing A — PDDP T = PD(PD) with D is diagonal and P orthogonal.14) rewritten the LAL~x = ALBz = A L g L ^ L g 7 z . without forming the products LALTA or LBLTB explicitly.14) can be rewritten in the form LALAx = XLBz = Letting x = LBT Z we see 02.7. PDPT ~ ~ ~ ~ T PD(PD{ with where Disdiagonaland P is orthogonal.13) where E E R£ x " isisdiagonal. Further. i. D b . The SVD in (12. respectively.13» via arithmetic operations performed only on LA LA (12.22.21 example..13) can be computed without explicitly forming the without Remark product indicated matrix product or the inverse by using the socalled generalized singular value decomposition (GSVD). To check this. note that T QT AQ = U Li/(LAL~)Li/U = UTULVTVLTUTU i/ = while L2 QT BQ = U T LB1(LBL~)Li/U = UTU = I. Further.14) Letting x = LB z we see that (12. operations performed directly on M rather than by forming the matrix MT M and solving performed MT forming the eigenproblem MT MX = AX. Then the matrix Q == LLBTu performs the simultaneous diagonal. at least in real arithmetic. Note that LB A and thus the singular values of L B 1 LA can be found from the eigenvalue problem 02. let A = LALTA and B — LBL~ us assume that both A and B are positive definite. products LA L ~ LBL~ see.
Since the determinantal equation o = det(A 2 M + AC + K) = A2n + . or if it is desired to avoid the calculation of M lI because M is too ill conditioned with respect to inversion. since eAt :F 0.e. there are 2n eigenvalues for the secondorder (or A2 M + AC + K.6.6.6 12.1 12. C.. . we thus seek values of A. n.2M + A. Suppose K = KT. yields a polynomial of degree 2rc.. ::: ILr ::: 0 > ILr+ I ::: .16) of the p A are to be determined. Assume for simplicity that M is nonsingular.16) Consider the secondorder system of differential equations Consider the secondorder system of differential equations q(t) E ~n E ~nxn. where the nvector p and scalar A..1 Conversion to firstorder form Conversion to firstorder form Let x\ = q and \i = q. are to be determined. by analogy with the firstorder case.12. for which the matrix A. and = = KT. r. 12. .6.2 K are are ± jWk. K e Rnxn. (A 2 M + AC + K) p = O. Then (12. (12. . A special case of (12. Suppose K has eigenvalues eigenvalues IL I ::: . Substituting in q(t) = eAt p.. Suppose.. .16) can still M secondorder generalized linear be converted to the firstorder generalized linear system converted I [ o M OJ'x = [0 K I C Jx.C + K.. and A special case of (12. Substituting in form q(t) = ext p.16) can be written as a firstorder system (with block Let XI q and X2 Then (12.16) we get (12.6 HigherOrder Eigenvalue Problems HigherOrder Eigenvalue Problems Mq+Cq+Kq=O.C + K is singular. seek A A2 M + AC + To get a nonzero solution /?... C = 0.6. If r = n (i..e. HigherOrder Eigenvalue Problems 12. polynomial 2n. that we try to find a solution of (12.. If r n (i.16) or. where q(t} e W1 and M.16) can be written as a firstorder system (with block companion matrix) X . E2". If M is singular.16) arises frequently in applications: 0.. then all solutions of q + Kq = 0 are oscillatory. ::: ILn· Let a>k = IILk I!.2M + A.• Then the 2n eigenvalues of the secondorder eigenvalue problem A2 I /+ K Let Wk =  fjik 12 Then the 2n eigenvalues of the secondorder eigenvalue problem A. HigherOrder Eigenvalue Problems 135 12. M Mwhere x(t) €.. p. k = 1. then all solutions of q K q 0 are oscillatory.16) arises frequently in applications: M = I. (12. = [ M1K 0 x (t) E ~2n. K = KT ::: 0). quadratic) eigenvalue problem A. ± Wk. k = r + 1. . Since the determinantal equation is singular. KT > 0). the secondorder problem (12.
Show that the generalized eigenval". derivative q. Are the FG and GF the 3. Show that the nonzero eigenvalues of FG and GF are the same. In the parlance of control theory. say. Show that the generalized eigenvalues of the pencils ues of the pencils e e [~ ~JA[~ ~J and and [ A + B~ + GC ~] _ A [~ ~] are identical for all F E E"1xn and all G E R" xmm . C. Show that the finite generalized eigenvalues of E lR " finite eigenvalues of e R™ x m the pencil [~ ~JA[~ ~J are the eigenvalues of the matrix A — BD 1 C.19). F 6 Rm *" G R" x . G E enxn".. Similar procedures hold for the general k\horder difference equation order difference equation which can be converted to various firstorder systems of dimension kn. properties Higherorder analogues of (12. Suppose A e Rnxn and D E lR::! xm. Let F e Cnxm . the kth derivative of q.B D. . In the parlance of control theory. andlor K Many other firstorder realizations are possible.16) involving. Some can be useful when M. Generalized Eigenvalue Problems Chapter 12. G e Cmxn • Are the nonzero singular values of FG and GF the same? same? wx E ]Rnxn. to higherorder eigenvalue problems that can be converted to firstorder form using a kn x kn to higherorder eigenvalue problems that can be converted to firstorder form using aknxkn block companion matrix analogue of (11. (A similar result is also true for "nonsquare" pencils. B e lRn*m. Hint: Consider the equivalence I G][AUO F0]' B][I l [01 C (A similar result is also true for "nonsquare" pencils. Similar procedures hold for the general kthblock companion matrix analogue of (11. and/or K have special symmetry or skewsymmetry properties that can exploited.) . and C e lRmxn. Let F.19). E Rnxm and E E 4. Some can be useful when M.. verify Hint: An easy "trick proof is to verify that the matrices "trick proof' [Fg ~] and [~ GOF ] are similar via the similarity transformation are similar via the similarity transformation Let F E nxm G E mx ". Generalized Eigenvalue Problems Many other firstorder realizations are possible. Suppose A € Rnxn. EXERCISES EXERCISES nx 1. Let € C M X • Show that the nonzero eigenvalues of and G F are the same.136 136 Chapter 12. C.1 2. which can be converted to various firstorder systems of dimension kn. such results show that zeros are invariant under state feedback or output injection. lead naturally naturally involving.
positive. Another family of simultaneous diagonalization problems arises when it is desired Another simultaneous diagonalization problems operates that the simultaneous diagonalizing transformation Q operates on matrices A. and let UWT be an SVD of L~LA'. positive Cholesky = LA L ~ = L B L ~. (b) Show that Q~l = ^~^UT LTB. Such QT BQ a transformation is called contragredient. respectively. A and B to the same diagonal matrix. Consider the case where both A and transformation contragredient. respectively. Ql = ~!UTL~.2 and hence are AB E2 positive. (c) Show that the eigenvalues of A B are the same as those of 1. B E e jRnxn Ql AQT ]Rnx" in such a way that Q~l AQ~T and QT BQ are simultaneously diagonal. A B B are positive definite with Cholesky factorizations A = L<A and B = L#Lg. .Exercises Exercises 137 137 desired 5. and let U~VT be an SVD of LTBLA (a) Show that Q = LA V £ ~ 5 is a contragredient transformation that reduces both contragredient = LA V~! A and B to the same diagonal matrix.
This page intentionally left blank This page intentionally left blank .
1 13. Let A e R mx ".1 Definition and Examples Definition and Examples Definition 13. the same definition holds if A and B are complexvalued matrices.1. Example 13. 2B 2B ~J.. 4 3 4 3 4 9 4 2 6 2 6 6 6 2 2 Note that B @ A i. pointing out the extension to the complex case only where it is not obvious.. pointing out the restrict our attention in this chapter primarily to realvalued matrices. Forany B e!F pxq /z @ B = [~ In Replacing 12 by /„ yields a block diagonal matrix with n copies of B along the I2 diagonal with n copies of along the diagonal.1) amnB Obviously.2. We Obviously.A @ B. Foranyfl E lRX(7. n 2. Example 13. / 2 <8>fl = [o ~ l\ 2. Let A = [~ 2 2 nand B = [.2. the same definition holds if A and B are complexvalued matrices. 1. Then A@B =[ 3~ ~]~U J. Let A E lRmxn B E R Definition 13.. Note that B <g> A / A <g> B. (13. Then 0 b ll b12 B @/z = l b" b~l 139 0 b2 2 0 b21 0 0 b12 0 b 22 l . Then 3. We restrict our attention in this chapter primarily to realvalued matrices.Chapter 13 Chapter 13 Kronecker Products Kronecker Products 13. Let B be an arbitrary 2x2 matrix. extension to the complex case only where it is not obvious.1. Then the Kronecker product (or tensor Then the Kronecker product (or tensor product) of A and B is defined as the matrix product) of A and B is defined as the matrix allB A@B= [ : amlB alnB ] : E lRmpxnq. B e lR pxq. Let B be an arbitrary 2 x 2 matrix.
2 13. C E R" x ^ and D E ~sxt. xmYnf E !R. Let A e R mx ".3. . B e ~rxs.. (A ® B)I = Bare 13. Simply verify that ~[ =AC0BD.5. Then 13.1. Theorem 13. Then X ® Y = [ XIY T . XIYn. . Then 13. Let E ~mxn.3. 0 .140 Chapter 13. and D e Rsxt. (A ® Bl = AT ® BT. (13.6. X2Yl.3.m xm are symmetric. 5. B In x E ~m.1 ) Theorem 13. 4.kCkPBD L~=1 amkckpBD ] 0 Theorem 13. A® 13. E R". y e !R.2) Proof: Simply verify that Proof. 5 E R r x i . . Let* eR m . simply note that (A ® B)(A 1 ® B. Proof: Proof: Using Theorem 13. If AI ® B. If E ]Rn xn e Rmxm are Theorem 13. Foral! Proof' Proof: For the proof. If A e R"xn and B E !R..6. simply verify using the definitions of transpose and Kronecker verify transpose Kronecker 0 product. then A® B is symmetric.2 Properties of the Kronecker Product Properties of the Kronecker Product (A 0 B)(C 0 D) = AC 0 BD (E ~mrxpt).. . Kronecker Products Kronecker Products The extension to arbitrary B and /„ is obvious.3.. . . Let Jt € Rm. mn . XmY T]T = [XIYJ. D Corollary 13.. L~=l al. C e ~nxp. For all A and B. If A and B are nonsingular.5.4.. y eR". = 1 ® 1 = I.n.
.12. In general. Let A E lR.m are linearly independent right corresponding to JJL\ . • • zq independent of to A . we can take p = nand q = m and n and q —m and If A and B are diagonalizable in Theorem 13. xp are linearly independent right eigenvectors of A corresponding Moreover. TTzen ?/ze mn eigenvalues of A 0 Bare Moreover.13. . elements of ~A 0 ~B and the corresponding right and left singular vectors). and let eigenvalues jJij.. . xp are linearly independent right eigenvectors of A corresponding AI. if A and fi have Jordan form .. ••.• :::: U rr > 0 and let B E IRfx Corollary e R™x" singular a\ > • • > a > e have singular values T\ > • • > <s > 0. if x\. :::: U rTs > 0 and ^iT\ > • • • > ffr <s Qand rank(A 0 B) = (rankA)(rankB) = rank(B 0 A) . Lgf A E E mxn have a singular value decomposition VA ~A Theorem 13. Sine] and B .8.. If Corollary 13.. 0 If A and Bare diagonalizable in Theorem 13. Properties of the Kronecker Product Theorem 13.2.. Example 13.10. then Xi <8> Zj ffi. . Then the mn eigenvalues of A® B are eigenvalues JL j. If A e IR nxn am/ B E IR mxm are normal./u.7. A. then A <g> B is € IR nxn orthogonal and e IR m x m 15 then 0 is orthogonal.10.2.7. <I :::: ... then A 0 B is normal. Let A E R nx "have eigenvalues Ai. Let A G IR mx " have a singular value decomposition l/^E^Vj an^ let and /ef singular decomposition UB^B^BB e IR pxq fi E ^pxq have a singular value decomposition V B ~B VI.. j € m. \Ju (q ::::: m). In general. Theorem 13.Zq are linearly independent right eigenvectors of B corresponding to JLI. we can take p thus get the complete eigenstructure of A 0 B.4 by Theorem 13. .c.3.• :::: TS > O..JLqq (q < m). q Corollary 13.[Cos</> cos</>O Then It IS easl'1y seen that .j.. i / E e!!.i . then . The 4 x 4 orthogonal e±j9 orthogonal eigenvalues e±j(i>.12.12. i E l!! 7 E 1· Proof: proof Proof: The basic idea of the proof is as follows: follows: (A 0 B)(x 0 z) = Ax 0 Bz =AX 0 JLZ = AJL(X 0 z)..3 since A and B are normal by Theorem 13. If A E E"xn is orthogonal and B E Mmxm is orthogonal. L et A E xamp Ie 139 Let A = [ _eose cose andB .11.. Then vI yields a singular value decomposition of A <8>B (after aasimple reordering of the diagonal yields a singular value decomposition of A 0 B (after simple reordering of the diagonal elements O/£A <8> £5 and the corresponding right and left singular vectors). 141 141 Proof: Proof: (A 0 B{ (A 0 B) = (AT 0 BT)(A 0 B) = AT A 0 BT B = AAT 0 B BT by Theorem 13. eigenvectors of A® B corresponding to A.. . Ap (p ::::: and ZI.. . matrix A ® 5 is then also orthogonal with eigenvalues e^'^+'W and e ± ^ (6> ~^ > \ Theorem 13. j e q.• sin e = _ sin</> Sin</>] Then it is easily seen that A is orthogonal with eigenvalues e±jO and B is orthogonal with eigenvalues e±j</J."xn have singular values UI :::: ... Properties of the Kronecker Product 13. if Xl.and let BB E e IRR mxwhave e IR nxn have eigenvalues A.. if A and B have Jordan form thus get the complete eigenstructure of A <8> B. then A® B is normal. .n. Then A <g)B (or B 0<8> A) has rs singular values U..p (p < n)..•.8.. and zi. A0 B e±jeH</» e±jefJ </».... If A E IR"xn and B eRmxm are normal.9. 0 Zj E€ IR mn "are linearly independent right eigenvectors of A 0 B corresponding to Ai JL 7 i e /?. = (A 0 B)(A 0 B)T 0 Corollary 13. 7 E m. . Then A 0 B (or B A) has rs singular values have singular values <I :::: . . mxm /zave Theorem 13.. ..
1. A EEl B ^ B EEl A. respectively.1 AP) ® (Ql BQ) = JA ® JB · Note that h ® JR. Kronecker Products Chapter 13. Then reducing A and B to real Schur form).13.13. det(A ® B) = (det A)m(det Bt = det(B ® A). A ® B i= B © A. Kronecker Products decompositions given by p. ~l 2 2 1 3 AfflB = (h®A)+(B®h) = 1 3 0 1 0 4 0 3 0 0 0 0 0 0 0 0 0 0 0 0 2 2 0 0 0 3 4 2 0 0 2 0 0 2 0 0 2 0 0 0 1 0 0 + 0 2 0 0 2 0 0 0 0 3 0 0 0 3 0 0 0 3 The reader is invited to compute B 0 A = (/3 ® B) + (A 0 h) and note the difference The reader is invited to compute B EEl A = (h ® B) (A <g> /2) and note the difference with A © B. Note that. is the mn x mn matrix Urn <g> A) + (B ® In). Then 13. 2. respectively. suppose P and Q are unitary matrices that reduce A and B. i. then we get the decompositions given by P~lI AP = J A and Ql BQ = JB. is the mn mn matrix (Im ® A) + (B ® /„). respectively. in of A and B. Let A e Rn xn and B e Rrn xm. in general. suppose P and Schur form for A ® B can be derived similarly. respectively. nxn mxm Definition 13. Example 13. Corollary 13. A Schur form for A ® B can be derived similarly. Then the Kronecker sum (or tensor sum) . E IR E IR Kronecker Definition 13. Then (P ® Q)H (A ® B)(P ® Q) = (pH ® QH)(A ® B)(P ® Q) = (pH AP) ® (QH BQ) = TA ® TR .15. denoted A © B. while upper triangular. Let 1. with A EEl B. Tr(A ® B) = (TrA)(TrB) = Tr(B ® A).. . are unitary matrices that reduce A and 5. to Schur (triangular) form. i. eigenvalues are zero or nonzero). 1. while upper triangular.e. to Schur (triangular) form. Let A e Rn Xn and B e Rm xrn. is generally not quite in Jordan form and needs further reduction (to an ultimate Jordan form that also depends on whether or not certain further reduction (to an ultimate Jordan form that also depends on whether or not certain eigenvalues are zero or nonzero).. denoted A EEl B. is generally not quite in Jordan form and needs Note that JA® JB. Let A~U Then Then 2 2 !]andB~[ .15.14.e. then we get the JA and Q~] BQ following Jordanlike structure: following Jordanlike structure: (P ® Q)I(A ® B)(P ® Q) = (P.I ® Ql)(A ® B)(P ® Q) = (P. For example. of A and B. E IR nxn E IR mxm. Example 13.14. Note that. general. pH AP = TA and QH BQ = TB (and similarly if and are orthogonal similarities PHAP = TA and QHBQ = TB (and similarly if P and Q are orthogonal similarities reducing A and B to real Schur form).AP J B . For example.142 142 Chapter 13.
. if x\.. Define 0 0 0 0 o o Ek = 0 o Then 1 can be written in the very compact form 1 = (4 <8>M) + (Ek ® h) = M $ E k . .2.. Zq are linearly independent right eigenvectors of B AI.. . . ii E E. . A2 + fJm. fJq (q < m).. and z\. Ap (p < and ZI. In general.16.\ .. zq are linearly independent eigenvectors of corresponding to fJt. Then the Kronecker sum A® B = (1m (g>A) + (B ® In) has mn (Im ® A) + (B <g> /„) /za^ ran eigenvalues fJj. . we can take p nand q and If A and B are diagonalizable in Theorem 13.i e n. respectively. then decompositions given by P~1AP = lA and Q"1 BQ = JB. j e q. ..13... .. e jRmxm eigenvalues /z.. . j E ra. then Zj ® Xi E€ jRmn" are linearly independent right Zj <8> Xi W1 are linearly independent right corresponding f j i . TTzen r/ze Kronecker sum A $ B eigenvalues e/genva/wes Al + fJt. and let B E Rmx'" have e jRnxn eigenvalues A. f^q (q ::s: ra).. eigenvectors of A® B corresponding to Ai + [ij. if A and have Jordan form thus get the complete eigenstructure of A 0 B.16. .16... AI + fJm. Properties of the Kronecker Product 13.2. respectively. we can take p = n and q = m and thus get the complete eigenstructure of A $ In general.•• . + fJj' € p. Recall the real JCF 2. . i E !!. .. Properties of the Kronecker Product 2. 0 I M 0 where M = [ where M = o M a f3 f3 a J. .. j E fl· eigenvectors of A $ B corresponding to A. . 0 If A and Bare diagonalizable in Theorem 13.xp are linearly independent right eigenvectors of A corresponding Moreover. An + fJm' Moreover. Xp (p ::s: n). .···. if A and B have Jordan form pI l B .. . Then J can be written in the very compact form J Theorem 13.. if XI. . . A2 + fJt.. Proof: The basic idea of the proof is as follows: Proof: The basic idea of the proof is as follows: [(1m ® A) + (B ® In)](Z ® X) = (Z ® Ax) = (Z + (Bz ® X) ® Ax) + (fJZ ® X) = (A + fJ)(Z ® X). 7 e I!!. (I} ® M) + (E^®l2) = M 0 Ek.. Let A E E"x" have eigenvalues Ai. then decompositions given JA and Qt BQ [(Q ® In)(lm ® p)rt[(lm ® A) = [(1m ® p)I(Q ® In)I][(lm ® A) = (1m ® lA) + (B ® In)][CQ ® In)(lm ® P)] + (B ® In)][(Q ® In)(/m ® + (B ® P)] = [(1m ® pI)(QI ® In)][(lm ® A) In)][CQ ® In)(/m <:9 P)] + (JB ® In) is a Jordanlike structure for A $ B.. is a Jordanlike structure for A © B. xp are linearly independent right eigenvectors of A corresponding to AI. . Recall the real JCF M I M 143 143 0 I M I 0 o 1= 0 E jR2kx2k. .
.5) [ blml The coefficient matrix in (13. respectively.3) is the symmetric equation AX +XAT = C (13. This equation is now often called a Sylvester equation is now often equation in honor of 1. Kronecker Products Chapter 13. The following definition is very helpful in completing the writing of (13.3) is. pH AP = TA matrices that reduce A and B.5) as (B T 0 /„).3) mxm E IRnxn E IR E IRnxm.e. Again.e. i. arise naturally in stability theory. Then ((Q ® /„)(/« ® P)]"[(/m <8> A) + (B ® /B)][(e (g) /„)(/„. [(Q ® /„)(/« ® P)] = (<2 ® P) is unitary by Theorem 13.5) clearly can be written as the Kronecker sum (Im * A) + (BT ® In). When symmetric.3) is. to Schur (triangular) form.4) is known as a Lyapunov equation.5) as an "ordinary" linear system.3 and Corollary 13. Sylvester where A e R"x". Lyapunovequations arise naturally in stability theory. j=1 These equations can then be rewritten as the These equations can then be rewritten as the mn x mn linear system x linear system A+blll bl21 A + b 2Z 1 b2ml b 21 1 (13. . 13. When C is symmetric. B e Rmxm . suppose P and are unitary A Schur form for A © B can be derived similarly.J. Then to real Schur fonn).8.3) in terms of their easily seen z'th columns that ith columns that m AXi + Xb. solution e IR xn also to be symmetric and (13. . Again.144 Chapter 13.4) obtained by taking B = AT.3) in tenns of their columns. The following definition is very helpful in completing the writing of (13. the solution X E Wnx" is easily shown taking B = AT.5) clearly can be written as the Kronecker sum (1m 0 A) + The coefficient matrix in (13. and C e M" xm . The first important question to ask regarding (13.. = AXi + l:~>j.4) is known as a Lyapunov equation.=1 A special case of (13.3 and Corollary 13. Sylvester who studied general linear matrix equations of the fonn k LA. an "ordinary" linear system. i. =C. When does a solution exist? The first important question to ask regarding (13.Xj. ® P)] = (/m <8> rA) + (7* (g) /„). suppose P and Q are unitary fonn. When does a solution exist? By writing the matrices in (13.1. PHAP = TA that reduce to Schur and QH BQ = TB (and similarly if P and Q are orthogonal similarities reducing A and B and QHBQ = TB (and similarly if P and Q are orthogonal similarities reducing A and B to real Schur form). (13. Kronecker Products A Schur fonn for A EB B can be derived similarly. = C.8. it is easily seen by equating the writing (13..3 13. .XB. where [(Q <8>In)(lm ® P)] = (Q ® P) is unitary by Theorem 13. Sylvester who studied general linear matrix equations of the form equation in honor of J.3 Application to Sylvester and Lyapunov Equations Application to Sylvester and Lyapunov Equations In this section we study the linear matrix equation In this section we study the linear matrix equation AX+XB=C. Lyapunov equations also to be symmetric and (13.
the eigenvalues of [(1m ® A) + (BT <8> /„)] are + Mj.e A (A).6). But [(Im <8>A) + (B TT ® In)] isisnonsingular ififand only ififitithas no zero eigenvalues.3) (or symmetric Lyapunov equations of the form (13. Let A e jRnxn. where A.19. . B E Rmxm. e m. The next few theorems are classical. so there exists unique Proof: Since A and B are stable. + IJLJ. one of many The next few theorems are classical.8) can be written as can be written as (13. the linear system (13. j j E!!!.6) directly with operations rather than the O(n 6 that would be required by solving (13... Suppose further are asymptotically stable (a matrix is asymptotically stable if all its eigenvalues have real are asymptotically stable (a matrix is asymptotically stable if all its eigenvalues have real parts in the open left halfplane).24. +00): IHoo lim XU) . and ^j Theorem 13.. Now integrate the differential equation X AX XB (with X(O) C) on [0. i. and C e R" xm . E!!.6) if and only if [(Im ® A) + (BT ® /„)] is nonsingular. From Theorem 13. Schur form.10) . Then the Sylvester equation G jRmxm. the eigenvalues of [(/m <g> A) + (BT ® In)] are Ai A.18. (real) Schur form.n denote the columns ofC E Rnxm so that C = [ n .5) can be rewritten in the form Using Definition 13. say.B have no eigenvalues in common. They culminate in Theorem 13.18.4)) are generally not solved using the mn x mn "vee" formulation (13.X(O) = A 10 roo X(t)dt + ([+00 X(t)dt) 10 B. Let Ci( € E. A further enhancement to this algorithm is available in [6] whereby the larger of A or B is initially reduced only to upper Hessenberg rather than triangular the larger of A or B is initially reduced only to upper Hessenberg rather than triangular Schur form. Sylvester equations of the form (13. Application to Sylvester and Lyapunov Equations 13.18. We thus have the following theorem.24..6). this algorithm takes only O(n3 ) operations rather than the O(n6)) that would be required by solving (13. where From Theorem 13. Suppose further that A and B E Rn .. one of many elegant connections between matrix theory and stability theory for differential equations. xn Theorem 13.(B) ^ solution to(13. .16.e. A(fi).3) (or symmetric Lyapunov equations of the form Sylvester equations of the form (13.6) if and only if [(1m ® A) + (B T ® In)] is nonsingular. (13. An equivalent linear system is then solved in which the triangular form equivalent linear system is then solved in which the triangular form of the reduced and can be exploited to solve successively for the columns of a suitably of the reduced A and B can be exploited to solve successively for the columns of a suitably transformed solution matrix X.3. Now integrate the differential equation X = AX + X B solution to (13. vec(C) = Using Definition 13.17.9) Proof: Since A and B are stable. Aj(A) + A. say. Cm}. Let A e lRnxn. They culminate in Theorem 13. Application to Sylvester and Lyapunov Equations 145 145 Definition 13. Definition 13.3... this algorithm takes only 0 (n 3) transformed solution matrix X. c ].6) There exists a unique solution to (13. j j so there exists aaunique for all i. . n :::: m. ofC e jRnxm [CI..and Mj Ee A(B).1S. Theorem C E jRnxm.17. . Ai E A(A). Then the (unique) solution of the Sylvester equation AX+XB=C (13. The most commonly preferred numerical algorithm is described in [2]. A further enhancement to this algorithm is available in [6] whereby Gaussian elimination.16.13. . B e Rmxm. The most (13. But [(1m ® A) + (B (g) /„)] nonsingular and only has no zero eigenvalues. 
and C e Rnxm.5) can be rewritten in the form [(1m ® A) + (B T ® In)]vec(X) = vec(C). A... ii e n_. elegant connections between matrix theory and stability theory for differential equations. n > m. First A and B are reduced to commonly preferred numerical algorithm is described in [2]. E R E jRnxm.6) directly with Gaussian elimination. First A and B are reduced to (real) Schur form. has a unique solution if and only if A and —B have no eigenvalues in common. Then the (unique) solution of the Sylvester equation parts in the open left halfplane). +00): (with X(0) = C) on [0. AX+XB=C (13. (13. Assuming that. There exists a unique solution to (13.17.8) by Theorem 13.17. Assuming that. (A)+ Aj(B) =I 00 for all i. c E jRn the Then vec(C) is defined to be the mnvector formed by stacking the columns ofC on top of by C ::~~::~: ::d~~:::O:[]::::fonned "ocking the colunuu of on top of one another. 77ie/i Theorem 13.8)by Theorem 13. the linear system (13.7) has a unique solution if and only if A and .4» are generally not solved using the mn x mn "vec" formulation (13. We thus have the following theorem. E jRmxm.
then .. Hence. Remark 13. —kn.24.. v E". where C Proof: asymptotically l3. Then Then .21 and 13.12).23. Theorem Substituting in (13.].19. X B = is that [ J _Cfi ] be similar to [~ _OB] (via the similarity [ Let Theorem 13. By Theorems 13. TTzen r/ze AX+XAT =C (13. it can be shown easily that lim elA = lim elB = O. Then the (unique) solution o/the Lyapunov equation of the AX+XAT=C can be written as can be written as (13. the first of which follows immediately from Theorem 13.. A. Theorem 13.C E R"x" and suppose further that A is asymptotically stable.23 a solution to (13. A matrix A E R"x" is asymptotically stable if and only if there exists a only if e jRnxn asymptotically if positive definite solution to the Lyapunov equation positive definite solution to the Lyapunov equation AX +XAT = C. +00 r—>+oo t—v+oo X t ) = etACelB X t ) — O.23 solution Proof: Suppose A is asymptotically stable. If the matrix A E Wxn has eigenvalues A.A T have no eigenvalues in common. 1>+00 1 . C E R"x".10) we have C t~+x /—<+3C = A (1+ 00 elACe lB dt) + (1+ o 00 elACe lB dt) B and so X and so X = 1o {+oo elACe lB dt satisfies (13. An equivalent condition for the existence of a unique solution to AX + AX + Remark XB = C is that [~ _cB ] be similar to [ J _°B ](via the similarity [~J _~ ]).20. .8). . If C is has unique if and only if and —A T eigenvalues in common. Two basic results due to Lyapunov are the following. _* ]). .11) has a unique solution if and only if A and ..6.146 146 Chapter 13..21 l3. a sufficient condition that guarantees that A and .12) Theorem 13. Lef A. .An. symmetric and ( 13... . we have that lim X ((t) = 0. using the solution X ((t) = elACe tB from Theorem 11. Thus.I . An. If matrix A e jRn xn eigenvalues )"" . If symmetric and (13. Now let v be an arbitrary nonzero vector in jRn.11) has a unique solution.22. Remark 13.21.ATT have A —A.. then that solution is symmetric. Kronecker Products Chapter 13. ..19.. Theorem 13.!„.11) has a unique solution.13) exists and takes the form (13. (13.AT has eigen— AT eigenvalues AI.1.. Let A.13) where C = C T < O. . Then the Lyapunov equation e jRnxn. Many useful results exist concerning the relationship between stability and Lyapunov equations. Kronecker Products Using the results of Section 11. sufficient —A common eigenvalues A asymptotically no common eigenvalues is that A be asymptotically stable. results = 0. C e jRnxn further asymptotically stable.6. then that solution is symmetric..
where Y E jRnxp is arbitrary. and C for which the matrix product ABC is Theorem 13. Hence vT Xv > 0 and thus X is positive definite. e A(A) with corresponding left eigenvector y. in which the solution is of the form is of the form (13. Conversely. the integrand above is positive. Since A was arbitrary. defined.25. The Lyapunov equation AX X A = C can also be written using the Remark 13. the complexvalued equation AHX + XA = C is equivalent to [(/ ® AH) vec(C). where Y e Rnxp is arbitrary. nx p if and only if A A+CB+BB = C. C.15) of (13. for the solution of the simple Sylvesterlike equation introduced in Theorem 6.26. vec(ABC) = (C T ® A)vec(B).14) as Proof: Write (13. Theorem 13. However.yr) = <8> x. D tions or from the fact that vec(xyT) = y ® x. B e jRPxq. The vec operator has many useful properties.13.14) is unique if BB+ ® A+ A = I. Proof: The proof follows in a fairly straightforward fashion either directly from the definiProof: The proof follows in a fairly straightforward fashion either directly from the definitions or from the fact that vec(. Since yH Xy > 0. most of which derive from one key result.16) . B. suppose X = XT > 0 and let A. The equivalent "vec form" of this equation is The equivalent "vec form" of this equation is [(/ ® AT) + (AT ® l)]vec(X) = + (AT ® l)]vec(X) = vec(C).25. e jRrnxn. the AXB =C (13. The Lyapunov equation AX + XATT = C can also be written using the vec notation in the equivalent form vec notation in the equivalent form [(/ ® A) + (A ® l)]vec(X) = vec(C). A subtle point arises when dealing with the "dual" Lyapunov equation A T X X A A subtle point arises when dealing with the "dual" Lyapunov equation ATX + XA = C. we must have A + A = 2 R e A < O. B E Rpx(}. A must be asymptotically stable. Application to Sylvester and Lyapunov Equations 147 147 Since — C > 0 and etA is nonsingular for all the integrand above is positive.t. Then the equation 13. D asymptotically stable. Then 0> yHCy = yH AXy + yHXAT Y = (A + I)yH Xy. Theorem 13. most of which derive from one key The vec operator has many useful properties. The solution of (13. B. For any three matrices A. 14) is unique if BB+ ® A+A = [. Application to Sylvester and Lyapunov Equations 13. Hence Since C > 0 and etA is nonsingular for all t. For any three matrices A. suppose X = XT > 0 and let A E A (A) with corresponding left eigenConversely. we must have A + I = 2 Re A < 0 . D Remark 13. The Proof: Write (13. the complexvalued equation H X X A = C is equivalent to However. in which case the general solution has a if only ifAA + C B+ C.27. e jRrnxq. D An immediate application is to the derivation of existence and uniqueness conditions An immediate application is to the derivation of existence and uniqueness conditions for the solution of the simple Sylvesterlike equation introduced in Theorem 6.11. Then vector y.14) as (B T ® A)vec(X) = vec(C) (13. Since A was arbitrary.11. and C E Rmxq. Let A E Rmxn. and C for which the matrix product ABC is defined.27. result. A must be Since yHXy > 0.14) xp E jRn has a solution X e R.3.26.3. v TXv > 0 and thus X is positive definite.
148 148
Chapter 1 3. Kronecker Products Chapter 13. Kronecker Products
by Theorem 13.26. This "vector equation" has a solution if and only if by Theorem 13.26. This "vector equation" has a solution if and only if
(B T ® A)(B T ® A)+ vec(C)
+
= vec(C).
+ +
It is a straightforward exercise to show that (M ® N) + = M+ ® N+.. Thus, (13.16) has aa It is a straightforward exercise to show that (M ® N) = M <8> N Thus, (13.16) has
solution if and only if solution if and only if vec(C)
=
(B T ® A)«B+{ ® A+)vec(C)
= [(B+ B{ ® AA+]vec(C)
= vec(AA +C B+ B)
and hence if and only if AA +CB+B = C. and hence if and only if AA+ C B+ B C. The general solution of (13 .16) is then given by The general solution of (13.16) is then given by vec(X) = (B T ® A) + vec(C)
+ [I 
(B T ® A) + (B T ® A)]vec(Y),
where Y is arbitrary. This equation can then be rewritten in the form where Y is arbitrary. This equation can then be rewritten in the form vec(X)
= «B+{
® A+)vec(C)
+ [I
 (BB+{ ® A+ A]vec(y)
or, using Theorem 13.26, or, using Theorem 13.26,
The solution is clearly unique if B B+ ® A + A ==I. The solution is clearly unique if BB+ <8> A+A I.
0 D
EXERCISES EXERCISES
I. For any two matrices A and B for which the indicated matrix product is defined, 1. For any two matrices A and B for which the indicated matrix product is defined, show that (vec(A»T(vec(fl)) = Tr(A T B). In particular, if B E Rn x n ,, then Tr(B) = show that (vec(A)) r (vec(B» = Tr(A r £). In particular, if B e lR nxn then Tr(fl) = vec(/J r vec(fl). vec(Inl vec(B). 2. Prove that for all matrices A and B, (A ® B)+ = A+ ® B+.. 2. Prove that for all matrices A and B, (A ® B)+ = A+ ® B+
3. Show that the equation AX B = C has a solution for all C if A has full row rank and 3. Show that the equation AX B = C has a solution for all C if A has full row rank and B has full column rank. Also, show that a solution, if it exists, is unique if A has full B has full column rank. Also, show that a solution, if it exists, is unique if A has full column rank and B has full row rank. What is the solution in this case? column rank and B has full row rank. What is the solution in this case? 4. Show that the general linear equation 4. Show that the general linear equation
k
LAiXBi =C
i=1
can be written in the form can be written in the form
[BT ® AI
+ ... + B[ ® Ak]vec(X) =
vec(C).
Exercises Exercises
149 149
5. Let x E ]Rm and y E E". Show that *rT ® yy==y X T T. x <8> € Mm e ]Rn. yx •
6. Let A e R" xn and £ e M m x m . (a) Show that IIA ® BII22 = IIAII2I1Blb. (a) Show that A <8> B = A2£2. (b) What is II A ® B II F in terms of the Frobenius norms of A and B? Justify your (b) What is A ® B\\F in terms of the Frobenius norms of A and B? Justify your answer carefully. answer carefully.
(c) What is the spectral radius of A ® B in terms of the spectral radii of A and B? of A <8> B in terms of the spectral radii of A and B? Justify your answer carefully. Justify your answer carefully. 7. Let A, 5 eR" x ". 7. Let A, B E ]Rnxn.
A)k = / <8> A* and (fl <g> l = B® I for all integers k. (a) Show that (l ® A)* = I ® Ak and (B ® I /)* =Bk fc ® / for all integers &. (/ l A (b) Show that el®A = I ® eeA and eB®1 7= eeB ® I./. e® <g) A and e5® = B (g)
(c) Show that the matrices I ® and (c) Show that the matrices / (8)AA andBB® I /commute. ® commute. (d) Show that (d) Show that
e AEIlB
= eU®A)+(B®l) = e B ® e A .
(Note: This result would look a little "nicer" had we defined our Kronecker (Note: This result would look a little "nicer" had we defined our Kronecker sum the other way around. However, Definition 13.14 is conventional in the 13.14 literature.)
8. Consider the Lyapunov matrix equation (13.11) with
A =
and C the symmetric matrix and C the symmetric matrix
[~ _~ ]
[~
Xs
Clearly Clearly
=
[~ ~ ]
[_~ ~
]
is a symmetric solution of the equation. Verify that is a symmetric solution of the equation. Verify that
Xns =
is also a solution and is nonsymmetric. Explain in light of Theorem 13.21. is also a solution and is nonsymmetric. Explain in light of Theorem 13.21. 9. Block Triangularization: Let 9. Block Triangularization: Let
A E ]Rn xn find similarity where A e Rnxn and D E ]Rm xm. It is desired to find a similarity transformation e Rmxm. of the form of the form
T=[~ ~J
such that T l1ST is block upper triangular. such that T ST is block upper triangular.
150 150 (a) Show that S is similar to
Chapter 13. Kronecker Products Chapter 13. Kronecker Products
[
A +OBX
B ] DXB
if X satisfies the socalled matrix Riccati equation if X satisfies the socalled matrix Riccati equation
CXA+DXXBX=O.
(b) Fonnulate a similar result for block lower triangularization of S. Formulate S.
to. Block 10. Block Diagonalization: Let
S=
[~ ~
l
where A E Rnxn and D E R m x m . It is desired to find a similarity transfonnation of e jRnxn E jRmxm. transformation of the fonn form
T=[~ ~]
such that T l1ST is block diagonal, T ST block diagonal. (a) Show that S is similar to
if Y satisfies the Sylvester equation Y
AY  YD = B.
(b) Formulate a similar result for block diagonalization of Fonnulate of