Matrix Analysis
for Scientists & Engineers
Alan J. Laub
University of California
Davis, California
SIAM
Copyright © 2005 by the Society for Industrial and Applied Mathematics.
1 0 9 8 7 6 5 4 3 2 1
All rights reserved. Printed in the United States of America. No part of this book
may be reproduced, stored, or transmitted in any manner without the written permission
of the publisher. For information, write to the Society for Industrial and Applied
Mathematics, 3600 University City Science Center, Philadelphia, PA 19104-2688.
MATLAB® is a registered trademark of The MathWorks, Inc. For MATLAB product information,
please contact The MathWorks, Inc., 3 Apple Hill Drive, Natick, MA 01760-2098 USA,
508-647-7000, Fax: 508-647-7101, info@mathworks.com, www.mathworks.com
Mathematica is a registered trademark of Wolfram Research, Inc.
Mathcad is a registered trademark of Mathsoft Engineering & Education, Inc.
Library of Congress Cataloging-in-Publication Data
Laub, Alan J., 1948-
Matrix analysis for scientists and engineers / Alan J. Laub.
p. cm.
Includes bibliographical references and index.
ISBN 0-89871-576-8 (pbk.)
1. Matrices. 2. Mathematical analysis. I. Title.
QA188.L38 2005
512.9'434—dc22
2004059962
About the cover: The original artwork featured on the cover was created by freelance
artist Aaron Tallon of Philadelphia, PA. Used by permission.
siam is a registered trademark.
To my wife, Beverley
(who captivated me in the UBC math library
nearly forty years ago)
Contents
Preface xi
1 Introduction and Review 1
1.1 Some Notation and Terminology 1
1.2 Matrix Arithmetic 3
1.3 Inner Products and Orthogonality 4
1.4 Determinants 4
2 Vector Spaces 7
2.1 Definitions and Examples 7
2.2 Subspaces 9
2.3 Linear Independence 10
2.4 Sums and Intersections of Subspaces 13
3 Linear Transformations 17
3.1 Definition and Examples 17
3.2 Matrix Representation of Linear Transformations 18
3.3 Composition of Transformations 19
3.4 Structure of Linear Transformations 20
3.5 Four Fundamental Subspaces 22
4 Introduction to the Moore-Penrose Pseudoinverse 29
4.1 Definitions and Characterizations 29
4.2 Examples 30
4.3 Properties and Applications 31
5 Introduction to the Singular Value Decomposition 35
5.1 The Fundamental Theorem 35
5.2 Some Basic Properties 38
5.3 Row and Column Compressions 40
6 Linear Equations 43
6.1 Vector Linear Equations 43
6.2 Matrix Linear Equations 44
6.3 A More General Matrix Linear Equation 47
6.4 Some Useful and Interesting Inverses 47
7 Projections, Inner Product Spaces, and Norms 51
7.1 Projections 51
7.1.1 The four fundamental orthogonal projections 52
7.2 Inner Product Spaces 54
7.3 Vector Norms 57
7.4 Matrix Norms 59
8 Linear Least Squares Problems 65
8.1 The Linear Least Squares Problem 65
8.2 Geometric Solution 67
8.3 Linear Regression and Other Linear Least Squares Problems 67
8.3.1 Example: Linear regression 67
8.3.2 Other least squares problems 69
8.4 Least Squares and Singular Value Decomposition 70
8.5 Least Squares and QR Factorization 71
9 Eigenvalues and Eigenvectors 75
9.1 Fundamental Definitions and Properties 75
9.2 Jordan Canonical Form 82
9.3 Determination of the JCF 85
9.3.1 Theoretical computation 86
9.3.2 On the +1's in JCF blocks 88
9.4 Geometric Aspects of the JCF 89
9.5 The Matrix Sign Function 91
10 Canonical Forms 95
10.1 Some Basic Canonical Forms 95
10.2 Definite Matrices 99
10.3 Equivalence Transformations and Congruence 102
10.3.1 Block matrices and definiteness 104
10.4 Rational Canonical Form 104
11 Linear Differential and Difference Equations 109
11.1 Differential Equations 109
11.1.1 Properties of the matrix exponential 109
11.1.2 Homogeneous linear differential equations 112
11.1.3 Inhomogeneous linear differential equations 112
11.1.4 Linear matrix differential equations 113
11.1.5 Modal decompositions 114
11.1.6 Computation of the matrix exponential 114
11.2 Difference Equations 118
11.2.1 Homogeneous linear difference equations 118
11.2.2 Inhomogeneous linear difference equations 118
11.2.3 Computation of matrix powers 119
11.3 Higher-Order Equations 120
12 Generalized Eigenvalue Problems 125
12.1 The Generalized Eigenvalue/Eigenvector Problem 125
12.2 Canonical Forms 127
12.3 Application to the Computation of System Zeros 130
12.4 Symmetric Generalized Eigenvalue Problems 131
12.5 Simultaneous Diagonalization 133
12.5.1 Simultaneous diagonalization via SVD 133
12.6 Higher-Order Eigenvalue Problems 135
12.6.1 Conversion to first-order form 135
13 Kronecker Products 139
13.1 Definition and Examples 139
13.2 Properties of the Kronecker Product 140
13.3 Application to Sylvester and Lyapunov Equations 144
Bibliography 151
Index 153
Preface
This book is intended to be used as a text for beginning graduate-level (or even senior-level)
students in engineering, the sciences, mathematics, computer science, or computational
science who wish to be familiar with enough matrix analysis that they are prepared to use its
tools and ideas comfortably in a variety of applications. By matrix analysis I mean linear
algebra and matrix theory together with their intrinsic interaction with and application to
linear dynamical systems (systems of linear differential or difference equations). The text
can be used in a one-quarter or one-semester course to provide a compact overview of
much of the important and useful mathematics that, in many cases, students meant to learn
thoroughly as undergraduates, but somehow didn't quite manage to do. Certain topics
that may have been treated cursorily in undergraduate courses are treated in more depth
and more advanced material is introduced. I have tried throughout to emphasize only the
more important and "useful" tools, methods, and mathematical structures. Instructors are
encouraged to supplement the book with specific application examples from their own
particular subject area.
The choice of topics covered in linear algebra and matrix theory is motivated both by
applications and by computational utility and relevance. The concept of matrix factorization
is emphasized throughout to provide a foundation for a later course in numerical linear
algebra. Matrices are stressed more than abstract vector spaces, although Chapters 2 and 3
do cover some geometric (i.e., basisfree or subspace) aspects of many of the fundamental
notions. The books by Meyer [18], Noble and Daniel [20], Ortega [21], and Strang [24]
are excellent companion texts for this book. Upon completion of a course based on this
text, the student is then well-equipped to pursue, either via formal courses or through
self-study, follow-on topics on the computational side (at the level of [7], [11], [23], or [25], for
example) or on the theoretical side (at the level of [12], [13], or [16], for example).
Prerequisites for using this text are quite modest: essentially just an understanding
of calculus and definitely some previous exposure to matrices and linear algebra. Basic
concepts such as determinants, singularity of matrices, eigenvalues and eigenvectors, and
positive definite matrices should have been covered at least once, even though their
recollection may occasionally be "hazy." However, requiring such material as prerequisite permits
the early (but "out-of-order" by conventional standards) introduction of topics such as
pseudoinverses and the singular value decomposition (SVD). These powerful and versatile tools
can then be exploited to provide a unifying foundation upon which to base subsequent
topics. Because tools such as the SVD are not generally amenable to "hand computation," this
approach necessarily presupposes the availability of appropriate mathematical software on
a digital computer. For this, I highly recommend MATLAB® although other software such as
Mathematica® or Mathcad® is also excellent. Since this text is not intended for a course in
numerical linear algebra per se, the details of most of the numerical aspects of linear algebra
are deferred to such a course.
The presentation of the material in this book is strongly influenced by computational
issues for two principal reasons. First, "real-life" problems seldom yield to simple
closed-form formulas or solutions. They must generally be solved computationally and
it is important to know which types of algorithms can be relied upon and which cannot.
Some of the key algorithms of numerical linear algebra, in particular, form the foundation
upon which rests virtually all of modern scientific and engineering computation. A second
motivation for a computational emphasis is that it provides many of the essential tools for
what I call "qualitative mathematics." For example, in an elementary linear algebra course,
a set of vectors is either linearly independent or it is not. This is an absolutely fundamental
concept. But in most engineering or scientific contexts we want to know more than that.
If a set of vectors is linearly independent, how "nearly dependent" are the vectors? If they
are linearly dependent, are there "best" linearly independent subsets? These turn out to
be much more difficult problems and frequently involve researchlevel questions when set
in the context of the finiteprecision, finiterange floatingpoint arithmetic environment of
most modern computing platforms.
Some of the applications of matrix analysis mentioned briefly in this book derive
from the modern state-space approach to dynamical systems. State-space methods are
now standard in much of modern engineering where, for example, control systems with
large numbers of interacting inputs, outputs, and states often give rise to models of very
high order that must be analyzed, simulated, and evaluated. The "language" in which such
models are conveniently described involves vectors and matrices. It is thus crucial to acquire
a working knowledge of the vocabulary and grammar of this language. The tools of matrix
analysis are also applied on a daily basis to problems in biology, chemistry, econometrics,
physics, statistics, and a wide variety of other fields, and thus the text can serve a rather
diverse audience. Mastery of the material in this text should enable the student to read and
understand the modern language of matrices used throughout mathematics, science, and
engineering.
While prerequisites for this text are modest, and while most material is developed from
basic ideas in the book, the student does require a certain amount of what is conventionally
referred to as "mathematical maturity." Proofs are given for many theorems. When they are
not given explicitly, they are either obvious or easily found in the literature. This is ideal
material from which to learn a bit about mathematical proofs and the mathematical maturity
and insight gained thereby. It is my firm conviction that such maturity is neither encouraged
nor nurtured by relegating the mathematical aspects of applications (for example, linear
algebra for elementary state-space theory) to an appendix or introducing it "on-the-fly" when
necessary. Rather, one must lay a firm foundation upon which subsequent applications and
perspectives can be built in a logical, consistent, and coherent fashion.
I have taught this material for many years, many times at UCSB and twice at UC
Davis, and the course has proven to be remarkably successful at enabling students from
disparate backgrounds to acquire a quite acceptable level of mathematical maturity and
rigor for subsequent graduate studies in a variety of disciplines. Indeed, many students who
completed the course, especially the first few times it was offered, remarked afterward that
if only they had had this course before they took linear systems, or signal processing,
or estimation theory, etc., they would have been able to concentrate on the new ideas
they wanted to learn, rather than having to spend time making up for deficiencies in their
background in matrices and linear algebra. My fellow instructors, too, realized that by
requiring this course as a prerequisite, they no longer had to provide as much time for
"review" and could focus instead on the subject at hand. The concept seems to work.
— AJL, June 2004
Chapter 1

Introduction and Review

1.1 Some Notation and Terminology

We begin with a brief introduction to some standard notation and terminology to be used
throughout the text. This is followed by a review of some basic notions in matrix analysis
and linear algebra.

The following sets appear frequently throughout subsequent chapters:

1. R^n = the set of n-tuples of real numbers represented as column vectors. Thus, x ∈ R^n
   means
   \[
   x = \begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix},
   \]
   where x_i ∈ R for i ∈ n̲. Henceforth, the underlined symbol n̲ denotes the set {1, ..., n}.

   Note: Vectors are always column vectors. A row vector is denoted by y^T, where
   y ∈ R^n and the superscript T is the transpose operation. That a vector is always a
   column vector rather than a row vector is entirely arbitrary, but this convention makes
   it easy to recognize immediately throughout the text that, e.g., x^T y is a scalar while
   x y^T is an n x n matrix. (A short numerical illustration of this distinction follows the
   list below.)

2. C^n = the set of n-tuples of complex numbers represented as column vectors.

3. R^{m x n} = the set of real (or real-valued) m x n matrices.

4. R_r^{m x n} = the set of real m x n matrices of rank r. Thus, R_n^{n x n} denotes the set of real
   nonsingular n x n matrices.

5. C^{m x n} = the set of complex (or complex-valued) m x n matrices.

6. C_r^{m x n} = the set of complex m x n matrices of rank r.
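As an added illustration (not part of the original text), the distinction between the scalar
x^T y and the n x n matrix x y^T is easy to check numerically. The sketch below uses Python
with NumPy; MATLAB, which the preface recommends, would serve equally well.

import numpy as np

x = np.array([[1.0], [2.0], [3.0]])   # column vectors, as in the text
y = np.array([[4.0], [5.0], [6.0]])

inner = x.T @ y    # 1 x 1 result: the scalar x^T y
outer = x @ y.T    # 3 x 3 result: the matrix x y^T

print(inner.shape, inner.item())   # (1, 1) 32.0
print(outer.shape)                 # (3, 3)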
We now classify some of the more familiar "shaped" matrices. A matrix A ∈ R^{n x n}
(or A ∈ C^{n x n}) is

• diagonal if a_ij = 0 for i ≠ j.
• upper triangular if a_ij = 0 for i > j.
• lower triangular if a_ij = 0 for i < j.
• tridiagonal if a_ij = 0 for |i − j| > 1.
• pentadiagonal if a_ij = 0 for |i − j| > 2.
• upper Hessenberg if a_ij = 0 for i − j > 1.
• lower Hessenberg if a_ij = 0 for j − i > 1.

Each of the above also has a "block" analogue obtained by replacing scalar components in
the respective definitions by block submatrices. For example, if A ∈ R^{n x n}, B ∈ R^{n x m}, and
C ∈ R^{m x m}, then the (m + n) x (m + n) matrix
\[
\begin{bmatrix} A & B \\ 0 & C \end{bmatrix}
\]
is block upper triangular.

The transpose of a matrix A is denoted by A^T and is the matrix whose (i, j)th entry
is the (j, i)th entry of A, that is, (A^T)_ij = a_ji. Note that if A ∈ R^{m x n}, then A^T ∈ R^{n x m}.
If A ∈ C^{m x n}, then its Hermitian transpose (or conjugate transpose) is denoted by A^H (or
sometimes A*) and its (i, j)th entry is (A^H)_ij = ā_ji, where the bar indicates complex
conjugation; i.e., if z = α + jβ (j = i = √−1), then z̄ = α − jβ. A matrix A is symmetric
if A = A^T and Hermitian if A = A^H. We henceforth adopt the convention that, unless
otherwise noted, an equation like A = A^T implies that A is real-valued while a statement
like A = A^H implies that A is complex-valued.

Remark 1.1. While √−1 is most commonly denoted by i in mathematics texts, j is
the more common notation in electrical engineering and system theory. There is some
advantage to being conversant with both notations. The notation j is used throughout the
text but reminders are placed at strategic locations.
Example 1.2.

1. A = [5 7; 7 2] is symmetric (and Hermitian).

2. A = [5 7+j; 7+j 2] is complex-valued symmetric but not Hermitian.

3. A = [5 7+j; 7−j 2] is Hermitian (but not symmetric).

Transposes of block matrices can be defined in an obvious way. For example, it is
easy to see that if A_ij are appropriately dimensioned subblocks, then
\[
\begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}^T
= \begin{bmatrix} A_{11}^T & A_{21}^T \\ A_{12}^T & A_{22}^T \end{bmatrix}.
\]
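As an added illustration (not from the original text), the complex matrices of Example 1.2,
items 2 and 3, can be tested for symmetry and Hermitian symmetry directly in NumPy:

import numpy as np

A2 = np.array([[5, 7 + 1j], [7 + 1j, 2]])   # item 2: symmetric, not Hermitian
A3 = np.array([[5, 7 + 1j], [7 - 1j, 2]])   # item 3: Hermitian, not symmetric

print(np.array_equal(A2, A2.T))          # True  (A2 equals its transpose)
print(np.array_equal(A2, A2.conj().T))   # False (A2 is not Hermitian)
print(np.array_equal(A3, A3.conj().T))   # True  (A3 is Hermitian)
print(np.array_equal(A3, A3.T))          # False (A3 is not symmetric)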
1.2 Matrix Arithmetic

It is assumed that the reader is familiar with the fundamental notions of matrix addition,
multiplication of a matrix by a scalar, and multiplication of matrices.

A special case of matrix multiplication occurs when the second matrix is a column
vector x, i.e., the matrix-vector product Ax. A very important way to view this product is
to interpret it as a weighted sum (linear combination) of the columns of A. That is, suppose
\[
A = [a_1, \ldots, a_n] \in R^{m \times n} \text{ with } a_i \in R^m
\quad \text{and} \quad
x = \begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix} \in R^n.
\]
Then
\[
Ax = x_1 a_1 + \cdots + x_n a_n \in R^m.
\]
The importance of this interpretation cannot be overemphasized. As a numerical example,
take A = [9 8 7; 6 5 4] and x = [3; 2; 1]. Then we can quickly calculate dot products of the
rows of A with the column x to find Ax = [50; 32], but this matrix-vector product can also
be computed via
\[
Ax = 3 \begin{bmatrix} 9 \\ 6 \end{bmatrix}
   + 2 \begin{bmatrix} 8 \\ 5 \end{bmatrix}
   + 1 \begin{bmatrix} 7 \\ 4 \end{bmatrix}.
\]
For large arrays of numbers, there can be important computer-architecture-related
advantages to preferring the latter calculation method.

For matrix multiplication, suppose A ∈ R^{m x n} and B = [b_1, ..., b_p] ∈ R^{n x p} with
b_i ∈ R^n. Then the matrix product AB can be thought of as above, applied p times:
\[
AB = [Ab_1, \ldots, Ab_p].
\]
There is also an alternative, but equivalent, formulation of matrix multiplication that appears
frequently in the text and is presented below as a theorem. Again, its importance cannot be
overemphasized. It is deceptively simple and its full understanding is well rewarded.

Theorem 1.3. Let U = [u_1, ..., u_n] ∈ R^{m x n} with u_i ∈ R^m and V = [v_1, ..., v_n] ∈ R^{p x n}
with v_i ∈ R^p. Then
\[
U V^T = \sum_{i=1}^{n} u_i v_i^T \in R^{m \times p}.
\]
If matrices C and D are compatible for multiplication, recall that (CD)^T = D^T C^T
(or (CD)^H = D^H C^H). This gives a dual to the matrix-vector result above. Namely, if
C ∈ R^{m x n} has row vectors c_j^T ∈ R^{1 x n}, and is premultiplied by a row vector y^T ∈ R^{1 x m},
then the product can be written as a weighted linear sum of the rows of C as follows:
\[
y^T C = y_1 c_1^T + \cdots + y_m c_m^T \in R^{1 \times n}.
\]
Theorem 1.3 can then also be generalized to its "row dual." The details are left to the reader.
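As an added illustration (not part of the original text), the two ways of forming Ax described
above, row-by-row dot products versus a weighted sum of columns, can be compared in NumPy
for the numerical example A = [9 8 7; 6 5 4], x = [3; 2; 1]:

import numpy as np

A = np.array([[9., 8., 7.],
              [6., 5., 4.]])
x = np.array([3., 2., 1.])

by_rows = A @ x                                  # dot products of the rows of A with x
by_cols = sum(x[j] * A[:, j] for j in range(3))  # weighted sum of the columns of A

print(by_rows)   # [50. 32.]
print(by_cols)   # [50. 32.]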
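Theorem 1.3 writes U V^T as a sum of n rank-one (outer-product) matrices u_i v_i^T. The
following NumPy sketch, added here as an illustration with randomly generated U and V,
checks the identity numerically:

import numpy as np

rng = np.random.default_rng(0)
m, n, p = 4, 3, 5
U = rng.standard_normal((m, n))   # columns u_1, ..., u_n
V = rng.standard_normal((p, n))   # columns v_1, ..., v_n

direct = U @ V.T
outer_sum = sum(np.outer(U[:, i], V[:, i]) for i in range(n))

print(np.allclose(direct, outer_sum))   # True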
1.3 Inner Products and Orthogonality

For vectors x, y ∈ R^n, the Euclidean inner product (or inner product, for short) of x and
y is given by
\[
\langle x, y \rangle := x^T y = \sum_{i=1}^{n} x_i y_i.
\]
Note that the inner product is a scalar.

If x, y ∈ C^n, we define their complex Euclidean inner product (or inner product,
for short) by
\[
\langle x, y \rangle_c := x^H y = \sum_{i=1}^{n} \bar{x}_i y_i.
\]
Note that ⟨x, y⟩_c is the complex conjugate of ⟨y, x⟩_c, i.e., the order in which x and y appear
in the complex inner product is important. The more conventional definition of the complex
inner product is ⟨x, y⟩_c = y^H x = \sum_{i=1}^{n} x_i \bar{y}_i but throughout the text we prefer the
symmetry with the real case.

Example 1.4. Let x = [1; j] and y = [1; 2]. Then
\[
\langle x, y \rangle_c = x^H y = \begin{bmatrix} 1 & -j \end{bmatrix}
\begin{bmatrix} 1 \\ 2 \end{bmatrix} = 1 - 2j,
\]
while
\[
\langle y, x \rangle_c = y^H x = \begin{bmatrix} 1 & 2 \end{bmatrix}
\begin{bmatrix} 1 \\ j \end{bmatrix} = 1 + 2j,
\]
and we see that, indeed, ⟨x, y⟩_c is the complex conjugate of ⟨y, x⟩_c.

Note that x^T x = 0 if and only if x = 0 when x ∈ R^n but that this is not true if x ∈ C^n.
What is true in the complex case is that x^H x = 0 if and only if x = 0. To illustrate, consider
the nonzero vector x above. Then x^T x = 0 but x^H x = 2.

Two nonzero vectors x, y ∈ R^n are said to be orthogonal if their inner product is
zero, i.e., x^T y = 0. Nonzero complex vectors are orthogonal if x^H y = 0. If x and y are
orthogonal and x^T x = 1 and y^T y = 1, then we say that x and y are orthonormal. A
matrix A ∈ R^{n x n} is an orthogonal matrix if A^T A = A A^T = I, where I is the n x n
identity matrix. The notation I_n is sometimes used to denote the identity matrix in R^{n x n}
(or C^{n x n}). Similarly, a matrix A ∈ C^{n x n} is said to be unitary if A^H A = A A^H = I. Clearly
an orthogonal or unitary matrix has orthonormal rows and orthonormal columns. There is
no special name attached to a nonsquare matrix A ∈ R^{m x n} (or ∈ C^{m x n}) with orthonormal
rows or columns.
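As an added illustration (not part of the original text), Example 1.4 and the remarks about
x^T x versus x^H x can be reproduced in NumPy; note that np.vdot conjugates its first
argument, matching the convention ⟨x, y⟩_c = x^H y used in the text:

import numpy as np

x = np.array([1.0, 1j])   # the vector x = [1; j] from Example 1.4
y = np.array([1.0, 2.0])

ip_xy = np.vdot(x, y)     # <x, y>_c = x^H y
ip_yx = np.vdot(y, x)     # <y, x>_c = y^H x

print(ip_xy)                       # (1-2j)
print(ip_yx)                       # (1+2j)
print(ip_xy == np.conj(ip_yx))     # True: <x,y>_c is the conjugate of <y,x>_c
print(x @ x)                       # 0j  : x^T x = 0 although x is nonzero
print(np.vdot(x, x).real)          # 2.0 : x^H x = 2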
1.4 Determinants

It is assumed that the reader is familiar with the basic theory of determinants. For A ∈ R^{n x n}
(or A ∈ C^{n x n}) we use the notation det A for the determinant of A. We list below some of
the more useful properties of determinants. Note that this is not a minimal set, i.e., several
properties are consequences of one or more of the others.

1. If A has a zero row or if any two rows of A are equal, then det A = 0.

2. If A has a zero column or if any two columns of A are equal, then det A = 0.

3. Interchanging two rows of A changes only the sign of the determinant.

4. Interchanging two columns of A changes only the sign of the determinant.

5. Multiplying a row of A by a scalar α results in a new matrix whose determinant is
   α det A.

6. Multiplying a column of A by a scalar α results in a new matrix whose determinant
   is α det A.

7. Multiplying a row of A by a scalar and then adding it to another row does not change
   the determinant.

8. Multiplying a column of A by a scalar and then adding it to another column does not
   change the determinant.

9. det A^T = det A (and det A^H is the complex conjugate of det A if A ∈ C^{n x n}).

10. If A is diagonal, then det A = a_11 a_22 · · · a_nn, i.e., det A is the product of its diagonal
    elements.

11. If A is upper triangular, then det A = a_11 a_22 · · · a_nn.

12. If A is lower triangular, then det A = a_11 a_22 · · · a_nn.

13. If A is block diagonal (or block upper triangular or block lower triangular), with
    square diagonal blocks A_11, A_22, ..., A_nn (of possibly different sizes), then det A =
    det A_11 det A_22 · · · det A_nn.

14. If A, B ∈ R^{n x n}, then det(AB) = det A det B.

15. If A ∈ R^{n x n} is nonsingular, then det(A^{-1}) = 1 / det A.

16. If A ∈ R^{n x n} and D ∈ R^{m x m}, then
    \[
    \det \begin{bmatrix} A & B \\ C & D \end{bmatrix} = \det A \, \det(D - C A^{-1} B).
    \]
    Proof: This follows easily from the block LU factorization
    \[
    \begin{bmatrix} A & B \\ C & D \end{bmatrix}
    = \begin{bmatrix} I & 0 \\ C A^{-1} & I \end{bmatrix}
      \begin{bmatrix} A & B \\ 0 & D - C A^{-1} B \end{bmatrix}.
    \]

17. If A ∈ R^{n x n} and D ∈ R^{m x m}, then
    \[
    \det \begin{bmatrix} A & B \\ C & D \end{bmatrix} = \det D \, \det(A - B D^{-1} C).
    \]
    Proof: This follows easily from the block UL factorization
    \[
    \begin{bmatrix} A & B \\ C & D \end{bmatrix}
    = \begin{bmatrix} I & B D^{-1} \\ 0 & I \end{bmatrix}
      \begin{bmatrix} A - B D^{-1} C & 0 \\ C & D \end{bmatrix}.
    \]
Remark 1.5. The factorization of a matrix A into the product of a unit lower triangular
matrix L (i.e., lower triangular with all 1's on the diagonal) and an upper triangular matrix
U is called an LU factorization; see, for example, [24]. Another such factorization is UL
where U is unit upper triangular and L is lower triangular. The factorizations used above
are block analogues of these.

Remark 1.6. The matrix D − C A^{-1} B is called the Schur complement of A in [A B; C D].
Similarly, A − B D^{-1} C is the Schur complement of D in [A B; C D].
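As an added illustration (not from the original text), the block LU factorization invoked in
the proof of property 16 and discussed in Remarks 1.5 and 1.6 can be verified numerically:
multiplying the block unit lower triangular factor by the block upper triangular factor, whose
(2,2) block is the Schur complement D − C A^{-1} B, recovers the original matrix.

import numpy as np

rng = np.random.default_rng(2)
n, m = 3, 2
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, m))
C = rng.standard_normal((m, n))
D = rng.standard_normal((m, m))
Ainv = np.linalg.inv(A)

# Block LU factorization: [A B; C D] = [I 0; C A^{-1} I] [A B; 0 D - C A^{-1} B]
L = np.block([[np.eye(n),        np.zeros((n, m))],
              [C @ Ainv,         np.eye(m)]])
U = np.block([[A,                B],
              [np.zeros((m, n)), D - C @ Ainv @ B]])

print(np.allclose(L @ U, np.block([[A, B], [C, D]])))   # True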
EXERCISES

1. If A ∈ R^{n x n} and α is a scalar, what is det(αA)? What is det(−A)?

2. If A is orthogonal, what is det A? If A is unitary, what is det A?

3. Let x, y ∈ R^n. Show that det(I − x y^T) = 1 − y^T x.

4. Let U_1, U_2, ..., U_k ∈ R^{n x n} be orthogonal matrices. Show that the product U =
   U_1 U_2 · · · U_k is an orthogonal matrix.

5. Let A ∈ R^{n x n}. The trace of A, denoted Tr A, is defined as the sum of its diagonal
   elements, i.e., Tr A = Σ_{i=1}^{n} a_ii.

   (a) Show that the trace is a linear function; i.e., if A, B ∈ R^{n x n} and α, β ∈ R, then
       Tr(αA + βB) = α Tr A + β Tr B.

   (b) Show that Tr(AB) = Tr(BA), even though in general AB ≠ BA.

   (c) Let S ∈ R^{n x n} be skew-symmetric, i.e., S^T = −S. Show that Tr S = 0. Then
       either prove the converse or provide a counterexample.

6. A matrix A ∈ R^{n x n} is said to be idempotent if A^2 = A.

   (a) Show that the matrix
       \[
       A = \frac{1}{2} \begin{bmatrix} 2\cos^2\theta & \sin 2\theta \\
                                       \sin 2\theta & 2\sin^2\theta \end{bmatrix}
       \]
       is idempotent for all θ.

   (b) Suppose A ∈ R^{n x n} is idempotent and A ≠ I. Show that A must be singular.
Chapter 2

Vector Spaces

In this chapter we give a brief review of some of the basic concepts of vector spaces. The
emphasis is on finite-dimensional vector spaces, including spaces formed by special classes
of matrices, but some infinite-dimensional examples are also cited. An excellent reference
for this and the next chapter is [10], where some of the proofs that are not given here may
be found.

2.1 Definitions and Examples

Definition 2.1. A field is a set F together with two operations +, · : F x F → F such that

(A1) α + (β + γ) = (α + β) + γ for all α, β, γ ∈ F.

(A2) there exists an element 0 ∈ F such that α + 0 = α for all α ∈ F.

(A3) for all α ∈ F, there exists an element (−α) ∈ F such that α + (−α) = 0.

(A4) α + β = β + α for all α, β ∈ F.

(M1) α · (β · γ) = (α · β) · γ for all α, β, γ ∈ F.

(M2) there exists an element 1 ∈ F such that α · 1 = α for all α ∈ F.

(M3) for all α ∈ F, α ≠ 0, there exists an element α^{-1} ∈ F such that α · α^{-1} = 1.

(M4) α · β = β · α for all α, β ∈ F.

(D) α · (β + γ) = α · β + α · γ for all α, β, γ ∈ F.

Axioms (A1)-(A3) state that (F, +) is a group and an abelian group if (A4) also holds.
Axioms (M1)-(M4) state that (F \ {0}, ·) is an abelian group.

Generally speaking, when no confusion can arise, the multiplication operator "·" is
not written explicitly.
Example 2.2.

1. R with ordinary addition and multiplication is a field.

2. C with ordinary complex addition and multiplication is a field.

3. Ra[x] = the field of rational functions in the indeterminate x
   \[
   = \left\{ \frac{\alpha_0 + \alpha_1 x + \cdots + \alpha_p x^p}
                  {\beta_0 + \beta_1 x + \cdots + \beta_q x^q}
      : \alpha_i, \beta_i \in R; \; p, q \in Z^+ \right\},
   \]
   where Z^+ = {0, 1, 2, ...}, is a field.

4. R_r^{m x n} = {m x n matrices of rank r with real coefficients} is clearly not a field since,
   for example, (M1) does not hold unless m = n. Moreover, R_n^{n x n} is not a field either
   since (M4) does not hold in general (although the other 8 axioms hold).

Definition 2.3. A vector space over a field F is a set V together with two operations
+ : V x V → V and · : F x V → V such that

(V1) (V, +) is an abelian group.

(V2) (α · β) · v = α · (β · v) for all α, β ∈ F and for all v ∈ V.

(V3) (α + β) · v = α · v + β · v for all α, β ∈ F and for all v ∈ V.

(V4) α · (v + w) = α · v + α · w for all α ∈ F and for all v, w ∈ V.

(V5) 1 · v = v for all v ∈ V (1 ∈ F).

A vector space is denoted by (V, F) or, when there is no possibility of confusion as to the
underlying field, simply by V.

Remark 2.4. Note that + and · in Definition 2.3 are different from the + and · in Definition
2.1 in the sense of operating on different objects in different sets. In practice, this causes
no confusion and the · operator is usually not even written explicitly.

Example 2.5.

1. (R^n, R) with addition defined by
   \[
   x + y = \begin{bmatrix} x_1 + y_1 \\ \vdots \\ x_n + y_n \end{bmatrix}
   \]
   and scalar multiplication defined by
   \[
   \alpha x = \begin{bmatrix} \alpha x_1 \\ \vdots \\ \alpha x_n \end{bmatrix}
   \]
   is a vector space. Similar definitions hold for (C^n, C).
2. (R^{m x n}, R) is a vector space with addition defined by
   \[
   A + B = \begin{bmatrix}
   \alpha_{11} + \beta_{11} & \alpha_{12} + \beta_{12} & \cdots & \alpha_{1n} + \beta_{1n} \\
   \alpha_{21} + \beta_{21} & \alpha_{22} + \beta_{22} & \cdots & \alpha_{2n} + \beta_{2n} \\
   \vdots & \vdots & & \vdots \\
   \alpha_{m1} + \beta_{m1} & \alpha_{m2} + \beta_{m2} & \cdots & \alpha_{mn} + \beta_{mn}
   \end{bmatrix}
   \]
   and scalar multiplication defined by
   \[
   \gamma A = \begin{bmatrix}
   \gamma\alpha_{11} & \gamma\alpha_{12} & \cdots & \gamma\alpha_{1n} \\
   \gamma\alpha_{21} & \gamma\alpha_{22} & \cdots & \gamma\alpha_{2n} \\
   \vdots & \vdots & & \vdots \\
   \gamma\alpha_{m1} & \gamma\alpha_{m2} & \cdots & \gamma\alpha_{mn}
   \end{bmatrix}.
   \]

3. Let (V, F) be an arbitrary vector space and D be an arbitrary set. Let φ(D, V) be the
   set of functions f mapping D to V. Then φ(D, V) is a vector space with addition
   defined by

   (f + g)(d) = f(d) + g(d) for all d ∈ D and for all f, g ∈ φ

   and scalar multiplication defined by

   (αf)(d) = α f(d) for all α ∈ F, for all d ∈ D, and for all f ∈ φ.

   Special Cases:

   (a) D = [t_0, t_1], (V, F) = (R^n, R), and the functions are piecewise continuous
       =: (PC[t_0, t_1])^n or continuous =: (C[t_0, t_1])^n.

   (b) D = [t_0, +∞), (V, F) = (R^n, R), etc.

4. Let A ∈ R^{n x n}. Then {x(t) : ẋ(t) = A x(t)} is a vector space (of dimension n).

2.2 Subspaces

Definition 2.6. Let (V, F) be a vector space and let W ⊆ V, W ≠ ∅. Then (W, F) is a
subspace of (V, F) if and only if (W, F) is itself a vector space or, equivalently, if and only
if (α w_1 + β w_2) ∈ W for all α, β ∈ F and for all w_1, w_2 ∈ W.

Remark 2.7. The latter characterization of a subspace is often the easiest way to check
or prove that something is indeed a subspace (or vector space); i.e., verify that the set in
question is closed under addition and scalar multiplication. Note, too, that since 0 ∈ F, this
implies that the zero vector must be in any subspace.

Notation: When the underlying field is understood, we write W ⊆ V, and the symbol ⊆,
when used with vector spaces, is henceforth understood to mean "is a subspace of." The
less restrictive meaning "is a subset of" is specifically flagged as such.
Example 2.8.

1. Consider (V, F) = (R^{n x n}, R) and let W = {A ∈ R^{n x n} : A is symmetric}. Then
   W ⊆ V.
   Proof: Suppose A_1, A_2 are symmetric. Then it is easily shown that αA_1 + βA_2 is
   symmetric for all α, β ∈ R.

2. Let W = {A ∈ R^{n x n} : A is orthogonal}. Then W is not a subspace of R^{n x n}.

3. Consider (V, F) = (R^2, R) and for each v ∈ R^2 of the form v = [v_1; v_2] identify v_1
   with the x-coordinate in the plane and v_2 with the y-coordinate. For α, β ∈ R, define
   \[
   W_{\alpha,\beta} = \left\{ v : v = \begin{bmatrix} c \\ \alpha c + \beta \end{bmatrix} ;
   \; c \in R \right\}.
   \]
   Then W_{α,β} is a subspace of V if and only if β = 0. As an interesting exercise, sketch
   W_{2,1}, W_{2,0}, W_{1/2,1}, and W_{1/2,0}. Note, too, that the vertical line through the origin
   (i.e., α = ∞) is also a subspace.

All lines through the origin are subspaces. Shifted subspaces W_{α,β} with β ≠ 0 are
called linear varieties.

Henceforth, we drop the explicit dependence of a vector space on an underlying field.
Thus, V usually denotes a vector space with the underlying field generally being R unless
explicitly stated otherwise.

Definition 2.9. If R and S are vector spaces (or subspaces), then R = S if and only if
R ⊆ S and S ⊆ R.

Note: To prove two vector spaces are equal, one usually proves the two inclusions separately:
An arbitrary r ∈ R is shown to be an element of S and then an arbitrary s ∈ S is shown to
be an element of R.
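As an added illustration (not part of the original text), the closure test of Remark 2.7 can be
tried numerically on Example 2.8: a linear combination of symmetric matrices is again
symmetric (item 1), whereas a sum of orthogonal matrices need not be orthogonal (item 2), so
the orthogonal matrices do not form a subspace.

import numpy as np

rng = np.random.default_rng(3)
S1 = rng.standard_normal((3, 3)); S1 = S1 + S1.T   # two symmetric matrices
S2 = rng.standard_normal((3, 3)); S2 = S2 + S2.T
alpha, beta = 2.0, -0.5

W = alpha * S1 + beta * S2
print(np.allclose(W, W.T))        # True: the combination is again symmetric

Q1 = np.eye(3)                    # two orthogonal matrices
Q2 = np.array([[0., 1., 0.],
               [1., 0., 0.],
               [0., 0., 1.]])
Z = Q1 + Q2                       # their sum
print(np.allclose(Z.T @ Z, np.eye(3)))   # False: not orthogonal, hence no subspace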
2.3 Linear Independence

Let X = {v_1, v_2, ...} be a nonempty collection of vectors v_i in some vector space V.

Definition 2.10. X is a linearly dependent set of vectors if and only if there exist k distinct
elements v_1, ..., v_k ∈ X and scalars α_1, ..., α_k not all zero such that
\[
\alpha_1 v_1 + \cdots + \alpha_k v_k = 0.
\]
X is a linearly independent set of vectors if and only if for any collection of k distinct
elements v_1, ..., v_k of X and for any scalars α_1, ..., α_k,
\[
\alpha_1 v_1 + \cdots + \alpha_k v_k = 0 \quad \text{implies} \quad
\alpha_1 = 0, \ldots, \alpha_k = 0.
\]
Example 2.11.
1. Let V = R^3. Then one set of three vectors given there (display omitted) is a linearly
independent set. Why? However, a second set {v1, v2, v3} (display omitted) is a linearly
dependent set, since 2v1 − v2 + v3 = 0.
2. Let A ∈ R^{n×n} and B ∈ R^{n×m}. Then consider the rows of e^{tA} B as vectors in C^m[t0, t1]
(recall that e^{tA} denotes the matrix exponential, which is discussed in more detail in
Chapter 11). Independence of these vectors turns out to be equivalent to a concept
called controllability, to be studied further in what follows.

Let vi ∈ R^n, i ∈ k, and consider the matrix V = [v1, ..., vk] ∈ R^{n×k}. The linear
dependence of this set of vectors is equivalent to the existence of a nonzero vector a ∈ R^k
such that Va = 0. An equivalent condition for linear dependence is that the k × k matrix
V^T V is singular. If the set of vectors is independent, and there exists a ∈ R^k such that
Va = 0, then a = 0. An equivalent condition for linear independence is that the matrix
V^T V is nonsingular.
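As a quick numerical illustration (added here, not part of the original text), the following
NumPy sketch checks a set of vectors for linear independence by examining the rank of V,
equivalently whether V^T V is nonsingular; the specific vectors are made up for demonstration.

    import numpy as np

    # Columns of V are the vectors v1, ..., vk (illustrative values only).
    # In this made-up example v3 = v1 + v2, so the set is dependent.
    V = np.array([[1., 2., 3.],
                  [0., 1., 1.],
                  [1., 0., 1.]])

    k = V.shape[1]
    gram = V.T @ V                        # the k x k matrix V^T V
    rank = np.linalg.matrix_rank(V)

    # The columns are linearly independent iff rank(V) = k,
    # equivalently iff V^T V is nonsingular.
    print("rank(V) =", rank, " k =", k)
    print("independent:", rank == k)
    print("det(V^T V) =", np.linalg.det(gram))   # nonzero iff independent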
Definition 2.12. Let X = {v1, v2, ...} be a collection of vectors vi ∈ V. Then the span of
X is defined as

    Sp(X) = Sp{v1, v2, ...}
          = {v : v = α1 v1 + ... + αk vk ; αi ∈ F, vi ∈ X, k ∈ N},

where N = {1, 2, ...}.
Example 2.13. Let V = R^n and define

    e1 = [1, 0, ..., 0]^T,  e2 = [0, 1, 0, ..., 0]^T,  ...,  en = [0, ..., 0, 1]^T.

Then Sp{e1, e2, ..., en} = R^n.
Definition 2.14. A set of vectors X is a basis for V if and only if
1. X is a linearly independent set (of basis vectors), and
2. Sp(X) = V.
Example 2.15. {e1, ..., en} is a basis for R^n (sometimes called the natural basis).
Now let b1, ..., bn be a basis (with a specific order associated with the basis vectors)
for V. Then for all v ∈ V there exists a unique n-tuple {ξ1, ..., ξn} such that

    v = ξ1 b1 + ... + ξn bn = Bx,

where

    B = [b1, ..., bn],   x = [ξ1, ..., ξn]^T.

Definition 2.16. The scalars {ξi} are called the components (or sometimes the coordinates)
of v with respect to the basis {b1, ..., bn} and are unique. We say that the vector x of
components represents the vector v with respect to the basis B.
Example 2.17. In R^n,

    [v1, ..., vn]^T = v1 e1 + v2 e2 + ... + vn en.

We can also determine components of v with respect to another basis. For example, while
[1 2]^T = 1·e1 + 2·e2 with respect to the natural basis, with respect to another basis {b1, b2}
of R^2 the same vector has the components 3 and 4. To see this, write

    [1 2]^T = x1 b1 + x2 b2 = Bx,

so that

    x = B^{-1} [1 2]^T = [3 4]^T.
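The computation x = B^{-1} v is easy to carry out numerically. The following short NumPy
sketch (an illustration added here, using a made-up basis rather than the one of Example 2.17)
finds the components of a vector with respect to a non-natural basis by solving Bx = v rather
than forming the inverse explicitly.

    import numpy as np

    # Columns of B are the basis vectors b1, b2 (illustrative values only).
    B = np.array([[1., 1.],
                  [-1., 1.]])
    v = np.array([1., 2.])

    # Components of v with respect to {b1, b2}: solve B x = v.
    x = np.linalg.solve(B, v)
    print("components x =", x)

    # Consistency check: reassembling from the components recovers v.
    assert np.allclose(B @ x, v)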
Theorem 2.18. The number of elements in a basis of a vector space is independent of the
particular basis considered.
Definition 2.19. If a basis X for a vector space V (≠ 0) has n elements, V is said to
be n-dimensional or have dimension n and we write dim(V) = n or dim V = n. For
consistency, and because the 0 vector is in any vector space, we define dim(0) = 0. A
vector space V is finite-dimensional if there exists a basis X with n < +∞ elements;
otherwise, V is infinite-dimensional.
Thus, Theorem 2.18 says that dim(V) = the number of elements in a basis.
Example 2.20.
1. dim(R^n) = n.
2. dim(R^{m×n}) = mn.
Note: Check that a basis for R^{m×n} is given by the mn matrices Eij, i ∈ m, j ∈ n,
where Eij is a matrix all of whose elements are 0 except for a 1 in the (i, j)th location.
The collection of Eij matrices can be called the "natural basis matrices."
3. dim(C[t0, t1]) = +∞.
4. dim{A ∈ R^{n×n} : A = A^T} = (1/2)n(n + 1).
(To see why, determine (1/2)n(n + 1) symmetric basis matrices.)
5. dim{A ∈ R^{n×n} : A is upper (lower) triangular} = (1/2)n(n + 1).

2.4 Sums and Intersections of Subspaces
Definition 2.21. Let (V, F) be a vector space and let R, S ⊆ V. The sum and intersection
of R and S are defined respectively by:
1. R + S = {r + s : r ∈ R, s ∈ S}.
2. R ∩ S = {v : v ∈ R and v ∈ S}.
Theorem 2.22.
1. R + S ⊆ V (in general, R1 + ... + Rk =: Σ_{i=1}^{k} Ri ⊆ V, for finite k).
2. R ∩ S ⊆ V (in general, ∩_{α∈A} Rα ⊆ V for an arbitrary index set A).
Remark 2.23. The union of two subspaces, R ∪ S, is not necessarily a subspace.
Definition 2.24. T = R ⊕ S is the direct sum of R and S if
1. R ∩ S = 0, and
2. R + S = T (in general, Ri ∩ (Σ_{j≠i} Rj) = 0 and Σ_i Ri = T).
The subspaces R and S are said to be complements of each other in T.
Remark 2.25. The complement of R (or S) is not unique. For example, consider V = R^2
and let R be any line through the origin. Then any other distinct line through the origin is
a complement of R. Among all the complements there is a unique one orthogonal to R.
We discuss more about orthogonal complements elsewhere in the text.
Theorem 2.26. Suppose T = R ⊕ S. Then
1. every t ∈ T can be written uniquely in the form t = r + s with r ∈ R and s ∈ S.
2. dim(T) = dim(R) + dim(S).
Proof: To prove the first part, suppose an arbitrary vector t ∈ T can be written in two ways
as t = r1 + s1 = r2 + s2, where r1, r2 ∈ R and s1, s2 ∈ S. Then r1 − r2 = s2 − s1. But
r1 − r2 ∈ R and s2 − s1 ∈ S. Since R ∩ S = 0, we must have r1 = r2 and s1 = s2, from
which uniqueness follows.
The statement of the second part is a special case of the next theorem. □
Theorem 2.27. For arbitrary subspaces R, S of a vector space V,

    dim(R + S) = dim(R) + dim(S) − dim(R ∩ S).

Example 2.28. Let U be the subspace of upper triangular matrices in R^{n×n} and let L be the
subspace of lower triangular matrices in R^{n×n}. Then it may be checked that U + L = R^{n×n}
while U ∩ L is the set of diagonal matrices in R^{n×n}. Using the fact that dim{diagonal
matrices} = n, together with Examples 2.20.2 and 2.20.5, one can easily verify the validity
of the formula given in Theorem 2.27.
Example 2.29. Let (V, F) = (R^{n×n}, R), let R be the set of skew-symmetric matrices in
R^{n×n}, and let S be the set of symmetric matrices in R^{n×n}. Then V = R ⊕ S.
Proof: This follows easily from the fact that any A ∈ R^{n×n} can be written in the form

    A = (1/2)(A + A^T) + (1/2)(A − A^T).

The first matrix on the right-hand side above is in S while the second is in R.
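As a sanity check (an added illustration, not from the text), the dimension count of
Theorem 2.27 can be verified for Example 2.28 with a small n: dim U = dim L = n(n+1)/2,
dim(U ∩ L) = n (the diagonal matrices), and the formula then gives dim(U + L) = n^2.

    # Dimension count of Theorem 2.27 applied to Example 2.28 (illustrative check).
    n = 4
    dim_U = n * (n + 1) // 2          # upper triangular matrices
    dim_L = n * (n + 1) // 2          # lower triangular matrices
    dim_intersection = n              # diagonal matrices = U intersect L
    dim_sum = dim_U + dim_L - dim_intersection

    # Theorem 2.27 predicts dim(U + L); here U + L is all of R^{n x n}.
    assert dim_sum == n * n
    print("dim(U + L) =", dim_sum, "= n^2 =", n * n)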
EXERCISES

1. Suppose {v1, ..., vk} is a linearly dependent set. Then show that one of the vectors
must be a linear combination of the others.
2. Let x1, x2, ..., xk ∈ R^n be nonzero mutually orthogonal vectors. Show that {x1, ...,
xk} must be a linearly independent set.
3. Let v1, ..., vn be orthonormal vectors in R^n. Show that Av1, ..., Avn are also
orthonormal if and only if A ∈ R^{n×n} is orthogonal.
4. Consider the vectors v1 = [2 1]^T and v2 = [3 1]^T. Prove that v1 and v2 form a basis
for R^2. Find the components of the vector v = [4 1]^T with respect to this basis.
5. Let P denote the set of polynomials of degree less than or equal to two of the form
p0 + p1 x + p2 x^2, where p0, p1, p2 ∈ R. Show that P is a vector space over R. Show
that the polynomials 1, x, and 2x^2 − 1 are a basis for P. Find the components of the
polynomial 2 + 3x + 4x^2 with respect to this basis.
6. Prove Theorem 2.22 (for the case of two subspaces R and S only).
7. Let Pn denote the vector space of polynomials of degree less than or equal to n, and of
the form p(x) = p0 + p1 x + ... + pn x^n, where the coefficients pi are all real. Let P_E
denote the subspace of all even polynomials in Pn, i.e., those that satisfy the property
p(−x) = p(x). Similarly, let P_O denote the subspace of all odd polynomials, i.e.,
those satisfying p(−x) = −p(x). Show that Pn = P_E ⊕ P_O.
8. Repeat Example 2.28 using instead the two subspaces T of tridiagonal matrices and
U of upper triangular matrices.
Chapter 3

Linear Transformations

3.1 Definition and Examples
We begin with the basic definition of a linear transformation (or linear map, linear function,
or linear operator) between two vector spaces.
Definition 3.1. Let (V, F) and (W, F) be vector spaces. Then L : V → W is a linear
transformation if and only if

    L(α v1 + β v2) = α L v1 + β L v2 for all α, β ∈ F and for all v1, v2 ∈ V.

The vector space V is called the domain of the transformation L while W, the space into
which it maps, is called the codomain.
Example 3.2.
1. Let F = R and take V = W = PC[t0, +∞).
Define L : PC[t0, +∞) → PC[t0, +∞) by

    v(t) ↦ w(t) = (Lv)(t) = ∫_{t0}^{t} e^{(t−τ)} v(τ) dτ.

2. Let F = R and take V = W = R^{m×n}. Fix M ∈ R^{m×m}.
Define L : R^{m×n} → R^{m×n} by

    X ↦ Y = LX = MX.

3. Let F = R and take V = P^n = {p(x) = a0 + a1 x + ... + an x^n : ai ∈ R} and
W = P^{n−1}.
Define L : V → W by Lp = p′, where ′ denotes differentiation with respect to x.
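As a quick check (an illustration added here, not from the text), the linearity of the map
L(X) = MX in Example 3.2.2 can be verified numerically on randomly chosen data; the
sizes and seed below are arbitrary.

    import numpy as np

    rng = np.random.default_rng(0)
    m, n = 3, 4
    M = rng.standard_normal((m, m))          # the fixed matrix M in R^{m x m}
    X1 = rng.standard_normal((m, n))
    X2 = rng.standard_normal((m, n))
    a, b = 2.0, -0.5

    L = lambda X: M @ X                      # the transformation of Example 3.2.2

    # Linearity: L(a*X1 + b*X2) = a*L(X1) + b*L(X2).
    assert np.allclose(L(a * X1 + b * X2), a * L(X1) + b * L(X2))
    print("L(X) = MX is linear on R^{m x n}")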
3.2 Matrix Representation of Linear Transformations
Linear transformations between vector spaces with specific bases can be represented con-
veniently in matrix form. Specifically, suppose L : (V, F) → (W, F) is linear and further
suppose that {vi, i ∈ n} and {wj, j ∈ m} are bases for V and W, respectively. Then the
ith column of A = Mat L (the matrix representation of L with respect to the given bases
for V and W) is the representation of L vi with respect to {wj, j ∈ m}. In other words,

    A = [aij] ∈ R^{m×n}

represents L since

    L vi = a1i w1 + ... + ami wm = W ai,

where W = [w1, ..., wm] and ai = [a1i, ..., ami]^T is the ith column of A. Note that
A = Mat L depends on the particular bases for V and W. This could be reflected by
subscripts, say, in the notation, but this is usually not done.
The action of L on an arbitrary vector v ∈ V is uniquely determined (by linearity)
by its action on a basis. Thus, if v = ξ1 v1 + ... + ξn vn = Vx (where v, and hence x, is
arbitrary), then

    L V x = L v = ξ1 L v1 + ... + ξn L vn = W A x.

Thus, LV = WA since x was arbitrary.
When V = R^n, W = R^m and {vi, i ∈ n}, {wj, j ∈ m} are the usual (natural) bases,
the equation LV = WA becomes simply L = A. We thus commonly identify A as a linear
transformation with its matrix representation, i.e., we view A : R^n → R^m as the map x ↦ Ax.
Thinking of A both as a matrix and as a linear transformation from R^n to R^m usually causes no
confusion. Change of basis then corresponds naturally to appropriate matrix multiplication.
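To make the construction concrete, here is a small sketch (added for illustration, not from the
text) that builds Mat L column by column for the differentiation operator of Example 3.2.3
with n = 3, using the monomial bases {1, x, x^2, x^3} for P^3 and {1, x, x^2} for P^2: the column
for the basis vector x^i holds the components of L(x^i) = i x^{i−1} in the codomain basis.

    import numpy as np

    n = 3
    # Domain basis: 1, x, x^2, x^3 ; codomain basis: 1, x, x^2.
    A = np.zeros((n, n + 1))
    for i in range(1, n + 1):          # column i corresponds to the basis vector x^i
        A[i - 1, i] = i                # since d/dx x^i = i x^(i-1)

    print(A)
    # Check on p(x) = 2 + 3x + 4x^2 (coefficients listed low degree first):
    p = np.array([2., 3., 4., 0.])
    print("coefficients of p':", A @ p)   # expect [3, 8, 0], i.e., p'(x) = 3 + 8x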
3.3 Composition of Transformations
Consider three vector spaces U, V, and W and transformations B from U to V and A from
V to W. Then we can define a new transformation C as follows:

        B           A
    U ------> V ------> W,        C = AB : U → W.

The above diagram illustrates the composition of transformations C = AB. Note that in
most texts, the arrows above are reversed as follows:

        A           B
    W <------ V <------ U,        C = AB : U → W.

However, it might be useful to prefer the former since the transformations A and B appear
in the same order in both the diagram and the equation. If dim U = p, dim V = n,
and dim W = m, and if we associate matrices with the transformations in the usual way,
then composition of transformations corresponds to standard matrix multiplication. That is,
we have C_{m×p} = A_{m×n} B_{n×p}. The above is sometimes expressed componentwise by the
formula

    c_ij = Σ_{k=1}^{n} a_ik b_kj.

Two Special Cases:
Inner Product: Let x, y ∈ R^n. Then their inner product is the scalar

    x^T y = Σ_{i=1}^{n} x_i y_i.

Outer Product: Let x ∈ R^m, y ∈ R^n. Then their outer product is the m × n
matrix

    x y^T.

Note that any rank-one matrix A ∈ R^{m×n} can be written in the form A = x y^T
above (or x y^H if A ∈ C^{m×n}). A rank-one symmetric matrix can be written in
the form x x^T (or x x^H).
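The following NumPy fragment (an added illustration, not from the text, with arbitrary
vectors) forms an outer product and confirms that the result has rank one.

    import numpy as np

    x = np.array([1., 2., 3.])        # x in R^3
    y = np.array([4., 5.])            # y in R^2

    A = np.outer(x, y)                # the 3 x 2 outer product x y^T
    print(A)
    print("rank(x y^T) =", np.linalg.matrix_rank(A))   # 1 for nonzero x, y

    # Inner product of two vectors of equal length:
    print("x^T x =", x @ x)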
3.4 Structure of Linear Transformations
Let A : V → W be a linear transformation.
Definition 3.3. The range of A, denoted R(A), is the set {w ∈ W : w = Av for some v ∈ V}.
Equivalently, R(A) = {Av : v ∈ V}. The range of A is also known as the image of A and
denoted Im(A).
The nullspace of A, denoted N(A), is the set {v ∈ V : Av = 0}. The nullspace of
A is also known as the kernel of A and denoted Ker(A).
Theorem 3.4. Let A : V → W be a linear transformation. Then
1. R(A) ⊆ W.
2. N(A) ⊆ V.
Note that N(A) and R(A) are, in general, subspaces of different spaces.
Theorem 3.5. Let A ∈ R^{m×n}. If A is written in terms of its columns as A = [a1, ..., an],
then

    R(A) = Sp{a1, ..., an}.

Proof: The proof of this theorem is easy, essentially following immediately from the defi-
nition. □
Remark 3.6. Note that in Theorem 3.5 and throughout the text, the same symbol (A) is
used to denote both a linear transformation and its matrix representation with respect to the
usual (natural) bases. See also the last paragraph of Section 3.2.
Definition 3.7. Let {v1, ..., vk} be a set of nonzero vectors vi ∈ R^n. The set is said to
be orthogonal if vi^T vj = 0 for i ≠ j and orthonormal if vi^T vj = δij, where δij is the
Kronecker delta defined by

    δij = 1 if i = j,  and  δij = 0 if i ≠ j.

Example 3.8.
1. (Display omitted) is an orthogonal set.
2. (Display omitted) is an orthonormal set.
3. If {v1, ..., vk} with vi ∈ R^n is an orthogonal set, then {v1/√(v1^T v1), ..., vk/√(vk^T vk)} is an
orthonormal set.
Definition 3.9. Let S ⊆ R^n. Then the orthogonal complement of S is defined as the set

    S^⊥ = {v ∈ R^n : v^T s = 0 for all s ∈ S}.

Example 3.10. Let S = Sp{[3 5 7]^T, [4 1 1]^T}. Then it can be shown that

    S^⊥ = Sp{[−2 25 −17]^T}.

Working from the definition, the computation involved is simply to find all nontrivial (i.e.,
nonzero) solutions of the system of equations

    3x1 + 5x2 + 7x3 = 0,
    4x1 + x2 + x3 = 0.

Note that there is nothing special about the two vectors in the basis defining S being or-
thogonal. Any set of vectors will do, including dependent spanning vectors (which would,
of course, then give rise to redundant equations).
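Numerically, a basis for S^⊥ is a basis for the nullspace of the matrix whose rows are the
spanning vectors of S. The sketch below (added for illustration) extracts that nullspace from
the SVD for Example 3.10; any nonzero scalar multiple of the result spans the same S^⊥.

    import numpy as np

    # Rows are the vectors spanning S in Example 3.10.
    M = np.array([[3., 5., 7.],
                  [4., 1., 1.]])

    # S-perp = N(M): the right singular vectors belonging to zero singular values.
    _, s, Vt = np.linalg.svd(M)
    rank = int(np.sum(s > 1e-12))
    perp_basis = Vt[rank:]                 # one vector here, since rank(M) = 2
    print(perp_basis)

    # It is orthogonal to both rows of M, as required;
    # up to a nonzero scaling it is the vector [-2, 25, -17] given above.
    assert np.allclose(M @ perp_basis.T, 0.0)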
Theorem 3.11. Let R, S ⊆ R^n. Then
1. S^⊥ ⊆ R^n (i.e., S^⊥ is a subspace).
2. S ⊕ S^⊥ = R^n.
3. (S^⊥)^⊥ = S.
4. R ⊆ S if and only if S^⊥ ⊆ R^⊥.
5. (R + S)^⊥ = R^⊥ ∩ S^⊥.
6. (R ∩ S)^⊥ = R^⊥ + S^⊥.
Proof: We prove and discuss only item 2 here. The proofs of the other results are left as
exercises. Let {v1, ..., vk} be an orthonormal basis for S and let x ∈ R^n be an arbitrary
vector. Set

    x1 = Σ_{i=1}^{k} (x^T vi) vi,   x2 = x − x1.
Then x1 ∈ S and, since

    x2^T vj = x^T vj − x1^T vj = x^T vj − x^T vj = 0,

we see that x2 is orthogonal to v1, ..., vk and hence to any linear combination of these
vectors. In other words, x2 is orthogonal to any vector in S. We have thus shown that
S + S^⊥ = R^n. We also have that S ∩ S^⊥ = 0 since the only vector s ∈ S orthogonal to
everything in S (i.e., including itself) is 0.
It is also easy to see directly that, when we have such direct sum decompositions, we
can write vectors in a unique way with respect to the corresponding subspaces. Suppose,
for example, that x = x1 + x2 = x1′ + x2′, where x1, x1′ ∈ S and x2, x2′ ∈ S^⊥. Then
(x1′ − x1)^T (x2′ − x2) = 0 by definition of S^⊥. But then (x1′ − x1)^T (x1′ − x1) = 0 since
x2′ − x2 = −(x1′ − x1) (which follows by rearranging the equation x1 + x2 = x1′ + x2′). Thus,
x1 = x1′ and x2 = x2′. □
Theorem 3.12. Let A : R^n → R^m. Then
1. N(A)^⊥ = R(A^T). (Note: This holds only for finite-dimensional vector spaces.)
2. R(A)^⊥ = N(A^T). (Note: This also holds for infinite-dimensional vector spaces.)
Proof: To prove the first part, take an arbitrary x ∈ N(A). Then Ax = 0 and this is
equivalent to y^T Ax = 0 for all y. But y^T Ax = (A^T y)^T x. Thus, Ax = 0 if and only if x
is orthogonal to all vectors of the form A^T y, i.e., x ∈ R(A^T)^⊥. Since x was arbitrary, we
have established that N(A)^⊥ = R(A^T).
The proof of the second part is similar and is left as an exercise. □
Definition 3.13. Let A : R^n → R^m. Then {v ∈ R^n : Av = 0} is sometimes called the
right nullspace of A. Similarly, {w ∈ R^m : w^T A = 0} is called the left nullspace of A.
Clearly, the right nullspace is N(A) while the left nullspace is N(A^T).
Theorem 3.12 and part 2 of Theorem 3.11 can be combined to give two very fun-
damental and useful decompositions of vectors in the domain and codomain of a linear
transformation A. See also Theorem 2.26.
Theorem 3.14 (Decomposition Theorem). Let A : R^n → R^m. Then
1. every vector v in the domain space R^n can be written in a unique way as v = x + y,
where x ∈ N(A) and y ∈ N(A)^⊥ = R(A^T) (i.e., R^n = N(A) ⊕ R(A^T)).
2. every vector w in the codomain space R^m can be written in a unique way as w = x + y,
where x ∈ R(A) and y ∈ R(A)^⊥ = N(A^T) (i.e., R^m = R(A) ⊕ N(A^T)).
This key theorem becomes very easy to remember by carefully studying and under-
standing Figure 3.1 in the next section.
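A hedged numerical sketch of Theorem 3.14, part 1 (added here, not from the text): using
the orthogonal projector P = A^+ A onto R(A^T), where A^+ is the pseudoinverse introduced
formally in Chapter 4, split an arbitrary v ∈ R^n into its R(A^T) and N(A) components and
check the defining properties. The matrix and vector are random placeholders.

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.standard_normal((3, 5))        # a generic 3 x 5 matrix
    v = rng.standard_normal(5)

    P = np.linalg.pinv(A) @ A              # orthogonal projector onto R(A^T)
    y = P @ v                              # component in R(A^T) = N(A)-perp
    x = v - y                              # component in N(A)

    assert np.allclose(A @ x, 0.0)         # x lies in the nullspace of A
    assert np.allclose(x @ y, 0.0)         # the two pieces are orthogonal
    assert np.allclose(x + y, v)           # and they reassemble v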
3.5 Four Fundamental Subspaces
Consider a general matrix A ∈ R_r^{m×n} (an m × n real matrix of rank r). When thought of as
a linear transformation from R^n to R^m, many properties of A can be developed in terms of
the four fundamental subspaces
R(A), R(A)^⊥, N(A), and N(A)^⊥. Figure 3.1 makes many key properties seem almost
obvious and we return to this figure frequently both in the context of linear transformations
and in illustrating concepts such as controllability and observability.

Figure 3.1. Four fundamental subspaces.

Definition 3.15. Let V and W be vector spaces and let A : V → W be a linear transfor-
mation.
1. A is onto (also called epic or surjective) if R(A) = W.
2. A is one-to-one or 1-1 (also called monic or injective) if N(A) = 0. Two equivalent
characterizations of A being 1-1 that are often easier to verify in practice are the
following:
(a) A v1 = A v2 implies v1 = v2.
(b) v1 ≠ v2 implies A v1 ≠ A v2.
Definition 3.16. Let A : R^n → R^m. Then rank(A) = dim R(A). This is sometimes called
the column rank of A (maximum number of independent columns). The row rank of A is
dim R(A^T) (maximum number of independent rows). The dual notion to rank is the nullity
of A, sometimes denoted nullity(A) or corank(A), and is defined as dim N(A).
Theorem 3.17. Let A : R^n → R^m. Then dim R(A) = dim N(A)^⊥. (Note: Since
N(A)^⊥ = R(A^T), this theorem is sometimes colloquially stated "row rank of A = column
rank of A.")
Proof: Define a linear transformation T : N(A)^⊥ → R(A) by

    Tv = Av for all v ∈ N(A)^⊥.

Clearly T is 1-1 (since N(T) = 0). To see that T is also onto, take any w ∈ R(A). Then
by definition there is a vector x ∈ R^n such that Ax = w. Write x = x1 + x2, where
x1 ∈ N(A)^⊥ and x2 ∈ N(A). Then Ax1 = w = Tx1 since x1 ∈ N(A)^⊥. The last equality
shows that T is onto. We thus have that dim R(A) = dim N(A)^⊥ since it is easily shown
that if {v1, ..., vr} is a basis for N(A)^⊥, then {Tv1, ..., Tvr} is a basis for R(A). Finally, if
we apply this and several previous results, the following string of equalities follows easily:
"column rank of A" = rank(A) = dim R(A) = dim N(A)^⊥ = dim R(A^T) = rank(A^T) =
"row rank of A." □
The following corollary is immediate. Like the theorem, it is a statement about equality
of dimensions; the subspaces themselves are not necessarily in the same vector space.
Corollary 3.18. Let A : R^n → R^m. Then dim N(A) + dim R(A) = n, where n is the
dimension of the domain of A.
Proof: From Theorems 3.11 and 3.17 we see immediately that

    n = dim N(A) + dim N(A)^⊥ = dim N(A) + dim R(A). □

For completeness, we include here a few miscellaneous results about ranks of sums
and products of matrices.
Theorem 3.19. Let A, B ∈ R^{n×n}. Then
1. 0 ≤ rank(A + B) ≤ rank(A) + rank(B).
2. rank(A) + rank(B) − n ≤ rank(AB) ≤ min{rank(A), rank(B)}.
3. nullity(B) ≤ nullity(AB) ≤ nullity(A) + nullity(B).
4. if B is nonsingular, rank(AB) = rank(BA) = rank(A) and N(BA) = N(A).
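The rank-nullity identity of Corollary 3.18 and the product bound in part 2 of Theorem 3.19
are easy to spot-check numerically; the sketch below (an added illustration with arbitrary
random matrices, one made deliberately rank deficient) does both.

    import numpy as np

    rng = np.random.default_rng(2)
    n = 6
    A = rng.standard_normal((n, n))
    A[:, -1] = A[:, 0] + A[:, 1]          # force A to be rank deficient
    B = rng.standard_normal((n, n))

    rank = np.linalg.matrix_rank
    nullity = lambda M: M.shape[1] - rank(M)

    # Corollary 3.18: dim N(A) + dim R(A) = n.
    assert nullity(A) + rank(A) == n

    # Theorem 3.19, part 2: rank(A) + rank(B) - n <= rank(AB) <= min(rank(A), rank(B)).
    r_ab = rank(A @ B)
    assert rank(A) + rank(B) - n <= r_ab <= min(rank(A), rank(B))
    print("rank(A) =", rank(A), " nullity(A) =", nullity(A), " rank(AB) =", r_ab)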
Part 4 of Theorem 3.19 suggests looking at the general problem of the four fundamental
subspaces of matrix products. The basic results are contained in the following easily proved
theorem.
Theorem 3.20. Let A ∈ R^{m×n}, B ∈ R^{n×p}. Then
1. R(AB) ⊆ R(A).
2. N(AB) ⊇ N(B).
3. R((AB)^T) ⊆ R(B^T).
4. N((AB)^T) ⊇ N(A^T).
The next theorem is closely related to Theorem 3.20 and is also easily proved. It
is extremely useful in text that follows, especially when dealing with pseudoinverses and
linear least squares problems.
Theorem 3.21. Let A ∈ R^{m×n}. Then
1. R(A) = R(AA^T).
2. R(A^T) = R(A^T A).
3. N(A) = N(A^T A).
4. N(A^T) = N(AA^T).
We now characterize 1-1 and onto transformations and provide characterizations in
terms of rank and invertibility.
Theorem 3.22. Let A : R^n → R^m. Then
1. A is onto if and only if rank(A) = m (A has linearly independent rows or is said to
have full row rank; equivalently, AA^T is nonsingular).
2. A is 1-1 if and only if rank(A) = n (A has linearly independent columns or is said
to have full column rank; equivalently, A^T A is nonsingular).
Proof: Proof of part 1: If A is onto, dim R(A) = m = rank(A). Conversely, let y ∈ R^m
be arbitrary. Let x = A^T (AA^T)^{-1} y ∈ R^n. Then y = Ax, i.e., y ∈ R(A), so A is onto.
Proof of part 2: If A is 1-1, then N(A) = 0, which implies that dim N(A)^⊥ = n =
dim R(A^T), and hence dim R(A) = n by Theorem 3.17. Conversely, suppose Ax1 = Ax2.
Then A^T Ax1 = A^T Ax2, which implies x1 = x2 since A^T A is invertible. Thus, A is
1-1. □
Definition 3.23. A : V → W is invertible (or bijective) if and only if it is 1-1 and onto.
Note that if A is invertible, then dim V = dim W. Also, A : R^n → R^n is invertible or
nonsingular if and only if rank(A) = n.
Note that in the special case when A ∈ R_n^{n×n}, the transformations A, A^T, and A^{-1}
are all 1-1 and onto between the two spaces N(A)^⊥ and R(A). The transformations A^T
and A^{-1} have the same domain and range but are in general different maps unless A is
orthogonal. Similar remarks apply to A and A^{-T}.
If a linear transformation is not invertible, it may still be right or left invertible. Defi-
nitions of these concepts are followed by a theorem characterizing left and right invertible
transformations.
Definition 3.24. Let A : V → W. Then
1. A is said to be right invertible if there exists a right inverse transformation A^{-R} :
W → V such that A A^{-R} = I_W, where I_W denotes the identity transformation on W.
2. A is said to be left invertible if there exists a left inverse transformation A^{-L} : W →
V such that A^{-L} A = I_V, where I_V denotes the identity transformation on V.
Theorem 3.25. Let A : V → W. Then
1. A is right invertible if and only if it is onto.
2. A is left invertible if and only if it is 1-1.
Moreover, A is invertible if and only if it is both right and left invertible, i.e., both 1-1 and
onto, in which case A^{-1} = A^{-R} = A^{-L}.
Note: From Theorem 3.22 we see that if A : R^n → R^m is onto, then a right inverse
is given by A^{-R} = A^T (AA^T)^{-1}. Similarly, if A is 1-1, then a left inverse is given by
A^{-L} = (A^T A)^{-1} A^T.
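These two closed-form inverses are easy to compute and check. The sketch below (added
for illustration, with arbitrary matrices) forms A^{-R} = A^T(AA^T)^{-1} for a fat, full-row-rank
A and a left inverse (A^T A)^{-1}A^T for a tall, full-column-rank matrix (compare Example 3.27
below).

    import numpy as np

    # Right inverse of an onto (full-row-rank) map.
    A = np.array([[1., 2., 0.],
                  [0., 1., 1.]])                       # 2 x 3
    A_R = A.T @ np.linalg.inv(A @ A.T)
    assert np.allclose(A @ A_R, np.eye(2))             # A A^{-R} = I_W

    # Left inverse of a 1-1 (full-column-rank) map.
    B = np.array([[1.],
                  [2.]])                               # 2 x 1
    B_L = np.linalg.inv(B.T @ B) @ B.T
    assert np.allclose(B_L @ B, np.eye(1))             # B^{-L} B = I_V
    print("A^{-R} =\n", A_R, "\nB^{-L} =", B_L)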
Theorem 3.26. Let A : V → V.
1. If there exists a unique right inverse A^{-R} such that A A^{-R} = I, then A is invertible.
2. If there exists a unique left inverse A^{-L} such that A^{-L} A = I, then A is invertible.
Proof: We prove the first part and leave the proof of the second to the reader. Notice the
following:

    A(A^{-R} + A^{-R} A − I) = A A^{-R} + A A^{-R} A − A
                             = I + I A − A        since A A^{-R} = I
                             = I.

Thus, (A^{-R} + A^{-R} A − I) must be a right inverse and, therefore, by uniqueness it must be
the case that A^{-R} + A^{-R} A − I = A^{-R}. But this implies that A^{-R} A = I, i.e., that A^{-R} is
a left inverse. It then follows from Theorem 3.25 that A is invertible. □
Example 3.27.
1. Let A = [1 2] : R^2 → R^1. Then A is onto. (Proof: Take any α ∈ R^1; then one
can always find v ∈ R^2 such that [1 2][v1 v2]^T = α). Obviously A has full row rank
(= 1) and A^{-R} = [−1 1]^T is a right inverse. Also, it is clear that there are infinitely many
right inverses for A. In Chapter 6 we characterize all right inverses of a matrix by
characterizing all solutions of the linear matrix equation AR = I.
2. Let A = [1 2]^T : R^1 → R^2. Then A is 1-1. (Proof: The only solution to 0 = Av = [1 2]^T v
is v = 0, whence N(A) = 0, so A is 1-1). It is now obvious that A has full column
rank (= 1) and A^{-L} = [3 − 1] is a left inverse. Again, it is clear that there are
infinitely many left inverses for A. In Chapter 6 we characterize all left inverses of a
matrix by characterizing all solutions of the linear matrix equation LA = I.
3. The matrix A = (display omitted), when considered as a linear transformation on R^3,
is neither 1-1 nor onto. We give below bases for its four fundamental subspaces
(displays omitted).

EXERCISES

1. Let A = (display omitted) and consider A as a linear transformation mapping R^3 to R^2.
Find the matrix representation of A with respect to the bases (displays omitted) of R^3
and of R^2.
2. Consider the vector space R^{n×n} over R, let S denote the subspace of symmetric
matrices, and let R denote the subspace of skew-symmetric matrices. For matrices
X, Y ∈ R^{n×n} define their inner product by (X, Y) = Tr(X^T Y). Show that, with
respect to this inner product, R = S^⊥.
3. Consider the differentiation operator L defined in Example 3.2.3. Is L 1-1? Is L
onto?
4. Prove Theorem 3.4.
5. Prove Theorem 3.11.4.
6. Prove Theorem 3.12.2.
7. Determine bases for the four fundamental subspaces of the matrix (display omitted).
8. Suppose A ∈ R^{m×n} has a left inverse. Show that A^T has a right inverse.
9. Let A = (display omitted). Determine N(A) and R(A). Are they equal? Is this true in general?
If this is true in general, prove it; if not, provide a counterexample.
10. Suppose A ∈ R_9^{9×48}. How many linearly independent solutions can be found to the
homogeneous linear system Ax = 0?
11. Modify Figure 3.1 to illustrate the four fundamental subspaces associated with A^T ∈
R^{n×m} thought of as a transformation from R^m to R^n.
Chapter 4

Introduction to the Moore-Penrose Pseudoinverse

In this chapter we give a brief introduction to the Moore-Penrose pseudoinverse, a gener-
alization of the inverse of a matrix. The Moore-Penrose pseudoinverse is defined for any
matrix and, as is shown in the following text, brings great notational and conceptual clarity
to the study of solutions to arbitrary systems of linear equations and linear least squares
problems.

4.1 Definitions and Characterizations
Consider a linear transformation A : X → Y, where X and Y are arbitrary finite-
dimensional vector spaces. Define a transformation T : N(A)^⊥ → R(A) by

    Tx = Ax for all x ∈ N(A)^⊥.

Then, as noted in the proof of Theorem 3.17, T is bijective (1-1 and onto), and hence we
can define a unique inverse transformation T^{-1} : R(A) → N(A)^⊥. This transformation
can be used to give our first definition of A^+, the Moore-Penrose pseudoinverse of A.
Unfortunately, the definition neither provides nor suggests a good computational strategy
for determining A^+.
Definition 4.1. With A and T as defined above, define a transformation A^+ : Y → X by

    A^+ y = T^{-1} y1,

where y = y1 + y2 with y1 ∈ R(A) and y2 ∈ R(A)^⊥. Then A^+ is the Moore-Penrose
pseudoinverse of A.
Although X and Y were arbitrary vector spaces above, let us henceforth consider the
case X = R^n and Y = R^m. We have thus defined A^+ for all A ∈ R_r^{m×n}. A purely algebraic
characterization of A^+ is given in the next theorem, which was proved by Penrose in 1955;
see [22].
Theorem 4.2. Let A ∈ R_r^{m×n}. Then G = A^+ if and only if
(P1) AGA = A.
(P2) GAG = G.
(P3) (AG)^T = AG.
(P4) (GA)^T = GA.
Furthermore, A^+ always exists and is unique.
Note that the inverse of a nonsingular matrix satisfies all four Penrose properties. Also,
a right or left inverse satisfies no fewer than three of the four properties. Unfortunately, as
with Definition 4.1, neither the statement of Theorem 4.2 nor its proof suggests a computa-
tional algorithm. However, the Penrose properties do offer the great virtue of providing a
checkable criterion in the following sense. Given a matrix G that is a candidate for being
the pseudoinverse of A, one need simply verify the four Penrose conditions (P1)-(P4). If G
satisfies all four, then by uniqueness, it must be A^+. Such a verification is often relatively
straightforward.
Example 4.3. Consider A = [1 2]^T. Verify directly that A^+ = [1/5 2/5] satisfies (P1)-(P4).
Note that other left inverses (for example, A^{-L} = [3 − 1]) satisfy properties (P1), (P2),
and (P4) but not (P3).
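The "checkable criterion" is easy to automate. The sketch below (an added illustration; the
helper name penrose_conditions is ours) tests a candidate G against (P1)-(P4) for the matrix
of Example 4.3, and also shows that the left inverse [3 −1] fails (P3).

    import numpy as np

    def penrose_conditions(A, G, tol=1e-12):
        """Truth values of the four Penrose conditions for candidate G."""
        return (np.allclose(A @ G @ A, A, atol=tol),      # (P1)
                np.allclose(G @ A @ G, G, atol=tol),      # (P2)
                np.allclose((A @ G).T, A @ G, atol=tol),  # (P3)
                np.allclose((G @ A).T, G @ A, atol=tol))  # (P4)

    A = np.array([[1.], [2.]])
    G_plus = np.array([[0.2, 0.4]])        # the claimed pseudoinverse [1/5 2/5]
    G_left = np.array([[3., -1.]])         # another left inverse

    print(penrose_conditions(A, G_plus))   # (True, True, True, True)
    print(penrose_conditions(A, G_left))   # (True, True, False, True)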
Still another characterization of A^+ is given in the following theorem, whose proof
can be found in [1, p. 19]. While not generally suitable for computer implementation, this
characterization can be useful for hand calculation of small examples.
Theorem 4.4. Let A ∈ R_r^{m×n}. Then

    A^+ = lim_{δ→0} (A^T A + δ^2 I)^{-1} A^T                (4.1)
        = lim_{δ→0} A^T (A A^T + δ^2 I)^{-1}.               (4.2)

4.2 Examples
Each of the following can be derived or verified by using the above definitions or charac-
terizations.
Example 4.5. A^+ = A^T (AA^T)^{-1} if A is onto (independent rows) (A is right invertible).
Example 4.6. A^+ = (A^T A)^{-1} A^T if A is 1-1 (independent columns) (A is left invertible).
Example 4.7. For any scalar α,

    α^+ = 1/α if α ≠ 0,   and   α^+ = 0 if α = 0.
Example 4.8. For any vector v ∈ R^n,

    v^+ = v^T / (v^T v) if v ≠ 0,   and   v^+ = 0^T if v = 0.

Example 4.9. (Displays omitted.)
Example 4.10. (Display omitted.)

4.3 Properties and Applications
This section presents some miscellaneous useful results on pseudoinverses. Many of these
are used in the text that follows.
Theorem 4.11. Let A ∈ R^{m×n} and suppose U ∈ R^{m×m}, V ∈ R^{n×n} are orthogonal (M is
orthogonal if M^T = M^{-1}). Then

    (U A V)^+ = V^T A^+ U^T.

Proof: For the proof, simply verify that the expression above does indeed satisfy each of
the four Penrose conditions. □
Theorem 4.12. Let S ∈ R^{n×n} be symmetric with U^T S U = D, where U is orthogonal and
D is diagonal. Then S^+ = U D^+ U^T, where D^+ is again a diagonal matrix whose diagonal
elements are determined according to Example 4.7.
Theorem 4.13. For all A ∈ R^{m×n},
1. A^+ = (A^T A)^+ A^T = A^T (A A^T)^+.
2. (A^T)^+ = (A^+)^T.
Proof: Both results can be proved using the limit characterization of Theorem 4.4. The
proof of the first result is not particularly easy and does not even have the virtue of being
especially illuminating. The interested reader can consult the proof in [1, p. 27]. The
proof of the second result (which can also be proved easily by verifying the four Penrose
conditions) is as follows:

    (A^T)^+ = lim_{δ→0} (A A^T + δ^2 I)^{-1} A
            = lim_{δ→0} [A^T (A A^T + δ^2 I)^{-1}]^T
            = [lim_{δ→0} A^T (A A^T + δ^2 I)^{-1}]^T
            = (A^+)^T. □
Note that by combining Theorems 4.12 and 4.13 we can, in theory at least, compute the Moore-Penrose pseudoinverse of any matrix (since A A^T and A^T A are symmetric). This turns out to be a poor approach in finite-precision arithmetic, however (see, e.g., [7], [11], [23]), and better methods are suggested in text that follows.
Theorem 4.11 is suggestive of a "reverse-order" property for pseudoinverses of products of matrices such as exists for inverses of products. Unfortunately, in general,

    (A B)^+ ≠ B^+ A^+.

As an example consider A = [0  1] and B = [1  1]^T. Then

    (A B)^+ = 1^+ = 1

while

    B^+ A^+ = [1/2  1/2] [0  1]^T = 1/2.
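A two-line numerical check of this failure (Python with NumPy, assumed only for illustration):

import numpy as np

A = np.array([[0.0, 1.0]])             # 1 x 2
B = np.array([[1.0], [1.0]])           # 2 x 1

print(np.linalg.pinv(A @ B))                      # [[1.0]]:  (AB)+ = 1
print(np.linalg.pinv(B) @ np.linalg.pinv(A))      # [[0.5]]:  B+ A+ = 1/2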
However, necessary and sufficient conditions under which the reverse-order property does hold are known and we quote a couple of moderately useful results for reference.

Theorem 4.14. (A B)^+ = B^+ A^+ if and only if

    1. R(B B^T A^T) ⊆ R(A^T)

and

    2. R(A^T A B) ⊆ R(B).
Proof: For the proof, see [9]. □

Theorem 4.15. (A B)^+ = B_1^+ A_1^+, where B_1 = A^+ A B and A_1 = A B_1 B_1^+.

Proof: For the proof, see [5]. □
Theorem 4.16. If A ∈ ℝ_r^{n×r} and B ∈ ℝ_r^{r×m}, then (A B)^+ = B^+ A^+.

Proof: Since A ∈ ℝ_r^{n×r}, we have A^+ = (A^T A)^{-1} A^T, whence A^+ A = I_r. Similarly, since B ∈ ℝ_r^{r×m}, we have B^+ = B^T (B B^T)^{-1}, whence B B^+ = I_r. The result then follows by taking B_1 = B, A_1 = A in Theorem 4.15. □
The following theorem gives some additional useful properties of pseudoinverses.

Theorem 4.17. For all A ∈ ℝ^{m×n},

    1. (A^+)^+ = A.
    2. (A^T A)^+ = A^+ (A^T)^+, (A A^T)^+ = (A^T)^+ A^+.
    3. R(A^+) = R(A^T) = R(A^+ A) = R(A^T A).
    4. N(A^+) = N(A A^+) = N((A A^T)^+) = N(A A^T) = N(A^T).
    5. If A is normal, then A^k A^+ = A^+ A^k and (A^k)^+ = (A^+)^k for all integers k > 0.
Note: Recall that A ∈ ℝ^{n×n} is normal if A A^T = A^T A. For example, if A is symmetric, skew-symmetric, or orthogonal, then it is normal. However, a matrix can be none of the preceding but still be normal, such as

    A = [ a  b; -b  a ]

for scalars a, b ∈ ℝ.
The next theorem is fundamental to facilitating a compact and unifying approach
to studying the existence of solutions of (matrix) linear equations and linear least squares
problems.
Theorem 4.18. Suppose A ∈ ℝ^{n×p}, B ∈ ℝ^{n×m}. Then R(B) ⊆ R(A) if and only if A A^+ B = B.
Proof: Suppose R(B) ⊆ R(A) and take arbitrary x ∈ ℝ^m. Then Bx ∈ R(B) ⊆ R(A), so there exists a vector y ∈ ℝ^p such that Ay = Bx. Then we have

    B x = A y = A A^+ A y = A A^+ B x,

where one of the Penrose properties is used above. Since x was arbitrary, we have shown that B = A A^+ B.
To prove the converse, assume that A A^+ B = B and take arbitrary y ∈ R(B). Then there exists a vector x ∈ ℝ^m such that Bx = y, whereupon

    y = B x = A A^+ B x ∈ R(A).   □
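The criterion A A^+ B = B is easy to test in floating point; the sketch below (Python with NumPy, assumed only for illustration) contrasts a B whose columns lie in R(A) with one whose columns do not.

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))
A_plus = np.linalg.pinv(A)

B_inside  = A @ rng.standard_normal((3, 2))     # columns constructed to lie in R(A)
B_outside = rng.standard_normal((5, 2))         # generically not contained in R(A)

print(np.allclose(A @ A_plus @ B_inside, B_inside))     # True
print(np.allclose(A @ A_plus @ B_outside, B_outside))   # False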
EXERCISES
1. Use Theorem 4.4 to compute the pseudoinverse of [2 2; 1 1].
2. If x, y ∈ ℝⁿ, show that (x y^T)^+ = (x^T x)^+ (y^T y)^+ y x^T.
3. For A ∈ ℝ^{m×n}, prove that R(A) = R(A A^T) using only definitions and elementary properties of the Moore-Penrose pseudoinverse.

4. For A ∈ ℝ^{m×n}, prove that R(A^+) = R(A^T).
5. For A ∈ ℝ^{p×n} and B ∈ ℝ^{m×n}, show that N(A) ⊆ N(B) if and only if B A^+ A = B.
6. Let A ∈ ℝ^{n×n}, B ∈ ℝ^{n×m}, and D ∈ ℝ^{m×m} and suppose further that D is nonsingular.

    (a) Prove or disprove that

        [ A  A B; 0  D ]^+ = [ A^+  -A^+ A B D^{-1}; 0  D^{-1} ].

    (b) Prove or disprove that

        [ A  B; 0  D ]^+ = [ A^+  -A^+ B D^{-1}; 0  D^{-1} ].
Chapter 5
Introduction to the Singular
Value Decomposition
In this chapter we give a brief introduction to the singular value decomposition (SVD). We
show that every matrix has an SVD and describe some useful properties and applications
of this important matrix factorization. The SVD plays a key conceptual and computational
role throughout (numerical) linear algebra and its applications.
5.1 The Fundamental Theorem
Theorem 5.1. Let A ∈ ℝ_r^{m×n}. Then there exist orthogonal matrices U ∈ ℝ^{m×m} and V ∈ ℝ^{n×n} such that

    A = U Σ V^T,                                                     (5.1)

where Σ = [S 0; 0 0], S = diag(σ_1, ..., σ_r) ∈ ℝ^{r×r}, and σ_1 ≥ ··· ≥ σ_r > 0. More specifically, we have

    A = [U_1  U_2] [S 0; 0 0] [V_1^T; V_2^T]                         (5.2)
      = U_1 S V_1^T.                                                 (5.3)
The submatrix sizes are all determined by r (which must be ≤ min{m, n}), i.e., U_1 ∈ ℝ^{m×r}, U_2 ∈ ℝ^{m×(m-r)}, V_1 ∈ ℝ^{n×r}, V_2 ∈ ℝ^{n×(n-r)}, and the 0-subblocks in Σ are compatibly dimensioned.
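Before turning to the proof, note that the factorization is immediately available numerically; the following sketch (Python with NumPy, assumed only for illustration) computes a full SVD and confirms the block structure in (5.1)-(5.3).

import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 4))   # 5 x 4 with rank 3

U, sing, Vt = np.linalg.svd(A)        # U is 5 x 5, Vt = V^T is 4 x 4
r = int(np.sum(sing > 1e-10))         # numerical rank; here r = 3

Sigma = np.zeros((5, 4))
Sigma[:r, :r] = np.diag(sing[:r])     # Sigma = [S 0; 0 0]

print(np.allclose(U @ Sigma @ Vt, A))                              # (5.1)-(5.2)
print(np.allclose(U[:, :r] @ np.diag(sing[:r]) @ Vt[:r, :], A))    # compact form (5.3)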
Proof: Since A^T A ≥ 0 (A^T A is symmetric and nonnegative definite; recall, for example, [24, Ch. 6]), its eigenvalues are all real and nonnegative. (Note: The rest of the proof follows analogously if we start with the observation that A A^T ≥ 0 and the details are left to the reader as an exercise.) Denote the set of eigenvalues of A^T A by {σ_i², i ∈ n} with σ_1 ≥ ··· ≥ σ_r > 0 = σ_{r+1} = ··· = σ_n. Let {v_i, i ∈ n} be a set of corresponding orthonormal eigenvectors and let V_1 = [v_1, ..., v_r], V_2 = [v_{r+1}, ..., v_n]. Letting S = diag(σ_1, ..., σ_r), we can write A^T A V_1 = V_1 S². Premultiplying by V_1^T gives V_1^T A^T A V_1 = V_1^T V_1 S² = S², the latter equality following from the orthonormality of the v_i vectors. Pre- and postmultiplying by S^{-1} gives the equation

    S^{-1} V_1^T A^T A V_1 S^{-1} = I.                               (5.4)
Turning now to the eigenvalue equations corresponding to the eigenvalues σ_{r+1}, ..., σ_n, we have that A^T A V_2 = V_2 · 0 = 0, whence V_2^T A^T A V_2 = 0. Thus, A V_2 = 0. Now define the matrix U_1 ∈ ℝ^{m×r} by U_1 = A V_1 S^{-1}. Then from (5.4) we see that U_1^T U_1 = I; i.e., the columns of U_1 are orthonormal. Choose any matrix U_2 ∈ ℝ^{m×(m-r)} such that [U_1  U_2] is orthogonal. Then

    U^T A V = [U_1^T A V_1   U_1^T A V_2; U_2^T A V_1   U_2^T A V_2]
            = [U_1^T A V_1   0; U_2^T A V_1   0]

since A V_2 = 0. Referring to the equation U_1 = A V_1 S^{-1} defining U_1, we see that U_1^T A V_1 = S and U_2^T A V_1 = U_2^T U_1 S = 0. The latter equality follows from the orthogonality of the columns of U_1 and U_2. Thus, we see that, in fact, U^T A V = [S 0; 0 0], and defining this matrix to be Σ completes the proof. □
Definition 5.2. Let A = U Σ V^T be an SVD of A as in Theorem 5.1.

1. The set {σ_1, ..., σ_r} is called the set of (nonzero) singular values of the matrix A and is denoted Σ(A). From the proof of Theorem 5.1 we see that σ_i(A) = λ_i^{1/2}(A^T A) = λ_i^{1/2}(A A^T). Note that there are also min{m, n} - r zero singular values.

2. The columns of U are called the left singular vectors of A (and are the orthonormal eigenvectors of A A^T).

3. The columns of V are called the right singular vectors of A (and are the orthonormal eigenvectors of A^T A).
Remark 5.3. The analogous complex case in which A ∈ ℂ_r^{m×n} is quite straightforward. The decomposition is A = U Σ V^H, where U and V are unitary and the proof is essentially identical, except for Hermitian transposes replacing transposes.
Remark 5.4. Note that U and V can be interpreted as changes of basis in both the domain and co-domain spaces with respect to which A then has a diagonal matrix representation. Specifically, let C denote A thought of as a linear transformation mapping ℝⁿ to ℝ^m. Then rewriting A = U Σ V^T as A V = U Σ we see that Mat C is Σ with respect to the bases {v_1, ..., v_n} for ℝⁿ and {u_1, ..., u_m} for ℝ^m (see the discussion in Section 3.2). See also Remark 5.16.
Remark 5.5. The singular value decomposition is not unique. For example, an examination of the proof of Theorem 5.1 reveals that

• any orthonormal basis for N(A) can be used for V_2.

• there may be nonuniqueness associated with the columns of V_1 (and hence U_1) corresponding to multiple σ_i's.
• any U_2 can be used so long as [U_1  U_2] is orthogonal.

• columns of U and V can be changed (in tandem) by sign (or multiplier of the form e^{jθ} in the complex case).

What is unique, however, is the matrix Σ and the span of the columns of U_1, U_2, V_1, and V_2 (see Theorem 5.11). Note, too, that a "full SVD" (5.2) can always be constructed from a "compact SVD" (5.3).
Remark 5.6. Computing an SVD by working directly with the eigenproblem for A^T A or A A^T is numerically poor in finite-precision arithmetic. Better algorithms exist that work directly on A via a sequence of orthogonal transformations; see, e.g., [7], [11], [25].
Example 5.7.

    A = [1 0; 0 1] = U I U^T,

where U is an arbitrary 2 × 2 orthogonal matrix, is an SVD.

Example 5.8.

    A = [1 0; 0 -1] = [cos θ  sin θ; -sin θ  cos θ] [1 0; 0 1] [cos θ  sin θ; sin θ  -cos θ],

where θ is arbitrary, is an SVD.

Example 5.9.

    A = [1 1; 2 2; 2 2] = [1/3; 2/3; 2/3] [3√2] [√2/2  √2/2]

is an SVD written in the compact form (5.3); a full SVD is obtained by completing [1/3; 2/3; 2/3] to an orthogonal matrix and padding the middle factor with zeros.

Example 5.10. Let A ∈ ℝ^{n×n} be symmetric and positive definite. Let V be an orthogonal matrix of eigenvectors that diagonalizes A, i.e., V^T A V = Λ > 0. Then A = V Λ V^T is an SVD of A.

A factorization U Σ V^T of an m × n matrix A qualifies as an SVD if U and V are orthogonal and Σ is an m × n "diagonal" matrix whose diagonal elements in the upper left corner are positive (and ordered). For example, if A = U Σ V^T is an SVD of A, then V Σ^T U^T is an SVD of A^T.
5.2 Some Basic Properties

Theorem 5.11. Let A ∈ ℝ^{m×n} have a singular value decomposition A = U Σ V^T. Using the notation of Theorem 5.1, the following properties hold:

1. rank(A) = r = the number of nonzero singular values of A.

2. Let U = [u_1, ..., u_m] and V = [v_1, ..., v_n]. Then A has the dyadic (or outer product) expansion

    A = σ_1 u_1 v_1^T + ··· + σ_r u_r v_r^T.                         (5.5)

3. The singular vectors satisfy the relations

    A v_i = σ_i u_i,                                                 (5.6)
    A^T u_i = σ_i v_i                                                (5.7)

for i ∈ r.

4. Let U_1 = [u_1, ..., u_r], U_2 = [u_{r+1}, ..., u_m], V_1 = [v_1, ..., v_r], and V_2 = [v_{r+1}, ..., v_n]. Then

    (a) R(U_1) = R(A) = N(A^T)^⊥.
    (b) R(U_2) = R(A)^⊥ = N(A^T).
    (c) R(V_1) = N(A)^⊥ = R(A^T).
    (d) R(V_2) = N(A) = R(A^T)^⊥.

Remark 5.12. Part 4 of the above theorem provides a numerically superior method for finding (orthonormal) bases for the four fundamental subspaces compared to methods based on, for example, reduction to row or column echelon form. Note that each subspace requires knowledge of the rank r. The relationship to the four fundamental subspaces is summarized nicely in Figure 5.1.

Remark 5.13. The elegance of the dyadic decomposition (5.5) as a sum of outer products and the key vector relations (5.6) and (5.7) explain why it is conventional to write the SVD as A = U Σ V^T rather than, say, A = U Σ V.
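Properties 2 and 3 lend themselves to a direct numerical check; a sketch (Python with NumPy, assumed only for illustration):

import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((6, 2)) @ rng.standard_normal((2, 4))   # rank 2

U, sing, Vt = np.linalg.svd(A)
r = int(np.sum(sing > 1e-10))

# property 2: dyadic expansion A = sigma_1 u_1 v_1^T + ... + sigma_r u_r v_r^T
A_dyadic = sum(sing[i] * np.outer(U[:, i], Vt[i, :]) for i in range(r))
print(np.allclose(A_dyadic, A))                          # True

# property 3: A v_i = sigma_i u_i and A^T u_i = sigma_i v_i
print(np.allclose(A @ Vt[0, :], sing[0] * U[:, 0]))      # True
print(np.allclose(A.T @ U[:, 0], sing[0] * Vt[0, :]))    # True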
Theorem 5.14. Let A ∈ ℝ^{m×n} have a singular value decomposition A = U Σ V^T as in Theorem 5.1. Then

    A^+ = V Σ^+ U^T,                                                 (5.8)

where
    Σ^+ = [S^{-1} 0; 0 0] ∈ ℝ^{n×m},                                 (5.9)

with the 0-subblocks appropriately sized. Furthermore, if we let the columns of U and V be as defined in Theorem 5.11, then

    A^+ = (1/σ_1) v_1 u_1^T + ··· + (1/σ_r) v_r u_r^T.               (5.10)

Proof: The proof follows easily by verifying the four Penrose conditions. □

Figure 5.1. SVD and the four fundamental subspaces.
Remark 5.15. Note that none of the expressions above quite qualifies as an SVD of A^+ if we insist that the singular values be ordered from largest to smallest. However, a simple reordering accomplishes the task:

    A^+ = (1/σ_r) v_r u_r^T + ··· + (1/σ_1) v_1 u_1^T.               (5.11)

This can also be written in matrix terms by using the so-called reverse-order identity matrix (or exchange matrix) P = [e_r, e_{r-1}, ..., e_2, e_1], which is clearly orthogonal and symmetric.
Then

    A^+ = (V_1 P)(P S^{-1} P)(P U_1^T)

is the matrix version of (5.11). A "full SVD" can be similarly constructed.
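Equations (5.8)-(5.10) give a simple recipe for computing A^+ from an SVD; a sketch (Python with NumPy, assumed only for illustration):

import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 4))   # 5 x 4, rank 3

U, sing, Vt = np.linalg.svd(A)
r = int(np.sum(sing > 1e-10))

Sigma_plus = np.zeros((4, 5))
Sigma_plus[:r, :r] = np.diag(1.0 / sing[:r])     # Sigma+ = [S^{-1} 0; 0 0]

A_plus = Vt.T @ Sigma_plus @ U.T                 # (5.8): A+ = V Sigma+ U^T
print(np.allclose(A_plus, np.linalg.pinv(A)))    # True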
Remark 5.16. Recall the linear transformation T used in the proof of Theorem 3.17 and in Definition 4.1. Since T is determined by its action on a basis, and since {v_1, ..., v_r} is a basis for N(A)^⊥, then T can be defined by T v_i = σ_i u_i, i ∈ r. Similarly, since {u_1, ..., u_r} is a basis for R(A), then T^{-1} can be defined by T^{-1} u_i = (1/σ_i) v_i, i ∈ r. From Section 3.2, the matrix representation for T with respect to the bases {v_1, ..., v_r} and {u_1, ..., u_r} is clearly S, while the matrix representation for the inverse linear transformation T^{-1} with respect to the same bases is S^{-1}.
5.3 Row and Column Compressions

Row compression

Let A ∈ ℝ^{m×n} have an SVD given by (5.1). Then

    U^T A = Σ V^T = [S 0; 0 0] [V_1^T; V_2^T] = [S V_1^T; 0] ∈ ℝ^{m×n}.

Notice that N(A) = N(U^T A) = N(S V_1^T) and the matrix S V_1^T ∈ ℝ^{r×n} has full row rank. In other words, premultiplication of A by U^T is an orthogonal transformation that "compresses" A by row transformations. Such a row compression can also be accomplished by orthogonal row transformations performed directly on A to reduce it to the form [R; 0], where R is upper triangular. Both compressions are analogous to the so-called row-reduced echelon form which, when derived by a Gaussian elimination algorithm implemented in finite-precision arithmetic, is not generally as reliable a procedure.
Column compression

Again, let A ∈ ℝ^{m×n} have an SVD given by (5.1). Then

    A V = U Σ = [U_1 S   0] ∈ ℝ^{m×n}.

This time, notice that R(A) = R(A V) = R(U_1 S) and the matrix U_1 S ∈ ℝ^{m×r} has full column rank. In other words, postmultiplication of A by V is an orthogonal transformation that "compresses" A by column transformations. Such a compression is analogous to the
so-called column-reduced echelon form, which is not generally a reliable procedure when performed by Gauss transformations in finite-precision arithmetic. For details, see, for example, [7], [11], [23], [25].
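The two compressions are easy to see numerically; a sketch (Python with NumPy, assumed only for illustration):

import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((6, 3)) @ rng.standard_normal((3, 5))   # 6 x 5, rank 3

U, sing, Vt = np.linalg.svd(A)
r = int(np.sum(sing > 1e-10))

row_compressed = U.T @ A        # = Sigma V^T = [S V_1^T; 0]
col_compressed = A @ Vt.T       # = U Sigma   = [U_1 S, 0]

print(np.allclose(row_compressed[r:, :], 0))    # True: last m - r rows are zero
print(np.allclose(col_compressed[:, r:], 0))    # True: last n - r columns are zero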
EXERCISES

1. Let X ∈ ℝ^{m×n}. If X^T X = 0, show that X = 0.

2. Prove Theorem 5.1 starting from the observation that A A^T ≥ 0.

3. Let A ∈ ℝ^{n×n} be symmetric but indefinite. Determine an SVD of A.

4. Let x ∈ ℝ^m, y ∈ ℝ^n be nonzero vectors. Determine an SVD of the matrix A ∈ ℝ_1^{m×n} defined by A = x y^T.

5. Determine SVDs of the matrices

6. Let A ∈ ℝ^{m×n} and suppose W ∈ ℝ^{m×m} and Y ∈ ℝ^{n×n} are orthogonal.

    (a) Show that A and W A Y have the same singular values (and hence the same rank).

    (b) Suppose that W and Y are nonsingular but not necessarily orthogonal. Do A and W A Y have the same singular values? Do they have the same rank?

7. Let A ∈ ℝ_n^{n×n}. Use the SVD to determine a polar factorization of A, i.e., A = Q P where Q is orthogonal and P = P^T > 0. Note: this is analogous to the polar form z = r e^{iθ} of a complex scalar z (where i = j = √-1).
Chapter 6
Linear Equations

In this chapter we examine existence and uniqueness of solutions of systems of linear equations. General linear systems of the form

    A X = B;   A ∈ ℝ^{m×n}, B ∈ ℝ^{m×k},                             (6.1)

are studied and include, as a special case, the familiar vector system

    A x = b;   A ∈ ℝ^{n×n}, b ∈ ℝ^n.                                 (6.2)

6.1 Vector Linear Equations

We begin with a review of some of the principal results associated with vector linear systems.

Theorem 6.1. Consider the system of linear equations

    A x = b;   A ∈ ℝ^{m×n}, b ∈ ℝ^m.                                 (6.3)
1. There exists a solution to (6.3) if and only if b ∈ R(A).

2. There exists a solution to (6.3) for all b ∈ ℝ^m if and only if R(A) = ℝ^m, i.e., A is onto; equivalently, there exists a solution if and only if rank([A, b]) = rank(A), and this is possible only if m ≤ n (since m = dim R(A) = rank(A) ≤ min{m, n}).

3. A solution to (6.3) is unique if and only if N(A) = 0, i.e., A is 1-1.

4. There exists a unique solution to (6.3) for all b ∈ ℝ^m if and only if A is nonsingular; equivalently, A ∈ ℝ^{m×m} and A has neither a 0 singular value nor a 0 eigenvalue.

5. There exists at most one solution to (6.3) for all b ∈ ℝ^m if and only if the columns of A are linearly independent, i.e., N(A) = 0, and this is possible only if m ≥ n.

6. There exists a nontrivial solution to the homogeneous system Ax = 0 if and only if rank(A) < n.
Proof: The proofs are straightforward and can be consulted in standard texts on linear algebra. Note that some parts of the theorem follow directly from others. For example, to prove part 6, note that x = 0 is always a solution to the homogeneous system. Therefore, we must have the case of a nonunique solution, i.e., A is not 1-1, which implies rank(A) < n by part 3. □
6.2 Matrix Linear Equations

In this section we present some of the principal results concerning existence and uniqueness of solutions to the general matrix linear system (6.1). Note that the results of Theorem 6.1 follow from those below for the special case k = 1, while results for (6.2) follow by specializing even further to the case m = n.

Theorem 6.2 (Existence). The matrix linear equation

    A X = B;   A ∈ ℝ^{m×n}, B ∈ ℝ^{m×k},                             (6.4)

has a solution if and only if R(B) ⊆ R(A); equivalently, a solution exists if and only if A A^+ B = B.

Proof: The subspace inclusion criterion follows essentially from the definition of the range of a matrix. The matrix criterion is Theorem 4.18. □

Theorem 6.3. Let A ∈ ℝ^{m×n}, B ∈ ℝ^{m×k} and suppose that A A^+ B = B. Then any matrix of the form

    X = A^+ B + (I - A^+ A) Y,  where Y ∈ ℝ^{n×k} is arbitrary,      (6.5)

is a solution of

    A X = B.                                                         (6.6)

Furthermore, all solutions of (6.6) are of this form.

Proof: To verify that (6.5) is a solution, premultiply by A:

    A X = A A^+ B + A (I - A^+ A) Y
        = B + (A - A A^+ A) Y        by hypothesis
        = B                          since A A^+ A = A by the first Penrose condition.

That all solutions are of this form can be seen as follows. Let Z be an arbitrary solution of (6.6), i.e., A Z = B. Then we can write

    Z = A^+ A Z + (I - A^+ A) Z
      = A^+ B + (I - A^+ A) Z

and this is clearly of the form (6.5). □
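The parametrization (6.5) of all solutions is easily exercised numerically; a sketch (Python with NumPy, assumed only for illustration):

import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((4, 2)) @ rng.standard_normal((2, 3))   # 4 x 3, rank 2
B = A @ rng.standard_normal((3, 2))              # ensures A A+ B = B (Theorem 6.2)

A_plus = np.linalg.pinv(A)
for _ in range(3):                               # three different arbitrary Y's
    Y = rng.standard_normal((3, 2))
    X = A_plus @ B + (np.eye(3) - A_plus @ A) @ Y    # (6.5)
    print(np.allclose(A @ X, B))                 # True each time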
Remark 6.4. When A is square and nonsingular, A^+ = A^{-1} and so (I - A^+ A) = 0. Thus, there is no "arbitrary" component, leaving only the unique solution X = A^{-1} B.
Remark 6.5. It can be shown that the particular solution X = A^+ B is the solution of (6.6) that minimizes Tr X^T X. (Tr(·) denotes the trace of a matrix; recall that Tr X^T X is the sum of the squares of all the elements of X.)
Theorem 6.6 (Uniqueness). A solution of the matrix linear equation

    A X = B;   A ∈ ℝ^{m×n}, B ∈ ℝ^{m×k}                              (6.7)

is unique if and only if A^+ A = I; equivalently, (6.7) has a unique solution if and only if N(A) = 0.

Proof: The first equivalence is immediate from Theorem 6.3. The second follows by noting that A^+ A = I can occur only if r = n, where r = rank(A) (recall r ≤ n). But rank(A) = n if and only if A is 1-1 or N(A) = 0. □
Example 6.7. Suppose A ∈ ℝ^{n×n}. Find all solutions of the homogeneous system Ax = 0.

Solution:

    x = A^+ · 0 + (I - A^+ A) y
      = (I - A^+ A) y,

where y ∈ ℝ^n is arbitrary. Hence, there exists a nonzero solution if and only if A^+ A ≠ I. This is equivalent to either rank(A) = r < n or A being singular. Clearly, if there exists a nonzero solution, it is not unique.

Computation: Since y is arbitrary, it is easy to see that all solutions are generated from a basis for R(I - A^+ A). But if A has an SVD given by A = U Σ V^T, then it is easily checked that I - A^+ A = V_2 V_2^T and R(V_2 V_2^T) = R(V_2) = N(A).
Example 6.8. Characterize all right inverses of a matrix A ∈ ℝ^{m×n}; equivalently, find all solutions R of the equation A R = I_m. Here, we write I_m to emphasize the m × m identity matrix.

Solution: There exists a right inverse if and only if R(I_m) ⊆ R(A) and this is equivalent to A A^+ I_m = I_m. Clearly, this can occur if and only if rank(A) = r = m (since r ≤ m) and this is equivalent to A being onto (A^+ is then a right inverse). All right inverses of A are then of the form

    R = A^+ I_m + (I_n - A^+ A) Y
      = A^+ + (I - A^+ A) Y,

where Y ∈ ℝ^{n×m} is arbitrary. There is a unique right inverse if and only if A^+ A = I (N(A) = 0), in which case A must be invertible and R = A^{-1}.
Example 6.9. Consider the system of linear first-order difference equations

    x_{k+1} = A x_k + B u_k                                          (6.8)
with A ∈ ℝ^{n×n} and B ∈ ℝ^{n×m} (n ≥ 1, m ≥ 1). The vector x_k in linear system theory is known as the state vector at time k while u_k is the input (control) vector. The general solution of (6.8) is given by

    x_k = A^k x_0 + A^{k-1} B u_0 + ··· + A B u_{k-2} + B u_{k-1}    (6.9)

        = A^k x_0 + [B, AB, ..., A^{k-1}B] [u_{k-1}; u_{k-2}; ...; u_0]    (6.10)

for k ≥ 1. We might now ask the question: Given x_0 = 0, does there exist an input sequence {u_j}_{j=0}^{k-1} such that x_k takes an arbitrary value in ℝ^n? In linear system theory, this is a question of reachability. Since m ≥ 1, from the fundamental Existence Theorem, Theorem 6.2, we see that (6.8) is reachable if and only if

    R([B, AB, ..., A^{n-1}B]) = ℝ^n

or, equivalently, if and only if

    rank [B, AB, ..., A^{n-1}B] = n.
A related question is the following: Given an arbitrary initial vector x_0, does there exist an input sequence {u_j}_{j=0}^{n-1} such that x_n = 0? In linear system theory, this is called controllability. Again from Theorem 6.2, we see that (6.8) is controllable if and only if

    R(A^n) ⊆ R([B, AB, ..., A^{n-1}B]).

Clearly, reachability always implies controllability and, if A is nonsingular, controllability and reachability are equivalent. The matrices A = [0 1; 0 0] and B = [1; 0] provide an example of a system that is controllable but not reachable.

The above are standard conditions with analogues for continuous-time models (i.e., linear differential equations). There are many other algebraically equivalent conditions.
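The rank test for reachability and the small example above can be checked directly; a sketch (Python with NumPy, assumed only for illustration; the helper name reachability_matrix is ours):

import numpy as np

def reachability_matrix(A, B):
    # Build [B, AB, ..., A^(n-1) B] for x_{k+1} = A x_k + B u_k.
    n = A.shape[0]
    blocks, Ak = [], np.eye(n)
    for _ in range(n):
        blocks.append(Ak @ B)
        Ak = A @ Ak
    return np.hstack(blocks)

A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[1.0], [0.0]])
R = reachability_matrix(A, B)                     # [[1, 0], [0, 0]]

print(np.linalg.matrix_rank(R))                   # 1 < n = 2, so not reachable
An = np.linalg.matrix_power(A, 2)                 # A^n = 0 here
print(np.linalg.matrix_rank(np.hstack([R, An])) == np.linalg.matrix_rank(R))   # True: controllable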
Example 6.10. We now introduce an output vector y_k to the system (6.8) of Example 6.9 by appending the equation

    y_k = C x_k + D u_k                                              (6.11)

with C ∈ ℝ^{p×n} and D ∈ ℝ^{p×m} (p ≥ 1). We can then pose some new questions about the overall system that are dual in the system-theoretic sense to reachability and controllability. The answers are cast in terms that are dual in the linear algebra sense as well. The condition dual to reachability is called observability: When does knowledge of {u_j}_{j=0}^{n-1} and {y_j}_{j=0}^{n-1} suffice to determine (uniquely) x_0? As a dual to controllability, we have the notion of reconstructibility: When does knowledge of {u_j}_{j=0}^{n-1} and {y_j}_{j=0}^{n-1} suffice to determine (uniquely) x_n? The fundamental duality result from linear system theory is the following:

    (A, B) is reachable [controllable] if and only if (A^T, B^T) is observable [reconstructible].
To derive a condition for observability, notice that

    y_k = C A^k x_0 + C A^{k-1} B u_0 + ··· + C B u_{k-1} + D u_k.   (6.12)

Thus,

    [ y_0 - D u_0;
      y_1 - C B u_0 - D u_1;
      ... ;
      y_{n-1} - C A^{n-2} B u_0 - ··· - C B u_{n-2} - D u_{n-1} ]  =  [ C; C A; ...; C A^{n-1} ] x_0.   (6.13)

Let v denote the (known) vector on the left-hand side of (6.13) and let R denote the matrix on the right-hand side. Then, by definition, v ∈ R(R), so a solution exists. By the fundamental Uniqueness Theorem, Theorem 6.6, the solution is then unique if and only if N(R) = 0, or, equivalently, if and only if

    N([ C; C A; ...; C A^{n-1} ]) = 0.
6.3 A More General Matrix Linear Equation

Theorem 6.11. Let A ∈ ℝ^{m×n}, B ∈ ℝ^{m×q}, and C ∈ ℝ^{p×q}. Then the equation

    A X C = B                                                        (6.14)

has a solution if and only if A A^+ B C^+ C = B, in which case the general solution is of the form

    X = A^+ B C^+ + Y - A^+ A Y C C^+,                               (6.15)

where Y ∈ ℝ^{n×p} is arbitrary.

A compact matrix criterion for uniqueness of solutions to (6.14) requires the notion of the Kronecker product of matrices for its statement. Such a criterion (C C^+ ⊗ A^+ A = I) is stated and proved in Theorem 13.27.
6.4 Some Useful and Interesting Inverses

In many applications, the coefficient matrices of interest are square and nonsingular. Listed below is a small collection of useful matrix identities, particularly for block matrices, associated with matrix inverses. In these identities, A ∈ ℝ^{n×n}, B ∈ ℝ^{n×m}, C ∈ ℝ^{m×n}, and D ∈ ℝ^{m×m}. Invertibility is assumed for any component or subblock whose inverse is indicated. Verification of each identity is recommended as an exercise for the reader.
1. (A + B D C)^{-1} = A^{-1} - A^{-1} B (D^{-1} + C A^{-1} B)^{-1} C A^{-1}.

This result is known as the Sherman-Morrison-Woodbury formula. It has many applications (and is frequently "rediscovered") including, for example, formulas for the inverse of a sum of matrices such as (A + D)^{-1} or (A^{-1} + D^{-1})^{-1}. It also yields very efficient "updating" or "downdating" formulas in expressions such as (A + x x^T)^{-1} (with symmetric A ∈ ℝ^{n×n} and x ∈ ℝ^n) that arise in optimization theory.
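A quick numerical confirmation of the formula (Python with NumPy, assumed only for illustration; the matrices are random and shifted to keep every indicated inverse well defined):

import numpy as np

rng = np.random.default_rng(6)
A = rng.standard_normal((5, 5)) + 5.0 * np.eye(5)    # safely nonsingular
B = rng.standard_normal((5, 2))
C = rng.standard_normal((2, 5))
D = rng.standard_normal((2, 2)) + 3.0 * np.eye(2)

Ai = np.linalg.inv(A)
lhs = np.linalg.inv(A + B @ D @ C)
rhs = Ai - Ai @ B @ np.linalg.inv(np.linalg.inv(D) + C @ Ai @ B) @ C @ Ai
print(np.allclose(lhs, rhs))    # True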
3. [I 0; C -I]^{-1} = [I 0; C -I],   [I B; 0 -I]^{-1} = [I B; 0 -I].

Both of these matrices satisfy the matrix equation X² = I from which it is obvious that X^{-1} = X. Note that the positions of the I and -I blocks may be exchanged.

4. [A B; 0 D]^{-1} = [A^{-1}   -A^{-1} B D^{-1}; 0   D^{-1}].

7. [A B; C D]^{-1} = [A^{-1} + A^{-1} B E C A^{-1}   -A^{-1} B E; -E C A^{-1}   E],

where E = (D - C A^{-1} B)^{-1} (E is the inverse of the Schur complement of A). This result follows easily from the block LU factorization in property 16 of Section 1.4.

8. [A B; C D]^{-1} = [F   -F B D^{-1}; -D^{-1} C F   D^{-1} + D^{-1} C F B D^{-1}],

where F = (A - B D^{-1} C)^{-1}. This result follows easily from the block UL factorization in property 17 of Section 1.4.

EXERCISES

1. As in Example 6.8, characterize all left inverses of a matrix A ∈ ℝ^{m×n}.

2. Let A ∈ ℝ^{m×n}, B ∈ ℝ^{m×k} and suppose A has an SVD as in Theorem 5.1. Assuming R(B) ⊆ R(A), characterize all solutions of the matrix linear equation

    A X = B

in terms of the SVD of A.
3. Let x, y ∈ ℝ^n and suppose further that x^T y ≠ 1. Show that

    (I - x y^T)^{-1} = I - (1 / (x^T y - 1)) x y^T.

4. Let x, y ∈ ℝ^n and suppose further that x^T y ≠ 1. Show that

    [I  x; y^T  1]^{-1} = [I + c x y^T   -c x; -c y^T   c],

where c = 1/(1 - x^T y).
5. Let A ∈ ℝ_n^{n×n} and let A^{-1} have columns c_1, ..., c_n and individual elements γ_{ij}. Assume that γ_{ji} ≠ 0 for some i and j. Show that the matrix B = A - (1/γ_{ji}) e_i e_j^T (i.e., A with 1/γ_{ji} subtracted from its (ij)th element) is singular.

Hint: Show that c_i ∈ N(B).
6. As in Example 6.10, check directly that the condition for reconstructibility takes the form

    N([ C; C A; ...; C A^{n-1} ]) ⊆ N(A^n).
Chapter 7
Projections, Inner Product
Spaces, and Norms
7.1 Projections
Definition 7.1. Let V be a vector space with V = X ⊕ Y. By Theorem 2.26, every v ∈ V has a unique decomposition v = x + y with x ∈ X and y ∈ Y. Define P_{X,Y} : V → X ⊆ V by

    P_{X,Y} v = x  for all v ∈ V.

P_{X,Y} is called the (oblique) projection on X along Y.

Figure 7.1 displays the projection of v on both X and Y in the case V = ℝ².

Figure 7.1. Oblique projections.

Theorem 7.2. P_{X,Y} is linear and P_{X,Y}² = P_{X,Y}.

Theorem 7.3. A linear transformation P is a projection if and only if it is idempotent, i.e., P² = P. Also, P is a projection if and only if I - P is a projection. In fact, P_{Y,X} = I - P_{X,Y}.

Proof: Suppose P is a projection, say on X along Y (using the notation of Definition 7.1).
Let v ∈ V be arbitrary. Then Pv = P(x + y) = Px = x. Moreover, P²v = P Pv = Px = x = Pv. Thus, P² = P. Conversely, suppose P² = P. Let X = {v ∈ V : Pv = v} and Y = {v ∈ V : Pv = 0}. It is easy to check that X and Y are subspaces. We now prove that V = X ⊕ Y. First note that if v ∈ X, then Pv = v. If v ∈ Y, then Pv = 0. Hence if v ∈ X ∩ Y, then v = 0. Now let v ∈ V be arbitrary. Then v = Pv + (I - P)v. Let x = Pv, y = (I - P)v. Then Px = P²v = Pv = x so x ∈ X, while Py = P(I - P)v = Pv - P²v = 0 so y ∈ Y. Thus, V = X ⊕ Y and the projection on X along Y is P.

Essentially the same argument shows that I - P is the projection on Y along X. □
Definition 7.4. In the special case where Y = X^⊥, P_{X,X^⊥} is called an orthogonal projection and we then use the notation P_X = P_{X,X^⊥}.

Theorem 7.5. P ∈ ℝ^{n×n} is the matrix of an orthogonal projection (onto R(P)) if and only if P² = P = P^T.
Proof: Let P be an orthogonal projection (on X, say, along X^⊥) and let x, y ∈ ℝ^n be arbitrary. Note that (I - P)x = (I - P_{X,X^⊥})x = P_{X^⊥,X} x by Theorem 7.3. Thus, (I - P)x ∈ X^⊥. Since Py ∈ X, we have (Py)^T (I - P)x = y^T P^T (I - P)x = 0. Since x and y were arbitrary, we must have P^T (I - P) = 0. Hence P^T = P^T P = P, with the second equality following since P^T P is symmetric. Conversely, suppose P is a symmetric projection matrix and let x be arbitrary. Write x = Px + (I - P)x. Then x^T P^T (I - P)x = x^T P (I - P)x = 0. Thus, since Px ∈ R(P), then (I - P)x ∈ R(P)^⊥ and P must be an orthogonal projection. □
7.1.1 The four fundamental orthogonal projections

Using the notation of Theorems 5.1 and 5.11, let A ∈ ℝ^{m×n} with SVD A = U Σ V^T = U_1 S V_1^T. Then

    P_{R(A)}    = A A^+     = U_1 U_1^T = u_1 u_1^T + ··· + u_r u_r^T,
    P_{R(A)^⊥}  = I - A A^+ = U_2 U_2^T = u_{r+1} u_{r+1}^T + ··· + u_m u_m^T,
    P_{N(A)}    = I - A^+ A = V_2 V_2^T = v_{r+1} v_{r+1}^T + ··· + v_n v_n^T,
    P_{N(A)^⊥}  = A^+ A     = V_1 V_1^T = v_1 v_1^T + ··· + v_r v_r^T

are easily checked to be (unique) orthogonal projections onto the respective four fundamental subspaces.
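These four projections are easily formed and verified numerically; a sketch (Python with NumPy, assumed only for illustration):

import numpy as np

rng = np.random.default_rng(7)
A = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 4))   # rank 2
A_plus = np.linalg.pinv(A)

P_range      = A @ A_plus                # projection onto R(A)
P_range_perp = np.eye(5) - A @ A_plus    # onto R(A)-perp = N(A^T)
P_null       = np.eye(4) - A_plus @ A    # onto N(A)
P_null_perp  = A_plus @ A                # onto N(A)-perp = R(A^T)

for P in (P_range, P_range_perp, P_null, P_null_perp):
    print(np.allclose(P @ P, P), np.allclose(P, P.T))   # idempotent and symmetric

print(np.allclose(P_range @ A, A))       # True: columns of A are unchanged
print(np.allclose(A @ P_null, 0))        # True: N(A) is mapped to zero by A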
Example 7.6. Determine the orthogonal projection of a vector v ∈ ℝ^n on another nonzero vector w ∈ ℝ^n.

Solution: Think of the vector w as an element of the one-dimensional subspace R(w). Then the desired projection is simply

    P_{R(w)} v = w w^+ v
               = (w w^T / (w^T w)) v      (using Example 4.8)
               = ((w^T v) / (w^T w)) w.

Moreover, the vector z that is orthogonal to w and such that v = Pv + z is given by z = P_{R(w)^⊥} v = (I - P_{R(w)}) v = v - ((w^T v)/(w^T w)) w. See Figure 7.2. A direct calculation shows that z and w are, in fact, orthogonal:

    w^T z = w^T v - ((w^T v)/(w^T w)) w^T w = 0.
Figure 7.2. Orthogonal projection on a "line."
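A small numerical instance of this projection (Python with NumPy, assumed only for illustration):

import numpy as np

v = np.array([3.0, 1.0, 2.0])
w = np.array([1.0, 1.0, 0.0])

Pv = (w @ v) / (w @ w) * w        # projection of v on R(w)
z  = v - Pv                       # the component orthogonal to w

print(Pv)                         # [2. 2. 0.]
print(w @ z)                      # 0.0: z is orthogonal to w
print(np.allclose(Pv + z, v))     # True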
Example 7.7. Recall the proof of Theorem 3.11. There, {v_1, ..., v_k} was an orthonormal basis for a subset S of ℝ^n. An arbitrary vector x ∈ ℝ^n was chosen and a formula for x_1 appeared rather mysteriously. The expression for x_1 is simply the orthogonal projection of x on S. Specifically,

    x_1 = P_S x = (v_1^T x) v_1 + ··· + (v_k^T x) v_k.
Example 7.8. Recall the diagram of the four fundamental subspaces. The indicated direct sum decompositions of the domain ℝ^n and co-domain ℝ^m are given easily as follows. Let x ∈ ℝ^n be an arbitrary vector. Then

    x = P_{N(A)^⊥} x + P_{N(A)} x
      = A^+ A x + (I - A^+ A) x
      = V_1 V_1^T x + V_2 V_2^T x      (recall V V^T = I).
Similarly, let y ∈ ℝ^m be an arbitrary vector. Then

    y = P_{R(A)} y + P_{R(A)^⊥} y
      = A A^+ y + (I - A A^+) y
      = U_1 U_1^T y + U_2 U_2^T y      (recall U U^T = I).
Example 7.9. Let

    A = [1 1 0; 1 1 0; 0 0 1].

Then

    A^+ = [1/4 1/4 0; 1/4 1/4 0; 0 0 1],   A^+ A = [1/2 1/2 0; 1/2 1/2 0; 0 0 1],

and we can decompose the vector [2 3 4]^T uniquely into the sum of a vector in N(A)^⊥ and a vector in N(A), respectively, as follows:

    [2; 3; 4] = A^+ A x + (I - A^+ A) x
              = [5/2; 5/2; 4] + [-1/2; 1/2; 0].
7.2 Inner Product Spaces

Definition 7.10. Let V be a vector space over ℝ. Then ⟨·, ·⟩ : V × V → ℝ is a real inner product if

1. ⟨x, x⟩ ≥ 0 for all x ∈ V and ⟨x, x⟩ = 0 if and only if x = 0.

2. ⟨x, y⟩ = ⟨y, x⟩ for all x, y ∈ V.

3. ⟨x, αy_1 + βy_2⟩ = α⟨x, y_1⟩ + β⟨x, y_2⟩ for all x, y_1, y_2 ∈ V and for all α, β ∈ ℝ.

Example 7.11. Let V = ℝ^n. Then ⟨x, y⟩ = x^T y is the "usual" Euclidean inner product or dot product.

Example 7.12. Let V = ℝ^n. Then ⟨x, y⟩_Q = x^T Q y, where Q = Q^T > 0 is an arbitrary n × n positive definite matrix, defines a "weighted" inner product.
Definition 7.13. If A e R
mx
", then A
T
e R
nxm
is the unique linear transformation or map
such that (x, Ay)  (A
T
x, y) for all x € R
m
and for all y e R".
Similarly, let y ∈ R^m be an arbitrary vector. Then

    y = P_{R(A)} y + P_{R(A)^⊥} y
      = A A^+ y + (I − A A^+) y
      = U_1 U_1^T y + U_2 U_2^T y    (recall U U^T = I).

Example 7.9. Let

    A = [ 1  1  0 ; 1  1  0 ].

Then

    A^+ = [ 1/4  1/4 ; 1/4  1/4 ; 0  0 ]

and we can decompose the vector [2 3 4]^T uniquely into the sum of a vector in N(A)^⊥ and a vector in N(A), respectively, as follows:

    [2 ; 3 ; 4] = A^+ A x + (I − A^+ A) x
                = [ 1/2  1/2  0 ; 1/2  1/2  0 ; 0  0  0 ] [2 ; 3 ; 4] + [ 1/2  −1/2  0 ; −1/2  1/2  0 ; 0  0  1 ] [2 ; 3 ; 4]
                = [ 5/2 ; 5/2 ; 0 ] + [ −1/2 ; 1/2 ; 4 ].

7.2 Inner Product Spaces
Definition 7.10. Let V be a vector space over R. Then (·,·) : V × V → R is a real inner product if
1. (x, x) ≥ 0 for all x ∈ V and (x, x) = 0 if and only if x = 0.
2. (x, y) = (y, x) for all x, y ∈ V.
3. (x, αy_1 + βy_2) = α(x, y_1) + β(x, y_2) for all x, y_1, y_2 ∈ V and for all α, β ∈ R.
Example 7.11. Let V = R^n. Then (x, y) = x^T y is the "usual" Euclidean inner product or dot product.
Example 7.12. Let V = R^n. Then (x, y)_Q = x^T Q y, where Q = Q^T > 0 is an arbitrary n × n positive definite matrix, defines a "weighted" inner product.
Definition 7.13. If A ∈ R^{m×n}, then A^T ∈ R^{n×m} is the unique linear transformation or map such that (x, Ay) = (A^T x, y) for all x ∈ R^m and for all y ∈ R^n.
It is easy to check that, with this more "abstract" definition of transpose, and if the (i, j)th element of A is a_{ij}, then the (i, j)th element of A^T is a_{ji}. It can also be checked that all the usual properties of the transpose hold, such as (AB)^T = B^T A^T. However, the definition above allows us to extend the concept of transpose to the case of weighted inner products in the following way. Suppose A ∈ R^{m×n} and let (·,·)_Q and (·,·)_R, with Q and R positive definite, be weighted inner products on R^m and R^n, respectively. Then we can define the "weighted transpose" A^# as the unique map that satisfies

    (x, Ay)_Q = (A^# x, y)_R    for all x ∈ R^m and for all y ∈ R^n.

By Example 7.12 above, we must then have x^T Q A y = x^T (A^#)^T R y for all x, y. Hence we must have Q A = (A^#)^T R. Taking transposes (of the usual variety) gives A^T Q = R A^#. Since R is nonsingular, we find

    A^# = R^{-1} A^T Q.

We can also generalize the notion of orthogonality (x^T y = 0) to Q-orthogonality (Q is a positive definite matrix). Two vectors x, y ∈ R^n are Q-orthogonal (or conjugate with respect to Q) if (x, y)_Q = x^T Q y = 0. Q-orthogonality is an important tool used in studying conjugate direction methods in optimization theory.
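As a numerical aside, the defining property of the weighted transpose is easy to verify on random data. The following minimal NumPy sketch assumes nothing beyond the formula A^# = R^{-1} A^T Q derived above; the sizes, the random seed, and the way Q and R are made positive definite are illustrative choices only:

```python
# Check the weighted-transpose identity (x, Ay)_Q = (A^# x, y)_R numerically.
import numpy as np

rng = np.random.default_rng(0)
m, n = 3, 2
A = rng.standard_normal((m, n))

# Random symmetric positive definite weights Q (m x m) and R (n x n).
Mq = rng.standard_normal((m, m)); Q = Mq @ Mq.T + m * np.eye(m)
Mr = rng.standard_normal((n, n)); R = Mr @ Mr.T + n * np.eye(n)

A_sharp = np.linalg.solve(R, A.T @ Q)      # A^# = R^{-1} A^T Q

x = rng.standard_normal(m)
y = rng.standard_normal(n)
lhs = x @ Q @ (A @ y)                      # (x, Ay)_Q
rhs = (A_sharp @ x) @ R @ y                # (A^# x, y)_R
assert np.isclose(lhs, rhs)
```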
Definition 7.14. Let V be a vector space over C. Then (·,·) : V × V → C is a complex inner product if
1. (x, x) ≥ 0 for all x ∈ V and (x, x) = 0 if and only if x = 0.
2. (x, y) = (y, x)̄  (the complex conjugate of (y, x)) for all x, y ∈ V.
3. (x, αy_1 + βy_2) = α(x, y_1) + β(x, y_2) for all x, y_1, y_2 ∈ V and for all α, β ∈ C.
Remark 7.15. We could use the notation (·,·)_C to denote a complex inner product, but if the vectors involved are complex-valued, the complex inner product is to be understood. Note, too, from part 2 of the definition, that (x, x) must be real for all x.
Remark 7.16. Note from parts 2 and 3 of Definition 7.14 that we have

    (αx_1 + βx_2, y) = ᾱ(x_1, y) + β̄(x_2, y).

Remark 7.17. The Euclidean inner product of x, y ∈ C^n is given by

    (x, y) = Σ_{i=1}^{n} x̄_i y_i = x^H y.

The conventional definition of the complex Euclidean inner product is (x, y) = y^H x but we use its complex conjugate x^H y here for symmetry with the real case.
Remark 7.18. A weighted inner product can be defined as in the real case by (x, y)_Q = x^H Q y, for arbitrary Q = Q^H > 0. The notion of Q-orthogonality can be similarly generalized to the complex case.
Definition 7.19. A vector space (V, F) endowed with a specific inner product is called an inner product space. If F = C, we call V a complex inner product space. If F = R, we call V a real inner product space.
Example 7.20.
1. Check that V = R^{n×n} with the inner product (A, B) = Tr A^T B is a real inner product space. Note that other choices are possible since by properties of the trace function, Tr A^T B = Tr B^T A = Tr A B^T = Tr B A^T.
2. Check that V = C^{n×n} with the inner product (A, B) = Tr A^H B is a complex inner product space. Again, other choices are possible.
Definition 7.21. Let V be an inner product space. For v ∈ V, we define the norm (or length) of v by ||v|| = ((v, v))^{1/2}. This is called the norm induced by (·,·).
Example 7.22.
1. If V = R^n with the usual inner product, the induced norm is given by ||v|| = (Σ_{i=1}^{n} v_i^2)^{1/2}.
2. If V = C^n with the usual inner product, the induced norm is given by ||v|| = (Σ_{i=1}^{n} |v_i|^2)^{1/2}.
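A numerical illustration of Example 7.20 and Definition 7.21: the trace inner product on R^{n×n} induces the Frobenius norm that appears later in Example 7.40. This is a minimal sketch, with the random matrices chosen only for illustration:

```python
# The trace inner product (A, B) = Tr(A^T B) and the norm it induces.
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

ip = np.trace(A.T @ B)                       # (A, B) = Tr A^T B
assert np.isclose(ip, np.trace(B.T @ A))     # symmetry: Tr A^T B = Tr B^T A

induced = np.sqrt(np.trace(A.T @ A))         # ||A|| = (A, A)^(1/2)
assert np.isclose(induced, np.linalg.norm(A, 'fro'))
```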
Theorem 7.23. Let P be an orthogonal projection on an inner product space V. Then ||Pv|| ≤ ||v|| for all v ∈ V.
Proof: Since P is an orthogonal projection, P^2 = P = P^#. (Here, the notation P^# denotes the unique linear transformation that satisfies (Pu, v) = (u, P^# v) for all u, v ∈ V. If this seems a little too abstract, consider V = R^n (or C^n), where P^# is simply the usual P^T (or P^H).) Hence (Pv, v) = (P^2 v, v) = (Pv, P^# v) = (Pv, Pv) = ||Pv||^2 ≥ 0. Now I − P is also a projection, so the above result applies and we get

    0 ≤ ((I − P)v, v) = (v, v) − (Pv, v) = ||v||^2 − ||Pv||^2,

from which the theorem follows.  □
Definition 7.24. The norm induced on an inner product space by the "usual" inner product is called the natural norm.
In case V = C^n or V = R^n, the natural norm is also called the Euclidean norm. In the next section, other norms on these vector spaces are defined. A converse to the above procedure is also available. That is, given a norm defined by ||x|| = ((x, x))^{1/2}, an inner product can be defined via the following.
Theorem 7.25 (Polarization Identity).
1. For x, y ∈ R^n, an inner product is defined by

    (x, y) = x^T y = (||x + y||^2 − ||x||^2 − ||y||^2) / 2.

2. For x, y ∈ C^n, an inner product is defined by an analogous formula, where j = i = (−1)^{1/2}.

7.3 Vector Norms
Definition 7.26. Let (V, F) be a vector space. Then ||·|| : V → R is a vector norm if it satisfies the following three properties:
1. ||x|| ≥ 0 for all x ∈ V and ||x|| = 0 if and only if x = 0.
2. ||αx|| = |α| ||x|| for all x ∈ V and for all α ∈ F.
3. ||x + y|| ≤ ||x|| + ||y|| for all x, y ∈ V.
(This is called the triangle inequality, as seen readily from the usual diagram illustrating the sum of two vectors in R^2.)
Remark 7.27. It is convenient in the remainder of this section to state results for complex-valued vectors. The specialization to the real case is obvious.
Definition 7.28. A vector space (V, F) is said to be a normed linear space if and only if there exists a vector norm ||·|| : V → R satisfying the three conditions of Definition 7.26.
Example 7.29.
1. For x ∈ C^n, the Hölder norms, or p-norms, are defined by

    ||x||_p = ( Σ_{i=1}^{n} |x_i|^p )^{1/p},   1 ≤ p < +∞.

Special cases:
(a) ||x||_1 = Σ_{i=1}^{n} |x_i| (the "Manhattan" norm).
(b) ||x||_2 = ( Σ_{i=1}^{n} |x_i|^2 )^{1/2} = (x^H x)^{1/2} (the Euclidean norm).
(c) ||x||_∞ = max_i |x_i| = lim_{p→+∞} ||x||_p.
(The second equality is a theorem that requires proof.)
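The p-norms are directly available in NumPy, and the limiting behavior in case (c) can be seen numerically. This is a minimal sketch with an illustrative vector:

```python
# Hoelder p-norms and the limit ||x||_p -> ||x||_inf as p grows.
import numpy as np

x = np.array([3.0, -4.0, 1.0])

print(np.linalg.norm(x, 1))        # Manhattan norm: |3| + |-4| + |1| = 8
print(np.linalg.norm(x, 2))        # Euclidean norm: sqrt(26)
print(np.linalg.norm(x, np.inf))   # max |x_i| = 4

for p in (1, 2, 10, 100):
    print(p, np.linalg.norm(x, p)) # approaches the infinity norm as p grows
```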
2. Some weighted p-norms:
(a) ||x||_{1,D} = Σ_{i=1}^{n} d_i |x_i|, where d_i > 0.
(b) ||x||_{2,Q} = (x^H Q x)^{1/2}, where Q = Q^H > 0 (this norm is more commonly denoted ||·||_Q).
3. On the vector space (C[t_0, t_1], R), define the vector norm

    ||f|| = max_{t_0 ≤ t ≤ t_1} |f(t)|.

On the vector space ((C[t_0, t_1])^n, R), define the vector norm

    ||f||_∞ = max_{t_0 ≤ t ≤ t_1} ||f(t)||_∞.

Theorem 7.30 (Hölder Inequality). Let x, y ∈ C^n. Then

    |x^H y| ≤ ||x||_p ||y||_q,   where 1/p + 1/q = 1.

A particular case of the Hölder inequality is of special interest.
Theorem 7.31 (Cauchy–Bunyakovsky–Schwarz Inequality). Let x, y ∈ C^n. Then

    |x^H y| ≤ ||x||_2 ||y||_2,

with equality if and only if x and y are linearly dependent.
Proof: Consider the matrix [x y] ∈ C^{n×2}. Since

    [x y]^H [x y] = [ x^H x   x^H y ; y^H x   y^H y ]

is a nonnegative definite matrix, its determinant must be nonnegative. In other words, 0 ≤ (x^H x)(y^H y) − (x^H y)(y^H x). Since y^H x is the complex conjugate of x^H y, we see immediately that |x^H y| ≤ ||x||_2 ||y||_2.  □
Note: This is not the classical algebraic proof of the Cauchy–Bunyakovsky–Schwarz (CBS) inequality (see, e.g., [20, p. 217]). However, it is particularly easy to remember.
Remark 7.32. The angle θ between two nonzero vectors x, y ∈ C^n may be defined by cos θ = |x^H y| / (||x||_2 ||y||_2), 0 ≤ θ ≤ π/2. The CBS inequality is thus equivalent to the statement 0 ≤ cos θ ≤ 1.
Remark 7.33. Theorem 7.31 and Remark 7.32 are true for general inner product spaces.
Remark 7.34. The norm ||·||_2 is unitarily invariant, i.e., if U ∈ C^{n×n} is unitary, then ||Ux||_2 = ||x||_2 (Proof: ||Ux||_2^2 = x^H U^H U x = x^H x = ||x||_2^2). However, ||·||_1 and ||·||_∞ are not unitarily invariant.
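Unitary invariance of the 2-norm, and its failure for the 1-norm, is easy to see numerically. A minimal sketch follows; the rotation angle and test vector are illustrative choices only:

```python
# The 2-norm is invariant under an orthogonal (real unitary) map; the 1-norm need not be.
import numpy as np

theta = 0.7
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # orthogonal: U.T @ U = I
x = np.array([1.0, 2.0])

print(np.linalg.norm(U @ x, 2), np.linalg.norm(x, 2))   # equal
print(np.linalg.norm(U @ x, 1), np.linalg.norm(x, 1))   # generally different
```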
Similar remarks apply to the unitary invariance of norms of real vectors under orthogonal transformation.
Remark 7.35. If x, y ∈ C^n are orthogonal, then we have the Pythagorean Identity

    ||x ± y||_2^2 = ||x||_2^2 + ||y||_2^2,

the proof of which follows easily from ||z||_2^2 = z^H z.
Theorem 7.36. All norms on C^n are equivalent; i.e., for any two norms ||·||_α and ||·||_β there exist constants c_1, c_2 (possibly depending on n) such that

    c_1 ||x||_α ≤ ||x||_β ≤ c_2 ||x||_α   for all x ∈ C^n.

Example 7.37. For x ∈ C^n, the following inequalities are all tight bounds; i.e., there exist vectors x for which equality holds:

    ||x||_1 ≤ n^{1/2} ||x||_2,    ||x||_1 ≤ n ||x||_∞;
    ||x||_2 ≤ ||x||_1,            ||x||_2 ≤ n^{1/2} ||x||_∞;
    ||x||_∞ ≤ ||x||_1,            ||x||_∞ ≤ ||x||_2.

Finally, we conclude this section with a theorem about convergence of vectors. Convergence of a sequence of vectors to some limit vector can be converted into a statement about convergence of real numbers, i.e., convergence in terms of vector norms.
Theorem 7.38. Let ||·|| be a vector norm and suppose v, v^(1), v^(2), ... ∈ C^n. Then

    lim_{k→+∞} v^(k) = v   if and only if   lim_{k→+∞} ||v^(k) − v|| = 0.

7.4 Matrix Norms
In this section we introduce the concept of matrix norm. As with vectors, the motivation for using matrix norms is to have a notion of either the size of or the nearness of matrices. The former notion is useful for perturbation analysis, while the latter is needed to make sense of "convergence" of matrices. Attention is confined to the vector space (R^{m×n}, R) since that is what arises in the majority of applications. Extension to the complex case is straightforward and essentially obvious.
Definition 7.39. ||·|| : R^{m×n} → R is a matrix norm if it satisfies the following three properties:
1. ||A|| ≥ 0 for all A ∈ R^{m×n} and ||A|| = 0 if and only if A = 0.
2. ||αA|| = |α| ||A|| for all A ∈ R^{m×n} and for all α ∈ R.
3. ||A + B|| ≤ ||A|| + ||B|| for all A, B ∈ R^{m×n}.
(As with vectors, this is called the triangle inequality.)
Example 7.40. Let A ∈ R^{m×n}. Then the Frobenius norm (or matrix Euclidean norm) is defined by

    ||A||_F = ( Σ_{i=1}^{m} Σ_{j=1}^{n} a_{ij}^2 )^{1/2} = ( Σ_{i=1}^{r} σ_i^2(A) )^{1/2} = ( Tr(A^T A) )^{1/2} = ( Tr(A A^T) )^{1/2}

(where r = rank(A)).
Example 7.41. Let A ∈ R^{m×n}. Then the matrix p-norms are defined by

    ||A||_p = max_{x ≠ 0} ||Ax||_p / ||x||_p = max_{||x||_p = 1} ||Ax||_p.

The following three special cases are important because they are "computable." Each is a theorem and requires a proof.
1. The "maximum column sum" norm is

    ||A||_1 = max_j Σ_{i=1}^{m} |a_{ij}|.

2. The "maximum row sum" norm is

    ||A||_∞ = max_i Σ_{j=1}^{n} |a_{ij}|.

3. The spectral norm is

    ||A||_2 = λ_max^{1/2}(A^T A) = λ_max^{1/2}(A A^T) = σ_1(A).

Note: ||A^+||_2 = 1/σ_r(A), where r = rank(A).
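These "computable" cases and the Frobenius norm can all be checked directly against their defining formulas. A minimal NumPy sketch, with an illustrative matrix:

```python
# The three computable matrix p-norms and the Frobenius norm.
import numpy as np

A = np.array([[1.0, -2.0],
              [3.0,  4.0]])

col_sum = np.abs(A).sum(axis=0).max()              # maximum column sum
row_sum = np.abs(A).sum(axis=1).max()              # maximum row sum
sigma1  = np.linalg.svd(A, compute_uv=False)[0]    # largest singular value

assert np.isclose(np.linalg.norm(A, 1), col_sum)
assert np.isclose(np.linalg.norm(A, np.inf), row_sum)
assert np.isclose(np.linalg.norm(A, 2), sigma1)
assert np.isclose(np.linalg.norm(A, 'fro'), np.sqrt(np.trace(A.T @ A)))
```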
Example 7.42. Let A ∈ R^{m×n}. The Schatten p-norms are defined by

    ||A||_{S,p} = ( σ_1^p + ... + σ_r^p )^{1/p}.

Some special cases of Schatten p-norms are equal to norms defined previously. For example, ||·||_{S,2} = ||·||_F and ||·||_{S,∞} = ||·||_2. The norm ||·||_{S,1} is often called the trace norm.
Example 7.43. Let A ∈ R^{m×n}. Then "mixed" norms can also be defined by

    ||A||_{p,q} = max_{x ≠ 0} ||Ax||_p / ||x||_q.

Example 7.44. The "matrix analogue of the vector 1-norm," ||A||_s = Σ_{i,j} |a_{ij}|, is a norm.
The concept of a matrix norm alone is not altogether useful since it does not allow us to estimate the size of a matrix product AB in terms of the sizes of A and B individually.
Notice that this difficulty did not arise for vectors, although there are analogues for, e.g., inner products or outer products of vectors. We thus need the following definition.
Definition 7.45. Let A ∈ R^{m×n}, B ∈ R^{n×k}. Then the norms ||·||_α, ||·||_β, and ||·||_γ are mutually consistent if ||AB||_α ≤ ||A||_β ||B||_γ. A matrix norm ||·|| is said to be consistent if ||AB|| ≤ ||A|| ||B|| whenever the matrix product is defined.
Example 7.46.
1. ||·||_F and ||·||_p for all p are consistent matrix norms.
2. The "mixed" norm

    ||A||_{∞,1} = max_{x ≠ 0} ||Ax||_∞ / ||x||_1 = max_{i,j} |a_{ij}|

is a matrix norm but it is not consistent. For example, take A = B = [1 1; 1 1]. Then ||AB||_{∞,1} = 2 while ||A||_{∞,1} ||B||_{∞,1} = 1.
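The failure of submultiplicativity for this max-entry "mixed" norm, in contrast to the consistent p-norms, can be confirmed with a two-line computation. A minimal sketch:

```python
# The max-|a_ij| mixed norm fails ||AB|| <= ||A|| ||B||; the 2-norm does not.
import numpy as np

def mixed_norm(M):
    # ||M|| = max_{i,j} |m_ij|
    return np.abs(M).max()

A = np.array([[1.0, 1.0],
              [1.0, 1.0]])
B = A.copy()

print(mixed_norm(A @ B), mixed_norm(A) * mixed_norm(B))                         # 2.0 vs 1.0
print(np.linalg.norm(A @ B, 2) <= np.linalg.norm(A, 2) * np.linalg.norm(B, 2))  # True
```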
The p-norms are examples of matrix norms that are subordinate to (or induced by) a vector norm, i.e.,

    ||A|| = max_{x ≠ 0} ||Ax|| / ||x|| = max_{||x|| = 1} ||Ax||

(or, more generally, ||A||_{p,q} = max_{x ≠ 0} ||Ax||_p / ||x||_q). For such subordinate norms, also called operator norms, we clearly have ||Ax|| ≤ ||A|| ||x||. Since ||ABx|| ≤ ||A|| ||Bx|| ≤ ||A|| ||B|| ||x||, it follows that all subordinate norms are consistent.
Theorem 7.47. There exists a vector x* such that ||Ax*|| = ||A|| ||x*|| if the matrix norm is subordinate to the vector norm.
Theorem 7.48. If ||·||_m is a consistent matrix norm, there exists a vector norm ||·||_v consistent with it, i.e., ||Ax||_v ≤ ||A||_m ||x||_v.
Not every consistent matrix norm is subordinate to a vector norm. For example, consider ||·||_F. Then ||Ax||_2 ≤ ||A||_F ||x||_2, so ||·||_2 is consistent with ||·||_F, but there does not exist a vector norm ||·|| such that ||A||_F is given by max_{x ≠ 0} ||Ax|| / ||x||.

Useful Results
The following miscellaneous results about matrix norms are collected for future reference. The interested reader is invited to prove each of them as an exercise.
1. ||I_n||_p = 1 for all p, while ||I_n||_F = n^{1/2}.
2. For A ∈ R^{n×n}, the following inequalities are all tight, i.e., there exist matrices A for which equality holds:

    ||A||_1 ≤ n^{1/2} ||A||_2,    ||A||_1 ≤ n ||A||_∞,          ||A||_1 ≤ n^{1/2} ||A||_F;
    ||A||_2 ≤ n^{1/2} ||A||_1,    ||A||_2 ≤ n^{1/2} ||A||_∞,    ||A||_2 ≤ ||A||_F;
    ||A||_∞ ≤ n ||A||_1,          ||A||_∞ ≤ n^{1/2} ||A||_2,    ||A||_∞ ≤ n^{1/2} ||A||_F;
    ||A||_F ≤ n^{1/2} ||A||_1,    ||A||_F ≤ n^{1/2} ||A||_2,    ||A||_F ≤ n^{1/2} ||A||_∞.
3. For A ∈ R^{m×n},

    max_{i,j} |a_{ij}| ≤ ||A||_2 ≤ (mn)^{1/2} max_{i,j} |a_{ij}|.

4. The norms ||·||_F and ||·||_2 (as well as all the Schatten p-norms, but not necessarily other p-norms) are unitarily invariant; i.e., for all A ∈ R^{m×n} and for all orthogonal matrices Q ∈ R^{m×m} and Z ∈ R^{n×n}, ||QAZ||_α = ||A||_α for α = 2 or F.

Convergence
The following theorem uses matrix norms to convert a statement about convergence of a sequence of matrices into a statement about the convergence of an associated sequence of scalars.
Theorem 7.49. Let ||·|| be a matrix norm and suppose A, A^(1), A^(2), ... ∈ R^{m×n}. Then

    lim_{k→+∞} A^(k) = A   if and only if   lim_{k→+∞} ||A^(k) − A|| = 0.

EXERCISES
1. If P is an orthogonal projection, prove that P^+ = P.
2. Suppose P and Q are orthogonal projections and P + Q = I. Prove that P − Q must be an orthogonal matrix.
3. Prove that I − A^+ A is an orthogonal projection. Also, prove directly that V_2 V_2^T is an orthogonal projection, where V_2 is defined as in Theorem 5.1.
4. Suppose that a matrix A ∈ R^{m×n} has linearly independent columns. Prove that the orthogonal projection onto the space spanned by these column vectors is given by the matrix P = A(A^T A)^{-1} A^T.
5. Find the (orthogonal) projection of the vector [2 3 4]^T onto the subspace of R^3 spanned by the plane 3x − y + 2z = 0.
6. Prove that R^{n×n} with the inner product (A, B) = Tr A^T B is a real inner product space.
7. Show that the matrix norms ||·||_2 and ||·||_F are unitarily invariant.
8. Definition: Let A ∈ R^{n×n} and denote its set of eigenvalues (not necessarily distinct) by {λ_1, ..., λ_n}. The spectral radius of A is the scalar

    ρ(A) = max_i |λ_i|.
Let
A = [ ~ 0 ~ ] .
14 12 5
Determine ||A||_F, ||A||_1, ||A||_2, ||A||_∞, and ρ(A).
9. Let

    A = [ 8  1  6 ; 3  5  7 ; 4  9  2 ].

Determine ||A||_F, ||A||_1, ||A||_2, ||A||_∞, and ρ(A). (An n × n matrix, all of whose columns and rows as well as main diagonal and antidiagonal sum to s = n(n^2 + 1)/2, is called a "magic square" matrix. If M is a magic square matrix, it can be proved that ||M||_p = s for all p.)
10. Let A = x y^T, where both x, y ∈ R^n are nonzero. Determine ||A||_F, ||A||_1, ||A||_2, and ||A||_∞ in terms of ||x||_α and/or ||y||_β, where α and β take the value 1, 2, or ∞ as appropriate.
Chapter 8
Linear Least Squares Problems

8.1 The Linear Least Squares Problem
Problem: Suppose A ∈ R^{m×n} with m ≥ n and b ∈ R^m is a given vector. The linear least squares problem consists of finding an element of the set

    X = { x ∈ R^n : ρ(x) = ||Ax − b||_2 is minimized }.

Solution: The set X has a number of easily verified properties:
1. A vector x ∈ X if and only if A^T r = 0, where r = b − Ax is the residual associated with x. The equations A^T r = 0 can be rewritten in the form A^T A x = A^T b and the latter form is commonly known as the normal equations, i.e., x ∈ X if and only if x is a solution of the normal equations. For further details, see Section 8.2.
2. A vector x ∈ X if and only if x is of the form

    x = A^+ b + (I − A^+ A) y,   where y ∈ R^n is arbitrary.   (8.1)

To see why this must be so, write the residual r in the form

    r = (b − P_{R(A)} b) + (P_{R(A)} b − Ax).

Now, (P_{R(A)} b − Ax) is clearly in R(A), while

    (b − P_{R(A)} b) = (I − P_{R(A)}) b = P_{R(A)^⊥} b ∈ R(A)^⊥,

so these two vectors are orthogonal. Hence,

    ||b − Ax||_2^2 = ||b − P_{R(A)} b||_2^2 + ||P_{R(A)} b − Ax||_2^2

from the Pythagorean identity (Remark 7.35). Thus, ||Ax − b||_2^2 (and hence ρ(x) = ||Ax − b||_2) assumes its minimum value if and only if

    Ax = P_{R(A)} b = A A^+ b,   (8.2)
and this equation always has a solution since AA^+ b ∈ R(A). By Theorem 6.3, all solutions of (8.2) are of the form

    x = A^+ A A^+ b + (I − A^+ A) y
      = A^+ b + (I − A^+ A) y,

where y ∈ R^n is arbitrary. The minimum value of ρ(x) is then clearly equal to

    ||b − P_{R(A)} b||_2 = ||(I − A A^+) b||_2 ≤ ||b||_2,

the last inequality following by Theorem 7.23.
3. X is convex. To see why, consider two arbitrary vectors x_1 = A^+ b + (I − A^+ A)y and x_2 = A^+ b + (I − A^+ A)z in X. Let θ ∈ [0, 1]. Then the convex combination θx_1 + (1 − θ)x_2 = A^+ b + (I − A^+ A)(θy + (1 − θ)z) is clearly in X.
4. X has a unique element x* of minimal 2-norm. In fact, x* = A^+ b is the unique vector that solves this "double minimization" problem, i.e., x* minimizes the residual ρ(x) and is the vector of minimum 2-norm that does so. This follows immediately from convexity or directly from the fact that all x ∈ X are of the form (8.1) and

    ||x||_2^2 = ||A^+ b||_2^2 + ||(I − A^+ A) y||_2^2,

which follows since the two vectors are orthogonal.
5. There is a unique solution to the least squares problem, i.e., X = {x*} = {A^+ b}, if and only if A^+ A = I or, equivalently, if and only if rank(A) = n.
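The structure of the solution set (8.1) is easy to explore numerically. The following minimal NumPy sketch (the rank-deficient matrix, right-hand side, and the vector y are illustrative choices only) checks that every member of X attains the same minimal residual and that A^+ b has the smallest norm among them:

```python
# The solution set x = A^+ b + (I - A^+ A) y of the linear least squares problem.
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 1.0],
              [0.0, 0.0]])        # rank-deficient example
b = np.array([2.0, 3.0, 4.0])

Aplus = np.linalg.pinv(A)
x_star = Aplus @ b                 # minimum 2-norm least squares solution

y = np.array([5.0, -7.0])          # arbitrary y gives another member of X
x_other = x_star + (np.eye(2) - Aplus @ A) @ y

r1 = np.linalg.norm(A @ x_star - b)
r2 = np.linalg.norm(A @ x_other - b)
assert np.isclose(r1, r2)                                  # same (minimal) residual
assert np.linalg.norm(x_star) <= np.linalg.norm(x_other)   # x* has minimal norm
```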
Just as for the solution of linear equations, we can generalize the linear least squares problem to the matrix case.
Theorem 8.1. Let A ∈ R^{m×n} and B ∈ R^{m×k}. The general solution to

    min_{X ∈ R^{n×k}} ||AX − B||_2

is of the form

    X = A^+ B + (I − A^+ A) Y,

where Y ∈ R^{n×k} is arbitrary. The unique solution of minimum 2-norm or F-norm is X = A^+ B.
Remark 8.2. Notice that solutions of the linear least squares problem look exactly the same as solutions of the linear system AX = B. The only difference is that in the case of linear least squares solutions, there is no "existence condition" such as R(B) ⊆ R(A). If the existence condition happens to be satisfied, then equality holds and the least squares
residual is 0. Of all solutions that give a residual of 0, the unique solution X = A^+ B has minimum 2-norm or F-norm.
Remark 8.3. If we take B = I_m in Theorem 8.1, then X = A^+ can be interpreted as saying that the Moore–Penrose pseudoinverse of A is the best (in the matrix 2-norm sense) matrix such that AX approximates the identity.
Remark 8.4. Many other interesting and useful approximation results are available for the matrix 2-norm (and F-norm). One such is the following. Let A ∈ R^{m×n} with SVD

    A = U Σ V^T = Σ_{i=1}^{r} σ_i u_i v_i^T.

Then a best rank k approximation to A for 1 ≤ k ≤ r, i.e., a solution to

    min_{M ∈ R_k^{m×n}} ||A − M||_2,

is given by

    M_k = Σ_{i=1}^{k} σ_i u_i v_i^T.

The special case in which m = n and k = n − 1 gives a nearest singular matrix to A ∈ R_n^{n×n}.

8.2 Geometric Solution
Looking at the schematic provided in Figure 8.1, it is apparent that minimizing ||Ax − b||_2 is equivalent to finding the vector x ∈ R^n for which p = Ax is closest to b (in the Euclidean norm sense). Clearly, r = b − Ax must be orthogonal to R(A). Thus, if Ay is an arbitrary vector in R(A) (i.e., y is arbitrary), we must have

    0 = (Ay)^T (b − Ax)
      = y^T A^T (b − Ax)
      = y^T (A^T b − A^T A x).

Since y is arbitrary, we must have A^T b − A^T A x = 0 or A^T A x = A^T b.
Special case: If A is full (column) rank, then x = (A^T A)^{-1} A^T b.

8.3 Linear Regression and Other Linear Least Squares Problems

8.3.1 Example: Linear regression
Suppose we have m measurements (t_1, y_1), ..., (t_m, y_m) for which we hypothesize a linear (affine) relationship

    y = αt + β    (8.3)
Figure 8.1. Projection of b on R(A).

for certain constants α and β. One way to solve this problem is to find the line that best fits the data in the least squares sense; i.e., with the model (8.3), we have

    y_1 = α t_1 + β + δ_1,
    y_2 = α t_2 + β + δ_2,
      ...
    y_m = α t_m + β + δ_m,

where δ_1, ..., δ_m are "errors" and we wish to minimize δ_1^2 + ... + δ_m^2. Geometrically, we are trying to find the best line that minimizes the (sum of squares of the) distances from the given data points. See, for example, Figure 8.2.

Figure 8.2. Simple linear regression.

Note that distances are measured in the vertical sense from the points to the line (as indicated, for example, for the point (t_1, y_1)). However, other criteria are possible. For example, one could measure the distances in the horizontal sense, or the perpendicular distance from the points to the line could be used. The latter is called total least squares. Instead of 2-norms, one could also use 1-norms or ∞-norms. The latter two are computationally
much more difficult to handle, and thus we present only the more tractable 2-norm case in the text that follows.
The m "error equations" can be written in matrix form as

    y = Ax + δ,    (8.4)

where A = [t_1 1; ... ; t_m 1] ∈ R^{m×2}, x = [α; β], y = [y_1; ...; y_m], and δ = [δ_1; ...; δ_m]. We then want to solve the problem

    min_x δ^T δ = min_x (Ax − y)^T (Ax − y)

or, equivalently,

    min_x ||δ||_2^2 = min_x ||Ax − y||_2^2.

Solution: x = [α; β] is a solution of the normal equations A^T A x = A^T y where, for the special form of the matrices above, we have

    A^T A = [ Σ_i t_i^2   Σ_i t_i ; Σ_i t_i   m ]

and

    A^T y = [ Σ_i t_i y_i ; Σ_i y_i ].

The solution for the parameters α and β can then be written [α; β] = (A^T A)^{-1} A^T y.
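A minimal NumPy sketch of this regression problem follows; the data values are hypothetical and chosen only for illustration, and the library routine np.linalg.lstsq is used instead of forming the normal equations explicitly (although the two agree here):

```python
# Simple linear regression y ~ alpha*t + beta by least squares.
import numpy as np

t = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.1, 2.9, 4.2, 4.8])         # made-up measurements

A = np.column_stack([t, np.ones_like(t)])   # rows [t_i, 1]
(alpha, beta), *_ = np.linalg.lstsq(A, y, rcond=None)   # min ||A x - y||_2

# Same answer from the normal equations A^T A x = A^T y.
x_ne = np.linalg.solve(A.T @ A, A.T @ y)
assert np.allclose([alpha, beta], x_ne)
```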
8.3.2 Other least squares problems
Suppose the hypothesized model is not the linear equation (8.3) but rather is of the form

    y = f(t) = c_1 φ_1(t) + ... + c_n φ_n(t).    (8.5)

In (8.5) the φ_i(t) are given (basis) functions and the c_i are constants to be determined to minimize the least squares error. The matrix problem is still (8.4), where we now have

    A = [ φ_1(t_1)  ...  φ_n(t_1) ; ... ; φ_1(t_m)  ...  φ_n(t_m) ].

An important special case of (8.5) is least squares polynomial approximation, which corresponds to choosing φ_i(t) = t^{i−1}, i ∈ {1, ..., n}, although this choice can lead to computational
difficulties because of numerical ill conditioning for large n. Numerically better approaches are based on orthogonal polynomials, piecewise polynomial functions, splines, etc.
The key feature in (8.5) is that the coefficients c_i appear linearly. The basis functions φ_i can be arbitrarily nonlinear. Sometimes a problem in which the c_i's appear nonlinearly can be converted into a linear problem. For example, if the fitting function is of the form y = f(t) = c_1 e^{c_2 t}, then taking logarithms yields the equation log y = log c_1 + c_2 t. Then defining ŷ = log y, ĉ_1 = log c_1, and ĉ_2 = c_2 results in a standard linear least squares problem.

8.4 Least Squares and Singular Value Decomposition
In the numerical linear algebra literature (e.g., [4], [7], [11], [23]), it is shown that solution of linear least squares problems via the normal equations can be a very poor numerical method in finite-precision arithmetic. Since the standard Kalman filter essentially amounts to sequential updating of normal equations, it can be expected to exhibit such poor numerical behavior in practice (and it does). Better numerical methods are based on algorithms that work directly and solely on A itself rather than A^T A. Two basic classes of algorithms are based on SVD and QR (orthogonal-upper triangular) factorization, respectively. The former is much more expensive but is generally more reliable and offers considerable theoretical insight.
In this section we investigate solution of the linear least squares problem

    min_x ||Ax − b||_2,   A ∈ R^{m×n},  b ∈ R^m,    (8.6)

via the SVD. Specifically, we assume that A has an SVD given by A = UΣV^T = U_1 S V_1^T as in Theorem 5.1. We now note that

    ||Ax − b||_2^2 = ||UΣV^T x − b||_2^2
                   = ||ΣV^T x − U^T b||_2^2          since ||·||_2 is unitarily invariant
                   = ||Σz − c||_2^2                  where z = V^T x, c = U^T b
                   = || [S  0; 0  0][z_1; z_2] − [c_1; c_2] ||_2^2
                   = || [S z_1 − c_1; −c_2] ||_2^2
                   = ||S z_1 − c_1||_2^2 + ||c_2||_2^2.

The last equality follows from the fact that if v = [v_1; v_2], then ||v||_2^2 = ||v_1||_2^2 + ||v_2||_2^2 (note that orthogonality is not what is used here; the subvectors can have different lengths). This explains why it is convenient to work above with the square of the norm rather than the norm. As far as the minimization is concerned, the two are equivalent. In fact, the last quantity above is clearly minimized by taking z_1 = S^{-1} c_1. The subvector z_2 is arbitrary, while the minimum value of ||Ax − b||_2^2 is ||c_2||_2^2.
Now transform back to the original coordinates:

    x = Vz
      = [V_1  V_2] [z_1; z_2]
      = V_1 z_1 + V_2 z_2
      = V_1 S^{-1} c_1 + V_2 z_2
      = V_1 S^{-1} U_1^T b + V_2 z_2.

The last equality follows from

    c = U^T b = [U_1^T b; U_2^T b] = [c_1; c_2].

Note that since z_2 is arbitrary, V_2 z_2 is an arbitrary vector in R(V_2) = N(A). Thus, x has been written in the form x = A^+ b + (I − A^+ A)y, where y ∈ R^n is arbitrary. This agrees, of course, with (8.1).
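The SVD-based solution is straightforward to code. A minimal NumPy sketch follows (the matrix, right-hand side, and rank tolerance are illustrative assumptions); it takes z_2 = 0, which gives the minimum 2-norm solution, and checks the residual formula:

```python
# Least squares via the SVD: x = V1 S^{-1} U1^T b, residual ||U2^T b||_2.
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 1.0],
              [0.0, 0.0]])
b = np.array([2.0, 3.0, 4.0])

U, s, Vt = np.linalg.svd(A)
r = int(np.sum(s > 1e-12))                       # rank
U1, U2 = U[:, :r], U[:, r:]
V1, S = Vt[:r, :].T, np.diag(s[:r])

x_min = V1 @ np.linalg.solve(S, U1.T @ b)        # z2 = 0: minimum 2-norm solution
assert np.allclose(x_min, np.linalg.pinv(A) @ b)

residual = np.linalg.norm(A @ x_min - b)
assert np.isclose(residual, np.linalg.norm(U2.T @ b))
```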
The minimum value of the least squares residual is

    ||c_2||_2 = ||U_2^T b||_2,

and we clearly have that

    minimum least squares residual is 0
      ⇔ b is orthogonal to all vectors in U_2
      ⇔ b is orthogonal to all vectors in R(A)^⊥
      ⇔ b ∈ R(A).

Another expression for the minimum residual is ||(I − AA^+)b||_2. This follows easily since

    ||(I − AA^+)b||_2^2 = ||U_2 U_2^T b||_2^2 = b^T U_2 U_2^T U_2 U_2^T b = b^T U_2 U_2^T b = ||U_2^T b||_2^2.

Finally, an important special case of the linear least squares problem is the so-called full-rank problem, i.e., A ∈ R_n^{m×n}. In this case the SVD of A is given by A = UΣV^T = [U_1  U_2][S; 0]V_1^T, and there is thus "no V_2 part" to the solution.

8.5 Least Squares and QR Factorization
In this section, we again look at the solution of the linear least squares problem (8.6) but this time in terms of the QR factorization. This matrix factorization is much cheaper to compute than an SVD and, with appropriate numerical enhancements, can be quite reliable.
To simplify the exposition, we add the simplifying assumption that A has full column rank, i.e., A ∈ R_n^{m×n}. It is then possible, via a sequence of so-called Householder or Givens transformations, to reduce A in the following way. A finite sequence of simple orthogonal row transformations (of Householder or Givens type) can be performed on A to reduce it to triangular form. If we label the product of such orthogonal row transformations as the orthogonal matrix Q^T ∈ R^{m×m}, we have

    Q^T A = [ R ; 0 ],    (8.7)
where R ∈ R_n^{n×n} is upper triangular. Now write Q = [Q_1  Q_2], where Q_1 ∈ R^{m×n} and Q_2 ∈ R^{m×(m−n)}. Both Q_1 and Q_2 have orthonormal columns. Multiplying through by Q in (8.7), we see that

    A = Q [ R ; 0 ]                       (8.8)
      = [Q_1  Q_2] [ R ; 0 ]
      = Q_1 R.                            (8.9)

Any of (8.7), (8.8), or (8.9) are variously referred to as QR factorizations of A. Note that (8.9) is essentially what is accomplished by the Gram–Schmidt process, i.e., by writing A R^{-1} = Q_1 we see that a "triangular" linear combination (given by the coefficients of R^{-1}) of the columns of A yields the orthonormal columns of Q_1.
Now note that

    ||Ax − b||_2^2 = ||Q^T A x − Q^T b||_2^2                 since ||·||_2 is unitarily invariant
                   = || [R ; 0] x − [c_1 ; c_2] ||_2^2        where c_1 = Q_1^T b, c_2 = Q_2^T b
                   = ||Rx − c_1||_2^2 + ||c_2||_2^2.

The last quantity above is clearly minimized by taking x = R^{-1} c_1 and the minimum residual is ||c_2||_2. Equivalently, we have x = R^{-1} Q_1^T b = A^+ b and the minimum residual is ||Q_2^T b||_2.
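For a full-column-rank A, this QR-based recipe is a few lines of NumPy. A minimal sketch (the data matrix and right-hand side are illustrative choices only):

```python
# Full-column-rank least squares via the reduced QR factorization.
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
b = np.array([2.0, 1.0, 3.0])

Q1, R = np.linalg.qr(A)               # reduced QR: A = Q1 R, Q1 is m x n
x = np.linalg.solve(R, Q1.T @ b)      # solve the triangular system R x = Q1^T b
assert np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0])
```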
EXERCISES
1. For A ∈ R^{m×n}, b ∈ R^m, and any y ∈ R^n, check directly that (I − A^+ A)y and A^+ b are orthogonal vectors.
2. Consider the following set of measurements (x_i, y_i):

    (1, 2), (2, 1), (3, 3).

(a) Find the best (in the 2-norm sense) line of the form y = αx + β that fits this data.
(b) Find the best (in the 2-norm sense) line of the form x = αy + β that fits this data.
3. Suppose q_1 and q_2 are two orthonormal vectors and b is a fixed vector, all in R^n.
(a) Find the optimal linear combination αq_1 + βq_2 that is closest to b (in the 2-norm sense).
(b) Let r denote the "error vector" b − αq_1 − βq_2. Show that r is orthogonal to both q_1 and q_2.
4. Find all solutions of the linear least squares problem

    min_x ||Ax − b||_2
when A = [
5. Consider the problem of finding the minimum 2-norm solution of the linear least squares problem

    min_x ||Ax − b||_2
when A = ] and b = [ ! 1 The solution is
(a) Consider a perturbation E_1 of A, where δ is a small positive number. Solve the perturbed version of the above problem,

    min_y ||A_1 y − b||_2,

where A_1 = A + E_1. What happens to ||x* − y||_2 as δ approaches 0?
(b) Now consider the perturbation E_2 of A, where again δ is a small positive number. Solve the perturbed problem

    min_z ||A_2 z − b||_2,

where A_2 = A + E_2. What happens to ||x* − z||_2 as δ approaches 0?
6. Use the four Penrose conditions and the fact that Q_1 has orthonormal columns to verify that if A ∈ R_n^{m×n} can be factored in the form (8.9), then A^+ = R^{-1} Q_1^T.
7. Let A ∈ R^{n×n}, not necessarily nonsingular, and suppose A = QR, where Q is orthogonal. Prove that A^+ = R^+ Q^T.
Chapter 9
Eigenvalues and
Eigenvectors
9.1 Fundamental Definitions and Properties
Definition 9.1. A nonzero vector x E en is a right eigenvector of A E e
nxn
if there exists
a scalar A E e, called an eigenvalue, such that
Ax = AX. (9.1)
Similarly, a nonzero vector y E en is a left eigenvector corresponding to an eigenvalue
Mif
(9.2)
By taking Hennitian transposes in (9.1), we see immediately that x
H
is a left eigen
vector of A H associated with I. Note that if x [y] is a right [left] eigenvector of A, then
so is ax [ay] for any nonzero scalar a E C. One oftenused scaling for an eigenvector is
a = 1/ IIx II so that the scaled eigenvector has nonn 1. The 2nonn is the most common
nonn used for such scaling.
Definition 9.2. The polynomialn (A) = det (A  A l) is called the characteristic polynomial
of A. (Note that the characteristic polynomial can also be defined as det(Al  A). This
results in at most a change of sign and, as a matter of convenience, we use both forms
throughout the text.)
The following classical theorem can be very useful in hand calculation. It can be
proved easily from the Jordan canonical fonn to be discussed in the text to follow (see, for
example, [21D or directly using elementary properties of inverses and determinants (see,
for example, [3]).
Theorem 9.3 (CayleyHamilton). For any A E e
nxn
, n(A) = O.
Example 9.4. Let A = [  ~  ~ ] . Then n(A) = A2 + 2A  3. It is an easy exercise to
verify that n(A) = A2 + 2A  31 = O.
It can be proved from elementary properties of detenninants that if A E e
nxn
, then
n(A) is a polynomial of degree n. Thus, the Fundamental Theorem of Algebra says that
75
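For readers who want to check such identities numerically, the following minimal sketch (assuming only numpy; the 2 x 2 matrix below is arbitrary, not the one from Example 9.4) evaluates the characteristic polynomial at A:

import numpy as np

# A quick numerical check of the Cayley-Hamilton theorem on an arbitrary matrix.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
n = A.shape[0]

# Coefficients of det(lambda*I - A), highest power of lambda first.
c = np.poly(A)

# Evaluate pi(A) = c[0]*A^n + c[1]*A^(n-1) + ... + c[n]*I.
pi_A = sum(ck * np.linalg.matrix_power(A, n - k) for k, ck in enumerate(c))
print(np.allclose(pi_A, np.zeros((n, n))))   # True, up to roundoff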
It can be proved from elementary properties of determinants that if A ∈ C^{n×n}, then π(λ) is a polynomial of degree n. Thus, the Fundamental Theorem of Algebra says that π(λ) has n roots, possibly repeated. These roots, as solutions of the determinant equation

π(λ) = det(A - λI) = 0,    (9.3)

are the eigenvalues of A and imply the singularity of the matrix A - λI, and hence further guarantee the existence of corresponding nonzero eigenvectors.

Definition 9.5. The spectrum of A ∈ C^{n×n} is the set of all eigenvalues of A, i.e., the set of all roots of its characteristic polynomial π(λ). The spectrum of A is denoted Λ(A).

Let the eigenvalues of A ∈ C^{n×n} be denoted λ_1, …, λ_n. Then if we write (9.3) in the form

π(λ) = det(A - λI) = (λ_1 - λ) ⋯ (λ_n - λ)    (9.4)

and set λ = 0 in this identity, we get the interesting fact that det(A) = λ_1 λ_2 ⋯ λ_n (see also Theorem 9.25).

If A ∈ R^{n×n}, then π(λ) has real coefficients. Hence the roots of π(λ), i.e., the eigenvalues of A, must occur in complex conjugate pairs.

Example 9.6. Let α, β ∈ R and let A = [α β; -β α]. Then π(λ) = λ² - 2αλ + α² + β² and A has eigenvalues α ± βj (where j = i = √-1).

If A ∈ R^{n×n}, then there is an easily checked relationship between the left and right eigenvectors of A and A^T (take Hermitian transposes of both sides of (9.2)). Specifically, if y is a left eigenvector of A corresponding to λ ∈ Λ(A), then y is a right eigenvector of A^T corresponding to λ̄ ∈ Λ(A). Note, too, that by elementary properties of the determinant, we always have Λ(A) = Λ(A^T), but that Λ(A) = Λ(Ā) only if A ∈ R^{n×n}.

Definition 9.7. If λ is a root of multiplicity m of π(λ), we say that λ is an eigenvalue of A of algebraic multiplicity m. The geometric multiplicity of λ is the number of associated independent eigenvectors = n - rank(A - λI) = dim N(A - λI).

If λ ∈ Λ(A) has algebraic multiplicity m, then 1 ≤ dim N(A - λI) ≤ m. Thus, if we denote the geometric multiplicity of λ by g, then we must have 1 ≤ g ≤ m.

Definition 9.8. A matrix A ∈ R^{n×n} is said to be defective if it has an eigenvalue whose geometric multiplicity is not equal to (i.e., less than) its algebraic multiplicity. Equivalently, A is said to be defective if it does not have n linearly independent (right or left) eigenvectors.

From the Cayley-Hamilton Theorem, we know that π(A) = 0. However, it is possible for A to satisfy a lower-order polynomial. For example, if A = [1 0; 0 1], then A satisfies (λ - 1)² = 0. But it also clearly satisfies the smaller degree polynomial equation (λ - 1) = 0.

Definition 9.9. The minimal polynomial of A ∈ R^{n×n} is the polynomial α(λ) of least degree such that α(A) = 0.

It can be shown that α(λ) is essentially unique (unique if we force the coefficient of the highest power of λ to be +1, say; such a polynomial is said to be monic and we generally write α(λ) as a monic polynomial throughout the text). Moreover, it can also be
shown that α(λ) divides every nonzero polynomial β(λ) for which β(A) = 0. In particular, α(λ) divides π(λ).

There is an algorithm to determine α(λ) directly (without knowing eigenvalues and associated eigenvector structure). Unfortunately, this algorithm, called the Bezout algorithm, is numerically unstable.

Example 9.10. The above definitions are illustrated below for a series of matrices, each of which has an eigenvalue 2 of algebraic multiplicity 4, i.e., π(λ) = (λ - 2)^4. We denote the geometric multiplicity by g.

A = [2 1 0 0; 0 2 1 0; 0 0 2 1; 0 0 0 2] has α(λ) = (λ - 2)^4 and g = 1.

A = [2 1 0 0; 0 2 1 0; 0 0 2 0; 0 0 0 2] has α(λ) = (λ - 2)^3 and g = 2.

A = [2 1 0 0; 0 2 0 0; 0 0 2 0; 0 0 0 2] has α(λ) = (λ - 2)^2 and g = 3.

A = [2 0 0 0; 0 2 0 0; 0 0 2 0; 0 0 0 2] has α(λ) = (λ - 2) and g = 4.

At this point, one might speculate that g plus the degree of α must always be five. Unfortunately, such is not the case. The matrix

A = [2 1 0 0; 0 2 0 0; 0 0 2 1; 0 0 0 2]

has α(λ) = (λ - 2)^2 and g = 2.

Theorem 9.11. Let A ∈ C^{n×n} and let λ_i be an eigenvalue of A with corresponding right eigenvector x_i. Furthermore, let y_j be a left eigenvector corresponding to any λ_j ∈ Λ(A) such that λ_j ≠ λ_i. Then y_j^H x_i = 0.

Proof: Since A x_i = λ_i x_i,

y_j^H A x_i = λ_i y_j^H x_i.    (9.5)
Similarly, since y_j^H A = λ_j y_j^H,

y_j^H A x_i = λ_j y_j^H x_i.    (9.6)

Subtracting (9.6) from (9.5), we find 0 = (λ_i - λ_j) y_j^H x_i. Since λ_i - λ_j ≠ 0, we must have y_j^H x_i = 0. □

The proof of Theorem 9.11 is very similar to two other fundamental and important results.

Theorem 9.12. Let A ∈ C^{n×n} be Hermitian, i.e., A = A^H. Then all eigenvalues of A must be real.

Proof: Suppose (λ, x) is an arbitrary eigenvalue/eigenvector pair such that Ax = λx. Then

x^H A x = λ x^H x.    (9.7)

Taking Hermitian transposes in (9.7) yields

x^H A^H x = λ̄ x^H x.

Using the fact that A is Hermitian, we have that λ̄ x^H x = λ x^H x. However, since x is an eigenvector, we have x^H x ≠ 0, from which we conclude λ̄ = λ, i.e., λ is real. □

Theorem 9.13. Let A ∈ C^{n×n} be Hermitian and suppose λ and μ are distinct eigenvalues of A with corresponding right eigenvectors x and z, respectively. Then x and z must be orthogonal.

Proof: Premultiply the equation Ax = λx by z^H to get z^H A x = λ z^H x. Take the Hermitian transpose of this equation and use the facts that A is Hermitian and λ is real to get x^H A z = λ x^H z. Premultiply the equation Az = μz by x^H to get x^H A z = μ x^H z = λ x^H z. Since λ ≠ μ, we must have that x^H z = 0, i.e., the two vectors must be orthogonal. □

Let us now return to the general case.

Theorem 9.14. Let A ∈ C^{n×n} have distinct eigenvalues λ_1, …, λ_n with corresponding right eigenvectors x_1, …, x_n. Then {x_1, …, x_n} is a linearly independent set. The same result holds for the corresponding left eigenvectors.

Proof: For the proof see, for example, [21, p. 118]. □

If A ∈ C^{n×n} has distinct eigenvalues, and if λ_i ∈ Λ(A), then by Theorem 9.11, x_i is orthogonal to all y_j's for which j ≠ i. However, it cannot be the case that y_i^H x_i = 0 as well, or else x_i would be orthogonal to n linearly independent vectors (by Theorem 9.14) and would thus have to be 0, contradicting the fact that it is an eigenvector. Since y_i^H x_i ≠ 0 for each i, we can choose the normalization of the x_i's, or the y_i's, or both, so that y_i^H x_i = 1 for i ∈ {1, …, n}.
Theorem 9.15. Let A ∈ C^{n×n} have distinct eigenvalues λ_1, …, λ_n and let the corresponding right eigenvectors form a matrix X = [x_1, …, x_n]. Similarly, let Y = [y_1, …, y_n] be the matrix of corresponding left eigenvectors. Furthermore, suppose that the left and right eigenvectors have been normalized so that y_i^H x_i = 1, i ∈ {1, …, n}. Finally, let Λ = diag(λ_1, …, λ_n) ∈ C^{n×n}. Then A x_i = λ_i x_i, i ∈ {1, …, n}, can be written in matrix form as

AX = XΛ    (9.8)

while y_i^H x_j = δ_ij, i, j ∈ {1, …, n}, is expressed by the equation

Y^H X = I.    (9.9)

These matrix equations can be combined to yield the following matrix factorizations:

X^{-1} A X = Λ = Y^H A X    (9.10)

and

A = X Λ X^{-1} = X Λ Y^H = Σ_{i=1}^n λ_i x_i y_i^H.    (9.11)

Example 9.16. Let

A = [ ].

Then π(λ) = det(A - λI) = -(λ³ + 4λ² + 9λ + 10) = -(λ + 2)(λ² + 2λ + 5), from which we find Λ(A) = {-2, -1 ± 2j}. We can now find the right and left eigenvectors corresponding to these eigenvalues.

For λ_1 = -2, solve the 3 x 3 linear system (A - (-2)I) x_1 = 0 to get

x_1 = [ ].

Note that one component of x_1 can be set arbitrarily, and this then determines the other two (since dim N(A - (-2)I) = 1). To get the corresponding left eigenvector y_1, solve the linear system y_1^H (A + 2I) = 0 to get

y_1 = [ ].

This time we have chosen the arbitrary scale factor for y_1 so that y_1^H x_1 = 1.

For λ_2 = -1 + 2j, solve the linear system (A - (-1 + 2j)I) x_2 = 0 to get

x_2 = [3 + j; 3 - j; 2].
Solve the linear system y_2^H (A - (-1 + 2j)I) = 0 and normalize y_2 so that y_2^H x_2 = 1 to get

y_2 = [ ].

For λ_3 = -1 - 2j, we could proceed to solve linear systems as for λ_2. However, we can also note that x_3 = x̄_2 and y_3 = ȳ_2. To see this, use the fact that λ_3 = λ̄_2 and simply conjugate the equation A x_2 = λ_2 x_2 to get A x̄_2 = λ̄_2 x̄_2. A similar argument yields the result for left eigenvectors.

Now define the matrix X of right eigenvectors:

X = [x_1  x_2  x_3] = [ ].

It is then easy to verify that

X^{-1} = Y^H = [ ].

Other results in Theorem 9.15 can also be verified. For example,

X^{-1} A X = Λ = diag(-2, -1 + 2j, -1 - 2j).

Finally, note that we could have solved directly only for x_1 and x_2 (and x_3 = x̄_2). Then, instead of determining the y_i's directly, we could have found them instead by computing X^{-1} and reading off its rows.
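The last remark, that the left eigenvectors can be read off the rows of X^{-1}, is easy to confirm numerically. The sketch below assumes numpy and uses a companion-form matrix that has the same characteristic polynomial as in Example 9.16; it is a stand-in for illustration, not necessarily the matrix used in that example.

import numpy as np

# Companion matrix of lambda^3 + 4*lambda^2 + 9*lambda + 10, so its
# eigenvalues are -2 and -1 +/- 2j, as in Example 9.16.
A = np.array([[  0.0,  1.0,  0.0],
              [  0.0,  0.0,  1.0],
              [-10.0, -9.0, -4.0]])

lam, X = np.linalg.eig(A)       # columns of X are right eigenvectors
YH = np.linalg.inv(X)           # rows of X^{-1} are (suitably scaled) left eigenvectors y_i^H

# Verify X^{-1} A X = Lambda and the dyadic expansion A = sum_i lambda_i x_i y_i^H.
print(np.allclose(YH @ A @ X, np.diag(lam)))
A_rebuilt = sum(lam[i] * np.outer(X[:, i], YH[i, :]) for i in range(3))
print(np.allclose(A, A_rebuilt))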
Example 9.17. Let

A = [ ].

Then π(λ) = det(A - λI) = -(λ³ + 8λ² + 19λ + 12) = -(λ + 1)(λ + 3)(λ + 4), from which we find Λ(A) = {-1, -3, -4}. Proceeding as in the previous example, it is straightforward to compute

X = [ ]

and

X^{-1} = Y^H = [ ].
We also have X^{-1} A X = Λ = diag(-1, -3, -4), which is equivalent to the dyadic expansion

A = Σ_{i=1}^3 λ_i x_i y_i^H = (-1) x_1 y_1^H + (-3) x_2 y_2^H + (-4) x_3 y_3^H.

Theorem 9.18. Eigenvalues (but not eigenvectors) are invariant under a similarity transformation T.

Proof: Suppose (λ, x) is an eigenvalue/eigenvector pair such that Ax = λx. Then, since T is nonsingular, we have the equivalent statement (T^{-1} A T)(T^{-1} x) = λ (T^{-1} x), from which the theorem statement follows. For left eigenvectors we have a similar statement, namely y^H A = λ y^H if and only if (T^H y)^H (T^{-1} A T) = λ (T^H y)^H. □

Remark 9.19. If f is an analytic function (e.g., f(x) is a polynomial, or e^x, or sin x, or, in general, representable by a power series Σ_{n=0}^∞ a_n x^n), then it is easy to show that the eigenvalues of f(A) (defined as Σ_{n=0}^∞ a_n A^n) are f(λ), but f(A) does not necessarily have all the same eigenvectors (unless, say, A is diagonalizable). For example, A = [0 1; 0 0] has only one right eigenvector corresponding to the eigenvalue 0, but A² = [0 0; 0 0] has two independent right eigenvectors associated with the eigenvalue 0. What is true is that the eigenvalue/eigenvector pair (λ, x) maps to (f(λ), x) but not conversely.

The following theorem is useful when solving systems of linear differential equations. Details of how the matrix exponential e^{tA} is used to solve the system ẋ = Ax are the subject of Chapter 11.

Theorem 9.20. Let A ∈ R^{n×n} and suppose X^{-1} A X = Λ, where Λ is diagonal. Then

e^{tA} = X e^{tΛ} X^{-1} = Σ_{i=1}^n e^{λ_i t} x_i y_i^H.
Proof: Starting from the definition, we have

e^{tA} = Σ_{k=0}^∞ (tA)^k / k! = Σ_{k=0}^∞ t^k (X Λ X^{-1})^k / k! = X ( Σ_{k=0}^∞ t^k Λ^k / k! ) X^{-1} = X e^{tΛ} X^{-1} = Σ_{i=1}^n e^{λ_i t} x_i y_i^H. □

The following corollary is immediate from the theorem upon setting t = 1.

Corollary 9.21. If A ∈ R^{n×n} is diagonalizable with eigenvalues λ_i, i ∈ {1, …, n}, and right eigenvectors x_i, i ∈ {1, …, n}, then e^A has eigenvalues e^{λ_i}, i ∈ {1, …, n}, and the same eigenvectors.

There are extensions to Theorem 9.20 and Corollary 9.21 for any function that is analytic on the spectrum of A, i.e., f(A) = X f(Λ) X^{-1} = X diag(f(λ_1), …, f(λ_n)) X^{-1}.
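As an illustration of Theorem 9.20 (a sketch only, assuming numpy and scipy are available), one can compare X e^{tΛ} X^{-1} against a general-purpose matrix exponential routine:

import numpy as np
from scipy.linalg import expm

A = np.array([[-1.0,  1.0],
              [ 0.0, -3.0]])        # diagonalizable, with distinct eigenvalues
t = 0.7

lam, X = np.linalg.eig(A)
etA_eig = X @ np.diag(np.exp(lam * t)) @ np.linalg.inv(X)   # X e^{t Lambda} X^{-1}

print(np.allclose(etA_eig, expm(t * A)))    # True, up to roundoff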
It is desirable, of course, to have a version of Theorem 9.20 and its corollary in which A is not necessarily diagonalizable. It is necessary first to consider the notion of Jordan canonical form, from which such a result is then available and presented later in this chapter.

9.2 Jordan Canonical Form

Theorem 9.22.

1. Jordan Canonical Form (JCF): For all A ∈ C^{n×n} with eigenvalues λ_1, …, λ_n ∈ C (not necessarily distinct), there exists a nonsingular X ∈ C^{n×n} such that

X^{-1} A X = J = diag(J_1, …, J_q),    (9.12)

where each of the Jordan block matrices J_1, …, J_q is of the form

J_i = [λ_i 1 0 … 0; 0 λ_i 1 … 0; … ; 0 … 0 λ_i 1; 0 … 0 0 λ_i]    (9.13)
and Σ_{i=1}^q k_i = n.

2. Real Jordan Canonical Form: For all A ∈ R^{n×n} with eigenvalues λ_1, …, λ_n (not necessarily distinct), there exists a nonsingular X ∈ R^{n×n} such that

X^{-1} A X = J = diag(J_1, …, J_q),    (9.14)

where each of the Jordan block matrices J_1, …, J_q is of the form

J_i = [λ_i 1 0 … 0; 0 λ_i 1 … 0; … ; 0 … 0 λ_i 1; 0 … 0 0 λ_i]

in the case of real eigenvalues λ_i ∈ Λ(A), and

J_i = [M_i I_2 0 … 0; 0 M_i I_2 … 0; … ; 0 … 0 M_i I_2; 0 … 0 0 M_i],

where M_i = [α_i β_i; -β_i α_i] and I_2 = [1 0; 0 1], in the case of complex conjugate eigenvalues α_i ± jβ_i ∈ Λ(A).

Proof: For the proof see, for example, [21, pp. 120-124]. □

Transformations like T = [ ] allow us to go back and forth between a real JCF and its complex counterpart:

T^{-1} [α+jβ 0; 0 α-jβ] T = [α β; -β α] = M.

For nontrivial Jordan blocks, the situation is only a bit more complicated. With the 4 x 4 matrix T = [ ],
it is easily checked that

T^{-1} [α+jβ 1 0 0; 0 α+jβ 0 0; 0 0 α-jβ 1; 0 0 0 α-jβ] T = [M I_2; 0 M].

Definition 9.23. The characteristic polynomials of the Jordan blocks defined in Theorem 9.22 are called the elementary divisors or invariant factors of A.

Theorem 9.24. The characteristic polynomial of a matrix is the product of its elementary divisors. The minimal polynomial of a matrix is the product of the elementary divisors of highest degree corresponding to distinct eigenvalues.

Theorem 9.25. Let A ∈ C^{n×n} with eigenvalues λ_1, …, λ_n. Then

1. det(A) = Π_{i=1}^n λ_i.

2. Tr(A) = Σ_{i=1}^n λ_i.

Proof:

1. From Theorem 9.22 we have that A = X J X^{-1}. Thus,

det(A) = det(X J X^{-1}) = det(J) = Π_{i=1}^n λ_i.

2. Again, from Theorem 9.22 we have that A = X J X^{-1}. Thus,

Tr(A) = Tr(X J X^{-1}) = Tr(J X^{-1} X) = Tr(J) = Σ_{i=1}^n λ_i. □
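Both parts of Theorem 9.25 are easy to confirm numerically; a minimal sketch (assuming numpy) on an arbitrary symmetric matrix:

import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])

lam = np.linalg.eigvals(A)
print(np.isclose(np.prod(lam), np.linalg.det(A)))   # det(A) = product of eigenvalues
print(np.isclose(np.sum(lam),  np.trace(A)))        # Tr(A) = sum of eigenvalues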
Example 9.26. Suppose A ∈ R^{7×7} is known to have π(λ) = (λ - 1)^4 (λ - 2)^3 and α(λ) = (λ - 1)^2 (λ - 2)^2. Then A has two possible JCFs (not counting reorderings of the diagonal blocks):

J^(1) = diag( [1 1; 0 1], [1], [1], [2 1; 0 2], [2] )   and   J^(2) = diag( [1 1; 0 1], [1 1; 0 1], [2 1; 0 2], [2] ).

Note that J^(1) has elementary divisors (λ - 1)², (λ - 1), (λ - 1), (λ - 2)², and (λ - 2), while J^(2) has elementary divisors (λ - 1)², (λ - 1)², (λ - 2)², and (λ - 2).
Example 9.27. Knowing π(λ), α(λ), and rank(A - λ_i I) for distinct λ_i is not sufficient to determine the JCF of A uniquely. The 7 x 7 matrices

A_1 = diag( [a 1 0; 0 a 1; 0 0 a], [a 1 0; 0 a 1; 0 0 a], [a] )   and   A_2 = diag( [a 1 0; 0 a 1; 0 0 a], [a 1; 0 a], [a 1; 0 a] )

both have π(λ) = (λ - a)^7, α(λ) = (λ - a)^3, and rank(A - aI) = 4, i.e., three eigenvectors.

9.3 Determination of the JCF

The first critical item of information in determining the JCF of a matrix A ∈ R^{n×n} is its number of eigenvectors. For each distinct eigenvalue λ_i, the associated number of linearly independent right (or left) eigenvectors is given by dim N(A - λ_i I) = n - rank(A - λ_i I). The straightforward case is, of course, when λ_i is simple, i.e., of algebraic multiplicity 1; it then has precisely one eigenvector. The more interesting (and difficult) case occurs when λ_i is of algebraic multiplicity greater than one. For example, suppose

A = [3 2 1; 0 3 0; 0 0 3].

Then

A - 3I = [0 2 1; 0 0 0; 0 0 0]

has rank 1, so the eigenvalue 3 has two eigenvectors associated with it. If we let [ξ_1 ξ_2 ξ_3]^T denote a solution to the linear system (A - 3I)ξ = 0, we find that 2ξ_2 + ξ_3 = 0. Thus, both

[1; 0; 0]   and   [0; 1; -2],

for example, are eigenvectors (and are independent). To get a third vector x_3 such that X = [x_1 x_2 x_3] reduces A to JCF, we need the notion of principal vector.

Definition 9.28. Let A ∈ C^{n×n} (or R^{n×n}). Then x is a right principal vector of degree k associated with λ ∈ Λ(A) if and only if (A - λI)^k x = 0 and (A - λI)^{k-1} x ≠ 0.

Remark 9.29.

1. An analogous definition holds for a left principal vector of degree k.
2. The phrase "of grade k" is often used synonymously with "of degree k."

3. Principal vectors are sometimes also called generalized eigenvectors, but the latter term will be assigned a much different meaning in Chapter 12.

4. The case k = 1 corresponds to the "usual" eigenvector.

5. A right (or left) principal vector of degree k is associated with a Jordan block J_i of dimension k or larger.

9.3.1 Theoretical computation

To motivate the development of a procedure for determining principal vectors, consider a 2 x 2 Jordan block [λ 1; 0 λ]. Denote by x^(1) and x^(2) the two columns of a nonsingular matrix X ∈ R^{2×2} that reduces a matrix A to this JCF. Then the equation AX = XJ can be written

A [x^(1) x^(2)] = [x^(1) x^(2)] [λ 1; 0 λ].

The first column yields the equation A x^(1) = λ x^(1), which simply says that x^(1) is a right eigenvector. The second column yields the following equation for x^(2), the principal vector of degree 2:

(A - λI) x^(2) = x^(1).    (9.17)

If we premultiply (9.17) by (A - λI), we find (A - λI)² x^(2) = (A - λI) x^(1) = 0. Thus, the definition of principal vector is satisfied.

This suggests a "general" procedure. First, determine all eigenvalues of A ∈ R^{n×n} (or C^{n×n}). Then for each distinct λ ∈ Λ(A) perform the following:

1. Solve

(A - λI) x^(1) = 0.

This step finds all the eigenvectors (i.e., principal vectors of degree 1) associated with λ. The number of eigenvectors depends on the rank of A - λI. For example, if rank(A - λI) = n - 1, there is only one eigenvector. If the algebraic multiplicity of λ is greater than its geometric multiplicity, principal vectors still need to be computed from succeeding steps.

2. For each independent x^(1), solve

(A - λI) x^(2) = x^(1).

The number of linearly independent solutions at this step depends on the rank of (A - λI)². If, for example, this rank is n - 2, there are two linearly independent solutions to the homogeneous equation (A - λI)² x^(2) = 0. One of these solutions is, of course, x^(1) (≠ 0), since (A - λI)² x^(1) = (A - λI)·0 = 0. The other solution is the desired principal vector of degree 2. (It may be necessary to take a linear combination of x^(1) vectors to get a right-hand side that is in R(A - λI). See, for example, Exercise 7.)
3. For each independent x^(2) from step 2, solve

(A - λI) x^(3) = x^(2).

4. Continue in this way until the total number of independent eigenvectors and principal vectors is equal to the algebraic multiplicity of λ.

Unfortunately, this natural-looking procedure can fail to find all Jordan vectors. For more extensive treatments, see, for example, [20] and [21]. Determination of eigenvectors and principal vectors is obviously very tedious for anything beyond simple problems (n = 2 or 3, say). Attempts to do such calculations in finite-precision floating-point arithmetic generally prove unreliable. There are significant numerical difficulties inherent in attempting to compute a JCF, and the interested student is strongly urged to consult the classical and very readable [8] to learn why. Notice that high-quality mathematical software such as MATLAB does not offer a jcf command, although a jordan command is available in MATLAB's Symbolic Toolbox.
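For small, exactly representable examples the procedure can nevertheless be mimicked directly. The sketch below (an illustration only, not a robust algorithm, and subject to the numerical caveats above) builds an eigenvector and a degree-2 principal vector for the matrix A = [3 2 1; 0 3 0; 0 0 3] used earlier in this section; it assumes numpy and scipy are available.

import numpy as np
from scipy.linalg import null_space

A = np.array([[3.0, 2.0, 1.0],
              [0.0, 3.0, 0.0],
              [0.0, 0.0, 3.0]])
lam = 3.0
B = A - lam * np.eye(3)

# Step 1: the eigenvectors form a basis of N(A - lam*I); here there are two.
V = null_space(B)                       # 3 x 2 matrix with orthonormal columns
print("geometric multiplicity:", V.shape[1])

# Step 2: pick an eigenvector lying in R(A - lam*I) as the right-hand side
# and solve (A - lam*I) x2 = x1 for a principal vector of degree 2.
x1 = np.array([1.0, 0.0, 0.0])          # eigenvector that also lies in R(A - 3I)
x2, *_ = np.linalg.lstsq(B, x1, rcond=None)
print(np.allclose(B @ x2, x1))          # consistent system: a degree-2 vector exists
print(np.allclose(B @ B @ x2, 0))       # (A - lam*I)^2 x2 = 0, as Definition 9.28 requires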
Theorem 9.30. Suppose A ∈ C^{k×k} has an eigenvalue λ of algebraic multiplicity k and suppose further that rank(A - λI) = k - 1. Let X = [x^(1), …, x^(k)], where the chain of vectors x^(i) is constructed as above. Then

X^{-1} A X = [λ 1 0 … 0; 0 λ 1 … 0; … ; 0 … 0 λ 1; 0 … 0 0 λ],

the k x k Jordan block associated with λ.

Theorem 9.31. {x^(1), …, x^(k)} is a linearly independent set.

Theorem 9.32. Principal vectors associated with different Jordan blocks are linearly independent.

Example 9.33. Let

A = [ ].

The eigenvalues of A are λ_1 = 1, λ_2 = 1, and λ_3 = 2. First, find the eigenvectors associated with the distinct eigenvalues 1 and 2.

(A - 2I) x_3^(1) = 0 yields

x_3^(1) = [ ].
(A - 1I) x_1^(1) = 0 yields

x_1^(1) = [ ].

To find a principal vector of degree 2 associated with the multiple eigenvalue 1, solve (A - 1I) x_1^(2) = x_1^(1) to get

x_1^(2) = [ ].

Now let

X = [x_1^(1)  x_1^(2)  x_3^(1)] = [ ].

Then it is easy to check that

X^{-1} = [ ]   and   X^{-1} A X = [1 1 0; 0 1 0; 0 0 2].

9.3.2 On the +1's in JCF blocks

In this subsection we show that the nonzero superdiagonal elements of a JCF need not be 1's but can be arbitrary, so long as they are nonzero. For the sake of definiteness, we consider below the case of a single Jordan block, but the result clearly holds for any JCF.

Suppose A ∈ R^{n×n} and

X^{-1} A X = J = [λ 1 0 … 0; 0 λ 1 … 0; … ; 0 … 0 λ 1; 0 … 0 0 λ].

Let D = diag(d_1, …, d_n) be a nonsingular "scaling" matrix. Then

D^{-1}(X^{-1} A X) D = D^{-1} J D = Ĵ = [λ d_2/d_1 0 … 0; 0 λ d_3/d_2 … 0; … ; 0 … 0 λ d_n/d_{n-1}; 0 … 0 0 λ].
Appropriate choice of the d_i's then yields any desired nonzero superdiagonal elements. This result can also be interpreted in terms of the matrix X = [x_1, …, x_n] of eigenvectors and principal vectors that reduces A to its JCF. Specifically, Ĵ is obtained from A via the similarity transformation XD = [d_1 x_1, …, d_n x_n].
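A two-line numerical check of this scaling argument (a sketch, assuming numpy; the particular λ and d_i values are arbitrary):

import numpy as np

lam = 5.0
J = lam * np.eye(3) + np.diag([1.0, 1.0], k=1)   # 3 x 3 Jordan block
D = np.diag([1.0, 2.0, 6.0])                     # any nonsingular diagonal scaling

J_hat = np.linalg.inv(D) @ J @ D
print(np.diag(J_hat, k=1))    # superdiagonal becomes [d2/d1, d3/d2] = [2., 3.]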
In a similar fashion, the reverse-order identity matrix (or exchange matrix)

P = P^T = P^{-1} = [0 … 0 1; 0 … 1 0; … ; 1 0 … 0]    (9.18)

can be used to put the superdiagonal elements in the subdiagonal instead if that is desired:

P^{-1} [λ 1 0 … 0; 0 λ 1 … 0; … ; 0 … 0 λ 1; 0 … 0 0 λ] P = [λ 0 0 … 0; 1 λ 0 … 0; … ; 0 … 1 λ 0; 0 … 0 1 λ].

9.4 Geometric Aspects of the JCF

The matrix X that reduces a matrix A ∈ R^{n×n} (or C^{n×n}) to a JCF provides a change of basis with respect to which the matrix is diagonal or block diagonal. It is thus natural to expect an associated direct sum decomposition of R^n. Such a decomposition is given in the following theorem.

Theorem 9.34. Suppose A ∈ R^{n×n} has characteristic polynomial

π(λ) = (λ - λ_1)^{n_1} ⋯ (λ - λ_m)^{n_m}

and minimal polynomial

α(λ) = (λ - λ_1)^{ν_1} ⋯ (λ - λ_m)^{ν_m}

with λ_1, …, λ_m distinct. Then

R^n = N(A - λ_1 I)^{n_1} ⊕ ⋯ ⊕ N(A - λ_m I)^{n_m}
    = N(A - λ_1 I)^{ν_1} ⊕ ⋯ ⊕ N(A - λ_m I)^{ν_m}.

Note that dim N(A - λ_i I)^{ν_i} = n_i.

Definition 9.35. Let V be a vector space over F and suppose A : V → V is a linear transformation. A subspace 𝒮 ⊆ V is A-invariant if A𝒮 ⊆ 𝒮, where A𝒮 is defined as the set {As : s ∈ 𝒮}.
If V is taken to be R^n over R, and S ∈ R^{n×k} is a matrix whose columns s_1, …, s_k span a k-dimensional subspace 𝒮, i.e., R(S) = 𝒮, then 𝒮 is A-invariant if and only if there exists M ∈ R^{k×k} such that

AS = SM.    (9.19)

This follows easily by comparing the ith columns of each side of (9.19):

As_i = S m_i = m_{1i} s_1 + ⋯ + m_{ki} s_k ∈ 𝒮,

where m_i denotes the ith column of M.
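Criterion (9.19) is easy to exercise numerically when the columns of S are chosen as eigenvectors; a short sketch (assuming numpy; the matrix is arbitrary):

import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])

lam, V = np.linalg.eigh(A)
S = V[:, :2]                          # two eigenvector columns span an A-invariant subspace

# Solve AS = SM for M; here M is simply diag of the two corresponding eigenvalues.
M, *_ = np.linalg.lstsq(S, A @ S, rcond=None)
print(np.allclose(A @ S, S @ M))      # (9.19) holds
print(np.allclose(M, np.diag(lam[:2])))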
Example 9.36. The equation Ax = λx = xλ defining a right eigenvector x of an eigenvalue λ says that x spans an A-invariant subspace (of dimension one).

Example 9.37. Suppose X block diagonalizes A, i.e.,

X^{-1} A X = [J_1 0; 0 J_2].

Rewriting in the form

A [X_1 X_2] = [X_1 X_2] [J_1 0; 0 J_2],

we have that A X_i = X_i J_i, i = 1, 2, so the columns of X_i span an A-invariant subspace.

Theorem 9.38. Suppose A ∈ R^{n×n}.

1. Let p(A) = α_0 I + α_1 A + ⋯ + α_q A^q be a polynomial in A. Then N(p(A)) and R(p(A)) are A-invariant.

2. 𝒮 is A-invariant if and only if 𝒮^⊥ is A^T-invariant.

Theorem 9.39. If V is a vector space over F such that V = N_1 ⊕ ⋯ ⊕ N_m, where each N_i is A-invariant, then a basis for V can be chosen with respect to which A has a block diagonal representation.

The Jordan canonical form is a special case of the above theorem. If A has distinct eigenvalues λ_i as in Theorem 9.34, we could choose bases for N(A - λ_i I)^{n_i} by SVD, for example (note that the power n_i could be replaced by ν_i). We would then get a block diagonal representation for A with full blocks rather than the highly structured Jordan blocks. Other such "canonical" forms are discussed in text that follows.

Suppose X = [X_1, …, X_m] ∈ R^{n×n} is nonsingular and such that X^{-1} A X = diag(J_1, …, J_m), where each J_i = diag(J_{i1}, …, J_{ik_i}) and each J_{ik} is a Jordan block corresponding to λ_i ∈ Λ(A). We could also use other block diagonal decompositions (e.g., via SVD), but we restrict our attention here to only the Jordan block case. Note that A X_i = X_i J_i, so by (9.19) the columns of X_i (i.e., the eigenvectors and principal vectors associated with λ_i) span an A-invariant subspace of R^n.

Finally, we return to the problem of developing a formula for e^{tA} in the case that A is not necessarily diagonalizable. Let Y_i ∈ C^{n×n_i} be a Jordan basis for N(A^T - λ_i I)^{n_i}. Equivalently, partition
X^{-1} = Y^H = [Y_1, …, Y_m]^H

compatibly. Then

A = X J X^{-1} = X J Y^H = [X_1, …, X_m] diag(J_1, …, J_m) [Y_1, …, Y_m]^H = Σ_{i=1}^m X_i J_i Y_i^H.

In a similar fashion we can compute

e^{tA} = Σ_{i=1}^m X_i e^{tJ_i} Y_i^H,

which is a useful formula when used in conjunction with the result

exp( t [λ 1 0 … 0; 0 λ 1 … 0; … ; 0 … 0 λ 1; 0 … 0 0 λ] ) = [e^{λt}  t e^{λt}  (t²/2!) e^{λt}  …  (t^{k-1}/(k-1)!) e^{λt}; 0  e^{λt}  t e^{λt}  …  (t^{k-2}/(k-2)!) e^{λt}; … ; 0  …  0  e^{λt}  t e^{λt}; 0  …  0  0  e^{λt}]

for a k x k Jordan block J_i associated with an eigenvalue λ = λ_i.
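The closed form above is easy to confirm against a general-purpose matrix exponential; a minimal sketch for a single 4 x 4 Jordan block (assuming numpy and scipy):

import numpy as np
from scipy.linalg import expm
from math import factorial

lam, k, t = -0.5, 4, 1.3
J = lam * np.eye(k) + np.eye(k, k=1)     # k x k Jordan block

# Closed form: (e^{tJ})_{pq} = t^(q-p) e^{lam t} / (q-p)!  for q >= p, zero below the diagonal.
E = np.zeros((k, k))
for p in range(k):
    for q in range(p, k):
        E[p, q] = t**(q - p) * np.exp(lam * t) / factorial(q - p)

print(np.allclose(E, expm(t * J)))       # True, up to roundoff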
9.5 The Matrix Sign Function

In this section we give a very brief introduction to an interesting and useful matrix function called the matrix sign function. It is a generalization of the sign (or signum) of a scalar. A survey of the matrix sign function and some of its applications can be found in [15].

Definition 9.40. Let z ∈ C with Re(z) ≠ 0. Then the sign of z is defined by

sgn(z) = Re(z) / |Re(z)| = { +1 if Re(z) > 0, -1 if Re(z) < 0 }.

Definition 9.41. Suppose A ∈ C^{n×n} has no eigenvalues on the imaginary axis, and let

X^{-1} A X = [N 0; 0 P]

be a Jordan canonical form for A, with N containing all Jordan blocks corresponding to the eigenvalues of A in the left half-plane and P containing all Jordan blocks corresponding to eigenvalues in the right half-plane. Then the sign of A, denoted sgn(A), is given by

sgn(A) = X [-I 0; 0 I] X^{-1},
where the negative and positive identity matrices are of the same dimensions as N and P, respectively.

There are other equivalent definitions of the matrix sign function, but the one given here is especially useful in deriving many of its key properties. The JCF definition of the matrix sign function does not generally lend itself to reliable computation on a finite-word-length digital computer. In fact, its reliable numerical calculation is an interesting topic in its own right.
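One classical alternative, not developed in the text (see the survey [15]), is the Newton iteration S_{k+1} = (S_k + S_k^{-1})/2 with S_0 = A, which converges quadratically when A has no eigenvalues on the imaginary axis. The sketch below is an illustration only, assuming numpy:

import numpy as np

def sign_newton(A, tol=1e-12, max_iter=100):
    """Matrix sign via the Newton iteration S <- (S + S^{-1})/2 (illustrative only)."""
    S = A.astype(float)
    for _ in range(max_iter):
        S_new = 0.5 * (S + np.linalg.inv(S))
        if np.linalg.norm(S_new - S, 'fro') <= tol * np.linalg.norm(S_new, 'fro'):
            return S_new
        S = S_new
    return S

A = np.array([[-3.0, 1.0],
              [ 0.0, 2.0]])            # one eigenvalue in each half-plane
S = sign_newton(A)
print(np.round(S, 6))                  # eigenvalues of S are +/-1
print(np.allclose(S @ S, np.eye(2)))   # S^2 = I, as in Theorem 9.42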
We state some of the more useful properties of the matrix sign function as theorems. Their straightforward proofs are left to the exercises.

Theorem 9.42. Suppose A ∈ C^{n×n} has no eigenvalues on the imaginary axis, and let S = sgn(A). Then the following hold:

1. S is diagonalizable with eigenvalues equal to ±1.

2. S² = I.

3. AS = SA.

4. sgn(A^H) = (sgn(A))^H.

5. sgn(T^{-1} A T) = T^{-1} sgn(A) T for all nonsingular T ∈ C^{n×n}.

6. sgn(cA) = sgn(c) sgn(A) for all nonzero real scalars c.

Theorem 9.43. Suppose A ∈ C^{n×n} has no eigenvalues on the imaginary axis, and let S = sgn(A). Then the following hold:

1. R(S - I) is an A-invariant subspace corresponding to the left half-plane eigenvalues of A (the negative invariant subspace).

2. R(S + I) is an A-invariant subspace corresponding to the right half-plane eigenvalues of A (the positive invariant subspace).

3. negA ≡ (I - S)/2 is a projection onto the negative invariant subspace of A.

4. posA ≡ (I + S)/2 is a projection onto the positive invariant subspace of A.

EXERCISES

1. Let A ∈ C^{n×n} have distinct eigenvalues λ_1, …, λ_n with corresponding right eigenvectors x_1, …, x_n and left eigenvectors y_1, …, y_n, respectively. Let v ∈ C^n be an arbitrary vector. Show that v can be expressed (uniquely) as a linear combination of the right eigenvectors. Find the appropriate expression for v as a linear combination of the left eigenvectors as well.
2. Suppose A ∈ C^{n×n} is skew-Hermitian, i.e., A^H = -A. Prove that all eigenvalues of a skew-Hermitian matrix must be pure imaginary.

3. Suppose A ∈ C^{n×n} is Hermitian. Let λ be an eigenvalue of A with corresponding right eigenvector x. Show that x is also a left eigenvector for λ. Prove the same result if A is skew-Hermitian.

4. Suppose a matrix A ∈ R^{5×5} has eigenvalues {2, 2, 2, 2, 3}. Determine all possible JCFs for A.

5. Determine the eigenvalues, right eigenvectors and right principal vectors if necessary, and (real) JCFs of the following matrices:

(a) [2 -1; 1 0],    (b) [ ].

6. Determine the JCFs of the following matrices:

(a) [ ],    (b) [ ].

7. Let

A = [ ].

Find a nonsingular matrix X such that X^{-1} A X = J, where J is the JCF

J = [1 1 0; 0 1 0; 0 0 1].

Hint: Use [-1 1 -1]^T as an eigenvector. The vectors [0 1 -1]^T and [1 0 0]^T are both eigenvectors, but then the equation (A - I) x^(2) = x^(1) can't be solved.

8. Show that all right eigenvectors of the Jordan block matrix in Theorem 9.30 must be multiples of e_1 ∈ R^k. Characterize all left eigenvectors.

9. Let A ∈ R^{n×n} be of the form A = x y^T, where x, y ∈ R^n are nonzero vectors with x^T y = 0. Determine the JCF of A.

10. Let A ∈ R^{n×n} be of the form A = I + x y^T, where x, y ∈ R^n are nonzero vectors with x^T y = 0. Determine the JCF of A.

11. Suppose a matrix A ∈ R^{16×16} has 16 eigenvalues at 0 and its JCF consists of a single Jordan block of the form specified in Theorem 9.22. Suppose the small number 10^{-16} is added to the (16,1) element of J. What are the eigenvalues of this slightly perturbed matrix?
12. Show that every matrix $A \in \mathbb{R}^{n\times n}$ can be factored in the form $A = S_1S_2$, where $S_1$ and $S_2$ are real symmetric matrices and one of them, say $S_1$, is nonsingular.
Hint: Suppose $A = XJX^{-1}$ is a reduction of $A$ to JCF and suppose we can construct the "symmetric factorization" of $J$. Then $A = (XS_1X^T)(X^{-T}S_2X^{-1})$ would be the required symmetric factorization of $A$. Thus, it suffices to prove the result for the JCF. The transformation $P$ in (9.18) is useful.

13. Prove that every matrix $A \in \mathbb{R}^{n\times n}$ is similar to its transpose and determine a similarity transformation explicitly.
Hint: Use the factorization in the previous exercise.

14. Consider the block upper triangular matrix
$$A = \begin{bmatrix} A_{11} & A_{12} \\ 0 & A_{22} \end{bmatrix},$$
where $A \in \mathbb{R}^{n\times n}$ and $A_{11} \in \mathbb{R}^{k\times k}$ with $1 \le k < n$. Suppose $A_{12} \neq 0$ and that we want to block diagonalize $A$ via the similarity transformation
$$T = \begin{bmatrix} I & X \\ 0 & I \end{bmatrix},$$
where $X \in \mathbb{R}^{k\times(n-k)}$, i.e.,
$$T^{-1}AT = \begin{bmatrix} A_{11} & 0 \\ 0 & A_{22} \end{bmatrix}.$$
Find a matrix equation that $X$ must satisfy for this to be possible. If $n = 2$ and $k = 1$, what can you say further, in terms of $A_{11}$ and $A_{22}$, about when the equation for $X$ is solvable?

15. Prove Theorem 9.42.

16. Prove Theorem 9.43.

17. Suppose $A \in \mathbb{C}^{n\times n}$ has all its eigenvalues in the left half-plane. Prove that $\mathrm{sgn}(A) = -I$.
Chapter 10

Canonical Forms

10.1 Some Basic Canonical Forms

Problem: Let $V$ and $W$ be vector spaces and suppose $\mathcal{A} : V \to W$ is a linear transformation. Find bases in $V$ and $W$ with respect to which $\mathrm{Mat}\,\mathcal{A}$ has a "simple form" or "canonical form." In matrix terms, if $A \in \mathbb{R}^{m\times n}$, find $P \in \mathbb{R}_m^{m\times m}$ and $Q \in \mathbb{R}_n^{n\times n}$ such that $PAQ$ has a "canonical form." The transformation $A \mapsto PAQ$ is called an equivalence; it is called an orthogonal equivalence if $P$ and $Q$ are orthogonal matrices.

Remark 10.1. We can also consider the case $A \in \mathbb{C}^{m\times n}$ and unitary equivalence if $P$ and $Q$ are unitary.

Two special cases are of interest:

1. If $W = V$ and $Q = P^{-1}$, the transformation $A \mapsto PAP^{-1}$ is called a similarity.

2. If $W = V$ and if $Q = P^T$ is orthogonal, the transformation $A \mapsto PAP^T$ is called an orthogonal similarity (or unitary similarity in the complex case).

The following results are typical of what can be achieved under a unitary similarity. If $A = A^H \in \mathbb{C}^{n\times n}$ has eigenvalues $\lambda_1, \ldots, \lambda_n$, then there exists a unitary matrix $U$ such that $U^HAU = D$, where $D = \mathrm{diag}(\lambda_1, \ldots, \lambda_n)$. This is proved in Theorem 10.2. What other matrices are "diagonalizable" under unitary similarity? The answer is given in Theorem 10.9, where it is proved that a general matrix $A \in \mathbb{C}^{n\times n}$ is unitarily similar to a diagonal matrix if and only if it is normal (i.e., $AA^H = A^HA$). Normal matrices include Hermitian, skew-Hermitian, and unitary matrices (and their "real" counterparts: symmetric, skew-symmetric, and orthogonal, respectively), as well as other matrices that merely satisfy the definition, such as $A = \begin{bmatrix} a & b \\ -b & a \end{bmatrix}$ for real scalars $a$ and $b$. If a matrix $A$ is not normal, the most "diagonal" we can get is the JCF described in Chapter 9.

Theorem 10.2. Let $A = A^H \in \mathbb{C}^{n\times n}$ have (real) eigenvalues $\lambda_1, \ldots, \lambda_n$. Then there exists a unitary matrix $X$ such that $X^HAX = D = \mathrm{diag}(\lambda_1, \ldots, \lambda_n)$ (the columns of $X$ are orthonormal eigenvectors for $A$).
Proof: Let $x_1$ be a right eigenvector corresponding to $\lambda_1$, and normalize it such that $x_1^Hx_1 = 1$. Then there exist $n-1$ additional vectors $x_2, \ldots, x_n$ such that $X = [x_1, \ldots, x_n] = [x_1\ X_2]$ is unitary. Now
$$X^HAX = \begin{bmatrix} x_1^H \\ X_2^H \end{bmatrix} A\,[x_1\ X_2] = \begin{bmatrix} x_1^HAx_1 & x_1^HAX_2 \\ X_2^HAx_1 & X_2^HAX_2 \end{bmatrix} = \begin{bmatrix} \lambda_1 & x_1^HAX_2 \\ 0 & X_2^HAX_2 \end{bmatrix} \qquad (10.1)$$
$$\phantom{X^HAX} = \begin{bmatrix} \lambda_1 & 0 \\ 0 & X_2^HAX_2 \end{bmatrix}. \qquad (10.2)$$
In (10.1) we have used the fact that $Ax_1 = \lambda_1x_1$. When combined with the fact that $x_1^Hx_1 = 1$, we get $\lambda_1$ remaining in the (1,1)-block. We also get 0 in the (2,1)-block by noting that $x_1$ is orthogonal to all vectors in $X_2$. In (10.2), we get 0 in the (1,2)-block by noting that $X^HAX$ is Hermitian. The proof is completed easily by induction upon noting that the (2,2)-block must have eigenvalues $\lambda_2, \ldots, \lambda_n$. □

Given a unit vector $x_1 \in \mathbb{R}^n$, the construction of $X_2 \in \mathbb{R}^{n\times(n-1)}$ such that $X = [x_1\ X_2]$ is orthogonal is frequently required. The construction can actually be performed quite easily by means of Householder (or Givens) transformations as in the proof of the following general result.

Theorem 10.3. Let $X_1 \in \mathbb{C}^{n\times k}$ have orthonormal columns and suppose $U$ is a unitary matrix such that $UX_1 = \begin{bmatrix} R \\ 0 \end{bmatrix}$, where $R \in \mathbb{C}^{k\times k}$ is upper triangular. Write $U^H = [U_1\ U_2]$ with $U_1 \in \mathbb{C}^{n\times k}$. Then $[X_1\ U_2]$ is unitary.

Proof: Let $X_1 = [x_1, \ldots, x_k]$. Construct a sequence of Householder matrices (also known as elementary reflectors) $H_1, \ldots, H_k$ in the usual way (see below) such that
$$H_k \cdots H_1[x_1, \ldots, x_k] = \begin{bmatrix} R \\ 0 \end{bmatrix},$$
where $R$ is upper triangular (and nonsingular since $x_1, \ldots, x_k$ are orthonormal). Let $U = H_k \cdots H_1$. Then $U^H = H_1 \cdots H_k$ and
$$X_1^HU^H = (UX_1)^H = [\,R^H \;\; 0\,] = [\,X_1^HU_1 \;\; X_1^HU_2\,].$$
Then $x_i^HU_2 = 0$ ($i \in \underline{k}$) means that $x_i$ is orthogonal to each of the $n-k$ columns of $U_2$. But the latter are orthonormal since they are the last $n-k$ rows of the unitary matrix $U$. Thus, $[X_1\ U_2]$ is unitary. □

The construction called for in Theorem 10.2 is then a special case of Theorem 10.3 for $k = 1$. We illustrate the construction of the necessary Householder matrix for $k = 1$. For simplicity, we consider the real case. Let the unit vector $x_1$ be denoted by $[\xi_1, \ldots, \xi_n]^T$.
Then the necessary Householder matrix needed for the construction of $X_2$ is given by $U = I - 2uu^+ = I - \frac{2}{u^Tu}uu^T$, where $u = [\xi_1 \pm 1, \xi_2, \ldots, \xi_n]^T$. It can easily be checked that $U$ is symmetric and $U^TU = U^2 = I$, so $U$ is orthogonal. To see that $U$ effects the necessary compression of $x_1$, it is easily verified that $u^Tu = 2 \pm 2\xi_1$ and $u^Tx_1 = 1 \pm \xi_1$. Thus,
$$Ux_1 = x_1 - \frac{2u^Tx_1}{u^Tu}\,u = x_1 - u = [\mp 1, 0, \ldots, 0]^T.$$
Further details on Householder matrices, including the choice of sign and the complex case, can be consulted in standard numerical linear algebra texts such as [7], [11], [23], [25].
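As a computational aside, the construction just described is easy to carry out directly. The following is a minimal sketch only, assuming Python with NumPy (the helper name make_householder is hypothetical and not part of any library); it forms $U = I - \frac{2}{u^Tu}uu^T$ from a unit vector $x_1$ and checks that $U$ is orthogonal and compresses $x_1$ onto $\mp e_1$.

    import numpy as np

    def make_householder(x1):
        """Householder matrix U = I - (2/u^T u) u u^T with u = x1 +/- e1.

        The sign is chosen to avoid cancellation, as discussed above."""
        n = x1.shape[0]
        u = x1.copy()
        u[0] += np.sign(x1[0]) if x1[0] != 0 else 1.0   # xi_1 +/- 1
        return np.eye(n) - (2.0 / (u @ u)) * np.outer(u, u)

    x1 = np.array([0.5, 0.5, 0.5, 0.5])      # a unit vector
    U = make_householder(x1)
    print(np.allclose(U @ U.T, np.eye(4)))   # U is orthogonal
    print(U @ x1)                            # approximately -/+ e1
    # Since the first column of U is -/+ x1, the remaining columns U[:, 1:]
    # serve as X2 with [x1  X2] orthogonal.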
The real version of Theorem 10.2 is worth stating separately since it is applied frequently in applications.

Theorem 10.4. Let $A = A^T \in \mathbb{R}^{n\times n}$ have eigenvalues $\lambda_1, \ldots, \lambda_n$. Then there exists an orthogonal matrix $X \in \mathbb{R}^{n\times n}$ (whose columns are orthonormal eigenvectors of $A$) such that $X^TAX = D = \mathrm{diag}(\lambda_1, \ldots, \lambda_n)$.

Note that Theorem 10.4 implies that a symmetric matrix $A$ (with the obvious analogue from Theorem 10.2 for Hermitian matrices) can be written
$$A = XDX^T = \sum_{i=1}^n \lambda_ix_ix_i^T, \qquad (10.3)$$
which is often called the spectral representation of $A$. In fact, $A$ in (10.3) is actually a weighted sum of orthogonal projections $P_i$ (onto the one-dimensional eigenspaces corresponding to the $\lambda_i$'s), i.e.,
$$A = \sum_{i=1}^n \lambda_iP_i,$$
where $P_i = P_{\mathcal{R}(x_i)} = x_ix_i^+ = x_ix_i^T$ since $x_i^Tx_i = 1$.
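The spectral representation (10.3) can be checked numerically with a few lines of code. The following is an illustrative sketch only, assuming Python with NumPy; it reassembles a symmetric matrix from its eigenvalues and orthonormal eigenvectors and verifies that each $P_i = x_ix_i^T$ is an orthogonal projection.

    import numpy as np

    rng = np.random.default_rng(0)
    M = rng.standard_normal((4, 4))
    A = (M + M.T) / 2                      # a real symmetric test matrix

    lam, X = np.linalg.eigh(A)             # X orthogonal, A = X diag(lam) X^T (Theorem 10.4)
    print(np.allclose(X.T @ X, np.eye(4)))

    # Spectral representation (10.3): A = sum_i lam_i x_i x_i^T
    A_rebuilt = sum(lam[i] * np.outer(X[:, i], X[:, i]) for i in range(4))
    print(np.allclose(A, A_rebuilt))

    # Each P_i = x_i x_i^T is an orthogonal projection: P_i^2 = P_i = P_i^T
    P0 = np.outer(X[:, 0], X[:, 0])
    print(np.allclose(P0 @ P0, P0))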
The following pair of theorems form the theoretical foundation of the double Francis QR algorithm used to compute matrix eigenvalues in a numerically stable and reliable way.
Theorem 10.5 (Schur). Let $A \in \mathbb{C}^{n\times n}$. Then there exists a unitary matrix $U$ such that $U^HAU = T$, where $T$ is upper triangular.

Proof: The proof of this theorem is essentially the same as that of Theorem 10.2 except that in this case (using the notation $U$ rather than $X$) the (1,2)-block $u_1^HAU_2$ is not 0. □

In the case of $A \in \mathbb{R}^{n\times n}$, it is thus unitarily similar to an upper triangular matrix, but if $A$ has a complex conjugate pair of eigenvalues, then complex arithmetic is clearly needed to place such eigenvalues on the diagonal of $T$. However, the next theorem shows that every $A \in \mathbb{R}^{n\times n}$ is also orthogonally similar (i.e., real arithmetic) to a quasi-upper-triangular matrix. A quasi-upper-triangular matrix is block upper triangular with $1\times 1$ diagonal blocks corresponding to its real eigenvalues and $2\times 2$ diagonal blocks corresponding to its complex conjugate pairs of eigenvalues.

Theorem 10.6 (Murnaghan–Wintner). Let $A \in \mathbb{R}^{n\times n}$. Then there exists an orthogonal matrix $U$ such that $U^TAU = S$, where $S$ is quasi-upper-triangular.

Definition 10.7. The triangular matrix $T$ in Theorem 10.5 is called a Schur canonical form or Schur form. The quasi-upper-triangular matrix $S$ in Theorem 10.6 is called a real Schur canonical form or real Schur form (RSF). The columns of a unitary [orthogonal] matrix $U$ that reduces a matrix to [real] Schur form are called Schur vectors.

Example 10.8. The matrix
$$S = \left[\ \cdots\ \right]$$
is in RSF. Its real JCF is
$$J = \left[\ \cdots\ \right].$$

Note that only the first Schur vector (and then only if the corresponding first eigenvalue is real if $U$ is orthogonal) is an eigenvector. However, what is true, and sufficient for virtually all applications (see, for example, [17]), is that the first $k$ Schur vectors span the same $A$-invariant subspace as the eigenvectors corresponding to the first $k$ eigenvalues along the diagonal of $T$ (or $S$).
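Both forms are available in standard software. The following is an illustrative sketch only, assuming Python with SciPy (the routine scipy.linalg.schur does exist with the output keyword shown); it computes the real Schur form and the complex Schur form of a small real matrix with a complex conjugate pair of eigenvalues.

    import numpy as np
    from scipy.linalg import schur

    A = np.array([[0.0,  2.0, 1.0],
                  [-2.0, 0.0, 3.0],
                  [0.0,  0.0, 4.0]])       # real matrix with eigenvalues +/- 2i and 4

    S, U = schur(A, output='real')         # quasi-upper-triangular S, orthogonal U
    T, Z = schur(A, output='complex')      # upper triangular T, unitary Z

    print(np.allclose(U @ S @ U.T, A))     # A = U S U^T
    print(np.allclose(Z @ T @ Z.conj().T, A))
    print(np.round(S, 3))                  # note the 2 x 2 block for the complex pair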
While every matrix can be reduced to Schur form (or RSF), it is of interest to know when we can go further and reduce a matrix via unitary similarity to diagonal form. The following theorem answers this question.

Theorem 10.9. A matrix $A \in \mathbb{C}^{n\times n}$ is unitarily similar to a diagonal matrix if and only if $A$ is normal (i.e., $A^HA = AA^H$).

Proof: Suppose $U$ is a unitary matrix such that $U^HAU = D$, where $D$ is diagonal. Then
$$AA^H = UDU^HUD^HU^H = UDD^HU^H = UD^HDU^H = A^HA,$$
so $A$ is normal.
Conversely, suppose $A$ is normal and let $U$ be a unitary matrix such that $U^HAU = T$, where $T$ is an upper triangular matrix (Theorem 10.5). Then
$$TT^H = U^HAUU^HA^HU = U^HAA^HU = U^HA^HAU = T^HT.$$
It is then a routine exercise to show that $T$ must, in fact, be diagonal. □

10.2 Definite Matrices

Definition 10.10. A symmetric matrix $A \in \mathbb{R}^{n\times n}$ is

1. positive definite if and only if $x^TAx > 0$ for all nonzero $x \in \mathbb{R}^n$. We write $A > 0$.

2. nonnegative definite (or positive semidefinite) if and only if $x^TAx \ge 0$ for all nonzero $x \in \mathbb{R}^n$. We write $A \ge 0$.

3. negative definite if $-A$ is positive definite. We write $A < 0$.

4. nonpositive definite (or negative semidefinite) if $-A$ is nonnegative definite. We write $A \le 0$.

Also, if $A$ and $B$ are symmetric matrices, we write $A > B$ if and only if $A - B > 0$ or $B - A < 0$. Similarly, we write $A \ge B$ if and only if $A - B \ge 0$ or $B - A \le 0$.

Remark 10.11. If $A \in \mathbb{C}^{n\times n}$ is Hermitian, all the above definitions hold except that superscript $H$'s replace $T$'s. Indeed, this is generally true for all results in the remainder of this section that may be stated in the real case for simplicity.

Remark 10.12. If a matrix is neither definite nor semidefinite, it is said to be indefinite.

Theorem 10.13. Let $A = A^H \in \mathbb{C}^{n\times n}$ with eigenvalues $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_n$. Then for all $x \in \mathbb{C}^n$,
$$\lambda_nx^Hx \le x^HAx \le \lambda_1x^Hx.$$

Proof: Let $U$ be a unitary matrix that diagonalizes $A$ as in Theorem 10.2. Furthermore, let $y = U^Hx$, where $x$ is an arbitrary vector in $\mathbb{C}^n$, and denote the components of $y$ by $\eta_i$, $i \in \underline{n}$. Then
$$x^HAx = (U^Hx)^HU^HAU(U^Hx) = y^HDy = \sum_{i=1}^n \lambda_i|\eta_i|^2.$$
But clearly
$$\sum_{i=1}^n \lambda_i|\eta_i|^2 \le \lambda_1y^Hy = \lambda_1x^Hx$$
and
$$\sum_{i=1}^n \lambda_i|\eta_i|^2 \ge \lambda_ny^Hy = \lambda_nx^Hx,$$
from which the theorem follows. □

Remark 10.14. The ratio $\frac{x^HAx}{x^Hx}$ for $A = A^H \in \mathbb{C}^{n\times n}$ and nonzero $x \in \mathbb{C}^n$ is called the Rayleigh quotient of $x$. Theorem 10.13 provides upper ($\lambda_1$) and lower ($\lambda_n$) bounds for the Rayleigh quotient. If $A = A^H \in \mathbb{C}^{n\times n}$ is positive definite, $x^HAx > 0$ for all nonzero $x \in \mathbb{C}^n$, so $0 < \lambda_n \le \cdots \le \lambda_1$.

Corollary 10.15. Let $A \in \mathbb{C}^{n\times n}$. Then $\|A\|_2 = \lambda_{\max}^{1/2}(A^HA)$.

Proof: For all $x \in \mathbb{C}^n$ we have
$$\|Ax\|_2^2 = x^HA^HAx \le \lambda_{\max}(A^HA)\,x^Hx.$$
Let $x$ be an eigenvector corresponding to $\lambda_{\max}(A^HA)$. Then $\frac{\|Ax\|_2^2}{\|x\|_2^2} = \lambda_{\max}(A^HA)$, whence
$$\|A\|_2 = \max_{x\ne 0}\frac{\|Ax\|_2}{\|x\|_2} = \lambda_{\max}^{1/2}(A^HA). \qquad \square$$
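A quick numerical check of Theorem 10.13 and Corollary 10.15 is shown below; this is an illustrative sketch only, assuming Python with NumPy, and is not part of the surrounding development.

    import numpy as np

    rng = np.random.default_rng(1)
    M = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
    A = (M + M.conj().T) / 2                          # Hermitian test matrix
    lam = np.sort(np.linalg.eigvalsh(A))[::-1]        # lam[0] largest, lam[-1] smallest

    x = rng.standard_normal(5) + 1j * rng.standard_normal(5)
    rayleigh = (x.conj() @ A @ x).real / (x.conj() @ x).real
    print(lam[-1] <= rayleigh <= lam[0])              # Theorem 10.13

    B = rng.standard_normal((5, 5))
    sigma_max = np.sqrt(np.max(np.linalg.eigvalsh(B.T @ B)))
    print(np.isclose(np.linalg.norm(B, 2), sigma_max))  # Corollary 10.15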
Definition 10.16. A principal submatrix of an $n\times n$ matrix $A$ is the $(n-k)\times(n-k)$ matrix that remains by deleting $k$ rows and the corresponding $k$ columns. A leading principal submatrix of order $n-k$ is obtained by deleting the last $k$ rows and columns.

Theorem 10.17. A symmetric matrix $A \in \mathbb{R}^{n\times n}$ is positive definite if and only if any of the following three equivalent conditions hold:

1. The determinants of all leading principal submatrices of $A$ are positive.

2. All eigenvalues of $A$ are positive.

3. $A$ can be written in the form $M^TM$, where $M \in \mathbb{R}^{n\times n}$ is nonsingular.

Theorem 10.18. A symmetric matrix $A \in \mathbb{R}^{n\times n}$ is nonnegative definite if and only if any of the following three equivalent conditions hold:

1. The determinants of all principal submatrices of $A$ are nonnegative.

2. All eigenvalues of $A$ are nonnegative.

3. $A$ can be written in the form $M^TM$, where $M \in \mathbb{R}^{k\times n}$ and $k \ge \mathrm{rank}(A) = \mathrm{rank}(M)$.

Remark 10.19. Note that the determinants of all principal submatrices must be nonnegative in Theorem 10.18.1, not just those of the leading principal submatrices. For example, consider the matrix $A = \begin{bmatrix} 0 & 0 \\ 0 & -1 \end{bmatrix}$. The determinant of the $1\times 1$ leading submatrix is 0 and the determinant of the $2\times 2$ leading submatrix is also 0 (cf. Theorem 10.17). However, the
principal submatrix consisting of the (2,2) element is, in fact, negative and $A$ is nonpositive definite.
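The three characterizations in Theorems 10.17 and 10.18 are easy to exercise numerically. The sketch below is illustrative only, assuming Python with NumPy (the helper name leading_minors_positive is hypothetical); it tests positive definiteness via leading principal minors, eigenvalues, and an attempted Cholesky factorization, and then revisits the matrix of Remark 10.19.

    import numpy as np

    def leading_minors_positive(A):
        """Check positivity of the determinants of all leading principal submatrices."""
        return all(np.linalg.det(A[:k, :k]) > 0 for k in range(1, A.shape[0] + 1))

    A = np.array([[4.0, 1.0], [1.0, 3.0]])           # positive definite
    print(leading_minors_positive(A))                # True
    print(np.all(np.linalg.eigvalsh(A) > 0))         # True
    np.linalg.cholesky(A)                            # succeeds: A = L L^T with L nonsingular

    B = np.array([[0.0, 0.0], [0.0, -1.0]])          # the matrix of Remark 10.19
    print(leading_minors_positive(B))                # False (both leading minors are 0, not positive)
    print(np.all(np.linalg.eigvalsh(B) >= 0))        # False: B is not nonnegative definite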
Remark 10.20. The factor $M$ in Theorem 10.18.3 is not unique. For example, if
$$A = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix},$$
then $M$ can be
$$[\,1\ \ 0\,],\quad \begin{bmatrix} \tfrac{1}{\sqrt{2}} & 0 \\ \tfrac{1}{\sqrt{2}} & 0 \end{bmatrix},\quad \begin{bmatrix} \tfrac{1}{\sqrt{3}} & 0 \\ \tfrac{1}{\sqrt{3}} & 0 \\ \tfrac{1}{\sqrt{3}} & 0 \end{bmatrix},\ \ldots\ .$$

Recall that $A \ge B$ if the matrix $A - B$ is nonnegative definite. The following theorem is useful in "comparing" symmetric matrices. Its proof is straightforward from basic definitions.

Theorem 10.21. Let $A, B \in \mathbb{R}^{n\times n}$ be symmetric.

1. If $A \ge B$ and $M \in \mathbb{R}^{n\times m}$, then $M^TAM \ge M^TBM$.

2. If $A > B$ and $M \in \mathbb{R}^{n\times m}_m$, then $M^TAM > M^TBM$.

The following standard theorem is stated without proof (see, for example, [16, p. 181]). It concerns the notion of the "square root" of a matrix. That is, if $A \in \mathbb{R}^{n\times n}$, we say that $S \in \mathbb{R}^{n\times n}$ is a square root of $A$ if $S^2 = A$. In general, matrices (both symmetric and nonsymmetric) have infinitely many square roots. For example, if $A = I_2$, any matrix $S$ of the form $\begin{bmatrix} \cos\theta & \sin\theta \\ \sin\theta & -\cos\theta \end{bmatrix}$ is a square root.

Theorem 10.22. Let $A \in \mathbb{R}^{n\times n}$ be nonnegative definite. Then $A$ has a unique nonnegative definite square root $S$. Moreover, $SA = AS$ and $\mathrm{rank}\,S = \mathrm{rank}\,A$ (and hence $S$ is positive definite if $A$ is positive definite).

A stronger form of the third characterization in Theorem 10.17 is available and is known as the Cholesky factorization. It is stated and proved below for the more general Hermitian case.

Theorem 10.23. Let $A \in \mathbb{C}^{n\times n}$ be Hermitian and positive definite. Then there exists a unique nonsingular lower triangular matrix $L$ with positive diagonal elements such that $A = LL^H$.

Proof: The proof is by induction. The case $n = 1$ is trivially true. Write the matrix $A$ in the form
$$A = \begin{bmatrix} B & b \\ b^H & a_{nn} \end{bmatrix}.$$
By our induction hypothesis, assume the result is true for matrices of order $n-1$ so that $B$ may be written as $B = L_1L_1^H$, where $L_1 \in \mathbb{C}^{(n-1)\times(n-1)}$ is nonsingular and lower triangular
with positive diagonal elements. It remains to prove that we can write the $n\times n$ matrix $A$ in the form
$$\begin{bmatrix} B & b \\ b^H & a_{nn} \end{bmatrix} = \begin{bmatrix} L_1 & 0 \\ c^H & \alpha \end{bmatrix}\begin{bmatrix} L_1^H & c \\ 0 & \alpha \end{bmatrix},$$
where $\alpha$ is positive. Performing the indicated matrix multiplication and equating the corresponding submatrices, we see that we must have $L_1c = b$ and $a_{nn} = c^Hc + \alpha^2$. Clearly $c$ is given simply by $c = L_1^{-1}b$. Substituting in the expression involving $\alpha$, we find $\alpha^2 = a_{nn} - b^HL_1^{-H}L_1^{-1}b = a_{nn} - b^HB^{-1}b$ (= the Schur complement of $B$ in $A$). But we know that
$$0 < \det(A) = \det\begin{bmatrix} B & b \\ b^H & a_{nn} \end{bmatrix} = \det(B)\det(a_{nn} - b^HB^{-1}b).$$
Since $\det(B) > 0$, we must have $a_{nn} - b^HB^{-1}b > 0$. Choosing $\alpha$ to be the positive square root of $a_{nn} - b^HB^{-1}b$ completes the proof. □
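The bordering recursion used in the proof can be mirrored directly in code. The following is a minimal sketch only, assuming Python with NumPy (the function name cholesky_bordered is hypothetical; in practice np.linalg.cholesky should be used); it builds $L$ for a real symmetric positive definite example and compares with the library routine.

    import numpy as np

    def cholesky_bordered(A):
        """Cholesky factor L (A = L L^H) via the bordering recursion of the proof."""
        n = A.shape[0]
        if n == 1:
            return np.array([[np.sqrt(A[0, 0].real)]])
        L1 = cholesky_bordered(A[:-1, :-1])               # factor of the leading block B
        b = A[:-1, -1]
        c = np.linalg.solve(L1, b)                        # c = L1^{-1} b
        alpha = np.sqrt((A[-1, -1] - c.conj() @ c).real)  # Schur complement of B in A
        L = np.zeros_like(A)
        L[:-1, :-1] = L1
        L[-1, :-1] = c.conj()
        L[-1, -1] = alpha
        return L

    M = np.array([[2.0, 1.0, 0.0], [0.5, 2.0, 1.0], [0.0, 0.5, 2.0]])
    A = M @ M.T                                           # symmetric positive definite
    L = cholesky_bordered(A)
    print(np.allclose(L @ L.T, A))
    print(np.allclose(L, np.linalg.cholesky(A)))          # the factor is unique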
10.3 Equivalence Transformations and Congruence

Theorem 10.24. Let $A \in \mathbb{C}^{m\times n}_r$. Then there exist matrices $P \in \mathbb{C}^{m\times m}_m$ and $Q \in \mathbb{C}^{n\times n}_n$ such that
$$PAQ = \begin{bmatrix} I_r & 0 \\ 0 & 0 \end{bmatrix}. \qquad (10.4)$$

Proof: A classical proof can be consulted in, for example, [21, p. 131]. Alternatively, suppose $A$ has an SVD of the form (5.2) in its complex version. Then
$$\begin{bmatrix} S^{-1} & 0 \\ 0 & I \end{bmatrix}\begin{bmatrix} U_1^H \\ U_2^H \end{bmatrix}AV = \begin{bmatrix} I_r & 0 \\ 0 & 0 \end{bmatrix}.$$
Take $P = \begin{bmatrix} S^{-1} & 0 \\ 0 & I \end{bmatrix}\begin{bmatrix} U_1^H \\ U_2^H \end{bmatrix}$ and $Q = V$ to complete the proof. □

Note that the greater freedom afforded by the equivalence transformation of Theorem 10.24, as opposed to the more restrictive situation of a similarity transformation, yields a far "simpler" canonical form (10.4). However, numerical procedures for computing such an equivalence directly via, say, Gaussian or elementary row and column operations, are generally unreliable. The numerically preferred equivalence is, of course, the unitary equivalence known as the SVD. However, the SVD is relatively expensive to compute and other canonical forms exist that are intermediate between (10.4) and the SVD; see, for example, [7, Ch. 5], [4, Ch. 2]. Two such forms are stated here. They are more stably computable than (10.4) and more efficiently computable than a full SVD. Many similar results are also available.
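The SVD-based construction in the proof of Theorem 10.24 is straightforward to reproduce. The following is an illustrative sketch only, assuming Python with NumPy; it builds $P$ from the singular value decomposition, takes $Q = V$, and checks that $PAQ$ has the form (10.4).

    import numpy as np

    rng = np.random.default_rng(2)
    A = rng.standard_normal((4, 3)) @ rng.standard_normal((3, 5))   # a 4 x 5 matrix of rank 3

    U, s, Vh = np.linalg.svd(A)
    r = int(np.sum(s > 1e-12 * s[0]))              # numerical rank
    P = np.diag(np.concatenate([1.0 / s[:r], np.ones(A.shape[0] - r)])) @ U.conj().T
    Q = Vh.conj().T

    canonical = P @ A @ Q                          # approximately [[I_r, 0], [0, 0]]
    print(r)
    print(np.round(canonical, 10))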
Theorem 10.25 (Complete Orthogonal Decomposition). Let $A \in \mathbb{C}^{m\times n}_r$. Then there exist unitary matrices $U \in \mathbb{C}^{m\times m}$ and $V \in \mathbb{C}^{n\times n}$ such that
$$U^HAV = \begin{bmatrix} R & 0 \\ 0 & 0 \end{bmatrix}, \qquad (10.5)$$
where $R \in \mathbb{C}^{r\times r}_r$ is upper (or lower) triangular with positive diagonal elements.

Proof: For the proof, see [4]. □

Theorem 10.26. Let $A \in \mathbb{C}^{m\times n}_r$. Then there exists a unitary matrix $Q \in \mathbb{C}^{m\times m}$ and a permutation matrix $\Pi \in \mathbb{C}^{n\times n}$ such that
$$QA\Pi = \begin{bmatrix} R & S \\ 0 & 0 \end{bmatrix}, \qquad (10.6)$$
where $R \in \mathbb{C}^{r\times r}_r$ is upper triangular and $S \in \mathbb{C}^{r\times(n-r)}$ is arbitrary but in general nonzero.

Proof: For the proof, see [4]. □

Remark 10.27. When $A$ has full column rank but is "near" a rank deficient matrix, various rank revealing QR decompositions are available that can sometimes detect such phenomena at a cost considerably less than a full SVD. Again, see [4] for details.

Definition 10.28. Let $A \in \mathbb{C}^{n\times n}$ and $X \in \mathbb{C}^{n\times n}_n$. The transformation $A \mapsto X^HAX$ is called a congruence. Note that a congruence is a similarity if and only if $X$ is unitary.

Note that congruence preserves the property of being Hermitian; i.e., if $A$ is Hermitian, then $X^HAX$ is also Hermitian. It is of interest to ask what other properties of a matrix are preserved under congruence. It turns out that the principal property so preserved is the sign of each eigenvalue.

Definition 10.29. Let $A = A^H \in \mathbb{C}^{n\times n}$ and let $\pi$, $\nu$, and $\zeta$ denote the numbers of positive, negative, and zero eigenvalues, respectively, of $A$. Then the inertia of $A$ is the triple of numbers $\mathrm{In}(A) = (\pi, \nu, \zeta)$. The signature of $A$ is given by $\mathrm{sig}(A) = \pi - \nu$.

Example 10.30.

1. $\mathrm{In}\left[\ \cdots\ \right] = (2, 1, 1)$.

2. If $A = A^H \in \mathbb{C}^{n\times n}$, then $A > 0$ if and only if $\mathrm{In}(A) = (n, 0, 0)$.

3. If $\mathrm{In}(A) = (\pi, \nu, \zeta)$, then $\mathrm{rank}(A) = \pi + \nu$.

Theorem 10.31 (Sylvester's Law of Inertia). Let $A = A^H \in \mathbb{C}^{n\times n}$ and $X \in \mathbb{C}^{n\times n}_n$. Then $\mathrm{In}(A) = \mathrm{In}(X^HAX)$.

Proof: For the proof, see, for example, [21, p. 134]. □

Theorem 10.31 guarantees that rank and signature of a matrix are preserved under congruence. We then have the following.
Theorem 10.32. Let $A = A^H \in \mathbb{C}^{n\times n}$ with $\mathrm{In}(A) = (\pi, \nu, \zeta)$. Then there exists a matrix $X \in \mathbb{C}^{n\times n}_n$ such that $X^HAX = \mathrm{diag}(1, \ldots, 1, -1, \ldots, -1, 0, \ldots, 0)$, where the number of 1's is $\pi$, the number of $-1$'s is $\nu$, and the number of 0's is $\zeta$.

Proof: Let $\lambda_1, \ldots, \lambda_n$ denote the eigenvalues of $A$ and order them such that the first $\pi$ are positive, the next $\nu$ are negative, and the final $\zeta$ are 0. By Theorem 10.2 there exists a unitary matrix $U$ such that $U^HAU = \mathrm{diag}(\lambda_1, \ldots, \lambda_n)$. Define the $n\times n$ matrix
$$W = \mathrm{diag}\bigl(1/\sqrt{\lambda_1}, \ldots, 1/\sqrt{\lambda_\pi},\ 1/\sqrt{-\lambda_{\pi+1}}, \ldots, 1/\sqrt{-\lambda_{\pi+\nu}},\ 1, \ldots, 1\bigr).$$
Then it is easy to check that $X = UW$ yields the desired result. □
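Sylvester's law of inertia is also easy to see experimentally. The following is an illustrative sketch only, assuming Python with NumPy (the helper name inertia is hypothetical); it computes the inertia of a Hermitian matrix from its eigenvalues and checks that a congruence with a random nonsingular matrix leaves the inertia unchanged.

    import numpy as np

    def inertia(A, tol=1e-10):
        """Return (pi, nu, zeta): numbers of positive, negative, and zero eigenvalues."""
        lam = np.linalg.eigvalsh(A)
        return (int(np.sum(lam > tol)),
                int(np.sum(lam < -tol)),
                int(np.sum(np.abs(lam) <= tol)))

    A = np.diag([3.0, 1.0, -2.0, 0.0])
    print(inertia(A))                      # (2, 1, 1)

    rng = np.random.default_rng(3)
    X = rng.standard_normal((4, 4))        # nonsingular with probability 1
    print(inertia(X.T @ A @ X))            # congruence preserves the inertia: (2, 1, 1)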
10.3.1 Block matrices and definiteness

Theorem 10.33. Suppose $A = A^T$ and $D = D^T$. Then
$$\begin{bmatrix} A & B \\ B^T & D \end{bmatrix} > 0$$
if and only if either $A > 0$ and $D - B^TA^{-1}B > 0$, or $D > 0$ and $A - BD^{-1}B^T > 0$.

Proof: The proof follows by considering, for example, the congruence
$$\begin{bmatrix} A & B \\ B^T & D \end{bmatrix} \mapsto \begin{bmatrix} I & -A^{-1}B \\ 0 & I \end{bmatrix}^T\begin{bmatrix} A & B \\ B^T & D \end{bmatrix}\begin{bmatrix} I & -A^{-1}B \\ 0 & I \end{bmatrix}.$$
The details are straightforward and are left to the reader. □

Remark 10.34. Note the symmetric Schur complements of $A$ (or $D$) in the theorem.

Theorem 10.35. Suppose $A = A^T$ and $D = D^T$. Then
$$\begin{bmatrix} A & B \\ B^T & D \end{bmatrix} \ge 0$$
if and only if $A \ge 0$, $AA^+B = B$, and $D - B^TA^+B \ge 0$.

Proof: Consider the congruence with
$$\begin{bmatrix} I & -A^+B \\ 0 & I \end{bmatrix}$$
and proceed as in the proof of Theorem 10.33. □
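The Schur complement test of Theorem 10.33 can be confirmed on a small example. This is an illustrative sketch only, assuming Python with NumPy; it checks the definiteness of a block matrix both directly (via its eigenvalues) and through the conditions $A > 0$ and $D - B^TA^{-1}B > 0$.

    import numpy as np

    A = np.array([[2.0, 0.0], [0.0, 1.0]])
    B = np.array([[1.0], [0.0]])
    D = np.array([[1.0]])

    M = np.block([[A, B], [B.T, D]])
    schur_complement = D - B.T @ np.linalg.solve(A, B)       # D - B^T A^{-1} B

    print(np.all(np.linalg.eigvalsh(M) > 0))                 # the full block matrix is > 0
    print(np.all(np.linalg.eigvalsh(A) > 0),
          np.all(np.linalg.eigvalsh(schur_complement) > 0))  # equivalently A > 0 and D - B^T A^{-1} B > 0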
10.4 Rational Canonical Form

One final canonical form to be mentioned is the rational canonical form.
Definition 10.36. A matrix $A \in \mathbb{R}^{n\times n}$ is said to be nonderogatory if its minimal polynomial and characteristic polynomial are the same or, equivalently, if its Jordan canonical form has only one block associated with each distinct eigenvalue.

Suppose $A \in \mathbb{R}^{n\times n}$ is a nonderogatory matrix and suppose its characteristic polynomial is $\pi(\lambda) = \lambda^n - (a_0 + a_1\lambda + \cdots + a_{n-1}\lambda^{n-1})$. Then it can be shown (see [12]) that $A$ is similar to a matrix of the form
$$\begin{bmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & & & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \\ a_0 & a_1 & a_2 & \cdots & a_{n-1} \end{bmatrix}. \qquad (10.7)$$

Definition 10.37. A matrix $A \in \mathbb{R}^{n\times n}$ of the form (10.7) is called a companion matrix or is said to be in companion form.

Companion matrices also appear in the literature in several equivalent forms. To illustrate, consider the companion matrix
$$\begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ a_0 & a_1 & a_2 & a_3 \end{bmatrix}. \qquad (10.8)$$
This matrix is a special case of a matrix in lower Hessenberg form. Using the reverse-order identity similarity $P$ given by (9.18), $A$ is easily seen to be similar to the following matrix in upper Hessenberg form:
$$\begin{bmatrix} a_3 & a_2 & a_1 & a_0 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}. \qquad (10.9)$$
Moreover, since a matrix is similar to its transpose (see exercise 13 in Chapter 9), the following are also companion matrices similar to the above:
$$\begin{bmatrix} 0 & 0 & 0 & a_0 \\ 1 & 0 & 0 & a_1 \\ 0 & 1 & 0 & a_2 \\ 0 & 0 & 1 & a_3 \end{bmatrix}, \qquad \begin{bmatrix} a_3 & 1 & 0 & 0 \\ a_2 & 0 & 1 & 0 \\ a_1 & 0 & 0 & 1 \\ a_0 & 0 & 0 & 0 \end{bmatrix}. \qquad (10.10)$$
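Companion matrices of the form (10.7) are trivial to build from the coefficients $a_0, \ldots, a_{n-1}$, and their eigenvalues are the roots of $\pi(\lambda)$. The following is an illustrative sketch only, assuming Python with NumPy (the helper name companion_bottom is hypothetical); it constructs the companion matrix of $(\lambda-1)(\lambda-2)(\lambda-3)$ and also checks the first row of the inverse formula (10.11) below.

    import numpy as np

    def companion_bottom(a):
        """Companion matrix (10.7) with last row [a_0, a_1, ..., a_{n-1}]."""
        n = len(a)
        C = np.zeros((n, n))
        C[:-1, 1:] = np.eye(n - 1)      # superdiagonal of ones
        C[-1, :] = a
        return C

    a = np.array([6.0, -11.0, 6.0])     # pi(lambda) = lambda^3 - (a0 + a1 lambda + a2 lambda^2)
    C = companion_bottom(a)
    print(np.sort(np.linalg.eigvals(C).real))   # [1. 2. 3.]

    # First row of C^{-1} as in (10.11): [-a1/a0, -a2/a0, 1/a0]
    print(np.allclose(np.linalg.inv(C)[0], [-a[1] / a[0], -a[2] / a[0], 1.0 / a[0]]))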
Notice that in all cases a companion matrix is nonsingular if and only if $a_0 \ne 0$. In fact, the inverse of a nonsingular companion matrix is again in companion form. For example,
$$\begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ a_0 & a_1 & a_2 & a_3 \end{bmatrix}^{-1} = \begin{bmatrix} -\tfrac{a_1}{a_0} & -\tfrac{a_2}{a_0} & -\tfrac{a_3}{a_0} & \tfrac{1}{a_0} \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}, \qquad (10.11)$$
with a similar result for companion matrices of the form (10.10).

If a companion matrix of the form (10.7) is singular, i.e., if $a_0 = 0$, then its pseudoinverse can still be computed. Let $a \in \mathbb{R}^{n-1}$ denote the vector $[a_1, a_2, \ldots, a_{n-1}]^T$ and let $c = \frac{1}{1 + a^Ta}$. Then it is easily verified that
$$\begin{bmatrix} 0 & I_{n-1} \\ 0 & a^T \end{bmatrix}^{+} = \begin{bmatrix} 0 & 0 \\ I_{n-1} - caa^T & ca \end{bmatrix}.$$
Note that $I - caa^T = (I + aa^T)^{-1}$, and hence the pseudoinverse of a singular companion matrix is not a companion matrix unless $a = 0$.

Companion matrices have many other interesting properties, among which, and perhaps surprisingly, is the fact that their singular values can be found in closed form; see [14].

Theorem 10.38. Let $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_n$ be the singular values of the companion matrix $A$ in (10.7). Let $\alpha = a_1^2 + a_2^2 + \cdots + a_{n-1}^2$ and $\gamma = 1 + a_0^2 + \alpha$. Then
$$\sigma_1^2 = \tfrac{1}{2}\left(\gamma + \sqrt{\gamma^2 - 4a_0^2}\right),$$
$$\sigma_i^2 = 1 \quad \text{for } i = 2, 3, \ldots, n-1,$$
$$\sigma_n^2 = \tfrac{1}{2}\left(\gamma - \sqrt{\gamma^2 - 4a_0^2}\right).$$
If $a_0 \ne 0$, the largest and smallest singular values can also be written in the equivalent form
$$\sigma_1^2 = \frac{2a_0^2}{\gamma - \sqrt{\gamma^2 - 4a_0^2}}, \qquad \sigma_n^2 = \frac{2a_0^2}{\gamma + \sqrt{\gamma^2 - 4a_0^2}}.$$

Remark 10.39. Explicit formulas for all the associated right and left singular vectors can also be derived easily.
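Theorem 10.38 is easy to confirm numerically. The sketch below is illustrative only, assuming Python with NumPy; it compares the closed-form singular values with those returned by a standard SVD routine and also evaluates the condition-number ratio that reappears in Remark 10.40 below.

    import numpy as np

    a = np.array([0.5, -2.0, 1.0, 3.0])                  # a_0, ..., a_{n-1} with n = 4
    n = len(a)
    C = np.zeros((n, n)); C[:-1, 1:] = np.eye(n - 1); C[-1, :] = a

    alpha = np.sum(a[1:] ** 2)                           # a_1^2 + ... + a_{n-1}^2
    gamma = 1 + a[0] ** 2 + alpha
    sig_1 = np.sqrt(0.5 * (gamma + np.sqrt(gamma ** 2 - 4 * a[0] ** 2)))
    sig_n = np.sqrt(0.5 * (gamma - np.sqrt(gamma ** 2 - 4 * a[0] ** 2)))

    print(np.linalg.svd(C, compute_uv=False))            # numerical singular values
    print(sig_1, 1.0, sig_n)                             # closed form: sigma_1, 1 (repeated), sigma_n
    print(sig_1 / sig_n, gamma / (2 * abs(a[0])))        # kappa_2(C) and its lower bound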
If $A \in \mathbb{R}^{n\times n}$ is derogatory, i.e., has more than one Jordan block associated with at least one eigenvalue, then it is not similar to a companion matrix of the form (10.7). However, it can be shown that a derogatory matrix is similar to a block diagonal matrix, each of whose diagonal blocks is a companion matrix. Such matrices are said to be in rational canonical form (or Frobenius canonical form). For details, see, for example, [12].

Companion matrices appear frequently in the control and signal processing literature but unfortunately they are often very difficult to work with numerically. Algorithms to reduce an arbitrary matrix to companion form are numerically unstable. Moreover, companion matrices are known to possess many undesirable numerical properties. For example, in general and especially as $n$ increases, their eigenstructure is extremely ill conditioned, nonsingular ones are nearly singular, stable ones are nearly unstable, and so forth [14].
Companion matrices and rational canonical forms are generally to be avoided in floating-point computation.

Remark 10.40. Theorem 10.38 yields some understanding of why difficult numerical behavior might be expected for companion matrices. For example, when solving linear systems of equations of the form (6.2), one measure of numerical sensitivity is $\kappa_p(A) = \|A\|_p\|A^{-1}\|_p$, the so-called condition number of $A$ with respect to inversion and with respect to the matrix $p$-norm. If this number is large, say $O(10^k)$, one may lose up to $k$ digits of precision. In the 2-norm, this condition number is the ratio of largest to smallest singular values which, by the theorem, can be determined explicitly as
$$\kappa_2(A) = \frac{\sigma_1}{\sigma_n} = \frac{\gamma + \sqrt{\gamma^2 - 4a_0^2}}{2|a_0|}.$$
It is easy to show that $\frac{\gamma}{2|a_0|} \le \kappa_2(A) \le \frac{\gamma}{|a_0|}$, and when $a_0$ is small or $\gamma$ is large (or both), then $\kappa_2(A) \approx \frac{\gamma}{|a_0|}$. It is not unusual for $\gamma$ to be large for large $n$. Note that explicit formulas for $\kappa_1(A)$ and $\kappa_\infty(A)$ can also be determined easily by using (10.11).
EXERCISES

1. Show that if a triangular matrix is normal, then it must be diagonal.

2. Prove that if $A \in \mathbb{R}^{n\times n}$ is normal, then $\mathcal{N}(A) = \mathcal{N}(A^T)$.

3. Let $A \in \mathbb{C}^{n\times n}$ and define $\rho(A) = \max_{\lambda\in\Lambda(A)}|\lambda|$. Then $\rho(A)$ is called the spectral radius of $A$. Show that if $A$ is normal, then $\rho(A) = \|A\|_2$. Show that the converse is true if $n = 2$.

4. Let $A \in \mathbb{C}^{n\times n}$ be normal with eigenvalues $\lambda_1, \ldots, \lambda_n$ and singular values $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_n \ge 0$. Show that $\sigma_i(A) = |\lambda_i(A)|$ for $i \in \underline{n}$.

5. Use the reverse-order identity matrix $P$ introduced in (9.18) and the matrix $U$ in Theorem 10.5 to find a unitary matrix $Q$ that reduces $A \in \mathbb{C}^{n\times n}$ to lower triangular form.

6. Let $A = \left[\ \cdots\ \right] \in \mathbb{C}^{2\times 2}$. Find a unitary matrix $U$ such that
$$U^HAU = \left[\ \cdots\ \right].$$

7. If $A \in \mathbb{R}^{n\times n}$ is positive definite, show that $A^{-1}$ must also be positive definite.

8. Suppose $A \in \mathbb{R}^{n\times n}$ is positive definite. Is $\begin{bmatrix} A & I \\ I & A^{-1} \end{bmatrix} > 0$?

9. Let $R, S \in \mathbb{R}^{n\times n}$ be symmetric. Show that $\begin{bmatrix} R & I \\ I & S \end{bmatrix} > 0$ if and only if $S > 0$ and $R > S^{-1}$.
10. Find the inertia of the following matrices:
(a) $\left[\ \cdots\ \right]$, (b) $\begin{bmatrix} 2 & 1+j \\ 1-j & 2 \end{bmatrix}$, (c) $\left[\ \cdots\ \right]$, (d) $\begin{bmatrix} 1 & 1+j \\ 1-j & 1 \end{bmatrix}$.
Chapter 11

Linear Differential and Difference Equations

11.1 Differential Equations

In this section we study solutions of the linear homogeneous system of differential equations
$$\dot{x}(t) = Ax(t); \quad x(t_0) = x_0 \in \mathbb{R}^n \qquad (11.1)$$
for $t \ge t_0$. This is known as an initial-value problem. We restrict our attention in this chapter only to the so-called time-invariant case, where the matrix $A \in \mathbb{R}^{n\times n}$ is constant and does not depend on $t$. The solution of (11.1) is then known always to exist and be unique. It can be described conveniently in terms of the matrix exponential.

Definition 11.1. For all $A \in \mathbb{R}^{n\times n}$, the matrix exponential $e^A \in \mathbb{R}^{n\times n}$ is defined by the power series
$$e^A = \sum_{k=0}^{+\infty} \frac{1}{k!}A^k. \qquad (11.2)$$

The series (11.2) can be shown to converge for all $A$ (has radius of convergence equal to $+\infty$). The solution of (11.1) involves the matrix
$$e^{tA} = \sum_{k=0}^{+\infty} \frac{t^k}{k!}A^k, \qquad (11.3)$$
which thus also converges for all $A$ and uniformly in $t$.
1. e° = I.
Proof: This follows immediately from Definition 11.1 by setting A = 0.
2. For all A G R"
XM
, (e
A
f  e^.
Proof: This follows immediately from Definition 11.1 and linearity of the transpose.
109
3. For all $A \in \mathbb{R}^{n\times n}$ and for all $t, \tau \in \mathbb{R}$, $e^{(t+\tau)A} = e^{tA}e^{\tau A} = e^{\tau A}e^{tA}$.
Proof: Note that
$$e^{(t+\tau)A} = I + (t+\tau)A + \frac{(t+\tau)^2}{2!}A^2 + \cdots$$
and
$$e^{tA}e^{\tau A} = \left(I + tA + \frac{t^2}{2!}A^2 + \cdots\right)\left(I + \tau A + \frac{\tau^2}{2!}A^2 + \cdots\right).$$
Compare like powers of $A$ in the above two equations and use the binomial theorem on $(t+\tau)^k$.

4. For all $A, B \in \mathbb{R}^{n\times n}$ and for all $t \in \mathbb{R}$, $e^{t(A+B)} = e^{tA}e^{tB} = e^{tB}e^{tA}$ if and only if $A$ and $B$ commute, i.e., $AB = BA$.
Proof: Note that
$$e^{t(A+B)} = I + t(A+B) + \frac{t^2}{2!}(A+B)^2 + \cdots$$
and
$$e^{tA}e^{tB} = \left(I + tA + \frac{t^2}{2!}A^2 + \cdots\right)\left(I + tB + \frac{t^2}{2!}B^2 + \cdots\right),$$
while
$$e^{tB}e^{tA} = \left(I + tB + \frac{t^2}{2!}B^2 + \cdots\right)\left(I + tA + \frac{t^2}{2!}A^2 + \cdots\right).$$
Compare like powers of $t$ in the first equation and the second or third and use the binomial theorem on $(A+B)^k$ and the commutativity of $A$ and $B$.

5. For all $A \in \mathbb{R}^{n\times n}$ and for all $t \in \mathbb{R}$, $(e^{tA})^{-1} = e^{-tA}$.
Proof: Simply take $\tau = -t$ in property 3.

6. Let $\mathcal{L}$ denote the Laplace transform and $\mathcal{L}^{-1}$ the inverse Laplace transform. Then for all $A \in \mathbb{R}^{n\times n}$ and for all $t \in \mathbb{R}$,
(a) $\mathcal{L}\{e^{tA}\} = (sI - A)^{-1}$.
(b) $\mathcal{L}^{-1}\{(sI - A)^{-1}\} = e^{tA}$.
Proof: We prove only (a). Part (b) follows similarly.
$$\mathcal{L}\{e^{tA}\} = \int_0^{+\infty} e^{-st}e^{tA}\,dt = \int_0^{+\infty} e^{t(A-sI)}\,dt \quad \text{since } A \text{ and } sI \text{ commute}$$
$$= \int_0^{+\infty}\sum_{i=1}^n e^{(\lambda_i-s)t}x_iy_i^H\,dt \quad \text{assuming } A \text{ is diagonalizable}$$
$$= \sum_{i=1}^n\left[\int_0^{+\infty} e^{(\lambda_i-s)t}\,dt\right]x_iy_i^H = \sum_{i=1}^n \frac{1}{s-\lambda_i}\,x_iy_i^H \quad \text{assuming } \mathrm{Re}\,s > \mathrm{Re}\,\lambda_i \text{ for } i \in \underline{n}$$
$$= (sI - A)^{-1}.$$
The matrix $(sI - A)^{-1}$ is called the resolvent of $A$ and is defined for all $s$ not in $\Lambda(A)$. Notice in the proof that we have assumed, for convenience, that $A$ is diagonalizable. If this is not the case, the scalar dyadic decomposition can be replaced by
$$e^{t(A-sI)} = \sum_{i=1}^m X_ie^{t(J_i-sI)}Y_i^H$$
using the JCF. All succeeding steps in the proof then follow in a straightforward way.

7. For all $A \in \mathbb{R}^{n\times n}$ and for all $t \in \mathbb{R}$, $\frac{d}{dt}(e^{tA}) = Ae^{tA} = e^{tA}A$.
Proof: Since the series (11.3) is uniformly convergent, it can be differentiated term-by-term, from which the result follows immediately. Alternatively, the formal definition
$$\frac{d}{dt}(e^{tA}) = \lim_{\Delta t \to 0}\frac{e^{(t+\Delta t)A} - e^{tA}}{\Delta t}$$
can be employed as follows. For any consistent matrix norm,
$$\left\|\frac{e^{(t+\Delta t)A} - e^{tA}}{\Delta t} - Ae^{tA}\right\| = \left\|\frac{1}{\Delta t}\left(e^{\Delta tA}e^{tA} - e^{tA}\right) - Ae^{tA}\right\| = \left\|\frac{1}{\Delta t}\left(e^{\Delta tA} - I\right)e^{tA} - Ae^{tA}\right\|$$
$$= \left\|\frac{1}{\Delta t}\left(\Delta tA + \frac{(\Delta t)^2}{2!}A^2 + \cdots\right)e^{tA} - Ae^{tA}\right\| = \left\|\left(\frac{\Delta t}{2!}A^2 + \frac{(\Delta t)^2}{3!}A^3 + \cdots\right)e^{tA}\right\|$$
$$\le \Delta t\,\|A^2\|\,\|e^{tA}\|\left(\frac{1}{2!} + \frac{\Delta t\,\|A\|}{3!} + \frac{(\Delta t)^2\|A\|^2}{4!} + \cdots\right) \le \Delta t\,\|A^2\|\,\|e^{tA}\|\,e^{\Delta t\,\|A\|}.$$
For fixed $t$, the right-hand side above clearly goes to 0 as $\Delta t$ goes to 0. Thus, the limit exists and equals $Ae^{tA}$. A similar proof yields the limit $e^{tA}A$, or one can use the fact that $A$ commutes with any polynomial of $A$ of finite degree and hence with $e^{tA}$.

11.1.2 Homogeneous linear differential equations

Theorem 11.2. Let $A \in \mathbb{R}^{n\times n}$. The solution of the linear homogeneous initial-value problem
$$\dot{x}(t) = Ax(t); \quad x(t_0) = x_0 \in \mathbb{R}^n \qquad (11.4)$$
for $t \ge t_0$ is given by
$$x(t) = e^{(t-t_0)A}x_0. \qquad (11.5)$$

Proof: Differentiate (11.5) and use property 7 of the matrix exponential to get $\dot{x}(t) = Ae^{(t-t_0)A}x_0 = Ax(t)$. Also, $x(t_0) = e^{(t_0-t_0)A}x_0 = x_0$ so, by the fundamental existence and uniqueness theorem for ordinary differential equations, (11.5) is the solution of (11.4). □

11.1.3 Inhomogeneous linear differential equations

Theorem 11.3. Let $A \in \mathbb{R}^{n\times n}$, $B \in \mathbb{R}^{n\times m}$ and let the vector-valued function $u$ be given and, say, continuous. Then the solution of the linear inhomogeneous initial-value problem
$$\dot{x}(t) = Ax(t) + Bu(t); \quad x(t_0) = x_0 \in \mathbb{R}^n \qquad (11.6)$$
for $t \ge t_0$ is given by the variation of parameters formula
$$x(t) = e^{(t-t_0)A}x_0 + \int_{t_0}^t e^{(t-s)A}Bu(s)\,ds. \qquad (11.7)$$

Proof: Differentiate (11.7) and again use property 7 of the matrix exponential. The general formula
$$\frac{d}{dt}\int_{p(t)}^{q(t)} f(x,t)\,dx = \int_{p(t)}^{q(t)}\frac{\partial f(x,t)}{\partial t}\,dx + f(q(t),t)\frac{dq(t)}{dt} - f(p(t),t)\frac{dp(t)}{dt}$$
is used to get $\dot{x}(t) = Ae^{(t-t_0)A}x_0 + \int_{t_0}^t Ae^{(t-s)A}Bu(s)\,ds + Bu(t) = Ax(t) + Bu(t)$. Also, $x(t_0) = e^{(t_0-t_0)A}x_0 + 0 = x_0$ so, by the fundamental existence and uniqueness theorem for ordinary differential equations, (11.7) is the solution of (11.6). □
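The formulas (11.5) and (11.7) can be checked against a general-purpose ODE integrator. The following is an illustrative sketch only, assuming Python with NumPy and SciPy (solve_ivp, quad_vec, and expm are standard SciPy routines); it integrates a forced system numerically and compares with the variation of parameters formula evaluated by quadrature.

    import numpy as np
    from scipy.linalg import expm
    from scipy.integrate import solve_ivp, quad_vec

    A = np.array([[0.0, 1.0], [-2.0, -0.5]])
    B = np.array([[0.0], [1.0]])
    x0 = np.array([1.0, 0.0])
    u = lambda t: np.array([np.sin(t)])
    t0, t1 = 0.0, 3.0

    # Reference solution by direct numerical integration of (11.6).
    sol = solve_ivp(lambda t, x: A @ x + (B @ u(t)), (t0, t1), x0, rtol=1e-10, atol=1e-12)

    # Variation of parameters formula (11.7), with the integral evaluated by quadrature.
    integrand = lambda s: expm((t1 - s) * A) @ (B @ u(s))
    integral, _ = quad_vec(integrand, t0, t1)
    x_formula = expm((t1 - t0) * A) @ x0 + integral

    print(sol.y[:, -1])
    print(x_formula)          # the two agree to integration tolerance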
Remark 11.4. The proof above simply verifies the variation of parameters formula by direct differentiation. The formula can be derived by means of an integrating factor "trick" as follows. Premultiply the equation $\dot{x} - Ax = Bu$ by $e^{-tA}$ to get
$$\frac{d}{dt}\left(e^{-tA}x(t)\right) = e^{-tA}Bu(t). \qquad (11.8)$$
For fixed $t$, the right-hand side above clearly goes to 0 as $\Delta t$ goes to 0. Thus, the limit exists and equals $Ae^{tA}$. A similar proof yields the limit $e^{tA}A$, or one can use the fact that $A$ commutes with any polynomial of $A$ of finite degree and hence with $e^{tA}$.

11.1.2 Homogeneous linear differential equations

Theorem 11.2. Let $A \in \mathbb{R}^{n\times n}$. The solution of the linear homogeneous initial-value problem

$\dot{x}(t) = A x(t); \quad x(t_0) = x_0 \in \mathbb{R}^n$     (11.4)

for $t \ge t_0$ is given by

$x(t) = e^{(t - t_0)A} x_0.$     (11.5)

Proof: Differentiate (11.5) and use property 7 of the matrix exponential to get $\dot{x}(t) = A e^{(t-t_0)A} x_0 = A x(t)$. Also, $x(t_0) = e^{(t_0 - t_0)A} x_0 = x_0$ so, by the fundamental existence and uniqueness theorem for ordinary differential equations, (11.5) is the solution of (11.4). □
11.1.3 Inhomogeneous linear differential equations

Theorem 11.3. Let $A \in \mathbb{R}^{n\times n}$, $B \in \mathbb{R}^{n\times m}$ and let the vector-valued function $u$ be given and, say, continuous. Then the solution of the linear inhomogeneous initial-value problem

$\dot{x}(t) = A x(t) + B u(t); \quad x(t_0) = x_0 \in \mathbb{R}^n$     (11.6)

for $t \ge t_0$ is given by the variation of parameters formula

$x(t) = e^{(t - t_0)A} x_0 + \int_{t_0}^{t} e^{(t - s)A} B u(s)\, ds.$     (11.7)

Proof: Differentiate (11.7) and again use property 7 of the matrix exponential. The general formula

$\frac{d}{dt} \int_{p(t)}^{q(t)} f(x, t)\, dx = \int_{p(t)}^{q(t)} \frac{\partial f(x, t)}{\partial t}\, dx + f(q(t), t)\,\frac{dq(t)}{dt} - f(p(t), t)\,\frac{dp(t)}{dt}$

is used to get $\dot{x}(t) = A e^{(t-t_0)A} x_0 + \int_{t_0}^{t} A e^{(t-s)A} B u(s)\, ds + B u(t) = A x(t) + B u(t)$. Also, $x(t_0) = e^{(t_0 - t_0)A} x_0 + 0 = x_0$ so, by the fundamental existence and uniqueness theorem for ordinary differential equations, (11.7) is the solution of (11.6). □
Remark 11.4. The proof above simply verifies the variation of parameters formula by direct differentiation. The formula can be derived by means of an integrating factor "trick" as follows. Premultiply the equation $\dot{x} - Ax = Bu$ by $e^{-tA}$ to get

$\frac{d}{dt}\left( e^{-tA} x(t) \right) = e^{-tA} B u(t).$     (11.8)
Now integrate (11.8) over the interval $[t_0, t]$:

$\int_{t_0}^{t} \frac{d}{ds}\left( e^{-sA} x(s) \right) ds = \int_{t_0}^{t} e^{-sA} B u(s)\, ds.$

Thus,

$e^{-tA} x(t) - e^{-t_0 A} x(t_0) = \int_{t_0}^{t} e^{-sA} B u(s)\, ds$

and hence

$x(t) = e^{(t - t_0)A} x_0 + \int_{t_0}^{t} e^{(t - s)A} B u(s)\, ds.$
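A brief computational sketch (not part of the original text) of the variation of parameters formula (11.7): it compares the formula, evaluated with a crude quadrature, against a general-purpose ODE solver. The matrices, the input $u$, and the use of Python with numpy/scipy are assumptions of this sketch.

```python
# Sketch (not from the text): verify x(t) = e^{(t-t0)A} x0 + int e^{(t-s)A} B u(s) ds
# against a numerical ODE solution of xdot = A x + B u.
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

A = np.array([[0.0, 1.0], [-2.0, -3.0]])    # arbitrary example matrices (assumption)
B = np.array([[0.0], [1.0]])
x0 = np.array([1.0, 0.0])
u = lambda t: np.array([np.sin(t)])         # given, continuous input

t0, tf = 0.0, 5.0
sol = solve_ivp(lambda t, x: A @ x + (B @ u(t)).ravel(),
                (t0, tf), x0, rtol=1e-10, atol=1e-12)

ss = np.linspace(t0, tf, 4001)              # crude Riemann-sum quadrature
ds = ss[1] - ss[0]
integral = sum(expm((tf - s) * A) @ (B @ u(s)).ravel() for s in ss) * ds
x_formula = expm((tf - t0) * A) @ x0 + integral

print(np.max(np.abs(sol.y[:, -1] - x_formula)))   # agrees to quadrature accuracy
```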
11.1.4 Linear matrix differential equations

Matrix-valued initial-value problems also occur frequently. The first is an obvious generalization of Theorem 11.2, and the proof is essentially the same.

Theorem 11.5. Let $A \in \mathbb{R}^{n\times n}$. The solution of the matrix linear homogeneous initial-value problem

$\dot{X}(t) = A X(t); \quad X(t_0) = C \in \mathbb{R}^{n\times n}$     (11.9)

for $t \ge t_0$ is given by

$X(t) = e^{(t - t_0)A} C.$     (11.10)

In the matrix case, we can have coefficient matrices on both the right and left. For convenience, the following theorem is stated with initial time $t_0 = 0$.

Theorem 11.6. Let $A \in \mathbb{R}^{n\times n}$, $B \in \mathbb{R}^{m\times m}$, and $C \in \mathbb{R}^{n\times m}$. Then the matrix initial-value problem

$\dot{X}(t) = A X(t) + X(t) B; \quad X(0) = C$     (11.11)

has the solution $X(t) = e^{tA} C e^{tB}$.

Proof: Differentiate $e^{tA} C e^{tB}$ with respect to $t$ and use property 7 of the matrix exponential. The fact that $X(t)$ satisfies the initial condition is trivial. □

Corollary 11.7. Let $A, C \in \mathbb{R}^{n\times n}$. Then the matrix initial-value problem

$\dot{X}(t) = A X(t) + X(t) A^T; \quad X(0) = C$     (11.12)

has the solution $X(t) = e^{tA} C e^{tA^T}$.

When $C$ is symmetric in (11.12), $X(t)$ is symmetric and (11.12) is known as a Lyapunov differential equation. The initial-value problem (11.11) is known as a Sylvester differential equation.
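A brief computational sketch (not part of the original text) checking Theorem 11.6: the candidate solution $X(t) = e^{tA} C e^{tB}$ is differentiated numerically and compared with $AX(t) + X(t)B$. The specific matrices and the use of Python with numpy/scipy are assumptions of this sketch.

```python
# Sketch (not from the text): X(t) = e^{tA} C e^{tB} satisfies the Sylvester
# differential equation Xdot = A X + X B with X(0) = C.
import numpy as np
from scipy.linalg import expm

A = np.array([[1.0, 2.0], [0.0, -1.0]])    # arbitrary example matrices (assumption)
B = np.array([[0.0, 1.0], [-1.0, 0.0]])
C = np.array([[1.0, 0.0], [2.0, 1.0]])

X = lambda t: expm(t * A) @ C @ expm(t * B)

t, h = 0.7, 1e-6
X_dot = (X(t + h) - X(t - h)) / (2 * h)     # central-difference derivative
print(np.max(np.abs(X_dot - (A @ X(t) + X(t) @ B))))   # small
print(np.max(np.abs(X(0.0) - C)))                      # initial condition holds
```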
11.1.5 Modal decompositions

Let $A \in \mathbb{R}^{n\times n}$ and suppose, for convenience, that it is diagonalizable (if $A$ is not diagonalizable, the rest of this subsection is easily generalized by using the JCF and the decomposition $A = \sum X_i J_i Y_i^H$ as discussed in Chapter 9). Then the solution $x(t)$ of (11.4) can be written

$x(t) = e^{(t - t_0)A} x_0$
$= \left( \sum_{i=1}^{n} e^{\lambda_i (t - t_0)} x_i y_i^H \right) x_0$
$= \sum_{i=1}^{n} \left( y_i^H x_0\, e^{\lambda_i (t - t_0)} \right) x_i.$

The $\lambda_i$s are called the modal velocities and the right eigenvectors $x_i$ are called the modal directions. The decomposition above expresses the solution $x(t)$ as a weighted sum of its modal velocities and directions.

This modal decomposition can be expressed in a different-looking but identical form if we write the initial condition $x_0$ as a weighted sum of the right eigenvectors $x_0 = \sum_{i=1}^{n} \alpha_i x_i$. Then

$x(t) = \sum_{i=1}^{n} \left( \alpha_i e^{\lambda_i (t - t_0)} \right) x_i.$

In the last equality we have used the fact that $y_i^H x_j = \delta_{ij}$.

Similarly, in the inhomogeneous case we can write

$\int_{t_0}^{t} e^{(t - s)A} B u(s)\, ds = \sum_{i=1}^{n} \left( \int_{t_0}^{t} e^{\lambda_i (t - s)} y_i^H B u(s)\, ds \right) x_i.$
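A brief computational sketch (not part of the original text) of the modal decomposition for a diagonalizable matrix, compared with the matrix-exponential solution of (11.4). The example matrix and the use of Python with numpy/scipy are assumptions of this sketch.

```python
# Sketch (not from the text): x(t) as a weighted sum of modal directions.
import numpy as np
from scipy.linalg import expm, eig

A = np.array([[-1.0, 2.0], [0.0, -4.0]])   # arbitrary diagonalizable example (assumption)
x0 = np.array([1.0, 1.0])
t, t0 = 1.3, 0.0

lam, X = eig(A)                  # right eigenvectors as columns of X
Y = np.linalg.inv(X).conj().T    # columns y_i satisfy y_i^H x_j = delta_ij

x_modal = sum((Y[:, i].conj() @ x0) * np.exp(lam[i] * (t - t0)) * X[:, i]
              for i in range(A.shape[0]))
x_expm = expm((t - t0) * A) @ x0
print(np.max(np.abs(x_modal - x_expm)))    # small
```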
11.1.6 Computation of the matrix exponential

JCF method

Let $A \in \mathbb{R}^{n\times n}$ and suppose $X \in \mathbb{R}^{n\times n}_n$ is such that $X^{-1} A X = J$, where $J$ is a JCF for $A$. Then

$e^{tA} = e^{t X J X^{-1}} = X e^{tJ} X^{-1}$
$= \begin{cases} \sum_{i=1}^{n} e^{\lambda_i t} x_i y_i^H & \text{if } A \text{ is diagonalizable,} \\[4pt] \sum_{i=1}^{m} X_i e^{t J_i} Y_i^H & \text{in general.} \end{cases}$
If $A$ is diagonalizable, it is then easy to compute $e^{tA}$ via the formula $e^{tA} = X e^{tJ} X^{-1}$ since $e^{tJ}$ is simply a diagonal matrix.

In the more general case, the problem clearly reduces simply to the computation of the exponential of a Jordan block. To be specific, let $J_i \in \mathbb{C}^{k\times k}$ be a Jordan block of the form

$J_i = \begin{bmatrix} \lambda & 1 & & & 0 \\ 0 & \lambda & 1 & & \\ \vdots & & \ddots & \ddots & \\ & & & \lambda & 1 \\ 0 & & \cdots & 0 & \lambda \end{bmatrix} = \lambda I + N.$

Clearly $\lambda I$ and $N$ commute. Thus, $e^{tJ_i} = e^{t\lambda I} e^{tN}$ by property 4 of the matrix exponential. The diagonal part is easy: $e^{t\lambda I} = \mathrm{diag}(e^{\lambda t}, \ldots, e^{\lambda t})$. But $e^{tN}$ is almost as easy since $N$ is nilpotent of degree $k$.

Definition 11.8. A matrix $M \in \mathbb{R}^{n\times n}$ is nilpotent of degree (or index, or grade) $p$ if $M^p = 0$, while $M^{p-1} \ne 0$.

For the matrix $N$ defined above, it is easy to check that while $N$ has 1's along only its first superdiagonal (and 0's elsewhere), $N^2$ has 1's along only its second superdiagonal, and so forth. Finally, $N^{k-1}$ has a 1 in its $(1, k)$ element and has 0's everywhere else, and $N^k = 0$. Thus, the series expansion of $e^{tN}$ is finite, i.e.,

$e^{tN} = I + tN + \frac{t^2}{2!} N^2 + \cdots + \frac{t^{k-1}}{(k-1)!} N^{k-1}.$

Thus,

$e^{tJ_i} = \begin{bmatrix} e^{\lambda t} & t e^{\lambda t} & \frac{t^2}{2!} e^{\lambda t} & \cdots & \frac{t^{k-1}}{(k-1)!} e^{\lambda t} \\ 0 & e^{\lambda t} & t e^{\lambda t} & \ddots & \vdots \\ \vdots & & \ddots & \ddots & \frac{t^2}{2!} e^{\lambda t} \\ & & & e^{\lambda t} & t e^{\lambda t} \\ 0 & \cdots & & 0 & e^{\lambda t} \end{bmatrix}.$

In the case when $\lambda$ is complex, a real version of the above can be worked out.
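A brief computational sketch (not part of the original text): $e^{tJ}$ for a single $k \times k$ Jordan block is built directly from the finite series for $e^{tN}$ above and compared with a general matrix-exponential routine. The numerical values and the use of Python with numpy/scipy are assumptions of this sketch.

```python
# Sketch (not from the text): e^{tJ} = e^{t*lambda} * e^{tN} for J = lambda*I + N.
import numpy as np
from math import factorial
from scipy.linalg import expm

lam, k, t = -2.0, 4, 0.8                      # example values (assumption)
J = lam * np.eye(k) + np.diag(np.ones(k - 1), 1)

N = np.diag(np.ones(k - 1), 1)                 # nilpotent part, N^k = 0
etN = sum((t ** j / factorial(j)) * np.linalg.matrix_power(N, j) for j in range(k))
etJ = np.exp(lam * t) * etN

print(np.max(np.abs(etJ - expm(t * J))))       # small
```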
Example 11.9. Let $A = \begin{bmatrix} -4 & 4 \\ -1 & 0 \end{bmatrix}$. Then $\Lambda(A) = \{-2, -2\}$ and

$e^{tA} = X e^{tJ} X^{-1} = \begin{bmatrix} 2 & 1 \\ 1 & 1 \end{bmatrix} \exp\left( t \begin{bmatrix} -2 & 1 \\ 0 & -2 \end{bmatrix} \right) \begin{bmatrix} 1 & -1 \\ -1 & 2 \end{bmatrix}$

$= \begin{bmatrix} 2 & 1 \\ 1 & 1 \end{bmatrix} \begin{bmatrix} e^{-2t} & t e^{-2t} \\ 0 & e^{-2t} \end{bmatrix} \begin{bmatrix} 1 & -1 \\ -1 & 2 \end{bmatrix}.$

Interpolation method

This method is numerically unstable in finite-precision arithmetic but is quite effective for hand calculation in small-order problems. The method is stated and illustrated for the exponential function but applies equally well to other functions.

Given $A \in \mathbb{R}^{n\times n}$ and $f(\lambda) = e^{t\lambda}$, compute $f(A) = e^{tA}$, where $t$ is a fixed scalar. Suppose the characteristic polynomial of $A$ can be written as $\pi(\lambda) = \prod_{i=1}^{m} (\lambda - \lambda_i)^{n_i}$, where the $\lambda_i$s are distinct. Define

$g(\lambda) = \alpha_0 + \alpha_1 \lambda + \cdots + \alpha_{n-1} \lambda^{n-1},$

where $\alpha_0, \ldots, \alpha_{n-1}$ are $n$ constants that are to be determined. They are, in fact, the unique solution of the $n$ equations:

$g^{(k)}(\lambda_i) = f^{(k)}(\lambda_i); \quad k = 0, 1, \ldots, n_i - 1, \; i \in \underline{m}.$

Here, the superscript $(k)$ denotes the $k$th derivative with respect to $\lambda$. With the $\alpha_i$s then known, the function $g$ is known and $f(A) = g(A)$. The motivation for this method is the Cayley-Hamilton Theorem, Theorem 9.3, which says that all powers of $A$ greater than $n - 1$ can be expressed as linear combinations of $A^k$ for $k = 0, 1, \ldots, n - 1$. Thus, all the terms of order greater than $n - 1$ in the power series for $e^{tA}$ can be written in terms of these lower-order powers as well. The polynomial $g$ gives the appropriate linear combination.

Example 11.10. Let $A \in \mathbb{R}^{3\times 3}$ and $f(\lambda) = e^{t\lambda}$, with $\pi(\lambda) = (\lambda + 1)^3$, so $m = 1$ and $n_1 = 3$.

Let $g(\lambda) = \alpha_0 + \alpha_1 \lambda + \alpha_2 \lambda^2$. Then the three equations for the $\alpha_i$s are given by

$g(-1) = f(-1) \implies \alpha_0 - \alpha_1 + \alpha_2 = e^{-t},$
$g'(-1) = f'(-1) \implies \alpha_1 - 2\alpha_2 = t e^{-t},$
$g''(-1) = f''(-1) \implies 2\alpha_2 = t^2 e^{-t}.$
Solving for the $\alpha_i$s, we find

$\alpha_2 = \frac{t^2}{2} e^{-t}, \quad \alpha_1 = t e^{-t} + t^2 e^{-t}, \quad \alpha_0 = e^{-t} + t e^{-t} + \frac{t^2}{2} e^{-t}.$

Thus, $f(A) = e^{tA} = g(A) = \alpha_0 I + \alpha_1 A + \alpha_2 A^2$.

Example 11.11. Let $A = \begin{bmatrix} -4 & 4 \\ -1 & 0 \end{bmatrix}$ and $f(\lambda) = e^{t\lambda}$. Then $\pi(\lambda) = (\lambda + 2)^2$ so $m = 1$ and $n_1 = 2$.

Let $g(\lambda) = \alpha_0 + \alpha_1 \lambda$. Then the defining equations for the $\alpha_i$s are given by

$g(-2) = f(-2) \implies \alpha_0 - 2\alpha_1 = e^{-2t},$
$g'(-2) = f'(-2) \implies \alpha_1 = t e^{-2t}.$

Solving for the $\alpha_i$s, we find

$\alpha_0 = e^{-2t} + 2t e^{-2t},$
$\alpha_1 = t e^{-2t}.$
Thus,

$f(A) = e^{tA} = g(A) = \alpha_0 I + \alpha_1 A$

$= \left( e^{-2t} + 2t e^{-2t} \right) \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} + t e^{-2t} \begin{bmatrix} -4 & 4 \\ -1 & 0 \end{bmatrix}$

$= \begin{bmatrix} e^{-2t} - 2t e^{-2t} & 4t e^{-2t} \\ -t e^{-2t} & e^{-2t} + 2t e^{-2t} \end{bmatrix}.$
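A brief computational sketch (not part of the original text) of the interpolation method for a $2 \times 2$ matrix with a repeated eigenvalue at $-2$, in the spirit of Example 11.11; the specific matrix, the value of $t$, and the use of Python with numpy/scipy are assumptions of this sketch.

```python
# Sketch (not from the text): interpolation (Hermite) method for e^{tA} when
# pi(lambda) = (lambda + 2)^2, matching value and first derivative at -2.
import numpy as np
from scipy.linalg import expm

A = np.array([[-4.0, 4.0], [-1.0, 0.0]])   # repeated eigenvalue -2
t = 0.5

a1 = t * np.exp(-2.0 * t)                  # g'(-2) = f'(-2)
a0 = np.exp(-2.0 * t) + 2.0 * a1           # g(-2) = f(-2)  =>  a0 - 2*a1 = e^{-2t}
etA = a0 * np.eye(2) + a1 * A              # f(A) = g(A) = a0*I + a1*A

print(np.max(np.abs(etA - expm(t * A))))   # agrees to rounding error
```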
Other methods

1. Use $e^{tA} = \mathcal{L}^{-1}\{(sI - A)^{-1}\}$ and techniques for inverse Laplace transforms. This is quite effective for small-order problems, but general nonsymbolic computational techniques are numerically unstable since the problem is theoretically equivalent to knowing precisely a JCF.

2. Use Padé approximation. There is an extensive literature on approximating certain nonlinear functions by rational functions. The matrix analogue yields
$e^A \approx D^{-1}(A) N(A)$, where $D(A) = \delta_0 I + \delta_1 A + \cdots + \delta_p A^p$ and $N(A) = \nu_0 I + \nu_1 A + \cdots + \nu_q A^q$. Explicit formulas are known for the coefficients of the numerator and denominator polynomials of various orders. Unfortunately, a Padé approximation for the exponential is accurate only in a neighborhood of the origin; in the matrix case this means when $\|A\|$ is sufficiently small. This can be arranged by scaling $A$, say, by multiplying it by $1/2^k$ for sufficiently large $k$ and using the fact that $e^A = \left( e^{(1/2^k)A} \right)^{2^k}$ (a sketch of this scaling-and-squaring idea follows this list). Numerical loss of accuracy can occur in this procedure from the successive squarings.

3. Reduce $A$ to (real) Schur form $S$ via the unitary similarity $U$ and use $e^A = U e^S U^H$ and successive recursions up the superdiagonals of the (quasi) upper triangular matrix $e^S$.

4. Many methods are outlined in, for example, [19]. Reliable and efficient computation of matrix functions such as $e^A$ and $\log(A)$ remains a fertile area for research.
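A brief computational sketch (not part of the original text) of the scaling-and-squaring idea mentioned in item 2, here using a truncated Taylor series for the scaled matrix rather than a Padé approximant; the matrix and parameter choices, and the use of Python with numpy/scipy, are assumptions of this sketch.

```python
# Sketch (not from the text): e^A = (e^{A/2^k})^{2^k}, with a crude Taylor
# approximation of the scaled exponential; production codes use Pade here.
import numpy as np
from math import factorial
from scipy.linalg import expm

def expm_scale_square(A, k=8, terms=10):
    """Crude e^A: Taylor-approximate e^{A/2^k}, then square k times."""
    S = A / (2.0 ** k)
    E = sum(np.linalg.matrix_power(S, j) / factorial(j) for j in range(terms))
    for _ in range(k):
        E = E @ E
    return E

A = np.array([[1.0, 3.0], [-2.0, 4.0]])       # arbitrary example (assumption)
print(np.max(np.abs(expm_scale_square(A) - expm(A))))   # small
```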
11.2 Difference Equations

In this section we outline solutions of discrete-time analogues of the linear differential equations of the previous section. Linear discrete-time systems, modeled by systems of difference equations, exhibit many parallels to the continuous-time differential equation case, and this observation is exploited frequently.

11.2.1 Homogeneous linear difference equations

Theorem 11.12. Let $A \in \mathbb{R}^{n\times n}$. The solution of the linear homogeneous system of difference equations

$x_{k+1} = A x_k; \quad x_0 \in \mathbb{R}^n$     (11.13)

for $k \ge 0$ is given by

$x_k = A^k x_0.$     (11.14)

Proof: The proof is almost immediate upon substitution of (11.14) into (11.13). □

Remark 11.13. Again, we restrict our attention only to the so-called time-invariant case, where the matrix $A$ in (11.13) is constant and does not depend on $k$. We could also consider an arbitrary "initial time" $k_0$, but since the system is time-invariant, and since we want to keep the formulas "clean" (i.e., no double subscripts), we have chosen $k_0 = 0$ for convenience.

11.2.2 Inhomogeneous linear difference equations

Theorem 11.14. Let $A \in \mathbb{R}^{n\times n}$, $B \in \mathbb{R}^{n\times m}$ and suppose $\{u_k\}_{k=0}^{+\infty}$ is a given sequence of $m$-vectors. Then the solution of the inhomogeneous initial-value problem

$x_{k+1} = A x_k + B u_k; \quad x_0 \in \mathbb{R}^n$     (11.15)
is given by

$x_k = A^k x_0 + \sum_{j=0}^{k-1} A^{k-j-1} B u_j, \quad k \ge 0.$     (11.16)

Proof: The proof is again almost immediate upon substitution of (11.16) into (11.15). □

11.2.3 Computation of matrix powers

It is clear that solution of linear systems of difference equations involves computation of $A^k$. One solution method, which is numerically unstable but sometimes useful for hand calculation, is to use z-transforms, by analogy with the use of Laplace transforms to compute a matrix exponential. One definition of the z-transform of a sequence $\{g_k\}$ is

$\mathcal{Z}(\{g_k\}) = \sum_{k=0}^{+\infty} g_k z^{-k}.$

Assuming $|z| > \max_{\lambda \in \Lambda(A)} |\lambda|$, the z-transform of the sequence $\{A^k\}$ is then given by

$\mathcal{Z}(\{A^k\}) = \sum_{k=0}^{+\infty} z^{-k} A^k = I + \frac{1}{z} A + \frac{1}{z^2} A^2 + \cdots$
$= \left( I - \tfrac{1}{z} A \right)^{-1}$
$= z (zI - A)^{-1}.$

Methods based on the JCF are sometimes useful, again mostly for small-order problems. Assume that $A \in \mathbb{R}^{n\times n}$ and let $X \in \mathbb{R}^{n\times n}_n$ be such that $X^{-1} A X = J$, where $J$ is a JCF for $A$. Then

$A^k = (X J X^{-1})^k = X J^k X^{-1}$
$= \begin{cases} \sum_{i=1}^{n} \lambda_i^k x_i y_i^H & \text{if } A \text{ is diagonalizable,} \\[4pt] \sum_{i=1}^{m} X_i J_i^k Y_i^H & \text{in general.} \end{cases}$

If $A$ is diagonalizable, it is then easy to compute $A^k$ via the formula $A^k = X J^k X^{-1}$ since $J^k$ is simply a diagonal matrix.
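A brief computational sketch (not part of the original text) checking the discrete variation of parameters formula (11.16) against direct iteration of the recursion; the data and the use of Python with numpy are assumptions of this sketch.

```python
# Sketch (not from the text): x_k = A^k x_0 + sum_{j=0}^{k-1} A^{k-j-1} B u_j
# versus direct iteration of x_{k+1} = A x_k + B u_k.
import numpy as np

A = np.array([[0.5, 1.0], [0.0, -0.3]])     # arbitrary example (assumption)
B = np.array([[0.0], [1.0]])
x0 = np.array([1.0, 2.0])
k = 10
u = [np.array([np.cos(0.2 * j)]) for j in range(k)]   # given input sequence

x = x0.copy()                                # direct iteration
for j in range(k):
    x = A @ x + (B @ u[j]).ravel()

Apow = np.linalg.matrix_power                # closed-form sum
x_formula = Apow(A, k) @ x0 + sum(Apow(A, k - j - 1) @ (B @ u[j]).ravel()
                                  for j in range(k))
print(np.max(np.abs(x - x_formula)))         # zero up to rounding
```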
In the general case, the problem again reduces to the computation of the power of a Jordan block. To be specific, let $J_i \in \mathbb{C}^{p\times p}$ be a Jordan block of the form

$J_i = \begin{bmatrix} \lambda & 1 & & 0 \\ & \lambda & \ddots & \\ & & \ddots & 1 \\ 0 & & & \lambda \end{bmatrix}.$

Writing $J_i = \lambda I + N$ and noting that $\lambda I$ and the nilpotent matrix $N$ commute, it is then straightforward to apply the binomial theorem to $(\lambda I + N)^k$ and verify that

$J_i^k = \begin{bmatrix} \lambda^k & k\lambda^{k-1} & \binom{k}{2}\lambda^{k-2} & \cdots & \binom{k}{p-1}\lambda^{k-p+1} \\ 0 & \lambda^k & k\lambda^{k-1} & \ddots & \vdots \\ \vdots & & \ddots & \ddots & \binom{k}{2}\lambda^{k-2} \\ & & & \lambda^k & k\lambda^{k-1} \\ 0 & \cdots & & 0 & \lambda^k \end{bmatrix}.$

The symbol $\binom{k}{q}$ has the usual definition of $\frac{k!}{q!(k-q)!}$ and is to be interpreted as 0 if $k < q$. In the case when $\lambda$ is complex, a real version of the above can be worked out.

Example 11.15. Let $A = \begin{bmatrix} -4 & 4 \\ -1 & 0 \end{bmatrix}$. Then

$A^k = X J^k X^{-1} = \begin{bmatrix} 2 & 1 \\ 1 & 1 \end{bmatrix} \begin{bmatrix} (-2)^k & k(-2)^{k-1} \\ 0 & (-2)^k \end{bmatrix} \begin{bmatrix} 1 & -1 \\ -1 & 2 \end{bmatrix}$

$= \begin{bmatrix} (-2)^{k-1}(-2 - 2k) & k(-2)^{k+1} \\ -k(-2)^{k-1} & (-2)^{k-1}(2k - 2) \end{bmatrix}.$

Basic analogues of other methods such as those mentioned in Section 11.1.6 can also be derived for the computation of matrix powers, but again no universally "best" method exists. For an erudite discussion of the state of the art, see [11, Ch. 18].

11.3 Higher-Order Equations

It is well known that a higher-order (scalar) linear differential equation can be converted to a first-order linear system. Consider, for example, the initial-value problem

$y^{(n)}(t) + a_{n-1} y^{(n-1)}(t) + \cdots + a_1 \dot{y}(t) + a_0 y(t) = \phi(t)$     (11.17)

with $\phi(t)$ a given function and $n$ initial conditions

$y(0) = c_0, \; \dot{y}(0) = c_1, \; \ldots, \; y^{(n-1)}(0) = c_{n-1}.$     (11.18)
Here, $y^{(m)}$ denotes the $m$th derivative of $y$ with respect to $t$. Define a vector $x(t) \in \mathbb{R}^n$ with components $x_1(t) = y(t)$, $x_2(t) = \dot{y}(t)$, ..., $x_n(t) = y^{(n-1)}(t)$. Then

$\dot{x}_1(t) = x_2(t) = \dot{y}(t),$
$\dot{x}_2(t) = x_3(t) = \ddot{y}(t),$
$\;\;\vdots$
$\dot{x}_{n-1}(t) = x_n(t) = y^{(n-1)}(t),$
$\dot{x}_n(t) = y^{(n)}(t) = -a_0 y(t) - a_1 \dot{y}(t) - \cdots - a_{n-1} y^{(n-1)}(t) + \phi(t)$
$\qquad\quad = -a_0 x_1(t) - a_1 x_2(t) - \cdots - a_{n-1} x_n(t) + \phi(t).$

These equations can then be rewritten as the first-order linear system

$\dot{x}(t) = \begin{bmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & & \vdots \\ \vdots & & \ddots & \ddots & 0 \\ 0 & \cdots & & 0 & 1 \\ -a_0 & -a_1 & \cdots & & -a_{n-1} \end{bmatrix} x(t) + \begin{bmatrix} 0 \\ \vdots \\ 0 \\ \phi(t) \end{bmatrix}.$     (11.19)

The initial conditions take the form $x(0) = c = [c_0, c_1, \ldots, c_{n-1}]^T$.

Note that $\det(\lambda I - A) = \lambda^n + a_{n-1}\lambda^{n-1} + \cdots + a_1 \lambda + a_0$. However, the companion matrix $A$ in (11.19) possesses many nasty numerical properties for even moderately sized $n$ and, as mentioned before, is often well worth avoiding, at least for computational purposes.

A similar procedure holds for the conversion of a higher-order difference equation, with $n$ initial conditions, into a linear first-order difference equation with (vector) initial condition.
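A brief computational sketch (not part of the original text) of the conversion just described, for a second-order equation: the companion system (11.19) is formed and integrated numerically. The coefficients, forcing function, and use of Python with numpy/scipy are assumptions of this sketch.

```python
# Sketch (not from the text): convert y'' + a1*y' + a0*y = phi(t) to the
# first-order companion form and solve it numerically.
import numpy as np
from scipy.integrate import solve_ivp

a0, a1 = 2.0, 3.0                     # example coefficients (assumption)
phi = lambda t: np.sin(t)             # given forcing function

A = np.array([[0.0, 1.0],
              [-a0, -a1]])            # companion matrix of lambda^2 + a1*lambda + a0
f = lambda t, x: A @ x + np.array([0.0, phi(t)])

c = np.array([1.0, 0.0])              # y(0) = 1, y'(0) = 0
sol = solve_ivp(f, (0.0, 10.0), c, rtol=1e-9, atol=1e-12)
print(sol.y[0, -1])                   # x1(t) = y(t) at t = 10
```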
EXERCISES

1. Let $P \in \mathbb{R}^{n\times n}$ be a projection. Show that $e^P \approx I + 1.718 P$.

2. Suppose $x, y \in \mathbb{R}^n$ and let $A = x y^T$. Further, let $\alpha = x^T y$. Show that $e^{tA} = I + g(t, \alpha)\, x y^T$, where

$g(t, \alpha) = \begin{cases} \frac{1}{\alpha}\left( e^{\alpha t} - 1 \right) & \text{if } \alpha \ne 0, \\ t & \text{if } \alpha = 0. \end{cases}$

3. Let

$A = \begin{bmatrix} I & X \\ 0 & -I \end{bmatrix},$
where $X \in \mathbb{R}^{m\times n}$ is arbitrary. Show that

$e^A = \begin{bmatrix} eI & \sinh(1)\, X \\ 0 & e^{-1} I \end{bmatrix}.$

4. Let $K$ denote the skew-symmetric matrix

$\begin{bmatrix} 0 & I_n \\ -I_n & 0 \end{bmatrix},$

where $I_n$ denotes the $n \times n$ identity matrix. A matrix $A \in \mathbb{R}^{2n\times 2n}$ is said to be Hamiltonian if $K^{-1} A^T K = -A$ and to be symplectic if $K^{-1} A^T K = A^{-1}$.

(a) Suppose $H$ is Hamiltonian and let $\lambda$ be an eigenvalue of $H$. Show that $-\lambda$ must also be an eigenvalue of $H$.

(b) Suppose $S$ is symplectic and let $\lambda$ be an eigenvalue of $S$. Show that $1/\lambda$ must also be an eigenvalue of $S$.

(c) Suppose that $H$ is Hamiltonian and $S$ is symplectic. Show that $S^{-1} H S$ must be Hamiltonian.

(d) Suppose $H$ is Hamiltonian. Show that $e^H$ must be symplectic.

5. Let $\alpha, \beta \in \mathbb{R}$ and $A = \begin{bmatrix} \alpha & \beta \\ -\beta & \alpha \end{bmatrix}$. Then show that

$e^{tA} = \begin{bmatrix} e^{\alpha t} \cos \beta t & e^{\alpha t} \sin \beta t \\ -e^{\alpha t} \sin \beta t & e^{\alpha t} \cos \beta t \end{bmatrix}.$

6. Find a general expression for

7. Find $e^{tA}$ when $A = $

8. Let

(a) Solve the differential equation $\dot{x} = Ax$ for the given initial condition $x(0)$.
(b) Solve the differential equation $\dot{x} = Ax + b$ for the given initial condition $x(0)$.

9. Consider the initial-value problem

$\dot{x}(t) = A x(t); \quad x(0) = x_0$

for $t \ge 0$. Suppose that $A \in \mathbb{R}^{n\times n}$ is skew-symmetric and let $\alpha = \|x_0\|_2$. Show that $\|x(t)\|_2 = \alpha$ for all $t > 0$.

10. Consider the $n \times n$ matrix initial-value problem

$\dot{X}(t) = A X(t) - X(t) A; \quad X(0) = C.$

Show that the eigenvalues of the solution $X(t)$ of this problem are the same as those of $C$ for all $t$.

11. The year is 2004 and there are three large "free trade zones" in the world: Asia (A), Europe (E), and the Americas (R). Suppose certain multinational companies have total assets of $40 trillion of which $20 trillion is in E and $20 trillion is in R. Each year half of the Americas' money stays home, a quarter goes to Europe, and a quarter goes to Asia. For Europe and Asia, half stays home and half goes to the Americas.

(a) Find the matrix M that gives

$\begin{bmatrix} A \\ E \\ R \end{bmatrix}_{\text{year } k+1} = M \begin{bmatrix} A \\ E \\ R \end{bmatrix}_{\text{year } k}$

(b) Find the eigenvalues and right eigenvectors of M.

(c) Find the distribution of the companies' assets at year k.

(d) Find the limiting distribution of the $40 trillion as the universe ends, i.e., as $k \to +\infty$ (i.e., around the time the Cubs win a World Series).

(Exercise adapted from Problem 5.3.11 in [24].)

12. (a) Find the solution of the initial-value problem

$\ddot{y}(t) + 2\dot{y}(t) + y(t) = 0; \quad y(0) = 1, \; \dot{y}(0) = 0.$

(b) Consider the difference equation

$z_{k+2} + 2z_{k+1} + z_k = 0.$

If $z_0 = 1$ and $z_1 = 2$, what is the value of $z_{1000}$? What is the value of $z_k$ in general?
Chapter 12

Generalized Eigenvalue Problems

12.1 The Generalized Eigenvalue/Eigenvector Problem

In this chapter we consider the generalized eigenvalue problem

$Ax = \lambda Bx,$

where $A, B \in \mathbb{C}^{n\times n}$. The standard eigenvalue problem considered in Chapter 9 obviously corresponds to the special case that $B = I$.

Definition 12.1. A nonzero vector $x \in \mathbb{C}^n$ is a right generalized eigenvector of the pair $(A, B)$ with $A, B \in \mathbb{C}^{n\times n}$ if there exists a scalar $\lambda \in \mathbb{C}$, called a generalized eigenvalue, such that

$Ax = \lambda Bx.$     (12.1)

Similarly, a nonzero vector $y \in \mathbb{C}^n$ is a left generalized eigenvector corresponding to an eigenvalue $\lambda$ if

$y^H A = \lambda y^H B.$     (12.2)

When the context is such that no confusion can arise, the adjective "generalized" is usually dropped. As with the standard eigenvalue problem, if $x$ [$y$] is a right [left] eigenvector, then so is $\alpha x$ [$\alpha y$] for any nonzero scalar $\alpha \in \mathbb{C}$.

Definition 12.2. The matrix $A - \lambda B$ is called a matrix pencil (or pencil of the matrices $A$ and $B$).

As with the standard eigenvalue problem, eigenvalues for the generalized eigenvalue problem occur where the matrix pencil $A - \lambda B$ is singular.

Definition 12.3. The polynomial $\pi(\lambda) = \det(A - \lambda B)$ is called the characteristic polynomial of the matrix pair $(A, B)$. The roots of $\pi(\lambda)$ are the eigenvalues of the associated generalized eigenvalue problem.

Remark 12.4. When $A, B \in \mathbb{R}^{n\times n}$, the characteristic polynomial is obviously real, and hence nonreal eigenvalues must occur in complex conjugate pairs.
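A brief computational sketch (not part of the original text) of the generalized eigenvalue problem just defined; the matrices are arbitrary examples and the use of Python with numpy/scipy is an assumption of this sketch.

```python
# Sketch (not from the text): right generalized eigenpairs of (A, B) satisfy
# A x = lambda B x.
import numpy as np
from scipy.linalg import eig

A = np.array([[1.0, 2.0], [3.0, 4.0]])   # arbitrary example pair (assumption)
B = np.array([[2.0, 0.0], [0.0, 1.0]])

lam, X = eig(A, B)          # right generalized eigenvectors as columns of X
for i in range(2):
    residual = np.max(np.abs(A @ X[:, i] - lam[i] * (B @ X[:, i])))
    print(lam[i], residual)  # residual is small for each eigenpair
```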
Remark 12.5. If $B = I$ (or in general when $B$ is nonsingular), then $\pi(\lambda)$ is a polynomial of degree $n$, and hence there are $n$ eigenvalues associated with the pencil $A - \lambda B$. However, when $B \ne I$, in particular, when $B$ is singular, there may be 0, $k \in \underline{n}$, or infinitely many eigenvalues associated with the pencil $A - \lambda B$. For example, suppose $A$ and $B$ are $2 \times 2$ matrices, depending on two scalars $\alpha$ and $\beta$, as in (12.3). Then the characteristic polynomial is

$\det(A - \lambda B) = (1 - \lambda)(\alpha - \beta\lambda)$

and there are several cases to consider.

Case 1: $\alpha \ne 0$, $\beta \ne 0$. There are two eigenvalues, 1 and $\frac{\alpha}{\beta}$.

Case 2: $\alpha = 0$, $\beta \ne 0$. There are two eigenvalues, 1 and 0.

Case 3: $\alpha \ne 0$, $\beta = 0$. There is only one eigenvalue, 1 (of multiplicity 1).

Case 4: $\alpha = 0$, $\beta = 0$. All $\lambda \in \mathbb{C}$ are eigenvalues since $\det(A - \lambda B) \equiv 0$.

Definition 12.6. If $\det(A - \lambda B)$ is not identically zero, the pencil $A - \lambda B$ is said to be regular; otherwise, it is said to be singular.

Note that if $\mathcal{N}(A) \cap \mathcal{N}(B) \ne 0$, the associated matrix pencil is singular (as in Case 4 above).

Associated with any matrix pencil $A - \lambda B$ is a reciprocal pencil $B - \mu A$ and corresponding generalized eigenvalue problem. Clearly the reciprocal pencil has eigenvalues $\mu = \frac{1}{\lambda}$. It is instructive to consider the reciprocal pencil associated with the example in Remark 12.5. With $A$ and $B$ as in (12.3), the characteristic polynomial is

$\det(B - \mu A) = (1 - \mu)(\beta - \alpha\mu)$

and there are again four cases to consider.

Case 1: $\alpha \ne 0$, $\beta \ne 0$. There are two eigenvalues, 1 and $\frac{\beta}{\alpha}$.

Case 2: $\alpha = 0$, $\beta \ne 0$. There is only one eigenvalue, 1 (of multiplicity 1).

Case 3: $\alpha \ne 0$, $\beta = 0$. There are two eigenvalues, 1 and 0.

Case 4: $\alpha = 0$, $\beta = 0$. All $\mu \in \mathbb{C}$ are eigenvalues since $\det(B - \mu A) \equiv 0$.

At least for the case of regular pencils, it is apparent where the "missing" eigenvalues have gone in Cases 2 and 3. That is to say, there is a second eigenvalue "at infinity" for Case 3 of $A - \lambda B$, with its reciprocal eigenvalue being 0 in Case 3 of the reciprocal pencil $B - \mu A$. A similar reciprocal symmetry holds for Case 2.

While there are applications in system theory and control where singular pencils appear, only the case of regular pencils is considered in the remainder of this chapter. Note that $A$ and/or $B$ may still be singular. If $B$ is singular, the pencil $A - \lambda B$ always has
fewer than $n$ eigenvalues. If $B$ is nonsingular, the pencil $A - \lambda B$ always has precisely $n$ eigenvalues, since the generalized eigenvalue problem is then easily seen to be equivalent to the standard eigenvalue problem $B^{-1}Ax = \lambda x$ (or $AB^{-1}w = \lambda w$). However, this turns out to be a very poor numerical procedure for handling the generalized eigenvalue problem if $B$ is even moderately ill conditioned with respect to inversion. Numerical methods that work directly on $A$ and $B$ are discussed in standard textbooks on numerical linear algebra; see, for example, [7, Sec. 7.7] or [25, Sec. 6.7].

12.2 Canonical Forms

Just as for the standard eigenvalue problem, canonical forms are available for the generalized eigenvalue problem. Since the latter involves a pair of matrices, we now deal with equivalencies rather than similarities, and the first theorem deals with what happens to eigenvalues and eigenvectors under equivalence.

Theorem 12.7. Let $A, B, Q, Z \in \mathbb{C}^{n\times n}$ with $Q$ and $Z$ nonsingular. Then

1. the eigenvalues of the problems $A - \lambda B$ and $QAZ - \lambda QBZ$ are the same (the two problems are said to be equivalent).

2. if $x$ is a right eigenvector of $A - \lambda B$, then $Z^{-1}x$ is a right eigenvector of $QAZ - \lambda QBZ$.

3. if $y$ is a left eigenvector of $A - \lambda B$, then $Q^{-H}y$ is a left eigenvector of $QAZ - \lambda QBZ$.

Proof:

1. $\det(QAZ - \lambda QBZ) = \det[Q(A - \lambda B)Z] = \det Q \det Z \det(A - \lambda B)$. Since $\det Q$ and $\det Z$ are nonzero, the result follows.

2. The result follows by noting that $(A - \lambda B)x = 0$ if and only if $Q(A - \lambda B)Z(Z^{-1}x) = 0$.

3. Again, the result follows easily by noting that $y^H(A - \lambda B) = 0$ if and only if $(Q^{-H}y)^H Q(A - \lambda B)Z = 0$. □

The first canonical form is an analogue of Schur's Theorem and forms, in fact, the theoretical foundation for the QZ algorithm, which is the generally preferred method for solving the generalized eigenvalue problem; see, for example, [7, Sec. 7.7] or [25, Sec. 6.7].

Theorem 12.8. Let $A, B \in \mathbb{C}^{n\times n}$. Then there exist unitary matrices $Q, Z \in \mathbb{C}^{n\times n}$ such that

$QAZ = T_\alpha, \quad QBZ = T_\beta,$

where $T_\alpha$ and $T_\beta$ are upper triangular.

By Theorem 12.7, the eigenvalues of the pencil $A - \lambda B$ are then the ratios of the diagonal elements of $T_\alpha$ to the corresponding diagonal elements of $T_\beta$, with the understanding that a zero diagonal element of $T_\beta$ corresponds to an infinite generalized eigenvalue.

There is also an analogue of the Murnaghan-Wintner Theorem for real matrices.
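A brief computational sketch (not part of the original text) of the QZ (generalized Schur) decomposition of Theorem 12.8; the example pencil, scipy's sign and ordering conventions, and the use of Python are assumptions of this sketch.

```python
# Sketch (not from the text): generalized Schur form via scipy's qz; eigenvalues
# are ratios of the triangular diagonals, and a (near-)zero diagonal entry of the
# "B" factor signals an infinite eigenvalue.
import numpy as np
from scipy.linalg import qz

A = np.array([[1.0, 2.0], [3.0, 4.0]])     # arbitrary example pencil (assumption)
B = np.array([[1.0, 1.0], [0.0, 0.0]])     # singular B: one infinite eigenvalue

Ta, Tb, Q, Z = qz(A, B, output='complex')  # scipy: A = Q Ta Z^H, B = Q Tb Z^H
alpha, beta = np.diag(Ta), np.diag(Tb)
finite = np.abs(beta) > 1e-12
print(alpha[finite] / beta[finite])        # finite generalized eigenvalues
```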
Theorem 12.9. Let $A, B \in \mathbb{R}^{n\times n}$. Then there exist orthogonal matrices $Q, Z \in \mathbb{R}^{n\times n}$ such that

$QAZ = S, \quad QBZ = T,$

where $T$ is upper triangular and $S$ is quasi-upper-triangular.

When $S$ has a $2 \times 2$ diagonal block, the $2 \times 2$ subpencil formed with the corresponding $2 \times 2$ diagonal subblock of $T$ has a pair of complex conjugate eigenvalues. Otherwise, real eigenvalues are given as above by the ratios of diagonal elements of $S$ to corresponding elements of $T$.

There is also an analogue of the Jordan canonical form called the Kronecker canonical form (KCF). A full description of the KCF, including analogues of principal vectors and so forth, is beyond the scope of this book. In this chapter, we present only statements of the basic theorems and some examples. The first theorem pertains only to "square" regular pencils, while the full KCF in all its generality applies also to "rectangular" and singular pencils.

Theorem 12.10. Let $A, B \in \mathbb{C}^{n\times n}$ and suppose the pencil $A - \lambda B$ is regular. Then there exist nonsingular matrices $P, Q \in \mathbb{C}^{n\times n}$ such that

$P(A - \lambda B)Q = \begin{bmatrix} J & 0 \\ 0 & I \end{bmatrix} - \lambda \begin{bmatrix} I & 0 \\ 0 & N \end{bmatrix},$

where $J$ is a Jordan canonical form corresponding to the finite eigenvalues of $A - \lambda B$ and $N$ is a nilpotent matrix of Jordan blocks associated with 0 and corresponding to the infinite eigenvalues of $A - \lambda B$.

Example 12.11. The matrix pencil

$\begin{bmatrix} 2 & 1 & 0 & 0 & 0 \\ 0 & 2 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 \end{bmatrix} - \lambda \begin{bmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix}$

with characteristic polynomial $(\lambda - 2)^2$ has a finite eigenvalue 2 of multiplicity 2 and three infinite eigenvalues.

Theorem 12.12 (Kronecker Canonical Form). Let $A, B \in \mathbb{C}^{m\times n}$. Then there exist nonsingular matrices $P \in \mathbb{C}^{m\times m}$ and $Q \in \mathbb{C}^{n\times n}$ such that

$P(A - \lambda B)Q = \mathrm{diag}\left( L_{l_1}, \ldots, L_{l_s}, L_{r_1}^T, \ldots, L_{r_t}^T, J - \lambda I, I - \lambda N \right),$
while the nilpotent matrix N in this example is
12.2. Canonical Forms 129
where N is nilpotent, both Nand J are in Jordan canonical form, and Lk is the (k + I) x k
bidiagonal pencil
A 0 0
A
Lk =
0 0
A
0 0 I
The Ii are called the left minimal indices while the ri are called the right minimal indices.
Left or right minimal indices can take the value O.
Example 12.13. Consider a 13 x 12 block diagonal matrix whose diagonal blocks are
A 0]
I A .
o I
Such a matrix is in KCF. The first block of zeros actually corresponds to Lo, Lo, Lo, L6,
L6, where each Lo has "zero columns" and one row, while each L6 has "zero rows" and
one column. The second block is L\ while the third block is LI The next two blocks
correspond to
[
21
J = 0 2
o 0
while the nilpotent matrix N in this example is
000
Just as sets of eigenvectors span Ainvariant subspaces in the case of the standard
eigenproblem (recall Definition 9.35), there is an analogous geometric concept for the
generalized eigenproblem.
Definition 12.14. Let A, B E and suppose the pencil A  AB is regular. Then V is a
deflating subspace if
dim(AV + BV) = dimV. (12.4)
Just as in the standard eigenvalue case, there is a matrix characterization of deflating
subspace. Specifically, suppose S E is a matrix whose columns span a kdimensional
subspace S of i.e., n(S) = S. Then S is a deflating subspace for the pencil A  AB if
and only if there exists M E such that
AS = BSM. (12.5)
If $B = I$, then (12.4) becomes $\dim(A\mathcal{V} + \mathcal{V}) = \dim \mathcal{V}$, which is clearly equivalent to $A\mathcal{V} \subseteq \mathcal{V}$. Similarly, (12.5) becomes $AS = SM$ as before. If the pencil is not regular, there is a concept analogous to deflating subspace called a reducing subspace.

12.3 Application to the Computation of System Zeros

Consider the linear system

$\dot{x} = Ax + Bu,$
$y = Cx + Du$

with $A \in \mathbb{R}^{n\times n}$, $B \in \mathbb{R}^{n\times m}$, $C \in \mathbb{R}^{p\times n}$, and $D \in \mathbb{R}^{p\times m}$. This linear time-invariant state-space model is often used in multivariable control theory, where $x(= x(t))$ is called the state vector, $u$ is the vector of inputs or controls, and $y$ is the vector of outputs or observables. For details, see, for example, [26].

In general, the (finite) zeros of this system are given by the (finite) complex numbers $z$, where the "system pencil"

$\begin{bmatrix} A - zI & B \\ C & D \end{bmatrix}$     (12.6)

drops rank. In the special case $p = m$, these values are the generalized eigenvalues of the $(n + m) \times (n + m)$ pencil.

Example 12.15. Consider a single-input, single-output system with $n = 2$, $C = [1 \;\; 2]$, and $D = 0$ whose transfer matrix (see [26]) is

$g(s) = C(sI - A)^{-1}B + D = \frac{5s + 14}{s^2 + 3s + 2},$

which clearly has a zero at $-2.8$. Checking the finite eigenvalues of the pencil (12.6), we find the characteristic polynomial to be

$\det \begin{bmatrix} A - \lambda I & B \\ C & D \end{bmatrix} = 5\lambda + 14,$

which has a root at $-2.8$.
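A brief computational sketch (not part of the original text) of computing finite zeros of a square ($p = m$) system as generalized eigenvalues of the pencil (12.6); the state-space data below are arbitrary example values, not the data of Example 12.15, and the use of Python with numpy/scipy is an assumption of this sketch.

```python
# Sketch (not from the text): finite zeros = finite generalized eigenvalues of
# the system pencil [[A - zI, B], [C, D]], i.e., of the pair (M, E) below.
import numpy as np
from scipy.linalg import eig

A = np.array([[-1.0, 0.0], [0.0, -2.0]])   # example data (assumption); here
B = np.array([[1.0], [1.0]])               # g(s) = (2s + 3)/((s+1)(s+2)),
C = np.array([[1.0, 1.0]])                 # so there is one finite zero at -1.5
D = np.array([[0.0]])

n, m = A.shape[0], B.shape[1]
M = np.block([[A, B], [C, D]])
E = np.block([[np.eye(n), np.zeros((n, m))],
              [np.zeros((m, n)), np.zeros((m, m))]])

lam, _ = eig(M, E)
print(lam[np.isfinite(lam)])               # finite zeros; inf entries are infinite zeros
```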
The method of finding system zeros via a generalized eigenvalue problem also works well for general multi-input, multi-output systems. Numerically, however, one must be careful first to "deflate out" the infinite zeros (infinite eigenvalues of (12.6)). This is accomplished by computing a certain unitary equivalence on the system pencil that then yields a smaller generalized eigenvalue problem with only finite generalized eigenvalues (the finite zeros).

The connection between system zeros and the corresponding system pencil is nontrivial. However, we offer some insight below into the special case of a single-input,
single-output system. Specifically, let $B = b \in \mathbb{R}^n$, $C = c^T \in \mathbb{R}^{1\times n}$, and $D = d \in \mathbb{R}$. Furthermore, let $g(s) = c^T(sI - A)^{-1}b + d$ denote the system transfer function (matrix), and assume that $g(s)$ can be written in the form

$g(s) = \frac{\nu(s)}{\pi(s)},$

where $\pi(s)$ is the characteristic polynomial of $A$, and $\nu(s)$ and $\pi(s)$ are relatively prime (i.e., there are no "pole/zero cancellations").

Suppose $z \in \mathbb{C}$ is such that

$\begin{bmatrix} A - zI & b \\ c^T & d \end{bmatrix}$

is singular. Then there exists a nonzero solution to

$\begin{bmatrix} A - zI & b \\ c^T & d \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = 0,$

or

$(A - zI)x + by = 0,$     (12.7)
$c^T x + dy = 0.$     (12.8)

Assuming $z$ is not an eigenvalue of $A$ (i.e., no pole/zero cancellations), then from (12.7) we get

$x = -(A - zI)^{-1} b y.$     (12.9)

Substituting this in (12.8), we have

$-c^T (A - zI)^{-1} b y + d y = 0,$

or $g(z) y = 0$ by the definition of $g$. Now $y \ne 0$ (else $x = 0$ from (12.9)). Hence $g(z) = 0$, i.e., $z$ is a zero of $g$.

12.4 Symmetric Generalized Eigenvalue Problems

A very important special case of the generalized eigenvalue problem

$Ax = \lambda Bx$     (12.10)

for $A, B \in \mathbb{R}^{n\times n}$ arises when $A = A^T$ and $B = B^T > 0$. For example, the second-order system of differential equations

$M\ddot{x} + Kx = 0,$

where $M$ is a symmetric positive definite "mass matrix" and $K$ is a symmetric "stiffness matrix," is a frequently employed model of structures or vibrating systems and yields a generalized eigenvalue problem of the form (12.10).

Since $B$ is positive definite it is nonsingular. Thus, the problem (12.10) is equivalent to the standard eigenvalue problem $B^{-1}Ax = \lambda x$. However, $B^{-1}A$ is not necessarily symmetric.
Example 12.16. Let $A = A^T$ and $B = B^T > 0$ be a specific $2 \times 2$ pair; forming $B^{-1}A$ directly shows that it is not symmetric.

Nevertheless, the eigenvalues of $B^{-1}A$ are always real (and are approximately 2.1926 and $-3.1926$ in Example 12.16).

Theorem 12.17. Let $A, B \in \mathbb{R}^{n\times n}$ with $A = A^T$ and $B = B^T > 0$. Then the generalized eigenvalue problem

$Ax = \lambda Bx$

has $n$ real eigenvalues, and the $n$ corresponding right eigenvectors can be chosen to be orthogonal with respect to the inner product $\langle x, y \rangle_B = x^T B y$. Moreover, if $A > 0$, then the eigenvalues are also all positive.

Proof: Since $B > 0$, it has a Cholesky factorization $B = LL^T$, where $L$ is nonsingular (Theorem 10.23). Then the eigenvalue problem

$Ax = \lambda Bx = \lambda LL^T x$

can be rewritten as the equivalent problem

$L^{-1} A L^{-T} (L^T x) = \lambda (L^T x).$     (12.11)

Letting $C = L^{-1} A L^{-T}$ and $z = L^T x$, (12.11) can then be rewritten as

$Cz = \lambda z.$     (12.12)

Since $C = C^T$, the eigenproblem (12.12) has $n$ real eigenvalues, with corresponding eigenvectors $z_1, \ldots, z_n$ satisfying

$z_i^T z_j = \delta_{ij}.$

Then $x_i = L^{-T} z_i$, $i \in \underline{n}$, are eigenvectors of the original generalized eigenvalue problem and satisfy

$\langle x_i, x_j \rangle_B = x_i^T B x_j = \left( z_i^T L^{-1} \right) \left( LL^T \right) \left( L^{-T} z_j \right) = \delta_{ij}.$

Finally, if $A = A^T > 0$, then $C = C^T > 0$, so the eigenvalues are positive. □

Example 12.18. Computing the Cholesky factor $L$ of the matrix $B$ in Example 12.16, it is easily checked that

$C = L^{-1} A L^{-T} = \begin{bmatrix} 0.5 & 2.5 \\ 2.5 & -1.5 \end{bmatrix},$

whose eigenvalues are approximately 2.1926 and $-3.1926$, as expected.
The material of this section can, of course, be generalized easily to the case where A
and B are Hermitian, but since realvalued matrices are commonly used in most applications,
we have restricted our attention to that case only.
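A brief computational sketch (not part of the original text) of the Cholesky reduction used in the proof of Theorem 12.17, compared with a generalized symmetric-definite eigensolver; the matrices are arbitrary examples and the use of Python with numpy/scipy is an assumption of this sketch.

```python
# Sketch (not from the text): reduce A x = lambda B x (A symmetric, B > 0)
# to the symmetric problem C z = lambda z with C = L^{-1} A L^{-T}, B = L L^T.
import numpy as np
from scipy.linalg import cholesky, eigvalsh

A = np.array([[1.0, 2.0], [2.0, 3.0]])      # symmetric example pair (assumption)
B = np.array([[2.0, 1.0], [1.0, 2.0]])      # symmetric positive definite

L = cholesky(B, lower=True)                 # B = L L^T
Linv = np.linalg.inv(L)
C = Linv @ A @ Linv.T                       # symmetric

lam_C = eigvalsh(C)                         # real eigenvalues of C
lam_gen = eigvalsh(A, B)                    # eigenvalues of A x = lambda B x
print(lam_C, lam_gen)                       # the two lists agree
```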
12.5 Simultaneous Diagonalization

Recall that many matrices can be diagonalized by a similarity. In particular, normal matrices can be diagonalized by a unitary similarity. It turns out that in some cases a pair of matrices $(A, B)$ can be simultaneously diagonalized by the same matrix. There are many such results and we present only a representative (but important and useful) theorem here. Again, we restrict our attention only to the real case, with the complex case following in a straightforward way.

Theorem 12.19 (Simultaneous Reduction to Diagonal Form). Let $A, B \in \mathbb{R}^{n\times n}$ with $A = A^T$ and $B = B^T > 0$. Then there exists a nonsingular matrix $Q$ such that

$Q^T A Q = D, \quad Q^T B Q = I,$

where $D$ is diagonal. In fact, the diagonal elements of $D$ are the eigenvalues of $B^{-1}A$.

Proof: Let $B = LL^T$ be the Cholesky factorization of $B$ and set $C = L^{-1} A L^{-T}$. Since $C$ is symmetric, there exists an orthogonal matrix $P$ such that $P^T C P = D$, where $D$ is diagonal. Let $Q = L^{-T} P$. Then

$Q^T A Q = P^T L^{-1} A L^{-T} P = P^T C P = D$

and

$Q^T B Q = P^T L^{-1} (LL^T) L^{-T} P = P^T P = I.$

Finally, since $Q D Q^{-1} = Q Q^T A Q Q^{-1} = L^{-T} P P^T L^{-1} A = L^{-T} L^{-1} A = B^{-1} A$, we have $\Lambda(D) = \Lambda(B^{-1}A)$. □

Note that $Q$ is not in general orthogonal, so it does not preserve eigenvalues of $A$ and $B$ individually. However, it does preserve the eigenvalues of $A - \lambda B$. This can be seen directly. Let $\tilde{A} = Q^T A Q$ and $\tilde{B} = Q^T B Q$. Then $\tilde{B}^{-1}\tilde{A} = Q^{-1} B^{-1} Q^{-T} Q^T A Q = Q^{-1} B^{-1} A Q$.

Theorem 12.19 is very useful for reducing many statements about pairs of symmetric matrices to "the diagonal case." The following is typical.

Theorem 12.20. Let $A, B \in \mathbb{R}^{n\times n}$ be positive definite. Then $A > B$ if and only if $B^{-1} > A^{-1}$.

Proof: By Theorem 12.19, there exists $Q \in \mathbb{R}^{n\times n}$ such that $Q^T A Q = D$ and $Q^T B Q = I$, where $D$ is diagonal. Now $D > 0$ by Theorem 10.31. Also, since $A > B$, by Theorem 10.21 we have that $Q^T A Q > Q^T B Q$, i.e., $D > I$. But then $D^{-1} < I$ (this is trivially true since the two matrices are diagonal). Thus, $Q D^{-1} Q^T < Q Q^T$, i.e., $A^{-1} < B^{-1}$. □

12.5.1 Simultaneous diagonalization via SVD

There are situations in which forming $C = L^{-1} A L^{-T}$ as in the proof of Theorem 12.19 is numerically problematic, e.g., when $L$ is highly ill conditioned with respect to inversion. In such cases, simultaneous reduction can also be accomplished via an SVD. To illustrate, let
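The construction in the proof of Theorem 12.19 translates directly into code. The following Python/NumPy sketch (an added illustration, not from the text) builds Q from the Cholesky factor of B and a symmetric eigendecomposition, and checks the conclusions of the theorem on random data.

    import numpy as np

    def simultaneous_diagonalization(A, B):
        # Q^T A Q = diag(d), Q^T B Q = I, as in the proof of Theorem 12.19.
        L = np.linalg.cholesky(B)                            # B = L L^T
        C = np.linalg.solve(L, np.linalg.solve(L, A).T).T    # C = L^{-1} A L^{-T}
        d, P = np.linalg.eigh(C)                             # P^T C P = diag(d)
        Q = np.linalg.solve(L.T, P)                          # Q = L^{-T} P
        return Q, d

    rng = np.random.default_rng(0)
    M = rng.standard_normal((4, 4))
    A = M + M.T                           # symmetric
    B = M @ M.T + 4 * np.eye(4)           # symmetric positive definite
    Q, d = simultaneous_diagonalization(A, B)
    assert np.allclose(Q.T @ B @ Q, np.eye(4))
    assert np.allclose(np.sort(d),
                       np.sort(np.linalg.eigvals(np.linalg.solve(B, A)).real))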
12.5.1 Simultaneous diagonalization via SVD

There are situations in which forming C = L^{-1}AL^{-T} as in the proof of Theorem 12.19 is numerically problematic, e.g., when L is highly ill conditioned with respect to inversion. In such cases, simultaneous reduction can also be accomplished via an SVD. To illustrate, let us assume that both A and B are positive definite. Further, let A = L_A L_A^T and B = L_B L_B^T be Cholesky factorizations of A and B, respectively. Compute the SVD

    L_B^{-1} L_A = U Σ V^T,                                               (12.13)

where Σ ∈ R^{n×n} is diagonal. Then the matrix Q = L_B^{-T} U performs the simultaneous diagonalization. To check this, note that

    Q^T A Q = U^T L_B^{-1} L_A L_A^T L_B^{-T} U = U^T U Σ V^T V Σ U^T U = Σ²

while

    Q^T B Q = U^T L_B^{-1} L_B L_B^T L_B^{-T} U = U^T U = I.

Remark 12.21. The SVD in (12.13) can be computed without explicitly forming the indicated matrix product or the inverse by using the so-called generalized singular value decomposition (GSVD). Note that

    (L_B^{-1} L_A)(L_B^{-1} L_A)^T = L_B^{-1} A L_B^{-T}

and thus the singular values of L_B^{-1}L_A can be found from the eigenvalue problem

    L_B^{-1} A L_B^{-T} z = λz.                                           (12.14)

Letting x = L_B^{-T} z we see that (12.14) can be rewritten in the form L_A L_A^T x = λL_B z = λL_B L_B^T x, which is thus equivalent to the generalized eigenvalue problem

    L_A L_A^T x = λ L_B L_B^T x.                                          (12.15)

The problem (12.15) is called a generalized singular value problem and algorithms exist to solve it (and hence equivalently (12.13)) via arithmetic operations performed only on L_A and L_B separately, i.e., without forming the products L_A L_A^T or L_B L_B^T explicitly; see, for example, [7, Sec. 8.7.3]. This is analogous to finding the singular values of a matrix M by operations performed directly on M rather than by forming the matrix M^T M and solving the eigenproblem M^T Mx = λx.

Remark 12.22. Various generalizations of the results in Remark 12.21 are possible, for example, when A = A^T ≥ 0. The case when A is symmetric but indefinite is not so straightforward, at least in real arithmetic. For example, A can be written as A = PDP^T, where D is diagonal and P is orthogonal, but in writing A = PD̃D̃P^T = PD̃(PD̃)^T with D̃ diagonal, D̃ may have pure imaginary elements.
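The SVD-based reduction of this subsection is equally short in code. The sketch below (added for illustration; note that it forms L_B^{-1}L_A explicitly, which is exactly what a true GSVD implementation would avoid) verifies that Q = L_B^{-T}U gives Q^T A Q = Σ² and Q^T B Q = I.

    import numpy as np

    rng = np.random.default_rng(1)
    M = rng.standard_normal((5, 5))
    N = rng.standard_normal((5, 5))
    A = M @ M.T + np.eye(5)               # symmetric positive definite
    B = N @ N.T + np.eye(5)               # symmetric positive definite

    LA = np.linalg.cholesky(A)            # A = L_A L_A^T
    LB = np.linalg.cholesky(B)            # B = L_B L_B^T
    U, s, Vt = np.linalg.svd(np.linalg.solve(LB, LA))   # L_B^{-1} L_A = U Sigma V^T, cf. (12.13)
    Q = np.linalg.solve(LB.T, U)          # Q = L_B^{-T} U

    assert np.allclose(Q.T @ A @ Q, np.diag(s**2))       # Q^T A Q = Sigma^2
    assert np.allclose(Q.T @ B @ Q, np.eye(5))           # Q^T B Q = I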
12.6 Higher-Order Eigenvalue Problems

Consider the second-order system of differential equations

    M q̈ + C q̇ + K q = 0,                                                 (12.16)

where q(t) ∈ R^n and M, C, K ∈ R^{n×n}. Assume for simplicity that M is nonsingular. Suppose, by analogy with the first-order case, that we try to find a solution of (12.16) of the form q(t) = e^{λt} p, where the n-vector p and scalar λ are to be determined. Substituting in (12.16) we get

    λ²e^{λt}Mp + λe^{λt}Cp + e^{λt}Kp = 0

or, since e^{λt} ≠ 0,

    (λ²M + λC + K)p = 0.

To get a nonzero solution p, we thus seek values of λ for which the matrix λ²M + λC + K is singular. Since the determinantal equation

    0 = det(λ²M + λC + K) = λ^{2n} + ...

yields a polynomial of degree 2n, there are 2n eigenvalues for the second-order (or quadratic) eigenvalue problem λ²M + λC + K.

A special case of (12.16) arises frequently in applications: M = I, C = 0, and K = K^T. Suppose K has eigenvalues

    μ_1 ≥ ... ≥ μ_r ≥ 0 > μ_{r+1} ≥ ... ≥ μ_n.

Let ω_k = |μ_k|^{1/2}. Then the 2n eigenvalues of the second-order eigenvalue problem λ²I + K are

    ±jω_k ;   k = 1, ..., r,
    ±ω_k ;    k = r + 1, ..., n.

If r = n (i.e., K = K^T ≥ 0), then all solutions of q̈ + Kq = 0 are oscillatory.

12.6.1 Conversion to first-order form

Let x_1 = q and x_2 = q̇. Then (12.16) can be written as a first-order system (with block companion matrix)

    ẋ = [     0          I     ] x,
        [-M^{-1}K    -M^{-1}C  ]

where x(t) ∈ R^{2n}. If M is singular, or if it is desired to avoid the calculation of M^{-1} because M is too ill conditioned with respect to inversion, the second-order problem (12.16) can still be converted to the first-order generalized linear system

    [I  0] ẋ = [ 0   I] x.
    [0  M]     [-K  -C]
Many other first-order realizations are possible. Some can be useful when M, C, and/or K have special symmetry or skew-symmetry properties that can be exploited.

Higher-order analogues of (12.16) involving, say, the kth derivative of q, lead naturally to higher-order eigenvalue problems that can be converted to first-order form using a kn × kn block companion matrix analogue of (11.19). Similar procedures hold for the general kth-order difference equation, which can be converted to various first-order systems of dimension kn.
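As a small illustration of the conversion just described (added here, not part of the original text), the following Python/SciPy sketch solves a quadratic eigenvalue problem through the generalized first-order form [I 0; 0 M]ẋ = [0 I; -K -C]x, so that M is never inverted.

    import numpy as np
    from scipy.linalg import eig

    n = 3
    rng = np.random.default_rng(2)
    M = np.eye(n) + 0.1 * rng.standard_normal((n, n))
    C = rng.standard_normal((n, n))
    K = rng.standard_normal((n, n))

    Z, I = np.zeros((n, n)), np.eye(n)
    A_pencil = np.block([[Z, I], [-K, -C]])      # right-hand side matrix
    B_pencil = np.block([[I, Z], [Z, M]])        # left-hand side matrix (never inverted)

    lam = eig(A_pencil, B_pencil, right=False)   # the 2n eigenvalues of the pencil
    for l in lam:
        # each lambda makes lambda^2 M + lambda C + K singular
        s = np.linalg.svd(l**2 * M + l * C + K, compute_uv=False)
        assert s[-1] < 1e-8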
EXERCISES

1. Suppose A ∈ R^{n×n} and D ∈ R^{m×m} is nonsingular. Show that the finite generalized eigenvalues of the pencil

       [A  B] − λ [I  0]
       [C  D]     [0  0]

   are the eigenvalues of the matrix A − BD^{-1}C.

2. Let F, G ∈ C^{n×n}. Show that the nonzero eigenvalues of FG and GF are the same.
   Hint: An easy "trick proof" is to verify that the matrices

       [FG  0]   and   [0   0 ]
       [G   0]         [G   GF]

   are similar via the similarity transformation

       [I  F]
       [0  I].

3. Let F ∈ C^{n×m}, G ∈ C^{m×n}. Are the nonzero singular values of FG and GF the same?

4. Suppose A ∈ R^{n×n}, B ∈ R^{n×m}, and C ∈ R^{m×n}. Show that the generalized eigenvalues of the pencils

       [A  B] − λ [I  0]
       [C  0]     [0  0]

   and

       [A + BF + GC   B] − λ [I  0]
       [     C        0]     [0  0]

   are identical for all F ∈ R^{m×n} and all G ∈ R^{n×m}.
   Hint: Consider the equivalence

       [I  G] [A − λI  B] [I  0]
       [0  I] [  C     0] [F  I].

   (A similar result is also true for "nonsquare" pencils. In the parlance of control theory, such results show that zeros are invariant under state feedback or output injection.)
5. Another family of simultaneous diagonalization problems arises when it is desired that the simultaneous diagonalizing transformation Q operates on matrices A, B ∈ R^{n×n} in such a way that Q^{-1}AQ^{-T} and Q^T BQ are simultaneously diagonal. Such a transformation is called contragredient. Consider the case where both A and B are positive definite with Cholesky factorizations A = L_A L_A^T and B = L_B L_B^T, respectively, and let UΣV^T be an SVD of L_B^T L_A.

   (a) Show that Q = L_A V Σ^{-1/2} is a contragredient transformation that reduces both A and B to the same diagonal matrix Σ.

   (b) Show that Q^{-1} = Σ^{-1/2} U^T L_B^T.

   (c) Show that the eigenvalues of AB are the same as those of Σ² and hence are positive.
Chapter 13

Kronecker Products

13.1 Definition and Examples

Definition 13.1. Let A ∈ R^{m×n}, B ∈ R^{p×q}. Then the Kronecker product (or tensor product) of A and B is defined as the matrix

    A ⊗ B = [a_11 B  ...  a_1n B]
            [  .            .   ]  ∈ R^{mp×nq}.                           (13.1)
            [a_m1 B  ...  a_mn B]

Obviously, the same definition holds if A and B are complex-valued matrices. We restrict our attention in this chapter primarily to real-valued matrices, pointing out the extension to the complex case only where it is not obvious.

Example 13.2.

1. For 2 × 2 matrices A and B,

       A ⊗ B = [a_11 B   a_12 B]
               [a_21 B   a_22 B]

   is 4 × 4. Note that B ⊗ A ≠ A ⊗ B in general.

2. For any B ∈ R^{p×q}, I_2 ⊗ B = [B  0; 0  B]. Replacing I_2 by I_n yields a block diagonal matrix with n copies of B along the diagonal.

3. Let B be an arbitrary 2 × 2 matrix. Then

       B ⊗ I_2 = [b_11    0    b_12    0  ]
                 [  0   b_11     0   b_12 ]
                 [b_21    0    b_22    0  ]
                 [  0   b_21     0   b_22 ].
The extension to arbitrary B and I_n is obvious.

4. Let x ∈ R^m, y ∈ R^n. Then

       x ⊗ y = [x_1 y^T, ..., x_m y^T]^T = [x_1 y_1, ..., x_1 y_n, x_2 y_1, ..., x_m y_n]^T ∈ R^{mn}.

5. Let x ∈ R^m, y ∈ R^n. Then

       x ⊗ y^T = x y^T ∈ R^{m×n}.
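NumPy's kron implements Definition 13.1 directly; the following short sketch (an added illustration with an arbitrarily chosen B, not the matrices of the text's examples) reproduces the patterns of Example 13.2.

    import numpy as np

    B = np.array([[2, 1],
                  [2, 3]])                       # an arbitrary 2 x 2 matrix

    print(np.kron(np.eye(2), B))                 # I_2 (x) B: block diagonal, two copies of B
    print(np.kron(B, np.eye(2)))                 # B (x) I_2: the pattern of item 3

    x = np.array([1, 2, 3])                      # x in R^3
    y = np.array([4, 5])                         # y in R^2
    print(np.kron(x, y))                         # [x_1 y, x_2 y, x_3 y] stacked (item 4)
    print(np.allclose(np.kron(x[:, None], y[None, :]), np.outer(x, y)))   # x (x) y^T = x y^T (item 5)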
13.2 Properties of the Kronecker Product

Theorem 13.3. Let A ∈ R^{m×n}, B ∈ R^{r×s}, C ∈ R^{n×p}, and D ∈ R^{s×t}. Then

    (A ⊗ B)(C ⊗ D) = AC ⊗ BD   (∈ R^{mr×pt}).                             (13.2)

Proof: Simply verify that the (i, j)th block of (A ⊗ B)(C ⊗ D) is (Σ_k a_ik c_kj) BD, which is precisely the (i, j)th block of AC ⊗ BD.  □

Theorem 13.4. For all A and B, (A ⊗ B)^T = A^T ⊗ B^T.

Proof: For the proof, simply verify using the definitions of transpose and Kronecker product.  □

Corollary 13.5. If A ∈ R^{n×n} and B ∈ R^{m×m} are symmetric, then A ⊗ B is symmetric.

Theorem 13.6. If A and B are nonsingular, (A ⊗ B)^{-1} = A^{-1} ⊗ B^{-1}.

Proof: Using Theorem 13.3, simply note that (A ⊗ B)(A^{-1} ⊗ B^{-1}) = I ⊗ I = I.  □
Theorem 13.7. If A ∈ R^{n×n} and B ∈ R^{m×m} are normal, then A ⊗ B is normal.

Proof:

    (A ⊗ B)^T(A ⊗ B) = (A^T ⊗ B^T)(A ⊗ B)     by Theorem 13.4
                     = A^T A ⊗ B^T B           by Theorem 13.3
                     = AA^T ⊗ BB^T             since A and B are normal
                     = (A ⊗ B)(A ⊗ B)^T        by Theorem 13.3.  □

Corollary 13.8. If A ∈ R^{n×n} is orthogonal and B ∈ R^{m×m} is orthogonal, then A ⊗ B is orthogonal.

Example 13.9. Let

    A = [ cos θ   sin θ]        B = [ cos φ   sin φ]
        [-sin θ   cos θ]   and      [-sin φ   cos φ].

Then it is easily seen that A is orthogonal with eigenvalues e^{±jθ} and B is orthogonal with eigenvalues e^{±jφ}. The 4 × 4 matrix A ⊗ B is then also orthogonal with eigenvalues e^{±j(θ+φ)} and e^{±j(θ−φ)}.

Theorem 13.10. Let A ∈ R^{m×n} have a singular value decomposition U_A Σ_A V_A^T and let B ∈ R^{p×q} have a singular value decomposition U_B Σ_B V_B^T. Then

    (U_A ⊗ U_B)(Σ_A ⊗ Σ_B)(V_A ⊗ V_B)^T

yields a singular value decomposition of A ⊗ B (after a simple reordering of the diagonal elements of Σ_A ⊗ Σ_B and the corresponding right and left singular vectors).

Corollary 13.11. Let A ∈ R_r^{m×n} have singular values σ_1 ≥ ... ≥ σ_r > 0 and let B ∈ R_s^{p×q} have singular values τ_1 ≥ ... ≥ τ_s > 0. Then A ⊗ B (or B ⊗ A) has rs singular values σ_1τ_1 ≥ ... ≥ σ_rτ_s > 0 and

    rank(A ⊗ B) = (rank A)(rank B) = rank(B ⊗ A).

Theorem 13.12. Let A ∈ R^{n×n} have eigenvalues λ_i, i ∈ n, and let B ∈ R^{m×m} have eigenvalues μ_j, j ∈ m. Then the mn eigenvalues of A ⊗ B are

    λ_1μ_1, ..., λ_1μ_m, λ_2μ_1, ..., λ_2μ_m, ..., λ_nμ_m.

Moreover, if x_1, ..., x_p are linearly independent right eigenvectors of A corresponding to λ_1, ..., λ_p (p ≤ n), and z_1, ..., z_q are linearly independent right eigenvectors of B corresponding to μ_1, ..., μ_q (q ≤ m), then x_i ⊗ z_j ∈ R^{mn} are linearly independent right eigenvectors of A ⊗ B corresponding to λ_iμ_j, i ∈ p, j ∈ q.

Proof: The basic idea of the proof is as follows:

    (A ⊗ B)(x ⊗ z) = Ax ⊗ Bz
                   = λx ⊗ μz
                   = λμ(x ⊗ z).  □
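Theorems 13.10 and 13.12 and Corollary 13.11 are easy to confirm numerically; the following sketch (added for illustration, not from the text) checks that the eigenvalues and singular values of A ⊗ B are the pairwise products of those of A and B.

    import numpy as np

    rng = np.random.default_rng(3)
    A = rng.standard_normal((3, 3))
    B = rng.standard_normal((2, 2))

    # Theorem 13.12: eigenvalues of A (x) B are the products lambda_i * mu_j.
    eig_AB = np.linalg.eigvals(np.kron(A, B))
    products = np.kron(np.linalg.eigvals(A), np.linalg.eigvals(B))
    assert np.allclose(np.sort_complex(eig_AB), np.sort_complex(products))

    # Theorem 13.10 / Corollary 13.11: singular values are the products sigma_i * tau_j.
    sv_AB = np.linalg.svd(np.kron(A, B), compute_uv=False)
    sv_products = np.sort(np.kron(np.linalg.svd(A, compute_uv=False),
                                  np.linalg.svd(B, compute_uv=False)))[::-1]
    assert np.allclose(sv_AB, sv_products)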
If A and B are diagonalizable in Theorem 13.12, we can take p = n and q = m and thus get the complete eigenstructure of A ⊗ B. In general, if A and B have Jordan form decompositions given by P^{-1}AP = J_A and Q^{-1}BQ = J_B, respectively, then we get the following Jordan-like structure:

    (P ⊗ Q)^{-1}(A ⊗ B)(P ⊗ Q) = (P^{-1} ⊗ Q^{-1})(A ⊗ B)(P ⊗ Q)
                               = (P^{-1}AP) ⊗ (Q^{-1}BQ)
                               = J_A ⊗ J_B.

Note that J_A ⊗ J_B, while upper triangular, is generally not quite in Jordan form and needs further reduction (to an ultimate Jordan form that also depends on whether or not certain eigenvalues are zero or nonzero).

A Schur form for A ⊗ B can be derived similarly. For example, suppose P and Q are unitary matrices that reduce A and B, respectively, to Schur (triangular) form, i.e., P^H AP = T_A and Q^H BQ = T_B (and similarly if P and Q are orthogonal similarities reducing A and B to real Schur form). Then

    (P ⊗ Q)^H(A ⊗ B)(P ⊗ Q) = (P^H ⊗ Q^H)(A ⊗ B)(P ⊗ Q)
                            = (P^H AP) ⊗ (Q^H BQ)
                            = T_A ⊗ T_B.

Corollary 13.13. Let A ∈ R^{n×n} and B ∈ R^{m×m}. Then

1. Tr(A ⊗ B) = (Tr A)(Tr B) = Tr(B ⊗ A).

2. det(A ⊗ B) = (det A)^m (det B)^n = det(B ⊗ A).

Definition 13.14. Let A ∈ R^{n×n} and B ∈ R^{m×m}. Then the Kronecker sum (or tensor sum) of A and B, denoted A ⊕ B, is the mn × mn matrix (I_m ⊗ A) + (B ⊗ I_n). Note that, in general, A ⊕ B ≠ B ⊕ A.

Example 13.15.

1. Let

       A = [1  2  3]            B = [2  1]
           [3  2  1]    and         [2  3].
           [1  1  4]

   Then

       A ⊕ B = (I_2 ⊗ A) + (B ⊗ I_3)

             = [1 2 3 0 0 0]   [2 0 0 1 0 0]
               [3 2 1 0 0 0]   [0 2 0 0 1 0]
               [1 1 4 0 0 0] + [0 0 2 0 0 1]
               [0 0 0 1 2 3]   [2 0 0 3 0 0]
               [0 0 0 3 2 1]   [0 2 0 0 3 0]
               [0 0 0 1 1 4]   [0 0 2 0 0 3].

   The reader is invited to compute B ⊕ A = (I_3 ⊗ B) + (A ⊗ I_2) and note the difference with A ⊕ B.
2. Recall the real JCF

       J = [M  I_2   0  ...   0 ]
           [0   M   I_2 ...   0 ]
           [          ...        ]   ∈ R^{2k×2k},
           [0  ...        M  I_2]
           [0  ...        0   M ]

   where M = [α  β; −β  α]. Define

       E_k = [0  1  0  ...  0]
             [0  0  1  ...  0]
             [      ...      ]   ∈ R^{k×k}.
             [0  ...     0  1]
             [0  ...     0  0]

   Then J can be written in the very compact form J = (I_k ⊗ M) + (E_k ⊗ I_2) = M ⊕ E_k.

Theorem 13.16. Let A ∈ R^{n×n} have eigenvalues λ_i, i ∈ n, and let B ∈ R^{m×m} have eigenvalues μ_j, j ∈ m. Then the Kronecker sum A ⊕ B = (I_m ⊗ A) + (B ⊗ I_n) has mn eigenvalues

    λ_1 + μ_1, ..., λ_1 + μ_m, λ_2 + μ_1, ..., λ_2 + μ_m, ..., λ_n + μ_m.

Moreover, if x_1, ..., x_p are linearly independent right eigenvectors of A corresponding to λ_1, ..., λ_p (p ≤ n), and z_1, ..., z_q are linearly independent right eigenvectors of B corresponding to μ_1, ..., μ_q (q ≤ m), then z_j ⊗ x_i ∈ R^{mn} are linearly independent right eigenvectors of A ⊕ B corresponding to λ_i + μ_j, i ∈ p, j ∈ q.

Proof: The basic idea of the proof is as follows:

    [(I_m ⊗ A) + (B ⊗ I_n)](z ⊗ x) = (z ⊗ Ax) + (Bz ⊗ x)
                                   = (z ⊗ λx) + (μz ⊗ x)
                                   = (λ + μ)(z ⊗ x).  □

If A and B are diagonalizable in Theorem 13.16, we can take p = n and q = m and thus get the complete eigenstructure of A ⊕ B. In general, if A and B have Jordan form decompositions given by P^{-1}AP = J_A and Q^{-1}BQ = J_B, respectively, then

    [(Q ⊗ I_n)(I_m ⊗ P)]^{-1}[(I_m ⊗ A) + (B ⊗ I_n)][(Q ⊗ I_n)(I_m ⊗ P)]
        = [(I_m ⊗ P)^{-1}(Q ⊗ I_n)^{-1}][(I_m ⊗ A) + (B ⊗ I_n)][(Q ⊗ I_n)(I_m ⊗ P)]
        = [(I_m ⊗ P^{-1})(Q^{-1} ⊗ I_n)][(I_m ⊗ A) + (B ⊗ I_n)][(Q ⊗ I_n)(I_m ⊗ P)]
        = (I_m ⊗ J_A) + (J_B ⊗ I_n)

is a Jordan-like structure for A ⊕ B.
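Definition 13.14 and Theorem 13.16 can be exercised with a few lines of NumPy (added here as an illustration, not from the text):

    import numpy as np

    def kron_sum(A, B):
        # A (+) B = (I_m (x) A) + (B (x) I_n), following Definition 13.14.
        n, m = A.shape[0], B.shape[0]
        return np.kron(np.eye(m), A) + np.kron(B, np.eye(n))

    rng = np.random.default_rng(4)
    A = rng.standard_normal((3, 3))
    B = rng.standard_normal((2, 2))

    # Theorem 13.16: the eigenvalues of A (+) B are all sums lambda_i + mu_j.
    sums = (np.linalg.eigvals(A)[:, None] + np.linalg.eigvals(B)[None, :]).ravel()
    assert np.allclose(np.sort_complex(np.linalg.eigvals(kron_sum(A, B))),
                       np.sort_complex(sums))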
A Schur form for A ⊕ B can be derived similarly. Again, suppose P and Q are unitary matrices that reduce A and B, respectively, to Schur (triangular) form, i.e., P^H AP = T_A and Q^H BQ = T_B (and similarly if P and Q are orthogonal similarities reducing A and B to real Schur form). Then

    [(Q ⊗ I_n)(I_m ⊗ P)]^H[(I_m ⊗ A) + (B ⊗ I_n)][(Q ⊗ I_n)(I_m ⊗ P)] = (I_m ⊗ T_A) + (T_B ⊗ I_n),

where [(Q ⊗ I_n)(I_m ⊗ P)] = (Q ⊗ P) is unitary by Theorem 13.3 and Corollary 13.8.

13.3 Application to Sylvester and Lyapunov Equations

In this section we study the linear matrix equation

    AX + XB = C,                                                          (13.3)

where A ∈ R^{n×n}, B ∈ R^{m×m}, and C ∈ R^{n×m}. This equation is now often called a Sylvester equation in honor of J.J. Sylvester who studied general linear matrix equations of the form

    Σ_{i=1}^{k} A_i X B_i = C.

A special case of (13.3) is the symmetric equation

    AX + XA^T = C                                                         (13.4)

obtained by taking B = A^T. When C is symmetric, the solution X ∈ R^{n×n} is easily shown also to be symmetric and (13.4) is known as a Lyapunov equation. Lyapunov equations arise naturally in stability theory.

The first important question to ask regarding (13.3) is, When does a solution exist? By writing the matrices in (13.3) in terms of their columns, it is easily seen by equating the ith columns that

    Ax_i + Xb_i = c_i = Ax_i + Σ_{j=1}^{m} b_{ji} x_j.

These equations can then be rewritten as the mn × mn linear system

    [A + b_11 I     b_21 I     ...      b_m1 I  ] [x_1]   [c_1]
    [  b_12 I     A + b_22 I   ...      b_m2 I  ] [x_2]   [c_2]
    [    .             .        .          .    ] [ . ] = [ . ]           (13.5)
    [  b_1m I       b_2m I     ...   A + b_mm I ] [x_m]   [c_m].

The coefficient matrix in (13.5) clearly can be written as the Kronecker sum (I_m ⊗ A) + (B^T ⊗ I_n). The following definition is very helpful in completing the writing of (13.5) as an "ordinary" linear system.
Definition 13.17. Let c_i ∈ R^n denote the columns of C ∈ R^{n×m} so that C = [c_1, ..., c_m]. Then vec(C) is defined to be the mn-vector formed by stacking the columns of C on top of one another, i.e.,

    vec(C) = [c_1]
             [c_2]
             [ . ]  ∈ R^{mn}.
             [c_m]

Using Definition 13.17, the linear system (13.5) can be rewritten in the form

    [(I_m ⊗ A) + (B^T ⊗ I_n)] vec(X) = vec(C).                            (13.6)

There exists a unique solution to (13.6) if and only if [(I_m ⊗ A) + (B^T ⊗ I_n)] is nonsingular. But [(I_m ⊗ A) + (B^T ⊗ I_n)] is nonsingular if and only if it has no zero eigenvalues. From Theorem 13.16, the eigenvalues of [(I_m ⊗ A) + (B^T ⊗ I_n)] are λ_i + μ_j, where λ_i ∈ Λ(A), i ∈ n, and μ_j ∈ Λ(B), j ∈ m. We thus have the following theorem.

Theorem 13.18. Let A ∈ R^{n×n}, B ∈ R^{m×m}, and C ∈ R^{n×m}. Then the Sylvester equation

    AX + XB = C                                                           (13.7)

has a unique solution if and only if A and −B have no eigenvalues in common.

Sylvester equations of the form (13.3) (or symmetric Lyapunov equations of the form (13.4)) are generally not solved using the mn × mn "vec" formulation (13.6). The most commonly preferred numerical algorithm is described in [2]. First A and B are reduced to (real) Schur form. An equivalent linear system is then solved in which the triangular form of the reduced A and B can be exploited to solve successively for the columns of a suitably transformed solution matrix X. Assuming that, say, n ≥ m, this algorithm takes only O(n³) operations rather than the O(n⁶) that would be required by solving (13.6) directly with Gaussian elimination. A further enhancement to this algorithm is available in [6] whereby the larger of A or B is initially reduced only to upper Hessenberg rather than triangular Schur form.
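In practice one calls a library implementation of the Schur-based algorithm of [2] rather than forming (13.6); SciPy exposes it as solve_sylvester. The sketch below (added for illustration, not from the text) compares that route with the mn × mn vec formulation on a small example.

    import numpy as np
    from scipy.linalg import solve_sylvester

    rng = np.random.default_rng(5)
    n, m = 4, 3
    A = rng.standard_normal((n, n))
    B = rng.standard_normal((m, m))
    C = rng.standard_normal((n, m))

    X1 = solve_sylvester(A, B, C)                          # Schur-based algorithm of [2]

    # vec formulation (13.6): [(I_m (x) A) + (B^T (x) I_n)] vec(X) = vec(C).
    K = np.kron(np.eye(m), A) + np.kron(B.T, np.eye(n))
    X2 = np.linalg.solve(K, C.flatten(order="F")).reshape((n, m), order="F")

    assert np.allclose(X1, X2)
    assert np.allclose(A @ X1 + X1 @ B, C)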
The next few theorems are classical. They culminate in Theorem 13.24, one of many elegant connections between matrix theory and stability theory for differential equations.

Theorem 13.19. Let A ∈ R^{n×n}, B ∈ R^{m×m}, and C ∈ R^{n×m}. Suppose further that A and B are asymptotically stable (a matrix is asymptotically stable if all its eigenvalues have real parts in the open left half-plane). Then the (unique) solution of the Sylvester equation

    AX + XB = C                                                           (13.8)

can be written as

    X = −∫_0^{+∞} e^{tA} C e^{tB} dt.                                     (13.9)

Proof: Since A and B are stable, λ_i(A) + λ_j(B) ≠ 0 for all i, j so there exists a unique solution to (13.8) by Theorem 13.18. Now integrate the differential equation Ẋ = AX + XB (with X(0) = C) on [0, +∞):

    lim_{t→+∞} X(t) − X(0) = A ∫_0^{+∞} X(t) dt + (∫_0^{+∞} X(t) dt) B.   (13.10)
Using the results of Section 11.1.6, it can be shown easily that lim_{t→+∞} e^{tA} = lim_{t→+∞} e^{tB} = 0. Hence, using the solution X(t) = e^{tA}Ce^{tB} from Theorem 11.6, we have that lim_{t→+∞} X(t) = 0. Substituting in (13.10) we have

    −C = A (∫_0^{+∞} e^{tA}Ce^{tB} dt) + (∫_0^{+∞} e^{tA}Ce^{tB} dt) B

and so X = −∫_0^{+∞} e^{tA}Ce^{tB} dt satisfies (13.8).  □

Remark 13.20. An equivalent condition for the existence of a unique solution to AX + XB = C is that

    [A   C]    be similar to    [A   0 ]    (via the similarity [I  −X]).
    [0  −B]                     [0  −B]                         [0   I]

Theorem 13.21. Let A, C ∈ R^{n×n}. Then the Lyapunov equation

    AX + XA^T = C                                                         (13.11)

has a unique solution if and only if A and −A^T have no eigenvalues in common. If C is symmetric and (13.11) has a unique solution, then that solution is symmetric.

Remark 13.22. If the matrix A ∈ R^{n×n} has eigenvalues λ_1, ..., λ_n, then −A^T has eigenvalues −λ_1, ..., −λ_n. Thus, a sufficient condition that guarantees that A and −A^T have no common eigenvalues is that A be asymptotically stable. Many useful results exist concerning the relationship between stability and Lyapunov equations. Two basic results due to Lyapunov are the following, the first of which follows immediately from Theorem 13.19.

Theorem 13.23. Let A, C ∈ R^{n×n} and suppose further that A is asymptotically stable. Then the (unique) solution of the Lyapunov equation

    AX + XA^T = C

can be written as

    X = −∫_0^{+∞} e^{tA} C e^{tA^T} dt.                                   (13.12)

Theorem 13.24. A matrix A ∈ R^{n×n} is asymptotically stable if and only if there exists a positive definite solution to the Lyapunov equation

    AX + XA^T = C,                                                        (13.13)

where C = C^T < 0.

Proof: Suppose A is asymptotically stable. By Theorems 13.21 and 13.23 a solution to (13.13) exists and takes the form (13.12). Now let v be an arbitrary nonzero vector in R^n. Then

    v^T X v = −∫_0^{+∞} v^T e^{tA} C e^{tA^T} v dt.
Since −C > 0 and e^{tA} is nonsingular for all t, the integrand above is positive. Hence v^T X v > 0 and thus X is positive definite.

Conversely, suppose X = X^T > 0 and let λ ∈ Λ(A) with corresponding left eigenvector y. Then

    0 > y^H C y = y^H A X y + y^H X A^T y
                = (λ + λ̄) y^H X y.

Since y^H X y > 0, we must have λ + λ̄ = 2 Re λ < 0. Since λ was arbitrary, A must be asymptotically stable.  □

Remark 13.25. The Lyapunov equation AX + XA^T = C can also be written using the vec notation in the equivalent form

    [(I ⊗ A) + (A ⊗ I)] vec(X) = vec(C).

A subtle point arises when dealing with the "dual" Lyapunov equation A^T X + XA = C. The equivalent "vec form" of this equation is

    [(I ⊗ A^T) + (A^T ⊗ I)] vec(X) = vec(C).

However, the complex-valued equation A^H X + XA = C is equivalent to

    [(I ⊗ A^H) + (A^T ⊗ I)] vec(X) = vec(C).
The vec operator has many useful properties, most of which derive from one key result.

Theorem 13.26. For any three matrices A, B, and C for which the matrix product ABC is defined,

    vec(ABC) = (C^T ⊗ A) vec(B).

Proof: The proof follows in a fairly straightforward fashion either directly from the definitions or from the fact that vec(xy^T) = y ⊗ x.  □

An immediate application is to the derivation of existence and uniqueness conditions for the solution of the simple Sylvester-like equation introduced in Theorem 6.11.

Theorem 13.27. Let A ∈ R^{m×n}, B ∈ R^{p×q}, and C ∈ R^{m×q}. Then the equation

    AXB = C                                                               (13.14)

has a solution X ∈ R^{n×p} if and only if AA^+CB^+B = C, in which case the general solution is of the form

    X = A^+CB^+ + Y − A^+AYBB^+,                                          (13.15)

where Y ∈ R^{n×p} is arbitrary. The solution of (13.14) is unique if BB^+ ⊗ A^+A = I.

Proof: Write (13.14) as

    (B^T ⊗ A) vec(X) = vec(C)                                             (13.16)
by Theorem 13.26. This "vector equation" has a solution if and only if

    (B^T ⊗ A)(B^T ⊗ A)^+ vec(C) = vec(C).

It is a straightforward exercise to show that (M ⊗ N)^+ = M^+ ⊗ N^+. Thus, (13.16) has a solution if and only if

    vec(C) = (B^T ⊗ A)((B^+)^T ⊗ A^+) vec(C)
           = [(B^+B)^T ⊗ AA^+] vec(C)
           = vec(AA^+CB^+B)

and hence if and only if AA^+CB^+B = C.

The general solution of (13.16) is then given by

    vec(X) = (B^T ⊗ A)^+ vec(C) + [I − (B^T ⊗ A)^+(B^T ⊗ A)] vec(Y),

where Y is arbitrary. This equation can then be rewritten in the form

    vec(X) = ((B^+)^T ⊗ A^+) vec(C) + [I − (BB^+)^T ⊗ A^+A] vec(Y)

or, using Theorem 13.26,

    X = A^+CB^+ + Y − A^+AYBB^+.

The solution is clearly unique if BB^+ ⊗ A^+A = I.  □
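Theorems 13.26 and 13.27 can be tried out directly with NumPy (an added sketch, not from the text): the vec identity is checked first, then the solvability condition and the particular solution X = A^+CB^+.

    import numpy as np

    rng = np.random.default_rng(7)
    A = rng.standard_normal((4, 2))           # full column rank (generically)
    B = rng.standard_normal((3, 5))           # full row rank (generically)
    X0 = rng.standard_normal((2, 3))
    C = A @ X0 @ B                            # a consistent right-hand side

    vec = lambda M: M.flatten(order="F")      # column-stacking vec operator
    assert np.allclose(vec(A @ X0 @ B), np.kron(B.T, A) @ vec(X0))   # Theorem 13.26

    Ap, Bp = np.linalg.pinv(A), np.linalg.pinv(B)
    assert np.allclose(A @ Ap @ C @ Bp @ B, C)     # solvability condition of Theorem 13.27
    X = Ap @ C @ Bp                                # particular solution (Y = 0 in (13.15))
    assert np.allclose(A @ X @ B, C)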
EXERCISES

1. For any two matrices A and B for which the indicated matrix product is defined, show that (vec(A))^T(vec(B)) = Tr(A^T B). In particular, if B ∈ R^{n×n}, then Tr(B) = vec(I_n)^T vec(B).

2. Prove that for all matrices A and B, (A ⊗ B)^+ = A^+ ⊗ B^+.

3. Show that the equation AXB = C has a solution for all C if A has full row rank and B has full column rank. Also, show that a solution, if it exists, is unique if A has full column rank and B has full row rank. What is the solution in this case?

4. Show that the general linear equation

       Σ_{i=1}^{k} A_i X B_i = C

   can be written in the form

       [B_1^T ⊗ A_1 + ... + B_k^T ⊗ A_k] vec(X) = vec(C).
5. Let x ∈ R^m and y ∈ R^n. Show that x^T ⊗ y = yx^T.

6. Let A ∈ R^{n×n} and B ∈ R^{m×m}.

   (a) Show that ||A ⊗ B||_2 = ||A||_2 ||B||_2.

   (b) What is ||A ⊗ B||_F in terms of the Frobenius norms of A and B? Justify your answer carefully.

   (c) What is the spectral radius of A ⊗ B in terms of the spectral radii of A and B? Justify your answer carefully.

7. Let A, B ∈ R^{n×n}.

   (a) Show that (I ⊗ A)^k = I ⊗ A^k and (B ⊗ I)^k = B^k ⊗ I for all integers k.

   (b) Show that e^{I⊗A} = I ⊗ e^A and e^{B⊗I} = e^B ⊗ I.

   (c) Show that the matrices I ⊗ A and B ⊗ I commute.

   (d) Show that

           e^{A⊕B} = e^{(I⊗A)+(B⊗I)} = e^B ⊗ e^A.

       (Note: This result would look a little "nicer" had we defined our Kronecker sum the other way around. However, Definition 13.14 is conventional in the literature.)

8. Consider the Lyapunov matrix equation (13.11) with

       A = [1   0]
           [0  -1]

   and C the symmetric matrix

       C = [2   0]
           [0  -2].

   Clearly

       X_s = [1  0]
             [0  1]

   is a symmetric solution of the equation. Verify that

       X_ns = [ 1  1]
              [-1  1]

   is also a solution and is nonsymmetric. Explain in light of Theorem 13.21.

9. Block Triangularization: Let

       S = [A  B]
           [C  D],

   where A ∈ R^{n×n} and D ∈ R^{m×m}. It is desired to find a similarity transformation of the form

       T = [I  0]
           [X  I]

   such that T^{-1}ST is block upper triangular.
   (a) Show that S is similar to

           [A + BX      B   ]
           [   0      D − XB]

       if X satisfies the so-called matrix Riccati equation

           C − XA + DX − XBX = 0.

   (b) Formulate a similar result for block lower triangularization of S.

10. Block Diagonalization: Let

        S = [A  B]
            [0  D],

    where A ∈ R^{n×n} and D ∈ R^{m×m}. It is desired to find a similarity transformation of the form

        T = [I  Y]
            [0  I]

    such that T^{-1}ST is block diagonal.

    (a) Show that S is similar to

            [A  0]
            [0  D]

        if Y satisfies the Sylvester equation

            AY − YD = −B.

    (b) Formulate a similar result for block diagonalization of

            S = [A  0]
                [C  D].
Bibliography

[1] Albert, A., Regression and the Moore-Penrose Pseudoinverse, Academic Press, New York, NY, 1972.

[2] Bartels, R.H., and G.W. Stewart, "Algorithm 432. Solution of the Matrix Equation AX + XB = C," Comm. ACM, 15(1972), 820–826.

[3] Bellman, R., Introduction to Matrix Analysis, Second Edition, McGraw-Hill, New York, NY, 1970.

[4] Björck, Å., Numerical Methods for Least Squares Problems, SIAM, Philadelphia, PA, 1996.

[5] Cline, R.E., "Note on the Generalized Inverse of the Product of Matrices," SIAM Rev., 6(1964), 57–58.

[6] Golub, G.H., S. Nash, and C. Van Loan, "A Hessenberg-Schur Method for the Problem AX + XB = C," IEEE Trans. Autom. Control, AC-24(1979), 909–913.

[7] Golub, G.H., and C.F. Van Loan, Matrix Computations, Third Edition, Johns Hopkins Univ. Press, Baltimore, MD, 1996.

[8] Golub, G.H., and J.H. Wilkinson, "Ill-Conditioned Eigensystems and the Computation of the Jordan Canonical Form," SIAM Rev., 18(1976), 578–619.

[9] Greville, T.N.E., "Note on the Generalized Inverse of a Matrix Product," SIAM Rev., 8(1966), 518–521 [Erratum, SIAM Rev., 9(1967), 249].

[10] Halmos, P.R., Finite-Dimensional Vector Spaces, Second Edition, Van Nostrand, Princeton, NJ, 1958.

[11] Higham, N.J., Accuracy and Stability of Numerical Algorithms, Second Edition, SIAM, Philadelphia, PA, 2002.

[12] Horn, R.A., and C.R. Johnson, Matrix Analysis, Cambridge Univ. Press, Cambridge, UK, 1985.

[13] Horn, R.A., and C.R. Johnson, Topics in Matrix Analysis, Cambridge Univ. Press, Cambridge, UK, 1991.
[14] Kenney, C., and A.J. Laub, "Controllability and Stability Radii for Companion Form Systems," Math. of Control, Signals, and Systems, 1(1988), 361–390.

[15] Kenney, C.S., and A.J. Laub, "The Matrix Sign Function," IEEE Trans. Autom. Control, 40(1995), 1330–1348.

[16] Lancaster, P., and M. Tismenetsky, The Theory of Matrices, Second Edition with Applications, Academic Press, Orlando, FL, 1985.

[17] Laub, A.J., "A Schur Method for Solving Algebraic Riccati Equations," IEEE Trans. Autom. Control, AC-24(1979), 913–921.

[18] Meyer, C.D., Matrix Analysis and Applied Linear Algebra, SIAM, Philadelphia, PA, 2000.

[19] Moler, C.B., and C.F. Van Loan, "Nineteen Dubious Ways to Compute the Exponential of a Matrix," SIAM Rev., 20(1978), 801–836.

[20] Noble, B., and J.W. Daniel, Applied Linear Algebra, Third Edition, Prentice-Hall, Englewood Cliffs, NJ, 1988.

[21] Ortega, J., Matrix Theory. A Second Course, Plenum, New York, NY, 1987.

[22] Penrose, R., "A Generalized Inverse for Matrices," Proc. Cambridge Philos. Soc., 51(1955), 406–413.

[23] Stewart, G.W., Introduction to Matrix Computations, Academic Press, New York, NY, 1973.

[24] Strang, G., Linear Algebra and Its Applications, Third Edition, Harcourt Brace Jovanovich, San Diego, CA, 1988.

[25] Watkins, D.S., Fundamentals of Matrix Computations, Second Edition, Wiley-Interscience, New York, 2002.

[26] Wonham, W.M., Linear Multivariable Control. A Geometric Approach, Third Edition, Springer-Verlag, New York, NY, 1985.
Index
A–invariant subspace, 89
matrix characterization of, 90
algebraic multiplicity, 76
angle between vectors, 58
basis, 11
natural, 12
block matrix, 2
definiteness of, 104
diagonalization, 150
inverse of, 48
LU factorization, 5
triangularization, 149
C", 1
(pmxn i
(p/nxn 1
Cauchy–Bunyakovsky–Schwarz Inequal
ity, 58
Cayley–Hamilton Theorem, 75
chain
of eigenvectors, 87
characteristic polynomial
of a matrix, 75
of a matrix pencil, 125
Cholesky factorization, 101
co–domain, 17
column
rank, 23
vector, 1
companion matrix
inverse of, 105
pseudoinverse of, 106
singular values of, 106
singular vectors of, 106
complement
of a subspace, 13
orthogonal, 21
congruence, 103
conjugate transpose, 2
contragredient transformation, 137
controllability, 46
defective, 76
degree
of a principal vector, 85
determinant, 4
of a block matrix, 5
properties of, 4–6
dimension, 12
direct sum
of subspaces, 13
domain, 17
eigenvalue, 75
invariance under similarity transfor
mation, 81
elementary divisors, 84
equivalence transformation, 95
orthogonal, 95
unitary, 95
equivalent generalized eigenvalue prob
lems, 127
equivalent matrix pencils, 127
exchange matrix, 39, 89
exponential of a Jordan block, 91, 115
exponential of a matrix, 81, 109
computation of, 114–118
inverse of, 110
properties of, 109–112
field, 7
four fundamental subspaces, 23
function of a matrix, 81
generalized eigenvalue, 125
generalized real Schur form, 128
154 Index
generalized Schur form, 127
generalized singular value decomposition,
134
geometric multiplicity, 76
Holder Inequality, 58
Hermitian transpose, 2
higher–order difference equations
conversion to first–order form, 121
higher–order differential equations
conversion to first–order form, 120
higher–order eigenvalue problems
conversion to first–order form, 136
i, 2
idempotent, 6, 51
identity matrix, 4
inertia, 103
initial–value problem, 109
for higher–order equations, 120
for homogeneous linear difference
equations, 118
for homogeneous linear differential
equations, 112
for inhomogeneous linear difference
equations, 119
for inhomogeneous linear differen
tial equations, 112
inner product
complex, 55
complex Euclidean, 4
Euclidean, 4, 54
real, 54
usual, 54
weighted, 54
invariant factors, 84
inverses
of block matrices, 47
j, 2
Jordan block, 82
Jordan canonical form (JCF), 82
Kronecker canonical form (KCF), 129
Kronecker delta, 20
Kronecker product, 139
determinant of, 142
eigenvalues of, 141
eigenvectors of, 141
products of, 140
pseudoinverse of, 148
singular values of, 141
trace of, 142
transpose of, 140
Kronecker sum, 142
eigenvalues of, 143
eigenvectors of, 143
exponential of, 149
leading principal submatrix, 100
left eigenvector, 75
left generalized eigenvector, 125
left invertible, 26
left nullspace, 22
left principal vector, 85
linear dependence, 10
linear equations
characterization of all solutions, 44
existence of solutions, 44
uniqueness of solutions, 45
linear independence, 10
linear least squares problem, 65
general solution of, 66
geometric solution of, 67
residual of, 65
solution via QR factorization, 71
solution via singular value decomposition, 70
statement of, 65
uniqueness of solution, 66
linear regression, 67
linear transformation, 17
co-domain of, 17
composition of, 19
domain of, 17
invertible, 25
left invertible, 26
matrix representation of, 18
nonsingular, 25
nullspace of, 20
range of, 20
right invertible, 26
LU factorization, 6
block, 5
Lyapunov differential equation, 113
Lyapunov equation, 144
and asymptotic stability, 146
integral form of solution, 146
symmetry of solution, 146
uniqueness of solution, 146
matrix
asymptotically stable, 145
best rank k approximation to, 67
companion, 105
defective, 76
definite, 99
derogatory, 106
diagonal, 2
exponential, 109
Hamiltonian, 122
Hermitian, 2
Householder, 97
indefinite, 99
lower Hessenberg, 2
lower triangular, 2
nearest singular matrix to, 67
nilpotent, 115
nonderogatory, 105
normal, 33, 95
orthogonal, 4
pentadiagonal, 2
quasi–upper–triangular, 98
sign of a, 91
square root of a, 101
symmetric, 2
symplectic, 122
tridiagonal, 2
unitary, 4
upper Hessenberg, 2
upper triangular, 2
matrix exponential, 81, 91, 109
matrix norm, 59
1-, 60
2-, 60
∞-, 60
p-, 60
consistent, 61
Frobenius, 60
induced by a vector norm, 61
mixed, 60
mutually consistent, 61
relations among, 61
Schatten, 60
spectral, 60
subordinate to a vector norm, 61
unitarily invariant, 62
matrix pencil, 125
equivalent, 127
reciprocal, 126
regular, 126
singular, 126
matrix sign function, 91
minimal polynomial, 76
monic polynomial, 76
Moore–Penrose pseudoinverse, 29
multiplication
matrix–matrix, 3
matrix–vector, 3
Murnaghan–Wintner Theorem, 98
negative definite, 99
negative invariant subspace, 92
nonnegative definite, 99
criteria for, 100
nonpositive definite, 99
norm
induced, 56
natural, 56
normal equations, 65
normed linear space, 57
nullity, 24
nullspace, 20
left, 22
right, 22
observability, 46
one–to–one (1–1), 23
conditions for, 25
onto, 23
conditions for, 25
orthogonal
complement, 21
matrix, 4
projection, 52
subspaces, 14
vectors, 4, 20
orthonormal
vectors, 4, 20
outer product, 19
and Kronecker product, 140
exponential of, 121
pseudoinverse of, 33
singular value decomposition of, 41
various matrix norms of, 63
pencil
equivalent, 127
of matrices, 125
reciprocal, 126
regular, 126
singular, 126
Penrose theorem, 30
polar factorization, 41
polarization identity, 57
positive definite, 99
criteria for, 100
positive invariant subspace, 92
power (kth) of a Jordan block, 120
powers of a matrix
computation of, 119–120
principal submatrix, 100
projection
oblique, 51
on four fundamental subspaces, 52
orthogonal, 52
pseudoinverse, 29
four Penrose conditions for, 30
of a full–column–rank matrix, 30
of a full–row–rank matrix, 30
of a matrix product, 32
of a scalar, 31
of a vector, 31
uniqueness, 30
via singular value decomposition, 38
Pythagorean Identity, 59
Q-orthogonality, 55
QR factorization, 72
Rn, 1
Rmxn, 1
Rmxnr, 1
Rnxnn, 1
range, 20
range inclusion
characterized by pseudoinverses, 33
rank, 23
column, 23
row, 23
rank–one matrix, 19
rational canonical form, 104
Rayleigh quotient, 100
reachability, 46
real Schur canonical form, 98
real Schur form, 98
reciprocal matrix pencil, 126
reconstructibility, 46
regular matrix pencil, 126
residual, 65
resolvent, 111
reverse–order identity matrix, 39, 89
right eigenvector, 75
right generalized eigenvector, 125
right invertible, 26
right nullspace, 22
right principal vector, 85
row
rank, 23
vector, 1
Schur canonical form, 98
generalized, 127
Schur complement, 6, 48, 102, 104
Schur Theorem, 98
Schur vectors, 98
second–order eigenvalue problem, 135
conversion to first–order form, 135
Sherman–Morrison–Woodbury formula,
48
signature, 103
similarity transformation, 95
and invariance of eigenvalues, 81
orthogonal, 95
unitary, 95
simple eigenvalue, 85
simultaneous diagonalization, 133
via singular value decomposition, 134
singular matrix pencil, 126
singular value decomposition (SVD), 35
and bases for four fundamental
subspaces, 38
and pseudoinverse, 38
and rank, 38
characterization of a matrix factorization as, 37
dyadic expansion, 38
examples, 37
full vs. compact, 37
fundamental theorem, 35
nonuniqueness, 36
singular values, 36
singular vectors
left, 36
right, 36
span, 11
spectral radius, 62, 107
spectral representation, 97
spectrum, 76
subordinate norm, 61
subspace, 9
A–invariant, 89
deflating, 129
reducing, 130
subspaces
complements of, 13
direct sum of, 13
equality of, 10
four fundamental, 23
intersection of, 13
orthogonal, 14
sum of, 13
Sylvester differential equation, 113
Sylvester equation, 144
integral form of solution, 145
uniqueness of solution, 145
Sylvester's Law of Inertia, 103
symmetric generalized eigenvalue problem, 131
total least squares, 68
trace, 6
transpose, 2
characterization by inner product, 54
of a block matrix, 2
triangle inequality
for matrix norms, 59
for vector norms, 57
unitarily invariant
matrix norm, 62
vector norm, 58
variation of parameters, 112
vec
of a matrix, 145
of a matrix product, 147
vector norm, 57
1-, 57
2-, 57
∞-, 57
p-, 57
equivalent, 59
Euclidean, 57
Manhattan, 57
relations among, 59
unitarily invariant, 58
weighted, 58
weighted p–, 58
vector space, 8
dimension of, 12
vectors, 1
column, 1
linearly dependent, 10
linearly independent, 10
orthogonal, 4, 20
orthonormal, 4, 20
row, 1
span of a set of, 11
zeros
of a linear dynamical system, 130
Matrix Analysis Matrix Analysis
for Scientists & Engineers for Scientists & Engineers
This page intentionally left blank This page intentionally left blank
California slam. Laub Alan J. .Matrix Analysis Matrix Analysis for Scientists & Engineers for Scientists & Engineers Alan J. Laub University of California Davis.
Inc.. write to the Society for Industrial and Applied of the publisher. MATLAB® is a registered trademark of The MathWorks. Mathematical analysis. MATLAB® is a registered trademark of The MathWorks. PA. MA 017602098 USA. Natick.) 1. I. Philadelphia. info@mathworks. PA 191042688. wwwmathworks. 1. Natick. No part of this book All rights reserved.com Mathematica is a registered trademark of Wolfram Research.. 2.9'434dc22 2004059962 2004059962 About the cover: The original artwork featured on the cover was created by freelance About the cover: The original artwork featured on the cover was created by freelance artist Aaron Tallon of Philadelphia.. For information. Library of Congress CataloginginPublication Data Library of Congress CataloginginPublication Data Laub. cm. PA. Matrices. or transmitted in any manner without the written permission may be reproduced. Includes bibliographical references and index. 2. Mathematica is a registered trademark of Wolfram Research.9'434—dc22 512. or transmitted in any manner without the written permission of the publisher. Used by permission.Copyright Copyright © 2005 by the Society for Industrial and Applied Mathematics. Laub. Mathcad is a registered trademark of Mathsoft Engineering & Education. For information. Inc. Mathematics. Inc. Mathcad is a registered trademark of Mathsoft Engineering & Education. please contact The MathWorks. For MATLAB product information.. • slam is a registered trademark.. artist Aaron Tallon of Philadelphia. I. Matrix analysis for scientists and engineers / Alan J. Inc. Printed in the United States of America..com. Fax: 5086477101. Inc. MA 017602098 USA. please contact The MathWorks. ISBN 0898715768 (pbk. stored. 3600 University City Science Center. 3 Apple Hill Drive.) ISBN 0898715768 (pbk. cm. Inc. No part of this book may be reproduced. 3600 University City Science Center. www. QA188138 2005 QA 188. stored. 10987654321 10987654321 All rights reserved. Title. Fax: 5086477101.com. 3 Apple Hill Drive.lam. Used by permission 5.mathworks. 5086477000. Laub. For MATLAB product information. p. Mathematical analysis. Alan J. . Matrices. write to the Society for Industrial and Applied Mathematics. PA 191042688.L38 2005 512. Includes bibliographical references and index. is a registered trademark. info@mathworks. 1948Laub. Inc.com 5086477000. Inc. 1948Matrix analysis for scientists and engineers / Alan J. Alan J. p.. Printed in the United States of America. Title. Philadelphia. 2005 by the Society for Industrial and Applied Mathematics.
Beverley To my wife.To my wife. Beverley (who captivated me in the UBC math library captivated UBC nearly forty years ago) nearly forty .
This page intentionally left blank This page intentionally left blank .
1.2 Examples 4.3 Inner Products and Orthogonality 1. .. 6.. . .2 Examples.1 4.3 Row and Column Compressions Linear Equations Linear Equations 6. .1 Some Notation and Terminology 1... .3 A More General Matrix Linear Equation 6. . .2 Matrix Linear Equations 6.. 2. .3 Properties and Applications Introduction to the Singular Value Decomposition Introduction to the Singular Value Decomposition 5.3 Inner Products and Orthogonality .. . 6. .. . .5 Four Fundamental Subspaces Introduction to the MoorePenrose Pseudoinverse Introduction to the MoorePenrose Pseudoinverse 4. . . .4 Structure of Linear Transformations 3. . . 5.. .Contents Contents Preface Preface 1 1 Introduction and Review Introduction and Review 1.3 Linear Independence 2.1 Definition and Examples 3. . .3 Rowand Column Compressions 5.3 Linear Independence .3 A More General Matrix Linear Equation 6. . .. . . . .2 Matrix Representation of Linear Transformations 3. 2.4 Some Useful and Interesting Inverses. . 5.5 Four Fundamental Subspaces . .1 2.2 Some Basic Properties 5. . 4. 3..4 Some Useful and Interesting Inverses vii xi xi 1 1 1 1 3 3 4 4 4 7 7 7 7 9 9 10 10 13 13 17 17 17 17 18 18 19 19 20 20 22 22 2 2 3 3 4 4 29 29 30 30 31 31 35 35 35 35 38 40 5 5 6 6 43 43 43 43 44 47 47 47 47 .1 Definition and Examples .1 The Fundamental Theorem 5. 4.2 Matrix Arithmetic .2 Subspaces.1 The Fundamental Theorem .4 Sums and Intersections of Subspaces Linear Transformations Linear Transformations 3. .3 Composition of Transformations 3.4 Sums and Intersections of Subspaces 2. . .1 Vector Linear Equations 6.4 Structure of Linear Transformations 3.1 Some Notation and Terminology 1. . 3.3 Composition of Transformations . .. . . . . .3 Properties and Applications . . 3.4 Determinants 1.1 Definitions and Characterizations Definitions and Characterizations. ..2 Matrix Representation of Linear Transformations 3.2 Matrix Arithmetic 1. Definitions and Examples 2. .. . .2 Some Basic Properties . 4.2 Matrix Linear Equations . .2 Subspaces 2. 6.4 Determinants Vector Spaces Vector Spaces 2. . 1.. ... ..1 Definitions and Examples . .1 Vector Linear Equations .
8. 11. . . . . .6 Computation of the matrix exponential 11. . .3. 10.2 Jordan Canonical Form 9. .2 On the +1's in JCF blocks 9. .1 Properties of the matrix exponential 11.5 The Matrix Sign Function 51 51 51 51 52 52 54 54 57 57 59 59 8 65 65 65 65 67 67 67 67 67 67 69 70 70 71 71 9 75 75 75 82 82 85 85 86 86 88 88 89 89 91 91 95 95 10 Canonical Forms 10.3 Linear Regression and Other Linear Least Squares Problems 8. .1 Block matrices and definiteness 10. . . 9.3.3.1 Projections . 9.2. . . . .2 Geometric Solution 8.1. . .3.1 Some Basic Canonical Forms . .1.1 Projections 7. . . . 7. .1 8. .2 Definite Matrices . . . . . . .4 Geometric Aspects of the JCF 9.1 The Linear Least Squares Problem 8. . . . . .1. . . Eigenvalues and Eigenvectors 9. . . . . 11. . . .1 Fundamental Definitions and Properties 9. . Example: Linear regression 8. . 10. . 11.3 Computation of matrix powers . . . . . . .1.1 Example: Linear regression . . . . .1 Fundamental Definitions and Properties 9.4 Linear matrix differential equations 11. .1. . . . .1.2 Geometric Solution . . . . . .1 7. 11. . . 11.3. .2 Difference Equations .1 Properties ofthe matrix exponential .3. 8.5 Least Squares and QR Factorization .2.2 Other least squares problems . . . .3 Equivalence Transformations and Congruence 10.1. 11. . . .2 Definite Matrices 10. .2 Other least squares problems 8. .1.2 Jordan Canonical Form . .3. . . . .5 The Matrix Sign Function. . . . . .2 Inner Product Spaces 7. . . . . 7. . . .1 Differential Equations ILl Differential Equations . 8. .3 Computation of matrix powers 11. . .3 Linear Regression and Other Linear Least Squares Problems 8. .3 Vector Norms 7. .1 The Linear Least Squares Problem . .6 Computation of the matrix exponential 11. .1 9. . .1.3. .1 Homogeneous linear difference equations 11.3 Inhomogeneous linear differential equations 11.3 Determination of the JCF . .3.3. .4 Rational Canonical Form 11 Linear Differential and Difference Equations 11 Linear Differential and Difference Equations 11.3 HigherOrder Equations . .3 Vector Norms 7.1 Theoretical computation .3 Equivalence Transformations and Congruence 10.4 Matrix Norms Linear Least Squares Problems 8.2 Homogeneous linear differential equations 11.1.viii viii Contents Contents 7 Projections.1.3 Inhomogeneous linear differential equations 11. 10. . .1 Homogeneous linear difference equations 11.1. .4 Least Squares and Singular Value Decomposition 8. . . . . 11. .4 Least Squares and Singular Value Decomposition 8. . .2. . Inner Product Spaces.2 Difference Equations 11.4 Linear matrix differential equations . .1 The four fundamental orthogonal projections The four fundamental orthogonal projections 7.4 Geometric Aspects of the JCF 9. . .3 Determination of the JCF 9.2.2 Inhomogeneous linear difference equations 11. Theoretical computation 9. . . . . . . 95 95 99 102 102 104 104 104 104 109 109 109 109 109 109 112 112 112 112 113 113 114 114 114 114 118 118 118 118 118 118 119 119 120 120 . . . . . . . . .1. .2 Homogeneous linear differential equations 11. . 9.4 Matrix Norms .1 Some Basic Canonical Forms 10. . . .1 Block matrices and definiteness 10. . . .2 On the + l's in JCF blocks 9.2.5 Modal decompositions . .1.2 Inner Product Spaces 7. and Norms 7. . . .2 Inhomogeneous linear difference equations 11.2.5 Modal decompositions 11. .5 Least Squares and QR Factorization 8. . .4 Rational Canonical Form .3 HigherOrder Equations.
.1 The Generalized EigenvaluelEigenvector Problem 12.. .Contents Contents ix ix 12 Generalized Eigenvalue Problems 12 Generalized Eigenvalue Problems 12.1 Definition and Examples 13. .5 Simultaneous Diagonalization 12. . . .1 The Generalized Eigenvalue/Eigenvector Problem 12.2 Canonical Forms . . . 13.6 HigherOrder Eigenvalue Problems 12.3 Application to the Computation of System Zeros .5. .1 Definition and Examples . . . . . .3 Application to Sylvester and Lyapunov Equations 13. 12. .5 Simultaneous Diagonalization . .3 Application to Sylvester and Lyapunov Equations Bibliography Bibliography Index Index .6 HigherOrder Eigenvalue Problems .2 Properties of the Kronecker Product . . .4 Symmetric Generalized Eigenvalue Problems . . . 12. . 12. . . 12.2 Canonical Forms 12. . .3 Application to the Computation of System Zeros 12.6. .5. . .1 Conversion to firstorder form 12.2 Properties of the Kronecker Product 13.6. . . 13.1 Conversion to firstorder form 125 125 125 127 127 130 131 131 133 133 133 135 135 135 139 139 139 139 140 144 144 151 153 13 Kronecker Products 13 Kronecker Products 13. . . . . . . . . 12.1 Simultaneous diagonalization via SVD 12. .1 Simultaneous diagonalization via SVD 12. . . .4 Symmetric Generalized Eigenvalue Problems 12.
This page intentionally left blank This page intentionally left blank .
for example). The concept of matrix factorization is emphasized throughout to provide a foundation for a later course in numerical linear is emphasized throughout to provide a foundation for a later course in numerical linear algebra. in many cases. in many cases. and Strang [24] Ortega are excellent companion texts for this book. computer science. particular subject area. followon topics on the computational side (at the level of [7]. I have tried throughout to emphasize only the and more advanced material is introduced. singularity of matrices. Because tools such as the SVD are not generally amenable to "hand computation. [II].Preface Preface This book is intended to be used as a text for beginning graduatelevel (or even seniorlevel) This book is intended to be used as a text for beginning graduatelevel (or even seniorlevel) students in engineering." this ics. eigenvalues and eigenvectors. singularity of matrices. I highly recommend MATLAB® although other software such as xi xi . mathematics. The concept of matrix factorization applications and by computational utility and relevance. I highly recommend MAlLAB® although other software such as a digital computer. either via formal courses or through selfstudy. For this. basisfree or subspace) aspects of many of the fundamental notions. By matrix analysis I mean linear algebra and matrix theory together with their intrinsic interaction with and application to algebra and matrix theory together with their intrinsic interaction with and application to linear dynamical systems (systems of linear differential or difference equations). Noble and Daniel [20]. These powerful and versatile tools can then be exploited to provide a unifying foundation upon which to base subsequent topcan exploited to foundation subsequent topics. Because tools such as the SVD are not generally amenable to "hand computation." However. but somehow didn't quite manage to do. computer science. Certain topics thoroughly as undergraduates. eigenvalues and eigenvectors. methods. The choice of topics covered in linear algebra and matrix theory is motivated both by The choice of topics covered in linear algebra and matrix theory is motivated both by applications and by computational utility and relevance. Basic of calculus and definitely some previous exposure to matrices and linear algebra. Upon completion of a course based on this are excellent companion texts for this book. example) or on the theoretical side (at the level of [12]. either via formal courses or through selftext.. [13].e. essentially Prerequisites for using this text are quite modest: essentially just an understanding for this understanding of calculus and definitely some previous exposure to matrices and linear algebra. Certain topics that may have been treated cursorily in undergraduate courses are treated in more depth that may have been treated cursorily in undergraduate courses are treated in more depth and more advanced material is introduced. for example).e. but somehow didn't quite manage to do.. although Chapters 2 and 3 algebra. the sciences. students meant to learn much of the important and useful mathematics that. the student is then wellequipped to pursue. The books by Meyer [18]. Ortega [21]. requiring such material as prerequisite permits the early (but "outoforder" by conventional standards) introduction of topics such as pseuthe early (but "outoforder" by conventional standards) introduction of topics such as pseudoinverses and the singular value decomposition (SVD). [23]. 
basisfree or subspace) aspects of many of the fundamental do cover some geometric (i. the sciences. I have tried throughout to emphasize only the more important and "useful" tools. or [16]. although Chapters 2 and 3 do cover some geometric (i. Upon completion of a course based on this text. [13]. and concepts such as determinants. For this. and mathematical structures. mathematics. requiring such material as prerequisite permits tion may occasionally be "hazy. for [11]. the student is then wellequipped to pursue. or [25]." this approach necessarily presupposes the availability of appropriate mathematical software on approach necessarily presupposes the availability of appropriate mathematical software on a digital computer. even though their recollecmatrices least tion may occasionally be "hazy. By matrix analysis I mean linear tools and ideas comfortably in a variety of applications. The text can be used in a onequarter or onesemester course to provide a compact overview of can be used in a onequarter or onesemester course to provide a compact overview of much of the important and useful mathematics that. Instructors are encouraged to supplement the book with specific application examples from their own encouraged to supplement the book with specific application examples from their own particular subject area. Basic concepts such as determinants. Matrices are stressed more than abstract vector spaces. These powerful and versatile tools doinverses and the singular value decomposition (SVD)." However. or [16]. The text linear dynamical systems (systems of linear differential or difference equations). or computational students in engineering. and positive definite matrices should have been covered at least once. Matrices are stressed more than abstract vector spaces. or computational science science who wish to be familar with enough matrix analysis that they are prepared to use its enough analysis they are prepared to tools and ideas comfortably in a variety of applications. example) or on the theoretical side (at the level of [12]. students meant to learn thoroughly as undergraduates.
diverse audience. The presentation of the material in this book is strongly influenced by computais influenced by computational issues for two principal reasons. in an elementary linear algebra course. and thus the text can serve a rather diverse audience." For example. applied physics. are there "best" linearly independent subsets? These tum out to turn be much more difficult problems and frequently involve researchlevel questions when set be much more difficult problems and frequently involve researchlevel questions when set in the context of the finiteprecision. many students who completed especially offered. prerequisites developed While prerequisites for this text are modest. and states often give rise to models of very numbers models high order that must be analyzed. and the course has proven to be remarkably successful at enabling students from disparate backgrounds to acquire a quite acceptable level of mathematical maturity and acceptable graduate rigor for subsequent graduate studies in a variety of disciplines. finiterange floatingpoint arithmetic environment of of of most modem computing platforms. Indeed. for example. linear algebra introducing "onthefly" algebra for elementary statespace theory) to an appendix or introducing it "onthef1y" when to necessary. and modem engineering. This is ideal material from which to learn a bit about mathematical proofs and the mathematical maturity and insight gained thereby. many times at UCSB and twice at UC Davis. A second motivation for a computational emphasis is that it provides many of the essential tools for what I call "qualitative mathematics. in particular. But in most engineering or scientific contexts we want to know more than that. control systems with standard large numbers of interacting inputs. mathematics. and a wide variety of other fields. This is ideal not given explicitly. It is thus crucial to acquire knowledge vocabulary a working knowledge of the vocabulary and grammar of this language. remarked afterward that if processing. they are either obvious or easily found in the literature. they are either obvious or easily found in the literature. completed the course. consistent. This is an absolutely fundamental fundamental concept. and coherent fashion. I have taught this material for many years. econometrics. Mastery of the material in this text should enable the student to read and understand the modern language of matrices used throughout mathematics." Proofs are given for many theorems. and evaluated. especially the first few times it was offered. simulated. statistics. Rather. chemistry. and the course has proven to be remarkably successful at enabling students from Davis. "reallife" problems seldom yield to simple "reallife" closedform formulas or solutions. are deferred to such a course. Some of the key algorithms of numerical linear algebra. They must generally be solved computationally and closedform it is important to know which types of algorithms can be relied upon and which cannot. one must lay a firm foundation upon which subsequent applications and Rather." a set of vectors is either linearly independent or it is not. If a set of vectors is linearly independent. how "nearly dependent" are the vectors? If they linearly independent. the student does require a certain amount of what is conventionally referred Proofs referred to as "mathematical maturity. outputs. 
It is my firm conviction that such maturity is neither encouraged conviction neither nor nurtured by relegating the mathematical aspects of applications (for example. First. When they are not given explicitly.xii xii Preface Preface Mathcad® Mathematica® or Mathcad® is also excellent. and while most material is developed from basic ideas in the book. in particular. if only they had had this course before they took linear systems. If If are linearly dependent. The "language" in which such described models are conveniently described involves vectors and matrices. The tools of matrix analysis are also applied on a daily basis to problems in biology. . form the foundation virtually modem upon which rests virtually all of modern scientific and engineering computation. Since this text is not intended for a course in numerical linear algebra per se. modern Some of the applications of matrix analysis mentioned briefly in this book derive of the applications of matrix analysis mentioned briefly in this book modem statespace from the modern statespace approach to dynamical systems. science. Statespace methods are Statespace modem now standard in much of modern engineering where. form the foundation Some of the key algorithms of numerical linear algebra. must lay firm foundation upon which and perspectives perspectives can be built in a logical. the details of most of the numerical aspects of linear algebra per se. or signal processing. are deferred to such a course.
etc. rather than having to spend time making up for deficiencies in their background background in matrices and linear algebra. realized that by requiring this course as a prerequisite. . they no longer had to provide as much time for "review" and could focus instead on the subject at hand.. My fellow instructors.Preface Preface xiii XIII or estimation theory. AJL. The concept seems to work. June 2004 — AJL. they would have been able to concentrate on the new ideas deficiencies they wanted to learn. too.
This page intentionally left blank This page intentionally left blank .
Rnxnn denotes the set of real nonsingular n x n matrices. 5. A row vector is denoted by yT where Note: Vectors are always column vectors.1 1. 2. nonsingular n x n matrices. Henceforth. n }. 1R. and linear algebra. Thus. . = the set of complex m x n matrices of rank r.n xn Cmxn = the set of complex m x n matrices of rank r.n xn Rmxnr = the set of real m x n matrices of rank r... x e IR n means means where xi e IR for e n. Rn = the set of ntuples of real numbers represented as column vectors. where Xi E R for ii E !!. e. That a vector is always a y E IR n and the superscript T is the transpose operation. where y G Rn and the superscript T is the transpose operation. XTy is a scalar while it easy to recognize immediately throughout the text that. IR n = the set of ntuples of real numbers represented as column vectors. Henceforth. e 6. IR~ xn denotes the set of real = set of real of rank Thus.g. the notation n denotes the set {1. Note: Vectors are always column vectors. en 4. the set of ntuples of complex numbers represented as column vectors. . x T y is a scalar while xyT is an n x n matrix.. This is followed by a review of some basic notions in matrix analysis throughout the text. e. mxn = the set of complex (or complexvalued) x n matrices. The following sets appear frequently throughout subsequent chapters: The following sets appear frequently throughout subsequent chapters: 1. R mxn = the set of real (or realvalued) m x n matrices. but this convention makes it easy to recognize immediately throughout the text that.. the notation!! denotes the set {I. 1 . x E Rn I. . e. A row vector is denoted by y~.1 Some Notation and Terminology Some Notation and Terminology We begin with a brief introduction to some standard notation and terminology to be used We begin with a brief introduction to some standard notation and terminology to be used throughout the text. Cn = the set of ntuples of complex numbers represented as column vectors. 2. 3. Crnxn = the set of complex (or complexvalued) m x n matrices..Chapter 1 Chapter 1 Introduction and Review Introduction and Review 1. Thus. but this convention makes column vector rather than a row vector is entirely arbitrary.g.. 5. That a vector is always a column vector rather than a row vector is entirely arbitrary. xyT is an n x n matrix. IR rn xn = the set of real (or realvalued) m x n matrices. n}. This is followed by a review of some basic notions in matrix analysis and linear algebra. Thus..
• lower triangular if a.ii > 1. A matrix A is symmetric i. (7. ~ 5 is symmetric (and Hermitian). if A E IRnxn. then r = [ . The The transpose of a matrix A is denoted by AT and is the matrix whose (i. • lower Hessenberg if a. if z = a + jf$ (j = ii = R). an equation like A = A T implies that A is realvalued while a statement like A = AH implies that A is complexvalued. text but reminders are placed at strategic locations.jj > 1. it is Transposes of block matrices can be defined in an obvious way. Hermitian conjugate sometimes A*) and its (i. if e Rnxn e Rmxn C e Rmxm then the (m n) x (m n) matrix [A0 ~] is block upper triangular. 1. A e jRmxn. • lower Hessenberg if aij = 0 for } . Note that if A E R mx ". A is conjugation. AT E E" xm is the (j.e.2. that is. If A E em xn.. • upper triangular if aij. • upper Hessenberg if aij = 0 for ii . = 0 for i > j. = a jfJ. • upper triangular if a. ] is symmetric (and Hermitian). A matrix A E IRn xn e (or A E enxn ) is A eC" x ")is • diagonal if a. A if A = AT and Hermitian if A = AH. j)th entry of a matrix A is denoted by AT and is the matrix whose j)th entry A. C E jRmxm. where the bar indicates complex sometimes A*) and its = IX jfJ (j = = v^T)..1. = 0 for / — j\ > 2. A = [ 7+} 5 3· A . Example 1. • pentadiagonal if ai. j)\h entry is (AH)ij = (aji).. Introduction and Review We now classify some of the more familiar "shaped" matrices.. = 0 for j — > 1. There is some the more common notation in electrical engineering and system theory. Example 1.. is Hermitian (but not symmetric). ... then the (m + n) x (m + n) matrix [~ Bc] is block upper triangular. is complexvalued symmetric but not Hermitian. • pentadiagonal if aij = 0 for Ii .2. then A7" e jRnxm. = 0 for < j. 2 2. unless if A = A T Hermitian A = A H.. an equation like A = A T implies that A is realvalued while a statement otherwise noted. j is Remark the more common notation in electrical engineering and system theory.e. otherwise noted. Introduction and Review Chapter 1. it is easy to see that if Aij are appropriately dimensioned subblocks. then z = IX — jfi. z Remark 1. where the bar indicates complex j)th entry is (A H ). B E IR nxm . = 0 for i ^ j. i. then easy to see that if A. • diagonal if aij7 = 0 for i i= }. For example. (AT)ij = aji. } is While R is most commonly denoted by i in mathematics texts.2 2 Chapter 1. Oth (A 7 ). • lower triangular if aij7 = 0 for i/ < }. and definitions block submatrices.J I > 2.j  7+} ] is Hermitian (but not symmetric). There is some advantage to being conversant with both notations. a. are appropriately dimensioned subblocks. For example. A = AH A complexvalued. 7 + j ] is complexvalued symmetric but not Hermitian. i)th entry of A. • tridiagonal if aij = 0 for Ii .. 7 = («77). = 0 for i > }.. A = [ . We henceforth that. For example. We henceforth adopt the convention that.[ 7 . then its Hermitian transpose (or conjugate transpose) is denoted by AH (or H If A e C mx ". • upper Hessenberg if afj = 0 for — > 1. Each of the above also has a "block" analogue obtained by replacing scalar components in the respective definitions by block submatrices. The notation j is used throughout the text but reminders are placed at strategic locations. For example. • tridiagonal if a(y = 0 for z — j\ > 1. 2 Transposes of block matrices can be defined in an obvious way. While \/—\ is most commonly denoted by i in mathematics texts.JI > 1.
Theorem 1. It Theorem 1. . applied p times: There is also an alternative. its importance cannot be overemphasized.• a"1 E m JR " with a.e. AB bi E W1.... un]]Ee jRmxn with Ui t Ee jRm and V = [VI. suppose A E jRmxn and [hI. matrixvector product with the column x to find Ax = [50 32]' but this matrixvector product can also be computed computed via v1a 3. and is premultiplied by a row yT E R l x m then the product can be written as a weighted linear sum of the rows of C as follows: follows: yTC=YICf +"'+Ymc~ EjRlxn.... Then the matrix product A B can be thought of as above.'" p] E jRnxp For matrix multiplication... importance interpretation take A = [96 85 74]x = take A = [~ ~]. vector x. The importance of this interpretation cannot be overemphasized. formulation of matrix multiplication that appears frequently in the text and is presented below as a theorem. That is. . i.. A very important way to view this product is interpret weighted to interpret it as a weighted sum (linear combination) of the columns of A. suppose A e Rmxn and B = [bi.xn~ ] Then Ax = Xjal + .. E JRm and x = l I.. Matrix Arithmetic 3 1. Then we can quickly calculate dot products of the rows of A column Ax = [.[ ~ J+l. x = ! 2 Then we can quickly calculate dot products of the rows of A [~]..3. Ax. suppose (linear combination) suppose A = la' . the matrixvector product Ax.[ ~ J+2. Then v E jRP. multiplication. Vn] ]Ee lR Pxn U [Uj. This gives a dual to the matrixvector result above.1. multiplication of a matrix by a scalar.2.. As a numerical example. Theorem reader.. It is deceptively simple and its full understanding is well rewarded. . vn Rpxn p with Vit e R . recall that (CD)T = DT C T (C D)T = DT T If H H H (or (CD} = DHC H ). and multiplication of matrices. but equivalent." The details are left to the readei "row left . .3 can then also be generalized to its "row dual. . Namely. Let U = [MI. if (C D)H — D C ). un Rmxn with u Rm and V = [v .3.2 Arithmetic It is assumed that the reader is familiar with the fundamental notions of matrix addition. i=I If matrices C and D are compatible for multiplication.•. eRmxn has row cj e E l x ". Theorem 1...[ ~ l For large arrays of numbers.. A special case of matrix multiplication occurs when the second matrix is a column multiplication second i. and is premultiplied by a row vector yTe jRlxm. matrixvector if C E jRmxn has row vectors cJ E jRlxn. Again. . there can be important computerarchitecturerelated advancomputerarchitecturerelated tages to preferring the latter calculation method.bhp ] e Rnxp with hi e jRn..e.2 Matrix Arithmetic 1. + Xnan E jRm. {..~]. n UV T = LUiVr E jRmxp...
Y}c = [ } JH [ ~ ] = [I . we define their complex Euclidean inner product (or inner product. The notation /„ is sometimes used to denote the identity matrix in IR nxn in Rnx" x nxn H H (or en "). Let x = [1j and y = [1/2]. x)c'. where / is the n x n identity matrix. y ) c = yHxx = L:7=1 x. Note that x Tx = 0 if and only if x = 0 when x e Rn but that this is not true if x e Cn. x)c. The more conventional definition of the complex inner product is H ( x . A orthogonal and XTX = 1 and yTyy = 1. Introduction and Review Chapter 1. indeed. then we say that x and y are orthonormal. a matrix A e en xn is said to be unitary if A H A = AA H = I. There is no special name attached to a nonsquare matrix A E R mxn (or E Cmxn with orthonormal no special name attached to a nonsquare matrix A e ]Rrn"n (or € e mxn ))with orthonormal rows or columns.=1 Note that the inner product is a scalar. y) := x y = Lx. then we say that x and y are orthonormal.y. order in which x product is important.. consider the nonzero vector x above. (x.. Note that the inner product is a scalar. x)c and we see that. To illustrate. If x and y are zero. Y}c = {y.4 Determinants Determinants It is assumed that the reader is familiar with the basic theory of determinants. Similarly. y E R". i. for short) by for short) by n (x'Y}c :=xHy = Lx.e. y)c = (y. 1. x}c. E R are said to be orthogonal if their inner product is Two nonzero vectors x. Then Example 1. the order in which x and y appear in the complex inner (x. rows or columns.2j while while and we see that. Then XTX = 0 but XHX = 2. Note that x T x = 0 if and only if x = 0 when x E IRn but that this is not true if x E en. Let x = [} ]] and y = [~]. xTyy = 0. the nonzero vector x above. Introduction and Review 1. Nonzero complex vectors are orthogonal if XHy = O. .e. the Euclidean inner product (or inner product. Example 1. y)c = y = Eni=1 xiyi but throughout the text we prefer the symmetry with the real case.. Clearly said = an orthogonal or unitary matrix has orthonormal rows and orthonormal columns.e.e. i. i.4 4 Chapter 1. The more conventional definition of the complex inner product is product is important. If x. we define their complex Euclidean inner product (or inner product.3 Inner Products and Orthogonality Inner Products and Orthogonality For vectors x.y. x T = O.4. If e C". A EC = (orC" xn).. We list below some of (or A 6 en xn) we use the notation det A for the determinant of A.4.4 1. indeed. Similarly. the Euclidean inner product inner for short) y is given by y is given by n T (x. Then x T x = 0 but x H X = 2. y)c = (y.y. In sometimes used denote identity matrix. Two nonzero vectors x. consider What is true in the complex case is that x H 0 if and only if O.. y e IRn are said to be orthogonal if their inner product is zero.j] [ ~ ] = 1 . (x. where I is the n x n matrix A e IRnxn is an orthogonal matrix if ATA = AAT = /. A nxn matrix E R is an orthogonal matrix if AT A = AAT = I. We list below some of . For A E R nnxn A e IR xn It assumed of determinants. y E <en. There is an orthogonal or unitary matrix has orthonormal rows and orthonormal columns.3 1. case.=1 y appear in Note that (x. To illustrate. Then (x. y)c = (y. but throughout the text we prefer the symmetry with the real (x. for short) of x and For vectors y e IRn. What is true in the complex case is that XH x = 0 if and only if x = 0. i. . (or A E Cnxn) we use the notation det A for the determinant of A. x and y are orthogonal and x Tx = 1 and yT = 1.. Nonzero complex vectors are orthogonal if x H y = 0.
• ann. then det [~ BD] = del A det(D – CA– l 1 B). then det A = all a22 .• det Ann. then det A = alla22 • • ann 12.. 9.. 8. If elements. 3. B eR n x n . . If A is upper triangular. exdetA.A22. det A is the product of its diagonal diagonal. If A has a zero column or if any two columns of A are equal. 13. several more is a properties are consequences of one or more of the others. with A block diagonal (or block 13. i. Interchanging two rows of A changes only the sign of the determinant. then det(A1) = 1detA .e. then det A = a11a22 . If A has a zero row or if any two rows of A are equal. 14.. A 22 . 15.CA. If A. 2. det A is the product of its diagonal 10. then det(A.. 10. B E IRnxn . 3. • • An" (of A = square diagonal blocks A11.. 16. Determinants 5 properties the more useful properties of determinants. Multiplying a row of A by a scalar and then adding it to another row does not change 7. 8. Multiplying a row of A by a scalar and then adding it to another row does not change the determinant. If A is diagonal. then det(AB) = det A det B. Multiplying a column of A by a scalar and then adding it to another column does not a column of scalar column does change the determinant. Determinants 1. detAT = detA (det A H = det A A E C"X"). are consequences of one or more of the others. Ann (of possibly different sizes). Note that this is not a minimal set. then det A = 0. 7. If A is lower triangular. then det A = O. of 5. det AT = det A (detA H = detA if A e C nxn ). = alla22 • • ann i. 11.• ann.. If A has a zero column or if any two columns of A are equal.C).thendet(AB) = det A det 5. Interchanging two columns of A changes only the sign of the determinant. Proof: This follows easily from the block LU factorization Proof" This follows easily from the block LU factorization [~ ~J=[ ~ ][ ~ 17. change the determinant. the determinant.. properties 1. If A E lR~xn and DE IR mxm det [Ac ~] detA det(D . If det = a11a22 • • ann 12.1 ) = de: A. Multiplying a column of A by a scalar ex results in a new matrix whose determinant scalar a determinant is ex det A. A 11. If A E R n x n and D e RMmxm. Proof" Proof: This follows easily from the block UL factorization BD. If A E Rnxn.. Multiplying a row of A by a scalar a results in a new matrix whose determinant is 5. then det [~ BD] = det D det(A – B D – 11C ) .... then det A = a11a22 . then det A = different det A11 det A22 .4.B).. 16.•. is a det A..e.. det A 11 det A22 • • det Ann 14. If A is lower triangUlar.4. 4. 15. If A e IRnxn and D E lR~xm. 17. 11. then det [Ac ~] det D det(A B D.. i.e. If A e R n x n and D e R m x m . If A A A = o. Multiplying a row of A by a scalar ex results in a new matrix whose determinant is a det A.1. then det A = 0...• a"n.. Multiplying A 6. Interchanging two rows of A changes only the sign of the determinant. If A is block diagonal (or block upper triangular or block lower triangular). If A. If A € lR~xn.1 I ][ ..
Suppose A E jRn xn is idempotent and A ^ I.. 6. 2 0 IS I dempotent . .e. Another such factorization is UL where V is unit upper triangular and L is lower triangular. TrA = L~=I au· elements. what is det A? If A is unitary.B D – l C is the Schur complement of D in [~ BD ]. elements.. B e JRn xn and a. II _ . The trace of A. A? 2. 2sin20 J is idempotent for all #. aII o.. [~ ~ ]. Showthatdet(lxyT) 1 – yTx. . . see. Show that the product V = VI. A E jRnxn A2 / x™ . • . The matrix D . ST = So Show that TrS = 0. The factorization of a matrix A into the product of a unit lower triangular Remark 1. then Tr(aA fiB)= aTrA + f3TrB. example.6. _. lor z r 2sm2rt # J. 2f) 2 _ sm 2^ sin 0 sin sin 20 1 .Vk € jRn xn be orthogonal matrices. Let x.. Show that A must be singular. see.e. denoted TrA. The factorization of a matrix A into the product of a unit lower triangular matrix L (i. Introduction and Review Chapter 1. (c) Let S € Rnxn be skewsymmetric. Tr(Afl) = Tr(£A).. Uk E Rnxn U = VI V2 ..6 6 Chapter 1.e. nxn linear E R f3 E JR. i. (a) Show that the trace is a linear function.yTx. . are block analogues of these. A . ft e R.5. of Din [AC ~ l EXERCISES EXERCISES nxn 1. what is det A? If A 3. . lower triangular with all l's on the diagonal) and an upper triangular matrix L 1's an V is called an LV factorization. If A is orthogonal. Remark — C I B – BDIe Similarly. V2. Let U1.• V k is an orthogonal matrix. Another such factorization is VL U is an LU factorization.. Show that det(I – xyT) = 1. A matrix A e Wx" is said to be idempotent if A2 = A. i. ! [ 2cos2<9 I T 2cos2 0 (a) Show that the matrix A = _. Remark 1. ST = S. (b) Show that Tr(AB) = Tr(BA). (b) Suppose A e IR" X "is idempotent and A i= I. Show that A must be singular. The factorizations used above U triangular.5. y e Rn. U2 . i.. Tr(aA + f3B) = aTrA + fiTrB. A =. either prove the converse or provide a counterexample. 4. [24].e. if A. Let A E jRNxn. is defined as the sum of its diagonal A e Rnxn. Introduction and Review Remark 1. U1 U2 • • Uk is an 5.y E jRn. even though in general AB i= B A... Then E jRnxn skewsymmetric.. If A e jRnxn and or is a scalar.. AB ^ BA. i. for example. TrA = Eni=1 aii. of denoted Tr A.. what is det(aA)? What is det(–A)? E R a det(A)? A? If A unitary.. Letx.e. [24]. TrS O.e A – 1 B is called the Schur complement of A in[ACBD].. .
8.1. aI = 1. ft. .8 a for all a.8)· yyf for all a. p. (M2) there exists an element I E F such that a . . ft Elf. Axioms (MI)(M4) state that (IF \ {0}. 7 . (A3) for all a e F. •) is an abelian group. (Al) a + (. A field is a set F together with two operations +.1 Definitions and Examples Definition 2. when no confusion can arise. the multiplication operator "•" is Generally speaking. I = a for all a E F.ye¥. p. a"1 € IF • a~l = 1.8 + y) = a·. A field is a set IF together with two operations +. not written explicitly.p ) . y € F. (A3) for all a E IF.((. y)=cip+a. . the multiplication operator ". be found.y for alia.1.) ( a . . there exists an element (—a) E IF such that a (—a) O." is not written explicitly. : IF x F —> IF such that Definition 2.8) + y ffor all a.8. +) is a group and an abelian group if (A4) also holds. Axioms (A1)(A3) state that (F.. (Ml) a· p . for all a e IF. Axioms (M1)(M4) state that (F \ to). 2. Axioms (Al)(A3) state that (IF.8 + y) = (a +.8 = ft + afar all a.8. (Ml) a . An excellent reference for this and the next chapter is [10]. there exists an element (a) e F such that a + (a) = 0. (M4) a·.8 = P • a for all a. y Elf. 0. The emphasis is on finitedimensional vector spaces. . there exists an element aI E F such that a . +) is a group and an abelian group if (A4) also holds.Chapter 2 Vector Spaces Vector Spaces In this chapter we give a brief review of some of the basic concepts of vector spaces. (D) (D) a· p a . (M2) 1 e IF • I = for a e IF.8 E IF.8 e F. but some infinitedimensional examples are also cited.. but some infinitedimensional examples are also cited. • F x IF ~ F such that (Al) a (P y ) = (a + p ) y o r all a. (A2) there exists an element 0 e IF such that a 0 = a.((. when no confusion can arise. (A4) a + . where some of the proofs that are not given here may be found. (M4) a • p =. afar all a. Generally speaking. including spaces formed by special classes of matrices.o r all a. p e F.8 . including spaces formed by special classes emphasis is on finitedimensional vector spaces. (M3) e IF. ^ 0. An excellent reference of matrices. The In this chapter we give a brief review of some of the basic concepts of vector spaces. (A2) there exists an element 0 E F such that a + 0 = a for all a E F. y Elf. y e F. yy) = (a·. a f.) is an abelian group. where some of the proofs that are not given here may for this and the next chapter is [10]. (M3) for all a E ¥.8 +a· y for all a. y Elf. (A4) a + p = .8.
Example 2.P. simply by V.l. when there is no possibility of confusion as to the A vector space is denoted by (V.p ) .5. (IRn. C). f3 e F and for all v E V. +) is an abelian group. + apxP + . R"x" is not a field either for example. (V2) ( a f3) v = a P . Example 2. 4.F xV »• V such that (VI) (V. f3 Elf andforall v E V.3 are different from the + and • in Definition 2. 2. where Z+ = {0. RMrmxn= {m x n matrices of rank r with real coefficients} is clearly not a field since. this causes 2. .. 4. v = a . Note that + and· in Definition 2. w for all a e F and for all v. (Ml) does not hold unless m = n.qEZ +} . IR~ xn = m x n matrices of rank r with real coefficients) is clearly not a field since. F) or. . since (M4) does not hold in general (although the other 8 axioms hold). (V4) a· (v + w) = a . (V3) (a (V4) a(v w)=av a w for all a ElF andfor all v. Definition 2. (V5) 1 v = v for all v e V (1 Elf). Raf.2. (V5) I·• v = v for all v E V (1 e F). }.1 in the sense of operating on different objects in different sets. (R". is a field.3 are different from the + and . simply by V. (V2) (a·. 3.8 Chapter 2. IR) with addition defined by and scalar multiplication defined by and scalar multiplication defined by is a vector space. Note that + and • in Definition 2. w E V.. Ra[x] = the field of rational functions in the indeterminate x 3.3..4. 1. (V3) (a + f3). p € F and for all v e V. Moreover. when there is no possibility of confusion as to the underlying fie Id.• v = a·• v + p • v for all a. . (VI) (V.3.. R with ordinary addition and multiplication is a field.f3i EIR .. (MI) does not hold unless m = n. Similar definitions hold for (en.. is a field. A vector space is denoted by (V. 2. A vector space over a field F is a set V together with two operations Definition 2. is a vector space.2. w e V.2. e). Example 2. no confusion and the operator is usually not even written explicitly.2.1 in the sense of operating on different objects in different sets.5. lR~xn is not a field either since (M4) does not hold in general (although the other 8 axioms hold). Vector Spaces Example 2. ) f for all a. for example. Remark 2.. p E IF andfor all v e V. v for all a..( (f3' V v) o r all a.4. where Z+ = {O. IR with ordinary addition and multiplication is a field. Moreover. R) with addition defined by I. in Definition Remark 2. In practice.r] = the field of rational functions in the indeterminate x = {ao + f30 + atX f3t X + . v + a. +) is an abelian group. this causes no confusion and the·• operator is usually not even written explicitly. + f3qXq :aj. 1. ft) v = a v + f3. In practice. A vector space over a field IF is a set V together with two operations + ::V x V + V and· :: IF x V + V such that V x V ^V and. Similar definitions hold for (C".}. IF) or. underlying field. e with ordinary complex addition and multiplication is a field..1. I. C with ordinary complex addition and multiplication is a field.
Special Cases: Special Cases: (a) V = [to. Then O(D. amI al2 a22 + fJI2 + fJ22 aln + fJln a2n + fJ2n a mn + fJml am2 + fJm2 and scalar multiplication defined by and scalar multiplication defined by [ ya" y a 21 yA = . Then (W. W = 0.6. IF) be a vector space and let W c V." The less restrictive meaning "is a subset of" is specifically flagged as such. we write W ~ V. Let A € R"x". when used with vector spaces. (V. (JRmxn. implies that the zero vector must be in any subspace." + fJ2I a21 + P" . F) is a Definition 2. V) is a vector space with addition set of functions f mapping '0 to V. foral! a. Notation: When the underlying field is understood. +00). Notation: When the underlying field is understood. V) be the 3. this question is closed under addition and scalar multiplication. Let cf>('O. 2. F) = (IR". is henceforth understood to mean "is a subspace of. and the functions are piecewise continuous =: (PC[to. Let O(X>.2 Subspaces Subspaces Definition 2. equivalently. too. IF) = (JRn. IF) is a subspace of (V. V) is a vector space with addition defined by defined by (f + g)(d) = fed) + g(d) for all d E '0 and for all f. verify that the set in or prove that something is indeed a subspace (or vector space). verify that the set in question is closed under addition and scalar multiplication. The latter characterization of a subspace is often the easiest way to check or prove that something is indeed a subspace (or vector space). F) is itself a vector space or. Then cf>('O.. IF) = (JR n . td)n.2. Then {x(t) : x(t) = Ax(t}} is a vector space (of dimension n). (V. E).. (V.2 2. less restrictive meaning "is a subset of' is specifically flagged as such. 4. 4. F) be an arbitrary vector space and V be an arbitrary set. fJ e IF andforall WI. that since 0 e F. Let (V. td)n or continuous =: (C[?0. that since 0 E IF. too. JR). if and only if(aw1 ßW2) E if(awl + fJw2) e W for all a. The latter characterization of a subspace is often the easiest way to check Remark 2.7. V) be the set of functions / mapping D to V. Subspaces 2. Let (V.. Note. (E mxn JR) is a vector space with addition defined by 2. IF) if and only if (W.2." The when used with vector spaces.7. W2 E W. i. and the functions are piecewise continuous (a) '0 = [to. Then (W.. w2 e Remark 2. IF) be an arbitrary vector space and '0 be an arbitrary set. h])n (b) '0 = [to. =: (PC[f0." ya2n . W f= 0. and the symbol ~. Let (V. Note. we write W c V. and for all f E cf>. t\])n continuous =: (C[to.6. Let A E JR(nxn. F) if and only if (W.e. . t\]. i. y a l2 y a 22 yam 2 ya.2. etc. for all d ED. F) be a vector space and let W ~ V. equivalently. ß E ¥ and for all w1. Then (x(t) : x ( t ) = Ax(t)} is a vector space (of dimension n).e. td. JR). this implies that the zero vector must be in any subspace. + fJmn l yaml yamn 3. E) is a vector space with addition defined by 9 9 A+B= [ . is henceforth understood to mean "is a subspace of. and the symbol c. l . if and only subspace of (V. Let (V. Subspaces 2. IF) is itself a vector space or. g E cf> and scalar multiplication defined by and scalar multiplication defined by (af)(d) = af(d) for all a E IF.
. .nxn. too.JR. •••. sketch W2. a = oo) is also a subspace. V2. and S are vector spaces (or subspaces). that the vertical line through the origin (i. V usually denotes a vector space with the underlying field generally being R unless Thus. . a = 00) is also a subspace. •. 3./l = {V : v = [ ac ~ f3 ] .ß is a subspace of V if and only if ß = O.8. F) = (R2. F) = (JR.3 Linear Independence Linear Independence Let X = {v1. Then it is easily shown that ctA\ + f3A2 is Proof' Suppose AI. If 12. For ß E R define the jccoordinate in the plane and V2 with the ycoordinate. ffR and S are vector spaces (or subspaces).. Note.nxn.S. ft E R symmetric for all a. W1/2.. c E JR.R) and 1.o. . . called linear varieties. W2. define W"... X is a linearly dependent set of vectors ifand only if there exist k distinct if and only if exist distinct elements v1. v2. Vk e X and scalars aI.o. ak not all zero such that elements VI. Consider (V.and W1/2. Example 2. For a. . Let W = {A € R"x" : A is orthogonal}. 1. that the vertical line through the origin (i. As an interesting exercise.. Shifted subspaces W".. Henceforth...3 2.0.e. W2. All lines through the origin are subspaces.•. W2..lF) = (R" X ". X linearly set of Definition 2.e. . V usually denotes a vector space with the underlying field generally being JR../l with f3 = 0 are All lines through the origin are subspaces. R"x". . Consider (V.I' and Wi.. Proof: Suppose A\. IF) = (]R2./l is a subspace of V if and only if f3 = 0. . Henceforth.1. . al VI + . JR. Definition 2.. then R = S if and only if Definition 2. be an element of R. then R = S if and only if R C S and S C R. Then it is easily shown that aAI + fiAi is symmetric for all a..• } be a nonempty collection of vectors Vi in some vector space V. Vector Spaces Example 2. ak.) and let W = [A e R"x" : A is symmetric}.O. unless explicitly stated otherwise. Let X {VI. in some vector space V. Consider (V. we drop the explicit dependence of a vector space on an underlying field. Vk of X and for any scalars aI. (Xk not all zero such that X is a linearly independent set of vectors if and only if for any collection of k distinct X is a linearly independent set of vectors if and only if for any collection of k distinct elements v1. we drop the explicit dependence of a vector space on an underlying field. 2. Definition 2.I... Note. A2 are symmetric. Then W". one usually proves the two inclusions separately: Note: To prove two vector spaces are equal. explicitly stated otherwise. Wi. too. ..9. .9. + (XkVk = 0 implies al = 0. A2 are symmetric. Then (V. • • •} be a nonempty collection of vectors u. . R) and for each v € R2 of the form v = [v1v2 ] identify v1 with 3.10. elements VI. ~SandS ~ R. As an interesting exercise. vk E X and scalars a1. = {A E JR. ak = O. E ]Rnxn : not 2.10.ß with ß =1= 0 are called linear varieties. Then W is /wf a subspace of JR. W~V.) and for each v E ]R2 of the form v = [~~ ] identify VI with the xcoordinate in the plane and u2 with the ycoordinate. . . Thus. one usually proves the two inclusions separately: An arbitrary r e R is shown to be an element of S and then an arbitrary s E S is shown to is shown to be an element of and then an arbitrary 5 € is shown to An arbitrary r E be an element of R. Shifted subspaces Wa.1. . f3 e R. . .10 Chapter 2. ak.Vk of X and for any scalars a1. f3 e R. To prove two vector spaces are equal.} . . sketch Then Wa.nxn : A We V.
. Sp(X) = V. called consider Let Vif e R"..}.. 2.. An equivalent condition for linear independence is that the matrix Va = 0. e2. . Independence of these vectors turns out to be equivalent to a concept Chapter 11). to be studied further in what follows.14. and X (of and 2.Vk] e ]Rnxk.. o Definition 2.11. If the set of vectors is independent. Why? However.14. 2. T V is nonsingular. Let A E ]Rnxn and 5 e R"xm. Let X = {VI. . tIl 2. LetV = 11 11 ~.2.. t1] (recall that etA denotes the matrix exponential. The dependence of this set of vectors is equivalent to the existence of a nonzero vector E Rk dependence of this set of vectors is equivalent to the existence of a nonzero vector a e ]Rk O.. = [ v 1 . ii E If. en} = Rn. A e R xn B E ]Rnxm.. } = (Xl VI + .. E V. e2 = 0 1 0 .. . ..v2 + V3 = 0).. (since 2vI . Then the span of of X is defined as X is defined as Sp(X) = Sp{VI. . Example 2.. V2. Then consider the rows of etA B as vectors in Cm [t0.. Definition 2."I [ i1i1l ]} [[ s a linearly is a Iin=ly dependent set de~ndent ~t (since 2v\ — V2 + v3 = 0). An equivalent condition for linear independence is that the matrix V TV is nonsingular.en = 0 0 0 o SpIel... Independence of these vectors turns out to be equivalent to a concept called controllability. . Vi e span of Definition 2.11. V V2. A set of vectors X is a basis for V if and only ij Definition 2. 1£t V = R3.en} = ]Rn. and there exists a e R* such that Va = 0. e2 . then = O.12. {1.3.•}} be a collection of vectors vi. then a = 0. which is discussed in more detail in efA Chapter 11). ... 2. . Vk] E Rnxk. Example 2. A set of vectors X is a basis for V if and only if 1. Then Sp{e1. + (XkVk . . to be studied further in what follows.13. (Xi ElF. Why? independent. .. X = [v1 v2 . Linear Independence Example 2.12. Howe.. Let V = Rn and define = ]Rn and el = 0 0 . Then {[ Then I. Then consider the rows of etA B as vectors in em [to. e k. X is a linearly independent set (of basis vectors). Sp(X) = V.. and there exists a E ]Rk such that VT V is singular. The linear v E ]Rn.. . linear dependence x such that Va = 0.'" .. Linear Independence 2. If the set of vectors is independent. Vi EX. An equivalent condition for linear dependence is that the k x k matrix condition VT V is singular. and consider the matrix V = [VI. ~ HHi] } Ime~ly i is a i" linearly independent set. }.13. kEN}... 1.3. = {v : where N = {I.
Example 2.15. {e1, ..., en} is a basis for R^n (sometimes called the natural basis).

Now let b1, ..., bn be a basis (with a specific order associated with the basis vectors) for V. Then for all v ∈ V there exists a unique n-tuple {ξ1, ..., ξn} such that
    v = ξ1 b1 + ... + ξn bn = Bx,
where
    B = [b1, ..., bn],  x = [ξ1 ... ξn]^T.

Definition 2.16. The scalars {ξi} are called the components (or sometimes the coordinates) of v with respect to the basis {b1, ..., bn} and are unique. We say that the vector x of components represents the vector v with respect to the basis B.

Example 2.17. In R^n,
    v = [v1 ... vn]^T = v1 e1 + v2 e2 + ... + vn en.
We can also determine components of v with respect to another basis. For example, while
    [1 2]^T = 1 · e1 + 2 · e2,
with respect to the basis
    { [3 2]^T, [-2 -1]^T }
we have
    [1 2]^T = 3 · [3 2]^T + 4 · [-2 -1]^T.
To see this, write
    [1 2]^T = x1 · [3 2]^T + x2 · [-2 -1]^T = [3 -2; 2 -1] [x1 x2]^T.
Then
    [x1 x2]^T = [3 -2; 2 -1]^{-1} [1 2]^T = [-1 2; -2 3] [1 2]^T = [3 4]^T.

Theorem 2.18. The number of elements in a basis of a vector space is independent of the particular basis considered.

Definition 2.19. If a basis X for a vector space V (≠ 0) has n elements, V is said to be n-dimensional or have dimension n, and we write dim(V) = n or dim V = n.
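The change-of-basis computation in Example 2.17 can be checked numerically (the basis below is the one reconstructed above): the components x of v with respect to B = [b1 b2] satisfy Bx = v, so x = B^{-1}v.

```python
# Solve B x = v for the components of v with respect to the basis {b1, b2}.
import numpy as np

B = np.array([[3., -2.],
              [2., -1.]])                  # columns b1 = [3, 2]^T, b2 = [-2, -1]^T
v = np.array([1., 2.])
x = np.linalg.solve(B, v)
print(x)                                   # [3. 4.]
print(np.allclose(B @ x, v))               # True
```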
For consistency, and because the 0 vector is in any vector space, we define dim(0) = 0. A vector space V is finite-dimensional if there exists a basis X with n < +∞ elements; otherwise, V is infinite-dimensional. Thus, Theorem 2.18 says that dim(V) = the number of elements in a basis.

Example 2.20.
1. dim(R^n) = n.
2. dim(R^{m×n}) = mn.
   Note: Check that a basis for R^{m×n} is given by the mn matrices Eij, i ∈ m, j ∈ n, where Eij is a matrix all of whose elements are 0 except for a 1 in the (i, j)th location. The collection of Eij matrices can be called the "natural basis matrices."
3. dim(C[t0, t1]) = +∞.
4. dim{A ∈ R^{n×n} : A = A^T} = n(n + 1)/2. (To see why, determine n(n + 1)/2 symmetric basis matrices.)
5. dim{A ∈ R^{n×n} : A is upper (lower) triangular} = n(n + 1)/2.

2.4 Sums and Intersections of Subspaces

Definition 2.21. Let (V, F) be a vector space and let R, S ⊆ V. The sum and intersection of R and S are defined respectively by:
1. R + S = {r + s : r ∈ R, s ∈ S}.
2. R ∩ S = {v : v ∈ R and v ∈ S}.

Theorem 2.22.
1. R ∩ S ⊆ V (in general, the intersection of subspaces R_a over an arbitrary index set A, i.e. ∩_{a∈A} R_a, is a subspace of V).
2. R + S ⊆ V (in general, R1 + ... + Rk =: Σ_{i=1}^{k} Ri ⊆ V, for finite k).

Remark 2.23. The union of two subspaces, R ∪ S, is not necessarily a subspace.

Definition 2.24. T = R ⊕ S is the direct sum of R and S if
1. R ∩ S = 0, and
2. R + S = T
(in general, Ri ∩ (Σ_{j≠i} Rj) = 0 and Σ_i Ri = T).
The subspaces R and S are said to be complements of each other in T.
Av" •.25. and SI.. triangular + L = R xn un.. 2. . ft. But as t = r1 + s1 = r2 + S2.. and let R"x". XI. Since ft fl 0.2 and 2. Xk E jRn 2.. 2.. S of a vector space V. Example 2.. 2. Show that Av\. Then r1 — r2 = s2— SI. Then V = U $ S.. . where r1. 0 S. Vector Spaces Chapter 2. = dim(ft) + Proof: To Proof: To prove the first part.5. let R be the set of skewsymmetric matrices in (V. Suppose {VI. For arbitrary subspaces ft. R). where rl. Let U be the subspace of upper triangular matrices in E" x" and let £ be the subspace of lower triangUlar matrices in Rnxn. every t € can be written uniquely in the form r s with r e R and s e S. Let x\. the set in jRnxn. Among all the complements there is a unique one orthogonal to R.. . validity of the formula given in Theorem 2. For arbitrary subspaces R. Prove that viand V2 form a a basis Consider v\ = [2 l]r 1*2 = [3 l] Prove that VI and V2 form basis 2 for R . S2 e rl Sl r2 Then r. one can easily verify the validity = n. together with Examples 2. . Then show that one of the vectors 1. Show that {XI. of the formula given in Theorem 2. unique ft.si e S. X2... But r1 –r2 E ft and S2 — SI E S. 1 TIT The first matrix on the righthand side above is in S while the second is in R.25.26. Then Theorem 2. jRn xn . .29. Suppose =R EB Then 1. Consider the vectors VI — [2 1f and V2 = [3 1f. .. consider V = R2 unique. .27.28. suppose an arbitrary vector t E T can be written in two ways t e as t S2. Vn thonormal if and only if A E R"x" is orthogonal. jR).. S of a vector space V. . D Theorem 2..20. . Vector Spaces Remark 2. dim(T) = dim(R) + dim(S).20. vd must be a linear combination of the others...26. dim(R + S) = dim(R) + dim(S)  dim(R n S)..28. Then it may be checked that U + . mutually [x\. we must have r\ ri and SI rl . 0 The statement of the second part is a special case of the next theorem.c = jRnnxn jRn xn. x/c E R" be nonzero mutually orthogonal vectors. We discuss more about orthogonal complements elsewhere in the text.s\. which uniqueness follows. .r2 £ Rand 52 . Then any other distinct line through the origin is a complement of R.29. ft. Vk} is a linearly dependent set. 3. while U n £ is the set of diagonal matrices in Rnxn. Theorem 2. Avn are also orjRn. ft S) = jR2 and let ft be any line through the origin.27. n Proof: A e jRnxn written Proof: This follows easily from the fact that any A E R"x" can be written in the form A=2:(A+A )+2:(AA).. r2 E Rand s1. . Then any other distinct line through the origin is and let R be any line through the origin.. and let S Let (V. Let VI. AVn are orv\. . For example. Theorem 2.c jRnxn. e jRnxn 4. ft be the set of symmetric matrices in R" x ".. Xk} must be a linearly independent set. IF) = (jRnxn. Xk} must be a linearly independent set.r . Example 2.vn be orthonormal vectors in R".27. Find the components of the vector v = [4 If with respect to this basis. .. we must have rl = r2 and s\ = si from S2 from which uniqueness follows. S2 E S.. EXERCISES EXERCISES 1.27. The complement of R (or S) is not unique. r2 e R.. *2. .r2 S2 .c the Example 2.. Using the fact that dim {diagonal (diagonal matrices} = n. Example 2. jRnxn.. F) (R n x n . {vi. Suppose T = R O S. v = [4 l]r jR2.14 14 Chapter 2... every t E T can be written uniquely in the form tt = r + s with r E Rand s E S. Since R n S = 0.
5. Let P denote the set of polynomials of degree less than or equal to two of the form p0 + p1 x + p2 x^2, where p0, p1, p2 ∈ R. Show that P is a vector space over R. Show that the polynomials 1, x, and 2x^2 - 1 are a basis for P. Find the components of the polynomial 2 + 3x + 4x^2 with respect to this basis.
6. Prove Theorem 2.22 (for the case of two subspaces R and S only).
7. Let Pn denote the vector space of polynomials of degree less than or equal to n, and of the form p(x) = p0 + p1 x + ... + pn x^n, where the coefficients pi are all real. Let PE denote the subspace of all even polynomials in Pn, i.e., those that satisfy the property p(-x) = p(x). Similarly, let PO denote the subspace of all odd polynomials, i.e., those satisfying p(-x) = -p(x). Show that Pn = PE ⊕ PO.
8. Repeat Example 2.28 using instead the two subspaces T of tridiagonal matrices and U of upper triangular matrices.
Chapter 3

Linear Transformations
3.1 Definition and Examples
We begin with the basic definition of a linear transformation (or linear map, linear function, or linear operator) between two vector spaces.
Definition 3.1. Let (V, F) and (W, F) be vector spaces. Then L : V → W is a linear transformation if and only if
    L(α v1 + β v2) = α L v1 + β L v2 for all α, β ∈ F and for all v1, v2 ∈ V.
The vector space V is called the domain of the transformation L while W, the space into which it maps, is called the codomain.
Example 3.2.
1. Let F = R and take V = W = PC[t0, +∞). Define L : PC[t0, +∞) → PC[t0, +∞) by
       v(t) ↦ w(t) = (Lv)(t) = ∫_{t0}^{t} e^{(t-τ)} v(τ) dτ.
2. Let F = R and take V = W = R^{m×n}. Fix M ∈ R^{m×m}. Define L : R^{m×n} → R^{m×n} by
       X ↦ Y = LX = MX.
3. Let F = R and take V = P^n = {p(x) = a0 + a1 x + ... + an x^n : ai ∈ R} and W = P^{n-1}.
   Define L : V → W by Lp = p', where ' denotes differentiation with respect to x.
3.2 Matrix Representation of Linear Transformations

Linear transformations between vector spaces with specific bases can be represented conveniently in matrix form. Specifically, suppose L : (V, F) → (W, F) is linear and further suppose that {vi, i ∈ n} and {wj, j ∈ m} are bases for V and W, respectively. Then the ith column of A = Mat L (the matrix representation of L with respect to the given bases for V and W) is the representation of Lvi with respect to {wj, j ∈ m}. In other words,
    A = [a1 ... an] ∈ R^{m×n}
represents L since
    L vi = a1i w1 + ... + ami wm = W ai,
where W = [w1, ..., wm] and ai is the ith column of A. Note that A = Mat L depends on the particular bases for V and W. This could be reflected by subscripts, say, in the notation, but this is usually not done.

The action of L on an arbitrary vector v ∈ V is uniquely determined (by linearity) by its action on a basis. Thus, if v = ξ1 v1 + ... + ξn vn = Vx (where v, and hence x, is arbitrary), then
    LVx = Lv = ξ1 L v1 + ... + ξn L vn
             = ξ1 W a1 + ... + ξn W an = WAx.
Thus, LV = WA since x was arbitrary.

When V = R^n, W = R^m and {vi, i ∈ n}, {wj, j ∈ m} are the usual (natural) bases, the equation LV = WA becomes simply L = A. We thus commonly identify A as a linear transformation with its matrix representation. Thinking of A both as a matrix and as a linear transformation from R^n to R^m usually causes no confusion. Change of basis then corresponds naturally to appropriate matrix multiplication.
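As an illustration of Mat L, consider the differentiation operator of Example 3.2(3). Assuming the monomial bases {1, x, ..., x^n} for P^n and {1, x, ..., x^{n-1}} for P^{n-1} (the text does not fix particular bases), column i of A represents L(x^i) = i x^{i-1} in the codomain basis:

```python
# Matrix representation of p -> p' with respect to monomial bases (an assumed choice).
import numpy as np

n = 3                                     # work in P^3 -> P^2
A = np.zeros((n, n + 1))                  # Mat L is n x (n+1)
for i in range(1, n + 1):
    A[i - 1, i] = i                       # d/dx x^i = i x^(i-1)

# p(x) = 2 + 3x + 4x^2 + 5x^3, represented by its coefficient vector
p = np.array([2.0, 3.0, 4.0, 5.0])
print(A @ p)                              # [3. 8. 15.], i.e., p'(x) = 3 + 8x + 15x^2
```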
3.3 Composition of Transformations

Consider three vector spaces U, V, and W and transformations B from U to V and A from V to W. Then we can define a new transformation C from U to W as the composition C = AB. [Diagram omitted: B maps U to V, A maps V to W, and C = AB maps U directly to W.]

The above diagram illustrates the composition of transformations C = AB. Note that in most texts the diagram is drawn with the arrows reversed. However, it might be useful to prefer the former since the transformations A and B appear in the same order in both the diagram and the equation. If dim U = p, dim V = n, and dim W = m, and if we associate matrices with the transformations in the usual way, then composition of transformations corresponds to standard matrix multiplication. That is, we have C = A · B with C ∈ R^{m×p}, A ∈ R^{m×n}, and B ∈ R^{n×p}. The above is sometimes expressed componentwise by the formula
    cij = Σ_{k=1}^{n} aik bkj.

Two Special Cases:

Inner Product: Let x, y ∈ R^n. Then their inner product is the scalar
    x^T y = Σ_{i=1}^{n} xi yi.

Outer Product: Let x ∈ R^m, y ∈ R^n. Then their outer product is the m × n matrix
    x y^T.
Note that any rank-one matrix A ∈ R^{m×n} can be written in the form A = x y^T above (or x y^H if A ∈ C^{m×n}). A rank-one symmetric matrix can be written in the form x x^T (or x x^H).
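A quick numerical sketch (assumed random data) of the two points above: composing linear maps corresponds to multiplying their matrices, and an outer product has rank one.

```python
# Composition corresponds to matrix multiplication; outer products are rank one.
import numpy as np

rng = np.random.default_rng(1)
p, n, m = 2, 3, 4
B = rng.standard_normal((n, p))           # B : U -> V
A = rng.standard_normal((m, n))           # A : V -> W
C = A @ B                                 # C = AB : U -> W

u = rng.standard_normal(p)
print(np.allclose(C @ u, A @ (B @ u)))    # True: (AB)u = A(Bu)

x, y = rng.standard_normal(m), rng.standard_normal(n)
print(np.linalg.matrix_rank(np.outer(x, y)))   # 1
```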
Definition 3.4. Let {v1, ..., vk} be a set of nonzero vectors vi ∈ R^n. The set is said to be orthogonal if vi^T vj = 0 for i ≠ j and orthonormal if vi^T vj = δij, where δij is the Kronecker delta defined by
    δij = 1 if i = j,  0 if i ≠ j.

Example 3.5.
1. { [1 1]^T, [1 -1]^T } is an orthogonal set.
2. { [1/√2  1/√2]^T, [1/√2  -1/√2]^T } is an orthonormal set.
3. If {v1, ..., vk} with vi ∈ R^n is an orthogonal set, then { v1/(v1^T v1)^{1/2}, ..., vk/(vk^T vk)^{1/2} } is an orthonormal set.

3.4 Structure of Linear Transformations

Let A : V → W be a linear transformation.

Definition 3.6. The range of A, denoted R(A), is the set {w ∈ W : w = Av for some v ∈ V}. Equivalently, R(A) = {Av : v ∈ V}. The range of A is also known as the image of A and denoted Im(A).

Definition 3.7. The nullspace of A, denoted N(A), is the set {v ∈ V : Av = 0}. The nullspace of A is also known as the kernel of A and denoted Ker(A).

Theorem 3.8. Let A : V → W be a linear transformation. Then
1. R(A) ⊆ W.
2. N(A) ⊆ V.
Proof: The proof of this theorem is easy, essentially following immediately from the definition.

Remark. Note that in Theorem 3.8 and throughout the text, the same symbol (A) is used to denote both a linear transformation and its matrix representation with respect to the usual (natural) bases. See also the discussion in Section 3.2. If A ∈ R^{m×n} is written in terms of its columns as A = [a1, ..., an], then R(A) = Sp{a1, ..., an}.
S1. Set vector. (n n S)~ = n~ + S~. = S. nonzero) solutions of the system of equations 3xI 4xI + 5X2 + 7X3 = 0.. n <.. . . the computation involved is simply to find all nontrivial (i. Structure of Linear Transformations 21 21 Definition 3.. Let S <. Proof: We prove and discuss only item 2 here. 2. vk} e ]Rn vector. k =X . + X2 + X3 = 0.11.4.4. n~. Let R S C Rn The S <. . 3. Then the of defined T S~={VE]Rn: V S = 0 for all s e S}.10. including dependent spanning vectors (which would. Example 3.10. Vk} be an orthonormal basis for S and let x E Rn be an arbitrary {v1. of course. ]Rn.3.. Rn. Let {VI. 6. Then n. if and only if S~ <. Set XI X2 = L (xT Vi)Vi. Theorem 311 Let Theorem 3. Note that there is nothing special about the two vectors in the basis defining S being orthogonal. then give rise to redundant equations).e. (S~)l.= {v e Rn : vTs=OforallsES}. S 5. Then the orthogonal complement of S is defined as the set c ]Rn. (n + S)~ = nl.. The proofs of the other results are left as Proof: left exercises. n S~. Then it can be shown that Working from the definition.9.. 4.=1 XI. Any set of vectors will do. Structure of Li near Transformations 3. Let 3. S \B S~ = ]Rn. .
Ax = Proof: To prove the first part. Then {v e IRn : A v = O} is sometimes called the Definition 3. that x = x1 + x2.e. The proof of the second part is similar and is left as an exercise.26. Then T (x. Suppose. +x~). 'R. where x e U(A) and y e ft(A)1.14 (Decomposition Theorem). It is also easy to see directly that.22 22 Chapter 3.5 3. Theorem 3.26.l N(A T (i.13. Let A : Rn + Rm.5 Four Fundamental Subspaces Four Fundamental Subspaces Consider a general matrix A E lR..XI/ (x~ .e.l. IRn = M(A) EB R(A T)). Clearly. Let A : R" + IRm. take an arbitrary x e A/"(A). = x'1+ x'2.(A)1~ — J\f(ATT ).) 2. including itself) is 0.. when we have such direct sum decompositions. See also Theorem 2. We S n S1 =0 the e orthogonal everything in (i.12 and part 2 of Theorem 3.1 in the next section. x e R(AT).e. E R(A) and E R(A).1 in the next section. the right nullspace is A/"(A) while the left nullspace is N(A T ). {w E IR m : WT A = O} is called the left nullspace of A. . Then {v E R" : Av = 0} is sometimes called the right nullspace of A.. + x~. This key theorem becomes very easy to remember by carefully studying and underThis key theorem becomes very easy to remember by carefully studying and understanding Figure 3.e. Then X2 = x. . When thought of as a linear transformation from E" Consider a general matrix A € E^ x ". We have thus shown that vectors. x E R(A r ) Since x 1 have established thatN(A)..l. we form AT v. X2 is orthogonal to any vector in S. that x = XI for example.l where x € M(A) and y € J\f(A)± = R(AT) (i. Then Theorem 3. (Note: This also holds for infinitedimensional vector spaces.= Af(AT) ) (i. It can write vectors in a unique way with respect to the corresponding subspaces. ft(Ar) (i. X2 is orthogonal to any vector in S.13.) Proof: To x E N(A). we decompositions. every vector v in the domain space IRn can be written in a unique way as v = x 7.l = 0 since the only vector s E S orthogonal to S1 = IRn.. XI) (which follows by rearranging the equation XI +X2 = x. Let A : Rn + IRm.e.. standing Figure 3. x 1 E Sand x2. See also Theorem 2. (Note: This holds only for finitedimensional vector spaces.e. Ax = 0 if and only if x equivalent to yT Ax = 0 for all y.. D Theorem 3. Ax = 0 if and only if x orthogonal is orthogonal to all vectors of the form AT y. But then (x. Thus. (Note: This for finitedimensional 1. Then R(A r )..l = R(A T}. In other words. Thus. IRm = R(A) EBN(A T»..l = N(A ). R(A). Thus. 0 x1 — x'1 andx2 = x2. since T x 2 Vj = XTVj .. E S and X2. the right nullspace is N(A) while the left nullspace is J\f(AT)..12.14 (Decomposition Theorem). every vector v in the domain space R" can be written in a unique way as v = x + y.) 2. Suppose. established that N(A) U(AT ). can write vectors in a unique way with respect to the corresponding subspaces. Vk and hence to any linear combination of these we see that X2 is orthogonal to VI.11 can be combined to give two very funTheorem 3. – x1) = 0 since 0 by definition of S. We have thus shown that S + S. x.. many properties of A can be developed in terms of the four fundamental subspaces . Then Theorem 3. In other words. Thus. Then Ax = 0 and this is an and equivalent to yT Ax = 0 for all v. Linear Transformations Then x\ E <S and. Rm = 7l(A) 0 M(AT)). (Note: This also holds for infinitedimensional vector spaces. When thought of as a linear transformation from IR n to Rm. Let A : IRn —> Rm. every vector in the codomain space R m can be written ina unique way asw = x+y. we see that x2 is orthogonal to v1. 2.l = 7£(AT).) N(A)1" spaces. 
Li near Transformations Chapters. x~ X2 = (x. for example.xn. XI = x. transformation A. Clearly.x1) (x'1 xd x2 — X2 = — (x'1 — x1) (which follows by rearranging the equation x1+x2 = x'1 + x'2). Since x was arbitrary. R" N(A) 0 ft(Ar ».12 and part 2 of Theorem 3. where x\. E N(A) and E N(A).. many properties of A can be developed in terms of the four fundamental subspaces to IRm.l = Rn. right nullspace of A. Let A : IRn > IRm. 0 The proof of the second part is similar and is left as an exercise. Let A : IRn > Rm. and x2 = x~. We also have that S U S. i. 3. x'2 E S1. But then (x'1 —XI)TT (x.l. Then 1. Similarly. N(A). Theorem 3. . But yT Ax = (ATyy{ x. (w e Rm : w T A = 0} is called the left nullspace of A. . But yT Ax = ( A T ) x. i. everything in S (i. Vk and hence to any linear combination of these vectors. . Similarly. y. including itself) is O.e.12.e. x~ e S. where XI.11 can be combined to give two very fundamental decompositions damental and useful decompositions of vectors in the domain and codomain of a linear and transformation A.•.XITVj =XTVjXTVj=O. D Definition 3. every vector w in the codomain space IRm can be written in a unique way as w = x+y. since Then XI e S and.X2) 0 since (x'1 — x1) (x' 2 — x2) = 0 by definition of ST.
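A numerical sketch of part 1 of the Decomposition Theorem (the matrix A below is an assumed example): an arbitrary v splits into a nullspace component and a row-space component.

```python
# Split v into x ∈ N(A) and y ∈ N(A)^⊥ = R(A^T).
import numpy as np
from scipy.linalg import null_space

A = np.array([[1., 2., 3.],
              [2., 4., 6.]])              # rank 1, so dim N(A) = 2, dim R(A^T) = 1
Z = null_space(A)                          # orthonormal basis for N(A)
v = np.array([1., -1., 2.])

x = Z @ (Z.T @ v)                          # orthogonal projection of v onto N(A)
y = v - x                                  # remainder lies in N(A)^⊥ = R(A^T)
print(np.allclose(A @ x, 0))               # True: x ∈ N(A)
c = np.linalg.lstsq(A.T, y, rcond=None)[0]
print(np.allclose(A.T @ c, y))             # True: y ∈ R(A^T)
```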
3.5 Four Fundamental Subspaces

Consider a general matrix A ∈ R_r^{m×n}. When thought of as a linear transformation from R^n to R^m, many properties of A can be developed in terms of the four fundamental subspaces R(A), R(A)^⊥, N(A), and N(A)^⊥. Figure 3.1 makes many key properties seem almost obvious and we return to this figure frequently both in the context of linear transformations and in illustrating concepts such as controllability and observability.

[Figure 3.1 (diagram): A and A^T mapping between the four fundamental subspaces, with dimensions r, n - r, and m - r indicated.]
Figure 3.1. Four fundamental subspaces.

Definition 3.15. Let V and W be vector spaces and let A : V → W be a linear transformation.
1. A is onto (also called epic or surjective) if R(A) = W.
2. A is one-to-one or 1-1 (also called monic or injective) if N(A) = 0. Two equivalent characterizations of A being 1-1 that are often easier to verify in practice are the following:
   (a) A v1 = A v2 implies v1 = v2.
   (b) v1 ≠ v2 implies A v1 ≠ A v2.

Definition 3.16. Let A : R^n → R^m. Then rank(A) = dim R(A). This is sometimes called the column rank of A (maximum number of independent columns). The row rank of A is
19 suggests looking atthe general problem of the four fundamental subspaces of matrix products.18. Tvrr]} is a basis for R(A). Clearly T is 11 (since A/"(T) = 0). . .11 and 3. Let A : Rn > Rm.17. dimension of the domain of A. B E R" xn ." 0 of D The following corollary is immediate.. u. + nullity(B). Finally. 3. rank(AB) = rank(BA) = rank(A) and N(BA) = N(A). ..andx22 e A/"(A).. take any W e R(A).. sometimes denoted nullity(A) or corank(A). Linear Transformations dim 7£(A r ) (maximum number of independent rows). R(A) : ]Rn ~ ]Rm. Tv abasis 7?..") of A.17. denoted nullity(A) or corank(A). . The last equality AXI x\ e N(A)L and jc E N(A). (Note: Since 3. + rank(B)  n :s rank(AB) :s min{rank(A). x x e R" x\ X2.17 we see immediately that Proof: From Theorems 3. shows that T is onto. where Ax — w. the subspaces themselves are not necessarily in the same vector space. . Let A. . nullity(B) :s nullity(AB) :s nullity(A) 4.17 we see immediately that n = dimN(A) = dimN(A) + dimN(A)L + dim R(A) . of Corollary 3.. by definition there is a vector x E ]Rn such that Ax = w. O:s rank(A 2. We thus have that dim R(A) = dimN(A)L since it is easily shown T dim7?. The dual notion to rank is the nullity R(AT) of independent rows). dimA/"(A) + dimft(A) = dimension of the domain of A. The basic results are contained in the following easily proved following theorem. Then dim K(A) = dimNCA)L.. dimA/'(A) ± (Note: 1 T T ).19 suggests looking at the general problem of the four fundamental Part 4 of Theorem 3. . . and is defined as dim A/"(A). if {ui.18. and is defined as dimN(A). colloquially of = rank of A. r*i *i E N(A)L. Then Ajti = W = TXI since Xl e A/^A).(A) = dimA/^A^ 1 if that if {VI. following follows we apply this and several previous results. {Tv\. Theorem 3.(A). iv} abasis forA/'CA) .. Write x = Xl + X2. 3. where n is the ]Rn > ]Rm. if B is nonsingular. Proof: From Theorems 3.19. Part 4 of Theorem 3. of A.") Proof: Proof: Define a linear transformation T : N(A)L ~ R(A) by J\f(A)~L —>• 7£(A) by Tv = Av for all v E N(A)L. 1. 1 1 Xl E A/^A) . Then 3.24 24 Chapter 3. . Then dimN(A) + dim R(A) = n. 0 For completeness. dimensions. Like the theorem. LinearTransformations Chapter3. .11 and 3. e ]Rnxn. . Theorem 3. of A. rank(B)}.19. To see that T is also onto. it is a statement about equality of dimensions. the following string of equalities follows easily: "column rank of A" = rank(A) = dim R(A) = dimN(A)L1 = dim R(AT) = rank(AT)) = A" rank(A) = dim7e(A) = dim A/^A) = dim7l(AT) = rank(A r = "column "row rank of A. the subspaces themselves are not necessarily in the same vector space. v r } is a basis for N(A)L. Let A : R" ~ Rm. rank(A) + B) :s rank(A) + rank(B). this theorem is sometimes colloquially stated "row rank of A = column N(A)L = R(A A/^A) " = 7l(A ). Then N(T) = To w E 7£(A). we include here a few miscellaneous results about ranks of sums completeness. then {TVI. and products of matrices.
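A quick numerical check (assumed random example) of Corollary 3.18 and item 2 of Theorem 3.19:

```python
# rank(A) + nullity(A) = n, and rank(A)+rank(B)-n <= rank(AB) <= min(rank(A), rank(B)).
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(2)
n = 5
A = rng.standard_normal((n, 3)) @ rng.standard_normal((3, n))   # rank 3
B = rng.standard_normal((n, 2)) @ rng.standard_normal((2, n))   # rank 2

rA, rB = np.linalg.matrix_rank(A), np.linalg.matrix_rank(B)
print(null_space(A).shape[1] + rA == n)                  # True (rank + nullity = n)
rAB = np.linalg.matrix_rank(A @ B)
print(rA + rB - n <= rAB <= min(rA, rB))                 # True
```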
Conversely.21. e 7?. 1. then dim V = dim W. e IRnxp. dim R(A) = m = rank(A). It The next theorem is closely related to Theorem 3. the transformations A. A : W1 + E" is invertible or nonsingular if and only z/rank(A) = n.ti = AT Ax2. Then y = Ax. AT A is nonsingular). A"1 ± are all 11 and onto between the two spaces M(A) and 7£(A). A : V + W is invertible (or bijective) if and only if it is 11 and onto. R«AB)T) S. Note that if A is invertible. RCAB) S. A : V —» W is invertible (or bijective) if and only if it is 11 and onto. N«AB)T) . Conversely. Let e IRmxn.and R(A). Note that in the special case when A E R"x". Let A E Rmxn. N(A T ) = N(AA T ).(A) — m — rank (A). R(A) = R(AA T ). Thus. linear least squares problems.5. A is 11 if and only z/rank(A) = n (A has linearly independent columns or is said to have full column rank. A is AT AXI AT AX2. Let jc = AT(AAT)~]y Y E Rn. We now characterize 11 and onto transformations and provide characterizations in We now characterize II and onto transformations and provide characterizations in terms of rank and invertibility. i.20 and is also easily proved.2 N(B). 2. Theorem 3. B E Rnxp. which implies that dimN(A)11 = n — Proof of part 2: If A is 11. Conversely. A is onto if and only if rank (A) = m (A has linearly independent rows or is said to have full row rank. AA is nonsingular). It is extremely useful in text that follows. Then 3. e IRmxn. A Proof of part 2: If A is 11. to have full column rank. N(A) = N(A T A). since ArA is invertible.2 N(A T ). The transformations AT are all 11 and onto between the two spaces N(A)1. equivalently. 2. The transformations AT and A I have the same domain and range but are in general different maps unless A is and A~! have the same domain and range but are in general different maps unless A is orthogonal. especially when dealing with pseudoinverses and is extremely useful in text that follows.22.(A). Theorem 3. then N(A) = 0. let y E IRm be arbitrary. Similar remarks apply to A and A~T. AT A nonsingular). Also.21. A € IR~xn. = R(A T A). 2. Ar. x AT (AAT)I e IRn. which implies that dim A/^A).22. Then A r A.17. AATT is nonsingular). 1. y E R(A). and hence dim 7£(A) = n by Theorem 3. RCA). R(AT) 3.20. 4.3. Proof' Proof of part 1: If A is onto. A : IRn »• IR n is invertible or Note that if A is invertible. R(B T ). let y e Rm Proof: Proof of part 1: If A is onto. dim7?.e. 4. : R n » Rm. D D 11. which implies x\ = x^. then A/"(A) = 0. Let A E Rmxn.20. Also. equivalently. have full row rank. AX2. A is 11 if and only ifrank(A) = n (A has linearly independent columns or is said 2.23. Four Fundamental Subspaces 3. nonsingular ifand only ifrank(A) = n. then dim V — dim W. . Definition 3. Then Theorem 3. 25 25 The next theorem is closely related to Theorem 3. terms of rank and invertibility.17. Definition 3. Conversely. A A T. Let A : IRn + IRm. so A is onto. and hence dim R(A) n by Theorem 3.—n = dim 7£(A r ). A is onto if and only //"rank(A) — m (A has linearly independent rows or is said to 1.23. N(AB) . Then 3.5.20 and is also easily proved. and AI A. especially when dealing with pseudoinverses and linear least squares problems. equivalently. Four Fundamental Subspaces Theorem 3. 1. suppose AXI = Ax^. AT. suppose Ax\ dim R(A T). XI = X2 AT A A 11.. equivalently. 3.
If a linear transformation is not invertible, it may still be right or left invertible. Definitions of these concepts are followed by a theorem characterizing left and right invertible transformations.

Definition 3.24. Let A : V → W. Then
1. A is said to be right invertible if there exists a right inverse transformation A^{-R} : W → V such that A A^{-R} = I_W, where I_W denotes the identity transformation on W.
2. A is said to be left invertible if there exists a left inverse transformation A^{-L} : W → V such that A^{-L} A = I_V, where I_V denotes the identity transformation on V.

Theorem 3.25. Let A : V → W. Then
1. A is right invertible if and only if it is onto.
2. A is left invertible if and only if it is 1-1.
Moreover, A is invertible if and only if it is both right and left invertible, i.e., both 1-1 and onto, in which case A^{-1} = A^{-R} = A^{-L}.

Note: From Theorem 3.22 we see that if A : R^n → R^m is onto, then a right inverse is given by A^{-R} = A^T (AA^T)^{-1}. Similarly, if A is 1-1, then a left inverse is given by A^{-L} = (A^T A)^{-1} A^T.

Theorem 3.26. Let A : V → V.
1. If there exists a unique right inverse A^{-R} such that A A^{-R} = I, then A is invertible.
2. If there exists a unique left inverse A^{-L} such that A^{-L} A = I, then A is invertible.

Proof: We prove the first part and leave the proof of the second to the reader. Notice the following:
    A(A^{-R} + A^{-R}A - I) = AA^{-R} + AA^{-R}A - A
                            = I + IA - A   (since AA^{-R} = I)
                            = I.
Thus, (A^{-R} + A^{-R}A - I) must be a right inverse and, therefore, by uniqueness it must be the case that A^{-R} + A^{-R}A - I = A^{-R}. But this implies that A^{-R}A = I, i.e., that A^{-R} is a left inverse. It then follows from Theorem 3.25 that A is invertible.
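The explicit formulas in the Note following Theorem 3.25 are easy to check numerically (the matrices below are assumed random full-rank examples, not from the text):

```python
# A^T (A A^T)^{-1} is a right inverse when A is onto; (A^T A)^{-1} A^T is a left inverse when A is 1-1.
import numpy as np

rng = np.random.default_rng(6)
A_onto = rng.standard_normal((2, 4))                     # full row rank (onto)
A_11 = rng.standard_normal((4, 2))                       # full column rank (1-1)

A_R = A_onto.T @ np.linalg.inv(A_onto @ A_onto.T)        # right inverse of A_onto
A_L = np.linalg.inv(A_11.T @ A_11) @ A_11.T              # left inverse of A_11
print(np.allclose(A_onto @ A_R, np.eye(2)))              # True
print(np.allclose(A_L @ A_11, np.eye(2)))                # True
```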
3. The matrix 3. J E2.1] is a left inverse. ThenAis 11. with Y e ]Rnxn (X. Prove Theorem 3. whence A/"(A) = 0 so A is 11). it is clear that there are A L = [3 — 1] infinitely many left inverses for A. Let A = [8 5 i) and consider A as a linear transformation mapping E3 to ]R2. Prove Theorem 3. Consider the vector space R nx " over E.Exercises 27 2. Y) = Tr(X Tr F). respect to this inner product. II? Is £.3. For matrices matrices. below bases for its four fundamental subspaces. Let A = [~ . 4. Consider the vector space ]Rnxn over ]R.4. Let A = [i]:]Rl > ]R2. Show that. LetA [J] : E1 ~ E2. It is now obvious that A has full column rank (=1) and A~L = [3 . The matrix A = 1 1 2 1 [ 3 1 when considered as a linear transformation onIE \ is neither 11 nor onto. and let R denote the subspace of skewsymmetric matrices. Find the matrix representation of A with respect to the bases Find the matrix representation of A to bases {[lHHU]} of R3 and {[il[~J} of E . Again. 2 . In Chapter 6 we characterize all left inverses of a infinitely many left inverses for A. (Proof: The only solution to 0 = Av = [I2]v is v = 0. and let 7£ denote the subspace of skewsymmetric matrices. 'R. let S denote the subspace of symmetric matrices. consider A linear transformation ]R3 1. — S^. y) = Tr(X Y). Then A is 11. For matrices X. whence N(A) = 0 so A is 11). We give below bases for its four fundamental subspaces. . £. respect to this inner product. R = S J. We give when considered as a linear on ]R3. is neither 11 nor onto. 2. let denote the subspace of symmetric 2. Consider differentiation £ 11? Is£ onto? onto? 4. (Proof Theonly solution toO = Av = [i]v 2. Y E Enx" define their inner product by (X. In Chapter 6 we characterize all left inverses of a matrix by characterizing all solutions of the linear matrix equation LA = I.4.2. Consider the differentiation operator C defined in Example 3. matrix characterizing all solutions of the linear matrix equation LA = I. 3. Is £. EXERCISES EXERCISES 3 4 1. It is now obvious that A has full column is v 0.
Chapter 3. If E 1R~9X48.12.1 11. Determine bases for the four fundamental subspaces of the matrix Detennine fundamental A=[~2 5 5 ~]. Show that AT has a right inverse. .28 5. Let A = [ J o]. How many linearly independent solutions can be found to the 10. Rnxm thought of as a transformation from Rm to IRn. Suppose A E IR m xn has a left inverse. Prove Theorem 3.1 to illustrate the four fundamental subspaces associated with AT e associated ATE nxm IR from IR m R". Linear Transformations 7.4.4. if not. Prove Theorem 3. Theorem 6. 3. Linear Transformations Chapters. provide a counterexample.11. left T Suppose e Rmxn 9. Determine A/"(A) and 7£(A).2. prove it. Modify Figure 3. linearly independent solutions 10. Are they equal? Is this true in general? If this is true in general. ~ ~ 3 8.Il. Let = [~ 9. Suppose A € Mg 9x48 . Are they equal? Is this true in general? DetennineN(A) and R(A). homogeneous linear system Ax = 0? homogeneous linear system Ax = O? n 3.
Chapter 4

Introduction to the Moore-Penrose Pseudoinverse

In this chapter we give a brief introduction to the Moore-Penrose pseudoinverse, a generalization of the inverse of a matrix. The Moore-Penrose pseudoinverse is defined for any matrix and, as is shown in the following text, brings great notational and conceptual clarity to the study of solutions to arbitrary systems of linear equations and linear least squares problems.

4.1 Definitions and Characterizations

Consider a linear transformation A : X → Y, where X and Y are arbitrary finite-dimensional vector spaces. Define a transformation T : N(A)^⊥ → R(A) by
    Tx = Ax for all x ∈ N(A)^⊥.
Then, as noted in the proof of Theorem 3.17, T is bijective (1-1 and onto), and hence we can define a unique inverse transformation T^{-1} : R(A) → N(A)^⊥. This transformation can be used to give our first definition of A^+, the Moore-Penrose pseudoinverse of A. Although X and Y were arbitrary vector spaces above, let us henceforth consider the case X = R^n and Y = R^m. We have thus defined A^+ for all A ∈ R^{m×n}.

Definition 4.1. With A and T as defined above, define a transformation A^+ : Y → X by
    A^+ y = T^{-1} y1,
where y = y1 + y2 with y1 ∈ R(A) and y2 ∈ R(A)^⊥. Then A^+ is the Moore-Penrose pseudoinverse of A.

Unfortunately, the definition neither provides nor suggests a good computational strategy for determining A^+. A purely algebraic characterization of A^+ is given in the next theorem, which was proved by Penrose in 1955; see [22].
Unfortunately. Note that the inverse of a nonsingular matrix satisfies all four Penrose properties. 19]. Also. Example 4.30 Chapter 4. p.2 nor its proof suggests a computawith Definition 4. then by uniqueness. it must be A +. as with Definition 4. (P2). A L = [3 — 1]) satisfy properties (PI). A~ = [3 . Given a matrix G that is a candidate for being checkable criterion in the following sense.1. A+ = (AT A)~ AT if A is 11 (independent columns) (A is left invertible). Introduction to the MoorePenrose Pseudoinverse Theorem 4. While not generally suitable for computer implementation. Furthermore. A t = AT (AA )~ if A is onto (independent rows) (A is Example 4. Then G = A+ if and only if Theorem 4. p.2 4. (P3) (AGf (P3) (AG)T = AG. For any scalar a. Also. straightforward. X+ = AT(AATT) I if A is onto (independent rows) (A is right invertible). whose proof can be found in [1.3. If G the pseudoinverse of A.2. (P4) (GA)T = GA. For any scalar a. Unfortunately. A+ = (AT A)I AT if A is 11 (independent columns) (A is left invertible). Introduction to the MoorePenrose Pseudoinverse Chapter 4." xn. neither the statement of Theorem 4. Verify directly that A+ = Example 4. the Penrose properties do offer the great virtue of providing a checkable criterion in the following sense. whose proof Still another characterization of A + is given in the following theorem. one need simply verify the four Penrose conditions (P1)(P4)." xn. (P2) GAG G. Note that the inverse of a nonsingular matrix satisfies all four Penrose properties. if a =0. this can be found in [1. . Such a verification is often relatively satisfies all four.2 Examples Examples Each of the following can be derived or verified by using the above definitions or characEach of the following can be derived or verified by using the above definitions or characterizations. Verify directly that A+ = [ ~] satisfies (PI)(P4). 19].6. L Note that other left inverses (for example. Let A e R™xn. one need simply verify the four Penrose conditions (P1)(P4). as a right or left inverse satisfies no fewer than three of the four properties. However. then by uniqueness. Then A+ [a [! = lim (AT A + 82 1) I AT 6+0 6+0 (4. Still another characterization of A+ is given in the following theorem.5. A + always exists and is unique.4. If G satisfies all four. Example 4. (P4) (GA)T = GA. neither the statement of Theorem 4.7. While not generally suitable for computer implementation.3. Such a verification is often relatively straightforward. (PI) AGA = A. Let A e R?xn Then G = A + if and only if (Pl) AGA = A. Let A E lR. terizations. a right or left inverse satisfies no fewer than three of the four properties. (P2). Let A E lR. Note that other left inverses (for example. Theorem 4.6.1) = limAT(AAT +8 2 1)1.1.2 nor its proof suggests a computational algorithm. Example 4. and (P4) but not (P3). Example 4. A+ always exists and is unique. AG. Given a matrix G that is a candidate for being the pseudoinverse of A.. Consider A = f ] satisfies (P1)(P4).7. Example 4.4. this characterization can be useful for hand calculation of small examples. (4.5. if a t= 0.1]) satisfy properties (PI).2.2) 4. Then Theorem 4. characterization can be useful for hand calculation of small examples. the Penrose properties do offer the great virtue of providing a tional algorithm. (P2) GAG = G. it must be A+. Consider A = [']. and (P4) but not (P3). = Furthermore. However. Example 4.
. Many of these are used in the text that follows. Then S+ = U D+UT. The proof of the first result is not particularly easy and does not even have the virtue of being proof of the first result is not particularly easy and does not even have the virtue of being especially illuminating.3. The interested reader can consult the proof in [1.). The proof of the second result (which can also be proved easily by verifying the four Penrose proof of the second result (which can also be proved easily by verifying the four Penrose conditions) is as follows: conditions) is as follows: (A T )+ = lim (AA T ~+O + 82 l)IA = lim [AT(AAT ~+O + 82 l)1{ + 82 l)1{ 0 = [limAT(AAT ~+O = (A+{. Then orthogonal if MT = M. Then S+ UD+U T where D+ is again a diagonal matrix whose diagonal elements are determined according to Example 4. 2. 31 31 Example 4.3 Properties and Applications Properties and Applications This section presents some miscellaneous useful results on pseudoinverses. For any vector v E M". 27]. Properties and Applications 4. where D+ is again a diagonal matrix whose diagonc D is diagonal. The interested reader can consult the proof in [1.13. Example 4.12. . D Theorem 4. simply verify that the expression above does indeed satisfy each c the four Penrose conditions. Proof: Both results can be proved using the limit characterization of Theorem 4. 0 the four Penrose conditions.3 4. Let S E Rnxn be symmetric with U TSU = D. 27].4.VVEejRnxnx " are orthogonal (M is 4. Theorem 4.12. For any vector e jRn. Example 4.10. The Proof: Both results can be proved using the limit characterization of Theorem 4. simply verify that the expression above does indeed satisfy each of Proof: For the proof. For all A E jRmxn. . Let S e jRnxn be symmetric with UT SU = D. For A e Rmxn 1. where U is orthogonal an Theorem 4. if v i= 0.9. Theorem 4.11. 4.7.11. where U is orthogonal and D is diagonal. .3. [~ r 1 =[ 4 4 I I ~l 4 I I 4 4.9.4.8. Properties and Applications Example 4. if v = O. Let A E R m x "and suppose UUEejRmxm. Many of these This section presents some miscellaneous useful results on pseudoinverses. A+ = (AT A)+ AT = AT (AA T)+.10. The especially illuminating. are used in the text that follows. Example 4. (A T )+ = (A+{. p. elements are determined according to Example 4. e jRmxn and suppose Rmxm R n are orthogonal (M is T 1 1 orthogonal if M M ).13.8. p. Then Proof: For the proof. [~ ~ r ~ =[ 0 Example 4.4.7.
• Similarly. properties Theorem 4..32 Chapter 4. necessary and sufficient conditions under which the reverseorder property does hold are known and we quote a couple of moderately useful results for reference.12 and 4.15. 3. see [9]. (AB)+ = B+ A + if and only if 1. A+ = (AT A)I AT. however (see. (AB)+ = B?A+. [23]). = n(A T) = n(A+ A) = n(A TA). B E Rrrxm. Ir Similarly. Proof' A+ A Proof: Since A E Rnrxr. we have B = B (BBT)~\ whence BB+ = Ir. (AA T )+ = (A T)+ A+. in theory at least.11 is suggestive of a "reverseorder" property for pseudoinverses of prodTheorem 4. where BI = A+AB and A) = ABIB{. [] sufficient reverseorder However. . we B+ BT(BBT)I. For e Rmxn . compute 4. in general. whence A+A = f r. [9].g. Theorem 4. Introduction to the MoorePenrose Pseudoinverse Chapter 4. xm + T B e Wr . Proof: Proof: For the proof. where BI = A+ AB and AI = AB\B+. TTnfortnnatelv. and better methods are suggested in text that follows.14.15. The result then follows by E lR. [11].13 can. then (AB)+ = B+ A+. Theorem 4. (AB)+ = B+A+.. (AT A)+ = A+(A T)+. N(A+) 5. Proof: Proof: For the proof. then AkA+ = A+ Ak and (Ak)+ = (A+)kforall integers k > O. (AB)+ = B+A+ if and only if 4.g. = N(AA+) = N«AA T)+) = N(AA T) = N(A T). 0 The following theorem gives some additional useful properties of pseudoinverses. n(A T AB) ~ nCB) . in peneraK ucts of matrices such as exists for inverses of products. then A+ = (ATA)~lAT.16.12 Note that by combining Theorems 4. 4.11 nets of matrices such as exists for inverses of nroducts Unfortunately. If e lR~xm. e.13 we can. poor (see. [7]. (A+)+ = A. Then (AB)+ = 1+ = I while while B+ A+ = [~ ~J ~ = ~. Theorem 4. 0 D Theorem 4.17. n(A+) 4.. This A AT AT turns out to be a poor approach in finiteprecision arithmetic. [7]. As an example consider A = [0 1J and B = [ : J.15. For all A E lR mxn .14.At = A in Theorem 4.17. 0 D E lR~xr. (AB)+ = B{ Ai.15. n(BB T AT) ~ n(AT) and 2. 2. 1. Then As an example consider A = [0 I] and B = LI.16. since e lR~xr. see [5]. [II]. A\ = A in Theorem 4. 4. [5]. the MoorePenrose pseudoinverse of any matrix (since AAT and AT A are symmetric). If A e Rnrxr. BB+ f r The by taking BI = B. e. Introduction to the MoorePenrose Pseudo inverse 4. 4. If A is normal. D takingB t = B.xm.
we have shown where one of the Penrose properties is used above. prove that 7£(A) = 7£(AA r ) using only definitions and elementary properties of the MoorePenrose pseudoinverse.1 D. (a) Prove or disprove that Prove or disprove that [~ (b) Prove or disprove that (b) Prove or disprove that AB D [~ B D r r=[ =[ A+ 0 A+ABD. and D E E mxm and suppose further that D is nonsingular. y E IRn. Theorem 4. U(A) if and only if AA+B = B. Use Theorem 4. Proof: Suppose K(B) S. For A e Rmxn. where one of the Penrose properties is used above. fiA+A B. Then we have Bx = Ay = AA + Ay = AA + Bx. Then K(B) S. If jc. so there exists a vector y E Rp such that Ay = Bx. For A E IRmxn. such as preceding but still be normal.i D. e IRnxm. However. then it is normal. (xyT)+ = (xTx)+(yTy) yx T 3.i ]. Then there exists a vector x E Rm such that Bx = y. For example. Then B and take arbitrary y E R(B). For example. R(A) and take arbitrary x e IRm. Then Bx E H(B) S. ft(A+) ft(Ar 5. Let A G M"xn. prove that R(A+) = R(A T).4 to compute the pseudoinverse of \ 2 1. a matrix can be none of the skewsymmetric. 4. A E IRn xn B E E n xm 6. properties of the MoorePenrose pseudoinverse. Since x was arbitrary. or orthogonal. Then R(B) c R(A) if and only if Suppose e IRnxp.18. assume that AA + B To prove the converse. To prove the converse. problems.. 2. A e IRPxn thatN(A) S. then it is normal. whereupon y = Bx = AA+Bx E R(A). RCA). we have shown that B = AA+B.]. a matrix can be none of the preceding but still be normal. Then Bx e R(B) c H(A). Then we have there exists a vector y e IRP such that Ay = Bx. Use Theorem 4. or orthogonal. Note: Recall that A e R" xn is normal if AATT = AT A. = B. The next theorem is fundamental to facilitating a compact and unifying approach The next theorem is fundamental to facilitating a compact and unifying approach to studying the existence of solutions of (matrix) linear equations and linear least squares to studying the existence of solutions of (matrix) linear equations and linear least squares problems. For A e R m x n . Since x was arbitrary. However. prove that RCA) = R(AAT) using only definitions and elementary 3. B E E M X m . € IRm xm D 6. b e R for scalars a. skewsymmetric. show that (xyT)+ = (x Tx)+(yT y)++yxT. 0 EXERCISES EXERCISES 1.Exercises 33 Note: Recall that A E IRn xn is normal if A A = A T A. that B = AA+ B.i l .• 1 2 x. Suppose A E Rnxp. A E IRmxn. 5 e JRn x m . show that JV(A) C A/"(S) if and only if BA+ A = B. b E E. Y e R". whereupon there exists a vector x e IR m such that Bx = y. N(B) and 5 € IRmxn. so Proof: Suppose R(B) c U(A) and take arbitrary jc E Rm. A+ 0 A+BD. such as A=[ b a a b] for scalars a.4 to compute the pseudoinverse of U . if A is symmetric. assume that AA+B = B and take arbitrary y e K(B). For A E Rpxn and BE R mx ". if A is symmetric.
Chapter 5

Introduction to the Singular Value Decomposition

In this chapter we give a brief introduction to the singular value decomposition (SVD). We show that every matrix has an SVD and describe some useful properties and applications of this important matrix factorization. The SVD plays a key conceptual and computational role throughout (numerical) linear algebra and its applications.

5.1 The Fundamental Theorem

Theorem 5.1. Let A be in R_r^{m x n}. Then there exist orthogonal matrices U in R^{m x m} and V in R^{n x n} such that
A = U Sigma V^T,    (5.1)
where Sigma = [S 0; 0 0], S = diag(sigma_1, ..., sigma_r) in R^{r x r}, and sigma_1 >= ... >= sigma_r > 0. More specifically, we have
A = [U_1  U_2] [S 0; 0 0] [V_1^T; V_2^T]    (5.2)
  = U_1 S V_1^T.    (5.3)
The submatrix sizes are all determined by r (which must be <= min{m, n}), i.e., U_1 in R^{m x r}, U_2 in R^{m x (m-r)}, V_1 in R^{n x r}, V_2 in R^{n x (n-r)}, and the 0-subblocks in Sigma are compatibly dimensioned.

Proof: Since A^T A >= 0 (A^T A is symmetric and nonnegative definite; recall, for example, [24, Ch. 6]), its eigenvalues are all real and nonnegative. (Note: The rest of the proof follows analogously if we start with the observation that AA^T >= 0, and the details are left to the reader as an exercise.) Denote the set of eigenvalues of A^T A by {sigma_i^2, i = 1, ..., n} with sigma_1 >= ... >= sigma_r > 0 = sigma_{r+1} = ... = sigma_n. Let {v_i, i = 1, ..., n} be a set of corresponding orthonormal eigenvectors and let V_1 = [v_1, ..., v_r], V_2 = [v_{r+1}, ..., v_n]. Letting S = diag(sigma_1, ..., sigma_r), we can write A^T A V_1 = V_1 S^2. Premultiplying by V_1^T gives V_1^T A^T A V_1 = V_1^T V_1 S^2 = S^2, the latter equality following from the orthonormality of the v_i vectors. Pre- and postmultiplying by S^{-1} gives the equation
S^{-1} V_1^T A^T A V_1 S^{-1} = I.    (5.4)
Turning now to the eigenvalue equations corresponding to the eigenvalues sigma_{r+1}, ..., sigma_n, we have that A^T A V_2 = V_2 0 = 0, whence V_2^T A^T A V_2 = 0. Thus, AV_2 = 0. Now define the matrix U_1 in R^{m x r} by U_1 = A V_1 S^{-1}. Then from (5.4) we see that U_1^T U_1 = I; i.e., the columns of U_1 are orthonormal. Choose any matrix U_2 in R^{m x (m-r)} such that [U_1 U_2] is orthogonal. Then
U^T A V = [U_1^T A V_1   U_1^T A V_2; U_2^T A V_1   U_2^T A V_2] = [U_1^T A V_1   0; U_2^T A V_1   0]
since AV_2 = 0. Referring to the equation U_1 = A V_1 S^{-1} defining U_1, we see that U_1^T A V_1 = S and U_2^T A V_1 = U_2^T U_1 S = 0, the latter equality following from the orthogonality of the columns of U_1 and U_2. Thus, we see that, in fact, U^T A V = [S 0; 0 0], and defining this matrix to be Sigma completes the proof.

Definition 5.2. Let A = U Sigma V^T be an SVD of A as in Theorem 5.1.
1. The set {sigma_1, ..., sigma_r} is called the set of (nonzero) singular values of the matrix A and is denoted Sigma(A). From the proof of Theorem 5.1 we see that sigma_i(A) = lambda_i^{1/2}(A^T A) = lambda_i^{1/2}(AA^T). Note that there are also min{m, n} - r zero singular values.
2. The columns of U are called the left singular vectors of A (and are the orthonormal eigenvectors of AA^T).
3. The columns of V are called the right singular vectors of A (and are the orthonormal eigenvectors of A^T A).

Remark 5.3. The analogous complex case in which A is in C_r^{m x n} is quite straightforward. The decomposition is A = U Sigma V^H, where U and V are unitary and the proof is essentially identical, except for Hermitian transposes replacing transposes.

Remark 5.4. Note that U and V can be interpreted as changes of basis in both the domain and co-domain spaces with respect to which A then has a diagonal matrix representation. Specifically, let C denote A thought of as a linear transformation mapping R^n to R^m. Then rewriting A = U Sigma V^T as AV = U Sigma we see that Mat C is Sigma with respect to the bases {v_1, ..., v_n} for R^n and {u_1, ..., u_m} for R^m (see the discussion in Section 3.2). See also Remark 5.16.

Remark 5.5. The singular value decomposition is not unique. For example, an examination of the proof of Theorem 5.1 reveals that
- any orthonormal basis for N(A) can be used for V_2;
- there may be nonuniqueness associated with the columns of V_1 (and hence U_1) corresponding to multiple sigma_i's;
- any U_2 can be used so long as [U_1 U_2] is orthogonal;
- columns of U and V can be changed (in tandem) by sign (or multiplier of the form e^{j theta} in the complex case).
What is unique, however, is the matrix Sigma and the span of the columns of U_1, V_1, U_2, and V_2 (see Theorem 5.11). Note, too, that a "full SVD" (5.2) can always be constructed from a "compact SVD" (5.3). A factorization U Sigma V^T of an m x n matrix A qualifies as an SVD if U and V are orthogonal and Sigma is an m x n "diagonal" matrix whose diagonal elements in the upper left corner are positive (and ordered). For example, if A = U Sigma V^T is an SVD of A, then V Sigma^T U^T is an SVD of A^T.
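A brief numerical illustration of Theorem 5.1 (added here; it assumes Python with NumPy and an arbitrary example matrix):

    import numpy as np

    rng = np.random.default_rng(2)
    A = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 4))   # generically rank 3, m=5, n=4

    U, s, Vt = np.linalg.svd(A)          # full SVD: U is 5x5, Vt is 4x4, s holds the singular values
    Sigma = np.zeros_like(A)
    Sigma[:len(s), :len(s)] = np.diag(s)

    print(np.allclose(U @ Sigma @ Vt, A))                                        # A = U Sigma V^T
    print(np.allclose(U.T @ U, np.eye(5)), np.allclose(Vt @ Vt.T, np.eye(4)))    # U, V orthogonal
    # Singular values are square roots of the eigenvalues of A^T A (Definition 5.2).
    print(np.allclose(np.sort(s**2), np.sort(np.linalg.eigvalsh(A.T @ A))))      # True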
Remark 5.6. Computing an SVD by working directly with the eigenproblem for A^T A or AA^T is numerically poor in finite-precision arithmetic. Better algorithms exist that work directly on A via a sequence of orthogonal transformations; see, e.g., [7], [11], [25].

Example 5.7. For A = [1 0; 0 1], an SVD is given by A = U I U^T, where U is an arbitrary 2 x 2 orthogonal matrix.

Example 5.8.
A = [1 0; 0 -1] = [cos theta   sin theta; sin theta   -cos theta] [1 0; 0 1] [cos theta   -sin theta; sin theta   cos theta],
where theta is arbitrary, is an SVD.

Example 5.9.
A = [2 2; 1 1; 2 2]
  = [2/3   sqrt(5)/5    4 sqrt(5)/15;
     1/3  -2 sqrt(5)/5  2 sqrt(5)/15;
     2/3   0           -sqrt(5)/3]  [3 sqrt(2)  0; 0  0; 0  0]  [sqrt(2)/2   sqrt(2)/2; -sqrt(2)/2   sqrt(2)/2]
is an SVD.

Example 5.10. Let A in R^{n x n} be symmetric and positive definite. Let V be an orthogonal matrix of eigenvectors that diagonalizes A, i.e., V^T A V = Lambda > 0. Then A = V Lambda V^T is an SVD of A.
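The worked decomposition of Example 5.9, as reconstructed above, can be checked numerically; the following sketch is an added illustration (NumPy assumed):

    import numpy as np

    s5, s2 = np.sqrt(5), np.sqrt(2)
    A = np.array([[2., 2.], [1., 1.], [2., 2.]])
    U = np.array([[2/3,  s5/5,   4*s5/15],
                  [1/3, -2*s5/5, 2*s5/15],
                  [2/3,  0.0,   -s5/3]])
    Sig = np.array([[3*s2, 0.], [0., 0.], [0., 0.]])
    Vt = np.array([[s2/2, s2/2], [-s2/2, s2/2]])

    print(np.allclose(U @ Sig @ Vt, A))            # A = U Sigma V^T
    print(np.allclose(U.T @ U, np.eye(3)))         # U orthogonal
    print(np.allclose(Vt @ Vt.T, np.eye(2)))       # V orthogonal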
5.2 Some Basic Properties

Theorem 5.11. Let A in R^{m x n} have a singular value decomposition A = U Sigma V^T. Using the notation of Theorem 5.1, the following properties hold:
1. rank(A) = r = the number of nonzero singular values of A.
2. Let U = [u_1, ..., u_m] and V = [v_1, ..., v_n]. Then A has the dyadic (or outer product) expansion
   A = sum_{i=1}^r sigma_i u_i v_i^T.    (5.5)
3. The singular vectors satisfy the relations
   A v_i = sigma_i u_i,    (5.6)
   A^T u_i = sigma_i v_i    (5.7)
   for i = 1, ..., r.
4. Let U_1 = [u_1, ..., u_r], U_2 = [u_{r+1}, ..., u_m], V_1 = [v_1, ..., v_r], and V_2 = [v_{r+1}, ..., v_n]. Then
   (a) R(U_1) = R(A) = N(A^T)-perp;
   (b) R(U_2) = R(A)-perp = N(A^T);
   (c) R(V_1) = N(A)-perp = R(A^T);
   (d) R(V_2) = N(A) = R(A^T)-perp.

Remark 5.12. Part 4 of the above theorem provides a numerically superior method for finding (orthonormal) bases for the four fundamental subspaces compared to methods based on, for example, reduction to row or column echelon form. Note that each subspace requires knowledge of the rank r. The relationship to the four fundamental subspaces is summarized nicely in Figure 5.1.

Remark 5.13. The elegance of the dyadic decomposition (5.5) as a sum of outer products and the key vector relations (5.6) and (5.7) explain why it is conventional to write the SVD as A = U Sigma V^T rather than, say, A = U Sigma V.

Theorem 5.14. Let A in R^{m x n} have a singular value decomposition A = U Sigma V^T as in Theorem 5.1. Then
A+ = V [S^{-1} 0; 0 0] U^T,    (5.8)
where the 0-subblocks are appropriately sized.
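The rank, dyadic expansion, and nullspace statements of Theorem 5.11 are easy to confirm numerically. The sketch below is an added illustration (NumPy assumed; the example matrix is arbitrary):

    import numpy as np

    rng = np.random.default_rng(3)
    A = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 4))   # rank 2
    U, s, Vt = np.linalg.svd(A)
    r = int(np.sum(s > 1e-10 * s[0]))                               # numerical rank (part 1)
    print(r)                                                        # 2

    # Part 2: dyadic expansion A = sum_i sigma_i u_i v_i^T.
    A_dyadic = sum(s[i] * np.outer(U[:, i], Vt[i, :]) for i in range(r))
    print(np.allclose(A_dyadic, A))                                 # True

    # Part 4: the trailing right singular vectors span N(A).
    V2 = Vt[r:, :].T
    print(np.allclose(A @ V2, 0))                                   # True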
[Figure 5.1. SVD and the four fundamental subspaces.]

Proof: The proof follows easily by verifying the four Penrose conditions.

Furthermore, if we let the columns of U and V be as defined in Theorem 5.11, then
A+ = sum_{i=1}^r (1/sigma_i) v_i u_i^T.    (5.9)

Remark 5.15. Note that none of the expressions above quite qualifies as an SVD of A+ if we insist that the singular values be ordered from largest to smallest. However, a simple reordering accomplishes the task:
A+ = sum_{i=1}^r (1/sigma_{r+1-i}) v_{r+1-i} u_{r+1-i}^T.    (5.10)
This can also be written in matrix terms by using the so-called reverse-order identity matrix (or exchange matrix) P = [e_r, e_{r-1}, ..., e_1], which is clearly orthogonal and symmetric. Then
A+ = (V_1 P)(P S^{-1} P)(P U_1^T)    (5.11)
is the matrix version of (5.10).
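Equation (5.9) can be compared directly against a library pseudoinverse. The following sketch is an added illustration (NumPy assumed; the example matrix is arbitrary):

    import numpy as np

    rng = np.random.default_rng(4)
    A = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 4))   # rank 3
    U, s, Vt = np.linalg.svd(A)
    r = 3

    # Equation (5.9): A+ = sum_i (1/sigma_i) v_i u_i^T.
    Aplus = sum((1.0 / s[i]) * np.outer(Vt[i, :], U[:, i]) for i in range(r))
    print(np.allclose(Aplus, np.linalg.pinv(A)))                    # True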
Remark 5.16. Recall the linear transformation T used in the proof of Theorem 3.17 and in Definition 4.1. Since T is determined by its action on a basis, and since {v_1, ..., v_r} is a basis for N(A)-perp, then T can be defined by T v_i = sigma_i u_i, i = 1, ..., r. Similarly, since {u_1, ..., u_r} is a basis for R(A), then T^{-1} can be defined by T^{-1} u_i = (1/sigma_i) v_i, i = 1, ..., r. From Section 3.2, the matrix representation for T with respect to the bases {v_1, ..., v_r} and {u_1, ..., u_r} is clearly S, while the matrix representation for the inverse linear transformation T^{-1} with respect to the same bases is S^{-1}.

5.3 Row and Column Compressions

Row compression

Let A in R^{m x n} have an SVD given by (5.1). Then
U^T A = Sigma V^T = [S 0; 0 0] [V_1^T; V_2^T] = [S V_1^T; 0] in R^{m x n}.
Notice that N(A) = N(U^T A) = N(S V_1^T) and the matrix S V_1^T in R^{r x n} has full row rank. In other words, premultiplication of A by U^T is an orthogonal transformation that "compresses" A by row transformations. Such a row compression can also be accomplished by orthogonal row transformations performed directly on A to reduce it to the form [R; 0], where R is upper triangular. Both compressions are analogous to the so-called row-reduced echelon form which, when derived by a Gaussian elimination algorithm implemented in finite-precision arithmetic, is not generally as reliable a procedure. A "full SVD" can be similarly constructed.

Column compression

Again, let A in R^{m x n} have an SVD given by (5.1). Then
A V = U Sigma = [U_1 U_2] [S 0; 0 0] = [U_1 S   0] in R^{m x n}.
This time, notice that R(A) = R(AV) = R(U_1 S) and the matrix U_1 S in R^{m x r} has full column rank. In other words, postmultiplication of A by V is an orthogonal transformation that "compresses" A by column transformations. Such a compression is analogous to the so-called column-reduced echelon form, which is not generally a reliable procedure when performed by Gauss transformations in finite-precision arithmetic. For details, see, for example, [7], [11], [23], [25].
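Row and column compression are one-line checks numerically. The sketch below is an added illustration (NumPy assumed; the example matrix is arbitrary):

    import numpy as np

    rng = np.random.default_rng(5)
    A = rng.standard_normal((6, 2)) @ rng.standard_normal((2, 4))   # rank 2
    U, s, Vt = np.linalg.svd(A)
    r = 2

    # Row compression: U^T A has all rows beyond the r-th equal to zero.
    print(np.allclose((U.T @ A)[r:, :], 0))          # True
    # Column compression: A V has all columns beyond the r-th equal to zero.
    print(np.allclose((A @ Vt.T)[:, r:], 0))         # True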
EXERCISES

1. Let X be in R^{m x n}. If X^T X = 0, show that X = 0.

2. Prove Theorem 5.1 starting from the observation that AA^T >= 0.

3. Let A in R^{n x n} be symmetric but indefinite. Determine an SVD of A.

4. Let x in R^m, y in R^n be nonzero vectors. Determine an SVD of the matrix A in R_1^{m x n} defined by A = xy^T.

5. Determine SVDs of the matrices
   (a) [...]
   (b) [...]

6. Let A be in R^{m x n} and suppose W in R^{m x m} and Y in R^{n x n} are orthogonal.
   (a) Show that A and WAY have the same singular values (and hence the same rank).
   (b) Suppose that W and Y are nonsingular but not necessarily orthogonal. Do A and WAY have the same singular values? Do they have the same rank?

7. Let A be in R_n^{n x n}. Use the SVD to determine a polar factorization of A, i.e., A = QP where Q is orthogonal and P = P^T > 0. Note: this is analogous to the polar form z = r e^{i theta} of a complex scalar z (where i = j = sqrt(-1)).
Chapter 6

Linear Equations

In this chapter we examine existence and uniqueness of solutions of systems of linear equations. General linear systems of the form
A X = B,  A in R^{m x n}, B in R^{m x k},    (6.1)
are studied and include, as a special case, the familiar vector system
A x = b,  A in R^{m x n}, b in R^m.    (6.2)

6.1 Vector Linear Equations

We begin with a review of some of the principal results associated with vector linear systems.

Theorem 6.1. Consider the system of linear equations
A x = b.    (6.3)
1. There exists a solution to (6.3) if and only if b is in R(A); equivalently, there exists a solution if and only if rank([A, b]) = rank(A).
2. There exists a solution to (6.3) for all b in R^m if and only if R(A) = R^m, i.e., A is onto, and this is possible only if m <= n (since m = dim R(A) = rank(A) <= min{m, n}).
3. A solution to (6.3) is unique if and only if N(A) = 0, i.e., A is 1-1.
4. There exists at most one solution to (6.3) for all b in R^m if and only if the columns of A are linearly independent, i.e., N(A) = 0, and this is possible only if m >= n.
5. There exists a nontrivial solution to the homogeneous system Ax = 0 if and only if rank(A) < n.
6. There exists a unique solution to (6.3) for all b in R^m if and only if A is nonsingular; equivalently, A is in R^{m x m} and A has neither a 0 singular value nor a 0 eigenvalue.
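The existence criterion in part 1 of Theorem 6.1 is convenient to check with a rank test. The following sketch is an added illustration (NumPy assumed; the matrices and vectors are arbitrary examples):

    import numpy as np

    rng = np.random.default_rng(6)
    A = rng.standard_normal((4, 3))
    b_in = A @ np.array([1.0, -2.0, 0.5])        # in R(A) by construction
    b_out = rng.standard_normal(4)               # generically not in R(A)

    def solvable(A, b):
        # Part 1 of Theorem 6.1: a solution exists iff rank([A, b]) = rank(A).
        return np.linalg.matrix_rank(np.column_stack([A, b])) == np.linalg.matrix_rank(A)

    print(solvable(A, b_in), solvable(A, b_out))     # True False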
equivalently. while results for (6. Then any matrix eRmxk of the form of the form X = A+ B + (/ . BE JR. Proof: The subspace inclusion criterion follows essentially from the definition of the range Proof: The subspace inclusion criterion follows essentially from the definition of the range of a matrix.6) are of this form. which implies rank(A) < n must have the case of a nonunique solution. Note that the results of Theorem 6. AZ — B. specializing even further to the case m = n. Let A e Rmxn. Then we can write (6.4) has a solution if and only ifl^(B) C 7£(A). all solutions of (6. R(A). we must have the case of a nonunique solution. to algebra.5). i.2 (Existence).1).mxk.2)follow by 6.44 Chapter 6. (6. Linear Equations Proof: The proofs are straightforward and can be consulted in standard texts on linear Proof: The proofs are straightforward and can be consulted in standard texts on linear algebra.1).e. a solution exists if and only if has a solution if and only ifR(B) S. of a matrix. a solution exists if and only if AA+B = B. A E JR.6).2 6. to prove part 6. Furthermore.18. The matrix criterion is Theorem 4. and this is clearly of the form (6. premultiply by A: Proof: To verify that (6. Note that some parts of the theorem follow directly from others.e.A+ A)Y. The matrix criterion is Theorem 4.5) is a solution..5) is a solution. Theorem 6. AA+B B. note that x = 0 is always a solution to the homogeneous system.mxn. That all solutions are of this form can be seen as follows.3. Therefore.5) is a solution of is a solution of AX=B. i.e. mxn . AZ :::: B. note that x 0 is always a solution to the homogeneous system. Let Z be an arbitrary solution of That all solutions arc of this form can be seen as follows. which implies rank(A) < n by part 0 by part 3. A is not II. Note that some parts of the theorem follow directly from others. Then we can write Z=A+AZ+(IA+A)Z =A+B+(IA+A)Z and this is clearly of the form (6. Let Z be an arbitrary solution of (6.2 (Existence).. premultiply by A: AX = AA+ B + A(I = B A+ A)Y + (A  AA+ A)Y by hypothesis = B since AA + A = A by the first Penrose condition.e .6).nxk is arbitrary.1 follow from those below for the special case = 1. +B = Theorem 6.2) follow by specializing even further to the case m = n. For example. A is not 11. (6.6) Furthermore. The matrix linear equation Theorem 6. equivalently. i.2 Matrix Linear Equations In this section we present some of the principal results concerning existence and uniqueness In this section we present some of the principal results concerning existence and uniqueness of solutions to the general matrix linear system (6.18. Therefore..1 follow from those below for the special case k = 1. where Y E JR. Note that the results of Theorem of solutions to the general matrix linear system (6. (6. E JR. B E JR.. D 6. while results for (6.mxk and suppose that AA+B = B.6) are of this form. all solutions of (6.3.5). Linear Equations Chapter 6. For example. The matrix linear equation AX = B. 0 . i. 0 Theorem 6. we prove part 6. Proof: To verify that (6.
Remark 6.4. When A is square and nonsingular, A+ = A^{-1} and so (I - A+A) = 0. Thus, there is no "arbitrary" component, leaving only the unique solution X = A^{-1}B.

Remark 6.5. It can be shown that the particular solution X = A+B is the solution of (6.6) that minimizes Tr X^T X. (Tr(.) denotes the trace of a matrix; recall that Tr X^T X = sum_{i,j} x_ij^2.)

Theorem 6.6 (Uniqueness). A solution of the matrix linear equation
A X = B,  A in R^{m x n}, B in R^{m x k},    (6.7)
is unique if and only if A+A = I; equivalently, (6.7) has a unique solution if and only if N(A) = 0.

Proof: The first equivalence is immediate from Theorem 6.3. The second follows by noting that A+A = I can occur only if r = n, where r = rank(A) (recall r <= n). But rank(A) = n if and only if A is 1-1 or N(A) = 0.

Example 6.7. Suppose A is in R^{n x n}. Find all solutions of the homogeneous system Ax = 0.
Solution:
x = A+0 + (I - A+A)y
  = (I - A+A)y,
where y in R^n is arbitrary. Hence there exists a nonzero solution if and only if A+A != I. This is equivalent to either rank(A) = r < n or A being singular. Clearly, if there exists a nonzero solution, it is not unique.
Computation: Since y is arbitrary, it is easy to see that all solutions are generated from a basis for R(I - A+A). But if A has an SVD given by A = U Sigma V^T, then it is easily checked that I - A+A = V_2 V_2^T and R(V_2 V_2^T) = R(V_2) = N(A).

Example 6.8. Characterize all right inverses of a matrix A in R^{m x n}; equivalently, find all solutions R of the equation AR = I_m. Here, we write I_m to emphasize the m x m identity matrix.
Solution: There exists a right inverse if and only if R(I_m) is contained in R(A), and this is equivalent to AA+I_m = I_m. Clearly, this can occur if and only if rank(A) = r = m (since r <= m), and this is equivalent to A being onto (A+ is then a right inverse). All right inverses of A are then of the form
R = A+I_m + (I_n - A+A)Y
  = A+ + (I - A+A)Y,
where Y in R^{n x m} is arbitrary. There is a unique right inverse if and only if A+A = I (N(A) = 0), in which case A must be invertible and R = A^{-1}.
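The characterization of right inverses in Example 6.8 can be verified directly. The sketch below is an added illustration (NumPy assumed; the matrices are arbitrary examples):

    import numpy as np

    rng = np.random.default_rng(8)
    A = rng.standard_normal((3, 5))              # fat and (generically) onto, so right inverses exist
    Ap = np.linalg.pinv(A)
    Y = rng.standard_normal((5, 3))

    R = Ap + (np.eye(5) - Ap @ A) @ Y            # Example 6.8: a right inverse for any choice of Y
    print(np.allclose(A @ R, np.eye(3)))         # True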
Example 6.9. Consider the system of linear first-order difference equations
x_{k+1} = A x_k + B u_k    (6.8)
with A in R^{n x n} and B in R^{n x m} (n >= 1, m >= 1). The vector x_k in linear system theory is known as the state vector at time k while u_k is the input (control) vector. The general solution of (6.8) is given by
x_k = A^k x_0 + sum_{j=0}^{k-1} A^{k-1-j} B u_j    (6.9)
    = A^k x_0 + [B, AB, ..., A^{k-1}B] [u_{k-1}; u_{k-2}; ...; u_0]    (6.10)
for k >= 1. We might now ask the question: Given x_0 = 0, does there exist an input sequence {u_j}_{j=0}^{k-1} such that x_k takes an arbitrary value in R^n? In linear system theory, this is a question of reachability. Since m >= 1, from the fundamental Existence Theorem, we see that (6.8) is reachable if and only if
R([B, AB, ..., A^{n-1}B]) = R^n
or, equivalently, if and only if
rank [B, AB, ..., A^{n-1}B] = n.
A related question is the following: Given an arbitrary initial vector x_0, does there exist an input sequence {u_j}_{j=0}^{n-1} such that x_n = 0? In linear system theory, this is called controllability. Again from Theorem 6.2, we see that (6.8) is controllable if and only if
R(A^n) is contained in R([B, AB, ..., A^{n-1}B]).
Clearly, reachability always implies controllability and, if A is nonsingular, controllability and reachability are equivalent. The matrices A = [0 1; 0 0] and B = [1; 0] provide an example of a system that is controllable but not reachable. There are many other algebraically equivalent conditions. The above are standard conditions with analogues for continuous-time models (i.e., linear differential equations).
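The rank test for reachability is easy to carry out numerically. The sketch below is an added illustration (NumPy assumed), using the controllable-but-not-reachable pair from Example 6.9 as reconstructed above:

    import numpy as np

    def reachability_matrix(A, B):
        n = A.shape[0]
        blocks = [B]
        for _ in range(n - 1):
            blocks.append(A @ blocks[-1])
        return np.hstack(blocks)                 # [B, AB, ..., A^{n-1} B]

    A = np.array([[0., 1.], [0., 0.]])
    B = np.array([[1.], [0.]])
    R = reachability_matrix(A, B)
    print(np.linalg.matrix_rank(R))              # 1 < 2, so the pair is not reachable
    # Controllability asks whether R(A^n) lies in R([B, AB, ...]); here A^2 = 0,
    # so the inclusion holds trivially and the system is controllable but not reachable.
    print(np.allclose(np.linalg.matrix_power(A, 2), 0))   # True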
Example 6.10. We now introduce an output vector y_k to the system (6.8) of Example 6.9 by appending the equation
y_k = C x_k + D u_k    (6.11)
with C in R^{p x n} and D in R^{p x m} (p >= 1). We can then pose some new questions about the overall system that are dual in the system-theoretic sense to reachability and controllability. The condition dual to reachability is called observability: When does knowledge of {u_j}_{j=0}^{n-1} and {y_j}_{j=0}^{n-1} suffice to determine (uniquely) x_0? As a dual to controllability, we have the notion of reconstructibility: When does knowledge of {u_j}_{j=0}^{n-1} and {y_j}_{j=0}^{n-1} suffice to determine (uniquely) x_n? The fundamental duality result from linear system theory is the following:
(A, B) is reachable [controllable] if and only if (A^T, B^T) is observable [reconstructible].
The answers are cast in terms that are dual in the linear algebra sense as well. To derive a condition for observability, notice that
y_k = C A^k x_0 + sum_{j=0}^{k-1} C A^{k-1-j} B u_j + D u_k.    (6.12)
Thus,
[y_0 - D u_0;  y_1 - C B u_0 - D u_1;  ...;  y_{n-1} - sum_{j=0}^{n-2} C A^{n-2-j} B u_j - D u_{n-1}] = [C; CA; ...; CA^{n-1}] x_0.    (6.13)
Let v denote the (known) vector on the left-hand side of (6.13) and let R denote the matrix on the right-hand side. Then, by definition, v is in R(R), so a solution exists. By the fundamental Uniqueness Theorem, the solution is then unique if and only if N(R) = 0.

6.3 A More General Matrix Linear Equation

Theorem 6.11. Let A be in R^{m x n}, B in R^{m x q}, and C in R^{p x q}. Then the equation
A X C = B    (6.14)
has a solution if and only if AA+BC+C = B, in which case the general solution is of the form
X = A+BC+ + Y - A+AYCC+,    (6.15)
where Y in R^{n x p} is arbitrary.

A compact matrix criterion for uniqueness of solutions to (6.14) requires the notion of the Kronecker product of matrices for its statement. Such a criterion (CC+ (x) A+A = I) is stated and proved in Theorem 13.27.
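The solvability criterion and general solution of Theorem 6.11 can be exercised numerically. The sketch below is an added illustration (NumPy assumed; the matrices are arbitrary examples, with B constructed to be consistent):

    import numpy as np

    rng = np.random.default_rng(9)
    A = rng.standard_normal((4, 3))
    C = rng.standard_normal((2, 5))
    X0 = rng.standard_normal((3, 2))
    B = A @ X0 @ C                                   # consistent by construction

    Ap, Cp = np.linalg.pinv(A), np.linalg.pinv(C)
    print(np.allclose(A @ Ap @ B @ Cp @ C, B))       # solvability test of Theorem 6.11: True

    Y = rng.standard_normal((3, 2))
    X = Ap @ B @ Cp + Y - Ap @ A @ Y @ C @ Cp        # general solution (6.15)
    print(np.allclose(A @ X @ C, B))                 # True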
6.4 Some Useful and Interesting Inverses

In many applications, the coefficient matrices of interest are square and nonsingular. Listed below is a small collection of useful matrix identities, particularly for block matrices, associated with matrix inverses. In these identities, A is in R^{n x n}, B in R^{n x m}, C in R^{m x n}, and D in R^{m x m}. Invertibility is assumed for any component or subblock whose inverse is indicated. Verification of each identity is recommended as an exercise for the reader.

1. [I 0; 0 -I]^{-1} = [I 0; 0 -I] and [0 I; I 0]^{-1} = [0 I; I 0].
   Both of these matrices satisfy the matrix equation X^2 = I, from which it is obvious that X^{-1} = X. Note that the positions of the I and -I blocks may be exchanged.

2. [I 0; -C I]^{-1} = [I 0; C I].

3. [A B; 0 D]^{-1} = [A^{-1}   -A^{-1}BD^{-1}; 0   D^{-1}].

4. [A 0; C D]^{-1} = [A^{-1}   0; -D^{-1}CA^{-1}   D^{-1}].

5. [A B; C D]^{-1} = [A^{-1} + A^{-1}BECA^{-1}   -A^{-1}BE; -ECA^{-1}   E],
   where E = (D - CA^{-1}B)^{-1} (E is the inverse of the Schur complement of A). This result follows easily from the block LU factorization in property 16 of Section 1.4.

6. [A B; C D]^{-1} = [F   -FBD^{-1}; -D^{-1}CF   D^{-1} + D^{-1}CFBD^{-1}],
   where F = (A - BD^{-1}C)^{-1}. This result follows easily from the block UL factorization in property 17 of Section 1.4.

7. (A + BDC)^{-1} = A^{-1} - A^{-1}B(D^{-1} + CA^{-1}B)^{-1}CA^{-1}.
   This result is known as the Sherman-Morrison-Woodbury formula. It has many applications (and is frequently "rediscovered") including, for example, formulas for the inverse of a sum of matrices such as (A + D)^{-1} or (A^{-1} + D^{-1})^{-1}. It also yields very efficient "updating" or "downdating" formulas in expressions such as (A + xx^T)^{-1} (with symmetric A in R^{n x n} and x in R^n) that arise in optimization theory.
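A numerical check of the Sherman-Morrison-Woodbury formula is a useful sanity test; the sketch below is an added illustration (NumPy assumed; the matrices are arbitrary, chosen so the indicated inverses exist):

    import numpy as np

    rng = np.random.default_rng(10)
    n, m = 5, 2
    A = np.eye(n) + 0.1 * rng.standard_normal((n, n))
    B = rng.standard_normal((n, m))
    C = rng.standard_normal((m, n))
    D = np.eye(m)

    Ai = np.linalg.inv(A)
    lhs = np.linalg.inv(A + B @ D @ C)
    rhs = Ai - Ai @ B @ np.linalg.inv(np.linalg.inv(D) + C @ Ai @ B) @ C @ Ai
    print(np.allclose(lhs, rhs))        # True (Sherman-Morrison-Woodbury)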
EXERCISES

1. As in Example 6.8, characterize all left inverses of a matrix A in R^{m x n}.

2. Let A be in R^{m x n}, B in R^{m x k} and suppose A has an SVD as in Theorem 5.1. Assuming R(B) is contained in R(A), characterize all solutions of the matrix linear equation
   A X = B
   in terms of the SVD of A.

3. Let x, y be in R^n and suppose further that x^T y != 1. Show that
   (I - xy^T)^{-1} = I + (1/(1 - x^T y)) xy^T.

4. Let x, y be in R^n and suppose further that x^T y != 1. Show that
   [I   x; y^T   1]^{-1} = [I + c xy^T   -c x; -c y^T   c],
   where c = 1/(1 - x^T y).

5. Let A be in R_n^{n x n} and let A^{-1} have columns c_1, ..., c_n and individual elements gamma_ij. Assume that gamma_ji != 0 for some i and j. Show that the matrix B = A - (1/gamma_ji) e_i e_j^T (i.e., A with 1/gamma_ji subtracted from its (ij)th element) is singular.
   Hint: Show that c_i is in N(B).

6. As in Example 6.10, check directly that the condition for reconstructibility takes the form
   N([C; CA; ...; CA^{n-1}]) contains N(A^n).
i.1 displays the projection of v on both X and Y in the case V = ]R2. Oblique projections. Px. A linear transformation P is a projection if and only if it is idempotent.x — I — Px.x = I px. Figure 7.e.e.1.1.1. Proof: Suppose P is a projection. 51 51 .y.yV = x for all v E V. Let V be a vector space with V = X EEl Y. Also.1 displays the projection of von both and 3^ in the case = Figure 7.y • V —>• c V has a unique decomposition v = x + y with x e X and y E y.1.2.Chapter 7 Chapter 7 Projections. Let V be a vector space with V X 0 y.y is linear and P# y — px. P2 = P.y is called the (oblique) projection on X along 3^.1). Define pX.1 7. Infact.y.26.y Theorem 7. and Norms Spaces. Also.1). Inner Product Projections. Inner Product Spaces.3.. PX. Py. Define PX y : V + X <. By Theorem 2. y x Figure 7. px. P is a projection if and only if I —P is a projection. every v e V has a unique decomposition v x y with x E and y e y. By Theorem 2. Theorem 7. Oblique projections. Theorem 7.1 Projections Definition 7. A linear transformation P is a projection if and only if it is idempotent. i. and Norms 7.26..3.y is called the (oblique) projection on X along y. P isaprojectionifandonlyifl P isaprojection. Theorem 7.y is linear and pl. every v E V Definition 7. Px.yp2 = P. Infact. Proof: Suppose P is a projection.2. say on X along Y (using the notation of Definition 7. V by by PX. Figure 7. say on X along y (using the notation of Definition 7. Py. y = Px.
Let v in V be arbitrary. Then Pv = P(x + y) = Px = x. Thus, P^2 v = PPv = Px = x = Pv. Hence P^2 = P. Conversely, suppose P^2 = P. Let X = {v in V : Pv = v} and Y = {v in V : Pv = 0}. It is easy to check that X and Y are subspaces. We now prove that V = X (+) Y. First note that if v is in X, then Pv = v. If v is in Y, then Pv = 0. Hence if v is in both X and Y, then v = 0. Now let v in V be arbitrary. Then v = Pv + (I - P)v. Let x = Pv, y = (I - P)v. Then Px = P^2 v = Pv = x so x is in X, while Py = P(I - P)v = Pv - P^2 v = 0 so y is in Y. Thus, V = X (+) Y and the projection on X along Y is P. Essentially the same argument shows that I - P is the projection on Y along X.

Definition 7.4. In the special case where Y = X-perp, P_{X,X-perp} is called an orthogonal projection and we then use the notation P_X = P_{X,X-perp}.

Theorem 7.5. P in R^{n x n} is the matrix of an orthogonal projection (onto R(P)) if and only if P^2 = P = P^T.

Proof: Let P be an orthogonal projection (on X, say, along X-perp) and let x, y in R^n be arbitrary. Note that (I - P)x = (I - P_{X,X-perp})x = P_{X-perp,X} x by Theorem 7.3. Thus, (I - P)x is in X-perp. Since Py is in X, we have (Py)^T (I - P)x = y^T P^T (I - P)x = 0. Since x and y were arbitrary, we must have P^T (I - P) = 0. Hence P^T = P^T P = P, with the second equality following since P^T P is symmetric. Conversely, suppose P is a symmetric projection matrix and let x be arbitrary. Write x = Px + (I - P)x. Then x^T P^T (I - P)x = x^T P(I - P)x = 0. Thus, since Px is in R(P), we must have (I - P)x in R(P)-perp, and P must be an orthogonal projection.

7.1.1 The four fundamental orthogonal projections

Using the notation of Theorems 5.1 and 5.11, let A in R^{m x n} with SVD A = U Sigma V^T = U_1 S V_1^T. Then
P_{R(A)}       = AA+      = U_1 U_1^T = sum_{i=1}^r u_i u_i^T,
P_{R(A)-perp}  = I - AA+  = U_2 U_2^T = sum_{i=r+1}^m u_i u_i^T,
P_{N(A)}       = I - A+A  = V_2 V_2^T = sum_{i=r+1}^n v_i v_i^T,
P_{N(A)-perp}  = A+A      = V_1 V_1^T = sum_{i=1}^r v_i v_i^T
are easily checked to be (unique) orthogonal projections onto the respective four fundamental subspaces.
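These four projections are easy to form and test with a pseudoinverse. The sketch below is an added illustration (NumPy assumed; the example matrix is arbitrary):

    import numpy as np

    rng = np.random.default_rng(11)
    A = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 4))    # rank 2
    Ap = np.linalg.pinv(A)

    P_range = A @ Ap                    # projection onto R(A)
    P_null = np.eye(4) - Ap @ A         # projection onto N(A)

    # Orthogonal projections are symmetric and idempotent (Theorem 7.5).
    print(np.allclose(P_range, P_range.T), np.allclose(P_range @ P_range, P_range))   # True True
    # P_null really maps into the nullspace of A.
    x = rng.standard_normal(4)
    print(np.allclose(A @ (P_null @ x), 0))                                            # True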
See Figure 7. Recall the diagram of the four fundamental subspaces." Figure 7.A+ A)x 2 = A+ Ax + (I = VI vt x + V Vi x (recall VVT = I)." Example 7. A direct calculation shows z = Pn(w)"' = (l . An arbitrary vector x E R" was chosen and a formula for XI basis for a subset of IRn.8) = (WTV) W.(:.2. Recall the diagram of the four fundamental subspaces. Determine the orthogonal projection of a vector e M" on another nonzero Example 7. the vector z that is orthogonal to w and such that Pv Moreover. orthogonal: v z Pv w Figure 7.. Recall the proof of Theorem 3.8.7. . {VI.8.2. e Rn Solution: Think of the vector w as an element of the onedimensional subspace IZ(w).2. There.11. orthogonal: that z and u. The indicated direct Example 7. { v \ .8) (using Example 4.. in fact. Then X = PN(A)u + PN(A)X . See Figure 7. T W W Moreover. Then the desired projection is simply Then the desired projection is simply Pn(w)v = ww+v wwTv (using Example 4. Vk} was an orthornormal Example 7. Projections 53 Example 7. .1. Projections 7... .6.. An arbitrary vector x e IRn was chosen and a formula for x\ appeared rather mysteriously. Vk} was an orthomormal basis for a subset S of W1. There.11. Recall the proof of Theorem 3. Then Let x e W be an arbitrary vector. Example 7. The indicated direct sum decompositions of the domain E" and codomain IRm are given easily as follows.Pn(w»v = v .7. the vector z that is orthogonal to wand such that v = P v + z is given by z is given by z = PK(W)±Vv = (/ — PK(W))V = v — (^^ j w. IR n Rm 1 n Let X E IR be an arbitrary vector.7. are. in fact. Orthogonal projection on a "line. A direct calculation shows that and ware.~) w. X on Specifically.2. The expression for x\ is simply the orthogonal projection of XI projection of rather x on S. Determine the orthogonal projection of a vector v E IR n on another nonzero vector w E IRn. .6..1. Specifically. Orthogonal projection on a "line. Solution: Think of the vector w as an element of the onedimensional subspace R( w).
Example 7.9. Let
A = [1 1 0; 1 1 0],
so that
A+ = [1/4 1/4; 1/4 1/4; 0 0],
and we can decompose the vector x = [2 3 4]^T uniquely into the sum of a vector in N(A)-perp and a vector in N(A), as follows:
x = A+Ax + (I - A+A)x
  = [1/2 1/2 0; 1/2 1/2 0; 0 0 0] [2; 3; 4] + [1/2 -1/2 0; -1/2 1/2 0; 0 0 1] [2; 3; 4]
  = [5/2; 5/2; 0] + [-1/2; 1/2; 4].

7.2 Inner Product Spaces

Definition 7.10. Let V be a vector space over R. Then (., .) : V x V -> R is a real inner product if
1. (x, x) >= 0 for all x in V and (x, x) = 0 if and only if x = 0.
2. (x, y) = (y, x) for all x, y in V.
3. (x, alpha y_1 + beta y_2) = alpha (x, y_1) + beta (x, y_2) for all x, y_1, y_2 in V and for all alpha, beta in R.

Example 7.11. Let V = R^n. Then (x, y) = x^T y is the "usual" Euclidean inner product or dot product.

Example 7.12. Let V = R^n. Then (x, y)_Q = x^T Q y, where Q = Q^T > 0 is an arbitrary n x n positive definite matrix, defines a "weighted" inner product.

Example 7.13. If A is in R^{m x n}, then A^T in R^{n x m} is the unique linear transformation or map such that
(x, Ay) = (A^T x, y) for all x in R^m and for all y in R^n.
7.2. Inner product Spaces 7.2. Inner Product Spaces
55 55
It is easy to check that, with this more "abstract" definition of transpose, and if the It is easy to check that, with this more "abstract" definition of transpose, and if the (i, j)th element of A is aij, then the (i, j)th element of AT is ap. It can also be checked (/, y)th element of A is a(;, then the (i, y)th element of AT is a/,. It can also be checked that all the usual properties of the transpose hold, such as (Afl) = BT AT. However, the that all the usual properties of the transpose hold, such as (AB) = BT AT. However, the
definition above allows us to extend the concept of transpose to the case of weighted inner definition above allows us to extend the concept of transpose to the case of weighted inner products in the following way. Suppose A e Rmxn and let (., .) Q and (•, .) R, , with Q and A E ]Rm xn (., }R with Q and {, }g R positive definite, be weighted inner products on Rm and W, respectively. Then we can positive definite, be weighted inner products on IR m and IRn, respectively. Then we can define the "weighted transpose" A # as the unique map that satisfies define the "weighted transpose" A# as the unique map that satisfies
(x, AY)Q = (A#x, y)R all x e IRm (x, Ay)Q = (A#x, Y)R for all x E Rm and for all Y E W1. y e IRn.
By Example 7.l2 above, we must then have x T QAy x T (A#{ Ry for all x, y. Hence we By Example 7.12 above, we must then have XT QAy = xT(A#) Ry for all x, y. Hence we transposes (of AT Q = RA#. must have QA = (A#{ R. Taking transposes (of the usual variety) gives AT Q = RA#. QA = (A#) R. Since R is nonsingular, we find Since R is nonsingular, we find
A# = R1A Q. A* = /r'A' TQ.
We can also generalize the notion of orthogonality (x T = 0) to Q orthogonality (Q is We can also generalize the notion of orthogonality (xTyy = 0) to Qorthogonality (Q is a positive definite matrix). Two vectors x, y E IRn are Qorthogonal (or conjugate with a positive definite matrix). Two vectors x, y e W are <2orthogonal (or conjugate with T X Qy O. Qorthogonality is an important tool used in respect to Q) if ( x y) Q respect to Q) if (x,, y } Q = XT Qy = 0. Q orthogonality is an important tool used in studying conjugate direction methods in optimization theory. studying conjugate direction methods in optimization theory. Definition 7.14. Let V be a vector space over C. Then (., •} : V V > Definition 7.14. Let V be a vector space over <C. Then {, .) : V x V + C is a complex is a complex inner product if inner product if
1. (x,, x ) :::: Qfor all x e V and ( x , x ) = 0 if and only if x = 0. 1. ( x x) > 0 for all x E V and (x, x) =0 if and only ifx = O.
2. (x, y) = (y, x) for all x, y E V. (y, x) for all x, y e V. 2. (x, y)
3. (x, aYI + fiy2) = a(x, y\) + fi(x, Y2) for all x, YI, y2 E V and for alia, f3 6 C. 3. (x,ayi f3Y2) = a(x, yll f3(x, y2}forallx, y\, Y2 e V andfor all a, ft E c. Remark 7.15. We could use the notation (., ·)e to denote a complex inner product, but Remark 7.15. We could use the notation {•, }c to denote a complex inner product, but if the vectors involved are complexvalued, the complex inner product is to be understood. if the vectors involved are complexvalued, the complex inner product is to be understood. Note, too, from part 2 of the definition, that (x, x) must be real for all x. Note, too, from part 2 of the definition, that ( x , x ) must be real for all x.
Remark 7.16. Note from parts 2 and 3 of Definition 7.14 that we have Remark 7.16. Note from parts 2 and 3 of Definition 7.14 that we have
(ax\ + fix2, y) = a(x\, y) + P(x2, y}.
Remark 7.17. The Euclidean inner product of x, e C" is given by Remark 7.17. The Euclidean inner product of x, y E C n is given by
n
(x, y)
= LXiYi = xHy.
i=1
The conventional definition of the complex Euclidean inner product is (x, y) yH but we The conventional definition of the complex Euclidean inner product is (x, y} = yHxx but we use its complex conjugate H here for symmetry with the real case. use its complex conjugate xHyy here for symmetry with the real case.
Remark 7.18. A weighted inner product can be defined as in the real case by (x, y)Q = Remark 7.1S. A weighted inner product can be defined as in the real case by (x, y}Q — x H Qy, arbitrary Q QH > o. notion Qorthogonality can be similarly XH Qy, for arbitrary Q = QH > 0. The notion of Q orthogonality can be similarly generalized to the complex case. generalized to the complex case.
56 56
Chapter 7. Projections, Inner Product Spaces, and Norms Chapter 7. Projections, Inner Product Spaces, and Norms
Definition 7.19. A vector space (V, F) endowed with a specific inner product is called an Definition 7.19. A vector space (V, IF) endowed with a specific inner product is called an inner product space. If F = C, we call V a complex inner product space. If F = R, we inner product space. If IF = e, we call V a complex inner product space. If IF = R we call V a real inner product space. call V a real inner product space.
Example 7.20. Example 7.20. 1. Check that V = IRn x" with the inner product (A, B) = Tr AT B is a real inner product 1. Check that = R" xn with the inner product (A, B) = Tr AT B is a real inner product space. Note that other choices are possible since by properties of the trace function, space. Note that other choices are possible since by properties of the trace function, Tr AT B = TrB TA = Tr A B = TrBAT TrATB = Tr BTA = TrABTT = Tr BAT..
2. Check that V = e nxn with the inner product (A, B) = Tr AHB is a complex inner Tr AH B is a complex inner 2. Check that V = Cnx" with the inner product (A, B) product space. Again, other choices are possible. product space. Again, other choices are possible. Definition 7.21. Let V be an inner product space. For v e V, we define the norm (or Definition 7.21. Let V be an inner product space. For v E V, we define the norm (or length) ofv by IIvll = */(v, v). This is called the norm induced by (',, .).. length) ofv by \\v\\ = J(V,V). This is called the norm induced by (  ) Example 7.22. Example 7.22. 1. If V = E." with the usual inner product, the induced norm is given by i> 1. If V = IRn with the usual inner product, the induced norm is given by II v II = n 2 2 1
(Li=l V i (E,=i<Y))2.xV—*« 9\ 7
2. If V = en with the usual inner product, the induced norm is given by II v II = 2. If V = C" with the usual inner product, the induced norm is given by \\v\\ "n (L...i=l IVi ) ! (£? = ,l»,lI22)*.. Theorem 7.23. Let P be an orthogonal projection on an inner product space V. Then Then Theorem 7.23. Let P be an orthogonal projection on an inner product space \\Pv\\ ::::: Ilvll for all v e V. IIPvll < \\v\\forallv E V.
Proof: Since P is an orthogonal projection, p2 = P = pH. (Here, the notation p# denotes Proof: Since P is an orthogonal projection, P2 = P = P#. (Here, the notation P# denotes the unique linear transformation that satisfies ( P u , } = (u, p#v) for all u, v E If this the unique linear transformation that satisfies (Pu, vv) = (u, P#v) for all u, v e V. If this seems a little too abstract, consider V = R" (or en), where P# is simply the usual PT (or seems a little too abstract, consider = IRn (or C"), where p# is simply the usual pT (or pH)). Hence (Pv, v) = (P 2v, v) = (Pv, p#v) = (Pv, Pv) = IIPvll 2 > O. Now /  P is PH)). Hence ( P v , v) = (P2v, v) = (Pv, P#v) = ( P v , Pv) = \\Pv\\2 ::: 0. Now /  P is also a projection, so the above result applies and we get also a projection, so the above result applies and we get
0::::: ((I  P)v. v) = (v. v)  (Pv, v)
=
IIvll2  IIPvll 2
from which the theorem follows. from which the theorem follows.
0
Definition 7.24. The norm induced on an inner product space by the "usual" inner product Definition 7.24. The norm induced on an inner product space by the "usual" inner product is called the natural norm. is called the natural norm.
In case V = C" or V = R",, the natural norm is also called the Euclidean norm. In In case = en or = IR n the natural norm is also called the Euclidean norm. In the next section, other norms on these vector spaces are defined. A converse to the above the next section, other norms on these vector spaces are defined. A converse to the above procedure is also available. That is, given a norm defined by IIx II = •>/(•*> x), an inner procedure is also available. That is, given a norm defined by \\x\\ — .j(X,X}, an inner product can be defined via the following. product can be defined via the following.
7.3. Vector Norms 7.3. Vector Norms Theorem 7.25 (Polarization Identity). Theorem 7.25 (Polarization Identity).
1. For x, y E m~n, an inner product is defined by 1. For x, y € R", an inner product is defined by (x,y)=xTy=
57 57
IIx+YIl2~IIX_YI12_
IIx + yll2 _ IIxll2 _ lIyll2 2
2. For x, y E en, an inner product is defined by 2. For x, y e C", an inner product is defined by
where j = i = \/—T. where j = i = .J=I.
7.3 7.3
Vector Norms Vector Norms
Definition 7.26. Let (V, F) be a vector space. Then \ Definition 7.26. Let (V, IF) be a vector space. Then II \ . \ II\ : V + R is a vector norm ifit V >• IR is a vector norm if it satisfies the following three properties: satisfies the following three properties:
1. Ilxll::: Ofor all x E V and IIxll = 0 ifand only ifx
= O.
2. Ilaxll = lalllxllforallx
E
Vandforalla
E
IF.
3. IIx + yll :::: IIxll + IIYliforall x, y E V. (This is called the triangle inequality, as seen readily from the usual diagram illus (This is called the triangle inequality, as seen readily from the usual diagram illustrating the sum of two vectors in ]R2 .) trating the sum of two vectors in R2 .) Remark 7.27. It is convenient in the remainder of this section to state results for complexRemark 7.27. It is convenient in the remainder of this section to state results for complexvalued vectors. The specialization to the real case is obvious. valued vectors. The specialization to the real case is obvious. Definition 7.28. A vector space (V, F) is said to be a normed linear space if and only if Definition 7.28. A vector space (V, IF) is said to be a normed linear space if and only if there exists a vector norm  •  : V > R satisfying the three conditions of Definition 7.26. there exists a vector norm II . II : V + ]R satisfying the three conditions of Definition 7.26. Example 7.29. Example 7.29.
1. For x E en, the Holder norms, or pnorms, are defined by 1. For e C", the HOlder norms, or pnorms, are defined by
   Special cases:

   (a) ||x||_1 = Σ_{i=1}^n |x_i| (the "Manhattan" norm).

   (b) ||x||_2 = ( Σ_{i=1}^n |x_i|^2 )^{1/2} = (x^H x)^{1/2} (the Euclidean norm).

   (c) ||x||_∞ = max_{i ∈ n} |x_i| = lim_{p → +∞} ||x||_p.
       (The second equality is a theorem that requires proof.)
2. Some weighted p-norms:

   (a) ||x||_{1,D} = Σ_{i=1}^n d_i |x_i|, where d_i > 0;

   (b) ||x||_{2,Q} = (x^H Q x)^{1/2}, where Q = Q^H > 0 (this norm is more commonly denoted || · ||_Q).

3. On the vector space (C[t_0, t_1], R), define the vector norm

       ||f|| = max_{t_0 ≤ t ≤ t_1} |f(t)|.

   On the vector space ((C[t_0, t_1])^n, R), define the vector norm

       ||f||_∞ = max_{t_0 ≤ t ≤ t_1} ||f(t)||_∞.

Theorem 7.30 (Hölder Inequality). Let x, y ∈ C^n. Then

    |x^H y| ≤ ||x||_p ||y||_q,    1/p + 1/q = 1.

A particular case of the Hölder inequality is of special interest.

Theorem 7.31 (Cauchy–Bunyakovsky–Schwarz Inequality). Let x, y ∈ C^n. Then

    |x^H y| ≤ ||x||_2 ||y||_2

with equality if and only if x and y are linearly dependent.

Proof: Consider the matrix [x y] ∈ C^{n×2}. Since

    [x y]^H [x y] = [ x^H x   x^H y ]
                    [ y^H x   y^H y ]

is a nonnegative definite matrix, its determinant must be nonnegative. In other words, 0 ≤ (x^H x)(y^H y) − (x^H y)(y^H x). Since y^H x is the complex conjugate of x^H y, we see immediately that |x^H y| ≤ ||x||_2 ||y||_2.

Note: This is not the classical algebraic proof of the Cauchy–Bunyakovsky–Schwarz (CBS) inequality (see, e.g., [20, p. 217]). However, it is particularly easy to remember.

Remark 7.32. The angle θ between two nonzero vectors x, y ∈ C^n may be defined by cos θ = |x^H y| / (||x||_2 ||y||_2), 0 ≤ θ ≤ π/2. The CBS inequality is thus equivalent to the statement 0 ≤ cos θ ≤ 1.

Remark 7.33. Theorem 7.31 and Remark 7.32 are true for general inner product spaces.

Remark 7.34. The norm || · ||_2 is unitarily invariant, i.e., if U ∈ C^{n×n} is unitary, then ||Ux||_2 = ||x||_2 (Proof: ||Ux||_2^2 = x^H U^H U x = x^H x = ||x||_2^2). However, || · ||_1 and || · ||_∞
are not unitarily invariant. Similar remarks apply to the unitary invariance of norms of real vectors under orthogonal transformation.

Remark 7.35. If x, y ∈ C^n are orthogonal, then we have the Pythagorean Identity

    ||x ± y||_2^2 = ||x||_2^2 + ||y||_2^2,

the proof of which follows easily from ||z||_2^2 = z^H z.

Theorem 7.36. All norms on C^n are equivalent; i.e., there exist constants c_1, c_2 (possibly depending on n) such that

    c_1 ||x||_α ≤ ||x||_β ≤ c_2 ||x||_α   for all x ∈ C^n.

Example 7.37. For x ∈ C^n, the following inequalities are all tight bounds; i.e., there exist vectors x for which equality holds:

    ||x||_1 ≤ √n ||x||_2,    ||x||_1 ≤ n ||x||_∞,
    ||x||_2 ≤ ||x||_1,       ||x||_2 ≤ √n ||x||_∞,
    ||x||_∞ ≤ ||x||_1,       ||x||_∞ ≤ ||x||_2.

Finally, we conclude this section with a theorem about convergence of vectors. Convergence of a sequence of vectors to some limit vector can be converted into a statement about convergence of real numbers, i.e., convergence in terms of vector norms.

Theorem 7.38. Let || · || be a vector norm and suppose v, v^(1), v^(2), ... ∈ C^n. Then

    lim_{k → +∞} v^(k) = v   if and only if   lim_{k → +∞} ||v^(k) − v|| = 0.

7.4 Matrix Norms

In this section we introduce the concept of matrix norm. As with vectors, the motivation for using matrix norms is to have a notion of either the size of or the nearness of matrices. The former notion is useful for perturbation analysis, while the latter is needed to make sense of "convergence" of matrices. Attention is confined to the vector space (R^{m×n}, R) since that is what arises in the majority of applications. Extension to the complex case is straightforward and essentially obvious.

Definition 7.39. || · || : R^{m×n} → R is a matrix norm if it satisfies the following three properties:

1. ||A|| ≥ 0 for all A ∈ R^{m×n}, and ||A|| = 0 if and only if A = 0.

2. ||αA|| = |α| ||A|| for all A ∈ R^{m×n} and for all α ∈ R.

3. ||A + B|| ≤ ||A|| + ||B|| for all A, B ∈ R^{m×n}.
   (As with vectors, this is called the triangle inequality.)
Example 7.40. Let A ∈ R^{m×n}. Then the Frobenius norm (or matrix Euclidean norm) is defined by

    ||A||_F = ( Σ_{i=1}^m Σ_{j=1}^n |a_ij|^2 )^{1/2}
            = ( Tr(A^T A) )^{1/2} = ( Tr(A A^T) )^{1/2}
            = ( σ_1^2 + ... + σ_r^2 )^{1/2}   (where r = rank(A)).

Example 7.41. Let A ∈ R^{m×n}. Then the matrix p-norms are defined by

    ||A||_p = max_{x ≠ 0} ||Ax||_p / ||x||_p = max_{||x||_p = 1} ||Ax||_p.

The following three special cases are important because they are "computable." Each is a theorem and requires a proof.

1. The "maximum column sum" norm is

       ||A||_1 = max_{j ∈ n} Σ_{i=1}^m |a_ij|.

2. The "maximum row sum" norm is

       ||A||_∞ = max_{i ∈ m} Σ_{j=1}^n |a_ij|.

3. The spectral norm is

       ||A||_2 = λ_max^{1/2}(A^T A) = λ_max^{1/2}(A A^T) = σ_1(A).

   Note: ||A^+||_2 = 1/σ_r(A), where r = rank(A).

Example 7.42. Let A ∈ R^{m×n}. The Schatten p-norms are defined by

    ||A||_{S,p} = ( σ_1^p + ... + σ_r^p )^{1/p}.

Some special cases of Schatten p-norms are equal to norms defined previously. For example, || · ||_{S,2} = || · ||_F and || · ||_{S,∞} = || · ||_2. The norm || · ||_{S,1} is often called the trace norm.

Example 7.43. Let A ∈ R^{m×n}. Then "mixed" norms can also be defined by

    ||A||_{p,q} = max_{x ≠ 0} ||Ax||_p / ||x||_q.

Example 7.44. The "matrix analogue of the vector 1-norm," ||A||_S = Σ_{i,j} |a_ij|, is a norm.
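The "computable" vector and matrix norms above are available directly in standard numerical software. The following is a minimal sketch in Python/NumPy (an illustrative choice, not part of the text); the vector and matrix are arbitrary.

import numpy as np

x = np.array([3.0, -4.0, 1.0])
A = np.array([[1.0, 2.0, -1.0],
              [0.0, 3.0,  4.0]])

# Vector p-norms (Example 7.29)
print(np.linalg.norm(x, 1), np.linalg.norm(x, 2), np.linalg.norm(x, np.inf))

# Matrix norms (Examples 7.40 and 7.41)
fro    = np.linalg.norm(A, 'fro')     # Frobenius norm
col1   = np.linalg.norm(A, 1)         # maximum column sum
rowinf = np.linalg.norm(A, np.inf)    # maximum row sum
spec   = np.linalg.norm(A, 2)         # spectral norm = largest singular value

# Check the singular-value characterizations
sigma = np.linalg.svd(A, compute_uv=False)
print(np.isclose(spec, sigma[0]), np.isclose(fro, np.sqrt(np.sum(sigma**2))))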
The concept of a matrix norm alone is not altogether useful since it does not allow us to estimate the size of a matrix product AB in terms of the sizes of A and B individually. Notice that this difficulty did not arise for vectors, although there are analogues for, e.g., inner products or outer products of vectors. We thus need the following definition.

Definition 7.45. Let A ∈ R^{m×n}, B ∈ R^{n×k}. Then the norms || · ||_α, || · ||_β, and || · ||_γ are mutually consistent if ||AB||_α ≤ ||A||_β ||B||_γ. A matrix norm || · || is said to be consistent if ||AB|| ≤ ||A|| ||B|| whenever the matrix product is defined.

Example 7.46. || · ||_F and || · ||_p for all p are consistent matrix norms. The "mixed" norm

    || · ||_{1,∞} = max_{x ≠ 0} ||Ax||_1 / ||x||_∞ = max_{i,j} |a_ij|

is a matrix norm but it is not consistent. For example, take A = B = [1 1; 1 1]. Then ||AB||_{1,∞} = 2 while ||A||_{1,∞} ||B||_{1,∞} = 1.

The p-norms are examples of matrix norms that are subordinate to (or induced by) a vector norm, i.e.,

    ||A|| = max_{x ≠ 0} ||Ax|| / ||x|| = max_{||x|| = 1} ||Ax||

(or, more generally, ||A||_{p,q} = max_{x ≠ 0} ||Ax||_p / ||x||_q). For such subordinate norms, also called operator norms, we clearly have ||Ax|| ≤ ||A|| ||x||. Since ||ABx|| ≤ ||A|| ||Bx|| ≤ ||A|| ||B|| ||x||, it follows that all subordinate norms are consistent.

Theorem 7.47. There exists a vector x* such that ||Ax*|| = ||A|| ||x*|| if the matrix norm is subordinate to the vector norm.

Theorem 7.48. If || · ||_m is a consistent matrix norm, there exists a vector norm || · ||_v consistent with it, i.e., ||Ax||_v ≤ ||A||_m ||x||_v.

Not every consistent matrix norm is subordinate to a vector norm. For example, consider || · ||_F. Then ||Ax||_2 ≤ ||A||_F ||x||_2, so || · ||_2 is consistent with || · ||_F, but there does not exist a vector norm || · || such that ||A||_F is given by max_{x ≠ 0} ||Ax|| / ||x||.

Useful Results

The following miscellaneous results about matrix norms are collected for future reference. The interested reader is invited to prove each of them as an exercise.

1. ||I_n||_p = 1 for all p, while ||I_n||_F = √n.

2. For A ∈ R^{m×n}, the following inequalities are all tight, i.e., there exist matrices A for which equality holds:

       ||A||_1 ≤ √m ||A||_2,     ||A||_1 ≤ m ||A||_∞,      ||A||_1 ≤ √m ||A||_F;
       ||A||_2 ≤ √n ||A||_1,     ||A||_2 ≤ √m ||A||_∞,     ||A||_2 ≤ ||A||_F;
       ||A||_∞ ≤ n ||A||_1,      ||A||_∞ ≤ √n ||A||_2,     ||A||_∞ ≤ √n ||A||_F;
       ||A||_F ≤ √n ||A||_1,     ||A||_F ≤ √m ||A||_∞,     ||A||_F ≤ √n ||A||_2.
3. The norms || · ||_F and || · ||_2 (as well as all the Schatten p-norms, but not necessarily other p-norms) are unitarily invariant, i.e., for all A ∈ R^{m×n} and for all orthogonal matrices Q ∈ R^{m×m} and Z ∈ R^{n×n}, ||QAZ||_α = ||A||_α for α = 2 or F.

Convergence

The following theorem uses matrix norms to convert a statement about convergence of a sequence of matrices into a statement about the convergence of an associated sequence of scalars.

Theorem 7.49. Let || · || be a matrix norm and suppose A, A^(1), A^(2), ... ∈ R^{m×n}. Then

    lim_{k → +∞} A^(k) = A   if and only if   lim_{k → +∞} ||A^(k) − A|| = 0.

EXERCISES

1. If P is an orthogonal projection, prove that P^+ = P.

2. Suppose P and Q are orthogonal projections and P + Q = I. Prove that P − Q must be an orthogonal matrix.

3. Prove that I − A^+ A is an orthogonal projection. Also, prove directly that V_2 V_2^T is an orthogonal projection, where V_2 is defined as in Theorem 5.1.

4. Suppose that a matrix A ∈ R^{m×n} has linearly independent columns. Prove that the orthogonal projection onto the space spanned by these column vectors is given by the matrix P = A(A^T A)^{-1} A^T.

5. Find the (orthogonal) projection of the vector [2 3 4]^T onto the subspace of R^3 spanned by the plane 3x − y + 2z = 0.

6. Prove that R^{n×n} with the inner product (A, B) = Tr(A^T B) is a real inner product space.

7. Show that the matrix norms || · ||_2 and || · ||_F are unitarily invariant.

8. Definition: Let A ∈ R^{n×n} and denote its set of eigenvalues (not necessarily distinct) by {λ_1, ..., λ_n}. The spectral radius of A is the scalar ρ(A) = max_i |λ_i|. Let

       A = [ 14   0
             12   5 ].

   Determine ||A||_F, ||A||_1, ||A||_2, ||A||_∞, and ρ(A).

9. Let

       A = [ 4  9  2
             3  5  7
             8  1  6 ].

   Determine ||A||_F, ||A||_1, ||A||_2, ||A||_∞, and ρ(A). (An n × n matrix, all of whose columns and rows as well as main diagonal and antidiagonal sum to s = n(n^2 + 1)/2, is called a "magic square" matrix. If M is a magic square matrix, it can be proved that ||M||_p = s for all p.)

10. Let A = x y^T, where both x, y ∈ R^n are nonzero. Determine ||A||_F, ||A||_1, ||A||_2, and ||A||_∞ in terms of ||x||_α and/or ||y||_β, where α and β take the value 1, 2, or ∞ as appropriate.
Chapter 8

Linear Least Squares Problems

8.1 The Linear Least Squares Problem

Problem: Suppose A ∈ R^{m×n} with m ≥ n and b ∈ R^m is a given vector. The linear least squares problem consists of finding an element of the set

    X = {x ∈ R^n : ρ(x) = ||Ax − b||_2 is minimized}.

Solution: The set X has a number of easily verified properties:

1. A vector x ∈ X if and only if A^T r = 0, where r = b − Ax is the residual associated with x. The equations A^T r = 0 can be rewritten in the form A^T A x = A^T b, and the latter form is commonly known as the normal equations, i.e., x ∈ X if and only if x is a solution of the normal equations. For further details, see Section 8.2.

2. A vector x ∈ X if and only if x is of the form

       x = A^+ b + (I − A^+ A) y,  where y ∈ R^n is arbitrary.         (8.1)

   To see why this must be so, write the residual r in the form

       r = (b − P_{R(A)} b) + (P_{R(A)} b − Ax).

   Now, (P_{R(A)} b − Ax) is clearly in R(A), while (b − P_{R(A)} b) = (I − P_{R(A)}) b = P_{R(A)^⊥} b ∈ R(A)^⊥, so these two vectors are orthogonal. Hence,

       ||r||_2^2 = ||b − Ax||_2^2 = ||b − P_{R(A)} b||_2^2 + ||P_{R(A)} b − Ax||_2^2

   from the Pythagorean identity (Remark 7.35). Thus, ||Ax − b||_2^2 (and hence ρ(x) = ||Ax − b||_2) assumes its minimum value if and only if

       Ax = P_{R(A)} b = A A^+ b                                        (8.2)
   and this equation always has a solution since A A^+ b ∈ R(A). By Theorem 6.3, all solutions of (8.2) are of the form

       x = A^+ A A^+ b + (I − A^+ A) y = A^+ b + (I − A^+ A) y,

   where y ∈ R^n is arbitrary. The minimum value of ρ(x) is then clearly equal to

       ||b − P_{R(A)} b||_2 = ||(I − A A^+) b||_2 ≤ ||b||_2,

   the last inequality following by Theorem 7.23.

3. X is convex. To see why, consider two arbitrary vectors x_1 = A^+ b + (I − A^+ A) y and x_2 = A^+ b + (I − A^+ A) z in X. Let θ ∈ [0, 1]. Then the convex combination θ x_1 + (1 − θ) x_2 = A^+ b + (I − A^+ A)(θ y + (1 − θ) z) is clearly in X.

4. X has a unique element x* of minimal 2-norm. In fact, x* = A^+ b is the unique vector that solves this "double minimization" problem, i.e., x* minimizes the residual ρ(x) and is the vector of minimum 2-norm that does so. This follows immediately from convexity or directly from the fact that all x ∈ X are of the form (8.1) and

       ||x||_2^2 = ||A^+ b||_2^2 + ||(I − A^+ A) y||_2^2,

   which follows since the two vectors are orthogonal.

5. There is a unique solution to the least squares problem, i.e., X = {x*} = {A^+ b}, if and only if A^+ A = I or, equivalently, if and only if rank(A) = n.

Just as for the solution of linear equations, we can generalize the linear least squares problem to the matrix case.

Theorem 8.1. Let A ∈ R^{m×n} and B ∈ R^{m×k}. The general solution to

    min_{X ∈ R^{n×k}} ||AX − B||_2

is of the form

    X = A^+ B + (I − A^+ A) Y,

where Y ∈ R^{n×k} is arbitrary. The unique solution of minimum 2-norm or F-norm is X = A^+ B.

Remark 8.2. Notice that solutions of the linear least squares problem look exactly the same as solutions of the linear system AX = B. The only difference is that in the case of linear least squares solutions, there is no "existence condition" such as R(B) ⊆ R(A). If the existence condition happens to be satisfied, then equality holds and the least squares
residual is 0. Of all solutions that give a residual of 0, the unique solution X = A^+ B has minimum 2-norm or F-norm.

Remark 8.3. If we take B = I_m in Theorem 8.1, then X = A^+ can be interpreted as saying that the Moore-Penrose pseudoinverse of A is the best (in the matrix 2-norm sense) matrix such that AX approximates the identity.

Remark 8.4. Many other interesting and useful approximation results are available for the matrix 2-norm (and F-norm). One such is the following. Let A ∈ R_r^{m×n} with SVD

    A = U Σ V^T = Σ_{i=1}^r σ_i u_i v_i^T.

Then a best rank-k approximation to A for 1 ≤ k ≤ r, i.e., a solution to

    min_{M ∈ R_k^{m×n}} ||A − M||_2,

is given by

    M_k = Σ_{i=1}^k σ_i u_i v_i^T.

The special case in which m = n and k = n − 1 gives a nearest singular matrix to A ∈ R_n^{n×n}.
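The best rank-k approximation of Remark 8.4 is easy to form from a computed SVD. The following is a minimal sketch in Python/NumPy (illustrative only; the matrix and the value of k are arbitrary).

import numpy as np

A = np.random.default_rng(0).standard_normal((5, 4))
k = 2

# Truncate the SVD to its k leading terms: M_k = sum of sigma_i u_i v_i^T, i <= k
U, s, Vt = np.linalg.svd(A, full_matrices=False)
M_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# The 2-norm error equals the first neglected singular value, sigma_{k+1}
print(np.linalg.norm(A - M_k, 2), s[k])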
8.2 Geometric Solution
Looking at the schematic provided in Figure 8.1, it is apparent that minimizing ||Ax − b||_2 is equivalent to finding the vector x ∈ R^n for which p = Ax is closest to b (in the Euclidean norm sense). Clearly, r = b − Ax must be orthogonal to R(A). Thus, if Ay is an arbitrary vector in R(A) (i.e., y is arbitrary), we must have
    0 = (Ay)^T (b − Ax) = y^T A^T (b − Ax) = y^T (A^T b − A^T A x).
Since y is arbitrary, we must have A^T b − A^T A x = 0, i.e., A^T A x = A^T b.

Special case: If A is full (column) rank, then x = (A^T A)^{-1} A^T b.
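For a full-column-rank A this special case is easy to check numerically. The following is a minimal sketch in Python/NumPy (illustrative; the data are arbitrary), comparing the normal-equations formula with a library least squares solver and with x* = A^+ b.

import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 3))   # full column rank with probability 1
b = rng.standard_normal(6)

# Normal equations: x = (A^T A)^{-1} A^T b (fine for illustration; in finite
# precision the QR/SVD-based routines below are generally preferred)
x_normal = np.linalg.solve(A.T @ A, A.T @ b)

x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)
x_pinv = np.linalg.pinv(A) @ b    # x* = A^+ b, the minimum 2-norm solution

print(np.allclose(x_normal, x_lstsq), np.allclose(x_lstsq, x_pinv))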
8.3 Linear Regression and Other Linear Least Squares Problems

8.3.1 Example: Linear regression
Suppose we have m measurements (t_1, y_1), ..., (t_m, y_m) for which we hypothesize a linear (affine) relationship

    y = αt + β                                                          (8.3)
Figure 8.1. Projection of b on R(A).
for certain constants α and β. One way to solve this problem is to find the line that best fits the data in the least squares sense; i.e., with the model (8.3), we have
    y_1 = α t_1 + β + δ_1,
    y_2 = α t_2 + β + δ_2,
      ...
    y_m = α t_m + β + δ_m,
where δ_1, ..., δ_m are "errors" and we wish to minimize δ_1^2 + ... + δ_m^2. Geometrically, we are trying to find the best line that minimizes the (sum of squares of the) distances from the given data points. See, for example, Figure 8.2.
Figure 8.2. Simple linear regression.
Note that distances are measured in the vertical sense from the points to the line (as indicated, for example, for the point (t_1, y_1)). However, other criteria are possible. For example, one could measure the distances in the horizontal sense, or the perpendicular distance from the points to the line could be used. The latter is called total least squares. Instead of 2-norms, one could also use 1-norms or ∞-norms. The latter two are computationally
much more difficult to handle, and thus we present only the more tractable 2-norm case in the text that follows.

The m "error equations" can be written in matrix form as

    y = Ax + δ,

where y = [y_1, ..., y_m]^T, the ith row of A ∈ R^{m×2} is [t_i  1], x = [α, β]^T, and δ = [δ_1, ..., δ_m]^T. We then want to solve the problem
    min_x δ^T δ = min_x (Ax − y)^T (Ax − y)

or, equivalently,

    min_x ||δ||_2^2 = min_x ||Ax − y||_2^2.                             (8.4)
Solution: x = [α, β]^T is a solution of the normal equations A^T A x = A^T y where, for the special form of the matrices above, we have

    A^T A = [ Σ_i t_i^2   Σ_i t_i ]
            [ Σ_i t_i     m       ]

and

    A^T y = [ Σ_i t_i y_i ]
            [ Σ_i y_i     ].
The solution for the parameters α and β can then be written

    [ α ]
    [ β ]  =  (A^T A)^{-1} A^T y.
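A minimal numerical sketch of this regression in Python/NumPy (illustrative only; the measurements are made up, not from the text):

import numpy as np

t = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.1, 2.9, 4.2, 4.8])

# Build A with rows [t_i, 1] and solve the least squares problem (8.4)
A = np.column_stack([t, np.ones_like(t)])
(alpha, beta), *_ = np.linalg.lstsq(A, y, rcond=None)

# Same answer from the normal equations (A^T A) x = A^T y
alpha2, beta2 = np.linalg.solve(A.T @ A, A.T @ y)
print(alpha, beta, np.isclose(alpha, alpha2))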
8.3.2 Other least squares problems
Suppose the hypothesized model is not the linear equation (8.3) but rather is of the form

    y = f(t) = c_1 φ_1(t) + ... + c_n φ_n(t).                           (8.5)
In (8.5) the φ_i(t) are given (basis) functions and the c_i are constants to be determined to minimize the least squares error. The matrix problem is still (8.4), where we now have

    A = [ φ_1(t_1)  ...  φ_n(t_1) ]
        [    ...            ...   ]        and        x = [c_1, ..., c_n]^T.
        [ φ_1(t_m)  ...  φ_n(t_m) ]
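To make this setup concrete, here is a minimal Python/NumPy sketch (illustrative; the basis functions and data are arbitrary choices, not from the text) that assembles A column by column and solves (8.4):

import numpy as np

t = np.linspace(0.0, 1.0, 20)
y = np.sin(2 * np.pi * t) + 0.1 * np.random.default_rng(2).standard_normal(t.size)

# Columns of A are the basis functions phi_i evaluated at the sample points
basis = [lambda s: np.ones_like(s), lambda s: s, lambda s: s**2, np.sin]
A = np.column_stack([phi(t) for phi in basis])

c, *_ = np.linalg.lstsq(A, y, rcond=None)   # coefficients c_1, ..., c_n
print(c)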
An important special case of (8.5) is least squares polynomial approximation, which corresponds to choosing φ_i(t) = t^{i-1}, i ∈ n, although this choice can lead to computational
difficulties because of numerical ill conditioning for large n. Numerically better approaches are based on orthogonal polynomials, piecewise polynomial functions, splines, etc.

The key feature in (8.5) is that the coefficients c_i appear linearly. The basis functions φ_i can be arbitrarily nonlinear. Sometimes a problem in which the c_i's appear nonlinearly can be converted into a linear problem. For example, if the fitting function is of the form y = f(t) = c_1 e^{c_2 t}, then taking logarithms yields the equation log y = log c_1 + c_2 t. Then defining y~ = log y, c~_1 = log c_1, and c~_2 = c_2 results in a standard linear least squares problem.

8.4 Least Squares and Singular Value Decomposition

In the numerical linear algebra literature (e.g., [4], [7], [11], [23]), it is shown that solution of linear least squares problems via the normal equations can be a very poor numerical method in finite-precision arithmetic. Since the standard Kalman filter essentially amounts to sequential updating of normal equations, it can be expected to exhibit such poor numerical behavior in practice (and it does). Better numerical methods are based on algorithms that work directly and solely on A itself rather than A^T A. Two basic classes of algorithms are based on SVD and QR (orthogonal-upper triangular) factorization, respectively. The former is much more expensive but is generally more reliable and offers considerable theoretical insight.

In this section we investigate solution of the linear least squares problem

    min_x ||Ax − b||_2,    A ∈ R^{m×n},  b ∈ R^m,                       (8.6)

via the SVD. Specifically, we assume that A has an SVD given by A = U Σ V^T = U_1 S V_1^T as in Theorem 5.1. We now note that

    ||Ax − b||_2^2 = ||U Σ V^T x − b||_2^2
                   = ||Σ V^T x − U^T b||_2^2      since || · ||_2 is unitarily invariant
                   = ||Σ z − c||_2^2              where z = V^T x, c = U^T b
                   = || [S 0; 0 0][z_1; z_2] − [c_1; c_2] ||_2^2
                   = || [S z_1 − c_1; −c_2] ||_2^2.

The last equality follows from the fact that if v = [v_1; v_2], then ||v||_2^2 = ||v_1||_2^2 + ||v_2||_2^2 (note that orthogonality is not what is used here; the subvectors can have different lengths). The last quantity above is clearly minimized by taking z_1 = S^{-1} c_1. As far as the minimization is concerned, the subvector z_2 is arbitrary, while the minimum value of ||Ax − b||_2^2 is ||c_2||_2^2. This explains why it is convenient to work above with the square of the norm rather than the norm; as far as the minimization is concerned, the two are equivalent.
Now transform back to the original coordinates:

    x = V z = [V_1 V_2][z_1; z_2] = V_1 z_1 + V_2 z_2
            = V_1 S^{-1} U_1^T b + V_2 z_2 = V_1 S^{-1} c_1 + V_2 z_2.

The last equality follows from

    c = U^T b = [U_1^T b; U_2^T b] = [c_1; c_2].

Note that since z_2 is arbitrary, V_2 z_2 is an arbitrary vector in R(V_2) = N(A). Thus, x has been written in the form x = A^+ b + (I − A^+ A) y, where y ∈ R^n is arbitrary. This agrees, of course, with (8.1).

The minimum value of the least squares residual is ||c_2||_2 = ||U_2^T b||_2, and we clearly have that

    minimum least squares residual is 0
        ⟺ b is orthogonal to all vectors in U_2
        ⟺ b is orthogonal to all vectors in R(A)^⊥
        ⟺ b ∈ R(A).

Another expression for the minimum residual is ||(I − A A^+) b||_2. This follows easily since

    ||(I − A A^+) b||_2^2 = ||U_2 U_2^T b||_2^2 = b^T U_2 U_2^T U_2 U_2^T b = b^T U_2 U_2^T b = ||U_2^T b||_2^2.

Finally, an important special case of the linear least squares problem is the so-called full-rank problem, i.e., A ∈ R_n^{m×n}. In this case the SVD of A is given by A = U Σ V^T = [U_1 U_2][S; 0] V_1^T, and there is thus "no V_2 part" to the solution.

8.5 Least Squares and QR Factorization

In this section, we again look at the solution of the linear least squares problem (8.6) but this time in terms of the QR factorization. This matrix factorization is much cheaper to compute than an SVD and, with appropriate numerical enhancements, can be quite reliable. To simplify the exposition, we add the simplifying assumption that A has full column rank, i.e., A ∈ R_n^{m×n}. It is then possible, via a sequence of so-called Householder or Givens transformations, to reduce A in the following way. A finite sequence of simple orthogonal row transformations (of Householder or Givens type) can be performed on A to reduce it to triangular form. If we label the product of such orthogonal row transformations as the orthogonal matrix Q^T ∈ R^{m×m}, we have

    Q^T A = [ R ]                                                        (8.7)
            [ 0 ]
where Q_1 ∈ R^{m×n}, Q_2 ∈ R^{m×(m−n)}, and R ∈ R_n^{n×n} is upper triangular. Now write Q = [Q_1 Q_2]. Both Q_1 and Q_2 have orthonormal columns. Multiplying through by Q in (8.7), we see that

    A = Q [ R ]                                                          (8.8)
          [ 0 ]
      = [Q_1 Q_2] [ R ]
                  [ 0 ]
      = Q_1 R.                                                           (8.9)

Any of (8.7), (8.8), or (8.9) are variously referred to as QR factorizations of A. Note that (8.9) is essentially what is accomplished by the Gram-Schmidt process, i.e., by writing A R^{-1} = Q_1 we see that a "triangular" linear combination (given by the coefficients of R^{-1}) of the columns of A yields the orthonormal columns of Q_1.

Now note that

    ||Ax − b||_2^2 = ||Q^T A x − Q^T b||_2^2      since || · ||_2 is unitarily invariant
                   = || [R; 0] x − [c_1; c_2] ||_2^2      where [c_1; c_2] = Q^T b with c_1 ∈ R^n
                   = || [R x − c_1; −c_2] ||_2^2.

The last quantity above is clearly minimized by taking x = R^{-1} c_1 and the minimum residual is ||c_2||_2. Equivalently, we have x = R^{-1} Q_1^T b = A^+ b and the minimum residual is ||Q_2^T b||_2.
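Both solution paths, the SVD of Section 8.4 and the QR factorization of this section, are easy to express numerically. The following is a minimal Python/NumPy sketch for the full-rank case (illustrative only; A and b are arbitrary).

import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((8, 3))
b = rng.standard_normal(8)

# QR route: A = Q1 R, x = R^{-1} Q1^T b (cf. (8.9))
Q1, R = np.linalg.qr(A)                      # "economy" QR factorization
x_qr = np.linalg.solve(R, Q1.T @ b)

# SVD route: x = V1 S^{-1} U1^T b (cf. Section 8.4)
U, s, Vt = np.linalg.svd(A, full_matrices=False)
x_svd = Vt.T @ ((U.T @ b) / s)

print(np.allclose(x_qr, x_svd))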
EXERCISES

1. For A ∈ R^{m×n}, b ∈ R^m, and any y ∈ R^n, check directly that (I − A^+ A) y and A^+ b are orthogonal vectors.

2. Consider a set of measurements (x_i, y_i), i = 1, 2, 3.

   (a) Find the best (in the 2-norm sense) line of the form y = αx + β that fits this data.
   (b) Find the best (in the 2-norm sense) line of the form x = αy + β that fits this data.

3. Suppose q_1 and q_2 are two orthonormal vectors and b is a fixed vector, all in R^n.

   (a) Find the optimal linear combination αq_1 + βq_2 that is closest to b (in the 2-norm sense).
   (b) Let r denote the "error vector" b − αq_1 − βq_2. Show that r is orthogonal to both q_1 and q_2.

4. Find all solutions of the linear least squares problem

       min_x ||Ax − b||_2

   for the given A and b.

5. Consider the problem of finding the minimum 2-norm solution x* of the linear least squares problem

       min_x ||Ax − b||_2

   for the given A and b.

   (a) Consider the perturbation E_1 of A, where δ is a small positive number. Solve the perturbed version of the above problem,

           min_y ||A_1 y − b||_2,

       where A_1 = A + E_1. What happens to ||x* − y||_2 as δ approaches 0?

   (b) Now consider the perturbation E_2 of A, where again δ is a small positive number. Solve the perturbed problem

           min_z ||A_2 z − b||_2,

       where A_2 = A + E_2. What happens to ||x* − z||_2 as δ approaches 0?

6. Use the four Penrose conditions and the fact that Q_1 has orthonormal columns to verify that if A ∈ R_n^{m×n} can be factored in the form (8.9), then A^+ = R^{-1} Q_1^T.

7. Let A ∈ R^{n×n}, not necessarily nonsingular, and suppose A = QR, where Q is orthogonal. Prove that A^+ = R^+ Q^T.
Chapter 9

Eigenvalues and Eigenvectors

9.1 Fundamental Definitions and Properties

Definition 9.1. A nonzero vector x ∈ C^n is a right eigenvector of A ∈ C^{n×n} if there exists a scalar λ ∈ C, called an eigenvalue, such that

    Ax = λx.                                                             (9.1)

Similarly, a nonzero vector y ∈ C^n is a left eigenvector corresponding to an eigenvalue μ if

    y^H A = μ y^H.                                                       (9.2)

By taking Hermitian transposes in (9.1), we see immediately that x^H is a left eigenvector of A^H associated with λ̄. Note that if x [y] is a right [left] eigenvector of A, then so is αx [αy] for any nonzero scalar α ∈ C. One often-used scaling for an eigenvector is α = 1/||x|| so that the scaled eigenvector has norm 1. The 2-norm is the most common norm used for such scaling.

Definition 9.2. The polynomial π(λ) = det(A − λI) is called the characteristic polynomial of A. (Note that the characteristic polynomial can also be defined as det(λI − A). This results in at most a change of sign and, as a matter of convenience, we use both forms throughout the text.)

The following classical theorem can be very useful in hand calculation. It can be proved easily from the Jordan canonical form to be discussed in the text to follow (see, for example, [21]) or directly using elementary properties of inverses and determinants (see, for example, [3]).

Theorem 9.3 (Cayley-Hamilton). For any A ∈ C^{n×n}, π(A) = 0.

Example 9.4. Let A ∈ R^{2×2} be a matrix whose characteristic polynomial is π(λ) = λ^2 + 2λ − 3. It is an easy exercise to verify that π(A) = A^2 + 2A − 3I = 0.

It can be proved from elementary properties of determinants that if A ∈ C^{n×n}, then π(λ) is a polynomial of degree n. Thus, the Fundamental Theorem of Algebra says that
π(λ) has n roots, possibly repeated. These roots, as solutions of the determinant equation

    π(λ) = det(A − λI) = 0,                                              (9.3)

are the eigenvalues of A and imply the singularity of the matrix A − λI, and hence further guarantee the existence of corresponding nonzero eigenvectors.

Definition 9.5. The spectrum of A ∈ C^{n×n} is the set of all eigenvalues of A, i.e., the set of all roots of its characteristic polynomial π(λ). The spectrum of A is denoted Λ(A).

Let the eigenvalues of A ∈ C^{n×n} be denoted λ_1, ..., λ_n. Then if we write (9.3) in the form

    π(λ) = det(A − λI) = (λ_1 − λ) ... (λ_n − λ)                          (9.4)

and set λ = 0 in this identity, we get the interesting fact that det(A) = λ_1 · λ_2 ... λ_n (see also Theorem 9.25).

If A ∈ R^{n×n}, then π(λ) has real coefficients. Hence the roots of π(λ), i.e., the eigenvalues of A, must occur in complex conjugate pairs.

Example 9.6. Let α, β ∈ R and let A = [α β; −β α]. Then π(λ) = λ^2 − 2αλ + α^2 + β^2 and A has eigenvalues α ± βj (where j = i = √−1).

If A ∈ R^{n×n}, then there is an easily checked relationship between the left and right eigenvectors of A and A^T (take Hermitian transposes of both sides of (9.2)). Specifically, if y is a left eigenvector of A corresponding to λ ∈ Λ(A), then y is a right eigenvector of A^T corresponding to λ̄ ∈ Λ(A). Note, too, that by elementary properties of the determinant, we always have Λ(A) = Λ(A^T), but that Λ(A) = Λ(Ā) only if A ∈ R^{n×n}.

Definition 9.7. If λ is a root of multiplicity m of π(λ), we say that λ is an eigenvalue of A of algebraic multiplicity m. The geometric multiplicity of λ is the number of associated independent eigenvectors = n − rank(A − λI) = dim N(A − λI).

If λ ∈ Λ(A) has algebraic multiplicity m, then 1 ≤ dim N(A − λI) ≤ m. Thus, if we denote the geometric multiplicity of λ by g, then we must have 1 ≤ g ≤ m.

Definition 9.8. A matrix A ∈ R^{n×n} is said to be defective if it has an eigenvalue whose geometric multiplicity is not equal to (i.e., less than) its algebraic multiplicity. Equivalently, A is said to be defective if it does not have n linearly independent (right or left) eigenvectors.

From the Cayley-Hamilton Theorem, we know that π(A) = 0. However, it is possible for A to satisfy a lower-order polynomial. For example, if A = [1 0; 0 1], then A satisfies (λ − 1)^2 = 0. But it also clearly satisfies the smaller degree polynomial equation (λ − 1) = 0.

Definition 9.9. The minimal polynomial of A ∈ R^{n×n} is the polynomial α(λ) of least degree such that α(A) = 0.
It can be shown that α(λ) is essentially unique (unique if we force the coefficient of the highest power of λ to be +1, say; such a polynomial is said to be monic and we generally write α(λ) as a monic polynomial throughout the text). Moreover, it can also be shown that α(λ) divides every nonzero polynomial β(λ) for which β(A) = 0. In particular, α(λ) divides π(λ).

There is an algorithm to determine α(λ) directly (without knowing eigenvalues and associated eigenvector structure). Unfortunately, this algorithm, called the Bezout algorithm, is numerically unstable.

Example 9.10. The above definitions are illustrated below for a series of matrices, each of which has an eigenvalue 2 of algebraic multiplicity 4, i.e., π(λ) = (λ − 2)^4. We denote the geometric multiplicity by g.

    A = [2 1 0 0; 0 2 1 0; 0 0 2 1; 0 0 0 2]   has   α(λ) = (λ − 2)^4   and   g = 1.
    A = [2 1 0 0; 0 2 1 0; 0 0 2 0; 0 0 0 2]   has   α(λ) = (λ − 2)^3   and   g = 2.
    A = [2 1 0 0; 0 2 0 0; 0 0 2 0; 0 0 0 2]   has   α(λ) = (λ − 2)^2   and   g = 3.
    A = [2 0 0 0; 0 2 0 0; 0 0 2 0; 0 0 0 2]   has   α(λ) = (λ − 2)     and   g = 4.

At this point, one might speculate that g plus the degree of α must always be five. Unfortunately, such is not the case. The matrix

    A = [2 1 0 0; 0 2 0 0; 0 0 2 1; 0 0 0 2]                              (9.5)

has α(λ) = (λ − 2)^2 and g = 2.

Theorem 9.11. Let A ∈ C^{n×n} and let λ_i be an eigenvalue of A with corresponding right eigenvector x_i. Furthermore, let y_j be a left eigenvector corresponding to any λ_j ∈ Λ(A) such that λ_j ≠ λ_i. Then y_j^H x_i = 0.

Proof: Since A x_i = λ_i x_i, we have y_j^H A x_i = λ_i y_j^H x_i.
Similarly, since y_j^H A = λ_j y_j^H, we have y_j^H A x_i = λ_j y_j^H x_i. Subtracting, we find 0 = (λ_i − λ_j) y_j^H x_i. Since λ_i − λ_j ≠ 0, we must have y_j^H x_i = 0.

The proof of Theorem 9.11 is very similar to two other fundamental and important results.

Theorem 9.12. Let A ∈ C^{n×n} have distinct eigenvalues λ_1, ..., λ_n with corresponding right eigenvectors x_1, ..., x_n. Then {x_1, ..., x_n} is a linearly independent set. The same result holds for the corresponding left eigenvectors.

Proof: For the proof see, for example, [21, p. 118].

Theorem 9.13. Let A ∈ C^{n×n} be Hermitian, i.e., A = A^H. Then all eigenvalues of A must be real.

Proof: Suppose (λ, x) is an arbitrary eigenvalue/eigenvector pair such that Ax = λx. Premultiplying by x^H , we have

    λ x^H x = x^H A x.                                                    (9.6)

Taking Hermitian transposes in (9.6) and using the facts that A is Hermitian and x^H x is real yields

    λ̄ x^H x = x^H A x.                                                    (9.7)

Subtracting (9.6) from (9.7), we find 0 = (λ̄ − λ) x^H x. However, since x is an eigenvector, we have x^H x ≠ 0, from which we conclude λ = λ̄, i.e., λ is real.

Theorem 9.14. Let A ∈ C^{n×n} be Hermitian and suppose λ and μ are distinct eigenvalues of A with corresponding right eigenvectors x and z, respectively. Then x and z must be orthogonal.

Proof: Premultiply the equation Ax = λx by z^H to get z^H A x = λ z^H x. Take the Hermitian transpose of this equation and use the facts that A is Hermitian and λ is real to get x^H A z = λ x^H z. Premultiply the equation Az = μz by x^H to get x^H A z = μ x^H z = λ x^H z. Since λ ≠ μ, we must have that x^H z = 0, i.e., the two vectors must be orthogonal.

Let us now return to the general case. If A ∈ C^{n×n} has distinct eigenvalues, and if λ_i ∈ Λ(A), then by Theorem 9.11, x_i is orthogonal to all y_j's for which j ≠ i. However, it cannot be the case that y_i^H x_i = 0 as well, or else x_i would be orthogonal to n linearly independent vectors (by Theorem 9.12) and would thus have to be 0, contradicting the fact that it is an eigenvector. Since y_i^H x_i ≠ 0 for each i, we can choose the normalization of the x_i's, or the y_i's, or both, so that y_i^H x_i = 1 for i ∈ n.
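Theorems 9.13 and 9.14 are easy to observe numerically. The following is a minimal Python/NumPy sketch (illustrative only; the Hermitian matrix is randomly generated).

import numpy as np

rng = np.random.default_rng(4)
B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = B + B.conj().T          # A is Hermitian by construction

# eigh is specialized for Hermitian matrices: it returns real eigenvalues
# and an orthonormal set of eigenvectors
w, V = np.linalg.eigh(A)
print(np.all(np.isreal(w)))                      # Theorem 9.13
print(np.allclose(V.conj().T @ V, np.eye(4)))    # Theorem 9.14 (orthogonality)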
Theorem 9.15. Let A ∈ C^{n×n} have distinct eigenvalues λ_1, ..., λ_n and let the corresponding right eigenvectors form a matrix X = [x_1, ..., x_n]. Similarly, let Y = [y_1, ..., y_n] be the matrix of corresponding left eigenvectors. Furthermore, suppose that the left and right eigenvectors have been normalized so that y_i^H x_i = 1, i ∈ n. Finally, let Λ = diag(λ_1, ..., λ_n) ∈ R^{n×n}. Then A x_i = λ_i x_i, i ∈ n, can be written in matrix form as

    A X = X Λ,                                                            (9.8)

while y_i^H x_j = δ_ij, i, j ∈ n, is expressed by the equation

    Y^H X = I.                                                            (9.9)

These matrix equations can be combined to yield the following matrix factorizations:

    X^{-1} A X = Λ                                                        (9.10)

and

    A = X Λ X^{-1} = X Λ Y^H = Σ_{i=1}^n λ_i x_i y_i^H.                    (9.11)

Example 9.16. Let A ∈ R^{3×3} with

    π(λ) = det(A − λI) = −(λ^3 + 4λ^2 + 9λ + 10) = −(λ + 2)(λ^2 + 2λ + 5),

from which we find Λ(A) = {−2, −1 ± 2j}. We can now find the right and left eigenvectors corresponding to these eigenvalues. For λ_1 = −2, solve the 3 × 3 linear system (A − (−2)I)x_1 = 0 to get x_1. Note that one component of x_1 can be set arbitrarily, and this then determines the other two (since dim N(A − (−2)I) = 1). To get the corresponding left eigenvector y_1, solve the linear system y_1^H (A + 2I) = 0 to get y_1. This time we have chosen the arbitrary scale factor for y_1 so that y_1^H x_1 = 1. For λ_2 = −1 + 2j, solve the linear system (A − (−1 + 2j)I)x_2 = 0 to get x_2.
's directly. = x2). we could proceed to solve linear systems as for A.2 However. we can also note that X3 =x2' and yi jj. Eigenvalues and Eigenvectors Solve the linear system yf (A — (1 + 27')/) = 0 and normalize y> so that yf 2 1 to get Solve the linear system y" (A . Proceeding as in the previous example. X~l Example 9.15 can also be verified. —3.17.c2 = ^.AI) = (A33 + 8A 22+ 19A + 12) = (A + I)(A + 3)(A + 4). 2 It is then easy to verify that It is then easy to verify that 2 ..j ] 3+j . —4}. To see this.!.A similar argument yields the result conjugate the equation AX2 — A2X2 to get AX2 A2X2.~q 1 3 2 2 0 2 3 ] 2 ~ y' . use the fact that A.) 19X + 12) = (A. Finally. To see this.A. For example. is from which we find A (A) = {I. we could proceed to solve linear systems as for A2. instead of determining the j. 4}. itit is gtruightforw!U"d to compute straightforward to comput~ X~[~ and and I 0 I i ] 1 x. use the fact that A33 = A2 and simply conjugate the equation A.80 Chapter 9.15 can also be verified. 3.( I + 2 j) I) = 0 and nonnalize Y22 so that y"xX2 = 1 to get For A3 = — 1 — 2j. we could have found them instead by computing XI and reading off its rows. Other results in Theorem 9. o 3 Then 7r(A.±1 4 4 4 l+j . + 3)(A.=.2 and simply can also note that x$ = X2 and Y3 = Y2. = I . Then. similar argument yields the result for left eigenvectors. For example. det(A . XIAX=A= [ 2 0 0 1+2j o 0 Finally. note that we could have solved directly only for XI and X2 (and X3 = X2). note that we could have solved directly only for *i and x2 (and XT. we For XT. Let Example 9.!. Now define the matrix X of right eigenvectors: Now define the matrix of right eigenvectors: 3+j 3j 3. Then Jl"(A) = det(A . Then. we could have found them instead by computing instead of detennining the Yi'S directly. for left eigenvectors. + 1)(A.L 4 !. + 4). Eigenvalues and Eigenvectors Chapter 9.=. A.2j.L Other results in Theorem 9.17. Let A = [~ ~ ~] ./) _(A + 8A from which we find A(A) = {—1. However. Proceeding as in the previous example.2*2 to get Ax^ = ^2X2.
Example 9.17. Consider a 3 × 3 matrix A for which

    π(λ) = det(A − λI) = −(λ^3 + 8λ^2 + 19λ + 12) = −(λ + 1)(λ + 3)(λ + 4),

from which we find Λ(A) = {−1, −3, −4}. Proceeding as in the previous example, it is straightforward to compute the matrix X of right eigenvectors and its inverse X^{-1}. We also have X^{-1} A X = Λ = diag(−1, −3, −4), which is equivalent to the dyadic expansion

    A = Σ_{i=1}^3 λ_i x_i y_i^H = (−1) x_1 y_1^H + (−3) x_2 y_2^H + (−4) x_3 y_3^H.

Theorem 9.18. Eigenvalues (but not eigenvectors) are invariant under a similarity transformation T.

Proof: Suppose (λ, x) is an eigenvalue/eigenvector pair such that Ax = λx. Then, since T is nonsingular, we have the equivalent statement (T^{-1} A T)(T^{-1} x) = λ(T^{-1} x), from which the theorem statement follows. For left eigenvectors we have a similar statement, namely y^H A = λ y^H if and only if (T^H y)^H (T^{-1} A T) = λ (T^H y)^H.  □

Remark 9.19. If f is an analytic function (e.g., f(x) is a polynomial, or e^x, or sin x, or, in general, representable by a power series Σ_{n=0}^∞ a_n x^n), then it is easy to show that the eigenvalues of f(A) (defined as Σ_{n=0}^∞ a_n A^n) are f(λ), but f(A) does not necessarily have all the same eigenvectors (unless, say, A is diagonalizable). For example, A = [0 1; 0 0] has only one right eigenvector corresponding to the eigenvalue 0, but A^2 = [0 0; 0 0] has two independent right eigenvectors associated with the eigenvalue 0. What is true is that the eigenvalue/eigenvector pair (λ, x) maps to (f(λ), x) but not conversely.
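A minimal numerical illustration of this remark, assuming SciPy's matrix exponential as the analytic function f: the eigenvalues of e^A are e^λ, while the nilpotent matrix A below has only one eigenvector for 0 even though A^2 = 0 has two.

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])                 # single Jordan block for the eigenvalue 0

print(np.linalg.eigvals(expm(A)))          # both eigenvalues of e^A equal e^0 = 1
# A has a one-dimensional null space (one eigenvector), but A^2 = 0 has a two-dimensional one.
print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(A @ A))   # 1, 0
```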
The following theorem is useful when solving systems of linear differential equations. Details of how the matrix exponential e^{tA} is used to solve the system ẋ = Ax are the subject of Chapter 11.

Theorem 9.20. Let A ∈ R^{n×n} and suppose X^{-1} A X = Λ, where Λ is diagonal. Then

    e^{tA} = Σ_{i=1}^n e^{λ_i t} x_i y_i^H.

Proof: Starting from the definition, we have

    e^{tA} = Σ_{k=0}^∞ (t^k / k!) A^k = X (Σ_{k=0}^∞ (t^k / k!) Λ^k) X^{-1} = X diag(e^{λ_1 t}, ..., e^{λ_n t}) Y^H = Σ_{i=1}^n e^{λ_i t} x_i y_i^H.  □

The following corollary is immediate from the theorem upon setting t = 1.

Corollary 9.21. If A ∈ R^{n×n} is diagonalizable with eigenvalues λ_i, i ∈ n, and right eigenvectors x_i, i ∈ n, then e^A has eigenvalues e^{λ_i}, i ∈ n, and the same eigenvectors.

There are extensions to Theorem 9.20 and Corollary 9.21 for any function that is analytic on the spectrum of A, i.e., f(A) = X f(Λ) X^{-1} = X diag(f(λ_1), ..., f(λ_n)) X^{-1}.

It is desirable, of course, to have a version of Theorem 9.20 and Corollary 9.21 in which A is not necessarily diagonalizable. It is necessary first to consider the notion of Jordan canonical form, from which such a result is then available and presented later in this chapter.
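A quick check of Theorem 9.20, assuming a hypothetical diagonalizable matrix and comparing the eigenvector sum against SciPy's expm:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [0.0, 0.0, 5.0]])     # distinct eigenvalues, hence diagonalizable
t = 0.7

lam, X = np.linalg.eig(A)
YH = np.linalg.inv(X)               # rows are the normalized left eigenvectors y_i^H
E = sum(np.exp(lam[i] * t) * np.outer(X[:, i], YH[i, :]) for i in range(3))
print(np.allclose(E, expm(t * A)))  # e^{tA} = sum_i e^{lambda_i t} x_i y_i^H
```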
9.2 Jordan Canonical Form

Theorem 9.22.

1. Jordan Canonical Form (JCF): For all A ∈ C^{n×n} with eigenvalues λ_1, ..., λ_n ∈ C (not necessarily distinct), there exists X ∈ C_n^{n×n} such that

    X^{-1} A X = J = diag(J_1, ..., J_q),                        (9.12)

where each of the Jordan block matrices J_1, ..., J_q is of the form

    J_i = [ λ_i   1                ]
          [      λ_i   1           ]
          [            ...   1     ]                             (9.13)
          [                  λ_i   ],

i.e., the k_i × k_i matrix with λ_i on the diagonal, 1's on the superdiagonal, and 0's elsewhere.

2. Real Jordan Canonical Form: For all A ∈ R^{n×n} with eigenvalues λ_1, ..., λ_n (not necessarily distinct), there exists X ∈ R_n^{n×n} such that

    X^{-1} A X = J = diag(J_1, ..., J_q),                        (9.14)

where each of the Jordan block matrices J_1, ..., J_q is of the form (9.13) in the case of real eigenvalues λ_i ∈ Λ(A), and

    J_i = [ M_i   I_2                ]
          [      M_i   I_2           ]
          [            ...   I_2     ]
          [                  M_i     ],

where M_i = [α_i  β_i; −β_i  α_i] and I_2 = [1 0; 0 1], in the case of complex conjugate eigenvalues α_i ± jβ_i ∈ Λ(A), and Σ_{i=1}^q k_i = n.

Proof: For the proof see, for example, [21, pp. 120–124].  □

Transformations like T = [1  −j; −j  1] allow us to go back and forth between a real JCF and its complex counterpart:

    T^{-1} [ α + jβ   0 ; 0   α − jβ ] T = [ α   β ; −β   α ] = M.

For nontrivial Jordan blocks, the situation is only a bit more complicated. With a similarly structured 4 × 4 transformation T (whose entries are again 0, 1, and ±j), it is easily checked that

    T^{-1} [ α + jβ   1   0   0 ; 0   α + jβ   0   0 ; 0   0   α − jβ   1 ; 0   0   0   α − jβ ] T = [ M   I_2 ; 0   M ].
Theorem 9.23. Let A ∈ C^{n×n} with eigenvalues λ_1, ..., λ_n. Then

1. Tr(A) = Σ_{i=1}^n λ_i.
2. det(A) = Π_{i=1}^n λ_i.

Proof:
1. From Theorem 9.22 we have that A = X J X^{-1}. Thus, Tr(A) = Tr(X J X^{-1}) = Tr(J X^{-1} X) = Tr(J) = Σ_{i=1}^n λ_i.
2. Again, from Theorem 9.22 we have that A = X J X^{-1}. Thus, det(A) = det(X J X^{-1}) = det(J) = Π_{i=1}^n λ_i.  □

Definition 9.24. The characteristic polynomials of the Jordan blocks defined in Theorem 9.22 are called the elementary divisors or invariant factors of A.

Theorem 9.25. The characteristic polynomial of a matrix is the product of its elementary divisors. The minimal polynomial of a matrix is the product of the elementary divisors of highest degree corresponding to distinct eigenvalues.
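Both parts of Theorem 9.23 are easy to verify numerically; a short sketch using a random matrix (hypothetical, for illustration only):

```python
import numpy as np

A = np.random.default_rng(0).standard_normal((5, 5))
lam = np.linalg.eigvals(A)
print(np.isclose(np.trace(A), lam.sum().real))            # Tr(A) = sum of eigenvalues
print(np.isclose(np.linalg.det(A), np.prod(lam).real))    # det(A) = product of eigenvalues
```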
Example 9.26. Suppose A ∈ R^{7×7} is known to have π(λ) = (λ − 1)^4 (λ − 2)^3 and α(λ) = (λ − 1)^2 (λ − 2)^2. Then A has two possible JCFs (not counting reorderings of the diagonal blocks):

    J^(1) = diag(J_2(1), J_1(1), J_1(1), J_2(2), J_1(2))   and   J^(2) = diag(J_2(1), J_2(1), J_2(2), J_1(2)),

where J_k(λ) denotes the k × k Jordan block with eigenvalue λ. Note that J^(1) has elementary divisors (λ − 1)^2, (λ − 1), (λ − 1), (λ − 2)^2, and (λ − 2), while J^(2) has elementary divisors (λ − 1)^2, (λ − 1)^2, (λ − 2)^2, and (λ − 2).

Example 9.27. Knowing π(λ), α(λ), and rank(A − λ_i I) for distinct λ_i is not sufficient to determine the JCF of A uniquely. For example, the matrices

    A_1 = diag(J_3(a), J_3(a), J_1(a))   and   A_2 = diag(J_3(a), J_2(a), J_2(a))

both have π(λ) = (λ − a)^7, α(λ) = (λ − a)^3, and rank(A − aI) = 4, i.e., three eigenvectors.

9.3 Determination of the JCF

The first critical item of information in determining the JCF of a matrix A ∈ R^{n×n} is its number of eigenvectors. For each distinct eigenvalue λ_i, the associated number of linearly independent right (or left) eigenvectors is given by dim N(A − λ_i I) = n − rank(A − λ_i I). The straightforward case is, of course, when λ_i is simple, i.e., of algebraic multiplicity 1; it then has precisely one eigenvector. The more interesting (and difficult) case occurs when λ_i is of algebraic multiplicity greater than one. For example, suppose

    A = [3 2 1; 0 3 0; 0 0 3].

Then

    A − 3I = [0 2 1; 0 0 0; 0 0 0]

has rank 1, so the eigenvalue 3 has two eigenvectors associated with it. If we let [ξ_1 ξ_2 ξ_3]^T denote a solution to the linear system (A − 3I)ξ = 0, we find that 2ξ_2 + ξ_3 = 0. Thus, both x_1 = [1 0 0]^T and x_2 = [0 1 −2]^T are eigenvectors (and are independent). To get a third vector x_3 such that X = [x_1 x_2 x_3] reduces A to JCF, we need the notion of principal vector.

Definition 9.28. Let A ∈ C^{n×n} (or R^{n×n}). Then x is a right principal vector of degree k associated with λ ∈ Λ(A) if and only if (A − λI)^k x = 0 and (A − λI)^{k−1} x ≠ 0.

Remark 9.29.

1. An analogous definition holds for a left principal vector of degree k.
2. The phrase "of grade k" is often used synonymously with "of degree k."

3. Principal vectors are sometimes also called generalized eigenvectors, but the latter term will be assigned a much different meaning in Chapter 12.

4. The case k = 1 corresponds to the "usual" eigenvector, since then (A − λI)x = 0 while (A − λI)^0 x = x ≠ 0.

5. A right (or left) principal vector of degree k is associated with a Jordan block J_i of dimension k or larger.

9.3.1 Theoretical computation

To motivate the development of a procedure for determining principal vectors, consider a 2 × 2 Jordan block [λ 1; 0 λ]. Denote by x^(1) and x^(2) the two columns of a matrix X ∈ R_2^{2×2} that reduces a matrix A to this JCF. Then the equation AX = XJ can be written

    A [x^(1)  x^(2)] = [x^(1)  x^(2)] [λ 1; 0 λ].

The first column yields the equation A x^(1) = λ x^(1), which simply says that x^(1) is a right eigenvector. The second column yields the following equation for x^(2), the principal vector of degree 2:

    (A − λI) x^(2) = x^(1).                                      (9.17)

If we premultiply (9.17) by (A − λI), we find (A − λI)^2 x^(2) = (A − λI) x^(1) = 0. Thus, the definition of principal vector is satisfied.

This suggests a "general" procedure. First, determine all eigenvalues of A ∈ R^{n×n} (or C^{n×n}). Then for each distinct λ ∈ Λ(A) perform the following:

1. Solve

    (A − λI) x^(1) = 0.

This step finds all the eigenvectors (i.e., principal vectors of degree 1) associated with λ. The number of eigenvectors depends on the rank of A − λI. For example, if rank(A − λI) = n − 1, there is only one eigenvector. If the algebraic multiplicity of λ is greater than its geometric multiplicity, principal vectors still need to be computed from succeeding steps.

2. For each independent x^(1), solve

    (A − λI) x^(2) = x^(1).

The number of linearly independent solutions at this step depends on the rank of (A − λI)^2. If, for example, this rank is n − 2, there are two linearly independent solutions to the homogeneous equation (A − λI)^2 x^(2) = 0. One of these solutions is, of course, x^(1) (≠ 0), since (A − λI)^2 x^(1) = (A − λI) 0 = 0. The other solution is the desired principal vector of degree 2. (It may be necessary to take a linear combination of x^(1) vectors to get a right-hand side that is in R(A − λI). See, for example, Exercise 7.)
3. For each independent x^(2) from step 2, solve

    (A − λI) x^(3) = x^(2).

4. Continue in this way until the total number of independent eigenvectors and principal vectors is equal to the algebraic multiplicity of λ.

Unfortunately, this natural-looking procedure can fail to find all Jordan vectors. For more extensive treatments, see, for example, [20] and [21].

Theorem 9.30. Suppose A ∈ C^{k×k} has an eigenvalue λ of algebraic multiplicity k and suppose further that rank(A − λI) = k − 1. Let X = [x^(1), ..., x^(k)], where the chain of vectors x^(i) is constructed as above. Then {x^(1), ..., x^(k)} is a linearly independent set.

Theorem 9.31. Principal vectors associated with different Jordan blocks are linearly independent.

Determination of eigenvectors and principal vectors is obviously very tedious for anything beyond simple problems (n = 2 or 3, say). Attempts to do such calculations in finite-precision floating-point arithmetic generally prove unreliable. There are significant numerical difficulties inherent in attempting to compute a JCF, and the interested student is strongly urged to consult the classical and very readable [8] to learn why. Notice that high-quality mathematical software such as MATLAB does not offer a jcf command, although a jordan command is available in MATLAB's Symbolic Toolbox.
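As a Python analogue of such a symbolic jordan command, SymPy's Matrix.jordan_form computes a JCF in exact arithmetic. The 2 × 2 matrix below is a hypothetical illustration (it is not the example that follows); it is similar to a single Jordan block J_2(2).

```python
import sympy as sp

A = sp.Matrix([[3, 1],
               [-1, 1]])             # eigenvalue 2 with algebraic multiplicity 2, one eigenvector
P, J = A.jordan_form()               # exact (symbolic) computation of the JCF
print(J)                             # Matrix([[2, 1], [0, 2]])
print(sp.simplify(P.inv() * A * P))  # equals J
```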
Example 9.33. Let A be a 3 × 3 matrix with eigenvalues λ_1 = 1, λ_2 = 1, and λ_3 = 2. First, find the eigenvectors associated with the distinct eigenvalues 1 and 2. Solving (A − 2I)x_3^(1) = 0 yields the eigenvector x_3^(1), and solving (A − 1·I)x_1^(1) = 0 yields the eigenvector x_1^(1). To find a principal vector of degree 2 associated with the multiple eigenvalue 1, solve (A − 1·I)x_1^(2) = x_1^(1) to get a principal vector x_1^(2). Now let X = [x_1^(1)  x_1^(2)  x_3^(1)]. Then it is easy to check that

    X^{-1} A X = [1 1 0; 0 1 0; 0 0 2].

9.3.2 On the +1's in JCF blocks

In this subsection we show that the nonzero superdiagonal elements of a JCF need not be 1's but can be arbitrary — so long as they are nonzero. For the sake of definiteness, we consider below the case of a single Jordan block, but the result clearly holds for any JCF. Suppose A ∈ R^{n×n} and X^{-1} A X = J, where J is a single n × n Jordan block with eigenvalue λ. Let D = diag(d_1, ..., d_n) be a nonsingular "scaling" matrix. Then

    D^{-1} (X^{-1} A X) D = D^{-1} J D = J̃,

where J̃ agrees with J except that its superdiagonal elements are d_2/d_1, d_3/d_2, ..., d_n/d_{n−1} rather than 1's.
Appropriate choice of the d_i's then yields any desired nonzero superdiagonal elements. This result can also be interpreted in terms of the matrix X = [x_1, ..., x_n] of eigenvectors and principal vectors that reduces A to its JCF. Specifically, J̃ is obtained from A via the similarity transformation XD = [d_1 x_1, ..., d_n x_n].

In a similar fashion, the reverse-order identity matrix (or exchange matrix)

    P = P^T = P^{-1} = [0 ··· 0 1; 0 ··· 1 0; ···; 1 0 ··· 0]                   (9.18)

can be used to put the superdiagonal elements in the subdiagonal instead if that is desired: P^{-1} J P agrees with J except that the 1's appear on the subdiagonal rather than the superdiagonal.
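Both observations are easy to confirm numerically; the following sketch (with an arbitrarily chosen eigenvalue and scaling matrix) rescales a 3 × 3 Jordan block and then reverses it with the exchange matrix.

```python
import numpy as np

lam = 4.0
J = lam * np.eye(3) + np.diag(np.ones(2), 1)   # single 3x3 Jordan block

D = np.diag([1.0, 2.0, 6.0])                   # any nonzero d_i will do
print(np.linalg.inv(D) @ J @ D)                # superdiagonal becomes d2/d1 = 2, d3/d2 = 3

P = np.fliplr(np.eye(3))                       # reverse-order identity (exchange) matrix
print(P @ J @ P)                               # the 1's move to the subdiagonal
```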
9.4 Geometric Aspects of the JCF

The matrix X that reduces a matrix A ∈ R^{n×n} (or C^{n×n}) to a JCF provides a change of basis with respect to which the matrix is diagonal or block diagonal. It is thus natural to expect an associated direct sum decomposition of R^n. Such a decomposition is given in the following theorem.

Definition 9.34. Let V be a vector space over F and suppose A : V → V is a linear transformation. A subspace S ⊆ V is A-invariant if AS ⊆ S, where AS is defined as the set {As : s ∈ S}.

Theorem 9.35. Suppose A ∈ R^{n×n} has characteristic polynomial

    π(λ) = (λ − λ_1)^{n_1} ··· (λ − λ_m)^{n_m}

and minimal polynomial

    α(λ) = (λ − λ_1)^{ν_1} ··· (λ − λ_m)^{ν_m},

with λ_1, ..., λ_m distinct. Then

    R^n = N(A − λ_1 I)^{n_1} ⊕ ··· ⊕ N(A − λ_m I)^{n_m} = N(A − λ_1 I)^{ν_1} ⊕ ··· ⊕ N(A − λ_m I)^{ν_m}.

Note that dim N(A − λ_i I)^{ν_i} = n_i.

If V is taken to be R^n over R and S ∈ R^{n×k} is a matrix whose columns s_1, ..., s_k span a k-dimensional subspace S, i.e., R(S) = S, then S is A-invariant if and only if there exists M ∈ R^{k×k} such that

    A S = S M.                                                   (9.19)

This follows easily by comparing the ith columns of each side of (9.19).

Example 9.36. The equation Ax = λx = xλ defining a right eigenvector x of an eigenvalue λ says that x spans an A-invariant subspace (of dimension one).

Theorem 9.37. If V is a vector space over F such that V = N_1 ⊕ ··· ⊕ N_m, where each N_i is A-invariant, then a basis for V can be chosen with respect to which A has a block diagonal representation.

The Jordan canonical form is a special case of the above theorem. Equivalently, suppose X = [X_1, ..., X_m] ∈ R_n^{n×n} is such that X^{-1} A X = diag(J_1, ..., J_m), where each J_i = diag(J_{i1}, ..., J_{ik_i}) and each J_{ik} is a Jordan block corresponding to λ_i ∈ Λ(A). Note that A X_i = X_i J_i, so by (9.19) the columns of X_i (i.e., the eigenvectors and principal vectors associated with λ_i) span an A-invariant subspace of R^n.

Example 9.38. Suppose X block diagonalizes A, i.e.,

    X^{-1} A X = [J_1  0; 0  J_2].

Rewriting in the form A [X_1  X_2] = [X_1  X_2] [J_1  0; 0  J_2], where X = [X_1  X_2] is partitioned compatibly, we have A X_i = X_i J_i, i = 1, 2, so again by (9.19) the columns of X_1 and of X_2 span A-invariant subspaces. If A has distinct eigenvalues λ_i as in Theorem 9.35, we could also choose bases for the subspaces N(A − λ_i I)^{n_i} by other means (e.g., via the SVD). We would then get a block diagonal representation for A with full blocks rather than the highly structured Jordan blocks. Other such "canonical" forms are discussed in text that follows.

Theorem 9.39. Let p(A) = α_0 I + α_1 A + ··· + α_q A^q be a polynomial in A. Then N(p(A)) and R(p(A)) are A-invariant.

Finally, we return to the problem of developing a formula for e^{tA} in the case that A is not necessarily diagonalizable. Let Y_i ∈ C^{n×n_i} be a Jordan basis for N(A^T − λ_i I)^{n_i}; equivalently, partition X^{-1} = Y^H = [Y_1, ..., Y_m]^H compatibly.
S.41. . for a k x k Jordan block 7.YiH. denoted sgn(A).40. ifRe(z) < O. Then the sign of A. Jm) [YI . and let e cnxn be a Jordan canonical form for A. It is a generalization of the sign (or signum) of a scalar.JiYi .I = XJy H = [XI.. It is a generalization of the sign (or signum) of a scalar. 9.41. of defined Definition 9. is given by eigenvalues in the right halfplane..5. Then compatibly. E f= 0. Xm] diag(JI.5 9. .40... Let z E C with Re(z) ^ O. The Matrix Sign Function 9. Definition 9. with N containing all Jordan blocks corresponding to the be a Jordan canonical form for with N containing all Jordan blocks corresponding to the eigenvalues of in the left halfplane and P containing all Jordan blocks corresponding to eigenvalues of A in the left halfplane and P containing all Jordan blocks corresponding to eigenvalues in the right halfplane. A called the matrix sign function.. Suppose A E C"x" has no eigenvalues on the imaginary axis. Definition 9.5 The Matrix Sign Function The Matrix Sign Function section brief interesting useful In this section we give a very brief introduction to an interesting and useful matrix function function called the matrix sign function. . i=1 which is a useful formula when used in conjunction with the result which is a useful formula when used in conjunction with the result A 0 A A 0 eAt teAt eAt . . m ••• . Then the sign of A.= Ai. Then A = XJX. denoted sgn(A). Ym]H = LX. associated with an eigenvalue A. . i=1 H In a similar fashion we can compute m etA = LXietJ. Definition 9.lt 2 e At 2! 0 exp t 0 0 0 1 A teAt eAt 0 0 0 0 0 block Ji associated A = A. is given by sgn(A) = X [ / 0] 0 / X I . The Matrix Sign Function 91 91 compatibly. Then the sign of z is defined by Re(z) {+1 sgn(z) = IRe(z) I = 1 ifRe(z) > 0.9. A survey of the matrix sign function and some of its applications can be found in [15]..
9.5 The Matrix Sign Function

In this section we give a very brief introduction to an interesting and useful matrix function called the matrix sign function. It is a generalization of the sign (or signum) of a scalar. A survey of the matrix sign function and some of its applications can be found in [15].

Definition 9.40. Let z ∈ C with Re(z) ≠ 0. Then the sign of z is defined by

    sgn(z) = Re(z)/|Re(z)| = { +1 if Re(z) > 0,  −1 if Re(z) < 0 }.

Definition 9.41. Suppose A ∈ C^{n×n} has no eigenvalues on the imaginary axis, and let

    X^{-1} A X = J = [N  0; 0  P]

be a Jordan canonical form for A, with N containing all Jordan blocks corresponding to the eigenvalues of A in the left half-plane and P containing all Jordan blocks corresponding to eigenvalues in the right half-plane. Then the sign of A, denoted sgn(A), is given by

    sgn(A) = X [−I  0; 0  I] X^{-1},

where the negative and positive identity matrices are of the same dimensions as N and P, respectively.

There are other equivalent definitions of the matrix sign function, but the one given here is especially useful in deriving many of its key properties. The JCF definition of the matrix sign function does not generally lend itself to reliable computation on a finite-word-length digital computer. In fact, its reliable numerical calculation is an interesting topic in its own right.

We state some of the more useful properties of the matrix sign function as theorems. Their straightforward proofs are left to the exercises.

Theorem 9.42. Suppose A ∈ C^{n×n} has no eigenvalues on the imaginary axis, and let S = sgn(A). Then the following hold:
1. S is diagonalizable with eigenvalues equal to ±1.
2. S^2 = I.
3. AS = SA.
4. sgn(A^H) = (sgn(A))^H.
5. sgn(T^{-1} A T) = T^{-1} sgn(A) T for all nonsingular T ∈ C^{n×n}.
6. sgn(cA) = sgn(c) sgn(A) for all nonzero real scalars c.

Theorem 9.43. Suppose A ∈ C^{n×n} has no eigenvalues on the imaginary axis, and let S = sgn(A). Then the following hold:
1. R(S − I) is an A-invariant subspace corresponding to the left half-plane eigenvalues of A (the negative invariant subspace).
2. R(S + I) is an A-invariant subspace corresponding to the right half-plane eigenvalues of A (the positive invariant subspace).
3. negA ≡ (I − S)/2 is a projection onto the negative invariant subspace of A.
4. posA ≡ (I + S)/2 is a projection onto the positive invariant subspace of A.
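Several of these properties can be checked numerically with SciPy's signm; the matrix below is a hypothetical example built so that its eigenvalues are well away from the imaginary axis.

```python
import numpy as np
from scipy.linalg import signm

rng = np.random.default_rng(1)
X = rng.standard_normal((5, 5))                 # generically nonsingular
D = np.diag([-3.0, -1.0, 2.0, 4.0, 5.0])        # two left-half-plane and three right-half-plane eigenvalues
A = X @ D @ np.linalg.inv(X)

S = signm(A)
print(np.allclose(S @ S, np.eye(5)))            # S^2 = I
print(np.allclose(A @ S, S @ A))                # AS = SA
P_pos = (np.eye(5) + S) / 2                     # projection onto the positive invariant subspace
print(np.allclose(P_pos @ P_pos, P_pos))        # idempotent
print(round(np.trace(P_pos)))                   # 3 = number of right-half-plane eigenvalues
```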
EXERCISES

1. Let A ∈ C^{n×n} have distinct eigenvalues λ_1, ..., λ_n with corresponding right eigenvectors x_1, ..., x_n and left eigenvectors y_1, ..., y_n, respectively. Let v ∈ C^n be an arbitrary vector. Show that v can be expressed (uniquely) as a linear combination of the right eigenvectors. Find the appropriate expression for v as a linear combination of the left eigenvectors as well.

2. Suppose A ∈ C^{n×n} is skew-Hermitian, i.e., A^H = −A. Prove that all eigenvalues of a skew-Hermitian matrix must be pure imaginary.

3. Suppose A ∈ C^{n×n} is Hermitian. Let λ be an eigenvalue of A with corresponding right eigenvector x. Show that x is also a left eigenvector for λ. Prove the same result if A is skew-Hermitian.

4. Suppose a matrix A ∈ R^{5×5} has eigenvalues {2, 2, 2, 2, 3}. Determine all possible JCFs for A.

5. Determine the eigenvalues, right eigenvectors and right principal vectors if necessary, and (real) JCFs of the two matrices given in this exercise.

6. Determine the JCFs of the two matrices given in this exercise.

7. Let A be the 3 × 3 matrix given in this exercise. Find a nonsingular matrix X such that X^{-1} A X = J, where J is the JCF

    J = [1 1 0; 0 1 0; 0 0 1].

Hint: Use [1 1 −1]^T as an eigenvector. Other seemingly natural choices of eigenvectors lead to an equation (A − I)x^(2) = x^(1) that cannot be solved.

8. Show that all right eigenvectors of the Jordan block matrix in Theorem 9.22 must be multiples of e_1 ∈ R^k. Characterize all left eigenvectors.

9. Let A ∈ R^{n×n} be of the form A = x y^T, where x, y ∈ R^n are nonzero vectors with x^T y = 0. Determine the JCF of A.

10. Let A ∈ R^{n×n} be of the form A = I + x y^T, where x, y ∈ R^n are nonzero vectors with x^T y = 0. Determine the JCF of A.

11. Suppose a matrix A ∈ R^{16×16} has 16 eigenvalues at 0 and its JCF consists of a single Jordan block of the form specified in Theorem 9.22. Suppose the small number 10^{-16} is added to the (16,1) element of J. What are the eigenvalues of this slightly perturbed matrix?
12. Show that every matrix A ∈ R^{n×n} can be factored in the form A = S_1 S_2, where S_1 and S_2 are real symmetric matrices and one of them, say S_1, is nonsingular.
Hint: Suppose A = X J X^{-1} is a reduction of A to JCF and suppose we can construct the "symmetric factorization" of J. Then A = (X S_1 X^T)(X^{-T} S_2 X^{-1}) would be the required symmetric factorization of A. Thus, it suffices to prove the result for the JCF. The transformation P in (9.18) is useful.

13. Prove that every matrix A ∈ R^{n×n} is similar to its transpose and determine a similarity transformation explicitly.
Hint: Use the factorization in the previous exercise.

14. Consider the block upper triangular matrix

    A = [A_11  A_12; 0  A_22],

where A ∈ R^{n×n} and A_11 ∈ R^{k×k} with 1 ≤ k < n. Suppose A_12 ≠ 0 and that we want to block diagonalize A via the similarity transformation T = [I  X; 0  I], where X ∈ R^{k×(n−k)}, i.e.,

    T^{-1} A T = [A_11  0; 0  A_22].

Find a matrix equation that X must satisfy for this to be possible. If n = 2 and k = 1, what can you say further, in terms of A_11 and A_22, about when the equation for X is solvable?

15. Prove Theorem 9.42.

16. Prove Theorem 9.43.

17. Suppose A ∈ C^{n×n} has all its eigenvalues in the left half-plane. Prove that sgn(A) = −I.
Chapter 10

Canonical Forms

10.1 Some Basic Canonical Forms

Problem: Let V and W be vector spaces and suppose A : V → W is a linear transformation. Find bases in V and W with respect to which Mat A has a "simple form" or "canonical form." In matrix terms, if A ∈ R^{m×n}, find P ∈ R_m^{m×m} and Q ∈ R_n^{n×n} such that PAQ has a "canonical form." The transformation A ↦ PAQ is called an equivalence; it is called an orthogonal equivalence if P and Q are orthogonal matrices.

Remark 10.1. We can also consider the case A ∈ C^{m×n} and unitary equivalence if P and Q are unitary.

Two special cases are of interest:

1. If W = V and Q = P^{-1}, the transformation A ↦ P A P^{-1} is called a similarity.

2. If W = V and if Q = P^T is orthogonal, the transformation A ↦ P A P^T is called an orthogonal similarity (or unitary similarity in the complex case).

The following results are typical of what can be achieved under a unitary similarity. If A = A^H ∈ C^{n×n} has eigenvalues λ_1, ..., λ_n, then there exists a unitary matrix U such that U^H A U = D, where D = diag(λ_1, ..., λ_n). This is proved in Theorem 10.2. What other matrices are "diagonalizable" under unitary similarity? The answer is given in Theorem 10.9, where it is proved that a general matrix A ∈ C^{n×n} is unitarily similar to a diagonal matrix if and only if it is normal (i.e., A A^H = A^H A). Normal matrices include Hermitian, skew-Hermitian, and unitary matrices (and their "real" counterparts: symmetric, skew-symmetric, and orthogonal, respectively), as well as other matrices that merely satisfy the definition, such as A = [a  b; −b  a] for real scalars a and b. If a matrix A is not normal, the most "diagonal" we can get is the JCF described in Chapter 9.

Theorem 10.2. Let A = A^H ∈ C^{n×n} have (real) eigenvalues λ_1, ..., λ_n. Then there exists a unitary matrix X such that X^H A X = D = diag(λ_1, ..., λ_n) (the columns of X are orthonormal eigenvectors for A).
Proof: Let x_1 be a right eigenvector corresponding to λ_1, and normalize it such that x_1^H x_1 = 1. Then there exist n − 1 additional vectors x_2, ..., x_n such that X = [x_1, ..., x_n] is unitary. Now

    X^H A X = [x_1^H A x_1   x_1^H A X_2; X_2^H A x_1   X_2^H A X_2]            (10.1)
            = [λ_1   x_1^H A X_2; 0   X_2^H A X_2]
            = [λ_1   0; 0   X_2^H A X_2].                                       (10.2)

In (10.1) we have used the fact that A x_1 = λ_1 x_1. When combined with the fact that x_1^H x_1 = 1, we get λ_1 remaining in the (1,1)-block. We also get 0 in the (2,1)-block by noting that x_1 is orthogonal to all vectors in X_2. In (10.2), we get 0 in the (1,2)-block by noting that X^H A X is Hermitian. The proof is completed easily by induction upon noting that the (2,2)-block must have eigenvalues λ_2, ..., λ_n.  □

Given a unit vector x_1 ∈ R^n, the construction of X_2 ∈ R^{n×(n−1)} such that X = [x_1  X_2] is orthogonal is frequently required. The construction can actually be performed quite easily by means of Householder (or Givens) transformations as in the proof of the following general result.

Theorem 10.3. Let X_1 ∈ C^{n×k} have orthonormal columns and suppose U is a unitary matrix such that U X_1 = [R; 0], where R ∈ C^{k×k} is upper triangular. Write U^H = [U_1  U_2] with U_1 ∈ C^{n×k}. Then [X_1  U_2] is unitary.

Proof: Let X_1 = [x_1, ..., x_k]. Construct a sequence of Householder matrices (also known as elementary reflectors) H_1, ..., H_k in the usual way (see below) such that

    H_k ··· H_1 [x_1, ..., x_k] = [R; 0],

where R is upper triangular (and nonsingular since x_1, ..., x_k are orthonormal). Let U = H_k ··· H_1. Then U^H = H_1 ··· H_k and

    x_i^H U_2 = 0  (i ∈ k)

means that x_i is orthogonal to each of the n − k columns of U_2. But the latter are orthonormal since they are the last n − k rows of the unitary matrix U. Thus, [X_1  U_2] is unitary.  □

The construction called for in Theorem 10.2 is then a special case of Theorem 10.3 for k = 1. We illustrate the construction of the necessary Householder matrix for k = 1. For simplicity, we consider the real case. Let the unit vector x_1 be denoted by [ξ_1, ..., ξ_n]^T.
Then the necessary Householder matrix needed for the construction of X_2 is given by

    U = I − 2 u u^+ = I − (2 / u^T u) u u^T,   where u = [ξ_1 ± 1, ξ_2, ..., ξ_n]^T.

It can easily be checked that U is symmetric and U^T U = U^2 = I, so U is orthogonal. To see that U effects the necessary compression of x_1, it is easily verified that u^T u = 2 ± 2ξ_1 and u^T x_1 = 1 ± ξ_1, so that

    U x_1 = x_1 − (2 u^T x_1 / u^T u) u = x_1 − u = [∓1, 0, ..., 0]^T.

Further details on Householder matrices, including the choice of sign and the complex case, can be consulted in standard numerical linear algebra texts such as [7], [11], [23], [25].

The real version of Theorem 10.2 is worth stating separately since it is applied frequently in applications.

Theorem 10.4. Let A = A^T ∈ R^{n×n} have eigenvalues λ_1, ..., λ_n. Then there exists an orthogonal matrix X ∈ R^{n×n} (whose columns are orthonormal eigenvectors of A) such that X^T A X = D = diag(λ_1, ..., λ_n).

Note that Theorem 10.4 implies that a symmetric matrix A (with the obvious analogue from Theorem 10.2 for Hermitian matrices) can be written

    A = X D X^T = Σ_{i=1}^n λ_i x_i x_i^T,                                      (10.3)

which is often called the spectral representation of A. In fact, A in (10.3) is actually a weighted sum of orthogonal projections P_i (onto the one-dimensional eigenspaces corresponding to the λ_i's), i.e.,

    A = Σ_{i=1}^n λ_i P_i,

where P_i = P_{R(x_i)} = x_i x_i^+ = x_i x_i^T since x_i^T x_i = 1.
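The spectral representation (10.3) is easy to demonstrate with NumPy's symmetric eigensolver; the matrix below is a hypothetical symmetric example.

```python
import numpy as np

rng = np.random.default_rng(2)
B = rng.standard_normal((4, 4))
A = (B + B.T) / 2                                   # a generic symmetric matrix

lam, X = np.linalg.eigh(A)                          # X orthogonal; columns are orthonormal eigenvectors
print(np.allclose(X.T @ A @ X, np.diag(lam)))       # X^T A X = D

# Spectral representation (10.3): A = sum_i lambda_i x_i x_i^T
A_spec = sum(lam[i] * np.outer(X[:, i], X[:, i]) for i in range(4))
print(np.allclose(A, A_spec))
```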
The following pair of theorems form the theoretical foundation of the double-Francis-QR algorithm used to compute matrix eigenvalues in a numerically stable and reliable way.

Theorem 10.5 (Schur). Let A ∈ C^{n×n}. Then there exists a unitary matrix U such that U^H A U = T, where T is upper triangular.

Proof: The proof of this theorem is essentially the same as that of Theorem 10.2 except that in this case (using the notation U rather than X) the (1,2)-block u_1^H A U_2 is not 0.  □

In the case of A ∈ R^{n×n}, it is thus unitarily similar to an upper triangular matrix, but if A has a complex conjugate pair of eigenvalues, then complex arithmetic is clearly needed to place such eigenvalues on the diagonal of T. However, the next theorem shows that every A ∈ R^{n×n} is also orthogonally similar (i.e., real arithmetic) to a quasi-upper-triangular matrix. A quasi-upper-triangular matrix is block upper triangular with 1 × 1 diagonal blocks corresponding to its real eigenvalues and 2 × 2 diagonal blocks corresponding to its complex conjugate pairs of eigenvalues.

Theorem 10.6 (Murnaghan–Wintner). Let A ∈ R^{n×n}. Then there exists an orthogonal matrix U such that U^T A U = S, where S is quasi-upper-triangular.

Definition 10.7. The triangular matrix T in Theorem 10.5 is called a Schur canonical form or Schur form. The quasi-upper-triangular matrix S in Theorem 10.6 is called a real Schur canonical form or real Schur form (RSF). The columns of a unitary [orthogonal] matrix U that reduces a matrix to [real] Schur form are called Schur vectors.

Example 10.8. Any 3 × 3 matrix of the form [a  b  c; −b  a  d; 0  0  e] with b ≠ 0 is in RSF: its leading 2 × 2 diagonal block corresponds to the complex conjugate pair of eigenvalues a ± bj and its (3,3) element to the real eigenvalue e. Its real JCF retains the 2 × 2 block [a  b; −b  a] together with the 1 × 1 block e.

Note that only the first Schur vector (and then only if the corresponding first eigenvalue is real if U is orthogonal) is an eigenvector. However, what is true, and sufficient for virtually all applications (see, for example, [17]), is that the first k Schur vectors span the same A-invariant subspace as the eigenvectors corresponding to the first k eigenvalues along the diagonal of T (or S).
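Both the complex Schur form and the RSF are available in SciPy; the matrix below is a hypothetical example with one complex conjugate pair of eigenvalues and one real eigenvalue.

```python
import numpy as np
from scipy.linalg import schur

A = np.array([[0.0,  2.0, 1.0],
              [-2.0, 0.0, 3.0],
              [0.0,  0.0, 1.0]])          # eigenvalues +-2j and 1

S, U = schur(A, output='real')            # real Schur form: quasi-upper-triangular, U orthogonal
T, V = schur(A, output='complex')         # complex Schur form: upper triangular, V unitary

print(np.allclose(U @ S @ U.T, A))        # A = U S U^T
print(np.allclose(V @ T @ V.conj().T, A)) # A = V T V^H
```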
While every matrix can be reduced to Schur form (or RSF), it is of interest to know when we can go further and reduce a matrix via unitary similarity to diagonal form. The following theorem answers this question.

Theorem 10.9. A matrix A ∈ C^{n×n} is unitarily similar to a diagonal matrix if and only if A is normal (i.e., A^H A = A A^H).

Proof: Suppose U is a unitary matrix such that U^H A U = D, where D is diagonal. Then

    A A^H = U D U^H U D^H U^H = U D D^H U^H = U D^H D U^H = A^H A,

so A is normal. Conversely, suppose A is normal and let U be a unitary matrix such that U^H A U = T, where T is an upper triangular matrix (Theorem 10.5). Then

    T^H T = U^H A^H U U^H A U = U^H A^H A U = U^H A A^H U = T T^H.

It is then a routine exercise to show that T must, in fact, be diagonal.  □

10.2 Definite Matrices

Definition 10.10. A symmetric matrix A ∈ R^{n×n} is

1. positive definite if and only if x^T A x > 0 for all nonzero x ∈ R^n. We write A > 0.
2. nonnegative definite (or positive semidefinite) if and only if x^T A x ≥ 0 for all nonzero x ∈ R^n. We write A ≥ 0.
3. negative definite if −A is positive definite. We write A < 0.
4. nonpositive definite (or negative semidefinite) if −A is nonnegative definite. We write A ≤ 0.

Also, if A and B are symmetric matrices, we write A > B if and only if A − B > 0 or B − A < 0. Similarly, we write A ≥ B if and only if A − B ≥ 0 or B − A ≤ 0.

Remark 10.11. If a matrix is neither definite nor semidefinite, it is said to be indefinite.

Remark 10.12. If A ∈ C^{n×n} is Hermitian, all the above definitions hold except that superscript H's replace T's. Indeed, this is generally true for all results in the remainder of this section that may be stated in the real case for simplicity.

Theorem 10.13. Let A = A^H ∈ C^{n×n} with eigenvalues λ_1 ≥ λ_2 ≥ ··· ≥ λ_n. Then for all x ∈ C^n,

    λ_n x^H x ≤ x^H A x ≤ λ_1 x^H x.

Proof: Let U be a unitary matrix that diagonalizes A as in Theorem 10.2. Furthermore, let y = U^H x, where x is an arbitrary vector in C^n, and denote the components of y by η_i, i ∈ n. Then

    x^H A x = (U^H x)^H U^H A U (U^H x) = y^H D y = Σ_{i=1}^n λ_i |η_i|^2.

But clearly

    Σ_{i=1}^n λ_i |η_i|^2 ≤ λ_1 Σ_{i=1}^n |η_i|^2 = λ_1 y^H y = λ_1 x^H x

and

    Σ_{i=1}^n λ_i |η_i|^2 ≥ λ_n Σ_{i=1}^n |η_i|^2 = λ_n y^H y = λ_n x^H x,

from which the theorem follows.  □
Remark 10.14. The ratio x^H A x / x^H x for A = A^H ∈ C^{n×n} and nonzero x ∈ C^n is called the Rayleigh quotient of x. Theorem 10.13 provides upper (λ_1) and lower (λ_n) bounds for the Rayleigh quotient.

Theorem 10.15. Let A ∈ C^{n×n}. Then ||A||_2 = λ_max^{1/2}(A^H A).

Proof: For all x ∈ C^n we have

    ||A x||_2^2 = x^H A^H A x ≤ λ_max(A^H A) x^H x.

Let x be an eigenvector corresponding to λ_max(A^H A). Then ||A x||_2^2 / ||x||_2^2 = λ_max(A^H A), whence

    ||A||_2 = max_{x ≠ 0} ||A x||_2 / ||x||_2 = λ_max^{1/2}(A^H A).  □

Definition 10.16. A principal submatrix of an n × n matrix A is the (n − k) × (n − k) matrix that remains by deleting k rows and the corresponding k columns. A leading principal submatrix of order n − k is obtained by deleting the last k rows and columns.

Theorem 10.17. A symmetric matrix A ∈ R^{n×n} is positive definite if and only if any of the following three equivalent conditions hold:

1. The determinants of all leading principal submatrices of A are positive.
2. All eigenvalues of A are positive.
3. A can be written in the form M^T M, where M ∈ R^{n×n} is nonsingular.

Theorem 10.18. A symmetric matrix A ∈ R^{n×n} is nonnegative definite if and only if any of the following three equivalent conditions hold:

1. The determinants of all principal submatrices of A are nonnegative.
2. All eigenvalues of A are nonnegative.
3. A can be written in the form M^T M, where M ∈ R^{k×n} and k ≥ rank(A) = rank(M).

Remark 10.19. Note that the determinants of all principal submatrices must be nonnegative in Theorem 10.18, not just those of the leading principal submatrices. For example, consider the matrix A = [0  0; 0  −1]. The determinant of the 1 × 1 leading submatrix is 0 and the determinant of the 2 × 2 leading submatrix is also 0 (cf. Theorem 10.17), but A is not nonnegative definite: the principal submatrix consisting of the (2,2) element is, in fact, negative, and A is nonpositive definite.
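A brief numerical sketch of these characterizations (the matrices are hypothetical examples): the first is positive definite by both the eigenvalue and leading-minor tests, while the second illustrates the caution in the remark above.

```python
import numpy as np

def leading_principal_minors(A):
    """Determinants of the leading principal submatrices A[:k, :k], k = 1..n."""
    return [np.linalg.det(A[:k, :k]) for k in range(1, A.shape[0] + 1)]

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
print(np.all(np.linalg.eigvalsh(A) > 0))                       # all eigenvalues positive
print(np.all(np.array(leading_principal_minors(A)) > 0))       # all leading minors positive

B = np.array([[0.0,  0.0],
              [0.0, -1.0]])
print(leading_principal_minors(B))     # [0.0, 0.0]: leading minors alone do not certify semidefiniteness
print(np.linalg.eigvalsh(B))           # one eigenvalue is negative, so B is not nonnegative definite
```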
Remark 10.20. The factor M in Theorem 10.18 is not unique. For example, if A = I_2, then M can be I_2, or [cos θ  sin θ; −sin θ  cos θ] for any θ, or indeed any matrix with two orthonormal columns.

Recall that A ≥ B if the matrix A − B is nonnegative definite. The following theorem is useful in "comparing" symmetric matrices. Its proof is straightforward from basic definitions.

Theorem 10.21. Let A, B ∈ R^{n×n} be symmetric.

1. If A ≥ B and M ∈ R^{n×m}, then M^T A M ≥ M^T B M.
2. If A > B and M ∈ R_m^{n×m}, then M^T A M > M^T B M.

The following standard theorem is stated without proof (see, for example, [16, p. 181]). It concerns the notion of the "square root" of a matrix. That is, if A ∈ R^{n×n}, we say that S ∈ R^{n×n} is a square root of A if S^2 = A. In general, matrices (both symmetric and nonsymmetric) have infinitely many square roots. For example, if A = I_2, any matrix S of the form [cos θ  sin θ; sin θ  −cos θ] is a square root.

Theorem 10.22. Let A ∈ R^{n×n} be nonnegative definite. Then A has a unique nonnegative definite square root S. Moreover, SA = AS and rank S = rank A (and hence S is positive definite if A is positive definite).

A stronger form of the third characterization in Theorem 10.17 is available and is known as the Cholesky factorization. It is stated and proved below for the more general Hermitian case.

Theorem 10.23. Let A ∈ C^{n×n} be Hermitian and positive definite. Then there exists a unique nonsingular lower triangular matrix L with positive diagonal elements such that A = L L^H.

Proof: The proof is by induction. The case n = 1 is trivially true. Write the matrix A in the form

    A = [B   b; b^H   a_nn].

By our induction hypothesis, assume the result is true for matrices of order n − 1 so that B may be written as B = L_1 L_1^H, where L_1 ∈ C^{(n−1)×(n−1)} is nonsingular and lower triangular with positive diagonal elements. It remains to prove that we can write the n × n matrix A in the form

    [B   b; b^H   a_nn] = [L_1   0; c^H   α] [L_1^H   c; 0   α],

where α is positive. Performing the indicated matrix multiplication and equating the corresponding submatrices, we see that we must have L_1 c = b and a_nn = c^H c + α^2. Clearly c is given simply by c = L_1^{-1} b. Substituting in the expression involving α, we find

    α^2 = a_nn − b^H L_1^{-H} L_1^{-1} b = a_nn − b^H B^{-1} b   (= the Schur complement of B in A).

But we know that

    0 < det(A) = det [B   b; b^H   a_nn] = det(B) det(a_nn − b^H B^{-1} b).

Since det(B) > 0, we must have a_nn − b^H B^{-1} b > 0. Choosing α to be the positive square root of a_nn − b^H B^{-1} b completes the proof.  □
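NumPy provides the Cholesky factor directly; the matrix below is a hypothetical positive definite example built as M M^T plus a diagonal shift.

```python
import numpy as np

rng = np.random.default_rng(3)
M = rng.standard_normal((4, 4))
A = M @ M.T + 4 * np.eye(4)            # symmetric positive definite by construction

L = np.linalg.cholesky(A)              # lower triangular with positive diagonal
print(np.allclose(L @ L.T, A))         # A = L L^T
print(np.all(np.diag(L) > 0))
```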
10.3 Equivalence Transformations and Congruence

Theorem 10.24. Let A ∈ C_r^{m×n}. Then there exist matrices P ∈ C_m^{m×m} and Q ∈ C_n^{n×n} such that

    P A Q = [I_r   0; 0   0].                                    (10.4)

Proof: A classical proof can be consulted in, for example, [21, p. 131]. Alternatively, suppose A has an SVD of the form (5.2) in its complex version. Then

    [S^{-1}   0; 0   I] [U_1^H; U_2^H] A V = [I   0; 0   0].

Take P = [S^{-1}  0; 0  I] U^H and Q = V to complete the proof.  □

Note that the greater freedom afforded by the equivalence transformation of Theorem 10.24, as opposed to the more restrictive situation of a similarity transformation, yields a far "simpler" canonical form (10.4). However, numerical procedures for computing such an equivalence directly via, say, Gaussian or elementary row and column operations, are generally unreliable. The numerically preferred equivalence is, of course, the unitary equivalence known as the SVD. However, the SVD is relatively expensive to compute and other canonical forms exist that are intermediate between (10.4) and the SVD; see, for example, [7, Ch. 5], [4, Ch. 2]. Two such forms are stated here. They are more stably computable than (10.4) and more efficiently computable than a full SVD. Many similar results are also available.
Theorem 10.25 (Complete Orthogonal Decomposition). Let A ∈ C_r^{m×n}. Then there exist unitary matrices U ∈ C^{m×m} and V ∈ C^{n×n} such that

    U^H A V = [R  0]
              [0  0],   (10.5)

where R ∈ C_r^{r×r} is upper (or lower) triangular with positive diagonal elements.

Proof: For the proof, see [4].

Theorem 10.26. Let A ∈ C_r^{m×n}. Then there exist a unitary matrix Q ∈ C^{m×m} and a permutation matrix Π ∈ C^{n×n} such that

    Q A Π = [R  S]
            [0  0],   (10.6)

where R ∈ C_r^{r×r} is upper triangular and S ∈ C^{r×(n-r)} is arbitrary but in general nonzero.

Proof: For the proof, see [4].

Remark 10.27. When A has full column rank but is "near" a rank deficient matrix, various rank revealing QR decompositions are available that can sometimes detect such phenomena at a cost considerably less than a full SVD. Again, see [4] for details.

Definition 10.28. Let A ∈ C^{n×n} and X ∈ C_n^{n×n}. The transformation A ↦ X^H A X is called a congruence. Note that a congruence is a similarity if and only if X is unitary.

Note that congruence preserves the property of being Hermitian; i.e., if A is Hermitian, then X^H A X is also Hermitian. It is of interest to ask what other properties of a matrix are preserved under congruence. It turns out that the principal property so preserved is the sign of each eigenvalue.

Definition 10.29. Let A = A^H ∈ C^{n×n} and let π, ν, and ζ denote the numbers of positive, negative, and zero eigenvalues, respectively, of A. Then the inertia of A is the triple of numbers In(A) = (π, ν, ζ). The signature of A is given by sig(A) = π - ν.

Example 10.30.

1. In [1  0  0  0]
      [0  1  0  0]  = (2, 1, 1).
      [0  0 -1  0]
      [0  0  0  0]

2. If A = A^H ∈ C^{n×n}, then A > 0 if and only if In(A) = (n, 0, 0).

3. If In(A) = (π, ν, ζ), then rank(A) = π + ν.

Theorem 10.31 (Sylvester's Law of Inertia). Let A = A^H ∈ C^{n×n} and X ∈ C_n^{n×n}. Then In(A) = In(X^H A X).

Proof: For the proof, see, for example, [21, p. 134].

Theorem 10.31 guarantees that rank and signature of a matrix are preserved under congruence.
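Inertia and Sylvester's Law of Inertia are easy to experiment with numerically. In the sketch below (NumPy assumed; the helper inertia and its tolerance are names and choices introduced here, not from the text), the inertia is read off from the eigenvalues of a Hermitian matrix and is seen to be invariant under a random congruence:

```python
import numpy as np

def inertia(A, tol=1e-10):
    # (pi, nu, zeta): numbers of positive, negative, and zero
    # eigenvalues of the Hermitian matrix A.
    lam = np.linalg.eigvalsh(A)
    return (int(np.sum(lam > tol)), int(np.sum(lam < -tol)),
            int(np.sum(np.abs(lam) <= tol)))

A = np.diag([1.0, 1.0, -1.0, 0.0])
print(inertia(A))                    # (2, 1, 1), as in Example 10.30

rng = np.random.default_rng(3)
X = rng.standard_normal((4, 4))      # nonsingular with probability 1
print(inertia(X.T @ A @ X))          # (2, 1, 1): congruence-invariant
```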
We then have the following.

Theorem 10.32. Let A = A^H ∈ C^{n×n} with In(A) = (π, ν, ζ). Then there exists a matrix X ∈ C_n^{n×n} such that X^H A X = diag(1, ..., 1, -1, ..., -1, 0, ..., 0), where the number of 1's is π, the number of -1's is ν, and the number of 0's is ζ.

Proof: Let λ_1, ..., λ_n denote the eigenvalues of A and order them such that the first π are positive, the next ν are negative, and the final ζ are 0. By Theorem 10.2 there exists a unitary matrix U such that U^H A U = diag(λ_1, ..., λ_π, λ_{π+1}, ..., λ_{π+ν}, 0, ..., 0). Define the n × n matrix

    W = diag(1/√λ_1, ..., 1/√λ_π, 1/√(-λ_{π+1}), ..., 1/√(-λ_{π+ν}), 1, ..., 1).

Then it is easy to check that X = U W yields the desired result.

10.3.1 Block matrices and definiteness

Theorem 10.33. Suppose A = A^T and D = D^T. Then

    [A    B]
    [B^T  D]  >  0

if and only if either A > 0 and D - B^T A^{-1} B > 0, or D > 0 and A - B D^{-1} B^T > 0.

Proof: The proof follows by considering, for example, the congruence

    [A    B]        [I  -A^{-1}B]^T [A    B] [I  -A^{-1}B]
    [B^T  D]   ↦    [0   I      ]   [B^T  D] [0   I      ].

The details are straightforward and are left to the reader.

Remark 10.34. Note the symmetric Schur complements of A (or D) in the theorem.

Theorem 10.35. Suppose A = A^T and D = D^T. Then

    [A    B]
    [B^T  D]  ≥  0

if and only if A ≥ 0, AA⁺B = B, and D - B^T A⁺ B ≥ 0.
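Theorem 10.33 can be checked numerically via its Schur complement characterization. A minimal sketch (NumPy; the test matrices are arbitrary choices, and is_pd is a helper introduced here):

```python
import numpy as np

def is_pd(M):
    # Positive definiteness of a symmetric matrix via its eigenvalues.
    return bool(np.all(np.linalg.eigvalsh(M) > 0))

rng = np.random.default_rng(4)
A = 3.0 * np.eye(3)
D = 3.0 * np.eye(2)
B = 0.5 * rng.standard_normal((3, 2))

block = np.block([[A, B], [B.T, D]])
schur = D - B.T @ np.linalg.solve(A, B)     # Schur complement of A

# Theorem 10.33: block > 0  iff  A > 0 and D - B^T A^{-1} B > 0.
print(is_pd(block), is_pd(A) and is_pd(schur))   # the two answers agree
```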
10.4 Rational Canonical Form

One final canonical form to be mentioned is the rational canonical form.

Definition 10.36. A matrix A ∈ R^{n×n} is said to be nonderogatory if its minimal polynomial and characteristic polynomial are the same or, equivalently, if its Jordan canonical form has only one block associated with each distinct eigenvalue.

Suppose A ∈ R^{n×n} is a nonderogatory matrix and suppose its characteristic polynomial is π(λ) = λ^n - (a_0 + a_1 λ + ··· + a_{n-1} λ^{n-1}). Then it can be shown (see [12]) that A is similar to a matrix of the form

    [0    1    0    ···  0      ]
    [0    0    1    ···  0      ]
    [⋮               ⋱          ]   (10.7)
    [0    0    0    ···  1      ]
    [a_0  a_1  a_2  ···  a_{n-1}].

Definition 10.37. A matrix A ∈ R^{n×n} of the form (10.7) is called a companion matrix or is said to be in companion form.

Companion matrices also appear in the literature in several equivalent forms. To illustrate, consider the 4 × 4 companion matrix

    [0    1    0    0  ]
    [0    0    1    0  ]   (10.8)
    [0    0    0    1  ]
    [a_0  a_1  a_2  a_3].

This matrix is a special case of a matrix in lower Hessenberg form. Using the reverse-order identity similarity P given by (9.18), A is easily seen to be similar to the following matrix in upper Hessenberg form:

    [a_3  a_2  a_1  a_0]
    [1    0    0    0  ]   (10.9)
    [0    1    0    0  ]
    [0    0    1    0  ].

Moreover, since a matrix is similar to its transpose (see exercise 13 in Chapter 9), the following are also companion matrices similar to the above:

    [0  0  0  a_0]      [a_3  1  0  0]
    [1  0  0  a_1]      [a_2  0  1  0]   (10.10)
    [0  1  0  a_2]      [a_1  0  0  1]
    [0  0  1  a_3],     [a_0  0  0  0].

Notice that in all cases a companion matrix is nonsingular if and only if a_0 ≠ 0. In fact, the inverse of a nonsingular companion matrix is again in companion form. For example,

    [0    1    0    0  ]^{-1}     [-a_1/a_0  -a_2/a_0  -a_3/a_0  1/a_0]
    [0    0    1    0  ]       =  [ 1         0         0        0    ]   (10.11)
    [0    0    0    1  ]          [ 0         1         0        0    ]
    [a_0  a_1  a_2  a_3]          [ 0         0         1        0    ],
with a similar result for companion matrices of the form (10.10).

If a companion matrix of the form (10.7) is singular, i.e., if a_0 = 0, then its pseudoinverse can still be computed. Let a ∈ R^{n-1} denote the vector [a_1, a_2, ..., a_{n-1}]^T and let c = 1/(1 + a^T a). Then it is easily verified that

    [0  1    0    ···  0      ]+     [0   0  ···  0        0 ]
    [0  0    1    ···  0      ]
    [⋮             ⋱          ]   =  [   I - c a a^T      c a],
    [0  0    0    ···  1      ]
    [0  a_1  a_2  ···  a_{n-1}]

i.e., the first row of the pseudoinverse is zero and the remaining rows form the block [I - caa^T  ca]. Note that I - caa^T = (I + aa^T)^{-1}, and hence the pseudoinverse of a singular companion matrix is not a companion matrix unless a = 0.

Companion matrices have many other interesting properties, among which, and perhaps surprisingly, is the fact that their singular values can be found in closed form; see [14].

Theorem 10.38. Let σ_1 ≥ σ_2 ≥ ··· ≥ σ_n be the singular values of the companion matrix A in (10.7). Let a = a_1² + a_2² + ··· + a_{n-1}² and γ = 1 + a_0² + a. Then

    σ_1² = (γ + √(γ² - 4a_0²)) / 2,
    σ_i² = 1   for i = 2, 3, ..., n - 1,
    σ_n² = (γ - √(γ² - 4a_0²)) / 2.

If a_0 ≠ 0, the largest and smallest singular values can also be written in the equivalent form

    σ_1 = (√(γ + 2|a_0|) + √(γ - 2|a_0|)) / 2,   σ_n = (√(γ + 2|a_0|) - √(γ - 2|a_0|)) / 2.

Remark 10.39. Explicit formulas for all the associated right and left singular vectors can also be derived easily.

If A ∈ R^{n×n} is derogatory, i.e., has more than one Jordan block associated with at least one eigenvalue, then it is not similar to a companion matrix of the form (10.7). However, it can be shown that a derogatory matrix is similar to a block diagonal matrix, each of whose diagonal blocks is a companion matrix. Such matrices are said to be in rational canonical form (or Frobenius canonical form). For details, see, for example, [12].

Companion matrices appear frequently in the control and signal processing literature but unfortunately they are often very difficult to work with numerically. Algorithms to reduce an arbitrary matrix to companion form are numerically unstable. Moreover, companion matrices are known to possess many undesirable numerical properties. For example, in general and especially as n increases, their eigenstructure is extremely ill conditioned, nonsingular ones are nearly singular, stable ones are nearly unstable, and so forth [14].
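Theorem 10.38 is readily verified numerically. The sketch below (NumPy; the coefficients a_i are arbitrary sample data chosen here) compares the closed-form singular values with those computed from an SVD:

```python
import numpy as np

# Companion matrix (10.7) for n = 5, with sample coefficients.
a = np.array([0.3, -0.4, 0.5, 0.2, -0.7])        # a_0, ..., a_{n-1}
n = len(a)
A = np.zeros((n, n))
A[:-1, 1:] = np.eye(n - 1)                        # superdiagonal of 1's
A[-1, :] = a                                      # bottom row of coefficients

sv = np.linalg.svd(A, compute_uv=False)

gamma = 1 + a[0] ** 2 + np.sum(a[1:] ** 2)        # gamma = 1 + a_0^2 + a
disc = np.sqrt(gamma ** 2 - 4 * a[0] ** 2)
print(sv)                                         # middle values equal 1
print(np.sqrt((gamma + disc) / 2), np.sqrt((gamma - disc) / 2))
```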
Remark 10.40. Theorem 10.38 yields some understanding of why difficult numerical behavior might be expected for companion matrices. For example, when solving linear systems of equations of the form (6.2), one measure of numerical sensitivity is κ_p(A) = ‖A‖_p ‖A^{-1}‖_p, the so-called condition number of A with respect to inversion and with respect to the matrix p-norm. If this number is large, say O(10^k), one may lose up to k digits of precision. In the 2-norm, this condition number is the ratio of largest to smallest singular values, which, by the theorem, can be determined explicitly as

    κ_2(A) = (γ + √(γ² - 4a_0²)) / (2|a_0|).

It is easy to show that γ/(2|a_0|) ≤ κ_2(A) ≤ γ/|a_0|, and when a_0 is small or γ is large (or both), then κ_2(A) ≈ γ/|a_0|. It is not unusual for γ to be large for large n. Note that explicit formulas for κ_1(A) and κ_∞(A) can also be determined easily by using (10.11). Companion matrices and rational canonical forms are generally to be avoided in floating-point computation.

EXERCISES

1. Use the reverse-order identity matrix P introduced in (9.18) and the matrix U in Theorem 10.5 to find a unitary matrix Q that reduces A ∈ C^{n×n} to lower triangular form.

2. Let A = [1  j; -j  1] ∈ C^{2×2}. Find a unitary matrix U such that

    U^H A U = [2  0]
              [0  0].

3. If A ∈ R^{n×n} is positive definite, show that A^{-1} must also be positive definite.

4. Suppose A ∈ R^{n×n} is positive definite. Is [A  I; I  A^{-1}] ≥ 0?

5. Let R, S ∈ R^{n×n} be symmetric. Show that [R  I; I  S] > 0 if and only if S > 0 and R > S^{-1}.

6. Let A ∈ C^{n×n} be normal with eigenvalues λ_1, ..., λ_n and singular values σ_1 ≥ σ_2 ≥ ··· ≥ σ_n ≥ 0. Show that σ_i(A) = |λ_i(A)| for i = 1, ..., n.

7. Let A ∈ C^{n×n} and define ρ(A) = max_{λ∈Λ(A)} |λ|. Then ρ(A) is called the spectral radius of A. Show that if A is normal, then ρ(A) = ‖A‖_2. Show that the converse is true if n = 2.

8. Show that if a triangular matrix is normal, then it must be diagonal.

9. Prove that if A ∈ R^{n×n} is normal, then N(A) = N(A^T).

10. Find the inertia of the following matrices:
    (a) [1    1  ]     (b) [-2    1-j]     (c) [2    1-j]     (d) [1    1-j]
        [1    1  ],        [1+j  -2 ],         [1+j  2  ],        [1+j  1  ].
Chapter 11

Linear Differential and Difference Equations

11.1 Differential Equations

In this section we study solutions of the linear homogeneous system of differential equations

    ẋ(t) = Ax(t);  x(t_0) = x_0 ∈ R^n   (11.1)

for t ≥ t_0. This is known as an initial-value problem. We restrict our attention in this chapter only to the so-called time-invariant case, where the matrix A ∈ R^{n×n} is constant and does not depend on t. The solution of (11.1) is then known always to exist and be unique. It can be described conveniently in terms of the matrix exponential.

Definition 11.1. For all A ∈ R^{n×n}, the matrix exponential e^A ∈ R^{n×n} is defined by the power series

    e^A = Σ_{k=0}^{+∞} (1/k!) A^k.   (11.2)

The series (11.2) can be shown to converge for all A (it has radius of convergence equal to +∞). The solution of (11.1) involves the matrix

    e^{tA} = Σ_{k=0}^{+∞} (1/k!) t^k A^k,   (11.3)

which thus also converges for all A and uniformly in t.

11.1.1 Properties of the matrix exponential

1. e^0 = I.
Proof: This follows immediately from Definition 11.1 by setting A = 0.

2. For all A ∈ R^{n×n}, (e^A)^T = e^{A^T}.
Proof: This follows immediately from Definition 11.1 and linearity of the transpose.
3. For all A ∈ R^{n×n} and for all t, τ ∈ R, e^{(t+τ)A} = e^{tA} e^{τA} = e^{τA} e^{tA}.
Proof: Note that

    e^{(t+τ)A} = I + (t + τ)A + ((t + τ)²/2!) A² + ···

and

    e^{tA} e^{τA} = (I + tA + (t²/2!)A² + ···)(I + τA + (τ²/2!)A² + ···).

Compare like powers of A in the above two equations and use the binomial theorem on (t + τ)^k.

4. For all A, B ∈ R^{n×n} and for all t ∈ R, e^{t(A+B)} = e^{tA} e^{tB} = e^{tB} e^{tA} if and only if A and B commute, i.e., AB = BA.
Proof: Note that

    e^{t(A+B)} = I + t(A + B) + (t²/2!)(A + B)² + ···

while

    e^{tA} e^{tB} = (I + tA + (t²/2!)A² + ···)(I + tB + (t²/2!)B² + ···).

Compare like powers of t in the first equation and the second and use the binomial theorem on (A + B)^k together with the commutativity of A and B.

5. For all A ∈ R^{n×n} and for all t ∈ R, (e^{tA})^{-1} = e^{-tA}.
Proof: Simply take τ = -t in property 3.

6. Let L denote the Laplace transform and L^{-1} the inverse Laplace transform. Then for all A ∈ R^{n×n} and for all t ∈ R,

(a) L{e^{tA}} = (sI - A)^{-1};
(b) L^{-1}{(sI - A)^{-1}} = e^{tA}.

Proof: We prove only (a); part (b) follows similarly.

    L{e^{tA}} = ∫_0^{+∞} e^{-st} e^{tA} dt
              = ∫_0^{+∞} e^{t(-sI)} e^{tA} dt   since A and (-sI) commute
              = ∫_0^{+∞} e^{t(A-sI)} dt
              = ∫_0^{+∞} Σ_{i=1}^n e^{(λ_i-s)t} x_i y_i^H dt   assuming A is diagonalizable
              = Σ_{i=1}^n [∫_0^{+∞} e^{(λ_i-s)t} dt] x_i y_i^H
              = Σ_{i=1}^n (1/(s - λ_i)) x_i y_i^H   assuming Re s > Re λ_i for i = 1, ..., n
              = (sI - A)^{-1}.

The matrix (sI - A)^{-1} is called the resolvent of A and is defined for all s not in Λ(A). Notice in the proof that we have assumed, for convenience, that A is diagonalizable. If this is not the case, the scalar dyadic decomposition can be replaced by

    e^{t(A-sI)} = Σ_{i=1}^m X_i e^{t(J_i-sI)} Y_i^H

using the JCF. All succeeding steps in the proof then follow in a straightforward way.

7. For all A ∈ R^{n×n} and for all t ∈ R, (d/dt)(e^{tA}) = A e^{tA} = e^{tA} A.
Proof: Since the series (11.3) is uniformly convergent, it can be differentiated term by term, from which the result follows immediately. Alternatively, the formal definition

    (d/dt)(e^{tA}) = lim_{Δt→0} (e^{(t+Δt)A} - e^{tA}) / Δt

can be employed as follows. For any consistent matrix norm,

    ‖(e^{(t+Δt)A} - e^{tA})/Δt - Ae^{tA}‖
      = ‖(1/Δt)(e^{ΔtA} e^{tA} - e^{tA}) - Ae^{tA}‖
      = ‖(1/Δt)(e^{ΔtA} - I) e^{tA} - Ae^{tA}‖
      = ‖(1/Δt)(ΔtA + ((Δt)²/2!)A² + ((Δt)³/3!)A³ + ···) e^{tA} - Ae^{tA}‖
      = ‖((Δt/2!)A² + ((Δt)²/3!)A³ + ···) e^{tA}‖
      ≤ Δt ‖A²‖ ‖e^{tA}‖ (1/2! + (Δt/3!)‖A‖ + ((Δt)²/4!)‖A‖² + ···)
      ≤ Δt ‖A²‖ ‖e^{tA}‖ e^{Δt‖A‖}.
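Several of the properties above can be verified directly with a numerical matrix exponential. A minimal sketch (Python with SciPy's expm and quad_vec; the matrices, the value of s, and the finite upper limit of integration are choices made here for illustration):

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import quad_vec

rng = np.random.default_rng(5)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))
t = 0.8

# Property 4: e^{t(A+B)} = e^{tA} e^{tB} only when AB = BA.
print(np.allclose(expm(t * (A + B)), expm(t * A) @ expm(t * B)))   # False in general
C = A @ A                                                          # commutes with A
print(np.allclose(expm(t * (A + C)), expm(t * A) @ expm(t * C)))   # True

# Property 5: (e^{tA})^{-1} = e^{-tA}.
print(np.allclose(np.linalg.inv(expm(t * A)), expm(-t * A)))        # True

# Property 6(a): the Laplace transform of e^{tA} is the resolvent.
As = np.array([[-1.0, 2.0], [0.0, -3.0]])      # stable, so the integral converges
s = 1.5                                         # Re s exceeds every Re lambda_i
lap, _ = quad_vec(lambda u: np.exp(-s * u) * expm(u * As), 0.0, 60.0)
print(np.allclose(lap, np.linalg.inv(s * np.eye(2) - As), atol=1e-6))   # True
```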
For fixed t, the right-hand side above clearly goes to 0 as Δt goes to 0. Thus, the limit exists and equals Ae^{tA}. A similar proof yields the limit e^{tA}A, or one can use the fact that A commutes with any polynomial of A of finite degree and hence with e^{tA}.

11.1.2 Homogeneous linear differential equations

Theorem 11.2. Let A ∈ R^{n×n}. The solution of the linear homogeneous initial-value problem

    ẋ(t) = Ax(t);  x(t_0) = x_0 ∈ R^n   (11.4)

for t ≥ t_0 is given by

    x(t) = e^{(t-t_0)A} x_0.   (11.5)

Proof: Differentiate (11.5) and use property 7 of the matrix exponential to get ẋ(t) = Ae^{(t-t_0)A} x_0 = Ax(t). Also, x(t_0) = e^{(t_0-t_0)A} x_0 = x_0 so, by the fundamental existence and uniqueness theorem for ordinary differential equations, (11.5) is the solution of (11.4).

11.1.3 Inhomogeneous linear differential equations

Theorem 11.3. Let A ∈ R^{n×n}, B ∈ R^{n×m} and let the vector-valued function u be given and, say, continuous. Then the solution of the linear inhomogeneous initial-value problem

    ẋ(t) = Ax(t) + Bu(t);  x(t_0) = x_0 ∈ R^n   (11.6)

for t ≥ t_0 is given by the variation of parameters formula

    x(t) = e^{(t-t_0)A} x_0 + ∫_{t_0}^t e^{(t-s)A} B u(s) ds.   (11.7)

Proof: Differentiate (11.7) and again use property 7 of the matrix exponential. The general formula

    (d/dt) ∫_{p(t)}^{q(t)} f(x, t) dx = ∫_{p(t)}^{q(t)} (∂f(x, t)/∂t) dx + f(q(t), t) (dq(t)/dt) - f(p(t), t) (dp(t)/dt)

is used to get ẋ(t) = Ae^{(t-t_0)A} x_0 + ∫_{t_0}^t Ae^{(t-s)A} Bu(s) ds + Bu(t) = Ax(t) + Bu(t). Also, x(t_0) = x_0 + 0 = x_0 so, by the fundamental existence and uniqueness theorem for ordinary differential equations, (11.7) is the solution of (11.6).

Remark 11.4. The proof above simply verifies the variation of parameters formula by direct differentiation. The formula can be derived by means of an integrating factor "trick" as follows. Premultiply the equation ẋ - Ax = Bu by e^{-tA} to get

    (d/dt)(e^{-tA} x) = e^{-tA} Bu.   (11.8)

Now integrate (11.8) over the interval [t_0, t]:

    ∫_{t_0}^t (d/ds)(e^{-sA} x(s)) ds = ∫_{t_0}^t e^{-sA} Bu(s) ds.

Thus,

    e^{-tA} x(t) - e^{-t_0 A} x(t_0) = ∫_{t_0}^t e^{-sA} Bu(s) ds

and hence

    x(t) = e^{(t-t_0)A} x_0 + ∫_{t_0}^t e^{(t-s)A} Bu(s) ds.

11.1.4 Linear matrix differential equations

Matrix-valued initial-value problems also occur frequently. The first theorem below is an obvious generalization of Theorem 11.2.

Theorem 11.5. Let A ∈ R^{n×n}. The solution of the matrix linear homogeneous initial-value problem

    Ẋ(t) = AX(t);  X(t_0) = C ∈ R^{n×n}   (11.9)

for t ≥ t_0 is given by

    X(t) = e^{(t-t_0)A} C.   (11.10)

In the matrix case, we can have coefficient matrices on both the right and left. For convenience, the following theorem is stated with initial time t_0 = 0.

Theorem 11.6. Let A ∈ R^{n×n}, B ∈ R^{m×m}, and C ∈ R^{n×m}. Then the matrix initial-value problem

    Ẋ(t) = AX(t) + X(t)B;  X(0) = C   (11.11)

has the solution X(t) = e^{tA} C e^{tB}.

Proof: Differentiate e^{tA} C e^{tB} with respect to t and use property 7 of the matrix exponential. The fact that X(t) satisfies the initial condition is trivial.

Corollary 11.7. Let A, C ∈ R^{n×n}. Then the matrix initial-value problem

    Ẋ(t) = AX(t) + X(t)A^T;  X(0) = C   (11.12)

has the solution X(t) = e^{tA} C e^{tA^T}.

When C is symmetric in (11.12), X(t) is symmetric and (11.12) is known as a Lyapunov differential equation. The initial-value problem (11.11) is known as a Sylvester differential equation.
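Theorem 11.6 can be checked numerically by differencing the claimed solution. A minimal sketch (SciPy's expm; the random data and the step size h are arbitrary choices made here):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(6)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))
C = rng.standard_normal((3, 3))

X = lambda t: expm(t * A) @ C @ expm(t * B)     # claimed solution of (11.11)

t, h = 0.7, 1e-6
Xdot = (X(t + h) - X(t - h)) / (2 * h)           # central-difference derivative
print(np.allclose(Xdot, A @ X(t) + X(t) @ B, atol=1e-6))   # True
print(np.allclose(X(0.0), C))                                # initial condition
```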
11.1.5 Modal decompositions

Let A ∈ R^{n×n} and suppose, for convenience, that it is diagonalizable (if A is not diagonalizable, the rest of this subsection is easily generalized by using the JCF and the decomposition A = Σ_{i=1}^m X_i J_i Y_i^H as discussed in Chapter 9). Then the solution x(t) of (11.4) can be written

    x(t) = e^{(t-t_0)A} x_0
         = (Σ_{i=1}^n e^{λ_i(t-t_0)} x_i y_i^H) x_0
         = Σ_{i=1}^n (y_i^H x_0 e^{λ_i(t-t_0)}) x_i.

The λ_i's are called the modal velocities and the right eigenvectors x_i are called the modal directions. The decomposition above expresses the solution x(t) as a weighted sum of its modal velocities and directions.

This modal decomposition can be expressed in a different looking but identical form if we write the initial condition x_0 as a weighted sum of the right eigenvectors:

    x_0 = Σ_{i=1}^n α_i x_i.

Then

    x(t) = Σ_{i=1}^n (α_i e^{λ_i(t-t_0)}) x_i.

In the last equality we have used the fact that y_i^H x_j = δ_{ij}.

Similarly, in the inhomogeneous case we can write

    ∫_{t_0}^t e^{(t-s)A} B u(s) ds = Σ_{i=1}^n (∫_{t_0}^t e^{λ_i(t-s)} y_i^H B u(s) ds) x_i.
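The modal decomposition (taking t_0 = 0) can be verified against a direct computation of e^{tA} x_0. A minimal sketch (NumPy/SciPy; the matrix is an arbitrary diagonalizable example chosen here):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[-1.0, 1.0], [0.0, -3.0]])        # diagonalizable
x0 = np.array([2.0, 1.0])
lam, X = np.linalg.eig(A)                        # columns of X are the x_i
alpha = np.linalg.solve(X, x0)                   # x0 = sum_i alpha_i x_i

t = 0.4                                          # take t_0 = 0
modal = X @ (alpha * np.exp(lam * t))            # sum_i alpha_i e^{lam_i t} x_i
print(np.allclose(modal, expm(t * A) @ x0))      # True
```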
11.1.6 Computation of the matrix exponential

JCF method

Let A ∈ R^{n×n} and suppose X ∈ R_n^{n×n} is such that X^{-1} A X = J, where J is a JCF for A. Then

    e^{tA} = e^{t X J X^{-1}} = X e^{tJ} X^{-1}
           = Σ_{i=1}^n e^{λ_i t} x_i y_i^H   if A is diagonalizable,
           = Σ_{i=1}^m X_i e^{t J_i} Y_i^H   in general.

If A is diagonalizable, it is then easy to compute e^{tA} via the formula e^{tA} = X e^{tJ} X^{-1} since e^{tJ} is simply a diagonal matrix.

In the more general case, the problem clearly reduces simply to the computation of the exponential of a Jordan block. To be specific, let J_i ∈ C^{k×k} be a Jordan block of the form

    J_i = [λ  1  0  ···  0]
          [0  λ  1  ···  0]
          [⋮        ⋱  ⋱  ]  =  λI + N.
          [0  ···     λ  1]
          [0  ···     0  λ]

Clearly λI and N commute. Thus, e^{t J_i} = e^{t λI} e^{tN} by property 4 of the matrix exponential. The diagonal part is easy: e^{t λI} = diag(e^{λt}, ..., e^{λt}). But e^{tN} is almost as easy since N is nilpotent of degree k.

Definition 11.8. A matrix M ∈ R^{n×n} is nilpotent of degree (or index, or grade) p if M^p = 0, while M^{p-1} ≠ 0.

For the matrix N defined above, it is easy to check that while N has 1's along only its first superdiagonal (and 0's elsewhere), N² has 1's along only its second superdiagonal, and so forth. Finally, N^{k-1} has a 1 in its (1, k) element and has 0's everywhere else, and N^k = 0. Thus, the series expansion of e^{tN} is finite, i.e.,

    e^{tN} = I + tN + (t²/2!)N² + ··· + (t^{k-1}/(k-1)!) N^{k-1}
           = [1  t  t²/2!  ···  t^{k-1}/(k-1)!]
             [0  1  t      ···                ]
             [⋮        ⋱   ⋱                  ]
             [0  ···       1   t              ]
             [0  ···       0   1              ].

Thus,

    e^{t J_i} = [e^{λt}  t e^{λt}  (t²/2!) e^{λt}  ···  (t^{k-1}/(k-1)!) e^{λt}]
                [0       e^{λt}   t e^{λt}        ···                          ]
                [⋮                    ⋱                                         ]
                [0       ···                      e^{λt}   t e^{λt}            ]
                [0       ···                      0        e^{λt}              ].

In the case when λ is complex, a real version of the above can be worked out.
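Because N^k = 0, the finite series for e^{tN} can be evaluated exactly and compared with a general-purpose matrix exponential. A minimal sketch (NumPy/SciPy; the values of λ, k, and t are arbitrary choices made here):

```python
import numpy as np
from math import factorial
from scipy.linalg import expm

lam, k, t = -1.0, 4, 0.9
N = np.diag(np.ones(k - 1), 1)                   # nilpotent: N^k = 0
J = lam * np.eye(k) + N                          # Jordan block J = lam*I + N

# Finite series for e^{tN}, then e^{tJ} = e^{t*lam} e^{tN}.
EtN = sum(np.linalg.matrix_power(t * N, j) / factorial(j) for j in range(k))
print(np.allclose(np.exp(lam * t) * EtN, expm(t * J)))   # True
```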
Example 11.9. Let

    A = [-4  4]
        [-1  0].

Then Λ(A) = {-2, -2} and

    e^{tA} = X e^{tJ} X^{-1}
           = [2  1] exp( t [-2   1] ) [ 1  -1]
             [1  1]        [ 0  -2]   [-1   2]
           = [2  1] [e^{-2t}  t e^{-2t}] [ 1  -1]
             [1  1] [0        e^{-2t} ] [-1   2]
           = [e^{-2t} - 2t e^{-2t}   4t e^{-2t}           ]
             [-t e^{-2t}             e^{-2t} + 2t e^{-2t}].

Interpolation method

This method is numerically unstable in finite-precision arithmetic but is quite effective for hand calculation in small-order problems. The method is stated and illustrated for the exponential function but applies equally well to other functions.

Given A ∈ R^{n×n} and f(λ) = e^{tλ}, where t is a fixed scalar, compute f(A) = e^{tA}. The motivation for this method is the Cayley-Hamilton Theorem, Theorem 9.3, which says that all powers of A greater than A^{n-1} can be expressed as linear combinations of A^k for k = 0, 1, ..., n - 1. Thus, all the terms of order greater than n - 1 in the power series for e^{tA} can be written in terms of these lower-order powers as well. The polynomial g below gives the appropriate linear combination.

Suppose the characteristic polynomial of A can be written as π(λ) = Π_{i=1}^m (λ - λ_i)^{n_i}, where the λ_i's are distinct. Define g(λ) = α_0 + α_1 λ + ··· + α_{n-1} λ^{n-1}, where α_0, α_1, ..., α_{n-1} are n constants that are to be determined. They are, in fact, the unique solution of the n equations

    g^{(k)}(λ_i) = f^{(k)}(λ_i);  k = 0, 1, ..., n_i - 1,  i = 1, ..., m.

Here, the superscript (k) denotes the kth derivative with respect to λ. With the α_i's then known, the function g is known and f(A) = g(A).

Example 11.10. Let

    A = [-1   1   0]
        [ 0  -1   1]
        [ 0   0  -1]

and f(λ) = e^{tλ}. Then π(λ) = (λ + 1)³, so m = 1 and n_1 = 3. Let g(λ) = α_0 + α_1 λ + α_2 λ². Then the three equations for the α_i's are given by

    g(-1) = f(-1)    ⟹  α_0 - α_1 + α_2 = e^{-t},
    g'(-1) = f'(-1)  ⟹  α_1 - 2α_2 = t e^{-t},
    g''(-1) = f''(-1) ⟹  2α_2 = t² e^{-t}.
Solving for the α_i's, we find

    α_2 = (1/2) t² e^{-t},
    α_1 = t e^{-t} + t² e^{-t},
    α_0 = e^{-t} + t e^{-t} + (1/2) t² e^{-t}.

Thus,

    f(A) = e^{tA} = g(A) = α_0 I + α_1 A + α_2 A²
         = [e^{-t}  t e^{-t}  (t²/2) e^{-t}]
           [0       e^{-t}    t e^{-t}     ]
           [0       0         e^{-t}       ].

Example 11.11. Let A = [-4  4; -1  0] and f(λ) = e^{tλ}. Then π(λ) = (λ + 2)², so m = 1 and n_1 = 2. Let g(λ) = α_0 + α_1 λ. Then the defining equations for the α_i's are given by

    g(-2) = f(-2)    ⟹  α_0 - 2α_1 = e^{-2t},
    g'(-2) = f'(-2)  ⟹  α_1 = t e^{-2t}.

Solving for the α_i's, we find

    α_0 = e^{-2t} + 2t e^{-2t},
    α_1 = t e^{-2t}.

Thus,

    f(A) = e^{tA} = g(A) = α_0 I + α_1 A
         = (e^{-2t} + 2t e^{-2t}) [1  0]  +  t e^{-2t} [-4  4]
                                  [0  1]               [-1  0]
         = [e^{-2t} - 2t e^{-2t}   4t e^{-2t}           ]
           [-t e^{-2t}             e^{-2t} + 2t e^{-2t}].
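The closed form obtained in Example 11.11 can be confirmed numerically. A minimal sketch (SciPy's expm; the value of t is an arbitrary choice):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[-4.0, 4.0], [-1.0, 0.0]])
t = 0.6
e = np.exp(-2 * t)
closed_form = np.array([[e - 2 * t * e, 4 * t * e],
                        [-t * e, e + 2 * t * e]])
print(np.allclose(closed_form, expm(t * A)))   # True
```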
Other methods

1. Use e^{tA} = L^{-1}{(sI - A)^{-1}} and techniques for inverse Laplace transforms. This is quite effective for small-order problems, but general nonsymbolic computational techniques are numerically unstable since the problem is theoretically equivalent to knowing precisely a JCF.

2. Use Padé approximation. There is an extensive literature on approximating certain nonlinear functions by rational functions. The matrix analogue yields

    e^A ≈ D^{-1}(A) N(A),

where D(A) = δ_0 I + δ_1 A + ··· + δ_p A^p and N(A) = ν_0 I + ν_1 A + ··· + ν_q A^q. Explicit formulas are known for the coefficients of the numerator and denominator polynomials of various orders. Unfortunately, a Padé approximation for the exponential is accurate only in a neighborhood of the origin; in the matrix case this means when ‖A‖ is sufficiently small. This can be arranged by scaling A, say, by multiplying it by 1/2^k for sufficiently large k and using the fact that e^A = (e^{(1/2^k)A})^{2^k}. Numerical loss of accuracy can occur in this procedure from the successive squarings.

3. Reduce A to (real) Schur form S via the unitary similarity U and use e^A = U e^S U^H together with successive recursions up the superdiagonals of the (quasi) upper triangular matrix e^S.

4. Many methods are outlined in, for example, [19]. Reliable and efficient computation of matrix functions such as e^A and log(A) remains a fertile area for research.

11.2 Difference Equations

In this section we outline solutions of discrete-time analogues of the linear differential equations of the previous section. Linear discrete-time systems, modeled by systems of difference equations, exhibit many parallels to the continuous-time differential equation case, and this observation is exploited frequently.

11.2.1 Homogeneous linear difference equations

Theorem 11.12. Let A ∈ R^{n×n}. The solution of the linear homogeneous system of difference equations

    x_{k+1} = A x_k;  x_0 ∈ R^n   (11.13)

for k ≥ 0 is given by

    x_k = A^k x_0.   (11.14)

Proof: The proof is almost immediate upon substitution of (11.14) into (11.13).

Remark 11.13. Again, we restrict our attention only to the so-called time-invariant case, where the matrix A in (11.13) is constant and does not depend on k. We could also consider an arbitrary "initial time" k_0, but since the system is time-invariant, and since we want to keep the formulas "clean" (i.e., no double subscripts), we have chosen k_0 = 0 for convenience.

11.2.2 Inhomogeneous linear difference equations

Theorem 11.14. Let A ∈ R^{n×n}, B ∈ R^{n×m} and suppose {u_k}_{k=0}^{+∞} is a given sequence of m-vectors. Then the solution of the inhomogeneous initial-value problem

    x_{k+1} = A x_k + B u_k;  x_0 ∈ R^n   (11.15)

is given by

    x_k = A^k x_0 + Σ_{j=0}^{k-1} A^{k-j-1} B u_j,  k ≥ 0.   (11.16)

Proof: The proof is again almost immediate upon substitution of (11.16) into (11.15).

11.2.3 Computation of matrix powers

It is clear that solution of linear systems of difference equations involves computation of A^k. One solution method, which is numerically unstable but sometimes useful for hand calculation, is to use z-transforms, by analogy with the use of Laplace transforms to compute a matrix exponential. One definition of the z-transform of a sequence {g_k} is

    Z({g_k}_{k=0}^{+∞}) = Σ_{k=0}^{+∞} g_k z^{-k}.

Assuming |z| > max_{λ∈Λ(A)} |λ|, the z-transform of the sequence {A^k} is then given by

    Z({A^k}) = Σ_{k=0}^{+∞} z^{-k} A^k = I + (1/z)A + (1/z²)A² + ···
             = (I - (1/z)A)^{-1}
             = z (zI - A)^{-1}.

Methods based on the JCF are also sometimes useful, again mostly for small-order problems. Assume that A ∈ R^{n×n} and let X ∈ R_n^{n×n} be such that X^{-1} A X = J, where J is a JCF for A. Then

    A^k = (X J X^{-1})^k = X J^k X^{-1}
        = Σ_{i=1}^n λ_i^k x_i y_i^H   if A is diagonalizable,
        = Σ_{i=1}^m X_i J_i^k Y_i^H   in general.

If A is diagonalizable, it is then easy to compute A^k via the formula A^k = X J^k X^{-1} since J^k is simply a diagonal matrix.
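Formula (11.16) can be verified against the recursion it solves. A minimal sketch (NumPy; all data are arbitrary choices made here, and A is scaled down only to keep the powers well behaved):

```python
import numpy as np

rng = np.random.default_rng(7)
A = 0.5 * rng.standard_normal((3, 3))
B = rng.standard_normal((3, 2))
x0 = rng.standard_normal(3)
u = [rng.standard_normal(2) for _ in range(10)]

# Direct recursion x_{k+1} = A x_k + B u_k ...
x = x0.copy()
for k in range(10):
    x = A @ x + B @ u[k]

# ... versus the closed form (11.16) with k = 10.
K = 10
xK = np.linalg.matrix_power(A, K) @ x0 + sum(
    np.linalg.matrix_power(A, K - j - 1) @ (B @ u[j]) for j in range(K))
print(np.allclose(x, xK))   # True
```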
the ztransform of the sequence {Ak is then given by Assuming z > max IAI. LXi Jtyi . by analogy with the use of Laplace transforms to compute ztransforms. sometimes useful Ak.15).. j=O (11. which is numerically unstable but sometimes useful for hand calculation. Then Ak = (XJXI)k = XJkX. a matrix exponential. X~1 AX JCF for A. Then JCF for A. Difference Equations 119 119 is given by kI xk=AkXO+LAkjIBUj. k=O Assuming Izl > max A.2.2. the ztransform of the sequence {Ak}} is then given by AEA(A) X€A(A) k "'kk 1 12 Z({A})=L. Jk .y. in general.O.2. since /* is simply a diagonal matrix.3 11.1 _I tA~X. One definition of the ztransform of a sequence is +00 Z({gk}t~) = LgkZk.=1 H l If A is diagonalizable. again mostly for smallorder probsmallorder lems.15). Difference Equations 11. One solution method. based Methods based on the JCF are sometimes useful. substitution of (11. 0 D 11. is to use ztransforms.2.16) Proof: The proof is again almost immediate Proof: The proof is again almost immediate upon substitution of (11... Assume that A e M" xn and let X e jR~xn be such that XI AX = /. k:::.H m if A is diagonalizable.zA =I+A+"2 A + . where J is a E jRnxn and X E R^n J.11.3 Computation of matrix powers Computation of matrix powers It is clear that solution of linear systems of difference equations involves computation of It is clear that solution of linear systems of difference equations involves computation of k. +00 k=O z z = (lzIA)I = z(zI . it is then easy to compute Ak via the formula Ak = XJkXXI Ak Ak — X Jk If diagonalizable.. One definition of the ztransform of a sequence {gk} is a matrix exponential.16) into (11.16) into (11.A)I.
Linear Differential and Difference Equations In the general case.6 can also methods 11. ) Ak. Ch.(^ . 1 1 1 2 1 ] Basic analogues of other methods such as those mentioned in Section 11. see [11. inI)(O) = CnI' (1l. . aareal version of the above can be worked out.is complex. 0 A Writing /. 18]. A is complex.l8) .17) with ¢J(t) a given function and n initial conditions 4>(t} y(O) = Co.• = AI and noting that AI and the nilpotent matrix Writing Ji = XI + N and noting that XI and the nilpotent matrix N commute. the problem again reduces to the computation of the power of a In the general case. Consider. For an erudite discussion of the state of the art.)A  ( k ) AkP+I pl 0 J/ = kA k.3 11.3 HigherOrder Equations HigherOrder Equations differential It is well known that a higherorder (scalar) linear differential equation can be converted to higherorder a firstorder linear system..1 (2 . e Cpxp be a Jordan block of the form o . Linear Differential and Difference Equations Chapter 11. the initialvalue problem initialvalue (11.. for example.2) .1(2k .120 Chapter 11.. 11.1 Ak ( . In the case when A. To be specific. let 7..1. it is commute.15.2k) k( _2)k1 ] k( 2l+ (2l. but again no universally "best" method be derived for the computation of matrix powers. y(O) = CI.1 Ak The symbol (: ) has the usual definition of q!(kk~q)! and is to be interpreted as 0 if k < q. but again no universally "best" method exists. the problem again reduces to the computation of the power of a To Ji E Cpxp Jordan block.. Then Then 1 ] [(_2)k 1 0 k(2)kk(2) 1 ] [ _ [ (_2/. The symbol ( ) has the usual definition of . [11. real version of the above can be worked out.2 0 0 0 0 kA k . and is to be interpreted as 0 if k < q.1. Let A Ak = XJkX1 = [=i 4 a [2 1 J]. Let A = [_J Example 11. it is then straightforward to apply the binomial theorem to (AI + N)k and verify that straightforward N)k (XI verify Ak kA kI Ak k 2 (. .6 be derived for the computation of matrix powers.15. Example 11.
y € R" and let A = xyT.. the companion Note that det(A! ..19) possesses many nasty numerical properties for even moderately sized n matrix A in (11. . Show that e'A 1+ g ( t .. X2(t) yet).A) = A. 3. = Xn(t) = y(nl)(t). Then components Xl (t) yet). EXERCISES EXERCISES 1. Let .19) The initial conditions take the form ^(0) = c [CQ. These equations can then be rewritten as the firstorder linear system These equations can then be rewritten as the firstorder linear system 0 0 x(t) = 0 0 1 0 0 0 ao a\ x(t)+ [ 0 1 a n\ n ~(t) r. a)xyT. A similar procedure holds for the conversion of a higherorder difference equation A similar procedure holds for the conversion of a higherorder difference equation with n initial conditions. the companion matrix A in (11.. Define a vector x (t) E ]Rn with Here. Cl..718P. Then Xl (I) X2(t) = X2(t) = y(t). .19) possesses many nasty numerical properties for even moderately sized n and. c\. (11. at least for computational purposes. .. Let P E lR nxn be a projection. Further.an_llnl)(t) Xnl (t) Xn(t) = y(n)(t) = aoy(t)  + ¢(t) = aOx\ (t) . y E lRn and let A = xyT. where !(eat . . = X3(t) = yet). is often well worth avoiding.. . 2. However.a\X2(t) . Suppose x..718P. Further. y(m) denotes the mth derivative of y with respect to t. into a linear firstorder difference equation with (vector) initial with n initial conditions. Suppose x. Xn(t) y { n ~ l ) ( t ) . aly(t) . xn(t) = Inl)(t). . Note that det(X7 — A) = An + an\Xn 1l H alA + ao.. as mentioned before. a)xyT... x2(t) = y ( t ) . Show that e % / + 1. Show that e P ~ ! + 1. Show that etA 2. at least for computational purposes.Exercises 121 121 Here. •.anlXn(t) + ¢(t). is often well worth avoiding. However. +h a\X+ ao. be a projection. C M _I] The initial conditions take the form X (0) = C = [co.. let a = XT y. v (m) denotes the mth derivative of y with respect to t. where I + get. let a = xTy... as mentioned before. and. . Let 3."+ an_1A n~ + . Define a vector x (?) e R" with components *i(0 = y ( t ) . into a linear firstorder difference equation with (vector) initial condition.I) g(t.. = O. condition.. Cn \ .a)= { a t nxn p if a if a 1= 0. Let P € R 1. .
EXERCISES

1. Let P ∈ R^{n×n} be a projection. Show that e^P ≈ I + 1.718P.

2. Suppose x, y ∈ R^n and let A = xy^T. Further, let α = x^T y. Show that e^{tA} = I + g(t, α) xy^T, where

    g(t, α) = (e^{αt} - 1)/α  if α ≠ 0;   g(t, α) = t  if α = 0.

3. Let K denote the skew-symmetric matrix

    [0    I_n]
    [-I_n  0 ],

where I_n denotes the n × n identity matrix. A matrix A ∈ R^{2n×2n} is said to be Hamiltonian if K^{-1} A^T K = -A and to be symplectic if K^{-1} A^T K = A^{-1}.

(a) Suppose H is Hamiltonian and let λ be an eigenvalue of H. Show that -λ must also be an eigenvalue of H.

(b) Suppose S is symplectic and let λ be an eigenvalue of S. Show that 1/λ must also be an eigenvalue of S.

(c) Suppose that H is Hamiltonian and S is symplectic. Show that S^{-1} H S must be Hamiltonian.

(d) Suppose H is Hamiltonian. Show that e^H must be symplectic.

4. Let α, β ∈ R and

    A = [ α  β]
        [-β  α].

Then show that

    e^{tA} = [ e^{αt} cos βt   e^{αt} sin βt]
             [-e^{αt} sin βt   e^{αt} cos βt].

5. Find e^{tA} when A = [ ... ].

6. Show that

    exp [I  X ]  =  [eI  sinh(1) X]
        [0  -I]     [0   e^{-1} I ],

where X ∈ R^{m×n} is arbitrary.

7. Find a general expression for e^{tA} when A = [ ... ].

8. Let A = [ ... ].

(a) Solve the differential equation ẋ = Ax; x(0) = [ ... ].
e. 12. and a quarter goes to Asia.Exercises Exercises (b) Solve the differential equation (b) Solve the differential equation i 123 = Ax + b. (a) Find the solution of the initialvalue problem (a) Find the solution of the initialvalue problem . Each year half of the Americas' money stays home. (b) Consider the difference equation (b) Consider the difference equation Zk+2 + 2Zk+1 + Zk = O. Show that the eigenvalues of the solution X t ) of this problem are the same as those Show that the eigenvalues of the solution X ((t) of this problem are the same as those of C for all?. Consider the initialvalue problem i(t) = Ax(t). For Europe and Asia. (b) Find the eigenvalues and right eigenvectors of M. (c) Find the distribution of the companies' assets at year k. x(O) = Xo for t ~ O.3. 10. and the Americas (R).Yet) + 2y(t) + yet) = 0. For Europe and Asia.e.) (Exercise adapted from Problem 5. Consider the n x n matrix initialvalue problem 10. Suppose certain multinational companies have total assets of $40 trillion of which $20 trillion is in E and $20 trillion is in R. (d) Find the limiting distribution of the $40 trillion as the universe ends. 11. x(O) =[ ~ l 9. as k —»• +00 (i. The year is 2004 and there are three large "free trade zones" in the world: Asia (A).e. 11. i. Show that for t > 0. a quarter goes to Europe.X(t)A. what is the value of ZIOOO? What is the value of Zk in 2. Show that *(OII2 = aforallf > O. Consider the n x n matrix initialvalue problem X(t) = AX(t) .3.. The year is 2004 and there are three large "free trade zones" in the world: Asia (A). (c) Find the distribution of the companies' assets at year k. i. I/X(t)1/2 = ex for all t > 0. and the Americas (R).11 in [24]. (Exercise adapted from Problem 5. half stays home and half goes to the Americas. (a) Find the matrix M that gives (a) Find the matrix M that gives [ A] E R =M year k+1 [A] E R year k (b) Find the eigenvalues and right eigenvectors of M. half stays home and half goes to the Americas. X(O) = c. Each total assets of $40 trillion of which $20 trillion is in E and $20 trillion is in R.. Suppose that A E ~nxn is skewsymmetric and let ex = Ilxol12. a quarter goes to Europe. what is the value of ZIQOO? What is the value of Zk in general? general? . If £0 = 1 and z\ If Zo = 1 and ZI = 2.e.) 12. around the time the Cubs win a World Series). yeO) = 1. Suppose certain multinational companies have Europe (E). of Cf or all t. .. Europe (E). and a quarter year half of the Americas' money stays home..11 in [24]. around the time the Cubs win a World Series). k * +00 (i. Consider the initialvalue problem 9.YeO) = O. as (d) Find the limiting distribution of the $40 trillion as the universe ends. Suppose that e E"x" is skewsymmetric and let a = \\XQ\\2. goes to Asia.
Chapter 12

Generalized Eigenvalue Problems

12.1 The Generalized Eigenvalue/Eigenvector Problem

In this chapter we consider the generalized eigenvalue problem

    Ax = λBx,

where A, B ∈ C^{n×n}. The standard eigenvalue problem considered in Chapter 9 obviously corresponds to the special case that B = I.

Definition 12.1. A nonzero vector x ∈ C^n is a right generalized eigenvector of the pair (A, B) with A, B ∈ C^{n×n} if there exists a scalar λ ∈ C, called a generalized eigenvalue, such that

    Ax = λBx.   (12.1)

Similarly, a nonzero vector y ∈ C^n is a left generalized eigenvector corresponding to an eigenvalue λ if

    y^H A = λ y^H B.   (12.2)

Remark 12.2. When the context is such that no confusion can arise, the adjective "generalized" is usually dropped. As with the standard eigenvalue problem, if x [y] is a right [left] eigenvector, then so is αx [αy] for any nonzero scalar α ∈ C.

Definition 12.3. The matrix A - λB is called a matrix pencil (or pencil of the matrices A and B).

Definition 12.4. The polynomial π(λ) = det(A - λB) is called the characteristic polynomial of the matrix pair (A, B). The roots of π(λ) are the eigenvalues of the associated generalized eigenvalue problem.

When A, B ∈ R^{n×n}, the characteristic polynomial is obviously real, and hence nonreal eigenvalues must occur in complex conjugate pairs.
Remark 12.5. For example, suppose

    A = [1  0],   B = [1  0],   (12.3)
        [0  β]        [0  α]

where α and β are scalars. Then the characteristic polynomial is

    det(A - λB) = (1 - λ)(β - αλ)

and there are several cases to consider.

Case 1: α ≠ 0, β ≠ 0. There are two eigenvalues, 1 and β/α.

Case 2: α = 0, β ≠ 0. There is only one eigenvalue, 1 (of multiplicity 1).

Case 3: α ≠ 0, β = 0. There are two eigenvalues, 1 and 0.

Case 4: α = 0, β = 0. All λ ∈ C are eigenvalues since det(A - λB) ≡ 0.

Definition 12.6. If det(A - λB) is not identically zero, the pencil A - λB is said to be regular; otherwise, it is said to be singular.
Note that if N(A) ∩ N(B) ≠ 0, the associated matrix pencil is singular (as in Case 4 above). While there are applications in system theory and control where singular pencils appear, only the case of regular pencils is considered in the remainder of this chapter. Note that A and/or B may still be singular even when the pencil is regular. If B = I (or, in general, when B is nonsingular), then π(λ) is a polynomial of degree n, and hence the pencil A - λB always has precisely n eigenvalues. If B is singular, the pencil has fewer than n finite eigenvalues, and the "missing" eigenvalues may be regarded as being at infinity.

Remark 12.7. Associated with any matrix pencil A - λB is a reciprocal pencil B - μA and corresponding generalized eigenvalue problem. Clearly the reciprocal pencil has eigenvalues μ = 1/λ. It is instructive to consider the reciprocal pencil associated with the example in Remark 12.5. With A and B as in (12.3), the characteristic polynomial of the reciprocal pencil is

    det(B - μA) = (1 - μ)(α - βμ)

and there are again four cases to consider.

Case 1: α ≠ 0, β ≠ 0. There are two eigenvalues, 1 and α/β.

Case 2: α = 0, β ≠ 0. There are two eigenvalues, 1 and 0.

Case 3: α ≠ 0, β = 0. There is only one eigenvalue, 1 (of multiplicity 1).

Case 4: α = 0, β = 0. All μ ∈ C are eigenvalues since det(B - μA) ≡ 0.

At least for the case of regular pencils, it is apparent where the "missing" eigenvalues have gone in Cases 2 and 3. That is to say, there is a second eigenvalue "at infinity" in Case 2 of the pencil A - λB, with its reciprocal eigenvalue being 0 in Case 2 of the reciprocal pencil B - μA. A similar reciprocal symmetry holds for Case 3.
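The finite and infinite eigenvalues in the cases above can be observed numerically. A minimal sketch (SciPy's generalized eig; the values of α and β are arbitrary choices, and SciPy reports an eigenvalue "at infinity" as inf):

```python
import numpy as np
from scipy.linalg import eig

# A = diag(1, beta), B = diag(1, alpha) as in (12.3).
alpha, beta = 2.0, 3.0
A = np.diag([1.0, beta])
B = np.diag([1.0, alpha])
print(eig(A, B, right=False))        # Case 1: eigenvalues 1 and beta/alpha = 1.5

# Case 2 (alpha = 0): B is singular; the missing eigenvalue is "at infinity"
# and is reported as inf, while the reciprocal pencil B - mu*A has 1 and 0.
B2 = np.diag([1.0, 0.0])
print(eig(A, B2, right=False))       # 1 and inf (up to ordering)
print(eig(B2, A, right=False))       # reciprocal pencil: 1 and 0
```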
solving the generalized eigenvalue problem. 3. Then there exist unitary matrices Q. Proof: Proof: 1.AB)Z] = det gdet Zdet(A 1. If B is nonsingular. Let A. . c 3. Let A. det(QAZ . the result follows. 7. Since det 0 and det Z are nonzero. 2. this turns out to be a very poor numerical procedure for handling the generalized eigenvalue problem out to be a very poor numerical procedure for handling the generalized eigenvalue problem if is even moderately ill conditioned with respect to inversion. fl.AQBZ) = det[Q(A . [7.7] or [25. Theorem 12. Since det Q XB). The result follows by noting that (A AB)x = 0 if and only if Q(A AB)Z(Zl x) = The result follows by noting that (A –yB)x . [7.7] or [25.7. see. for example. There is also an analogue of the MurnaghanWintner Theorem for real matrices. for example. in fact. Again. work directly on A and B are discussed in standard textbooks on numerical linear algebra.
form (KCF). The matrix pencil 12. Then there exist 12.AB)Q = [~ ~ ] .AB.2)2 with characteristic polynomial (A — 2)2 has a finite eigenvalue 2 of multiplicty 2 and three 2 2 infinite eigenvalues. Let A. of eigenvalues are given as above by the ratios of diagonal elements of S to corresponding elements of T. T.AB where J is a Jordan canonical form corresponding to the finite eigenvalues of A A.XB is regular. . [2o I o o o 0 0 0 0 0 2 0 0 1 0 0 1 0 0 ~ ]> [~ 0 I 0 0 0 0 0 0 0 0 o o 0 I 0] 0 0 0 0 (X . we present only statements of the basic theorems and some examples. E jRnxn 12. Then there exist orthogonal matrices Q. In this chapter. where T is upper triangular and S is quasiuppertriangular. of — XB.11. The first theorem pertains only to "square" regular pencils. the 2 x 2 subpencil formed with the corresponding fonned 2 x diagonal subblock 2x2 2 diagonal subblock of T has a pair of complex conjugate eigenvalues. while the full KeF in all its generality applies also to "rectangular" and singular KCF "rectangular" pencils. QBZ = T. mxn E C • Theorem 12. including analogues of principal vectors and description of of so forth. Q € c nxn"such that nonsingular E C" such that peA . Generalized Eigenvalue Problems Chapter 12. Z e R"xn such B E jRnxn.AB)Q = diag(LII' . Then there x exist nonsingular matrices P. B e Cnxn and suppose the pencil A . Let A. B e Rnxn. A full description of the KeF.• L.fi and canonical form nilpotent matrix of associated and N is a nilpotent matrix of Jordan blocks associated with 0 and corresponding to the infinite infinite eigenvalues of A . J .A [~ ~ l of . L l" L~. is beyond the scope of this book..)"N).'.128 Chapter 12.11.10. Example 12.I.12 mxm nxn mxm nxn E C nonsingular nonsingular matrices P e c and Q e c QE C such that peA . When S has a 2 x 2 diagonal block. I . Let A. B E c nxn pencil — AB Theorem 12.. KCF. real eigenvalues.12 (Kronecker Canonical Form). There is also an analogue of the Jordan canonical form called the Kronecker canonical fonn Kronecker form (KeF). . Otherwise.9.A. .. thnt that QAZ = S. B e c mxn . quasiuppertriangular..9. Generalized Eigenvalue Problems Theorem 12.
B e Wlxn and suppose the pencil A . generalized eigenproblem.. next two correspond to correspond J = 21 0 2 [ o 0 while the nilpotent matrix N in this example is N [ ~6~]. where each LQ has "zero columns" and one row. The second block is L\ while the third block is LI. The /( are called the left minimal indices while the r. and L^ is the (k + I) x k where N is nilpotent. LQ .e.12. Left Left or right minimal indices can take the value O. n(S)) = S. there is an analogous geometric concept for the eigenproblem generalized eigenproblem.The next two blocks second block L\ one the block is L\. Canonical Forms 12.— XBif S Rn. R ( S <S. Lo. and Lk is the (k + 1) x k bidiagonal pencil bidiagonal pencil A 0 0 A Lk = 0 0 0 0 A I The Ii are called the left minimal indices while the ri are called the right minimal indices. Lo L6 one column.4) eigenvalue characterization Just as in the standard eigenvalue case. while each LQ has "zero rows" and L6. (12. L6.. Then is deflating subspace for the pencil A AB if and only if there exists M E Rkxk such that e ~kxk AS = BSM. Let A. Lo.XB is regular. both Nand J are in Jordan canonical form.2. 000 Just as sets of eigenvectors span Ainvariant subspaces in the case of the standard eigenvectors eigenproblem (recall Definition 9. (12.35). Consider a 13 x 12 block diagonal matrix whose diagonal blocks are A 0] I o A I . are called the right minimal indices.13. 0. Such a matrix is in KCF. suppose S e Rn* xk is a matrix whose columns span a kdimensional E ~nxk ^dimensional subspace S of ~n. LQ. Then SS is aadeflating subspace for the pencil A . i. both N and J are in Jordan canonical form. Lo. LQ.2. Definition 12. Example 12. Canonical Forms 129 where N is nilpotent. Specifically.14. there is a matrix characterization of deflating subspace.e. i. corresponds LQ. The first block of zeros actually corresponds to LQ.5) . Then V is a E ~nxn suppose pencil — AB deflating subspace if deflating subspace if dim(AV + BV) = dimV.
Similarly. and E jRPxm. Similarly. D=O. B € R" xm . which has a root at —2. Numerically. zeros). The method of finding system zeros via a generalized eigenvalue problem also works The method of finding system zeros via a generalized eigenvalue problem also works well for general multiinput.6)).6).5) becomes AS = SM as before. Then the transfer matrix (see [26]) of this system is Then the transfer matrix (see [26)) of this system is g(5)=C(sIA)'B+D= 5 55 2 + 14 ' + 3s + 2 which clearly has a zero at —2.15.4) becomes dim (A V + V) = dim V.130 Chapter 12. Ac M D "'" 5A + 14.5) becomes AS = SM as before. we offer some insight below into the special case of a singleinput.3 Application to the Computation of System Zeros Application to the Computation of System Zeros i y Consider the linear system Consider the linear svstem = Ax + Bu. and y is the vector of outputs or observables. the (finite) zeros of this system are given by the (finite) complex numbers where the "system pencil" z. 12.6) drops rank. which is clearly equivalent to If B = I. there AV ~ V. E jRPxn. Let A=[ 4 2 C = [I 2].8. then (12. where x(= x(t)) is called the state vector. E jRnxm. these values are the generalized eigenvalues of the (n + m) x (n m) pencil.8. lEthe pencil is not regular. one must be well for general mUltiinput.4) becomes dim(AV + V) = dimV. vector. [26]. C e Rpxn. there is a concept analogous to deflating subspace called a reducing subspace. = Cx + Du E jRnxn. Checking the finite eigenvalues of the pencil (12. however. In the special case p = m. Example 12.6». However. for example. trivial. (n + m) x (n + m) pencil. This is accomplished by computing a certain unitary equivalence on the system pencil that then yields a plished by computing a certain unitary equivalence on the system pencil that then yields a smaller generalized eigenvalue problem with only finite generalized eigenvalues (the finite smaller generalized eigenvalue problem with only finite generalized eigenvalues (the finite zeros). For details. In general. see. one must be careful first to "deflate out" the infinite zeros (infinite eigenvalues of (12. This linear with A € M n x n . u is the vector of inputs or controls. and D € Rpxm. is a concept analogous to deflating subspace called a reducing subspace. For details. see. multioutput systems.3 12. which is clearly equivalent to AV c V. we find the characteristic polynomial to be find the characteristic polynomial to be det [ which has a root at 2. In the special case p = m. Checking the finite eigenvalues of the pencil (12. This is accomcareful first to "deflate out" the infinite zeros (infinite eigenvalues of (12. Numerically.15. u is the vector of inputs or controls. and y is the vector of outputs or observables. where the "system pencil" (12. these values are the generalized eigenvalues of the drops rank. Let Example 12.6). we which clearly has a zero at 2. The connection between system zeros and the corresponding system pencil is nonThe connection between system zeros and the corresponding system pencil is nontrivial.8. [26]. B] . multioutput systems. for example. If the pencil is not regular. we offer some insight below into the special case of a singleinput. However. (12. (12.8. Generalized Eigenvalue Problems If B = /. however. where x(= x(t)) is called the state space model is often used in multivariable control theory. then (12. the (finite) zeros of this system are given by the (finite) complex numbers In general. 
This linear timeinvariant statespace model is often used in multivariable control theory.
l xn. For example. symmetric.8) c T x +dy = O.A)~ ! Z? + d denote the system transfer function (matrix).nxn A AT and B the B1 0. M K where M is a symmetric positive definite "mass matrix" and K is a symmetric "stiffness definite "stiffness matrix. and v(s) and n(s) are relatively prime TT(S) v(s) TT(S) (i. then from (12. and D e R r T(s7 . z is a zero of g. we have Substituting this in (12. system of differential equations differential Mx+Kx=O. or g ( z ) y = 0 by the definition of g.7) (12.8). of the Since B is positive definite it is nonsingular. B e ffi..4. and D = d E R. "pole/zero cancellations"). Symmetric Generalized Eigenvalue Problems 12.4 12.e.7) we get get x = (A . no pole/zero cancellations). 12. the problem (12. However.4 Symmetric Generalized Eigenvalue Problems Symmetric Generalized Eigenvalue Problems Ax = ABx A very important special case of the generalized eigenvalue problem (12. the problem (12.n. Hence g(z) = 0." is a frequently employed model of structures or vibrating systems and yields a frequently generalized eigenvalue problem ofthe form (12. Then there exists a nonzero solution to or or (A . Hence g(z) 0.e. b e ffi.12. .zl)x + by = 0. Symmetric Generalized Eigenvalue Problems 131 131 1 singleoutput system.zI cT b ] d is singular.zl)lby + dy = 0. C = c T E R l x n . B E Rnxn arises when A = A and B = BT > O.10) is equivalent B. Suppose z € is such that Suppose Z E C is such that [ A . Thus.9) Substituting this in (12.10) is equivalent Since B is positive definite it is nonsingular.A to the standard eigenvalue problem Bl1Ax = AJC.10) for A.9». relatively where n(s) is the characteristic polynomial of A.e. Now y ^ 0 (else x z i. e ffi. g. let B = b E Rn.. (12.10).9)). we have _c T (A . (12. Specifically. 0 from (12.s) = c (s I — A) 1 b + d c function and assume that g(s) can be written in the form and assume that g ( s ) can be written in the form v(s) g(s) = n(s)' polynomial A. B~11A is not necessarily B~ Ax = AX.8).. there are no "pole/zero cancellations"). let g(. or g(z)y 0 by the definition of g. g(s) Furthermore. A pole/zero Assuming z is not an eigenvalue of A (i.zl)lby.4. Now _y 1= 0 (else x = 0 from (12. Thus. the secondorder A.
but since realvalued matrices are commonly used in most applications..1926 and 3.23).12) has n real eigenvalues. l = [i ~ J B ThenB~ A Then A B~Il = [~ ~ J B~I A approximately Nevertheless.16 is Example 12.12) Since C = C T the eigenproblem (12. if orthogonal > 0..fi 1] . zn satisfying vectors Z I. the eigenvalues are also all positive. (12. if A = AT> 0. The Cholesky factor for the matrix B in Example 12. with corresponding eigenvectors zi.16. = L ~Tzi. then C = C T > 0. where L is nonsingular Proof: Since B > 0. but since realvalued matrices are commonly used in most applications. B E Rnxn with A = AT and B = BT > 0. .. Finally. if A = A > 0. Moreover.. be generalized easily to the case where A material of can.16 is D 0 L=[~ . Finally. then = C T > 0. Let A. so the eigenvalues are positive. Generalized Eigenvalue Problems Example 12.1926 as expected. Theorem 12. Moreover..16. (12.5 2. y)BB = XT By. Example 12. B e jRnxn A AT and B BT > O.1926 whose eigenvalues are approximately 2. Then the eigenvalue problem (Theorem 10. Then the eigenvalue problem Ax = ABx = ALL Tx (12. are eigenvectors of the original generalized eigenvalue problem and satisfy and satisfy (Xi.16).23). •. and the n corresponding right eigenvectors can be chosen to be orthogonal with respect to the inner product (x.12) has n real eigenvalues. where L is nonsingular (Theorem 10. The material of this section can. .5 ] 1..17. Proof: Since B > 0. Xj)B T T = xr BXj = (zi L ~l)(LLT)(L ~T Zj) = Dij. the eigenproblem (12. Then the generalized A..11) can then be rewritten as = Cz = AZ.18. Let A = [~ . (12.18. the eigenvalues of B l A are always real (and are approximately 2. The Cholesky factor for the matrix B in Example 12. so the eigenvalues are positive. it has a Cholesky factorization B = LLT.. ii € n.. of course. if A > 0..11) can then be rewritten as AL J and Z = LT x. Let A Example 12. positive. we have restricted our attention to that case only. Zn Zj = Dij.5 ' 3. and are Hermitian.1926 and —3. . the eigenvalue problem eigenvalue problem Ax = ABx has n real eigenvalues. then product y) x T By.fi Then it is easily checked that Then it is easily checked thai c = L~lAL~T = [ 0. zi Then x. are eigenvectors of the original generalized eigenvalue problem Xi Zi. it has a Cholesky factorization B = LL T.132 132 Chapter 12.1926 in Example 12. with corresponding eigenSince C = C T.11) can be rewritten as the equivalent problem 1 Letting C = L ~I AL ~T and z = L1 x. generalized case A and B are Hermitian. Generalized Eigenvalue Problems Chapter 12.5 2. E !!. we have restricted our attention to that case only..
since QDQ~l have A(D) = A(B~1A). there exists Q e E"x" such that QT AQ = D and QT BQ = I.e. Again. the diagonal elements of D are the eigenvalues of B. straightforward way. since A 2: B.19.e. we restrict our attention only to the real case. Simultaneous Diagonalization 133 12.21 we have that QT AQ > QT BQ. such results and we present only a representative (but important and useful) theorem here. It turns out that in some cases a pair of matrices (A. Thus.< / (this is trivially true 0 since the two matrices are diagonal). Then there exists a nonsingular matrix Q such that where D is diagonal.19 (Simultaneous Reduction to Diagonal Form). Also.1 Simultaneous diagonalization via SVD Simultaneous diagonalization via SVD There are situations in which forming C L I AL T as in the proof of Theorem 12.lI QT :::: Q QT.19. since QDQI Finally.1AQ. B e M" xn be positive definite.e. where D is diagonal. Simultaneous Diagonalization 12. Let Q = L .19 is very useful for reducing many statements about pairs of symmetric Theorem 12. A~l :::: Bl1.19 is very useful for reducing many statements about pairs of symmetric matrices to "the diagonal case. e.'AB. Then B. Then there exists a nonsingular matrix Q such that A = AT and B = BT > 0. Proof: By Theorem 12. To illustrate. so it does not preserve eigenvalues of and B Note that Q is not in general orthogonal. B E lRnxn be positive definite. Then A 2: B if and only if B~ 2: AI.1A = Q1l B~1QT QT AQ = Q11B. In such cases. when L is highly iII conditioned with respect to inversion. Let A. simultaneous reduction can also be accomplished via an SVD. B) can be simultaneously diagonalized by the same matrix. There are many matrices (A. In numerically problematic. simultaneous reduction can also be accomplished via an SVD. To illustrate. Let A.19 is numerically problematic. there exists Q E lR~xn such that QT AQ = D and QT BQ = [. with the complex case following in a Again. Also. LetA QT AQ and B QT Then/HA Q~ B.5 Simultaneous Diagonalization Simultaneous Diagonalization Recall that many matrices can be diagonalized by a similarity.5. = QQT AQQ~l = LTPPTL~IA = L~TL~1A L T P pT L 1 A L T L I A QQT AQQI 0 D = B1A.5 12.20. But then D. Since LLT be the Cholesky factorization of and setC L I AL~T. when L is highly ill conditioned with respect to inversion. It turns out that in some cases a pair of trices can be diagonalized by a unitary similarity. D > I..20. In fact.21 we have that QT AQ 2: QT BQ.1 12. since A > B. Proof: By Theorem 12. there exists an orthogonal matrix P such that P CP = D.. \ 2.31. we Note that Q is not in general orthogonal. i. Infact.19 is There are situations in which forming C = L~1AL~T as in the proof of Theorem 12. There are many such results and we present only a representative (but important and useful) theorem here. Let A.. Now D > 0 by Theorem 10. In particular. B E E"x" with 12. D 2: [.1A. A1. i. where D is diagonal.g.g.l Q~T QT Q~ B~ AQ. Then A > B if and only if Bl1 > Theorem 12. e. Proof: Let B = LLT be the Cholesky factorization of B and set C = L~1AL T. it does preserve the eigenvalues of A — XB. by Theorem 10.5. Theorem 12.. Let Q = L~T P. However. i." The following is typical. A I < B~ . it does preserve the eigenvalues of A .1A). Then and and QT BQ Finally.31. Q D. In particular. But then D"1I :::: [(this is trivially true 10. Then diagonal. let such cases. the diagonal elements of D are the eigenvalues of B 1A. Thus. = pT L I(LLT)L T P = pT P = [. haveA(D) = A(B. with the complex case following in a straightforward way.5. 
normal maRecall that many matrices can be diagonalized by a similarity. where D is C is symmetric.e. D since the two matrices are diagonal). However. Let A = QT AQandB = QT BQ. we restrict our attention only to the real case.12. Since Proof: Let T C is symmetric.T P.. so it does not preserve eigenvalues of A and B individually. normal matrices can be diagonalized by a unitary similarity. i.5. B) can be simultaneously diagonalized by the same matrix. individually. matrices to "the diagonal case. we B~ 1 A.19 e ][~nxn A AT and B BT > O. Theorem 12. This can be seen directly. Now D > 0 by Theorem 10. QD~ QT < QQT. let . where D is diagonal. there exists an orthogonal matrix P such that pTe p = D. by Theorem where D is diagonal. This can be seen directly." The following is typical.. Theorem 12.
note that T QT AQ = U Li/(LAL~)Li/U = UTULVTVLTUTU i/ = while L2 QT BQ = U T LB1(LBL~)Li/U = UTU = I.butin writing A — PDDP T = PD(PD) with D is diagonal and P orthogonal.21.e. but in writing = PDDp D diagonal. Further. The case when A is symmetric but indefinite is not so A = AT::: O. Compute the SVD Cholesky factorizations A B. which is thus to the generalized eigenvalue problem 02. D b .. To check this.. For example.21 example.15) The problem (12. eigenproblem MT M x Xx. Generalized Eigenvalue Problems Chapter 12. Remark 12.134 134 Chapter 12.13)) and LB separately. at least in real arithmetic. This is analogous to finding the singular values of a matrix M by Sec. D may have pure imaginary elements.13» via arithmetic operations performed only on LA LA (12. Then the matrix Q U performs the simultaneous L e 1R~ xn diagonalization. let A = LALTA and B — LBL~ us assume that both A and B are positive definite. A can be written as A = PDP T. [7.3].13) where E E R£ x " isisdiagonal.13) can be computed without explicitly forming the without Remark product indicated matrix product or the inverse by using the socalled generalized singular value decomposition (GSVD). i. for generalizations results 12. Then the matrix Q == LLBTu performs the simultaneous diagonal. Various generalizations of the results in Remark 12. see. Note that LB A and thus the singular values of L B 1 LA can be found from the eigenvalue problem 02. respectively. example.14) Letting x = LB z we see that (12. operations performed directly on M rather than by forming the matrix MT M and solving performed MT forming the eigenproblem MT MX = AX.14) can be rewritten in the form LALAx = XLBz = Letting x = LBT Z we see 02.e. (12.21 are possible. Remark 12.7. Further. A straightforward. The SVD in (12. when A = AT > 0. respectively. which is thus equivalent to the generalized eigenvalue problem ALBL~LBT z.14) rewritten the LAL~x = ALBz = A L g L ^ L g 7 z . without forming the products LALTA or LBLTB explicitly. products LA L ~ LBL~ see. Generalized Eigenvalue Problems us assume that both A and B are positive definite. PDPT ~ ~ ~ ~ T PD(PD{ with where Disdiagonaland P is orthogonal.22.. Sec. for LB i.15) is called a generalized singular value problem and algorithms exist to problem generalized solve it (and hence equivalently (12. let A = LAL~ and B = LsLTB be Cholesky factorizations of A and B. 8.
16) arises frequently in applications: M = I.. If r n (i.6 12. then all solutions of q K q 0 are oscillatory.. HigherOrder Eigenvalue Problems 135 12. C = 0.e. A special case of (12. are to be determined.16) arises frequently in applications: 0. k = 1. KT > 0). (12. C. the secondorder problem (12. (12. HigherOrder Eigenvalue Problems 12.. and A special case of (12.. by analogy with the firstorder case.C + K. K e Rnxn. then all solutions of q + Kq = 0 are oscillatory.16) or.16) can still M secondorder generalized linear be converted to the firstorder generalized linear system converted I [ o M OJ'x = [0 K I C Jx. .. . or if it is desired to avoid the calculation of M lI because M is too ill conditioned with respect to inversion. Then (12.6.2M + A.6. Suppose. since eAt :F 0.. Since the determinantal equation is singular.6. 12..1 12.16) we get (12.• Then the 2n eigenvalues of the secondorder eigenvalue problem A2 I /+ K Let Wk =  fjik 12 Then the 2n eigenvalues of the secondorder eigenvalue problem A. k = r + 1. there are 2n eigenvalues for the secondorder (or A2 M + AC + K. . Suppose K = KT.2 K are are ± jWk. p. Assume for simplicity that M is nonsingular. Suppose K has eigenvalues eigenvalues IL I ::: . (A 2 M + AC + K) p = O. . yields a polynomial of degree 2rc.. If r = n (i. If M is singular..C + K is singular. M Mwhere x(t) €. where q(t} e W1 and M. Since the determinantal equation o = det(A 2 M + AC + K) = A2n + . Substituting in form q(t) = ext p.e. we thus seek values of A. for which the matrix A.16) of the p A are to be determined.12. ± Wk. where the nvector p and scalar A.16) Consider the secondorder system of differential equations Consider the secondorder system of differential equations q(t) E ~n E ~nxn.6..1 Conversion to firstorder form Conversion to firstorder form Let x\ = q and \i = q. seek A A2 M + AC + To get a nonzero solution /?. K = KT ::: 0). polynomial 2n. r.6 HigherOrder Eigenvalue Problems HigherOrder Eigenvalue Problems Mq+Cq+Kq=O.2M + A. quadratic) eigenvalue problem A. ..16) can be written as a firstorder system (with block companion matrix) X . E2". Substituting in q(t) = eAt p.16) can be written as a firstorder system (with block Let XI q and X2 Then (12. = [ M1K 0 x (t) E ~2n. and = = KT.. ::: ILr ::: 0 > ILr+ I ::: . n. that we try to find a solution of (12. ::: ILn· Let a>k = IILk I!.
EXERCISES EXERCISES nx 1. andlor K Many other firstorder realizations are possible. Are the FG and GF the 3. Generalized Eigenvalue Problems Chapter 12.B D.19). Hint: Consider the equivalence I G][AUO F0]' B][I l [01 C (A similar result is also true for "nonsquare" pencils. Let F e Cnxm . properties Higherorder analogues of (12. In the parlance of control theory. C. Show that the finite generalized eigenvalues of E lR " finite eigenvalues of e R™ x m the pencil [~ ~JA[~ ~J are the eigenvalues of the matrix A — BD 1 C. such results show that zeros are invariant under state feedback or output injection. F 6 Rm *" G R" x ..1 2. (A similar result is also true for "nonsquare" pencils. G e Cmxn • Are the nonzero singular values of FG and GF the same? same? wx E ]Rnxn. E Rnxm and E E 4. Let € C M X • Show that the nonzero eigenvalues of and G F are the same. lead naturally naturally involving. the kth derivative of q.19). to higherorder eigenvalue problems that can be converted to firstorder form using a kn x kn to higherorder eigenvalue problems that can be converted to firstorder form using aknxkn block companion matrix analogue of (11. Suppose A € Rnxn. Generalized Eigenvalue Problems Many other firstorder realizations are possible.) .. G E enxn". Some can be useful when M.16) involving. and/or K have special symmetry or skewsymmetry properties that can exploited. derivative q. B e lRn*m. . which can be converted to various firstorder systems of dimension kn. say. Let F. In the parlance of control theory. Similar procedures hold for the general k\horder difference equation order difference equation which can be converted to various firstorder systems of dimension kn. and C e lRmxn. Show that the generalized eigenvalues of the pencils ues of the pencils e e [~ ~JA[~ ~J and and [ A + B~ + GC ~] _ A [~ ~] are identical for all F E E"1xn and all G E R" xmm . Show that the nonzero eigenvalues of FG and GF are the same. C. verify Hint: An easy "trick proof is to verify that the matrices "trick proof' [Fg ~] and [~ GOF ] are similar via the similarity transformation are similar via the similarity transformation Let F E nxm G E mx ". Suppose A e Rnxn and D E lR::! xm. Some can be useful when M.136 136 Chapter 12. Similar procedures hold for the general kthblock companion matrix analogue of (11. Show that the generalized eigenval".
A and B to the same diagonal matrix.Exercises Exercises 137 137 desired 5. Another family of simultaneous diagonalization problems arises when it is desired Another simultaneous diagonalization problems operates that the simultaneous diagonalizing transformation Q operates on matrices A.2 and hence are AB E2 positive. respectively. (b) Show that Q~l = ^~^UT LTB. (c) Show that the eigenvalues of A B are the same as those of 1. respectively. Consider the case where both A and transformation contragredient. and let U~VT be an SVD of LTBLA (a) Show that Q = LA V £ ~ 5 is a contragredient transformation that reduces both contragredient = LA V~! A and B to the same diagonal matrix. Ql = ~!UTL~. positive Cholesky = LA L ~ = L B L ~. positive. and let UWT be an SVD of L~LA'. B E e jRnxn Ql AQT ]Rnx" in such a way that Q~l AQ~T and QT BQ are simultaneously diagonal. A B B are positive definite with Cholesky factorizations A = L<A and B = L#Lg. . Such QT BQ a transformation is called contragredient.
This page intentionally left blank This page intentionally left blank .
extension to the complex case only where it is not obvious.. We Obviously. (13. Example 13.1. Then the Kronecker product (or tensor Then the Kronecker product (or tensor product) of A and B is defined as the matrix product) of A and B is defined as the matrix allB A@B= [ : amlB alnB ] : E lRmpxnq.1. We restrict our attention in this chapter primarily to realvalued matrices. pointing out the restrict our attention in this chapter primarily to realvalued matrices.1 13. Foranyfl E lRX(7. Forany B e!F pxq /z @ B = [~ In Replacing 12 by /„ yields a block diagonal matrix with n copies of B along the I2 diagonal with n copies of along the diagonal.Chapter 13 Chapter 13 Kronecker Products Kronecker Products 13. n 2. Then A@B =[ 3~ ~]~U J.2. B e lR pxq. Then 3. Example 13. Let A e R mx ". Then 0 b ll b12 B @/z = l b" b~l 139 0 b2 2 0 b21 0 0 b12 0 b 22 l . the same definition holds if A and B are complexvalued matrices. 2B 2B ~J.1) amnB Obviously. Let A = [~ 2 2 nand B = [. / 2 <8>fl = [o ~ l\ 2. Let B be an arbitrary 2x2 matrix. 1. 4 3 4 3 4 9 4 2 6 2 6 6 6 2 2 Note that B @ A i.2. Let B be an arbitrary 2 x 2 matrix. pointing out the extension to the complex case only where it is not obvious. the same definition holds if A and B are complexvalued matrices.1 Definition and Examples Definition and Examples Definition 13. Note that B <g> A / A <g> B.A @ B... Let A E lRmxn B E R Definition 13.
.2 13. Let A e R mx ". If A and B are nonsingular.5. y e !R.kCkPBD L~=1 amkckpBD ] 0 Theorem 13. For all A and B.n. 5 E R r x i . Theorem 13... E R".m xm are symmetric. If AI ® B. .5. B e ~rxs. Let Jt € Rm.3.2 Properties of the Kronecker Product Properties of the Kronecker Product (A 0 B)(C 0 D) = AC 0 BD (E ~mrxpt).3. C e ~nxp. (A ® Bl = AT ® BT..140 Chapter 13.1 ) Theorem 13. y eR". 5. Then 13. Proof: Proof: Using Theorem 13. simply verify using the definitions of transpose and Kronecker verify transpose Kronecker 0 product.3. Simply verify that ~[ =AC0BD.1. and D e Rsxt.6. then A® B is symmetric.. L~=l al. (A ® B)I = Bare 13. 0 . D Corollary 13. xmYnf E !R. . X2Yl. B In x E ~m.. . (13.. Kronecker Products Kronecker Products The extension to arbitrary B and /„ is obvious. Let E ~mxn. Then 13. Foral! Proof' Proof: For the proof. mn .2) Proof: Simply verify that Proof. A® 13. C E R" x ^ and D E ~sxt. If E ]Rn xn e Rmxm are Theorem 13. .4. XmY T]T = [XIYJ. Let* eR m . .6. simply note that (A ® B)(A 1 ® B. If A e R"xn and B E !R. Then X ® Y = [ XIY T .3. = 1 ® 1 = I. XIYn. 4.
7.. Properties of the Kronecker Product Theorem 13. and let eigenvalues jJij. we can take p thus get the complete eigenstructure of A 0 B. 141 141 Proof: Proof: (A 0 B{ (A 0 B) = (AT 0 BT)(A 0 B) = AT A 0 BT B = AAT 0 B BT by Theorem 13.. then A 0 B is normal. If A e IR nxn am/ B E IR mxm are normal.and let BB E e IRR mxwhave e IR nxn have eigenvalues A. matrix A ® 5 is then also orthogonal with eigenvalues e^'^+'W and e ± ^ (6> ~^ > \ Theorem 13. and zi.. Example 13.12. . TTzen ?/ze mn eigenvalues of A 0 Bare Moreover..c..10..13... Then vI yields a singular value decomposition of A <8>B (after aasimple reordering of the diagonal yields a singular value decomposition of A 0 B (after simple reordering of the diagonal elements O/£A <8> £5 and the corresponding right and left singular vectors). elements of ~A 0 ~B and the corresponding right and left singular vectors). ... then Xi <8> Zj ffi. q Corollary 13. eigenvectors of A® B corresponding to A. if A and fi have Jordan form . then . we can take p = nand q = m and n and q —m and If A and B are diagonalizable in Theorem 13. The 4 x 4 orthogonal e±j9 orthogonal eigenvalues e±j(i>.3 since A and B are normal by Theorem 13..[Cos</> cos</>O Then It IS easl'1y seen that . Let A E R nx "have eigenvalues Ai.3. \Ju (q ::::: m). <I :::: .•. Then A <g)B (or B 0<8> A) has rs singular values U. Lgf A E E mxn have a singular value decomposition VA ~A Theorem 13.. Ap (p ::::: and ZI..JLqq (q < m). Then the mn eigenvalues of A® B are eigenvalues JL j.p (p < n). . 0 Zj E€ IR mn "are linearly independent right eigenvectors of A 0 B corresponding to Ai JL 7 i e /?. 0 If A and Bare diagonalizable in Theorem 13./u. If A E E"xn is orthogonal and B E Mmxm is orthogonal. • • zq independent of to A ..• :::: TS > O. then A <g> B is € IR nxn orthogonal and e IR m x m 15 then 0 is orthogonal.10.. Let A E lR. Sine] and B .8. .2. . . xp are linearly independent right eigenvectors of A corresponding AI..n.9.m are linearly independent right corresponding to JJL\ . 7 E m..8. i E l!! 7 E 1· Proof: proof Proof: The basic idea of the proof is as follows: follows: (A 0 B)(x 0 z) = Ax 0 Bz =AX 0 JLZ = AJL(X 0 z). . then A® B is normal.j. :::: U rTs > 0 and ^iT\ > • • • > ffr <s Qand rank(A 0 B) = (rankA)(rankB) = rank(B 0 A) ..7..2.12. = (A 0 B)(A 0 B)T 0 Corollary 13.• :::: U rr > 0 and let B E IRfx Corollary e R™x" singular a\ > • • > a > e have singular values T\ > • • > <s > 0. . In general. ••.4 by Theorem 13. In general. Properties of the Kronecker Product 13. Then A 0 B (or B A) has rs singular values have singular values <I :::: . i / E e!!. mxm /zave Theorem 13. A0 B e±jeH</» e±jefJ </».• sin e = _ sin</> Sin</>] Then it is easily seen that A is orthogonal with eigenvalues e±jO and B is orthogonal with eigenvalues e±j</J. L et A E xamp Ie 139 Let A = [ _eose cose andB .. . j € m."xn have singular values UI :::: . if A and B have Jordan form thus get the complete eigenstructure of A <8> B.. if x\. if Xl. Theorem 13. Let A G IR mx " have a singular value decomposition l/^E^Vj an^ let and /ef singular decomposition UB^B^BB e IR pxq fi E ^pxq have a singular value decomposition V B ~B VI. . xp are linearly independent right eigenvectors of A corresponding Moreover. If Corollary 13.11.Zq are linearly independent right eigenvectors of B corresponding to JLI.. If A E IR"xn and B eRmxm are normal... j e q.12. A...i .
Then the Kronecker sum (or tensor sum) .13. suppose P and Schur form for A ® B can be derived similarly. Corollary 13. in general. Then 13. while upper triangular. general. denoted A © B. For example. Kronecker Products decompositions given by p.AP J B . are unitary matrices that reduce A and 5. E IR nxn E IR mxm.. Let A e Rn xn and B e Rrn xm. Then reducing A and B to real Schur form).142 142 Chapter 13. is the mn x mn matrix Urn <g> A) + (B ® In).14. is generally not quite in Jordan form and needs Note that JA® JB. Kronecker Products Chapter 13. while upper triangular. Then (P ® Q)H (A ® B)(P ® Q) = (pH ® QH)(A ® B)(P ® Q) = (pH AP) ® (QH BQ) = TA ® TR . is generally not quite in Jordan form and needs further reduction (to an ultimate Jordan form that also depends on whether or not certain further reduction (to an ultimate Jordan form that also depends on whether or not certain eigenvalues are zero or nonzero).. For example. Note that. to Schur (triangular) form. respectively. Let A e Rn Xn and B e Rm xrn. 2. to Schur (triangular) form. Example 13. is the mn mn matrix (Im ® A) + (B ® /„). nxn mxm Definition 13. Tr(A ® B) = (TrA)(TrB) = Tr(B ® A). then we get the JA and Q~] BQ following Jordanlike structure: following Jordanlike structure: (P ® Q)I(A ® B)(P ® Q) = (P.15. of A and B. pH AP = TA and QH BQ = TB (and similarly if and are orthogonal similarities PHAP = TA and QHBQ = TB (and similarly if P and Q are orthogonal similarities reducing A and B to real Schur form). A EEl B ^ B EEl A. ~l 2 2 1 3 AfflB = (h®A)+(B®h) = 1 3 0 1 0 4 0 3 0 0 0 0 0 0 0 0 0 0 0 0 2 2 0 0 0 3 4 2 0 0 2 0 0 2 0 0 2 0 0 0 1 0 0 + 0 2 0 0 2 0 0 0 0 3 0 0 0 3 0 0 0 3 The reader is invited to compute B 0 A = (/3 ® B) + (A 0 h) and note the difference The reader is invited to compute B EEl A = (h ® B) (A <g> /2) and note the difference with A © B. Let 1. with A EEl B.e. Note that. det(A ® B) = (det A)m(det Bt = det(B ® A). respectively. Example 13.13.1 AP) ® (Ql BQ) = JA ® JB · Note that h ® JR. 1. then we get the decompositions given by P~lI AP = J A and Ql BQ = JB. A ® B i= B © A. respectively. denoted A EEl B. eigenvalues are zero or nonzero). . i.15. 1. respectively. Let A~U Then Then 2 2 !]andB~[ . suppose P and Q are unitary matrices that reduce A and B.I ® Ql)(A ® B)(P ® Q) = (P.14.e. i. A Schur form for A ® B can be derived similarly. E IR E IR Kronecker Definition 13. in of A and B.
and let B E Rmx'" have e jRnxn eigenvalues A.xp are linearly independent right eigenvectors of A corresponding Moreover. (I} ® M) + (E^®l2) = M 0 Ek... Recall the real JCF 2.. then decompositions given JA and Qt BQ [(Q ® In)(lm ® p)rt[(lm ® A) = [(1m ® p)I(Q ® In)I][(lm ® A) = (1m ® lA) + (B ® In)][CQ ® In)(lm ® P)] + (B ® In)][(Q ® In)(/m ® + (B ® P)] = [(1m ® pI)(QI ® In)][(lm ® A) In)][CQ ® In)(/m <:9 P)] + (JB ® In) is a Jordanlike structure for A $ B. . . ii E E... + fJj' € p. Properties of the Kronecker Product 13. i E !!. f^q (q ::s: ra)..i e n. . eigenvectors of A® B corresponding to Ai + [ij.13. ..16.2. A2 + fJt. zq are linearly independent eigenvectors of corresponding to fJt.. Define 0 0 0 0 o o Ek = 0 o Then 1 can be written in the very compact form 1 = (4 <8>M) + (Ek ® h) = M $ E k . if A and have Jordan form thus get the complete eigenstructure of A 0 B. j e q. j E ra. A2 + fJm. 0 I M 0 where M = [ where M = o M a f3 f3 a J.2.16.. e jRmxm eigenvalues /z. . TTzen r/ze Kronecker sum A $ B eigenvalues e/genva/wes Al + fJt.. . . xp are linearly independent right eigenvectors of A corresponding to AI. fJq (q < m).. j E fl· eigenvectors of A $ B corresponding to A.•• . we can take p = n and q = m and thus get the complete eigenstructure of A $ In general. if XI. .. . . . Then the Kronecker sum A® B = (1m (g>A) + (B ® In) has mn (Im ® A) + (B <g> /„) /za^ ran eigenvalues fJj. if x\. respectively. . .. 0 If A and Bare diagonalizable in Theorem 13. and z\. then Zj ® Xi E€ jRmn" are linearly independent right Zj <8> Xi W1 are linearly independent right corresponding f j i . we can take p nand q and If A and B are diagonalizable in Theorem 13.. is a Jordanlike structure for A © B.···.. . . In general.16. respectively. . then decompositions given by P~1AP = lA and Q"1 BQ = JB. 7 e I!!.. Ap (p < and ZI. AI + fJm. . Xp (p ::s: n). Properties of the Kronecker Product 2.. .. if A and B have Jordan form pI l B . An + fJm' Moreover. Proof: The basic idea of the proof is as follows: Proof: The basic idea of the proof is as follows: [(1m ® A) + (B ® In)](Z ® X) = (Z ® Ax) = (Z + (Bz ® X) ® Ax) + (fJZ ® X) = (A + fJ)(Z ® X). ... Recall the real JCF M I M 143 143 0 I M I 0 o 1= 0 E jR2kx2k.\ . Then J can be written in the very compact form J Theorem 13. Zq are linearly independent right eigenvectors of B AI.. Let A E E"x" have eigenvalues Ai.
[(Q ® /„)(/« ® P)] = (<2 ® P) is unitary by Theorem 13. . (13. The following definition is very helpful in completing the writing of (13.5) clearly can be written as the Kronecker sum (1m 0 A) + The coefficient matrix in (13.3 and Corollary 13.=1 A special case of (13.3) is the symmetric equation AX +XAT = C (13. When does a solution exist? The first important question to ask regarding (13.. j=1 These equations can then be rewritten as the These equations can then be rewritten as the mn x mn linear system x linear system A+blll bl21 A + b 2Z 1 b2ml b 21 1 (13. i.3 and Corollary 13. When C is symmetric.3) is. Lyapunov equations also to be symmetric and (13.XB. where [(Q <8>In)(lm ® P)] = (Q ® P) is unitary by Theorem 13. = AXi + l:~>j.3) mxm E IRnxn E IR E IRnxm. the solution X E Wnx" is easily shown taking B = AT. B e Rmxm . Again. an "ordinary" linear system. arise naturally in stability theory. . Kronecker Products A Schur fonn for A EB B can be derived similarly.e. to Schur (triangular) form.3) in terms of their easily seen z'th columns that ith columns that m AXi + Xb. = C. Sylvester who studied general linear matrix equations of the fonn k LA. it is easily seen by equating the writing (13.8. pH AP = TA matrices that reduce A and B. and C e M" xm . When symmetric. =C. ® P)] = (/m <8> rA) + (7* (g) /„). suppose P and Q are unitary fonn. i. Again. PHAP = TA that reduce to Schur and QH BQ = TB (and similarly if P and Q are orthogonal similarities reducing A and B and QHBQ = TB (and similarly if P and Q are orthogonal similarities reducing A and B to real Schur form).4) is known as a Lyapunov equation. 13.J. Sylvester who studied general linear matrix equations of the form equation in honor of J. Kronecker Products Chapter 13.5) [ blml The coefficient matrix in (13.3 Application to Sylvester and Lyapunov Equations Application to Sylvester and Lyapunov Equations In this section we study the linear matrix equation In this section we study the linear matrix equation AX+XB=C.144 Chapter 13..3) in tenns of their columns.3 13.e.1.5) clearly can be written as the Kronecker sum (Im * A) + (BT ® In). This equation is now often called a Sylvester equation is now often equation in honor of 1.Xj. Then ((Q ® /„)(/« ® P)]"[(/m <8> A) + (B ® /B)][(e (g) /„)(/„.4) obtained by taking B = AT.5) as an "ordinary" linear system. Lyapunovequations arise naturally in stability theory.8. solution e IR xn also to be symmetric and (13. The first important question to ask regarding (13. Sylvester where A e R"x".. respectively.5) as (B T 0 /„). When does a solution exist? By writing the matrices in (13. suppose P and are unitary A Schur form for A © B can be derived similarly. The following definition is very helpful in completing the writing of (13.4) is known as a Lyapunov equation. .3) is. Then to real Schur fonn).
4)) are generally not solved using the mn x mn "vee" formulation (13.8) can be written as can be written as (13.X(O) = A 10 roo X(t)dt + ([+00 X(t)dt) 10 B.8) by Theorem 13. elegant connections between matrix theory and stability theory for differential equations.16. + IJLJ. Now integrate the differential equation X AX XB (with X(O) C) on [0. the linear system (13.6) There exists a unique solution to (13.6) directly with operations rather than the O(n 6 that would be required by solving (13.18. B e Rmxm.6) if and only if [(Im ® A) + (BT ® /„)] is nonsingular. Sylvester equations of the form (13. one of many The next few theorems are classical.18. . Let A e jRnxn. xn Theorem 13.. Theorem C E jRnxm. n :::: m. Then the Sylvester equation G jRmxm..6) if and only if [(1m ® A) + (B T ® In)] is nonsingular. n > m.B have no eigenvalues in common. say. and C e Rnxm. so there exists unique Proof: Since A and B are stable. E R E jRnxm. j j so there exists aaunique for all i. (real) Schur form. the linear system (13. A further enhancement to this algorithm is available in [6] whereby the larger of A or B is initially reduced only to upper Hessenberg rather than triangular the larger of A or B is initially reduced only to upper Hessenberg rather than triangular Schur form. Ai E A(A). . ofC e jRnxm [CI. where From Theorem 13. has a unique solution if and only if A and —B have no eigenvalues in common.1S. An equivalent linear system is then solved in which the triangular form equivalent linear system is then solved in which the triangular form of the reduced and can be exploited to solve successively for the columns of a suitably of the reduced A and B can be exploited to solve successively for the columns of a suitably transformed solution matrix X. i.(B) ^ solution to(13.13. (13. Let Ci( € E.5) can be rewritten in the form Using Definition 13. A(fi). Assuming that.7) has a unique solution if and only if A and . They culminate in Theorem 13. The most commonly preferred numerical algorithm is described in [2].. . Then the (unique) solution of the Sylvester equation parts in the open left halfplane).17. (13. A. this algorithm takes only O(n3 ) operations rather than the O(n6)) that would be required by solving (13. E jRmxm. The next few theorems are classical. A further enhancement to this algorithm is available in [6] whereby Gaussian elimination. We thus have the following theorem. Schur form.17. B E Rmxm. There exists a unique solution to (13. The most (13.n denote the columns ofC E Rnxm so that C = [ n . and C e R" xm .10) . c ].3) (or symmetric Lyapunov equations of the form Sylvester equations of the form (13. .5) can be rewritten in the form [(1m ® A) + (B T ® In)]vec(X) = vec(C). Now integrate the differential equation X = AX + X B solution to (13.and Mj Ee A(B).3) (or symmetric Lyapunov equations of the form (13. Suppose further are asymptotically stable (a matrix is asymptotically stable if all its eigenvalues have real are asymptotically stable (a matrix is asymptotically stable if all its eigenvalues have real parts in the open left halfplane).24. We thus have the following theorem.18.. . First A and B are reduced to (real) Schur form.19.. Application to Sylvester and Lyapunov Equations 13. the eigenvalues of [(1m ® A) + (BT <8> /„)] are + Mj. First A and B are reduced to commonly preferred numerical algorithm is described in [2]. one of many elegant connections between matrix theory and stability theory for differential equations. and ^j Theorem 13. 
e m.6) directly with Gaussian elimination.9) Proof: Since A and B are stable. Assuming that. AX+XB=C (13. (A)+ Aj(B) =I 00 for all i.6). c E jRn the Then vec(C) is defined to be the mnvector formed by stacking the columns ofC on top of by C ::~~::~: ::d~~:::O:[]::::fonned "ocking the colunuu of on top of one another. From Theorem 13.24. E!!.e A (A).8)by Theorem 13.17.. j j E!!!. this algorithm takes only 0 (n 3) transformed solution matrix X. Suppose further that A and B E Rn .. vec(C) = Using Definition 13. Aj(A) + A. Let A e lRnxn.. Application to Sylvester and Lyapunov Equations 145 145 Definition 13.16.4» are generally not solved using the mn x mn "vec" formulation (13.. Then the (unique) solution of the Sylvester equation AX+XB=C (13. say. +00): (with X(0) = C) on [0. +00): IHoo lim XU) .17. But [(Im <8>A) + (B TT ® In)] isisnonsingular ififand only ififitithas no zero eigenvalues. 77ie/i Theorem 13. Definition 13. the eigenvalues of [(/m <g> A) + (BT ® In)] are Ai A. Cm}.6).3. ii e n_.e. where A. They culminate in Theorem 13. But [(1m ® A) + (B (g) /„)] nonsingular and only has no zero eigenvalues.3..
.21. Then the Lyapunov equation e jRnxn.. where C Proof: asymptotically l3. Thus.ATT have A —A.19.23 solution Proof: Suppose A is asymptotically stable. A. If the matrix A E Wxn has eigenvalues A.20.I . using the solution X ((t) = elACe tB from Theorem 11. A matrix A E R"x" is asymptotically stable if and only if there exists a only if e jRnxn asymptotically if positive definite solution to the Lyapunov equation positive definite solution to the Lyapunov equation AX +XAT = C. TTzen r/ze AX+XAT =C (13. it can be shown easily that lim elA = lim elB = O. _* ]). then that solution is symmetric. 1>+00 1 . Let A. results = 0.12) Theorem 13. Many useful results exist concerning the relationship between stability and Lyapunov equations.6. .10) we have C t~+x /—<+3C = A (1+ 00 elACe lB dt) + (1+ o 00 elACe lB dt) B and so X and so X = 1o {+oo elACe lB dt satisfies (13. An equivalent condition for the existence of a unique solution to AX + AX + Remark XB = C is that [~ _cB ] be similar to [ J _°B ](via the similarity [~J _~ ]). a sufficient condition that guarantees that A and .23. Lef A. Theorem 13. Theorem 13. we have that lim X ((t) = 0.21 and 13. If matrix A e jRn xn eigenvalues )"" . Kronecker Products Using the results of Section 11. then . Remark 13. .23 a solution to (13.146 146 Chapter 13....C E R"x" and suppose further that A is asymptotically stable. Then the (unique) solution o/the Lyapunov equation of the AX+XAT=C can be written as can be written as (13.1. .. symmetric and ( 13.. the first of which follows immediately from Theorem 13.24. Then Then . .]. If C is has unique if and only if and —A T eigenvalues in common. .11) has a unique solution if and only if A and . Kronecker Products Chapter 13. sufficient —A common eigenvalues A asymptotically no common eigenvalues is that A be asymptotically stable.13) exists and takes the form (13. C e jRnxn further asymptotically stable.8).. X B = is that [ J _Cfi ] be similar to [~ _OB] (via the similarity [ Let Theorem 13.12).A T have no eigenvalues in common. Remark 13. then that solution is symmetric. By Theorems 13...19. (13.11) has a unique solution. Now let v be an arbitrary nonzero vector in jRn. C E R"x". Theorem Substituting in (13. —kn.6. Two basic results due to Lyapunov are the following. An.13) where C = C T < O.!„. v E". +00 r—>+oo t—v+oo X t ) = etACelB X t ) — O. If symmetric and (13.11) has a unique solution. . Hence.An.21 l3..AT has eigen— AT eigenvalues AI..22..
16) .25.11. vec(ABC) = (C T ® A)vec(B). suppose X = XT > 0 and let A E A (A) with corresponding left eigenConversely. Let A E Rmxn.26. suppose X = XT > 0 and let A. and C E Rmxq. B. defined.14) as Proof: Write (13. e jRrnxq.yr) = <8> x.26. we must have A + I = 2 Re A < 0 .11.25. e jRrnxn. 14) is unique if BB+ ® A+A = [.14) xp E jRn has a solution X e R. Since A was arbitrary. Application to Sylvester and Lyapunov Equations 13. in which the solution is of the form is of the form (13. For any three matrices A. B.3. Since A was arbitrary. where Y e Rnxp is arbitrary. Then vector y. e A(A) with corresponding left eigenvector y.14) as (B T ® A)vec(X) = vec(C) (13. Proof: The proof follows in a fairly straightforward fashion either directly from the definiProof: The proof follows in a fairly straightforward fashion either directly from the definitions or from the fact that vec(.15) of (13. we must have A + A = 2 R e A < O. The Lyapunov equation AX + XATT = C can also be written using the vec notation in the equivalent form vec notation in the equivalent form [(/ ® A) + (A ® l)]vec(X) = vec(C). the complexvalued equation H X X A = C is equivalent to However. D asymptotically stable. D tions or from the fact that vec(xyT) = y ® x. the integrand above is positive. result. The equivalent "vec form" of this equation is The equivalent "vec form" of this equation is [(/ ® AT) + (AT ® l)]vec(X) = + (AT ® l)]vec(X) = vec(C). The Lyapunov equation AX X A = C can also be written using the Remark 13. most of which derive from one key result. D Remark 13. Then the equation 13. Conversely. A must be Since yHXy > 0. most of which derive from one key The vec operator has many useful properties. the complexvalued equation AHX + XA = C is equivalent to [(/ ® AH) vec(C). the AXB =C (13.3. For any three matrices A. The Proof: Write (13. The solution of (13. D An immediate application is to the derivation of existence and uniqueness conditions An immediate application is to the derivation of existence and uniqueness conditions for the solution of the simple Sylvesterlike equation introduced in Theorem 6. Hence vT Xv > 0 and thus X is positive definite. The vec operator has many useful properties. Hence Since C > 0 and etA is nonsingular for all t. where Y E jRnxp is arbitrary. A must be asymptotically stable. Then 0> yHCy = yH AXy + yHXAT Y = (A + I)yH Xy. for the solution of the simple Sylvesterlike equation introduced in Theorem 6. and C for which the matrix product ABC is defined. A subtle point arises when dealing with the "dual" Lyapunov equation A T X X A A subtle point arises when dealing with the "dual" Lyapunov equation ATX + XA = C.27. nx p if and only if A A+CB+BB = C. B e jRPxq.27.14) is unique if BB+ ® A+ A = I. B E Rpx(}. Since yH Xy > 0. v TXv > 0 and thus X is positive definite. C. However. in which case the general solution has a if only ifAA + C B+ C. and C for which the matrix product ABC is Theorem 13. Theorem 13. Theorem 13.13. Application to Sylvester and Lyapunov Equations 147 147 Since — C > 0 and etA is nonsingular for all the integrand above is positive.t.
148 148
Chapter 1 3. Kronecker Products Chapter 13. Kronecker Products
by Theorem 13.26. This "vector equation" has a solution if and only if by Theorem 13.26. This "vector equation" has a solution if and only if
(B T ® A)(B T ® A)+ vec(C)
+
= vec(C).
+ +
It is a straightforward exercise to show that (M ® N) + = M+ ® N+.. Thus, (13.16) has aa It is a straightforward exercise to show that (M ® N) = M <8> N Thus, (13.16) has
solution if and only if solution if and only if vec(C)
=
(B T ® A)«B+{ ® A+)vec(C)
= [(B+ B{ ® AA+]vec(C)
= vec(AA +C B+ B)
and hence if and only if AA +CB+B = C. and hence if and only if AA+ C B+ B C. The general solution of (13 .16) is then given by The general solution of (13.16) is then given by vec(X) = (B T ® A) + vec(C)
+ [I 
(B T ® A) + (B T ® A)]vec(Y),
where Y is arbitrary. This equation can then be rewritten in the form where Y is arbitrary. This equation can then be rewritten in the form vec(X)
= «B+{
® A+)vec(C)
+ [I
 (BB+{ ® A+ A]vec(y)
or, using Theorem 13.26, or, using Theorem 13.26,
The solution is clearly unique if B B+ ® A + A ==I. The solution is clearly unique if BB+ <8> A+A I.
0 D
EXERCISES EXERCISES
I. For any two matrices A and B for which the indicated matrix product is defined, 1. For any two matrices A and B for which the indicated matrix product is defined, show that (vec(A»T(vec(fl)) = Tr(A T B). In particular, if B E Rn x n ,, then Tr(B) = show that (vec(A)) r (vec(B» = Tr(A r £). In particular, if B e lR nxn then Tr(fl) = vec(/J r vec(fl). vec(Inl vec(B). 2. Prove that for all matrices A and B, (A ® B)+ = A+ ® B+.. 2. Prove that for all matrices A and B, (A ® B)+ = A+ ® B+
3. Show that the equation AX B = C has a solution for all C if A has full row rank and 3. Show that the equation AX B = C has a solution for all C if A has full row rank and B has full column rank. Also, show that a solution, if it exists, is unique if A has full B has full column rank. Also, show that a solution, if it exists, is unique if A has full column rank and B has full row rank. What is the solution in this case? column rank and B has full row rank. What is the solution in this case? 4. Show that the general linear equation 4. Show that the general linear equation
k
LAiXBi =C
i=1
can be written in the form can be written in the form
[BT ® AI
+ ... + B[ ® Ak]vec(X) =
vec(C).
Exercises Exercises
149 149
5. Let x E ]Rm and y E E". Show that *rT ® yy==y X T T. x <8> € Mm e ]Rn. yx •
6. Let A e R" xn and £ e M m x m . (a) Show that IIA ® BII22 = IIAII2I1Blb. (a) Show that A <8> B = A2£2. (b) What is II A ® B II F in terms of the Frobenius norms of A and B? Justify your (b) What is A ® B\\F in terms of the Frobenius norms of A and B? Justify your answer carefully. answer carefully.
(c) What is the spectral radius of A ® B in terms of the spectral radii of A and B? of A <8> B in terms of the spectral radii of A and B? Justify your answer carefully. Justify your answer carefully. 7. Let A, 5 eR" x ". 7. Let A, B E ]Rnxn.
A)k = / <8> A* and (fl <g> l = B® I for all integers k. (a) Show that (l ® A)* = I ® Ak and (B ® I /)* =Bk fc ® / for all integers &. (/ l A (b) Show that el®A = I ® eeA and eB®1 7= eeB ® I./. e® <g) A and e5® = B (g)
(c) Show that the matrices I ® and (c) Show that the matrices / (8)AA andBB® I /commute. ® commute. (d) Show that (d) Show that
e AEIlB
= eU®A)+(B®l) = e B ® e A .
(Note: This result would look a little "nicer" had we defined our Kronecker (Note: This result would look a little "nicer" had we defined our Kronecker sum the other way around. However, Definition 13.14 is conventional in the 13.14 literature.)
8. Consider the Lyapunov matrix equation (13.11) with
A =
and C the symmetric matrix and C the symmetric matrix
[~ _~ ]
[~
Xs
Clearly Clearly
=
[~ ~ ]
[_~ ~
]
is a symmetric solution of the equation. Verify that is a symmetric solution of the equation. Verify that
Xns =
is also a solution and is nonsymmetric. Explain in light of Theorem 13.21. is also a solution and is nonsymmetric. Explain in light of Theorem 13.21. 9. Block Triangularization: Let 9. Block Triangularization: Let
A E ]Rn xn find similarity where A e Rnxn and D E ]Rm xm. It is desired to find a similarity transformation e Rmxm. of the form of the form
T=[~ ~J
such that T l1ST is block upper triangular. such that T ST is block upper triangular.
150 150 (a) Show that S is similar to
Chapter 13. Kronecker Products Chapter 13. Kronecker Products
[
A +OBX
B ] DXB
if X satisfies the socalled matrix Riccati equation if X satisfies the socalled matrix Riccati equation
CXA+DXXBX=O.
(b) Fonnulate a similar result for block lower triangularization of S. Formulate S.
to. Block 10. Block Diagonalization: Let
S=
[~ ~
l
where A E Rnxn and D E R m x m . It is desired to find a similarity transfonnation of e jRnxn E jRmxm. transformation of the fonn form
T=[~ ~]
such that T l1ST is block diagonal, T ST block diagonal. (a) Show that S is similar to
if Y satisfies the Sylvester equation Y
AY  YD = B.
(b) Formulate a similar result for block diagonalization of Fonnulate of