Matrix Analysis
for Scientists & Engineers
Matrix Analysis
for Scientists & Engineers
Alan J. Laub
University of California
Davis, California
SIAM
Copyright © 2005 by the Society for Industrial and Applied Mathematics.

10 9 8 7 6 5 4 3 2 1

All rights reserved. Printed in the United States of America. No part of this book may be reproduced, stored, or transmitted in any manner without the written permission of the publisher. For information, write to the Society for Industrial and Applied Mathematics, 3600 University City Science Center, Philadelphia, PA 19104-2688.

MATLAB® is a registered trademark of The MathWorks, Inc. For MATLAB product information, please contact The MathWorks, Inc., 3 Apple Hill Drive, Natick, MA 01760-2098 USA, 508-647-7000, Fax: 508-647-7101, info@mathworks.com, www.mathworks.com

Mathematica is a registered trademark of Wolfram Research, Inc.

Mathcad is a registered trademark of Mathsoft Engineering & Education, Inc.

Library of Congress Cataloging-in-Publication Data

Laub, Alan J., 1948-
Matrix analysis for scientists and engineers / Alan J. Laub.
p. cm.
Includes bibliographical references and index.
ISBN 0-89871-576-8 (pbk.)
1. Matrices. 2. Mathematical analysis. I. Title.

QA188.L38 2005
512.9'434--dc22
2004059962

About the cover: The original artwork featured on the cover was created by freelance artist Aaron Tallon of Philadelphia, PA. Used by permission.

SIAM is a registered trademark.
To my wife, Beverley
(who captivated me in the UBC math library
nearly forty years ago)
Contents

Preface

1 Introduction and Review
1.1 Some Notation and Terminology
1.2 Matrix Arithmetic
1.3 Inner Products and Orthogonality
1.4 Determinants

2 Vector Spaces
2.1 Definitions and Examples
2.2 Subspaces
2.3 Linear Independence
2.4 Sums and Intersections of Subspaces

3 Linear Transformations
3.1 Definition and Examples
3.2 Matrix Representation of Linear Transformations
3.3 Composition of Transformations
3.4 Structure of Linear Transformations
3.5 Four Fundamental Subspaces

4 Introduction to the Moore-Penrose Pseudoinverse
4.1 Definitions and Characterizations
4.2 Examples
4.3 Properties and Applications

5 Introduction to the Singular Value Decomposition
5.1 The Fundamental Theorem
5.2 Some Basic Properties
5.3 Row and Column Compressions

6 Linear Equations
6.1 Vector Linear Equations
6.2 Matrix Linear Equations
6.3 A More General Matrix Linear Equation
6.4 Some Useful and Interesting Inverses
7 Projections, Inner Product Spaces, and Norms
7.1 Projections
7.1.1 The four fundamental orthogonal projections
7.2 Inner Product Spaces
7.3 Vector Norms
7.4 Matrix Norms

8 Linear Least Squares Problems
8.1 The Linear Least Squares Problem
8.2 Geometric Solution
8.3 Linear Regression and Other Linear Least Squares Problems
8.3.1 Example: Linear regression
8.3.2 Other least squares problems
8.4 Least Squares and Singular Value Decomposition
8.5 Least Squares and QR Factorization

9 Eigenvalues and Eigenvectors
9.1 Fundamental Definitions and Properties
9.2 Jordan Canonical Form
9.3 Determination of the JCF
9.3.1 Theoretical computation
9.3.2 On the +1's in JCF blocks
9.4 Geometric Aspects of the JCF
9.5 The Matrix Sign Function

10 Canonical Forms
10.1 Some Basic Canonical Forms
10.2 Definite Matrices
10.3 Equivalence Transformations and Congruence
10.3.1 Block matrices and definiteness
10.4 Rational Canonical Form

11 Linear Differential and Difference Equations
11.1 Differential Equations
11.1.1 Properties of the matrix exponential
11.1.2 Homogeneous linear differential equations
11.1.3 Inhomogeneous linear differential equations
11.1.4 Linear matrix differential equations
11.1.5 Modal decompositions
11.1.6 Computation of the matrix exponential
11.2 Difference Equations
11.2.1 Homogeneous linear difference equations
11.2.2 Inhomogeneous linear difference equations
11.2.3 Computation of matrix powers
11.3 Higher-Order Equations
12 Generalized Eigenvalue Problems
12.1 The Generalized Eigenvalue/Eigenvector Problem
12.2 Canonical Forms
12.3 Application to the Computation of System Zeros
12.4 Symmetric Generalized Eigenvalue Problems
12.5 Simultaneous Diagonalization
12.5.1 Simultaneous diagonalization via SVD
12.6 Higher-Order Eigenvalue Problems
12.6.1 Conversion to first-order form

13 Kronecker Products
13.1 Definition and Examples
13.2 Properties of the Kronecker Product
13.3 Application to Sylvester and Lyapunov Equations

Bibliography

Index
Preface

This book is intended to be used as a text for beginning graduate-level (or even senior-level) students in engineering, the sciences, mathematics, computer science, or computational science who wish to be familiar with enough matrix analysis that they are prepared to use its tools and ideas comfortably in a variety of applications. By matrix analysis I mean linear algebra and matrix theory together with their intrinsic interaction with and application to linear dynamical systems (systems of linear differential or difference equations). The text can be used in a one-quarter or one-semester course to provide a compact overview of much of the important and useful mathematics that, in many cases, students meant to learn thoroughly as undergraduates, but somehow didn't quite manage to do. Certain topics that may have been treated cursorily in undergraduate courses are treated in more depth and more advanced material is introduced. I have tried throughout to emphasize only the more important and "useful" tools, methods, and mathematical structures. Instructors are encouraged to supplement the book with specific application examples from their own particular subject area.

The choice of topics covered in linear algebra and matrix theory is motivated both by applications and by computational utility and relevance. The concept of matrix factorization is emphasized throughout to provide a foundation for a later course in numerical linear algebra. Matrices are stressed more than abstract vector spaces, although Chapters 2 and 3 do cover some geometric (i.e., basis-free or subspace) aspects of many of the fundamental notions. The books by Meyer [18], Noble and Daniel [20], Ortega [21], and Strang [24] are excellent companion texts for this book. Upon completion of a course based on this text, the student is then well-equipped to pursue, either via formal courses or through self-study, follow-on topics on the computational side (at the level of [7], [11], [23], or [25], for example) or on the theoretical side (at the level of [12], [13], or [16], for example).

Prerequisites for using this text are quite modest: essentially just an understanding of calculus and definitely some previous exposure to matrices and linear algebra. Basic concepts such as determinants, singularity of matrices, eigenvalues and eigenvectors, and positive definite matrices should have been covered at least once, even though their recollection may occasionally be "hazy." However, requiring such material as prerequisite permits the early (but "out-of-order" by conventional standards) introduction of topics such as pseudoinverses and the singular value decomposition (SVD). These powerful and versatile tools can then be exploited to provide a unifying foundation upon which to base subsequent topics. Because tools such as the SVD are not generally amenable to "hand computation," this approach necessarily presupposes the availability of appropriate mathematical software on a digital computer. For this, I highly recommend MATLAB® although other software such as
Mathematica® or Mathcad® is also excellent. Since this text is not intended for a course in numerical linear algebra per se, the details of most of the numerical aspects of linear algebra are deferred to such a course.

The presentation of the material in this book is strongly influenced by computational issues for two principal reasons. First, "real-life" problems seldom yield to simple closed-form formulas or solutions. They must generally be solved computationally and it is important to know which types of algorithms can be relied upon and which cannot. Some of the key algorithms of numerical linear algebra, in particular, form the foundation upon which rests virtually all of modern scientific and engineering computation. A second motivation for a computational emphasis is that it provides many of the essential tools for what I call "qualitative mathematics." For example, in an elementary linear algebra course, a set of vectors is either linearly independent or it is not. This is an absolutely fundamental concept. But in most engineering or scientific contexts we want to know more than that. If a set of vectors is linearly independent, how "nearly dependent" are the vectors? If they are linearly dependent, are there "best" linearly independent subsets? These turn out to be much more difficult problems and frequently involve research-level questions when set in the context of the finite-precision, finite-range floating-point arithmetic environment of most modern computing platforms.

Some of the applications of matrix analysis mentioned briefly in this book derive from the modern state-space approach to dynamical systems. State-space methods are now standard in much of modern engineering where, for example, control systems with large numbers of interacting inputs, outputs, and states often give rise to models of very high order that must be analyzed, simulated, and evaluated. The "language" in which such models are conveniently described involves vectors and matrices. It is thus crucial to acquire a working knowledge of the vocabulary and grammar of this language. The tools of matrix analysis are also applied on a daily basis to problems in biology, chemistry, econometrics, physics, statistics, and a wide variety of other fields, and thus the text can serve a rather diverse audience. Mastery of the material in this text should enable the student to read and understand the modern language of matrices used throughout mathematics, science, and engineering.

While prerequisites for this text are modest, and while most material is developed from basic ideas in the book, the student does require a certain amount of what is conventionally referred to as "mathematical maturity." Proofs are given for many theorems. When they are not given explicitly, they are either obvious or easily found in the literature. This is ideal material from which to learn a bit about mathematical proofs and the mathematical maturity and insight gained thereby. It is my firm conviction that such maturity is neither encouraged nor nurtured by relegating the mathematical aspects of applications (for example, linear algebra for elementary state-space theory) to an appendix or introducing it "on-the-fly" when necessary. Rather, one must lay a firm foundation upon which subsequent applications and perspectives can be built in a logical, consistent, and coherent fashion.

I have taught this material for many years, many times at UCSB and twice at UC Davis, and the course has proven to be remarkably successful at enabling students from disparate backgrounds to acquire a quite acceptable level of mathematical maturity and rigor for subsequent graduate studies in a variety of disciplines. Indeed, many students who completed the course, especially the first few times it was offered, remarked afterward that if only they had had this course before they took linear systems, or signal processing,
or estimation theory, etc., they would have been able to concentrate on the new ideas
they wanted to learn, rather than having to spend time making up for deficiencies in their
background in matrices and linear algebra. My fellow instructors, too, realized that by
requiring this course as a prerequisite, they no longer had to provide as much time for
"review" and could focus instead on the subject at hand. The concept seems to work.
AJL, June 2004
Chapter 1

Introduction and Review

1.1 Some Notation and Terminology

We begin with a brief introduction to some standard notation and terminology to be used throughout the text. This is followed by a review of some basic notions in matrix analysis and linear algebra.

The following sets appear frequently throughout subsequent chapters:

1. $\mathbb{R}^n$ = the set of $n$-tuples of real numbers represented as column vectors. Thus, $x \in \mathbb{R}^n$ means
$$x = \begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix},$$
where $x_i \in \mathbb{R}$ for $i \in \underline{n}$.

Henceforth, the notation $\underline{n}$ denotes the set $\{1, \ldots, n\}$.

Note: Vectors are always column vectors. A row vector is denoted by $y^T$, where $y \in \mathbb{R}^n$ and the superscript $T$ is the transpose operation. That a vector is always a column vector rather than a row vector is entirely arbitrary, but this convention makes it easy to recognize immediately throughout the text that, e.g., $x^T y$ is a scalar while $x y^T$ is an $n \times n$ matrix.

2. $\mathbb{C}^n$ = the set of $n$-tuples of complex numbers represented as column vectors.

3. $\mathbb{R}^{m \times n}$ = the set of real (or real-valued) $m \times n$ matrices.

4. $\mathbb{R}^{m \times n}_r$ = the set of real $m \times n$ matrices of rank $r$. Thus, $\mathbb{R}^{n \times n}_n$ denotes the set of real nonsingular $n \times n$ matrices.

5. $\mathbb{C}^{m \times n}$ = the set of complex (or complex-valued) $m \times n$ matrices.

6. $\mathbb{C}^{m \times n}_r$ = the set of complex $m \times n$ matrices of rank $r$.
We now classify some of the more familiar "shaped" matrices. A matrix $A \in \mathbb{R}^{n \times n}$ (or $A \in \mathbb{C}^{n \times n}$) is

• diagonal if $a_{ij} = 0$ for $i \neq j$.
• upper triangular if $a_{ij} = 0$ for $i > j$.
• lower triangular if $a_{ij} = 0$ for $i < j$.
• tridiagonal if $a_{ij} = 0$ for $|i - j| > 1$.
• pentadiagonal if $a_{ij} = 0$ for $|i - j| > 2$.
• upper Hessenberg if $a_{ij} = 0$ for $i - j > 1$.
• lower Hessenberg if $a_{ij} = 0$ for $j - i > 1$.

Each of the above also has a "block" analogue obtained by replacing scalar components in the respective definitions by block submatrices. For example, if $A \in \mathbb{R}^{n \times n}$, $B \in \mathbb{R}^{n \times m}$, and $C \in \mathbb{R}^{m \times m}$, then the $(m+n) \times (m+n)$ matrix $\begin{bmatrix} A & B \\ 0 & C \end{bmatrix}$ is block upper triangular.

The transpose of a matrix $A$ is denoted by $A^T$ and is the matrix whose $(i,j)$th entry is the $(j,i)$th entry of $A$, that is, $(A^T)_{ij} = a_{ji}$. Note that if $A \in \mathbb{R}^{m \times n}$, then $A^T \in \mathbb{R}^{n \times m}$. If $A \in \mathbb{C}^{m \times n}$, then its Hermitian transpose (or conjugate transpose) is denoted by $A^H$ (or sometimes $A^*$) and its $(i,j)$th entry is $(A^H)_{ij} = \bar{a}_{ji}$, where the bar indicates complex conjugation; i.e., if $z = \alpha + j\beta$ ($j = i = \sqrt{-1}$), then $\bar{z} = \alpha - j\beta$. A matrix $A$ is symmetric if $A = A^T$ and Hermitian if $A = A^H$. We henceforth adopt the convention that, unless otherwise noted, an equation like $A = A^T$ implies that $A$ is real-valued while a statement like $A = A^H$ implies that $A$ is complex-valued.

Remark 1.1. While $\sqrt{-1}$ is most commonly denoted by $i$ in mathematics texts, $j$ is the more common notation in electrical engineering and system theory. There is some advantage to being conversant with both notations. The notation $j$ is used throughout the text but reminders are placed at strategic locations.

Example 1.2.

1. $A = \begin{bmatrix} 5 & 7 \\ 7 & 2 \end{bmatrix}$ is symmetric (and Hermitian).

2. $A = \begin{bmatrix} 5 & 7+j \\ 7+j & 2 \end{bmatrix}$ is complex-valued symmetric but not Hermitian.

3. $A = \begin{bmatrix} 5 & 7+j \\ 7-j & 2 \end{bmatrix}$ is Hermitian (but not symmetric).

Transposes of block matrices can be defined in an obvious way. For example, it is easy to see that if $A_{ij}$ are appropriately dimensioned subblocks, then
$$\begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}^T = \begin{bmatrix} A_{11}^T & A_{21}^T \\ A_{12}^T & A_{22}^T \end{bmatrix}.$$
1.2 Matrix Arithmetic

It is assumed that the reader is familiar with the fundamental notions of matrix addition, multiplication of a matrix by a scalar, and multiplication of matrices.

A special case of matrix multiplication occurs when the second matrix is a column vector $x$, i.e., the matrix-vector product $Ax$. A very important way to view this product is to interpret it as a weighted sum (linear combination) of the columns of $A$. That is, suppose
$$A = [a_1, \ldots, a_n] \in \mathbb{R}^{m \times n} \text{ with } a_i \in \mathbb{R}^m \text{ and } x = \begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix}.$$
Then
$$Ax = x_1 a_1 + \cdots + x_n a_n \in \mathbb{R}^m.$$
The importance of this interpretation cannot be overemphasized. As a numerical example, take $A = \begin{bmatrix} 9 & 8 & 7 \\ 6 & 5 & 4 \end{bmatrix}$ and $x = \begin{bmatrix} 3 \\ 2 \\ 1 \end{bmatrix}$. Then we can quickly calculate dot products of the rows of $A$ with the column $x$ to find $Ax = \begin{bmatrix} 50 \\ 32 \end{bmatrix}$, but this matrix-vector product can also be computed via
$$3 \begin{bmatrix} 9 \\ 6 \end{bmatrix} + 2 \begin{bmatrix} 8 \\ 5 \end{bmatrix} + 1 \begin{bmatrix} 7 \\ 4 \end{bmatrix}.$$
For large arrays of numbers, there can be important computer-architecture-related advantages to preferring the latter calculation method.

For matrix multiplication, suppose $A \in \mathbb{R}^{m \times n}$ and $B = [b_1, \ldots, b_p] \in \mathbb{R}^{n \times p}$ with $b_i \in \mathbb{R}^n$. Then the matrix product $AB$ can be thought of as above, applied $p$ times:
$$AB = [Ab_1, \ldots, Ab_p].$$

There is also an alternative, but equivalent, formulation of matrix multiplication that appears frequently in the text and is presented below as a theorem. Again, its importance cannot be overemphasized. It is deceptively simple and its full understanding is well rewarded.

Theorem 1.3. Let $U = [u_1, \ldots, u_n] \in \mathbb{R}^{m \times n}$ with $u_i \in \mathbb{R}^m$ and $V = [v_1, \ldots, v_n] \in \mathbb{R}^{p \times n}$ with $v_i \in \mathbb{R}^p$. Then
$$U V^T = \sum_{i=1}^n u_i v_i^T \in \mathbb{R}^{m \times p}.$$

If matrices $C$ and $D$ are compatible for multiplication, recall that $(CD)^T = D^T C^T$ (or $(CD)^H = D^H C^H$). This gives a dual to the matrix-vector result above. Namely, if $C \in \mathbb{R}^{m \times n}$ has row vectors $c_j^T \in \mathbb{R}^{1 \times n}$, and is premultiplied by a row vector $y^T \in \mathbb{R}^{1 \times m}$, then the product can be written as a weighted linear sum of the rows of $C$ as follows:
$$y^T C = y_1 c_1^T + \cdots + y_m c_m^T \in \mathbb{R}^{1 \times n}.$$
Theorem 1.3 can then also be generalized to its "row dual." The details are left to the reader.
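Both views of the matrix-vector product, together with the outer-product formulation of Theorem 1.3, can be sketched in a few lines of NumPy (an illustrative aside; the random sizes in the second half are arbitrary choices, not from the text):

```python
import numpy as np

A = np.array([[9, 8, 7],
              [6, 5, 4]])
x = np.array([3, 2, 1])

# Row-oriented view: dot products of the rows of A with x.
print(A @ x)                                    # [50 32]

# Column-oriented view: Ax as a weighted sum of the columns of A.
print(3 * A[:, 0] + 2 * A[:, 1] + 1 * A[:, 2])  # [50 32]

# Theorem 1.3: U V^T equals the sum of rank-one outer products u_i v_i^T.
rng = np.random.default_rng(0)
U = rng.standard_normal((4, 3))
V = rng.standard_normal((5, 3))
outer_sum = sum(np.outer(U[:, i], V[:, i]) for i in range(3))
print(np.allclose(U @ V.T, outer_sum))          # True
```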
1.3 Inner Products and Orthogonality

For vectors $x, y \in \mathbb{R}^n$, the Euclidean inner product (or inner product, for short) of $x$ and $y$ is given by
$$\langle x, y \rangle := x^T y = \sum_{i=1}^n x_i y_i.$$
Note that the inner product is a scalar.

If $x, y \in \mathbb{C}^n$, we define their complex Euclidean inner product (or inner product, for short) by
$$\langle x, y \rangle_c := x^H y = \sum_{i=1}^n \bar{x}_i y_i.$$
Note that $\langle x, y \rangle_c = \overline{\langle y, x \rangle_c}$, i.e., the order in which $x$ and $y$ appear in the complex inner product is important. The more conventional definition of the complex inner product is $\langle x, y \rangle_c = y^H x = \sum_{i=1}^n x_i \bar{y}_i$ but throughout the text we prefer the symmetry with the real case.

Example 1.4. Let $x = \begin{bmatrix} 1 \\ j \end{bmatrix}$ and $y = \begin{bmatrix} 1 \\ 2 \end{bmatrix}$. Then
$$\langle x, y \rangle_c = x^H y = \begin{bmatrix} 1 & -j \end{bmatrix} \begin{bmatrix} 1 \\ 2 \end{bmatrix} = 1 - 2j,$$
while
$$\langle y, x \rangle_c = y^H x = \begin{bmatrix} 1 & 2 \end{bmatrix} \begin{bmatrix} 1 \\ j \end{bmatrix} = 1 + 2j,$$
and we see that, indeed, $\langle x, y \rangle_c = \overline{\langle y, x \rangle_c}$.

Note that $x^T x = 0$ if and only if $x = 0$ when $x \in \mathbb{R}^n$ but that this is not true if $x \in \mathbb{C}^n$. What is true in the complex case is that $x^H x = 0$ if and only if $x = 0$. To illustrate, consider the nonzero vector $x$ above. Then $x^T x = 0$ but $x^H x = 2$.

Two nonzero vectors $x, y \in \mathbb{R}^n$ are said to be orthogonal if their inner product is zero, i.e., $x^T y = 0$. Nonzero complex vectors are orthogonal if $x^H y = 0$. If $x$ and $y$ are orthogonal and $x^T x = 1$ and $y^T y = 1$, then we say that $x$ and $y$ are orthonormal. A matrix $A \in \mathbb{R}^{n \times n}$ is an orthogonal matrix if $A^T A = A A^T = I$, where $I$ is the $n \times n$ identity matrix. The notation $I_n$ is sometimes used to denote the identity matrix in $\mathbb{R}^{n \times n}$ (or $\mathbb{C}^{n \times n}$). Similarly, a matrix $A \in \mathbb{C}^{n \times n}$ is said to be unitary if $A^H A = A A^H = I$. Clearly an orthogonal or unitary matrix has orthonormal rows and orthonormal columns. There is no special name attached to a nonsquare matrix $A \in \mathbb{R}^{m \times n}$ (or $\in \mathbb{C}^{m \times n}$) with orthonormal rows or columns.
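A short NumPy sketch of Example 1.4 and of the $x^T x$ versus $x^H x$ distinction (note that np.vdot conjugates its first argument, matching the convention $\langle x, y \rangle_c = x^H y$ used here):

```python
import numpy as np

x = np.array([1, 1j])
y = np.array([1, 2])

print(np.vdot(x, y))   # (1-2j), i.e., <x, y>_c = x^H y
print(np.vdot(y, x))   # (1+2j), the complex conjugate of the above

# For the nonzero complex vector x: x^T x = 0 but x^H x = 2.
print(x @ x)           # 0j
print(np.vdot(x, x))   # (2+0j)
```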
1.4 Determinants

It is assumed that the reader is familiar with the basic theory of determinants. For $A \in \mathbb{R}^{n \times n}$ (or $A \in \mathbb{C}^{n \times n}$) we use the notation $\det A$ for the determinant of $A$. We list below some of the more useful properties of determinants. Note that this is not a minimal set, i.e., several properties are consequences of one or more of the others.
1. If $A$ has a zero row or if any two rows of $A$ are equal, then $\det A = 0$.

2. If $A$ has a zero column or if any two columns of $A$ are equal, then $\det A = 0$.

3. Interchanging two rows of $A$ changes only the sign of the determinant.

4. Interchanging two columns of $A$ changes only the sign of the determinant.

5. Multiplying a row of $A$ by a scalar $\alpha$ results in a new matrix whose determinant is $\alpha \det A$.

6. Multiplying a column of $A$ by a scalar $\alpha$ results in a new matrix whose determinant is $\alpha \det A$.

7. Multiplying a row of $A$ by a scalar and then adding it to another row does not change the determinant.

8. Multiplying a column of $A$ by a scalar and then adding it to another column does not change the determinant.

9. $\det A^T = \det A$ ($\det A^H = \overline{\det A}$ if $A \in \mathbb{C}^{n \times n}$).

10. If $A$ is diagonal, then $\det A = a_{11} a_{22} \cdots a_{nn}$, i.e., $\det A$ is the product of its diagonal elements.

11. If $A$ is upper triangular, then $\det A = a_{11} a_{22} \cdots a_{nn}$.

12. If $A$ is lower triangular, then $\det A = a_{11} a_{22} \cdots a_{nn}$.

13. If $A$ is block diagonal (or block upper triangular or block lower triangular), with square diagonal blocks $A_{11}, A_{22}, \ldots, A_{nn}$ (of possibly different sizes), then $\det A = \det A_{11} \det A_{22} \cdots \det A_{nn}$.

14. If $A, B \in \mathbb{R}^{n \times n}$, then $\det(AB) = \det A \det B$.

15. If $A \in \mathbb{R}^{n \times n}_n$, then $\det(A^{-1}) = \frac{1}{\det A}$.

16. If $A \in \mathbb{R}^{n \times n}$ and $D \in \mathbb{R}^{m \times m}$, then $\det \begin{bmatrix} A & B \\ C & D \end{bmatrix} = \det A \det(D - C A^{-1} B)$.
Proof: This follows easily from the block LU factorization
$$\begin{bmatrix} A & B \\ C & D \end{bmatrix} = \begin{bmatrix} I & 0 \\ C A^{-1} & I \end{bmatrix} \begin{bmatrix} A & B \\ 0 & D - C A^{-1} B \end{bmatrix}.$$

17. If $A \in \mathbb{R}^{n \times n}$ and $D \in \mathbb{R}^{m \times m}$, then $\det \begin{bmatrix} A & B \\ C & D \end{bmatrix} = \det D \det(A - B D^{-1} C)$.
Proof: This follows easily from the block UL factorization
$$\begin{bmatrix} A & B \\ C & D \end{bmatrix} = \begin{bmatrix} I & B D^{-1} \\ 0 & I \end{bmatrix} \begin{bmatrix} A - B D^{-1} C & 0 \\ C & D \end{bmatrix}.$$
Proof: This follows easily from the block UL factorization
5 1.4. Determinants 5
the more useful properties of determinants. Note that this is not a minimal set, i.e., several
properties are consequences of one or more of the others.
1. If A has a zero row or if any two rows of A are equal, then det A = o.
2. If A has a zero column or if any two columns of A are equal, then det A = O.
3. Interchanging two rows of A changes only the sign of the determinant.
4. Interchanging two columns of A changes only the sign of the determinant.
5. Multiplying a row of A by a scalar ex results in a new matrix whose determinant is
exdetA.
6. Multiplying a column of A by a scalar ex results in a new matrix whose determinant
is ex det A.
7. Multiplying a row of A by a scalar and then adding it to another row does not change
the determinant.
8. Multiplying a column of A by a scalar and then adding it to another column does not
change the determinant.
9. detAT = detA (detA
H
= detA if A E C"X").
10. If A is diagonal, then det A = alla22 ... ann, i.e., det A is the product of its diagonal
elements.
11. If A is upper triangular, then det A = all a22 ... a"n.
12. If A is lower triangUlar, then det A = alla22 ... ann.
13. If A is block diagonal (or block upper triangular or block lower triangular), with
square diagonal blocks A 11, A
22
, ... , An" (of possibly different sizes), then det A =
det A 11 det A22 ... det Ann.
14. If A, B E IR
nxn
, then det(AB) = det A det B.
15. If A E then det(A
1
) = de: A .
16. If A E and DE IR
mxm
, then det = detA det(D  CA
1
B).
Proof" This follows easily from the block LU factorization
] [
17. If A E IR
nxn
and D E then det = det D det(A  B D
1
C).
Proof" This follows easily from the block UL factorization
BD
1
I
] [
Remark 1.5. The factorization of a matrix $A$ into the product of a unit lower triangular matrix $L$ (i.e., lower triangular with all 1's on the diagonal) and an upper triangular matrix $U$ is called an LU factorization; see, for example, [24]. Another such factorization is UL where $U$ is unit upper triangular and $L$ is lower triangular. The factorizations used above are block analogues of these.

Remark 1.6. The matrix $D - C A^{-1} B$ is called the Schur complement of $A$ in $\begin{bmatrix} A & B \\ C & D \end{bmatrix}$. Similarly, $A - B D^{-1} C$ is the Schur complement of $D$ in $\begin{bmatrix} A & B \\ C & D \end{bmatrix}$.

EXERCISES

1. If $A \in \mathbb{R}^{n \times n}$ and $\alpha$ is a scalar, what is $\det(\alpha A)$? What is $\det(-A)$?

2. If $A$ is orthogonal, what is $\det A$? If $A$ is unitary, what is $\det A$?

3. Let $x, y \in \mathbb{R}^n$. Show that $\det(I - x y^T) = 1 - y^T x$.

4. Let $U_1, U_2, \ldots, U_k \in \mathbb{R}^{n \times n}$ be orthogonal matrices. Show that the product $U = U_1 U_2 \cdots U_k$ is an orthogonal matrix.

5. Let $A \in \mathbb{R}^{n \times n}$. The trace of $A$, denoted $\mathrm{Tr}\,A$, is defined as the sum of its diagonal elements, i.e., $\mathrm{Tr}\,A = \sum_{i=1}^n a_{ii}$.

(a) Show that the trace is a linear function; i.e., if $A, B \in \mathbb{R}^{n \times n}$ and $\alpha, \beta \in \mathbb{R}$, then $\mathrm{Tr}(\alpha A + \beta B) = \alpha\,\mathrm{Tr}\,A + \beta\,\mathrm{Tr}\,B$.

(b) Show that $\mathrm{Tr}(AB) = \mathrm{Tr}(BA)$, even though in general $AB \neq BA$.

(c) Let $S \in \mathbb{R}^{n \times n}$ be skew-symmetric, i.e., $S^T = -S$. Show that $\mathrm{Tr}\,S = 0$. Then either prove the converse or provide a counterexample.

6. A matrix $A \in \mathbb{R}^{n \times n}$ is said to be idempotent if $A^2 = A$.

(a) Show that the matrix
$$A = \frac{1}{2} \begin{bmatrix} 2\cos^2\theta & \sin 2\theta \\ \sin 2\theta & 2\sin^2\theta \end{bmatrix}$$
is idempotent for all $\theta$.

(b) Suppose $A \in \mathbb{R}^{n \times n}$ is idempotent and $A \neq I$. Show that $A$ must be singular.
Chapter 2

Vector Spaces

In this chapter we give a brief review of some of the basic concepts of vector spaces. The emphasis is on finite-dimensional vector spaces, including spaces formed by special classes of matrices, but some infinite-dimensional examples are also cited. An excellent reference for this and the next chapter is [10], where some of the proofs that are not given here may be found.

2.1 Definitions and Examples

Definition 2.1. A field is a set $\mathbb{F}$ together with two operations $+,\ \cdot : \mathbb{F} \times \mathbb{F} \to \mathbb{F}$ such that

(A1) $\alpha + (\beta + \gamma) = (\alpha + \beta) + \gamma$ for all $\alpha, \beta, \gamma \in \mathbb{F}$.
(A2) there exists an element $0 \in \mathbb{F}$ such that $\alpha + 0 = \alpha$ for all $\alpha \in \mathbb{F}$.
(A3) for all $\alpha \in \mathbb{F}$, there exists an element $(-\alpha) \in \mathbb{F}$ such that $\alpha + (-\alpha) = 0$.
(A4) $\alpha + \beta = \beta + \alpha$ for all $\alpha, \beta \in \mathbb{F}$.
(M1) $\alpha \cdot (\beta \cdot \gamma) = (\alpha \cdot \beta) \cdot \gamma$ for all $\alpha, \beta, \gamma \in \mathbb{F}$.
(M2) there exists an element $1 \in \mathbb{F}$ such that $\alpha \cdot 1 = \alpha$ for all $\alpha \in \mathbb{F}$.
(M3) for all $\alpha \in \mathbb{F}$, $\alpha \neq 0$, there exists an element $\alpha^{-1} \in \mathbb{F}$ such that $\alpha \cdot \alpha^{-1} = 1$.
(M4) $\alpha \cdot \beta = \beta \cdot \alpha$ for all $\alpha, \beta \in \mathbb{F}$.
(D) $\alpha \cdot (\beta + \gamma) = \alpha \cdot \beta + \alpha \cdot \gamma$ for all $\alpha, \beta, \gamma \in \mathbb{F}$.

Axioms (A1)-(A3) state that $(\mathbb{F}, +)$ is a group and an abelian group if (A4) also holds. Axioms (M1)-(M4) state that $(\mathbb{F} \setminus \{0\}, \cdot)$ is an abelian group.

Generally speaking, when no confusion can arise, the multiplication operator "$\cdot$" is not written explicitly.
Example 2.2.

1. $\mathbb{R}$ with ordinary addition and multiplication is a field.

2. $\mathbb{C}$ with ordinary complex addition and multiplication is a field.

3. $\mathbb{R}_a[x]$ = the field of rational functions in the indeterminate $x$
$$= \left\{ \frac{\alpha_0 + \alpha_1 x + \cdots + \alpha_p x^p}{\beta_0 + \beta_1 x + \cdots + \beta_q x^q} \ : \ \alpha_i, \beta_i \in \mathbb{R};\ p, q \in \mathbb{Z}^+ \right\},$$
where $\mathbb{Z}^+ = \{0, 1, 2, \ldots\}$, is a field.

4. $\mathbb{R}^{m \times n}_r$ = $\{$ $m \times n$ matrices of rank $r$ with real coefficients $\}$ is clearly not a field since, for example, (M1) does not hold unless $m = n$. Moreover, $\mathbb{R}^{n \times n}_n$ is not a field either since (M4) does not hold in general (although the other 8 axioms hold).

Definition 2.3. A vector space over a field $\mathbb{F}$ is a set $\mathcal{V}$ together with two operations $+ : \mathcal{V} \times \mathcal{V} \to \mathcal{V}$ and $\cdot : \mathbb{F} \times \mathcal{V} \to \mathcal{V}$ such that

(V1) $(\mathcal{V}, +)$ is an abelian group.
(V2) $(\alpha \cdot \beta) \cdot v = \alpha \cdot (\beta \cdot v)$ for all $\alpha, \beta \in \mathbb{F}$ and for all $v \in \mathcal{V}$.
(V3) $(\alpha + \beta) \cdot v = \alpha \cdot v + \beta \cdot v$ for all $\alpha, \beta \in \mathbb{F}$ and for all $v \in \mathcal{V}$.
(V4) $\alpha \cdot (v + w) = \alpha \cdot v + \alpha \cdot w$ for all $\alpha \in \mathbb{F}$ and for all $v, w \in \mathcal{V}$.
(V5) $1 \cdot v = v$ for all $v \in \mathcal{V}$ ($1 \in \mathbb{F}$).

A vector space is denoted by $(\mathcal{V}, \mathbb{F})$ or, when there is no possibility of confusion as to the underlying field, simply by $\mathcal{V}$.

Remark 2.4. Note that $+$ and $\cdot$ in Definition 2.3 are different from the $+$ and $\cdot$ in Definition 2.1 in the sense of operating on different objects in different sets. In practice, this causes no confusion and the $\cdot$ operator is usually not even written explicitly.

Example 2.5.

1. $(\mathbb{R}^n, \mathbb{R})$ with addition defined by
$$x + y = \begin{bmatrix} x_1 + y_1 \\ \vdots \\ x_n + y_n \end{bmatrix}$$
and scalar multiplication defined by
$$\alpha x = \begin{bmatrix} \alpha x_1 \\ \vdots \\ \alpha x_n \end{bmatrix}$$
is a vector space. Similar definitions hold for $(\mathbb{C}^n, \mathbb{C})$.
2. $(\mathbb{R}^{m \times n}, \mathbb{R})$ is a vector space with addition defined by
$$A + B = \begin{bmatrix} \alpha_{11}+\beta_{11} & \alpha_{12}+\beta_{12} & \cdots & \alpha_{1n}+\beta_{1n} \\ \alpha_{21}+\beta_{21} & \alpha_{22}+\beta_{22} & \cdots & \alpha_{2n}+\beta_{2n} \\ \vdots & & & \vdots \\ \alpha_{m1}+\beta_{m1} & \alpha_{m2}+\beta_{m2} & \cdots & \alpha_{mn}+\beta_{mn} \end{bmatrix}$$
and scalar multiplication defined by
$$\gamma A = \begin{bmatrix} \gamma\alpha_{11} & \gamma\alpha_{12} & \cdots & \gamma\alpha_{1n} \\ \gamma\alpha_{21} & \gamma\alpha_{22} & \cdots & \gamma\alpha_{2n} \\ \vdots & & & \vdots \\ \gamma\alpha_{m1} & \gamma\alpha_{m2} & \cdots & \gamma\alpha_{mn} \end{bmatrix}.$$

3. Let $(\mathcal{V}, \mathbb{F})$ be an arbitrary vector space and $\mathcal{D}$ be an arbitrary set. Let $\Phi(\mathcal{D}, \mathcal{V})$ be the set of functions $f$ mapping $\mathcal{D}$ to $\mathcal{V}$. Then $\Phi(\mathcal{D}, \mathcal{V})$ is a vector space with addition defined by
$$(f + g)(d) = f(d) + g(d) \quad \text{for all } d \in \mathcal{D} \text{ and for all } f, g \in \Phi$$
and scalar multiplication defined by
$$(\alpha f)(d) = \alpha f(d) \quad \text{for all } \alpha \in \mathbb{F}, \text{ for all } d \in \mathcal{D}, \text{ and for all } f \in \Phi.$$

Special Cases:

(a) $\mathcal{D} = [t_0, t_1]$, $(\mathcal{V}, \mathbb{F}) = (\mathbb{R}^n, \mathbb{R})$, and the functions are piecewise continuous $=: (PC[t_0, t_1])^n$ or continuous $=: (C[t_0, t_1])^n$.

(b) $\mathcal{D} = [t_0, +\infty)$, $(\mathcal{V}, \mathbb{F}) = (\mathbb{R}^n, \mathbb{R})$, etc.

4. Let $A \in \mathbb{R}^{n \times n}$. Then $\{x(t) : \dot{x}(t) = Ax(t)\}$ is a vector space (of dimension $n$).
2.2 Subspaces

Definition 2.6. Let $(\mathcal{V}, \mathbb{F})$ be a vector space and let $\mathcal{W} \subseteq \mathcal{V}$, $\mathcal{W} \neq \emptyset$. Then $(\mathcal{W}, \mathbb{F})$ is a subspace of $(\mathcal{V}, \mathbb{F})$ if and only if $(\mathcal{W}, \mathbb{F})$ is itself a vector space or, equivalently, if and only if $(\alpha w_1 + \beta w_2) \in \mathcal{W}$ for all $\alpha, \beta \in \mathbb{F}$ and for all $w_1, w_2 \in \mathcal{W}$.

Remark 2.7. The latter characterization of a subspace is often the easiest way to check or prove that something is indeed a subspace (or vector space); i.e., verify that the set in question is closed under addition and scalar multiplication. Note, too, that since $0 \in \mathbb{F}$, this implies that the zero vector must be in any subspace.

Notation: When the underlying field is understood, we write $\mathcal{W} \subseteq \mathcal{V}$, and the symbol $\subseteq$, when used with vector spaces, is henceforth understood to mean "is a subspace of." The less restrictive meaning "is a subset of" is specifically flagged as such.

Example 2.8.

1. Consider $(\mathcal{V}, \mathbb{F}) = (\mathbb{R}^{n \times n}, \mathbb{R})$ and let $\mathcal{W} = \{A \in \mathbb{R}^{n \times n} : A \text{ is symmetric}\}$. Then $\mathcal{W} \subseteq \mathcal{V}$.
Proof: Suppose $A_1, A_2$ are symmetric. Then it is easily shown that $\alpha A_1 + \beta A_2$ is symmetric for all $\alpha, \beta \in \mathbb{R}$.

2. Let $\mathcal{W} = \{A \in \mathbb{R}^{n \times n} : A \text{ is orthogonal}\}$. Then $\mathcal{W}$ is not a subspace of $\mathbb{R}^{n \times n}$.

3. Consider $(\mathcal{V}, \mathbb{F}) = (\mathbb{R}^2, \mathbb{R})$ and for each $v \in \mathbb{R}^2$ of the form $v = \begin{bmatrix} v_1 \\ v_2 \end{bmatrix}$ identify $v_1$ with the $x$-coordinate in the plane and $v_2$ with the $y$-coordinate. For $\alpha, \beta \in \mathbb{R}$, define
$$\mathcal{W}_{\alpha,\beta} = \left\{ v : v = \begin{bmatrix} c \\ \alpha c + \beta \end{bmatrix} ; \ c \in \mathbb{R} \right\}.$$
Then $\mathcal{W}_{\alpha,\beta}$ is a subspace of $\mathcal{V}$ if and only if $\beta = 0$. As an interesting exercise, sketch $\mathcal{W}_{2,1}$, $\mathcal{W}_{2,0}$, $\mathcal{W}_{1/2,1}$, and $\mathcal{W}_{1/2,0}$. Note, too, that the vertical line through the origin (i.e., $\alpha = \infty$) is also a subspace.

All lines through the origin are subspaces. Shifted subspaces $\mathcal{W}_{\alpha,\beta}$ with $\beta \neq 0$ are called linear varieties.
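Remark 2.7's closure test makes Example 2.8 concrete: symmetric matrices are closed under linear combinations, while orthogonal matrices are not even closed under addition. A minimal NumPy illustration (the particular matrices are arbitrary choices):

```python
import numpy as np

# Item 1: a linear combination of symmetric matrices is symmetric.
A1 = np.array([[1.0, 2.0], [2.0, 3.0]])
A2 = np.array([[0.0, 5.0], [5.0, -1.0]])
W = 2.0 * A1 - 3.0 * A2
print(np.allclose(W, W.T))              # True

# Item 2: the sum of orthogonal matrices need not be orthogonal.
Q = np.eye(2)                           # orthogonal
S = Q + Q                               # = 2I
print(np.allclose(S.T @ S, np.eye(2)))  # False
```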
Henceforth, we drop the explicit dependence of a vector space on an underlying field. Thus, $\mathcal{V}$ usually denotes a vector space with the underlying field generally being $\mathbb{R}$ unless explicitly stated otherwise.

Definition 2.9. If $\mathcal{R}$ and $\mathcal{S}$ are vector spaces (or subspaces), then $\mathcal{R} = \mathcal{S}$ if and only if $\mathcal{R} \subseteq \mathcal{S}$ and $\mathcal{S} \subseteq \mathcal{R}$.

Note: To prove two vector spaces are equal, one usually proves the two inclusions separately: An arbitrary $r \in \mathcal{R}$ is shown to be an element of $\mathcal{S}$ and then an arbitrary $s \in \mathcal{S}$ is shown to be an element of $\mathcal{R}$.

2.3 Linear Independence

Let $X = \{v_1, v_2, \ldots\}$ be a nonempty collection of vectors $v_i$ in some vector space $\mathcal{V}$.

Definition 2.10. $X$ is a linearly dependent set of vectors if and only if there exist $k$ distinct elements $v_1, \ldots, v_k \in X$ and scalars $\alpha_1, \ldots, \alpha_k$ not all zero such that
$$\alpha_1 v_1 + \cdots + \alpha_k v_k = 0.$$
$X$ is a linearly independent set of vectors if and only if for any collection of $k$ distinct elements $v_1, \ldots, v_k$ of $X$ and for any scalars $\alpha_1, \ldots, \alpha_k$,
$$\alpha_1 v_1 + \cdots + \alpha_k v_k = 0 \quad \text{implies} \quad \alpha_1 = 0, \ldots, \alpha_k = 0.$$
Example 2.11.
1. Let V = ℝ^3. Then
{[1; 0; 0], [1; 1; 0], [1; 1; 1]}
is a linearly independent set. Why? However,
{v1, v2, v3} = {[1; 0; 0], [1; 1; 0], [-1; 1; 0]}
is a linearly dependent set (since 2v1 − v2 + v3 = 0).
2. Let A ∈ ℝ^{n×n} and B ∈ ℝ^{n×m}. Then consider the rows of e^{tA}B as vectors in C^m[t0, t1]
(recall that e^{tA} denotes the matrix exponential, which is discussed in more detail in
Chapter 11). Independence of these vectors turns out to be equivalent to a concept
called controllability, to be studied further in what follows.

Let vi ∈ ℝ^n, i ∈ k, and consider the matrix V = [v1, ..., vk] ∈ ℝ^{n×k}. The linear
dependence of this set of vectors is equivalent to the existence of a nonzero vector a ∈ ℝ^k
such that Va = 0. An equivalent condition for linear dependence is that the k × k matrix
V^T V is singular. If the set of vectors is independent, and there exists a ∈ ℝ^k such that
Va = 0, then a = 0. An equivalent condition for linear independence is that the matrix
V^T V is nonsingular.
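The V^T V test is easy to carry out numerically. The following is a minimal sketch in Python with NumPy (the language, library, and particular test vectors are illustrative assumptions, not part of the original text):

import numpy as np

# Columns of V are the vectors under test; here v3 = v2 - 2*v1, so the
# set is linearly dependent (2*v1 - v2 + v3 = 0).
v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([1.0, 1.0, 0.0])
v3 = v2 - 2 * v1
V = np.column_stack([v1, v2, v3])

gram = V.T @ V                            # the k x k matrix V^T V
print(np.linalg.matrix_rank(gram))        # 2 < k = 3, so V^T V is singular
print(np.linalg.matrix_rank(V))           # equivalently, rank(V) < k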
Definition 2.12. Let X = {v1, v2, ...} be a collection of vectors vi ∈ V. Then the span of
X is defined as
Sp(X) = Sp{v1, v2, ...} = {v : v = α1 v1 + ... + αk vk ; αi ∈ F, vi ∈ X, k ∈ N},
where N = {1, 2, ...}.

Example 2.13. Let V = ℝ^n and define
e1 = [1; 0; ...; 0], e2 = [0; 1; ...; 0], ..., en = [0; 0; ...; 1].
Then Sp{e1, e2, ..., en} = ℝ^n.

Definition 2.14. A set of vectors X is a basis for V if and only if
1. X is a linearly independent set (of basis vectors), and
2. Sp(X) = V.
Example 2.15. {e1, ..., en} is a basis for ℝ^n (sometimes called the natural basis).

Now let b1, ..., bn be a basis (with a specific order associated with the basis vectors)
for V. Then for all v ∈ V there exists a unique n-tuple {ξ1, ..., ξn} such that
v = ξ1 b1 + ... + ξn bn = Bx,
where
B = [b1, ..., bn], x = [ξ1; ...; ξn].

Definition 2.16. The scalars {ξi} are called the components (or sometimes the coordinates)
of v with respect to the basis {b1, ..., bn} and are unique. We say that the vector x of
components represents the vector v with respect to the basis B.

Example 2.17. In ℝ^n,
v = [v1; ...; vn] = v1 e1 + v2 e2 + ... + vn en.
We can also determine components of v with respect to another basis. For example, while
[1; 2] = 1·e1 + 2·e2,
with respect to the basis
{[-1; 2], [1; -1]}
we have
[1; 2] = 3·[-1; 2] + 4·[1; -1].
To see this, write
[1; 2] = x1·[-1; 2] + x2·[1; -1] = [-1 1; 2 -1][x1; x2].
Then
[x1; x2] = [-1 1; 2 -1]^{-1} [1; 2] = [3; 4].
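In NumPy the components x of v with respect to a basis B solve Bx = v; a minimal sketch (Python/NumPy assumed, using the basis from the example above):

import numpy as np

B = np.array([[-1.0,  1.0],
              [ 2.0, -1.0]])    # columns are the basis vectors b1, b2
v = np.array([1.0, 2.0])

x = np.linalg.solve(B, v)       # components of v with respect to {b1, b2}
print(x)                        # [3. 4.], i.e., v = 3*b1 + 4*b2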
Theorem 2.18. The number of elements in a basis of a vector space is independent of the
particular basis considered.

Definition 2.19. If a basis X for a vector space V (≠ 0) has n elements, V is said to
be n-dimensional or have dimension n and we write dim(V) = n or dim V = n. For
consistency, and because the 0 vector is in any vector space, we define dim(0) = 0. A
vector space V is finite-dimensional if there exists a basis X with n < +∞ elements;
otherwise, V is infinite-dimensional.

Thus, Theorem 2.18 says that dim(V) = the number of elements in a basis.

Example 2.20.
1. dim(ℝ^n) = n.
2. dim(ℝ^{m×n}) = mn.
Note: Check that a basis for ℝ^{m×n} is given by the mn matrices Eij, i ∈ m, j ∈ n,
where Eij is a matrix all of whose elements are 0 except for a 1 in the (i, j)th location.
The collection of Eij matrices can be called the "natural basis matrices."
3. dim(C[t0, t1]) = +∞.
4. dim{A ∈ ℝ^{n×n} : A = A^T} = n(n + 1)/2.
(To see why, determine n(n + 1)/2 symmetric basis matrices.)
5. dim{A ∈ ℝ^{n×n} : A is upper (lower) triangular} = n(n + 1)/2.

2.4 Sums and Intersections of Subspaces
Definition 2.21. Let (V, F) be a vector space and let R, S ⊆ V. The sum and intersection
of R and S are defined respectively by:
1. R + S = {r + s : r ∈ R, s ∈ S}.
2. R ∩ S = {v : v ∈ R and v ∈ S}.

Theorem 2.22.
1. R + S ⊆ V (in general, R1 + ... + Rk =: Σ_{i=1}^{k} Ri ⊆ V, for finite k).
2. R ∩ S ⊆ V (in general, ∩_{α∈A} R_α ⊆ V for an arbitrary index set A).

Remark 2.23. The union of two subspaces, R ∪ S, is not necessarily a subspace.

Definition 2.24. T = R ⊕ S is the direct sum of R and S if
1. R ∩ S = 0, and
2. R + S = T (in general, Ri ∩ (Σ_{j≠i} Rj) = 0 and Σ Ri = T).
The subspaces R and S are said to be complements of each other in T.
Remark 2.25. The complement of R (or S) is not unique. For example, consider V = ℝ^2
and let R be any line through the origin. Then any other distinct line through the origin is
a complement of R. Among all the complements there is a unique one orthogonal to R.
We discuss more about orthogonal complements elsewhere in the text.

Theorem 2.26. Suppose T = R ⊕ S. Then
1. every t ∈ T can be written uniquely in the form t = r + s with r ∈ R and s ∈ S.
2. dim(T) = dim(R) + dim(S).

Proof: To prove the first part, suppose an arbitrary vector t ∈ T can be written in two ways
as t = r1 + s1 = r2 + s2, where r1, r2 ∈ R and s1, s2 ∈ S. Then r1 − r2 = s2 − s1. But
r1 − r2 ∈ R and s2 − s1 ∈ S. Since R ∩ S = 0, we must have r1 = r2 and s1 = s2, from
which uniqueness follows.
The statement of the second part is a special case of the next theorem. □

Theorem 2.27. For arbitrary subspaces R, S of a vector space V,
dim(R + S) = dim(R) + dim(S) − dim(R ∩ S).

Example 2.28. Let U be the subspace of upper triangular matrices in ℝ^{n×n} and let L be the
subspace of lower triangular matrices in ℝ^{n×n}. Then it may be checked that U + L = ℝ^{n×n}
while U ∩ L is the set of diagonal matrices in ℝ^{n×n}. Using the fact that dim{diagonal
matrices} = n, together with Examples 2.20.2 and 2.20.5, one can easily verify the validity
of the formula given in Theorem 2.27.

Example 2.29. Let (V, F) = (ℝ^{n×n}, ℝ), let R be the set of skew-symmetric matrices in
ℝ^{n×n}, and let S be the set of symmetric matrices in ℝ^{n×n}. Then V = R ⊕ S.
Proof: This follows easily from the fact that any A ∈ ℝ^{n×n} can be written in the form
A = (1/2)(A + A^T) + (1/2)(A − A^T).
The first matrix on the right-hand side above is in S while the second is in R.
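A quick numerical illustration of this splitting (a sketch in Python/NumPy; the library and the particular test matrix are assumptions for illustration):

import numpy as np

A = np.arange(9.0).reshape(3, 3)      # any square matrix
S = 0.5 * (A + A.T)                   # symmetric part (lies in S)
R = 0.5 * (A - A.T)                   # skew-symmetric part (lies in R)

assert np.allclose(S, S.T)            # S is symmetric
assert np.allclose(R, -R.T)           # R is skew-symmetric
assert np.allclose(A, S + R)          # the splitting recovers A exactly

By Theorem 2.26 the decomposition is unique, so these are the only such pieces.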
EXERCISES
1. Suppose {v1, ..., vk} is a linearly dependent set. Then show that one of the vectors
must be a linear combination of the others.
2. Let x1, x2, ..., xk ∈ ℝ^n be nonzero mutually orthogonal vectors. Show that {x1, ...,
xk} must be a linearly independent set.
3. Let v1, ..., vn be orthonormal vectors in ℝ^n. Show that Av1, ..., Avn are also
orthonormal if and only if A ∈ ℝ^{n×n} is orthogonal.
4. Consider the vectors v1 = [2; 1] and v2 = [3; 1]. Prove that v1 and v2 form a basis
for ℝ^2. Find the components of the vector v = [4; 1] with respect to this basis.
5. Let P denote the set of polynomials of degree less than or equal to two of the form
p0 + p1 x + p2 x^2, where p0, p1, p2 ∈ ℝ. Show that P is a vector space over ℝ. Show
that the polynomials 1, x, and 2x^2 − 1 are a basis for P. Find the components of the
polynomial 2 + 3x + 4x^2 with respect to this basis.
6. Prove Theorem 2.22 (for the case of two subspaces R and S only).
7. Let P^n denote the vector space of polynomials of degree less than or equal to n, and of
the form p(x) = p0 + p1 x + ... + pn x^n, where the coefficients pi are all real. Let PE
denote the subspace of all even polynomials in P^n, i.e., those that satisfy the property
p(−x) = p(x). Similarly, let PO denote the subspace of all odd polynomials, i.e.,
those satisfying p(−x) = −p(x). Show that P^n = PE ⊕ PO.
8. Repeat Example 2.28 using instead the two subspaces T of tridiagonal matrices and
U of upper triangular matrices.
Chapter 3
Linear Transformations

3.1 Definition and Examples
We begin with the basic definition of a linear transformation (or linear map, linear function,
or linear operator) between two vector spaces.

Definition 3.1. Let (V, F) and (W, F) be vector spaces. Then L : V → W is a linear
transformation if and only if
L(αv1 + βv2) = αLv1 + βLv2 for all α, β ∈ F and for all v1, v2 ∈ V.
The vector space V is called the domain of the transformation L while W, the space into
which it maps, is called the codomain.

Example 3.2.
1. Let F = ℝ and take V = W = PC[t0, +∞). Define L : PC[t0, +∞) → PC[t0, +∞) by
v(t) ↦ w(t) = (Lv)(t) = ∫_{t0}^{t} e^{−(t−τ)} v(τ) dτ.
2. Let F = ℝ and take V = W = ℝ^{m×n}. Fix M ∈ ℝ^{m×m}. Define L : ℝ^{m×n} → ℝ^{m×n} by
X ↦ Y = LX = MX.
3. Let F = ℝ and take V = P^n = {p(x) = a0 + a1 x + ... + an x^n : ai ∈ ℝ} and
W = P^{n−1}. Define L : V → W by Lp = p′, where ′ denotes differentiation with
respect to x.
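Each of these maps is linear because the defining operation distributes over addition and commutes with scalars. A minimal numerical check of Example 3.2.2 (Python/NumPy and the random test data are illustrative assumptions):

import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((3, 3))            # the fixed matrix M in R^{m x m}
X1, X2 = rng.standard_normal((2, 3, 4))    # two arbitrary m x n "vectors"
a, b = 2.0, -0.5

L = lambda X: M @ X                        # L : X -> MX
assert np.allclose(L(a * X1 + b * X2), a * L(X1) + b * L(X2))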
3.2 Matrix Representation of Linear Transformations
Linear transformations between vector spaces with specific bases can be represented con-
veniently in matrix form. Specifically, suppose L : (V, F) → (W, F) is linear and further
suppose that {vi, i ∈ n} and {wj, j ∈ m} are bases for V and W, respectively. Then the
ith column of A = Mat L (the matrix representation of L with respect to the given bases
for V and W) is the representation of Lvi with respect to {wj, j ∈ m}. In other words,
A = [a11 ... a1n; ... ; am1 ... amn] ∈ ℝ^{m×n}
represents L since
Lvi = a1i w1 + ... + ami wm = W ai,
where W = [w1, ..., wm] and ai = [a1i; ...; ami] is the ith column of A. Note that A = Mat L
depends on the particular bases for V and W. This could be reflected by subscripts, say, in
the notation, but this is usually not done.

The action of L on an arbitrary vector v ∈ V is uniquely determined (by linearity)
by its action on a basis. Thus, if v = ξ1 v1 + ... + ξn vn = Vx (where v, and hence x, is
arbitrary), then
LVx = Lv = ξ1 Lv1 + ... + ξn Lvn = WAx.
Thus, LV = WA since x was arbitrary.

When V = ℝ^n, W = ℝ^m and {vi, i ∈ n}, {wj, j ∈ m} are the usual (natural) bases,
the equation LV = WA becomes simply L = A. We thus commonly identify A as a linear
transformation with its matrix representation, i.e., we write A : ℝ^n → ℝ^m, x ↦ Ax.
Thinking of A both as a matrix and as a linear transformation from ℝ^n to ℝ^m usually causes no
confusion. Change of basis then corresponds naturally to appropriate matrix multiplication.
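As a concrete instance, the differentiation operator of Example 3.2.3 has an explicit matrix with respect to the monomial bases {1, x, ..., x^n} of P^n and {1, x, ..., x^{n−1}} of P^{n−1}. A sketch (Python/NumPy assumed; mat_diff is a hypothetical helper name):

import numpy as np

def mat_diff(n):
    # n x (n+1) matrix of d/dx : P^n -> P^{n-1} w.r.t. monomial bases
    A = np.zeros((n, n + 1))
    for i in range(1, n + 1):
        A[i - 1, i] = i                  # d/dx of x^i is i*x^(i-1)
    return A

# p(x) = 1 + 2x + 3x^2 has component vector [1, 2, 3]; p'(x) = 2 + 6x.
print(mat_diff(2) @ np.array([1.0, 2.0, 3.0]))   # [2. 6.]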
3.3 Composition of Transformations
Consider three vector spaces U, V, and W and transformations B from U to V and A from
V to W. Then we can define a new transformation C as follows:

U --B--> V --A--> W, with C = AB : U → W.

The above diagram illustrates the composition of transformations C = AB. Note that in
most texts, the arrows above are reversed as follows:

W <--A-- V <--B-- U.

However, it might be useful to prefer the former since the transformations A and B appear
in the same order in both the diagram and the equation. If dim U = p, dim V = n,
and dim W = m, and if we associate matrices with the transformations in the usual way,
then composition of transformations corresponds to standard matrix multiplication. That is,
we have C = AB, where C is m × p, A is m × n, and B is n × p. The above is sometimes
expressed componentwise by the formula
c_ij = Σ_{k=1}^{n} a_ik b_kj.

Two Special Cases:
Inner Product: Let x, y ∈ ℝ^n. Then their inner product is the scalar
x^T y = Σ_{i=1}^{n} x_i y_i.
Outer Product: Let x ∈ ℝ^m, y ∈ ℝ^n. Then their outer product is the m × n
matrix
x y^T = [x_i y_j].
Note that any rank-one matrix A ∈ ℝ^{m×n} can be written in the form A = x y^T
above (or x y^H if A ∈ ℂ^{m×n}). A rank-one symmetric matrix can be written in
the form x x^T (or x x^H).
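The two special cases are one-line computations; a minimal sketch (Python/NumPy and the sample vectors assumed for illustration):

import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0])

inner = x @ y                          # scalar x^T y = sum_i x_i*y_i
outer = np.outer(x, y)                 # m x n matrix x y^T
print(inner)                           # 32.0
print(np.linalg.matrix_rank(outer))    # 1: every nonzero outer product has rank one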
3.4 Structure of Linear Transformations
Let A : V → W be a linear transformation.

Definition 3.3. The range of A, denoted R(A), is the set {w ∈ W : w = Av for some v ∈ V}.
Equivalently, R(A) = {Av : v ∈ V}. The range of A is also known as the image of A and
denoted Im(A).
The nullspace of A, denoted N(A), is the set {v ∈ V : Av = 0}. The nullspace of
A is also known as the kernel of A and denoted Ker(A).

Theorem 3.4. Let A : V → W be a linear transformation. Then
1. R(A) ⊆ W.
2. N(A) ⊆ V.
Note that N(A) and R(A) are, in general, subspaces of different spaces.

Theorem 3.5. Let A ∈ ℝ^{m×n}. If A is written in terms of its columns as A = [a1, ..., an],
then
R(A) = Sp{a1, ..., an}.
Proof: The proof of this theorem is easy, essentially following immediately from the defi-
nition. □

Remark 3.6. Note that in Theorem 3.5 and throughout the text, the same symbol (A) is
used to denote both a linear transformation and its matrix representation with respect to the
usual (natural) bases. See also the last paragraph of Section 3.2.

Definition 3.7. Let {v1, ..., vk} be a set of nonzero vectors vi ∈ ℝ^n. The set is said to
be orthogonal if vi^T vj = 0 for i ≠ j and orthonormal if vi^T vj = δij, where δij is the
Kronecker delta defined by
δij = 1 if i = j and δij = 0 if i ≠ j.

Example 3.8.
1. {[1; 1], [1; -1]} is an orthogonal set.
2. {[1/√2; 1/√2], [1/√2; -1/√2]} is an orthonormal set.
3. If {v1, ..., vk} with vi ∈ ℝ^n is an orthogonal set, then
{v1/√(v1^T v1), ..., vk/√(vk^T vk)} is an orthonormal set.
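Item 3 of Example 3.8 is the usual normalization step; a sketch (Python/NumPy assumed, using an orthogonal set as in item 1):

import numpy as np

V = np.array([[1.0,  1.0],
              [1.0, -1.0]])             # columns form an orthogonal set
Q = V / np.linalg.norm(V, axis=0)       # divide each v_i by sqrt(v_i^T v_i)
print(Q.T @ Q)                          # the identity: columns are now orthonormal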
Definition 3.9. Let S ⊆ ℝ^n. Then the orthogonal complement of S is defined as the set
S⊥ = {v ∈ ℝ^n : v^T s = 0 for all s ∈ S}.

Example 3.10. Let
S = Sp{[3; 5; 7], [-4; 1; 1]}.
Then it can be shown that
S⊥ = Sp{[2; 31; -23]}.
Working from the definition, the computation involved is simply to find all nontrivial (i.e.,
nonzero) solutions of the system of equations
3x1 + 5x2 + 7x3 = 0,
-4x1 + x2 + x3 = 0.
Note that there is nothing special about the two vectors in the basis defining S being or-
thogonal. Any set of vectors will do, including dependent spanning vectors (which would,
of course, then give rise to redundant equations).
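Numerically, S⊥ is the nullspace of the matrix whose rows are the spanning vectors of S; a sketch using the SVD (Python/NumPy assumed):

import numpy as np

M = np.array([[ 3.0, 5.0, 7.0],         # rows span S; solutions of Mx = 0
              [-4.0, 1.0, 1.0]])        # make up S-perp
_, s, Vt = np.linalg.svd(M)
r = np.linalg.matrix_rank(M)
null_basis = Vt[r:]                     # rows of V^T past rank(M) span S-perp

print(null_basis)                       # proportional (up to sign) to [2, 31, -23]
print(M @ null_basis.T)                 # ~0: orthogonal to everything in S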
Theorem 3.11. Let R, S ⊆ ℝ^n. Then
1. S⊥ ⊆ ℝ^n.
2. S ⊕ S⊥ = ℝ^n.
3. (S⊥)⊥ = S.
4. R ⊆ S if and only if S⊥ ⊆ R⊥.
5. (R + S)⊥ = R⊥ ∩ S⊥.
6. (R ∩ S)⊥ = R⊥ + S⊥.

Proof: We prove and discuss only item 2 here. The proofs of the other results are left as
exercises. Let {v1, ..., vk} be an orthonormal basis for S and let x ∈ ℝ^n be an arbitrary
vector. Set
x1 = Σ_{i=1}^{k} (x^T vi) vi,
x2 = x − x1.
Then x1 ∈ S and, since
x2^T vj = x^T vj − x1^T vj = x^T vj − x^T vj = 0,
we see that x2 is orthogonal to v1, ..., vk and hence to any linear combination of these
vectors. In other words, x2 is orthogonal to any vector in S. We have thus shown that
S + S⊥ = ℝ^n. We also have that S ∩ S⊥ = 0 since the only vector s ∈ S orthogonal to
everything in S (i.e., including itself) is 0.

It is also easy to see directly that, when we have such direct sum decompositions, we
can write vectors in a unique way with respect to the corresponding subspaces. Suppose,
for example, that x = x1 + x2 = x1′ + x2′, where x1, x1′ ∈ S and x2, x2′ ∈ S⊥. Then
(x1′ − x1)^T (x2′ − x2) = 0 by definition of S⊥. But then (x1′ − x1)^T (x1′ − x1) = 0 since
x2′ − x2 = −(x1′ − x1) (which follows by rearranging the equation x1 + x2 = x1′ + x2′). Thus,
x1 = x1′ and x2 = x2′. □
Theorem 3.12. Let A : ℝ^n → ℝ^m. Then
1. N(A)⊥ = R(A^T). (Note: This holds only for finite-dimensional vector spaces.)
2. R(A)⊥ = N(A^T). (Note: This also holds for infinite-dimensional vector spaces.)

Proof: To prove the first part, take an arbitrary x ∈ N(A). Then Ax = 0 and this is
equivalent to y^T Ax = 0 for all y. But y^T Ax = (A^T y)^T x. Thus, Ax = 0 if and only if x
is orthogonal to all vectors of the form A^T y, i.e., x ∈ R(A^T)⊥. Since x was arbitrary, we
have established that N(A) = R(A^T)⊥; taking orthogonal complements of both sides and
using part 3 of Theorem 3.11 then yields N(A)⊥ = R(A^T).
The proof of the second part is similar and is left as an exercise. □
Definition 3.13. Let A : ℝ^n → ℝ^m. Then {v ∈ ℝ^n : Av = 0} is sometimes called the
right nullspace of A. Similarly, {w ∈ ℝ^m : w^T A = 0} is called the left nullspace of A.
Clearly, the right nullspace is N(A) while the left nullspace is N(A^T).

Theorem 3.12 and part 2 of Theorem 3.11 can be combined to give two very fun-
damental and useful decompositions of vectors in the domain and codomain of a linear
transformation A. See also Theorem 2.26.

Theorem 3.14 (Decomposition Theorem). Let A : ℝ^n → ℝ^m. Then
1. every vector v in the domain space ℝ^n can be written in a unique way as v = x + y,
where x ∈ N(A) and y ∈ N(A)⊥ = R(A^T) (i.e., ℝ^n = N(A) ⊕ R(A^T)).
2. every vector w in the codomain space ℝ^m can be written in a unique way as w = x + y,
where x ∈ R(A) and y ∈ R(A)⊥ = N(A^T) (i.e., ℝ^m = R(A) ⊕ N(A^T)).

This key theorem becomes very easy to remember by carefully studying and under-
standing Figure 3.1 in the next section.
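The decomposition in part 1 can be computed with orthogonal projectors; a minimal sketch (Python/NumPy assumed; np.linalg.pinv, used here only to build the projector, is the pseudoinverse introduced in Chapter 4):

import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 5))        # a generic 3 x 5 matrix
v = rng.standard_normal(5)

P_row = np.linalg.pinv(A) @ A          # orthogonal projector onto R(A^T)
y = P_row @ v                          # y in R(A^T) = N(A)-perp
x = v - y                              # x in N(A)

assert np.allclose(A @ x, 0)           # x really lies in the nullspace
assert np.allclose(x @ y, 0)           # the two pieces are orthogonal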
3.5 Four Fundamental Subspaces
Consider a general matrix A ∈ ℝ_r^{m×n}. When thought of as a linear transformation from ℝ^n
to ℝ^m, many properties of A can be developed in terms of the four fundamental subspaces
R(A), R(A)⊥, N(A), and N(A)⊥. Figure 3.1 makes many key properties seem almost
obvious and we return to this figure frequently both in the context of linear transformations
and in illustrating concepts such as controllability and observability.

[Figure 3.1. Four fundamental subspaces. The figure depicts A carrying N(A)⊥ (of dimension r) one-to-one and onto R(A) (of dimension r), carrying N(A) (of dimension n − r) to {0}, with ℝ^n = N(A)⊥ ⊕ N(A) on the domain side and ℝ^m = R(A) ⊕ N(A^T) (where dim N(A^T) = m − r) on the codomain side.]

Definition 3.15. Let V and W be vector spaces and let A : V → W be a linear transfor-
mation.
1. A is onto (also called epic or surjective) if R(A) = W.
2. A is one-to-one or 1-1 (also called monic or injective) if N(A) = 0. Two equivalent
characterizations of A being 1-1 that are often easier to verify in practice are the
following:
(a) Av1 = Av2 ⟹ v1 = v2.
(b) v1 ≠ v2 ⟹ Av1 ≠ Av2.

Definition 3.16. Let A : ℝ^n → ℝ^m. Then rank(A) = dim R(A). This is sometimes called
the column rank of A (maximum number of independent columns). The row rank of A is
dim R(A^T) (maximum number of independent rows). The dual notion to rank is the nullity
of A, sometimes denoted nullity(A) or corank(A), and is defined as dim N(A).

Theorem 3.17. Let A : ℝ^n → ℝ^m. Then dim R(A) = dim N(A)⊥. (Note: Since
N(A)⊥ = R(A^T), this theorem is sometimes colloquially stated "row rank of A = column
rank of A.")

Proof: Define a linear transformation T : N(A)⊥ → R(A) by
Tv = Av for all v ∈ N(A)⊥.
Clearly T is 1-1 (since N(T) = 0). To see that T is also onto, take any w ∈ R(A). Then
by definition there is a vector x ∈ ℝ^n such that Ax = w. Write x = x1 + x2, where
x1 ∈ N(A)⊥ and x2 ∈ N(A). Then Ax1 = w = Tx1 since x1 ∈ N(A)⊥. The last equality
shows that T is onto. We thus have that dim R(A) = dim N(A)⊥ since it is easily shown
that if {v1, ..., vr} is a basis for N(A)⊥, then {Tv1, ..., Tvr} is a basis for R(A). Finally, if
we apply this and several previous results, the following string of equalities follows easily:
"column rank of A" = rank(A) = dim R(A) = dim N(A)⊥ = dim R(A^T) = rank(A^T) =
"row rank of A." □

The following corollary is immediate. Like the theorem, it is a statement about equality
of dimensions; the subspaces themselves are not necessarily in the same vector space.

Corollary 3.18. Let A : ℝ^n → ℝ^m. Then dim N(A) + dim R(A) = n, where n is the
dimension of the domain of A.

Proof: From Theorems 3.11 and 3.17 we see immediately that
n = dim N(A) + dim N(A)⊥ = dim N(A) + dim R(A). □
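These dimension counts are easy to confirm numerically; a sketch (Python/NumPy assumed, with a matrix forced to be rank deficient):

import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 6))
A[3] = A[0] + A[1]                         # force rank(A) = 3

r = np.linalg.matrix_rank(A)
n = A.shape[1]
print(r, n - r)                            # rank(A) = 3 and nullity(A) = n - r = 3
print(np.linalg.matrix_rank(A.T) == r)     # True: row rank equals column rank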
For completeness, we include here a few miscellaneous results about ranks of sums
and products of matrices.

Theorem 3.19. Let A, B ∈ ℝ^{n×n}. Then
1. 0 ≤ rank(A + B) ≤ rank(A) + rank(B).
2. rank(A) + rank(B) − n ≤ rank(AB) ≤ min{rank(A), rank(B)}.
3. nullity(B) ≤ nullity(AB) ≤ nullity(A) + nullity(B).
4. if B is nonsingular, rank(AB) = rank(BA) = rank(A) and N(BA) = N(A).

Part 4 of Theorem 3.19 suggests looking at the general problem of the four fundamental
subspaces of matrix products. The basic results are contained in the following easily proved
theorem.
Theorem 3.20. Let A ∈ ℝ^{m×n}, B ∈ ℝ^{n×p}. Then
1. R(AB) ⊆ R(A).
2. N(AB) ⊇ N(B).
3. R((AB)^T) ⊆ R(B^T).
4. N((AB)^T) ⊇ N(A^T).

The next theorem is closely related to Theorem 3.20 and is also easily proved. It
is extremely useful in text that follows, especially when dealing with pseudoinverses and
linear least squares problems.

Theorem 3.21. Let A ∈ ℝ^{m×n}. Then
1. R(A) = R(AA^T).
2. R(A^T) = R(A^T A).
3. N(A) = N(A^T A).
4. N(A^T) = N(AA^T).

We now characterize 1-1 and onto transformations and provide characterizations in
terms of rank and invertibility.

Theorem 3.22. Let A : ℝ^n → ℝ^m. Then
1. A is onto if and only if rank(A) = m (A has linearly independent rows or is said to
have full row rank; equivalently, AA^T is nonsingular).
2. A is 1-1 if and only if rank(A) = n (A has linearly independent columns or is said
to have full column rank; equivalently, A^T A is nonsingular).

Proof: Proof of part 1: If A is onto, dim R(A) = m = rank(A). Conversely, let y ∈ ℝ^m
be arbitrary. Let x = A^T (AA^T)^{-1} y ∈ ℝ^n. Then y = Ax, i.e., y ∈ R(A), so A is onto.
Proof of part 2: If A is 1-1, then N(A) = 0, which implies that dim N(A)⊥ = n =
dim R(A^T), and hence dim R(A) = n by Theorem 3.17. Conversely, suppose Ax1 = Ax2.
Then A^T Ax1 = A^T Ax2, which implies x1 = x2 since A^T A is invertible. Thus, A is
1-1. □

Definition 3.23. A : V → W is invertible (or bijective) if and only if it is 1-1 and onto.
Note that if A is invertible, then dim V = dim W. Also, A : ℝ^n → ℝ^n is invertible or
nonsingular if and only if rank(A) = n.

Note that in the special case when A ∈ ℝ_n^{n×n}, the transformations A, A^T, and A^{-1}
are all 1-1 and onto between the two spaces N(A)⊥ and R(A). The transformations A^T
and A^{-1} have the same domain and range but are in general different maps unless A is
orthogonal. Similar remarks apply to A and A^{-T}.
If a linear transformation is not invertible, it may still be right or left invertible. Defi-
nitions of these concepts are followed by a theorem characterizing left and right invertible
transformations.

Definition 3.24. Let A : V → W. Then
1. A is said to be right invertible if there exists a right inverse transformation A^{-R} :
W → V such that AA^{-R} = I_W, where I_W denotes the identity transformation on W.
2. A is said to be left invertible if there exists a left inverse transformation A^{-L} : W →
V such that A^{-L}A = I_V, where I_V denotes the identity transformation on V.

Theorem 3.25. Let A : V → W. Then
1. A is right invertible if and only if it is onto.
2. A is left invertible if and only if it is 1-1.
Moreover, A is invertible if and only if it is both right and left invertible, i.e., both 1-1 and
onto, in which case A^{-1} = A^{-R} = A^{-L}.

Note: From Theorem 3.22 we see that if A : ℝ^n → ℝ^m is onto, then a right inverse
is given by A^{-R} = A^T (AA^T)^{-1}. Similarly, if A is 1-1, then a left inverse is given by
A^{-L} = (A^T A)^{-1} A^T.
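These closed forms are directly computable; a minimal sketch (Python/NumPy assumed, with small full-rank examples):

import numpy as np

A = np.array([[1.0, 2.0]])                  # onto: full row rank
A_R = A.T @ np.linalg.inv(A @ A.T)          # right inverse A^T (A A^T)^{-1}
print(A @ A_R)                              # [[1.]], i.e., I_W

B = A.T                                     # 1-1: full column rank
B_L = np.linalg.inv(B.T @ B) @ B.T          # left inverse (B^T B)^{-1} B^T
print(B_L @ B)                              # [[1.]], i.e., I_V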
Theorem 3.26. Let A : V → V.
1. If there exists a unique right inverse A^{-R} such that AA^{-R} = I, then A is invertible.
2. If there exists a unique left inverse A^{-L} such that A^{-L}A = I, then A is invertible.

Proof: We prove the first part and leave the proof of the second to the reader. Notice the
following:
A(A^{-R} + A^{-R}A − I) = AA^{-R} + AA^{-R}A − A
                      = I + IA − A    since AA^{-R} = I
                      = I.
Thus, (A^{-R} + A^{-R}A − I) must be a right inverse and, therefore, by uniqueness it must be
the case that A^{-R} + A^{-R}A − I = A^{-R}. But this implies that A^{-R}A = I, i.e., that A^{-R} is
a left inverse. It then follows from Theorem 3.25 that A is invertible. □

Example 3.27.
1. Let A = [1 2] : ℝ^2 → ℝ^1. Then A is onto. (Proof: Take any α ∈ ℝ^1; then one
can always find v ∈ ℝ^2 such that [1 2][v1; v2] = α.) Obviously A has full row rank
(= 1) and A^{-R} = [-1; 1] is a right inverse. Also, it is clear that there are infinitely many
right inverses for A. In Chapter 6 we characterize all right inverses of a matrix by
characterizing all solutions of the linear matrix equation AR = I.
2. Let A = [1; 2] : ℝ^1 → ℝ^2. Then A is 1-1. (Proof: The only solution to 0 = Av = [1; 2]v
is v = 0, whence N(A) = 0, so A is 1-1.) It is now obvious that A has full column
rank (= 1) and A^{-L} = [3 −1] is a left inverse. Again, it is clear that there are
infinitely many left inverses for A. In Chapter 6 we characterize all left inverses of a
matrix by characterizing all solutions of the linear matrix equation LA = I.
3. The matrix
A = [1 1; 2 1; 3 1],
when considered as a linear transformation on ℝ^3, is neither 1-1 nor onto. We give
below bases for its four fundamental subspaces.
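Bases for the four fundamental subspaces of any particular matrix can be read off from its SVD; the following sketch shows the computation (Python/NumPy assumed; the rank-one test matrix here is an illustrative stand-in, not the matrix of the example above):

import numpy as np

A = np.array([[1.0, 1.0],
              [2.0, 2.0],
              [3.0, 3.0]])           # rank one, so neither 1-1 nor onto
U, s, Vt = np.linalg.svd(A)
r = int(np.sum(s > 1e-12))           # numerical rank

range_A  = U[:, :r]                  # basis for R(A)
null_At  = U[:, r:]                  # basis for N(A^T) = R(A)-perp
range_At = Vt[:r].T                  # basis for R(A^T) = N(A)-perp
null_A   = Vt[r:].T                  # basis for N(A)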
EXERCISES
1. Let A = [… 3 4; … 8 5] (a 2 × 3 matrix) and consider A as a linear transformation
mapping ℝ^3 to ℝ^2. Find the matrix representation of A with respect to the bases
{…} of ℝ^3 and {…} of ℝ^2.
2. Consider the vector space ℝ^{n×n} over ℝ, let S denote the subspace of symmetric
matrices, and let R denote the subspace of skew-symmetric matrices. For matrices
X, Y ∈ ℝ^{n×n} define their inner product by ⟨X, Y⟩ = Tr(X^T Y). Show that, with
respect to this inner product, R = S⊥.
3. Consider the differentiation operator L defined in Example 3.2.3. Is L 1-1? Is L
onto?
4. Prove Theorem 3.4.
5. Prove Theorem 3.11.4.
6. Prove Theorem 3.12.2.
7. Determine bases for the four fundamental subspaces of the matrix
A = [… ; 2 5 5 3 ; …].
8. Suppose A ∈ ℝ^{m×n} has a left inverse. Show that A^T has a right inverse.
9. Let A = […]. Determine N(A) and R(A). Are they equal? Is this true in general?
If this is true in general, prove it; if not, provide a counterexample.
10. Suppose A ∈ ℝ_9^{9×48}. How many linearly independent solutions can be found to the
homogeneous linear system Ax = 0?
11. Modify Figure 3.1 to illustrate the four fundamental subspaces associated with A^T ∈
ℝ^{n×m} thought of as a transformation from ℝ^m to ℝ^n.
Chapter 4
Introduction to the Moore-Penrose Pseudoinverse

In this chapter we give a brief introduction to the Moore-Penrose pseudoinverse, a gener-
alization of the inverse of a matrix. The Moore-Penrose pseudoinverse is defined for any
matrix and, as is shown in the following text, brings great notational and conceptual clarity
to the study of solutions to arbitrary systems of linear equations and linear least squares
problems.

4.1 Definitions and Characterizations
Consider a linear transformation A : X → Y, where X and Y are arbitrary finite-
dimensional vector spaces. Define a transformation T : N(A)⊥ → R(A) by
Tx = Ax for all x ∈ N(A)⊥.
Then, as noted in the proof of Theorem 3.17, T is bijective (1-1 and onto), and hence we
can define a unique inverse transformation T^{-1} : R(A) → N(A)⊥. This transformation
can be used to give our first definition of A^+, the Moore-Penrose pseudoinverse of A.
Unfortunately, the definition neither provides nor suggests a good computational strategy
for determining A^+.

Definition 4.1. With A and T as defined above, define a transformation A^+ : Y → X by
A^+ y = T^{-1} y1,
where y = y1 + y2 with y1 ∈ R(A) and y2 ∈ R(A)⊥. Then A^+ is the Moore-Penrose
pseudoinverse of A.

Although X and Y were arbitrary vector spaces above, let us henceforth consider the
case X = ℝ^n and Y = ℝ^m. We have thus defined A^+ for all A ∈ ℝ_r^{m×n}. A purely algebraic
characterization of A^+ is given in the next theorem, which was proved by Penrose in 1955;
see [22].
Theorem 4.2. Let A ∈ ℝ_r^{m×n}. Then G = A^+ if and only if
(P1) AGA = A.
(P2) GAG = G.
(P3) (AG)^T = AG.
(P4) (GA)^T = GA.
Furthermore, A^+ always exists and is unique.

Note that the inverse of a nonsingular matrix satisfies all four Penrose properties. Also,
a right or left inverse satisfies no fewer than three of the four properties. Unfortunately, as
with Definition 4.1, neither the statement of Theorem 4.2 nor its proof suggests a computa-
tional algorithm. However, the Penrose properties do offer the great virtue of providing a
checkable criterion in the following sense. Given a matrix G that is a candidate for being
the pseudoinverse of A, one need simply verify the four Penrose conditions (P1)-(P4). If G
satisfies all four, then by uniqueness, it must be A^+. Such a verification is often relatively
straightforward.

Example 4.3. Consider A = [1; 2]. Verify directly that A^+ = [1/5 2/5] satisfies (P1)-(P4).
Note that other left inverses (for example, A^{-L} = [3 −1]) satisfy properties (P1), (P2),
and (P4) but not (P3).
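A sketch of such a verification (Python/NumPy assumed; np.linalg.pinv computes A^+ independently for comparison):

import numpy as np

A = np.array([[1.0], [2.0]])
G = np.array([[0.2, 0.4]])               # the candidate [1/5 2/5]

checks = [
    np.allclose(A @ G @ A, A),           # (P1)
    np.allclose(G @ A @ G, G),           # (P2)
    np.allclose((A @ G).T, A @ G),       # (P3)
    np.allclose((G @ A).T, G @ A),       # (P4)
]
print(all(checks))                           # True
print(np.allclose(G, np.linalg.pinv(A)))     # True: by uniqueness, G = A^+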
Still another characterization of A^+ is given in the following theorem, whose proof
can be found in [1, p. 19]. While not generally suitable for computer implementation, this
characterization can be useful for hand calculation of small examples.

Theorem 4.4. Let A ∈ ℝ_r^{m×n}. Then
A^+ = lim_{δ→0} (A^T A + δ^2 I)^{-1} A^T    (4.1)
    = lim_{δ→0} A^T (AA^T + δ^2 I)^{-1}.    (4.2)

4.2 Examples
Each of the following can be derived or verified by using the above definitions or charac-
terizations.

Example 4.5. A^+ = A^T (AA^T)^{-1} if A is onto (independent rows) (A is right invertible).

Example 4.6. A^+ = (A^T A)^{-1} A^T if A is 1-1 (independent columns) (A is left invertible).

Example 4.7. For any scalar α,
α^+ = 1/α if α ≠ 0, and α^+ = 0 if α = 0.
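The limits in Theorem 4.4 can be watched numerically; a sketch (Python/NumPy assumed, with a rank-deficient test matrix so that no true inverse exists):

import numpy as np

A = np.array([[1.0, 1.0],
              [2.0, 2.0]])               # rank one: A is not invertible
I = np.eye(2)

for delta in [1e-1, 1e-2, 1e-4]:
    G = np.linalg.inv(A.T @ A + delta**2 * I) @ A.T    # (A^T A + d^2 I)^{-1} A^T
    err = np.max(np.abs(G - np.linalg.pinv(A)))
    print(delta, err)                    # the error shrinks as delta -> 0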
Example 4.8. For any vector $v \in \mathbb{R}^n$,
\[
v^+ = (v^T v)^+ v^T = \begin{cases} \dfrac{v^T}{v^T v} & \text{if } v \neq 0 , \\ 0 & \text{if } v = 0 . \end{cases}
\]

Example 4.9.
\[
\begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}^+ = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} , \qquad
\begin{bmatrix} 1 & 1 \\ 0 & 0 \end{bmatrix}^+ = \begin{bmatrix} 1/2 & 0 \\ 1/2 & 0 \end{bmatrix} .
\]

Example 4.10.
\[
\begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}^+ = \begin{bmatrix} 1/4 & 1/4 \\ 1/4 & 1/4 \end{bmatrix} .
\]

4.3 Properties and Applications

This section presents some miscellaneous useful results on pseudoinverses. Many of these are used in the text that follows.

Theorem 4.11. Let $A \in \mathbb{R}^{m \times n}$ and suppose $U \in \mathbb{R}^{m \times m}$, $V \in \mathbb{R}^{n \times n}$ are orthogonal ($M$ is orthogonal if $M^T = M^{-1}$). Then
\[
(U A V)^+ = V^T A^+ U^T .
\]

Proof: For the proof, simply verify that the expression above does indeed satisfy each of the four Penrose conditions. $\Box$

Theorem 4.12. Let $S \in \mathbb{R}^{n \times n}$ be symmetric with $U^T S U = D$, where $U$ is orthogonal and $D$ is diagonal. Then $S^+ = U D^+ U^T$, where $D^+$ is again a diagonal matrix whose diagonal elements are determined according to Example 4.7.

Theorem 4.13. For all $A \in \mathbb{R}^{m \times n}$,

1. $A^+ = (A^T A)^+ A^T = A^T (A A^T)^+$.

2. $(A^T)^+ = (A^+)^T$.

Proof: Both results can be proved using the limit characterization of Theorem 4.4. The proof of the first result is not particularly easy and does not even have the virtue of being especially illuminating. The interested reader can consult the proof in [1, p. 27]. The proof of the second result (which can also be proved easily by verifying the four Penrose conditions) is as follows:
\[
(A^T)^+ = \lim_{\delta \to 0} (A A^T + \delta^2 I)^{-1} A
= \lim_{\delta \to 0} \left[ A^T (A A^T + \delta^2 I)^{-1} \right]^T
= \left[ \lim_{\delta \to 0} A^T (A A^T + \delta^2 I)^{-1} \right]^T
= (A^+)^T . \qquad \Box
\]
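Theorems 4.11 and 4.12 lend themselves to a quick numerical illustration. Here is a small NumPy sketch; the random test matrices and the $10^{-12}$ rank tolerance are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 3))
U, _ = np.linalg.qr(rng.standard_normal((4, 4)))   # a random orthogonal U
V, _ = np.linalg.qr(rng.standard_normal((3, 3)))   # a random orthogonal V

# Theorem 4.11: (U A V)+ = V^T A+ U^T
print(np.allclose(np.linalg.pinv(U @ A @ V),
                  V.T @ np.linalg.pinv(A) @ U.T))   # True

# Theorem 4.12: S+ = Q D+ Q^T for symmetric S, with D+ as in Example 4.7
S = A @ A.T                         # symmetric and singular (rank <= 3 < 4)
d, Q = np.linalg.eigh(S)
d_plus = np.array([1.0 / x if abs(x) > 1e-12 else 0.0 for x in d])
print(np.allclose(Q @ np.diag(d_plus) @ Q.T, np.linalg.pinv(S)))  # True
```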
Note that by combining Theorems 4.12 and 4.13 we can, in theory at least, compute the Moore-Penrose pseudoinverse of any matrix (since $A A^T$ and $A^T A$ are symmetric). This turns out to be a poor approach in finite-precision arithmetic, however (see, e.g., [7], [11], [23]), and better methods are suggested in text that follows.

Theorem 4.11 is suggestive of a "reverse-order" property for pseudoinverses of products of matrices such as exists for inverses of products. Unfortunately, in general,
\[
(A B)^+ \neq B^+ A^+ .
\]
As an example consider $A = \begin{bmatrix} 0 & 1 \end{bmatrix}$ and $B = \begin{bmatrix} 1 \\ 1 \end{bmatrix}$. Then
\[
(A B)^+ = 1^+ = 1
\]
while
\[
B^+ A^+ = \begin{bmatrix} \tfrac{1}{2} & \tfrac{1}{2} \end{bmatrix} \begin{bmatrix} 0 \\ 1 \end{bmatrix} = \tfrac{1}{2} .
\]
However, necessary and sufficient conditions under which the reverse-order property does hold are known and we quote a couple of moderately useful results for reference.

Theorem 4.14. $(A B)^+ = B^+ A^+$ if and only if

1. $\mathcal{R}(B B^T A^T) \subseteq \mathcal{R}(A^T)$

and

2. $\mathcal{R}(A^T A B) \subseteq \mathcal{R}(B)$.

Proof: For the proof, see [9]. $\Box$

Theorem 4.15. $(A B)^+ = B_1^+ A_1^+$, where $B_1 = A^+ A B$ and $A_1 = A B_1 B_1^+$.

Proof: For the proof, see [5]. $\Box$

Theorem 4.16. If $A \in \mathbb{R}^{n \times r}_r$, $B \in \mathbb{R}^{r \times m}_r$, then $(A B)^+ = B^+ A^+$.

Proof: Since $A \in \mathbb{R}^{n \times r}_r$, then $A^+ = (A^T A)^{-1} A^T$, whence $A^+ A = I_r$. Similarly, since $B \in \mathbb{R}^{r \times m}_r$, we have $B^+ = B^T (B B^T)^{-1}$, whence $B B^+ = I_r$. The result then follows by taking $B_1 = B$, $A_1 = A$ in Theorem 4.15. $\Box$
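The failure of the reverse-order property, and the full-rank case of Theorem 4.16 where it does hold, can both be seen numerically. A minimal NumPy sketch with illustrative test matrices:

```python
import numpy as np

A = np.array([[0.0, 1.0]])      # the counterexample above
B = np.array([[1.0], [1.0]])
print(np.linalg.pinv(A @ B))                    # [[1.0]] : (AB)+ = 1
print(np.linalg.pinv(B) @ np.linalg.pinv(A))    # [[0.5]] : B+ A+ = 1/2

# Theorem 4.16: full column rank times full row rank
rng = np.random.default_rng(2)
A2 = rng.standard_normal((5, 3))    # rank 3 with probability 1
B2 = rng.standard_normal((3, 4))    # rank 3 with probability 1
print(np.allclose(np.linalg.pinv(A2 @ B2),
                  np.linalg.pinv(B2) @ np.linalg.pinv(A2)))  # True
```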
The following theorem gives some additional useful properties of pseudoinverses.

Theorem 4.17. For all $A \in \mathbb{R}^{m \times n}$,

1. $(A^+)^+ = A$.

2. $(A^T A)^+ = A^+ (A^T)^+$, $(A A^T)^+ = (A^T)^+ A^+$.

3. $\mathcal{R}(A^+) = \mathcal{R}(A^T) = \mathcal{R}(A^+ A) = \mathcal{R}(A^T A)$.

4. $\mathcal{N}(A^+) = \mathcal{N}(A A^+) = \mathcal{N}((A A^T)^+) = \mathcal{N}(A A^T) = \mathcal{N}(A^T)$.

5. If $A$ is normal, then $A^k A^+ = A^+ A^k$ and $(A^k)^+ = (A^+)^k$ for all integers $k > 0$.
Note: Recall that $A \in \mathbb{R}^{n \times n}$ is normal if $A A^T = A^T A$. For example, if $A$ is symmetric, skew-symmetric, or orthogonal, then it is normal. However, a matrix can be none of the preceding but still be normal, such as
\[
A = \begin{bmatrix} a & b \\ -b & a \end{bmatrix}
\]
for scalars $a, b \in \mathbb{R}$.

The next theorem is fundamental to facilitating a compact and unifying approach to studying the existence of solutions of (matrix) linear equations and linear least squares problems.

Theorem 4.18. Suppose $A \in \mathbb{R}^{n \times p}$, $B \in \mathbb{R}^{n \times m}$. Then $\mathcal{R}(B) \subseteq \mathcal{R}(A)$ if and only if $A A^+ B = B$.

Proof: Suppose $\mathcal{R}(B) \subseteq \mathcal{R}(A)$ and take arbitrary $x \in \mathbb{R}^m$. Then $B x \in \mathcal{R}(B) \subseteq \mathcal{R}(A)$, so there exists a vector $y \in \mathbb{R}^p$ such that $A y = B x$. Then we have
\[
B x = A y = A A^+ A y = A A^+ B x ,
\]
where one of the Penrose properties is used above. Since $x$ was arbitrary, we have shown that $B = A A^+ B$.

To prove the converse, assume that $A A^+ B = B$ and take arbitrary $y \in \mathcal{R}(B)$. Then there exists a vector $x \in \mathbb{R}^m$ such that $B x = y$, whereupon
\[
y = B x = A A^+ B x \in \mathcal{R}(A) . \qquad \Box
\]
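The criterion of Theorem 4.18 is directly checkable in code. A minimal NumPy sketch with illustrative test matrices (the range inclusion is engineered by construction in the first case):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((5, 2))
B = A @ rng.standard_normal((2, 3))       # columns of B lie in R(A) by construction
A_plus = np.linalg.pinv(A)
print(np.allclose(A @ A_plus @ B, B))     # True: R(B) is a subset of R(A)

C = rng.standard_normal((5, 3))           # generic C: R(C) not contained in R(A)
print(np.allclose(A @ A_plus @ C, C))     # False
```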
EXERCISES

1. Use Theorem 4.4 to compute the pseudoinverse of $\begin{bmatrix} 1 & 1 \\ 2 & 2 \end{bmatrix}$.

2. If $x, y \in \mathbb{R}^n$, show that $(x y^T)^+ = (x^T x)^+ (y^T y)^+ \, y x^T$.

3. For $A \in \mathbb{R}^{m \times n}$, prove that $\mathcal{R}(A) = \mathcal{R}(A A^T)$ using only definitions and elementary properties of the Moore-Penrose pseudoinverse.

4. For $A \in \mathbb{R}^{m \times n}$, prove that $\mathcal{R}(A^+) = \mathcal{R}(A^T)$.

5. For $A \in \mathbb{R}^{p \times n}$ and $B \in \mathbb{R}^{m \times n}$, show that $\mathcal{N}(A) \subseteq \mathcal{N}(B)$ if and only if $B A^+ A = B$.

6. Let $A \in \mathbb{R}^{n \times n}$, $B \in \mathbb{R}^{n \times m}$, and $D \in \mathbb{R}^{m \times m}$ and suppose further that $D$ is nonsingular.

(a) Prove or disprove that
\[
\begin{bmatrix} A & A B \\ 0 & D \end{bmatrix}^+ = \begin{bmatrix} A^+ & -A^+ A B D^{-1} \\ 0 & D^{-1} \end{bmatrix} .
\]

(b) Prove or disprove that
\[
\begin{bmatrix} A & B \\ 0 & D \end{bmatrix}^+ = \begin{bmatrix} A^+ & -A^+ B D^{-1} \\ 0 & D^{-1} \end{bmatrix} .
\]
Chapter 5

Introduction to the Singular Value Decomposition

In this chapter we give a brief introduction to the singular value decomposition (SVD). We show that every matrix has an SVD and describe some useful properties and applications of this important matrix factorization. The SVD plays a key conceptual and computational role throughout (numerical) linear algebra and its applications.

5.1 The Fundamental Theorem

Theorem 5.1. Let $A \in \mathbb{R}^{m \times n}_r$. Then there exist orthogonal matrices $U \in \mathbb{R}^{m \times m}$ and $V \in \mathbb{R}^{n \times n}$ such that
\[
A = U \Sigma V^T , \qquad (5.1)
\]
where $\Sigma = \begin{bmatrix} S & 0 \\ 0 & 0 \end{bmatrix}$, $S = \operatorname{diag}(\sigma_1, \ldots, \sigma_r) \in \mathbb{R}^{r \times r}$, and $\sigma_1 \geq \cdots \geq \sigma_r > 0$. More specifically, we have
\[
A = \begin{bmatrix} U_1 & U_2 \end{bmatrix} \begin{bmatrix} S & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} V_1^T \\ V_2^T \end{bmatrix} \qquad (5.2)
\]
\[
\phantom{A} = U_1 S V_1^T . \qquad (5.3)
\]
The submatrix sizes are all determined by $r$ (which must be $\leq \min\{m, n\}$), i.e., $U_1 \in \mathbb{R}^{m \times r}$, $U_2 \in \mathbb{R}^{m \times (m-r)}$, $V_1 \in \mathbb{R}^{n \times r}$, $V_2 \in \mathbb{R}^{n \times (n-r)}$, and the 0-subblocks in $\Sigma$ are compatibly dimensioned.

Proof: Since $A^T A \geq 0$ ($A^T A$ is symmetric and nonnegative definite; recall, for example, [24, Ch. 6]), its eigenvalues are all real and nonnegative. (Note: The rest of the proof follows analogously if we start with the observation that $A A^T \geq 0$ and the details are left to the reader as an exercise.) Denote the set of eigenvalues of $A^T A$ by $\{\sigma_i^2, \ i \in \underline{n}\}$ with $\sigma_1 \geq \cdots \geq \sigma_r > 0 = \sigma_{r+1} = \cdots = \sigma_n$. Let $\{v_i, \ i \in \underline{n}\}$ be a set of corresponding orthonormal eigenvectors and let $V_1 = [v_1, \ldots, v_r]$, $V_2 = [v_{r+1}, \ldots, v_n]$. Letting $S = \operatorname{diag}(\sigma_1, \ldots, \sigma_r)$, we can write $A^T A V_1 = V_1 S^2$. Premultiplying by $V_1^T$ gives $V_1^T A^T A V_1 = V_1^T V_1 S^2 = S^2$, the latter equality following from the orthonormality of the $v_i$ vectors. Pre- and postmultiplying by $S^{-1}$ gives the equation
\[
S^{-1} V_1^T A^T A V_1 S^{-1} = I . \qquad (5.4)
\]
Turning now to the eigenvalue equations corresponding to the eigenvalues $\sigma_{r+1}, \ldots, \sigma_n$ we have that $A^T A V_2 = V_2 0 = 0$, whence $V_2^T A^T A V_2 = 0$. Thus, $A V_2 = 0$. Now define the matrix $U_1 \in \mathbb{R}^{m \times r}$ by $U_1 = A V_1 S^{-1}$. Then from (5.4) we see that $U_1^T U_1 = I$; i.e., the columns of $U_1$ are orthonormal. Choose any matrix $U_2 \in \mathbb{R}^{m \times (m-r)}$ such that $[U_1 \ U_2]$ is orthogonal. Then
\[
U^T A V = \begin{bmatrix} U_1^T A V_1 & U_1^T A V_2 \\ U_2^T A V_1 & U_2^T A V_2 \end{bmatrix}
= \begin{bmatrix} U_1^T A V_1 & 0 \\ U_2^T A V_1 & 0 \end{bmatrix}
\]
since $A V_2 = 0$. Referring to the equation $U_1 = A V_1 S^{-1}$ defining $U_1$, we see that $U_1^T A V_1 = S$ and $U_2^T A V_1 = U_2^T U_1 S = 0$. The latter equality follows from the orthogonality of the columns of $U_1$ and $U_2$. Thus, we see that, in fact, $U^T A V = \begin{bmatrix} S & 0 \\ 0 & 0 \end{bmatrix}$, and defining this matrix to be $\Sigma$ completes the proof. $\Box$
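The construction in the proof can be carried out numerically and compared with a library SVD. A minimal NumPy sketch; the random test matrix and the $10^{-10}$ rank tolerance are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((5, 4)) @ rng.standard_normal((4, 3))  # 5 x 3, rank 3

# A library SVD, checked against Theorem 5.1
U, s, Vt = np.linalg.svd(A)
Sigma = np.zeros(A.shape)
Sigma[:s.size, :s.size] = np.diag(s)
print(np.allclose(U @ Sigma @ Vt, A))                           # A = U Sigma V^T
print(np.allclose(U.T @ U, np.eye(5)), np.allclose(Vt @ Vt.T, np.eye(3)))

# The proof's construction: eigenvectors of A^T A, then U1 = A V1 S^{-1}
lam, V = np.linalg.eigh(A.T @ A)          # eigenvalues in ascending order
order = np.argsort(lam)[::-1]             # reorder descending, as in the proof
lam, V = lam[order], V[:, order]
r = int(np.sum(lam > 1e-10))
S = np.diag(np.sqrt(lam[:r]))
V1 = V[:, :r]
U1 = A @ V1 @ np.linalg.inv(S)
print(np.allclose(U1.T @ U1, np.eye(r)))  # columns of U1 orthonormal, cf. (5.4)
print(np.allclose(U1 @ S @ V1.T, A))      # the compact SVD (5.3)
```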
Definition 5.2. Let $A = U \Sigma V^T$ be an SVD of $A$ as in Theorem 5.1.

1. The set $\{\sigma_1, \ldots, \sigma_r\}$ is called the set of (nonzero) singular values of the matrix $A$ and is denoted $\Sigma(A)$. From the proof of Theorem 5.1 we see that $\sigma_i(A) = \lambda_i^{\frac{1}{2}}(A^T A) = \lambda_i^{\frac{1}{2}}(A A^T)$. Note that there are also $\min\{m, n\} - r$ zero singular values.

2. The columns of $U$ are called the left singular vectors of $A$ (and are the orthonormal eigenvectors of $A A^T$).

3. The columns of $V$ are called the right singular vectors of $A$ (and are the orthonormal eigenvectors of $A^T A$).

Remark 5.3. The analogous complex case in which $A \in \mathbb{C}^{m \times n}$ is quite straightforward. The decomposition is $A = U \Sigma V^H$, where $U$ and $V$ are unitary and the proof is essentially identical, except for Hermitian transposes replacing transposes.

Remark 5.4. Note that $U$ and $V$ can be interpreted as changes of basis in both the domain and codomain spaces with respect to which $A$ then has a diagonal matrix representation. Specifically, let $\mathcal{A}$ denote $A$ thought of as a linear transformation mapping $\mathbb{R}^n$ to $\mathbb{R}^m$. Then rewriting $A = U \Sigma V^T$ as $A V = U \Sigma$ we see that Mat $\mathcal{A}$ is $\Sigma$ with respect to the bases $\{v_1, \ldots, v_n\}$ for $\mathbb{R}^n$ and $\{u_1, \ldots, u_m\}$ for $\mathbb{R}^m$ (see the discussion in Section 3.2). See also Remark 5.16.

Remark 5.5. The singular value decomposition is not unique. For example, an examination of the proof of Theorem 5.1 reveals that

• any orthonormal basis for $\mathcal{N}(A)$ can be used for $V_2$.

• there may be nonuniqueness associated with the columns of $V_1$ (and hence $U_1$) corresponding to multiple $\sigma_i$'s.
• any $U_2$ can be used so long as $[U_1 \ U_2]$ is orthogonal.

• columns of $U$ and $V$ can be changed (in tandem) by sign (or multiplier of the form $e^{j\theta}$ in the complex case).

What is unique, however, is the matrix $\Sigma$ and the span of the columns of $U_1$, $U_2$, $V_1$, and $V_2$ (see Theorem 5.11). Note, too, that a "full SVD" (5.2) can always be constructed from a "compact SVD" (5.3).

Remark 5.6. Computing an SVD by working directly with the eigenproblem for $A^T A$ or $A A^T$ is numerically poor in finite-precision arithmetic. Better algorithms exist that work directly on $A$ via a sequence of orthogonal transformations; see, e.g., [7], [11], [25].

Example 5.7.
\[
A = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = U I U^T ,
\]
where $U$ is an arbitrary $2 \times 2$ orthogonal matrix, is an SVD.

Example 5.8.
\[
A = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}
= \begin{bmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{bmatrix}
\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}
\begin{bmatrix} \cos\theta & \sin\theta \\ \sin\theta & -\cos\theta \end{bmatrix} ,
\]
where $\theta$ is arbitrary, is an SVD.

Example 5.9.
\[
A = \begin{bmatrix} 1 & 1 \\ 2 & 2 \\ 2 & 2 \end{bmatrix}
= \begin{bmatrix} \frac{1}{3} & \frac{2\sqrt{5}}{5} & \frac{2\sqrt{5}}{15} \\[2pt] \frac{2}{3} & -\frac{\sqrt{5}}{5} & \frac{4\sqrt{5}}{15} \\[2pt] \frac{2}{3} & 0 & -\frac{\sqrt{5}}{3} \end{bmatrix}
\begin{bmatrix} 3\sqrt{2} & 0 \\ 0 & 0 \\ 0 & 0 \end{bmatrix}
\begin{bmatrix} \frac{\sqrt{2}}{2} & \frac{\sqrt{2}}{2} \\[2pt] \frac{\sqrt{2}}{2} & -\frac{\sqrt{2}}{2} \end{bmatrix}
\]
is an SVD.

Example 5.10. Let $A \in \mathbb{R}^{n \times n}$ be symmetric and positive definite. Let $V$ be an orthogonal matrix of eigenvectors that diagonalizes $A$, i.e., $V^T A V = \Lambda > 0$. Then $A = V \Lambda V^T$ is an SVD of $A$.

A factorization $U \Sigma V^T$ of an $m \times n$ matrix $A$ qualifies as an SVD if $U$ and $V$ are orthogonal and $\Sigma$ is an $m \times n$ "diagonal" matrix whose diagonal elements in the upper left corner are positive (and ordered). For example, if $A = U \Sigma V^T$ is an SVD of $A$, then $V \Sigma^T U^T$ is an SVD of $A^T$.
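Small hand-built SVDs such as Example 5.9 are easy to sanity-check in code. A minimal NumPy sketch verifying the three factors above (nothing here is an assumption beyond the use of NumPy itself):

```python
import numpy as np

A = np.array([[1.0, 1.0], [2.0, 2.0], [2.0, 2.0]])
s5, s2 = np.sqrt(5.0), np.sqrt(2.0)
U = np.array([[1/3,  2*s5/5,  2*s5/15],
              [2/3, -s5/5,    4*s5/15],
              [2/3,  0.0,    -s5/3]])
Sigma = np.array([[3*s2, 0.0], [0.0, 0.0], [0.0, 0.0]])
Vt = np.array([[s2/2,  s2/2],
               [s2/2, -s2/2]])

print(np.allclose(U @ Sigma @ Vt, A))     # True: the factorization reproduces A
print(np.allclose(U.T @ U, np.eye(3)))    # True: U is orthogonal
print(np.allclose(Vt @ Vt.T, np.eye(2)))  # True: V is orthogonal
```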
5.2 Some Basic Properties

Theorem 5.11. Let $A \in \mathbb{R}^{m \times n}$ have a singular value decomposition $A = U \Sigma V^T$. Using the notation of Theorem 5.1, the following properties hold:

1. $\operatorname{rank}(A) = r =$ the number of nonzero singular values of $A$.

2. Let $U = [u_1, \ldots, u_m]$ and $V = [v_1, \ldots, v_n]$. Then $A$ has the dyadic (or outer product) expansion
\[
A = \sum_{i=1}^{r} \sigma_i u_i v_i^T . \qquad (5.5)
\]

3. The singular vectors satisfy the relations
\[
A v_i = \sigma_i u_i , \qquad (5.6)
\]
\[
A^T u_i = \sigma_i v_i \qquad (5.7)
\]
for $i \in \underline{r}$.

4. Let $U_1 = [u_1, \ldots, u_r]$, $U_2 = [u_{r+1}, \ldots, u_m]$, $V_1 = [v_1, \ldots, v_r]$, and $V_2 = [v_{r+1}, \ldots, v_n]$. Then

(a) $\mathcal{R}(U_1) = \mathcal{R}(A) = \mathcal{N}(A^T)^\perp$.

(b) $\mathcal{R}(U_2) = \mathcal{R}(A)^\perp = \mathcal{N}(A^T)$.

(c) $\mathcal{R}(V_1) = \mathcal{N}(A)^\perp = \mathcal{R}(A^T)$.

(d) $\mathcal{R}(V_2) = \mathcal{N}(A) = \mathcal{R}(A^T)^\perp$.

Remark 5.12. Part 4 of the above theorem provides a numerically superior method for finding (orthonormal) bases for the four fundamental subspaces compared to methods based on, for example, reduction to row or column echelon form. Note that each subspace requires knowledge of the rank $r$. The relationship to the four fundamental subspaces is summarized nicely in Figure 5.1.

Remark 5.13. The elegance of the dyadic decomposition (5.5) as a sum of outer products and the key vector relations (5.6) and (5.7) explain why it is conventional to write the SVD as $A = U \Sigma V^T$ rather than, say, $A = U \Sigma V$.

Theorem 5.14. Let $A \in \mathbb{R}^{m \times n}$ have a singular value decomposition $A = U \Sigma V^T$ as in Theorem 5.1. Then
\[
A^+ = V \Sigma^+ U^T , \qquad (5.8)
\]
where
\[
\Sigma^+ = \begin{bmatrix} S^{-1} & 0 \\ 0 & 0 \end{bmatrix} \in \mathbb{R}^{n \times m} , \qquad (5.9)
\]
with the 0-subblocks appropriately sized. Furthermore, if we let the columns of $U$ and $V$ be as defined in Theorem 5.11, then
\[
A^+ = \sum_{i=1}^{r} \frac{1}{\sigma_i} v_i u_i^T . \qquad (5.10)
\]

Proof: The proof follows easily by verifying the four Penrose conditions. $\Box$
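Both forms of Theorem 5.14 can be checked numerically. A minimal NumPy sketch; the random test matrix and the rank tolerance are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)
A = rng.standard_normal((4, 3))
U, s, Vt = np.linalg.svd(A)
r = int(np.sum(s > 1e-10))

# (5.8)-(5.9): A+ = V Sigma+ U^T
Sigma_plus = np.zeros((A.shape[1], A.shape[0]))
Sigma_plus[:r, :r] = np.diag(1.0 / s[:r])
print(np.allclose(Vt.T @ Sigma_plus @ U.T, np.linalg.pinv(A)))  # True

# (5.10): dyadic expansion A+ = sum_i (1/sigma_i) v_i u_i^T
A_plus = sum(np.outer(Vt[i], U[:, i]) / s[i] for i in range(r))
print(np.allclose(A_plus, np.linalg.pinv(A)))                   # True
```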
[Figure 5.1. SVD and the four fundamental subspaces.]

Remark 5.15. Note that none of the expressions above quite qualifies as an SVD of $A^+$ if we insist that the singular values be ordered from largest to smallest. However, a simple reordering accomplishes the task:
\[
A^+ = \sum_{i=1}^{r} \frac{1}{\sigma_{r+1-i}} v_{r+1-i} u_{r+1-i}^T . \qquad (5.11)
\]
This can also be written in matrix terms by using the so-called reverse-order identity matrix (or exchange matrix) $P = [e_r, e_{r-1}, \ldots, e_2, e_1]$, which is clearly orthogonal and symmetric. Then
\[
A^+ = (V_1 P)(P S^{-1} P)(P U_1^T)
\]
is the matrix version of (5.11). A "full SVD" can be similarly constructed.

Remark 5.16. Recall the linear transformation $T$ used in the proof of Theorem 3.17 and in Definition 4.1. Since $T$ is determined by its action on a basis, and since $\{v_1, \ldots, v_r\}$ is a basis for $\mathcal{N}(A)^\perp$, then $T$ can be defined by $T v_i = \sigma_i u_i$, $i \in \underline{r}$. Similarly, since $\{u_1, \ldots, u_r\}$ is a basis for $\mathcal{R}(A)$, then $T^{-1}$ can be defined by $T^{-1} u_i = \frac{1}{\sigma_i} v_i$, $i \in \underline{r}$. From Section 3.2, the matrix representation for $T$ with respect to the bases $\{v_1, \ldots, v_r\}$ and $\{u_1, \ldots, u_r\}$ is clearly $S$, while the matrix representation for the inverse linear transformation $T^{-1}$ with respect to the same bases is $S^{-1}$.
5.3 Row and Column Compressions

Row compression

Let $A \in \mathbb{R}^{m \times n}$ have an SVD given by (5.1). Then
\[
U^T A = \Sigma V^T = \begin{bmatrix} S & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} V_1^T \\ V_2^T \end{bmatrix} = \begin{bmatrix} S V_1^T \\ 0 \end{bmatrix} \in \mathbb{R}^{m \times n} .
\]
Notice that $\mathcal{N}(A) = \mathcal{N}(U^T A) = \mathcal{N}(S V_1^T)$ and the matrix $S V_1^T \in \mathbb{R}^{r \times n}$ has full row rank. In other words, premultiplication of $A$ by $U^T$ is an orthogonal transformation that "compresses" $A$ by row transformations. Such a row compression can also be accomplished by orthogonal row transformations performed directly on $A$ to reduce it to the form $\begin{bmatrix} R \\ 0 \end{bmatrix}$, where $R$ is upper triangular. Both compressions are analogous to the so-called row-reduced echelon form which, when derived by a Gaussian elimination algorithm implemented in finite-precision arithmetic, is not generally as reliable a procedure.

Column compression

Again, let $A \in \mathbb{R}^{m \times n}$ have an SVD given by (5.1). Then
\[
A V = U \Sigma = \begin{bmatrix} U_1 & U_2 \end{bmatrix} \begin{bmatrix} S & 0 \\ 0 & 0 \end{bmatrix} = \begin{bmatrix} U_1 S & 0 \end{bmatrix} \in \mathbb{R}^{m \times n} .
\]
This time, notice that $\mathcal{R}(A) = \mathcal{R}(A V) = \mathcal{R}(U_1 S)$ and the matrix $U_1 S \in \mathbb{R}^{m \times r}$ has full column rank. In other words, postmultiplication of $A$ by $V$ is an orthogonal transformation that "compresses" $A$ by column transformations. Such a compression is analogous to the so-called column-reduced echelon form, which is not generally a reliable procedure when performed by Gauss transformations in finite-precision arithmetic. For details, see, for example, [7], [11], [23], [25].
EXERCISES

1. Let $X \in \mathbb{R}^{m \times n}$. If $X^T X = 0$, show that $X = 0$.

2. Prove Theorem 5.1 starting from the observation that $A A^T \geq 0$.

3. Let $A \in \mathbb{R}^{n \times n}$ be symmetric but indefinite. Determine an SVD of $A$.

4. Let $x \in \mathbb{R}^m$, $y \in \mathbb{R}^n$ be nonzero vectors. Determine an SVD of the matrix $A \in \mathbb{R}^{m \times n}_1$ defined by $A = x y^T$.

5. Determine SVDs of the matrices

(a) $\begin{bmatrix} -1 & 0 \\ 0 & 1 \end{bmatrix}$

(b) [second matrix illegible in this copy]

6. Let $A \in \mathbb{R}^{m \times n}$ and suppose $W \in \mathbb{R}^{m \times m}$ and $Y \in \mathbb{R}^{n \times n}$ are orthogonal.

(a) Show that $A$ and $W A Y$ have the same singular values (and hence the same rank).

(b) Suppose that $W$ and $Y$ are nonsingular but not necessarily orthogonal. Do $A$ and $W A Y$ have the same singular values? Do they have the same rank?

7. Let $A \in \mathbb{R}^{n \times n}_n$. Use the SVD to determine a polar factorization of $A$, i.e., $A = Q P$ where $Q$ is orthogonal and $P = P^T > 0$. Note: this is analogous to the polar form $z = r e^{j\theta}$ of a complex scalar $z$ (where $i = j = \sqrt{-1}$).
Chapter 6

Linear Equations

In this chapter we examine existence and uniqueness of solutions of systems of linear equations. General linear systems of the form
\[
A X = B ; \qquad A \in \mathbb{R}^{m \times n}, \ B \in \mathbb{R}^{m \times k} , \qquad (6.1)
\]
are studied and include, as a special case, the familiar vector system
\[
A x = b ; \qquad A \in \mathbb{R}^{n \times n}, \ b \in \mathbb{R}^n . \qquad (6.2)
\]

6.1 Vector Linear Equations

We begin with a review of some of the principal results associated with vector linear systems.

Theorem 6.1. Consider the system of linear equations
\[
A x = b ; \qquad A \in \mathbb{R}^{m \times n}, \ b \in \mathbb{R}^m . \qquad (6.3)
\]

1. There exists a solution to (6.3) if and only if $b \in \mathcal{R}(A)$.

2. There exists a solution to (6.3) for all $b \in \mathbb{R}^m$ if and only if $\mathcal{R}(A) = \mathbb{R}^m$, i.e., $A$ is onto; equivalently, there exists a solution if and only if $\operatorname{rank}([A, b]) = \operatorname{rank}(A)$, and this is possible only if $m \leq n$ (since $m = \dim \mathcal{R}(A) = \operatorname{rank}(A) \leq \min\{m, n\}$).

3. A solution to (6.3) is unique if and only if $\mathcal{N}(A) = 0$, i.e., $A$ is 1-1.

4. There exists a unique solution to (6.3) for all $b \in \mathbb{R}^m$ if and only if $A$ is nonsingular; equivalently, $A \in \mathbb{R}^{m \times m}$ and $A$ has neither a 0 singular value nor a 0 eigenvalue.

5. There exists at most one solution to (6.3) for all $b \in \mathbb{R}^m$ if and only if the columns of $A$ are linearly independent, i.e., $\mathcal{N}(A) = 0$, and this is possible only if $m \geq n$.

6. There exists a nontrivial solution to the homogeneous system $A x = 0$ if and only if $\operatorname{rank}(A) < n$.
Proof: The proofs are straightforward and can be consulted in standard texts on linear algebra. Note that some parts of the theorem follow directly from others. For example, to prove part 6, note that $x = 0$ is always a solution to the homogeneous system. Therefore, we must have the case of a nonunique solution, i.e., $A$ is not 1-1, which implies $\operatorname{rank}(A) < n$ by part 3. $\Box$

6.2 Matrix Linear Equations

In this section we present some of the principal results concerning existence and uniqueness of solutions to the general matrix linear system (6.1). Note that the results of Theorem 6.1 follow from those below for the special case $k = 1$, while results for (6.2) follow by specializing even further to the case $m = n$.

Theorem 6.2 (Existence). The matrix linear equation
\[
A X = B ; \qquad A \in \mathbb{R}^{m \times n}, \ B \in \mathbb{R}^{m \times k} , \qquad (6.4)
\]
has a solution if and only if $\mathcal{R}(B) \subseteq \mathcal{R}(A)$; equivalently, a solution exists if and only if $A A^+ B = B$.

Proof: The subspace inclusion criterion follows essentially from the definition of the range of a matrix. The matrix criterion is Theorem 4.18. $\Box$

Theorem 6.3. Let $A \in \mathbb{R}^{m \times n}$, $B \in \mathbb{R}^{m \times k}$ and suppose that $A A^+ B = B$. Then any matrix of the form
\[
X = A^+ B + (I - A^+ A) Y , \quad \text{where } Y \in \mathbb{R}^{n \times k} \text{ is arbitrary} , \qquad (6.5)
\]
is a solution of
\[
A X = B . \qquad (6.6)
\]
Furthermore, all solutions of (6.6) are of this form.

Proof: To verify that (6.5) is a solution, premultiply by $A$:
\[
A X = A A^+ B + A (I - A^+ A) Y = B + (A - A A^+ A) Y = B ,
\]
the second equality by hypothesis and the third since $A A^+ A = A$ by the first Penrose condition. That all solutions are of this form can be seen as follows. Let $Z$ be an arbitrary solution of (6.6), i.e., $A Z = B$. Then we can write
\[
Z = A^+ A Z + (I - A^+ A) Z = A^+ B + (I - A^+ A) Z
\]
and this is clearly of the form (6.5). $\Box$
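The general-solution formula (6.5) is easy to exercise numerically. A minimal NumPy sketch; the rank-deficient test matrix and the way $B$ is constructed to guarantee solvability are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(8)
A = rng.standard_normal((4, 2)) @ rng.standard_normal((2, 3))  # rank 2, so N(A) != 0
B = A @ rng.standard_normal((3, 2))                            # ensures A A+ B = B
A_plus = np.linalg.pinv(A)
print(np.allclose(A @ A_plus @ B, B))    # Theorem 6.2 solvability check

Y = rng.standard_normal((3, 2))          # arbitrary
X = A_plus @ B + (np.eye(3) - A_plus @ A) @ Y
print(np.allclose(A @ X, B))             # True: every such X solves AX = B
```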
Remark 6.4. When $A$ is square and nonsingular, $A^+ = A^{-1}$ and so $(I - A^+ A) = 0$. Thus, there is no "arbitrary" component, leaving only the unique solution $X = A^{-1} B$.

Remark 6.5. It can be shown that the particular solution $X = A^+ B$ is the solution of (6.6) that minimizes $\operatorname{Tr} X^T X$. ($\operatorname{Tr}(\cdot)$ denotes the trace of a matrix; recall that $\operatorname{Tr} X^T X = \sum_{i,j} x_{ij}^2$.)

Theorem 6.6 (Uniqueness). A solution of the matrix linear equation
\[
A X = B ; \qquad A \in \mathbb{R}^{m \times n}, \ B \in \mathbb{R}^{m \times k} \qquad (6.7)
\]
is unique if and only if $A^+ A = I$; equivalently, (6.7) has a unique solution if and only if $\mathcal{N}(A) = 0$.

Proof: The first equivalence is immediate from Theorem 6.3. The second follows by noting that $A^+ A = I$ can occur only if $r = n$, where $r = \operatorname{rank}(A)$ (recall $r \leq n$). But $\operatorname{rank}(A) = n$ if and only if $A$ is 1-1 or $\mathcal{N}(A) = 0$. $\Box$

Example 6.7. Suppose $A \in \mathbb{R}^{n \times n}$. Find all solutions of the homogeneous system $A x = 0$.

Solution:
\[
x = A^+ 0 + (I - A^+ A) y = (I - A^+ A) y ,
\]
where $y \in \mathbb{R}^n$ is arbitrary. Hence, there exists a nonzero solution if and only if $A^+ A \neq I$. This is equivalent to either $\operatorname{rank}(A) = r < n$ or $A$ being singular. Clearly, if there exists a nonzero solution, it is not unique.

Computation: Since $y$ is arbitrary, it is easy to see that all solutions are generated from a basis for $\mathcal{R}(I - A^+ A)$. But if $A$ has an SVD given by $A = U \Sigma V^T$, then it is easily checked that $I - A^+ A = V_2 V_2^T$ and $\mathcal{R}(V_2 V_2^T) = \mathcal{R}(V_2) = \mathcal{N}(A)$.
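The computational remark above translates directly into code. A minimal NumPy sketch with an illustrative rank-deficient test matrix:

```python
import numpy as np

rng = np.random.default_rng(9)
A = rng.standard_normal((3, 2)) @ rng.standard_normal((2, 4))  # 3 x 4, rank 2
U, s, Vt = np.linalg.svd(A)
r = int(np.sum(s > 1e-10))
V2 = Vt[r:].T                               # columns of V2 span N(A)

print(np.allclose(A @ V2, 0.0))             # True: A V2 = 0
P = np.eye(4) - np.linalg.pinv(A) @ A       # I - A+ A
print(np.allclose(P, V2 @ V2.T))            # True: I - A+ A = V2 V2^T
```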
Example 6.8. Characterize all right inverses of a matrix $A \in \mathbb{R}^{m \times n}$; equivalently, find all solutions $R$ of the equation $A R = I_m$. Here, we write $I_m$ to emphasize the $m \times m$ identity matrix.

Solution: There exists a right inverse if and only if $\mathcal{R}(I_m) \subseteq \mathcal{R}(A)$ and this is equivalent to $A A^+ I_m = I_m$. Clearly, this can occur if and only if $\operatorname{rank}(A) = r = m$ (since $r \leq m$) and this is equivalent to $A$ being onto ($A^+$ is then a right inverse). All right inverses of $A$ are then of the form
\[
R = A^+ I_m + (I_n - A^+ A) Y = A^+ + (I - A^+ A) Y ,
\]
where $Y \in \mathbb{R}^{n \times m}$ is arbitrary. There is a unique right inverse if and only if $A^+ A = I$ ($\mathcal{N}(A) = 0$), in which case $A$ must be invertible and $R = A^{-1}$.

Example 6.9. Consider the system of linear first-order difference equations
\[
x_{k+1} = A x_k + B u_k \qquad (6.8)
\]
46 Chapter 6. Linear Equations
with A e R"
xn
and fieR"
xm
(rc>l,ra>l). The vector Jt* in linear system theory is
known as the state vector at time k while Uk is the input (control) vector. The general
solution of (6.8) is given by
for k > 1. We might now ask the question: Given X Q = 0, does there exist an input sequence
{uj } y~ Q such that x^ takes an arbitrary va
of reachability. Since m > 1, from the
see that (6.8) is reachable if and only if
[ Uj }
k
jj^ such that X k takes an arbitrary value in W ? In linear system theory, this is a question
of reachability. Since m > 1, from the fundamental Existence Theorem, Theorem 6.2, we
or, equivalently, if and only if
A related question is the following: Given an arbitrary initial vector X Q , does there ex
ist an input sequence {"y} "~ o such that x
n
= 0? In linear system theory, this is called
controllability. Again from Theorem 6.2, we see that (6.8) is controllable if and only if
Clearly, reachability always implies controllability and, if A is nonsingular, control
lability and reachability are equivalent. The matrices A = [ °
1
Q
1 and 5 = f ^ 1 provide an
example of a system that is controllable but not reachable.
The above are standard conditions with analogues for continuoustime models (i.e.,
linear differential equations). There are many other algebraically equivalent conditions.
Example 6.10. We now introduce an output vector y
k
to the system (6.8) of Example 6.9
by appending the equation
with C e R
pxn
and D € R
pxm
(p > 1). We can then pose some new questions about the
overall system that are dual in the systemtheoretic sense to reachability and controllability.
The answers are cast in terms that are dual in the linear algebra sense as well. The condition
dual to reachability is called observability: When does knowledge of {"
7
}"!Q and {y_ / } "~ o
suffice to determine (uniquely) Jt
0
? As a dual to controllability, we have the notion of
reconstructibility: When does knowledge of {w
y
} "~ Q and {;y/ } "Io suffice to determine
(uniquely) x
n
l The fundamental duality result from linear system theory is the following:
(A, B) is reachable [ controllable] if and only if (A
T
, B
T
] is observable [ reconstructive].
To derive a condition for observability, notice that
\[
y_k = C A^k x_0 + \sum_{j=0}^{k-1} C A^{k-1-j} B u_j + D u_k . \qquad (6.12)
\]
Thus,
\[
\begin{bmatrix} y_0 - D u_0 \\ y_1 - C B u_0 - D u_1 \\ \vdots \\ y_{n-1} - \sum_{j=0}^{n-2} C A^{n-2-j} B u_j - D u_{n-1} \end{bmatrix}
= \begin{bmatrix} C \\ C A \\ \vdots \\ C A^{n-1} \end{bmatrix} x_0 . \qquad (6.13)
\]
Let $v$ denote the (known) vector on the left-hand side of (6.13) and let $R$ denote the matrix on the right-hand side. Then, by definition, $v \in \mathcal{R}(R)$, so a solution exists. By the fundamental Uniqueness Theorem, Theorem 6.6, the solution is then unique if and only if $\mathcal{N}(R) = 0$, or, equivalently, if and only if
\[
\mathcal{N}\left( \begin{bmatrix} C \\ C A \\ \vdots \\ C A^{n-1} \end{bmatrix} \right) = 0 .
\]

6.3 A More General Matrix Linear Equation

Theorem 6.11. Let $A \in \mathbb{R}^{m \times n}$, $B \in \mathbb{R}^{m \times q}$, and $C \in \mathbb{R}^{p \times q}$. Then the equation
\[
A X C = B \qquad (6.14)
\]
has a solution if and only if $A A^+ B C^+ C = B$, in which case the general solution is of the form
\[
X = A^+ B C^+ + Y - A^+ A Y C C^+ , \qquad (6.15)
\]
where $Y \in \mathbb{R}^{n \times p}$ is arbitrary.

A compact matrix criterion for uniqueness of solutions to (6.14) requires the notion of the Kronecker product of matrices for its statement. Such a criterion ($C C^+ \otimes A^+ A = I$) is stated and proved in Theorem 13.27.
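Theorem 6.11 can be exercised numerically in the same spirit as Theorem 6.3. A minimal NumPy sketch; the test matrices are illustrative, and $B$ is built from a known $X_0$ to guarantee solvability.

```python
import numpy as np

rng = np.random.default_rng(10)
A = rng.standard_normal((4, 3))          # m x n
C = rng.standard_normal((2, 5))          # p x q
X0 = rng.standard_normal((3, 2))
B = A @ X0 @ C                           # guarantees A X C = B is solvable

Ap, Cp = np.linalg.pinv(A), np.linalg.pinv(C)
print(np.allclose(A @ Ap @ B @ Cp @ C, B))   # existence criterion holds

Y = rng.standard_normal((3, 2))              # arbitrary Y in R^{n x p}
X = Ap @ B @ Cp + Y - Ap @ A @ Y @ C @ Cp    # general solution (6.15)
print(np.allclose(A @ X @ C, B))             # True
```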
6.4 Some Useful and Interesting Inverses

In many applications, the coefficient matrices of interest are square and nonsingular. Listed below is a small collection of useful matrix identities, particularly for block matrices, associated with matrix inverses. In these identities, $A \in \mathbb{R}^{n \times n}$, $B \in \mathbb{R}^{n \times m}$, $C \in \mathbb{R}^{m \times n}$, and $D \in \mathbb{R}^{m \times m}$. Invertibility is assumed for any component or subblock whose inverse is indicated. Verification of each identity is recommended as an exercise for the reader.
1. $(A + B D C)^{-1} = A^{-1} - A^{-1} B (D^{-1} + C A^{-1} B)^{-1} C A^{-1}$.

This result is known as the Sherman-Morrison-Woodbury formula. It has many applications (and is frequently "rediscovered") including, for example, formulas for the inverse of a sum of matrices such as $(A + D)^{-1}$ or $(A^{-1} + D^{-1})^{-1}$. It also yields very efficient "updating" or "downdating" formulas in expressions such as $(A + x x^T)^{-1}$ (with symmetric $A \in \mathbb{R}^{n \times n}$ and $x \in \mathbb{R}^n$) that arise in optimization theory.

2. $\begin{bmatrix} I & B \\ 0 & I \end{bmatrix}^{-1} = \begin{bmatrix} I & -B \\ 0 & I \end{bmatrix}$.

3. $\begin{bmatrix} I & B \\ 0 & -I \end{bmatrix}^{-1} = \begin{bmatrix} I & B \\ 0 & -I \end{bmatrix}$, $\qquad \begin{bmatrix} I & 0 \\ C & -I \end{bmatrix}^{-1} = \begin{bmatrix} I & 0 \\ C & -I \end{bmatrix}$.

Both of these matrices satisfy the matrix equation $X^2 = I$ from which it is obvious that $X^{-1} = X$. Note that the positions of the $I$ and $-I$ blocks may be exchanged.

4. $\begin{bmatrix} A & B \\ 0 & D \end{bmatrix}^{-1} = \begin{bmatrix} A^{-1} & -A^{-1} B D^{-1} \\ 0 & D^{-1} \end{bmatrix}$.

5. $\begin{bmatrix} A & 0 \\ C & D \end{bmatrix}^{-1} = \begin{bmatrix} A^{-1} & 0 \\ -D^{-1} C A^{-1} & D^{-1} \end{bmatrix}$.

6. $(I + B C)^{-1} = I - B (I + C B)^{-1} C$.

7. $\begin{bmatrix} A & B \\ C & D \end{bmatrix}^{-1} = \begin{bmatrix} A^{-1} + A^{-1} B E C A^{-1} & -A^{-1} B E \\ -E C A^{-1} & E \end{bmatrix}$,

where $E = (D - C A^{-1} B)^{-1}$ ($E$ is the inverse of the Schur complement of $A$). This result follows easily from the block LU factorization in property 16 of Section 1.4.

8. $\begin{bmatrix} A & B \\ C & D \end{bmatrix}^{-1} = \begin{bmatrix} F & -F B D^{-1} \\ -D^{-1} C F & D^{-1} + D^{-1} C F B D^{-1} \end{bmatrix}$,

where $F = (A - B D^{-1} C)^{-1}$. This result follows easily from the block UL factorization in property 17 of Section 1.4.
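Identity 1 and its rank-one "updating" special case are quick to verify numerically. A minimal NumPy sketch with illustrative random matrices (the diagonal shifts simply keep everything comfortably nonsingular):

```python
import numpy as np

rng = np.random.default_rng(11)
n, m = 5, 2
A = rng.standard_normal((n, n)) + n * np.eye(n)
B = rng.standard_normal((n, m))
C = rng.standard_normal((m, n))
D = rng.standard_normal((m, m)) + m * np.eye(m)

Ai = np.linalg.inv(A)
lhs = np.linalg.inv(A + B @ D @ C)
rhs = Ai - Ai @ B @ np.linalg.inv(np.linalg.inv(D) + C @ Ai @ B) @ C @ Ai
print(np.allclose(lhs, rhs))    # True: Sherman-Morrison-Woodbury

# Rank-one update (A + x x^T)^{-1}
x = rng.standard_normal((n, 1))
lhs1 = np.linalg.inv(A + x @ x.T)
rhs1 = Ai - (Ai @ x @ x.T @ Ai) / (1.0 + float(x.T @ Ai @ x))
print(np.allclose(lhs1, rhs1))  # True
```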
EXERCISES

1. As in Example 6.8, characterize all left inverses of a matrix $A \in \mathbb{R}^{m \times n}$.

2. Let $A \in \mathbb{R}^{m \times n}$, $B \in \mathbb{R}^{m \times k}$ and suppose $A$ has an SVD as in Theorem 5.1. Assuming $\mathcal{R}(B) \subseteq \mathcal{R}(A)$, characterize all solutions of the matrix linear equation
\[
A X = B
\]
in terms of the SVD of $A$.

3. Let $x, y \in \mathbb{R}^n$ and suppose further that $x^T y \neq 1$. Show that
\[
(I - x y^T)^{-1} = I - \frac{1}{x^T y - 1} x y^T .
\]

4. Let $x, y \in \mathbb{R}^n$ and suppose further that $x^T y \neq 1$. Show that
\[
\begin{bmatrix} I & x \\ y^T & 1 \end{bmatrix}^{-1} = \begin{bmatrix} I + c x y^T & -c x \\ -c y^T & c \end{bmatrix} ,
\]
where $c = 1/(1 - x^T y)$.

5. Let $A \in \mathbb{R}^{n \times n}_n$ and let $A^{-1}$ have columns $c_1, \ldots, c_n$ and individual elements $\gamma_{ij}$. Assume that $\gamma_{ji} \neq 0$ for some $i$ and $j$. Show that the matrix $B = A - \frac{1}{\gamma_{ji}} e_i e_j^T$ (i.e., $A$ with $\frac{1}{\gamma_{ji}}$ subtracted from its $(ij)$th element) is singular.
Hint: Show that $c_i \in \mathcal{N}(B)$.

6. As in Example 6.10, check directly that the condition for reconstructibility takes the form
\[
\mathcal{N}\left( \begin{bmatrix} C \\ C A \\ \vdots \\ C A^{n-1} \end{bmatrix} \right) \subseteq \mathcal{N}(A^n) .
\]
Chapter 7

Projections, Inner Product Spaces, and Norms

7.1 Projections

Definition 7.1. Let $\mathcal{V}$ be a vector space with $\mathcal{V} = \mathcal{X} \oplus \mathcal{Y}$. By Theorem 2.26, every $v \in \mathcal{V}$ has a unique decomposition $v = x + y$ with $x \in \mathcal{X}$ and $y \in \mathcal{Y}$. Define $P_{\mathcal{X},\mathcal{Y}} : \mathcal{V} \to \mathcal{X} \subseteq \mathcal{V}$ by
\[
P_{\mathcal{X},\mathcal{Y}} \, v = x \quad \text{for all } v \in \mathcal{V} .
\]
$P_{\mathcal{X},\mathcal{Y}}$ is called the (oblique) projection on $\mathcal{X}$ along $\mathcal{Y}$.

Figure 7.1 displays the projection of $v$ on both $\mathcal{X}$ and $\mathcal{Y}$ in the case $\mathcal{V} = \mathbb{R}^2$.

[Figure 7.1. Oblique projections.]

Theorem 7.2. $P_{\mathcal{X},\mathcal{Y}}$ is linear and $P_{\mathcal{X},\mathcal{Y}}^2 = P_{\mathcal{X},\mathcal{Y}}$.

Theorem 7.3. A linear transformation $P$ is a projection if and only if it is idempotent, i.e., $P^2 = P$. Also, $P$ is a projection if and only if $I - P$ is a projection. In fact, $P_{\mathcal{Y},\mathcal{X}} = I - P_{\mathcal{X},\mathcal{Y}}$.

Proof: Suppose $P$ is a projection, say on $\mathcal{X}$ along $\mathcal{Y}$ (using the notation of Definition 7.1).
Let $v \in \mathcal{V}$ be arbitrary. Then $P v = P (x + y) = P x = x$. Moreover, $P^2 v = P P v = P x = x = P v$. Thus, $P^2 = P$. Conversely, suppose $P^2 = P$. Let $\mathcal{X} = \{v \in \mathcal{V} : P v = v\}$ and $\mathcal{Y} = \{v \in \mathcal{V} : P v = 0\}$. It is easy to check that $\mathcal{X}$ and $\mathcal{Y}$ are subspaces. We now prove that $\mathcal{V} = \mathcal{X} \oplus \mathcal{Y}$. First note that if $v \in \mathcal{X}$, then $P v = v$. If $v \in \mathcal{Y}$, then $P v = 0$. Hence if $v \in \mathcal{X} \cap \mathcal{Y}$, then $v = 0$. Now let $v \in \mathcal{V}$ be arbitrary. Then $v = P v + (I - P) v$. Let $x = P v$, $y = (I - P) v$. Then $P x = P^2 v = P v = x$ so $x \in \mathcal{X}$, while $P y = P (I - P) v = P v - P^2 v = 0$ so $y \in \mathcal{Y}$. Thus, $\mathcal{V} = \mathcal{X} \oplus \mathcal{Y}$ and the projection on $\mathcal{X}$ along $\mathcal{Y}$ is $P$. Essentially the same argument shows that $I - P$ is the projection on $\mathcal{Y}$ along $\mathcal{X}$. $\Box$

Definition 7.4. In the special case where $\mathcal{Y} = \mathcal{X}^\perp$, $P_{\mathcal{X},\mathcal{X}^\perp}$ is called an orthogonal projection and we then use the notation $P_{\mathcal{X}} = P_{\mathcal{X},\mathcal{X}^\perp}$.

Theorem 7.5. $P \in \mathbb{R}^{n \times n}$ is the matrix of an orthogonal projection (onto $\mathcal{R}(P)$) if and only if $P^2 = P = P^T$.

Proof: Let $P$ be an orthogonal projection (on $\mathcal{X}$, say, along $\mathcal{X}^\perp$) and let $x, y \in \mathbb{R}^n$ be arbitrary. Note that $(I - P) x = (I - P_{\mathcal{X},\mathcal{X}^\perp}) x = P_{\mathcal{X}^\perp,\mathcal{X}} x$ by Theorem 7.3. Thus, $(I - P) x \in \mathcal{X}^\perp$. Since $P y \in \mathcal{X}$, we have $(P y)^T (I - P) x = y^T P^T (I - P) x = 0$. Since $x$ and $y$ were arbitrary, we must have $P^T (I - P) = 0$. Hence $P^T = P^T P = P$, with the second equality following since $P^T P$ is symmetric. Conversely, suppose $P$ is a symmetric projection matrix and let $x$ be arbitrary. Write $x = P x + (I - P) x$. Then $x^T P^T (I - P) x = x^T P (I - P) x = 0$. Thus, since $P x \in \mathcal{R}(P)$, then $(I - P) x \in \mathcal{R}(P)^\perp$ and $P$ must be an orthogonal projection. $\Box$

7.1.1 The four fundamental orthogonal projections

Using the notation of Theorems 5.1 and 5.11, let $A \in \mathbb{R}^{m \times n}$ with SVD $A = U \Sigma V^T = U_1 S V_1^T$. Then
\[
P_{\mathcal{R}(A)} = A A^+ = U_1 U_1^T = \sum_{i=1}^{r} u_i u_i^T ,
\]
\[
P_{\mathcal{R}(A)^\perp} = I - A A^+ = U_2 U_2^T = \sum_{i=r+1}^{m} u_i u_i^T ,
\]
\[
P_{\mathcal{N}(A)} = I - A^+ A = V_2 V_2^T = \sum_{i=r+1}^{n} v_i v_i^T ,
\]
\[
P_{\mathcal{N}(A)^\perp} = A^+ A = V_1 V_1^T = \sum_{i=1}^{r} v_i v_i^T
\]
are easily checked to be (unique) orthogonal projections onto the respective four fundamental subspaces.
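These four projections, and the characterization of Theorem 7.5, are easy to confirm numerically. A minimal NumPy sketch with an illustrative rank-deficient test matrix:

```python
import numpy as np

rng = np.random.default_rng(12)
A = rng.standard_normal((4, 2)) @ rng.standard_normal((2, 3))  # 4 x 3, rank 2
Ap = np.linalg.pinv(A)
m, n = A.shape

P_range      = A @ Ap               # onto R(A)
P_range_perp = np.eye(m) - A @ Ap   # onto R(A)-perp = N(A^T)
P_null       = np.eye(n) - Ap @ A   # onto N(A)
P_null_perp  = Ap @ A               # onto N(A)-perp = R(A^T)

for P in (P_range, P_range_perp, P_null, P_null_perp):
    assert np.allclose(P @ P, P) and np.allclose(P, P.T)  # Theorem 7.5
print(np.allclose(P_range @ A, A))    # True: P_{R(A)} fixes the columns of A
print(np.allclose(A @ P_null, 0.0))   # True: A annihilates N(A)
```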
Example 7.6. Determine the orthogonal projection of a vector $v \in \mathbb{R}^n$ on another nonzero vector $w \in \mathbb{R}^n$.

Solution: Think of the vector $w$ as an element of the one-dimensional subspace $\mathcal{R}(w)$. Then the desired projection is simply
\[
P_{\mathcal{R}(w)} v = w w^+ v = \frac{w w^T v}{w^T w} \quad \text{(using Example 4.8)} = \left( \frac{w^T v}{w^T w} \right) w .
\]
Moreover, the vector $z$ that is orthogonal to $w$ and such that $v = P v + z$ is given by $z = P_{\mathcal{R}(w)^\perp} v = (I - P_{\mathcal{R}(w)}) v = v - \left( \frac{w^T v}{w^T w} \right) w$. See Figure 7.2. A direct calculation shows that $z$ and $w$ are, in fact, orthogonal:
\[
z^T w = v^T w - \frac{w^T v}{w^T w} \, w^T w = v^T w - w^T v = 0 .
\]

[Figure 7.2. Orthogonal projection on a "line."]
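A two-line NumPy sketch makes the computation concrete (the particular vectors are illustrative assumptions, not from the text):

```python
import numpy as np

v = np.array([2.0, 3.0, 4.0])
w = np.array([1.0, 1.0, 0.0])

Pv = (w @ v) / (w @ w) * w    # projection of v on w: here [2.5, 2.5, 0.0]
z = v - Pv
print(z @ w)                  # 0.0 (to roundoff): z is orthogonal to w
```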
Example 7.7. Recall the proof of Theorem 3.11. There, $\{v_1, \ldots, v_k\}$ was an orthonormal basis for a subset $\mathcal{S}$ of $\mathbb{R}^n$. An arbitrary vector $x \in \mathbb{R}^n$ was chosen and a formula for $x_1$ appeared rather mysteriously. The expression for $x_1$ is simply the orthogonal projection of $x$ on $\mathcal{S}$. Specifically,
\[
x_1 = P_{\mathcal{S}} \, x = \sum_{i=1}^{k} (v_i v_i^T) x = \sum_{i=1}^{k} (v_i^T x) v_i .
\]

Example 7.8. Recall the diagram of the four fundamental subspaces. The indicated direct sum decompositions of the domain $\mathbb{R}^n$ and codomain $\mathbb{R}^m$ are given easily as follows. Let $x \in \mathbb{R}^n$ be an arbitrary vector. Then
\[
x = P_{\mathcal{N}(A)^\perp} x + P_{\mathcal{N}(A)} x = A^+ A x + (I - A^+ A) x = V_1 V_1^T x + V_2 V_2^T x \quad (\text{recall } V V^T = I) .
\]
Similarly, let $y \in \mathbb{R}^m$ be an arbitrary vector. Then
\[
y = P_{\mathcal{R}(A)} y + P_{\mathcal{R}(A)^\perp} y = A A^+ y + (I - A A^+) y = U_1 U_1^T y + U_2 U_2^T y \quad (\text{recall } U U^T = I) .
\]

Example 7.9. Let
\[
A = \begin{bmatrix} 1 & 1 & 0 \\ 1 & 1 & 0 \end{bmatrix} .
\]
Then
\[
A^+ = \begin{bmatrix} 1/4 & 1/4 \\ 1/4 & 1/4 \\ 0 & 0 \end{bmatrix}
\]
and we can decompose the vector $[2 \ 3 \ 4]^T$ uniquely into the sum of a vector in $\mathcal{N}(A)^\perp$ and a vector in $\mathcal{N}(A)$, respectively, as follows:
\[
x = \begin{bmatrix} 2 \\ 3 \\ 4 \end{bmatrix} = A^+ A x + (I - A^+ A) x
= \begin{bmatrix} 1/2 & 1/2 & 0 \\ 1/2 & 1/2 & 0 \\ 0 & 0 & 0 \end{bmatrix} \begin{bmatrix} 2 \\ 3 \\ 4 \end{bmatrix}
+ \begin{bmatrix} 1/2 & -1/2 & 0 \\ -1/2 & 1/2 & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 2 \\ 3 \\ 4 \end{bmatrix}
= \begin{bmatrix} 5/2 \\ 5/2 \\ 0 \end{bmatrix} + \begin{bmatrix} -1/2 \\ 1/2 \\ 4 \end{bmatrix} .
\]

7.2 Inner Product Spaces

Definition 7.10. Let $\mathcal{V}$ be a vector space over $\mathbb{R}$. Then $\langle \cdot, \cdot \rangle : \mathcal{V} \times \mathcal{V} \to \mathbb{R}$ is a real inner product if

1. $\langle x, x \rangle \geq 0$ for all $x \in \mathcal{V}$ and $\langle x, x \rangle = 0$ if and only if $x = 0$.

2. $\langle x, y \rangle = \langle y, x \rangle$ for all $x, y \in \mathcal{V}$.

3. $\langle x, \alpha y_1 + \beta y_2 \rangle = \alpha \langle x, y_1 \rangle + \beta \langle x, y_2 \rangle$ for all $x, y_1, y_2 \in \mathcal{V}$ and for all $\alpha, \beta \in \mathbb{R}$.

Example 7.11. Let $\mathcal{V} = \mathbb{R}^n$. Then $\langle x, y \rangle = x^T y$ is the "usual" Euclidean inner product or dot product.

Example 7.12. Let $\mathcal{V} = \mathbb{R}^n$. Then $\langle x, y \rangle_Q = x^T Q y$, where $Q = Q^T > 0$ is an arbitrary $n \times n$ positive definite matrix, defines a "weighted" inner product.

Definition 7.13. If $A \in \mathbb{R}^{m \times n}$, then $A^T \in \mathbb{R}^{n \times m}$ is the unique linear transformation or map such that $\langle x, A y \rangle = \langle A^T x, y \rangle$ for all $x \in \mathbb{R}^m$ and for all $y \in \mathbb{R}^n$.
Similarly, let $y \in \mathbb{R}^m$ be an arbitrary vector. Then

$$y = P_{\mathcal{R}(A)}y + P_{\mathcal{R}(A)^\perp}y = AA^+y + (I - AA^+)y = U_1U_1^Ty + U_2U_2^Ty \quad (\text{recall } UU^T = I).$$
Example 7.9. Let

$$A = \begin{bmatrix} 1 & 1 & 0 \\ 1 & 1 & 0 \end{bmatrix}. \quad \text{Then} \quad A^+ = \begin{bmatrix} 1/4 & 1/4 \\ 1/4 & 1/4 \\ 0 & 0 \end{bmatrix},$$

and we can decompose the vector $[2\ 3\ 4]^T$ uniquely into the sum of a vector in $\mathcal{N}(A)^\perp$ and a vector in $\mathcal{N}(A)$, respectively, as follows:

$$\begin{bmatrix} 2 \\ 3 \\ 4 \end{bmatrix} = A^+Ax + (I - A^+A)x
= \begin{bmatrix} 1/2 & 1/2 & 0 \\ 1/2 & 1/2 & 0 \\ 0 & 0 & 0 \end{bmatrix}\begin{bmatrix} 2 \\ 3 \\ 4 \end{bmatrix}
+ \begin{bmatrix} 1/2 & -1/2 & 0 \\ -1/2 & 1/2 & 0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} 2 \\ 3 \\ 4 \end{bmatrix}
= \begin{bmatrix} 5/2 \\ 5/2 \\ 0 \end{bmatrix} + \begin{bmatrix} -1/2 \\ 1/2 \\ 4 \end{bmatrix}.$$
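A quick check of this decomposition in NumPy (assuming the matrix $A$ as reconstructed above):

```python
import numpy as np

A = np.array([[1., 1., 0.],
              [1., 1., 0.]])
x = np.array([2., 3., 4.])
Ap = np.linalg.pinv(A)            # equals [[1/4,1/4],[1/4,1/4],[0,0]]

x1 = Ap @ A @ x                   # component in N(A)-perp: [5/2, 5/2, 0]
x2 = x - x1                       # component in N(A):      [-1/2, 1/2, 4]

assert np.allclose(x1, [2.5, 2.5, 0.0])
assert np.allclose(A @ x2, 0.0)   # x2 really lies in N(A)
```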
7.2 Inner Product Spaces

Definition 7.10. Let $V$ be a vector space over $\mathbb{R}$. Then $\langle \cdot, \cdot \rangle : V \times V \to \mathbb{R}$ is a real inner product if

1. $\langle x, x \rangle \geq 0$ for all $x \in V$ and $\langle x, x \rangle = 0$ if and only if $x = 0$.
2. $\langle x, y \rangle = \langle y, x \rangle$ for all $x, y \in V$.
3. $\langle x, \alpha y_1 + \beta y_2 \rangle = \alpha\langle x, y_1 \rangle + \beta\langle x, y_2 \rangle$ for all $x, y_1, y_2 \in V$ and for all $\alpha, \beta \in \mathbb{R}$.

Example 7.11. Let $V = \mathbb{R}^n$. Then $\langle x, y \rangle = x^Ty$ is the "usual" Euclidean inner product or dot product.

Example 7.12. Let $V = \mathbb{R}^n$. Then $\langle x, y \rangle_Q = x^TQy$, where $Q = Q^T > 0$ is an arbitrary $n \times n$ positive definite matrix, defines a "weighted" inner product.

Definition 7.13. If $A \in \mathbb{R}^{m\times n}$, then $A^T \in \mathbb{R}^{n\times m}$ is the unique linear transformation or map such that $\langle x, Ay \rangle = \langle A^Tx, y \rangle$ for all $x \in \mathbb{R}^m$ and for all $y \in \mathbb{R}^n$.
It is easy to check that, with this more "abstract" definition of transpose, and if the $(i,j)$th element of $A$ is $a_{ij}$, then the $(i,j)$th element of $A^T$ is $a_{ji}$. It can also be checked that all the usual properties of the transpose hold, such as $(AB)^T = B^TA^T$. However, the definition above allows us to extend the concept of transpose to the case of weighted inner products in the following way. Suppose $A \in \mathbb{R}^{m\times n}$ and let $\langle \cdot, \cdot \rangle_Q$ and $\langle \cdot, \cdot \rangle_R$, with $Q$ and $R$ positive definite, be weighted inner products on $\mathbb{R}^m$ and $\mathbb{R}^n$, respectively. Then we can define the "weighted transpose" $A^{\#}$ as the unique map that satisfies

$$\langle x, Ay \rangle_Q = \langle A^{\#}x, y \rangle_R \quad \text{for all } x \in \mathbb{R}^m \text{ and for all } y \in \mathbb{R}^n.$$

By Example 7.12 above, we must then have $x^TQAy = x^T(A^{\#})^TRy$ for all $x, y$. Hence we must have $QA = (A^{\#})^TR$. Taking transposes (of the usual variety) gives $A^TQ = RA^{\#}$. Since $R$ is nonsingular, we find

$$A^{\#} = R^{-1}A^TQ.$$

We can also generalize the notion of orthogonality ($x^Ty = 0$) to $Q$-orthogonality ($Q$ is a positive definite matrix). Two vectors $x, y \in \mathbb{R}^n$ are $Q$-orthogonal (or conjugate with respect to $Q$) if $\langle x, y \rangle_Q = x^TQy = 0$. $Q$-orthogonality is an important tool used in studying conjugate direction methods in optimization theory.
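The formula $A^{\#} = R^{-1}A^TQ$ is easy to validate numerically. The sketch below uses randomly generated positive definite $Q$ and $R$ (our own illustrative choices) and checks the defining identity:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 4, 3
A = rng.standard_normal((m, n))

def spd(k):                      # a random symmetric positive definite matrix
    M = rng.standard_normal((k, k))
    return M @ M.T + k * np.eye(k)

Q, R = spd(m), spd(n)
A_sharp = np.linalg.solve(R, A.T @ Q)     # A# = R^{-1} A^T Q

x, y = rng.standard_normal(m), rng.standard_normal(n)
lhs = x @ Q @ (A @ y)            # <x, Ay>_Q
rhs = (A_sharp @ x) @ R @ y      # <A# x, y>_R
assert np.isclose(lhs, rhs)
```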
Definition 7.14. Let $V$ be a vector space over $\mathbb{C}$. Then $\langle \cdot, \cdot \rangle : V \times V \to \mathbb{C}$ is a complex inner product if

1. $\langle x, x \rangle \geq 0$ for all $x \in V$ and $\langle x, x \rangle = 0$ if and only if $x = 0$.
2. $\langle x, y \rangle = \overline{\langle y, x \rangle}$ for all $x, y \in V$.
3. $\langle x, \alpha y_1 + \beta y_2 \rangle = \alpha\langle x, y_1 \rangle + \beta\langle x, y_2 \rangle$ for all $x, y_1, y_2 \in V$ and for all $\alpha, \beta \in \mathbb{C}$.

Remark 7.15. We could use the notation $\langle \cdot, \cdot \rangle_{\mathbb{C}}$ to denote a complex inner product, but if the vectors involved are complex-valued, the complex inner product is to be understood. Note, too, from part 2 of the definition, that $\langle x, x \rangle$ must be real for all $x$.

Remark 7.16. Note from parts 2 and 3 of Definition 7.14 that we have

$$\langle \alpha x_1 + \beta x_2, y \rangle = \bar{\alpha}\langle x_1, y \rangle + \bar{\beta}\langle x_2, y \rangle.$$

Remark 7.17. The Euclidean inner product of $x, y \in \mathbb{C}^n$ is given by

$$\langle x, y \rangle = \sum_{i=1}^{n} \bar{x}_iy_i = x^Hy.$$

The conventional definition of the complex Euclidean inner product is $\langle x, y \rangle = y^Hx$ but we use its complex conjugate $x^Hy$ here for symmetry with the real case.

Remark 7.18. A weighted inner product can be defined as in the real case by $\langle x, y \rangle_Q = x^HQy$, for arbitrary $Q = Q^H > 0$. The notion of $Q$-orthogonality can be similarly generalized to the complex case.
Definition 7.19. A vector space $(V, \mathbb{F})$ endowed with a specific inner product is called an inner product space. If $\mathbb{F} = \mathbb{C}$, we call $V$ a complex inner product space. If $\mathbb{F} = \mathbb{R}$, we call $V$ a real inner product space.

Example 7.20.

1. Check that $V = \mathbb{R}^{n\times n}$ with the inner product $\langle A, B \rangle = \operatorname{Tr} A^TB$ is a real inner product space. Note that other choices are possible since by properties of the trace function, $\operatorname{Tr} A^TB = \operatorname{Tr} B^TA = \operatorname{Tr} AB^T = \operatorname{Tr} BA^T$.
2. Check that $V = \mathbb{C}^{n\times n}$ with the inner product $\langle A, B \rangle = \operatorname{Tr} A^HB$ is a complex inner product space. Again, other choices are possible.
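A short numerical confirmation of the trace identities in part 1; note that the norm this inner product induces is the Frobenius norm introduced in Section 7.4:

```python
import numpy as np

rng = np.random.default_rng(1)
A, B = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))

ip = np.trace(A.T @ B)
assert np.isclose(ip, np.trace(B.T @ A))    # Tr(A^T B) = Tr(B^T A)
assert np.isclose(ip, np.trace(A @ B.T))    # Tr(A^T B) = Tr(A B^T)
# Induced norm sqrt(<A, A>) is the Frobenius norm:
assert np.isclose(np.sqrt(np.trace(A.T @ A)), np.linalg.norm(A, 'fro'))
```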
Definition 7.21. Let $V$ be an inner product space. For $v \in V$, we define the norm (or length) of $v$ by $\|v\| = \sqrt{\langle v, v \rangle}$. This is called the norm induced by $\langle \cdot, \cdot \rangle$.

Example 7.22.

1. If $V = \mathbb{R}^n$ with the usual inner product, the induced norm is given by $\|v\| = \left(\sum_{i=1}^{n} v_i^2\right)^{1/2}$.
2. If $V = \mathbb{C}^n$ with the usual inner product, the induced norm is given by $\|v\| = \left(\sum_{i=1}^{n} |v_i|^2\right)^{1/2}$.
Theorem 7.23. Let $P$ be an orthogonal projection on an inner product space $V$. Then $\|Pv\| \leq \|v\|$ for all $v \in V$.

Proof: Since $P$ is an orthogonal projection, $P^2 = P = P^{\#}$. (Here, the notation $P^{\#}$ denotes the unique linear transformation that satisfies $\langle Pu, v \rangle = \langle u, P^{\#}v \rangle$ for all $u, v \in V$. If this seems a little too abstract, consider $V = \mathbb{R}^n$ (or $\mathbb{C}^n$), where $P^{\#}$ is simply the usual $P^T$ (or $P^H$).) Hence $\langle Pv, v \rangle = \langle P^2v, v \rangle = \langle Pv, P^{\#}v \rangle = \langle Pv, Pv \rangle = \|Pv\|^2 \geq 0$. Now $I - P$ is also a projection, so the above result applies and we get

$$0 \leq \langle (I-P)v, v \rangle = \langle v, v \rangle - \langle Pv, v \rangle = \|v\|^2 - \|Pv\|^2,$$

from which the theorem follows. $\Box$
Definition 7.24. The norm induced on an inner product space by the "usual" inner product is called the natural norm.

In case $V = \mathbb{C}^n$ or $V = \mathbb{R}^n$, the natural norm is also called the Euclidean norm. In the next section, other norms on these vector spaces are defined. A converse to the above procedure is also available. That is, given a norm defined by $\|x\| = \sqrt{\langle x, x \rangle}$, an inner product can be defined via the following.
Theorem 7.25 (Polarization Identity).

1. For $x, y \in \mathbb{R}^n$, an inner product is defined by

$$\langle x, y \rangle = x^Ty = \frac{\|x+y\|^2 - \|x\|^2 - \|y\|^2}{2}.$$

2. For $x, y \in \mathbb{C}^n$, an inner product is defined by

$$\langle x, y \rangle = x^Hy = \frac{\|x+y\|^2 - \|x-y\|^2}{4} + j\,\frac{\|x-jy\|^2 - \|x+jy\|^2}{4},$$

where $j = i = \sqrt{-1}$.
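Both identities are easy to spot-check numerically; the complex form above is the standard four-term polarization identity written for the convention $\langle x, y \rangle = x^Hy$:

```python
import numpy as np

rng = np.random.default_rng(2)
n2 = lambda v: np.linalg.norm(v)**2

# Real case.
x, y = rng.standard_normal(4), rng.standard_normal(4)
assert np.isclose(x @ y, (n2(x + y) - n2(x) - n2(y)) / 2)

# Complex case, with <x, y> = x^H y.
xc = rng.standard_normal(4) + 1j * rng.standard_normal(4)
yc = rng.standard_normal(4) + 1j * rng.standard_normal(4)
ip = (n2(xc + yc) - n2(xc - yc)) / 4 \
     + 1j * (n2(xc - 1j * yc) - n2(xc + 1j * yc)) / 4
assert np.isclose(np.vdot(xc, yc), ip)   # np.vdot conjugates its first argument
```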
7.3 Vector Norms
Definition 7.26. Let $(V, \mathbb{F})$ be a vector space. Then $\|\cdot\| : V \to \mathbb{R}$ is a vector norm if it satisfies the following three properties:

1. $\|x\| \geq 0$ for all $x \in V$ and $\|x\| = 0$ if and only if $x = 0$.
2. $\|\alpha x\| = |\alpha|\,\|x\|$ for all $x \in V$ and for all $\alpha \in \mathbb{F}$.
3. $\|x+y\| \leq \|x\| + \|y\|$ for all $x, y \in V$. (This is called the triangle inequality, as seen readily from the usual diagram illustrating the sum of two vectors in $\mathbb{R}^2$.)
Remark 7.27. It is convenient in the remainder of this section to state results for complex-valued vectors. The specialization to the real case is obvious.

Definition 7.28. A vector space $(V, \mathbb{F})$ is said to be a normed linear space if and only if there exists a vector norm $\|\cdot\| : V \to \mathbb{R}$ satisfying the three conditions of Definition 7.26.

Example 7.29.

1. For $x \in \mathbb{C}^n$, the Hölder norms, or p-norms, are defined by

$$\|x\|_p = \left(\sum_{i=1}^{n} |x_i|^p\right)^{1/p}, \quad 1 \leq p \leq +\infty.$$

Special cases:

(a) $\|x\|_1 = \sum_{i=1}^{n} |x_i|$ (the "Manhattan" norm).
(b) $\|x\|_2 = \left(\sum_{i=1}^{n} |x_i|^2\right)^{1/2} = (x^Hx)^{1/2}$ (the Euclidean norm).
(c) $\|x\|_\infty = \max_{1\leq i\leq n} |x_i| = \lim_{p\to+\infty} \|x\|_p$. (The second equality is a theorem that requires proof.)
2. Some weighted p-norms:

(a) $\|x\|_{1,D} = \sum_{i=1}^{n} d_i|x_i|$, where $d_i > 0$.
(b) $\|x\|_{2,Q} = (x^HQx)^{1/2}$, where $Q = Q^H > 0$ (this norm is more commonly denoted $\|\cdot\|_Q$).

3. On the vector space $(C[t_0, t_1], \mathbb{R})$, define the vector norm

$$\|f\| = \max_{t_0 \leq t \leq t_1} |f(t)|.$$

On the vector space $((C[t_0, t_1])^n, \mathbb{R})$, define the vector norm

$$\|f\|_\infty = \max_{t_0 \leq t \leq t_1} \|f(t)\|_\infty.$$
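A short sketch evaluating the finite-dimensional cases of Example 7.29, including a numerical look at the limit $\|x\|_p \to \|x\|_\infty$ and a weighted 2-norm of our own construction:

```python
import numpy as np

x = np.array([3., -4., 1.])
print(np.linalg.norm(x, 1))        # 8.0       ("Manhattan" norm)
print(np.linalg.norm(x, 2))        # sqrt(26)  (Euclidean norm)
print(np.linalg.norm(x, np.inf))   # 4.0

# ||x||_p approaches ||x||_inf = 4 as p grows:
for p in (1, 2, 10, 100):
    print(p, np.sum(np.abs(x)**p)**(1.0 / p))

# A weighted 2-norm ||x||_{2,Q} with Q = diag(d), d_i > 0:
Q = np.diag([1., 2., 3.])
print(np.sqrt(x @ Q @ x))
```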
Theorem 7.30 (Hölder Inequality). Let $x, y \in \mathbb{C}^n$. Then

$$|x^Hy| \leq \|x\|_p\,\|y\|_q, \quad \frac{1}{p} + \frac{1}{q} = 1.$$

A particular case of the Hölder inequality is of special interest.

Theorem 7.31 (Cauchy–Bunyakovsky–Schwarz Inequality). Let $x, y \in \mathbb{C}^n$. Then

$$|x^Hy| \leq \|x\|_2\,\|y\|_2$$

with equality if and only if $x$ and $y$ are linearly dependent.

Proof: Consider the matrix $[x\ y] \in \mathbb{C}^{n\times 2}$. Since

$$[x\ y]^H[x\ y] = \begin{bmatrix} x^Hx & x^Hy \\ y^Hx & y^Hy \end{bmatrix}$$

is a nonnegative definite matrix, its determinant must be nonnegative. In other words, $0 \leq (x^Hx)(y^Hy) - (x^Hy)(y^Hx)$. Since $y^Hx = \overline{x^Hy}$, we see immediately that $|x^Hy| \leq \|x\|_2\,\|y\|_2$. $\Box$

Note: This is not the classical algebraic proof of the Cauchy–Bunyakovsky–Schwarz (CBS) inequality (see, e.g., [20, p. 217]). However, it is particularly easy to remember.

Remark 7.32. The angle $\theta$ between two nonzero vectors $x, y \in \mathbb{C}^n$ may be defined by $\cos\theta = \frac{|x^Hy|}{\|x\|_2\|y\|_2}$, $0 \leq \theta \leq \frac{\pi}{2}$. The CBS inequality is thus equivalent to the statement $0 \leq \cos\theta \leq 1$.
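A numerical spot-check of the CBS inequality and the induced angle, using random complex vectors (illustrative data only):

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.standard_normal(5) + 1j * rng.standard_normal(5)
y = rng.standard_normal(5) + 1j * rng.standard_normal(5)

lhs = abs(np.vdot(x, y))                       # |x^H y|
rhs = np.linalg.norm(x) * np.linalg.norm(y)    # ||x||_2 ||y||_2
assert lhs <= rhs + 1e-12

theta = np.arccos(lhs / rhs)                   # angle between x and y
assert 0.0 <= theta <= np.pi / 2 + 1e-12

# Equality holds for linearly dependent vectors, e.g., y = 2j * x:
z = 2j * x
assert np.isclose(abs(np.vdot(x, z)), np.linalg.norm(x) * np.linalg.norm(z))
```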
Remark 7.33. Theorem 7.31 and Remark 7.32 are true for general inner product spaces.
Remark 7.34. The norm $\|\cdot\|_2$ is unitarily invariant, i.e., if $U \in \mathbb{C}^{n\times n}$ is unitary, then $\|Ux\|_2 = \|x\|_2$ (Proof: $\|Ux\|_2^2 = x^HU^HUx = x^Hx = \|x\|_2^2$). However, $\|\cdot\|_1$ and $\|\cdot\|_\infty$
are not unitarily invariant. Similar remarks apply to the unitary invariance of norms of real vectors under orthogonal transformation.

Remark 7.35. If $x, y \in \mathbb{C}^n$ are orthogonal, then we have the Pythagorean Identity

$$\|x \pm y\|_2^2 = \|x\|_2^2 + \|y\|_2^2,$$

the proof of which follows easily from $\|z\|_2^2 = z^Hz$.
Theorem 7.36. All norms on $\mathbb{C}^n$ are equivalent; i.e., there exist constants $c_1$, $c_2$ (possibly depending on $n$) such that

$$c_1\|x\|_\alpha \leq \|x\|_\beta \leq c_2\|x\|_\alpha \quad \text{for all } x \in \mathbb{C}^n.$$

Example 7.37. For $x \in \mathbb{C}^n$, the following inequalities are all tight bounds; i.e., there exist vectors $x$ for which equality holds:

$$\|x\|_1 \leq \sqrt{n}\,\|x\|_2, \qquad \|x\|_1 \leq n\,\|x\|_\infty;$$
$$\|x\|_2 \leq \|x\|_1, \qquad\;\; \|x\|_2 \leq \sqrt{n}\,\|x\|_\infty;$$
$$\|x\|_\infty \leq \|x\|_1, \qquad \|x\|_\infty \leq \|x\|_2.$$
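Three of these bounds, together with vectors achieving equality, can be verified as follows:

```python
import numpy as np

n = 4
ones, e1 = np.ones(n), np.eye(n)[0]
nrm = np.linalg.norm

# ||x||_1 <= sqrt(n) ||x||_2, with equality for the all-ones vector:
assert np.isclose(nrm(ones, 1), np.sqrt(n) * nrm(ones, 2))
# ||x||_2 <= ||x||_1, with equality for a unit coordinate vector:
assert np.isclose(nrm(e1, 2), nrm(e1, 1))
# ||x||_1 <= n ||x||_inf, with equality again for the all-ones vector:
assert np.isclose(nrm(ones, 1), n * nrm(ones, np.inf))
```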
Finally, we conclude this section with a theorem about convergence of vectors. Convergence of a sequence of vectors to some limit vector can be converted into a statement about convergence of real numbers, i.e., convergence in terms of vector norms.

Theorem 7.38. Let $\|\cdot\|$ be a vector norm and suppose $v, v^{(1)}, v^{(2)}, \ldots \in \mathbb{C}^n$. Then

$$\lim_{k\to+\infty} v^{(k)} = v \quad \text{if and only if} \quad \lim_{k\to+\infty} \|v^{(k)} - v\| = 0.$$
7.4 Matrix Norms

In this section we introduce the concept of matrix norm. As with vectors, the motivation for using matrix norms is to have a notion of either the size of or the nearness of matrices. The former notion is useful for perturbation analysis, while the latter is needed to make sense of "convergence" of matrices. Attention is confined to the vector space $(\mathbb{R}^{m\times n}, \mathbb{R})$ since that is what arises in the majority of applications. Extension to the complex case is straightforward and essentially obvious.

Definition 7.39. $\|\cdot\| : \mathbb{R}^{m\times n} \to \mathbb{R}$ is a matrix norm if it satisfies the following three properties:

1. $\|A\| \geq 0$ for all $A \in \mathbb{R}^{m\times n}$ and $\|A\| = 0$ if and only if $A = 0$.
2. $\|\alpha A\| = |\alpha|\,\|A\|$ for all $A \in \mathbb{R}^{m\times n}$ and for all $\alpha \in \mathbb{R}$.
3. $\|A+B\| \leq \|A\| + \|B\|$ for all $A, B \in \mathbb{R}^{m\times n}$. (As with vectors, this is called the triangle inequality.)
Example 7.40. Let $A \in \mathbb{R}^{m\times n}$. Then the Frobenius norm (or matrix Euclidean norm) is defined by

$$\|A\|_F = \left(\sum_{i=1}^{m}\sum_{j=1}^{n} a_{ij}^2\right)^{1/2} = \left(\sum_{i=1}^{r} \sigma_i^2(A)\right)^{1/2} = \left(\operatorname{Tr}(A^TA)\right)^{1/2} = \left(\operatorname{Tr}(AA^T)\right)^{1/2}$$

(where $r = \operatorname{rank}(A)$).
Example 7.41. Let $A \in \mathbb{R}^{m\times n}$. Then the matrix p-norms are defined by

$$\|A\|_p = \max_{x\neq 0} \frac{\|Ax\|_p}{\|x\|_p} = \max_{\|x\|_p = 1} \|Ax\|_p.$$

The following three special cases are important because they are "computable." Each is a theorem and requires a proof.

1. The "maximum column sum" norm is

$$\|A\|_1 = \max_{1\leq j\leq n} \sum_{i=1}^{m} |a_{ij}|.$$

2. The "maximum row sum" norm is

$$\|A\|_\infty = \max_{1\leq i\leq m} \sum_{j=1}^{n} |a_{ij}|.$$

3. The spectral norm is

$$\|A\|_2 = \lambda_{\max}^{1/2}(A^TA) = \lambda_{\max}^{1/2}(AA^T) = \sigma_1(A).$$

Note: $\|A^+\|_2 = 1/\sigma_r(A)$, where $r = \operatorname{rank}(A)$.
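All four "computable" norms are available directly in NumPy; the sketch below also confirms the singular-value characterizations on a small example of our own:

```python
import numpy as np

A = np.array([[1., 2.],
              [-3., 4.]])

nF   = np.linalg.norm(A, 'fro')               # sqrt(Tr(A^T A)) = sqrt(30)
n1   = np.linalg.norm(A, 1)                   # max column sum  = 6
ninf = np.linalg.norm(A, np.inf)              # max row sum     = 7
n2   = np.linalg.norm(A, 2)                   # largest singular value

s = np.linalg.svd(A, compute_uv=False)
assert np.isclose(n2, s[0])
assert np.isclose(nF, np.sqrt(np.sum(s**2)))  # Frobenius from singular values
assert np.isclose(np.linalg.norm(np.linalg.pinv(A), 2), 1.0 / s[-1])
```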
Example 7.42. Let $A \in \mathbb{R}^{m\times n}$. The Schatten p-norms are defined by

$$\|A\|_{S,p} = \left(\sigma_1^p + \cdots + \sigma_r^p\right)^{1/p}.$$

Some special cases of Schatten p-norms are equal to norms defined previously. For example, $\|\cdot\|_{S,2} = \|\cdot\|_F$ and $\|\cdot\|_{S,\infty} = \|\cdot\|_2$. The norm $\|\cdot\|_{S,1}$ is often called the trace norm.

Example 7.43. Let $A \in \mathbb{R}^{m\times n}$. Then "mixed" norms can also be defined by

$$\|A\|_{p,q} = \max_{x\neq 0} \frac{\|Ax\|_p}{\|x\|_q}.$$

Example 7.44. The "matrix analogue of the vector 1-norm," $\|A\|_S = \sum_{i,j} |a_{ij}|$, is a norm.
The concept of a matrix norm alone is not altogether useful since it does not allow us
to estimate the size of a matrix product AB in terms of the sizes of A and B individually.
Notice that this difficulty did not arise for vectors, although there are analogues for, e.g., inner products or outer products of vectors. We thus need the following definition.

Definition 7.45. Let $A \in \mathbb{R}^{m\times n}$, $B \in \mathbb{R}^{n\times k}$. Then the norms $\|\cdot\|_\alpha$, $\|\cdot\|_\beta$, and $\|\cdot\|_\gamma$ are mutually consistent if $\|AB\|_\alpha \leq \|A\|_\beta\,\|B\|_\gamma$. A matrix norm $\|\cdot\|$ is said to be consistent if $\|AB\| \leq \|A\|\,\|B\|$ whenever the matrix product is defined.

Example 7.46.

1. $\|\cdot\|_F$ and $\|\cdot\|_p$ for all $p$ are consistent matrix norms.
2. The "mixed" norm

$$\|A\|_{\infty,1} = \max_{x\neq 0} \frac{\|Ax\|_\infty}{\|x\|_1} = \max_{i,j} |a_{ij}|$$

is a matrix norm but it is not consistent. For example, take $A = B = \begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}$. Then $\|AB\|_{\infty,1} = 2$ while $\|A\|_{\infty,1}\,\|B\|_{\infty,1} = 1$.
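The failure of consistency in part 2 is easy to reproduce numerically:

```python
import numpy as np

def mixed_norm(A):
    # max_{x != 0} ||Ax||_inf / ||x||_1, which equals the largest |a_ij|
    return np.max(np.abs(A))

A = np.ones((2, 2))
B = np.ones((2, 2))
print(mixed_norm(A @ B))                  # 2.0
print(mixed_norm(A) * mixed_norm(B))      # 1.0  -> ||AB|| > ||A|| ||B||
```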
The p-norms are examples of matrix norms that are subordinate to (or induced by) a vector norm, i.e.,

$$\|A\| = \max_{x\neq 0} \frac{\|Ax\|}{\|x\|} = \max_{\|x\|=1} \|Ax\|$$

(or, more generally, $\|A\|_{p,q} = \max_{x\neq 0} \frac{\|Ax\|_p}{\|x\|_q}$). For such subordinate norms, also called operator norms, we clearly have $\|Ax\| \leq \|A\|\,\|x\|$. Since $\|ABx\| \leq \|A\|\,\|Bx\| \leq \|A\|\,\|B\|\,\|x\|$, it follows that all subordinate norms are consistent.

Theorem 7.47. There exists a vector $x^*$ such that $\|Ax^*\| = \|A\|\,\|x^*\|$ if the matrix norm is subordinate to the vector norm.

Theorem 7.48. If $\|\cdot\|_m$ is a consistent matrix norm, there exists a vector norm $\|\cdot\|_v$ consistent with it, i.e., $\|Ax\|_v \leq \|A\|_m\,\|x\|_v$.

Not every consistent matrix norm is subordinate to a vector norm. For example, consider $\|\cdot\|_F$. Then $\|Ax\|_2 \leq \|A\|_F\,\|x\|_2$, so $\|\cdot\|_2$ is consistent with $\|\cdot\|_F$, but there does not exist a vector norm $\|\cdot\|$ such that $\|A\|_F$ is given by $\max_{x\neq 0} \frac{\|Ax\|}{\|x\|}$.
Useful Results
The following miscellaneous results about matrix norms are collected for future reference.
The interested reader is invited to prove each of them as an exercise.
1. $\|I_n\|_p = 1$ for all $p$, while $\|I_n\|_F = \sqrt{n}$.

2. For $A \in \mathbb{R}^{n\times n}$, the following inequalities are all tight, i.e., there exist matrices $A$ for which equality holds:

$$\|A\|_1 \leq \sqrt{n}\,\|A\|_2, \qquad \|A\|_1 \leq n\,\|A\|_\infty, \qquad \|A\|_1 \leq \sqrt{n}\,\|A\|_F;$$
$$\|A\|_2 \leq \sqrt{n}\,\|A\|_1, \qquad \|A\|_2 \leq \sqrt{n}\,\|A\|_\infty, \qquad \|A\|_2 \leq \|A\|_F;$$
$$\|A\|_\infty \leq n\,\|A\|_1, \qquad \|A\|_\infty \leq \sqrt{n}\,\|A\|_2, \qquad \|A\|_\infty \leq \sqrt{n}\,\|A\|_F;$$
$$\|A\|_F \leq \sqrt{n}\,\|A\|_1, \qquad \|A\|_F \leq \sqrt{n}\,\|A\|_2, \qquad \|A\|_F \leq \sqrt{n}\,\|A\|_\infty.$$
3. For $A \in \mathbb{R}^{m\times n}$,

$$\max_{i,j} |a_{ij}| \leq \|A\|_2 \leq \sqrt{mn}\,\max_{i,j} |a_{ij}|.$$

4. The norms $\|\cdot\|_F$ and $\|\cdot\|_2$ (as well as all the Schatten p-norms, but not necessarily other p-norms) are unitarily invariant; i.e., for all $A \in \mathbb{R}^{m\times n}$ and for all orthogonal matrices $Q \in \mathbb{R}^{m\times m}$ and $Z \in \mathbb{R}^{n\times n}$, $\|QAZ\|_\alpha = \|A\|_\alpha$ for $\alpha = 2$ or $F$.
Convergence
The following theorem uses matrix norms to convert a statement about convergence of a
sequence of matrices into a statement about the convergence of an associated sequence of
scalars.
Theorem 7.49. Let $\|\cdot\|$ be a matrix norm and suppose $A, A^{(1)}, A^{(2)}, \ldots \in \mathbb{R}^{m\times n}$. Then

$$\lim_{k\to+\infty} A^{(k)} = A \quad \text{if and only if} \quad \lim_{k\to+\infty} \|A^{(k)} - A\| = 0.$$
EXERCISES

1. If $P$ is an orthogonal projection, prove that $P^+ = P$.

2. Suppose $P$ and $Q$ are orthogonal projections and $P + Q = I$. Prove that $P - Q$ must be an orthogonal matrix.

3. Prove that $I - A^+A$ is an orthogonal projection. Also, prove directly that $V_2V_2^T$ is an orthogonal projection, where $V_2$ is defined as in Theorem 5.1.

4. Suppose that a matrix $A \in \mathbb{R}^{m\times n}$ has linearly independent columns. Prove that the orthogonal projection onto the space spanned by these column vectors is given by the matrix $P = A(A^TA)^{-1}A^T$.

5. Find the (orthogonal) projection of the vector $[2\ 3\ 4]^T$ onto the subspace of $\mathbb{R}^3$ spanned by the plane $3x - y + 2z = 0$.

6. Prove that $\mathbb{R}^{n\times n}$ with the inner product $\langle A, B \rangle = \operatorname{Tr} A^TB$ is a real inner product space.

7. Show that the matrix norms $\|\cdot\|_2$ and $\|\cdot\|_F$ are unitarily invariant.

8. Definition: Let $A \in \mathbb{R}^{n\times n}$ and denote its set of eigenvalues (not necessarily distinct) by $\{\lambda_1, \ldots, \lambda_n\}$. The spectral radius of $A$ is the scalar

$$\rho(A) = \max_i |\lambda_i|.$$
Let
A = [ ~ 0 ~ ] .
14 12 5
Determine $\|A\|_F$, $\|A\|_1$, $\|A\|_2$, $\|A\|_\infty$, and $\rho(A)$.
9. Let

$$A = \begin{bmatrix} 8 & 1 & 6 \\ 3 & 5 & 7 \\ 4 & 9 & 2 \end{bmatrix}.$$

Determine $\|A\|_F$, $\|A\|_1$, $\|A\|_2$, $\|A\|_\infty$, and $\rho(A)$. (An $n \times n$ matrix, all of whose columns and rows as well as main diagonal and antidiagonal sum to $s = n(n^2+1)/2$, is called a "magic square" matrix. If $M$ is a magic square matrix, it can be proved that $\|M\|_p = s$ for all $p$.)
10. Let $A = xy^T$, where both $x, y \in \mathbb{R}^n$ are nonzero. Determine $\|A\|_F$, $\|A\|_1$, $\|A\|_2$, and $\|A\|_\infty$ in terms of $\|x\|_\alpha$ and/or $\|y\|_\beta$, where $\alpha$ and $\beta$ take the value 1, 2, or $\infty$ as appropriate.
Chapter 8

Linear Least Squares Problems

8.1 The Linear Least Squares Problem

Problem: Suppose $A \in \mathbb{R}^{m\times n}$ with $m \geq n$ and $b \in \mathbb{R}^m$ is a given vector. The linear least squares problem consists of finding an element of the set

$$\mathcal{X} = \{x \in \mathbb{R}^n : \rho(x) = \|Ax - b\|_2 \text{ is minimized}\}.$$

Solution: The set $\mathcal{X}$ has a number of easily verified properties (a numerical sketch follows the list):

1. A vector $x \in \mathcal{X}$ if and only if $A^Tr = 0$, where $r = b - Ax$ is the residual associated with $x$. The equations $A^Tr = 0$ can be rewritten in the form $A^TAx = A^Tb$ and the latter form is commonly known as the normal equations, i.e., $x \in \mathcal{X}$ if and only if $x$ is a solution of the normal equations. For further details, see Section 8.2.
2. A vector $x \in \mathcal{X}$ if and only if $x$ is of the form

$$x = A^+b + (I - A^+A)y, \quad \text{where } y \in \mathbb{R}^n \text{ is arbitrary}. \tag{8.1}$$

To see why this must be so, write the residual $r$ in the form

$$r = (b - P_{\mathcal{R}(A)}b) + (P_{\mathcal{R}(A)}b - Ax).$$

Now, $(P_{\mathcal{R}(A)}b - Ax)$ is clearly in $\mathcal{R}(A)$, while

$$(b - P_{\mathcal{R}(A)}b) = (I - P_{\mathcal{R}(A)})b = P_{\mathcal{R}(A)^\perp}b \in \mathcal{R}(A)^\perp,$$

so these two vectors are orthogonal. Hence,

$$\|r\|_2^2 = \|b - Ax\|_2^2 = \|b - P_{\mathcal{R}(A)}b\|_2^2 + \|P_{\mathcal{R}(A)}b - Ax\|_2^2$$

from the Pythagorean identity (Remark 7.35). Thus, $\|Ax - b\|_2^2$ (and hence $\rho(x) = \|Ax - b\|_2$) assumes its minimum value if and only if

$$Ax = P_{\mathcal{R}(A)}b = AA^+b \tag{8.2}$$
and this equation always has a solution since $AA^+b \in \mathcal{R}(A)$. By Theorem 6.3, all solutions of (8.2) are of the form

$$x = A^+AA^+b + (I - A^+A)y = A^+b + (I - A^+A)y,$$

where $y \in \mathbb{R}^n$ is arbitrary. The minimum value of $\rho(x)$ is then clearly equal to

$$\|b - P_{\mathcal{R}(A)}b\|_2 = \|(I - AA^+)b\|_2 \leq \|b\|_2,$$

the last inequality following by Theorem 7.23.
3. $\mathcal{X}$ is convex. To see why, consider two arbitrary vectors $x_1 = A^+b + (I - A^+A)y$ and $x_2 = A^+b + (I - A^+A)z$ in $\mathcal{X}$. Let $\theta \in [0, 1]$. Then the convex combination $\theta x_1 + (1 - \theta)x_2 = A^+b + (I - A^+A)(\theta y + (1 - \theta)z)$ is clearly in $\mathcal{X}$.

4. $\mathcal{X}$ has a unique element $x^*$ of minimal 2-norm. In fact, $x^* = A^+b$ is the unique vector that solves this "double minimization" problem, i.e., $x^*$ minimizes the residual $\rho(x)$ and is the vector of minimum 2-norm that does so. This follows immediately from convexity or directly from the fact that all $x \in \mathcal{X}$ are of the form (8.1) and

$$\|x\|_2^2 = \|A^+b\|_2^2 + \|(I - A^+A)y\|_2^2,$$

which follows since the two vectors are orthogonal.

5. There is a unique solution to the least squares problem, i.e., $\mathcal{X} = \{x^*\} = \{A^+b\}$, if and only if $A^+A = I$ or, equivalently, if and only if $\operatorname{rank}(A) = n$.
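The sketch below (with random illustrative data of our own) exercises properties 1, 2, and 4 for a rank-deficient $A$: every element of $\mathcal{X}$ produced by (8.1) gives the same residual, satisfies the normal equations, and is no shorter than $x^* = A^+b$:

```python
import numpy as np

rng = np.random.default_rng(4)
m, n = 6, 4
A = rng.standard_normal((m, 2)) @ rng.standard_normal((2, n))  # rank 2 < n
b = rng.standard_normal(m)
Ap = np.linalg.pinv(A)

x_star = Ap @ b                                   # minimum-2-norm solution
y = rng.standard_normal(n)
x_other = x_star + (np.eye(n) - Ap @ A) @ y       # another element of X, by (8.1)

r1, r2 = b - A @ x_star, b - A @ x_other
assert np.isclose(np.linalg.norm(r1), np.linalg.norm(r2))  # same residual
assert np.allclose(A.T @ r1, 0)                   # normal equations hold
assert np.linalg.norm(x_star) <= np.linalg.norm(x_other) + 1e-12
```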
Just as for the solution of linear equations, we can generalize the linear least squares problem to the matrix case.

Theorem 8.1. Let $A \in \mathbb{R}^{m\times n}$ and $B \in \mathbb{R}^{m\times k}$. The general solution to

$$\min_{X\in\mathbb{R}^{n\times k}} \|AX - B\|_2$$

is of the form

$$X = A^+B + (I - A^+A)Y,$$

where $Y \in \mathbb{R}^{n\times k}$ is arbitrary. The unique solution of minimum 2-norm or F-norm is $X = A^+B$.

Remark 8.2. Notice that solutions of the linear least squares problem look exactly the same as solutions of the linear system $AX = B$. The only difference is that in the case of linear least squares solutions, there is no "existence condition" such as $\mathcal{R}(B) \subseteq \mathcal{R}(A)$. If the existence condition happens to be satisfied, then equality holds and the least squares
residual is 0. Of all solutions that give a residual of 0, the unique solution $X = A^+B$ has minimum 2-norm or F-norm.

Remark 8.3. If we take $B = I_m$ in Theorem 8.1, then $X = A^+$ can be interpreted as saying that the Moore–Penrose pseudoinverse of $A$ is the best (in the matrix 2-norm sense) matrix such that $AX$ approximates the identity.

Remark 8.4. Many other interesting and useful approximation results are available for the matrix 2-norm (and F-norm). One such is the following. Let $A \in \mathbb{R}^{m\times n}_r$ with SVD

$$A = U\Sigma V^T = \sum_{i=1}^{r} \sigma_iu_iv_i^T.$$

Then a best rank $k$ approximation to $A$ for $1 \leq k \leq r$, i.e., a solution to

$$\min_{M\in\mathbb{R}^{m\times n}_k} \|A - M\|_2,$$

is given by

$$M_k = \sum_{i=1}^{k} \sigma_iu_iv_i^T.$$

The special case in which $m = n$ and $k = n - 1$ gives a nearest singular matrix to $A \in \mathbb{R}^{n\times n}_n$.
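This best rank-$k$ result is easy to confirm numerically; the minimum value of $\|A - M\|_2$ is $\sigma_{k+1}(A)$ (random illustrative data):

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((5, 4))
U, s, Vt = np.linalg.svd(A)

k = 2
Mk = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]          # best rank-k approximation
assert np.isclose(np.linalg.norm(A - Mk, 2), s[k])  # error equals sigma_{k+1}
```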
8.2 Geometric Solution

Looking at the schematic provided in Figure 8.1, it is apparent that minimizing $\|Ax - b\|_2$ is equivalent to finding the vector $x \in \mathbb{R}^n$ for which $p = Ax$ is closest to $b$ (in the Euclidean norm sense). Clearly, $r = b - Ax$ must be orthogonal to $\mathcal{R}(A)$. Thus, if $Ay$ is an arbitrary vector in $\mathcal{R}(A)$ (i.e., $y$ is arbitrary), we must have

$$0 = (Ay)^T(b - Ax) = y^TA^T(b - Ax) = y^T(A^Tb - A^TAx).$$

Since $y$ is arbitrary, we must have $A^Tb - A^TAx = 0$ or $A^TAx = A^Tb$.

Special case: If $A$ is full (column) rank, then $x = (A^TA)^{-1}A^Tb$.
8.3 Linear Regression and Other Linear Least Squares Problems

8.3.1 Example: Linear regression

Suppose we have $m$ measurements $(t_1, y_1), \ldots, (t_m, y_m)$ for which we hypothesize a linear (affine) relationship

$$y = \alpha t + \beta \tag{8.3}$$
Figure 8.1. Projection of $b$ on $\mathcal{R}(A)$.
for certain constants $\alpha$ and $\beta$. One way to solve this problem is to find the line that best fits the data in the least squares sense; i.e., with the model (8.3), we have

$$y_1 = \alpha t_1 + \beta + \delta_1,$$
$$y_2 = \alpha t_2 + \beta + \delta_2,$$
$$\vdots$$
$$y_m = \alpha t_m + \beta + \delta_m,$$

where $\delta_1, \ldots, \delta_m$ are "errors" and we wish to minimize $\delta_1^2 + \cdots + \delta_m^2$. Geometrically, we are trying to find the best line that minimizes the (sum of squares of the) distances from the given data points. See, for example, Figure 8.2.

Figure 8.2. Simple linear regression.
Note that distances are measured in the vertical sense from the points to the line (as indicated, for example, for the point $(t_1, y_1)$). However, other criteria are possible. For example, one could measure the distances in the horizontal sense, or the perpendicular distance from the points to the line could be used. The latter is called total least squares. Instead of 2-norms, one could also use 1-norms or $\infty$-norms. The latter two are computationally
much more difficult to handle, and thus we present only the more tractable 2-norm case in the text that follows.

The $m$ "error equations" can be written in matrix form as

$$y = Ax + \delta,$$

where

$$y = \begin{bmatrix} y_1 \\ \vdots \\ y_m \end{bmatrix}, \quad \delta = \begin{bmatrix} \delta_1 \\ \vdots \\ \delta_m \end{bmatrix}, \quad A = \begin{bmatrix} t_1 & 1 \\ \vdots & \vdots \\ t_m & 1 \end{bmatrix}, \quad x = \begin{bmatrix} \alpha \\ \beta \end{bmatrix}.$$

We then want to solve the problem

$$\min_x \delta^T\delta = \min_x (Ax - y)^T(Ax - y)$$

or, equivalently,

$$\min_x \|\delta\|_2 = \min_x \|Ax - y\|_2. \tag{8.4}$$

Solution: $x = \begin{bmatrix} \alpha \\ \beta \end{bmatrix}$ is a solution of the normal equations $A^TAx = A^Ty$ where, for the special form of the matrices above, we have

$$A^TA = \begin{bmatrix} \sum_i t_i^2 & \sum_i t_i \\ \sum_i t_i & m \end{bmatrix} \quad \text{and} \quad A^Ty = \begin{bmatrix} \sum_i t_iy_i \\ \sum_i y_i \end{bmatrix}.$$

The solution for the parameters $\alpha$ and $\beta$ can then be written

$$\begin{bmatrix} \alpha \\ \beta \end{bmatrix} = (A^TA)^{-1}A^Ty.$$
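A minimal regression sketch with hypothetical data; it forms the normal equations for the $[t_i\ 1]$ model and cross-checks against a library least squares routine (but see Section 8.4 for why the normal equations can be a poor numerical method in general):

```python
import numpy as np

t = np.array([1., 2., 3., 4.])
y = np.array([2., 1., 3., 5.])

A = np.column_stack([t, np.ones_like(t)])   # rows [t_i, 1]
alpha, beta = np.linalg.solve(A.T @ A, A.T @ y)

# The same solution via a numerically preferable route:
assert np.allclose([alpha, beta], np.linalg.lstsq(A, y, rcond=None)[0])
```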
8.3.2 Other least squares problems

Suppose the hypothesized model is not the linear equation (8.3) but rather is of the form

$$y = f(t) = c_1\phi_1(t) + \cdots + c_n\phi_n(t). \tag{8.5}$$

In (8.5) the $\phi_i(t)$ are given (basis) functions and the $c_i$ are constants to be determined to minimize the least squares error. The matrix problem is still (8.4), where we now have

$$A = \begin{bmatrix} \phi_1(t_1) & \cdots & \phi_n(t_1) \\ \vdots & & \vdots \\ \phi_1(t_m) & \cdots & \phi_n(t_m) \end{bmatrix}, \quad x = \begin{bmatrix} c_1 \\ \vdots \\ c_n \end{bmatrix}.$$
An important special case of (8.5) is least squares polynomial approximation, which corresponds to choosing $\phi_i(t) = t^{i-1}$, $i \in \{1, \ldots, n\}$, although this choice can lead to computational
difficulties because of numerical ill conditioning for large $n$. Numerically better approaches are based on orthogonal polynomials, piecewise polynomial functions, splines, etc.

The key feature in (8.5) is that the coefficients $c_i$ appear linearly. The basis functions $\phi_i$ can be arbitrarily nonlinear. Sometimes a problem in which the $c_i$'s appear nonlinearly can be converted into a linear problem. For example, if the fitting function is of the form $y = f(t) = c_1e^{c_2t}$, then taking logarithms yields the equation $\log y = \log c_1 + c_2t$. Then defining $\tilde{y} = \log y$, $\tilde{c}_1 = \log c_1$, and $\tilde{c}_2 = c_2$ results in a standard linear least squares problem.
8.4 Least Squares and Singular Value Decomposition

In the numerical linear algebra literature (e.g., [4], [7], [11], [23]), it is shown that solution of linear least squares problems via the normal equations can be a very poor numerical method in finite-precision arithmetic. Since the standard Kalman filter essentially amounts to sequential updating of normal equations, it can be expected to exhibit such poor numerical behavior in practice (and it does). Better numerical methods are based on algorithms that work directly and solely on $A$ itself rather than $A^TA$. Two basic classes of algorithms are based on SVD and QR (orthogonal-upper triangular) factorization, respectively. The former is much more expensive but is generally more reliable and offers considerable theoretical insight.
In this section we investigate solution of the linear least squares problem

$$\min_x \|Ax - b\|_2, \quad A \in \mathbb{R}^{m\times n},\ b \in \mathbb{R}^m, \tag{8.6}$$

via the SVD. Specifically, we assume that $A$ has an SVD given by $A = U\Sigma V^T = U_1SV_1^T$ as in Theorem 5.1. We now note that

$$\|Ax - b\|_2^2 = \|U\Sigma V^Tx - b\|_2^2$$
$$= \|\Sigma V^Tx - U^Tb\|_2^2 \quad \text{since } \|\cdot\|_2 \text{ is unitarily invariant}$$
$$= \|\Sigma z - c\|_2^2 \quad \text{where } z = V^Tx,\ c = U^Tb$$
$$= \left\|\begin{bmatrix} Sz_1 \\ 0 \end{bmatrix} - \begin{bmatrix} c_1 \\ c_2 \end{bmatrix}\right\|_2^2$$
$$= \|Sz_1 - c_1\|_2^2 + \|c_2\|_2^2.$$

The last equality follows from the fact that if $v = \begin{bmatrix} v_1 \\ v_2 \end{bmatrix}$, then $\|v\|_2^2 = \|v_1\|_2^2 + \|v_2\|_2^2$ (note that orthogonality is not what is used here; the subvectors can have different lengths). This explains why it is convenient to work above with the square of the norm rather than the norm. As far as the minimization is concerned, the two are equivalent. In fact, the last quantity above is clearly minimized by taking $z_1 = S^{-1}c_1$. The subvector $z_2$ is arbitrary, while the minimum value of $\|Ax - b\|_2^2$ is $\|c_2\|_2^2$.
Now transform back to the original coordinates:

$$x = Vz = [V_1\ V_2]\begin{bmatrix} z_1 \\ z_2 \end{bmatrix} = V_1z_1 + V_2z_2 = V_1S^{-1}c_1 + V_2z_2 = V_1S^{-1}U_1^Tb + V_2z_2.$$

The last equality follows from

$$c = U^Tb = \begin{bmatrix} U_1^Tb \\ U_2^Tb \end{bmatrix} = \begin{bmatrix} c_1 \\ c_2 \end{bmatrix}.$$

Note that since $z_2$ is arbitrary, $V_2z_2$ is an arbitrary vector in $\mathcal{R}(V_2) = \mathcal{N}(A)$. Thus, $x$ has been written in the form $x = A^+b + (I - A^+A)y$, where $y \in \mathbb{R}^n$ is arbitrary. This agrees, of course, with (8.1).
The minimum value of the least squares residual is

$$\min_x \|Ax - b\|_2 = \|c_2\|_2 = \|U_2^Tb\|_2,$$

and we clearly have that

minimum least squares residual is 0 $\iff$ $b$ is orthogonal to all vectors in $U_2$
$\iff$ $b$ is orthogonal to all vectors in $\mathcal{R}(A)^\perp$
$\iff$ $b \in \mathcal{R}(A)$.

Another expression for the minimum residual is $\|(I - AA^+)b\|_2$. This follows easily since

$$\|(I - AA^+)b\|_2^2 = \|U_2U_2^Tb\|_2^2 = b^TU_2U_2^TU_2U_2^Tb = b^TU_2U_2^Tb = \|U_2^Tb\|_2^2.$$

Finally, an important special case of the linear least squares problem is the so-called full-rank problem, i.e., $A \in \mathbb{R}^{m\times n}_n$. In this case the SVD of $A$ is given by $A = U\Sigma V^T = [U_1\ U_2]\begin{bmatrix} S \\ 0 \end{bmatrix}V_1^T$, and there is thus "no $V_2$ part" to the solution.
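The SVD recipe above ($z_1 = S^{-1}c_1$, $x = V_1S^{-1}U_1^Tb$, residual $\|U_2^Tb\|_2$) can be implemented in a few lines. Random illustrative data is used, and the rank threshold is our own choice:

```python
import numpy as np

rng = np.random.default_rng(6)
A = rng.standard_normal((6, 3)) @ np.diag([1., 1e-3, 0.]) @ rng.standard_normal((3, 3))
b = rng.standard_normal(6)

U, s, Vt = np.linalg.svd(A)
r = int(np.sum(s > 1e-10 * s[0]))          # numerical rank
z1 = (U[:, :r].T @ b) / s[:r]              # z1 = S^{-1} c1
x = Vt[:r, :].T @ z1                       # minimum-norm solution V1 S^{-1} U1^T b

assert np.allclose(x, np.linalg.pinv(A, rcond=1e-10) @ b)
res = np.linalg.norm(U[:, r:].T @ b)       # ||c2||_2 = ||U2^T b||_2
assert np.isclose(res, np.linalg.norm(A @ x - b))
```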
8.5 Least Squares and QR Factorization

In this section, we again look at the solution of the linear least squares problem (8.6) but this time in terms of the QR factorization. This matrix factorization is much cheaper to compute than an SVD and, with appropriate numerical enhancements, can be quite reliable.

To simplify the exposition, we add the simplifying assumption that $A$ has full column rank, i.e., $A \in \mathbb{R}^{m\times n}_n$. It is then possible, via a sequence of so-called Householder or Givens transformations, to reduce $A$ in the following way. A finite sequence of simple orthogonal row transformations (of Householder or Givens type) can be performed on $A$ to reduce it to triangular form. If we label the product of such orthogonal row transformations as the orthogonal matrix $Q^T \in \mathbb{R}^{m\times m}$, we have

$$Q^TA = \begin{bmatrix} R \\ 0 \end{bmatrix}, \tag{8.7}$$
72 Chapter 8. Linear Least Squares Problems
where R E is upper triangular. Now write Q = [QI Qz], where QI E ffi.mxn and
Qz E ffi.m x (mn). Both Q I and Qz have orthonormal columns. Multiplying through by Q
in (8.7), we see that
(8.8)
= [QI Qz] [ ]
= QIR.
(8.9)
Any of (8.7), (8.8), or (8.9) are variously referred to as QR factorizations of A. Note that
(8.9) is essentially what is accomplished by the GramSchmidt process, i.e., by writing
AR
1
= QI we see that a "triangular" linear combination (given by the coefficients of
R
I
) of the columns of A yields the orthonormal columns of Q I.
Now note that

$$\|Ax - b\|_2^2 = \|Q^TAx - Q^Tb\|_2^2 \quad \text{since } \|\cdot\|_2 \text{ is unitarily invariant}$$
$$= \left\|\begin{bmatrix} R \\ 0 \end{bmatrix}x - \begin{bmatrix} c_1 \\ c_2 \end{bmatrix}\right\|_2^2 \quad \text{where } \begin{bmatrix} c_1 \\ c_2 \end{bmatrix} = \begin{bmatrix} Q_1^Tb \\ Q_2^Tb \end{bmatrix}$$
$$= \|Rx - c_1\|_2^2 + \|c_2\|_2^2.$$

The last quantity above is clearly minimized by taking $x = R^{-1}c_1$ and the minimum residual is $\|c_2\|_2$. Equivalently, we have $x = R^{-1}Q_1^Tb = A^+b$ and the minimum residual is $\|Q_2^Tb\|_2$.
EXERCISES

1. For $A \in \mathbb{R}^{m\times n}$, $b \in \mathbb{R}^m$, and any $y \in \mathbb{R}^n$, check directly that $(I - A^+A)y$ and $A^+b$ are orthogonal vectors.

2. Consider the following set of measurements $(x_i, y_i)$:

$$(1, 2), \quad (2, 1), \quad (3, 3).$$

(a) Find the best (in the 2-norm sense) line of the form $y = \alpha x + \beta$ that fits this data.
(b) Find the best (in the 2-norm sense) line of the form $x = \alpha y + \beta$ that fits this data.

3. Suppose $q_1$ and $q_2$ are two orthonormal vectors and $b$ is a fixed vector, all in $\mathbb{R}^n$.

(a) Find the optimal linear combination $\alpha q_1 + \beta q_2$ that is closest to $b$ (in the 2-norm sense).
(b) Let $r$ denote the "error vector" $b - \alpha q_1 - \beta q_2$. Show that $r$ is orthogonal to both $q_1$ and $q_2$.
4. Find all solutions of the linear least squares problem

$$\min_x \|Ax - b\|_2$$
when A = [
5. Consider the problem of finding the minimum 2-norm solution of the linear least squares problem

$$\min_x \|Ax - b\|_2$$
when A = ] and b = [ ! 1 The solution is
(a) Consider a perturbation $E_1$ of $A$, where $\delta$ is a small positive number. Solve the perturbed version of the above problem,

$$\min_y \|A_1y - b\|_2,$$

where $A_1 = A + E_1$. What happens to $\|x^* - y\|_2$ as $\delta$ approaches 0?

(b) Now consider the perturbation $E_2$ of $A$, where again $\delta$ is a small positive number. Solve the perturbed problem

$$\min_z \|A_2z - b\|_2,$$

where $A_2 = A + E_2$. What happens to $\|x^* - z\|_2$ as $\delta$ approaches 0?
6. Use the four Penrose conditions and the fact that $Q_1$ has orthonormal columns to verify that if $A \in \mathbb{R}^{m\times n}_n$ can be factored in the form (8.9), then $A^+ = R^{-1}Q_1^T$.

7. Let $A \in \mathbb{R}^{n\times n}$, not necessarily nonsingular, and suppose $A = QR$, where $Q$ is orthogonal. Prove that $A^+ = R^+Q^T$.
Chapter 9

Eigenvalues and Eigenvectors

9.1 Fundamental Definitions and Properties

Definition 9.1. A nonzero vector x ∈ C^n is a right eigenvector of A ∈ C^{n×n} if there exists a scalar λ ∈ C, called an eigenvalue, such that

    Ax = λx.    (9.1)

Similarly, a nonzero vector y ∈ C^n is a left eigenvector corresponding to an eigenvalue μ if

    y^H A = μ y^H.    (9.2)

By taking Hermitian transposes in (9.1), we see immediately that x^H is a left eigenvector of A^H associated with λ̄. Note that if x [y] is a right [left] eigenvector of A, then so is ax [ay] for any nonzero scalar a ∈ C. One often-used scaling for an eigenvector is a = 1/||x|| so that the scaled eigenvector has norm 1. The 2-norm is the most common norm used for such scaling.

Definition 9.2. The polynomial π(λ) = det(A − λI) is called the characteristic polynomial of A. (Note that the characteristic polynomial can also be defined as det(λI − A). This results in at most a change of sign and, as a matter of convenience, we use both forms throughout the text.)

The following classical theorem can be very useful in hand calculation. It can be proved easily from the Jordan canonical form to be discussed in the text to follow (see, for example, [21]) or directly using elementary properties of inverses and determinants (see, for example, [3]).

Theorem 9.3 (Cayley-Hamilton). For any A ∈ C^{n×n}, π(A) = 0.

Example 9.4. Let A = [...]. Then π(λ) = λ² + 2λ − 3. It is an easy exercise to verify that π(A) = A² + 2A − 3I = 0.
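Since the entries of the matrix in Example 9.4 did not reproduce legibly here, the following NumPy sketch checks the Cayley-Hamilton theorem on a stand-in matrix, chosen (as an assumption, not the book's example) to be the companion matrix of π(λ) = λ² + 2λ − 3:

    import numpy as np

    A = np.array([[0.0, 3.0],
                  [1.0, -2.0]])           # companion matrix of lambda^2 + 2*lambda - 3
    print(np.poly(A))                     # [1. 2. -3.]: coefficients of pi(lambda)
    print(A @ A + 2 * A - 3 * np.eye(2))  # pi(A) = 0, up to roundoff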
It can be proved from elementary properties of determinants that if A ∈ C^{n×n}, then π(λ) is a polynomial of degree n. Thus, the Fundamental Theorem of Algebra says that
π(λ) has n roots, possibly repeated. These roots, as solutions of the determinant equation

    π(λ) = det(A − λI) = 0,    (9.3)

are the eigenvalues of A and imply the singularity of the matrix A − λI, and hence further guarantee the existence of corresponding nonzero eigenvectors.

Definition 9.5. The spectrum of A ∈ C^{n×n} is the set of all eigenvalues of A, i.e., the set of all roots of its characteristic polynomial π(λ). The spectrum of A is denoted Λ(A).

Let the eigenvalues of A ∈ C^{n×n} be denoted λ1, ..., λn. Then if we write (9.3) in the form

    π(λ) = det(A − λI) = (λ1 − λ) ··· (λn − λ)    (9.4)

and set λ = 0 in this identity, we get the interesting fact that det(A) = λ1 · λ2 ··· λn (see also Theorem 9.25).

If A ∈ R^{n×n}, then π(λ) has real coefficients. Hence the roots of π(λ), i.e., the eigenvalues of A, must occur in complex conjugate pairs.

Example 9.6. Let α, β ∈ R and let A = [α β; −β α]. Then π(λ) = λ² − 2αλ + α² + β² and A has eigenvalues α ± βj (where j = i = √−1).
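A quick numerical check of Example 9.6, with the illustrative values α = 1, β = 2 chosen here arbitrarily:

    import numpy as np

    alpha, beta = 1.0, 2.0
    A = np.array([[alpha, beta],
                  [-beta, alpha]])
    print(np.linalg.eigvals(A))   # [1.+2.j 1.-2.j], i.e., alpha +/- beta*j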
If A ∈ R^{n×n}, then there is an easily checked relationship between the left and right eigenvectors of A and A^T (take Hermitian transposes of both sides of (9.2)). Specifically, if y is a left eigenvector of A corresponding to λ ∈ Λ(A), then y is a right eigenvector of A^T corresponding to λ̄ ∈ Λ(A). Note, too, that by elementary properties of the determinant, we always have Λ(A) = Λ(A^T), but that Λ(A) = Λ̄(A) only if A ∈ R^{n×n}.

Definition 9.7. If λ is a root of multiplicity m of π(λ), we say that λ is an eigenvalue of A of algebraic multiplicity m. The geometric multiplicity of λ is the number of associated independent eigenvectors = n − rank(A − λI) = dim N(A − λI).

If λ ∈ Λ(A) has algebraic multiplicity m, then 1 ≤ dim N(A − λI) ≤ m. Thus, if we denote the geometric multiplicity of λ by g, then we must have 1 ≤ g ≤ m.

Definition 9.8. A matrix A ∈ R^{n×n} is said to be defective if it has an eigenvalue whose geometric multiplicity is not equal to (i.e., less than) its algebraic multiplicity. Equivalently, A is said to be defective if it does not have n linearly independent (right or left) eigenvectors.

From the Cayley-Hamilton Theorem, we know that π(A) = 0. However, it is possible for A to satisfy a lower-order polynomial. For example, if A = [1 0; 0 1], then A satisfies (λ − 1)² = 0. But it also clearly satisfies the smaller degree polynomial equation (λ − 1) = 0.

Definition 9.9. The minimal polynomial of A ∈ R^{n×n} is the polynomial α(λ) of least degree such that α(A) = 0.

It can be shown that α(λ) is essentially unique (unique if we force the coefficient of the highest power of λ to be +1, say; such a polynomial is said to be monic and we generally write α(λ) as a monic polynomial throughout the text). Moreover, it can also be
shown that α(λ) divides every nonzero polynomial β(λ) for which β(A) = 0. In particular, α(λ) divides π(λ).
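The distinction between α(λ) and π(λ) is easy to see numerically. In the sketch below (a hypothetical illustration, not from the text), A consists of a 2×2 Jordan block for the eigenvalue 1 together with a 1×1 block, so π(λ) = (λ − 1)³ while α(λ) = (λ − 1)²:

    import numpy as np

    A = np.array([[1.0, 1.0, 0.0],
                  [0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0]])
    N = A - np.eye(3)
    print(np.allclose(N, 0))      # False: (lambda - 1) does not annihilate A
    print(np.allclose(N @ N, 0))  # True: (lambda - 1)^2 does, and it divides pi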
There is an algorithm to determine α(λ) directly (without knowing eigenvalues and associated eigenvector structure). Unfortunately, this algorithm, called the Bezout algorithm, is numerically unstable.

Example 9.10. The above definitions are illustrated below for a series of matrices, each of which has an eigenvalue 2 of algebraic multiplicity 4, i.e., π(λ) = (λ − 2)⁴. We denote the geometric multiplicity by g.

    A = [2 1 0 0; 0 2 1 0; 0 0 2 1; 0 0 0 2] has α(λ) = (λ − 2)⁴ and g = 1.

    A = [2 1 0 0; 0 2 1 0; 0 0 2 0; 0 0 0 2] has α(λ) = (λ − 2)³ and g = 2.

    A = [2 1 0 0; 0 2 0 0; 0 0 2 0; 0 0 0 2] has α(λ) = (λ − 2)² and g = 3.

    A = [2 0 0 0; 0 2 0 0; 0 0 2 0; 0 0 0 2] has α(λ) = (λ − 2) and g = 4.

At this point, one might speculate that g plus the degree of α must always be five. Unfortunately, such is not the case. The matrix

    A = [2 1 0 0; 0 2 0 0; 0 0 2 1; 0 0 0 2]

has α(λ) = (λ − 2)² and g = 2.
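The geometric multiplicities quoted in Example 9.10 can be confirmed with a rank computation; here is a check of the last matrix above:

    import numpy as np

    A = np.array([[2, 1, 0, 0],
                  [0, 2, 0, 0],
                  [0, 0, 2, 1],
                  [0, 0, 0, 2]], dtype=float)
    N = A - 2 * np.eye(4)
    print(4 - np.linalg.matrix_rank(N))  # 2 = g, the geometric multiplicity
    print(np.allclose(N @ N, 0))         # True: alpha(lambda) = (lambda - 2)^2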
Theorem 9.11. Let A ∈ C^{n×n} and let λi be an eigenvalue of A with corresponding right eigenvector xi. Furthermore, let yj be a left eigenvector corresponding to any λj ∈ Λ(A) such that λj ≠ λi. Then yj^H xi = 0.

Proof: Since Axi = λi xi,

    yj^H A xi = λi yj^H xi.    (9.5)
Similarly, since yj^H A = λj yj^H,

    yj^H A xi = λj yj^H xi.    (9.6)

Subtracting (9.6) from (9.5), we find 0 = (λi − λj) yj^H xi. Since λi − λj ≠ 0, we must have yj^H xi = 0.  □

The proof of Theorem 9.11 is very similar to two other fundamental and important results.

Theorem 9.12. Let A ∈ C^{n×n} be Hermitian, i.e., A = A^H. Then all eigenvalues of A must be real.

Proof: Suppose (λ, x) is an arbitrary eigenvalue/eigenvector pair such that Ax = λx. Then

    x^H A x = λ x^H x.    (9.7)

Taking Hermitian transposes in (9.7) yields x^H A^H x = λ̄ x^H x. Using the fact that A is Hermitian, we have that λ̄ x^H x = λ x^H x. However, since x is an eigenvector, we have x^H x ≠ 0, from which we conclude λ̄ = λ, i.e., λ is real.  □

Theorem 9.13. Let A ∈ C^{n×n} be Hermitian and suppose λ and μ are distinct eigenvalues of A with corresponding right eigenvectors x and z, respectively. Then x and z must be orthogonal.

Proof: Premultiply the equation Ax = λx by z^H to get z^H A x = λ z^H x. Take the Hermitian transpose of this equation and use the facts that A is Hermitian and λ is real to get x^H A z = λ x^H z. Premultiply the equation Az = μz by x^H to get x^H A z = μ x^H z = λ x^H z. Since λ ≠ μ, we must have that x^H z = 0, i.e., the two vectors must be orthogonal.  □
Let us now return to the general case.

Theorem 9.14. Let A ∈ C^{n×n} have distinct eigenvalues λ1, ..., λn with corresponding right eigenvectors x1, ..., xn. Then {x1, ..., xn} is a linearly independent set. The same result holds for the corresponding left eigenvectors.

Proof: For the proof see, for example, [21, p. 118].  □

If A ∈ C^{n×n} has distinct eigenvalues, and if λi ∈ Λ(A), then by Theorem 9.11, xi is orthogonal to all yj's for which j ≠ i. However, it cannot be the case that yi^H xi = 0 as well, or else xi would be orthogonal to n linearly independent vectors (by Theorem 9.14) and would thus have to be 0, contradicting the fact that it is an eigenvector. Since yi^H xi ≠ 0 for each i, we can choose the normalization of the xi's, or the yi's, or both, so that yi^H xi = 1 for i = 1, ..., n.
Theorem 9.15. Let A ∈ C^{n×n} have distinct eigenvalues λ1, ..., λn and let the corresponding right eigenvectors form a matrix X = [x1, ..., xn]. Similarly, let Y = [y1, ..., yn] be the matrix of corresponding left eigenvectors. Furthermore, suppose that the left and right eigenvectors have been normalized so that yi^H xi = 1, i = 1, ..., n. Finally, let Λ = diag(λ1, ..., λn) ∈ C^{n×n}. Then Axi = λi xi, i = 1, ..., n, can be written in matrix form as

    A X = X Λ    (9.8)

while yi^H xj = δij, i, j = 1, ..., n, is expressed by the equation

    Y^H X = I.    (9.9)

These matrix equations can be combined to yield the following matrix factorizations:

    X^{-1} A X = Λ = Y^H A X    (9.10)

and

    A = X Λ X^{-1} = X Λ Y^H = Σ_{i=1}^n λi xi yi^H.    (9.11)
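The factorizations (9.10) and (9.11) are easy to verify in floating point. In this sketch the test matrix is random (so its eigenvalues are distinct with probability one), and the rows of X^{-1} serve as the normalized left eigenvectors yi^H:

    import numpy as np

    rng = np.random.default_rng(2)
    A = rng.standard_normal((3, 3))
    lam, X = np.linalg.eig(A)
    YH = np.linalg.inv(X)                         # Y^H, normalized so Y^H X = I
    print(np.allclose(YH @ A @ X, np.diag(lam)))  # (9.10)
    print(np.allclose(A, X @ np.diag(lam) @ YH))  # (9.11)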
Example 9.16. Let A ∈ R^{3×3} be given by

    A = [...].
Then π(λ) = det(A − λI) = −(λ³ + 4λ² + 9λ + 10) = −(λ + 2)(λ² + 2λ + 5), from which we find Λ(A) = {−2, −1 ± 2j}. We can now find the right and left eigenvectors corresponding to these eigenvalues.

For λ1 = −2, solve the 3×3 linear system (A − (−2)I)x1 = 0 to get x1 = [...]. Note that one component of x1 can be set arbitrarily, and this then determines the other two (since dim N(A − (−2)I) = 1). To get the corresponding left eigenvector y1, solve the linear system y1^H (A + 2I) = 0 to get y1 = [...]. This time we have chosen the arbitrary scale factor for y1 so that y1^H x1 = 1.

For λ2 = −1 + 2j, solve the linear system (A − (−1 + 2j)I)x2 = 0 to get

    x2 = [3 + j; 3 − j; 2].
Solve the linear system y2^H (A − (−1 + 2j)I) = 0 and normalize y2 so that y2^H x2 = 1 to get y2 = [...].

For λ3 = −1 − 2j, we could proceed to solve linear systems as for λ2. However, we can also note that x3 = x̄2 and y3 = ȳ2. To see this, use the fact that λ3 = λ̄2 and simply conjugate the equation Ax2 = λ2 x2 to get Ax̄2 = λ̄2 x̄2. A similar argument yields the result for left eigenvectors.

Now define the matrix X = [x1, x2, x̄2] of right eigenvectors. It is then easy to verify that X^{-1} = Y^H. Other results in Theorem 9.15 can also be verified. For example,

    X^{-1} A X = Λ = [−2 0 0; 0 −1+2j 0; 0 0 −1−2j].

Finally, note that we could have solved directly only for x1 and x2 (and x3 = x̄2). Then, instead of determining the yi's directly, we could have found them instead by computing X^{-1} and reading off its rows.

Example 9.17. Let A ∈ R^{3×3} be given by A = [...]. Then π(λ) = det(A − λI) = −(λ³ + 8λ² + 19λ + 12) = −(λ + 1)(λ + 3)(λ + 4), from which we find Λ(A) = {−1, −3, −4}. Proceeding as in the previous example, it is straightforward to compute the right and left eigenvector matrices X = [...] and Y = [...].
We also have X^{-1}AX = Λ = diag(−1, −3, −4), which is equivalent to the dyadic expansion

    A = Σ_{i=1}^3 λi xi yi^H = (−1) x1 y1^H + (−3) x2 y2^H + (−4) x3 y3^H.

Theorem 9.18. Eigenvalues (but not eigenvectors) are invariant under a similarity transformation T.

Proof: Suppose (λ, x) is an eigenvalue/eigenvector pair such that Ax = λx. Then, since T is nonsingular, we have the equivalent statement (T^{-1}AT)(T^{-1}x) = λ(T^{-1}x), from which the theorem statement follows. For left eigenvectors we have a similar statement, namely y^H A = λ y^H if and only if (T^H y)^H (T^{-1}AT) = λ (T^H y)^H.  □

Remark 9.19. If f is an analytic function (e.g., f(x) is a polynomial, or e^x, or sin x, or, in general, representable by a power series Σ_{n=0}^∞ an x^n), then it is easy to show that the eigenvalues of f(A) (defined as Σ_{n=0}^∞ an A^n) are f(λi), but f(A) does not necessarily have all the same eigenvectors (unless, say, A is diagonalizable). For example, A = [0 1; 0 0] has only one right eigenvector corresponding to the eigenvalue 0, but A² = [0 0; 0 0] has two independent right eigenvectors associated with the eigenvalue 0. What is true is that the eigenvalue/eigenvector pair (λ, x) maps to (f(λ), x) but not conversely.
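The example in Remark 9.19 can be checked directly: the rank of A − 0·I determines the number of independent eigenvectors for the eigenvalue 0.

    import numpy as np

    A = np.array([[0.0, 1.0],
                  [0.0, 0.0]])
    print(2 - np.linalg.matrix_rank(A))      # 1 eigenvector for A
    print(2 - np.linalg.matrix_rank(A @ A))  # 2 eigenvectors for A^2 = 0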
The following theorem is useful when solving systems of linear differential equations. Details of how the matrix exponential e^{tA} is used to solve the system ẋ = Ax are the subject of Chapter 11.

Theorem 9.20. Let A ∈ R^{n×n} and suppose X^{-1}AX = Λ, where Λ is diagonal. Then

    e^{tA} = Σ_{i=1}^n e^{λi t} xi yi^H.
Proof: Starting from the definition, we have

    e^{tA} = Σ_{k=0}^∞ (t^k/k!) A^k = Σ_{k=0}^∞ (t^k/k!) X Λ^k X^{-1} = X e^{tΛ} X^{-1}
           = X diag(e^{λ1 t}, ..., e^{λn t}) Y^H = Σ_{i=1}^n e^{λi t} xi yi^H.  □

The following corollary is immediate from the theorem upon setting t = 1.

Corollary 9.21. If A ∈ R^{n×n} is diagonalizable with eigenvalues λi, i = 1, ..., n, and right eigenvectors xi, i = 1, ..., n, then e^A has eigenvalues e^{λi}, i = 1, ..., n, and the same eigenvectors.
There are extensions to Theorem 9.20 and Corollary 9.21 for any function that is analytic on the spectrum of A, i.e., f(A) = X f(Λ) X^{-1} = X diag(f(λ1), ..., f(λn)) X^{-1}.

It is desirable, of course, to have a version of Theorem 9.20 and its corollary in which A is not necessarily diagonalizable. It is necessary first to consider the notion of Jordan canonical form, from which such a result is then available and presented later in this chapter.

9.2 Jordan Canonical Form

Theorem 9.22.

1. Jordan Canonical Form (JCF): For all A ∈ C^{n×n} with eigenvalues λ1, ..., λn ∈ C (not necessarily distinct), there exists a nonsingular X ∈ C^{n×n} such that

    X^{-1} A X = J = diag(J1, ..., Jq),    (9.12)

where each of the Jordan block matrices J1, ..., Jq is of the form

    Ji = [λi 1 0 ... 0; 0 λi 1 ... 0; ...; 0 ... 0 λi 1; 0 ... 0 0 λi] ∈ C^{ki×ki}    (9.13)
and Σ_{i=1}^q ki = n.

2. Real Jordan Canonical Form: For all A ∈ R^{n×n} with eigenvalues λ1, ..., λn (not necessarily distinct), there exists a nonsingular X ∈ R^{n×n} such that

    X^{-1} A X = J = diag(J1, ..., Jq),    (9.14)

where each of the Jordan block matrices J1, ..., Jq is of the form

    Ji = [λi 1 0 ... 0; 0 λi 1 ... 0; ...; 0 ... 0 λi 1; 0 ... 0 0 λi]

in the case of real eigenvalues λi ∈ Λ(A), and

    Ji = [Mi I2 0 ... 0; 0 Mi I2 ... 0; ...; 0 ... 0 Mi I2; 0 ... 0 0 Mi],

where Mi = [αi βi; −βi αi] and I2 = [1 0; 0 1], in the case of complex conjugate eigenvalues αi ± jβi ∈ Λ(A).

Proof: For the proof see, for example, [21, pp. 120-124].  □

Transformations like T = [1 −j; 1 j] allow us to go back and forth between a real JCF and its complex counterpart:

    T^{-1} [α+jβ 0; 0 α−jβ] T = [α β; −β α] = M.

For nontrivial Jordan blocks, the situation is only a bit more complicated. With

    T = [1 −j 0 0; 0 0 1 −j; 1 j 0 0; 0 0 1 j],
it is easily checked that

    T^{-1} [α+jβ 1 0 0; 0 α+jβ 0 0; 0 0 α−jβ 1; 0 0 0 α−jβ] T = [M I2; 0 M].

Definition 9.23. The characteristic polynomials of the Jordan blocks defined in Theorem 9.22 are called the elementary divisors or invariant factors of A.

Theorem 9.24. The characteristic polynomial of a matrix is the product of its elementary divisors. The minimal polynomial of a matrix is the product of the elementary divisors of highest degree corresponding to distinct eigenvalues.

Theorem 9.25. Let A ∈ C^{n×n} with eigenvalues λ1, ..., λn. Then

1. det(A) = Π_{i=1}^n λi.

2. Tr(A) = Σ_{i=1}^n λi.

Proof:

1. From Theorem 9.22 we have that A = X J X^{-1}. Thus, det(A) = det(X J X^{-1}) = det(J) = Π_{i=1}^n λi.

2. Again, from Theorem 9.22 we have that A = X J X^{-1}. Thus, Tr(A) = Tr(X J X^{-1}) = Tr(J X^{-1} X) = Tr(J) = Σ_{i=1}^n λi.  □

Example 9.26. Suppose A ∈ R^{7×7} is known to have π(λ) = (λ − 1)⁴(λ − 2)³ and α(λ) = (λ − 1)²(λ − 2)². Then A has two possible JCFs (not counting reorderings of the diagonal blocks):

    J^(1) = [1 1 0 0 0 0 0
             0 1 0 0 0 0 0
             0 0 1 0 0 0 0
             0 0 0 1 0 0 0
             0 0 0 0 2 1 0
             0 0 0 0 0 2 0
             0 0 0 0 0 0 2]

and

    J^(2) = [1 1 0 0 0 0 0
             0 1 0 0 0 0 0
             0 0 1 1 0 0 0
             0 0 0 1 0 0 0
             0 0 0 0 2 1 0
             0 0 0 0 0 2 0
             0 0 0 0 0 0 2].

Note that J^(1) has elementary divisors (λ − 1)², (λ − 1), (λ − 1), (λ − 2)², and (λ − 2), while J^(2) has elementary divisors (λ − 1)², (λ − 1)², (λ − 2)², and (λ − 2).
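Exact JCF computations of this kind can be done in SymPy via jordan_form (the analogue of the MATLAB Symbolic Toolbox jordan command mentioned later in this chapter). As a sanity check, this sketch assembles J^(2) from its blocks and recovers its invariants:

    import sympy as sp

    J2_1 = sp.Matrix([[1, 1], [0, 1]])   # Jordan block of size 2 for eigenvalue 1
    J2_2 = sp.Matrix([[2, 1], [0, 2]])   # Jordan block of size 2 for eigenvalue 2
    A = sp.diag(J2_1, J2_1, J2_2, 2)     # this is J^(2) of Example 9.26
    X, J = A.jordan_form()               # A = X J X^{-1}
    x = sp.Symbol('x')
    print(sp.factor(J.charpoly(x).as_expr()))  # (x - 1)**4 * (x - 2)**3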
Example 9.27. Knowing π(λ), α(λ), and rank(A − λi I) for distinct λi is not sufficient to determine the JCF of A uniquely. The matrices

    A1 = [a 1 0 0 0 0 0
          0 a 1 0 0 0 0
          0 0 a 0 0 0 0
          0 0 0 a 1 0 0
          0 0 0 0 a 0 0
          0 0 0 0 0 a 1
          0 0 0 0 0 0 a]

and

    A2 = [a 1 0 0 0 0 0
          0 a 1 0 0 0 0
          0 0 a 0 0 0 0
          0 0 0 a 1 0 0
          0 0 0 0 a 1 0
          0 0 0 0 0 a 0
          0 0 0 0 0 0 a]

both have π(λ) = (λ − a)⁷, α(λ) = (λ − a)³, and rank(A − aI) = 4, i.e., three eigenvectors.

9.3 Determination of the JCF

The first critical item of information in determining the JCF of a matrix A ∈ R^{n×n} is its number of eigenvectors. For each distinct eigenvalue λi, the associated number of linearly independent right (or left) eigenvectors is given by dim N(A − λi I) = n − rank(A − λi I). The straightforward case is, of course, when λi is simple, i.e., of algebraic multiplicity 1; it then has precisely one eigenvector. The more interesting (and difficult) case occurs when λi is of algebraic multiplicity greater than one. For example, suppose

    A = [3 2 1; 0 3 0; 0 0 3].

Then

    A − 3I = [0 2 1; 0 0 0; 0 0 0]

has rank 1, so the eigenvalue 3 has two eigenvectors associated with it. If we let [ξ1 ξ2 ξ3]^T denote a solution to the linear system (A − 3I)ξ = 0, we find that 2ξ2 + ξ3 = 0. Thus, both x1 = [1; 0; 0] and x2 = [0; 1; −2], for example, are eigenvectors (and are independent). To get a third vector x3 such that X = [x1 x2 x3] reduces A to JCF, we need the notion of principal vector.
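Numerically, the eigenvector count for the example above can be obtained from a null space computation (null_space is from SciPy):

    import numpy as np
    from scipy.linalg import null_space

    A = np.array([[3.0, 2.0, 1.0],
                  [0.0, 3.0, 0.0],
                  [0.0, 0.0, 3.0]])
    N = null_space(A - 3 * np.eye(3))  # orthonormal basis for N(A - 3I)
    print(N.shape[1])                  # 2: the eigenvalue 3 has two eigenvectors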
Definition 9.28. Let A ∈ C^{n×n} (or R^{n×n}). Then x is a right principal vector of degree k associated with λ ∈ Λ(A) if and only if (A − λI)^k x = 0 and (A − λI)^{k−1} x ≠ 0.

Remark 9.29.

1. An analogous definition holds for a left principal vector of degree k.
2. The phrase "of grade k" is often used synonymously with "of degree k."

3. Principal vectors are sometimes also called generalized eigenvectors, but the latter term will be assigned a much different meaning in Chapter 12.

4. The case k = 1 corresponds to the "usual" eigenvector.

5. A right (or left) principal vector of degree k is associated with a Jordan block Ji of dimension k or larger.

9.3.1 Theoretical computation

To motivate the development of a procedure for determining principal vectors, consider a 2×2 Jordan block [λ 1; 0 λ]. Denote by x^(1) and x^(2) the two columns of a nonsingular matrix X ∈ R^{2×2} that reduces a matrix A to this JCF. Then the equation AX = XJ can be written

    A [x^(1) x^(2)] = [x^(1) x^(2)] [λ 1; 0 λ].

The first column yields the equation Ax^(1) = λx^(1), which simply says that x^(1) is a right eigenvector. The second column yields the following equation for x^(2), the principal vector of degree 2:

    (A − λI) x^(2) = x^(1).    (9.17)

If we premultiply (9.17) by (A − λI), we find (A − λI)² x^(2) = (A − λI) x^(1) = 0. Thus, the definition of principal vector is satisfied.

This suggests a "general" procedure. First, determine all eigenvalues of A ∈ R^{n×n} (or C^{n×n}). Then for each distinct λ ∈ Λ(A) perform the following:

1. Solve

    (A − λI) x^(1) = 0.

This step finds all the eigenvectors (i.e., principal vectors of degree 1) associated with λ. The number of eigenvectors depends on the rank of A − λI. For example, if rank(A − λI) = n − 1, there is only one eigenvector. If the algebraic multiplicity of λ is greater than its geometric multiplicity, principal vectors still need to be computed from succeeding steps.

2. For each independent x^(1), solve

    (A − λI) x^(2) = x^(1).

The number of linearly independent solutions at this step depends on the rank of (A − λI)². If, for example, this rank is n − 2, there are two linearly independent solutions to the homogeneous equation (A − λI)² x^(2) = 0. One of these solutions is, of course, x^(1) (≠ 0), since (A − λI)² x^(1) = (A − λI) 0 = 0. The other solution is the desired principal vector of degree 2. (It may be necessary to take a linear combination of x^(1) vectors to get a right-hand side that is in R(A − λI). See, for example, Exercise 7.)
3. For each independent x^(2) from step 2, solve

    (A − λI) x^(3) = x^(2).

4. Continue in this way until the total number of independent eigenvectors and principal vectors is equal to the algebraic multiplicity of λ.

Unfortunately, this natural-looking procedure can fail to find all Jordan vectors. For more extensive treatments, see, for example, [20] and [21]. Determination of eigenvectors and principal vectors is obviously very tedious for anything beyond simple problems (n = 2 or 3, say). Attempts to do such calculations in finite-precision floating-point arithmetic generally prove unreliable. There are significant numerical difficulties inherent in attempting to compute a JCF, and the interested student is strongly urged to consult the classical and very readable [8] to learn why. Notice that high-quality mathematical software such as MATLAB does not offer a jcf command, although a jordan command is available in MATLAB's Symbolic Toolbox.
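For small problems the procedure can be carried out in exact arithmetic, which sidesteps the floating-point difficulties just described. The SymPy sketch below applies steps 1 and 2 to the matrix of Example 9.33 below; gauss_jordan_solve handles the singular system in step 2:

    import sympy as sp

    A = sp.Matrix([[1, 1, 2], [0, 1, 3], [0, 0, 2]])
    N = A - sp.eye(3)
    x1 = N.nullspace()[0]                     # step 1: eigenvector for lambda = 1
    x2, params = N.gauss_jordan_solve(x1)     # step 2: solve (A - I) x2 = x1
    x2 = x2.xreplace({p: 0 for p in params})  # fix the free parameter to zero
    print(x1.T, x2.T)                         # Matrix([[1, 0, 0]]) Matrix([[0, 1, 0]])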
Theorem 9.30. Suppose A ∈ C^{k×k} has an eigenvalue λ of algebraic multiplicity k and suppose further that rank(A − λI) = k − 1. Let X = [x^(1), ..., x^(k)], where the chain of vectors x^(i) is constructed as above. Then X^{-1}AX is the k×k Jordan block with eigenvalue λ, i.e.,

    X^{-1} A X = [λ 1 0 ... 0; 0 λ 1 ... 0; ...; 0 ... 0 λ 1; 0 ... 0 0 λ].

Theorem 9.31. {x^(1), ..., x^(k)} is a linearly independent set.

Theorem 9.32. Principal vectors associated with different Jordan blocks are linearly independent.

Example 9.33. Let

    A = [1 1 2; 0 1 3; 0 0 2].

The eigenvalues of A are λ1 = 1, λ2 = 1, and λ3 = 2. First, find the eigenvectors associated with the distinct eigenvalues 1 and 2.

(A − 2I)x3^(1) = 0 yields

    x3^(1) = [5; 3; 1].
(A − 1I)x1^(1) = 0 yields

    x1^(1) = [1; 0; 0].

To find a principal vector of degree 2 associated with the multiple eigenvalue 1, solve (A − 1I)x1^(2) = x1^(1) to get

    x1^(2) = [0; 1; 0].

Now let

    X = [x1^(1) x1^(2) x3^(1)] = [1 0 5; 0 1 3; 0 0 1].

Then it is easy to check that

    X^{-1} = [1 0 −5; 0 1 −3; 0 0 1] and X^{-1} A X = [1 1 0; 0 1 0; 0 0 2].

9.3.2 On the +1's in JCF blocks

In this subsection we show that the nonzero superdiagonal elements of a JCF need not be 1's but can be arbitrary, so long as they are nonzero. For the sake of definiteness, we consider below the case of a single Jordan block, but the result clearly holds for any JCF.

Suppose A ∈ R^{n×n} and

    X^{-1} A X = J = [λ 1 0 ... 0; 0 λ 1 ... 0; ...; 0 ... 0 λ 1; 0 ... 0 0 λ].

Let D = diag(d1, ..., dn) be a nonsingular "scaling" matrix. Then

    D^{-1}(X^{-1} A X) D = D^{-1} J D = Ĵ
        = [λ d2/d1 0 ... 0; 0 λ d3/d2 ... 0; ...; 0 ... 0 λ dn/dn−1; 0 ... 0 0 λ].
Appropriate choice of the di's then yields any desired nonzero superdiagonal elements. This result can also be interpreted in terms of the matrix X = [x1, ..., xn] of eigenvectors and principal vectors that reduces A to its JCF. Specifically, Ĵ is obtained from A via the similarity transformation XD = [d1x1, ..., dnxn].
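A quick numerical confirmation of the scaling argument, with n = 3 and the di chosen arbitrarily:

    import numpy as np

    lam = 5.0
    J = lam * np.eye(3) + np.diag([1.0, 1.0], k=1)  # a single 3 x 3 Jordan block
    D = np.diag([1.0, 2.0, 6.0])
    Jhat = np.linalg.inv(D) @ J @ D
    print(np.diag(Jhat, k=1))  # [2. 3.] = [d2/d1, d3/d2]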
In a similar fashion, the reverse-order identity matrix (or exchange matrix)

    P = P^T = P^{-1} = [0 0 ... 0 1; 0 0 ... 1 0; ...; 0 1 ... 0 0; 1 0 ... 0 0]    (9.18)

can be used to put the superdiagonal elements in the subdiagonal instead if that is desired:

    P [λ 1 0 ... 0; 0 λ 1 ... 0; ...; 0 ... 0 λ 1; 0 ... 0 0 λ] P
        = [λ 0 ... 0 0; 1 λ ... 0 0; ...; 0 ... 1 λ 0; 0 ... 0 1 λ].

9.4 Geometric Aspects of the JCF

The matrix X that reduces a matrix A ∈ R^{n×n} (or C^{n×n}) to a JCF provides a change of basis with respect to which the matrix is diagonal or block diagonal. It is thus natural to expect an associated direct sum decomposition of R^n. Such a decomposition is given in the following theorem.

Theorem 9.34. Suppose A ∈ R^{n×n} has characteristic polynomial

    π(λ) = (λ − λ1)^{n1} ··· (λ − λm)^{nm}

and minimal polynomial

    α(λ) = (λ − λ1)^{ν1} ··· (λ − λm)^{νm}

with λ1, ..., λm distinct. Then

    R^n = N(A − λ1 I)^{n1} ⊕ ··· ⊕ N(A − λm I)^{nm}
        = N(A − λ1 I)^{ν1} ⊕ ··· ⊕ N(A − λm I)^{νm}.

Note that dim N(A − λi I)^{νi} = ni.

Definition 9.35. Let V be a vector space over F and suppose A : V → V is a linear transformation. A subspace S ⊆ V is A-invariant if AS ⊆ S, where AS is defined as the set {As : s ∈ S}.
If V is taken to be R^n over R, and S ∈ R^{n×k} is a matrix whose columns s1, ..., sk span a k-dimensional subspace S, i.e., R(S) = S, then S is A-invariant if and only if there exists M ∈ R^{k×k} such that

    AS = SM.    (9.19)

This follows easily by comparing the ith columns of each side of (9.19):

    A si = S mi,

where mi denotes the ith column of M; each A si is thus a linear combination of the columns of S, i.e., A si ∈ R(S) = S.

Example 9.36. The equation Ax = λx = xλ defining a right eigenvector x of an eigenvalue λ says that x spans an A-invariant subspace (of dimension one).

Example 9.37. Suppose X block diagonalizes A, i.e.,

    X^{-1} A X = [J1 0; 0 J2].

Rewriting in the form

    A [X1 X2] = [X1 X2] [J1 0; 0 J2],

we have that A Xi = Xi Ji, i = 1, 2, so the columns of Xi span an A-invariant subspace.

Theorem 9.38. Suppose A ∈ R^{n×n}.

1. Let p(A) = α0 I + α1 A + ··· + αq A^q be a polynomial in A. Then N(p(A)) and R(p(A)) are A-invariant.

2. S is A-invariant if and only if S⊥ is A^T-invariant.

Theorem 9.39. If V is a vector space over F such that V = N1 ⊕ ··· ⊕ Nm, where each Ni is A-invariant, then a basis for V can be chosen with respect to which A has a block diagonal representation.

The Jordan canonical form is a special case of the above theorem. If A has distinct eigenvalues λi as in Theorem 9.34, we could choose bases for N(A − λi I)^{ni} by SVD, for example (note that the power ni could be replaced by νi). We would then get a block diagonal representation for A with full blocks rather than the highly structured Jordan blocks. Other such "canonical" forms are discussed in text that follows.

Suppose X = [X1, ..., Xm] ∈ R^{n×n} is nonsingular and such that X^{-1} A X = diag(J1, ..., Jm), where each Ji = diag(Ji1, ..., Jiki) and each Jik is a Jordan block corresponding to λi ∈ Λ(A). We could also use other block diagonal decompositions (e.g., via SVD), but we restrict our attention here to only the Jordan block case. Note that A Xi = Xi Ji, so by (9.19) the columns of Xi (i.e., the eigenvectors and principal vectors associated with λi) span an A-invariant subspace of R^n.

Finally, we return to the problem of developing a formula for e^{tA} in the case that A is not necessarily diagonalizable. Let Yi ∈ C^{n×ni} be a Jordan basis for N(A^T − λ̄i I)^{ni}. Equivalently, partition

    X^{-1} = Y^H = [Y1, ..., Ym]^H
compatibly. Then

    A = X J X^{-1} = X J Y^H = [X1, ..., Xm] diag(J1, ..., Jm) [Y1, ..., Ym]^H = Σ_{i=1}^m Xi Ji Yi^H.

In a similar fashion we can compute

    e^{tA} = Σ_{i=1}^m Xi e^{tJi} Yi^H,

which is a useful formula when used in conjunction with the result

    exp(t Ji) = [e^{λt}  t e^{λt}  (t²/2!) e^{λt}  ...  (t^{k−1}/(k−1)!) e^{λt};
                 0       e^{λt}   t e^{λt}        ...  (t^{k−2}/(k−2)!) e^{λt};
                 ...;
                 0       ...      0               e^{λt}   t e^{λt};
                 0       0        ...             0        e^{λt}]

for a k×k Jordan block Ji associated with an eigenvalue λ = λi.
9.5 The Matrix Sign Function

In this section we give a very brief introduction to an interesting and useful matrix function called the matrix sign function. It is a generalization of the sign (or signum) of a scalar. A survey of the matrix sign function and some of its applications can be found in [15].

Definition 9.40. Let z ∈ C with Re(z) ≠ 0. Then the sign of z is defined by

    sgn(z) = Re(z)/|Re(z)| = +1 if Re(z) > 0, −1 if Re(z) < 0.

Definition 9.41. Suppose A ∈ C^{n×n} has no eigenvalues on the imaginary axis, and let

    X^{-1} A X = [N 0; 0 P]

be a Jordan canonical form for A, with N containing all Jordan blocks corresponding to the eigenvalues of A in the left half-plane and P containing all Jordan blocks corresponding to eigenvalues in the right half-plane. Then the sign of A, denoted sgn(A), is given by

    sgn(A) = X [−I 0; 0 I] X^{-1},
where the negative and positive identity matrices are of the same dimensions as N and P, respectively.

There are other equivalent definitions of the matrix sign function, but the one given here is especially useful in deriving many of its key properties. The JCF definition of the matrix sign function does not generally lend itself to reliable computation on a finite-word-length digital computer. In fact, its reliable numerical calculation is an interesting topic in its own right.
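One well-known approach to that calculation (see, e.g., the survey cited above) is the Newton iteration S ← (S + S^{-1})/2 started from S = A. The sketch below is a minimal illustration, not a production implementation; the test matrix is triangular so that its eigenvalues, 1, 2, and 3, are known to lie in the right half-plane:

    import numpy as np

    rng = np.random.default_rng(4)
    A = np.diag([1.0, 2.0, 3.0]) + np.triu(rng.standard_normal((3, 3)), k=1)
    S = A.copy()
    for _ in range(20):                 # quadratically convergent near the limit
        S = 0.5 * (S + np.linalg.inv(S))
    print(np.allclose(S, np.eye(3)))    # True: sgn(A) = I when all Re(lambda_i) > 0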
We state some of the more useful properties of the matrix sign function as theorems. Their straightforward proofs are left to the exercises.

Theorem 9.42. Suppose A ∈ C^{n×n} has no eigenvalues on the imaginary axis, and let S = sgn(A). Then the following hold:

1. S is diagonalizable with eigenvalues equal to ±1.

2. S² = I.

3. AS = SA.

4. sgn(A^H) = (sgn(A))^H.

5. sgn(T^{-1}AT) = T^{-1} sgn(A) T for all nonsingular T ∈ C^{n×n}.

6. sgn(cA) = sgn(c) sgn(A) for all nonzero real scalars c.

Theorem 9.43. Suppose A ∈ C^{n×n} has no eigenvalues on the imaginary axis, and let S = sgn(A). Then the following hold:

1. R(S − I) is an A-invariant subspace corresponding to the left half-plane eigenvalues of A (the negative invariant subspace).

2. R(S + I) is an A-invariant subspace corresponding to the right half-plane eigenvalues of A (the positive invariant subspace).

3. negA = (I − S)/2 is a projection onto the negative invariant subspace of A.

4. posA = (I + S)/2 is a projection onto the positive invariant subspace of A.

EXERCISES

1. Let A ∈ C^{n×n} have distinct eigenvalues λ1, ..., λn with corresponding right eigenvectors x1, ..., xn and left eigenvectors y1, ..., yn, respectively. Let v ∈ C^n be an arbitrary vector. Show that v can be expressed (uniquely) as a linear combination of the right eigenvectors. Find the appropriate expression for v as a linear combination of the left eigenvectors as well.
2. Suppose A ∈ C^{n×n} is skew-Hermitian, i.e., A^H = −A. Prove that all eigenvalues of a skew-Hermitian matrix must be pure imaginary.

3. Suppose A ∈ C^{n×n} is Hermitian. Let λ be an eigenvalue of A with corresponding right eigenvector x. Show that x is also a left eigenvector for λ. Prove the same result if A is skew-Hermitian.

4. Suppose a matrix A ∈ R^{5×5} has eigenvalues {2, 2, 2, 2, 3}. Determine all possible JCFs for A.

5. Determine the eigenvalues, right eigenvectors and right principal vectors if necessary, and (real) JCFs of the following matrices:

   (a) [2 −1; 1 0],    (b) [...].

6. Determine the JCFs of the following matrices:

   (a) [...],    (b) [...].

7. Let

    A = [...].

   Find a nonsingular matrix X such that X^{-1} A X = J, where J is the JCF

    J = [1 1 0; 0 1 0; 0 0 1].

   Hint: Use [−1 1 −1]^T as an eigenvector. The vectors [0 1 −1]^T and [1 0 0]^T are both eigenvectors, but then the equation (A − I)x^(2) = x^(1) can't be solved.

8. Show that all right eigenvectors of the Jordan block matrix in Theorem 9.30 must be multiples of e1 ∈ R^k. Characterize all left eigenvectors.

9. Let A ∈ R^{n×n} be of the form A = x y^T, where x, y ∈ R^n are nonzero vectors with x^T y = 0. Determine the JCF of A.

10. Let A ∈ R^{n×n} be of the form A = I + x y^T, where x, y ∈ R^n are nonzero vectors with x^T y = 0. Determine the JCF of A.

11. Suppose a matrix A ∈ R^{16×16} has 16 eigenvalues at 0 and its JCF consists of a single Jordan block of the form specified in Theorem 9.22. Suppose the small number 10^{−16} is added to the (16,1) element of J. What are the eigenvalues of this slightly perturbed matrix?
Chapter 10

Canonical Forms

10.1 Some Basic Canonical Forms

Problem: Let $V$ and $W$ be vector spaces and suppose $A : V \to W$ is a linear transformation. Find bases in $V$ and $W$ with respect to which Mat $A$ has a "simple form" or "canonical form." In matrix terms, if $A \in \mathbb{R}^{m \times n}$, find $P \in \mathbb{R}_m^{m \times m}$ and $Q \in \mathbb{R}_n^{n \times n}$ such that $PAQ$ has a "canonical form." The transformation $A \mapsto PAQ$ is called an equivalence; it is called an orthogonal equivalence if $P$ and $Q$ are orthogonal matrices.

Remark 10.1. We can also consider the case $A \in \mathbb{C}^{m \times n}$ and unitary equivalence if $P$ and $Q$ are unitary.

Two special cases are of interest:

1. If $W = V$ and $Q = P^{-1}$, the transformation $A \mapsto PAP^{-1}$ is called a similarity.

2. If $W = V$ and if $Q = P^T$ is orthogonal, the transformation $A \mapsto PAP^T$ is called an orthogonal similarity (or unitary similarity in the complex case).

The following results are typical of what can be achieved under a unitary similarity. If $A = A^H \in \mathbb{C}^{n \times n}$ has eigenvalues $\lambda_1, \ldots, \lambda_n$, then there exists a unitary matrix $U$ such that $U^H A U = D$, where $D = \operatorname{diag}(\lambda_1, \ldots, \lambda_n)$. This is proved in Theorem 10.2. What other matrices are "diagonalizable" under unitary similarity? The answer is given in Theorem 10.9, where it is proved that a general matrix $A \in \mathbb{C}^{n \times n}$ is unitarily similar to a diagonal matrix if and only if it is normal (i.e., $AA^H = A^H A$). Normal matrices include Hermitian, skew-Hermitian, and unitary matrices (and their "real" counterparts: symmetric, skew-symmetric, and orthogonal, respectively), as well as other matrices that merely satisfy the definition, such as $A = \begin{bmatrix} a & b \\ -b & a \end{bmatrix}$ for real scalars $a$ and $b$. If a matrix $A$ is not normal, the most "diagonal" we can get is the JCF described in Chapter 9.

Theorem 10.2. Let $A = A^H \in \mathbb{C}^{n \times n}$ have (real) eigenvalues $\lambda_1, \ldots, \lambda_n$. Then there exists a unitary matrix $X$ such that $X^H A X = D = \operatorname{diag}(\lambda_1, \ldots, \lambda_n)$ (the columns of $X$ are orthonormal eigenvectors for $A$).
Proof: Let $x_1$ be a right eigenvector corresponding to $\lambda_1$, and normalize it such that $x_1^H x_1 = 1$. Then there exist $n-1$ additional vectors $x_2, \ldots, x_n$ such that $X = [x_1, \ldots, x_n] = [x_1 \;\; X_2]$ is unitary. Now
$$X^H A X = \begin{bmatrix} x_1^H \\ X_2^H \end{bmatrix} A \, [x_1 \;\; X_2] = \begin{bmatrix} x_1^H A x_1 & x_1^H A X_2 \\ X_2^H A x_1 & X_2^H A X_2 \end{bmatrix}$$
$$= \begin{bmatrix} \lambda_1 & x_1^H A X_2 \\ 0 & X_2^H A X_2 \end{bmatrix} \qquad (10.1)$$
$$= \begin{bmatrix} \lambda_1 & 0 \\ 0 & X_2^H A X_2 \end{bmatrix}. \qquad (10.2)$$
In (10.1) we have used the fact that $Ax_1 = \lambda_1 x_1$. When combined with the fact that $x_1^H x_1 = 1$, we get $\lambda_1$ remaining in the (1,1)-block. We also get 0 in the (2,1)-block by noting that $x_1$ is orthogonal to all vectors in $X_2$. In (10.2), we get 0 in the (1,2)-block by noting that $X^H A X$ is Hermitian. The proof is completed easily by induction upon noting that the (2,2)-block must have eigenvalues $\lambda_2, \ldots, \lambda_n$. □

Given a unit vector $x_1 \in \mathbb{R}^n$, the construction of $X_2 \in \mathbb{R}^{n \times (n-1)}$ such that $X = [x_1 \;\; X_2]$ is orthogonal is frequently required. The construction can actually be performed quite easily by means of Householder (or Givens) transformations as in the proof of the following general result.

Theorem 10.3. Let $X_1 \in \mathbb{C}^{n \times k}$ have orthonormal columns and suppose $U$ is a unitary matrix such that $UX_1 = \begin{bmatrix} R \\ 0 \end{bmatrix}$, where $R \in \mathbb{C}^{k \times k}$ is upper triangular. Write $U^H = [U_1 \;\; U_2]$ with $U_1 \in \mathbb{C}^{n \times k}$. Then $[X_1 \;\; U_2]$ is unitary.

Proof: Let $X_1 = [x_1, \ldots, x_k]$. Construct a sequence of Householder matrices (also known as elementary reflectors) $H_1, \ldots, H_k$ in the usual way (see below) such that
$$H_k \cdots H_1 [x_1, \ldots, x_k] = \begin{bmatrix} R \\ 0 \end{bmatrix},$$
where $R$ is upper triangular (and nonsingular since $x_1, \ldots, x_k$ are orthonormal). Let $U = H_k \cdots H_1$. Then $U^H = H_1 \cdots H_k$ and
$$UX_1 = \begin{bmatrix} R \\ 0 \end{bmatrix}.$$
Then $x_i^H U_2 = 0$ $(i \in \underline{k})$ means that $x_i$ is orthogonal to each of the $n-k$ columns of $U_2$. But the latter are orthonormal since they are the last $n-k$ rows of the unitary matrix $U$. Thus, $[X_1 \;\; U_2]$ is unitary. □

The construction called for in Theorem 10.2 is then a special case of Theorem 10.3 for $k = 1$. We illustrate the construction of the necessary Householder matrix for $k = 1$. For simplicity, we consider the real case. Let the unit vector $x_1$ be denoted by $[\xi_1, \ldots, \xi_n]^T$.
Then the necessary Householder matrix needed for the construction of $X_2$ is given by
$$U = I - 2uu^+ = I - \frac{2}{u^T u}\, uu^T, \quad \text{where } u = [\xi_1 \pm 1, \, \xi_2, \ldots, \xi_n]^T.$$
It can easily be checked that $U$ is symmetric and $U^T U = U^2 = I$, so $U$ is orthogonal. To see that $U$ effects the necessary compression of $x_1$, it is easily verified that $u^T u = 2 \pm 2\xi_1$ and $u^T x_1 = 1 \pm \xi_1$. Thus,
$$Ux_1 = x_1 - \frac{2(1 \pm \xi_1)}{2 \pm 2\xi_1}\, u = x_1 - u = \mp e_1.$$
Further details on Householder matrices, including the choice of sign and the complex case, can be consulted in standard numerical linear algebra texts such as [7], [11], [23], [25].
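The construction is easy to carry out numerically. The following sketch (in Python with NumPy, a choice of ours since the text prescribes no language; the function name is illustrative) builds the Householder matrix $U$ for the $k = 1$ case above and completes a given unit vector to an orthogonal matrix:

```python
import numpy as np

def complete_to_orthogonal(x1):
    """Given a unit vector x1 in R^n, return an orthogonal X = [x1 X2]
    using the Householder matrix U = I - (2/u^T u) u u^T described above."""
    n = x1.shape[0]
    u = x1.copy()
    u[0] += 1.0 if x1[0] >= 0 else -1.0     # u = [xi_1 +/- 1, xi_2, ..., xi_n]^T
    U = np.eye(n) - (2.0 / (u @ u)) * np.outer(u, u)
    # U x1 = -/+ e_1, and U is symmetric orthogonal, so -/+ U has first column x1.
    return -U if x1[0] >= 0 else U

x1 = np.array([3.0, 0.0, 4.0]) / 5.0        # a unit vector
X = complete_to_orthogonal(x1)
assert np.allclose(X.T @ X, np.eye(3))      # X is orthogonal
assert np.allclose(X[:, 0], x1)             # first column is x1
```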
The real version of Theorem 10.2 is worth stating separately since it is applied frequently in applications.

Theorem 10.4. Let $A = A^T \in \mathbb{R}^{n \times n}$ have eigenvalues $\lambda_1, \ldots, \lambda_n$. Then there exists an orthogonal matrix $X \in \mathbb{R}^{n \times n}$ (whose columns are orthonormal eigenvectors of $A$) such that $X^T A X = D = \operatorname{diag}(\lambda_1, \ldots, \lambda_n)$.

Note that Theorem 10.4 implies that a symmetric matrix $A$ (with the obvious analogue from Theorem 10.2 for Hermitian matrices) can be written
$$A = XDX^T = \sum_{i=1}^n \lambda_i x_i x_i^T, \qquad (10.3)$$
which is often called the spectral representation of $A$. In fact, $A$ in (10.3) is actually a weighted sum of orthogonal projections $P_i$ (onto the one-dimensional eigenspaces corresponding to the $\lambda_i$'s), i.e.,
$$A = \sum_{i=1}^n \lambda_i P_i,$$
where $P_i = P_{\mathcal{R}(x_i)} = x_i x_i^+ = x_i x_i^T$ since $x_i^T x_i = 1$.
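The spectral representation (10.3) is easy to confirm numerically; a small sketch, assuming nothing beyond NumPy's standard symmetric eigensolver:

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])            # a symmetric test matrix of ours
lam, X = np.linalg.eigh(A)                 # columns of X: orthonormal eigenvectors
# A as a weighted sum of rank-one orthogonal projections P_i = x_i x_i^T
A_rebuilt = sum(lam[i] * np.outer(X[:, i], X[:, i]) for i in range(3))
assert np.allclose(A, A_rebuilt)
assert np.allclose(X.T @ A @ X, np.diag(lam))   # X^T A X = D
```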
The following pair of theorems form the theoretical foundation of the double-Francis QR algorithm used to compute matrix eigenvalues in a numerically stable and reliable way.
10.1. Some Basic Canonical Forms 97
Then the necessary Householder matrix needed for the construction of X
2
is given by
U = I  2uu+ = I  +uu
T
, where u = [';1 ± 1, ';2, ... , ';nf. It can easily be checked
u u
that U is symmetric and U
T
U = U
2
= I, so U is orthogonal. To see that U effects the
necessary compression of Xl, it is easily verified that u
T
u = 2 ± 2';1 and u
T
Xl = 1 ± ';1.
Thus,
Further details on Householder matrices, including the choice of sign and the complex case,
can be consulted in standard numerical linear algebra texts such as [7], [11], [23], [25].
The real version of Theorem 10.2 is worth stating separately since it is applied fre
quently in applications.
Theorem 10.4. Let A = AT E jRnxn have eigenvalues AI, ... , An. Then there exists an
orthogonal matrix X E jRn xn (whose columns are orthonormal eigenvectors of A) such that
XT AX = D = diag(Al, ... , An).
Note that Theorem 10.4 implies that a symmetric matrix A (with the obvious analogue
from Theorem 10.2 for Hermitian matrices) can be written
n
A = XDX
T
= LAiXiXT,
(10.3)
i=1
which is often called the spectral representation of A. In fact, A in (10.3) is actually a
weighted sum of orthogonal projections Pi (onto the onedimensional eigenspaces corre
sponding to the Ai'S), i.e.,
n
A = LAiPi,
i=l
where Pi = PR(x;) = xiXt = xixT since xT Xi = 1.
The following pair of theorems form the theoretical foundation of the doubleFrancis
QR algorithm used to compute matrix eigenvalues in a numerically stable and reliable way.
Theorem 10.5 (Schur). Let $A \in \mathbb{C}^{n \times n}$. Then there exists a unitary matrix $U$ such that $U^H A U = T$, where $T$ is upper triangular.

Proof: The proof of this theorem is essentially the same as that of Theorem 10.2 except that in this case (using the notation $U$ rather than $X$) the (1,2)-block $u_1^H A U_2$ is not 0. □

In the case of $A \in \mathbb{R}^{n \times n}$, it is thus unitarily similar to an upper triangular matrix, but if $A$ has a complex conjugate pair of eigenvalues, then complex arithmetic is clearly needed to place such eigenvalues on the diagonal of $T$. However, the next theorem shows that every $A \in \mathbb{R}^{n \times n}$ is also orthogonally similar (i.e., real arithmetic) to a quasi-upper-triangular matrix. A quasi-upper-triangular matrix is block upper triangular with $1 \times 1$ diagonal blocks corresponding to its real eigenvalues and $2 \times 2$ diagonal blocks corresponding to its complex conjugate pairs of eigenvalues.

Theorem 10.6 (Murnaghan-Wintner). Let $A \in \mathbb{R}^{n \times n}$. Then there exists an orthogonal matrix $U$ such that $U^T A U = S$, where $S$ is quasi-upper-triangular.

Definition 10.7. The triangular matrix $T$ in Theorem 10.5 is called a Schur canonical form or Schur form. The quasi-upper-triangular matrix $S$ in Theorem 10.6 is called a real Schur canonical form or real Schur form (RSF). The columns of a unitary [orthogonal] matrix $U$ that reduces a matrix to [real] Schur form are called Schur vectors.

Example 10.8. A $3 \times 3$ matrix $S$ whose leading $2 \times 2$ diagonal block is $\begin{bmatrix} 2 & 5 \\ -2 & -4 \end{bmatrix}$ and whose (3,1) and (3,2) entries are 0 is in RSF. Its real JCF replaces this block by $\begin{bmatrix} -1 & 1 \\ -1 & -1 \end{bmatrix}$, the real Jordan block corresponding to the complex conjugate pair of eigenvalues $-1 \pm j$.

Note that only the first Schur vector (and then only if the corresponding first eigenvalue is real if $U$ is orthogonal) is an eigenvector. However, what is true, and sufficient for virtually all applications (see, for example, [17]), is that the first $k$ Schur vectors span the same $A$-invariant subspace as the eigenvectors corresponding to the first $k$ eigenvalues along the diagonal of $T$ (or $S$).
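Both forms are computable with standard library routines; a brief sketch (SciPy's schur; the test matrix is ours):

```python
import numpy as np
from scipy.linalg import schur

A = np.array([[ 0.0, 2.0, 1.0],
              [-2.0, 0.0, 3.0],
              [ 0.0, 0.0, 4.0]])           # real matrix with a complex pair +/- 2j
T, U = schur(A, output='complex')          # Schur form: U^H A U = T upper triangular
S, Q = schur(A, output='real')             # RSF: Q^T A Q = S quasi-upper-triangular
assert np.allclose(U @ T @ U.conj().T, A)
assert np.allclose(Q @ S @ Q.T, A)
```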
While every matrix can be reduced to Schur form (or RSF), it is of interest to know when we can go further and reduce a matrix via unitary similarity to diagonal form. The following theorem answers this question.

Theorem 10.9. A matrix $A \in \mathbb{C}^{n \times n}$ is unitarily similar to a diagonal matrix if and only if $A$ is normal (i.e., $A^H A = AA^H$).

Proof: Suppose $U$ is a unitary matrix such that $U^H A U = D$, where $D$ is diagonal. Then
$$AA^H = UDU^H UD^H U^H = UDD^H U^H = UD^H D U^H = A^H A,$$
so $A$ is normal.
Conversely, suppose $A$ is normal and let $U$ be a unitary matrix such that $U^H A U = T$, where $T$ is an upper triangular matrix (Theorem 10.5). Then
$$T^H T = U^H A^H A U = U^H A A^H U = T T^H.$$
It is then a routine exercise to show that $T$ must, in fact, be diagonal. □

10.2 Definite Matrices

Definition 10.10. A symmetric matrix $A \in \mathbb{R}^{n \times n}$ is

1. positive definite if and only if $x^T A x > 0$ for all nonzero $x \in \mathbb{R}^n$. We write $A > 0$.

2. nonnegative definite (or positive semidefinite) if and only if $x^T A x \geq 0$ for all nonzero $x \in \mathbb{R}^n$. We write $A \geq 0$.

3. negative definite if $-A$ is positive definite. We write $A < 0$.

4. nonpositive definite (or negative semidefinite) if $-A$ is nonnegative definite. We write $A \leq 0$.

Also, if $A$ and $B$ are symmetric matrices, we write $A > B$ if and only if $A - B > 0$ or $B - A < 0$. Similarly, we write $A \geq B$ if and only if $A - B \geq 0$ or $B - A \leq 0$.

Remark 10.11. If $A \in \mathbb{C}^{n \times n}$ is Hermitian, all the above definitions hold except that superscript $H$'s replace $T$'s. Indeed, this is generally true for all results in the remainder of this section that may be stated in the real case for simplicity.

Remark 10.12. If a matrix is neither definite nor semidefinite, it is said to be indefinite.

Theorem 10.13. Let $A = A^H \in \mathbb{C}^{n \times n}$ with eigenvalues $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_n$. Then for all $x \in \mathbb{C}^n$,
$$\lambda_n x^H x \leq x^H A x \leq \lambda_1 x^H x.$$

Proof: Let $U$ be a unitary matrix that diagonalizes $A$ as in Theorem 10.2. Furthermore, let $y = U^H x$, where $x$ is an arbitrary vector in $\mathbb{C}^n$, and denote the components of $y$ by $\eta_i$, $i \in \underline{n}$. Then
$$x^H A x = (U^H x)^H U^H A U (U^H x) = y^H D y = \sum_{i=1}^n \lambda_i |\eta_i|^2.$$
But clearly
$$\sum_{i=1}^n \lambda_i |\eta_i|^2 \leq \lambda_1 y^H y = \lambda_1 x^H x$$
and
$$\sum_{i=1}^n \lambda_i |\eta_i|^2 \geq \lambda_n y^H y = \lambda_n x^H x,$$
from which the theorem follows. □

Remark 10.14. The ratio $\frac{x^H A x}{x^H x}$ for $A = A^H \in \mathbb{C}^{n \times n}$ and nonzero $x \in \mathbb{C}^n$ is called the Rayleigh quotient of $x$. Theorem 10.13 provides upper ($\lambda_1$) and lower ($\lambda_n$) bounds for the Rayleigh quotient. If $A = A^H \in \mathbb{C}^{n \times n}$ is positive definite, $x^H A x > 0$ for all nonzero $x \in \mathbb{C}^n$, so $0 < \lambda_n \leq \cdots \leq \lambda_1$.
Corollary 10.15. Let $A \in \mathbb{C}^{n \times n}$. Then $\|A\|_2 = \lambda_{\max}^{1/2}(A^H A)$.

Proof: For all $x \in \mathbb{C}^n$ we have
$$\frac{\|Ax\|_2^2}{\|x\|_2^2} = \frac{x^H A^H A x}{x^H x} \leq \lambda_{\max}(A^H A).$$
Let $x$ be an eigenvector corresponding to $\lambda_{\max}(A^H A)$. Then $\frac{\|Ax\|_2^2}{\|x\|_2^2} = \lambda_{\max}(A^H A)$, whence
$$\|A\|_2 = \max_{x \neq 0} \frac{\|Ax\|_2}{\|x\|_2} = \lambda_{\max}^{1/2}(A^H A). \;\; \Box$$
Definition 10.16. A principal submatrix of an $n \times n$ matrix $A$ is the $(n-k) \times (n-k)$ matrix that remains by deleting $k$ rows and the corresponding $k$ columns. A leading principal submatrix of order $n-k$ is obtained by deleting the last $k$ rows and columns.

Theorem 10.17. A symmetric matrix $A \in \mathbb{R}^{n \times n}$ is positive definite if and only if any of the following three equivalent conditions hold:

1. The determinants of all leading principal submatrices of $A$ are positive.

2. All eigenvalues of $A$ are positive.

3. $A$ can be written in the form $M^T M$, where $M \in \mathbb{R}^{n \times n}$ is nonsingular.
Theorem 10.18. A symmetric matrix $A \in \mathbb{R}^{n \times n}$ is nonnegative definite if and only if any of the following three equivalent conditions hold:

1. The determinants of all principal submatrices of $A$ are nonnegative.

2. All eigenvalues of $A$ are nonnegative.

3. $A$ can be written in the form $M^T M$, where $M \in \mathbb{R}^{k \times n}$ and $k \geq \operatorname{rank}(A) = \operatorname{rank}(M)$.

Remark 10.19. Note that the determinants of all principal submatrices must be nonnegative in Theorem 10.18.1, not just those of the leading principal submatrices. For example, consider the matrix $A = \begin{bmatrix} 0 & 0 \\ 0 & -1 \end{bmatrix}$. The determinant of the $1 \times 1$ leading submatrix is 0 and the determinant of the $2 \times 2$ leading submatrix is also 0 (cf. Theorem 10.17). However, the
principal submatrix consisting of the (2,2) element is, in fact, negative and $A$ is nonpositive definite.

Remark 10.20. The factor $M$ in Theorem 10.18.3 is not unique. For example, if
$$A = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix},$$
then $M$ can be
$$[1 \;\; 0], \quad \begin{bmatrix} \frac{1}{\sqrt{2}} & 0 \\ \frac{1}{\sqrt{2}} & 0 \end{bmatrix}, \quad \begin{bmatrix} \frac{1}{\sqrt{3}} & 0 \\ \frac{1}{\sqrt{3}} & 0 \\ \frac{1}{\sqrt{3}} & 0 \end{bmatrix}, \; \ldots.$$

Recall that $A \geq B$ if the matrix $A - B$ is nonnegative definite. The following theorem is useful in "comparing" symmetric matrices. Its proof is straightforward from basic definitions.

Theorem 10.21. Let $A, B \in \mathbb{R}^{n \times n}$ be symmetric.

1. If $A \geq B$ and $M \in \mathbb{R}^{n \times m}$, then $M^T A M \geq M^T B M$.

2. If $A > B$ and $M \in \mathbb{R}_m^{n \times m}$, then $M^T A M > M^T B M$.
The following standard theorem is stated without proof (see, for example, [16, p. 181]). It concerns the notion of the "square root" of a matrix. That is, if $A \in \mathbb{R}^{n \times n}$, we say that $S \in \mathbb{R}^{n \times n}$ is a square root of $A$ if $S^2 = A$. In general, matrices (both symmetric and nonsymmetric) have infinitely many square roots. For example, if $A = I_2$, any matrix $S$ of the form $\begin{bmatrix} \cos\theta & \sin\theta \\ \sin\theta & -\cos\theta \end{bmatrix}$ is a square root.

Theorem 10.22. Let $A \in \mathbb{R}^{n \times n}$ be nonnegative definite. Then $A$ has a unique nonnegative definite square root $S$. Moreover, $SA = AS$ and $\operatorname{rank} S = \operatorname{rank} A$ (and hence $S$ is positive definite if $A$ is positive definite).
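The unique nonnegative definite square root can be computed directly from the spectral decomposition of Theorem 10.4; a sketch (helper name ours):

```python
import numpy as np

def psd_sqrt(A):
    """Unique nonnegative definite square root of a nonnegative definite A."""
    lam, X = np.linalg.eigh(A)
    lam = np.clip(lam, 0.0, None)            # guard tiny negative rounding errors
    return X @ np.diag(np.sqrt(lam)) @ X.T

A = np.array([[2.0, 1.0], [1.0, 2.0]])
S = psd_sqrt(A)
assert np.allclose(S @ S, A)
assert np.allclose(S @ A, A @ S)             # SA = AS, as the theorem asserts
```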
A stronger form of the third characterization in Theorem 10.17 is available and is known as the Cholesky factorization. It is stated and proved below for the more general Hermitian case.

Theorem 10.23. Let $A \in \mathbb{C}^{n \times n}$ be Hermitian and positive definite. Then there exists a unique nonsingular lower triangular matrix $L$ with positive diagonal elements such that $A = LL^H$.

Proof: The proof is by induction. The case $n = 1$ is trivially true. Write the matrix $A$ in the form
$$A = \begin{bmatrix} B & b \\ b^H & a_{nn} \end{bmatrix}.$$
By our induction hypothesis, assume the result is true for matrices of order $n-1$ so that $B$ may be written as $B = L_1 L_1^H$, where $L_1 \in \mathbb{C}^{(n-1) \times (n-1)}$ is nonsingular and lower triangular
with positive diagonal elements. It remains to prove that we can write the $n \times n$ matrix $A$ in the form
$$\begin{bmatrix} B & b \\ b^H & a_{nn} \end{bmatrix} = \begin{bmatrix} L_1 & 0 \\ c^H & \alpha \end{bmatrix} \begin{bmatrix} L_1^H & c \\ 0 & \alpha \end{bmatrix},$$
where $\alpha$ is positive. Performing the indicated matrix multiplication and equating the corresponding submatrices, we see that we must have $L_1 c = b$ and $a_{nn} = c^H c + \alpha^2$. Clearly $c$ is given simply by $c = L_1^{-1} b$. Substituting in the expression involving $\alpha$, we find $\alpha^2 = a_{nn} - b^H L_1^{-H} L_1^{-1} b = a_{nn} - b^H B^{-1} b$ (= the Schur complement of $B$ in $A$). But we know that
$$0 < \det(A) = \det \begin{bmatrix} B & b \\ b^H & a_{nn} \end{bmatrix} = \det(B)\, \det(a_{nn} - b^H B^{-1} b).$$
Since $\det(B) > 0$, we must have $a_{nn} - b^H B^{-1} b > 0$. Choosing $\alpha$ to be the positive square root of $a_{nn} - b^H B^{-1} b$ completes the proof. □
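The induction step of the proof is constructive and yields the classical bordered Cholesky algorithm. The following sketch (ours, not the book's) follows the proof line by line:

```python
import numpy as np

def cholesky_bordered(A):
    """Lower triangular L with positive diagonal such that A = L L^H,
    built exactly as in the induction step of the proof of Theorem 10.23."""
    n = A.shape[0]
    if n == 1:
        return np.array([[np.sqrt(A[0, 0].real)]], dtype=complex)
    B, b, ann = A[:-1, :-1], A[:-1, -1], A[-1, -1].real
    L1 = cholesky_bordered(B)
    c = np.linalg.solve(L1, b)                   # solve L1 c = b
    alpha = np.sqrt(ann - (c.conj() @ c).real)   # Schur complement of B in A
    L = np.zeros((n, n), dtype=complex)
    L[:-1, :-1] = L1
    L[-1, :-1] = c.conj()                        # bottom row [c^H, alpha]
    L[-1, -1] = alpha
    return L

A = np.array([[4.0, 2.0], [2.0, 3.0]], dtype=complex)
L = cholesky_bordered(A)
assert np.allclose(L @ L.conj().T, A)
```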
10.3 Equivalence Transformations and Congruence

Theorem 10.24. Let $A \in \mathbb{C}_r^{m \times n}$. Then there exist matrices $P \in \mathbb{C}_m^{m \times m}$ and $Q \in \mathbb{C}_n^{n \times n}$ such that
$$PAQ = \begin{bmatrix} I_r & 0 \\ 0 & 0 \end{bmatrix}. \qquad (10.4)$$

Proof: A classical proof can be consulted in, for example, [21, p. 131]. Alternatively, suppose $A$ has an SVD of the form (5.2) in its complex version. Then
$$\begin{bmatrix} S^{-1} & 0 \\ 0 & I \end{bmatrix} \begin{bmatrix} U_1^H \\ U_2^H \end{bmatrix} A V = \begin{bmatrix} I_r & 0 \\ 0 & 0 \end{bmatrix}.$$
Take $P = \begin{bmatrix} S^{-1} & 0 \\ 0 & I \end{bmatrix} \begin{bmatrix} U_1^H \\ U_2^H \end{bmatrix}$ and $Q = V$ to complete the proof. □

Note that the greater freedom afforded by the equivalence transformation of Theorem 10.24, as opposed to the more restrictive situation of a similarity transformation, yields a far "simpler" canonical form (10.4). However, numerical procedures for computing such an equivalence directly via, say, Gaussian or elementary row and column operations, are generally unreliable. The numerically preferred equivalence is, of course, the unitary equivalence known as the SVD. However, the SVD is relatively expensive to compute and other canonical forms exist that are intermediate between (10.4) and the SVD; see, for example, [7, Ch. 5], [4, Ch. 2]. Two such forms are stated here. They are more stably computable than (10.4) and more efficiently computable than a full SVD. Many similar results are also available.
Theorem 10.25 (Complete Orthogonal Decomposition). Let $A \in \mathbb{C}_r^{m \times n}$. Then there exist unitary matrices $U \in \mathbb{C}^{m \times m}$ and $V \in \mathbb{C}^{n \times n}$ such that
$$U^H A V = \begin{bmatrix} R & 0 \\ 0 & 0 \end{bmatrix}, \qquad (10.5)$$
where $R \in \mathbb{C}_r^{r \times r}$ is upper (or lower) triangular with positive diagonal elements.

Proof: For the proof, see [4]. □

Theorem 10.26. Let $A \in \mathbb{C}_r^{m \times n}$. Then there exists a unitary matrix $Q \in \mathbb{C}^{m \times m}$ and a permutation matrix $\Pi \in \mathbb{C}^{n \times n}$ such that
$$QA\Pi = \begin{bmatrix} R & S \\ 0 & 0 \end{bmatrix}, \qquad (10.6)$$
where $R \in \mathbb{C}_r^{r \times r}$ is upper triangular and $S \in \mathbb{C}^{r \times (n-r)}$ is arbitrary but in general nonzero.

Proof: For the proof, see [4]. □

Remark 10.27. When $A$ has full column rank but is "near" a rank deficient matrix, various rank revealing QR decompositions are available that can sometimes detect such phenomena at a cost considerably less than a full SVD. Again, see [4] for details.

Definition 10.28. Let $A \in \mathbb{C}^{n \times n}$ and $X \in \mathbb{C}_n^{n \times n}$. The transformation $A \mapsto X^H A X$ is called a congruence. Note that a congruence is a similarity if and only if $X$ is unitary.

Note that congruence preserves the property of being Hermitian; i.e., if $A$ is Hermitian, then $X^H A X$ is also Hermitian. It is of interest to ask what other properties of a matrix are preserved under congruence. It turns out that the principal property so preserved is the sign of each eigenvalue.

Definition 10.29. Let $A = A^H \in \mathbb{C}^{n \times n}$ and let $\pi$, $\nu$, and $\zeta$ denote the numbers of positive, negative, and zero eigenvalues, respectively, of $A$. Then the inertia of $A$ is the triple of numbers $\operatorname{In}(A) = (\pi, \nu, \zeta)$. The signature of $A$ is given by $\operatorname{sig}(A) = \pi - \nu$.

Example 10.30.

1. $\operatorname{In}\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{bmatrix} = (2, 1, 1)$.

2. If $A = A^H \in \mathbb{C}^{n \times n}$, then $A > 0$ if and only if $\operatorname{In}(A) = (n, 0, 0)$.

3. If $\operatorname{In}(A) = (\pi, \nu, \zeta)$, then $\operatorname{rank}(A) = \pi + \nu$.

Theorem 10.31 (Sylvester's Law of Inertia). Let $A = A^H \in \mathbb{C}^{n \times n}$ and $X \in \mathbb{C}_n^{n \times n}$. Then $\operatorname{In}(A) = \operatorname{In}(X^H A X)$.

Proof: For the proof, see, for example, [21, p. 134]. □
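Inertia is easily computed from the eigenvalues of a Hermitian matrix, and Sylvester's law can be checked numerically; a sketch (tolerance, test matrix, and helper name are illustrative):

```python
import numpy as np

def inertia(A, tol=1e-10):
    """In(A) = (pi, nu, zeta) for Hermitian A, from its (real) eigenvalues."""
    lam = np.linalg.eigvalsh(A)
    return (int(np.sum(lam > tol)), int(np.sum(lam < -tol)),
            int(np.sum(np.abs(lam) <= tol)))

A = np.diag([3.0, 1.0, -2.0, 0.0])          # inertia (2, 1, 1)
rng = np.random.default_rng(1)
X = rng.standard_normal((4, 4))             # almost surely nonsingular
# Sylvester's law: congruence preserves the inertia.
assert inertia(A) == inertia(X.T @ A @ X) == (2, 1, 1)
```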
Theorem 10.31 guarantees that rank and signature of a matrix are preserved under
congruence. We then have the following.
Theorem 10.32. Let $A = A^H \in \mathbb{C}^{n \times n}$ with $\operatorname{In}(A) = (\pi, \nu, \zeta)$. Then there exists a matrix $X \in \mathbb{C}_n^{n \times n}$ such that $X^H A X = \operatorname{diag}(1, \ldots, 1, -1, \ldots, -1, 0, \ldots, 0)$, where the number of 1's is $\pi$, the number of $-1$'s is $\nu$, and the number of 0's is $\zeta$.

Proof: Let $\lambda_1, \ldots, \lambda_n$ denote the eigenvalues of $A$ and order them such that the first $\pi$ are positive, the next $\nu$ are negative, and the final $\zeta$ are 0. By Theorem 10.2 there exists a unitary matrix $U$ such that $U^H A U = \operatorname{diag}(\lambda_1, \ldots, \lambda_n)$. Define the $n \times n$ matrix
$$W = \operatorname{diag}\!\left(1/\sqrt{\lambda_1}, \ldots, 1/\sqrt{\lambda_\pi},\; 1/\sqrt{-\lambda_{\pi+1}}, \ldots, 1/\sqrt{-\lambda_{\pi+\nu}},\; 1, \ldots, 1\right).$$
Then it is easy to check that $X = UW$ yields the desired result. □

10.3.1 Block matrices and definiteness

Theorem 10.33. Suppose $A = A^T$ and $D = D^T$. Then
$$\begin{bmatrix} A & B \\ B^T & D \end{bmatrix} > 0$$
if and only if either $A > 0$ and $D - B^T A^{-1} B > 0$, or $D > 0$ and $A - B D^{-1} B^T > 0$.

Proof: The proof follows by considering, for example, the congruence
$$\begin{bmatrix} A & B \\ B^T & D \end{bmatrix} \mapsto \begin{bmatrix} I & -A^{-1}B \\ 0 & I \end{bmatrix}^T \begin{bmatrix} A & B \\ B^T & D \end{bmatrix} \begin{bmatrix} I & -A^{-1}B \\ 0 & I \end{bmatrix}.$$
The details are straightforward and are left to the reader. □

Remark 10.34. Note the symmetric Schur complements of $A$ (or $D$) in the theorem.

Theorem 10.35. Suppose $A = A^T$ and $D = D^T$. Then
$$\begin{bmatrix} A & B \\ B^T & D \end{bmatrix} \geq 0$$
if and only if $A \geq 0$, $AA^+ B = B$, and $D - B^T A^+ B \geq 0$.

Proof: Consider the congruence with
$$\begin{bmatrix} I & -A^+ B \\ 0 & I \end{bmatrix}$$
and proceed as in the proof of Theorem 10.33. □
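A numerical check of Theorem 10.33 via the Schur complement (illustrative data; helper name ours):

```python
import numpy as np

def block_pd(A, B, D):
    """Test [[A, B], [B^T, D]] > 0 via Theorem 10.33."""
    M = np.block([[A, B], [B.T, D]])
    direct = bool(np.all(np.linalg.eigvalsh(M) > 0))
    via_schur = (np.all(np.linalg.eigvalsh(A) > 0) and
                 np.all(np.linalg.eigvalsh(D - B.T @ np.linalg.solve(A, B)) > 0))
    assert direct == via_schur
    return direct

A = np.array([[2.0, 0.0], [0.0, 2.0]])
B = np.array([[1.0], [0.0]])
D = np.array([[1.0]])
print(block_pd(A, B, D))   # True: A > 0 and D - B^T A^{-1} B = 0.5 > 0
```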
10.4 Rational Canonical Form
One final canonical form to be mentioned is the rational canonical form.
Definition 10.36. A matrix $A \in \mathbb{R}^{n \times n}$ is said to be nonderogatory if its minimal polynomial and characteristic polynomial are the same or, equivalently, if its Jordan canonical form has only one block associated with each distinct eigenvalue.

Suppose $A \in \mathbb{R}^{n \times n}$ is a nonderogatory matrix and suppose its characteristic polynomial is $\pi(\lambda) = \lambda^n - (a_0 + a_1\lambda + \cdots + a_{n-1}\lambda^{n-1})$. Then it can be shown (see [12]) that $A$ is similar to a matrix of the form
$$\begin{bmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & & & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \\ a_0 & a_1 & a_2 & \cdots & a_{n-1} \end{bmatrix}. \qquad (10.7)$$

Definition 10.37. A matrix $A \in \mathbb{R}^{n \times n}$ of the form (10.7) is called a companion matrix or is said to be in companion form.

Companion matrices also appear in the literature in several equivalent forms. To illustrate, consider the companion matrix
$$A = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ a_0 & a_1 & a_2 & a_3 \end{bmatrix}. \qquad (10.8)$$
This matrix is a special case of a matrix in lower Hessenberg form. Using the reverse-order identity similarity $P$ given by (9.18), $A$ is easily seen to be similar to the following matrix in upper Hessenberg form:
$$\begin{bmatrix} a_3 & a_2 & a_1 & a_0 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}. \qquad (10.9)$$
Moreover, since a matrix is similar to its transpose (see exercise 13 in Chapter 9), the following are also companion matrices similar to the above:
$$\begin{bmatrix} a_3 & 1 & 0 & 0 \\ a_2 & 0 & 1 & 0 \\ a_1 & 0 & 0 & 1 \\ a_0 & 0 & 0 & 0 \end{bmatrix}, \qquad \begin{bmatrix} 0 & 0 & 0 & a_0 \\ 1 & 0 & 0 & a_1 \\ 0 & 1 & 0 & a_2 \\ 0 & 0 & 1 & a_3 \end{bmatrix}. \qquad (10.10)$$
Notice that in all cases a companion matrix is nonsingular if and only if $a_0 \neq 0$. In fact, the inverse of a nonsingular companion matrix is again in companion form. For example,
$$\begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ a_0 & a_1 & a_2 & a_3 \end{bmatrix}^{-1} = \begin{bmatrix} -\frac{a_1}{a_0} & -\frac{a_2}{a_0} & -\frac{a_3}{a_0} & \frac{1}{a_0} \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}, \qquad (10.11)$$
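The correspondence between (10.7) and the roots of $\pi(\lambda)$ is easy to check numerically; a sketch (NumPy; the coefficients below are chosen so that the roots are 1, 2, 3):

```python
import numpy as np

def companion(a):
    """Companion matrix (10.7) for pi(lambda) = lambda^n - (a0 + a1*lambda
    + ... + a_{n-1}*lambda^{n-1}), with a = [a0, a1, ..., a_{n-1}]."""
    n = len(a)
    C = np.zeros((n, n))
    C[:-1, 1:] = np.eye(n - 1)      # superdiagonal of ones
    C[-1, :] = a                    # bottom row a0, a1, ..., a_{n-1}
    return C

a = np.array([6.0, -11.0, 6.0])     # pi(lambda) = (lambda-1)(lambda-2)(lambda-3)
C = companion(a)
print(np.sort(np.linalg.eigvals(C).real))    # approximately [1. 2. 3.]
```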
with a similar result for companion matrices of the form (10.10).

If a companion matrix of the form (10.7) is singular, i.e., if $a_0 = 0$, then its pseudo-inverse can still be computed. Let $a \in \mathbb{R}^{n-1}$ denote the vector $[a_1, a_2, \ldots, a_{n-1}]^T$ and let $c = \frac{1}{1 + a^T a}$. Then it is easily verified (writing (10.7) with $a_0 = 0$ in block form) that
$$\begin{bmatrix} 0 & I_{n-1} \\ 0 & a^T \end{bmatrix}^{+} = \begin{bmatrix} 0 & 0 \\ I_{n-1} - caa^T & ca \end{bmatrix}.$$
Note that $I - caa^T = (I + aa^T)^{-1}$, and hence the pseudoinverse of a singular companion matrix is not a companion matrix unless $a = 0$.

Companion matrices have many other interesting properties, among which, and perhaps surprisingly, is the fact that their singular values can be found in closed form; see [14].

Theorem 10.38. Let $\sigma_1 \geq \sigma_2 \geq \cdots \geq \sigma_n$ be the singular values of the companion matrix $A$ in (10.7). Let $\alpha = a_1^2 + a_2^2 + \cdots + a_{n-1}^2$ and $\gamma = 1 + a_0^2 + \alpha$. Then
$$\sigma_1^2 = \frac{1}{2}\left(\gamma + \sqrt{\gamma^2 - 4a_0^2}\right),$$
$$\sigma_i = 1 \quad \text{for } i = 2, 3, \ldots, n-1,$$
$$\sigma_n^2 = \frac{1}{2}\left(\gamma - \sqrt{\gamma^2 - 4a_0^2}\right).$$
If $a_0 \neq 0$, the largest and smallest singular values can also be written in the equivalent form obtained by noting that $\sigma_1^2 \sigma_n^2 = a_0^2$, i.e., $\sigma_1 \sigma_n = |a_0|$.

Remark 10.39. Explicit formulas for all the associated right and left singular vectors can also be derived easily.

If $A \in \mathbb{R}^{n \times n}$ is derogatory, i.e., has more than one Jordan block associated with at least one eigenvalue, then it is not similar to a companion matrix of the form (10.7). However, it can be shown that a derogatory matrix is similar to a block diagonal matrix, each of whose diagonal blocks is a companion matrix. Such matrices are said to be in rational canonical form (or Frobenius canonical form). For details, see, for example, [12].

Companion matrices appear frequently in the control and signal processing literature but unfortunately they are often very difficult to work with numerically. Algorithms to reduce an arbitrary matrix to companion form are numerically unstable. Moreover, companion matrices are known to possess many undesirable numerical properties. For example, in general and especially as $n$ increases, their eigenstructure is extremely ill conditioned, nonsingular ones are nearly singular, stable ones are nearly unstable, and so forth [14].
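Theorem 10.38 can be verified directly against a library SVD; a sketch with arbitrary illustrative coefficients:

```python
import numpy as np

a = np.array([0.5, -1.0, 2.0, 3.0])          # a0, a1, a2, a3 for n = 4
n = len(a)
C = np.zeros((n, n)); C[:-1, 1:] = np.eye(n - 1); C[-1, :] = a
sv = np.linalg.svd(C, compute_uv=False)      # descending singular values
alpha = np.sum(a[1:] ** 2)                   # a1^2 + ... + a_{n-1}^2
gamma = 1.0 + a[0] ** 2 + alpha
s1 = np.sqrt((gamma + np.sqrt(gamma**2 - 4 * a[0]**2)) / 2)
sn = np.sqrt((gamma - np.sqrt(gamma**2 - 4 * a[0]**2)) / 2)
assert np.allclose(sv[0], s1) and np.allclose(sv[-1], sn)
assert np.allclose(sv[1:-1], 1.0)            # the middle singular values are 1
```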
Companion matrices and rational canonical forms are generally to be avoided in floating-point computation.

Remark 10.40. Theorem 10.38 yields some understanding of why difficult numerical behavior might be expected for companion matrices. For example, when solving linear systems of equations of the form (6.2), one measure of numerical sensitivity is $\kappa_p(A) = \|A\|_p \|A^{-1}\|_p$, the so-called condition number of $A$ with respect to inversion and with respect to the matrix $p$-norm. If this number is large, say $O(10^k)$, one may lose up to $k$ digits of precision. In the 2-norm, this condition number is the ratio of largest to smallest singular values which, by the theorem, can be determined explicitly as
$$\kappa_2(A) = \frac{\sigma_1}{\sigma_n} = \frac{\gamma + \sqrt{\gamma^2 - 4a_0^2}}{2|a_0|}.$$
It is easy to show that $\frac{\gamma}{2|a_0|} \leq \kappa_2(A) \leq \frac{\gamma}{|a_0|}$, and when $a_0$ is small or $\gamma$ is large (or both), then $\kappa_2(A) \approx \frac{\gamma}{|a_0|}$. It is not unusual for $\gamma$ to be large for large $n$. Note that explicit formulas for $\kappa_1(A)$ and $\kappa_\infty(A)$ can also be determined easily by using (10.11).
EXERCISES

1. Show that if a triangular matrix is normal, then it must be diagonal.

2. Prove that if $A \in \mathbb{R}^{n \times n}$ is normal, then $\mathcal{N}(A) = \mathcal{N}(A^T)$.

3. Let $A \in \mathbb{C}^{n \times n}$ and define $\rho(A) = \max_{\lambda \in \Lambda(A)} |\lambda|$. Then $\rho(A)$ is called the spectral radius of $A$. Show that if $A$ is normal, then $\rho(A) = \|A\|_2$. Show that the converse is true if $n = 2$.

4. Let $A \in \mathbb{C}^{n \times n}$ be normal with eigenvalues $\lambda_1, \ldots, \lambda_n$ and singular values $\sigma_1 \geq \sigma_2 \geq \cdots \geq \sigma_n \geq 0$. Show that $\sigma_i(A) = |\lambda_i(A)|$ for $i \in \underline{n}$.

5. Use the reverse-order identity matrix $P$ introduced in (9.18) and the matrix $U$ in Theorem 10.5 to find a unitary matrix $Q$ that reduces $A \in \mathbb{C}^{n \times n}$ to lower triangular form.
6. Let $A \in \mathbb{C}^{2 \times 2}$. Find a unitary matrix $U$ such that $U^H A U$ is upper triangular.
7. If $A \in \mathbb{R}^{n \times n}$ is positive definite, show that $A^{-1}$ must also be positive definite.

8. Suppose $A \in \mathbb{R}^{n \times n}$ is positive definite. Is $\begin{bmatrix} A & I \\ I & A^{-1} \end{bmatrix} \geq 0$?

9. Let $R, S \in \mathbb{R}^{n \times n}$ be symmetric. Show that $\begin{bmatrix} R & I \\ I & S \end{bmatrix} > 0$ if and only if $S > 0$ and $R > S^{-1}$.
10. Find the inertia of the following matrices:
$$\text{(b)} \begin{bmatrix} 2 & 1+j \\ 1-j & 2 \end{bmatrix}, \qquad \text{(d)} \begin{bmatrix} 1 & 1+j \\ 1-j & 1 \end{bmatrix}.$$
Chapter 11

Linear Differential and Difference Equations

11.1 Differential Equations

In this section we study solutions of the linear homogeneous system of differential equations
$$\dot{x}(t) = Ax(t); \quad x(t_0) = x_0 \in \mathbb{R}^n \qquad (11.1)$$
for $t \geq t_0$. This is known as an initial-value problem. We restrict our attention in this chapter only to the so-called time-invariant case, where the matrix $A \in \mathbb{R}^{n \times n}$ is constant and does not depend on $t$. The solution of (11.1) is then known always to exist and be unique. It can be described conveniently in terms of the matrix exponential.

Definition 11.1. For all $A \in \mathbb{R}^{n \times n}$, the matrix exponential $e^A \in \mathbb{R}^{n \times n}$ is defined by the power series
$$e^A = \sum_{k=0}^{+\infty} \frac{1}{k!} A^k. \qquad (11.2)$$

The series (11.2) can be shown to converge for all $A$ (has radius of convergence equal to $+\infty$). The solution of (11.1) involves the matrix
$$e^{tA} = \sum_{k=0}^{+\infty} \frac{t^k}{k!} A^k, \qquad (11.3)$$
which thus also converges for all $A$ and uniformly in $t$.

11.1.1 Properties of the matrix exponential

1. $e^0 = I$.
   Proof: This follows immediately from Definition 11.1 by setting $A = 0$.

2. For all $A \in \mathbb{R}^{n \times n}$, $(e^A)^T = e^{A^T}$.
   Proof: This follows immediately from Definition 11.1 and linearity of the transpose.
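In floating point, $e^A$ is not computed by truncating (11.2) (library routines typically use scaling and squaring with Pade approximants), but for modest $\|A\|$ a truncated series agrees well with a library routine. A sketch (SciPy, our choice of tools), which also checks property 2 above:

```python
import math
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
E = expm(A)                                   # library matrix exponential
S = sum(np.linalg.matrix_power(A, k) / math.factorial(k) for k in range(30))
assert np.allclose(E, S)                      # truncated series (11.2) agrees
assert np.allclose(expm(A).T, expm(A.T))      # property 2: (e^A)^T = e^{A^T}
```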
3. For all $A \in \mathbb{R}^{n \times n}$ and for all $t, \tau \in \mathbb{R}$, $e^{(t+\tau)A} = e^{tA} e^{\tau A} = e^{\tau A} e^{tA}$.
   Proof: Note that
   $$e^{(t+\tau)A} = I + (t+\tau)A + \frac{(t+\tau)^2}{2!} A^2 + \cdots$$
   and
   $$e^{tA} e^{\tau A} = \left(I + tA + \frac{t^2}{2!} A^2 + \cdots\right)\left(I + \tau A + \frac{\tau^2}{2!} A^2 + \cdots\right).$$
   Compare like powers of $A$ in the above two equations and use the binomial theorem on $(t+\tau)^k$.

4. For all $A, B \in \mathbb{R}^{n \times n}$ and for all $t \in \mathbb{R}$, $e^{t(A+B)} = e^{tA} e^{tB} = e^{tB} e^{tA}$ if and only if $A$ and $B$ commute, i.e., $AB = BA$.
   Proof: Note that
   $$e^{t(A+B)} = I + t(A+B) + \frac{t^2}{2!}(A+B)^2 + \cdots$$
   and
   $$e^{tA} e^{tB} = \left(I + tA + \frac{t^2}{2!} A^2 + \cdots\right)\left(I + tB + \frac{t^2}{2!} B^2 + \cdots\right),$$
   while
   $$e^{tB} e^{tA} = \left(I + tB + \frac{t^2}{2!} B^2 + \cdots\right)\left(I + tA + \frac{t^2}{2!} A^2 + \cdots\right).$$
   Compare like powers of $t$ in the first equation and the second or third and use the binomial theorem on $(A+B)^k$ and the commutativity of $A$ and $B$.

5. For all $A \in \mathbb{R}^{n \times n}$ and for all $t \in \mathbb{R}$, $(e^{tA})^{-1} = e^{-tA}$.
   Proof: Simply take $\tau = -t$ in property 3.

6. Let $\mathcal{L}$ denote the Laplace transform and $\mathcal{L}^{-1}$ the inverse Laplace transform. Then for all $A \in \mathbb{R}^{n \times n}$ and for all $t \in \mathbb{R}$,
   (a) $\mathcal{L}\{e^{tA}\} = (sI - A)^{-1}$.
   (b) $\mathcal{L}^{-1}\{(sI - A)^{-1}\} = e^{tA}$.
   Proof: We prove only (a). Part (b) follows similarly.
   $$\mathcal{L}\{e^{tA}\} = \int_0^{+\infty} e^{-st} e^{tA}\, dt = \int_0^{+\infty} e^{t(A - sI)}\, dt \quad \text{since } A \text{ and } sI \text{ commute}$$
   $$= \int_0^{+\infty} \sum_{i=1}^n x_i e^{(\lambda_i - s)t} y_i^H \, dt \quad \text{assuming } A \text{ is diagonalizable}$$
   $$= \sum_{i=1}^n \left[\int_0^{+\infty} e^{(\lambda_i - s)t}\, dt\right] x_i y_i^H = \sum_{i=1}^n \frac{1}{s - \lambda_i}\, x_i y_i^H \quad \text{assuming } \operatorname{Re} s > \operatorname{Re} \lambda_i \text{ for } i \in \underline{n}$$
$$= (sI - A)^{-1}.$$
The matrix $(sI - A)^{-1}$ is called the resolvent of $A$ and is defined for all $s$ not in $\Lambda(A)$. Notice in the proof that we have assumed, for convenience, that $A$ is diagonalizable. If this is not the case, the scalar dyadic decomposition can be replaced by
$$e^{t(A - sI)} = \sum_{i=1}^m X_i e^{t(J_i - sI)} Y_i^H$$
using the JCF. All succeeding steps in the proof then follow in a straightforward way.

7. For all $A \in \mathbb{R}^{n \times n}$ and for all $t \in \mathbb{R}$, $\frac{d}{dt}(e^{tA}) = A e^{tA} = e^{tA} A$.
   Proof: Since the series (11.3) is uniformly convergent, it can be differentiated term-by-term, from which the result follows immediately. Alternatively, the formal definition
   $$\frac{d}{dt}(e^{tA}) = \lim_{\Delta t \to 0} \frac{e^{(t+\Delta t)A} - e^{tA}}{\Delta t}$$
   can be employed as follows. For any consistent matrix norm,
   $$\left\| \frac{e^{(t+\Delta t)A} - e^{tA}}{\Delta t} - A e^{tA} \right\| = \left\| \frac{1}{\Delta t}\left(e^{\Delta t A} - I\right)e^{tA} - A e^{tA} \right\|$$
   $$= \left\| \frac{1}{\Delta t}\left(\Delta t\, A + \frac{(\Delta t)^2}{2!} A^2 + \cdots \right) e^{tA} - A e^{tA} \right\|$$
   $$= \left\| \left( \frac{\Delta t}{2!} A^2 + \frac{(\Delta t)^2}{3!} A^3 + \cdots \right) e^{tA} \right\|$$
   $$\leq \Delta t\, \|A^2\| \, \|e^{tA}\| \left( \frac{1}{2!} + \frac{\Delta t \|A\|}{3!} + \frac{(\Delta t)^2 \|A\|^2}{4!} + \cdots \right)$$
   $$\leq \Delta t\, \|A^2\| \, \|e^{tA}\| \, e^{\Delta t \|A\|}.$$
For fixed $t$, the right-hand side above clearly goes to 0 as $\Delta t$ goes to 0. Thus, the limit exists and equals $A e^{tA}$. A similar proof yields the limit $e^{tA} A$, or one can use the fact that $A$ commutes with any polynomial of $A$ of finite degree and hence with $e^{tA}$.

11.1.2 Homogeneous linear differential equations

Theorem 11.2. Let $A \in \mathbb{R}^{n \times n}$. The solution of the linear homogeneous initial-value problem
$$\dot{x}(t) = Ax(t); \quad x(t_0) = x_0 \in \mathbb{R}^n \qquad (11.4)$$
for $t \geq t_0$ is given by
$$x(t) = e^{(t - t_0)A} x_0. \qquad (11.5)$$

Proof: Differentiate (11.5) and use property 7 of the matrix exponential to get $\dot{x}(t) = A e^{(t-t_0)A} x_0 = Ax(t)$. Also, $x(t_0) = e^{(t_0 - t_0)A} x_0 = x_0$ so, by the fundamental existence and uniqueness theorem for ordinary differential equations, (11.5) is the solution of (11.4). □

11.1.3 Inhomogeneous linear differential equations

Theorem 11.3. Let $A \in \mathbb{R}^{n \times n}$, $B \in \mathbb{R}^{n \times m}$ and let the vector-valued function $u$ be given and, say, continuous. Then the solution of the linear inhomogeneous initial-value problem
$$\dot{x}(t) = Ax(t) + Bu(t); \quad x(t_0) = x_0 \in \mathbb{R}^n \qquad (11.6)$$
for $t \geq t_0$ is given by the variation of parameters formula
$$x(t) = e^{(t - t_0)A} x_0 + \int_{t_0}^t e^{(t-s)A} B u(s)\, ds. \qquad (11.7)$$

Proof: Differentiate (11.7) and again use property 7 of the matrix exponential. The general formula
$$\frac{d}{dt} \int_{p(t)}^{q(t)} f(x, t)\, dx = \int_{p(t)}^{q(t)} \frac{\partial f(x,t)}{\partial t}\, dx + f(q(t), t)\,\frac{dq(t)}{dt} - f(p(t), t)\,\frac{dp(t)}{dt}$$
is used to get $\dot{x}(t) = A e^{(t-t_0)A} x_0 + \int_{t_0}^t A e^{(t-s)A} B u(s)\, ds + Bu(t) = Ax(t) + Bu(t)$. Also, $x(t_0) = e^{(t_0 - t_0)A} x_0 + 0 = x_0$ so, by the fundamental existence and uniqueness theorem for ordinary differential equations, (11.7) is the solution of (11.6). □
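For a constant input $u(t) \equiv u_0$ and nonsingular $A$, the integral in (11.7) has a closed form, which the following sketch (example data ours; SciPy) checks against direct numerical integration:

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
x0 = np.array([1.0, 0.0]); u0 = np.array([1.0]); t = 1.5
# With t0 = 0 and u(t) = u0, the integral in (11.7) equals A^{-1}(e^{tA} - I) B u0.
x_closed = expm(t * A) @ x0 + np.linalg.solve(A, (expm(t * A) - np.eye(2)) @ B @ u0)
sol = solve_ivp(lambda s, x: A @ x + B @ u0, (0.0, t), x0, rtol=1e-10, atol=1e-12)
assert np.allclose(x_closed, sol.y[:, -1], atol=1e-6)
```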
Remark 11.4. The proof above simply verifies the variation of parameters formula by direct differentiation. The formula can be derived by means of an integrating factor "trick" as follows. Premultiply the equation $\dot{x} - Ax = Bu$ by $e^{-tA}$ to get
$$\frac{d}{dt}\left(e^{-tA} x\right) = e^{-tA} Bu. \qquad (11.8)$$
Now integrate (11.8) over the interval $[t_0, t]$:
$$\int_{t_0}^{t}\frac{d}{ds}\left(e^{-sA}x(s)\right)ds = \int_{t_0}^{t} e^{-sA}Bu(s)\,ds.$$
Thus,
$$e^{-tA}x(t) - e^{-t_0A}x(t_0) = \int_{t_0}^{t} e^{-sA}Bu(s)\,ds$$
and hence
$$x(t) = e^{(t-t_0)A}x_0 + \int_{t_0}^{t} e^{(t-s)A}Bu(s)\,ds.$$
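The variation of parameters formula is easy to exercise numerically. The following is a minimal sketch (assuming NumPy and SciPy are available; the matrices and input are arbitrary illustrations, not data from the text) that evaluates (11.7) with scipy.linalg.expm and a simple trapezoidal quadrature, then cross-checks it against a crude direct integration of the differential equation.

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # arbitrary test matrix
B = np.array([[0.0], [1.0]])
x0 = np.array([1.0, 0.0])
u = lambda t: np.array([np.sin(t)])        # a given continuous input

t0, t, N = 0.0, 2.0, 2000
s, h = np.linspace(t0, t, N + 1, retstep=True)
# Integrand e^{(t-s)A} B u(s) on the grid, then a trapezoidal rule.
vals = np.array([expm((t - si) * A) @ (B @ u(si)) for si in s])
integral = h * (vals[0] / 2 + vals[1:-1].sum(axis=0) + vals[-1] / 2)
x_t = expm((t - t0) * A) @ x0 + integral   # formula (11.7)

# Cross-check: crude forward-Euler integration of xdot = Ax + Bu.
x, h2 = x0.copy(), 1e-4
for tk in np.arange(t0, t, h2):
    x = x + h2 * (A @ x + B @ u(tk))
print(x_t, x)   # the two should agree to a few decimal places
```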
11.1.4 Linear matrix differential equations

Matrix-valued initial-value problems also occur frequently. The first is an obvious generalization of Theorem 11.2, and the proof is essentially the same.

Theorem 11.5. Let $A \in \mathbb{R}^{n\times n}$. The solution of the matrix linear homogeneous initial-value problem
$$\dot{X}(t) = AX(t); \quad X(t_0) = C \in \mathbb{R}^{n\times n} \tag{11.9}$$
for $t \geq t_0$ is given by
$$X(t) = e^{(t-t_0)A}C. \tag{11.10}$$

In the matrix case, we can have coefficient matrices on both the right and left. For convenience, the following theorem is stated with initial time $t_0 = 0$.

Theorem 11.6. Let $A \in \mathbb{R}^{n\times n}$, $B \in \mathbb{R}^{m\times m}$, and $C \in \mathbb{R}^{n\times m}$. Then the matrix initial-value problem
$$\dot{X}(t) = AX(t) + X(t)B; \quad X(0) = C \tag{11.11}$$
has the solution $X(t) = e^{tA}Ce^{tB}$.

Proof: Differentiate $e^{tA}Ce^{tB}$ with respect to $t$ and use property 7 of the matrix exponential. The fact that $X(t)$ satisfies the initial condition is trivial. $\square$

Corollary 11.7. Let $A, C \in \mathbb{R}^{n\times n}$. Then the matrix initial-value problem
$$\dot{X}(t) = AX(t) + X(t)A^T; \quad X(0) = C \tag{11.12}$$
has the solution $X(t) = e^{tA}Ce^{tA^T}$.

When $C$ is symmetric in (11.12), $X(t)$ is symmetric and (11.12) is known as a Lyapunov differential equation. The initial-value problem (11.11) is known as a Sylvester differential equation.
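A quick sanity check of Theorem 11.6 (a sketch only, assuming NumPy/SciPy; the random matrices are merely illustrative) is to differentiate $X(t) = e^{tA}Ce^{tB}$ numerically and compare with $AX(t) + X(t)B$:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((2, 2))
C = rng.standard_normal((3, 2))

def X(t):
    return expm(t * A) @ C @ expm(t * B)   # claimed solution of (11.11)

t, h = 0.7, 1e-6
Xdot = (X(t + h) - X(t - h)) / (2 * h)     # central-difference derivative
print(np.allclose(Xdot, A @ X(t) + X(t) @ B, atol=1e-4))  # True
print(np.allclose(X(0.0), C))                             # True
```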
11.1.5 Modal decompositions

Let $A \in \mathbb{R}^{n\times n}$ and suppose, for convenience, that it is diagonalizable (if $A$ is not diagonalizable, the rest of this subsection is easily generalized by using the JCF and the decomposition $A = \sum_{i=1}^{m} X_iJ_iY_i^H$ as discussed in Chapter 9). Then the solution $x(t)$ of (11.4) can be written
$$x(t) = e^{(t-t_0)A}x_0 = \left(\sum_{i=1}^{n} e^{\lambda_i(t-t_0)}x_iy_i^H\right)x_0 = \sum_{i=1}^{n}\left(y_i^Hx_0\,e^{\lambda_i(t-t_0)}\right)x_i.$$
The $\lambda_i$s are called the modal velocities and the right eigenvectors $x_i$ are called the modal directions. The decomposition above expresses the solution $x(t)$ as a weighted sum of its modal velocities and directions.

This modal decomposition can be expressed in a different-looking but identical form if we write the initial condition $x_0$ as a weighted sum of the right eigenvectors $x_0 = \sum_{i=1}^{n}\alpha_ix_i$. Then
$$x(t) = \sum_{i=1}^{n}\left(\alpha_ie^{\lambda_i(t-t_0)}\right)x_i.$$
In the last equality we have used the fact that $y_i^Hx_j = \delta_{ij}$.

Similarly, in the inhomogeneous case we can write
$$\int_{t_0}^{t} e^{(t-s)A}Bu(s)\,ds = \sum_{i=1}^{n}\left(\int_{t_0}^{t} e^{\lambda_i(t-s)}y_i^HBu(s)\,ds\right)x_i.$$
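A brief sketch of the homogeneous modal decomposition (assuming NumPy/SciPy; the matrix is an arbitrary diagonalizable example):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[-1.0, 2.0], [0.0, -3.0]])   # arbitrary diagonalizable example
x0 = np.array([1.0, 1.0])
lam, X = np.linalg.eig(A)                  # columns of X: right eigenvectors x_i
Y = np.linalg.inv(X).conj().T              # columns of Y: left eigenvectors y_i,
                                           # scaled so that y_i^H x_j = delta_ij

def x(t, t0=0.0):
    alpha = Y.conj().T @ x0                # alpha_i = y_i^H x_0
    return (X * np.exp(lam * (t - t0))) @ alpha

print(np.allclose(x(1.3), expm(1.3 * A) @ x0))  # True
```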
11.1.6 Computation of the matrix exponential

JCF method

Let $A \in \mathbb{R}^{n\times n}$ and suppose $X \in \mathbb{R}^{n\times n}$ is nonsingular and such that $X^{-1}AX = J$, where $J$ is a JCF for $A$. Then
$$e^{tA} = e^{tXJX^{-1}} = Xe^{tJ}X^{-1} = \begin{cases}\displaystyle\sum_{i=1}^{n} e^{\lambda_it}x_iy_i^H & \text{if $A$ is diagonalizable,}\\[6pt]\displaystyle\sum_{i=1}^{m} X_ie^{tJ_i}Y_i^H & \text{in general.}\end{cases}$$
If $A$ is diagonalizable, it is then easy to compute $e^{tA}$ via the formula $e^{tA} = Xe^{tJ}X^{-1}$ since $e^{tJ}$ is simply a diagonal matrix.

In the more general case, the problem clearly reduces simply to the computation of the exponential of a Jordan block. To be specific, let $J_i \in \mathbb{C}^{k\times k}$ be a Jordan block of the form
$$J_i = \begin{bmatrix}\lambda & 1 & 0 & \cdots & 0\\ 0 & \lambda & 1 & \ddots & \vdots\\ \vdots & & \ddots & \ddots & 0\\ & & & \lambda & 1\\ 0 & \cdots & & 0 & \lambda\end{bmatrix} = \lambda I + N.$$
Clearly $\lambda I$ and $N$ commute. Thus, $e^{tJ_i} = e^{t\lambda I}e^{tN}$ by property 4 of the matrix exponential. The diagonal part is easy: $e^{t\lambda I} = \operatorname{diag}(e^{\lambda t}, \ldots, e^{\lambda t})$. But $e^{tN}$ is almost as easy since $N$ is nilpotent of degree $k$.

Definition 11.8. A matrix $M \in \mathbb{R}^{n\times n}$ is nilpotent of degree (or index, or grade) $p$ if $M^p = 0$, while $M^{p-1} \neq 0$.

For the matrix $N$ defined above, it is easy to check that while $N$ has 1's along only its first superdiagonal (and 0's elsewhere), $N^2$ has 1's along only its second superdiagonal, and so forth. Finally, $N^{k-1}$ has a 1 in its $(1,k)$ element and has 0's everywhere else, and $N^k = 0$. Thus, the series expansion of $e^{tN}$ is finite, i.e.,
$$e^{tN} = I + tN + \frac{t^2}{2!}N^2 + \cdots + \frac{t^{k-1}}{(k-1)!}N^{k-1} = \begin{bmatrix}1 & t & \frac{t^2}{2!} & \cdots & \frac{t^{k-1}}{(k-1)!}\\ 0 & 1 & t & \ddots & \vdots\\ \vdots & & \ddots & \ddots & \frac{t^2}{2!}\\ & & & 1 & t\\ 0 & \cdots & & 0 & 1\end{bmatrix}.$$
Thus,
$$e^{tJ_i} = \begin{bmatrix}e^{\lambda t} & te^{\lambda t} & \frac{t^2}{2!}e^{\lambda t} & \cdots & \frac{t^{k-1}}{(k-1)!}e^{\lambda t}\\ 0 & e^{\lambda t} & te^{\lambda t} & \ddots & \vdots\\ \vdots & & \ddots & \ddots & \frac{t^2}{2!}e^{\lambda t}\\ & & & e^{\lambda t} & te^{\lambda t}\\ 0 & \cdots & & 0 & e^{\lambda t}\end{bmatrix}.$$
In the case when $\lambda$ is complex, a real version of the above can be worked out.
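Because the series for $e^{tN}$ terminates, the exponential of a Jordan block can be assembled exactly from finitely many terms. A small sketch (assuming NumPy/SciPy):

```python
import numpy as np
from math import factorial
from scipy.linalg import expm

def expm_jordan_block(lam, k, t):
    """e^{tJ} for the k x k Jordan block J = lam*I + N."""
    N = np.diag(np.ones(k - 1), 1)          # 1's on the first superdiagonal
    E = sum((t ** j / factorial(j)) * np.linalg.matrix_power(N, j)
            for j in range(k))              # e^{tN}: series stops at N^{k-1}
    return np.exp(lam * t) * E              # e^{t*lam*I} e^{tN}

J = -2.0 * np.eye(3) + np.diag(np.ones(2), 1)
print(np.allclose(expm_jordan_block(-2.0, 3, 0.5), expm(0.5 * J)))  # True
```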
Example 11.9. Let $A = \begin{bmatrix}-4 & 4\\ -1 & 0\end{bmatrix}$. Then $\Lambda(A) = \{-2, -2\}$ and
$$e^{tA} = Xe^{tJ}X^{-1} = \begin{bmatrix}2 & 1\\ 1 & 1\end{bmatrix}\exp\left(t\begin{bmatrix}-2 & 1\\ 0 & -2\end{bmatrix}\right)\begin{bmatrix}1 & -1\\ -1 & 2\end{bmatrix} = \begin{bmatrix}2 & 1\\ 1 & 1\end{bmatrix}\begin{bmatrix}e^{-2t} & te^{-2t}\\ 0 & e^{-2t}\end{bmatrix}\begin{bmatrix}1 & -1\\ -1 & 2\end{bmatrix} = \begin{bmatrix}(1-2t)e^{-2t} & 4te^{-2t}\\ -te^{-2t} & (1+2t)e^{-2t}\end{bmatrix}.$$

Interpolation method

This method is numerically unstable in finite-precision arithmetic but is quite effective for hand calculation in small-order problems. The method is stated and illustrated for the exponential function but applies equally well to other functions.

Given $A \in \mathbb{R}^{n\times n}$ and $f(\lambda) = e^{t\lambda}$, compute $f(A) = e^{tA}$, where $t$ is a fixed scalar. Suppose the characteristic polynomial of $A$ can be written as $\pi(\lambda) = \prod_{i=1}^{m}(\lambda - \lambda_i)^{n_i}$, where the $\lambda_i$s are distinct. Define
$$g(\lambda) = \alpha_0 + \alpha_1\lambda + \cdots + \alpha_{n-1}\lambda^{n-1},$$
where $\alpha_0, \ldots, \alpha_{n-1}$ are $n$ constants that are to be determined. They are, in fact, the unique solution of the $n$ equations
$$g^{(k)}(\lambda_i) = f^{(k)}(\lambda_i); \quad k = 0, 1, \ldots, n_i - 1,\ i \in \underline{m}.$$
Here, the superscript $(k)$ denotes the $k$th derivative with respect to $\lambda$. With the $\alpha_i$s then known, the function $g$ is known and $f(A) = g(A)$. The motivation for this method is the Cayley-Hamilton Theorem, Theorem 9.3, which says that all powers of $A$ greater than $n - 1$ can be expressed as linear combinations of $A^k$ for $k = 0, 1, \ldots, n - 1$. Thus, all the terms of order greater than $n - 1$ in the power series for $e^{tA}$ can be written in terms of these lower-order powers as well. The polynomial $g$ gives the appropriate linear combination.
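A small sketch of this method (assuming NumPy/SciPy and, for simplicity, distinct eigenvalues so that all $n_i = 1$; the general case would interpolate the derivative conditions as well): solve a Vandermonde system for the $\alpha_i$s and assemble $g(A)$.

```python
import numpy as np
from scipy.linalg import expm

def f_of_A(A, f):
    """f(A) via polynomial interpolation at (assumed distinct) eigenvalues."""
    lam = np.linalg.eigvals(A)
    V = np.vander(lam, increasing=True)     # row i: [1, lam_i, lam_i^2, ...]
    alpha = np.linalg.solve(V, f(lam))      # g(lam_i) = f(lam_i)
    g = np.zeros_like(A, dtype=complex)
    P = np.eye(A.shape[0], dtype=complex)   # running power A^k
    for a in alpha:                         # g(A) = sum_k alpha_k A^k
        g += a * P
        P = P @ A
    return g

t = 0.7
A = np.array([[1.0, 2.0], [3.0, 4.0]])      # distinct eigenvalues
print(np.allclose(f_of_A(A, lambda lam: np.exp(t * lam)), expm(t * A)))  # True
```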
Example 11.10. Let
$$A = \begin{bmatrix}-1 & 1 & 0\\ 0 & -1 & 1\\ 0 & 0 & -1\end{bmatrix}$$
and $f(\lambda) = e^{t\lambda}$. Then $\pi(\lambda) = (\lambda + 1)^3$, so $m = 1$ and $n_1 = 3$.

Let $g(\lambda) = \alpha_0 + \alpha_1\lambda + \alpha_2\lambda^2$. Then the three equations for the $\alpha_i$s are given by
$$g(-1) = f(-1) \implies \alpha_0 - \alpha_1 + \alpha_2 = e^{-t},$$
$$g'(-1) = f'(-1) \implies \alpha_1 - 2\alpha_2 = te^{-t},$$
$$g''(-1) = f''(-1) \implies 2\alpha_2 = t^2e^{-t}.$$
Solving for the $\alpha_i$s, we find
$$\alpha_0 = e^{-t} + te^{-t} + \tfrac{1}{2}t^2e^{-t}, \quad \alpha_1 = te^{-t} + t^2e^{-t}, \quad \alpha_2 = \tfrac{1}{2}t^2e^{-t}.$$
Thus,
$$f(A) = e^{tA} = g(A) = \alpha_0I + \alpha_1A + \alpha_2A^2 = e^{-t}\begin{bmatrix}1 & t & \frac{t^2}{2}\\ 0 & 1 & t\\ 0 & 0 & 1\end{bmatrix}.$$

Example 11.11. Let $A = \begin{bmatrix}-4 & 4\\ -1 & 0\end{bmatrix}$ and $f(\lambda) = e^{t\lambda}$. Then $\pi(\lambda) = (\lambda + 2)^2$, so $m = 1$ and $n_1 = 2$.

Let $g(\lambda) = \alpha_0 + \alpha_1\lambda$. Then the defining equations for the $\alpha_i$s are given by
$$g(-2) = f(-2) \implies \alpha_0 - 2\alpha_1 = e^{-2t},$$
$$g'(-2) = f'(-2) \implies \alpha_1 = te^{-2t}.$$
Solving for the $\alpha_i$s, we find
$$\alpha_0 = e^{-2t} + 2te^{-2t}, \quad \alpha_1 = te^{-2t}.$$
Thus,
$$f(A) = e^{tA} = g(A) = \alpha_0I + \alpha_1A = (e^{-2t} + 2te^{-2t})\begin{bmatrix}1 & 0\\ 0 & 1\end{bmatrix} + te^{-2t}\begin{bmatrix}-4 & 4\\ -1 & 0\end{bmatrix} = \begin{bmatrix}e^{-2t} - 2te^{-2t} & 4te^{-2t}\\ -te^{-2t} & e^{-2t} + 2te^{-2t}\end{bmatrix}.$$

Other methods

1. Use $e^{tA} = \mathcal{L}^{-1}\{(sI - A)^{-1}\}$ and techniques for inverse Laplace transforms. This is quite effective for small-order problems, but general nonsymbolic computational techniques are numerically unstable since the problem is theoretically equivalent to knowing precisely a JCF.

2. Use Padé approximation. There is an extensive literature on approximating certain nonlinear functions by rational functions. The matrix analogue yields $e^A \approx$
$D^{-1}(A)N(A)$, where $D(A) = \delta_0I + \delta_1A + \cdots + \delta_pA^p$ and $N(A) = \nu_0I + \nu_1A + \cdots + \nu_qA^q$. Explicit formulas are known for the coefficients of the numerator and denominator polynomials of various orders. Unfortunately, a Padé approximation for the exponential is accurate only in a neighborhood of the origin; in the matrix case this means when $\|A\|$ is sufficiently small. This can be arranged by scaling $A$, say, by multiplying it by $1/2^k$ for sufficiently large $k$ and using the fact that $e^A = \left(e^{(1/2^k)A}\right)^{2^k}$. Numerical loss of accuracy can occur in this procedure from the successive squarings.

3. Reduce $A$ to (real) Schur form $S$ via the unitary similarity $U$ and use $e^A = Ue^SU^H$ together with successive recursions up the superdiagonals of the (quasi) upper triangular matrix $e^S$.

4. Many methods are outlined in, for example, [19]. Reliable and efficient computation of matrix functions such as $e^A$ and $\log(A)$ remains a fertile area for research.
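The scaling-and-squaring idea in method 2 is easy to sketch (assuming NumPy/SciPy; a short truncated Taylor series stands in here for a true Padé approximant, so this illustrates the squaring mechanism rather than a production algorithm):

```python
import numpy as np
from scipy.linalg import expm

def expm_scale_square(A, terms=12):
    # Choose k so that ||A/2^k|| <= 1/2, where the series converges fast.
    k = max(0, int(np.ceil(np.log2(max(np.linalg.norm(A, 1), 1e-16)))) + 1)
    As = A / 2 ** k
    E, T = np.eye(A.shape[0]), np.eye(A.shape[0])
    for j in range(1, terms + 1):          # E ~ e^{As} (truncated series)
        T = T @ As / j
        E = E + T
    for _ in range(k):                     # e^A = (e^{A/2^k})^{2^k}
        E = E @ E
    return E

A = np.random.default_rng(1).standard_normal((4, 4))
print(np.allclose(expm_scale_square(A), expm(A), atol=1e-8))  # True
```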
11.2 Difference Equations

In this section we outline solutions of discrete-time analogues of the linear differential equations of the previous section. Linear discrete-time systems, modeled by systems of difference equations, exhibit many parallels to the continuous-time differential equation case, and this observation is exploited frequently.

11.2.1 Homogeneous linear difference equations

Theorem 11.12. Let $A \in \mathbb{R}^{n\times n}$. The solution of the linear homogeneous system of difference equations
$$x_{k+1} = Ax_k; \quad x_0 \in \mathbb{R}^n \tag{11.13}$$
for $k \geq 0$ is given by
$$x_k = A^kx_0. \tag{11.14}$$

Proof: The proof is almost immediate upon substitution of (11.14) into (11.13). $\square$

Remark 11.13. Again, we restrict our attention only to the so-called time-invariant case, where the matrix $A$ in (11.13) is constant and does not depend on $k$. We could also consider an arbitrary "initial time" $k_0$, but since the system is time-invariant, and since we want to keep the formulas "clean" (i.e., no double subscripts), we have chosen $k_0 = 0$ for convenience.

11.2.2 Inhomogeneous linear difference equations

Theorem 11.14. Let $A \in \mathbb{R}^{n\times n}$, $B \in \mathbb{R}^{n\times m}$ and suppose $\{u_k\}_{0}^{+\infty}$ is a given sequence of $m$-vectors. Then the solution of the inhomogeneous initial-value problem
$$x_{k+1} = Ax_k + Bu_k; \quad x_0 \in \mathbb{R}^n \tag{11.15}$$
is given by
$$x_k = A^kx_0 + \sum_{j=0}^{k-1} A^{k-j-1}Bu_j, \quad k \geq 0. \tag{11.16}$$

Proof: The proof is again almost immediate upon substitution of (11.16) into (11.15). $\square$

11.2.3 Computation of matrix powers

It is clear that solution of linear systems of difference equations involves computation of $A^k$. One solution method, which is numerically unstable but sometimes useful for hand calculation, is to use z-transforms, by analogy with the use of Laplace transforms to compute a matrix exponential. One definition of the z-transform of a sequence $\{g_k\}$ is
$$\mathcal{Z}\left(\{g_k\}_{0}^{+\infty}\right) = \sum_{k=0}^{+\infty} g_kz^{-k}.$$
Assuming $|z| > \max_{\lambda\in\Lambda(A)}|\lambda|$, the z-transform of the sequence $\{A^k\}$ is then given by
$$\mathcal{Z}\left(\{A^k\}\right) = \sum_{k=0}^{+\infty} z^{-k}A^k = I + \frac{1}{z}A + \frac{1}{z^2}A^2 + \cdots = (I - z^{-1}A)^{-1} = z(zI - A)^{-1}.$$

Methods based on the JCF are sometimes useful, again mostly for small-order problems. Assume that $A \in \mathbb{R}^{n\times n}$ and let $X \in \mathbb{R}^{n\times n}$ be nonsingular and such that $X^{-1}AX = J$, where $J$ is a JCF for $A$. Then
$$A^k = (XJX^{-1})^k = XJ^kX^{-1} = \begin{cases}\displaystyle\sum_{i=1}^{n}\lambda_i^kx_iy_i^H & \text{if $A$ is diagonalizable,}\\[6pt]\displaystyle\sum_{i=1}^{m}X_iJ_i^kY_i^H & \text{in general.}\end{cases}$$
If $A$ is diagonalizable, it is then easy to compute $A^k$ via the formula $A^k = XJ^kX^{-1}$ since $J^k$ is simply a diagonal matrix.
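A sketch (assuming NumPy; the data are arbitrary) confirming that the closed form (11.16) matches direct recursion of (11.15):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3)) / 3
B = rng.standard_normal((3, 2))
x0 = rng.standard_normal(3)
u = [rng.standard_normal(2) for _ in range(10)]

# Direct recursion of x_{k+1} = A x_k + B u_k ...
x = x0.copy()
for k in range(10):
    x = A @ x + B @ u[k]

# ... versus the closed-form solution (11.16) with k = 10.
k = 10
xk = np.linalg.matrix_power(A, k) @ x0 + sum(
    np.linalg.matrix_power(A, k - j - 1) @ (B @ u[j]) for j in range(k))
print(np.allclose(x, xk))  # True
```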
In the general case, the problem again reduces to the computation of the power of a Jordan block. To be specific, let $J_i \in \mathbb{C}^{p\times p}$ be a Jordan block of the form
$$J_i = \begin{bmatrix}\lambda & 1 & 0 & \cdots & 0\\ 0 & \lambda & 1 & \ddots & \vdots\\ \vdots & & \ddots & \ddots & 0\\ & & & \lambda & 1\\ 0 & \cdots & & 0 & \lambda\end{bmatrix}.$$
Writing $J_i = \lambda I + N$ and noting that $\lambda I$ and the nilpotent matrix $N$ commute, it is then straightforward to apply the binomial theorem to $(\lambda I + N)^k$ and verify that
$$J_i^k = \begin{bmatrix}\lambda^k & k\lambda^{k-1} & \binom{k}{2}\lambda^{k-2} & \cdots & \binom{k}{p-1}\lambda^{k-p+1}\\ 0 & \lambda^k & k\lambda^{k-1} & \ddots & \vdots\\ \vdots & & \ddots & \ddots & \binom{k}{2}\lambda^{k-2}\\ & & & \lambda^k & k\lambda^{k-1}\\ 0 & \cdots & & 0 & \lambda^k\end{bmatrix}.$$
The symbol $\binom{k}{q}$ has the usual definition $\frac{k!}{q!(k-q)!}$ and is to be interpreted as 0 if $k < q$.

In the case when $\lambda$ is complex, a real version of the above can be worked out.

Example 11.15. Let $A = \begin{bmatrix}-4 & 4\\ -1 & 0\end{bmatrix}$. Then
$$A^k = XJ^kX^{-1} = \begin{bmatrix}2 & 1\\ 1 & 1\end{bmatrix}\begin{bmatrix}(-2)^k & k(-2)^{k-1}\\ 0 & (-2)^k\end{bmatrix}\begin{bmatrix}1 & -1\\ -1 & 2\end{bmatrix} = \begin{bmatrix}(-2)^{k-1}(-2-2k) & k(-2)^{k+1}\\ -k(-2)^{k-1} & (-2)^{k-1}(2k-2)\end{bmatrix}.$$
Basic analogues of other methods such as those mentioned in Section 11.1.6 can also be derived for the computation of matrix powers, but again no universally "best" method exists. For an erudite discussion of the state of the art, see [11, Ch. 18].

11.3 Higher-Order Equations

It is well known that a higher-order (scalar) linear differential equation can be converted to a first-order linear system. Consider, for example, the initial-value problem
$$y^{(n)}(t) + a_{n-1}y^{(n-1)}(t) + \cdots + a_1\dot{y}(t) + a_0y(t) = \phi(t) \tag{11.17}$$
with $\phi(t)$ a given function and $n$ initial conditions
$$y(0) = c_0,\ \dot{y}(0) = c_1,\ \ldots,\ y^{(n-1)}(0) = c_{n-1}. \tag{11.18}$$
Here, $y^{(m)}$ denotes the $m$th derivative of $y$ with respect to $t$. Define a vector $x(t) \in \mathbb{R}^n$ with components $x_1(t) = y(t)$, $x_2(t) = \dot{y}(t)$, ..., $x_n(t) = y^{(n-1)}(t)$. Then
$$\dot{x}_1(t) = x_2(t) = \dot{y}(t),$$
$$\dot{x}_2(t) = x_3(t) = \ddot{y}(t),$$
$$\vdots$$
$$\dot{x}_{n-1}(t) = x_n(t) = y^{(n-1)}(t),$$
$$\dot{x}_n(t) = y^{(n)}(t) = -a_0y(t) - a_1\dot{y}(t) - \cdots - a_{n-1}y^{(n-1)}(t) + \phi(t) = -a_0x_1(t) - a_1x_2(t) - \cdots - a_{n-1}x_n(t) + \phi(t).$$
These equations can then be rewritten as the first-order linear system
$$\dot{x}(t) = \begin{bmatrix}0 & 1 & 0 & \cdots & 0\\ 0 & 0 & 1 & \ddots & \vdots\\ \vdots & & \ddots & \ddots & 0\\ 0 & \cdots & 0 & 0 & 1\\ -a_0 & -a_1 & \cdots & & -a_{n-1}\end{bmatrix}x(t) + \begin{bmatrix}0\\ \vdots\\ 0\\ 1\end{bmatrix}\phi(t). \tag{11.19}$$
The initial conditions take the form $x(0) = c = [c_0, c_1, \ldots, c_{n-1}]^T$.

Note that $\det(\lambda I - A) = \lambda^n + a_{n-1}\lambda^{n-1} + \cdots + a_1\lambda + a_0$. However, the companion matrix $A$ in (11.19) possesses many nasty numerical properties for even moderately sized $n$ and, as mentioned before, is often well worth avoiding, at least for computational purposes.

A similar procedure holds for the conversion of a higher-order difference equation, with $n$ initial conditions, into a linear first-order difference equation with (vector) initial condition.
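A sketch of the conversion (assuming NumPy/SciPy; the coefficients and forcing function below are arbitrary illustrations, not data from the text):

```python
import numpy as np
from scipy.integrate import solve_ivp

a = np.array([2.0, 3.0, 1.0])          # a0, a1, a2 (so n = 3)
phi = lambda t: np.cos(t)              # a given forcing function
n = len(a)
Acomp = np.zeros((n, n))
Acomp[:-1, 1:] = np.eye(n - 1)         # shift structure: x1' = x2, etc.
Acomp[-1, :] = -a                      # last row: -a0, -a1, ..., -a_{n-1}
b = np.zeros(n); b[-1] = 1.0

c = np.array([1.0, 0.0, 0.0])          # y(0), y'(0), y''(0)
sol = solve_ivp(lambda t, x: Acomp @ x + b * phi(t), (0.0, 5.0), c,
                rtol=1e-9, atol=1e-12)
print(sol.y[0, -1])                    # x1 = y carries the scalar solution

# The companion matrix has the expected characteristic polynomial:
print(np.poly(Acomp))                  # [1, a2, a1, a0]
```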
EXERCISES

1. Let $P \in \mathbb{R}^{n\times n}$ be a projection. Show that $e^P \approx I + 1.718P$.

2. Suppose $x, y \in \mathbb{R}^n$ and let $A = xy^T$. Further, let $\alpha = x^Ty$. Show that $e^{tA} = I + g(t, \alpha)xy^T$, where
$$g(t, \alpha) = \begin{cases}\frac{1}{\alpha}\left(e^{\alpha t} - 1\right) & \text{if } \alpha \neq 0,\\ t & \text{if } \alpha = 0.\end{cases}$$

3. Let
$$A = \begin{bmatrix}I & X\\ 0 & -I\end{bmatrix},$$
where $X \in \mathbb{R}^{m\times n}$ is arbitrary. Show that
$$e^A = \begin{bmatrix}eI & \sinh(1)\,X\\ 0 & e^{-1}I\end{bmatrix}.$$

4. Let $K$ denote the skew-symmetric matrix
$$\begin{bmatrix}0 & I_n\\ -I_n & 0\end{bmatrix},$$
where $I_n$ denotes the $n \times n$ identity matrix. A matrix $A \in \mathbb{R}^{2n\times 2n}$ is said to be Hamiltonian if $K^{-1}A^TK = -A$ and to be symplectic if $K^{-1}A^TK = A^{-1}$.

(a) Suppose $H$ is Hamiltonian and let $\lambda$ be an eigenvalue of $H$. Show that $-\lambda$ must also be an eigenvalue of $H$.

(b) Suppose $S$ is symplectic and let $\lambda$ be an eigenvalue of $S$. Show that $1/\lambda$ must also be an eigenvalue of $S$.

(c) Suppose that $H$ is Hamiltonian and $S$ is symplectic. Show that $S^{-1}HS$ must be Hamiltonian.

(d) Suppose $H$ is Hamiltonian. Show that $e^H$ must be symplectic.

5. Let $\alpha, \beta \in \mathbb{R}$ and
$$A = \begin{bmatrix}\alpha & \beta\\ -\beta & \alpha\end{bmatrix}.$$
Then show that
$$e^{tA} = \begin{bmatrix}e^{\alpha t}\cos\beta t & e^{\alpha t}\sin\beta t\\ -e^{\alpha t}\sin\beta t & e^{\alpha t}\cos\beta t\end{bmatrix}.$$

6. Find a general expression for

7. Find $e^{tA}$ when $A =$

8. Let

(a) Solve the differential equation $\dot{x} = Ax$ with the given initial condition $x(0)$.
(b) Solve the differential equation $\dot{x} = Ax + b$ with the given initial condition $x(0)$.

9. Consider the initial-value problem
$$\dot{x}(t) = Ax(t); \quad x(0) = x_0$$
for $t \geq 0$. Suppose that $A \in \mathbb{R}^{n\times n}$ is skew-symmetric and let $\alpha = \|x_0\|_2$. Show that $\|x(t)\|_2 = \alpha$ for all $t > 0$.

10. Consider the $n \times n$ matrix initial-value problem
$$\dot{X}(t) = AX(t) - X(t)A; \quad X(0) = C.$$
Show that the eigenvalues of the solution $X(t)$ of this problem are the same as those of $C$ for all $t$.

11. The year is 2004 and there are three large "free trade zones" in the world: Asia (A), Europe (E), and the Americas (R). Suppose certain multinational companies have total assets of \$40 trillion, of which \$20 trillion is in E and \$20 trillion is in R. Each year half of the Americas' money stays home, a quarter goes to Europe, and a quarter goes to Asia. For Europe and Asia, half stays home and half goes to the Americas.

(a) Find the matrix $M$ that gives
$$\begin{bmatrix}A\\ E\\ R\end{bmatrix}_{\text{year } k+1} = M\begin{bmatrix}A\\ E\\ R\end{bmatrix}_{\text{year } k}.$$

(b) Find the eigenvalues and right eigenvectors of $M$.

(c) Find the distribution of the companies' assets at year $k$.

(d) Find the limiting distribution of the \$40 trillion as the universe ends, i.e., as $k \to +\infty$ (i.e., around the time the Cubs win a World Series).

(Exercise adapted from Problem 5.3.11 in [24].)

12. (a) Find the solution of the initial-value problem
$$\ddot{y}(t) + 2\dot{y}(t) + y(t) = 0; \quad y(0) = 1,\ \dot{y}(0) = 0.$$

(b) Consider the difference equation
$$z_{k+2} + 2z_{k+1} + z_k = 0.$$
If $z_0 = 1$ and $z_1 = 2$, what is the value of $z_{1000}$? What is the value of $z_k$ in general?
Chapter 12

Generalized Eigenvalue Problems

12.1 The Generalized Eigenvalue/Eigenvector Problem

In this chapter we consider the generalized eigenvalue problem
$$Ax = \lambda Bx,$$
where $A, B \in \mathbb{C}^{n\times n}$. The standard eigenvalue problem considered in Chapter 9 obviously corresponds to the special case that $B = I$.

Definition 12.1. A nonzero vector $x \in \mathbb{C}^n$ is a right generalized eigenvector of the pair $(A, B)$ with $A, B \in \mathbb{C}^{n\times n}$ if there exists a scalar $\lambda \in \mathbb{C}$, called a generalized eigenvalue, such that
$$Ax = \lambda Bx. \tag{12.1}$$
Similarly, a nonzero vector $y \in \mathbb{C}^n$ is a left generalized eigenvector corresponding to an eigenvalue $\lambda$ if
$$y^HA = \lambda y^HB. \tag{12.2}$$

When the context is such that no confusion can arise, the adjective "generalized" is usually dropped. As with the standard eigenvalue problem, if $x$ [$y$] is a right [left] eigenvector, then so is $\alpha x$ [$\alpha y$] for any nonzero scalar $\alpha \in \mathbb{C}$.

Definition 12.2. The matrix $A - \lambda B$ is called a matrix pencil (or pencil of the matrices $A$ and $B$).

As with the standard eigenvalue problem, eigenvalues for the generalized eigenvalue problem occur where the matrix pencil $A - \lambda B$ is singular.

Definition 12.3. The polynomial $\pi(\lambda) = \det(A - \lambda B)$ is called the characteristic polynomial of the matrix pair $(A, B)$. The roots of $\pi(\lambda)$ are the eigenvalues of the associated generalized eigenvalue problem.

Remark 12.4. When $A, B \in \mathbb{R}^{n\times n}$, the characteristic polynomial is obviously real, and hence nonreal eigenvalues must occur in complex conjugate pairs.
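In practice, generalized eigenvalues are computed with library routines rather than from the characteristic polynomial. A minimal sketch (assuming SciPy; the matrices are arbitrary examples):

```python
import numpy as np
from scipy.linalg import eig

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[2.0, 0.0], [0.0, 1.0]])
lam, X = eig(A, B)            # solves A x = lambda B x
for l, x in zip(lam, X.T):    # residual check for each pair
    print(l, np.allclose(A @ x, l * (B @ x)))  # True
```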
Remark 12.5. If $B = I$ (or in general when $B$ is nonsingular), then $\pi(\lambda)$ is a polynomial of degree $n$, and hence there are $n$ eigenvalues associated with the pencil $A - \lambda B$. However, when $B \neq I$, in particular, when $B$ is singular, there may be $0$, $k \in \underline{n}$, or infinitely many eigenvalues associated with the pencil $A - \lambda B$. For example, suppose
$$A = \begin{bmatrix}1 & 0\\ 0 & \alpha\end{bmatrix}, \quad B = \begin{bmatrix}1 & 0\\ 0 & \beta\end{bmatrix}, \tag{12.3}$$
where $\alpha$ and $\beta$ are scalars. Then the characteristic polynomial is
$$\det(A - \lambda B) = (1 - \lambda)(\alpha - \beta\lambda)$$
and there are several cases to consider.

Case 1: $\alpha \neq 0$, $\beta \neq 0$. There are two eigenvalues, $1$ and $\frac{\alpha}{\beta}$.

Case 2: $\alpha = 0$, $\beta \neq 0$. There are two eigenvalues, $1$ and $0$.

Case 3: $\alpha \neq 0$, $\beta = 0$. There is only one eigenvalue, $1$ (of multiplicity 1).

Case 4: $\alpha = 0$, $\beta = 0$. All $\lambda \in \mathbb{C}$ are eigenvalues since $\det(A - \lambda B) \equiv 0$.

Definition 12.6. If $\det(A - \lambda B)$ is not identically zero, the pencil $A - \lambda B$ is said to be regular; otherwise, it is said to be singular.

Note that if $\mathcal{N}(A) \cap \mathcal{N}(B) \neq 0$, the associated matrix pencil is singular (as in Case 4 above).

Associated with any matrix pencil $A - \lambda B$ is a reciprocal pencil $B - \mu A$ and corresponding generalized eigenvalue problem. Clearly the reciprocal pencil has eigenvalues $\mu = \frac{1}{\lambda}$. It is instructive to consider the reciprocal pencil associated with the example in Remark 12.5. With $A$ and $B$ as in (12.3), the characteristic polynomial is
$$\det(B - \mu A) = (1 - \mu)(\beta - \alpha\mu)$$
and there are again four cases to consider.

Case 1: $\alpha \neq 0$, $\beta \neq 0$. There are two eigenvalues, $1$ and $\frac{\beta}{\alpha}$.

Case 2: $\alpha = 0$, $\beta \neq 0$. There is only one eigenvalue, $1$ (of multiplicity 1).

Case 3: $\alpha \neq 0$, $\beta = 0$. There are two eigenvalues, $1$ and $0$.

Case 4: $\alpha = 0$, $\beta = 0$. All $\mu \in \mathbb{C}$ are eigenvalues since $\det(B - \mu A) \equiv 0$.

At least for the case of regular pencils, it is apparent where the "missing" eigenvalues have gone in Cases 2 and 3. That is to say, there is a second eigenvalue "at infinity" for Case 3 of $A - \lambda B$, with its reciprocal eigenvalue being $0$ in Case 3 of the reciprocal pencil $B - \mu A$. A similar reciprocal symmetry holds for Case 2.

While there are applications in system theory and control where singular pencils appear, only the case of regular pencils is considered in the remainder of this chapter. Note that $A$ and/or $B$ may still be singular. If $B$ is singular, the pencil $A - \lambda B$ always has
fewer than $n$ eigenvalues. If $B$ is nonsingular, the pencil $A - \lambda B$ always has precisely $n$ eigenvalues, since the generalized eigenvalue problem is then easily seen to be equivalent to the standard eigenvalue problem $B^{-1}Ax = \lambda x$ (or $AB^{-1}w = \lambda w$). However, this turns out to be a very poor numerical procedure for handling the generalized eigenvalue problem if $B$ is even moderately ill conditioned with respect to inversion. Numerical methods that work directly on $A$ and $B$ are discussed in standard textbooks on numerical linear algebra; see, for example, [7, Sec. 7.7] or [25, Sec. 6.7].

12.2 Canonical Forms

Just as for the standard eigenvalue problem, canonical forms are available for the generalized eigenvalue problem. Since the latter involves a pair of matrices, we now deal with equivalencies rather than similarities, and the first theorem deals with what happens to eigenvalues and eigenvectors under equivalence.

Theorem 12.7. Let $A, B, Q, Z \in \mathbb{C}^{n\times n}$ with $Q$ and $Z$ nonsingular. Then

1. the eigenvalues of the problems $A - \lambda B$ and $QAZ - \lambda QBZ$ are the same (the two problems are said to be equivalent).

2. if $x$ is a right eigenvector of $A - \lambda B$, then $Z^{-1}x$ is a right eigenvector of $QAZ - \lambda QBZ$.

3. if $y$ is a left eigenvector of $A - \lambda B$, then $Q^{-H}y$ is a left eigenvector of $QAZ - \lambda QBZ$.

Proof:

1. $\det(QAZ - \lambda QBZ) = \det[Q(A - \lambda B)Z] = \det Q\,\det Z\,\det(A - \lambda B)$. Since $\det Q$ and $\det Z$ are nonzero, the result follows.

2. The result follows by noting that $(A - \lambda B)x = 0$ if and only if $Q(A - \lambda B)Z(Z^{-1}x) = 0$.

3. Again, the result follows easily by noting that $y^H(A - \lambda B) = 0$ if and only if $(Q^{-H}y)^HQ(A - \lambda B)Z = 0$. $\square$

The first canonical form is an analogue of Schur's Theorem and forms, in fact, the theoretical foundation for the QZ algorithm, which is the generally preferred method for solving the generalized eigenvalue problem; see, for example, [7, Sec. 7.7] or [25, Sec. 6.7].

Theorem 12.8. Let $A, B \in \mathbb{C}^{n\times n}$. Then there exist unitary matrices $Q, Z \in \mathbb{C}^{n\times n}$ such that
$$QAZ = T_\alpha, \quad QBZ = T_\beta,$$
where $T_\alpha$ and $T_\beta$ are upper triangular.

By Theorem 12.7, the eigenvalues of the pencil $A - \lambda B$ are then the ratios of the diagonal elements of $T_\alpha$ to the corresponding diagonal elements of $T_\beta$, with the understanding that a zero diagonal element of $T_\beta$ corresponds to an infinite generalized eigenvalue.
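A sketch of Theorem 12.8 via scipy.linalg.qz (assuming SciPy; the matrices are arbitrary examples, and note that SciPy's convention is $A = Q\,T_\alpha\,Z^H$ rather than $QAZ = T_\alpha$):

```python
import numpy as np
from scipy.linalg import qz

rng = np.random.default_rng(3)
A, B = rng.standard_normal((4, 4)), rng.standard_normal((4, 4))
Ta, Tb, Q, Z = qz(A, B, output='complex')   # A = Q Ta Z^H, B = Q Tb Z^H
print(np.allclose(Q @ Ta @ Z.conj().T, A))  # True
# Generalized eigenvalues as ratios of diagonal elements:
print(np.diag(Ta) / np.diag(Tb))
```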
There is also an analogue of the Murnaghan-Wintner Theorem for real matrices.

Theorem 12.9. Let $A, B \in \mathbb{R}^{n\times n}$. Then there exist orthogonal matrices $Q, Z \in \mathbb{R}^{n\times n}$ such that
$$QAZ = S, \quad QBZ = T,$$
where $T$ is upper triangular and $S$ is quasi-upper-triangular.

When $S$ has a $2 \times 2$ diagonal block, the $2 \times 2$ subpencil formed with the corresponding $2 \times 2$ diagonal subblock of $T$ has a pair of complex conjugate eigenvalues. Otherwise, real eigenvalues are given as above by the ratios of diagonal elements of $S$ to corresponding elements of $T$.

There is also an analogue of the Jordan canonical form called the Kronecker canonical form (KCF). A full description of the KCF, including analogues of principal vectors and so forth, is beyond the scope of this book. In this chapter, we present only statements of the basic theorems and some examples. The first theorem pertains only to "square" regular pencils, while the full KCF in all its generality applies also to "rectangular" and singular pencils.

Theorem 12.10. Let $A, B \in \mathbb{C}^{n\times n}$ and suppose the pencil $A - \lambda B$ is regular. Then there exist nonsingular matrices $P, Q \in \mathbb{C}^{n\times n}$ such that
$$P(A - \lambda B)Q = \begin{bmatrix}J & 0\\ 0 & I\end{bmatrix} - \lambda\begin{bmatrix}I & 0\\ 0 & N\end{bmatrix},$$
where $J$ is a Jordan canonical form corresponding to the finite eigenvalues of $A - \lambda B$ and $N$ is a nilpotent matrix of Jordan blocks associated with $0$ and corresponding to the infinite eigenvalues of $A - \lambda B$.

Example 12.11. The matrix pencil
$$\begin{bmatrix}2 & 1 & 0 & 0 & 0\\ 0 & 2 & 0 & 0 & 0\\ 0 & 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 0 & 1\end{bmatrix} - \lambda\begin{bmatrix}1 & 0 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 0 & 1\\ 0 & 0 & 0 & 0 & 0\end{bmatrix}$$
with characteristic polynomial $(\lambda - 2)^2$ has a finite eigenvalue $2$ of multiplicity 2 and three infinite eigenvalues.

Theorem 12.12 (Kronecker Canonical Form). Let $A, B \in \mathbb{C}^{m\times n}$. Then there exist nonsingular matrices $P \in \mathbb{C}^{m\times m}$ and $Q \in \mathbb{C}^{n\times n}$ such that
$$P(A - \lambda B)Q = \operatorname{diag}\left(L_{l_1}, \ldots, L_{l_s}, L_{r_1}^T, \ldots, L_{r_t}^T, J - \lambda I, I - \lambda N\right),$$
where $N$ is nilpotent, both $N$ and $J$ are in Jordan canonical form, and $L_k$ is the $(k+1) \times k$ bidiagonal pencil
$$L_k = \begin{bmatrix}\lambda & 0 & \cdots & 0\\ 1 & \lambda & \ddots & \vdots\\ 0 & 1 & \ddots & 0\\ \vdots & & \ddots & \lambda\\ 0 & \cdots & 0 & 1\end{bmatrix}.$$
The $l_i$ are called the left minimal indices while the $r_i$ are called the right minimal indices. Left or right minimal indices can take the value $0$.

Example 12.13. Consider a $13 \times 12$ block diagonal matrix whose diagonal blocks are
$$\begin{bmatrix}0 & 0\\ 0 & 0\\ 0 & 0\end{bmatrix}, \quad \begin{bmatrix}\lambda & 0\\ 1 & \lambda\\ 0 & 1\end{bmatrix}, \quad \begin{bmatrix}\lambda & 1 & 0\\ 0 & \lambda & 1\end{bmatrix}, \quad \begin{bmatrix}2-\lambda & 1\\ 0 & 2-\lambda\end{bmatrix}, \quad \begin{bmatrix}1 & -\lambda & 0\\ 0 & 1 & -\lambda\\ 0 & 0 & 1\end{bmatrix}.$$
Such a matrix is in KCF. The first block of zeros actually corresponds to $L_0, L_0, L_0, L_0^T, L_0^T$, where each $L_0$ has "zero columns" and one row, while each $L_0^T$ has "zero rows" and one column. The second block is $L_2$ while the third block is $L_2^T$. The next two blocks correspond to $J - \lambda I$, where
$$J = \begin{bmatrix}2 & 1\\ 0 & 2\end{bmatrix},$$
while the nilpotent matrix $N$ in this example is
$$N = \begin{bmatrix}0 & 1 & 0\\ 0 & 0 & 1\\ 0 & 0 & 0\end{bmatrix}.$$

Just as sets of eigenvectors span $A$-invariant subspaces in the case of the standard eigenproblem (recall Definition 9.35), there is an analogous geometric concept for the generalized eigenproblem.

Definition 12.14. Let $A, B \in \mathbb{R}^{n\times n}$ and suppose the pencil $A - \lambda B$ is regular. Then $\mathcal{V}$ is a deflating subspace if
$$\dim(A\mathcal{V} + B\mathcal{V}) = \dim\mathcal{V}. \tag{12.4}$$

Just as in the standard eigenvalue case, there is a matrix characterization of deflating subspace. Specifically, suppose $S \in \mathbb{R}^{n\times k}$ is a matrix whose columns span a $k$-dimensional subspace $\mathcal{S}$ of $\mathbb{R}^n$, i.e., $\mathcal{R}(S) = \mathcal{S}$. Then $\mathcal{S}$ is a deflating subspace for the pencil $A - \lambda B$ if and only if there exists $M \in \mathbb{R}^{k\times k}$ such that
$$AS = BSM. \tag{12.5}$$
If $B = I$, then (12.4) becomes $\dim(A\mathcal{V} + \mathcal{V}) = \dim\mathcal{V}$, which is clearly equivalent to $A\mathcal{V} \subseteq \mathcal{V}$. Similarly, (12.5) becomes $AS = SM$ as before. If the pencil is not regular, there is a concept analogous to deflating subspace called a reducing subspace.

12.3 Application to the Computation of System Zeros

Consider the linear system
$$\dot{x} = Ax + Bu,$$
$$y = Cx + Du$$
with $A \in \mathbb{R}^{n\times n}$, $B \in \mathbb{R}^{n\times m}$, $C \in \mathbb{R}^{p\times n}$, and $D \in \mathbb{R}^{p\times m}$. This linear time-invariant state-space model is often used in multivariable control theory, where $x(= x(t))$ is called the state vector, $u$ is the vector of inputs or controls, and $y$ is the vector of outputs or observables. For details, see, for example, [26].

In general, the (finite) zeros of this system are given by the (finite) complex numbers $z$, where the "system pencil"
$$\begin{bmatrix}A - zI & B\\ C & D\end{bmatrix} \tag{12.6}$$
drops rank. In the special case $p = m$, these values are the generalized eigenvalues of the $(n+m) \times (n+m)$ pencil.

Example 12.15. Let
$$A = \begin{bmatrix}-4 & -3\\ 2 & 1\end{bmatrix}, \quad B = \begin{bmatrix}3\\ 1\end{bmatrix}, \quad C = \begin{bmatrix}1 & 2\end{bmatrix}, \quad D = 0.$$
Then the transfer matrix (see [26]) of this system is
$$g(s) = C(sI - A)^{-1}B + D = \frac{5s + 14}{s^2 + 3s + 2},$$
which clearly has a zero at $-2.8$. Checking the finite eigenvalues of the pencil (12.6), we find the characteristic polynomial to be
$$\det\begin{bmatrix}A - \lambda I & B\\ C & D\end{bmatrix} = 5\lambda + 14,$$
which has a root at $-2.8$.

The method of finding system zeros via a generalized eigenvalue problem also works well for general multi-input, multi-output systems. Numerically, however, one must be careful first to "deflate out" the infinite zeros (infinite eigenvalues of (12.6)). This is accomplished by computing a certain unitary equivalence on the system pencil that then yields a smaller generalized eigenvalue problem with only finite generalized eigenvalues (the finite zeros).
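A sketch (assuming SciPy) that recovers the zero of Example 12.15 by posing (12.6) as a generalized eigenvalue problem $M - z E$ with $E = \operatorname{diag}(I, 0)$:

```python
import numpy as np
from scipy.linalg import eig

A = np.array([[-4.0, -3.0], [2.0, 1.0]])
B = np.array([[3.0], [1.0]])
C = np.array([[1.0, 2.0]])
D = np.array([[0.0]])

n, m = 2, 1
M = np.block([[A, B], [C, D]])              # constant part of the pencil
E = np.block([[np.eye(n), np.zeros((n, m))],
              [np.zeros((m, n)), np.zeros((m, m))]])
lam = eig(M, E, right=False)                # generalized eigenvalues of (12.6)
print(lam[np.isfinite(lam)])                # finite zero: about -2.8
```

The infinite eigenvalues (reported as non-finite values) correspond to the infinite zeros discussed above.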
The connection between system zeros and the corresponding system pencil is nontrivial. However, we offer some insight below into the special case of a single-input, single-output system. Specifically, let $B = b \in \mathbb{R}^n$, $C = c^T \in \mathbb{R}^{1\times n}$, and $D = d \in \mathbb{R}$. Furthermore, let $g(s) = c^T(sI - A)^{-1}b + d$ denote the system transfer function (matrix), and assume that $g(s)$ can be written in the form
$$g(s) = \frac{\nu(s)}{\pi(s)},$$
where $\pi(s)$ is the characteristic polynomial of $A$, and $\nu(s)$ and $\pi(s)$ are relatively prime (i.e., there are no "pole/zero cancellations").

Suppose $z \in \mathbb{C}$ is such that
$$\begin{bmatrix}A - zI & b\\ c^T & d\end{bmatrix}$$
is singular. Then there exists a nonzero solution to
$$\begin{bmatrix}A - zI & b\\ c^T & d\end{bmatrix}\begin{bmatrix}x\\ y\end{bmatrix} = 0,$$
or
$$(A - zI)x + by = 0, \tag{12.7}$$
$$c^Tx + dy = 0. \tag{12.8}$$
Assuming $z$ is not an eigenvalue of $A$ (i.e., no pole/zero cancellations), then from (12.7) we get
$$x = -(A - zI)^{-1}by. \tag{12.9}$$
Substituting this in (12.8), we have
$$-c^T(A - zI)^{-1}by + dy = 0,$$
or $g(z)y = 0$ by the definition of $g$. Now $y \neq 0$ (else $x = 0$ from (12.9)). Hence $g(z) = 0$, i.e., $z$ is a zero of $g$.

12.4 Symmetric Generalized Eigenvalue Problems

A very important special case of the generalized eigenvalue problem
$$Ax = \lambda Bx \tag{12.10}$$
for $A, B \in \mathbb{R}^{n\times n}$ arises when $A = A^T$ and $B = B^T > 0$. For example, the second-order system of differential equations
$$M\ddot{x} + Kx = 0,$$
where $M$ is a symmetric positive definite "mass matrix" and $K$ is a symmetric "stiffness matrix," is a frequently employed model of structures or vibrating systems and yields a generalized eigenvalue problem of the form (12.10).

Since $B$ is positive definite it is nonsingular. Thus, the problem (12.10) is equivalent to the standard eigenvalue problem $B^{-1}Ax = \lambda x$. However, $B^{-1}A$ is not necessarily symmetric.
Example 12.16. Let $A = \begin{bmatrix}1 & 3\\ 3 & 2\end{bmatrix}$, $B = \begin{bmatrix}2 & 1\\ 1 & 1\end{bmatrix}$. Then $B^{-1}A = \begin{bmatrix}-2 & 1\\ 5 & 1\end{bmatrix}$.

Nevertheless, the eigenvalues of $B^{-1}A$ are always real (and are approximately $2.1926$ and $-3.1926$ in Example 12.16).

Theorem 12.17. Let $A, B \in \mathbb{R}^{n\times n}$ with $A = A^T$ and $B = B^T > 0$. Then the generalized eigenvalue problem
$$Ax = \lambda Bx$$
has $n$ real eigenvalues, and the $n$ corresponding right eigenvectors can be chosen to be orthogonal with respect to the inner product $\langle x, y\rangle_B = x^TBy$. Moreover, if $A > 0$, then the eigenvalues are also all positive.

Proof: Since $B > 0$, it has a Cholesky factorization $B = LL^T$, where $L$ is nonsingular (Theorem 10.23). Then the eigenvalue problem
$$Ax = \lambda Bx = \lambda LL^Tx$$
can be rewritten as the equivalent problem
$$\left(L^{-1}AL^{-T}\right)\left(L^Tx\right) = \lambda\left(L^Tx\right). \tag{12.11}$$
Letting $C = L^{-1}AL^{-T}$ and $z = L^Tx$, (12.11) can then be rewritten as
$$Cz = \lambda z. \tag{12.12}$$
Since $C = C^T$, the eigenproblem (12.12) has $n$ real eigenvalues, with corresponding eigenvectors $z_1, \ldots, z_n$ satisfying
$$z_i^Tz_j = \delta_{ij}.$$
Then $x_i = L^{-T}z_i$, $i \in \underline{n}$, are eigenvectors of the original generalized eigenvalue problem and satisfy
$$\langle x_i, x_j\rangle_B = x_i^TBx_j = \left(z_i^TL^{-1}\right)\left(LL^T\right)\left(L^{-T}z_j\right) = \delta_{ij}.$$
Finally, if $A = A^T > 0$, then $C = C^T > 0$, so the eigenvalues are positive. $\square$

Example 12.18. The Cholesky factor for the matrix $B$ in Example 12.16 is
$$L = \begin{bmatrix}\sqrt{2} & 0\\ \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}}\end{bmatrix}.$$
Then it is easily checked that
$$C = L^{-1}AL^{-T} = \begin{bmatrix}0.5 & 2.5\\ 2.5 & -1.5\end{bmatrix},$$
whose eigenvalues are approximately $2.1926$ and $-3.1926$ as expected.

The material of this section can, of course, be generalized easily to the case where $A$ and $B$ are Hermitian, but since real-valued matrices are commonly used in most applications, we have restricted our attention to that case only.
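A sketch of Theorem 12.17 and Example 12.18 (assuming NumPy/SciPy, and using the matrices of Example 12.16 above): reduce $Ax = \lambda Bx$ to a symmetric standard problem via the Cholesky factor, or call the library routine directly.

```python
import numpy as np
from scipy.linalg import eigh

A = np.array([[1.0, 3.0], [3.0, 2.0]])
B = np.array([[2.0, 1.0], [1.0, 1.0]])

L = np.linalg.cholesky(B)                           # B = L L^T (L lower)
C = np.linalg.solve(L, np.linalg.solve(L, A).T).T   # C = L^{-1} A L^{-T}
print(C)                                            # [[0.5, 2.5], [2.5, -1.5]]
print(np.linalg.eigvalsh(C))                        # about -3.1926 and 2.1926

# scipy.linalg.eigh solves the generalized problem directly and returns
# eigenvectors orthogonal in the B-inner product:
lam, X = eigh(A, B)
print(np.allclose(X.T @ B @ X, np.eye(2)))          # True
```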
12.5 Simultaneous Diagonalization

Recall that many matrices can be diagonalized by a similarity. In particular, normal matrices can be diagonalized by a unitary similarity. It turns out that in some cases a pair of matrices $(A, B)$ can be simultaneously diagonalized by the same matrix. There are many such results and we present only a representative (but important and useful) theorem here. Again, we restrict our attention only to the real case, with the complex case following in a straightforward way.

Theorem 12.19 (Simultaneous Reduction to Diagonal Form). Let $A, B \in \mathbb{R}^{n\times n}$ with $A = A^T$ and $B = B^T > 0$. Then there exists a nonsingular matrix $Q$ such that
$$Q^TAQ = D, \quad Q^TBQ = I,$$
where $D$ is diagonal. In fact, the diagonal elements of $D$ are the eigenvalues of $B^{-1}A$.

Proof: Let $B = LL^T$ be the Cholesky factorization of $B$ and set $C = L^{-1}AL^{-T}$. Since $C$ is symmetric, there exists an orthogonal matrix $P$ such that $P^TCP = D$, where $D$ is diagonal. Let $Q = L^{-T}P$. Then
$$Q^TAQ = P^TL^{-1}AL^{-T}P = P^TCP = D$$
and
$$Q^TBQ = P^TL^{-1}\left(LL^T\right)L^{-T}P = P^TP = I.$$
Finally, since $QDQ^{-1} = QQ^TAQQ^{-1} = L^{-T}PP^TL^{-1}A = L^{-T}L^{-1}A = B^{-1}A$, we have $\Lambda(D) = \Lambda(B^{-1}A)$. $\square$

Note that $Q$ is not in general orthogonal, so it does not preserve eigenvalues of $A$ and $B$ individually. However, it does preserve the eigenvalues of $A - \lambda B$. This can be seen directly. Let $\hat{A} = Q^TAQ$ and $\hat{B} = Q^TBQ$. Then $\hat{B}^{-1}\hat{A} = Q^{-1}B^{-1}Q^{-T}Q^TAQ = Q^{-1}B^{-1}AQ$.

Theorem 12.19 is very useful for reducing many statements about pairs of symmetric matrices to "the diagonal case." The following is typical.
Theorem 12.20. Let A, B e M"
xn
be positive definite. Then A > B if and only if B~
l
>
A
1
.
Proof: By Theorem 12.19, there exists Q e E"
x
" such that Q
T
AQ = D and Q
T
BQ = I,
where D is diagonal. Now D > 0 by Theorem 10.31. Also, since A > B, by Theorem
10.21 we have that Q
T
AQ > Q
T
BQ, i.e., D > I. But then D"
1
< / (this is trivially true
since the two matrices are diagonal). Thus, QD~
l
Q
T
< QQ
T
, i.e., A~
l
< B~
l
. D
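The two-step construction in the proof of Theorem 12.19 (a Cholesky factorization followed by a symmetric eigendecomposition) translates directly into code. The following is a minimal sketch, assuming NumPy is available; the test matrices here are randomly generated and are not from the text.

```python
# Simultaneous reduction to diagonal form: Q = L^{-T} P as in Theorem 12.19.
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n)); A = A + A.T                      # symmetric
Bh = rng.standard_normal((n, n)); B = Bh @ Bh.T + n * np.eye(n)   # positive definite

L = np.linalg.cholesky(B)                            # B = L L^T
C = np.linalg.solve(L, np.linalg.solve(L, A).T).T    # C = L^{-1} A L^{-T}
D, P = np.linalg.eigh(C)                             # P^T C P = diag(D)
Q = np.linalg.solve(L.T, P)                          # Q = L^{-T} P

print(np.round(Q.T @ A @ Q, 10))   # diagonal, with the eigenvalues of B^{-1} A
print(np.round(Q.T @ B @ Q, 10))   # identity
print(np.sort(np.linalg.eigvals(np.linalg.solve(B, A)).real))   # same diagonal
```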
12.5.1 Simultaneous diagonalization via SVD

There are situations in which forming $C = L^{-1}AL^{-T}$ as in the proof of Theorem 12.19 is
numerically problematic, e.g., when L is highly ill conditioned with respect to inversion. In
such cases, simultaneous reduction can also be accomplished via an SVD. To illustrate, let
us assume that both A and B are positive definite. Further, let $A = L_A L_A^T$ and $B = L_B L_B^T$
be Cholesky factorizations of A and B, respectively. Compute the SVD

    $L_B^{-1} L_A = U \Sigma V^T,$   (12.13)

where $\Sigma \in \mathbb{R}^{n \times n}$ is diagonal. Then the matrix $Q = L_B^{-T} U$ performs the simultaneous
diagonalization. To check this, note that

    $Q^T A Q = U^T L_B^{-1} L_A L_A^T L_B^{-T} U = U^T U \Sigma V^T V \Sigma^T U^T U = \Sigma^2$

while

    $Q^T B Q = U^T L_B^{-1} L_B L_B^T L_B^{-T} U = U^T U = I.$

Remark 12.21. The SVD in (12.13) can be computed without explicitly forming the
indicated matrix product or the inverse by using the so-called generalized singular value
decomposition (GSVD). Note that

    $(L_B^{-1} L_A)(L_B^{-1} L_A)^T = L_B^{-1} L_A L_A^T L_B^{-T}$

and thus the singular values of $L_B^{-1} L_A$ can be found from the eigenvalue problem

    $L_B^{-1} L_A L_A^T L_B^{-T} z = \lambda z.$   (12.14)

Letting $x = L_B^{-T} z$ we see that (12.14) can be rewritten in the form $L_A L_A^T x = \lambda L_B z =
\lambda L_B L_B^T L_B^{-T} z$, which is thus equivalent to the generalized eigenvalue problem

    $L_A L_A^T x = \lambda L_B L_B^T x.$   (12.15)

The problem (12.15) is called a generalized singular value problem and algorithms exist to
solve it (and hence equivalently (12.13)) via arithmetic operations performed only on $L_A$
and $L_B$ separately, i.e., without forming the products $L_A L_A^T$ or $L_B L_B^T$ explicitly; see, for
example, [7, Sec. 8.7.3]. This is analogous to finding the singular values of a matrix M by
operations performed directly on M rather than by forming the matrix $M^T M$ and solving
the eigenproblem $M^T M x = \lambda x$.

Remark 12.22. Various generalizations of the results in Remark 12.21 are possible, for
example, when $A = A^T \geq 0$. The case when A is symmetric but indefinite is not so
straightforward, at least in real arithmetic. For example, A can be written as $A = PDP^T$,
where D is diagonal and P is orthogonal, but in writing $A = P\tilde{D}\tilde{D}P^T = P\tilde{D}(P\tilde{D})^T$ with
$\tilde{D}$ diagonal, $\tilde{D}$ may have pure imaginary elements.
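The SVD-based reduction just described can be sketched in a few lines, again assuming NumPy is available and using randomly generated positive definite test matrices.

```python
# SVD-based simultaneous diagonalization: Q = L_B^{-T} U.
import numpy as np

rng = np.random.default_rng(1)
n = 4
Ma = rng.standard_normal((n, n)); A = Ma @ Ma.T + n * np.eye(n)
Mb = rng.standard_normal((n, n)); B = Mb @ Mb.T + n * np.eye(n)

LA = np.linalg.cholesky(A)          # A = L_A L_A^T
LB = np.linalg.cholesky(B)          # B = L_B L_B^T
U, s, Vt = np.linalg.svd(np.linalg.solve(LB, LA))   # L_B^{-1} L_A = U S V^T
Q = np.linalg.solve(LB.T, U)        # Q = L_B^{-T} U

print(np.round(Q.T @ A @ Q, 10))    # diag(s**2), i.e., Sigma^2
print(np.round(Q.T @ B @ Q, 10))    # identity
```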
12.6 Higher-Order Eigenvalue Problems

Consider the second-order system of differential equations

    $M\ddot{q} + C\dot{q} + Kq = 0,$   (12.16)

where $q(t) \in \mathbb{R}^n$ and $M, C, K \in \mathbb{R}^{n \times n}$. Assume for simplicity that M is nonsingular.
Suppose, by analogy with the first-order case, that we try to find a solution of (12.16) of the
form $q(t) = e^{\lambda t} p$, where the n-vector p and scalar $\lambda$ are to be determined. Substituting in
(12.16) we get

    $(\lambda^2 M + \lambda C + K) p\, e^{\lambda t} = 0$

or, since $e^{\lambda t} \neq 0$,

    $(\lambda^2 M + \lambda C + K) p = 0.$

To get a nonzero solution p, we thus seek values of $\lambda$ for which the matrix $\lambda^2 M + \lambda C + K$
is singular. Since the determinantal equation

    $0 = \det(\lambda^2 M + \lambda C + K) = \lambda^{2n} + \cdots$

yields a polynomial of degree 2n, there are 2n eigenvalues for the second-order (or
quadratic) eigenvalue problem $\lambda^2 M + \lambda C + K$.

A special case of (12.16) arises frequently in applications: $M = I$, $C = 0$, and
$K = K^T$. Suppose K has eigenvalues

    $\mu_1 \geq \cdots \geq \mu_r \geq 0 > \mu_{r+1} \geq \cdots \geq \mu_n.$

Let $\omega_k = |\mu_k|^{1/2}$. Then the 2n eigenvalues of the second-order eigenvalue problem $\lambda^2 I + K$
are

    $\pm j\omega_k$;  k = 1, ..., r,
    $\pm \omega_k$;  k = r + 1, ..., n.

If r = n (i.e., $K = K^T \geq 0$), then all solutions of $\ddot{q} + Kq = 0$ are oscillatory.

12.6.1 Conversion to first-order form

Let $x_1 = q$ and $x_2 = \dot{q}$. Then (12.16) can be written as a first-order system (with block
companion matrix)

    $\dot{x} = \begin{bmatrix} 0 & I \\ -M^{-1}K & -M^{-1}C \end{bmatrix} x,$

where $x(t) \in \mathbb{R}^{2n}$. If M is singular, or if it is desired to avoid the calculation of $M^{-1}$ because
M is too ill conditioned with respect to inversion, the second-order problem (12.16) can still
be converted to the first-order generalized linear system

    $\begin{bmatrix} I & 0 \\ 0 & M \end{bmatrix} \dot{x} = \begin{bmatrix} 0 & I \\ -K & -C \end{bmatrix} x.$
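The conversion to first-order form is also the standard computational route to the quadratic eigenvalue problem. The following is a minimal sketch, assuming SciPy is available; it uses the generalized pencil above, which avoids forming $M^{-1}$, and the matrices are randomly generated for illustration.

```python
# Quadratic eigenvalue problem via linearization to a generalized pencil.
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(2)
n = 3
M = rng.standard_normal((n, n)) + 4 * np.eye(n)
C = rng.standard_normal((n, n))
K = rng.standard_normal((n, n))

Z, I = np.zeros((n, n)), np.eye(n)
E = np.block([[I, Z], [Z, M]])        # [I 0; 0 M]
F = np.block([[Z, I], [-K, -C]])      # [0 I; -K -C], block companion form
lam = eig(F, E, right=False)          # the 2n eigenvalues of the pencil
print(np.sort_complex(lam))

# Cross-check: det(lam^2 M + lam C + K) should be near zero at each lam.
print([abs(np.linalg.det(l**2 * M + l * C + K)) for l in lam])
```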
Many other first-order realizations are possible. Some can be useful when M, C, and/or K
have special symmetry or skew-symmetry properties that can be exploited.

Higher-order analogues of (12.16) involving, say, the kth derivative of q, lead naturally
to higher-order eigenvalue problems that can be converted to first-order form using a $kn \times kn$
block companion matrix analogue of (11.19). Similar procedures hold for the general kth-order
difference equation, which can be converted to various first-order systems of dimension kn.

EXERCISES

1. Suppose $A \in \mathbb{R}^{n \times n}$ and $D \in \mathbb{R}^{m \times m}_m$. Show that the finite generalized eigenvalues of
the pencil

    $\begin{bmatrix} A & B \\ C & D \end{bmatrix} - \lambda \begin{bmatrix} I & 0 \\ 0 & 0 \end{bmatrix}$

are the eigenvalues of the matrix $A - BD^{-1}C$.

2. Let $F, G \in \mathbb{C}^{n \times n}$. Show that the nonzero eigenvalues of FG and GF are the same.
Hint: An easy "trick proof" is to verify that the matrices

    $\begin{bmatrix} FG & 0 \\ G & 0 \end{bmatrix}$ and $\begin{bmatrix} 0 & 0 \\ G & GF \end{bmatrix}$

are similar via the similarity transformation

    $\begin{bmatrix} I & F \\ 0 & I \end{bmatrix}.$

3. Let $F \in \mathbb{C}^{n \times m}$, $G \in \mathbb{C}^{m \times n}$. Are the nonzero singular values of FG and GF the
same?

4. Suppose $A \in \mathbb{R}^{n \times n}$, $B \in \mathbb{R}^{n \times m}$, and $C \in \mathbb{R}^{m \times n}$. Show that the generalized eigenvalues
of the pencils

    $\begin{bmatrix} A & B \\ C & 0 \end{bmatrix} - \lambda \begin{bmatrix} I & 0 \\ 0 & 0 \end{bmatrix}$

and

    $\begin{bmatrix} A + BF + GC & B \\ C & 0 \end{bmatrix} - \lambda \begin{bmatrix} I & 0 \\ 0 & 0 \end{bmatrix}$

are identical for all $F \in \mathbb{R}^{m \times n}$ and all $G \in \mathbb{R}^{n \times m}$.

Hint: Consider the equivalence

    $\begin{bmatrix} I & G \\ 0 & I \end{bmatrix} \begin{bmatrix} A - \lambda I & B \\ C & 0 \end{bmatrix} \begin{bmatrix} I & 0 \\ F & I \end{bmatrix}.$

(A similar result is also true for "nonsquare" pencils. In the parlance of control theory,
such results show that zeros are invariant under state feedback or output injection.)
5. Another family of simultaneous diagonalization problems arises when it is desired
that the simultaneous diagonalizing transformation Q operates on matrices $A, B \in
\mathbb{R}^{n \times n}$ in such a way that $Q^{-1}AQ^{-T}$ and $Q^T B Q$ are simultaneously diagonal. Such
a transformation is called contragredient. Consider the case where both A and
B are positive definite with Cholesky factorizations $A = L_A L_A^T$ and $B = L_B L_B^T$,
respectively, and let $U\Sigma V^T$ be an SVD of $L_B^T L_A$.

(a) Show that $Q = L_A V \Sigma^{-1/2}$ is a contragredient transformation that reduces both
A and B to the same diagonal matrix.

(b) Show that $Q^{-1} = \Sigma^{-1/2} U^T L_B^T$.

(c) Show that the eigenvalues of AB are the same as those of $\Sigma^2$ and hence are
positive.
Chapter 13

Kronecker Products

13.1 Definition and Examples

Definition 13.1. Let $A \in \mathbb{R}^{m \times n}$, $B \in \mathbb{R}^{p \times q}$. Then the Kronecker product (or tensor
product) of A and B is defined as the matrix

    $A \otimes B = \begin{bmatrix} a_{11}B & \cdots & a_{1n}B \\ \vdots & & \vdots \\ a_{m1}B & \cdots & a_{mn}B \end{bmatrix} \in \mathbb{R}^{mp \times nq}.$   (13.1)

Obviously, the same definition holds if A and B are complex-valued matrices. We
restrict our attention in this chapter primarily to real-valued matrices, pointing out the
extension to the complex case only where it is not obvious.

Example 13.2.

1. Let $A = \begin{bmatrix} 1 & 2 \\ 3 & 2 \end{bmatrix}$ and $B = \begin{bmatrix} 2 & 1 \\ 2 & 3 \end{bmatrix}$. Then

    $A \otimes B = \begin{bmatrix} B & 2B \\ 3B & 2B \end{bmatrix} = \begin{bmatrix} 2 & 1 & 4 & 2 \\ 2 & 3 & 4 & 6 \\ 6 & 3 & 4 & 2 \\ 6 & 9 & 4 & 6 \end{bmatrix}.$

Note that $B \otimes A \neq A \otimes B$.

2. For any $B \in \mathbb{R}^{p \times q}$, $I_2 \otimes B = \begin{bmatrix} B & 0 \\ 0 & B \end{bmatrix}$.
Replacing $I_2$ by $I_n$ yields a block diagonal matrix with n copies of B along the
diagonal.

3. Let B be an arbitrary $2 \times 2$ matrix. Then

    $B \otimes I_2 = \begin{bmatrix} b_{11} & 0 & b_{12} & 0 \\ 0 & b_{11} & 0 & b_{12} \\ b_{21} & 0 & b_{22} & 0 \\ 0 & b_{21} & 0 & b_{22} \end{bmatrix}.$
The extension to arbitrary B and $I_n$ is obvious.

4. Let $x \in \mathbb{R}^m$, $y \in \mathbb{R}^n$. Then

    $x \otimes y = [x_1 y^T, \ldots, x_m y^T]^T = [x_1 y_1, \ldots, x_1 y_n, x_2 y_1, \ldots, x_m y_n]^T \in \mathbb{R}^{mn}.$

5. Let $x \in \mathbb{R}^m$, $y \in \mathbb{R}^n$. Then

    $x \otimes y^T = \begin{bmatrix} x_1 y^T \\ \vdots \\ x_m y^T \end{bmatrix} = xy^T \in \mathbb{R}^{m \times n}.$

13.2 Properties of the Kronecker Product

Theorem 13.3. Let $A \in \mathbb{R}^{m \times n}$, $B \in \mathbb{R}^{r \times s}$, $C \in \mathbb{R}^{n \times p}$, and $D \in \mathbb{R}^{s \times t}$. Then

    $(A \otimes B)(C \otimes D) = AC \otimes BD \ \ (\in \mathbb{R}^{mr \times pt}).$   (13.2)

Proof: Simply verify that the (i, j) block of $(A \otimes B)(C \otimes D)$ is $\sum_{k=1}^{n} a_{ik}B\, c_{kj}D =
\left( \sum_{k=1}^{n} a_{ik}c_{kj} \right) BD$, which is the (i, j) block of $AC \otimes BD$. □

Theorem 13.4. For all A and B, $(A \otimes B)^T = A^T \otimes B^T$.

Proof: For the proof, simply verify using the definitions of transpose and Kronecker
product. □

Corollary 13.5. If $A \in \mathbb{R}^{n \times n}$ and $B \in \mathbb{R}^{m \times m}$ are symmetric, then $A \otimes B$ is symmetric.

Theorem 13.6. If A and B are nonsingular, $(A \otimes B)^{-1} = A^{-1} \otimes B^{-1}$.

Proof: Using Theorem 13.3, simply note that $(A \otimes B)(A^{-1} \otimes B^{-1}) = I \otimes I = I$. □
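Theorems 13.3, 13.4, and 13.6 are easy to check numerically with NumPy's kron function. The following is a minimal sketch; the matrices are randomly generated and chosen only so that all products conform.

```python
# Numerical illustration of the mixed-product, transpose, and inverse rules.
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((2, 3)); B = rng.standard_normal((4, 2))
Cm = rng.standard_normal((3, 2)); D = rng.standard_normal((2, 5))

# Mixed-product property (13.2): (A kron B)(C kron D) = AC kron BD.
print(np.allclose(np.kron(A, B) @ np.kron(Cm, D), np.kron(A @ Cm, B @ D)))

# Transpose: (A kron B)^T = A^T kron B^T.
print(np.allclose(np.kron(A, B).T, np.kron(A.T, B.T)))

# Inverse (generic random square matrices are nonsingular).
A1 = rng.standard_normal((3, 3)); B1 = rng.standard_normal((2, 2))
print(np.allclose(np.linalg.inv(np.kron(A1, B1)),
                  np.kron(np.linalg.inv(A1), np.linalg.inv(B1))))
```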
Theorem 13.7. If $A \in \mathbb{R}^{n \times n}$ and $B \in \mathbb{R}^{m \times m}$ are normal, then $A \otimes B$ is normal.

Proof:

    $(A \otimes B)^T (A \otimes B) = (A^T \otimes B^T)(A \otimes B)$   by Theorem 13.4
    $= A^T A \otimes B^T B$   by Theorem 13.3
    $= AA^T \otimes BB^T$   since A and B are normal
    $= (A \otimes B)(A \otimes B)^T$   by Theorem 13.3. □

Corollary 13.8. If $A \in \mathbb{R}^{n \times n}$ is orthogonal and $B \in \mathbb{R}^{m \times m}$ is orthogonal, then $A \otimes B$ is
orthogonal.

Example 13.9. Let $A = \begin{bmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{bmatrix}$ and $B = \begin{bmatrix} \cos\phi & \sin\phi \\ -\sin\phi & \cos\phi \end{bmatrix}$. Then it is easily seen that
A is orthogonal with eigenvalues $e^{\pm j\theta}$ and B is orthogonal with eigenvalues $e^{\pm j\phi}$. The $4 \times 4$
matrix $A \otimes B$ is then also orthogonal with eigenvalues $e^{\pm j(\theta+\phi)}$ and $e^{\pm j(\theta-\phi)}$.

Theorem 13.10. Let $A \in \mathbb{R}^{m \times n}$ have a singular value decomposition $U_A \Sigma_A V_A^T$ and let
$B \in \mathbb{R}^{p \times q}$ have a singular value decomposition $U_B \Sigma_B V_B^T$. Then

    $(U_A \otimes U_B)(\Sigma_A \otimes \Sigma_B)(V_A \otimes V_B)^T$

yields a singular value decomposition of $A \otimes B$ (after a simple reordering of the diagonal
elements of $\Sigma_A \otimes \Sigma_B$ and the corresponding right and left singular vectors).

Corollary 13.11. Let $A \in \mathbb{R}^{m \times n}_r$ have singular values $\sigma_1 \geq \cdots \geq \sigma_r > 0$ and let $B \in \mathbb{R}^{p \times q}_s$
have singular values $\tau_1 \geq \cdots \geq \tau_s > 0$. Then $A \otimes B$ (or $B \otimes A$) has rs singular values
$\sigma_1\tau_1 \geq \cdots \geq \sigma_r\tau_s > 0$ and

    $\operatorname{rank}(A \otimes B) = (\operatorname{rank} A)(\operatorname{rank} B) = \operatorname{rank}(B \otimes A).$

Theorem 13.12. Let $A \in \mathbb{R}^{n \times n}$ have eigenvalues $\lambda_i$, $i \in \underline{n}$, and let $B \in \mathbb{R}^{m \times m}$ have
eigenvalues $\mu_j$, $j \in \underline{m}$. Then the mn eigenvalues of $A \otimes B$ are

    $\lambda_1\mu_1, \ldots, \lambda_1\mu_m, \lambda_2\mu_1, \ldots, \lambda_2\mu_m, \ldots, \lambda_n\mu_m.$

Moreover, if $x_1, \ldots, x_p$ are linearly independent right eigenvectors of A corresponding
to $\lambda_1, \ldots, \lambda_p$ ($p \leq n$), and $z_1, \ldots, z_q$ are linearly independent right eigenvectors of B
corresponding to $\mu_1, \ldots, \mu_q$ ($q \leq m$), then $x_i \otimes z_j \in \mathbb{R}^{mn}$ are linearly independent right
eigenvectors of $A \otimes B$ corresponding to $\lambda_i \mu_j$, $i \in \underline{p}$, $j \in \underline{q}$.

Proof: The basic idea of the proof is as follows:

    $(A \otimes B)(x \otimes z) = Ax \otimes Bz = \lambda x \otimes \mu z = \lambda\mu(x \otimes z).$ □
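The following is a minimal numerical check of Theorem 13.12, assuming NumPy is available: the eigenvalues of $A \otimes B$, sorted, should coincide with the sorted products $\lambda_i \mu_j$.

```python
# Eigenvalues of a Kronecker product are all pairwise products.
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((2, 2))

lam = np.linalg.eigvals(A)
mu = np.linalg.eigvals(B)
products = np.array([l * m for l in lam for m in mu])

kron_evals = np.linalg.eigvals(np.kron(A, B))
print(np.allclose(np.sort_complex(products), np.sort_complex(kron_evals)))
```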
If A and B are diagonalizable in Theorem 13.12, we can take $p = n$ and $q = m$ and
thus get the complete eigenstructure of $A \otimes B$. In general, if A and B have Jordan form
decompositions given by $P^{-1}AP = J_A$ and $Q^{-1}BQ = J_B$, respectively, then we get the
following Jordan-like structure:

    $(P \otimes Q)^{-1}(A \otimes B)(P \otimes Q) = (P^{-1} \otimes Q^{-1})(A \otimes B)(P \otimes Q) = (P^{-1}AP) \otimes (Q^{-1}BQ) = J_A \otimes J_B.$

Note that $J_A \otimes J_B$, while upper triangular, is generally not quite in Jordan form and needs
further reduction (to an ultimate Jordan form that also depends on whether or not certain
eigenvalues are zero or nonzero).

A Schur form for $A \otimes B$ can be derived similarly. For example, suppose P and
Q are unitary matrices that reduce A and B, respectively, to Schur (triangular) form, i.e.,
$P^H AP = T_A$ and $Q^H BQ = T_B$ (and similarly if P and Q are orthogonal similarities
reducing A and B to real Schur form). Then

    $(P \otimes Q)^H (A \otimes B)(P \otimes Q) = (P^H \otimes Q^H)(A \otimes B)(P \otimes Q) = (P^H AP) \otimes (Q^H BQ) = T_A \otimes T_B.$

Corollary 13.13. Let $A \in \mathbb{R}^{n \times n}$ and $B \in \mathbb{R}^{m \times m}$. Then

1. $\operatorname{Tr}(A \otimes B) = (\operatorname{Tr} A)(\operatorname{Tr} B) = \operatorname{Tr}(B \otimes A)$.

2. $\det(A \otimes B) = (\det A)^m (\det B)^n = \det(B \otimes A)$.

Definition 13.14. Let $A \in \mathbb{R}^{n \times n}$ and $B \in \mathbb{R}^{m \times m}$. Then the Kronecker sum (or tensor sum)
of A and B, denoted $A \oplus B$, is the $mn \times mn$ matrix $(I_m \otimes A) + (B \otimes I_n)$. Note that, in
general, $A \oplus B \neq B \oplus A$.

Example 13.15.

1. Let $A = \begin{bmatrix} 1 & 2 & 3 \\ 3 & 2 & 1 \\ 1 & 1 & 4 \end{bmatrix}$ and $B = \begin{bmatrix} 2 & 1 \\ 2 & 3 \end{bmatrix}$. Then

    $A \oplus B = (I_2 \otimes A) + (B \otimes I_3) = \begin{bmatrix} 1 & 2 & 3 & 0 & 0 & 0 \\ 3 & 2 & 1 & 0 & 0 & 0 \\ 1 & 1 & 4 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 2 & 3 \\ 0 & 0 & 0 & 3 & 2 & 1 \\ 0 & 0 & 0 & 1 & 1 & 4 \end{bmatrix} + \begin{bmatrix} 2 & 0 & 0 & 1 & 0 & 0 \\ 0 & 2 & 0 & 0 & 1 & 0 \\ 0 & 0 & 2 & 0 & 0 & 1 \\ 2 & 0 & 0 & 3 & 0 & 0 \\ 0 & 2 & 0 & 0 & 3 & 0 \\ 0 & 0 & 2 & 0 & 0 & 3 \end{bmatrix}.$

The reader is invited to compute $B \oplus A = (I_3 \otimes B) + (A \otimes I_2)$ and note the difference
with $A \oplus B$.
2. Recall the real JCF

    $J = \begin{bmatrix} M & I_2 & & & \\ & M & I_2 & & \\ & & \ddots & \ddots & \\ & & & M & I_2 \\ & & & & M \end{bmatrix} \in \mathbb{R}^{2k \times 2k},$

where $M = \begin{bmatrix} \alpha & \beta \\ -\beta & \alpha \end{bmatrix}$. Define

    $E_k = \begin{bmatrix} 0 & 1 & & \\ & 0 & \ddots & \\ & & \ddots & 1 \\ & & & 0 \end{bmatrix} \in \mathbb{R}^{k \times k}.$

Then J can be written in the very compact form $J = (I_k \otimes M) + (E_k \otimes I_2) = M \oplus E_k$.

Theorem 13.16. Let $A \in \mathbb{R}^{n \times n}$ have eigenvalues $\lambda_i$, $i \in \underline{n}$, and let $B \in \mathbb{R}^{m \times m}$ have
eigenvalues $\mu_j$, $j \in \underline{m}$. Then the Kronecker sum $A \oplus B = (I_m \otimes A) + (B \otimes I_n)$ has mn
eigenvalues

    $\lambda_1 + \mu_1, \ldots, \lambda_1 + \mu_m, \lambda_2 + \mu_1, \ldots, \lambda_2 + \mu_m, \ldots, \lambda_n + \mu_m.$

Moreover, if $x_1, \ldots, x_p$ are linearly independent right eigenvectors of A corresponding
to $\lambda_1, \ldots, \lambda_p$ ($p \leq n$), and $z_1, \ldots, z_q$ are linearly independent right eigenvectors of B
corresponding to $\mu_1, \ldots, \mu_q$ ($q \leq m$), then $z_j \otimes x_i \in \mathbb{R}^{mn}$ are linearly independent right
eigenvectors of $A \oplus B$ corresponding to $\lambda_i + \mu_j$, $i \in \underline{p}$, $j \in \underline{q}$.

Proof: The basic idea of the proof is as follows:

    $[(I_m \otimes A) + (B \otimes I_n)](z \otimes x) = (z \otimes Ax) + (Bz \otimes x) = (z \otimes \lambda x) + (\mu z \otimes x) = (\lambda + \mu)(z \otimes x).$ □

If A and B are diagonalizable in Theorem 13.16, we can take $p = n$ and $q = m$ and
thus get the complete eigenstructure of $A \oplus B$. In general, if A and B have Jordan form
decompositions given by $P^{-1}AP = J_A$ and $Q^{-1}BQ = J_B$, respectively, then

    $[(Q \otimes I_n)(I_m \otimes P)]^{-1}[(I_m \otimes A) + (B \otimes I_n)][(Q \otimes I_n)(I_m \otimes P)]$
    $= [(I_m \otimes P)^{-1}(Q \otimes I_n)^{-1}][(I_m \otimes A) + (B \otimes I_n)][(Q \otimes I_n)(I_m \otimes P)]$
    $= [(I_m \otimes P^{-1})(Q^{-1} \otimes I_n)][(I_m \otimes A) + (B \otimes I_n)][(Q \otimes I_n)(I_m \otimes P)]$
    $= (I_m \otimes J_A) + (J_B \otimes I_n)$

is a Jordan-like structure for $A \oplus B$.
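The following is a minimal numerical check of Theorem 13.16 on the matrices of Example 13.15, assuming NumPy is available: the eigenvalues of $A \oplus B$ are the pairwise sums $\lambda_i + \mu_j$.

```python
# Eigenvalues of a Kronecker sum are all pairwise sums.
import numpy as np

A = np.array([[1., 2., 3.], [3., 2., 1.], [1., 1., 4.]])
B = np.array([[2., 1.], [2., 3.]])
n, m = A.shape[0], B.shape[0]

ksum = np.kron(np.eye(m), A) + np.kron(B, np.eye(n))   # A (+) B, 6 x 6

sums = np.array([l + u for u in np.linalg.eigvals(B)
                       for l in np.linalg.eigvals(A)])
print(np.allclose(np.sort_complex(np.linalg.eigvals(ksum)),
                  np.sort_complex(sums)))
```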
A Schur form for $A \oplus B$ can be derived similarly. Again, suppose P and Q are unitary
matrices that reduce A and B, respectively, to Schur (triangular) form, i.e., $P^H AP = T_A$
and $Q^H BQ = T_B$ (and similarly if P and Q are orthogonal similarities reducing A and B
to real Schur form). Then

    $[(Q \otimes I_n)(I_m \otimes P)]^H [(I_m \otimes A) + (B \otimes I_n)][(Q \otimes I_n)(I_m \otimes P)] = (I_m \otimes T_A) + (T_B \otimes I_n),$

where $[(Q \otimes I_n)(I_m \otimes P)] = (Q \otimes P)$ is unitary by Theorem 13.3 and Corollary 13.8.

13.3 Application to Sylvester and Lyapunov Equations

In this section we study the linear matrix equation

    $AX + XB = C,$   (13.3)

where $A \in \mathbb{R}^{n \times n}$, $B \in \mathbb{R}^{m \times m}$, and $C \in \mathbb{R}^{n \times m}$. This equation is now often called a Sylvester
equation in honor of J.J. Sylvester who studied general linear matrix equations of the form

    $\sum_{i=1}^{k} A_i X B_i = C.$

A special case of (13.3) is the symmetric equation

    $AX + XA^T = C$   (13.4)

obtained by taking $B = A^T$. When C is symmetric, the solution $X \in \mathbb{R}^{n \times n}$ is easily shown
also to be symmetric and (13.4) is known as a Lyapunov equation. Lyapunov equations
arise naturally in stability theory.

The first important question to ask regarding (13.3) is, When does a solution exist?
By writing the matrices in (13.3) in terms of their columns, it is easily seen by equating the
ith columns that

    $Ax_i + Xb_i = c_i = Ax_i + \sum_{j=1}^{m} b_{ji} x_j.$

These equations can then be rewritten as the $mn \times mn$ linear system

    $\begin{bmatrix} A + b_{11}I & b_{21}I & \cdots & b_{m1}I \\ b_{12}I & A + b_{22}I & & \vdots \\ \vdots & & \ddots & \\ b_{1m}I & \cdots & & A + b_{mm}I \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_m \end{bmatrix} = \begin{bmatrix} c_1 \\ c_2 \\ \vdots \\ c_m \end{bmatrix}.$   (13.5)

The coefficient matrix in (13.5) clearly can be written as the Kronecker sum $(I_m \otimes A) +
(B^T \otimes I_n)$. The following definition is very helpful in completing the writing of (13.5) as
an "ordinary" linear system.
Definition 13.17. Let $c_i \in \mathbb{R}^n$ denote the columns of $C \in \mathbb{R}^{n \times m}$ so that $C = [c_1, \ldots, c_m]$.
Then vec(C) is defined to be the mn-vector formed by stacking the columns of C on top of
one another, i.e.,

    $\operatorname{vec}(C) = \begin{bmatrix} c_1 \\ \vdots \\ c_m \end{bmatrix} \in \mathbb{R}^{mn}.$

Using Definition 13.17, the linear system (13.5) can be rewritten in the form

    $[(I_m \otimes A) + (B^T \otimes I_n)] \operatorname{vec}(X) = \operatorname{vec}(C).$   (13.6)

There exists a unique solution to (13.6) if and only if $[(I_m \otimes A) + (B^T \otimes I_n)]$ is nonsingular.
But $[(I_m \otimes A) + (B^T \otimes I_n)]$ is nonsingular if and only if it has no zero eigenvalues.
From Theorem 13.16, the eigenvalues of $[(I_m \otimes A) + (B^T \otimes I_n)]$ are $\lambda_i + \mu_j$, where
$\lambda_i \in \Lambda(A)$, $i \in \underline{n}$, and $\mu_j \in \Lambda(B)$, $j \in \underline{m}$. We thus have the following theorem.

Theorem 13.18. Let $A \in \mathbb{R}^{n \times n}$, $B \in \mathbb{R}^{m \times m}$, and $C \in \mathbb{R}^{n \times m}$. Then the Sylvester equation

    $AX + XB = C$   (13.7)

has a unique solution if and only if A and $-B$ have no eigenvalues in common.

Sylvester equations of the form (13.3) (or symmetric Lyapunov equations of the form
(13.4)) are generally not solved using the $mn \times mn$ "vec" formulation (13.6). The most
commonly preferred numerical algorithm is described in [2]. First A and B are reduced to
(real) Schur form. An equivalent linear system is then solved in which the triangular form
of the reduced A and B can be exploited to solve successively for the columns of a suitably
transformed solution matrix X. Assuming that, say, $n \geq m$, this algorithm takes only $O(n^3)$
operations rather than the $O(n^6)$ that would be required by solving (13.6) directly with
Gaussian elimination. A further enhancement to this algorithm is available in [6] whereby
the larger of A or B is initially reduced only to upper Hessenberg rather than triangular
Schur form.

The next few theorems are classical. They culminate in Theorem 13.24, one of many
elegant connections between matrix theory and stability theory for differential equations.

Theorem 13.19. Let $A \in \mathbb{R}^{n \times n}$, $B \in \mathbb{R}^{m \times m}$, and $C \in \mathbb{R}^{n \times m}$. Suppose further that A and B
are asymptotically stable (a matrix is asymptotically stable if all its eigenvalues have real
parts in the open left half-plane). Then the (unique) solution of the Sylvester equation

    $AX + XB = C$   (13.8)

can be written as

    $X = -\int_0^{+\infty} e^{tA} C e^{tB}\, dt.$   (13.9)

Proof: Since A and B are stable, $\lambda_i(A) + \lambda_j(B) \neq 0$ for all i, j so there exists a unique
solution to (13.8) by Theorem 13.18. Now integrate the differential equation $\dot{X} = AX + XB$
(with $X(0) = C$) on $[0, +\infty)$:

    $\lim_{t \to +\infty} X(t) - X(0) = A \int_0^{+\infty} X(t)\, dt + \left( \int_0^{+\infty} X(t)\, dt \right) B.$   (13.10)
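Both solution routes just described are easy to compare on a small example. The following is a minimal sketch, assuming SciPy is available; scipy.linalg.solve_sylvester implements a Bartels-Stewart-type Schur method in the spirit of [2], while the "vec" route forms the $mn \times mn$ Kronecker-sum system (13.6) explicitly.

```python
# Solving AX + XB = C: explicit vec formulation vs. the Schur-based solver.
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(5)
n, m = 4, 3
# Shift the random matrices so the spectra of A and -B are (generically) disjoint.
A = rng.standard_normal((n, n)) - 3 * np.eye(n)
B = rng.standard_normal((m, m)) - 3 * np.eye(m)
C = rng.standard_normal((n, m))

# Small-scale vec formulation (13.6): [(I kron A) + (B^T kron I)] vec(X) = vec(C).
Kmat = np.kron(np.eye(m), A) + np.kron(B.T, np.eye(n))
x = np.linalg.solve(Kmat, C.flatten(order='F'))   # vec stacks columns ('F' order)
X_vec = x.reshape((n, m), order='F')

X = solve_sylvester(A, B, C)                      # solves A X + X B = C
print(np.allclose(X, X_vec), np.allclose(A @ X + X @ B, C))
```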
Using the results of Section 11.1.6, it can be shown easily that $\lim_{t \to +\infty} e^{tA} = \lim_{t \to +\infty} e^{tB} = 0$.
Hence, using the solution $X(t) = e^{tA}Ce^{tB}$ from Theorem 11.6, we have that $\lim_{t \to +\infty} X(t) = 0$.
Substituting in (13.10) we have

    $-C = A \left( \int_0^{+\infty} e^{tA} C e^{tB}\, dt \right) + \left( \int_0^{+\infty} e^{tA} C e^{tB}\, dt \right) B$

and so $X = -\int_0^{+\infty} e^{tA} C e^{tB}\, dt$ satisfies (13.8). □

Remark 13.20. An equivalent condition for the existence of a unique solution to $AX +
XB = C$ is that $\begin{bmatrix} A & C \\ 0 & -B \end{bmatrix}$ be similar to $\begin{bmatrix} A & 0 \\ 0 & -B \end{bmatrix}$ (via the similarity $\begin{bmatrix} I & X \\ 0 & -I \end{bmatrix}$).

Theorem 13.21. Let $A, C \in \mathbb{R}^{n \times n}$. Then the Lyapunov equation

    $AX + XA^T = C$   (13.11)

has a unique solution if and only if A and $-A^T$ have no eigenvalues in common. If C is
symmetric and (13.11) has a unique solution, then that solution is symmetric.

Remark 13.22. If the matrix $A \in \mathbb{R}^{n \times n}$ has eigenvalues $\lambda_1, \ldots, \lambda_n$, then $-A^T$ has eigenvalues
$-\lambda_1, \ldots, -\lambda_n$. Thus, a sufficient condition that guarantees that A and $-A^T$ have
no common eigenvalues is that A be asymptotically stable. Many useful results exist concerning
the relationship between stability and Lyapunov equations. Two basic results due
to Lyapunov are the following, the first of which follows immediately from Theorem 13.19.

Theorem 13.23. Let $A, C \in \mathbb{R}^{n \times n}$ and suppose further that A is asymptotically stable.
Then the (unique) solution of the Lyapunov equation

    $AX + XA^T = C$

can be written as

    $X = -\int_0^{+\infty} e^{tA} C e^{tA^T}\, dt.$   (13.12)

Theorem 13.24. A matrix $A \in \mathbb{R}^{n \times n}$ is asymptotically stable if and only if there exists a
positive definite solution to the Lyapunov equation

    $AX + XA^T = C,$   (13.13)

where $C = C^T < 0$.

Proof: Suppose A is asymptotically stable. By Theorems 13.21 and 13.23 a solution to
(13.13) exists and takes the form (13.12). Now let v be an arbitrary nonzero vector in $\mathbb{R}^n$.
Then

    $v^T X v = \int_0^{+\infty} v^T e^{tA} (-C) e^{tA^T} v\, dt.$
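The following is a minimal numerical illustration of Theorem 13.24, assuming SciPy is available; scipy.linalg.solve_continuous_lyapunov solves $AX + XA^T = C$.

```python
# For a stable A and C = C^T < 0, the Lyapunov solution X is positive definite.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(6)
n = 4
A = rng.standard_normal((n, n)) - 3 * np.eye(n)   # eigenvalues pushed left
print(np.linalg.eigvals(A).real)                  # check: all negative

C = -np.eye(n)                                    # C = C^T < 0
X = solve_continuous_lyapunov(A, C)               # solves A X + X A^T = C
print(np.allclose(A @ X + X @ A.T, C))
print(np.linalg.eigvalsh(X))                      # all positive, so X > 0
```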
Since $-C > 0$ and $e^{tA}$ is nonsingular for all t, the integrand above is positive. Hence
$v^T X v > 0$ and thus X is positive definite.

Conversely, suppose $X = X^T > 0$ and let $\lambda \in \Lambda(A)$ with corresponding left eigenvector
y. Then

    $0 > y^H C y = y^H A X y + y^H X A^T y = (\lambda + \bar{\lambda}) y^H X y.$

Since $y^H X y > 0$, we must have $\lambda + \bar{\lambda} = 2 \operatorname{Re} \lambda < 0$. Since $\lambda$ was arbitrary, A must be
asymptotically stable. □

Remark 13.25. The Lyapunov equation $AX + XA^T = C$ can also be written using the
vec notation in the equivalent form

    $[(I \otimes A) + (A \otimes I)] \operatorname{vec}(X) = \operatorname{vec}(C).$

A subtle point arises when dealing with the "dual" Lyapunov equation $A^T X + XA = C$.
The equivalent "vec form" of this equation is

    $[(I \otimes A^T) + (A^T \otimes I)] \operatorname{vec}(X) = \operatorname{vec}(C).$

However, the complex-valued equation $A^H X + XA = C$ is equivalent to

    $[(I \otimes A^H) + (A^T \otimes I)] \operatorname{vec}(X) = \operatorname{vec}(C).$

The vec operator has many useful properties, most of which derive from one key
result.

Theorem 13.26. For any three matrices A, B, and C for which the matrix product ABC is
defined,

    $\operatorname{vec}(ABC) = (C^T \otimes A) \operatorname{vec}(B).$

Proof: The proof follows in a fairly straightforward fashion either directly from the definitions
or from the fact that $\operatorname{vec}(xy^T) = y \otimes x$. □

An immediate application is to the derivation of existence and uniqueness conditions
for the solution of the simple Sylvester-like equation introduced in Theorem 6.11.

Theorem 13.27. Let $A \in \mathbb{R}^{m \times n}$, $B \in \mathbb{R}^{p \times q}$, and $C \in \mathbb{R}^{m \times q}$. Then the equation

    $AXB = C$   (13.14)

has a solution $X \in \mathbb{R}^{n \times p}$ if and only if $AA^+ C B^+ B = C$, in which case the general solution
is of the form

    $X = A^+ C B^+ + Y - A^+ A Y B B^+,$   (13.15)

where $Y \in \mathbb{R}^{n \times p}$ is arbitrary. The solution of (13.14) is unique if $BB^+ \otimes A^+ A = I$.

Proof: Write (13.14) as

    $(B^T \otimes A) \operatorname{vec}(X) = \operatorname{vec}(C)$   (13.16)
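The identity in Theorem 13.26 is easy to check numerically; one only has to remember that vec stacks columns, which corresponds to Fortran-order flattening in NumPy. The following is a minimal sketch, assuming NumPy is available.

```python
# Checking vec(ABC) = (C^T kron A) vec(B) and vec(x y^T) = y kron x.
import numpy as np

rng = np.random.default_rng(7)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((4, 2))
C = rng.standard_normal((2, 5))

def vec(M):
    # Column-stacking vec operator of Definition 13.17.
    return M.flatten(order='F')

print(np.allclose(vec(A @ B @ C), np.kron(C.T, A) @ vec(B)))

# The underlying special case used in the proof.
x = rng.standard_normal(3); y = rng.standard_normal(2)
print(np.allclose(vec(np.outer(x, y)), np.kron(y, x)))
```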
by Theorem 13.26. This "vector equation" has a solution if and only if

    $(B^T \otimes A)(B^T \otimes A)^+ \operatorname{vec}(C) = \operatorname{vec}(C).$

It is a straightforward exercise to show that $(M \otimes N)^+ = M^+ \otimes N^+$. Thus, (13.16) has a
solution if and only if

    $\operatorname{vec}(C) = (B^T \otimes A)((B^+)^T \otimes A^+) \operatorname{vec}(C) = [(B^+ B)^T \otimes AA^+] \operatorname{vec}(C) = \operatorname{vec}(AA^+ C B^+ B)$

and hence if and only if $AA^+ C B^+ B = C$.

The general solution of (13.16) is then given by

    $\operatorname{vec}(X) = (B^T \otimes A)^+ \operatorname{vec}(C) + [I - (B^T \otimes A)^+ (B^T \otimes A)] \operatorname{vec}(Y),$

where Y is arbitrary. This equation can then be rewritten in the form

    $\operatorname{vec}(X) = ((B^+)^T \otimes A^+) \operatorname{vec}(C) + [I - (BB^+)^T \otimes A^+ A] \operatorname{vec}(Y)$

or, using Theorem 13.26,

    $X = A^+ C B^+ + Y - A^+ A Y B B^+.$

The solution is clearly unique if $BB^+ \otimes A^+ A = I$. □

EXERCISES

1. For any two matrices A and B for which the indicated matrix product is defined,
show that $(\operatorname{vec}(A))^T (\operatorname{vec}(B)) = \operatorname{Tr}(A^T B)$. In particular, if $B \in \mathbb{R}^{n \times n}$, then $\operatorname{Tr}(B) =
\operatorname{vec}(I_n)^T \operatorname{vec}(B)$.

2. Prove that for all matrices A and B, $(A \otimes B)^+ = A^+ \otimes B^+$.

3. Show that the equation $AXB = C$ has a solution for all C if A has full row rank and
B has full column rank. Also, show that a solution, if it exists, is unique if A has full
column rank and B has full row rank. What is the solution in this case?

4. Show that the general linear equation

    $\sum_{i=1}^{k} A_i X B_i = C$

can be written in the form

    $[B_1^T \otimes A_1 + \cdots + B_k^T \otimes A_k] \operatorname{vec}(X) = \operatorname{vec}(C).$
5. Let $x \in \mathbb{R}^m$ and $y \in \mathbb{R}^n$. Show that $x^T \otimes y = yx^T$.

6. Let $A \in \mathbb{R}^{n \times n}$ and $B \in \mathbb{R}^{m \times m}$.

(a) Show that $\|A \otimes B\|_2 = \|A\|_2 \|B\|_2$.

(b) What is $\|A \otimes B\|_F$ in terms of the Frobenius norms of A and B? Justify your
answer carefully.

(c) What is the spectral radius of $A \otimes B$ in terms of the spectral radii of A and B?
Justify your answer carefully.

7. Let $A, B \in \mathbb{R}^{n \times n}$.

(a) Show that $(I \otimes A)^k = I \otimes A^k$ and $(B \otimes I)^k = B^k \otimes I$ for all integers k.

(b) Show that $e^{I \otimes A} = I \otimes e^A$ and $e^{B \otimes I} = e^B \otimes I$.

(c) Show that the matrices $I \otimes A$ and $B \otimes I$ commute.

(d) Show that

    $e^{A \oplus B} = e^{(I \otimes A) + (B \otimes I)} = e^B \otimes e^A.$

(Note: This result would look a little "nicer" had we defined our Kronecker
sum the other way around. However, Definition 13.14 is conventional in the
literature.)

8. Consider the Lyapunov matrix equation (13.11) with

    $A = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}$

and C the symmetric matrix

    $\begin{bmatrix} 2 & 0 \\ 0 & -2 \end{bmatrix}.$

Clearly

    $X_s = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$

is a symmetric solution of the equation. Verify that

    $X_{ns} = \begin{bmatrix} 1 & 1 \\ -1 & 1 \end{bmatrix}$

is also a solution and is nonsymmetric. Explain in light of Theorem 13.21.

9. Block Triangularization: Let

    $S = \begin{bmatrix} A & B \\ C & D \end{bmatrix},$

where $A \in \mathbb{R}^{n \times n}$ and $D \in \mathbb{R}^{m \times m}$. It is desired to find a similarity transformation
of the form

    $T = \begin{bmatrix} I & 0 \\ X & I \end{bmatrix}$

such that $T^{-1}ST$ is block upper triangular.
(a) Show that S is similar to

    $\begin{bmatrix} A + BX & B \\ 0 & D - XB \end{bmatrix}$

if X satisfies the so-called matrix Riccati equation

    $C - XA + DX - XBX = 0.$

(b) Formulate a similar result for block lower triangularization of S.

10. Block Diagonalization: Let

    $S = \begin{bmatrix} A & B \\ 0 & D \end{bmatrix},$

where $A \in \mathbb{R}^{n \times n}$ and $D \in \mathbb{R}^{m \times m}$. It is desired to find a similarity transformation of
the form

    $T = \begin{bmatrix} I & Y \\ 0 & I \end{bmatrix}$

such that $T^{-1}ST$ is block diagonal.

(a) Show that S is similar to

    $\begin{bmatrix} A & 0 \\ 0 & D \end{bmatrix}$

if Y satisfies the Sylvester equation

    $AY - YD = -B.$

(b) Formulate a similar result for block diagonalization of

    $\begin{bmatrix} A & 0 \\ C & D \end{bmatrix}.$
Bibliography

[1] Albert, A., Regression and the Moore-Penrose Pseudoinverse, Academic Press, New York, NY, 1972.

[2] Bartels, R.H., and G.W. Stewart, "Algorithm 432. Solution of the Matrix Equation AX + XB = C," Comm. ACM, 15(1972), 820-826.

[3] Bellman, R., Introduction to Matrix Analysis, Second Edition, McGraw-Hill, New York, NY, 1970.

[4] Björck, Å., Numerical Methods for Least Squares Problems, SIAM, Philadelphia, PA, 1996.

[5] Cline, R.E., "Note on the Generalized Inverse of the Product of Matrices," SIAM Rev., 6(1964), 57-58.

[6] Golub, G.H., S. Nash, and C. Van Loan, "A Hessenberg-Schur Method for the Problem AX + XB = C," IEEE Trans. Autom. Control, AC-24(1979), 909-913.

[7] Golub, G.H., and C.F. Van Loan, Matrix Computations, Third Edition, Johns Hopkins Univ. Press, Baltimore, MD, 1996.

[8] Golub, G.H., and J.H. Wilkinson, "Ill-Conditioned Eigensystems and the Computation of the Jordan Canonical Form," SIAM Rev., 18(1976), 578-619.

[9] Greville, T.N.E., "Note on the Generalized Inverse of a Matrix Product," SIAM Rev., 8(1966), 518-521 [Erratum, SIAM Rev., 9(1967), 249].

[10] Halmos, P.R., Finite-Dimensional Vector Spaces, Second Edition, Van Nostrand, Princeton, NJ, 1958.

[11] Higham, N.J., Accuracy and Stability of Numerical Algorithms, Second Edition, SIAM, Philadelphia, PA, 2002.

[12] Horn, R.A., and C.R. Johnson, Matrix Analysis, Cambridge Univ. Press, Cambridge, UK, 1985.

[13] Horn, R.A., and C.R. Johnson, Topics in Matrix Analysis, Cambridge Univ. Press, Cambridge, UK, 1991.

[14] Kenney, C., and A.J. Laub, "Controllability and Stability Radii for Companion Form Systems," Math. of Control, Signals, and Systems, 1(1988), 361-390.

[15] Kenney, C.S., and A.J. Laub, "The Matrix Sign Function," IEEE Trans. Autom. Control, 40(1995), 1330-1348.

[16] Lancaster, P., and M. Tismenetsky, The Theory of Matrices, Second Edition with Applications, Academic Press, Orlando, FL, 1985.

[17] Laub, A.J., "A Schur Method for Solving Algebraic Riccati Equations," IEEE Trans. Autom. Control, AC-24(1979), 913-921.

[18] Meyer, C.D., Matrix Analysis and Applied Linear Algebra, SIAM, Philadelphia, PA, 2000.

[19] Moler, C.B., and C.F. Van Loan, "Nineteen Dubious Ways to Compute the Exponential of a Matrix," SIAM Rev., 20(1978), 801-836.

[20] Noble, B., and J.W. Daniel, Applied Linear Algebra, Third Edition, Prentice-Hall, Englewood Cliffs, NJ, 1988.

[21] Ortega, J., Matrix Theory. A Second Course, Plenum, New York, NY, 1987.

[22] Penrose, R., "A Generalized Inverse for Matrices," Proc. Cambridge Philos. Soc., 51(1955), 406-413.

[23] Stewart, G.W., Introduction to Matrix Computations, Academic Press, New York, NY, 1973.

[24] Strang, G., Linear Algebra and Its Applications, Third Edition, Harcourt Brace Jovanovich, San Diego, CA, 1988.

[25] Watkins, D.S., Fundamentals of Matrix Computations, Second Edition, Wiley-Interscience, New York, 2002.

[26] Wonham, W.M., Linear Multivariable Control. A Geometric Approach, Third Edition, Springer-Verlag, New York, NY, 1985.
Index
A–invariant subspace, 89
matrix characterization of, 90
algebraic multiplicity, 76
angle between vectors, 58
basis, 11
natural, 12
block matrix, 2
definiteness of, 104
diagonalization, 150
inverse of, 48
LU factorization, 5
triangularization, 149
C", 1
(pmxn i
(p/nxn 1
Cauchy–Bunyakovsky–Schwarz Inequal
ity, 58
Cayley–Hamilton Theorem, 75
chain
of eigenvectors, 87
characteristic polynomial
of a matrix, 75
of a matrix pencil, 125
Cholesky factorization, 101
co–domain, 17
column
rank, 23
vector, 1
companion matrix
inverse of, 105
pseudoinverse of, 106
singular values of, 106
singular vectors of, 106
complement
of a subspace, 13
orthogonal, 21
congruence, 103
conjugate transpose, 2
contragredient transformation, 137
controllability, 46
defective, 76
degree
of a principal vector, 85
determinant, 4
of a block matrix, 5
properties of, 4–6
dimension, 12
direct sum
of subspaces, 13
domain, 17
eigenvalue, 75
invariance under similarity transfor
mation, 81
elementary divisors, 84
equivalence transformation, 95
orthogonal, 95
unitary, 95
equivalent generalized eigenvalue prob
lems, 127
equivalent matrix pencils, 127
exchange matrix, 39, 89
exponential of a Jordan block, 91, 115
exponential of a matrix, 81, 109
computation of, 114–118
inverse of, 110
properties of, 109–112
field, 7
four fundamental subspaces, 23
function of a matrix, 81
generalized eigenvalue, 125
generalized real Schur form, 128
153
Index
Ainvariant subspace, 89
matrix characterization of, 90
algebraic multiplicity, 76
angle between vectors, 58
basis, 11
natural, 12
block matrix, 2
definiteness of, 104
diagonalization, 150
inverse of, 48
LV factorization, 5
triangularization, 149
en, 1
e
mxn
, 1
e ~ x n , 1
CauchyBunyakovskySchwarz Inequal
ity,58
CayleyHamilton Theorem, 75
chain
of eigenvectors, 87
characteristic polynomial
of a matrix, 75
of a matrix pencil, 125
Cholesky factorization, 101
codomain, 17
column
rank, 23
vector, 1
companion matrix
inverse of, 105
pseudoinverse of, 106
singular values of, 106
singular vectors of, 106
complement
of a subspace, 13
orthogonal, 21
153
congruence, 103
conjugate transpose, 2
contragredient transformation, 137
controllability, 46
defective, 76
degree
of a principal vector, 85
determinant, 4
of a block matrix, 5
properties of, 46
dimension, 12
direct sum
of subspaces, 13
domain, 17
eigenvalue, 75
invariance under similarity transfor
mation,81
elementary divisors, 84
equivalence transformation, 95
orthogonal, 95
unitary, 95
equivalent generalized eigenvalue prob
lems, 127
equivalent matrix pencils, 127
exchange matrix, 39, 89
exponential of a Jordan block, 91, 115
exponential of a matrix, 81, 109
computation of, 114118
inverse of, 110
properties of, 109112
field, 7
four fundamental subspaces, 23
function of a matrix, 81
generalized eigenvalue, 125
generalized real Schur form, 128
154 Index
generalized Schur form, 127
generalized singular value decomposition,
134
geometric multiplicity, 76
Holder Inequality, 58
Hermitian transpose, 2
higher–order difference equations
conversion to first–order form, 121
higher–order differential equations
conversion to first–order form, 120
higher–order eigenvalue problems
conversion to first–order form, 136
i, 2
idempotent, 6, 51
identity matrix, 4
inertia, 103
initial–value problem, 109
for higher–order equations, 120
for homogeneous linear difference
equations, 118
for homogeneous linear differential
equations, 112
for inhomogeneous linear difference
equations, 119
for inhomogeneous linear differen
tial equations, 112
inner product
complex, 55
complex Euclidean, 4
Euclidean, 4, 54
real, 54
usual, 54
weighted, 54
invariant factors, 84
inverses
of block matrices, 47
7, 2
Jordan block, 82
Jordan canonical form (JCF), 82
Kronecker canonical form (KCF), 129
Kronecker delta, 20
Kronecker product, 139
determinant of, 142
eigenvalues of, 141
eigenvectors of, 141
products of, 140
pseudoinverse of, 148
singular values of, 141
trace of, 142
transpose of, 140
Kronecker sum, 142
eigenvalues of, 143
eigenvectors of, 143
exponential of, 149
leading principal submatrix, 100
left eigenvector, 75
left generalized eigenvector, 125
left invertible, 26
left nullspace, 22
left principal vector, 85
linear dependence, 10
linear equations
characterization of all solutions, 44
existence of solutions, 44
uniqueness of solutions, 45
linear independence, 10
linear least squares problem, 65
general solution of, 66
geometric solution of, 67
residual of, 65
solution via QR factorization, 71
solution via singular value decomposition, 70
statement of, 65
uniqueness of solution, 66
linear regression, 67
linear transformation, 17
co–domain of, 17
composition of, 19
domain of, 17
invertible, 25
left invertible, 26
matrix representation of, 18
nonsingular, 25
nullspace of, 20
range of, 20
right invertible, 26
LU factorization, 6
block, 5
Lyapunov differential equation, 113
Lyapunov equation, 144
and asymptotic stability, 146
integral form of solution, 146
symmetry of solution, 146
uniqueness of solution, 146
matrix
asymptotically stable, 145
best rank k approximation to, 67
companion, 105
defective, 76
definite, 99
derogatory, 106
diagonal, 2
exponential, 109
Hamiltonian, 122
Hermitian, 2
Householder, 97
indefinite, 99
lower Hessenberg, 2
lower triangular, 2
nearest singular matrix to, 67
nilpotent, 115
nonderogatory, 105
normal, 33, 95
orthogonal, 4
pentadiagonal, 2
quasi–upper–triangular, 98
sign of a, 91
square root of a, 101
symmetric, 2
symplectic, 122
tridiagonal, 2
unitary, 4
upper Hessenberg, 2
upper triangular, 2
matrix exponential, 81, 91, 109
matrix norm, 59
1–, 60
2–, 60
∞–, 60
p–, 60
consistent, 61
Frobenius, 60
induced by a vector norm, 61
mixed, 60
mutually consistent, 61
relations among, 61
Schatten, 60
spectral, 60
subordinate to a vector norm, 61
unitarily invariant, 62
matrix pencil, 125
equivalent, 127
reciprocal, 126
regular, 126
singular, 126
matrix sign function, 91
minimal polynomial, 76
monic polynomial, 76
Moore–Penrose pseudoinverse, 29
multiplication
matrix–matrix, 3
matrix–vector, 3
Murnaghan–Wintner Theorem, 98
negative definite, 99
negative invariant subspace, 92
nonnegative definite, 99
criteria for, 100
nonpositive definite, 99
norm
induced, 56
natural, 56
normal equations, 65
normed linear space, 57
nullity, 24
nullspace, 20
left, 22
right, 22
observability, 46
one–to–one (1–1), 23
conditions for, 25
onto, 23
conditions for, 25
orthogonal
complement, 21
matrix, 4
projection, 52
subspaces, 14
vectors, 4, 20
orthonormal
vectors, 4, 20
outer product, 19
and Kronecker product, 140
exponential of, 121
pseudoinverse of, 33
singular value decomposition of, 41
various matrix norms of, 63
pencil
equivalent, 127
of matrices, 125
reciprocal, 126
regular, 126
singular, 126
Penrose theorem, 30
polar factorization, 41
polarization identity, 57
positive definite, 99
criteria for, 100
positive invariant subspace, 92
power (kth) of a Jordan block, 120
powers of a matrix
computation of, 119–120
principal submatrix, 100
projection
oblique, 51
on four fundamental subspaces, 52
orthogonal, 52
pseudoinverse, 29
four Penrose conditions for, 30
of a full–column–rank matrix, 30
of a full–row–rank matrix, 30
of a matrix product, 32
of a scalar, 31
of a vector, 31
uniqueness, 30
via singular value decomposition, 38
Pythagorean Identity, 59
Q–orthogonality, 55
QR factorization, 72
R^n, 1
R^{m×n}, 1
R_r^{m×n}, 1
R_n^{n×n}, 1
range, 20
range inclusion
characterized by pseudoinverses, 33
rank, 23
column, 23
row, 23
rank–one matrix, 19
rational canonical form, 104
Rayleigh quotient, 100
reachability, 46
real Schur canonical form, 98
real Schur form, 98
reciprocal matrix pencil, 126
reconstructibility, 46
regular matrix pencil, 126
residual, 65
resolvent, 111
reverse–order identity matrix, 39, 89
right eigenvector, 75
right generalized eigenvector, 125
right invertible, 26
right nullspace, 22
right principal vector, 85
row
rank, 23
vector, 1
Schur canonical form, 98
generalized, 127
Schur complement, 6, 48, 102, 104
Schur Theorem, 98
Schur vectors, 98
second–order eigenvalue problem, 135
conversion to first–order form, 135
Sherman–Morrison–Woodbury formula, 48
signature, 103
similarity transformation, 95
and invariance of eigenvalues, 81
orthogonal, 95
unitary, 95
simple eigenvalue, 85
simultaneous diagonalization, 133
via singular value decomposition, 134
singular matrix pencil, 126
singular value decomposition (SVD), 35
and bases for four fundamental subspaces, 38
and pseudoinverse, 38
and rank, 38
characterization of a matrix factorization as, 37
dyadic expansion, 38
examples, 37
full vs. compact, 37
fundamental theorem, 35
nonuniqueness, 36
singular values, 36
singular vectors
left, 36
right, 36
span, 11
spectral radius, 62, 107
spectral representation, 97
spectrum, 76
subordinate norm, 61
subspace, 9
A–invariant, 89
deflating, 129
reducing, 130
subspaces
complements of, 13
direct sum of, 13
equality of, 10
four fundamental, 23
intersection of, 13
orthogonal, 14
sum of, 13
Sylvester differential equation, 113
Sylvester equation, 144
integral form of solution, 145
uniqueness of solution, 145
Sylvester's Law of Inertia, 103
symmetric generalized eigenvalue problem, 131
total least squares, 68
trace, 6
transpose, 2
characterization by inner product, 54
of a block matrix, 2
triangle inequality
for matrix norms, 59
for vector norms, 57
unitarily invariant
matrix norm, 62
vector norm, 58
variation of parameters, 112
vec
of a matrix, 145
of a matrix product, 147
vector norm, 57
1–, 57
2–, 57
∞–, 57
p–, 57
equivalent, 59
Euclidean, 57
Manhattan, 57
relations among, 59
unitarily invariant, 58
weighted, 58
weighted p–, 58
vector space, 8
dimension of, 12
vectors, 1
column, 1
linearly dependent, 10
linearly independent, 10
orthogonal, 4, 20
orthonormal, 4, 20
row, 1
span of a set of, 11
zeros
of a linear dynamical system, 130
. . For be n. while [ ~ ] = I . .. For example..18. . In]Rn. ..12 12 Chapter 2.18. while We can also determine components of v with respect to another basis. n unique. en} natural Now let b l . e2. Vector Spaces Example 2.. components represents the vector v with respect to the basis B. en} is a basis for IR" (sometimes called the natural basis). VI ] : = vlel + V2e2 + . . n for Then for all e there exists a unique ntuple {E1 ... Definition 2.. for]Rn [e\. + vne n · Vn We can also determine components of v with respect to another basis. n } such that for V. We say that the vector x of of of (b1. . .. write [ ] = XI • [ ~ + ] X2 • [ _! ] =[ ~ = [ ~ Then Then ! ][ ~~ l 1 [ ~~ ] = [ .16. ] l = = Theorem 2. V is said to X for be ndimensional or have dimension n and we write dim(V) n or dim V n. particular basis considered. . Example 2.17... .dimensional or have dimension n and we write dim (V) = n or dim V — n. The number of elements in a basis of a vector space is independent of the Theorem 2. Then for all v E V there exists a unique ntuple {~I'. ..E~n} such that v= where ~Ibl + .... In Rn.l. The scalars {Ei}are called the components (or sometimes the coordinates) components coordinates) Definition 2.15. For . If V= 0) V is Definition 2. r I [ . For example. B ~ [b". bn be a basis (with a specific order associated with the basis vectors) b1. If a basis X for a vector space V(Jf 0) has n elements.. The number of elements in a basis of a vector space is independent of the particular basis considered. with respect to the basis with respect to the basis {[~l[!J} we have we have [ ~ ~ ] = 3. + ~nbn = Bx.b..19.[ ~  ] + 4· [ ~ l To see this. .. {~i } of v with respect to the basis {b l . .. x ~ D J Definition 2... We represents B.16..19.. bn]} and are unique.. {el. el + 2 .
Sums and Intersections of Subspaces 2. dim{A E ~nxn :: A is upper (lower) triangular} = 1/2n(n+ 1). for finite k).. and S are defined respectively by: 1.a S. 2. R j ) = 0 am/ Ri = T). dim(~mXn) = mn. i e m." 3. S. we define dim(O) = 0. JF') be a vector space and let 71. n n S = 0. Example 2. y>f (L L .2. The union of two subspaces.24. we define dim(O) = O. s e S}. A consistency. t1]) .23.22. K + S S." The collection of E. j e n. + 7^ =: L R. The union of two subspaces. a eA CiEA f] n *R. V (in general. Thus.20. n (^ ft. dim(C[to.24. and because the 0 vector is in any vector space. n 5 S. for finite k). 2.) = 0 and ]P ft. i E m. R = 0. n S {r s : r E U.21. dim{A e Rnxn A is upper (lower) triangular} = !n(n 1). V for an arbitrary index set A). R S C V (in general. determine 1/2n(n + 1) symmetric basis matrices. The sum and intersection Definition 2. H. U S. j)th location. Note: Check that a basis for ~mxn is given by the mn matrices Eij.4 Sums and Intersections of Subspaces Subspaces Definition 2. and S are said to be complements of each other in T. S c V. Let (V. Theorem 2. RI \ h Rk =: ]T ft/ C V. V is infinitedimensional. where Efj is a matrix all of whose elements are 0 except for a 1 in the (i.j matrices can be called the "natural basis matrices.+00. where Eij is a matrix all of whose elements are 0 except for a 1 in the (i. Let (V.. 1. R + S = {r + s : r e R. .4 2. 2. Theorem 2.18 says that dim (V) the number of elements in a basis. The subspaces Rand S are said to be complements of each other in T. Ra C V/or an arbitrary index set A). dim(Rn)=n. tJJ) = +00.20. R S = (in general. Note: Check that a basis for Rmxn is given by the mn matrices Eij. Remark 2. otherwise. otherwise. and 2. T = R 0 S is the direct sum of R and S if = REB S is the direct sum ofR and S if Definition 2. Theorem 2. determine !n(n 1) symmetric basis matrices.18 says that dim(V) = the number of elements in a basis. and because the 0 vector is in any vector space. Sums and Intersections of Subspaces 13 13 consistency.=1 K k 1=1 2. is not necessarily a subspace. and 1. The sum and intersection ofR and S are defined respectively by: of R. V (in general. 72. 2. is not necessarily a subspace. V is infinitedimensional. dim(R mxn ) mn. R H S = {v : v e R and v e S}. s E 5}. 1. R.) (To see why.22.= T). . Example 2. V. 2. V. vector space V is finitedimensional if there exists a basis X with n < +00 elements. ft n 5 = {v : v E 7^ and v E 5}.) 2 5. 1. dim{A E ~nxn :: A = AT} = !n(n + 1). U + S = T (in general ft. A vector space V is finitedimensional if there exists a basis X with n < +00 elements. j E ~. 2.4. dim{A € Rnxn A AT} = {1/2(n 1 (To see why. J)th location. 4. 5. dim(~n) = n. U\ + 1. Remark 2. 1.23.4. « The subspaces R. R D S C V (in general. S S. R C S. The collection of Eij matrices can be called the "natural basis matrices. Thus.. Theorem 2. F) be a vector space and let R. Definition 2.21.
.27. Suppose {VI. . Avn are also orjRn. XI. 2. r2 E Rand s1. 0 S..c jRnxn. ft. one can easily verify the validity = n. vd must be a linear combination of the others. Vector Spaces Remark 2. The complement of R (or S) is not unique. ft. 1 TIT The first matrix on the righthand side above is in S while the second is in R. S2 e rl Sl r2 Then r. and let R"x". n Proof: A e jRnxn written Proof: This follows easily from the fact that any A E R"x" can be written in the form A=2:(A+A )+2:(AA).... dim(R + S) = dim(R) + dim(S)  dim(R n S). let R be the set of skewsymmetric matrices in (V.r .20..5.. and SI.r2 £ Rand 52 . Show that Av\. For arbitrary subspaces ft. IF) = (jRnxn. we must have rl = r2 and s\ = si from S2 from which uniqueness follows. Vn thonormal if and only if A E R"x" is orthogonal. 2. For arbitrary subspaces R. Av" •.. and let S Let (V. every t € can be written uniquely in the form r s with r e R and s e S. 2.20. We discuss more about orthogonal complements elsewhere in the text..vn be orthonormal vectors in R". Then any other distinct line through the origin is a complement of R. Xk E jRn 2.. .28.26. .25. jRn xn . suppose an arbitrary vector t E T can be written in two ways t e as t S2..26.. mutually [x\.. which uniqueness follows. while U n £ is the set of diagonal matrices in Rnxn. Find the components of the vector v = [4 If with respect to this basis. . v = [4 l]r jR2.27.c the Example 2. .. ft S) = jR2 and let ft be any line through the origin.29.29. Example 2.. Theorem 2.14 14 Chapter 2. Let x\. Then it may be checked that U + .. unique ft..28. consider V = R2 unique. Prove that viand V2 form a a basis Consider v\ = [2 l]r 1*2 = [3 l] Prove that VI and V2 form basis 2 for R .. X2. where r1.. triangular + L = R xn un. Then V = U $ S.2 and 2. S of a vector space V. Example 2.27.. together with Examples 2. Consider the vectors VI — [2 1f and V2 = [3 1f. 0 The statement of the second part is a special case of the next theorem. every t E T can be written uniquely in the form tt = r + s with r E Rand s E S.. Since ft fl 0.si e S. . . Vector Spaces Chapter 2. Theorem 2. *2.. Example 2. . For example. R). F) (R n x n . . r2 e R. But as t = r1 + s1 = r2 + S2. Among all the complements there is a unique one orthogonal to R.27. But r1 –r2 E ft and S2 — SI E S. Then show that one of the vectors 1. {vi. of the formula given in Theorem 2. AVn are orv\.r2 S2 . . S2 E S. . Then Theorem 2. Since R n S = 0. Let VI. the set in jRnxn. . Using the fact that dim {diagonal (diagonal matrices} = n. Xk} must be a linearly independent set.. Then any other distinct line through the origin is and let R be any line through the origin. ft be the set of symmetric matrices in R" x ".s\. dim(T) = dim(R) + dim(S). where rl. = dim(ft) + Proof: To Proof: To prove the first part. we must have r\ ri and SI rl . Xk} must be a linearly independent set. EXERCISES EXERCISES 1.25. Suppose =R EB Then 1. Suppose T = R O S. Then r1 — r2 = s2— SI. jR). x/c E R" be nonzero mutually orthogonal vectors. . 3. D Theorem 2.. .. e jRnxn 4. jRnxn.c = jRnnxn jRn xn. Show that {XI.. validity of the formula given in Theorem 2. Vk} is a linearly dependent set. S of a vector space V. Let U be the subspace of upper triangular matrices in E" x" and let £ be the subspace of lower triangUlar matrices in Rnxn.
Exercises Exercises
15
5. Let denote the set of polynomials of degree less than or equal to two of the form 5. Let P denote the set of polynomials of degree less than or equal to two of the form Po + PI X + pix2, where Po, PI, p2 E R. Show that P is a vector space over R Show Po p\x P2x2, where po, p\, P2 e R Show that is a vector space over E. Show Find the components of the that the polynomials 1, *, and 2x2 — 1 are a basis for P. Find the components of the that the polynomials 1, x, and 2x2  1 are a basis for 2 2 with respect to this basis. polynomial 2 + 3x 4x polynomial 2 + 3x + 4x with respect to this basis.
6. Prove Theorem 2.22 (for the case of two subspaces Rand S only). 6. Prove Theorem 2.22 (for the case of two subspaces R and only).
7. Let n denote the vector space of polynomials of degree less than or equal to n, and of 7. Let Pn denote the vector space of polynomials of degree less than or equal to n, and of the form p ( x ) = Po + PIX + ...•+ Pnxn,, where the coefficients Pi are all real. Let PE po + p\x + • • + pnxn where the coefficients /?, are all real. Let PE the form p(x) denote the subspace of all even polynomials in Pn,, i.e., those that satisfy the property denote the subspace of all even polynomials in n i.e., those that satisfy the property p(—x} = p(x). Similarly, let PQ denote the subspace of all odd polynomials, i.e., p( x) = p(x). Similarly, let Po denote the subspace of all odd polynomials, i.e., those satisfying p(—x} = p(x). Show that Pn = PE EB Po· those satisfying p(x) = – p ( x ) . Show that n = PE © PO8. Repeat Example 2.28 using instead the two subspaces 7" of tridiagonal matrices and 8. Repeat Example 2.28 using instead the two subspaces T of tridiagonal matrices and U of upper triangular matrices. U of upper triangular matrices.
This page intentionally left blank This page intentionally left blank
Chapter 3 Chapter 3
Linear Transformations Linear Transformations
3.1 3.1
Definition and Examples Definition and Examples
definition of a linear (or function, We begin with the basic definition of a linear transformation (or linear map, linear function, or linear operator) between two vector spaces. or linear operator) between two vector spaces.
Let IF) and (W, IF) be vector spaces. Then I: : > a Definition 3.1. Let (V, F) and (W, F) be vector spaces. Then C : V + W is a linear transformation if and only if transformation if and only if I:(avi £(avi + {3V2) = aCv\ + {3I:V2 far all a, {3 e F andfor all v},v2e V. pv2) = al:vi fi£v2 for all a, £ ElF and far all VI, V2 E V. The vector space V is called the domain of the transformation C while VV, the space into called the of the transformation I: while W, the space into The vector space which it maps, is called the which it maps, is called the codomain.
Example 3.2. Example 3.2.
1. Let F = R and take V = W = PC[f0, +00). 1. Let IF JR and take V W PC[to, +00). Define I: : PC[to, +00) > PC[to, +00) by Define £ : PC[t0, +00) + PC[t0, +00) by
vet)
f+
wet) = (I:v)(t) =
11
to
e(tr)v(r) dr.
2. Let F = R and take V = W = JRmxn. Fix M e R m x m . Let IF JR and V W R mx ". Fix ME JRmxm. Define £ : JRmxn + M mxn by I: : R mx " > JRmxn by
X
f+
Y
= I:X = MX.
3. Let F = R and take V = P" = {p(x) = a0 + ct}x H ... + anx"n : a, E R} and ao alx + ai E JR} and 3. Let IF = JR and take V = pn (p(x) h anx W = pnl. w = pn1. I: : —> Define C.: V + W by I: p = p', where'I denotes differentiation with respect to x. Lp — p', where denotes differentiation x.
17
IF) veniently in matrix form. {W jj' j e !!!..n. w m] and where W = [WI. is by its action on a basis. i e ~}. say. .} are bases for V and W. + ~nLvn =~IWal+"'+~nWan = WAx. Change of basis then corresponds naturally to appropriate matrix multiplication. . F) is linear and further suppose that {Vi. Thus. W = R m and [ v i . but this is usually not done. Thus. n A= al : ] E JR. j E raj. usually L The action of £ on an arbitrary vector V e V is uniquely determined (by linearity) v E V uniquely determined by its action on a basis...} are the usual (natural) bases. Then the {w j.m usually causes no naturally confusion.mxn a mn represents L since represents £ since LVi = aliwl =Wai. i. Li near Transformations Chapters. L IF) ~ (W... Linear Transformations 3. j E !!!. LV WA since x was arbitrary. w ] and L is the ith column of A.2 Matrix Representation of Linear Transformations Matrix Representation of Linear Transformations Linear transformations between vector spaces with specific bases can be represented conLinear transformations between vector spaces with specific bases can be represented conSpecifically.. i. In other words. suppose £ : (V. and hence x..2 3. Thus.e. Note that A = Mat £ depends on the particular bases for V and W. We thus commonly identify A as a linear transformation with its matrix representation. for V and W) is the representation of £i>. When V = R"..... in the notation.18 Chapter 3.e.• + E nVnn = V x (where u. F) —>• (W. i E n}. transformation with its matrix representation. W = lR. j E m} are the usual (natural) bases WA linea LV L = A. and hence jc. + . z'th V This could be reflected by subscripts. if V = E1v1 + . + amiWm where W = [w\. j e m}. i e n} and {Wj. is arbitrary).. Thus.. if v = ~I VI + • • + ~n v = Vx (where v. then LVx = Lv = ~ILvI + . When V = JR. E ~} e m] V ith column of A = Mat £ (the matrix representation of £ with respect to the given bases = L L for V and W) is the representation of LVi with respect to {w j... £V = W A since x was arbitrary. respectively. with respect to {w }•.. Thinking of both as a matrix and as a linear transformation from JR.m and {Vi. We identify A the equation £V = W A becomes simply £ = A. {u. then arbitrary)." to Rm usually causes no Thinking of A both as a matrix and as a linear transformation from Rn to lR. Specifically. [ w . In other words.
=1 Outer Product: Let x e Rm. y E Rn. Inner Product: n xTy = Lx.y.3. Then their inner product is the scalar E ~n. in the same order in both the diagram and the equation. Note that in most texts. xx T XX ). y e Rn. Two Special Cases: Two Special Inner Product: Let x.3. the arrows above are reversed as follows: C However. dim V = n. A rankone symmetric matrix can be written in above (or xy if A E C xyH e c ).3. Composition of Transformations 19 19 3. we have C A B . expressed mxp nxp formula cij = L k=1 n aikbkj. and dim W = m. and if we associate matrices with the transformations in the usual way. Then we can define a new transformation C as follows: to W. dimV = n. it might be useful to prefer the former since the transformations A and B appear in the same order in both the diagram and the equation. y e ~n. Then we can define a new transformation C as follows: C The above diagram illustrates the composition of transformations C = AB. Note that in The above diagram illustrates the composition of transformations C = AB. If dimZ// = p.. then composition of transformations corresponds to standard matrix mUltiplication. and W and transformations B from U to V and A from Wand V to W. Then their outer product is the m x n E ~m. . If dimU = p. . V. the form XXT (or xx HH).3 Composition of Transformations Composition Consider three vector spaces U. That is. Composition ofTransformations 3. That is. and dim W m. then composition of transformations corresponds to standard matrix multiplication. and if we associate matrices with the transformations in the usual way. Outer Product: matrix matrix mxn E R Note that any rankone matrix A e ~mxn can be written in the form A = xyT = xyT H mxn mxn). The above is sometimes expressed componentwise by the C — A B .
where 8ij is the Kronecker delta defined by Kronecker delta defined by 8 = {I0 ij ifi=j. 3. I ~VI VI ^/v..8. subspaces of different spaces.2.IN. See also the of Section 3.•.8.7. is the set {w e w = Av for some v e V}. is an orthonormal set. Theorem 3. IS an orthogonaI set. is the set {w E W : w = Av for some v E V}. Definition 3. an M. Theorem 3. {[ ~ J..7.[ :~~ J} . 1. 2. Let A E Rmxn. Let A : V + be a linear transformation. Example 3. vk] be a set of nonzero vectors Vi E ~n. Definition 3. then ~ . . Note that N(A) and R(A) are. Let {VI. Let A : V + W be a linear transformation. the same symbol (A) is Note that in Theorem and throughout the text. usual (natural) bases. Then 1.4 Structure of Linear Transformations Structure of Linear Transformations Let A : V —> W be a linear transformation. (A). N(A) c V. if i f= j.. Note that in Theorem 3. is an orthonormal set. orthonormal set. ~.. be orthogonal if' vjvj 0 for i ^ j and orthonormal if vf vj 8ij' where 8tj is the be orthogonal if vr v j = 0 for i f= j and orthonormal if vr v j = 8ij.. See also the last paragraph of Section 3.i •.. {[ ~~i ]. denoted N(A). R(A) = {Av : v E V}.5. If in of = [a\. Definition3. denotedR(A). vd of u. € 1Tlln is an orthogonal set.5 and throughout the text. denoted Im(A). The nullspace of The of denoted N(A). .. denotedlZ( A). . essentially following immediately from the defiProof: The proof of this theorem is easy. If {VI. e Rn. 0 nition. Li near Transformations 3. The set is said to 3. . then {I —/==. D Remark 3. in general. . 1. then then R(A) = Sp{al. The nullspace of kernel of and A is also known as the kernel of A and denoted Ker (A). . an]..3.4. Then Let A : V —>• be a linear transformation. —/=== . .4 3. an} . . Equivalently. N(A) S. ~ 3 .. Note that N(A) and R(A) are.. The range of A. ~}  ISisan 3.. The range of A is also known as the image of A and — {Av e V}. {v1... R(A) C W. { t > ...20 20 Chapter 3.an].Vk } With Vi E. 2. is the {v e V Av = 0}.. is an orthogonal set. vi ^/v'k vk ~~~ ] ..6. . .. essentially following immediately from the definition. The range of A. . Vk} with u.. the same symbol (A) is used to denote both a linear transformation and its matrix representation with respect to the used to denote both a linear transformation and its matrix representation with respect to the usual (natural) bases.2. subspaces of different spaces.. of of denoted Im(A). V." • orthonormal set. is the set {v E V : Av = O}. R ( A ) S. then Proof: The proof of this theorem is easy. [: J} is an orthogonal set. W. e ~mxn. in general. 2. LinearTransformations Chapter 3.3. The nullspace of A. If A is written in terms of its columns as A = [ai.
+ X2 + X3 = 0. = S.11. the computation involved is simply to find all nontrivial (i. n~. of course. 4. vk} e ]Rn vector. then give rise to redundant equations). Vk} be an orthonormal basis for S and let x E Rn be an arbitrary {v1. (n + S)~ = nl. Then the orthogonal complement of S is defined as the set c ]Rn. (S~)l. Theorem 311 Let Theorem 3. nonzero) solutions of the system of equations 3xI 4xI + 5X2 + 7X3 = 0. Then it can be shown that Working from the definition.9.= {v e Rn : vTs=OforallsES}. The proofs of the other results are left as Proof: left exercises. Proof: We prove and discuss only item 2 here... . Set vector. n S~. Structure of Linear Transformations 21 21 Definition 3... ]Rn. S \B S~ = ]Rn.10. Let S <. Structure of Li near Transformations 3.e. k =X . Let 3. Rn. 3. if and only if S~ <. S1.10. 2. .4. Let R S C Rn The S <. Set XI X2 = L (xT Vi)Vi. including dependent spanning vectors (which would.. .3.. .=1 XI.4. Any set of vectors will do. Then the of defined T S~={VE]Rn: V S = 0 for all s e S}. n <. Then n. Note that there is nothing special about the two vectors in the basis defining S being orthogonal. Example 3. Let {VI. (n n S)~ = n~ + S~. S 5. 6.
. In other words.xn. Similarly. Then {v E R" : Av = 0} is sometimes called the right nullspace of A.= Af(AT) ) (i.l = Rn. Thus..13..l = R(A T}. where x e U(A) and y e ft(A)1. N(A). But yT Ax = (ATyy{ x. Linear Transformations Then x\ E <S and.. we form AT v. (Note: This also holds for infinitedimensional vector spaces. can write vectors in a unique way with respect to the corresponding subspaces. But then (x'1 —XI)TT (x. Let A : IRn > Rm. i. x 1 E Sand x2. Ax = 0 if and only if x orthogonal is orthogonal to all vectors of the form AT y. {w E IR m : WT A = O} is called the left nullspace of A. +x~).l.) Proof: To x E N(A). every vector w in the codomain space IRm can be written in a unique way as w = x+y. See also Theorem 2.l = 0 since the only vector s E S orthogonal to S1 = IRn.11 can be combined to give two very fundamental decompositions damental and useful decompositions of vectors in the domain and codomain of a linear and transformation A. = x'1+ x'2.e. . . (Note: This also holds for infinitedimensional vector spaces.XITVj =XTVjXTVj=O.l N(A T (i. the right nullspace is A/"(A) while the left nullspace is N(A T ). Ax = Proof: To prove the first part. See also Theorem 2.. Suppose. Then Theorem 3. for example. Vk and hence to any linear combination of these we see that X2 is orthogonal to VI. D Definition 3. Let A : IRn —> Rm. Then {v e IRn : A v = O} is sometimes called the Definition 3.. ft(Ar) (i. E R(A) and E R(A). we decompositions. Let A : R" + IRm. including itself) is 0.. We also have that S U S. This key theorem becomes very easy to remember by carefully studying and underThis key theorem becomes very easy to remember by carefully studying and understanding Figure 3. take an arbitrary x e A/"(A). many properties of A can be developed in terms of the four fundamental subspaces to IRm. The proof of the second part is similar and is left as an exercise. standing Figure 3. transformation A.(A)1~ — J\f(ATT ). Then Ax = 0 and this is an and equivalent to yT Ax = 0 for all v. Then T (x. y.1 in the next section. every vector in the codomain space R m can be written ina unique way asw = x+y. IRm = R(A) EBN(A T».. every vector v in the domain space IRn can be written in a unique way as v = x 7.14 (Decomposition Theorem). Thus.X2) 0 since (x'1 — x1) (x' 2 — x2) = 0 by definition of ST. + x~. Theorem 3. Li near Transformations Chapters.e. and x2 = x~.. .26.l. IRn = M(A) EB R(A T)). 0 x1 — x'1 andx2 = x2. X2 is orthogonal to any vector in S.12..5 Four Fundamental Subspaces Four Fundamental Subspaces Consider a general matrix A E lR. When thought of as a linear transformation from E" Consider a general matrix A € E^ x ".22 22 Chapter 3.) 2. E S and X2. 3. We S n S1 =0 the e orthogonal everything in (i. D Theorem 3.. We have thus shown that S + S. Then X2 = x. It is also easy to see directly that. (Note: This holds only for finitedimensional vector spaces. XI = x. It can write vectors in a unique way with respect to the corresponding subspaces.12 and part 2 of Theorem 3. everything in S (i. x. Thus. In other words. x~ e S. that x = XI for example. Let A : Rn + Rm. Clearly. where x\. Ax = 0 if and only if x equivalent to yT Ax = 0 for all y. i.1 in the next section. every vector v in the domain space R" can be written in a unique way as v = x + y. Since x was arbitrary. where XI.XI/ (x~ . XI) (which follows by rearranging the equation XI +X2 = x. we see that x2 is orthogonal to v1.12. – x1) = 0 since 0 by definition of S. R(A). When thought of as a linear transformation from IR n to Rm. Thus. 
x~ X2 = (x. Vk and hence to any linear combination of these vectors.. many properties of A can be developed in terms of the four fundamental subspaces .•.e.e. Let A : Rn + IRm.l.26. since T x 2 Vj = XTVj .) 2. 0 The proof of the second part is similar and is left as an exercise.5 3. X2 is orthogonal to any vector in S.e. Then R(A r ). Suppose. But yT Ax = ( A T ) x.) N(A)1" spaces. when we have such direct sum decompositions. right nullspace of A. We have thus shown that vectors. 'R. x'2 E S1..e.12 and part 2 of Theorem 3.l where x € M(A) and y € J\f(A)± = R(AT) (i. the right nullspace is N(A) while the left nullspace is J\f(AT). R" N(A) 0 ft(Ar ».l = N(A ). Let A : IRn > IRm.x1) (x'1 xd x2 — X2 = — (x'1 — x1) (which follows by rearranging the equation x1+x2 = x'1 + x'2). Then Theorem 3. established that N(A) U(AT ). E N(A) and E N(A). Theorem 3.13. 2. Similarly. Clearly. since Then XI e S and.14 (Decomposition Theorem). that x = x1 + x2. Rm = 7l(A) 0 M(AT)). (Note: This for finitedimensional 1.e. x E R(A r ) Since x 1 have established thatN(A).11 can be combined to give two very funTheorem 3. x e R(AT). .e.l = 7£(AT). including itself) is O. Then 1. But then (x. (w e Rm : w T A = 0} is called the left nullspace of A. .
Four fundamental subspaces. fundamental subspaces. mation.1. Figure 3. Let V and W be vector spaces and let A : V + W be a linear transforDefinition 3. Figure 3. R(A). 3.15. The row rank of A is column rank of of independent row rank of .1 makes many key properties seem almost N(A)T. and in illustrating concepts such as controllability and observability. This is sometimes called 3. A is onto (also called epic or surjective) ifR(A) = W. t= V2 ===} AVI t= AV2 .1. properties 7£(A). N(A).16. A f ( A ) . A is onetoone or 11 (also called monic or injective) if N(A) = O.5.1 obvious and we return to this figure frequently both in the context of linear transformations obvious and we return to this figure frequently both in the context of linear transformations and in illustrating concepts such as controllability and observability.15.16. 1. Definition 3. Then rank(A) = dim R(A).(A)^.3. rank(A) dimftCA).(A) = W. Let and W be vector spaces and let A : motion. Two equivalent 2. Four Fundamental Subspaces 23 23 A r N(A)1 r EB {OJ X {O}Gl nr m r Figure 3. 2. Four Fundamental Subspaces 3. and N(A)1. A is onetoone or 11 (also called monic or infective) ifJ\f(A) = 0. 1.5. 'R. A is onto (also called epic or surjective) ifR. Two equivalent characterizations of A being 11 that are often easier to verify in practice are the characterizations of A being 11 that are often easier to verify in practice are the following: following: (a) AVI = AV2 (b) VI ===} VI = V2 . be a linear transforDefinition 3. Let A : E" + Rm. R(A)1. IR n > IRm. the column rank of A (maximum number of independent columns).
x x e R" x\ X2.(A) = dimA/^A^ 1 if that if {VI. the following string of equalities follows easily: "column rank of A" = rank(A) = dim R(A) = dimN(A)L1 = dim R(AT) = rank(AT)) = A" rank(A) = dim7e(A) = dim A/^A) = dim7l(AT) = rank(A r = "column "row rank of A. . Then dimN(A) + dim R(A) = n. the subspaces themselves are not necessarily in the same vector space. R(A) : ]Rn ~ ]Rm. of A. where n is the ]Rn > ]Rm. .19 suggests looking at the general problem of the four fundamental Part 4 of Theorem 3. . Let A : Rn > Rm. Tvrr]} is a basis for R(A). it is a statement about equality of dimensions. The last equality AXI x\ e N(A)L and jc E N(A). of Corollary 3. B E R" xn .") of A. and products of matrices.18. take any W e R(A).. Let A.andx22 e A/"(A).17 we see immediately that Proof: From Theorems 3. if {ui... u. sometimes denoted nullity(A) or corank(A). . Tv abasis 7?. + rank(B)  n :s rank(AB) :s min{rank(A). {Tv\. Linear Transformations dim 7£(A r ) (maximum number of independent rows). nullity(B) :s nullity(AB) :s nullity(A) 4..19. colloquially of = rank of A.17. e ]Rnxn. and is defined as dimN(A). rank(B)}. Let A : R" ~ Rm. Then 3. Theorem 3. dimA/'(A) ± (Note: 1 T T ). 3. 3. Like the theorem. Write x = Xl + X2. . rank(A) + B) :s rank(A) + rank(B). . dimensions. . Then dim K(A) = dimNCA)L.24 24 Chapter 3. O:s rank(A 2. dimension of the domain of A. then {TVI. of A. . Part 4 of Theorem 3. the subspaces themselves are not necessarily in the same vector space.. To see that T is also onto. . and is defined as dim A/"(A). shows that T is onto. if B is nonsingular. The dual notion to rank is the nullity R(AT) of independent rows). we include here a few miscellaneous results about ranks of sums completeness. Finally.11 and 3. rank(AB) = rank(BA) = rank(A) and N(BA) = N(A). 1 1 Xl E A/^A) . Then Ajti = W = TXI since Xl e A/^A). Proof: From Theorems 3. 0 For completeness.19 suggests looking atthe general problem of the four fundamental subspaces of matrix products. . (Note: Since 3.11 and 3. ..") Proof: Proof: Define a linear transformation T : N(A)L ~ R(A) by J\f(A)~L —>• 7£(A) by Tv = Av for all v E N(A)L. dimA/"(A) + dimft(A) = dimension of the domain of A.18." 0 of D The following corollary is immediate.19. denoted nullity(A) or corank(A). + nullity(B). We thus have that dim R(A) = dimN(A)L since it is easily shown T dim7?. 1. following follows we apply this and several previous results. Theorem 3. LinearTransformations Chapter3. Clearly T is 11 (since A/"(T) = 0). v r } is a basis for N(A)L.. by definition there is a vector x E ]Rn such that Ax = w. iv} abasis forA/'CA) . r*i *i E N(A)L. Then N(T) = To w E 7£(A).(A). The basic results are contained in the following easily proved following theorem. where Ax — w. this theorem is sometimes colloquially stated "row rank of A = column N(A)L = R(A A/^A) " = 7l(A )..17 we see immediately that n = dimN(A) = dimN(A) + dimN(A)L + dim R(A) .17.
: R n » Rm. and hence dim R(A) n by Theorem 3. A : IRn »• IR n is invertible or Note that if A is invertible. then dim V — dim W.20 and is also easily proved. R(A) = R(AA T ). e IRnxp. which implies that dim A/^A). Four Fundamental Subspaces Theorem 3. Let e IRmxn.22. R(AT) 3.. e IRmxn. A A T. suppose AXI = Ax^.5. Theorem 3. since ArA is invertible. Let A E Rmxn. suppose Ax\ dim R(A T). Let jc = AT(AAT)~]y Y E Rn.—n = dim 7£(A r ).17. N(A) = N(A T A). R«AB)T) S. 1. Conversely. A"1 ± are all 11 and onto between the two spaces M(A) and 7£(A). A is onto if and only //"rank(A) — m (A has linearly independent rows or is said to 1. y E R(A). Then A r A. AA is nonsingular). Note that if A is invertible. e 7?. N«AB)T) . A is 11 if and only z/rank(A) = n (A has linearly independent columns or is said to have full column rank. Conversely. We now characterize 11 and onto transformations and provide characterizations in We now characterize II and onto transformations and provide characterizations in terms of rank and invertibility. R(B T ).and R(A). A : W1 + E" is invertible or nonsingular if and only z/rank(A) = n. 2. Let A : IRn + IRm. AT A is nonsingular). Also. linear least squares problems. equivalently. 1. A : V —» W is invertible (or bijective) if and only if it is 11 and onto. Theorem 3. let y E IRm be arbitrary.2 N(B). RCAB) S. It is extremely useful in text that follows. equivalently. Conversely.3. especially when dealing with pseudoinverses and linear least squares problems.22. N(A T ) = N(AA T ). terms of rank and invertibility. 4. A is onto if and only if rank (A) = m (A has linearly independent rows or is said to have full row rank. so A is onto. equivalently. AT A nonsingular). to have full column rank. especially when dealing with pseudoinverses and is extremely useful in text that follows. D D 11. 2. RCA).(A) — m — rank (A). The transformations AT and A I have the same domain and range but are in general different maps unless A is and A~! have the same domain and range but are in general different maps unless A is orthogonal. dim R(A) = m = rank(A). N(AB) . which implies x\ = x^. Ar.21.23. Also. A € IR~xn. Let A E Rmxn. . Similar remarks apply to A and A~T. A is 11 if and only ifrank(A) = n (A has linearly independent columns or is said 2.ti = AT Ax2. Conversely. It The next theorem is closely related to Theorem 3. then N(A) = 0. which implies that dimN(A)11 = n — Proof of part 2: If A is 11. 25 25 The next theorem is closely related to Theorem 3. and hence dim 7£(A) = n by Theorem 3.5. Note that in the special case when A E R"x". = R(A T A). then dim V = dim W.e. A Proof of part 2: If A is 11.20. AATT is nonsingular).17. XI = X2 AT A A 11. 2.20 and is also easily proved. Then y = Ax. Then Theorem 3. 4. dim7?. Proof' Proof of part 1: If A is onto. Then 3. equivalently. Definition 3. The transformations AT are all 11 and onto between the two spaces N(A)1. have full row rank. Thus. 3.23. nonsingular ifand only ifrank(A) = n.20. Then 3.(A). x AT (AAT)I e IRn. AX2. the transformations A.21. AT. Four Fundamental Subspaces 3. let y e Rm Proof: Proof of part 1: If A is onto.2 N(A T ). i. B E Rnxp. and AI A. 1. Definition 3. then A/"(A) = 0. A is AT AXI AT AX2. A : V + W is invertible (or bijective) if and only if it is 11 and onto.
26 Chapter 3.R AA. Let A : V > W. by uniqueness it must be Thus. A. i. A R A = I.R = If left A L A L A = 2. Let + V.22 we see that if A : E" + Em is onto. then a right inverse is given by A~R = AT(AAT) I.27.. Obviously A has full row rank (= 1) and A . Also. linear Transformations If a linear transformation is not invertible. that A~R is a left inverse. can always find v e E2 such that [1 2][^] = a). where Iv denotes the identity transfonnation on V.. such that A~LA = Iv where Iv denotes the identity transformation on V.e. it may still be right or left invertible. A right invertible if and only if it onto. (A R + A R A — /) must be a right inverse and..R = _~] (=1) and A~R = [ _j j is a right inverse. if A is 11.R + ARA I) = AA.e. A is said to be left invertible if there exists a left inverse transformation A~L : W —> to transformation A L : V such that A L A = Iv. A is right invertible if and only if it is onto.. characterizing all solutions of the linear matrix equation AR = I. in A I = A R = A L. It then follows from Theorem 3. In Chapter 6 we characterize all right inverses of a matrix by Chapter characterize characterizing all solutions of the linear matrix equation A R = I.. But this implies that A~RA = /.22 ]Rn >• ]Rm Note: From Theorem 3. 0 a left inverse. —> transformation if left + 2.R + AARA = I A +IA  A since AA R = I = I. Let A = [1 2] : E2 »• E1I. in which case A~l = A~R = A~L. 2. A is left invertible if and only ifit is 11. 1. then a left inverse is given by A R = AT (AAT) left T L = A.26. Let A : V + W. Similarly.I = A~R. If Proof: proof of second Proof: We prove the first part and leave the proof of the second to the reader.L = (ATTA)I1AT. A~ (A A)~ A . i..24.I) must be a right inverse and. Let Theorem 3. 1. Obviously A has full row rank can always find v E ]R2 such that [1 2][ ~~] = a). where Iw denotes the identity transfonnation on W. If there exists a unique right inverse A~R such that AA~R = I. 3. 1.25. € ]R . both 11 and Moreover. It then follows from Theorem 3. right inverses for A. then one (Proo!' = [1 2]:]R2 + ]R .25 that A is invertible. Then > 1. by uniqueness it must be A R + A R A — = A R. Definition 3. A R the case that A~R + A~RA . Then Definition 3. is left invertible if and if it and left invertible.25 that A is invertible. Li near Transformations Chapters. If there exists a unique left inverse A~L such that A~LA = I. then A is invertible. then A is invertible. therefore. Then A is onto. A is invertible if and only if it is both right and left invertible.e. Defileft If linear concepts left nitions of these concepts are followed by a theorem characterizing left and right invertible transformations.e. Let A : V + Then 1. are infinitely A. (Proof: Take any a E E1I. D Example 3. it is clear that there are infinitely many right inverse. (A R + A RA . i.both 11 and is if and if onto. Notice the and leave the following: following: A(A. Let A : V » V.: AA R = w Iw W + V such that AA~R = Iw.. Theorem 3. therefore.26.24. i. Theorem 3. . A is said to be right invertible if there exists a right inverse transformation A~RR : if A.
4. 2 . Let A = [i]:]Rl > ]R2. Y E Enx" define their inner product by (X. Find the matrix representation of A with respect to the bases Find the matrix representation of A to bases {[lHHU]} of R3 and {[il[~J} of E . £. Y) = Tr(X Tr F). . The matrix 3.3. (Proof: The only solution to 0 = Av = [I2]v is v = 0. Consider the vector space ]Rnxn over ]R. Show that. Consider the differentiation operator C defined in Example 3. with Y e ]Rnxn (X. For matrices X. Is £. ThenAis 11. respect to this inner product. Let A = [~ . It is now obvious that A has full column rank (=1) and A~L = [3 . — S^. whence N(A) = 0 so A is 11). EXERCISES EXERCISES 3 4 1.4. it is clear that there are A L = [3 — 1] infinitely many left inverses for A. II? Is £. (Proof Theonly solution toO = Av = [i]v 2. matrix characterizing all solutions of the linear matrix equation LA = I. and let R denote the subspace of skewsymmetric matrices. Let A = [8 5 i) and consider A as a linear transformation mapping E3 to ]R2. whence A/"(A) = 0 so A is 11). y) = Tr(X Y). Consider differentiation £ 11? Is£ onto? onto? 4. R = S J. 3. below bases for its four fundamental subspaces. In Chapter 6 we characterize all left inverses of a matrix by characterizing all solutions of the linear matrix equation LA = I. Prove Theorem 3. Again. 'R.Exercises 27 2. LetA [J] : E1 ~ E2.2. Then A is 11. Prove Theorem 3. 2. J E2.1] is a left inverse. 4. is neither 11 nor onto. For matrices matrices. In Chapter 6 we characterize all left inverses of a infinitely many left inverses for A. respect to this inner product. Consider the vector space R nx " over E. let S denote the subspace of symmetric matrices. We give when considered as a linear on ]R3. We give below bases for its four fundamental subspaces. 3. let denote the subspace of symmetric 2. and let 7£ denote the subspace of skewsymmetric matrices. The matrix A = 1 1 2 1 [ 3 1 when considered as a linear transformation onIE \ is neither 11 nor onto. consider A linear transformation ]R3 1. It is now obvious that A has full column is v 0.
Linear Transformations Chapters.12. homogeneous linear system Ax = 0? homogeneous linear system Ax = O? n 3. if not. ~ ~ 3 8. prove it. provide a counterexample. Determine bases for the four fundamental subspaces of the matrix Detennine fundamental A=[~2 5 5 ~].Il. left T Suppose e Rmxn 9. Suppose A € Mg 9x48 . . Are they equal? Is this true in general? DetennineN(A) and R(A). Prove Theorem 3.4. Chapter 3.1 11. Linear Transformations 7. Determine A/"(A) and 7£(A).28 5. Show that AT has a right inverse. If E 1R~9X48.2.1 to illustrate the four fundamental subspaces associated with AT e associated ATE nxm IR from IR m R". How many linearly independent solutions can be found to the 10. 3. Modify Figure 3.4. Let = [~ 9. linearly independent solutions 10. Suppose A E IR m xn has a left inverse. Are they equal? Is this true in general? If this is true in general. Let A = [ J o]. Theorem 6. Prove Theorem 3.11. Rnxm thought of as a transformation from Rm to IRn.
define a transformation A+ : Y —»• X by Definition 4. can be used to give our first definition of A . let us henceforth consider the Although X and Y were arbitrary vector spaces above. neither provides Unfortunately. The MoorePenrose pseudoinverse is defined for any matrix and. Then A+ is the MoorePenrose where y = y\ pseudoinverse of A. problems. define a transformation A + y + X by where Y = YI + Yz with Yl e 7£(A) and Yz e Tl(A}L. Define a transformation T : Af(A)1. see [22]." X ". as is shown in the following text. let us henceforth consider the X ~n lP1. 29 . see [22]. Although X and y were arbitrary vector spaces above.l.l. This transformation T~ + can be used to give our first definition of A+.m We A+ A e lP1. where and are arbitrary finiteConsider a linear transformation A : X + y.1. and hence we can RCA) —>• J\f(A}~L This transformation can define a unique inverse transformation Tl 1 :: 7£(A) + NCA).1 4. pseudoinverse of A. brings great notational and conceptual clarity matrix and. With A and T as defined above.17. a generalization of the inverse of a matrix. Then. 4. where X Xand Y y are arbitrary finitedimensional vector spaces..l —>• Tl(A) by Tx = Ax for all x E NCA). and hence we Then.1 Definitions and Characterizations Definitions and Characterizations Consider a linear transformation A : X —>• y.. T is bijective (11 and onto). as noted in the proof of Theorem 3.. Definition 4. as noted in the proof of Theorem 3. the MoorePenrose pseudoinverse of A. Then A+ is the MoorePenrose j2 with y\ E RCA) and yi E RCA). characterization of A is given in the next theorem.1. T is bijective Cll and onto). We have thus defined A+ for all A E IR™xn. for determining A+ . which was proved by Penrose in 1955. as is shown in the following text. brings great notational and conceptual clarity to the study of solutions to arbitrary systems of linear equations and linear least squares to the study of solutions to arbitrary systems of linear equations and linear least squares problems.Chapter 4 Chapter 4 Introduction to the Introduction to the MoorePenrose MoorePen rose Pseudoinverse Pseudoinverse In this chapter we give a brief introduction to the MoorePenrose pseudoinverse. the definition neither provides nor suggests a good computational strategy good computational strategy for determining A +.17. With A and T as defined above.+ R(A) by dimensional Define transformation T : N(A). which was proved by Penrose in 1955. case X = W1 and Y = Rm. the MoorePenrose pseudoinverse of A. a generIn this chapter we give a brief introduction to the MoorePenrose pseudoinverse. A purely algebraic y + characterization of A+ is given in the next theorem.l.
1) = limAT(AAT +8 2 1)1. it must be A+.7..6. as with Definition 4.5. Still another characterization of A+ is given in the following theorem. If G satisfies all four. and (P4) but not (P3). Then Theorem 4. 19]. Furthermore. Example 4. Let A E lR. a right or left inverse satisfies no fewer than three of the four properties. (P4) (GA)T = GA. Unfortunately. Example 4. A+ = (AT A)I AT if A is 11 (independent columns) (A is left invertible).2) 4. (P3) (AGf (P3) (AG)T = AG. if a t= 0. one need simply verify the four Penrose conditions (P1)(P4). Note that the inverse of a nonsingular matrix satisfies all four Penrose properties. Let A e R™xn. However. A + always exists and is unique. Verify directly that A+ = Example 4. the Penrose properties do offer the great virtue of providing a tional algorithm.2." xn. L Note that other left inverses (for example. Such a verification is often relatively straightforward. X+ = AT(AATT) I if A is onto (independent rows) (A is right invertible). Example 4.3. and (P4) but not (P3). Introduction to the MoorePenrose Pseudoinverse Chapter 4. neither the statement of Theorem 4.4. However. it must be A +.2 4. A+ = (AT A)~ AT if A is 11 (independent columns) (A is left invertible). While not generally suitable for computer implementation. one need simply verify the four Penrose conditions (P1)(P4).2 nor its proof suggests a computational algorithm. the Penrose properties do offer the great virtue of providing a checkable criterion in the following sense.6.1]) satisfy properties (PI). If G the pseudoinverse of A. then by uniqueness. (PI) AGA = A.2. Example 4. whose proof Still another characterization of A + is given in the following theorem. Unfortunately.4. Example 4. Then A+ [a [! = lim (AT A + 82 1) I AT 6+0 6+0 (4.2 nor its proof suggests a computawith Definition 4. terizations.5. (P2) GAG G. = Furthermore.7. whose proof can be found in [1. Also. A t = AT (AA )~ if A is onto (independent rows) (A is Example 4. Such a verification is often relatively satisfies all four. Then G = A+ if and only if Theorem 4. p. AG.1. straightforward. Verify directly that A+ = [ ~] satisfies (PI)(P4). this can be found in [1. (4. A~ = [3 . Given a matrix G that is a candidate for being the pseudoinverse of A. (P2). Note that the inverse of a nonsingular matrix satisfies all four Penrose properties. (P2) GAG = G. characterization can be useful for hand calculation of small examples.3. if a =0. Example 4. (P2)." xn. Consider A = [']. this characterization can be useful for hand calculation of small examples. While not generally suitable for computer implementation. Note that other left inverses (for example.1. p. Given a matrix G that is a candidate for being checkable criterion in the following sense. Also.30 Chapter 4.2 Examples Examples Each of the following can be derived or verified by using the above definitions or characEach of the following can be derived or verified by using the above definitions or characterizations. A+ always exists and is unique. . neither the statement of Theorem 4. For any scalar a. Consider A = f ] satisfies (P1)(P4). Let A e R?xn Then G = A + if and only if (Pl) AGA = A. (P4) (GA)T = GA. Introduction to the MoorePenrose Pseudoinverse Theorem 4. For any scalar a. 19]. Theorem 4. Let A E lR. as a right or left inverse satisfies no fewer than three of the four properties. then by uniqueness. A L = [3 — 1]) satisfy properties (PI).
VVEejRnxnx " are orthogonal (M is 4. The Proof: Both results can be proved using the limit characterization of Theorem 4.9.3 Properties and Applications Properties and Applications This section presents some miscellaneous useful results on pseudoinverses. simply verify that the expression above does indeed satisfy each of Proof: For the proof. Then orthogonal if MT = M. .4. .4.12. where D+ is again a diagonal matrix whose diagonc D is diagonal. Theorem 4.4.13.8. Let S e jRnxn be symmetric with UT SU = D. A+ = (AT A)+ AT = AT (AA T)+. The proof of the second result (which can also be proved easily by verifying the four Penrose proof of the second result (which can also be proved easily by verifying the four Penrose conditions) is as follows: conditions) is as follows: (A T )+ = lim (AA T ~+O + 82 l)IA = lim [AT(AAT ~+O + 82 l)1{ + 82 l)1{ 0 = [limAT(AAT ~+O = (A+{. e jRmxn and suppose Rmxm R n are orthogonal (M is T 1 1 orthogonal if M M ). Then Proof: For the proof.10.12. The proof of the first result is not particularly easy and does not even have the virtue of being proof of the first result is not particularly easy and does not even have the virtue of being especially illuminating. simply verify that the expression above does indeed satisfy each c the four Penrose conditions. p. 0 the four Penrose conditions. D Theorem 4. if v i= 0. where U is orthogonal and D is diagonal. Then S+ UD+U T where D+ is again a diagonal matrix whose diagonal elements are determined according to Example 4. 31 31 Example 4. The interested reader can consult the proof in [1. Example 4. Theorem 4. Many of these This section presents some miscellaneous useful results on pseudoinverses. p.). Many of these are used in the text that follows.7. For any vector v E M".3. For any vector e jRn. elements are determined according to Example 4. 4. Properties and Applications Example 4.7. 27]. .11. The interested reader can consult the proof in [1.8.13. where U is orthogonal an Theorem 4. [~ ~ r ~ =[ 0 Example 4.10. The especially illuminating. 2. Let A E R m x "and suppose UUEejRmxm. For all A E jRmxn. Proof: Both results can be proved using the limit characterization of Theorem 4. Properties and Applications 4. Then S+ = U D+UT. Let S E Rnxn be symmetric with U TSU = D. if v = O.. are used in the text that follows.3 4.3.11.9. Example 4. (A T )+ = (A+{. Example 4. [~ r 1 =[ 4 4 I I ~l 4 I I 4 4. 27]. For A e Rmxn 1.
where BI = A+AB and A) = ABIB{. As an example consider A = [0 1J and B = [ : J. Theorem 4.13 can. n(A T AB) ~ nCB) . properties Theorem 4. however (see.12 Note that by combining Theorems 4. since e lR~xr. [7]. 4.15. (AB)+ = B?A+.11 is suggestive of a "reverseorder" property for pseudoinverses of prodTheorem 4. [23]). xm + T B e Wr . Proof' A+ A Proof: Since A E Rnrxr. 3. 1. [9]. whence A+A = f r. then AkA+ = A+ Ak and (Ak)+ = (A+)kforall integers k > O.15.g.14. we have B = B (BBT)~\ whence BB+ = Ir. Proof: Proof: For the proof. . [11].At = A in Theorem 4. If e lR~xm. A\ = A in Theorem 4. BB+ f r The by taking BI = B.. Proof: Proof: For the proof. 0 D E lR~xr. (AB)+ = B+ A + if and only if 1. For e Rmxn .xm.32 Chapter 4. If A e Rnrxr.12 and 4. Ir Similarly. Introduction to the MoorePenrose Pseudoinverse Chapter 4. [] sufficient reverseorder However. This A AT AT turns out to be a poor approach in finiteprecision arithmetic. 4. N(A+) 5. we B+ BT(BBT)I. then (AB)+ = B+ A+. n(A+) 4.. Then As an example consider A = [0 I] and B = LI. [7].15. B E Rrrxm. The result then follows by E lR. For all A E lR mxn . [5]. then A+ = (ATA)~lAT.g. Theorem 4. TTnfortnnatelv. A+ = (AT A)I AT. e. compute 4. 0 D Theorem 4. (AB)+ = B+A+. e. Introduction to the MoorePenrose Pseudo inverse 4. the MoorePenrose pseudoinverse of any matrix (since AAT and AT A are symmetric). (AA T )+ = (A T)+ A+. 0 The following theorem gives some additional useful properties of pseudoinverses.16. in peneraK ucts of matrices such as exists for inverses of products. n(BB T AT) ~ n(AT) and 2.17. = N(AA+) = N«AA T)+) = N(AA T) = N(A T). poor (see. where BI = A+ AB and AI = AB\B+.14. in general.13 we can.15. 4. [II].16.• Similarly. Theorem 4. Then (AB)+ = 1+ = I while while B+ A+ = [~ ~J ~ = ~. (AB)+ = B{ Ai. (AT A)+ = A+(A T)+. (AB)+ = B+A+ if and only if 4.17.11 nets of matrices such as exists for inverses of nroducts Unfortunately. see [5].. If A is normal. necessary and sufficient conditions under which the reverseorder property does hold are known and we quote a couple of moderately useful results for reference. and better methods are suggested in text that follows. in theory at least. = n(A T) = n(A+ A) = n(A TA). 2. (A+)+ = A. see [9]. D takingB t = B.
For A e R m x n . Y e R". Then K(B) S. y E IRn.i ]. ft(A+) ft(Ar 5. U(A) if and only if AA+B = B. 2.i l . e IRnxm. a matrix can be none of the skewsymmetric. or orthogonal. N(B) and 5 € IRmxn. Theorem 4. so there exists a vector y E Rp such that Ay = Bx. For example. Then Bx e R(B) c H(A). To prove the converse. Then we have Bx = Ay = AA + Ay = AA + Bx. Use Theorem 4. b E E. € IRm xm D 6. Note: Recall that A e R" xn is normal if AATT = AT A.4 to compute the pseudoinverse of U . Then there exists a vector x E Rm such that Bx = y. show that (xyT)+ = (x Tx)+(yT y)++yxT. A e IRPxn thatN(A) S. prove that RCA) = R(AAT) using only definitions and elementary 3. For A E IRmxn. A E IRmxn. where one of the Penrose properties is used above. A E IRn xn B E E n xm 6. show that JV(A) C A/"(S) if and only if BA+ A = B. or orthogonal. Suppose A E Rnxp. For example. problems. If jc. For A E Rpxn and BE R mx ". whereupon there exists a vector x e IR m such that Bx = y. For A e Rmxn. then it is normal. that B = AA+ B. Since x was arbitrary. (a) Prove or disprove that Prove or disprove that [~ (b) Prove or disprove that (b) Prove or disprove that AB D [~ B D r r=[ =[ A+ 0 A+ABD. Since x was arbitrary. Then we have there exists a vector y e IRP such that Ay = Bx. a matrix can be none of the preceding but still be normal. B E E M X m .. and D E E mxm and suppose further that D is nonsingular. so Proof: Suppose R(B) c U(A) and take arbitrary jc E Rm. whereupon y = Bx = AA+Bx E R(A). if A is symmetric. (xyT)+ = (xTx)+(yTy) yx T 3.• 1 2 x. = B. Proof: Suppose K(B) S. However. RCA). if A is symmetric.Exercises 33 Note: Recall that A E IRn xn is normal if A A = A T A.1 D. such as preceding but still be normal. b e R for scalars a. Then R(B) c R(A) if and only if Suppose e IRnxp. prove that 7£(A) = 7£(AA r ) using only definitions and elementary properties of the MoorePenrose pseudoinverse. assume that AA+B = B and take arbitrary y e K(B). then it is normal. properties of the MoorePenrose pseudoinverse. Then B and take arbitrary y E R(B). we have shown where one of the Penrose properties is used above. Let A G M"xn. skewsymmetric.4 to compute the pseudoinverse of \ 2 1. such as A=[ b a a b] for scalars a.]. Then Bx E H(B) S. fiA+A B. R(A) and take arbitrary x e IRm. 5 e JRn x m .i D. 0 EXERCISES EXERCISES 1. Use Theorem 4. The next theorem is fundamental to facilitating a compact and unifying approach The next theorem is fundamental to facilitating a compact and unifying approach to studying the existence of solutions of (matrix) linear equations and linear least squares to studying the existence of solutions of (matrix) linear equations and linear least squares problems. assume that AA + B To prove the converse. prove that R(A+) = R(A T). 4.18. However. we have shown that B = AA+B. A+ 0 A+BD.
Chapter 5

Introduction to the Singular Value Decomposition

In this chapter we give a brief introduction to the singular value decomposition (SVD). We show that every matrix has an SVD and describe some useful properties and applications of this important matrix factorization. The SVD plays a key conceptual and computational role throughout (numerical) linear algebra and its applications.

5.1 The Fundamental Theorem

Theorem 5.1. Let $A \in \mathbb{R}_r^{m \times n}$. Then there exist orthogonal matrices $U \in \mathbb{R}^{m \times m}$ and $V \in \mathbb{R}^{n \times n}$ such that
$$A = U\Sigma V^T, \tag{5.1}$$
where $\Sigma = \begin{bmatrix} S & 0 \\ 0 & 0 \end{bmatrix}$, $S = \mathrm{diag}(\sigma_1, \ldots, \sigma_r) \in \mathbb{R}^{r \times r}$, and $\sigma_1 \geq \cdots \geq \sigma_r > 0$. More specifically, we have
$$A = [U_1 \;\; U_2]\begin{bmatrix} S & 0 \\ 0 & 0 \end{bmatrix}\begin{bmatrix} V_1^T \\ V_2^T \end{bmatrix} \tag{5.2}$$
$$= U_1SV_1^T. \tag{5.3}$$
The submatrix sizes are all determined by $r$ (which must be $\leq \min\{m, n\}$), i.e., $U_1 \in \mathbb{R}^{m \times r}$, $U_2 \in \mathbb{R}^{m \times (m-r)}$, $V_1 \in \mathbb{R}^{n \times r}$, $V_2 \in \mathbb{R}^{n \times (n-r)}$, and the 0-subblocks in $\Sigma$ are compatibly dimensioned.

Proof: Since $A^TA \geq 0$ ($A^TA$ is symmetric and nonnegative definite; recall, for example, [24, Ch. 6]), its eigenvalues are all real and nonnegative. (Note: The rest of the proof follows analogously if we start with the observation that $AA^T \geq 0$ and the details are left to the reader as an exercise.) Denote the set of eigenvalues of $A^TA$ by $\{\sigma_i^2,\; i \in \underline{n}\}$ with $\sigma_1 \geq \cdots \geq \sigma_r > 0 = \sigma_{r+1} = \cdots = \sigma_n$. Let $\{v_i,\; i \in \underline{n}\}$ be a set of corresponding orthonormal eigenvectors and let $V_1 = [v_1, \ldots, v_r]$, $V_2 = [v_{r+1}, \ldots, v_n]$. Letting $S = \mathrm{diag}(\sigma_1, \ldots, \sigma_r)$, we can write $A^TAV_1 = V_1S^2$. Premultiplying by $V_1^T$ gives $V_1^TA^TAV_1 = V_1^TV_1S^2 = S^2$, the latter equality following from the orthonormality of the $v_i$ vectors. Pre- and postmultiplying by $S^{-1}$ gives the equation
$$S^{-1}V_1^TA^TAV_1S^{-1} = I. \tag{5.4}$$
Turning now to the eigenvalue equations corresponding to the eigenvalues $\sigma_{r+1}, \ldots, \sigma_n$, we have that $A^TAV_2 = V_2 0 = 0$, whence $V_2^TA^TAV_2 = 0$. Thus, $AV_2 = 0$. Now define the matrix $U_1 \in \mathbb{R}^{m \times r}$ by $U_1 = AV_1S^{-1}$. Then from (5.4) we see that $U_1^TU_1 = I$; i.e., the columns of $U_1$ are orthonormal. Choose any matrix $U_2 \in \mathbb{R}^{m \times (m-r)}$ such that $[U_1 \;\; U_2]$ is orthogonal. Then
$$U^TAV = \begin{bmatrix} U_1^TAV_1 & U_1^TAV_2 \\ U_2^TAV_1 & U_2^TAV_2 \end{bmatrix} = \begin{bmatrix} U_1^TAV_1 & 0 \\ U_2^TAV_1 & 0 \end{bmatrix}$$
since $AV_2 = 0$. Referring to the equation $U_1 = AV_1S^{-1}$ defining $U_1$, we see that $U_1^TAV_1 = S$ and $U_2^TAV_1 = U_2^TU_1S = 0$. The latter equality follows from the orthogonality of the columns of $U_1$ and $U_2$. Thus, we see that $U^TAV = \begin{bmatrix} S & 0 \\ 0 & 0 \end{bmatrix}$; defining this matrix to be $\Sigma$ completes the proof. $\square$

Definition 5.2. Let $A = U\Sigma V^T$ be an SVD of $A$ as in Theorem 5.1.
1. The set $\{\sigma_1, \ldots, \sigma_r\}$ is called the set of (nonzero) singular values of the matrix $A$ and is denoted $\Sigma(A)$. From the proof of Theorem 5.1 we see that $\sigma_i(A) = \lambda_i^{1/2}(A^TA) = \lambda_i^{1/2}(AA^T)$. Note that there are also $\min\{m, n\} - r$ zero singular values.
2. The columns of $U$ are called the left singular vectors of $A$ (and are the orthonormal eigenvectors of $AA^T$).
3. The columns of $V$ are called the right singular vectors of $A$ (and are the orthonormal eigenvectors of $A^TA$).

Remark 5.3. The analogous complex case in which $A \in \mathbb{C}_r^{m \times n}$ is quite straightforward. The decomposition is $A = U\Sigma V^H$, where $U$ and $V$ are unitary and the proof is essentially identical, except for Hermitian transposes replacing transposes.

Remark 5.4. Note that $U$ and $V$ can be interpreted as changes of basis in both the domain and codomain spaces with respect to which $A$ then has a diagonal matrix representation. Specifically, let $\mathcal{C}$ denote $A$ thought of as a linear transformation mapping $\mathbb{R}^n$ to $\mathbb{R}^m$. Then rewriting $A = U\Sigma V^T$ as $AV = U\Sigma$ we see that $\mathrm{Mat}\,\mathcal{C}$ is $\Sigma$ with respect to the bases $\{v_1, \ldots, v_n\}$ for $\mathbb{R}^n$ and $\{u_1, \ldots, u_m\}$ for $\mathbb{R}^m$ (see the discussion in Section 3.2). See also Remark 5.16.

Remark 5.5. The singular value decomposition is not unique. For example, an examination of the proof of Theorem 5.1 reveals that
• any orthonormal basis for $\mathcal{N}(A)$ can be used for $V_2$;
• there may be nonuniqueness associated with the columns of $V_1$ (and hence $U_1$) corresponding to multiple $\sigma_i$'s.
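A quick numerical illustration of Theorem 5.1 may be helpful. The following NumPy sketch (an illustration, not part of the text) builds a rank-deficient matrix, computes its SVD, and confirms the full and compact factorizations (5.1) and (5.3) as well as the eigenvalue connection used in the proof:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 4))  # rank <= 3

U, s, Vt = np.linalg.svd(A)       # full SVD: U is 5x5, Vt is 4x4, s holds sigma_i
r = np.sum(s > 1e-10)             # numerical rank r

# Rebuild A = U Sigma V^T and the compact form U1 S V1^T of (5.3).
Sigma = np.zeros_like(A)
np.fill_diagonal(Sigma, s)
print(np.allclose(A, U @ Sigma @ Vt))                           # True
print(np.allclose(A, U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]))    # True

# sigma_i(A)^2 are the eigenvalues of A^T A, as in the proof.
evals = np.sort(np.linalg.eigvalsh(A.T @ A))[::-1]
print(np.allclose(evals[:r], s[:r] ** 2))                       # True
```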
Remark 5.6. A factorization $U\Sigma V^T$ of an $m \times n$ matrix $A$ qualifies as an SVD if $U$ and $V$ are orthogonal and $\Sigma$ is an $m \times n$ "diagonal" matrix whose diagonal elements in the upper left corner are positive (and ordered). For example, if $A = U\Sigma V^T$ is an SVD of $A$, then $V\Sigma^TU^T$ is an SVD of $A^T$.

Remark 5.7. Computing an SVD by working directly with the eigenproblem for $A^TA$ or $AA^T$ is numerically poor in finite-precision arithmetic. Better algorithms exist that work directly on $A$ via a sequence of orthogonal transformations; see, e.g., [7], [11], [25].

Example 5.8. The identity matrix illustrates the nonuniqueness noted in Remark 5.5:
$$A = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}\begin{bmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{bmatrix},$$
where $\theta$ is arbitrary, is an SVD. More generally,
$$A = U\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}U^T$$
is an SVD, where $U$ is an arbitrary $2 \times 2$ orthogonal matrix.

Example 5.9.
$$A = \begin{bmatrix} \frac{2}{3} & \frac{\sqrt{5}}{5} & -\frac{4\sqrt{5}}{15} \\[2pt] \frac{2}{3} & 0 & \frac{\sqrt{5}}{3} \\[2pt] \frac{1}{3} & -\frac{2\sqrt{5}}{5} & -\frac{2\sqrt{5}}{15} \end{bmatrix}\begin{bmatrix} 3\sqrt{2} & 0 \\ 0 & 0 \\ 0 & 0 \end{bmatrix}\begin{bmatrix} \frac{\sqrt{2}}{2} & \frac{\sqrt{2}}{2} \\[2pt] -\frac{\sqrt{2}}{2} & \frac{\sqrt{2}}{2} \end{bmatrix} = \begin{bmatrix} 2 & 2 \\ 2 & 2 \\ 1 & 1 \end{bmatrix}.$$

Example 5.10. Let $A \in \mathbb{R}^{n \times n}$ be symmetric and positive definite. Let $V$ be an orthogonal matrix of eigenvectors that diagonalizes $A$, i.e., $V^TAV = \Lambda > 0$. Then $A = V\Lambda V^T$ is an SVD of $A$.
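Example 5.10 is easy to verify numerically: for a symmetric positive definite matrix, an orthogonal eigendecomposition already is an SVD. A small NumPy sketch (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4))
A = M @ M.T + 4 * np.eye(4)       # symmetric positive definite

lam, V = np.linalg.eigh(A)        # A = V diag(lam) V^T with lam > 0 (ascending)
lam, V = lam[::-1], V[:, ::-1]    # reorder eigenvalues descending

s = np.linalg.svd(A, compute_uv=False)
print(np.allclose(s, lam))                      # singular values == eigenvalues
print(np.allclose(A, V @ np.diag(lam) @ V.T))   # V diag(lam) V^T is an SVD of A
```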
5.2 Some Basic Properties

Theorem 5.11. Let $A \in \mathbb{R}^{m \times n}$ have a singular value decomposition $A = U\Sigma V^T$. Using the notation of Theorem 5.1, the following properties hold:
1. $\mathrm{rank}(A) = r =$ the number of nonzero singular values of $A$.
2. Let $U = [u_1, \ldots, u_m]$ and $V = [v_1, \ldots, v_n]$. Then $A$ has the dyadic (or outer product) expansion
$$A = \sum_{i=1}^r \sigma_iu_iv_i^T. \tag{5.5}$$
3. The singular vectors satisfy the relations
$$Av_i = \sigma_iu_i, \tag{5.6}$$
$$A^Tu_i = \sigma_iv_i \tag{5.7}$$
for $i \in \underline{r}$.
4. Let $U_1 = [u_1, \ldots, u_r]$, $U_2 = [u_{r+1}, \ldots, u_m]$, $V_1 = [v_1, \ldots, v_r]$, and $V_2 = [v_{r+1}, \ldots, v_n]$. Then
(a) $\mathcal{R}(U_1) = \mathcal{R}(A) = \mathcal{N}(A^T)^\perp$;
(b) $\mathcal{R}(U_2) = \mathcal{R}(A)^\perp = \mathcal{N}(A^T)$;
(c) $\mathcal{R}(V_1) = \mathcal{N}(A)^\perp = \mathcal{R}(A^T)$;
(d) $\mathcal{R}(V_2) = \mathcal{N}(A) = \mathcal{R}(A^T)^\perp$.

Remark 5.12. Part 4 of the above theorem provides a numerically superior method for finding (orthonormal) bases for the four fundamental subspaces compared to methods based on, for example, reduction to row or column echelon form. Note that each subspace requires knowledge of the rank $r$. The relationship to the four fundamental subspaces is summarized nicely in Figure 5.1.

Remark 5.13. The elegance of the dyadic decomposition (5.5) as a sum of outer products and the key vector relations (5.6) and (5.7) explain why it is conventional to write the SVD as $A = U\Sigma V^T$ rather than, say, $A = U\Sigma V$.
[Figure 5.1. SVD and the four fundamental subspaces.]

Theorem 5.14. Let $A \in \mathbb{R}^{m \times n}$ have a singular value decomposition $A = U\Sigma V^T$ as in Theorem 5.1. Then
$$A^+ = V\Sigma^+U^T, \tag{5.8}$$
where
$$\Sigma^+ = \begin{bmatrix} S^{-1} & 0 \\ 0 & 0 \end{bmatrix} \in \mathbb{R}^{n \times m}, \tag{5.9}$$
with the 0-subblocks appropriately sized. Furthermore, if we let the columns of $U$ and $V$ be as defined in Theorem 5.11, then
$$A^+ = \sum_{i=1}^r \frac{1}{\sigma_i}v_iu_i^T. \tag{5.10}$$

Proof: The proof follows easily by verifying the four Penrose conditions. $\square$

Remark 5.15. Note that none of the expressions above quite qualifies as an SVD of $A^+$ if we insist that the singular values be ordered from largest to smallest. However, a simple reordering accomplishes the task:
$$A^+ = \sum_{i=1}^r \frac{1}{\sigma_{r+1-i}}v_{r+1-i}u_{r+1-i}^T. \tag{5.11}$$
This can also be written in matrix terms by using the so-called reverse-order identity matrix (or exchange matrix) $P = [e_r, e_{r-1}, \ldots, e_2, e_1]$, which is clearly orthogonal and symmetric.
. u is clearly matrix representation for T with respect to the bases { v \ ... w. . From Section 3. is not generally as reliable a procedure. then T can be defined by TVj = OjUj . Recall the linear transformation T used in the proof of Theorem 3..16. u is a basis forR(A). .l.i / E~. Such a compression is analogous to the . where R is upper triangular.17 and Remark 5. Since T is determined by its action on a basis. Since T is determined by its action on a basis.1).. . is the matrix version of (5.r) and the matrix SVr e lR. by orthogonal row transformations performed directly on A to reduce it to the form [~]. is not generally as reliable a procedure.40 40 Then Then Chapters. .mxn have an SVD given by (5.2. Notice that N(A) .17 and in Definition 4. . . Then AV = V:E = [VI U2] [~ ~ ] =[VIS 0] ElR. = ^u. Such a row compression can also be accomplished by orthogonal row transformations performed directly on A to reduce it to the form 0 . v } and {MI . . Column compression Column compression Again.4). . then T~ canbedefinedbyTIu. = tv. Introduction to the Singular Value Decomposition Chapter 5. Both compressions are analogous to the socalled rowreduced where R is upper triangular. .[ SVr ] 0 mxn E lR. the same bases is 5""1. Then Let A E lR.1). postmultiplication of A by V is an orthogonal transformation that "compresses" A by column transformations. . finiteprecision arithmetic.vvr}}is aa is r basisforN(A)..3 Rowand Column Compressions Row and Column Compressions Row compression Let A E R have an SVD given by (5.olumn transformations. = cr. . . Both compressions are analogous to the socalled rowreduced echelon form which. Such a compression is analogous to the "compresses" A by I. Remark 5. Introduction to the Singular Value Decomposition A+ = (VI p)(PS1 p)(PVr) is the matrix version of (5. notice that H(A) = K(AV) = R(UI S) and the matrix UiS e Rm xr has full K(UiS) and the matrix VI S E lR. while the matrix representation for the inverse linear transformation TlI with respect to S. when derived by a Gaussian elimination algorithm implemented in finiteprecision arithmetic. . In other words. the isabasisfor7£(. Similarly.11)... Then Again. From Section 3.11). in Definition 4..1. In other words.3 5. while the matrix representation for the inverse linear transformation T~ with respect to the same bases is SI. / E~. then TlI can be defined by T^'M... In other words. vrr} and {u I.. premultiplication of A by VT is an orthogonal transformation that "compresses" A by row transformations. urr]} is clearly S.urr}} e r.. In other words.i e r. . let A E lR.1). .. since [u\..M(UT Notice that M(A) = N(V T A) = N(svr> and the matrix SVf E Rrxll" has full row A/"(SV. postmultiplication of A by V is an orthogonal transformation column rank. and since {VI. . .. when derived by a Gaussian elimination algorithm implemented in echelon form which.mxn. have an SVD given by (5. A "full SVD" can be similarly constructed. since {UI. mxr has full column rank. Such a row compression can also be accomplished "compresses" A by row transformations. 5. Then VT A = :EVT = [~ ~ ] [ ~i ] D _ .. Recall the linear transformation T used in the proof of Theorem 3.1. basis forJ\f(A)±. then T can be defined by TV. notice that R(A) R(A V) This time.1). premultiplication of A by UT is an orthogonal transformation that rank. A "full SVD" can be similarly constructed. let A e Rmxn have an SVD given by (5.16.. r x has full row rank. the matrix representation for T with respect to the bases {VI. Similarly.2. and since ( v \ . This time..
A E IRnxn indefinite. which is not generally a reliable procedure when socalled columnreduced echelon form. for example. Note: this is analogous to the polar form iO z = rel&ofaa complex scalar z (where i = j = V^T). If XTX = 0. A = QP 7. If XT X = 2. Determine an SVD of A. Let X E M mx ".. € IRmxn. . [11]. of defined by A defined by A = xyT. Use the SVD to determine a polar factorization of A. Determine SVDs of the matrices (a) (b) [ ] [ ~l 1 0 1 6. Determine SVDs of the matrices 5. 2. Let A e Rmxn and suppose W eRmxm and Y e Rnxn are orthogonal. Determine an SVD of the matrix A E R™ xn E IRm. which is not generally a reliable procedure when performed by Gauss transformations in finiteprecision arithmetic. For details. [23]. A = Q P 7. Let A e E"xn be symmetric but indefinite. z of complex scalar z (where i j J=I). 3.1 starting from the observation that AAT ~ O..Exercises Exercises 41 41 socalled columnreduced echelon form. [25]. xyT 5. Let A € R" X M .e. [7]. Prove Theorem 5. For details. i. Do A Wand Yare A and WAY have the same singular values? Do they have the same rank? and WAY have the same singular values? Do they have the same rank? factorization of i. [25]. EXERCISES EXERCISES 1.[11]. 4. see. y e Rn be nonzero vectors. Let A E ~mxn and W E IR mxm and 7 E ~nxn are (a) Show that A and WAY have the same singular values (and hence the same rank). for performed by Gauss transformations in finiteprecision arithmetic. see. Prove Theorem 5.. = o. Note: this is analogous to the polar form where Q is orthogonal and P = PT > 0.1 starting from the observation that AAT > 0. (a) Show that and W A F have the same singular values (and hence the same rank). Let E ~~xn.e. [7]. an SVD A. y E ~n Determine A e ~~ 4. [23]. show that X = 0. Let x e Rm. Use the SVD to determine a where Q is orthogonal and P p T > O. (b) Suppose that W and Y are nonsingular but not necessarily orthogonal.
Chapter 6

Linear Equations

In this chapter we examine existence and uniqueness of solutions of systems of linear equations. General linear systems of the form
$$AX = B; \quad A \in \mathbb{R}^{m \times n},\; B \in \mathbb{R}^{m \times k}, \tag{6.1}$$
are studied and include, as a special case, the familiar vector system
$$Ax = b; \quad A \in \mathbb{R}^{n \times n},\; b \in \mathbb{R}^n. \tag{6.2}$$

6.1 Vector Linear Equations

We begin with a review of some of the principal results associated with vector linear systems.

Theorem 6.1. Consider the system of linear equations
$$Ax = b; \quad A \in \mathbb{R}^{m \times n},\; b \in \mathbb{R}^m. \tag{6.3}$$
1. There exists a solution to (6.3) if and only if $b \in \mathcal{R}(A)$; equivalently, there exists a solution if and only if $\mathrm{rank}([A, b]) = \mathrm{rank}(A)$.
2. There exists a solution to (6.3) for all $b \in \mathbb{R}^m$ if and only if $\mathcal{R}(A) = \mathbb{R}^m$, i.e., $A$ is onto, and this is possible only if $m \leq n$ (since $m = \dim \mathcal{R}(A) = \mathrm{rank}(A) \leq \min\{m, n\}$).
3. A solution to (6.3) is unique if and only if $\mathcal{N}(A) = 0$, i.e., $A$ is 1-1.
4. There exists at most one solution to (6.3) for all $b \in \mathbb{R}^m$ if and only if the columns of $A$ are linearly independent, i.e., $\mathcal{N}(A) = 0$, and this is possible only if $m \geq n$.
5. There exists a unique solution to (6.3) for all $b \in \mathbb{R}^m$ if and only if $A$ is nonsingular, i.e., $A \in \mathbb{R}^{m \times m}$ and $A$ has neither a 0 singular value nor a 0 eigenvalue.
6. There exists a nontrivial solution to the homogeneous system $Ax = 0$ if and only if $\mathrm{rank}(A) < n$.

Proof: The proofs are straightforward and can be consulted in standard texts on linear algebra. Note that some parts of the theorem follow directly from others. For example, to prove part 6, note that $x = 0$ is always a solution to the homogeneous system. Therefore, a nontrivial solution means we must have the case of a nonunique solution, i.e., $A$ is not 1-1, which implies $\mathrm{rank}(A) < n$ by part 3. $\square$
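Part 1 of Theorem 6.1 is straightforward to test numerically. The sketch below (not from the text; the helper name `solvable` is ours) compares $\mathrm{rank}([A, b])$ with $\mathrm{rank}(A)$:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [0.0, 1.0]])            # rank 2; m = 3 > n = 2, so A is not onto

b_good = A @ np.array([1.0, 1.0])     # in R(A) by construction
b_bad = np.array([1.0, 0.0, 0.0])     # not in R(A) for this particular A

def solvable(A, b):
    """Part 1 of Theorem 6.1: Ax = b has a solution iff rank([A, b]) = rank(A)."""
    return np.linalg.matrix_rank(np.column_stack([A, b])) == np.linalg.matrix_rank(A)

print(solvable(A, b_good))  # True
print(solvable(A, b_bad))   # False
```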
Note that the results of Theorem 6.1 follow from those below for the special case $k = 1$, while results for (6.2) follow by specializing even further to the case $m = n$.

6.2 Matrix Linear Equations

In this section we present some of the principal results concerning existence and uniqueness of solutions to the general matrix linear system (6.1).

Theorem 6.2 (Existence). The matrix linear equation
$$AX = B; \quad A \in \mathbb{R}^{m \times n},\; B \in \mathbb{R}^{m \times k}, \tag{6.4}$$
has a solution if and only if $\mathcal{R}(B) \subseteq \mathcal{R}(A)$; equivalently, a solution exists if and only if $AA^+B = B$.

Proof: The subspace inclusion criterion follows essentially from the definition of the range of a matrix. The matrix criterion is Theorem 4.18. $\square$

Theorem 6.3. Let $A \in \mathbb{R}^{m \times n}$, $B \in \mathbb{R}^{m \times k}$ and suppose that $AA^+B = B$. Then any matrix of the form
$$X = A^+B + (I - A^+A)Y, \text{ where } Y \in \mathbb{R}^{n \times k} \text{ is arbitrary,} \tag{6.5}$$
is a solution of
$$AX = B. \tag{6.6}$$
Furthermore, all solutions of (6.6) are of this form.

Proof: To verify that (6.5) is a solution, premultiply by $A$:
$$AX = AA^+B + A(I - A^+A)Y$$
$$= B + (A - AA^+A)Y \quad \text{by hypothesis}$$
$$= B \quad \text{since } AA^+A = A \text{ by the first Penrose condition.}$$
That all solutions are of this form can be seen as follows. Let $Z$ be an arbitrary solution of (6.6), i.e., $AZ = B$. Then we can write
$$Z = A^+AZ + (I - A^+A)Z$$
$$= A^+B + (I - A^+A)Z,$$
and this is clearly of the form (6.5). $\square$
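Theorems 6.2 and 6.3 can be illustrated numerically as follows (a NumPy sketch; the random matrices are chosen only so that $\mathcal{R}(B) \subseteq \mathcal{R}(A)$ holds by construction):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((3, 5))             # wide, so N(A) is nontrivial
B = A @ rng.standard_normal((5, 2))         # guarantees R(B) in R(A)

Ap = np.linalg.pinv(A)
print(np.allclose(A @ Ap @ B, B))           # existence test of Theorem 6.2: True

# Every choice of Y gives a solution X = A+B + (I - A+A)Y, per Theorem 6.3.
for _ in range(3):
    Y = rng.standard_normal((5, 2))
    X = Ap @ B + (np.eye(5) - Ap @ A) @ Y
    print(np.allclose(A @ X, B))            # True each time
```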
Remark 6.4. When $A$ is square and nonsingular, the theorem obviously applies and there is no "arbitrary" component, leaving only the unique solution $X = A^{-1}B$.

Remark 6.5. It can be shown that the particular solution $X = A^+B$ is the solution of (6.6) that minimizes $\mathrm{Tr}\,X^TX$. ($\mathrm{Tr}(\cdot)$ denotes the trace of a matrix; recall that $\mathrm{Tr}\,X^TX = \sum_{i,j}x_{ij}^2$.)

Theorem 6.6 (Uniqueness). A solution of the matrix linear equation
$$AX = B; \quad A \in \mathbb{R}^{m \times n},\; B \in \mathbb{R}^{m \times k}, \tag{6.7}$$
is unique if and only if $A^+A = I$; equivalently, (6.7) has a unique solution if and only if $\mathcal{N}(A) = 0$.

Proof: The first equivalence is immediate from Theorem 6.3. The second follows by noting that $A^+A = I$ can occur only if $r = n$, where $r = \mathrm{rank}(A)$ (recall $r \leq n$). But $\mathrm{rank}(A) = n$ if and only if $A$ is 1-1 or $\mathcal{N}(A) = 0$. $\square$

Example 6.7. Suppose $A \in \mathbb{R}^{n \times n}$. Find all solutions of the homogeneous system $Ax = 0$.

Solution:
$$x = A^+0 + (I - A^+A)y = (I - A^+A)y,$$
where $y \in \mathbb{R}^n$ is arbitrary. Hence, there exists a nonzero solution if and only if $A^+A \neq I$. This is equivalent to either $\mathrm{rank}(A) = r < n$ or $A$ being singular. Clearly, if there exists a nonzero solution, it is not unique.

Computation: Since $y$ is arbitrary, it is easy to see that all solutions are generated from a basis for $\mathcal{R}(I - A^+A)$. But if $A$ has an SVD given by $A = U\Sigma V^T$, then it is easily checked that $I - A^+A = V_2V_2^T$ and $\mathcal{R}(V_2V_2^T) = \mathcal{R}(V_2) = \mathcal{N}(A)$.

Example 6.8. Characterize all right inverses of a matrix $A \in \mathbb{R}^{m \times n}$; equivalently, find all solutions $R$ of the equation $AR = I_m$. Here, we write $I_m$ to emphasize the $m \times m$ identity matrix.

Solution: There exists a right inverse if and only if $\mathcal{R}(I_m) \subseteq \mathcal{R}(A)$, and this is equivalent to $AA^+I_m = I_m$. Clearly, this can occur if and only if $\mathrm{rank}(A) = r = m$ (since $r \leq m$), and this is equivalent to $A$ being onto ($A^+$ is then a right inverse). All right inverses of $A$ are then of the form
$$R = A^+I_m + (I_n - A^+A)Y = A^+ + (I - A^+A)Y,$$
where $Y \in \mathbb{R}^{n \times m}$ is arbitrary. There is a unique right inverse if and only if $A^+A = I$ ($\mathcal{N}(A) = 0$), in which case $A$ must be invertible and $R = A^{-1}$.
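Example 6.8's characterization of right inverses is also easy to check numerically (an illustrative NumPy sketch; a random wide $A$ has full row rank, hence is onto, with probability one):

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((2, 4))            # full row rank, so A is onto

Ap = np.linalg.pinv(A)
print(np.allclose(A @ Ap, np.eye(2)))      # A+ itself is a right inverse

# Any R = A+ + (I - A+A)Y is also a right inverse, as in Example 6.8.
Y = rng.standard_normal((4, 2))
R = Ap + (np.eye(4) - Ap @ A) @ Y
print(np.allclose(A @ R, np.eye(2)))       # True
```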
J B]) = 1R" or.8) is reachable if and only if if R([ B. we of reachability. we see that (6. this is a question va [Uj }k~:b such that x^ takes an arbitrary value in W ? In linear system theory.8) is controllable if and only if if controllability...11) with and D (p ~ 1). Again from Theorem 6.8) is given by kJ Xk = Akxo + LAkJj BUj j=O UkJ ] Uk2 (6. Linear Equations Xk with A E R"xn and B E IR nxmxm(rc>l. The condition The answers are cast in terms that are dual in the linear algebra sense as well. does there exA related question is the following: Given an arbitrary initial vector XQ.8) is given by solution of (6. example of a system that is controllable but not reachable. see that (6. we have the notion of reconstructibility: When does knowledge of {u jy }"~Q and {. equivalently. We might now ask the question: Given Xo 0. m known as the state vector at time while Uk is the input (control) vector. A n . this is called such that Xn = 0? linear system theory.. We now introduce an output vector Yk to the system (6. controlA 1 lability and reachability are equivalent. does there exist an input sequence for k > 1.10) for k ~ 1. B T] is observable [reconsrrucrible] [controllablcl if and T) observable [reconstructive]. The general known as the state vector at time k while Uk is the input (control) vector. if and only if rank [B. Since > 1. linear differential equations). this is a question {u }y~Q Xk of reacbability. The vector Jt* in linear system theory is e IR nx " fieR" (n ~ I. AB.. if A is nonsingular. equivalently. The general solution of (6. does there exist an input sequence {u j an input sequence {"y}"~o such that xn = O? In linear system theory. if and only if or.8) of Example 6. .J B] = n.~ I).10.• A k kJ B] [ ~o (6.2.9) ~Axo+[B. we have the notion of suffice to determine (uniquely) xo? As a dual to controllability. A related question is the following: Given an arbitrary initial vector Xo.. There are many other algebraically equivalent conditions.10. Theorem 6. The answers are cast in terms that are dual in the linear algebra sense as well. The matrices A = [~ ~]1and B5 == [~] 1 providean example of a system that is controllable but not reachable.ra>l).. AB.e.T.46 46 Equations Chapter 6. The above are standard conditions with analogues for continuoustime models (i.:b dual to reachability is called observability: When does knowledge of {" j }"!Q and {y_/}"~o suffice to determine (uniquely) Jt0? As a dual to controllability. from the fundamental Existence Theorem..AB •. overall system that are dual in the systemtheoretic sense to reachability and controllability. from the fundamental Existence Theorem.9 Example 6. . B) is if(AT . There are many other algebraically equivalent conditions.e.8) of Example 6.y/}"Io suffice to determine reconstructibility: When does knowledge of {w r/:b and {YJ lj:b suffice to determine (uniquely) xn? The fundamental duality result from linear system theory is the following: (uniquely) xnl The fundamental duality result from linear system theory is the following: E RPxn e IR pxn E RPxm € IR pxm (A. Since m ~ I.. A n . Theorem l'/:b Clearly. this is called controllability. The linear differential equations). standard conditions with analogues for continuoustime models (i. The condition dual to reachability is called observability: When does knowledge of {u 7 r/:b and {Yj l'. (A. We can then pose some new questions about the with C and (p > 1).9 by appending the equation by appending the equation (6. . B) iJ reachable [controllable] ifand only if (A . Example 6. 
The matrices A = [ ° Q and f ^ provide an lability and reachability are equivalent.2. We can then pose some new questions about the overall system that are dual in the systemtheoretic sense to reachability and controllability. reachability always implies controllability and. does there exist an input sequence {ujj 1jj^ such that Xk takes an arbitrary value in 1R"? In linear system theory. We now introduce an output vector yk to the system (6. We might now ask the question: Given XQ = 0. .
Then the equation e jRmxn.CBuo . equivalently. the solution is then unique if and only if N(R) Uniqueness Theorem.3 6. is stated and proved in Theorem 13. B E Rnxm.Du] (6. Theorem 6. Verification of each identity is recommended as an exercise for the reader.2 j BUj . the has a solution if and only if AA+BC+C = B. By the fundamental the righthand side.4 Some Useful and Interesting Inverses 47 To derive a condition for observability. associated with matrix inverses. and C e jRpxq.13) Let v denote the (known) vector on the lefthand side of (6.13) and let denote the matrix on the righthand side. Then.6. In these identities. B E Rmxq.27.3 A More General Matrix Linear Equation A More General Matrix Linear Equation AXC=B (6.. Listed In many applications. 6. B e jRmx q . notice that To derive a condition for observability. C E jRmxn.Duo Yl . A E Rnxn. Then. if and only if r Yn]  Lj:~ CA n .11. particularly for block matrices. the coefficient matrices of interest are square and nonsingular.6. Such a criterion (C C+ ® A +A = I) of the Kronecker product of matrices for its statement. Theorem 6. v E R(R).4 Some Useful and Interesting Inverses Some Useful and Interesting Inverses In many applications.14) Theorem 6. e Tl(R).12) j=O Yo . arbitrary. E jRnxm. asbelow is a small collection of useful matrix identities. the coefficient matrices of interest are square and nonsingular. Let A E Rmxn. and C E Rpxti. e Rmxn. Listed below is a small collection of useful matrix identities. so a solution exists.15) E jRnxp where Y € Rn*p is arbitrary. . +L kl CAk1j BUj + DUk. Such a criterion (CC+ <g) A+ A — I) is stated and proved in Theorem 13.27. in which case the general solution is of the has a solution if and only if AA + BC+C = B.DUnl 6. by definition. or.4 6. sociated e jRnxn. (6. indicated. by definition. notice that Yk = CAkxo Thus. Invertibility is assumed for any component or subblock whose inverse is indicated. A compact matrix criterion for uniqueness of solutions to (6. By the fundamental Uniqueness Theorem. in which case the general solution is of the form (6. Verification of each identity is recommended as an exercise for the reader. mxm and D E jRm Invertibility is assumed for any component or subblock whose inverse is and D € E xm. the solution is then unique if and only if N(R) ==0.13) and let R denote the matrix on Let denote the (known) vector on the lefthand side of (6. Thus.14) requires the notion of the Kronecker product of matrices for its statement. 0. if and only if or.6. so a solution exists. equivalently. particularly for block matrices.14) requires the notion A compact matrix criterion for uniqueness of solutions to (6.4 Some Useful and Interesting Inverses 6.
As in Example 6. for example.A~lB(D~ CA~lB)~[CA~l This result is known as the ShermanMorrisonWoodbury formula.AIB(DlI + CAIB)ICAI.. formulas for applications (and is frequently "rediscovered") including. Linear Equations Chapter 6. This result follows easily from the block UL factorwhere F = (A — ED C) This result follows easily from the block UL factorization in property 17 of Section 1. (A BDCr1 = AI ..I EXERCISES EXERCISES 1.c E E") that arise in optimization (A + xx T ) — (with symmetric A e lRnxn and x e lRn) that arise in optimization theory.BDI l = [ AI BD. = = Both of these matrices satisfy the matrix equation X2 = / from which it is obvious these matrices satisfy the matrix equation X^ = I from which it is obvious Both of that XI = X.4.. characterize all left inverses of a matrix A e lR ". result follows easily from the block LU factorization in property 16 of Section 1. ization in property 17 of Section 1.4. This result follows easily from the block LU factorization in property 16 of Section 1.8.8. mx . Assuming 2. It has many This result is known as the ShermanMorrisonWoodbury formula. Note that the positions of the / and — / blocks may be exchanged.I ] D. Assuming R(B) ~ R(A). where E = (D . characterize all left inverses of a matrix A E Mm xn . Rmxk and suppose has an SVD as in Theorem 5.I C) I. where F = (A . Linear Equations 1. 5. for example.I B)I (E is the inverse of the Schur complement of A). 1. Let A E lRmxn.I . It has many applications (and is frequently "rediscovered") including.4. formulas for the inverse of a sum of matrices such as (A + D)lor (AI1 + DI)I. This where E = (D — CA B) (E is the inverse of the Schur complement of A). characterize all solutions of the matrix linear equation 7Z(B) c 7£(A). Note that the positions of the / and . As in Example 6.1. l = l = [!C / [~ ~ l = [ AI +_~~!~CAI A~BE = D.CA. BB EelR fflxk and suppose AAhas an SVD as in Theorem 5.48 Chapter 6. It also the inverse of a sum of matrices such as (A + D)"1 or (A" + D"1) It also yields very efficient "updating" or "downdating" formulas in expressions such as yields very efficient "updating" or "downdating" formulas in expressions such as T (A + JUT ) I1 (with symmetric A E R"x" and . 2./ blocks may be exchanged. 2. 1. r A~I [~ ~ r [D~I~AI D~I 1 ~r ~~B 1 r l [~ ~ r [D~CF +~~I~. that X~l [~ !/ [~ ~ r [~ ~ l [~ ~/ r [~ ~ 1 l l l = [ ~ 4.B D. characterize all solutions of the matrix linear equation AX=B in terms of the SVD of A in terms of the SVD of A.1. Let A € E mx ". BC 6. X. (A + BDC)I = A~l . [~ ~ r l 3.4. theory. [ / +c 7. l 8.
check directly that the condition for reconstructibility takes the 6. Let x. .. As in Example 6. (i.xy) T 1 49 = I  1 xTy 1 xy . Let x. Show that 4.e.y Assume that Yji i= 0 for some i/ and j. € IRn and suppose that x T y i= 1. y E E" and suppose further that XTy ^ 1.l ~i e. 6.e...Cn and individual elements Yij. Let A e R"xxn and let A"1 have columns c\. Show that 3. T 4.e.x xTy). Show that the matrix B — A — —eie T : (i.. c and individual elements y. A with yl subtracted from its (ij)th element) is singular. A with — subtracted from its (zy)th element) is singular. y e IRn and suppose further that x T y ^ 1. . Show that the matrix B = A . Show that (/ ... . 5. l' Hint: Show that Ci E N(B).Exercises Exercises 3. check directly condition for reconstructibility the form form N[ fA J CA n 1 ~ N(A n ). where C = 1/(1 . Assume that x/( 7^ 0 for some and j. in Example 6. . Show that cxJ C ' where c 1/(1 — T y)...10. Let jc. y E E" and suppose further that XTy i= 1. Hint: Show that ct <= M(B).10. Let A E 1R~ " and let A 1 have columns Cl.
This page intentionally left blank This page intentionally left blank .
x = I px.y..1 Projections Definition 7. and Norms 7.y is linear and P# y — px. Let V be a vector space with V X 0 y.y is called the (oblique) projection on X along y. Proof: Suppose P is a projection.y is called the (oblique) projection on X along 3^.e. Theorem 7.y Theorem 7.yp2 = P. V by by PX. P2 = P. Py. Also. every v E V Definition 7. PX. Figure 7. i. say on X along y (using the notation of Definition 7. every v e V has a unique decomposition v x y with x E and y e y. Oblique projections. i.e.2. Oblique projections.2. A linear transformation P is a projection if and only if it is idempotent. y x Figure 7.3. Define PX y : V + X <.y is linear and pl. Px. say on X along Y (using the notation of Definition 7. Infact.26. Inner Product Spaces.1. Py. y = Px.y.1 7.3.1. Px.26. and Norms Spaces.1). Also.1).y • V —>• c V has a unique decomposition v = x + y with x e X and y E y. By Theorem 2.x — I — Px. A linear transformation P is a projection if and only if it is idempotent. Infact. Define pX. Figure 7.1..1 displays the projection of von both and 3^ in the case = Figure 7. Theorem 7. Inner Product Projections.1. By Theorem 2.1 displays the projection of v on both X and Y in the case V = ]R2. Let V be a vector space with V = X EEl Y. Theorem 7. Proof: Suppose P is a projection. px. P is a projection if and only if I —P is a projection. P isaprojectionifandonlyifl P isaprojection.yV = x for all v E V. 51 51 .Chapter 7 Chapter 7 Projections.
P is the projection on Y along X. Then Px = P2v = Pv = x so x E X. mental subspaces. Then v = Pv + (I . Conversely. p2 = P. then Pv = 0.1 and 5. Let X = {v e V : Pv = v} and y {v € V : Pv 0}. Thus.xx by Theorem 7.xl.. along XXL} and let x. Hence if v E X ny. Now let u e V be arbitrary. Thus. y = (I . V = X $ Y and the projection on X along Y is P.P)x. A+A VIV{ r LViVT are easily checked to be (unique) orthogonal projections onto the respective four fundaare easily checked to be (unique) orthogonal projections onto the respective four fundamental subspaces. then v = 0. suppose P = P. Then symmetric projection matrix and let x be arbitrary.P)x = O. Then x T pT (I . Let x = Pv. Note that (I . Conversely.1 7. Note that (/ . R" be Proof: Let P be an orthogonal projection (on X. along 1) and let jc. suppose p2 = P.XL Theorem 7. since Px e U(P).11. T Since x and y were arbitrary. Inner Product Spaces. A 6 jRmxII UtSVf. then Pv = O. P Proof: Let P be an orthogonal projection (on X.P)x = yTTpT (I . then Pv = v. Px E R(P).X^X = Px±.P2v = 0 so Y E y. It is easy to check that X and 3^ are subspaces. Thus. D 0 7. and P must be an orthogonal projection. with the second equality following since PTP is symmetric. In the special case where Y = X^.A+A V2V{ L i=r+l i=l 11 ViVf. Since x and y were arbitrary.XLtion and we then use the notation P x = PX.P)x = O. we have ( P y f I (/ .P)x E XL. suppose P is a is a with the second equality following since pT P is symmetric.5.1 5. First note that iftfveX. Conversely.P) = 0. Inner Product Spaces.3. while Py = P(l . and Norms Chapter 7. we have (py)T ((I .p 2 v 0 so y e Thus. Now let v E V be arbitrary.P)x = xTP(I . Thus.P)x E ft(P)1 xTPT(I . (I .P)x 6 R(P)1and P must be an orthogonal projection.3.XL iss called an orthogonal projection and we then use the notation PX = PX. Then v if v € Pv (I . We now prove that V = X $ y. Py e X.)x = PXJ.. P E E"xn is the matrix of an orthogonal projection (onto K(P)} if and only 7. 0 Definition 7. Then Pv = P(x + y) = Px = x.P}x = 0.1 . then Pv v. Write x = P x + (I .L 1.P)x = x T P(l .P)x = (I . Hence that V X 0 y.P)v.4. Hence PT = PTP = P. Projections. yy Ee jR" be arbitrary. Then U\SVr Then r PR(A) AA+ U\U[ Lu. * called an orthogonal projecDefinition 7.xx by Theorem 7. (I . suppose symmetric projection matrix and let x be arbitrary. then v = O. Essentially the same argument shows that / . Thus. Then Pv = P(x + y) = Px = x. First note that v E X. Moreover. P)x = y PT(I P)x = 0. Since Py E X. P2v = P Pv — 2 2 Px = x = Pv.1. P e jRnxn is the matrix of an orthogonal projection (onto R(P)) if and only ifP2 PT if p2 = p = pT. . say.px.P)v.=1 m PR(A). V Pv .PX. If v e y. Then Px = p 2v = Pv = x so x e X.P)v.5. Let X n y. Thus. while Py = P(I P}v = x Pv . In the special case where y X1. D Essentially the same argument shows that I — P is the projection on y along X.P)x = (I . say.4.P)v = = Pv. Moreover. px. . p 2v = PPv = Let u E V be arbitrary. We now prove and Y = {v E V : Pv = OJ.. we must have P (I — P) = O.P)v.. PN(A)J. Let X = {v E V : Pv = v} Px = x = Pv. Projections. It is easy to check that X and Y are subspaces. i=r+l PN(A) 1. then (/ . P = P.AA+ U2 U ! LUiUT. and Norms Let v e V be arbitrary.uT.xJ. arbitrary. PX. Hence pT = pT P = P. Write x = Px (I — P)x. If v E Y. Conversely.V Theorems 5.11.P)x e X1. y = (I . we must have pT (I . X 0 y and the projection on X along y is P.52 52 Chapter 7. 
let A E Rmxn with SVD A = U!:VTT = A = UT.1 The four fundamental orthogonal projections The four fundamental orthogonal projections Using the notation of Theorems 5.
are. Orthogonal projection on a "line.. Determine the orthogonal projection of a vector e M" on another nonzero Example 7.8. .7.(:.8. e Rn Solution: Think of the vector w as an element of the onedimensional subspace IZ(w).. Recall the proof of Theorem 3. the vector z that is orthogonal to w and such that Pv Moreover. Recall the proof of Theorem 3. the vector z that is orthogonal to wand such that v = P v + z is given by z is given by z = PK(W)±Vv = (/ — PK(W))V = v — (^^ j w. An arbitrary vector x e IRn was chosen and a formula for x\ appeared rather mysteriously. Recall the diagram of the four fundamental subspaces.7.2. .. orthogonal: v z Pv w Figure 7. T W W Moreover." Figure 7. Solution: Think of the vector w as an element of the onedimensional subspace R( w).1. . An arbitrary vector x E R" was chosen and a formula for XI basis for a subset of IRn.~) w.6. X on Specifically. Determine the orthogonal projection of a vector v E IR n on another nonzero vector w E IRn. . in fact.7.2. Vk} was an orthornormal Example 7.8) = (WTV) W." Example 7. Recall the diagram of the four fundamental subspaces.2. See Figure 7. { v \ .. Then Let x e W be an arbitrary vector. A direct calculation shows that and ware..8) (using Example 4. See Figure 7. The indicated direct Example 7. Specifically.. IR n Rm 1 n Let X E IR be an arbitrary vector. orthogonal: that z and u.6.A+ A)x 2 = A+ Ax + (I = VI vt x + V Vi x (recall VVT = I). Orthogonal projection on a "line.1. Vk} was an orthomormal basis for a subset S of W1.Pn(w»v = v . There. Projections 53 Example 7. Then the desired projection is simply Then the desired projection is simply Pn(w)v = ww+v wwTv (using Example 4.2. The expression for x\ is simply the orthogonal projection of XI projection of rather x on S. A direct calculation shows z = Pn(w)"' = (l . There. Then X = PN(A)u + PN(A)X .11. Example 7. {VI. Projections 7. in fact.11. The indicated direct sum decompositions of the domain E" and codomain IRm are given easily as follows.
Let Then Then and we can decompose the vector [2 3 and we can decompose the vector [2 3 and a vector in N(A).. x) ::: Qfor aU x 6V and (x. y) Q = X T Qy. let y e IR m be an arbitrary vector. Y2 E V and/or all a. then AT e Rn xm is the unique linear transformation or map T E IRm andfor IRn. ATE IR nxm transformation Definition 7. definite defines Definition 7.10. . Then {^.(A . Yl. j2 ^ V and for alia. 3. Projections. (jc. Let V = E". y) = (y. Let V be a vector space over IR. y^} for all jc.x)forallx. let Y E ]Rm be an arbitrary vector. defines a "weighted" inner product. Yl) + f3(x. y\) + /3(jt.12.12.9. If A E Rm xn. y e V.13. 2.13.y E V. aYI + PY2) = a(x. where Q = QT > 0 is an arbitrary Q = Q T > is an Example 7.10. only ifx = O. (x. Then Similarly. Example 7. e R. (x.AA+)y = U1Ur y + U2U[ Y (recall UU T = I). Example 7. Inner Product Spaces.9. {*.11. and Norms Chapter 7. y) for all x € Rm and for all y e R".2 Inner Product Inner Product Spaces Definition 7.) ) :: V x V + IR is a real inner is a real inner Definition 7. as follows: o o 4] uniquely into the sum of a vector in N(A)L 4V uniquely into the sum of a vector in A/'CA)1 r 1/4 1/4 ] 1/4 1/4 [!]~ = = A' Ax + (l  A' A)x 1/2 1/2 1/2 1/2 0] [ 2] [ 1/2 1/2 + [ 1~2 1~2 ~ o o ! 5/2] [1/2] 1~2 . such that {x.54 Chapter 7. Let V be a vector space over R. respectively. Ay) = {AT x. If e IR mx ". Y2) for all x. (x. Then (x. f3ftE IR. y) x T Y is the "usual" Euclidean inner product or Example 7. Then ('. 3. V = IRn.11. x } = 0 if and only ifx = 0. (x. > Ofor all E V ( x x) =0 if 2. . Projections. Let V = IRn. . respectively. cryi + ^2) = a(x. y)Q = XT Qy. n x n positive definite matrix. Inner Product Spaces. x) for all x. (x. Example 7. Let V = R". Then (x. [ 5~2 + 7. as follows: and a vector in J\f(A). and Norms Similarly. Then Y = PR(A)Y + PR(A)~Y = AA+y + ( l . Then { • • V x V if product if 1. y) = (y. y} = XTy is the "usual" Euclidean inner product or dot product. yi. Let Example 7.
7.2. Inner product Spaces 7.2. Inner Product Spaces
55 55
It is easy to check that, with this more "abstract" definition of transpose, and if the It is easy to check that, with this more "abstract" definition of transpose, and if the (i, j)th element of A is aij, then the (i, j)th element of AT is ap. It can also be checked (/, y)th element of A is a(;, then the (i, y)th element of AT is a/,. It can also be checked that all the usual properties of the transpose hold, such as (Afl) = BT AT. However, the that all the usual properties of the transpose hold, such as (AB) = BT AT. However, the
definition above allows us to extend the concept of transpose to the case of weighted inner definition above allows us to extend the concept of transpose to the case of weighted inner products in the following way. Suppose A e Rmxn and let (., .) Q and (•, .) R, , with Q and A E ]Rm xn (., }R with Q and {, }g R positive definite, be weighted inner products on Rm and W, respectively. Then we can positive definite, be weighted inner products on IR m and IRn, respectively. Then we can define the "weighted transpose" A # as the unique map that satisfies define the "weighted transpose" A# as the unique map that satisfies
(x, AY)Q = (A#x, y)R all x e IRm (x, Ay)Q = (A#x, Y)R for all x E Rm and for all Y E W1. y e IRn.
By Example 7.l2 above, we must then have x T QAy x T (A#{ Ry for all x, y. Hence we By Example 7.12 above, we must then have XT QAy = xT(A#) Ry for all x, y. Hence we transposes (of AT Q = RA#. must have QA = (A#{ R. Taking transposes (of the usual variety) gives AT Q = RA#. QA = (A#) R. Since R is nonsingular, we find Since R is nonsingular, we find
A# = R1A Q. A* = /r'A' TQ.
We can also generalize the notion of orthogonality (x T = 0) to Q orthogonality (Q is We can also generalize the notion of orthogonality (xTyy = 0) to Qorthogonality (Q is a positive definite matrix). Two vectors x, y E IRn are Qorthogonal (or conjugate with a positive definite matrix). Two vectors x, y e W are <2orthogonal (or conjugate with T X Qy O. Qorthogonality is an important tool used in respect to Q) if ( x y) Q respect to Q) if (x,, y } Q = XT Qy = 0. Q orthogonality is an important tool used in studying conjugate direction methods in optimization theory. studying conjugate direction methods in optimization theory. Definition 7.14. Let V be a vector space over C. Then (., •} : V V > Definition 7.14. Let V be a vector space over <C. Then {, .) : V x V + C is a complex is a complex inner product if inner product if
1. (x,, x ) :::: Qfor all x e V and ( x , x ) = 0 if and only if x = 0. 1. ( x x) > 0 for all x E V and (x, x) =0 if and only ifx = O.
2. (x, y) = (y, x) for all x, y E V. (y, x) for all x, y e V. 2. (x, y)
3. (x, aYI + fiy2) = a(x, y\) + fi(x, Y2) for all x, YI, y2 E V and for alia, f3 6 C. 3. (x,ayi f3Y2) = a(x, yll f3(x, y2}forallx, y\, Y2 e V andfor all a, ft E c. Remark 7.15. We could use the notation (., ·)e to denote a complex inner product, but Remark 7.15. We could use the notation {•, }c to denote a complex inner product, but if the vectors involved are complexvalued, the complex inner product is to be understood. if the vectors involved are complexvalued, the complex inner product is to be understood. Note, too, from part 2 of the definition, that (x, x) must be real for all x. Note, too, from part 2 of the definition, that ( x , x ) must be real for all x.
Remark 7.16. Note from parts 2 and 3 of Definition 7.14 that we have Remark 7.16. Note from parts 2 and 3 of Definition 7.14 that we have
(ax\ + fix2, y) = a(x\, y) + P(x2, y}.
Remark 7.17. The Euclidean inner product of x, e C" is given by Remark 7.17. The Euclidean inner product of x, y E C n is given by
n
(x, y)
= LXiYi = xHy.
i=1
The conventional definition of the complex Euclidean inner product is (x, y) yH but we The conventional definition of the complex Euclidean inner product is (x, y} = yHxx but we use its complex conjugate H here for symmetry with the real case. use its complex conjugate xHyy here for symmetry with the real case.
Remark 7.18. A weighted inner product can be defined as in the real case by (x, y)Q = Remark 7.1S. A weighted inner product can be defined as in the real case by (x, y}Q — x H Qy, arbitrary Q QH > o. notion Qorthogonality can be similarly XH Qy, for arbitrary Q = QH > 0. The notion of Q orthogonality can be similarly generalized to the complex case. generalized to the complex case.
56 56
Chapter 7. Projections, Inner Product Spaces, and Norms Chapter 7. Projections, Inner Product Spaces, and Norms
Definition 7.19. A vector space (V, F) endowed with a specific inner product is called an Definition 7.19. A vector space (V, IF) endowed with a specific inner product is called an inner product space. If F = C, we call V a complex inner product space. If F = R, we inner product space. If IF = e, we call V a complex inner product space. If IF = R we call V a real inner product space. call V a real inner product space.
Example 7.20. Example 7.20. 1. Check that V = IRn x" with the inner product (A, B) = Tr AT B is a real inner product 1. Check that = R" xn with the inner product (A, B) = Tr AT B is a real inner product space. Note that other choices are possible since by properties of the trace function, space. Note that other choices are possible since by properties of the trace function, Tr AT B = TrB TA = Tr A B = TrBAT TrATB = Tr BTA = TrABTT = Tr BAT..
2. Check that V = e nxn with the inner product (A, B) = Tr AHB is a complex inner Tr AH B is a complex inner 2. Check that V = Cnx" with the inner product (A, B) product space. Again, other choices are possible. product space. Again, other choices are possible. Definition 7.21. Let V be an inner product space. For v e V, we define the norm (or Definition 7.21. Let V be an inner product space. For v E V, we define the norm (or length) ofv by IIvll = */(v, v). This is called the norm induced by (',, .).. length) ofv by \\v\\ = J(V,V). This is called the norm induced by (  ) Example 7.22. Example 7.22. 1. If V = E." with the usual inner product, the induced norm is given by i> 1. If V = IRn with the usual inner product, the induced norm is given by II v II = n 2 2 1
(Li=l V i (E,=i<Y))2.xV—*« 9\ 7
2. If V = en with the usual inner product, the induced norm is given by II v II = 2. If V = C" with the usual inner product, the induced norm is given by \\v\\ "n (L...i=l IVi ) ! (£? = ,l»,lI22)*.. Theorem 7.23. Let P be an orthogonal projection on an inner product space V. Then Then Theorem 7.23. Let P be an orthogonal projection on an inner product space \\Pv\\ ::::: Ilvll for all v e V. IIPvll < \\v\\forallv E V.
Proof: Since P is an orthogonal projection, p2 = P = pH. (Here, the notation p# denotes Proof: Since P is an orthogonal projection, P2 = P = P#. (Here, the notation P# denotes the unique linear transformation that satisfies ( P u , } = (u, p#v) for all u, v E If this the unique linear transformation that satisfies (Pu, vv) = (u, P#v) for all u, v e V. If this seems a little too abstract, consider V = R" (or en), where P# is simply the usual PT (or seems a little too abstract, consider = IRn (or C"), where p# is simply the usual pT (or pH)). Hence (Pv, v) = (P 2v, v) = (Pv, p#v) = (Pv, Pv) = IIPvll 2 > O. Now /  P is PH)). Hence ( P v , v) = (P2v, v) = (Pv, P#v) = ( P v , Pv) = \\Pv\\2 ::: 0. Now /  P is also a projection, so the above result applies and we get also a projection, so the above result applies and we get
0::::: ((I  P)v. v) = (v. v)  (Pv, v)
=
IIvll2  IIPvll 2
from which the theorem follows. from which the theorem follows.
0
Definition 7.24. The norm induced on an inner product space by the "usual" inner product Definition 7.24. The norm induced on an inner product space by the "usual" inner product is called the natural norm. is called the natural norm.
In case V = C" or V = R",, the natural norm is also called the Euclidean norm. In In case = en or = IR n the natural norm is also called the Euclidean norm. In the next section, other norms on these vector spaces are defined. A converse to the above the next section, other norms on these vector spaces are defined. A converse to the above procedure is also available. That is, given a norm defined by IIx II = •>/(•*> x), an inner procedure is also available. That is, given a norm defined by \\x\\ — .j(X,X}, an inner product can be defined via the following. product can be defined via the following.
7.3. Vector Norms 7.3. Vector Norms Theorem 7.25 (Polarization Identity). Theorem 7.25 (Polarization Identity).
1. For x, y E m~n, an inner product is defined by 1. For x, y € R", an inner product is defined by (x,y)=xTy=
57 57
IIx+YIl2~IIX_YI12_
IIx + yll2 _ IIxll2 _ lIyll2 2
2. For x, y E en, an inner product is defined by 2. For x, y e C", an inner product is defined by
where j = i = \/—T. where j = i = .J=I.
7.3 7.3
Vector Norms Vector Norms
Definition 7.26. Let (V, F) be a vector space. Then \ Definition 7.26. Let (V, IF) be a vector space. Then II \ . \ II\ : V + R is a vector norm ifit V >• IR is a vector norm if it satisfies the following three properties: satisfies the following three properties:
1. Ilxll::: Ofor all x E V and IIxll = 0 ifand only ifx
= O.
2. Ilaxll = lalllxllforallx
E
Vandforalla
E
IF.
3. IIx + yll :::: IIxll + IIYliforall x, y E V. (This is called the triangle inequality, as seen readily from the usual diagram illus (This is called the triangle inequality, as seen readily from the usual diagram illustrating the sum of two vectors in ]R2 .) trating the sum of two vectors in R2 .) Remark 7.27. It is convenient in the remainder of this section to state results for complexRemark 7.27. It is convenient in the remainder of this section to state results for complexvalued vectors. The specialization to the real case is obvious. valued vectors. The specialization to the real case is obvious. Definition 7.28. A vector space (V, F) is said to be a normed linear space if and only if Definition 7.28. A vector space (V, IF) is said to be a normed linear space if and only if there exists a vector norm  •  : V > R satisfying the three conditions of Definition 7.26. there exists a vector norm II . II : V + ]R satisfying the three conditions of Definition 7.26. Example 7.29. Example 7.29.
1. For x ∈ C^n, the Hölder norms, or p-norms, are defined by

      ||x||_p = ( Σ_{i=1}^n |x_i|^p )^{1/p},   1 ≤ p ≤ +∞.

   Special cases:

   (a) ||x||_1 = Σ_{i=1}^n |x_i| (the "Manhattan" norm).

   (b) ||x||_2 = ( Σ_{i=1}^n |x_i|^2 )^{1/2} = (x^H x)^{1/2} (the Euclidean norm).

   (c) ||x||_∞ = max_{i ∈ n} |x_i| = lim_{p→+∞} ||x||_p. (The second equality is a theorem that requires proof.)
2. Some weighted p-norms:

   (a) ||x||_{1,D} = Σ_{i=1}^n d_i |x_i|, where d_i > 0.

   (b) ||x||_{2,Q} = (x^H Q x)^{1/2}, where Q = Q^H > 0 (this norm is more commonly denoted || · ||_Q).

3. On the vector space (C[t_0, t_1], R), define the vector norm

      ||f|| = max_{t_0 ≤ t ≤ t_1} |f(t)|.

   On the vector space ((C[t_0, t_1])^n, R), define the vector norm

      ||f||_∞ = max_{t_0 ≤ t ≤ t_1} ||f(t)||_∞.

Theorem 7.30 (Hölder Inequality). Let x, y ∈ C^n. Then

   |x^H y| ≤ ||x||_p ||y||_q,   1/p + 1/q = 1.

A particular case of the Hölder inequality is of special interest.

Theorem 7.31 (Cauchy-Bunyakovsky-Schwarz Inequality). Let x, y ∈ C^n. Then

   |x^H y| ≤ ||x||_2 ||y||_2

with equality if and only if x and y are linearly dependent.

Proof: Consider the matrix [x y] ∈ C^{n×2}. Since

   [x y]^H [x y] = [ x^H x   x^H y
                     y^H x   y^H y ]

is a nonnegative definite matrix, its determinant must be nonnegative. In other words, 0 ≤ (x^H x)(y^H y) − (x^H y)(y^H x). Since y^H x = conj(x^H y), we see immediately that |x^H y| ≤ ||x||_2 ||y||_2. □

Note: This is not the classical algebraic proof of the Cauchy-Bunyakovsky-Schwarz (CBS) inequality (see, e.g., [20, p. 217]). However, it is particularly easy to remember.

Remark 7.32. The angle θ between two nonzero vectors x, y ∈ C^n may be defined by cos θ = |x^H y| / (||x||_2 ||y||_2), 0 ≤ θ ≤ π/2. The CBS inequality is thus equivalent to the statement 0 ≤ cos θ ≤ 1.

Remark 7.33. Theorem 7.31 and Remark 7.32 are true for general inner product spaces.
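Theorems 7.30 and 7.31 can be spot-checked numerically. A minimal sketch, assuming NumPy and randomly chosen test vectors (illustrative only, not part of the original development):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(6) + 1j * rng.standard_normal(6)
y = rng.standard_normal(6) + 1j * rng.standard_normal(6)

# Hoelder inequality |x^H y| <= ||x||_p ||y||_q with 1/p + 1/q = 1.
for p, q in [(1, np.inf), (2, 2), (4, 4/3)]:
    assert abs(np.vdot(x, y)) <= np.linalg.norm(x, p) * np.linalg.norm(y, q) + 1e-12

# CBS holds with equality when y is a scalar multiple of x (linear dependence).
y_dep = (2 - 3j) * x
assert np.isclose(abs(np.vdot(x, y_dep)),
                  np.linalg.norm(x, 2) * np.linalg.norm(y_dep, 2))
```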
Remark 7.34. The norm || · ||_2 is unitarily invariant, i.e., if U ∈ C^{n×n} is unitary, then ||Ux||_2 = ||x||_2 (Proof: ||Ux||_2^2 = x^H U^H U x = x^H x = ||x||_2^2). However, || · ||_1 and || · ||_∞ are not unitarily invariant. Similar remarks apply to the unitary invariance of norms of real vectors under orthogonal transformation.

Remark 7.35. If x, y ∈ C^n are orthogonal, then we have the Pythagorean Identity

   ||x ± y||_2^2 = ||x||_2^2 + ||y||_2^2,

the proof of which follows easily from ||z||_2^2 = z^H z.

Theorem 7.36. All norms on C^n are equivalent; i.e., there exist constants c_1, c_2 (possibly depending on n) such that

   c_1 ||x||_α ≤ ||x||_β ≤ c_2 ||x||_α for all x ∈ C^n.

Example 7.37. For x ∈ C^n, the following inequalities are all tight bounds; i.e., there exist vectors x for which equality holds:

   ||x||_1 ≤ sqrt(n) ||x||_2,   ||x||_1 ≤ n ||x||_∞,
   ||x||_2 ≤ ||x||_1,           ||x||_2 ≤ sqrt(n) ||x||_∞,
   ||x||_∞ ≤ ||x||_1,           ||x||_∞ ≤ ||x||_2.

Finally, we conclude this section with a theorem about convergence of vectors. Convergence of a sequence of vectors to some limit vector can be converted into a statement about convergence of real numbers, i.e., convergence in terms of vector norms.

Theorem 7.38. Let || · || be a vector norm and suppose v, v^(1), v^(2), ... ∈ C^n. Then

   lim_{k→+∞} v^(k) = v if and only if lim_{k→+∞} ||v^(k) − v|| = 0.
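The tight bounds of Example 7.37 can be checked numerically; between them, the vector of all ones and the first unit vector achieve equality in every bound. A minimal sketch, assuming NumPy:

```python
import numpy as np

n = 4
ones = np.ones(n)    # equality case for ||x||_1 <= sqrt(n)||x||_2, n||x||_inf, etc.
e1 = np.eye(n)[0]    # equality case for the remaining bounds

for x in (ones, e1):
    n1, n2, ninf = (np.linalg.norm(x, p) for p in (1, 2, np.inf))
    assert n1 <= np.sqrt(n) * n2 + 1e-12 and n1 <= n * ninf + 1e-12
    assert n2 <= n1 + 1e-12 and n2 <= np.sqrt(n) * ninf + 1e-12
    assert ninf <= n1 + 1e-12 and ninf <= n2 + 1e-12
```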
The "matrix analogue of the vector Inorm.mxn.  5 2 = II IIF and  • 5i00 = II .2 =  .mxn IIAII p. The "maximum row sum" norm is 2. (t laUI). The spectral norm is 3. Example 7. matrix = Ilxllp. 1. (where r = laiiK^/i. e R mx ". defined by IIAIIF ~ (t. Example 7.40. \\F and 11'115. ^wncic = rank(A)). Inner Product Spaces.60 max _P IIAxll = max Ilxli p IIxllp=1 IIAxll p .1 is often called the trace norm. to estimate the size of a matrix product A B in terms of the sizes of A and B individually. I. Projections.. The following three special cases are important because they are "computable. 5>1 is often called the trace norm.40. The concept of a matrix norm alone is not altogether useful since it does not allow us to estimate the size of a matrix product AB in terms of the sizes of A and B individually. The "matrix analogue of the vector 1norm."  A\\ = ^ \ai} . The norm  . pnorms previously. is a norm.mxn. tTL T Note: IIA+llz = l/ar(A). is a norm. where r mxn = rank(A). (A' A)) 1 ~ (T.  . and Norms Chapter 7.42.44. IIAII P t altA)) 1 ~ (T.mxn. Schatten/7norms IIAlls.q = max IIAxil p 11.44." theorem and requires a proof.60 Chapter 7. Example 7.41. The "maximum column sum" norm is 2.p = (at' + . (AA ')).43.. ai.<110#0 IIxllq Example 7. and Norms Example 7.. I Some special cases of Schatten /?norms are equal to norms defined previously. Example 7. . 11·115.43. Example 7._ Then "mixed" norms can also be defined by e lR. For example. Inner Product Spaces. Let A E R . J=1 3.) I ~ (t. Let A E lR. Then the Frobenius norm (or matrix Euclidean norm) is 7. IIAII2 = Amax(A A) = A~ax(AA ) = a1(A)." IIAliss = Li. 112' The norm II • 115.00 =  • 2. The Schattenpnorms are defined by E lR. Let A E K m x ".42.. + a!)"". Example 7. Then the matrix pnorms are defined by A e Rmxn. I. Projections.jj laij." Each is a "computable. Let A E lR. IIAlioo = max rE!!l.
The concept of a matrix norm alone is not altogether useful since it does not allow us to estimate the size of a matrix product AB in terms of the sizes of A and B individually. Notice that this difficulty did not arise for vectors, although there are analogues for, e.g., inner products or outer products of vectors. We thus need the following definition.

Definition 7.45. Let A ∈ R^{m×n}, B ∈ R^{n×k}. Then the norms || · ||_α, || · ||_β, and || · ||_γ are mutually consistent if ||AB||_α ≤ ||A||_β ||B||_γ. A matrix norm || · || is said to be consistent if ||AB|| ≤ ||A|| ||B|| whenever the matrix product is defined.

Example 7.46. || · ||_F and || · ||_p for all p are consistent matrix norms. The "mixed" norm

   ||A||_{∞,1} = max_{x ≠ 0} ||Ax||_∞ / ||x||_1 = max_{i,j} |a_ij|

is a matrix norm but it is not consistent. For example, take A = B = [1 1; 1 1]. Then ||AB||_{∞,1} = 2 while ||A||_{∞,1} ||B||_{∞,1} = 1.

The p-norms are examples of matrix norms that are subordinate to (or induced by) a vector norm, i.e.,

   ||A|| = max_{x ≠ 0} ||Ax|| / ||x||

(or, more generally, ||A||_{p,q} = max_{x ≠ 0} ||Ax||_p / ||x||_q). For such subordinate norms, also called operator norms, we clearly have ||Ax|| ≤ ||A|| ||x||. Since ||ABx|| ≤ ||A|| ||Bx|| ≤ ||A|| ||B|| ||x||, it follows that all subordinate norms are consistent.

Theorem 7.47. There exists a vector x* such that ||Ax*|| = ||A|| ||x*|| if the matrix norm is subordinate to the vector norm.

Theorem 7.48. If || · ||_m is a consistent matrix norm, there exists a vector norm || · ||_v consistent with it, i.e., ||Ax||_v ≤ ||A||_m ||x||_v.

Not every consistent matrix norm is subordinate to a vector norm. For example, consider || · ||_F. Then ||Ax||_2 ≤ ||A||_F ||x||_2, so || · ||_2 is consistent with || · ||_F, but there does not exist a vector norm || · || such that ||A||_F is given by max_{x ≠ 0} ||Ax|| / ||x||.
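The failure of consistency for the max-entry norm, and the contrast with subordinate norms, can be seen in a few lines. A minimal sketch, assuming NumPy; the function name max_entry_norm is ours:

```python
import numpy as np

def max_entry_norm(A):
    # The "mixed" norm max_{x != 0} ||Ax||_inf / ||x||_1 = max_ij |a_ij|.
    return np.abs(A).max()

A = B = np.ones((2, 2))
print(max_entry_norm(A @ B))                  # 2.0
print(max_entry_norm(A) * max_entry_norm(B))  # 1.0 -- consistency fails

# Subordinate norms, by contrast, are always consistent:
assert np.linalg.norm(A @ B, 2) <= np.linalg.norm(A, 2) * np.linalg.norm(B, 2) + 1e-12
```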
Useful Results

The following miscellaneous results about matrix norms are collected for future reference. The interested reader is invited to prove each of them as an exercise.

1. ||I_n||_p = 1 for all p, while ||I_n||_F = sqrt(n).

2. For A ∈ R^{m×n}, the following inequalities are all tight; i.e., there exist matrices A for which equality holds:

      ||A||_1 ≤ sqrt(m) ||A||_2,    ||A||_1 ≤ m ||A||_∞,          ||A||_1 ≤ sqrt(m) ||A||_F,
      ||A||_2 ≤ sqrt(n) ||A||_1,    ||A||_2 ≤ sqrt(m) ||A||_∞,    ||A||_2 ≤ ||A||_F,
      ||A||_∞ ≤ n ||A||_1,          ||A||_∞ ≤ sqrt(n) ||A||_2,    ||A||_∞ ≤ sqrt(n) ||A||_F,
      ||A||_F ≤ sqrt(n) ||A||_1,    ||A||_F ≤ sqrt(m) ||A||_∞,    ||A||_F ≤ sqrt(min(m, n)) ||A||_2.

3. The following inequalities are also tight:

      max_{i,j} |a_ij| ≤ ||A||_2 ≤ sqrt(mn) max_{i,j} |a_ij|.

4. The norms || · ||_F and || · ||_2 (as well as all the Schatten p-norms, but not necessarily other p-norms) are unitarily invariant; i.e., for all A ∈ R^{m×n} and for all orthogonal matrices Q ∈ R^{m×m} and Z ∈ R^{n×n}, ||QAZ||_α = ||A||_α for α = 2 or F.

Convergence

The following theorem uses matrix norms to convert a statement about convergence of a sequence of matrices into a statement about the convergence of an associated sequence of scalars.

Theorem 7.49. Let || · || be a matrix norm and suppose A, A^(1), A^(2), ... ∈ R^{m×n}. Then

   lim_{k→+∞} A^(k) = A if and only if lim_{k→+∞} ||A^(k) − A|| = 0.

EXERCISES

1. If P is an orthogonal projection, prove that P^+ = P.

2. Suppose P and Q are orthogonal projections and P + Q = I. Prove that P − Q must be an orthogonal matrix.

3. Prove that I − A^+ A is an orthogonal projection. Also, prove directly that V_2 V_2^T is an orthogonal projection, where V_2 is defined as in Theorem 5.1.

4. Suppose that a matrix A ∈ R^{m×n} has linearly independent columns. Prove that the orthogonal projection onto the space spanned by these column vectors is given by the matrix P = A(A^T A)^{-1} A^T.

5. Find the (orthogonal) projection of the vector [2 3 4]^T onto the subspace of R^3 spanned by the plane 3x − y + 2z = 0.

6. Prove that R^{n×n} with the inner product (A, B) = Tr(A^T B) is a real inner product space.

7. Show that the matrix norms || · ||_2 and || · ||_F are unitarily invariant.

8. Definition: Let A ∈ R^{n×n} and denote its set of eigenvalues (not necessarily distinct) by {λ_1, ..., λ_n}. The spectral radius of A is the scalar

      ρ(A) = max_i |λ_i|.

   Let
      A = [ 14   0
            12   5 ].

   Determine ||A||_F, ||A||_1, ||A||_2, ||A||_∞, and ρ(A).

9. Let

      A = [ 8  1  6
            3  5  7
            4  9  2 ].

   Determine ||A||_F, ||A||_1, ||A||_2, ||A||_∞, and ρ(A). (An n × n matrix, all of whose columns and rows as well as main diagonal and antidiagonal sum to s = n(n^2 + 1)/2, is called a "magic square" matrix. If M is a magic square matrix, it can be proved that ||M||_p = s for all p.)

10. Let A = x y^T, where both x, y ∈ R^n are nonzero. Determine ||A||_F, ||A||_1, ||A||_2, and ||A||_∞ in terms of ||x||_α and/or ||y||_β, where α and β take the value 1, 2, or ∞ as appropriate.
Chapter 8

Linear Least Squares Problems

8.1 The Linear Least Squares Problem

Problem: Suppose A ∈ R^{m×n} with m ≥ n and b ∈ R^m is a given vector. The linear least squares problem consists of finding an element of the set

   X = {x ∈ R^n : ρ(x) = ||Ax − b||_2 is minimized}.

Solution: The set X has a number of easily verified properties:

1. A vector x ∈ X if and only if A^T r = 0, where r = b − Ax is the residual associated with x. The equations A^T r = 0 can be rewritten in the form A^T Ax = A^T b and the latter form is commonly known as the normal equations, i.e., x ∈ X if and only if x is a solution of the normal equations. For further details, see Section 8.2.

2. A vector x ∈ X if and only if x is of the form

      x = A^+ b + (I − A^+ A) y,   (8.1)

   where y ∈ R^n is arbitrary. To see why this must be so, write the residual r in the form

      r = (b − P_{R(A)} b) + (P_{R(A)} b − Ax).

   Now, (P_{R(A)} b − Ax) is clearly in R(A), while

      (b − P_{R(A)} b) = (I − P_{R(A)}) b = P_{R(A)^⊥} b ∈ R(A)^⊥,

   so these two vectors are orthogonal. Hence, from the Pythagorean identity (Remark 7.35),

      ||r||_2^2 = ||b − Ax||_2^2 = ||b − P_{R(A)} b||_2^2 + ||P_{R(A)} b − Ax||_2^2.

   Thus, ||Ax − b||_2^2 (and hence ρ(x) = ||Ax − b||_2) assumes its minimum value if and only if

      Ax = P_{R(A)} b = AA^+ b,   (8.2)
all solutions of (8.e. equivalently. In fact. if and only if A + A lor.1].e. If the existence condition happens to be satisfied. Then the convex combination 8x. x* minimizes the residual p ( x ) and is the vector of minimum 2norm that does so.2. the last inequality following by Theorem 7. X is convex. BE ]R. of linear least squares solutions. The minimum value of p ((x) is then clearly equal to where y E ]R. Linear Least Squares Problems and this equation always has a solution since AA+b e 7£(A). The only difference is that in the case same as solutions of the linear system AX = B. where Y E R" xfc is arbitrary. consider two arbitrary vectors Xl = A + b 3. + (1 .. problem to the matrix case. The unique solution of minimum 2norm or Fnorm is where Y € ]R. then equality holds and the least squares . has a unique element x" of minimal2norm. all and this equation always has a solution since AA+b E R(A).3. Linear Least Squares Problems Chapter 8. i. 7£(A).. X is convex. and only if A+A = I or. This follows immediately from and is the vector of minimum 2norm that does so.PR(A)bll z = ~ 11(1 Ilbll z. equivalently. Let 8 e [0. 5.23. By Theorem 6. 3. Let 6 E [0.e.mxk. we can generalize the linear least squares problem to the matrix case. X = A+B. if 5. To see why. 1].. Notice that solutions of the linear least squares problem look exactly the same as solutions of the linear system AX = B. if and only if rank(A) n. i. x" = + b is the unique vector 4. There is a unique solution to the least squares problem. The unique solution of minimum 2norm or Fnorm is X = A+B.1. 0*i (1 #)* = A+b (I .. There is a unique solution to the least squares problem. x* = A+b is the unique vector that solves this "double minimization" problem..1) and convexity or directly from the fact that all x E X are of the form (8. then equality holds and the least squares If the existence condition happens to be satisfied. The minimum value of p x ) is then clearly equal to lib . By Theorem 6.2. consider two arbitrary vectors jci = A+b + (I — A + A) y (I . AA+)bI1 2 the last inequality following by Theorem 7.2) are of the form x = A+ AA+b + (I  A+ A)y =A+b+(IA+A)y. To see why.nxk is arbitrary.3.n is arbitrary. there is no "existence condition" such as R(B) S. i.A+ A)z in X. X has a unique element x* of minimal 2norm.. In fact. which follows since the two vectors are orthogonal. we can generalize the linear least squares Just as for the solution of linear equations. The Theorem 8. x* minimizes the residual p(x) that solves this "double minimization" problem. Notice that solutions of the linear least squares problem look exactly the Remark 8. if and only if rank (A) = n.A+A)(Oy (1 . This follows immediately from convexity or directly from the fact that all x e X are of the form (8. The general solution to e ]R.8)xz2 = A+b ++ (I A+ A)(8y ++ (1 8)z) is clearly in X.66 Chapter 8. where y e W is arbitrary.e. Just as for the solution of linear equations.23. X. there is no "existence condition" such as K(B) c R(A).2) are of the form solutions of (8. X = {x*} = {A+b}.1) and which follows since the two vectors are orthogonal. Then the convex combination and Xz = A+b (I . X = {x"} = {A+b}. Remark 8.0)z) is clearly in 4. i.mxn XElR Plxk min IIAX  Bib is of the form is of the form X=A+B+(IA+A)Y. The only difference is that in the case of linear least squares solutions. Let A E E mx " and B € Rmxk.A+A)y and *2 = A+b + (I — A+A)z in X.
residual is 0. Of all solutions that give a residual of 0, the unique solution X = A^+ B has minimum 2-norm or F-norm.

Remark 8.3. If we take B = I_m in Theorem 8.1, then X = A^+ can be interpreted as saying that the Moore-Penrose pseudoinverse of A is the best (in the matrix 2-norm sense) matrix such that AX approximates the identity.

Remark 8.4. Many other interesting and useful approximation results are available for the matrix 2-norm (and F-norm). One such is the following. Let A ∈ R_r^{m×n} with SVD
   A = U Σ V^T = Σ_{i=1}^r σ_i u_i v_i^T.
Then a best rank k approximation to A for 1 ≤ k ≤ r, i.e., a solution to

   min_{M ∈ R_k^{m×n}} ||A − M||_2,
is given by

   M_k = Σ_{i=1}^k σ_i u_i v_i^T.
The special case in which m = n and k = n − 1 gives a nearest singular matrix to A ∈ R_n^{n×n}.
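The best rank-k approximation result can be verified numerically: truncating the SVD after k terms leaves a 2-norm error of exactly σ_{k+1}. A minimal sketch, assuming NumPy and a random test matrix of our choosing:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((6, 4))
U, s, Vt = np.linalg.svd(A)

k = 2
Mk = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]   # best rank-k approximation
# The optimal value of min ||A - M||_2 over rank-k matrices is sigma_{k+1}.
assert np.isclose(np.linalg.norm(A - Mk, 2), s[k])
```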
8.2 Geometric Solution
Looking at the schematic provided in Figure 8.1, it is apparent that minimizing ||Ax − b||_2 is equivalent to finding the vector x ∈ R^n for which p = Ax is closest to b (in the Euclidean norm sense). Clearly, r = b − Ax must be orthogonal to R(A). Thus, if Ay is an arbitrary vector in R(A) (i.e., y is arbitrary), we must have
   0 = (Ay)^T (b − Ax) = y^T A^T (b − Ax) = y^T (A^T b − A^T Ax).
Since y is arbitrary, we must have A^T b − A^T Ax = 0 or A^T Ax = A^T b.

Special case: If A is full (column) rank, then x = (A^T A)^{-1} A^T b.
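For a full-rank A, the normal equations, the pseudoinverse formula, and a library least squares solver all produce the same x, and the residual is orthogonal to R(A). A minimal sketch, assuming NumPy and random data of our choosing:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((8, 3))   # full column rank with probability 1
b = rng.standard_normal(8)

x_ne = np.linalg.solve(A.T @ A, A.T @ b)    # normal equations
x_pi = np.linalg.pinv(A) @ b                # x = A^+ b
x_ls = np.linalg.lstsq(A, b, rcond=None)[0]
assert np.allclose(x_ne, x_pi) and np.allclose(x_ne, x_ls)

# The residual r = b - Ax is orthogonal to R(A):
assert np.allclose(A.T @ (b - A @ x_ne), 0)
```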
8.3 Linear Regression and Other Linear Least Squares Problems

8.3.1 Example: Linear regression
Suppose we have m measurements (t_1, y_1), ..., (t_m, y_m) for which we hypothesize a linear (affine) relationship

   y = αt + β   (8.3)
[Figure 8.1. Projection of b on R(A). The sketch shows b, the residual r, p = Ax, and an arbitrary vector Ay ∈ R(A).]
for certain constants α and β. One way to solve this problem is to find the line that best fits the data in the least squares sense; i.e., with the model (8.3), we have
   y_1 = α t_1 + β + δ_1,
   y_2 = α t_2 + β + δ_2,
      ...
   y_m = α t_m + β + δ_m,
where δ_1, ..., δ_m are "errors" and we wish to minimize δ_1^2 + ... + δ_m^2. Geometrically, we are trying to find the best line that minimizes the (sum of squares of the) distances from the given data points. See, for example, Figure 8.2.
[Figure 8.2. Simple linear regression.]
Note that distances are measured in the vertical sense from the points to the line (as indicated, for example, for the point (t_1, y_1)). However, other criteria are possible. For example, one could measure the distances in the horizontal sense, or the perpendicular distance from the points to the line could be used. The latter is called total least squares. Instead of 2-norms, one could also use 1-norms or ∞-norms. The latter two are computationally
much more difficult to handle, and thus we present only the more tractable 2-norm case in the text that follows.

The m "error equations" can be written in matrix form as
   y = Ax + δ,

where

   y = [ y_1          A = [ t_1  1         x = [ α          δ = [ δ_1
         ...                ...  ...             β ],             ...
         y_m ],             t_m  1 ],                             δ_m ].
We then want to solve the problem

   min_x δ^T δ = min_x (Ax − y)^T (Ax − y)

or, equivalently,

   min_x ||δ||_2^2 = min_x ||Ax − y||_2^2.   (8.4)
Solution: x = [α; β] is a solution of the normal equations A^T Ax = A^T y where, for the special form of the matrices above, we have

   A^T A = [ Σ_i t_i^2   Σ_i t_i
             Σ_i t_i     m       ]

and

   A^T y = [ Σ_i t_i y_i
             Σ_i y_i     ].
The solution for the parameters α and β can then be written

   [ α          [ Σ_i t_i^2   Σ_i t_i ]^{-1} [ Σ_i t_i y_i
     β ]    =    [ Σ_i t_i     m      ]        Σ_i y_i     ].
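A minimal numerical sketch of the above formulas, assuming NumPy; the data points below are made up purely for illustration:

```python
import numpy as np

t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])     # roughly y = 2t + 1

A = np.column_stack([t, np.ones_like(t)])   # rows are [t_i, 1]
AtA = np.array([[np.sum(t**2), np.sum(t)],
                [np.sum(t),    len(t)   ]])
Aty = np.array([np.sum(t * y), np.sum(y)])

alpha, beta = np.linalg.solve(AtA, Aty)     # the normal equations above
assert np.allclose([alpha, beta], np.linalg.lstsq(A, y, rcond=None)[0])
```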
8.3.2 Other least squares problems
Suppose the hypothesized model is not the linear equation (8.3) but rather is of the form

   y = f(t) = c_1 φ_1(t) + ... + c_n φ_n(t).   (8.5)
In (8.5) the φ_i(t) are given (basis) functions and the c_i are constants to be determined to minimize the least squares error. The matrix problem is still (8.4), where we now have

   A = [ φ_1(t_1)  ...  φ_n(t_1)          x = [ c_1
         ...                                    ...
         φ_1(t_m)  ...  φ_n(t_m) ],             c_n ],

with y and δ as before.
An important special case of (8.5) is least squares polynomial approximation, which corresponds to choosing φ_i(t) = t^{i−1}, i ∈ n, although this choice can lead to computational
difficulties because of numerical ill conditioning for large n. Numerically better approaches are based on orthogonal polynomials, piecewise polynomial functions, splines, etc.

The key feature in (8.5) is that the coefficients c_i appear linearly. The basis functions φ_i can be arbitrarily nonlinear. Sometimes a problem in which the c_i's appear nonlinearly can be converted into a linear problem. For example, if the fitting function is of the form y = f(t) = c_1 e^{c_2 t}, then taking logarithms yields the equation log y = log c_1 + c_2 t. Then defining ŷ = log y, ĉ_1 = log c_1, and ĉ_2 = c_2 results in a standard linear least squares problem.

8.4 Least Squares and Singular Value Decomposition

In the numerical linear algebra literature (e.g., [4], [7], [11], [23]), it is shown that solution of linear least squares problems via the normal equations can be a very poor numerical method in finite-precision arithmetic. Since the standard Kalman filter essentially amounts to sequential updating of normal equations, it can be expected to exhibit such poor numerical behavior in practice (and it does). Better numerical methods are based on algorithms that work directly and solely on A itself rather than A^T A. Two basic classes of algorithms are based on SVD and QR (orthogonal-upper triangular) factorization, respectively. The former is much more expensive but is generally more reliable and offers considerable theoretical insight.

In this section we investigate solution of the linear least squares problem

   min_x ||Ax − b||_2,   A ∈ R^{m×n}, b ∈ R^m,   (8.6)

via the SVD. Specifically, we assume that A has an SVD given by A = U Σ V^T = U_1 S V_1^T as in Theorem 5.1. We now note that

   ||Ax − b||_2^2 = ||U Σ V^T x − b||_2^2
                = ||Σ V^T x − U^T b||_2^2    since || · ||_2 is unitarily invariant
                = ||Σ z − c||_2^2            where z = V^T x, c = U^T b
                = || [ S  0 ] [ z_1 ]  −  [ c_1 ] ||^2
                     [ 0  0 ] [ z_2 ]     [ c_2 ]    2
                = ||S z_1 − c_1||_2^2 + ||c_2||_2^2.

The last equality follows from the fact that if v = [v_1; v_2], then ||v||_2^2 = ||v_1||_2^2 + ||v_2||_2^2 (note that orthogonality is not what is used here; the subvectors can have different lengths). This explains why it is convenient to work above with the square of the norm rather than the norm itself; as far as the minimization is concerned, the two are equivalent. The above quantity is clearly minimized by taking z_1 = S^{-1} c_1. The subvector z_2 is arbitrary, while the minimum value of ||Ax − b||_2^2 is ||c_2||_2^2.
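The SVD development above translates directly into a short solver. A minimal sketch, assuming NumPy; the function name lstsq_svd and the rank tolerance are our choices:

```python
import numpy as np

def lstsq_svd(A, b, tol=1e-12):
    # Minimum 2-norm least squares solution x = V_1 S^{-1} U_1^T b via the SVD.
    U, s, Vt = np.linalg.svd(A, full_matrices=True)
    r = int(np.sum(s > tol * s[0]))            # numerical rank
    c1 = U[:, :r].T @ b
    x = Vt[:r, :].T @ (c1 / s[:r])             # taking z_2 = 0 gives the minimum-norm x
    residual = np.linalg.norm(U[:, r:].T @ b)  # ||c_2||_2
    return x, residual

rng = np.random.default_rng(4)
A, b = rng.standard_normal((7, 3)), rng.standard_normal(7)
x, res = lstsq_svd(A, b)
assert np.allclose(x, np.linalg.pinv(A) @ b)
assert np.isclose(res, np.linalg.norm(b - A @ x))
```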
Now transform back to the original coordinates:

   x = Vz = [V_1 V_2] [ z_1
                        z_2 ]
     = V_1 z_1 + V_2 z_2
     = V_1 S^{-1} c_1 + V_2 z_2
     = V_1 S^{-1} U_1^T b + V_2 z_2.

The last equality follows from

   c = U^T b = [ U_1^T b     = [ c_1
                 U_2^T b ]       c_2 ].

Note that since z_2 is arbitrary, V_2 z_2 is an arbitrary vector in R(V_2) = N(A). Thus, x has been written in the form x = A^+ b + (I − A^+ A) y, where y ∈ R^n is arbitrary. This agrees, of course, with (8.1).

The minimum value of the least squares residual is

   ||(I − AA^+) b||_2 = ||U_2 U_2^T b||_2

and we clearly have that

   minimum least squares residual is 0
   ⟺ b is orthogonal to all vectors in U_2
   ⟺ b is orthogonal to all vectors in R(A)^⊥
   ⟺ b ∈ R(A).

Another expression for the minimum residual is ||U_2^T b||_2. This follows easily since

   ||(I − AA^+) b||_2^2 = ||U_2 U_2^T b||_2^2 = b^T U_2 U_2^T U_2 U_2^T b = b^T U_2 U_2^T b = ||U_2^T b||_2^2.

Finally, an important special case of the linear least squares problem is the so-called full-rank problem, i.e., A ∈ R_n^{m×n}. In this case the SVD of A is given by A = U Σ V^T = [U_1 U_2] [S; 0] V_1^T, and there is thus "no V_2 part" to the solution.

8.5 Least Squares and QR Factorization

In this section, we again look at the solution of the linear least squares problem (8.6) but this time in terms of the QR factorization. This matrix factorization is much cheaper to compute than an SVD and, with appropriate numerical enhancements, can be quite reliable. To simplify the exposition, we add the simplifying assumption that A has full column rank, i.e., A ∈ R_n^{m×n}. It is then possible, via a sequence of so-called Householder or Givens transformations, to reduce A in the following way. A finite sequence of simple orthogonal row transformations (of Householder or Givens type) can be performed on A to reduce it to triangular form. If we label the product of such orthogonal row transformations as the orthogonal matrix Q^T ∈ R^{m×m}, we have

   Q^T A = [ R
             0 ],   (8.7)
where R ∈ R_n^{n×n} is upper triangular. Now write Q = [Q_1 Q_2], where Q_1 ∈ R^{m×n} and Q_2 ∈ R^{m×(m−n)}. Both Q_1 and Q_2 have orthonormal columns. Multiplying through by Q in (8.7), we see that

   A = Q [ R      = [Q_1 Q_2] [ R         (8.8)
           0 ]                  0 ]

     = Q_1 R.   (8.9)

Any of (8.7), (8.8), or (8.9) are variously referred to as QR factorizations of A. Note that (8.9) is essentially what is accomplished by the Gram-Schmidt process, i.e., by writing A R^{-1} = Q_1 we see that a "triangular" linear combination (given by the coefficients of R^{-1}) of the columns of A yields the orthonormal columns of Q_1.

Now note that

   ||Ax − b||_2^2 = ||Q^T Ax − Q^T b||_2^2    since || · ||_2 is unitarily invariant
                = || [ R ] x  −  [ c_1 ] ||^2
                     [ 0 ]       [ c_2 ]    2
                = ||Rx − c_1||_2^2 + ||c_2||_2^2.

The last quantity above is clearly minimized by taking x = R^{-1} c_1 and the minimum residual is ||c_2||_2. Equivalently, we have x = R^{-1} Q_1^T b = A^+ b and the minimum residual is ||Q_2^T b||_2.
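The QR development likewise yields a short solver for the full-column-rank case. A minimal sketch, assuming NumPy; the function name lstsq_qr is ours:

```python
import numpy as np

def lstsq_qr(A, b):
    # Full-column-rank least squares via A = Q_1 R: solve R x = Q_1^T b.
    Q1, R = np.linalg.qr(A, mode='reduced')
    x = np.linalg.solve(R, Q1.T @ b)
    return x, np.linalg.norm(b - A @ x)

rng = np.random.default_rng(5)
A, b = rng.standard_normal((7, 3)), rng.standard_normal(7)
x, res = lstsq_qr(A, b)
assert np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0])
```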
EXERCISES

1. For A ∈ R^{m×n}, b ∈ R^m, and any y ∈ R^n, check directly that (I − A^+ A) y and A^+ b are orthogonal vectors.

2. Consider the following set of measurements (x_i, y_i):

      (1, 2), (2, 1), (3, 3).

   (a) Find the best (in the 2-norm sense) line of the form y = αx + β that fits this data.
   (b) Find the best (in the 2-norm sense) line of the form x = αy + β that fits this data.

3. Suppose q_1 and q_2 are two orthonormal vectors and b is a fixed vector, all in R^n.

   (a) Find the optimal linear combination α q_1 + β q_2 that is closest to b (in the 2-norm sense).
   (b) Let r denote the "error vector" b − α q_1 − β q_2. Show that r is orthogonal to both q_1 and q_2.

4. Find all solutions of the linear least squares problem

      min_x ||Ax − b||_2

   when A = [ 1  1; 0  0 ] and b = [ 1; 1 ].

5. Consider the problem of finding the minimum 2-norm solution of the linear least squares problem

      min_x ||Ax − b||_2

   when A = [ 1  1; 0  0 ] and b = [ 1; 1 ]. The solution is x* = A^+ b.

   (a) Consider a perturbation E_1 = [ 0  0; δ  0 ] of A, where δ is a small positive number. Solve the perturbed version of the above problem,

         min_y ||A_1 y − b||_2,

      where A_1 = A + E_1. What happens to ||x* − y||_2 as δ approaches 0?

   (b) Now consider the perturbation E_2 = [ 0  δ; 0  0 ] of A, where again δ is a small positive number. Solve the perturbed problem

         min_z ||A_2 z − b||_2,

      where A_2 = A + E_2. What happens to ||x* − z||_2 as δ approaches 0?

6. Use the four Penrose conditions and the fact that Q_1 has orthonormal columns to verify that if A ∈ R_n^{m×n} can be factored in the form (8.9), then A^+ = R^{-1} Q_1^T.

7. Let A ∈ R^{n×n}, not necessarily nonsingular, and suppose A = QR, where Q is orthogonal. Prove that A^+ = R^+ Q^T.
Chapter 9

Eigenvalues and Eigenvectors

9.1 Fundamental Definitions and Properties

Definition 9.1. A nonzero vector x ∈ C^n is a right eigenvector of A ∈ C^{n×n} if there exists a scalar λ ∈ C, called an eigenvalue, such that

   Ax = λx.   (9.1)

Similarly, a nonzero vector y ∈ C^n is a left eigenvector corresponding to an eigenvalue μ if

   y^H A = μ y^H.   (9.2)

By taking Hermitian transposes in (9.1), we see immediately that x^H is a left eigenvector of A^H associated with λ̄. Note that if x [y] is a right [left] eigenvector of A, then so is αx [αy] for any nonzero scalar α ∈ C. One often-used scaling for an eigenvector is α = 1/||x|| so that the scaled eigenvector has norm 1. The 2-norm is the most common norm used for such scaling.

Definition 9.2. The polynomial π(λ) = det(A − λI) is called the characteristic polynomial of A. (Note that the characteristic polynomial can also be defined as det(λI − A). This results in at most a change of sign and, as a matter of convenience, we use both forms throughout the text.)

The following classical theorem can be very useful in hand calculation. It can be proved easily from the Jordan canonical form to be discussed in the text to follow (see, for example, [21]) or directly using elementary properties of inverses and determinants (see, for example, [3]).

Theorem 9.3 (Cayley-Hamilton). For any A ∈ C^{n×n}, π(A) = 0.

Example 9.4. Let A be a 2 × 2 matrix with π(λ) = λ^2 + 2λ − 3. It is an easy exercise to verify that π(A) = A^2 + 2A − 3I = 0.
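Theorem 9.3 is easy to check numerically for any particular matrix. A minimal sketch, assuming NumPy and an arbitrary 2 × 2 example of ours; note that np.poly returns the coefficients of det(λI − A), which differs from det(A − λI) only by a sign and therefore has the same matrix roots:

```python
import numpy as np

A = np.array([[1., 2.],
              [3., 4.]])
coeffs = np.poly(A)  # characteristic polynomial coefficients, highest power first

# Evaluate pi(A); by the Cayley-Hamilton theorem this must be the zero matrix.
piA = sum(c * np.linalg.matrix_power(A, k)
          for k, c in zip(range(len(coeffs) - 1, -1, -1), coeffs))
assert np.allclose(piA, 0)
```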
It can be proved from elementary properties of determinants that if A ∈ C^{n×n}, then π(λ) is a polynomial of degree n. Thus, the Fundamental Theorem of Algebra says that π(λ) has n roots, possibly repeated. These roots, as solutions of the determinant equation

   π(λ) = det(A − λI) = 0,   (9.3)

are the eigenvalues of A and imply the singularity of the matrix A − λI, and hence further guarantee the existence of corresponding nonzero eigenvectors.

If A ∈ R^{n×n}, then π(λ) has real coefficients. Hence the roots of π(λ), i.e., the eigenvalues of A, must occur in complex conjugate pairs.

Example 9.5. Let α, β ∈ R and let A = [ α  β; −β  α ]. Then π(λ) = λ^2 − 2αλ + α^2 + β^2 and A has eigenvalues α ± βj (where j = i = sqrt(−1)).

If A ∈ R^{n×n}, then there is an easily checked relationship between the left and right eigenvectors of A and A^T (take Hermitian transposes of both sides of (9.2)). Specifically, if y is a left eigenvector of A corresponding to λ ∈ Λ(A), then ȳ is a right eigenvector of A^T corresponding to λ̄ ∈ Λ(A). Note, too, that by elementary properties of the determinant, we always have Λ(A) = Λ(A^T), but that Λ(A) = Λ(Ā) only if A ∈ R^{n×n}.

Definition 9.6. The spectrum of A ∈ C^{n×n} is the set of all eigenvalues of A, i.e., the set of all roots of its characteristic polynomial π(λ). The spectrum of A is denoted Λ(A).

Let the eigenvalues of A ∈ C^{n×n} be denoted λ_1, ..., λ_n. Then if we write (9.3) in the form

   π(λ) = det(A − λI) = (λ_1 − λ) ··· (λ_n − λ)   (9.4)

and set λ = 0 in this identity, we get the interesting fact that det(A) = λ_1 · λ_2 ··· λ_n (see also Theorem 9.25).

Definition 9.7. If λ is a root of multiplicity m of π(λ), we say that λ is an eigenvalue of A of algebraic multiplicity m. The geometric multiplicity of λ is the number of associated independent eigenvectors = n − rank(A − λI) = dim N(A − λI).

If λ ∈ Λ(A) has algebraic multiplicity m, then 1 ≤ dim N(A − λI) ≤ m. Thus, if we denote the geometric multiplicity of λ by g, then we must have 1 ≤ g ≤ m.

Definition 9.8. A matrix A ∈ R^{n×n} is said to be defective if it has an eigenvalue whose geometric multiplicity is not equal to (i.e., less than) its algebraic multiplicity. Equivalently, A is said to be defective if it does not have n linearly independent (right or left) eigenvectors.
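Geometric multiplicity is computable as n − rank(A − λI). A minimal sketch, assuming NumPy; the block matrix below (an arbitrary example of the kind illustrated in Example 9.10 to follow) has eigenvalue 2 with algebraic multiplicity 4 but geometric multiplicity 2:

```python
import numpy as np

J = np.array([[2., 1., 0., 0.],
              [0., 2., 0., 0.],
              [0., 0., 2., 1.],
              [0., 0., 0., 2.]])
lam = 2.0
g = J.shape[0] - np.linalg.matrix_rank(J - lam * np.eye(4))
print(g)  # geometric multiplicity: 2
```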
From the Cayley-Hamilton Theorem, we know that π(A) = 0. However, it is possible for A to satisfy a lower-order polynomial. For example, if A = [ 1  0; 0  1 ], then A satisfies (λ − 1)^2 = 0. But it also clearly satisfies the smaller degree polynomial equation (λ − 1) = 0.

Definition 9.9. The minimal polynomial of A ∈ R^{n×n} is the polynomial α(λ) of least degree such that α(A) = 0.

It can be shown that α(λ) is essentially unique (unique if we force the coefficient of the highest power of λ to be +1, say; such a polynomial is said to be monic and we generally write α(λ) as a monic polynomial throughout the text). Moreover, it can also be shown that α(λ) divides every nonzero polynomial β(λ) for which β(A) = 0. In particular, α(λ) divides π(λ).

There is an algorithm to determine α(λ) directly (without knowing eigenvalues and associated eigenvector structure). Unfortunately, this algorithm, called the Bezout algorithm, is numerically unstable.

Example 9.10. The above definitions are illustrated below for a series of matrices, each of which has an eigenvalue 2 of algebraic multiplicity 4, i.e., π(λ) = (λ − 2)^4. We denote the geometric multiplicity by g.

   A = [ 2  1  0  0
         0  2  1  0
         0  0  2  1
         0  0  0  2 ]   has α(λ) = (λ − 2)^4 and g = 1.

   A = [ 2  1  0  0
         0  2  1  0
         0  0  2  0
         0  0  0  2 ]   has α(λ) = (λ − 2)^3 and g = 2.

   A = [ 2  1  0  0
         0  2  0  0
         0  0  2  0
         0  0  0  2 ]   has α(λ) = (λ − 2)^2 and g = 3.

   A = [ 2  0  0  0
         0  2  0  0
         0  0  2  0
         0  0  0  2 ]   has α(λ) = (λ − 2) and g = 4.

At this point, one might speculate that g plus the degree of α must always be five. Unfortunately, such is not the case. The matrix

   A = [ 2  1  0  0
         0  2  0  0
         0  0  2  1
         0  0  0  2 ]   (9.5)

has α(λ) = (λ − 2)^2 and g = 2.

Theorem 9.11. Let A ∈ C^{n×n} and let λ_i be an eigenvalue of A with corresponding right eigenvector x_i. Furthermore, let y_j be a left eigenvector corresponding to any λ_j ∈ Λ(A) such that λ_j ≠ λ_i. Then y_j^H x_i = 0.

Proof: Since A x_i = λ_i x_i,

   y_j^H A x_i = λ_i y_j^H x_i.   (9.6)

Similarly, since y_j^H A = λ_j y_j^H,

   y_j^H A x_i = λ_j y_j^H x_i.   (9.7)

Subtracting (9.6) from (9.7), we find 0 = (λ_j − λ_i) y_j^H x_i. Since λ_j − λ_i ≠ 0, we must have y_j^H x_i = 0. □

The proof of Theorem 9.11 is very similar to two other fundamental and important results.

Theorem 9.12. Let A ∈ C^{n×n} have distinct eigenvalues λ_1, ..., λ_n with corresponding right eigenvectors x_1, ..., x_n. Then {x_1, ..., x_n} is a linearly independent set. The same result holds for the corresponding left eigenvectors.

Proof: For the proof see, for example, [21, p. 118]. □

Theorem 9.13. Let A ∈ C^{n×n} be Hermitian, i.e., A = A^H. Then all eigenvalues of A must be real.

Proof: Suppose (λ, x) is an arbitrary eigenvalue/eigenvector pair such that Ax = λx. Premultiplying by x^H gives x^H Ax = λ x^H x. Taking the Hermitian transpose of this equation and using the fact that A is Hermitian yields x^H Ax = λ̄ x^H x. Hence λ x^H x = λ̄ x^H x. However, since x is an eigenvector, we have x^H x ≠ 0, from which we conclude λ = λ̄, i.e., λ is real. □

Theorem 9.14. Let A ∈ C^{n×n} be Hermitian and suppose λ and μ are distinct eigenvalues of A with corresponding right eigenvectors x and z, respectively. Then x and z must be orthogonal.

Proof: Premultiply the equation Ax = λx by z^H to get z^H Ax = λ z^H x. Take the Hermitian transpose of this equation and use the facts that A is Hermitian and λ is real to get x^H Az = λ x^H z. Premultiply the equation Az = μz by x^H to get x^H Az = μ x^H z = λ x^H z. Since λ ≠ μ, we must have that x^H z = 0, i.e., the two vectors must be orthogonal. □

Let us now return to the general case. If A ∈ C^{n×n} has distinct eigenvalues, and if λ_i ∈ Λ(A), then by Theorem 9.11, x_i is orthogonal to all y_j's for which j ≠ i. However, it cannot be the case that y_i^H x_i = 0 as well, or else x_i would be orthogonal to n linearly independent vectors (by Theorem 9.12) and would thus have to be 0, contradicting the fact that it is an eigenvector. Since y_i^H x_i ≠ 0 for each i, we can choose the normalization of the x_i's, or the y_i's, or both, so that y_i^H x_i = 1 for i ∈ n.

Theorem 9.15. Let A ∈ C^{n×n} have distinct eigenvalues λ_1, ..., λ_n and let the corresponding right eigenvectors form a matrix X = [x_1, ..., x_n]. Similarly, let Y = [y_1, ..., y_n] be the matrix of corresponding left eigenvectors. Furthermore, suppose that the left and right eigenvectors have been normalized so that y_i^H x_i = 1, i ∈ n. Finally, let Λ = diag(λ_1, ..., λ_n) ∈ C^{n×n}. Then A x_i = λ_i x_i, i ∈ n, can be written in matrix form as

   AX = XΛ,   (9.8)

while y_i^H x_j = δ_ij, i ∈ n, j ∈ n, is expressed by the equation

   Y^H X = I.   (9.9)

These matrix equations can be combined to yield the following matrix factorizations:

   X^{-1} A X = Λ   (9.10)

and

   A = X Λ X^{-1} = X Λ Y^H = Σ_{i=1}^n λ_i x_i y_i^H.   (9.11)

Example 9.16. Let A ∈ R^{3×3} with

   π(λ) = det(A − λI) = −(λ^3 + 4λ^2 + 9λ + 10) = −(λ + 2)(λ^2 + 2λ + 5),

from which we find Λ(A) = {−2, −1 ± 2j}. We can now find the right and left eigenvectors corresponding to these eigenvalues. For λ_1 = −2, solve the 3 × 3 linear system (A − (−2)I) x_1 = 0 for x_1. Note that one component of x_1 can be set arbitrarily, and this then determines the other two (since dim N(A − (−2)I) = 1). To get the corresponding left eigenvector y_1, solve the linear system y_1^T (A + 2I) = 0 and choose the arbitrary scale factor for y_1 so that y_1^H x_1 = 1. For λ_2 = −1 + 2j, solve the linear system (A − (−1 + 2j)I) x_2 = 0 to get x_2, then solve y_2^H (A − (−1 + 2j)I) = 0 and normalize y_2 so that y_2^H x_2 = 1. For λ_3 = −1 − 2j, we could proceed to solve linear systems as for λ_2. However, we can also note that x_3 = x̄_2 and, for left eigenvectors, y_3 = ȳ_2. To see this, use the fact that λ_3 = λ̄_2 and simply conjugate the equation A x_2 = λ_2 x_2 to get A x̄_2 = λ̄_2 x̄_2. A similar argument yields the result for left eigenvectors. With the matrix X of right eigenvectors so assembled, it is then easy to verify that

   X^{-1} A X = Λ = [ −2    0         0
                       0   −1 + 2j    0
                       0    0        −1 − 2j ].

Other results in Theorem 9.15 can also be verified. Finally, note that we could have solved directly only for x_1 and x_2 (and x_3 = x̄_2); instead of determining the y_i's directly, we could have found them instead by computing X^{-1} and reading off its rows.

Example 9.17. Let A ∈ R^{3×3} with

   π(λ) = det(A − λI) = −(λ^3 + 8λ^2 + 19λ + 12) = −(λ + 1)(λ + 3)(λ + 4),

from which we find Λ(A) = {−1, −3, −4}. Proceeding as in the previous example, it is straightforward to compute the matrix X of right eigenvectors and its inverse X^{-1}, and to verify that

   X^{-1} A X = Λ = diag(−1, −3, −4),

which is equivalent to the dyadic expansion

   A = Σ_{i=1}^3 λ_i x_i y_i^H.

Theorem 9.18. Eigenvalues (but not eigenvectors) are invariant under a similarity transformation T.

Proof: Suppose (λ, x) is an eigenvalue/eigenvector pair such that Ax = λx. Then, since T is nonsingular, we have the equivalent statement (T^{-1} A T)(T^{-1} x) = λ (T^{-1} x), from which the theorem statement follows. For left eigenvectors we have a similar statement, namely y^H A = λ y^H if and only if (T^H y)^H (T^{-1} A T) = λ (T^H y)^H. □

Remark 9.19. If f is an analytic function (e.g., f(x) is a polynomial, or e^x, or sin x, or, in general, representable by a power series Σ_{n=0}^∞ a_n x^n), then it is easy to show that the eigenvalues of f(A) (defined as Σ_{n=0}^∞ a_n A^n) are f(λ), but f(A) does not necessarily have all the same eigenvectors (unless, say, A is diagonalizable). For example, A = [ 0  1; 0  0 ] has only one right eigenvector corresponding to the eigenvalue 0, but A^2 = [ 0  0; 0  0 ] has two independent right eigenvectors associated with the eigenvalue 0. What is true is that the eigenvalue/eigenvector pair (λ, x) maps to (f(λ), x) but not conversely.

The following theorem is useful when solving systems of linear differential equations. Details of how the matrix exponential e^{tA} is used to solve the system ẋ = Ax are the subject of Chapter 11.

Theorem 9.20. Let A ∈ R^{n×n} and suppose X^{-1} A X = Λ, where Λ is diagonal. Then

   e^{tA} = X e^{tΛ} X^{-1} = Σ_{i=1}^n e^{λ_i t} x_i y_i^H.

Proof: Starting from the definition, we have

   e^{tA} = Σ_{k=0}^∞ (tA)^k / k! = Σ_{k=0}^∞ t^k (X Λ X^{-1})^k / k! = X ( Σ_{k=0}^∞ t^k Λ^k / k! ) X^{-1} = X e^{tΛ} X^{-1} = Σ_{i=1}^n e^{λ_i t} x_i y_i^H. □

The following corollary is immediate from the theorem upon setting t = 1.

Corollary 9.21. If A ∈ R^{n×n} is diagonalizable with eigenvalues λ_i, i ∈ n, and right eigenvectors x_i, i ∈ n, then e^A has eigenvalues e^{λ_i}, i ∈ n, and the same eigenvectors.

There are extensions to Theorem 9.20 and Corollary 9.21 for any function that is analytic on the spectrum of A, i.e., f(A) = X f(Λ) X^{-1} = X diag(f(λ_1), ..., f(λ_n)) X^{-1}.

It is desirable, of course, to have a version of Theorem 9.20 and its corollary in which A is not necessarily diagonalizable. It is necessary first to consider the notion of Jordan canonical form, from which such a result is then available and presented later in this chapter.

9.2 Jordan Canonical Form

Theorem 9.22. Jordan Canonical Form (JCF): For all A ∈ C^{n×n} with eigenvalues λ_1, ..., λ_n ∈ C (not necessarily distinct), there exists X ∈ C_n^{n×n} such that

   X^{-1} A X = J = diag(J_1, ..., J_q),   (9.12)

where each of the Jordan block matrices J_1, ..., J_q is of the form

   J_i = [ λ_i  1
                λ_i  1
                     ·   ·
                         λ_i  1
                              λ_i ],   (9.13)
. Jordan Canonical Form and L. and where M. .. . (Xii±jpieA(A>). ~: ] and I = \0 A in the case of complex conjugate eigenvalues a ± jfJi E A(A).. ... With 1 j o j o 1 o o o j ~ ~] 0 1 ' . ] T (X . For nontrivial Jordan blocks.JfJ =[ (X fJ fJ ] (X = M. { ] allow us to go back and forth between real JCF and its complex counterpart: TI [ (X + jfJ o O.~xn necessarily distinct). Jordan Canonical Form 9. Proof: For the proof see.=1 ki = n. complicated. [21. for example.9. . there exists X € R" xn such that (9. 120124]. the situation is only a bit more complicated. pp.An n (not € jRnxn Xi... = [ _»' ^ 1 and h2 = [6 ~] in the case of complex conjugate eigenvalues where Mi = [ _~. .14) J\. e A (A).2.. Real Jordan Canonical Form: For all A E R n x " with eigenvalues AI. . X (not necessarily X E lR.. Proof: proof D 0 Transformations like T = [ _~ "•{"]allow us to go back and forth between aareal JCF Transformations like T = I" _. 83 83 Form: 2..2. Jq is of the form of in the case of real eigenvalues A.. 1q is of form where each of the Jordan block matrices 11.
. . and 2).. det(A) = nAi.7x7 is known to have 7r(A) = (A . Then Theorem 9. 1.. The minimal polynomial of a matrix is the product of the elementary divisors of divisors. . D 0 Example 9. 1)4(A 2) and 2 2 et(A) = (A . " Xn. 2.2(A(A . x Theorem 9.2)3 3and is known to have :rr(A) Example 9.(A 1). and (A . (A1).23.1)z. From Theorem 9. Thus. 1). Then c n 1. . The characteristic polynomial of a matrix is the product of its elementary divisors.. The characteristic polynomials of the Jordan blocks defined in Theorem 9. from Theorem 9.22 are called the elementary divisors or invariant factors of A. An. J(2) has elementary divisors (A while /( 2) haselementary divisors (A . highest degree corresponding to distinct eigenvalues. X XI. and (A .I)(A (A2)2. from Theorem 9.24.— I) (A.25. 1). Thus. i=1 n 2.23. Thus. From Theorem 9.. The minimal polynomial of a matrix is the product of the elementary divisors of highest degree corresponding to distinct eigenvalues. . Let A E nxn with eigenvalues AI.26. The characteristic polynomial of a matrix is the product of its elementary Theorem 9.2).24.2)2. . 1 det(A) = det(X J XI) det(J) = n7=1 Ai.22 we have that A X J XI.. 9.1)2. Then has two possible JCFs (not counting reorderings of the diagonal blocks): diagonal blocks): 1 J(l) = 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 2 0 0 0 0 0 0 0 0 0 0 0 1 2 0 0 0 0 1 0 0 0 and f2) 0 0 0 0 0 1 0 0 0 0 0 2 = 0 0 0 0 0 0 I 1 0 0 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2 0 0 0 0 0 1 0 2 0 J(l) has elementary divisors (A Note that 7(1) haselementary divisors (A ..22 l Tr(A) = Tr(X J XI) Tr(JX. i=1 l Proof: Proof: 1.. 1)2(A .jf3 0 ]T~[~ l h M Definition 9.25.) = (A.1)2..jf3 0 0 et .22 we have that A = X J X ~ . 2 .2).26.1)4(A .. Again. Eigenvalues and Eigenvectors T.2).. Theorem 9.84 it is easily checked that it is easily checked that Chapter 9. . Eigenvalues and Eigenvectors Chapter 9. Let A e C" " with eigenvalues AI.2)2. Tr(A) = Tr(XJX~ ) = TrC/X"1 X) = Tr(/) = £"=1 A.22 are called the elementary divisors or invariant factors of A. Suppose A E E (A.(A.)i. I) . 2)2..) = det(7) = ]~["=l A.1*) = Tr(J) = L7=1 Ai. and (A (A .I [ "+ jfi 0 0 0 et + jf3 0 0 0 0 et . Then AAhas two possible JCFs (not counting reorderings of the a (A. The characteristic polynomials of the Jordan blocks defined in Theorem Definition 9.(A. Tr(A) = 2. Suppose A e lR.2)2. — 2) . . 2(A(A. I) . . det(A) = det(XJX.22 we have that A = X JJX ~ l .
9. The more interesting (and difficult) case occurs when Ai multiplicity A.ulx = 0 and (A .. so the eigenvalue 3 has two eigenvectors associated with it. and rank(A al) vectors. For each distinct eigenvalue Ai.. determine a 0 0 0 0 0 0 0 a 0 0 0 0 0 a 0 0 0 0 Al= 0 0 0 a 0 0 0 0 0 0 a 0 0 0 0 0 0 0 a 0 0 0 0 0 0 1 a A2 = a 0 0 0 0 0 0 0 a 0 0 0 0 0 a 0 0 0 0 0 0 0 a 0 0 0 0 0 0 a 0 0 0 0 0 0 a 0 0 0 0 0 0 0 a 4. when X.27.29. To get a third vector JC3 such that X [x\ X2 XT.. three eigen7r(A. If we let [~l ~2 ~3]T associated If [^i £2 &]T denote a solution to the linear system (A — 3l)~ = 0. For example. X e A(A) if and only if (A XI)kx = 0 and (A U}k~l x ^ 0.a(A) = (A . three eigenboth have rr(A) = (A .A. a(A).e. is of algebraic multiplicity greater than one. Let A E C"xn (or R"x"). we find that 2£2 + ~3 = O. The matrices A uniquely.A.e. Remark 9. An analogous definition holds for a left principal vector of degree k. X principal Definition 9.7) = n . we find that 2~2 + £ 3= 0 . Determination of the JCF 9.nxn number of eigenvectors. when Ai is simple.(7).3.l). it then has precisely one eigenvector.27.] are eigenvectors (and are independent). a)7. of course.7) for distinct A. of course.) = (A. 1. Then x is a right principal vector of degree k degree associated with A E A (A) ifand only if(A . i.. determine the JCF of A uniquely. left k. is not sufficient to Example 9.— a) .).nxn). associated independent right (or left) eigenvectors is given by dim A^(A . suppose suppose A = [3 2 0 o Then Then 3 0 A3I= U2 I] o o 0 0 n has rank 1.29. and rank (A A. of algebraic multiplicity 1. The straightforward case is.. i.3 Determination of the JCF Determination of the JCF The first critical item of information in determining the JCF of a matrix A E Wlxn is its A e ]R. i.rank(A .28. A e nxn ]R. a(A. i.— al) == 4.— a) and rank(A .e.. Thus. both denote a solution to the linear system (A . 9. Definition 9.. and rank(A —Ai l) for distinct Ai is not sufficient to rr(A)..3/)£ = 0. a (A). Knowing TT (A. the associated number of linearly A.3. we need the notion of principal vector.is simple. To get a third vector X3 such that X = [Xl KJ_ X3] reduces A to JCF.l) independent right — — A. Thus.) = (A.. a)\ .AI)klx i= o.28. of algebraic multiplicity 1. both are eigenvectors (and are independent).. Determination of the JCF 85 &5 Example 9. c . it The straightforward case is.e. eigenvectors dimN(A — A. Remark 9.
which simply says that x(!) is a right Ax(1) = hx(1) x (1) (2) x(2). (It may be necessary to take a linear of x(l) R(A . we find (A If we premultiply XI) x = (A XI)x = 0. of — AI.17) by (A . solutions solutions to the homogeneous equation (A . (9. Eigenvalues and Eigenvectors synonymously "of 2. if of . consider a determining 2 x Jordan [~ i].. ji of dimension k or larger. If the algebraic multiplicity of If A principal need X is greater than its geometric multiplicity. Theother solution (A . Thus. eigenvector. If. principal vectors still need to be computed from succeeding steps. Principal vectors are sometimes also called generalized eigenvectors. for get righthand example. See. Then the equation AX = XJ can be written that reduces a matrix A to this JCF. k = eigenvector. The other solution necessary is the desired principal vector of degree 2. Solve (A . For example.A1)x(2) = x(l). One of these solutions (A — AI)2 x (2) x(l) (1= 0). Denote by x(1) and x(2) the two columns of a matrix X e R2.X I ) .'A1)22xx(l) = (A . of course. this rank is n . determine all eigenvalues of A e R" x " nxn ). First. 4. E lR nxn This suggests a "general" procedure. x(l). of k 5. different term will be assigned a much different meaning in Chapter 12. since (A .e. there is only one eigenvector. The phrase "of grade k" is often used synonymously with "of degree k. For each independent jc (1) . Exercise 7.A1)2 x(2) = (A . The case k = 1 corresponds to the "usual" eigenvector.XI.3.A/) = — multiplicity of rank(A — XI) = n . Thus.1 9. 9. The second column yields the following equation for x . The number of eigenvectors depends on the rank of A .A1)X(l) = O. of The number of linearly independent solutions at this step depends on the rank of 2 (A . there are two linearly independent n — o." "of often 3.AI). A right (or left) principal vector of degree k is associated with a Jordan block J. Denote by x(l) and x(2) the two columns of a matrix X E lR~X2 2x2 2 Jordan block{h0 h1. See. principal vectors of degree 1) associated with A. by (A .17) The first column yields the equation Ax(!) = AX(!). wefind(A. If we premultiply (9. A E A(A) following: (or C ).XI)0 = 0. S. the definition of principal vector is satisfied.1.3. 2. if X.A/)x(2) = x(l).2. (A — uf.) .X I ) ( l ) = (A AI)O o. is. Then the equation AX = X J can be written A [x(l) x(2)] = [x(l) X(2)] [~ ~ J.86 Chapter 9.1 Theoretical computation Theoretical computation To motivate the development of a procedure for determining principal vectors.XI).x2 A JCF.A1)X(l) = O. I) associated This step finds all the eigenvectors (i. but the latter generalized eigenvectors.XI)2x^ = 0. Then for each distinct X e A (A) perform the following: z (2) w c 1. combination of jc(1) vectors to get a righthand side that is in 7£(A — XI). Eigenvalues and Eigenvectors Chapter 9. the principal vector second of degree 2: of degree (A . for example. solve (A .AI). x(l) (^ 0).
First. Let Example 9. and the interested student is strongly urged to consult the classical and very to compute a JCF. Unfortunately. Theorem 9.. Notice that highquality mathematical software such as MATLAB does not offer a j cf command. and A3 = 2. Determination of eigenvectors more extensive treatments.30. with the distinct eigenvalues 1 and 2. For Unfortunately. Principal vectors associated with different Jordan blocks are linearly independent. . this naturallooking procedure can fail to find all Jordan vectors. x (k) } is a linearly independent set.(1) (A . for example. vectors is equal to the algebraic multiplicity of A.. There are significant numerical difficulties inherent in attempting to compute a JCF. . where the chain of vectors x(i) is constructed as above. find the eigenvectors associated with the distinct eigenvalues 1 and 2. Let A=[~ 0 2 ] . . {x(l).9..32. Then Theorem 9. Continue in this way until the total number of independent eigenvectors and principal 4. Let X = [[x(l). First.. Principal vectors associated with different Jordan blocks are linearly indeTheorem 9. For each independent x(2) from step 2. although a jordan command is available in MATLAB's Symbolic Toolbox. . h2 = 1. although a j ardan command is available in MATLAB'S does not offer a jcf command.1. . Symbolic Toolbox.32. X(k)]. For more extensive treatments. and the interested student is strongly urged to consult the classical and very readable [8] to learn why. Determination of eigenvectors and principal vectors is obviously very tedious for anything beyond simple problems (n = 2 and principal vectors is obviously very tedious for anything beyond simple problems (n = 2 or 3. say). pendent. Notice that highquality mathematical software such as MATLAB readable [8] to learn why..3. Determination of the JCF 3. A2 = 1.. for example.2/)x3(1)= 0 yields (A . There are significant numerical difficulties inherent in attempting generally prove unreliable. Theorem 9.AI) = k .2I)x~1) = 0 yields . solve 3. Let X = x ( l ) . [20] and [21]. . find the eigenvectors associated The eigenvalues of A are A1 = I.33. Suppose A E C kxk has an eigenvalue A of algebraic multiplicity kkand suppose further that rank(A — AI) = k — 1. Example 9. Theorem 9.. 0 The eigenvalues of A are AI = 1. .30. see.33.31. say). . x(k)]. 1 . Determination of the JCF 9. 4. [20] and [21]. Then vectors x(i) is constructed as above. . Attempts to do such calculations in finiteprecision floatingpoint arithmetic generally prove unreliable. . where the chain of suppose further that rank(A .3. Suppose A e Ckxk has an eigenvalue A. X(k)} is a linearly independent set.31. Attempts to do such calculations in finiteprecision floatingpoint arithmetic or 3. see. solve (A AI)x(3) 87 = x(2). and h3 = 2. Continue in this way until the total number of independent eigenvectors and principal vectors is equal to the algebraic multiplicity of A. this naturallooking procedure can fail to find all Jordan vectors. (x (1) . of algebraic multiplicity and Theorem 9. For each independent X(2) from step 2.
dn be a nonsingular "scaling" matrix..3. .1I)xl ) = xiI) to get (A – l/)x. but the result clearly holds for any JCF.. 0 !b.so long as they are nonzero. (1) toeet x.(2) = x. solve To find a principal vector of degree 2 associated with the multiple eigenvalue 1.3. For the sake of definiteness. d n)) be a nonsingular "scaling" matrix. d. we 1 's but can be arbitrary — so long as they are nonzero. . For the sake of defmiteness.. Then Let D = diag(d" . 0 1 = [xiI) 0 xl" xl"] ~ [ ~ 5 ] and XlAX 5 3 0 Then it is easy to check that Then it is easy to check that l 1 X'~U i 1 =[ ~ I 0 0 n 9..2 On the +1 's in JCF blocks 's JCF In this subsection we show that the nonzero superdiagonal elements of a JCF need not be In this subsection we show that the nonzero superdiagonal elements of a JCF need not be 1's but can be arbitrary . solve 2 (A .11)x?J = 0 yields (A. 0 = 0 A dn dn I 2 0 dn dn I A 0 ). Eigenvalues and Eigenvectors To find a principal vector of degree 2 associated with the multiple eigenvalue 1.. consider below the case of a single Jordan block. Then A 4l. Now let Now let X (2) =[ 0 ] ~ . d. .88 (A .. =0 yields (1) Chapter 9. but the result clearly holds for any JCF.2 9. . Suppose A € Rnxn and SupposedA E jRnxn and Let D diag(d1. . we consider below the case of a single Jordan block.. 0 0 D'(X' AX)D = D' J D = j ).l/)x.
where AS is defined as the transformation. Definition 9. .18) 0 I 1 0 0 can be used to put the superdiagonal elements in the subdiagonal instead if that is desired: to superdiagonal elements in instead desired: A I 0 0 A 0 A 0 A 0 0 A 0 p[ A p= 0 1 0 0 A 0 I A A 0 0 0 A 9. E6 N(A . mdistinct. the reverseorder identity matrix (or exchange matrix) 0 p = pT = p[ = 0 I 0 (9. Specifically. A subspace S c V is A invariant if AS c S... . Such a decomposition is given in the following theorem... Suppose A E R"x" has characteristic polynomial 9.Am)Vm with Ai.nxn (or nxn to JCF provides change of basis with respect to which the matrix is diagonal or block diagonal.A.. dimN(A — AJ)Vi = ni.Amtm c and minimal polynomial a(A) = (A . . . dnxn}.. . E6 N (A .34.. Theorem 9.n = N(A = N (A .. dnxn].4 Geometric Aspects of the JCF Geometric Aspects of the JCF The matrix X that reduces a matrix A E IR"X"(or C nxn)) to aalCF provides aachange of basis X e jH.. Suppose e jH.. Geometric Aspects of the JCF 89 di's Appropriate choice of the di 's then yields any desired nonzero superdiagonal elements. Let V be a vector space over F and suppose A : V —>• V is a linear Definition 9. x n eigenvectors and principal vectors that reduces A to its lCF. Then jH... Am distinct. Then AI.35..4..4 9. set {As s E S}. It is thus natural to expect an with respect to which the matrix is diagonal or block diagonal... Let IF and suppose + transformation.. It is thus natural to expect an associated direct sum decomposition of R. Specifically. . (A .9.. Note that dimM(A .n. A. In a similar fashion. the reverseorder identity matrix (or exchange matrix) In a similar fashion.nxn n(A) = (A .. j is obtained from A via the and principal vectors that reduces A to its JCF.. J is obtained from A via the similarity transformation XD = \d\x\.35.4. A subspace S ~ V is Ainvariant if AS ~ S. Geometric Aspects of the JCF 9.AlIt) E6 . .A[)n) .AmItm ...Am I) Vm . Such a decomposition is given in the following associated direct sum decomposition of jH.A1I) v) E6 . .. interpreted This result can also be interpreted in terms of the matrix X = [x\.. similarity transformation XD [d[x[.xn]] of eigenvectors = [x[../) w = «. where AS is defined as the set {As:: s e S}.A[)V) '" (A .34.
Let peA) = «o/ + o?i A + • + <xqA be a polynomial in A.37. is Ainvariant. If F = NI ® • • 0 m A// is Ainvariant. Then N(p(A)) and R(p(A)) 7£(p(A)) are Ainvariant. then S <S is Ainvariant if and only if there exists M E ]Rkxk such that eRkxk (9.. .90 Chapter 9. Let 7."" Jik. i..lt. Note that AXi = A*.e. € C"x"' be a Jordan basis for N(AT — A. . such "canonical" forms are discussed in text that follows. we have that AXi Theorem 9. i. each Ji = diag(JiI.. /. 2. Eigenvalues and Eigenvectors Chapter 9. for N(A . If A has distinct The Jordan canonical form is a special case of the above theorem.. /. and S e R" xk s\. If V is a vector space over IF such that V = N\ EB . (i... S is Ainvariant if and only if S . so by (9. The equation Ax Example 9.A.39... e E"x".19) the columns attention here to only the Jordan block case. Finally.as in Theorem 9. then a basis for V can be chosen with respect to which A has a block N.* is a Jordan block corresponding to Ai E A(A).34. Suppose A E ]Rnxn.e. . of W.. where each Theorem 9. so the columns of A. e A(A).span an Ainvariant subspace.19) the columns of Xi (i. This follows easily by comparing the ith columns of each side of (9.Ji . could be replaced by v.2. 9. The Jordan canonical form is a special case of the above theorem. = X. Ainvariant.li. is not necessarily diagonalizable.37. via SVD).. the eigenvectors and principal vectors associated with Ai) span an Ainvariant subspace of]Rn... If A has distinct eigenvalues A.) and each Jik is a Jordan block corresponding to A. then is Ainvariant if and only if there span a kdimensional subspace S. we could choose bases for N(A — A. The equation Ax = A* = x A defining a right eigenvector x of an eigenvalue AX = x A defining a right eigenvector x of an eigenvalue A x X says that * spans an Ainvariant subspace (of dimension one). Equivalently... Jm).19) AS = SM. We could also use other block diagonal decompositions (e. Note that A A".. 9.38. Let p(A) = CloI + ClIA + '"• •+ ClqAqq be a polynomial in A.Xm] ] Ee]R~xnxnisis such that X^AX ==diag(J1. ... 7.38.g.) span an Ainvariant of A"./)"' by SVD. Example 9. A invariant if only ifS1 1. partition .. Let Yi E <enxn . . so the columns of Xi span an Amvanant subspace. i /= 1. then a basis for V can be chosen with respect to which A has a block diagonal representation.. A". so by (9.. partition Equivalently. Jm). Rewriting in the form ~ J. we have that A A. . = Xi. example (note that the power n. //*. Other representation for A with full blocks rather than the highly structured Jordan blocks.). Suppose A"== [Xl .. where each Ji = diag(/. Other such "canonical" forms are discussed in text that follows. be a Jordan basis for N (AT . Suppose X block diagonalizes A. . the eigenvectors and principal vectors associated with A..e. diagonal representation.• EB Nm. AT Theorem 9. We would then get a block diagonal example (note that the power ni could be replaced by Vi)..... s/t span a /^dimensional subspace <S. We would then get a block diagonal representation for A with full blocks rather than the highly structured Jordan blocks.e. XI AX = [~ J 2 ].. eigenvalues Ai 9.. . = 1.. R(S) == S.. .2....e. Sk If R" R.. i. K(S) <S. Then N(p(A)) and 1.is A T invariant.) and each /.19): /th Example 9. Eigenvalues and Eigenvectors If V is taken to be ]Rn over Rand S E ]Rn x* is a matrix whose columns SI.34. Xm R"n such that XI AX diag(7i.. but we restrict our attention here to only the Jordan block case. 
we return to the problem of developing a formula for e l A in the case that A A formula e' A is not necessarily diagonalizable.36../)"'.Ai/)n.39.i.36. where X [ X i .
for a k x k Jordan block 7.. E f= 0.. A called the matrix sign function.5.S.40.40. . . Definition 9. is given by eigenvalues in the right halfplane.5 9.9. 9. with N containing all Jordan blocks corresponding to the be a Jordan canonical form for with N containing all Jordan blocks corresponding to the eigenvalues of in the left halfplane and P containing all Jordan blocks corresponding to eigenvalues of A in the left halfplane and P containing all Jordan blocks corresponding to eigenvalues in the right halfplane. denoted sgn(A). Definition 9. is given by sgn(A) = X [ / 0] 0 / X I .. ifRe(z) < O. Suppose A E C"x" has no eigenvalues on the imaginary axis.41.JiYi . Then the sign of A. Then the sign of A. i=1 which is a useful formula when used in conjunction with the result which is a useful formula when used in conjunction with the result A 0 A A 0 eAt teAt eAt .I = XJy H = [XI.5 The Matrix Sign Function The Matrix Sign Function section brief interesting useful In this section we give a very brief introduction to an interesting and useful matrix function function called the matrix sign function.lt 2 e At 2! 0 exp t 0 0 0 1 A teAt eAt 0 0 0 0 0 block Ji associated A = A. associated with an eigenvalue A. Jm) [YI . denoted sgn(A). The Matrix Sign Function 9. It is a generalization of the sign (or signum) of a scalar. Let z E C with Re(z) ^ O. of defined Definition 9.= Ai. and let e cnxn be a Jordan canonical form for A. Then A = XJX. Then the sign of z is defined by Re(z) {+1 sgn(z) = IRe(z) I = 1 ifRe(z) > 0.41.. m ••• . i=1 H In a similar fashion we can compute m etA = LXietJ. Definition 9. . It is a generalization of the sign (or signum) of a scalar. Xm] diag(JI. Then compatibly. A survey of the matrix sign function and some of its applications can be found in [15].YiH... Ym]H = LX. The Matrix Sign Function 91 91 compatibly. . .
Xn and left eigenvectors y\. Eigenvalues and Eigenvectors Chapter 9. 2. S is diagonalizable with eigenvalues equal to del. e C"x" Theorem 9. Suppose A E enxn has no eigenvalues on the imaginary axis. positive = (/ + of A. Find the appropriate expression for v as a linear combination expression of the left eigenvectors as well.n 1. Theorem 9.42..92 92 Chapter 9.. ••. Then the following hold: following e 1. 2. negA = (/ — S)/2 3. We state some of the more useful properties of the matrix sign function as theorems. S = sgn(A).. . There are other equivalent definitions of the matrix sign function. projection subspace of 4. 3.42. 5.1> . 4. AS = SA.S) /2 is a projection onto the negative invariant subspace of A. The JCF definition of the here is especially useful in deriving many of its key properties. . AS = SA.43. Their left exercises. 5. negA == (l . but the one given here is especially useful in deriving many of its key properties. 3. Show that v can be expressed (uniquely) as a linear combination arbitrary vector. its reliable numerical calculation is an interesting topic in calculation its own right. but the one given There are other equivalent definitions of the matrix sign function.. .. ± 1.43. Let A E Cnxn have distinct eigenvalues AI. of A (the negative invariant subspace). yn. . In fact. Their straightforward proofs are left to the exercises. 7l(S l) is an Ainvariant subspace corresponding to the left halfplane eigenvalues left halfplane I. Eigenvalues and Eigenvectors where the negative and positive identity matrices are of the same dimensions as N and p. R(S — /) Ainvariant of (the negative invariant subspace). Show that v can be expressed (uniquely) as a linear combination e of the right eigenvectors. Then the following hold: following 1. Suppose A E C"x" has no eigenvalues on the imaginary axis.. Xn with corresponding right eigenA e nxn ). EXERCISES EXERCISES 1. positive of P. sgn(cA) = sgn(c) sgn(A)/or c. respectively.. The JCF definition of the matrix sign function does not generally lend itself to reliable computation on a finitewordgenerally itself length digital computer. 2.xn and left eigenvectors Yl. sgn(A") = (sgn(A»H. S = sgn(A). . 4. sgn(cA) = sgn(c) sgn(A) for all nonzero real scalars c. respectively.. 6. Let e C" be an arbitrary vector. 3. respectively. R(S+/) is an Ainvariant subspace corresponding to the right halfplane eigenvalues R(S + l) A invariant halfplane of (the positive invariant of A (the positive invariant subspace). Let v E en be an vectors Xl. sgn(AH) = (sgn(A))". Theorem 9. posA == (l + S)/2 is a projection onto the positive invariant subspace of A... Yn. ••• ... We state some of the more useful properties of the matrix sign function as theorems.. sgn(T1AT) Tlsgn(A)TforallnonsingularT e enxn 6. S2 = I. . ). S2 = I. distinct right eigenvectors Xi. and let = sgn(A). e nxn Theorem 9. sgn(TlAT) = T1sgn(A)T foralinonsingularT E C"x".. and let — sgn(A).
right eigenvectors and right principal vectors if necessary. = O. Let A e R" xn be of the form A = 1+ xyT. Show that all right eigenvectors of the Jordan block matrix in Theorem 9. n are nonzero vectors with with xTTyy = 0. AH = —A. The vectors [0 1 Ifand[l 0 of [0 — l] r and[1 0]r (2) (1) are both eigenvectors. eigenvectors and if and (real) JCFs of the following matrices: (a) 2 1 ] 0 ' [ 1 6. i. multiples of e\ E lR./)jc = x can't be solved. Let A be an eigenvalue of A with corresponding 3. Suppose the small number 10.16 Jordan form specified 9.1) element of J. 11. y E lR. k . y e R" are nonzero vectors with A E lR. x. Let A e R"x" be of the form A = xyT.e. where x. y E lR. Determine the JCFs of the following matrices: 6. nxn be of the form A = / + xyT. 5. Prove that all eigenvalues of a skewHermitian matrix must be pure imaginary. Let A E lR. i. Determine the eigenvalues. x O. but then the equation (A . Let 7. Suppose A E C"x" is Hermitian.. JCFs for A. Determine the JCFs of the following matrices: <a) Uj n 2 1 2 =n 7. Suppose 10~16 is added to the (16. ~ 0 Hint: Use[1 1 — I]T an Hint: Use[— 1 1 . Show that all right eigenvectors of the Jordan block matrix in Theorem 9. 16x 16 has eigenvalues at 0 its JCF consists of single Jordan block of the form specified in Theorem 9. 2. 5.Exercises 93 93 2. Let A be an eigenvalue of A with corresponding right eigenvector x. where J is the JCF 1 J=[~ 0 1~].l]r as an eigenvector.. eigenvalues.e. Characterize all left eigenvectors.5x5 has eigenvalues {2. Suppose A € rc nxn is skewHermitian. Suppose A E C"x" is skewHermitian.22. Suppose a matrix A E lR. Characterize all left eigenvectors. if A is skewHermitian. Prove the same result right eigenvector x. Prove that all eigenvalues of 2. AH = A. where J is the JCF Find a nonsingular matrix X such that X AX = J. 9. y e R" are nonzero vectors 10. where x. Suppose a matrix A E R 16x 16 has 16 eigenvalues at 0 and its JCF consists of a single A e lR.n T T x yy = 0.30 must be multiples of el e R*. Determine all possible € R 5x5 {2. 2. Show that x is also a left eigenvector for A.1) element of J. 3}.30 must be 8. Show that x is also a left eigenvector for A. What are the eigenvalues of this slightly perturbed matrix? matrix? . Prove the same result if A is skewHermitian. a skewHermitian matrix must be pure imaginary.22. Suppose A e rc nxn is Hermitian. Determine the JCF of A. 10. (A — I)x(2) x(1) 8. Determine the JCF of A. 2. Determine the JCF of A. What are the eigenvalues of this slightly perturbed is added to the (16.nxn A = xyT. 3}. 3. 4. JCFs for A. Determine the JCF of A. where x. Let A = [H 1]· 2 2" Find a nonsingular matrix X such that XI AX = J.
Prove that 17. TIAT = [A011 A22 0 ] . 15. Consider the block upper triangular matrix 14. Consider the block upper triangular matrix A _ [ All  0 Al2 ] A22 ' where A E M"xn and An E Rkxk with 1 ::s: k < n.18) is useful.42. If n = 2 and k = 1. Eigenvalues and Eigenvectors 12. JCF. is nonsingular.S2X~l) would required symmetric factorization of A. what can you say further. where Si 12. Hint: Use the factorization in the previous exercise. X e R*x <«*). If n = 2 and k = 1. xn has all its eigenvalues in the left halfplane. where SI and £2 are real symmetric matrices and one of them. Then A = (XS i X T ) ( X ~ T T S2XI) would be the the "symmetric factorization" of J. it suffices to prove the result for the JCF. Prove that 17. in terms of AU and A 22. Suppose A e sgn(A) = 1.18) is useful.. about when the equation for X is what can you say further. Then = ( X SIXT)(X. Prove Theorem 9. about when the equation for is solvable? solvable? 15. say Si. 16. Show that every matrix A E R"x" can be factored in the form A = SIS2. Eigenvalues and Eigenvectors Chapter 9. Suppose Al2 ^ and want to block diagonalize A via the similarity transformation want to block diagonalize A via the similarity transformation where X E IRkx(nk).42. 14. Hint: Suppose A = Xl XI is a reduction of A to JCF and suppose we can construct Hint: Suppose A = X J X ~ l is a reduction of A to JCF and suppose we can construct the "symmetric factorization" of 1. Prove Theorem 9. The transformation P in (9. Hint: Use the factorization in the previous exercise. Suppose Au =1= 0 and that we we e jRnxn and All e jRkxk 1 < ::s: n. is nonsingular.94 Chapter 9. Find a matrix equation that X must satisfy for this to be possible. Prove Theorem 9.43. in terms of All and A22. Find a matrix equation that X must satisfy for this to be possible. 16. i.43. sgn(A) = /. Prove Theorem 9. it suffices to prove the result for the required symmetric factorization of A. Thus. The transformation P in (9. transformation explicitly.e. Show that every matrix A e jRnxn can be factored in the form A Si$2. 13. en . Suppose A E C"xn has all its eigenvalues in the left halfplane. say S1. and S2 are real symmetric matrices and one of them. Prove that every matrix e jRn xn is similar to its transpose and determine a similarity transformation explicitly. Prove that every matrix A E W x" is similar to its transpose and determine a similarity 13. Thus.
." The transformation A f+ P AQ is called an equivalence. We can also consider the case A e Cm xn and unitary equivalence if P and Remark 10.. most "diagonal" we can get is the JCF described in Chapter 9.. . such as A = [ _ab ^1 for real scalars a and b.. What other matrices are "diagonalizable" under unitary similarity? The answer is given in Theorem matrices are "diagonalizable" under unitary similarity? The answer is given in Theorem 10." The transformation A M» PAQ is called an equivalence. If a matrix A is not normal.. Problem: Let V and W be vector spaces and suppose A : V —>• W is a linear transformation.1. and orthogonal. If a matrix A is not normal. ." In matrix terms. What other U HAU = D. .. Let A = AH e C"x" have (real) eigenvalues A. If A = A H 6 C" " has eigenvalues AI.I. Normal matrices include Hermitian.. This is proved in Theorem 10. . Xn Theorem 10. and unitary matrices (and their "real" counterparts: symmetric. it is called an orthogonal equivalence if P and Q are orthogonal matrices.2. .. orthogonal equivalence if P and are orthogonal matrices. find P e lR. .An.1 Some Basic Canonical Forms Some Basic Canonical Forms Problem: Let V and W be vector spaces and suppose A : V + W is a linear transformation. if A E IR mxn find E R™ and Q E lR~xn such that P AQ has a form. Q are unitary. where it is proved that a general matrix A e enxn is unitarily similar to a diagonal matrix if and only if it is normal (i. where D = diag(A.. . an orthogonal similarity (or unitary similarity in the complex case).j. This is proved in Theorem 10. An..." In matrix terms. the transformation A H> PAP" 1 is called similarity. skewHermitian. .2.. . Xn) (the columns ofX are orthonormal eigenvectors for A). A. the transformation A i» PAPT is called If an orthogonal similarity (or unitary similarity in the complex case). where D = diag(AJ. If W = V and <2== p.2. the transformation A f+ PAPI is called aasimilarity. as well as other matrices that merely satisfy the symmetric.. 95 95 . Normal matrices include Hermitian. such as A = [_~ most "diagonal" we can get is the JCF described in Chapter 9. The following results are typical of what can be achieved under a unitary similarity. the definition. .. ..Chapter 10 Chapter 10 Canonical Forms Canonical Forms 10. .j. V and Q 1. If The following results are typical of what can be achieved under a unitary similarity.:xm and Q e Rnnxn such that PAQ has a "canonical form. skewsymmetric.. orthonormal eigenvectors for A). it is called an "canonical form. if A e Rmxn . then there exists a unitary matrix £7 such that A = AH E en xxn has eigenvalues AI.e. n ). where it is proved that a general matrix A E C"x" is unitarily similar to a diagonal 10. . then there exists a unitary matrix U such that UH AU — D.9. . Remark 10. AAHH = AH A).1. If P"1 .9. . skewskewHermitian.1. If W = V and if Q = PT is orthogonal. and unitary matrices (and their "real" counterparts: symmetric. = H E en xn exists a unitary matrix X such that X H AX = D = diag(Al. !] Theorem 10. Find bases in V and W with respect to which Mat A has a "simple form" or "canonical Find bases in V and W with respect to which Mat A has a "simple form" or "canonical xm form. An). AA = AHA)..e..1 10. matrix if and only if it is normal (i. the for real scalars a and h.2. An. respectively). Then there AI. respectively). Two special cases are of interest: Two special cases are of interest: 1.. We can also consider the case A E emxn and unitary equivalence if P and <2 are unitary.. = V and if pT is orthogonal. 
as well as other matrices that merely satisfy the definition. . and orthogonal.. An) (the columns of X are exists a unitary matrix X such that XHAX = D = diag(A. the transformation A f+ P ApT is called 2.
Construct a sequence of Householder matrices (also known Proof: Let X [XI.. %n] XI . Now XHAX =[ xH I XH ] A [XI 2 X 2] =[ =[ =[ x~Axl XfAxl X~AX2 XfAX 2 ] (10. Let V = XI..1) we have used the fact that AXI = k\x\.1) we have used the fact that Ax\ = AIXI. 10.xd. . When combined with the fact that In (l0. [£i. In (10... Construct a sequence of Householder matrices (also known HI.. Xk H Hk. . . .) [XI X2]] is orthogonal is frequently required.2)block noting that x\ is orthogonal to all vectors in X2... Xn such that [x\. Canonical Forms Chapter 10. 0 Thus..2). . Then there exist n .96 96 Chapter 10. . Then [XI V 2] is unitary.. . . Then U = HI'" Hk and H Then x^U2 = 0 (i E ~) means that xf is orthogonal to each of the n — k columns of V2.2)block must have eigenvalues A2. Thus. X 1 XI e E". Xk].... the construction of X2 E JRnx(nl) such that X — z e ]R" (".l)block by Al (2. .. . A.. (/ € k) U2 X i U2 = Xi .I)block. The proof is completed easily by induction upon noting proof that the (2. We also get 0 in the (2.. Write V H matrix such that V X I = [ ~]. XH AX induction noting that XH AX is Hermitian. ..2)block X2 . Then VH = / / . The construction can actually be performed orthogonal frequently [x\ 2 quite easily by means of Householder (or Givens) transformations as in the proof of the Householder transformations proof following general result. ~nf.. k 1.. xf*x\ = Proof' Let x\ be a right eigenvector corresponding to X\. Hk as elementary reflectors) H\. [XI U2] is unitary...k But the latter are orthonormal since they are the last n . —k U.1) (10. . k = For simplicity. . we consider the real case.2)block by XI Xz. Xk are orthonormal).k rows of the unitary matrix U..2 is then a special case of Theorem 10.. orthogonal (l..l)block. simplicity..3 called Theorem 10. When combined with the fact that x~ XI = 1. .. . . . .. I)block x"xi = 1. Let the unit vector x\ be denoted by [~I. An. . . where R € Ckxk is upper triangular.. Write UH = [U\ U ] [VI Vz] 0 2 with Ui E Cnxk . X = Given a unit vector x\ E JRn.. where R E kxk is upper triangular. we get Ai remaining in the (l..T. Canonical Forms Proof: eigenvector corresponding AI.... Then there exist n — 1 additional vectors X2. xn such that X = (XI. HdxI.1 additional vectors x2.3 for k = 1.. [Xi f/2] unitary.2) Al X~AX2 XfAX 2 0 Al ] 0 XfAX z 0 l In (10.. Hk in the usual way (see below) such that Hk ..3.Hv.. xd = [ ~ l U = where R is upper triangular (and nonsingular since x\. .. D 0 (2. .. D The construction called for in Theorem 10.• • Hk and Hk'" HI. n . and normalize it such that x~ XI = XI 1.3... we get 0 in the (l. . We illustrate the construction of the necessary Householder matrix for k — 1. xn] = 1. xn] = [x\ ] [XI X22] is unitary. (l. following general result. . Let XI E Cnxk have orthonormal columns and suppose V is a unitary matrix such that UX\ = \ 1.. VI € Cnxk [Xi U ] Proof: Let X\I = [x\.. Let X\ e Cnxk have orthonormal columns and suppose U is a unitary Theorem 10.
U effects necessary compression of jci. Some Basic Canonical Forms 10.. An). Some Basic Canonical Forms 97 Then the necessary Householder matrix needed for the construction of X 2 is given by Then the necessary Householder matrix needed for the construction of X^ is given by U = I .4 implies that a symmetric matrix A (with the obvious analogue from Theorem 10. '. To see that U effects the U symmetric U U = U = I.e. [11].Xn. . [23]..•» '. Then there exists an 10. so U is orthogonal. Theorem 10. Then there exists an AT E jRnxn have eigenvalues AI..3) spectral which is often called the spectral representation of A. consulted standard numerical linear algebra can be consulted in standard numerical linear algebra texts such as [7].2 for Hermitian matrices) can be written n A = XDX T = LAiXiXT. X n ).i. .4..2uu+ = I . XTAX = D = diag(Xi.1 ± 1. i=1 (10. Further details on Householder matrices.2 for Hermitian matrices) can be written from Theorem 10. . where u ^UU [t\ 1.+uu T . i. necessary compression of Xl. .3) is actually a often weighted sum of orthogonal projections P. where Pi = PR(x. Let A = AT e E nxn have eigenvalues k\. where u = ['.1. it is easily verified that UT U = 2 ± 2'.4. it is easily verified that u T u = ± 2£i and u T Xl = 1 ± '. = PUM = xixf = xxixT since xj xi — 1. Let A E jRn xn (whose orthogonal matrix X e Wlxn (whose columns are orthonormal eigenvectors of A) such that of XT AX = D = diag(Al..2 is worth stating separately since it is applied fre10. sponding to the Ai'S). i=l theoretical The following pair of theorems form the theoretical foundation of the doubleFrancisdoubleFrancisQR algorithm used to compute matrix eigenvalues in a numerically stable and reliable way. .. quently in applications.... . In fact. Thus. including the choice of sign and the complex case.2. n A = LAiPi. . A in (10. £«] r It can checked T 2 that U is symmetric and U TU = U 2 = I. (onto the onedimensional eigenspaces correPi onedimensional eigenspaces sponding to the A. [11].. The real version of Theorem 10.10. [25]. [7].It can easily be checked — 2uu+ — u u T .nf. . .1.1 and UT X\ = 1 ± £1.) — xiXt = i j since xT Xi = 1.. An. [25]. . £2.1. • • . U orthogonal. . A Note that Theorem 10. x where P.2 is worth stating separately since it is applied frequently in applications. [23].. 's).e.
but if A has a complex conjugate pair of eigenvalues. Canonical Forms Chapter 10. Theorem 10.. it is of interest to know when we can go further and reduce a matrix via unitary similarity to diagonal form. D ur In the case of A e IRn ". Then there exists a unitary matrix U such that U H AU = T. A quasiuppertriangular matrix is block upper triangular with 1 x 1 diagonal blocks corresponding to its real eigenvalues and 2x2 2 diagonal blocks corresponding to its blocks corresponding to its real eigenvalues and 2 x diagonal blocks corresponding to its complex conjugate pairs of eigenvalues.. is that the first Schur vectors span the same all applications (see.e. Proof: The proof of this theorem is essentially the same as that of Theorem lO. Then there exists an orthogonal 10. The when we can go further and reduce a matrix via unitary similarity to diagonal form. AHA = AA H). 0 in this case (using the notation U rather than X) the (l.2)block AU2 is not O. it is thus unitarily similar to an upper triangular matrix.7. Its real JCF is h[ 1 1 1 0 0 n n Note that only the first Schur vector (and then only if the corresponding first eigenvalue Note that only the first Schur vector (and then only if the corresponding first eigenvalue is real if U is orthogonal) is an eigenvector.5 is called a Schur canonical Definition 10. However. A is normal (i. Canonical Forms Theorem 10. matrix U that reduces a matrix to [real] Schur form are called Schur vectors.6 T T matrix U such that U AU = S.9. The triangular matrix T in Theorem 10. where T is upper triangular. The columns of a unitary [orthogonal} Schur canonical form or real Schur fonn (RSF). is that the first k Schur vectors span the same Ainvariant subspace as the eigenvectors corresponding to the first eigenvalues along the invariant subspace as the eigenvectors corresponding to the first k eigenvalues along the diagonal of T (or S). it is of interest to know While every matrix can be reduced to Schur form (or RSF). for example. complex conjugate pairs of eigenvalues. following theorem answers this question. and sufficient for virtually is real if U is orthogonal) is an eigenvector.8. then complex arithmetic is clearly needed if A has a complex conjugate pair of eigenvalues. where S is quasiuppertriangular. The triangular matrix T in Theorem 10. A matrix A e c nxn is unitarily similar to a diagonal matrix if and only if A is normal (i.6 is called a real Schur canonical form or real Schur form (RSF). The matrix 10. AH A = AAH ). The quasiuppertriangular matrix S in Theorem 10. The following theorem answers this question. the next theorem shows that every A E IR xn is also orthogonally similar (i. While every matrix can be reduced to Schur form (or RSF). A quasiuppertriangular matrix is block upper triangular with 1 x 1 diagonal matrix. Proof: Suppose U is a unitary matrix such that U H AU = D. However. [17]).e. Theorem 10. for example.2 except that Proof: The proof of this theorem is essentially the same as that of Theorem 10. The matrix s~ [ 2 0 2 5 4 0 is in RSF.7.9.98 98 Chapter 10. . Then Proof: Suppose U is a unitary matrix such that U H AU = D. where S is quasiuppertriangular. Let A E cnxn Then there exists a unitary matrix U such that Theorem 10. but In the case of A E R"xxn . where D is diagonal.6 is called a real form or Schur fonn. [17]). matrix U such that U AU = S. real arithmetic) to a quasiuppertriangular matrix. and sufficient for virtually all applications (see. Definition 10. Let A E R"xxn. 
The columns of a unitary [orthogonal] matrix U that reduces a matrix to [real} Schur fonn are called Schur vectors. However..e. what is true. it is thus unitarily similar to an upper triangular matrix. Then AAH = U VUHU VHU H = U DDHU H == U DH DU H == AH A so A is normal.e. However. real arithmetic) to a quasiuppertriangular A e Wnxn is also orthogonally similar (i. what is true. The quasiuppertriangular matrix S in Theorem 10.2)block wf AU2 is not 0. . A matrix A E C"x" is unitarily similar to a diagonal matrix if and only if Theorem 10.5 (Schur).8..2 except that in this case (using the notation U rather than X) the (l. Then there exists an orthogonal Let A e IR n ".6 (MurnaghanWintner). so A is normal.5 is called a Schur canonical form or Schur form. where D is diagonal. where T is upper triangular.5 (Schur). Its real JCF is is in RSF. diagonal of T (or S). UH AU = T. Let A e C"x". the next theorem shows that every to place such eigenvalues on the diagonal of T. then complex arithmetic is clearly needed to place such eigenValues on the diagonal of T. Example 10.
Furthermore. this section that may be stated in the real case for simplicity. nonpositive definite (or negative semidefinite) if A is nonnegative definite. 11'/. Remark 10. Let A = AH e enxn with eigenvalues X{ :::: A2 :::: . positive definite if and only if xTT Ax > 0 for all nonzero x G lR.=1 But clearly n LA. it is said to be indefinite. positive definite if and only ifx Ax > Qfor all nonzero x E W1 We write A > O.2. 111. in fact. where T is an upper triangular matrix (Theorem 10. Then 11. We write A < 0. negative positive definite. Then T (Theorem It is then a routine exercise to show that T must. i € n. A symmetric matrix A e Wxn 1.5). If A E C"x" is Hermitian. x eC". if—A 4. B — A < 0. we write A :::: B if and only ifA — B>QorB — A < 0. nonzero x E lR. Furthermore. Then n x HAx = (U HX)H U H AU(U Hx) = yH Dy = LA.11...12.13.n • We write A :::: 0. Proof: Proof: Let U be a unitary matrix that diagonalizes A as in Theorem 10. i En. A U U HA U T.B > 0 or or Also.• :::: An. If neither semidefinite.12. if A and B are symmetric matrices.A < O. 2. . It T 0 D 10. Definite Matrices 99 Conversely.B :::: 0 or B . we write A > B if and only if A — B > B . Remark 10. indefinite. superscript H s replace T s. U diagonalizes A 10.nxn is Definition 10. be diagonal.A ~ O. all the above definitions hold except that A e nxn Remark 10. Then for all E en..n. We write A > 0.2.10. suppose A is normal and let U be a unitary matrix such that U H AU = T.10. Also.=1 .2 10. negative definite if . Thenfor all Let A = AH E Cnxn with eigenvalues AI > A2 > • • > An. and denote the components of y by v UHx. Definite Matrices 10. Indeed.. we write A > B if and only if A . We write A > O. nonnegative definite (or positive semidefinite) if and only if XT Ax :::: 0 for all (or positive if and only if x T Ax > for all nonzero x e W.A is positive definite. CM j]i. If a matrix is neither definite nor semidefinite. we write A > B if and only if A . this is generally true for all results in the remainder of of superscript //s Ts.10.2 Definite Matrices Definite Matrices Definition 10.11.2. where x is an arbitrary vector in en. We (or negative if— A nonnegative definite. We write A < O.2.12 ~ AlyH Y = AIX HX . Similarly. 3. write A < O.. e Theorem 10. Similarly. if A and B are symmetric matrices. A symmetric matrix A E lR. let y = U H x. We write A ~ 0.
Then ^pjp2 = ^^(A" HA). A symmetric matrix A € R"x" is nonnegative definite if and only if any of following equivalent of the following three equivalent conditions hold: 1. Let A e C"x". 3. Corollary Corollary 10. Note that the determinants of all principal "ubm!ltriC[!!l mu"t bB nonnBgmivB R. However.l3 provides upper (AO and lower (An) bounds for (A. 3. A can be written in the form MT M. A leading principal submatrix of order n — k is obtained by deleting the last k rows and columns. Theorem 1O.1.. A principal submatrix of an nxn n matrix A is the (n — k)x(n(n — k) matrix that remains by deleting k rows and the corresponding k columns.soO < X n < ••• < A. xfO IIxll2 I 0 Definition submatrixofan n x k) x k) Definition 10. Remark 10. For example. Let A E enxn Then \\A\\2 = Ar1ax(AH A).1. Theorem 10. . of obtained and E ~nxn positive definite if and only if any of the Theorem 10. I Proof: E C" Proof: For all x € en we have Let x be an eigenvector corresponding to Amax (A HA). 3. A symmetric matrix A e E" x" is positive definite if and only if any of the following equivalent following three equivalent conditions hold: determinants of principal 1. so 0 An ::::: . 2. XHAx > 0 for all nonzero = AH E enxn E en. The ratio ^^ x for A = AH <=enxn and nonzero x jc een isis calledthe = AH E Cnxn and nonzero E C" called the x of x.15. If A = AH e C"x" is positive definite.= Amax{A A). Theorem 10.17).17). whence IIAxll2 ! H IIAliz = max . All eigenvalues of A are nonnegaTive. 0 D Remark XHHAx Remark 10. of positive..13 provides (A 1) Rayleigh quotient of jc.. Theorem 10.I.19. from which the theorem follows. where M 6 R ix " and k > rank(A) "" rank(M). A can be written in the form MT M. of all principal submatrices of 2. where M e R"x" is nonsingular. determinant the determinant of the 2x2 2 leading submatrix is also 0 (cf. not just those of the leading principal submatrices. 2. ::::: AI.19. The determinants of all leading principal submatrices of A are positive. All eigenvalues of A are positive.100 100 Chapter 10.18. the .w) x HAx > the Rayleigh quotient. form MT E ~n xn E ~n xn definite if and only if Theorem 10. consider the matrix A — [0 _l~].16.@mllrk 10. A can be wrirren in [he/orm MT M. Theorem 10. Not@th!ltthl!dl!termin!lntl:ofnllprincip!ll eubmatrioes muet bQ nonnogativo in Theorem 10.14. Canonical Forms Chapter 10.17. Then 111~~1~2 Let jc be an eigenvector corresponding to Xmax(AHA). The determinants of all principal submatrices of A are nonnegative. Canonical Forms and and n LAillJilZ::: i=l AnyHy = An xHx . Then IIAII2 = ^m(AH A}. x E C". All eigenvalues of A are positive. The determinant of the I x leading submatrix is 0 and consider the matrix A = [~ 2x 0 (cf. All eigenvalues of A are nonnegative. whence Ar1ax (A A).18. The determinant of the 1x1 1 leading submatrix is 0 and 1. where M E IRb<n and k ~ ranlc(A) — ranlc(M).18.
E jRnxn MT AM > M BM. BM. A stronger form of the third characterization in Theorem 10. Remark 10.10. = LLH. rankS = rankA definite definite if positive definite)..23. Write the matrix A in Proof: The proof is by induction. if € E" xn we say that e jRn x that S E R nxn"isisa asquare root of AA ifS2 2 =— A. For example.nxn"be nonnegative definite. The following standard theorem is stated without proof (see. negative and is nonpositive definite. Then there exists a positive definite. and positive definite.2. For example. If A > B and M e Rm .23.17 is available and is A stronger form of the third characterization in Theorem 10. if then M can be then M can be [1 0].2) element is. B e Rnxn be symmetric. standard theorem stated 181]).. assume the result is true for matrices of order n .18. The factor M in Theorem 10. matrices (both symmetric and nonsymmetric) have infinitely many square roots. [16. That is. If >BandMe jRnxm. negative and A is nonpositive principal submatrix consisting of the (2. [ fz ti o o l [~ 0] ~ 0 v'3 . Its proof is straightforward from basic definitions.1 so that B By our induction hypothesis. The case = is trivially true. For example. any matrix of nonsymmetric) have infinitely many square roots. then MT AM :::: MTTBM.2. with positive diagonal elements such that positive Proof: The proof is by induction. basic definitions. The case n = 1 is trivially true. 10rm [COSO _ Sino] . That is. Moreover. If A> Band E jR~xm. The factor M in Theorem 10. j proof (see. In general. if = /2. Let A e c nxn be Hermitian unique nonsingular lower triangular matrix L nonsingular A = LLH. in fact. Theorem 10. E <C Theorem 10. matrices (both symmetric and square root of if S A. if A E lR. Theorem 10. Ll E C1""1^""^ and . SA = AS and rankS = rank A (and hence S is positive = AS S S. for example. Hermitian case.2) element is. The following theorem is useful in "comparing" symmetric matrices. nxm 2. nxn Theorem 10. In general. Let A E lR. The following Recall that A > B if the matrix A — B is nonnegative definite. . then MT AM > MT TBM. assume the result is true for matrices of order — 1 so that B may be written as B = L\L^. MT AM> M. It concerns the notion of the "square root" of a matrix.nxn .is a square root. p.21. 1f A :::: Band M E Rnxm. 1.22. It is stated and proved below for the more general Hermitian case.20.3 is not unique. concerns the notion of the "square root" of a matrix.20.we say 181]).22. Definite Matrices 10.B is nonnegative definite. where L\ e c(nl)x(nl) is nonsingular and lower triangular as = L1Lf. A e R"x be nonnegative definite.18. Definite Matrices 101 101 principal submatrix consisting of the (2. [16.. Then A has aaunique nonnegative definite square root S. any matrix S of c e s 9 the " °* ™ the form [ ssinOe _ ccosOe ] IS a square root. It is stated and proved below for the more general known as the Cholesky factorization.17 is available and is known as the Cholesky factorization. if A = lz. 0 Recall that A :::: B if the matrix A .3 is not unique. 2. in fact. Its proof is straightforward from theorem is useful in "comparing" symmetric matrices. Then A has unique nonnegative Theorem 10. For example. Write the matrix A in the form the form By our induction hypothesis. definite if A is positive definite). Let A. p. for example. if Remark 10.
2) in its complex version. 131]. of course.24. But we = ann — b LIH L\lb = ann — bH B~lb B A). Gaussian or elementary row and column operations.. are generally unreliable. But know that o < det(A) = det [ ~ b ] = det(B) det(a nn _ b H B1b). 5].4) and the SVD.lb. Then [ Sl o 0 ] [ I Uf U H ] AV = [I 0 0 ] 0 .102 102 Chapter 10. we must have ann —bHB lb > 0. Then E c~xn such exist e C™ x m that that PAQ=[~ ~l (l0. ann Since det(B) > 0. available. of ann — b 0 root of «„„ . Choosing a be det(fi) > HB~lb completes the proof. Substituting in the involving we find 2 a2 = ann .• Clearly we see we L I C = b and ann = c HC a 2 c is given simply by c = C.b HL\H L11b = ann . suppose A has an SVD of the form (5. see. [4.3 10. Canonical Forms with positive diagonal elements. Let A € C™*71. However.4) [7. we see that we must have L\c = b and ann = CHc + a 2.131]. Two such forms are stated here. Performing the indicated matrix multiplication and equating the corresponding submatrices. Take P =[ S~ 'f [I ] and Q = V to complete the proof. Then there exist matrices P E C: xm and Q e C"nx" such E c.b B1b completes D 10. .p. The numerically preferred equivalence is. However. multiplication where a is positive. 2].4). Many similar results are also (10. p. [21.4) Proof: proof Proof: A classical proof can be consulted in. Substituting in the expression involving a. the unitary equivunitary alence known as the SVD. for example (10. Choosing a to be the positive square ann . [21. numerical procedures for computing such procedures an equivalence directly via. 0 Note that the greater freedom afforded by the equivalence transformation of Theorem afforded 10. say. for example. we find by L^b. as opposed to the more restrictive situation of a similarity transformation.24. Ch. Canonical Forms Chapter 10. Ch.3 Equivalence Transformations and Congruence Equivalence Transformations and Congruence Theorem 10. the SVD is relatively expensive to compute and other canonical forms exist that are intermediate between (l0. Alternatively. yields a far "simpler" canonical form (10. Alternatively.xn.b H B1b (= the Schur complement of B in A). It remains to prove that we can write the n x n matrix A It in the form in the form ann b ] = [LJ c a 0 ] [Lf 0 c a J.4) efficiently available.b H B1b > O. They are more stably computable than (lOA) and more efficiently computable than a full SVD.
Example 10..26. Then there exist unitary matrices U e Cmxm and V E Cnxn such that unitary matrices U E e mxm and V e e nxn such that (10.1. In(A) = ln(X Proof: For the proof. Proof: For the proof. Remark 10..v. The transformation A i> XH AX is called a congruence. Proof: For the proof. When A has full column rank but is "near" a rank deficient matrix.29.t h e n A > 0 if and only if In (A) = (n.3.rrxr is upper (or lower) triangular with positive diagonal elements. . Let A e Cnxn and X e Cnnxn.31 (Sylvester's Law of Inertia). if A is Hermitian. D 0 Remark 10. for example. We then have the following. Then the inertia of A is the triple of inertia of of negative. see [4]. Definition 10. In(A) 3.30. p.30. Let A E C™ x ". 134]. It is of interest to ask what other properties of a matrix are then X H AX is also Hermitian. The signature of is Example 10. 0 2.31 guarantees that rank and signature of a a matrixare preserved under Theorem 10. Then H HAX). Note that a congruence is a similarity if and only if X is unitary. where R e €. and ~ denote the numbers of positive.3. D Theorem 10.0). negative. v. Definition 10. nxn E e X E e~xn.27. When A has full column rank but is "near" a rank deficient matrix.29. Let A = AH e C"x" and let 7t. D Proof: For the proof. Then there exist Theorem 10.xr is upper (or lower) triangular with positive diagonal elements. see.e. see [4] for details. Then there exists a unitary matrix Q E Cmxm and a permutation permutation matrix IT e en xn" such that Fl E C"x QAIT = [~ ~ l (10. then XH AX is also Hermitian. The H. and zero eigenvalues.XH AX Definition 10. Definition 10. i. Theorem 10. numbers In(A) (n.25 (Complete Orthogonal Decomposition). Let A e e~xn. It turns out that the principal property so preserved is the sign of each eigenvalue. 134]. [21. (TT. 0).31 guarantees that rank and signature of matrix are preserved under congruence. sig(A) = rr — v. upper Proof: For the proof.e. of A.1). respectively.26. v.28. then rank(A) rr v. £). If A = A" E C nnxn. In(A) = In(X AX).31 (Sylvester's Law of Inertia). HE C xn E e~ xn. v. Again.5) where R E e. [21.28. Let A = A He ennxn and X e Cnnxn. It is of interest to ask what other properties of a matrix are preserved under congruence. Theorem 10. i. and £ denote the numbers of positive. v.xr E erx(nr) arbitrary general nonzero. if A is Hermitian. If A AH e e x " then A> 0 if and only if In(A) = (n. Then there exists a unitary matrix Q e e mxm and a Theorem 10. see. Again. v. £). respectively. Note that congruence preserves the property of being Hermitian.25 (Complete Orthogonal Decomposition). Note that congruence preserves the property of being Hermitian. If In(A) = (rr. phenomena at a cost considerably less than a full SVD. Let A = AH E e nxn and let rr. Proof: For the proof. Equivalence Transformations and Congruence 10. see [4]. Let A e C™ ". v.27. congruence. 0.0. p. n The signature of A is given by sig(A) = n . for example. various rank revealing QR decompositions are available that can sometimes detect such various rank revealing QR decompositions are available that can sometimes detect such phenomena at a cost considerably less than a full SVD. l.10. Equivalence Transformations and Congruence 103 103 Theorem 10. see [4] for details. Then is the numbers In(A) = (rr. a congruence. We then have the following. where R E Crrxr is upper triangular and S e C rx( " r) is arbitrary but in general nonzero. n. It turns out that the principal property so preserved is the sign preserved under congruence. 2. then rank(A) = n + v. 
0 D x Theorem 10. Let A E e~xn. .6) E e. of A. see [4].In[! 1o o o 0 0 00] 10 =(2. see [4]. Note that a congruence is a similarity if and only ifX is unitary. of each eigenvalue. and eigenvalues.
1.0). X UW desired 10. the congruence B ] [I D ~ 0 _AI B I ° JT [ A BT ~ ][ ~ 0 D The details are straightforward and are left to the reader. 1. 1.. the number of — 's is v. v. and the final £ are 0. B D ] >  ° if and only if A:::: 0. Suppose A = AT and D = DT. for example.. 's is 7i. . .33. O. v.. AA+B = B. I/~.BT A+B:::: o.BT AI > 0.35. I. . . Proof: AI.fArr+I' .104 104 Chapter 10. .. and D . . . Proof: proof Proof: The proof follows by considering. .1)...BT A+B > 0. Theorem positive.1 10. Note the symmetric Schur complements of A (or D) in the theorem. .1 Block matrices and definiteness Theorem 10. Proof: Consider the congruence with Proof: Consider proof Theorem and proceed as in the proof of Theorem 10. left AT D DT. and D . I/. the number of Il's is v..33..... ..BD^BT > 0. . . Let A = AHeE cnxn with In(A) = (jt. . where the number of X 1's is Jr. . By Theorem 10. Define the x n matrix vv = diag(I/~.. where the number of E c~xn XH AX = diag(1.BT A~l B > 0. 0 D Then it is easy to check that X = V VV yields the desired result.. D > and . Theorem 10. . . Canonical Forms Chapter 10.0. An).. . Canonical Forms Theorem 10. £). ... I. ifand only ifeither A > 0 and D .. . Xw denote the eigenvalues of A and order them such that the first TTare ~ O. the number 0/0 's is (. 1. the next v are negative. AA+B = B.4 Rational Canonical Form Rational Canonical Form rational One final canonical form to be mentioned is the rational canonical form. . . 0. or D > 0 and A . 0). 1/. 0 D 10.3.34. . X e C"nxn such that XHAX = diag(l. and the numberofO's is~.3. Then = AT D = DT. I.. Then there exists a matrix AH C"xn In(A) = (Jr. Then Remark Remark 10. ... if ifA>0.. An of Jr Proof: Let AI . . Define the nn x n matrix U UH AV = diag(Ai.fArr+v.I BT > O. Suppose A = AT and D = DT. ...2 there exists a unitary matrix V such that VHAU = diag(AI..BD. A w ).. .32.4 10. if and if either A> and D . .
18). To Companion matrices also appear in the literature in several equivalent forms.7) Definition 10. + an_IAnI).10. A matrix A E lRn Xn is said to be nonderogatory ifits minimal polynomial if its minimal polynomial and characteristic polynomial are the same or.. if its Jordan canonical form and characteristic polynomial are the same or.37. : ~ ! ~01]. o (10.(ao + «A + . Using the reverseorder identity similarity P given by (9. Companion matrices also appear in the literature in several equivalent forms. Notice that in all cases a companion matrix is nonsingular if and only if aO i= O. A matrix A E lRnxn of the form (10. has only one block associated with each distinct eigenvalue.37. is said to be in cornpanion form. equivalently.10) o 1 o 1 o o o o o o (10.18). For In fact.4.9) Moreover. Rational Canonical Form 10. l 0 0 ~ ao ~ ao _!!l (10. consider the companion matrix (l0. the following are also companion matrices similar to the above: following are also companion matrices similar to the above: Notice that in all cases a companion matrix is nonsingular if and only if ao /= 0. Using the reverseorder This matrix is a special case of a matrix in lower Hessenberg form. A is easily seen to be similar to the following matrix in upper Hessenberg form: in upper Hessenberg form: a2 al o 0 0 1 o 1 6] ao o . the inverse of a nonsingular companion matrix is again in companion form.11) .36.7) is called a cornpanion rnatrix or Definition 10. Rational Canonical Form 105 105 Definition A matrix A e M"x" is said to be Definition 10. For £*Yamr\1j=» example. since a matrix is similar to its transpose (see exercise 13 in Chapter 9). since a matrix is similar to its transpose (see exercise 13 in Chapter 9). the Moreover. In fact.Then it can be shown (see [12]) that A mial is 7r(A) = A" . To illustrate. if its Jordan canonical form has only one block associated with each distinct eigenvalue.4. the inverse of a nonsingular companion matrix is again in companion form. equivalently.7) is called a companion matrix or is said to be in companion forrn. consider the companion matrix illustrate.. A matrix A e E nx " of the form (10. A is easily seen to be similar to the following matrix identity similarity P given by (9. Suppose A E lRnxn is a nonderogatory matrix and suppose its characteristic polynoSuppose A E Wxn is a nonderogatory matrix and suppose its characteristic polynon(A) An — (a0 + alA + a n _iA n ~').8) This matrix is a special case of a matrix in lower Hessenberg form. Then it can be shown (see [12]) that A is similar to a matrix of the form is similar to a matrix of the form o o o o 0 o o o (10.
_1 and y = 1 + + a. a n i] and l c I+~T a. associated at least one eigenvalue. it can be shown that a derogatory matrix is similar to a block diagonal matrix. if ao = 1 inverse can still be computed. 3. Then it is easily verified that c = l+ ara' Then it is easily verified that o o o + o o o o o o 1.10).. in matrices are known to possess many undesirable numerical properties.38. anIf and let e M"" \a\.caa T ca o J. Canonical Forms with a similar result for companion matrices of the form (10.caa T = (I + aaT) I . . and perhaps surprisingly. n .1.39. . For example.4aJ) .. Algorithms to reduce an arbitrary matrix to companion form are numerically unstable. However. and perCompanion matrices have many other interesting properties. Then A in (10.7). stable ones are nearly unstable.39. is the fact that their singular values can be found in closed form. I — T = T) Note that / . If a companion matrix of the form (10. the largest and smallest singular values can also be written in the equivalent form Remark 10. Explicit formulas for all the associated right and left singular vectors can Remark 10. Ifao ^ 0. see haps surprisingly.7). [12]. Companion matrices appear frequently in the control and signal processing literature Companion matrices appear frequently in the control and signal processing literature but unfortunately they are often very difficult to work with numerically.7). the largest and smallest singular values can also be written in the equivalent form If ao =1= 0. if ao = 0. = ~ (y . Moreover. Then + ai + .. Let al ~ GI ~ • • ~ an be the singular values of the companion matrix Theorem 10.7) is singular.. .. 02. Let a\ > a2 > . and so forth [14]. Such matrices are said to be in each of whose diagonal blocks is a companion matrix.. is the fact that their singular values can be found in closed form. If A E R nx " is derogatory..• > an be the singular values of the companion matrix A in (10. Theorem 10. and hence the pseudoinverse of a singular companion + matrix is not a companion matrix unless a = 0. nonsingular ones are nearly singular.10). Companion matrices have many other interesting properties. then it is not similar to a companion matrix of the form (10. companion matrices are known to possess many undesirable numerical properties. Explicit formulas for all the associated right and left singular vectors can also be derived easily.... Such matrices are said to be in rational canonical form Frobenius rational canonical form (or Frobenius canonical form). and so forth [14]. then it is not similar to a companion matrix of the form (10. Canonical Forms Chapter 10. Moreover. among which.4ao ' 1 2)  a? = 1 for i = 2.. each of whose diagonal blocks is a companion matrix. among which.Jy2 .38. Let a E JRn1 denote the vector [ai. has more than one Jordan block associated with If A € JRnxn derogatory.. i. .106 Chapter 10. especially nonsingular ones are nearly singular. Leta = ar aJ al 2_ 2 ( y + Jy 2. companion an arbitrary matrix to companion form are numerically unstable. see [14]. a2... also be derived easily. .Q + a. For details. at least one eigenvalue. then its pseudoIf singular.7).. with a similar result for companion matrices of the form (10.e. Let a = a\ + a\ + • • • + a%_{ and y = 1 + «. stable ones are nearly unstable. i. see. Algorithms to reduce but unfortunately they are often very difficult to work with numerically. For example. form). for example. matrix is not a companion matrix unless a = O. a. in n general and especially as n increases. 
their eigenstructure is extremely ill conditioned. + a.e.
1.18) and the matrix U in identity in (9. 2.. E jRnxn be symmetric. It is not unusual for y to be large for large n. Show that if A is normal. . one may lose up to k digits of to the matrix Pnorm. . A [ must also be positive 7.38 yields some understanding of why difficult numerical behavior might be expected for companion matrices. say 0(10*). then p(A) = IIAII2' Show that the converse is true if n = 2.4a5 21 a ol It is easy to show that 21~01 :::: k2(A) :::: 1:01' and when ao is small or y is large (or both). 6.. Show that a. 5. Let R. this condition number is the ratio of largest to smallest singular precision.38 yields some understanding of why difficult numerical Remark 10. Prove that if A e M"x" is normal. show that AI must also be positive definite. Show that if a triangular matrix is normal. If this number is large. (A) = IA. S 6 E nxn be symmetric. Use the reverseorder identity matrix P introduced in (9.40. K\ (A) (10. when solving linear equations numerical sensitivity Kp(A) = systems of equations of the form (6. In the 2norm.(A) for e n.. Let A G Cnx" and define p(A) = maxx€A(A) IAI. Let A € C n xn be normal with eigenvalues y1 . can be determined explicitly as determined explicitly y+J y 2 . EXERCISES EXERCISES 1.2). Is [ ^ A E jRnxn is definite. Find a unitary matrix U such that [~ M CC x 2 Find a unitary matrix U such that 6. Show that the converse radius of A. Suppose A e E"x" is positive definite. Remark 10. A E en x n eigenvalues A]. Show that [ * }.11).. then K2(A) ^ T~I. (A) A. by the theorem.. then peA) = A2. Theorem 10. say O(lO k ). If A E Wxn is positive definite.5 to find a unitary matrix Q that reduces A e C"x" to lower triangular form.40.EA(A) I'MpeA) 3. For example. one may lose up to k digits of precision. yn and singular values a\ ~ a2 > .. Let A 7. A E jRnxn N(A) = A/"(A ).18) U A E cc nxn Theorem 10. Then p(A) is called the spectral radius of A. Theorem 10. Let A = I J : ]eEC 22x2. when solving linear behavior might be expected for companion matrices..• ~ an ~ O. this condition number is the ratio of largest to smallest singular values which.Exercises Exercises 107 Companion matrices and rational canonical forms are generally to be avoided in floatingCompanion matrices and rational canonical forms are generally to be avoided in fioatingpoint computation. Show that a.11).(A)I for ii E!l. Note that explicit formulas then K2(A) ~ I~I' It is not unusualfor y to be large forlarge Note that explicit formulas Koo(A) for K] (A) and Koo(A) can also be determined easily by using (l0. Show that if A is normal. For example. An and singular 0'1 > 0'2 ~ 4. Show that [~ R > SI. R> S [1 A~I] ~ O? /i 1 > 0? ~] > 0 if and only if > 0 and J 1 > 0 if and only if S > 0 and . In the 2norm.. A E cc nxn peA) = max). It is easy to show that y/2/ao < K2(A) < £.. then Af(A) = N(A Tr ). one measure of numerical sensitivity is KP(A) = A A ] > the socalled condition number of A with respect to inversion and with respect II ^ IIpp II A~l IIpp'me socalled condition number of A with respect to inversion and with respect to the matrix pnorm. If this number is large. 9. and when GO is small or y is large (or both). • • > on > 0.. 3. is true if n = 2. If A e jRn xn 8. Let R. then it must be diagonal.. .
j 1+ j ] 2 ' (d) [ . Canonical Forms [~ ~ l (b) [ 2 1.j 1+ j ] 1 . Canonical Forms Chapter 10. Find the inertia of the following matrices: following 10.1 1.108 108 10. . (a) Chapter 10.
(e(eAf = e A e^. For all A JR.3) which thus also converges for all A and uniformly in t.nxn is defined by Definition 11.2) k=O The series (11. The solution of (11. T T 109 109 .nxn. e° = I.1 11.1 by setting AA =O. Proof This follows immediately from Definition 11.1) is then known always to exist and be unique. For all A E JR. the matrix exponential e A e JR.1 Properties of the matrix exponential Properties of the matrix exponential 1. eO = I.1) is then known always to exist and be and does not depend on t.1 by setting = 0. (11.Ak.1. The solution of (11.1. which thus also converges for all A and uniformly in t. 11.1. We restrict our attention in this for t > IQ. We restrict our attention in this chapter only to the socalled timeinvariant case. This is known as an initialvalue problem.1 11.1) involves the matrix to +(0). Forall A EG R" XM . = Xo In this section we study solutions of the linear homogeneous system of differential equations In this section we study solutions of the linear homogeneous system of differential equations x(t) x(to) E JR.Chapter 11 Chapter 11 Linear Differential and Linear Differential and Difference Equations Difference Equations 11. The solution of (11. where the matrix A e JR. Proof This follows immediately from Definition 11.nxn is constant and does not depend on t. A) • 2.1) for t 2: to.1. unique.nxn.1 and linearity of the transpose. the matrix exponential e A E Rnxn is defined by the power series power series e = A L +00 1 .n (11. This is known as an initialvalue problem. Proof: This follows immediately from Definition 11.2) can be shown to converge for all A (has radius of convergence equal to +00).2) can be shown to converge for all A (has radius of convergence equal The series (11.1) involves the matrix (11.1 Differential Equations Differential Equations = Ax(t). where the matrix A E Rnxn is constant chapter only to the socalled timeinvariant case. For all A e Rnxn. k. It can be described conveniently in terms of the matrix exponential. The solution of (11.1 and linearity of the transpose. Proof: This follows immediately from Definition 11. It can be described conveniently in terms of the matrix exponential. Definition 11.
5. {+oo = io et(sl)e tA dt since A and (sf) commute =io (+oo ef(Asl) dt . ) ( I + T A + T2!2 A 2 +. For all A E JRnxn and for all t.. Proof: Simply take T = t in property 3.e. = e'A erA = elAe'A .. Compare like powers of t in the first equation and the second or third and use the Compare like powers of t in the first equation and the second or third and use the k binomial theorem on (A + B/ and the commutativity of A and B.1 } = erA. Proof" Simply take T = — t in property 3. For all A. 2! and and e e tA rA 2 = ( I + t A + t2! A 2 +. Let denote the Laplace transform and £~! the inverse Laplace transform.110 110 Chapter 11. 2 2! and and while while e e tB tA = ( 1+ tB t2 2 + 2iB 2 +.. Proof" We prove only (a). (b) .. ) ( 1+ tA + t2!A 2 +. For all e JRnxn and for all E R. (a) . Part (b) follows similarly.tA . AB = B A.lI{(sl. Part (b) follows similarly. Then for 6.. 4... (e'A)~l e~'A. r e R. i. Let £ denote the Laplace transform and £1 the inverse Laplace transform. For all e R"x" and for all t. ) . on (t + T)*. (etA)1 = e.. et(A+B) =etAe tB = etBe tA if and only if A all e JRnxn and all e R.. et(A+B) =^e'Ae'B = e'Be'A and and B commute. Linear Differential and Difference Equations e(t+r)A e(t+T)A 3.A)I} = «M. = I + (t + T)A + (t + T)2 A 2 + . B E R" xn and for all t E JR.A)I. ForaH A E R" x " and for all t e JR.. Then for E JRnxn t E R. (a) C{etA = (sIArl.. T E JR.. Proof: We prove only (a). binomial theorem on (A B) and the commutativity of A and B...l{e tA}} = (sI . Proof' Note that Proof: Note that et(A+B) = I t + teA + B) + (A + B)2 + . ) . (b) £. Linear Differential and Difference Equations Chapter 11. AB = BA. Proof" Note that Proof: Note that e(t+r)A = etA erA = erAe tA . and B commute. i. 6. Compare like powers of A in the above two equations and use the binomial theorem Compare like powers of A in the above two equations and use the binomial theorem on(t+T)k. all A € R"x" and for all t € lR.1 {(j/A).e.
. s .All succeeding steps in the proof then follow in aastraightforward way.. ) 3 II I ( ~.. ) 3! 4! L'ltiIAIl < L'lt1lA21111e (1 + + (~t IIAII2 + .. it can be differentiated termbyterm from which the result follows immediately.. For all A e R"x" and for all t e R..A) ~' is called the resolvent of A and is defined for all s not in A (A). Differential Equations 111 111 = {+oo 10 n t 1 e(AiS)t x. Notice in the proof that we have assumed. for convenience. A 2etA + ..l)e ..=1 = ~[fo+oo e(AiS)t dt]x.1 The matrix (s I — A) I is called the resolvent of A and is defined for all s not in A (A). All succeeding steps in the proof then follow in straightforward way. If this is not the case. The matrix (s I ... = (sl A). that A is diagonalizable..H = '"' assuming Re s > Re Ai for i E !! = (sI ..11. the formal definition d dt _(/A) = lim ~t+O e(t+M)A _ etA L'lt can be employed as follows. . the scalar dyadic decomposition can be replaced by If this is not the case. ) = I ( Ae + = tA ~. the scalar dyadic decomposition can be replaced by et(Asl) =L . 1h(e tA ) = AetA = etA A. A2 + (~~)2 A tA II tA Il 1 (_ 2! + ..y.A)I. .1.3) is uniformly convergent.=1 m Xiet(Jisl)y.AetAil Ae tA I ~t (e~tAetA I (M A I ~t (e~tA .A"I i=1 . it can be differentiated termbyProof: Since the series (11. ) = L'lt IIA 21111e tA IIe~tIIAII.3) is uniformly convergent. using the JCF.Ae tA I = III (etAe~tA L'lt = = /A) . ) etA I < MIIA21111e  L'lt (L'lt)2 + IIAII + IIAI12 + . .H L..u .X i y. £(e'A) 7..etA .Ae tA etA) ... Notice in the proof that we have assumed.H dt assuming A is diagonalizable .y. For any consistent matrix norm. for convenience. that A is diagonalizable..Ae II = I L'lt (M)2 + ~ A 2 +. For all A E JRnxn and for all E JR. e'A Proof: Since the series (11.1.H using the JCF. Alternatively.Ae tA tA tA I I e tA ... Differential Equations 11. employed I e(t+~t)AAt.
0 ordinary differential equations. A similar proof yields the limit e'A A.2.1. by the fundamental existence and x(t0) — e(fo~t°')AXQ — Xo uniqueness theorem for ordinary differential equations. Premultiply the equation x . the righthand side above clearly goes to 0 as At goes to 0.112 112 Chapter 11. the limit exists and equals Ae'A •. continuous. Linear Differential and Difference Equations For fixed t. (11.i~t()Oc() nnd uniqu()Oc:s:s theorem for *('o)} = <?(f°~fo)/1. x(to) = e(toto)A Xo = XQ so. Let A E Rnxn . say. or one can use the limit exists and equals Ae t A A similar proof yields the limit et A A. B e IR xm and let the vectorvalued function u be given Theorem and.3.3 Inhomogeneous linear differential equations Inhomogeneous equations Theorem 11. The proof above simply verifies the variation of parameters formula by Remark 11.1. The proof above simply verifies the variation of parameters formula by direct differentiation.f(p(t). the For fixed t.Ax = Bu by e.. Premultiply the equation x — Ax = Bu by e~ to get (11. (11.4.5) and use property 7 of the matrix exponential to get x t ) = Ae(tto)A xo fundamental Ae(t~to)Axo = Ax(t). D Ir: Remark 11. Linear Differential and Difference Equations Chapter 11. (11. Ae(ts)A Bu(s) to get x(t) = Ae{'to)A Xo + Bu(t) = Ax(t) = x(to e(totolA Xo + = Xo fundilm()ntill ()lI.6).4.2 Homogeneous linear differential equations Homogeneous equations x(t) Theorem 11.¥o + 0 = XQ so. t) dx = l af(x t) ' dx pet) at (t) q + dq(t) dp(t) f(q(t). t ) .tA to get as follows..7) is the solution of (11. continuous.dt dt is used to get x ( t ) = Ae(tto)Ax0 + f'o Ae('s)ABu(s) ds + Bu(t) = Ax(t) + Bu(t).4).7) Proof: Differentiate (11. lo t (11. Let A E IR n xn. x(to) = Xo E IRn (11. by the fundamental existence and uniqueness theorem for ordinary differential equations. The formula can be derived by means of an integrating factor "trick" as follows. The solution ofthe linear homogeneous initialvalue problem Let A e Rnxn. the righthand side above clearly goes to 0 as t:.7) and again use property 7 of the matrix exponential. (11.6) for t ::: to is given by the variation of parameters formula for t > IQ is given by the variation of parameters formula x(t) = e(tto)A xo + t e(ts)A Bu(s) ds.7) is the solution of (1l. Thus. The solution of the linear homogeneous initialvalue problem = Ax(l). x(to) = Xo E IR n (11. The formula can be derived by means of an integrating factor "trick" direct differentiation. B E Wnxm and let the vectorvalued function u be given Let A e IR nxn . Also. fact that A commutes with any polynomial of A of finite degree and hence with etA. Then the solution of the linear inhomogeneous initialvalue problem x(t) = Ax(t) + Bu(t). say.6).5) is the solution of (11. t ) .5) and use property 7 of the matrix exponential to get x ((t) = Proof: Differentiate (11. 0 uniqueness theorem for ordinary differential equations.t goes to O.5) is the solution of (11.4). Then the solution of the linear inhomogeneous initialvalue problem and. Thus. The general Proof: Differentiate (11.4) for t ::: to is given by (11. 11.8) . The general formula formula d dt l q (t) pet) f(x. D 11. Also.5) Proof: Differentiate (11.7) and again use property 7 of the matrix exponential. or one can use the fact that A commutes with any polynomial of A of finite degree and hence with e'A.
5. and hence t d esAx(s) ds = to ds 1t to eSABu(s) ds. E ]R. Let A. X(to) =C E jRnxn (11.11) X(t) = etACe = e ratB has the solution X ( t ) — atACe tB .7.11. following to = O. punov differential equation.7.1. Corollary 11. etAx(t) . Let A E Wlxn. E ]R.2.9) for t ::: to is given by for t > to is given by X(t) = e(tto)Ac. t]: 113 1 Thus. t exponential. (11. The first is an obvious generalization of Theorem 11.sA Bu(s) ds x(t) = e(ttolA xo + lto t e(ts)A Bu(s) ds. . differential equation. Differential Equations 11. X(O) = C (11.12) X(t) = etACetAT has the solution X(t} = etACetAT.1. the Theorem 11.4 Linear matrix differential equations Linear matrix differential equations Matrixvalued initialvalue problems also occur frequently.1. C e IR" ".6. e jRnxn. X t) X 0 D Corollary 11. Theorem 11. Differential Equations [to. and the proof is essentially the same.6. Theorem 11. Then the matrix initialvalue E jRmxm.12) is known as a LyaX t) punov differential equation. X(O) =C (11. The initialvalue problem (11.. and C e Rnxm.8) over the interval [to. t]: Now integrate (11. the When C is symmetric in (11.nxn. Then the matrix initialvalue problem X(t) = AX(t) + X(t)AT. the following theorem is stated with initial time to = 0.1. 11. problem problem X(t) = AX(t) + X(t)B.etoAx(to) = lto t e. we can have coefficient matrices on both the right and left.4 11. X((t) is symmetric and (11. the Proof: Differentiate etACe tB property Proof: Differentiate etACetB with respect to t and use property 7 of the matrix exponential. Let A E Rnxn. For convenience.10) coefficient In the matrix case. The fact that X((t) satisfies the initial condition is trivial.2. The solution of the matrix linear homogeneous initialvalue e jRnxn.11) is known as a Sylvester Sylvester differential equation. B e R m x m . Theorem 11. The of nrohlcm problem X(t) = AX(t). and the proof is essentially the same.12).nxm.
1 n Le A• X'YiH . i=1 The ki s are called the modal velocities and the right eigenvectors Xi are called the modal The Ai s are called the modal velocities and the right eigenvectors *.6 Computation of the matrix exponential Computation exponential JCF method JCF method Let A e R"x" and suppose X E Rnxn is such that X"1 AX = J. Linear Differential and Difference Equations 11.4) can be written A = L X. The decomposition above expresses the solution x (t) as a weighted sum of its directions. in the inhomogeneous case we can write t e(ts)A Bu(s) ds i~ = t i=1 (it eAiUS)YiH Bu(s) dS) Xi. where J is a JCF for A. that it is diagonalizable (if A is not diagonalizLet A and suppose. in the inhomogeneous case we can write Similarly.5 Modal decompositions Let A and suppose.114 114 Chapter 11. This modal decomposition can be expressed in a different looking but identical form This modal decomposition can be expressed in a different looking but identical form n if we write the initial condition Xo as a weighted sum of the right eigenvectors if we write the initial condition XQ as a weighted sum of the right eigenvectors Xo = L ai Xi. In the last equality we have used the fact that YiHXj = flij. that it is diagonalizable (if A is not diagonalizable.5 11. for convenience. where J is a JCF for A. i=1 In the last equality we have used the fact that yf*Xj = Sfj. the rest of this subsection is easily generalized by using the JCF and the decomposition H A — ^ Xf Ji YiH as discussed in Chapter 9).1.x.li y t as discussed in Chapter 9). ~ 1=1 I t.H . for convenience. Then Then i=1 n = L(aieAiUtO»Xi.1 . modal velocities and directions. Then the solution x(t) of (11. The decomposition above expresses the solution x(t) as a weighted sum of its modal velocities and directions. if A is diagonalizable in geneml.4) can be written x(t) = e(tto)A Xo E jRnxn E Wxn = (ti. ~ 11.1. Then Then etA = etXJX1 = XetJX. Let A E jRnxn and suppose X e jR~xn is such that XI AX = J. are called the modal directions.iUtO)Xiyr) Xo 1=1 n = L(YiHxoeAi(ttO»Xi. Then the solution x(t) of (11.y. Linear Differential and Difference Equations Chapter 11.e'J. the rest of this subsection is easily generalized by using the JCF and the decomposition able. Similarly.
. let . and so forth. Mp~l ^ O. e'u e l N tu x lH = diag(e At . it is easy to check that while N has 1's along only its first superdiagonal (and O's elsewhere). ext}. Mp = 0.7. e ttJi = eO.. is complex.!etN by property 4 of the matrix exponential. Thus. Thus. i.I)! I o t 1 o Thus.. t2 t k. and N kforth.1. A. But e tN is almost as easy since N The diagonal part is easy: e e = diag(e '. it is then easy to compute etA via the formula etA = XetJ XI' Xe tl X If is etA etA tj since et I is simply a diagonal matrix. + N k2! (k .EeCkxk be aaJordan block of the form Ji <Ckxk be Jordan block of the form A Ji = 1 o o o =U+N. or grade) MP = 0. k) O's k k N = 0.0. e lN finite. (1.8.I e IN =I+tN+N 2 + . aareal version of the above can be worked out. o A o o A Clearly A/ and N commute. To be specific. or grade) p if if matrix M e jRnxn is nilpotent of degree (or index. Differential Equations 11.. N has 1's along only its second superdiagonal. of In the more general case. Differential Equations 115 If A is diagonalizable.1. teAl eAt = 0 0 0 2I e 12 At teAl 0 eAt In the case when A is complex. elN is is nilpotent of degree k. the problem clearly reduces simply to the computation of problem clearly reduces the exponential of a Jordan block.e. A matrix M E M nx " is nilpotent of degree (or index. real version of the above can be worked out. ••• .eAt). l's For the matrix N defined above.. AI e I. N22 has l's along only its second superdiagonal. while MPI t=. nilpotent Definition 11. the series expansion of e'N is finite. Finally. k) element and has O's everywhere else.. eAt teAt eAt o 2I e 12 At IkI At e (kI)! 0 ell.. .11. O. degree k. its first superdiagonal (and O's elsewhere). Nk~lI has a 1 in its (1.
.1 can be expressed as linear combinations of Ak for k = 0.a l +a2 = e==> at . Suppose the characteristic polynomial of A can be written as n(A)) = Yi?=i (A . . terms of order greater than n . .. ani solution of the n equations: g(k)(Ai) = f(k)(Ai). The method is stated and illustrated for the hand calculation in smallorder problems.1 in the power series for et A can be written in terms of these greater n— e' A lowerorder powers as well.. (A. the superscript (k) denotes the fcth derivative with respect to A. 2} and Example 11.Ai t'. the function g is known and /(A) = g(A).. f(A) n(A) etK. Here. which says that all powers of A greater than A n . characteristic of n(X (^ ~~ ^i)"'» where the A.10. .s known. Let A = [~ o ~01~ ] t . Then jr(A. Then A(A) = {2. Let A Then A (A) = {2. . the unique OTQ.s are given by g(A) — ao aiS a\X o^A.2t ][ 1 ] Interpolation method Interpolation method This method is numerically unstable in finiteprecision arithmetic but is quite effective for effective hand calculation in smallorder problems. Thus. compute f(A) = etA.. . Linear Differential and Difference Equations Chapter 11.s are distinct. Let Example 11. . in fact.116 Chapter 11. a.1. so m = 1 and nl Let g(X) = UQ + alA + a2A2. k = 0. + I) 3 .t . i Em. ==> 2a2 = t 2 e. I. Linear Differential and Difference Equations Example 11.t • g'(1) = f'(1) g"(I) = 1"(1) . n . 2} and etA Xe tJ =[=i a = xI =[ =[ 2 1 ] exp t ] [ [ 2 0 ~ ] [ 1 1 2 1 2 ] 2 1 e~2t te. Theorem 9. They are.) = (A + 1)3. functions.. anl are n constants that are to be determined. — 1.. The polynomial g gives the appropriate linear combination. the function g is known and f(A) = g(A).2a2 = te.. Let A = [ ~_\ J]. . .I. Given A E jRnxn and f(A) = etA. The method is stated and illustrated for the exponential function but applies equally well to other functions. all the Ak — expressed k 1. compute f(A) = e'A.2. t fixed Given A € E.2t e.. so m = 1 and n{ = 3.nxn and /(A) = etx. g(I) = f(1) ==> ao .. ni . .9. .. lowerorder g Example 11. I. Then the three equations for the a. With the aiS then kth superscript (&) X. and /(A) = etA. Define the Ai nr=1 n where ao. where t is a fixed scalar.3. The motivation for this method is known.10.9. The motivation for this method is the CayleyHamilton Theorem. .
Differential Equations Solving for the ai s.1.s. 1. we find ao = e.) = «o + ofiA. Then 7r(X) = (A+ 2)22 so m = 11and [::::~ 4i and f(A) = ea. There is an extensive literature on approximating certain nonlinear functions by rational functions. Use etA = £~l{(sl . 2t .2t ) 2te. Thus. This etA = . The matrix analogue yields e A ~ functions rational eA = . Let A = [ ~4 J] and /(A) = eO. 2.A)^ 1 } and techniques for inverse Laplace transforms. Differential Equations 11 . Then the defining equations for the a.2t . Then the defining equations for the aiS are given by 6] g(2) = f(2) ==> ao ==> al 2al = e. but general nonsymbolic computational effective smallorder techniques numerically problem equivalent techniques are numerically unstable since the problem is theoretically equivalent to knowing precisely a JCE JCF. we find Solving for the aiS.2t _ Other methods Other methods 1.2t + 2te. Let g(A.11.cI{(sI — A)I} is quite effective for smallorder problems. Then rr(A) = f\ + o\2 so m = and (A i 2) «i nL = 2.2t te. te. we find 117 Thus. f(A) = etA = g(A) = aoI + al A = (e. g'(2) = f'(2) = te Solving for the a. Use Pade approximation.11.2t [ ~ o ] + te. we find Solving for the a. 2.2t aL = + 2te.1. t ff>\ tk TU^^ _/"i\ Example 11. Let A _* Example 11..2t .2t .11.s are given by Let g(A) ao + aLA.2t I [4 4] I 0 _ [  e. s.
2 11.1 11. [19].. Reliable and efficient computation 4. [19]. Then the solution of the inhomogeneous initialvalue problem (11. where the matrix A in (11.14.13) is constant and does not depend on k. Let A E Rnxn. but since the system is timeinvariant.15) . by this means when IIAII is sufficiently small.1 Homogeneous linear difference equations Homogeneous linear difference equations Theorem 11. 0 D Remark 11. Linear discretetime systems. and since we consider an arbitrary "initial time" ko. = P = multiplying it by 1/2* for sufficiently large k and using the fact that e = ( e j . we restrict our attention only to the socalled timeinvariant Remark 11. Reliable and efficient computation of matrix functions such as e A and log(A) remains a fertile area for research.. + opAP and N(A) = vol + vIA + D~ (A)N(A). no double subscripts).e. in the matrix case this means when  A is sufficiently small.2. where the matrix A in (11.13) for k > 0 is given by for k 2:: 0 is given by Proof: The proof is almost immediate upon substitution of (11. exhibit many parallels to the continuoustime differential equation difference equations. of matrix functions such as e A and 10g(A) remains a fertile area for research. and this observation is exploited frequently. case. We could also case. Linear Differential and Difference Equations Chapter 11. Again. Numerical loss of accuracy can occur in this procedure from the successive squarings. •• vq A . eS .• + Vq A q. we have chosen ko = 0 for convenience. 11.14) into (11.13. This can be arranged by scaling A. Proof: The proof is almost immediate upon substitution of (11.13. where D(A) = 001 + olA + .. modeled by systems of difference equations. where D(A) 80I Si A H h SPA and N(A) v0I + vlA + q Explicit formulas are known for the coefficients of the numerator and Explicit formulas are known for the coefficients of the numerator and denominator polynomials of various orders. E jRnxm {udt~ is of Theorem 11. Many methods are outlined in. in the matrix case the exponential is accurate only in a neighborhood of the origin. Many methods are outlined in. we have chosen ko = 0 for want to keep the formulas "clean" (i. B e Rnxm and suppose {«*}£§ « a given sequence of mvectors. for example. 11.. 4.e. exhibit many parallels to the continuoustime differential equation case. modeled by systems of equations of the previous section. a Pad6 approximation for denominator the exponential is accurate only in a neighborhood of the origin..13) is constant and does not depend on k. for example. This can be arranged by scaling A. Then the solution of the inhomogeneous initialvalue problem mvectors. no double subscripts).. Linear Differential and Difference Equations DI(A)N(A). and since we want to keep the formulas "clean" (i. convenience. Again. Let A e Rnxn.12. 11. say.2. Unfortunately.13). say. Reduce A to (real) Schur form S via the unitary similarity U and use eA = U e SsUH Ue U H and successive recursions up the superdiagonals of the (quasi) upper triangular matrix and successive recursions up the superdiagonals of the (quasi) upper triangular matrix e s. but since the system is timeinvariant. Numerical loss of accuracy can occur in this procedure from the successive squarings. by 22' 2* )A A multiplying it by 1/2k for sufficiently large k and using the fact that A = / { ]I //2')A )\ * . We could also consider an arbitrary "initial time" ko.14) into (11. The solution of the linear homogeneous system ofdifference Let A e jRn xn. 
a Fade approximation for polynomials of various orders.2 Inhomogeneous linear difference equations Inhomogeneous linear difference equations E jRnxn. Unfortunately.118 118 l Chapter 11.2 Difference Equations Difference Equations In this section we outline solutions of discretetime analogues of the linear differential In this section we outline solutions of discretetime analogues of the linear differential equations of the previous section. Reduce A to (real) Schur form S via the unitary similarity U and use e A 3. e (e( 3.2.13). Linear discretetime systems. we restrict our attention only to the socalled timeinvariant case. and this observation is exploited frequently. The solution ofthe linear homogeneous system of difference equations equations (11.
16) into (11. sometimes useful Ak.2. Difference Equations 11. 0 D 11. j=O (11. Jk . Then Ak = (XJXI)k = XJkX. a matrix exponential.2. by analogy with the use of Laplace transforms to compute ztransforms. Then JCF for A. One solution method. which is numerically unstable but sometimes useful for hand calculation.O. One definition of the ztransform of a sequence is +00 Z({gk}t~) = LgkZk. LXi Jtyi . Assume that A e M" xn and let X e jR~xn be such that XI AX = /.11.zA =I+A+"2 A + . One definition of the ztransform of a sequence {gk} is a matrix exponential. where J is a E jRnxn and X E R^n J.15).2. it is then easy to compute Ak via the formula Ak = XJkXXI Ak Ak — X Jk If diagonalizable. in general.A)I.H m if A is diagonalizable. X~1 AX JCF for A.3 11. Difference Equations 119 119 is given by kI xk=AkXO+LAkjIBUj. is to use ztransforms.2..y. again mostly for smallorder probsmallorder lems.3 Computation of matrix powers Computation of matrix powers It is clear that solution of linear systems of difference equations involves computation of It is clear that solution of linear systems of difference equations involves computation of k. k:::. k=O Assuming Izl > max A.. since /* is simply a diagonal matrix. the ztransform of the sequence {Ak is then given by Assuming z > max IAI. the ztransform of the sequence {Ak}} is then given by AEA(A) X€A(A) k "'kk 1 12 Z({A})=L.=1 H l If A is diagonalizable.. substitution of (11.15).16) Proof: The proof is again almost immediate Proof: The proof is again almost immediate upon substitution of (11. based Methods based on the JCF are sometimes useful.16) into (11. +00 k=O z z = (lzIA)I = z(zI ..1 _I tA~X.
.1. but again no universally "best" method be derived for the computation of matrix powers. inI)(O) = CnI' (1l.is complex. 18]. Linear Differential and Difference Equations Chapter 11.6 be derived for the computation of matrix powers. it is commute.. 1 1 1 2 1 ] Basic analogues of other methods such as those mentioned in Section 11. Let A Ak = XJkX1 = [=i 4 a [2 1 J]. y(O) = CI.6 can also methods 11. Example 11.(^ . 0 A Writing /.3 HigherOrder Equations HigherOrder Equations differential It is well known that a higherorder (scalar) linear differential equation can be converted to higherorder a firstorder linear system.1 Ak ( . and is to be interpreted as 0 if k < q. for example.15.1(2k .1.)A  ( k ) AkP+I pl 0 J/ = kA k. To be specific. A is complex. the problem again reduces to the computation of the power of a In the general case. see [11. In the case when A. the problem again reduces to the computation of the power of a To Ji E Cpxp Jordan block. ) Ak.• = AI and noting that AI and the nilpotent matrix Writing Ji = XI + N and noting that XI and the nilpotent matrix N commute.3 11. real version of the above can be worked out.l8) . [11..2k) k( _2)k1 ] k( 2l+ (2l. Consider..2) . let 7. .. but again no universally "best" method exists.1 (2 . the initialvalue problem initialvalue (11.120 Chapter 11. For an erudite discussion of the state of the art.2 0 0 0 0 kA k .15. Ch. The symbol ( ) has the usual definition of .. aareal version of the above can be worked out. it is then straightforward to apply the binomial theorem to (AI + N)k and verify that straightforward N)k (XI verify Ak kA kI Ak k 2 (. Let A = [_J Example 11. 11. e Cpxp be a Jordan block of the form o . Then Then 1 ] [(_2)k 1 0 k(2)kk(2) 1 ] [ _ [ (_2/.17) with ¢J(t) a given function and n initial conditions 4>(t} y(O) = Co.1 Ak The symbol (: ) has the usual definition of q!(kk~q)! and is to be interpreted as 0 if k < q. Linear Differential and Difference Equations In the general case.
Further. let a = XT y. X2(t) yet). 3.. .Exercises 121 121 Here. is often well worth avoiding. a)xyT.. the companion matrix A in (11. into a linear firstorder difference equation with (vector) initial condition. 2. Define a vector x (t) E ]Rn with Here. a)xyT.19) possesses many nasty numerical properties for even moderately sized n matrix A in (11. . Let . Cn \ . xn(t) = Inl)(t). C M _I] The initial conditions take the form X (0) = C = [co. Note that det(X7 — A) = An + an\Xn 1l H alA + ao. is often well worth avoiding. (11.I) g(t. let a = xTy... where I + get. Then components Xl (t) yet). = X3(t) = yet). Further. . v (m) denotes the mth derivative of y with respect to t. Then Xl (I) X2(t) = X2(t) = y(t). y(m) denotes the mth derivative of y with respect to t..718P.718P. +h a\X+ ao. .. be a projection. Define a vector x (?) e R" with components *i(0 = y ( t ) . •. Let 3. Show that etA 2. x2(t) = y ( t ) . condition. into a linear firstorder difference equation with (vector) initial with n initial conditions. where !(eat . at least for computational purposes.. . Let P € R 1. at least for computational purposes. and. Xn(t) y { n ~ l ) ( t ) .A) = A. y E lRn and let A = xyT.. = Xn(t) = y(nl)(t). . However..an_llnl)(t) Xnl (t) Xn(t) = y(n)(t) = aoy(t)  + ¢(t) = aOx\ (t) .a)= { a t nxn p if a if a 1= 0. Cl."+ an_1A n~ + . These equations can then be rewritten as the firstorder linear system These equations can then be rewritten as the firstorder linear system 0 0 x(t) = 0 0 1 0 0 0 ao a\ x(t)+ [ 0 1 a n\ n ~(t) r. However. EXERCISES EXERCISES 1.. y € R" and let A = xyT. = O. as mentioned before.. the companion Note that det(A! .anlXn(t) + ¢(t).19) possesses many nasty numerical properties for even moderately sized n and. Show that e'A 1+ g ( t .19) The initial conditions take the form ^(0) = c [CQ. aly(t) .. ..a\X2(t) . . Suppose x... c\.. Show that e P ~ ! + 1. as mentioned before. Let P E lR nxn be a projection. Show that e % / + 1. A similar procedure holds for the conversion of a higherorder difference equation A similar procedure holds for the conversion of a higherorder difference equation with n initial conditions. Suppose x.
f3 E R and Then show that Then show that ectt _eut cos f3t sin f3t ectctrt e sin ~t cos/A J. must also be an eigenValue of S.A and to be symplectic K~l AT K . Show that SI HS must be Suppose and symplectic. also eigenvalue of (c) Suppose that H is Hamiltonian and S is symplectic. must also be an eigenvalue of H. Show that 1 /A. Find eM when A = Find etA = 8.122 122 Chapter 11. Show that ). Show that E jRmxn e = [eoI A sinh 1 X ] ~I . Show S~1 H S Hamiltonian.. . Let K denote the skewsymmetric matrix 0 [ In In ] 0 ' In A E jR2nx2n where /„ denotes the n x n identity matrix... Show that eH must be symplectic. be an eigenvalue of H. Find a general expression for Find a general expression for 7.be an eigenvalue of H. (a) Suppose H is Hamiltonian and let). Linear Differential and Difference where X e M'nx" is arbitrary. also be an eigenvalue of H.A 1 . ft € lR and Let a. Show that —A. 6. Let denote the skewsymmetric matrix 4.be an eigenvalue of S. Linear Differential and Difference Equations Chapter 11. Let (a) Solve the differential equation (a) Solve the differential equation i = Ax . (d) Suppose 5. must (a) Suppose E is Hamiltonian and let A.. H (d) Suppose H is Hamiltonian. x(O) =[ ~ J. be an eigenvalue of S. Show that 1/). A matrix A e R 2nx2n is said to be K I AT K = . must (b) Suppose S is symplectic and let). (b) Suppose S is symplectic and let A. Hamiltonian if K~1ATK = A and to be symplectic if K I ATK = A I. 4. Hamiltonian. Let 5. Let a.
what is the value of ZIOOO? What is the value of Zk in 2.Yet) + 2y(t) + yet) = 0. half stays home and half goes to the Americas. half stays home and half goes to the Americas. The year is 2004 and there are three large "free trade zones" in the world: Asia (A). Each total assets of $40 trillion of which $20 trillion is in E and $20 trillion is in R. yeO) = 1. of Cf or all t. and the Americas (R). k * +00 (i. a quarter goes to Europe. (b) Find the eigenvalues and right eigenvectors of M.e.) (Exercise adapted from Problem 5. x(O) = Xo for t ~ O. (Exercise adapted from Problem 5. 11.11 in [24]. I/X(t)1/2 = ex for all t > 0. and a quarter goes to Asia.e. 10. (d) Find the limiting distribution of the $40 trillion as the universe ends. x(O) =[ ~ l 9. and the Americas (R). Each year half of the Americas' money stays home. Show that *(OII2 = aforallf > O. (b) Consider the difference equation (b) Consider the difference equation Zk+2 + 2Zk+1 + Zk = O. Europe (E). If £0 = 1 and z\ If Zo = 1 and ZI = 2. around the time the Cubs win a World Series). i. . For Europe and Asia. (c) Find the distribution of the companies' assets at year k.YeO) = O. Consider the n x n matrix initialvalue problem 10. Show that for t > 0. Suppose certain multinational companies have total assets of $40 trillion of which $20 trillion is in E and $20 trillion is in R. For Europe and Asia. and a quarter year half of the Americas' money stays home. Show that the eigenvalues of the solution X t ) of this problem are the same as those Show that the eigenvalues of the solution X ((t) of this problem are the same as those of C for all?. i.X(t)A. as k —»• +00 (i. Consider the initialvalue problem i(t) = Ax(t)..e.. (a) Find the solution of the initialvalue problem (a) Find the solution of the initialvalue problem .3. Suppose that A E ~nxn is skewsymmetric and let ex = Ilxol12.11 in [24]. around the time the Cubs win a World Series).Exercises Exercises (b) Solve the differential equation (b) Solve the differential equation i 123 = Ax + b. Consider the initialvalue problem 9. The year is 2004 and there are three large "free trade zones" in the world: Asia (A). what is the value of ZIQOO? What is the value of Zk in general? general? .) 12. X(O) = c. 11..e.3. Suppose that e E"x" is skewsymmetric and let a = \\XQ\\2. (a) Find the matrix M that gives (a) Find the matrix M that gives [ A] E R =M year k+1 [A] E R year k (b) Find the eigenvalues and right eigenvectors of M. 12. (c) Find the distribution of the companies' assets at year k. Consider the n x n matrix initialvalue problem X(t) = AX(t) . Suppose certain multinational companies have Europe (E). a quarter goes to Europe.. goes to Asia. as (d) Find the limiting distribution of the $40 trillion as the universe ends.
This page intentionally left blank This page intentionally left blank .
(A. generalized eigenvalue problem. B e enxn" if there exists a scalar 'A. B e C" xn The standard eigenvalue problem considered in Chapter 9 obviously corresponds to the special case that B = I. Similarly. a nonzero vector y e C" is a left generalized eigenvector corresponding to an E en generalized eigenvector eigenvalue 'X if eigenvalue A if (12.) are the eigenvalues of the associated generalized eigenvalue problem. Definition 12. The standard eigenvalue problem considered in Chapter 9 obviously where A.2) When the context is such that no confusion can arise. 125 125 . The roots ofn(X.3. As with the standard eigenvalue problem.'AB is singular.1 12.XB is called a matrix pencil (or pencil of the matrices A and B). eigenvalues for the generalized eigenvalue problem occur pencil — XB problem occur where the matrix pencil A . called a generalized eigenvalue. B). a. the adjective "generalized" "generalized" standard eigenvalue [y] is usually dropped.) = det(A . A nonzero vector x e C" is a right generalized eigenvector of the pair generalized eigenvector of (A. As with the standard eigenvalue problem. Remark 12. B). and A. The matrix A — 'AB is called a matrix pencil (or pencil of the matrices A Definition 12. The polynomial n('A) = det(A — A. In this chapter we consider the generalized eigenvalue problem In we the generalized eigenvalue problem where A. e e.5) is called the characteristic polynomial of the matrix pair (A. B) with A. called a generalized eigenvalue. Definition 12.Chapter 12 Chapter 12 Generalized Eigenvalue Generalized Eigenvalue Problems 12.2. The polynomial 7r(A. then so is ax [ay] for any nonzero scalar a E <C.1. if x [y] is a right [left] ax [ay] for any eigenvector. B e jRnxn. B E enxn. the characteristic polynomial is obviously real.1.3.4. characteristic hence nonreal eigenvalues must occur in complex conjugate pairs. and Remark 12. The matrix A . B E C MX if there exists a scalar A E C. eigenvector. e C. B). Definition 12.4. such that that (12. corresponds to the special case that B = I.2. When A. B E E" xn . A E en Definition 12. B) with A. .1) Ax = 'ABx. The roots ofn('A) are the eigenvalues of the associated nomial of the matrix pair (A. hence nonreal eigenvalues must occur in complex conjugate pairs.'AB) is called the characteristic polyDefinition 12.1 The Generalized Eigenvalue/Eigenvector Problem The Generalized Eigenvalue/Eigenvector Problem Ax = 'ABx.
{3 =I. I and ^.(3A) ±. Note appear. All A E C are eigenvalues since det(B .O.O. is singular. f3 = O. reciprocal Case of reciprocal .XB. and ~. eigenvalues — AB. ft ^ O. k E !!. the associated matrix pencil is singular (as in Case N(A) n N(B) =Isingular 4 above).5. 1 and ~. If = of degree n. Note that if AA(A) n J\f(B) ^ 0.LA. then rr(A) is a polynomial nonsingular). when B is singular.{3 = 0. (12. Case 1: a =I.LA) = (1 . Case Case 3: a = 0. Case 4: a = 0. f3 = O. suppose associated — AB.L) and there are again four cases to consider. If B = I (or in general when B is nonsingular). There are two eigenvalues. There are two eigenvalues. zero.a/.LA) == 0.5.XB) is not identically zero. only the case of regular pencils is considered in the remainder of this chapter. Associated with any matrix pencil A . f3 / 0./. All A 6 C are eigenvalues since det(B — uA) = O. the pencil A — XB is said to be 12. If det(A — AB) not regular. Case = ft ^ 0.L = (JL = £.AB. I (of multiplicity 1). All A e C are eigenvalues since det(A — AB) =0. it is said to be singular.XB. Clearly the reciprocal pencil has eigenvalues responding generalized /. n(X) Remark 12. the characteristic polynomial is = (I . There are two eigenvalues. There are two eigenvalues. Case 4: a = 0. All A E C are eigenvalues since det(A .I. pencil . 1 and 0.A and corAssociated with any matrix pencil — AB is a reciprocal pencil . det(B . 1 (of multiplicity 1).B. (3 = 0.AHa . B k e n. =I.O. {3 = 0. Case 2: a = 0.0. If B is singular.AB) and there are several cases to consider. A similar reciprocal symmetry holds for Case 2.0. There are two eigenvalues. I1 and . or infinitely many B = I. I). there is a second eigenvalue "at infinity" for Case 3 of of .126 126 Chapter 12. However. Generalized Eigenvalue Problems Chapter 12. There is only one eigenvalue./. However.L)({3 .A. With A and B as in (12.nA. Case 1: a ^ 0./. there may be 0. it is apparent where the "missing" eigenvalues have "missing" gone in Cases 2 and 3. A similar reciprocal symmetry holds for Case 2.6.6. I and O. and hence there are n eigenvalues associated with the pencil A . At least for the case of regular pencils.0. (3 = O. It is instructive to consider the reciprocal pencil associated with the example in It reciprocal Remark 12. Case 2: a = 0.X B is a reciprocal pencil B — n. when B =I. eigenvalues associated with the pencil A . {3 =I. Case 1: ^ 0. If del (A .AB Definition 12. That is to say.3).LA and corresponding generalized eigenvalue problem. (3 = O. 1 and O. only the case of regular pencils is considered in the remainder of this chapter. For example. Note that A and/or B may still be singular. the pencil A . Case 1: a =I. in particular. Case 2: = 0. regular. 1 Case 3: a =I.3) where a and (3 are scalars. Generalized Eigenvalue Problems Remark 12. Then the characteristic polynomial is ft det(A .5. There is only one eigenvalue. While While there are applications in system theory and control where singular pencils appear. There are two eigenvalues. otherwise. Case 3: Case 4: = 0. I multiplicity 1). f3 = 0. {3 ^ 0. {3 =I. There are two eigenvalues. There are two eigenvalues. ^ 0./. ft =I.0.B) == O. 1 and 0.KB always has pencil — AB . with its reciprocal eigenvalue being 0 in Case 3 of the reciprocal pencil B — /. Case 4: a = 0. A — A.
we now deal with equivaa matrices.8. Sec. Let A. 2. B E Cnxn Then there exist unitary matrices Q. [7. If B is nonsingular. the theoretical foundation for the QZ algorithm. Q~H y isa lefteigenvectorofQAZ — XQBZ. then QHy isa left eigenvector ofQAZ AQBZ. where Ta and Tp are upper triangular. for example.12.AQBZ) = det[Q(A . Since det 0 and det Z are nonzero. see. Let A.AB). in fact.7.7]. 12. Z e Cnxn such that 12. in fact.7] [25.8. However. f i always has precisely eigenvalues. since the generalized eigenvalue problem is then easily seen to be equivalent to the standard eigenvalue problem B. Let A. [7. D The first canonical form is an analogue of Schur's Theorem and forms. see. this turns out to be a very poor numerical procedure for handling the generalized eigenvalue problem out to be a very poor numerical procedure for handling the generalized eigenvalue problem if is even moderately ill conditioned with respect to inversion. with the understanding onal elements of Ta to the corresponding diagonal elements of Tp. which is the generally preferred method for solving the generalized eigenvalue problem. The result follows by noting that (A AB)x = 0 if and only if Q(A AB)Z(Zl x) = The result follows by noting that (A –yB)x . solving the generalized eigenvalue problem. Proof: Proof: 1. and eigenvectors under equivalence. and the first theorem deals with what happens to eigenvalues and eigenvectors under equivalence. Then there exist unitary matrices Q. 0 ( Q ~ H y)H Q(A X AB)Z = O.7].7] or [25. of AAB. work directly on A and B are discussed in standard textbooks on numerical linear algebra. [7. B e cnxn . 7.7].AB are then the ratios of the diagBy Theorem 12.. since the generalized eigenvalue problem is then easily seen to be equivalent eigenvalues. the result follows. det(QAZ . 6. Canonical Forms 127 B is nonsingular.7]. this turns to the standard eigenvalue problem B~1Ax = Xx (or AB~1w = Xw). see.2. the eigenvalues ofthe pencil A — XB are then the ratios of the diagonal elements of Ta to the corresponding diagonal elements of TfJ . There is also an analogue of the MurnaghanWintner Theorem for real matrices.7. Canonical Forms 12. 6.7] or [25. lencies rather than similarities. canonical forms are available for the generalized eigenvalue problem. which is the generally preferred method for theoretical foundation for the QZ algorithm. Sec.AB and QAZ . 7. Sec. the result follows easily by noting that yH(A — XB) — 0 if and only if yH (A . ify is a left of AB. .AB)Z] = det gdet Zdet(A 1. for example. the pencil A fewer than eigenvalues. ifx isa right eigenvector of A—XB. for example.7. Since the latter involves a pair of matrices. B.l W AW). 3. Sec. Then 12. that a zero diagonal element of TfJ corresponds to an infinite generalized eigenvalue. Theorem 12. fewer than n eigenvalues. Sec. There is also an analogue of the MurnaghanWintner Theorem for real matrices. QBZ = TfJ .XB)Z] = detQ det Z det(A . [7. to ifx is a Zl x is a righteigenvectorofQAZAQB Z. Again. 6. 7.Oif andonly if Q(AXB)Z(Z~lx) = 0. Sec. 7. c 3. Q. ify isa left eigenvector of A —KB. with the understanding that a zero diagonal element of Tp corresponds to an infinite generalized eigenvalue. the result follows. and det Z are nonzero.2 12. Z e Cnxn with Q and Z nonsingular.AQBZ are the same (the two problems problems are said to be equivalent).2. the eigenvalues of the problems A — XB and QAZ — XQBZ are the same (the two 1.7. By Theorem 12. canonical forms are available for the generalized Just as for the standard eigenvalue problem. 
the pencil A AAB always has precisely n . then Z~lx isa right eigenvector of QAZ—XQ B Z. Theorem 12. E c nxn such that QAZ = Ta .l Ax Ax (or AB. the eigenvalues of the problems A . Numerical methods that if B is even moderately ill conditioned with respect to inversion. and the first theorem deals with what happens to eigenvalues lencies rather than similarities. E nxn with Q and nonsingular. Let A. Numerical methods that work directly on A and are discussed in standard textbooks on numerical linear algebra. o. the eigenvalues of the pencil A . Q. det(QAZXQBZ) = det[0(A .AB) o if and only if (QH y ) H Q ( A –_ B ) Z = Q. the The first canonical form is an analogue of Schur's Theorem and forms. for example. However. Since det Q XB). fl. see. Sec. Then 1. 6.2 Canonical Forms Canonical Forms Just as for the standard eigenvalue problem.7] or [25. where Ta and TfJ are upper triangular. 2.
XB is regular. Z e R"xn such B E jRnxn. Let A.9. B e Rnxn. is beyond the scope of this book. T. When S has a 2 x 2 diagonal block.)"N). real eigenvalues. B e Cnxn and suppose the pencil A .AB)Q = diag(LII' . There is also an analogue of the Jordan canonical form called the Kronecker canonical fonn Kronecker form (KeF).A [~ ~ l of . L l" L~.AB where J is a Jordan canonical form corresponding to the finite eigenvalues of A A.10.'. Otherwise. where T is upper triangular and S is quasiuppertriangular.12 (Kronecker Canonical Form). B e c mxn .AB. The matrix pencil 12.• L. form (KCF). Then there exist 12. A full description of the KeF. Then there x exist nonsingular matrices P. QBZ = T.. of — XB.I.2)2 with characteristic polynomial (A — 2)2 has a finite eigenvalue 2 of multiplicty 2 and three 2 2 infinite eigenvalues. Generalized Eigenvalue Problems Chapter 12. .11. Let A. Let A. of eigenvalues are given as above by the ratios of diagonal elements of S to corresponding elements of T. while the full KeF in all its generality applies also to "rectangular" and singular KCF "rectangular" pencils. KCF. thnt that QAZ = S. .12 mxm nxn mxm nxn E C nonsingular nonsingular matrices P e c and Q e c QE C such that peA .. In this chapter. I . mxn E C • Theorem 12. The first theorem pertains only to "square" regular pencils. including analogues of principal vectors and description of of so forth.9. .128 Chapter 12. E jRnxn 12. quasiuppertriangular. Generalized Eigenvalue Problems Theorem 12. J . we present only statements of the basic theorems and some examples. Example 12.fi and canonical form nilpotent matrix of associated and N is a nilpotent matrix of Jordan blocks associated with 0 and corresponding to the infinite infinite eigenvalues of A .. the 2 x 2 subpencil formed with the corresponding fonned 2 x diagonal subblock 2x2 2 diagonal subblock of T has a pair of complex conjugate eigenvalues. Then there exist orthogonal matrices Q. [2o I o o o 0 0 0 0 0 2 0 0 1 0 0 1 0 0 ~ ]> [~ 0 I 0 0 0 0 0 0 0 0 o o 0 I 0] 0 0 0 0 (X .AB)Q = [~ ~ ] .. Q € c nxn"such that nonsingular E C" such that peA .A.11. B E c nxn pencil — AB Theorem 12.
both N and J are in Jordan canonical form. LQ. R ( S <S. (12.. there is a matrix characterization of deflating subspace.The next two blocks second block L\ one the block is L\.13.5) .e. Lo. (12. suppose S e Rn* xk is a matrix whose columns span a kdimensional E ~nxk ^dimensional subspace S of ~n. are called the right minimal indices. The /( are called the left minimal indices while the r. Lo. corresponds LQ. Lo.— XBif S Rn.35).e.12. while each LQ has "zero rows" and L6.XB is regular. i. B e Wlxn and suppose the pencil A .14. both Nand J are in Jordan canonical form. Consider a 13 x 12 block diagonal matrix whose diagonal blocks are A 0] I o A I . Example 12. Specifically. Then is deflating subspace for the pencil A AB if and only if there exists M E Rkxk such that e ~kxk AS = BSM. LQ . Then V is a E ~nxn suppose pencil — AB deflating subspace if deflating subspace if dim(AV + BV) = dimV. Definition 12. n(S)) = S. L6. Then SS is aadeflating subspace for the pencil A .2. i. The second block is L\ while the third block is LI. where each LQ has "zero columns" and one row. 000 Just as sets of eigenvectors span Ainvariant subspaces in the case of the standard eigenvectors eigenproblem (recall Definition 9.2. 0. Let A. LQ.. Left Left or right minimal indices can take the value O. Canonical Forms 12. Lo L6 one column. Canonical Forms 129 where N is nilpotent. and L^ is the (k + I) x k where N is nilpotent. The first block of zeros actually corresponds to LQ. there is an analogous geometric concept for the eigenproblem generalized eigenproblem. and Lk is the (k + 1) x k bidiagonal pencil bidiagonal pencil A 0 0 A Lk = 0 0 0 0 A I The Ii are called the left minimal indices while the ri are called the right minimal indices. Such a matrix is in KCF. next two correspond to correspond J = 21 0 2 [ o 0 while the nilpotent matrix N in this example is N [ ~6~].4) eigenvalue characterization Just as in the standard eigenvalue case. generalized eigenproblem.
there is a concept analogous to deflating subspace called a reducing subspace. there AV ~ V. the (finite) zeros of this system are given by the (finite) complex numbers In general. which has a root at —2. Let A=[ 4 2 C = [I 2]. vector. Similarly. however. [26]. (12. If the pencil is not regular.4) becomes dim(AV + V) = dimV. then (12. Example 12. and y is the vector of outputs or observables.8.3 12. multioutput systems. then (12.130 Chapter 12. The method of finding system zeros via a generalized eigenvalue problem also works The method of finding system zeros via a generalized eigenvalue problem also works well for general multiinput.4) becomes dim (A V + V) = dim V. D=O.6».8. (12. The connection between system zeros and the corresponding system pencil is nonThe connection between system zeros and the corresponding system pencil is nontrivial. multioutput systems. and D € Rpxm.3 Application to the Computation of System Zeros Application to the Computation of System Zeros i y Consider the linear system Consider the linear svstem = Ax + Bu. Similarly.8. E jRPxn. C e Rpxn.6). E jRnxm. see. we which clearly has a zero at 2. Numerically. we offer some insight below into the special case of a singleinput. which is clearly equivalent to AV c V.5) becomes AS = SM as before. which is clearly equivalent to If B = I. This is accomplished by computing a certain unitary equivalence on the system pencil that then yields a plished by computing a certain unitary equivalence on the system pencil that then yields a smaller generalized eigenvalue problem with only finite generalized eigenvalues (the finite smaller generalized eigenvalue problem with only finite generalized eigenvalues (the finite zeros). Generalized Eigenvalue Problems If B = /. u is the vector of inputs or controls. (n + m) x (n + m) pencil. This is accomcareful first to "deflate out" the infinite zeros (infinite eigenvalues of (12.15. Numerically. where x(= x(t)) is called the state vector. and E jRPxm. Checking the finite eigenvalues of the pencil (12. Then the transfer matrix (see [26]) of this system is Then the transfer matrix (see [26)) of this system is g(5)=C(sIA)'B+D= 5 55 2 + 14 ' + 3s + 2 which clearly has a zero at —2. For details.5) becomes AS = SM as before. Checking the finite eigenvalues of the pencil (12.6)). Ac M D "'" 5A + 14. [26]. for example.6). In the special case p = m. however. trivial. In the special case p = m. For details. B € R" xm . 12. zeros). B] . This linear timeinvariant statespace model is often used in multivariable control theory. where x(= x(t)) is called the state space model is often used in multivariable control theory. one must be well for general mUltiinput. we offer some insight below into the special case of a singleinput.15. Let Example 12. is a concept analogous to deflating subspace called a reducing subspace. these values are the generalized eigenvalues of the drops rank. see. However. where the "system pencil" (12. = Cx + Du E jRnxn. This linear with A € M n x n . for example. one must be careful first to "deflate out" the infinite zeros (infinite eigenvalues of (12. u is the vector of inputs or controls.6) drops rank. we find the characteristic polynomial to be find the characteristic polynomial to be det [ which has a root at 2. In general. the (finite) zeros of this system are given by the (finite) complex numbers where the "system pencil" z. and y is the vector of outputs or observables.8. these values are the generalized eigenvalues of the (n + m) x (n m) pencil. 
lEthe pencil is not regular. However.
zI cT b ] d is singular. g(s) Furthermore. system of differential equations differential Mx+Kx=O.e. or g(z)y 0 by the definition of g. and D = d E R. Hence g(z) 0. symmetric.10) for A.7) we get get x = (A . then from (12. Then there exists a nonzero solution to or or (A . 12.l xn. (12.zl)lby + dy = 0. or g ( z ) y = 0 by the definition of g. Suppose z € is such that Suppose Z E C is such that [ A . the problem (12. . A pole/zero Assuming z is not an eigenvalue of A (i. no pole/zero cancellations). B E Rnxn arises when A = A and B = BT > O.9)). of the Since B is positive definite it is nonsingular. and D e R r T(s7 .10) is equivalent B. e ffi. B e ffi.4 12.8) c T x +dy = O. the problem (12. B~11A is not necessarily B~ Ax = AX.nxn A AT and B the B1 0.12. C = c T E R l x n . For example.9) Substituting this in (12.7) (12.8).s) = c (s I — A) 1 b + d c function and assume that g(s) can be written in the form and assume that g ( s ) can be written in the form v(s) g(s) = n(s)' polynomial A. Thus. 0 from (12. Symmetric Generalized Eigenvalue Problems 12.4.8).4 Symmetric Generalized Eigenvalue Problems Symmetric Generalized Eigenvalue Problems Ax = ABx A very important special case of the generalized eigenvalue problem (12. we have Substituting this in (12. (12. M K where M is a symmetric positive definite "mass matrix" and K is a symmetric "stiffness definite "stiffness matrix. Now y ^ 0 (else x z i. However. there are no "pole/zero cancellations").10). we have _c T (A .. relatively where n(s) is the characteristic polynomial of A.zl)lby.. g. Hence g(z) = 0." is a frequently employed model of structures or vibrating systems and yields a frequently generalized eigenvalue problem ofthe form (12.A)~ ! Z? + d denote the system transfer function (matrix).e. the secondorder A. and v(s) and n(s) are relatively prime TT(S) v(s) TT(S) (i. let B = b E Rn. let g(. Now _y 1= 0 (else x = 0 from (12. z is a zero of g. "pole/zero cancellations"). Thus..A to the standard eigenvalue problem Bl1Ax = AJC.9». b e ffi. Specifically.zl)x + by = 0.10) is equivalent Since B is positive definite it is nonsingular.e.4. Symmetric Generalized Eigenvalue Problems 131 131 1 singleoutput system.n.
where L is nonsingular (Theorem 10. Finally. where L is nonsingular Proof: Since B > 0. zi Then x.23).5 ] 1.. positive. then = C T > 0.16).11) can then be rewritten as = Cz = AZ. Generalized Eigenvalue Problems Example 12.1926 whose eigenvalues are approximately 2. with corresponding eigenSince C = C T.1926 as expected. Theorem 12. Generalized Eigenvalue Problems Chapter 12.1926 in Example 12. Moreover. . Let A. we have restricted our attention to that case only.16 is Example 12.12) Since C = C T the eigenproblem (12. so the eigenvalues are positive. . ii € n. it has a Cholesky factorization B = LLT. of course. if A > 0. •. = L ~Tzi. but since realvalued matrices are commonly used in most applications. y)BB = XT By. Xj)B T T = xr BXj = (zi L ~l)(LLT)(L ~T Zj) = Dij. the eigenvalues are also all positive..5 2. it has a Cholesky factorization B = LL T..18.1926 and 3. then product y) x T By. Then the eigenvalue problem (Theorem 10.fi 1] .17. and are Hermitian.. the eigenvalue problem eigenvalue problem Ax = ABx has n real eigenvalues.... .11) can then be rewritten as AL J and Z = LT x.16.. B e jRnxn A AT and B BT > O. The Cholesky factor for the matrix B in Example 12. if A = AT> 0. Proof: Since B > 0. Then the eigenvalue problem Ax = ABx = ALL Tx (12. Then the generalized A. zn satisfying vectors Z I. if A = A > 0. The material of this section can. the eigenproblem (12. generalized case A and B are Hermitian.18. B E Rnxn with A = AT and B = BT > 0.16 is D 0 L=[~ . we have restricted our attention to that case only. (12. the eigenvalues of B l A are always real (and are approximately 2. E !!.11) can be rewritten as the equivalent problem 1 Letting C = L ~I AL ~T and z = L1 x. but since realvalued matrices are commonly used in most applications. then C = C T > 0. and the n corresponding right eigenvectors can be chosen to be orthogonal with respect to the inner product (x. Finally.132 132 Chapter 12. if orthogonal > 0.12) has n real eigenvalues.12) has n real eigenvalues. Let A = [~ . are eigenvectors of the original generalized eigenvalue problem and satisfy and satisfy (Xi. so the eigenvalues are positive.. Zn Zj = Dij.. Moreover. be generalized easily to the case where A material of can. Example 12. The Cholesky factor for the matrix B in Example 12.1926 and —3. with corresponding eigenvectors zi.fi Then it is easily checked that Then it is easily checked thai c = L~lAL~T = [ 0. are eigenvectors of the original generalized eigenvalue problem Xi Zi.5 2. (12. l = [i ~ J B ThenB~ A Then A B~Il = [~ ~ J B~I A approximately Nevertheless. Let A Example 12. (12.5 ' 3.23)..16.
5. Then B. D > I. D since the two matrices are diagonal). simultaneous reduction can also be accomplished via an SVD. by Theorem 10. In particular.e. A I < B~ . such results and we present only a representative (but important and useful) theorem here. Proof: By Theorem 12.l Q~T QT Q~ B~ AQ. individually. by Theorem where D is diagonal. Infact. it does preserve the eigenvalues of A — XB. But then D.19 is very useful for reducing many statements about pairs of symmetric matrices to "the diagonal case. However. simultaneous reduction can also be accomplished via an SVD.19 (Simultaneous Reduction to Diagonal Form).19. since A 2: B. A1. Let A. B E E"x" with 12. Also.5. B) can be simultaneously diagonalized by the same matrix. There are many such results and we present only a representative (but important and useful) theorem here. so it does not preserve eigenvalues of and B Note that Q is not in general orthogonal.. we B~ 1 A. Thus. Also. it does preserve the eigenvalues of A ." The following is typical. Then there exists a nonsingular matrix Q such that A = AT and B = BT > 0. there exists an orthogonal matrix P such that pTe p = D. Simultaneous Diagonalization 12.. In such cases. i. Q D. In numerically problematic. Let Q = L~T P.21 we have that QT AQ > QT BQ.21 we have that QT AQ 2: QT BQ. when L is highly iII conditioned with respect to inversion. matrices to "the diagonal case. This can be seen directly. Theorem 12.20. Let Q = L .< / (this is trivially true 0 since the two matrices are diagonal).31. there exists an orthogonal matrix P such that P CP = D. haveA(D) = A(B.19 is There are situations in which forming C = L~1AL~T as in the proof of Theorem 12. when L is highly ill conditioned with respect to inversion. with the complex case following in a Again. let such cases. the diagonal elements of D are the eigenvalues of B. Since LLT be the Cholesky factorization of and setC L I AL~T. normal maRecall that many matrices can be diagonalized by a similarity.g.31.1A). However.19. Since Proof: Let T C is symmetric. since A > B. Theorem 12. There are many matrices (A. B E lRnxn be positive definite. since QDQ~l have A(D) = A(B~1A).1 12. i. we restrict our attention only to the real case.5 12. Then and and QT BQ Finally. e. Again. since QDQI Finally.e.5. we Note that Q is not in general orthogonal. straightforward way.. LetA QT AQ and B QT Then/HA Q~ B.g.5 Simultaneous Diagonalization Simultaneous Diagonalization Recall that many matrices can be diagonalized by a similarity. Proof: Let B = LLT be the Cholesky factorization of B and set C = L~1AL T. where D is diagonal. e. D 2: [. \ 2. QD~ QT < QQT. let .. so it does not preserve eigenvalues of A and B individually. This can be seen directly.20. where D is diagonal. normal matrices can be diagonalized by a unitary similarity. there exists Q e E"x" such that QT AQ = D and QT BQ = I.1A. Then diagonal. where D is C is symmetric. But then D"1I :::: [(this is trivially true 10. It turns out that in some cases a pair of matrices (A. = pT L I(LLT)L T P = pT P = [.1 Simultaneous diagonalization via SVD Simultaneous diagonalization via SVD There are situations in which forming C L I AL T as in the proof of Theorem 12.12. Let A = QT AQandB = QT BQ.19 e ][~nxn A AT and B BT > O. Simultaneous Diagonalization 133 12. Now D > 0 by Theorem 10. Theorem 12. we restrict our attention only to the real case." The following is typical.T P.1A = Q1l B~1QT QT AQ = Q11B. Let A.19 is numerically problematic. Proof: By Theorem 12.5. 
= QQT AQQ~l = LTPPTL~IA = L~TL~1A L T P pT L 1 A L T L I A QQT AQQI 0 D = B1A. Then A 2: B if and only if B~ 2: AI. there exists Q E lR~xn such that QT AQ = D and QT BQ = [. the diagonal elements of D are the eigenvalues of B 1A. In fact.19 is very useful for reducing many statements about pairs of symmetric Theorem 12. Thus. Then there exists a nonsingular matrix Q such that where D is diagonal. Now D > 0 by Theorem 10.e. B e M" xn be positive definite. To illustrate.. A~l :::: Bl1. where D is diagonal.'AB. In particular. It turns out that in some cases a pair of trices can be diagonalized by a unitary similarity.lI QT :::: Q QT. i. Then A > B if and only if Bl1 > Theorem 12. i. Let A. B) can be simultaneously diagonalized by the same matrix. with the complex case following in a straightforward way..e. To illustrate.1AQ.
D may have pure imaginary elements.e.21. let A = LAL~ and B = LsLTB be Cholesky factorizations of A and B. Remark 12. D b . Then the matrix Q == LLBTu performs the simultaneous diagonal.. Remark 12.. at least in real arithmetic..134 134 Chapter 12. A can be written as A = PDP T.22. [7.13) where E E R£ x " isisdiagonal. Various generalizations of the results in Remark 12. The SVD in (12. Generalized Eigenvalue Problems us assume that both A and B are positive definite. This is analogous to finding the singular values of a matrix M by Sec. A straightforward. 8. Generalized Eigenvalue Problems Chapter 12. products LA L ~ LBL~ see. Then the matrix Q U performs the simultaneous L e 1R~ xn diagonalization. eigenproblem MT M x Xx. Note that LB A and thus the singular values of L B 1 LA can be found from the eigenvalue problem 02. Further.14) can be rewritten in the form LALAx = XLBz = Letting x = LBT Z we see 02. For example.21 are possible. (12. without forming the products LALTA or LBLTB explicitly. see. note that T QT AQ = U Li/(LAL~)Li/U = UTULVTVLTUTU i/ = while L2 QT BQ = U T LB1(LBL~)Li/U = UTU = I.21 example. which is thus to the generalized eigenvalue problem 02. when A = AT > 0. respectively. i.15) The problem (12.13) can be computed without explicitly forming the without Remark product indicated matrix product or the inverse by using the socalled generalized singular value decomposition (GSVD). for LB i.13» via arithmetic operations performed only on LA LA (12.15) is called a generalized singular value problem and algorithms exist to problem generalized solve it (and hence equivalently (12.7. which is thus equivalent to the generalized eigenvalue problem ALBL~LBT z. example. Further. operations performed directly on M rather than by forming the matrix MT M and solving performed MT forming the eigenproblem MT MX = AX. but in writing = PDDp D diagonal. Compute the SVD Cholesky factorizations A B. let A = LALTA and B — LBL~ us assume that both A and B are positive definite. for generalizations results 12.14) rewritten the LAL~x = ALBz = A L g L ^ L g 7 z .e. Sec.butin writing A — PDDP T = PD(PD) with D is diagonal and P orthogonal.3]. To check this. PDPT ~ ~ ~ ~ T PD(PD{ with where Disdiagonaland P is orthogonal.14) Letting x = LB z we see that (12.13)) and LB separately. respectively. The case when A is symmetric but indefinite is not so A = AT::: O.
6. since eAt :F 0..16) arises frequently in applications: 0. HigherOrder Eigenvalue Problems 135 12. k = 1. by analogy with the firstorder case. Substituting in q(t) = eAt p. K = KT ::: 0). yields a polynomial of degree 2rc. that we try to find a solution of (12.16) Consider the secondorder system of differential equations Consider the secondorder system of differential equations q(t) E ~n E ~nxn.16) of the p A are to be determined... Since the determinantal equation o = det(A 2 M + AC + K) = A2n + . the secondorder problem (12. C = 0. and A special case of (12.. ::: ILn· Let a>k = IILk I!. If r n (i..6..e.6.6 HigherOrder Eigenvalue Problems HigherOrder Eigenvalue Problems Mq+Cq+Kq=O. If M is singular..16) or. are to be determined.e. where q(t} e W1 and M.1 12. Suppose. then all solutions of q + Kq = 0 are oscillatory.. C. ::: ILr ::: 0 > ILr+ I ::: .12.1 Conversion to firstorder form Conversion to firstorder form Let x\ = q and \i = q. E2". seek A A2 M + AC + To get a nonzero solution /?. n. p. If r = n (i. (12. A special case of (12. . KT > 0). = [ M1K 0 x (t) E ~2n.C + K is singular.16) can be written as a firstorder system (with block companion matrix) X . M Mwhere x(t) €. there are 2n eigenvalues for the secondorder (or A2 M + AC + K. . HigherOrder Eigenvalue Problems 12. k = r + 1. we thus seek values of A. Suppose K = KT. (12.2M + A.2M + A. quadratic) eigenvalue problem A. Assume for simplicity that M is nonsingular.. where the nvector p and scalar A. 12. . Then (12.C + K. and = = KT. (A 2 M + AC + K) p = O.• Then the 2n eigenvalues of the secondorder eigenvalue problem A2 I /+ K Let Wk =  fjik 12 Then the 2n eigenvalues of the secondorder eigenvalue problem A.16) we get (12. r.6. then all solutions of q K q 0 are oscillatory.16) arises frequently in applications: M = I. ± Wk. Substituting in form q(t) = ext p. Suppose K has eigenvalues eigenvalues IL I ::: ...6 12.2 K are are ± jWk. Since the determinantal equation is singular. . K e Rnxn.16) can still M secondorder generalized linear be converted to the firstorder generalized linear system converted I [ o M OJ'x = [0 K I C Jx. or if it is desired to avoid the calculation of M lI because M is too ill conditioned with respect to inversion.16) can be written as a firstorder system (with block Let XI q and X2 Then (12. .. for which the matrix A. polynomial 2n.
G E enxn". Some can be useful when M. Let F e Cnxm .19). Are the FG and GF the 3. Let F.) .16) involving. (A similar result is also true for "nonsquare" pencils.. G e Cmxn • Are the nonzero singular values of FG and GF the same? same? wx E ]Rnxn. Show that the generalized eigenvalues of the pencils ues of the pencils e e [~ ~JA[~ ~J and and [ A + B~ + GC ~] _ A [~ ~] are identical for all F E E"1xn and all G E R" xmm . Similar procedures hold for the general kthblock companion matrix analogue of (11. EXERCISES EXERCISES nx 1. to higherorder eigenvalue problems that can be converted to firstorder form using a kn x kn to higherorder eigenvalue problems that can be converted to firstorder form using aknxkn block companion matrix analogue of (11.B D. Show that the generalized eigenval".19). In the parlance of control theory. andlor K Many other firstorder realizations are possible. say. Hint: Consider the equivalence I G][AUO F0]' B][I l [01 C (A similar result is also true for "nonsquare" pencils. Generalized Eigenvalue Problems Chapter 12. .1 2. In the parlance of control theory. verify Hint: An easy "trick proof is to verify that the matrices "trick proof' [Fg ~] and [~ GOF ] are similar via the similarity transformation are similar via the similarity transformation Let F E nxm G E mx ". C.. and/or K have special symmetry or skewsymmetry properties that can exploited. which can be converted to various firstorder systems of dimension kn. properties Higherorder analogues of (12. Suppose A € Rnxn. Let € C M X • Show that the nonzero eigenvalues of and G F are the same. Suppose A e Rnxn and D E lR::! xm. F 6 Rm *" G R" x . such results show that zeros are invariant under state feedback or output injection. derivative q. Show that the finite generalized eigenvalues of E lR " finite eigenvalues of e R™ x m the pencil [~ ~JA[~ ~J are the eigenvalues of the matrix A — BD 1 C. and C e lRmxn. Some can be useful when M. Generalized Eigenvalue Problems Many other firstorder realizations are possible. E Rnxm and E E 4. the kth derivative of q. lead naturally naturally involving. Similar procedures hold for the general k\horder difference equation order difference equation which can be converted to various firstorder systems of dimension kn. Show that the nonzero eigenvalues of FG and GF are the same.136 136 Chapter 12. C. B e lRn*m.
(c) Show that the eigenvalues of A B are the same as those of 1. and let UWT be an SVD of L~LA'. respectively. respectively. (b) Show that Q~l = ^~^UT LTB. B E e jRnxn Ql AQT ]Rnx" in such a way that Q~l AQ~T and QT BQ are simultaneously diagonal. Such QT BQ a transformation is called contragredient. A and B to the same diagonal matrix.2 and hence are AB E2 positive.Exercises Exercises 137 137 desired 5. Ql = ~!UTL~. A B B are positive definite with Cholesky factorizations A = L<A and B = L#Lg. Consider the case where both A and transformation contragredient. Another family of simultaneous diagonalization problems arises when it is desired Another simultaneous diagonalization problems operates that the simultaneous diagonalizing transformation Q operates on matrices A. positive Cholesky = LA L ~ = L B L ~. . and let U~VT be an SVD of LTBLA (a) Show that Q = LA V £ ~ 5 is a contragredient transformation that reduces both contragredient = LA V~! A and B to the same diagonal matrix. positive.
This page intentionally left blank This page intentionally left blank .
We Obviously. Example 13. pointing out the extension to the complex case only where it is not obvious.2.. Let A = [~ 2 2 nand B = [.1) amnB Obviously. (13. B e lR pxq. / 2 <8>fl = [o ~ l\ 2. Let B be an arbitrary 2x2 matrix. Foranyfl E lRX(7. Forany B e!F pxq /z @ B = [~ In Replacing 12 by /„ yields a block diagonal matrix with n copies of B along the I2 diagonal with n copies of along the diagonal.A @ B. 1. pointing out the restrict our attention in this chapter primarily to realvalued matrices.1. Then the Kronecker product (or tensor Then the Kronecker product (or tensor product) of A and B is defined as the matrix product) of A and B is defined as the matrix allB A@B= [ : amlB alnB ] : E lRmpxnq. extension to the complex case only where it is not obvious. Let A e R mx ". 2B 2B ~J. the same definition holds if A and B are complexvalued matrices. n 2. Then A@B =[ 3~ ~]~U J. 4 3 4 3 4 9 4 2 6 2 6 6 6 2 2 Note that B @ A i. the same definition holds if A and B are complexvalued matrices. Example 13.1 Definition and Examples Definition and Examples Definition 13. Then 0 b ll b12 B @/z = l b" b~l 139 0 b2 2 0 b21 0 0 b12 0 b 22 l . Let B be an arbitrary 2 x 2 matrix...1.1 13.2. Then 3.Chapter 13 Chapter 13 Kronecker Products Kronecker Products 13. Note that B <g> A / A <g> B. Let A E lRmxn B E R Definition 13. We restrict our attention in this chapter primarily to realvalued matrices.
4.kCkPBD L~=1 amkckpBD ] 0 Theorem 13. = 1 ® 1 = I. (A ® B)I = Bare 13.3.3. then A® B is symmetric. L~=l al. XmY T]T = [XIYJ. If E ]Rn xn e Rmxm are Theorem 13. Let Jt € Rm. ..3. X2Yl. If A e R"xn and B E !R.2 13. Kronecker Products Kronecker Products The extension to arbitrary B and /„ is obvious. .n. 5. B e ~rxs. y e !R.6. (13..5. Let* eR m . Theorem 13.2) Proof: Simply verify that Proof. (A ® Bl = AT ® BT. Let A e R mx ".1 ) Theorem 13.m xm are symmetric.5.. and D e Rsxt. . Simply verify that ~[ =AC0BD.1. Let E ~mxn. C e ~nxp. For all A and B.. 4. simply verify using the definitions of transpose and Kronecker verify transpose Kronecker 0 product. E R".3.2 Properties of the Kronecker Product Properties of the Kronecker Product (A 0 B)(C 0 D) = AC 0 BD (E ~mrxpt). Then 13. Foral! Proof' Proof: For the proof. C E R" x ^ and D E ~sxt. . If A and B are nonsingular. . Then 13. simply note that (A ® B)(A 1 ® B. A® 13. xmYnf E !R.. D Corollary 13. Then X ® Y = [ XIY T .140 Chapter 13. 0 . y eR". XIYn..6. If AI ® B. . Proof: Proof: Using Theorem 13. B In x E ~m. mn . 5 E R r x i .
c. eigenvectors of A® B corresponding to A.3 since A and B are normal by Theorem 13. L et A E xamp Ie 139 Let A = [ _eose cose andB . . Then A <g)B (or B 0<8> A) has rs singular values U. matrix A ® 5 is then also orthogonal with eigenvalues e^'^+'W and e ± ^ (6> ~^ > \ Theorem 13.12. Then the mn eigenvalues of A® B are eigenvalues JL j. . Let A G IR mx " have a singular value decomposition l/^E^Vj an^ let and /ef singular decomposition UB^B^BB e IR pxq fi E ^pxq have a singular value decomposition V B ~B VI. . Sine] and B . we can take p thus get the complete eigenstructure of A 0 B. if Xl. .8. 0 Zj E€ IR mn "are linearly independent right eigenvectors of A 0 B corresponding to Ai JL 7 i e /?.3.8. then A 0 B is normal. If Corollary 13... j e q. xp are linearly independent right eigenvectors of A corresponding Moreover.4 by Theorem 13... 141 141 Proof: Proof: (A 0 B{ (A 0 B) = (AT 0 BT)(A 0 B) = AT A 0 BT B = AAT 0 B BT by Theorem 13. . A. .. Ap (p ::::: and ZI. then .• :::: U rr > 0 and let B E IRfx Corollary e R™x" singular a\ > • • > a > e have singular values T\ > • • > <s > 0.. TTzen ?/ze mn eigenvalues of A 0 Bare Moreover. and zi..10. if A and fi have Jordan form . Then vI yields a singular value decomposition of A <8>B (after aasimple reordering of the diagonal yields a singular value decomposition of A 0 B (after simple reordering of the diagonal elements O/£A <8> £5 and the corresponding right and left singular vectors)./u.. . <I :::: .. If A E IR"xn and B eRmxm are normal.12.11.j. If A e IR nxn am/ B E IR mxm are normal. = (A 0 B)(A 0 B)T 0 Corollary 13.. Example 13. elements of ~A 0 ~B and the corresponding right and left singular vectors).• :::: TS > O. i / E e!!.2.Zq are linearly independent right eigenvectors of B corresponding to JLI...[Cos</> cos</>O Then It IS easl'1y seen that .. . Let A E lR.i ..10.and let BB E e IRR mxwhave e IR nxn have eigenvalues A. The 4 x 4 orthogonal e±j9 orthogonal eigenvalues e±j(i>. If A E E"xn is orthogonal and B E Mmxm is orthogonal. ••.• sin e = _ sin</> Sin</>] Then it is easily seen that A is orthogonal with eigenvalues e±jO and B is orthogonal with eigenvalues e±j</J. Then A 0 B (or B A) has rs singular values have singular values <I :::: .. xp are linearly independent right eigenvectors of A corresponding AI. we can take p = nand q = m and n and q —m and If A and B are diagonalizable in Theorem 13... A0 B e±jeH</» e±jefJ </». then A <g> B is € IR nxn orthogonal and e IR m x m 15 then 0 is orthogonal. and let eigenvalues jJij... Theorem 13. :::: U rTs > 0 and ^iT\ > • • • > ffr <s Qand rank(A 0 B) = (rankA)(rankB) = rank(B 0 A) . if x\.p (p < n).. Properties of the Kronecker Product 13. then Xi <8> Zj ffi.9.2... then A® B is normal. . 0 If A and Bare diagonalizable in Theorem 13.7. In general..n.m are linearly independent right corresponding to JJL\ . .. j € m. \Ju (q ::::: m). Let A E R nx "have eigenvalues Ai.. if A and B have Jordan form thus get the complete eigenstructure of A <8> B. In general.JLqq (q < m). Properties of the Kronecker Product Theorem 13. • • zq independent of to A .13. mxm /zave Theorem 13. q Corollary 13. Lgf A E E mxn have a singular value decomposition VA ~A Theorem 13.12. 7 E m.7..•."xn have singular values UI :::: . i E l!! 7 E 1· Proof: proof Proof: The basic idea of the proof is as follows: follows: (A 0 B)(x 0 z) = Ax 0 Bz =AX 0 JLZ = AJL(X 0 z).
Kronecker Products Chapter 13. then we get the decompositions given by P~lI AP = J A and Ql BQ = JB. denoted A EEl B. Example 13. to Schur (triangular) form. respectively.14. respectively. 1. Then (P ® Q)H (A ® B)(P ® Q) = (pH ® QH)(A ® B)(P ® Q) = (pH AP) ® (QH BQ) = TA ® TR .e. Then 13.15. with A EEl B. to Schur (triangular) form. respectively. Let A e Rn Xn and B e Rm xrn. For example. while upper triangular. while upper triangular. is generally not quite in Jordan form and needs Note that JA® JB. nxn mxm Definition 13. i. Then reducing A and B to real Schur form). Corollary 13. is the mn x mn matrix Urn <g> A) + (B ® In).14.AP J B .13.e. then we get the JA and Q~] BQ following Jordanlike structure: following Jordanlike structure: (P ® Q)I(A ® B)(P ® Q) = (P. A ® B i= B © A.13.15. is generally not quite in Jordan form and needs further reduction (to an ultimate Jordan form that also depends on whether or not certain further reduction (to an ultimate Jordan form that also depends on whether or not certain eigenvalues are zero or nonzero). pH AP = TA and QH BQ = TB (and similarly if and are orthogonal similarities PHAP = TA and QHBQ = TB (and similarly if P and Q are orthogonal similarities reducing A and B to real Schur form).. suppose P and Q are unitary matrices that reduce A and B. Note that. E IR nxn E IR mxm. in general. general. det(A ® B) = (det A)m(det Bt = det(B ® A). For example.142 142 Chapter 13. is the mn mn matrix (Im ® A) + (B ® /„). suppose P and Schur form for A ® B can be derived similarly. Let A e Rn xn and B e Rrn xm. ~l 2 2 1 3 AfflB = (h®A)+(B®h) = 1 3 0 1 0 4 0 3 0 0 0 0 0 0 0 0 0 0 0 0 2 2 0 0 0 3 4 2 0 0 2 0 0 2 0 0 2 0 0 0 1 0 0 + 0 2 0 0 2 0 0 0 0 3 0 0 0 3 0 0 0 3 The reader is invited to compute B 0 A = (/3 ® B) + (A 0 h) and note the difference The reader is invited to compute B EEl A = (h ® B) (A <g> /2) and note the difference with A © B.. .I ® Ql)(A ® B)(P ® Q) = (P. Let A~U Then Then 2 2 !]andB~[ . of A and B. eigenvalues are zero or nonzero). Tr(A ® B) = (TrA)(TrB) = Tr(B ® A). 1. A EEl B ^ B EEl A. E IR E IR Kronecker Definition 13. respectively. Then the Kronecker sum (or tensor sum) . 2. i. Note that. Let 1. Kronecker Products decompositions given by p. denoted A © B. A Schur form for A ® B can be derived similarly. are unitary matrices that reduce A and 5. Example 13.1 AP) ® (Ql BQ) = JA ® JB · Note that h ® JR. in of A and B.
then decompositions given by P~1AP = lA and Q"1 BQ = JB.... Zq are linearly independent right eigenvectors of B AI..2. A2 + fJt. A2 + fJm.. is a Jordanlike structure for A © B. . . Properties of the Kronecker Product 2. eigenvectors of A® B corresponding to Ai + [ij. .. + fJj' € p. . An + fJm' Moreover. if A and B have Jordan form pI l B . ii E E..•• . . . e jRmxm eigenvalues /z.···. In general... Recall the real JCF M I M 143 143 0 I M I 0 o 1= 0 E jR2kx2k. respectively.. Then J can be written in the very compact form J Theorem 13.. . Then the Kronecker sum A® B = (1m (g>A) + (B ® In) has mn (Im ® A) + (B <g> /„) /za^ ran eigenvalues fJj. .. f^q (q ::s: ra). j E ra. Proof: The basic idea of the proof is as follows: Proof: The basic idea of the proof is as follows: [(1m ® A) + (B ® In)](Z ® X) = (Z ® Ax) = (Z + (Bz ® X) ® Ax) + (fJZ ® X) = (A + fJ)(Z ® X).i e n.\ . .2. we can take p nand q and If A and B are diagonalizable in Theorem 13..16. 0 I M 0 where M = [ where M = o M a f3 f3 a J.16.. 0 If A and Bare diagonalizable in Theorem 13. i E !!.. . and z\. then decompositions given JA and Qt BQ [(Q ® In)(lm ® p)rt[(lm ® A) = [(1m ® p)I(Q ® In)I][(lm ® A) = (1m ® lA) + (B ® In)][CQ ® In)(lm ® P)] + (B ® In)][(Q ® In)(/m ® + (B ® P)] = [(1m ® pI)(QI ® In)][(lm ® A) In)][CQ ® In)(/m <:9 P)] + (JB ® In) is a Jordanlike structure for A $ B. Define 0 0 0 0 o o Ek = 0 o Then 1 can be written in the very compact form 1 = (4 <8>M) + (Ek ® h) = M $ E k .. . j e q. . TTzen r/ze Kronecker sum A $ B eigenvalues e/genva/wes Al + fJt. 7 e I!!. then Zj ® Xi E€ jRmn" are linearly independent right Zj <8> Xi W1 are linearly independent right corresponding f j i . j E fl· eigenvectors of A $ B corresponding to A. if x\... Let A E E"x" have eigenvalues Ai. xp are linearly independent right eigenvectors of A corresponding to AI.13. . Ap (p < and ZI. respectively. . fJq (q < m). . AI + fJm. if A and have Jordan form thus get the complete eigenstructure of A 0 B. Xp (p ::s: n). we can take p = n and q = m and thus get the complete eigenstructure of A $ In general. . zq are linearly independent eigenvectors of corresponding to fJt. ... (I} ® M) + (E^®l2) = M 0 Ek. . and let B E Rmx'" have e jRnxn eigenvalues A. Recall the real JCF 2.16. Properties of the Kronecker Product 13. if XI. .xp are linearly independent right eigenvectors of A corresponding Moreover..
Sylvester where A e R"x". The following definition is very helpful in completing the writing of (13. [(Q ® /„)(/« ® P)] = (<2 ® P) is unitary by Theorem 13. it is easily seen by equating the writing (13. suppose P and are unitary A Schur form for A © B can be derived similarly. When does a solution exist? By writing the matrices in (13.3) in tenns of their columns. PHAP = TA that reduce to Schur and QH BQ = TB (and similarly if P and Q are orthogonal similarities reducing A and B and QHBQ = TB (and similarly if P and Q are orthogonal similarities reducing A and B to real Schur form).3 and Corollary 13. 13. When symmetric. . to Schur (triangular) form.J. Then ((Q ® /„)(/« ® P)]"[(/m <8> A) + (B ® /B)][(e (g) /„)(/„.1. Again. Sylvester who studied general linear matrix equations of the form equation in honor of J.. = AXi + l:~>j. i.e.3) in terms of their easily seen z'th columns that ith columns that m AXi + Xb.4) is known as a Lyapunov equation. Lyapunov equations also to be symmetric and (13. .5) as an "ordinary" linear system.3 Application to Sylvester and Lyapunov Equations Application to Sylvester and Lyapunov Equations In this section we study the linear matrix equation In this section we study the linear matrix equation AX+XB=C.Xj. arise naturally in stability theory. and C e M" xm . Kronecker Products Chapter 13. Again.144 Chapter 13.5) [ blml The coefficient matrix in (13. i. (13. When C is symmetric.3 13. an "ordinary" linear system. Lyapunovequations arise naturally in stability theory.3) is. solution e IR xn also to be symmetric and (13. pH AP = TA matrices that reduce A and B.5) as (B T 0 /„). j=1 These equations can then be rewritten as the These equations can then be rewritten as the mn x mn linear system x linear system A+blll bl21 A + b 2Z 1 b2ml b 21 1 (13. This equation is now often called a Sylvester equation is now often equation in honor of 1. The first important question to ask regarding (13. suppose P and Q are unitary fonn.8. Sylvester who studied general linear matrix equations of the fonn k LA. the solution X E Wnx" is easily shown taking B = AT. Kronecker Products A Schur fonn for A EB B can be derived similarly.3) is. = C.e.3) mxm E IRnxn E IR E IRnxm.. B e Rmxm .5) clearly can be written as the Kronecker sum (Im * A) + (BT ® In). =C.XB. The following definition is very helpful in completing the writing of (13.4) is known as a Lyapunov equation.. where [(Q <8>In)(lm ® P)] = (Q ® P) is unitary by Theorem 13.3 and Corollary 13. ® P)] = (/m <8> rA) + (7* (g) /„).3) is the symmetric equation AX +XAT = C (13.=1 A special case of (13. When does a solution exist? The first important question to ask regarding (13. . Then to real Schur fonn).8.5) clearly can be written as the Kronecker sum (1m 0 A) + The coefficient matrix in (13.4) obtained by taking B = AT. respectively.
(A)+ Aj(B) =I 00 for all i. Application to Sylvester and Lyapunov Equations 145 145 Definition 13. A further enhancement to this algorithm is available in [6] whereby Gaussian elimination. (13.10) . .8) by Theorem 13. this algorithm takes only 0 (n 3) transformed solution matrix X.9) Proof: Since A and B are stable. e m. . +00): IHoo lim XU) . Schur form. Let Ci( € E. A further enhancement to this algorithm is available in [6] whereby the larger of A or B is initially reduced only to upper Hessenberg rather than triangular the larger of A or B is initially reduced only to upper Hessenberg rather than triangular Schur form.5) can be rewritten in the form [(1m ® A) + (B T ® In)]vec(X) = vec(C). where A. They culminate in Theorem 13. +00): (with X(0) = C) on [0. (real) Schur form.16. say. They culminate in Theorem 13. The most (13.8) can be written as can be written as (13.5) can be rewritten in the form Using Definition 13. A(fi).and Mj Ee A(B). one of many elegant connections between matrix theory and stability theory for differential equations.. the eigenvalues of [(1m ® A) + (BT <8> /„)] are + Mj. Cm}. The next few theorems are classical. n :::: m..6). Suppose further are asymptotically stable (a matrix is asymptotically stable if all its eigenvalues have real are asymptotically stable (a matrix is asymptotically stable if all its eigenvalues have real parts in the open left halfplane). (13. j j so there exists aaunique for all i. Application to Sylvester and Lyapunov Equations 13.6) directly with operations rather than the O(n 6 that would be required by solving (13.24. Assuming that. But [(1m ® A) + (B (g) /„)] nonsingular and only has no zero eigenvalues.e A (A). so there exists unique Proof: Since A and B are stable. and ^j Theorem 13..6) There exists a unique solution to (13. Then the (unique) solution of the Sylvester equation AX+XB=C (13.n denote the columns ofC E Rnxm so that C = [ n . . First A and B are reduced to (real) Schur form. A.24. c ].B have no eigenvalues in common. B e Rmxm. Let A e lRnxn. one of many The next few theorems are classical.1S. xn Theorem 13. and C e R" xm . Suppose further that A and B E Rn . the linear system (13. The most commonly preferred numerical algorithm is described in [2].4» are generally not solved using the mn x mn "vec" formulation (13.16.. the eigenvalues of [(/m <g> A) + (BT ® In)] are Ai A. j j E!!!. E R E jRnxm. Then the Sylvester equation G jRmxm.e. Assuming that.17. First A and B are reduced to commonly preferred numerical algorithm is described in [2]. the linear system (13.6).6) if and only if [(1m ® A) + (B T ® In)] is nonsingular. ii e n_. this algorithm takes only O(n3 ) operations rather than the O(n6)) that would be required by solving (13..7) has a unique solution if and only if A and .8)by Theorem 13. n > m. c E jRn the Then vec(C) is defined to be the mnvector formed by stacking the columns ofC on top of by C ::~~::~: ::d~~:::O:[]::::fonned "ocking the colunuu of on top of one another.6) if and only if [(Im ® A) + (BT ® /„)] is nonsingular. where From Theorem 13. . Definition 13. Now integrate the differential equation X = AX + X B solution to (13. + IJLJ.X(O) = A 10 roo X(t)dt + ([+00 X(t)dt) 10 B. Sylvester equations of the form (13. 77ie/i Theorem 13. . Let A e jRnxn.17.6) directly with Gaussian elimination.3. and C e Rnxm.. Then the (unique) solution of the Sylvester equation parts in the open left halfplane). vec(C) = Using Definition 13. B E Rmxm. say. We thus have the following theorem.. 
We thus have the following theorem.19.3. But [(Im <8>A) + (B TT ® In)] isisnonsingular ififand only ififitithas no zero eigenvalues. elegant connections between matrix theory and stability theory for differential equations. E!!. has a unique solution if and only if A and —B have no eigenvalues in common.13.3) (or symmetric Lyapunov equations of the form (13. There exists a unique solution to (13.4)) are generally not solved using the mn x mn "vee" formulation (13.. AX+XB=C (13. Ai E A(A). From Theorem 13.18.18.18. ofC e jRnxm [CI.(B) ^ solution to(13..3) (or symmetric Lyapunov equations of the form Sylvester equations of the form (13. Theorem C E jRnxm.. i. Now integrate the differential equation X AX XB (with X(O) C) on [0. An equivalent linear system is then solved in which the triangular form equivalent linear system is then solved in which the triangular form of the reduced and can be exploited to solve successively for the columns of a suitably of the reduced A and B can be exploited to solve successively for the columns of a suitably transformed solution matrix X.17.17. E jRmxm. Aj(A) + A.
21 and 13. If the matrix A E Wxn has eigenvalues A.!„. . Let A. Many useful results exist concerning the relationship between stability and Lyapunov equations.. C e jRnxn further asymptotically stable. Remark 13.12) Theorem 13. results = 0.A T have no eigenvalues in common. By Theorems 13. Remark 13. sufficient —A common eigenvalues A asymptotically no common eigenvalues is that A be asymptotically stable.ATT have A —A. then that solution is symmetric. the first of which follows immediately from Theorem 13.. .22.20.An. .I .21 l3. it can be shown easily that lim elA = lim elB = O.12). Then Then . then . _* ])... —kn.13) where C = C T < O. Now let v be an arbitrary nonzero vector in jRn.11) has a unique solution.24. Two basic results due to Lyapunov are the following. A matrix A E R"x" is asymptotically stable if and only if there exists a only if e jRnxn asymptotically if positive definite solution to the Lyapunov equation positive definite solution to the Lyapunov equation AX +XAT = C.19. v E". then that solution is symmetric. X B = is that [ J _Cfi ] be similar to [~ _OB] (via the similarity [ Let Theorem 13. where C Proof: asymptotically l3. Kronecker Products Using the results of Section 11. If symmetric and (13.C E R"x" and suppose further that A is asymptotically stable. we have that lim X ((t) = 0. If C is has unique if and only if and —A T eigenvalues in common.10) we have C t~+x /—<+3C = A (1+ 00 elACe lB dt) + (1+ o 00 elACe lB dt) B and so X and so X = 1o {+oo elACe lB dt satisfies (13. .8). Then the (unique) solution o/the Lyapunov equation of the AX+XAT=C can be written as can be written as (13. . If matrix A e jRn xn eigenvalues )"" .6.146 146 Chapter 13..11) has a unique solution if and only if A and ... +00 r—>+oo t—v+oo X t ) = etACelB X t ) — O.11) has a unique solution. Then the Lyapunov equation e jRnxn. A.23 solution Proof: Suppose A is asymptotically stable. symmetric and ( 13.]. An. (13.13) exists and takes the form (13. 1>+00 1 .6. Theorem Substituting in (13. Theorem 13.23 a solution to (13. Kronecker Products Chapter 13. Hence. a sufficient condition that guarantees that A and .AT has eigen— AT eigenvalues AI.23.21. TTzen r/ze AX+XAT =C (13. using the solution X ((t) = elACe tB from Theorem 11. C E R"x"...1. .. An equivalent condition for the existence of a unique solution to AX + AX + Remark XB = C is that [~ _cB ] be similar to [ J _°B ](via the similarity [~J _~ ]). Theorem 13.19. .. Thus. Lef A..
Proof: The proof follows in a fairly straightforward fashion either directly from the definiProof: The proof follows in a fairly straightforward fashion either directly from the definitions or from the fact that vec(. the AXB =C (13. in which the solution is of the form is of the form (13. where Y E jRnxp is arbitrary. suppose X = XT > 0 and let A E A (A) with corresponding left eigenConversely.13.t. B. most of which derive from one key The vec operator has many useful properties.3. A subtle point arises when dealing with the "dual" Lyapunov equation A T X X A A subtle point arises when dealing with the "dual" Lyapunov equation ATX + XA = C. e A(A) with corresponding left eigenvector y. 14) is unique if BB+ ® A+A = [.11. Theorem 13.14) xp E jRn has a solution X e R. and C for which the matrix product ABC is Theorem 13. v TXv > 0 and thus X is positive definite. Since A was arbitrary.27.27. The equivalent "vec form" of this equation is The equivalent "vec form" of this equation is [(/ ® AT) + (AT ® l)]vec(X) = + (AT ® l)]vec(X) = vec(C). D asymptotically stable. vec(ABC) = (C T ® A)vec(B). Since yH Xy > 0.25. D An immediate application is to the derivation of existence and uniqueness conditions An immediate application is to the derivation of existence and uniqueness conditions for the solution of the simple Sylvesterlike equation introduced in Theorem 6. Hence Since C > 0 and etA is nonsingular for all t. Hence vT Xv > 0 and thus X is positive definite.14) is unique if BB+ ® A+ A = I. the integrand above is positive.yr) = <8> x. Let A E Rmxn. result. Then vector y. For any three matrices A. The solution of (13. and C E Rmxq. nx p if and only if A A+CB+BB = C. the complexvalued equation AHX + XA = C is equivalent to [(/ ® AH) vec(C). Conversely. B e jRPxq.11. Since A was arbitrary. the complexvalued equation H X X A = C is equivalent to However.15) of (13. The Lyapunov equation AX X A = C can also be written using the Remark 13. B E Rpx(}. The Proof: Write (13.26.14) as (B T ® A)vec(X) = vec(C) (13. C. Theorem 13. D Remark 13. D tions or from the fact that vec(xyT) = y ® x. A must be Since yHXy > 0. Then 0> yHCy = yH AXy + yHXAT Y = (A + I)yH Xy. and C for which the matrix product ABC is defined. for the solution of the simple Sylvesterlike equation introduced in Theorem 6. Application to Sylvester and Lyapunov Equations 13. B. defined.16) . we must have A + I = 2 Re A < 0 . The Lyapunov equation AX + XATT = C can also be written using the vec notation in the equivalent form vec notation in the equivalent form [(/ ® A) + (A ® l)]vec(X) = vec(C). The vec operator has many useful properties. in which case the general solution has a if only ifAA + C B+ C.25. most of which derive from one key result. suppose X = XT > 0 and let A. e jRrnxn. we must have A + A = 2 R e A < O. However.14) as Proof: Write (13. A must be asymptotically stable. Then the equation 13. where Y e Rnxp is arbitrary.3. For any three matrices A. Application to Sylvester and Lyapunov Equations 147 147 Since — C > 0 and etA is nonsingular for all the integrand above is positive. e jRrnxq.26.
148 148
Chapter 1 3. Kronecker Products Chapter 13. Kronecker Products
by Theorem 13.26. This "vector equation" has a solution if and only if by Theorem 13.26. This "vector equation" has a solution if and only if
(B T ® A)(B T ® A)+ vec(C)
+
= vec(C).
+ +
It is a straightforward exercise to show that (M ® N) + = M+ ® N+.. Thus, (13.16) has aa It is a straightforward exercise to show that (M ® N) = M <8> N Thus, (13.16) has
solution if and only if solution if and only if vec(C)
=
(B T ® A)«B+{ ® A+)vec(C)
= [(B+ B{ ® AA+]vec(C)
= vec(AA +C B+ B)
and hence if and only if AA +CB+B = C. and hence if and only if AA+ C B+ B C. The general solution of (13 .16) is then given by The general solution of (13.16) is then given by vec(X) = (B T ® A) + vec(C)
+ [I 
(B T ® A) + (B T ® A)]vec(Y),
where Y is arbitrary. This equation can then be rewritten in the form where Y is arbitrary. This equation can then be rewritten in the form vec(X)
= «B+{
® A+)vec(C)
+ [I
 (BB+{ ® A+ A]vec(y)
or, using Theorem 13.26, or, using Theorem 13.26,
The solution is clearly unique if B B+ ® A + A ==I. The solution is clearly unique if BB+ <8> A+A I.
0 D
EXERCISES EXERCISES
I. For any two matrices A and B for which the indicated matrix product is defined, 1. For any two matrice